71 changes: 70 additions & 1 deletion src/builder.rs
@@ -58,6 +58,7 @@ use crate::event::EventQueue;
use crate::fee_estimator::OnchainFeeEstimator;
use crate::gossip::GossipSource;
use crate::io::sqlite_store::SqliteStore;
use crate::io::tier_store::TierStore;
use crate::io::utils::{
read_all_objects, read_event_queue, read_external_pathfinding_scores_from_cache,
read_network_graph, read_node_metrics, read_output_sweeper, read_peer_info, read_scorer,
@@ -154,6 +155,21 @@ impl std::fmt::Debug for LogWriterConfig {
}
}

#[derive(Default)]
struct TierStoreConfig {
ephemeral: Option<Arc<DynStore>>,
backup: Option<Arc<DynStore>>,
}

impl std::fmt::Debug for TierStoreConfig {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("TierStoreConfig")
.field("ephemeral", &self.ephemeral.as_ref().map(|_| "Arc<DynStore>"))
.field("backup", &self.backup.as_ref().map(|_| "Arc<DynStore>"))
.finish()
}
}

/// An error encountered during building a [`Node`].
///
/// [`Node`]: crate::Node
@@ -289,6 +305,7 @@ pub struct NodeBuilder {
liquidity_source_config: Option<LiquiditySourceConfig>,
log_writer_config: Option<LogWriterConfig>,
async_payments_role: Option<AsyncPaymentsRole>,
tier_store_config: Option<TierStoreConfig>,
runtime_handle: Option<tokio::runtime::Handle>,
pathfinding_scores_sync_config: Option<PathfindingScoresSyncConfig>,
recovery_mode: bool,
@@ -307,6 +324,7 @@ impl NodeBuilder {
let gossip_source_config = None;
let liquidity_source_config = None;
let log_writer_config = None;
let tier_store_config = None;
let runtime_handle = None;
let pathfinding_scores_sync_config = None;
let recovery_mode = false;
@@ -316,6 +334,7 @@
gossip_source_config,
liquidity_source_config,
log_writer_config,
tier_store_config,
runtime_handle,
async_payments_role: None,
pathfinding_scores_sync_config,
@@ -625,6 +644,34 @@ impl NodeBuilder {
self
}

/// Configures the backup store for local disaster recovery.
Collaborator

So here we set the backup store, but how do we envision the restore to work? Should that be part of the recovery_mode?

Contributor Author

There are two approaches we can take here. For the first, have both primary and backup concrete stores implement MigratableKVStore and have the user call migrate_kv_store_data before building. It's simple but not ideal as it's not part of any existing node or builder APIs, and would require explicit documentation.

Alternatively, we can add a restore_from_backup(backup) method on NodeBuilder and have the migration/restoration of data from backup (source) to primary (target) happen inside build. This requires adding list_all_keys to DynStoreTrait and implementing MigratableKVStore for both stores, so the migration can work through the type-erased Arc<DynStore> layer.

For the second approach, we could also refactor recovery_mode from a bool into a struct:

pub struct RecoveryMode {
    pub wallet: bool, // resync on-chain wallet from genesis
    pub backup: bool, // restore persisted data from backup store
}

This keeps both recovery concerns under one concept while remaining independent. A user restoring from backup may not need a full wallet resync, and vice versa.
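To illustrate how the two concerns stay independent, a minimal sketch of how build-time logic could consult such a config (field and function names here are hypothetical, not the actual ldk-node API):

```rust
// Hypothetical shape of the proposed split of `recovery_mode` into
// independent flags; names are illustrative only.
#[derive(Default, Clone, Copy)]
struct RecoveryConfig {
    resync_wallet: bool,  // resync the on-chain wallet from genesis
    restore_backup: bool, // restore persisted data from the backup store
}

// Each flag can be set without the other, so all four combinations
// describe a distinct, valid startup path.
fn startup_plan(config: RecoveryConfig) -> &'static str {
    match (config.restore_backup, config.resync_wallet) {
        (true, true) => "restore from backup, then resync wallet",
        (true, false) => "restore from backup only",
        (false, true) => "resync wallet only",
        (false, false) => "normal startup",
    }
}
```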

Contributor Author

@enigbe Apr 27, 2026

On the restore side, I went with the second approach from my earlier comment: a restore_from_backup() method on the builder, with restoration happening inside build before the normal startup reads. I refactored recovery_mode from a bool into a RecoveryConfig struct that keeps wallet recovery and backup restore independent. The restore itself enumerates the backup store via list_all_keys, filters to a known durable key inventory, checks that the primary store is empty, and copies only the matching entries. This keeps the restore policy explicit rather than blindly migrating everything from the backup store.
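The restore flow described above can be sketched roughly as follows. The KvStore trait and MemStore type are stand-ins, since the real DynStore/MigratableKVStore types are not part of this diff:

```rust
use std::collections::HashMap;

// Minimal stand-in for the type-erased store; names are illustrative,
// not the actual ldk-node `DynStore` API.
trait KvStore {
    fn list_all_keys(&self) -> Vec<String>;
    fn read(&self, key: &str) -> Option<Vec<u8>>;
    fn write(&mut self, key: &str, value: Vec<u8>);
}

struct MemStore(HashMap<String, Vec<u8>>);

impl KvStore for MemStore {
    fn list_all_keys(&self) -> Vec<String> {
        self.0.keys().cloned().collect()
    }
    fn read(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
    fn write(&mut self, key: &str, value: Vec<u8>) {
        self.0.insert(key.to_string(), value);
    }
}

/// Copies only entries whose keys appear in `durable_keys` from `backup`
/// to `primary`, refusing to run if the primary already holds data.
fn restore_from_backup(
    primary: &mut dyn KvStore, backup: &dyn KvStore, durable_keys: &[&str],
) -> Result<usize, String> {
    if !primary.list_all_keys().is_empty() {
        return Err("primary store is not empty; refusing to restore".into());
    }
    let mut copied = 0;
    for key in backup.list_all_keys() {
        // Restore policy is explicit: skip anything outside the known inventory.
        if !durable_keys.contains(&key.as_str()) {
            continue;
        }
        if let Some(value) = backup.read(&key) {
            primary.write(&key, value);
            copied += 1;
        }
    }
    Ok(copied)
}
```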

///
/// When building with tiered storage, this store receives a second durable
/// copy of data written to the primary store.
///
/// Writes and removals for primary-backed data only succeed once both the
/// primary and backup stores complete successfully.
///
/// If not set, durable data will be stored only in the primary store.
pub fn set_backup_store(&mut self, backup_store: Arc<DynStore>) -> &mut Self {
let tier_store_config = self.tier_store_config.get_or_insert(TierStoreConfig::default());
tier_store_config.backup = Some(backup_store);
self
}

/// Configures the ephemeral store for non-critical, frequently-accessed data.
///
/// When building with tiered storage, this store is used for ephemeral data like
/// the network graph and scorer data to reduce latency for reads. Data stored here
/// can be rebuilt if lost.
///
/// If not set, non-critical data will be stored in the primary store.
pub fn set_ephemeral_store(&mut self, ephemeral_store: Arc<DynStore>) -> &mut Self {
let tier_store_config = self.tier_store_config.get_or_insert(TierStoreConfig::default());
tier_store_config.ephemeral = Some(ephemeral_store);
self
}
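A rough sketch of the routing these two setters imply: ephemeral namespaces go to the ephemeral store when one is configured, otherwise to the primary. Key names and types here are illustrative; TierStore's actual internals are not part of this diff:

```rust
use std::collections::HashMap;

// Illustrative set of ephemeral namespaces; the real key inventory
// lives inside ldk-node.
const EPHEMERAL_KEYS: [&str; 2] = ["network_graph", "scorer"];

// Hypothetical routing layer over two in-memory stores.
struct Router {
    primary: HashMap<String, Vec<u8>>,
    ephemeral: Option<HashMap<String, Vec<u8>>>,
}

impl Router {
    // If no ephemeral store is configured, everything falls back to primary.
    fn store_for(&mut self, key: &str) -> &mut HashMap<String, Vec<u8>> {
        if EPHEMERAL_KEYS.contains(&key) && self.ephemeral.is_some() {
            self.ephemeral.as_mut().unwrap()
        } else {
            &mut self.primary
        }
    }

    fn write(&mut self, key: &str, value: &[u8]) {
        self.store_for(key).insert(key.to_string(), value.to_vec());
    }
}
```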

/// Builds a [`Node`] instance with a [`SqliteStore`] backend and according to the options
/// previously configured.
pub fn build(&self, node_entropy: NodeEntropy) -> Result<Node, BuildError> {
@@ -773,8 +820,23 @@ impl NodeBuilder {
}

/// Builds a [`Node`] instance according to the options previously configured.
///
/// The provided `kv_store` will be used as the primary storage backend. Optionally,
/// an ephemeral store for frequently-accessed non-critical data (e.g., network graph, scorer)
/// and a backup store for local disaster recovery can be configured via
/// [`set_ephemeral_store`] and [`set_backup_store`].
///
/// [`set_ephemeral_store`]: Self::set_ephemeral_store
/// [`set_backup_store`]: Self::set_backup_store
pub fn build_with_store<S: SyncAndAsyncKVStore + Send + Sync + 'static>(
&self, node_entropy: NodeEntropy, kv_store: S,
) -> Result<Node, BuildError> {
let primary_store: Arc<DynStore> = Arc::new(DynStoreWrapper(kv_store));
self.build_with_dynstore(node_entropy, primary_store)
}

fn build_with_dynstore(
&self, node_entropy: NodeEntropy, primary_store: Arc<DynStore>,
) -> Result<Node, BuildError> {
let logger = setup_logger(&self.log_writer_config, &self.config)?;

@@ -787,6 +849,13 @@
})?)
};

let mut tier_store = TierStore::new(primary_store, Arc::clone(&logger));
if let Some(config) = self.tier_store_config.as_ref() {
if let Some(store) = config.ephemeral.as_ref() {
tier_store.set_ephemeral_store(Arc::clone(store));
}
if let Some(store) = config.backup.as_ref() {
tier_store.set_backup_store(Arc::clone(store));
}
}

let seed_bytes = node_entropy.to_seed_bytes();
let config = Arc::new(self.config.clone());

@@ -801,7 +870,7 @@
seed_bytes,
runtime,
logger,
Arc::new(DynStoreWrapper(kv_store)),
Arc::new(DynStoreWrapper(tier_store)),
)
}
}
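The dual-write rule from set_backup_store's docs (a write to primary-backed data succeeds only once both stores complete) can be sketched with in-memory stand-ins; the real TierStore wraps Arc<DynStore> handles and its error handling and rollback behavior may differ:

```rust
use std::collections::HashMap;

// Hypothetical in-memory store that can simulate write failures.
#[derive(Default)]
struct MemStore {
    data: HashMap<String, Vec<u8>>,
    fail_writes: bool,
}

impl MemStore {
    fn write(&mut self, key: &str, value: &[u8]) -> Result<(), String> {
        if self.fail_writes {
            return Err("write failed".into());
        }
        self.data.insert(key.to_string(), value.to_vec());
        Ok(())
    }
}

/// Writes durable data to the primary store and, if a backup store is
/// configured, mirrors it there; the overall write succeeds only if both do.
/// Note: this sketch does not roll back the primary write if the backup
/// write fails; the real implementation may handle that differently.
fn write_durable(
    primary: &mut MemStore, backup: Option<&mut MemStore>, key: &str, value: &[u8],
) -> Result<(), String> {
    primary.write(key, value)?;
    if let Some(backup) = backup {
        backup.write(key, value)?;
    }
    Ok(())
}
```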
1 change: 1 addition & 0 deletions src/io/mod.rs
@@ -10,6 +10,7 @@
pub mod sqlite_store;
#[cfg(test)]
pub(crate) mod test_utils;
pub(crate) mod tier_store;
pub(crate) mod utils;
pub mod vss_store;
