Running Stellar RPC with Full History Access on Testnet
This guide explains how to deploy a Stellar RPC node on testnet with access to complete historical ledger data using Google Cloud Storage as an external datastore. This setup lets you query any point in Stellar's testnet history without storing the full ledger history locally.
Overview
Stellar RPC nodes typically store only recent ledger data due to storage constraints. By integrating with an external datastore (like Google Cloud Storage), you can:
- Access the complete history of the Stellar testnet
- Minimize local storage requirements
- Query historical transactions and ledger states
- Scale your infrastructure efficiently
Available Datastores
Obsrvr provides public access to testnet ledger data through Google Cloud Storage:
- Bucket: obsrvr-stellar-ledger-data-testnet-data
- Path: landing/ledgers/testnet
- Billing: Requester pays (you'll need your own GCP billing account)
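Once you have installed and authenticated the Google Cloud SDK (Steps 1 and 2 below), you can sanity-check access to the bucket with a quick listing. This is a sketch; YOUR_PROJECT_ID is a placeholder for your own GCP project, which the -u flag designates as the billing project for requester pays:
# List the testnet ledger path, billing the request to your project
gsutil -u YOUR_PROJECT_ID ls gs://obsrvr-stellar-ledger-data-testnet-data/landing/ledgers/testnet/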
Prerequisites
Before starting, ensure you have:
- Docker installed and running
- Google Cloud SDK installed (for authentication)
- A Google Cloud account with billing enabled (for requester pays)
- Basic familiarity with command line operations
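To quickly confirm Docker is installed and its daemon is running before you start, the standard checks are:
# Print the client version, then query the daemon (fails if Docker is not running)
docker --version
docker info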
Step 1: Install Google Cloud SDK
Install the Google Cloud SDK to handle authentication:
macOS
brew install google-cloud-sdk
Ubuntu/Debian
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install google-cloud-sdk
Other Systems
Visit the official Google Cloud SDK documentation for installation instructions.
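Whichever platform you install on, you can verify that the SDK is available on your PATH afterwards:
# Print the installed gcloud components and versions
gcloud version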
Step 2: Configure Authentication
Since the Obsrvr public bucket uses "requester pays", you must authenticate with Google Cloud so that requests can be billed to your project:
Option A: Application Default Credentials (Development)
Best for local development and testing:
gcloud auth application-default login
This opens a browser for authentication and stores credentials locally.
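Because requester pays needs a billing project, it can help to attach a quota project to the application default credentials and confirm a token can be issued. This is a sketch; YOUR_PROJECT_ID is a placeholder:
# Associate your project with the application default credentials (used for requester pays billing)
gcloud auth application-default set-quota-project YOUR_PROJECT_ID
# Confirm the credentials work by printing an access token
gcloud auth application-default print-access-token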
Option B: Service Account (Production)
Recommended for production deployments:
- Create a service account in Google Cloud Console
- Download the JSON key file
- Make sure the service account's project has billing enabled (requester pays charges are billed to that project)
- Use the key file in your Docker deployment (see Step 4); example gcloud commands are sketched after the note below
Note: Under the requester pays model, request and data egress charges for accessing the bucket are billed to your Google Cloud project.
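If you prefer to script Option B, the following is a minimal sketch using gcloud. The service account name (stellar-rpc-reader), project ID, and key path are placeholders, and GOOGLE_APPLICATION_CREDENTIALS is the standard environment variable Google client libraries use to locate a key file:
# Create a service account in your project (names are placeholders)
gcloud iam service-accounts create stellar-rpc-reader --project=YOUR_PROJECT_ID
# Download a JSON key for it
gcloud iam service-accounts keys create ./stellar-rpc-reader-key.json \
  --iam-account=stellar-rpc-reader@YOUR_PROJECT_ID.iam.gserviceaccount.com
# Point Google client libraries at the key (mount this file into the container in Step 4)
export GOOGLE_APPLICATION_CREDENTIALS=./stellar-rpc-reader-key.json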
Step 3: Create Configuration Files
Stellar RPC Configuration
Create stellar-rpc-datastore.toml:
# Stellar RPC configuration with external datastore
# For stellar/stellar-rpc:23.0.0
# Basic configuration
ENDPOINT = "0.0.0.0:8000"
ADMIN_ENDPOINT = "0.0.0.0:8001"
NETWORK_PASSPHRASE = "Test SDF Network ; September 2015"
HISTORY_ARCHIVE_URLS = ["https://history.stellar.org/prd/core-testnet/core_testnet_001"]
STELLAR_CORE_BINARY_PATH = "/usr/bin/stellar-core"
CAPTIVE_CORE_CONFIG_PATH = "/config/stellar-core.cfg"
CAPTIVE_CORE_STORAGE_PATH = "/data/captive-core"
DB_PATH = "/data/stellar_rpc.sqlite"
# Logging
LOG_LEVEL = "info"
LOG_FORMAT = "text"
# Set retention to minimum value (1 ledger) to minimize local storage
# The datastore will handle historical queries beyond this
HISTORY_RETENTION_WINDOW = 1
# Fee stats windows must be <= history retention window
SOROBAN_FEE_STATS_RETENTION_WINDOW = 1
CLASSIC_FEE_STATS_RETENTION_WINDOW = 1
# Enable external datastore for historical ledgers
SERVE_LEDGERS_FROM_DATASTORE = true
# Datastore configuration
[datastore_config]
type = "GCS"
[datastore_config.params]
# Obsrvr testnet ledger data bucket
destination_bucket_path = "obsrvr-stellar-ledger-data-testnet-data/landing/ledgers/testnet"
[datastore_config.schema]
ledgers_per_file = 1
files_per_partition = 64000
# Buffered storage backend configuration
[buffered_storage_backend_config]
buffer_size = 100
num_workers = 10
retry_limit = 3
retry_wait = "5s"