# Deployment Guide
## Prerequisites
1. **Database**: PostgreSQL 14+ with TimescaleDB extension
2. **Node.js**: Version 20 or higher
3. **Docker**: (Optional) For containerized deployment
4. **RPC Access**: Access to RPC endpoints for ChainID 138 and 651940
## Database Setup
1. Ensure PostgreSQL is running with TimescaleDB extension enabled:
```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;
```
2. Apply the migration from the explorer monorepo (shown here with `psql`; substitute the project's migration runner if one is in use):
```bash
psql "$DATABASE_URL" -f explorer-monorepo/backend/database/migrations/0011_token_aggregation_schema.up.sql
```
3. Verify tables were created:
```sql
\dt token_market_data
\dt liquidity_pools
\dt token_ohlcv
```
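If the migration defines TimescaleDB hypertables for the time-series tables (plausible for `token_ohlcv`, though this is an assumption about the schema), they can be listed via the built-in information view:
```sql
-- Lists all hypertables registered with TimescaleDB
SELECT hypertable_name FROM timescaledb_information.hypertables;
```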
## Environment Configuration
1. Copy the example environment file:
```bash
cp .env.example .env
```
2. Configure required variables:
```bash
# Required
CHAIN_138_RPC_URL=http://192.168.11.221:8545
CHAIN_651940_RPC_URL=https://mainnet-rpc.alltra.global
DATABASE_URL=postgresql://user:password@localhost:5432/explorer_db
# Optional (for external API enrichment)
COINGECKO_API_KEY=your_key_here
COINMARKETCAP_API_KEY=your_key_here
DEXSCREENER_API_KEY=your_key_here
```
For explorer/LAN deployments, `CHAIN_138_RPC_URL` should point to the public Chain 138 RPC node directly at `http://192.168.11.221:8545`. Use `https://rpc-http-pub.d-bis.org` for external-only consumers. Do not point explorer/read services at the operator core RPC `http://192.168.11.211:8545`.
## Local Deployment
### Using npm
1. Install dependencies:
```bash
npm install
```
2. Build the project:
```bash
npm run build
```
3. Start the service:
```bash
npm start
```
### Using Docker
1. Build the image:
```bash
docker build -t token-aggregation-service .
```
2. Run the container:
```bash
docker run -d \
  --name token-aggregation \
  -p 3000:3000 \
  --env-file .env \
  token-aggregation-service
```
### Using Docker Compose
1. Start all services:
```bash
docker-compose up -d
```
2. View logs:
```bash
docker-compose logs -f token-aggregation
```
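If the repository does not already ship a compose file, a minimal sketch would look like this (the service name, port, and restart policy here are assumptions, not the project's actual file):
```yaml
services:
  token-aggregation:
    build: .
    ports:
      - "3000:3000"      # host:container, matching the default service port
    env_file: .env       # reuse the same .env as the local deployment
    restart: unless-stopped
```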
## Production Deployment
### Kubernetes
1. Create a ConfigMap for environment variables:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: token-aggregation-config
data:
  CHAIN_138_RPC_URL: "http://192.168.11.221:8545"
  CHAIN_651940_RPC_URL: "https://mainnet-rpc.alltra.global"
  INDEXING_INTERVAL: "5000"
  ENABLE_INDEXER: "true"
  LOG_LEVEL: "info"
```
Set `ENABLE_INDEXER` to `"false"` for public read-only explorer deployments that should serve API traffic without running the in-process multi-chain indexer.
2. Create a Secret for sensitive data:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: token-aggregation-secrets
type: Opaque
stringData:
  DATABASE_URL: "postgresql://..."
  COINGECKO_API_KEY: "..."
  COINMARKETCAP_API_KEY: "..."
```
3. Deploy the service:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: token-aggregation
spec:
  replicas: 2
  selector:
    matchLabels:
      app: token-aggregation
  template:
    metadata:
      labels:
        app: token-aggregation
    spec:
      containers:
        - name: token-aggregation
          image: token-aggregation-service:latest
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: token-aggregation-config
            - secretRef:
                name: token-aggregation-secrets
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
```
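The Deployment alone does not expose the pods inside the cluster; a minimal ClusterIP Service sketch (the port mapping is an assumption, matching the container port above) would be:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: token-aggregation
spec:
  selector:
    app: token-aggregation   # must match the Deployment's pod labels
  ports:
    - port: 3000
      targetPort: 3000
```
Apply the manifests with `kubectl apply -f <file>.yaml`.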
## DEX Factory Configuration
For ChainID 138, configure the DODO PoolManager address alongside the known Uniswap V2 and SushiSwap deployments:
```bash
CHAIN_138_DODO_POOL_MANAGER=0x...
CHAIN_138_UNISWAP_V2_FACTORY=0x0C30F6e67Ab3667fCc2f5CEA8e274ef1FB920279
CHAIN_138_UNISWAP_V2_ROUTER=0x3019A7fDc76ba7F64F18d78e66842760037ee638
CHAIN_138_UNISWAP_V2_START_BLOCK=4041370
CHAIN_138_SUSHISWAP_FACTORY=0x2871207ff0d56089D70c0134d33f1291B6Fce0BE
CHAIN_138_SUSHISWAP_ROUTER=0xB37b93D38559f53b62ab020A14919f2630a1aE34
CHAIN_138_SUSHISWAP_START_BLOCK=4041495
```
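Once set, a configured address can be sanity-checked over RPC: `eth_getCode` returns the deployed bytecode, so any result other than `"0x"` confirms a contract exists at that address. Using the Uniswap V2 factory above as an example:
```bash
# Non-empty bytecode => a contract is deployed at this address
curl -s -X POST "$CHAIN_138_RPC_URL" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_getCode","params":["0x0C30F6e67Ab3667fCc2f5CEA8e274ef1FB920279","latest"],"id":1}'
```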
For ChainID 651940, configure DEX factories as they are discovered:
```bash
CHAIN_651940_UNISWAP_V2_FACTORY=0x...
CHAIN_651940_UNISWAP_V2_ROUTER=0x...
CHAIN_651940_UNISWAP_V2_START_BLOCK=0
CHAIN_651940_UNISWAP_V3_FACTORY=0x...
CHAIN_651940_UNISWAP_V3_ROUTER=0x...
CHAIN_651940_UNISWAP_V3_START_BLOCK=0
CHAIN_651940_HYDX_FACTORY=0x...
CHAIN_651940_HYDX_ROUTER=0x...
CHAIN_651940_HYDX_START_BLOCK=0
```
The canonical ALL Mainnet non-DODO inventory is also tracked in the parent repo at `config/allmainnet-non-dodo-protocol-surface.json`.
## Monitoring
### Health Checks
The service exposes a health check endpoint:
```bash
curl http://localhost:3000/health
```
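For scripted monitoring, `curl -sf` exits non-zero on any HTTP error status, which makes the check easy to wire into cron or an alerting agent:
```bash
# -f: fail (non-zero exit) on HTTP errors; -s: silent; body discarded
curl -sf -o /dev/null http://localhost:3000/health && echo healthy || echo unhealthy
```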
### Logs
View service logs:
```bash
# Docker
docker logs -f token-aggregation
# Kubernetes
kubectl logs -f deployment/token-aggregation
```
### Metrics
Monitor the following:
- Database connection pool usage
- Indexing progress (tokens indexed, pools discovered; see the SQL sketch after this list)
- API request rates
- External API call success rates
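For the indexing-progress signal, plain row counts against the tables created by the migration are a schema-safe starting point (the counts should grow while the indexer runs):
```sql
-- Rough progress signal; no column names assumed beyond the table names
SELECT count(*) AS pools   FROM liquidity_pools;
SELECT count(*) AS candles FROM token_ohlcv;
SELECT count(*) AS swaps   FROM swap_events;
```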
## Troubleshooting
### Database Connection Issues
1. Verify database is accessible:
```bash
psql $DATABASE_URL -c "SELECT 1"
```
2. Check connection pool settings in `.env`
### RPC Connection Issues
1. Test RPC endpoints:
```bash
curl -X POST $CHAIN_138_RPC_URL \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```
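For Chain 138, a healthy endpoint answers with the chain ID in hex: `{"jsonrpc":"2.0","id":1,"result":"0x8a"}` (`0x8a` = 138).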
2. Verify RPC URLs in `.env`
### Indexing Not Working
1. Check logs for errors
2. Verify DEX factory addresses are configured
3. Ensure the RPC endpoints expose the required JSON-RPC namespaces (`eth`, `net`, etc.)
## Scaling
### Horizontal Scaling
The service is stateless and can be scaled horizontally (see the `kubectl` example after this list):
- Multiple instances can run simultaneously
- Each instance indexes independently; run API-only replicas with `ENABLE_INDEXER=false` to avoid duplicate indexing work
- The database handles concurrent writes
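With the Kubernetes Deployment above, scaling out is a single command:
```bash
kubectl scale deployment/token-aggregation --replicas=4
```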
### Vertical Scaling
For high-volume chains:
- Increase `INDEXING_INTERVAL` to poll less frequently and reduce per-instance load
- Increase database connection pool size
- Use read replicas for database queries
## Backup and Recovery
### Database Backups
Take regular backups of the following tables (a `pg_dump` sketch follows the list):
- `token_market_data`
- `liquidity_pools`
- `token_ohlcv`
- `swap_events`
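A `pg_dump` invocation scoped to these tables might look like the following (custom-format archive; the output filename is arbitrary):
```bash
# -t limits the dump to the named tables; -Fc writes a custom-format archive
pg_dump "$DATABASE_URL" \
  -t token_market_data -t liquidity_pools -t token_ohlcv -t swap_events \
  -Fc -f token_aggregation.dump
```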
### Recovery
1. Restore the database from backup (see the `pg_restore` sketch below)
2. Restart the indexing service
3. The service will backfill missing data automatically
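A matching restore from the custom-format archive above (drop `--clean --if-exists` when restoring into an empty database):
```bash
# --clean --if-exists drops and recreates the dumped objects before restoring
pg_restore -d "$DATABASE_URL" --clean --if-exists token_aggregation.dump
```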