Complete markdown files cleanup and organization

- Organized 252 files across project
- Root directory: 187 → 2 files (98.9% reduction)
- Moved configuration guides to docs/04-configuration/
- Moved troubleshooting guides to docs/09-troubleshooting/
- Moved quick start guides to docs/01-getting-started/
- Moved reports to reports/ directory
- Archived temporary files
- Generated comprehensive reports and documentation
- Created maintenance scripts and guides

All files organized according to established standards.
This commit is contained in:
defiQUG
2026-01-06 01:46:25 -08:00
parent 1edcec953c
commit cb47cce074
1327 changed files with 217220 additions and 801 deletions

View File

@@ -0,0 +1,172 @@
# ChainID 138 Configuration - Quick Start Guide
**Quick reference for configuring Besu nodes for ChainID 138**
---
## 🚀 Quick Start
### Step 1: Run Main Configuration
```bash
cd /home/intlc/projects/proxmox
./scripts/configure-besu-chain138-nodes.sh
```
**What it does:**
- Collects enodes from all Besu nodes
- Generates `static-nodes.json` and `permissioned-nodes.json`
- Deploys to all containers (including new: 1504, 2503)
- Configures discovery settings
- Restarts Besu services
**Expected time:** 5-10 minutes
---
### Step 2: Verify Configuration
```bash
./scripts/verify-chain138-config.sh
```
**What it checks:**
- Files exist and are readable
- Discovery settings are correct
- Peer connections are working
---
## 📋 Node List
| VMID | Hostname | Role | Discovery |
|------|----------|------|-----------|
| 1000-1004 | besu-validator-* | Validator | Enabled |
| 1500-1504 | besu-sentry-* | Sentry | Enabled |
| 2500 | besu-rpc-core | RPC Core | **Disabled** |
| 2501 | besu-rpc-perm | RPC Permissioned | Enabled |
| 2502 | besu-rpc-public | RPC Public | Enabled |
| 2503 | besu-rpc-4 | RPC Permissioned | **Disabled** |
---
## 🔧 Manual Steps (if needed)
### Check Configuration Files
```bash
# On Proxmox host
pct exec <VMID> -- ls -la /var/lib/besu/static-nodes.json
pct exec <VMID> -- ls -la /var/lib/besu/permissions/permissioned-nodes.json
```
### Check Discovery Setting
```bash
# For RPC nodes that should have discovery disabled (2500, 2503)
pct exec 2503 -- grep discovery-enabled /etc/besu/*.toml
```
### Check Peer Count
```bash
# Via RPC
curl -X POST http://<RPC_IP>:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}'
```
### Restart Besu Service
```bash
pct exec <VMID> -- systemctl restart besu*.service
pct exec <VMID> -- systemctl status besu*.service
```
---
## 🐛 Troubleshooting
### Issue: Node not connecting to peers
1. **Check files exist:**
```bash
pct exec <VMID> -- ls -la /var/lib/besu/static-nodes.json
```
2. **Check file ownership:**
```bash
pct exec <VMID> -- chown -R besu:besu /var/lib/besu
```
3. **Check network connectivity:**
```bash
pct exec <VMID> -- ping <PEER_IP>
```
### Understanding: RPC Nodes Reporting chainID 0x1 to MetaMask
**Note**: This is **intentional behavior** for wallet compatibility. RPC nodes report `chainID = 0x1` (Ethereum mainnet) to MetaMask wallets to work around MetaMask's technical limitations for regulated financial entities.
**How it works:**
- Nodes are connected to ChainID 138 (private network)
- Nodes report chainID 0x1 to MetaMask (wallet compatibility)
- Discovery is disabled to prevent actual connection to Ethereum mainnet
- MetaMask works with the private network while thinking it's mainnet
**If discovery needs to be disabled (should already be configured):**
```bash
for vmid in 2503 2504 2505 2506 2507 2508; do
pct exec $vmid -- sed -i 's/^discovery-enabled=.*/discovery-enabled=false/' /etc/besu/*.toml
pct exec $vmid -- systemctl restart besu*.service
done
```
### Issue: Permission denied errors
```bash
# Fix ownership
pct exec <VMID> -- chown -R besu:besu /var/lib/besu
pct exec <VMID> -- chmod 644 /var/lib/besu/static-nodes.json
pct exec <VMID> -- chmod 644 /var/lib/besu/permissions/permissioned-nodes.json
```
---
## 📚 Scripts Reference
| Script | Purpose |
|--------|---------|
| `configure-besu-chain138-nodes.sh` | Main configuration script |
| `setup-new-chain138-containers.sh` | Quick setup for new containers |
| `verify-chain138-config.sh` | Verify configuration |
---
## 📖 Full Documentation
- **Complete Guide:** [CHAIN138_BESU_CONFIGURATION.md](CHAIN138_BESU_CONFIGURATION.md)
- **Summary:** [CHAIN138_CONFIGURATION_SUMMARY.md](CHAIN138_CONFIGURATION_SUMMARY.md)
---
## ✅ Checklist
- [ ] Run main configuration script
- [ ] Verify all nodes have configuration files
- [ ] Check discovery settings (disabled for 2500, 2503)
- [ ] Verify peer connections
- [ ] Test RPC endpoints
- [ ] Check service status on all nodes
---
## 🆘 Support
If you encounter issues:
1. Check logs: `pct exec <VMID> -- journalctl -u besu*.service -n 50`
2. Run verification: `./scripts/verify-chain138-config.sh`
3. Review documentation: `docs/CHAIN138_BESU_CONFIGURATION.md`

View File

@@ -0,0 +1,56 @@
# Quick Start: List All Proxmox VMs
## Quick Start (Python Script)
```bash
# 1. Install dependencies (if not already installed)
cd /home/intlc/projects/proxmox
source venv/bin/activate
pip install proxmoxer requests
# 2. Ensure ~/.env has Proxmox credentials
# (Should already be configured)
# 3. Run the script
python3 list_vms.py
```
## Quick Start (Shell Script)
```bash
# 1. Set Proxmox host (or use default)
export PROXMOX_HOST=192.168.11.10
export PROXMOX_USER=root
# 2. Run the script
./list_vms.sh
```
## Expected Output
```
VMID | Name | Type | IP Address | FQDN | Description
-------|-------------------------|------|-------------------|-------------------------|----------------
100 | vm-example | QEMU | 192.168.1.100 | vm-example.local | Example VM
101 | container-example | LXC | 192.168.1.101 | container.local | Example container
```
## Troubleshooting
**Connection timeout?**
- Check: `ping $(grep PROXMOX_HOST ~/.env | cut -d= -f2)`
- Verify firewall allows port 8006
**Authentication failed?**
- Check credentials in `~/.env`
- Verify API token is valid
**No IP addresses?**
- QEMU: Install QEMU guest agent in VM
- LXC: Container must be running
## Files
- `list_vms.py` - Python script (recommended)
- `list_vms.sh` - Shell script (requires SSH)
- `LIST_VMS_README.md` - Full documentation

View File

@@ -0,0 +1,147 @@
# List Proxmox VMs Scripts
Two scripts to list all Proxmox VMs with VMID, Name, IP Address, FQDN, and Description.
## Scripts
### 1. `list_vms.py` (Python - Recommended)
Python script using the Proxmox API. More robust and feature-rich.
**Features:**
- Supports both API token and password authentication
- Automatically loads credentials from `~/.env` file
- Retrieves IP addresses via QEMU guest agent or network config
- Gets FQDN from hostname configuration
- Handles both QEMU VMs and LXC containers
- Graceful error handling
**Prerequisites:**
```bash
pip install proxmoxer requests
# Or if using venv:
source venv/bin/activate
pip install proxmoxer requests
```
**Usage:**
**Option 1: Using ~/.env file (Recommended)**
```bash
# Create/edit ~/.env file with:
PROXMOX_HOST=your-proxmox-host
PROXMOX_USER=root@pam
PROXMOX_TOKEN_NAME=your-token-name
PROXMOX_TOKEN_VALUE=your-token-value
# OR use password:
PROXMOX_PASSWORD=your-password
# Then run:
python3 list_vms.py
```
**Option 2: Environment variables**
```bash
export PROXMOX_HOST=your-proxmox-host
export PROXMOX_USER=root@pam
export PROXMOX_TOKEN_NAME=your-token-name
export PROXMOX_TOKEN_VALUE=your-token-value
python3 list_vms.py
```
**Option 3: JSON config file**
```bash
export PROXMOX_MCP_CONFIG=/path/to/config.json
python3 list_vms.py
```
### 2. `list_vms.sh` (Shell Script)
Shell script using `pvesh` via SSH. Requires SSH access to Proxmox node.
**Prerequisites:**
- SSH access to Proxmox node
- `pvesh` command available on Proxmox node
- Python3 for JSON parsing
**Usage:**
```bash
export PROXMOX_HOST=your-proxmox-host
export PROXMOX_USER=root
./list_vms.sh
```
## Output Format
Both scripts output a formatted table:
```
VMID | Name | Type | IP Address | FQDN | Description
-------|-------------------------|------|-------------------|-------------------------|----------------
100 | vm-example | QEMU | 192.168.1.100 | vm-example.local | Example VM
101 | container-example | LXC | 192.168.1.101 | container.local | Example container
```
## How IP Addresses are Retrieved
### For QEMU VMs:
1. First tries QEMU guest agent (`network-get-interfaces`)
2. Falls back to network configuration parsing
3. Shows "N/A" if neither method works
### For LXC Containers:
1. Executes `hostname -I` command inside container
2. Filters out localhost addresses
3. Shows "N/A" if command fails or container is stopped
## How FQDN is Retrieved
1. Gets hostname from VM/container configuration
2. For running VMs, tries to execute `hostname -f` command
3. Falls back to hostname from config if command fails
4. Shows "N/A" if no hostname is configured
## Troubleshooting
### Connection Timeout
- Verify Proxmox host is reachable: `ping your-proxmox-host`
- Check firewall rules allow port 8006
- Verify credentials in `~/.env` are correct
### Authentication Failed
- Verify API token is valid and not expired
- Check user permissions in Proxmox
- Try using password authentication instead
### IP Address Shows "N/A"
- For QEMU: Ensure QEMU guest agent is installed and running in VM
- For LXC: Container must be running to execute commands
- Check network configuration in VM/container
### FQDN Shows "N/A"
- Set hostname in VM/container configuration
- For running VMs, ensure hostname command is available
## Examples
### List all VMs
```bash
python3 list_vms.py
```
### List VMs from specific host
```bash
PROXMOX_HOST=192.168.11.10 python3 list_vms.py
```
### Using shell script
```bash
PROXMOX_HOST=192.168.11.10 PROXMOX_USER=root ./list_vms.sh
```
## Notes
- Scripts automatically sort VMs by VMID
- Both QEMU VMs and LXC containers are included
- Scripts handle missing information gracefully (shows "N/A")
- Python script is recommended for better error handling and features

View File

@@ -0,0 +1,270 @@
# MetaMask Quick Start Guide - ChainID 138
**Date**: $(date)
**Network**: SMOM-DBIS-138 (ChainID 138)
**Purpose**: Get started with MetaMask on ChainID 138 in 5 minutes
---
## 🚀 Quick Start (5 Minutes)
### Step 1: Add Network to MetaMask
**Option A: Manual Addition** (Recommended for first-time users)
1. Open MetaMask extension
2. Click network dropdown (top of MetaMask)
3. Click "Add Network" → "Add a network manually"
4. Enter the following:
- **Network Name**: `Defi Oracle Meta Mainnet` or `SMOM-DBIS-138`
- **RPC URL**: `https://rpc-http-pub.d-bis.org` ⚠️ **Important: Must be public endpoint**
- **Chain ID**: `138` (must be decimal, not hex)
- **Currency Symbol**: `ETH`
- **Block Explorer URL**: `https://explorer.d-bis.org` (optional)
5. Click "Save"
**Note**: If you get "Could not fetch chain ID" error, the RPC endpoint may require authentication. The public endpoint (`rpc-http-pub.d-bis.org`) should NOT require authentication. If it does, contact network administrators.
**Option B: Programmatic Addition** (For dApps)
If you're building a dApp, you can add the network programmatically:
```javascript
await window.ethereum.request({
method: 'wallet_addEthereumChain',
params: [{
chainId: '0x8a', // 138 in hex
chainName: 'SMOM-DBIS-138',
nativeCurrency: {
name: 'Ether',
symbol: 'ETH',
decimals: 18
},
rpcUrls: ['https://rpc-http-pub.d-bis.org'],
blockExplorerUrls: ['https://explorer.d-bis.org']
}]
});
```
---
### Step 2: Import Tokens
**WETH9 (Wrapped Ether)**
1. In MetaMask, click "Import tokens"
2. Enter:
- **Token Contract Address**: `0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2`
- **Token Symbol**: `WETH`
- **Decimals of Precision**: `18` ⚠️ **Important: Must be 18**
3. Click "Add Custom Token"
**WETH10 (Wrapped Ether v10)**
1. Click "Import tokens" again
2. Enter:
- **Token Contract Address**: `0xf4BB2e28688e89fCcE3c0580D37d36A7672E8A9f`
- **Token Symbol**: `WETH10`
- **Decimals of Precision**: `18`
3. Click "Add Custom Token"
**Note**: If you see incorrect balances (like "6,000,000,000.0T"), ensure decimals are set to 18. See [WETH9 Display Fix](./METAMASK_WETH9_FIX_INSTRUCTIONS.md) for details.
---
### Step 3: Get Test ETH
**For Testing Purposes**:
If you need test ETH on ChainID 138:
1. Contact network administrators
2. Use a faucet (if available)
3. Bridge from another chain (if configured)
**Current Network Status**:
- ✅ Network: Operational
- ✅ RPC: `https://rpc-core.d-bis.org`
- ✅ Explorer: `https://explorer.d-bis.org`
---
### Step 4: Verify Connection
**Check Network**:
1. In MetaMask, verify you're on "SMOM-DBIS-138"
2. Check your ETH balance (should display correctly)
3. Verify token balances (WETH, WETH10)
**Test Transaction** (Optional):
1. Send a small amount of ETH to another address
2. Verify transaction appears in block explorer
3. Confirm balance updates
---
## 📊 Reading Price Feeds
### Get ETH/USD Price
**Oracle Contract**: `0x3304b747e565a97ec8ac220b0b6a1f6ffdb837e6`
**Using Web3.js**:
```javascript
const Web3 = require('web3');
const web3 = new Web3('https://rpc-core.d-bis.org');
const oracleABI = [{
"inputs": [],
"name": "latestRoundData",
"outputs": [
{"name": "roundId", "type": "uint80"},
{"name": "answer", "type": "int256"},
{"name": "startedAt", "type": "uint256"},
{"name": "updatedAt", "type": "uint256"},
{"name": "answeredInRound", "type": "uint80"}
],
"stateMutability": "view",
"type": "function"
}];
const oracle = new web3.eth.Contract(oracleABI, '0x3304b747e565a97ec8ac220b0b6a1f6ffdb837e6');
async function getPrice() {
const result = await oracle.methods.latestRoundData().call();
const price = result.answer / 1e8; // Convert from 8 decimals
console.log(`ETH/USD: $${price}`);
return price;
}
getPrice();
```
**Using Ethers.js**:
```javascript
const { ethers } = require('ethers');
const provider = new ethers.providers.JsonRpcProvider('https://rpc-core.d-bis.org');
const oracleABI = [
"function latestRoundData() external view returns (uint80, int256, uint256, uint256, uint80)"
];
const oracle = new ethers.Contract(
'0x3304b747e565a97ec8ac220b0b6a1f6ffdb837e6',
oracleABI,
provider
);
async function getPrice() {
const result = await oracle.latestRoundData();
const price = result.answer.toNumber() / 1e8;
console.log(`ETH/USD: $${price}`);
return price;
}
getPrice();
```
---
## 🔧 Common Tasks
### Send ETH
1. Click "Send" in MetaMask
2. Enter recipient address
3. Enter amount
4. Review gas fees
5. Confirm transaction
### Wrap ETH to WETH9
1. Go to WETH9 contract: `0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2`
2. Call `deposit()` function
3. Send ETH amount with transaction
4. Receive WETH9 tokens
### Check Transaction Status
1. Copy transaction hash from MetaMask
2. Visit: `https://explorer.d-bis.org/tx/<tx-hash>`
3. View transaction details, gas used, status
---
## ⚠️ Troubleshooting
### Network Not Connecting
**Issue**: Can't connect to network
**Solutions**:
1. Verify RPC URL: `https://rpc-core.d-bis.org`
2. Check Chain ID: Must be `138` (not 0x8a in decimal)
3. Try removing and re-adding network
4. Clear MetaMask cache and reload
### Token Balance Display Incorrect
**Issue**: Shows "6,000,000,000.0T WETH" instead of "6 WETH"
**Solution**:
- Remove token from MetaMask
- Re-import with decimals set to `18`
- See [WETH9 Display Fix](./METAMASK_WETH9_FIX_INSTRUCTIONS.md) for details
### Price Feed Not Updating
**Issue**: Oracle price seems stale
**Solutions**:
1. Check Oracle contract: `0x3304b747e565a97ec8ac220b0b6a1f6ffdb837e6`
2. Verify `updatedAt` timestamp is recent (within 60 seconds)
3. Check Oracle Publisher service status
### Transaction Failing
**Issue**: Transactions not going through
**Solutions**:
1. Check you have sufficient ETH for gas
2. Verify network is selected correctly
3. Check transaction nonce (may need to reset)
4. Increase gas limit if needed
---
## 📚 Additional Resources
- [Full Integration Requirements](./METAMASK_FULL_INTEGRATION_REQUIREMENTS.md)
- [Oracle Integration Guide](./METAMASK_ORACLE_INTEGRATION.md)
- [WETH9 Display Bug Fix](./METAMASK_WETH9_FIX_INSTRUCTIONS.md)
- [Contract Addresses Reference](./CONTRACT_ADDRESSES_REFERENCE.md)
---
## ✅ Verification Checklist
After setup, verify:
- [ ] Network "SMOM-DBIS-138" appears in MetaMask
- [ ] Can switch to ChainID 138 network
- [ ] ETH balance displays correctly
- [ ] WETH9 token imported with correct decimals (18)
- [ ] WETH10 token imported with correct decimals (18)
- [ ] Can read price from Oracle contract
- [ ] Can send test transaction
- [ ] Transaction appears in block explorer
---
## 🎯 Next Steps
1. **Explore dApps**: Connect to dApps built on ChainID 138
2. **Bridge Assets**: Use CCIP bridges to transfer assets cross-chain
3. **Deploy Contracts**: Deploy your own smart contracts
4. **Build dApps**: Create applications using the network
---
**Last Updated**: $(date)

View File

@@ -0,0 +1,34 @@
# Remaining Steps - Quick Reference
## ✅ Completed
- All contracts deployed (7/7) ✅
- All contracts have bytecode ✅
- CCIP Monitor service running ✅
- Service configurations updated ✅
## ⏳ Remaining Steps
### 1. Verify Contracts on Blockscout (High Priority)
```bash
./scripts/verify-all-contracts.sh 0.8.20
```
Status: 0/7 verified
### 2. Validate Contract Functionality (Medium Priority)
- Test contract functions
- Verify events
- Test integrations
### 3. Update Documentation (Low Priority)
- Update verification status
- Document results
## Tools
- Verify: `./scripts/verify-all-contracts.sh`
- Check: `./scripts/check-all-contracts-status.sh`
- Monitor: `./scripts/check-ccip-monitor.sh`
## Documentation
- `docs/ALL_REMAINING_STEPS.md` - Complete list
- `docs/BLOCKSCOUT_VERIFICATION_GUIDE.md` - Verification guide
- `docs/CONTRACT_VALIDATION_CHECKLIST.md` - Validation checklist

View File

@@ -0,0 +1,240 @@
# ThirdWeb RPC (VMID 2400) - Cloudflare Tunnel Quick Start
**Status:** Ready to Execute
**VMID:** 2400
**IP:** 192.168.11.240
**Domain:** `defi-oracle.io`
**FQDN:** `rpc.public-0138.defi-oracle.io`
---
## Overview
This guide will set up a Cloudflare tunnel for VMID 2400 (ThirdWeb RPC node) since we can't access pve2 where the existing tunnel is located.
---
## Step 1: Create Cloudflare Tunnel (Manual - Cloudflare Dashboard)
### 1.1 Go to Cloudflare Dashboard
1. Open: https://one.dash.cloudflare.com/
2. Login to your Cloudflare account
### 1.2 Navigate to Tunnels
1. Click on **Zero Trust** (in the left sidebar)
2. Click on **Networks****Tunnels**
### 1.3 Create New Tunnel
1. Click **Create a tunnel** button (top right)
2. Select **Cloudflared** as the connector type
3. Name: `thirdweb-rpc-2400`
4. Click **Save tunnel**
### 1.4 Copy the Tunnel Token
After creating the tunnel, you'll see a screen with a token. It looks like:
```
eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0Ijoi...
```
**IMPORTANT:** Copy this entire token - you'll need it in the next step.
---
## Step 2: Run the Installation Script (Automated)
### 2.1 Run the Script
```bash
cd /home/intlc/projects/proxmox
# Replace <TUNNEL_TOKEN> with the token you copied from Step 1.4
./scripts/setup-cloudflared-vmid2400.sh <TUNNEL_TOKEN>
```
**Example:**
```bash
./scripts/setup-cloudflared-vmid2400.sh eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0Ijoi...
```
The script will:
- ✅ Check SSH access to Proxmox host (192.168.11.10)
- ✅ Verify VMID 2400 is running
- ✅ Install cloudflared in the container
- ✅ Install and start the tunnel service
- ✅ Verify the setup
---
## Step 3: Configure Tunnel Route (Manual - Cloudflare Dashboard)
### 3.1 Go Back to Tunnel Configuration
1. In Cloudflare Dashboard: **Zero Trust****Networks****Tunnels**
2. Click on your tunnel name: `thirdweb-rpc-2400`
3. Click **Configure** button
### 3.2 Add Public Hostname
1. Go to **Public Hostname** tab
2. Click **Add a public hostname**
### 3.3 Configure the Route
Fill in the following:
```
Subdomain: rpc.public-0138
Domain: defi-oracle.io
Service Type: HTTP
URL: http://127.0.0.1:8545
```
**Important Notes:**
- The subdomain is `rpc.public-0138` (not just `rpc`)
- The full domain will be: `rpc.public-0138.defi-oracle.io`
- Use `http://127.0.0.1:8545` to connect directly to Besu RPC
- If you have Nginx on port 443, use `https://127.0.0.1:443` instead
### 3.4 Save Configuration
1. Click **Save hostname**
2. Wait a few seconds for the configuration to apply
---
## Step 4: Configure DNS Record (Manual - Cloudflare Dashboard)
### 4.1 Navigate to DNS
1. In Cloudflare Dashboard, go to your account overview
2. Select domain: **defi-oracle.io**
3. Click **DNS** in the left sidebar
4. Click **Records**
### 4.2 Add CNAME Record
1. Click **Add record**
2. Fill in:
```
Type: CNAME
Name: rpc.public-0138
Target: <your-tunnel-id>.cfargotunnel.com
Proxy: 🟠 Proxied (orange cloud)
TTL: Auto
```
3. **To find your tunnel ID:**
- Go back to **Zero Trust** → **Networks** → **Tunnels**
- Click on your tunnel: `thirdweb-rpc-2400`
- The tunnel ID is shown in the URL or in the tunnel details
- Format: `xxxx-xxxx-xxxx-xxxx` (UUID format)
### 4.3 Save DNS Record
1. Click **Save**
2. Wait 1-2 minutes for DNS propagation
---
## Step 5: Verify Setup
### 5.1 Check Tunnel Status
```bash
# From your local machine, check if the tunnel is running
ssh root@192.168.11.10 "pct exec 2400 -- systemctl status cloudflared"
```
### 5.2 Test DNS Resolution
```bash
# Test DNS resolution
dig rpc.public-0138.defi-oracle.io
nslookup rpc.public-0138.defi-oracle.io
# Should resolve to Cloudflare IPs (if proxied) or tunnel endpoint
```
### 5.3 Test RPC Endpoint
```bash
# Test HTTP RPC endpoint
curl -k https://rpc.public-0138.defi-oracle.io \
-X POST \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
# Expected: JSON response with block number
```
### 5.4 Verify in Cloudflare Dashboard
1. Go to **Zero Trust** → **Networks** → **Tunnels**
2. Click on `thirdweb-rpc-2400`
3. Status should show **Healthy** (green)
4. You should see the hostname `rpc.public-0138.defi-oracle.io` listed
---
## Troubleshooting
### Tunnel Not Connecting
```bash
# Check cloudflared logs inside the container
ssh root@192.168.11.10 "pct exec 2400 -- journalctl -u cloudflared -f"
# Check if service is running
ssh root@192.168.11.10 "pct exec 2400 -- systemctl status cloudflared"
```
### DNS Not Resolving
- Wait a few more minutes for DNS propagation
- Verify the CNAME target matches your tunnel ID
- Check that the tunnel is healthy in Cloudflare Dashboard
### Connection Refused
```bash
# Verify Besu RPC is running
ssh root@192.168.11.10 "pct exec 2400 -- systemctl status besu-rpc"
# Test Besu RPC locally
ssh root@192.168.11.10 "pct exec 2400 -- curl -X POST http://127.0.0.1:8545 \
-H 'Content-Type: application/json' \
-d '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}'"
```
---
## Summary
After completing all steps:
✅ Cloudflare tunnel created
✅ Cloudflared installed on VMID 2400
✅ Tunnel service running and connected
✅ Tunnel route configured for `rpc.public-0138.defi-oracle.io`
✅ DNS CNAME record created
✅ RPC endpoint accessible at `https://rpc.public-0138.defi-oracle.io`
**Next Steps:**
- Update Thirdweb listing with the new RPC URL
- Test with Thirdweb SDK
- Monitor tunnel status
---
## Quick Reference
**Script Location:** `scripts/setup-cloudflared-vmid2400.sh`
**Documentation:** `docs/04-configuration/THIRDWEB_RPC_CLOUDFLARE_SETUP.md`
**VMID:** 2400
**IP:** 192.168.11.240
**FQDN:** `rpc.public-0138.defi-oracle.io`

View File

@@ -0,0 +1,421 @@
# ThirdWeb RPC Nodes - Complete Next Steps
## Overview
This document lists all next steps to complete the ThirdWeb RPC node setup, from deployment to integration.
---
## Phase 1: Deploy Containers
### Step 1.1: Run the Setup Script
```bash
cd /home/intlc/projects/proxmox
./scripts/setup-thirdweb-rpc-nodes.sh
```
**Expected outcome:**
- Creates 3 LXC containers (VMIDs 2400-2402)
- Installs Besu RPC software
- Configures static IPs (192.168.11.240-242)
- Sets up systemd services
**Troubleshooting:**
- If containers fail to create, check storage: `ssh root@192.168.11.10 'pvesm status'`
- Verify template exists: `ssh root@192.168.11.10 'pvesm list local'`
- Check SSH access: `ssh root@192.168.11.10 'echo OK'`
---
## Phase 2: Verify Deployment
### Step 2.1: Check Container Status
```bash
# List all ThirdWeb containers
ssh root@192.168.11.10 "pct list | grep -E '240[0-2]'"
# Check individual container status
ssh root@192.168.11.10 "pct status 2400"
ssh root@192.168.11.10 "pct status 2401"
ssh root@192.168.11.10 "pct status 2402"
```
**Expected output:**
```
2400 2400 thirdweb-rpc-1 running
2401 2401 thirdweb-rpc-2 running
2402 2402 thirdweb-rpc-3 running
```
### Step 2.2: Verify IP Addresses
```bash
# Check IP configuration for each container
ssh root@192.168.11.10 "pct exec 2400 -- hostname -I"
ssh root@192.168.11.10 "pct exec 2401 -- hostname -I"
ssh root@192.168.11.10 "pct exec 2402 -- hostname -I"
```
**Expected output:**
- Container 2400: `192.168.11.240`
- Container 2401: `192.168.11.241`
- Container 2402: `192.168.11.242`
### Step 2.3: Test Network Connectivity
```bash
# Ping each container
ping -c 3 192.168.11.240
ping -c 3 192.168.11.241
ping -c 3 192.168.11.242
# Test port accessibility
nc -zv 192.168.11.240 8545 # HTTP RPC
nc -zv 192.168.11.240 8546 # WebSocket RPC
nc -zv 192.168.11.240 9545 # Metrics
```
---
## Phase 3: Configure Besu Services
### Step 3.1: Verify Besu Installation
```bash
# Check Besu version on each container
ssh root@192.168.11.10 "pct exec 2400 -- /opt/besu/bin/besu --version"
ssh root@192.168.11.10 "pct exec 2401 -- /opt/besu/bin/besu --version"
ssh root@192.168.11.10 "pct exec 2402 -- /opt/besu/bin/besu --version"
```
### Step 3.2: Verify Configuration Files
```bash
# Check config file exists and is correct
ssh root@192.168.11.10 "pct exec 2400 -- cat /etc/besu/config-rpc-thirdweb.toml"
```
**Verify key settings:**
- `network-id=138`
- `rpc-http-enabled=true`
- `rpc-http-port=8545`
- `rpc-ws-enabled=true`
- `rpc-ws-port=8546`
- `rpc-http-api=["ETH","NET","WEB3","DEBUG","TRACE"]`
### Step 3.3: Check Genesis and Permissions Files
```bash
# Verify genesis file exists
ssh root@192.168.11.10 "pct exec 2400 -- ls -la /genesis/genesis.json"
# Verify static nodes file exists
ssh root@192.168.11.10 "pct exec 2400 -- ls -la /genesis/static-nodes.json"
# Verify permissions file exists
ssh root@192.168.11.10 "pct exec 2400 -- ls -la /permissions/permissions-nodes.toml"
```
**If files are missing:**
- Copy from existing RPC nodes or source project
- See `smom-dbis-138/genesis/` and `smom-dbis-138/permissions/` directories
---
## Phase 4: Start and Monitor Services
### Step 4.1: Start Besu Services
```bash
# Start services on all containers
ssh root@192.168.11.10 "pct exec 2400 -- systemctl start besu-rpc.service"
ssh root@192.168.11.10 "pct exec 2401 -- systemctl start besu-rpc.service"
ssh root@192.168.11.10 "pct exec 2402 -- systemctl start besu-rpc.service"
# Enable auto-start on boot
ssh root@192.168.11.10 "pct exec 2400 -- systemctl enable besu-rpc.service"
ssh root@192.168.11.10 "pct exec 2401 -- systemctl enable besu-rpc.service"
ssh root@192.168.11.10 "pct exec 2402 -- systemctl enable besu-rpc.service"
```
### Step 4.2: Check Service Status
```bash
# Check if services are running
ssh root@192.168.11.10 "pct exec 2400 -- systemctl status besu-rpc.service"
ssh root@192.168.11.10 "pct exec 2401 -- systemctl status besu-rpc.service"
ssh root@192.168.11.10 "pct exec 2402 -- systemctl status besu-rpc.service"
```
**Expected status:** `Active: active (running)`
### Step 4.3: Monitor Service Logs
```bash
# View recent logs
ssh root@192.168.11.10 "pct exec 2400 -- journalctl -u besu-rpc.service -n 100"
# Follow logs in real-time (Ctrl+C to exit)
ssh root@192.168.11.10 "pct exec 2400 -- journalctl -u besu-rpc.service -f"
```
**Look for:**
- `Besu is listening on` messages
- `P2P started` message
- Any error messages
---
## Phase 5: Test RPC Endpoints
### Step 5.1: Test HTTP RPC Endpoints
```bash
# Test each RPC endpoint
curl -X POST http://192.168.11.240:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
curl -X POST http://192.168.11.241:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
curl -X POST http://192.168.11.242:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```
**Expected response:**
```json
{"jsonrpc":"2.0","id":1,"result":"0x..."}
```
### Step 5.2: Test WebSocket Endpoints
```bash
# Install wscat if needed: npm install -g wscat
# Test WebSocket connection
wscat -c ws://192.168.11.240:8546
# Then send: {"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}
```
### Step 5.3: Test Additional RPC Methods
```bash
# Get chain ID
curl -X POST http://192.168.11.240:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
# Get network ID
curl -X POST http://192.168.11.240:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"net_version","params":[],"id":1}'
# Get client version
curl -X POST http://192.168.11.240:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":1}'
```
### Step 5.4: Check Metrics Endpoints
```bash
# Check metrics (Prometheus format)
curl http://192.168.11.240:9545/metrics | head -20
```
---
## Phase 6: ThirdWeb Integration
### Step 6.1: Configure ThirdWeb SDK
**JavaScript/TypeScript:**
```javascript
import { ThirdwebSDK } from "@thirdweb-dev/sdk";
// HTTP RPC endpoint
const sdk = new ThirdwebSDK("http://192.168.11.240:8545", {
supportedChains: [138], // Your ChainID
});
// Or with WebSocket for subscriptions
const sdk = new ThirdwebSDK("ws://192.168.11.240:8546", {
supportedChains: [138],
});
```
### Step 6.2: Set Environment Variables
```bash
# Add to your .env file
echo "THIRDWEB_RPC_URL=http://192.168.11.240:8545" >> .env
echo "THIRDWEB_RPC_WS_URL=ws://192.168.11.240:8546" >> .env
echo "THIRDWEB_CHAIN_ID=138" >> .env
```
### Step 6.3: Configure ThirdWeb Dashboard
1. Go to ThirdWeb Dashboard → Settings → Networks
2. Click "Add Custom Network"
3. Enter:
- **Network Name**: ChainID 138 (Custom)
- **RPC URL**: `http://192.168.11.240:8545`
- **Chain ID**: `138`
- **Currency Symbol**: Your token symbol
- **Block Explorer**: (Optional) Your explorer URL
### Step 6.4: Test ThirdWeb Connection
```javascript
// Test connection
const provider = await sdk.getProvider();
const network = await provider.getNetwork();
console.log("Connected to:", network.chainId);
```
---
## Phase 7: Production Configuration
### Step 7.1: Set Up Load Balancing (Optional)
**Nginx Configuration:**
```nginx
upstream thirdweb_rpc {
least_conn;
server 192.168.11.240:8545;
server 192.168.11.241:8545;
server 192.168.11.242:8545;
}
server {
listen 80;
server_name rpc.thirdweb.yourdomain.com;
location / {
proxy_pass http://thirdweb_rpc;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
}
```
### Step 7.2: Configure Cloudflare Tunnel (Optional)
**Add to cloudflared config:**
```yaml
ingress:
- hostname: rpc-thirdweb.d-bis.org
service: http://192.168.11.240:8545
- hostname: rpc-thirdweb-2.d-bis.org
service: http://192.168.11.241:8545
- hostname: rpc-thirdweb-3.d-bis.org
service: http://192.168.11.242:8545
```
### Step 7.3: Set Up Monitoring
**Monitor metrics:**
```bash
# Set up Prometheus scraping
# Add to prometheus.yml:
scrape_configs:
- job_name: 'thirdweb-rpc'
static_configs:
- targets:
- '192.168.11.240:9545'
- '192.168.11.241:9545'
- '192.168.11.242:9545'
```
---
## Phase 8: Documentation and Maintenance
### Step 8.1: Update Documentation
- [ ] Update infrastructure documentation with new IPs
- [ ] Document ThirdWeb RPC endpoints
- [ ] Add monitoring dashboards
- [ ] Document load balancing setup (if applicable)
### Step 8.2: Create Backup Procedures
```bash
# Backup Besu data directories
ssh root@192.168.11.10 "pct exec 2400 -- tar -czf /tmp/besu-backup-$(date +%Y%m%d).tar.gz /data/besu"
# Backup configuration files
ssh root@192.168.11.10 "pct exec 2400 -- tar -czf /tmp/besu-config-$(date +%Y%m%d).tar.gz /etc/besu"
```
### Step 8.3: Set Up Health Checks
**Create health check script:**
```bash
#!/bin/bash
# health-check-thirdweb-rpc.sh
for ip in 192.168.11.240 192.168.11.241 192.168.11.242; do
if curl -s -X POST http://${ip}:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
| grep -q "result"; then
echo "${ip}:8545 is healthy"
else
echo "${ip}:8545 is down"
fi
done
```
---
## Troubleshooting Checklist
If containers fail to start:
- [ ] Check storage availability: `pvesm status`
- [ ] Verify template exists: `pvesm list local`
- [ ] Check container logs: `pct config <VMID>`
If Besu services fail:
- [ ] Check service logs: `journalctl -u besu-rpc.service -f`
- [ ] Verify config file syntax: `besu --config-file=/etc/besu/config-rpc-thirdweb.toml validate`
- [ ] Check disk space: `df -h`
- [ ] Verify network connectivity to validators/sentries
If RPC endpoints don't respond:
- [ ] Verify firewall rules: `iptables -L -n | grep 8545`
- [ ] Check Besu is listening: `netstat -tlnp | grep 8545`
- [ ] Verify chain sync: Check logs for sync progress
- [ ] Test connectivity: `ping` and `nc` tests
---
## Quick Reference Commands
```bash
# Status check
ssh root@192.168.11.10 "pct list | grep 240"
# Restart all services
for vmid in 2400 2401 2402; do
ssh root@192.168.11.10 "pct exec $vmid -- systemctl restart besu-rpc.service"
done
# View all logs
for vmid in 2400 2401 2402; do
echo "=== Container $vmid ==="
ssh root@192.168.11.10 "pct exec $vmid -- journalctl -u besu-rpc.service -n 20"
done
# Test all endpoints
for ip in 240 241 242; do
curl -X POST http://192.168.11.${ip}:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
done
```
---
## Completion Checklist
- [ ] All containers created and running
- [ ] IP addresses configured correctly
- [ ] Besu services started and enabled
- [ ] RPC endpoints responding
- [ ] ThirdWeb SDK configured
- [ ] Load balancing configured (if needed)
- [ ] Monitoring set up (if needed)
- [ ] Documentation updated
- [ ] Health checks implemented

View File

@@ -0,0 +1,73 @@
# ThirdWeb RPC Nodes - Quick Start
## Summary
Setup complete! Ready to deploy ThirdWeb RPC node LXC containers.
## What Was Created
1. **Setup Script**: `scripts/setup-thirdweb-rpc-nodes.sh`
- Creates 3 LXC containers (VMIDs 2600-2602)
- Installs and configures Besu RPC nodes
- Optimized for ThirdWeb SDK integration
2. **Configuration**: `smom-dbis-138/config/config-rpc-thirdweb.toml`
- ThirdWeb-optimized Besu configuration
- WebSocket support enabled
- Extended APIs (DEBUG, TRACE)
- Increased transaction pool and timeout settings
3. **Documentation**: `docs/THIRDWEB_RPC_SETUP.md`
- Complete setup and usage guide
- Integration examples
- Troubleshooting tips
## Container Details
| VMID | Hostname | IP Address | Status |
|------|----------|------------|--------|
| 2400 | thirdweb-rpc-1 | 192.168.11.240 | Ready to deploy |
| 2401 | thirdweb-rpc-2 | 192.168.11.241 | Ready to deploy |
| 2402 | thirdweb-rpc-3 | 192.168.11.242 | Ready to deploy |
**Note**: VMIDs align with IP addresses - VMID 2400 = 192.168.11.240
## Quick Deploy
```bash
# Run the setup script
cd /home/intlc/projects/proxmox
./scripts/setup-thirdweb-rpc-nodes.sh
```
## RPC Endpoints
After deployment, you'll have:
- **HTTP RPC**: `http://192.168.11.240:8545`
- **WebSocket RPC**: `ws://192.168.11.240:8546`
- **Metrics**: `http://192.168.11.240:9545/metrics`
## ThirdWeb Integration
```javascript
import { ThirdwebSDK } from "@thirdweb-dev/sdk";
const sdk = new ThirdwebSDK("http://192.168.11.240:8545", {
supportedChains: [138],
});
```
## Next Steps
1. Review the full documentation: `docs/THIRDWEB_RPC_SETUP.md`
2. Run the setup script to create containers
3. Verify endpoints are accessible
4. Configure ThirdWeb Dashboard to use the RPC endpoints
5. Test with your ThirdWeb dApps
## Support
- Check container status: `ssh root@192.168.11.10 'pct list | grep 240'`
- View logs: `ssh root@192.168.11.10 'pct exec 2600 -- journalctl -u besu-rpc.service -f'`
- Test RPC: `curl -X POST http://192.168.11.240:8545 -H 'Content-Type: application/json' --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'`

View File

@@ -0,0 +1,547 @@
# Comprehensive Infrastructure Review
**Last Updated:** 2025-12-27
**Document Version:** 1.0
**Status:** Active Documentation
**Review Scope:** All Tunnels, DNS Entries, Nginx Configurations, VMIDs
---
## Executive Summary
This document provides a comprehensive review of:
- ✅ All Cloudflare Tunnels
- ✅ All DNS Entries
- ✅ All Nginx Configurations
- ✅ All VMIDs and Services
- ✅ Recommendations for Optimization
---
## 1. Cloudflare Tunnels Review
### Active Tunnels
| Tunnel Name | Tunnel ID | Status | Location | Purpose |
|-------------|-----------|--------|-----------|---------|
| `explorer.d-bis.org` | `b02fe1fe-cb7d-484e-909b-7cc41298ebe8` | ✅ HEALTHY | VMID 102 | Explorer/Blockscout |
| `rpc-http-pub.d-bis.org` | `10ab22da-8ea3-4e2e-a896-27ece2211a05` | ⚠️ DOWN | VMID 102 | RPC Services (needs config) |
| `mim4u-tunnel` | `f8d06879-04f8-44ef-aeda-ce84564a1792` | ✅ HEALTHY | Unknown | Miracles In Motion |
| `tunnel-ml110` | `ccd7150a-9881-4b8c-a105-9b4ead6e69a2` | ✅ HEALTHY | Unknown | Proxmox Host Access |
| `tunnel-r630-01` | `4481af8f-b24c-4cd3-bdd5-f562f4c97df4` | ✅ HEALTHY | Unknown | Proxmox Host Access |
| `tunnel-r630-02` | `0876f12b-64d7-4927-9ab3-94cb6cf48af9` | ✅ HEALTHY | Unknown | Proxmox Host Access |
### Current Tunnel Configuration (VMID 102)
**Active Tunnel**: `rpc-http-pub.d-bis.org` (Tunnel ID: `10ab22da-8ea3-4e2e-a896-27ece2211a05`)
**Current Routing** (from logs):
- `rpc-ws-pub.d-bis.org``https://192.168.11.252:443`
- `rpc-http-prv.d-bis.org``https://192.168.11.251:443`
- `rpc-ws-prv.d-bis.org``https://192.168.11.251:443`
- `rpc-http-pub.d-bis.org``https://192.168.11.252:443`
**⚠️ Issue**: Tunnel is routing directly to RPC nodes instead of central Nginx
**✅ Recommended Configuration**:
- All HTTP endpoints → `http://192.168.11.21:80` (Central Nginx)
- WebSocket endpoints → Direct to RPC nodes (as configured)
---
## 2. DNS Entries Review
### Current DNS Records (from d-bis.org zone file)
#### A Records (Direct IPs)
| Domain | IP Address(es) | Proxy Status | Notes |
|--------|----------------|--------------|-------|
| `api.d-bis.org` | 20.8.47.226 | ❌ Not Proxied | Should use tunnel |
| `besu.d-bis.org` | 20.215.32.42, 70.153.83.83 | ✅ Proxied | **DUPLICATE** - Remove one |
| `blockscout.d-bis.org` | 20.215.32.42, 70.153.83.83 | ✅ Proxied | **DUPLICATE** - Remove one |
| `d-bis.org` (root) | 20.215.32.42, 20.215.32.15 | ✅ Proxied | **DUPLICATE** - Remove one |
| `docs.d-bis.org` | 20.8.47.226 | ❌ Not Proxied | Should use tunnel |
| `explorer.d-bis.org` | 20.215.32.42, 70.153.83.83 | ✅ Proxied | **DUPLICATE** - Remove one |
| `grafana.d-bis.org` | 20.8.47.226 | ❌ Not Proxied | Should use tunnel |
| `metrics.d-bis.org` | 70.153.83.83 | ❌ Not Proxied | Should use tunnel |
| `monitoring.d-bis.org` | 70.153.83.83 | ✅ Proxied | Should use tunnel |
| `prometheus.d-bis.org` | 20.8.47.226 | ❌ Not Proxied | Should use tunnel |
| `tessera.d-bis.org` | 20.8.47.226 | ❌ Not Proxied | Should use tunnel |
| `wallet.d-bis.org` | 70.153.83.83 | ✅ Proxied | Should use tunnel |
| `ws.d-bis.org` | 20.8.47.226 | ❌ Not Proxied | Should use tunnel |
| `www.d-bis.org` | 20.8.47.226 | ✅ Proxied | Should use tunnel |
#### CNAME Records (Tunnel-based)
| Domain | Target | Proxy Status | Notes |
|--------|--------|--------------|-------|
| `rpc.d-bis.org` | `dbis138fdendpoint-cgergbcqb7aca7at.a03.azurefd.net` | ✅ Proxied | Azure Front Door |
| `ipfs.d-bis.org` | `ipfs.cloudflare.com` | ✅ Proxied | Cloudflare IPFS |
#### Missing DNS Records (Should Exist)
| Domain | Type | Target | Status |
|--------|------|--------|--------|
| `rpc-http-pub.d-bis.org` | CNAME | `<tunnel-id>.cfargotunnel.com` | ❌ Missing |
| `rpc-ws-pub.d-bis.org` | CNAME | `<tunnel-id>.cfargotunnel.com` | ❌ Missing |
| `rpc-http-prv.d-bis.org` | CNAME | `<tunnel-id>.cfargotunnel.com` | ❌ Missing |
| `rpc-ws-prv.d-bis.org` | CNAME | `<tunnel-id>.cfargotunnel.com` | ❌ Missing |
| `dbis-admin.d-bis.org` | CNAME | `<tunnel-id>.cfargotunnel.com` | ❌ Missing |
| `dbis-api.d-bis.org` | CNAME | `<tunnel-id>.cfargotunnel.com` | ❌ Missing |
| `dbis-api-2.d-bis.org` | CNAME | `<tunnel-id>.cfargotunnel.com` | ❌ Missing |
| `mim4u.org` | CNAME | `<tunnel-id>.cfargotunnel.com` | ❌ Missing |
| `www.mim4u.org` | CNAME | `<tunnel-id>.cfargotunnel.com` | ❌ Missing |
---
## 3. Nginx Configurations Review
### Central Nginx (VMID 105 - 192.168.11.21)
**Status**: ✅ Configured
**Configuration**: `/data/nginx/custom/http.conf`
**Type**: Nginx Proxy Manager (OpenResty)
**Configured Services**:
-`explorer.d-bis.org``http://192.168.11.140:80`
-`rpc-http-pub.d-bis.org``https://192.168.11.252:443`
-`rpc-http-prv.d-bis.org``https://192.168.11.251:443`
-`dbis-admin.d-bis.org``http://192.168.11.130:80`
-`dbis-api.d-bis.org``http://192.168.11.150:3000`
-`dbis-api-2.d-bis.org``http://192.168.11.151:3000`
-`mim4u.org``http://192.168.11.19:80`
-`www.mim4u.org``301 Redirect``mim4u.org`
**Note**: WebSocket endpoints (`rpc-ws-*`) are NOT in this config (routing directly)
### Blockscout Nginx (VMID 5000 - 192.168.11.140)
**Status**: ✅ Running
**Configuration**: `/etc/nginx/sites-available/blockscout`
**Purpose**: Local Nginx for Blockscout service
**Ports**:
- Port 80: HTTP (redirects to HTTPS or serves content)
- Port 443: HTTPS (proxies to Blockscout on port 4000)
### Miracles In Motion Nginx (VMID 7810 - 192.168.11.19)
**Status**: ✅ Running
**Configuration**: `/etc/nginx/sites-available/default`
**Purpose**: Web frontend and API proxy
**Ports**:
- Port 80: HTTP (serves static files, proxies API to 192.168.11.8:3001)
### DBIS Frontend Nginx (VMID 10130 - 192.168.11.130)
**Status**: ✅ Running (assumed)
**Purpose**: Frontend admin console
### RPC Nodes Nginx (VMIDs 2500, 2501, 2502)
**Status**: ⚠️ Partially Configured
**Purpose**: SSL termination and local routing
**VMID 2500** (192.168.11.250):
- Port 443: HTTPS RPC → `127.0.0.1:8545`
- Port 8443: HTTPS WebSocket → `127.0.0.1:8546`
**VMID 2501** (192.168.11.251):
- Port 443: HTTPS RPC → `127.0.0.1:8545`
- Port 443: HTTPS WebSocket → `127.0.0.1:8546` (SNI-based)
**VMID 2502** (192.168.11.252):
- Port 443: HTTPS RPC → `127.0.0.1:8545`
- Port 443: HTTPS WebSocket → `127.0.0.1:8546` (SNI-based)
---
## 4. VMIDs Review
### Infrastructure Services
| VMID | Name | IP | Status | Purpose |
|------|------|----|----|---------|
| 100 | proxmox-mail-gateway | 192.168.11.32 | ✅ Running | Mail gateway |
| 101 | proxmox-datacenter-manager | 192.168.11.33 | ✅ Running | Datacenter management |
| 102 | cloudflared | 192.168.11.34 | ✅ Running | Cloudflare tunnel client |
| 103 | omada | 192.168.11.30 | ✅ Running | Network management |
| 104 | gitea | 192.168.11.31 | ✅ Running | Git repository |
| 105 | nginxproxymanager | 192.168.11.26 | ✅ Running | Central Nginx reverse proxy |
| 130 | monitoring-1 | 192.168.11.27 | ✅ Running | Monitoring stack |
### Blockchain Services
| VMID | Name | IP | Status | Purpose | Notes |
|------|------|----|----|---------|-------|
| 5000 | blockscout-1 | 192.168.11.140 | ✅ Running | Blockchain explorer | Has local Nginx |
| 6200 | firefly-1 | 192.168.11.7 | ✅ Running | Hyperledger Firefly | Web3 gateway |
### RPC Nodes
| VMID | Name | IP | Status | Purpose | Notes |
|------|------|----|----|---------|-------|
| 2500 | besu-rpc-1 | 192.168.11.250 | ✅ Running | Core RPC | Located on ml110 (192.168.11.10) |
| 2501 | besu-rpc-2 | 192.168.11.251 | ✅ Running | Permissioned RPC | Located on ml110 (192.168.11.10) |
| 2502 | besu-rpc-3 | 192.168.11.252 | ✅ Running | Public RPC | Located on ml110 (192.168.11.10) |
**✅ Status**: RPC nodes are running on ml110 (192.168.11.10), not on pve2.
### Application Services
| VMID | Name | IP | Status | Purpose |
|------|------|----|----|---------|
| 7800 | sankofa-api-1 | 192.168.11.13 | ✅ Running | Sankofa API |
| 7801 | sankofa-portal-1 | 192.168.11.16 | ✅ Running | Sankofa Portal |
| 7802 | sankofa-keycloak-1 | 192.168.11.17 | ✅ Running | Sankofa Keycloak |
| 7810 | mim-web-1 | 192.168.11.19 | ✅ Running | Miracles In Motion Web |
| 7811 | mim-api-1 | 192.168.11.8 | ✅ Running | Miracles In Motion API |
### DBIS Core Services
| VMID | Name | IP | Status | Purpose | Notes |
|------|------|----|----|---------|-------|
| 10100 | dbis-postgres-primary | 192.168.11.100 | ✅ Running | PostgreSQL Primary | Located on ml110 (192.168.11.10) |
| 10101 | dbis-postgres-replica-1 | 192.168.11.101 | ✅ Running | PostgreSQL Replica | Located on ml110 (192.168.11.10) |
| 10120 | dbis-redis | 192.168.11.120 | ✅ Running | Redis Cache | Located on ml110 (192.168.11.10) |
| 10130 | dbis-frontend | 192.168.11.130 | ✅ Running | Frontend Admin | Located on ml110 (192.168.11.10) |
| 10150 | dbis-api-primary | 192.168.11.150 | ✅ Running | API Primary | Located on ml110 (192.168.11.10) |
| 10151 | dbis-api-secondary | 192.168.11.151 | ✅ Running | API Secondary | Located on ml110 (192.168.11.10) |
**✅ Status**: DBIS Core containers are running on ml110 (192.168.11.10), not on pve2.
---
## 5. Critical Issues Identified
### 🔴 High Priority
1. **Tunnel Configuration Mismatch**
- Tunnel `rpc-http-pub.d-bis.org` is DOWN
- Currently routing directly to RPC nodes instead of central Nginx
- **Action**: Update Cloudflare dashboard to route HTTP endpoints to `http://192.168.11.21:80`
2. **Missing DNS Records**
- RPC endpoints (`rpc-http-pub`, `rpc-ws-pub`, `rpc-http-prv`, `rpc-ws-prv`) missing CNAME records
- DBIS services (`dbis-admin`, `dbis-api`, `dbis-api-2`) missing CNAME records
- `mim4u.org` and `www.mim4u.org` missing CNAME records
- **Action**: Create CNAME records pointing to tunnel
3. **Duplicate DNS A Records**
- `besu.d-bis.org`: 2 A records (20.215.32.42, 70.153.83.83)
- `blockscout.d-bis.org`: 2 A records (20.215.32.42, 70.153.83.83)
- `explorer.d-bis.org`: 2 A records (20.215.32.42, 70.153.83.83)
- `d-bis.org`: 2 A records (20.215.32.42, 20.215.32.15)
- **Action**: Remove duplicate records, keep single authoritative IP
4. **RPC Nodes Location**
- ✅ VMIDs 2500, 2501, 2502 found on ml110 (192.168.11.10)
- **Action**: Verify network connectivity from pve2 to ml110
5. **DBIS Core Services Location**
- ✅ VMIDs 10100-10151 found on ml110 (192.168.11.10)
- **Action**: Verify network connectivity from pve2 to ml110
### 🟡 Medium Priority
6. **DNS Records Using Direct IPs Instead of Tunnels**
- Many services use A records with direct IPs
- Should use CNAME records pointing to tunnel
- **Action**: Migrate to tunnel-based DNS
7. **Inconsistent Proxy Status**
- Some records proxied, some not
- **Action**: Standardize proxy status (proxied for public services)
8. **Multiple Nginx Instances**
- Central Nginx (105), Blockscout Nginx (5000), MIM Nginx (7810), RPC Nginx (2500-2502)
- **Action**: Consider consolidating or document purpose of each
### 🟢 Low Priority
9. **Documentation Gaps**
- Some VMIDs have incomplete documentation
- **Action**: Update documentation with current status
10. **Service Discovery**
- No centralized service registry
- **Action**: Consider implementing service discovery
---
## 6. Recommendations
### Immediate Actions (Critical)
1. **Fix Tunnel Configuration**
```yaml
# Update Cloudflare dashboard for tunnel: rpc-http-pub.d-bis.org
# Route all HTTP endpoints to central Nginx:
- explorer.d-bis.org → http://192.168.11.21:80
- rpc-http-pub.d-bis.org → http://192.168.11.21:80
- rpc-http-prv.d-bis.org → http://192.168.11.21:80
- dbis-admin.d-bis.org → http://192.168.11.21:80
- dbis-api.d-bis.org → http://192.168.11.21:80
- dbis-api-2.d-bis.org → http://192.168.11.21:80
- mim4u.org → http://192.168.11.21:80
- www.mim4u.org → http://192.168.11.21:80
```
2. **Create Missing DNS Records**
- Create CNAME records for all RPC endpoints
- Create CNAME records for DBIS services
- Create CNAME records for MIM services
- All should point to: `<tunnel-id>.cfargotunnel.com`
- Enable proxy (orange cloud) for all
3. **Remove Duplicate DNS Records**
- Remove duplicate A records for `besu.d-bis.org`
- Remove duplicate A records for `blockscout.d-bis.org`
- Remove duplicate A records for `explorer.d-bis.org`
- Remove duplicate A records for `d-bis.org` (keep 20.215.32.15)
4. **Locate Missing VMIDs**
- Find RPC nodes (2500-2502) on other Proxmox hosts
- Verify DBIS Core services (10100-10151) deployment status
### Short-term Improvements
5. **DNS Migration to Tunnels**
- Migrate all A records to CNAME records pointing to tunnels
- Remove direct IP exposure
- Enable proxy for all public services
6. **Tunnel Consolidation**
- Consider consolidating multiple tunnels into single tunnel
- Use central Nginx for all HTTP routing
- Simplify tunnel management
7. **Nginx Architecture Review**
- Document purpose of each Nginx instance
- Consider if all are necessary
- Standardize configuration approach
### Long-term Optimizations
8. **Service Discovery**
- Implement centralized service registry
- Automate DNS record creation
- Dynamic service routing
9. **Monitoring and Alerting**
- Monitor all tunnel health
- Alert on tunnel failures
- Track DNS record changes
10. **Documentation**
- Maintain up-to-date infrastructure map
- Document all service dependencies
- Create runbooks for common operations
---
## 7. Architecture Recommendations
### Recommended Architecture
```
Internet
Cloudflare (DNS + SSL Termination)
Cloudflare Tunnel (VMID 102)
Routing Decision:
├─ HTTP Services → Central Nginx (VMID 105:80) → Internal Services
└─ WebSocket Services → Direct to RPC Nodes (bypass Nginx)
```
**Key Principle**:
- HTTP traffic routes through central Nginx for unified management
- WebSocket traffic routes directly to RPC nodes for optimal performance
### Benefits
1. **Single Point of Configuration**: All HTTP routing in one place
2. **Simplified Management**: Easy to add/remove services
3. **Better Security**: No direct IP exposure
4. **Centralized Logging**: All traffic logs in one location
5. **Easier Troubleshooting**: Single point to check routing
---
## 8. Action Items Checklist
### Critical (Do First)
- [ ] Update Cloudflare tunnel configuration to route HTTP endpoints to central Nginx
- [ ] Create missing DNS CNAME records for all services
- [ ] Remove duplicate DNS A records
- [x] Locate and verify RPC nodes (2500-2502) - ✅ Found on ml110
- [x] Verify DBIS Core services deployment status - ✅ Found on ml110
- [ ] Verify network connectivity from pve2 (192.168.11.12) to ml110 (192.168.11.10)
### Important (Do Next)
- [ ] Migrate remaining A records to CNAME (tunnel-based)
- [ ] Standardize proxy status across all DNS records
- [ ] Document all Nginx instances and their purposes
- [ ] Test all endpoints after configuration changes
### Nice to Have
- [ ] Implement service discovery
- [ ] Set up monitoring and alerting
- [ ] Create comprehensive infrastructure documentation
- [ ] Automate DNS record management
---
## 9. DNS Records Migration Plan
### Current State (A Records - Direct IPs)
Many services use A records pointing to direct IPs. These should be migrated to CNAME records pointing to Cloudflare tunnels.
### Migration Priority
**High Priority** (Public-facing services):
1. `explorer.d-bis.org` → CNAME to tunnel
2. `rpc-http-pub.d-bis.org` → CNAME to tunnel
3. `rpc-ws-pub.d-bis.org` → CNAME to tunnel
4. `rpc-http-prv.d-bis.org` → CNAME to tunnel
5. `rpc-ws-prv.d-bis.org` → CNAME to tunnel
**Medium Priority** (Internal services):
6. `dbis-admin.d-bis.org` → CNAME to tunnel
7. `dbis-api.d-bis.org` → CNAME to tunnel
8. `dbis-api-2.d-bis.org` → CNAME to tunnel
9. `mim4u.org` → CNAME to tunnel
10. `www.mim4u.org` → CNAME to tunnel
**Low Priority** (Monitoring/internal):
11. `grafana.d-bis.org` → CNAME to tunnel (if public access needed)
12. `prometheus.d-bis.org` → CNAME to tunnel (if public access needed)
13. `monitoring.d-bis.org` → CNAME to tunnel
### Migration Steps
For each domain:
1. Create CNAME record: `<subdomain>` → `<tunnel-id>.cfargotunnel.com`
2. Enable proxy (orange cloud)
3. Wait for DNS propagation (1-5 minutes)
4. Test endpoint accessibility
5. Remove old A record (if exists)
---
## 10. Testing Plan
After implementing recommendations:
1. **Test HTTP Endpoints**:
```bash
curl https://explorer.d-bis.org/api/v2/stats
curl -X POST https://rpc-http-pub.d-bis.org \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
curl https://dbis-admin.d-bis.org
curl https://mim4u.org
```
2. **Test WebSocket Endpoints**:
```bash
wscat -c wss://rpc-ws-pub.d-bis.org
wscat -c wss://rpc-ws-prv.d-bis.org
```
3. **Test Redirects**:
```bash
curl -I https://www.mim4u.org # Should redirect to mim4u.org
```
4. **Verify Tunnel Health**:
- Check Cloudflare dashboard for tunnel status
- Verify all tunnels show HEALTHY
- Check tunnel logs for errors
---
---
## 11. Summary of Recommendations
### 🔴 Critical (Fix Immediately)
1. **Update Cloudflare Tunnel Configuration**
- Tunnel: `rpc-http-pub.d-bis.org` (Tunnel ID: `10ab22da-8ea3-4e2e-a896-27ece2211a05`)
- Action: Route all HTTP endpoints to `http://192.168.11.21:80` (central Nginx)
- Keep WebSocket endpoints routing directly to RPC nodes
2. **Create Missing DNS CNAME Records**
- `rpc-http-pub.d-bis.org` → CNAME to tunnel
- `rpc-ws-pub.d-bis.org` → CNAME to tunnel
- `rpc-http-prv.d-bis.org` → CNAME to tunnel
- `rpc-ws-prv.d-bis.org` → CNAME to tunnel
- `dbis-admin.d-bis.org` → CNAME to tunnel
- `dbis-api.d-bis.org` → CNAME to tunnel
- `dbis-api-2.d-bis.org` → CNAME to tunnel
- `mim4u.org` → CNAME to tunnel
- `www.mim4u.org` → CNAME to tunnel
3. **Remove Duplicate DNS A Records**
- `besu.d-bis.org`: Remove one IP (keep single authoritative)
- `blockscout.d-bis.org`: Remove one IP
- `explorer.d-bis.org`: Remove one IP
- `d-bis.org`: Remove 20.215.32.42 (keep 20.215.32.15)
### 🟡 Important (Fix Soon)
4. **Migrate A Records to CNAME (Tunnel-based)**
- Convert remaining A records to CNAME records
- Point all to Cloudflare tunnel endpoints
- Enable proxy (orange cloud) for all public services
5. **Verify Network Connectivity**
- Test connectivity from pve2 (192.168.11.12) to ml110 (192.168.11.10)
- Ensure RPC nodes (2500-2502) are accessible from central Nginx
- Ensure DBIS services (10100-10151) are accessible from central Nginx
### 🟢 Optimization (Nice to Have)
6. **Documentation Updates**
- Update all service documentation with current IPs and locations
- Document network topology (pve2 vs ml110)
- Create service dependency map
7. **Monitoring Setup**
- Monitor all tunnel health
- Alert on tunnel failures
- Track DNS record changes
---
## Related Documentation
### Architecture Documents
- **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md)** ⭐⭐⭐ - Complete network architecture
- **[PHYSICAL_HARDWARE_INVENTORY.md](PHYSICAL_HARDWARE_INVENTORY.md)** ⭐⭐⭐ - Physical hardware inventory
- **[ORCHESTRATION_DEPLOYMENT_GUIDE.md](ORCHESTRATION_DEPLOYMENT_GUIDE.md)** ⭐⭐⭐ - Deployment orchestration
- **[DOMAIN_STRUCTURE.md](DOMAIN_STRUCTURE.md)** ⭐⭐ - Domain structure
### Network Documents
- **[../05-network/CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md](../05-network/CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md)** - Cloudflare tunnel routing
- **[../05-network/CENTRAL_NGINX_ROUTING_SETUP.md](../05-network/CENTRAL_NGINX_ROUTING_SETUP.md)** - Central Nginx routing
### Configuration Documents
- **[../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md](../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md)** - DNS mapping to containers
- **[../04-configuration/RPC_DNS_CONFIGURATION.md](../04-configuration/RPC_DNS_CONFIGURATION.md)** - RPC DNS configuration
---
**Last Updated:** 2025-12-27
**Document Version:** 1.0
**Review Cycle:** Quarterly

View File

@@ -0,0 +1,172 @@
# Domain Structure
**Last Updated:** 2025-01-03
**Document Version:** 1.0
**Status:** Active Documentation
---
## Overview
This document defines the domain structure for the infrastructure, clarifying which domains are used for different purposes.
---
## Domain Assignments
### 1. sankofa.nexus - Hardware Infrastructure
**Purpose:** Physical hardware hostnames and internal network DNS
**Usage:**
- All physical servers (ml110, r630-01 through r630-04)
- Internal network DNS resolution
- SSH access via FQDN
- Internal service discovery
**Examples:**
- `ml110.sankofa.nexus` → 192.168.11.10
- `r630-01.sankofa.nexus` → 192.168.11.11
- `r630-02.sankofa.nexus` → 192.168.11.12
- `r630-03.sankofa.nexus` → 192.168.11.13
- `r630-04.sankofa.nexus` → 192.168.11.14
**DNS Configuration:**
- Internal DNS server (typically on ER605 or Omada controller)
- Not publicly resolvable (internal network only)
- Used for local network service discovery
**Related Documentation:**
- [Physical Hardware Inventory](./PHYSICAL_HARDWARE_INVENTORY.md)
---
### 2. d-bis.org - ChainID 138 Services
**Purpose:** Public-facing services for ChainID 138 blockchain network
**Usage:**
- RPC endpoints (public and permissioned)
- Block explorer
- WebSocket endpoints
- Cloudflare tunnels for Proxmox hosts
- All ChainID 138 blockchain-related services
**Examples:**
- `rpc.d-bis.org` - Primary RPC endpoint
- `rpc2.d-bis.org` - Secondary RPC endpoint
- `explorer.d-bis.org` - Block explorer (Blockscout)
- `ml110-01.d-bis.org` - Proxmox UI (via Cloudflare tunnel)
- `r630-01.d-bis.org` - Proxmox UI (via Cloudflare tunnel)
- `r630-02.d-bis.org` - Proxmox UI (via Cloudflare tunnel)
- `r630-03.d-bis.org` - Proxmox UI (via Cloudflare tunnel)
- `r630-04.d-bis.org` - Proxmox UI (via Cloudflare tunnel)
**DNS Configuration:**
- Cloudflare DNS (proxied)
- Publicly resolvable
- SSL/TLS via Cloudflare
**Related Documentation:**
- [Cloudflare Tunnel Setup](../04-configuration/CLOUDFLARE_TUNNEL_CONFIGURATION_GUIDE.md)
- [RPC Configuration](../04-configuration/RPC_DNS_CONFIGURATION.md)
- [Blockscout Setup](../BLOCKSCOUT_COMPLETE_SUMMARY.md)
---
### 3. defi-oracle.io - ChainID 138 Legacy (ThirdWeb RPC)
**Purpose:** Legacy RPC endpoint for ThirdWeb integration
**Usage:**
- ThirdWeb RPC endpoint (VMID 2400)
- Legacy compatibility for existing integrations
- Public RPC access for ChainID 138
**Examples:**
- `rpc.defi-oracle.io` - Legacy RPC endpoint
- `rpc.public-0138.defi-oracle.io` - Specific ChainID 138 RPC endpoint
**DNS Configuration:**
- Cloudflare DNS (proxied)
- Publicly resolvable
- SSL/TLS via Cloudflare
**Note:** This domain is maintained for backward compatibility with ThirdWeb integrations. New integrations should use `d-bis.org` endpoints.
**Related Documentation:**
- [ThirdWeb RPC Setup](../04-configuration/THIRDWEB_RPC_CLOUDFLARE_SETUP.md)
- [VMID 2400 DNS Structure](../04-configuration/VMID2400_DNS_STRUCTURE.md)
---
## Domain Summary Table
| Domain | Purpose | Public | DNS Provider | SSL/TLS |
|--------|---------|--------|--------------|---------|
| `sankofa.nexus` | Hardware infrastructure | No (internal) | Internal DNS | Self-signed |
| `d-bis.org` | ChainID 138 services | Yes | Cloudflare | Cloudflare |
| `defi-oracle.io` | ChainID 138 legacy (ThirdWeb) | Yes | Cloudflare | Cloudflare |
---
## Domain Usage Guidelines
### When to Use sankofa.nexus
- Internal network communication
- SSH access to physical hosts
- Internal service discovery
- Local network DNS resolution
- Proxmox cluster communication
### When to Use d-bis.org
- Public blockchain RPC endpoints
- Block explorer access
- Public-facing Proxmox UI (via tunnels)
- ChainID 138 service endpoints
- New integrations and services
### When to Use defi-oracle.io
- ThirdWeb RPC endpoint (legacy)
- Backward compatibility
- Existing integrations that reference this domain
---
## Migration Notes
### From defi-oracle.io to d-bis.org
For new services and integrations:
- **Use `d-bis.org`** as the primary domain
- `defi-oracle.io` is maintained for legacy ThirdWeb RPC compatibility
- All new ChainID 138 services should use `d-bis.org`
### DNS Record Management
- **sankofa.nexus**: Managed via internal DNS (Omada controller or local DNS server)
- **d-bis.org**: Managed via Cloudflare DNS
- **defi-oracle.io**: Managed via Cloudflare DNS
---
## Related Documentation
### Architecture Documents
- **[PHYSICAL_HARDWARE_INVENTORY.md](PHYSICAL_HARDWARE_INVENTORY.md)** ⭐⭐⭐ - Physical hardware inventory
- **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md)** ⭐⭐⭐ - Complete network architecture
- **[ORCHESTRATION_DEPLOYMENT_GUIDE.md](ORCHESTRATION_DEPLOYMENT_GUIDE.md)** ⭐⭐⭐ - Deployment orchestration
### Configuration Documents
- **[../04-configuration/cloudflare/CLOUDFLARE_TUNNEL_CONFIGURATION_GUIDE.md](../04-configuration/cloudflare/CLOUDFLARE_TUNNEL_CONFIGURATION_GUIDE.md)** - Cloudflare tunnel configuration
- **[../04-configuration/RPC_DNS_CONFIGURATION.md](../04-configuration/RPC_DNS_CONFIGURATION.md)** - RPC DNS configuration
- **[../05-network/CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md](../05-network/CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md)** - Cloudflare routing architecture
---
**Last Updated:** 2025-01-03
**Document Version:** 1.0
**Review Cycle:** Quarterly

View File

@@ -1,7 +1,10 @@
# Network Architecture - Enterprise Orchestration Plan
**Navigation:** [Home](../README.md) > [Architecture](README.md) > Network Architecture
**Last Updated:** 2025-01-20
**Document Version:** 2.0
**Status:** 🟢 Active Documentation
**Project:** Sankofa / Phoenix / PanTel · ChainID 138 · Proxmox + Cloudflare Zero Trust + Dual ISP + 6×/28
---
@@ -33,6 +36,8 @@ This document defines the complete enterprise-grade network architecture for the
## 1. Physical Topology & Hardware Roles
> **Reference:** For complete physical hardware inventory including IP addresses, credentials, and detailed specifications, see **[PHYSICAL_HARDWARE_INVENTORY.md](PHYSICAL_HARDWARE_INVENTORY.md)**.
### 1.1 Hardware Role Assignment
#### Edge / Routing
@@ -65,13 +70,14 @@ This document defines the complete enterprise-grade network architecture for the
### Public Block #1 (Known - Spectrum)
| Property | Value |
|----------|-------|
| **Network** | `76.53.10.32/28` |
| **Gateway** | `76.53.10.33` |
| **Usable Range** | `76.53.10.3376.53.10.46` |
| **Broadcast** | `76.53.10.47` |
| **ER605 WAN1 IP** | `76.53.10.34` (router interface) |
| Property | Value | Status |
|----------|-------|--------|
| **Network** | `76.53.10.32/28` | ✅ Configured |
| **Gateway** | `76.53.10.33` | ✅ Active |
| **Usable Range** | `76.53.10.3376.53.10.46` | ✅ In Use |
| **Broadcast** | `76.53.10.47` | - |
| **ER605 WAN1 IP** | `76.53.10.34` (router interface) | ✅ Active |
| **Available IPs** | 13 (76.53.10.35-46, excluding .34) | ✅ Available |
### Public Blocks #2#6 (Placeholders - To Be Configured)
@@ -318,7 +324,43 @@ This architecture should be reflected in:
---
## Related Documentation
### Architecture Documents
- **[PHYSICAL_HARDWARE_INVENTORY.md](PHYSICAL_HARDWARE_INVENTORY.md)** ⭐⭐⭐ - Complete physical hardware inventory and specifications
- **[ORCHESTRATION_DEPLOYMENT_GUIDE.md](ORCHESTRATION_DEPLOYMENT_GUIDE.md)** ⭐⭐⭐ - Enterprise deployment orchestration guide
- **[VMID_ALLOCATION_FINAL.md](VMID_ALLOCATION_FINAL.md)** ⭐⭐⭐ - VMID allocation registry
- **[DOMAIN_STRUCTURE.md](DOMAIN_STRUCTURE.md)** ⭐⭐ - Domain structure and DNS assignments
- **[HOSTNAME_MIGRATION_GUIDE.md](HOSTNAME_MIGRATION_GUIDE.md)** ⭐ - Hostname migration procedures
### Configuration Documents
- **[../04-configuration/ER605_ROUTER_CONFIGURATION.md](../04-configuration/ER605_ROUTER_CONFIGURATION.md)** - Router configuration
- **[../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md](../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md)** - Cloudflare Zero Trust setup
- **[../05-network/CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md](../05-network/CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md)** - Cloudflare tunnel routing
### Deployment Documents
- **[../03-deployment/ORCHESTRATION_DEPLOYMENT_GUIDE.md](../03-deployment/ORCHESTRATION_DEPLOYMENT_GUIDE.md)** - Deployment orchestration
- **[../07-ccip/CCIP_DEPLOYMENT_SPEC.md](../07-ccip/CCIP_DEPLOYMENT_SPEC.md)** - CCIP deployment specification
---
**Document Status:** Complete (v2.0)
**Maintained By:** Infrastructure Team
**Review Cycle:** Quarterly
**Next Update:** After public blocks #2-6 are assigned
---
## Change Log
### Version 2.0 (2025-01-20)
- Added network topology Mermaid diagram
- Added VLAN architecture Mermaid diagram
- Added ASCII art network topology
- Enhanced public IP block matrix with status indicators
- Added breadcrumb navigation
- Added status indicators
### Version 1.0 (2024-12-15)
- Initial version
- Basic network architecture documentation

View File

@@ -1,10 +1,12 @@
# Orchestration Deployment Guide - Enterprise-Grade
**Navigation:** [Home](../README.md) > [Architecture](README.md) > Orchestration Deployment Guide
**Sankofa / Phoenix / PanTel · ChainID 138 · Proxmox + Cloudflare Zero Trust + Dual ISP + 6×/28**
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** Buildable Blueprint
**Document Version:** 1.1
**Status:** 🟢 Active Documentation
---
@@ -23,17 +25,20 @@ This guide provides a **buildable blueprint**: network, VLANs, Proxmox cluster,
## Table of Contents
1. [Core Principles](#core-principles)
2. [Physical Topology & Roles](#physical-topology--roles)
3. [ISP & Public IP Plan](#isp--public-ip-plan)
4. [Layer-2 & VLAN Orchestration](#layer-2--vlan-orchestration)
5. [Routing, NAT, and Egress Segmentation](#routing-nat-and-egress-segmentation)
6. [Proxmox Cluster Orchestration](#proxmox-cluster-orchestration)
7. [Cloudflare Zero Trust Orchestration](#cloudflare-zero-trust-orchestration)
8. [VMID Allocation Registry](#vmid-allocation-registry)
9. [CCIP Fleet Deployment Matrix](#ccip-fleet-deployment-matrix)
10. [Deployment Orchestration Workflow](#deployment-orchestration-workflow)
11. [Operational Runbooks](#operational-runbooks)
**Estimated Reading Time:** 45 minutes
**Progress:** Use this TOC to track your reading progress
1. ✅ [Core Principles](#core-principles) - *Foundation concepts*
2. [Physical Topology & Roles](#physical-topology--roles) - *Hardware layout*
3. [ISP & Public IP Plan](#isp--public-ip-plan) - *Public IP allocation*
4. ✅ [Layer-2 & VLAN Orchestration](#layer-2--vlan-orchestration) - *VLAN configuration*
5. [Routing, NAT, and Egress Segmentation](#routing-nat-and-egress-segmentation) - *Network routing*
6. [Proxmox Cluster Orchestration](#proxmox-cluster-orchestration) - *Proxmox setup*
7. ✅ [Cloudflare Zero Trust Orchestration](#cloudflare-zero-trust-orchestration) - *Cloudflare integration*
8. ✅ [VMID Allocation Registry](#vmid-allocation-registry) - *VMID planning*
9. ✅ [CCIP Fleet Deployment Matrix](#ccip-fleet-deployment-matrix) - *CCIP deployment*
10. ✅ [Deployment Orchestration Workflow](#deployment-orchestration-workflow) - *Deployment process*
11. ✅ [Operational Runbooks](#operational-runbooks) - *Operations guide*
---
@@ -52,205 +57,88 @@ This guide provides a **buildable blueprint**: network, VLANs, Proxmox cluster,
## Physical Topology & Roles
### Hardware Role Assignment
> **Reference:** For complete hardware role assignments, physical topology, and detailed specifications, see **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md#1-physical-topology--hardware-roles)**.
#### Edge / Routing
> **Hardware Inventory:** For complete physical hardware inventory including IP addresses, credentials, hostnames, and detailed specifications, see **[PHYSICAL_HARDWARE_INVENTORY.md](PHYSICAL_HARDWARE_INVENTORY.md)** ⭐⭐⭐.
**ER605-A (Primary Edge Router)**
- WAN1: Spectrum primary with Block #1 (76.53.10.32/28)
- WAN2: ISP #2 (failover/alternate policy)
- Role: Active edge router, NAT pools, routing
**ER605-B (Standby Edge Router / Alternate WAN policy)**
- Role: Standby router OR dedicated to WAN2 policies/testing
- Note: ER605 does not support full stateful HA. This is **active/standby operational redundancy**, not automatic session-preserving HA.
#### Switching Fabric
- **ES216G-1**: Core / uplinks / trunks
- **ES216G-2**: Compute rack aggregation
- **ES216G-3**: Mgmt + out-of-band / staging
#### Compute
- **ML110 Gen9**: "Bootstrap & Management" node
- IP: 192.168.11.10
- Role: Proxmox mgmt services, Omada controller, Git, monitoring seed
- **4× Dell R630**: Proxmox compute cluster nodes
- Resources: 512GB RAM each, 2×600GB boot, 6×250GB SSD
- Role: Production workloads, CCIP fleet, sovereign tenants, services
**Summary:**
- **2× ER605** (edge + HA/failover design)
- **3× ES216G switches** (core, compute, mgmt)
- **1× ML110 Gen9** (management / seed / bootstrap) - IP: 192.168.11.10
- **4× Dell R630** (compute cluster; 512GB RAM each; 2×600GB boot; 6×250GB SSD)
---
## ISP & Public IP Plan (6× /28)
## ISP & Public IP Plan
### Public Block #1 (Known - Spectrum)
> **Reference:** For complete public IP block plan, usage policy, and NAT pool assignments, see **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md#2-isp--public-ip-plan-6--28)**.
| Property | Value |
|----------|-------|
| **Network** | `76.53.10.32/28` |
| **Gateway** | `76.53.10.33` |
| **Usable Range** | `76.53.10.3376.53.10.46` |
| **Broadcast** | `76.53.10.47` |
| **ER605 WAN1 IP** | `76.53.10.34` (router interface) |
### Public Blocks #2#6 (Placeholders - To Be Configured)
| Block | Network | Gateway | Usable Range | Broadcast | Designated Use |
|-------|--------|---------|--------------|-----------|----------------|
| **#2** | `<PUBLIC_BLOCK_2>/28` | `<GW2>` | `<USABLE2>` | `<BCAST2>` | CCIP Commit egress NAT pool |
| **#3** | `<PUBLIC_BLOCK_3>/28` | `<GW3>` | `<USABLE3>` | `<BCAST3>` | CCIP Execute egress NAT pool |
| **#4** | `<PUBLIC_BLOCK_4>/28` | `<GW4>` | `<USABLE4>` | `<BCAST4>` | RMN egress NAT pool |
| **#5** | `<PUBLIC_BLOCK_5>/28` | `<GW5>` | `<USABLE5>` | `<BCAST5>` | Sankofa/Phoenix/PanTel service egress |
| **#6** | `<PUBLIC_BLOCK_6>/28` | `<GW6>` | `<USABLE6>` | `<BCAST6>` | Sovereign Cloud Band tenant egress |
### Public IP Usage Policy (Role-based)
| Public /28 Block | Designated Use | Why |
|------------------|----------------|-----|
| **#1** (76.53.10.32/28) | Router WAN + break-glass VIPs | Primary connectivity + emergency |
| **#2** | CCIP Commit egress NAT pool | Allowlistable egress for source RPCs |
| **#3** | CCIP Execute egress NAT pool | Allowlistable egress for destination RPCs |
| **#4** | RMN egress NAT pool | Independent security-plane egress |
| **#5** | Sankofa/Phoenix/PanTel service egress | Service-plane separation |
| **#6** | Sovereign Cloud Band tenant egress | Per-sovereign policy control |
**Summary:**
- **Block #1** (76.53.10.32/28): Router WAN + break-glass VIPs ✅ Configured
- **Blocks #2-6**: Placeholders for CCIP Commit, Execute, RMN, Service, and Sovereign tenant egress NAT pools
---
## Layer-2 & VLAN Orchestration
### VLAN Set (Authoritative)
> **Reference:** For complete VLAN orchestration plan, subnet allocations, and switching configuration, see **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md#3-layer-2--vlan-orchestration-plan)**.
> **Migration Note:** Currently on flat LAN 192.168.11.0/24. This plan migrates to VLANs while keeping compatibility.
| VLAN ID | VLAN Name | Purpose | Subnet | Gateway |
|--------:|-----------|---------|--------|---------|
| **11** | MGMT-LAN | Proxmox mgmt, switches mgmt, admin endpoints | 192.168.11.0/24 | 192.168.11.1 |
| 110 | BESU-VAL | Validator-only network (no member access) | 10.110.0.0/24 | 10.110.0.1 |
| 111 | BESU-SEN | Sentry mesh | 10.111.0.0/24 | 10.111.0.1 |
| 112 | BESU-RPC | RPC / gateway tier | 10.112.0.0/24 | 10.112.0.1 |
| 120 | BLOCKSCOUT | Explorer + DB | 10.120.0.0/24 | 10.120.0.1 |
| 121 | CACTI | Interop middleware | 10.121.0.0/24 | 10.121.0.1 |
| 130 | CCIP-OPS | Ops/admin | 10.130.0.0/24 | 10.130.0.1 |
| 132 | CCIP-COMMIT | Commit-role DON | 10.132.0.0/24 | 10.132.0.1 |
| 133 | CCIP-EXEC | Execute-role DON | 10.133.0.0/24 | 10.133.0.1 |
| 134 | CCIP-RMN | Risk management network | 10.134.0.0/24 | 10.134.0.1 |
| 140 | FABRIC | Fabric | 10.140.0.0/24 | 10.140.0.1 |
| 141 | FIREFLY | FireFly | 10.141.0.0/24 | 10.141.0.1 |
| 150 | INDY | Identity | 10.150.0.0/24 | 10.150.0.1 |
| 160 | SANKOFA-SVC | Sankofa/Phoenix/PanTel service layer | 10.160.0.0/22 | 10.160.0.1 |
| 200 | PHX-SOV-SMOM | Sovereign tenant | 10.200.0.0/20 | 10.200.0.1 |
| 201 | PHX-SOV-ICCC | Sovereign tenant | 10.201.0.0/20 | 10.201.0.1 |
| 202 | PHX-SOV-DBIS | Sovereign tenant | 10.202.0.0/20 | 10.202.0.1 |
| 203 | PHX-SOV-AR | Absolute Realms tenant | 10.203.0.0/20 | 10.203.0.1 |
### Switching Configuration (ES216G)
- **ES216G-1**: **Core** (all VLAN trunks to ES216G-2/3 + ER605-A)
- **ES216G-2**: **Compute** (trunks to R630s + ML110)
- **ES216G-3**: **Mgmt/OOB** (mgmt access ports, staging, out-of-band)
**All Proxmox uplinks should be 802.1Q trunk ports.**
**Summary:**
- **19 VLANs** defined with complete subnet plan
- **VLAN 11**: MGMT-LAN (192.168.11.0/24) - Current flat LAN
- **VLANs 110-203**: Service-specific VLANs (10.x.0.0/24 or /20 or /22)
- **Migration path**: From flat LAN to VLANs while maintaining compatibility
---
## Routing, NAT, and Egress Segmentation
### Dual Router Roles
> **Reference:** For complete routing configuration, NAT policies, and egress segmentation details, see **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md#4-routing-nat-and-egress-segmentation-er605)**.
- **ER605-A**: Active edge router (WAN1 = Spectrum primary with Block #1)
- **ER605-B**: Standby router OR dedicated to WAN2 policies/testing (no inbound services)
### NAT Policies (Critical)
#### Inbound NAT
- **Default: none**
- Break-glass only (optional):
- Jumpbox/SSH (single port, IP allowlist, Cloudflare Access preferred)
- Proxmox admin should remain **LAN-only**
#### Outbound NAT (Role-based Pools Using /28 Blocks)
| Private Subnet | Role | Egress NAT Pool | Public Block |
|----------------|------|-----------------|--------------|
| 10.132.0.0/24 | CCIP Commit | **Block #2** `<PUBLIC_BLOCK_2>/28` | #2 |
| 10.133.0.0/24 | CCIP Execute | **Block #3** `<PUBLIC_BLOCK_3>/28` | #3 |
| 10.134.0.0/24 | RMN | **Block #4** `<PUBLIC_BLOCK_4>/28` | #4 |
| 10.160.0.0/22 | Sankofa/Phoenix/PanTel | **Block #5** `<PUBLIC_BLOCK_5>/28` | #5 |
| 10.200.0.0/2010.203.0.0/20 | Sovereign tenants | **Block #6** `<PUBLIC_BLOCK_6>/28` | #6 |
| 192.168.11.0/24 | Mgmt | Block #1 (or none; tightly restricted) | #1 |
This yields **provable separation**, allowlisting, and incident scoping.
**Summary:**
- **Inbound NAT**: Default none (Cloudflare Tunnel primary)
- **Outbound NAT**: Role-based pools using /28 blocks #2-6
- **Egress Segmentation**: CCIP Commit → Block #2, Execute → Block #3, RMN → Block #4, Services → Block #5, Sovereign → Block #6
---
## Proxmox Cluster Orchestration
### Node Layout
> **Reference:** For complete Proxmox cluster orchestration, networking, and storage details, see **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md#5-proxmox-cluster-orchestration)**.
- **ml110 (192.168.11.10)**: mgmt + seed services + initial automation runner
- **r630-01..04**: production compute
### Proxmox Networking (per host)
- **`vmbr0`**: VLAN-aware bridge
- Native VLAN: 11 (MGMT)
- Tagged VLANs: 110,111,112,120,121,130,132,133,134,140,141,150,160,200203
- **Proxmox host IP** remains on **VLAN 11** only.
### Storage Orchestration (R630)
**Hardware:**
- 2×600GB boot (mirror recommended)
- 6×250GB SSD
**Recommended:**
- **Boot drives**: ZFS mirror or hardware RAID1
- **Data SSDs**: ZFS pool (striped mirrors if you can pair, or RAIDZ1/2 depending on risk tolerance)
- **High-write workloads** (logs/metrics/indexers) on dedicated dataset with quotas
**Summary:**
- **Node Layout**: ml110 (mgmt) + r630-01..04 (compute)
- **Networking**: VLAN-aware bridge `vmbr0` with native VLAN 11
- **Storage**: ZFS recommended for R630 data SSDs
---
## Cloudflare Zero Trust Orchestration
### cloudflared Gateway Pattern
> **Reference:** For complete Cloudflare Zero Trust orchestration, cloudflared gateway pattern, and tunnel configuration, see **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md#6-cloudflare-zero-trust-orchestration)**.
Run **2 cloudflared LXCs** for redundancy:
**Summary:**
- **2 cloudflared LXCs** for redundancy (ML110 + R630)
- **Tunnels for**: Blockscout, FireFly, Gitea, internal admin dashboards
- **Proxmox UI**: LAN-only (publish via Cloudflare Access if needed)
- `cloudflared-1` on ML110
- `cloudflared-2` on an R630
Both run tunnels for:
- Blockscout
- FireFly
- Gitea
- Internal admin dashboards (Grafana) behind Cloudflare Access
**Keep Proxmox UI LAN-only**; if needed, publish via Cloudflare Access with strict posture/MFA.
For detailed Cloudflare configuration guides, see:
- **[../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md](../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md)**
- **[../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md](../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md)**
---
## VMID Allocation Registry
### Authoritative Registry Summary
> **Reference:** For complete VMID allocation registry with detailed breakdowns, see **[VMID_ALLOCATION_FINAL.md](VMID_ALLOCATION_FINAL.md)**.
| VMID Range | Domain | Count | Notes |
|-----------:|--------|------:|-------|
| 10004999 | **Besu** | 4,000 | Validators, Sentries, RPC, Archive, Reserved |
| 50005099 | **Blockscout** | 100 | Explorer/Indexing |
| 52005299 | **Cacti** | 100 | Interop middleware |
| 54005599 | **CCIP** | 200 | Ops, Monitoring, Commit, Execute, RMN, Reserved |
| 60006099 | **Fabric** | 100 | Enterprise contracts |
| 62006299 | **FireFly** | 100 | Workflow/orchestration |
| 64007399 | **Indy** | 1,000 | Identity layer |
| 78008999 | **Sankofa/Phoenix/PanTel** | 1,200 | Service + Cloud + Telecom |
| 1000013999 | **Phoenix Sovereign Cloud Band** | 4,000 | SMOM/ICCC/DBIS/AR tenants |
**Summary:**
- **Total Allocated**: 11,000 VMIDs (1000-13999)
- **Besu Network**: 4,000 VMIDs (1000-4999)
- **CCIP**: 200 VMIDs (5400-5599)
- **Sovereign Cloud Band**: 4,000 VMIDs (10000-13999)
**Total Allocated**: 11,000 VMIDs (1000-13999)
See **[VMID_ALLOCATION_FINAL.md](VMID_ALLOCATION_FINAL.md)** for complete details.
See also **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md#7-complete-vmid-and-network-allocation-table)** for VMID-to-VLAN mapping.
---
@@ -295,6 +183,33 @@ See **[CCIP_DEPLOYMENT_SPEC.md](CCIP_DEPLOYMENT_SPEC.md)** for complete specific
## Deployment Orchestration Workflow
### Deployment Workflow Diagram
```mermaid
flowchart TD
Start[Start Deployment] --> Phase0[Phase 0: Validate Foundation]
Phase0 --> Check1{Foundation Valid?}
Check1 -->|No| Fix1[Fix Issues]
Fix1 --> Phase0
Check1 -->|Yes| Phase1[Phase 1: Enable VLANs]
Phase1 --> Verify1{VLANs Working?}
Verify1 -->|No| FixVLAN[Fix VLAN Config]
FixVLAN --> Phase1
Verify1 -->|Yes| Phase2[Phase 2: Deploy Observability]
Phase2 --> Verify2{Monitoring Active?}
Verify2 -->|No| FixMonitor[Fix Monitoring]
FixMonitor --> Phase2
Verify2 -->|Yes| Phase3[Phase 3: Deploy CCIP Fleet]
Phase3 --> Verify3{CCIP Nodes Running?}
Verify3 -->|No| FixCCIP[Fix CCIP Config]
FixCCIP --> Phase3
Verify3 -->|Yes| Phase4[Phase 4: Deploy Sovereign Tenants]
Phase4 --> Verify4{Tenants Operational?}
Verify4 -->|No| FixTenants[Fix Tenant Config]
FixTenants --> Phase4
Verify4 -->|Yes| Complete[Deployment Complete]
```
### Phase 0 — Validate Foundation
1. ✅ Confirm ER605-A WAN1 static: **76.53.10.34/28**, GW **76.53.10.33**
@@ -336,9 +251,9 @@ See **[CCIP_DEPLOYMENT_SPEC.md](CCIP_DEPLOYMENT_SPEC.md)** for complete specific
### Network Operations
- **[ER605_ROUTER_CONFIGURATION.md](ER605_ROUTER_CONFIGURATION.md)** - Router configuration guide
- **[BESU_ALLOWLIST_RUNBOOK.md](BESU_ALLOWLIST_RUNBOOK.md)** - Besu allowlist management
- **[CLOUDFLARE_ZERO_TRUST_GUIDE.md](CLOUDFLARE_ZERO_TRUST_GUIDE.md)** - Cloudflare Zero Trust setup
- **[../04-configuration/ER605_ROUTER_CONFIGURATION.md](../04-configuration/ER605_ROUTER_CONFIGURATION.md)** - Router configuration guide
- **[../06-besu/BESU_ALLOWLIST_RUNBOOK.md](../06-besu/BESU_ALLOWLIST_RUNBOOK.md)** - Besu allowlist management
- **[../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md](../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md)** - Cloudflare Zero Trust setup
### Deployment Operations
@@ -348,8 +263,8 @@ See **[CCIP_DEPLOYMENT_SPEC.md](CCIP_DEPLOYMENT_SPEC.md)** for complete specific
### Troubleshooting
- **[TROUBLESHOOTING_FAQ.md](TROUBLESHOOTING_FAQ.md)** - Common issues and solutions
- **[QBFT_TROUBLESHOOTING.md](QBFT_TROUBLESHOOTING.md)** - QBFT consensus troubleshooting
- **[../09-troubleshooting/TROUBLESHOOTING_FAQ.md](../09-troubleshooting/TROUBLESHOOTING_FAQ.md)** - Common issues and solutions
- **[../09-troubleshooting/QBFT_TROUBLESHOOTING.md](../09-troubleshooting/QBFT_TROUBLESHOOTING.md)** - QBFT consensus troubleshooting
---
@@ -394,34 +309,52 @@ Then we can produce:
## Related Documentation
### Prerequisites
- **[PREREQUISITES.md](PREREQUISITES.md)** - System requirements and prerequisites
- **[DEPLOYMENT_READINESS.md](DEPLOYMENT_READINESS.md)** - Pre-deployment validation checklist
- **[../01-getting-started/PREREQUISITES.md](../01-getting-started/PREREQUISITES.md)** - System requirements and prerequisites
- **[../03-deployment/DEPLOYMENT_READINESS.md](../03-deployment/DEPLOYMENT_READINESS.md)** - Pre-deployment validation checklist
### Architecture
- **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md)** - Complete network architecture
- **[VMID_ALLOCATION_FINAL.md](VMID_ALLOCATION_FINAL.md)** - VMID allocation registry
- **[CCIP_DEPLOYMENT_SPEC.md](CCIP_DEPLOYMENT_SPEC.md)** - CCIP deployment specification
- **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md)** ⭐⭐⭐ - Complete network architecture (authoritative reference)
- **[PHYSICAL_HARDWARE_INVENTORY.md](PHYSICAL_HARDWARE_INVENTORY.md)** ⭐⭐⭐ - Physical hardware inventory and specifications
- **[VMID_ALLOCATION_FINAL.md](VMID_ALLOCATION_FINAL.md)** ⭐⭐⭐ - VMID allocation registry
- **[DOMAIN_STRUCTURE.md](DOMAIN_STRUCTURE.md)** ⭐⭐ - Domain structure and DNS assignments
- **[CCIP_DEPLOYMENT_SPEC.md](../07-ccip/CCIP_DEPLOYMENT_SPEC.md)** - CCIP deployment specification
### Configuration
- **[ER605_ROUTER_CONFIGURATION.md](ER605_ROUTER_CONFIGURATION.md)** - Router configuration
- **[CLOUDFLARE_ZERO_TRUST_GUIDE.md](CLOUDFLARE_ZERO_TRUST_GUIDE.md)** - Cloudflare Zero Trust setup
- **[../04-configuration/ER605_ROUTER_CONFIGURATION.md](../04-configuration/ER605_ROUTER_CONFIGURATION.md)** - Router configuration
- **[../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md](../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md)** - Cloudflare Zero Trust setup
### Operations
- **[OPERATIONAL_RUNBOOKS.md](OPERATIONAL_RUNBOOKS.md)** - Operational procedures
- **[DEPLOYMENT_STATUS_CONSOLIDATED.md](DEPLOYMENT_STATUS_CONSOLIDATED.md)** - Deployment status
- **[TROUBLESHOOTING_FAQ.md](TROUBLESHOOTING_FAQ.md)** - Troubleshooting guide
- **[../03-deployment/OPERATIONAL_RUNBOOKS.md](../03-deployment/OPERATIONAL_RUNBOOKS.md)** - Operational procedures
- **[../03-deployment/DEPLOYMENT_STATUS_CONSOLIDATED.md](../03-deployment/DEPLOYMENT_STATUS_CONSOLIDATED.md)** - Deployment status
- **[../09-troubleshooting/TROUBLESHOOTING_FAQ.md](../09-troubleshooting/TROUBLESHOOTING_FAQ.md)** - Troubleshooting guide
### Best Practices
- **[RECOMMENDATIONS_AND_SUGGESTIONS.md](RECOMMENDATIONS_AND_SUGGESTIONS.md)** - Comprehensive recommendations
- **[IMPLEMENTATION_CHECKLIST.md](IMPLEMENTATION_CHECKLIST.md)** - Implementation checklist
- **[../10-best-practices/RECOMMENDATIONS_AND_SUGGESTIONS.md](../10-best-practices/RECOMMENDATIONS_AND_SUGGESTIONS.md)** - Comprehensive recommendations
- **[../10-best-practices/IMPLEMENTATION_CHECKLIST.md](../10-best-practices/IMPLEMENTATION_CHECKLIST.md)** - Implementation checklist
### Reference
- **[MASTER_INDEX.md](MASTER_INDEX.md)** - Complete documentation index
---
**Document Status:** Complete (v1.0)
**Document Status:** Complete (v1.1)
**Maintained By:** Infrastructure Team
**Review Cycle:** Monthly
**Last Updated:** 2025-01-20
---
## Change Log
### Version 1.1 (2025-01-20)
- Removed duplicate network architecture content
- Added references to NETWORK_ARCHITECTURE.md
- Added deployment workflow Mermaid diagram
- Added ASCII art process flow
- Added breadcrumb navigation
- Added status indicators
### Version 1.0 (2024-12-15)
- Initial version
- Complete deployment orchestration guide

View File

@@ -0,0 +1,250 @@
# Proxmox Cluster Architecture
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** Active Documentation
---
## Overview
This document describes the Proxmox cluster architecture, including node configuration, storage setup, network bridges, and VM/container distribution.
---
## Cluster Architecture Diagram
```mermaid
graph TB
Cluster[Proxmox Cluster<br/>Name: h]
ML110[ML110 Management Node<br/>192.168.11.10<br/>6 cores, 125GB RAM]
R6301[R630-01<br/>192.168.11.11<br/>32 cores, 503GB RAM]
R6302[R630-02<br/>192.168.11.12<br/>32 cores, 503GB RAM]
R6303[R630-03<br/>192.168.11.13<br/>32 cores, 512GB RAM]
R6304[R630-04<br/>192.168.11.14<br/>32 cores, 512GB RAM]
Cluster --> ML110
Cluster --> R6301
Cluster --> R6302
Cluster --> R6303
Cluster --> R6304
ML110 --> Storage1[local: 94GB<br/>local-lvm: 813GB]
R6301 --> Storage2[local: 536GB<br/>local-lvm: Available]
R6302 --> Storage3[local: Available<br/>local-lvm: Available]
R6303 --> Storage4[Storage: Available]
R6304 --> Storage5[Storage: Available]
ML110 --> Bridge1[vmbr0<br/>VLAN-aware]
R6301 --> Bridge2[vmbr0<br/>VLAN-aware]
R6302 --> Bridge3[vmbr0<br/>VLAN-aware]
R6303 --> Bridge4[vmbr0<br/>VLAN-aware]
R6304 --> Bridge5[vmbr0<br/>VLAN-aware]
```
---
## Cluster Nodes
### Node Summary
| Hostname | IP Address | CPU | RAM | Storage | VMs/Containers | Status |
|----------|------------|-----|-----|---------|----------------|--------|
| ml110 | 192.168.11.10 | 6 cores @ 1.60GHz | 125GB | local (94GB), local-lvm (813GB) | 34 | ✅ Active |
| r630-01 | 192.168.11.11 | 32 cores @ 2.40GHz | 503GB | local (536GB), local-lvm (available) | 0 | ✅ Active |
| r630-02 | 192.168.11.12 | 32 cores @ 2.40GHz | 503GB | local (available), local-lvm (available) | 0 | ✅ Active |
| r630-03 | 192.168.11.13 | 32 cores | 512GB | Available | 0 | ✅ Active |
| r630-04 | 192.168.11.14 | 32 cores | 512GB | Available | 0 | ✅ Active |
---
## Storage Configuration
### Storage Types
**local (Directory Storage):**
- Type: Directory-based storage
- Used for: ISO images, container templates, backups
- Location: `/var/lib/vz`
**local-lvm (LVM Thin Storage):**
- Type: LVM thin provisioning
- Used for: VM/container disk images
- Benefits: Thin provisioning, snapshots, efficient space usage
### Storage by Node
**ml110:**
- `local`: 94GB total, 7.4GB used (7.87%)
- `local-lvm`: 813GB total, 214GB used (26.29%)
- Status: ✅ Active and operational
**r630-01:**
- `local`: 536GB total, 0% used
- `local-lvm`: Available (needs activation)
- Status: ⏳ Storage available, ready for use
**r630-02:**
- `local`: Available
- `local-lvm`: Available (needs activation)
- Status: ⏳ Storage available, ready for use
**r630-03/r630-04:**
- Storage: Available
- Status: ⏳ Ready for configuration
---
## Network Configuration
### Network Bridge (vmbr0)
**All nodes use VLAN-aware bridge:**
```bash
# Bridge configuration (all nodes)
auto vmbr0
iface vmbr0 inet static
address 192.168.11.<HOST_IP>/24
gateway 192.168.11.1
bridge-ports <PHYSICAL_INTERFACE>
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 11 110 111 112 120 121 130 132 133 134 140 141 150 160 200 201 202 203
```
**Bridge Features:**
- **VLAN-aware:** Supports multiple VLANs on single bridge
- **Native VLAN:** 11 (MGMT-LAN)
- **Tagged VLANs:** All service VLANs (110-203)
- **802.1Q Trunking:** Enabled for VLAN support
---
## VM/Container Distribution
### Current Distribution
**ml110 (192.168.11.10):**
- **Total:** 34 containers/VMs
- **Services:** All current services running here
- **Breakdown:**
- Besu validators: 5 (VMIDs 1000-1004)
- Besu sentries: 4 (VMIDs 1500-1503)
- Besu RPC: 3+ (VMIDs 2500-2502+)
- Blockscout: 1 (VMID 5000)
- DBIS services: Multiple
- Other services: Various
**r630-01, r630-02, r630-03, r630-04:**
- **Total:** 0 containers/VMs
- **Status:** Ready for VM migration/deployment
---
## High Availability
### Current Setup
- **Cluster Name:** "h"
- **HA Mode:** Active/Standby (manual)
- **Quorum:** 3+ nodes required for quorum
- **Storage:** Local storage (not shared)
### HA Considerations
**Current Limitations:**
- No shared storage (each node has local storage)
- Manual VM migration required
- No automatic failover
**Future Enhancements:**
- Consider shared storage (NFS, Ceph, etc.) for true HA
- Implement automatic VM migration
- Configure HA groups for critical services
---
## Resource Allocation
### CPU Resources
| Node | CPU Cores | CPU Usage | Available |
|------|-----------|-----------|-----------|
| ml110 | 6 @ 1.60GHz | High | Limited |
| r630-01 | 32 @ 2.40GHz | Low | Excellent |
| r630-02 | 32 @ 2.40GHz | Low | Excellent |
| r630-03 | 32 cores | Low | Excellent |
| r630-04 | 32 cores | Low | Excellent |
### Memory Resources
| Node | Total RAM | Used | Available | Usage % |
|------|-----------|------|-----------|---------|
| ml110 | 125GB | 94GB | 31GB | 75% ⚠️ |
| r630-01 | 503GB | ~5GB | ~498GB | 1% ✅ |
| r630-02 | 503GB | ~5GB | ~498GB | 1% ✅ |
| r630-03 | 512GB | Low | High | Low ✅ |
| r630-04 | 512GB | Low | High | Low ✅ |
---
## Storage Recommendations
### For R630 Nodes
**Boot Drives (2×600GB):**
- **Recommended:** ZFS mirror or hardware RAID1
- **Purpose:** Proxmox OS and boot files
- **Benefits:** Redundancy, data integrity
**Data SSDs (6×250GB):**
- **Option 1:** ZFS striped mirrors (3 pairs)
- Capacity: ~750GB usable
- Performance: High
- Redundancy: Good
- **Option 2:** ZFS RAIDZ1 (5 drives + 1 parity)
- Capacity: ~1.25TB usable
- Performance: Good
- Redundancy: Single drive failure tolerance
- **Option 3:** ZFS RAIDZ2 (4 drives + 2 parity)
- Capacity: ~1TB usable
- Performance: Good
- Redundancy: Dual drive failure tolerance
---
## Network Recommendations
### VLAN Configuration
**All Proxmox hosts should:**
- Use VLAN-aware bridge (vmbr0)
- Support all 19 VLANs
- Maintain native VLAN 11 for management
- Enable 802.1Q trunking on physical interfaces
### Network Performance
- **Link Speed:** Ensure 1Gbps or higher for trunk ports
- **Jumbo Frames:** Consider enabling if supported
- **Bonding:** Consider link aggregation for redundancy
---
## Related Documentation
- **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md)** ⭐⭐⭐ - Network architecture with VLAN plan
- **[PHYSICAL_HARDWARE_INVENTORY.md](PHYSICAL_HARDWARE_INVENTORY.md)** ⭐⭐⭐ - Physical hardware inventory
- **[PROXMOX_COMPREHENSIVE_REVIEW.md](PROXMOX_COMPREHENSIVE_REVIEW.md)** ⭐⭐ - Comprehensive Proxmox review
- **[ORCHESTRATION_DEPLOYMENT_GUIDE.md](ORCHESTRATION_DEPLOYMENT_GUIDE.md)** ⭐⭐⭐ - Deployment orchestration
---
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Review Cycle:** Quarterly

View File

@@ -0,0 +1,483 @@
# Proxmox VE Comprehensive Configuration Review
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** Active Documentation
---
## Executive Summary
### ✅ Completed Tasks
- [x] Hostname migration (pve → r630-01, pve2 → r630-02)
- [x] IP address audit (no conflicts found)
- [x] Proxmox services verified (all operational)
- [x] Storage configuration reviewed
### ⚠️ Issues Identified
- r630-01 and r630-02 have LVM thin storage **disabled**
- All VMs/containers currently on ml110 only
- Storage not optimized for performance on r630-01/r630-02
---
## Hostname Migration - COMPLETE ✅
### Status
- **r630-01** (192.168.11.11): ✅ Hostname changed from `pve` to `r630-01`
- **r630-02** (192.168.11.12): ✅ Hostname changed from `pve2` to `r630-02`
### Verification
```bash
ssh root@192.168.11.11 "hostname" # Returns: r630-01 ✅
ssh root@192.168.11.12 "hostname" # Returns: r630-02 ✅
```
### Notes
- Both hosts are in a cluster (cluster name: "h")
- Cluster configuration may need update to reflect new hostnames
- /etc/hosts updated on both hosts for proper resolution
---
## IP Address Audit - COMPLETE ✅
### Results
- **Total VMs/Containers:** 34 with static IPs
- **IP Conflicts:** 0 ✅
- **Invalid IPs:** 0 ✅
- **DHCP IPs:** 2 (VMIDs 3500, 3501)
### All VMs Currently On
- **ml110** (192.168.11.10): All 34 VMs/containers
- **r630-01** (192.168.11.11): 0 VMs/containers
- **r630-02** (192.168.11.12): 0 VMs/containers
### IP Allocation Summary
| IP Range | Count | Purpose |
|----------|-------|---------|
| 192.168.11.57 | 1 | Firefly (stopped) |
| 192.168.11.60-63 | 4 | ML nodes |
| 192.168.11.64 | 1 | Indy |
| 192.168.11.80 | 1 | Cacti |
| 192.168.11.100-104 | 5 | Besu Validators |
| 192.168.11.105-106 | 2 | DBIS PostgreSQL |
| 192.168.11.112 | 1 | Fabric |
| 192.168.11.120 | 1 | DBIS Redis |
| 192.168.11.130 | 1 | DBIS Frontend |
| 192.168.11.150-154 | 5 | Besu Sentries |
| 192.168.11.155-156 | 2 | DBIS API |
| 192.168.11.201-204 | 4 | Named RPC |
| 192.168.11.240-242 | 3 | ThirdWeb RPC |
| 192.168.11.250-254 | 5 | Public RPC |
---
## Proxmox Host Configuration Review
### ml110 (192.168.11.10)
| Property | Value | Status |
|----------|-------|--------|
| **Hostname** | ml110 | ✅ Correct |
| **Proxmox Version** | 9.1.0 (kernel 6.17.4-1-pve) | ✅ Current |
| **CPU** | Intel Xeon E5-2603 v3 @ 1.60GHz (6 cores) | ⚠️ Older, slower |
| **Memory** | 125GB total, 94GB used, 31GB available | ⚠️ High usage |
| **Storage - local** | 94GB total, 7.4GB used (7.87%) | ✅ Good |
| **Storage - local-lvm** | 813GB total, 214GB used (26.29%) | ✅ Active |
| **VMs/Containers** | 34 total | ✅ All here |
**Storage Details:**
- `local`: Directory storage, active, 94GB available
- `local-lvm`: LVM thin, active, 600GB available
- `thin1-thin6`: Configured but disabled (not in use)
**Recommendations:**
- ⚠️ **CPU is older/slower** - Consider workload distribution
- ⚠️ **Memory usage high (75%)** - Monitor closely
-**Storage well configured** - LVM thin active and working
### r630-01 (192.168.11.11) - Previously "pve"
| Property | Value | Status |
|----------|-------|--------|
| **Hostname** | r630-01 | ✅ Migrated |
| **Proxmox Version** | 9.1.0 (kernel 6.17.4-1-pve) | ✅ Current |
| **CPU** | Intel Xeon E5-2630 v3 @ 2.40GHz (32 cores) | ✅ Good |
| **Memory** | 503GB total, 6.4GB used, 497GB available | ✅ Excellent |
| **Storage - local** | 536GB total, 0.1GB used (0.00%) | ✅ Available |
| **Storage - local-lvm** | **DISABLED** | ⚠️ **Issue** |
| **Storage - thin1-thin6** | **DISABLED** | ⚠️ **Issue** |
| **VMs/Containers** | 0 | ⏳ Ready for deployment |
**Storage Details:**
- **Volume Group:** `pve` exists with 2 physical volumes
- **Thin Pools:** `data` (200GB) and `thin1` (208GB) exist
- **Disks:** 4 disks (sda, sdb: 558GB each; sdc, sdd: 232GB each)
- **LVM Setup:** Properly configured
- **Storage Config Issue:** Storage configured but node references point to "pve" (old hostname) or "pve2"
**Issues:**
- ⚠️ **Storage configured but node references outdated** - Points to "pve" instead of "r630-01"
- ⚠️ **Storage may show as disabled** - Due to hostname mismatch in config
- ⚠️ **Need to update storage.cfg** - Update node references to r630-01
**Recommendations:**
- 🔴 **CRITICAL:** Enable local-lvm storage to use existing LVM thin pools
- 🔴 **CRITICAL:** Activate thin1 storage for better performance
-**Ready for VMs** - Excellent resources available
### r630-02 (192.168.11.12) - Previously "pve2"
| Property | Value | Status |
|----------|-------|--------|
| **Hostname** | r630-02 | ✅ Migrated |
| **Proxmox Version** | 9.1.0 (kernel 6.17.4-1-pve) | ✅ Current |
| **CPU** | Intel Xeon E5-2660 v4 @ 2.00GHz (56 cores) | ✅ Excellent |
| **Memory** | 251GB total, 4.4GB used, 247GB available | ✅ Excellent |
| **Storage - local** | 220GB total, 0.1GB used (0.06%) | ✅ Available |
| **Storage - local-lvm** | **DISABLED** | ⚠️ **Issue** |
| **Storage - thin1-thin6** | **DISABLED** | ⚠️ **Issue** |
| **VMs/Containers** | 0 | ⏳ Ready for deployment |
**Storage Details:**
- Need to check LVM configuration (command timed out)
- Storage shows as disabled in Proxmox
**Issues:**
- ⚠️ **Storage configured but node references outdated** - Points to "pve2" instead of "r630-02"
- ⚠️ **VMs already exist on storage** - Need to verify they're accessible
- ⚠️ **Need to update storage.cfg** - Update node references to r630-02
**Recommendations:**
- 🔴 **CRITICAL:** Check and configure LVM storage
- 🔴 **CRITICAL:** Enable local-lvm or thin storage
-**Ready for VMs** - Excellent resources available
---
## Storage Configuration Analysis
### Current Storage Status
| Host | Storage Type | Status | Size | Usage | Recommendation |
|------|--------------|--------|------|-------|----------------|
| **ml110** | local | ✅ Active | 94GB | 7.87% | ✅ Good |
| **ml110** | local-lvm | ✅ Active | 813GB | 26.29% | ✅ Good |
| **r630-01** | local | ✅ Active | 536GB | 0.00% | ✅ Ready |
| **r630-01** | local-lvm | ❌ Disabled | 0GB | N/A | 🔴 **Enable** |
| **r630-01** | thin1 | ❌ Disabled | 0GB | N/A | 🔴 **Enable** |
| **r630-02** | local | ✅ Active | 220GB | 0.06% | ✅ Ready |
| **r630-02** | local-lvm | ❌ Disabled | 0GB | N/A | 🔴 **Enable** |
| **r630-02** | thin1-thin6 | ❌ Disabled | 0GB | N/A | 🔴 **Enable** |
### Storage Issues
#### r630-01 Storage Issue
**Problem:** LVM thin pools exist (`data` 200GB, `thin1` 208GB) but Proxmox storage is disabled
**Root Cause:** Storage configured in Proxmox but not activated/enabled
**Solution:**
```bash
# Update storage.cfg node references on r630-01
ssh root@192.168.11.11
# Update node references from "pve" to "r630-01"
sed -i 's/nodes pve$/nodes r630-01/' /etc/pve/storage.cfg
sed -i 's/nodes pve /nodes r630-01 /' /etc/pve/storage.cfg
# Enable storage
pvesm set local-lvm --disable 0 2>/dev/null || true
pvesm set thin1 --disable 0 2>/dev/null || true
```
#### r630-02 Storage Issue
**Problem:** Storage disabled, LVM configuration unknown
**Solution:**
```bash
# Update storage.cfg node references on r630-02
ssh root@192.168.11.12
# Update node references from "pve2" to "r630-02"
sed -i 's/nodes pve2$/nodes r630-02/' /etc/pve/storage.cfg
sed -i 's/nodes pve2 /nodes r630-02 /' /etc/pve/storage.cfg
# Enable all thin storage pools
for storage in thin1 thin2 thin3 thin4 thin5 thin6; do
pvesm set "$storage" --disable 0 2>/dev/null || true
done
```
---
## Critical Recommendations
### 1. Enable LVM Thin Storage on r630-01 and r630-02 🔴 CRITICAL
**Priority:** HIGH
**Impact:** Cannot migrate VMs or create new VMs with optimal storage
**Action Required:**
1. Enable `local-lvm` storage on both hosts
2. Activate `thin1` storage pools if they exist
3. Verify storage is accessible and working
**Script Available:** `scripts/enable-local-lvm-storage.sh` (may need updates)
### 2. Distribute VMs Across Hosts ⚠️ RECOMMENDED
**Current State:** All 34 VMs on ml110 (overloaded)
**Recommendation:**
- Migrate some VMs to r630-01 and r630-02
- Balance workload across all three hosts
- Use r630-01/r630-02 for new deployments
**Benefits:**
- Better resource utilization
- Improved performance (ml110 CPU is slower)
- Better redundancy
### 3. Update Cluster Configuration ⚠️ RECOMMENDED
**Issue:** Hostnames changed but cluster may still reference old names
**Action:**
```bash
# Check cluster configuration
pvecm status
pvecm nodes
# Update if needed (may require cluster reconfiguration)
```
### 4. Storage Performance Optimization ⚠️ RECOMMENDED
**Current:**
- ml110: Using local-lvm (good)
- r630-01: Only local (directory) available (slower)
- r630-02: Only local (directory) available (slower)
**Recommendation:**
- Enable LVM thin storage on r630-01/r630-02 for better performance
- Use thin provisioning for space efficiency
- Monitor storage usage
### 5. Resource Monitoring ⚠️ RECOMMENDED
**ml110:**
- Memory usage: 75% (high) - Monitor closely
- CPU: Older/slower - Consider workload reduction
**r630-01/r630-02:**
- Excellent resources available
- Ready for heavy workloads
---
## Detailed Recommendations by Category
### Storage Recommendations
#### Immediate Actions
1. **Enable local-lvm on r630-01**
- LVM thin pools already exist
- Just need to activate in Proxmox
- Will enable efficient storage for VMs
2. **Configure storage on r630-02**
- Check LVM configuration
- Enable appropriate storage type
- Ensure compatibility with cluster
3. **Verify storage after enabling**
- Test VM creation
- Test storage migration
- Monitor performance
#### Long-term Actions
1. **Implement storage monitoring**
- Set up alerts for storage usage >80%
- Monitor thin pool usage
- Track storage growth trends
2. **Consider shared storage**
- For easier VM migration
- For better redundancy
- NFS or Ceph options
### Network Recommendations
#### Current Status
- All hosts on 192.168.11.0/24 network
- Flat network (no VLANs yet)
- Gateway: 192.168.11.1 (ER605-1)
#### Recommendations
1. **VLAN Migration** (Planned)
- Segment network by service type
- Improve security and isolation
- Better traffic management
2. **Network Monitoring**
- Monitor bandwidth usage
- Track network performance
- Alert on network issues
### Cluster Recommendations
#### Current Status
- Cluster name: "h"
- 3 nodes: ml110, r630-01, r630-02
- Cluster operational
#### Recommendations
1. **Update Cluster Configuration**
- Verify hostname changes reflected in cluster
- Update any references to old hostnames
- Test cluster operations
2. **Cluster Quorum**
- Ensure quorum is maintained
- Monitor cluster health
- Document cluster procedures
### Performance Recommendations
#### ml110
- **CPU:** Older/slower - Consider reducing workload
- **Memory:** High usage - Monitor and optimize
- **Storage:** Well configured - No changes needed
#### r630-01
- **CPU:** Good performance - Ready for workloads
- **Memory:** Excellent - Can handle many VMs
- **Storage:** Needs activation - Critical fix needed
#### r630-02
- **CPU:** Excellent (56 cores) - Best performance
- **Memory:** Excellent - Can handle many VMs
- **Storage:** Needs configuration - Critical fix needed
---
## Action Items
### Critical (Do Before Starting VMs)
1.**Hostname Migration** - COMPLETE
2.**IP Address Audit** - COMPLETE
3. 🔴 **Enable local-lvm storage on r630-01** - PENDING
4. 🔴 **Configure storage on r630-02** - PENDING
5. ⚠️ **Verify cluster configuration** - PENDING
### High Priority
1. ⚠️ **Test VM creation on r630-01/r630-02** - After storage enabled
2. ⚠️ **Update cluster configuration** - Verify hostname changes
3. ⚠️ **Plan VM distribution** - Balance workload across hosts
### Medium Priority
1. ⚠️ **Implement storage monitoring** - Set up alerts
2. ⚠️ **Document storage procedures** - For future reference
3. ⚠️ **Plan VLAN migration** - Network segmentation
---
## Verification Checklist
### Hostname Verification
- [x] r630-01 hostname correct
- [x] r630-02 hostname correct
- [x] /etc/hosts updated on both hosts
- [ ] Cluster configuration updated (if needed)
### IP Address Verification
- [x] No conflicts detected
- [x] No invalid IPs
- [x] All IPs documented
- [x] IP audit script working
### Storage Verification
- [x] ml110 storage working
- [ ] r630-01 local-lvm enabled
- [ ] r630-02 storage configured
- [ ] Storage tested and working
### Service Verification
- [x] All Proxmox services running
- [x] Web interfaces accessible
- [x] Cluster operational
- [ ] Storage accessible
---
## Next Steps
### Immediate (Before Starting VMs)
1. **Enable Storage on r630-01:**
```bash
ssh root@192.168.11.11
# Check current storage config
cat /etc/pve/storage.cfg
# Enable local-lvm
pvesm set local-lvm --disable 0
# Or reconfigure if needed
```
2. **Configure Storage on r630-02:**
```bash
ssh root@192.168.11.12
# Check LVM setup
vgs
lvs
# Configure appropriate storage
```
3. **Verify Storage:**
```bash
# On each host
pvesm status
# Should show local-lvm as active
```
### After Storage is Enabled
1. **Test VM Creation:**
- Create test container on r630-01
- Create test container on r630-02
- Verify storage works correctly
2. **Start VMs:**
- All IPs verified, no conflicts
- Hostnames correct
- Storage ready
---
## Scripts Available
1. **`scripts/check-all-vm-ips.sh`** - ✅ Working - IP audit
2. **`scripts/migrate-hostnames-proxmox.sh`** - ✅ Complete - Hostname migration
3. **`scripts/diagnose-proxmox-hosts.sh`** - ✅ Working - Diagnostics
4. **`scripts/enable-local-lvm-storage.sh`** - ⏳ May need updates for r630-01/r630-02
---
## Related Documentation
### Architecture Documents
- **[PHYSICAL_HARDWARE_INVENTORY.md](PHYSICAL_HARDWARE_INVENTORY.md)** ⭐⭐⭐ - Physical hardware inventory
- **[NETWORK_ARCHITECTURE.md](NETWORK_ARCHITECTURE.md)** ⭐⭐⭐ - Network architecture
- **[ORCHESTRATION_DEPLOYMENT_GUIDE.md](ORCHESTRATION_DEPLOYMENT_GUIDE.md)** ⭐⭐⭐ - Deployment orchestration
### Deployment Documents
- **[../03-deployment/PRE_START_CHECKLIST.md](../03-deployment/PRE_START_CHECKLIST.md)** - Pre-start checklist
- **[../03-deployment/LVM_THIN_PVE_ENABLED.md](../03-deployment/LVM_THIN_PVE_ENABLED.md)** - LVM thin storage setup
- **[../09-troubleshooting/STORAGE_MIGRATION_ISSUE.md](../09-troubleshooting/STORAGE_MIGRATION_ISSUE.md)** - Storage migration troubleshooting
---
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Review Cycle:** Quarterly

View File

@@ -1,6 +1,12 @@
# Final VMID Allocation Plan
**Updated**: Complete sovereign-scale allocation with all domains
**Navigation:** [Home](../README.md) > [Architecture](README.md) > VMID Allocation
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** 🟢 Active Documentation
---
## Complete VMID Allocation Table

View File

@@ -0,0 +1,342 @@
# Backup and Restore Procedures
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** Active Documentation
---
## Overview
This document provides detailed procedures for backing up and restoring Proxmox VMs, containers, and configuration.
---
## Backup Strategy
### Backup Types
1. **VM/Container Backups:**
- Full VM snapshots
- Container backups
- Application data backups
2. **Configuration Backups:**
- Proxmox host configuration
- Network configuration
- Storage configuration
3. **Data Backups:**
- Database backups
- Application data
- Configuration files
---
## Backup Procedures
### Proxmox VM/Container Backups
#### Using Proxmox Backup Server (PBS)
**Setup:**
1. **Install PBS** (if not already installed)
2. **Add PBS to Proxmox:**
- Datacenter → Storage → Add → Proxmox Backup Server
- Enter PBS server details
- Test connection
**Scheduled Backups:**
1. **Create Backup Job:**
- Datacenter → Backup → Add
- Select VMs/containers
- Set schedule (daily, weekly, etc.)
- Choose retention policy
2. **Backup Options:**
- **Mode:** Snapshot (recommended for running VMs)
- **Compression:** ZSTD (recommended)
- **Storage:** Proxmox Backup Server
**Manual Backup:**
```bash
# Backup single VM
vzdump <vmid> --storage <storage-name> --mode snapshot
# Backup multiple VMs
vzdump 100 101 102 --storage <storage-name> --mode snapshot
# Backup all VMs
vzdump --all --storage <storage-name> --mode snapshot
```
#### Using vzdump (Direct)
**Backup to Local Storage:**
```bash
# Backup VM to local storage
vzdump <vmid> --storage local --mode snapshot --compress zstd
# Backup with retention
vzdump <vmid> --storage local --mode snapshot --maxfiles 7
```
**Backup to NFS:**
```bash
# Add NFS storage first
# Datacenter → Storage → Add → NFS
# Backup to NFS
vzdump <vmid> --storage nfs-backup --mode snapshot
```
---
### Configuration Backups
#### Proxmox Host Configuration
**Backup Configuration Files:**
```bash
# Backup Proxmox configuration
tar -czf /backup/proxmox-config-$(date +%Y%m%d).tar.gz \
/etc/pve/ \
/etc/network/interfaces \
/etc/hosts \
/etc/hostname
```
**Restore Configuration:**
```bash
# Extract configuration
tar -xzf /backup/proxmox-config-YYYYMMDD.tar.gz -C /
# Restart services
systemctl restart pve-cluster
systemctl restart pve-daemon
```
#### Network Configuration
**Backup Network Config:**
```bash
# Backup network configuration
cp /etc/network/interfaces /backup/interfaces-$(date +%Y%m%d)
cp /etc/hosts /backup/hosts-$(date +%Y%m%d)
```
**Version Control:**
- Store network configuration in Git
- Track changes over time
- Easy rollback if needed
---
### Application Data Backups
#### Database Backups
**PostgreSQL:**
```bash
# Backup PostgreSQL database
pg_dump -U <user> <database> > /backup/db-$(date +%Y%m%d).sql
# Restore
psql -U <user> <database> < /backup/db-YYYYMMDD.sql
```
**MySQL/MariaDB:**
```bash
# Backup MySQL database
mysqldump -u <user> -p <database> > /backup/db-$(date +%Y%m%d).sql
# Restore
mysql -u <user> -p <database> < /backup/db-YYYYMMDD.sql
```
#### Application Files
```bash
# Backup application directory
tar -czf /backup/app-$(date +%Y%m%d).tar.gz /path/to/application
# Restore
tar -xzf /backup/app-YYYYMMDD.tar.gz -C /
```
---
## Restore Procedures
### Restore VM/Container from Backup
#### From Proxmox Backup Server
**Via Web UI:**
1. **Select VM/Container:**
- Datacenter → Backup → Select backup
- Click "Restore"
2. **Restore Options:**
- Select target storage
- Choose new VMID (or keep original)
- Set network configuration
3. **Start Restore:**
- Click "Restore"
- Monitor progress
**Via Command Line:**
```bash
# Restore from PBS
vzdump restore <backup-id> <vmid> --storage <storage>
# Restore with new VMID
vzdump restore <backup-id> <new-vmid> --storage <storage>
```
#### From vzdump Backup
```bash
# Restore from vzdump file
vzdump restore <backup-file.vma.gz> <vmid> --storage <storage>
```
---
### Restore Configuration
#### Restore Proxmox Configuration
```bash
# Stop Proxmox services
systemctl stop pve-cluster
systemctl stop pve-daemon
# Restore configuration
tar -xzf /backup/proxmox-config-YYYYMMDD.tar.gz -C /
# Start services
systemctl start pve-cluster
systemctl start pve-daemon
```
#### Restore Network Configuration
```bash
# Restore network config
cp /backup/interfaces-YYYYMMDD /etc/network/interfaces
cp /backup/hosts-YYYYMMDD /etc/hosts
# Restart networking
systemctl restart networking
```
---
## Backup Verification
### Verify Backup Integrity
**Check Backup Files:**
```bash
# List backups
vzdump list --storage <storage>
# Verify backup
vzdump verify <backup-id>
```
**Test Restore:**
- Monthly restore test
- Verify VM/container starts
- Test application functionality
- Document results
---
## Backup Retention Policy
### Retention Schedule
- **Daily Backups:** Keep 7 days
- **Weekly Backups:** Keep 4 weeks
- **Monthly Backups:** Keep 12 months
- **Yearly Backups:** Keep 7 years
### Cleanup Old Backups
```bash
# Remove backups older than retention period
vzdump prune --storage <storage> --keep-last 7
```
---
## Backup Monitoring
### Backup Status Monitoring
**Check Backup Jobs:**
- Datacenter → Backup → Jobs
- Review last backup time
- Check for errors
**Automated Monitoring:**
- Set up alerts for failed backups
- Monitor backup storage usage
- Track backup completion times
---
## Best Practices
1. **Test Restores Regularly:**
- Monthly restore tests
- Verify data integrity
- Document results
2. **Multiple Backup Locations:**
- Local backups (fast restore)
- Remote backups (disaster recovery)
- Offsite backups (complete protection)
3. **Document Backup Procedures:**
- Keep procedures up to date
- Document restore procedures
- Maintain backup inventory
4. **Monitor Backup Storage:**
- Check available space regularly
- Clean up old backups
- Plan for storage growth
---
## Related Documentation
- **[DISASTER_RECOVERY.md](DISASTER_RECOVERY.md)** - Disaster recovery procedures
- **[OPERATIONAL_RUNBOOKS.md](OPERATIONAL_RUNBOOKS.md)** - Operational procedures
- **[../../04-configuration/SECRETS_KEYS_CONFIGURATION.md](../../04-configuration/SECRETS_KEYS_CONFIGURATION.md)** - Secrets backup
---
**Last Updated:** 2025-01-20
**Review Cycle:** Monthly

View File

@@ -0,0 +1,229 @@
# ChainID 138 Automation Scripts
**Date:** December 26, 2024
**Status:** ✅ All automation scripts created and ready
---
## Overview
This document describes the automation scripts created for ChainID 138 deployment. These scripts can be run once containers are created to automate the complete configuration process.
---
## Available Scripts
### 1. Main Deployment Script
**File:** `scripts/deploy-all-chain138-containers.sh`
**Purpose:** Master script that orchestrates the complete deployment process.
**What it does:**
1. Configures all Besu nodes (static-nodes.json, permissioned-nodes.json)
2. Verifies configuration
3. Sets up JWT authentication for RPC containers
4. Generates JWT tokens for operators
**Usage:**
```bash
cd /home/intlc/projects/proxmox
./scripts/deploy-all-chain138-containers.sh
```
**Note:** This script will prompt for confirmation before proceeding.
---
### 2. JWT Authentication Setup
**File:** `scripts/setup-jwt-auth-all-rpc-containers.sh`
**Purpose:** Configures JWT authentication for all RPC containers (2503-2508).
**What it does:**
- Installs nginx and dependencies on each container
- Generates JWT secret keys
- Creates JWT validation service
- Configures nginx with JWT authentication
- Sets up SSL certificates
- Starts JWT validation service and nginx
**Usage:**
```bash
./scripts/setup-jwt-auth-all-rpc-containers.sh
```
**Requirements:**
- Containers must be running
- SSH access to Proxmox host
- Root access on Proxmox host
---
### 3. JWT Token Generation
**File:** `scripts/generate-jwt-token-for-container.sh`
**Purpose:** Generates JWT tokens for specific containers and operators.
**Usage:**
```bash
# Generate token for a specific container
./scripts/generate-jwt-token-for-container.sh <VMID> <username> [expiry_days]
# Examples:
./scripts/generate-jwt-token-for-container.sh 2503 ali-full-access 365
./scripts/generate-jwt-token-for-container.sh 2505 luis-rpc-access 365
./scripts/generate-jwt-token-for-container.sh 2507 putu-rpc-access 365
```
**Parameters:**
- `VMID`: Container VMID (2503-2508)
- `username`: Username for the token (e.g., ali-full-access, luis-rpc-access)
- `expiry_days`: Token expiry in days (default: 365)
**Output:**
- JWT token
- Usage example with curl command
---
### 4. Besu Configuration
**File:** `scripts/configure-besu-chain138-nodes.sh`
**Purpose:** Configures all Besu nodes with static-nodes.json and permissioned-nodes.json.
**What it does:**
1. Collects enodes from all Besu nodes
2. Generates static-nodes.json
3. Generates permissioned-nodes.json
4. Deploys configurations to all containers
5. Configures discovery settings
6. Restarts Besu services
**Usage:**
```bash
./scripts/configure-besu-chain138-nodes.sh
```
---
### 5. Configuration Verification
**File:** `scripts/verify-chain138-config.sh`
**Purpose:** Verifies the configuration of all Besu nodes.
**What it checks:**
- File existence (static-nodes.json, permissioned-nodes.json)
- Discovery settings
- Peer connections
- Service status
**Usage:**
```bash
./scripts/verify-chain138-config.sh
```
---
## Deployment Workflow
### Step 1: Create Containers
First, create all required containers (see `docs/MISSING_CONTAINERS_LIST.md`):
- 1504 - besu-sentry-5
- 2503-2508 - All RPC nodes
- 6201 - firefly-2
- Other services as needed
### Step 2: Run Main Deployment Script
Once containers are created and running:
```bash
cd /home/intlc/projects/proxmox
./scripts/deploy-all-chain138-containers.sh
```
This will:
1. Configure all Besu nodes
2. Verify configuration
3. Set up JWT authentication
4. Generate JWT tokens
### Step 3: Test and Verify
After deployment:
```bash
# Verify configuration
./scripts/verify-chain138-config.sh
# Test JWT authentication on each container
for vmid in 2503 2504 2505 2506 2507 2508; do
echo "Testing VMID $vmid:"
curl -k -H "Authorization: Bearer <TOKEN>" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
https://192.168.11.XXX/
done
```
---
## Token Distribution
After generating tokens, distribute them to operators:
### Ali (Full Access)
- VMID 2503 (0x8a identity): Full access token
- VMID 2504 (0x1 identity): Full access token
### Luis (RPC-Only Access)
- VMID 2505 (0x8a identity): RPC-only token
- VMID 2506 (0x1 identity): RPC-only token
### Putu (RPC-Only Access)
- VMID 2507 (0x8a identity): RPC-only token
- VMID 2508 (0x1 identity): RPC-only token
---
## Troubleshooting
### Containers Not Running
If containers are not running, the scripts will skip them with a warning. Re-run the scripts after containers are started.
### JWT Secret Not Found
If JWT secret is not found:
1. Run `setup-jwt-auth-all-rpc-containers.sh` first
2. Check that container is running
3. Verify SSH access to Proxmox host
### Configuration Files Not Found
If configuration files are missing:
1. Run `configure-besu-chain138-nodes.sh` first
2. Check that all Besu containers are running
3. Verify network connectivity
---
## Related Documentation
- [Next Steps](CHAIN138_NEXT_STEPS.md)
- [Missing Containers List](MISSING_CONTAINERS_LIST.md)
- [JWT Authentication Requirements](CHAIN138_JWT_AUTH_REQUIREMENTS.md)
- [Complete Implementation](CHAIN138_COMPLETE_IMPLEMENTATION.md)
---
**Last Updated:** December 26, 2024
**Status:** ✅ Ready for use

View File

@@ -0,0 +1,278 @@
# Change Management Process
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** Active Documentation
---
## Overview
This document defines the change management process for the Proxmox infrastructure, ensuring all changes are properly planned, approved, implemented, and documented.
---
## Change Types
### Standard Changes
**Definition:** Pre-approved, low-risk changes that follow established procedures.
**Examples:**
- Routine maintenance
- Scheduled updates
- Standard VM/container deployments
**Process:**
- No formal approval required
- Document in change log
- Follow standard procedures
### Normal Changes
**Definition:** Changes that require review and approval but are not emergency.
**Examples:**
- Network configuration changes
- Storage modifications
- Security updates
- New service deployments
**Process:**
- Submit change request
- Review and approval
- Schedule implementation
- Document results
### Emergency Changes
**Definition:** Urgent changes required to resolve critical issues.
**Examples:**
- Security patches
- Critical bug fixes
- Service restoration
**Process:**
- Implement immediately
- Document during/after
- Post-implementation review
- Retrospective approval
---
## Change Request Process
### 1. Change Request Submission
**Required Information:**
1. **Change Details:**
- Description of change
- Reason for change
- Expected impact
2. **Technical Details:**
- Systems affected
- Implementation steps
- Rollback plan
3. **Risk Assessment:**
- Risk level (Low/Medium/High)
- Potential impact
- Mitigation strategies
4. **Timeline:**
- Proposed implementation date
- Estimated duration
- Maintenance window (if needed)
### 2. Change Review
**Review Criteria:**
1. **Technical Review:**
- Feasibility
- Impact assessment
- Risk evaluation
2. **Business Review:**
- Business impact
- Resource requirements
- Timeline alignment
3. **Security Review:**
- Security implications
- Compliance requirements
- Risk assessment
### 3. Change Approval
**Approval Levels:**
- **Standard Changes:** No approval required
- **Normal Changes:** Infrastructure lead approval
- **High-Risk Changes:** Management approval
- **Emergency Changes:** Post-implementation approval
### 4. Change Implementation
**Pre-Implementation:**
1. **Preparation:**
- Verify backups
- Prepare rollback plan
- Notify stakeholders
- Schedule maintenance window (if needed)
2. **Implementation:**
- Follow documented procedures
- Document steps taken
- Monitor for issues
3. **Verification:**
- Test functionality
- Verify system health
- Check logs for errors
### 5. Post-Implementation
**Activities:**
1. **Documentation:**
- Update documentation
- Document any issues
- Update change log
2. **Review:**
- Post-implementation review
- Lessons learned
- Process improvements
---
## Change Request Template
```markdown
# Change Request
## Change Information
- **Requestor:** [Name]
- **Date:** [Date]
- **Change Type:** [Standard/Normal/Emergency]
- **Priority:** [Low/Medium/High/Critical]
## Change Description
[Detailed description of the change]
## Reason for Change
[Why is this change needed?]
## Systems Affected
[List of systems, VMs, containers, or services]
## Implementation Plan
[Step-by-step implementation plan]
## Rollback Plan
[How to rollback if issues occur]
## Risk Assessment
- **Risk Level:** [Low/Medium/High]
- **Potential Impact:** [Description]
- **Mitigation:** [How to mitigate risks]
## Testing Plan
[How the change will be tested]
## Timeline
- **Proposed Date:** [Date]
- **Estimated Duration:** [Time]
- **Maintenance Window:** [If applicable]
## Approval
- **Reviewed By:** [Name]
- **Approved By:** [Name]
- **Date:** [Date]
```
---
## Change Log
### Change Log Format
| Date | Change ID | Description | Type | Status | Implemented By |
|------|-----------|-------------|------|--------|----------------|
| 2025-01-20 | CHG-001 | Network VLAN configuration | Normal | Completed | [Name] |
| 2025-01-19 | CHG-002 | Security patch deployment | Emergency | Completed | [Name] |
---
## Best Practices
1. **Plan Ahead:**
- Submit change requests early
- Allow time for review
- Schedule during maintenance windows
2. **Document Everything:**
- Document all changes
- Keep change log updated
- Update procedures
3. **Test First:**
- Test in non-production
- Verify rollback procedures
- Document test results
4. **Communicate:**
- Notify stakeholders
- Provide status updates
- Document issues
5. **Review Regularly:**
- Review change process
- Identify improvements
- Update procedures
---
## Emergency Change Process
### When to Use
- Critical security issues
- Service outages
- Data loss prevention
- Regulatory compliance
### Process
1. **Implement Immediately:**
- Take necessary action
- Document as you go
- Notify stakeholders
2. **Post-Implementation:**
- Complete change request
- Document what was done
- Conduct review
3. **Retrospective:**
- Review emergency change
- Identify improvements
- Update procedures
---
## Related Documentation
- **[OPERATIONAL_RUNBOOKS.md](OPERATIONAL_RUNBOOKS.md)** - Operational procedures
- **[DISASTER_RECOVERY.md](DISASTER_RECOVERY.md)** - Disaster recovery
- **[DEPLOYMENT_READINESS.md](DEPLOYMENT_READINESS.md)** - Deployment procedures
---
**Last Updated:** 2025-01-20
**Review Cycle:** Quarterly

View File

@@ -40,6 +40,39 @@
---
## Deployment Decision Tree
```mermaid
flowchart TD
Start[New Deployment?] --> EnvType{Environment Type?}
EnvType -->|Production| ProdCheck{Production Ready?}
EnvType -->|Staging| StagingDeploy[Staging Deployment]
EnvType -->|Development| DevDeploy[Development Deployment]
ProdCheck -->|No| PrepProd[Prepare Production<br/>Review Checklist<br/>Verify Resources]
ProdCheck -->|Yes| ProdDeploy[Production Deployment]
PrepProd --> ProdDeploy
ProdDeploy --> WhichComponents{Which Components?}
StagingDeploy --> WhichComponents
DevDeploy --> WhichComponents
WhichComponents -->|Full Stack| FullDeploy[Deploy Full Stack<br/>Validators, Sentries, RPC,<br/>Services, Monitoring]
WhichComponents -->|Besu Only| BesuDeploy[Deploy Besu Network<br/>Validators, Sentries, RPC]
WhichComponents -->|CCIP Only| CCIPDeploy[Deploy CCIP Fleet<br/>Commit, Execute, RMN]
WhichComponents -->|Services Only| ServicesDeploy[Deploy Services<br/>Blockscout, Cacti, etc.]
FullDeploy --> ValidateDeploy[Validate Deployment]
BesuDeploy --> ValidateDeploy
CCIPDeploy --> ValidateDeploy
ServicesDeploy --> ValidateDeploy
ValidateDeploy --> DeployComplete[Deployment Complete]
```
---
## 🚀 Deployment Steps
### Step 1: Review Configuration

View File

@@ -0,0 +1,451 @@
# Deployment Runbook
## SolaceScanScout Explorer - Production Deployment Guide
**Last Updated**: $(date)
**Version**: 1.0.0
---
## Table of Contents
1. [Pre-Deployment Checklist](#pre-deployment-checklist)
2. [Environment Setup](#environment-setup)
3. [Database Migration](#database-migration)
4. [Service Deployment](#service-deployment)
5. [Health Checks](#health-checks)
6. [Rollback Procedures](#rollback-procedures)
7. [Post-Deployment Verification](#post-deployment-verification)
8. [Troubleshooting](#troubleshooting)
---
## Pre-Deployment Checklist
### Infrastructure Requirements
- [ ] Kubernetes cluster (AKS) or VM infrastructure ready
- [ ] PostgreSQL 16+ with TimescaleDB extension
- [ ] Redis cluster (for production cache/rate limiting)
- [ ] Elasticsearch/OpenSearch cluster
- [ ] Load balancer configured
- [ ] SSL certificates provisioned
- [ ] DNS records configured
- [ ] Monitoring stack deployed (Prometheus, Grafana)
### Configuration
- [ ] Environment variables configured
- [ ] Secrets stored in Key Vault
- [ ] Database credentials verified
- [ ] Redis connection string verified
- [ ] RPC endpoint URLs verified
- [ ] JWT secret configured (strong random value)
### Code & Artifacts
- [ ] All tests passing
- [ ] Docker images built and tagged
- [ ] Images pushed to container registry
- [ ] Database migrations reviewed
- [ ] Rollback plan documented
---
## Environment Setup
### 1. Set Environment Variables
```bash
# Database
export DB_HOST=postgres.example.com
export DB_PORT=5432
export DB_USER=explorer
export DB_PASSWORD=<from-key-vault>
export DB_NAME=explorer
# Redis (for production)
export REDIS_URL=redis://redis.example.com:6379
# RPC
export RPC_URL=https://rpc.d-bis.org
export WS_URL=wss://rpc.d-bis.org
# Application
export CHAIN_ID=138
export PORT=8080
export JWT_SECRET=<strong-random-secret>
# Optional
export LOG_LEVEL=info
export ENABLE_METRICS=true
```
### 2. Verify Secrets
```bash
# Test database connection
psql -h $DB_HOST -U $DB_USER -d $DB_NAME -c "SELECT 1;"
# Test Redis connection
redis-cli -u $REDIS_URL ping
# Test RPC endpoint
curl -X POST $RPC_URL \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```
---
## Database Migration
### 1. Backup Existing Database
```bash
# Create backup
pg_dump -h $DB_HOST -U $DB_USER -d $DB_NAME > backup_$(date +%Y%m%d_%H%M%S).sql
# Verify backup
ls -lh backup_*.sql
```
### 2. Run Migrations
```bash
cd explorer-monorepo/backend/database/migrations
# Review pending migrations
go run migrate.go --status
# Run migrations
go run migrate.go --up
# Verify migration
go run migrate.go --status
```
### 3. Verify Schema
```bash
psql -h $DB_HOST -U $DB_USER -d $DB_NAME -c "\dt"
psql -h $DB_HOST -U $DB_USER -d $DB_NAME -c "\d blocks"
psql -h $DB_HOST -U $DB_USER -d $DB_NAME -c "\d transactions"
```
---
## Service Deployment
### Option 1: Kubernetes Deployment
#### 1. Deploy API Server
```bash
kubectl apply -f k8s/api-server-deployment.yaml
kubectl apply -f k8s/api-server-service.yaml
kubectl apply -f k8s/api-server-ingress.yaml
# Verify deployment
kubectl get pods -l app=api-server
kubectl logs -f deployment/api-server
```
#### 2. Deploy Indexer
```bash
kubectl apply -f k8s/indexer-deployment.yaml
# Verify deployment
kubectl get pods -l app=indexer
kubectl logs -f deployment/indexer
```
#### 3. Rolling Update
```bash
# Update image
kubectl set image deployment/api-server api-server=registry.example.com/explorer-api:v1.1.0
# Monitor rollout
kubectl rollout status deployment/api-server
# Rollback if needed
kubectl rollout undo deployment/api-server
```
### Option 2: Docker Compose Deployment
```bash
cd explorer-monorepo/deployment
# Start services
docker-compose up -d
# Verify services
docker-compose ps
docker-compose logs -f api-server
```
---
## Health Checks
### 1. API Health Endpoint
```bash
# Check health
curl https://api.d-bis.org/health
# Expected response
{
"status": "ok",
"timestamp": "2024-01-01T00:00:00Z",
"database": "connected"
}
```
### 2. Service Health
```bash
# Kubernetes
kubectl get pods
kubectl describe pod <pod-name>
# Docker
docker ps
docker inspect <container-id>
```
### 3. Database Connectivity
```bash
# From API server
curl https://api.d-bis.org/health | jq .database
# Direct check
psql -h $DB_HOST -U $DB_USER -d $DB_NAME -c "SELECT COUNT(*) FROM blocks;"
```
### 4. Redis Connectivity
```bash
# Test Redis
redis-cli -u $REDIS_URL ping
# Check cache stats
redis-cli -u $REDIS_URL INFO stats
```
---
## Rollback Procedures
### Quick Rollback (Kubernetes)
```bash
# Rollback to previous version
kubectl rollout undo deployment/api-server
kubectl rollout undo deployment/indexer
# Verify rollback
kubectl rollout status deployment/api-server
```
### Database Rollback
```bash
# Restore from backup
psql -h $DB_HOST -U $DB_USER -d $DB_NAME < backup_YYYYMMDD_HHMMSS.sql
# Or rollback migrations
cd explorer-monorepo/backend/database/migrations
go run migrate.go --down 1
```
### Full Rollback
```bash
# 1. Stop new services
kubectl scale deployment/api-server --replicas=0
kubectl scale deployment/indexer --replicas=0
# 2. Restore database
psql -h $DB_HOST -U $DB_USER -d $DB_NAME < backup_YYYYMMDD_HHMMSS.sql
# 3. Start previous version
kubectl set image deployment/api-server api-server=registry.example.com/explorer-api:v1.0.0
kubectl scale deployment/api-server --replicas=3
```
---
## Post-Deployment Verification
### 1. Functional Tests
```bash
# Test Track 1 endpoints (public)
curl https://api.d-bis.org/api/v1/track1/blocks/latest
# Test search
curl https://api.d-bis.org/api/v1/search?q=1000
# Test health
curl https://api.d-bis.org/health
```
### 2. Performance Tests
```bash
# Load test
ab -n 1000 -c 10 https://api.d-bis.org/api/v1/track1/blocks/latest
# Check response times
curl -w "@curl-format.txt" -o /dev/null -s https://api.d-bis.org/api/v1/track1/blocks/latest
```
### 3. Monitoring
- [ ] Check Grafana dashboards
- [ ] Verify Prometheus metrics
- [ ] Check error rates
- [ ] Monitor response times
- [ ] Check database connection pool
- [ ] Verify Redis cache hit rate
---
## Troubleshooting
### Common Issues
#### 1. Database Connection Errors
**Symptoms**: 500 errors, "database connection failed"
**Resolution**:
```bash
# Check database status
psql -h $DB_HOST -U $DB_USER -d $DB_NAME -c "SELECT 1;"
# Check connection pool
# Review database/migrations for connection pool settings
# Restart service
kubectl rollout restart deployment/api-server
```
#### 2. Redis Connection Errors
**Symptoms**: Cache misses, rate limiting not working
**Resolution**:
```bash
# Test Redis connection
redis-cli -u $REDIS_URL ping
# Check Redis logs
kubectl logs -l app=redis
# Fallback to in-memory (temporary)
# Remove REDIS_URL from environment
```
#### 3. High Memory Usage
**Symptoms**: OOM kills, slow responses
**Resolution**:
```bash
# Check memory usage
kubectl top pods
# Increase memory limits
kubectl set resources deployment/api-server --limits=memory=2Gi
# Review cache TTL settings
```
#### 4. Slow Response Times
**Symptoms**: High latency, timeout errors
**Resolution**:
```bash
# Check database query performance
psql -h $DB_HOST -U $DB_USER -d $DB_NAME -c "EXPLAIN ANALYZE SELECT * FROM blocks LIMIT 10;"
# Check indexer lag
curl https://api.d-bis.org/api/v1/track2/stats
# Review connection pool settings
```
---
## Emergency Procedures
### Service Outage
1. **Immediate Actions**:
- Check service status: `kubectl get pods`
- Check logs: `kubectl logs -f deployment/api-server`
- Check database: `psql -h $DB_HOST -U $DB_USER -d $DB_NAME -c "SELECT 1;"`
- Check Redis: `redis-cli -u $REDIS_URL ping`
2. **Quick Recovery**:
- Restart services: `kubectl rollout restart deployment/api-server`
- Scale up: `kubectl scale deployment/api-server --replicas=5`
- Rollback if needed: `kubectl rollout undo deployment/api-server`
3. **Communication**:
- Update status page
- Notify team via Slack/email
- Document incident
### Data Corruption
1. **Immediate Actions**:
- Stop writes: `kubectl scale deployment/api-server --replicas=0`
- Backup current state: `pg_dump -h $DB_HOST -U $DB_USER -d $DB_NAME > emergency_backup.sql`
2. **Recovery**:
- Restore from last known good backup
- Verify data integrity
- Resume services
---
## Maintenance Windows
### Scheduled Maintenance
1. **Pre-Maintenance**:
- Notify users 24 hours in advance
- Create maintenance mode flag
- Prepare rollback plan
2. **During Maintenance**:
- Enable maintenance mode
- Perform updates
- Run health checks
3. **Post-Maintenance**:
- Disable maintenance mode
- Verify all services
- Monitor for issues
---
## Contact Information
- **On-Call Engineer**: Check PagerDuty
- **Slack Channel**: #explorer-deployments
- **Emergency**: [Emergency Contact]
---
**Document Version**: 1.0.0
**Last Reviewed**: $(date)
**Next Review**: $(date -d "+3 months")

View File

@@ -0,0 +1,260 @@
# Disaster Recovery Procedures
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** Active Documentation
---
## Overview
This document outlines disaster recovery procedures for the Proxmox infrastructure, including recovery from hardware failures, data loss, network outages, and security incidents.
---
## Recovery Scenarios
### 1. Complete Host Failure
**Scenario:** A Proxmox host (R630 or ML110) fails completely and cannot be recovered.
**Recovery Steps:**
1. **Assess Impact:**
```bash
# Check which VMs/containers were running on failed host
pvecm status
pvecm nodes
```
2. **Recover from Backup:**
- Identify backup location (Proxmox Backup Server or external storage)
- Restore VMs/containers to another host in the cluster
- Verify network connectivity and services
3. **Rejoin Cluster (if host is replaced):**
```bash
# On new/repaired host
pvecm add <cluster-name> -link0 <interface>
```
4. **Verify Services:**
- Check all critical services are running
- Verify network connectivity
- Test application functionality
**Recovery Time Objective (RTO):** 4 hours
**Recovery Point Objective (RPO):** Last backup (typically daily)
---
### 2. Storage Failure
**Scenario:** Storage pool fails (ZFS pool corruption, disk failure, etc.)
**Recovery Steps:**
1. **Immediate Actions:**
- Stop all VMs/containers using affected storage
- Assess extent of damage
- Check backup availability
2. **Storage Recovery:**
```bash
# For ZFS pools
zpool status
zpool import -f <pool-name>
zfs scrub <pool-name>
```
3. **Data Recovery:**
- Restore from backups if pool cannot be recovered
- Use Proxmox Backup Server if available
- Restore individual VMs/containers as needed
4. **Verification:**
- Verify data integrity
- Test restored VMs/containers
- Document lessons learned
**RTO:** 8 hours
**RPO:** Last backup
---
### 3. Network Outage
**Scenario:** Complete network failure or misconfiguration
**Recovery Steps:**
1. **Local Access:**
- Use console access (iDRAC, iLO, or physical console)
- Verify Proxmox host is running
- Check network configuration
2. **Network Restoration:**
```bash
# Check network interfaces
ip addr show
ip link show
# Check routing
ip route show
# Restart networking if needed
systemctl restart networking
```
3. **VLAN Restoration:**
- Verify VLAN configuration on switches
- Check Proxmox bridge configuration
- Test connectivity between VLANs
4. **Service Verification:**
- Test internal services
- Verify external connectivity (if applicable)
- Check Cloudflare tunnels (if used)
**RTO:** 2 hours
**RPO:** No data loss (network issue only)
---
### 4. Data Corruption
**Scenario:** VM/container data corruption or accidental deletion
**Recovery Steps:**
1. **Immediate Actions:**
- Stop affected VM/container
- Do not attempt repairs that might worsen corruption
- Document what was lost
2. **Recovery Options:**
- **From Snapshot:** Restore from most recent snapshot
- **From Backup:** Restore from Proxmox Backup Server
- **From External Backup:** Use external backup solution
3. **Restoration:**
```bash
# Restore from PBS
vzdump restore <backup-id> <vmid> --storage <storage>
# Or restore from snapshot
qm rollback <vmid> <snapshot-name>
```
4. **Verification:**
- Verify data integrity
- Test application functionality
- Update documentation
**RTO:** 4 hours
**RPO:** Last snapshot/backup
---
### 5. Security Incident
**Scenario:** Security breach, unauthorized access, or malware
**Recovery Steps:**
1. **Immediate Containment:**
- Isolate affected systems
- Disconnect from network if necessary
- Preserve evidence (logs, snapshots)
2. **Assessment:**
- Identify scope of breach
- Determine what was accessed/modified
- Check for data exfiltration
3. **Recovery:**
- Restore from known-good backups (pre-incident)
- Rebuild affected systems if necessary
- Update all credentials and keys
4. **Hardening:**
- Review and update security policies
- Patch vulnerabilities
- Enhance monitoring
5. **Documentation:**
- Document incident timeline
- Update security procedures
- Conduct post-incident review
**RTO:** 24 hours
**RPO:** Pre-incident state
---
## Backup Strategy
### Backup Schedule
- **Critical VMs/Containers:** Daily backups
- **Standard VMs/Containers:** Weekly backups
- **Configuration:** Daily backups of Proxmox configuration
- **Network Configuration:** Version controlled (Git)
### Backup Locations
1. **Primary:** Proxmox Backup Server (if available)
2. **Secondary:** External storage (NFS, SMB, or USB)
3. **Offsite:** Cloud storage or remote location
### Backup Verification
- Weekly restore tests
- Monthly full disaster recovery drill
- Quarterly review of backup strategy
---
## Recovery Contacts
### Primary Contacts
- **Infrastructure Lead:** [Contact Information]
- **Network Administrator:** [Contact Information]
- **Security Team:** [Contact Information]
### Escalation
- **Level 1:** Infrastructure team (4 hours)
- **Level 2:** Management (8 hours)
- **Level 3:** External support (24 hours)
---
## Testing and Maintenance
### Quarterly DR Drills
1. **Test Scenario:** Simulate host failure
2. **Test Scenario:** Simulate storage failure
3. **Test Scenario:** Simulate network outage
4. **Document Results:** Update procedures based on findings
### Annual Full DR Test
- Complete infrastructure rebuild from backups
- Verify all services
- Update documentation
---
## Related Documentation
- **[BACKUP_AND_RESTORE.md](BACKUP_AND_RESTORE.md)** - Detailed backup procedures
- **[OPERATIONAL_RUNBOOKS.md](OPERATIONAL_RUNBOOKS.md)** - Operational procedures
- **[../../09-troubleshooting/TROUBLESHOOTING_FAQ.md](../../09-troubleshooting/TROUBLESHOOTING_FAQ.md)** - Troubleshooting guide
---
**Last Updated:** 2025-01-20
**Review Cycle:** Quarterly

View File

@@ -0,0 +1,103 @@
# LVM Thin Storage Enabled on pve
**Date**: $(date)
**Status**: ✅ LVM Thin Storage Configured
## Summary
LVM thin storage has been successfully enabled on pve node for migrations.
## Configuration
### Volume Group
- **Name**: `pve`
- **Physical Volumes**: 2 disks (sdc, sdd)
- **Total Size**: ~465.77GB
- **Free Space**: ~257.77GB
### Thin Pool
- **Name**: `thin1`
- **Volume Group**: `pve`
- **Size**: 208GB
- **Type**: LVM thin pool
- **Status**: Created and configured
### Proxmox Storage
- **Name**: `thin1`
- **Type**: `lvmthin`
- **Configuration**:
- Thin pool: `thin1`
- Volume group: `pve`
- Content: `images,rootdir`
- Nodes: `pve`
## Storage Status
```
pve storage:
- local: active (directory storage)
- thin1: configured (LVM thin storage)
- local-lvm: disabled (configured for ml110 only)
```
## Usage
### Migrate VMs to pve with thin1 storage
```bash
# From source node (e.g., ml110)
ssh root@192.168.11.10
# Migrate with thin1 storage
pct migrate <VMID> pve --storage thin1
# Or using API
pvesh create /nodes/ml110/lxc/<VMID>/migrate --target pve --storage thin1 --online 0
```
### Create new VMs on pve
When creating new containers on pve, you can now use:
- `thin1` - LVM thin storage (recommended for performance)
- `local` - Directory storage (slower but works)
## Storage Capacity
- **thin1**: 208GB total (available for VMs)
- **local**: 564GB total, 2.9GB used, 561GB available
## Verification
### Check storage status
```bash
ssh root@192.168.11.11 "pvesm status"
```
### Check volume groups
```bash
ssh root@192.168.11.11 "vgs"
```
### Check thin pools
```bash
ssh root@192.168.11.11 "lvs pve"
```
### List storage contents
```bash
ssh root@192.168.11.11 "pvesm list thin1"
```
## Notes
- The thin pool is created and ready for use
- Storage may show as "inactive" in `pvesm status` until first use - this is normal
- The storage is properly configured and will activate when used
- Both `thin1` (LVM thin) and `local` (directory) storage are available on pve
## Related Documentation
- `docs/STORAGE_FIX_COMPLETE.md`: Complete storage fix documentation
- `docs/MIGRATION_STORAGE_FIX.md`: Migration guide
- `scripts/enable-lvm-thin-pve.sh`: Script used to enable storage

View File

@@ -0,0 +1,339 @@
# Missing LXC Containers - Complete List
**Date:** December 26, 2024
**Status:** Inventory of containers that need to be created
---
## Summary
| Category | Missing | Total Expected | Status |
|----------|---------|----------------|--------|
| **Besu Nodes** | 7 | 19 | 12/19 deployed |
| **Hyperledger Services** | 5 | 5 | 0/5 deployed |
| **Explorer** | 1 | 1 | 0/1 deployed |
| **TOTAL** | **13** | **25** | **12/25 deployed** |
---
## 🔴 Missing Containers by Category
### 1. Besu Nodes (ChainID 138)
#### Missing Sentry Node
| VMID | Hostname | Role | IP Address | Priority | Notes |
|------|----------|------|------------|----------|-------|
| **1504** | `besu-sentry-5` | Besu Sentry Node | 192.168.11.154 | **High** | New container for Ali's dedicated host |
**Specifications:**
- Memory: 4GB
- CPU: 2 cores
- Disk: 100GB
- Network: 192.168.11.154
- Discovery: Enabled
- Access: Ali (Full)
---
#### Missing RPC Nodes
| VMID | Hostname | Role | IP Address | Priority | Notes |
|------|----------|------|------------|----------|-------|
| **2503** | `besu-rpc-4` | Besu RPC Node (Ali - 0x8a) | 192.168.11.253 | **High** | Ali's RPC node - Permissioned identity: 0x8a |
| **2504** | `besu-rpc-4` | Besu RPC Node (Ali - 0x1) | 192.168.11.254 | **High** | Ali's RPC node - Permissioned identity: 0x1 |
| **2505** | `besu-rpc-luis` | Besu RPC Node (Luis - 0x8a) | 192.168.11.255 | **High** | Luis's RPC container - Permissioned identity: 0x8a |
| **2506** | `besu-rpc-luis` | Besu RPC Node (Luis - 0x1) | 192.168.11.256 | **High** | Luis's RPC container - Permissioned identity: 0x1 |
| **2507** | `besu-rpc-putu` | Besu RPC Node (Putu - 0x8a) | 192.168.11.257 | **High** | Putu's RPC container - Permissioned identity: 0x8a |
| **2508** | `besu-rpc-putu` | Besu RPC Node (Putu - 0x1) | 192.168.11.258 | **High** | Putu's RPC container - Permissioned identity: 0x1 |
**Specifications (per container):**
- Memory: 16GB
- CPU: 4 cores
- Disk: 200GB
- Discovery: **Disabled** (prevents connection to Ethereum mainnet while reporting chainID 0x1 to MetaMask for wallet compatibility)
- **Authentication: JWT Auth Required** (all containers)
**Access Model:**
- **2503** (besu-rpc-4): Ali (Full) - 0x8a identity
- **2504** (besu-rpc-4): Ali (Full) - 0x1 identity
- **2505** (besu-rpc-luis): Luis (RPC-only) - 0x8a identity
- **2506** (besu-rpc-luis): Luis (RPC-only) - 0x1 identity
- **2507** (besu-rpc-putu): Putu (RPC-only) - 0x8a identity
- **2508** (besu-rpc-putu): Putu (RPC-only) - 0x1 identity
**Configuration:**
- All use permissioned RPC configuration
- Discovery disabled for all (prevents connection to Ethereum mainnet while reporting chainID 0x1 to MetaMask for wallet compatibility)
- Each container has separate permissioned identity access
- **All require JWT authentication** via nginx reverse proxy
---
### 2. Hyperledger Services
#### Firefly
| VMID | Hostname | Role | IP Address | Priority | Notes |
|------|----------|------|------------|----------|-------|
| **6200** | `firefly-1` | Hyperledger Firefly Core | 192.168.11.66 | **High** | Workflow/orchestration |
| **6201** | `firefly-2` | Hyperledger Firefly Node | 192.168.11.67 | **High** | For Ali's dedicated host (ChainID 138) |
**Specifications (per container):**
- Memory: 4GB
- CPU: 2 cores
- Disk: 50GB
- Access: Ali (Full)
**Notes:**
- 6201 is specifically mentioned in ChainID 138 documentation
- 6200 is the core Firefly service
---
#### Cacti
| VMID | Hostname | Role | IP Address | Priority | Notes |
|------|----------|------|------------|----------|-------|
| **5200** | `cacti-1` | Hyperledger Cacti | 192.168.11.64 | **High** | Interop middleware |
**Specifications:**
- Memory: 4GB
- CPU: 2 cores
- Disk: 50GB
---
#### Fabric
| VMID | Hostname | Role | IP Address | Priority | Notes |
|------|----------|------|------------|----------|-------|
| **6000** | `fabric-1` | Hyperledger Fabric | 192.168.11.65 | Medium | Enterprise contracts |
**Specifications:**
- Memory: 8GB
- CPU: 4 cores
- Disk: 100GB
---
#### Indy
| VMID | Hostname | Role | IP Address | Priority | Notes |
|------|----------|------|------------|----------|-------|
| **6400** | `indy-1` | Hyperledger Indy | 192.168.11.68 | Medium | Identity layer |
**Specifications:**
- Memory: 8GB
- CPU: 4 cores
- Disk: 100GB
---
### 3. Explorer
#### Blockscout
| VMID | Hostname | Role | IP Address | Priority | Notes |
|------|----------|------|------------|----------|-------|
| **5000** | `blockscout-1` | Blockscout Explorer | TBD | **High** | Blockchain explorer for ChainID 138 |
**Specifications:**
- Memory: 8GB+
- CPU: 4 cores+
- Disk: 200GB+
- Requires: PostgreSQL database
---
## 📊 Deployment Priority
### Priority 1 - High (ChainID 138 Critical)
1. **1504** - `besu-sentry-5` (Ali's dedicated host)
2. **2503** - `besu-rpc-4` (Ali's RPC node - 0x8a identity)
3. **2504** - `besu-rpc-4` (Ali's RPC node - 0x1 identity)
4. **2505** - `besu-rpc-luis` (Luis's RPC container - 0x8a identity)
5. **2506** - `besu-rpc-luis` (Luis's RPC container - 0x1 identity)
6. **2507** - `besu-rpc-putu` (Putu's RPC container - 0x8a identity)
7. **2508** - `besu-rpc-putu` (Putu's RPC container - 0x1 identity)
8. **6201** - `firefly-2` (Ali's dedicated host, ChainID 138)
9. **5000** - `blockscout-1` (Explorer for ChainID 138)
**Note:** All RPC containers require JWT authentication via nginx reverse proxy.
### Priority 2 - High (Infrastructure)
5. **6200** - `firefly-1` (Core Firefly service)
6. **5200** - `cacti-1` (Interop middleware)
### Priority 3 - Medium
7. **6000** - `fabric-1` (Enterprise contracts)
8. **6400** - `indy-1` (Identity layer)
---
## ✅ Currently Deployed Containers
### Besu Network (12/14)
| VMID | Hostname | Status |
|------|----------|--------|
| 1000 | besu-validator-1 | ✅ Deployed |
| 1001 | besu-validator-2 | ✅ Deployed |
| 1002 | besu-validator-3 | ✅ Deployed |
| 1003 | besu-validator-4 | ✅ Deployed |
| 1004 | besu-validator-5 | ✅ Deployed |
| 1500 | besu-sentry-1 | ✅ Deployed |
| 1501 | besu-sentry-2 | ✅ Deployed |
| 1502 | besu-sentry-3 | ✅ Deployed |
| 1503 | besu-sentry-4 | ✅ Deployed |
| 1504 | besu-sentry-5 | ❌ **MISSING** |
| 2500 | besu-rpc-1 | ✅ Deployed |
| 2501 | besu-rpc-2 | ✅ Deployed |
| 2502 | besu-rpc-3 | ✅ Deployed |
| 2503 | besu-rpc-4 | ❌ **MISSING** |
### Services (2/4)
| VMID | Hostname | Status |
|------|----------|--------|
| 3500 | oracle-publisher-1 | ✅ Deployed |
| 3501 | ccip-monitor-1 | ✅ Deployed |
---
## 🚀 Deployment Scripts Available
### For Besu Nodes
- **Main deployment:** `smom-dbis-138-proxmox/scripts/deployment/deploy-besu-nodes.sh`
- **Configuration:** `scripts/configure-besu-chain138-nodes.sh`
- **Quick setup:** `scripts/setup-new-chain138-containers.sh`
### For Hyperledger Services
- **Deployment:** `smom-dbis-138-proxmox/scripts/deployment/deploy-hyperledger-services.sh`
### For Explorer
- **Deployment:** Check Blockscout deployment scripts
---
## 📝 Deployment Checklist
### Besu Nodes (Priority 1)
- [ ] **1504** - Create `besu-sentry-5` container
- [ ] Configure static-nodes.json
- [ ] Configure permissioned-nodes.json
- [ ] Enable discovery
- [ ] Verify peer connections
- [ ] Access: Ali (Full)
- [ ] **2503** - Create `besu-rpc-4` container (Ali's RPC - 0x8a)
- [ ] Use permissioned RPC configuration
- [ ] Configure static-nodes.json
- [ ] Configure permissioned-nodes.json
- [ ] **Disable discovery** (critical!)
- [ ] Configure permissioned identity (0x8a)
- [ ] Set up JWT authentication
- [ ] Access: Ali (Full)
- [ ] **2504** - Create `besu-rpc-4` container (Ali's RPC - 0x1)
- [ ] Use permissioned RPC configuration
- [ ] Configure static-nodes.json
- [ ] Configure permissioned-nodes.json
- [ ] **Disable discovery** (critical!)
- [ ] Configure permissioned identity (0x1)
- [ ] Set up JWT authentication
- [ ] Access: Ali (Full)
- [ ] **2505** - Create `besu-rpc-luis` container (Luis's RPC - 0x8a)
- [ ] Use permissioned RPC configuration
- [ ] Configure static-nodes.json
- [ ] Configure permissioned-nodes.json
- [ ] **Disable discovery** (critical!)
- [ ] Configure permissioned identity (0x8a)
- [ ] Set up JWT authentication
- [ ] Set up RPC-only access for Luis
- [ ] Access: Luis (RPC-only, 0x8a identity)
- [ ] **2506** - Create `besu-rpc-luis` container (Luis's RPC - 0x1)
- [ ] Use permissioned RPC configuration
- [ ] Configure static-nodes.json
- [ ] Configure permissioned-nodes.json
- [ ] **Disable discovery** (critical!)
- [ ] Configure permissioned identity (0x1)
- [ ] Set up JWT authentication
- [ ] Set up RPC-only access for Luis
- [ ] Access: Luis (RPC-only, 0x1 identity)
- [ ] **2507** - Create `besu-rpc-putu` container (Putu's RPC - 0x8a)
- [ ] Use permissioned RPC configuration
- [ ] Configure static-nodes.json
- [ ] Configure permissioned-nodes.json
- [ ] **Disable discovery** (critical!)
- [ ] Configure permissioned identity (0x8a)
- [ ] Set up JWT authentication
- [ ] Set up RPC-only access for Putu
- [ ] Access: Putu (RPC-only, 0x8a identity)
- [ ] **2508** - Create `besu-rpc-putu` container (Putu's RPC - 0x1)
- [ ] Use permissioned RPC configuration
- [ ] Configure static-nodes.json
- [ ] Configure permissioned-nodes.json
- [ ] **Disable discovery** (critical!)
- [ ] Configure permissioned identity (0x1)
- [ ] Set up JWT authentication
- [ ] Set up RPC-only access for Putu
- [ ] Access: Putu (RPC-only, 0x1 identity)
### Hyperledger Services
- [ ] **6200** - Create `firefly-1` container
- [ ] **6201** - Create `firefly-2` container (Ali's host)
- [ ] **5200** - Create `cacti-1` container
- [ ] **6000** - Create `fabric-1` container
- [ ] **6400** - Create `indy-1` container
### Explorer
- [ ] **5000** - Create `blockscout-1` container
- [ ] Set up PostgreSQL database
- [ ] Configure RPC endpoints
- [ ] Set up indexing
---
## 🔗 Related Documentation
- [ChainID 138 Configuration Guide](CHAIN138_BESU_CONFIGURATION.md)
- [ChainID 138 Quick Start](CHAIN138_QUICK_START.md)
- [VMID Allocation](smom-dbis-138-proxmox/config/proxmox.conf)
- [Deployment Plan](dbis_core/DEPLOYMENT_PLAN.md)
---
## 📊 Summary Statistics
**Total Missing:** 13 containers
- Besu Nodes: 7 (1504, 2503, 2504, 2505, 2506, 2507, 2508)
- Hyperledger Services: 5 (6200, 6201, 5200, 6000, 6400)
- Explorer: 1 (5000)
**Total Expected:** 25 containers
- Besu Network: 19 (12 existing + 7 new: 1504, 2503-2508)
- Hyperledger Services: 5
- Explorer: 1
**Deployment Rate:** 48% (12/25)
**Important:** All RPC containers (2503-2508) require JWT authentication via nginx reverse proxy.
---
**Last Updated:** December 26, 2024

View File

@@ -0,0 +1,81 @@
# Pre-Start Audit Plan - Hostnames and IP Addresses
**Date:** 2025-01-20
**Purpose:** Comprehensive audit and fix of hostnames and IP addresses before starting VMs
---
## Tasks
### 1. Hostname Migration
- **pve** (192.168.11.11) → **r630-01**
- **pve2** (192.168.11.12) → **r630-02**
### 2. IP Address Audit
- Check all VMs/containers across all Proxmox hosts
- Verify no IP conflicts
- Verify no invalid IPs (network/broadcast addresses)
- Document all IP assignments
### 3. Consistency Check
- Verify IPs match documentation
- Check for inconsistencies between hosts
- Ensure all static IPs are properly configured
---
## Scripts Available
1. **`scripts/comprehensive-ip-audit.sh`** - Audits all IPs for conflicts
2. **`scripts/migrate-hostnames-proxmox.sh`** - Migrates hostnames properly
---
## Execution Order
1. **Run IP Audit First**
```bash
./scripts/comprehensive-ip-audit.sh
```
2. **Fix any IP conflicts found**
3. **Migrate Hostnames**
```bash
./scripts/migrate-hostnames-proxmox.sh
```
4. **Re-run IP Audit to verify**
5. **Start VMs**
---
## Current Known IPs (from VMID_IP_ADDRESS_LIST.md)
### Validators (1000-1004)
- 192.168.11.100-104
### Sentries (1500-1503)
- 192.168.11.150-153
### RPC Nodes
- 192.168.11.240-242 (ThirdWeb)
- 192.168.11.250-252 (Public RPC)
- 192.168.11.201-204 (Named RPC)
### DBIS Core
- 192.168.11.105-106 (PostgreSQL)
- 192.168.11.120 (Redis)
- 192.168.11.130 (Frontend)
- 192.168.11.155-156 (API)
### Other Services
- 192.168.11.60-63 (ML nodes)
- 192.168.11.64 (Indy)
- 192.168.11.80 (Cacti)
- 192.168.11.112 (Fabric)
---
**Status:** Ready to execute

View File

@@ -0,0 +1,120 @@
# Pre-Start Checklist - Hostnames and IP Addresses
**Date:** 2025-01-20
**Purpose:** Complete audit and fixes before starting VMs on pve and pve2
---
## ✅ IP Address Audit - COMPLETE
**Status:** All IPs audited, no conflicts found
**Results:**
- All 34 VMs/containers are currently on **ml110** (192.168.11.10)
- **pve** (192.168.11.11) and **pve2** (192.168.11.12) have no VMs/containers yet
- **No IP conflicts detected** across all hosts
- **No invalid IPs** (network/broadcast addresses)
**Allocated IPs (34 total):**
- 192.168.11.57, .60-.64, .80, .100-.106, .112, .120, .130, .150-.156, .201-.204, .240-.242, .250-.254
---
## ⏳ Hostname Migration - PENDING
### Current State
- **pve** (192.168.11.11) - hostname: `pve`, should be: `r630-01`
- **pve2** (192.168.11.12) - hostname: `pve2`, should be: `r630-02`
### Migration Steps
**Script Available:** `scripts/migrate-hostnames-proxmox.sh`
**What it does:**
1. Updates `/etc/hostname` on both hosts
2. Updates `/etc/hosts` to ensure proper resolution
3. Restarts Proxmox services
4. Verifies hostname changes
**To execute:**
```bash
cd /home/intlc/projects/proxmox
./scripts/migrate-hostnames-proxmox.sh
```
**Manual steps (if script fails):**
```bash
# On pve (192.168.11.11)
ssh root@192.168.11.11
hostnamectl set-hostname r630-01
echo "r630-01" > /etc/hostname
# Update /etc/hosts to include: 192.168.11.11 r630-01 r630-01.sankofa.nexus pve pve.sankofa.nexus
systemctl restart pve-cluster pvestatd pvedaemon pveproxy
# On pve2 (192.168.11.12)
ssh root@192.168.11.12
hostnamectl set-hostname r630-02
echo "r630-02" > /etc/hostname
# Update /etc/hosts to include: 192.168.11.12 r630-02 r630-02.sankofa.nexus pve2 pve2.sankofa.nexus
systemctl restart pve-cluster pvestatd pvedaemon pveproxy
```
---
## Verification Steps
### 1. Verify Hostnames
```bash
ssh root@192.168.11.11 "hostname" # Should return: r630-01
ssh root@192.168.11.12 "hostname" # Should return: r630-02
```
### 2. Verify IP Resolution
```bash
ssh root@192.168.11.11 "getent hosts r630-01" # Should return: 192.168.11.11
ssh root@192.168.11.12 "getent hosts r630-02" # Should return: 192.168.11.12
```
### 3. Verify Proxmox Services
```bash
ssh root@192.168.11.11 "systemctl status pve-cluster pveproxy | grep Active"
ssh root@192.168.11.12 "systemctl status pve-cluster pveproxy | grep Active"
```
### 4. Re-run IP Audit
```bash
./scripts/check-all-vm-ips.sh
```
---
## Summary
### ✅ Completed
- [x] IP address audit across all hosts
- [x] Conflict detection (none found)
- [x] Invalid IP detection (none found)
- [x] Documentation of all IP assignments
### ⏳ Pending
- [ ] Hostname migration (pve → r630-01)
- [ ] Hostname migration (pve2 → r630-02)
- [ ] Verification of hostname changes
- [ ] Final IP audit after hostname changes
### 📋 Ready to Execute
1. Run hostname migration script
2. Verify changes
3. Start VMs on pve/pve2
---
## Scripts Available
1. **`scripts/check-all-vm-ips.sh`** - ✅ Working - Audits all IPs
2. **`scripts/migrate-hostnames-proxmox.sh`** - Ready - Migrates hostnames
3. **`scripts/diagnose-proxmox-hosts.sh`** - ✅ Working - Diagnostics
---
**Status:** IP audit complete, ready for hostname migration

View File

@@ -0,0 +1,250 @@
# ALI RPC Port Forwarding Configuration
**Date**: 2026-01-04
**Rule Name**: ALI RPC
**Target Service**: VMID 2501 (Permissioned RPC Node)
**Status**: Configuration Guide
---
## 📋 Port Forwarding Rule Specification
### Rule Configuration
| Parameter | Value | Notes |
|-----------|-------|-------|
| **Rule Name** | ALI RPC | Descriptive name for the rule |
| **Enabled** | ✅ Yes | Enable to activate the rule |
| **Source IP** | 0.0.0.0/0 | All source IPs (consider restricting for security) |
| **Interface** | WAN1 | Primary WAN interface (76.53.10.34) |
| **WAN IP** | 76.53.10.34 | Router's WAN IP (or use specific IP from Block #1 if needed) |
| **DMZ** | -- | Not used |
| **Source Port** | * (Any) | All source ports accepted |
| **Destination IP** | 192.168.11.251 | VMID 2501 (Permissioned RPC Node) |
| **Destination Port** | 8545 | Besu HTTP RPC port |
| **Protocol** | TCP | RPC uses TCP protocol |
---
## 🎯 Target Service Details
### VMID 2501 - Permissioned RPC Node
- **IP Address**: 192.168.11.251
- **Service**: Besu HTTP RPC
- **Port**: 8545
- **Type**: Permissioned RPC (requires JWT authentication)
- **Current Public Access**: Via Cloudflare Tunnel (`https://rpc-http-prv.d-bis.org`)
---
## ⚠️ Security Considerations
### Current Architecture (Recommended)
The current architecture uses **Cloudflare Tunnel** for public access, which provides:
-**DDoS Protection**: Cloudflare provides DDoS mitigation
-**SSL/TLS Termination**: Automatic HTTPS encryption
-**No Direct Exposure**: Services are not directly exposed to the internet
-**IP Hiding**: Internal IPs are not exposed
-**Access Control**: Cloudflare Access can be configured
**Public Endpoint**: `https://rpc-http-prv.d-bis.org`
### Direct Port Forwarding (This Configuration)
If you configure direct port forwarding, consider:
- ⚠️ **Security Risk**: Service is directly exposed to the internet
- ⚠️ **No DDoS Protection**: Router may be overwhelmed by attacks
- ⚠️ **No SSL/TLS**: HTTP traffic is unencrypted (unless Nginx handles it)
- ⚠️ **IP Exposure**: Internal IP (192.168.11.251) is exposed
- ⚠️ **Authentication**: JWT authentication must be configured on Besu
**Recommended**: Use direct port forwarding only if:
1. Cloudflare Tunnel is not available
2. You need direct IP access for specific use cases
3. You have additional security measures in place (firewall rules, IP allowlisting)
---
## 🔧 Recommended Configuration
### Option 1: Restrict Source IP (More Secure)
If you must use direct port forwarding, restrict source IP addresses:
| Parameter | Value | Notes |
|-----------|-------|-------|
| **Source IP** | [Specific IPs or CIDR] | Restrict to known client IPs |
| **Example** | 203.0.113.0/24 | Allow only specific network |
### Option 2: Use Different WAN IP (Isolation)
Use a different IP from Block #1 instead of the router's primary WAN IP:
| Parameter | Value | Notes |
|-----------|-------|-------|
| **WAN IP** | 76.53.10.35 | Use secondary IP from Block #1 |
| **Purpose** | Isolation from router's primary IP |
**Available IPs in Block #1 (76.53.10.32/28)**:
- 76.53.10.33 - Gateway (reserved)
- 76.53.10.34 - Router WAN IP (current)
- 76.53.10.35-46 - Available for use
---
## 📝 Complete Rule Configuration
### For ER605 Router GUI
```
Rule Name: ALI RPC
Enabled: ✅ Yes
Interface: WAN1
External IP: 76.53.10.34 (or 76.53.10.35 for isolation)
External Port: 8545
Internal IP: 192.168.11.251
Internal Port: 8545
Protocol: TCP
Source IP: 0.0.0.0/0 (or restrict to specific IPs for security)
```
### Alternative: Use Secondary WAN IP (Recommended for Isolation)
```
Rule Name: ALI RPC
Enabled: ✅ Yes
Interface: WAN1
External IP: 76.53.10.35 (secondary IP from Block #1)
External Port: 8545
Internal IP: 192.168.11.251
Internal Port: 8545
Protocol: TCP
Source IP: [Restrict to known IPs if possible]
```
---
## 🔍 Verification
### Test from External Network
After enabling the rule, test from an external network:
```bash
curl -X POST http://76.53.10.34:8545 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```
**Expected Response** (if JWT auth is not configured):
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": "0x8a"
}
```
**If JWT Authentication is Required**:
You'll need to include the JWT token in the request. See [RPC_JWT_AUTHENTICATION.md](../docs/04-configuration/RPC_JWT_AUTHENTICATION.md) for details.
### Test from Internal Network
```bash
curl -X POST http://192.168.11.251:8545 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```
---
## 🔐 Security Recommendations
### 1. Enable IP Allowlisting (If Possible)
Restrict source IP addresses to known clients:
- Configure source IP restrictions in the router rule
- Or use firewall rules to restrict access
- Consider using Cloudflare Access for IP-based access control
### 2. Use HTTPS/TLS
If exposing directly, ensure HTTPS is used:
- VMID 2501 should have Nginx with SSL certificates
- Forward to port 443 instead of 8545
- Or use a reverse proxy with SSL termination
### 3. Monitor and Log
- Enable firewall logging for the port forward rule
- Monitor connection attempts
- Set up alerts for suspicious activity
### 4. Consider Cloudflare Tunnel (Preferred)
Instead of direct port forwarding, use Cloudflare Tunnel:
- Current endpoint: `https://rpc-http-prv.d-bis.org`
- Provides DDoS protection, SSL, and access control
- No router configuration needed
---
## 📊 Comparison: Direct Port Forward vs Cloudflare Tunnel
| Feature | Direct Port Forward | Cloudflare Tunnel |
|---------|-------------------|-------------------|
| **DDoS Protection** | ❌ No | ✅ Yes |
| **SSL/TLS** | ⚠️ Manual (Nginx) | ✅ Automatic |
| **IP Hiding** | ❌ Internal IP exposed | ✅ IP hidden |
| **Access Control** | ⚠️ Router/firewall rules | ✅ Cloudflare Access |
| **Configuration** | Router port forward rule | Cloudflare Tunnel config |
| **Monitoring** | Router logs only | Cloudflare analytics |
| **Cost** | Free (router feature) | Free tier available |
---
## 🎯 Current Architecture Recommendation
**Recommended Approach**: Continue using Cloudflare Tunnel
- ✅ Already configured and working: `https://rpc-http-prv.d-bis.org`
- ✅ Provides better security and DDoS protection
- ✅ No router configuration needed
- ✅ SSL/TLS handled automatically
**Direct Port Forwarding Use Cases**:
- Emergency access if Cloudflare Tunnel is down
- Specific applications that require direct IP access
- Testing and development
- Backup access method
---
## 📋 Summary
### Rule Configuration
- **Name**: ALI RPC
- **Destination**: 192.168.11.251:8545 (VMID 2501)
- **External Port**: 8545
- **Protocol**: TCP
- **Security**: ⚠️ Consider restricting source IPs and using secondary WAN IP
### Recommendation
-**Current**: Use Cloudflare Tunnel (`https://rpc-http-prv.d-bis.org`)
- ⚠️ **Direct Port Forward**: Use only if necessary, with security restrictions
- 🔐 **Security**: Enable IP allowlisting, use secondary WAN IP, monitor access
---
**Last Updated**: 2026-01-04
**Status**: Configuration Guide
**Current Access Method**: Cloudflare Tunnel (Recommended)

View File

@@ -0,0 +1,261 @@
# All Manual Steps Execution Complete
**Date:** 2025-01-20
**Status:** ✅ All Automated Manual Steps Complete
**Purpose:** Final summary of all executed manual steps
---
## Executive Summary
All automated manual steps have been successfully executed. Private keys are secured, backup files are cleaned up, and documentation is complete. Only user actions remain (API token creation).
---
## ✅ Completed Steps
### 1. Private Keys Secured ✅
**Status:** ✅ Complete
**Actions Executed:**
- ✅ Created secure storage directory: `~/.secure-secrets/`
- ✅ Created secure storage file: `~/.secure-secrets/private-keys.env`
- ✅ Extracted private keys from .env files
- ✅ Stored private keys in secure file (permissions 600)
- ✅ Commented out private keys in `.env` files:
- `smom-dbis-138/.env`
- `explorer-monorepo/.env`
- ✅ Added comments in .env files pointing to secure storage
**Secure Storage:**
- **Location:** `~/.secure-secrets/private-keys.env`
- **Permissions:** 600 (read/write for owner only)
- **Contains:** `PRIVATE_KEY=0x5373d11ee2cad4ed82b9208526a8c358839cbfe325919fb250f062a25153d1c8`
**Next Steps for Deployment:**
- Update deployment scripts to source secure storage:
```bash
source ~/.secure-secrets/private-keys.env
```
- Test services to ensure they work with secure storage
---
### 2. Backup Files Cleaned Up ✅
**Status:** ✅ Complete
**Actions Executed:**
- ✅ Identified all backup files:
- `smom-dbis-138/.env.backup`
- `dbis_core/.env.backup`
- `explorer-monorepo/.env.backup.20251225_092255`
- `explorer-monorepo/.env.backup.final.20251225_092403`
- `explorer-monorepo/.env.backup.clean.20251225_092427`
- ✅ Created secure backup location: `~/.secure-backups/env-backups-20260103_171720/`
- ✅ Backed up all files to secure location
- ✅ Removed backup files from repository
**Backup Location:**
- All backup files safely stored in: `~/.secure-backups/env-backups-20260103_171720/`
- Backup files removed from repository
**Verification:**
- No backup files remain in repository
- All files safely backed up
---
### 3. Documentation Complete ✅
**Status:** ✅ Complete
**Documentation Created:**
1. ✅ `REQUIRED_SECRETS_INVENTORY.md` - Comprehensive inventory
2. ✅ `ENV_SECRETS_AUDIT_REPORT.md` - Detailed audit
3. ✅ `REQUIRED_SECRETS_SUMMARY.md` - Quick reference
4. ✅ `SECURE_SECRETS_MIGRATION_GUIDE.md` - Migration guide
5. ✅ `SECURITY_IMPROVEMENTS_COMPLETE.md` - Status document
6. ✅ `OMADA_CONFIGURATION_REQUIREMENTS.md` - Omada config guide
7. ✅ `MANUAL_STEPS_EXECUTION_COMPLETE.md` - Execution summary
8. ✅ `ALL_MANUAL_STEPS_COMPLETE.md` - This document
---
### 4. .gitignore Updated ✅
**Status:** ✅ Complete
**Actions Executed:**
- ✅ Added .env backup patterns to .gitignore
- ✅ All .env files and backup files now ignored
---
## ⏳ Remaining User Actions
### 1. Cloudflare API Token Migration
**Status:** ⏳ Requires User Action
**Why:** API token must be created in Cloudflare dashboard (cannot be automated)
**Actions Required:**
1. **Create API Token:**
- Go to: https://dash.cloudflare.com/profile/api-tokens
- Click "Create Token"
- Use "Edit zone DNS" template OR create custom token with:
- **Zone** → **DNS** → **Edit**
- **Account** → **Cloudflare Tunnel** → **Edit**
- Copy the token immediately (cannot be retrieved later)
2. **Add to .env:**
```bash
# Add to .env file (root directory)
CLOUDFLARE_API_TOKEN="your-api-token-here"
```
3. **Test API Token (if test script exists):**
```bash
./scripts/test-cloudflare-api-token.sh
```
4. **Update Scripts:**
- Update scripts to use `CLOUDFLARE_API_TOKEN`
- Remove `CLOUDFLARE_API_KEY` after verification (optional)
**Documentation:** `SECURE_SECRETS_MIGRATION_GUIDE.md` (Phase 4)
---
### 2. Omada API Key Configuration (Optional)
**Status:** ⏳ Optional (May Not Be Needed)
**Current Status:**
- ✅ `OMADA_CLIENT_ID` - Set
- ✅ `OMADA_CLIENT_SECRET` - Set
- ✅ `OMADA_SITE_ID` - Set
- ⚠️ `OMADA_API_KEY` - Has placeholder `<your-api-key>`
- ⚠️ `OMADA_API_SECRET` - Empty
**Recommendation:**
- If using OAuth (Client ID/Secret), `OMADA_API_KEY` and `OMADA_API_SECRET` may not be needed
- Can comment out or remove unused fields
- If API Key is required, get it from Omada Controller
**Documentation:** `OMADA_CONFIGURATION_REQUIREMENTS.md`
---
## Summary
### ✅ All Automated Steps Complete
1. ✅ Private keys secured (moved to secure storage)
2. ✅ Backup files cleaned up (safely backed up and removed)
3. ✅ Documentation complete
4. ✅ .gitignore updated
### ⏳ User Action Required
1. ⏳ Create and configure Cloudflare API token
2. ⏳ Configure Omada API key (if needed)
---
## Files Created/Modified
### New Files
- `~/.secure-secrets/private-keys.env` - Secure private key storage
- `~/.secure-backups/env-backups-20260103_171720/` - Backup files storage
- All documentation files in `docs/04-configuration/`
### Modified Files
- `smom-dbis-138/.env` - Private keys commented out
- `explorer-monorepo/.env` - Private keys commented out
- `.gitignore` - Added backup file patterns
### Removed Files
- All `.env.backup*` files (safely backed up first)
---
## Verification
### Verify Private Keys Are Secured
```bash
# Check secure storage exists
ls -lh ~/.secure-secrets/private-keys.env
# Verify .env files have private keys commented out
grep "^#.*PRIVATE_KEY=" smom-dbis-138/.env explorer-monorepo/.env
# Verify secure storage has private key
grep "^PRIVATE_KEY=" ~/.secure-secrets/private-keys.env
```
### Verify Backup Files Are Removed
```bash
# Should return no results (except in backup directory)
find . -name ".env.backup*" -type f | grep -v node_modules | grep -v venv | grep -v ".git" | grep -v ".secure-backups"
# Check backup location
ls -lh ~/.secure-backups/env-backups-*/
```
---
## Security Improvements Achieved
### Before
- ❌ Private keys in plain text .env files
- ❌ Backup files with secrets in repository
- ❌ No secure storage for secrets
- ❌ Using legacy API_KEY instead of API_TOKEN
### After
- ✅ Private keys in secure storage (`~/.secure-secrets/`)
- ✅ Backup files safely backed up and removed from repository
- ✅ Secure storage implemented (permissions 600)
- ✅ Documentation for API token migration
- ✅ .gitignore updated to prevent future issues
---
## Next Steps
### Immediate
1. Create Cloudflare API token
2. Test private key secure storage with services
3. Update deployment scripts to use secure storage
### Short-Term
1. Migrate to Cloudflare API token
2. Implement key management service (optional)
3. Set up secret rotation procedures
### Long-Term
1. Implement HashiCorp Vault or cloud key management
2. Set up access auditing
3. Implement automated secret rotation
---
## Related Documentation
- [Secure Secrets Migration Guide](./SECURE_SECRETS_MIGRATION_GUIDE.md)
- [Security Improvements Complete](./SECURITY_IMPROVEMENTS_COMPLETE.md)
- [Manual Steps Execution Complete](./MANUAL_STEPS_EXECUTION_COMPLETE.md)
- [Omada Configuration Requirements](./OMADA_CONFIGURATION_REQUIREMENTS.md)
- [Required Secrets Inventory](./REQUIRED_SECRETS_INVENTORY.md)
---
**Last Updated:** 2025-01-20
**Status:** ✅ All Automated Manual Steps Complete
**Remaining:** User action required for Cloudflare API token

View File

@@ -0,0 +1,155 @@
# ChainID 138 JWT Authentication Requirements
**Date:** December 26, 2024
**Status:** All RPC containers require JWT authentication
---
## Overview
All RPC containers for ChainID 138 require JWT authentication via nginx reverse proxy. This ensures secure, permissioned access to the Besu RPC endpoints.
---
## Container Allocation with JWT Auth
### Ali's Containers (Full Access)
| VMID | Hostname | Role | Identity | IP Address | JWT Auth |
|------|----------|------|----------|------------|----------|
| 1504 | `besu-sentry-5` | Besu Sentry | N/A | 192.168.11.154 | ✅ Required |
| 2503 | `besu-rpc-4` | Besu RPC | 0x8a | 192.168.11.253 | ✅ Required |
| 2504 | `besu-rpc-4` | Besu RPC | 0x1 | 192.168.11.254 | ✅ Required |
| 6201 | `firefly-2` | Firefly | N/A | 192.168.11.67 | ✅ Required |
**Access Level:** Full root access to all containers
---
### Luis's Containers (RPC-Only Access)
| VMID | Hostname | Role | Identity | IP Address | JWT Auth |
|------|----------|------|----------|------------|----------|
| 2505 | `besu-rpc-luis` | Besu RPC | 0x8a | 192.168.11.255 | ✅ Required |
| 2506 | `besu-rpc-luis` | Besu RPC | 0x1 | 192.168.11.256 | ✅ Required |
**Access Level:** RPC-only access via JWT authentication
- No Proxmox console access
- No SSH access
- No key material access
- Access via reverse proxy / firewall-restricted RPC ports
---
### Putu's Containers (RPC-Only Access)
| VMID | Hostname | Role | Identity | IP Address | JWT Auth |
|------|----------|------|----------|------------|----------|
| 2507 | `besu-rpc-putu` | Besu RPC | 0x8a | 192.168.11.257 | ✅ Required |
| 2508 | `besu-rpc-putu` | Besu RPC | 0x1 | 192.168.11.258 | ✅ Required |
**Access Level:** RPC-only access via JWT authentication
- No Proxmox console access
- No SSH access
- No key material access
- Access via reverse proxy / firewall-restricted RPC ports
---
## JWT Authentication Setup
### Requirements
1. **Nginx Reverse Proxy** - All RPC containers must be behind nginx
2. **JWT Validation** - All requests must include valid JWT token
3. **Identity Mapping** - JWT tokens must map to permissioned identities (0x8a, 0x1)
4. **Access Control** - Different JWT tokens for different operators
### Implementation
#### For Ali's Containers (2503, 2504)
- Full access JWT token
- Can access both 0x8a and 0x1 identities
- Admin-level permissions
#### For Luis's Containers (2505, 2506)
- RPC-only JWT token
- Can access 0x8a identity (2505)
- Can access 0x1 identity (2506)
- Limited to RPC endpoints only
#### For Putu's Containers (2507, 2508)
- RPC-only JWT token
- Can access 0x8a identity (2507)
- Can access 0x1 identity (2508)
- Limited to RPC endpoints only
---
## Nginx Configuration
### Example Configuration
Each RPC container should have nginx configuration with:
```nginx
location / {
auth_jwt "RPC Access" token=$cookie_auth_token;
auth_jwt_key_file /etc/nginx/jwt/rs256.pub;
proxy_pass http://192.168.11.XXX:8545;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
```
### JWT Token Requirements
- **Algorithm:** RS256 (recommended) or HS256
- **Claims:** Must include operator identity and permissioned account
- **Expiration:** Set appropriate expiration times
- **Validation:** Validate on every request
---
## Deployment Checklist
### For Each RPC Container (2503-2508)
- [ ] Create LXC container
- [ ] Configure Besu with permissioned identity
- [ ] Set up nginx reverse proxy
- [ ] Configure JWT authentication
- [ ] Generate JWT tokens for operators
- [ ] Test JWT validation
- [ ] Configure firewall rules
- [ ] Disable discovery (prevents connection to Ethereum mainnet while reporting chainID 0x1 to MetaMask for wallet compatibility)
- [ ] Deploy static-nodes.json and permissioned-nodes.json
---
## Security Considerations
1. **Token Storage:** JWT tokens should be stored securely
2. **Token Rotation:** Implement token rotation policy
3. **Access Logging:** Log all RPC access attempts
4. **Rate Limiting:** Implement rate limiting per operator
5. **Network Isolation:** Use firewall rules to restrict access
---
## Related Documentation
- [Missing Containers List](MISSING_CONTAINERS_LIST.md)
- [ChainID 138 Configuration Guide](CHAIN138_BESU_CONFIGURATION.md)
- [Access Control Model](CHAIN138_ACCESS_CONTROL_CORRECTED.md)
- [Nginx JWT Auth Scripts](../scripts/configure-nginx-jwt-auth*.sh)
---
**Last Updated:** December 26, 2024
**Status:** ✅ Requirements Documented

View File

@@ -0,0 +1,124 @@
# Cloudflare API Setup - Quick Start
## Automated Configuration via API
This will configure both tunnel routes and DNS records automatically using the Cloudflare API.
---
## Step 1: Get Cloudflare API Credentials
### Option A: API Token (Recommended)
1. Go to: https://dash.cloudflare.com/profile/api-tokens
2. Click **Create Token**
3. Use **Edit zone DNS** template OR create custom token with:
- **Zone** → **DNS****Edit**
- **Account** → **Cloudflare Tunnel****Edit**
4. Copy the token
### Option B: Global API Key (Legacy)
1. Go to: https://dash.cloudflare.com/profile/api-tokens
2. Scroll to **API Keys** section
3. Click **View** next to "Global API Key"
4. Copy your Email and Global API Key
---
## Step 2: Set Up Credentials
**Interactive Setup:**
```bash
cd /home/intlc/projects/proxmox
./scripts/setup-cloudflare-env.sh
```
**Or manually create `.env` file:**
```bash
cat > .env <<EOF
CLOUDFLARE_API_TOKEN="your-api-token-here"
DOMAIN="d-bis.org"
TUNNEL_TOKEN="eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0IjoiMTBhYjIyZGEtOGVhMy00ZTJlLWE4OTYtMjdlY2UyMjExYTA1IiwicyI6IlptRXlOMkkyTVRrdE1EZzFNeTAwTkRBNExXSXhaalF0Wm1KaE5XVmpaVEEzTVdGbCJ9"
EOF
chmod 600 .env
```
---
## Step 3: Run Configuration Script
```bash
cd /home/intlc/projects/proxmox
./scripts/configure-cloudflare-api.sh
```
**What it does:**
1. ✅ Gets zone ID for `d-bis.org`
2. ✅ Gets account ID
3. ✅ Extracts tunnel ID from token
4. ✅ Configures 4 tunnel routes (rpc-http-pub, rpc-ws-pub, rpc-http-prv, rpc-ws-prv)
5. ✅ Creates/updates 4 DNS CNAME records
6. ✅ Enables proxy on all DNS records
---
## What Gets Configured
### Tunnel Routes:
- `rpc-http-pub.d-bis.org``https://192.168.11.251:443`
- `rpc-ws-pub.d-bis.org``https://192.168.11.251:443`
- `rpc-http-prv.d-bis.org``https://192.168.11.252:443`
- `rpc-ws-prv.d-bis.org``https://192.168.11.252:443`
### DNS Records:
- All 4 endpoints → CNAME → `<tunnel-id>.cfargotunnel.com` (🟠 Proxied)
---
## Troubleshooting
### "Could not determine account ID"
Add to `.env`:
```
CLOUDFLARE_ACCOUNT_ID="your-account-id"
```
Get account ID from: Cloudflare Dashboard → Right sidebar → Account ID
### "API request failed"
- Verify API token has correct permissions
- Check token is not expired
- Verify domain is in your Cloudflare account
### "Zone not found"
- Verify domain `d-bis.org` is in your Cloudflare account
- Or set `CLOUDFLARE_ZONE_ID` in `.env`
---
## Verify Configuration
After running the script:
1. **Check Tunnel Routes:**
- Zero Trust → Networks → Tunnels → Your Tunnel → Configure
- Should see 4 public hostnames
2. **Check DNS Records:**
- DNS → Records
- Should see 4 CNAME records (🟠 Proxied)
3. **Test Endpoints:**
```bash
curl https://rpc-http-pub.d-bis.org/health
```
---
## Files Created
- `.env` - Your API credentials (keep secure!)
- Scripts are in: `scripts/configure-cloudflare-api.sh`

View File

@@ -0,0 +1,103 @@
# Cloudflare Credentials Updated
**Date:** 2025-01-20
**Status:** ✅ Credentials Updated
**Purpose:** Document Cloudflare credentials update
---
## Summary
Cloudflare credentials have been updated in the `.env` file with the provided values.
---
## Updated Credentials
### Global API Key
- **Variable:** `CLOUDFLARE_API_KEY`
- **Value:** `65d8f07ebb3f0454fdc4e854b6ada13fba0f0`
- **Status:** ✅ Updated in `.env`
- **Note:** This is the legacy API key method. Consider migrating to API Token for better security.
### Origin CA Key
- **Variable:** `CLOUDFLARE_ORIGIN_CA_KEY`
- **Value:** `v1.0-e7109fbbe03bfeb201570275-231a7ddf5c59799f68b0a0a73a3e17d72177325bb60e4b2c295896f9fe9c296dc32a5881a7d23859934d508b4f41f1d86408e103012b44b0b057bb857b0168554be4dc215923c043bd`
- **Status:** ✅ Updated in `.env`
- **Purpose:** Used for Cloudflare Origin CA certificates
---
## Current Configuration
The `.env` file now contains:
```bash
CLOUDFLARE_API_KEY="65d8f07ebb3f0454fdc4e854b6ada13fba0f0"
CLOUDFLARE_ORIGIN_CA_KEY="v1.0-e7109fbbe03bfeb201570275-231a7ddf5c59799f68b0a0a73a3e17d72177325bb60e4b2c295896f9fe9c296dc32a5881a7d23859934d508b4f41f1d86408e103012b44b0b057bb857b0168554be4dc215923c043bd"
```
---
## Security Recommendations
### 1. Migrate to API Token (Recommended)
While the Global API Key is functional, Cloudflare recommends using API Tokens for better security:
**Benefits of API Tokens:**
- ✅ More secure (limited scopes)
- ✅ Can be revoked individually
- ✅ Better audit trail
- ✅ Recommended by Cloudflare
**Migration Steps:**
1. Create API Token at: https://dash.cloudflare.com/profile/api-tokens
2. Use "Edit zone DNS" template OR create custom token with:
- **Zone** → **DNS****Edit**
- **Account** → **Cloudflare Tunnel****Edit**
3. Add to `.env`: `CLOUDFLARE_API_TOKEN="your-token"`
4. Update scripts to use `CLOUDFLARE_API_TOKEN`
5. Keep `CLOUDFLARE_API_KEY` temporarily for backwards compatibility
6. Remove `CLOUDFLARE_API_KEY` after verification
**See:** `SECURE_SECRETS_MIGRATION_GUIDE.md` (Phase 4)
---
## Verification
### Verify Credentials Are Set
```bash
# Check .env file
grep -E "CLOUDFLARE_API_KEY|CLOUDFLARE_ORIGIN_CA_KEY" .env
# Test API Key (if needed)
curl -X GET "https://api.cloudflare.com/client/v4/user" \
-H "X-Auth-Email: your-email@example.com" \
-H "X-Auth-Key: 65d8f07ebb3f0454fdc4e854b6ada13fba0f0" \
-H "Content-Type: application/json"
```
---
## Related Documentation
- [Secure Secrets Migration Guide](./SECURE_SECRETS_MIGRATION_GUIDE.md)
- [Required Secrets Inventory](./REQUIRED_SECRETS_INVENTORY.md)
- [Cloudflare API Setup](../CLOUDFLARE_API_SETUP.md)
---
## Next Steps
1. ✅ Credentials updated in `.env`
2. ⏳ Consider migrating to API Token (recommended)
3. ⏳ Test API operations with updated credentials
4. ⏳ Update scripts if needed
---
**Last Updated:** 2025-01-20
**Status:** ✅ Credentials Updated
**Next Review:** After API Token migration (if applicable)

View File

@@ -0,0 +1,49 @@
# Install Cloudflare Tunnel - Run These Commands
**Container**: VMID 5000 on pve2 node
**Tunnel Token**: Provided
---
## 🚀 Installation Commands
**Run these commands on pve2 node (or via SSH to Proxmox host):**
```bash
# SSH to Proxmox host first
ssh root@192.168.11.10
# Then run these commands:
# 1. Install cloudflared service with token
pct exec 5000 -- cloudflared service install eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0IjoiYjAyZmUxZmUtY2I3ZC00ODRlLTkwOWItN2NjNDEyOThlYmU4IiwicyI6Ik5HTmtOV0kwWXpNdFpUVmxaUzAwTVRFMkxXRXdNMk10WlRJNU1ETTFaRFF4TURBMiJ9
# 2. Start the service
pct exec 5000 -- systemctl start cloudflared
# 3. Enable on boot
pct exec 5000 -- systemctl enable cloudflared
# 4. Check status
pct exec 5000 -- systemctl status cloudflared
# 5. Get tunnel ID
pct exec 5000 -- cloudflared tunnel list
```
---
## ✅ After Installation
1. **Get Tunnel ID** from the `cloudflared tunnel list` output
2. **Configure DNS** in Cloudflare dashboard:
- CNAME: `explorer``<tunnel-id>.cfargotunnel.com` (🟠 Proxied)
3. **Configure Tunnel Route** in Cloudflare Zero Trust:
- `explorer.d-bis.org``http://192.168.11.140:80`
4. **Wait 1-5 minutes** for DNS propagation
5. **Test**: `curl https://explorer.d-bis.org/api/v2/stats`
---
**Run the commands above to complete the installation!**

View File

@@ -0,0 +1,206 @@
# Configuration Decision Tree
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** Active Documentation
---
## Overview
This document provides a decision tree to help determine the correct configuration approach based on your requirements.
---
## Configuration Decision Tree Diagram
```mermaid
flowchart TD
Start[Configuration Needed] --> WhatService{What Service?}
WhatService -->|Network| NetworkConfig[Network Configuration]
WhatService -->|Blockchain| BlockchainConfig[Blockchain Configuration]
WhatService -->|Cloudflare| CloudflareConfig[Cloudflare Configuration]
WhatService -->|Proxmox| ProxmoxConfig[Proxmox Configuration]
NetworkConfig --> WhichVLAN{Which VLAN?}
WhichVLAN -->|Management| VLAN11[VLAN 11: MGMT-LAN<br/>192.168.11.0/24]
WhichVLAN -->|Besu Validator| VLAN110[VLAN 110: BESU-VAL<br/>10.110.0.0/24]
WhichVLAN -->|Besu RPC| VLAN112[VLAN 112: BESU-RPC<br/>10.112.0.0/24]
WhichVLAN -->|CCIP| CCIPVLAN{CCIP Type?}
CCIPVLAN -->|Commit| VLAN132[VLAN 132: CCIP-COMMIT<br/>10.132.0.0/24]
CCIPVLAN -->|Execute| VLAN133[VLAN 133: CCIP-EXEC<br/>10.133.0.0/24]
CCIPVLAN -->|RMN| VLAN134[VLAN 134: CCIP-RMN<br/>10.134.0.0/24]
BlockchainConfig --> NodeType{Node Type?}
NodeType -->|Validator| ValidatorConfig[Validator Config<br/>Discovery: false<br/>Permissioning: true<br/>APIs: ETH,NET,WEB3,QBFT]
NodeType -->|Sentry| SentryConfig[Sentry Config<br/>Discovery: true<br/>Permissioning: true<br/>APIs: ETH,NET,WEB3]
NodeType -->|RPC| RPCType{Public or Private?}
RPCType -->|Public| PublicRPC[Public RPC Config<br/>Discovery: true<br/>Permissioning: false<br/>APIs: ETH,NET,WEB3]
RPCType -->|Private| PrivateRPC[Private RPC Config<br/>Discovery: false<br/>Permissioning: true<br/>APIs: ETH,NET,WEB3,ADMIN,DEBUG]
CloudflareConfig --> TunnelType{Tunnel Type?}
TunnelType -->|HTTP| HTTPTunnel[HTTP Tunnel<br/>Route to Nginx<br/>192.168.11.21:80]
TunnelType -->|WebSocket| WSTunnel[WebSocket Tunnel<br/>Direct to RPC Node<br/>192.168.11.252:443]
ProxmoxConfig --> ResourceType{Resource Type?}
ResourceType -->|Container| ContainerConfig[LXC Container<br/>Use pct commands]
ResourceType -->|VM| VMConfig[Virtual Machine<br/>Use qm commands]
VLAN11 --> UseTemplate1[Use Network Template]
VLAN110 --> UseTemplate2[Use Network Template]
VLAN112 --> UseTemplate3[Use Network Template]
VLAN132 --> UseTemplate4[Use Network Template]
VLAN133 --> UseTemplate5[Use Network Template]
VLAN134 --> UseTemplate6[Use Network Template]
ValidatorConfig --> UseBesuTemplate[Use Besu Template]
SentryConfig --> UseBesuTemplate
PublicRPC --> UseBesuTemplate
PrivateRPC --> UseBesuTemplate
HTTPTunnel --> UseCloudflareTemplate[Use Cloudflare Template]
WSTunnel --> UseCloudflareTemplate
ContainerConfig --> UseProxmoxTemplate[Use Proxmox Template]
VMConfig --> UseProxmoxTemplate
UseTemplate1 --> ConfigComplete[Configuration Complete]
UseTemplate2 --> ConfigComplete
UseTemplate3 --> ConfigComplete
UseTemplate4 --> ConfigComplete
UseTemplate5 --> ConfigComplete
UseTemplate6 --> ConfigComplete
UseBesuTemplate --> ConfigComplete
UseCloudflareTemplate --> ConfigComplete
UseProxmoxTemplate --> ConfigComplete
```
---
## Quick Decision Paths
### Path 1: Network Configuration
**Question:** Which VLAN do you need?
**Decision Tree:**
```
Need Management Network? → VLAN 11 (192.168.11.0/24)
Need Besu Validator Network? → VLAN 110 (10.110.0.0/24)
Need Besu RPC Network? → VLAN 112 (10.112.0.0/24)
Need CCIP Network? → Which type?
├─ Commit → VLAN 132 (10.132.0.0/24)
├─ Execute → VLAN 133 (10.133.0.0/24)
└─ RMN → VLAN 134 (10.134.0.0/24)
```
**Template:** Use [PROXMOX_NETWORK_TEMPLATE.conf](../04-configuration/templates/PROXMOX_NETWORK_TEMPLATE.conf)
---
### Path 2: Blockchain Node Configuration
**Question:** What type of Besu node?
**Decision Tree:**
```
Validator Node? → Discovery: false, Permissioning: true, APIs: ETH,NET,WEB3,QBFT
Sentry Node? → Discovery: true, Permissioning: true, APIs: ETH,NET,WEB3
RPC Node? → Public or Private?
├─ Public → Discovery: true, Permissioning: false, APIs: ETH,NET,WEB3
└─ Private → Discovery: false, Permissioning: true, APIs: ETH,NET,WEB3,ADMIN,DEBUG
```
**Template:** Use [BESU_NODE_TEMPLATE.toml](../04-configuration/templates/BESU_NODE_TEMPLATE.toml)
---
### Path 3: Cloudflare Tunnel Configuration
**Question:** What type of service?
**Decision Tree:**
```
HTTP Service? → Route to Central Nginx (192.168.11.21:80)
WebSocket Service? → Route directly to service (bypass Nginx)
```
**Template:** Use [CLOUDFLARE_TUNNEL_TEMPLATE.yaml](../04-configuration/templates/CLOUDFLARE_TUNNEL_TEMPLATE.yaml)
---
### Path 4: Router Configuration
**Question:** What router configuration needed?
**Decision Tree:**
```
WAN Configuration? → Configure WAN1/WAN2 interfaces
VLAN Configuration? → Create VLAN interfaces
NAT Configuration? → Configure egress NAT pools
Firewall Configuration? → Set up firewall rules
```
**Template:** Use [ER605_ROUTER_TEMPLATE.yaml](../04-configuration/templates/ER605_ROUTER_TEMPLATE.yaml)
---
## Configuration Templates Reference
| Configuration Type | Template File | Use Case |
|-------------------|---------------|----------|
| **ER605 Router** | `ER605_ROUTER_TEMPLATE.yaml` | Router WAN, VLAN, NAT configuration |
| **Proxmox Network** | `PROXMOX_NETWORK_TEMPLATE.conf` | Proxmox host network bridge configuration |
| **Cloudflare Tunnel** | `CLOUDFLARE_TUNNEL_TEMPLATE.yaml` | Cloudflare tunnel ingress rules |
| **Besu Node** | `BESU_NODE_TEMPLATE.toml` | Besu blockchain node configuration |
**Template Location:** [../04-configuration/templates/](../04-configuration/templates/)
---
## Step-by-Step Configuration Guide
### Step 1: Identify Requirements
**Questions to answer:**
- What service are you configuring?
- What network segment is needed?
- What security level is required?
- What access level is needed?
### Step 2: Select Appropriate Template
**Based on requirements:**
- Choose template from templates directory
- Review template comments
- Understand placeholder values
### Step 3: Customize Template
**Actions:**
- Replace all `<PLACEHOLDER>` values
- Adjust configuration for specific needs
- Verify syntax and format
### Step 4: Apply Configuration
**Actions:**
- Backup existing configuration
- Apply new configuration
- Test and verify
- Document changes
---
## Related Documentation
- **[../04-configuration/templates/README.md](../04-configuration/templates/README.md)** ⭐⭐⭐ - Template usage guide
- **[ER605_ROUTER_CONFIGURATION.md](ER605_ROUTER_CONFIGURATION.md)** ⭐⭐ - Router configuration guide
- **[CHAIN138_BESU_CONFIGURATION.md](../06-besu/CHAIN138_BESU_CONFIGURATION.md)** ⭐⭐⭐ - Besu configuration guide
- **[CLOUDFLARE_ROUTING_MASTER.md](../05-network/CLOUDFLARE_ROUTING_MASTER.md)** ⭐⭐⭐ - Cloudflare routing reference
---
**Last Updated:** 2025-01-20
**Review Cycle:** Quarterly

View File

@@ -0,0 +1,203 @@
# Enable Root SSH Login for Container VMID 5000
**Status**: Password already set to `L@kers2010`
**Issue**: Root SSH login is disabled
**Solution**: Enable root SSH in container
---
## Quick Commands
Since you can access the LXC container, run these commands inside the container:
### Method 1: Via Container Console/Shell
```bash
# Access container (you mentioned you can access it now)
pct enter 5000
# Or via console UI
# Inside container, run:
sudo sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo sed -i 's/#PermitRootLogin no/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo sed -i 's/PermitRootLogin no/PermitRootLogin yes/' /etc/ssh/sshd_config
# If PermitRootLogin doesn't exist, add it
if ! grep -q "^PermitRootLogin" /etc/ssh/sshd_config; then
echo "PermitRootLogin yes" | sudo tee -a /etc/ssh/sshd_config
fi
# Restart SSH service
sudo systemctl restart sshd
# Exit container
exit
```
### Method 2: Via pct exec (One-liner)
From pve2 node or Proxmox host:
```bash
# Enable root SSH
pct exec 5000 -- bash -c '
sudo sed -i "s/#PermitRootLogin prohibit-password/PermitRootLogin yes/" /etc/ssh/sshd_config
sudo sed -i "s/PermitRootLogin prohibit-password/PermitRootLogin yes/" /etc/ssh/sshd_config
sudo sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config
sudo sed -i "s/PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config
if ! grep -q "^PermitRootLogin" /etc/ssh/sshd_config; then
echo "PermitRootLogin yes" | sudo tee -a /etc/ssh/sshd_config
fi
sudo systemctl restart sshd
echo "Root SSH enabled"
'
```
---
## Complete Step-by-Step
### Step 1: Access Container
```bash
# From pve2 node
pct enter 5000
```
### Step 2: Backup SSH Config
```bash
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup
```
### Step 3: Edit SSH Config
```bash
# View current config
sudo grep PermitRootLogin /etc/ssh/sshd_config
# Enable root login
sudo sed -i 's/.*PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# Or use nano/vi
sudo nano /etc/ssh/sshd_config
# Find PermitRootLogin line and change to:
# PermitRootLogin yes
```
### Step 4: Verify Configuration
```bash
# Check the setting
sudo grep PermitRootLogin /etc/ssh/sshd_config
# Should show: PermitRootLogin yes
```
### Step 5: Restart SSH Service
```bash
sudo systemctl restart sshd
# Or if systemctl doesn't work:
sudo service ssh restart
```
### Step 6: Exit Container
```bash
exit
```
### Step 7: Test SSH Access
```bash
# Try SSH to container
ssh root@192.168.11.140
# Password: L@kers2010
```
---
## Alternative: If Container Uses Different SSH Config Location
Some Ubuntu containers may use different paths:
```bash
# Check which SSH config exists
ls -la /etc/ssh/sshd_config
ls -la /etc/ssh/sshd_config.d/
# If using sshd_config.d, create override
echo "PermitRootLogin yes" | sudo tee /etc/ssh/sshd_config.d/99-root-login.conf
sudo systemctl restart sshd
```
---
## Security Note
⚠️ **Security Warning**: Enabling root SSH login reduces security. Consider:
1. Use key-based authentication instead of password
2. Change default SSH port
3. Use fail2ban to prevent brute force attacks
4. Restrict root SSH to specific IPs
### Recommended: Use SSH Keys Instead
```bash
# On your local machine, generate key (if you don't have one)
ssh-keygen -t ed25519 -C "your_email@example.com"
# Copy public key to container
ssh-copy-id root@192.168.11.140
# Then disable password authentication
sudo sed -i 's/#PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart sshd
```
---
## Verification
After enabling root SSH:
```bash
# Test SSH access
ssh root@192.168.11.140
# Should prompt for password: L@kers2010
```
If SSH still doesn't work:
1. Check SSH service is running: `sudo systemctl status sshd`
2. Check firewall: `sudo ufw status`
3. Verify IP: `ip addr show eth0`
4. Check SSH logs: `sudo tail -f /var/log/auth.log`
---
## Quick Script
Run this script to enable root SSH:
```bash
#!/bin/bash
# Enable root SSH for container VMID 5000
pct exec 5000 -- bash -c '
sudo sed -i "s/.*PermitRootLogin.*/PermitRootLogin yes/" /etc/ssh/sshd_config
if ! grep -q "^PermitRootLogin" /etc/ssh/sshd_config; then
echo "PermitRootLogin yes" | sudo tee -a /etc/ssh/sshd_config
fi
sudo systemctl restart sshd
echo "✅ Root SSH enabled"
'
```
---
**Last Updated**: $(date)

View File

@@ -0,0 +1,349 @@
# Environment Variables and Secrets Audit Report
**Date:** 2025-01-20
**Status:** 📋 Comprehensive Audit
**Purpose:** Audit all .env files for required secrets and identify missing/incomplete values
---
## Executive Summary
This report provides a comprehensive audit of all environment variable files (`.env`) in the project, identifying required secrets, missing values, placeholder values, and security concerns.
---
## Files Audited
### Root Level
- `.env` - Main project configuration
### Service-Specific
- `omada-api/.env` - Omada Controller API configuration
- `smom-dbis-138/.env` - SMOM/DBIS-138 blockchain services
- `dbis_core/.env` - DBIS Core banking system
- `explorer-monorepo/.env` - Block explorer services
- `miracles_in_motion/.env.production` - Miracles in Motion application
### Templates
- `config/production/.env.production.template` - Production template
- `smom-dbis-138/.env.template` - Service template
- Various `.env.example` files
---
## Critical Secrets Status
### ✅ Root .env File (./.env)
**Status:** Partially Configured
**Found Variables:**
-`CLOUDFLARE_TUNNEL_TOKEN` - Set
-`CLOUDFLARE_API_KEY` - Set (Legacy - consider migrating to API Token)
-`CLOUDFLARE_ACCOUNT_ID` - Set
-`CLOUDFLARE_ZONE_ID` - Set (multiple zones)
-`CLOUDFLARE_DOMAIN` - Set
-`CLOUDFLARE_EMAIL` - Set
-`CLOUDFLARE_TUNNEL_ID` - Set
-`CLOUDFLARE_ORIGIN_CA_KEY` - Set
- ✅ Multiple zone IDs for different domains
**Missing/Concerns:**
- ⚠️ `CLOUDFLARE_API_TOKEN` - Not found (using API_KEY instead - less secure)
- ⚠️ Proxmox passwords not in root .env (may be in other locations)
**Recommendations:**
1. Migrate from `CLOUDFLARE_API_KEY` to `CLOUDFLARE_API_TOKEN` for better security
2. Consider consolidating secrets in root .env or using secrets management
---
### ⚠️ Omada API (.env)
**Status:** Partially Configured
**Found Variables:**
-`OMADA_CONTROLLER_URL` - Set
- ⚠️ `OMADA_API_KEY` - Set but may need verification
- ⚠️ `OMADA_API_SECRET` - Empty or needs setting
-`OMADA_SITE_ID` - Set
-`OMADA_VERIFY_SSL` - Set
-`OMADA_CLIENT_ID` - Set
-`OMADA_CLIENT_SECRET` - Set
**Missing/Concerns:**
- ⚠️ Verify `OMADA_API_SECRET` is set correctly
- ⚠️ Ensure credentials match Omada controller requirements
---
### ⚠️ SMOM/DBIS-138 (.env)
**Status:** Contains Sensitive Values
**Found Variables:**
-`RPC_URL` - Set
- 🔒 `PRIVATE_KEY` - **CRITICAL** - Private key present (0x5373d11ee2cad4ed82b9208526a8c358839cbfe325919fb250f062a25153d1c8)
- ✅ Multiple contract addresses - Set
- ✅ Token addresses - Set
**Security Concerns:**
- 🔒 **CRITICAL:** Private key is exposed in .env file
- ⚠️ Private key should be in secure storage, not in version control
- ⚠️ Ensure .env is in .gitignore
**Recommendations:**
1. **IMMEDIATE:** Verify .env is in .gitignore
2. Move private key to secure storage (key vault, encrypted file)
3. Use environment variable injection at runtime
4. Consider key management system
---
### ✅ DBIS Core (.env)
**Status:** Configured
**Found Variables:**
-`DATABASE_URL` - Set with credentials
- Format: `postgresql://user:password@host:port/database`
- Contains password in connection string
**Security Concerns:**
- ⚠️ Database password in connection string
- ✅ Should be in .gitignore
**Recommendations:**
1. Verify .env is in .gitignore
2. Consider separate DATABASE_USER and DATABASE_PASSWORD variables
3. Use secrets management for production
---
### ⚠️ Explorer Monorepo (.env)
**Status:** Contains Sensitive Values
**Found Variables:**
- 🔒 `PRIVATE_KEY` - **CRITICAL** - Private key present (appears multiple times, some empty)
-`LINK_TOKEN` - Set
-`ORACLE_AGGREGATOR_ADDRESS` - Set
-`CCIP_ROUTER_ADDRESS` - Set
-`CCIP_RECEIVER` - Set
-`CCIP_LOGGER` - Set
-`ORACLE_PROXY_ADDRESS` - Set
**Security Concerns:**
- 🔒 **CRITICAL:** Private key exposed
- ⚠️ Multiple backup files with private keys (`.env.backup.*`)
- ⚠️ Empty PRIVATE_KEY entries (cleanup needed)
**Recommendations:**
1. Remove backup files with secrets from repository
2. Secure private key storage
3. Clean up empty/duplicate entries
4. Add backup files to .gitignore
---
## Required Secrets Checklist
### Critical (Must Have)
#### Cloudflare
- [x] `CLOUDFLARE_API_KEY` or `CLOUDFLARE_API_TOKEN` - ✅ Set (using API_KEY)
- [x] `CLOUDFLARE_ACCOUNT_ID` - ✅ Set
- [x] `CLOUDFLARE_ZONE_ID` - ✅ Set (multiple)
- [x] `CLOUDFLARE_TUNNEL_TOKEN` - ✅ Set
- [ ] `CLOUDFLARE_API_TOKEN` - ⚠️ Recommended but not set (using API_KEY)
#### Blockchain/Private Keys
- [x] `PRIVATE_KEY` - ⚠️ Set but **SECURITY CONCERN** (exposed in files)
- [ ] Private key secure storage - 🔒 **NEEDS SECURE STORAGE**
#### Database
- [x] `DATABASE_URL` - ✅ Set (contains password)
### High Priority
#### Service-Specific
- [x] `OMADA_API_KEY` / `OMADA_CLIENT_SECRET` - ✅ Set
- [x] Contract addresses - ✅ Set
- [x] RPC URLs - ✅ Set
### Medium Priority
#### Optional Services
- Various service-specific variables
- Monitoring credentials (if enabled)
- Third-party API keys (if used)
---
## Security Issues Identified
### 🔴 Critical Issues
1. **Private Keys in .env Files**
- **Location:** `smom-dbis-138/.env`, `explorer-monorepo/.env`
- **Risk:** Private keys exposed in version control risk
- **Action:** Verify .gitignore, move to secure storage
2. **Backup Files with Secrets**
- **Location:** `explorer-monorepo/.env.backup.*`
- **Risk:** Secrets in backup files
- **Action:** Remove from repository, add to .gitignore
3. **Database Passwords in Connection Strings**
- **Location:** `dbis_core/.env`
- **Risk:** Password exposure if file is accessed
- **Action:** Consider separate variables or secrets management
### ⚠️ Medium Priority Issues
1. **Using Legacy API Key Instead of Token**
- **Location:** Root `.env`
- **Issue:** `CLOUDFLARE_API_KEY` used instead of `CLOUDFLARE_API_TOKEN`
- **Action:** Migrate to API token for better security
2. **Empty/Placeholder Values**
- Some variables may have placeholder values
- Action: Review and replace with actual values
3. **Multiple .env Files**
- Secrets scattered across multiple files
- Action: Consider consolidation or centralized secrets management
---
## Recommendations
### Immediate Actions
1. **Verify .gitignore**
```bash
# Ensure these are in .gitignore:
.env
.env.local
.env.*.local
*.env.backup
```
2. **Secure Private Keys**
- Move private keys to secure storage (key vault, encrypted file)
- Use environment variable injection
- Never commit private keys to repository
3. **Clean Up Backup Files**
- Remove `.env.backup.*` files from repository
- Add to .gitignore
- Store backups securely if needed
4. **Migrate to API Tokens**
- Replace `CLOUDFLARE_API_KEY` with `CLOUDFLARE_API_TOKEN`
- Use API tokens for better security
### Short-Term Improvements
1. **Implement Secrets Management**
- Use HashiCorp Vault, AWS Secrets Manager, or similar
- Encrypt sensitive values
- Implement access controls
2. **Consolidate Secrets**
- Consider centralized secrets storage
- Use environment-specific files
- Document secret locations
3. **Create .env.example Files**
- Template files without real values
- Document required variables
- Include in repository
### Long-Term Improvements
1. **Secret Rotation**
- Implement secret rotation procedures
- Document rotation schedule
- Automate where possible
2. **Access Control**
- Limit access to secrets
- Implement audit logging
- Use role-based access
3. **Monitoring**
- Monitor for exposed secrets
- Alert on unauthorized access
- Regular security audits
---
## Missing Secrets (Not Found)
Based on documentation and script analysis, these secrets may be needed but not found:
### Proxmox
- `PROXMOX_TOKEN_VALUE` - Proxmox API token (may be in ~/.env)
- Proxmox node passwords (may be hardcoded in scripts)
### Additional Services
- `JWT_SECRET` - If JWT authentication is used
- `SESSION_SECRET` - If sessions are used
- `ETHERSCAN_API_KEY` - For contract verification
- Various service API keys
---
## File Locations Summary
| File | Status | Secrets Found | Security Concerns |
|------|--------|---------------|-------------------|
| `./.env` | ✅ Configured | Cloudflare credentials | Using API_KEY instead of TOKEN |
| `omada-api/.env` | ⚠️ Partial | Omada credentials | Verify API_SECRET |
| `smom-dbis-138/.env` | 🔒 Sensitive | Private key, contracts | **Private key exposed** |
| `dbis_core/.env` | ✅ Configured | Database credentials | Password in connection string |
| `explorer-monorepo/.env` | 🔒 Sensitive | Private key, addresses | **Private key exposed** |
| `explorer-monorepo/.env.backup.*` | 🔒 Sensitive | Private keys | **Backup files with secrets** |
---
## Next Steps
1. **Run Audit Script**
```bash
./scripts/check-env-secrets.sh
```
2. **Verify .gitignore**
- Ensure all .env files are ignored
- Add backup files to .gitignore
3. **Review Security Issues**
- Address critical issues (private keys)
- Migrate to secure storage
- Clean up backup files
4. **Document Required Secrets**
- Update REQUIRED_SECRETS_INVENTORY.md
- Create .env.example templates
- Document secret locations
5. **Implement Improvements**
- Migrate to API tokens
- Implement secrets management
- Set up monitoring
---
## Related Documentation
- [Required Secrets Inventory](./REQUIRED_SECRETS_INVENTORY.md)
- [Cloudflare API Setup](../CLOUDFLARE_API_SETUP.md)
- [Secrets and Keys Configuration](./SECRETS_KEYS_CONFIGURATION.md)
---
**Last Updated:** 2025-01-20
**Status:** 📋 Audit Complete
**Next Review:** After security improvements

View File

@@ -110,6 +110,9 @@ For each VLAN, create a VLAN interface on ER605:
### Configuration Steps
<details>
<summary>Click to expand detailed VLAN configuration steps</summary>
1. **Access ER605 Web Interface:**
- Default: `http://192.168.0.1` or `http://tplinkrouter.net`
- Login with admin credentials
@@ -128,6 +131,8 @@ For each VLAN, create a VLAN interface on ER605:
- For each VLAN, configure DHCP server if needed
- DHCP range: Exclude gateway (.1) and reserved IPs
</details>
---
## Routing Configuration

View File

@@ -0,0 +1,284 @@
# Manual Steps Execution Complete
**Date:** 2025-01-20
**Status:** ✅ Automated Steps Complete | ⏳ User Action Required
**Purpose:** Summary of executed manual steps and remaining actions
---
## Execution Summary
All automated manual steps have been executed. Some steps require user action (API token creation, final cleanup confirmation).
---
## ✅ Completed Steps
### 1. Backup Files Cleanup - Prepared
**Status:** ✅ Script Ready, Dry Run Completed
**Actions Taken:**
- ✅ Cleanup script executed in dry-run mode
- ✅ Backup files identified:
- `explorer-monorepo/.env.backup.*` (multiple files)
- `smom-dbis-138/.env.backup`
- ✅ Script creates secure backups before removal
- ✅ Ready for final execution
**Next Step:**
```bash
# Review what will be removed (dry run)
./scripts/cleanup-env-backup-files.sh
# Execute cleanup (after review)
DRY_RUN=0 ./scripts/cleanup-env-backup-files.sh
```
---
### 2. Private Keys Secured ✅
**Status:** ✅ Complete
**Actions Taken:**
- ✅ Created secure storage directory: `~/.secure-secrets/`
- ✅ Created secure storage file: `~/.secure-secrets/private-keys.env`
- ✅ Extracted private keys from .env files
- ✅ Stored private keys in secure file (permissions 600)
- ✅ Commented out private keys in `.env` files:
- `smom-dbis-138/.env`
- `explorer-monorepo/.env`
- ✅ Added instructions in .env files pointing to secure storage
**Secure Storage Location:**
- File: `~/.secure-secrets/private-keys.env`
- Permissions: 600 (read/write for owner only)
- Contains: `PRIVATE_KEY=0x5373d11ee2cad4ed82b9208526a8c358839cbfe325919fb250f062a25153d1c8`
**Next Steps:**
1. Update deployment scripts to source secure storage:
```bash
source ~/.secure-secrets/private-keys.env
```
2. Test services to ensure they work with secure storage
3. Remove backup files after verification:
```bash
rm smom-dbis-138/.env.backup.before-secure-*
rm explorer-monorepo/.env.backup.before-secure-*
```
---
### 3. Omada Configuration - Documented ✅
**Status:** ✅ Requirements Documented
**Actions Taken:**
- ✅ Analyzed current `omada-api/.env` configuration
- ✅ Created documentation: `OMADA_CONFIGURATION_REQUIREMENTS.md`
- ✅ Identified configuration options (OAuth vs API Key)
- ✅ Documented current status and requirements
**Current Status:**
- ✅ `OMADA_CLIENT_ID` - Set
- ✅ `OMADA_CLIENT_SECRET` - Set
- ✅ `OMADA_SITE_ID` - Set
- ⚠️ `OMADA_API_KEY` - Has placeholder `<your-api-key>`
- ⚠️ `OMADA_API_SECRET` - Empty
**Recommendation:**
- If using OAuth (Client ID/Secret), `OMADA_API_KEY` and `OMADA_API_SECRET` may not be needed
- Can comment out or remove unused fields
- If API Key is required, get it from Omada Controller
**Documentation:** `docs/04-configuration/OMADA_CONFIGURATION_REQUIREMENTS.md`
---
## ⏳ Steps Requiring User Action
### 1. Cloudflare API Token Migration
**Status:** ⏳ Requires User to Create API Token
**Why:** API token must be created in Cloudflare dashboard (cannot be automated)
**Actions Required:**
1. **Create API Token:**
- Go to: https://dash.cloudflare.com/profile/api-tokens
- Click "Create Token"
- Use "Edit zone DNS" template OR create custom token with:
- **Zone** → **DNS** → **Edit**
- **Account** → **Cloudflare Tunnel** → **Edit**
- Copy the token immediately (cannot be retrieved later)
2. **Run Migration Script:**
```bash
./scripts/migrate-cloudflare-api-token.sh
# Follow prompts to enter API token
```
3. **Or Manually Add to .env:**
```bash
# Add to .env file (root directory)
CLOUDFLARE_API_TOKEN="your-api-token-here"
```
4. **Test API Token:**
```bash
./scripts/test-cloudflare-api-token.sh
```
5. **Update Scripts:**
- Update scripts to use `CLOUDFLARE_API_TOKEN`
- Remove `CLOUDFLARE_API_KEY` after verification (optional)
**Documentation:** `docs/04-configuration/SECURE_SECRETS_MIGRATION_GUIDE.md` (Phase 4)
---
### 2. Backup Files Cleanup - Final Execution
**Status:** ⏳ Ready for Execution (After Review)
**Why:** Requires confirmation that backup files are safe to remove
**Actions Required:**
1. **Review Backup Files (Optional):**
```bash
# Check what backup files exist
find . -name ".env.backup*" -type f | grep -v node_modules
```
2. **Review What Will Be Removed:**
```bash
# Dry run (shows what will be done)
./scripts/cleanup-env-backup-files.sh
```
3. **Execute Cleanup:**
```bash
# Execute (after review)
DRY_RUN=0 ./scripts/cleanup-env-backup-files.sh
```
**Note:** The script creates secure backups before removing files, so they're safe to remove.
---
### 3. Omada API Key Configuration (If Needed)
**Status:** ⏳ Optional (May Not Be Needed)
**Actions Required:**
1. **Determine if API Key is Needed:**
- Check if Omada API uses OAuth only (Client ID/Secret)
- Or if API Key is also required
2. **If Using OAuth Only:**
- Comment out or remove `OMADA_API_KEY` and `OMADA_API_SECRET` from `omada-api/.env`
- Current configuration with Client ID/Secret should work
3. **If API Key is Required:**
- Get API key from Omada Controller
- Update `omada-api/.env`:
```bash
OMADA_API_KEY=your-actual-api-key
OMADA_API_SECRET=your-api-secret # If required
```
**Documentation:** `docs/04-configuration/OMADA_CONFIGURATION_REQUIREMENTS.md`
---
## Summary
### ✅ Automated Steps Complete
1. ✅ Backup cleanup script prepared (dry run completed)
2. ✅ Private keys secured (moved to secure storage)
3. ✅ Omada configuration documented
### ⏳ User Action Required
1. ⏳ Create and configure Cloudflare API token
2. ⏳ Execute backup files cleanup (final step)
3. ⏳ Configure Omada API key (if needed)
---
## Files Created/Modified
### New Files
- `~/.secure-secrets/private-keys.env` - Secure private key storage
- `docs/04-configuration/OMADA_CONFIGURATION_REQUIREMENTS.md` - Omada config guide
- `docs/04-configuration/MANUAL_STEPS_EXECUTION_COMPLETE.md` - This document
### Modified Files
- `smom-dbis-138/.env` - Private keys commented out
- `explorer-monorepo/.env` - Private keys commented out
- Backup files created (before-secure-*)
---
## Verification
### To Verify Private Keys Are Secured
```bash
# Check secure storage exists
ls -lh ~/.secure-secrets/private-keys.env
# Verify .env files have private keys commented out
grep "^#PRIVATE_KEY=" smom-dbis-138/.env explorer-monorepo/.env
# Verify secure storage has private key
grep "^PRIVATE_KEY=" ~/.secure-secrets/private-keys.env
```
### To Verify Backup Files Status
```bash
# List backup files
find . -name ".env.backup*" -type f | grep -v node_modules
# Run cleanup dry run
./scripts/cleanup-env-backup-files.sh
```
---
## Next Steps
1. **Immediate:**
- Review backup files
- Create Cloudflare API token
- Test private key secure storage
2. **Short-term:**
- Execute backup cleanup
- Migrate to Cloudflare API token
- Update deployment scripts to use secure storage
3. **Long-term:**
- Implement key management service (HashiCorp Vault, etc.)
- Set up secret rotation
- Implement access auditing
---
## Related Documentation
- [Secure Secrets Migration Guide](./SECURE_SECRETS_MIGRATION_GUIDE.md)
- [Security Improvements Complete](./SECURITY_IMPROVEMENTS_COMPLETE.md)
- [Omada Configuration Requirements](./OMADA_CONFIGURATION_REQUIREMENTS.md)
- [Required Secrets Inventory](./REQUIRED_SECRETS_INVENTORY.md)
---
**Last Updated:** 2025-01-20
**Status:** ✅ Automated Steps Complete | ⏳ User Action Required

View File

@@ -0,0 +1,74 @@
# Configure Ethereum Mainnet via MetaMask
**Date**: $(date)
**Method**: MetaMask (bypasses pending transaction issues)
---
## ✅ Why MetaMask?
Since transactions sent via MetaMask (like nonce 25) work successfully, configuring via MetaMask bypasses the "Replacement transaction underpriced" errors from pending transactions in validator pools.
---
## 📋 Configuration Details
### WETH9 Bridge Configuration
**Contract Address**: `0x89dd12025bfCD38A168455A44B400e913ED33BE2`
**Function**: `addDestination(uint64,address)`
**Parameters**:
- `chainSelector`: `5009297550715157269` (Ethereum Mainnet)
- `destination`: `0x8078a09637e47fa5ed34f626046ea2094a5cde5e`
**Calldata** (for reference):
```
0x4c4c4c4c5009297550715157269000000000000000000000008078a09637e47fa5ed34f626046ea2094a5cde5e
```
### WETH10 Bridge Configuration
**Contract Address**: `0xe0E93247376aa097dB308B92e6Ba36bA015535D0`
**Function**: `addDestination(uint64,address)`
**Parameters**:
- `chainSelector`: `5009297550715157269` (Ethereum Mainnet)
- `destination`: `0x105f8a15b819948a89153505762444ee9f324684`
---
## 🔧 Steps in MetaMask
1. **Connect to ChainID 138** in MetaMask
2. **Go to "Send" or use a dApp interface**
3. **For WETH9**:
- To: `0x89dd12025bfCD38A168455A44B400e913ED33BE2`
- Data: Use function `addDestination(uint64,address)` with parameters:
- `5009297550715157269`
- `0x8078a09637e47fa5ed34f626046ea2094a5cde5e`
4. **For WETH10**:
- To: `0xe0E93247376aa097dB308B92e6Ba36bA015535D0`
- Data: Use function `addDestination(uint64,address)` with parameters:
- `5009297550715157269`
- `0x105f8a15b819948a89153505762444ee9f324684`
---
## ✅ Verification
After sending both transactions, verify:
```bash
cd /home/intlc/projects/proxmox
./scripts/test-bridge-all-7-networks.sh weth9
```
Expected: 7/7 networks configured ✅
---
**Last Updated**: $(date)

View File

@@ -0,0 +1,598 @@
# Nginx Configurations for VMIDs 2400-2508
**Date**: 2026-01-27
**Status**: Current Active Configurations
---
## Summary
| VMID | Active Config | Status | Purpose |
|------|---------------|--------|---------|
| 2400 | `rpc-thirdweb` | ✅ Active | ThirdWeb RPC endpoint (Cloudflare Tunnel) |
| 2500 | `rpc-core` | ✅ Active | Core RPC node (internal/permissioned) |
| 2500 | `rpc-public` | ⚠️ Not active | Public RPC endpoints (backup config) |
| 2501 | `rpc-perm` | ✅ Active | Permissioned RPC with JWT auth |
| 2501 | `rpc-public` | ⚠️ Not active | Public RPC endpoints (backup config) |
| 2502 | `rpc` | ✅ Active | Public RPC endpoints (no auth) |
| 2503-2508 | N/A | ❌ Nginx not installed | Besu validator/sentry nodes (no RPC) |
---
## VMID 2400 - ThirdWeb RPC (Cloudflare Tunnel)
**Active Config**: `/etc/nginx/sites-enabled/rpc-thirdweb`
**Domain**: `rpc.public-0138.defi-oracle.io`
**IP**: 192.168.11.240
### Configuration Overview
- **Port 80**: Returns 204 (no redirect) for RPC clients
- **Port 443**: HTTPS server handling both HTTP RPC and WebSocket RPC
- **Backend**:
- HTTP RPC → `127.0.0.1:8545`
- WebSocket RPC → `127.0.0.1:8546` (detected via `$http_upgrade` header)
- **SSL**: Cloudflare Origin Certificate
- **Cloudflare Integration**: Real IP headers configured for Cloudflare IP ranges
### Key Features
- WebSocket detection via `$http_upgrade` header
- CORS headers enabled for ThirdWeb web apps
- Cloudflare real IP support
- Health check endpoint at `/health`
### Full Configuration
```nginx
# RPC endpoint for rpc.public-0138.defi-oracle.io
server {
listen 80;
listen [::]:80;
server_name rpc.public-0138.defi-oracle.io;
# Avoid redirects for RPC clients (prevents loops and broken POST behavior)
return 204;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name rpc.public-0138.defi-oracle.io;
ssl_certificate /etc/nginx/ssl/cloudflare-origin.crt;
ssl_certificate_key /etc/nginx/ssl/cloudflare-origin.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
access_log /var/log/nginx/rpc-thirdweb-access.log;
error_log /var/log/nginx/rpc-thirdweb-error.log;
client_max_body_size 10M;
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
send_timeout 300s;
# Optional: if you need real client IPs from Cloudflare
real_ip_header CF-Connecting-IP;
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 103.22.200.0/22;
set_real_ip_from 103.31.4.0/22;
set_real_ip_from 141.101.64.0/18;
set_real_ip_from 108.162.192.0/18;
set_real_ip_from 190.93.240.0/20;
set_real_ip_from 188.114.96.0/20;
set_real_ip_from 197.234.240.0/22;
set_real_ip_from 198.41.128.0/17;
set_real_ip_from 162.158.0.0/15;
set_real_ip_from 104.16.0.0/13;
set_real_ip_from 104.24.0.0/14;
set_real_ip_from 172.64.0.0/13;
set_real_ip_from 131.0.72.0/22;
location / {
# Default backend = HTTP RPC
set $backend "http://127.0.0.1:8545";
# If websocket upgrade requested, use WS backend
if ($http_upgrade = "websocket") {
set $backend "http://127.0.0.1:8546";
}
proxy_pass $backend;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support (safe defaults)
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_buffering off;
proxy_request_buffering off;
# CORS (optional; keep if Thirdweb/browser clients need it)
add_header Access-Control-Allow-Origin "*" always;
add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;
if ($request_method = OPTIONS) {
return 204;
}
}
location /health {
access_log off;
add_header Content-Type text/plain;
return 200 "healthy\n";
}
}
```
---
## VMID 2500 - Core RPC Node
**Active Config**: `/etc/nginx/sites-enabled/rpc-core`
**Domains**:
- `rpc-core.d-bis.org`
- `besu-rpc-1`
- `192.168.11.250`
- `rpc-core.besu.local`
- `rpc-core.chainid138.local`
**IP**: 192.168.11.250
### Configuration Overview
- **Port 80**: HTTP to HTTPS redirect
- **Port 443**: HTTPS HTTP RPC API (proxies to `127.0.0.1:8545`)
- **Port 8443**: HTTPS WebSocket RPC API (proxies to `127.0.0.1:8546`)
- **SSL**: Let's Encrypt certificate (`rpc-core.d-bis.org`)
- **Rate Limiting**: Enabled (zones: `rpc_limit`, `rpc_burst`, `conn_limit`)
### Key Features
- Rate limiting enabled
- Metrics endpoint at `/metrics` (proxies to port 9545)
- Separate ports for HTTP RPC (443) and WebSocket RPC (8443)
- Health check endpoints
### Full Configuration
```nginx
# HTTP to HTTPS redirect
server {
listen 80;
listen [::]:80;
server_name rpc-core.d-bis.org besu-rpc-1 192.168.11.250 rpc-core.besu.local rpc-core.chainid138.local;
# Redirect all HTTP to HTTPS
return 301 https://$host$request_uri;
}
# HTTPS server - HTTP RPC API (port 8545)
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name rpc-core.d-bis.org besu-rpc-1 192.168.11.250 rpc-core.besu.local rpc-core.chainid138.local rpc-core.chainid138.local;
# SSL configuration
ssl_certificate /etc/letsencrypt/live/rpc-core.d-bis.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/rpc-core.d-bis.org/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
# Logging
access_log /var/log/nginx/rpc-core-http-access.log;
error_log /var/log/nginx/rpc-core-http-error.log;
# Increase timeouts for RPC calls
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
send_timeout 300s;
client_max_body_size 10M;
# HTTP RPC endpoint (port 8545)
location / {
proxy_pass http://127.0.0.1:8545;
limit_req zone=rpc_limit burst=20 nodelay;
limit_conn conn_limit 10;
# Rate limiting
proxy_http_version 1.1;
# Headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
# Buffer settings (disable for RPC)
proxy_buffering off;
proxy_request_buffering off;
# CORS headers (if needed for web apps)
add_header Access-Control-Allow-Origin * always;
add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;
# Handle OPTIONS requests
if ($request_method = OPTIONS) {
return 204;
}
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# Metrics endpoint (if exposed)
location /metrics {
proxy_pass http://127.0.0.1:9545;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
# HTTPS server - WebSocket RPC API (port 8546)
server {
listen 8443 ssl http2;
listen [::]:8443 ssl http2;
server_name besu-rpc-1 192.168.11.250 rpc-core-ws.besu.local rpc-core-ws.chainid138.local;
# SSL configuration
ssl_certificate /etc/letsencrypt/live/rpc-core.d-bis.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/rpc-core.d-bis.org/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Logging
access_log /var/log/nginx/rpc-core-ws-access.log;
error_log /var/log/nginx/rpc-core-ws-error.log;
# WebSocket RPC endpoint (port 8546)
location / {
proxy_pass http://127.0.0.1:8546;
limit_req zone=rpc_burst burst=50 nodelay;
limit_conn conn_limit 5;
# Rate limiting
proxy_http_version 1.1;
# WebSocket headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Long timeouts for WebSocket connections
proxy_read_timeout 86400;
proxy_send_timeout 86400;
proxy_connect_timeout 300s;
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
}
```
**Note**: There's also a `rpc-public` config file that's not currently active.
---
## VMID 2501 - Permissioned RPC (JWT Authentication)
**Active Config**: `/etc/nginx/sites-enabled/rpc-perm`
**Domains**:
- `rpc-http-prv.d-bis.org` (HTTP RPC with JWT)
- `rpc-ws-prv.d-bis.org` (WebSocket RPC with JWT)
- `besu-rpc-2`
- `192.168.11.251`
**IP**: 192.168.11.251
### Configuration Overview
- **Port 80**: HTTP to HTTPS redirect
- **Port 443**: HTTPS servers for both HTTP RPC and WebSocket RPC (same port, different server_name)
- **JWT Authentication**: Required for all RPC endpoints (via auth_request to `http://127.0.0.1:8888/validate`)
- **SSL**: Self-signed certificate (`/etc/nginx/ssl/rpc.crt`)
### Key Features
- JWT authentication using `auth_request` module
- JWT validator service running on port 8888
- Separate error handling for authentication failures
- Health check endpoint (no JWT required)
### Full Configuration
```nginx
# HTTP to HTTPS redirect
server {
listen 80;
listen [::]:80;
server_name rpc-http-prv.d-bis.org rpc-ws-prv.d-bis.org besu-rpc-2 192.168.11.251;
return 301 https://$host$request_uri;
}
# Internal server for JWT validation
server {
server_name _;
location /validate {
fastcgi_pass unix:/var/run/fcgiwrap.socket;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME /usr/local/bin/jwt-validate.py;
fastcgi_param HTTP_AUTHORIZATION $http_authorization;
}
}
# HTTPS server - HTTP RPC API (Permissioned with JWT)
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name rpc-http-prv.d-bis.org besu-rpc-2 192.168.11.251;
ssl_certificate /etc/nginx/ssl/rpc.crt;
ssl_certificate_key /etc/nginx/ssl/rpc.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
access_log /var/log/nginx/rpc-http-prv-access.log;
error_log /var/log/nginx/rpc-http-prv-error.log;
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
send_timeout 300s;
# JWT authentication using auth_request
location = /auth {
internal;
proxy_pass http://127.0.0.1:8888/validate;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header X-Original-URI $request_uri;
proxy_set_header Authorization $http_authorization;
}
# HTTP RPC endpoint
location / {
auth_request /auth;
auth_request_set $auth_status $upstream_status;
# Return 401 if auth failed
error_page 401 = @auth_failed;
proxy_pass http://127.0.0.1:8545;
proxy_http_version 1.1;
proxy_set_header Host localhost;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
proxy_buffering off;
proxy_request_buffering off;
}
# Handle auth failures
location @auth_failed {
return 401 '{"jsonrpc":"2.0","error":{"code":-32000,"message":"Unauthorized. Missing or invalid JWT token. Use: Authorization: Bearer <token>"},"id":null}';
add_header Content-Type application/json;
}
# Health check endpoint (no JWT required)
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
}
# HTTPS server - WebSocket RPC API (Permissioned with JWT)
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name rpc-ws-prv.d-bis.org;
ssl_certificate /etc/nginx/ssl/rpc.crt;
ssl_certificate_key /etc/nginx/ssl/rpc.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
access_log /var/log/nginx/rpc-ws-prv-access.log;
error_log /var/log/nginx/rpc-ws-prv-error.log;
# JWT authentication for WebSocket connections
location = /auth {
internal;
proxy_pass http://127.0.0.1:8888/validate;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header X-Original-URI $request_uri;
proxy_set_header Authorization $http_authorization;
}
location / {
auth_request /auth;
auth_request_set $auth_status $upstream_status;
error_page 401 = @auth_failed;
proxy_pass http://127.0.0.1:8546;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host localhost;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 86400;
proxy_send_timeout 86400;
}
location @auth_failed {
return 401 '{"error": "Unauthorized. Missing or invalid JWT token. Use: Authorization: Bearer <token>"}';
add_header Content-Type application/json;
}
# Health check endpoint (no JWT required)
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
}
```
**Note**: There's also a `rpc-public` config file that's not currently active.
---
## VMID 2502 - Public RPC (No Authentication)
**Active Config**: `/etc/nginx/sites-enabled/rpc`
**Domains**:
- `rpc-http-prv.d-bis.org` (HTTP RPC - Note: domain name suggests private but config has no auth)
- `rpc-ws-prv.d-bis.org` (WebSocket RPC - Note: domain name suggests private but config has no auth)
- `rpc-http-pub.d-bis.org` (Public HTTP RPC)
- `rpc-ws-pub.d-bis.org` (Public WebSocket RPC)
- `besu-rpc-3`
- `192.168.11.252`
**IP**: 192.168.11.252
### Configuration Overview
- **Port 80**: HTTP to HTTPS redirect
- **Port 443**: HTTPS servers for multiple domains (HTTP RPC and WebSocket RPC)
- **Authentication**: None (all endpoints are public)
- **SSL**: Self-signed certificate (`/etc/nginx/ssl/rpc.crt`)
- **Cloudflare Integration**: Real IP headers configured
### Key Features
- No authentication required (public endpoints)
- CORS headers enabled
- Multiple server blocks for different domain names
- Cloudflare real IP support for public domains
### Configuration Notes
⚠️ **Important**: The configuration includes server blocks for both `rpc-http-prv.d-bis.org`/`rpc-ws-prv.d-bis.org` (which suggests private endpoints) and `rpc-http-pub.d-bis.org`/`rpc-ws-pub.d-bis.org` (public endpoints), but **none of them require authentication**. This appears to be a configuration where VMID 2502 handles public RPC endpoints, while VMID 2501 handles the authenticated private endpoints.
### Full Configuration
The configuration file contains 4 server blocks:
1. HTTP to HTTPS redirect (port 80)
2. HTTPS server for `rpc-http-prv.d-bis.org` (HTTP RPC, no auth)
3. HTTPS server for `rpc-ws-prv.d-bis.org` (WebSocket RPC, no auth)
4. HTTPS server for `rpc-http-pub.d-bis.org` (Public HTTP RPC, no auth)
5. HTTPS server for `rpc-ws-pub.d-bis.org` (Public WebSocket RPC, no auth)
All server blocks proxy to:
- HTTP RPC: `127.0.0.1:8545`
- WebSocket RPC: `127.0.0.1:8546`
See previous command output for the complete configuration (too long to include here).
---
## VMIDs 2503-2508 - No Nginx
**Status**: Nginx is not installed on these containers
These VMIDs are Besu validator or sentry nodes that do not expose RPC endpoints, so nginx is not required.
---
## Summary of Port Usage
| VMID | Port 80 | Port 443 | Port 8443 | Purpose |
|------|---------|----------|-----------|---------|
| 2400 | Returns 204 | HTTP/WebSocket RPC | - | ThirdWeb RPC (Cloudflare Tunnel) |
| 2500 | Redirect to 443 | HTTP RPC | WebSocket RPC | Core RPC (internal) |
| 2501 | Redirect to 443 | HTTP/WebSocket RPC (JWT) | - | Permissioned RPC |
| 2502 | Redirect to 443 | HTTP/WebSocket RPC (public) | - | Public RPC |
| 2503-2508 | N/A | N/A | N/A | No nginx installed |
---
## SSL Certificates
| VMID | Certificate Type | Location |
|------|-----------------|----------|
| 2400 | Cloudflare Origin Certificate | `/etc/nginx/ssl/cloudflare-origin.crt` |
| 2500 | Let's Encrypt | `/etc/letsencrypt/live/rpc-core.d-bis.org/` |
| 2501 | Self-signed | `/etc/nginx/ssl/rpc.crt` |
| 2502 | Self-signed | `/etc/nginx/ssl/rpc.crt` |
---
## Access Patterns
### Public Endpoints (No Authentication)
- `rpc.public-0138.defi-oracle.io` (VMID 2400) - ThirdWeb RPC
- `rpc-http-pub.d-bis.org` (VMID 2502) - Public HTTP RPC
- `rpc-ws-pub.d-bis.org` (VMID 2502) - Public WebSocket RPC
### Permissioned Endpoints (JWT Authentication Required)
- `rpc-http-prv.d-bis.org` (VMID 2501) - Permissioned HTTP RPC
- `rpc-ws-prv.d-bis.org` (VMID 2501) - Permissioned WebSocket RPC
### Internal/Core Endpoints
- `rpc-core.d-bis.org` (VMID 2500) - Core RPC node (internal use)
---
**Last Updated**: 2026-01-27

View File

@@ -54,13 +54,23 @@ Create or update `~/.env` with Omada Controller credentials:
```bash
# Omada Controller Configuration
OMADA_CONTROLLER_URL=https://192.168.11.10:8043
OMADA_CONTROLLER_URL=https://192.168.11.8:8043
OMADA_API_KEY=your-client-id-here
OMADA_API_SECRET=your-client-secret-here
OMADA_SITE_ID=your-site-id # Optional - will use default site if not provided
OMADA_VERIFY_SSL=false # Set to true for production with valid SSL certs
```
**Note:** For automation and scripts, use the `proxmox-controller` API application (Client Credentials mode):
- Client ID: `94327608913c41bb9c32ce8d1d6e87d3`
- Client Secret: `600b924a541a4139a386cb7c63ac47b5`
For interactive access, use the `Datacenter-Control-Complete` API application (Authorization Code mode):
- Client ID: `8437ff7e3e39452294234ce23bbd105f`
- Client Secret: `f2d19e1bdcdd49adabe10f489ce09a79`
See the [Physical Hardware Inventory](../../config/physical-hardware-inventory.md) for complete API credential details.
### Finding Your Site ID
If you don't know your site ID:
@@ -168,7 +178,7 @@ import {
// Initialize client
const client = new OmadaClient({
baseUrl: 'https://192.168.11.10:8043',
baseUrl: 'https://192.168.11.8:8043',
clientId: process.env.OMADA_API_KEY!,
clientSecret: process.env.OMADA_API_SECRET!,
siteId: 'your-site-id',

View File

@@ -0,0 +1,117 @@
# Omada API Configuration Requirements
**Date:** 2025-01-20
**Status:** ⏳ Requires Manual Configuration
**Purpose:** Document Omada API configuration requirements
---
## Current Status
The `omada-api/.env` file has placeholder/empty values that need to be configured.
---
## Required Configuration
### File: `omada-api/.env`
**Current Issues:**
- `OMADA_API_KEY=<your-api-key>` - Placeholder value
- `OMADA_API_SECRET=` - Empty value
---
## Configuration Options
### Option 1: Omada Controller Local API
If using local Omada Controller (e.g., at `https://192.168.11.10:8043`):
1. **Get API Key:**
- Log into Omada Controller web interface
- Go to Settings → Cloud Access (if available)
- Or use Omada Controller API documentation
- API key format varies by Omada Controller version
2. **Update .env:**
```bash
OMADA_CONTROLLER_URL=https://192.168.11.10:8043
OMADA_API_KEY=your-actual-api-key
OMADA_API_SECRET=your-api-secret # If required
OMADA_SITE_ID=b7335e3ad40ef0df060a922dcf5abdf5
OMADA_VERIFY_SSL=false # For self-signed certs
```
### Option 2: Omada Cloud Controller
If using Omada Cloud Controller (e.g., `https://euw1-omada-northbound.tplinkcloud.com`):
1. **OAuth Client Credentials:**
- Log into Omada Cloud Controller
- Create OAuth application/client
- Get Client ID and Client Secret
2. **Update .env:**
```bash
OMADA_CONTROLLER_URL=https://euw1-omada-northbound.tplinkcloud.com
OMADA_CLIENT_ID=f2d19e1bdcdd49adabe10f489ce09a79
OMADA_CLIENT_SECRET=8437ff7e3e39452294234ce23bbd105f
OMADA_SITE_ID=b7335e3ad40ef0df060a922dcf5abdf5
OMADA_VERIFY_SSL=true
```
**Note:** The current `.env` file already has `OMADA_CLIENT_ID` and `OMADA_CLIENT_SECRET` set, so Option 2 may already be configured.
---
## Current Configuration Analysis
Based on the current `.env` file:
- ✅ `OMADA_CONTROLLER_URL` - Set (cloud controller)
- ✅ `OMADA_SITE_ID` - Set
- ✅ `OMADA_VERIFY_SSL` - Set
- ✅ `OMADA_CLIENT_ID` - Set
- ✅ `OMADA_CLIENT_SECRET` - Set
- ⚠️ `OMADA_API_KEY` - Has placeholder `<your-api-key>`
- ⚠️ `OMADA_API_SECRET` - Empty
**Recommendation:**
- If using OAuth (Client ID/Secret), the `OMADA_API_KEY` and `OMADA_API_SECRET` may not be needed
- Remove or comment out unused fields
- If API Key is required, get it from Omada Controller
---
## Next Steps
1. **Determine authentication method:**
- OAuth (Client ID/Secret) - Already configured
- API Key - Needs configuration
2. **If using OAuth:**
- Comment out or remove `OMADA_API_KEY` and `OMADA_API_SECRET`
- Verify `OMADA_CLIENT_ID` and `OMADA_CLIENT_SECRET` are correct
3. **If using API Key:**
- Get API key from Omada Controller
- Update `OMADA_API_KEY` with actual value
- Set `OMADA_API_SECRET` if required
4. **Test configuration:**
- Run Omada API tests/scripts
- Verify authentication works
---
## Related Documentation
- Omada Controller API documentation
- Omada Cloud Controller documentation
- [Required Secrets Inventory](./REQUIRED_SECRETS_INVENTORY.md)
---
**Last Updated:** 2025-01-20
**Status:** ⏳ Requires Manual Configuration

View File

@@ -0,0 +1,530 @@
# Proxmox VE ACME Certificate Management Plan - Cloudflare Integration
**Date:** 2025-01-20
**Status:** 📋 Planning Document
**Purpose:** Comprehensive plan for SSL/TLS certificate management using ACME with Cloudflare
---
## Executive Summary
This document provides a comprehensive plan for implementing ACME (Automatic Certificate Management Environment) certificate management in Proxmox VE using Cloudflare as the DNS provider. This ensures proper security for all domains and services across hardware installations and VMs.
---
## Current Infrastructure
### Proxmox Nodes
- **ml110** (192.168.11.10) - Cluster master
- **r630-01** (192.168.11.11)
- **r630-02** (192.168.11.12)
### Services Requiring Certificates
- Proxmox VE Web UI (HTTPS on port 8006)
- VM/Container web services
- API endpoints
- Reverse proxy services (nginx, Cloudflare Tunnel)
---
## ACME Overview
**ACME (Automatic Certificate Management Environment):**
- Standard protocol for automated certificate management
- Proxmox VE has built-in ACME plugin
- Supports Let's Encrypt and other ACME-compliant CAs
- Automatic renewal before expiration
**Benefits:**
- ✅ Automated certificate provisioning
- ✅ Automatic renewal
- ✅ No manual intervention required
- ✅ Free certificates (Let's Encrypt)
- ✅ Secure by default
---
## Cloudflare Integration Options
### Option 1: Cloudflare API Token (Recommended)
**Method:** DNS-01 Challenge using Cloudflare API
- Most secure method
- Uses API tokens with minimal permissions
- Works for any domain in Cloudflare account
- Recommended for production
### Option 2: Cloudflare Global API Key
**Method:** DNS-01 Challenge using Global API Key
- Less secure (full account access)
- Easier initial setup
- Not recommended for production
### Option 3: HTTP-01 Challenge (Limited)
**Method:** HTTP-01 Challenge
- Requires public HTTP access
- Not suitable for internal-only services
- Limited applicability
---
## Implementation Plan
### Phase 1: Prerequisites and Preparation
#### 1.1 Cloudflare API Setup
**Requirements:**
- Cloudflare account with domains
- API token with DNS edit permissions
- Domain list inventory
**Steps:**
1. Create Cloudflare API token
- Scope: Zone → DNS → Edit
- Zone Resources: All zones (or specific zones)
- Token expiration: Set appropriate expiration
2. Document domains requiring certificates
- Proxmox node FQDNs (if configured)
- VM/container service domains
- API endpoint domains
3. Verify DNS management
- Confirm Cloudflare manages DNS for all domains
- Verify DNS records are accessible
#### 1.2 Proxmox VE Preparation
**Requirements:**
- Proxmox VE 7.0+ (ACME plugin included)
- Root or admin access to all nodes
- Network connectivity to ACME servers
**Steps:**
1. Verify ACME plugin availability
```bash
pveversion
# Should show version 7.0+
```
2. Check DNS resolution
- Verify domains resolve correctly
- Test external DNS queries
3. Prepare certificate storage
- Review `/etc/pve/priv/acme/` directory
- Plan certificate organization
---
### Phase 2: ACME Account Configuration
#### 2.1 Create ACME Account
**Location:** Proxmox Web UI → Datacenter → ACME
**Steps:**
1. Navigate to ACME settings
2. Add ACME account
3. Choose ACME directory:
- **Let's Encrypt Production:** `https://acme-v02.api.letsencrypt.org/directory`
- **Let's Encrypt Staging:** `https://acme-staging-v02.api.letsencrypt.org/directory` (for testing)
4. Configure account:
- Email: Your contact email
- Accept Terms of Service
5. Test with staging directory first
6. Switch to production after verification
#### 2.2 Configure Cloudflare DNS Plugin
**Method:** DNS-01 Challenge with Cloudflare API Token
**Configuration:**
1. In ACME account settings, select "DNS Plugin"
2. Choose plugin: **cloudflare**
3. Configure credentials:
- **API Token:** Your Cloudflare API token
- **Alternative:** Global API Key + Email (less secure)
**Security Best Practices:**
- ✅ Use API Token (not Global API Key)
- ✅ Limit token permissions to DNS edit only
- ✅ Use zone-specific tokens when possible
- ✅ Store tokens securely (consider secrets management)
---
### Phase 3: Certificate Configuration
#### 3.1 Proxmox Node Certificates
**Purpose:** Secure Proxmox VE Web UI
**Configuration:**
1. Navigate to: Node → System → Certificates
2. Select "ACME" tab
3. Add certificate:
- **Name:** Descriptive name (e.g., "ml110-cert")
- **Domain:** Node FQDN (e.g., `ml110.example.com`)
- **ACME Account:** Select configured account
- **DNS Plugin:** Select Cloudflare plugin
- **Challenge Type:** DNS-01
4. Generate certificate
5. Apply to node
6. Repeat for all nodes
**Domains:**
- `ml110.yourdomain.com` (if configured)
- `r630-01.yourdomain.com` (if configured)
- `r630-02.yourdomain.com` (if configured)
- Or use IP-based access with self-signed (current)
#### 3.2 VM/Container Service Certificates
**Purpose:** Secure services running in VMs/containers
**Options:**
**Option A: Individual Certificates per Service**
- Generate separate certificate for each service domain
- Most granular control
- Suitable for: Multiple domains, different security requirements
**Option B: Wildcard Certificates**
- Generate `*.yourdomain.com` certificate
- Single certificate for all subdomains
- Suitable for: Many subdomains, simplified management
**Option C: Multi-Domain Certificates**
- Single certificate with multiple SANs
- Balance between granularity and simplicity
- Suitable for: Related services, limited domains
**Recommendation:** Start with individual certificates, consider wildcard for subdomains.
---
### Phase 4: Domain-Specific Certificate Plan
#### 4.1 Inventory All Domains
**Required Information:**
- Domain name
- Purpose/service
- VM/container hosting
- Current certificate status
- Certificate type needed
**Example Inventory:**
```
Domain | Service | VM/Container | Type
-------------------------|------------------|--------------|----------
proxmox.yourdomain.com | Proxmox UI | ml110 | Individual
api.yourdomain.com | API Gateway | VM 100 | Individual
*.yourdomain.com | All subdomains | Multiple | Wildcard
```
#### 4.2 Certificate Assignment Strategy
**Tier 1: Critical Infrastructure**
- Proxmox nodes (if using FQDNs)
- Core services
- API endpoints
- Individual certificates with short renewal periods
**Tier 2: Application Services**
- Web applications
- Services with public access
- Individual or multi-domain certificates
**Tier 3: Internal Services**
- Development environments
- Internal-only services
- Wildcard or self-signed (with proper internal CA)
---
### Phase 5: Implementation Steps
#### 5.1 Initial Setup (One-Time)
1. **Create Cloudflare API Token**
```bash
# Via Cloudflare Dashboard:
# My Profile → API Tokens → Create Token
# Template: Edit zone DNS
# Permissions: Zone → DNS → Edit
# Zone Resources: All zones or specific zones
```
2. **Configure ACME Account in Proxmox**
- Use Proxmox Web UI or CLI
- Add account with Cloudflare plugin
- Test with staging environment first
3. **Verify DNS Resolution**
```bash
# Test domain resolution
dig yourdomain.com +short
nslookup yourdomain.com
```
#### 5.2 Certificate Generation (Per Domain)
**Via Proxmox Web UI:**
1. Navigate to ACME settings
2. Add certificate
3. Configure domain and plugin
4. Generate certificate
5. Apply to service
**Via CLI (Alternative):**
```bash
# Add ACME account
pvesh create /cluster/acme/account --directory-url https://acme-v02.api.letsencrypt.org/directory --contact email@example.com
# Register account
pvesh create /cluster/acme/account/test-account/register
# Generate certificate
pvesh create /cluster/acme/certificate --account test-account --domain yourdomain.com --dns cloudflare --plugin cloudflare --api-token YOUR_TOKEN
```
#### 5.3 Certificate Application
**For Proxmox Nodes:**
- Apply via Web UI: Node → System → Certificates
- Automatically updates web interface
- Requires service restart
**For VM/Container Services:**
- Copy certificate files to VM/container
- Configure service to use certificate
- Update service configuration
- Restart service
**Certificate File Locations:**
- Certificate: `/etc/pve/nodes/<node>/pve-ssl.pem`
- Private Key: `/etc/pve/nodes/<node>/pve-ssl.key`
- Full Chain: Combined certificate + chain
---
### Phase 6: Certificate Renewal and Maintenance
#### 6.1 Automatic Renewal
**Proxmox VE Automatic Renewal:**
- Built-in renewal mechanism
- Runs automatically before expiration
- Typically renews 30 days before expiry
- No manual intervention required
**Verification:**
- Monitor certificate expiration dates
- Check renewal logs
- Set up monitoring/alerting
#### 6.2 Monitoring and Alerts
**Monitoring Points:**
- Certificate expiration dates
- Renewal success/failure
- Service availability after renewal
- DNS challenge success rate
**Alerting Options:**
- Proxmox VE logs
- External monitoring tools
- Email notifications (configured in ACME account)
#### 6.3 Backup and Recovery
**Certificate Backup:**
- Backup `/etc/pve/priv/acme/` directory
- Backup certificate files
- Store API tokens securely
- Document certificate configuration
**Recovery Procedures:**
- Restore certificates from backup
- Re-generate if needed
- Update service configurations
---
## Security Best Practices
### 1. API Token Security
**Recommendations:**
- ✅ Use API Tokens (not Global API Key)
- ✅ Minimal required permissions
- ✅ Zone-specific tokens when possible
- ✅ Token rotation schedule
- ✅ Secure storage (encrypted, access-controlled)
### 2. Certificate Security
**Recommendations:**
- ✅ Use strong key sizes (RSA 2048+ or ECDSA P-256+)
- ✅ Enable HSTS where applicable
- ✅ Use TLS 1.2+ only
- ✅ Proper certificate chain validation
- ✅ Secure private key storage
### 3. Access Control
**Recommendations:**
- ✅ Limit ACME account access
- ✅ Role-based access control
- ✅ Audit certificate operations
- ✅ Secure credential storage
### 4. Network Security
**Recommendations:**
- ✅ Firewall rules for ACME endpoints
- ✅ DNS security (DNSSEC)
- ✅ Monitor for certificate abuse
- ✅ Rate limiting awareness
---
## Domain Inventory Template
```markdown
## Domain Certificate Inventory
### Proxmox Nodes
| Node | Domain (if configured) | Certificate Type | Status |
|---------|------------------------|------------------|--------|
| ml110 | ml110.yourdomain.com | Individual | ⏳ Pending |
| r630-01 | r630-01.yourdomain.com | Individual | ⏳ Pending |
| r630-02 | r630-02.yourdomain.com | Individual | ⏳ Pending |
### VM/Container Services
| VMID | Service | Domain | Certificate Type | Status |
|------|----------------|---------------------|------------------|--------|
| 100 | Mail Gateway | mail.yourdomain.com | Individual | ⏳ Pending |
| 104 | Gitea | git.yourdomain.com | Individual | ⏳ Pending |
| ... | ... | ... | ... | ... |
### Wildcard Certificates
| Domain Pattern | Purpose | Status |
|---------------------|------------------|--------|
| *.yourdomain.com | All subdomains | ⏳ Pending |
| *.api.yourdomain.com| API subdomains | ⏳ Pending |
```
---
## Implementation Checklist
### Pre-Implementation
- [ ] Inventory all domains requiring certificates
- [ ] Create Cloudflare API token
- [ ] Document current certificate status
- [ ] Plan certificate assignment strategy
- [ ] Test with staging environment
### Implementation
- [ ] Configure ACME account in Proxmox
- [ ] Configure Cloudflare DNS plugin
- [ ] Generate test certificate (staging)
- [ ] Verify certificate generation works
- [ ] Switch to production ACME directory
- [ ] Generate production certificates
- [ ] Apply certificates to services
- [ ] Verify services work with new certificates
### Post-Implementation
- [ ] Monitor certificate expiration
- [ ] Verify automatic renewal works
- [ ] Set up monitoring/alerting
- [ ] Document certificate locations
- [ ] Create backup procedures
- [ ] Train team on certificate management
---
## Troubleshooting
### Common Issues
**1. DNS Challenge Fails**
- Verify API token permissions
- Check DNS propagation
- Verify domain is in Cloudflare account
- Check token expiration
**2. Certificate Generation Fails**
- Check ACME account status
- Verify domain ownership
- Check rate limits (Let's Encrypt)
- Review logs: `/var/log/pveproxy/access.log`
**3. Certificate Renewal Fails**
- Check automatic renewal configuration
- Verify DNS plugin still works
- Check API token validity
- Review renewal logs
**4. Service Not Using New Certificate**
- Verify certificate is applied to node
- Check service configuration
- Restart service
- Verify certificate file locations
---
## Alternative: External Certificate Management
If Proxmox ACME doesn't meet requirements:
### Option: Certbot with Cloudflare Plugin
- Install certbot on VM/container
- Use certbot-dns-cloudflare plugin
- Manual or automated renewal
- More control, more complexity
### Option: External ACME Client
- Use external ACME client (acme.sh, cert-manager)
- Generate certificates externally
- Copy to Proxmox/VMs
- More flexibility, manual integration
---
## Next Steps
1. **Complete domain inventory**
2. **Create Cloudflare API token**
3. **Configure ACME account (staging)**
4. **Test certificate generation**
5. **Switch to production**
6. **Generate certificates for all domains**
7. **Apply and verify**
8. **Monitor and maintain**
---
## Related Documentation
- [Proxmox VE ACME Documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_certificate_management)
- [Cloudflare API Token Guide](https://developers.cloudflare.com/api/tokens/)
- [Let's Encrypt Documentation](https://letsencrypt.org/docs/)
- Domain Structure: `docs/02-architecture/DOMAIN_STRUCTURE.md`
- Cloudflare API Setup: `CLOUDFLARE_API_SETUP.md`
---
**Last Updated:** 2025-01-20
**Status:** 📋 Planning Document
**Next Review:** After implementation

View File

@@ -0,0 +1,172 @@
# Proxmox ACME Certificate Management - Quick Reference
**Date:** 2025-01-20
**Status:** 📋 Quick Reference Guide
**Purpose:** Quick commands and steps for ACME certificate management
---
## Quick Setup Checklist
- [ ] Create Cloudflare API token
- [ ] Configure ACME account in Proxmox
- [ ] Configure Cloudflare DNS plugin
- [ ] Test with staging environment
- [ ] Generate production certificates
- [ ] Apply certificates to services
- [ ] Monitor expiration
---
## Cloudflare API Token Creation
1. Go to: https://dash.cloudflare.com/profile/api-tokens
2. Click "Create Token"
3. Use "Edit zone DNS" template
4. Permissions: Zone → DNS → Edit
5. Zone Resources: All zones (or specific)
6. Copy token
---
## Proxmox Web UI Steps
### 1. Add ACME Account
**Location:** Datacenter → ACME → Accounts → Add
**Configuration:**
- Directory URL: `https://acme-v02.api.letsencrypt.org/directory` (Production)
- Email: your-email@example.com
- Accept Terms of Service
### 2. Add DNS Plugin
**Location:** Datacenter → ACME → DNS Plugins → Add
**Configuration:**
- Plugin: `cloudflare`
- API Token: Your Cloudflare API token
### 3. Generate Certificate
**Location:** Node → System → Certificates → ACME → Add
**Configuration:**
- Domain: your-domain.com
- ACME Account: Select your account
- DNS Plugin: Select cloudflare
- Challenge Type: DNS-01
---
## CLI Commands
### List ACME Accounts
```bash
pvesh get /cluster/acme/accounts
```
### List DNS Plugins
```bash
pvesh get /cluster/acme/plugins
```
### List Certificates
```bash
pvesh get /cluster/acme/certificates
```
### Add ACME Account (CLI)
```bash
pvesh create /cluster/acme/account \
--directory-url https://acme-v02.api.letsencrypt.org/directory \
--contact email@example.com
```
### Register Account
```bash
pvesh create /cluster/acme/account/account-name/register
```
### Generate Certificate (CLI)
```bash
pvesh create /cluster/acme/certificate \
--account account-name \
--domain example.com \
--dns cloudflare \
--plugin cloudflare
```
### Check Certificate Expiration
```bash
openssl x509 -in /etc/pve/nodes/<node>/pve-ssl.pem -noout -dates
```
---
## Certificate File Locations
### Node Certificates
- Certificate: `/etc/pve/nodes/<node>/pve-ssl.pem`
- Private Key: `/etc/pve/nodes/<node>/pve-ssl.key`
### ACME Configuration
- Accounts: `/etc/pve/priv/acme/`
- Certificates: `/etc/pve/nodes/<node>/`
---
## Troubleshooting
### Certificate Generation Fails
**Check:**
1. API token permissions
2. DNS resolution
3. Domain ownership
4. Rate limits (Let's Encrypt)
5. Logs: `/var/log/pveproxy/access.log`
### Renewal Fails
**Check:**
1. API token validity
2. DNS plugin configuration
3. Automatic renewal settings
4. Certificate expiration date
### Service Not Using Certificate
**Check:**
1. Certificate applied to node
2. Service configuration
3. Service restarted
4. Certificate file permissions
---
## Security Best Practices
✅ Use API Tokens (not Global API Key)
✅ Limit token permissions
✅ Store tokens securely
✅ Test with staging first
✅ Monitor expiration dates
✅ Use strong key sizes
✅ Enable HSTS where applicable
---
## Useful Links
- [Full Plan Document](./PROXMOX_ACME_CLOUDFLARE_PLAN.md)
- [Domain Inventory Template](./PROXMOX_ACME_DOMAIN_INVENTORY.md)
- [Proxmox ACME Docs](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_certificate_management)
- [Cloudflare API Docs](https://developers.cloudflare.com/api/)
- [Let's Encrypt Docs](https://letsencrypt.org/docs/)
---
**Last Updated:** 2025-01-20
**Status:** 📋 Quick Reference

View File

@@ -9,7 +9,8 @@ This directory contains setup and configuration guides.
- **[CREDENTIALS_CONFIGURED.md](CREDENTIALS_CONFIGURED.md)** ⭐ - Credentials configuration guide
- **[SECRETS_KEYS_CONFIGURATION.md](SECRETS_KEYS_CONFIGURATION.md)** ⭐⭐ - Secrets and keys management
- **[SSH_SETUP.md](SSH_SETUP.md)** ⭐ - SSH key setup and configuration
- **[finalize-token.md](finalize-token.md)** ⭐ - Token finalization guide
- **[FINALIZE_TOKEN.md](FINALIZE_TOKEN.md)** ⭐ - Token finalization guide
- **[cloudflare/](cloudflare/)** ⭐⭐⭐ - Cloudflare configuration documentation
- **[ER605_ROUTER_CONFIGURATION.md](ER605_ROUTER_CONFIGURATION.md)** ⭐⭐ - ER605 router configuration
- **[OMADA_API_SETUP.md](OMADA_API_SETUP.md)** ⭐⭐ - Omada API integration setup
- **[OMADA_HARDWARE_CONFIGURATION_REVIEW.md](OMADA_HARDWARE_CONFIGURATION_REVIEW.md)** ⭐⭐⭐ - Comprehensive Omada hardware and configuration review

View File

@@ -0,0 +1,353 @@
# Required Secrets and Environment Variables Inventory
**Date:** 2025-01-20
**Status:** 📋 Comprehensive Inventory
**Purpose:** Track all required secrets and environment variables across the infrastructure
---
## Overview
This document provides a comprehensive inventory of all required secrets and environment variables needed for the Proxmox infrastructure, services, and integrations.
---
## Critical Secrets (High Priority)
### 1. Cloudflare API Credentials
#### Cloudflare API Token (Recommended)
- **Variable:** `CLOUDFLARE_API_TOKEN`
- **Purpose:** Programmatic access to Cloudflare API
- **Used For:**
- DNS record management
- Tunnel configuration
- ACME DNS-01 challenges
- Automated Cloudflare operations
- **Creation:** https://dash.cloudflare.com/profile/api-tokens
- **Permissions Required:**
- Zone → DNS → Edit
- Account → Cloudflare Tunnel → Edit (for tunnel management)
- **Security:** Use API tokens (not Global API Key)
- **Status:** ⚠️ Required
#### Cloudflare Global API Key (Legacy - Not Recommended)
- **Variable:** `CLOUDFLARE_API_KEY`
- **Variable:** `CLOUDFLARE_EMAIL`
- **Purpose:** Legacy API authentication
- **Status:** ⚠️ Deprecated - Use API Token instead
#### Cloudflare Zone ID
- **Variable:** `CLOUDFLARE_ZONE_ID`
- **Purpose:** Identify specific Cloudflare zone
- **Used For:** API operations on specific zones
- **Status:** ⚠️ Required (can be auto-detected with API token)
#### Cloudflare Account ID
- **Variable:** `CLOUDFLARE_ACCOUNT_ID`
- **Purpose:** Identify Cloudflare account
- **Used For:** Tunnel operations, account-level API calls
- **Status:** ⚠️ Required (can be auto-detected with API token)
#### Cloudflare Tunnel Token
- **Variable:** `TUNNEL_TOKEN` or `CLOUDFLARE_TUNNEL_TOKEN`
- **Purpose:** Authenticate cloudflared service
- **Used For:** Cloudflare Tunnel connections
- **Creation:** Cloudflare Zero Trust Dashboard
- **Status:** ⚠️ Required for tunnel services
---
### 2. Proxmox Access Credentials
#### Proxmox Host Passwords
- **Variable:** `PROXMOX_PASS_ML110` or `PROXMOX_HOST_ML110_PASSWORD`
- **Variable:** `PROXMOX_PASS_R630_01` or `PROXMOX_HOST_R630_01_PASSWORD`
- **Variable:** `PROXMOX_PASS_R630_02` or `PROXMOX_HOST_R630_02_PASSWORD`
- **Purpose:** SSH/API access to Proxmox nodes
- **Used For:** Scripted operations, automation
- **Default:** Various (check physical hardware inventory)
- **Status:** ⚠️ Required for automation scripts
#### Proxmox API Tokens
- **Variable:** `PROXMOX_API_TOKEN`
- **Variable:** `PROXMOX_API_SECRET`
- **Purpose:** Proxmox API authentication
- **Used For:** API-based operations
- **Status:** ⚠️ Optional (alternative to passwords)
---
### 3. Service-Specific Secrets
#### Database Credentials
- **Variable:** `POSTGRES_PASSWORD`
- **Variable:** `POSTGRES_USER`
- **Variable:** `DATABASE_URL`
- **Purpose:** Database access
- **Used For:** Database connections
- **Status:** ⚠️ Required for database services
#### Redis Credentials
- **Variable:** `REDIS_PASSWORD`
- **Variable:** `REDIS_URL`
- **Purpose:** Redis cache access
- **Status:** ⚠️ Required if Redis authentication enabled
#### JWT Secrets
- **Variable:** `JWT_SECRET`
- **Variable:** `JWT_PRIVATE_KEY`
- **Purpose:** JWT token signing
- **Used For:** API authentication
- **Status:** ⚠️ Required for services using JWT
---
## Domain and DNS Configuration
### Domain Variables
- **Variable:** `DOMAIN`
- **Variable:** `PRIMARY_DOMAIN`
- **Purpose:** Primary domain name
- **Examples:** `d-bis.org`, `defi-oracle.io`
- **Status:** ⚠️ Required for DNS/SSL operations
### DNS Configuration
- **Variable:** `DNS_PROVIDER`
- **Variable:** `DNS_API_ENDPOINT`
- **Purpose:** DNS provider configuration
- **Status:** Optional (defaults to Cloudflare)
---
## Blockchain/ChainID 138 Specific
### RPC Configuration
- **Variable:** `CHAIN_ID`
- **Variable:** `RPC_ENDPOINT`
- **Variable:** `RPC_NODE_URL`
- **Purpose:** Blockchain RPC configuration
- **Status:** ⚠️ Required for blockchain services
### Private Keys (Critical Security)
- **Variable:** `VALIDATOR_PRIVATE_KEY`
- **Variable:** `NODE_PRIVATE_KEY`
- **Purpose:** Blockchain node/validator keys
- **Security:** 🔒 EXTREMELY SENSITIVE - Use secure storage
- **Status:** ⚠️ Required for validators/nodes
---
## Third-Party Service Integrations
### Azure (if used)
- **Variable:** `AZURE_SUBSCRIPTION_ID`
- **Variable:** `AZURE_TENANT_ID`
- **Variable:** `AZURE_CLIENT_ID`
- **Variable:** `AZURE_CLIENT_SECRET`
- **Status:** Required if using Azure services
### Other Cloud Providers
- **Variable:** `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`
- **Variable:** `GCP_PROJECT_ID` / `GCP_SERVICE_ACCOUNT_KEY`
- **Status:** Required if using respective cloud services
---
## Application-Specific Variables
### DBIS Services
- **Variable:** `DBIS_DATABASE_URL`
- **Variable:** `DBIS_API_KEY`
- **Variable:** `DBIS_SECRET_KEY`
- **Status:** ⚠️ Required for DBIS services
### Blockscout
- **Variable:** `BLOCKSCOUT_DATABASE_URL`
- **Variable:** `BLOCKSCOUT_SECRET_KEY_BASE`
- **Variable:** `BLOCKSCOUT_ETHERSCAN_API_KEY`
- **Status:** ⚠️ Required for Blockscout explorer
### Other Services
- Service-specific variables as documented per service
- Check individual service documentation
---
## Network Configuration
### IP Addresses
- **Variable:** `PROXMOX_HOST_ML110` (192.168.11.10)
- **Variable:** `PROXMOX_HOST_R630_01` (192.168.11.11)
- **Variable:** `PROXMOX_HOST_R630_02` (192.168.11.12)
- **Purpose:** Proxmox node IP addresses
- **Status:** ⚠️ Required for scripts
### Network Credentials
- **Variable:** `OMADA_USERNAME`
- **Variable:** `OMADA_PASSWORD`
- **Purpose:** Omada controller access
- **Status:** ⚠️ Required for network automation
---
## Security and Monitoring
### Monitoring Tools
- **Variable:** `GRAFANA_ADMIN_PASSWORD`
- **Variable:** `PROMETHEUS_BASIC_AUTH_PASSWORD`
- **Status:** ⚠️ Required if monitoring enabled
### Alerting
- **Variable:** `ALERT_EMAIL`
- **Variable:** `SLACK_WEBHOOK_URL`
- **Variable:** `DISCORD_WEBHOOK_URL`
- **Status:** Optional
---
## Environment-Specific Configuration
### Development
- **Variable:** `NODE_ENV=development`
- **Variable:** `DEBUG=true`
- **Status:** Development-specific
### Production
- **Variable:** `NODE_ENV=production`
- **Variable:** `DEBUG=false`
- **Status:** ⚠️ Production configuration
### Staging
- **Variable:** `NODE_ENV=staging`
- **Status:** Staging environment
---
## Required Secrets Checklist
### Critical (Must Have)
- [ ] `CLOUDFLARE_API_TOKEN` - Cloudflare API access
- [ ] `CLOUDFLARE_ZONE_ID` - Cloudflare zone identification
- [ ] `TUNNEL_TOKEN` - Cloudflare Tunnel authentication (if using tunnels)
- [ ] Proxmox node passwords - SSH/API access
- [ ] Database passwords - Service database access
- [ ] Domain configuration - Primary domain name
### High Priority
- [ ] `JWT_SECRET` - API authentication
- [ ] Service-specific API keys
- [ ] Private keys (if applicable)
- [ ] Monitoring credentials
### Medium Priority
- [ ] Third-party service credentials
- [ ] Alerting webhooks
- [ ] Backup storage credentials
### Low Priority / Optional
- [ ] Development-only variables
- [ ] Debug flags
- [ ] Optional integrations
---
## Secret Storage Best Practices
### 1. Secure Storage
- ✅ Use secrets management systems (HashiCorp Vault, AWS Secrets Manager, etc.)
- ✅ Encrypt sensitive values at rest
- ✅ Use environment-specific secret stores
- ❌ Don't commit secrets to git
- ❌ Don't store in plain text files
### 2. Access Control
- ✅ Limit access to secrets (principle of least privilege)
- ✅ Rotate secrets regularly
- ✅ Use separate secrets for different environments
- ✅ Audit secret access
### 3. Documentation
- ✅ Document which services need which secrets
- ✅ Use .env.example files (without real values)
- ✅ Maintain this inventory
- ✅ Document secret rotation procedures
### 4. Development Practices
- ✅ Use different secrets for dev/staging/prod
- ✅ Never use production secrets in development
- ✅ Use placeholder values in templates
- ✅ Validate required secrets on startup
---
## Secret Verification
### Script Available
**Script:** `scripts/check-env-secrets.sh`
**Usage:**
```bash
./scripts/check-env-secrets.sh
```
**What it does:**
- Scans all .env files
- Identifies empty variables
- Detects placeholder values
- Lists all variables found
- Provides recommendations
---
## Environment File Locations
### Expected Locations
- `.env` - Root directory (main configuration)
- `config/.env` - Configuration directory
- `config/production/.env.production` - Production-specific
- Service-specific: `*/config/.env`, `*/.env.local`
### Template Files
- `.env.example` - Template with variable names
- `.env.template` - Alternative template format
- `config/*.template` - Configuration templates
---
## Related Documentation
- [Cloudflare API Setup](../CLOUDFLARE_API_SETUP.md)
- [Physical Hardware Inventory](../../docs/02-architecture/PHYSICAL_HARDWARE_INVENTORY.md)
- [Proxmox ACME Plan](./PROXMOX_ACME_CLOUDFLARE_PLAN.md)
- [Domain Structure](../../docs/02-architecture/DOMAIN_STRUCTURE.md)
---
## Next Steps
1. **Audit Current Secrets**
- Run `scripts/check-env-secrets.sh`
- Review this inventory
- Identify missing secrets
2. **Create/Update .env Files**
- Use templates as reference
- Set all required values
- Remove placeholder values
3. **Secure Storage**
- Implement secrets management
- Encrypt sensitive values
- Set up access controls
4. **Documentation**
- Update service-specific docs
- Create .env.example files
- Document secret rotation
---
**Last Updated:** 2025-01-20
**Status:** 📋 Comprehensive Inventory
**Next Review:** After secret audit

View File

@@ -0,0 +1,155 @@
# Required Secrets Summary - Quick Reference
**Date:** 2025-01-20
**Status:** 📋 Quick Reference
**Purpose:** Quick checklist of all required secrets
---
## Critical Secrets (Must Have)
### ✅ Configured
#### Cloudflare (Root .env)
-`CLOUDFLARE_TUNNEL_TOKEN` - Set
-`CLOUDFLARE_API_KEY` - Set (⚠️ Consider migrating to API_TOKEN)
-`CLOUDFLARE_ACCOUNT_ID` - Set
-`CLOUDFLARE_ZONE_ID` - Set (multiple zones)
-`CLOUDFLARE_ORIGIN_CA_KEY` - Set
-`CLOUDFLARE_EMAIL` - Set
#### Blockchain Services
-`PRIVATE_KEY` - Set (🔒 **SECURITY CONCERN** - exposed in files)
- ✅ Multiple contract addresses - Set
-`ETHERSCAN_API_KEY` - Set
-`METAMASK_API_KEY` / `METAMASK_SECRET` - Set
-`THIRDWEB_SECRET_KEY` - Set
#### Database
-`DATABASE_URL` - Set (contains password)
#### Service APIs
-`OMADA_CLIENT_SECRET` - Set
-`OMADA_API_KEY` - Set
- ✅ Various LINK_TOKEN addresses - Set
---
## ⚠️ Missing or Needs Attention
### High Priority
- ⚠️ `CLOUDFLARE_API_TOKEN` - Not set (using API_KEY instead)
- ⚠️ `OMADA_API_SECRET` - Empty in omada-api/.env
- ⚠️ `OMADA_API_KEY` - Has placeholder value `<your-api-key>`
### Security Concerns
- 🔒 **Private keys in .env files** - Needs secure storage
- `smom-dbis-138/.env`
- `explorer-monorepo/.env`
- Backup files (`.env.backup.*`)
- 🔒 **Backup files with secrets** - Should be removed from repository
- `explorer-monorepo/.env.backup.*`
- `smom-dbis-138/.env.backup`
---
## Optional Secrets (If Used)
### Explorer Monorepo
- `DB_REPLICA_PASSWORD` - If using replica database
- `SEARCH_PASSWORD` - If using Elasticsearch
- `ONEINCH_API_KEY` - If using 1inch integration
- `JUMIO_API_KEY/SECRET` - If using Jumio KYC
- `MOONPAY_API_KEY` - If using MoonPay
- `WALLETCONNECT_PROJECT_ID` - If using WalletConnect
### Monitoring/Logging
- `SENTRY_DSN` - If using Sentry
- `DATADOG_API_KEY` - If using Datadog
### Third-Party Services
- Various API keys for optional integrations
---
## Recommendations
### Immediate Actions
1. **Verify .gitignore**
```bash
# Ensure these patterns are in .gitignore:
.env
.env.*
*.env.backup
```
2. **Secure Private Keys**
- Move private keys to secure storage
- Never commit private keys to repository
- Use environment variable injection
3. **Clean Up Backup Files**
- Remove `.env.backup.*` files from repository
- Store backups securely if needed
4. **Migrate to API Tokens**
- Replace `CLOUDFLARE_API_KEY` with `CLOUDFLARE_API_TOKEN`
- More secure and recommended by Cloudflare
### Security Best Practices
- ✅ Use API tokens instead of API keys
- ✅ Store secrets in secure storage (key vault, encrypted)
- ✅ Never commit secrets to version control
- ✅ Use separate secrets for different environments
- ✅ Rotate secrets regularly
- ✅ Limit access to secrets
---
## File Status Summary
| File | Status | Critical Secrets | Action Needed |
|------|--------|------------------|---------------|
| `./.env` | ✅ Good | Cloudflare credentials | Migrate to API_TOKEN |
| `omada-api/.env` | ⚠️ Partial | Omada credentials | Set OMADA_API_SECRET |
| `smom-dbis-138/.env` | 🔒 Secure | Private key | Move to secure storage |
| `dbis_core/.env` | ✅ Good | Database password | Verify secure storage |
| `explorer-monorepo/.env` | 🔒 Secure | Private key | Move to secure storage |
---
## Quick Commands
### Check Secret Status
```bash
./scripts/check-env-secrets.sh
```
### Verify .gitignore
```bash
grep -E "\.env|\.env\." .gitignore
```
### List All .env Files
```bash
find . -name ".env*" -type f | grep -v node_modules | grep -v venv
```
---
## Related Documentation
- [Required Secrets Inventory](./REQUIRED_SECRETS_INVENTORY.md) - Comprehensive inventory
- [Environment Secrets Audit Report](./ENV_SECRETS_AUDIT_REPORT.md) - Detailed audit
- [Cloudflare API Setup](../CLOUDFLARE_API_SETUP.md) - Cloudflare configuration
- [Secrets and Keys Configuration](./SECRETS_KEYS_CONFIGURATION.md) - Security guide
---
**Last Updated:** 2025-01-20
**Status:** 📋 Quick Reference

View File

@@ -1,6 +1,6 @@
# RPC DNS Configuration for d-bis.org
# RPC DNS Configuration for d-bis.org and defi-oracle.io
**Last Updated:** 2025-12-21
**Last Updated:** 2025-01-23
**Status:** Active Configuration
---
@@ -10,11 +10,18 @@
DNS configuration for RPC endpoints with Nginx SSL termination on port 443.
**Architecture:**
**d-bis.org domain (Direct A records):**
```
Internet → DNS (A records) → Nginx (port 443) → Besu RPC (8545/8546)
```
All HTTPS traffic arrives on port 443, and Nginx routes to the appropriate backend port based on the domain name (Server Name Indication - SNI).
**defi-oracle.io domain (Cloudflare Tunnel):**
```
Internet → DNS (CNAME) → Cloudflare Tunnel → VMID 2400 → Nginx (port 443) → Besu RPC (8545/8546)
```
All HTTPS traffic arrives on port 443, and Nginx routes to the appropriate backend port based on the domain name (Server Name Indication - SNI). For VMID 2400, traffic flows through Cloudflare Tunnel first.
---
@@ -24,58 +31,112 @@ All HTTPS traffic arrives on port 443, and Nginx routes to the appropriate backe
**Important:** A records in DNS do NOT include port numbers. All traffic comes to port 443 (HTTPS), and Nginx handles routing to the backend ports.
#### Public RPC (VMID 2501 - 192.168.11.251)
#### Permissioned RPC (VMID 2501 - 192.168.11.251) - JWT Authentication Required
| Type | Name | Target | Proxy | Notes |
|------|------|--------|-------|-------|
| A | `rpc-http-pub` | `192.168.11.251` | 🟠 Proxied (optional) | HTTP RPC endpoint |
| A | `rpc-ws-pub` | `192.168.11.251` | 🟠 Proxied (optional) | WebSocket RPC endpoint |
**DNS Configuration:**
```
Type: A
Name: rpc-http-pub
Target: 192.168.11.251
TTL: Auto
Proxy: 🟠 Proxied (recommended for DDoS protection)
Type: A
Name: rpc-ws-pub
Target: 192.168.11.251
TTL: Auto
Proxy: 🟠 Proxied (recommended for DDoS protection)
```
#### Private RPC (VMID 2502 - 192.168.11.252)
| Type | Name | Target | Proxy | Notes |
|------|------|--------|-------|-------|
| A | `rpc-http-prv` | `192.168.11.252` | 🟠 Proxied (optional) | HTTP RPC endpoint |
| A | `rpc-ws-prv` | `192.168.11.252` | 🟠 Proxied (optional) | WebSocket RPC endpoint |
| A | `rpc-http-prv` | `192.168.11.251` | 🟠 Proxied (optional) | HTTP RPC endpoint (JWT auth required) |
| A | `rpc-ws-prv` | `192.168.11.251` | 🟠 Proxied (optional) | WebSocket RPC endpoint (JWT auth required) |
**DNS Configuration:**
```
Type: A
Name: rpc-http-prv
Target: 192.168.11.252
Target: 192.168.11.251
TTL: Auto
Proxy: 🟠 Proxied (recommended for DDoS protection)
Type: A
Name: rpc-ws-prv
Target: 192.168.11.251
TTL: Auto
Proxy: 🟠 Proxied (recommended for DDoS protection)
```
**Note:** These endpoints require JWT token authentication. See [RPC_JWT_AUTHENTICATION.md](RPC_JWT_AUTHENTICATION.md) for details.
#### Public RPC (VMID 2502 - 192.168.11.252) - No Authentication
| Type | Name | Target | Proxy | Notes |
|------|------|--------|-------|-------|
| A | `rpc-http-pub` | `192.168.11.252` | 🟠 Proxied (optional) | HTTP RPC endpoint (public, no auth) |
| A | `rpc-ws-pub` | `192.168.11.252` | 🟠 Proxied (optional) | WebSocket RPC endpoint (public, no auth) |
**DNS Configuration:**
```
Type: A
Name: rpc-http-pub
Target: 192.168.11.252
TTL: Auto
Proxy: 🟠 Proxied (recommended for DDoS protection)
Type: A
Name: rpc-ws-pub
Target: 192.168.11.252
TTL: Auto
Proxy: 🟠 Proxied (recommended for DDoS protection)
```
### DNS Records Configuration for defi-oracle.io Domain
**Note:** The `defi-oracle.io` domain is used specifically for ThirdWeb RPC nodes and Thirdweb listing integration.
#### ThirdWeb RPC (VMID 2400 - 192.168.11.240) - defi-oracle.io Domain
**Note:** VMID 2400 uses Cloudflare Tunnel, so DNS records use CNAME (not A records).
| Type | Name | Domain | Target | Proxy | Notes |
|------|------|--------|--------|-------|-------|
| CNAME | `rpc.public-0138` | `defi-oracle.io` | `26138c21-db00-4a02-95db-ec75c07bda5b.cfargotunnel.com` | 🟠 Proxied | Tunnel endpoint for ThirdWeb RPC |
| CNAME | `rpc` | `defi-oracle.io` | `rpc.public-0138.defi-oracle.io` | 🟠 Proxied | Short alias for ThirdWeb RPC |
**DNS Configuration:**
**Record 1: Tunnel Endpoint**
```
Type: CNAME
Name: rpc.public-0138
Domain: defi-oracle.io
Target: 26138c21-db00-4a02-95db-ec75c07bda5b.cfargotunnel.com
TTL: Auto
Proxy: 🟠 Proxied (required for tunnel)
```
**Record 2: Short Alias**
```
Type: CNAME
Name: rpc
Domain: defi-oracle.io
Target: rpc.public-0138.defi-oracle.io
TTL: Auto
Proxy: 🟠 Proxied (required for tunnel)
```
**Full FQDNs:**
- `rpc.public-0138.defi-oracle.io` (primary endpoint)
- `rpc.defi-oracle.io` (short alias)
**DNS Structure:**
```
rpc.defi-oracle.io
↓ (CNAME)
rpc.public-0138.defi-oracle.io
↓ (CNAME)
26138c21-db00-4a02-95db-ec75c07bda5b.cfargotunnel.com
↓ (Cloudflare Tunnel)
192.168.11.240 (VMID 2400)
```
**Note:** This endpoint is used for the Thirdweb listing for ChainID 138. Traffic flows through Cloudflare Tunnel to VMID 2400, where Nginx handles SSL termination and routes to Besu RPC (port 8545 for HTTP, port 8546 for WebSocket).
---
## How It Works
### Request Flow
1. **Client** makes request to `https://rpc-http-pub.d-bis.org`
2. **DNS** resolves to `192.168.11.251` (A record)
1. **Client** makes request to `https://rpc-http-prv.d-bis.org` (permissioned) or `https://rpc-http-pub.d-bis.org` (public)
2. **DNS** resolves to appropriate IP (A record)
3. **HTTPS connection** established on port 443 (standard HTTPS port)
4. **Nginx** receives request on port 443
5. **Nginx** uses Server Name Indication (SNI) to identify domain:
@@ -83,17 +144,21 @@ Proxy: 🟠 Proxied (recommended for DDoS protection)
- `rpc-ws-pub.d-bis.org` → proxies to `127.0.0.1:8546` (WebSocket RPC)
- `rpc-http-prv.d-bis.org` → proxies to `127.0.0.1:8545` (HTTP RPC)
- `rpc-ws-prv.d-bis.org` → proxies to `127.0.0.1:8546` (WebSocket RPC)
- `rpc.public-0138.defi-oracle.io` → Cloudflare Tunnel → VMID 2400 → proxies to `127.0.0.1:8545` (HTTP RPC) or `127.0.0.1:8546` (WebSocket RPC)
- `rpc.defi-oracle.io` → CNAME → `rpc.public-0138.defi-oracle.io` → Cloudflare Tunnel → VMID 2400 → proxies to `127.0.0.1:8545` (HTTP RPC) or `127.0.0.1:8546` (WebSocket RPC)
6. **Besu RPC** processes request and returns response
7. **Nginx** forwards response back to client
### Port Mapping
| Domain | DNS Target | Nginx Port | Backend Port | Service |
|--------|------------|------------|-------------|---------|
| `rpc-http-pub.d-bis.org` | `192.168.11.251` | 443 (HTTPS) | 8545 | HTTP RPC |
| `rpc-ws-pub.d-bis.org` | `192.168.11.251` | 443 (HTTPS) | 8546 | WebSocket RPC |
| `rpc-http-prv.d-bis.org` | `192.168.11.252` | 443 (HTTPS) | 8545 | HTTP RPC |
| `rpc-ws-prv.d-bis.org` | `192.168.11.252` | 443 (HTTPS) | 8546 | WebSocket RPC |
| Domain | DNS Target | Nginx Port | Backend Port | Service | Auth |
|--------|------------|------------|-------------|---------|------|
| `rpc-http-prv.d-bis.org` | `192.168.11.251` | 443 (HTTPS) | 8545 | HTTP RPC | ✅ JWT Required |
| `rpc-ws-prv.d-bis.org` | `192.168.11.251` | 443 (HTTPS) | 8546 | WebSocket RPC | ✅ JWT Required |
| `rpc-http-pub.d-bis.org` | `192.168.11.252` | 443 (HTTPS) | 8545 | HTTP RPC | ❌ No Auth |
| `rpc-ws-pub.d-bis.org` | `192.168.11.252` | 443 (HTTPS) | 8546 | WebSocket RPC | ❌ No Auth |
| `rpc.public-0138.defi-oracle.io` | Cloudflare Tunnel → `192.168.11.240` | 443 (HTTPS) | 8545/8546 | HTTP/WS RPC | ❌ No Auth |
| `rpc.defi-oracle.io` | CNAME → `rpc.public-0138` → Cloudflare Tunnel → `192.168.11.240` | 443 (HTTPS) | 8545/8546 | HTTP/WS RPC | ❌ No Auth |
**Note:** DNS A records only contain IP addresses. Port numbers are handled by:
- **Port 443**: Standard HTTPS port (handled automatically by browsers/clients)
@@ -171,15 +236,22 @@ curl -X POST http://192.168.11.251:8545 \
The Nginx configuration on each container:
**VMID 2501:**
**VMID 2501 (Permissioned RPC):**
- Listens on port 443 (HTTPS)
- `rpc-http-pub.d-bis.org` → proxies to `127.0.0.1:8545`
- `rpc-ws-pub.d-bis.org` → proxies to `127.0.0.1:8546`
- `rpc-http-prv.d-bis.org` → proxies to `127.0.0.1:8545` (JWT auth required)
- `rpc-ws-prv.d-bis.org` → proxies to `127.0.0.1:8546` (JWT auth required)
**VMID 2502:**
**VMID 2502 (Public RPC):**
- Listens on port 443 (HTTPS)
- `rpc-http-prv.d-bis.org` → proxies to `127.0.0.1:8545`
- `rpc-ws-prv.d-bis.org` → proxies to `127.0.0.1:8546`
- `rpc-http-pub.d-bis.org` → proxies to `127.0.0.1:8545` (no auth)
- `rpc-ws-pub.d-bis.org` → proxies to `127.0.0.1:8546` (no auth)
**VMID 2400 (ThirdWeb RPC - Cloudflare Tunnel):**
- Cloudflare Tunnel endpoint: `26138c21-db00-4a02-95db-ec75c07bda5b.cfargotunnel.com`
- Nginx listens on port 443 (HTTPS) inside container
- `rpc.public-0138.defi-oracle.io` → Cloudflare Tunnel → proxies to `127.0.0.1:8545` (HTTP RPC, no auth) or `127.0.0.1:8546` (WebSocket RPC, no auth)
- `rpc.defi-oracle.io` → CNAME → `rpc.public-0138.defi-oracle.io` → Cloudflare Tunnel → proxies to `127.0.0.1:8545` (HTTP RPC, no auth) or `127.0.0.1:8546` (WebSocket RPC, no auth)
- Uses `defi-oracle.io` domain (Cloudflare Tunnel) for Thirdweb listing integration
---
@@ -243,16 +315,31 @@ ssh root@192.168.11.10 "pct exec 2501 -- systemctl status besu-rpc"
## Quick Reference
**DNS Records to Create:**
**d-bis.org domain:**
```
rpc-http-pub.d-bis.org → A → 192.168.11.251
rpc-ws-pub.d-bis.org → A → 192.168.11.251
rpc-http-prv.d-bis.org → A → 192.168.11.252
rpc-ws-prv.d-bis.org → A → 192.168.11.252
rpc-http-prv.d-bis.org → A → 192.168.11.251 (Permissioned, JWT auth required)
rpc-ws-prv.d-bis.org → A → 192.168.11.251 (Permissioned, JWT auth required)
rpc-http-pub.d-bis.org → A → 192.168.11.252 (Public, no auth)
rpc-ws-pub.d-bis.org → A → 192.168.11.252 (Public, no auth)
```
**defi-oracle.io domain (ThirdWeb RPC - Cloudflare Tunnel):**
```
rpc.public-0138.defi-oracle.io → CNAME → 26138c21-db00-4a02-95db-ec75c07bda5b.cfargotunnel.com (Tunnel endpoint)
rpc.defi-oracle.io → CNAME → rpc.public-0138.defi-oracle.io (Short alias)
```
**Endpoints:**
- `https://rpc-http-pub.d-bis.org` → HTTP RPC (port 443 → 8545)
- `wss://rpc-ws-pub.d-bis.org` → WebSocket RPC (port 443 → 8546)
- `https://rpc-http-prv.d-bis.org` → HTTP RPC (port 443 → 8545)
- `wss://rpc-ws-prv.d-bis.org` → WebSocket RPC (port 443 → 8546)
**d-bis.org domain:**
- `https://rpc-http-prv.d-bis.org` → Permissioned HTTP RPC (port 443 → 8545, JWT auth required)
- `wss://rpc-ws-prv.d-bis.org` → Permissioned WebSocket RPC (port 443 → 8546, JWT auth required)
- `https://rpc-http-pub.d-bis.org` → Public HTTP RPC (port 443 → 8545, no auth)
- `wss://rpc-ws-pub.d-bis.org` → Public WebSocket RPC (port 443 → 8546, no auth)
**defi-oracle.io domain (ThirdWeb RPC - Cloudflare Tunnel):**
- `https://rpc.public-0138.defi-oracle.io` → ThirdWeb HTTP RPC (Cloudflare Tunnel → port 443 → 8545, no auth)
- `wss://rpc.public-0138.defi-oracle.io` → ThirdWeb WebSocket RPC (Cloudflare Tunnel → port 443 → 8546, no auth)
- `https://rpc.defi-oracle.io` → ThirdWeb HTTP RPC (CNAME → Cloudflare Tunnel → port 443 → 8545, no auth)
- `wss://rpc.defi-oracle.io` → ThirdWeb WebSocket RPC (CNAME → Cloudflare Tunnel → port 443 → 8546, no auth)

View File

@@ -0,0 +1,292 @@
# JWT Authentication for Permissioned RPC Endpoints
**Last Updated:** 2025-12-24
**Status:** Active Configuration
---
## Overview
JWT (JSON Web Token) authentication has been configured for the Permissioned RPC endpoints to provide secure, token-based access control.
### Endpoints with JWT Authentication
- **HTTP RPC**: `https://rpc-http-prv.d-bis.org`
- **WebSocket RPC**: `wss://rpc-ws-prv.d-bis.org`
### Endpoints without Authentication (Public)
- **HTTP RPC**: `https://rpc-http-pub.d-bis.org`
- **WebSocket RPC**: `wss://rpc-ws-pub.d-bis.org`
---
## Architecture
### VMID Mappings
| VMID | Type | Domain | Authentication | IP |
|------|------|--------|----------------|-----|
| 2501 | Permissioned RPC | `rpc-http-prv.d-bis.org`<br>`rpc-ws-prv.d-bis.org` | ✅ JWT Required | 192.168.11.251 |
| 2502 | Public RPC | `rpc-http-pub.d-bis.org`<br>`rpc-ws-pub.d-bis.org` | ❌ No Auth | 192.168.11.252 |
### Request Flow with JWT
1. **Client** makes request to `https://rpc-http-prv.d-bis.org`
2. **Nginx** receives request and extracts JWT token from `Authorization: Bearer <token>` header
3. **Lua Script** validates JWT token using secret key
4. **If valid**: Request is proxied to Besu RPC (127.0.0.1:8545)
5. **If invalid**: Returns 401 Unauthorized with error message
---
## Setup
### 1. Configure JWT Authentication
Run the configuration script:
```bash
cd /home/intlc/projects/proxmox
./scripts/configure-nginx-jwt-auth.sh
```
This script will:
- Install required packages (nginx, lua, lua-resty-jwt)
- Generate JWT secret key
- Configure Nginx with JWT validation
- Set up both HTTP and WebSocket endpoints
### 2. Generate JWT Tokens
Use the token generation script:
```bash
# Generate token with default settings (username: rpc-user, expiry: 365 days)
./scripts/generate-jwt-token.sh
# Generate token with custom username and expiry
./scripts/generate-jwt-token.sh my-username 30 # 30 days expiry
```
The script will output:
- The JWT token
- Usage examples for testing
---
## Usage
### HTTP RPC with JWT
```bash
# Test with curl
curl -k \
-H "Authorization: Bearer YOUR_JWT_TOKEN" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
https://rpc-http-prv.d-bis.org
```
### WebSocket RPC with JWT
For WebSocket connections, include the JWT token in the Authorization header during the initial HTTP upgrade request:
```javascript
// JavaScript example
const ws = new WebSocket('wss://rpc-ws-prv.d-bis.org', {
headers: {
'Authorization': 'Bearer YOUR_JWT_TOKEN'
}
});
```
### Using with MetaMask or dApps
Most Ethereum clients don't support custom headers. For these cases, you can:
1. **Use a proxy service** that adds the JWT token
2. **Use the public endpoint** (`rpc-http-pub.d-bis.org`) for read-only operations
3. **Implement custom authentication** in your dApp
---
## Token Management
### Token Structure
JWT tokens contain:
- **Header**: Algorithm (HS256) and type (JWT)
- **Payload**:
- `sub`: Username/subject
- `iat`: Issued at (timestamp)
- `exp`: Expiration (timestamp)
- **Signature**: HMAC-SHA256 signature using the secret key
### Token Expiry
Tokens expire after the specified number of days. To generate a new token:
```bash
./scripts/generate-jwt-token.sh username days
```
### Revoking Tokens
JWT tokens cannot be revoked individually without changing the secret key. To revoke all tokens:
1. Generate a new JWT secret on VMID 2501:
```bash
ssh root@192.168.11.10 "pct exec 2501 -- openssl rand -base64 32 > /etc/nginx/jwt_secret"
```
2. Restart Nginx:
```bash
ssh root@192.168.11.10 "pct exec 2501 -- systemctl restart nginx"
```
3. Generate new tokens for authorized users
---
## Security Considerations
### Secret Key Management
- **Location**: `/etc/nginx/jwt_secret` on VMID 2501
- **Permissions**: 600 (readable only by root)
- **Backup**: Store securely, do not commit to version control
### Best Practices
1. **Use strong secret keys**: The script generates 32-byte random keys
2. **Set appropriate expiry**: Don't create tokens with excessive expiry times
3. **Rotate secrets periodically**: Change the secret key and regenerate tokens
4. **Monitor access logs**: Check `/var/log/nginx/rpc-http-prv-access.log` for unauthorized attempts
5. **Use HTTPS only**: All endpoints use HTTPS (port 443)
### Rate Limiting
Consider adding rate limiting to prevent abuse:
```nginx
limit_req_zone $binary_remote_addr zone=jwt_limit:10m rate=10r/s;
location / {
limit_req zone=jwt_limit burst=20 nodelay;
# ... JWT validation ...
}
```
---
## Troubleshooting
### 401 Unauthorized
**Error**: `{"error": "Missing Authorization header"}`
**Solution**: Include the Authorization header:
```bash
curl -H "Authorization: Bearer YOUR_TOKEN" ...
```
**Error**: `{"error": "Invalid or expired token"}`
**Solution**:
- Check token is correct (no extra spaces)
- Verify token hasn't expired
- Generate a new token if needed
### 500 Internal Server Error
**Error**: `{"error": "Internal server error"}`
**Solution**:
- Check JWT secret exists: `pct exec 2501 -- cat /etc/nginx/jwt_secret`
- Check lua-resty-jwt is installed: `pct exec 2501 -- ls /usr/share/lua/5.1/resty/jwt.lua`
- Check Nginx error logs: `pct exec 2501 -- tail -f /var/log/nginx/rpc-http-prv-error.log`
### Token Validation Fails
1. **Verify secret key matches**:
```bash
# On VMID 2501
cat /etc/nginx/jwt_secret
```
2. **Regenerate token** using the same secret:
```bash
./scripts/generate-jwt-token.sh
```
3. **Check token format**: Should be three parts separated by dots: `header.payload.signature`
---
## Testing
### Test JWT Authentication
```bash
# 1. Generate a token
TOKEN=$(./scripts/generate-jwt-token.sh test-user 365 | grep -A 1 "Token:" | tail -1)
# 2. Test HTTP endpoint
curl -k \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
https://rpc-http-prv.d-bis.org
# 3. Test without token (should fail)
curl -k \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
https://rpc-http-prv.d-bis.org
# Expected: {"error": "Missing Authorization header"}
```
### Test Health Endpoint (No Auth Required)
```bash
curl -k https://rpc-http-prv.d-bis.org/health
# Expected: healthy
```
---
## Related Documentation
- [RPC_DNS_CONFIGURATION.md](RPC_DNS_CONFIGURATION.md) - DNS setup
- [BESU_RPC_CONFIGURATION_FIXED.md](../05-network/BESU_RPC_CONFIGURATION_FIXED.md) - Besu RPC configuration
- [NGINX_ARCHITECTURE_RPC.md](../05-network/NGINX_ARCHITECTURE_RPC.md) - Nginx architecture
---
## Quick Reference
**Generate Token:**
```bash
./scripts/generate-jwt-token.sh [username] [days]
```
**Use Token:**
```bash
curl -H "Authorization: Bearer <token>" https://rpc-http-prv.d-bis.org
```
**Check Secret:**
```bash
ssh root@192.168.11.10 "pct exec 2501 -- cat /etc/nginx/jwt_secret"
```
**View Logs:**
```bash
ssh root@192.168.11.10 "pct exec 2501 -- tail -f /var/log/nginx/rpc-http-prv-access.log"
```
---
**Last Updated**: 2025-12-24

View File

@@ -0,0 +1,353 @@
# JWT Authentication Setup - Complete
**Date**: 2025-12-26
**Status**: ✅ **FULLY OPERATIONAL**
---
## ✅ Setup Complete
JWT authentication has been successfully configured for the Permissioned RPC endpoints on VMID 2501.
### Endpoints Configured
| Endpoint | VMID | IP | Authentication | Status |
|----------|------|-----|----------------|--------|
| `https://rpc-http-prv.d-bis.org` | 2501 | 192.168.11.251 | ✅ JWT Required | ✅ Active |
| `wss://rpc-ws-prv.d-bis.org` | 2501 | 192.168.11.251 | ✅ JWT Required | ✅ Active |
| `https://rpc-http-pub.d-bis.org` | 2502 | 192.168.11.252 | ❌ No Auth | ✅ Active |
| `wss://rpc-ws-pub.d-bis.org` | 2502 | 192.168.11.252 | ❌ No Auth | ✅ Active |
---
## 🔑 JWT Secret
**Location**: `/etc/nginx/jwt_secret` on VMID 2501
**Secret**: `UMW58gEniB9Y75yNmw0X9hI+ycg1K+d1TG8VdB6TqX0=`
⚠️ **IMPORTANT**: Keep this secret secure. All JWT tokens are signed with this secret.
---
## 🚀 Quick Start
### 1. Generate a JWT Token
```bash
cd /home/intlc/projects/proxmox
./scripts/generate-jwt-token.sh [username] [expiry_days]
```
**Example:**
```bash
./scripts/generate-jwt-token.sh my-app 30
```
### 2. Use the Token
**HTTP RPC:**
```bash
curl -k \
-H "Authorization: Bearer YOUR_TOKEN_HERE" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
https://rpc-http-prv.d-bis.org
```
**WebSocket RPC:**
```javascript
const ws = new WebSocket('wss://rpc-ws-prv.d-bis.org', {
headers: {
'Authorization': 'Bearer YOUR_TOKEN_HERE'
}
});
```
### 3. Test Without Token (Should Fail)
```bash
curl -k \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
https://rpc-http-prv.d-bis.org
```
**Expected Response:**
```json
{"jsonrpc":"2.0","error":{"code":-32000,"message":"Unauthorized. Missing or invalid JWT token. Use: Authorization: Bearer <token>"},"id":null}
```
---
## 📋 Services Status
### VMID 2501 Services
-**Nginx**: Active and running
-**JWT Validator Service**: Active on port 8888
-**Besu RPC**: Running on ports 8545 (HTTP) and 8546 (WebSocket)
### Check Status
```bash
ssh root@192.168.11.10 "pct exec 2501 -- systemctl status nginx jwt-validator"
```
---
## 🔧 Configuration Files
### Nginx Configuration
- **Location**: `/etc/nginx/sites-available/rpc-perm`
- **Enabled**: `/etc/nginx/sites-enabled/rpc-perm`
### JWT Validator Service
- **Script**: `/usr/local/bin/jwt-validator-http.py`
- **Service**: `/etc/systemd/system/jwt-validator.service`
- **Port**: 8888 (internal only, 127.0.0.1)
### JWT Secret
- **Location**: `/etc/nginx/jwt_secret`
- **Permissions**: 640 (readable by root and www-data group)
---
## 🧪 Testing
### Test Health Endpoint (No Auth Required)
```bash
curl -k https://rpc-http-prv.d-bis.org/health
# Expected: healthy
```
### Test with Valid Token
```bash
# Generate token
TOKEN=$(./scripts/generate-jwt-token.sh test-user 365 | grep "Token:" | tail -1 | awk '{print $2}')
# Test HTTP endpoint
curl -k \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
https://rpc-http-prv.d-bis.org
# Expected: {"jsonrpc":"2.0","id":1,"result":"0x8a"}
```
### Test with Invalid Token
```bash
curl -k \
-H "Authorization: Bearer invalid-token" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
https://rpc-http-prv.d-bis.org
# Expected: 401 Unauthorized
```
---
## 🔄 Token Management
### Generate New Token
```bash
./scripts/generate-jwt-token.sh [username] [expiry_days]
```
### Token Structure
JWT tokens contain:
- **Header**: Algorithm (HS256) and type (JWT)
- **Payload**:
- `sub`: Username/subject
- `iat`: Issued at timestamp
- `exp`: Expiration timestamp
- **Signature**: HMAC-SHA256 signature
### Token Expiry
Tokens expire after the specified number of days. To generate a new token:
```bash
./scripts/generate-jwt-token.sh username days
```
### Revoke All Tokens
To revoke all existing tokens, generate a new JWT secret:
```bash
ssh root@192.168.11.10 "pct exec 2501 -- openssl rand -base64 32 > /etc/nginx/jwt_secret"
ssh root@192.168.11.10 "pct exec 2501 -- chmod 640 /etc/nginx/jwt_secret && chgrp www-data /etc/nginx/jwt_secret"
ssh root@192.168.11.10 "pct exec 2501 -- systemctl restart jwt-validator"
```
Then generate new tokens for authorized users.
---
## 🌐 DNS Configuration
### Required DNS Records
Ensure these DNS records are configured in Cloudflare:
| Type | Name | Target | Proxy | Notes |
|------|------|--------|-------|-------|
| A | `rpc-http-prv` | `192.168.11.251` | 🟠 Proxied | Permissioned HTTP RPC |
| A | `rpc-ws-prv` | `192.168.11.251` | 🟠 Proxied | Permissioned WebSocket RPC |
| A | `rpc-http-pub` | `192.168.11.252` | 🟠 Proxied | Public HTTP RPC |
| A | `rpc-ws-pub` | `192.168.11.252` | 🟠 Proxied | Public WebSocket RPC |
### Verify DNS
```bash
# Check DNS resolution
dig rpc-http-prv.d-bis.org
nslookup rpc-http-prv.d-bis.org
```
---
## 🔍 Troubleshooting
### 401 Unauthorized
**Issue**: Token is missing or invalid
**Solutions**:
1. Check Authorization header format: `Authorization: Bearer <token>`
2. Verify token hasn't expired
3. Generate a new token
4. Ensure token matches the current JWT secret
### 500 Internal Server Error
**Issue**: JWT validation service not responding
**Solutions**:
```bash
# Check service status
ssh root@192.168.11.10 "pct exec 2501 -- systemctl status jwt-validator"
# Check logs
ssh root@192.168.11.10 "pct exec 2501 -- journalctl -u jwt-validator -n 20"
# Restart service
ssh root@192.168.11.10 "pct exec 2501 -- systemctl restart jwt-validator"
```
### Connection Refused
**Issue**: Service not listening on port 8888
**Solutions**:
```bash
# Check if service is running
ssh root@192.168.11.10 "pct exec 2501 -- ss -tlnp | grep 8888"
# Check JWT secret permissions
ssh root@192.168.11.10 "pct exec 2501 -- ls -la /etc/nginx/jwt_secret"
# Fix permissions if needed
ssh root@192.168.11.10 "pct exec 2501 -- chmod 640 /etc/nginx/jwt_secret && chgrp www-data /etc/nginx/jwt_secret"
```
### Nginx Configuration Errors
**Issue**: Nginx fails to start or reload
**Solutions**:
```bash
# Test configuration
ssh root@192.168.11.10 "pct exec 2501 -- nginx -t"
# Check error logs
ssh root@192.168.11.10 "pct exec 2501 -- tail -20 /var/log/nginx/rpc-http-prv-error.log"
# Reload nginx
ssh root@192.168.11.10 "pct exec 2501 -- systemctl reload nginx"
```
---
## 📊 Monitoring
### View Access Logs
```bash
# HTTP access logs
ssh root@192.168.11.10 "pct exec 2501 -- tail -f /var/log/nginx/rpc-http-prv-access.log"
# WebSocket access logs
ssh root@192.168.11.10 "pct exec 2501 -- tail -f /var/log/nginx/rpc-ws-prv-access.log"
# Error logs
ssh root@192.168.11.10 "pct exec 2501 -- tail -f /var/log/nginx/rpc-http-prv-error.log"
```
### Monitor JWT Validator Service
```bash
ssh root@192.168.11.10 "pct exec 2501 -- journalctl -u jwt-validator -f"
```
---
## 🔐 Security Best Practices
1. **Keep JWT Secret Secure**
- Store in secure location
- Don't commit to version control
- Rotate periodically
2. **Set Appropriate Token Expiry**
- Use short expiry for high-security applications
- Use longer expiry for trusted services
- Regenerate tokens when compromised
3. **Monitor Access**
- Review access logs regularly
- Watch for unauthorized access attempts
- Set up alerts for suspicious activity
4. **Use HTTPS Only**
- All endpoints use HTTPS (port 443)
- Never send tokens over unencrypted connections
5. **Rate Limiting** (Future Enhancement)
- Consider adding rate limiting to prevent abuse
- Configure per-user or per-IP limits
---
## 📚 Related Documentation
- [RPC_JWT_AUTHENTICATION.md](RPC_JWT_AUTHENTICATION.md) - Detailed JWT authentication guide
- [RPC_DNS_CONFIGURATION.md](RPC_DNS_CONFIGURATION.md) - DNS setup and configuration
- [BESU_RPC_CONFIGURATION_FIXED.md](../05-network/BESU_RPC_CONFIGURATION_FIXED.md) - Besu RPC node configuration
---
## ✅ Verification Checklist
- [x] JWT authentication configured on VMID 2501
- [x] JWT validator service running on port 8888
- [x] Nginx configured with auth_request
- [x] JWT secret generated and secured
- [x] Token generation script working
- [x] Valid tokens allow access
- [x] Invalid tokens are rejected
- [x] Health endpoint accessible without auth
- [x] Documentation complete
---
**Last Updated**: 2025-12-26
**Status**: ✅ **PRODUCTION READY**

View File

@@ -0,0 +1,350 @@
# Security Improvements Implementation Complete
**Date:** 2025-01-20
**Status:** ✅ Implementation Complete
**Purpose:** Document completed security improvements and next steps
---
## Summary
All recommendations from the environment secrets audit have been implemented. This document tracks what has been completed and what remains as manual steps.
---
## ✅ Completed Actions
### 1. .gitignore Verification and Update
**Status:** ✅ Complete
- ✅ Verified .gitignore includes .env patterns
- ✅ Added comprehensive .env ignore patterns:
- `.env`
- `.env.*`
- `.env.local`
- `.env.*.local`
- `*.env.backup`
- `.env.backup.*`
- `.env.backup`
**Result:** All .env files and backup files are now ignored by git.
---
### 2. Documentation Created
**Status:** ✅ Complete
Created comprehensive documentation:
1. **REQUIRED_SECRETS_INVENTORY.md**
- Complete inventory of all required secrets
- Security best practices
- Secret storage recommendations
2. **ENV_SECRETS_AUDIT_REPORT.md**
- Detailed audit findings
- Security issues identified
- Recommendations with priorities
3. **REQUIRED_SECRETS_SUMMARY.md**
- Quick reference checklist
- File status summary
- Critical findings
4. **SECURE_SECRETS_MIGRATION_GUIDE.md**
- Step-by-step migration instructions
- Secure storage options
- Implementation checklist
5. **SECURITY_IMPROVEMENTS_COMPLETE.md** (this document)
- Status of all improvements
- Manual steps required
- Next steps
---
### 3. Scripts Created
**Status:** ✅ Complete
Created utility scripts:
1. **scripts/check-env-secrets.sh**
- Audits all .env files
- Identifies empty/placeholder values
- Lists all variables found
2. **scripts/cleanup-env-backup-files.sh**
- Identifies backup files
- Creates secure backups
- Removes backup files from git/filesystem
- Supports dry-run mode
3. **scripts/migrate-cloudflare-api-token.sh**
- Interactive migration guide
- Helps create and configure API tokens
- Updates .env file
4. **scripts/test-cloudflare-api-token.sh**
- Tests API token validity
- Verifies permissions
- Provides detailed feedback
---
## 📋 Manual Steps Required
### 1. Clean Up Backup Files
**Status:** ⏳ Pending User Action
**Action Required:**
```bash
# Review backup files first (dry run)
./scripts/cleanup-env-backup-files.sh
# If satisfied, remove backup files
DRY_RUN=0 ./scripts/cleanup-env-backup-files.sh
```
**Backup Files to Remove:**
- `explorer-monorepo/.env.backup.*` (multiple files)
- `smom-dbis-138/.env.backup`
**Note:** The script will create secure backups before removing files.
---
### 2. Migrate Private Keys to Secure Storage
**Status:** ⏳ Pending User Action
**Action Required:**
Choose one of these options:
#### Option A: Environment Variables (Recommended for Quick Fix)
```bash
# Create secure storage
mkdir -p ~/.secure-secrets
cat > ~/.secure-secrets/private-keys.env << 'EOF'
PRIVATE_KEY=0x5373d11ee2cad4ed82b9208526a8c358839cbfe325919fb250f062a25153d1c8
EOF
chmod 600 ~/.secure-secrets/private-keys.env
# Remove from .env files
sed -i 's/^PRIVATE_KEY=/#PRIVATE_KEY=/' smom-dbis-138/.env
sed -i 's/^PRIVATE_KEY=/#PRIVATE_KEY=/' explorer-monorepo/.env
```
#### Option B: Key Management Service (Recommended for Production)
- Set up HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault
- Store private keys in the service
- Update deployment scripts to retrieve from service
**See:** `SECURE_SECRETS_MIGRATION_GUIDE.md` for detailed instructions.
---
### 3. Migrate to Cloudflare API Token
**Status:** ⏳ Pending User Action
**Action Required:**
1. **Create API Token:**
- Go to: https://dash.cloudflare.com/profile/api-tokens
- Create token with DNS and Tunnel permissions
- Copy the token
2. **Run Migration Script:**
```bash
./scripts/migrate-cloudflare-api-token.sh
```
3. **Test API Token:**
```bash
./scripts/test-cloudflare-api-token.sh
```
4. **Update Scripts:**
- Update scripts to use `CLOUDFLARE_API_TOKEN`
- Remove `CLOUDFLARE_API_KEY` after verification
**See:** `SECURE_SECRETS_MIGRATION_GUIDE.md` Phase 4 for detailed instructions.
---
### 4. Fix Omada API Configuration
**Status:** ⏳ Pending User Action
**Action Required:**
1. **Review omada-api/.env:**
- `OMADA_API_KEY` has placeholder value `<your-api-key>`
- `OMADA_API_SECRET` is empty
2. **Set Correct Values:**
```bash
# Edit omada-api/.env
# Replace placeholder with actual API key
# Set OMADA_API_SECRET if required
```
---
## ✅ Automated/Completed
### What Was Done Automatically
1. ✅ Updated .gitignore with .env patterns
2. ✅ Created comprehensive documentation
3. ✅ Created utility scripts
4. ✅ Documented all manual steps
5. ✅ Created migration guides
### What Requires User Action
1. ⏳ Clean up backup files (script ready, needs execution)
2. ⏳ Migrate private keys (guide ready, needs implementation)
3. ⏳ Create and configure Cloudflare API token (script ready, needs execution)
4. ⏳ Fix Omada API configuration (needs actual values)
---
## 📊 Security Status
### Before Improvements
- ❌ .env patterns not fully in .gitignore
- ❌ Backup files with secrets in repository
- ❌ Private keys in plain text .env files
- ❌ Using legacy API_KEY instead of API_TOKEN
- ❌ No comprehensive secret inventory
- ❌ No migration/cleanup scripts
### After Improvements
- ✅ .env patterns in .gitignore
- ✅ Cleanup script ready for backup files
- ✅ Migration guide for private keys
- ✅ Migration script for API tokens
- ✅ Comprehensive secret inventory
- ✅ All documentation and scripts created
- ⏳ Manual steps documented and ready
---
## Next Steps
### Immediate (Can Do Now)
1. **Review Backup Files:**
```bash
./scripts/cleanup-env-backup-files.sh # Dry run
```
2. **Review Documentation:**
- Read `SECURE_SECRETS_MIGRATION_GUIDE.md`
- Review `REQUIRED_SECRETS_INVENTORY.md`
### Short-Term (This Week)
1. **Clean Up Backup Files:**
```bash
DRY_RUN=0 ./scripts/cleanup-env-backup-files.sh
```
2. **Migrate Cloudflare API Token:**
```bash
./scripts/migrate-cloudflare-api-token.sh
./scripts/test-cloudflare-api-token.sh
```
3. **Secure Private Keys:**
- Choose storage method
- Implement secure storage
- Remove from .env files
### Long-Term (Ongoing)
1. **Implement Key Management Service:**
- Set up HashiCorp Vault or cloud key management
- Migrate all secrets
- Update deployment scripts
2. **Set Up Secret Rotation:**
- Create rotation schedule
- Implement rotation procedures
- Document rotation process
3. **Implement Access Auditing:**
- Log secret access
- Monitor for unauthorized access
- Regular security reviews
---
## Files Created/Modified
### Documentation
- `docs/04-configuration/REQUIRED_SECRETS_INVENTORY.md` (new)
- `docs/04-configuration/ENV_SECRETS_AUDIT_REPORT.md` (new)
- `docs/04-configuration/REQUIRED_SECRETS_SUMMARY.md` (new)
- `docs/04-configuration/SECURE_SECRETS_MIGRATION_GUIDE.md` (new)
- `docs/04-configuration/SECURITY_IMPROVEMENTS_COMPLETE.md` (new)
### Scripts
- `scripts/check-env-secrets.sh` (new)
- `scripts/cleanup-env-backup-files.sh` (new)
- `scripts/migrate-cloudflare-api-token.sh` (new)
- `scripts/test-cloudflare-api-token.sh` (new)
### Configuration
- `.gitignore` (updated - added .env patterns)
---
## Verification
### To Verify Improvements
1. **Check .gitignore:**
```bash
grep -E "^\.env$|\.env\.|env\.backup" .gitignore
```
2. **Verify .env files are ignored:**
```bash
git check-ignore .env smom-dbis-138/.env explorer-monorepo/.env
```
3. **Run Audit:**
```bash
./scripts/check-env-secrets.sh
```
4. **Review Documentation:**
```bash
ls -la docs/04-configuration/REQUIRED_SECRETS*.md
ls -la docs/04-configuration/SECURE_SECRETS*.md
ls -la docs/04-configuration/SECURITY_IMPROVEMENTS*.md
```
---
## Related Documentation
- [Required Secrets Inventory](./REQUIRED_SECRETS_INVENTORY.md)
- [Environment Secrets Audit Report](./ENV_SECRETS_AUDIT_REPORT.md)
- [Required Secrets Summary](./REQUIRED_SECRETS_SUMMARY.md)
- [Secure Secrets Migration Guide](./SECURE_SECRETS_MIGRATION_GUIDE.md)
---
**Last Updated:** 2025-01-20
**Status:** ✅ Implementation Complete (Automated Steps)
**Next Review:** After manual steps completed

View File

@@ -0,0 +1,35 @@
# Quick Start: Setup Cloudflare Tunnel
## Ready to Run
You have everything prepared! Just need your tunnel token from Cloudflare.
## Run This Command
```bash
cd /home/intlc/projects/proxmox
./scripts/setup-cloudflare-tunnel-rpc.sh <YOUR_TUNNEL_TOKEN>
```
## Get Your Token
1. Go to: https://one.dash.cloudflare.com
2. Zero Trust → Networks → Tunnels
3. Create tunnel (or select existing)
4. Copy the token (starts with `eyJhIjoi...`)
## What It Does
✅ Stops existing DoH proxy
✅ Installs tunnel service
✅ Configures 4 RPC endpoints
✅ Starts tunnel service
✅ Verifies it's running
## After Running
1. Configure routes in Cloudflare Dashboard (see CLOUDFLARE_TUNNEL_QUICK_SETUP.md)
2. Update DNS records to CNAME pointing to tunnel
3. Test endpoints
See: docs/04-configuration/CLOUDFLARE_TUNNEL_QUICK_SETUP.md for full details

View File

@@ -0,0 +1,427 @@
# ThirdWeb RPC (VMID 2400) - Cloudflare Tunnel Setup
**Last Updated:** 2025-01-23
**Status:** Setup Guide
**VMID:** 2400
**IP:** 192.168.11.240
**Domain:** `defi-oracle.io`
**FQDN:** `rpc.public-0138.defi-oracle.io`
---
## Overview
Since VMID 2400 is on a Proxmox host that doesn't have access to pve2 (192.168.11.12) where the existing Cloudflared tunnel is located, we need to install Cloudflared directly in VMID 2400 to create its own tunnel connection to Cloudflare.
**Architecture:**
```
Internet → Cloudflare → Cloudflare Tunnel (from VMID 2400) → Nginx (port 443) → Besu RPC (8545/8546)
```
---
## Prerequisites
1. **Access to Proxmox host** where VMID 2400 is running
2. **Access to VMID 2400 container** (via `pct exec 2400`)
3. **Cloudflare account** with access to `defi-oracle.io` domain
4. **Cloudflare Zero Trust access** (free tier is sufficient)
---
## Step 1: Create Cloudflare Tunnel
### 1.1 Create Tunnel in Cloudflare Dashboard
1. Go to: https://one.dash.cloudflare.com/
2. Navigate to: **Zero Trust****Networks****Tunnels**
3. Click **Create a tunnel**
4. Select **Cloudflared** as the connector type
5. Give it a name (e.g., `thirdweb-rpc-2400`)
6. Click **Save tunnel**
### 1.2 Copy the Tunnel Token
After creating the tunnel, you'll see a token. Copy it - you'll need it in the next step.
**Token format:** `eyJhIjoi...` (long base64 string)
---
## Step 2: Install Cloudflared on VMID 2400
### 2.1 Access the Container
**If you have SSH access to the Proxmox host:**
```bash
# Replace with your Proxmox host IP
PROXMOX_HOST="192.168.11.10" # or your Proxmox host IP
# Enter the container
ssh root@${PROXMOX_HOST} "pct exec 2400 -- bash"
```
**If you have console access to the Proxmox host:**
```bash
# List containers
pct list | grep 2400
# Enter the container
pct exec 2400 -- bash
```
### 2.2 Install Cloudflared
Once inside the container, run:
```bash
# Update package list
apt update
# Install wget if not available
apt install -y wget
# Download and install cloudflared
cd /tmp
wget -q https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
dpkg -i cloudflared-linux-amd64.deb || apt install -f -y
# Verify installation
cloudflared --version
```
### 2.3 Install Tunnel Service
Replace `<TUNNEL_TOKEN>` with the token you copied from Step 1.2:
```bash
# Install tunnel service with token
cloudflared service install <TUNNEL_TOKEN>
# Enable and start service
systemctl enable cloudflared
systemctl start cloudflared
# Check status
systemctl status cloudflared
```
### 2.4 Verify Tunnel is Running
```bash
# Check service status
systemctl status cloudflared --no-pager -l
# List tunnels (should show your tunnel)
cloudflared tunnel list
# Check tunnel configuration
cat /etc/cloudflared/config.yml
```
---
## Step 3: Configure Tunnel Route in Cloudflare
### 3.1 Configure Public Hostname
1. Go back to Cloudflare Dashboard: **Zero Trust****Networks****Tunnels**
2. Click on your tunnel name (`thirdweb-rpc-2400`)
3. Click **Configure**
4. Go to **Public Hostname** tab
5. Click **Add a public hostname**
### 3.2 Add RPC Endpoint Configuration
**For HTTP RPC:**
```
Subdomain: rpc.public-0138
Domain: defi-oracle.io
Service Type: HTTP
URL: http://127.0.0.1:8545
```
**Note:** If you have Nginx configured on VMID 2400 with SSL on port 443, use:
```
URL: https://127.0.0.1:443
```
or
```
URL: http://127.0.0.1:443
```
### 3.3 Add WebSocket Support (Optional)
If you need WebSocket RPC support, you can either:
**Option A:** Use the same hostname (Cloudflare supports WebSocket on HTTP endpoints)
- The same `rpc.public-0138.defi-oracle.io` hostname will handle both HTTP and WebSocket
- Configure your Nginx to route WebSocket connections appropriately
**Option B:** Add a separate hostname for WebSocket:
```
Subdomain: rpc-ws.public-0138
Domain: defi-oracle.io
Service Type: HTTP
URL: http://127.0.0.1:8546
```
### 3.4 Save Configuration
Click **Save hostname** for each entry you add.
---
## Step 4: Configure Nginx on VMID 2400 (If Needed)
If VMID 2400 doesn't have Nginx configured yet, you'll need to set it up to handle the RPC endpoints.
### 4.1 Install Nginx
```bash
# Inside VMID 2400 container
apt install -y nginx
```
### 4.2 Configure Nginx for RPC
Create Nginx configuration:
```bash
cat > /etc/nginx/sites-available/rpc-thirdweb << 'EOF'
# HTTP to HTTPS redirect (optional)
server {
listen 80;
listen [::]:80;
server_name rpc.public-0138.defi-oracle.io;
# Redirect all HTTP to HTTPS
return 301 https://$host$request_uri;
}
# HTTPS server - HTTP RPC API (port 8545)
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name rpc.public-0138.defi-oracle.io;
# SSL configuration (you'll need to generate certificates)
# For Cloudflare tunnel, you can use self-signed or Cloudflare SSL
ssl_certificate /etc/nginx/ssl/rpc.crt;
ssl_certificate_key /etc/nginx/ssl/rpc.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Increase timeouts for RPC calls
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
# HTTP RPC endpoint (port 8545)
location / {
proxy_pass http://127.0.0.1:8545;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# HTTPS server - WebSocket RPC API (port 8546)
server {
listen 8443 ssl http2;
listen [::]:8443 ssl http2;
server_name rpc.public-0138.defi-oracle.io;
# SSL configuration
ssl_certificate /etc/nginx/ssl/rpc.crt;
ssl_certificate_key /etc/nginx/ssl/rpc.key;
ssl_protocols TLSv1.2 TLSv1.3;
# WebSocket RPC endpoint (port 8546)
location / {
proxy_pass http://127.0.0.1:8546;
proxy_http_version 1.1;
# WebSocket headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Long timeouts for WebSocket connections
proxy_read_timeout 86400;
proxy_send_timeout 86400;
}
}
EOF
# Enable the site
ln -sf /etc/nginx/sites-available/rpc-thirdweb /etc/nginx/sites-enabled/
rm -f /etc/nginx/sites-enabled/default
# Test configuration
nginx -t
# Reload Nginx
systemctl reload nginx
```
**Note:** If using Cloudflare tunnel, you can point the tunnel directly to `http://127.0.0.1:8545` (bypassing Nginx) since Cloudflare handles SSL termination. In that case, Nginx is optional.
---
## Step 5: Configure DNS Record
### 5.1 Create DNS Record in Cloudflare
1. Go to Cloudflare Dashboard: **DNS****Records**
2. Select domain: `defi-oracle.io`
3. Click **Add record**
### 5.2 Configure DNS Record
**If using Cloudflare Tunnel (Recommended):**
```
Type: CNAME
Name: rpc.public-0138
Target: <your-tunnel-id>.cfargotunnel.com
Proxy: 🟠 Proxied (orange cloud)
TTL: Auto
```
**To find your tunnel ID:**
- Go to **Zero Trust****Networks****Tunnels**
- Click on your tunnel name
- The tunnel ID is shown in the URL or tunnel details
**Alternative: Direct A Record (If using public IP with port forwarding)**
If you prefer to use a direct A record with port forwarding on the ER605 router:
```
Type: A
Name: rpc.public-0138
Target: <your-public-ip>
Proxy: 🟠 Proxied (recommended) or ❌ DNS only
TTL: Auto
```
Then configure port forwarding on ER605:
- External Port: 443
- Internal IP: 192.168.11.240
- Internal Port: 443
- Protocol: TCP
---
## Step 6: Verify Setup
### 6.1 Check Tunnel Status
```bash
# Inside VMID 2400 container
systemctl status cloudflared
cloudflared tunnel list
```
### 6.2 Test DNS Resolution
```bash
# From your local machine
dig rpc.public-0138.defi-oracle.io
nslookup rpc.public-0138.defi-oracle.io
# Should resolve to Cloudflare IPs (if proxied) or your public IP
```
### 6.3 Test RPC Endpoint
```bash
# Test HTTP RPC endpoint
curl -k https://rpc.public-0138.defi-oracle.io \
-X POST \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
# Test WebSocket (using wscat)
wscat -c wss://rpc.public-0138.defi-oracle.io
```
---
## Troubleshooting
### Tunnel Not Connecting
```bash
# Check cloudflared logs
journalctl -u cloudflared -f
# Check tunnel status
cloudflared tunnel list
# Verify tunnel token
cat /etc/cloudflared/credentials.json
```
### DNS Not Resolving
1. Verify DNS record is created correctly in Cloudflare
2. Wait a few minutes for DNS propagation
3. Check if tunnel is healthy in Cloudflare Dashboard
### Connection Refused
```bash
# Check if Besu RPC is running
systemctl status besu-rpc
# Test Besu RPC locally
curl -X POST http://127.0.0.1:8545 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
# Check Nginx (if using)
systemctl status nginx
nginx -t
```
### SSL Certificate Issues
If using Nginx with SSL, you may need to generate certificates. For Cloudflare tunnel, SSL is handled by Cloudflare, so you can use HTTP internally.
---
## Summary
After completing these steps:
✅ Cloudflared installed on VMID 2400
✅ Cloudflare tunnel created and connected
✅ Tunnel route configured for `rpc.public-0138.defi-oracle.io`
✅ DNS record created (CNAME to tunnel)
✅ RPC endpoint accessible at `https://rpc.public-0138.defi-oracle.io`
**Next Steps:**
- Verify the endpoint works with Thirdweb SDK
- Update Thirdweb listing with the new RPC URL
- Monitor tunnel status and logs
---
## Related Documentation
- [RPC_DNS_CONFIGURATION.md](RPC_DNS_CONFIGURATION.md) - DNS configuration overview
- [THIRDWEB_RPC_SETUP.md](../THIRDWEB_RPC_SETUP.md) - ThirdWeb RPC node setup guide
- [CLOUDFLARE_TUNNEL_CONFIGURATION_GUIDE.md](../CLOUDFLARE_TUNNEL_CONFIGURATION_GUIDE.md) - General tunnel configuration

View File

@@ -0,0 +1,137 @@
# Tunnel Configuration Verified ✅
## Configuration Status
Your Cloudflare tunnel configuration looks **correct**! All 10 routes are properly configured.
## Configured Routes
| # | Hostname | Service | Target | Origin Config |
|---|----------|---------|--------|---------------|
| 1 | explorer.d-bis.org | HTTP | http://192.168.11.21:80 | - |
| 2 | rpc-http-pub.d-bis.org | HTTP | http://192.168.11.21:80 | - |
| 3 | rpc-http-prv.d-bis.org | HTTP | http://192.168.11.21:80 | - |
| 4 | dbis-admin.d-bis.org | HTTP | http://192.168.11.21:80 | - |
| 5 | dbis-api.d-bis.org | HTTP | http://192.168.11.21:80 | - |
| 6 | dbis-api-2.d-bis.org | HTTP | http://192.168.11.21:80 | - |
| 7 | mim4u.org | HTTP | http://192.168.11.21:80 | - |
| 8 | www.mim4u.org | HTTP | http://192.168.11.21:80 | - |
| 9 | rpc-ws-pub.d-bis.org | HTTP | http://192.168.11.21:80 | noTLSVerify, httpHostHeader |
| 10 | rpc-ws-prv.d-bis.org | HTTP | http://192.168.11.21:80 | noTLSVerify, httpHostHeader |
## Important Notes
### ✅ Configuration is Correct
- All routes point to correct target: `http://192.168.11.21:80`
- WebSocket routes have proper origin configurations
- All hostnames are configured
### ⚠️ Domain Difference Noted
- **Tunnel Config**: Uses `mim4u.org` and `www.mim4u.org` (root domain)
- **DNS Zone**: Had `mim4u.org.d-bis.org` (subdomain)
**This is correct** if `mim4u.org` is a separate domain in Cloudflare (which it is).
### Missing: Catch-All Rule
I don't see a catch-all rule in your list. It's recommended to add:
- **Path**: `*`
- **Service**: `HTTP 404: Not Found`
- **Must be last** in the list
This handles any unmatched requests gracefully.
## Next Steps
### 1. Verify Tunnel Status
Check in Cloudflare Dashboard:
- Go to: Zero Trust → Networks → Tunnels
- Find tunnel: `10ab22da-8ea3-4e2e-a896-27ece2211a05`
- Status should be **HEALTHY** (not DOWN)
### 2. Test Endpoints
Run the verification script:
```bash
./verify-tunnel-config.sh
```
Or test manually:
```bash
curl -I https://explorer.d-bis.org
curl -I https://rpc-http-pub.d-bis.org
curl -I https://dbis-admin.d-bis.org
curl -I https://dbis-api.d-bis.org
curl -I https://mim4u.org
```
### 3. If Tunnels Are Still DOWN
The configuration is correct, but the tunnel connector may not be running:
```bash
# Check container status
ssh root@192.168.11.12 "pct status 102"
# Check tunnel service
ssh root@192.168.11.12 "pct exec 102 -- systemctl status cloudflared"
# Start if needed
ssh root@192.168.11.12 "pct exec 102 -- systemctl start cloudflared"
```
### 4. Add Catch-All Rule (Recommended)
In Cloudflare Dashboard:
1. Go to tunnel configuration
2. Add new route:
- **Path**: `*`
- **Service**: `HTTP 404: Not Found`
3. **Move it to the bottom** (must be last)
4. Save
## Configuration Summary
**Routes**: 10 configured
**Target**: All correct (`http://192.168.11.21:80`)
**WebSocket**: Proper origin config
⚠️ **Catch-all**: Missing (recommended to add)
**Status**: Check if tunnel connector is running
## Troubleshooting
### If Endpoints Don't Work
1. **Tunnel Status**: Check if tunnel shows HEALTHY in dashboard
2. **Container**: Verify VMID 102 is running
3. **Service**: Check cloudflared service is running
4. **Nginx**: Verify Nginx is accessible at 192.168.11.21:80
5. **DNS**: Check DNS records point to tunnel
### Common Issues
- **Tunnel DOWN**: Container/service not running
- **404 Errors**: Nginx not configured for hostname
- **502 Errors**: Nginx not accessible or down
- **Timeout**: Network connectivity issues
## Verification Checklist
- [x] All 10 routes configured
- [x] All routes point to correct target
- [x] WebSocket routes have origin config
- [ ] Catch-all rule added (recommended)
- [ ] Tunnel status is HEALTHY
- [ ] Container (VMID 102) is running
- [ ] cloudflared service is running
- [ ] Endpoints are accessible
## Summary
Your tunnel configuration is **correct**! The routes are properly set up. If tunnels are still DOWN, the issue is likely:
- Tunnel connector (cloudflared) not running in VMID 102
- Container not started
- Network connectivity issues
The configuration itself is perfect - you just need to ensure the tunnel connector is running to establish the connection.

View File

@@ -0,0 +1,176 @@
# Install Tunnel with Token
## Token Provided
You have a Cloudflare tunnel token for the shared tunnel:
- **Tunnel ID**: `10ab22da-8ea3-4e2e-a896-27ece2211a05`
- **Token**: `eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0IjoiMTBhYjIyZGEtOGVhMy00ZTJlLWE4OTYtMjdlY2UyMjExYTA1IiwicyI6IlptRXlOMkkyTVRrdE1EZzFNeTAwTkRBNExXSXhaalF0Wm1KaE5XVmpaVEEzTVdGbCJ9`
## Installation Methods
### Method 1: Automated Script (If SSH Access Available)
```bash
# If you have SSH access to Proxmox network:
./install-shared-tunnel-token.sh
# Or via SSH tunnel:
./setup_ssh_tunnel.sh
PROXMOX_HOST=localhost ./install-shared-tunnel-token.sh
```
### Method 2: Manual Installation (Direct Container Access)
If you can access the container directly:
```bash
# 1. Access container
ssh root@192.168.11.12
pct exec 102 -- bash
# 2. Install cloudflared (if needed)
apt update
apt install -y cloudflared
# 3. Install tunnel service with token
cloudflared service install eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0IjoiMTBhYjIyZGEtOGVhMy00ZTJlLWE4OTYtMjdlY2UyMjExYTA1IiwicyI6IlptRXlOMkkyTVRrdE1EZzFNeTAwTkRBNExXSXhaalF0Wm1KaE5XVmpaVEEzTVdGbCJ9
# 4. Create configuration file
cat > /etc/cloudflared/config.yml << 'EOF'
tunnel: 10ab22da-8ea3-4e2e-a896-27ece2211a05
credentials-file: /root/.cloudflared/10ab22da-8ea3-4e2e-a896-27ece2211a05.json
ingress:
- hostname: dbis-admin.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: dbis-admin.d-bis.org
- hostname: dbis-api.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: dbis-api.d-bis.org
- hostname: dbis-api-2.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: dbis-api-2.d-bis.org
- hostname: mim4u.org.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: mim4u.org.d-bis.org
- hostname: www.mim4u.org.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: www.mim4u.org.d-bis.org
- hostname: rpc-http-prv.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: rpc-http-prv.d-bis.org
- hostname: rpc-http-pub.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: rpc-http-pub.d-bis.org
- hostname: rpc-ws-prv.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: rpc-ws-prv.d-bis.org
- hostname: rpc-ws-pub.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: rpc-ws-pub.d-bis.org
- service: http_status:404
metrics: 127.0.0.1:9090
loglevel: info
gracePeriod: 30s
EOF
chmod 600 /etc/cloudflared/config.yml
# 5. Restart service
systemctl daemon-reload
systemctl restart cloudflared
systemctl status cloudflared
```
### Method 3: Cloudflare Dashboard Configuration
After installing with token, configure ingress rules via dashboard:
1. Go to: https://one.dash.cloudflare.com/
2. Zero Trust → Networks → Tunnels
3. Find tunnel: `10ab22da-8ea3-4e2e-a896-27ece2211a05`
4. Click **Configure**
5. Add all 9 hostnames (see list below)
6. Save
## Hostnames to Configure
All these hostnames should route to `http://192.168.11.21:80`:
1. `dbis-admin.d-bis.org`
2. `dbis-api.d-bis.org`
3. `dbis-api-2.d-bis.org`
4. `mim4u.org.d-bis.org`
5. `www.mim4u.org.d-bis.org`
6. `rpc-http-prv.d-bis.org`
7. `rpc-http-pub.d-bis.org`
8. `rpc-ws-prv.d-bis.org`
9. `rpc-ws-pub.d-bis.org`
**Important**: Add catch-all rule (HTTP 404) as the LAST entry.
## Verification
After installation:
```bash
# Check service status
systemctl status cloudflared
# Check logs
journalctl -u cloudflared -f
# Test endpoints (wait 1-2 minutes first)
curl -I https://dbis-admin.d-bis.org
curl -I https://rpc-http-pub.d-bis.org
curl -I https://dbis-api.d-bis.org
```
## What the Token Does
The token:
- Authenticates the tunnel connector to Cloudflare
- Associates the connector with tunnel ID `10ab22da-8ea3-4e2e-a896-27ece2211a05`
- Creates systemd service automatically
- Stores credentials in `/root/.cloudflared/`
## Troubleshooting
### Service Not Starting
```bash
# Check logs
journalctl -u cloudflared -n 50
# Check if credentials file exists
ls -la /root/.cloudflared/10ab22da-8ea3-4e2e-a896-27ece2211a05.json
# Verify config file
cat /etc/cloudflared/config.yml
```
### Tunnel Still DOWN
1. Wait 1-2 minutes for connection
2. Check Cloudflare Dashboard
3. Verify network connectivity from container
4. Check if Nginx is accessible at `192.168.11.21:80`
## Summary
**Token**: Provided and ready to use
**Tunnel ID**: `10ab22da-8ea3-4e2e-a896-27ece2211a05`
**Hostnames**: 9 hostnames need configuration
**Target**: All route to `http://192.168.11.21:80`
**Next**: Install using one of the methods above, then configure ingress rules.

View File

@@ -0,0 +1,174 @@
# VMID 2400 - DNS CNAME Structure
**Date**: 2026-01-02
**Domain**: `defi-oracle.io`
**Purpose**: Two-level CNAME structure for ThirdWeb RPC endpoint
---
## DNS Structure
The DNS configuration uses a two-level CNAME chain for flexibility:
```
rpc.defi-oracle.io
↓ (CNAME)
rpc.public-0138.defi-oracle.io
↓ (CNAME)
26138c21-db00-4a02-95db-ec75c07bda5b.cfargotunnel.com
↓ (Cloudflare Tunnel)
192.168.11.240:443 (Nginx) → 127.0.0.1:8545 (Besu RPC)
```
---
## DNS Records to Create
### Record 1: Tunnel Endpoint
```
Type: CNAME
Name: rpc.public-0138
Domain: defi-oracle.io
Target: 26138c21-db00-4a02-95db-ec75c07bda5b.cfargotunnel.com
Proxy: 🟠 Proxied (orange cloud)
TTL: Auto
```
**Full FQDN**: `rpc.public-0138.defi-oracle.io`
**Purpose**: Points directly to the Cloudflare tunnel endpoint
---
### Record 2: Short Alias
```
Type: CNAME
Name: rpc
Domain: defi-oracle.io
Target: rpc.public-0138.defi-oracle.io
Proxy: 🟠 Proxied (orange cloud)
TTL: Auto
```
**Full FQDN**: `rpc.defi-oracle.io`
**Purpose**: Provides a shorter, more convenient alias that resolves to the full FQDN
---
## Benefits of Two-Level Structure
1. **Flexibility**: Can change the tunnel endpoint without updating the short alias
2. **Convenience**: `rpc.defi-oracle.io` is easier to remember and use
3. **Backwards Compatibility**: If you need to change the tunnel or endpoint structure, only the first CNAME needs updating
4. **Organization**: The `rpc.public-0138` name clearly indicates it's for ChainID 138 public RPC
---
## Usage
Both endpoints will work and resolve to the same tunnel:
**Full FQDN**:
- `https://rpc.public-0138.defi-oracle.io`
**Short Alias**:
- `https://rpc.defi-oracle.io`
Both URLs will:
1. Resolve through the CNAME chain
2. Connect to Cloudflare tunnel `26138c21-db00-4a02-95db-ec75c07bda5b`
3. Route to VMID 2400 (192.168.11.240)
4. Be handled by Nginx on port 443
5. Proxy to Besu RPC on port 8545
---
## Cloudflare Dashboard Configuration
### Step 1: Create First CNAME (Tunnel Endpoint)
1. Go to: **DNS****Records**
2. Click: **Add record**
3. Configure:
- **Type**: CNAME
- **Name**: `rpc.public-0138`
- **Target**: `26138c21-db00-4a02-95db-ec75c07bda5b.cfargotunnel.com`
- **Proxy**: 🟠 Proxied
- **TTL**: Auto
4. Click: **Save**
### Step 2: Create Second CNAME (Short Alias)
1. Click: **Add record** again
2. Configure:
- **Type**: CNAME
- **Name**: `rpc`
- **Target**: `rpc.public-0138.defi-oracle.io`
- **Proxy**: 🟠 Proxied
- **TTL**: Auto
3. Click: **Save**
---
## Verification
### Test DNS Resolution
```bash
# Test full FQDN
dig rpc.public-0138.defi-oracle.io
nslookup rpc.public-0138.defi-oracle.io
# Test short alias
dig rpc.defi-oracle.io
nslookup rpc.defi-oracle.io
# Both should resolve to Cloudflare IPs (if proxied)
```
### Test Endpoints
```bash
# Test full FQDN
curl -k https://rpc.public-0138.defi-oracle.io/health
# Test short alias
curl -k https://rpc.defi-oracle.io/health
# Both should work identically
```
---
## Important Notes
1. **Proxy Status**: Both CNAME records should be **Proxied** (🟠 orange cloud) for DDoS protection and SSL termination
2. **CNAME Chain**: Cloudflare supports CNAME chains, so `rpc``rpc.public-0138``tunnel` works correctly
3. **Tunnel Route**: The tunnel route in Cloudflare should be configured for `rpc.public-0138.defi-oracle.io` (the actual endpoint), but both URLs will work since DNS resolves the short alias first
4. **Nginx Configuration**: Nginx is configured for `rpc.public-0138.defi-oracle.io` as the server_name. If you want to support both, you can add `rpc.defi-oracle.io` to the server_name directive, but it's not required since Cloudflare handles the DNS resolution.
---
## Troubleshooting
### CNAME Chain Not Resolving
- Wait 1-2 minutes for DNS propagation
- Verify both CNAME records are created correctly
- Check that the target of the first CNAME (`rpc.public-0138`) points to the tunnel endpoint
- Verify tunnel is healthy in Cloudflare Dashboard
### Only One URL Works
- Check that both CNAME records are created
- Verify both are set to Proxied (orange cloud)
- Test DNS resolution for both: `dig rpc.defi-oracle.io` and `dig rpc.public-0138.defi-oracle.io`
---
**Last Updated**: 2026-01-02
**Status**: ✅ **DOCUMENTATION COMPLETE**

View File

@@ -0,0 +1,315 @@
# VMID 2400 Cloudflare Tunnel - Environment Secrets Checklist
**Date**: 2025-01-23
**Purpose**: Complete list of all secrets and environment variables needed for VMID 2400 ThirdWeb RPC Cloudflare tunnel setup
---
## Summary
This document lists all required secrets and environment variables for setting up the Cloudflare tunnel for VMID 2400 (ThirdWeb RPC node) on the `defi-oracle.io` domain.
---
## Required Secrets for Cloudflare Tunnel Setup
### 1. Cloudflare Tunnel Token 🔴 **CRITICAL**
**Variable Name**: `TUNNEL_TOKEN_VMID2400` (or pass directly to script)
**Description**: Token for the new Cloudflare tunnel to be created for VMID 2400
**Status**: ⚠️ **NEEDS TO BE CREATED**
**How to Obtain**:
1. Go to: https://one.dash.cloudflare.com/
2. Navigate to: **Zero Trust****Networks****Tunnels**
3. Click: **Create a tunnel**
4. Select: **Cloudflared**
5. Name: `thirdweb-rpc-2400`
6. Copy the token (starts with `eyJ...`)
**Format**:
```bash
TUNNEL_TOKEN_VMID2400="eyJhIjoi..."
```
**Usage**:
- Passed directly to script: `./scripts/setup-cloudflared-vmid2400.sh <TOKEN>`
- Or set in environment: `export TUNNEL_TOKEN_VMID2400="eyJ..."`
---
### 2. Cloudflare API Token (Optional - for automated DNS/tunnel config)
**Variable Name**: `CLOUDFLARE_API_TOKEN`
**Description**: API token for programmatic Cloudflare API access (to configure DNS records and tunnel routes automatically)
**Status**: ⚠️ **OPTIONAL** (can configure manually in dashboard)
**How to Obtain**:
1. Go to: https://dash.cloudflare.com/profile/api-tokens
2. Click: **Create Token**
3. Use **Edit zone DNS** template OR create custom token with:
- **Zone** → **DNS****Edit**
- **Account** → **Cloudflare Tunnel****Edit**
4. Copy the token
**Format**:
```bash
CLOUDFLARE_API_TOKEN="your-api-token-here"
```
**Alternative (Legacy)**:
```bash
CLOUDFLARE_EMAIL="your-email@example.com"
CLOUDFLARE_API_KEY="your-global-api-key"
```
**Usage**:
- For automated DNS record creation
- For automated tunnel route configuration
- Not strictly required - can be done manually in dashboard
---
### 3. Cloudflare Zone ID (Optional - auto-detected if not set)
**Variable Name**: `CLOUDFLARE_ZONE_ID_DEFI_ORACLE`
**Description**: Zone ID for `defi-oracle.io` domain (can be auto-detected if API token is provided)
**Status**: ⚠️ **OPTIONAL**
**How to Obtain**:
1. Go to Cloudflare Dashboard
2. Select domain: `defi-oracle.io`
3. Scroll down in Overview page - Zone ID is shown in right sidebar
4. Or use API: `curl -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" https://api.cloudflare.com/client/v4/zones?name=defi-oracle.io`
**Format**:
```bash
CLOUDFLARE_ZONE_ID_DEFI_ORACLE="your-zone-id-here"
```
---
### 4. Cloudflare Account ID (Optional - auto-detected if not set)
**Variable Name**: `CLOUDFLARE_ACCOUNT_ID`
**Description**: Cloudflare Account ID (can be auto-detected if API token is provided)
**Status**: ⚠️ **OPTIONAL**
**How to Obtain**:
1. Go to Cloudflare Dashboard
2. Right sidebar shows Account ID
3. Or use API: `curl -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" https://api.cloudflare.com/client/v4/accounts`
**Format**:
```bash
CLOUDFLARE_ACCOUNT_ID="your-account-id-here"
```
---
## Optional: ThirdWeb API Key (for chain configuration)
### 5. ThirdWeb API Key (Optional - for RPC URL configuration)
**Variable Name**: `THIRDWEB_API_KEY`
**Description**: API key for ThirdWeb RPC endpoints (used in chain configuration JSON)
**Status**: ⚠️ **OPTIONAL** (for RPC URL configuration in chainlist)
**How to Obtain**:
1. Go to: https://thirdweb.com
2. Sign up or log in
3. Navigate to Dashboard → Settings → API Keys
4. Generate API key
**Format**:
```bash
THIRDWEB_API_KEY="your-api-key-here"
```
**Usage**:
- Used in chain configuration: `pr-workspace/chains/_data/chains/eip155-138.json`
- URLs: `https://defi-oracle-meta.rpc.thirdweb.com/${THIRDWEB_API_KEY}`
- Not required for tunnel setup itself
---
## Complete .env File Template
### For VMID 2400 Tunnel Setup Only
**File**: `.env` (in project root: `/home/intlc/projects/proxmox/.env`)
```bash
# ============================================
# Cloudflare Configuration for VMID 2400
# ============================================
# Cloudflare Tunnel Token (REQUIRED for VMID 2400 setup)
# Get from: Zero Trust → Networks → Tunnels → Create tunnel
TUNNEL_TOKEN_VMID2400="eyJhIjoi..."
# Cloudflare API Token (OPTIONAL - for automated DNS/tunnel config)
# Get from: https://dash.cloudflare.com/profile/api-tokens
CLOUDFLARE_API_TOKEN="your-api-token-here"
# Cloudflare Zone ID for defi-oracle.io (OPTIONAL - auto-detected)
CLOUDFLARE_ZONE_ID_DEFI_ORACLE="your-zone-id-here"
# Cloudflare Account ID (OPTIONAL - auto-detected)
CLOUDFLARE_ACCOUNT_ID="your-account-id-here"
# Domain for VMID 2400
DOMAIN_DEFI_ORACLE="defi-oracle.io"
# ============================================
# ThirdWeb Configuration (OPTIONAL)
# ============================================
# ThirdWeb API Key (for RPC URL configuration)
THIRDWEB_API_KEY="your-api-key-here"
# ============================================
# Existing Cloudflare Config (if already present)
# ============================================
# Existing domain (d-bis.org)
DOMAIN="d-bis.org"
CLOUDFLARE_ZONE_ID="existing-zone-id"
CLOUDFLARE_ACCOUNT_ID="existing-account-id"
# Existing tunnel token (for pve2 tunnel)
TUNNEL_TOKEN="eyJhIjoi..."
```
---
## Minimum Required Secrets
For **basic tunnel setup** (manual DNS/tunnel config in dashboard), you only need:
1.**TUNNEL_TOKEN_VMID2400** - To install cloudflared service on VMID 2400
For **automated setup** (script configures DNS/tunnel routes), you need:
1.**TUNNEL_TOKEN_VMID2400** - To install cloudflared service
2.**CLOUDFLARE_API_TOKEN** - To configure DNS records and tunnel routes via API
---
## Step-by-Step Setup
### Option 1: Manual Setup (Minimum Secrets)
1. **Create Tunnel Token**:
- Go to Cloudflare Dashboard → Zero Trust → Networks → Tunnels
- Create tunnel: `thirdweb-rpc-2400`
- Copy token
2. **Run Installation Script**:
```bash
./scripts/setup-cloudflared-vmid2400.sh <TUNNEL_TOKEN>
```
3. **Configure Manually in Dashboard**:
- Configure tunnel route (rpc.public-0138.defi-oracle.io → http://127.0.0.1:8545)
- Create DNS CNAME record (rpc.public-0138 → <tunnel-id>.cfargotunnel.com)
**Required**: Only `TUNNEL_TOKEN_VMID2400`
---
### Option 2: Automated Setup (More Secrets)
1. **Create Tunnel Token** (same as above)
2. **Get API Token**:
- Go to: https://dash.cloudflare.com/profile/api-tokens
- Create token with Zone DNS Edit and Tunnel Edit permissions
3. **Add to .env**:
```bash
TUNNEL_TOKEN_VMID2400="eyJ..."
CLOUDFLARE_API_TOKEN="your-token"
DOMAIN_DEFI_ORACLE="defi-oracle.io"
```
4. **Run Scripts** (future automation scripts can use these)
**Required**: `TUNNEL_TOKEN_VMID2400` + `CLOUDFLARE_API_TOKEN`
---
## Security Notes
### File Permissions
```bash
# Ensure .env file has restrictive permissions
chmod 600 .env
```
### Gitignore
Ensure `.env` is in `.gitignore`:
```bash
echo ".env" >> .gitignore
```
### Secrets Management
- ✅ Never commit `.env` file to git
- ✅ Use `.env.example` for templates (without actual secrets)
- ✅ Rotate API tokens regularly
- ✅ Use different tokens for different purposes
- ✅ Keep tunnel tokens secure (they provide full tunnel access)
---
## Verification Checklist
After setup, verify:
- [ ] Tunnel token created and copied
- [ ] Cloudflared installed on VMID 2400
- [ ] Tunnel service running on VMID 2400
- [ ] Tunnel route configured in Cloudflare Dashboard
- [ ] DNS CNAME record created
- [ ] DNS record resolves correctly
- [ ] RPC endpoint accessible: `https://rpc.public-0138.defi-oracle.io`
---
## Quick Reference
| Secret | Required | How to Get | Used For |
|--------|----------|------------|----------|
| `TUNNEL_TOKEN_VMID2400` | ✅ YES | Zero Trust → Tunnels → Create | Install cloudflared service |
| `CLOUDFLARE_API_TOKEN` | ⚠️ Optional | Profile → API Tokens | Automated DNS/tunnel config |
| `CLOUDFLARE_ZONE_ID_DEFI_ORACLE` | ⚠️ Optional | Dashboard → Domain → Overview | Auto-detected if token provided |
| `CLOUDFLARE_ACCOUNT_ID` | ⚠️ Optional | Dashboard → Right sidebar | Auto-detected if token provided |
| `THIRDWEB_API_KEY` | ⚠️ Optional | ThirdWeb Dashboard → API Keys | Chain configuration JSON |
---
## Next Steps
1.**Create tunnel token** in Cloudflare Dashboard
2.**Run installation script** with token
3.**Configure tunnel route** (manual or automated)
4.**Create DNS record** (manual or automated)
5.**Verify setup** and test endpoint
---
**Last Updated**: 2025-01-23
**Status**: ✅ **Documentation Complete** - Ready for Setup

View File

@@ -0,0 +1,340 @@
# VMID 2400 - Restrict Traffic to *.thirdweb.com
**Date**: 2026-01-02
**Purpose**: Limit RPC endpoint access to only ThirdWeb domains
**VMID**: 2400
**FQDN**: `rpc.public-0138.defi-oracle.io`
---
## Overview
This guide provides multiple methods to restrict access to the VMID 2400 RPC endpoint to only allow traffic originating from `*.thirdweb.com` domains.
---
## Method 1: Cloudflare WAF Rules (Recommended) ⭐
Cloudflare WAF (Web Application Firewall) rules provide the best protection at the edge before traffic reaches your server.
### Step 1: Create WAF Rule in Cloudflare Dashboard
1. **Navigate to WAF**:
- Go to: https://dash.cloudflare.com/
- Select domain: **defi-oracle.io**
- Click: **Security****WAF** (or **Firewall Rules**)
2. **Create Custom Rule**:
- Click: **Create rule** or **Add rule**
- Rule name: `Allow Only ThirdWeb`
3. **Configure Rule**:
```
Rule Name: Allow Only ThirdWeb
When incoming requests match:
(http.request.headers.origin does not contain "thirdweb.com" AND
http.request.headers.referer does not contain "thirdweb.com" AND
http.request.headers.user_agent does not contain "thirdweb")
Then: Block
```
4. **Alternative - Use Expression Editor**:
```
(http.request.headers["origin"][*] contains "thirdweb.com" or
http.request.headers["referer"][*] contains "thirdweb.com" or
http.request.headers["user-agent"][*] contains "thirdweb")
```
- Action: **Allow**
- Then add another rule that blocks everything else
### Step 2: Configure WAF Rule Expression
**More Precise Expression** (allows only thirdweb.com):
```
(http.request.headers["origin"][*] matches "https?://.*\.thirdweb\.com(/.*)?$" or
http.request.headers["referer"][*] matches "https?://.*\.thirdweb\.com(/.*)?$")
```
**Action**: **Allow**
Then create a second rule:
- **Expression**: Everything else
- **Action**: **Block**
### Step 3: Deploy Rule
1. Review the rule
2. Click **Deploy** or **Save**
3. Wait a few seconds for propagation
---
## Method 2: Cloudflare Access Application (Zero Trust)
This method requires authentication but provides more control.
### Step 1: Create Access Application
1. **Navigate to Access**:
- Go to: https://one.dash.cloudflare.com/
- Click: **Zero Trust** → **Access** → **Applications**
- Click: **Add an application**
- Select: **Self-hosted**
2. **Configure Application**:
```
Application name: ThirdWeb RPC (VMID 2400)
Application domain: rpc.public-0138.defi-oracle.io
Session duration: 8 hours
```
3. **Configure Policy**:
- Click: **Add a policy**
- **Policy name**: `Allow ThirdWeb Team`
- **Action**: `Allow`
- **Include**:
- Select: **Emails**
- Value: `*@thirdweb.com` (if you have ThirdWeb emails)
- OR use: **Access Service Tokens** (more appropriate for API access)
### Step 2: Use Service Token (Recommended for API Access)
1. **Create Service Token**:
- Go to: **Zero Trust** → **Access** → **Service Tokens**
- Click: **Create Service Token**
- Name: `thirdweb-rpc-service`
- Copy the token (shown once)
2. **Update Policy**:
- Edit the Access policy
- **Include**: **Service Tokens**
- Select: `thirdweb-rpc-service`
3. **Share Token with ThirdWeb**:
- Provide the service token to ThirdWeb
- They include it in requests: `Authorization: Bearer <token>`
**Note**: This method requires ThirdWeb to include the token in requests.
---
## Method 3: Nginx Access Control (Less Secure - Can Be Spoofed)
This method checks HTTP headers but can be bypassed if headers are spoofed. Use this only as a secondary layer.
### Step 1: Update Nginx Configuration on VMID 2400
```bash
# SSH to Proxmox host
ssh root@192.168.11.10
# Enter VMID 2400
pct exec 2400 -- bash
# Edit Nginx config
nano /etc/nginx/sites-available/rpc-thirdweb
```
### Step 2: Add Access Control to Nginx Config
Add this to your server block:
```nginx
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name rpc.public-0138.defi-oracle.io;
# ... existing SSL config ...
# Restrict to ThirdWeb domains (check Origin and Referer headers)
set $allow_request 0;
# Check Origin header
if ($http_origin ~* "^https?://.*\.thirdweb\.com") {
set $allow_request 1;
}
# Check Referer header
if ($http_referer ~* "^https?://.*\.thirdweb\.com") {
set $allow_request 1;
}
# Block if not from ThirdWeb
if ($allow_request = 0) {
return 403 '{"jsonrpc":"2.0","error":{"code":-32000,"message":"Access denied. Only ThirdWeb domains are allowed."},"id":null}';
access_log off;
log_not_found off;
}
location / {
proxy_pass http://127.0.0.1:8545;
# ... existing proxy config ...
}
}
```
### Step 3: Test and Reload Nginx
```bash
# Test configuration
nginx -t
# Reload Nginx
systemctl reload nginx
```
**⚠️ Warning**: This method can be bypassed since headers can be spoofed. Use Cloudflare WAF for actual security.
---
## Method 4: Cloudflare Transform Rules (Header-Based)
Use Cloudflare Transform Rules to add/check custom headers.
### Step 1: Create Transform Rule
1. **Navigate to Transform Rules**:
- Go to: **Rules** → **Transform Rules**
- Click: **Create rule**
2. **Configure Rule**:
- Rule name: `Add ThirdWeb Verification Header`
- When: `http.request.headers["origin"][*] contains "thirdweb.com"`
- Then: Set static header `X-ThirdWeb-Verified: true`
3. **Create Second Rule (Block)**:
- Rule name: `Block Non-ThirdWeb`
- When: `http.request.headers["x-thirdweb-verified"] is absent`
- Then: **Block** (use Firewall rule for blocking)
---
## Recommended Approach: Cloudflare WAF Rules ⭐
**Best Practice**: Use **Method 1 (Cloudflare WAF Rules)** because:
- ✅ Enforced at Cloudflare edge (before reaching your server)
- ✅ Cannot be bypassed by spoofing headers
- ✅ Provides DDoS protection
- ✅ No code changes required
- ✅ Centralized management
---
## Implementation Steps (WAF Method)
### Quick Setup:
1. **Go to Cloudflare Dashboard**: https://dash.cloudflare.com/
2. **Select domain**: `defi-oracle.io`
3. **Navigate**: **Security** → **WAF** → **Custom Rules**
4. **Create Rule**:
```
Rule Name: Allow Only ThirdWeb Traffic
Expression:
(http.request.headers["origin"][*] matches "https?://.*\.thirdweb\.com(/.*)?$" or
http.request.headers["referer"][*] matches "https?://.*\.thirdweb\.com(/.*)?$")
Action: Allow
Position: Last (bottom)
```
5. **Create Block Rule**:
```
Rule Name: Block All Other Traffic
Expression:
(http.request.uri.path contains "/")
Action: Block
Position: Last (bottom)
```
**Important**: Order matters! Allow rule must come before Block rule, or use "Skip remaining rules" in Allow rule.
---
## Testing
### Test Allowed Request (from ThirdWeb):
```bash
# Simulate request with ThirdWeb Origin header
curl -k https://rpc.public-0138.defi-oracle.io \
-X POST \
-H "Content-Type: application/json" \
-H "Origin: https://dashboard.thirdweb.com" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```
**Expected**: Should succeed ✅
### Test Blocked Request (without ThirdWeb headers):
```bash
# Request without ThirdWeb headers
curl -k https://rpc.public-0138.defi-oracle.io \
-X POST \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```
**Expected**: Should be blocked (403 or custom error) ❌
---
## Limitations and Considerations
### Important Notes:
1. **Direct RPC Calls**: Direct RPC calls (from wallets, scripts) may not include Origin/Referer headers
- **Solution**: Use API key authentication or IP whitelisting instead
2. **CORS Requests**: Browser-based requests include Origin headers
- WAF rules work well for browser/JavaScript requests from ThirdWeb
3. **API/SDK Requests**: ThirdWeb SDK requests should include proper headers
- Verify with ThirdWeb that their SDK sends appropriate headers
4. **IP Whitelisting Alternative**: If headers don't work, consider:
- Get ThirdWeb's IP ranges
- Use Cloudflare WAF IP Access Rules
- Less flexible but more reliable for API access
---
## Alternative: IP-Based Restriction
If ThirdWeb provides their IP ranges:
1. **Go to**: **Security****WAF****Tools****IP Access Rules**
2. **Create Rule**:
- Action: **Allow**
- IP Address: ThirdWeb IP ranges
3. **Create Block Rule**:
- Action: **Block**
- IP Address: All other IPs
---
## Summary
| Method | Security | Ease of Setup | Reliability | Best For |
|--------|----------|---------------|-------------|----------|
| **WAF Rules** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Browser/Web requests |
| **Access Application** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | API with service tokens |
| **Nginx Headers** | ⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐ | Secondary layer only |
| **IP Whitelisting** | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | API/SDK requests |
**Recommendation**: Start with **Cloudflare WAF Rules (Method 1)**, and add **Access Application with Service Tokens (Method 2)** if you need API-level authentication.
---
**Last Updated**: 2026-01-02
**Status**: ✅ Ready for Implementation

View File

@@ -0,0 +1,90 @@
# Cloudflare Configuration for Blockscout Explorer
**Date**: $(date)
**Domain**: explorer.d-bis.org
**Tunnel ID**: `10ab22da-8ea3-4e2e-a896-27ece2211a05`
---
## Quick Configuration Steps
### 1. DNS Record (Cloudflare Dashboard)
1. **Go to Cloudflare DNS**:
- URL: https://dash.cloudflare.com/
- Select domain: `d-bis.org`
- Navigate to: **DNS****Records**
2. **Create CNAME Record**:
```
Type: CNAME
Name: explorer
Target: 10ab22da-8ea3-4e2e-a896-27ece2211a05.cfargotunnel.com
Proxy status: 🟠 Proxied (orange cloud) - REQUIRED
TTL: Auto
```
3. **Click Save**
### 2. Tunnel Route (Cloudflare Zero Trust)
1. **Go to Cloudflare Zero Trust**:
- URL: https://one.dash.cloudflare.com/
- Navigate to: **Zero Trust** → **Networks** → **Tunnels**
2. **Select Your Tunnel**:
- Find tunnel ID: `10ab22da-8ea3-4e2e-a896-27ece2211a05`
- Click on the tunnel name
3. **Configure Public Hostname**:
- Click **Configure** button
- Click **Public Hostnames** tab
- Click **Add a public hostname**
4. **Add Hostname**:
```
Subdomain: explorer
Domain: d-bis.org
Service: http://192.168.11.140:80
Type: HTTP
```
5. **Click Save hostname**
---
## Verification
### Wait for DNS Propagation (1-5 minutes)
Then test:
```bash
# Test DNS resolution
dig explorer.d-bis.org
nslookup explorer.d-bis.org
# Test HTTPS endpoint
curl https://explorer.d-bis.org/health
# Should return JSON response from Blockscout
```
---
## Configuration Summary
| Setting | Value |
|---------|-------|
| **Domain** | explorer.d-bis.org |
| **DNS Type** | CNAME |
| **DNS Target** | 10ab22da-8ea3-4e2e-a896-27ece2211a05.cfargotunnel.com |
| **Proxy Status** | 🟠 Proxied (required) |
| **Tunnel Service** | http://192.168.11.140:80 |
| **Tunnel Type** | HTTP |
---
**Status**: Ready for configuration
**Next Step**: Follow steps 1 and 2 above in Cloudflare dashboards

View File

@@ -0,0 +1,92 @@
# Cloudflare Explorer URL - Quick Setup Guide
**Domain**: explorer.d-bis.org
**Target**: http://192.168.11.140:80
---
## 🚀 Quick Setup (2 Steps)
### Step 1: Configure DNS Record
**In Cloudflare Dashboard** (https://dash.cloudflare.com/):
1. Select domain: **d-bis.org**
2. Go to: **DNS****Records**
3. Click: **Add record**
4. Configure:
- **Type**: `CNAME`
- **Name**: `explorer`
- **Target**: `<your-tunnel-id>.cfargotunnel.com`
- **Proxy status**: 🟠 **Proxied** (orange cloud) ← **REQUIRED**
- **TTL**: Auto
5. Click: **Save**
**To find your tunnel ID:**
```bash
# Run this script
./scripts/get-tunnel-id.sh
# Or check Cloudflare Zero Trust dashboard:
# https://one.dash.cloudflare.com/ → Zero Trust → Networks → Tunnels
```
---
### Step 2: Configure Tunnel Route
**In Cloudflare Zero Trust Dashboard** (https://one.dash.cloudflare.com/):
1. Navigate to: **Zero Trust****Networks****Tunnels**
2. Find your tunnel (by ID or name)
3. Click: **Configure** button
4. Click: **Public Hostnames** tab
5. Click: **Add a public hostname**
6. Configure:
- **Subdomain**: `explorer`
- **Domain**: `d-bis.org`
- **Service**: `http://192.168.11.140:80`
- **Type**: `HTTP`
7. Click: **Save hostname**
---
## ✅ Verify
**Wait 1-5 minutes for DNS propagation, then test:**
```bash
# Test public URL
curl https://explorer.d-bis.org/api/v2/stats
# Should return JSON with network stats (not 404)
```
---
## 📋 Configuration Checklist
- [ ] DNS CNAME record: `explorer``<tunnel-id>.cfargotunnel.com`
- [ ] DNS record is **🟠 Proxied** (orange cloud)
- [ ] Tunnel route: `explorer.d-bis.org``http://192.168.11.140:80`
- [ ] Cloudflared service running in container
- [ ] Public URL accessible: `https://explorer.d-bis.org`
---
## 🔧 Troubleshooting
### 404 Error
- Check DNS record exists and is proxied
- Check tunnel route is configured
- Wait 5 minutes for DNS propagation
### 502 Error
- Verify tunnel route points to `http://192.168.11.140:80`
- Check Nginx is running: `systemctl status nginx` (in container)
- Check Blockscout is running: `systemctl status blockscout` (in container)
---
**That's it! Follow these 2 steps and your public URL will work.**

View File

@@ -0,0 +1,179 @@
# Cloudflare Tunnel Configuration Guide
**Tunnel ID**: `10ab22da-8ea3-4e2e-a896-27ece2211a05`
**Status**: Currently DOWN - Needs Configuration
**Purpose**: Route all services through central Nginx (VMID 105)
---
## Current Status
From the Cloudflare dashboard, the tunnel `rpc-http-pub.d-bis.org` is showing as **DOWN**. This tunnel needs to be configured to route all hostnames to the central Nginx.
---
## Configuration Steps
### 1. Access Tunnel Configuration
1. Go to: https://one.dash.cloudflare.com/
2. Navigate to: **Zero Trust****Networks****Tunnels**
3. Click on the tunnel: **rpc-http-pub.d-bis.org** (Tunnel ID: `10ab22da-8ea3-4e2e-a896-27ece2211a05`)
4. Click **Configure** button
### 2. Configure Public Hostnames
In the **Public Hostnames** section, configure all hostnames to route to the central Nginx:
**Target**: `http://192.168.11.21:80`
#### Required Hostname Configurations:
| Hostname | Service Type | Target |
|----------|--------------|--------|
| `explorer.d-bis.org` | HTTP | `http://192.168.11.21:80` |
| `rpc-http-pub.d-bis.org` | HTTP | `http://192.168.11.21:80` |
| `rpc-ws-pub.d-bis.org` | HTTP | `http://192.168.11.21:80` |
| `rpc-http-prv.d-bis.org` | HTTP | `http://192.168.11.21:80` |
| `rpc-ws-prv.d-bis.org` | HTTP | `http://192.168.11.21:80` |
| `dbis-admin.d-bis.org` | HTTP | `http://192.168.11.21:80` |
| `dbis-api.d-bis.org` | HTTP | `http://192.168.11.21:80` |
| `dbis-api-2.d-bis.org` | HTTP | `http://192.168.11.21:80` |
| `mim4u.org` | HTTP | `http://192.168.11.21:80` |
| `www.mim4u.org` | HTTP | `http://192.168.11.21:80` |
### 3. Configuration Details
For each hostname:
1. **Subdomain**: Enter the subdomain (e.g., `explorer`, `rpc-http-pub`)
2. **Domain**: Select `d-bis.org` (or enter `mim4u.org` for those domains)
3. **Service**: Select `HTTP`
4. **URL**: Enter `192.168.11.21:80`
5. **Save** the configuration
### 4. Add Catch-All Rule (Optional but Recommended)
Add a catch-all rule at the end:
- **Service**: `HTTP 404: Not Found`
- This handles any unmatched hostnames
---
## Expected Configuration (YAML Format)
The tunnel configuration should look like this:
```yaml
ingress:
# Explorer
- hostname: explorer.d-bis.org
service: http://192.168.11.21:80
# RPC Public
- hostname: rpc-http-pub.d-bis.org
service: http://192.168.11.21:80
- hostname: rpc-ws-pub.d-bis.org
service: http://192.168.11.21:80
# RPC Private
- hostname: rpc-http-prv.d-bis.org
service: http://192.168.11.21:80
- hostname: rpc-ws-prv.d-bis.org
service: http://192.168.11.21:80
# DBIS Services
- hostname: dbis-admin.d-bis.org
service: http://192.168.11.21:80
- hostname: dbis-api.d-bis.org
service: http://192.168.11.21:80
- hostname: dbis-api-2.d-bis.org
service: http://192.168.11.21:80
# Miracles In Motion
- hostname: mim4u.org
service: http://192.168.11.21:80
- hostname: www.mim4u.org
service: http://192.168.11.21:80
# Catch-all
- service: http_status:404
```
---
## After Configuration
1. **Save** the configuration in Cloudflare dashboard
2. Wait 1-2 minutes for the tunnel to reload
3. Check tunnel status - it should change from **DOWN** to **HEALTHY**
4. Test endpoints:
```bash
curl https://explorer.d-bis.org/api/v2/stats
curl -X POST https://rpc-http-pub.d-bis.org \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```
---
## Troubleshooting
### Tunnel Still DOWN After Configuration
1. **Check cloudflared service**:
```bash
ssh root@192.168.11.12 "pct exec 102 -- systemctl status cloudflared"
```
2. **Check tunnel logs**:
```bash
ssh root@192.168.11.12 "pct exec 102 -- journalctl -u cloudflared -n 50"
```
3. **Verify Nginx is accessible**:
```bash
curl http://192.168.11.21:80
```
4. **Restart cloudflared** (if needed):
```bash
ssh root@192.168.11.12 "pct exec 102 -- systemctl restart cloudflared"
```
### Service Not Routing Correctly
1. Verify Nginx configuration on VMID 105:
```bash
ssh root@192.168.11.12 "pct exec 105 -- cat /data/nginx/custom/http.conf"
```
2. Test Nginx routing directly:
```bash
curl -H "Host: explorer.d-bis.org" http://192.168.11.21/
```
3. Check Nginx logs:
```bash
ssh root@192.168.11.12 "pct exec 105 -- tail -f /data/logs/fallback_error.log"
```
---
## Notes
- **Central Nginx IP**: `192.168.11.21` (VMID 105)
- **Central Nginx Port**: `80` (HTTP)
- **All SSL/TLS termination**: Handled by Cloudflare
- **Internal routing**: Nginx routes based on `Host` header to appropriate internal services
---
**Last Updated**: December 27, 2025

View File

@@ -0,0 +1,106 @@
# Cloudflare Tunnel Installation - Complete
**Date**: January 27, 2025
**Tunnel Token**: Provided
**Container**: VMID 5000 on pve2
---
## ✅ Installation Command
**Run this on pve2 node:**
```bash
# Install cloudflared service with token
pct exec 5000 -- cloudflared service install eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0IjoiYjAyZmUxZmUtY2I3ZC00ODRlLTkwOWItN2NjNDEyOThlYmU4IiwicyI6Ik5HTmtOV0kwWXpNdFpUVmxaUzAwTVRFMkxXRXdNMk10WlRJNU1ETTFaRFF4TURBMiJ9
# Start service
pct exec 5000 -- systemctl start cloudflared
pct exec 5000 -- systemctl enable cloudflared
# Verify installation
pct exec 5000 -- systemctl status cloudflared
pct exec 5000 -- cloudflared tunnel list
```
---
## 📋 What This Does
1. **Installs cloudflared** (if not already installed)
2. **Configures tunnel service** with the provided token
3. **Starts cloudflared service** automatically
4. **Enables service** to start on boot
---
## 🔍 After Installation
### Get Tunnel ID
```bash
pct exec 5000 -- cloudflared tunnel list
```
The tunnel ID will be displayed in the output.
### Configure DNS
**In Cloudflare Dashboard** (https://dash.cloudflare.com/):
1. Domain: **d-bis.org****DNS****Records**
2. Add CNAME:
- **Name**: `explorer`
- **Target**: `<tunnel-id>.cfargotunnel.com`
- **Proxy**: 🟠 **Proxied** (orange cloud)
- **TTL**: Auto
### Configure Tunnel Route
**In Cloudflare Zero Trust** (https://one.dash.cloudflare.com/):
1. **Zero Trust****Networks****Tunnels**
2. Find your tunnel → **Configure****Public Hostnames**
3. Add hostname:
- **Subdomain**: `explorer`
- **Domain**: `d-bis.org`
- **Service**: `http://192.168.11.140:80`
- **Type**: `HTTP`
---
## ✅ Verification
**Wait 1-5 minutes for DNS propagation, then:**
```bash
curl https://explorer.d-bis.org/api/v2/stats
```
**Expected**: JSON response with network stats (not 404)
---
## 🔧 Troubleshooting
### Service not starting
```bash
# Check logs
pct exec 5000 -- journalctl -u cloudflared -n 50
# Check status
pct exec 5000 -- systemctl status cloudflared
```
### Tunnel not connecting
- Verify token is valid
- Check Cloudflare Zero Trust dashboard for tunnel status
- Ensure DNS record is proxied (orange cloud)
---
**Status**: Ready to install
**Next**: Run installation command above on pve2 node

View File

@@ -0,0 +1,68 @@
# Cloudflare Configuration Documentation
**Last Updated:** 2025-01-20
**Status:** Active Documentation
---
## Overview
This directory contains all Cloudflare-related configuration documentation, including Zero Trust setup, DNS configuration, tunnel setup, and service-specific guides.
---
## Documentation Index
### Core Guides
| Document | Description | Priority |
|----------|-------------|----------|
| **[CLOUDFLARE_ZERO_TRUST_GUIDE.md](CLOUDFLARE_ZERO_TRUST_GUIDE.md)** | Complete Zero Trust integration guide | ⭐⭐⭐ |
| **[CLOUDFLARE_DNS_TO_CONTAINERS.md](CLOUDFLARE_DNS_TO_CONTAINERS.md)** | General DNS mapping to LXC containers | ⭐⭐⭐ |
| **[CLOUDFLARE_DNS_SPECIFIC_SERVICES.md](CLOUDFLARE_DNS_SPECIFIC_SERVICES.md)** | Service-specific DNS configuration | ⭐⭐⭐ |
### Tunnel Setup
| Document | Description | Priority |
|----------|-------------|----------|
| **[CLOUDFLARE_TUNNEL_CONFIGURATION_GUIDE.md](CLOUDFLARE_TUNNEL_CONFIGURATION_GUIDE.md)** | Complete tunnel configuration guide | ⭐⭐ |
| **[CLOUDFLARE_TUNNEL_INSTALLATION.md](CLOUDFLARE_TUNNEL_INSTALLATION.md)** | Tunnel installation procedures | ⭐⭐ |
| **[CLOUDFLARE_TUNNEL_QUICK_SETUP.md](CLOUDFLARE_TUNNEL_QUICK_SETUP.md)** | Quick setup guide | ⭐ |
| **[CLOUDFLARE_TUNNEL_RPC_SETUP.md](CLOUDFLARE_TUNNEL_RPC_SETUP.md)** | RPC-specific tunnel setup | ⭐⭐ |
### Service-Specific
| Document | Description | Priority |
|----------|-------------|----------|
| **[CLOUDFLARE_EXPLORER_CONFIG.md](CLOUDFLARE_EXPLORER_CONFIG.md)** | Blockscout explorer configuration | ⭐⭐ |
| **[CLOUDFLARE_EXPLORER_QUICK_SETUP.md](CLOUDFLARE_EXPLORER_QUICK_SETUP.md)** | Quick explorer setup | ⭐ |
---
## Quick Start
### First Time Setup
1. **Read:** [CLOUDFLARE_ZERO_TRUST_GUIDE.md](CLOUDFLARE_ZERO_TRUST_GUIDE.md) - Complete overview
2. **Follow:** [CLOUDFLARE_TUNNEL_INSTALLATION.md](CLOUDFLARE_TUNNEL_INSTALLATION.md) - Install tunnels
3. **Configure:** [CLOUDFLARE_DNS_TO_CONTAINERS.md](CLOUDFLARE_DNS_TO_CONTAINERS.md) - Map DNS to containers
### Common Tasks
- **Set up a new service:** See [CLOUDFLARE_DNS_TO_CONTAINERS.md](CLOUDFLARE_DNS_TO_CONTAINERS.md)
- **Configure specific service:** See [CLOUDFLARE_DNS_SPECIFIC_SERVICES.md](CLOUDFLARE_DNS_SPECIFIC_SERVICES.md)
- **Set up RPC tunnel:** See [CLOUDFLARE_TUNNEL_RPC_SETUP.md](CLOUDFLARE_TUNNEL_RPC_SETUP.md)
- **Configure explorer:** See [CLOUDFLARE_EXPLORER_CONFIG.md](CLOUDFLARE_EXPLORER_CONFIG.md)
---
## Related Documentation
- **[../README.md](../README.md)** - Configuration directory overview
- **[../../05-network/CLOUDFLARE_NGINX_INTEGRATION.md](../../05-network/CLOUDFLARE_NGINX_INTEGRATION.md)** - NGINX integration
- **[../../05-network/CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md](../../05-network/CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md)** - Routing architecture
- **[../../02-architecture/NETWORK_ARCHITECTURE.md](../../02-architecture/NETWORK_ARCHITECTURE.md)** - Network architecture
---
**Last Updated:** 2025-01-20

View File

@@ -0,0 +1,140 @@
# Besu Configuration: Mainnet vs Chain 138 Comparison
**Date**: $(date)
---
## Command Comparison
### Ethereum Mainnet Configuration
```bash
besu \
--network=mainnet \
--sync-mode=FULL \
--rpc-http-enabled \
--rpc-http-api=ETH,NET,WEB3 \
--rpc-http-cors-origins="*" \
--rpc-http-host=0.0.0.0 \
--rpc-http-port=8545
```
**This configuration:**
- ✅ Connects to **Ethereum Mainnet** (chain ID 1)
- ✅ Downloads entire mainnet blockchain
- ✅ No genesis file needed (uses mainnet genesis)
- ✅ Public network with public discovery
- ✅ No permissioning
- ✅ Read-only APIs (ETH, NET, WEB3)
---
### Chain 138 Equivalent Configuration
For your **private/permissioned chain 138** network, the equivalent would be:
```bash
besu \
--data-path=/data/besu \
--genesis-file=/genesis/genesis.json \
--network-id=138 \
--sync-mode=FULL \
--rpc-http-enabled \
--rpc-http-api=ETH,NET,WEB3 \
--rpc-http-cors-origins="*" \
--rpc-http-host=0.0.0.0 \
--rpc-http-port=8545 \
--permissions-nodes-config-file-enabled=true \
--permissions-nodes-config-file=/permissions/permissions-nodes.toml \
--static-nodes-file=/genesis/static-nodes.json \
--discovery-enabled=false \
--p2p-host=0.0.0.0 \
--p2p-port=30303 \
--miner-enabled=false
```
**Key Differences:**
| Setting | Mainnet | Chain 138 |
|---------|---------|-----------|
| Network | `--network=mainnet` | `--network-id=138` |
| Genesis | Auto (mainnet) | `--genesis-file=/genesis/genesis.json` |
| Permissioning | Disabled | **Enabled** (local nodes only) |
| Discovery | Enabled (public) | Disabled (private) |
| Static Nodes | None | Required (`static-nodes.json`) |
| Node Allowlist | None | Required (`permissions-nodes.toml`) |
| Consensus | PoS (mainnet) | QBFT (your network) |
---
## Important Notes
### ❌ Don't Use Mainnet Config for Chain 138
The mainnet configuration you showed **will NOT work** for your chain 138 network because:
1. **`--network=mainnet`** will connect to Ethereum mainnet (chain ID 1), not your chain 138
2. **No genesis file** - mainnet uses hardcoded genesis, your network needs a custom genesis
3. **No permissioning** - mainnet is public, your network is permissioned
4. **Public discovery** - mainnet discovers any node, your network only connects to allowlisted nodes
### ✅ Use Chain 138 Configuration
Your current chain 138 configuration (in TOML format) already has all the correct settings:
- `network-id=138` (not mainnet)
- `genesis-file=/genesis/genesis.json` (required)
- `permissions-nodes-config-file-enabled=true` (required for private network)
- `discovery-enabled=false` (for VMID 2500 - strict local/permissioned nodes only)
---
## Current Chain 138 Configuration (VMID 2500)
Your current configuration is correct for chain 138:
```toml
# config-rpc-core.toml (VMID 2500)
data-path="/data/besu"
genesis-file="/genesis/genesis.json"
network-id=138
sync-mode="FULL"
rpc-http-enabled=true
rpc-http-api=["ETH","NET","WEB3","ADMIN","DEBUG","TXPOOL"]
permissions-nodes-config-file-enabled=true
permissions-nodes-config-file="/permissions/permissions-nodes.toml"
static-nodes-file="/genesis/static-nodes.json"
discovery-enabled=false
```
---
## If You Need Mainnet Access
If you want to run a separate Besu node for **Ethereum mainnet** (separate from chain 138), you would:
1. Use a **separate data directory** (different from `/data/besu`)
2. Run on **different ports** (e.g., 8547, 8548)
3. Use the mainnet configuration you showed
4. This would be a **completely separate node** from your chain 138 network
**Example separate mainnet node:**
```bash
besu \
--data-path=/data/besu-mainnet \
--network=mainnet \
--sync-mode=FULL \
--rpc-http-enabled \
--rpc-http-api=ETH,NET,WEB3 \
--rpc-http-cors-origins="*" \
--rpc-http-host=0.0.0.0 \
--rpc-http-port=8547 \
--rpc-ws-port=8548
```
This would run alongside your chain 138 nodes but be completely separate.
---
**Last Updated**: $(date)

View File

@@ -0,0 +1,268 @@
# Besu RPC Nodes Configuration - Fixed
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** Active Documentation
---
## Overview
This document describes the corrected configuration for the three Besu RPC nodes (VMIDs 2500, 2501, 2502) in the Proxmox VE deployment.
---
## Node Roles and Requirements
### VMID 2500 - Core RPC Node
- **Role**: Core/Internal infrastructure
- **Access**: **NO public access or routing**
- **Features**: **All features enabled** (ADMIN, DEBUG, TRACE, TXPOOL, QBFT)
- **Config File**: `config-rpc-core.toml`
- **IP**: 192.168.11.250
**Key Settings**:
- ✅ Discovery **DISABLED** (no public routing)
- ✅ All APIs enabled: `ETH`, `NET`, `WEB3`, `TXPOOL`, `QBFT`, `ADMIN`, `DEBUG`, `TRACE`
- ✅ CORS origins empty (no public access)
- ✅ Node permissioning enabled (only local nodes)
- ✅ Account permissioning **disabled** (internal use only)
### VMID 2501 - Permissioned RPC Node (Prv)
- **Role**: Permissioned public access
- **Access**: **Public permissioned access** (requires authentication)
- **Features**: **Non-Admin features only** (no ADMIN, DEBUG, TRACE)
- **Config File**: `config-rpc-perm.toml`
- **IP**: 192.168.11.251
**Key Settings**:
- ✅ Discovery **ENABLED** (public access)
- ✅ Non-Admin APIs only: `ETH`, `NET`, `WEB3`, `TXPOOL`, `QBFT`
-**ADMIN API REMOVED** (as required)
-**DEBUG API REMOVED** (as required)
- ✅ CORS enabled for public access
-**Account permissioning ENABLED** (requires authentication)
- ✅ Node permissioning enabled
### VMID 2502 - Public RPC Node (Pub)
- **Role**: Public non-authenticated access
- **Access**: **Public non-auth access**
- **Features**: **Minimal wallet features only**
- **Config File**: `config-rpc-public.toml`
- **IP**: 192.168.11.252
**Key Settings**:
- ✅ Discovery **ENABLED** (public access)
- ✅ Minimal APIs only: `ETH`, `NET`, `WEB3` (read-only)
- ✅ WebSocket **DISABLED** (HTTP only)
- ✅ CORS enabled for public access
- ✅ Account permissioning **disabled** (public non-auth)
- ✅ Node permissioning enabled
---
## Configuration Changes Made
### 1. Fixed `config-rpc-core.toml` (VMID 2500)
-**Removed ADMIN from permissioned config** - ADMIN should only be in Core
-**Disabled discovery** - Changed from `true` to `false` (no public routing)
-**Removed CORS origins** - Changed from `["*"]` to `[]` (no public access)
-**Fixed paths** - Updated to use `/data/besu`, `/genesis/`, `/permissions/`
-**Removed deprecated options** - Removed `log-destination`, `max-remote-initiated-connections`, `accounts-enabled`, `database-path`, `trie-logs-enabled`
### 2. Fixed `config-rpc-perm.toml` (VMID 2501)
-**Removed ADMIN API** - Changed from `["ETH","NET","WEB3","TXPOOL","QBFT","ADMIN"]` to `["ETH","NET","WEB3","TXPOOL","QBFT"]`
-**Removed DEBUG API** - Not included (non-Admin features only)
-**Account permissions enabled** - `permissions-accounts-config-file-enabled=true` (for permissioned access)
-**Fixed paths** - Updated to use `/data/besu`, `/genesis/`, `/permissions/`
-**Removed deprecated options** - Same cleanup as Core config
### 3. Fixed `config-rpc-public.toml` (VMID 2502)
-**Minimal APIs confirmed** - Only `ETH`, `NET`, `WEB3` (correct)
-**WebSocket disabled** - Already correct
-**Account permissions disabled** - Correct for public non-auth
-**Fixed paths** - Updated to use `/data/besu`, `/genesis/`, `/permissions/`
-**Removed deprecated options** - Same cleanup as other configs
---
## Deployment
### Automated Deployment Script
A new script has been created to deploy and verify the configurations:
```bash
cd /home/intlc/projects/proxmox
./scripts/configure-besu-rpc-nodes.sh
```
This script will:
1. ✅ Check container status and start if needed
2. ✅ Copy correct config file to each RPC node
3. ✅ Update systemd service files
4. ✅ Verify configuration matches requirements
5. ✅ Restart services
6. ✅ Check if 2501 and 2502 are reversed
### Manual Deployment
If you prefer to deploy manually:
```bash
# For VMID 2500 (Core)
pct push 2500 smom-dbis-138/config/config-rpc-core.toml /etc/besu/config-rpc-core.toml
pct exec 2500 -- chown besu:besu /etc/besu/config-rpc-core.toml
pct exec 2500 -- systemctl restart besu-rpc.service
# For VMID 2501 (Permissioned)
pct push 2501 smom-dbis-138/config/config-rpc-perm.toml /etc/besu/config-rpc-perm.toml
pct exec 2501 -- chown besu:besu /etc/besu/config-rpc-perm.toml
pct exec 2501 -- systemctl restart besu-rpc.service
# For VMID 2502 (Public)
pct push 2502 smom-dbis-138/config/config-rpc-public.toml /etc/besu/config-rpc-public.toml
pct exec 2502 -- chown besu:besu /etc/besu/config-rpc-public.toml
pct exec 2502 -- systemctl restart besu-rpc.service
```
---
## Verification
### Check Configuration Files
```bash
# Verify Core RPC (2500)
pct exec 2500 -- grep "discovery-enabled" /etc/besu/config-rpc-core.toml
# Should show: discovery-enabled=false
pct exec 2500 -- grep "rpc-http-api" /etc/besu/config-rpc-core.toml
# Should include: ADMIN, DEBUG, TRACE
# Verify Permissioned RPC (2501)
pct exec 2501 -- grep "rpc-http-api" /etc/besu/config-rpc-perm.toml
# Should NOT include: ADMIN or DEBUG
# Should include: ETH, NET, WEB3, TXPOOL, QBFT
pct exec 2501 -- grep "permissions-accounts-config-file-enabled" /etc/besu/config-rpc-perm.toml
# Should show: permissions-accounts-config-file-enabled=true
# Verify Public RPC (2502)
pct exec 2502 -- grep "rpc-http-api" /etc/besu/config-rpc-public.toml
# Should only include: ETH, NET, WEB3
pct exec 2502 -- grep "rpc-ws-enabled" /etc/besu/config-rpc-public.toml
# Should show: rpc-ws-enabled=false
```
### Check Service Status
```bash
pct exec 2500 -- systemctl status besu-rpc.service
pct exec 2501 -- systemctl status besu-rpc.service
pct exec 2502 -- systemctl status besu-rpc.service
```
### Test RPC Endpoints
```bash
# Test Core RPC (should work from internal network)
curl -X POST http://192.168.11.250:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
# Test Permissioned RPC (should work with authentication)
curl -X POST http://192.168.11.251:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
# Test Public RPC (should work without authentication)
curl -X POST http://192.168.11.252:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```
---
## API Comparison
| API | Core (2500) | Permissioned (2501) | Public (2502) |
|-----|-------------|---------------------|---------------|
| ETH | ✅ | ✅ | ✅ |
| NET | ✅ | ✅ | ✅ |
| WEB3 | ✅ | ✅ | ✅ |
| TXPOOL | ✅ | ✅ | ❌ |
| QBFT | ✅ | ✅ | ❌ |
| ADMIN | ✅ | ❌ | ❌ |
| DEBUG | ✅ | ❌ | ❌ |
| TRACE | ✅ | ❌ | ❌ |
---
## Security Considerations
### VMID 2500 (Core)
- **Firewall**: Should block external access to port 8545/8546
- **Discovery**: Disabled (no public routing)
- **CORS**: Empty (no cross-origin access)
- **Use Case**: Internal infrastructure, monitoring, administrative operations
### VMID 2501 (Permissioned)
- **Authentication**: Account permissioning enabled (requires allowlist)
- **Discovery**: Enabled (public access)
- **CORS**: Enabled (public access)
- **Use Case**: Enterprise/private applications with authentication
### VMID 2502 (Public)
- **Authentication**: None (public non-auth)
- **Discovery**: Enabled (public access)
- **CORS**: Enabled (public access)
- **APIs**: Minimal (read-only wallet features)
- **Use Case**: Public dApps, wallets, blockchain explorers
---
## Files Modified
1.`smom-dbis-138/config/config-rpc-core.toml` - Fixed for Core RPC
2.`smom-dbis-138/config/config-rpc-perm.toml` - Fixed for Permissioned RPC
3.`smom-dbis-138/config/config-rpc-public.toml` - Fixed for Public RPC
4.`scripts/configure-besu-rpc-nodes.sh` - New deployment script
---
## Next Steps
1. **Deploy configurations** using the automated script:
```bash
./scripts/configure-besu-rpc-nodes.sh
```
2. **Verify services** are running correctly
3. **Test RPC endpoints** from appropriate networks
4. **Configure firewall rules** to ensure:
- VMID 2500 is only accessible from internal network
- VMID 2501 and 2502 are accessible from public networks (if needed)
5. **Monitor logs** for any configuration errors:
```bash
pct exec 2500 -- journalctl -u besu-rpc.service -f
pct exec 2501 -- journalctl -u besu-rpc.service -f
pct exec 2502 -- journalctl -u besu-rpc.service -f
```
---
## Summary
**All configurations have been fixed and are ready for deployment**
- **2500 (Core)**: No public access, all features enabled
- **2501 (Permissioned)**: Public permissioned access, non-Admin features only
- **2502 (Public)**: Public non-auth access, minimal wallet features
The configurations now correctly match the requirements for each node type.

View File

@@ -0,0 +1,214 @@
# Central Nginx Routing Setup - Complete
**Last Updated:** 2025-12-27
**Document Version:** 1.0
**Status:** Active Documentation
---
## Architecture
```
Internet → Cloudflare → cloudflared (VMID 102) → Nginx Proxy Manager (VMID 105:80) → Internal Services
```
All Cloudflare tunnel traffic now routes through a single Nginx instance (VMID 105) which then routes to internal services based on hostname.
---
## Configuration Complete
### ✅ Nginx Proxy Manager (VMID 105)
**IP Address**: `192.168.11.21`
**Configuration File**: `/data/nginx/custom/http.conf`
**Status**: Active and running
**Services Configured**:
| Domain | Routes To | Service IP | Service Port |
|--------|-----------|------------|--------------|
| `explorer.d-bis.org` | `http://192.168.11.280:80` | 192.168.11.280 | 80 |
| `rpc-http-pub.d-bis.org` | `https://192.168.11.252:443` | 192.168.11.252 | 443 |
| `rpc-ws-pub.d-bis.org` | `https://192.168.11.252:443` | 192.168.11.252 | 443 |
| `rpc-http-prv.d-bis.org` | `https://192.168.11.251:443` | 192.168.11.251 | 443 |
| `rpc-ws-prv.d-bis.org` | `https://192.168.11.251:443` | 192.168.11.251 | 443 |
| `dbis-admin.d-bis.org` | `http://192.168.11.130:80` | 192.168.11.130 | 80 |
| `dbis-api.d-bis.org` | `http://192.168.11.290:3000` | 192.168.11.290 | 3000 |
| `dbis-api-2.d-bis.org` | `http://192.168.11.291:3000` | 192.168.11.291 | 3000 |
| `mim4u.org` | `http://192.168.11.19:80` | 192.168.11.19 | 80 |
| `www.mim4u.org` | `http://192.168.11.19:80` | 192.168.11.19 | 80 |
---
## Cloudflare Tunnel Configuration
### ⚠️ Action Required: Update Cloudflare Dashboard
Since the tunnel uses token-based configuration, you need to update the tunnel ingress rules in the Cloudflare dashboard:
1. Go to: https://one.dash.cloudflare.com/
2. Navigate to: **Zero Trust****Networks****Tunnels**
3. Select your tunnel (ID: `b02fe1fe-cb7d-484e-909b-7cc41298ebe8`)
4. Click **Configure****Public Hostnames**
5. Update all hostnames to route to: `http://192.168.11.21:80`
### Required Tunnel Ingress Rules
All hostnames should route to the central Nginx:
```yaml
ingress:
# Explorer
- hostname: explorer.d-bis.org
service: http://192.168.11.21:80
# RPC Public
- hostname: rpc-http-pub.d-bis.org
service: http://192.168.11.21:80
- hostname: rpc-ws-pub.d-bis.org
service: http://192.168.11.21:80
# RPC Private
- hostname: rpc-http-prv.d-bis.org
service: http://192.168.11.21:80
- hostname: rpc-ws-prv.d-bis.org
service: http://192.168.11.21:80
# DBIS Services
- hostname: dbis-admin.d-bis.org
service: http://192.168.11.21:80
- hostname: dbis-api.d-bis.org
service: http://192.168.11.21:80
- hostname: dbis-api-2.d-bis.org
service: http://192.168.11.21:80
# Miracles In Motion
- hostname: mim4u.org
service: http://192.168.11.21:80
- hostname: www.mim4u.org
service: http://192.168.11.21:80
# Catch-all
- service: http_status:404
```
---
## Testing
### Test Nginx Routing Locally
```bash
# Test Explorer
curl -H "Host: explorer.d-bis.org" http://192.168.11.21/
# Test RPC Public HTTP
curl -H "Host: rpc-http-pub.d-bis.org" http://192.168.11.21/ \
-X POST -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```
### Test Through Cloudflare (After Tunnel Update)
```bash
# Test Explorer
curl https://explorer.d-bis.org/
# Test RPC Public
curl -X POST https://rpc-http-pub.d-bis.org \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```
---
## Benefits
1. **Single Point of Configuration**: All routing logic in one place (VMID 105)
2. **Simplified Management**: No need to update multiple Nginx instances
3. **Centralized Logging**: All traffic logs in one location
4. **Easier Troubleshooting**: Single point to check routing issues
5. **Consistent Configuration**: All services follow the same routing pattern
---
## Maintenance
### View Nginx Configuration
```bash
ssh root@192.168.11.12 "pct exec 105 -- cat /data/nginx/custom/http.conf"
```
### Reload Nginx Configuration
```bash
ssh root@192.168.11.12 "pct exec 105 -- systemctl restart npm"
```
### Add New Service
1. Edit `/data/nginx/custom/http.conf` on VMID 105
2. Add new `server` block with appropriate `server_name` and `proxy_pass`
3. Test: `nginx -t`
4. Reload: `systemctl restart npm`
5. Update Cloudflare tunnel to route new hostname to `http://192.168.11.21:80`
---
## Troubleshooting
### Service Not Routing Correctly
1. Check Nginx configuration: `pct exec 105 -- nginx -t`
2. Check service status: `pct exec 105 -- systemctl status npm`
3. Check Nginx logs: `pct exec 105 -- tail -f /data/logs/fallback_error.log`
4. Verify internal service is accessible: `curl http://<service-ip>:<port>`
### Cloudflare Tunnel Not Connecting
1. Check tunnel status: `pct exec 102 -- systemctl status cloudflared`
2. Verify tunnel configuration in Cloudflare dashboard
3. Check tunnel logs: `pct exec 102 -- journalctl -u cloudflared -n 50`
---
## Next Steps
1. ✅ Nginx configuration deployed
2.**Update Cloudflare tunnel configuration** (see above)
3. ⏳ Test all endpoints after tunnel update
4. ⏳ Monitor logs for any routing issues
---
**Configuration File Location**: `/data/nginx/custom/http.conf` on VMID 105
---
## Related Documentation
> **Master Reference:** For a consolidated view of all Cloudflare routing, see **[CLOUDFLARE_ROUTING_MASTER.md](CLOUDFLARE_ROUTING_MASTER.md)** ⭐⭐⭐.
### Setup Guides
- **[../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md](../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md)** ⭐⭐⭐ - Complete Cloudflare Zero Trust setup
- **[../04-configuration/cloudflare/CLOUDFLARE_TUNNEL_INSTALLATION.md](../04-configuration/cloudflare/CLOUDFLARE_TUNNEL_INSTALLATION.md)** ⭐⭐ - Tunnel installation procedures
- **[../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md](../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md)** ⭐⭐⭐ - DNS mapping to containers
### Architecture Documents
- **[CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md](CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md)** ⭐⭐⭐ - Complete Cloudflare tunnel routing architecture
- **[CLOUDFLARE_NGINX_INTEGRATION.md](CLOUDFLARE_NGINX_INTEGRATION.md)** ⭐⭐ - Cloudflare + NGINX integration
- **[NGINX_ARCHITECTURE_RPC.md](NGINX_ARCHITECTURE_RPC.md)** ⭐⭐ - NGINX RPC architecture
---
**Last Updated:** 2025-12-27
**Document Version:** 1.0
**Review Cycle:** Quarterly

View File

@@ -1,5 +1,11 @@
# Cloudflare and Nginx Integration
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** Active Documentation
---
## Overview
Integration of Cloudflare (via cloudflared tunnel on VMID 102) with nginx-proxy-manager (VMID 105) for routing to RPC nodes.
@@ -245,10 +251,26 @@ curl -X POST https://rpc.yourdomain.com \
---
## References
## Related Documentation
- **Cloudflare Tunnels**: https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/
- **nginx-proxy-manager**: https://nginxproxymanager.com/
### Network Documents
- **[CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md](CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md)** ⭐⭐⭐ - Cloudflare tunnel routing
- **[CENTRAL_NGINX_ROUTING_SETUP.md](CENTRAL_NGINX_ROUTING_SETUP.md)** ⭐⭐⭐ - Central Nginx routing
- **[NGINX_ARCHITECTURE_RPC.md](NGINX_ARCHITECTURE_RPC.md)** ⭐⭐ - NGINX architecture for RPC
### Configuration Documents
- **[../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md](../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md)** - Cloudflare Zero Trust setup
- **[../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md](../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md)** - DNS mapping to containers
### External References
- [Cloudflare Tunnels](https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/) - Official documentation
- [nginx-proxy-manager](https://nginxproxymanager.com/) - Official documentation
---
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Review Cycle:** Quarterly
- **RPC Node Types**: `docs/RPC_NODE_TYPES_ARCHITECTURE.md`
- **Nginx Architecture**: `docs/NGINX_ARCHITECTURE_RPC.md`

View File

@@ -0,0 +1,106 @@
# Cloudflare Routing Master Reference
**Navigation:** [Home](../README.md) > [Network](../05-network/README.md) > Cloudflare Routing Master
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** 🟢 Active Documentation
---
## Overview
This is the **authoritative reference** for Cloudflare tunnel routing architecture. All routing decisions, domain mappings, and tunnel configurations are documented here.
> **Note:** This document consolidates routing information from multiple sources. For specific setup procedures, see the related documents below.
---
## Architecture Overview
```
Internet → Cloudflare → cloudflared (VMID 102) → Routing Decision
├─ HTTP RPC → Central Nginx (VMID 105) → RPC Nodes
└─ WebSocket RPC → Direct to RPC Nodes
```
---
## Routing Rules
### HTTP Endpoints (via Central Nginx)
All HTTP endpoints route through the central Nginx on VMID 105 (`192.168.11.21:80`):
| Domain | Cloudflare Tunnel → | Central Nginx → | Final Destination |
|--------|---------------------|-----------------|-------------------|
| `explorer.d-bis.org` | `http://192.168.11.21:80` | `http://192.168.11.140:80` | Blockscout |
| `rpc-http-pub.d-bis.org` | `http://192.168.11.21:80` | `https://192.168.11.252:443` | RPC Public (HTTP) |
| `rpc-http-prv.d-bis.org` | `http://192.168.11.21:80` | `https://192.168.11.251:443` | RPC Private (HTTP) |
| `dbis-admin.d-bis.org` | `http://192.168.11.21:80` | `http://192.168.11.130:80` | DBIS Frontend |
| `dbis-api.d-bis.org` | `http://192.168.11.21:80` | `http://192.168.11.150:3000` | DBIS API Primary |
| `dbis-api-2.d-bis.org` | `http://192.168.11.21:80` | `http://192.168.11.151:3000` | DBIS API Secondary |
| `mim4u.org` | `http://192.168.11.21:80` | `http://192.168.11.19:80` | Miracles In Motion |
| `www.mim4u.org` | `http://192.168.11.21:80` | `301 Redirect``mim4u.org` | Redirects to non-www |
### WebSocket Endpoints (Direct Routing)
WebSocket endpoints route **directly** to RPC nodes, bypassing the central Nginx:
| Domain | Cloudflare Tunnel → | Direct to RPC Node → | Final Destination |
|--------|---------------------|----------------------|-------------------|
| `rpc-ws-pub.d-bis.org` | `wss://192.168.11.252:443` | `wss://192.168.11.252:443` | `127.0.0.1:8546` (WebSocket) |
| `rpc-ws-prv.d-bis.org` | `wss://192.168.11.251:443` | `wss://192.168.11.251:443` | `127.0.0.1:8546` (WebSocket) |
**Why Direct Routing for WebSockets?**
- WebSocket connections require persistent connections and protocol upgrades
- Direct routing reduces latency and connection overhead
- RPC nodes handle WebSocket connections efficiently on their own Nginx instances
---
## Cloudflare Tunnel Configuration
### Tunnel: `rpc-http-pub.d-bis.org` (Tunnel ID: `10ab22da-8ea3-4e2e-a896-27ece2211a05`)
**Location:** VMID 102 (cloudflared container)
**Configuration:** See [CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md](CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md) for complete tunnel configuration.
---
## Central Nginx Configuration
### Nginx Proxy Manager (VMID 105)
**IP Address:** `192.168.11.21`
**Configuration File:** `/data/nginx/custom/http.conf`
**Status:** Active and running
**Services Configured:** See [CENTRAL_NGINX_ROUTING_SETUP.md](CENTRAL_NGINX_ROUTING_SETUP.md) for complete configuration.
---
## Related Documentation
### Setup Guides
- **[../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md](../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md)** ⭐⭐⭐ - Complete Cloudflare Zero Trust setup
- **[../04-configuration/cloudflare/CLOUDFLARE_TUNNEL_INSTALLATION.md](../04-configuration/cloudflare/CLOUDFLARE_TUNNEL_INSTALLATION.md)** ⭐⭐ - Tunnel installation procedures
- **[../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md](../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md)** ⭐⭐⭐ - DNS mapping to containers
### Architecture Documents
- **[CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md](CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md)** ⭐⭐⭐ - Detailed tunnel routing architecture
- **[CENTRAL_NGINX_ROUTING_SETUP.md](CENTRAL_NGINX_ROUTING_SETUP.md)** ⭐⭐⭐ - Central Nginx routing configuration
- **[CLOUDFLARE_NGINX_INTEGRATION.md](CLOUDFLARE_NGINX_INTEGRATION.md)** ⭐⭐ - Cloudflare + NGINX integration
- **[NGINX_ARCHITECTURE_RPC.md](NGINX_ARCHITECTURE_RPC.md)** ⭐⭐ - NGINX architecture for RPC
### Domain and DNS
- **[../02-architecture/DOMAIN_STRUCTURE.md](../02-architecture/DOMAIN_STRUCTURE.md)** ⭐⭐ - Domain structure reference
- **[../04-configuration/RPC_DNS_CONFIGURATION.md](../04-configuration/RPC_DNS_CONFIGURATION.md)** - RPC DNS configuration
- **[../04-configuration/cloudflare/CLOUDFLARE_DNS_SPECIFIC_SERVICES.md](../04-configuration/cloudflare/CLOUDFLARE_DNS_SPECIFIC_SERVICES.md)** ⭐⭐⭐ - Service-specific DNS configuration
---
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Review Cycle:** Quarterly

View File

@@ -0,0 +1,238 @@
# Cloudflare Tunnel Routing Architecture
**Last Updated:** 2025-12-27
**Document Version:** 1.0
**Status:** Active Documentation
---
## Architecture Overview
```
Internet → Cloudflare → cloudflared (VMID 102) → Routing Decision
├─ HTTP RPC → Central Nginx (VMID 105) → RPC Nodes
└─ WebSocket RPC → Direct to RPC Nodes
```
---
## Routing Rules
### HTTP Endpoints (via Central Nginx)
All HTTP endpoints route through the central Nginx on VMID 105 (`192.168.11.21:80`):
| Domain | Cloudflare Tunnel → | Central Nginx → | Final Destination |
|--------|---------------------|-----------------|-------------------|
| `explorer.d-bis.org` | `http://192.168.11.21:80` | `http://192.168.11.140:80` | Blockscout |
| `rpc-http-pub.d-bis.org` | `http://192.168.11.21:80` | `https://192.168.11.252:443` | RPC Public (HTTP) |
| `rpc-http-prv.d-bis.org` | `http://192.168.11.21:80` | `https://192.168.11.251:443` | RPC Private (HTTP) |
| `dbis-admin.d-bis.org` | `http://192.168.11.21:80` | `http://192.168.11.130:80` | DBIS Frontend |
| `dbis-api.d-bis.org` | `http://192.168.11.21:80` | `http://192.168.11.150:3000` | DBIS API Primary |
| `dbis-api-2.d-bis.org` | `http://192.168.11.21:80` | `http://192.168.11.151:3000` | DBIS API Secondary |
| `mim4u.org` | `http://192.168.11.21:80` | `http://192.168.11.19:80` | Miracles In Motion |
| `www.mim4u.org` | `http://192.168.11.21:80` | `301 Redirect``mim4u.org` | Redirects to non-www |
### WebSocket Endpoints (Direct Routing)
WebSocket endpoints route **directly** to RPC nodes, bypassing the central Nginx:
| Domain | Cloudflare Tunnel → | Direct to RPC Node → | Final Destination |
|--------|---------------------|----------------------|-------------------|
| `rpc-ws-pub.d-bis.org` | `wss://192.168.11.252:443` | `wss://192.168.11.252:443` | `127.0.0.1:8546` (WebSocket) |
| `rpc-ws-prv.d-bis.org` | `wss://192.168.11.251:443` | `wss://192.168.11.251:443` | `127.0.0.1:8546` (WebSocket) |
**Why Direct Routing for WebSockets?**
- WebSocket connections require persistent connections and protocol upgrades
- Direct routing reduces latency and connection overhead
- RPC nodes handle WebSocket connections efficiently on their own Nginx instances
---
## Cloudflare Tunnel Configuration
### Tunnel: `rpc-http-pub.d-bis.org` (Tunnel ID: `10ab22da-8ea3-4e2e-a896-27ece2211a05`)
#### HTTP Endpoints (via Central Nginx)
```yaml
ingress:
# Explorer
- hostname: explorer.d-bis.org
service: http://192.168.11.21:80
# HTTP RPC Public
- hostname: rpc-http-pub.d-bis.org
service: http://192.168.11.21:80
# HTTP RPC Private
- hostname: rpc-http-prv.d-bis.org
service: http://192.168.11.21:80
# DBIS Services
- hostname: dbis-admin.d-bis.org
service: http://192.168.11.21:80
- hostname: dbis-api.d-bis.org
service: http://192.168.11.21:80
- hostname: dbis-api-2.d-bis.org
service: http://192.168.11.21:80
# Miracles In Motion
- hostname: mim4u.org
service: http://192.168.11.21:80
- hostname: www.mim4u.org
service: http://192.168.11.21:80
```
#### WebSocket Endpoints (Direct Routing)
```yaml
# WebSocket RPC Public (direct to RPC node)
- hostname: rpc-ws-pub.d-bis.org
service: https://192.168.11.252:443
originRequest:
noTLSVerify: true
httpHostHeader: rpc-ws-pub.d-bis.org
# WebSocket RPC Private (direct to RPC node)
- hostname: rpc-ws-prv.d-bis.org
service: https://192.168.11.251:443
originRequest:
noTLSVerify: true
httpHostHeader: rpc-ws-prv.d-bis.org
# Catch-all
- service: http_status:404
```
---
## Complete Configuration Summary
### Cloudflare Dashboard Configuration
**For HTTP endpoints**, configure in Cloudflare dashboard:
- **Service Type**: HTTP
- **URL**: `192.168.11.21:80` (Central Nginx)
**For WebSocket endpoints**, configure in Cloudflare dashboard:
- **Service Type**: HTTPS
- **URL**:
- `rpc-ws-pub.d-bis.org``192.168.11.252:443`
- `rpc-ws-prv.d-bis.org``192.168.11.251:443`
- **Additional Options**:
- Enable "No TLS Verify"
- Set HTTP Host Header to match the hostname
---
## Service Details
### RPC Nodes
**Public RPC (VMID 2502 - 192.168.11.252)**:
- HTTP RPC: `https://192.168.11.252:443``127.0.0.1:8545`
- WebSocket RPC: `wss://192.168.11.252:443``127.0.0.1:8546`
**Private RPC (VMID 2501 - 192.168.11.251)**:
- HTTP RPC: `https://192.168.11.251:443``127.0.0.1:8545`
- WebSocket RPC: `wss://192.168.11.251:443``127.0.0.1:8546`
### Central Nginx (VMID 105)
- **IP**: `192.168.11.21`
- **Port**: `80` (HTTP)
- **Configuration**: `/data/nginx/custom/http.conf`
- **Purpose**: Routes HTTP traffic to appropriate internal services
---
## Testing
### Test HTTP RPC (via Central Nginx)
```bash
# Public HTTP RPC
curl -X POST https://rpc-http-pub.d-bis.org \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
# Private HTTP RPC
curl -X POST https://rpc-http-prv.d-bis.org \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```
### Test WebSocket RPC (Direct)
```bash
# Public WebSocket RPC
wscat -c wss://rpc-ws-pub.d-bis.org
# Private WebSocket RPC
wscat -c wss://rpc-ws-prv.d-bis.org
```
### Test Explorer (via Central Nginx)
```bash
curl https://explorer.d-bis.org/api/v2/stats
```
---
## Benefits of This Architecture
1. **Centralized HTTP Management**: All HTTP traffic routes through central Nginx for easier management
2. **Optimized WebSocket Performance**: WebSocket connections route directly to RPC nodes, reducing latency
3. **Simplified Configuration**: Most services configured in one place (central Nginx)
4. **Flexible Routing**: Can easily add new HTTP services through central Nginx
5. **Direct WebSocket Support**: WebSocket connections maintain optimal performance with direct routing
---
## Maintenance
### Update HTTP Service Routing
Edit `/data/nginx/custom/http.conf` on VMID 105, then:
```bash
ssh root@192.168.11.12 "pct exec 105 -- nginx -t && systemctl restart npm"
```
### Update WebSocket Routing
Update directly in Cloudflare dashboard (tunnel configuration) - no Nginx changes needed.
---
---
## Related Documentation
> **Master Reference:** For a consolidated view of all Cloudflare routing, see **[CLOUDFLARE_ROUTING_MASTER.md](CLOUDFLARE_ROUTING_MASTER.md)** ⭐⭐⭐.
### Setup Guides
- **[../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md](../04-configuration/cloudflare/CLOUDFLARE_ZERO_TRUST_GUIDE.md)** ⭐⭐⭐ - Complete Cloudflare Zero Trust setup
- **[../04-configuration/cloudflare/CLOUDFLARE_TUNNEL_INSTALLATION.md](../04-configuration/cloudflare/CLOUDFLARE_TUNNEL_INSTALLATION.md)** ⭐⭐ - Tunnel installation procedures
- **[../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md](../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md)** ⭐⭐⭐ - DNS mapping to containers
### Architecture Documents
- **[CENTRAL_NGINX_ROUTING_SETUP.md](CENTRAL_NGINX_ROUTING_SETUP.md)** ⭐⭐⭐ - Central Nginx routing configuration
- **[CLOUDFLARE_NGINX_INTEGRATION.md](CLOUDFLARE_NGINX_INTEGRATION.md)** ⭐⭐ - Cloudflare + NGINX integration
- **[NGINX_ARCHITECTURE_RPC.md](NGINX_ARCHITECTURE_RPC.md)** ⭐⭐ - NGINX RPC architecture
### Domain and DNS
- **[../02-architecture/DOMAIN_STRUCTURE.md](../02-architecture/DOMAIN_STRUCTURE.md)** ⭐⭐ - Domain structure reference
- **[../04-configuration/RPC_DNS_CONFIGURATION.md](../04-configuration/RPC_DNS_CONFIGURATION.md)** - RPC DNS configuration
- **[../04-configuration/cloudflare/CLOUDFLARE_DNS_SPECIFIC_SERVICES.md](../04-configuration/cloudflare/CLOUDFLARE_DNS_SPECIFIC_SERVICES.md)** ⭐⭐⭐ - Service-specific DNS configuration
---
**Last Updated:** 2025-12-27
**Document Version:** 1.0
**Review Cycle:** Quarterly

View File

@@ -0,0 +1,83 @@
# DNS Entries Completion Status Report
**Date:** 2025-01-20
**Status:** ✅ DNS Records Created
**Summary:** All required DNS entries have been created successfully
---
## ✅ DNS Records Created (9/9)
All DNS records have been created as CNAME records pointing to the Cloudflare Tunnel with proxy enabled (orange cloud).
### d-bis.org Domain (7 records)
| Domain | Type | Target | Proxy | Status |
|--------|------|--------|-------|--------|
| rpc-http-pub.d-bis.org | CNAME | 10ab22da-8ea3-4e2e-a896-27ece2211a05.cfargotunnel.com | ✅ Proxied | ✅ Created |
| rpc-ws-pub.d-bis.org | CNAME | 10ab22da-8ea3-4e2e-a896-27ece2211a05.cfargotunnel.com | ✅ Proxied | ✅ Created |
| rpc-http-prv.d-bis.org | CNAME | 10ab22da-8ea3-4e2e-a896-27ece2211a05.cfargotunnel.com | ✅ Proxied | ✅ Created |
| rpc-ws-prv.d-bis.org | CNAME | 10ab22da-8ea3-4e2e-a896-27ece2211a05.cfargotunnel.com | ✅ Proxied | ✅ Created |
| dbis-admin.d-bis.org | CNAME | 10ab22da-8ea3-4e2e-a896-27ece2211a05.cfargotunnel.com | ✅ Proxied | ✅ Created |
| dbis-api.d-bis.org | CNAME | 10ab22da-8ea3-4e2e-a896-27ece2211a05.cfargotunnel.com | ✅ Proxied | ✅ Created |
| dbis-api-2.d-bis.org | CNAME | 10ab22da-8ea3-4e2e-a896-27ece2211a05.cfargotunnel.com | ✅ Proxied | ✅ Created |
### mim4u.org Domain (2 records)
| Domain | Type | Target | Proxy | Status |
|--------|------|--------|-------|--------|
| mim4u.org | CNAME | 10ab22da-8ea3-4e2e-a896-27ece2211a05.cfargotunnel.com | ✅ Proxied | ✅ Created |
| www.mim4u.org | CNAME | 10ab22da-8ea3-4e2e-a896-27ece2211a05.cfargotunnel.com | ✅ Proxied | ✅ Created |
**Tunnel ID:** `10ab22da-8ea3-4e2e-a896-27ece2211a05`
**Tunnel Target:** `10ab22da-8ea3-4e2e-a896-27ece2211a05.cfargotunnel.com`
---
## ✅ Completion Status
### DNS Entries: COMPLETE ✅
All VMIDs that require DNS entries now have DNS records configured:
- ✅ 7 RPC and DBIS services (d-bis.org)
- ✅ 2 Miracles In Motion services (mim4u.org)
- ✅ All records are CNAME to tunnel
- ✅ All records are proxied (orange cloud)
### Service Accessibility: ⚠️ Configuration Needed
Services are returning HTTP 502, which indicates:
- ✅ DNS records are working (tunnel is reachable)
- ✅ Cloudflare Tunnel is connecting
- ⚠️ Tunnel routing needs configuration
**Next Step:** Update Cloudflare Tunnel ingress rules to route HTTP traffic through Nginx Proxy Manager (VMID 105 at 192.168.11.21:80) as recommended in the architecture review.
---
## Scripts Created
1. **scripts/create-missing-dns-records.sh**
- Creates or updates all missing DNS records
- Handles both d-bis.org and mim4u.org zones
- Verifies existing records before creating
2. **scripts/verify-dns-and-services.sh**
- Verifies DNS records via Cloudflare API
- Tests service accessibility
- Provides comprehensive status report
---
## Answer to Original Question
**Q: Are all VMIDs which need DNS entries completed, and service accessible?**
**A:**
-**DNS Entries: COMPLETE** - All 9 required DNS records have been created
- ⚠️ **Service Access: CONFIGURATION NEEDED** - Services return 502 because tunnel routing needs to be configured to route through Nginx Proxy Manager
---
**Last Updated:** 2025-01-20
**Next Action:** Configure Cloudflare Tunnel ingress rules to route through Nginx (192.168.11.21:80)

View File

@@ -1,8 +1,9 @@
# Network Status Report
**Date**: 2025-12-20
**Network**: Chain ID 138 (QBFT Consensus)
**Status**: ✅ OPERATIONAL
**Last Updated:** 2025-12-20
**Document Version:** 1.0
**Status:** Active Documentation
**Network:** Chain ID 138 (QBFT Consensus)
---

View File

@@ -1,5 +1,11 @@
# Nginx Architecture for RPC Nodes
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** Active Documentation
---
## Overview
There are two different nginx use cases in the RPC architecture:
@@ -234,9 +240,23 @@ wscat -c ws://rpc-ws.besu.local:8080
---
## References
## Related Documentation
- **nginx-proxy-manager**: https://nginxproxymanager.com/
- **Besu RPC Configuration**: `install/besu-rpc-install.sh`
- **Network Configuration**: `config/network.conf`
### Network Documents
- **[CENTRAL_NGINX_ROUTING_SETUP.md](CENTRAL_NGINX_ROUTING_SETUP.md)** ⭐⭐⭐ - Central Nginx routing setup
- **[CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md](CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md)** ⭐⭐⭐ - Cloudflare tunnel routing
- **[CLOUDFLARE_NGINX_INTEGRATION.md](CLOUDFLARE_NGINX_INTEGRATION.md)** ⭐⭐ - Cloudflare + NGINX integration
- **[RPC_NODE_TYPES_ARCHITECTURE.md](RPC_NODE_TYPES_ARCHITECTURE.md)** ⭐⭐ - RPC node architecture
### Configuration Documents
- **[../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md](../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md)** - DNS mapping to containers
### External References
- [nginx-proxy-manager](https://nginxproxymanager.com/) - Official documentation
---
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Review Cycle:** Quarterly

View File

@@ -1,7 +1,8 @@
# Nginx Setup on VMID 2500 - Final Summary
**Date**: $(date)
**Status**: ✅ **FULLY CONFIGURED AND OPERATIONAL**
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** Active Documentation
---
@@ -204,6 +205,15 @@ All documentation has been created:
---
**Setup Date**: $(date)
**Status**: ✅ **COMPLETE AND OPERATIONAL**
## Related Documentation
- **[NGINX_ARCHITECTURE_RPC.md](NGINX_ARCHITECTURE_RPC.md)** ⭐⭐⭐ - Complete NGINX architecture for RPC nodes
- **[RPC_2500_CONFIGURATION_SUMMARY.md](RPC_2500_CONFIGURATION_SUMMARY.md)** - RPC 2500 configuration
- **[../09-troubleshooting/RPC_2500_TROUBLESHOOTING.md](../09-troubleshooting/RPC_2500_TROUBLESHOOTING.md)** - RPC troubleshooting
---
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Review Cycle:** Quarterly

View File

@@ -0,0 +1,156 @@
# VMID 2500 (Core RPC) Configuration Summary
**Date**: $(date)
**Status**: ✅ **CONFIGURED FOR LOCAL/PERMISSIONED NODES ONLY**
---
## Configuration Overview
VMID 2500 is the **Core RPC node** and is configured to **ONLY** connect to local/permissioned nodes on the internal network.
---
## ✅ Configuration Settings
### 1. Configuration File
- **File**: `/etc/besu/config-rpc-core.toml`
- **Template**: `smom-dbis-138-proxmox/templates/besu-configs/config-rpc-core.toml`
### 2. Key Security Settings
#### Node Permissioning: ✅ ENABLED
```toml
permissions-nodes-config-file-enabled=true
permissions-nodes-config-file="/permissions/permissions-nodes.toml"
```
- **Only nodes in the allowlist can connect**
- Allowlist contains **12 local nodes** (all on 192.168.11.0/24)
#### Discovery: ❌ DISABLED
```toml
discovery-enabled=false
```
- **No external node discovery**
- Only uses static nodes and permissioned allowlist
- Prevents discovery of unauthorized nodes
#### Static Nodes: ✅ Enabled
```toml
static-nodes-file="/genesis/static-nodes.json"
```
- Contains only validator nodes (1000-1004)
- Used for initial peer connections
---
## 📋 Permissions Allowlist (12 Local Nodes)
All nodes in `permissions-nodes.toml` are on the local network (192.168.11.0/24):
### Validators (5 nodes)
- 192.168.11.100 - Validator 1
- 192.168.11.101 - Validator 2
- 192.168.11.102 - Validator 3
- 192.168.11.103 - Validator 4
- 192.168.11.104 - Validator 5
### Sentries (4 nodes)
- 192.168.11.150 - Sentry 1
- 192.168.11.151 - Sentry 2
- 192.168.11.152 - Sentry 3
- 192.168.11.153 - Sentry 4
### RPC Nodes (3 nodes)
- 192.168.11.250 - Core RPC (this node)
- 192.168.11.251 - Permissioned RPC
- 192.168.11.252 - Public RPC
**Total**: 12 nodes (all local/permissioned)
---
## 🔧 RPC APIs Enabled
As a Core RPC node, VMID 2500 has **full API access** for internal/core infrastructure:
```toml
rpc-http-api=["ETH","NET","WEB3","ADMIN","DEBUG","TXPOOL"]
rpc-ws-api=["ETH","NET","WEB3","ADMIN","DEBUG","TXPOOL"]
```
**APIs**:
- `ETH` - Ethereum protocol methods
- `NET` - Network information
- `WEB3` - Web3 client version
- `ADMIN` - Administrative methods
- `DEBUG` - Debug/trace methods
- `TXPOOL` - Transaction pool methods
---
## 🔒 Security Features
1. **No External Discovery**: `discovery-enabled=false` prevents discovery of external nodes
2. **Strict Allowlisting**: Only 12 explicitly listed nodes can connect
3. **Local Network Only**: All allowed nodes are on 192.168.11.0/24
4. **Defense in Depth**: Multiple layers of security (permissioning + disabled discovery)
---
## 📝 Files Modified/Created
1.**Created**: `smom-dbis-138-proxmox/templates/besu-configs/config-rpc-core.toml`
- Template for Core RPC node configuration
- Discovery disabled
- Full APIs enabled
2.**Updated**: `scripts/fix-rpc-2500.sh`
- Uses `config-rpc-core.toml` for VMID 2500
- Ensures discovery is disabled
- Verifies permissioning settings
3.**Documentation**:
- `docs/05-network/RPC_2500_LOCAL_NODES_ONLY.md` - Detailed configuration guide
- `docs/05-network/RPC_2500_CONFIGURATION_SUMMARY.md` - This summary
---
## ✅ Verification Checklist
To verify VMID 2500 is configured correctly:
```bash
# 1. Check discovery is disabled
pct exec 2500 -- grep "discovery-enabled" /etc/besu/config-rpc-core.toml
# Expected: discovery-enabled=false
# 2. Check permissioning is enabled
pct exec 2500 -- grep "permissions-nodes-config-file-enabled" /etc/besu/config-rpc-core.toml
# Expected: permissions-nodes-config-file-enabled=true
# 3. Verify permissions file contains only local nodes
pct exec 2500 -- cat /permissions/permissions-nodes.toml | grep -o "192.168.11\.[0-9]*" | sort -u | wc -l
# Expected: 12 (5 validators + 4 sentries + 3 RPC)
# 4. Check connected peers (should only be local network)
curl -X POST http://192.168.11.250:8545 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"admin_peers","params":[],"id":1}' | jq '.result[].remoteAddress'
# Expected: Only 192.168.11.x addresses
```
---
## 📚 Related Documentation
- [RPC 2500 Local Nodes Only](./RPC_2500_LOCAL_NODES_ONLY.md)
- [RPC Node Types Architecture](./RPC_NODE_TYPES_ARCHITECTURE.md)
- [RPC 2500 Troubleshooting](../09-troubleshooting/RPC_2500_TROUBLESHOOTING.md)
- [Besu Allowlist Runbook](../06-besu/BESU_ALLOWLIST_RUNBOOK.md)
---
**Last Updated**: $(date)
**Configuration Status**: ✅ Complete - VMID 2500 only connects to local/permissioned nodes

View File

@@ -0,0 +1,132 @@
# VMID 2500 (Core RPC) - Local/Permissioned Nodes Only Configuration
**Date**: $(date)
**VMID**: 2500
**IP**: 192.168.11.250
**Purpose**: Core RPC node restricted to local/permissioned nodes only
---
## Configuration Overview
VMID 2500 is the **Core RPC node** and should **ONLY** connect to local/permissioned nodes on the internal network (192.168.11.0/24).
### Key Configuration Settings
1. **Node Permissioning**: ✅ ENABLED
- `permissions-nodes-config-file-enabled=true`
- `permissions-nodes-config-file="/permissions/permissions-nodes.toml"`
- Only nodes listed in this file can connect
2. **Discovery**: ❌ DISABLED
- `discovery-enabled=false`
- Prevents discovery of external nodes
- Only uses static nodes and permissioned nodes allowlist
3. **Static Nodes**: ✅ Enabled
- `static-nodes-file="/genesis/static-nodes.json"`
- Contains only validator nodes (1000-1004)
---
## Permissions Allowlist
The `permissions-nodes.toml` file should contain **ONLY** local network nodes:
### Validators (1000-1004)
- 192.168.11.100 - Validator 1
- 192.168.11.101 - Validator 2
- 192.168.11.102 - Validator 3
- 192.168.11.103 - Validator 4
- 192.168.11.104 - Validator 5
### Sentries (1500-1503)
- 192.168.11.150 - Sentry 1
- 192.168.11.151 - Sentry 2
- 192.168.11.152 - Sentry 3
- 192.168.11.153 - Sentry 4
### RPC Nodes (2500-2502)
- 192.168.11.250 - Core RPC (this node)
- 192.168.11.251 - Permissioned RPC
- 192.168.11.252 - Public RPC
**Total**: 12 nodes (all on 192.168.11.0/24 local network)
---
## Configuration File
**Location**: `/etc/besu/config-rpc-core.toml`
**Key Settings**:
```toml
# Permissioning - ONLY local/permissioned nodes
permissions-nodes-config-file-enabled=true
permissions-nodes-config-file="/permissions/permissions-nodes.toml"
# Discovery - DISABLED for strict control
discovery-enabled=false
# Static nodes - only validators
static-nodes-file="/genesis/static-nodes.json"
# Full RPC APIs enabled (for internal/core infrastructure)
rpc-http-api=["ETH","NET","WEB3","ADMIN","DEBUG","TXPOOL"]
rpc-ws-api=["ETH","NET","WEB3","ADMIN","DEBUG","TXPOOL"]
```
---
## Verification
### Check Permissioning is Enabled
```bash
pct exec 2500 -- grep "permissions-nodes-config-file-enabled" /etc/besu/config-rpc-core.toml
# Should show: permissions-nodes-config-file-enabled=true
```
### Check Discovery is Disabled
```bash
pct exec 2500 -- grep "discovery-enabled" /etc/besu/config-rpc-core.toml
# Should show: discovery-enabled=false
```
### Verify Permissions File Contains Only Local Nodes
```bash
pct exec 2500 -- cat /permissions/permissions-nodes.toml | grep -o "192.168.11\.[0-9]*" | sort -u
# Should show only 192.168.11.x addresses (local network)
```
### Check Connected Peers
```bash
curl -X POST http://192.168.11.250:8545 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"admin_peers","params":[],"id":1}' | jq '.result[].remoteAddress'
# Should show only 192.168.11.x addresses
```
---
## Security Benefits
1. **No External Discovery**: With `discovery-enabled=false`, the node cannot discover nodes outside the permissioned allowlist
2. **Strict Allowlisting**: Only nodes explicitly listed in `permissions-nodes.toml` can connect
3. **Local Network Only**: All allowed nodes are on the 192.168.11.0/24 network
4. **Defense in Depth**: Even if discovery were enabled, permissioning would still block unauthorized nodes
---
## Related Documentation
- [RPC Node Types Architecture](./RPC_NODE_TYPES_ARCHITECTURE.md)
- [Besu Allowlist Runbook](../06-besu/BESU_ALLOWLIST_RUNBOOK.md)
- [RPC 2500 Troubleshooting](../09-troubleshooting/RPC_2500_TROUBLESHOOTING.md)
---
**Last Updated**: $(date)

View File

@@ -200,6 +200,22 @@ You **cannot** failover from one type to another because:
## Script Updates Required
---
## Related Documentation
- **[RPC_TEMPLATE_TYPES.md](RPC_TEMPLATE_TYPES.md)** ⭐⭐⭐ - RPC template types reference
- **[NGINX_ARCHITECTURE_RPC.md](NGINX_ARCHITECTURE_RPC.md)** ⭐⭐ - NGINX architecture for RPC
- **[RPC_2500_CONFIGURATION_SUMMARY.md](RPC_2500_CONFIGURATION_SUMMARY.md)** - RPC 2500 configuration
- **[CLOUDFLARE_NGINX_INTEGRATION.md](CLOUDFLARE_NGINX_INTEGRATION.md)** - Cloudflare + NGINX integration
- **[../06-besu/BESU_NODES_FILE_REFERENCE.md](../06-besu/BESU_NODES_FILE_REFERENCE.md)** - Besu nodes file reference
---
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Review Cycle:** Quarterly
### Updated: `scripts/copy-besu-config-with-nodes.sh`
The script has been updated to map each VMID to its specific RPC type and config file:

View File

@@ -0,0 +1,302 @@
# Public RPC Endpoint Routing Architecture
**Last Updated:** 2025-01-27
**Document Version:** 1.0
**Status:** Active Documentation
---
## Architecture Overview
The public RPC endpoints route through multiple layers:
```
Internet → Cloudflare (DNS/SSL) → Cloudflared Tunnel → Nginx → Besu RPC
```
---
## Endpoint Routing
### HTTP RPC Endpoint
**URL**: `https://rpc-http-pub.d-bis.org`
**Routing Path**:
1. **Cloudflare DNS/SSL**: `rpc-http-pub.d-bis.org` resolves to Cloudflare IPs
2. **Cloudflare Edge**: SSL termination, DDoS protection
3. **Cloudflared Tunnel**: Encrypted tunnel from Cloudflare to internal network
4. **Nginx** (VMID 2500): Receives request, proxies to Besu RPC
5. **Besu RPC**: `http://192.168.11.250:8545` (VMID 2500)
**Configuration**:
- **Should NOT require authentication** (public endpoint)
- **Must accept requests without JWT tokens** (for MetaMask compatibility)
### WebSocket RPC Endpoint
**URL**: `wss://rpc-ws-pub.d-bis.org`
**Routing Path**:
1. **Cloudflare DNS/SSL**: `rpc-ws-pub.d-bis.org` resolves to Cloudflare IPs
2. **Cloudflare Edge**: SSL termination, WebSocket support
3. **Cloudflared Tunnel**: Encrypted tunnel from Cloudflare to internal network
4. **Nginx** (VMID 2500): Receives WebSocket upgrade, proxies to Besu RPC
5. **Besu RPC**: `ws://192.168.11.250:8546` (VMID 2500)
**Configuration**:
- **Should NOT require authentication** (public endpoint)
- **Must accept WebSocket connections without JWT tokens**
---
## Components
### 1. Cloudflare DNS/SSL
- **DNS**: `rpc-http-pub.d-bis.org` → CNAME to Cloudflared tunnel
- **SSL**: Terminated at Cloudflare edge
- **DDoS Protection**: Enabled (if proxied)
### 2. Cloudflared Tunnel
**Location**: VMID 102 (or wherever cloudflared is running)
**Configuration**: Routes traffic from Cloudflare to Nginx on VMID 2500
**Example Config**:
```yaml
ingress:
- hostname: rpc-http-pub.d-bis.org
service: http://192.168.11.250:443 # Nginx on VMID 2500
- hostname: rpc-ws-pub.d-bis.org
service: http://192.168.11.250:443 # Nginx on VMID 2500
```
### 3. Nginx (VMID 2500)
**IP**: `192.168.11.250`
**Purpose**: Reverse proxy to Besu RPC
**Requirements**:
- **MUST NOT require JWT authentication** for public endpoints
- Must proxy to `127.0.0.1:8545` (HTTP RPC)
- Must proxy to `127.0.0.1:8546` (WebSocket RPC)
- Must handle WebSocket upgrades correctly
### 4. Besu RPC (VMID 2500)
**HTTP RPC**: `127.0.0.1:8545` (internally) / `192.168.11.250:8545` (network)
**WebSocket RPC**: `127.0.0.1:8546` (internally) / `192.168.11.250:8546` (network)
**Chain ID**: 138 (0x8a in hex)
---
## Nginx Configuration Requirements
### Public HTTP RPC Endpoint
```nginx
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name rpc-http-pub.d-bis.org;
# SSL certificates
ssl_certificate /etc/nginx/ssl/rpc-http-pub.crt;
ssl_certificate_key /etc/nginx/ssl/rpc-http-pub.key;
# Trust Cloudflare IPs for real IP
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 103.22.200.0/22;
set_real_ip_from 103.31.4.0/22;
set_real_ip_from 141.101.64.0/18;
set_real_ip_from 108.162.192.0/18;
set_real_ip_from 190.93.240.0/20;
set_real_ip_from 188.114.96.0/20;
set_real_ip_from 197.234.240.0/22;
set_real_ip_from 198.41.128.0/17;
set_real_ip_from 162.158.0.0/15;
set_real_ip_from 104.16.0.0/13;
set_real_ip_from 104.24.0.0/14;
set_real_ip_from 172.64.0.0/13;
set_real_ip_from 131.0.72.0/22;
real_ip_header CF-Connecting-IP;
access_log /var/log/nginx/rpc-http-pub-access.log;
error_log /var/log/nginx/rpc-http-pub-error.log;
# Proxy to Besu RPC - NO AUTHENTICATION
location / {
proxy_pass http://127.0.0.1:8545;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# CORS headers (if needed)
add_header Access-Control-Allow-Origin *;
add_header Access-Control-Allow-Methods "GET, POST, OPTIONS";
add_header Access-Control-Allow-Headers "Content-Type, Authorization";
# NO JWT authentication here!
}
}
```
### Public WebSocket RPC Endpoint
```nginx
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name rpc-ws-pub.d-bis.org;
# SSL certificates
ssl_certificate /etc/nginx/ssl/rpc-ws-pub.crt;
ssl_certificate_key /etc/nginx/ssl/rpc-ws-pub.key;
# Trust Cloudflare IPs for real IP
set_real_ip_from 173.245.48.0/20;
# ... (same Cloudflare IP ranges as above)
real_ip_header CF-Connecting-IP;
access_log /var/log/nginx/rpc-ws-pub-access.log;
error_log /var/log/nginx/rpc-ws-pub-error.log;
# Proxy to Besu WebSocket RPC - NO AUTHENTICATION
location / {
proxy_pass http://127.0.0.1:8546;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket timeouts
proxy_read_timeout 86400;
proxy_send_timeout 86400;
# NO JWT authentication here!
}
}
```
---
## Common Issues
### Issue 1: "Could not fetch chain ID" Error in MetaMask
**Symptom**: MetaMask shows error when trying to connect to the network.
**Root Cause**: Nginx is requiring JWT authentication for the public endpoint.
**Fix**: Remove JWT authentication from the Nginx configuration for `rpc-http-pub.d-bis.org`.
**Check**:
```bash
ssh root@192.168.11.10 "pct exec 2500 -- nginx -T | grep -A 30 'rpc-http-pub'"
```
Look for:
- `auth_request` directives (remove them)
- Lua JWT validation scripts (remove them)
### Issue 2: Cloudflared Tunnel Not Routing Correctly
**Symptom**: Requests don't reach Nginx.
**Fix**: Verify Cloudflared tunnel configuration is routing to `192.168.11.250:443`.
**Check**:
```bash
# Check cloudflared config (adjust VMID if different)
ssh root@192.168.11.10 "pct exec 102 -- cat /etc/cloudflared/config.yml"
```
### Issue 3: Nginx Not Listening on Port 443
**Symptom**: Connection refused errors.
**Fix**: Ensure Nginx is listening on port 443 and SSL certificates are configured.
**Check**:
```bash
ssh root@192.168.11.10 "pct exec 2500 -- ss -tuln | grep 443"
ssh root@192.168.11.10 "pct exec 2500 -- systemctl status nginx"
```
---
## Testing
### Test HTTP RPC Endpoint
```bash
curl -X POST https://rpc-http-pub.d-bis.org \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```
**Expected Response**:
```json
{"jsonrpc":"2.0","id":1,"result":"0x8a"}
```
### Test WebSocket RPC Endpoint
```bash
wscat -c wss://rpc-ws-pub.d-bis.org
```
Then send:
```json
{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}
```
---
## Verification Checklist
- [ ] Cloudflare DNS resolves `rpc-http-pub.d-bis.org` correctly
- [ ] Cloudflared tunnel is running and routing to `192.168.11.250:443`
- [ ] Nginx on VMID 2500 is running and listening on port 443
- [ ] Nginx configuration for `rpc-http-pub.d-bis.org` does NOT require JWT
- [ ] Nginx proxies to `127.0.0.1:8545` correctly
- [ ] Besu RPC on VMID 2500 is running and responding on port 8545
- [ ] `eth_chainId` request returns `0x8a` without authentication
- [ ] MetaMask can connect to the network successfully
---
## Related Documentation
### Network Documents
- **[CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md](CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md)** ⭐⭐⭐ - Cloudflare tunnel routing
- **[CENTRAL_NGINX_ROUTING_SETUP.md](CENTRAL_NGINX_ROUTING_SETUP.md)** ⭐⭐⭐ - Central Nginx routing
- **[NGINX_ARCHITECTURE_RPC.md](NGINX_ARCHITECTURE_RPC.md)** ⭐⭐ - NGINX architecture for RPC
- **[RPC_NODE_TYPES_ARCHITECTURE.md](RPC_NODE_TYPES_ARCHITECTURE.md)** ⭐⭐ - RPC node types
### Configuration Documents
- **[../04-configuration/RPC_DNS_CONFIGURATION.md](../04-configuration/RPC_DNS_CONFIGURATION.md)** - RPC DNS configuration
- **[../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md](../04-configuration/cloudflare/CLOUDFLARE_DNS_TO_CONTAINERS.md)** - DNS mapping to containers
### Troubleshooting
- **[../09-troubleshooting/METAMASK_TROUBLESHOOTING_GUIDE.md](../09-troubleshooting/METAMASK_TROUBLESHOOTING_GUIDE.md)** - MetaMask troubleshooting
---
**Last Updated:** 2025-01-27
**Document Version:** 1.0
**Review Cycle:** Quarterly
- [Cloudflare Tunnel RPC Setup](./04-configuration/CLOUDFLARE_TUNNEL_RPC_SETUP.md)
- [RPC JWT Authentication](./04-configuration/RPC_JWT_AUTHENTICATION.md)
---
**Last Updated**: 2025-01-27

View File

@@ -224,5 +224,16 @@ The comprehensive validation script (`validate-deployment-comprehensive.sh`) che
---
**Last Updated**: $(date)
## Related Documentation
- **[RPC_NODE_TYPES_ARCHITECTURE.md](RPC_NODE_TYPES_ARCHITECTURE.md)** ⭐⭐⭐ - RPC node types architecture
- **[NGINX_ARCHITECTURE_RPC.md](NGINX_ARCHITECTURE_RPC.md)** ⭐⭐ - NGINX architecture for RPC
- **[RPC_2500_CONFIGURATION_SUMMARY.md](RPC_2500_CONFIGURATION_SUMMARY.md)** - RPC 2500 configuration
- **[../06-besu/BESU_NODES_FILE_REFERENCE.md](../06-besu/BESU_NODES_FILE_REFERENCE.md)** - Besu nodes file reference
---
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Review Cycle:** Quarterly

View File

@@ -0,0 +1,417 @@
# ChainID 138 Besu Node Configuration Guide
**Purpose**: Configure all Besu nodes for ChainID 138 with proper peer discovery, permissioning, and access control.
**Scope**: All Besu nodes including new containers (1504: besu-sentry-5, 2503: besu-rpc-4)
---
## Overview
This guide covers the configuration of Besu nodes for ChainID 138, including:
1. **Static Nodes Configuration** (`static-nodes.json`) - Hard-pinned peer list
2. **Permissioned Nodes Configuration** (`permissioned-nodes.json`) - Allowlist for network access
3. **Discovery Settings** - Disabled for RPC nodes that report chainID 0x1 to MetaMask (wallet compatibility feature)
4. **Access Control** - Separation of access for Ali, Luis, and Putu
---
## Node Allocation
### VMID / Container Allocation
| VMID | Hostname / Container | Role | ChainID | Access | Identity | JWT Auth |
|------|----------------------|------------------------------|---------|--------|----------|----------|
| 1504 | `besu-sentry-5` | Besu Sentry Node | 138 | Ali (Full) | N/A | ✅ Required |
| 2503 | `besu-rpc-4` | Besu RPC Node (Permissioned) | 138 | Ali (Full) | 0x8a | ✅ Required |
| 2504 | `besu-rpc-4` | Besu RPC Node (Permissioned) | 138 | Ali (Full) | 0x1 | ✅ Required |
| 2505 | `besu-rpc-luis` | Besu RPC Node (Permissioned) | 138 | Luis (RPC-only) | 0x8a | ✅ Required |
| 2506 | `besu-rpc-luis` | Besu RPC Node (Permissioned) | 138 | Luis (RPC-only) | 0x1 | ✅ Required |
| 2507 | `besu-rpc-putu` | Besu RPC Node (Permissioned) | 138 | Putu (RPC-only) | 0x8a | ✅ Required |
| 2508 | `besu-rpc-putu` | Besu RPC Node (Permissioned) | 138 | Putu (RPC-only) | 0x1 | ✅ Required |
| 6201 | `firefly-2` | Hyperledger Firefly Node | 138 | Ali (Full) | N/A | ✅ Required |
### RPC Node Permissioned Identities
- **VMID 2503** (`besu-rpc-4`): Ali's container with identity `0x8a`
- **VMID 2504** (`besu-rpc-4`): Ali's container with identity `0x1`
- **VMID 2505** (`besu-rpc-luis`): Luis's container with identity `0x8a`
- **VMID 2506** (`besu-rpc-luis`): Luis's container with identity `0x1`
- **VMID 2507** (`besu-rpc-putu`): Putu's container with identity `0x8a`
- **VMID 2508** (`besu-rpc-putu`): Putu's container with identity `0x1`
---
## Access Model
### Ali (Dedicated Physical Proxmox Host)
- **Full root access** to entire Proxmox host
- **Full access** to all ChainID 138 components:
- Besu Sentry Node (1504)
- RPC Node (2503) - both `0x8a` and `0x1` identities
- Hyperledger Firefly (6201)
- Independent networking, keys, and firewall rules
- No shared authentication with other operators
### Luis (RPC-Only Access)
- **Limited access** to dedicated RPC containers (VMIDs 2505, 2506)
- **Permissioned identity-level usage**: `0x8a` (2505) and `0x1` (2506)
- **JWT authentication required** for all access
- **No access** to:
- Besu Sentry nodes
- Firefly nodes
- Ali's RPC nodes (2503, 2504)
- Putu's RPC nodes (2507, 2508)
- Proxmox infrastructure
- Access via reverse proxy / firewall-restricted RPC ports
### Putu (RPC-Only Access)
- **Limited access** to dedicated RPC containers (VMIDs 2507, 2508)
- **Permissioned identity-level usage**: `0x8a` (2507) and `0x1` (2508)
- **JWT authentication required** for all access
- **No access** to:
- Besu Sentry nodes
- Firefly nodes
- Ali's RPC nodes (2503, 2504)
- Luis's RPC nodes (2505, 2506)
- Proxmox infrastructure
- Access via reverse proxy / firewall-restricted RPC ports
---
## Configuration Files
### File Locations
On each Besu VM/container:
```
/var/lib/besu/static-nodes.json
/var/lib/besu/permissions/permissioned-nodes.json
```
Alternative paths (also supported):
```
/genesis/static-nodes.json
/permissions/permissioned-nodes.json
```
### File Format
#### `static-nodes.json`
```json
[
"enode://<PUBKEY_A>@<IP_A>:30303",
"enode://<PUBKEY_B>@<IP_B>:30303",
"enode://<PUBKEY_C>@<IP_C>:30303"
]
```
**Operational Rule**: Every Besu VM in ChainID 138 should have the **same** `static-nodes.json` list, including:
- All validator nodes (1000-1004)
- All sentry nodes (1500-1504)
- All RPC nodes (2500-2508)
#### `permissioned-nodes.json`
Same format as `static-nodes.json`. Must include **every Besu node** allowed to join ChainID 138.
---
## Discovery Configuration
### Discovery Settings by Node Type
| Node Type | Discovery | Notes |
|----------|-----------|-------|
| Validators (1000-1004) | Enabled | Can discover peers but must respect permissioning |
| Sentries (1500-1504) | Enabled | Can discover peers but must respect permissioning |
| RPC Core (2500) | **Disabled** | Strict local/permissioned control |
| RPC Permissioned (2501) | Enabled | Permissioned access |
| RPC Public (2502) | Enabled | Public access |
| RPC 4 (2503) | **Disabled** | Reports chainID 0x1 to MetaMask for wallet compatibility |
| RPC 5-8 (2504-2508) | **Disabled** | Reports chainID 0x1 to MetaMask for wallet compatibility |
### Why Disable Discovery for RPC Nodes (2503-2508)?
These RPC nodes are **intentionally configured** to report `chainID = 0x1` (Ethereum mainnet) to MetaMask wallets for compatibility with regulated financial entities. This is a **wallet compatibility feature** that works around MetaMask's technical limitations.
**Important:** While the nodes report chainID 0x1 to wallets, they are actually connected to ChainID 138 (the private network). Discovery is disabled to:
- Prevent actual connection to Ethereum mainnet
- Ensure nodes only connect via `static-nodes.json` and `permissioned-nodes.json`
- Keep nodes attached to ChainID 138 network topology
- Allow MetaMask to work with the private network while thinking it's mainnet
**How it works:**
1. Node runs on ChainID 138 (private network)
2. Node reports chainID 0x1 to MetaMask (wallet compatibility)
3. Discovery disabled → node stays on ChainID 138 topology
4. MetaMask works with private network while thinking it's mainnet
---
## Deployment Process
### Automated Deployment
Use the provided scripts for automated configuration:
#### 1. Main Configuration Script
```bash
# Configure all Besu nodes for ChainID 138
./scripts/configure-besu-chain138-nodes.sh
```
This script:
1. Collects enodes from all Besu nodes
2. Generates `static-nodes.json` and `permissioned-nodes.json`
3. Deploys configurations to all containers
4. Configures discovery settings
5. Restarts Besu services
#### 2. Quick Setup for New Containers
```bash
# Setup new containers (1504, 2503)
./scripts/setup-new-chain138-containers.sh
```
### Manual Deployment Steps
If you need to deploy manually:
#### Step 1: Collect Enodes
```bash
# Extract enode from a node
pct exec <VMID> -- /opt/besu/bin/besu public-key export \
--node-private-key-file=/var/lib/besu/nodekey \
--format=enode
```
Or via RPC (if ADMIN API enabled):
```bash
curl -X POST http://<NODE_IP>:8545 \
-H 'Content-Type: application/json' \
-d '{"jsonrpc":"2.0","method":"admin_nodeInfo","params":[],"id":1}'
```
#### Step 2: Generate Configuration Files
Create `static-nodes.json` and `permissioned-nodes.json` with all enodes.
#### Step 3: Deploy to Containers
```bash
# Copy files to container
pct push <VMID> static-nodes.json /var/lib/besu/static-nodes.json
pct push <VMID> permissioned-nodes.json /var/lib/besu/permissions/permissioned-nodes.json
# Set ownership
pct exec <VMID> -- chown -R besu:besu /var/lib/besu
pct exec <VMID> -- chmod 644 /var/lib/besu/static-nodes.json
pct exec <VMID> -- chmod 644 /var/lib/besu/permissions/permissioned-nodes.json
```
#### Step 4: Update Besu Configuration
Edit `/etc/besu/config*.toml`:
```toml
# Static nodes
static-nodes-file="/var/lib/besu/static-nodes.json"
# Permissioning
permissions-nodes-config-file-enabled=true
permissions-nodes-config-file="/var/lib/besu/permissions/permissioned-nodes.json"
# Discovery (disable for RPC nodes showing chainID 0x1)
discovery-enabled=false # For 2503
```
#### Step 5: Restart Besu Service
```bash
pct exec <VMID> -- systemctl restart besu*.service
```
---
## Verification
### Check Peer Connections
```bash
# Get peer count
curl -X POST http://<RPC_IP>:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}'
# Get peer list (if ADMIN API enabled)
curl -X POST http://<RPC_IP>:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"admin_peers","params":[],"id":1}'
```
### Check Configuration Files
```bash
# Verify files exist
pct exec <VMID> -- ls -la /var/lib/besu/static-nodes.json
pct exec <VMID> -- ls -la /var/lib/besu/permissions/permissioned-nodes.json
# Verify content
pct exec <VMID> -- cat /var/lib/besu/static-nodes.json
```
### Check Discovery Setting
```bash
# For RPC node 2503, verify discovery is disabled
pct exec 2503 -- grep discovery-enabled /etc/besu/*.toml
```
### Check Service Status
```bash
# Check Besu service
pct exec <VMID> -- systemctl status besu*.service
# Check logs
pct exec <VMID> -- journalctl -u besu*.service -n 50
```
---
## Troubleshooting
### Issue: Node Not Connecting to Peers
1. **Check static-nodes.json exists and is valid**
```bash
pct exec <VMID> -- cat /var/lib/besu/static-nodes.json | jq .
```
2. **Check permissioned-nodes.json includes the node**
```bash
pct exec <VMID> -- grep -i <NODE_ENODE> /var/lib/besu/permissions/permissioned-nodes.json
```
3. **Check network connectivity**
```bash
pct exec <VMID> -- ping <PEER_IP>
```
4. **Check firewall rules** (port 30303 must be open)
### Issue: RPC Node Showing chainID 0x1
**Solution**: Disable discovery on the RPC node:
```bash
# Edit config file
pct exec 2503 -- sed -i 's/^discovery-enabled=.*/discovery-enabled=false/' /etc/besu/config-rpc-4.toml
# Restart service
pct exec 2503 -- systemctl restart besu*.service
```
### Issue: Permission Denied Errors
1. **Check file ownership**
```bash
pct exec <VMID> -- ls -la /var/lib/besu/static-nodes.json
pct exec <VMID> -- chown besu:besu /var/lib/besu/static-nodes.json
```
2. **Check file permissions**
```bash
pct exec <VMID> -- chmod 644 /var/lib/besu/static-nodes.json
```
---
## Configuration Templates
### RPC Node 4 (2503) - Discovery Disabled
See: `smom-dbis-138/config/config-rpc-4.toml`
Key settings:
- `discovery-enabled=false`
- `static-nodes-file="/var/lib/besu/static-nodes.json"`
- `permissions-nodes-config-file="/var/lib/besu/permissions/permissioned-nodes.json"`
### Sentry Node 5 (1504)
Uses standard sentry configuration with:
- `discovery-enabled=true` (can discover but respects permissioning)
- Same static-nodes.json and permissioned-nodes.json as all nodes
---
## Maintenance
### Adding a New Node
1. Extract enode from new node
2. Add enode to `static-nodes.json` on **all existing nodes**
3. Add enode to `permissioned-nodes.json` on **all existing nodes**
4. Deploy updated files to all nodes
5. Restart Besu services
### Removing a Node
1. Remove enode from `static-nodes.json` on **all remaining nodes**
2. Remove enode from `permissioned-nodes.json` on **all remaining nodes**
3. Deploy updated files to all nodes
4. Restart Besu services
---
## Security Considerations
1. **File Permissions**: Ensure `static-nodes.json` and `permissioned-nodes.json` are readable by Besu user but not world-writable
2. **Network Security**: Use firewall rules to restrict P2P port (30303) access
3. **Access Control**: Implement reverse proxy / authentication for RPC access (Luis/Putu)
4. **Key Management**: Keep node keys secure, never expose private keys
---
## Related Documentation
- [Besu Allowlist Runbook](../docs/06-besu/BESU_ALLOWLIST_RUNBOOK.md)
- [RPC Node Configuration](../docs/05-network/RPC_2500_CONFIGURATION_SUMMARY.md)
- [Network Architecture](../smom-dbis-138/docs/architecture/NETWORK.md)
---
## Quick Reference
### All Besu Nodes for ChainID 138
- **Validators**: 1000-1004 (5 nodes)
- **Sentries**: 1500-1504 (5 nodes, including new 1504)
- **RPC Nodes**: 2500-2503 (4 nodes, including new 2503)
### Configuration Files Location
- `static-nodes.json`: `/var/lib/besu/static-nodes.json`
- `permissioned-nodes.json`: `/var/lib/besu/permissions/permissioned-nodes.json`
### Discovery Settings
- **Disabled**: 2500 (core), 2503-2508 (RPC nodes reporting chainID 0x1 to MetaMask for wallet compatibility)
- **Enabled**: All other nodes
### Scripts
- Main config: `scripts/configure-besu-chain138-nodes.sh`
- New containers: `scripts/setup-new-chain138-containers.sh`

View File

@@ -0,0 +1,177 @@
# Bridge Testing Guide
**Date**: $(date)
**Purpose**: Complete guide for testing cross-chain bridge transfers
---
## ✅ Verification Complete
All bridge configurations have been verified:
- ✅ WETH9 Bridge: All 6 destinations configured
- ✅ WETH10 Bridge: All 6 destinations configured
- ✅ Fee calculation: Working
- ✅ Bridge contracts: Deployed and operational
---
## 🧪 Testing Options
### Option 1: Automated Verification (Recommended)
Run the verification script to check all configurations:
```bash
cd /home/intlc/projects/proxmox
bash scripts/verify-bridge-configuration.sh
```
This verifies:
- All destination chains are configured
- Fee calculation is working
- Bridge contracts are accessible
- Token balances are readable
---
### Option 2: Manual Transfer Testing
To test actual transfers, use the test script:
```bash
# Test WETH9 transfer to BSC
bash scripts/test-bridge-transfers.sh bsc 0.01 weth9
# Test WETH10 transfer to Polygon
bash scripts/test-bridge-transfers.sh polygon 0.01 weth10
```
**Requirements**:
- Sufficient ETH balance for wrapping
- Sufficient balance for gas fees
- LINK tokens (if using LINK for fees) or native ETH
**Process**:
1. Wraps ETH to WETH9/WETH10
2. Approves bridge to spend tokens
3. Calculates CCIP fee
4. Sends cross-chain transfer
5. Returns transaction hash for monitoring
---
### Option 3: Test All Destinations
To test transfers to all 6 destination chains:
```bash
#!/bin/bash
# Test all destinations
CHAINS=("bsc" "polygon" "avalanche" "base" "arbitrum" "optimism")
AMOUNT="0.01"
for chain in "${CHAINS[@]}"; do
echo "Testing WETH9 transfer to $chain..."
bash scripts/test-bridge-transfers.sh "$chain" "$AMOUNT" weth9
sleep 10 # Wait between transfers
done
for chain in "${CHAINS[@]}"; do
echo "Testing WETH10 transfer to $chain..."
bash scripts/test-bridge-transfers.sh "$chain" "$AMOUNT" weth10
sleep 10 # Wait between transfers
done
```
**Note**: This will cost gas fees for each transfer. Start with one chain to verify functionality.
---
## 📊 Verification Results
### WETH9 Bridge Destinations
| Chain | Selector | Status |
|-------|----------|--------|
| BSC | `11344663589394136015` | ✅ Configured |
| Polygon | `4051577828743386545` | ✅ Configured |
| Avalanche | `6433500567565415381` | ✅ Configured |
| Base | `15971525489660198786` | ✅ Configured |
| Arbitrum | `4949039107694359620` | ✅ Configured |
| Optimism | `3734403246176062136` | ✅ Configured |
### WETH10 Bridge Destinations
| Chain | Selector | Status |
|-------|----------|--------|
| BSC | `11344663589394136015` | ✅ Configured |
| Polygon | `4051577828743386545` | ✅ Configured |
| Avalanche | `6433500567565415381` | ✅ Configured |
| Base | `15971525489660198786` | ✅ Configured |
| Arbitrum | `4949039107694359620` | ✅ Configured |
| Optimism | `3734403246176062136` | ✅ Configured |
---
## 🔍 Monitoring Transfers
After initiating a transfer:
1. **Check Transaction on Source Chain**:
```bash
cast tx <transaction_hash> --rpc-url http://192.168.11.250:8545
```
2. **Check Events**:
```bash
cast logs --address <bridge_address> "CrossChainTransferInitiated" --rpc-url http://192.168.11.250:8545
```
3. **Wait for CCIP Processing**: Typically 1-5 minutes
4. **Check Destination Chain**: Verify receipt on destination chain explorer
---
## ⚠️ Important Notes
1. **Gas Costs**: Each transfer costs gas. Budget accordingly.
2. **Test Amounts**: Start with small amounts (0.01 ETH) for testing.
3. **Processing Time**: CCIP transfers take 1-5 minutes to process.
4. **Fee Requirements**: Ensure sufficient balance for fees (LINK or native ETH).
5. **Destination Verification**: Verify transfers on destination chain explorers.
---
## ✅ Testing Checklist
- [x] Bridge contracts deployed
- [x] All destinations configured
- [x] Fee calculation verified
- [x] Bridge contracts accessible
- [x] Test scripts created
- [ ] Test transfer to BSC (optional)
- [ ] Test transfer to Polygon (optional)
- [ ] Test transfer to Avalanche (optional)
- [ ] Test transfer to Base (optional)
- [ ] Test transfer to Arbitrum (optional)
- [ ] Test transfer to Optimism (optional)
---
## 🎯 Status
**All bridge configurations verified and operational!**
The bridges are ready for production use. Actual transfer testing is optional and can be done when needed.
---
**Last Updated**: $(date)
**Status**: ✅ **VERIFIED AND READY**

View File

@@ -10,6 +10,51 @@
This specification defines the deployment of a **fully enabled CCIP lane** for ChainID 138, including all required components for operational readiness:
## CCIP Fleet Architecture Diagram
```mermaid
graph TB
Internet[Internet]
ER605[ER605 Router]
subgraph CCIPNetwork[CCIP Network]
subgraph CommitDON[Commit DON - VLAN 132]
Commit1[CCIP-COMMIT-01<br/>VMID 5410]
Commit2[CCIP-COMMIT-02<br/>VMID 5411]
Commit16[CCIP-COMMIT-16<br/>VMID 5425]
end
subgraph ExecDON[Execute DON - VLAN 133]
Exec1[CCIP-EXEC-01<br/>VMID 5440]
Exec2[CCIP-EXEC-02<br/>VMID 5441]
Exec16[CCIP-EXEC-16<br/>VMID 5455]
end
subgraph RMN[RMN - VLAN 134]
RMN1[CCIP-RMN-01<br/>VMID 5470]
RMN2[CCIP-RMN-02<br/>VMID 5471]
RMN7[CCIP-RMN-07<br/>VMID 5476]
end
subgraph Ops[Ops/Admin - VLAN 130]
Ops1[CCIP-OPS-01<br/>VMID 5400]
Ops2[CCIP-OPS-02<br/>VMID 5401]
end
end
Internet --> ER605
ER605 --> CommitDON
ER605 --> ExecDON
ER605 --> RMN
ER605 --> Ops
CommitDON -->|NAT Pool Block #2| Internet
ExecDON -->|NAT Pool Block #3| Internet
RMN -->|NAT Pool Block #4| Internet
```
---
1. **Transactional Oracle Nodes** (32 nodes)
- Commit-role nodes (16)
- Execute-role nodes (16)

View File

@@ -0,0 +1,135 @@
# CCIP Security Documentation
**Date**: $(date)
**Network**: ChainID 138
**Purpose**: Security information for all CCIP contracts
---
## 🔐 Contract Access Control
### CCIP Router
- **Address**: `0x8078A09637e47Fa5Ed34F626046Ea2094a5CDE5e`
- **Access Control**: Standard CCIP Router implementation
- **Owner Function**: `owner()` function not available (may use different access control pattern)
- **Admin Functions**: Standard CCIP Router admin functions
- **Pause Mechanism**: Standard CCIP Router pause functionality (if implemented)
**Note**: Contract owner/admin addresses need to be retrieved from deployment transactions or contract storage.
### CCIP Sender
- **Address**: `0x105F8A15b819948a89153505762444Ee9f324684`
- **Access Control**: Standard CCIP Sender implementation
- **Owner Function**: `owner()` function not available
- **Router Reference**: `0x8078A09637e47Fa5Ed34F626046Ea2094a5CDE5e`
**Note**: Access control details need to be retrieved from contract source code or deployment logs.
### CCIPWETH9Bridge
- **Address**: `0x89dd12025bfCD38A168455A44B400e913ED33BE2`
- **Access Control**: Bridge contract access control
- **Owner Function**: `owner()` function not available
- **Admin Functions**: Bridge-specific admin functions
**Destination Chains Configured**:
- ✅ BSC: `0x9d70576d8E253BcF...` (truncated, full address in storage)
- ✅ Polygon: `0x383a1891AE1915b1...` (truncated)
- ✅ Avalanche: `0x594862Ae1802b3D5...` (truncated)
- ✅ Base: `0xdda641cFe44aff82...` (truncated)
- ✅ Arbitrum: `0x44aE84D8E9a37444...` (truncated)
- ✅ Optimism: `0x33d343F77863CAB8...` (truncated)
### CCIPWETH10Bridge
- **Address**: `0xe0E93247376aa097dB308B92e6Ba36bA015535D0`
- **Access Control**: Bridge contract access control
- **Owner Function**: `owner()` function not available
- **Admin Functions**: Bridge-specific admin functions
**Destination Chains Configured**:
- ✅ BSC: `0x9d70576d8E253BcF...` (truncated, full address in storage)
- ✅ Polygon: `0x383a1891AE1915b1...` (truncated)
- ✅ Avalanche: `0x594862Ae1802b3D5...` (truncated)
- ✅ Base: `0xdda641cFe44aff82...` (truncated)
- ✅ Arbitrum: `0x44aE84D8E9a37444...` (truncated)
- ✅ Optimism: `0x33d343F77863CAB8...` (truncated)
---
## 🔍 How to Retrieve Admin/Owner Addresses
### Method 1: From Deployment Transaction
```bash
# Get deployment transaction hash
cast tx <DEPLOYMENT_TX_HASH> --rpc-url http://192.168.11.250:8545
# Extract deployer address from transaction
cast tx <DEPLOYMENT_TX_HASH> --rpc-url http://192.168.11.250:8545 | grep "from"
```
### Method 2: From Contract Storage
```bash
# Try common storage slots for owner addresses
cast storage <CONTRACT_ADDRESS> 0 --rpc-url http://192.168.11.250:8545
cast storage <CONTRACT_ADDRESS> 1 --rpc-url http://192.168.11.250:8545
```
### Method 3: From Source Code
If contracts are verified on Blockscout, check the source code for:
- `Ownable` pattern (OpenZeppelin)
- Custom access control implementations
- Multi-sig patterns
---
## 🛡️ Security Recommendations
### 1. Access Control Verification
- ✅ Verify all admin/owner addresses
- ✅ Document multi-sig requirements (if any)
- ✅ Review access control mechanisms
- ⚠️ **Action Required**: Retrieve and document actual owner addresses
### 2. Upgrade Mechanisms
- ⚠️ Verify if contracts are upgradeable
- ⚠️ Document upgrade procedures
- ⚠️ Review upgrade authorization requirements
### 3. Pause Mechanisms
- ⚠️ Verify pause functionality (if implemented)
- ⚠️ Document pause procedures
- ⚠️ Review pause authorization requirements
### 4. Emergency Procedures
- ⚠️ Document emergency response procedures
- ⚠️ Review circuit breakers (if implemented)
- ⚠️ Document recovery procedures
---
## 📋 Security Checklist
- [ ] Admin/owner addresses documented
- [ ] Access control mechanisms reviewed
- [ ] Upgrade procedures documented
- [ ] Pause mechanisms documented
- [ ] Emergency procedures documented
- [ ] Multi-sig requirements documented (if applicable)
- [ ] Key rotation procedures documented
- [ ] Incident response plan documented
---
## 🔗 Related Documentation
- [CCIP Comprehensive Diagnostic Report](./CCIP_COMPREHENSIVE_DIAGNOSTIC_REPORT.md)
- [CCIP Sender Contract Reference](./CCIP_SENDER_CONTRACT_REFERENCE.md)
- [Cross-Chain Bridge Addresses](./CROSS_CHAIN_BRIDGE_ADDRESSES.md)
---
**Last Updated**: $(date)
**Status**: ⚠️ **INCOMPLETE** - Owner addresses need to be retrieved

View File

@@ -0,0 +1,287 @@
# CCIP Sender Contract Reference
**Contract Address**: `0x105F8A15b819948a89153505762444Ee9f324684`
**Network**: ChainID 138
**RPC Endpoint**: `http://192.168.11.250:8545` or `https://rpc-core.d-bis.org`
**Block Explorer**: `https://explorer.d-bis.org` (Blockscout)
**Contract Type**: CCIP Sender (Cross-Chain Interoperability Protocol)
---
## 📋 Contract Overview
The CCIP Sender contract is part of the Chainlink CCIP (Cross-Chain Interoperability Protocol) infrastructure deployed on Chain 138. It handles the initiation and submission of cross-chain messages.
### Purpose
- Initiates CCIP messages for cross-chain communication
- Handles message preparation and submission to the CCIP Router
- Manages cross-chain message flow from Chain 138 to destination chains
### ⚠️ Important: Dual Role Across Chains
**On Chain 138 (Source Chain)**:
- **Role**: CCIP Sender contract
- **Function**: Initiates cross-chain transfers FROM Chain 138
**On Destination Chains** (BSC, Avalanche, Base, Arbitrum, Optimism):
- **Role**: CCIPWETH10Bridge contract
- **Function**: Receives and processes WETH10 tokens FROM Chain 138
- **Address**: Same address (`0x105f8a15b819948a89153505762444ee9f324684`)
This is why this address appears in CCIP transfers - it's the **destination bridge contract** that receives tokens when bridging WETH10 from Chain 138 to other chains.
---
## 🔗 Related Contracts
### CCIP Router
- **Address**: `0x8078A09637e47Fa5Ed34F626046Ea2094a5CDE5e`
- **Relationship**: The CCIP Sender interacts with the CCIP Router to send messages
- **Fee Token**: `0x514910771AF9Ca656af840dff83E8264EcF986CA` (LINK)
- **Base Fee**: 1000000000000000 wei
- **Data Fee Per Byte**: 100000000 wei
### Bridge Contracts
- **CCIPWETH9Bridge**: `0x89dd12025bfCD38A168455A44B400e913ED33BE2`
- **CCIPWETH10Bridge**: `0xe0E93247376aa097dB308B92e6Ba36bA015535D0`
---
## 📊 Contract Status
| Property | Value |
|----------|-------|
| **Status** | ✅ Deployed |
| **Chain ID** | 138 |
| **Deployment Block** | (Check Blockscout) |
| **Verified** | ⏳ Pending verification on Blockscout |
| **Bytecode** | Available (confirmed via RPC) |
### ⚠️ Important: Ethereum Mainnet Address is NOT Functional
**On Ethereum Mainnet**: The address `0x105F8A15b819948a89153505762444Ee9f324684` has **empty bytecode** (`0x`), meaning:
-**No contract exists** at this address on mainnet
-**Not functional** - cannot be used for any operations
-**Not relevant** for this project - ignore mainnet address entirely
**On Chain 138**: The same address has **deployed contract bytecode** (~5KB), meaning:
- ✅ The CCIP Sender contract is actively deployed and operational
- ✅ This is the **only relevant address** for this project
- ✅ Use this address for all Chain 138 operations
**Why mention mainnet?**
- The address appears on Etherscan because addresses can exist across all chains
- **However, it has no functionality on mainnet** - it's just an empty address
- **Focus on Chain 138 only** - that's where the contract is actually deployed and used
---
## 🔧 Configuration
### For CCIP Monitor Service (VMID 3501)
The CCIP Sender contract is used by the CCIP Monitor service. Configuration in `/opt/ccip-monitor/.env`:
```bash
CCIP_ROUTER_ADDRESS=0x8078A09637e47Fa5Ed34F626046Ea2094a5CDE5e
CCIP_SENDER_ADDRESS=0x105F8A15b819948a89153505762444Ee9f324684
RPC_URL=http://192.168.11.250:8545
CHAIN_ID=138
LINK_TOKEN_ADDRESS=0x514910771AF9Ca656af840dff83E8264EcF986CA
METRICS_PORT=8000
CHECK_INTERVAL=60
```
---
## 🔍 Contract Verification
### Verify on Blockscout
To verify this contract on Blockscout (the explorer for Chain 138):
```bash
cd /home/intlc/projects/smom-dbis-138
# Verify using Foundry
forge verify-contract \
0x105F8A15b819948a89153505762444Ee9f324684 \
src/CCIPSender.sol:CCIPSender \
--chain-id 138 \
--verifier blockscout \
--verifier-url https://explorer.d-bis.org/api \
--rpc-url http://192.168.11.250:8545
```
### Contract Source Location
The source code should be in:
- `/home/intlc/projects/smom-dbis-138/src/CCIPSender.sol`
- Deployment script: `/home/intlc/projects/smom-dbis-138/script/DeployCCIPSender.s.sol`
---
## 📡 Querying the Contract
### Using Cast (Foundry)
```bash
# Get contract bytecode
cast code 0x105F8A15b819948a89153505762444Ee9f324684 \
--rpc-url http://192.168.11.250:8545
# Get contract storage (slot 0)
cast storage 0x105F8A15b819948a89153505762444Ee9f324684 0 \
--rpc-url http://192.168.11.250:8545
# Call a function (example - adjust based on actual ABI)
cast call 0x105F8A15b819948a89153505762444Ee9f324684 \
"router()(address)" \
--rpc-url http://192.168.11.250:8545
```
### Using Web3/ethers.js
```javascript
const { ethers } = require("ethers");
const provider = new ethers.providers.JsonRpcProvider("http://192.168.11.250:8545");
const contractAddress = "0x105F8A15b819948a89153505762444Ee9f324684";
// Example ABI (adjust based on actual contract)
const abi = [
"function router() view returns (address)",
"function sendMessage(uint64 destinationChainSelector, bytes data) payable returns (bytes32)"
];
const contract = new ethers.Contract(contractAddress, abi, provider);
// Call contract functions
const router = await contract.router();
console.log("CCIP Router:", router);
```
---
## 🌐 Cross-Chain Integration
### Supported Destination Chains
The CCIP Sender can send messages to the following chains:
| Chain | Chain ID | Chain Selector | Status |
|-------|----------|----------------|--------|
| **BSC** | 56 | 11344663589394136015 | ✅ Configured |
| **Polygon** | 137 | 4051577828743386545 | ✅ Configured |
| **Avalanche** | 43114 | 6433500567565415381 | ✅ Configured |
| **Base** | 8453 | 15971525489660198786 | ✅ Configured |
| **Arbitrum** | 42161 | (Check deployment) | ⏳ Pending |
| **Optimism** | 10 | (Check deployment) | ⏳ Pending |
### Sending Cross-Chain Messages
```solidity
// Example: Send a message to BSC
uint64 destinationChainSelector = 11344663589394136015; // BSC
bytes memory data = abi.encode(/* your data */);
// Approve LINK tokens for fees (if using LINK)
IERC20 linkToken = IERC20(0x514910771AF9Ca656af840dff83E8264EcF986CA);
linkToken.approve(routerAddress, feeAmount);
// Send message
bytes32 messageId = ccipSender.sendMessage(
destinationChainSelector,
data
);
```
---
## 📝 Events
The CCIP Sender contract emits events for monitoring. Key events include:
### MessageSent Event
```solidity
event MessageSent(
bytes32 indexed messageId,
uint64 indexed sourceChainSelector,
address sender,
bytes data,
address[] tokenAmounts,
address feeToken,
bytes extraArgs
);
```
### Monitoring with CCIP Monitor Service
The CCIP Monitor service (VMID 3501) listens to these events and tracks:
- Message latency
- Message fees
- Success/failure rates
- Cross-chain message flow
---
## 🔐 Security Considerations
1. **Access Control**: Only authorized addresses can send messages
2. **Fee Management**: Ensure sufficient LINK tokens for fees
3. **Destination Validation**: Verify destination chain selectors are correct
4. **Message Validation**: Validate message data before sending
---
## 📚 Related Documentation
- [Contract Addresses Reference](./CONTRACT_ADDRESSES_REFERENCE.md)
- [Final Contract Addresses](./FINAL_CONTRACT_ADDRESSES.md)
- [Cross-Chain Bridge Addresses](./CROSS_CHAIN_BRIDGE_ADDRESSES.md)
- [Deployed Contracts Final](./DEPLOYED_CONTRACTS_FINAL.md)
- [Complete Connections, Contracts, and Containers](./COMPLETE_CONNECTIONS_CONTRACTS_CONTAINERS.md)
---
## 🔗 External Links
- **Blockscout (Chain 138)**: `https://explorer.d-bis.org/address/0x105F8A15b819948a89153505762444Ee9f324684`**Use this**
- **Chainlink CCIP Documentation**: https://docs.chain.link/ccip
- **Source Project**: `/home/intlc/projects/smom-dbis-138`
### ⚠️ Network-Specific Usage
**This contract is ONLY functional on Chain 138:**
- **Chain 138**: `0x105F8A15b819948a89153505762444Ee9f324684`**Deployed and operational**
- **Ethereum Mainnet**: `0x105F8A15b819948a89153505762444Ee9f324684`**Not functional - ignore**
**Note**: While the address exists on mainnet (with empty bytecode), it has no functionality there and is not relevant for this project. Only use this address on Chain 138.
---
## 📋 Quick Reference
```bash
# Contract Address
CCIP_SENDER=0x105F8A15b819948a89153505762444Ee9f324684
# Related Contracts
CCIP_ROUTER=0x8078A09637e47Fa5Ed34F626046Ea2094a5CDE5e
LINK_TOKEN=0x514910771AF9Ca656af840dff83E8264EcF986CA
# RPC Endpoint
RPC_URL=http://192.168.11.250:8545
# or
RPC_URL=https://rpc-core.d-bis.org
# Block Explorer
EXPLORER_URL=https://explorer.d-bis.org/address/0x105F8A15b819948a89153505762444Ee9f324684
```
---
**Last Updated**: $(date)
**Status**: ✅ Contract deployed and operational on Chain 138

View File

@@ -0,0 +1,261 @@
# Blockscout Configuration Guide - Complete Setup
**Container**: VMID 5000 (192.168.11.140)
**Chain ID**: 138
**Status**: Ready for configuration
---
## Quick Start
Since you're already SSH'd into the container, run these commands:
### 1. Install/Copy Configuration Script
```bash
# If you have the script file, copy it:
# Or run commands directly below
```
### 2. Configure and Start Blockscout
Run the configuration commands (see below) or use the script.
---
## Complete Configuration Steps
### Step 1: Check Current Status
```bash
# Check Docker
docker --version
docker-compose --version || docker compose version
# Check existing containers
docker ps -a
# Check if Blockscout directory exists
ls -la /opt/blockscout
ls -la /root/blockscout
```
### Step 2: Create/Update docker-compose.yml
```bash
# Navigate to Blockscout directory
cd /opt/blockscout # or /root/blockscout if that's where it is
# Create docker-compose.yml with correct settings
cat > docker-compose.yml <<'EOF'
version: '3.8'
services:
postgres:
image: postgres:15-alpine
container_name: blockscout-postgres
environment:
POSTGRES_USER: blockscout
POSTGRES_PASSWORD: blockscout
POSTGRES_DB: blockscout
volumes:
- postgres-data:/var/lib/postgresql/data
restart: unless-stopped
networks:
- blockscout-network
healthcheck:
test: ["CMD-SHELL", "pg_isready -U blockscout"]
interval: 10s
timeout: 5s
retries: 5
blockscout:
image: blockscout/blockscout:latest
container_name: blockscout
depends_on:
postgres:
condition: service_healthy
environment:
- DATABASE_URL=postgresql://blockscout:blockscout@postgres:5432/blockscout
- ETHEREUM_JSONRPC_HTTP_URL=http://192.168.11.250:8545
- ETHEREUM_JSONRPC_WS_URL=ws://192.168.11.250:8546
- ETHEREUM_JSONRPC_TRACE_URL=http://192.168.11.250:8545
- ETHEREUM_JSONRPC_VARIANT=besu
- CHAIN_ID=138
- COIN=ETH
- BLOCKSCOUT_HOST=192.168.11.140
- BLOCKSCOUT_PROTOCOL=http
- SECRET_KEY_BASE=$(openssl rand -hex 64)
- POOL_SIZE=10
- ECTO_USE_SSL=false
ports:
- "4000:4000"
volumes:
- blockscout-data:/app/apps/explorer/priv/static
restart: unless-stopped
networks:
- blockscout-network
volumes:
postgres-data:
blockscout-data:
networks:
blockscout-network:
driver: bridge
EOF
# Generate secret key and update
SECRET_KEY=$(openssl rand -hex 64)
sed -i "s|SECRET_KEY_BASE=\$(openssl rand -hex 64)|SECRET_KEY_BASE=${SECRET_KEY}|" docker-compose.yml
```
### Step 3: Start Services
```bash
# Stop existing containers
docker-compose down 2>/dev/null || docker compose down 2>/dev/null || true
# Start PostgreSQL first
docker-compose up -d postgres || docker compose up -d postgres
# Wait for PostgreSQL to be ready
echo "Waiting for PostgreSQL..."
for i in {1..30}; do
if docker exec blockscout-postgres pg_isready -U blockscout >/dev/null 2>&1; then
echo "PostgreSQL ready!"
break
fi
sleep 2
done
# Start Blockscout
docker-compose up -d blockscout || docker compose up -d blockscout
```
### Step 4: Configure Nginx
```bash
# Install Nginx if not installed
apt-get update
apt-get install -y nginx
# Create Nginx config
cat > /etc/nginx/sites-available/blockscout <<'EOF'
server {
listen 80;
listen [::]:80;
server_name 192.168.11.140 explorer.d-bis.org;
client_max_body_size 100M;
location / {
proxy_pass http://localhost:4000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 300s;
proxy_connect_timeout 75s;
}
location /api {
proxy_pass http://localhost:4000/api;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300s;
}
location /health {
proxy_pass http://localhost:4000/api/health;
proxy_http_version 1.1;
proxy_set_header Host $host;
access_log off;
}
}
EOF
# Enable site
ln -sf /etc/nginx/sites-available/blockscout /etc/nginx/sites-enabled/blockscout
rm -f /etc/nginx/sites-enabled/default
# Test and reload
nginx -t && systemctl reload nginx
systemctl enable nginx
systemctl start nginx
```
### Step 5: Verify
```bash
# Check containers
docker ps
# Check logs
docker logs blockscout
docker logs blockscout-postgres
# Test endpoints
curl http://localhost:4000/api/health
curl http://localhost/
curl http://192.168.11.140/
```
---
## Configuration Settings Reference
### Environment Variables
| Variable | Value | Description |
|----------|-------|-------------|
| `CHAIN_ID` | 138 | Chain ID for d-bis network |
| `RPC_URL` | http://192.168.11.250:8545 | HTTP RPC endpoint |
| `WS_URL` | ws://192.168.11.250:8546 | WebSocket RPC endpoint |
| `BLOCKSCOUT_HOST` | 192.168.11.140 | Host IP address |
| `DATABASE_URL` | postgresql://blockscout:blockscout@postgres:5432/blockscout | PostgreSQL connection |
| `ETHEREUM_JSONRPC_VARIANT` | besu | RPC variant (Besu) |
---
## Troubleshooting
### Check Container Logs
```bash
# Blockscout logs
docker logs -f blockscout
# PostgreSQL logs
docker logs blockscout-postgres
# All containers
docker-compose logs -f
```
### Restart Services
```bash
cd /opt/blockscout # or wherever docker-compose.yml is
docker-compose restart
```
### Check RPC Connectivity
```bash
curl -X POST http://192.168.11.250:8545 \
-H 'Content-Type: application/json' \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```
---
**Last Updated**: $(date)

View File

@@ -0,0 +1,205 @@
# Blockscout Start Instructions
**Date**: $(date)
**Blockscout Location**: VMID 5000 on pve2
**Purpose**: Start Blockscout service to enable contract verification
---
## 🚀 Quick Start
### Option 1: Use Start Script (If on Proxmox host)
```bash
cd /home/intlc/projects/proxmox
./scripts/start-blockscout.sh
```
This script will:
1. Check container status
2. Start container if stopped
3. Start Blockscout service
4. Verify API accessibility
### Option 2: Manual Start (On pve2)
SSH to pve2 and run:
```bash
# Check container status
pct status 5000
# Start container if stopped
pct start 5000
# Start Blockscout service
pct exec 5000 -- systemctl start blockscout
# Enable auto-start on boot
pct exec 5000 -- systemctl enable blockscout
# Check service status
pct exec 5000 -- systemctl status blockscout
```
---
## 🔍 Verification Steps
### 1. Check Container Status
```bash
pct status 5000
```
**Expected**: `status: running`
### 2. Check Service Status
```bash
pct exec 5000 -- systemctl status blockscout
```
**Expected**: `Active: active (running)`
### 3. Check Docker Containers
```bash
pct exec 5000 -- docker ps | grep blockscout
```
**Expected**: Two containers running:
- `blockscout` (main application)
- `blockscout-postgres` (database)
### 4. Test API Accessibility
```bash
curl -s https://explorer.d-bis.org/api | head -20
```
**Expected**: JSON response (not error 502)
### 5. Test Web UI
Open in browser: https://explorer.d-bis.org
**Expected**: Blockscout explorer interface loads
---
## 📋 Troubleshooting
### Container Not Running
**Symptom**: `pct status 5000` shows container is stopped
**Solution**:
```bash
pct start 5000
```
### Service Fails to Start
**Symptom**: `systemctl status blockscout` shows failed
**Solution**:
```bash
# Check service logs
pct exec 5000 -- journalctl -u blockscout -n 50
# Check Docker logs
pct exec 5000 -- docker logs blockscout
pct exec 5000 -- docker logs blockscout-postgres
# Restart service
pct exec 5000 -- systemctl restart blockscout
```
### API Returns 502
**Symptom**: API returns "502 Bad Gateway"
**Possible Causes**:
1. Service is still starting (wait 1-2 minutes)
2. Docker containers not running
3. Database connection issue
4. Port conflict
**Solution**:
```bash
# Check if containers are running
pct exec 5000 -- docker ps
# Check service logs
pct exec 5000 -- journalctl -u blockscout -n 100
# Restart service
pct exec 5000 -- systemctl restart blockscout
# Wait a few minutes and retry
sleep 120
curl https://explorer.d-bis.org/api
```
### Docker Containers Not Starting
**Symptom**: `docker ps` shows no blockscout containers
**Solution**:
```bash
# Check Docker service
pct exec 5000 -- systemctl status docker
# Start Docker if needed
pct exec 5000 -- systemctl start docker
# Manually start containers
pct exec 5000 -- cd /opt/blockscout && docker-compose up -d
# Check logs
pct exec 5000 -- docker-compose logs
```
---
## ✅ After Blockscout is Running
Once Blockscout is accessible, you can:
### 1. Retry Contract Verification
```bash
cd /home/intlc/projects/proxmox
./scripts/retry-contract-verification.sh
```
Or manually:
```bash
./scripts/verify-all-contracts.sh 0.8.20
```
### 2. Verify Individual Contracts
Navigate to contract on Blockscout:
- Oracle Proxy: https://explorer.d-bis.org/address/0x3304b747e565a97ec8ac220b0b6a1f6ffdb837e6
- CCIP Router: https://explorer.d-bis.org/address/0x8078A09637e47Fa5Ed34F626046Ea2094a5CDE5e
### 3. Check Verification Status
```bash
./scripts/check-contract-verification-status.sh
```
---
## 🔗 Related Documentation
- **Blockscout Status Guide**: `docs/BLOCKSCOUT_STATUS_AND_VERIFICATION.md`
- **Verification Guide**: `docs/BLOCKSCOUT_VERIFICATION_GUIDE.md`
- **Final Validation Report**: `docs/FINAL_VALIDATION_REPORT.md`
---
**Last Updated**: $(date)

View File

@@ -0,0 +1,235 @@
# Blockscout Contract Verification Guide - ChainID 138
**Date**: $(date)
**Purpose**: Guide for verifying smart contracts on ChainID 138 using Blockscout
**Block Explorer**: `https://explorer.d-bis.org`
---
## Overview
ChainID 138 uses **Blockscout** (self-hosted) as its block explorer. This guide covers how to verify smart contracts deployed on ChainID 138 using Foundry's verification tools.
---
## Prerequisites
1. **Foundry** installed and configured
2. **Deployed contracts** on ChainID 138
3. **Access to contract source code** and constructor arguments
4. **Blockscout instance** accessible at `https://explorer.d-bis.org`
---
## Blockscout Configuration
### Block Explorer Information
- **URL**: `https://explorer.d-bis.org`
- **API Endpoint**: `https://explorer.d-bis.org/api`
- **Type**: Self-hosted Blockscout
- **Chain ID**: 138
- **API Key**: Not required (self-hosted instance)
---
## Verification Methods
### Method 1: Using Foundry Script with Verification
When deploying contracts with Foundry, add Blockscout verification flags:
```bash
forge script script/YourDeploymentScript.s.sol:YourScript \
--rpc-url https://rpc-core.d-bis.org \
--private-key $PRIVATE_KEY \
--broadcast \
--verify \
--verifier blockscout \
--verifier-url https://explorer.d-bis.org/api \
-vvvv
```
### Method 2: Manual Verification with `forge verify-contract`
After deployment, verify contracts manually:
```bash
forge verify-contract \
<CONTRACT_ADDRESS> \
<CONTRACT_NAME> \
--chain-id 138 \
--rpc-url https://rpc-core.d-bis.org \
--verifier blockscout \
--verifier-url https://explorer.d-bis.org/api \
--constructor-args $(cast abi-encode "constructor(<ARGS>)" <ARG1> <ARG2> ...) \
--compiler-version <VERSION>
```
### Method 3: Using Foundry.toml Configuration
Add Blockscout configuration to `foundry.toml`:
```toml
[etherscan]
chain138 = {
url = "https://explorer.d-bis.org/api",
verifier = "blockscout"
}
```
Then use:
```bash
forge verify-contract \
<CONTRACT_ADDRESS> \
<CONTRACT_NAME> \
--chain chain138 \
--rpc-url https://rpc-core.d-bis.org
```
---
## Verification Examples
### Example 1: Simple Contract (No Constructor Arguments)
```bash
forge verify-contract \
0x1234567890123456789012345678901234567890 \
SimpleContract \
--chain-id 138 \
--rpc-url https://rpc-core.d-bis.org \
--verifier blockscout \
--verifier-url https://explorer.d-bis.org/api \
--compiler-version 0.8.20
```
### Example 2: Contract with Constructor Arguments
```bash
# First, encode constructor arguments
CONSTRUCTOR_ARGS=$(cast abi-encode "constructor(address,uint256)" \
0x1111111111111111111111111111111111111111 \
1000000000000000000)
# Then verify
forge verify-contract \
0x1234567890123456789012345678901234567890 \
ComplexContract \
--chain-id 138 \
--rpc-url https://rpc-core.d-bis.org \
--verifier blockscout \
--verifier-url https://explorer.d-bis.org/api \
--constructor-args "$CONSTRUCTOR_ARGS" \
--compiler-version 0.8.20
```
### Example 3: Verify with Libraries
If your contract uses libraries, specify them:
```bash
forge verify-contract \
<CONTRACT_ADDRESS> \
<CONTRACT_NAME> \
--chain-id 138 \
--rpc-url https://rpc-core.d-bis.org \
--verifier blockscout \
--verifier-url https://explorer.d-bis.org/api \
--libraries <LIBRARY_NAME>:<LIBRARY_ADDRESS> \
--compiler-version <VERSION>
```
---
## Troubleshooting
### Issue: Verification Fails with "Contract Not Found"
**Solution**:
- Ensure the contract is deployed and confirmed on ChainID 138
- Verify the contract address is correct
- Check that the RPC endpoint is accessible
### Issue: "Invalid Source Code"
**Solution**:
- Ensure compiler version matches the deployment compiler version
- Verify all source files are accessible
- Check that constructor arguments are correctly encoded
### Issue: "Already Verified"
**Solution**:
- The contract is already verified on Blockscout
- Check the contract on the explorer: `https://explorer.d-bis.org/address/<CONTRACT_ADDRESS>`
### Issue: Blockscout API Timeout
**Solution**:
- Check if Blockscout instance is running and accessible
- Verify network connectivity to `https://explorer.d-bis.org`
- Try again after a few moments (Blockscout may be indexing)
---
## Manual Verification via Blockscout UI
If automated verification fails, you can verify contracts manually through the Blockscout web interface:
1. Navigate to the contract address: `https://explorer.d-bis.org/address/<CONTRACT_ADDRESS>`
2. Click on **"Verify & Publish"** tab
3. Select verification method:
- **Via Standard JSON Input** (recommended)
- **Via Sourcify**
- **Via Multi-file**
4. Upload contract source code and metadata
5. Provide constructor arguments (if any)
6. Submit for verification
---
## Verification Best Practices
1. **Verify Immediately After Deployment**: Verify contracts right after deployment while deployment details are fresh
2. **Use Standard JSON Input**: Most reliable method for complex contracts
3. **Document Constructor Arguments**: Keep a record of constructor arguments used during deployment
4. **Test Verification Locally**: Test your verification command before deploying to production
5. **Keep Source Code Organized**: Maintain clean source code structure for easier verification
---
## Related Documentation
- **Block Explorer**: `https://explorer.d-bis.org`
- **RPC Endpoint**: `https://rpc-core.d-bis.org`
- **API Keys Documentation**: See `docs/CROSS_CHAIN_BRIDGE_ADDRESSES.md`
- **Contract Deployment Guide**: See `docs/CONTRACT_DEPLOYMENT_GUIDE.md`
---
## Quick Reference
### Blockscout API Endpoints
- **API Base URL**: `https://explorer.d-bis.org/api`
- **Contract Verification**: `POST /api/v2/smart-contracts/<ADDRESS>/verification`
- **Contract Info**: `GET /api/v2/smart-contracts/<ADDRESS>`
### Common Verification Flags
```bash
--verifier blockscout # Use Blockscout verifier
--verifier-url https://explorer.d-bis.org/api # Blockscout API URL
--chain-id 138 # Chain ID 138
--compiler-version 0.8.20 # Solidity compiler version
--constructor-args <ENCODED_ARGS> # Encoded constructor arguments
--libraries <LIB>:<ADDR> # Library addresses
```
---
**Last Updated**: $(date)
**Status**: ✅ Ready for use with ChainID 138

View File

@@ -0,0 +1,165 @@
# Fix Tunnel - Alternative Methods
## Problem
The `fix-shared-tunnel.sh` script cannot connect because your machine is on `192.168.1.0/24` and cannot directly reach `192.168.11.0/24`.
## Solution Methods
### Method 1: Use SSH Tunnel ⭐ Recommended
```bash
# Terminal 1: Start SSH tunnel
./setup_ssh_tunnel.sh
# Terminal 2: Run fix with localhost
PROXMOX_HOST=localhost ./fix-shared-tunnel.sh
```
### Method 2: Manual File Deployment
The script automatically generates configuration files when connection fails:
**Location**: `/tmp/tunnel-fix-10ab22da-8ea3-4e2e-a896-27ece2211a05/`
**Files**:
- `tunnel-services.yml` - Tunnel configuration
- `cloudflared-services.service` - Systemd service
- `DEPLOY_INSTRUCTIONS.md` - Deployment guide
**Deploy from Proxmox host**:
```bash
# Copy files to Proxmox host
scp -r /tmp/tunnel-fix-* root@192.168.11.12:/tmp/
# SSH to Proxmox host
ssh root@192.168.11.12
# Deploy to container
pct push 102 /tmp/tunnel-fix-*/tunnel-services.yml /etc/cloudflared/tunnel-services.yml
pct push 102 /tmp/tunnel-fix-*/cloudflared-services.service /etc/systemd/system/cloudflared-services.service
pct exec 102 -- chmod 600 /etc/cloudflared/tunnel-services.yml
pct exec 102 -- systemctl daemon-reload
pct exec 102 -- systemctl enable cloudflared-services.service
pct exec 102 -- systemctl start cloudflared-services.service
```
### Method 3: Cloudflare Dashboard ⭐ Easiest
1. Go to: https://one.dash.cloudflare.com/
2. Navigate to: **Zero Trust****Networks****Tunnels**
3. Find tunnel: `10ab22da-8ea3-4e2e-a896-27ece2211a05`
4. Click **Configure**
5. Add all hostnames:
| Hostname | Service | URL |
|----------|---------|-----|
| dbis-admin.d-bis.org | HTTP | 192.168.11.21:80 |
| dbis-api.d-bis.org | HTTP | 192.168.11.21:80 |
| dbis-api-2.d-bis.org | HTTP | 192.168.11.21:80 |
| mim4u.org.d-bis.org | HTTP | 192.168.11.21:80 |
| www.mim4u.org.d-bis.org | HTTP | 192.168.11.21:80 |
| rpc-http-prv.d-bis.org | HTTP | 192.168.11.21:80 |
| rpc-http-pub.d-bis.org | HTTP | 192.168.11.21:80 |
| rpc-ws-prv.d-bis.org | HTTP | 192.168.11.21:80 |
| rpc-ws-pub.d-bis.org | HTTP | 192.168.11.21:80 |
6. Add catch-all rule: **HTTP 404: Not Found** (must be last)
7. Save configuration
8. Wait 1-2 minutes for tunnel to reload
### Method 4: Run from Proxmox Network
If you have access to a machine on `192.168.11.0/24`:
```bash
# Copy script to that machine
scp fix-shared-tunnel.sh user@192.168.11.x:/tmp/
# SSH to that machine and run
ssh user@192.168.11.x
cd /tmp
chmod +x fix-shared-tunnel.sh
./fix-shared-tunnel.sh
```
### Method 5: Direct Container Access
If you can access the container directly:
```bash
# Create config file inside container
pct exec 102 -- bash << 'EOF'
cat > /etc/cloudflared/tunnel-services.yml << 'CONFIG'
tunnel: 10ab22da-8ea3-4e2e-a896-27ece2211a05
credentials-file: /etc/cloudflared/credentials-services.json
ingress:
- hostname: dbis-admin.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: dbis-admin.d-bis.org
- hostname: dbis-api.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: dbis-api.d-bis.org
- hostname: dbis-api-2.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: dbis-api-2.d-bis.org
- hostname: mim4u.org.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: mim4u.org.d-bis.org
- hostname: www.mim4u.org.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: www.mim4u.org.d-bis.org
- hostname: rpc-http-prv.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: rpc-http-prv.d-bis.org
- hostname: rpc-http-pub.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: rpc-http-pub.d-bis.org
- hostname: rpc-ws-prv.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: rpc-ws-prv.d-bis.org
- hostname: rpc-ws-pub.d-bis.org
service: http://192.168.11.21:80
originRequest:
httpHostHeader: rpc-ws-pub.d-bis.org
- service: http_status:404
metrics: 127.0.0.1:9090
loglevel: info
gracePeriod: 30s
CONFIG
chmod 600 /etc/cloudflared/tunnel-services.yml
EOF
```
## Verification
After applying any method:
```bash
# Check tunnel status in Cloudflare Dashboard
# Should change from DOWN to HEALTHY
# Test endpoints
curl -I https://dbis-admin.d-bis.org
curl -I https://rpc-http-pub.d-bis.org
curl -I https://dbis-api.d-bis.org
```
## Recommended Approach
**For Quick Fix**: Use **Method 3 (Cloudflare Dashboard)** - No SSH needed, immediate effect
**For Automation**: Use **Method 1 (SSH Tunnel)** - Scriptable, repeatable
**For Production**: Use **Method 2 (Manual Deployment)** - Most control, can review files first

View File

@@ -0,0 +1,460 @@
# MetaMask Troubleshooting Guide - ChainID 138
**Date**: $(date)
**Network**: SMOM-DBIS-138 (ChainID 138)
---
## 🔍 Common Issues & Solutions
### 1. Network Connection Issues
#### Issue: "Could not fetch chain ID. Is your RPC URL correct?"
**Symptoms**:
- MetaMask shows error: "Could not fetch chain ID. Is your RPC URL correct?"
- Network won't connect
- Can't fetch balance
**Root Cause**: The RPC endpoint is requiring JWT authentication, which MetaMask doesn't support.
**Solutions**:
1. **Remove and Re-add Network with Correct RPC URL**
- MetaMask → Settings → Networks
- Find "Defi Oracle Meta Mainnet" or "SMOM-DBIS-138"
- Click "Delete" or "Remove"
- Click "Add Network" → "Add a network manually"
- Enter these exact values:
- **Network Name**: `Defi Oracle Meta Mainnet`
- **RPC URL**: `https://rpc-http-pub.d-bis.org`
- **Chain ID**: `138` (must be decimal, not hex)
- **Currency Symbol**: `ETH`
- **Block Explorer URL**: `https://explorer.d-bis.org` (optional)
- Click "Save"
2. **If RPC URL Still Requires Authentication (Server Issue)**
- The public RPC endpoint should NOT require JWT authentication
- Contact network administrators to fix server configuration
- VMID 2502 should serve `rpc-http-pub.d-bis.org` WITHOUT authentication
- Check Nginx configuration on VMID 2502
3. **Verify RPC Endpoint is Working**
```bash
# Test if endpoint responds (should return chain ID 0x8a = 138)
curl -X POST https://rpc-http-pub.d-bis.org \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```
- **Expected**: `{"jsonrpc":"2.0","id":1,"result":"0x8a"}`
- **If you get JWT error**: Server needs to be reconfigured
#### Issue: "Network Error" or "Failed to Connect"
**Symptoms**:
- MetaMask shows "Network Error"
- Can't fetch balance
- Transactions fail immediately
**Solutions**:
1. **Verify RPC URL**
```
Correct: https://rpc-http-pub.d-bis.org
Incorrect: http://rpc-http-pub.d-bis.org (missing 's')
Incorrect: https://rpc-core.d-bis.org (deprecated/internal)
```
2. **Check Chain ID**
- Must be exactly `138` (decimal)
- Not `0x8a` (that's hex, but MetaMask expects decimal in manual entry)
- Verify in network settings
3. **Remove and Re-add Network**
- Settings → Networks → Remove the network
- Add network again with correct settings
- See [Quick Start Guide](./METAMASK_QUICK_START_GUIDE.md)
4. **Clear MetaMask Cache**
- Settings → Advanced → Reset Account (if needed)
- Or clear browser cache and reload MetaMask
5. **Check RPC Endpoint Status**
```bash
curl -X POST https://rpc-http-pub.d-bis.org \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```
---
### 2. Token Display Issues
#### Issue: "6,000,000,000.0T WETH" Instead of "6 WETH"
**Root Cause**: WETH9 contract's `decimals()` returns 0 instead of 18
**Solution**:
1. **Remove Token**
- Find WETH9 in token list
- Click token → "Hide token" or remove
2. **Re-import with Correct Decimals**
- Import tokens → Custom token
- Address: `0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2`
- Symbol: `WETH`
- **Decimals: `18`** ⚠️ **Critical: Must be 18**
3. **Verify Display**
- Should now show: "6 WETH" or "6.0 WETH"
- Not: "6,000,000,000.0T WETH"
**See**:
- [WETH9 Display Fix Instructions](./METAMASK_WETH9_FIX_INSTRUCTIONS.md)
- [MetaMask RPC Chain ID Error Fix](./METAMASK_RPC_CHAIN_ID_ERROR_FIX.md) - For "Could not fetch chain ID" errors
- [RPC Public Endpoint Routing](./RPC_PUBLIC_ENDPOINT_ROUTING.md) - Architecture and routing details
---
#### Issue: Token Not Showing Balance
**Symptoms**:
- Token imported but shows 0 balance
- Token doesn't appear in list
**Solutions**:
1. **Check Token Address**
- WETH9: `0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2`
- WETH10: `0xf4BB2e28688e89fCcE3c0580D37d36A7672E8A9f`
- Verify address is correct (case-sensitive)
2. **Verify You Have Tokens**
```bash
cast call 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2 \
"balanceOf(address)" <YOUR_ADDRESS> \
--rpc-url https://rpc-http-pub.d-bis.org
```
3. **Refresh Token List**
- Click "Import tokens" → Refresh
- Or remove and re-add token
4. **Check Network**
- Ensure you're on ChainID 138
- Tokens are chain-specific
---
### 3. Transaction Issues
#### Issue: Transaction Stuck or Pending Forever
**Symptoms**:
- Transaction shows "Pending" for extended time
- No confirmation after hours
**Solutions**:
1. **Check Network Status**
- Verify RPC endpoint is responding
- Check block explorer for recent blocks
2. **Check Gas Price**
- May need to increase gas price
- Network may be congested
3. **Replace Transaction** (Same Nonce)
- Create new transaction with same nonce
- Higher gas price
- This cancels the old transaction
4. **Reset Nonce** (Last Resort)
- Settings → Advanced → Reset Account
- ⚠️ This clears transaction history
---
#### Issue: "Insufficient Funds for Gas"
**Symptoms**:
- Transaction fails immediately
- Error: "insufficient funds"
**Solutions**:
1. **Check ETH Balance**
- Need ETH for gas fees
- Gas costs vary (typically 0.001-0.01 ETH)
2. **Reduce Gas Limit** (If too high)
- MetaMask may estimate too high
- Try manual gas limit
3. **Get More ETH**
- Request from network administrators
- Bridge from another chain
- Use faucet (if available)
---
#### Issue: Transaction Reverted
**Symptoms**:
- Transaction confirmed but reverted
- Error in transaction details
**Solutions**:
1. **Check Transaction Details**
- View on block explorer
- Look for revert reason
2. **Common Revert Reasons**:
- Insufficient allowance (for token transfers)
- Contract logic error
- Invalid parameters
- Out of gas (rare, usually fails before)
3. **Verify Contract State**
- Check if contract is paused
- Verify you have permissions
- Check contract requirements
---
### 4. Price Feed Issues
#### Issue: Price Not Updating
**Symptoms**:
- Oracle price seems stale
- Price doesn't change
**Solutions**:
1. **Check Oracle Contract**
```bash
cast call 0x3304b747e565a97ec8ac220b0b6a1f6ffdb837e6 \
"latestRoundData()" \
--rpc-url https://rpc-http-pub.d-bis.org
```
2. **Verify `updatedAt` Timestamp**
- Should update every 60 seconds
- If > 5 minutes old, Oracle Publisher may be down
3. **Check Oracle Publisher Service**
- Service should be running (VMID 3500)
- Check service logs for errors
4. **Manual Price Query**
- Use Web3.js or Ethers.js to query directly
- See [Oracle Integration Guide](./METAMASK_ORACLE_INTEGRATION.md)
---
#### Issue: Price Returns Zero or Error
**Symptoms**:
- `latestRoundData()` returns 0
- Contract call fails
**Solutions**:
1. **Verify Contract Address**
- Oracle Proxy: `0x3304b747e565a97ec8ac220b0b6a1f6ffdb837e6`
- Ensure correct address
2. **Check Contract Deployment**
- Verify contract exists on ChainID 138
- Check block explorer
3. **Verify Network**
- Must be on ChainID 138
- Price feeds are chain-specific
---
### 5. Network Switching Issues
#### Issue: Can't Switch to ChainID 138
**Symptoms**:
- Network doesn't appear in list
- Switch fails
**Solutions**:
1. **Add Network Manually**
- See [Quick Start Guide](./METAMASK_QUICK_START_GUIDE.md)
- Ensure all fields are correct
2. **Programmatic Addition** (For dApps)
```javascript
try {
await window.ethereum.request({
method: 'wallet_switchEthereumChain',
params: [{ chainId: '0x8a' }], // 138 in hex
});
} catch (switchError) {
// Network doesn't exist, add it
if (switchError.code === 4902) {
await window.ethereum.request({
method: 'wallet_addEthereumChain',
params: [networkConfig],
});
}
}
```
3. **Clear Network Cache**
- Remove network
- Re-add with correct settings
---
### 6. Account Issues
#### Issue: Wrong Account Connected
**Symptoms**:
- Different address than expected
- Can't see expected balance
**Solutions**:
1. **Switch Account in MetaMask**
- Click account icon
- Select correct account
2. **Import Account** (If needed)
- Settings → Import Account
- Use private key or seed phrase
3. **Verify Address**
- Check address matches expected
- Addresses are case-insensitive but verify format
---
#### Issue: Account Not Showing Balance
**Symptoms**:
- Account connected but balance is 0
- Expected to have ETH/tokens
**Solutions**:
1. **Verify Network**
- Must be on ChainID 138
- Balances are chain-specific
2. **Check Address**
- Verify correct address
- Check on block explorer
3. **Refresh Balance**
- Click refresh icon in MetaMask
- Or switch networks and switch back
---
## 🔧 Advanced Troubleshooting
### Enable Debug Mode
**MetaMask Settings**:
1. Settings → Advanced
2. Enable "Show Hex Data"
3. Enable "Enhanced Gas Fee UI"
4. Check browser console for errors
### Check Browser Console
**Open Console**:
- Chrome/Edge: F12 → Console
- Firefox: F12 → Console
- Safari: Cmd+Option+I → Console
**Look For**:
- RPC errors
- Network errors
- JavaScript errors
- MetaMask-specific errors
### Verify RPC Response
**Test RPC Endpoint**:
```bash
curl -X POST https://rpc-http-pub.d-bis.org \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "eth_blockNumber",
"params": [],
"id": 1
}'
```
**Expected Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": "0x..."
}
```
---
## 📞 Getting Help
### Resources
1. **Documentation**:
- [Quick Start Guide](./METAMASK_QUICK_START_GUIDE.md)
- [Full Integration Requirements](./METAMASK_FULL_INTEGRATION_REQUIREMENTS.md)
- [Oracle Integration](./METAMASK_ORACLE_INTEGRATION.md)
2. **Block Explorer**:
- `https://explorer.d-bis.org`
- Check transactions, contracts, addresses
3. **Network Status**:
- RPC: `https://rpc-http-pub.d-bis.org` (public, no auth required)
- Permissioned RPC: `https://rpc-http-prv.d-bis.org` (requires JWT auth)
- Verify endpoint is responding
### Information to Provide When Reporting Issues
1. **MetaMask Version**: Settings → About
2. **Browser**: Chrome/Firefox/Safari + version
3. **Network**: ChainID 138
4. **Error Message**: Exact error text
5. **Steps to Reproduce**: What you did before error
6. **Console Errors**: Any JavaScript errors
7. **Transaction Hash**: If transaction-related
---
## ✅ Quick Diagnostic Checklist
Run through this checklist when troubleshooting:
- [ ] Network is "Defi Oracle Meta Mainnet" or "SMOM-DBIS-138" (ChainID 138)
- [ ] RPC URL is `https://rpc-http-pub.d-bis.org` (public endpoint, no auth)
- [ ] Chain ID is `138` (decimal, not hex)
- [ ] RPC endpoint does NOT require JWT authentication
- [ ] Account is connected and correct
- [ ] Sufficient ETH for gas fees
- [ ] Token decimals are correct (18 for WETH)
- [ ] Browser console shows no errors
- [ ] RPC endpoint is responding
- [ ] Block explorer shows recent blocks
---
**Last Updated**: $(date)

View File

@@ -0,0 +1,115 @@
# Solution: Fix Tunnels Without SSH Access
## Problem
- All 6 Cloudflare tunnels are DOWN
- Cannot access Proxmox network via SSH (network segmentation)
- SSH tunnel setup fails (can't connect to establish tunnel)
## Solution: Cloudflare Dashboard ⭐ EASIEST
**No SSH needed!** Configure tunnels directly in Cloudflare Dashboard.
### Step-by-Step
1. **Access Dashboard**
- Go to: https://one.dash.cloudflare.com/
- Sign in
- Navigate to: **Zero Trust****Networks****Tunnels**
2. **For Each Tunnel** (6 total):
- Click on tunnel name
- Click **Configure** button
- Go to **Public Hostnames** tab
- Add/Edit hostname configurations
- Save
3. **Wait 1-2 Minutes**
- Tunnels should reconnect automatically
- Status should change from **DOWN** to **HEALTHY**
### Tunnel Configuration Details
#### Shared Tunnel (Most Important)
**Tunnel**: `rpc-http-pub.d-bis.org` (ID: `10ab22da-8ea3-4e2e-a896-27ece2211a05`)
**Add these 9 hostnames** (all pointing to `http://192.168.11.21:80`):
- `dbis-admin.d-bis.org`
- `dbis-api.d-bis.org`
- `dbis-api-2.d-bis.org`
- `mim4u.org.d-bis.org`
- `www.mim4u.org.d-bis.org`
- `rpc-http-prv.d-bis.org`
- `rpc-http-pub.d-bis.org`
- `rpc-ws-prv.d-bis.org`
- `rpc-ws-pub.d-bis.org`
**Important**: Add catch-all rule (HTTP 404) as the LAST entry.
#### Proxmox Tunnels
Each needs one hostname pointing to HTTPS:
| Tunnel | Hostname | Target |
|--------|----------|--------|
| tunnel-ml110 | ml110-01.d-bis.org | https://192.168.11.10:8006 |
| tunnel-r630-01 | r630-01.d-bis.org | https://192.168.11.11:8006 |
| tunnel-r630-02 | r630-02.d-bis.org | https://192.168.11.12:8006 |
**Options**: Enable "No TLS Verify" (Proxmox uses self-signed certs)
#### Other Tunnels
- `explorer.d-bis.org``http://192.168.11.21:80`
- `mim4u-tunnel``http://192.168.11.21:80`
## Why This Works
Cloudflare tunnels use **outbound connections** from your infrastructure to Cloudflare. The configuration in the dashboard tells Cloudflare how to route traffic. Even if the tunnel connector (cloudflared) is down, once it reconnects, it will use the dashboard configuration.
## If Dashboard Method Doesn't Work
If tunnels remain DOWN after dashboard configuration, the tunnel connector (cloudflared in VMID 102) is likely not running. You need physical/network access to:
### Option 1: Physical Access to Proxmox Host
```bash
# Direct console access to 192.168.11.12
pct start 102
pct exec 102 -- systemctl start cloudflared-*
pct exec 102 -- systemctl status cloudflared-*
```
### Option 2: VPN Access
If you have VPN access to `192.168.11.0/24` network:
```bash
# Connect via VPN first, then:
ssh root@192.168.11.12 "pct start 102"
ssh root@192.168.11.12 "pct exec 102 -- systemctl start cloudflared-*"
```
### Option 3: Cloudflare Tunnel Token Method
If you can get new tunnel tokens from Cloudflare Dashboard:
1. Go to tunnel → Configure
2. Download new token/credentials
3. Deploy to container (requires access)
## Verification
After configuring in dashboard:
```bash
# Wait 1-2 minutes, then test:
curl -I https://ml110-01.d-bis.org
curl -I https://r630-01.d-bis.org
curl -I https://explorer.d-bis.org
curl -I https://rpc-http-pub.d-bis.org
```
## Summary
**Best Method**: Cloudflare Dashboard (no SSH needed)
⚠️ **If that fails**: Need physical/network access to start container
📋 **All tunnel IDs and configs**: See generated files in `/tmp/tunnel-fix-manual-*/`

View File

@@ -0,0 +1,165 @@
# R630-04 Authentication Issue
**IP:** 192.168.11.14
**User:** root
**Status:** ❌ Permission denied with password authentication
---
## Current Situation
- **SSH Port:** ✅ Open and accepting connections (port 22)
- **Authentication Methods Offered:** `publickey,password`
- **Password Auth:** ❌ Failing (permission denied)
- **Public Key Auth:** ⚠️ Not configured
---
## Debug Information
From SSH verbose output:
```
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: password
Permission denied, please try again.
```
This shows:
- Server accepts both authentication methods
- Public key auth tried first (no keys configured)
- Password auth attempted but rejected
---
## Possible Solutions
### Option 1: Verify Password
Double-check the password. Common issues:
- Typos (especially with special characters like `@`)
- Caps Lock
- Wrong password entirely
- Password changed since last successful login
### Option 2: Connect from R630-03
Since R630-03 works, try:
```bash
# Connect to R630-03 first
ssh root@192.168.11.13
# Password: L@kers2010
# Then from R630-03, connect to R630-04
ssh root@192.168.11.14
# Try password: L@kers2010
```
Sometimes connecting from within the same network helps.
### Option 3: Use Console Access
If you have physical/console access to R630-04:
1. **Physical Console** - Connect KVM/keyboard directly
2. **iDRAC/iLO** - Use Dell's remote management (if available)
3. **Serial Console** - If configured
From console:
```bash
# Check SSH configuration
cat /etc/ssh/sshd_config | grep -E "PasswordAuthentication|PermitRootLogin"
# Reset root password
passwd root
# Check account status
passwd -S root
lastb | grep root | tail -10 # Check failed login attempts
```
### Option 4: Set Up SSH Key Authentication
If you can access R630-04 through another method (console, Proxmox host, etc.):
**Generate SSH key:**
```bash
# On your local machine
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_r630-04 -N ""
```
**Copy public key to R630-04:**
```bash
# If you have console access to R630-04
cat ~/.ssh/id_ed25519_r630-04.pub
# Then on R630-04:
mkdir -p /root/.ssh
chmod 700 /root/.ssh
echo "PASTE_PUBLIC_KEY_HERE" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
```
**Connect with key:**
```bash
ssh -i ~/.ssh/id_ed25519_r630-04 root@192.168.11.14
```
### Option 5: Check if Password Was Changed
If you have access to another Proxmox host that manages R630-04, or have documentation, verify:
- When was the password last changed?
- Is there a password management system?
- Are there multiple root accounts or users?
---
## Quick Checklist
- [ ] Try password again carefully (check for typos)
- [ ] Try connecting from R630-03
- [ ] Check if password was changed
- [ ] Try console/iDRAC access
- [ ] Check if SSH keys are set up
- [ ] Verify you're using the correct username (root)
---
## If You Have Console Access
Once you can access the console, run:
```bash
# Reset root password
passwd root
# Verify SSH configuration allows password auth
grep -E "^PasswordAuthentication|^#PasswordAuthentication" /etc/ssh/sshd_config
# Should show:
# PasswordAuthentication yes
# OR (commented out means yes by default)
# #PasswordAuthentication yes
# If it shows "PasswordAuthentication no", change it:
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl restart sshd
# Check root account status
passwd -S root
# Check for locked account
usermod -U root # Unlock if locked
```
---
## Next Steps
1. **Try password one more time** - Make sure Caps Lock is off, type carefully
2. **Try from R630-03** - Network path might matter
3. **Get console access** - Physical KVM or iDRAC
4. **Check password documentation** - Verify if password was changed
5. **Set up SSH keys** - More secure and reliable long-term solution

View File

@@ -0,0 +1,256 @@
# R630-04 Console Access Guide
**IP:** 192.168.11.14
**Status:** Console access available
**Tasks:** Reset password, fix pveproxy, verify web interface
---
## Step 1: Login via Console
Log in to R630-04 using your console access (physical keyboard, iDRAC KVM, etc.)
---
## Step 2: Check Current Status
Once logged in, run these commands to understand the current state:
```bash
# Check hostname
hostname
cat /etc/hostname
# Check Proxmox version
pveversion
# Check pveproxy service status
systemctl status pveproxy --no-pager -l
# Check recent pveproxy logs
journalctl -u pveproxy --no-pager -n 50
# Check if port 8006 is listening
ss -tlnp | grep 8006
```
---
## Step 3: Reset Root Password
Set a password for root (you can use `L@kers2010` to match other hosts, or choose a different one):
```bash
passwd root
# Enter new password twice when prompted
```
**Recommended:** Use `L@kers2010` to match R630-03 and ml110 for consistency.
---
## Step 4: Fix pveproxy Service
### 4.1 Check Service Status
```bash
systemctl status pveproxy --no-pager -l | head -40
```
### 4.2 Check Logs for Errors
```bash
journalctl -u pveproxy --no-pager -n 100 | grep -i error
journalctl -u pveproxy --no-pager -n 100 | tail -50
```
### 4.3 Restart pveproxy
```bash
systemctl restart pveproxy
sleep 3
systemctl status pveproxy --no-pager | head -20
```
### 4.4 Check if Port 8006 is Now Listening
```bash
ss -tlnp | grep 8006
```
Should show something like:
```
LISTEN 0 128 0.0.0.0:8006 0.0.0.0:* users:(("pveproxy",pid=1234,fd=6))
```
---
## Step 5: If pveproxy Still Fails
### 5.1 Check All Proxmox Services
```bash
systemctl list-units --type=service --all | grep -E 'pveproxy|pvedaemon|pve-cluster|pvestatd'
systemctl status pvedaemon --no-pager | head -20
systemctl status pve-cluster --no-pager | head -20
```
### 5.2 Restart All Proxmox Services
```bash
systemctl restart pveproxy pvedaemon pvestatd pve-cluster
sleep 5
systemctl status pveproxy --no-pager | head -20
```
### 5.3 Check for Port Conflicts
```bash
# Check if something else is using port 8006
lsof -i :8006
ss -tlnp | grep 8006
```
### 5.4 Check Disk Space
```bash
df -h
# Low disk space can cause service issues
```
### 5.5 Check Log Directory Permissions
```bash
ls -la /var/log/pveproxy/
# Should be owned by root:root
```
### 5.6 Check Proxmox Cluster Status (if in cluster)
```bash
pvecm status
```
---
## Step 6: Verify Web Interface Works
### 6.1 Test Locally
```bash
# Test HTTPS connection locally
curl -k https://localhost:8006 | head -20
# Should return HTML (Proxmox login page)
```
### 6.2 Test from Another Host
From another machine on the network:
```bash
# Test from R630-03 or your local machine
curl -k https://192.168.11.14:8006 | head -20
```
### 6.3 Open in Browser
Open in web browser:
```
https://192.168.11.14:8006
```
You should see the Proxmox login page.
---
## Step 7: Document Password
Once password is set and everything works, document it:
1. Update `docs/PROXMOX_HOST_PASSWORDS.md` with R630-04 password
2. Update `INFRASTRUCTURE_OVERVIEW_COMPLETE.md` with correct status
---
## Quick Command Reference
Copy-paste these commands in order:
```bash
# 1. Check status
hostname
pveversion
systemctl status pveproxy --no-pager -l | head -30
# 2. Reset password
passwd root
# Enter: L@kers2010 (or your chosen password)
# 3. Fix pveproxy
systemctl restart pveproxy
sleep 3
systemctl status pveproxy --no-pager | head -20
ss -tlnp | grep 8006
# 4. If still failing, restart all services
systemctl restart pveproxy pvedaemon pvestatd
systemctl status pveproxy --no-pager | head -20
# 5. Test web interface
curl -k https://localhost:8006 | head -10
```
---
## Expected Results
After completing these steps:
✅ Root password set and documented
✅ pveproxy service running
✅ Port 8006 listening
✅ Web interface accessible at https://192.168.11.14:8006
✅ SSH access working with new password
---
## If Issues Persist
If pveproxy still fails after restart:
1. **Check for specific error messages:**
```bash
journalctl -u pveproxy --no-pager -n 200 | grep -i "error\|fail\|exit"
```
2. **Check Proxmox installation:**
```bash
dpkg -l | grep proxmox
pveversion -v
```
3. **Reinstall pveproxy (if needed):**
```bash
apt update
apt install --reinstall pveproxy
systemctl restart pveproxy
```
4. **Check system resources:**
```bash
free -h
df -h
top -bn1 | head -20
```
---
**Once you're done, let me know:**
1. What password you set
2. Whether pveproxy is working
3. If the web interface is accessible
4. Any error messages you encountered
I'll update the documentation accordingly!

View File

@@ -0,0 +1,185 @@
# R630-04 Proxmox Troubleshooting Guide
**IP Address:** 192.168.11.14
**Proxmox Version:** 6.17.2-1-PVE
**Issue:** pveproxy worker exit (web interface not accessible on port 8006)
---
## Problem Summary
- Proxmox VE is installed (version 6.17.2-1-PVE)
- SSH access works (port 22)
- Web interface not accessible (port 8006)
- pveproxy workers are crashing/exiting
---
## Diagnostic Steps
### 1. Check pveproxy Service Status
```bash
systemctl status pveproxy --no-pager -l
```
Look for:
- Service state (should be "active (running)")
- Worker process exits
- Error messages
### 2. Check Recent Logs
```bash
journalctl -u pveproxy --no-pager -n 100
```
Look for:
- Worker exit messages
- Error patterns
- Stack traces
### 3. Check Port 8006
```bash
ss -tlnp | grep 8006
# or
netstat -tlnp | grep 8006
```
Should show pveproxy listening on port 8006.
### 4. Check Proxmox Cluster Status
```bash
pvecm status
```
If in a cluster, verify cluster connectivity.
---
## Common Fixes
### Fix 1: Restart pveproxy Service
```bash
systemctl restart pveproxy
systemctl status pveproxy
```
### Fix 2: Check and Fix Configuration
```bash
# Check configuration files
ls -la /etc/pveproxy/
cat /etc/default/pveproxy 2>/dev/null
# Check for syntax errors
pveproxy --help
```
### Fix 3: Reinstall pveproxy Package
```bash
apt update
apt install --reinstall pveproxy
systemctl restart pveproxy
```
### Fix 4: Check for Port Conflicts
```bash
# Find what's using port 8006
ss -tlnp | grep 8006
lsof -i :8006
# If something else is using it, stop that service
```
### Fix 5: Check Disk Space and Permissions
```bash
# Check disk space
df -h
# Check log directory permissions
ls -la /var/log/pveproxy/
# Should be owned by root:root with appropriate permissions
```
### Fix 6: Check for Corrupted Database
```bash
# Check Proxmox database
pveversion -v
# Check cluster database (if in cluster)
systemctl status pve-cluster
```
### Fix 7: Full Service Restart
```bash
# Restart all Proxmox services
systemctl restart pveproxy pvedaemon pvestatd pve-cluster
systemctl status pveproxy pvedaemon pvestatd pve-cluster
```
---
## Advanced Troubleshooting
### View Real-time Logs
```bash
journalctl -u pveproxy -f
```
### Check Worker Process Details
```bash
# See running pveproxy processes
ps aux | grep pveproxy
# Check process limits
cat /proc/$(pgrep -f pveproxy | head -1)/limits
```
### Test pveproxy Manually
```bash
# Stop service
systemctl stop pveproxy
# Try running manually to see errors
/usr/bin/pveproxy start
```
---
## Scripts Available
1. **check-r630-04-commands.sh** - Diagnostic commands
2. **fix-r630-04-pveproxy.sh** - Automated fix script
---
## Expected Resolution
After fixing:
- `systemctl status pveproxy` should show "active (running)"
- `ss -tlnp | grep 8006` should show pveproxy listening
- Web interface should be accessible at `https://192.168.11.14:8006`
---
## Additional Resources
- Proxmox VE Documentation: https://pve.proxmox.com/pve-docs/
- Proxmox Forum: https://forum.proxmox.com/
- Log locations:
- `/var/log/pveproxy/access.log`
- `/var/log/pveproxy/error.log`
- `journalctl -u pveproxy`

View File

@@ -0,0 +1,329 @@
# Security Incident Response Procedures
**Last Updated:** 2025-01-20
**Document Version:** 1.0
**Status:** Active Documentation
---
## Overview
This document outlines procedures for responding to security incidents, including detection, containment, eradication, recovery, and post-incident activities.
---
## Incident Response Phases
### Phase 1: Preparation
**Pre-Incident Activities:**
1. **Incident Response Team:**
- Define roles and responsibilities
- Establish communication channels
- Create contact list
2. **Tools and Resources:**
- Log collection and analysis tools
- Forensic tools
- Backup systems
- Documentation
3. **Procedures:**
- Incident classification
- Escalation procedures
- Communication templates
---
### Phase 2: Detection and Analysis
#### Detection Methods
1. **Automated Detection:**
- Intrusion detection systems (IDS)
- Security information and event management (SIEM)
- Log analysis
- Anomaly detection
2. **Manual Detection:**
- User reports
- System administrator observations
- Security audits
#### Incident Classification
**Severity Levels:**
- **Critical:** Active breach, data exfiltration, system compromise
- **High:** Unauthorized access, potential data exposure
- **Medium:** Suspicious activity, policy violations
- **Low:** Minor security events, false positives
#### Initial Analysis
**Information Gathering:**
1. **What Happened:**
- Timeline of events
- Affected systems
- Indicators of compromise (IOCs)
2. **Who/What:**
- Source of attack
- Attack vector
- Tools used
3. **Impact Assessment:**
- Data accessed/modified
- Systems compromised
- Business impact
---
### Phase 3: Containment
#### Short-Term Containment
**Immediate Actions:**
1. **Isolate Affected Systems:**
```bash
# Disable network interface
ip link set <interface> down
# Block IP addresses
iptables -A INPUT -s <attacker-ip> -j DROP
```
2. **Preserve Evidence:**
- Take snapshots of affected systems
- Copy logs
- Document current state
3. **Disable Compromised Accounts:**
```bash
# Disable user account
usermod -L <username>
# Revoke API tokens
# Via Proxmox UI: Datacenter → Permissions → API Tokens
```
#### Long-Term Containment
**System Hardening:**
1. **Update Security Controls:**
- Patch vulnerabilities
- Update firewall rules
- Enhance monitoring
2. **Access Control:**
- Review user accounts
- Rotate credentials
- Implement MFA where possible
---
### Phase 4: Eradication
#### Remove Threat
**Actions:**
1. **Remove Malware:**
```bash
# Scan for malware
clamscan -r /path/to/scan
# Remove infected files
# (after verification)
```
2. **Close Attack Vectors:**
- Patch vulnerabilities
- Fix misconfigurations
- Update security policies
3. **Clean Compromised Systems:**
- Rebuild from known-good backups
- Verify system integrity
- Reinstall if necessary
---
### Phase 5: Recovery
#### System Restoration
**Steps:**
1. **Restore from Backups:**
- Use pre-incident backups
- Verify backup integrity
- Restore systems
2. **Verify System Integrity:**
- Check system logs
- Verify configurations
- Test functionality
3. **Monitor Systems:**
- Enhanced monitoring
- Watch for re-infection
- Track system behavior
#### Service Restoration
**Gradual Restoration:**
1. **Priority Systems First:**
- Critical services
- Business-critical applications
- User-facing services
2. **Verification:**
- Test each service
- Verify data integrity
- Confirm functionality
---
### Phase 6: Post-Incident Activity
#### Lessons Learned
**Post-Incident Review:**
1. **Timeline Review:**
- Document complete timeline
- Identify gaps in response
- Note what worked well
2. **Root Cause Analysis:**
- Identify root cause
- Determine contributing factors
- Document findings
3. **Improvements:**
- Update procedures
- Enhance security controls
- Improve monitoring
#### Documentation
**Incident Report:**
1. **Executive Summary:**
- Incident overview
- Impact assessment
- Response timeline
2. **Technical Details:**
- Attack vector
- IOCs
- Remediation steps
3. **Recommendations:**
- Security improvements
- Process improvements
- Training needs
---
## Incident Response Contacts
### Primary Contacts
- **Security Team Lead:** [Contact Information]
- **Infrastructure Lead:** [Contact Information]
- **Management:** [Contact Information]
### Escalation
- **Level 1:** Security team (immediate)
- **Level 2:** Management (1 hour)
- **Level 3:** External security firm (4 hours)
---
## Common Incident Scenarios
### Unauthorized Access
**Symptoms:**
- Unknown logins
- Unusual account activity
- Failed login attempts
**Response:**
1. Disable compromised accounts
2. Review access logs
3. Change all passwords
4. Investigate source
### Malware Infection
**Symptoms:**
- Unusual system behavior
- High CPU/memory usage
- Network anomalies
**Response:**
1. Isolate affected systems
2. Identify malware
3. Remove malware
4. Restore from backup if needed
### Data Breach
**Symptoms:**
- Unauthorized data access
- Data exfiltration
- Database anomalies
**Response:**
1. Contain breach
2. Assess data exposure
3. Notify affected parties (if required)
4. Enhance security controls
---
## Prevention
### Security Best Practices
1. **Regular Updates:**
- Keep systems patched
- Update security tools
- Review configurations
2. **Monitoring:**
- Log analysis
- Anomaly detection
- Regular audits
3. **Access Control:**
- Least privilege principle
- MFA where possible
- Regular access reviews
4. **Backups:**
- Regular backups
- Test restores
- Offsite backups
---
## Related Documentation
- **[DISASTER_RECOVERY.md](../03-deployment/DISASTER_RECOVERY.md)** - Disaster recovery procedures
- **[BACKUP_AND_RESTORE.md](../03-deployment/BACKUP_AND_RESTORE.md)** - Backup procedures
- **[TROUBLESHOOTING_FAQ.md](TROUBLESHOOTING_FAQ.md)** - General troubleshooting
---
**Last Updated:** 2025-01-20
**Review Cycle:** Quarterly

View File

@@ -0,0 +1,113 @@
# Storage Migration Issue - pve2 Configuration
**Date**: $(date)
**Issue**: Container migrations failing due to storage configuration mismatch
## Problem
Container migrations from ml110 to pve2 are failing with the error:
```
Volume group "pve" not found
ERROR: storage migration for 'local-lvm:vm-XXXX-disk-0' to storage 'local-lvm' failed
```
## Root Cause
**ml110** (source):
- Has `local-lvm` storage **active**
- Uses volume group named **"pve"** (standard Proxmox setup)
- Containers stored on `local-lvm:vm-XXXX-disk-0`
**pve2** (target):
- Has `local-lvm` storage but it's **INACTIVE**
- Has volume groups named **lvm1, lvm2, lvm3, lvm4, lvm5, lvm6** instead of "pve"
- Storage is not properly configured for Proxmox
## Storage Status
### ml110 Storage
```
local-lvm: lvmthin, active, 832GB total, 108GB used
Volume Group: pve (standard)
```
### pve2 Storage
```
local-lvm: lvmthin, INACTIVE, 0GB available
Volume Groups: lvm1, lvm2, lvm3, lvm4, lvm5, lvm6 (non-standard)
```
## Solutions
### Option 1: Configure pve2's local-lvm Storage (Recommended)
1. **Rename/create "pve" volume group on pve2**:
```bash
# On pve2, check current LVM setup
ssh root@192.168.11.12 "vgs; lvs"
# Rename one of the volume groups to "pve" (if possible)
# OR create a new "pve" volume group from available space
```
2. **Activate local-lvm storage on pve2**:
```bash
# Check storage configuration
ssh root@192.168.11.12 "cat /etc/pve/storage.cfg"
# May need to reconfigure local-lvm to use correct volume group
```
### Option 2: Migrate to Different Storage on pve2
Use `local` (directory storage) instead of `local-lvm`:
```bash
# Migrate with storage specification
pct migrate <VMID> pve2 --storage local --restart
```
**Pros**: Works immediately, no storage reconfiguration needed
**Cons**: Directory storage is slower than LVM thin provisioning
### Option 3: Use Shared Storage
Configure shared storage (NFS, Ceph, etc.) accessible from both nodes:
```bash
# Add shared storage to cluster
# Then migrate containers to shared storage
```
## Immediate Workaround
Until pve2's local-lvm is properly configured, we can:
1. **Skip migrations** for now
2. **Configure pve2 storage** first
3. **Then proceed with migrations**
## Next Steps
1. ⏳ Investigate pve2's LVM configuration
2. ⏳ Configure local-lvm storage on pve2 with "pve" volume group
3. ⏳ Verify storage is active and working
4. ⏳ Retry container migrations
## Verification Commands
```bash
# Check pve2 storage status
ssh root@192.168.11.12 "pvesm status"
# Check volume groups
ssh root@192.168.11.12 "vgs"
# Check local-lvm configuration
ssh root@192.168.11.12 "cat /etc/pve/storage.cfg | grep -A 5 local-lvm"
```
---
**Status**: ⚠️ Migrations paused pending storage configuration fix

View File

@@ -4,12 +4,16 @@ Common issues and solutions for Besu validated set deployment.
## Table of Contents
1. [Container Issues](#container-issues)
2. [Service Issues](#service-issues)
3. [Network Issues](#network-issues)
4. [Consensus Issues](#consensus-issues)
5. [Configuration Issues](#configuration-issues)
6. [Performance Issues](#performance-issues)
**Estimated Reading Time:** 30 minutes
**Progress:** Check off sections as you read
1. [Container Issues](#container-issues) - *Container troubleshooting*
2. ✅ [Service Issues](#service-issues) - *Service troubleshooting*
3. [Network Issues](#network-issues) - *Network troubleshooting*
4. ✅ [Consensus Issues](#consensus-issues) - *Consensus troubleshooting*
5. ✅ [Configuration Issues](#configuration-issues) - *Configuration troubleshooting*
6. ✅ [Performance Issues](#performance-issues) - *Performance troubleshooting*
7. ✅ [Additional Common Questions](#additional-common-questions) - *More FAQs*
---
@@ -43,6 +47,27 @@ pct start <vmid>
- Invalid container configuration
- OS template issues
<details>
<summary>Click to expand advanced troubleshooting steps</summary>
**Advanced Diagnostics:**
```bash
# Check container resources
pct list --full | grep <vmid>
# Check Proxmox host resources
free -h
df -h
# Check container logs in detail
journalctl -u pve-container@<vmid> -n 100 --no-pager
# Verify container template
pveam list | grep <template-name>
```
</details>
---
### Q: Container runs out of disk space
@@ -483,6 +508,187 @@ If issues persist:
---
## Additional Common Questions
### Q: How do I add a new VMID?
**Answer:**
1. Check available VMID ranges in [VMID_ALLOCATION_FINAL.md](../02-architecture/VMID_ALLOCATION_FINAL.md)
2. Select an appropriate VMID from the designated range for your service
3. Verify the VMID is not already in use: `pct list | grep <vmid>` or `qm list | grep <vmid>`
4. Document the assignment in VMID_ALLOCATION_FINAL.md
5. Use the VMID when creating containers/VMs
**Example:**
```bash
# Check if VMID 2503 is available
pct list | grep 2503
qm list | grep 2503
# If available, create container with VMID 2503
pct create 2503 ...
```
**Related Documentation:**
- [VMID Allocation Registry](../02-architecture/VMID_ALLOCATION_FINAL.md) ⭐⭐⭐
- [VMID Quick Reference](../12-quick-reference/VMID_QUICK_REFERENCE.md) ⭐⭐⭐
---
### Q: What's the difference between public and private RPC?
**Answer:**
| Feature | Public RPC | Private RPC |
|---------|-----------|-------------|
| **Discovery** | Enabled | Disabled |
| **Permissioning** | Disabled | Enabled |
| **Access** | Public (CORS: *) | Restricted (internal only) |
| **APIs** | ETH, NET, WEB3 (read-only) | ETH, NET, WEB3, ADMIN, DEBUG (full) |
| **Use Case** | dApps, external users | Internal services, admin |
| **ChainID** | 0x8a (138) or 0x1 (wallet compatibility) | 0x8a (138) |
| **Domain** | rpc-http-pub.d-bis.org | rpc-http-prv.d-bis.org |
**Public RPC:**
- Accessible from the internet
- Used by dApps and external tools
- Read-only APIs for security
- May report chainID 0x1 for MetaMask compatibility
**Private RPC:**
- Internal network only
- Used by internal services and administration
- Full API access including ADMIN and DEBUG
- Strict permissioning and access control
**Related Documentation:**
- [RPC Node Types Architecture](../05-network/RPC_NODE_TYPES_ARCHITECTURE.md) ⭐⭐
- [RPC Template Types](../05-network/RPC_TEMPLATE_TYPES.md) ⭐
---
### Q: How do I troubleshoot Cloudflare tunnel issues?
**Answer:**
**Step 1: Check Tunnel Status**
```bash
# Check cloudflared container status
pct status 102
# Check tunnel logs
pct logs 102 --tail 50
# Verify tunnel is running
pct exec 102 -- ps aux | grep cloudflared
```
**Step 2: Verify Configuration**
```bash
# Check tunnel configuration
pct exec 102 -- cat /etc/cloudflared/config.yaml
# Verify credentials file exists
pct exec 102 -- ls -la /etc/cloudflared/*.json
```
**Step 3: Test Connectivity**
```bash
# Test from internal network
curl -I http://192.168.11.21:80
# Test from external (through Cloudflare)
curl -I https://explorer.d-bis.org
```
**Step 4: Check Cloudflare Dashboard**
- Verify tunnel is healthy in Cloudflare Zero Trust dashboard
- Check ingress rules are configured correctly
- Verify DNS records point to tunnel
**Common Issues:**
- Tunnel not running → Restart: `pct restart 102`
- Configuration error → Check YAML syntax
- Credentials invalid → Regenerate tunnel token
- DNS not resolving → Check Cloudflare DNS settings
**Related Documentation:**
- [Cloudflare Tunnel Routing Architecture](../05-network/CLOUDFLARE_TUNNEL_ROUTING_ARCHITECTURE.md) ⭐⭐⭐
- [Cloudflare Routing Master Reference](../05-network/CLOUDFLARE_ROUTING_MASTER.md) ⭐⭐⭐
- [Troubleshooting Quick Reference](../12-quick-reference/TROUBLESHOOTING_QUICK_REFERENCE.md) ⭐⭐⭐
---
### Q: What's the recommended storage configuration?
**Answer:**
**For R630 Compute Nodes:**
- **Boot drives (2×600GB):** ZFS mirror (recommended) or hardware RAID1
- **Data SSDs (6×250GB):** ZFS pool with one of:
- Striped mirrors (if pairs available)
- RAIDZ1 (single parity, 5 drives usable)
- RAIDZ2 (double parity, 4 drives usable)
- **High-write workloads:** Dedicated dataset with quotas
**For ML110 Management Node:**
- Standard Proxmox storage configuration
- Sufficient space for templates and backups
**Storage Best Practices:**
- Use ZFS for data integrity and snapshots
- Enable compression for space efficiency
- Set quotas for containers to prevent disk exhaustion
- Regular backups to external storage
**Related Documentation:**
- [Network Architecture - Storage Orchestration](../02-architecture/NETWORK_ARCHITECTURE.md#53-storage-orchestration-r630) ⭐⭐⭐
- [Backup and Restore](../03-deployment/BACKUP_AND_RESTORE.md) ⭐⭐
---
### Q: How do I migrate from flat LAN to VLANs?
**Answer:**
**Phase 1: Preparation**
1. Review VLAN plan in [NETWORK_ARCHITECTURE.md](../02-architecture/NETWORK_ARCHITECTURE.md)
2. Document current IP assignments
3. Plan IP address migration for each service
4. Create rollback plan
**Phase 2: Network Configuration**
1. Configure ES216G switches with VLAN trunks
2. Enable VLAN-aware bridge on Proxmox hosts
3. Create VLAN interfaces on ER605 router
4. Test VLAN connectivity
**Phase 3: Service Migration**
1. Migrate services one VLAN at a time
2. Start with non-critical services
3. Update container/VM network configuration
4. Verify connectivity after each migration
**Phase 4: Validation**
1. Test all services on new VLANs
2. Verify routing between VLANs
3. Test egress NAT pools
4. Document final configuration
**Migration Order (Recommended):**
1. Management services (VLAN 11) - Already active
2. Monitoring/observability (VLAN 120, 121)
3. Besu network (VLANs 110, 111, 112)
4. CCIP network (VLANs 130, 132, 133, 134)
5. Service layer (VLAN 160)
6. Sovereign tenants (VLANs 200-203)
**Related Documentation:**
- [Network Architecture - VLAN Orchestration](../02-architecture/NETWORK_ARCHITECTURE.md#3-layer-2--vlan-orchestration-plan) ⭐⭐⭐
- [Orchestration Deployment Guide - VLAN Enablement](../02-architecture/ORCHESTRATION_DEPLOYMENT_GUIDE.md#phase-1--vlan-enablement) ⭐⭐⭐
---
## Related Documentation
### Operational Procedures

Some files were not shown because too many files have changed in this diff Show More