Compare commits


15 Commits

Author SHA1 Message Date
defiQUG
48d3e3f761 Backfill Phoenix deploy API env on install
Some checks failed
Deploy to Phoenix / validate (push) Successful in 1m3s
Deploy to Phoenix / deploy (push) Successful in 42s
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Successful in 2m27s
phoenix-deploy Deploy failed: Command failed: bash scripts/deployment/gitea-cloudflare-sync.sh bash: scripts/deployment/gitea-cloudflare-sync.sh: No such file or directory
Deploy to Phoenix / cloudflare (push) Successful in 2m45s
2026-04-28 05:21:56 -07:00
defiQUG
bed94a3ad4 Keep optional Cloudflare sync non-blocking
Some checks failed
Deploy to Phoenix / validate (push) Successful in 1m0s
Deploy to Phoenix / deploy (push) Successful in 42s
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Failing after 45s
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 05:13:07 -07:00
defiQUG
7be2190441 Allow long atomic dapp deploy requests
Some checks failed
Deploy to Phoenix / validate (push) Successful in 1m14s
Deploy to Phoenix / deploy (push) Successful in 46s
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Successful in 2m35s
phoenix-deploy Deploy failed: Command failed: bash scripts/deployment/gitea-cloudflare-sync.sh bash: scripts/deployment/gitea-cloudflare-sync.sh: No such file or directory
Deploy to Phoenix / cloudflare (push) Failing after 2m57s
2026-04-28 04:57:24 -07:00
defiQUG
19cb7fe8b5 Align GRU main overlay with deployment graph
Some checks failed
Deploy to Phoenix / validate (push) Has started running
Deploy to Phoenix / deploy (push) Has been cancelled
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been cancelled
Deploy to Phoenix / cloudflare (push) Has been cancelled
2026-04-28 04:56:28 -07:00
defiQUG
f1715fb684 Serialize atomic deploy after Phoenix self-deploy
Some checks failed
Deploy to Phoenix / validate (push) Failing after 1m7s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 04:48:22 -07:00
defiQUG
6a9f5dead0 Add atomic swap deploy helper to main
Some checks failed
Deploy to Phoenix / validate (push) Failing after 1m36s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 04:44:03 -07:00
defiQUG
770a1db99a Treat Phoenix self-deploy restart as successful handoff
Some checks failed
Deploy to Phoenix / validate (push) Failing after 1m8s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 04:39:16 -07:00
defiQUG
8868a3501f Add pnpm workspace lockfile checker to main
Some checks failed
Deploy to Phoenix / validate (push) Has been cancelled
Deploy to Phoenix / deploy (push) Has been cancelled
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been cancelled
Deploy to Phoenix / cloudflare (push) Has been cancelled
2026-04-28 04:36:21 -07:00
defiQUG
cf96e9d821 Retry transient Phoenix deploy POST failures
Some checks failed
Deploy to Phoenix / validate (push) Failing after 1m4s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 04:31:23 -07:00
defiQUG
4c4aa28c95 Materialize PMM config in deploy validation
Some checks failed
Deploy to Phoenix / validate (push) Failing after 1m7s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 04:20:01 -07:00
defiQUG
9306a65186 Install validation dependencies in Gitea workflows
Some checks failed
Deploy to Phoenix / validate (push) Failing after 1m10s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 04:15:17 -07:00
defiQUG
7ab231c4ce ci: align validate workflow strict closure env
Some checks failed
Deploy to Phoenix / validate (push) Failing after 29s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 01:28:47 -07:00
defiQUG
b58c3a0342 ci: prefer gitea remote for workflow parity checks
Some checks failed
Deploy to Phoenix / validate (push) Failing after 14s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-22 21:48:58 -07:00
defiQUG
b8e735dcac ci: lock deploy workflows across main and master
Some checks failed
Deploy to Phoenix / validate (push) Failing after 14s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-22 21:47:57 -07:00
defiQUG
3bea587e12 phoenix: automate CurrenciCombo e2e deploys
All checks were successful
Deploy to Phoenix / deploy (push) Successful in 31s
2026-04-22 20:06:19 -07:00
50 changed files with 1995 additions and 2705 deletions

View File

@@ -6,6 +6,10 @@
2. Make changes, ensure tests pass
3. Open a pull request
Deploy workflow policy:
- `main` and `master` are both deploy-triggering branches, so `.gitea/workflow-sources/deploy-to-phoenix.yml` and `.gitea/workflow-sources/validate-on-pr.yml` must stay identical across both branches.
- Run `bash scripts/verify/sync-gitea-workflows.sh` after editing workflow-source files, and `bash scripts/verify/run-all-validation.sh --skip-genesis` to catch workflow drift before pushing.
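The parity requirement above can be sketched as a standalone check (illustrative only — the repo's real helpers are `sync-gitea-workflows.sh` and `run-all-validation.sh`; this snippet just shows the shape of the drift check, assuming the deploy branches have been fetched):

```shell
#!/usr/bin/env bash
# Illustrative parity check: fail when a deploy-gating file differs
# between two refs (e.g. origin/main and origin/master).
set -u

check_parity() {
  local ref_a="$1" ref_b="$2" file="$3"
  if git diff --quiet "$ref_a" "$ref_b" -- "$file"; then
    echo "parity OK: $file"
  else
    echo "workflow drift: $file differs between $ref_a and $ref_b" >&2
    return 1
  fi
}

# Usage against fetched deploy branches, e.g.:
# check_parity origin/main origin/master .gitea/workflow-sources/deploy-to-phoenix.yml
# check_parity origin/main origin/master .gitea/workflow-sources/validate-on-pr.yml
```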
## Pull Requests
- Use the PR template when opening a PR

View File

@@ -0,0 +1,125 @@
# Canonical deploy workflow. Keep source and checked-in workflow copies byte-identical.
# Validation checks both file sync and main/master parity.
name: Deploy to Phoenix
on:
  push:
    branches: [main, master]
  workflow_dispatch:
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Fetch deploy branches for workflow parity check
        run: |
          REMOTE="${GITEA_WORKFLOW_REMOTE:-origin}"
          if git remote | grep -qx gitea; then
            REMOTE="${GITEA_WORKFLOW_REMOTE:-gitea}"
          fi
          git fetch --depth=1 "$REMOTE" main master
      - name: Install validation dependencies
        run: |
          corepack enable
          pnpm install --frozen-lockfile
      # The cW* mesh matrix and deployment-status validators read
      # cross-chain-pmm-lps/config/*.json. The parent checkout does not
      # materialize submodules by default, and .gitmodules mixes public HTTPS
      # with SSH URLs, so clone only the required public validation dependency.
      - name: Materialize cross-chain-pmm-lps
        run: |
          set -euo pipefail
          if [ ! -f cross-chain-pmm-lps/config/deployment-status.json ]; then
            rm -rf cross-chain-pmm-lps
            git clone --depth=1 \
              https://gitea.d-bis.org/d-bis/cross-chain-pmm-lps.git \
              cross-chain-pmm-lps
          fi
      - name: Run repo validation gate
        run: |
          bash scripts/verify/run-all-validation.sh --skip-genesis
  deploy:
    needs: validate
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Trigger Phoenix deployment
        run: |
          set -euo pipefail
          SHA="$(git rev-parse HEAD)"
          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
          set +e
          curl -sSf --retry 3 --retry-connrefused --retry-delay 10 --retry-max-time 180 \
            --connect-timeout 10 --max-time 120 \
            -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
            -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"default\"}"
          rc="$?"
          set -e
          if [ "$rc" -eq 52 ]; then
            HEALTH_URL="${{ secrets.PHOENIX_DEPLOY_URL }}"
            HEALTH_URL="${HEALTH_URL%/api/deploy}/health"
            echo "Phoenix deploy API restarted during self-deploy; verifying ${HEALTH_URL}"
            for i in $(seq 1 12); do
              if curl -fsS --max-time 5 "$HEALTH_URL"; then
                exit 0
              fi
              sleep 5
            done
          fi
          exit "$rc"
  deploy-atomic-swap-dapp:
    needs: deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Trigger Atomic Swap dApp deployment (Phoenix)
        run: |
          set -euo pipefail
          SHA="$(git rev-parse HEAD)"
          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
          curl -sSf \
            --connect-timeout 10 --max-time 900 \
            -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
            -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"atomic-swap-dapp-live\"}"
  # After app deploy, ask Phoenix to run path-gated Cloudflare DNS sync on the host that has
  # PHOENIX_REPO_ROOT + .env (not on this runner). Skips unless PHOENIX_CLOUDFLARE_SYNC=1 on that host.
  # continue-on-error: first-time or missing opt-in should not block the main deploy.
  cloudflare:
    needs:
      - deploy
      - deploy-atomic-swap-dapp
    runs-on: ubuntu-latest
    continue-on-error: true
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Request Cloudflare DNS sync (Phoenix)
        run: |
          set -euo pipefail
          SHA="$(git rev-parse HEAD)"
          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
          curl -sSf --retry 5 --retry-all-errors --retry-connrefused --retry-delay 10 --retry-max-time 300 \
            --connect-timeout 10 --max-time 120 \
            -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
            -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"cloudflare-sync\"}" \
            || { echo "Cloudflare DNS sync request failed; optional sync is non-blocking."; exit 0; }
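The workflow comments above describe a host-side opt-in gate for the Cloudflare sync. A minimal sketch of what such a gate could look like on the Phoenix host (hypothetical — only `PHOENIX_CLOUDFLARE_SYNC`, `PHOENIX_REPO_ROOT`, and the sync script path come from the comments; the rest is assumed):

```shell
#!/usr/bin/env bash
# Hypothetical host-side gate for the optional Cloudflare DNS sync.
# Skips cleanly (exit 0) unless the host explicitly opted in and has
# the repo root plus its .env in place.
maybe_cloudflare_sync() {
  if [ "${PHOENIX_CLOUDFLARE_SYNC:-0}" != "1" ]; then
    echo "PHOENIX_CLOUDFLARE_SYNC != 1; skipping Cloudflare DNS sync."
    return 0
  fi
  if [ -z "${PHOENIX_REPO_ROOT:-}" ] || [ ! -f "${PHOENIX_REPO_ROOT}/.env" ]; then
    echo "PHOENIX_REPO_ROOT/.env not available; skipping Cloudflare DNS sync." >&2
    return 0
  fi
  (cd "$PHOENIX_REPO_ROOT" && bash scripts/deployment/gitea-cloudflare-sync.sh)
}
```

This mirrors the workflow's `continue-on-error: true`: a missing opt-in or first-time setup skips rather than fails.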

View File

@@ -0,0 +1,33 @@
# Canonical PR validation workflow. Keep source and checked-in workflow copies byte-identical.
# Validation checks both file sync and main/master parity.
# PR-only: push validation already runs in deploy-to-phoenix.yml; this gives PRs the same
# no-LAN checks without the deploy job (and without deploy secrets).
name: Validate (PR)
on:
  pull_request:
    types: [opened, synchronize, reopened]
    branches: [main, master]
  workflow_dispatch:
jobs:
  run-all-validation:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Fetch deploy branches for workflow parity check
        run: |
          REMOTE="${GITEA_WORKFLOW_REMOTE:-origin}"
          if git remote | grep -qx gitea; then
            REMOTE="${GITEA_WORKFLOW_REMOTE:-gitea}"
          fi
          git fetch --depth=1 "$REMOTE" main master
      - name: Install validation dependencies
        run: |
          corepack enable
          pnpm install --frozen-lockfile
      # Optional: set org/repo variable URA_STRICT_CLOSURE=1 to fail PRs while pilot placeholders
      # remain in manifest (see scripts/ura/validate-manifest-closure.mjs). Not enabled by default.
      - name: run-all-validation (no LAN, no genesis)
        env:
          URA_STRICT_CLOSURE: ${{ vars.URA_STRICT_CLOSURE }}
        run: bash scripts/verify/run-all-validation.sh --skip-genesis
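The URA_STRICT_CLOSURE comment above describes a conditional gate: fail only when the variable is set and pilot placeholders remain. A sketch of that behavior (illustrative — the real logic lives in `scripts/ura/validate-manifest-closure.mjs`; `placeholders` here is a hypothetical count):

```shell
# Illustrative strict-closure gate. `placeholders` stands for the number
# of pilot placeholders still present in the manifest.
strict_closure_gate() {
  local placeholders="$1"
  if [ "${URA_STRICT_CLOSURE:-0}" = "1" ] && [ "$placeholders" -gt 0 ]; then
    echo "URA_STRICT_CLOSURE=1 with $placeholders placeholder(s) remaining; failing." >&2
    return 1
  fi
  echo "closure gate passed (strict=${URA_STRICT_CLOSURE:-0}, placeholders=$placeholders)"
}
```

With the variable unset (the default), placeholders only warn; opting in turns them into a hard PR failure.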

View File

@@ -1,11 +1,52 @@
# Canonical deploy workflow. Keep source and checked-in workflow copies byte-identical.
# Validation checks both file sync and main/master parity.
name: Deploy to Phoenix
on:
  push:
    branches: [main, master]
  workflow_dispatch:
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Fetch deploy branches for workflow parity check
        run: |
          REMOTE="${GITEA_WORKFLOW_REMOTE:-origin}"
          if git remote | grep -qx gitea; then
            REMOTE="${GITEA_WORKFLOW_REMOTE:-gitea}"
          fi
          git fetch --depth=1 "$REMOTE" main master
      - name: Install validation dependencies
        run: |
          corepack enable
          pnpm install --frozen-lockfile
      # The cW* mesh matrix and deployment-status validators read
      # cross-chain-pmm-lps/config/*.json. The parent checkout does not
      # materialize submodules by default, and .gitmodules mixes public HTTPS
      # with SSH URLs, so clone only the required public validation dependency.
      - name: Materialize cross-chain-pmm-lps
        run: |
          set -euo pipefail
          if [ ! -f cross-chain-pmm-lps/config/deployment-status.json ]; then
            rm -rf cross-chain-pmm-lps
            git clone --depth=1 \
              https://gitea.d-bis.org/d-bis/cross-chain-pmm-lps.git \
              cross-chain-pmm-lps
          fi
      - name: Run repo validation gate
        run: |
          bash scripts/verify/run-all-validation.sh --skip-genesis
  deploy:
    needs: validate
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
@@ -13,8 +54,72 @@ jobs:
      - name: Trigger Phoenix deployment
        run: |
          curl -sSf -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
          set -euo pipefail
          SHA="$(git rev-parse HEAD)"
          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
          set +e
          curl -sSf --retry 3 --retry-connrefused --retry-delay 10 --retry-max-time 180 \
            --connect-timeout 10 --max-time 120 \
            -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
            -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${{ gitea.sha }}\",\"branch\":\"${{ gitea.ref_name }}\"}"
        continue-on-error: true
            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"default\"}"
          rc="$?"
          set -e
          if [ "$rc" -eq 52 ]; then
            HEALTH_URL="${{ secrets.PHOENIX_DEPLOY_URL }}"
            HEALTH_URL="${HEALTH_URL%/api/deploy}/health"
            echo "Phoenix deploy API restarted during self-deploy; verifying ${HEALTH_URL}"
            for i in $(seq 1 12); do
              if curl -fsS --max-time 5 "$HEALTH_URL"; then
                exit 0
              fi
              sleep 5
            done
          fi
          exit "$rc"
  deploy-atomic-swap-dapp:
    needs: deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Trigger Atomic Swap dApp deployment (Phoenix)
        run: |
          set -euo pipefail
          SHA="$(git rev-parse HEAD)"
          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
          curl -sSf \
            --connect-timeout 10 --max-time 900 \
            -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
            -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"atomic-swap-dapp-live\"}"
  # After app deploy, ask Phoenix to run path-gated Cloudflare DNS sync on the host that has
  # PHOENIX_REPO_ROOT + .env (not on this runner). Skips unless PHOENIX_CLOUDFLARE_SYNC=1 on that host.
  # continue-on-error: first-time or missing opt-in should not block the main deploy.
  cloudflare:
    needs:
      - deploy
      - deploy-atomic-swap-dapp
    runs-on: ubuntu-latest
    continue-on-error: true
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Request Cloudflare DNS sync (Phoenix)
        run: |
          set -euo pipefail
          SHA="$(git rev-parse HEAD)"
          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
          curl -sSf --retry 5 --retry-all-errors --retry-connrefused --retry-delay 10 --retry-max-time 300 \
            --connect-timeout 10 --max-time 120 \
            -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
            -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"cloudflare-sync\"}" \
            || { echo "Cloudflare DNS sync request failed; optional sync is non-blocking."; exit 0; }

View File

@@ -0,0 +1,33 @@
# Canonical PR validation workflow. Keep source and checked-in workflow copies byte-identical.
# Validation checks both file sync and main/master parity.
# PR-only: push validation already runs in deploy-to-phoenix.yml; this gives PRs the same
# no-LAN checks without the deploy job (and without deploy secrets).
name: Validate (PR)
on:
  pull_request:
    types: [opened, synchronize, reopened]
    branches: [main, master]
  workflow_dispatch:
jobs:
  run-all-validation:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Fetch deploy branches for workflow parity check
        run: |
          REMOTE="${GITEA_WORKFLOW_REMOTE:-origin}"
          if git remote | grep -qx gitea; then
            REMOTE="${GITEA_WORKFLOW_REMOTE:-gitea}"
          fi
          git fetch --depth=1 "$REMOTE" main master
      - name: Install validation dependencies
        run: |
          corepack enable
          pnpm install --frozen-lockfile
      # Optional: set org/repo variable URA_STRICT_CLOSURE=1 to fail PRs while pilot placeholders
      # remain in manifest (see scripts/ura/validate-manifest-closure.mjs). Not enabled by default.
      - name: run-all-validation (no LAN, no genesis)
        env:
          URA_STRICT_CLOSURE: ${{ vars.URA_STRICT_CLOSURE }}
        run: bash scripts/verify/run-all-validation.sh --skip-genesis

View File

@@ -1,15 +0,0 @@
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"description": "Canonical manifest template for the native Chain 138 Aave rollout.",
"chainId": 138,
"network": "Chain 138",
"aave": {
"pool": "",
"poolAddressesProvider": "",
"poolDataProvider": "",
"startBlock": "",
"executorTreasury": "",
"executorOwner": "",
"quotePushReceiverOwner": ""
}
}

View File

@@ -1,29 +0,0 @@
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"description": "Chain 138 native Aave V3 Origin market deployment manifest template.",
"chainId": 138,
"network": "Chain 138",
"contractName": "Chain138AaveV3OriginMarket",
"roles": {
"marketOwner": "",
"poolAdmin": "",
"emergencyAdmin": ""
},
"flags": {
"l2": false
},
"marketConfig": {
"marketId": "Chain 138 Aave V3 Market",
"providerId": 138,
"oracleDecimals": 8,
"networkBaseTokenPriceInUsdProxyAggregator": "",
"marketReferenceCurrencyPriceInUsdProxyAggregator": "",
"l2SequencerUptimeFeed": "",
"l2PriceOracleSentinelGracePeriod": 0,
"salt": "0x0000000000000000000000000000000000000000000000000000000000000000",
"wrappedNativeToken": "",
"flashLoanPremium": "5000000000000000",
"incentivesProxy": "",
"treasury": ""
}
}

View File

@@ -1,86 +0,0 @@
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"description": "Chain 138 native GMX synthetics deployment manifest template.",
"chainId": 138,
"network": "chain138",
"rpcUrl": "http://192.168.11.211:8545",
"explorer": {
"apiUrl": "https://explorer.d-bis.org/api",
"browserUrl": "https://explorer.d-bis.org"
},
"general": {
"feeReceiver": "",
"holdingAddress": "",
"sequencerUptimeFeed": "0x0000000000000000000000000000000000000000",
"sequencerGraceDuration": 300,
"maxUiFeeFactor": "0",
"maxAutoCancelOrders": 6,
"maxTotalCallbackGasLimitForAutoCancelOrders": 5000000,
"minHandleExecutionErrorGas": 1200000,
"minHandleExecutionErrorGasToForward": 1000000,
"minAdditionalGasForExecution": 1000000,
"refundExecutionFeeGasLimit": 200000,
"depositGasLimit": 2050000,
"withdrawalGasLimit": 1500000,
"shiftGasLimit": 2500000,
"createDepositGasLimit": 5000000,
"createGlvDepositGasLimit": 5000000,
"createWithdrawalGasLimit": 5000000,
"createGlvWithdrawalGasLimit": 5000000,
"singleSwapGasLimit": 1000000,
"increaseOrderGasLimit": 3900000,
"decreaseOrderGasLimit": 3900000,
"swapOrderGasLimit": 3400000,
"glvPerMarketGasLimit": 100000,
"glvDepositGasLimit": 2000000,
"glvWithdrawalGasLimit": 2000000,
"glvShiftGasLimit": 3000000,
"tokenTransferGasLimit": 200000,
"nativeTokenTransferGasLimit": 50000,
"setTraderReferralCodeGasLimit": 200000,
"registerCodeGasLimit": 200000,
"estimatedGasFeeBaseAmount": 600000,
"estimatedGasPerOraclePrice": 250000,
"estimatedGasFeeMultiplierFactor": "1000000000000000000000000000000",
"executionGasFeeBaseAmount": 600000,
"executionGasPerOraclePrice": 250000,
"executionGasFeeMultiplierFactor": "1000000000000000000000000000000",
"requestExpirationTime": 300,
"maxSwapPathLength": 3,
"maxCallbackGasLimit": 2000000,
"minCollateralUsd": "1000000000000000000000000000000",
"minPositionSizeUsd": "1000000000000000000000000000000",
"claimableCollateralTimeDivisor": 3600,
"claimableCollateralDelay": 432000,
"positionFeeReceiverFactor": "0",
"swapFeeReceiverFactor": "0",
"borrowingFeeReceiverFactor": "0",
"liquidationFeeReceiverFactor": "0",
"skipBorrowingFeeForSmallerSide": true,
"maxExecutionFeeMultiplierFactor": "100000000000000000000000000000000",
"oracleProviderMinChangeDelay": 3600,
"configMaxPriceAge": 180,
"gelatoRelayFeeMultiplierFactor": "0",
"gelatoRelayFeeBaseAmount": 0,
"relayFeeAddress": "0x0000000000000000000000000000000000000000",
"maxRelayFeeUsdForSubaccount": "0",
"maxDataLength": 18,
"multichainProviders": {},
"multichainEndpoints": {},
"srcChainIds": {},
"eids": {}
},
"roles": {
"CONTROLLER": {},
"ORDER_KEEPER": {},
"ADL_KEEPER": {},
"LIQUIDATION_KEEPER": {},
"MARKET_KEEPER": {},
"FROZEN_ORDER_KEEPER": {},
"CONFIG_KEEPER": {},
"LIMITED_CONFIG_KEEPER": {},
"TIMELOCK_ADMIN": {}
},
"tokens": {},
"markets": []
}

View File

@@ -1,76 +0,0 @@
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"description": "Canonical Chain 138 remaining native protocol surface inventory for Aave, GMX, and dYdX.",
"version": "1.0.0",
"updated": "2026-04-15",
"chainId": 138,
"network": "Chain 138",
"protocols": [
{
"key": "aave",
"status": "source-backed",
"discoveredAddresses": {},
"sourceSubmodule": "vendor/chain138-protocols/aave-v3-origin",
"discoveryEvidence": [
"2026-04-15: explorer search /api/v2/search?q=Aave returned items=[]",
"2026-04-15: token-aggregation provider capabilities for chainId=138 did not advertise provider=aave",
"2026-04-15: MEV venue coverage and native-venue-coverage for chainId=138 did not include venue=aave",
"2026-04-15: eth_getLogs scan for Aave PoolAddressesProvider event topics returned no matches from block 0..latest"
],
"requiredEnv": [
"CHAIN_138_AAVE_POOL",
"CHAIN_138_AAVE_POOL_ADDRESSES_PROVIDER",
"CHAIN_138_AAVE_POOL_DATA_PROVIDER",
"CHAIN_138_AAVE_START_BLOCK",
"CHAIN_138_AAVE_EXECUTOR_TREASURY",
"CHAIN_138_AAVE_EXECUTOR_OWNER",
"CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER_OWNER"
],
"deployerScripts": [
"scripts/deployment/deploy-chain138-aave-v3-execution-stack.sh",
"scripts/deployment/deploy-chain138-aave-quote-push-receiver.sh",
"scripts/deployment/publish-chain138-aave-runtime-from-artifacts.sh"
],
"verifierScripts": [
"scripts/verify/check-chain138-remaining-protocol-env.sh",
"scripts/verify/check-chain138-aave-rollout-readiness.sh"
]
},
{
"key": "gmx",
"status": "source-backed",
"discoveredAddresses": {},
"sourceSubmodule": "vendor/chain138-protocols/gmx-synthetics",
"discoveryEvidence": [
"2026-04-15: explorer search /api/v2/search?q=GMX returned items=[]",
"2026-04-15: token-aggregation provider capabilities for chainId=138 did not advertise provider=gmx",
"2026-04-15: MEV venue coverage and native-venue-coverage for chainId=138 did not include venue=gmx",
"2026-04-15: imported official upstream source submodule gmx-io/gmx-synthetics into vendor/chain138-protocols/gmx-synthetics"
],
"requiredEnv": [
"CHAIN_138_GMX_ROUTER",
"CHAIN_138_GMX_EXCHANGE_ROUTER",
"CHAIN_138_GMX_READER",
"CHAIN_138_GMX_ORDER_VAULT",
"CHAIN_138_GMX_DEPOSIT_VAULT",
"CHAIN_138_GMX_WITHDRAWAL_VAULT",
"CHAIN_138_GMX_START_BLOCK"
]
},
{
"key": "dydx",
"status": "inventory-only",
"discoveredAddresses": {},
"discoveryEvidence": [
"2026-04-15: explorer search /api/v2/search?q=dydx, dYdX, and SoloMargin returned items=[]",
"2026-04-15: token-aggregation provider capabilities for chainId=138 did not advertise provider=dydx",
"2026-04-15: MEV venue coverage and native-venue-coverage for chainId=138 did not include venue=dydx"
],
"requiredEnv": [
"CHAIN_138_DYDX_SOLO",
"CHAIN_138_DYDX_DATA_PROVIDER",
"CHAIN_138_DYDX_START_BLOCK"
]
}
]
}
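The `requiredEnv` lists above feed checks like `scripts/verify/check-chain138-remaining-protocol-env.sh`. A minimal illustrative checker (not the repo's actual script) could look like:

```shell
#!/usr/bin/env bash
# Illustrative required-env checker: report every missing variable
# instead of stopping at the first one, so operators can backfill in bulk.
check_required_env() {
  local missing=0 var
  for var in "$@"; do
    if [ -z "${!var:-}" ]; then
      echo "missing required env: $var" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example against the dYdX inventory entry:
# check_required_env CHAIN_138_DYDX_SOLO CHAIN_138_DYDX_DATA_PROVIDER CHAIN_138_DYDX_START_BLOCK
```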

View File

@@ -2076,10 +2076,10 @@
"baseSymbol": "cWETH",
"quoteSymbol": "USDC",
"poolAddress": "0xd012000000000000000000000000000000000001",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_mainnet",
"venue": "dodo_pmm",
@@ -2091,10 +2091,10 @@
"baseSymbol": "cWETH",
"quoteSymbol": "WETH",
"poolAddress": "0xd011000000000000000000000000000000000001",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_mainnet",
"venue": "dodo_pmm",
@@ -2150,10 +2150,10 @@
"baseSymbol": "cWETHL2",
"quoteSymbol": "USDC",
"poolAddress": "0xd02200000000000000000000000000000000000a",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_l2",
"venue": "dodo_pmm",
@@ -2165,10 +2165,10 @@
"baseSymbol": "cWETHL2",
"quoteSymbol": "WETH",
"poolAddress": "0xd02100000000000000000000000000000000000a",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_l2",
"venue": "dodo_pmm",
@@ -2246,10 +2246,10 @@
"baseSymbol": "cWXDAI",
"quoteSymbol": "USDC",
"poolAddress": "0xd072000000000000000000000000000000000064",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "xdai",
"venue": "dodo_pmm",
@@ -2261,10 +2261,10 @@
"baseSymbol": "cWXDAI",
"quoteSymbol": "WXDAI",
"poolAddress": "0xd071000000000000000000000000000000000064",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "xdai",
"venue": "dodo_pmm",
@@ -2276,10 +2276,10 @@
"baseSymbol": "cWWEMIX",
"quoteSymbol": "USDC",
"poolAddress": "0xd092000000000000000000000000000000000457",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "wemix",
"venue": "dodo_pmm",
@@ -2291,10 +2291,10 @@
"baseSymbol": "cWWEMIX",
"quoteSymbol": "WWEMIX",
"poolAddress": "0xd091000000000000000000000000000000000457",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "wemix",
"venue": "dodo_pmm",
@@ -2339,10 +2339,10 @@
"baseSymbol": "cWPOL",
"quoteSymbol": "USDC",
"poolAddress": "0xd042000000000000000000000000000000000089",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "pol",
"venue": "dodo_pmm",
@@ -2354,10 +2354,10 @@
"baseSymbol": "cWPOL",
"quoteSymbol": "WPOL",
"poolAddress": "0xd041000000000000000000000000000000000089",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "pol",
"venue": "dodo_pmm",
@@ -2413,10 +2413,10 @@
"baseSymbol": "cWCRO",
"quoteSymbol": "USDT",
"poolAddress": "0xd062000000000000000000000000000000000019",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "cro",
"venue": "dodo_pmm",
@@ -2428,10 +2428,10 @@
"baseSymbol": "cWCRO",
"quoteSymbol": "WCRO",
"poolAddress": "0xd061000000000000000000000000000000000019",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "cro",
"venue": "dodo_pmm",
@@ -2487,10 +2487,10 @@
"baseSymbol": "cWETHL2",
"quoteSymbol": "USDC",
"poolAddress": "0xd02200000000000000000000000000000000a4b1",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_l2",
"venue": "dodo_pmm",
@@ -2502,10 +2502,10 @@
"baseSymbol": "cWETHL2",
"quoteSymbol": "WETH",
"poolAddress": "0xd02100000000000000000000000000000000a4b1",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_l2",
"venue": "dodo_pmm",
@@ -2572,10 +2572,10 @@
"baseSymbol": "cWCELO",
"quoteSymbol": "USDC",
"poolAddress": "0xd08200000000000000000000000000000000a4ec",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "celo",
"venue": "dodo_pmm",
@@ -2587,10 +2587,10 @@
"baseSymbol": "cWCELO",
"quoteSymbol": "WCELO",
"poolAddress": "0xd08100000000000000000000000000000000a4ec",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "celo",
"venue": "dodo_pmm",
@@ -2635,10 +2635,10 @@
"baseSymbol": "cWAVAX",
"quoteSymbol": "USDC",
"poolAddress": "0xd05200000000000000000000000000000000a86a",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "avax",
"venue": "dodo_pmm",
@@ -2650,10 +2650,10 @@
"baseSymbol": "cWAVAX",
"quoteSymbol": "WAVAX",
"poolAddress": "0xd05100000000000000000000000000000000a86a",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "avax",
"venue": "dodo_pmm",
@@ -2720,10 +2720,10 @@
"baseSymbol": "cWBNB",
"quoteSymbol": "USDT",
"poolAddress": "0xd032000000000000000000000000000000000038",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "bnb",
"venue": "dodo_pmm",
@@ -2735,10 +2735,10 @@
"baseSymbol": "cWBNB",
"quoteSymbol": "WBNB",
"poolAddress": "0xd031000000000000000000000000000000000038",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "bnb",
"venue": "dodo_pmm",
@@ -2816,10 +2816,10 @@
"baseSymbol": "cWETHL2",
"quoteSymbol": "USDC",
"poolAddress": "0xd022000000000000000000000000000000002105",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_l2",
"venue": "dodo_pmm",
@@ -2831,10 +2831,10 @@
"baseSymbol": "cWETHL2",
"quoteSymbol": "WETH",
"poolAddress": "0xd021000000000000000000000000000000002105",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_l2",
"venue": "dodo_pmm",

View File

@@ -1936,7 +1936,7 @@
"key": "Compliant_WEMIX_cW",
"name": "cWEMIX->cWWEMIX",
"addressFrom": "0x4d82206bec5b4dfa17759ffede07e35f4f63a050",
"addressTo": "0xc111000000000000000000000000000000000457",
"addressTo": "0x4c38f9a5ed68a04cd28a72e8c68c459ec34576f3",
"notes": "Wave 1 gas-family lane wemix: Chain 138 cWEMIX -> Wemix cWWEMIX. hybrid_cap backing with uniswap_v3 reference pricing and DODO PMM edge liquidity."
}
]

View File

@@ -1,121 +0,0 @@
# Chain 138 Aave Blocker Removal Worksheet
Use this worksheet to remove the remaining native Chain `138` Aave blockers in a controlled order.
## Goal
Populate and verify the canonical Chain `138` Aave market addresses and operator values required for:
- `scripts/deployment/deploy-chain138-aave-v3-execution-stack.sh`
- `scripts/deployment/deploy-chain138-aave-quote-push-receiver.sh`
- `scripts/deployment/publish-chain138-aave-runtime-from-artifacts.sh`
## Canonical inputs
Fill these values first:
- `CHAIN_138_AAVE_POOL`
- `CHAIN_138_AAVE_POOL_ADDRESSES_PROVIDER`
- `CHAIN_138_AAVE_POOL_DATA_PROVIDER`
- `CHAIN_138_AAVE_START_BLOCK`
- `CHAIN_138_AAVE_EXECUTOR_TREASURY`
- `CHAIN_138_AAVE_EXECUTOR_OWNER`
- `CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER_OWNER`
Use the manifest template:
- [chain138-aave-rollout-manifest.example.json](/home/intlc/projects/proxmox/config/chain138-aave-rollout-manifest.example.json:1)
Apply a filled manifest into an env snippet with:
```bash
bash scripts/deployment/apply-chain138-aave-manifest.sh \
--manifest config/chain138-aave-rollout-manifest.example.json \
--write reports/status/chain138_aave_runtime.env
```
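A quick sanity pass over the rendered snippet catches empty keys before any deploy step. A minimal sketch — the helper name and grep pattern are illustrative, not a repo script:

```bash
# Hypothetical helper: confirm a rendered env snippet defines all seven
# canonical CHAIN_138_AAVE_* keys with non-empty values.
check_chain138_aave_env() {
  local env_file="$1" missing=0 key
  local required=(
    CHAIN_138_AAVE_POOL
    CHAIN_138_AAVE_POOL_ADDRESSES_PROVIDER
    CHAIN_138_AAVE_POOL_DATA_PROVIDER
    CHAIN_138_AAVE_START_BLOCK
    CHAIN_138_AAVE_EXECUTOR_TREASURY
    CHAIN_138_AAVE_EXECUTOR_OWNER
    CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER_OWNER
  )
  for key in "${required[@]}"; do
    # accept "KEY=value" or "export KEY=value", reject empty values
    if ! grep -Eq "^(export )?${key}=.+" "$env_file"; then
      echo "missing or empty: ${key}" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Run it against `reports/status/chain138_aave_runtime.env` after the apply step above; a non-zero exit means at least one canonical input is still unset.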
## Blocker-by-blocker checklist
### 1. `CHAIN_138_AAVE_POOL`
- [ ] confirm the native Chain `138` Aave pool contract is deployed
- [ ] confirm bytecode exists on Chain `138`
- [ ] confirm `ADDRESSES_PROVIDER()` is readable
- [ ] confirm `FLASHLOAN_PREMIUM_TOTAL()` is readable
- [ ] record the address in the manifest and runtime env
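The bytecode and readability checks above can be spot-checked from a shell. A sketch assuming Foundry's `cast` is installed — the function name is illustrative:

```bash
# Hypothetical preflight for checklist item 1 (assumes Foundry's `cast`).
check_aave_pool() {
  local rpc="$1" pool="$2" code
  # bytecode must exist at the address
  code="$(cast code --rpc-url "$rpc" "$pool")" || return 1
  if [ "$code" = "0x" ] || [ -z "$code" ]; then
    echo "no bytecode at $pool" >&2
    return 1
  fi
  # both view functions must be readable
  cast call --rpc-url "$rpc" "$pool" "ADDRESSES_PROVIDER()(address)" || return 1
  cast call --rpc-url "$rpc" "$pool" "FLASHLOAN_PREMIUM_TOTAL()(uint128)" || return 1
}

# usage: check_aave_pool "${RPC_URL_138:-http://192.168.11.211:8545}" "$CHAIN_138_AAVE_POOL"
```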
### 2. `CHAIN_138_AAVE_POOL_ADDRESSES_PROVIDER`
- [ ] confirm the native Chain `138` Aave `PoolAddressesProvider` is deployed
- [ ] confirm bytecode exists on Chain `138`
- [ ] confirm `pool.ADDRESSES_PROVIDER()` equals this value
- [ ] record the address in the manifest and runtime env
### 3. `CHAIN_138_AAVE_POOL_DATA_PROVIDER`
- [ ] confirm the native Chain `138` Aave market data-provider contract is deployed
- [ ] confirm bytecode exists on Chain `138`
- [ ] confirm it belongs to the same market as the Pool and AddressesProvider
- [ ] record the address in the manifest and runtime env
### 4. `CHAIN_138_AAVE_START_BLOCK`
- [ ] choose the canonical start block for indexing / discovery
- [ ] verify it is numeric
- [ ] record it in the manifest and runtime env
### 5. `CHAIN_138_AAVE_EXECUTOR_TREASURY`
- [ ] decide the treasury address for the Chain `138` Aave-backed execution stack
- [ ] confirm it is a real operator-controlled address
- [ ] record it in the manifest and runtime env
### 6. `CHAIN_138_AAVE_EXECUTOR_OWNER`
- [ ] decide the final owner for the execution stack
- [ ] confirm it is the intended operational owner
- [ ] record it in the manifest and runtime env
### 7. `CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER_OWNER`
- [ ] decide the final owner for the `AaveQuotePushFlashReceiver`
- [ ] confirm whether it should match the executor owner or use a separate operator address
- [ ] record it in the manifest and runtime env
## Verification commands
Run these after the manifest/env is populated:
```bash
bash scripts/verify/check-chain138-remaining-protocol-env.sh
bash scripts/verify/check-chain138-aave-rollout-readiness.sh
```
## Deployment commands
Dry-run first:
```bash
bash scripts/deployment/deploy-chain138-aave-v3-execution-stack.sh --dry-run
bash scripts/deployment/deploy-chain138-aave-quote-push-receiver.sh --dry-run
```
Apply once all inputs are real and funded:
```bash
bash scripts/deployment/deploy-chain138-aave-v3-execution-stack.sh --apply
bash scripts/deployment/deploy-chain138-aave-quote-push-receiver.sh --apply
bash scripts/deployment/publish-chain138-aave-runtime-from-artifacts.sh
```
## Completion condition
The Chain `138` Aave blocker set is removed only when:
1. all seven canonical inputs are populated
2. the Pool, AddressesProvider, and DataProvider have bytecode on Chain `138`
3. `check-chain138-aave-rollout-readiness.sh` passes
4. the execution stack is deployed
5. the quote-push receiver is deployed
6. runtime publication is complete
7. docs/config/registry reflect the live on-chain values

View File

@@ -1,60 +0,0 @@
# Chain 138 Aave V3 Origin and GMX Synthetics Scaffold
This document records the native deployment scaffold added after importing:
- `vendor/chain138-protocols/aave-v3-origin`
- `vendor/chain138-protocols/gmx-synthetics`
## Aave V3 Origin
Canonical manifest template:
- [chain138-aave-v3-origin-manifest.example.json](/home/intlc/projects/proxmox/config/chain138-aave-v3-origin-manifest.example.json:1)
Renderer:
- [render-chain138-aave-v3-origin-market-input.py](/home/intlc/projects/proxmox/scripts/deployment/render-chain138-aave-v3-origin-market-input.py:1)
Deploy wrapper:
- [deploy-chain138-aave-v3-origin-market.sh](/home/intlc/projects/proxmox/scripts/deployment/deploy-chain138-aave-v3-origin-market.sh:1)
This path generates a custom Chain `138` market-input contract under the imported Aave source tree and runs the upstream Foundry batch deployment script against it.
Dry-run:
```bash
bash scripts/deployment/deploy-chain138-aave-v3-origin-market.sh --dry-run
```
## GMX Synthetics
Canonical manifest template:
- [chain138-gmx-synthetics-manifest.example.json](/home/intlc/projects/proxmox/config/chain138-gmx-synthetics-manifest.example.json:1)
Overlay renderer:
- [render-chain138-gmx-synthetics-overlay.py](/home/intlc/projects/proxmox/scripts/deployment/render-chain138-gmx-synthetics-overlay.py:1)
Prepare helper:
- [prepare-chain138-gmx-synthetics-overlay.sh](/home/intlc/projects/proxmox/scripts/deployment/prepare-chain138-gmx-synthetics-overlay.sh:1)
Core deploy wrapper:
- [deploy-chain138-gmx-synthetics-core.sh](/home/intlc/projects/proxmox/scripts/deployment/deploy-chain138-gmx-synthetics-core.sh:1)
This path generates a Chain `138` Hardhat overlay config and minimal per-network modules, then runs the upstream GMX synthetics deploy flow against that overlay.
Dry-run:
```bash
bash scripts/deployment/deploy-chain138-gmx-synthetics-core.sh --dry-run
```
## Current boundary
These scaffolds make `Aave` and `GMX` deploy-program-backed from this repo.
They do **not** mean Chain `138` live deployment is complete yet. Real manifests, live token/oracle/role inputs, and funded operator accounts are still required before an apply run can be truthful.

View File

@@ -1,96 +0,0 @@
# Chain 138 Blockscout DODO Insert and Route Lineage Report
Date: 2026-04-16
### Summary
- The live Chain 138 route execution stack is traced to the `DeployEnhancedSwapRouterV2.s.sol` broadcast lineage.
- The deployed `DodoRouteExecutorAdapter` does not match the current local Foundry artifact runtime bytecode.
- `D3MMFactory` and `D3Proxy` both reach the Blockscout verifier successfully, but still fail during `smart_contracts` materialization.
### Route execution lineage
The following live contracts and creation transactions are present in:
- `smom-dbis-138/broadcast/DeployEnhancedSwapRouterV2.s.sol/138/run-latest.json`
- `smom-dbis-138/broadcast/DeployEnhancedSwapRouterV2.s.sol/138/run-1775195187069.json`
Recovered live lineage:
- `EnhancedSwapRouterV2`
- address: `0xf1c93f54a5c2fc0d7766ccb0ad8f157dfb4c99ce`
- create tx: `0x30e68f519243377006e93dd82823305729a1ede5f03e744e27e5d57c7b6766a7`
- `IntentBridgeCoordinatorV2`
- address: `0x7d0022b7e8360172fd9c0bb6778113b7ea3674e7`
- create tx: `0x73fc5f883eda73370e3a4f0d800453e095cee07ef5f37793ed4576f47b4fa5fb`
- `DodoRouteExecutorAdapter`
- address: `0x88495b3dccea93b0633390fde71992683121fa62`
- create tx: `0xc574dde65e90421ed1ff5600c1f9dd71a6b8afb5a1b8416b1dde38bd2961120c`
- `DodoV3RouteExecutorAdapter`
- address: `0x9cb97add29c52e3b81989bca2e33d46074b530ef`
- create tx: `0x7c8b4d9a43913c5b97bd2e348282d85987b342377809158ec06c6c47e7e6349c`
- `UniswapV3RouteExecutorAdapter`
- address: `0x960d6db4e78705f82995690548556fb2266308ea`
- create tx: `0xcd82bcb291f7ef36319059081d642a2c7ff1efeff160ba1f077a98ee4d17483d`
- `BalancerRouteExecutorAdapter`
- address: `0x4e1b71b69188ab45021c797039b4887a4924157a`
- create tx: `0xbe2923798c55d562d1097bfa4e861c38e825357fcc34e9b95a6344a5ba0423b7`
### Route artifact drift
The local verification path now checks the Foundry `deployedBytecode.object` hash against the live Chain 138 runtime bytecode before submitting.
Current canary:
- contract: `DodoRouteExecutorAdapter`
- local artifact keccak:
- `0xd908d26eef16d34c45f16d1584f2ae7c74535c8352d4be011520c0a09b892301`
- on-chain runtime keccak:
- `0x2c2bd22d9b8734935aee04d86664b72e5160cc8ddddef6ea4b50667d3095b90b`
Conclusion:
- the route-stack verification blocker is currently a real deployed-source or build-profile drift, not Blockscout transport plumbing
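The drift check amounts to comparing two keccak hashes. A sketch assuming Foundry's `cast` — the function name is illustrative, and a naive comparison is only meaningful when the artifact carries no immutable placeholders or differing metadata hash:

```bash
# Hypothetical drift probe: compare a local Foundry deployedBytecode.object
# hash against the live runtime bytecode hash.
runtime_drift() {
  # $1 = local deployedBytecode.object hex, $2 = rpc url, $3 = address
  local local_hash onchain_hash
  local_hash="$(cast keccak "$1")"
  onchain_hash="$(cast keccak "$(cast code --rpc-url "$2" "$3")")"
  if [ "$local_hash" != "$onchain_hash" ]; then
    echo "drift: local=$local_hash onchain=$onchain_hash"
    return 1
  fi
  echo "match: $local_hash"
}

# usage (artifact path illustrative):
#   runtime_drift "$(jq -r '.deployedBytecode.object' \
#     out/DodoRouteExecutorAdapter.sol/DodoRouteExecutorAdapter.json)" \
#     "${RPC_URL_138:-http://192.168.11.211:8545}" \
#     0x88495b3dccea93b0633390fde71992683121fa62
```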
### DODO verifier / materialization path
Observed behavior for:
- `D3MMFactory` `0x78470c7d2925b6738544e2dd4fe7c07cca21ac31`
- `D3Proxy` `0xc9a11abb7c63d88546be24d58a6d95e3762cb843`
Current live path:
1. Blockscout accepts the `standard-input` verification request.
2. `smart-contract-verifier` receives the request.
3. Blockscout logs `Solidity standard-json verifier result ... {:ok, ...}`.
4. Blockscout logs:
- `create_or_update_smart_contract ... existing=nil incoming_partially_verified=true ...`
- `create_smart_contract transaction result ... {:error, :smart_contract, #Ecto.Changeset<...>}`
5. The contract still does not materialize into `smart_contracts`.
Current explorer-side status:
- public explorer API still returns:
- `name: null`
- `compiler_version: null`
- `is_partially_verified: null`
### Source instrumentation state
The live `v7.0.2` Blockscout source tree on CT `5000` contains:
- `create_smart_contract changeset errors address=... errors=... constraints=...`
- `changeset/2` defaulting missing `language` to `:solidity`
The remaining live task is to get a rebuilt runtime image that definitely contains that patched branch, then resubmit `D3MMFactory` and `D3Proxy` and read the parsed changeset errors from the Blockscout logs.
### Remaining work
1. Finish the in-progress no-cache rebuild of `blockscout/blockscout:chain138-patched-v7`.
2. Restart CT `5000` Blockscout on the rebuilt image.
3. Resubmit:
- `D3MMFactory`
- `D3Proxy`
4. Capture the parsed `changeset` errors from the new `create_smart_contract changeset errors ...` log line.
5. For the route stack, recover the exact historical source/build profile that produced the currently deployed bytecode before re-verifying.

View File

@@ -1,48 +0,0 @@
# Chain 138 Deployed Smart Contract Verification Status
This report is generated from the canonical Chain `138` inventory in `config/smart-contracts-master.json`, on-chain bytecode checks against the Core RPC, and Blockscout smart-contract metadata from the internal explorer API.
## Summary
| Group | Total | Deployed | Verified | Bytecode only | Pending |
| --- | ---: | ---: | ---: | ---: | ---: |
| `dodo_v3_core` | 6 | 6 | 4 | 2 | 0 |
| `flash_infra` | 3 | 3 | 0 | 3 | 0 |
| `native_v2` | 4 | 4 | 4 | 0 | 0 |
| `route_execution_stack` | 12 | 12 | 0 | 12 | 0 |
## Inventory
| Group | Label | Address | Deployed | Verification | Blockscout name | Compiler |
| --- | --- | --- | --- | --- | --- | --- |
| `dodo_v3_core` | `D3Oracle` | `0xD7459aEa8bB53C83a1e90262777D730539A326F0` | yes | `verified` | `D3Oracle` | `v0.8.16+commit.07a7930e` |
| `dodo_v3_core` | `D3Vault` | `0x42b6867260Fb9eE6d09B7E0233A1fAD65D0133D1` | yes | `verified` | `D3Vault` | `v0.8.16+commit.07a7930e` |
| `dodo_v3_core` | `DODOApprove` | `0xbF8D5CB7E8F333CA686a27374Ae06F5dfd772E9E` | yes | `verified` | `DODOApprove` | `v0.8.16+commit.07a7930e` |
| `dodo_v3_core` | `DODOApproveProxy` | `0x08d764c03C42635d8ef9046752b5694243E21Fe9` | yes | `verified` | `DODOApproveProxy` | `v0.8.16+commit.07a7930e` |
| `dodo_v3_core` | `D3MMFactory` | `0x78470C7d2925B6738544E2DD4FE7c07CcA21AC31` | yes | `bytecode-only` | `` | `` |
| `dodo_v3_core` | `D3Proxy` | `0xc9a11abB7C63d88546Be24D58a6d95e3762cB843` | yes | `bytecode-only` | `` | `` |
| `flash_infra` | `UniversalCCIPFlashBridgeAdapter` | `0xBe9e0B2d4cF6A3b2994d6f2f0904D2B165eB8ffC` | yes | `bytecode-only` | `` | `` |
| `flash_infra` | `CrossChainFlashRepayReceiver` | `0xD084b68cB4B1ef2cBA09CF99FB1B6552fd9b4859` | yes | `bytecode-only` | `` | `` |
| `flash_infra` | `CrossChainFlashVaultCreditReceiver` | `0x89F7a1fcbBe104BeE96Da4b4b6b7d3AF85f7E661` | yes | `bytecode-only` | `` | `` |
| `native_v2` | `UniswapV2Factory` | `0x0C30F6e67Ab3667fCc2f5CEA8e274ef1FB920279` | yes | `verified` | `UniswapV2Factory` | `v0.5.16+commit.9c3226ce` |
| `native_v2` | `UniswapV2Router` | `0x3019A7fDc76ba7F64F18d78e66842760037ee638` | yes | `verified` | `UniswapV2Router02` | `v0.6.6+commit.6c089d02` |
| `native_v2` | `SushiSwapFactory` | `0x2871207ff0d56089D70c0134d33f1291B6Fce0BE` | yes | `verified` | `UniswapV2Factory` | `v0.6.12+commit.27d51765` |
| `native_v2` | `SushiSwapRouter` | `0xB37b93D38559f53b62ab020A14919f2630a1aE34` | yes | `verified` | `UniswapV2Router02` | `v0.6.12+commit.27d51765` |
| `route_execution_stack` | `EnhancedSwapRouterV2` | `0xF1c93F54A5C2fc0d7766Ccb0Ad8f157DFB4C99Ce` | yes | `bytecode-only` | `` | `` |
| `route_execution_stack` | `IntentBridgeCoordinatorV2` | `0x7D0022B7e8360172fd9C0bB6778113b7Ea3674E7` | yes | `bytecode-only` | `` | `` |
| `route_execution_stack` | `DodoRouteExecutorAdapter` | `0x88495B3dccEA93b0633390fDE71992683121Fa62` | yes | `bytecode-only` | `` | `` |
| `route_execution_stack` | `DodoV3RouteExecutorAdapter` | `0x9Cb97adD29c52e3B81989BcA2E33D46074B530eF` | yes | `bytecode-only` | `` | `` |
| `route_execution_stack` | `UniswapV3RouteExecutorAdapter` | `0x960D6db4E78705f82995690548556fb2266308EA` | yes | `bytecode-only` | `` | `` |
| `route_execution_stack` | `BalancerRouteExecutorAdapter` | `0x4E1B71B69188Ab45021c797039b4887a4924157A` | yes | `bytecode-only` | `` | `` |
| `route_execution_stack` | `CurveRouteExecutorAdapter` | `0x5f0E07071c41ACcD2A1b1032D3bd49b323b9ADE6` | yes | `bytecode-only` | `` | `` |
| `route_execution_stack` | `OneInchRouteExecutorAdapter` | `0x8168083d29b3293F215392A49D16e7FeF4a02600` | yes | `bytecode-only` | `` | `` |
| `route_execution_stack` | `PilotUniswapV3Router` | `0xD164D9cCfAcf5D9F91698f296aE0cd245D964384` | yes | `bytecode-only` | `` | `` |
| `route_execution_stack` | `PilotBalancerVault` | `0x96423d7C1727698D8a25EbFB88131e9422d1a3C3` | yes | `bytecode-only` | `` | `` |
| `route_execution_stack` | `PilotCurve3Pool` | `0xE440Ec15805BE4C7BabCD17A63B8C8A08a492e0f` | yes | `bytecode-only` | `` | `` |
| `route_execution_stack` | `PilotOneInchRouter` | `0x500B84b1Bc6F59C1898a5Fe538eA20A758757A4F` | yes | `bytecode-only` | `` | `` |
## Notes
- `verified` means Blockscout currently exposes both a contract name and compiler version.
- `bytecode-only` means the address is known to the explorer, but source metadata has not materialized yet.
- `pending` means the contract is deployed in the canonical inventory, but the current Blockscout API response does not yet expose verification metadata.
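The three labels reduce to a small decision rule. A hedged sketch — the function name is illustrative, and the mapping from explorer fields to inputs is an assumption based on the notes above:

```bash
# Hypothetical status rule mirroring the notes:
#   name + compiler present        -> verified
#   address known to the explorer  -> bytecode-only
#   otherwise                      -> pending
verification_status() {
  # $1 = Blockscout contract name (may be empty)
  # $2 = compiler version (may be empty)
  # $3 = "yes" when the explorer knows the address
  if [ -n "$1" ] && [ -n "$2" ]; then
    echo "verified"
  elif [ "$3" = "yes" ]; then
    echo "bytecode-only"
  else
    echo "pending"
  fi
}
```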

View File

@@ -1,119 +0,0 @@
# Chain 138 — remaining fixes and external listings
**Purpose:** One place for (1) **in-repo / operator fixes** that improve pricing and wallet UX without waiting on third parties, and (2) **everything needed to get native fiat** (MetaMask, Trust, etc.) via **external listings**.
**Canonical token addresses (submission must match):** [EXPLORER_TOKEN_LIST_CROSSCHECK.md](../11-references/EXPLORER_TOKEN_LIST_CROSSCHECK.md) section 5.
---
## 1. Already done (on-chain + developer tooling)
| Item | Where |
|------|--------|
| Reserve + D3 WETH path (mock, non-stale quotes) | `scripts/deployment/fix-chain138-pricing-feeds.sh` — see [CHAIN138_PRICING_FEEDS_LIVE.md](CHAIN138_PRICING_FEEDS_LIVE.md) |
| D3 verifier default WETH source | `scripts/verify/check-dodo-v3-chain138.sh` |
| dApp USD hints (ETH + stables including mirror **USDT/USDC**) | `metamask-integration/provider/oracles.js` — `getEthUsdPrice`, `getAssetUsdPrice` |
| CoinGecko submission package token table | [coingecko/COINGECKO_SUBMISSION_PACKAGE.md](coingecko/COINGECKO_SUBMISSION_PACKAGE.md) — **aligned to** [EXPLORER_TOKEN_LIST_CROSSCHECK.md](../11-references/EXPLORER_TOKEN_LIST_CROSSCHECK.md) **§5** (includes **cUSDT/cUSDC V2**, mirror **USDT/USDC** for forms; full set in §5) |
---
## 2. Additional fixes (recommended, no listing required)
These improve **correctness**, **ops**, or **dApp** UX; they do **not** by themselves fill MetaMask's native fiat column on chain 138.
| Priority | Action | Why |
|----------|--------|-----|
| High | Keep **`smom-dbis-138/scripts/reserve/sync-weth-mock-price.sh`** on schedule (e.g. `pmm-mesh-6s-automation.sh`) | `OraclePriceFeed` rejects aggregator reads when `updatedAt` is older than `updateInterval * 2` (~60s with default 30s interval). |
| High | Run **`PriceFeedKeeper.performUpkeep`** (systemd `keeper-service.js` or mesh script) with **`KEEPER_PRIVATE_KEY`** and **`PRICE_FEED_KEEPER_ADDRESS`** | Pushes tracked assets (now includes **WETH10**) into `ReserveSystem` on cadence. |
| Medium | **`pnpm run verify:token-aggregation-api`** or LAN push scripts if report routes break | CoinGecko/CMC submissions and Snaps depend on report JSON and `/api/config/token-list`. Runbook: [TOKEN_AGGREGATION_REPORT_API_RUNBOOK.md](TOKEN_AGGREGATION_REPORT_API_RUNBOOK.md). |
| Medium | **Blockscout WETH9 metadata** (`name`/`symbol`/`decimals` null or 0 on `/api/v2/tokens`) | Wallets and indexers read Explorer; see [EXPLORER_TOKEN_LIST_CROSSCHECK.md](../11-references/EXPLORER_TOKEN_LIST_CROSSCHECK.md) §2.1 — fix contract metadata or Blockscout re-index / override. |
| Low | Re-run **`bash scripts/verify/check-chain138-rpc-health.sh`** after infra changes | Head spread and public RPC sanity. |
| Low | Ensure token-aggregation **`.env`** sets **`USDT_ADDRESS_138`**, **`USDC_ADDRESS_138`** (or `OFFICIAL_USDT_ADDRESS` / `OFFICIAL_USDC_ADDRESS`) so `report/coingecko?chainId=138` matches mirror **USDT/USDC** used on-chain; optional schema fields for `coingecko_id` when platforms support chain **138** | Avoids export vs wallet-address drift (see `canonical-tokens.ts` Chain **138** USDT/USDC resolution). |
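The freshness rule in the first row reduces to simple arithmetic on `updatedAt`. A sketch — the function name is illustrative, and the commented `cast` usage assumes Foundry and a LAN RPC:

```bash
# Hypothetical staleness probe: OraclePriceFeed rejects aggregator reads
# when updatedAt is older than updateInterval * 2.
staleness_check() {
  # $1 = now (unix seconds), $2 = updatedAt, $3 = updateInterval (seconds)
  if [ $(( $1 - $2 )) -gt $(( $3 * 2 )) ]; then
    echo "stale"
  else
    echo "fresh"
  fi
}

# live usage (updatedAt is the 4th value of latestRoundData):
#   updated_at=$(cast call --rpc-url "$RPC_URL_138" "$FEED" \
#     "latestRoundData()(uint80,int256,uint256,uint256,uint80)" | sed -n 4p)
#   staleness_check "$(date +%s)" "$updated_at" 30
```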
---
## 3. External listings — what each integration needs
### Listing scope: `c*` on Chain 138 vs `cW*` on host chains
**No — not automatically.** CoinGecko, CMC, and MetaMask-style price backends index **(chain ID, contract address)**. The materials in [COINGECKO_SUBMISSION_PACKAGE.md](coingecko/COINGECKO_SUBMISSION_PACKAGE.md) and the **Chain 138** export `?chainId=138` cover **GRU base money and mirrors on chain 138** (`cUSDT`, `cUSDC`, mirror **USDT/USDC**, WETH variants, etc.), not the **mesh `cW*`** tokens deployed on **other networks**.
**`cW*`** (public PMM / corridor capacity on Ethereum, Polygon, Arbitrum, …) live at **different chain IDs** — see [deployment-status.json](../../cross-chain-pmm-lps/config/deployment-status.json) (`chains."1".cwTokens`, and other `chains.*` entries). Each contract needs its **own** listing row under the **correct host chain** (e.g. Ethereum **1** for most `cWUSDT` / `cWUSDC` / `cWEURC` …).
**Token-aggregation exports for host chains:** `GET /api/v1/report/coingecko?chainId=1` (and other supported IDs in `smom-dbis-138/services/token-aggregation/src/config/chains.ts`) **only includes tokens that have non-empty canonical addresses for that `chainId`** and meaningful rows if the indexer has **`CHAIN_1_RPC_URL` / `ETHEREUM_MAINNET_RPC`** (or the matching `CHAIN_*_RPC_URL`) and DB coverage. If the export is empty, use `deployment-status.json` plus manual forms until indexing is wired — same URL pattern as [CMC_COINGECKO_SUBMISSION_RUNBOOK.md](coingecko/CMC_COINGECKO_SUBMISSION_RUNBOOK.md) §2.1, **per chain**.
**Policy context:** `cW*` is the **execution / mesh** layer; `c*` on 138 is **reference-aligned base money** — [GRU_REFERENCE_PRIMACY_AND_MESH_EXECUTION_MODEL.md](GRU_REFERENCE_PRIMACY_AND_MESH_EXECUTION_MODEL.md). Listings are still **per-chain contracts** for wallet price UIs.
### 3.1 CoinGecko (highest impact for MetaMask-style fiat)
**Goal:** Chain **138** and each **contract** indexed so price APIs return USD.
| You need | Detail |
|----------|--------|
| **Account** | CoinGecko user; use [request / new coin](https://www.coingecko.com/en/request) flows per [COINGECKO_SUBMISSION_GUIDE](coingecko/COINGECKO_SUBMISSION_GUIDE.md). |
| **Chain row** | Name (e.g. DeFi Oracle Meta Mainnet), **chainId 138**, RPCs, explorer `https://explorer.d-bis.org`, website, consensus note (Besu / QBFT if asked). Package: [COINGECKO_SUBMISSION_PACKAGE.md](coingecko/COINGECKO_SUBMISSION_PACKAGE.md). |
| **Per-token rows** | **Exact** contract, symbol, name, **decimals** from §5; **512×512 PNG** logo per token. |
| **Mirror stables** | Submit **USDT** `0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1` and **USDC** `0x71D6687F38b93CCad569Fa6352c876eea967201b` if users hold those symbols — they are **not** the same contract as cUSDT/cUSDC. |
| **Machine-readable export** | `GET …/api/v1/report/coingecko?chainId=138` — see [CMC_COINGECKO_SUBMISSION_RUNBOOK.md](coingecko/CMC_COINGECKO_SUBMISSION_RUNBOOK.md) §2.1. |
| **If CG rejects “unsupported chain”** | Keep exports current; resubmit when they add chain **138** — no code blocker. |
### 3.2 CoinMarketCap (CMC)
| You need | Detail |
|----------|--------|
| **Export** | `GET …/api/v1/report/cmc?chainId=138` — pairs, liquidity, volume fields per [CMC_COINGECKO_REPORTING.md](../../smom-dbis-138/services/token-aggregation/docs/CMC_COINGECKO_REPORTING.md). |
| **Same assets as CoinGecko** | Logos 512×512, canonical addresses, honest liquidity/volume from your indexer. |
| **Process** | CMC's own listing / DEX form (changes over time) — [CMC_COINGECKO_SUBMISSION_RUNBOOK.md](coingecko/CMC_COINGECKO_SUBMISSION_RUNBOOK.md) §2.3. |
### 3.3 MetaMask / Consensys (native portfolio fiat)
MetaMask's **in-wallet** fiat still comes from **their** price pipeline, not your RPC.
| You need | Detail |
|----------|--------|
| **Evidence pack** | Chain ID **138**, public RPCs, explorer, **token list URL** `https://explorer.d-bis.org/api/config/token-list`, user count / TVL / partners if available. |
| **Contact** | [Consensys contact](https://consensys.io/contact/), **business@consensys.io**; support / dev portals per [REPOSITORIES_AND_PRS_CHAIN138.md](../00-meta/REPOSITORIES_AND_PRS_CHAIN138.md) §4. |
| **Expectation** | No public “merge PR for chain 138 price” repo; **outreach + listings** (CoinGecko/CMC) together move outcomes faster. |
### 3.4 Trust Wallet
| You need | Detail |
|----------|--------|
| **wallet-core PR** | Registry + codegen — [ADD_CHAIN138_TO_TRUST_WALLET.md](ADD_CHAIN138_TO_TRUST_WALLET.md), [pr-ready/trust-wallet-registry-chain138.json](pr-ready/trust-wallet-registry-chain138.json). |
| **Assets logos (optional)** | [assets.trustwallet.com](https://assets.trustwallet.com) for chain **138** tokens. |
### 3.5 Ledger Live
| You need | Detail |
|----------|--------|
| **Wait for Ledger** | Form submitted; do not PR upstream until they respond — [REPOSITORIES_AND_PRS_CHAIN138.md](../00-meta/REPOSITORIES_AND_PRS_CHAIN138.md) §1. |
### 3.6 Chain metadata (ethereum-lists / Chainlist)
| You need | Detail |
|----------|--------|
| **Maintenance** | Chain **138** is already on [chainlist.org/chain/138](https://chainlist.org/chain/138). For RPC/explorer updates, fork [ethereum-lists/chains](https://github.com/ethereum-lists/chains), edit `_data/chains/eip155-138.json`, PR — [REPOSITORIES_AND_PRS_CHAIN138.md](../00-meta/REPOSITORIES_AND_PRS_CHAIN138.md) §3. |
---
## 4. After a successful listing (repo hygiene)
When CoinGecko / CMC / a wallet lists you, update status so operators do not duplicate work:
| Document | What to change |
|----------|----------------|
| [CMC_COINGECKO_SUBMISSION_RUNBOOK.md](coingecko/CMC_COINGECKO_SUBMISSION_RUNBOOK.md) | §4 “When done — where to update” |
| [PLACEHOLDERS_AND_TBD.md](../PLACEHOLDERS_AND_TBD.md) (`docs/PLACEHOLDERS_AND_TBD.md`) | CMC / CoinGecko section — submission or “Listed” dates |
| [REMAINING_COMPONENTS_TASKS_AND_RECOMMENDATIONS.md](../00-meta/REMAINING_COMPONENTS_TASKS_AND_RECOMMENDATIONS.md) | §1.4 task 25 (if present) |
| [PRICE_FEED_CHAIN138_METAMASK_AND_WALLETS.md](PRICE_FEED_CHAIN138_METAMASK_AND_WALLETS.md) | §1 “Current state” table — reflect new wallet behaviour |
---
## 5. See also
- [PRICE_FEED_CHAIN138_METAMASK_AND_WALLETS.md](PRICE_FEED_CHAIN138_METAMASK_AND_WALLETS.md) — wallet behaviour and checklist
- [CHAIN138_PRICING_FEEDS_LIVE.md](CHAIN138_PRICING_FEEDS_LIVE.md) — on-chain feed matrix
- [COINGECKO_SUBMISSION_PACKAGE.md](coingecko/COINGECKO_SUBMISSION_PACKAGE.md) — chain + token copy-paste
- [TOKEN_AGGREGATION_REPORT_API_RUNBOOK.md](TOKEN_AGGREGATION_REPORT_API_RUNBOOK.md) — report API reachability
- [GRU_V2_PUBLIC_PROTOCOL_DEPLOYMENT_STATUS.md](../11-references/GRU_V2_PUBLIC_PROTOCOL_DEPLOYMENT_STATUS.md) — links `deployment-status.json` and mesh scope
- `metamask-integration/provider/oracles.js` — `getAssetUsdPrice` / `getEthUsdPrice` for dApp USD hints (not MetaMask native column)

View File

@@ -1,77 +0,0 @@
# Chain 138 Native Protocol Stack Gap Report
This report answers a specific deployment question:
If the remaining native Chain `138` protocols do not already exist on-chain, can they be deployed directly from this repo?
## Answer
Not fully.
The repo contains:
- a Chain `138` Aave-backed execution wrapper deploy path
- a Chain `138` Aave quote-push receiver deploy path
- canonical inventory and readiness checks for `Aave`, `GMX`, and `dYdX`
- imported upstream protocol source repos:
- `vendor/chain138-protocols/aave-v3-origin`
- `vendor/chain138-protocols/gmx-synthetics`
The repo still does **not** contain a completed Chain `138` deployment program or published live addresses for:
- an Aave V3 market on Chain `138`
- a GMX synthetics market on Chain `138`
- any dYdX market on Chain `138`
## Why this matters
The current Aave scripts still require an already-existing native Aave market:
- `CHAIN_138_AAVE_POOL`
- `CHAIN_138_AAVE_POOL_ADDRESSES_PROVIDER`
- `CHAIN_138_AAVE_POOL_DATA_PROVIDER`
Those are not “addresses we forgot to copy.” They are the addresses of the native protocol program itself.
If that market does not yet exist on Chain `138`, then the remaining work is not just runtime config. It is a native protocol deployment project built from the imported upstream source.
## Audited gap
Use:
```bash
bash scripts/verify/check-chain138-native-protocol-stack-source.sh
```
This audit checks whether the repo contains the upstream source families expected for native deployments:
- Aave:
- `PoolAddressesProvider`
- `Pool`
- `PoolConfigurator`
- `PoolDataProvider`
- `AaveOracle`
- GMX:
- `Router`
- `Vault`
- `PositionRouter`
- `Reader`
- dYdX:
- `SoloMargin`
- `DataProvider`
If those source families are present, that proves upstream source has been imported. It does not, by itself, mean Chain `138` deployment is complete.
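The presence audit boils down to checking that each expected contract file exists somewhere under the vendored tree. A hedged sketch — the helper name is illustrative; the real logic lives in `check-chain138-native-protocol-stack-source.sh`:

```bash
# Hypothetical presence probe for one source family.
source_family_present() {
  # $1 = vendor tree root, remaining args = expected contract basenames
  local root="$1"; shift
  local missing=0 name
  for name in "$@"; do
    if [ -z "$(find "$root" -name "${name}.sol" | head -n 1)" ]; then
      echo "missing: ${name}.sol" >&2
      missing=1
    fi
  done
  return "$missing"
}

# e.g. source_family_present vendor/chain138-protocols/aave-v3-origin \
#   PoolAddressesProvider Pool PoolConfigurator AaveOracle
```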
## Practical implication
To finish native Chain `138` rollout for the remaining protocols, one of these must happen:
1. use the imported upstream source repos to build a real Chain `138` deployment program
2. deploy the native protocol stack on Chain `138`
3. publish canonical live Chain `138` addresses and runtime config
Until that happens:
- `Aave` remains `blocked`, `source-backed`
- `GMX` remains `blocked`, `source-backed`
- `dYdX` remains `blocked`, `inventory-backed`

View File

@@ -1,51 +0,0 @@
# Chain 138 — live pricing feeds (operator reference)
**Purpose:** Single table of on-chain pricing surfaces after the Reserve + D3 + keeper alignment (`fix-chain138-pricing-feeds.sh`).
**Repair (LAN + `PRIVATE_KEY`):**
```bash
./scripts/deployment/fix-chain138-pricing-feeds.sh --dry-run
./scripts/deployment/fix-chain138-pricing-feeds.sh --apply
```
Keep **`sync-weth-mock-price.sh`** on a schedule (see `smom-dbis-138/scripts/reserve/pmm-mesh-6s-automation.sh`) so the WETH mock `updatedAt` stays inside `OraclePriceFeed`'s freshness window.
---
## 1. Feed matrix (Chain 138)
| Role | Asset / pair | Contract | Notes |
|------|----------------|----------|--------|
| **WETH spot (8 dp)** | ETH/USD | `0x3e8725b8De386feF3eFE5678c92eA6aDB41992B2` | **MockPriceFeed**; owner updates from CoinGecko/Binance (`sync-weth-mock-price.sh`). Used by **OraclePriceFeed** (WETH10) and **D3Oracle** WETH10 source to avoid managed-aggregator staleness on Besu. |
| **Managed ETH/USD (legacy)** | ETH/USD | `0x99b3511a2d315a497c8112c1fdd8d508d4b1e506` | Chainlink-style; can report **stale** if not pushed; D3 verifier default is now the mock unless `CHAIN138_D3_WETH_USD_FEED` overrides. |
| **Oracle proxy (legacy)** | ETH/USD | `0x3304b747e565a97ec8ac220b0b6a1f6ffdb837e6` | Do **not** rely on `latestRoundData()` for live reads; prefer mock above. |
| **Reserve path** | WETH10 | `OraclePriceFeed` `0x8918eE0819fD687f4eb3e8b9B7D0ef7557493cfa` → `ReserveSystem` `0x607e97cD626f209facfE48c1464815DDE15B5093` | `getPrice(WETH10)` should return 18-decimal USD after `setAggregator` + `updatePriceFeed`. |
| **Reserve path** | cUSDT / cUSDC | Same `OraclePriceFeed` / `ReserveSystem` | Peg read `$1` (18 dp internal). |
| **D3Oracle** | WETH10 source | `D3Oracle` `0xD7459aEa8bB53C83a1e90262777D730539A326F0` | `priceSources(WETH10)` should list the mock after repair. |
| **D3 stables** | USDT, USDC, cUSDT, cUSDC | Managed feeds (see `scripts/verify/check-dodo-v3-chain138.sh` defaults) | `$1.00` answers in probes. |
| **Keeper** | Tracked assets | `PriceFeedKeeper` `0xD3AD6831aacB5386B8A25BB8D8176a6C8a026f04` | WETH10 tracked for mesh upkeep alongside legacy WETH9 if present. |
---
## 2. Verification
```bash
bash scripts/verify/check-dodo-v3-chain138.sh
cast call --rpc-url "${RPC_URL_138:-http://192.168.11.211:8545}" \
0x607e97cD626f209facfE48c1464815DDE15B5093 \
"getPrice(address)(uint256,uint256)" \
0xf4BB2e28688e89fCcE3c0580D37d36A7672E8A9f
```
---
## 3. MetaMask native USD column
MetaMask still **does not** fill fiat values for arbitrary ERC-20s on custom chains from these contracts. That applies to **USDT**, **USDC**, **cUSDT**, and **cUSDC** in the main asset list until the chain (or each token) is covered by MetaMask's price backend (historically CoinGecko-driven) or you use a dApp/Snap overlay.
Use:
- **`metamask-integration/provider`**: `getEthUsdPrice`, `getAssetUsdPrice` — **$1 hints** for cUSDT, cUSDC, **cUSDT V2** (`0x9FBfab33882Efe0038DAa608185718b772EE5660`), **cUSDC V2** (`0x219522c60e83dEe01FC5b0329d6fA8fD84b9D13d`), and mirror **USDT/USDC** (`0x004b…`, `0x71D6…`); **WETH** from on-chain ETH/USD feeds. Same addresses are in [EXPLORER_TOKEN_LIST_CROSSCHECK.md](../11-references/EXPLORER_TOKEN_LIST_CROSSCHECK.md) §5.
- **Token list**: `https://explorer.d-bis.org/api/config/token-list` (icons + discovery; not full fiat).
- **Long term**: CoinGecko / CMC listing — [PRICE_FEED_CHAIN138_METAMASK_AND_WALLETS.md](PRICE_FEED_CHAIN138_METAMASK_AND_WALLETS.md) and the consolidated checklist [CHAIN138_EXTERNAL_LISTINGS_AND_REMAINING_FIXES.md](CHAIN138_EXTERNAL_LISTINGS_AND_REMAINING_FIXES.md).

View File

@@ -1,122 +0,0 @@
# Chain 138 Remaining Protocols Runbook
This runbook covers the remaining native protocol programs on Chain `138` after the supported spot/routing stack was completed:
- `Aave`
- `GMX`
- `dYdX`
## Current truth
- `Aave`
- repo-backed deployment surface exists for:
- the Aave-backed MEV execution adapter/wrapper path
- the `AaveQuotePushFlashReceiver`
- native Chain `138` Aave market deployment is **not** yet published in canonical env/registry
- `GMX`
- official upstream `gmx-io/gmx-synthetics` is now vendored as a submodule under:
- `vendor/chain138-protocols/gmx-synthetics`
- completion is now blocked on Chain `138` deployment/configuration work and canonical live addresses, not on missing upstream source
- `dYdX`
- no native Chain `138` contract stack is vendored here
- completion is blocked on live protocol addresses and/or an imported deployment stack
## Canonical inventory
Use [chain138-remaining-protocol-surface.json](/home/intlc/projects/proxmox/config/chain138-remaining-protocol-surface.json:1) as the source of truth for required Chain `138` env keys.
For the Aave blocker-removal sequence, use:
- [CHAIN138_AAVE_BLOCKER_REMOVAL_WORKSHEET.md](/home/intlc/projects/proxmox/docs/04-configuration/CHAIN138_AAVE_BLOCKER_REMOVAL_WORKSHEET.md:1)
- [chain138-aave-rollout-manifest.example.json](/home/intlc/projects/proxmox/config/chain138-aave-rollout-manifest.example.json:1)
Verify the current surface with:
```bash
bash scripts/verify/check-chain138-remaining-protocol-env.sh
```
For the Aave-specific preflight once addresses exist:
```bash
bash scripts/verify/check-chain138-aave-rollout-readiness.sh
```
## Aave tasks
1. Publish real Chain `138` addresses:
- `CHAIN_138_AAVE_POOL`
- `CHAIN_138_AAVE_POOL_ADDRESSES_PROVIDER`
- `CHAIN_138_AAVE_POOL_DATA_PROVIDER`
- `CHAIN_138_AAVE_START_BLOCK`
2. Set operator env:
- `CHAIN_138_AAVE_EXECUTOR_TREASURY`
- `CHAIN_138_AAVE_EXECUTOR_OWNER`
- `CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER_OWNER`
- optionally generate an env snippet first:
```bash
bash scripts/deployment/apply-chain138-aave-manifest.sh \
--manifest config/chain138-aave-rollout-manifest.example.json \
--write reports/status/chain138_aave_runtime.env
```
3. Dry-run the execution wrapper deploy:
```bash
bash scripts/deployment/deploy-chain138-aave-v3-execution-stack.sh --dry-run
```
4. Dry-run the quote-push receiver deploy:
```bash
bash scripts/deployment/deploy-chain138-aave-quote-push-receiver.sh --dry-run
```
5. When the live market is real and funded, apply:
```bash
bash scripts/deployment/deploy-chain138-aave-v3-execution-stack.sh --apply
bash scripts/deployment/deploy-chain138-aave-quote-push-receiver.sh --apply
```
6. Publish the MEV runtime + receiver outputs:
```bash
bash scripts/deployment/publish-chain138-aave-runtime-from-artifacts.sh
```
7. Publish resulting deployed addresses into canonical env / registry / docs.
## GMX tasks
1. Publish canonical Chain `138` addresses:
- `CHAIN_138_GMX_ROUTER`
- `CHAIN_138_GMX_EXCHANGE_ROUTER`
- `CHAIN_138_GMX_READER`
- `CHAIN_138_GMX_ORDER_VAULT`
- `CHAIN_138_GMX_DEPOSIT_VAULT`
- `CHAIN_138_GMX_WITHDRAWAL_VAULT`
- `CHAIN_138_GMX_START_BLOCK`
2. Start from the vendored native source tree:
- `vendor/chain138-protocols/gmx-synthetics`
3. Add discovery, route modeling, and execution support once the live stack exists.
## dYdX tasks
1. Publish canonical Chain `138` addresses:
- `CHAIN_138_DYDX_SOLO`
- `CHAIN_138_DYDX_DATA_PROVIDER`
- `CHAIN_138_DYDX_START_BLOCK`
2. Add or vendor a real native Chain `138` dYdX deployment/integration stack.
3. Add discovery, route modeling, and execution support once the live stack exists.
## Completion rule
These protocol rows should remain `blocked` until:
1. live addresses are published
2. bytecode is present on Chain `138`
3. config / docs / registry match reality
4. execution or market checks pass
Do not close them as `done` before all four are true.
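The four-part rule above amounts to a single gate. A minimal sketch, assuming a hypothetical row shape (the runbook defines the conditions, not a schema):

```javascript
// Hypothetical gate for the completion rule above. The field names are
// illustrative only; this repo does not define such a schema.
function canCloseProtocolRow(row) {
  return Boolean(
    row.addressesPublished &&   // 1. live addresses are published
    row.bytecodeOnChain &&      // 2. bytecode is present on Chain 138
    row.configMatchesReality && // 3. config / docs / registry match reality
    row.executionChecksPass     // 4. execution or market checks pass
  );
}

console.log(canCloseProtocolRow({
  addressesPublished: true,
  bytecodeOnChain: true,
  configMatchesReality: true,
  executionChecksPass: false,
})); // → false: the row stays blocked
```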

View File

@@ -1,104 +0,0 @@
# Chain 138 Remaining Protocol Discovery Report
**Status date:** 2026-04-15
This report records the evidence-gathering pass for native `Aave`, `GMX`, and `dYdX` on Chain `138`.
## Result
No canonical or discoverable live addresses were found for:
- `Aave`
- `GMX`
- `dYdX`
That means the new Chain `138` remaining-protocol inventory cannot be prefilled with real protocol addresses today.
## Evidence used
### 1. Explorer search
The Chain `138` explorer search API returned empty results for these queries:
- `Aave`
- `GMX`
- `dydx`
- `dYdX`
- `SoloMargin`
- `PoolAddressesProvider`
- `PoolDataProvider`
- `PositionRouter`
Observed response shape:
```json
{"items":[],"next_page_params":null}
```
### 2. Token-aggregation provider capabilities
Live `planner-v2` provider capabilities for `chainId=138` do **not** advertise:
- `aave`
- `gmx`
- `dydx`
They do advertise the completed supported stack:
- `dodo`
- `dodo_v3`
- `uniswap_v3`
- `uniswap_v2`
- `sushiswap`
- `balancer`
- `curve`
- `one_inch`
### 3. MEV venue coverage
Live `mev.defi-oracle.io` coverage for `chain_id=138` shows:
- `curve`
- `dodo_d3mm`
- `dodo_pmm`
- `sushiswap`
- `uniswap_v2`
- `uniswap_v3`
It does **not** show:
- `aave`
- `gmx`
- `dydx`
### 4. Direct Chain 138 log scan for Aave provider events
A direct `eth_getLogs` scan across blocks `0..latest` on Chain `138` returned no matches for core Aave `PoolAddressesProvider`-style topics:
- `AddressSet(bytes32,address,bool)`
- `AddressSetAsProxy(bytes32,address,address)`
- `ProxyCreated(bytes32,address,address)`
- `MarketIdSet(string)`
- `PoolUpdated(address)`
- `PoolConfiguratorUpdated(address)`
- `PriceOracleUpdated(address)`
- `ACLManagerUpdated(address)`
- `ACLAdminUpdated(address)`
- `PriceOracleSentinelUpdated(address)`
- `PoolDataProviderUpdated(address)`
This is strong negative evidence that a canonical Aave V3 provider surface is not live on Chain `138`.
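The scan above reduces to one `eth_getLogs` call per event signature. A minimal sketch of the request construction, assuming the `topic0` hash (keccak256 of the event signature) is computed elsewhere, since Node's standard library has no keccak256:

```javascript
// Build the JSON-RPC body for a full-range eth_getLogs scan on one topic.
// topic0 must be keccak256 of the event signature, precomputed elsewhere
// (e.g. with a keccak library); a placeholder value is used below.
function buildLogScanRequest(topic0, fromBlock = '0x0', toBlock = 'latest') {
  return {
    jsonrpc: '2.0',
    id: 1,
    method: 'eth_getLogs',
    params: [{ fromBlock, toBlock, topics: [topic0] }],
  };
}

const placeholderTopic = '0x' + '00'.repeat(32); // not a real event hash
console.log(JSON.stringify(buildLogScanRequest(placeholderTopic)));
```

An empty `result` array across all eleven topics is what this report treats as negative evidence.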
## Conclusion
The current blocker is not “we forgot to copy the addresses into env.”
The current blocker is:
1. no discoverable canonical live deployment evidence for `Aave`, `GMX`, or `dYdX` on Chain `138`
2. therefore no truthful address inventory to prefill
## Next action
- If these protocols truly exist on Chain `138`, publish their canonical addresses and deployment inventory.
- Otherwise, treat them as native protocol rollout projects still pending deployment.

View File

@@ -0,0 +1,217 @@
# Devin → Gitea → Proxmox CI/CD
**Status:** Working baseline for this repo
**Last Updated:** 2026-04-20
## Goal
Create a repeatable path where:
1. Devin lands code in Gitea.
2. Gitea Actions validates the repo on the site-wide `act_runner`.
3. A successful workflow calls `phoenix-deploy-api`.
4. `phoenix-deploy-api` resolves the repo/branch to a deploy target and runs the matching Proxmox publish command.
5. The deploy service checks the target health URL before it reports success.
## Current baseline in this repo
The path now exists for **`d-bis/proxmox`** on **`main`** and **`master`**:
- Canonical workflow sources: [.gitea/workflow-sources/deploy-to-phoenix.yml](/home/intlc/projects/proxmox/.gitea/workflow-sources/deploy-to-phoenix.yml) and [.gitea/workflow-sources/validate-on-pr.yml](/home/intlc/projects/proxmox/.gitea/workflow-sources/validate-on-pr.yml)
- Workflow: [deploy-to-phoenix.yml](/home/intlc/projects/proxmox/.gitea/workflows/deploy-to-phoenix.yml)
- Manual app workflow: [deploy-portal-live.yml](/home/intlc/projects/proxmox/.gitea/workflows/deploy-portal-live.yml)
- Deploy service: [server.js](/home/intlc/projects/proxmox/phoenix-deploy-api/server.js)
- Target map: [deploy-targets.json](/home/intlc/projects/proxmox/phoenix-deploy-api/deploy-targets.json)
- Current live publish script: [deploy-phoenix-deploy-api-to-dev-vm.sh](/home/intlc/projects/proxmox/scripts/deployment/deploy-phoenix-deploy-api-to-dev-vm.sh)
- Manual smoke trigger: [trigger-phoenix-deploy.sh](/home/intlc/projects/proxmox/scripts/dev-vm/trigger-phoenix-deploy.sh)
- Target validator: [validate-phoenix-deploy-targets.sh](/home/intlc/projects/proxmox/scripts/validation/validate-phoenix-deploy-targets.sh)
- Bootstrap helper: [bootstrap-phoenix-cicd.sh](/home/intlc/projects/proxmox/scripts/dev-vm/bootstrap-phoenix-cicd.sh)
That default target publishes the `phoenix-deploy-api` bundle to **VMID 5700** on the correct Proxmox node and starts the CT if needed.
A second target is now available:
- `portal-live` → runs [sync-sankofa-portal-7801.sh](/home/intlc/projects/proxmox/scripts/deployment/sync-sankofa-portal-7801.sh) and then checks `http://192.168.11.51:3000/`
## Workflow lockstep
Because both `main` and `master` can trigger deploys, deploy behavior is now defined from canonical source files and checked for branch parity.
- Edit only the source files under [.gitea/workflow-sources](/home/intlc/projects/proxmox/.gitea/workflow-sources:1)
- Sync the checked-in workflow copies with:
```bash
bash scripts/verify/sync-gitea-workflows.sh
```
- Validate source sync plus `main`/`master` parity with:
```bash
bash scripts/verify/run-all-validation.sh --skip-genesis
```
The deploy and PR workflows both fetch `origin/main` and `origin/master` before validation, so branch drift now fails CI instead of silently changing deploy behavior.
## Flow
```text
Devin
-> push to Gitea
-> Gitea Actions on act_runner (5700)
-> bash scripts/verify/run-all-validation.sh --skip-genesis
-> validates deploy-targets.json structure
-> POST /api/deploy to phoenix-deploy-api
-> match repo + branch + target in deploy-targets.json
-> run deploy command
-> verify target health URL
-> update Gitea commit status success/failure
```
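The `POST /api/deploy` step in the flow above can be sketched as a request builder. The body fields mirror what `executeDeploy` in server.js accepts (`repo`, `branch`, `target`, `sha`); the Bearer auth scheme shown here is an assumption, so check the workflow source for the exact header.

```javascript
// Sketch of the deploy trigger the workflow sends. Field names come from
// executeDeploy() in server.js; the Authorization scheme is an assumption.
function buildDeployRequest({ repo, branch = 'main', target = 'default', sha = '', token = '' }) {
  return {
    url: process.env.PHOENIX_DEPLOY_URL || 'http://192.168.11.59:4001/api/deploy',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ repo, branch, target, sha }),
  };
}

const req = buildDeployRequest({ repo: 'd-bis/proxmox', sha: 'HEAD', token: 'example' });
console.log(req.body); // {"repo":"d-bis/proxmox","branch":"main","target":"default","sha":"HEAD"}
```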
## Required setup
### 1. Runner
Bring up the site-wide Gitea runner on VMID **5700**:
```bash
bash scripts/dev-vm/bootstrap-gitea-act-runner-site-wide.sh
```
Reference: [GITEA_ACT_RUNNER_SETUP.md](GITEA_ACT_RUNNER_SETUP.md)
### 0. One-command bootstrap
If root `.env` already contains the needed values, use:
```bash
bash scripts/dev-vm/bootstrap-phoenix-cicd.sh --repo d-bis/proxmox
```
This runs the validation gate, deploys `phoenix-deploy-api`, and smoke-checks the service.
### 2. Deploy API service
Deploy the API to the dev VM:
```bash
./scripts/deployment/deploy-phoenix-deploy-api-to-dev-vm.sh --dry-run
./scripts/deployment/deploy-phoenix-deploy-api-to-dev-vm.sh --apply --start-ct
```
On the target VM, set at least:
```bash
PORT=4001
GITEA_URL=https://gitea.d-bis.org
GITEA_TOKEN=<token with repo status access>
PHOENIX_DEPLOY_SECRET=<shared secret>
PHOENIX_REPO_ROOT=/home/intlc/projects/proxmox
```
Optional:
```bash
DEPLOY_TARGETS_PATH=/opt/phoenix-deploy-api/deploy-targets.json
```
For the `portal-live` target, also set:
```bash
SANKOFA_PORTAL_SRC=/home/intlc/projects/Sankofa/portal
```
### 3. Gitea repo secrets
Set these in the Gitea repository that should deploy:
- `PHOENIX_DEPLOY_URL`
- `PHOENIX_DEPLOY_TOKEN`
Example:
- `PHOENIX_DEPLOY_URL=http://192.168.11.59:4001/api/deploy`
- `PHOENIX_DEPLOY_TOKEN=<same value as PHOENIX_DEPLOY_SECRET>`
For webhook signing, the bootstrap/helper path also expects:
- `PHOENIX_DEPLOY_SECRET`
- `PHOENIX_WEBHOOK_DEPLOY_ENABLED=1` only if you want webhook events themselves to execute deploys
Do not enable both repo Actions deploys and webhook deploys for the same repo unless you intentionally want duplicate deploy attempts.
## Adding more repos or VM targets
Extend [deploy-targets.json](/home/intlc/projects/proxmox/phoenix-deploy-api/deploy-targets.json) with another entry.
Each target is keyed by:
- `repo`
- `branch`
- `target`
Each target defines:
- `cwd`
- `command`
- `required_env`
- optional `healthcheck`
- optional `timeout_sec`
Example shape:
```json
{
"repo": "d-bis/another-service",
"branch": "main",
"target": "portal-live",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": ["bash", "scripts/deployment/sync-sankofa-portal-7801.sh"],
"required_env": ["PHOENIX_REPO_ROOT"]
}
```
Use separate `target` names when the same repo can publish to different VMIDs or environments.
Target-map validation is already part of:
```bash
bash scripts/verify/run-all-validation.sh --skip-genesis
```
and can also be run directly:
```bash
bash scripts/validation/validate-phoenix-deploy-targets.sh
```
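The field rules listed above can be approximated with a small shape check. This is a sketch of the idea only; the canonical rules live in validate-phoenix-deploy-targets.sh.

```javascript
// Approximate shape check for one target entry, using the field list from
// this doc; not the actual logic of validate-phoenix-deploy-targets.sh.
function validateTargetEntry(entry) {
  const errors = [];
  for (const key of ['repo', 'branch', 'target', 'cwd', 'command']) {
    if (!entry[key]) errors.push(`missing ${key}`);
  }
  if (entry.command && !Array.isArray(entry.command)) {
    errors.push('command must be an array');
  }
  if (entry.healthcheck && !entry.healthcheck.url) {
    errors.push('healthcheck needs a url');
  }
  return errors;
}

console.log(validateTargetEntry({ repo: 'd-bis/proxmox', branch: 'main' }));
```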
## Manual testing
Before trusting a new Gitea workflow, trigger the deploy service directly:
```bash
bash scripts/dev-vm/trigger-phoenix-deploy.sh
```
Trigger the live portal deployment target directly:
```bash
bash scripts/dev-vm/trigger-phoenix-deploy.sh d-bis/proxmox main portal-live
```
Inspect configured targets:
```bash
curl -s http://192.168.11.59:4001/api/deploy-targets | jq .
```
## Recommended next expansions
- Add a Phoenix API target for the repo that owns VMID **7800** or **8600**, depending on which deployment line is canonical.
- Add repo-specific workflows once the Sankofa source repos themselves are mirrored into Gitea Actions.
- Move secret values from ad hoc `.env` files into the final operator-managed secret source once you settle the production host for `phoenix-deploy-api`.
## Notes
- The Gitea workflow is gated by `scripts/verify/run-all-validation.sh --skip-genesis` before deploy.
- `phoenix-deploy-api` now returns `404` when no matching target exists and `500` when the deploy command fails.
- Commit status updates are written back to Gitea from the deploy service itself.
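The response semantics in these notes can be summarized as a tiny mapping. The `outcome` labels below are hypothetical; server.js signals this by attaching `statusCode` to thrown errors, and the 200 for success is an assumption.

```javascript
// Summary of the notes above: 404 = no matching target, 500 = deploy command
// failed, and the Gitea commit state mirrors the outcome. The outcome labels
// are illustrative, not values used by server.js.
function deployHttpStatus(outcome) {
  if (outcome === 'completed') return 200; // assumed success status
  if (outcome === 'no-target') return 404;
  return 500;
}

function giteaCommitState(outcome) {
  return outcome === 'completed' ? 'success' : 'failure';
}
```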

View File

@@ -0,0 +1,247 @@
{
"defaults": {
"timeout_sec": 1800
},
"targets": [
{
"repo": "d-bis/proxmox",
"branch": "main",
"target": "default",
"description": "Install the Phoenix deploy API locally on the dev VM from the synced repo workspace.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"phoenix-deploy-api/scripts/install-systemd.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"healthcheck": {
"url": "http://192.168.11.59:4001/health",
"expect_status": 200,
"expect_body_includes": "phoenix-deploy-api",
"attempts": 8,
"delay_ms": 3000,
"timeout_ms": 10000
}
},
{
"repo": "d-bis/proxmox",
"branch": "main",
"target": "cloudflare-sync",
"description": "Optional: sync Cloudflare DNS from repo .env (path-gated; set PHOENIX_CLOUDFLARE_SYNC=1 on host).",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/gitea-cloudflare-sync.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"timeout_sec": 600
},
{
"repo": "d-bis/proxmox",
"branch": "main",
"target": "cloudflare-sync-force",
"description": "Same as cloudflare-sync but skips path filter (operator / manual).",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/gitea-cloudflare-sync.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"timeout_sec": 600
},
{
"repo": "d-bis/proxmox",
"branch": "main",
"target": "portal-live",
"description": "Deploy the Sankofa portal to CT 7801 on Proxmox.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/sync-sankofa-portal-7801.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT",
"SANKOFA_PORTAL_SRC"
],
"healthcheck": {
"url": "http://192.168.11.51:3000/",
"expect_status": 200,
"expect_body_includes": "<html",
"attempts": 10,
"delay_ms": 5000,
"timeout_ms": 10000
}
},
{
"repo": "d-bis/CurrenciCombo",
"branch": "main",
"target": "default",
"description": "Deploy CurrenciCombo from the staged Gitea workspace into Phoenix CT 8604 and verify the public hostname end to end.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/phoenix-deploy-currencicombo-from-workspace.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT",
"PHOENIX_DEPLOY_WORKSPACE"
],
"healthcheck": {
"url": "https://curucombo.xn--vov0g.com/api/ready",
"expect_status": 200,
"expect_body_includes": "\"ready\":true",
"attempts": 12,
"delay_ms": 5000,
"timeout_ms": 15000
}
},
{
"repo": "d-bis/proxmox",
"branch": "main",
"target": "atomic-swap-dapp-live",
"description": "Deploy the Atomic Swap dApp to VMID 5801 on Proxmox.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/deploy-atomic-swap-dapp-5801.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"healthcheck": {
"url": "https://atomic-swap.defi-oracle.io/data/live-route-registry.json",
"expect_status": 200,
"expect_body_includes": "\"liveBridgeRoutes\"",
"attempts": 10,
"delay_ms": 5000,
"timeout_ms": 15000
}
},
{
"repo": "d-bis/proxmox",
"branch": "master",
"target": "default",
"description": "Install the Phoenix deploy API locally on the dev VM from the synced repo workspace.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"phoenix-deploy-api/scripts/install-systemd.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"healthcheck": {
"url": "http://192.168.11.59:4001/health",
"expect_status": 200,
"expect_body_includes": "phoenix-deploy-api",
"attempts": 8,
"delay_ms": 3000,
"timeout_ms": 10000
}
},
{
"repo": "d-bis/proxmox",
"branch": "master",
"target": "atomic-swap-dapp-live",
"description": "Deploy the Atomic Swap dApp to VMID 5801 on Proxmox.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/deploy-atomic-swap-dapp-5801.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"healthcheck": {
"url": "https://atomic-swap.defi-oracle.io/data/live-route-registry.json",
"expect_status": 200,
"expect_body_includes": "\"liveBridgeRoutes\"",
"attempts": 10,
"delay_ms": 5000,
"timeout_ms": 15000
}
},
{
"repo": "d-bis/proxmox",
"branch": "master",
"target": "cloudflare-sync",
"description": "Optional: sync Cloudflare DNS from repo .env (path-gated; set PHOENIX_CLOUDFLARE_SYNC=1 on host).",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/gitea-cloudflare-sync.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"timeout_sec": 600
},
{
"repo": "d-bis/proxmox",
"branch": "master",
"target": "cloudflare-sync-force",
"description": "Same as cloudflare-sync but skips path filter (operator / manual).",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/gitea-cloudflare-sync.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"timeout_sec": 600
},
{
"repo": "d-bis/proxmox",
"branch": "master",
"target": "portal-live",
"description": "Deploy the Sankofa portal to CT 7801 on Proxmox.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/sync-sankofa-portal-7801.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT",
"SANKOFA_PORTAL_SRC"
],
"healthcheck": {
"url": "http://192.168.11.51:3000/",
"expect_status": 200,
"expect_body_includes": "<html",
"attempts": 10,
"delay_ms": 5000,
"timeout_ms": 10000
}
},
{
"repo": "d-bis/CurrenciCombo",
"branch": "master",
"target": "default",
"description": "Deploy CurrenciCombo from the staged Gitea workspace into Phoenix CT 8604 and verify the public hostname end to end.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/phoenix-deploy-currencicombo-from-workspace.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT",
"PHOENIX_DEPLOY_WORKSPACE"
],
"healthcheck": {
"url": "https://curucombo.xn--vov0g.com/api/ready",
"expect_status": 200,
"expect_body_includes": "\"ready\":true",
"attempts": 12,
"delay_ms": 5000,
"timeout_ms": 15000
}
}
]
}

View File

@@ -25,7 +25,70 @@ if [[ -f "$REPO_ROOT/config/public-sector-program-manifest.json" ]]; then
else
echo "WARN: $REPO_ROOT/config/public-sector-program-manifest.json missing — set PUBLIC_SECTOR_MANIFEST_PATH in .env"
fi
[ -f "$APP_DIR/.env" ] && cp "$APP_DIR/.env" "$TARGET/.env" || [ -f "$APP_DIR/.env.example" ] && cp "$APP_DIR/.env.example" "$TARGET/.env" || true
if [[ -f "$TARGET/.env" ]]; then
echo "Preserving existing $TARGET/.env"
elif [[ -f "$APP_DIR/.env" ]]; then
cp "$APP_DIR/.env" "$TARGET/.env"
elif [[ -f "$APP_DIR/.env.example" ]]; then
cp "$APP_DIR/.env.example" "$TARGET/.env"
fi
ensure_env_value() {
local key="$1"
local value="$2"
local file="$TARGET/.env"
[[ -n "$value" && -f "$file" ]] || return 0
local current=""
if grep -qE "^${key}=" "$file"; then
current="$(grep -E "^${key}=" "$file" | tail -n 1 | cut -d= -f2-)"
fi
[[ -z "$current" ]] || return 0
local tmp
tmp="$(mktemp)"
awk -v key="$key" -v value="$value" '
BEGIN { found = 0 }
$0 ~ "^" key "=" {
print key "=" value
found = 1
next
}
{ print }
END {
if (!found) print key "=" value
}
' "$file" > "$tmp"
cat "$tmp" > "$file"
rm -f "$tmp"
}
repo_env_value() {
local key="$1"
local file="$REPO_ROOT/.env"
[[ -f "$file" ]] || return 0
grep -E "^${key}=" "$file" | tail -n 1 | cut -d= -f2-
}
if [[ -f "$TARGET/.env" ]]; then
ensure_env_value PHOENIX_REPO_ROOT "$REPO_ROOT"
for key in \
GITEA_TOKEN \
PHOENIX_DEPLOY_SECRET \
PROXMOX_HOST \
PROXMOX_PORT \
PROXMOX_USER \
PROXMOX_TOKEN_NAME \
PROXMOX_TOKEN_VALUE \
PROXMOX_TLS_VERIFY \
PUBLIC_IP \
CLOUDFLARE_API_TOKEN \
CLOUDFLARE_GITEA_SYNC_ZONE \
PHOENIX_CLOUDFLARE_SYNC
do
ensure_env_value "$key" "$(repo_env_value "$key")"
done
fi
chown -R root:root "$TARGET"
cd "$TARGET" && npm install --omit=dev
cp "$APP_DIR/phoenix-deploy-api.service" /etc/systemd/system/

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env node
/**
* Phoenix Deploy API — Gitea webhook receiver, deploy stub, and Phoenix API Railing (Infra/VE)
* Phoenix Deploy API — Gitea webhook receiver, deploy execution API, and Phoenix API Railing (Infra/VE)
*
* Endpoints:
* POST /webhook/gitea — Receives Gitea push/tag/PR webhooks
@@ -19,7 +19,9 @@
import crypto from 'crypto';
import https from 'https';
import path from 'path';
import { readFileSync, existsSync } from 'fs';
import { promisify } from 'util';
import { execFile as execFileCallback } from 'child_process';
import { cpSync, existsSync, mkdirSync, mkdtempSync, readFileSync, readdirSync, rmSync, writeFileSync } from 'fs';
import { fileURLToPath } from 'url';
import express from 'express';
@@ -29,6 +31,13 @@ const PORT = parseInt(process.env.PORT || '4001', 10);
const GITEA_URL = (process.env.GITEA_URL || 'https://gitea.d-bis.org').replace(/\/$/, '');
const GITEA_TOKEN = process.env.GITEA_TOKEN || '';
const WEBHOOK_SECRET = process.env.PHOENIX_DEPLOY_SECRET || '';
const PHOENIX_REPO_ROOT_DEFAULT = (process.env.PHOENIX_REPO_ROOT_DEFAULT || '/srv/projects/proxmox').trim();
const ATOMIC_SWAP_REPO = (process.env.PHOENIX_ATOMIC_SWAP_REPO || 'd-bis/atomic-swap-dapp').trim();
const ATOMIC_SWAP_REF = (process.env.PHOENIX_ATOMIC_SWAP_REF || 'main').trim();
const CROSS_CHAIN_PMM_LPS_REPO = (process.env.PHOENIX_CROSS_CHAIN_PMM_LPS_REPO || '').trim();
const CROSS_CHAIN_PMM_LPS_REF = (process.env.PHOENIX_CROSS_CHAIN_PMM_LPS_REF || 'main').trim();
const SMOM_DBIS_138_REPO = (process.env.PHOENIX_SMOM_DBIS_138_REPO || '').trim();
const SMOM_DBIS_138_REF = (process.env.PHOENIX_SMOM_DBIS_138_REF || 'main').trim();
const PROXMOX_HOST = process.env.PROXMOX_HOST || '';
const PROXMOX_PORT = parseInt(process.env.PROXMOX_PORT || '8006', 10);
@@ -42,6 +51,17 @@ const PROMETHEUS_URL = (process.env.PROMETHEUS_URL || 'http://localhost:9090').r
const PHOENIX_WEBHOOK_URL = process.env.PHOENIX_WEBHOOK_URL || '';
const PHOENIX_WEBHOOK_SECRET = process.env.PHOENIX_WEBHOOK_SECRET || '';
const PARTNER_KEYS = (process.env.PHOENIX_PARTNER_KEYS || '').split(',').map((k) => k.trim()).filter(Boolean);
const WEBHOOK_DEPLOY_ENABLED = process.env.PHOENIX_WEBHOOK_DEPLOY_ENABLED === '1' || process.env.PHOENIX_WEBHOOK_DEPLOY_ENABLED === 'true';
const execFile = promisify(execFileCallback);
function expandEnvTokens(value, env = process.env) {
if (typeof value !== 'string') return value;
return value.replace(/\$\{([A-Z0-9_]+)\}/gi, (_, key) => env[key] || '');
}
function resolvePhoenixRepoRoot() {
return (process.env.PHOENIX_REPO_ROOT || PHOENIX_REPO_ROOT_DEFAULT || '').trim().replace(/\/$/, '');
}
/**
* Manifest resolution order:
@@ -63,15 +83,395 @@ function resolvePublicSectorManifestPath() {
return path.join(__dirname, '..', 'config', 'public-sector-program-manifest.json');
}
function resolveDeployTargetsPath() {
const override = (process.env.DEPLOY_TARGETS_PATH || '').trim();
if (override && existsSync(override)) return override;
return path.join(__dirname, 'deploy-targets.json');
}
function loadDeployTargetsConfig() {
const configPath = resolveDeployTargetsPath();
if (!existsSync(configPath)) {
return {
path: configPath,
defaults: {},
targets: [],
};
}
const raw = readFileSync(configPath, 'utf8');
const parsed = JSON.parse(raw);
return {
path: configPath,
defaults: parsed.defaults || {},
targets: Array.isArray(parsed.targets) ? parsed.targets : [],
};
}
function findDeployTarget(repo, branch, requestedTarget) {
const config = loadDeployTargetsConfig();
const wantedTarget = requestedTarget || 'default';
const match = config.targets.find((entry) => {
if (entry.repo !== repo) return false;
if ((entry.branch || 'main') !== branch) return false;
return (entry.target || 'default') === wantedTarget;
});
return { config, match, wantedTarget };
}
async function sleep(ms) {
await new Promise((resolve) => setTimeout(resolve, ms));
}
async function verifyHealthCheck(healthcheck) {
if (!healthcheck || !healthcheck.url) return null;
const attempts = Math.max(1, Number(healthcheck.attempts || 1));
const delayMs = Math.max(0, Number(healthcheck.delay_ms || 0));
const timeoutMs = Math.max(1000, Number(healthcheck.timeout_ms || 10000));
const expectedStatus = Number(healthcheck.expect_status || 200);
const expectBodyIncludes = healthcheck.expect_body_includes || '';
let lastError = null;
for (let attempt = 1; attempt <= attempts; attempt += 1) {
try {
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), timeoutMs);
const res = await fetch(healthcheck.url, { signal: controller.signal });
const body = await res.text();
clearTimeout(timeout);
if (res.status !== expectedStatus) {
throw new Error(`Expected HTTP ${expectedStatus}, got ${res.status}`);
}
if (expectBodyIncludes && !body.includes(expectBodyIncludes)) {
throw new Error(`Health body missing expected text: ${expectBodyIncludes}`);
}
return {
ok: true,
url: healthcheck.url,
status: res.status,
attempt,
};
} catch (err) {
lastError = err;
if (attempt < attempts && delayMs > 0) {
await sleep(delayMs);
}
}
}
throw new Error(`Health check failed for ${healthcheck.url}: ${lastError?.message || 'unknown error'}`);
}
async function downloadRepoArchive({ owner, repo, ref, archivePath, authToken }) {
const archiveRef = `${ref}.tar.gz`;
const url = `${GITEA_URL}/api/v1/repos/${owner}/${repo}/archive/${archiveRef}`;
const headers = {};
if (authToken) headers.Authorization = `token ${authToken}`;
const res = await fetch(url, { headers });
if (!res.ok) {
throw new Error(`Failed to download archive ${owner}/${repo}@${ref}: HTTP ${res.status}`);
}
const buffer = Buffer.from(await res.arrayBuffer());
writeFileSync(archivePath, buffer);
}
function syncExtractedTree({ sourceRoot, destRoot, entries = null }) {
mkdirSync(destRoot, { recursive: true });
const selectedEntries = Array.isArray(entries) ? entries : readdirSync(sourceRoot);
for (const entry of selectedEntries) {
const sourcePath = path.join(sourceRoot, entry);
if (!existsSync(sourcePath)) continue;
const destPath = path.join(destRoot, entry);
rmSync(destPath, { recursive: true, force: true });
cpSync(sourcePath, destPath, { recursive: true });
}
}
async function syncRepoArchive({ owner, repo, ref, destRoot, entries = null, authToken = '' }) {
const tempDir = mkdtempSync('/tmp/phoenix-archive-');
const archivePath = path.join(tempDir, 'repo.tar.gz');
const extractDir = path.join(tempDir, 'extract');
mkdirSync(extractDir, { recursive: true });
try {
await downloadRepoArchive({ owner, repo, ref, archivePath, authToken });
await execFile('tar', ['-xzf', archivePath, '-C', extractDir]);
const [rootDir] = readdirSync(extractDir);
if (!rootDir) {
throw new Error(`Archive for ${owner}/${repo}@${ref} was empty`);
}
syncExtractedTree({
sourceRoot: path.join(extractDir, rootDir),
destRoot,
entries,
});
} finally {
rmSync(tempDir, { recursive: true, force: true });
}
}
async function prepareDeployWorkspace({ repo, branch, sha, target }) {
const repoRoot = resolvePhoenixRepoRoot();
if (!repoRoot) {
throw new Error('PHOENIX_REPO_ROOT is not configured');
}
const [owner, repoName] = repo.includes('/') ? repo.split('/') : ['d-bis', repo];
const externalWorkspaceRoot = path.join(repoRoot, '.phoenix-deploy-workspaces', owner, repoName);
// Manual smoke tests can target the already-staged local workspace without
// forcing an archive sync from Gitea.
if (sha === 'HEAD' || sha === 'local') {
mkdirSync(repoRoot, { recursive: true });
if (repo !== 'd-bis/proxmox') {
mkdirSync(externalWorkspaceRoot, { recursive: true });
}
return {
PHOENIX_REPO_ROOT: repoRoot,
PROXMOX_REPO_ROOT: repoRoot,
PHOENIX_DEPLOY_WORKSPACE: repo === 'd-bis/proxmox' ? repoRoot : externalWorkspaceRoot,
};
}
const ref = sha || branch || 'main';
if (repo === 'd-bis/proxmox') {
await syncRepoArchive({
owner,
repo: repoName,
ref,
destRoot: repoRoot,
entries: ['config', 'phoenix-deploy-api', 'reports', 'scripts', 'token-lists'],
authToken: GITEA_TOKEN,
});
} else {
await syncRepoArchive({
owner,
repo: repoName,
ref,
destRoot: externalWorkspaceRoot,
authToken: GITEA_TOKEN,
});
}
if (repo === 'd-bis/proxmox' && target === 'atomic-swap-dapp-live') {
const [swapOwner, swapRepo] = ATOMIC_SWAP_REPO.includes('/')
? ATOMIC_SWAP_REPO.split('/')
: ['d-bis', ATOMIC_SWAP_REPO];
await syncRepoArchive({
owner: swapOwner,
repo: swapRepo,
ref: ATOMIC_SWAP_REF,
destRoot: path.join(repoRoot, 'atomic-swap-dapp'),
authToken: GITEA_TOKEN,
});
if (CROSS_CHAIN_PMM_LPS_REPO) {
const [lpsOwner, lpsRepo] = CROSS_CHAIN_PMM_LPS_REPO.includes('/')
? CROSS_CHAIN_PMM_LPS_REPO.split('/')
: ['d-bis', CROSS_CHAIN_PMM_LPS_REPO];
await syncRepoArchive({
owner: lpsOwner,
repo: lpsRepo,
ref: CROSS_CHAIN_PMM_LPS_REF,
destRoot: path.join(repoRoot, 'cross-chain-pmm-lps'),
authToken: GITEA_TOKEN,
});
}
if (SMOM_DBIS_138_REPO) {
const [smomOwner, smomRepo] = SMOM_DBIS_138_REPO.includes('/')
? SMOM_DBIS_138_REPO.split('/')
: ['d-bis', SMOM_DBIS_138_REPO];
await syncRepoArchive({
owner: smomOwner,
repo: smomRepo,
ref: SMOM_DBIS_138_REF,
destRoot: path.join(repoRoot, 'smom-dbis-138'),
authToken: GITEA_TOKEN,
});
}
}
return {
PHOENIX_REPO_ROOT: repoRoot,
PROXMOX_REPO_ROOT: repoRoot,
PHOENIX_DEPLOY_WORKSPACE: repo === 'd-bis/proxmox' ? repoRoot : externalWorkspaceRoot,
};
}
async function runDeployTarget(definition, configDefaults, context, envOverrides = {}) {
if (!Array.isArray(definition.command) || definition.command.length === 0) {
throw new Error('Deploy target is missing a command array');
}
const childEnv = {
...process.env,
...envOverrides,
PHOENIX_DEPLOY_REPO: context.repo,
PHOENIX_DEPLOY_BRANCH: context.branch,
PHOENIX_DEPLOY_SHA: context.sha || '',
PHOENIX_DEPLOY_TARGET: context.target,
PHOENIX_DEPLOY_TRIGGER: context.trigger,
};
const cwd = expandEnvTokens(definition.cwd || configDefaults.cwd || process.cwd(), childEnv);
const timeoutSeconds = Number(definition.timeout_sec || configDefaults.timeout_sec || 1800);
const timeout = Number.isFinite(timeoutSeconds) && timeoutSeconds > 0 ? timeoutSeconds * 1000 : 1800 * 1000;
const command = definition.command.map((part) => expandEnvTokens(part, childEnv));
const missingEnv = (definition.required_env || []).filter((key) => !childEnv[key]);
if (missingEnv.length > 0) {
throw new Error(`Missing required env for deploy target: ${missingEnv.join(', ')}`);
}
if (!existsSync(cwd)) {
throw new Error(`Deploy working directory does not exist: ${cwd}`);
}
const { stdout, stderr } = await execFile(command[0], command.slice(1), {
cwd,
env: childEnv,
timeout,
maxBuffer: 10 * 1024 * 1024,
});
const healthcheck = await verifyHealthCheck(definition.healthcheck || configDefaults.healthcheck || null);
return {
cwd,
command,
stdout: stdout || '',
stderr: stderr || '',
timeout_sec: timeoutSeconds,
healthcheck,
};
}
async function executeDeploy({ repo, branch = 'main', target = 'default', sha = '', trigger = 'api' }) {
if (!repo) {
const error = new Error('repo required');
error.statusCode = 400;
error.payload = { error: error.message };
throw error;
}
const [owner, repoName] = repo.includes('/') ? repo.split('/') : ['d-bis', repo];
const commitSha = sha || '';
const requestedTarget = target || 'default';
const { config, match, wantedTarget } = findDeployTarget(repo, branch, requestedTarget);
if (!match) {
const error = new Error('Deploy target not configured');
error.statusCode = 404;
error.payload = {
error: error.message,
repo,
branch,
target: wantedTarget,
config_path: config.path,
};
if (commitSha && GITEA_TOKEN) {
await setGiteaCommitStatus(owner, repoName, commitSha, 'failure', `No deploy target for ${repo} ${branch} ${wantedTarget}`);
}
throw error;
}
if (commitSha && GITEA_TOKEN) {
await setGiteaCommitStatus(owner, repoName, commitSha, 'pending', 'Phoenix deployment in progress');
}
console.log(`[deploy] ${repo} branch=${branch} target=${wantedTarget} sha=${commitSha} trigger=${trigger}`);
let deployResult = null;
let deployError = null;
let envOverrides = {};
try {
envOverrides = await prepareDeployWorkspace({
repo,
branch,
sha: commitSha,
target: wantedTarget,
});
deployResult = await runDeployTarget(match, config.defaults, {
repo,
branch,
sha: commitSha,
target: wantedTarget,
trigger,
}, envOverrides);
if (commitSha && GITEA_TOKEN) {
await setGiteaCommitStatus(owner, repoName, commitSha, 'success', `Deployed to ${wantedTarget}`);
}
return {
status: 'completed',
repo,
branch,
target: wantedTarget,
config_path: config.path,
command: deployResult.command,
cwd: deployResult.cwd,
stdout: deployResult.stdout,
stderr: deployResult.stderr,
healthcheck: deployResult.healthcheck,
};
} catch (err) {
deployError = err;
if (commitSha && GITEA_TOKEN) {
await setGiteaCommitStatus(owner, repoName, commitSha, 'failure', `Deploy failed: ${err.message.slice(0, 120)}`);
}
err.statusCode = err.statusCode || 500;
err.payload = err.payload || {
error: err.message,
repo,
branch,
target: wantedTarget,
config_path: config.path,
};
throw err;
} finally {
if (PHOENIX_WEBHOOK_URL) {
const payload = {
event: 'deploy.completed',
repo,
branch,
target: wantedTarget,
sha: commitSha,
success: Boolean(deployResult),
command: deployResult?.command,
cwd: deployResult?.cwd,
phoenix_repo_root: envOverrides.PHOENIX_REPO_ROOT || null,
error: deployError?.message || null,
};
const body = JSON.stringify(payload);
const sig = crypto.createHmac('sha256', PHOENIX_WEBHOOK_SECRET || '').update(body).digest('hex');
fetch(PHOENIX_WEBHOOK_URL, {
method: 'POST',
headers: { 'Content-Type': 'application/json', 'X-Phoenix-Signature': `sha256=${sig}` },
body,
}).catch((e) => console.error('[webhook] outbound failed', e.message));
}
}
}
const httpsAgent = new https.Agent({ rejectUnauthorized: process.env.PROXMOX_TLS_VERIFY !== '0' });
function formatProxmoxAuthHeader(user, tokenName, tokenValue) {
if (tokenName.includes('!')) {
return `PVEAPIToken=${tokenName}=${tokenValue}`;
}
return `PVEAPIToken=${user}!${tokenName}=${tokenValue}`;
}
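For reference, the two header shapes `formatProxmoxAuthHeader` produces — a standalone sketch with the function re-declared and placeholder token values:

```javascript
// Proxmox expects `PVEAPIToken=USER@REALM!TOKENID=UUID`. If the configured
// token name already embeds `user!tokenid`, it is passed through unchanged;
// otherwise the user is prepended.
function formatProxmoxAuthHeader(user, tokenName, tokenValue) {
  if (tokenName.includes('!')) {
    return `PVEAPIToken=${tokenName}=${tokenValue}`;
  }
  return `PVEAPIToken=${user}!${tokenName}=${tokenValue}`;
}

console.log(formatProxmoxAuthHeader('root@pam', 'deploy', 'abc123'));
// PVEAPIToken=root@pam!deploy=abc123
console.log(formatProxmoxAuthHeader('root@pam', 'root@pam!deploy', 'abc123'));
// PVEAPIToken=root@pam!deploy=abc123
```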
async function proxmoxRequest(endpoint, method = 'GET', body = null) {
const baseUrl = `https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json`;
const url = `${baseUrl}${endpoint}`;
const options = {
method,
headers: {
Authorization: formatProxmoxAuthHeader(PROXMOX_USER, PROXMOX_TOKEN_NAME, PROXMOX_TOKEN_VALUE),
'Content-Type': 'application/json',
},
agent: httpsAgent,
@@ -162,12 +562,44 @@ app.post('/webhook/gitea', async (req, res) => {
if (action === 'push' || (action === 'synchronize' && payload.pull_request)) {
if (branch === 'main' || branch === 'master' || ref.startsWith('refs/tags/')) {
if (!WEBHOOK_DEPLOY_ENABLED) {
return res.status(200).json({
received: true,
repo: fullName,
branch,
sha,
deployed: false,
message: 'Webhook accepted; set PHOENIX_WEBHOOK_DEPLOY_ENABLED=1 to execute deploys from webhook events.',
});
}
try {
const result = await executeDeploy({
repo: fullName,
branch,
sha,
target: 'default',
trigger: 'webhook',
});
return res.status(200).json({
received: true,
repo: fullName,
branch,
sha,
deployed: true,
result,
});
} catch (err) {
return res.status(200).json({
received: true,
repo: fullName,
branch,
sha,
deployed: false,
error: err.message,
details: err.payload || null,
});
}
}
}
@@ -185,47 +617,36 @@ app.post('/api/deploy', async (req, res) => {
}
const { repo, branch = 'main', target, sha } = req.body;
if (!repo) {
return res.status(400).json({ error: 'repo required' });
}
try {
const result = await executeDeploy({
repo,
branch,
sha,
target,
trigger: 'api',
});
res.status(200).json(result);
} catch (err) {
res.status(err.statusCode || 500).json(err.payload || { error: err.message });
}
});
app.get('/api/deploy-targets', (req, res) => {
const config = loadDeployTargetsConfig();
const targets = config.targets.map((entry) => ({
repo: entry.repo,
branch: entry.branch || 'main',
target: entry.target || 'default',
description: entry.description || '',
cwd: entry.cwd || config.defaults.cwd || '',
command: entry.command || [],
has_healthcheck: Boolean(entry.healthcheck || config.defaults.healthcheck),
}));
res.json({
config_path: config.path,
count: targets.length,
targets,
});
});
/**
@@ -474,7 +895,10 @@ app.listen(PORT, () => {
if (!GITEA_TOKEN) console.warn('GITEA_TOKEN not set — commit status updates disabled');
if (!hasProxmox) console.warn('PROXMOX_* not set — Infra/VE API returns stub data');
if (PHOENIX_WEBHOOK_URL) console.log('Outbound webhook enabled:', PHOENIX_WEBHOOK_URL);
if (WEBHOOK_DEPLOY_ENABLED) console.log('Inbound webhook deploy execution enabled');
if (PARTNER_KEYS.length > 0) console.log('Partner API key auth enabled for /api/v1/* (except GET /api/v1/public-sector/programs)');
const mpath = resolvePublicSectorManifestPath();
const dpath = resolveDeployTargetsPath();
console.log(`Public-sector manifest: ${mpath} (${existsSync(mpath) ? 'ok' : 'missing'})`);
console.log(`Deploy targets: ${dpath} (${existsSync(dpath) ? 'ok' : 'missing'})`);
});

View File

@@ -0,0 +1,152 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SUBMODULE_ROOT="$PROJECT_ROOT/atomic-swap-dapp"
source "$PROJECT_ROOT/config/ip-addresses.conf" 2>/dev/null || true
PROXMOX_HOST="${PROXMOX_DAPP_HOST:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"
VMID="${VMID:-5801}"
DEPLOY_ROOT="${DEPLOY_ROOT:-/var/www/atomic-swap}"
TMP_ARCHIVE="/tmp/atomic-swap-dapp-5801.tgz"
DIST_DIR="$SUBMODULE_ROOT/dist"
SKIP_BUILD="${SKIP_BUILD:-0}"
SSH_OPTS="${SSH_OPTS:--o BatchMode=yes -o ConnectTimeout=15 -o StrictHostKeyChecking=accept-new}"
cleanup() {
rm -f "$TMP_ARCHIVE"
}
trap cleanup EXIT
if [ ! -d "$SUBMODULE_ROOT" ]; then
echo "Missing submodule at $SUBMODULE_ROOT" >&2
exit 1
fi
cd "$SUBMODULE_ROOT"
if [ "$SKIP_BUILD" != "1" ]; then
if [ -f package-lock.json ]; then
npm ci >/dev/null
else
npm install >/dev/null
fi
npm run sync:ecosystem >/dev/null
npm run validate:manifest >/dev/null
npm run build >/dev/null
fi
for required_path in \
"$DIST_DIR/index.html" \
"$DIST_DIR/data/ecosystem-manifest.json" \
"$DIST_DIR/data/live-route-registry.json" \
"$DIST_DIR/data/deployed-venue-inventory.json"; do
if [ ! -f "$required_path" ]; then
echo "Missing required build artifact: $required_path" >&2
exit 1
fi
done
jq -e '.supportedNetworks[] | select(.chainId == 138) | .deployedVenuePoolCount >= 19 and .publicRoutingPoolCount >= 19' \
"$DIST_DIR/data/ecosystem-manifest.json" >/dev/null
jq -e '.liveSwapRoutes | length >= 19' "$DIST_DIR/data/live-route-registry.json" >/dev/null
jq -e '.liveBridgeRoutes | length >= 12' "$DIST_DIR/data/live-route-registry.json" >/dev/null
jq -e '.networks[] | select(.chainId == 138) | .venueCounts.deployedVenuePoolCount >= 19 and .summary.totalVenues >= 19' \
"$DIST_DIR/data/deployed-venue-inventory.json" >/dev/null
rm -f "$TMP_ARCHIVE"
tar -C "$SUBMODULE_ROOT" -czf "$TMP_ARCHIVE" dist
ssh $SSH_OPTS "root@$PROXMOX_HOST" true
scp -q $SSH_OPTS "$TMP_ARCHIVE" "root@$PROXMOX_HOST:/tmp/atomic-swap-dapp-5801.tgz"
ssh $SSH_OPTS "root@$PROXMOX_HOST" "
set -euo pipefail
pct push $VMID /tmp/atomic-swap-dapp-5801.tgz /tmp/atomic-swap-dapp-5801.tgz
pct exec $VMID -- bash -lc '
set -euo pipefail
mkdir -p \"$DEPLOY_ROOT\"
find \"$DEPLOY_ROOT\" -mindepth 1 -maxdepth 1 -exec rm -rf {} +
rm -rf /tmp/dist
tar -xzf /tmp/atomic-swap-dapp-5801.tgz -C /tmp
cp -R /tmp/dist/. \"$DEPLOY_ROOT/\"
mkdir -p /var/cache/nginx/atomic-swap-api
cat > /etc/nginx/conf.d/atomic-swap-api-cache.conf <<\"EOF\"
proxy_cache_path /var/cache/nginx/atomic-swap-api
levels=1:2
keys_zone=atomic_swap_api_cache:10m
max_size=256m
inactive=30m
use_temp_path=off;
EOF
cat > /etc/nginx/sites-available/atomic-swap <<\"EOF\"
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root $DEPLOY_ROOT;
index index.html;
location / {
try_files \$uri \$uri/ /index.html;
}
location = /index.html {
add_header Cache-Control \"no-store, no-cache, must-revalidate\" always;
}
location /data/ {
add_header Cache-Control \"no-store, no-cache, must-revalidate\" always;
}
location /assets/ {
add_header Cache-Control \"public, max-age=31536000, immutable\" always;
}
location /api/v1/ {
proxy_pass https://explorer.d-bis.org/api/v1/;
proxy_ssl_server_name on;
proxy_set_header Host explorer.d-bis.org;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host \$host;
proxy_http_version 1.1;
proxy_buffering on;
proxy_cache atomic_swap_api_cache;
proxy_cache_methods GET HEAD;
proxy_cache_key \"\$scheme\$proxy_host\$request_uri\";
proxy_cache_lock on;
proxy_cache_lock_timeout 10s;
proxy_cache_lock_age 10s;
proxy_cache_background_update on;
proxy_cache_revalidate on;
proxy_cache_valid 200 10s;
proxy_cache_valid 404 1s;
proxy_cache_valid any 0;
proxy_cache_use_stale error timeout invalid_header updating http_429 http_500 http_502 http_503 http_504;
add_header X-Atomic-Swap-Cache \$upstream_cache_status always;
}
}
EOF
ln -sfn /etc/nginx/sites-available/atomic-swap /etc/nginx/sites-enabled/atomic-swap
rm -f /etc/nginx/sites-enabled/default
rm -f /etc/nginx/sites-enabled/dapp
nginx -t
systemctl reload nginx
curl -fsS http://127.0.0.1/index.html >/dev/null
curl -fsS http://127.0.0.1/data/ecosystem-manifest.json >/dev/null
curl -fsS http://127.0.0.1/data/live-route-registry.json >/dev/null
curl -fsS http://127.0.0.1/data/deployed-venue-inventory.json >/dev/null
rm -rf /tmp/dist /tmp/atomic-swap-dapp-5801.tgz
'
rm -f /tmp/atomic-swap-dapp-5801.tgz
"
curl -fsS https://atomic-swap.defi-oracle.io/ >/dev/null
curl -fsS https://atomic-swap.defi-oracle.io/data/ecosystem-manifest.json | jq -e '.supportedNetworks[] | select(.chainId == 138) | .deployedVenuePoolCount >= 19 and .publicRoutingPoolCount >= 19' >/dev/null
curl -fsS https://atomic-swap.defi-oracle.io/data/live-route-registry.json | jq -e '.liveSwapRoutes | length >= 19' >/dev/null
curl -fsS https://atomic-swap.defi-oracle.io/data/live-route-registry.json | jq -e '.liveBridgeRoutes | length >= 12' >/dev/null
curl -fsS https://atomic-swap.defi-oracle.io/data/deployed-venue-inventory.json | jq -e '.networks[] | select(.chainId == 138) | .venueCounts.deployedVenuePoolCount >= 19 and .summary.totalVenues >= 19' >/dev/null
echo "Deployed atomic-swap-dapp to VMID $VMID via $PROXMOX_HOST"

View File

@@ -1,115 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# Deploy only the Chain 138 Aave quote-push receiver.
# Default: dry-run. Use --apply to broadcast.
#
# Required env for apply:
# PRIVATE_KEY
# CHAIN_138_AAVE_POOL
# Optional env:
# CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER_OWNER
# CHAIN138_RPC_URL / RPC_URL_138
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
SMOM="${PROJECT_ROOT}/smom-dbis-138"
_qp_private_key="${PRIVATE_KEY-}"
_qp_rpc="${RPC_URL_138:-${CHAIN138_RPC_URL:-}}"
_qp_pool="${CHAIN_138_AAVE_POOL-}"
_qp_owner="${CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER_OWNER-}"
# shellcheck disable=SC1091
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
# shellcheck disable=SC1091
source "${SMOM}/scripts/load-env.sh" >/dev/null 2>&1 || true
[[ -n "$_qp_private_key" ]] && export PRIVATE_KEY="$_qp_private_key"
[[ -n "$_qp_rpc" ]] && export RPC_URL_138="$_qp_rpc"
[[ -n "$_qp_pool" ]] && export AAVE_POOL_ADDRESS="$_qp_pool"
[[ -n "$_qp_owner" ]] && export QUOTE_PUSH_RECEIVER_OWNER="$_qp_owner"
unset _qp_private_key _qp_rpc _qp_pool _qp_owner
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_env() {
local name="$1"
if [[ -z "${!name:-}" ]]; then
echo "[fail] missing required env: $name" >&2
exit 1
fi
}
resolved_chain_id() {
if [[ -n "${RPC_URL_138:-}" ]] && command -v cast >/dev/null 2>&1; then
cast chain-id --rpc-url "$RPC_URL_138" 2>/dev/null | awk '{print $1}'
return 0
fi
echo "138"
}
pick_latest_receiver() {
local mode="$1"
local chain_id
chain_id="$(resolved_chain_id)"
local latest_json="${SMOM}/broadcast/DeployAaveQuotePushFlashReceiver.s.sol/${chain_id}/run-latest.json"
if [[ "$mode" == "dry-run" ]]; then
latest_json="${SMOM}/broadcast/DeployAaveQuotePushFlashReceiver.s.sol/${chain_id}/dry-run/run-latest.json"
fi
if [[ ! -f "$latest_json" ]] || ! command -v jq >/dev/null 2>&1; then
return 1
fi
jq -r '.transactions[]? | select(.transactionType == "CREATE" and .contractName == "AaveQuotePushFlashReceiver") | .contractAddress' \
"$latest_json" | tail -n1
}
require_cmd cast
require_cmd forge
MODE="dry-run"
BROADCAST=()
for arg in "$@"; do
case "$arg" in
--dry-run) MODE="dry-run"; BROADCAST=() ;;
--apply) MODE="apply"; BROADCAST=(--broadcast) ;;
*)
echo "[fail] unknown arg: $arg (use --dry-run or --apply)" >&2
exit 2
;;
esac
done
require_env PRIVATE_KEY
require_env RPC_URL_138
require_env AAVE_POOL_ADDRESS
echo "=== deploy-chain138-aave-quote-push-receiver ($MODE) ==="
echo "rpcUrl=$RPC_URL_138"
echo "aavePool=$AAVE_POOL_ADDRESS"
if [[ -n "${QUOTE_PUSH_RECEIVER_OWNER:-}" ]]; then
echo "receiverOwner=$QUOTE_PUSH_RECEIVER_OWNER"
fi
(
cd "$SMOM"
forge script script/deploy/DeployAaveQuotePushFlashReceiver.s.sol:DeployAaveQuotePushFlashReceiver \
--rpc-url "$RPC_URL_138" \
"${BROADCAST[@]}" \
-vvvv
)
receiver_addr="$(pick_latest_receiver "$MODE" || true)"
echo
if [[ "$MODE" == "dry-run" ]]; then
echo "Projected receiver address from this dry-run:"
else
echo "After --apply: copy deployed address into .env:"
fi
echo " CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER=${receiver_addr:-...}"

View File

@@ -1,75 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# Chain 138 wrapper for the generic MEV execution deployer using the
# Aave V3 provider-adapter path.
#
# Default is dry-run. Use --apply to broadcast.
#
# Required env for apply:
# PRIVATE_KEY
# CHAIN_138_AAVE_POOL
# Optional env:
# CHAIN_138_AAVE_EXECUTOR_TREASURY
# CHAIN_138_AAVE_EXECUTOR_OWNER
# CHAIN138_RPC_URL / RPC_URL_138
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM_ROOT="$PROJECT_ROOT/smom-dbis-138"
DRY_RUN=1
for arg in "$@"; do
case "$arg" in
--apply) DRY_RUN=0 ;;
--dry-run) DRY_RUN=1 ;;
*)
echo "Unknown argument: $arg" >&2
exit 1
;;
esac
done
if [[ -f "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh" ]]; then
# shellcheck disable=SC1091
source "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh"
load_deployment_env --repo-root "$SMOM_ROOT"
fi
RPC_URL="${RPC_URL_138:-${CHAIN138_RPC_URL:-${RPC_URL:-http://192.168.11.211:8545}}}"
AAVE_POOL="${CHAIN_138_AAVE_POOL:-${AAVE_POOL:-}}"
TREASURY="${CHAIN_138_AAVE_EXECUTOR_TREASURY:-${TREASURY:-}}"
EXECUTOR_OWNER="${CHAIN_138_AAVE_EXECUTOR_OWNER:-${EXECUTOR_OWNER:-}}"
OUTPUT_PATH="${MEV_EXECUTION_DEPLOY_OUTPUT:-$PROJECT_ROOT/reports/status/chain138_aave_execution_deploy_$(date +%Y%m%d_%H%M%S).json}"
if [[ -z "$AAVE_POOL" ]]; then
echo "Missing CHAIN_138_AAVE_POOL (or AAVE_POOL)." >&2
exit 1
fi
CMD=(
bash "$PROJECT_ROOT/scripts/deployment/deploy-mev-execution-contracts.sh"
--rpc-url "$RPC_URL"
--aave-pool "$AAVE_POOL"
--config "$PROJECT_ROOT/MEV_Bot/mev-platform/config.toml"
--output "$OUTPUT_PATH"
)
if [[ -n "$TREASURY" ]]; then
CMD+=(--treasury "$TREASURY")
fi
if [[ -n "$EXECUTOR_OWNER" ]]; then
CMD+=(--executor-owner "$EXECUTOR_OWNER")
fi
if (( DRY_RUN )); then
CMD+=(--dry-run)
fi
echo "=== deploy-chain138-aave-v3-execution-stack ==="
echo "rpcUrl=$RPC_URL"
echo "aavePool=$AAVE_POOL"
[[ -n "$TREASURY" ]] && echo "treasury=$TREASURY"
[[ -n "$EXECUTOR_OWNER" ]] && echo "executorOwner=$EXECUTOR_OWNER"
echo "outputPath=$OUTPUT_PATH"
"${CMD[@]}"

View File

@@ -1,45 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
AAVE_ROOT="$PROJECT_ROOT/vendor/chain138-protocols/aave-v3-origin"
MANIFEST="${CHAIN138_AAVE_V3_ORIGIN_MANIFEST:-$PROJECT_ROOT/config/chain138-aave-v3-origin-manifest.example.json}"
GENERATED_SOL="${CHAIN138_AAVE_V3_ORIGIN_GENERATED_SOL:-$AAVE_ROOT/scripts/generated/Chain138AaveV3OriginMarket.sol}"
CONTRACT_NAME="${CHAIN138_AAVE_V3_ORIGIN_CONTRACT_NAME:-Chain138AaveV3OriginMarket}"
RPC_URL="${RPC_URL_138:-${CHAIN138_RPC_URL:-${RPC_URL:-http://192.168.11.211:8545}}}"
MODE="dry-run"
for arg in "$@"; do
case "$arg" in
--apply) MODE="apply" ;;
--dry-run) MODE="dry-run" ;;
*) echo "Unknown argument: $arg" >&2; exit 1 ;;
esac
done
command -v python3 >/dev/null 2>&1 || { echo "python3 is required" >&2; exit 1; }
command -v forge >/dev/null 2>&1 || { echo "forge is required" >&2; exit 1; }
[[ -f "$MANIFEST" ]] || { echo "Manifest not found: $MANIFEST" >&2; exit 1; }
python3 "$PROJECT_ROOT/scripts/deployment/render-chain138-aave-v3-origin-market-input.py" "$MANIFEST" "$GENERATED_SOL" >/dev/null
echo "=== deploy-chain138-aave-v3-origin-market ($MODE) ==="
echo "manifest=$MANIFEST"
echo "generatedSol=$GENERATED_SOL"
echo "rpcUrl=$RPC_URL"
echo "contract=$CONTRACT_NAME"
if [[ "$MODE" == "dry-run" ]]; then
echo "forge -C \"$AAVE_ROOT\" script scripts/generated/$(basename "$GENERATED_SOL"):$CONTRACT_NAME --rpc-url \"$RPC_URL\" -vvvv"
exit 0
fi
(
cd "$AAVE_ROOT"
forge script "scripts/generated/$(basename "$GENERATED_SOL"):$CONTRACT_NAME" \
--rpc-url "$RPC_URL" \
--broadcast \
-vvvv
)

View File

@@ -1,35 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
GMX_ROOT="$PROJECT_ROOT/vendor/chain138-protocols/gmx-synthetics"
MANIFEST="${CHAIN138_GMX_SYNTHETICS_MANIFEST:-$PROJECT_ROOT/config/chain138-gmx-synthetics-manifest.example.json}"
MODE="dry-run"
for arg in "$@"; do
case "$arg" in
--apply) MODE="apply" ;;
--dry-run) MODE="dry-run" ;;
*) echo "Unknown argument: $arg" >&2; exit 1 ;;
esac
done
OUT_DIR="$(bash "$PROJECT_ROOT/scripts/deployment/prepare-chain138-gmx-synthetics-overlay.sh")"
CONFIG_PATH="$OUT_DIR/hardhat.chain138.config.ts"
TAGS="${CHAIN138_GMX_CORE_TAGS:-Router,ExchangeRouter,Reader,OrderVault,DepositVault,WithdrawalVault}"
echo "=== deploy-chain138-gmx-synthetics-core ($MODE) ==="
echo "manifest=$MANIFEST"
echo "overlay=$OUT_DIR"
echo "tags=$TAGS"
if [[ "$MODE" == "dry-run" ]]; then
echo "cd \"$GMX_ROOT\" && npx hardhat deploy --config \"$CONFIG_PATH\" --network chain138 --tags \"$TAGS\""
exit 0
fi
(
cd "$GMX_ROOT"
npx hardhat deploy --config "$CONFIG_PATH" --network chain138 --tags "$TAGS"
)

View File

@@ -1,50 +0,0 @@
#!/usr/bin/env bash
#
# Root-level wrapper for the existing Chain 138 pilot venue deployer covering
# Uniswap v3, Balancer, Curve, and 1inch reference rails.
#
# Default is dry-run. Use --apply to broadcast.
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM_ROOT="$PROJECT_ROOT/smom-dbis-138"
DRY_RUN=1
for arg in "$@"; do
case "$arg" in
--apply) DRY_RUN=0 ;;
--dry-run) DRY_RUN=1 ;;
*)
echo "Unknown argument: $arg" >&2
exit 1
;;
esac
done
if [[ -f "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh" ]]; then
# shellcheck disable=SC1091
source "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh"
load_deployment_env --repo-root "$SMOM_ROOT"
fi
RPC_URL="${RPC_URL_138:-${RPC_URL:-http://192.168.11.211:8545}}"
CMD=(
forge script
script/bridge/trustless/DeployChain138PilotDexVenues.s.sol:DeployChain138PilotDexVenues
--rpc-url "$RPC_URL"
--chain-id 138
--legacy
-vvv
)
if (( DRY_RUN )); then
echo "[DRY-RUN] cd $SMOM_ROOT && ${CMD[*]} --broadcast --private-key \$PRIVATE_KEY"
else
(
cd "$SMOM_ROOT"
"${CMD[@]}" --broadcast --private-key "$PRIVATE_KEY"
)
fi

View File

@@ -1,42 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
SMOM_ROOT="${PROJECT_ROOT}/smom-dbis-138"
if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
fi
if [[ -f "${SMOM_ROOT}/scripts/lib/deployment/dotenv.sh" ]]; then
# shellcheck source=/dev/null
source "${SMOM_ROOT}/scripts/lib/deployment/dotenv.sh"
load_deployment_env --repo-root "${SMOM_ROOT}"
fi
if [[ -n "${PRIVATE_KEY:-}" ]]; then
PRIVATE_KEY="$(printf '%s' "${PRIVATE_KEY}" | tr -d '\r\n')"
export PRIVATE_KEY
fi
DRY_RUN=1
for arg in "$@"; do
case "$arg" in
--apply) DRY_RUN=0 ;;
--dry-run) DRY_RUN=1 ;;
*)
echo "Unknown argument: ${arg}" >&2
exit 1
;;
esac
done
CMD=(npx hardhat run scripts/chain138/deploy-sushiswap-native.js --network chain138 --no-compile)
if (( DRY_RUN )); then
echo "[DRY-RUN] cd ${SMOM_ROOT} && ${CMD[*]}"
else
(cd "${SMOM_ROOT}" && "${CMD[@]}")
fi

View File

@@ -1,42 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
SMOM_ROOT="${PROJECT_ROOT}/smom-dbis-138"
if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
fi
if [[ -f "${SMOM_ROOT}/scripts/lib/deployment/dotenv.sh" ]]; then
# shellcheck source=/dev/null
source "${SMOM_ROOT}/scripts/lib/deployment/dotenv.sh"
load_deployment_env --repo-root "${SMOM_ROOT}"
fi
if [[ -n "${PRIVATE_KEY:-}" ]]; then
PRIVATE_KEY="$(printf '%s' "${PRIVATE_KEY}" | tr -d '\r\n')"
export PRIVATE_KEY
fi
DRY_RUN=1
for arg in "$@"; do
case "$arg" in
--apply) DRY_RUN=0 ;;
--dry-run) DRY_RUN=1 ;;
*)
echo "Unknown argument: ${arg}" >&2
exit 1
;;
esac
done
CMD=(npx hardhat run scripts/chain138/deploy-uniswap-v2-native.js --network chain138 --no-compile)
if (( DRY_RUN )); then
echo "[DRY-RUN] cd ${SMOM_ROOT} && ${CMD[*]}"
else
(cd "${SMOM_ROOT}" && "${CMD[@]}")
fi
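These wrappers all share the same dry-run-by-default convention: parse `--apply`/`--dry-run`, build the command as an array, then either echo it or execute it. A minimal function-based sketch of that pattern (the `deploy-step` command is a stand-in, not a real deploy):

```shell
run_wrapper() {
  local DRY_RUN=1 arg
  for arg in "$@"; do
    case "$arg" in
      --apply) DRY_RUN=0 ;;
      --dry-run) DRY_RUN=1 ;;
      *) echo "Unknown argument: $arg" >&2; return 1 ;;
    esac
  done
  # Build the command as an array so arguments with spaces survive intact.
  local CMD=(echo deploy-step)
  if (( DRY_RUN )); then
    echo "[DRY-RUN] ${CMD[*]}"
  else
    "${CMD[@]}"
  fi
}
run_wrapper           # prints: [DRY-RUN] echo deploy-step
run_wrapper --apply   # prints: deploy-step
```

The array form matters: `"${CMD[@]}"` re-expands each element as one word, while `${CMD[*]}` flattens it to a printable string for the dry-run echo.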

View File

@@ -0,0 +1,244 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh"
source "$PROJECT_ROOT/config/ip-addresses.conf" 2>/dev/null || true
PHOENIX_DEPLOY_WORKSPACE="${PHOENIX_DEPLOY_WORKSPACE:-}"
PROXMOX_HOST="${PROXMOX_HOST_R630_01:-192.168.11.11}"
PROXMOX_SSH_USER="${PROXMOX_SSH_USER:-root}"
VMID="${CURRENCICOMBO_PHOENIX_VMID:-8604}"
CT_IP="${IP_CURRENCICOMBO_PHOENIX:-10.160.0.14}"
CT_REPO_DIR="${CT_REPO_DIR:-/var/lib/currencicombo/repo}"
PUBLIC_URL="${PUBLIC_URL:-https://curucombo.xn--vov0g.com}"
PUBLIC_DOMAIN="${PUBLIC_DOMAIN:-curucombo.xn--vov0g.com}"
NPM_URL="${NPM_URL:-https://${IP_NPMPLUS:-192.168.11.167}:81}"
NPM_EMAIL="${NPM_EMAIL:-}"
NPM_PASSWORD="${NPM_PASSWORD:-}"
DRY_RUN=0
usage() {
cat <<'USAGE'
Usage: phoenix-deploy-currencicombo-from-workspace.sh [--dry-run]
Requires:
PHOENIX_DEPLOY_WORKSPACE Full staged CurrenciCombo checkout prepared by phoenix-deploy-api
This script:
1. Packs the staged repo workspace.
2. Pushes it into CT 8604 on r630-01.
3. Ensures host prerequisites, install.sh, prune cron, and deploy script run in-CT.
4. Updates the public NPMplus host so /api/* preserves the full path and supports SSE.
5. Verifies the public portal + /api/ready end to end.
USAGE
}
while [[ $# -gt 0 ]]; do
case "$1" in
--dry-run) DRY_RUN=1; shift ;;
-h|--help) usage; exit 0 ;;
*) echo "unknown arg: $1" >&2; usage; exit 2 ;;
esac
done
log() { printf '[currencicombo-phoenix] %s\n' "$*" >&2; }
die() { printf '[currencicombo-phoenix][FATAL] %s\n' "$*" >&2; exit 1; }
run() { if [[ "$DRY_RUN" -eq 1 ]]; then printf '[dry-run] %s\n' "$*" >&2; else eval "$*"; fi; }
need_cmd() { command -v "$1" >/dev/null 2>&1 || die "missing required command: $1"; }
for cmd in ssh scp tar curl jq mktemp; do
need_cmd "$cmd"
done
[[ -n "$PHOENIX_DEPLOY_WORKSPACE" ]] || die "PHOENIX_DEPLOY_WORKSPACE is required"
[[ -d "$PHOENIX_DEPLOY_WORKSPACE" ]] || die "staged workspace missing: $PHOENIX_DEPLOY_WORKSPACE"
if [[ "$DRY_RUN" -eq 0 ]]; then
[[ -n "$NPM_EMAIL" ]] || die "NPM_EMAIL is required"
[[ -n "$NPM_PASSWORD" ]] || die "NPM_PASSWORD is required"
fi
SSH_TARGET="${PROXMOX_SSH_USER}@${PROXMOX_HOST}"
SSH_OPTS=(-o BatchMode=yes -o ConnectTimeout=15 -o StrictHostKeyChecking=accept-new)
TMP_DIR="$(mktemp -d /tmp/currencicombo-phoenix-XXXXXX)"
ARCHIVE_PATH="${TMP_DIR}/currencicombo-workspace.tgz"
REMOTE_ARCHIVE="/tmp/$(basename "$ARCHIVE_PATH")"
CT_ARCHIVE="/root/$(basename "$ARCHIVE_PATH")"
NPM_COOKIE_JAR="${TMP_DIR}/npm-cookies.txt"
cleanup() {
rm -rf "$TMP_DIR"
}
trap cleanup EXIT
ssh_remote() {
local cmd="$1"
if [[ "$DRY_RUN" -eq 1 ]]; then
printf '[dry-run] ssh %q %q\n' "$SSH_TARGET" "$cmd" >&2
else
ssh "${SSH_OPTS[@]}" "$SSH_TARGET" "$cmd"
fi
}
pct_exec_script() {
local local_script="$1"
local remote_script
local ct_script
remote_script="/tmp/$(basename "$local_script")"
ct_script="/root/$(basename "$local_script")"
run "scp ${SSH_OPTS[*]} '$local_script' '${SSH_TARGET}:${remote_script}'"
ssh_remote "pct push ${VMID} '${remote_script}' '${ct_script}' --perms 0755 && rm -f '${remote_script}' && pct exec ${VMID} -- bash '${ct_script}' && pct exec ${VMID} -- rm -f '${ct_script}'"
}
log "packing staged workspace from ${PHOENIX_DEPLOY_WORKSPACE}"
run "tar -C '$PHOENIX_DEPLOY_WORKSPACE' --exclude='.git' --exclude='node_modules' --exclude='dist' --exclude='orchestrator/node_modules' --exclude='orchestrator/dist' -czf '$ARCHIVE_PATH' ."
log "ensuring CT ${VMID} is running on ${PROXMOX_HOST}"
ssh_remote "pct start ${VMID} >/dev/null 2>&1 || true"
log "uploading staged archive to CT ${VMID}"
run "scp ${SSH_OPTS[*]} '$ARCHIVE_PATH' '${SSH_TARGET}:${REMOTE_ARCHIVE}'"
ssh_remote "pct push ${VMID} '${REMOTE_ARCHIVE}' '${CT_ARCHIVE}' && rm -f '${REMOTE_ARCHIVE}'"
CT_SCRIPT="${TMP_DIR}/currencicombo-ct-deploy.sh"
cat > "$CT_SCRIPT" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
export DEBIAN_FRONTEND=noninteractive
ARCHIVE_PATH="__CT_ARCHIVE__"
REPO_DIR="__CT_REPO_DIR__"
need_pkg() {
dpkg -s "$1" >/dev/null 2>&1
}
apt-get update -qq
for pkg in ca-certificates curl git jq postgresql redis-server rsync build-essential; do
need_pkg "$pkg" || apt-get install -y -qq "$pkg"
done
if ! command -v node >/dev/null 2>&1 || ! node -v 2>/dev/null | grep -q '^v20\.'; then
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt-get install -y -qq nodejs
fi
systemctl enable --now postgresql >/dev/null 2>&1 || true
systemctl enable --now redis-server >/dev/null 2>&1 || true
if [[ ! -f /root/currencicombo-prephoenix-archive.tgz && -d /opt/currencicombo ]]; then
tar -czf /root/currencicombo-prephoenix-archive.tgz /opt/currencicombo /etc/currencicombo 2>/dev/null || true
fi
install -d -o root -g root -m 0755 "$(dirname "$REPO_DIR")"
rm -rf "$REPO_DIR"
mkdir -p "$REPO_DIR"
tar -xzf "$ARCHIVE_PATH" -C "$REPO_DIR"
rm -f "$ARCHIVE_PATH"
bash "$REPO_DIR/scripts/deployment/install.sh"
bash "$REPO_DIR/scripts/deployment/install-prune-cron.sh"
CC_GIT_REF=local bash "$REPO_DIR/scripts/deployment/deploy-currencicombo-8604.sh"
systemctl is-active currencicombo-orchestrator.service currencicombo-webapp.service
curl -fsS http://127.0.0.1:8080/ready
curl -fsS http://127.0.0.1:3000/ >/dev/null
EOF
perl -0pi -e "s|__CT_ARCHIVE__|${CT_ARCHIVE//|/\\|}|g; s|__CT_REPO_DIR__|${CT_REPO_DIR//|/\\|}|g" "$CT_SCRIPT"
log "running install + deploy inside CT ${VMID}"
pct_exec_script "$CT_SCRIPT"
if [[ "$DRY_RUN" -eq 0 ]]; then
log "updating NPMplus proxy host for ${PUBLIC_DOMAIN}"
AUTH_JSON="$(jq -nc --arg identity "$NPM_EMAIL" --arg secret "$NPM_PASSWORD" '{identity:$identity,secret:$secret}')"
TOKEN_RESPONSE="$(curl -sk -X POST "$NPM_URL/api/tokens" -H 'Content-Type: application/json' -d "$AUTH_JSON" -c "$NPM_COOKIE_JAR")"
TOKEN="$(echo "$TOKEN_RESPONSE" | jq -r '.token // .accessToken // .access_token // .data.token // empty' 2>/dev/null)"
USE_COOKIE_AUTH=0
if [[ -z "$TOKEN" || "$TOKEN" == "null" ]]; then
if echo "$TOKEN_RESPONSE" | jq -e '.expires' >/dev/null 2>&1; then
USE_COOKIE_AUTH=1
else
die "NPMplus authentication failed"
fi
fi
npm_api() {
if [[ "$USE_COOKIE_AUTH" -eq 1 ]]; then
curl -sk -b "$NPM_COOKIE_JAR" "$@"
else
curl -sk -H "Authorization: Bearer $TOKEN" "$@"
fi
}
HOSTS_JSON="$(npm_api -X GET "$NPM_URL/api/nginx/proxy-hosts")"
HOST_ID="$(echo "$HOSTS_JSON" | jq -r --arg domain "$PUBLIC_DOMAIN" '
(if type == "array" then . elif .data != null then .data elif .result != null then .result else [] end)
| map(select(.domain_names | type == "array"))
| map(select(any(.domain_names[]; . == $domain)))
| .[0].id // empty
')"
[[ -n "$HOST_ID" ]] || die "NPMplus proxy host not found for ${PUBLIC_DOMAIN}"
ADVANCED_CONFIG="$(cat <<CFG
location ^~ /api/ {
proxy_pass http://${CT_IP}:8080;
proxy_http_version 1.1;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header Connection \"\";
proxy_buffering off;
proxy_cache off;
proxy_read_timeout 24h;
proxy_send_timeout 24h;
add_header Cache-Control \"no-cache\";
}
CFG
)"
PAYLOAD="$(echo "$HOSTS_JSON" | jq -c --arg domain "$PUBLIC_DOMAIN" --arg host "$CT_IP" --arg advanced "$ADVANCED_CONFIG" '
(if type == "array" then . elif .data != null then .data elif .result != null then .result else [] end)
| map(select(.domain_names | type == "array"))
| map(select(any(.domain_names[]; . == $domain)))
| .[0]
| {
domain_names,
forward_scheme: (.forward_scheme // "http"),
forward_host: $host,
forward_port: 3000,
access_list_id,
certificate_id,
ssl_forced,
caching_enabled,
block_exploits,
advanced_config: $advanced,
allow_websocket_upgrade,
http2_support,
hsts_enabled,
hsts_subdomains,
enabled
}
')"
[[ -n "$PAYLOAD" && "$PAYLOAD" != "null" ]] || die "failed to build NPMplus update payload"
UPDATE_RESPONSE="$(npm_api -X PUT "$NPM_URL/api/nginx/proxy-hosts/${HOST_ID}" -H 'Content-Type: application/json' -d "$PAYLOAD")"
echo "$UPDATE_RESPONSE" | jq -e '.id != null' >/dev/null 2>&1 || die "NPMplus proxy host update failed"
log "running public smoke checks"
HEADERS="$(curl -skI "$PUBLIC_URL/")"
echo "$HEADERS" | grep -q '^HTTP/2 200' || die "public root is not HTTP 200"
if echo "$HEADERS" | grep -qi '^x-nextjs-prerender:'; then
die "old Next.js headers still present on public root"
fi
curl -sk "$PUBLIC_URL/" | grep -F '<title>Solace Bank Group PLC — Treasury Management Portal</title>' >/dev/null || die "public title mismatch"
READY_BODY="$(curl -sk "$PUBLIC_URL/api/ready")"
echo "$READY_BODY" | grep -F '"ready":true' >/dev/null || die "public /api/ready failed"
curl -skN --max-time 5 -H 'Accept: text/event-stream' "$PUBLIC_URL/api/plans/demo-pay-014/status/stream" | grep -F '"type":"connected"' >/dev/null || die "public SSE smoke failed"
log "capturing EXT-* blocker summary"
ssh_remote "pct exec ${VMID} -- journalctl -u currencicombo-orchestrator.service -n 200 --no-pager | grep -E 'ExternalBlockers|EXT-' || true"
fi
log "CurrenciCombo Phoenix deploy completed from ${PHOENIX_DEPLOY_WORKSPACE}"

View File

@@ -1,13 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
MANIFEST="${CHAIN138_GMX_SYNTHETICS_MANIFEST:-$PROJECT_ROOT/config/chain138-gmx-synthetics-manifest.example.json}"
OUT_DIR="${CHAIN138_GMX_SYNTHETICS_RENDER_DIR:-$PROJECT_ROOT/reports/generated/chain138-gmx-synthetics}"
command -v python3 >/dev/null 2>&1 || { echo "python3 is required" >&2; exit 1; }
[[ -f "$MANIFEST" ]] || { echo "Manifest not found: $MANIFEST" >&2; exit 1; }
python3 "$PROJECT_ROOT/scripts/deployment/render-chain138-gmx-synthetics-overlay.py" "$MANIFEST" "$OUT_DIR" >/dev/null
echo "$OUT_DIR"

View File

@@ -1,84 +0,0 @@
#!/usr/bin/env python3
import json
import pathlib
import sys
def solidity_addr(value: str) -> str:
if not value:
return "address(0)"
return value
def solidity_bytes32(value: str) -> str:
if not value:
return "bytes32(0)"
return value
def main() -> int:
if len(sys.argv) != 3:
print("usage: render-chain138-aave-v3-origin-market-input.py <manifest.json> <output.sol>", file=sys.stderr)
return 1
manifest_path = pathlib.Path(sys.argv[1])
output_path = pathlib.Path(sys.argv[2])
manifest = json.loads(manifest_path.read_text())
name = manifest.get("contractName", "Chain138AaveV3OriginMarket")
roles = manifest["roles"]
flags = manifest["flags"]
config = manifest["marketConfig"]
content = f"""// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
import {{DeployAaveV3MarketBatchedBase}} from "../misc/DeployAaveV3MarketBatchedBase.sol";
import {{MarketInput}} from "../../src/deployments/inputs/MarketInput.sol";
contract {name} is DeployAaveV3MarketBatchedBase, MarketInput {{
function _getMarketInput(
address
)
internal
pure
override
returns (
Roles memory roles,
MarketConfig memory config,
DeployFlags memory flags,
MarketReport memory deployedContracts
)
{{
roles.marketOwner = {solidity_addr(roles.get("marketOwner", ""))};
roles.poolAdmin = {solidity_addr(roles.get("poolAdmin", ""))};
roles.emergencyAdmin = {solidity_addr(roles.get("emergencyAdmin", ""))};
config.marketId = "{config.get("marketId", "Chain 138 Aave V3 Market")}";
config.providerId = {config.get("providerId", 138)};
config.oracleDecimals = {config.get("oracleDecimals", 8)};
config.networkBaseTokenPriceInUsdProxyAggregator = {solidity_addr(config.get("networkBaseTokenPriceInUsdProxyAggregator", ""))};
config.marketReferenceCurrencyPriceInUsdProxyAggregator = {solidity_addr(config.get("marketReferenceCurrencyPriceInUsdProxyAggregator", ""))};
config.l2SequencerUptimeFeed = {solidity_addr(config.get("l2SequencerUptimeFeed", ""))};
config.l2PriceOracleSentinelGracePeriod = {config.get("l2PriceOracleSentinelGracePeriod", 0)};
config.salt = {solidity_bytes32(config.get("salt", ""))};
config.wrappedNativeToken = {solidity_addr(config.get("wrappedNativeToken", ""))};
config.flashLoanPremium = {config.get("flashLoanPremium", "5000000000000000")};
config.incentivesProxy = {solidity_addr(config.get("incentivesProxy", ""))};
config.treasury = {solidity_addr(config.get("treasury", ""))};
flags.l2 = {str(bool(flags.get("l2", False))).lower()};
return (roles, config, flags, deployedContracts);
}}
}}
"""
output_path.parent.mkdir(parents=True, exist_ok=True)
output_path.write_text(content)
print(output_path)
return 0
if __name__ == "__main__":
raise SystemExit(main())

View File

@@ -1,112 +0,0 @@
#!/usr/bin/env python3
import json
import pathlib
import sys
def ts_value(value):
if isinstance(value, bool):
return "true" if value else "false"
if isinstance(value, (int, float)):
return str(value)
if isinstance(value, str):
return json.dumps(value)
if isinstance(value, list):
return "[" + ", ".join(ts_value(v) for v in value) + "]"
if isinstance(value, dict):
items = []
for k, v in value.items():
items.append(f"{json.dumps(k)}: {ts_value(v)}")
return "{\n" + ",\n".join(items) + "\n}"
return "undefined"
def write(path: pathlib.Path, content: str):
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(content)
def main() -> int:
if len(sys.argv) != 3:
print("usage: render-chain138-gmx-synthetics-overlay.py <manifest.json> <output-dir>", file=sys.stderr)
return 1
manifest = json.loads(pathlib.Path(sys.argv[1]).read_text())
out_dir = pathlib.Path(sys.argv[2])
net = manifest["network"]
write(out_dir / "chain138.tokens.ts", f"""export default async function () {{
return {ts_value(manifest.get("tokens", {}))};
}}
""")
write(out_dir / "chain138.markets.ts", f"""export default async function () {{
return {ts_value(manifest.get("markets", []))};
}}
""")
write(out_dir / "chain138.general.ts", f"""export default async function () {{
return {ts_value(manifest.get("general", {}))};
}}
""")
write(out_dir / "chain138.roles.ts", f"""export default async function () {{
return {{
roles: {ts_value(manifest.get("roles", {}))},
requiredRolesForContracts: {ts_value({
"CONFIG_KEEPER": ["ConfigSyncer"],
"CONTROLLER": ["Config", "ConfigSyncer", "ConfigTimelockController", "MarketFactory", "ExchangeRouter", "OrderHandler", "DepositHandler", "WithdrawalHandler", "Router", "Reader"],
"ROUTER_PLUGIN": ["ExchangeRouter"],
"ROLE_ADMIN": ["ConfigTimelockController"]
})}
}};
}}
""")
write(out_dir / "hardhat.chain138.config.ts", f"""import baseConfig from "../../vendor/chain138-protocols/gmx-synthetics/hardhat.config";
import {{ extendEnvironment }} from "hardhat/config";
import tokensConfig from "./chain138.tokens";
import marketsConfig from "./chain138.markets";
import generalConfig from "./chain138.general";
import rolesConfig from "./chain138.roles";
const config: any = baseConfig;
config.networks = config.networks || {{}};
config.networks["{net}"] = {{
url: process.env.CHAIN138_RPC_URL || process.env.RPC_URL_138 || "{manifest.get("rpcUrl", "http://192.168.11.211:8545")}",
chainId: {manifest.get("chainId", 138)},
accounts: process.env.ACCOUNT_KEY ? [process.env.ACCOUNT_KEY] : [],
blockGasLimit: 20000000
}};
config.etherscan = config.etherscan || {{}};
config.etherscan.customChains = config.etherscan.customChains || [];
config.etherscan.customChains.push({{
network: "{net}",
chainId: {manifest.get("chainId", 138)},
urls: {{
apiURL: "{manifest.get("explorer", {}).get("apiUrl", "https://explorer.d-bis.org/api")}",
browserURL: "{manifest.get("explorer", {}).get("browserUrl", "https://explorer.d-bis.org")}"
}}
}});
extendEnvironment(async (hre: any) => {{
if (hre.network.name !== "{net}") return;
hre.gmx = {{
getTokens: async () => tokensConfig(),
getMarkets: async () => marketsConfig(),
getGeneral: async () => generalConfig(),
getRoles: async () => rolesConfig(),
getOracle: async () => ({{}}),
getGlvs: async () => ({{}}),
getBuyback: async () => ({{}}),
getRiskOracle: async () => ({{}}),
getVaultV1: async () => ({{}}),
isExistingMainnetDeployment: false,
getLayerZeroEndpoint: async () => undefined,
getFeeDistributor: async () => ({{}})
}};
}});
export default config;
""")
print(out_dir)
return 0
if __name__ == "__main__":
raise SystemExit(main())
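The TS-literal rendering done by `ts_value` above can be sketched standalone. This is a simplified copy for illustration (the token name and values below are made up, not from any real manifest):

```python
import json

def ts_value(value):
    # Render a Python value as a TypeScript literal: booleans lowercase,
    # strings JSON-quoted, lists bracketed, dicts as object literals.
    if isinstance(value, bool):  # must precede the int check (bool is an int subclass)
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, str):
        return json.dumps(value)
    if isinstance(value, list):
        return "[" + ", ".join(ts_value(v) for v in value) + "]"
    if isinstance(value, dict):
        items = [f"{json.dumps(k)}: {ts_value(v)}" for k, v in value.items()]
        return "{\n" + ",\n".join(items) + "\n}"
    return "undefined"

print(ts_value({"wnt": {"address": "0xabc", "decimals": 18, "synthetic": False}}))
```

The bool-before-int ordering matters: without it, `True` would render as `1` instead of `true` and break the generated config.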

View File

@@ -1,113 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM_ROOT="$PROJECT_ROOT/smom-dbis-138"
if [[ -f "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh" ]]; then
# shellcheck disable=SC1091
source "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh"
load_deployment_env --repo-root "$SMOM_ROOT"
fi
command -v cast >/dev/null 2>&1 || { echo "cast is required" >&2; exit 1; }
RPC_URL="${RPC_URL_138:-${CHAIN138_RPC_URL:-${RPC_URL:-http://192.168.11.211:8545}}}"
POOL="${CHAIN_138_AAVE_POOL:-}"
PROVIDER="${CHAIN_138_AAVE_POOL_ADDRESSES_PROVIDER:-}"
DATA_PROVIDER="${CHAIN_138_AAVE_POOL_DATA_PROVIDER:-}"
START_BLOCK="${CHAIN_138_AAVE_START_BLOCK:-}"
TREASURY="${CHAIN_138_AAVE_EXECUTOR_TREASURY:-}"
OWNER="${CHAIN_138_AAVE_EXECUTOR_OWNER:-}"
RECEIVER_OWNER="${CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER_OWNER:-}"
failures=0
require_var() {
local name="$1" value="$2"
if [[ -n "$value" ]]; then
echo "SET $name=$value"
else
echo "MISS $name"
failures=1
fi
}
check_code() {
local label="$1" addr="$2"
if [[ ! "$addr" =~ ^0x[0-9a-fA-F]{40}$ ]]; then
echo "BAD $label invalid address: $addr"
failures=1
return
fi
local code
code="$(cast code "$addr" --rpc-url "$RPC_URL" 2>/dev/null || true)"
if [[ -n "$code" && "$code" != "0x" ]]; then
echo "CODE $label bytecode present"
else
echo "NOCODE $label no bytecode at $addr"
failures=1
fi
}
check_eoa_or_contract_address() {
local label="$1" addr="$2"
if [[ "$addr" =~ ^0x[0-9a-fA-F]{40}$ ]] && [[ "${addr,,}" != "0x0000000000000000000000000000000000000000" ]]; then
echo "ADDR $label format ok"
else
echo "BAD $label invalid address: $addr"
failures=1
fi
}
echo "=== Chain 138 Aave rollout readiness ==="
echo "rpcUrl=$RPC_URL"
require_var "CHAIN_138_AAVE_POOL" "$POOL"
require_var "CHAIN_138_AAVE_POOL_ADDRESSES_PROVIDER" "$PROVIDER"
require_var "CHAIN_138_AAVE_POOL_DATA_PROVIDER" "$DATA_PROVIDER"
require_var "CHAIN_138_AAVE_START_BLOCK" "$START_BLOCK"
require_var "CHAIN_138_AAVE_EXECUTOR_TREASURY" "$TREASURY"
require_var "CHAIN_138_AAVE_EXECUTOR_OWNER" "$OWNER"
require_var "CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER_OWNER" "$RECEIVER_OWNER"
[[ -n "$POOL" ]] && check_code "AAVE_POOL" "$POOL"
[[ -n "$PROVIDER" ]] && check_code "AAVE_POOL_ADDRESSES_PROVIDER" "$PROVIDER"
[[ -n "$DATA_PROVIDER" ]] && check_code "AAVE_POOL_DATA_PROVIDER" "$DATA_PROVIDER"
[[ -n "$TREASURY" ]] && check_eoa_or_contract_address "CHAIN_138_AAVE_EXECUTOR_TREASURY" "$TREASURY"
[[ -n "$OWNER" ]] && check_eoa_or_contract_address "CHAIN_138_AAVE_EXECUTOR_OWNER" "$OWNER"
[[ -n "$RECEIVER_OWNER" ]] && check_eoa_or_contract_address "CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER_OWNER" "$RECEIVER_OWNER"
if [[ -n "$START_BLOCK" ]]; then
if [[ "$START_BLOCK" =~ ^[0-9]+$ ]]; then
echo "START_BLOCK numeric"
else
echo "BAD CHAIN_138_AAVE_START_BLOCK invalid numeric value: $START_BLOCK"
failures=1
fi
fi
if [[ -n "$POOL" && -n "$PROVIDER" ]]; then
onchain_provider="$(cast call "$POOL" 'ADDRESSES_PROVIDER()(address)' --rpc-url "$RPC_URL" 2>/dev/null | awk '{print $1}' || true)"
if [[ -n "$onchain_provider" ]]; then
echo "POOL_PROVIDER onchain=$onchain_provider"
if [[ "${onchain_provider,,}" != "${PROVIDER,,}" ]]; then
echo "MISMATCH pool.ADDRESSES_PROVIDER() != CHAIN_138_AAVE_POOL_ADDRESSES_PROVIDER"
failures=1
fi
else
echo "WARN unable to read ADDRESSES_PROVIDER() from pool"
failures=1
fi
flash_fee="$(cast call "$POOL" 'FLASHLOAN_PREMIUM_TOTAL()(uint128)' --rpc-url "$RPC_URL" 2>/dev/null | awk '{print $1}' || true)"
if [[ -n "$flash_fee" ]]; then
echo "FLASHLOAN_PREMIUM_TOTAL=$flash_fee"
else
echo "WARN unable to read FLASHLOAN_PREMIUM_TOTAL() from pool"
failures=1
fi
fi
exit "$failures"
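The address sanity rules used by `check_eoa_or_contract_address` above (40 hex chars after `0x`, and not the zero address, compared case-insensitively) can be mirrored in Python. A minimal sketch, not part of the script itself:

```python
import re

# 40 hex chars after the 0x prefix, any case.
ADDR_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")
ZERO = "0x" + "0" * 40

def looks_like_nonzero_address(addr: str) -> bool:
    # Reject malformed strings and the zero address (case-insensitive),
    # matching the shell script's regex + "${addr,,}" comparison.
    return bool(ADDR_RE.match(addr)) and addr.lower() != ZERO

print(looks_like_nonzero_address("0x" + "0" * 40))   # False
print(looks_like_nonzero_address("0x" + "a" * 40))   # True
print(looks_like_nonzero_address("0x1234"))          # False
```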

View File

@@ -1,75 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
failures=0
find_protocol_file() {
local mode="$1"
local needle="$2"
case "$mode" in
filename)
find "$PROJECT_ROOT/MEV_Bot" "$PROJECT_ROOT/smom-dbis-138" "$PROJECT_ROOT/vendor/chain138-protocols" \
-path '*/node_modules/*' -prune -o \
-path '*/target/*' -prune -o \
-path '*/out/*' -prune -o \
-path '*/broadcast/*' -prune -o \
-type f -name "$needle" -print -quit
;;
path)
find "$PROJECT_ROOT/MEV_Bot" "$PROJECT_ROOT/smom-dbis-138" "$PROJECT_ROOT/vendor/chain138-protocols" \
-path '*/node_modules/*' -prune -o \
-path '*/target/*' -prune -o \
-path '*/out/*' -prune -o \
-path '*/broadcast/*' -prune -o \
-type f -path "*$needle*" -print -quit
;;
*)
return 1
;;
esac
}
report_family() {
local protocol="$1" family="$2" mode="$3" needle="$4"
local match
match="$(find_protocol_file "$mode" "$needle" || true)"
if [[ -n "$match" ]]; then
echo "FOUND $protocol :: $family :: ${match#"$PROJECT_ROOT"/}"
else
echo "MISS $protocol :: $family"
failures=1
fi
}
echo "=== Chain 138 native protocol stack source audit ==="
echo "-- Aave native market families --"
report_family "Aave" "PoolAddressesProvider" "path" "/vendor/chain138-protocols/aave-v3-origin/src/contracts/protocol/configuration/PoolAddressesProvider.sol"
report_family "Aave" "Pool" "path" "/vendor/chain138-protocols/aave-v3-origin/src/contracts/protocol/pool/Pool.sol"
report_family "Aave" "PoolConfigurator" "path" "/vendor/chain138-protocols/aave-v3-origin/src/contracts/protocol/pool/PoolConfigurator.sol"
report_family "Aave" "AaveProtocolDataProvider" "path" "/vendor/chain138-protocols/aave-v3-origin/src/contracts/helpers/AaveProtocolDataProvider.sol"
report_family "Aave" "AaveOracle" "path" "/vendor/chain138-protocols/aave-v3-origin/src/contracts/misc/AaveOracle.sol"
echo "-- GMX synthetics native market families --"
report_family "GMX" "Router" "path" "/vendor/chain138-protocols/gmx-synthetics/contracts/router/Router.sol"
report_family "GMX" "ExchangeRouter" "path" "/vendor/chain138-protocols/gmx-synthetics/contracts/router/ExchangeRouter.sol"
report_family "GMX" "Reader" "path" "/vendor/chain138-protocols/gmx-synthetics/contracts/reader/Reader.sol"
report_family "GMX" "OrderVault" "path" "/vendor/chain138-protocols/gmx-synthetics/contracts/order/OrderVault.sol"
report_family "GMX" "DepositVault" "path" "/vendor/chain138-protocols/gmx-synthetics/contracts/deposit/DepositVault.sol"
report_family "GMX" "WithdrawalVault" "path" "/vendor/chain138-protocols/gmx-synthetics/contracts/withdrawal/WithdrawalVault.sol"
echo "-- dYdX native market families --"
report_family "dYdX" "SoloMargin" "filename" "SoloMargin.sol"
report_family "dYdX" "DataProvider" "filename" "DataProvider.sol"
echo
if (( failures )); then
echo "RESULT missing native source families prevent truthful in-repo deployment of one or more remaining Chain 138 protocols."
else
echo "RESULT source families found for all audited native protocol stacks."
fi
exit "$failures"

View File

@@ -1,81 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
SMOM_ROOT="${PROJECT_ROOT}/smom-dbis-138"
if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
fi
RPC_URL="${CHAIN138_RPC_URL:-${RPC_URL_138:-http://192.168.11.211:8545}}"
UNISWAP_JSON="${SMOM_ROOT}/deployments/chain138/uniswap-v2-native.json"
SUSHI_JSON="${SMOM_ROOT}/deployments/chain138/sushiswap-native.json"
require_cmd() {
command -v "$1" >/dev/null 2>&1 || {
echo "[fail] missing required command: $1" >&2
exit 1
}
}
require_cmd cast
require_cmd jq
check_code() {
local label="$1"
local address="$2"
local code
code="$(cast code --rpc-url "${RPC_URL}" "${address}" 2>/dev/null || true)"
[[ -n "${code}" && "${code}" != "0x" ]] || {
echo "[fail] ${label} missing bytecode at ${address}" >&2
exit 1
}
echo "[ok] ${label} bytecode present at ${address}"
}
check_pair() {
local label="$1"
local pair="$2"
local reserves
reserves="$(cast call --rpc-url "${RPC_URL}" "${pair}" 'getReserves()(uint112,uint112,uint32)' 2>/dev/null || true)"
[[ -n "${reserves}" ]] || {
echo "[fail] ${label} getReserves failed at ${pair}" >&2
exit 1
}
local reserve0 reserve1
reserve0="$(printf '%s\n' "${reserves}" | sed -n '1p' | awk '{print $1}')"
reserve1="$(printf '%s\n' "${reserves}" | sed -n '2p' | awk '{print $1}')"
[[ "${reserve0}" != "0" && "${reserve1}" != "0" ]] || {
echo "[fail] ${label} has zero reserves (${reserve0}/${reserve1})" >&2
exit 1
}
echo "[ok] ${label} reserves ${reserve0}/${reserve1}"
}
check_stack() {
local label="$1"
local json="$2"
[[ -f "${json}" ]] || {
echo "[fail] missing deployment artifact ${json}" >&2
exit 1
}
local factory router
factory="$(jq -r '.factory' "${json}")"
router="$(jq -r '.router' "${json}")"
check_code "${label} factory" "${factory}"
check_code "${label} router" "${router}"
jq -r '.pairs | to_entries[] | [.key, .value] | @tsv' "${json}" | while IFS=$'\t' read -r key pair; do
check_code "${label} ${key} pair" "${pair}"
check_pair "${label} ${key} pair" "${pair}"
done
}
check_stack "Uniswap v2" "${UNISWAP_JSON}"
check_stack "SushiSwap" "${SUSHI_JSON}"
echo "[ok] Chain 138 native V2 venues verified"
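`cast call` prints one return value per line, with an optional bracketed scientific-notation hint after large integers. The sed/awk parse in `check_pair` above can be mirrored in Python; the sample output below is illustrative, not from a real pair:

```python
# Parse the multi-line output of:
#   cast call <pair> 'getReserves()(uint112,uint112,uint32)'
# Each return value is on its own line; large numbers carry a hint
# like "8245901223 [8.245e9]", so take the first whitespace-separated field.
sample = "8245901223 [8.245e9]\n5100233 [5.1e6]\n1714763201\n"

lines = sample.splitlines()
reserve0 = int(lines[0].split()[0])
reserve1 = int(lines[1].split()[0])
assert reserve0 > 0 and reserve1 > 0, "pair has zero reserves"
print(reserve0, reserve1)
```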

View File

@@ -1,73 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM_ROOT="$PROJECT_ROOT/smom-dbis-138"
SURFACE_JSON="$PROJECT_ROOT/config/chain138-remaining-protocol-surface.json"
if [[ -f "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh" ]]; then
# shellcheck disable=SC1091
source "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh"
load_deployment_env --repo-root "$SMOM_ROOT"
fi
command -v jq >/dev/null 2>&1 || { echo "jq is required" >&2; exit 1; }
command -v cast >/dev/null 2>&1 || { echo "cast is required" >&2; exit 1; }
RPC_URL="${RPC_URL_138:-${CHAIN138_RPC_URL:-${RPC_URL:-http://192.168.11.211:8545}}}"
echo "=== Chain 138 remaining protocol env inventory ==="
echo "rpcUrl=$RPC_URL"
check_var() {
local label="$1" var="$2"
local value="${!var:-}"
if [[ -n "$value" ]]; then
echo "SET $label -> $var=$value"
return 0
fi
echo "MISS $label -> $var"
return 1
}
looks_like_contract_var() {
local var="$1"
case "$var" in
*_START_BLOCK|*_OWNER|*_TREASURY)
return 1
;;
*)
return 0
;;
esac
}
check_code() {
local protocol="$1" var="$2"
local value="${!var:-}"
[[ "$value" =~ ^0x[0-9a-fA-F]{40}$ ]] || return 0
looks_like_contract_var "$var" || return 0
local code
code="$(cast code "$value" --rpc-url "$RPC_URL" 2>/dev/null || true)"
if [[ -n "$code" && "$code" != "0x" ]]; then
echo "CODE $protocol -> $var bytecode present"
return 0
fi
echo "NOCODE $protocol -> $var has no bytecode at $value"
return 1
}
failures=0
while IFS=$'\t' read -r key status env_var; do
if check_var "$key ($status)" "$env_var"; then
if ! check_code "$key" "$env_var"; then
failures=1
fi
else
failures=1
fi
done < <(jq -r '.protocols[] | .key as $k | .status as $s | .requiredEnv[] | [$k, $s, .] | @tsv' "$SURFACE_JSON")
exit "$failures"

View File

@@ -0,0 +1,56 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
SOURCE_TARGET_PAIRS=(
".gitea/workflow-sources/deploy-to-phoenix.yml:.gitea/workflows/deploy-to-phoenix.yml"
".gitea/workflow-sources/validate-on-pr.yml:.gitea/workflows/validate-on-pr.yml"
)
REMOTE="${GITEA_WORKFLOW_REMOTE:-origin}"
if git remote | grep -qx gitea; then
REMOTE="${GITEA_WORKFLOW_REMOTE:-gitea}"
fi
missing_ref=false
for ref in "$REMOTE/main" "$REMOTE/master"; do
if ! git rev-parse --verify "$ref" >/dev/null 2>&1; then
missing_ref=true
fi
done
if [[ "$missing_ref" == true ]]; then
echo "[i] Skipping main/master workflow parity check ($REMOTE/main or $REMOTE/master not available)"
exit 0
fi
for pair in "${SOURCE_TARGET_PAIRS[@]}"; do
source="${pair%%:*}"
target="${pair##*:}"
main_blob="$(git show "$REMOTE/main:$source" 2>/dev/null || true)"
master_blob="$(git show "$REMOTE/master:$source" 2>/dev/null || true)"
if [[ -z "$main_blob" ]]; then
main_blob="$(git show "$REMOTE/main:$target" 2>/dev/null || true)"
fi
if [[ -z "$master_blob" ]]; then
master_blob="$(git show "$REMOTE/master:$target" 2>/dev/null || true)"
fi
if [[ -z "$main_blob" || -z "$master_blob" ]]; then
echo "[✗] Missing $source/$target on $REMOTE/main or $REMOTE/master" >&2
exit 1
fi
if [[ "$main_blob" != "$master_blob" ]]; then
echo "[✗] Branch workflow drift: $source differs between $REMOTE/main and $REMOTE/master" >&2
echo " Keep both deploy branches in lockstep for workflow-source files." >&2
exit 1
fi
echo "[✓] Branch parity OK for $source"
done

View File

@@ -0,0 +1,32 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
check_one() {
local source_rel="$1"
local target_rel="$2"
if [[ ! -f "$source_rel" ]]; then
echo "[✗] Missing workflow source: $source_rel" >&2
return 1
fi
if [[ ! -f "$target_rel" ]]; then
echo "[✗] Missing generated workflow: $target_rel" >&2
return 1
fi
if ! diff -u "$source_rel" "$target_rel" >/dev/null; then
echo "[✗] Workflow drift detected: $target_rel does not match $source_rel" >&2
echo " Run: bash scripts/verify/sync-gitea-workflows.sh" >&2
return 1
fi
echo "[✓] $target_rel matches $source_rel"
}
check_one ".gitea/workflow-sources/deploy-to-phoenix.yml" ".gitea/workflows/deploy-to-phoenix.yml"
check_one ".gitea/workflow-sources/validate-on-pr.yml" ".gitea/workflows/validate-on-pr.yml"

View File

@@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Every path listed under "packages:" in pnpm-workspace.yaml must have a matching
# importer entry in pnpm-lock.yaml. If one is missing, pnpm can fail in confusing
# ways (e.g. pnpm outdated -r: Cannot read ... 'optionalDependencies').
# Usage: bash scripts/verify/check-pnpm-workspace-lockfile.sh
# Exit: 0 if check passes or pnpm is not used; 1 on mismatch.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
WS="${PROJECT_ROOT}/pnpm-workspace.yaml"
LOCK="${PROJECT_ROOT}/pnpm-lock.yaml"
if [[ ! -f "$WS" ]] || [[ ! -f "$LOCK" ]]; then
echo " (skip: pnpm-workspace.yaml or pnpm-lock.yaml not present at repo root)"
exit 0
fi
# Paths under the top-level `packages:` block only (stops at next top-level key)
mapfile -t _paths < <(awk '
/^packages:/ { p=1; next }
p && /^[a-zA-Z]/ && $0 !~ /^packages/ { exit }
p && /^[[:space:]]*-[[:space:]]/ {
sub(/^[[:space:]]*-[[:space:]]+/, "")
sub(/[[:space:]]*#.*/, "")
gsub(/[[:space:]]+$/, "")
if (length) print
}
' "$WS")
missing=()
for relp in "${_paths[@]}"; do
if [[ -z "$relp" ]]; then
continue
fi
if ! grep -qFx " ${relp}:" "$LOCK"; then
missing+=("$relp")
fi
done
if [[ ${#missing[@]} -gt 0 ]]; then
echo "✗ pnpm lockfile is missing importer(s) for these workspace path(s):"
printf ' %q\n' "${missing[@]}"
echo " Run: pnpm install (at repo root) to refresh pnpm-lock.yaml"
exit 1
fi
echo " pnpm workspace / lockfile importers aligned (${#_paths[@]} path(s))."
exit 0
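The workspace-vs-lockfile importer check above boils down to: extract paths from the `packages:` block (stopping at the next top-level key, stripping trailing comments) and require a matching two-space-indented `<path>:` importer line in the lockfile. A simplified Python sketch with file contents inlined (the real script reads `pnpm-workspace.yaml` / `pnpm-lock.yaml` from the repo root; paths here are invented):

```python
workspace_yaml = """packages:
  - apps/web        # trailing comment stripped, like the awk does
  - packages/shared
"""
lock_yaml = """importers:
  .:
  apps/web:
"""

paths = []
in_packages = False
for line in workspace_yaml.splitlines():
    if line.startswith("packages:"):
        in_packages = True
        continue
    if in_packages and line[:1].isalpha():
        break  # next top-level key ends the packages block
    if in_packages and line.lstrip().startswith("- "):
        entry = line.lstrip()[2:].split("#", 1)[0].strip()
        if entry:
            paths.append(entry)

# An importer entry is a two-space-indented "<path>:" line in pnpm-lock.yaml.
missing = [p for p in paths if f"  {p}:" not in lock_yaml.splitlines()]
print(missing)  # packages/shared has no importer entry
```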

View File

@@ -1,398 +0,0 @@
#!/usr/bin/env python3
"""
On-chain Uniswap V3 **Quoter v1** implied USD prices for the top Aave V3 (Ethereum) underlyings.
Reads `config/mainnet-aave-dex-parity.json`, calls `quoteExactInputSingle` / `quoteExactInput`
on `quoterV1` (default `0xb27308f9F90D607463bb33eA1BeBb41C27CE5AB6`), optionally compares
against DeFiLlama `coins` USD prices, and writes JSON to stdout plus an optional CSV.
Requires: `cast` (Foundry) on PATH and `ETHEREUM_MAINNET_RPC` (or `--rpc-url`).
Usage:
source scripts/lib/load-project-env.sh
python3 scripts/verify/mainnet-aave-top-assets-dex-parity.py
python3 scripts/verify/mainnet-aave-top-assets-dex-parity.py --csv reports/status/mainnet_aave_dex_parity.csv
python3 scripts/verify/mainnet-aave-top-assets-dex-parity.py --no-llama --config /path/to/custom.json
"""
from __future__ import annotations
import argparse
import csv
import json
import os
import re
import shutil
import subprocess
import sys
import urllib.error
import urllib.request
import urllib.parse
from decimal import Decimal
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
def _repo_root() -> Path:
return Path(__file__).resolve().parents[2]
def _load_json(path: Path) -> dict:
with path.open() as f:
return json.load(f)
def _build_path_hex(tokens: List[str], fees: List[int]) -> str:
if len(tokens) < 2 or len(fees) != len(tokens) - 1:
raise ValueError("path: need tokens N and fees N-1")
def addr(s: str) -> bytes:
s = s.strip().lower()
if not s.startswith("0x") or len(s) != 42:
raise ValueError(f"bad address: {s}")
return bytes.fromhex(s[2:])
def fee_u24(x: int) -> bytes:
if x < 0 or x >= 1 << 24:
raise ValueError(f"bad fee: {x}")
return x.to_bytes(3, "big")
out = b""
for i in range(len(fees)):
out += addr(tokens[i])
out += fee_u24(fees[i])
out += addr(tokens[-1])
return "0x" + out.hex()
def _cast_quiet() -> str:
c = shutil.which("cast")
if not c:
print("[fail] Foundry `cast` not found on PATH", file=sys.stderr)
sys.exit(1)
return c
def _cast_call(
cast: str,
rpc: str,
to: str,
sig: str,
args: List[str],
) -> str:
cmd = [cast, "call", to, sig, *args, "--rpc-url", rpc]
p = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=120,
)
if p.returncode != 0:
err = (p.stderr or p.stdout or "").strip()
raise RuntimeError(err or f"cast exit {p.returncode}")
return p.stdout.strip()
def _parse_uint_out(stdout: str) -> int:
# "12345 [1.234e10]" or hex
s = stdout.splitlines()[0].strip()
m = re.match(r"^(\d+)", s)
if m:
return int(m.group(1))
if s.startswith("0x"):
return int(s, 16)
raise ValueError(f"unparsed cast output: {stdout[:200]}")
def _quoter_single(
cast: str,
rpc: str,
quoter: str,
token_in: str,
token_out: str,
fee: int,
amount_in: int,
) -> int:
out = _cast_call(
cast,
rpc,
quoter,
"quoteExactInputSingle(address,address,uint24,uint256,uint160)(uint256)",
[token_in, token_out, str(fee), str(amount_in), "0"],
)
return _parse_uint_out(out)
def _quoter_path(
cast: str,
rpc: str,
quoter: str,
path_hex: str,
amount_in: int,
) -> int:
out = _cast_call(
cast,
rpc,
quoter,
"quoteExactInput(bytes,uint256)(uint256)",
[path_hex, str(amount_in)],
)
return _parse_uint_out(out)
def _llama_prices(addresses: List[str]) -> Dict[str, float]:
if not addresses:
return {}
batch = ",".join(f"ethereum:{a.lower()}" for a in addresses)
url = f"https://coins.llama.fi/prices/current/{batch}"
req = urllib.request.Request(url, headers={"User-Agent": "proxmox-verify/1.0"})
try:
with urllib.request.urlopen(req, timeout=45) as r:
data = json.load(r)
except (urllib.error.URLError, TimeoutError) as e:
print(f"[warn] DeFiLlama coins fetch failed: {e}", file=sys.stderr)
return {}
out: Dict[str, float] = {}
for k, v in (data.get("coins") or {}).items():
addr = k.split(":", 1)[-1].lower()
try:
out[addr] = float(v.get("price") or 0)
except (TypeError, ValueError):
continue
return out
def _implied_usd(
amount_out_usdc: int,
amount_in: int,
decimals_in: int,
usdc_decimals: int,
) -> Decimal:
if amount_in <= 0:
return Decimal(0)
ai = Decimal(amount_in) / Decimal(10**decimals_in)
usd_out = Decimal(amount_out_usdc) / Decimal(10**usdc_decimals)
return usd_out / ai
def _compose_via_weth(
cast: str,
rpc: str,
quoter: str,
token_in: str,
amount_in: int,
decimals_in: int,
weth: str,
usdc: str,
usdc_dec: int,
fee_to_weth_list: List[int],
usdc_fee: int,
) -> Tuple[int, str]:
"""
implied USD/token = (WETH out for amount_in) / (amount_in in human units) * (USDC per 1 WETH),
using the Quoter for the token->WETH and WETH->USDC legs.
Returns (synthetic_usdc, description): synthetic_usdc is the USDC amount, scaled by
10^usdc_dec, that selling amount_in at the implied price would fetch, so the caller can
derive impliedUsd the same way as for a direct quote.
"""
best_w = -1
best_fw: Optional[int] = None
for fee in fee_to_weth_list:
try:
w = _quoter_single(cast, rpc, quoter, token_in, weth, fee, amount_in)
if w > best_w:
best_w = w
best_fw = fee
except Exception:
continue
if best_fw is None or best_w < 0:
raise RuntimeError("compose_via_weth: no token->WETH fee tier succeeded")
usdc_1eth = _quoter_single(cast, rpc, quoter, weth, usdc, usdc_fee, 10**18)
# USD per 1 full token (human) = (weth_out/1e18) / (amount_in/10^dec) * (usdc_1eth/10^usdc_dec)
weth_per_token = Decimal(best_w) / Decimal(10**18) / (Decimal(amount_in) / Decimal(10**decimals_in))
usd_per_weth = Decimal(usdc_1eth) / Decimal(10**usdc_dec)
imp = weth_per_token * usd_per_weth
desc = f"compose token->WETH fee={best_fw} × WETH->USDC fee={usdc_fee} (WETH_out={best_w})"
# Return synthetic "usdc out if we sold amount_in at imp" for display: imp * human_units * 10^6
synthetic_usdc = int(imp * (Decimal(amount_in) / Decimal(10**decimals_in)) * Decimal(10**usdc_dec))
return synthetic_usdc, desc
def main() -> None:
ap = argparse.ArgumentParser(description="Mainnet Aave top assets — Uniswap V3 quoter implied USD")
ap.add_argument(
"--config",
type=Path,
default=_repo_root() / "config" / "mainnet-aave-dex-parity.json",
help="Path to parity JSON config",
)
ap.add_argument("--rpc-url", default=os.environ.get("ETHEREUM_MAINNET_RPC", "").strip(), help="Ethereum JSON-RPC")
ap.add_argument("--csv", type=Path, default=None, help="Write CSV rows")
ap.add_argument("--no-llama", action="store_true", help="Skip DeFiLlama reference prices")
ap.add_argument(
"--strict",
action="store_true",
help="Exit 1 if any asset row has error (for CI)",
)
ap.add_argument(
"--only",
default="",
help="Comma-separated asset symbols to run (default: all). Example: --only WETH,WBTC,rsETH",
)
args = ap.parse_args()
rpc = args.rpc_url or os.environ.get("ETH_MAINNET_RPC_URL", "").strip()
if not rpc:
print("[fail] Set ETHEREUM_MAINNET_RPC or pass --rpc-url", file=sys.stderr)
sys.exit(1)
cfg = _load_json(args.config)
quoter = cfg.get("quoterV1") or "0xb27308f9F90D607463bb33eA1BeBb41C27CE5AB6"
ref = cfg.get("referenceStable") or {}
usdc_dec = int(ref.get("decimals") or 6)
fee_tiers: List[int] = [int(x) for x in cfg.get("feeTiersTry") or [100, 500, 3000, 10000]]
weth_addr = (cfg.get("weth") or "0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2").strip()
assets: List[dict] = cfg.get("assets") or []
only_set = {s.strip().upper() for s in args.only.split(",") if s.strip()} if args.only.strip() else None
cast = _cast_quiet()
want_llama = not args.no_llama
llama: Dict[str, float] = {}
if want_llama:
addrs = [a["address"] for a in assets if a.get("address")]
llama = _llama_prices(addrs)
rows_out: List[Dict[str, Any]] = []
for asset in assets:
sym = asset.get("symbol") or "?"
if only_set is not None and sym.strip().upper() not in only_set:
continue
addr = (asset.get("address") or "").strip()
dec_in = int(asset.get("decimals") or 18)
amount_in = int(asset.get("amountIn") or 0)
q = asset.get("quote") or {}
qtype = (q.get("type") or "").strip()
label = q.get("label") or qtype
row: Dict[str, Any] = {
"symbol": sym,
"token": addr,
"decimals": dec_in,
"amountIn": str(amount_in),
"quoteLabel": label,
"quoter": quoter,
}
try:
if qtype == "identity_usd":
row["amountOutUsdc"] = str(amount_in)
row["feeTierOrPath"] = "identity"
imp = _implied_usd(amount_in, amount_in, dec_in, usdc_dec)
elif qtype == "compose_via_weth":
usdc_addr = (ref.get("address") or "").strip()
ftw = [int(x) for x in (q.get("feeTiersToWeth") or fee_tiers)]
usf = int(q.get("usdcFee") or 500)
syn, desc = _compose_via_weth(
cast,
rpc,
quoter,
addr,
amount_in,
dec_in,
weth_addr,
usdc_addr,
usdc_dec,
ftw,
usf,
)
                row["amountOutUsdc"] = str(syn)
                row["feeTierOrPath"] = desc[:500]
                # Implied USD price = USDC out (whole units) per whole token in.
                imp = Decimal(syn) / (Decimal(amount_in) / Decimal(10**dec_in)) / Decimal(10**usdc_dec)
            elif qtype == "single_best":
                token_out = (q.get("tokenOut") or ref.get("address") or "").strip()
                if not token_out:
                    raise ValueError("single_best needs tokenOut")
                tiers = [int(x) for x in q.get("feeTiersTry")] if q.get("feeTiersTry") else fee_tiers
                best_amt = -1
                best_fee: Optional[int] = None
                # Probe each candidate fee tier; keep the best successful quote.
                for fee in tiers:
                    try:
                        amt = _quoter_single(cast, rpc, quoter, addr, token_out, fee, amount_in)
                        if amt > best_amt:
                            best_amt = amt
                            best_fee = fee
                    except Exception:
                        continue
                if best_fee is None or best_amt < 0:
                    raise RuntimeError("no successful fee tier (all reverted or empty)")
                row["amountOutUsdc"] = str(best_amt)
                row["feeTierOrPath"] = f"single fee={best_fee}"
                imp = _implied_usd(best_amt, amount_in, dec_in, usdc_dec)
            elif qtype == "path":
                tokens = q.get("tokens") or []
                fees = [int(x) for x in (q.get("fees") or [])]
                path_hex = _build_path_hex(list(tokens), fees)
                amt = _quoter_path(cast, rpc, quoter, path_hex, amount_in)
                row["amountOutUsdc"] = str(amt)
                row["feeTierOrPath"] = path_hex[:66] + "..."
                imp = _implied_usd(amt, amount_in, dec_in, usdc_dec)
            else:
                raise ValueError(f"unknown quote.type: {qtype}")
            row["impliedUsd"] = str(imp)
            la = llama.get(addr.lower())
            row["llamaUsd"] = la
            if la and la > 0 and imp and imp > 0:
                row["diffVsLlamaBps"] = float((imp / Decimal(str(la)) - Decimal(1)) * Decimal(10000))
            else:
                row["diffVsLlamaBps"] = None
            row["error"] = None
        except Exception as e:
            row["error"] = str(e)
            row["impliedUsd"] = None
            row["amountOutUsdc"] = None
            row["feeTierOrPath"] = None
            row["llamaUsd"] = llama.get(addr.lower())
            row["diffVsLlamaBps"] = None
        rows_out.append(row)

    rpc_host = urllib.parse.urlparse(rpc).netloc or rpc[:48]
    out = {
        "config": str(args.config),
        "rpcUrlHost": rpc_host[:120],
        "quoterV1": quoter,
        "referenceStable": ref,
        "rows": rows_out,
    }
    print(json.dumps(out, indent=2))
    if args.csv:
        args.csv.parent.mkdir(parents=True, exist_ok=True)
        fields = [
            "symbol",
            "token",
            "impliedUsd",
            "llamaUsd",
            "diffVsLlamaBps",
            "quoteLabel",
            "feeTierOrPath",
            "amountOutUsdc",
            "error",
        ]
        with args.csv.open("w", newline="") as f:
            w = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
            w.writeheader()
            for r in rows_out:
                w.writerow({k: r.get(k) for k in fields})
        print(f"[ok] wrote {args.csv}", file=sys.stderr)
    errs = sum(1 for r in rows_out if r.get("error"))
    if args.strict and errs:
        print(f"[fail] {errs} asset(s) failed (see error fields)", file=sys.stderr)
        sys.exit(1)


if __name__ == "__main__":
    main()


@@ -3,6 +3,7 @@
# Use for CI or pre-deploy: dependencies, config files, optional genesis.
# Usage: bash scripts/verify/run-all-validation.sh [--skip-genesis]
# --skip-genesis: do not run validate-genesis.sh (default: run if smom-dbis-138 present).
# Steps: dependencies, pnpm lockfile, Gitea workflow sync + branch parity, config files, cW* mesh matrix (if pair-discovery JSON exists), deployment-status graph, external-dependency blockers, genesis.
set -euo pipefail
@@ -24,15 +25,64 @@ bash "$SCRIPT_DIR/check-dependencies.sh" || log_err "check-dependencies failed"
log_ok "Dependencies OK"
echo ""
echo "1b. pnpm workspace vs lockfile..."
if [[ -f "$PROJECT_ROOT/pnpm-workspace.yaml" ]]; then
bash "$SCRIPT_DIR/check-pnpm-workspace-lockfile.sh" || log_err "pnpm lockfile / workspace drift"
log_ok "pnpm lockfile aligned with workspace"
else
echo " (no pnpm-workspace.yaml at root — skip)"
fi
echo ""
echo "1c. Gitea workflow source sync..."
bash "$SCRIPT_DIR/check-gitea-workflows.sh" || log_err "Gitea workflow source drift"
log_ok "Gitea workflows match source-of-truth files"
echo ""
echo "1d. main/master workflow parity..."
bash "$SCRIPT_DIR/check-gitea-branch-workflow-parity.sh" || log_err "main/master workflow parity drift"
log_ok "main/master workflow parity OK"
echo ""
echo "2. Config files..."
bash "$SCRIPT_DIR/../validation/validate-config-files.sh" || log_err "validate-config-files failed"
log_ok "Config validation OK"
echo ""
echo "3. cW* mesh matrix (deployment-status + Uni V2 pair-discovery)..."
DISCOVERY_JSON="$PROJECT_ROOT/reports/extraction/promod-uniswap-v2-live-pair-discovery-latest.json"
if [[ -f "$DISCOVERY_JSON" ]]; then
MATRIX_JSON="$PROJECT_ROOT/reports/status/cw-mesh-deployment-matrix-latest.json"
bash "$SCRIPT_DIR/build-cw-mesh-deployment-matrix.sh" --no-markdown --json-out "$MATRIX_JSON" || log_err "cw mesh matrix merge failed"
log_ok "cW mesh matrix OK (also wrote $MATRIX_JSON)"
else
echo " ($DISCOVERY_JSON missing — run: bash scripts/verify/build-promod-uniswap-v2-live-pair-discovery.sh)"
fi
echo ""
echo "3b. deployment-status graph (cross-chain-pmm-lps)..."
PMM_VALIDATE="$PROJECT_ROOT/cross-chain-pmm-lps/scripts/validate-deployment-status.cjs"
if [[ -f "$PMM_VALIDATE" ]] && command -v node &>/dev/null; then
node "$PMM_VALIDATE" || log_err "validate-deployment-status.cjs failed"
log_ok "deployment-status.json rules OK"
else
echo " (skip: node or $PMM_VALIDATE missing)"
fi
echo ""
echo "3c. External dependency blockers..."
EXT_CHECK="$SCRIPT_DIR/check-external-dependencies.sh"
if [[ -x "$EXT_CHECK" ]]; then
bash "$EXT_CHECK" --advisory || true
else
echo " (skip: $EXT_CHECK missing)"
fi
echo ""
if [[ "$SKIP_GENESIS" == true ]]; then
echo "4. Genesis — skipped (--skip-genesis)"
else
echo "4. Genesis (smom-dbis-138)..."
GENESIS_SCRIPT="$PROJECT_ROOT/smom-dbis-138/scripts/validation/validate-genesis.sh"
if [[ -x "$GENESIS_SCRIPT" ]]; then
bash "$GENESIS_SCRIPT" || log_err "validate-genesis failed"

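The validation script above follows a consistent pattern: announce each numbered step, run its check, record failures with `log_err` but keep going, and (by convention for scripts like this) exit nonzero at the end if anything failed. A sketch of that pattern, where the step names and the accumulate-then-fail behavior of `log_err` are assumptions about the script's helpers, not shown in this hunk:

```python
import subprocess
import sys


def run_steps(steps):
    """Run named validation steps in order; report every failure instead of
    stopping at the first one, and return the list of failed step names."""
    failures = []
    for name, cmd in steps:
        print(f"{name}...")
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            failures.append(name)
            print(f"[err] {name} failed")
        else:
            print(f"[ok] {name}")
    return failures


if __name__ == "__main__":
    # Placeholder commands; the real script shells out to the check-* scripts.
    failed = run_steps([
        ("1. Dependencies", "true"),
        ("2. Config files", "true"),
    ])
    sys.exit(1 if failed else 0)
```

Running every step before failing is what makes a single CI run surface all drift at once, rather than fixing one check per push.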

@@ -0,0 +1,18 @@
#!/usr/bin/env bash
# Copy the source-of-truth workflow files into .gitea/workflows/ so the
# checked-in workflows never drift from their canonical copies.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
sync_one() {
  local source_rel="$1"
  local target_rel="$2"
  mkdir -p "$(dirname "$target_rel")"
  cp "$source_rel" "$target_rel"
  echo "[✓] Synced $target_rel from $source_rel"
}
sync_one ".gitea/workflow-sources/deploy-to-phoenix.yml" ".gitea/workflows/deploy-to-phoenix.yml"
sync_one ".gitea/workflow-sources/validate-on-pr.yml" ".gitea/workflows/validate-on-pr.yml"

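The sync script above has a read-only counterpart in the validation run (`check-gitea-workflows.sh`, which fails on "Gitea workflow source drift"). A minimal sketch of that kind of drift check, assuming the same source/target pairs as the `sync_one` calls; the real check script may differ:

```python
import filecmp
from pathlib import Path

# Source-of-truth -> deployed workflow pairs, mirroring the sync_one calls.
PAIRS = [
    (".gitea/workflow-sources/deploy-to-phoenix.yml", ".gitea/workflows/deploy-to-phoenix.yml"),
    (".gitea/workflow-sources/validate-on-pr.yml", ".gitea/workflows/validate-on-pr.yml"),
]


def find_drift(root: str) -> list:
    """Return target paths that are missing or differ byte-for-byte from
    their source-of-truth copy under `root`."""
    drifted = []
    for src, dst in PAIRS:
        s, d = Path(root) / src, Path(root) / dst
        if not d.exists() or not filecmp.cmp(s, d, shallow=False):
            drifted.append(dst)
    return drifted
```

Keeping the check separate from the sync lets CI fail loudly on drift while the fix remains a deliberate, reviewable `bash` run of the sync script.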

@@ -1,15 +0,0 @@
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"description": "Canonical manifest template for the native Chain 138 Aave rollout.",
"chainId": 138,
"network": "Chain 138",
"aave": {
"pool": "",
"poolAddressesProvider": "",
"poolDataProvider": "",
"startBlock": "",
"executorTreasury": "",
"executorOwner": "",
"quotePushReceiverOwner": ""
}
}


@@ -1,29 +0,0 @@
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"description": "Chain 138 native Aave V3 Origin market deployment manifest template.",
"chainId": 138,
"network": "Chain 138",
"contractName": "Chain138AaveV3OriginMarket",
"roles": {
"marketOwner": "",
"poolAdmin": "",
"emergencyAdmin": ""
},
"flags": {
"l2": false
},
"marketConfig": {
"marketId": "Chain 138 Aave V3 Market",
"providerId": 138,
"oracleDecimals": 8,
"networkBaseTokenPriceInUsdProxyAggregator": "",
"marketReferenceCurrencyPriceInUsdProxyAggregator": "",
"l2SequencerUptimeFeed": "",
"l2PriceOracleSentinelGracePeriod": 0,
"salt": "0x0000000000000000000000000000000000000000000000000000000000000000",
"wrappedNativeToken": "",
"flashLoanPremium": "5000000000000000",
"incentivesProxy": "",
"treasury": ""
}
}


@@ -1,86 +0,0 @@
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"description": "Chain 138 native GMX synthetics deployment manifest template.",
"chainId": 138,
"network": "chain138",
"rpcUrl": "http://192.168.11.211:8545",
"explorer": {
"apiUrl": "https://explorer.d-bis.org/api",
"browserUrl": "https://explorer.d-bis.org"
},
"general": {
"feeReceiver": "",
"holdingAddress": "",
"sequencerUptimeFeed": "0x0000000000000000000000000000000000000000",
"sequencerGraceDuration": 300,
"maxUiFeeFactor": "0",
"maxAutoCancelOrders": 6,
"maxTotalCallbackGasLimitForAutoCancelOrders": 5000000,
"minHandleExecutionErrorGas": 1200000,
"minHandleExecutionErrorGasToForward": 1000000,
"minAdditionalGasForExecution": 1000000,
"refundExecutionFeeGasLimit": 200000,
"depositGasLimit": 2050000,
"withdrawalGasLimit": 1500000,
"shiftGasLimit": 2500000,
"createDepositGasLimit": 5000000,
"createGlvDepositGasLimit": 5000000,
"createWithdrawalGasLimit": 5000000,
"createGlvWithdrawalGasLimit": 5000000,
"singleSwapGasLimit": 1000000,
"increaseOrderGasLimit": 3900000,
"decreaseOrderGasLimit": 3900000,
"swapOrderGasLimit": 3400000,
"glvPerMarketGasLimit": 100000,
"glvDepositGasLimit": 2000000,
"glvWithdrawalGasLimit": 2000000,
"glvShiftGasLimit": 3000000,
"tokenTransferGasLimit": 200000,
"nativeTokenTransferGasLimit": 50000,
"setTraderReferralCodeGasLimit": 200000,
"registerCodeGasLimit": 200000,
"estimatedGasFeeBaseAmount": 600000,
"estimatedGasPerOraclePrice": 250000,
"estimatedGasFeeMultiplierFactor": "1000000000000000000000000000000",
"executionGasFeeBaseAmount": 600000,
"executionGasPerOraclePrice": 250000,
"executionGasFeeMultiplierFactor": "1000000000000000000000000000000",
"requestExpirationTime": 300,
"maxSwapPathLength": 3,
"maxCallbackGasLimit": 2000000,
"minCollateralUsd": "1000000000000000000000000000000",
"minPositionSizeUsd": "1000000000000000000000000000000",
"claimableCollateralTimeDivisor": 3600,
"claimableCollateralDelay": 432000,
"positionFeeReceiverFactor": "0",
"swapFeeReceiverFactor": "0",
"borrowingFeeReceiverFactor": "0",
"liquidationFeeReceiverFactor": "0",
"skipBorrowingFeeForSmallerSide": true,
"maxExecutionFeeMultiplierFactor": "100000000000000000000000000000000",
"oracleProviderMinChangeDelay": 3600,
"configMaxPriceAge": 180,
"gelatoRelayFeeMultiplierFactor": "0",
"gelatoRelayFeeBaseAmount": 0,
"relayFeeAddress": "0x0000000000000000000000000000000000000000",
"maxRelayFeeUsdForSubaccount": "0",
"maxDataLength": 18,
"multichainProviders": {},
"multichainEndpoints": {},
"srcChainIds": {},
"eids": {}
},
"roles": {
"CONTROLLER": {},
"ORDER_KEEPER": {},
"ADL_KEEPER": {},
"LIQUIDATION_KEEPER": {},
"MARKET_KEEPER": {},
"FROZEN_ORDER_KEEPER": {},
"CONFIG_KEEPER": {},
"LIMITED_CONFIG_KEEPER": {},
"TIMELOCK_ADMIN": {}
},
"tokens": {},
"markets": []
}

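As a rough illustration of how the gas knobs in the deleted GMX manifest combine: the estimated keeper fee is built from a base amount (`estimatedGasFeeBaseAmount`), a per-oracle-price term (`estimatedGasPerOraclePrice`), and a multiplier held at 1e30 fixed-point precision (`estimatedGasFeeMultiplierFactor`, `1e30` here, i.e. a 1.0x multiplier). This mirrors the general shape of GMX's fee estimate but is a sketch, not the exact on-chain formula, which should be checked against the deployed contracts:

```python
PRECISION = 10**30  # GMX-style 1e30 fixed-point factor precision


def estimated_execution_fee(gas_price_wei: int, oracle_price_count: int,
                            base_amount: int = 600000,
                            gas_per_oracle_price: int = 250000,
                            multiplier_factor: int = 10**30) -> int:
    """Estimated keeper execution fee in wei, using the manifest's defaults."""
    gas_limit = base_amount + oracle_price_count * gas_per_oracle_price
    # A multiplier_factor of 1e30 leaves the gas limit unscaled (1.0x).
    adjusted = gas_limit * multiplier_factor // PRECISION
    return adjusted * gas_price_wei
```

With two oracle prices and a 1 gwei gas price, this yields (600000 + 2 × 250000) × 1e9 = 1.1e15 wei.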

@@ -1,76 +0,0 @@
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"description": "Canonical Chain 138 remaining native protocol surface inventory for Aave, GMX, and dYdX.",
"version": "1.0.0",
"updated": "2026-04-15",
"chainId": 138,
"network": "Chain 138",
"protocols": [
{
"key": "aave",
"status": "source-backed",
"discoveredAddresses": {},
"sourceSubmodule": "vendor/chain138-protocols/aave-v3-origin",
"discoveryEvidence": [
"2026-04-15: explorer search /api/v2/search?q=Aave returned items=[]",
"2026-04-15: token-aggregation provider capabilities for chainId=138 did not advertise provider=aave",
"2026-04-15: MEV venue coverage and native-venue-coverage for chainId=138 did not include venue=aave",
"2026-04-15: eth_getLogs scan for Aave PoolAddressesProvider event topics returned no matches from block 0..latest"
],
"requiredEnv": [
"CHAIN_138_AAVE_POOL",
"CHAIN_138_AAVE_POOL_ADDRESSES_PROVIDER",
"CHAIN_138_AAVE_POOL_DATA_PROVIDER",
"CHAIN_138_AAVE_START_BLOCK",
"CHAIN_138_AAVE_EXECUTOR_TREASURY",
"CHAIN_138_AAVE_EXECUTOR_OWNER",
"CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER_OWNER"
],
"deployerScripts": [
"scripts/deployment/deploy-chain138-aave-v3-execution-stack.sh",
"scripts/deployment/deploy-chain138-aave-quote-push-receiver.sh",
"scripts/deployment/publish-chain138-aave-runtime-from-artifacts.sh"
],
"verifierScripts": [
"scripts/verify/check-chain138-remaining-protocol-env.sh",
"scripts/verify/check-chain138-aave-rollout-readiness.sh"
]
},
{
"key": "gmx",
"status": "source-backed",
"discoveredAddresses": {},
"sourceSubmodule": "vendor/chain138-protocols/gmx-synthetics",
"discoveryEvidence": [
"2026-04-15: explorer search /api/v2/search?q=GMX returned items=[]",
"2026-04-15: token-aggregation provider capabilities for chainId=138 did not advertise provider=gmx",
"2026-04-15: MEV venue coverage and native-venue-coverage for chainId=138 did not include venue=gmx",
"2026-04-15: imported official upstream source submodule gmx-io/gmx-synthetics into vendor/chain138-protocols/gmx-synthetics"
],
"requiredEnv": [
"CHAIN_138_GMX_ROUTER",
"CHAIN_138_GMX_EXCHANGE_ROUTER",
"CHAIN_138_GMX_READER",
"CHAIN_138_GMX_ORDER_VAULT",
"CHAIN_138_GMX_DEPOSIT_VAULT",
"CHAIN_138_GMX_WITHDRAWAL_VAULT",
"CHAIN_138_GMX_START_BLOCK"
]
},
{
"key": "dydx",
"status": "inventory-only",
"discoveredAddresses": {},
"discoveryEvidence": [
"2026-04-15: explorer search /api/v2/search?q=dydx, dYdX, and SoloMargin returned items=[]",
"2026-04-15: token-aggregation provider capabilities for chainId=138 did not advertise provider=dydx",
"2026-04-15: MEV venue coverage and native-venue-coverage for chainId=138 did not include venue=dydx"
],
"requiredEnv": [
"CHAIN_138_DYDX_SOLO",
"CHAIN_138_DYDX_DATA_PROVIDER",
"CHAIN_138_DYDX_START_BLOCK"
]
}
]
}
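The `requiredEnv` lists in this (now-removed) inventory are what `scripts/verify/check-chain138-remaining-protocol-env.sh` gates on. A minimal sketch of that kind of environment gate, using the Aave list from the inventory; the shell script's exact behavior is an assumption:

```python
import os

# From the inventory's aave.requiredEnv list.
AAVE_REQUIRED_ENV = [
    "CHAIN_138_AAVE_POOL",
    "CHAIN_138_AAVE_POOL_ADDRESSES_PROVIDER",
    "CHAIN_138_AAVE_POOL_DATA_PROVIDER",
    "CHAIN_138_AAVE_START_BLOCK",
    "CHAIN_138_AAVE_EXECUTOR_TREASURY",
    "CHAIN_138_AAVE_EXECUTOR_OWNER",
    "CHAIN_138_AAVE_QUOTE_PUSH_RECEIVER_OWNER",
]


def missing_env(required, env=None) -> list:
    """Return the names from `required` that are unset, empty, or whitespace-only."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name, "").strip()]
```

A deploy script can then refuse to run until `missing_env(AAVE_REQUIRED_ENV)` is empty, which matches the "source-backed but not yet deployed" status the inventory records: addresses exist only once an operator supplies them.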