Compare commits

...

15 Commits

Author SHA1 Message Date
defiQUG
48d3e3f761 Backfill Phoenix deploy API env on install
Some checks failed
Deploy to Phoenix / validate (push) Successful in 1m3s
Deploy to Phoenix / deploy (push) Successful in 42s
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Successful in 2m27s
phoenix-deploy Deploy failed: Command failed: bash scripts/deployment/gitea-cloudflare-sync.sh bash: scripts/deployment/gitea-cloudflare-sync.sh: No such file or directory
Deploy to Phoenix / cloudflare (push) Successful in 2m45s
2026-04-28 05:21:56 -07:00
defiQUG
bed94a3ad4 Keep optional Cloudflare sync non-blocking
Some checks failed
Deploy to Phoenix / validate (push) Successful in 1m0s
Deploy to Phoenix / deploy (push) Successful in 42s
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Failing after 45s
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 05:13:07 -07:00
defiQUG
7be2190441 Allow long atomic dapp deploy requests
Some checks failed
Deploy to Phoenix / validate (push) Successful in 1m14s
Deploy to Phoenix / deploy (push) Successful in 46s
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Successful in 2m35s
phoenix-deploy Deploy failed: Command failed: bash scripts/deployment/gitea-cloudflare-sync.sh bash: scripts/deployment/gitea-cloudflare-sync.sh: No such file or directory
Deploy to Phoenix / cloudflare (push) Failing after 2m57s
2026-04-28 04:57:24 -07:00
defiQUG
19cb7fe8b5 Align GRU main overlay with deployment graph
Some checks failed
Deploy to Phoenix / validate (push) Has started running
Deploy to Phoenix / deploy (push) Has been cancelled
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been cancelled
Deploy to Phoenix / cloudflare (push) Has been cancelled
2026-04-28 04:56:28 -07:00
defiQUG
f1715fb684 Serialize atomic deploy after Phoenix self-deploy
Some checks failed
Deploy to Phoenix / validate (push) Failing after 1m7s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 04:48:22 -07:00
defiQUG
6a9f5dead0 Add atomic swap deploy helper to main
Some checks failed
Deploy to Phoenix / validate (push) Failing after 1m36s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 04:44:03 -07:00
defiQUG
770a1db99a Treat Phoenix self-deploy restart as successful handoff
Some checks failed
Deploy to Phoenix / validate (push) Failing after 1m8s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 04:39:16 -07:00
defiQUG
8868a3501f Add pnpm workspace lockfile checker to main
Some checks failed
Deploy to Phoenix / validate (push) Has been cancelled
Deploy to Phoenix / deploy (push) Has been cancelled
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been cancelled
Deploy to Phoenix / cloudflare (push) Has been cancelled
2026-04-28 04:36:21 -07:00
defiQUG
cf96e9d821 Retry transient Phoenix deploy POST failures
Some checks failed
Deploy to Phoenix / validate (push) Failing after 1m4s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 04:31:23 -07:00
defiQUG
4c4aa28c95 Materialize PMM config in deploy validation
Some checks failed
Deploy to Phoenix / validate (push) Failing after 1m7s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 04:20:01 -07:00
defiQUG
9306a65186 Install validation dependencies in Gitea workflows
Some checks failed
Deploy to Phoenix / validate (push) Failing after 1m10s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 04:15:17 -07:00
defiQUG
7ab231c4ce ci: align validate workflow strict closure env
Some checks failed
Deploy to Phoenix / validate (push) Failing after 29s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-28 01:28:47 -07:00
defiQUG
b58c3a0342 ci: prefer gitea remote for workflow parity checks
Some checks failed
Deploy to Phoenix / validate (push) Failing after 14s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-22 21:48:58 -07:00
defiQUG
b8e735dcac ci: lock deploy workflows across main and master
Some checks failed
Deploy to Phoenix / validate (push) Failing after 14s
Deploy to Phoenix / deploy (push) Has been skipped
Deploy to Phoenix / deploy-atomic-swap-dapp (push) Has been skipped
Deploy to Phoenix / cloudflare (push) Has been skipped
2026-04-22 21:47:57 -07:00
defiQUG
3bea587e12 phoenix: automate CurrenciCombo e2e deploys
All checks were successful
Deploy to Phoenix / deploy (push) Successful in 31s
2026-04-22 20:06:19 -07:00
18 changed files with 1995 additions and 142 deletions

View File

@@ -6,6 +6,10 @@
2. Make changes, ensure tests pass
3. Open a pull request
Deploy workflow policy:
`main` and `master` are both deploy-triggering branches, so `.gitea/workflow-sources/deploy-to-phoenix.yml` and `.gitea/workflow-sources/validate-on-pr.yml` must stay identical across both branches.
Use `bash scripts/verify/sync-gitea-workflows.sh` after editing workflow-source files, and `bash scripts/verify/run-all-validation.sh --skip-genesis` to catch workflow drift before push.
## Pull Requests
- Use the PR template when opening a PR

View File

@@ -0,0 +1,125 @@
# Canonical deploy workflow. Keep source and checked-in workflow copies byte-identical.
# Validation checks both file sync and main/master parity.
name: Deploy to Phoenix
on:
  push:
    branches: [main, master]
  workflow_dispatch:
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Fetch deploy branches for workflow parity check
        run: |
          REMOTE="${GITEA_WORKFLOW_REMOTE:-origin}"
          if git remote | grep -qx gitea; then
            REMOTE="${GITEA_WORKFLOW_REMOTE:-gitea}"
          fi
          git fetch --depth=1 "$REMOTE" main master
      - name: Install validation dependencies
        run: |
          corepack enable
          pnpm install --frozen-lockfile
      # The cW* mesh matrix and deployment-status validators read
      # cross-chain-pmm-lps/config/*.json. The parent checkout does not
      # materialize submodules by default, and .gitmodules mixes public HTTPS
      # with SSH URLs, so clone only the required public validation dependency.
      - name: Materialize cross-chain-pmm-lps
        run: |
          set -euo pipefail
          if [ ! -f cross-chain-pmm-lps/config/deployment-status.json ]; then
            rm -rf cross-chain-pmm-lps
            git clone --depth=1 \
              https://gitea.d-bis.org/d-bis/cross-chain-pmm-lps.git \
              cross-chain-pmm-lps
          fi
      - name: Run repo validation gate
        run: |
          bash scripts/verify/run-all-validation.sh --skip-genesis
  deploy:
    needs: validate
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Trigger Phoenix deployment
        run: |
          set -euo pipefail
          SHA="$(git rev-parse HEAD)"
          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
          set +e
          curl -sSf --retry 3 --retry-connrefused --retry-delay 10 --retry-max-time 180 \
            --connect-timeout 10 --max-time 120 \
            -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
            -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"default\"}"
          rc="$?"
          set -e
          if [ "$rc" -eq 52 ]; then
            HEALTH_URL="${{ secrets.PHOENIX_DEPLOY_URL }}"
            HEALTH_URL="${HEALTH_URL%/api/deploy}/health"
            echo "Phoenix deploy API restarted during self-deploy; verifying ${HEALTH_URL}"
            for i in $(seq 1 12); do
              if curl -fsS --max-time 5 "$HEALTH_URL"; then
                exit 0
              fi
              sleep 5
            done
          fi
          exit "$rc"
  deploy-atomic-swap-dapp:
    needs: deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Trigger Atomic Swap dApp deployment (Phoenix)
        run: |
          set -euo pipefail
          SHA="$(git rev-parse HEAD)"
          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
          curl -sSf \
            --connect-timeout 10 --max-time 900 \
            -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
            -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"atomic-swap-dapp-live\"}"
  # After app deploy, ask Phoenix to run path-gated Cloudflare DNS sync on the host that has
  # PHOENIX_REPO_ROOT + .env (not on this runner). Skips unless PHOENIX_CLOUDFLARE_SYNC=1 on that host.
  # continue-on-error: first-time or missing opt-in should not block the main deploy.
  cloudflare:
    needs:
      - deploy
      - deploy-atomic-swap-dapp
    runs-on: ubuntu-latest
    continue-on-error: true
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Request Cloudflare DNS sync (Phoenix)
        run: |
          set -euo pipefail
          SHA="$(git rev-parse HEAD)"
          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
          curl -sSf --retry 5 --retry-all-errors --retry-connrefused --retry-delay 10 --retry-max-time 300 \
            --connect-timeout 10 --max-time 120 \
            -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
            -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"cloudflare-sync\"}" \
            || { echo "Cloudflare DNS sync request failed; optional sync is non-blocking."; exit 0; }

View File

@@ -0,0 +1,33 @@
# Canonical PR validation workflow. Keep source and checked-in workflow copies byte-identical.
# Validation checks both file sync and main/master parity.
# PR-only: push validation already runs in deploy-to-phoenix.yml; this gives PRs the same
# no-LAN checks without the deploy job (and without deploy secrets).
name: Validate (PR)
on:
  pull_request:
    types: [opened, synchronize, reopened]
    branches: [main, master]
  workflow_dispatch:
jobs:
  run-all-validation:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Fetch deploy branches for workflow parity check
        run: |
          REMOTE="${GITEA_WORKFLOW_REMOTE:-origin}"
          if git remote | grep -qx gitea; then
            REMOTE="${GITEA_WORKFLOW_REMOTE:-gitea}"
          fi
          git fetch --depth=1 "$REMOTE" main master
      - name: Install validation dependencies
        run: |
          corepack enable
          pnpm install --frozen-lockfile
      # Optional: set org/repo variable URA_STRICT_CLOSURE=1 to fail PRs while pilot placeholders
      # remain in manifest (see scripts/ura/validate-manifest-closure.mjs). Not enabled by default.
      - name: run-all-validation (no LAN, no genesis)
        env:
          URA_STRICT_CLOSURE: ${{ vars.URA_STRICT_CLOSURE }}
        run: bash scripts/verify/run-all-validation.sh --skip-genesis

View File

@@ -1,11 +1,52 @@
# Canonical deploy workflow. Keep source and checked-in workflow copies byte-identical.
# Validation checks both file sync and main/master parity.
name: Deploy to Phoenix
on:
  push:
    branches: [main, master]
  workflow_dispatch:
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Fetch deploy branches for workflow parity check
        run: |
          REMOTE="${GITEA_WORKFLOW_REMOTE:-origin}"
          if git remote | grep -qx gitea; then
            REMOTE="${GITEA_WORKFLOW_REMOTE:-gitea}"
          fi
          git fetch --depth=1 "$REMOTE" main master
      - name: Install validation dependencies
        run: |
          corepack enable
          pnpm install --frozen-lockfile
      # The cW* mesh matrix and deployment-status validators read
      # cross-chain-pmm-lps/config/*.json. The parent checkout does not
      # materialize submodules by default, and .gitmodules mixes public HTTPS
      # with SSH URLs, so clone only the required public validation dependency.
      - name: Materialize cross-chain-pmm-lps
        run: |
          set -euo pipefail
          if [ ! -f cross-chain-pmm-lps/config/deployment-status.json ]; then
            rm -rf cross-chain-pmm-lps
            git clone --depth=1 \
              https://gitea.d-bis.org/d-bis/cross-chain-pmm-lps.git \
              cross-chain-pmm-lps
          fi
      - name: Run repo validation gate
        run: |
          bash scripts/verify/run-all-validation.sh --skip-genesis
  deploy:
    needs: validate
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
@@ -13,8 +54,72 @@ jobs:
       - name: Trigger Phoenix deployment
         run: |
-          curl -sSf -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
+          set -euo pipefail
+          SHA="$(git rev-parse HEAD)"
+          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
+          set +e
+          curl -sSf --retry 3 --retry-connrefused --retry-delay 10 --retry-max-time 180 \
+            --connect-timeout 10 --max-time 120 \
+            -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
             -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
             -H "Content-Type: application/json" \
-            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${{ gitea.sha }}\",\"branch\":\"${{ gitea.ref_name }}\"}"
-        continue-on-error: true
+            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"default\"}"
+          rc="$?"
+          set -e
+          if [ "$rc" -eq 52 ]; then
+            HEALTH_URL="${{ secrets.PHOENIX_DEPLOY_URL }}"
+            HEALTH_URL="${HEALTH_URL%/api/deploy}/health"
+            echo "Phoenix deploy API restarted during self-deploy; verifying ${HEALTH_URL}"
+            for i in $(seq 1 12); do
+              if curl -fsS --max-time 5 "$HEALTH_URL"; then
+                exit 0
+              fi
+              sleep 5
+            done
+          fi
+          exit "$rc"
+  deploy-atomic-swap-dapp:
+    needs: deploy
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v4
+      - name: Trigger Atomic Swap dApp deployment (Phoenix)
+        run: |
+          set -euo pipefail
+          SHA="$(git rev-parse HEAD)"
+          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
+          curl -sSf \
+            --connect-timeout 10 --max-time 900 \
+            -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
+            -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
+            -H "Content-Type: application/json" \
+            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"atomic-swap-dapp-live\"}"
+  # After app deploy, ask Phoenix to run path-gated Cloudflare DNS sync on the host that has
+  # PHOENIX_REPO_ROOT + .env (not on this runner). Skips unless PHOENIX_CLOUDFLARE_SYNC=1 on that host.
+  # continue-on-error: first-time or missing opt-in should not block the main deploy.
+  cloudflare:
+    needs:
+      - deploy
+      - deploy-atomic-swap-dapp
+    runs-on: ubuntu-latest
+    continue-on-error: true
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v4
+      - name: Request Cloudflare DNS sync (Phoenix)
+        run: |
+          set -euo pipefail
+          SHA="$(git rev-parse HEAD)"
+          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
+          curl -sSf --retry 5 --retry-all-errors --retry-connrefused --retry-delay 10 --retry-max-time 300 \
+            --connect-timeout 10 --max-time 120 \
+            -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
+            -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
+            -H "Content-Type: application/json" \
+            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"cloudflare-sync\"}" \
+            || { echo "Cloudflare DNS sync request failed; optional sync is non-blocking."; exit 0; }

View File

@@ -0,0 +1,33 @@
# Canonical PR validation workflow. Keep source and checked-in workflow copies byte-identical.
# Validation checks both file sync and main/master parity.
# PR-only: push validation already runs in deploy-to-phoenix.yml; this gives PRs the same
# no-LAN checks without the deploy job (and without deploy secrets).
name: Validate (PR)
on:
  pull_request:
    types: [opened, synchronize, reopened]
    branches: [main, master]
  workflow_dispatch:
jobs:
  run-all-validation:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Fetch deploy branches for workflow parity check
        run: |
          REMOTE="${GITEA_WORKFLOW_REMOTE:-origin}"
          if git remote | grep -qx gitea; then
            REMOTE="${GITEA_WORKFLOW_REMOTE:-gitea}"
          fi
          git fetch --depth=1 "$REMOTE" main master
      - name: Install validation dependencies
        run: |
          corepack enable
          pnpm install --frozen-lockfile
      # Optional: set org/repo variable URA_STRICT_CLOSURE=1 to fail PRs while pilot placeholders
      # remain in manifest (see scripts/ura/validate-manifest-closure.mjs). Not enabled by default.
      - name: run-all-validation (no LAN, no genesis)
        env:
          URA_STRICT_CLOSURE: ${{ vars.URA_STRICT_CLOSURE }}
        run: bash scripts/verify/run-all-validation.sh --skip-genesis

View File

@@ -2076,10 +2076,10 @@
"baseSymbol": "cWETH",
"quoteSymbol": "USDC",
"poolAddress": "0xd012000000000000000000000000000000000001",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_mainnet",
"venue": "dodo_pmm",
@@ -2091,10 +2091,10 @@
"baseSymbol": "cWETH",
"quoteSymbol": "WETH",
"poolAddress": "0xd011000000000000000000000000000000000001",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_mainnet",
"venue": "dodo_pmm",
@@ -2150,10 +2150,10 @@
"baseSymbol": "cWETHL2",
"quoteSymbol": "USDC",
"poolAddress": "0xd02200000000000000000000000000000000000a",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_l2",
"venue": "dodo_pmm",
@@ -2165,10 +2165,10 @@
"baseSymbol": "cWETHL2",
"quoteSymbol": "WETH",
"poolAddress": "0xd02100000000000000000000000000000000000a",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_l2",
"venue": "dodo_pmm",
@@ -2246,10 +2246,10 @@
"baseSymbol": "cWXDAI",
"quoteSymbol": "USDC",
"poolAddress": "0xd072000000000000000000000000000000000064",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "xdai",
"venue": "dodo_pmm",
@@ -2261,10 +2261,10 @@
"baseSymbol": "cWXDAI",
"quoteSymbol": "WXDAI",
"poolAddress": "0xd071000000000000000000000000000000000064",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "xdai",
"venue": "dodo_pmm",
@@ -2276,10 +2276,10 @@
"baseSymbol": "cWWEMIX",
"quoteSymbol": "USDC",
"poolAddress": "0xd092000000000000000000000000000000000457",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "wemix",
"venue": "dodo_pmm",
@@ -2291,10 +2291,10 @@
"baseSymbol": "cWWEMIX",
"quoteSymbol": "WWEMIX",
"poolAddress": "0xd091000000000000000000000000000000000457",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "wemix",
"venue": "dodo_pmm",
@@ -2339,10 +2339,10 @@
"baseSymbol": "cWPOL",
"quoteSymbol": "USDC",
"poolAddress": "0xd042000000000000000000000000000000000089",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "pol",
"venue": "dodo_pmm",
@@ -2354,10 +2354,10 @@
"baseSymbol": "cWPOL",
"quoteSymbol": "WPOL",
"poolAddress": "0xd041000000000000000000000000000000000089",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "pol",
"venue": "dodo_pmm",
@@ -2413,10 +2413,10 @@
"baseSymbol": "cWCRO",
"quoteSymbol": "USDT",
"poolAddress": "0xd062000000000000000000000000000000000019",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "cro",
"venue": "dodo_pmm",
@@ -2428,10 +2428,10 @@
"baseSymbol": "cWCRO",
"quoteSymbol": "WCRO",
"poolAddress": "0xd061000000000000000000000000000000000019",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "cro",
"venue": "dodo_pmm",
@@ -2487,10 +2487,10 @@
"baseSymbol": "cWETHL2",
"quoteSymbol": "USDC",
"poolAddress": "0xd02200000000000000000000000000000000a4b1",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_l2",
"venue": "dodo_pmm",
@@ -2502,10 +2502,10 @@
"baseSymbol": "cWETHL2",
"quoteSymbol": "WETH",
"poolAddress": "0xd02100000000000000000000000000000000a4b1",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_l2",
"venue": "dodo_pmm",
@@ -2572,10 +2572,10 @@
"baseSymbol": "cWCELO",
"quoteSymbol": "USDC",
"poolAddress": "0xd08200000000000000000000000000000000a4ec",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "celo",
"venue": "dodo_pmm",
@@ -2587,10 +2587,10 @@
"baseSymbol": "cWCELO",
"quoteSymbol": "WCELO",
"poolAddress": "0xd08100000000000000000000000000000000a4ec",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "celo",
"venue": "dodo_pmm",
@@ -2635,10 +2635,10 @@
"baseSymbol": "cWAVAX",
"quoteSymbol": "USDC",
"poolAddress": "0xd05200000000000000000000000000000000a86a",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "avax",
"venue": "dodo_pmm",
@@ -2650,10 +2650,10 @@
"baseSymbol": "cWAVAX",
"quoteSymbol": "WAVAX",
"poolAddress": "0xd05100000000000000000000000000000000a86a",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "avax",
"venue": "dodo_pmm",
@@ -2720,10 +2720,10 @@
"baseSymbol": "cWBNB",
"quoteSymbol": "USDT",
"poolAddress": "0xd032000000000000000000000000000000000038",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "bnb",
"venue": "dodo_pmm",
@@ -2735,10 +2735,10 @@
"baseSymbol": "cWBNB",
"quoteSymbol": "WBNB",
"poolAddress": "0xd031000000000000000000000000000000000038",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "bnb",
"venue": "dodo_pmm",
@@ -2816,10 +2816,10 @@
"baseSymbol": "cWETHL2",
"quoteSymbol": "USDC",
"poolAddress": "0xd022000000000000000000000000000000002105",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_l2",
"venue": "dodo_pmm",
@@ -2831,10 +2831,10 @@
"baseSymbol": "cWETHL2",
"quoteSymbol": "WETH",
"poolAddress": "0xd021000000000000000000000000000000002105",
"active": true,
"routingEnabled": true,
"mcpVisible": true,
"phase": "wave1",
"active": false,
"routingEnabled": false,
"mcpVisible": false,
"phase": "wave1-staged",
"assetClass": "gas_native",
"familyKey": "eth_l2",
"venue": "dodo_pmm",

View File

@@ -1936,7 +1936,7 @@
"key": "Compliant_WEMIX_cW",
"name": "cWEMIX->cWWEMIX",
"addressFrom": "0x4d82206bec5b4dfa17759ffede07e35f4f63a050",
"addressTo": "0xc111000000000000000000000000000000000457",
"addressTo": "0x4c38f9a5ed68a04cd28a72e8c68c459ec34576f3",
"notes": "Wave 1 gas-family lane wemix: Chain 138 cWEMIX -> Wemix cWWEMIX. hybrid_cap backing with uniswap_v3 reference pricing and DODO PMM edge liquidity."
}
]

View File

@@ -0,0 +1,217 @@
# Devin → Gitea → Proxmox CI/CD
**Status:** Working baseline for this repo
**Last Updated:** 2026-04-20
## Goal
Create a repeatable path where:
1. Devin lands code in Gitea.
2. Gitea Actions validates the repo on the site-wide `act_runner`.
3. A successful workflow calls `phoenix-deploy-api`.
4. `phoenix-deploy-api` resolves the repo/branch to a deploy target and runs the matching Proxmox publish command.
5. The deploy service checks the target health URL before it reports success.
## Current baseline in this repo
The path now exists for **`d-bis/proxmox`** on **`main`** and **`master`**:
- Canonical workflow sources: [.gitea/workflow-sources/deploy-to-phoenix.yml](/home/intlc/projects/proxmox/.gitea/workflow-sources/deploy-to-phoenix.yml) and [.gitea/workflow-sources/validate-on-pr.yml](/home/intlc/projects/proxmox/.gitea/workflow-sources/validate-on-pr.yml)
- Workflow: [deploy-to-phoenix.yml](/home/intlc/projects/proxmox/.gitea/workflows/deploy-to-phoenix.yml)
- Manual app workflow: [deploy-portal-live.yml](/home/intlc/projects/proxmox/.gitea/workflows/deploy-portal-live.yml)
- Deploy service: [server.js](/home/intlc/projects/proxmox/phoenix-deploy-api/server.js)
- Target map: [deploy-targets.json](/home/intlc/projects/proxmox/phoenix-deploy-api/deploy-targets.json)
- Current live publish script: [deploy-phoenix-deploy-api-to-dev-vm.sh](/home/intlc/projects/proxmox/scripts/deployment/deploy-phoenix-deploy-api-to-dev-vm.sh)
- Manual smoke trigger: [trigger-phoenix-deploy.sh](/home/intlc/projects/proxmox/scripts/dev-vm/trigger-phoenix-deploy.sh)
- Target validator: [validate-phoenix-deploy-targets.sh](/home/intlc/projects/proxmox/scripts/validation/validate-phoenix-deploy-targets.sh)
- Bootstrap helper: [bootstrap-phoenix-cicd.sh](/home/intlc/projects/proxmox/scripts/dev-vm/bootstrap-phoenix-cicd.sh)
That default target publishes the `phoenix-deploy-api` bundle to **VMID 5700** on the correct Proxmox node and starts the CT if needed.
A second target is now available:
- `portal-live` → runs [sync-sankofa-portal-7801.sh](/home/intlc/projects/proxmox/scripts/deployment/sync-sankofa-portal-7801.sh) and then checks `http://192.168.11.51:3000/`
## Workflow lockstep
Because both `main` and `master` can trigger deploys, deploy behavior is now defined from canonical source files and checked for branch parity.
- Edit only the source files under [.gitea/workflow-sources](/home/intlc/projects/proxmox/.gitea/workflow-sources)
- Sync the checked-in workflow copies with:
```bash
bash scripts/verify/sync-gitea-workflows.sh
```
- Validate source sync plus `main`/`master` parity with:
```bash
bash scripts/verify/run-all-validation.sh --skip-genesis
```
The deploy and PR workflows both fetch `origin/main` and `origin/master` before validation, so branch drift now fails CI instead of silently changing deploy behavior.
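The byte-identical requirement can be sketched as a plain `cmp` loop; the paths and file names below mirror the repo layout described above, but this is an illustration of the check, not the contents of `sync-gitea-workflows.sh` itself:

```bash
# Sketch: a workflow-source file and its checked-in copy must be byte-identical.
# Returns non-zero (and names the file) on any drift.
check_workflow_sync() {
  src_dir="$1"; dst_dir="$2"; drift=0
  for f in deploy-to-phoenix.yml validate-on-pr.yml; do
    if ! cmp -s "$src_dir/$f" "$dst_dir/$f"; then
      echo "drift: $f differs between $src_dir and $dst_dir" >&2
      drift=1
    fi
  done
  return "$drift"
}
```

The same comparison, run against `origin/main` and `origin/master` checkouts, is what makes branch drift fail CI.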
## Flow
```text
Devin
-> push to Gitea
-> Gitea Actions on act_runner (5700)
-> bash scripts/verify/run-all-validation.sh --skip-genesis
-> validates deploy-targets.json structure
-> POST /api/deploy to phoenix-deploy-api
-> match repo + branch + target in deploy-targets.json
-> run deploy command
-> verify target health URL
-> update Gitea commit status success/failure
```
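The "match repo + branch + target" step above can be sketched with `jq` against the target map. The three key fields come from `deploy-targets.json`; the selection logic here is an assumption about the lookup, not the actual `server.js` code:

```bash
# Sketch: select the deploy-targets.json entry for a repo/branch/target triple.
# jq -e exits non-zero when no entry matches (the API's 404 case).
match_target() {
  repo="$1"; branch="$2"; target="$3"; map="$4"
  jq -e --arg r "$repo" --arg b "$branch" --arg t "$target" \
    '.targets[] | select(.repo == $r and .branch == $b and .target == $t)' \
    "$map"
}
```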
## Required setup
### 0. One-command bootstrap
If root `.env` already contains the needed values, use:
```bash
bash scripts/dev-vm/bootstrap-phoenix-cicd.sh --repo d-bis/proxmox
```
This runs the validation gate, deploys `phoenix-deploy-api`, and smoke-checks the service.
### 1. Runner
Bring up the site-wide Gitea runner on VMID **5700**:
```bash
bash scripts/dev-vm/bootstrap-gitea-act-runner-site-wide.sh
```
Reference: [GITEA_ACT_RUNNER_SETUP.md](GITEA_ACT_RUNNER_SETUP.md)
### 2. Deploy API service
Deploy the API to the dev VM:
```bash
./scripts/deployment/deploy-phoenix-deploy-api-to-dev-vm.sh --dry-run
./scripts/deployment/deploy-phoenix-deploy-api-to-dev-vm.sh --apply --start-ct
```
On the target VM, set at least:
```bash
PORT=4001
GITEA_URL=https://gitea.d-bis.org
GITEA_TOKEN=<token with repo status access>
PHOENIX_DEPLOY_SECRET=<shared secret>
PHOENIX_REPO_ROOT=/home/intlc/projects/proxmox
```
Optional:
```bash
DEPLOY_TARGETS_PATH=/opt/phoenix-deploy-api/deploy-targets.json
```
For the `portal-live` target, also set:
```bash
SANKOFA_PORTAL_SRC=/home/intlc/projects/Sankofa/portal
```
### 3. Gitea repo secrets
Set these in the Gitea repository that should deploy:
- `PHOENIX_DEPLOY_URL`
- `PHOENIX_DEPLOY_TOKEN`
Example:
- `PHOENIX_DEPLOY_URL=http://192.168.11.59:4001/api/deploy`
- `PHOENIX_DEPLOY_TOKEN=<same value as PHOENIX_DEPLOY_SECRET>`
For webhook signing, the bootstrap/helper path also expects:
- `PHOENIX_DEPLOY_SECRET`
- `PHOENIX_WEBHOOK_DEPLOY_ENABLED=1` only if you want webhook events themselves to execute deploys
Do not enable both repo Actions deploys and webhook deploys for the same repo unless you intentionally want duplicate deploy attempts.
## Adding more repos or VM targets
Extend [deploy-targets.json](/home/intlc/projects/proxmox/phoenix-deploy-api/deploy-targets.json) with another entry.
Each target is keyed by:
- `repo`
- `branch`
- `target`
Each target defines:
- `cwd`
- `command`
- `required_env`
- optional `healthcheck`
- optional `timeout_sec`
Example shape:
```json
{
"repo": "d-bis/another-service",
"branch": "main",
"target": "portal-live",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": ["bash", "scripts/deployment/sync-sankofa-portal-7801.sh"],
"required_env": ["PHOENIX_REPO_ROOT"]
}
```
Use separate `target` names when the same repo can publish to different VMIDs or environments.
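The optional `healthcheck` maps onto a poll loop like the sketch below. The attempt count and delay mirror the `attempts`/`delay_ms` fields in `deploy-targets.json`, and the body match mirrors `expect_body_includes`; the real `server.js` implementation may differ:

```bash
# Sketch: poll a target health URL until the expected marker appears in the body.
poll_health() {
  url="$1"; expect="$2"; attempts="${3:-8}"; delay_s="${4:-3}"
  for i in $(seq 1 "$attempts"); do
    # -f makes curl fail on HTTP errors (the expect_status side of the check).
    if body=$(curl -fsS --max-time 10 "$url" 2>/dev/null) \
       && printf '%s' "$body" | grep -q "$expect"; then
      return 0
    fi
    sleep "$delay_s"
  done
  return 1
}
```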
Target-map validation is already part of:
```bash
bash scripts/verify/run-all-validation.sh --skip-genesis
```
and can also be run directly:
```bash
bash scripts/validation/validate-phoenix-deploy-targets.sh
```
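A minimal sketch of the kind of shape check that validation performs, using the required keys listed above (the actual script may enforce more, e.g. healthcheck fields):

```bash
# Sketch: every target entry must carry the documented required keys.
validate_targets() {
  jq -e '[.targets[]
          | has("repo") and has("branch") and has("target")
            and has("cwd") and has("command") and has("required_env")]
         | all' "$1" >/dev/null
}
```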
## Manual testing
Before trusting a new Gitea workflow, trigger the deploy service directly:
```bash
bash scripts/dev-vm/trigger-phoenix-deploy.sh
```
Trigger the live portal deployment target directly:
```bash
bash scripts/dev-vm/trigger-phoenix-deploy.sh d-bis/proxmox main portal-live
```
Inspect configured targets:
```bash
curl -s http://192.168.11.59:4001/api/deploy-targets | jq .
```
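The request body those triggers send follows the workflow's `-d` payload. A sketch of building it locally with `jq` (field names match the workflow; the values here are illustrative, and the trigger script may construct it differently):

```bash
# Sketch: build the JSON body POSTed to /api/deploy.
deploy_payload() {
  jq -n --arg repo "$1" --arg sha "$2" --arg branch "$3" --arg target "$4" \
    '{repo: $repo, sha: $sha, branch: $branch, target: $target}'
}
```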
## Recommended next expansions
- Add a Phoenix API target for the repo that owns VMID **7800** or **8600**, depending on which deployment line is canonical.
- Add repo-specific workflows once the Sankofa source repos themselves are mirrored into Gitea Actions.
- Move secret values from ad hoc `.env` files into the final operator-managed secret source once you settle the production host for `phoenix-deploy-api`.
## Notes
- The Gitea workflow is gated by `scripts/verify/run-all-validation.sh --skip-genesis` before deploy.
- `phoenix-deploy-api` now returns `404` when no matching target exists and `500` when the deploy command fails.
- Commit status updates are written back to Gitea from the deploy service itself.

View File

@@ -0,0 +1,247 @@
{
"defaults": {
"timeout_sec": 1800
},
"targets": [
{
"repo": "d-bis/proxmox",
"branch": "main",
"target": "default",
"description": "Install the Phoenix deploy API locally on the dev VM from the synced repo workspace.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"phoenix-deploy-api/scripts/install-systemd.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"healthcheck": {
"url": "http://192.168.11.59:4001/health",
"expect_status": 200,
"expect_body_includes": "phoenix-deploy-api",
"attempts": 8,
"delay_ms": 3000,
"timeout_ms": 10000
}
},
{
"repo": "d-bis/proxmox",
"branch": "main",
"target": "cloudflare-sync",
"description": "Optional: sync Cloudflare DNS from repo .env (path-gated; set PHOENIX_CLOUDFLARE_SYNC=1 on host).",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/gitea-cloudflare-sync.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"timeout_sec": 600
},
{
"repo": "d-bis/proxmox",
"branch": "main",
"target": "cloudflare-sync-force",
"description": "Same as cloudflare-sync but skips path filter (operator / manual).",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/gitea-cloudflare-sync.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"timeout_sec": 600
},
{
"repo": "d-bis/proxmox",
"branch": "main",
"target": "portal-live",
"description": "Deploy the Sankofa portal to CT 7801 on Proxmox.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/sync-sankofa-portal-7801.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT",
"SANKOFA_PORTAL_SRC"
],
"healthcheck": {
"url": "http://192.168.11.51:3000/",
"expect_status": 200,
"expect_body_includes": "<html",
"attempts": 10,
"delay_ms": 5000,
"timeout_ms": 10000
}
},
{
"repo": "d-bis/CurrenciCombo",
"branch": "main",
"target": "default",
"description": "Deploy CurrenciCombo from the staged Gitea workspace into Phoenix CT 8604 and verify the public hostname end to end.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/phoenix-deploy-currencicombo-from-workspace.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT",
"PHOENIX_DEPLOY_WORKSPACE"
],
"healthcheck": {
"url": "https://curucombo.xn--vov0g.com/api/ready",
"expect_status": 200,
"expect_body_includes": "\"ready\":true",
"attempts": 12,
"delay_ms": 5000,
"timeout_ms": 15000
}
},
{
"repo": "d-bis/proxmox",
"branch": "main",
"target": "atomic-swap-dapp-live",
"description": "Deploy the Atomic Swap dApp to VMID 5801 on Proxmox.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/deploy-atomic-swap-dapp-5801.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"healthcheck": {
"url": "https://atomic-swap.defi-oracle.io/data/live-route-registry.json",
"expect_status": 200,
"expect_body_includes": "\"liveBridgeRoutes\"",
"attempts": 10,
"delay_ms": 5000,
"timeout_ms": 15000
}
},
{
"repo": "d-bis/proxmox",
"branch": "master",
"target": "default",
"description": "Install the Phoenix deploy API locally on the dev VM from the synced repo workspace.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"phoenix-deploy-api/scripts/install-systemd.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"healthcheck": {
"url": "http://192.168.11.59:4001/health",
"expect_status": 200,
"expect_body_includes": "phoenix-deploy-api",
"attempts": 8,
"delay_ms": 3000,
"timeout_ms": 10000
}
},
{
"repo": "d-bis/proxmox",
"branch": "master",
"target": "atomic-swap-dapp-live",
"description": "Deploy the Atomic Swap dApp to VMID 5801 on Proxmox.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/deploy-atomic-swap-dapp-5801.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"healthcheck": {
"url": "https://atomic-swap.defi-oracle.io/data/live-route-registry.json",
"expect_status": 200,
"expect_body_includes": "\"liveBridgeRoutes\"",
"attempts": 10,
"delay_ms": 5000,
"timeout_ms": 15000
}
},
{
"repo": "d-bis/proxmox",
"branch": "master",
"target": "cloudflare-sync",
"description": "Optional: sync Cloudflare DNS from repo .env (path-gated; set PHOENIX_CLOUDFLARE_SYNC=1 on host).",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/gitea-cloudflare-sync.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"timeout_sec": 600
},
{
"repo": "d-bis/proxmox",
"branch": "master",
"target": "cloudflare-sync-force",
"description": "Same as cloudflare-sync but skips path filter (operator / manual).",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/gitea-cloudflare-sync.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT"
],
"timeout_sec": 600
},
{
"repo": "d-bis/proxmox",
"branch": "master",
"target": "portal-live",
"description": "Deploy the Sankofa portal to CT 7801 on Proxmox.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/sync-sankofa-portal-7801.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT",
"SANKOFA_PORTAL_SRC"
],
"healthcheck": {
"url": "http://192.168.11.51:3000/",
"expect_status": 200,
"expect_body_includes": "<html",
"attempts": 10,
"delay_ms": 5000,
"timeout_ms": 10000
}
},
{
"repo": "d-bis/CurrenciCombo",
"branch": "master",
"target": "default",
"description": "Deploy CurrenciCombo from the staged Gitea workspace into Phoenix CT 8604 and verify the public hostname end to end.",
"cwd": "${PHOENIX_REPO_ROOT}",
"command": [
"bash",
"scripts/deployment/phoenix-deploy-currencicombo-from-workspace.sh"
],
"required_env": [
"PHOENIX_REPO_ROOT",
"PHOENIX_DEPLOY_WORKSPACE"
],
"healthcheck": {
"url": "https://curucombo.xn--vov0g.com/api/ready",
"expect_status": 200,
"expect_body_includes": "\"ready\":true",
"attempts": 12,
"delay_ms": 5000,
"timeout_ms": 15000
}
}
]
}
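The target map above can be structurally sanity-checked with a short script. A hedged sketch, treating the six documented keys as required per entry (stricter than the service's own fallback defaults):

```python
import json

# Structural check for deploy-targets.json entries. Required keys follow the
# schema documented earlier; fields like "description" and "healthcheck"
# remain optional.
REQUIRED = ("repo", "branch", "target", "cwd", "command", "required_env")

def validate_target_map(raw):
    config = json.loads(raw)
    problems = []
    for i, entry in enumerate(config.get("targets", [])):
        for key in REQUIRED:
            if key not in entry:
                problems.append(f"targets[{i}] missing {key}")
        if not isinstance(entry.get("command"), list) or not entry.get("command"):
            problems.append(f"targets[{i}] command must be a non-empty array")
    return problems

sample = json.dumps({"targets": [{
    "repo": "d-bis/proxmox", "branch": "main", "target": "default",
    "cwd": "${PHOENIX_REPO_ROOT}",
    "command": ["bash", "script.sh"],
    "required_env": ["PHOENIX_REPO_ROOT"],
}]})
assert validate_target_map(sample) == []
```

The in-repo `validate-phoenix-deploy-targets.sh` remains the gate that actually runs in CI; this is only an illustration of the shape it enforces.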


@@ -25,7 +25,70 @@ if [[ -f "$REPO_ROOT/config/public-sector-program-manifest.json" ]]; then
else
echo "WARN: $REPO_ROOT/config/public-sector-program-manifest.json missing — set PUBLIC_SECTOR_MANIFEST_PATH in .env"
fi
[ -f "$APP_DIR/.env" ] && cp "$APP_DIR/.env" "$TARGET/.env" || [ -f "$APP_DIR/.env.example" ] && cp "$APP_DIR/.env.example" "$TARGET/.env" || true
if [[ -f "$TARGET/.env" ]]; then
echo "Preserving existing $TARGET/.env"
elif [[ -f "$APP_DIR/.env" ]]; then
cp "$APP_DIR/.env" "$TARGET/.env"
elif [[ -f "$APP_DIR/.env.example" ]]; then
cp "$APP_DIR/.env.example" "$TARGET/.env"
fi
ensure_env_value() {
local key="$1"
local value="$2"
local file="$TARGET/.env"
[[ -n "$value" && -f "$file" ]] || return 0
local current=""
if grep -qE "^${key}=" "$file"; then
current="$(grep -E "^${key}=" "$file" | tail -n 1 | cut -d= -f2-)"
fi
[[ -z "$current" ]] || return 0
local tmp
tmp="$(mktemp)"
awk -v key="$key" -v value="$value" '
BEGIN { found = 0 }
$0 ~ "^" key "=" {
print key "=" value
found = 1
next
}
{ print }
END {
if (!found) print key "=" value
}
' "$file" > "$tmp"
cat "$tmp" > "$file"
rm -f "$tmp"
}
repo_env_value() {
local key="$1"
local file="$REPO_ROOT/.env"
[[ -f "$file" ]] || return 0
grep -E "^${key}=" "$file" | tail -n 1 | cut -d= -f2-
}
if [[ -f "$TARGET/.env" ]]; then
ensure_env_value PHOENIX_REPO_ROOT "$REPO_ROOT"
for key in \
GITEA_TOKEN \
PHOENIX_DEPLOY_SECRET \
PROXMOX_HOST \
PROXMOX_PORT \
PROXMOX_USER \
PROXMOX_TOKEN_NAME \
PROXMOX_TOKEN_VALUE \
PROXMOX_TLS_VERIFY \
PUBLIC_IP \
CLOUDFLARE_API_TOKEN \
CLOUDFLARE_GITEA_SYNC_ZONE \
PHOENIX_CLOUDFLARE_SYNC
do
ensure_env_value "$key" "$(repo_env_value "$key")"
done
fi
chown -R root:root "$TARGET"
cd "$TARGET" && npm install --omit=dev
cp "$APP_DIR/phoenix-deploy-api.service" /etc/systemd/system/
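The `ensure_env_value` helper above only backfills keys that are missing or empty in the target `.env`; existing non-empty values always win. The same rule, sketched in Python for clarity:

```python
# Mirror of the ensure_env_value backfill rule: write key=value only when the
# file lacks the key or its (last) value is empty; never overwrite a
# non-empty existing value.
def backfill(env_lines, key, value):
    if not value:
        return env_lines
    current = None
    for line in env_lines:
        if line.startswith(f"{key}="):
            current = line.split("=", 1)[1]  # last occurrence wins
    if current:  # non-empty existing value is preserved
        return env_lines
    out, found = [], False
    for line in env_lines:
        if line.startswith(f"{key}="):
            out.append(f"{key}={value}")
            found = True
        else:
            out.append(line)
    if not found:
        out.append(f"{key}={value}")
    return out

assert backfill(["GITEA_TOKEN=abc"], "GITEA_TOKEN", "new") == ["GITEA_TOKEN=abc"]
assert backfill(["GITEA_TOKEN="], "GITEA_TOKEN", "new") == ["GITEA_TOKEN=new"]
assert backfill([], "PUBLIC_IP", "1.2.3.4") == ["PUBLIC_IP=1.2.3.4"]
```

This is what makes reinstalling the API idempotent: operator-set secrets survive, while fresh installs pick up values from the repo `.env`.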


@@ -1,6 +1,6 @@
#!/usr/bin/env node
/**
* Phoenix Deploy API — Gitea webhook receiver, deploy stub, and Phoenix API Railing (Infra/VE)
* Phoenix Deploy API — Gitea webhook receiver, deploy execution API, and Phoenix API Railing (Infra/VE)
*
* Endpoints:
* POST /webhook/gitea — Receives Gitea push/tag/PR webhooks
@@ -19,7 +19,9 @@
import crypto from 'crypto';
import https from 'https';
import path from 'path';
import { readFileSync, existsSync } from 'fs';
import { promisify } from 'util';
import { execFile as execFileCallback } from 'child_process';
import { cpSync, existsSync, mkdirSync, mkdtempSync, readFileSync, readdirSync, rmSync, writeFileSync } from 'fs';
import { fileURLToPath } from 'url';
import express from 'express';
@@ -29,6 +31,13 @@ const PORT = parseInt(process.env.PORT || '4001', 10);
const GITEA_URL = (process.env.GITEA_URL || 'https://gitea.d-bis.org').replace(/\/$/, '');
const GITEA_TOKEN = process.env.GITEA_TOKEN || '';
const WEBHOOK_SECRET = process.env.PHOENIX_DEPLOY_SECRET || '';
const PHOENIX_REPO_ROOT_DEFAULT = (process.env.PHOENIX_REPO_ROOT_DEFAULT || '/srv/projects/proxmox').trim();
const ATOMIC_SWAP_REPO = (process.env.PHOENIX_ATOMIC_SWAP_REPO || 'd-bis/atomic-swap-dapp').trim();
const ATOMIC_SWAP_REF = (process.env.PHOENIX_ATOMIC_SWAP_REF || 'main').trim();
const CROSS_CHAIN_PMM_LPS_REPO = (process.env.PHOENIX_CROSS_CHAIN_PMM_LPS_REPO || '').trim();
const CROSS_CHAIN_PMM_LPS_REF = (process.env.PHOENIX_CROSS_CHAIN_PMM_LPS_REF || 'main').trim();
const SMOM_DBIS_138_REPO = (process.env.PHOENIX_SMOM_DBIS_138_REPO || '').trim();
const SMOM_DBIS_138_REF = (process.env.PHOENIX_SMOM_DBIS_138_REF || 'main').trim();
const PROXMOX_HOST = process.env.PROXMOX_HOST || '';
const PROXMOX_PORT = parseInt(process.env.PROXMOX_PORT || '8006', 10);
@@ -42,6 +51,17 @@ const PROMETHEUS_URL = (process.env.PROMETHEUS_URL || 'http://localhost:9090').r
const PHOENIX_WEBHOOK_URL = process.env.PHOENIX_WEBHOOK_URL || '';
const PHOENIX_WEBHOOK_SECRET = process.env.PHOENIX_WEBHOOK_SECRET || '';
const PARTNER_KEYS = (process.env.PHOENIX_PARTNER_KEYS || '').split(',').map((k) => k.trim()).filter(Boolean);
const WEBHOOK_DEPLOY_ENABLED = process.env.PHOENIX_WEBHOOK_DEPLOY_ENABLED === '1' || process.env.PHOENIX_WEBHOOK_DEPLOY_ENABLED === 'true';
const execFile = promisify(execFileCallback);
function expandEnvTokens(value, env = process.env) {
if (typeof value !== 'string') return value;
return value.replace(/\$\{([A-Z0-9_]+)\}/gi, (_, key) => env[key] || '');
}
function resolvePhoenixRepoRoot() {
return (process.env.PHOENIX_REPO_ROOT || PHOENIX_REPO_ROOT_DEFAULT || '').trim().replace(/\/$/, '');
}
/**
* Manifest resolution order:
@@ -63,15 +83,395 @@ function resolvePublicSectorManifestPath() {
return path.join(__dirname, '..', 'config', 'public-sector-program-manifest.json');
}
function resolveDeployTargetsPath() {
const override = (process.env.DEPLOY_TARGETS_PATH || '').trim();
if (override && existsSync(override)) return override;
const bundled = path.join(__dirname, 'deploy-targets.json');
if (existsSync(bundled)) return bundled;
return bundled;
}
function loadDeployTargetsConfig() {
const configPath = resolveDeployTargetsPath();
if (!existsSync(configPath)) {
return {
path: configPath,
defaults: {},
targets: [],
};
}
const raw = readFileSync(configPath, 'utf8');
const parsed = JSON.parse(raw);
return {
path: configPath,
defaults: parsed.defaults || {},
targets: Array.isArray(parsed.targets) ? parsed.targets : [],
};
}
function findDeployTarget(repo, branch, requestedTarget) {
const config = loadDeployTargetsConfig();
const wantedTarget = requestedTarget || 'default';
const match = config.targets.find((entry) => {
if (entry.repo !== repo) return false;
if ((entry.branch || 'main') !== branch) return false;
return (entry.target || 'default') === wantedTarget;
});
return { config, match, wantedTarget };
}
async function sleep(ms) {
await new Promise((resolve) => setTimeout(resolve, ms));
}
async function verifyHealthCheck(healthcheck) {
if (!healthcheck || !healthcheck.url) return null;
const attempts = Math.max(1, Number(healthcheck.attempts || 1));
const delayMs = Math.max(0, Number(healthcheck.delay_ms || 0));
const timeoutMs = Math.max(1000, Number(healthcheck.timeout_ms || 10000));
const expectedStatus = Number(healthcheck.expect_status || 200);
const expectBodyIncludes = healthcheck.expect_body_includes || '';
let lastError = null;
for (let attempt = 1; attempt <= attempts; attempt += 1) {
try {
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), timeoutMs);
const res = await fetch(healthcheck.url, { signal: controller.signal });
const body = await res.text();
clearTimeout(timeout);
if (res.status !== expectedStatus) {
throw new Error(`Expected HTTP ${expectedStatus}, got ${res.status}`);
}
if (expectBodyIncludes && !body.includes(expectBodyIncludes)) {
throw new Error(`Health body missing expected text: ${expectBodyIncludes}`);
}
return {
ok: true,
url: healthcheck.url,
status: res.status,
attempt,
};
} catch (err) {
lastError = err;
if (attempt < attempts && delayMs > 0) {
await sleep(delayMs);
}
}
}
throw new Error(`Health check failed for ${healthcheck.url}: ${lastError?.message || 'unknown error'}`);
}
async function downloadRepoArchive({ owner, repo, ref, archivePath, authToken }) {
const archiveRef = `${ref}.tar.gz`;
const url = `${GITEA_URL}/api/v1/repos/${owner}/${repo}/archive/${archiveRef}`;
const headers = {};
if (authToken) headers.Authorization = `token ${authToken}`;
const res = await fetch(url, { headers });
if (!res.ok) {
throw new Error(`Failed to download archive ${owner}/${repo}@${ref}: HTTP ${res.status}`);
}
const buffer = Buffer.from(await res.arrayBuffer());
writeFileSync(archivePath, buffer);
}
function syncExtractedTree({ sourceRoot, destRoot, entries = null }) {
mkdirSync(destRoot, { recursive: true });
const selectedEntries = Array.isArray(entries) ? entries : readdirSync(sourceRoot);
for (const entry of selectedEntries) {
const sourcePath = path.join(sourceRoot, entry);
if (!existsSync(sourcePath)) continue;
const destPath = path.join(destRoot, entry);
rmSync(destPath, { recursive: true, force: true });
cpSync(sourcePath, destPath, { recursive: true });
}
}
async function syncRepoArchive({ owner, repo, ref, destRoot, entries = null, authToken = '' }) {
const tempDir = mkdtempSync('/tmp/phoenix-archive-');
const archivePath = path.join(tempDir, 'repo.tar.gz');
const extractDir = path.join(tempDir, 'extract');
mkdirSync(extractDir, { recursive: true });
try {
await downloadRepoArchive({ owner, repo, ref, archivePath, authToken });
await execFile('tar', ['-xzf', archivePath, '-C', extractDir]);
const [rootDir] = readdirSync(extractDir);
if (!rootDir) {
throw new Error(`Archive for ${owner}/${repo}@${ref} was empty`);
}
syncExtractedTree({
sourceRoot: path.join(extractDir, rootDir),
destRoot,
entries,
});
} finally {
rmSync(tempDir, { recursive: true, force: true });
}
}
async function prepareDeployWorkspace({ repo, branch, sha, target }) {
const repoRoot = resolvePhoenixRepoRoot();
if (!repoRoot) {
throw new Error('PHOENIX_REPO_ROOT is not configured');
}
const [owner, repoName] = repo.includes('/') ? repo.split('/') : ['d-bis', repo];
const externalWorkspaceRoot = path.join(repoRoot, '.phoenix-deploy-workspaces', owner, repoName);
// Manual smoke tests can target the already-staged local workspace without
// forcing an archive sync from Gitea.
if (sha === 'HEAD' || sha === 'local') {
mkdirSync(repoRoot, { recursive: true });
if (repo !== 'd-bis/proxmox') {
mkdirSync(externalWorkspaceRoot, { recursive: true });
}
return {
PHOENIX_REPO_ROOT: repoRoot,
PROXMOX_REPO_ROOT: repoRoot,
PHOENIX_DEPLOY_WORKSPACE: repo === 'd-bis/proxmox' ? repoRoot : externalWorkspaceRoot,
};
}
const ref = sha || branch || 'main';
if (repo === 'd-bis/proxmox') {
await syncRepoArchive({
owner,
repo: repoName,
ref,
destRoot: repoRoot,
entries: ['config', 'phoenix-deploy-api', 'reports', 'scripts', 'token-lists'],
authToken: GITEA_TOKEN,
});
} else {
await syncRepoArchive({
owner,
repo: repoName,
ref,
destRoot: externalWorkspaceRoot,
authToken: GITEA_TOKEN,
});
}
if (repo === 'd-bis/proxmox' && target === 'atomic-swap-dapp-live') {
const [swapOwner, swapRepo] = ATOMIC_SWAP_REPO.includes('/')
? ATOMIC_SWAP_REPO.split('/')
: ['d-bis', ATOMIC_SWAP_REPO];
await syncRepoArchive({
owner: swapOwner,
repo: swapRepo,
ref: ATOMIC_SWAP_REF,
destRoot: path.join(repoRoot, 'atomic-swap-dapp'),
authToken: GITEA_TOKEN,
});
if (CROSS_CHAIN_PMM_LPS_REPO) {
const [lpsOwner, lpsRepo] = CROSS_CHAIN_PMM_LPS_REPO.includes('/')
? CROSS_CHAIN_PMM_LPS_REPO.split('/')
: ['d-bis', CROSS_CHAIN_PMM_LPS_REPO];
await syncRepoArchive({
owner: lpsOwner,
repo: lpsRepo,
ref: CROSS_CHAIN_PMM_LPS_REF,
destRoot: path.join(repoRoot, 'cross-chain-pmm-lps'),
authToken: GITEA_TOKEN,
});
}
if (SMOM_DBIS_138_REPO) {
const [smomOwner, smomRepo] = SMOM_DBIS_138_REPO.includes('/')
? SMOM_DBIS_138_REPO.split('/')
: ['d-bis', SMOM_DBIS_138_REPO];
await syncRepoArchive({
owner: smomOwner,
repo: smomRepo,
ref: SMOM_DBIS_138_REF,
destRoot: path.join(repoRoot, 'smom-dbis-138'),
authToken: GITEA_TOKEN,
});
}
}
return {
PHOENIX_REPO_ROOT: repoRoot,
PROXMOX_REPO_ROOT: repoRoot,
PHOENIX_DEPLOY_WORKSPACE: repo === 'd-bis/proxmox' ? repoRoot : externalWorkspaceRoot,
};
}
async function runDeployTarget(definition, configDefaults, context, envOverrides = {}) {
if (!Array.isArray(definition.command) || definition.command.length === 0) {
throw new Error('Deploy target is missing a command array');
}
const childEnv = {
...process.env,
...envOverrides,
PHOENIX_DEPLOY_REPO: context.repo,
PHOENIX_DEPLOY_BRANCH: context.branch,
PHOENIX_DEPLOY_SHA: context.sha || '',
PHOENIX_DEPLOY_TARGET: context.target,
PHOENIX_DEPLOY_TRIGGER: context.trigger,
};
const cwd = expandEnvTokens(definition.cwd || configDefaults.cwd || process.cwd(), childEnv);
const timeoutSeconds = Number(definition.timeout_sec || configDefaults.timeout_sec || 1800);
const timeout = Number.isFinite(timeoutSeconds) && timeoutSeconds > 0 ? timeoutSeconds * 1000 : 1800 * 1000;
const command = definition.command.map((part) => expandEnvTokens(part, childEnv));
const missingEnv = (definition.required_env || []).filter((key) => !childEnv[key]);
if (missingEnv.length > 0) {
throw new Error(`Missing required env for deploy target: ${missingEnv.join(', ')}`);
}
if (!existsSync(cwd)) {
throw new Error(`Deploy working directory does not exist: ${cwd}`);
}
const { stdout, stderr } = await execFile(command[0], command.slice(1), {
cwd,
env: childEnv,
timeout,
maxBuffer: 10 * 1024 * 1024,
});
const healthcheck = await verifyHealthCheck(definition.healthcheck || configDefaults.healthcheck || null);
return {
cwd,
command,
stdout: stdout || '',
stderr: stderr || '',
timeout_sec: timeoutSeconds,
healthcheck,
};
}
async function executeDeploy({ repo, branch = 'main', target = 'default', sha = '', trigger = 'api' }) {
if (!repo) {
const error = new Error('repo required');
error.statusCode = 400;
error.payload = { error: error.message };
throw error;
}
const [owner, repoName] = repo.includes('/') ? repo.split('/') : ['d-bis', repo];
const commitSha = sha || '';
const requestedTarget = target || 'default';
const { config, match, wantedTarget } = findDeployTarget(repo, branch, requestedTarget);
if (!match) {
const error = new Error('Deploy target not configured');
error.statusCode = 404;
error.payload = {
error: error.message,
repo,
branch,
target: wantedTarget,
config_path: config.path,
};
if (commitSha && GITEA_TOKEN) {
await setGiteaCommitStatus(owner, repoName, commitSha, 'failure', `No deploy target for ${repo} ${branch} ${wantedTarget}`);
}
throw error;
}
if (commitSha && GITEA_TOKEN) {
await setGiteaCommitStatus(owner, repoName, commitSha, 'pending', 'Phoenix deployment in progress');
}
console.log(`[deploy] ${repo} branch=${branch} target=${wantedTarget} sha=${commitSha} trigger=${trigger}`);
let deployResult = null;
let deployError = null;
let envOverrides = {};
try {
envOverrides = await prepareDeployWorkspace({
repo,
branch,
sha: commitSha,
target: wantedTarget,
});
deployResult = await runDeployTarget(match, config.defaults, {
repo,
branch,
sha: commitSha,
target: wantedTarget,
trigger,
}, envOverrides);
if (commitSha && GITEA_TOKEN) {
await setGiteaCommitStatus(owner, repoName, commitSha, 'success', `Deployed to ${wantedTarget}`);
}
return {
status: 'completed',
repo,
branch,
target: wantedTarget,
config_path: config.path,
command: deployResult.command,
cwd: deployResult.cwd,
stdout: deployResult.stdout,
stderr: deployResult.stderr,
healthcheck: deployResult.healthcheck,
};
} catch (err) {
deployError = err;
if (commitSha && GITEA_TOKEN) {
await setGiteaCommitStatus(owner, repoName, commitSha, 'failure', `Deploy failed: ${err.message.slice(0, 120)}`);
}
err.statusCode = err.statusCode || 500;
err.payload = err.payload || {
error: err.message,
repo,
branch,
target: wantedTarget,
config_path: config.path,
};
throw err;
} finally {
if (PHOENIX_WEBHOOK_URL) {
const payload = {
event: 'deploy.completed',
repo,
branch,
target: wantedTarget,
sha: commitSha,
success: Boolean(deployResult),
command: deployResult?.command,
cwd: deployResult?.cwd,
phoenix_repo_root: envOverrides.PHOENIX_REPO_ROOT || null,
error: deployError?.message || null,
};
const body = JSON.stringify(payload);
const sig = crypto.createHmac('sha256', PHOENIX_WEBHOOK_SECRET || '').update(body).digest('hex');
fetch(PHOENIX_WEBHOOK_URL, {
method: 'POST',
headers: { 'Content-Type': 'application/json', 'X-Phoenix-Signature': `sha256=${sig}` },
body,
}).catch((e) => console.error('[webhook] outbound failed', e.message));
}
}
}
const httpsAgent = new https.Agent({ rejectUnauthorized: process.env.PROXMOX_TLS_VERIFY !== '0' });
function formatProxmoxAuthHeader(user, tokenName, tokenValue) {
if (tokenName.includes('!')) {
return `PVEAPIToken=${tokenName}=${tokenValue}`;
}
return `PVEAPIToken=${user}!${tokenName}=${tokenValue}`;
}
async function proxmoxRequest(endpoint, method = 'GET', body = null) {
const baseUrl = `https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json`;
const url = `${baseUrl}${endpoint}`;
const options = {
method,
headers: {
Authorization: `PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}`,
Authorization: formatProxmoxAuthHeader(PROXMOX_USER, PROXMOX_TOKEN_NAME, PROXMOX_TOKEN_VALUE),
'Content-Type': 'application/json',
},
agent: httpsAgent,
@@ -162,12 +562,44 @@ app.post('/webhook/gitea', async (req, res) => {
if (action === 'push' || (action === 'synchronize' && payload.pull_request)) {
if (branch === 'main' || branch === 'master' || ref.startsWith('refs/tags/')) {
if (sha && GITEA_TOKEN) {
await setGiteaCommitStatus(owner, repoName, sha, 'pending', 'Phoenix deployment triggered');
if (!WEBHOOK_DEPLOY_ENABLED) {
return res.status(200).json({
received: true,
repo: fullName,
branch,
sha,
deployed: false,
message: 'Webhook accepted; set PHOENIX_WEBHOOK_DEPLOY_ENABLED=1 to execute deploys from webhook events.',
});
}
try {
const result = await executeDeploy({
repo: fullName,
branch,
sha,
target: 'default',
trigger: 'webhook',
});
return res.status(200).json({
received: true,
repo: fullName,
branch,
sha,
deployed: true,
result,
});
} catch (err) {
return res.status(200).json({
received: true,
repo: fullName,
branch,
sha,
deployed: false,
error: err.message,
details: err.payload || null,
});
}
// Stub: enqueue deploy; actual implementation would call Proxmox/deploy logic
console.log(`[deploy-stub] Would deploy ${fullName} branch=${branch} sha=${sha}`);
// Stub: when full deploy runs, call setGiteaCommitStatus(owner, repoName, sha, 'success'|'failure', ...)
}
}
@@ -185,47 +617,36 @@ app.post('/api/deploy', async (req, res) => {
}
const { repo, branch = 'main', target, sha } = req.body;
if (!repo) {
return res.status(400).json({ error: 'repo required' });
try {
const result = await executeDeploy({
repo,
branch,
sha,
target,
trigger: 'api',
});
res.status(200).json(result);
} catch (err) {
res.status(err.statusCode || 500).json(err.payload || { error: err.message });
}
});
const [owner, repoName] = repo.includes('/') ? repo.split('/') : ['d-bis', repo];
const commitSha = sha || '';
if (commitSha && GITEA_TOKEN) {
await setGiteaCommitStatus(owner, repoName, commitSha, 'pending', 'Phoenix deployment in progress');
}
console.log(`[deploy] ${repo} branch=${branch} target=${target || 'default'} sha=${commitSha}`);
// Stub: no real deploy yet — report success so Gitea shows green; replace with real deploy + setGiteaCommitStatus on completion
const deploySuccess = true;
if (commitSha && GITEA_TOKEN) {
await setGiteaCommitStatus(
owner,
repoName,
commitSha,
deploySuccess ? 'success' : 'failure',
deploySuccess ? 'Deploy accepted (stub)' : 'Deploy failed (stub)'
);
}
res.status(202).json({
status: 'accepted',
repo,
branch,
target: target || 'default',
message: 'Deploy request queued (stub). Implement full deploy logic in Sankofa Phoenix API.',
app.get('/api/deploy-targets', (req, res) => {
const config = loadDeployTargetsConfig();
const targets = config.targets.map((entry) => ({
repo: entry.repo,
branch: entry.branch || 'main',
target: entry.target || 'default',
description: entry.description || '',
cwd: entry.cwd || config.defaults.cwd || '',
command: entry.command || [],
has_healthcheck: Boolean(entry.healthcheck || config.defaults.healthcheck),
}));
res.json({
config_path: config.path,
count: targets.length,
targets,
});
if (PHOENIX_WEBHOOK_URL) {
const payload = { event: 'deploy.completed', repo, branch, target: target || 'default', sha: commitSha, success: deploySuccess };
const body = JSON.stringify(payload);
const sig = crypto.createHmac('sha256', PHOENIX_WEBHOOK_SECRET || '').update(body).digest('hex');
fetch(PHOENIX_WEBHOOK_URL, {
method: 'POST',
headers: { 'Content-Type': 'application/json', 'X-Phoenix-Signature': `sha256=${sig}` },
body,
}).catch((e) => console.error('[webhook] outbound failed', e.message));
}
});
/**
@@ -474,7 +895,10 @@ app.listen(PORT, () => {
if (!GITEA_TOKEN) console.warn('GITEA_TOKEN not set — commit status updates disabled');
if (!hasProxmox) console.warn('PROXMOX_* not set — Infra/VE API returns stub data');
if (PHOENIX_WEBHOOK_URL) console.log('Outbound webhook enabled:', PHOENIX_WEBHOOK_URL);
if (WEBHOOK_DEPLOY_ENABLED) console.log('Inbound webhook deploy execution enabled');
if (PARTNER_KEYS.length > 0) console.log('Partner API key auth enabled for /api/v1/* (except GET /api/v1/public-sector/programs)');
const mpath = resolvePublicSectorManifestPath();
const dpath = resolveDeployTargetsPath();
console.log(`Public-sector manifest: ${mpath} (${existsSync(mpath) ? 'ok' : 'missing'})`);
console.log(`Deploy targets: ${dpath} (${existsSync(dpath) ? 'ok' : 'missing'})`);
});
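The `verifyHealthCheck` retry loop added above is the part most worth understanding when tuning `attempts`/`delay_ms` per target. A Python sketch of the same control flow, with a stub probe standing in for the HTTP fetch:

```python
import time

# Sketch of the verifyHealthCheck retry loop: up to `attempts` probes, each
# must return the expected status and (optionally) contain a body substring.
# `probe` stands in for the real HTTP fetch with its per-attempt timeout.
def verify_health(probe, attempts=3, delay_s=0, expect_status=200, expect_body=""):
    last_err = None
    for attempt in range(1, attempts + 1):
        try:
            status, body = probe()
            if status != expect_status:
                raise RuntimeError(f"expected {expect_status}, got {status}")
            if expect_body and expect_body not in body:
                raise RuntimeError("body missing expected text")
            return {"ok": True, "attempt": attempt, "status": status}
        except Exception as err:
            last_err = err
            if attempt < attempts and delay_s:
                time.sleep(delay_s)
    raise RuntimeError(f"health check failed: {last_err}")

# Flaky probe: fails twice, then succeeds on the third attempt.
calls = iter([(503, ""), (503, ""), (200, "phoenix-deploy-api")])
result = verify_health(lambda: next(calls), attempts=5, expect_body="phoenix-deploy-api")
assert result == {"ok": True, "attempt": 3, "status": 200}
```

Only the final failure is surfaced, so slow-starting services need `attempts * delay_ms` to cover their worst-case boot time.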


@@ -0,0 +1,152 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SUBMODULE_ROOT="$PROJECT_ROOT/atomic-swap-dapp"
source "$PROJECT_ROOT/config/ip-addresses.conf" 2>/dev/null || true
PROXMOX_HOST="${PROXMOX_DAPP_HOST:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"
VMID="${VMID:-5801}"
DEPLOY_ROOT="${DEPLOY_ROOT:-/var/www/atomic-swap}"
TMP_ARCHIVE="/tmp/atomic-swap-dapp-5801.tgz"
DIST_DIR="$SUBMODULE_ROOT/dist"
SKIP_BUILD="${SKIP_BUILD:-0}"
SSH_OPTS="${SSH_OPTS:--o BatchMode=yes -o ConnectTimeout=15 -o StrictHostKeyChecking=accept-new}"
cleanup() {
rm -f "$TMP_ARCHIVE"
}
trap cleanup EXIT
if [ ! -d "$SUBMODULE_ROOT" ]; then
echo "Missing submodule at $SUBMODULE_ROOT" >&2
exit 1
fi
cd "$SUBMODULE_ROOT"
if [ "$SKIP_BUILD" != "1" ]; then
if [ -f package-lock.json ]; then
npm ci >/dev/null
else
npm install >/dev/null
fi
npm run sync:ecosystem >/dev/null
npm run validate:manifest >/dev/null
npm run build >/dev/null
fi
for required_path in \
"$DIST_DIR/index.html" \
"$DIST_DIR/data/ecosystem-manifest.json" \
"$DIST_DIR/data/live-route-registry.json" \
"$DIST_DIR/data/deployed-venue-inventory.json"; do
if [ ! -f "$required_path" ]; then
echo "Missing required build artifact: $required_path" >&2
exit 1
fi
done
jq -e '.supportedNetworks[] | select(.chainId == 138) | .deployedVenuePoolCount >= 19 and .publicRoutingPoolCount >= 19' \
"$DIST_DIR/data/ecosystem-manifest.json" >/dev/null
jq -e '.liveSwapRoutes | length >= 19' "$DIST_DIR/data/live-route-registry.json" >/dev/null
jq -e '.liveBridgeRoutes | length >= 12' "$DIST_DIR/data/live-route-registry.json" >/dev/null
jq -e '.networks[] | select(.chainId == 138) | .venueCounts.deployedVenuePoolCount >= 19 and .summary.totalVenues >= 19' \
"$DIST_DIR/data/deployed-venue-inventory.json" >/dev/null
rm -f "$TMP_ARCHIVE"
tar -C "$SUBMODULE_ROOT" -czf "$TMP_ARCHIVE" dist
ssh $SSH_OPTS "root@$PROXMOX_HOST" true
scp -q $SSH_OPTS "$TMP_ARCHIVE" "root@$PROXMOX_HOST:/tmp/atomic-swap-dapp-5801.tgz"
ssh $SSH_OPTS "root@$PROXMOX_HOST" "
set -euo pipefail
pct push $VMID /tmp/atomic-swap-dapp-5801.tgz /tmp/atomic-swap-dapp-5801.tgz
pct exec $VMID -- bash -lc '
set -euo pipefail
mkdir -p \"$DEPLOY_ROOT\"
find \"$DEPLOY_ROOT\" -mindepth 1 -maxdepth 1 -exec rm -rf {} +
rm -rf /tmp/dist
tar -xzf /tmp/atomic-swap-dapp-5801.tgz -C /tmp
cp -R /tmp/dist/. \"$DEPLOY_ROOT/\"
mkdir -p /var/cache/nginx/atomic-swap-api
cat > /etc/nginx/conf.d/atomic-swap-api-cache.conf <<\"EOF\"
proxy_cache_path /var/cache/nginx/atomic-swap-api
levels=1:2
keys_zone=atomic_swap_api_cache:10m
max_size=256m
inactive=30m
use_temp_path=off;
EOF
cat > /etc/nginx/sites-available/atomic-swap <<\"EOF\"
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root $DEPLOY_ROOT;
index index.html;
location / {
try_files \$uri \$uri/ /index.html;
}
location = /index.html {
add_header Cache-Control \"no-store, no-cache, must-revalidate\" always;
}
location /data/ {
add_header Cache-Control \"no-store, no-cache, must-revalidate\" always;
}
location /assets/ {
add_header Cache-Control \"public, max-age=31536000, immutable\" always;
}
location /api/v1/ {
proxy_pass https://explorer.d-bis.org/api/v1/;
proxy_ssl_server_name on;
proxy_set_header Host explorer.d-bis.org;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host \$host;
proxy_http_version 1.1;
proxy_buffering on;
proxy_cache atomic_swap_api_cache;
proxy_cache_methods GET HEAD;
proxy_cache_key \"\$scheme\$proxy_host\$request_uri\";
proxy_cache_lock on;
proxy_cache_lock_timeout 10s;
proxy_cache_lock_age 10s;
proxy_cache_background_update on;
proxy_cache_revalidate on;
proxy_cache_valid 200 10s;
proxy_cache_valid 404 1s;
proxy_cache_valid any 0;
proxy_cache_use_stale error timeout invalid_header updating http_429 http_500 http_502 http_503 http_504;
add_header X-Atomic-Swap-Cache \$upstream_cache_status always;
}
}
EOF
ln -sfn /etc/nginx/sites-available/atomic-swap /etc/nginx/sites-enabled/atomic-swap
rm -f /etc/nginx/sites-enabled/default
rm -f /etc/nginx/sites-enabled/dapp
nginx -t
systemctl reload nginx
curl -fsS http://127.0.0.1/index.html >/dev/null
curl -fsS http://127.0.0.1/data/ecosystem-manifest.json >/dev/null
curl -fsS http://127.0.0.1/data/live-route-registry.json >/dev/null
curl -fsS http://127.0.0.1/data/deployed-venue-inventory.json >/dev/null
rm -rf /tmp/dist /tmp/atomic-swap-dapp-5801.tgz
'
rm -f /tmp/atomic-swap-dapp-5801.tgz
"
curl -fsS https://atomic-swap.defi-oracle.io/ >/dev/null
curl -fsS https://atomic-swap.defi-oracle.io/data/ecosystem-manifest.json | jq -e '.supportedNetworks[] | select(.chainId == 138) | .deployedVenuePoolCount >= 19 and .publicRoutingPoolCount >= 19' >/dev/null
curl -fsS https://atomic-swap.defi-oracle.io/data/live-route-registry.json | jq -e '.liveSwapRoutes | length >= 19' >/dev/null
curl -fsS https://atomic-swap.defi-oracle.io/data/live-route-registry.json | jq -e '.liveBridgeRoutes | length >= 12' >/dev/null
curl -fsS https://atomic-swap.defi-oracle.io/data/deployed-venue-inventory.json | jq -e '.networks[] | select(.chainId == 138) | .venueCounts.deployedVenuePoolCount >= 19 and .summary.totalVenues >= 19' >/dev/null
echo "Deployed atomic-swap-dapp to VMID $VMID via $PROXMOX_HOST"
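The `jq -e` gates above fail the deploy before anything is shipped if the built registry is short on routes. The same thresholds, expressed as a Python sketch (the 19/12 minimums come straight from the script):

```python
import json

# Mirror of the jq release gates: the route registry must carry at least
# 19 live swap routes and 12 live bridge routes before the archive ships.
def registry_gates_pass(raw):
    registry = json.loads(raw)
    return (len(registry.get("liveSwapRoutes", [])) >= 19
            and len(registry.get("liveBridgeRoutes", [])) >= 12)

sample = json.dumps({
    "liveSwapRoutes": [{"id": i} for i in range(19)],
    "liveBridgeRoutes": [{"id": i} for i in range(12)],
})
assert registry_gates_pass(sample)
assert not registry_gates_pass(json.dumps(
    {"liveSwapRoutes": [], "liveBridgeRoutes": []}))
```

The script applies the identical checks twice: once against the local `dist/` artifacts and once against the public URLs after the nginx reload, so a cache serving stale data also fails the deploy.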


@@ -0,0 +1,244 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh"
source "$PROJECT_ROOT/config/ip-addresses.conf" 2>/dev/null || true
PHOENIX_DEPLOY_WORKSPACE="${PHOENIX_DEPLOY_WORKSPACE:-}"
PROXMOX_HOST="${PROXMOX_HOST_R630_01:-192.168.11.11}"
PROXMOX_SSH_USER="${PROXMOX_SSH_USER:-root}"
VMID="${CURRENCICOMBO_PHOENIX_VMID:-8604}"
CT_IP="${IP_CURRENCICOMBO_PHOENIX:-10.160.0.14}"
CT_REPO_DIR="${CT_REPO_DIR:-/var/lib/currencicombo/repo}"
PUBLIC_URL="${PUBLIC_URL:-https://curucombo.xn--vov0g.com}"
PUBLIC_DOMAIN="${PUBLIC_DOMAIN:-curucombo.xn--vov0g.com}"
NPM_URL="${NPM_URL:-https://${IP_NPMPLUS:-192.168.11.167}:81}"
NPM_EMAIL="${NPM_EMAIL:-}"
NPM_PASSWORD="${NPM_PASSWORD:-}"
DRY_RUN=0
usage() {
cat <<'USAGE'
Usage: phoenix-deploy-currencicombo-from-workspace.sh [--dry-run]
Requires:
PHOENIX_DEPLOY_WORKSPACE Full staged CurrenciCombo checkout prepared by phoenix-deploy-api
This script:
1. Packs the staged repo workspace.
2. Pushes it into CT 8604 on r630-01.
3. Ensures host prerequisites, install.sh, prune cron, and deploy script run in-CT.
4. Updates the public NPMplus host so /api/* preserves the full path and supports SSE.
5. Verifies the public portal + /api/ready end to end.
USAGE
}
while [[ $# -gt 0 ]]; do
case "$1" in
--dry-run) DRY_RUN=1; shift ;;
-h|--help) usage; exit 0 ;;
*) echo "unknown arg: $1" >&2; usage; exit 2 ;;
esac
done
log() { printf '[currencicombo-phoenix] %s\n' "$*" >&2; }
die() { printf '[currencicombo-phoenix][FATAL] %s\n' "$*" >&2; exit 1; }
run() { if [[ "$DRY_RUN" -eq 1 ]]; then printf '[dry-run] %s\n' "$*" >&2; else eval "$*"; fi; }
need_cmd() { command -v "$1" >/dev/null 2>&1 || die "missing required command: $1"; }
for cmd in ssh scp tar curl jq mktemp; do
need_cmd "$cmd"
done
[[ -n "$PHOENIX_DEPLOY_WORKSPACE" ]] || die "PHOENIX_DEPLOY_WORKSPACE is required"
[[ -d "$PHOENIX_DEPLOY_WORKSPACE" ]] || die "staged workspace missing: $PHOENIX_DEPLOY_WORKSPACE"
if [[ "$DRY_RUN" -eq 0 ]]; then
[[ -n "$NPM_EMAIL" ]] || die "NPM_EMAIL is required"
[[ -n "$NPM_PASSWORD" ]] || die "NPM_PASSWORD is required"
fi
SSH_TARGET="${PROXMOX_SSH_USER}@${PROXMOX_HOST}"
SSH_OPTS=(-o BatchMode=yes -o ConnectTimeout=15 -o StrictHostKeyChecking=accept-new)
TMP_DIR="$(mktemp -d /tmp/currencicombo-phoenix-XXXXXX)"
ARCHIVE_PATH="${TMP_DIR}/currencicombo-workspace.tgz"
REMOTE_ARCHIVE="/tmp/$(basename "$ARCHIVE_PATH")"
CT_ARCHIVE="/root/$(basename "$ARCHIVE_PATH")"
NPM_COOKIE_JAR="${TMP_DIR}/npm-cookies.txt"
cleanup() {
rm -rf "$TMP_DIR"
}
trap cleanup EXIT
ssh_remote() {
local cmd="$1"
if [[ "$DRY_RUN" -eq 1 ]]; then
printf '[dry-run] ssh %q %q\n' "$SSH_TARGET" "$cmd" >&2
else
ssh "${SSH_OPTS[@]}" "$SSH_TARGET" "$cmd"
fi
}
pct_exec_script() {
local local_script="$1"
local remote_script
local ct_script
remote_script="/tmp/$(basename "$local_script")"
ct_script="/root/$(basename "$local_script")"
run "scp ${SSH_OPTS[*]} '$local_script' '${SSH_TARGET}:${remote_script}'"
ssh_remote "pct push ${VMID} '${remote_script}' '${ct_script}' --perms 0755 && rm -f '${remote_script}' && pct exec ${VMID} -- bash '${ct_script}' && pct exec ${VMID} -- rm -f '${ct_script}'"
}
log "packing staged workspace from ${PHOENIX_DEPLOY_WORKSPACE}"
run "tar -C '$PHOENIX_DEPLOY_WORKSPACE' --exclude='.git' --exclude='node_modules' --exclude='dist' --exclude='orchestrator/node_modules' --exclude='orchestrator/dist' -czf '$ARCHIVE_PATH' ."
log "ensuring CT ${VMID} is running on ${PROXMOX_HOST}"
ssh_remote "pct start ${VMID} >/dev/null 2>&1 || true"
log "uploading staged archive to CT ${VMID}"
run "scp ${SSH_OPTS[*]} '$ARCHIVE_PATH' '${SSH_TARGET}:${REMOTE_ARCHIVE}'"
ssh_remote "pct push ${VMID} '${REMOTE_ARCHIVE}' '${CT_ARCHIVE}' && rm -f '${REMOTE_ARCHIVE}'"
CT_SCRIPT="${TMP_DIR}/currencicombo-ct-deploy.sh"
cat > "$CT_SCRIPT" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
export DEBIAN_FRONTEND=noninteractive
ARCHIVE_PATH="__CT_ARCHIVE__"
REPO_DIR="__CT_REPO_DIR__"
need_pkg() {
dpkg -s "$1" >/dev/null 2>&1
}
apt-get update -qq
for pkg in ca-certificates curl git jq postgresql redis-server rsync build-essential; do
need_pkg "$pkg" || apt-get install -y -qq "$pkg"
done
if ! command -v node >/dev/null 2>&1 || ! node -v 2>/dev/null | grep -q '^v20\.'; then
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt-get install -y -qq nodejs
fi
systemctl enable --now postgresql >/dev/null 2>&1 || true
systemctl enable --now redis-server >/dev/null 2>&1 || true
if [[ ! -f /root/currencicombo-prephoenix-archive.tgz && -d /opt/currencicombo ]]; then
tar -czf /root/currencicombo-prephoenix-archive.tgz /opt/currencicombo /etc/currencicombo 2>/dev/null || true
fi
install -d -o root -g root -m 0755 "$(dirname "$REPO_DIR")"
rm -rf "$REPO_DIR"
mkdir -p "$REPO_DIR"
tar -xzf "$ARCHIVE_PATH" -C "$REPO_DIR"
rm -f "$ARCHIVE_PATH"
bash "$REPO_DIR/scripts/deployment/install.sh"
bash "$REPO_DIR/scripts/deployment/install-prune-cron.sh"
CC_GIT_REF=local bash "$REPO_DIR/scripts/deployment/deploy-currencicombo-8604.sh"
systemctl is-active currencicombo-orchestrator.service currencicombo-webapp.service
curl -fsS http://127.0.0.1:8080/ready
curl -fsS http://127.0.0.1:3000/ >/dev/null
EOF
perl -0pi -e "s|__CT_ARCHIVE__|${CT_ARCHIVE//|/\\|}|g; s|__CT_REPO_DIR__|${CT_REPO_DIR//|/\\|}|g" "$CT_SCRIPT"
log "running install + deploy inside CT ${VMID}"
pct_exec_script "$CT_SCRIPT"
if [[ "$DRY_RUN" -eq 0 ]]; then
log "updating NPMplus proxy host for ${PUBLIC_DOMAIN}"
AUTH_JSON="$(jq -nc --arg identity "$NPM_EMAIL" --arg secret "$NPM_PASSWORD" '{identity:$identity,secret:$secret}')"
TOKEN_RESPONSE="$(curl -sk -X POST "$NPM_URL/api/tokens" -H 'Content-Type: application/json' -d "$AUTH_JSON" -c "$NPM_COOKIE_JAR")"
TOKEN="$(echo "$TOKEN_RESPONSE" | jq -r '.token // .accessToken // .access_token // .data.token // empty' 2>/dev/null)"
USE_COOKIE_AUTH=0
if [[ -z "$TOKEN" || "$TOKEN" == "null" ]]; then
if echo "$TOKEN_RESPONSE" | jq -e '.expires' >/dev/null 2>&1; then
USE_COOKIE_AUTH=1
else
die "NPMplus authentication failed"
fi
fi
npm_api() {
if [[ "$USE_COOKIE_AUTH" -eq 1 ]]; then
curl -sk -b "$NPM_COOKIE_JAR" "$@"
else
curl -sk -H "Authorization: Bearer $TOKEN" "$@"
fi
}

HOSTS_JSON="$(npm_api -X GET "$NPM_URL/api/nginx/proxy-hosts")"
HOST_ID="$(echo "$HOSTS_JSON" | jq -r --arg domain "$PUBLIC_DOMAIN" '
(if type == "array" then . elif .data != null then .data elif .result != null then .result else [] end)
| map(select(.domain_names | type == "array"))
| map(select(any(.domain_names[]; . == $domain)))
| .[0].id // empty
')"
[[ -n "$HOST_ID" ]] || die "NPMplus proxy host not found for ${PUBLIC_DOMAIN}"
ADVANCED_CONFIG="$(cat <<CFG
location ^~ /api/ {
proxy_pass http://${CT_IP}:8080;
proxy_http_version 1.1;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header Connection \"\";
proxy_buffering off;
proxy_cache off;
proxy_read_timeout 24h;
proxy_send_timeout 24h;
add_header Cache-Control \"no-cache\";
}
CFG
)"
PAYLOAD="$(echo "$HOSTS_JSON" | jq -c --arg domain "$PUBLIC_DOMAIN" --arg host "$CT_IP" --arg advanced "$ADVANCED_CONFIG" '
(if type == "array" then . elif .data != null then .data elif .result != null then .result else [] end)
| map(select(.domain_names | type == "array"))
| map(select(any(.domain_names[]; . == $domain)))
| .[0]
| {
domain_names,
forward_scheme: (.forward_scheme // "http"),
forward_host: $host,
forward_port: 3000,
access_list_id,
certificate_id,
ssl_forced,
caching_enabled,
block_exploits,
advanced_config: $advanced,
allow_websocket_upgrade,
http2_support,
hsts_enabled,
hsts_subdomains,
enabled
}
')"
[[ -n "$PAYLOAD" && "$PAYLOAD" != "null" ]] || die "failed to build NPMplus update payload"
UPDATE_RESPONSE="$(npm_api -X PUT "$NPM_URL/api/nginx/proxy-hosts/${HOST_ID}" -H 'Content-Type: application/json' -d "$PAYLOAD")"
echo "$UPDATE_RESPONSE" | jq -e '.id != null' >/dev/null 2>&1 || die "NPMplus proxy host update failed"
log "running public smoke checks"
HEADERS="$(curl -skI "$PUBLIC_URL/")"
echo "$HEADERS" | grep -q '^HTTP/2 200' || die "public root is not HTTP 200"
if echo "$HEADERS" | grep -qi '^x-nextjs-prerender:'; then
die "old Next.js headers still present on public root"
fi
curl -sk "$PUBLIC_URL/" | grep -F '<title>Solace Bank Group PLC — Treasury Management Portal</title>' >/dev/null || die "public title mismatch"
READY_BODY="$(curl -sk "$PUBLIC_URL/api/ready")"
echo "$READY_BODY" | grep -F '"ready":true' >/dev/null || die "public /api/ready failed"
# --max-time cuts the SSE stream after 5s and makes curl exit non-zero even on
# success; under pipefail that would trip the || die, so capture then grep.
SSE_BODY="$(curl -skN --max-time 5 -H 'Accept: text/event-stream' "$PUBLIC_URL/api/plans/demo-pay-014/status/stream" || true)"
echo "$SSE_BODY" | grep -F '"type":"connected"' >/dev/null || die "public SSE smoke failed"
log "capturing EXT-* blocker summary"
ssh_remote "pct exec ${VMID} -- journalctl -u currencicombo-orchestrator.service -n 200 --no-pager | grep -E 'ExternalBlockers|EXT-' || true"
fi
log "CurrenciCombo Phoenix deploy completed from ${PHOENIX_DEPLOY_WORKSPACE}"
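The NPMplus login handling in this script accepts either a bearer token (probing several possible response keys) or cookie-based auth when the login response only carries `expires`. The selection logic in isolation, with made-up response bodies:

```shell
# Hypothetical sample responses; the key-probing jq expression mirrors the
# token extraction used against /api/tokens above.
pick_auth() {
  local resp="$1" token
  token="$(echo "$resp" | jq -r '.token // .accessToken // .access_token // .data.token // empty')"
  if [ -n "$token" ] && [ "$token" != "null" ]; then
    echo bearer
  elif echo "$resp" | jq -e '.expires' >/dev/null 2>&1; then
    echo cookie
  else
    echo fail
  fi
}
a="$(pick_auth '{"token":"abc123"}')"
b="$(pick_auth '{"expires":"2030-01-01T00:00:00Z"}')"
c="$(pick_auth '{}')"
echo "$a $b $c"
```

The `//` alternative operator in jq falls through to the next key only when the previous one is missing or null, which is what makes the multi-shape probe a one-liner.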

@@ -0,0 +1,56 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
SOURCE_TARGET_PAIRS=(
".gitea/workflow-sources/deploy-to-phoenix.yml:.gitea/workflows/deploy-to-phoenix.yml"
".gitea/workflow-sources/validate-on-pr.yml:.gitea/workflows/validate-on-pr.yml"
)
REMOTE="${GITEA_WORKFLOW_REMOTE:-origin}"
if git remote | grep -qx gitea; then
REMOTE="${GITEA_WORKFLOW_REMOTE:-gitea}"
fi
missing_ref=false
for ref in "$REMOTE/main" "$REMOTE/master"; do
if ! git rev-parse --verify "$ref" >/dev/null 2>&1; then
missing_ref=true
fi
done
if [[ "$missing_ref" == true ]]; then
echo "[i] Skipping main/master workflow parity check ($REMOTE/main or $REMOTE/master not available)"
exit 0
fi
for pair in "${SOURCE_TARGET_PAIRS[@]}"; do
source="${pair%%:*}"
target="${pair##*:}"
main_blob="$(git show "$REMOTE/main:$source" 2>/dev/null || true)"
master_blob="$(git show "$REMOTE/master:$source" 2>/dev/null || true)"
if [[ -z "$main_blob" ]]; then
main_blob="$(git show "$REMOTE/main:$target" 2>/dev/null || true)"
fi
if [[ -z "$master_blob" ]]; then
master_blob="$(git show "$REMOTE/master:$target" 2>/dev/null || true)"
fi
if [[ -z "$main_blob" || -z "$master_blob" ]]; then
echo "[✗] Missing $source/$target on $REMOTE/main or $REMOTE/master" >&2
exit 1
fi
if [[ "$main_blob" != "$master_blob" ]]; then
echo "[✗] Branch workflow drift: $source differs between $REMOTE/main and $REMOTE/master" >&2
echo " Keep both deploy branches in lockstep for workflow-source files." >&2
exit 1
fi
echo "[✓] Branch parity OK for $source"
done
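`git show <ref>:<path>` reads a blob straight from the object store without checking anything out, which is what keeps the parity check above cheap. A throwaway-repo sketch of the comparison (requires git >= 2.28 for `init -b`):

```shell
# Two branches pointing at the same commit compare equal blob-for-blob.
repo="$(mktemp -d)"
git -C "$repo" init -q -b main
printf 'on: push\n' > "$repo/wf.yml"
git -C "$repo" add wf.yml
git -C "$repo" -c user.email=ci@example.invalid -c user.name=ci commit -qm 'add wf'
git -C "$repo" branch master          # master points at the same commit
main_blob="$(git -C "$repo" show main:wf.yml)"
master_blob="$(git -C "$repo" show master:wf.yml)"
if [ "$main_blob" = "$master_blob" ]; then parity=ok; else parity=drift; fi
rm -rf "$repo"
echo "$parity"
```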

@@ -0,0 +1,32 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
check_one() {
local source_rel="$1"
local target_rel="$2"
if [[ ! -f "$source_rel" ]]; then
echo "[✗] Missing workflow source: $source_rel" >&2
return 1
fi
if [[ ! -f "$target_rel" ]]; then
echo "[✗] Missing generated workflow: $target_rel" >&2
return 1
fi
if ! diff -u "$source_rel" "$target_rel" >/dev/null; then
echo "[✗] Workflow drift detected: $target_rel does not match $source_rel" >&2
echo " Run: bash scripts/verify/sync-gitea-workflows.sh" >&2
return 1
fi
echo "[✓] $target_rel matches $source_rel"
}
check_one ".gitea/workflow-sources/deploy-to-phoenix.yml" ".gitea/workflows/deploy-to-phoenix.yml"
check_one ".gitea/workflow-sources/validate-on-pr.yml" ".gitea/workflows/validate-on-pr.yml"
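`diff -u … >/dev/null` in `check_one` is used purely for its exit status: 0 means the files are byte-identical, 1 means drift. The same status-only pattern in isolation:

```shell
# Exit-status-only use of diff: output is discarded, status drives the branch.
src="$(mktemp)"; dst="$(mktemp)"
printf 'on: push\n' > "$src"
printf 'on: push\n' > "$dst"
if diff -u "$src" "$dst" >/dev/null; then before=match; else before=drift; fi
printf 'on: pull_request\n' >> "$dst"   # introduce drift
if diff -u "$src" "$dst" >/dev/null; then after=match; else after=drift; fi
rm -f "$src" "$dst"
echo "$before $after"
```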

@@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Every path listed under "packages:" in pnpm-workspace.yaml must have a matching
# importer entry in pnpm-lock.yaml. If one is missing, pnpm can fail in confusing
# ways (e.g. pnpm outdated -r: Cannot read ... 'optionalDependencies').
# Usage: bash scripts/verify/check-pnpm-workspace-lockfile.sh
# Exit: 0 if check passes or pnpm is not used; 1 on mismatch.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
WS="${PROJECT_ROOT}/pnpm-workspace.yaml"
LOCK="${PROJECT_ROOT}/pnpm-lock.yaml"
if [[ ! -f "$WS" ]] || [[ ! -f "$LOCK" ]]; then
echo " (skip: pnpm-workspace.yaml or pnpm-lock.yaml not present at repo root)"
exit 0
fi
# Paths under the top-level `packages:` block only (stops at next top-level key)
mapfile -t _paths < <(awk '
/^packages:/ { p=1; next }
p && /^[a-zA-Z]/ && $0 !~ /^packages/ { exit }
p && /^[[:space:]]*-[[:space:]]/ {
sub(/^[[:space:]]*-[[:space:]]+/, "")
sub(/[[:space:]]*#.*/, "")
gsub(/[[:space:]]+$/, "")
if (length) print
}
' "$WS")
missing=()
for relp in "${_paths[@]}"; do
if [[ -z "$relp" ]]; then
continue
fi
if ! grep -qFx "  ${relp}:" "$LOCK"; then
missing+=("$relp")
fi
done
if [[ ${#missing[@]} -gt 0 ]]; then
echo "✗ pnpm lockfile is missing importer(s) for these workspace path(s):"
printf ' %q\n' "${missing[@]}"
echo " Run: pnpm install (at repo root) to refresh pnpm-lock.yaml"
exit 1
fi
echo "✓ pnpm workspace / lockfile importers aligned (${#_paths[@]} path(s))."
exit 0
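End to end, the check reduces to: extract the paths under `packages:`, then look for each one as a two-space-indented importer key in `pnpm-lock.yaml` (the indentation pnpm's lockfile uses for `importers:` entries). A self-contained run against toy files (contents invented):

```shell
ws="$(mktemp)"; lock="$(mktemp)"
cat > "$ws" <<'YAML'
packages:
  - apps/web   # trailing comment is stripped
  - packages/*
catalog:
  react: ^18.0.0
YAML
cat > "$lock" <<'YAML'
importers:
  .:
    dependencies: {}
  apps/web:
    dependencies: {}
YAML
# Same awk filter as the script above: collect list items under "packages:",
# stop at the next top-level key, drop comments and trailing whitespace.
paths="$(awk '
/^packages:/ { p=1; next }
p && /^[a-zA-Z]/ && $0 !~ /^packages/ { exit }
p && /^[[:space:]]*-[[:space:]]/ {
  sub(/^[[:space:]]*-[[:space:]]+/, "")
  sub(/[[:space:]]*#.*/, "")
  gsub(/[[:space:]]+$/, "")
  if (length) print
}
' "$ws")"
missing=""
while IFS= read -r relp; do
  grep -qFx "  ${relp}:" "$lock" || missing="$missing $relp"
done <<< "$paths"
rm -f "$ws" "$lock"
echo "missing:$missing"
```

Here `packages/*` has no importer entry in the toy lockfile, so it is the one path reported missing.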

@@ -3,6 +3,7 @@
# Use for CI or pre-deploy: dependencies, config files, optional genesis.
# Usage: bash scripts/verify/run-all-validation.sh [--skip-genesis]
# --skip-genesis: do not run validate-genesis.sh (default: run if smom-dbis-138 present).
# Steps: dependencies, config files, cW* mesh matrix (if pair-discovery JSON exists), genesis.
set -euo pipefail
@@ -24,15 +25,64 @@ bash "$SCRIPT_DIR/check-dependencies.sh" || log_err "check-dependencies failed"
log_ok "Dependencies OK"
echo ""
echo "1b. pnpm workspace vs lockfile..."
if [[ -f "$PROJECT_ROOT/pnpm-workspace.yaml" ]]; then
bash "$SCRIPT_DIR/check-pnpm-workspace-lockfile.sh" || log_err "pnpm lockfile / workspace drift"
log_ok "pnpm lockfile aligned with workspace"
else
echo " (no pnpm-workspace.yaml at root — skip)"
fi
echo ""
echo "1c. Gitea workflow source sync..."
bash "$SCRIPT_DIR/check-gitea-workflows.sh" || log_err "Gitea workflow source drift"
log_ok "Gitea workflows match source-of-truth files"
echo ""
echo "1d. main/master workflow parity..."
bash "$SCRIPT_DIR/check-gitea-branch-workflow-parity.sh" || log_err "main/master workflow parity drift"
log_ok "main/master workflow parity OK"
echo ""
echo "2. Config files..."
bash "$SCRIPT_DIR/../validation/validate-config-files.sh" || log_err "validate-config-files failed"
log_ok "Config validation OK"
echo ""
echo "3. cW* mesh matrix (deployment-status + Uni V2 pair-discovery)..."
DISCOVERY_JSON="$PROJECT_ROOT/reports/extraction/promod-uniswap-v2-live-pair-discovery-latest.json"
if [[ -f "$DISCOVERY_JSON" ]]; then
MATRIX_JSON="$PROJECT_ROOT/reports/status/cw-mesh-deployment-matrix-latest.json"
bash "$SCRIPT_DIR/build-cw-mesh-deployment-matrix.sh" --no-markdown --json-out "$MATRIX_JSON" || log_err "cw mesh matrix merge failed"
log_ok "cW mesh matrix OK (also wrote $MATRIX_JSON)"
else
echo " ($DISCOVERY_JSON missing — run: bash scripts/verify/build-promod-uniswap-v2-live-pair-discovery.sh)"
fi
echo ""
echo "3b. deployment-status graph (cross-chain-pmm-lps)..."
PMM_VALIDATE="$PROJECT_ROOT/cross-chain-pmm-lps/scripts/validate-deployment-status.cjs"
if [[ -f "$PMM_VALIDATE" ]] && command -v node &>/dev/null; then
node "$PMM_VALIDATE" || log_err "validate-deployment-status.cjs failed"
log_ok "deployment-status.json rules OK"
else
echo " (skip: node or $PMM_VALIDATE missing)"
fi
echo ""
echo "3c. External dependency blockers..."
EXT_CHECK="$SCRIPT_DIR/check-external-dependencies.sh"
if [[ -x "$EXT_CHECK" ]]; then
bash "$EXT_CHECK" --advisory || true
else
echo " (skip: $EXT_CHECK missing)"
fi
echo ""
if [[ "$SKIP_GENESIS" == true ]]; then
echo "4. Genesis — skipped (--skip-genesis)"
else
echo "4. Genesis (smom-dbis-138)..."
GENESIS_SCRIPT="$PROJECT_ROOT/smom-dbis-138/scripts/validation/validate-genesis.sh"
if [[ -x "$GENESIS_SCRIPT" ]]; then
bash "$GENESIS_SCRIPT" || log_err "validate-genesis failed"

@@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
sync_one() {
local source_rel="$1"
local target_rel="$2"
mkdir -p "$(dirname "$target_rel")"
cp "$source_rel" "$target_rel"
echo "[✓] Synced $target_rel from $source_rel"
}
sync_one ".gitea/workflow-sources/deploy-to-phoenix.yml" ".gitea/workflows/deploy-to-phoenix.yml"
sync_one ".gitea/workflow-sources/validate-on-pr.yml" ".gitea/workflows/validate-on-pr.yml"