Compare commits


12 Commits

Author SHA1 Message Date
f2e0434ad6 PR AB: complete Phoenix deployment scaffolding (add 3 files referenced by main 4a1f69a) (#32)
Some checks failed
Deploy to Phoenix / deploy (push) Failing after 7s
Adds webapp-nginx.conf, systemd/currencicombo-orchestrator.service, and install-prune-cron.sh — all three referenced by main's existing install.sh / deploy script / webapp.service / README but missing from the 4a1f69a commit. Byte-identical to PR #31 branch ded7d24.

Closes gap so CT 8604 can boot cleanly.
2026-04-23 04:39:36 +00:00
defiQUG
4a1f69a8e5 deploy: make Phoenix redeploys archive-safe
Some checks failed
Deploy to Phoenix / deploy (push) Failing after 5s
phoenix-deploy Deploy failed: Command failed: bash scripts/deployment/phoenix-deploy-currencicombo-from-workspace.sh [currencicombo-phoenix] packing s
2026-04-22 20:05:35 -07:00
b48eb2ab76 PR #29 (squash-merged via Gitea API)
Some checks failed
CI / Frontend Lint (push) Failing after 6s
CI / Frontend Type Check (push) Failing after 5s
CI / Frontend Build (push) Failing after 5s
CI / Frontend E2E Tests (push) Failing after 7s
CI / Orchestrator Build (push) Failing after 5s
CI / Orchestrator Unit Tests (push) Failing after 6s
CI / Orchestrator E2E (Testcontainers) (push) Failing after 6s
CI / Contracts Compile (push) Failing after 5s
CI / Contracts Test (push) Failing after 7s
Security Scan / Dependency Vulnerability Scan (push) Failing after 5s
Security Scan / OWASP ZAP Scan (push) Failing after 5s
2026-04-22 21:59:13 +00:00
3787362406 PR #28 (squash-merged via Gitea API)
Some checks failed
CI / Frontend Lint (push) Has been cancelled
CI / Frontend Type Check (push) Has been cancelled
CI / Frontend Build (push) Has been cancelled
CI / Frontend E2E Tests (push) Has been cancelled
CI / Orchestrator Build (push) Has been cancelled
CI / Orchestrator Unit Tests (push) Has been cancelled
CI / Orchestrator E2E (Testcontainers) (push) Has been cancelled
CI / Contracts Compile (push) Has been cancelled
CI / Contracts Test (push) Has been cancelled
Security Scan / Dependency Vulnerability Scan (push) Has been cancelled
Security Scan / OWASP ZAP Scan (push) Has been cancelled
2026-04-22 21:58:55 +00:00
c1aef82ede PR #27 (squash-merged via Gitea API)
Some checks failed
CI / Frontend Lint (push) Failing after 6s
CI / Frontend Type Check (push) Failing after 7s
CI / Frontend Build (push) Failing after 7s
CI / Frontend E2E Tests (push) Failing after 7s
CI / Orchestrator Build (push) Failing after 7s
CI / Orchestrator Unit Tests (push) Failing after 5s
CI / Orchestrator E2E (Testcontainers) (push) Failing after 6s
CI / Contracts Compile (push) Failing after 7s
CI / Contracts Test (push) Failing after 5s
Security Scan / Dependency Vulnerability Scan (push) Failing after 5s
Security Scan / OWASP ZAP Scan (push) Failing after 3s
2026-04-22 21:12:21 +00:00
7fdc9c06da PR #26 (squash-merged via Gitea API)
Some checks failed
CI / Frontend Lint (push) Has been cancelled
CI / Frontend Type Check (push) Has been cancelled
CI / Frontend Build (push) Has been cancelled
CI / Frontend E2E Tests (push) Has been cancelled
CI / Orchestrator Build (push) Has been cancelled
CI / Orchestrator Unit Tests (push) Has been cancelled
CI / Orchestrator E2E (Testcontainers) (push) Has been cancelled
CI / Contracts Compile (push) Has been cancelled
CI / Contracts Test (push) Has been cancelled
Security Scan / Dependency Vulnerability Scan (push) Has been cancelled
Security Scan / OWASP ZAP Scan (push) Has been cancelled
2026-04-22 21:11:56 +00:00
a9fbb39889 PR #25 (squash-merged via Gitea API)
Some checks failed
CI / Frontend Lint (push) Has been cancelled
CI / Frontend Type Check (push) Has been cancelled
CI / Frontend Build (push) Has been cancelled
CI / Frontend E2E Tests (push) Has been cancelled
CI / Orchestrator Build (push) Has been cancelled
CI / Orchestrator Unit Tests (push) Has been cancelled
CI / Orchestrator E2E (Testcontainers) (push) Has been cancelled
CI / Contracts Compile (push) Has been cancelled
CI / Contracts Test (push) Has been cancelled
Security Scan / Dependency Vulnerability Scan (push) Has been cancelled
Security Scan / OWASP ZAP Scan (push) Has been cancelled
2026-04-22 21:11:52 +00:00
21d49595d0 PR T: consolidate obligations/evaluator into rulesEngine (#24)
Some checks failed
CI / Frontend Lint (push) Failing after 9s
CI / Frontend Type Check (push) Failing after 7s
CI / Frontend Build (push) Failing after 7s
CI / Frontend E2E Tests (push) Failing after 7s
CI / Orchestrator Build (push) Failing after 7s
CI / Orchestrator Unit Tests (push) Failing after 7s
CI / Orchestrator E2E (Testcontainers) (push) Failing after 6s
CI / Contracts Compile (push) Failing after 6s
CI / Contracts Test (push) Failing after 6s
Security Scan / Dependency Vulnerability Scan (push) Failing after 6s
Security Scan / OWASP ZAP Scan (push) Failing after 4s
2026-04-22 20:48:09 +00:00
d7d3e80bff PR Q: E2E Testcontainers integration suite (#21)
Some checks failed
CI / Frontend Lint (push) Failing after 6s
CI / Frontend Type Check (push) Failing after 5s
CI / Frontend Build (push) Failing after 8s
CI / Frontend E2E Tests (push) Failing after 7s
CI / Orchestrator Build (push) Failing after 5s
CI / Orchestrator Unit Tests (push) Failing after 7s
CI / Orchestrator E2E (Testcontainers) (push) Failing after 6s
CI / Contracts Compile (push) Failing after 7s
CI / Contracts Test (push) Failing after 5s
Security Scan / Dependency Vulnerability Scan (push) Failing after 3s
Security Scan / OWASP ZAP Scan (push) Failing after 4s
2026-04-22 20:31:06 +00:00
2c72a51a06 PR R: FIN-link sandbox service (#22)
Some checks failed
CI / Frontend Lint (push) Has been cancelled
CI / Frontend Type Check (push) Has been cancelled
CI / Frontend Build (push) Has been cancelled
CI / Frontend E2E Tests (push) Has been cancelled
CI / Orchestrator Build (push) Has been cancelled
CI / Contracts Compile (push) Has been cancelled
CI / Contracts Test (push) Has been cancelled
Security Scan / OWASP ZAP Scan (push) Has been cancelled
Security Scan / Dependency Vulnerability Scan (push) Has been cancelled
2026-04-22 20:30:45 +00:00
b77ebce497 PR S: Machine-form obligation layer (terms-as-data) (#23)
Some checks failed
CI / Frontend Lint (push) Has been cancelled
CI / Frontend Type Check (push) Has been cancelled
CI / Frontend Build (push) Has been cancelled
CI / Frontend E2E Tests (push) Has been cancelled
CI / Orchestrator Build (push) Has been cancelled
CI / Contracts Compile (push) Has been cancelled
CI / Contracts Test (push) Has been cancelled
Security Scan / Dependency Vulnerability Scan (push) Has been cancelled
Security Scan / OWASP ZAP Scan (push) Has been cancelled
2026-04-22 20:30:32 +00:00
351bb472b6 PR P: Pluggable Rules Engine (JSON DSL) (#20)
Some checks failed
CI / Frontend Type Check (push) Has been cancelled
CI / Frontend Build (push) Has been cancelled
CI / Frontend E2E Tests (push) Has been cancelled
CI / Orchestrator Build (push) Failing after 10s
CI / Frontend Lint (push) Has been cancelled
Security Scan / Dependency Vulnerability Scan (push) Has been cancelled
Security Scan / OWASP ZAP Scan (push) Has been cancelled
CI / Contracts Compile (push) Failing after 8s
CI / Contracts Test (push) Failing after 7s
2026-04-22 20:30:21 +00:00
45 changed files with 5333 additions and 171 deletions


@@ -0,0 +1,22 @@
name: Deploy to Phoenix
on:
  push:
    branches: [main, master]
  workflow_dispatch:
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Trigger Phoenix deployment
        run: |
          SHA="$(git rev-parse HEAD)"
          BRANCH="$(git rev-parse --abbrev-ref HEAD)"
          curl -sSf -X POST "${{ secrets.PHOENIX_DEPLOY_URL }}" \
            -H "Authorization: Bearer ${{ secrets.PHOENIX_DEPLOY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d "{\"repo\":\"${{ gitea.repository }}\",\"sha\":\"${SHA}\",\"branch\":\"${BRANCH}\",\"target\":\"default\"}"
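The workflow POSTs a small JSON payload to the deploy hook. As a rough sketch of the receiving side (the field names come from the `curl -d` body above; `isDeployPayload` and the validation rules are hypothetical, not part of this repo), the hook might check the payload before triggering a redeploy:

```typescript
// Hypothetical validator for the deploy-hook payload the workflow
// above sends. Field names mirror the curl -d body; the SHA check
// assumes the full 40-hex output of `git rev-parse HEAD`.
interface PhoenixDeployPayload {
  repo: string;
  sha: string;
  branch: string;
  target: string;
}

function isDeployPayload(body: unknown): body is PhoenixDeployPayload {
  if (typeof body !== "object" || body === null) return false;
  const r = body as Record<string, unknown>;
  return (
    typeof r.repo === "string" &&
    /^[0-9a-f]{40}$/.test(String(r.sha)) && // full git SHA
    typeof r.branch === "string" &&
    typeof r.target === "string"
  );
}
```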


@@ -108,6 +108,56 @@ jobs:
        working-directory: orchestrator
        run: npm run build
  orchestrator-test:
    name: Orchestrator Unit Tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: actions/setup-node@v6
        with:
          node-version: "18"
          cache: "npm"
          cache-dependency-path: orchestrator/package-lock.json
      - name: Install dependencies
        working-directory: orchestrator
        run: npm ci
      - name: Type check
        working-directory: orchestrator
        run: npx tsc --noEmit
      - name: Unit tests
        working-directory: orchestrator
        run: npm test
  orchestrator-e2e:
    name: Orchestrator E2E (Testcontainers)
    runs-on: ubuntu-latest
    # Gap-analysis v2 §7.8 / §10.8 — opt-in E2E suite that brings up
    # a real Postgres container and exercises the lifecycle against it.
    # Gated on a workflow label so PR runs default to the fast unit
    # suite; add the `run-e2e` label to a PR to include this job.
    if: contains(github.event.pull_request.labels.*.name, 'run-e2e') || github.event_name == 'push'
    steps:
      - uses: actions/checkout@v5
      - uses: actions/setup-node@v6
        with:
          node-version: "18"
          cache: "npm"
          cache-dependency-path: orchestrator/package-lock.json
      - name: Install dependencies
        working-directory: orchestrator
        run: npm ci
      - name: E2E tests (Testcontainers Postgres + public Chain 138 RPC)
        working-directory: orchestrator
        # EXT-CHAIN138-CI-RPC resolved via the public endpoint at
        # https://rpc.public-0138.defi-oracle.io — the read-only
        # public-RPC suite exercises the orchestrator's ethers client
        # against a real Chain 138 node alongside the ganache-based
        # round-trip tests. The env var opts the public-RPC suite in;
        # without it, those tests self-skip.
        env:
          E2E_USE_PUBLIC_CHAIN138: "1"
        run: npm run test:e2e
  # Smart Contracts CI
  contracts-compile:
    name: Contracts Compile


@@ -16,6 +16,8 @@ contract NotaryRegistry is INotaryRegistry, Ownable {
event PlanFinalized(bytes32 indexed planId, bool success, bytes32 receiptHash);
event CodehashRegistered(address indexed contractAddress, bytes32 codehash, string version);
constructor(address initialOwner) Ownable(initialOwner) {}
/**
* @notice Register a plan with notary
*/


@@ -4,6 +4,6 @@ module.exports = {
  testEnvironment: "node",
  roots: ["<rootDir>/tests"],
  testMatch: ["**/*.test.ts"],
  testPathIgnorePatterns: ["/node_modules/", "/integration/", "/chaos/", "/load/"],
  testPathIgnorePatterns: ["/node_modules/", "/integration/", "/chaos/", "/load/", "/e2e/"],
  moduleFileExtensions: ["ts", "js", "json"],
};


@@ -0,0 +1,18 @@
/** @type {import('jest').Config} */
// E2E suite — runs the Testcontainers-backed integration tests
// under tests/e2e/. Separate from the default jest.config.js because
// it requires Docker and takes significantly longer.
//
// Usage:
// RUN_E2E=1 npx jest --config=jest.e2e.config.js
//
// CI wires this into a dedicated e2e workflow step so the normal
// unit-test suite stays <5s.
module.exports = {
  preset: "ts-jest",
  testEnvironment: "node",
  roots: ["<rootDir>/tests/e2e"],
  testMatch: ["**/*.e2e.test.ts"],
  moduleFileExtensions: ["ts", "js", "json"],
  testTimeout: 120_000,
};


@@ -8,6 +8,7 @@
    "dev": "ts-node src/index.ts",
    "start": "node dist/index.js",
    "test": "jest",
    "test:e2e": "RUN_E2E=1 jest --config=jest.e2e.config.js",
    "migrate": "ts-node src/db/migrations/index.ts"
  },
  "dependencies": {
@@ -27,6 +28,7 @@
  },
  "devDependencies": {
    "@jest/globals": "^30.3.0",
    "@testcontainers/postgresql": "^11.14.0",
    "@types/cors": "^2.8.17",
    "@types/express": "^4.17.21",
    "@types/jest": "^30.0.0",
@@ -34,8 +36,11 @@
    "@types/pg": "^8.10.9",
    "@types/supertest": "^7.2.0",
    "@types/uuid": "^9.0.6",
    "ganache": "^7.9.2",
    "jest": "^30.3.0",
    "solc": "^0.8.20",
    "supertest": "^7.2.2",
    "testcontainers": "^11.14.0",
    "ts-jest": "^29.4.9",
    "ts-node": "^10.9.2",
    "typescript": "^5.3.3"


@@ -1,27 +1,42 @@
import { z } from "zod";
const emptyToUndefined = (value: unknown) => {
  if (typeof value !== "string") return value;
  const trimmed = value.trim();
  return trimmed === "" ? undefined : trimmed;
};
const optionalString = () => z.preprocess(emptyToUndefined, z.string().optional());
const optionalUrl = () => z.preprocess(emptyToUndefined, z.string().url().optional());
/**
 * Environment variable validation schema
 */
const envSchema = z.object({
  NODE_ENV: z.enum(["development", "production", "test"]).default("development"),
  PORT: z.string().transform(Number).pipe(z.number().int().positive()),
  DATABASE_URL: z.string().url().optional(),
  API_KEYS: z.string().optional(),
  REDIS_URL: z.string().url().optional(),
  DATABASE_URL: optionalUrl(),
  API_KEYS: optionalString(),
  REDIS_URL: optionalUrl(),
  LOG_LEVEL: z.enum(["error", "warn", "info", "debug"]).default("info"),
  ALLOWED_IPS: z.string().optional(),
  ALLOWED_IPS: optionalString(),
  SESSION_SECRET: z.string().min(32),
  JWT_SECRET: z.string().min(32).optional(),
  AZURE_KEY_VAULT_URL: z.string().url().optional(),
  AWS_SECRETS_MANAGER_REGION: z.string().optional(),
  SENTRY_DSN: z.string().url().optional(),
  JWT_SECRET: z.preprocess(emptyToUndefined, z.string().min(32).optional()),
  AZURE_KEY_VAULT_URL: optionalUrl(),
  AWS_SECRETS_MANAGER_REGION: optionalString(),
  SENTRY_DSN: optionalUrl(),
  // Chain-138 + NotaryRegistry wiring (arch §4.5). All optional; when
  // absent the notary adapter falls back to its deterministic mock.
  CHAIN_138_RPC_URL: z.string().url().optional(),
  CHAIN_138_CHAIN_ID: z.string().regex(/^\d+$/).optional(),
  NOTARY_REGISTRY_ADDRESS: z.string().regex(/^0x[0-9a-fA-F]{40}$/).optional(),
  ORCHESTRATOR_PRIVATE_KEY: z.string().regex(/^0x[0-9a-fA-F]{64}$/).optional(),
  CHAIN_138_RPC_URL: optionalUrl(),
  CHAIN_138_CHAIN_ID: z.preprocess(emptyToUndefined, z.string().regex(/^\d+$/).optional()),
  NOTARY_REGISTRY_ADDRESS: z.preprocess(
    emptyToUndefined,
    z.string().regex(/^0x[0-9a-fA-F]{40}$/).optional(),
  ),
  ORCHESTRATOR_PRIVATE_KEY: z.preprocess(
    emptyToUndefined,
    z.string().regex(/^0x[0-9a-fA-F]{64}$/).optional(),
  ),
});
/**
@@ -31,7 +46,7 @@ export const env = envSchema.parse({
  NODE_ENV: process.env.NODE_ENV,
  PORT: process.env.PORT || "8080",
  DATABASE_URL: process.env.DATABASE_URL,
  API_KEYS: process.env.API_KEYS,
  API_KEYS: process.env.API_KEYS || process.env.ORCHESTRATOR_API_KEYS,
  REDIS_URL: process.env.REDIS_URL,
  LOG_LEVEL: process.env.LOG_LEVEL,
  ALLOWED_IPS: process.env.ALLOWED_IPS,
@@ -56,7 +71,7 @@ export function validateEnv() {
    NODE_ENV: process.env.NODE_ENV || "development",
    PORT: process.env.PORT || "8080",
    DATABASE_URL: process.env.DATABASE_URL,
    API_KEYS: process.env.API_KEYS,
    API_KEYS: process.env.API_KEYS || process.env.ORCHESTRATOR_API_KEYS,
    REDIS_URL: process.env.REDIS_URL,
    LOG_LEVEL: process.env.LOG_LEVEL || "info",
    ALLOWED_IPS: process.env.ALLOWED_IPS,
@@ -65,6 +80,10 @@ export function validateEnv() {
    AZURE_KEY_VAULT_URL: process.env.AZURE_KEY_VAULT_URL,
    AWS_SECRETS_MANAGER_REGION: process.env.AWS_SECRETS_MANAGER_REGION,
    SENTRY_DSN: process.env.SENTRY_DSN,
    CHAIN_138_RPC_URL: process.env.CHAIN_138_RPC_URL,
    CHAIN_138_CHAIN_ID: process.env.CHAIN_138_CHAIN_ID,
    NOTARY_REGISTRY_ADDRESS: process.env.NOTARY_REGISTRY_ADDRESS,
    ORCHESTRATOR_PRIVATE_KEY: process.env.ORCHESTRATOR_PRIVATE_KEY,
  };
  envSchema.parse(envWithDefaults);
  console.log("✅ Environment variables validated");
@@ -79,4 +98,3 @@ export function validateEnv() {
    throw error;
  }
}
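The key change in this diff is routing every optional field through `z.preprocess(emptyToUndefined, …)`, so an env var set to `""` or whitespace behaves like an unset one instead of failing the `.url()`/`.regex()` check. A standalone copy of the preprocessor (taken verbatim from the diff) shows the behavior:

```typescript
// Standalone copy of the emptyToUndefined preprocessor from env.ts.
// zod runs it before the .url()/.regex() checks, so empty or
// whitespace-only env vars validate as "absent" rather than erroring.
const emptyToUndefined = (value: unknown) => {
  if (typeof value !== "string") return value; // non-strings pass through
  const trimmed = value.trim();
  return trimmed === "" ? undefined : trimmed; // blank -> undefined
};
```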


@@ -0,0 +1,159 @@
/**
* External dependency blocker registry (EXT-* IDs).
*
* Mirrors the blocker gate in `proxmox/scripts/verify/
* check-external-dependencies.sh` so orchestrator startup logs and
* provider-switch mock-mode logs surface the **same** IDs the
* deployment pipeline already tracks. When operators see
* "[DbisCore] mock mode" they also see `blockerId: EXT-DBIS-CORE`,
* which maps 1:1 to the proxmox checker output.
*
* A blocker is considered **active** when:
* - the upstream dependency is not yet reachable / not yet built, AND
* - the orchestrator env does not point at any live instance (the
* presence of a "live URL" env var flips the blocker to resolved).
*
* Source of truth for the list: proxmox/docs/03-deployment/
* EXTERNAL_DEPENDENCY_BLOCKERS.md.
*/
export const EXT_BLOCKER_IDS = [
"EXT-DBIS-CORE",
"EXT-CC-PAYMENT-ADAPTERS",
"EXT-CC-AUDIT-LEDGER",
"EXT-CC-SHARED-EVENTS",
"EXT-CC-SHARED-SCHEMAS",
"EXT-FIN-GATEWAY",
"EXT-CHAIN138-CI-RPC",
] as const;
export type ExtBlockerId = (typeof EXT_BLOCKER_IDS)[number];
export interface ExtBlockerDetail {
id: ExtBlockerId;
title: string;
/** Env var whose presence resolves this blocker from the orchestrator's POV. */
resolvingEnvVar?: string;
/** Whether the blocker is structurally resolved independently of env. */
staticallyResolved?: boolean;
/** Short description suitable for structured logs. */
description: string;
}
export const BLOCKER_DETAILS: Record<ExtBlockerId, ExtBlockerDetail> = {
"EXT-DBIS-CORE": {
id: "EXT-DBIS-CORE",
title: "dbis_core live deployment",
resolvingEnvVar: "DBIS_CORE_URL",
description:
"DBIS Core Banking API not deployed; orchestrator falls back to deterministic mock.",
},
"EXT-CC-PAYMENT-ADAPTERS": {
id: "EXT-CC-PAYMENT-ADAPTERS",
title: "DBIS/cc-payment-adapters implementation",
description:
"Upstream repo is a template scaffold; no orchestrator client wired yet.",
},
"EXT-CC-AUDIT-LEDGER": {
id: "EXT-CC-AUDIT-LEDGER",
title: "DBIS/cc-audit-ledger implementation",
description:
"Upstream repo is a template scaffold; audit sink remains in-process events table.",
},
"EXT-CC-SHARED-EVENTS": {
id: "EXT-CC-SHARED-EVENTS",
title: "DBIS/cc-shared-events implementation",
description:
"Upstream repo is a template scaffold; orchestrator uses local eventBus schema.",
},
"EXT-CC-SHARED-SCHEMAS": {
id: "EXT-CC-SHARED-SCHEMAS",
title: "DBIS/cc-shared-schemas implementation",
description:
"Upstream repo is a template scaffold; orchestrator types are locally defined.",
},
"EXT-FIN-GATEWAY": {
id: "EXT-FIN-GATEWAY",
title: "Real FIN / Alliance Access gateway",
resolvingEnvVar: "FIN_SANDBOX_URL",
description:
"No real FIN transport; orchestrator routes dispatch through the in-process sandbox.",
},
"EXT-CHAIN138-CI-RPC": {
id: "EXT-CHAIN138-CI-RPC",
title: "Chain 138 RPC reachable from CI",
resolvingEnvVar: "CHAIN_138_RPC_URL",
description:
"Public Chain 138 RPC endpoint available; E2E and notary-chain paths can target a real chain.",
},
};
export type BlockerStatus = "active" | "resolved";
export interface BlockerStatusRecord extends ExtBlockerDetail {
status: BlockerStatus;
/** Value of the resolving env var at the time of evaluation, if any. */
resolvedVia?: string;
}
/**
* Evaluate current blocker status against `process.env` (or a
* supplied env object, for tests).
*/
export function evaluateBlockers(
env: NodeJS.ProcessEnv = process.env,
): BlockerStatusRecord[] {
return EXT_BLOCKER_IDS.map((id) => {
const detail = BLOCKER_DETAILS[id];
if (detail.staticallyResolved) {
return { ...detail, status: "resolved" };
}
if (detail.resolvingEnvVar) {
const v = env[detail.resolvingEnvVar];
if (v && v.length > 0) {
return {
...detail,
status: "resolved",
resolvedVia: detail.resolvingEnvVar,
};
}
}
return { ...detail, status: "active" };
});
}
/**
* Convenience: same as evaluateBlockers() filtered to active IDs only.
*/
export function activeBlockers(
env: NodeJS.ProcessEnv = process.env,
): ExtBlockerId[] {
return evaluateBlockers(env)
.filter((b) => b.status === "active")
.map((b) => b.id);
}
/**
* Emit a structured startup summary of external blockers using the
* supplied logger. Shape matches the proxmox checker output so
* operators can grep for the same IDs across the two systems.
*/
export function logBlockerStatusAtBoot(logger: {
info: (obj: Record<string, unknown>, msg: string) => void;
}): void {
const records = evaluateBlockers();
const active = records.filter((b) => b.status === "active").map((b) => b.id);
const resolved = records.filter((b) => b.status === "resolved").map((b) => b.id);
logger.info(
{
externalBlockers: records.map((b) => ({
id: b.id,
status: b.status,
resolvedVia: b.resolvedVia,
})),
activeCount: active.length,
resolvedCount: resolved.length,
},
`[ExternalBlockers] ${active.length} active, ${resolved.length} resolved`,
);
}
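The resolution rule evaluateBlockers() applies is small: a blocker with a `resolvingEnvVar` flips to "resolved" exactly when that variable is set to a non-empty value, and everything else stays "active". A minimal standalone sketch of that rule (the three entries mirror EXT-DBIS-CORE, EXT-FIN-GATEWAY, and EXT-CC-AUDIT-LEDGER from the registry above; `blockerStatus` itself is illustrative):

```typescript
// Standalone sketch of the evaluateBlockers() resolution rule,
// checked against a plain env object instead of process.env.
type Status = "active" | "resolved";

const resolvingEnvVars: Record<string, string | undefined> = {
  "EXT-DBIS-CORE": "DBIS_CORE_URL",
  "EXT-FIN-GATEWAY": "FIN_SANDBOX_URL",
  "EXT-CC-AUDIT-LEDGER": undefined, // scaffold-only: no env var resolves it
};

function blockerStatus(
  id: string,
  env: Record<string, string | undefined>,
): Status {
  const envVar = resolvingEnvVars[id];
  const v = envVar ? env[envVar] : undefined;
  return v && v.length > 0 ? "resolved" : "active";
}
```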


@@ -2,6 +2,7 @@ import "dotenv/config";
import express from "express";
import cors from "cors";
import { validateEnv } from "./config/env";
import { logBlockerStatusAtBoot } from "./config/externalBlockers";
import {
apiLimiter,
securityHeaders,
@@ -23,6 +24,11 @@ import { runMigration } from "./db/migrations";
// Validate environment on startup
validateEnv();
// Surface the current EXT-* external-dependency blocker status so
// orchestrator startup logs match the proxmox deployment checker
// (proxmox/scripts/verify/check-external-dependencies.sh) 1:1.
logBlockerStatusAtBoot(logger);
const app = express();
const PORT = process.env.PORT || 8080;
@@ -64,16 +70,28 @@ app.get("/health", async (req, res) => {
const health = await healthCheck();
res.status(health.status === "healthy" ? 200 : 503).json(health);
});
app.get("/api/health", async (req, res) => {
const health = await healthCheck();
res.status(health.status === "healthy" ? 200 : 503).json(health);
});
app.get("/ready", async (req, res) => {
const ready = await readinessCheck();
res.status(ready ? 200 : 503).json({ ready });
});
app.get("/api/ready", async (req, res) => {
const ready = await readinessCheck();
res.status(ready ? 200 : 503).json({ ready });
});
app.get("/live", async (req, res) => {
const alive = await livenessCheck();
res.status(alive ? 200 : 503).json({ alive });
});
app.get("/api/live", async (req, res) => {
const alive = await livenessCheck();
res.status(alive ? 200 : 503).json({ alive });
});
// Metrics endpoint
app.get("/metrics", async (req, res) => {
@@ -81,6 +99,11 @@ app.get("/metrics", async (req, res) => {
const metrics = await getMetrics();
res.send(metrics);
});
app.get("/api/metrics", async (req, res) => {
res.setHeader("Content-Type", register.contentType);
const metrics = await getMetrics();
res.send(metrics);
});
// API routes with rate limiting
app.use("/api", apiLimiter);
@@ -112,6 +135,19 @@ app.get("/api/proxmox/cluster/status", proxmoxClusterStatus);
app.get("/api/plans/:planId/status/stream", streamPlanStatus);
// FIN-link sandbox transport (gap-analysis v2 §7.1 / §10.6).
// Mounted only when FIN_SANDBOX_ENABLED=true so production builds
// don't expose the in-memory fake. Intended for dev + E2E only.
if (process.env.FIN_SANDBOX_ENABLED === "true") {
import("./services/finLink/sandbox").then(({ buildSandboxRouter, startAutoProgress }) => {
app.use("/fin-sandbox", buildSandboxRouter());
if (process.env.FIN_SANDBOX_AUTO_PROGRESS !== "false") {
startAutoProgress(Number(process.env.FIN_SANDBOX_TICK_MS || 2000));
}
logger.info({ route: "/fin-sandbox" }, "FIN-link sandbox mounted");
});
}
// Error handling middleware
import { errorHandler } from "./services/errorHandler";
import { initRedis } from "./services/redis";
@@ -154,4 +190,3 @@ async function start() {
}
start();


@@ -1,20 +1,44 @@
import type { Plan } from "../types/plan";
import { generatePacs008 } from "./iso20022";
import { logger } from "../logging/logger";
/**
* Bank-instruction client — two-phase-commit adapter for the payment
* leg (arch §4.3 Payment Messaging / Settlement Layer).
*
* Until `d-bis/dbis_core` is reachable as a live API, every call here
* is a deterministic mock. That corresponds to blocker EXT-DBIS-CORE
* in proxmox/docs/03-deployment/EXTERNAL_DEPENDENCY_BLOCKERS.md and
* flips to real once DBIS_CORE_URL is set (see services/dbisCore/).
*/
const BLOCKER_ID = "EXT-DBIS-CORE";
function bankMode(): "live" | "mock" {
return process.env.DBIS_CORE_URL ? "live" : "mock";
}
/**
* Prepare bank instruction (2PC prepare phase)
* Sends provisional ISO-20022 message
*/
export async function prepareBankInstruction(plan: Plan): Promise<boolean> {
console.log(`[Bank] Preparing instruction for plan ${plan.plan_id}`);
const mode = bankMode();
logger.info(
{
planId: plan.plan_id,
mode,
...(mode === "mock" ? { blockerId: BLOCKER_ID } : {}),
},
"[Bank] prepareBankInstruction()",
);
// Mock: In real implementation, this would:
// 1. Generate provisional ISO-20022 message (pacs.008 with conditional settlement)
// 2. Send to bank connector
// 3. Receive provisional acceptance
await new Promise((resolve) => setTimeout(resolve, 100));
return true;
}
@@ -27,30 +51,39 @@ export async function commitBankInstruction(plan: Plan): Promise<{
isoMessageId?: string;
error?: string;
}> {
console.log(`[Bank] Committing instruction for plan ${plan.plan_id}`);
const mode = bankMode();
logger.info(
{
planId: plan.plan_id,
mode,
...(mode === "mock" ? { blockerId: BLOCKER_ID } : {}),
},
"[Bank] commitBankInstruction()",
);
try {
// Generate final ISO-20022 message
const isoMessage = await generatePacs008(plan);
// Mock: In real implementation, this would:
// 1. Send ISO message to bank connector
// 2. Receive confirmation and message ID
// 3. Store message ID for audit trail
const isoMessageId = `MSG-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
const isoMessageId = `MSG-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
// Simulate processing delay
await new Promise((resolve) => setTimeout(resolve, 300));
return {
success: true,
isoMessageId,
};
} catch (error: any) {
} catch (err) {
const error = err instanceof Error ? err.message : String(err);
return {
success: false,
error: error.message,
error,
};
}
}
@@ -60,13 +93,20 @@ export async function commitBankInstruction(plan: Plan): Promise<{
* Cancels provisional instruction
*/
export async function abortBankInstruction(planId: string): Promise<void> {
console.log(`[Bank] Aborting instruction for plan ${planId}`);
const mode = bankMode();
logger.info(
{
planId,
mode,
...(mode === "mock" ? { blockerId: BLOCKER_ID } : {}),
},
"[Bank] abortBankInstruction()",
);
// Mock: In real implementation, this would:
// 1. Generate cancellation message (camt.056)
// 2. Send to bank connector
// 3. Confirm cancellation
await new Promise((resolve) => setTimeout(resolve, 100));
}


@@ -0,0 +1,183 @@
/**
* Loader for the `DBIS/cc-compliance-controls` controls matrix.
*
* `cc-compliance-controls` ships a v0 matrix at
* `controls/matrix/v0.yaml`. When `CC_CONTROLS_MATRIX_URL` is set the
* loader fetches that remote YAML; otherwise it returns an embedded
* snapshot so the orchestrator always has a usable matrix to assert
* against in validation/obligation flows without a network hop.
*
* The embedded snapshot is a faithful copy of the upstream v0 matrix
* at recon time — if upstream evolves, re-sync by fetching and
* replacing the `EMBEDDED_V0_MATRIX` literal.
*/
import { logger } from "../../logging/logger";
import type { CcControlsMatrix } from "./types";
export interface CcControlsConfig {
url?: string;
timeoutMs?: number;
fetchImpl?: typeof fetch;
}
/**
* Embedded v0 matrix — kept small and hand-typed rather than parsed
* from YAML so the orchestrator doesn't drag in a YAML runtime.
*/
const EMBEDDED_V0_MATRIX: CcControlsMatrix = {
version: 0,
source: "embedded",
domains: [
{
id: "identity_proofing",
controls: [
{
id: "IDP-001",
title: "Identity enrollment recorded in audit ledger",
evidenceType: "audit_event",
ownerTeam: "TBD",
frequency: "continuous",
},
],
},
{
id: "payment_issuance",
controls: [
{
id: "PAY-001",
title: "No production PAN in non-production",
evidenceType: "config_scan",
ownerTeam: "TBD",
frequency: "per_release",
},
],
},
{
id: "audit_non_repudiation",
controls: [
{
id: "AUD-001",
title: "Credential state change only via workflow + immutable event",
evidenceType: "architecture_review",
ownerTeam: "TBD",
frequency: "quarterly",
},
],
},
{
id: "registry_verticals",
controls: [
{
id: "REG-001",
title: "Judicial registry data classified high sensitivity; tenant-scoped APIs only",
evidenceType: "policy_review",
ownerTeam: "TBD",
frequency: "quarterly",
},
],
},
],
};
function loadConfigFromEnv(): CcControlsConfig {
return {
url: process.env.CC_CONTROLS_MATRIX_URL,
timeoutMs: process.env.CC_CONTROLS_MATRIX_TIMEOUT_MS
? parseInt(process.env.CC_CONTROLS_MATRIX_TIMEOUT_MS, 10)
: 10_000,
};
}
/**
* Minimal JSON-or-YAML-ish adapter: upstream ships YAML today but
* could add a JSON endpoint. This loader only accepts `application/
* json` responses — if the endpoint is pure YAML, serve it via a thin
* JSON-convert proxy or extend this loader.
*/
export async function loadControlsMatrix(
cfg: CcControlsConfig = loadConfigFromEnv(),
): Promise<CcControlsMatrix> {
if (!cfg.url) {
logger.info(
{ source: "embedded" },
"[CcControls] controls matrix (no CC_CONTROLS_MATRIX_URL — embedded v0)",
);
return EMBEDDED_V0_MATRIX;
}
const fetchImpl = cfg.fetchImpl ?? fetch;
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), cfg.timeoutMs ?? 10_000);
try {
const resp = await fetchImpl(cfg.url, {
method: "GET",
headers: { Accept: "application/json" },
signal: controller.signal,
});
if (!resp.ok) {
throw new Error(
`cc-controls matrix GET failed: HTTP ${resp.status}`,
);
}
const body = (await resp.json()) as unknown;
const parsed = normaliseMatrix(body);
logger.info(
{ source: "remote", url: cfg.url, version: parsed.version },
"[CcControls] controls matrix (remote)",
);
return parsed;
} finally {
clearTimeout(timer);
}
}
function normaliseMatrix(raw: unknown): CcControlsMatrix {
if (typeof raw !== "object" || raw === null) {
throw new Error("cc-controls matrix: response is not an object");
}
const r = raw as Record<string, unknown>;
const version = typeof r.version === "number" ? r.version : 0;
const domains = Array.isArray(r.domains) ? r.domains : [];
return {
version,
source: "remote",
domains: domains.map((d) => normaliseDomain(d)),
};
}
function normaliseDomain(raw: unknown): CcControlsMatrix["domains"][number] {
const r = (raw ?? {}) as Record<string, unknown>;
const controls = Array.isArray(r.controls) ? r.controls : [];
return {
id: String(r.id ?? ""),
controls: controls.map((c) => normaliseControl(c)),
};
}
function normaliseControl(raw: unknown): CcControlsMatrix["domains"][number]["controls"][number] {
const r = (raw ?? {}) as Record<string, unknown>;
return {
id: String(r.id ?? ""),
title: String(r.title ?? ""),
evidenceType: String(r.evidence_type ?? r.evidenceType ?? ""),
ownerTeam: String(r.owner_team ?? r.ownerTeam ?? "TBD"),
frequency: String(r.frequency ?? ""),
};
}
/**
* Convenience helper — resolve a control by id across all domains.
* Used by evaluator flows that need to attach control evidence to a
* transition.
*/
export function findControl(
matrix: CcControlsMatrix,
controlId: string,
): CcControlsMatrix["domains"][number]["controls"][number] | undefined {
for (const d of matrix.domains) {
for (const c of d.controls) {
if (c.id === controlId) return c;
}
}
return undefined;
}
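findControl() above is a plain nested traversal over the matrix. A self-contained sketch of the same lookup, with the types trimmed to just the fields the traversal touches (the two controls mirror IDP-001 and PAY-001 from the embedded snapshot):

```typescript
// Standalone sketch of findControl()'s domain/control traversal.
interface Control { id: string; title: string; }
interface Matrix { domains: { id: string; controls: Control[] }[]; }

function findControl(matrix: Matrix, controlId: string): Control | undefined {
  for (const d of matrix.domains) {
    for (const c of d.controls) {
      if (c.id === controlId) return c;
    }
  }
  return undefined; // not present in any domain
}

const matrix: Matrix = {
  domains: [
    { id: "identity_proofing", controls: [{ id: "IDP-001", title: "Identity enrollment recorded in audit ledger" }] },
    { id: "payment_issuance", controls: [{ id: "PAY-001", title: "No production PAN in non-production" }] },
  ],
};
```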


@@ -0,0 +1,155 @@
/**
* HTTP client adapter for `DBIS/cc-identity-core`.
*
* Provider-switched: when `CC_IDENTITY_URL` is set the client makes
* real HTTP calls to the upstream Complete Credential identity
* service; otherwise every method returns a deterministic mock so
* unit tests, local dev, and CI still work.
*
* Upstream surface (openapi.yaml + src/server.mjs at recon time):
* GET /health
* GET /ready
* POST /v1/subjects
*
* Extend as additional endpoints ship upstream.
*/
import { randomUUID } from "crypto";
import { logger } from "../../logging/logger";
import type {
CcHealthStatus,
CcSubject,
CcSubjectCreate,
} from "./types";
export interface CcIdentityConfig {
baseUrl?: string;
apiKey?: string;
timeoutMs?: number;
fetchImpl?: typeof fetch;
}
export interface CcIdentityClient {
mode: "live" | "mock";
health(): Promise<CcHealthStatus>;
ready(): Promise<CcHealthStatus>;
createSubject(req: CcSubjectCreate, correlationId?: string): Promise<CcSubject>;
}
function loadConfigFromEnv(): CcIdentityConfig {
return {
baseUrl: process.env.CC_IDENTITY_URL,
apiKey: process.env.CC_IDENTITY_API_KEY,
timeoutMs: process.env.CC_IDENTITY_TIMEOUT_MS
? parseInt(process.env.CC_IDENTITY_TIMEOUT_MS, 10)
: 10_000,
};
}
class HttpCcIdentityClient implements CcIdentityClient {
readonly mode = "live" as const;
private readonly baseUrl: string;
private readonly apiKey?: string;
private readonly timeoutMs: number;
private readonly fetchImpl: typeof fetch;
constructor(
cfg: Required<Pick<CcIdentityConfig, "baseUrl">> & CcIdentityConfig,
) {
this.baseUrl = cfg.baseUrl.replace(/\/+$/, "");
this.apiKey = cfg.apiKey;
this.timeoutMs = cfg.timeoutMs ?? 10_000;
this.fetchImpl = cfg.fetchImpl ?? fetch;
}
private async request<T>(
method: "GET" | "POST",
path: string,
body?: unknown,
correlationId?: string,
): Promise<T> {
const url = `${this.baseUrl}${path}`;
const headers: Record<string, string> = { Accept: "application/json" };
if (body !== undefined) headers["Content-Type"] = "application/json";
if (this.apiKey) headers["X-API-Key"] = this.apiKey;
if (correlationId) headers["X-Correlation-Id"] = correlationId;
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), this.timeoutMs);
try {
const resp = await this.fetchImpl(url, {
method,
headers,
body: body !== undefined ? JSON.stringify(body) : undefined,
signal: controller.signal,
});
if (!resp.ok) {
const text = await resp.text().catch(() => "");
throw new Error(
`cc-identity ${method} ${path} failed: HTTP ${resp.status} ${text.slice(0, 200)}`,
);
}
return (await resp.json()) as T;
} finally {
clearTimeout(timer);
}
}
health(): Promise<CcHealthStatus> {
return this.request<CcHealthStatus>("GET", "/health");
}
ready(): Promise<CcHealthStatus> {
return this.request<CcHealthStatus>("GET", "/ready");
}
createSubject(
req: CcSubjectCreate,
correlationId?: string,
): Promise<CcSubject> {
return this.request<CcSubject>(
"POST",
"/v1/subjects",
req,
correlationId ?? randomUUID(),
);
}
}
class MockCcIdentityClient implements CcIdentityClient {
readonly mode = "mock" as const;
async health(): Promise<CcHealthStatus> {
return { status: "ok", service: "cc-identity-core" };
}
async ready(): Promise<CcHealthStatus> {
return { status: "ok", service: "cc-identity-core", persistence: false };
}
async createSubject(req: CcSubjectCreate): Promise<CcSubject> {
return {
subjectId: randomUUID(),
tenantId: req.tenantId ?? "tenant-demo",
entityId: req.entityId ?? "entity-demo",
createdAt: new Date().toISOString(),
};
}
}
export function createCcIdentityClient(
cfg: CcIdentityConfig = loadConfigFromEnv(),
): CcIdentityClient {
if (cfg.baseUrl) {
logger.info(
{ baseUrl: cfg.baseUrl, mode: "live" },
"[CcIdentity] HTTP client",
);
return new HttpCcIdentityClient({ ...cfg, baseUrl: cfg.baseUrl });
}
logger.info(
{ mode: "mock" },
"[CcIdentity] HTTP client (no CC_IDENTITY_URL — mock mode; upstream cc-identity-core ships code but is not yet deployed)",
);
return new MockCcIdentityClient();
}
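The provider switch reduces to one branch in the factory. A minimal sketch of the pattern (the real adapter keys off `CC_IDENTITY_URL`; here the URL is just a parameter, and the client shape is hypothetical):

```typescript
// Hypothetical minimal version of the live/mock split.
interface Client {
  mode: "live" | "mock";
  health(): Promise<{ status: string }>;
}

function makeClient(baseUrl?: string): Client {
  if (baseUrl) {
    // Live: requests would target `${baseUrl}/health` and friends.
    return { mode: "live", health: async () => ({ status: `live:${baseUrl}` }) };
  }
  // No URL configured: deterministic mock so unit tests and CI still pass.
  return { mode: "mock", health: async () => ({ status: "ok" }) };
}

const mock = makeClient();
const live = makeClient("http://cc-identity.local");
```

Callers never branch on mode themselves; they hold a `Client` and the factory decides once at construction time.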

View File

@@ -0,0 +1,28 @@
/**
* Public surface for the DBIS Complete Credential (cc-*) adapters.
*
* Covers the upstream bounded-context repos the orchestrator needs:
* - cc-identity-core → identityClient (HTTP, provider-switched)
* - cc-compliance-controls → controlsMatrix (embedded v0 with
* optional remote JSON override)
*
* cc-payment-adapters / cc-audit-ledger / cc-shared-events are still
* template scaffolds upstream at recon time; when those services
* ship, add sibling clients here following the same pattern.
*/
export * from "./types";
export {
createCcIdentityClient,
} from "./identityClient";
export type {
CcIdentityClient,
CcIdentityConfig,
} from "./identityClient";
export {
loadControlsMatrix,
findControl,
} from "./controlsMatrix";
export type {
CcControlsConfig,
} from "./controlsMatrix";

View File

@@ -0,0 +1,49 @@
/**
* Types shared across the Complete Credential (DBIS cc-*) adapters.
*
* Shapes mirror the relevant upstream repos:
* - cc-identity-core (openapi/openapi.yaml + src/server.mjs)
* - cc-compliance-controls (controls/matrix/v0.yaml)
*
* Only the fields the orchestrator actually consumes are typed —
* extend as needed when more of the CC surface is wired.
*/
export interface CcHealthStatus {
status: "ok" | "unready";
service: string;
persistence?: boolean;
error?: string;
}
export interface CcSubjectCreate {
tenantId?: string;
entityId?: string;
metadata?: Record<string, string | number | boolean>;
}
export interface CcSubject {
subjectId: string;
tenantId: string;
entityId: string;
createdAt: string;
}
export interface CcControl {
id: string;
title: string;
evidenceType: string;
ownerTeam: string;
frequency: string;
}
export interface CcControlDomain {
id: string;
controls: CcControl[];
}
export interface CcControlsMatrix {
version: number;
source: "embedded" | "remote";
domains: CcControlDomain[];
}

View File

@@ -0,0 +1,218 @@
/**
* HTTP client adapter for `d-bis/dbis_core`.
*
* Provider-switched: when `DBIS_CORE_URL` is set the client makes real
* HTTP calls to the upstream DBIS Core Banking API; otherwise every
* method returns a deterministic mock response so unit tests, local
* dev, and CI still work.
*
* This is intentionally minimal — only the endpoints the orchestrator
* actually calls from its settlement / obligation / compliance paths.
* Extend the client surface as new orchestrator capabilities need more
* of the dbis_core API.
*/
import { logger } from "../../logging/logger";
import type {
AccountBalance,
AriDecisionRequest,
AriDecisionResponse,
AtomicSettleRequest,
AtomicSettleResponse,
Pacs008DispatchRequest,
Pacs008DispatchResponse,
RouteRequest,
RouteResponse,
SettlementStatus,
} from "./types";
export interface DbisCoreConfig {
baseUrl?: string;
apiKey?: string;
timeoutMs?: number;
fetchImpl?: typeof fetch;
}
export interface DbisCoreClient {
mode: "live" | "mock";
getAccountBalance(accountId: string): Promise<AccountBalance>;
findSettlementRoute(req: RouteRequest): Promise<RouteResponse>;
atomicSettle(req: AtomicSettleRequest): Promise<AtomicSettleResponse>;
getSettlementStatus(settlementId: string): Promise<SettlementStatus>;
requestAriDecision(req: AriDecisionRequest): Promise<AriDecisionResponse>;
dispatchPacs008(req: Pacs008DispatchRequest): Promise<Pacs008DispatchResponse>;
}
function loadConfigFromEnv(): DbisCoreConfig {
return {
baseUrl: process.env.DBIS_CORE_URL,
apiKey: process.env.DBIS_CORE_API_KEY,
timeoutMs: process.env.DBIS_CORE_TIMEOUT_MS
? parseInt(process.env.DBIS_CORE_TIMEOUT_MS, 10)
: 10_000,
};
}
class HttpDbisCoreClient implements DbisCoreClient {
readonly mode = "live" as const;
private readonly baseUrl: string;
private readonly apiKey?: string;
private readonly timeoutMs: number;
private readonly fetchImpl: typeof fetch;
constructor(cfg: Required<Pick<DbisCoreConfig, "baseUrl">> & DbisCoreConfig) {
this.baseUrl = cfg.baseUrl.replace(/\/+$/, "");
this.apiKey = cfg.apiKey;
this.timeoutMs = cfg.timeoutMs ?? 10_000;
this.fetchImpl = cfg.fetchImpl ?? fetch;
}
private async request<T>(
method: "GET" | "POST",
path: string,
body?: unknown,
): Promise<T> {
const url = `${this.baseUrl}${path}`;
const headers: Record<string, string> = {
Accept: "application/json",
};
if (body !== undefined) headers["Content-Type"] = "application/json";
if (this.apiKey) headers["X-API-Key"] = this.apiKey;
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), this.timeoutMs);
try {
const resp = await this.fetchImpl(url, {
method,
headers,
body: body !== undefined ? JSON.stringify(body) : undefined,
signal: controller.signal,
});
if (!resp.ok) {
const text = await resp.text().catch(() => "");
throw new Error(
`dbis_core ${method} ${path} failed: HTTP ${resp.status} ${text.slice(0, 200)}`,
);
}
return (await resp.json()) as T;
} finally {
clearTimeout(timer);
}
}
getAccountBalance(accountId: string): Promise<AccountBalance> {
return this.request<AccountBalance>(
"GET",
`/api/accounts/${encodeURIComponent(accountId)}/balance`,
);
}
findSettlementRoute(req: RouteRequest): Promise<RouteResponse> {
return this.request<RouteResponse>("POST", "/api/isn/route", req);
}
atomicSettle(req: AtomicSettleRequest): Promise<AtomicSettleResponse> {
return this.request<AtomicSettleResponse>("POST", "/api/isn/atomic", req);
}
getSettlementStatus(settlementId: string): Promise<SettlementStatus> {
return this.request<SettlementStatus>(
"GET",
`/api/isn/settlements/${encodeURIComponent(settlementId)}`,
);
}
requestAriDecision(req: AriDecisionRequest): Promise<AriDecisionResponse> {
return this.request<AriDecisionResponse>("POST", "/api/ari/decision", req);
}
dispatchPacs008(req: Pacs008DispatchRequest): Promise<Pacs008DispatchResponse> {
return this.request<Pacs008DispatchResponse>(
"POST",
"/api/v1/gpn/message/pacs008",
req,
);
}
}
class MockDbisCoreClient implements DbisCoreClient {
readonly mode = "mock" as const;
async getAccountBalance(accountId: string): Promise<AccountBalance> {
return {
accountId,
currency: "USD",
available: "1000000.00",
held: "0.00",
asOf: new Date().toISOString(),
};
}
async findSettlementRoute(req: RouteRequest): Promise<RouteResponse> {
return {
routeId: `mock-route-${req.sourceBankId}-${req.destinationBankId}`,
hops: [
{ bankId: req.sourceBankId, latencyMs: 20, feeBps: 0 },
{ bankId: req.destinationBankId, latencyMs: 40, feeBps: 5 },
],
estimatedLatencyMs: 60,
estimatedFeeBps: 5,
};
}
async atomicSettle(req: AtomicSettleRequest): Promise<AtomicSettleResponse> {
return {
settlementId: `mock-stlm-${req.reference}`,
status: "settled",
completedAt: new Date().toISOString(),
};
}
async getSettlementStatus(settlementId: string): Promise<SettlementStatus> {
return {
settlementId,
status: "settled",
legs: [
{ legId: `${settlementId}-leg1`, bankId: "mock-src", status: "confirmed" },
{ legId: `${settlementId}-leg2`, bankId: "mock-dst", status: "confirmed" },
],
lastUpdated: new Date().toISOString(),
};
}
async requestAriDecision(req: AriDecisionRequest): Promise<AriDecisionResponse> {
return {
txId: req.txId,
outcome: "allow",
riskScore: 0.1,
reasons: ["mock: default allow"],
evaluatedAt: new Date().toISOString(),
};
}
async dispatchPacs008(req: Pacs008DispatchRequest): Promise<Pacs008DispatchResponse> {
return {
messageId: req.messageId,
status: "accepted",
acknowledgmentRef: `mock-ack-${req.messageId}`,
};
}
}
/**
* Factory. Call once per process (or per test run) to get a client
* wired to whichever backend the env selects.
*/
export function createDbisCoreClient(
cfg: DbisCoreConfig = loadConfigFromEnv(),
): DbisCoreClient {
if (cfg.baseUrl) {
logger.info({ baseUrl: cfg.baseUrl, mode: "live" }, "[DbisCore] HTTP client");
return new HttpDbisCoreClient({ ...cfg, baseUrl: cfg.baseUrl });
}
logger.info(
{ mode: "mock", blockerId: "EXT-DBIS-CORE" },
"[DbisCore] HTTP client (no DBIS_CORE_URL — mock mode; blocker EXT-DBIS-CORE active)",
);
return new MockDbisCoreClient();
}
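The `fetchImpl` hook is the unit-test seam: tests inject a stub and never open a socket. A sketch of the request helper with an injected fetch (the stub and its canned balance payload are hypothetical):

```typescript
// Minimal re-creation of the request helper with an injected fetch.
type FetchLike = (url: string, init?: unknown) => Promise<{
  ok: boolean;
  status: number;
  json(): Promise<unknown>;
  text(): Promise<string>;
}>;

async function request<T>(
  baseUrl: string,
  path: string,
  fetchImpl: FetchLike,
): Promise<T> {
  const resp = await fetchImpl(`${baseUrl}${path}`);
  if (!resp.ok) {
    const text = await resp.text().catch(() => "");
    throw new Error(`GET ${path} failed: HTTP ${resp.status} ${text.slice(0, 200)}`);
  }
  return (await resp.json()) as T;
}

// Stubbed fetch: records the URL and returns a canned balance payload.
const seen: string[] = [];
const stubFetch: FetchLike = async (url) => {
  seen.push(url);
  return {
    ok: true,
    status: 200,
    json: async () => ({ accountId: "acct-1", available: "1000000.00" }),
    text: async () => "",
  };
};
```

With this seam a test can assert both the payload mapping and the exact URL the client built, without Testcontainers or a live dbis_core.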

View File

@@ -0,0 +1,9 @@
/**
* Public surface for the dbis_core client adapter.
* See ./client.ts for implementation and ./types.ts for the shared
* request/response shapes.
*/
export * from "./types";
export { createDbisCoreClient } from "./client";
export type { DbisCoreClient, DbisCoreConfig } from "./client";

View File

@@ -0,0 +1,106 @@
/**
* Canonical request/response shapes for the subset of `d-bis/dbis_core`
* endpoints the orchestrator actually calls. Kept small and focused —
* this is a client adapter, not a mirror of the upstream service.
*
 * Upstream endpoint references (from
 * dbis_core/src/integration/api-gateway/app.ts mount points):
*
* GET /api/accounts/:accountId/balance
* POST /api/isn/route
* POST /api/isn/atomic
* POST /api/ari/decision
* POST /api/v1/gpn/message/pacs008
* GET /api/isn/settlements/:settlementId
*/
export interface AccountBalance {
accountId: string;
currency: string;
available: string;
held: string;
asOf: string;
}
export interface RouteRequest {
sourceBankId: string;
destinationBankId: string;
amount: string;
currencyCode: string;
}
export interface SettlementHop {
bankId: string;
latencyMs: number;
feeBps: number;
}
export interface RouteResponse {
routeId: string;
hops: SettlementHop[];
estimatedLatencyMs: number;
estimatedFeeBps: number;
}
export interface AtomicSettleRequest {
routeId: string;
sourceAccountId: string;
destinationAccountId: string;
amount: string;
currencyCode: string;
reference: string;
}
export interface AtomicSettleResponse {
settlementId: string;
status: "accepted" | "settled" | "rejected";
completedAt?: string;
rejectionReason?: string;
}
export interface AriDecisionRequest {
txId: string;
amount: string;
currencyCode: string;
creator: string;
counterparty?: string;
metadata?: Record<string, string | number | boolean>;
}
export type AriOutcome = "allow" | "deny" | "review";
export interface AriDecisionResponse {
txId: string;
outcome: AriOutcome;
riskScore: number;
reasons: string[];
evaluatedAt: string;
}
export interface Pacs008DispatchRequest {
messageId: string;
creationDateTime: string;
debtor: { name: string; bic: string; account: string };
creditor: { name: string; bic: string; account: string };
amount: string;
currencyCode: string;
remittanceInfo?: string;
}
export interface Pacs008DispatchResponse {
messageId: string;
status: "accepted" | "rejected";
acknowledgmentRef?: string;
rejectionReason?: string;
}
export interface SettlementStatus {
settlementId: string;
status: "pending" | "routing" | "executing" | "settled" | "failed" | "reversed";
legs: {
legId: string;
bankId: string;
status: "pending" | "dispatched" | "confirmed" | "failed";
}[];
lastUpdated: string;
}

View File

@@ -0,0 +1,94 @@
/**
* FIN-link client (gap-analysis v2 §7.1 / §10.6).
*
* Thin wrapper around the outbound dispatch API. In dev / E2E it
* talks to the sandbox server mounted at FIN_SANDBOX_URL. In
* production it should talk to a real FIN / Alliance Access gateway
* that exposes the same minimal surface.
*
* The SWIFT message generators live in `services/swift/`; this
* client is the transport hop that PR E was missing.
*/
import type {
DispatchRequest,
DispatchResponse,
FinMessage,
} from "./sandbox";
export interface FinLinkClient {
dispatch(req: DispatchRequest): Promise<DispatchResponse>;
getMessage(reference: string): Promise<FinMessage | null>;
}
export function createHttpFinLinkClient(baseUrl: string): FinLinkClient {
const base = baseUrl.replace(/\/$/, "");
return {
async dispatch(req) {
const resp = await fetch(`${base}/dispatch`, {
method: "POST",
headers: { "content-type": "application/json" },
body: JSON.stringify(req),
});
if (!resp.ok) {
throw new Error(`fin dispatch failed: ${resp.status}`);
}
return (await resp.json()) as DispatchResponse;
},
async getMessage(reference) {
const resp = await fetch(`${base}/messages/${encodeURIComponent(reference)}`);
if (resp.status === 404) return null;
if (!resp.ok) throw new Error(`fin getMessage failed: ${resp.status}`);
return (await resp.json()) as FinMessage;
},
};
}
/**
* In-process client that talks to the sandbox module directly —
* avoids a round-trip through HTTP for unit tests.
*/
export async function createInProcessFinLinkClient(): Promise<FinLinkClient> {
const sandbox = await import("./sandbox");
return {
async dispatch(req) {
const msg = sandbox.recordDispatch(req);
return {
reference: msg.reference,
state: msg.state,
ackedAt: msg.updatedAt,
};
},
async getMessage(reference) {
return sandbox.getMessage(reference) ?? null;
},
};
}
/**
* Factory: returns an HTTP client if FIN_SANDBOX_URL is set, else an
* in-process client that short-circuits to the sandbox module.
*
* When falling back to the in-process sandbox we emit blocker
* EXT-FIN-GATEWAY (per proxmox/docs/03-deployment/
* EXTERNAL_DEPENDENCY_BLOCKERS.md) — that id maps 1:1 with the
* deployment checker and signals "no real FIN / Alliance Access
* transport configured yet".
*/
export async function getFinLinkClient(): Promise<FinLinkClient> {
const url = process.env.FIN_SANDBOX_URL;
if (url) {
const { logger } = await import("../../logging/logger");
logger.info(
{ baseUrl: url, mode: "live" },
"[FinLink] HTTP client (FIN_SANDBOX_URL)",
);
return createHttpFinLinkClient(url);
}
const { logger } = await import("../../logging/logger");
logger.info(
{ mode: "sandbox", blockerId: "EXT-FIN-GATEWAY" },
"[FinLink] in-process sandbox (no FIN_SANDBOX_URL — blocker EXT-FIN-GATEWAY active)",
);
return createInProcessFinLinkClient();
}
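Note the `getMessage` contract: a 404 maps to `null`, any other non-OK status throws, and OK parses JSON. A sketch of that branching with an injected fetch so it runs without a server (the stub reference is hypothetical):

```typescript
// 404 → null, other non-OK → throw, OK → parsed JSON.
type RespLike = { ok: boolean; status: number; json(): Promise<unknown> };

async function lookupMessage(
  reference: string,
  fetchImpl: (url: string) => Promise<RespLike>,
): Promise<unknown | null> {
  const resp = await fetchImpl(`/messages/${encodeURIComponent(reference)}`);
  if (resp.status === 404) return null;
  if (!resp.ok) throw new Error(`fin getMessage failed: ${resp.status}`);
  return resp.json();
}

// Stub backend that knows exactly one reference.
const stub = async (url: string): Promise<RespLike> =>
  url === "/messages/FIN-ABC123"
    ? { ok: true, status: 200, json: async () => ({ reference: "FIN-ABC123" }) }
    : { ok: false, status: 404, json: async () => ({}) };
```

Treating "not found" as a value rather than an exception keeps polling loops simple: callers check for `null` instead of catching.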

View File

@@ -0,0 +1,28 @@
/**
* FIN-link public surface.
*/
export {
buildSandboxRouter,
recordDispatch,
advance,
rejectMessage,
getMessage,
listMessages,
resetSandboxForTests,
startAutoProgress,
stopAutoProgress,
finSignature,
type FinMessage,
type FinMessageState,
type FinMessageType,
type DispatchRequest,
type DispatchResponse,
} from "./sandbox";
export {
createHttpFinLinkClient,
createInProcessFinLinkClient,
getFinLinkClient,
type FinLinkClient,
} from "./client";

View File

@@ -0,0 +1,274 @@
/**
* FIN-link sandbox (gap-analysis v2 §7.1 / §10.6).
*
* The SWIFT generators under `services/swift/` produce strings — but
* the architecture note §4.3 requires an actual transport. Until a
* production FIN-link / Alliance Access integration ships, this
* sandbox service stands in as the outbound transport so the full
* lifecycle (dispatch → ack → accept → settle) can be exercised end
* to end in dev + E2E.
*
* The sandbox:
*
* 1. Accepts an outbound SWIFT/ISO payload via POST /dispatch.
* 2. Assigns a FIN reference, stores the message in memory, and
 * returns a synchronous ack (202 Accepted).
* 3. Advances the message through a deterministic lifecycle:
* received -> acknowledged -> accepted -> settled
* on each tick of an internal clock (configurable via
* setTickIntervalMs for tests).
* 4. Exposes GET /messages/:reference + GET /messages for polling.
* 5. Optionally POSTs a webhook on each state change when a caller
* supplies `webhookUrl` in the dispatch request.
*
* The sandbox is intentionally process-local. Production transports
* should back this interface with a real FIN queue / Alliance Web
* Platform gateway.
*/
import { createHmac, randomBytes } from "crypto";
import express, { Router, type Request, type Response } from "express";
export type FinMessageState =
| "received"
| "acknowledged"
| "accepted"
| "settled"
| "rejected";
export type FinMessageType =
| "MT760"
| "MT202"
| "pacs.009"
| "pacs.008"
| "camt.025"
| "camt.054"
| "unknown";
export interface FinMessage {
reference: string;
messageType: FinMessageType;
payload: string;
state: FinMessageState;
receivedAt: string;
updatedAt: string;
stateHistory: Array<{ state: FinMessageState; at: string }>;
webhookUrl?: string;
planId?: string;
endToEndId?: string;
}
export interface DispatchRequest {
messageType: FinMessageType;
payload: string;
planId?: string;
endToEndId?: string;
webhookUrl?: string;
}
export interface DispatchResponse {
reference: string;
state: FinMessageState;
ackedAt: string;
}
const store = new Map<string, FinMessage>();
// Deterministic lifecycle progression.
const ORDER: FinMessageState[] = [
"received",
"acknowledged",
"accepted",
"settled",
];
function nextState(current: FinMessageState): FinMessageState | null {
const idx = ORDER.indexOf(current);
if (idx < 0 || idx === ORDER.length - 1) return null;
return ORDER[idx + 1];
}
function genReference(): string {
return `FIN-${randomBytes(6).toString("hex").toUpperCase()}`;
}
export function finSignature(payload: string): string {
const secret = process.env.FIN_SANDBOX_SECRET || "fin-sandbox-dev-secret";
return createHmac("sha256", secret).update(payload).digest("hex");
}
export function recordDispatch(req: DispatchRequest): FinMessage {
const reference = genReference();
const now = new Date().toISOString();
const msg: FinMessage = {
reference,
messageType: req.messageType,
payload: req.payload,
state: "received",
receivedAt: now,
updatedAt: now,
stateHistory: [{ state: "received", at: now }],
webhookUrl: req.webhookUrl,
planId: req.planId,
endToEndId: req.endToEndId,
};
store.set(reference, msg);
return msg;
}
export async function advance(reference: string): Promise<FinMessage | null> {
const msg = store.get(reference);
if (!msg) return null;
const next = nextState(msg.state);
if (!next) return msg;
const at = new Date().toISOString();
msg.state = next;
msg.updatedAt = at;
msg.stateHistory.push({ state: next, at });
if (msg.webhookUrl) {
await emitWebhook(msg).catch(() => undefined);
}
return msg;
}
export function rejectMessage(
reference: string,
reason: string,
): FinMessage | null {
const msg = store.get(reference);
if (!msg) return null;
const at = new Date().toISOString();
msg.state = "rejected";
msg.updatedAt = at;
msg.stateHistory.push({ state: "rejected", at });
(msg as FinMessage & { rejectionReason?: string }).rejectionReason = reason;
return msg;
}
export function getMessage(reference: string): FinMessage | undefined {
return store.get(reference);
}
export function listMessages(filter?: { planId?: string }): FinMessage[] {
const all = Array.from(store.values());
if (!filter?.planId) return all;
return all.filter((m) => m.planId === filter.planId);
}
export function resetSandboxForTests(): void {
store.clear();
}
async function emitWebhook(msg: FinMessage): Promise<void> {
if (!msg.webhookUrl) return;
const body = JSON.stringify({
reference: msg.reference,
messageType: msg.messageType,
state: msg.state,
updatedAt: msg.updatedAt,
planId: msg.planId,
endToEndId: msg.endToEndId,
});
const signature = finSignature(body);
try {
await fetch(msg.webhookUrl, {
method: "POST",
headers: {
"content-type": "application/json",
"x-fin-sandbox-signature": signature,
},
body,
});
} catch {
// swallow — the sandbox is best-effort in dev
}
}
// ---------------------------------------------------------------------------
// HTTP router
// ---------------------------------------------------------------------------
export function buildSandboxRouter(): Router {
const r = Router();
r.use(express.json({ limit: "5mb" }));
r.post("/dispatch", (req: Request, res: Response) => {
const body = req.body as Partial<DispatchRequest>;
if (
!body ||
typeof body.payload !== "string" ||
typeof body.messageType !== "string"
) {
return res.status(400).json({
error: "messageType and payload are required",
});
}
const msg = recordDispatch({
messageType: body.messageType as FinMessageType,
payload: body.payload,
planId: body.planId,
endToEndId: body.endToEndId,
webhookUrl: body.webhookUrl,
});
const response: DispatchResponse = {
reference: msg.reference,
state: msg.state,
ackedAt: msg.updatedAt,
};
return res.status(202).json(response);
});
r.post("/advance/:reference", async (req: Request, res: Response) => {
const msg = await advance(req.params.reference);
if (!msg) return res.status(404).json({ error: "not found" });
return res.json(msg);
});
r.post("/reject/:reference", (req: Request, res: Response) => {
const reason =
typeof req.body?.reason === "string" ? req.body.reason : "rejected";
const msg = rejectMessage(req.params.reference, reason);
if (!msg) return res.status(404).json({ error: "not found" });
return res.json(msg);
});
r.get("/messages/:reference", (req: Request, res: Response) => {
const msg = getMessage(req.params.reference);
if (!msg) return res.status(404).json({ error: "not found" });
return res.json(msg);
});
r.get("/messages", (req: Request, res: Response) => {
const planId =
typeof req.query.planId === "string" ? req.query.planId : undefined;
return res.json({ messages: listMessages({ planId }) });
});
return r;
}
// ---------------------------------------------------------------------------
// Timer-driven auto-progress (optional; off by default in tests)
// ---------------------------------------------------------------------------
let tickTimer: NodeJS.Timeout | null = null;
export function startAutoProgress(intervalMs = 2_000): void {
stopAutoProgress();
tickTimer = setInterval(() => {
for (const msg of store.values()) {
if (msg.state !== "settled" && msg.state !== "rejected") {
void advance(msg.reference);
}
}
}, intervalMs);
// Allow the Node process to exit while this timer is pending.
if (typeof tickTimer.unref === "function") tickTimer.unref();
}
export function stopAutoProgress(): void {
if (tickTimer) {
clearInterval(tickTimer);
tickTimer = null;
}
}
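The deterministic lifecycle is just an ordered array plus an index lookup; "rejected" sits outside `ORDER`, so it (like "settled") advances no further. A self-contained sketch of that progression:

```typescript
type State = "received" | "acknowledged" | "accepted" | "settled" | "rejected";
const ORDER: State[] = ["received", "acknowledged", "accepted", "settled"];

function nextState(current: State): State | null {
  const idx = ORDER.indexOf(current);
  // Terminal states ("settled", and "rejected" which is not in ORDER)
  // return null: no further transitions.
  if (idx < 0 || idx === ORDER.length - 1) return null;
  return ORDER[idx + 1];
}

// Walk a fresh message to its terminal state.
const path: State[] = ["received"];
let cur: State = "received";
let next = nextState(cur);
while (next !== null) {
  path.push(next);
  cur = next;
  next = nextState(cur);
}
```

Each auto-progress tick performs exactly one such step per in-flight message, which is what makes the E2E lifecycle reproducible.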

View File

@@ -25,7 +25,12 @@ import { logger } from "../logging/logger";
import type { Plan } from "../types/plan";
const NOTARY_REGISTRY_ABI = [
"function registerPlan(bytes32 planId, tuple(uint8 stepType, address target, uint256 amount, bytes data)[] steps, address creator) external",
// Step tuple order must match IComboHandler.Step exactly:
// (StepType stepType, bytes data, address target, uint256 value)
// Any divergence changes the canonical signature and therefore the
// function selector — the encoded call would match no function on
// the contract and revert with no revert data.
"function registerPlan(bytes32 planId, tuple(uint8 stepType, bytes data, address target, uint256 value)[] steps, address creator) external",
"function finalizePlan(bytes32 planId, bool success) external",
"function getPlan(bytes32 planId) view returns (tuple(bytes32 planHash, address creator, uint256 registeredAt, uint256 finalizedAt, bool success, bytes32 receiptHash))",
"event PlanRegistered(bytes32 indexed planId, address indexed creator, bytes32 planHash)",
@@ -108,7 +113,13 @@ function getContract(cfg: NotaryConfig): {
if (cached && cached.cfg.contractAddress === cfg.contractAddress) {
return { contract: cached.contract, wallet: cached.wallet };
}
const provider = new ethers.JsonRpcProvider(cfg.rpcUrl);
// cacheTimeout=-1 disables the 250ms response cache — otherwise
// back-to-back anchor+finalize calls read a stale getTransactionCount
// and collide on nonce, particularly on fast (ganache/hardhat) chains.
const provider = new ethers.JsonRpcProvider(cfg.rpcUrl, cfg.chainId, {
staticNetwork: true,
cacheTimeout: -1,
});
const wallet = new ethers.Wallet(cfg.privateKey!, provider);
const contract = new ethers.Contract(
cfg.contractAddress!,

View File

@@ -0,0 +1,45 @@
/**
* Obligation-layer condition evaluator.
*
* Originally shipped as a self-contained subset of the PR P Rules
* Engine so the obligation layer could be merged independently. Now
* consolidated: this file re-exports the shared types and
* `evaluateCondition` from `services/rulesEngine.ts` and provides a
* thin compatibility wrapper for `resolvePath(path, context)` which
* historically took its arguments in the opposite order.
*
* Keeping this module as a named surface preserves existing imports
* under `services/obligations/evaluator` throughout the codebase and
* the test suite.
*/
export type {
Operator,
LeafCondition,
AndCondition,
OrCondition,
NotCondition,
Condition,
} from "../rulesEngine";
import { evaluateCondition as ruleEngineEvaluate, resolvePath as ruleEnginePath } from "../rulesEngine";
import type { Condition } from "../rulesEngine";
export function evaluateCondition(
condition: Condition,
context: Record<string, unknown>,
): boolean {
return ruleEngineEvaluate(condition, context);
}
/**
* Historical (path, context) signature retained for backward
* compatibility with call sites written before the evaluator was
* consolidated into the Rules Engine.
*/
export function resolvePath(
path: string,
context: Record<string, unknown>,
): unknown {
return ruleEnginePath(context, path);
}
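The only behavioral content here is the argument-order flip. A sketch with a hypothetical dotted-path resolver standing in for the Rules Engine's `resolvePath(context, path)`:

```typescript
// Hypothetical resolver in the Rules Engine order (context first).
function ruleEnginePath(
  context: Record<string, unknown>,
  path: string,
): unknown {
  return path.split(".").reduce<unknown>(
    (acc, key) =>
      acc && typeof acc === "object"
        ? (acc as Record<string, unknown>)[key]
        : undefined,
    context,
  );
}

// Historical (path, context) signature preserved for old call sites.
function resolvePath(path: string, context: Record<string, unknown>): unknown {
  return ruleEnginePath(context, path);
}

const ctx = { plan: { amount: 42 } };
```

Both orders resolve the same paths; the wrapper exists so pre-consolidation imports keep compiling unchanged.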

View File

@@ -0,0 +1,320 @@
/**
* Machine-form obligation layer — entry point.
*
* See ./types.ts for the architectural shape; this module exposes:
* - canonicalize / hashObligationTerms (deterministic identity)
* - validateObligationTerms (shape check)
 * - evaluateObligationTerms (run commit/abort/unwind clauses
 *   against a context via the PR P rules engine)
 * - buildIssueInstrumentObligation (helper that derives a sensible
 *   default obligation shape from a plan's instrument terms)
*/
import { createHash } from "crypto";
import { evaluateCondition } from "./evaluator";
import type { InstrumentTerms } from "../../types/plan";
import type {
AuthorizedParticipant,
Consideration,
EvaluationResult,
GoverningDocument,
ObligationClause,
ObligationEvaluation,
ObligationTerms,
} from "./types";
export * from "./types";
/**
* Deterministic canonical JSON encoding: object keys sorted
* lexicographically at every depth, arrays preserved, no whitespace.
*
* This is what `hashObligationTerms()` hashes, so two obligations
* with identical semantic content always hash to the same value
* regardless of key insertion order.
*/
export function canonicalize(value: unknown): string {
return JSON.stringify(sortValue(value));
}
function sortValue(v: unknown): unknown {
if (v === null || typeof v !== "object") return v;
if (Array.isArray(v)) return v.map((x) => sortValue(x));
const out: Record<string, unknown> = {};
for (const k of Object.keys(v as Record<string, unknown>).sort()) {
out[k] = sortValue((v as Record<string, unknown>)[k]);
}
return out;
}
/**
* SHA-256 of the canonical obligation terms, hex-encoded without
* 0x prefix. Matches the formatting convention used by
* `InstrumentTerms.templateHash`.
*/
export function hashObligationTerms(terms: ObligationTerms): string {
return createHash("sha256").update(canonicalize(terms)).digest("hex");
}
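The key-insertion-order invariance is worth seeing concretely. A self-contained sketch of the canonicalize-then-hash pipeline (same recursive key sort as above; `hashTerms` is a hypothetical stand-in for `hashObligationTerms`):

```typescript
import { createHash } from "crypto";

// Key-sorted JSON at every depth: key insertion order cannot affect the hash.
function canonicalize(value: unknown): string {
  return JSON.stringify(sortValue(value));
}
function sortValue(v: unknown): unknown {
  if (v === null || typeof v !== "object") return v;
  if (Array.isArray(v)) return v.map((x) => sortValue(x));
  const out: Record<string, unknown> = {};
  for (const k of Object.keys(v as Record<string, unknown>).sort()) {
    out[k] = sortValue((v as Record<string, unknown>)[k]);
  }
  return out;
}
function hashTerms(terms: unknown): string {
  return createHash("sha256").update(canonicalize(terms)).digest("hex");
}

const a = hashTerms({ payor: "A", payee: "B" });
const b = hashTerms({ payee: "B", payor: "A" }); // different key order
```

`a` and `b` are equal hex digests, while any semantic change (a different payee, say) produces a different hash.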
/**
* Shape validation. Returns a list of human-readable problems; empty
* list means the object conforms to `ObligationTerms`.
*
* Intentionally cheap (no JSON-Schema runtime) — the TypeScript type
* plus these assertions catch the bulk of real-world mistakes.
*/
export function validateObligationTerms(
input: unknown,
): { ok: boolean; errors: string[] } {
const errors: string[] = [];
if (!input || typeof input !== "object") {
return { ok: false, errors: ["obligation terms must be an object"] };
}
const t = input as Partial<ObligationTerms>;
if (t.version !== "1.0") errors.push("version must be \"1.0\"");
if (!t.consideration || typeof t.consideration !== "object") {
errors.push("consideration missing");
} else {
const c = t.consideration as Partial<Consideration>;
if (!c.payor) errors.push("consideration.payor required");
if (!c.payee) errors.push("consideration.payee required");
if (!c.currency || !/^[A-Z]{3}$/.test(c.currency))
errors.push("consideration.currency must be ISO-4217 (3 uppercase letters)");
if (typeof c.amount !== "number" || !(c.amount > 0))
errors.push("consideration.amount must be a positive number");
}
for (const arrKey of [
"validIssuance",
"validPayment",
"commit",
"abort",
"unwind",
] as const) {
const arr = t[arrKey];
if (!Array.isArray(arr)) {
errors.push(`${arrKey} must be an array`);
continue;
}
arr.forEach((clause, i) => {
if (!clause || typeof clause !== "object") {
errors.push(`${arrKey}[${i}] must be an object`);
return;
}
const c = clause as Partial<ObligationClause>;
if (!c.id) errors.push(`${arrKey}[${i}].id required`);
if (!c.description) errors.push(`${arrKey}[${i}].description required`);
if (!c.assert) errors.push(`${arrKey}[${i}].assert required`);
if (c.binds && !["instrument", "payment", "both"].includes(c.binds))
errors.push(`${arrKey}[${i}].binds must be instrument|payment|both`);
});
}
if (!Array.isArray(t.authorizedParticipants)) {
errors.push("authorizedParticipants must be an array");
} else {
t.authorizedParticipants.forEach((p, i) => {
const pp = p as Partial<AuthorizedParticipant>;
if (!pp.role) errors.push(`authorizedParticipants[${i}].role required`);
if (!pp.actorId)
errors.push(`authorizedParticipants[${i}].actorId required`);
});
}
if (!Array.isArray(t.governingDocuments) || t.governingDocuments.length === 0) {
errors.push("governingDocuments must be a non-empty array");
} else {
t.governingDocuments.forEach((d, i) => {
const dd = d as Partial<GoverningDocument>;
if (!dd.templateRef)
errors.push(`governingDocuments[${i}].templateRef required`);
if (!dd.templateHash || !/^[0-9a-fA-F]{64}$/.test(dd.templateHash))
errors.push(`governingDocuments[${i}].templateHash must be hex SHA-256`);
});
}
return { ok: errors.length === 0, errors };
}
/**
* Evaluate a set of obligation clauses against a live context.
*
* `context` typically contains the plan, execution state, event chain,
* and bank/DLT dispatch evidence — whatever the clauses assert against.
*
* A failure short-circuits nothing; all clauses are evaluated so the
* caller can surface the full list of unmet conditions (arch §12.2).
*/
export function evaluateClauses(
clauses: ObligationClause[],
context: Record<string, unknown>,
): ObligationEvaluation {
const results: EvaluationResult[] = clauses.map((clause) => {
let ok = false;
let failureReason: string | undefined;
try {
ok = evaluateCondition(clause.assert, context);
if (!ok) failureReason = "assert condition returned false";
} catch (err) {
ok = false;
failureReason =
err instanceof Error ? err.message : "unknown evaluator error";
}
return {
clauseId: clause.id,
description: clause.description,
ok,
...(failureReason ? { failureReason } : {}),
};
});
return { ok: results.every((r) => r.ok), results };
}
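The collect-don't-short-circuit pattern in `evaluateClauses` can be sketched in isolation. The clause predicates below are hypothetical stand-ins for the rules engine's `evaluateCondition`; the shape of the result mirrors `EvaluationResult`:

```typescript
// Every clause runs — passes, failures, and throws are all recorded —
// and the overall ok is the conjunction of the per-clause results.
type Clause = { id: string; check: (ctx: Record<string, unknown>) => boolean };
type ClauseResult = { clauseId: string; ok: boolean; failureReason?: string };

function evaluateAll(clauses: Clause[], ctx: Record<string, unknown>) {
  const results: ClauseResult[] = clauses.map((c) => {
    let ok = false;
    let failureReason: string | undefined;
    try {
      ok = c.check(ctx);
      if (!ok) failureReason = "assert condition returned false";
    } catch (err) {
      failureReason = err instanceof Error ? err.message : "unknown evaluator error";
    }
    return { clauseId: c.id, ok, ...(failureReason ? { failureReason } : {}) };
  });
  return { ok: results.every((r) => r.ok), results };
}

const ev = evaluateAll(
  [
    { id: "a", check: () => true },
    { id: "b", check: () => false },
    {
      id: "c",
      check: () => {
        throw new Error("boom");
      },
    },
  ],
  {},
);
console.log(ev.ok, ev.results.length); // false 3
```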
/**
* Evaluate specifically the commit clauses. Convenience for the
* transition coordinator (arch §9.2).
*/
export function evaluateCommit(
terms: ObligationTerms,
context: Record<string, unknown>,
): ObligationEvaluation {
return evaluateClauses(terms.commit, context);
}
/**
* Evaluate specifically the abort clauses (arch §9.3). A true result
* here means the transaction MUST abort.
*/
export function evaluateAbort(
terms: ObligationTerms,
context: Record<string, unknown>,
): ObligationEvaluation {
const ev = evaluateClauses(terms.abort, context);
// Semantically an abort clause that *asserts true* means the abort
// condition has been hit, so `ok=true` in the evaluation result ==
// "abort required". Callers consume this as a boolean trigger.
return ev;
}
/**
* Derive a default obligation-terms object from an issueInstrument
* step's instrument terms. Useful for plans that haven't supplied an
* explicit obligation block — gives them a reasonable starting point
* that matches the template's commit/abort semantics.
*/
export function buildIssueInstrumentObligation(input: {
instrument: InstrumentTerms;
payor: string;
payee: string;
authorizedParticipants: AuthorizedParticipant[];
governingDocumentTitle?: string;
}): ObligationTerms {
const { instrument, payor, payee, authorizedParticipants } = input;
const commit: ObligationClause[] = [
{
id: "commit.dlt_tx_hash",
description: "DLT anchor transaction hash is present and valid",
binds: "both",
assert: {
path: "dlt.tx_hash",
op: "matches",
value: "^0x[0-9a-fA-F]{64}$",
},
},
{
id: "commit.bank_iso_message_id",
description: "Bank leg has produced an ISO-20022 message id",
binds: "instrument",
assert: { path: "bank.iso_message_id", op: "exists" },
},
{
id: "commit.state_is_validating",
description: "Transaction must be in VALIDATING when commit fires",
binds: "both",
assert: { path: "state", op: "eq", value: "VALIDATING" },
},
];
const abort: ObligationClause[] = [
{
id: "abort.exception_raised",
description: "At least one active exception blocks commit",
binds: "both",
assert: { path: "exceptions.active", op: "length_gte", value: 1 },
},
];
const unwind: ObligationClause[] = [
{
id: "unwind.payment_failed_only",
description:
"Unwind applies only when the payment leg failed AFTER the "
+ "instrument was dispatched (MT760 is irrevocable under UCP 600).",
binds: "payment",
assert: {
all: [
{ path: "instrument.dispatched", op: "eq", value: true },
{ path: "payment.failed", op: "eq", value: true },
],
},
},
];
const validIssuance: ObligationClause[] = [
{
id: "issuance.template_hash_matches",
description: "Dispatched instrument text hashes to the agreed template",
binds: "instrument",
assert: {
path: "instrument.template_hash",
op: "eq",
value: instrument.templateHash,
},
},
];
const validPayment: ObligationClause[] = [
{
id: "payment.amount_matches",
description: "Payment amount equals the instrument face value",
binds: "payment",
assert: { path: "payment.amount", op: "eq", value: instrument.amount },
},
{
id: "payment.currency_matches",
description: "Payment currency equals the instrument currency",
binds: "payment",
assert: { path: "payment.currency", op: "eq", value: instrument.currency },
},
];
return {
version: "1.0",
consideration: {
payor,
payee,
currency: instrument.currency,
amount: instrument.amount,
},
validIssuance,
validPayment,
commit,
abort,
unwind,
authorizedParticipants,
governingDocuments: [
{
templateRef: instrument.templateRef,
templateHash: instrument.templateHash,
title: input.governingDocumentTitle,
governingLaw: instrument.governingLaw,
},
],
};
}

View File

@@ -0,0 +1,135 @@
/**
* Machine-form obligation layer (gap-analysis v2 §4.1 partial).
*
* Architecture §4.1 "Legal / Obligation Layer" describes what the
* transaction's terms must express: consideration, commit conditions,
* abort conditions, unwind conditions, authorized-participant matrix,
* and a reference to governing documents.
*
* Until now a Plan only stored a `templateHash` — a hash reference
* to an off-chain text. That satisfies tamper-evidence but is not
* machine-enforceable: the orchestrator can't tell whether a given
* execution context *satisfies* the terms without a human reading
* the underlying PDF.
*
* This module makes the obligation layer first-class data:
*
 * - Strongly typed shapes for the architectural sub-objects
 *   (consideration, validIssuance, validPayment, commit, abort,
 *   unwind, authorizedParticipants, governingDocuments).
* - Canonicalisation + SHA-256 hash (deterministic, replayable).
* - Executable assertions built on the PR P Rules Engine DSL so
* commit/abort/unwind conditions can be checked automatically
* against a live context.
*
* Binds to the existing `InstrumentTerms.templateHash` field: an
* ObligationTerms instance records the governing-document hash as
* one of its `governingDocuments[]` entries, closing the loop from
* "which document governs this plan" to "what does that document
* require, expressed as machine-checkable predicates".
*/
import type { Condition } from "./evaluator";
/**
* Commercial and legal meaning of the transaction (arch §4.1).
*/
export interface Consideration {
/** Who pays and what. */
payor: string;
payee: string;
/** ISO-4217 currency code. */
currency: string;
/** Positive amount in major units (e.g. 100.00 USD = 100). */
amount: number;
/** Optional free-form description of the consideration. */
description?: string;
}
/**
* Role entry on the authorized-participant matrix. Roles match the
* SoD set used by middleware/apiKeyAuth (PR M): coordinator, approver,
* releaser, validator, exception_manager, operator.
*/
export interface AuthorizedParticipant {
role:
| "coordinator"
| "approver"
| "releaser"
| "validator"
| "exception_manager"
| "operator";
/** Free-form identifier — an actor id, API-key id, or wallet address. */
actorId: string;
/** Optional display name. */
displayName?: string;
}
/**
* Governing-document reference: template id + integrity hash of the
* agreed text (see InstrumentTerms.templateHash).
*/
export interface GoverningDocument {
/** Stable template identifier (e.g. "emirates-islamic-sblc-v3"). */
templateRef: string;
/** Hex SHA-256 of the canonical agreed text, without 0x prefix. */
templateHash: string;
/** Optional human-readable title. */
title?: string;
/** Optional ruleset the template is governed under. */
governingLaw?: string;
}
/**
* A single machine-enforceable clause. The `assert` field is a
* rulesEngine Condition so the obligation layer can reuse the
* evaluator from PR P.
*/
export interface ObligationClause {
id: string;
description: string;
/** Rules-engine condition that must hold for the clause to be satisfied. */
assert: Condition;
/** Explicitly surface which side of the transaction the clause binds. */
binds: "instrument" | "payment" | "both";
}
/**
* Top-level obligation-terms object.
*
* Canonicalisation:
* - Keys are sorted lexicographically via `canonicalize()`.
* - `terms_hash` = SHA-256 of the canonical JSON string.
*
* The hash is the identity of the obligation: two plans with the
* same hash have identical machine-enforceable terms.
*/
export interface ObligationTerms {
/** Schema version — bump on any breaking shape change. */
version: "1.0";
consideration: Consideration;
/** Clauses that define what "valid issuance" means (arch §4.1). */
validIssuance: ObligationClause[];
/** Clauses that define what "valid payment" means (arch §4.1). */
validPayment: ObligationClause[];
/** Commit criteria (arch §9.2). */
commit: ObligationClause[];
/** Abort criteria (arch §9.3). */
abort: ObligationClause[];
/** Unwind procedures (arch §8 UNWIND_PENDING). */
unwind: ObligationClause[];
authorizedParticipants: AuthorizedParticipant[];
governingDocuments: GoverningDocument[];
}
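The canonicalisation rule in the doc comment above (sorted keys, then SHA-256 over the canonical JSON string) can be sketched as follows. The `canonicalize` helper here is a hypothetical re-statement — the real module's implementation lives elsewhere — but it shows why key order stops mattering for the hash:

```typescript
import { createHash } from "crypto";

// Recursively sort object keys so semantically identical terms always
// serialise to the same string, regardless of insertion order.
function canonicalize(v: unknown): string {
  if (Array.isArray(v)) return "[" + v.map(canonicalize).join(",") + "]";
  if (v && typeof v === "object") {
    const obj = v as Record<string, unknown>;
    const entries = Object.keys(obj)
      .sort()
      .map((k) => JSON.stringify(k) + ":" + canonicalize(obj[k]));
    return "{" + entries.join(",") + "}";
  }
  return JSON.stringify(v);
}

const a = canonicalize({ amount: 100, currency: "USD" });
const b = canonicalize({ currency: "USD", amount: 100 });
console.log(a === b); // true — key order no longer matters
const termsHash = createHash("sha256").update(a, "utf8").digest("hex");
console.log(termsHash.length); // 64
```

Two plans whose terms hash identically under this scheme carry identical machine-enforceable obligations, which is exactly the identity property the interface comment claims.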
export interface EvaluationResult {
clauseId: string;
description: string;
ok: boolean;
failureReason?: string;
}
export interface ObligationEvaluation {
ok: boolean;
results: EvaluationResult[];
}

View File

@@ -0,0 +1,308 @@
/**
* Pluggable Rules Engine (arch §5.2 Rules Engine; gap v2 §5.2 partial).
*
* Before this PR, business rules were hardcoded at the call sites
* (e.g. "plan must have a pay step" baked into iso20022.ts, SoD
* matrix hard-coded in transactionState.ts). This module introduces
* a minimal, declarative JSON DSL so that ruleSets can be loaded
* from env (RULES_FILE) or swapped per-environment.
*
* Design principles
* -----------------
* - No eval. The evaluator is a small recursive switch over a
* closed operator set — no runtime code injection.
* - Pure, deterministic, side-effect free. Evaluation order is
* explicit so the engine can be reasoned about and replayed.
* - Context is a flat name → value map. Callers project whatever
* shape they need ({plan, state, compliance, participants}).
* - Failures are collected, not thrown. The caller decides whether
* a single failure aborts, or whether to accumulate and report.
*/
import { readFileSync } from "fs";
/** Supported primitive operators. */
export type Operator =
| "eq"
| "neq"
| "gt"
| "gte"
| "lt"
| "lte"
| "in"
| "not_in"
| "exists"
| "matches" // regex
| "length_gte"
| "length_lte";
/** Leaf condition — references a context path against a literal. */
export interface LeafCondition {
path: string; // dotted path into the context object
op: Operator;
value?: unknown; // not required for `exists`
/** Optional human label for failure messages. */
message?: string;
}
/** Combinator — AND / OR / NOT over child conditions. */
export interface AndCondition {
all: Condition[];
message?: string;
}
export interface OrCondition {
any: Condition[];
message?: string;
}
export interface NotCondition {
not: Condition;
message?: string;
}
export type Condition = LeafCondition | AndCondition | OrCondition | NotCondition;
export interface Rule {
id: string;
description?: string;
when?: Condition; // precondition — rule only fires when `when` is true
assert: Condition; // the rule passes when `assert` evaluates true
/** Optional severity for reporting: "error" (default) blocks, "warn" does not. */
severity?: "error" | "warn";
}
export interface RuleSet {
id: string;
version?: string;
rules: Rule[];
}
export interface RuleFailure {
ruleId: string;
severity: "error" | "warn";
message: string;
path?: string;
}
export interface EvaluationResult {
ok: boolean;
failures: RuleFailure[];
}
/* -----------------------------------------------------------------
* Dotted-path resolver. Supports a.b.c and a.b[0].c.
* --------------------------------------------------------------- */
export function resolvePath(ctx: unknown, path: string): unknown {
return getPath(ctx, path);
}
function getPath(ctx: unknown, path: string): unknown {
if (!path) return ctx;
const parts = path
.replace(/\[(\d+)\]/g, ".$1")
.split(".")
.filter(Boolean);
let cur: unknown = ctx;
for (const p of parts) {
if (cur === null || cur === undefined) return undefined;
if (typeof cur === "object") {
cur = (cur as Record<string, unknown>)[p];
} else {
return undefined;
}
}
return cur;
}
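The resolver's key trick is normalising `a.b[0].c` into `a.b.0.c` before splitting, so array indices are walked like ordinary keys. A self-contained restatement (renamed `resolveDotted` to avoid shadowing the module's own `getPath`):

```typescript
// Bracket indices are rewritten to dotted segments, then the context
// is walked one segment at a time; any missing link yields undefined
// rather than throwing.
function resolveDotted(ctx: unknown, path: string): unknown {
  if (!path) return ctx;
  const parts = path
    .replace(/\[(\d+)\]/g, ".$1")
    .split(".")
    .filter(Boolean);
  let cur: unknown = ctx;
  for (const p of parts) {
    if (cur === null || cur === undefined) return undefined;
    if (typeof cur === "object") {
      cur = (cur as Record<string, unknown>)[p];
    } else {
      return undefined;
    }
  }
  return cur;
}

const ctx = { plan: { steps: [{ type: "pay" }] } };
console.log(resolveDotted(ctx, "plan.steps[0].type")); // pay
console.log(resolveDotted(ctx, "plan.missing.deep")); // undefined
```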
/* -----------------------------------------------------------------
* Operator evaluation. Pure — no throws.
* --------------------------------------------------------------- */
function evalOp(op: Operator, actual: unknown, expected: unknown): boolean {
switch (op) {
case "eq":
return actual === expected;
case "neq":
return actual !== expected;
case "gt":
return typeof actual === "number" && typeof expected === "number" && actual > expected;
case "gte":
return typeof actual === "number" && typeof expected === "number" && actual >= expected;
case "lt":
return typeof actual === "number" && typeof expected === "number" && actual < expected;
case "lte":
return typeof actual === "number" && typeof expected === "number" && actual <= expected;
case "in":
return Array.isArray(expected) && expected.includes(actual as never);
case "not_in":
return Array.isArray(expected) && !expected.includes(actual as never);
case "exists":
return actual !== undefined && actual !== null;
case "matches":
if (typeof actual !== "string" || typeof expected !== "string") return false;
try {
return new RegExp(expected).test(actual);
} catch {
return false;
}
case "length_gte":
if (!Array.isArray(actual) && typeof actual !== "string") return false;
return (actual as { length: number }).length >= (expected as number);
case "length_lte":
if (!Array.isArray(actual) && typeof actual !== "string") return false;
return (actual as { length: number }).length <= (expected as number);
default:
return false;
}
}
function isLeaf(c: Condition): c is LeafCondition {
return (c as LeafCondition).op !== undefined && (c as LeafCondition).path !== undefined;
}
export function evaluateCondition(
condition: Condition,
context: Record<string, unknown>,
): boolean {
if (isLeaf(condition)) {
const actual = getPath(context, condition.path);
return evalOp(condition.op, actual, condition.value);
}
if ("all" in condition) {
return condition.all.every((c) => evaluateCondition(c, context));
}
if ("any" in condition) {
return condition.any.some((c) => evaluateCondition(c, context));
}
if ("not" in condition) {
return !evaluateCondition(condition.not, context);
}
return false;
}
/* -----------------------------------------------------------------
* Public evaluate(): runs the full rule set and collects failures.
* --------------------------------------------------------------- */
export function evaluate(
ruleSet: RuleSet,
context: Record<string, unknown>,
): EvaluationResult {
const failures: RuleFailure[] = [];
for (const rule of ruleSet.rules) {
if (rule.when && !evaluateCondition(rule.when, context)) continue;
const passed = evaluateCondition(rule.assert, context);
if (!passed) {
failures.push({
ruleId: rule.id,
severity: rule.severity ?? "error",
message: rule.description ?? `rule ${rule.id} failed`,
path: isLeaf(rule.assert) ? rule.assert.path : undefined,
});
}
}
const blocking = failures.filter((f) => f.severity === "error");
return { ok: blocking.length === 0, failures };
}
/* -----------------------------------------------------------------
* Built-in rule sets. These mirror the pre-DSL hardcoded checks so
* callers can migrate incrementally.
* --------------------------------------------------------------- */
/** Preconditions check — arch §8 PRECONDITIONS_PENDING -> READY_FOR_PREPARE. */
export const BUILTIN_PRECONDITIONS: RuleSet = {
id: "preconditions.builtin",
version: "1",
rules: [
{
id: "plan.exists",
description: "plan must be present on the context",
assert: { path: "plan", op: "exists" },
},
{
id: "plan.steps.non_empty",
description: "plan must contain at least one step",
assert: { path: "plan.steps", op: "length_gte", value: 1 },
},
    {
      id: "plan.pay_step_present",
      description: "plan must contain at least one pay step (ISO-20022 envelope)",
      // The DSL has no array quantifier, so only the first four step
      // slots are probed; a pay step beyond index 3 is not detected.
assert: {
any: [
{ path: "plan.steps[0].type", op: "eq", value: "pay" },
{ path: "plan.steps[1].type", op: "eq", value: "pay" },
{ path: "plan.steps[2].type", op: "eq", value: "pay" },
{ path: "plan.steps[3].type", op: "eq", value: "pay" },
],
},
},
{
id: "participants.at_least_one",
description: "participant registry must not be empty",
assert: { path: "participants", op: "length_gte", value: 1 },
},
{
id: "compliance.kyc_ok",
description: "compliance KYC status must be ok",
when: { path: "compliance", op: "exists" },
assert: { path: "compliance.kyc", op: "eq", value: "ok" },
},
],
};
/** Commit rule — arch §9.2. */
export const BUILTIN_COMMIT: RuleSet = {
id: "commit.builtin",
version: "1",
rules: [
{
id: "dlt.tx_hash",
description: "DLT leg must produce a 0x + 64-hex tx hash",
assert: { path: "dlt.txHash", op: "matches", value: "^0x[0-9a-fA-F]{64}$" },
},
{
id: "bank.iso_message_id",
description: "bank leg must produce a non-empty ISO message id",
assert: { path: "bank.isoMessageId", op: "exists" },
},
{
id: "state.is_validating",
description: "commit is only valid from VALIDATING",
assert: { path: "state", op: "eq", value: "VALIDATING" },
},
{
id: "no_exception_holds",
description: "no exception may be outstanding",
assert: { path: "exceptions.active", op: "length_lte", value: 0 },
},
],
};
/* -----------------------------------------------------------------
* Loader: RULES_FILE env points at a JSON file containing a map
* {ruleSetId: RuleSet}. Falls back to built-ins on any error.
* --------------------------------------------------------------- */
let cachedOverrides: Record<string, RuleSet> | undefined;
export function getRuleSet(id: string): RuleSet {
if (cachedOverrides === undefined) {
cachedOverrides = {};
const path = process.env.RULES_FILE;
if (path) {
try {
const raw = readFileSync(path, "utf8");
const parsed = JSON.parse(raw) as Record<string, RuleSet>;
if (parsed && typeof parsed === "object") cachedOverrides = parsed;
} catch {
// leave empty — silent fall-through to built-ins
}
}
}
if (cachedOverrides[id]) return cachedOverrides[id];
if (id === BUILTIN_PRECONDITIONS.id) return BUILTIN_PRECONDITIONS;
if (id === BUILTIN_COMMIT.id) return BUILTIN_COMMIT;
return { id, rules: [] };
}
export function __resetRulesCacheForTests(): void {
cachedOverrides = undefined;
}
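A `RULES_FILE` override is a JSON map keyed by rule-set id, each value a full `RuleSet`. A hypothetical override that replaces the built-in preconditions set (ids and values are illustrative, not shipped defaults):

```json
{
  "preconditions.builtin": {
    "id": "preconditions.builtin",
    "version": "2",
    "rules": [
      {
        "id": "compliance.kyc_ok",
        "description": "compliance KYC status must be ok",
        "assert": { "path": "compliance.kyc", "op": "eq", "value": "ok" }
      },
      {
        "id": "plan.steps.non_empty",
        "description": "plan must contain at least one step",
        "severity": "warn",
        "assert": { "path": "plan.steps", "op": "length_gte", "value": 1 }
      }
    ]
  }
}
```

Any parse or read failure leaves `cachedOverrides` empty and `getRuleSet` silently falls back to the built-ins, so a malformed override file cannot take the engine down.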

View File

@@ -85,63 +85,3 @@ export const ROLE_FOR_TRANSITION: Readonly<Record<string, ActorRole>> = {
"VALIDATING->COMMITTED": "approver",
"ABORTED->UNWIND_PENDING": "exception_manager",
};
/**
* Per-state phase timeouts (arch §12.1 timing exceptions, gap v2 §7.6 / §10.7).
*
* Each state has its own watchdog: if a plan sits in `state` for longer than
* `PHASE_TIMEOUTS[state]` ms, the exception manager raises a timing exception
* from arch §12.1 and transitions the plan toward ABORTED (or whatever the
* policy dictates for that state).
*
 * CLOSED is terminal and has no timeout — it is end-of-lifecycle and
 * not supposed to be aged out. COMMITTED still carries a short
 * watchdog while it waits to be CLOSED (see the table).
*
* Values are defaults. Each entry can be individually overridden via env:
* PHASE_TIMEOUT_<STATE>=<ms>
* e.g. PHASE_TIMEOUT_EXECUTING=180000. Unit: milliseconds.
*
* Rationale for defaults:
* DRAFT — long; plans can sit in draft for days.
* INITIATED — short; identity + terms hashing is deterministic.
* PRECONDITIONS_PENDING — long; human KYC / control approvals.
* READY_FOR_PREPARE — short; just awaits programmatic prepare.
* PREPARED — medium; both legs confirm readiness.
* EXECUTING — medium; dispatch timeouts from arch §12.1.
* PARTIALLY_EXECUTED — medium; waits for the lagging leg.
* VALIDATING — medium; reconciliation + ack/settle evidence.
* ABORTED — short; decision of unwind-vs-close should be prompt.
* UNWIND_PENDING — long; recovery procedures can be slow/manual.
*/
export const DEFAULT_PHASE_TIMEOUTS: Readonly<Record<TransactionState, number | null>> = {
DRAFT: 24 * 60 * 60 * 1000, // 24 h
INITIATED: 5 * 60 * 1000, // 5 min
PRECONDITIONS_PENDING: 4 * 60 * 60 * 1000, // 4 h
READY_FOR_PREPARE: 15 * 60 * 1000, // 15 min
PREPARED: 30 * 60 * 1000, // 30 min
EXECUTING: 10 * 60 * 1000, // 10 min (dispatch_timeout §12.1)
PARTIALLY_EXECUTED: 15 * 60 * 1000, // 15 min (settlement_timeout §12.1)
VALIDATING: 10 * 60 * 1000, // 10 min
COMMITTED: 60 * 60 * 1000, // 1 h (waiting to be CLOSED)
ABORTED: 10 * 60 * 1000, // 10 min
UNWIND_PENDING: 12 * 60 * 60 * 1000, // 12 h
CLOSED: null, // terminal
};
/**
* Read the effective timeout (ms) for a given state, honouring per-state
* env overrides (`PHASE_TIMEOUT_<STATE>`). Returns `null` when the state
* has no timeout (terminal / explicitly disabled via override of "0").
*/
export function getPhaseTimeoutMs(state: TransactionState): number | null {
const override = process.env[`PHASE_TIMEOUT_${state}`];
if (override !== undefined) {
const parsed = Number(override);
if (!Number.isFinite(parsed) || parsed < 0) {
// Fall through to default when env value is invalid.
return DEFAULT_PHASE_TIMEOUTS[state];
}
return parsed === 0 ? null : parsed;
}
return DEFAULT_PHASE_TIMEOUTS[state];
}
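The override parsing above has three distinct outcomes: a positive number is used as-is, `"0"` disables the watchdog (`null`), and anything non-numeric or negative falls back to the default. A standalone sketch of just that decision (the helper name and values are illustrative):

```typescript
// Mirrors getPhaseTimeoutMs's override handling, with the env lookup
// factored out so the parsing rules can be exercised directly.
function effectiveTimeout(
  raw: string | undefined,
  fallback: number | null,
): number | null {
  if (raw === undefined) return fallback;
  const parsed = Number(raw);
  // Invalid or negative override → ignore it, keep the default.
  if (!Number.isFinite(parsed) || parsed < 0) return fallback;
  // Explicit "0" means "no watchdog for this state".
  return parsed === 0 ? null : parsed;
}

console.log(effectiveTimeout("180000", 600_000)); // 180000
console.log(effectiveTimeout("0", 600_000)); // null
console.log(effectiveTimeout("abc", 600_000)); // 600000
```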

View File

@@ -0,0 +1,119 @@
/**
* Unit tests for the EXT-* external-dependency blocker registry.
* Headless — no network, no UI.
*/
import {
EXT_BLOCKER_IDS,
BLOCKER_DETAILS,
evaluateBlockers,
activeBlockers,
logBlockerStatusAtBoot,
} from "../../src/config/externalBlockers";
describe("externalBlockers registry", () => {
it("exposes exactly the 7 blocker IDs the proxmox checker tracks", () => {
expect(EXT_BLOCKER_IDS).toEqual([
"EXT-DBIS-CORE",
"EXT-CC-PAYMENT-ADAPTERS",
"EXT-CC-AUDIT-LEDGER",
"EXT-CC-SHARED-EVENTS",
"EXT-CC-SHARED-SCHEMAS",
"EXT-FIN-GATEWAY",
"EXT-CHAIN138-CI-RPC",
]);
});
it("has a detail record for every id", () => {
for (const id of EXT_BLOCKER_IDS) {
expect(BLOCKER_DETAILS[id]).toBeDefined();
expect(BLOCKER_DETAILS[id].id).toBe(id);
expect(BLOCKER_DETAILS[id].title.length).toBeGreaterThan(0);
expect(BLOCKER_DETAILS[id].description.length).toBeGreaterThan(0);
}
});
});
describe("evaluateBlockers()", () => {
it("marks everything active on an empty env", () => {
const records = evaluateBlockers({});
expect(records).toHaveLength(EXT_BLOCKER_IDS.length);
expect(records.every((r) => r.status === "active")).toBe(true);
});
it("resolves EXT-DBIS-CORE when DBIS_CORE_URL is set", () => {
const records = evaluateBlockers({ DBIS_CORE_URL: "http://x.test" });
const rec = records.find((r) => r.id === "EXT-DBIS-CORE");
expect(rec?.status).toBe("resolved");
expect(rec?.resolvedVia).toBe("DBIS_CORE_URL");
});
it("resolves EXT-FIN-GATEWAY when FIN_SANDBOX_URL is set", () => {
const records = evaluateBlockers({ FIN_SANDBOX_URL: "http://fin.test" });
expect(records.find((r) => r.id === "EXT-FIN-GATEWAY")?.status).toBe("resolved");
});
it("resolves EXT-CHAIN138-CI-RPC when CHAIN_138_RPC_URL is set", () => {
const records = evaluateBlockers({
CHAIN_138_RPC_URL: "https://rpc.public-0138.defi-oracle.io",
});
expect(records.find((r) => r.id === "EXT-CHAIN138-CI-RPC")?.status).toBe("resolved");
});
it("leaves cc-* scaffold blockers active regardless of env", () => {
const records = evaluateBlockers({
DBIS_CORE_URL: "http://x",
FIN_SANDBOX_URL: "http://y",
CHAIN_138_RPC_URL: "http://z",
});
const scaffoldIds = [
"EXT-CC-PAYMENT-ADAPTERS",
"EXT-CC-AUDIT-LEDGER",
"EXT-CC-SHARED-EVENTS",
"EXT-CC-SHARED-SCHEMAS",
];
for (const id of scaffoldIds) {
expect(records.find((r) => r.id === id)?.status).toBe("active");
}
});
it("treats empty-string env var as unset (not resolved)", () => {
const records = evaluateBlockers({ DBIS_CORE_URL: "" });
expect(records.find((r) => r.id === "EXT-DBIS-CORE")?.status).toBe("active");
});
});
describe("activeBlockers()", () => {
it("returns 7 when env is empty", () => {
expect(activeBlockers({})).toHaveLength(7);
});
it("returns 6 when Chain-138 RPC is resolved", () => {
const ids = activeBlockers({
CHAIN_138_RPC_URL: "https://rpc.public-0138.defi-oracle.io",
});
expect(ids).not.toContain("EXT-CHAIN138-CI-RPC");
expect(ids).toHaveLength(6);
});
});
describe("logBlockerStatusAtBoot()", () => {
it("emits a single summary with active + resolved counts", () => {
const calls: Array<{ obj: Record<string, unknown>; msg: string }> = [];
const fakeLogger = {
info: (obj: Record<string, unknown>, msg: string) => calls.push({ obj, msg }),
};
const prev = process.env.CHAIN_138_RPC_URL;
process.env.CHAIN_138_RPC_URL = "https://rpc.public-0138.defi-oracle.io";
try {
logBlockerStatusAtBoot(fakeLogger);
} finally {
if (prev === undefined) delete process.env.CHAIN_138_RPC_URL;
else process.env.CHAIN_138_RPC_URL = prev;
}
expect(calls).toHaveLength(1);
expect(calls[0].msg).toMatch(/active,.*resolved/);
expect((calls[0].obj.activeCount as number) + (calls[0].obj.resolvedCount as number)).toBe(7);
expect(calls[0].obj.resolvedCount).toBeGreaterThanOrEqual(1);
});
});

View File

@@ -0,0 +1,163 @@
/**
* Helper: compile contracts/NotaryRegistry.sol + its two interfaces
* + @openzeppelin/contracts Ownable using solc-js in-process.
*
* Keeps the E2E suite self-contained — no dependence on a prior
* `hardhat compile` step, no new workspace wiring.
*/
import { readFileSync } from "fs";
import { dirname, join, resolve } from "path";
// eslint-disable-next-line @typescript-eslint/no-require-imports, @typescript-eslint/no-var-requires
const solc = require("solc");
const REPO_ROOT = resolve(__dirname, "..", "..", "..", "..");
const CONTRACTS_ROOT = join(REPO_ROOT, "contracts");
const OZ_ROOT = join(CONTRACTS_ROOT, "node_modules", "@openzeppelin");
// ethers v6 accepts any JsonFragment-shaped array here. Declaring the
// element type loosely keeps us decoupled from ethers' private type
// exports while still being strictly typed against `unknown`.
export type AbiFragment = Record<string, unknown>;
export interface CompiledArtifact {
abi: AbiFragment[];
bytecode: string;
}
interface SolcSource {
content: string;
}
interface SolcInput {
language: "Solidity";
sources: Record<string, SolcSource>;
settings: {
optimizer: { enabled: true; runs: number };
outputSelection: Record<string, Record<string, string[]>>;
};
}
interface SolcOutput {
errors?: Array<{ severity: "error" | "warning"; formattedMessage: string }>;
contracts: Record<
string,
Record<string, { abi: AbiFragment[]; evm: { bytecode: { object: string } } }>
>;
}
function readFromRoots(rel: string, roots: string[]): string {
for (const root of roots) {
try {
return readFileSync(join(root, rel), "utf8");
} catch {
// try next root
}
}
throw new Error(`Could not resolve import ${rel} against roots ${roots.join(",")}`);
}
function findImports(requestedPath: string): { contents: string } | { error: string } {
// @openzeppelin/... → contracts/node_modules/@openzeppelin/...
if (requestedPath.startsWith("@openzeppelin/")) {
const rel = requestedPath.replace("@openzeppelin/", "");
try {
return { contents: readFileSync(join(OZ_ROOT, rel), "utf8") };
} catch (e) {
return { error: `Could not read ${requestedPath}: ${(e as Error).message}` };
}
}
// Local ./interfaces/... paths resolve against contracts/
try {
return { contents: readFromRoots(requestedPath, [CONTRACTS_ROOT]) };
} catch (e) {
return { error: (e as Error).message };
}
}
/**
* Recursively pull in all `import "..."` references starting from
* NotaryRegistry.sol and return the full `sources` object solc needs.
*/
function collectSources(entryPath: string): Record<string, SolcSource> {
const sources: Record<string, SolcSource> = {};
const stack: string[] = [entryPath];
const seen = new Set<string>();
while (stack.length > 0) {
const cur = stack.pop()!;
if (seen.has(cur)) continue;
seen.add(cur);
let content: string;
if (cur === entryPath) {
content = readFileSync(join(CONTRACTS_ROOT, "NotaryRegistry.sol"), "utf8");
} else {
const resolved = findImports(cur);
if ("error" in resolved) {
throw new Error(`Unresolved import: ${cur} (${resolved.error})`);
}
content = resolved.contents;
}
sources[cur] = { content };
// Parse `import "..."` statements. Interfaces may use relative paths
// that we normalise back into keys solc expects.
const importRe = /^\s*import\s+(?:\{[^}]+\}\s+from\s+)?"([^"]+)";/gm;
let m: RegExpExecArray | null;
while ((m = importRe.exec(content)) !== null) {
const rawImport = m[1];
let normalised: string;
if (rawImport.startsWith("@openzeppelin/")) {
normalised = rawImport;
} else if (rawImport.startsWith("./") || rawImport.startsWith("../")) {
// Relative import — resolve against the dir of `cur`.
const curDir = cur.includes("/") ? dirname(cur) : ".";
const joined = join(curDir, rawImport);
normalised = joined.startsWith(".") ? joined.slice(2) : joined;
} else {
normalised = rawImport;
}
if (!seen.has(normalised)) stack.push(normalised);
}
}
return sources;
}
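The normalisation branch inside `collectSources` is the subtle part: relative imports resolve against the importing file's directory, then the leading `./` is stripped so the key matches what solc's import callback will ask for. A sketch of just that step, extracted into a hypothetical helper (POSIX path separators assumed, as in CI):

```typescript
import { dirname, join } from "path";

// "@openzeppelin/..." keys pass through untouched; "./x" and "../x"
// are joined against the importer's directory and normalised.
function normaliseImport(cur: string, rawImport: string): string {
  if (rawImport.startsWith("@openzeppelin/")) return rawImport;
  if (rawImport.startsWith("./") || rawImport.startsWith("../")) {
    const curDir = cur.includes("/") ? dirname(cur) : ".";
    const joined = join(curDir, rawImport);
    return joined.startsWith(".") ? joined.slice(2) : joined;
  }
  return rawImport;
}

console.log(normaliseImport("NotaryRegistry.sol", "./interfaces/INotary.sol"));
// interfaces/INotary.sol
console.log(normaliseImport("interfaces/INotary.sol", "../lib/Types.sol"));
// lib/Types.sol
```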
export function compileNotaryRegistry(): CompiledArtifact {
const entry = "NotaryRegistry.sol";
const sources = collectSources(entry);
const input: SolcInput = {
language: "Solidity",
sources,
settings: {
optimizer: { enabled: true, runs: 200 },
outputSelection: {
"*": { "*": ["abi", "evm.bytecode.object"] },
},
},
};
const output: SolcOutput = JSON.parse(
solc.compile(JSON.stringify(input), { import: findImports }),
);
const fatal = (output.errors ?? []).filter((e) => e.severity === "error");
if (fatal.length > 0) {
const msg = fatal.map((e) => e.formattedMessage).join("\n");
throw new Error(`solc compile failed:\n${msg}`);
}
const artifact = output.contracts[entry]?.NotaryRegistry;
if (!artifact) {
throw new Error("NotaryRegistry not found in solc output");
}
return {
abi: artifact.abi,
bytecode: "0x" + artifact.evm.bytecode.object,
};
}

View File

@@ -0,0 +1,151 @@
/**
* Read-only E2E round-trip against the **public Chain 138 RPC**.
*
* Whereas `notaryChainRoundtrip.e2e.test.ts` spins up ganache locally
* and exercises both writes and reads, this suite targets the real
* public endpoint (`https://rpc.public-0138.defi-oracle.io`) and
* closes the proxmox `EXT-CHAIN138-CI-RPC` blocker on the
* CurrenciCombo side.
*
* It does **not** perform any writes:
* - we don't own a funded key on Chain 138 in CI;
* - writes against mainnet-equivalent infra would be reckless and
* non-deterministic.
*
* What it does do:
* 1. Prove the orchestrator's ethers client can reach the public RPC.
* 2. Verify `eth_chainId` matches the expected Chain 138.
* 3. Verify `eth_blockNumber` returns a plausible current height.
* 4. If `NOTARY_REGISTRY_ADDRESS` is set, read a synthetic
* `plans(bytes32)` key and assert the contract responded (zeros
* are fine — the call succeeding is the point).
* 5. Build an orchestrator notaryChain config pointed at the real
* chain and confirm the module still gracefully mock-falls-back
* when the orchestrator's signing key isn't set.
*
* Gated on **BOTH** `RUN_E2E=1` and `E2E_USE_PUBLIC_CHAIN138=1` so the
* default E2E path stays offline.
*/
import { JsonRpcProvider, Contract, id as keccakId, ZeroHash } from "ethers";
import { compileNotaryRegistry } from "./helpers/compileNotaryRegistry";
const RUN_E2E = process.env.RUN_E2E === "1";
const USE_PUBLIC = process.env.E2E_USE_PUBLIC_CHAIN138 === "1";
const d = RUN_E2E && USE_PUBLIC ? describe : describe.skip;
const DEFAULT_PUBLIC_RPC = "https://rpc.public-0138.defi-oracle.io";
const EXPECTED_CHAIN_ID = 138n;
function getPublicRpcUrl(): string {
// If the caller set CHAIN_138_RPC_URL, honour it (matches how the
// orchestrator's own services pick up config); otherwise use the
// documented public endpoint.
return process.env.CHAIN_138_RPC_URL || DEFAULT_PUBLIC_RPC;
}
d("NotaryRegistry read-only round-trip against public Chain 138", () => {
let rpcUrl: string;
let provider: JsonRpcProvider;
beforeAll(() => {
rpcUrl = getPublicRpcUrl();
// staticNetwork=true skips the network discovery handshake every
// call; cacheTimeout=-1 disables the 250ms response cache so
// subsequent JSON-RPC calls see fresh data.
provider = new JsonRpcProvider(rpcUrl, undefined, {
staticNetwork: true,
cacheTimeout: -1,
});
});
it("resolves a network descriptor", async () => {
const net = await provider.getNetwork();
expect(net).toBeDefined();
expect(typeof net.chainId).toBe("bigint");
}, 30_000);
it("eth_chainId matches Chain 138", async () => {
const net = await provider.getNetwork();
expect(net.chainId).toBe(EXPECTED_CHAIN_ID);
}, 30_000);
it("eth_blockNumber returns a positive current height", async () => {
const blockNumber = await provider.getBlockNumber();
expect(blockNumber).toBeGreaterThan(0);
}, 30_000);
it("eth_getBlockByNumber returns a well-formed block", async () => {
const latest = await provider.getBlockNumber();
const block = await provider.getBlock(latest);
expect(block).not.toBeNull();
if (block) {
expect(block.number).toBe(latest);
expect(block.hash).toMatch(/^0x[0-9a-fA-F]{64}$/);
expect(typeof block.timestamp).toBe("number");
expect(block.timestamp).toBeGreaterThan(1_600_000_000);
}
}, 30_000);
it("reads plans(bytes32) if NOTARY_REGISTRY_ADDRESS is set", async () => {
const addr = process.env.NOTARY_REGISTRY_ADDRESS;
if (!addr) {
// Not a failure — this is the current CI state until the
// deployed NotaryRegistry address is published to the
// environment. Document it instead of failing.
expect(addr).toBeUndefined();
return;
}
const { abi } = compileNotaryRegistry();
const readOnly = new Contract(addr, abi, provider);
// Synthetic id — we expect an empty / zero record but the call
// itself must succeed (proves ABI matches deployed contract).
const syntheticKey = keccakId("e2e-public-read-only-" + Date.now());
const record = await readOnly.getFunction("plans")(syntheticKey);
// plans() returns (planHash, creator, registeredAt, finalizedAt, success, receiptHash)
expect(record).toBeDefined();
expect(Array.isArray(record) || typeof record === "object").toBe(true);
// The key embeds Date.now(), so it should be unused on-chain: check
// the tuple shape first, then assert every field is still zero.
const [planHash, , registeredAt, finalizedAt] = record as readonly [
string, string, bigint, bigint, boolean, string,
];
expect(typeof planHash).toBe("string");
expect(typeof registeredAt).toBe("bigint");
expect(typeof finalizedAt).toBe("bigint");
// For a synthetic key, every field should be zero.
expect(planHash).toBe(ZeroHash);
expect(registeredAt).toBe(0n);
expect(finalizedAt).toBe(0n);
}, 60_000);
it("orchestrator notaryChain module mock-falls-back when signing key is absent", async () => {
const saved = {
rpc: process.env.CHAIN_138_RPC_URL,
addr: process.env.NOTARY_REGISTRY_ADDRESS,
pk: process.env.ORCHESTRATOR_PRIVATE_KEY,
};
// Point at the public RPC but leave the signing key unset.
process.env.CHAIN_138_RPC_URL = rpcUrl;
delete process.env.ORCHESTRATOR_PRIVATE_KEY;
try {
jest.resetModules();
const chain = await import("../../src/services/notaryChain");
const result = await chain.anchorPlan({
plan_id: "public-rpc-readonly-" + Date.now(),
steps: [],
created_at: new Date().toISOString(),
} as never);
// With no signer, isConfigured() returns false → mock path.
expect(result.mode).toBe("mock");
expect(result.planHash).toMatch(/^0x[0-9a-fA-F]{64}$/);
expect(result.txHash).toBeUndefined();
} finally {
if (saved.rpc !== undefined) process.env.CHAIN_138_RPC_URL = saved.rpc;
else delete process.env.CHAIN_138_RPC_URL;
if (saved.addr !== undefined) process.env.NOTARY_REGISTRY_ADDRESS = saved.addr;
if (saved.pk !== undefined) process.env.ORCHESTRATOR_PRIVATE_KEY = saved.pk;
}
}, 30_000);
});
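
Both chain-facing suites above save, mutate, and restore `process.env` by hand in `try`/`finally` blocks. That pattern can be packaged in a small helper; this is a hedged sketch, and `withEnv` is a hypothetical name, not something the repo exports. A synchronous version is shown for brevity; an async variant would `await fn()` inside the same `try`/`finally`.

```typescript
// Hypothetical helper (not in the repo): run `fn` with process.env
// overrides, restoring the exact prior state afterwards, including
// empty-string values that a truthy check would fail to restore.
// An `undefined` override means "delete this variable for the call".
function withEnv<T>(
  overrides: Record<string, string | undefined>,
  fn: () => T,
): T {
  const saved: Record<string, string | undefined> = {};
  for (const key of Object.keys(overrides)) {
    saved[key] = process.env[key];
    const next = overrides[key];
    if (next === undefined) delete process.env[key];
    else process.env[key] = next;
  }
  try {
    return fn();
  } finally {
    for (const key of Object.keys(saved)) {
      if (saved[key] === undefined) delete process.env[key];
      else process.env[key] = saved[key];
    }
  }
}
```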


@@ -0,0 +1,173 @@
/**
* End-to-end round-trip against a real EVM node.
*
* Spawns the ganache CLI as a child process on a random dev port,
* deploys NotaryRegistry.sol compiled via in-process solc, and
* exercises services/notaryChain.ts (`anchorPlan` + `finalizeAnchor`)
 * against it via ethers v6. This closes the gap flagged in
 * gap-analysis v2 §7.9 / §8.5 (orchestrator unit tests pass, but the
 * adapter-to-reality boundary was uncovered); PR Q's existing suite
 * covers Postgres only.
*
* Gated on RUN_E2E=1 to stay out of the fast unit-test path. Runs on
* CI via the `orchestrator-e2e` job (see .github/workflows/ci.yml).
*/
import { spawn, type ChildProcess } from "child_process";
import { JsonRpcProvider, Wallet, ContractFactory, Contract } from "ethers";
import { compileNotaryRegistry } from "./helpers/compileNotaryRegistry";
const RUN_E2E = process.env.RUN_E2E === "1";
const d = RUN_E2E ? describe : describe.skip;
const DEPLOYER_PK = "0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d";
async function waitForRpc(url: string, timeoutMs = 30_000): Promise<void> {
const start = Date.now();
while (Date.now() - start < timeoutMs) {
try {
const r = await fetch(url, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ jsonrpc: "2.0", method: "eth_chainId", params: [], id: 1 }),
});
if (r.ok) return;
} catch {
/* not ready yet */
}
await new Promise((r) => setTimeout(r, 300));
}
throw new Error(`RPC did not come up within ${timeoutMs}ms: ${url}`);
}
type NotaryChainModule = typeof import("../../src/services/notaryChain");
d("NotaryRegistry chain round-trip (E2E)", () => {
let ganacheProc: ChildProcess;
let port: number;
let rpcUrl: string;
let contractAddress: string;
let chain: NotaryChainModule;
beforeAll(async () => {
port = 18545 + Math.floor(Math.random() * 1000);
rpcUrl = `http://127.0.0.1:${port}`;
ganacheProc = spawn(
"node_modules/.bin/ganache",
[
"--port",
String(port),
"--chain.chainId",
"1337",
"--wallet.accounts",
`${DEPLOYER_PK},1000000000000000000000`,
"--logging.quiet",
],
{ stdio: "pipe", cwd: process.cwd() },
);
await waitForRpc(rpcUrl);
const provider = new JsonRpcProvider(rpcUrl);
const wallet = new Wallet(DEPLOYER_PK, provider);
const { abi, bytecode } = compileNotaryRegistry();
// OZ v5 Ownable requires `initialOwner` in the constructor.
const factory = new ContractFactory(abi, bytecode, wallet);
const deployer = await wallet.getAddress();
const contract = (await factory.deploy(deployer)) as unknown as Contract;
await contract.waitForDeployment();
contractAddress = await contract.getAddress();
// Wire the service under test to this chain. Import after the env
// is set so the service's lazy loader picks it up.
process.env.CHAIN_138_RPC_URL = rpcUrl;
process.env.CHAIN_138_CHAIN_ID = "1337";
process.env.NOTARY_REGISTRY_ADDRESS = contractAddress;
process.env.ORCHESTRATOR_PRIVATE_KEY = DEPLOYER_PK;
jest.resetModules();
chain = await import("../../src/services/notaryChain");
}, 120_000);
afterAll(async () => {
if (ganacheProc && !ganacheProc.killed) {
ganacheProc.kill("SIGTERM");
await new Promise((r) => setTimeout(r, 300));
}
delete process.env.CHAIN_138_RPC_URL;
delete process.env.CHAIN_138_CHAIN_ID;
delete process.env.NOTARY_REGISTRY_ADDRESS;
delete process.env.ORCHESTRATOR_PRIVATE_KEY;
});
it("anchorPlan writes a PlanRegistered record on-chain", async () => {
const plan = {
plan_id: "e2e-plan-" + Date.now(),
steps: [],
created_at: new Date().toISOString(),
};
const expectedHash = chain.computePlanHash(plan as never);
const result = await chain.anchorPlan(plan as never);
expect(result.mode).toBe("chain");
expect(result.txHash).toMatch(/^0x[0-9a-fA-F]{64}$/);
expect(result.blockNumber).toBeGreaterThan(0);
expect(result.planHash).toBe(expectedHash);
// Directly query the contract to prove the state transition landed.
const provider = new JsonRpcProvider(rpcUrl);
const { abi } = compileNotaryRegistry();
const readOnly = new Contract(contractAddress, abi, provider);
const stored = await readOnly.getFunction("plans")(chain.planIdToBytes32(plan.plan_id));
// plans(bytes32) → (planHash, creator, registeredAt, finalizedAt, success, receiptHash)
expect(stored[0]).toMatch(/^0x[0-9a-fA-F]{64}$/);
expect(Number(stored[2])).toBeGreaterThan(0); // registeredAt
expect(Number(stored[3])).toBe(0); // finalizedAt
}, 60_000);
it("finalizeAnchor writes a PlanFinalized record with a receipt hash", async () => {
const plan = {
plan_id: "e2e-finalize-" + Date.now(),
steps: [],
created_at: new Date().toISOString(),
};
await chain.anchorPlan(plan as never);
const result = await chain.finalizeAnchor(plan.plan_id, true);
expect(result.mode).toBe("chain");
expect(result.txHash).toMatch(/^0x[0-9a-fA-F]{64}$/);
expect(result.receiptHash).toMatch(/^0x[0-9a-fA-F]{64}$/);
expect(result.blockNumber).toBeGreaterThan(0);
}, 60_000);
it("anchorPlan falls back to mock when envs are cleared", async () => {
const saved = {
rpc: process.env.CHAIN_138_RPC_URL,
addr: process.env.NOTARY_REGISTRY_ADDRESS,
pk: process.env.ORCHESTRATOR_PRIVATE_KEY,
};
delete process.env.CHAIN_138_RPC_URL;
delete process.env.NOTARY_REGISTRY_ADDRESS;
delete process.env.ORCHESTRATOR_PRIVATE_KEY;
try {
jest.resetModules();
const mockOnly = await import("../../src/services/notaryChain");
const result = await mockOnly.anchorPlan({
plan_id: "mock-plan",
steps: [],
created_at: new Date().toISOString(),
} as never);
expect(result.mode).toBe("mock");
expect(result.txHash).toBeUndefined();
expect(result.planHash).toMatch(/^0x[0-9a-fA-F]{64}$/);
} finally {
if (saved.rpc !== undefined) process.env.CHAIN_138_RPC_URL = saved.rpc;
if (saved.addr !== undefined) process.env.NOTARY_REGISTRY_ADDRESS = saved.addr;
if (saved.pk !== undefined) process.env.ORCHESTRATOR_PRIVATE_KEY = saved.pk;
}
});
});
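
The `beforeAll` above sets env vars, calls `jest.resetModules()`, and only then imports `services/notaryChain`, because the service caches its configuration on first use. A minimal sketch of that lazy-config shape (hypothetical: the real module's internals may differ) shows why a stale module registry would keep seeing old env values:

```typescript
// Hypothetical sketch: a module-level cache reads process.env once, so
// env changes after the first call are invisible until the module is
// re-imported (or, here, until the test-only reset hook is called).
let cached: { rpcUrl?: string; signerKey?: string } | null = null;

function getConfig(): { rpcUrl?: string; signerKey?: string } {
  if (!cached) {
    cached = {
      rpcUrl: process.env.CHAIN_138_RPC_URL,
      signerKey: process.env.ORCHESTRATOR_PRIVATE_KEY,
    };
  }
  return cached;
}

function isConfigured(): boolean {
  const c = getConfig();
  return Boolean(c.rpcUrl && c.signerKey);
}

// Test-only escape hatch, standing in for jest.resetModules().
function resetConfigForTests(): void {
  cached = null;
}
```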


@@ -0,0 +1,178 @@
/**
* E2E transaction lifecycle (gap-analysis v2 §7.8 / §10.8).
*
* Brings up:
* - Postgres via @testcontainers/postgresql
 * - All migrations 001–006 applied
* - A real in-process Express app wired with the plans/transitions
* endpoints, backed by the live container pool.
*
* Skipped unless RUN_E2E=1 and Docker is reachable. This is the
* pattern used across the codebase for heavyweight integration
* tests so CI runs can opt in via a single flag.
*
 * NB: Chain-138 RPC, SWIFT gateway, and Redis are all mocked locally
* by default. PR Q is the scaffolding; PR R stands up the FIN-link
* sandbox transport; a follow-up can swap the DLT mock for a ganache
* container when the contract fixtures are stable.
*/
import { describe, it, expect, beforeAll, afterAll } from "@jest/globals";
import express from "express";
import request from "supertest";
const shouldRun = process.env.RUN_E2E === "1";
// Use describe.skip when the env flag is off so Jest reports the
// suite as skipped instead of failing to import testcontainers.
const d = shouldRun ? describe : describe.skip;
d("E2E transaction lifecycle (Postgres testcontainer)", () => {
let pgContainer: unknown;
let connectionString = "";
let app: express.Express;
beforeAll(async () => {
const { PostgreSqlContainer } = await import("@testcontainers/postgresql");
const container = await new PostgreSqlContainer("postgres:15-alpine")
.withDatabase("ccflow_e2e")
.withUsername("ccflow")
.withPassword("ccflow")
.start();
pgContainer = container;
connectionString = container.getConnectionUri();
process.env.DATABASE_URL = connectionString;
process.env.SESSION_SECRET =
"e2e-session-secret-must-be-at-least-32-chars-long!";
process.env.NODE_ENV = "test";
// Import after env set so migrations/pool read the container URL.
const { getPool, query } = await import("../../src/db/postgres");
await query(`CREATE EXTENSION IF NOT EXISTS pgcrypto`);
// schema.sql contains $$...$$ dollar-quoted functions that break
// the naive semicolon splitter in 001_initial_schema.ts. Feed the
// file straight to pg's simple-query protocol (supports multi-stmt).
const fs = await import("fs");
const path = await import("path");
const schemaSql = fs.readFileSync(
path.join(__dirname, "../../src/db/schema.sql"),
"utf-8",
);
const pool = getPool();
const client = await pool.connect();
try {
await client.query(schemaSql);
} finally {
client.release();
}
// Run the numbered migrations after schema.sql.
const { up: up002 } = await import("../../src/db/migrations/002_transaction_state");
const { up: up003 } = await import("../../src/db/migrations/003_events");
const { up: up004 } = await import("../../src/db/migrations/004_idempotency_keys");
await up002();
await up003();
await up004();
// Minimal app wiring — only the routes this suite exercises.
const { createPlan, getPlan } = await import("../../src/api/plans");
app = express();
app.use(express.json());
app.post("/api/plans", createPlan);
app.get("/api/plans/:planId", getPlan);
}, 120_000);
afterAll(async () => {
const { closePool } = await import("../../src/db/postgres");
await closePool();
if (pgContainer && typeof (pgContainer as { stop?: () => Promise<void> }).stop === "function") {
await (pgContainer as { stop: () => Promise<void> }).stop();
}
}, 60_000);
const validPayStep = {
type: "pay",
asset: "USD",
amount: 100,
beneficiary: { IBAN: "AE070331234567890123456", BIC: "EBILAEAD", name: "Beneficiary Co" },
};
it("persists a created plan and reads it back", async () => {
const create = await request(app)
.post("/api/plans")
.send({
creator: "0xtest-creator",
steps: [validPayStep],
})
.expect(201);
expect(create.body.plan_id).toBeDefined();
expect(create.body.plan_hash).toMatch(/^[0-9a-fA-F]{64}$/);
const read = await request(app)
.get(`/api/plans/${create.body.plan_id}`)
.expect(200);
expect(read.body.plan_id).toBe(create.body.plan_id);
}, 30_000);
it("publishes a signed event row via the live event bus", async () => {
const create = await request(app)
.post("/api/plans")
.send({
creator: "0xtest-creator-2",
steps: [validPayStep],
})
.expect(201);
const { publish, getEventsForPlan, verifyChain } = await import(
"../../src/services/eventBus"
);
await publish({
planId: create.body.plan_id,
type: "transaction.created",
actor: "e2e",
payload: { plan_hash: create.body.plan_hash },
});
await publish({
planId: create.body.plan_id,
type: "transaction.prepared",
actor: "e2e",
payload: {},
});
const events = await getEventsForPlan(create.body.plan_id);
expect(events).toHaveLength(2);
expect(events[0].prev_hash).toBeNull();
expect(events[1].prev_hash).toBe(events[0].signature);
const chain = await verifyChain(create.body.plan_id);
expect(chain.ok).toBe(true);
}, 30_000);
it("idempotency_keys table persists a request-id fingerprint", async () => {
const { query } = await import("../../src/db/postgres");
await query(
`INSERT INTO idempotency_keys (key, method, path, request_hash, response_body, status_code)
VALUES ($1, $2, $3, $4, $5::jsonb, $6)`,
["e2e-key-1", "POST", "/api/plans", "h".repeat(64), JSON.stringify({ ok: true }), 201],
);
const rows = await query<{ key: string }>(
`SELECT key FROM idempotency_keys WHERE key = $1`,
["e2e-key-1"],
);
expect(rows).toHaveLength(1);
}, 30_000);
});
describe("E2E suite guard", () => {
it("skipped when RUN_E2E is not set", () => {
if (!shouldRun) {
expect(shouldRun).toBe(false);
return;
}
expect(true).toBe(true);
});
});
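
The event-bus test above asserts `events[1].prev_hash === events[0].signature` and that `verifyChain()` passes. A hedged sketch of such a hash-chained log follows; HMAC-SHA256 and the exact signed field layout are assumptions for illustration, not the repo's actual signing scheme in `services/eventBus`.

```typescript
import { createHmac } from "node:crypto";

// Sketch of a hash-chained event log: each event's signature covers its
// payload plus the previous event's signature, so tampering with any
// event breaks verification of every later one.
interface ChainedEvent {
  type: string;
  payload: string;
  prev_hash: string | null;
  signature: string;
}

const SIGNING_SECRET = "demo-signing-key"; // stand-in for a real secret

function appendEvent(log: ChainedEvent[], type: string, payload: string): void {
  const prev_hash = log.length ? log[log.length - 1].signature : null;
  const signature = createHmac("sha256", SIGNING_SECRET)
    .update(`${prev_hash ?? ""}|${type}|${payload}`)
    .digest("hex");
  log.push({ type, payload, prev_hash, signature });
}

function verifyChain(log: ChainedEvent[]): boolean {
  return log.every((ev, i) => {
    const expectedPrev = i === 0 ? null : log[i - 1].signature;
    const expectedSig = createHmac("sha256", SIGNING_SECRET)
      .update(`${expectedPrev ?? ""}|${ev.type}|${ev.payload}`)
      .digest("hex");
    return ev.prev_hash === expectedPrev && ev.signature === expectedSig;
  });
}
```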


@@ -0,0 +1,232 @@
/**
* Unit tests for the Complete Credential (DBIS cc-*) adapters.
*
* All tests are headless — they either exercise the embedded mock /
* matrix or stub `fetch` directly. No network, no UI.
*/
import {
createCcIdentityClient,
loadControlsMatrix,
findControl,
type CcIdentityClient,
} from "../../src/services/completeCredential";
describe("completeCredential.createCcIdentityClient() — mock mode", () => {
let client: CcIdentityClient;
beforeAll(() => {
delete process.env.CC_IDENTITY_URL;
delete process.env.CC_IDENTITY_API_KEY;
client = createCcIdentityClient();
});
it("reports mock mode", () => {
expect(client.mode).toBe("mock");
});
it("returns ok for health()", async () => {
const h = await client.health();
expect(h.status).toBe("ok");
expect(h.service).toBe("cc-identity-core");
});
it("returns a ready response with persistence=false in mock", async () => {
const r = await client.ready();
expect(r.status).toBe("ok");
expect(r.persistence).toBe(false);
});
it("creates a subject with a uuid and defaulted tenant/entity", async () => {
const s = await client.createSubject({});
expect(s.subjectId).toMatch(/^[0-9a-f-]{36}$/);
expect(s.tenantId).toBe("tenant-demo");
expect(s.entityId).toBe("entity-demo");
});
it("passes tenant/entity through when provided", async () => {
const s = await client.createSubject({
tenantId: "t-acme",
entityId: "e-bank-1",
});
expect(s.tenantId).toBe("t-acme");
expect(s.entityId).toBe("e-bank-1");
});
});
describe("completeCredential.createCcIdentityClient() — live mode (stubbed fetch)", () => {
function makeFetch(
record: (url: string, init: RequestInit) => void,
responseBody: unknown,
status = 200,
): typeof fetch {
return (async (input: string | URL | Request, init?: RequestInit) => {
record(String(input), init ?? {});
return new Response(JSON.stringify(responseBody), {
status,
headers: { "Content-Type": "application/json" },
});
}) as typeof fetch;
}
it("reports live mode when baseUrl is set", () => {
const client = createCcIdentityClient({
baseUrl: "http://cc.example.test",
fetchImpl: makeFetch(() => undefined, { status: "ok", service: "x" }),
});
expect(client.mode).toBe("live");
});
it("hits GET /health", async () => {
const calls: string[] = [];
const client = createCcIdentityClient({
baseUrl: "http://cc.example.test",
fetchImpl: makeFetch(
(url) => {
calls.push(url);
},
{ status: "ok", service: "cc-identity-core" },
),
});
const h = await client.health();
expect(h.status).toBe("ok");
expect(calls[0]).toBe("http://cc.example.test/health");
});
it("posts to /v1/subjects with X-Correlation-Id + api key header", async () => {
const calls: { url: string; headers: Record<string, string>; body?: string }[] = [];
const client = createCcIdentityClient({
baseUrl: "http://cc.example.test",
apiKey: "k-1",
fetchImpl: makeFetch(
(url, init) => {
calls.push({
url,
headers: (init.headers ?? {}) as Record<string, string>,
body: init.body as string,
});
},
{
subjectId: "11111111-2222-3333-4444-555555555555",
tenantId: "t-1",
entityId: "e-1",
createdAt: "2026-01-01T00:00:00Z",
},
),
});
const s = await client.createSubject({ tenantId: "t-1", entityId: "e-1" }, "corr-42");
expect(s.subjectId).toContain("-");
expect(calls[0].url).toBe("http://cc.example.test/v1/subjects");
expect(calls[0].headers["X-API-Key"]).toBe("k-1");
expect(calls[0].headers["X-Correlation-Id"]).toBe("corr-42");
expect(JSON.parse(calls[0].body ?? "{}")).toEqual({
tenantId: "t-1",
entityId: "e-1",
});
});
it("auto-generates a correlation id when not provided", async () => {
const calls: Record<string, string>[] = [];
const client = createCcIdentityClient({
baseUrl: "http://cc.example.test",
fetchImpl: makeFetch(
(_url, init) => {
calls.push((init.headers ?? {}) as Record<string, string>);
},
{
subjectId: "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
tenantId: "t",
entityId: "e",
createdAt: "2026-01-01T00:00:00Z",
},
),
});
await client.createSubject({});
expect(calls[0]["X-Correlation-Id"]).toMatch(/^[0-9a-f-]{36}$/);
});
it("throws a descriptive error on non-2xx", async () => {
const client = createCcIdentityClient({
baseUrl: "http://cc.example.test",
fetchImpl: makeFetch(() => undefined, { error: "boom" }, 500),
});
await expect(client.health()).rejects.toThrow(/HTTP 500/);
});
});
describe("completeCredential.loadControlsMatrix() — embedded mode", () => {
beforeAll(() => {
delete process.env.CC_CONTROLS_MATRIX_URL;
});
it("returns the embedded v0 matrix when no URL is set", async () => {
const m = await loadControlsMatrix();
expect(m.source).toBe("embedded");
expect(m.version).toBe(0);
expect(m.domains.length).toBeGreaterThan(0);
});
it("exposes expected control ids", async () => {
const m = await loadControlsMatrix();
const ids = m.domains.flatMap((d) => d.controls.map((c) => c.id));
expect(ids).toEqual(expect.arrayContaining(["IDP-001", "PAY-001", "AUD-001", "REG-001"]));
});
it("findControl() resolves by id", async () => {
const m = await loadControlsMatrix();
const c = findControl(m, "PAY-001");
expect(c?.title).toContain("PAN");
});
it("findControl() returns undefined for unknown ids", async () => {
const m = await loadControlsMatrix();
expect(findControl(m, "NOPE-999")).toBeUndefined();
});
});
describe("completeCredential.loadControlsMatrix() — remote mode", () => {
function makeFetch(responseBody: unknown, status = 200): typeof fetch {
return (async () =>
new Response(JSON.stringify(responseBody), {
status,
headers: { "Content-Type": "application/json" },
})) as typeof fetch;
}
it("fetches and normalises a JSON matrix", async () => {
const matrix = await loadControlsMatrix({
url: "http://cc.example.test/controls/matrix/v0.json",
fetchImpl: makeFetch({
version: 1,
domains: [
{
id: "extra",
controls: [
{
id: "X-001",
title: "Extra",
evidence_type: "doc_review",
owner_team: "ops",
frequency: "monthly",
},
],
},
],
}),
});
expect(matrix.source).toBe("remote");
expect(matrix.version).toBe(1);
expect(matrix.domains[0].controls[0].evidenceType).toBe("doc_review");
expect(matrix.domains[0].controls[0].ownerTeam).toBe("ops");
});
it("throws on non-2xx", async () => {
await expect(
loadControlsMatrix({
url: "http://cc.example.test/nope",
fetchImpl: makeFetch({}, 404),
}),
).rejects.toThrow(/HTTP 404/);
});
});
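
Both adapter suites inject a `fetchImpl` so tests never touch the network. The technique can be sketched as below; `createPingClient` and `stubFetch` are illustrative names, not the repo's exports, and the stub relies on the global `Response` available in Node 18+.

```typescript
// Sketch of the injectable-fetch pattern: the client takes a fetchImpl
// (defaulting to the global fetch), so tests pass a stub that records
// calls and returns a canned JSON Response.
type FetchLike = typeof fetch;

function createPingClient(baseUrl: string, fetchImpl: FetchLike = fetch) {
  return {
    async ping(): Promise<{ status: string }> {
      const res = await fetchImpl(`${baseUrl}/health`, { method: "GET" });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return (await res.json()) as { status: string };
    },
  };
}

function stubFetch(body: unknown, status = 200, calls: string[] = []): FetchLike {
  return (async (input: string | URL | Request) => {
    calls.push(String(input));
    return new Response(JSON.stringify(body), {
      status,
      headers: { "Content-Type": "application/json" },
    });
  }) as FetchLike;
}
```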


@@ -0,0 +1,198 @@
/**
* Unit tests for the dbis_core HTTP client adapter.
*
* Covers both provider-switch legs:
* - `createDbisCoreClient()` with DBIS_CORE_URL unset → mock mode.
* - `createDbisCoreClient({ baseUrl, fetchImpl })` → live mode, with
* a stub `fetch` so tests never hit the network.
*/
import {
createDbisCoreClient,
type DbisCoreClient,
} from "../../src/services/dbisCore";
describe("dbisCore.createDbisCoreClient() — mock mode", () => {
let client: DbisCoreClient;
beforeAll(() => {
delete process.env.DBIS_CORE_URL;
delete process.env.DBIS_CORE_API_KEY;
client = createDbisCoreClient();
});
it("reports mock mode", () => {
expect(client.mode).toBe("mock");
});
it("returns a balance shaped like upstream", async () => {
const b = await client.getAccountBalance("acct-1");
expect(b.accountId).toBe("acct-1");
expect(typeof b.available).toBe("string");
expect(typeof b.held).toBe("string");
expect(b.currency).toBeDefined();
});
it("returns a plausible route", async () => {
const r = await client.findSettlementRoute({
sourceBankId: "src",
destinationBankId: "dst",
amount: "100",
currencyCode: "USD",
});
expect(r.routeId).toContain("src");
expect(r.hops.length).toBeGreaterThan(0);
expect(r.estimatedFeeBps).toBeGreaterThanOrEqual(0);
});
it("settles atomically with a deterministic id", async () => {
const s = await client.atomicSettle({
routeId: "r1",
sourceAccountId: "a",
destinationAccountId: "b",
amount: "1",
currencyCode: "USD",
reference: "ref-1",
});
expect(s.status).toBe("settled");
expect(s.settlementId).toContain("ref-1");
expect(s.completedAt).toBeDefined();
});
it("returns an allow decision by default from ARI", async () => {
const d = await client.requestAriDecision({
txId: "tx-1",
amount: "1",
currencyCode: "USD",
creator: "0xdead",
});
expect(d.outcome).toBe("allow");
expect(d.txId).toBe("tx-1");
expect(d.riskScore).toBeLessThan(1);
});
it("accepts a pacs008 dispatch and echoes the messageId", async () => {
const r = await client.dispatchPacs008({
messageId: "msg-1",
creationDateTime: "2026-01-01T00:00:00Z",
debtor: { name: "Acme", bic: "ACMEUS33", account: "1" },
creditor: { name: "Widget", bic: "WDGTGB22", account: "2" },
amount: "100",
currencyCode: "USD",
});
expect(r.status).toBe("accepted");
expect(r.messageId).toBe("msg-1");
});
it("returns a settled status from a synthetic settlementId", async () => {
const s = await client.getSettlementStatus("stlm-99");
expect(s.status).toBe("settled");
expect(s.legs.length).toBeGreaterThan(0);
});
});
describe("dbisCore.createDbisCoreClient() — live mode (stubbed fetch)", () => {
function makeFetch(
record: (url: string, init: RequestInit) => void,
responseBody: unknown,
status = 200,
): typeof fetch {
return (async (input: string | URL | Request, init?: RequestInit) => {
record(String(input), init ?? {});
return new Response(JSON.stringify(responseBody), {
status,
headers: { "Content-Type": "application/json" },
});
}) as typeof fetch;
}
it("reports live mode when baseUrl is set", () => {
const client = createDbisCoreClient({
baseUrl: "http://dbis.example.test",
fetchImpl: makeFetch(
() => undefined,
{ accountId: "a", currency: "USD", available: "0", held: "0", asOf: "" },
),
});
expect(client.mode).toBe("live");
});
it("hits GET /api/accounts/:id/balance with the API key header", async () => {
const calls: { url: string; headers: Record<string, string>; method?: string }[] = [];
const client = createDbisCoreClient({
baseUrl: "http://dbis.example.test",
apiKey: "k-secret",
fetchImpl: makeFetch(
(url, init) => {
calls.push({
url,
method: init.method,
headers: (init.headers ?? {}) as Record<string, string>,
});
},
{
accountId: "a42",
currency: "USD",
available: "500",
held: "10",
asOf: "2026-01-01T00:00:00Z",
},
),
});
const b = await client.getAccountBalance("a42");
expect(b.available).toBe("500");
expect(calls).toHaveLength(1);
expect(calls[0].url).toBe("http://dbis.example.test/api/accounts/a42/balance");
expect(calls[0].method).toBe("GET");
expect(calls[0].headers["X-API-Key"]).toBe("k-secret");
});
it("posts a route request and parses the structured response", async () => {
const client = createDbisCoreClient({
baseUrl: "http://dbis.example.test/",
fetchImpl: makeFetch(
() => undefined,
{
routeId: "R1",
hops: [{ bankId: "A", latencyMs: 1, feeBps: 2 }],
estimatedLatencyMs: 10,
estimatedFeeBps: 2,
},
),
});
const r = await client.findSettlementRoute({
sourceBankId: "A",
destinationBankId: "B",
amount: "1",
currencyCode: "USD",
});
expect(r.routeId).toBe("R1");
expect(r.estimatedFeeBps).toBe(2);
});
it("throws a descriptive error on non-2xx", async () => {
const client = createDbisCoreClient({
baseUrl: "http://dbis.example.test",
fetchImpl: makeFetch(() => undefined, { error: "denied" }, 403),
});
await expect(client.getAccountBalance("a1")).rejects.toThrow(/HTTP 403/);
});
it("encodes path parameters safely", async () => {
const calls: string[] = [];
const client = createDbisCoreClient({
baseUrl: "http://dbis.example.test",
fetchImpl: makeFetch(
(url) => {
calls.push(url);
},
{ settlementId: "x", status: "settled", legs: [], lastUpdated: "" },
),
});
await client.getSettlementStatus("weird/id with space");
expect(calls[0]).toBe(
"http://dbis.example.test/api/isn/settlements/weird%2Fid%20with%20space",
);
});
});
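
The final test pins down `encodeURIComponent` semantics for path parameters: `/` and spaces are escaped, so a hostile id cannot change the route shape. The client-side pattern it implies can be sketched as follows (`settlementUrl` is an illustrative name, not the adapter's actual helper):

```typescript
// Safe path-parameter interpolation: encode the id segment so reserved
// characters ("/", " ", "?", "=") cannot alter the URL structure.
function settlementUrl(baseUrl: string, settlementId: string): string {
  return `${baseUrl}/api/isn/settlements/${encodeURIComponent(settlementId)}`;
}
```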


@@ -0,0 +1,170 @@
import { describe, it, expect, beforeEach } from "@jest/globals";
import express from "express";
import request from "supertest";
import {
buildSandboxRouter,
recordDispatch,
advance,
rejectMessage,
getMessage,
listMessages,
resetSandboxForTests,
finSignature,
} from "../../src/services/finLink/sandbox";
import {
createInProcessFinLinkClient,
createHttpFinLinkClient,
} from "../../src/services/finLink/client";
describe("FIN-link sandbox (gap-analysis v2 §7.1 / §10.6)", () => {
beforeEach(() => {
resetSandboxForTests();
});
describe("lifecycle (in-memory)", () => {
it("assigns a FIN reference and records state received", () => {
const msg = recordDispatch({
messageType: "MT760",
payload: "MT760 payload",
planId: "plan-1",
});
expect(msg.reference).toMatch(/^FIN-[0-9A-F]{12}$/);
expect(msg.state).toBe("received");
expect(msg.stateHistory).toHaveLength(1);
expect(msg.planId).toBe("plan-1");
});
it("advances deterministically: received -> acknowledged -> accepted -> settled", async () => {
const msg = recordDispatch({ messageType: "pacs.009", payload: "<pacs.009/>" });
expect((await advance(msg.reference))!.state).toBe("acknowledged");
expect((await advance(msg.reference))!.state).toBe("accepted");
expect((await advance(msg.reference))!.state).toBe("settled");
expect((await advance(msg.reference))!.state).toBe("settled"); // terminal
const final = getMessage(msg.reference)!;
expect(final.stateHistory.map((h) => h.state)).toEqual([
"received",
"acknowledged",
"accepted",
"settled",
]);
});
it("supports rejection and stops lifecycle progression", async () => {
const msg = recordDispatch({ messageType: "MT202", payload: "MT202 payload" });
const rejected = rejectMessage(msg.reference, "bad coordinates")!;
expect(rejected.state).toBe("rejected");
const afterAdvance = await advance(msg.reference);
expect(afterAdvance!.state).toBe("rejected");
});
it("listMessages filters by planId", () => {
recordDispatch({ messageType: "MT760", payload: "a", planId: "plan-a" });
recordDispatch({ messageType: "MT760", payload: "b", planId: "plan-b" });
recordDispatch({ messageType: "MT760", payload: "c", planId: "plan-a" });
expect(listMessages().length).toBe(3);
expect(listMessages({ planId: "plan-a" }).length).toBe(2);
});
});
describe("signature", () => {
it("produces a stable 64-char hex HMAC", () => {
const sig = finSignature("hello");
expect(sig).toMatch(/^[0-9a-f]{64}$/);
expect(finSignature("hello")).toBe(sig);
expect(finSignature("world")).not.toBe(sig);
});
});
describe("HTTP router", () => {
const app = express();
app.use("/fin", buildSandboxRouter());
beforeEach(() => resetSandboxForTests());
it("POST /fin/dispatch returns 202 + reference", async () => {
const resp = await request(app)
.post("/fin/dispatch")
.send({ messageType: "MT760", payload: "mt760", planId: "plan-x" })
.expect(202);
expect(resp.body.reference).toMatch(/^FIN-/);
expect(resp.body.state).toBe("received");
});
it("POST /fin/dispatch 400s on missing payload", async () => {
await request(app)
.post("/fin/dispatch")
.send({ messageType: "MT760" })
.expect(400);
});
it("POST /fin/advance/:ref walks through lifecycle", async () => {
const d = await request(app)
.post("/fin/dispatch")
.send({ messageType: "pacs.009", payload: "<pacs.009/>" })
.expect(202);
const ref = d.body.reference;
const a1 = await request(app).post(`/fin/advance/${ref}`).expect(200);
expect(a1.body.state).toBe("acknowledged");
const a2 = await request(app).post(`/fin/advance/${ref}`).expect(200);
expect(a2.body.state).toBe("accepted");
const a3 = await request(app).post(`/fin/advance/${ref}`).expect(200);
expect(a3.body.state).toBe("settled");
});
it("GET /fin/messages?planId=... filters", async () => {
await request(app)
.post("/fin/dispatch")
.send({ messageType: "MT760", payload: "a", planId: "p1" });
await request(app)
.post("/fin/dispatch")
.send({ messageType: "MT760", payload: "b", planId: "p2" });
const r = await request(app).get("/fin/messages?planId=p1").expect(200);
expect(r.body.messages).toHaveLength(1);
expect(r.body.messages[0].planId).toBe("p1");
});
it("GET /fin/messages/:ref returns 404 for unknown", async () => {
await request(app).get("/fin/messages/FIN-UNKNOWN").expect(404);
});
});
describe("client", () => {
beforeEach(() => resetSandboxForTests());
it("createInProcessFinLinkClient dispatches and reads back", async () => {
const client = await createInProcessFinLinkClient();
const ack = await client.dispatch({
messageType: "MT760",
payload: "mt760",
planId: "plan-ip",
});
expect(ack.reference).toMatch(/^FIN-/);
const msg = await client.getMessage(ack.reference);
expect(msg?.planId).toBe("plan-ip");
});
it("createHttpFinLinkClient hits the live router", async () => {
const app = express();
app.use("/fin", buildSandboxRouter());
const server = app.listen(0);
try {
const addr = server.address();
const port = typeof addr === "object" && addr ? addr.port : 0;
const client = createHttpFinLinkClient(`http://127.0.0.1:${port}/fin`);
const ack = await client.dispatch({
messageType: "pacs.009",
payload: "<pacs.009/>",
planId: "plan-http",
});
expect(ack.reference).toMatch(/^FIN-/);
const msg = await client.getMessage(ack.reference);
expect(msg?.messageType).toBe("pacs.009");
const missing = await client.getMessage("FIN-DOES-NOT-EXIST");
expect(missing).toBeNull();
} finally {
server.close();
}
});
});
});
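
The obligations suite below specifies `canonicalize()`: object keys sorted at every depth, array order preserved, `null` passed through. A minimal implementation consistent with those assertions is sketched here; the repo's version may handle more edge cases (`undefined`, bigints, cycles), which are deliberately out of scope.

```typescript
// Canonical JSON serializer sketch: deterministic output regardless of
// object-key insertion order, so hashes over the string are stable.
function canonicalize(value: unknown): string {
  if (value === null || typeof value !== "object") return JSON.stringify(value);
  if (Array.isArray(value)) {
    return `[${value.map(canonicalize).join(",")}]`;
  }
  const obj = value as Record<string, unknown>;
  const entries = Object.keys(obj)
    .sort()
    .map((k) => `${JSON.stringify(k)}:${canonicalize(obj[k])}`);
  return `{${entries.join(",")}}`;
}
```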


@@ -0,0 +1,284 @@
import { describe, it, expect } from "@jest/globals";
import {
canonicalize,
hashObligationTerms,
validateObligationTerms,
evaluateClauses,
evaluateCommit,
evaluateAbort,
buildIssueInstrumentObligation,
type ObligationTerms,
} from "../../src/services/obligations";
import { evaluateCondition, resolvePath } from "../../src/services/obligations/evaluator";
describe("Obligation layer (gap-analysis v2 §4.1)", () => {
const instrument = {
applicant: "ACME Corp",
issuingBankBIC: "CHASUS33",
beneficiaryBankBIC: "EBILAEAD",
beneficiaryName: "Acme Beneficiary Ltd",
beneficiaryAccount: "AE070331234567890123456",
amount: 1_000_000,
currency: "USD",
tenor: "1Y",
expiryDate: "2026-12-31",
placeOfPresentation: "Dubai",
governingLaw: "URDG 758",
templateRef: "emirates-islamic-sblc-v3",
templateHash: "a".repeat(64),
};
const authorizedParticipants = [
{ role: "coordinator" as const, actorId: "actor-1" },
{ role: "approver" as const, actorId: "actor-2" },
{ role: "releaser" as const, actorId: "actor-3" },
{ role: "validator" as const, actorId: "actor-4" },
{ role: "exception_manager" as const, actorId: "actor-5" },
];
describe("canonicalize()", () => {
it("sorts object keys at every depth", () => {
const a = canonicalize({ b: 1, a: { d: 2, c: 3 } });
const b = canonicalize({ a: { c: 3, d: 2 }, b: 1 });
expect(a).toBe(b);
expect(a).toBe('{"a":{"c":3,"d":2},"b":1}');
});
it("preserves array order", () => {
expect(canonicalize({ x: [3, 1, 2] })).toBe('{"x":[3,1,2]}');
});
it("handles null and nested arrays of objects", () => {
expect(
canonicalize({ a: null, b: [{ y: 2, x: 1 }, { z: 3 }] }),
).toBe('{"a":null,"b":[{"x":1,"y":2},{"z":3}]}');
});
});
describe("hashObligationTerms()", () => {
const terms = buildIssueInstrumentObligation({
instrument,
payor: "ACME Corp",
payee: "Acme Beneficiary Ltd",
authorizedParticipants,
});
it("produces a 64-char hex hash", () => {
expect(hashObligationTerms(terms)).toMatch(/^[0-9a-f]{64}$/);
});
it("is insensitive to key ordering", () => {
const shuffled: ObligationTerms = {
...terms,
consideration: {
payee: terms.consideration.payee,
currency: terms.consideration.currency,
amount: terms.consideration.amount,
payor: terms.consideration.payor,
},
};
expect(hashObligationTerms(shuffled)).toBe(hashObligationTerms(terms));
});
it("changes when any field mutates", () => {
const mutated: ObligationTerms = {
...terms,
consideration: { ...terms.consideration, amount: 999 },
};
expect(hashObligationTerms(mutated)).not.toBe(hashObligationTerms(terms));
});
});
describe("validateObligationTerms()", () => {
const valid = buildIssueInstrumentObligation({
instrument,
payor: "A",
payee: "B",
authorizedParticipants,
});
it("accepts a well-formed obligation", () => {
expect(validateObligationTerms(valid).ok).toBe(true);
});
it("rejects non-object input", () => {
expect(validateObligationTerms(null).ok).toBe(false);
expect(validateObligationTerms("nope").ok).toBe(false);
});
it("flags missing consideration fields", () => {
const bad = {
...valid,
consideration: { payor: "A", payee: "B", currency: "usd", amount: -5 },
};
const r = validateObligationTerms(bad);
expect(r.ok).toBe(false);
expect(r.errors).toEqual(
expect.arrayContaining([
expect.stringContaining("ISO-4217"),
expect.stringContaining("amount"),
]),
);
});
it("flags bad template hash", () => {
const bad = {
...valid,
governingDocuments: [
{ templateRef: "t", templateHash: "not-a-hash" },
],
};
const r = validateObligationTerms(bad);
expect(r.ok).toBe(false);
expect(r.errors.some((e) => e.includes("hex SHA-256"))).toBe(true);
});
it("flags empty authorizedParticipants[].role", () => {
const bad = {
...valid,
authorizedParticipants: [{ actorId: "x" }],
};
const r = validateObligationTerms(bad);
expect(r.ok).toBe(false);
});
});
describe("evaluator", () => {
it("resolvePath handles dotted + indexed paths", () => {
const ctx = { plan: { steps: [{ type: "pay" }, { type: "issueInstrument" }] } };
expect(resolvePath("plan.steps[1].type", ctx)).toBe("issueInstrument");
expect(resolvePath("plan.missing.x", ctx)).toBeUndefined();
});
it("evaluates all/any/not combinators", () => {
const ctx = { a: 1, b: 2 };
expect(
evaluateCondition(
{
all: [
{ path: "a", op: "eq", value: 1 },
{ path: "b", op: "gt", value: 1 },
],
},
ctx,
),
).toBe(true);
expect(
evaluateCondition(
{
any: [
{ path: "a", op: "eq", value: 99 },
{ path: "b", op: "gt", value: 1 },
],
},
ctx,
),
).toBe(true);
expect(
evaluateCondition({ not: { path: "a", op: "eq", value: 2 } }, ctx),
).toBe(true);
});
it("matches regex operator safely (no eval)", () => {
expect(
evaluateCondition(
{ path: "h", op: "matches", value: "^0x[0-9a-f]{4}$" },
{ h: "0xbeef" },
),
).toBe(true);
expect(
evaluateCondition(
{ path: "h", op: "matches", value: "^0x[0-9a-f]{4}$" },
{ h: "0xBEEFG" },
),
).toBe(false);
});
});
describe("evaluateClauses / evaluateCommit / evaluateAbort", () => {
const terms = buildIssueInstrumentObligation({
instrument,
payor: "ACME Corp",
payee: "Acme Beneficiary Ltd",
authorizedParticipants,
});
const passingCtx = {
state: "VALIDATING",
dlt: { tx_hash: "0x" + "b".repeat(64) },
bank: { iso_message_id: "MSG-1" },
exceptions: { active: [] },
instrument: { template_hash: instrument.templateHash, dispatched: true },
payment: {
amount: instrument.amount,
currency: instrument.currency,
failed: false,
},
};
it("evaluateCommit returns ok=true when all commit clauses pass", () => {
const r = evaluateCommit(terms, passingCtx);
expect(r.ok).toBe(true);
expect(r.results.every((x) => x.ok)).toBe(true);
});
it("evaluateCommit returns ok=false with per-clause reasons on failure", () => {
const badCtx = { ...passingCtx, dlt: { tx_hash: "not-hex" } };
const r = evaluateCommit(terms, badCtx);
expect(r.ok).toBe(false);
const failing = r.results.find((x) => !x.ok);
expect(failing?.clauseId).toBe("commit.dlt_tx_hash");
expect(failing?.failureReason).toBeTruthy();
});
it("evaluateAbort fires when an active exception exists", () => {
const ctx = {
...passingCtx,
exceptions: { active: [{ kind: "timeout" }] },
};
const r = evaluateAbort(terms, ctx);
expect(r.ok).toBe(true);
expect(r.results.find((x) => x.clauseId === "abort.exception_raised")?.ok).toBe(
true,
);
});
it("evaluateClauses surfaces evaluator errors without throwing", () => {
const bogus = [
{
id: "bogus",
description: "bad regex",
binds: "both" as const,
assert: { path: "h", op: "matches" as const, value: "[" }, // invalid regex
},
];
const r = evaluateClauses(bogus, { h: "x" });
expect(r.ok).toBe(false);
expect(r.results[0].failureReason).toBeTruthy();
});
});
describe("buildIssueInstrumentObligation()", () => {
it("binds the instrument template hash into governingDocuments", () => {
const terms = buildIssueInstrumentObligation({
instrument,
payor: "A",
payee: "B",
authorizedParticipants,
});
expect(terms.governingDocuments[0].templateHash).toBe(instrument.templateHash);
expect(terms.governingDocuments[0].governingLaw).toBe("URDG 758");
});
it("validates cleanly", () => {
const terms = buildIssueInstrumentObligation({
instrument,
payor: "A",
payee: "B",
authorizedParticipants,
});
expect(validateObligationTerms(terms).ok).toBe(true);
});
});
});


@@ -1,75 +0,0 @@
/**
* Tests for per-state phase timeouts (arch §12.1 / gap v2 §7.6 / §10.7).
*/
import {
DEFAULT_PHASE_TIMEOUTS,
TRANSACTION_STATES,
getPhaseTimeoutMs,
} from "../../src/types/transactionState";
describe("PHASE_TIMEOUTS", () => {
const savedEnv = { ...process.env };
afterEach(() => {
process.env = { ...savedEnv };
});
it("has a mapping for every declared transaction state", () => {
for (const s of TRANSACTION_STATES) {
expect(DEFAULT_PHASE_TIMEOUTS).toHaveProperty(s);
}
});
it("CLOSED is the only state without a timeout", () => {
const nullStates = TRANSACTION_STATES.filter(
(s) => DEFAULT_PHASE_TIMEOUTS[s] === null,
);
expect(nullStates).toEqual(["CLOSED"]);
});
it("all non-terminal timeouts are strictly positive integers", () => {
for (const s of TRANSACTION_STATES) {
const v = DEFAULT_PHASE_TIMEOUTS[s];
if (v === null) continue;
expect(v).toBeGreaterThan(0);
expect(Number.isInteger(v)).toBe(true);
}
});
it("getPhaseTimeoutMs honours a valid env override", () => {
process.env.PHASE_TIMEOUT_EXECUTING = "123456";
expect(getPhaseTimeoutMs("EXECUTING")).toBe(123456);
});
it("getPhaseTimeoutMs treats override '0' as 'no timeout' (null)", () => {
process.env.PHASE_TIMEOUT_EXECUTING = "0";
expect(getPhaseTimeoutMs("EXECUTING")).toBeNull();
});
it("getPhaseTimeoutMs falls back to default when override is invalid", () => {
process.env.PHASE_TIMEOUT_EXECUTING = "not-a-number";
expect(getPhaseTimeoutMs("EXECUTING")).toBe(
DEFAULT_PHASE_TIMEOUTS.EXECUTING,
);
});
it("getPhaseTimeoutMs falls back to default when override is negative", () => {
process.env.PHASE_TIMEOUT_PREPARED = "-1";
expect(getPhaseTimeoutMs("PREPARED")).toBe(
DEFAULT_PHASE_TIMEOUTS.PREPARED,
);
});
it("returns the default when no env override exists", () => {
delete process.env.PHASE_TIMEOUT_VALIDATING;
expect(getPhaseTimeoutMs("VALIDATING")).toBe(
DEFAULT_PHASE_TIMEOUTS.VALIDATING,
);
});
it("CLOSED stays null even with no env override", () => {
delete process.env.PHASE_TIMEOUT_CLOSED;
expect(getPhaseTimeoutMs("CLOSED")).toBeNull();
});
});


@@ -0,0 +1,245 @@
/**
* PR P — Pluggable Rules Engine (gap-analysis v2 §5.2 partial).
*/
import { describe, it, expect, beforeEach } from "@jest/globals";
import {
evaluate,
evaluateCondition,
getRuleSet,
BUILTIN_PRECONDITIONS,
BUILTIN_COMMIT,
__resetRulesCacheForTests,
type RuleSet,
} from "../../src/services/rulesEngine";
describe("rulesEngine — primitive operators", () => {
it("eq / neq / gt / gte / lt / lte", () => {
expect(
evaluateCondition({ path: "a", op: "eq", value: 1 }, { a: 1 }),
).toBe(true);
expect(
evaluateCondition({ path: "a", op: "neq", value: 1 }, { a: 2 }),
).toBe(true);
expect(
evaluateCondition({ path: "a", op: "gt", value: 1 }, { a: 2 }),
).toBe(true);
expect(
evaluateCondition({ path: "a", op: "lte", value: 3 }, { a: 3 }),
).toBe(true);
});
it("in / not_in / exists / matches", () => {
expect(
evaluateCondition(
{ path: "role", op: "in", value: ["approver", "releaser"] },
{ role: "approver" },
),
).toBe(true);
expect(
evaluateCondition(
{ path: "role", op: "not_in", value: ["approver"] },
{ role: "operator" },
),
).toBe(true);
expect(
evaluateCondition({ path: "x", op: "exists" }, { x: 0 }),
).toBe(true);
expect(
evaluateCondition(
{ path: "hash", op: "matches", value: "^0x[0-9a-f]+$" },
{ hash: "0xabc" },
),
).toBe(true);
});
it("length_gte / length_lte work on arrays and strings", () => {
expect(
evaluateCondition({ path: "a", op: "length_gte", value: 2 }, { a: [1, 2] }),
).toBe(true);
expect(
evaluateCondition({ path: "a", op: "length_lte", value: 5 }, { a: "abcd" }),
).toBe(true);
});
it("dotted + indexed path resolution", () => {
expect(
evaluateCondition(
{ path: "plan.steps[1].type", op: "eq", value: "pay" },
{ plan: { steps: [{ type: "issue" }, { type: "pay" }] } },
),
).toBe(true);
});
});
describe("rulesEngine — combinators", () => {
const ctx = { role: "approver", amount: 1000 };
it("all (AND) — every child must pass", () => {
expect(
evaluateCondition(
{
all: [
{ path: "role", op: "eq", value: "approver" },
{ path: "amount", op: "gt", value: 500 },
],
},
ctx,
),
).toBe(true);
expect(
evaluateCondition(
{
all: [
{ path: "role", op: "eq", value: "approver" },
{ path: "amount", op: "gt", value: 5000 },
],
},
ctx,
),
).toBe(false);
});
it("any (OR) — at least one child must pass", () => {
expect(
evaluateCondition(
{
any: [
{ path: "role", op: "eq", value: "releaser" },
{ path: "amount", op: "gt", value: 500 },
],
},
ctx,
),
).toBe(true);
});
it("not — inverts the child", () => {
expect(
evaluateCondition(
{ not: { path: "role", op: "eq", value: "releaser" } },
ctx,
),
).toBe(true);
});
});
describe("rulesEngine — evaluate() and failure reporting", () => {
const ruleSet: RuleSet = {
id: "test.rs",
rules: [
{
id: "amount_positive",
description: "amount must be > 0",
assert: { path: "amount", op: "gt", value: 0 },
},
{
id: "role_listed",
description: "role must be in the allowed list",
assert: {
path: "role",
op: "in",
value: ["approver", "releaser", "operator"],
},
},
{
id: "warning_only",
description: "low amount warning",
severity: "warn",
assert: { path: "amount", op: "gte", value: 10_000 },
},
],
};
it("returns ok=true when all error-severity rules pass", () => {
const res = evaluate(ruleSet, { amount: 1000, role: "approver" });
expect(res.ok).toBe(true);
// warn still reported even though ok=true
expect(res.failures.some((f) => f.ruleId === "warning_only")).toBe(true);
expect(res.failures.every((f) => f.severity === "warn")).toBe(true);
});
it("returns ok=false with error failure when a blocking rule fails", () => {
const res = evaluate(ruleSet, { amount: -1, role: "approver" });
expect(res.ok).toBe(false);
const amountFail = res.failures.find((f) => f.ruleId === "amount_positive");
expect(amountFail?.severity).toBe("error");
});
it("'when' gates a rule — false when-clause skips the assert", () => {
const guarded: RuleSet = {
id: "guarded.rs",
rules: [
{
id: "kyc_if_present",
when: { path: "compliance", op: "exists" },
assert: { path: "compliance.kyc", op: "eq", value: "ok" },
},
],
};
expect(evaluate(guarded, {}).ok).toBe(true);
expect(evaluate(guarded, { compliance: { kyc: "ok" } }).ok).toBe(true);
expect(evaluate(guarded, { compliance: { kyc: "fail" } }).ok).toBe(false);
});
});
describe("rulesEngine — built-in rule sets", () => {
it("preconditions: pay step + non-empty participants passes", () => {
const res = evaluate(BUILTIN_PRECONDITIONS, {
plan: { steps: [{ type: "pay" }] },
participants: [{ id: "p1" }],
});
expect(res.ok).toBe(true);
});
it("preconditions: missing pay step fails", () => {
const res = evaluate(BUILTIN_PRECONDITIONS, {
plan: { steps: [{ type: "issueInstrument" }] },
participants: [{ id: "p1" }],
});
expect(res.ok).toBe(false);
expect(res.failures.some((f) => f.ruleId === "plan.pay_step_present")).toBe(
true,
);
});
it("commit: VALIDATING + matching refs + no exceptions passes", () => {
const res = evaluate(BUILTIN_COMMIT, {
state: "VALIDATING",
dlt: { txHash: `0x${"a".repeat(64)}` },
bank: { isoMessageId: "MSG-1" },
exceptions: { active: [] },
});
expect(res.ok).toBe(true);
});
it("commit: state != VALIDATING blocks", () => {
const res = evaluate(BUILTIN_COMMIT, {
state: "EXECUTING",
dlt: { txHash: `0x${"a".repeat(64)}` },
bank: { isoMessageId: "MSG-1" },
exceptions: { active: [] },
});
expect(res.ok).toBe(false);
expect(res.failures.some((f) => f.ruleId === "state.is_validating")).toBe(
true,
);
});
});
describe("rulesEngine — pluggable loading", () => {
beforeEach(() => {
__resetRulesCacheForTests();
delete process.env.RULES_FILE;
});
it("returns built-ins when RULES_FILE is unset", () => {
expect(getRuleSet(BUILTIN_PRECONDITIONS.id).rules.length).toBeGreaterThan(0);
expect(getRuleSet(BUILTIN_COMMIT.id).rules.length).toBeGreaterThan(0);
});
it("returns an empty rule set for unknown ids (no throw)", () => {
const rs = getRuleSet("nonexistent");
expect(rs.rules).toEqual([]);
});
});


@@ -0,0 +1,80 @@
# CurrenciCombo orchestrator production env (Phoenix CT 8604 / any systemd host)
#
# Installed by scripts/deployment/install.sh to:
# /etc/currencicombo/orchestrator.env
#
# Loaded by the currencicombo-orchestrator.service systemd unit via its
# EnvironmentFile= directive. Values committed here are safe defaults;
# secrets are left blank and must be set before first boot.
#
# The portal is a statically built SPA (nginx), so it takes NO runtime env.
# Any VITE_* vars needed at build time are baked into dist/ by
# scripts/deployment/deploy-currencicombo-8604.sh before the rsync.
############################################################
# Server
############################################################
NODE_ENV=production
PORT=8080
# Bind to loopback when NPMplus runs on the same host; bind 0.0.0.0 when
# NPMplus is on a different host. CT 8604 is the latter, hence 0.0.0.0.
HOST=0.0.0.0
############################################################
# Postgres (local to the CT per install.sh)
############################################################
DATABASE_URL=postgresql://currencicombo:replace-me-on-install@127.0.0.1:5432/currencicombo
############################################################
# Redis (local to the CT per install.sh)
############################################################
REDIS_URL=redis://127.0.0.1:6379
############################################################
# Event bus signing (REQUIRED). install.sh generates this on first run
# via `openssl rand -hex 32` unless the file already exists.
############################################################
EVENT_SIGNING_SECRET=
############################################################
# API keys per role (REQUIRED). install.sh generates three random
# initiator/settler/auditor keys on first run unless set.
# Format: key1:role1,key2:role2,...
############################################################
API_KEYS=
############################################################
# Chain 138 — resolves EXT-CHAIN138-CI-RPC (already resolved).
############################################################
CHAIN_138_RPC_URL=https://rpc.public-0138.defi-oracle.io
CHAIN_138_CHAIN_ID=138
# Leave empty to run mock notary. Populate after running
# `contracts/scripts/deploy-notary-registry.ts` once.
NOTARY_REGISTRY_ADDRESS=
# Leave empty to run mock notary. Otherwise 0x-prefixed 32-byte hex.
ORCHESTRATOR_PRIVATE_KEY=
############################################################
# External dependency blockers (leave blank → mock fallback + EXT-* log)
# These are the exact IDs that the Proxmox
# scripts/verify/check-external-dependencies.sh gate knows about.
############################################################
# EXT-DBIS-CORE — set when dbis_core is deployed and reachable.
DBIS_CORE_URL=
# EXT-FIN-GATEWAY — set when a real Alliance Access / FIN gateway is
# provisioned. Leave blank to use PR R's in-process sandbox.
FIN_SANDBOX_URL=
# EXT-CC-* — the following four blockers are upstream-scaffold repos
# (cc-payment-adapters, cc-audit-ledger, cc-shared-events,
# cc-shared-schemas). They cannot be resolved from this repo; no
# env var flips them. The orchestrator logs EXT-CC-* as active on boot.
# Identity + controls matrix (not blocker IDs per se — these ship
# today via the cc-identity-core and cc-compliance-controls adapters
# merged in PR V/W). Leave blank to keep the embedded v0 matrix + mock identity.
CC_IDENTITY_URL=
CC_CONTROLS_MATRIX_URL=
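Blank required secrets are the most common first-boot failure (the orchestrator's env validation refuses to start and names the missing var). A small pre-flight along these lines can catch that before systemd does — `check_required_env` is a hypothetical helper, not part of install.sh; the orchestrator performs its own authoritative checks:

```shell
# Fail fast if any required var is still blank in the env file.
check_required_env() {
  local file="$1" missing=0 var
  for var in EVENT_SIGNING_SECRET API_KEYS DATABASE_URL REDIS_URL; do
    # require "VAR=" followed by at least one character
    grep -Eq "^${var}=.+" "$file" || { echo "unset: $var" >&2; missing=1; }
  done
  return "$missing"
}
# usage: check_required_env /etc/currencicombo/orchestrator.env && echo "env ok"
```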


@@ -0,0 +1,254 @@
# CurrenciCombo — Phoenix / systemd deployment
This directory holds everything needed to deploy CurrenciCombo onto a
systemd host — starting with Phoenix CT 8604 on `r630-01`, but any
Debian/Ubuntu (or Alpine) host with Postgres + Redis available works.
The files here are **target-agnostic**. They hardcode no IPs, hostnames,
or VLANs. Environment-specific values — `curucombo.曼李.com`, the
`10.160.0.14` VIP, the NPMplus reverse proxy — are applied at the
edge (NPMplus) and at `/etc/currencicombo/orchestrator.env`, never in
the repo.
## Architecture on CT 8604
```
                        ┌──────────────────────┐
curucombo.曼李.com ───▶ │ NPMplus              │
(Cloudflare-proxied)    │ 192.168.11.167       │
                        │ TLS terminates here  │
                        └──────────┬───────────┘
          ┌──────────────────────┴──────────────────────┐
          │                                             │
          ▼                                             ▼
curucombo.曼李.com/* (default)          curucombo.曼李.com/api/*
                                (incl. SSE /api/plans/*/events/stream)
          │                                             │
CT 8604   │ 10.160.0.14:3000            CT 8604         │ 10.160.0.14:8080
          ▼                                             ▼
┌──────────────────────────────┐   ┌──────────────────────────────┐
│ currencicombo-webapp.service │   │ currencicombo-orchestrator   │
│ nginx → /opt/currencicombo/  │   │   .service (systemd)         │
│   webapp/dist/               │   │ node dist/index.js           │
└──────────────────────────────┘   │ env /etc/currencicombo/      │
                                   │   orchestrator.env           │
                                   └──────────────┬───────────────┘
                                                  │
                                postgresql + redis (same CT, local)
```
## Files
| path | purpose |
|---|---|
| `systemd/currencicombo-orchestrator.service` | Node orchestrator, reads `/etc/currencicombo/orchestrator.env` |
| `systemd/currencicombo-webapp.service` | nginx serving the Vite SPA on `:3000` |
| `webapp-nginx.conf` | full nginx.conf for the webapp unit |
| `.env.prod.example` | env template installed to `/etc/currencicombo/orchestrator.env` |
| `install.sh` | one-shot host setup: user / dirs / DB role / systemd units / first-run key handoff file |
| `install-prune-cron.sh` | opt-in daily cron that prunes `/var/lib/currencicombo/backups/` (30-day retention, keep-min 5) |
| `deploy-currencicombo-8604.sh` | build-and-swap deploy driver (the script Phoenix/proxmox deploy-api calls) |
| `README.md` | you're reading it |
## First-time setup on CT 8604
All commands run as **root** inside the CT.
1. Ensure Postgres + Redis are installed and running:
```
apt-get install -y postgresql redis-server
systemctl enable --now postgresql redis-server
```
2. Clone the repo into its staging location (once):
```
install -d -o root -g root /var/lib/currencicombo
git clone https://gitea.d-bis.org/d-bis/CurrenciCombo.git /var/lib/currencicombo/repo
```
3. Run `install.sh` (creates user, DB, systemd units, env file):
```
bash /var/lib/currencicombo/repo/scripts/deployment/install.sh
```
On success you'll see:
```
[install] generated EVENT_SIGNING_SECRET (64 hex)
[install] generated 3 API keys (initiator/settler/auditor)
[install] initial secrets written to /root/currencicombo-first-keys.txt (0600) — record in password manager, then 'shred -u /root/currencicombo-first-keys.txt'
[install] install complete.
```
`install.sh` writes the three API keys + `EVENT_SIGNING_SECRET` to **two** places:
- `/etc/currencicombo/orchestrator.env` — canonical, read by systemd (`0640`, owned by `currencicombo`).
- `/root/currencicombo-first-keys.txt` — **root-only handoff file** (`0600`). Grab it once, record the values in your password manager, then `shred -u` it.
The handoff file is **not** regenerated on re-run — if `orchestrator.env` already exists, `install.sh` does not produce new secrets.
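If a secret ever needs rotating after the handoff file is gone, the same shape install.sh generates (`openssl rand -hex 32`) can be reproduced by hand. A minimal sketch, assuming GNU sed — `rotate_signing_secret` is a hypothetical helper, not an installed command; restart the orchestrator unit after running it:

```shell
# Regenerate a 64-hex-char secret and splice it into the env file in place.
# Takes the env file path as a parameter so it can be dry-run against a copy;
# the live path is /etc/currencicombo/orchestrator.env.
rotate_signing_secret() {
  local env_file="$1"
  local new_secret
  new_secret="$(openssl rand -hex 32)"
  sed -i "s|^EVENT_SIGNING_SECRET=.*|EVENT_SIGNING_SECRET=${new_secret}|" "$env_file"
}
# usage: rotate_signing_secret /etc/currencicombo/orchestrator.env
#        systemctl restart currencicombo-orchestrator
```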
4. (Optional) Install the backup-pruning cron:
```
bash /var/lib/currencicombo/repo/scripts/deployment/install-prune-cron.sh
```
Drops a `/etc/cron.daily/currencicombo-prune-backups` that deletes anything under `/var/lib/currencicombo/backups/` older than 30 days while **always keeping the newest 5** regardless of age. Safe on re-run; opt out with `sudo rm /etc/cron.daily/currencicombo-prune-backups`.
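The retention rule that cron enforces (delete after 30 days, never go below the newest 5) can be sketched as follows. This is an assumed shape for illustration; the installed `install-prune-cron.sh` script is the authority, and timestamped backup names contain no whitespace:

```shell
# Prune backups older than 30 days, but always keep the newest 5 regardless of age.
prune_backups() {
  local dir="$1" keep_min=5 max_age_days=30
  # newest first; skip the first $keep_min entries, then age-check the rest
  ls -1dt "$dir"/* 2>/dev/null | tail -n +"$((keep_min + 1))" | while read -r path; do
    # -mtime +29 matches entries at least 30 full days old (find counts whole 24h periods)
    find "$path" -maxdepth 0 -mtime +"$((max_age_days - 1))" -exec rm -rf {} +
  done
}
# usage: prune_backups /var/lib/currencicombo/backups
```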
5. If you need to resolve any `EXT-*` blocker (e.g. point at a real dbis_core), edit `/etc/currencicombo/orchestrator.env` before the first deploy.
6. First build-and-start:
```
bash /var/lib/currencicombo/repo/scripts/deployment/deploy-currencicombo-8604.sh
```
Expected tail:
```
[deploy] orchestrator ready: {"ready":true}
[deploy] portal OK (HTTP 200)
[deploy] EXT-* blocker summary from orchestrator boot log:
[ExternalBlockers] 6 active, 1 resolved
id: EXT-DBIS-CORE
id: EXT-CC-PAYMENT-ADAPTERS
...
id: EXT-CHAIN138-CI-RPC (resolved)
[deploy] deploy complete. ref=main sha=<short> ts=<timestamp>
```
## NPMplus ingress changes required at cutover
`curucombo.曼李.com` today proxies 100% to `10.160.0.14:3000`. After
cutover it must become a **single-origin path-routed proxy** with **two**
rules (the SSE endpoint lives at `/api/plans/:id/events/stream`, so it's
already under `/api/*` — no separate `/events/*` rule is needed):
| location | upstream | proxy settings |
|---|---|---|
| `/api/*` | `http://10.160.0.14:8080` | **SSE-friendly settings apply here because the SSE route `/api/plans/:id/events/stream` is under /api/**. Use `proxy_pass http://10.160.0.14:8080;` with **no trailing slash** so `/api/...` reaches the orchestrator unchanged. Set: `proxy_http_version 1.1;`, `proxy_set_header Connection "";`, `proxy_buffering off;`, `proxy_cache off;`, `proxy_read_timeout 24h;`, `proxy_send_timeout 24h;`. Standard forwarding: `proxy_set_header Host $host;`, `X-Real-IP $remote_addr;`, `X-Forwarded-For $proxy_add_x_forwarded_for;`, `X-Forwarded-Proto $scheme;`. The slight overhead of `proxy_buffering off` on plain REST calls is negligible for this workload. |
| `/` | `http://10.160.0.14:3000` | Vite SPA. Default upstream. No special settings. |
If you skip the `/api/*` rule, the nginx in `webapp-nginx.conf`
intentionally returns `HTTP 421` for that path — a clean "upstream is
misconfigured" signal instead of silently returning `index.html` and
breaking the browser with a JSON parse error.
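When debugging, the three possible responses on `/api/ready` can be told apart mechanically. A sketch — `check_api_route` is a hypothetical helper, fed from a `curl -w '%{http_code} %{content_type}'` probe, not a script in this repo:

```shell
# Classify an /api/ready probe by HTTP status + Content-Type:
#   ok           -> orchestrator answered (JSON through the /api/* rule)
#   spa-fallback -> NPMplus routed /api/* to the SPA (rule missing / mis-ordered)
#   guard-421    -> the webapp-nginx.conf catch-all fired (CT-local debugging)
check_api_route() {
  local status="$1" ctype="$2"
  if [ "$status" = "421" ]; then
    echo "guard-421"
  elif [ "$status" = "200" ] && printf '%s' "$ctype" | grep -qi 'application/json'; then
    echo "ok"
  elif printf '%s' "$ctype" | grep -qi 'text/html'; then
    echo "spa-fallback"
  else
    echo "unknown"
  fi
}
# probe (from a workstation):
#   read -r status ctype < <(curl -sk -o /dev/null \
#     -w '%{http_code} %{content_type}' https://curucombo.xn--vov0g.com/api/ready)
#   check_api_route "$status" "$ctype"
```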
## Subsequent deploys
Every deploy after the first is just:
```
sudo /var/lib/currencicombo/repo/scripts/deployment/deploy-currencicombo-8604.sh
```
Flags:
- `--ref=<branch-or-sha>` — deploy something other than `main`.
- `--dry-run` — print what would happen, don't touch anything.
- `--skip-migrate` — hotfix deploys that don't change the schema.
- `--skip-build` — reuse the build from the previous run (debugging only).
- `--rollback` — restore the most recent `/var/lib/currencicombo/backups/<ts>/` and restart units. Does **not** git-pull or rebuild.
Every deploy writes a timestamped backup to
`/var/lib/currencicombo/backups/<YYYYmmdd-HHMMSS>/` before swapping. Pruning is opt-in via `install-prune-cron.sh` (30-day retention, keep-min 5). Without the cron, backups accumulate forever — quietly filling `/var/lib` is how the next outage starts.
## Failure handling on deploy
**Rollback is manual.** `deploy-currencicombo-8604.sh` **does not** auto-restore the previous backup if the orchestrator fails to become ready. First cutovers typically fail because of env typos or migration mistakes, and auto-restoring hides the failure state ops needs.
Instead, on a readiness timeout the deploy script prints:
- last 40 lines of `journalctl -u currencicombo-orchestrator`
- last 20 lines of `journalctl -u currencicombo-webapp`
- **the exact `--rollback` command with the specific backup path filled in**
Example tail on failure:
```
================================================================
DEPLOY FAILED: orchestrator did not become ready after 60s
================================================================
## currencicombo-orchestrator (last 40 lines):
... env validation error: EVENT_SIGNING_SECRET is required ...
## Units are in whatever state deploy left them. To restore
## the previous build (does NOT revert DB migrations):
sudo /var/lib/currencicombo/repo/scripts/deployment/deploy-currencicombo-8604.sh --rollback
# (will restore /var/lib/currencicombo/backups/20260423-140215)
================================================================
```
Rollback one-liner (when ops has decided to restore):
```
sudo /var/lib/currencicombo/repo/scripts/deployment/deploy-currencicombo-8604.sh --rollback
```
Rollback restores the most recent backup and restarts both units. It **does not** touch the DB. If the failed deploy applied a new migration, DB rollback is a manual `psql` task — the orchestrator's migration runner only emits `up()` paths.
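Before pulling the trigger, ops can see which backup `--rollback` would restore: it picks the most recently modified directory under the backup root. A sketch of that selection logic (`latest_backup` is an illustrative helper mirroring the script's behaviour, not an installed command):

```shell
# Newest-first selection: the most recently modified entry under the backup root.
latest_backup() {
  ls -1dt "$1"/* 2>/dev/null | head -1
}
# usage: latest_backup /var/lib/currencicombo/backups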
## Post-cutover smoke checks through NPMplus
Once the NPMplus `/api/*` rule is live, from a workstation (not the CT):
```
# 1. Front-door TLS is healthy
curl -skI https://curucombo.xn--vov0g.com/ | head -3
# expect: HTTP/2 200
# expect: NO 'x-nextjs-prerender' header (that was the old Next.js build)
# 2. SPA is the new Vite portal
curl -sk https://curucombo.xn--vov0g.com/ | grep -oE '<title>[^<]+</title>'
# expect: <title>Solace Bank Group PLC — Treasury Management Portal</title>
# 3. Orchestrator ready through NPMplus
curl -sk https://curucombo.xn--vov0g.com/api/ready | head -1
# expect: {"ready":true} (not HTML)
# 4. Orchestrator blocker log (through CT shell, not NPMplus)
ssh root@10.160.0.14 'journalctl -u currencicombo-orchestrator -n 200 | grep -E "ExternalBlockers|EXT-"'
# expect: [ExternalBlockers] 6 active, 1 resolved
# expect: one line per EXT-* id
# 5. SSE actually streams (catches silent NPMplus proxy_buffering=on misconfig)
curl -sk -N --max-time 5 -H 'Accept: text/event-stream' \
https://curucombo.xn--vov0g.com/api/plans/demo-pay-014/events/stream \
| head -20 || true
# expect: HTTP/2 200 with Content-Type: text/event-stream
# expect: at least one 'data: {...}\n\n' frame to arrive WITHIN ~1s
# if you see nothing for 3-5s and then everything dumps at once:
# NPMplus has proxy_buffering=on. Fix: proxy_buffering off; proxy_http_version 1.1; proxy_set_header Connection "";
# if the ping is 401/403: expected — SSE is auth-gated; the point is to
# prove the request REACHED the orchestrator (content-type header +
# chunked response headers) rather than hitting the Vite SPA.
```
A plain `HTTP/2 200` with a `Content-Type: text/html` body on `/api/ready` means NPMplus is silently falling back to the `/` rule — the `/api/*` rule is missing or ordered wrong. The `webapp-nginx.conf` in this repo returns `HTTP 421` for `/api/*` to make that case obvious when debugging CT-locally, but at the NPMplus edge nginx serves whatever NPMplus routes to it.
## Troubleshooting
| symptom | cause / check |
|---|---|
| `/api/*` returns `421 NPMplus is misconfigured` | NPMplus `/api/*` rule missing or wrong upstream. |
| SSE stream (`/api/plans/*/events/stream`) connects then disconnects after ~60s | NPMplus forgot `proxy_buffering off` + a high `proxy_read_timeout`. |
| orchestrator unit enters `activating (auto-restart)` loop | `journalctl -u currencicombo-orchestrator -n 80` — usually a zod env-validation error. The boot-time assertion message names the missing/invalid var. |
| orchestrator boot log says `[ExternalBlockers] N active` where N > 6 | you added an `EXT-*` env var without also updating the central registry in `orchestrator/src/config/externalBlockers.ts`. |
| `/health` returns 503 but `/ready` is 200 | memory `critical` is a separate signal from readiness. Inspect CT memory; this happens on constrained builders and is not a deploy bug. |
| portal page loads but MetaMask login does nothing | the portal couldn't reach `/api/auth/*`. Walk back up the NPMplus rule chain. |
## Cutting over from the pre-existing Next.js build
Phoenix previously had an older Next.js "ISO-20022 Combo Flow" app in
`/opt/currencicombo/webapp`. The cutover sequence on CT 8604 is:
1. **Backup the old install** out-of-band:
```
tar czf /root/currencicombo-preRepo-$(date +%s).tgz /opt/currencicombo /etc/currencicombo 2>/dev/null || true
```
2. **Disable the pre-existing systemd units** (they're the same names but point at the old tree):
```
systemctl stop currencicombo-webapp currencicombo-orchestrator
systemctl disable currencicombo-webapp currencicombo-orchestrator
```
3. Run `install.sh` (writes the new units, new nginx, new env). On an already-set-up host this is idempotent: it preserves `/etc/currencicombo/orchestrator.env` if it already exists.
4. Run `deploy-currencicombo-8604.sh`.
5. Apply the NPMplus `/api` + `/` path rules.
6. Smoke from outside the CT: `curl -skI https://curucombo.xn--vov0g.com/ && curl -sk https://curucombo.xn--vov0g.com/api/ready`.
## Proxmox-side follow-up (not in this PR)
After this PR merges and the above cutover runs cleanly, the
`/home/intlc/projects/proxmox` repo needs a separate commit to:
- Update `phoenix-deploy-api/deploy-targets.json` to point at:
- repo: `d-bis/CurrenciCombo`
- branch: `main`
- target: `default`
- deploy entrypoint: `scripts/deployment/deploy-currencicombo-8604.sh`
- Remove any stale `/opt/currencicombo/webapp` Next.js references.
- Drop any description of `ignoreBuildErrors: true` in `webapp/next.config.ts` — the new webapp is Vite+tsc-strict, no build-error suppression.


@@ -0,0 +1,236 @@
#!/usr/bin/env bash
# deploy-currencicombo-8604.sh — build-and-swap deploy for CurrenciCombo.
#
# Runs on a systemd host that has already had `install.sh` applied once.
# This is the script referenced by the Proxmox repo's
# `phoenix-deploy-api/deploy-targets.json` tuple
# (repo=d-bis/CurrenciCombo, branch=main, target=default).
#
# Steps (each idempotent, each can be --dry-run'd):
# 1. git clone/pull /var/lib/currencicombo/repo to the target ref.
# 2. Build orchestrator (npm ci + npm run build).
# 3. Build portal/webapp (npm ci + npm run build), baking
# VITE_ORCHESTRATOR_URL into the bundle.
# 4. Run DB migrations (npm run migrate in orchestrator/).
# 5. Stop systemd units.
# 6. rsync build output into /opt/currencicombo/{orchestrator,webapp}.
# 7. Start systemd units.
# 8. Smoke-test /ready + portal / + print EXT-* blocker summary.
#
# Rollback: `--rollback` restores the previous backup under
# /var/lib/currencicombo/backups/<timestamp>.
#
# CT 8604 is in the filename for ops-grep-ability; the script itself is
# host-agnostic. Override paths via env vars if you run it elsewhere.
set -euo pipefail
# ----- defaults (override via env) ------------------------------------
: "${CC_GIT_REMOTE:=https://gitea.d-bis.org/d-bis/CurrenciCombo.git}"
: "${CC_GIT_REF:=main}"
: "${CC_REPO_DIR:=/var/lib/currencicombo/repo}"
: "${CC_APP_HOME:=/opt/currencicombo}"
: "${CC_BACKUP_DIR:=/var/lib/currencicombo/backups}"
: "${CC_USER:=currencicombo}"
# Portal build-time env. The NPMplus ingress path-routes /api/* (which
# includes the SSE stream) to the orchestrator, so same-origin works.
: "${VITE_ORCHESTRATOR_URL:=https://curucombo.xn--vov0g.com}"
: "${ORCHESTRATOR_UNIT:=currencicombo-orchestrator.service}"
: "${WEBAPP_UNIT:=currencicombo-webapp.service}"
: "${CC_HEALTH_URL:=http://127.0.0.1:8080/ready}"
: "${CC_PORTAL_URL:=http://127.0.0.1:3000/}"
: "${CC_HEALTH_TIMEOUT_SECS:=60}"
# ----- flags ----------------------------------------------------------
DRY_RUN=0
SKIP_MIGRATE=0
SKIP_BUILD=0
DO_ROLLBACK=0
usage() {
cat <<'USAGE'
Usage: sudo ./deploy-currencicombo-8604.sh [flags]
Flags:
--ref=<git-ref> Override CC_GIT_REF (default: main)
--dry-run Print commands, don't run them
--skip-migrate Skip `npm run migrate` step (use for hotfix
deploys where schema hasn't changed)
--skip-build Reuse the existing build output already in
CC_REPO_DIR (useful to re-swap artifacts built by
a previous run without rebuilding)
--rollback Restore the most recent backup and restart.
Does not run git/build/migrate.
-h, --help This help
Env overrides:
CC_GIT_REMOTE, CC_GIT_REF, CC_REPO_DIR, CC_APP_HOME, CC_BACKUP_DIR,
CC_USER, VITE_ORCHESTRATOR_URL, ORCHESTRATOR_UNIT, WEBAPP_UNIT,
CC_HEALTH_URL, CC_PORTAL_URL, CC_HEALTH_TIMEOUT_SECS
USAGE
}
while [[ $# -gt 0 ]]; do
case "$1" in
--ref=*) CC_GIT_REF="${1#*=}"; shift ;;
--dry-run) DRY_RUN=1; shift ;;
--skip-migrate) SKIP_MIGRATE=1; shift ;;
--skip-build) SKIP_BUILD=1; shift ;;
--rollback) DO_ROLLBACK=1; shift ;;
-h|--help) usage; exit 0 ;;
*) echo "unknown arg: $1" >&2; usage; exit 2 ;;
esac
done
log() { printf '[deploy] %s\n' "$*" >&2; }
warn() { printf '[deploy][WARN] %s\n' "$*" >&2; }
die() { printf '[deploy][FATAL] %s\n' "$*" >&2; exit 1; }
run() { if [[ "${DRY_RUN}" -eq 1 ]]; then printf '[deploy][dry-run] %s\n' "$*" >&2; else eval "$*"; fi; }
runcc() { if [[ "${DRY_RUN}" -eq 1 ]]; then printf '[deploy][dry-run][as %s] %s\n' "${CC_USER}" "$*" >&2; else sudo -u "${CC_USER}" -H bash -lc "$*"; fi; }
[[ "$EUID" -eq 0 ]] || die "must run as root (sudo)"
# ----- rollback fast-path ---------------------------------------------
if [[ "${DO_ROLLBACK}" -eq 1 ]]; then
LATEST="$(ls -1dt "${CC_BACKUP_DIR}"/* 2>/dev/null | head -1 || true)"
[[ -n "${LATEST}" ]] || die "no backup under ${CC_BACKUP_DIR}"
log "rolling back to ${LATEST}"
run "systemctl stop '${WEBAPP_UNIT}' '${ORCHESTRATOR_UNIT}'"
run "rsync -a --delete '${LATEST}/orchestrator/' '${CC_APP_HOME}/orchestrator/'"
run "rsync -a --delete '${LATEST}/webapp/' '${CC_APP_HOME}/webapp/'"
run "systemctl start '${ORCHESTRATOR_UNIT}' '${WEBAPP_UNIT}'"
log "rollback applied. systemctl status ${ORCHESTRATOR_UNIT} to verify."
exit 0
fi
# ----- 1. git ---------------------------------------------------------
run "install -d -o '${CC_USER}' -g '${CC_USER}' -m 0755 '${CC_REPO_DIR}'"
run "chown -R '${CC_USER}:${CC_USER}' '${CC_REPO_DIR}'"
if [[ ! -d "${CC_REPO_DIR}/.git" && "${CC_GIT_REF}" != "local" ]]; then
log "cloning ${CC_GIT_REMOTE} into ${CC_REPO_DIR}"
runcc "git clone '${CC_GIT_REMOTE}' '${CC_REPO_DIR}'"
fi
if [[ -d "${CC_REPO_DIR}/.git" && "${CC_GIT_REF}" != "local" ]]; then
runcc "cd '${CC_REPO_DIR}' && git fetch --prune origin"
runcc "cd '${CC_REPO_DIR}' && git reset --hard 'origin/${CC_GIT_REF}'"
REF_SHA="$(sudo -u "${CC_USER}" git -C "${CC_REPO_DIR}" rev-parse --short HEAD 2>/dev/null || echo unknown)"
log "repo at ${CC_GIT_REF} = ${REF_SHA}"
else
REF_SHA="local"
log "using staged local workspace from ${CC_REPO_DIR}"
fi
# ----- 2. orchestrator build -----------------------------------------
if [[ "${SKIP_BUILD}" -eq 0 ]]; then
log "building orchestrator"
if [[ -f "${CC_REPO_DIR}/orchestrator/package-lock.json" ]]; then
runcc "cd '${CC_REPO_DIR}/orchestrator' && npm ci --no-audit --no-fund"
else
runcc "cd '${CC_REPO_DIR}/orchestrator' && npm install --no-audit --no-fund"
fi
runcc "cd '${CC_REPO_DIR}/orchestrator' && npm run build"
log "building portal (VITE_ORCHESTRATOR_URL=${VITE_ORCHESTRATOR_URL})"
runcc "cd '${CC_REPO_DIR}' && npm ci --include=optional --no-audit --no-fund || npm ci --include=optional --force --no-audit --no-fund"
runcc "cd '${CC_REPO_DIR}' && VITE_ORCHESTRATOR_URL='${VITE_ORCHESTRATOR_URL}' npm run build"
else
log "skipping builds (--skip-build)"
fi
# ----- 3. migrations --------------------------------------------------
if [[ "${SKIP_MIGRATE}" -eq 0 ]]; then
log "running DB migrations"
runcc "cd '${CC_REPO_DIR}/orchestrator' && npm run migrate"
else
log "skipping migrations (--skip-migrate)"
fi
# ----- 4. backup previous install ------------------------------------
TS="$(date +%Y%m%d-%H%M%S)"
BACKUP="${CC_BACKUP_DIR}/${TS}"
if [[ -d "${CC_APP_HOME}/orchestrator/dist" || -d "${CC_APP_HOME}/webapp/dist" ]]; then
log "backing up current install → ${BACKUP}"
run "install -d -o root -g root -m 0700 '${BACKUP}/orchestrator' '${BACKUP}/webapp'"
run "rsync -a '${CC_APP_HOME}/orchestrator/' '${BACKUP}/orchestrator/'"
run "rsync -a '${CC_APP_HOME}/webapp/' '${BACKUP}/webapp/'"
fi
# ----- 5. stop units --------------------------------------------------
log "stopping systemd units"
run "systemctl stop '${WEBAPP_UNIT}' || true"
run "systemctl stop '${ORCHESTRATOR_UNIT}' || true"
# ----- 6. swap in new build ------------------------------------------
log "rsyncing new build into ${CC_APP_HOME}"
# Orchestrator: dist/ + node_modules/ + package.json + package-lock.json
runcc "rsync -a --delete '${CC_REPO_DIR}/orchestrator/dist/' '${CC_APP_HOME}/orchestrator/dist/'"
runcc "rsync -a '${CC_REPO_DIR}/orchestrator/node_modules/' '${CC_APP_HOME}/orchestrator/node_modules/'"
runcc "cp '${CC_REPO_DIR}/orchestrator/package.json' '${CC_APP_HOME}/orchestrator/package.json'"
runcc "if [[ -f '${CC_REPO_DIR}/orchestrator/package-lock.json' ]]; then cp '${CC_REPO_DIR}/orchestrator/package-lock.json' '${CC_APP_HOME}/orchestrator/package-lock.json'; else rm -f '${CC_APP_HOME}/orchestrator/package-lock.json'; fi"
# Webapp: dist/
runcc "rsync -a --delete '${CC_REPO_DIR}/dist/' '${CC_APP_HOME}/webapp/dist/'"
# ----- 7. start units ------------------------------------------------
log "starting systemd units"
run "systemctl start '${ORCHESTRATOR_UNIT}'"
run "systemctl start '${WEBAPP_UNIT}'"
# ----- 8. smoke -------------------------------------------------------
if [[ "${DRY_RUN}" -eq 1 ]]; then
log "dry-run: skipping smoke test"
exit 0
fi
log "waiting up to ${CC_HEALTH_TIMEOUT_SECS}s for orchestrator ${CC_HEALTH_URL}"
SECS=0
until curl -sfL --max-time 3 "${CC_HEALTH_URL}" >/dev/null 2>&1; do
SECS=$((SECS + 2))
if [[ "${SECS}" -ge "${CC_HEALTH_TIMEOUT_SECS}" ]]; then
# Loud failure summary. Deliberately does NOT auto-rollback — first
# cutovers often fail because of env/migration mistakes, and
# auto-restoring the old build hides the failure state ops needs to
# diagnose. Print the exact --rollback command with the specific
# backup path filled in, so it's one copy-paste away if desired.
{
echo
echo "================================================================"
echo "DEPLOY FAILED: orchestrator did not become ready after ${CC_HEALTH_TIMEOUT_SECS}s"
echo "================================================================"
echo
echo "## currencicombo-orchestrator (last 40 lines):"
journalctl -u "${ORCHESTRATOR_UNIT}" -n 40 --no-pager 2>&1 || echo "(journalctl unavailable)"
echo
echo "## currencicombo-webapp (last 20 lines):"
journalctl -u "${WEBAPP_UNIT}" -n 20 --no-pager 2>&1 || echo "(journalctl unavailable)"
echo
echo "## Units are in whatever state deploy left them. To restore"
echo "## the previous build (does NOT revert DB migrations):"
echo
if [[ -n "${BACKUP:-}" && -d "${BACKUP}" ]]; then
echo " sudo $0 --rollback"
echo " # (will restore ${BACKUP})"
else
echo " # No backup was taken (first deploy). Manual recovery required."
fi
echo
echo "================================================================"
} >&2
exit 1
fi
sleep 2
done
log "orchestrator ready: $(curl -sf "${CC_HEALTH_URL}")"
log "probing portal ${CC_PORTAL_URL}"
PORTAL_CODE="$(curl -s -o /dev/null -w '%{http_code}' "${CC_PORTAL_URL}" || echo ERR)"
[[ "${PORTAL_CODE}" =~ ^2 ]] || die "portal returned HTTP ${PORTAL_CODE}"
log "portal OK (HTTP ${PORTAL_CODE})"
log "EXT-* blocker summary from orchestrator boot log:"
journalctl -u "${ORCHESTRATOR_UNIT}" --no-pager -n 200 \
| grep -E 'ExternalBlockers|EXT-[A-Z0-9-]+' | tail -20 || true
log "deploy complete. ref=${CC_GIT_REF} sha=${REF_SHA} ts=${TS}"
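
The step-8 readiness gate above can be exercised in isolation. A minimal sketch, with `probe_ready` standing in for `curl -sfL --max-time 3 "$CC_HEALTH_URL"` (the simulated probe and its temp-file state are illustrative, not part of the deploy script):

```shell
#!/usr/bin/env bash
# Standalone sketch of the readiness gate: poll a probe every 2s until it
# succeeds or a deadline passes, mirroring the deploy script's SECS loop.
set -euo pipefail

wait_ready() { # wait_ready <timeout_secs> <probe-cmd...>
  local timeout="$1"; shift
  local secs=0
  until "$@" >/dev/null 2>&1; do
    secs=$((secs + 2))
    if (( secs >= timeout )); then
      return 1
    fi
    sleep 2
  done
}

# Simulated probe: fails twice, then reports ready (state kept in a file).
state="$(mktemp)"
echo 0 > "$state"
probe_ready() {
  local n
  n="$(cat "$state")"
  echo $((n + 1)) > "$state"
  (( n >= 2 ))
}

if wait_ready 10 probe_ready; then result=ready; else result=timeout; fi
rm -f "$state"
echo "$result"
```

With a 10s budget and a probe that comes up on the third poll, the gate reports ready well inside the deadline; flip the budget below 4s and the same harness reports timeout.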

View File

@@ -0,0 +1,102 @@
#!/usr/bin/env bash
# install-prune-cron.sh — opt-in cron job to prune old deploy backups.
#
# Run ONCE as root (or with sudo) after install.sh to enable daily
# pruning of /var/lib/currencicombo/backups/. The pruner:
# - deletes entries older than 30 days
# - ALWAYS keeps the newest N backups regardless of age (default 5)
#
# No-op on re-run. Opt out by removing /etc/cron.daily/currencicombo-prune-backups.
set -euo pipefail
BACKUP_DIR="${CC_BACKUP_DIR:-/var/lib/currencicombo/backups}"
RETAIN_DAYS="${CC_BACKUP_RETAIN_DAYS:-30}"
KEEP_MIN="${CC_BACKUP_KEEP_MIN:-5}"
CRON_FILE="/etc/cron.daily/currencicombo-prune-backups"
DRY_RUN=0
while [[ $# -gt 0 ]]; do
case "$1" in
--dry-run) DRY_RUN=1; shift ;;
-h|--help)
cat <<'USAGE'
Usage: sudo ./install-prune-cron.sh [--dry-run]
Env overrides:
CC_BACKUP_DIR (default: /var/lib/currencicombo/backups)
CC_BACKUP_RETAIN_DAYS (default: 30)
CC_BACKUP_KEEP_MIN (default: 5)
USAGE
exit 0 ;;
*) echo "unknown arg: $1" >&2; exit 2 ;;
esac
done
log() { printf '[install-prune-cron] %s\n' "$*" >&2; }
die() { printf '[install-prune-cron][FATAL] %s\n' "$*" >&2; exit 1; }
[[ "$EUID" -eq 0 ]] || die "must run as root (sudo)"
# The pruner script body. Runs daily via cron.daily.
# KEEP_MIN is enforced by listing backups newest-first, skipping the
# first KEEP_MIN, then deleting any remaining entries older than
# RETAIN_DAYS. The newest KEEP_MIN are therefore never deleted, even
# when they have aged past RETAIN_DAYS on a dormant host, and anything
# younger than RETAIN_DAYS survives regardless of position.
read -r -d '' PRUNER_BODY <<PRUNER || true
#!/usr/bin/env bash
# Managed by scripts/deployment/install-prune-cron.sh. Edits overwritten
# on next install. Opt out by deleting this file.
set -euo pipefail
BACKUP_DIR="${BACKUP_DIR}"
RETAIN_DAYS=${RETAIN_DAYS}
KEEP_MIN=${KEEP_MIN}
[[ -d "\$BACKUP_DIR" ]] || exit 0
cd "\$BACKUP_DIR"
mapfile -t all < <(find . -mindepth 1 -maxdepth 1 -type d -printf '%T@ %p\n' 2>/dev/null | sort -rn | awk '{print \$2}')
count=\${#all[@]}
if (( count <= KEEP_MIN )); then
logger -t currencicombo-prune "count=\$count <= KEEP_MIN=\$KEEP_MIN; nothing to prune"
exit 0
fi
cutoff=\$(date -d "\$RETAIN_DAYS days ago" +%s)
deleted=0
kept=0
for i in "\${!all[@]}"; do
p="\${all[\$i]}"
if (( i < KEEP_MIN )); then
kept=\$((kept + 1))
continue
fi
mtime=\$(stat -c %Y "\$p" 2>/dev/null || echo 0)
if (( mtime < cutoff )); then
rm -rf -- "\$p"
deleted=\$((deleted + 1))
else
kept=\$((kept + 1))
fi
done
logger -t currencicombo-prune "deleted=\$deleted kept=\$kept total_before=\$count"
PRUNER
if [[ "${DRY_RUN}" -eq 1 ]]; then
log "[dry-run] would write ${CRON_FILE} (0755) with pruner targeting ${BACKUP_DIR}, retain ${RETAIN_DAYS}d, keep-min ${KEEP_MIN}"
echo "---"
echo "${PRUNER_BODY}"
echo "---"
exit 0
fi
printf '%s\n' "${PRUNER_BODY}" > "${CRON_FILE}"
chmod 0755 "${CRON_FILE}"
chown root:root "${CRON_FILE}"
log "installed ${CRON_FILE} (backups older than ${RETAIN_DAYS}d, keep-min ${KEEP_MIN}, target ${BACKUP_DIR})"
log "runs daily via /etc/cron.daily/. Opt out: sudo rm ${CRON_FILE}"
log "logs to syslog (tag currencicombo-prune); journalctl -t currencicombo-prune"
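
The keep-min/retain-days interaction can be modeled without touching the filesystem. A sketch (the ages and indexes below are illustrative inputs, not real backups) of which entries the daily pruner would select:

```shell
#!/usr/bin/env bash
# Model of the pruner's decision rule: given backup ages in days, listed
# newest-first, print the 0-based indexes that would be deleted. Mirrors
# the real script's "skip the first KEEP_MIN, then age-check the rest".
set -euo pipefail

prune_decision() { # prune_decision <keep_min> <retain_days> <age>...
  local keep_min="$1" retain_days="$2"; shift 2
  local i=0 age
  for age in "$@"; do
    if (( i >= keep_min && age > retain_days )); then
      echo "$i"
    fi
    i=$((i + 1))
  done
}

# Seven backups: indexes 0-4 are protected by KEEP_MIN=5 even though two
# of them (40d, 45d) are past RETAIN_DAYS=30; indexes 5 and 6 are the
# only deletion candidates, and both are old enough to go.
deleted="$(prune_decision 5 30 1 3 10 40 45 50 60 | xargs)"
echo "deleted indexes: $deleted"
```

Note how the 40- and 45-day-old entries survive purely on position, which is exactly the dormant-host behavior the comment block describes.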

scripts/deployment/install.sh Executable file
View File

@@ -0,0 +1,252 @@
#!/usr/bin/env bash
# install.sh — idempotent first-time setup for CurrenciCombo on a systemd host.
#
# Intended to run ONCE per host as root (or with sudo). Running it again is
# safe: it will skip already-present artifacts and warn on conflicts.
#
# What this does:
# 1. Creates the `currencicombo` system user and /opt/currencicombo tree.
# 2. Installs nginx (Debian/Ubuntu or Alpine) if not present.
# 3. Ensures a local Postgres is running and creates a fresh
# `currencicombo` role + DB (refuses to touch an existing one unless
# --force-recreate is passed).
# 4. Ensures a local Redis is running.
# 5. Writes /etc/currencicombo/orchestrator.env from .env.prod.example,
# auto-populating EVENT_SIGNING_SECRET and ORCHESTRATOR_API_KEYS with
# fresh randoms the first time.
# 6. Installs /etc/currencicombo/webapp-nginx.conf.
# 7. Installs the two systemd units and runs `systemctl daemon-reload`.
# 8. Enables (does NOT start) both units. First start happens via
# scripts/deployment/deploy-currencicombo-8604.sh after the first
# successful build.
#
# This script is target-agnostic. It has no hardcoded IP / hostname /
# VLAN. The NPMplus ingress in front of it is configured separately —
# see scripts/deployment/README.md.
set -euo pipefail
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
APP_USER="currencicombo"
APP_HOME="/opt/currencicombo"
ETC_DIR="/etc/currencicombo"
LOG_DIR="/var/log/currencicombo"
REPO_DIR="/var/lib/currencicombo/repo"
ENV_FILE="${ETC_DIR}/orchestrator.env"
NGINX_FILE="${ETC_DIR}/webapp-nginx.conf"
SYSTEMD_DIR="/etc/systemd/system"
FORCE_RECREATE_DB=0
DRY_RUN=0
SKIP_NGINX_INSTALL=0
log() { printf '[install] %s\n' "$*" >&2; }
warn() { printf '[install][WARN] %s\n' "$*" >&2; }
die() { printf '[install][FATAL] %s\n' "$*" >&2; exit 1; }
run() { if [[ "${DRY_RUN}" -eq 1 ]]; then printf '[install][dry-run] %s\n' "$*" >&2; else eval "$*"; fi; }
sql_escape() {
printf "%s" "$1" | sed "s/'/''/g"
}
usage() {
cat <<'USAGE'
Usage: sudo ./install.sh [--force-recreate-db] [--skip-nginx-install] [--dry-run]
--force-recreate-db DROP and recreate the currencicombo Postgres role
and DB even if they already exist. DESTRUCTIVE.
--skip-nginx-install Do not apt/apk install nginx (use if you already
have a custom nginx build in place).
--dry-run Print the commands that would run, don't run them.
USAGE
}
while [[ $# -gt 0 ]]; do
case "$1" in
--force-recreate-db) FORCE_RECREATE_DB=1; shift ;;
--skip-nginx-install) SKIP_NGINX_INSTALL=1; shift ;;
--dry-run) DRY_RUN=1; shift ;;
-h|--help) usage; exit 0 ;;
*) die "unknown arg: $1" ;;
esac
done
[[ "$EUID" -eq 0 ]] || die "must run as root (sudo)"
# ----------------------------------------------------------------------
# 1. User + tree
# ----------------------------------------------------------------------
if id "${APP_USER}" >/dev/null 2>&1; then
log "user ${APP_USER} already exists"
else
log "creating system user ${APP_USER}"
run useradd --system --home-dir "${APP_HOME}" --shell /usr/sbin/nologin --user-group "${APP_USER}"
fi
for d in "${APP_HOME}" "${APP_HOME}/orchestrator" "${APP_HOME}/webapp" \
"${APP_HOME}/webapp/dist" "${ETC_DIR}" "${LOG_DIR}" "${REPO_DIR}"; do
run install -d -o "${APP_USER}" -g "${APP_USER}" -m 0755 "$d"
done
run chown "${APP_USER}:${APP_USER}" "${APP_HOME}" "${LOG_DIR}" "${REPO_DIR}"
run chmod 0750 "${ETC_DIR}"
# ----------------------------------------------------------------------
# 2. nginx (required by currencicombo-webapp.service)
# ----------------------------------------------------------------------
if [[ "${SKIP_NGINX_INSTALL}" -eq 0 ]]; then
if command -v nginx >/dev/null 2>&1; then
log "nginx already installed ($(nginx -v 2>&1 | head -1))"
elif command -v apt-get >/dev/null 2>&1; then
log "installing nginx via apt"
run 'DEBIAN_FRONTEND=noninteractive apt-get update -q'
run 'DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends nginx-light'
# We use our own nginx.conf via -c, so disable the distro site.
run systemctl disable --now nginx 2>/dev/null || true
elif command -v apk >/dev/null 2>&1; then
log "installing nginx via apk"
run apk add --no-cache nginx
run rc-update del nginx 2>/dev/null || true
else
die "no apt or apk available — install nginx manually or re-run with --skip-nginx-install"
fi
fi
[[ -f /etc/nginx/mime.types ]] || warn "/etc/nginx/mime.types missing; webapp-nginx.conf may fail"
# ----------------------------------------------------------------------
# 3. Postgres role + DB
# ----------------------------------------------------------------------
if ! command -v psql >/dev/null 2>&1; then
die "psql not on PATH — install Postgres on this host (e.g. apt install postgresql) before running install.sh"
fi
# Use the OS `postgres` superuser for DDL.
pg_role_exists() {
sudo -u postgres psql -tAc "SELECT 1 FROM pg_roles WHERE rolname='${APP_USER}';" 2>/dev/null | grep -q 1
}
pg_db_exists() {
sudo -u postgres psql -tAc "SELECT 1 FROM pg_database WHERE datname='${APP_USER}';" 2>/dev/null | grep -q 1
}
if pg_role_exists; then
if [[ "${FORCE_RECREATE_DB}" -eq 1 ]]; then
log "dropping existing role/DB (--force-recreate-db)"
run "sudo -u postgres psql -c 'DROP DATABASE IF EXISTS ${APP_USER};'"
run "sudo -u postgres psql -c 'DROP ROLE IF EXISTS ${APP_USER};'"
else
warn "Postgres role ${APP_USER} already exists — skipping role/DB creation. Re-run with --force-recreate-db to wipe."
fi
fi
if ! pg_role_exists; then
log "creating Postgres role ${APP_USER}"
run "sudo -u postgres psql -c \"CREATE ROLE ${APP_USER} LOGIN;\""
fi
if ! pg_db_exists; then
log "creating Postgres database ${APP_USER}"
run "sudo -u postgres psql -c \"CREATE DATABASE ${APP_USER} OWNER ${APP_USER};\""
fi
# ----------------------------------------------------------------------
# 4. Redis
# ----------------------------------------------------------------------
if systemctl list-unit-files | grep -q '^redis-server\.service'; then
run "systemctl start redis-server.service || true"
run "systemctl enable redis-server.service >/dev/null 2>&1 || true"
elif systemctl list-unit-files | grep -q '^redis\.service'; then
run "systemctl start redis.service || true"
run "systemctl enable redis.service >/dev/null 2>&1 || true"
elif command -v redis-cli >/dev/null 2>&1; then
warn "redis-cli present but no redis-server.service / redis.service unit — assuming external Redis"
else
warn "redis not detected; orchestrator will fall back to in-process event bus. Install redis for multi-replica support."
fi
# ----------------------------------------------------------------------
# 5. orchestrator.env
# ----------------------------------------------------------------------
FIRST_KEYS_FILE="/root/currencicombo-first-keys.txt"
if [[ -f "${ENV_FILE}" ]]; then
log "${ENV_FILE} already exists — leaving alone (no new keys generated)"
else
log "writing ${ENV_FILE}"
run install -o "${APP_USER}" -g "${APP_USER}" -m 0640 "${SCRIPT_DIR}/.env.prod.example" "${ENV_FILE}"
# Auto-fill the two REQUIRED secrets so first boot doesn't crash.
SECRET="$(openssl rand -hex 32)"
INIT_KEY="$(openssl rand -hex 24)"
SETT_KEY="$(openssl rand -hex 24)"
AUD_KEY="$(openssl rand -hex 24)"
DB_PASSWORD="$(openssl rand -hex 24)"
DB_PASSWORD_SQL="$(sql_escape "${DB_PASSWORD}")"
API_KEYS_VALUE="${INIT_KEY}:initiator,${SETT_KEY}:settler,${AUD_KEY}:auditor"
DATABASE_URL="postgresql://${APP_USER}:${DB_PASSWORD}@127.0.0.1:5432/${APP_USER}"
log "setting Postgres password for role ${APP_USER}"
run "sudo -u postgres psql -c \"ALTER ROLE ${APP_USER} WITH LOGIN PASSWORD '${DB_PASSWORD_SQL}';\""
run "sed -i 's|^EVENT_SIGNING_SECRET=.*|EVENT_SIGNING_SECRET=${SECRET}|' '${ENV_FILE}'"
run "sed -i 's|^API_KEYS=.*|API_KEYS=${API_KEYS_VALUE}|' '${ENV_FILE}'"
run "sed -i 's|^DATABASE_URL=.*|DATABASE_URL=${DATABASE_URL}|' '${ENV_FILE}'"
run "grep -q '^ORCHESTRATOR_API_KEYS=' '${ENV_FILE}' && sed -i 's|^ORCHESTRATOR_API_KEYS=.*|ORCHESTRATOR_API_KEYS=${API_KEYS_VALUE}|' '${ENV_FILE}' || printf '\nORCHESTRATOR_API_KEYS=%s\n' '${API_KEYS_VALUE}' >> '${ENV_FILE}'"
# Write a root-only handoff file so ops can grab the keys without
# scraping journald or reading the env file. The canonical copy lives
# in ${ENV_FILE}; delete this file once the keys are in your password
# manager.
if [[ "${DRY_RUN}" -eq 0 ]]; then
umask 077
cat > "${FIRST_KEYS_FILE}" <<EOF
# CurrenciCombo first-deploy secrets — generated $(date -Iseconds) by install.sh
#
# This file contains the initial API keys and event-signing secret for the
# orchestrator. The canonical live values live in ${ENV_FILE} and are what
# systemd actually loads. This file is a root-only handoff copy — record
# these values in your password manager, then:
#
# shred -u ${FIRST_KEYS_FILE}
#
# Re-running install.sh does NOT regenerate these values if ${ENV_FILE}
# already exists. Losing both ${FIRST_KEYS_FILE} and ${ENV_FILE} means
# rotating all three API keys and the signing secret.
EVENT_SIGNING_SECRET=${SECRET}
ORCHESTRATOR_API_KEY_INITIATOR=${INIT_KEY}
ORCHESTRATOR_API_KEY_SETTLER=${SETT_KEY}
ORCHESTRATOR_API_KEY_AUDITOR=${AUD_KEY}
DATABASE_URL=${DATABASE_URL}
# As it appears in ${ENV_FILE}:
API_KEYS=${API_KEYS_VALUE}
ORCHESTRATOR_API_KEYS=${API_KEYS_VALUE}
EOF
chmod 0600 "${FIRST_KEYS_FILE}"
chown root:root "${FIRST_KEYS_FILE}"
else
log "[dry-run] would write ${FIRST_KEYS_FILE} (0600, root:root)"
fi
log " generated EVENT_SIGNING_SECRET (64 hex)"
log " generated 3 API keys (initiator/settler/auditor)"
log " generated local Postgres password for ${APP_USER}"
log " initial secrets written to ${FIRST_KEYS_FILE} (0600) — record in password manager, then 'shred -u ${FIRST_KEYS_FILE}'"
fi
# ----------------------------------------------------------------------
# 6. webapp-nginx.conf
# ----------------------------------------------------------------------
run install -o "${APP_USER}" -g "${APP_USER}" -m 0644 \
"${SCRIPT_DIR}/webapp-nginx.conf" "${NGINX_FILE}"
# ----------------------------------------------------------------------
# 7. systemd units
# ----------------------------------------------------------------------
run install -o root -g root -m 0644 \
"${SCRIPT_DIR}/systemd/currencicombo-orchestrator.service" \
"${SYSTEMD_DIR}/currencicombo-orchestrator.service"
run install -o root -g root -m 0644 \
"${SCRIPT_DIR}/systemd/currencicombo-webapp.service" \
"${SYSTEMD_DIR}/currencicombo-webapp.service"
run systemctl daemon-reload
# ----------------------------------------------------------------------
# 8. Enable (but do NOT start yet — no build exists)
# ----------------------------------------------------------------------
run systemctl enable currencicombo-orchestrator.service
run systemctl enable currencicombo-webapp.service
log "install complete."
log " next: run scripts/deployment/deploy-currencicombo-8604.sh as root to build + start."
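
The `key:role` pairs that step 5 bakes into API_KEYS use a simple comma-joined format. A minimal sketch of producing and splitting it — the consumer-side parsing here is an assumption for illustration, not code from the orchestrator, and the dummy-key fallback exists only so the sketch runs without openssl:

```shell
#!/usr/bin/env bash
# Sketch of the API_KEYS value install.sh generates: three 48-hex-char
# keys, each tagged with a role, comma-joined.
set -euo pipefail

gen_key() { openssl rand -hex 24 2>/dev/null || printf '%048d' 0; }

init_key="$(gen_key)"
sett_key="$(gen_key)"
aud_key="$(gen_key)"
api_keys="${init_key}:initiator,${sett_key}:settler,${aud_key}:auditor"

# Consumer-side split (assumed shape): one pair per comma, role after the
# colon. Safe because the keys are hex and contain neither , nor :.
roles="$(tr ',' '\n' <<<"$api_keys" | cut -d: -f2 | xargs)"
echo "roles: $roles"
echo "key length: ${#init_key}"
```

The hex alphabet is what makes the naive `tr`/`cut` split safe; a key format that allowed `:` or `,` would need proper escaping.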

View File

@@ -0,0 +1,34 @@
[Unit]
Description=CurrenciCombo orchestrator (Node)
Documentation=https://gitea.d-bis.org/d-bis/CurrenciCombo
After=network-online.target postgresql.service redis-server.service redis.service
Wants=network-online.target
[Service]
Type=simple
User=currencicombo
Group=currencicombo
WorkingDirectory=/opt/currencicombo/orchestrator
EnvironmentFile=/etc/currencicombo/orchestrator.env
ExecStart=/usr/bin/node /opt/currencicombo/orchestrator/dist/index.js
Restart=on-failure
RestartSec=5
TimeoutStopSec=20
StandardOutput=journal
StandardError=journal
SyslogIdentifier=currencicombo-orchestrator
# Hardening
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/log/currencicombo
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictSUIDSGID=yes
LockPersonality=yes
[Install]
WantedBy=multi-user.target
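
If a host needs different limits or extra environment, a systemd drop-in keeps the installed unit pristine across re-runs of install.sh. A sketch only — the values below are arbitrary examples, not recommended settings:

```ini
# /etc/systemd/system/currencicombo-orchestrator.service.d/override.conf
# (then: systemctl daemon-reload && systemctl restart currencicombo-orchestrator)
[Service]
MemoryMax=1G
Environment=NODE_OPTIONS=--max-old-space-size=768
```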

View File

@@ -0,0 +1,34 @@
[Unit]
Description=CurrenciCombo webapp (Vite SPA served by nginx)
Documentation=https://gitea.d-bis.org/d-bis/CurrenciCombo
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=currencicombo
Group=currencicombo
RuntimeDirectory=currencicombo-webapp
RuntimeDirectoryMode=0755
ExecStart=/usr/sbin/nginx -c /etc/currencicombo/webapp-nginx.conf -g 'daemon off; pid /run/currencicombo-webapp/nginx.pid;'
ExecReload=/usr/sbin/nginx -c /etc/currencicombo/webapp-nginx.conf -s reload
Restart=on-failure
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=currencicombo-webapp
# Hardening
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/log/currencicombo /run/currencicombo-webapp
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictSUIDSGID=yes
LockPersonality=yes
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,80 @@
# Self-contained nginx.conf for the CurrenciCombo Vite SPA.
# Invoked by the `currencicombo-webapp.service` systemd unit and installed
# to /etc/currencicombo/webapp-nginx.conf by scripts/deployment/install.sh.
#
# Listens on :3000 (NPMplus upstream). NPMplus path-routes /api/* to the
# orchestrator on :8080 (with SSE-friendly settings — see README.md);
# everything else lands here.
# This config does NOT proxy /api itself — that's intentional so a wrong
# NPMplus rule fails loudly instead of silently bypassing the orchestrator.
worker_processes auto;
error_log /var/log/currencicombo/webapp-nginx.error.log warn;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/currencicombo/webapp-nginx.access.log combined;
sendfile on;
tcp_nopush on;
keepalive_timeout 65;
server_tokens off;
gzip on;
gzip_types text/plain text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;
# Uploads/bodies: the portal is a static SPA, so any request with a body
# is almost certainly mis-routed. Cap tight.
client_max_body_size 1m;
server {
listen 3000 default_server;
listen [::]:3000 default_server;
server_name _;
root /opt/currencicombo/webapp/dist;
index index.html;
# Security headers are also set by NPMplus, but apply them here too
# so they survive a direct-to-CT curl for debugging.
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Immutable asset bundles.
location /assets/ {
access_log off;
expires 1y;
add_header Cache-Control "public, max-age=31536000, immutable";
try_files $uri =404;
}
# Deny sourcemaps in prod.
location ~ \.map$ {
access_log off;
deny all;
return 404;
}
# Guard-rail: if NPMplus fails to path-route /api/*, surface it as a
# clean 421 rather than serving index.html and confusing the browser
# with a JSON parse error. The SSE endpoint lives at
# /api/plans/:id/events/stream, which also sits under /api/, so one
# rule covers both.
location /api/ {
default_type text/plain;
return 421 "NPMplus is misconfigured: /api/* must proxy to orchestrator :8080\n";
}
# SPA fallback. Must come last.
location / {
try_files $uri $uri/ /index.html;
}
}
}
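
For context, the ingress-side counterpart that the /api/ guard above expects would look roughly like the following. This is a sketch only: the real rule lives in NPMplus, and the upstream address, timeout, and header choices here are assumptions, not the deployed config.

```nginx
# Hypothetical NPMplus advanced-config snippet routing /api/* (including
# the SSE stream at /api/plans/:id/events/stream) to the orchestrator.
location /api/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Connection "";   # keep SSE connections open
    proxy_buffering off;              # flush events immediately
    proxy_read_timeout 1h;            # tolerate long-lived event streams
}
```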