proxmox/docs/04-configuration/DEV_VM_SSH_REMOTE_ACCESS.md


Dev VM (5700) — remote SSH (operators / automation)

Purpose: Let remote operators (e.g. Gitea/CI, automation hosts) open SSH to CT 5700 (192.168.11.59) without relying on a shared LAN. Canonical identity for service work: root@192.168.11.59 (or dev1-dev4 for interactive dev accounts). This doc is the infrastructure checklist; application runbooks (Phase 1 CTs, etc.) live in other repos.

Last updated: 2026-04-24


1) Preconditions on the dev VM (5700)

  • sshd listening on 22 inside the guest.
  • The remote principal's public key in ~/.ssh/authorized_keys for the account you will use (root and/or dev1-dev4).
  • From a Proxmox host on the same LAN, verify: nc -zw2 192.168.11.59 22 and ssh -o BatchMode=yes root@192.168.11.59 true.

1.1 Append a remote operator pubkey on root (one-time, ~10s) — cannot be done via Cloudflare API

Cloudflare Access only secures the tunnel; sshd still needs a line in /root/.ssh/authorized_keys (unless you have configured SSH with Access CA / TrustedUserCAKeys on the guest — a larger change).

Approved public key (Devin remote operator, 2026-04-24). The install commands below are idempotent: they check for the key comment before appending, so re-running never duplicates the line.

```
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMX/Etk+KC6cgID1Sd7E/YTaSsxvPygQnBmKFG3Wz6TD devin-pve-20260424
```

A — From a workstation that can already ssh root@192.168.11.59 (LAN or VPN):

```sh
ssh root@192.168.11.59 "grep -qF 'devin-pve-20260424' /root/.ssh/authorized_keys 2>/dev/null || { umask 077; mkdir -p /root/.ssh; chmod 700 /root/.ssh; echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMX/Etk+KC6cgID1Sd7E/YTaSsxvPygQnBmKFG3Wz6TD devin-pve-20260424' >> /root/.ssh/authorized_keys; chmod 600 /root/.ssh/authorized_keys; }"
```

B — No ssh to the guest yet (first-time key install): on the Proxmox node that runs 5700 (cluster truth: scripts/lib/load-project-env.sh maps 5700 to r630-04 / 192.168.11.14, not r630-01/02; always confirm with ssh root@<PVE> 'pct list | grep 5700' if nodes move). Use an interactive or pct shell:

```sh
ssh root@<PVE_HOST>   # node where 5700 is defined
pct exec 5700 -- bash
```

Then inside 5700:

```sh
umask 077; mkdir -p /root/.ssh; chmod 700 /root/.ssh; touch /root/.ssh/authorized_keys; chmod 600 /root/.ssh/authorized_keys
grep -qF 'devin-pve-20260424' /root/.ssh/authorized_keys || \
  echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMX/Etk+KC6cgID1Sd7E/YTaSsxvPygQnBmKFG3Wz6TD devin-pve-20260424' >> /root/.ssh/authorized_keys
```
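The grep-or-append pattern above can be wrapped in a small reusable function. A sketch, assuming POSIX sh (the file path, key, and comment arguments are examples, not canon):

```shell
# append_key FILE KEY COMMENT
# Appends KEY to FILE unless a line containing COMMENT already exists,
# creating the file with the permissions sshd expects (700 dir, 600 file).
append_key() {
  keyfile=$1 key=$2 comment=$3
  umask 077
  mkdir -p "$(dirname "$keyfile")"
  chmod 700 "$(dirname "$keyfile")"
  touch "$keyfile"
  chmod 600 "$keyfile"
  # -qF: quiet, fixed-string match on the unique key comment; re-runs are no-ops
  grep -qF "$comment" "$keyfile" || printf '%s\n' "$key" >> "$keyfile"
}
```

Calling it twice with the same comment leaves a single line, which is what makes variant A safe to re-run.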

Re-test: ssh -o BatchMode=yes -o ConnectTimeout=5 root@192.168.11.59 true (or cloudflared access ssh to ssh.dev.d-bis.org when CF is live).


2) Option A — Cloudflare Tunnel (cloudflared)

The connector must be able to reach 192.168.11.59:22 (run cloudflared on a host that can route to that IP).

2.1 config.yaml ingress (example)

Add a hostname in the same tunnel you use for other d-bis / Proxmox surfaces (or a dedicated tunnel):

```yaml
ingress:
  - hostname: ssh.dev.d-bis.org
    service: ssh://192.168.11.59:22
  # ... existing hostnames ...
  - service: http_status:404
```

Reload or restart the tunnel service after changing the config (e.g. systemctl restart cloudflared when running as a systemd service).

2.2 DNS

From a host with cloudflared logged in to the right account:

```sh
cloudflared tunnel route dns <tunnel-name> ssh.dev.d-bis.org
```

(or create a CNAME in Cloudflare to <tunnel-uuid>.cfargotunnel.com for that hostname)

Confirm public resolution: dig +short ssh.dev.d-bis.org (should be Cloudflare / tunnel targets, not empty once published).

2.3 Cloudflare Access (application)

Zero Trust → Access → Applications → Add application

| Field | Suggestion |
| --- | --- |
| Type | Self-hosted |
| Application domain | ssh.dev.d-bis.org |
| Policy | Include a Service token (and/or your org's IdPs) so automation can authenticate without a browser. For service auth, match the client id used in env as CF_ACCESS_CLIENT_ID (and its secret). |

Without a policy that your client can satisfy, SSH will fail at the Access layer even if DNS and sshd are correct.

2.4 Client: cloudflared access ssh

Set (typical for service tokens / headless clients):

  • TUNNEL_TOKEN / Access credentials as required by your org, or
  • CF_ACCESS_CLIENT_ID and CF_ACCESS_CLIENT_SECRET when using a service token allowed by the Access policy.
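For a headless client, that typically means exporting the service-token pair before invoking cloudflared. A sketch, using this doc's variable names with placeholder values:

```shell
# Service token issued for the Access application in 2.3 (values are placeholders)
export CF_ACCESS_CLIENT_ID="<service-token-client-id>"
export CF_ACCESS_CLIENT_SECRET="<service-token-client-secret>"
```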

Example (adjust to your cloudflared version; hostname must match the Access app):

```sh
ssh -o ProxyCommand="cloudflared access ssh --hostname %h" \
  -o ServerAliveInterval=30 \
  root@ssh.dev.d-bis.org
```
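The same ProxyCommand can live in ~/.ssh/config so plain ssh/scp/git work unchanged; a sketch in standard OpenSSH client-config syntax (the Host value must match the Access application hostname):

```
Host ssh.dev.d-bis.org
  User root
  ProxyCommand cloudflared access ssh --hostname %h
  ServerAliveInterval 30
```

With that in place, ssh ssh.dev.d-bis.org is equivalent to the explicit command above.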

Triage in order: (1) DNS returns answers → (2) Access allows the token/identity → (3) the connector reaches 192.168.11.59:22 → (4) sshd accepts the key. Failures at (3) look like timeouts; at (2), like Access / 302-style redirects in logs; at (4), like Permission denied (publickey).
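That triage order can be encoded as a quick symptom-to-layer lookup. A sketch; the pattern matches are heuristics derived from the failure modes above, not an exhaustive decision tree:

```shell
# diagnose "<error text>" -> which layer of the chain to debug first
diagnose() {
  case $1 in
    *"Permission denied (publickey)"*) echo "4: sshd rejected the key; check authorized_keys" ;;
    *"timed out"*|*timeout*)           echo "3: connector cannot reach 192.168.11.59:22" ;;
    *302*|*Access*|*forbidden*)        echo "2: Access policy rejected the token/identity" ;;
    *NXDOMAIN*|*"not resolve"*)        echo "1: DNS for the hostname is not published" ;;
    *) echo "unknown: capture ssh -vvv and cloudflared logs" ;;
  esac
}
```

Feed it the error text you see from the client or the cloudflared logs; anything it cannot classify falls through to the verbose-logging catch-all.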


3) Option B — UDM Pro port forward + optional allowlist

If you expose 76.53.10.40:22 → 192.168.11.59:22, restrict WAN access with a source IP allowlist (or Geo/IP group) in UniFi, not the whole internet. This is a break-glass / session path; Option A is better long term.

Reference: UDM_PRO_DEV_CODESPACES_PORT_FORWARD.md


4) Option C — Cloudflare WARP / private network

WARP (or a site-to-site VPN) to reach 192.168.11.0/24, then plain ssh root@192.168.11.59 as if on LAN. See CLOUDFLARE_ZERO_TRUST_GUIDE.md and operator VPN docs.



Status (external): If dig +short ssh.dev.d-bis.org is empty, DNS is not published. If it resolves but SSH hangs or fails, debug Access policy vs tunnel vs sshd layer by layer as in §2.4.