Frontend (items 1-10):
- WebSocket streaming integration with useWebSocket hook
- Admin Dashboard UI (status, voices, agents, governance tabs)
- Voice playback UI (TTS/STT integration)
- Settings/Preferences page (conversation style, sliders)
- Responsive/mobile layout (breakpoints at 480px, 768px)
- Dark/light theme with CSS variables and localStorage
- Error handling & loading states (retry, empty state, disabled input)
- Authentication UI (login page, Bearer token, logout)
- Head visualization improvements (active/speaking states, animations)
- Consequence/Ethics dashboard (lessons, consequences, insights tabs)
Backend stubs (items 11-18):
- Tool connectors: DocsConnector (text/md/PDF), DBConnector (SQLite/Postgres), CodeRunnerConnector (Python/JS/Bash/Ruby sandboxed)
- STT adapter: WhisperSTTAdapter, AzureSTTAdapter
- Multi-modal interface adapters: Visual, Haptic, Gesture, Biometric
- SSE streaming endpoint (/v1/sessions/{id}/stream/sse)
- Multi-tenant support (X-Tenant-ID header, tenant CRUD)
- Plugin marketplace/registry (register, install, list)
- Backup/restore endpoints
- Versioned API negotiation (Accept-Version header, deprecation)
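The versioned-API item above can be sketched as a pure header-negotiation function. This is a hypothetical illustration: the `Accept-Version` header name comes from the changelog, but the supported-version list, fallback rule, and `Deprecation` response header are assumptions, not the actual fusionagi implementation.

```python
# Hypothetical sketch of Accept-Version negotiation.
# SUPPORTED, DEPRECATED, and the fallback-to-newest rule are assumptions.
SUPPORTED = ["1", "2"]
DEPRECATED = {"1"}


def negotiate_version(headers: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Pick an API version from the Accept-Version header.

    Unknown or missing versions fall back to the newest supported version;
    deprecated versions are flagged via an extra response header.
    """
    requested = headers.get("Accept-Version", SUPPORTED[-1])
    version = requested if requested in SUPPORTED else SUPPORTED[-1]
    extra_headers = {"Deprecation": "true"} if version in DEPRECATED else {}
    return version, extra_headers
```

A caller would merge `extra_headers` into the response so deprecated clients get advance warning.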
Infrastructure (items 19-23):
- docker-compose.yml (API + Postgres + Redis + frontend)
- .env.example with all configurable vars
- gunicorn.conf.py production ASGI config
- Prometheus metrics collector and /metrics endpoint
- Structured JSON logging configuration
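The structured JSON logging item can be illustrated with a stdlib `logging.Formatter` subclass. A minimal sketch only; the field names emitted here are assumptions, not the project's actual log schema.

```python
import json
import logging


class JSONFormatter(logging.Formatter):
    """Render each log record as a single JSON line.

    Sketch only: the chosen fields (level, logger, message) are
    assumptions about what a structured-logging config might emit.
    """

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })
```

Attaching it to a handler (`handler.setFormatter(JSONFormatter())`) makes every line machine-parseable for log aggregators.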
Documentation (items 24-25):
- Architecture docs with module layout and subsystem descriptions
- Quickstart guide with setup, API tour, and test instructions
Tests (items 26-32):
- Integration tests: 25 end-to-end API tests
- Frontend tests: 10 Vitest tests for hooks (useTheme, useAuth)
- Load/performance tests: latency and throughput benchmarks
- Connector tests: 16 tests for Docs, DB, CodeRunner
- Multi-modal adapter tests: 9 tests
- Metrics collector tests: 5 tests
- STT adapter tests: 2 tests
511 Python tests passing, 10 frontend tests passing, 0 ruff errors.
Co-Authored-By: Nakamoto, S <defi@defi-oracle.io>
"""Tests for the metrics collector."""
|
|
|
|
from fusionagi.api.metrics import MetricsCollector
|
|
|
|
|
|
class TestMetricsCollector:
|
|
def test_counter(self) -> None:
|
|
m = MetricsCollector()
|
|
m.inc("requests")
|
|
m.inc("requests")
|
|
snap = m.snapshot()
|
|
assert snap["counters"]["requests"] == 2
|
|
|
|
def test_counter_with_labels(self) -> None:
|
|
m = MetricsCollector()
|
|
m.inc("http_requests", labels={"method": "GET"})
|
|
m.inc("http_requests", labels={"method": "POST"})
|
|
snap = m.snapshot()
|
|
assert snap["counters"]["http_requests{method=GET}"] == 1
|
|
assert snap["counters"]["http_requests{method=POST}"] == 1
|
|
|
|
def test_histogram(self) -> None:
|
|
m = MetricsCollector()
|
|
for v in [0.1, 0.2, 0.3, 0.4, 0.5]:
|
|
m.observe("latency", v)
|
|
snap = m.snapshot()
|
|
assert snap["histograms"]["latency"]["count"] == 5
|
|
assert 0.2 < snap["histograms"]["latency"]["mean"] < 0.4
|
|
|
|
def test_gauge(self) -> None:
|
|
m = MetricsCollector()
|
|
m.set_gauge("active_sessions", 5.0)
|
|
snap = m.snapshot()
|
|
assert snap["gauges"]["active_sessions"] == 5.0
|
|
|
|
def test_uptime(self) -> None:
|
|
m = MetricsCollector()
|
|
snap = m.snapshot()
|
|
assert snap["uptime_seconds"] >= 0
|