Files
FusionAGI/tests/test_world_model_causal.py
Devin AI 9a8affae9a
Some checks failed
Tests / test (3.10) (pull_request) Failing after 35s
Tests / test (3.11) (pull_request) Failing after 34s
Tests / test (3.12) (pull_request) Successful in 39s
Tests / lint (pull_request) Successful in 36s
Tests / docker (pull_request) Successful in 1m42s
feat: consequence engine, causal world model, metacognition, interpretability, claim verification
Choice → Consequence → Learning:
- ConsequenceEngine tracks every decision point with alternatives,
  risk/reward estimates, and actual outcomes
- Consequences feed into AdaptiveEthics for experience-based learning
- FusionAGILoop now wires ethics + consequences into task lifecycle

Causal World Model:
- CausalWorldModel learns state-transition patterns from execution history
- Predicts outcomes based on observed action→effect patterns
- Uncertainty estimates decrease as more evidence accumulates

Metacognition:
- assess_head_outputs() evaluates reasoning quality from head outputs
- Detects knowledge gaps, measures head agreement, identifies uncertainty
- Actively recommends whether to seek more information

Interpretability:
- ReasoningTracer captures full prompt→answer reasoning traces
- Each step records stage, component, input/output, timing
- explain() generates human-readable reasoning explanations

Claim Verification:
- ClaimVerifier cross-checks claims for evidence, consistency, grounding
- Flags high-confidence claims lacking evidence support
- Detects contradictions between claims from different heads

325 tests passing, 0 ruff errors, 0 mypy errors.

Co-Authored-By: Nakamoto, S <defi@defi-oracle.io>
2026-04-28 06:25:35 +00:00

70 lines
2.4 KiB
Python

"""Tests for the causal world model."""
from fusionagi.world_model import CausalWorldModel
class TestCausalWorldModel:
"""Test learned causal state-transition prediction."""
def test_predict_unknown_action(self) -> None:
wm = CausalWorldModel()
result = wm.predict({"x": 1}, "unknown", {})
assert result.confidence == 0.3
assert result.to_state == {"x": 1}
def test_observe_and_predict(self) -> None:
wm = CausalWorldModel()
wm.observe(
from_state={"count": 0},
action="increment",
action_args={},
to_state={"count": 1},
success=True,
)
result = wm.predict({"count": 5}, "increment", {})
assert result.confidence > 0.3
assert "count" in result.to_state
def test_multiple_observations_increase_confidence(self) -> None:
wm = CausalWorldModel()
for i in range(10):
wm.observe({"s": i}, "act", {}, {"s": i + 1}, success=True)
result = wm.predict({"s": 100}, "act", {})
assert result.confidence > 0.7
def test_uncertainty_no_observations(self) -> None:
wm = CausalWorldModel()
info = wm.uncertainty({}, "unknown_action")
assert info.risk_level == "high"
assert info.confidence == 0.3
def test_uncertainty_with_observations(self) -> None:
wm = CausalWorldModel()
for i in range(10):
wm.observe({}, "safe_action", {}, {}, success=True)
info = wm.uncertainty({}, "safe_action")
assert info.risk_level in ("low", "medium")
assert info.confidence > 0.5
def test_failed_observations_lower_confidence(self) -> None:
wm = CausalWorldModel()
for i in range(5):
wm.observe({}, "risky", {}, {}, success=False)
info = wm.uncertainty({}, "risky")
assert info.risk_level == "high"
def test_known_actions(self) -> None:
wm = CausalWorldModel()
wm.observe({}, "act_a", {}, {}, success=True)
wm.observe({}, "act_b", {}, {}, success=True)
assert "act_a" in wm.known_actions
assert "act_b" in wm.known_actions
def test_get_summary(self) -> None:
wm = CausalWorldModel()
wm.observe({}, "x", {}, {"result": 1}, success=True)
wm.observe({}, "x", {}, {"result": 2}, success=True)
summary = wm.get_summary()
assert summary["total_observations"] == 2
assert summary["known_patterns"] >= 1