Bidirectional Drift Detection
Nomotic detects behavioral drift in two directions: agent drift (changes in agent behavior) and human reviewer drift (changes in human oversight patterns). Both matter for governance integrity.
Standard drift detection monitors agents. Bidirectional drift detection monitors both agents and the humans overseeing them — because human-in-the-loop governance fails when humans stop paying attention.
Agent Behavioral Drift
How It Works
The DriftMonitor maintains a behavioral fingerprint for each agent, built from:
- Action distribution — the frequency of each action type (read, write, delete, etc.)
- Target distribution — which resources the agent accesses
- Temporal patterns — when the agent is active (business hours, overnight, etc.)
- Outcome distribution — the ratio of ALLOW/DENY/ESCALATE verdicts
When current behavior diverges from the established fingerprint, drift is detected. Nomotic compares two behavioral fingerprints using Jensen–Shannon divergence (JSD), a symmetric measure of the distance between two probability distributions that is bounded between 0 and 1.
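As a sketch of the comparison (illustrative code, not Nomotic's actual implementation), JSD between two action distributions can be computed like this:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) over the keys of p."""
    return sum(p[k] * math.log2(p[k] / q[k]) for k in p if p[k] > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions.

    With log base 2 the result is bounded in [0, 1]: 0 means identical
    behavior, 1 means completely disjoint behavior.
    """
    keys = set(p) | set(q)
    p = {k: p.get(k, 0.0) for k in keys}
    q = {k: q.get(k, 0.0) for k in keys}
    m = {k: 0.5 * (p[k] + q[k]) for k in keys}  # midpoint distribution
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Established fingerprint vs. the current observation window
baseline = {"read": 0.55, "write": 0.20, "send": 0.12, "query": 0.08, "escalate": 0.05}
current = {"read": 0.30, "write": 0.15, "delete": 0.35, "query": 0.15, "send": 0.05}
score = js_divergence(baseline, current)  # higher = more drift
```

An agent that suddenly starts issuing deletes, as in `current` above, pushes the score up sharply; identical distributions score exactly 0.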
Drift Scores
Thresholds and Severity
| Drift score | Severity | Action |
| --- | --- | --- |
| 0.0 – 0.2 | Low | Log only |
| 0.2 – 0.4 | Medium | Increase scrutiny |
| 0.4 – 0.6 | High | Alert + reduce trust |
| 0.6 – 1.0 | Critical | Suspend agent |
Thresholds are configurable per preset and per agent.
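For illustration, the default bands above can be modeled as a lookup like the following (the structure and labels here are assumptions for the sketch, not Nomotic's documented configuration schema):

```python
# Hypothetical threshold table mirroring the default bands above;
# real Nomotic configuration keys may differ.
DEFAULT_THRESHOLDS = [
    (0.2, "low", "log only"),
    (0.4, "medium", "increase scrutiny"),
    (0.6, "high", "alert + reduce trust"),
    (1.0, "critical", "suspend agent"),
]

def classify_drift(score, thresholds=DEFAULT_THRESHOLDS):
    """Map a JSD drift score in [0, 1] to a (severity, action) pair."""
    for upper_bound, severity, action in thresholds:
        if score <= upper_bound:
            return severity, action
    raise ValueError(f"drift score out of range: {score}")
```

Per-agent overrides would simply pass a different threshold list.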
Archetype Priors
Before an agent has enough history, its fingerprint is seeded from the archetype's behavioral prior. For example, a customer-experience agent's prior expects:
- 55% read, 20% write, 12% send, 8% query, 5% escalate
- Peak hours: 10:00–14:00
- Active during business hours only
- 92% ALLOW, 4% MODIFY, 3% ESCALATE, 1% DENY
As real observations accumulate, the agent's actual behavior gradually replaces the prior; the prior is weighted by prior_weight, which defaults to 50 synthetic observations.
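The blending can be sketched as follows, treating the prior as prior_weight synthetic observations (function and variable names are illustrative, not Nomotic's API):

```python
def blended_fingerprint(prior, observed_counts, prior_weight=50):
    """Blend an archetype prior with real observation counts.

    The prior behaves like `prior_weight` synthetic observations, so it
    dominates early on and fades as real history accumulates.
    """
    n = sum(observed_counts.values())
    total = prior_weight + n
    keys = set(prior) | set(observed_counts)
    return {
        k: (prior_weight * prior.get(k, 0.0) + observed_counts.get(k, 0)) / total
        for k in keys
    }

prior = {"read": 0.55, "write": 0.20, "send": 0.12, "query": 0.08, "escalate": 0.05}
observed = {"read": 40, "write": 10}  # 50 real observations so far
fingerprint = blended_fingerprint(prior, observed)
```

With 50 real observations matching the default 50-observation prior weight, the fingerprint sits exactly halfway between the prior and the observed behavior.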
Drift Weights
Not all drift is equally concerning. Each archetype defines drift_weights that amplify or dampen the drift signal from each fingerprint component: a weight above 1.0 makes divergence in that component count more toward the overall drift score, while a weight below 1.0 makes it count less.
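A minimal sketch of how such weights could combine per-component drift scores (the component names and weight values below are hypothetical, not an archetype shipped with Nomotic):

```python
# Hypothetical drift_weights for a customer-experience archetype.
drift_weights = {
    "action_distribution": 1.0,    # baseline sensitivity
    "target_distribution": 1.5,    # touching new resources is more suspicious
    "temporal_pattern": 0.5,       # schedule shifts are often benign
    "outcome_distribution": 2.0,   # changes in verdict mix matter most
}

def weighted_drift(component_scores, weights):
    """Combine per-component JSD scores into one weighted drift score."""
    total_weight = sum(weights[k] for k in component_scores)
    return sum(
        component_scores[k] * weights[k] for k in component_scores
    ) / total_weight
```

Under these example weights, the same amount of raw divergence in the outcome distribution moves the overall score four times as much as in the temporal pattern.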
Continuous Monitoring
Human Reviewer Drift
Why It Matters
Human oversight is only effective if humans are actually overseeing. Reviewer drift detects when oversight quality degrades:
- A reviewer who normally handles 50 escalations/week drops to 5
- Approval rate jumps from 70% to 99% (rubber-stamping)
- Response times increase from minutes to days
- A reviewer stops reviewing certain agent types entirely
Oversight Metrics
| Metric | Question it answers |
| --- | --- |
| Approval rate | What percentage of escalations does the reviewer approve? |
| Review time | How long does the reviewer spend on each decision? |
| Consistency | Are similar cases getting similar decisions? |
| Engagement | Is the reviewer actively reviewing or rubber-stamping? |
HumanDriftMonitor
The HumanDriftMonitor tracks reviewer engagement patterns:
- Review frequency — how often the reviewer handles escalations
- Approval rate — percentage of escalations approved vs. denied
- Response time — latency from escalation to resolution
- Coverage — which agent types and archetypes the reviewer handles
Drift in any of these patterns triggers alerts visible in the dashboard and via the API.
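As an illustration of the kind of state such a monitor needs per reviewer (the class and field names here are assumptions, not the real HumanDriftMonitor API):

```python
from dataclasses import dataclass, field

@dataclass
class ReviewerStats:
    """Minimal sketch of per-reviewer oversight signals."""
    approvals: int = 0
    denials: int = 0
    response_times_s: list = field(default_factory=list)

    def record(self, approved: bool, response_time_s: float) -> None:
        """Record one resolved escalation."""
        if approved:
            self.approvals += 1
        else:
            self.denials += 1
        self.response_times_s.append(response_time_s)

    @property
    def approval_rate(self) -> float:
        total = self.approvals + self.denials
        return self.approvals / total if total else 0.0

    @property
    def mean_response_s(self) -> float:
        rt = self.response_times_s
        return sum(rt) / len(rt) if rt else 0.0
```

Comparing a recent window of these statistics against the reviewer's own historical baseline is what surfaces the drift patterns listed above.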
Detecting Rubber-Stamping
:::warning
A reviewer approving 95%+ of escalations with declining review times is likely rubber-stamping. Nomotic flags this and can escalate to a secondary reviewer.
:::
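The heuristic described in the warning might look like this in outline (the 95% cutoff comes from the warning above; the review-time factor is an assumed parameter, not a documented default):

```python
def looks_like_rubber_stamping(approval_rate, recent_review_s, baseline_review_s,
                               rate_cutoff=0.95, time_factor=0.5):
    """Flag near-total approval combined with review times far below
    the reviewer's own historical baseline."""
    return (approval_rate >= rate_cutoff
            and recent_review_s < time_factor * baseline_review_s)
```

Either signal alone is weak evidence (a high approval rate may just mean well-behaved agents); it is the combination that warrants escalating to a secondary reviewer.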
Configuring Thresholds
API
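A hypothetical sketch of what a threshold-update request could look like over an HTTP API (the endpoint path and payload schema are assumptions; consult the actual Nomotic API reference):

```python
import json

# Hypothetical payload; real field names may differ.
payload = {
    "agent_id": "agent-42",
    "drift_thresholds": {"low": 0.2, "medium": 0.4, "high": 0.6, "critical": 1.0},
}
body = json.dumps(payload)
# e.g. sent as: PUT /api/v1/agents/agent-42/drift-thresholds
```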
CLI