The Nomotic Drift Taxonomy
The AI governance industry treats drift as a single concept. Nomotic doesn't. Drift has structure. Different kinds of drift have different causes, different detection methods, different risk profiles, and different remediation paths. Treating all drift the same is like treating all network faults the same. You can't fix a routing loop the same way you fix a cable cut.
The Nomotic Drift Taxonomy defines a two-layer classification system. Drift distributions measure what changed. Drift scopes describe where and how the change manifests. Every drift event Nomotic detects is classified by both.
Drift Distributions
Distributions are the signals measured within a behavioral fingerprint. Each captures a different dimension of behavioral change. Together they form the complete picture of how an agent's behavior has shifted. All five are measured continuously and independently.
Action Drift
The distribution of action types the agent performs has changed.
An agent whose baseline was 70% read / 30% query shifting to 50% read / 30% write / 20% query has action drift. The agent is doing different kinds of things than it used to.
Measurement: Jensen-Shannon divergence between the baseline action type distribution and the recent window action type distribution.
Example signal: A research agent that historically reads and queries starts issuing write and delete actions.
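The Jensen-Shannon comparison above can be sketched in a few lines using only the standard library. The `jsd` helper and example distributions below are illustrative, not Nomotic's actual implementation:

```python
import math

def jsd(p, q):
    """Jensen-Shannon divergence between two discrete distributions,
    given as {category: probability} dicts. With log base 2 the result
    is bounded: 0.0 (identical) to 1.0 (completely disjoint)."""
    keys = set(p) | set(q)
    # Mixture distribution: midpoint of the two inputs
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a):
        # KL divergence of a against the mixture m
        return sum(a[k] * math.log2(a[k] / m[k])
                   for k in keys if a.get(k, 0.0) > 0.0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Baseline vs. recent-window action mix from the example above
baseline = {"read": 0.70, "query": 0.30}
recent = {"read": 0.50, "write": 0.30, "query": 0.20}
action_drift = jsd(baseline, recent)  # a moderate shift, roughly 0.17
```

The same comparison applies unchanged to target and outcome distributions; only the categories differ.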
Target Drift
The distribution of targets the agent operates on has changed.
An agent that normally accesses /data/reports 80% of the time and starts accessing /data/payroll 40% of the time has target drift. The agent is operating in different places than it used to.
Measurement: Jensen-Shannon divergence between the baseline target distribution and the recent window target distribution.
Example signal: A customer service agent whose targets were customer records and order data starts accessing internal financial systems.
Temporal Drift
When and how fast the agent acts has changed.
A composite signal with two components: hourly activity distribution (which hours of the day the agent is active, weighted 60%) and rate deviation (how much the action frequency has changed, weighted 40%). An agent that was business-hours-only and starts operating at 2 AM, or one that doubles its action rate, has temporal drift.
Measurement: Weighted combination of hourly distribution JSD and normalized rate deviation.
Example signal: A compliance-officer agent that ran during business hours starts executing actions at 3 AM on weekends.
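The weighted combination can be sketched directly. The 60/40 weights come from the text above; the normalization of rate deviation (relative change, capped at 1.0) is an assumed convention:

```python
def temporal_drift(hourly_jsd, baseline_rate, recent_rate,
                   w_hourly=0.60, w_rate=0.40):
    """Composite temporal signal: hourly-activity JSD weighted 60%,
    normalized rate deviation weighted 40%."""
    # Assumed normalization: relative rate change, capped at 1.0
    rate_dev = min(abs(recent_rate - baseline_rate) / max(baseline_rate, 1e-9), 1.0)
    return w_hourly * hourly_jsd + w_rate * rate_dev

# Agent doubled its action rate with a moderate shift in active hours
score = temporal_drift(hourly_jsd=0.20, baseline_rate=40, recent_rate=80)
# 0.6 * 0.20 + 0.4 * 1.0 = 0.52
```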
Outcome Drift
The distribution of governance verdicts the agent receives has changed.
An agent that was 95% ALLOW / 3% ESCALATE / 2% DENY shifting to 70% ALLOW / 20% DENY / 10% ESCALATE has outcome drift. Governance is responding to this agent differently than it used to, which means the agent's behavior is triggering different governance signals.
Measurement: Jensen-Shannon divergence between the baseline verdict distribution and the recent window verdict distribution.
Example signal: An agent that rarely triggered denials starts accumulating DENY and ESCALATE verdicts, suggesting its behavior is testing governance boundaries more frequently.
:::note Outcome drift is a second-order signal. It reflects changes in how governance evaluates the agent, which is itself a product of changes in what the agent does. Outcome drift that appears without corresponding action or target drift can indicate that governance rules changed rather than agent behavior. :::
Semantic Drift
The meaning-level mapping between the agent's instructions and its actions has changed, even when the structural distributions remain stable.
This is the distribution that the other four cannot see. An agent instructed to "research flights" that gradually starts "booking flights" may maintain identical action type distributions, identical target distributions, identical timing, and identical approval rates. All four structural distributions stay clean. But the word "research" now maps to a fundamentally different operational behavior than it did at baseline.
Measurement: Divergence between the current semantic-action map (which instruction terms map to which action patterns) and the anchored baseline established in the behavioral contract.
Example signal: A vacation planning agent where "research" originally mapped to 95% read / 5% query actions begins mapping "research" to 60% read / 30% write / 10% query. The agent is still "researching" according to its instructions, but the operational meaning of research has shifted toward booking.
:::tip Semantic drift requires instruction context. When agents produce reasoning artifacts or operate through framework adapters that expose task metadata, the semantic observer can track which instruction terms drove which actions. For agents where only raw actions are visible, operators can declare semantic anchors in the behavioral contract and the system monitors whether the action patterns associated with those terms remain stable. :::
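When anchors are declared, the semantic check reduces to a per-term distribution comparison. A sketch using the same bounded Jensen-Shannon measure as the structural distributions; the anchor values, the aggregation by worst-case term, and the helper itself are all illustrative:

```python
import math

def jsd(p, q):
    """Jensen-Shannon divergence (log base 2, bounded 0.0 to 1.0)."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a):
        return sum(a[k] * math.log2(a[k] / m[k])
                   for k in keys if a.get(k, 0.0) > 0.0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Anchor declared in the behavioral contract (illustrative values)
anchors = {"research": {"read": 0.95, "query": 0.05}}
# Observed semantic-action map from recent telemetry
observed = {"research": {"read": 0.60, "write": 0.30, "query": 0.10}}

# Assumed aggregation: worst per-term divergence from its anchor
semantic_drift = max(jsd(anchors[term], observed[term]) for term in anchors)
```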
Drift Distributions — Quick Reference
| Distribution | What changed | Measurement | Signal |
| --- | --- | --- | --- |
| Action | Action type mix changed | JSD of action type frequencies | Agent starts doing different kinds of things |
| Target | Target mix changed | JSD of target frequencies | Agent starts operating in different places |
| Temporal | Timing or rate changed | Hourly JSD + rate deviation | Agent operates at different times or speeds |
| Outcome | Governance verdicts changed | JSD of verdict frequencies | Governance responds to agent differently |
| Semantic | Instruction meaning changed | Semantic-action map divergence | Agent reinterprets what its instructions mean |
Drift Scopes
Scopes describe the structure of the drift event: which agents are affected, how they relate to each other, and what mechanism caused the drift. Every scope can exhibit any combination of the five distributions. An individual agent can have semantic drift. A fleet can have aggregate temporal drift. A coordinated chain can propagate action drift that amplifies at each hop.
The scope tells you the shape of the problem. The distributions tell you what changed within that shape.
Agent Drift
One agent's behavioral fingerprint diverges from its established baseline across one or more distributions.
This is the foundational drift scope. The DriftMonitor maintains a sliding window per agent and compares recent behavior against the full baseline fingerprint. When any distribution or the weighted overall score exceeds the configured threshold, an alert fires.
Detection: Per-agent JSD comparison across all five distributions.
Remediation: Agent-level. Review the agent's configuration, retrain or re-prompt, tighten the behavioral contract, or investigate what changed in the agent's inputs.
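The per-agent sliding window can be sketched as follows. The `SlidingWindow` class is a hypothetical illustration; the 100-action default matches the documented window size:

```python
from collections import Counter, deque

class SlidingWindow:
    """Recent-observation buffer whose distribution is compared against
    the full baseline fingerprint. 100 actions is the documented default."""
    def __init__(self, size=100):
        self.buffer = deque(maxlen=size)  # oldest action evicted first

    def record(self, action_type):
        self.buffer.append(action_type)

    def distribution(self):
        """Action-type frequencies over the current window."""
        counts = Counter(self.buffer)
        n = len(self.buffer)
        return {k: v / n for k, v in counts.items()}

window = SlidingWindow(size=3)
for action in ["read", "read", "query", "write"]:
    window.record(action)
# Oldest "read" evicted; window now holds read, query, write
```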
Human Drift
Human reviewers shift their oversight behavior over time. Approval rates creep upward, response times lengthen, coverage narrows.
Governance fails not when agents misbehave, but when humans stop paying attention. A 93% approval rate with 2-second review times across 500 decisions is not oversight. It's rubber-stamping. The HumanDriftMonitor tracks reviewer behavioral patterns and flags when oversight quality degrades.
Detection: Statistical tracking of approval rate, response time, rationale depth, coverage gaps, and engagement patterns per reviewer.
Remediation: Process-level. Reviewer rotation, workload rebalancing, additional training, or escalation routing adjustments.
:::note Human drift detection monitors behavioral proxies, not surveillance metrics. No keystroke tracking, no attention monitoring. The system measures what can be observed from governance outcomes: how fast reviews happen, what percentage are approved, and whether coverage is comprehensive. See Bidirectional Drift Detection for full details. :::
Fleet Drift
The fleet's aggregate behavioral distribution shifts over time, even when no individual agent has crossed its own drift threshold.
If 200 agents were collectively 80% read-heavy six months ago and are now 60% read-heavy, the fleet has drifted. No single agent may have shifted enough to trigger an individual alert. But the aggregate distribution has moved meaningfully, and that movement may reflect a systemic change that demands investigation.
Detection: Aggregate fleet-level fingerprint compared against a fleet baseline using JSD across all five distributions.
Remediation: Strategic. Assess whether the shift reflects legitimate workload evolution, a policy change that altered behavior fleet-wide, or a systemic problem. Fleet drift may be acceptable if the underlying business requirements changed.
Correlated Drift
Multiple agents independently shift in the same direction at the same time because they share an upstream input.
A model version update, a data source change, a prompt template revision. All agents exposed to the shared input shift in parallel. They aren't communicating with each other. They aren't in a workflow chain. They each independently respond to the same environmental change and produce similar behavioral shifts.
Detection: Drift vector analysis. Compute the drift vector (direction and magnitude across all five distributions) for each drifting agent, then measure cosine similarity across vectors within a configurable time window. When three or more agents drift in the same direction with similarity above threshold, it's correlated.
Remediation: Upstream. Identify the shared input that changed and review its behavioral impact. The fix is at the source, not at the individual agents. Consider staged rollouts for model updates and prompt template changes.
Example: A model endpoint upgrades from v3.1 to v3.2. Seven finance-agent instances all shift their action distribution toward more write actions within 8 minutes. No agent is interacting with another. They all independently responded to the same model change. Correlated action drift.
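The drift-vector comparison can be sketched with plain cosine similarity. Everything below is illustrative: a real detector would compare all pairs within the time window rather than checking against a single reference vector:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two drift vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def looks_correlated(drift_vectors, sim_threshold=0.90, min_agents=3):
    """Flag correlated drift when at least min_agents vectors in the
    window point the same way as the first drifting agent's vector."""
    ref = drift_vectors[0]
    aligned = [v for v in drift_vectors
               if cosine_similarity(ref, v) >= sim_threshold]
    return len(aligned) >= min_agents

# Three agents shifting toward write-heavy action mixes; each vector is
# [action, target, temporal, outcome, semantic] drift magnitudes
vectors = [
    [0.40, 0.10, 0.00, 0.05, 0.00],
    [0.35, 0.12, 0.02, 0.06, 0.01],
    [0.42, 0.08, 0.01, 0.05, 0.00],
]
```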
Coordinated Drift
One agent's output changes another agent's behavior, which changes a third's, and the shift compounds across iterations.
This is iterative context injection. Agent A's output feeds Agent B's context. Agent B shifts. Agent B's output feeds Agent C's context. Agent C shifts further. The behavioral state at the end of the chain can diverge substantially from any starting point, producing fleet-wide behavioral shifts that no individual agent would have produced alone.
Detection: Causal propagation tracing. The FleetBehavioralMonitor maintains an interaction map (which agents consume which other agents' outputs) and tracks the temporal sequence of drift onset across connected agents. When drift appears in sequence along a known interaction chain, with each agent drifting after consuming the previous agent's output, it's coordinated.
Remediation: Structural. Break the amplification chain by injecting governance checkpoints between agents. The intervention point is typically the earliest agent in the chain where a checkpoint can prevent the drift from propagating further.
Key metric: The amplification factor at each hop. A chain where drift doubles at each hop is significantly more dangerous than one where it attenuates. Risk scales with chain length multiplied by amplification factor.
Example: Agent A (data gatherer) subtly reinterprets "summarize" to include editorialization. Agent B (analyst) consumes A's summaries and, influenced by the editorial framing, shifts its own recommendations. Agent C (executor) acts on B's recommendations, now two hops removed from the original intent. The compounding semantic drift across A → B → C produces actions that none of the agents would have taken individually. Coordinated semantic drift.
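The key metric lends itself to a short sketch. The risk formula below is one hypothetical reading of "chain length multiplied by amplification factor", not Nomotic's exact scoring:

```python
def amplification_factors(hop_magnitudes):
    """Drift magnitude at each hop divided by the previous hop's magnitude.
    A factor > 1.0 means drift is growing as it propagates."""
    return [b / a for a, b in zip(hop_magnitudes, hop_magnitudes[1:])]

def chain_risk(hop_magnitudes):
    """Hypothetical risk score: chain length times mean amplification factor."""
    factors = amplification_factors(hop_magnitudes)
    return len(hop_magnitudes) * (sum(factors) / len(factors))

# Drift doubling at each hop across a three-agent chain (A -> B -> C)
risk = chain_risk([0.10, 0.20, 0.40])  # 3 agents * mean factor 2.0 = 6.0
```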
:::caution The distinction between correlated and coordinated drift is critical. Correlated drift is parallel and independent. Coordinated drift is serial and causal. The remediation for correlated drift is upstream (fix the shared input). The remediation for coordinated drift is structural (break the amplification chain). Applying the wrong remediation wastes time and leaves the actual problem unaddressed. :::
Drift Scopes — Quick Reference
| Scope | Affects | Signature | Detection | Remediation |
| --- | --- | --- | --- | --- |
| Agent | Single agent | Fingerprint divergence | Per-agent JSD across 5 distributions | Review config, retrain, tighten contract |
| Human | Reviewer(s) | Oversight degradation | Approval rate, response time, coverage tracking | Rotate reviewers, rebalance workload |
| Fleet | All agents | Aggregate distribution shift | Fleet-level fingerprint JSD against baseline | Assess workload evolution, review policy alignment |
| Correlated | Multiple agents (parallel) | Shared upstream input | Drift vector cosine similarity in time window | Identify and review the shared input change |
| Coordinated | Multiple agents (serial) | Output-to-input propagation | Causal chain tracing through interaction map | Break the amplification chain, inject checkpoints |
The Taxonomy Matrix
Every drift event is classified by its scope (row) and the distributions that changed (columns). An event can involve multiple distributions. The matrix is not theoretical. It describes exactly how Nomotic classifies drift events in production.
| Scope | Action | Target | Temporal | Outcome | Semantic |
| --- | --- | --- | --- | --- | --- |
| Agent | Action types changed | Targets shifted | Timing/rate changed | Verdicts changed | Instruction meaning changed |
| Human | — | — | Response timing changed | Approval patterns changed | — |
| Fleet | Aggregate action mix shifted | Aggregate target mix shifted | Fleet timing shifted | Fleet outcome shift | Fleet meaning shift |
| Correlated | Parallel action shifts | Parallel target shifts | Parallel timing shifts | Parallel outcome shifts | Parallel reinterpretation |
| Coordinated | Action shifts propagating | Target shifts propagating | Timing changes cascading | Outcome changes cascading | Meaning reframing propagating |
Reading the Taxonomy
Every drift event Nomotic reports includes its classification, in the format scope + distributions. Examples of how to read these classifications:
"Agent semantic drift" — One agent's instruction-to-action mapping has shifted. The agent is reinterpreting what its instructions mean. Structural distributions are stable.
"Agent action and target drift" — One agent is doing different kinds of things against different targets. Two structural distributions shifted together, which is more concerning than either alone.
"Correlated semantic drift across 7 agents" — Seven agents independently reinterpreted the same instruction terms in the same direction at the same time. Something upstream changed. A model update is the most likely cause.
"Coordinated action and semantic drift (3-hop chain)" — One agent's behavioral shift propagated through a three-agent interaction chain, with both the action mix and semantic interpretation compounding at each hop. The amplification factor and chain length determine severity.
"Fleet temporal drift" — The aggregate fleet timing pattern has shifted over the past month, even though no individual agent crossed its own temporal drift threshold. The shift is gradual and distributed across many agents.
"Agent outcome drift without action or target drift" — Governance is responding differently to this agent, but the agent's own behavior hasn't changed. This usually means governance rules changed, not the agent.
Severity Thresholds
Drift severity is determined by the overall weighted score. Each archetype defines drift weights that amplify or dampen specific distributions based on what matters most for that type of agent.
| Score | Severity | Response |
| --- | --- | --- |
| 0.00 – 0.05 | None | No meaningful drift |
| 0.05 – 0.15 | Low | Within normal variation. Logged but no action |
| 0.15 – 0.35 | Moderate | Worth investigating. Trust reduced slightly |
| 0.35 – 0.60 | High | Active concern. Alert generated. Trust reduced |
| 0.60 – 1.00 | Critical | Agent behavior has fundamentally changed. Veto-level intervention |
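The bands above map directly to a lookup. A minimal sketch; the half-open boundary convention (a score of exactly 0.15 falls into the higher band) is an assumption:

```python
def severity(score: float) -> str:
    """Map an overall weighted drift score (0.0 to 1.0) to a severity band."""
    if score < 0.05:
        return "none"
    if score < 0.15:
        return "low"
    if score < 0.35:
        return "moderate"
    if score < 0.60:
        return "high"
    return "critical"
```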
Drift Weights by Archetype
Different archetypes weight distributions differently. A financial auditor cares more about action and target drift (accessing new data is concerning) than temporal drift (shifting schedules is less critical). A devops automator cares more about temporal drift (unexpected execution timing is a red flag) than action drift (varied action types are normal).
These weights are configured per archetype in the archetype prior. They can be overridden in the behavioral contract for individual agents.
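Combining per-distribution scores under archetype weights reduces to a weighted average. The weight values below are invented for illustration; actual values live in the archetype prior:

```python
def overall_drift(scores, weights):
    """Weighted overall drift score across the five distributions,
    normalized so the result stays in [0.0, 1.0]."""
    total = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total

# Hypothetical financial-auditor weights: action and target drift
# amplified, temporal drift dampened (illustrative values only)
auditor_weights = {"action": 0.30, "target": 0.30, "temporal": 0.10,
                   "outcome": 0.15, "semantic": 0.15}
scores = {"action": 0.40, "target": 0.10, "temporal": 0.05,
          "outcome": 0.05, "semantic": 0.20}
overall = overall_drift(scores, auditor_weights)  # 0.1925, a moderate score
```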
Glossary
Amplification factor — In coordinated drift, the ratio of drift magnitude at one hop versus the previous hop. An amplification factor > 1.0 means drift is growing as it propagates. < 1.0 means it's attenuating.
Behavioral contract — A versioned, cryptographically sealed artifact declaring an agent's behavioral expectations. Contracts carry drift thresholds, behavioral invariants, and semantic anchors.
Behavioral fingerprint — The statistical profile of an agent's behavior across five distributions: action, target, temporal, outcome, and semantic. Built automatically from governance telemetry.
Correlated drift — Multiple agents drifting in the same direction at the same time due to a shared upstream input. Parallel and independent. Remediation targets the shared input.
Coordinated drift — Drift that propagates through an agent interaction chain, where one agent's output shifts another's behavior. Serial and causal. Remediation targets the chain structure.
Cosine similarity — The measure used to compare drift vectors across agents in correlated drift detection. Vectors with high cosine similarity are drifting in the same direction.
Distribution — One of the five behavioral signals tracked in a fingerprint: action, target, temporal, outcome, semantic. Each produces a 0.0 to 1.0 drift score independently.
Drift velocity — The rate of change of a drift score over time. Used by the trajectory engine to project future drift and trigger proactive interventions.
Drift vector — The five-dimensional vector representing an agent's drift direction and magnitude: [action_drift, target_drift, temporal_drift, outcome_drift, semantic_drift]. Used in correlated drift detection.
Fleet drift — Aggregate behavioral shift across all agents, detectable even when no individual agent crosses its own threshold. Measures the fleet-level fingerprint against a fleet baseline.
Interaction map — The graph of which agents consume which other agents' outputs. Required for coordinated drift detection. Registered via the runtime or fleet monitor.
Jensen-Shannon divergence (JSD) — The metric used to compare probability distributions. Bounded between 0.0 (identical) and 1.0 (completely disjoint). Symmetric and computed from standard library math only.
Propagation chain — In coordinated drift, the ordered sequence of agents through which drift traveled, traced by temporal onset ordering along the interaction map.
Scope — One of the five structural contexts where drift manifests: agent, human, fleet, correlated, coordinated. The scope describes the shape of the problem.
Semantic anchor — A mapping from an instruction term to its expected action and target distributions. Stored in the behavioral contract. The reference point against which semantic drift is measured.
Semantic-action map — The observed mapping between instruction terms and the action patterns they produce. Compared against semantic anchors to compute semantic drift.
Sliding window — The recent-observation buffer used for drift detection. Default: 100 actions. Drift is computed by comparing this window against the full baseline fingerprint.