Reasoning Artifacts

A reasoning artifact is a structured representation of an AI agent's deliberation process. It externalizes the agent's reasoning in a format that governance systems can evaluate before an action occurs.

Instead of governance evaluating only the intended action (what the agent wants to do), reasoning artifacts let governance evaluate the thinking behind it: what the agent considered, what alternatives it rejected, what it's uncertain about, and what authority it claims.

Why Reasoning Artifacts

Standard governance evaluates actions. The agent says "I want to write to customer_records" and governance checks scope, authority, and resource limits.

Reasoning artifacts add a layer: the agent explains why it wants to write to customer_records, what alternatives it considered, what constraints it identified, and how confident it is. This enables governance to catch problems that action-level evaluation misses — an agent with correct scope but flawed reasoning, or an agent that didn't consider obvious alternatives.

Declaring uncertainty is a sign of reasoning quality, not weakness. An agent that says "I'm 60% confident because I'm missing the customer's payment history" is more trustworthy than one that says "I'm 99% confident" without acknowledging gaps.

Schema Structure

Reasoning artifacts conform to the Nomotic Protocol schema and contain six sections:

Identity

Links the artifact to a specific agent and its governance context.

| Field | Required | Description |
| --- | --- | --- |
| agent_id | Yes | Unique identifier for the agent |
| certificate_id | No | Reference to the agent's birth certificate |
| envelope_id | No | Authority envelope the agent is operating under |
| session_id | No | Links to a broader interaction session |

Task

What the agent is trying to accomplish.

| Field | Required | Description |
| --- | --- | --- |
| goal | Yes | Plain language description of the objective |
| origin | Yes | What initiated the task: user_request, scheduled, event_triggered, agent_initiated, or escalation_received |
| origin_id | No | Identifier for the originating entity (hashed for privacy) |
| constraints_identified | Yes | Constraints the agent identified as relevant |

Each constraint has a type (policy, regulatory, authority, ethical, resource, temporal, technical, or organizational), a description, and a source (URI format where possible).
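To make the shape concrete, a single constraint entry might look like the following sketch. The field names (type, description, source) come from the schema description above; the values, the URL, and the allowed_types check are invented for illustration.

```python
# Hypothetical constraint entry; field names follow the schema above,
# values are invented for illustration.
constraint = {
    "type": "regulatory",  # policy, regulatory, authority, ethical,
                           # resource, temporal, technical, or organizational
    "description": "Customer records fall under data-minimization rules",
    "source": "https://example.com/policies/data-handling",  # URI where possible
}

# The eight constraint types listed in the schema description.
allowed_types = {
    "policy", "regulatory", "authority", "ethical",
    "resource", "temporal", "technical", "organizational",
}
assert constraint["type"] in allowed_types
```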

Reasoning

The agent's deliberation process, structured as discrete factors that governance can individually evaluate.

| Field | Required | Description |
| --- | --- | --- |
| factors | Yes | Considerations evaluated (at least one must be type constraint) |
| alternatives_considered | Yes | Alternative actions considered and why they were rejected |
| narrative | No | Human-readable summary (not evaluated by governance — present for audit readability) |

Each factor includes:

  • id — unique within this artifact, referenced by justifications

  • type — constraint, context, precedent, evidence, inference, uncertainty, alternative, or risk

  • description, source, assessment — what it is, where it came from, what the agent concluded

  • influence — decisive, significant, moderate, minor, or noted

  • confidence — 0.0 to 1.0

Each alternative specifies a method from the Nomotic method taxonomy, optional context, and reason_rejected.
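As an illustrative sketch, one factor and one rejected alternative might look like this. The keys mirror the field lists above; the values, identifiers, and scenario are invented.

```python
# Hypothetical reasoning factor; keys follow the factor field list above.
factor = {
    "id": "f1",                  # unique in this artifact; cited by justifications
    "type": "constraint",        # at least one factor must have this type
    "description": "Write scope is limited to the customer_records table",
    "source": "envelope:env-123",
    "assessment": "The intended write stays within the granted scope",
    "influence": "decisive",     # decisive | significant | moderate | minor | noted
    "confidence": 0.9,           # 0.0 to 1.0
}

# Hypothetical rejected alternative; method is drawn from the taxonomy.
alternative = {
    "method": "update",
    "context": "Patch an existing record instead of writing a new one",
    "reason_rejected": "No existing record exists for this customer",
}

assert 0.0 <= factor["confidence"] <= 1.0
```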

Decision

What the agent intends to do and how it connects to the reasoning.

| Field | Required | Description |
| --- | --- | --- |
| intended_action | Yes | The action the agent will take if governance approves (method + target) |
| justifications | Yes | Links between the decision and specific reasoning factors (by factor_id) |
| authority_claim | Yes | What authority the agent believes it is operating under |

The authority_claim specifies an envelope_type: standard, conditional, delegated, escalated, or pre_authorized. For conditional authority, the agent lists the conditions it believes are satisfied.
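A conditional authority claim might be shaped like the sketch below. Only envelope_type and the idea of listing satisfied conditions come from the text above; the key name conditions_satisfied and the condition values are assumptions made for illustration.

```python
# Hypothetical authority claim. "conditions_satisfied" is an assumed key
# name; the schema only says conditional claims list satisfied conditions.
authority_claim = {
    "envelope_type": "conditional",  # standard | conditional | delegated
                                     # | escalated | pre_authorized
    "conditions_satisfied": [
        "within_business_hours",
        "record_count_below_limit",
    ],
}

valid_envelope_types = {
    "standard", "conditional", "delegated", "escalated", "pre_authorized",
}
assert authority_claim["envelope_type"] in valid_envelope_types
```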

Uncertainty

What the agent doesn't know.

| Field | Required | Description |
| --- | --- | --- |
| unknowns | Yes | Information identified as relevant but unavailable |
| assumptions | Yes | Assumptions made to proceed despite incomplete information |
| overall_confidence | Yes | Holistic confidence in the decision (0.0–1.0) |

Each unknown has a description and impact (how missing information might affect the decision). Each assumption has a description, basis (why it's reasonable), and risk_if_wrong.
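Putting those pieces together, an uncertainty section might look like the following sketch; the field names come from the tables and sentences above, and the content is invented.

```python
# Hypothetical uncertainty section; field names follow the schema above.
uncertainty = {
    "unknowns": [
        {
            "description": "The customer's payment history is unavailable",
            "impact": "Refund eligibility may be judged on incomplete data",
        },
    ],
    "assumptions": [
        {
            "description": "The account on file is the customer's primary account",
            "basis": "It is the only account linked to this email address",
            "risk_if_wrong": "A refund could be issued to the wrong account",
        },
    ],
    "overall_confidence": 0.6,  # holistic confidence, 0.0 to 1.0
}
assert 0.0 <= uncertainty["overall_confidence"] <= 1.0
```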

Plan (Optional)

For multi-step workflows, provides context about where this reasoning fits in a larger plan.

| Field | Required | Description |
| --- | --- | --- |
| workflow_id | Yes | Identifier for the overall workflow |
| total_steps | Yes | Total steps in the plan |
| current_step | Yes | Which step this artifact represents (1-indexed) |
| step_description | Yes | What this step accomplishes |
| dependencies | No | Artifact IDs of previous steps this step depends on |
| remaining_steps | No | Descriptions of subsequent steps (enables cascading impact assessment) |
| rollback_capability | No | Whether this step can be undone if a subsequent step fails |
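For a three-step workflow, the plan section of the second step's artifact might look like this sketch; field names follow the table above, and all identifiers and descriptions are invented.

```python
# Hypothetical plan section for step 2 of a 3-step workflow.
plan = {
    "workflow_id": "wf-onboard-042",
    "total_steps": 3,
    "current_step": 2,  # 1-indexed
    "step_description": "Write the validated record to customer_records",
    "dependencies": ["artifact-step-1"],  # artifact IDs of earlier steps
    "remaining_steps": ["Notify the account owner of the new record"],
    "rollback_capability": True,  # this write can be undone if step 3 fails
}
assert 1 <= plan["current_step"] <= plan["total_steps"]
```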

Method Taxonomy

The schema defines a standardized set of action methods organized by category:

| Category | Methods |
| --- | --- |
| Data | query, read, write, update, delete, archive, restore, export, import |
| Retrieval | fetch, search, find, scan, filter, extract, pull |
| Decision | approve, deny, escalate, recommend, classify, prioritize, evaluate, validate, check, rank, predict |
| Communication | notify, request, respond, reply, broadcast, subscribe, publish, send, call |
| Orchestration | schedule, assign, delegate, invoke, retry, cancel, pause, resume, route, run, start, open |
| Transaction | transfer, refund, charge, reserve, release, reconcile, purchase |
| Security | authenticate, authorize, revoke, elevate, sign, register |
| System | configure, deploy, monitor, report, log, audit, sync |
| Generation | generate, create, summarize, transform, translate, normalize, merge, link, map, make |
| Control | set, take, show, turn, break, submit |
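The taxonomy above can be expressed as a lookup for validating an intended_action's method. The category-to-methods mapping below mirrors the table; the method_category helper itself is our own illustration, not part of the protocol.

```python
# The Nomotic method taxonomy, transcribed from the table above.
METHOD_TAXONOMY = {
    "Data": {"query", "read", "write", "update", "delete", "archive",
             "restore", "export", "import"},
    "Retrieval": {"fetch", "search", "find", "scan", "filter", "extract", "pull"},
    "Decision": {"approve", "deny", "escalate", "recommend", "classify",
                 "prioritize", "evaluate", "validate", "check", "rank", "predict"},
    "Communication": {"notify", "request", "respond", "reply", "broadcast",
                      "subscribe", "publish", "send", "call"},
    "Orchestration": {"schedule", "assign", "delegate", "invoke", "retry",
                      "cancel", "pause", "resume", "route", "run", "start", "open"},
    "Transaction": {"transfer", "refund", "charge", "reserve", "release",
                    "reconcile", "purchase"},
    "Security": {"authenticate", "authorize", "revoke", "elevate", "sign",
                 "register"},
    "System": {"configure", "deploy", "monitor", "report", "log", "audit", "sync"},
    "Generation": {"generate", "create", "summarize", "transform", "translate",
                   "normalize", "merge", "link", "map", "make"},
    "Control": {"set", "take", "show", "turn", "break", "submit"},
}

def method_category(method):
    """Return the taxonomy category for a method, or None if unrecognized."""
    for category, methods in METHOD_TAXONOMY.items():
        if method in methods:
            return category
    return None

assert method_category("write") == "Data"
assert method_category("teleport") is None
```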

Example
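The sketch below shows what a complete artifact might look like, assembled as a plain dictionary. It is an illustration built from the section and field names described above, not an official sample from the protocol; all identifiers and values are invented, and the optional Plan section is omitted.

```python
# Illustrative reasoning artifact (values invented; structure follows the
# sections described above; the optional "plan" section is omitted).
artifact = {
    "identity": {
        "agent_id": "agent-7f3a",
        "envelope_id": "env-123",
    },
    "task": {
        "goal": "Record the customer's updated mailing address",
        "origin": "user_request",
        "constraints_identified": [
            {"type": "policy",
             "description": "Address changes require a verified session",
             "source": "https://example.com/policies/address-change"},
        ],
    },
    "reasoning": {
        "factors": [
            {"id": "f1", "type": "constraint",
             "description": "The session is verified, as policy requires",
             "source": "session:sess-42",
             "assessment": "Policy precondition is satisfied",
             "influence": "decisive", "confidence": 0.95},
        ],
        "alternatives_considered": [
            {"method": "escalate",
             "reason_rejected": "Change is routine and within granted scope"},
        ],
    },
    "decision": {
        "intended_action": {"method": "update", "target": "customer_records"},
        "justifications": [{"factor_id": "f1"}],
        "authority_claim": {"envelope_type": "standard"},
    },
    "uncertainty": {
        "unknowns": [],
        "assumptions": [
            {"description": "The submitted address is current",
             "basis": "Provided directly by the customer",
             "risk_if_wrong": "Mail is sent to an outdated address"},
        ],
        "overall_confidence": 0.9,
    },
}

assert {"identity", "task", "reasoning", "decision", "uncertainty"} <= artifact.keys()
```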

Governance Integration

When an agent submits a reasoning artifact alongside an action, governance can evaluate both. The artifact enriches the dimension evaluation:

  • Transparency dimension scores higher when reasoning is well-structured with clear justifications

  • Ethical alignment can evaluate the reasoning factors, not just the action

  • Precedent alignment can compare reasoning patterns across similar decisions

  • Cascading impact can assess the plan.remaining_steps for downstream consequences

Reasoning artifacts are stored as part of the audit trail, providing a complete record of not just what the agent did but why it believed it should.
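One simple check of this kind can be sketched in a few lines: before any deeper dimension evaluation, a governance layer might verify that every justification actually points at a factor present in the artifact. The function below is our own hypothetical illustration, not part of the protocol.

```python
# Hypothetical pre-evaluation consistency check: every justification must
# reference a factor id that exists in the artifact's reasoning section.
def justifications_grounded(artifact):
    factor_ids = {f["id"] for f in artifact["reasoning"]["factors"]}
    return all(
        j["factor_id"] in factor_ids
        for j in artifact["decision"]["justifications"]
    )

# Minimal stub artifact for demonstration (invented values).
stub = {
    "reasoning": {"factors": [{"id": "f1"}]},
    "decision": {"justifications": [{"factor_id": "f1"}]},
}
assert justifications_grounded(stub)
```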
