Weights and Vetoes

Security says allow. Ethics says block.

Who wins?

This is not a thought experiment. Every organization deploying AI agents will face this question repeatedly, in production, with real consequences. And most organizations have no mechanism to answer it.

They have a security team. They have an ethics board. They might have bias auditors and governance committees. Each group does excellent work within its domain. But the moment two domains reach conflicting conclusions about the same action, the architecture has nothing to say. The decision defaults to whoever has the louder voice, the higher title, or the faster escalation path.

That is not governance. That is organizational politics dressed up as process.

A nomotic architecture resolves these conflicts structurally, through a system of weights and vetoes built around a Unified Confidence Score. The mechanism is straightforward. The politics of implementing it are not.

Why Consensus Fails at Runtime

The default approach to cross-domain disagreement is consensus. Get everyone in a room. Discuss the issue. Reach an agreement.

AI agents make decisions in milliseconds. A claims processing agent determining whether to approve a payout cannot wait for a meeting. Runtime governance requires a mechanism that resolves conflicts at execution speed, is designed before conflicts arise, is agreed upon by all stakeholders, and is enforced automatically.

Weights and vetoes provide that pre-commitment. They are the architectural answer to a question that most organizations try to solve with meetings.

How Vetoes Work

A veto is the simplest governance mechanism. A non-negotiable stop.

Any governance dimension with veto authority can halt an action regardless of what every other dimension concludes. No weighting. No scoring. The action stops. Eight of the 14 dimensions have veto authority: scope compliance, authority verification, resource boundaries, isolation integrity, temporal compliance, human override, ethical alignment, and jurisdictional compliance.

Vetoes are absolute by design. They protect against catastrophic failures where no business justification makes the action acceptable. A scope-violating action doesn't get weighed against its business value. An ethical violation doesn't get balanced against efficiency metrics.
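As a minimal sketch of how a veto short-circuits everything else, assuming dimension verdicts arrive as a simple name-to-verdict mapping (the result shape and dimension names here are illustrative, not part of the framework):

```python
# The eight dimensions with veto authority, per the architecture.
VETO_DIMENSIONS = {
    "scope_compliance", "authority_verification", "resource_boundaries",
    "isolation_integrity", "temporal_compliance", "human_override",
    "ethical_alignment", "jurisdictional_compliance",
}

def veto_triggered(results: dict) -> bool:
    """A veto is a hard stop: any vetoing dimension halts the action
    regardless of what every other dimension concludes."""
    return any(results.get(dim) == "VETO" for dim in VETO_DIMENSIONS)
```

Note that there is no weighing here at all: the check runs before any scoring, which is what makes the veto non-negotiable.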

This is the easy part. Most organizations can agree on what constitutes catastrophic. The hard part is everything below catastrophic.

The Unified Confidence Score

Below the veto threshold, most governance decisions fall into a gray area. Multiple dimensions have legitimate concerns of varying severity. Something needs to decide whether the action proceeds.

The UCS aggregates dimension signals into a composite score. The computation is not a simple average:

  1. Veto check: Any veto forces UCS to 0.0.

  2. Weighted average: sum(score × weight × confidence) / sum(weight × confidence). Each dimension's weight reflects its governance importance. Each dimension's confidence reflects how certain it is about its assessment.

  3. Trust modulation: Shifts the score based on agent trust. At default influence (0.2), trust can move the score by ±10%.

  4. Floor drag: A very low individual score (below 0.2) drags the overall UCS down by (0.2 - min_score) × 0.3. One badly-scoring dimension cannot be completely averaged away.

  5. Clamping: Final result bounded to [0.0, 1.0].

Above the allow threshold (default 0.7), the action proceeds. Below the deny threshold (default 0.3), the action is blocked. Between them, the action escalates to Tier 3 deliberation.

The floor drag is a safety mechanism. Without it, thirteen high scores could mask one dangerous score. The drag ensures that extreme governance concerns are felt in the final number even when other dimensions are fine.
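The five steps above can be sketched end to end. This is an illustrative implementation, not the framework's code: the data shapes are assumptions, and the trust modulation formula is one plausible additive form chosen to match the stated behavior (at the default influence of 0.2, trust in [0, 1] moves the score by ±10%):

```python
from dataclasses import dataclass

@dataclass
class DimensionResult:
    score: float        # 0.0 (block) .. 1.0 (allow)
    weight: float       # governance importance, set by the board
    confidence: float   # how certain the dimension is of its assessment
    veto: bool = False  # hard stop, regardless of everything else

def unified_confidence_score(results, trust=0.5, trust_influence=0.2):
    # 1. Veto check: any veto forces the UCS to 0.0.
    if any(r.veto for r in results):
        return 0.0
    # 2. Weighted average: sum(score * weight * confidence) / sum(weight * confidence).
    denom = sum(r.weight * r.confidence for r in results)
    ucs = sum(r.score * r.weight * r.confidence for r in results) / denom
    # 3. Trust modulation (assumed additive form): trust above 0.5 lifts
    #    the score, trust below 0.5 drags it, by at most +/- influence/2.
    ucs += trust_influence * (trust - 0.5)
    # 4. Floor drag: one very low dimension cannot be averaged away.
    min_score = min(r.score for r in results)
    if min_score < 0.2:
        ucs -= (0.2 - min_score) * 0.3
    # 5. Clamp the final result to [0.0, 1.0].
    return max(0.0, min(1.0, ucs))

def route(ucs, allow=0.7, deny=0.3):
    """Map the UCS to an outcome using the default thresholds."""
    if ucs >= allow:
        return "ALLOW"
    if ucs <= deny:
        return "DENY"
    return "ESCALATE"  # the ambiguity zone: Tier 3 deliberation
```

With thirteen dimensions scoring 0.95 and one scoring 0.1, the weighted average alone would sit comfortably above the allow threshold; the floor drag subtracts (0.2 − 0.1) × 0.3 = 0.03, ensuring the outlier is felt in the final number.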

The Political Confrontation

Setting weights requires something most organizations have spent years avoiding: forcing governance domains to agree on relative priority.

Security will argue that their concerns are existential. Ethics will argue that theirs are fundamental. Bias will argue that they carry legal liability. Governance will argue that theirs are structural. Every team is right. The weights still need to be set.

This is the architecture working as designed. The nomotic framework does not create political tension. It surfaces tension that already exists but has been masked by siloed operations. When teams operate independently, they never reconcile competing priorities because they don't share decisions. The moment they share a decision framework, unresolved disagreements become visible.

Visibility is the point. An organization that cannot align these teams on shared priorities has a problem. The question is whether that problem remains hidden until a failure exposes it or becomes visible through deliberate design, when there is still time to resolve it.

The UCS weights must be signed off by a cross-functional governance board with representatives who have real authority to negotiate, compromise, and commit. This board convenes regularly because weights must evolve. New regulations, security incidents, bias audits, and shifting priorities all reshape how dimensions should be weighted. Static weights create the same problem as static permissions: governance frozen at a moment in time, regardless of how reality has changed.

What Happens in Practice

An AI agent is about to take an action. All 14 governance dimensions evaluate simultaneously.

Security flags at medium severity. The input pattern resembles a known attack vector, but the match is not definitive. Ethics finds no issue. Bias detects no discriminatory pattern. Governance confirms the agent has explicit authority.

The weighted scores aggregate into a UCS above the allow threshold. The action proceeds, with the near-miss logged for future trust calibration.

Now change the scenario. Same action, but the agent has triggered three security flags in the past week. Trust has eroded. Trust modulation shifts the UCS downward. The recalculated UCS drops below the allow threshold. The action enters the ambiguity zone. Tier 3 deliberation triggers. Low trust tips the verdict to ESCALATE. Human review engages.

No one debated. No one escalated manually. No one called a meeting. The architecture resolved it because it was designed to do so.

When Weights Are Wrong

Weights will sometimes be wrong. This is expected.

When the UCS consistently produces outcomes that require human override, the architecture identifies the pattern. If humans keep overriding in the same direction, the weights are miscalibrated. The system does not fix itself. It flags the miscalibration for the governance board to address. Governance learns, but humans decide.

Wrong weights that get corrected are healthy governance. Wrong weights that persist because no one reviews them mean the governance board is not functioning.

Getting Started

For organizations that have never attempted cross-functional weight-setting:

Start with one application. Do not try to set weights across your entire AI portfolio.

Define vetoes first. Agreeing on what constitutes catastrophic is easier than negotiating weights. Getting alignment on vetoes builds the collaborative muscle for the harder conversation.

Use scenarios. Abstract weight discussions stall because numbers feel arbitrary. Present realistic situations where dimensions conflict. Walk through how each configuration resolves each one. The scenarios reveal which weights produce acceptable outcomes and which do not.
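Scenario walkthroughs can be made mechanical. The sketch below replays scenario fixtures through a candidate weight configuration and compares each outcome against what the board expects; it uses a plain weighted average for brevity (the full UCS adds confidence, trust, and floor drag), and all names here are illustrative:

```python
def weighted_score(scores, weights):
    """Weighted average of per-dimension scores for one scenario."""
    total = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total

def review(scenarios, candidate_weights, allow=0.7, deny=0.3):
    """scenarios: list of (name, {dimension: score}, expected_outcome).
    Returns {name: (actual_outcome, matched_expectation)} so the board
    can see which weights produce acceptable outcomes and which do not."""
    report = {}
    for name, scores, expected in scenarios:
        ucs = weighted_score(scores, candidate_weights)
        outcome = "ALLOW" if ucs >= allow else "DENY" if ucs <= deny else "ESCALATE"
        report[name] = (outcome, outcome == expected)
    return report
```

A mismatch in the report is exactly the kind of concrete disagreement that makes an abstract weight debate tractable.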

Document the rationale. Weights without rationale are arbitrary numbers. Weights with rationale are governance decisions. Record why security carries a given weight in this context, not just that it does.

Set a review cadence. The first configuration will be imperfect. Commit to reviewing at a defined interval. Ninety days is reasonable. The commitment to review reduces pressure to get everything right immediately.

Accept discomfort. The conversation will surface disagreements. Teams will advocate for their own importance. This is governance working, not failing. The goal is not harmony. The goal is explicit, documented, defensible prioritization that operates at runtime speed.

The Foundation

Security says allow. Ethics says block.

The architecture answers. And the answer is documented, defensible, and adjustable.

That is governance. Everything else is just commentary.
