
Agents within policy bounds

How AI agents call into the platform under the same governance as humans.

An AI agent in Fabric is not a special user with elevated rights. It is an actor with a credential whose scope is explicit, per-action, and enforced upstream of invokeAction.

The actor flow

An agent's invocation passes through two checks, and they happen in different places:

  1. Authorization (which actions can this agent call) → upstream agentProcedure middleware.
  2. Authorization-in-context (can this agent call this action now with these parameters) → policies inside the pipeline.
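
A minimal sketch of the first gate, with the credential shape and helper names assumed for illustration:

type AgentCredential = { agentId: string; scopes: string[] };

// Assumed helper: reject before the request ever reaches invokeAction.
function assertScope(credential: AgentCredential, actionId: string): void {
  if (!credential.scopes.includes(actionId)) {
    throw new Error(`agent ${credential.agentId} lacks scope ${actionId}`);
  }
}

// Inside the agentProcedure middleware, before delegating:
//   assertScope(ctx.credential, input.actionId);
//   return invokeAction({ ...input, actorType: "agent" });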

Why the permission check is skipped at invokeAction

invokeAction skips the tenant-Member permission check for actorType: "agent". The reason: an agent does not have a Member row. It has a credential. The credential carries the scope list. The middleware verified the scope before calling invokeAction.

Bypass paths are explicitly enumerated in packages/api/modules/platform/lib/invoke-action.ts:

const skipPermissionCheck =
  input.actorType === "external_system" ||
  input.actorType === "system" ||
  input.actorType === "agent";

Anything else still goes through Member + requiredPermissions. The agent path is one of three trusted upstream gates.

What policies see

Policies evaluating an agent invocation get a PolicyContext with actionId, parameters, tenantId, spaceId, and mode ("preview" or "execute"). They do not automatically see "this is an agent" — by design. A policy that needs to differentiate (e.g. "agents cannot accept offers above $100k") writes that condition directly:

const offerLimitPolicy: PolicyEvaluator = {
  policyId: "lending.agent_offer_limit.v1",
  version: 1,
  evaluate: async (ctx) => {
    // The platform-level audit context (actorType) is available via the action invocation;
    // the policy reads it via db where needed, or via parameters if explicitly threaded.
    // Illustrative check (parameter name and result shape assumed):
    const amount = Number(ctx.parameters.offerAmount);
    if (amount > 100_000) {
      return { allowed: false, reason: "agent offers capped at $100k" };
    }
    return { allowed: true };
  },
};

In practice most teams thread agent-specific limits via separate action IDs with their own policy lists, so lending.agent_accept_offer is a different action from lending.accept_offer and carries the agent-specific cap as a routine policy. This avoids "is this an agent?" branching inside one policy.
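
In code terms the split is just two registrations. A sketch, with the registry shape and the consent policy ID assumed:

// Hypothetical action registrations. The agent-facing action is a distinct
// actionId with its own policy list, so nothing branches on "is this an
// agent?" at evaluation time.
const actions = {
  "lending.accept_offer": {
    policies: ["lending.offer_consent.v1"],
  },
  "lending.agent_accept_offer": {
    // Same checks as the human action, plus the agent cap as one more
    // routine policy in the list.
    policies: ["lending.offer_consent.v1", "lending.agent_offer_limit.v1"],
  },
};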

Designing agent-callable actions

Three rules that make agent operations safe:

1. Explicit, narrow scopes

Don't grant lending.*. Grant ["lending.list_offers", "lending.summarize_offer"]. Read-only by default. Mutating actions added one at a time, each justified.
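
A credential under this rule might look like the following sketch (field names assumed):

const summarizerCredential = {
  agentId: "offer-summarizer-prod",
  scopes: [
    "lending.list_offers",     // read-only
    "lending.summarize_offer", // read-only
    // No "lending.*". Mutating scopes get added one at a time, each with
    // a recorded justification.
  ],
};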

2. Mutating actions get their own policies

Any action an agent can call should have a policy that captures the human version of "is this OK?" — even if the human-facing UI doesn't enforce it (the human has visual judgment; the agent does not). Examples:

  • lending.agent_send_offer policy: cap dollar amount, cap rate spread, require recent consent.
  • lending.agent_request_consent: rate-limit per borrower, cap per day.
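
A sketch of the second policy, reusing the PolicyEvaluator shape from above and assuming a hypothetical counting helper:

declare function countConsentRequestsToday(
  tenantId: string,
  borrowerId: string,
): Promise<number>;

const consentRateLimitPolicy: PolicyEvaluator = {
  policyId: "lending.agent_consent_rate_limit.v1",
  version: 1,
  evaluate: async (ctx) => {
    const borrowerId = String(ctx.parameters.borrowerId); // assumed parameter
    const sentToday = await countConsentRequestsToday(ctx.tenantId, borrowerId);
    if (sentToday >= 3) {
      return { allowed: false, reason: "daily consent-request cap reached" };
    }
    return { allowed: true };
  },
};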

3. Approval as a state, not as a hope

For high-stakes actions, use waiting_for_approval. The agent invokes the action; the runtime parks it; a human approves. The audit trail records both the agent's request and the human's approval.
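
What the agent-side call might look like, with the invocation input and result shapes assumed:

const result = await invokeAction({
  actionId: "lending.agent_send_offer",
  actorType: "agent",
  parameters: { offerId: "offer_123", amount: 250_000 },
});

if (result.status === "waiting_for_approval") {
  // The runtime has parked the action. The audit trail already holds the
  // agent's request; a later human approval resolves it.
}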

Agent runs

The platform's AgentRun object type lets you bind a sequence of related agent actions to a session. The MCP server typically opens an AgentRun at the start of a conversation and tags every ActionInvocation made during it. This lets compliance ask:

Show me every action this agent took during this user's chat session, in order.

…and get one query, not an archeology project.
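
A sketch of that lookup, assuming a Prisma-style client and an agentRunId tag on each invocation:

// The MCP server opens a run at the start of the conversation...
const run = await db.agentRun.create({
  data: { agentId: "offer-summarizer-prod", startedAt: new Date() },
});

// ...every invokeAction during the session is tagged with run.id, so the
// compliance question is one ordered lookup:
const invocations = await db.actionInvocation.findMany({
  where: { agentRunId: run.id },
  orderBy: { createdAt: "asc" },
});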

Anti-patterns

  • A single shared "agent" credential. Issue per-agent or per-deployment credentials, each with its own scope list. Revocation stays cheap that way.
  • Wide scopes "for now." Scopes are forever in audit logs. Start narrow.
  • Branching on actorType inside business logic. If an action behaves differently for agents, it's a different action.
  • Asking the LLM to "be careful." Carefulness is a property of the runtime gates, not the model.
