Runtime Architecture
The AIPC runtime architecture has two phases: pre-generation (validation and prompt building) and post-generation (rendering, validation, and composition). This guide covers the full pipeline.
Processing order
The runtime follows a strict processing pipeline; order matters, since each step depends on the results of the previous one (a code sketch of the full sequence follows the list):
1. Freshness check — Is the data still valid? If the data has expired and the freshness behavior is `suppress`, stop immediately.
2. Conditional evaluation — Evaluate all conditional rules against the payload and user context. Add any triggered disclosures or field modifications.
3. Display rule processing — Resolve scopes, format field values, check `suppress_if` conditions, verify `never_show_without` dependencies.
4. Disclosure resolution — For each active disclosure, determine whether its scope is triggered by presented fields. Sort by placement.
5. Compliance check — Verify that all required disclosures for presented fields are included. Apply `fail_behavior` if any are missing.
6. Attribution — Resolve attribution text with modality overrides.
7. Tone restrictions — Collect prohibited phrases, framings, and editorial restrictions.
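The sketch below illustrates this ordering as straight-line code. It is not the runtime's implementation (`runtime.validate()` performs all of these steps internally), and the types and freshness logic shown here are simplified assumptions made for illustration.

```ts
// Simplified sketch of the processing order only; the real pipeline runs
// inside runtime.validate(). Types and freshness logic are assumptions.
type Disclosure = { id: string; placement: 'global' | 'before' | 'after' | 'adjacent' };
type PipelineResult = { success: boolean; errors: string[]; disclosures: Disclosure[] };

function sketchPipeline(
  payload: { expires_at?: string },
  freshnessBehavior: 'suppress' | 'warn'
): PipelineResult {
  // 1. Freshness check: if the data has expired and the behavior is 'suppress',
  //    stop immediately with a failed result.
  const expired =
    payload.expires_at !== undefined && Date.parse(payload.expires_at) < Date.now();
  if (expired && freshnessBehavior === 'suppress') {
    return { success: false, errors: ['payload expired'], disclosures: [] };
  }

  // 2. Conditional evaluation, 3. display rule processing, and 4. disclosure
  //    resolution would run here, each consuming the previous step's output.
  const disclosures: Disclosure[] = [];

  // 5. Compliance check: a missing required disclosure triggers fail_behavior.
  // 6. Attribution and 7. tone restrictions are resolved last.
  return { success: true, errors: [], disclosures };
}
```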
Validation pipeline
Initialize the runtime with options for the current context:
```ts
import { AIPCRuntime } from '@aipc/runtime';

const runtime = new AIPCRuntime({
  modality: 'visual',   // 'visual' | 'voice' | 'document'
  language: 'en',       // ISO 639-1 language code
  userContext: {        // available to conditional rules
    jurisdiction: 'US',
    account_type: 'retail',
    risk_tolerance: 'moderate'
  }
});
```
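If the same contract is consumed over more than one channel, you would typically construct a runtime per modality, since the modality is used later when attribution and other modality overrides are resolved. A minimal sketch reusing the options shown above:

```ts
// Sketch: one runtime per delivery channel; only the modality differs here.
const voiceRuntime = new AIPCRuntime({
  modality: 'voice',
  language: 'en',
  userContext: {
    jurisdiction: 'US',
    account_type: 'retail',
    risk_tolerance: 'moderate'
  }
});
```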
The ValidatedOutput object
The `validate()` method returns a `ValidatedOutput` with everything the AI needs:
```ts
const output = runtime.validate(apiResponse);

// Check if processing succeeded
output.success            // boolean
output.compliance_level   // "L1" | "L2" | "L3"
output.errors             // string[] — why it failed
output.warnings           // string[] — non-fatal issues

// Presentation data
output.formatted_fields   // FormattedField[] — ready to display
output.disclosures        // { global, before, after, adjacent }
output.attribution        // string | null

// Restrictions for the AI
output.tone_restrictions  // { prohibited_phrases, prohibited_framings, editorial_restrictions }

// Audit trail
output.audit              // full audit record for logging
```
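A typical caller checks `success` before moving on to prompt generation. The sketch below uses only the fields listed above; `auditLog` is a hypothetical audit sink, not part of `@aipc/runtime`.

```ts
// Placeholder audit sink; substitute your own persistence layer.
const auditLog = { write: (record: unknown) => console.log('audit:', record) };

if (!output.success) {
  // Validation failed; do not proceed to prompt generation.
  console.error('AIPC validation failed:', output.errors);
} else {
  // Non-fatal issues are still worth surfacing.
  for (const warning of output.warnings) {
    console.warn('AIPC warning:', warning);
  }

  // Deterministic presentation data comes straight from the validated output.
  const fields = output.formatted_fields;
  const globalDisclosures = output.disclosures.global;
  const attribution = output.attribution ?? '';

  // Restrictions that must be forwarded to the generation step.
  const { prohibited_phrases, prohibited_framings } = output.tone_restrictions;

  // Persist the audit record for later compliance review.
  auditLog.write(output.audit);
}
```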
Prompt generation
The prompt builder converts a `ValidatedOutput` into structured instructions for an LLM:
```ts
import { AIPCPromptBuilder } from '@aipc/runtime';

const instructions = AIPCPromptBuilder.buildPromptInstructions(output);

// Returns a markdown-formatted string with sections for:
// - Attribution requirements
// - Global and field-specific disclosures
// - Formatted data values
// - Tone restrictions
// - Compliance rules
```
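The instructions string is typically injected into the model's system prompt. A rough sketch follows; `llmClient`, its `chat()` signature, and `userQuestion` are hypothetical placeholders, not part of `@aipc/runtime`.

```ts
// Hypothetical LLM client; swap in your provider's SDK.
declare const llmClient: {
  chat(args: { system: string; messages: { role: 'user'; content: string }[] }): Promise<string>;
};
declare const userQuestion: string;

// Prepend the AIPC instructions to whatever base system prompt you already use.
const systemPrompt = ['Follow the AIPC instructions below exactly.', instructions].join('\n\n');

const draftNarrative = await llmClient.chat({
  system: systemPrompt,
  messages: [{ role: 'user', content: userQuestion }],
});
```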
Post-generation pipeline
After the LLM generates its response using the prompt instructions, three additional components handle compliance enforcement:
- Renderer — Produces deterministic content blocks (disclosures, data tables, attribution) rendered by code, not by the LLM. At L2+ enforcement, these blocks are immutable.
- Validator — Checks the LLM-generated narrative against contract rules. Catches prohibited phrases, data reformatting, suppressed data leakage, and tone violations.
- Compositor — Orchestrates the full pipeline: renders deterministic blocks, validates narrative, enforces compliance, assembles the final response, and produces an audit trail.
Together, these components form the L2 (Deterministic Rendering) and L3 (Full Enforcement) tiers of the consumer enforcement model.
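This guide does not document the post-generation API itself, so the sketch below only illustrates how the three components fit together: the interfaces, method names, and the `ValidatedOutput` type export are assumptions, not the published `@aipc/runtime` surface.

```ts
import type { ValidatedOutput } from '@aipc/runtime'; // assumed type export

// Assumed shapes for illustration; the real exports may differ.
interface ContentBlock { kind: 'disclosure' | 'data_table' | 'attribution'; text: string }
interface ValidationReport { violations: string[] }

interface Renderer { render(output: ValidatedOutput): ContentBlock[] }
interface NarrativeValidator { check(narrative: string, output: ValidatedOutput): ValidationReport }
interface Compositor {
  compose(args: {
    blocks: ContentBlock[];
    narrative: string;
    report: ValidationReport;
  }): { body: string; audit: unknown };
}

// Orchestration order: render deterministic blocks, validate the narrative,
// then compose the final response together with its audit trail.
function postGenerate(
  renderer: Renderer,
  validator: NarrativeValidator,
  compositor: Compositor,
  output: ValidatedOutput,
  narrative: string
) {
  const blocks = renderer.render(output);             // immutable at L2 and above
  const report = validator.check(narrative, output);  // prohibited phrases, leakage, tone
  return compositor.compose({ blocks, narrative, report });
}
```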
Compliance levels
| Level | What it covers |
|---|---|
| L1 | Basic: disclosures, display rules, attribution |
| L2 | L1 + freshness enforcement, tone restrictions |
| L3 | L2 + conditional rules, modality overrides |