Structured Context Frameworks
Constrain LLM outputs by injecting structured context across multiple dimensions. These frameworks eliminate ambiguity by requiring explicit specification of role, audience, format, and tone.
CO-STAR
The most comprehensive context injection framework. Six orthogonal dimensions ensure the model receives complete specifications for any task — from creative writing to data analysis.
Best For
Content creation, analysis, professional communication
Select When
Tasks requiring balanced control over multiple output dimensions
Context
Background information and domain knowledge the model needs to understand the task scope.
Objective
The specific, measurable outcome expected from the model's response.
Style
Writing register and approach — academic, conversational, journalistic, technical.
Tone
Emotional register — authoritative, empathetic, neutral, urgent.
Audience
Target reader profile that determines vocabulary, depth, and assumptions.
Response
Exact output structure — JSON schema, markdown template, bullet list, table.
User Input
"Write a blog post about microservices for our engineering blog"
Engineered Output — CO-STAR Framework
CREATE
Designed for original content production where brand voice and creative direction are paramount. Adds explicit execution instructions that CO-STAR lacks.
Best For
Marketing copy, storytelling, brand voice development
Select When
Creative output requiring stylistic control and audience targeting
User Input
"Write launch copy for our new AI productivity app"
Engineered Output — CREATE Framework
PROMPT
Persona, Request, Output, Modifier, Purpose, Tone
Fine-grained control over voice, format, and intent. Best for brand-specific content where persona consistency is critical across multiple outputs.
Best For
Brand content, persona consistency, stylistic precision
User Input
"Generate social media captions for our coffee brand"
Engineered Output — PROMPT Framework
Reasoning & Logic Frameworks
Force LLMs to decompose problems systematically. These techniques improve accuracy by 40%+ on tasks requiring logical inference, multi-step computation, or sequential processing.
Chain of Thought
LLMs are probabilistic token generators, not reasoning engines. Chain of Thought forces explicit intermediate reasoning steps, converting implicit pattern matching into auditable logical chains.
Best For
Mathematical reasoning, code debugging, logical analysis
Select When
Any task where the model must derive an answer rather than recall one
Zero-Shot (No Reasoning Chain)
User:
A server processes 150 requests/sec. After optimization, throughput increases by 40%. Under peak load (3x normal), what is the max request rate?
Model: 450 requests/sec (Incorrect — skipped the optimization step)
CoT Prompt (Explicit Reasoning)
User:
A server processes 150 requests/sec. After optimization, throughput increases by 40%. Under peak load (3x normal), what is the max request rate? Solve step-by-step. Show your work.
Step 1: Base throughput = 150 req/sec
Step 2: After 40% optimization = 150 × 1.4 = 210 req/sec
Step 3: Peak load at 3x = 210 × 3 = 630 req/sec
Answer: 630 requests/sec
Self-Consistency
Generates multiple independent Chain-of-Thought reasoning paths for the same problem, then selects the answer with the highest frequency. This ensemble approach reduces variance and catches reasoning errors that any single path might introduce.
Best For
High-stakes calculations, ambiguous problems
Select When
When a single reasoning path may be unreliable
Parallel Reasoning Paths
RISE
Role, Input, Steps, Expectation
Engineered for complex multi-step processes that demand procedural precision. Each step becomes a checkpoint, enabling validation at every stage of execution.
Best For
Data pipelines, computational workflows, CI/CD processes
User Input
"Process and validate uploaded CSV data for our analytics pipeline"
Engineered Output — RISE Framework
Business & Strategy Frameworks
Structured approaches for strategic analysis, decision support, and professional documentation. These frameworks mirror established consulting methodologies adapted for LLM-driven output.
SCOPE
Situation, Complication, Objective, Plan, Evaluation
Mirrors McKinsey's Situation-Complication-Resolution framework. Forces rigorous problem decomposition before solution generation — preventing the model from jumping to conclusions.
Best For
Market research, strategic planning, consulting deliverables
User Input
"Analyze whether we should expand into the European market"
Engineered Output — SCOPE Framework
STAR
Situation, Task, Action, Result
The behavioral interview standard, adapted for AI-generated professional documentation. Structures achievements as evidence-based narratives with quantified outcomes.
ROSES
Role, Objective, Scenario, Expected Solution, Steps
Scenario planning and hypothesis testing framework. Forces the model to simulate conditions, evaluate outcomes, and produce contingency-aware recommendations.
Technical & Documentation Frameworks
Purpose-built for engineering artifacts — API specifications, integration guides, debugging workflows, and research documentation that demands precision and completeness.
TRACE
Task, Request, Action, Context, Example
Maximum-specification framework for technical documentation. Every section reduces one axis of ambiguity, culminating in concrete examples that serve as executable test cases.
Best For
API documentation, integration guides, technical specifications
User Input
"Document our REST API endpoint for creating user accounts"
Engineered Output — TRACE Framework
5W1H
Who, What, Where, When, Why, How
Journalistic completeness framework. Ensures no critical information axis is omitted — particularly effective for research briefs, incident reports, and factual documentation.
CARE
Context, Action, Result, Example
Debugging and troubleshooting framework. Demonstrates expected behavior through concrete exemplars — teaching the model what "correct" looks like before it attempts to solve.
GRADE
Goal, Request, Action, Details, Examples
Pedagogical framework optimized for knowledge transfer. Scaffolds learning progression from objectives through detailed instruction to concrete demonstrations.
Best For
Tutorials, explainers, learning materials, onboarding docs
User Input
"Create a tutorial on implementing JWT authentication in Node.js"
Engineered Output — GRADE Framework
Rapid Execution Frameworks
Lightweight structures for tasks that need speed over depth. Minimal overhead, maximum clarity — when a 3-field specification is all you need.
RTF
Role, Task, Format
The fastest path from intent to structured output. Assign a role, define the task, specify the format. No overhead.
APE
Action, Purpose, Expectation
Optimized for delegation to autonomous agents. Defines what to do, why, and what success looks like — the minimum viable specification for automated workflows.
ERA
Expectation, Role, Action
Leads with the expected outcome, then assigns the expert and task. Best for quick consultations where you know what you want but need expert execution.
TAG
Task, Action, Goal
Purpose-driven execution where the 'why' matters as much as the 'what'. Aligns the model's reasoning with your strategic objective, producing goal-aware outputs.
Persuasion & Narrative Frameworks
Structures rooted in classical rhetoric and storytelling theory. These frameworks engineer emotional arcs, logical arguments, and transformation narratives.
BAB
Before, After, Bridge
The conversion copywriter's workhorse. Paint the painful current state, reveal the desired future, then position your solution as the bridge between them.
SOAR
Situation, Obstacle, Action, Result
Hero's journey structure for professional narratives. The obstacle creates tension, the action demonstrates competence, and the result delivers the payoff.
PREP
Point, Reason, Example, Point
Classical rhetoric adapted for AI. State the thesis, provide evidence, demonstrate with a concrete example, then reinforce the thesis. Builds irrefutable logical arguments.
PAIN
Problem, Action, Information, Next Steps
Structured problem-resolution framework designed for support workflows and incident management. Progresses linearly from diagnosis to resolution to follow-up.
Advanced Optimization Techniques
Cross-cutting techniques applied automatically by our engine on top of any framework selection. These optimizations are what separate amateur prompts from production-grade engineering.
Few-Shot Prompting
Rather than describing the desired behavior, demonstrate it. Providing 2-5 input-output exemplars conditions the model to replicate the exact pattern, tone, and format — bypassing instruction ambiguity entirely.
Best For
Consistent formatting, classification, style mimicry
Select When
When examples communicate the pattern better than instructions
System Instruction
Classify customer feedback into categories. Follow the exact format shown in the examples.
Exemplars
Input: "The checkout page loads extremely slowly on mobile"
Output: {category: "performance", severity: "high", component: "checkout", platform: "mobile"}
Input: "Would be great if you supported dark mode"
Output: {category: "feature_request", severity: "low", component: "ui", platform: "all"}
Input: "Payment failed but I was still charged twice"
Output: {category: "bug", severity: "critical", component: "payments", platform: "all"}
Input: "The search results don't match what I typed"
Output: {category: "bug", severity: "high", component: "search", platform: "all"}
Persona Engineering
Deep role specification with expertise credentials, decision-making frameworks, and behavioral constraints. Transforms generic responses into domain-expert outputs.
Weak
Act as a marketing expert
Engineered
You are a Senior Growth Marketer with 12 years of B2B SaaS experience. Your analytical framework combines attribution modeling with cohort analysis. You prioritize CAC:LTV ratios over vanity metrics.
Constraint Specification
Explicit boundary enforcement using positive (MUST/ALWAYS) and negative (NEVER/AVOID) directives. Critical constraints are placed at both the start and end of the prompt for reinforcement.
Weak
Keep it short and professional
Engineered
MUST: Under 200 words. ALWAYS: Include one data point per claim. NEVER: Use passive voice or hedge words (might, perhaps, could). Format: 3 bullet points max.
Output Determinism
Exact structural specifications that eliminate format ambiguity. Define schemas, field names, data types, and validation rules so the output is machine-parseable.
Weak
Return the results as JSON
Engineered
Return JSON matching this schema: {results: [{id: string, score: float(0-1), label: enum["positive","negative","neutral"], confidence: float}], metadata: {model: string, processed_at: ISO8601}}
Structural Delimiters
Semantic markup using ###, ===, ---, and XML-style tags to create clear boundaries between instructions, context, and data. Prevents instruction-data confusion.
Weak
Here is some context and here is what I want you to do
Engineered
### INSTRUCTIONS Analyze the text below. ### INPUT DATA <<< {user_provided_text} >>> ### OUTPUT FORMAT Return analysis as specified above.
Context Hierarchies
Information density optimization — front-load critical parameters, layer supporting details, and place reinforcement constraints at the end. Mimics the inverted pyramid from journalism.
Weak
I want you to write something about our product for social media considering our brand voice and target audience
Engineered
PRIMARY: Write 3 LinkedIn posts for product launch. CONTEXT: B2B analytics platform for CFOs. VOICE: Data-driven, authoritative. CONSTRAINT: Each post under 150 words.
Ambiguity Elimination
Replace subjective qualifiers with quantified specifications. Every vague term is resolved to a measurable criterion that the model can deterministically satisfy.
Weak
Write a good, detailed summary
Engineered
Write a 150-200 word summary. Include: 3 key findings (each with one supporting statistic), 1 recommendation, and 1 limitation. Reading level: Grade 10 (Flesch-Kincaid).
Automatic Framework Selection
You don't need to memorize these frameworks. Our system analyzes your inputs — goal, role, audience, format, tone, complexity, and domain — then automatically selects and applies the optimal framework with advanced optimizations layered on top.
20+
Engineering Frameworks
7
Optimization Layers
11
Input Dimensions
<2s
Generation Latency