Changelog

Technical release notes for the Advi prompt engineering platform. API changes, framework updates, model additions, and system improvements.

v2.1.0 (February 10, 2026)

Advanced Configuration & Multi-Language Support

  • added

    Output language selector supporting 10 languages: English, Spanish, French, German, Russian, Ukrainian, Armenian, Chinese, Japanese, and Korean. The selected language is injected into the system prompt as a primary language directive.
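A minimal sketch of how the language directive might be injected; the directive text, function name, and prepend-first placement are assumptions, not the platform's actual implementation:

```typescript
// Hypothetical sketch: inject the selected output language into the
// system prompt as a primary language directive.
const SUPPORTED_LANGUAGES = [
  "English", "Spanish", "French", "German", "Russian",
  "Ukrainian", "Armenian", "Chinese", "Japanese", "Korean",
] as const;

type OutputLanguage = (typeof SUPPORTED_LANGUAGES)[number];

function withLanguageDirective(systemPrompt: string, language: OutputLanguage): string {
  // Prepended so the directive takes priority over later instructions.
  return `PRIMARY LANGUAGE: respond only in ${language}.\n\n${systemPrompt}`;
}
```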

  • added

    Domain/industry field in Persona & Targeting section. The engine uses this to select domain-specific terminology and adjust framework weighting (e.g., TRACE for technical domains, SCOPE for business).

  • added

    Complexity level selector (simple, moderate, advanced, expert) that maps to technical depth modifiers in the generated prompt. Expert-level prompts include schema definitions, validation criteria, and multi-step reasoning chains.

  • added

    Optimization priority selector (speed, balanced, quality, creative) that influences framework selection heuristics. Speed priority favors RTF/APE/TAG; quality priority favors CO-STAR/RISE/TRACE.
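The speed and quality mappings above can be sketched as a lookup table; the "balanced" and "creative" rows are illustrative assumptions, since only the speed and quality preferences are documented:

```typescript
// Sketch of the priority-to-framework weighting described above.
type Priority = "speed" | "balanced" | "quality" | "creative";

const FRAMEWORK_PREFERENCES: Record<Priority, string[]> = {
  speed: ["RTF", "APE", "TAG"],          // lightweight, low-token frameworks (documented)
  balanced: ["CO-STAR", "RTF", "SCOPE"], // assumed middle ground
  quality: ["CO-STAR", "RISE", "TRACE"], // structured frameworks (documented)
  creative: ["BAB", "ROSES", "PAIN"],    // assumed narrative-leaning frameworks
};

function candidateFrameworks(priority: Priority): string[] {
  return FRAMEWORK_PREFERENCES[priority];
}
```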

  • improved

    Framework selection logic rewritten with a 4-step analysis pipeline: (1) read all user fields, (2) identify structural needs, (3) match to optimal framework, (4) annotate selection rationale. Eliminates CO-STAR default bias.

  • improved

    System prompt now enforces framework-specific section headers in output. RTF outputs use ### ROLE / ### TASK / ### FORMAT; PAIN outputs use ### PROBLEM / ### ACTION / ### INFORMATION / ### NEXT STEPS, etc.

v2.0.0 (January 27, 2026)

Dashboard Rebuild & Prompt History System

  • added

    Complete prompt history system with per-generation snapshots. Each entry stores all input fields (goal, background, role, audience, format, tone, examples, constraints), selected model, and the full generated output.

  • added

    One-click history restore: selecting a previous generation repopulates all form fields and model selection to their exact state at generation time.

  • added

    Per-item history deletion and bulk clear with organization-level data isolation. History is filtered by Clerk orgId — switching organizations shows only that org's generations.
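A sketch of the org-scoped filtering, assuming a minimal entry shape; the real snapshot record also carries the input fields and output described above:

```typescript
// Organization-level history isolation: only the active Clerk org's
// generations are visible, and deletion operates per item.
interface HistoryEntry {
  id: string;
  orgId: string;
  createdAt: number;
}

function historyForOrg<T extends HistoryEntry>(entries: T[], orgId: string): T[] {
  return entries.filter((e) => e.orgId === orgId);
}

function deleteEntry<T extends HistoryEntry>(entries: T[], id: string): T[] {
  return entries.filter((e) => e.id !== id);
}
```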

  • added

    Output terminal with real-time character, word, and line count. Includes copy-to-clipboard, download-as-Markdown (.md), and auto-scroll-to-output on generation complete.
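The three counters reduce to a small pure function; the whitespace-splitting rule for words is an assumption about how the terminal counts:

```typescript
// Sketch of the output terminal's character / word / line counters.
interface OutputStats {
  chars: number;
  words: number;
  lines: number;
}

function outputStats(text: string): OutputStats {
  const trimmed = text.trim();
  return {
    chars: text.length,
    words: trimmed === "" ? 0 : trimmed.split(/\s+/).length,
    lines: text === "" ? 0 : text.split("\n").length,
  };
}
```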

  • added

    Input validation system with real-time character counting on goal and background fields. Goal requires minimum 10 characters. Visual indicators shift from red → yellow → green as input quality improves.
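A sketch of the red → yellow → green indicator; only the 10-character minimum is documented, so the yellow/green boundary is an assumed threshold:

```typescript
// Sketch of the goal-field quality indicator.
type Indicator = "red" | "yellow" | "green";

function goalIndicator(goal: string): Indicator {
  const len = goal.trim().length;
  if (len < 10) return "red";    // below the documented minimum
  if (len < 40) return "yellow"; // valid but thin (assumed threshold)
  return "green";
}
```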

  • improved

    Dashboard restructured into 4 collapsible sections: Core Objectives, Persona & Targeting, Output Configuration, and Advanced Configuration. Reduces cognitive load for simple tasks while exposing full control for power users.

v1.9.0 (January 8, 2026)

Model Expansion & Provider Configuration

  • added

    Claude Sonnet 4.5 (Anthropic) added to model selector. 200K context window, highest reasoning capability for complex prompt engineering and nuanced multi-step tasks.

  • added

    Grok 4.1 Fast (xAI) added as an experimental model with real-time knowledge access and 128K context. Marked as Beta in the model selector.

  • added

    Gemini 3 Flash Preview (Google) added with 1M token context window and sub-second latency. Optimized for rapid iteration workflows.

  • improved

    Model configuration modal rebuilt with detailed spec cards showing context window size, speed rating, and cost tier for each model. Includes provider attribution and selection tips.

  • improved

    Model selection state now persists to localStorage (key: advi_selected_model) and correctly syncs between the settings modal and API request payload without false 'unsaved changes' indicators.
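A sketch of the persistence logic under the documented key, written against a minimal Storage-like interface so the logic is testable outside the browser (pass `window.localStorage` in the app):

```typescript
// Model-selection persistence under the documented localStorage key.
const MODEL_KEY = "advi_selected_model";
const DEFAULT_MODEL = "meta-llama/llama-3.1-8b-instruct:free";

interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function saveModel(store: KVStore, model: string): void {
  store.setItem(MODEL_KEY, model);
}

function loadModel(store: KVStore): string {
  // Fall back to the platform's default free model when nothing is saved.
  return store.getItem(MODEL_KEY) ?? DEFAULT_MODEL;
}
```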

  • removed

    Deprecated model entries (GPT-3.5 Turbo, older Gemini variants) removed from the selector. All traffic consolidated to 5 actively maintained model endpoints via OpenRouter.

v1.5.0 (December 12, 2025)

Framework Engine v1 & Constraint Injection

  • added

Initial framework selection engine supporting 22 prompt engineering methodologies: CO-STAR, CREATE, PROMPT, Chain of Thought, Self-Consistency, RISE, SCOPE, STAR, ROSES, TRACE, 5W1H, CARE, GRADE, RTF, APE, ERA, TAG, BAB, SOAR, PREP, PAIN, and Few-Shot.

  • added

    Constraint injection system with MUST/ALWAYS positive directives and NEVER/AVOID negative directives. Constraints are placed at both the beginning and end of the generated prompt for reinforcement.
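The begin-and-end placement can be sketched as a "sandwich"; the exact directive formatting is an assumption:

```typescript
// Sketch of constraint injection: directives are emitted both before and
// after the prompt body for reinforcement.
interface Constraints {
  must: string[];   // MUST / ALWAYS positive directives
  never: string[];  // NEVER / AVOID negative directives
}

function injectConstraints(body: string, c: Constraints): string {
  const block = [
    ...c.must.map((d) => `MUST: ${d}`),
    ...c.never.map((d) => `NEVER: ${d}`),
  ].join("\n");
  if (block === "") return body; // nothing to inject
  return `${block}\n\n${body}\n\n${block}`;
}
```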

  • added

    Persona engineering layer in the system prompt. Transforms generic role assignments (e.g., 'marketing expert') into deep specifications with credentials, analytical frameworks, and behavioral constraints.

  • added

    Few-Shot example injection field. User-provided input-output patterns are formatted as exemplars in the generated prompt, enabling direct behavioral conditioning through demonstration.
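A sketch of the exemplar formatting; the `Example N: / Input: / Output:` labels are assumed, not the platform's actual template:

```typescript
// Sketch of Few-Shot exemplar formatting for the generated prompt.
interface Exemplar {
  input: string;
  output: string;
}

function formatExemplars(examples: Exemplar[]): string {
  return examples
    .map((e, i) => `Example ${i + 1}:\nInput: ${e.input}\nOutput: ${e.output}`)
    .join("\n\n");
}
```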

  • improved

    System instruction expanded to a 3-phase pipeline: (1) Requirements Analysis — extracts intent, assesses complexity, maps constraints; (2) Framework Selection — matches inputs to the optimal methodology; (3) Advanced Optimizations — applies structural delimiters, cognitive load optimization, and output determinism.

v1.3.0 (November 18, 2025)

Core API & Authentication

  • added

    POST /api/refine endpoint: accepts goal, background, role, audience, format, tone, examples, constraints, and model fields. Returns a single refined prompt string optimized for frontier LLMs.
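A client-side sketch of the request body, using the field names above; the client-side pre-check mirrors the 10-character goal minimum (the server responds 400 on invalid input):

```typescript
// Sketch of a POST /api/refine payload. All fields except goal are optional.
interface RefineRequest {
  goal: string;
  background?: string;
  role?: string;
  audience?: string;
  format?: string;
  tone?: string;
  examples?: string;
  constraints?: string;
  model?: string;
}

function buildRefinePayload(req: RefineRequest): string {
  if (req.goal.trim().length < 10) {
    throw new Error("goal must be at least 10 characters");
  }
  return JSON.stringify(req);
}
```

In the app this would feed something like `fetch("/api/refine", { method: "POST", body: buildRefinePayload(req) })`.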

  • added

    Clerk authentication integration with organization-level access control. API requires valid userId and orgId — requests without org membership return 401.

  • added

    OpenRouter API integration as the unified model gateway. All model requests route through OpenRouter with configurable API key, site URL tracking (HTTP-Referer), and app name headers (X-Title).
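The header set described above can be sketched as a small builder; parameter names are assumptions, while `HTTP-Referer` and `X-Title` are the attribution headers OpenRouter documents:

```typescript
// Sketch of the OpenRouter request headers used by the model gateway.
function openRouterHeaders(apiKey: string, siteUrl: string, appName: string) {
  return {
    Authorization: `Bearer ${apiKey}`,
    "HTTP-Referer": siteUrl, // site URL tracking
    "X-Title": appName,      // app name attribution
    "Content-Type": "application/json",
  };
}
```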

  • added

    Error handling pipeline with typed responses: 400 (invalid input), 401 (unauthorized), 429 (rate limited), 503 (provider config error), 504 (timeout). All errors include human-readable messages.

  • improved

    API temperature set to 0.7 with 4,000 max tokens. Balances creative variation with output consistency for prompt engineering workloads.

v1.0.0 (October 28, 2025)

Initial Release

  • added

    Prompt refinement workspace with structured input fields: goal (required), background, role, audience, format, tone, examples, and constraints.

  • added

    Default model set to meta-llama/llama-3.1-8b-instruct:free via OpenRouter. 128K context window, zero cost, suitable for general-purpose prompt generation.

  • added

    GPT-4o (OpenAI) available as a premium model option. 128K context, fast inference, consistent performance across task types.

  • added

    Real-time prompt generation with streaming output display. Generated prompts are rendered in a terminal-style output pane with syntax-aware formatting.

  • added

    Clipboard copy functionality for generated prompts. Single-click copies the full output with a visual confirmation state (2-second timeout).

First release: October 2025. See /docs for API reference.