AI Context

How Animatic gives your AI structured knowledge — personalities, timing constraints, primitive compatibility, cinematic principles, and 20 reference breakdowns — so every tool call produces consistent, validated output instead of guesswork.

How context works

Most animation APIs give AI a list of functions and leave it to guess how to combine them. Animatic takes a different approach: every tool call returns not just data but context about how to use that data.

When the AI calls search_primitives, it does not get a flat list of CSS keyframes. It gets structured objects with when_to_use, when_to_avoid, personality_affinity, composable, ai_guidance, and timing defaults. The AI reads these fields and makes informed decisions — not random selections.
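As a sketch, a single result from search_primitives might carry context like this. The field names mirror those listed above, but the exact values and shape here are illustrative, not Animatic's actual payload:

```python
# Hypothetical sketch of one primitive object returned by search_primitives.
# Field names mirror the documented context fields; values are illustrative.
primitive = {
    "name": "blur-reveal",
    "when_to_use": "Hero headlines that need a confident, unhurried entrance.",
    "when_to_avoid": "Dense body text, where blur hurts readability.",
    "personality_affinity": ["editorial", "cinematic-dark"],
    "composable": True,
    "stagger_compatible": False,
    "ai_guidance": "Pair with a slow ease-out; keep duration in the 600-900ms tier.",
    "timing_defaults": {"duration_ms": 800, "easing": "enter"},
}

# The AI checks these fields before selecting the primitive:
assert "editorial" in primitive["personality_affinity"]
assert primitive["timing_defaults"]["duration_ms"] == 800
```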

This context flows through the entire pipeline:

| Pipeline stage | Context provided |
| --- | --- |
| Personality selection | Timing tiers, easing curves, camera constraints, forbidden techniques, AI guidance text |
| Primitive search | Per-primitive usage rules, avoidance rules, composability flags, stagger compatibility |
| Choreography recommendation | Phase structure, transition types, beat timing, personality-validated primitive sequences |
| Validation | Specific violation messages with rule citations and fix suggestions |
| Compilation | CSS output with comments explaining which primitive generated each keyframe block |

The AI never operates in a vacuum. At every step, it has access to Animatic's full knowledge about what works, what does not, and why.
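The compilation stage's annotated output can be pictured with a small sketch. The comment format and the compile_keyframes helper below are hypothetical; they only illustrate the idea of CSS keyframe blocks labeled with their source primitive:

```python
def compile_keyframes(primitive_name, keyframes):
    """Render a keyframe block annotated with its source primitive (illustrative format)."""
    lines = [
        f"/* generated by primitive: {primitive_name} */",
        f"@keyframes {primitive_name} {{",
    ]
    for pct, props in keyframes.items():
        # Join each keyframe's declarations into one CSS rule line.
        decls = "; ".join(f"{k}: {v}" for k, v in props.items())
        lines.append(f"  {pct} {{ {decls}; }}")
    lines.append("}")
    return "\n".join(lines)

css = compile_keyframes("blur-reveal", {
    "0%": {"opacity": "0", "filter": "blur(8px)"},
    "100%": {"opacity": "1", "filter": "blur(0)"},
})
print(css)
```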

The knowledge layers

Animatic's context is organized into layers that the AI consults depending on the task:

  • Personalities (4 built-in + custom). Complete motion systems: timing tiers, easing curves, camera behavior, forbidden techniques, and natural-language AI guidance. The personality is the broadest context layer — it shapes every decision downstream.

  • Primitives (146 named units). Individual animation behaviors with usage rules, personality affinity, composability, and implementation guidance. The AI searches and filters these per-task.

  • Animation principles (12 adapted principles). Disney's 12 principles of animation adapted for UI motion. These inform the AI's understanding of why certain motion choices work — anticipation, follow-through, staging, timing.

  • Reference breakdowns (20 analyses). Detailed motion analyses of exemplary animations from Linear, Notion, Vercel, and others. Each breakdown maps signature moments, extracts primitives, and documents what makes the reference effective.

  • Style packs (per-personality). CSS variable sets, color palettes, and visual constraints that ensure compiled output matches the personality's visual identity.

Try asking your AI

"What context does Animatic give you about the editorial personality?"

"Show me the AI guidance for the focus-pull stagger primitive"


The personality narrowing mechanism

The single most important context mechanism in Animatic is personality selection. Choosing a personality does not just set a color scheme — it narrows the AI's entire decision space.

Without a personality, the AI has 146 primitives, 12 camera movement types, multiple easing curve families, and no constraints. With a personality, the AI operates within a defined subset:

| Decision | Without personality | With editorial personality |
| --- | --- | --- |
| Available primitives | 146 | ~35 |
| Camera movements | 12 types | 2 (push-in, drift) |
| Easing curves | All | 3 (enter, exit, smooth — no spring) |
| Perspective | Any | None (2D only) |
| Blur effects | Available | Forbidden |
| Transition type | Any | Opacity crossfade only |

This narrowing is not a limitation — it is a quality mechanism. An AI with 146 options and no guardrails will produce inconsistent work. An AI with 35 personality-validated options and clear constraints produces animation that looks like it was designed by someone who understands the medium.

How narrowing flows through tools

  1. get_personality — Returns the full personality definition including all constraints and AI guidance
  2. search_primitives — Automatically filters results to personality-compatible primitives
  3. recommend_choreography — Builds sequences using only approved primitives, timing tiers, and transition types
  4. validate_choreography — Rejects any primitive or technique that violates the personality's rules
  5. compile_motion — Applies the personality's easing curves and timing tiers to the final CSS output

The AI does not need to remember constraints. The tools enforce them.
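The narrowing step in search_primitives can be pictured as a filter over the catalog. The data and the compatible rule below are hypothetical; the real tools apply richer rules than this two-condition check:

```python
# Hypothetical personality and catalog entries for illustration only.
personality = {
    "name": "editorial",
    "forbidden_techniques": {"spring", "blur", "perspective"},
}

primitives = [
    {"name": "opacity-crossfade", "techniques": set(), "personality_affinity": ["editorial", "universal"]},
    {"name": "spring-pop", "techniques": {"spring"}, "personality_affinity": ["cinematic-dark"]},
    {"name": "slide-fade", "techniques": set(), "personality_affinity": ["editorial", "neutral-light"]},
]

def compatible(prim, pers):
    # A primitive survives narrowing when the personality claims affinity
    # and none of the primitive's techniques are forbidden.
    return (
        pers["name"] in prim["personality_affinity"]
        and not (prim["techniques"] & pers["forbidden_techniques"])
    )

allowed = [p["name"] for p in primitives if compatible(p, personality)]
# → ["opacity-crossfade", "slide-fade"]
```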


How choreography recommendations work

Choreography is where context pays off. When you ask for a choreography recommendation, the AI follows a structured process:

Intent → Personality → Filtered Primitives → Sequence → Validation → Output

Step by step

1. Parse intent. The AI reads your description — "a hero section entrance for a SaaS dashboard" — and identifies the motion goals: what needs to enter, what needs emphasis, what transitions are needed.

2. Apply personality constraints. The active personality filters the available primitives, sets timing ranges, and constrains camera and transition behavior.

3. Select primitives. From the filtered set, the AI matches primitives to each beat of the sequence. It reads when_to_use and when_to_avoid to pick appropriate matches, checks composable for layering opportunities, and respects stagger_compatible for multi-element entrances.

4. Build the sequence. Primitives are arranged into a timeline with phase structure, dwell times, transition points, and stagger offsets. The AI uses the personality's timing tiers to set durations.

5. Validate. The complete choreography is checked against:

  • Personality compatibility for every primitive
  • Timing guardrails (no phase shorter than the minimum, no phase longer than the maximum)
  • Camera movement restrictions
  • Transition type constraints
  • Composability conflicts (two non-composable primitives on the same element)

6. Return with context. The validated choreography is returned with per-beat explanations: why each primitive was chosen, which alternatives were considered, and how the timing was derived.
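The timing guardrail check in step 5 can be sketched as below. The 2000-4500ms bounds echo the dwell-time range mentioned under the Timing principle, but both the bounds and the message format are illustrative; the real validator applies per-personality tiers:

```python
def check_timing(phases, min_ms=2000, max_ms=4500):
    """Return violation messages for phases outside the dwell-time guardrails.
    Bounds and message wording are illustrative, not Animatic's actual output."""
    violations = []
    for phase in phases:
        if phase["duration_ms"] < min_ms:
            violations.append(f"{phase['name']}: {phase['duration_ms']}ms is shorter than the {min_ms}ms minimum")
        elif phase["duration_ms"] > max_ms:
            violations.append(f"{phase['name']}: {phase['duration_ms']}ms exceeds the {max_ms}ms maximum")
    return violations

phases = [
    {"name": "hero", "duration_ms": 3000},
    {"name": "features", "duration_ms": 1200},  # too short
    {"name": "cta", "duration_ms": 5000},       # too long
]
print(check_timing(phases))
```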

Try asking your AI

"Recommend choreography for a cinematic-dark product demo with 4 phases: hero entrance, feature showcase, data visualization, and call to action"

"Walk me through why you chose each primitive in that choreography"


Animation principles as context

Animatic's tools are informed by Disney's 12 basic principles of animation, adapted for UI motion. These principles are not just reference material — they shape how the AI structures sequences and selects primitives.

| Principle | UI adaptation | How it affects AI decisions |
| --- | --- | --- |
| Squash & Stretch | Icon wiggle, subtle button scale (2-3%) | AI uses icon rotation over body deformation for product UI |
| Anticipation | Pre-motion signals: hover glow, scale-up before press | AI inserts anticipation beats before dramatic actions |
| Staging | One animation at a time; direct the eye | AI avoids simultaneous competing animations |
| Pose-to-Pose | Keyframe definitions with browser interpolation | AI defines key states, lets CSS handle tweening |
| Follow Through | Different speeds for different elements | AI assigns speed tiers: header (fast), body (medium), container (slow) |
| Slow In / Slow Out | Easing curves on everything except progress bars | AI applies personality-specific easing, never linear (except progress fills) |
| Arc | Curved timing via easing curves | AI avoids linear movement paths |
| Secondary Action | Supporting animations: border darkening, brightness shift | AI layers secondary effects to reinforce primary motion |
| Timing | Dwell time per phase: 2-4.5 seconds depending on content | AI calculates phase duration based on content complexity |
| Exaggeration | Demo-scale motion: larger scale changes, visible overshoot | AI applies demo-appropriate intensity, not subtle interactive-UI values |
| Solid Drawing | Consistent radii, shadows, borders during animation | AI preserves visual consistency through transforms |
| Appeal | Clear narrative: the viewer understands what happened | AI structures sequences with a story arc |

Two additional principles extend the Disney foundation for multi-phase UI animation:

Directional Journey — Inspired by the Eames film Powers of Ten. Each phase enters from a different direction to create a sense of journeying through levels of detail. The AI varies entry origins across phases: fade, rise from below, slide from right, scale up. Staggered items within each phase match the phase's direction.

Spatial Causality — Element B appears because element A's animation reached it, not because a timer fired. The AI builds causality chains where motion flows through space with visible cause and effect, rather than independent timer sequences.
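One way to picture spatial causality: each element's start time derives from when the previous element's motion reaches it, not from an independent timer. The helper and the travel_ms_per_px speed below are hypothetical, purely to illustrate the chain:

```python
def causality_chain(elements, travel_ms_per_px=2):
    """Schedule each element to start when the previous animation 'arrives' at it.
    travel_ms_per_px is a hypothetical speed for motion crossing the gap."""
    schedule = []
    clock = 0
    prev_x = None
    for el in elements:
        if prev_x is not None:
            # Delay is proportional to the spatial distance the motion must cover.
            clock += abs(el["x"] - prev_x) * travel_ms_per_px
        schedule.append({"name": el["name"], "start_ms": clock})
        clock += el["duration_ms"]  # next element waits for this one to finish
        prev_x = el["x"]
    return schedule

elements = [
    {"name": "card-a", "x": 0,   "duration_ms": 400},
    {"name": "card-b", "x": 120, "duration_ms": 400},
    {"name": "card-c", "x": 240, "duration_ms": 400},
]
print(causality_chain(elements))
# → starts at 0ms, 640ms, and 1280ms: each card begins only after
#   motion has crossed the gap from its predecessor
```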

Try asking your AI

"Explain how anticipation applies to the button press sequence in cinematic-dark"

"Build a 4-phase sequence that follows the directional journey principle — each phase entering from a different direction"


Reference breakdowns as a learning resource

Animatic includes 20 detailed motion analyses of exemplary animations. These breakdowns are available via the search_breakdowns and get_breakdown tools, and they serve as both learning material and a source of extractable primitives.

What a breakdown contains

Each breakdown maps an animation frame by frame, identifying:

  • Signature moments — the 3-5 techniques that define the reference
  • Timing analysis — exact durations, easing curves, stagger offsets
  • Personality classification — which Animatic personality the reference aligns with
  • Quality tier — exemplary, strong, or interesting
  • Extracted primitives — novel motion patterns formalized as reusable primitives
  • Tags — searchable labels: stagger, ambient, grid, onboarding, product-demo, etc.

Breakdowns by personality

| Personality | Breakdowns | Examples |
| --- | --- | --- |
| cinematic-dark | 10 | Linear Homepage (spring physics), Dot Grid Ripple (wave distortion), Flow Field Vortex (generative lines) |
| editorial | 4 | Text-Image Reveal (clip-path split), Nume.ai Chat Dashboard (progressive build), 3D Card Cascade (grid flip), Card Conveyor Depth Rail (z-axis scroll) |
| neutral-light | 4 | Linear Onboarding Wizard (step indicators), Notion Onboarding (progressive disclosure), Vercel Onboarding (minimal flow) |
| universal | 2 | Sparse Dot Breathing (ambient), Concentric Depth Pulse (ring breathing) |

Quality tiers

| Tier | Count | Meaning |
| --- | --- | --- |
| exemplary | 10 | Best-in-class execution. Study these first. |
| strong | 10 | Excellent technique with minor gaps. Reliable reference. |
| interesting | 0 | Novel approach worth noting but not yet a model. |

Using breakdowns with the AI

Breakdowns are not just documentation. The AI can retrieve them, analyze their techniques, and apply those techniques to your work.

Try asking your AI

"Show me the Linear Homepage breakdown — what makes it exemplary?"

"Find breakdowns tagged with 'onboarding' and summarize the common patterns"

"I want to build something similar to the Nume.ai chat dashboard. What primitives were extracted from that breakdown?"

"Search breakdowns for stagger techniques and show me the timing patterns they use"


Prompting for better results

The quality of Animatic's output depends on the quality of your prompt. Here are prompts ranked from basic to advanced, with explanations of why each level works better.

Level 1: Basic

Make an animation for my landing page.

This works, but the AI has to guess everything: personality, content, mood, structure. It will produce something generic.

Level 2: Name the personality

Create an editorial animation for my landing page.

Better. The AI now has timing, easing, camera constraints, and a filtered primitive set. But it still does not know what your landing page contains or what the animation should emphasize.

Level 3: Describe the content and intent

Create an editorial animation for my docs landing page.
I need: hero headline entrance, three feature cards staggering in,
and a content cycling section showing different use cases.

Much better. The AI can map each element to a specific primitive category and build a structured sequence. The mention of "content cycling" directly points to ed-content-cycle.

Level 4: Specify mood and pacing

Create an editorial animation for my docs landing page.
Tone: confident, unhurried. Let each section breathe.
Hero headline with blur-reveal, three feature cards staggering in
with 120ms delays, then a content cycling section showing
4 use cases at 3-second intervals.

Now the AI has mood (which affects timing choices), specific primitive preferences (blur-reveal), exact stagger values, and content cycling parameters. The output will be precise.

Level 5: Reference and constrain

Create an editorial animation for my docs landing page,
inspired by the text-image-reveal breakdown.
Tone: confident, unhurried. No spring physics, no blur on cards.
Hero: blur-reveal with 800ms duration.
Features: slide-fade stagger, 120ms offset, 3 items.
Content cycling: 4 use cases, 3s per cycle, crossfade transition.
Stats section: count-up numbers with 150ms stagger.
Total loop: 14 seconds.

Maximum context. The AI has a reference to study, explicit constraints, per-section primitive assignments, timing values, and a target loop duration. The output will match your vision closely.

Prompting principles

| Principle | Why it matters |
| --- | --- |
| Name the personality | Eliminates 70%+ of the AI's decision space. Always do this. |
| Describe content, not effects | Say "three feature cards," not "animate some divs." The AI maps content to primitives better than you can. |
| State the mood | "Playful," "serious," "urgent," "calm" — these words adjust timing and easing choices. |
| Specify what you do not want | "No blur," "no spring physics," "no camera movement" — constraints are as valuable as instructions. |
| Reference a breakdown | If you have seen an animation you like, name the breakdown. The AI will study its techniques. |
| Give a target loop time | Forces the AI to budget time across phases instead of making each one too long or too short. |
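Budgeting a target loop time can be sketched as splitting the total across phases by weight. This is a hypothetical heuristic, not Animatic's actual algorithm; the section names and weights below are illustrative:

```python
def budget_loop(total_ms, phases):
    """Split a target loop duration across phases proportionally to their weights."""
    total_weight = sum(p["weight"] for p in phases)
    return {p["name"]: round(total_ms * p["weight"] / total_weight) for p in phases}

# Splitting a 14-second loop across four sections (weights are illustrative).
print(budget_loop(14_000, [
    {"name": "hero",     "weight": 2},
    {"name": "features", "weight": 3},
    {"name": "cycling",  "weight": 4},
    {"name": "stats",    "weight": 2},
]))
```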

Try asking your AI

"I want a cinematic-dark product demo inspired by the Linear homepage breakdown. 5 phases, 18-second loop, focus-pull entrances, camera orbit at phase 3, spring physics on the interactive elements."

"Build a neutral-light onboarding tutorial for a 4-step signup flow. Use spotlight highlights and cursor simulation. Keep total duration under 15 seconds. No decorative motion — every animation should guide the user's eye."
