Introducing LLM Actions: AI decisioning built into your workflows 

LLM Actions bring AI-powered decisioning natively into Customer.io workflows, so your campaigns can generate personalized content, classify intent, and route customers without webhooks or external tools.

Molly Murphy
Sr. Product Marketing Manager

Your campaigns just got a lot smarter.

Most marketing automation is built around rules. If a customer does X, send Y. If they belong to segment A, route them to branch B. It's logical, it's reliable, but it breaks down the moment customer behavior gets complicated.

The problem isn't the rules themselves; it's that real customers are defined by the relationship between their attributes, not any single one. A customer who's on a free plan, visited pricing twice this week, and has a 90-day tenure is a very different conversation than a free-plan customer who's been inactive for six months. Same segment. Completely different intent. No branch condition captures that nuance and then delivers a personalized journey; at least, not without building a workflow no one wants to maintain.

That's exactly where LLMs shine, and it's why we built LLM Actions.

What are LLM Actions?

LLM Actions are a new native step you can add to any campaign workflow in Customer.io. Instead of (or alongside) sending a message or firing a webhook, you can now call a large language model directly mid-journey and use what it returns to personalize content, classify customers, or decide which path they take next.

Think of it like giving your campaign a brain. You feed it context such as customer attributes, behavioral events, and product data, and it gives you back something useful: a personalized subject line, a lead score, an intent classification, a follow-up message that actually matches the customer's tone.

Here's how it works in practice:

  1. Add a Run LLM step to any campaign workflow (found in the Data section of the build menu)
  2. Write your prompt and inject customer or event context using Liquid, like {{customer.first_name}} or {{event.product_viewed}}
  3. Define your outputs — specify what you want to extract from the response and where to store it
  4. Preview and confirm the output before you activate

The response gets stored as a journey attribute or customer attribute—and from there, it's available to use anywhere in the campaign: in message copy, branching conditions, downstream steps, or even as a trigger for another workflow.

Why this matters

This isn't just a neat trick. It changes what's actually possible inside your Customer.io campaigns.

Personalization that actually scales. Writing copy variants for every segment, behavior, or attribute combination isn't sustainable. LLM Actions let you generate content that adapts to each customer's specific context (their tier, their activity, their support history) without writing every permutation yourself.

Smarter routing without the sprawl. Complex branching logic is one of the biggest pain points for lifecycle teams. LLM Actions let you replace a nest of conditionals with a single classification step. Ask the model to assess a customer's intent, persona, or lead quality, store the result as a journey attribute, and branch on it downstream. Cleaner workflows, fewer points of failure.

Native, not bolted on. Before LLM Actions, teams who wanted AI in their campaigns had to set up webhooks and manage third-party tools. LLM Actions are built into the same environment as your customer profiles, behavioral events, and campaign logic. You write the prompt. We handle the model call.

Decisions and content that stay in the journey. LLM Actions work natively with journey attributes, meaning the intelligence the model generates (a lead score, a persona classification, a personalized message) lives within the campaign and expires when it ends. No permanent changes to the customer profile, no stale data accumulating over time. You get the right context for the right moment, without the cleanup.

What can you actually do with it?

Here are six use cases to get you thinking, along with actual prompts you can use to get started.

"Our first live campaign featuring LLM Actions for AI-generated emails is running, and we've already received some very positive signals from recipients. One of my favorite aspects is using boolean output fields in LLM Actions for decisioning steps." - Pitch

Content generation

Generate personalized subject lines, email body copy, SMS, and CTAs tailored to each customer's context. No template hacks. No manual variants.

`Generate a promotional email for {{customer.first_name}}, a {{customer.tier}} member. Favorite category: {{customer.favorite_category}}. Generate a subject line and 2-paragraph email body.`

Persona analysis

Ask the model to analyze a customer's attributes and classify them into a persona you can use for targeting, routing, or downstream personalization.

`Analyze this customer:

  • Job title: {{customer.job_title}}
  • Company size: {{customer.company_size}}
  • Features used: {{customer.features_used}}
  • Support tickets: {{customer.ticket_count}}

Classify them as one of: "power_user", "casual_user", "struggling_user", or "champion_user". Explain your reasoning in one sentence, then provide the classification.`

Lead qualification

Score and qualify leads based on behavior and attributes without having to manually review every time. Store the result and route hot leads directly to sales.

`Evaluate this lead for sales readiness:

  • Signed up: {{customer.created_at}}
  • Pages viewed: {{event.pages_visited}}
  • Pricing page visits: {{customer.pricing_views}}
  • Company size: {{customer.company_size}}
  • Industry: {{customer.industry}}

Score from 1-100 and classify as "hot", "warm", or "cold". Respond with JSON: {"score": X, "qualification": "...", "reason": "..."}`
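To make the JSON contract concrete, here's a minimal sketch in Python of how a structured response like this drives a routing decision. The `route_lead` helper is illustrative only, not Customer.io code; the platform extracts your defined outputs for you.

```python
import json

def route_lead(llm_response: str, fallback_qualification: str = "warm") -> str:
    """Illustrative sketch: map the JSON the prompt requests to a branch."""
    try:
        qualification = json.loads(llm_response).get("qualification", fallback_qualification)
    except json.JSONDecodeError:
        # Malformed response: continue with a safe fallback value
        qualification = fallback_qualification
    return "sales" if qualification == "hot" else "nurture"

# A "hot" lead routes straight to sales; everything else nurtures
print(route_lead('{"score": 87, "qualification": "hot", "reason": "repeat pricing visits"}'))
# prints: sales
```

The point is that asking for JSON gives you fields you can store and branch on individually, rather than a blob of prose you'd have to parse yourself.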

Intent detection

Understand what a customer is trying to accomplish and route them into the right path, without building out a maze of conditional branches.

`Based on this customer's recent activity: {{customer.recent_events | json}}

What is their likely intent? Give a one-line reason and choose one:

  • "ready_to_buy"
  • "comparing_options"
  • "just_browsing"
  • "needs_support"
  • "churn_risk"`

Sentiment & tone matching

Analyze a customer's most recent support interaction and adapt your messaging tone accordingly. Especially useful for post-support sequences where the wrong tone can undo otherwise good work.

`Analyze the tone of this customer's recent support interactions: {{customer.last_support_message}}

Rate their sentiment (positive/neutral/negative) and recommend a response tone. Then write a follow-up message that matches their emotional state.`

Dynamic branching

Any of the above outputs can be stored as a journey attribute or customer attribute and used to drive branching logic downstream. For example, using the lead qualification prompt, you'd store the result as journey.lead_qualification—then create a True/False Branch where journey.lead_qualification == "hot" routes to sales, and everyone else continues into a nurture sequence.

One classification step. No nested conditionals. Much easier to maintain.
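As a rough illustration of the difference, here's each approach in Python. The attribute names (`plan`, `pricing_views`, `tenure_days`, `days_inactive`) are hypothetical stand-ins, not real Customer.io fields:

```python
def route_with_rules(customer: dict) -> str:
    # The nested-conditional approach: each nuance needs its own branch,
    # and every new signal multiplies the paths you have to maintain.
    if customer["plan"] == "free":
        if customer["pricing_views"] >= 2 and customer["tenure_days"] >= 90:
            return "sales"
        if customer["days_inactive"] > 180:
            return "winback"
    return "nurture"

def route_with_classification(journey: dict) -> str:
    # The LLM Actions approach: one stored classification, one branch.
    return "sales" if journey["lead_qualification"] == "hot" else "nurture"

# Both route the engaged free-plan customer from the intro to sales
print(route_with_rules({"plan": "free", "pricing_views": 2,
                        "tenure_days": 90, "days_inactive": 3}))   # prints: sales
print(route_with_classification({"lead_qualification": "hot"}))    # prints: sales
```

The second function never grows: new signals go into the prompt, not into the branching logic.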

A note on where to store outputs

When you get a response back from the model, you have two storage options: journey attributes or customer attributes.

For most use cases, journey attributes are the right choice. They're temporary, traveling with the customer through the campaign and expiring when the journey ends. That means no profile clutter, and no cleanup required. You get the intelligence you need, exactly when you need it, without permanently changing what you know about the customer.

Use customer attributes only when you want the result to persist beyond a campaign—for something like a long-lived persona or a health score that should influence future automations.

What about data and privacy?

Data sent to Customer.io's hosted models is not used for model training. Full stop. Your customer data stays yours.

If LLM calls fail, the system automatically retries. If retries are exhausted, the journey continues with the fallback values you set as a safety net. (Success/failure branching logic, for teams who want more control over the failure path, is coming in a future release.)
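Conceptually, that retry-then-fallback behavior looks like this sketch; the retry count and backoff schedule here are illustrative assumptions, not the platform's actual values:

```python
import time

def run_llm_step(call, fallback_outputs: dict, retries: int = 3,
                 base_delay: float = 1.0) -> dict:
    """Illustrative sketch of the documented behavior: retry a failed
    LLM call, then continue the journey with your fallback values."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            time.sleep(base_delay * 2 ** attempt)  # simple exponential backoff
    return fallback_outputs  # retries exhausted: journey continues safely
```

Either way, the journey never stalls: a customer always exits the step with usable values.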

Built-in safety controls

LLM Actions ships with safety built in, not bolted on.

For teams using Gemini models, you can set thresholds across predefined safety categories like harassment, hate speech, dangerous content, and more, directly in your action settings. Outputs that exceed your threshold are automatically blocked before they reach a customer.

Every Customer.io account also has access to a compliance prompt: a persistent instruction layer that sets guardrails for your organization. Think of it as a standing brief the model always reads: your brand voice, topics to avoid, regulatory constraints. It applies across every LLM Action in your account, so you don't have to re-specify rules prompt by prompt.

How much does LLM Actions cost?

During the introductory period, all customers will receive a 100,000-credit bundle that equates to approximately 100,000 actions when using Gemini 2.5 Flash Lite as the baseline, subject to applicable Promotional Credit Terms and Feature Terms. Selecting different models will affect how quickly credits are consumed.

The shift from static, rules-based campaigns to intelligent, adaptive journeys is already underway. Teams are stitching together Zapier, webhooks, and external AI tools to get there, but that comes with friction, latency, and complexity.

LLM Actions are how that capability becomes native to Customer.io. Less setup. Fewer moving parts. AI that fits into the journey and data model you already use.

Ready to try it? Add a Run LLM step to any campaign and see what your journeys can do when they actually think.
