Content Creation: Style Mimicry

A step-by-step guide to using few-shot prompting to get a Large Language Model (LLM) to mimic a specific writing style in automated responses.

This guide explores how to use a powerful technique called few-shot prompting to get an LLM to respond to messages in your unique style, with no fine-tuning required. This is perfect for automating sales responses, customer support, or any communication where a consistent tone is critical.

Goal

We will build a workflow that can automatically generate a sales response to an incoming message. The response will be crafted to mimic the style and tone of a human sales representative, based on examples of their previous conversations.

Key Concepts

  • Few-shot Prompting: This is the core technique. Instead of just telling the LLM what to do, we provide it with several examples (the "few shots") of inputs and desired outputs. The LLM infers the pattern from these examples at inference time, without any retraining, and applies it to new inputs.

  • Vector Search: To handle a large number of examples, we can't fit them all into a single prompt. Instead, we'll store our examples in a knowledge base and use vector search to find the most relevant examples for any new incoming message.
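The few-shot idea can be sketched in a few lines of Python. The helper and example pairs below are hypothetical, but the structure (example pairs first, then the new input) is exactly what the workflow assembles:

```python
# Minimal sketch of a few-shot prompt: example input/output pairs are
# placed before the new input so the model can infer the pattern.
def build_few_shot_prompt(examples, new_message):
    """examples: list of (prospect_message, your_reply) pairs."""
    parts = [
        "Based on the provided examples, write a reply to the new message in the same style.",
        "",
        "---",
        "EXAMPLES:",
    ]
    for prospect, reply in examples:
        parts.append(f"Prospect: {prospect}")
        parts.append(f"Reply: {reply}")
    parts += ["---", "", f"NEW MESSAGE:\n{new_message}", "", "YOUR REPLY:"]
    return "\n".join(parts)

examples = [
    ("Can you tell me more about pricing?",
     "Happy to! The quickest way is a 15-minute call. Does Thursday work?"),
    ("We're comparing a few vendors right now.",
     "Totally fair. A short demo usually makes the comparison easier. Open to one?"),
]
prompt = build_few_shot_prompt(examples, "Does your product integrate with Salesforce?")
```

With only a handful of examples the whole prompt fits comfortably; vector search (below) is what lets you scale past that limit.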

Required Nodes & MCPs

  • Text Input Node: To receive the new message that needs a response.

  • Knowledge Search Node: To search our database of past conversations.

  • OpenAI MCP: To power the LLM that generates the final response.

Workflow Steps

Step 1: Prepare Your Training Data

Before building the workflow, you need a dataset of your past conversations.

  1. Create a CSV file with two columns: prospect_message and your_reply.

  2. Populate this file with as many real examples as possible. The more examples you provide, the better the LLM will learn your style.

  3. Upload this CSV as a new collection in your AgenticFlow Knowledge base.
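If you are assembling the dataset programmatically, the CSV from step 1 can be written with Python's standard library. The file name and rows here are illustrative; the column names match step 1:

```python
import csv

# Each row pairs an incoming prospect message with the rep's actual reply.
rows = [
    {"prospect_message": "What does onboarding look like?",
     "your_reply": "Great question! Onboarding takes about a week. Want to walk through it on a quick call?"},
    {"prospect_message": "Send me more info by email.",
     "your_reply": "Will do! I'll also hold a slot Thursday at 2pm in case a live walkthrough is easier."},
]

with open("sales_conversations.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["prospect_message", "your_reply"])
    writer.writeheader()
    writer.writerows(rows)
```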

Step 2: Build the Workflow

1. Input for the New Message

  • Node: Text Input

  • Purpose: This is where you'll paste the new message you want to respond to.

2. Find Similar Past Conversations

  • Node: Knowledge Search

  • Purpose: Search your training data for past conversations that are most similar to the new message.

  • Setup:

    • Knowledge: Select the CSV collection you uploaded in Step 1.

    • Query: Use the output from the first node: {{text_input_1.text}}.

    • Number of Results: Set this to a relatively high number, such as 15 or 30, so the LLM has plenty of examples to learn from.
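Conceptually, the Knowledge Search node embeds the query and every stored example, then returns the top-k most similar pairs. Real vector search uses learned embeddings; the sketch below substitutes a simple bag-of-words cosine similarity purely to illustrate the ranking step, and all names and data are hypothetical:

```python
import math
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts, a toy stand-in for embedding similarity."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def top_k_examples(query, examples, k=15):
    """Rank (prospect_message, your_reply) pairs by similarity to the query."""
    return sorted(examples, key=lambda ex: similarity(query, ex[0]), reverse=True)[:k]

history = [
    ("How much does the enterprise plan cost?", "Pricing depends on seats. Got 15 minutes Thursday?"),
    ("Can we integrate with Salesforce?", "Yes, natively. Happy to demo the integration live."),
    ("Please remove me from your list.", "Done, sorry for the noise!"),
]
matches = top_k_examples("What's the cost of your enterprise tier?", history, k=2)
```

Only the retrieved pairs, not the whole history, get injected into the prompt in the next step, which is what keeps the prompt size bounded no matter how large your dataset grows.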

3. Craft the LLM Prompt

This is the most important step. We will construct a prompt that includes the relevant examples from our knowledge search and asks the LLM to generate a new reply.

  • Node: OpenAI MCP

  • Purpose: To generate the new, style-mimicking reply.

  • Setup:

    • Action: Chat

    • Model: gpt-4-turbo

    • System Prompt:

      You are a sales representative for Company XYZ. Your goal is to reply to emails professionally while trying to book a meeting.
    • User Prompt:

      Based on the provided examples, write a reply to the new message in the same style.
      
      ---
      EXAMPLES:
      {{knowledge_search_1.results}}
      ---
      
      NEW MESSAGE:
      {{text_input_1.text}}
      
      YOUR REPLY:
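For reference, the System/User prompt pair above, with the two template variables already expanded, maps onto a standard chat-completion payload. The retrieved-examples text below is a hypothetical placeholder; in the workflow it comes from the Knowledge Search node:

```python
# Hypothetical expanded values for the two template variables.
retrieved_examples = (
    "Prospect: How much does the enterprise plan cost?\n"
    "Reply: Pricing depends on seats. Got 15 minutes Thursday?"
)
new_message = "Does your product integrate with Salesforce?"

messages = [
    {"role": "system",
     "content": ("You are a sales representative for Company XYZ. "
                 "Your goal is to reply to emails professionally while trying to book a meeting.")},
    {"role": "user",
     "content": ("Based on the provided examples, write a reply to the new message in the same style.\n\n"
                 "---\nEXAMPLES:\n" + retrieved_examples + "\n---\n\n"
                 "NEW MESSAGE:\n" + new_message + "\n\nYOUR REPLY:")},
]
```

This messages list is what the OpenAI MCP effectively sends to the gpt-4-turbo chat endpoint (with the official SDK, `client.chat.completions.create(model="gpt-4-turbo", messages=messages)`).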

How It Works

  1. When a new message comes in, the Knowledge Search node finds the 15 most similar prospect_message and your_reply pairs from your history.

  2. These 15 examples are dynamically inserted into the prompt for the OpenAI MCP.

  3. The LLM analyzes these examples to understand your tone, phrasing, and objectives (e.g., booking a meeting).

  4. It then writes a new reply to the new message, applying the style it just learned.

This creates a powerful and dynamic workflow that consistently generates on-brand, stylistically aligned responses.
