Teach LLMs to Mimic Your Style

Use few-shot prompting to teach your LLM to mimic your style

In this guide, we’ll explore few-shot prompting with large language models (LLMs), a technique that improves your LLM’s performance by teaching it to mimic your style from a handful of examples, or from an entire dataset. Let’s get started!

Few-shot Prompting Basics

Few-shot prompting is a technique in which you include a few example inputs and desired outputs in the prompt. The LLM then generates responses that follow the pattern set by those examples. This can significantly improve output quality, letting you adapt the LLM’s responses to different contexts and ensure your messages are always on point.
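To make the idea concrete, here is a minimal sketch of how a few-shot prompt is assembled: a handful of example pairs followed by the new input. The example pairs and function name are illustrative, not tied to any particular tool.

```python
# Illustrative example pairs: (incoming message, response in your style).
EXAMPLES = [
    ("Hi, do you offer a free trial?",
     "Hi! We do, and I'd be happy to walk you through it. Could we set up a quick call?"),
    ("What does your pricing look like?",
     "Great question! Pricing depends on team size. Let's schedule a call to find the right fit."),
]

def build_few_shot_prompt(new_message: str) -> str:
    """Format example pairs so the LLM can infer the expected style."""
    lines = []
    for message, response in EXAMPLES:
        lines.append(f"Message: {message}")
        lines.append(f"Response: {response}")
        lines.append("")
    lines.append(f"Message: {new_message}")
    lines.append("Response:")  # the LLM completes the prompt from here
    return "\n".join(lines)
```

The prompt ends mid-pattern ("Response:"), which cues the LLM to continue in the same style as the examples.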

A Real-life Example: Sales Response Automation

Suppose you want to create a sales automation workflow that responds to emails and LinkedIn messages automatically. You can use few-shot prompting to get your LLM to generate responses that match your style and follow your specific rules and objectives. For instance, LinkedIn replies tend to be shorter and more specific than emails.

1. Create the Sales Response Automation Workflow

To start, create a new workflow. Add a long text input to capture the message you want to respond to. Next, add an LLM action. Don’t forget to hit “Save” on the top right to save your workflow.

Don’t Take System Prompts for Granted

Customizing the system prompt with context specific to your use case improves performance. For example, a system prompt might be:

You are a sales representative for Company XYZ. Your goal is to reply to emails professionally while trying to book a meeting.

2. Add Examples to Your LLM Prompt

Incorporate messages that some of your prospects have previously sent, along with your responses. These will constitute the prospect-reply pairs, serving as examples. The more examples you include, the better performance you can anticipate from the LLM.

Enter a few example messages and your corresponding responses, using consistent identifiers such as “Message” and “Response” or “Received” and “Reply”. In our example, we used “Response” and “Company XYZ reply”. This pattern shows the LLM what the expected task is. At the end, use {{}} to insert the message from the input component, followed by the reply identifier so the LLM knows to complete it.

Example Prompt:

Message: Hi, I'm interested in your services. Can you tell me more?
Response: Hi! Thank you for reaching out. I'd love to tell you more about our services. Let's schedule a meeting at your earliest convenience.

Include multiple examples in your prompt to improve LLM accuracy.
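The template pattern from this step can be sketched as follows. The {{response}} placeholder stands in for the workflow’s input component; the substitution function is illustrative, not the tool’s actual templating engine.

```python
# Prompt template using the "Response" / "Company XYZ reply" identifiers.
TEMPLATE = """\
Response: Hi, I'm interested in your services. Can you tell me more?
Company XYZ reply: Hi! Thank you for reaching out. I'd love to tell you more about our services. Let's schedule a meeting at your earliest convenience.

Response: {{response}}
Company XYZ reply:"""

def render(template: str, **variables: str) -> str:
    """Replace {{name}} placeholders with their values."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template
```

Calling `render(TEMPLATE, response="Do you offer annual discounts?")` yields a prompt that ends with “Company XYZ reply:”, prompting the LLM to answer in the demonstrated style.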

3. Use a Training Database

A few samples are beneficial, but a database of samples can enhance the LLM’s performance even further. By drawing on a dataset of past messages and effective replies, you can prompt your LLM to generate similar responses.

Upload a CSV file containing samples of received messages and your corresponding replies. Ensure the dataset is cleanly formatted, with one column for the received message and one for your reply.
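If you want to sanity-check such a CSV locally before uploading, a loader might look like this. The column names “received” and “reply” are hypothetical; adjust them to match your own dataset.

```python
import csv

def load_examples(path: str) -> list[tuple[str, str]]:
    """Read (received message, reply) pairs from a CSV with headers."""
    with open(path, newline="", encoding="utf-8") as f:
        return [(row["received"], row["reply"]) for row in csv.DictReader(f)]
```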

4. Search in Knowledge

To apply few-shot prompting across an entire dataset, use a search action to find the past responses most similar to the current input. Add a Knowledge search action to your workflow and configure it:

  • Select your dataset from the dropdown.

  • Use {{}} to set up the query to the input component (e.g., {{response}}).

  • Specify the name of the vector field corresponding to the incoming messages in your dataset.

  • Increase the number of results to 10+ (e.g., 15, 30).

This action will retrieve a set of responses that are most similar to your input.
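Conceptually, the search action does something like the following. Real knowledge-search actions use learned vector embeddings; this sketch substitutes a simple bag-of-words vector purely to show the retrieval idea, and all names are illustrative.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for a real embedding: word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, examples: list[tuple[str, str]], k: int = 10):
    """Return the k (message, reply) pairs most similar to the query."""
    q = embed(query)
    return sorted(examples, key=lambda ex: cosine(q, embed(ex[0])), reverse=True)[:k]
```

Increasing k (the number of results) gives the LLM more examples to imitate, at the cost of a longer prompt, which is why the next step truncates the results.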

5. Add a ‘Truncate Text’ Action

Add a truncate text action to limit the size of the search results. Specify the array from the previous search action and cap the size of the objects within it at a set number of tokens. Tokens are roughly word fragments, and LLMs can only accept a limited number of them in a prompt. Set the limit to 2000 tokens to ensure the data fits within the LLM’s prompt limitations.
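A rough sketch of token-budget truncation is shown below. It approximates tokens as whitespace-separated words; production tokenizers (e.g. BPE) count differently, so treat the 2000 figure as a ballpark budget rather than an exact limit.

```python
def truncate_chunks(chunks: list[str], max_tokens: int = 2000) -> list[str]:
    """Keep whole chunks, in order, until the token budget is spent."""
    kept, used = [], 0
    for chunk in chunks:
        n = len(chunk.split())  # crude token count: one token per word
        if used + n > max_tokens:
            break
        kept.append(chunk)
        used += n
    return kept
```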

6. Feeding the Truncated Results into the LLM

Instead of entering examples manually, include the truncated output of the previous action in your LLM prompt. Access action outputs via {{}} followed by the action name.

Final LLM Prompt:

Based on the responses, create a response for the prospect. 
The following data shows examples of the prospect sending a message in "prospect-reply" and the Company XYZ sales rep replying in "company-reply".

{{ steps.truncate_text.output.chunks }}

RESPONSE: {{ response }}

COMPANY XYZ REPLY:
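The final prompt above can be sketched end to end as follows. Here `retrieved_pairs` stands in for the output of the knowledge-search and truncate steps, and `incoming` for the input component; both names are illustrative.

```python
def final_prompt(retrieved_pairs: list[tuple[str, str]], incoming: str) -> str:
    """Assemble retrieved examples plus the incoming message into one prompt."""
    blocks = [
        "Based on the responses, create a response for the prospect.",
        'The following data shows examples of the prospect sending a message in '
        '"prospect-reply" and the Company XYZ sales rep replying in "company-reply".',
        "",
    ]
    for message, reply in retrieved_pairs:
        blocks.append(f"prospect-reply: {message}")
        blocks.append(f"company-reply: {reply}")
        blocks.append("")
    blocks.append(f"RESPONSE: {incoming}")
    blocks.append("COMPANY XYZ REPLY:")
    return "\n".join(blocks)
```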

This final setup ensures that the LLM mimics your style and includes important details from the training dataset. Remember, experimentation is key to success with few-shot prompting. Try out different techniques and see what works best for you.
