FAQ
Frequently asked questions about building Workflows
How do I set default values for input parameters?
Click the settings icon at the bottom right of the input component, set the values, and click “Set current value”.
How do I insert variables into a CODE step?
Access to variables is possible via the `params` parameter. For instance, to access a variable called `name`, use `params.name` in JavaScript or `params["name"]` in Python.
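As a minimal sketch of the Python case: assuming `params` is the object the platform injects into the CODE step, and using a hypothetical variable named `name`, access works like ordinary dictionary lookup.

```python
# Sketch of accessing Workflow variables inside a Python CODE step.
# `params` here is a stand-in dict; in a real step the platform provides it.
# The variable name "name" is a hypothetical example.

def run(params):
    # Read an input variable called "name" with params["name"]
    greeting = "Hello, " + params["name"] + "!"
    # Outputs of earlier steps are accessed the same way, by their key
    return {"greeting": greeting}

# Example call with a stand-in params dict:
print(run({"name": "Ada"}))
```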
How do I insert variables into the API step?
On your API component, change the body to “edit as string” and use `{{}}` to access variables.
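For instance, an API body edited as a string might look like the following. This is a hedged sketch; the field names and the variable names `search_term` and `user_id` are hypothetical examples, not part of any specific API.

```json
{
  "query": "{{search_term}}",
  "user": "{{user_id}}"
}
```

At run time, each `{{…}}` placeholder is replaced with the value of the corresponding Workflow variable before the request is sent.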
How do I make running an action conditional?
Click on the three vertical dots on the top right of the action and select “Add conditions”. See the full guide at Adding condition to an action.
Why do I have to use single or double quotations around variables in my prompts?
Such marks (single, double, or triple quotes) are not required and have no direct functional effect. Quoting is a prompting technique that has been found to work well with LLMs. Keep in mind that an LLM prompt is one long piece of text composed of instructions, examples, and so on. Single, double, or triple quote marks around a variable `X` simply mark its scope (the beginning and end of the string `X` within the prompt). The `{{X}}` syntax brings the variable `X` into the prompt, meaning there won’t be `{{}}` around it when the text is passed to the LLM.
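To illustrate, a prompt that delimits a variable with triple quotes might look like this (the variable name `article_text` is a hypothetical example):

```
Summarize the article below in one sentence.

Article:
"""
{{article_text}}
"""
```

The triple quotes only tell the model where the article starts and ends; the model receives the article’s text itself, with no `{{}}` around it.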
How do I use the Checkbox input component as a condition for running an action?
Add a condition to your action and use the checkbox variable as the condition value. For example, under the default step name, use `{{checkbox}}` to run the action when the checkbox is ticked, or `{{!checkbox}}` to run it when the checkbox is not ticked.
How do I run an action multiple times, like a loop?
Click on the three vertical dots on the top right of the action and select “Enable foreach”. See the full guide at Loop through an action.
How do I reduce LLM hallucination?
Here are some steps you can take to improve your experience with LLMs:
Provide a good, precise system prompt
Provide a good prompt (see Tips on a good prompt)
Provide data as background knowledge
Experiment more with LLMs
Why is the LLM output cut off in the middle of a sentence, and how do I fix it?
LLMs can only process a limited number of tokens. To manage this, keep your prompt and knowledge concise. If your prompt or knowledge is too lengthy, there won’t be enough room left for the full output. Use vector search to fetch the most relevant pieces of knowledge, and consider decreasing the `page size` parameter under the advanced options in the relevant data settings.
How do I set multiple outputs?
In the last step of your Workflow, click the “Configure output” button. Disable “Infer output from the last action”. Use “Add new output key” to add outputs and `{{}}` to access variables and steps’ outputs. More details are provided at Output configuration.
How long can my bulk run take?
Each run is terminated after 4 hours. If your data table is large and 4 hours does not cover all the rows, you can rerun your enrichment using the “Run on rows that haven’t run” option. This resumes execution from where it stopped.
How do I access the bulk-run analysis results?
Each output variable of your AI Workflow will be added to your data table as a new column. You can see the results on the data table or export them to a CSV.
Can an AI Workflow have multiple outputs?
Yes! You can configure multiple outputs for your AI Workflow. Each output will have its corresponding column in the data table.
Can I run a Workflow multiple times on a data table?
Yes! There is no limitation on the number of times you run a Workflow. Just keep in mind to name the output column for each run to avoid overwriting existing results.
If my dataset contains more than one column to be analyzed, do I need to upload the dataset multiple times?
No! You can use the same data table while configuring the bulk run as many times as needed. Just remember to name the output column for each run to avoid overwriting existing results.
Can I leave the page after starting a bulk-run?
Yes! Bulk data enrichment is executed in the background, so rows of data will be added to your dataset even if you leave or refresh the page.
Is there a cap on the number of rows AI Workflows can process?
Technically, no; however, you will be limited by the number of credits you have.
What format is supported for the data table?
Enrichment runs across a data table, which can be created from CSV and Excel files, or from files such as PDFs, audio, and Word documents, which are automatically converted into a data table.
How does bulk run (enrichment) pricing work?
Bulk runs are charged the same as individual triggers of a Workflow. Each row in your dataset will be the equivalent of one execution of the Workflow (for example, 2 credits per row).
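As a sketch of the arithmetic, using the example rate of 2 credits per row and a hypothetical dataset size:

```python
# Hypothetical bulk-run cost estimate.
# The 2-credits-per-row rate is the example figure from above;
# the actual rate depends on your Workflow.
credits_per_row = 2
rows = 500  # hypothetical dataset size

total_credits = credits_per_row * rows
print(total_credits)  # total credits consumed by the bulk run
```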
How is credits consumption calculated?
There is a fixed fee per tier and a variable fee for certain steps. If you provide your own key, the variable fee is not charged.
What if I exceed the credit limit for my plan?
Depending on your plan, you might need to top up before using the platform further, or you may have a buffer that lets you keep using the platform with a negative credit balance before topping up.
Will there be a price difference between using GPT-3.5 vs. GPT-4 or other models?
Different models have their own pricing. The cost is passed on directly. Keep in mind that if you provide your own key, the variable fee is not charged.
Can I use my own API key?
Absolutely! Set up your own keys on the API key page.