OpenAI Search
A guide to using the OpenAI Search action for web searches within the OpenAI ecosystem.
The OpenAI Search action provides a way to conduct web searches using OpenAI's capabilities. While it is not a standalone search engine in the traditional sense, it is a powerful research step to run before interacting with an OpenAI language model, such as in the LLM Action.
It is designed to gather relevant, up-to-date information from the web that can then be passed as context to a chat completion or instruction-following model, improving the factual accuracy and timeliness of its responses.
You will need an OpenAI account and your API key.

1. Sign up for an account on the OpenAI website.
2. Find your API key in your account settings under API Keys.
3. In AgenticFlow, navigate to Settings > Connections and add a new OpenAI connection, providing your API key.
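Before adding the key as a connection, it can be worth confirming it is valid. A minimal sketch using only the Python standard library, which calls OpenAI's `GET /v1/models` endpoint (a lightweight, documented way to check that a key is accepted):

```python
import urllib.request
from urllib.error import HTTPError


def build_models_request(api_key: str) -> urllib.request.Request:
    """Build the GET /v1/models request used to sanity-check a key."""
    return urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )


def check_openai_key(api_key: str) -> bool:
    """Return True if the API accepts the key, False on a 401/403."""
    try:
        with urllib.request.urlopen(build_models_request(api_key), timeout=10) as resp:
            return resp.status == 200
    except HTTPError:
        return False  # e.g. 401 for an invalid or revoked key
```

If `check_openai_key` returns `False`, double-check the key in your OpenAI account settings before configuring the connection.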
The OpenAI Search action takes your query and uses it to find relevant snippets of information from across the web. The key difference from other search actions is that its output is formatted specifically to be used effectively as context in a subsequent prompt to an OpenAI model.
| Parameter | Type | Description |
| --- | --- | --- |
| Connection | Connection | Select the OpenAI connection you created. |
| Query | Text | The topic or question you want to research. |
| Limit | Number | The maximum number of search results to return. Defaults to 5. |
| Output | Type | Description |
| --- | --- | --- |
| Results | Array | An array of objects, where each object contains the `title`, `link`, and `snippet` of the search result. |
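To make the output concrete, here is a sketch of what the `Results` array might look like and how you could flatten it into a context string for a prompt. The sample values and the exact payload shape are illustrative; only the `title`, `link`, and `snippet` fields come from the table above:

```python
# Illustrative shape of the Results output (field names from the table
# above; the sample values and exact payload from AgenticFlow may differ).
results = [
    {
        "title": "Apple Event Recap",
        "link": "https://example.com/apple-event",
        "snippet": "Apple announced new hardware and software updates...",
    },
    {
        "title": "Key Takeaways From the Keynote",
        "link": "https://example.com/keynote-takeaways",
        "snippet": "The keynote focused on AI features across the lineup...",
    },
]


def results_to_context(results: list[dict]) -> str:
    """Flatten search results into a numbered context block for a prompt."""
    lines = []
    for i, r in enumerate(results, start=1):
        lines.append(f"[{i}] {r['title']} ({r['link']})\n{r['snippet']}")
    return "\n\n".join(lines)


print(results_to_context(results))
```

Numbering the results lets the model cite which source a claim came from.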
Let's build a workflow to answer a question about a very recent event, ensuring the LLM has the latest information.
1. **Get the Question:** The workflow starts with a Text Input action where you ask: "Summarize the key announcements from Apple's latest event."
2. **Configure the OpenAI Search Action:**
   - **Connection:** Select your OpenAI connection.
   - **Query:** `{{text_input_action.output}}`
   - **Limit:** `3` (to get the top 3 most relevant articles/summaries).
3. **Provide Context to the LLM Action:**
   - The `Results` output will be an array of search results.
   - Connect this to an LLM Action.
   - Set the LLM's prompt to include both the question and the search results as context.
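The exact prompt text isn't reproduced here; a hypothetical prompt along these lines would work (the `openai_search_action.results` variable name is an assumption — use whatever your search step's output is named in your workflow):

```
Using the web search results below as context, answer the user's question.
If the results do not contain the answer, say so.

Question: {{text_input_action.output}}

Search results:
{{openai_search_action.results}}
```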
This RAG (Retrieval-Augmented Generation) pattern ensures the LLM isn't relying solely on its training data, which might be outdated. It's using fresh information from the web to provide a current and accurate answer.
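AgenticFlow wires these steps together for you, but the pattern itself is simple. A minimal sketch of what happens under the hood, assuming the result shape described above and using OpenAI's documented Chat Completions endpoint (the model name is a placeholder — use any chat model your key can access):

```python
import json
import urllib.request


def build_rag_prompt(question: str, results: list[dict]) -> str:
    """Combine the user's question with search snippets (the RAG step)."""
    context = "\n\n".join(
        f"- {r['title']}: {r['snippet']} (source: {r['link']})" for r in results
    )
    return (
        "Answer the question using only the web context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )


def ask_with_context(api_key: str, question: str, results: list[dict]) -> str:
    """Send the grounded prompt to the Chat Completions API."""
    body = json.dumps({
        "model": "gpt-4o-mini",  # placeholder: any chat model you have access to
        "messages": [
            {"role": "user", "content": build_rag_prompt(question, results)}
        ],
    }).encode()
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The important design choice is that the prompt instructs the model to answer *only* from the supplied context, which is what makes the answer current rather than a recall of stale training data.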