{"id":91725,"date":"2025-10-21T08:21:09","date_gmt":"2025-10-21T08:21:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/91725\/"},"modified":"2025-10-21T08:21:09","modified_gmt":"2025-10-21T08:21:09","slug":"how-to-build-an-ai-agent-with-function-calling-and-gpt-5","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/91725\/","title":{"rendered":"How to Build An AI Agent with Function Calling and GPT-5"},"content":{"rendered":"<p> and Large Language Models (LLMs)<\/p>\n<p class=\"wp-block-paragraph\">Large language models (LLMs)\u00a0are advanced AI systems built on deep neural network such as transformers and trained on vast amounts of text to generate human-like language. LLMs like ChatGPT, Claude, Gemini and Grok can tackle many challenging tasks and are used across fields such as science, healthcare, education, and finance.<\/p>\n<p class=\"wp-block-paragraph\">An AI agent extends the capabilites of LLMs to solve tasks that are beyond their pre-trained knowledge. An LLM can write a Python tutorial from what it learned during training. If you ask it to book a flight, the task requires access to your calendar, web search and the ability to take actions, these fall beyond the LLM\u2019s pre-trained knowledge. 
Some of the common actions include:<\/p>\n<p>Weather forecast:\u00a0The LLM connects to a web search tool to fetch the latest weather forecast.<\/p>\n<p>Booking agent:\u00a0An AI agent that can check a user\u2019s calendar, search the web and visit a booking site like Expedia to find available options for flights and hotels, present them to the user for confirmation, and complete the booking on behalf of the user.<\/p>\n<p>How an AI Agent Works<\/p>\n<p class=\"wp-block-paragraph\">An AI agent is a system that uses a large language model to plan, reason, and take steps to interact with its environment, using tools suggested by the model\u2019s reasoning to solve a particular task.<\/p>\n<p>Basic Structure of an AI Agent<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/nz\/wp-content\/uploads\/2025\/10\/agent.webp.webp\" alt=\"\" class=\"wp-image-624142\"\/>Image Generated By Gemini<\/p>\n<p>A Large Language Model (LLM):\u00a0the LLM is the\u00a0brain\u00a0of an AI agent. It takes a user\u2019s prompt, plans and reasons through the request, and breaks the problem into steps that determine which tools it should use to complete the task.<\/p>\n<p>A tool\u00a0is the framework that the agent uses to perform an action based on the plan and reasoning from the Large Language Model. If you ask an LLM to book a table for you at a restaurant, possible tools include a calendar tool to check your availability and a web search tool to access the restaurant website and make a reservation for you.<\/p>\n<p>Illustrated Decision Making of a Booking AI Agent<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/nz\/wp-content\/uploads\/2025\/10\/booking-1024x683.webp.webp\" alt=\"\" class=\"wp-image-624143\"\/>Image Generated By ChatGPT<\/p>\n<p class=\"wp-block-paragraph\">AI agents can access different tools depending on the task. A tool might be a data store, such as a database. 
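In code, a tool is usually just a plain function registered under a name the model can request. A minimal, hypothetical sketch of the idea (the function name and the calendar data below are illustrative placeholders, not from this article):

```python
# A minimal, hypothetical sketch of a "tool": a plain function the agent can
# invoke by name. The calendar data below is a placeholder, not a real API.

def check_calendar(date: str) -> dict:
    # A real implementation would query a calendar service here.
    return {"date": date, "free_slots": ["19:00", "20:30"]}

# The agent keeps a registry mapping tool names to functions.
tools = {"check_calendar": check_calendar}

# When the LLM's reasoning selects a tool, the agent dispatches by name.
result = tools["check_calendar"]("2025-10-21")
print(result["free_slots"])  # prints: ['19:00', '20:30']
```

A real tool would call an actual API or database; the shape of the idea is the same.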
For example, a customer-support agent could access a customer\u2019s account details and purchase history and decide when to retrieve that information to help resolve an issue.<\/p>\n<p class=\"wp-block-paragraph\">AI agents are used to solve a wide range of tasks, and there are many powerful agents available. Coding agents, particularly agentic IDEs such as Cursor, Windsurf, and GitHub Copilot, help engineers write and debug code faster and build projects quickly. CLI coding agents like Claude Code and Codex CLI can interact with a user\u2019s desktop and terminal to carry out coding tasks. ChatGPT supports agents that can perform actions such as booking reservations on a user\u2019s behalf. Agents are also integrated into customer support workflows to communicate with customers and resolve their issues.<\/p>\n<p>Function Calling<\/p>\n<p class=\"wp-block-paragraph\">Function calling is a technique for connecting a large language model (LLM) to external tools such as APIs or databases. It is used when creating AI agents to connect LLMs to tools. In function calling, each tool is defined as a code function (for example, a weather API to fetch the latest forecast) along with a JSON Schema that specifies the function\u2019s parameters and instructs the LLM on when and how to call the function for a given task.<\/p>\n<p class=\"wp-block-paragraph\">The type of function defined depends on the task the agent is designed to perform. 
For example, for a customer support agent we can define a function that can extract information from unstructured data, such as PDFs containing details about a business\u2019s products.<\/p>\n<p class=\"wp-block-paragraph\">In this post I will demonstrate how to use function calling to build a simple web search agent using GPT-5 as the large language model.<\/p>\n<p>Basic Structure of a Web Search Agent<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/nz\/wp-content\/uploads\/2025\/10\/functioncall.webp.webp\" alt=\"\" class=\"wp-image-624144\"\/>Image Generated By Gemini<\/p>\n<p>The main logic behind the web search agent:<\/p>\n<p>Define a code function to handle the web search.<\/p>\n<p>Define custom instructions that guide the large language model in determining when to call the web search function based on the query. For example, if the query asks about the current weather, the web search agent will recognize the need to search the internet to get the latest weather reports. 
However, if the query asks it to write a tutorial about a programming language like Python, something it can answer from its pre-trained knowledge, it will not call the web search function and will respond directly instead.<\/p>\n<p>Prerequisites<\/p>\n<p class=\"wp-block-paragraph\">Create an OpenAI account and generate an API key<br \/>1:\u00a0Create an\u00a0<a href=\"https:\/\/auth.openai.com\/create-account\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI Account<\/a>\u00a0if you don\u2019t have one<br \/>2:\u00a0<a href=\"https:\/\/platform.openai.com\/account\/api-keys\" rel=\"nofollow noopener\" target=\"_blank\">Generate an API Key<\/a><\/p>\n<p class=\"wp-block-paragraph\">Set up and Activate Environment<\/p>\n<p>python3 -m venv env<br \/>\nsource env\/bin\/activate<\/p>\n<p class=\"wp-block-paragraph\">Export OpenAI API Key<\/p>\n<p>export OPENAI_API_KEY=&#8221;Your OpenAI API Key&#8221;<\/p>\n<p class=\"wp-block-paragraph\">Set up Tavily for Web Search<br \/>Tavily is a specialized web-search tool for AI agents. Create an account on\u00a0<a href=\"https:\/\/www.tavily.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Tavily.com<\/a>, and once your profile is set up, an API key will be generated that you can copy into your environment. 
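Once both API keys are exported, a quick optional check (not part of the original setup, just a convenience) confirms that Python can actually see them before you run the agent:

```python
# Optional sanity check (not part of the original setup): confirm the
# exported API keys are visible to Python before running the agent.
import os

def missing_keys() -> list:
    """Return the names of required API keys not set in the environment."""
    required = ("OPENAI_API_KEY", "TAVILY_API_KEY")
    return [key for key in required if not os.getenv(key)]

if missing_keys():
    print("Set these before running the agent:", ", ".join(missing_keys()))
```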
New accounts receive 1000 free credits that can be used for up to 1000 web searches.<\/p>\n<p class=\"wp-block-paragraph\">Export TAVILY API Key<\/p>\n<p>export TAVILY_API_KEY=&#8221;Your Tavily API Key&#8221;<\/p>\n<p class=\"wp-block-paragraph\">Install Packages<\/p>\n<p>pip3 install openai<br \/>\npip3 install tavily-python<\/p>\n<p>Building a Web Search Agent with Function Calling Step by Step<\/p>\n<p>Step 1: Create Web Search Function with Tavily<\/p>\n<p class=\"wp-block-paragraph\">A web search function is implemented using Tavily, serving as the tool for function calling in the web search agent.<\/p>\n<p>from tavily import TavilyClient<br \/>\nimport os<\/p>\n<p>tavily = TavilyClient(api_key=os.getenv(&#8220;TAVILY_API_KEY&#8221;))<\/p>\n<p>def web_search(query: str, num_results: int = 10):<br \/>\n    try:<br \/>\n        result = tavily.search(<br \/>\n            query=query,<br \/>\n            search_depth=&#8221;basic&#8221;,<br \/>\n            max_results=num_results,<br \/>\n            include_answer=False,<br \/>\n            include_raw_content=False,<br \/>\n            include_images=False<br \/>\n        )<\/p>\n<p>        results = result.get(&#8220;results&#8221;, [])<\/p>\n<p>        return {<br \/>\n            &#8220;query&#8221;: query,<br \/>\n            &#8220;results&#8221;: results,<br \/>\n            &#8220;sources&#8221;: [<br \/>\n                {&#8220;title&#8221;: r.get(&#8220;title&#8221;, &#8220;&#8221;), &#8220;url&#8221;: r.get(&#8220;url&#8221;, &#8220;&#8221;)}<br \/>\n                for r in results<br \/>\n            ]<br \/>\n        }<\/p>\n<p>    except Exception as e:<br \/>\n        return {<br \/>\n            &#8220;error&#8221;: f&#8221;Search error: {e}&#8221;,<br \/>\n            &#8220;query&#8221;: query,<br \/>\n            &#8220;results&#8221;: [],<br \/>\n            &#8220;sources&#8221;: [],<br \/>\n        }<\/p>\n<p class=\"wp-block-paragraph\">Web function code breakdown<\/p>\n<p 
class=\"wp-block-paragraph\">Tavily is initialized with its API key. In the\u00a0web_search\u00a0function, the following steps are performed:<\/p>\n<p>The Tavily search function is called to search the internet and retrieve the top 10 results.<\/p>\n<p>The search results and their corresponding sources are returned.<\/p>\n<p class=\"wp-block-paragraph\">This returned output will serve as relevant context for the web search agent, which we will define later in this article, to fetch up-to-date information for queries (prompts) that require real-time data such as weather forecasts.<\/p>\n<p>Step 2: Create Tool Schema<\/p>\n<p class=\"wp-block-paragraph\">The tool schema gives the AI model custom instructions on when it should call a tool, in this case the\u00a0web_search\u00a0function. It also specifies the conditions and actions to be taken when the model calls a tool. A JSON tool schema is defined below based on the\u00a0<a href=\"https:\/\/platform.openai.com\/docs\/guides\/structured-outputs?context=with_parse#supported-schemas\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI tool schema structure<\/a>.<\/p>\n<p>tool_schema = [<br \/>\n    {<br \/>\n        &#8220;type&#8221;: &#8220;function&#8221;,<br \/>\n        &#8220;name&#8221;: &#8220;web_search&#8221;,<\/p>\n<p>        &#8220;description&#8221;: &#8220;&#8221;&#8221;Execute a web search to fetch up to date information. 
Synthesize a concise,<br \/>\n        self-contained answer from the content of the results of the visited pages.<br \/>\n        Fetch pages, extract text, and provide the best available result while citing 1-3 sources (title + URL).<br \/>\n        If sources conflict, surface the uncertainty and prefer the most recent evidence.<br \/>\n        &#8220;&#8221;&#8221;,<\/p>\n<p>        &#8220;strict&#8221;: True,<br \/>\n        &#8220;parameters&#8221;: {<br \/>\n            &#8220;type&#8221;: &#8220;object&#8221;,<br \/>\n            &#8220;properties&#8221;: {<br \/>\n                &#8220;query&#8221;: {<br \/>\n                    &#8220;type&#8221;: &#8220;string&#8221;,<br \/>\n                    &#8220;description&#8221;: &#8220;Query to be searched on the web.&#8221;,<br \/>\n                },<br \/>\n            },<br \/>\n            &#8220;required&#8221;: [&#8220;query&#8221;],<br \/>\n            &#8220;additionalProperties&#8221;: False<br \/>\n        },<br \/>\n    },<br \/>\n]<\/p>\n<p>Tool Schema\u2019s Properties<\/p>\n<p>type:\u00a0Specifies that the type of tool is a function.<\/p>\n<p>name:\u00a0The name of the function that will be used for the tool call, which is\u00a0web_search.<\/p>\n<p>description:\u00a0Describes what the AI model should do when calling the web search tool. It instructs the model to search the internet using the\u00a0web_search\u00a0function to fetch up-to-date information and extract relevant details to generate the best response.<\/p>\n<p>strict:\u00a0Set to true; this property instructs the LLM to strictly follow the tool schema\u2019s instructions.<\/p>\n<p>parameters:\u00a0Defines the parameters that will be passed into the\u00a0web_search\u00a0function. 
In this case, there is only one parameter:\u00a0query, which represents the search term to look up on the internet.<\/p>\n<p>required:\u00a0Instructs the LLM that query is a mandatory parameter for the\u00a0web_search\u00a0function.<\/p>\n<p>additionalProperties:\u00a0Set to false, meaning that the tool\u2019s\u00a0arguments object\u00a0cannot include any parameters other than those defined under\u00a0parameters.properties.<\/p>\n<p>Step 3: Create the Web Search Agent Using GPT-5 and Function Calling<\/p>\n<p class=\"wp-block-paragraph\">Finally, I will build an agent that we can chat with, which can search the web when it needs up-to-date information. I will use\u00a0GPT-5-mini, a fast and accurate model from OpenAI, along with function calling to invoke the\u00a0tool schema\u00a0and the\u00a0web search function\u00a0already defined.<\/p>\n<p>from datetime import datetime, timezone<br \/>\nimport json<br \/>\nfrom openai import OpenAI<br \/>\nimport os <\/p>\n<p>client = OpenAI(api_key=os.getenv(&#8220;OPENAI_API_KEY&#8221;))<\/p>\n<p># tracker for the last model&#8217;s response id to maintain conversation&#8217;s state<br \/>\nprev_response_id = None<\/p>\n<p># a list for storing tool&#8217;s results from the function call<br \/>\ntool_results = []<\/p>\n<p>while True:<br \/>\n    # if the tool results is empty prompt message<br \/>\n    if len(tool_results) == 0:<br \/>\n        user_message = input(&#8220;User: &#8220;)<\/p>\n<p>        &#8220;&#8221;&#8221; commands for exiting chat &#8220;&#8221;&#8221;<br \/>\n        if isinstance(user_message, str) and user_message.strip().lower() in {&#8220;exit&#8221;, &#8220;q&#8221;}:<br \/>\n            print(&#8220;Exiting chat. 
Goodbye!&#8221;)<br \/>\n            break<\/p>\n<p>    else:<br \/>\n        user_message = tool_results.copy()<\/p>\n<p>        # clear the tool results for the next call<br \/>\n        tool_results = []<\/p>\n<p>    # obtain current&#8217;s date to be passed into the model as an instruction to assist in decision making<br \/>\n    today_date = datetime.now(timezone.utc).date().isoformat()     <\/p>\n<p>    response = client.responses.create(<br \/>\n        model = &#8220;gpt-5-mini&#8221;,<br \/>\n        input = user_message,<br \/>\n        instructions=f&#8221;Current date is {today_date}.&#8221;,<br \/>\n        tools = tool_schema,<br \/>\n        previous_response_id=prev_response_id,<br \/>\n        text = {&#8220;verbosity&#8221;: &#8220;low&#8221;},<br \/>\n        reasoning={<br \/>\n            &#8220;effort&#8221;: &#8220;low&#8221;,<br \/>\n        },<br \/>\n        store=True,<br \/>\n        )<\/p>\n<p>    prev_response_id = response.id<\/p>\n<p>    # Handles model response&#8217;s output<br \/>\n    for output in response.output:<\/p>\n<p>        if output.type == &#8220;reasoning&#8221;:<br \/>\n            print(&#8220;Assistant: &#8220;,&#8221;Reasoning &#8230;.&#8221;)<\/p>\n<p>            for reasoning_summary in output.summary:<br \/>\n                print(&#8220;Assistant: &#8220;,reasoning_summary)<\/p>\n<p>        elif output.type == &#8220;message&#8221;:<br \/>\n            for item in output.content:<br \/>\n                print(&#8220;Assistant: &#8220;,item.text)<\/p>\n<p>        elif output.type == &#8220;function_call&#8221;:<br \/>\n            # obtain function name<br \/>\n            function_name = globals().get(output.name)<br \/>\n            # loads function arguments<br \/>\n            args = json.loads(output.arguments)<br \/>\n            function_response = function_name(**args)<br \/>\n            tool_results.append(<br \/>\n                {<br \/>\n                    &#8220;type&#8221;: 
&#8220;function_call_output&#8221;,<br \/>\n                    &#8220;call_id&#8221;: output.call_id,<br \/>\n                    &#8220;output&#8221;: json.dumps(function_response)<br \/>\n                }<br \/>\n            )<\/p>\n<p class=\"wp-block-paragraph\">Step by Step Code Breakdown<\/p>\n<p>from openai import OpenAI<br \/>\nimport os <\/p>\n<p>client = OpenAI(api_key=os.getenv(&#8220;OPENAI_API_KEY&#8221;))<br \/>\nprev_response_id = None<br \/>\ntool_results = []<\/p>\n<p>Initializes the OpenAI client with an API key.<\/p>\n<p>Initializes two variables,\u00a0prev_response_id\u00a0and\u00a0tool_results.\u00a0prev_response_id\u00a0keeps track of the model\u2019s response to maintain conversation state, and\u00a0tool_results\u00a0is a list that stores outputs returned from the\u00a0web_search\u00a0function call.<\/p>\n<p class=\"wp-block-paragraph\">The chat runs inside the\u00a0loop. A user enters a message; the model, called with the tool schema, accepts the message, reasons over it, and decides whether to call the web search tool; the tool\u2019s output is then passed back to the model. The model generates a context-aware response. This continues until the user exits the chat.<\/p>\n<p class=\"wp-block-paragraph\">Code Walkthrough of the Loop<\/p>\n<p>if len(tool_results) == 0:<br \/>\n    user_message = input(&#8220;User: &#8220;)<br \/>\n    if isinstance(user_message, str) and user_message.strip().lower() in {&#8220;exit&#8221;, &#8220;q&#8221;}:<br \/>\n        print(&#8220;Exiting chat. 
Goodbye!&#8221;)<br \/>\n        break<\/p>\n<p>else:<br \/>\n    user_message = tool_results.copy()<br \/>\n    tool_results = []<\/p>\n<p>today_date = datetime.now(timezone.utc).date().isoformat()     <\/p>\n<p>response = client.responses.create(<br \/>\n    model = &#8220;gpt-5-mini&#8221;,<br \/>\n    input = user_message,<br \/>\n    instructions=f&#8221;Current date is {today_date}.&#8221;,<br \/>\n    tools = tool_schema,<br \/>\n    previous_response_id=prev_response_id,<br \/>\n    text = {&#8220;verbosity&#8221;: &#8220;low&#8221;},<br \/>\n    reasoning={<br \/>\n        &#8220;effort&#8221;: &#8220;low&#8221;,<br \/>\n    },<br \/>\n    store=True,<br \/>\n    )<\/p>\n<p>prev_response_id = response.id<\/p>\n<p>Checks if\u00a0tool_results\u00a0is empty. If it is, the user will be prompted to type in a message, with an option to quit using\u00a0exit\u00a0or\u00a0q.<\/p>\n<p>If\u00a0tool_results\u00a0is not empty,\u00a0user_message\u00a0will be set to the collected tool outputs to be sent to the model.\u00a0tool_results\u00a0is cleared to avoid resending the same\u00a0tool outputs\u00a0on the next loop iteration.<\/p>\n<p>The current date (today_date) is obtained to be used by the model to make time-aware decisions.<\/p>\n<p>Calls\u00a0client.responses.create\u00a0to generate the model\u2019s response; it accepts the following parameters:<\/p>\n<p>model: set to\u00a0gpt-5-mini.<\/p>\n<p>input: accepts the user\u2019s message.<\/p>\n<p>instructions: set to the current date (today_date).<\/p>\n<p>tools: set to the tool schema that was defined earlier.<\/p>\n<p>previous_response_id: set to the previous response\u2019s id so the model can maintain conversation state.<\/p>\n<p>text: verbosity is set to low to keep the model\u2019s response concise.<\/p>\n<p>reasoning: GPT-5-mini is a reasoning model; set the reasoning effort to low for a faster response. 
For more complex tasks, we can set it to high.<\/p>\n<p>store: tells the model to store the current response so it can be retrieved later, which helps with conversation continuity.<\/p>\n<p>prev_response_id\u00a0is set to the current response\u2019s id so the next function call can thread onto the same conversation.<\/p>\n<p>for output in response.output:<br \/>\n    if output.type == &#8220;reasoning&#8221;:<br \/>\n        print(&#8220;Assistant: &#8220;,&#8221;Reasoning &#8230;.&#8221;)<\/p>\n<p>        for reasoning_summary in output.summary:<br \/>\n            print(&#8220;Assistant: &#8220;,reasoning_summary)<\/p>\n<p>    elif output.type == &#8220;message&#8221;:<br \/>\n        for item in output.content:<br \/>\n            print(&#8220;Assistant: &#8220;,item.text)<\/p>\n<p>    elif output.type == &#8220;function_call&#8221;:<br \/>\n        # obtain function name<br \/>\n        function_name = globals().get(output.name)<br \/>\n        # loads function arguments<br \/>\n        args = json.loads(output.arguments)<br \/>\n        function_response = function_name(**args)<br \/>\n        # append tool results list with the function call&#8217;s id and function&#8217;s response<br \/>\n        tool_results.append(<br \/>\n            {<br \/>\n                &#8220;type&#8221;: &#8220;function_call_output&#8221;,<br \/>\n                &#8220;call_id&#8221;: output.call_id,<br \/>\n                &#8220;output&#8221;: json.dumps(function_response)<br \/>\n            }<br \/>\n        )<\/p>\n<p class=\"wp-block-paragraph\">This processes the model\u2019s response output and does the following:<\/p>\n<p>If the output type is reasoning, print each item in the reasoning summary.<\/p>\n<p>If the output type is message, iterate through the content and print each text item.<\/p>\n<p>If the output type is a function call, obtain the function\u2019s name, parse its arguments, and pass them to the function (web_search) to generate a response. 
In this case, the web search response contains up-to-date information relevant to the user\u2019s message. Finally, it appends the function call\u2019s response and the function call id to\u00a0tool_results. This lets the next loop iteration send the tool result back to the model.<\/p>\n<p>Full Code for the Web Search Agent<\/p>\n<p>from datetime import datetime, timezone<br \/>\nimport json<br \/>\nfrom openai import OpenAI<br \/>\nimport os<br \/>\nfrom tavily import TavilyClient<\/p>\n<p>tavily = TavilyClient(api_key=os.getenv(&#8220;TAVILY_API_KEY&#8221;))<\/p>\n<p>def web_search(query: str, num_results: int = 10):<br \/>\n    try:<br \/>\n        result = tavily.search(<br \/>\n            query=query,<br \/>\n            search_depth=&#8221;basic&#8221;,<br \/>\n            max_results=num_results,<br \/>\n            include_answer=False,<br \/>\n            include_raw_content=False,<br \/>\n            include_images=False<br \/>\n        )<\/p>\n<p>        results = result.get(&#8220;results&#8221;, [])<\/p>\n<p>        return {<br \/>\n            &#8220;query&#8221;: query,<br \/>\n            &#8220;results&#8221;: results,<br \/>\n            &#8220;sources&#8221;: [<br \/>\n                {&#8220;title&#8221;: r.get(&#8220;title&#8221;, &#8220;&#8221;), &#8220;url&#8221;: r.get(&#8220;url&#8221;, &#8220;&#8221;)}<br \/>\n                for r in results<br \/>\n            ]<br \/>\n        }<\/p>\n<p>    except Exception as e:<br \/>\n        return {<br \/>\n            &#8220;error&#8221;: f&#8221;Search error: {e}&#8221;,<br \/>\n            &#8220;query&#8221;: query,<br \/>\n            &#8220;results&#8221;: [],<br \/>\n            &#8220;sources&#8221;: [],<br \/>\n        }<\/p>\n<p>tool_schema = [<br \/>\n    {<br \/>\n        &#8220;type&#8221;: &#8220;function&#8221;,<br \/>\n        &#8220;name&#8221;: &#8220;web_search&#8221;,<br \/>\n        &#8220;description&#8221;: &#8220;&#8221;&#8221;Execute a web search to fetch up to date information. 
Synthesize a concise,<br \/>\n        self-contained answer from the content of the results of the visited pages.<br \/>\n        Fetch pages, extract text, and provide the best available result while citing 1-3 sources (title + URL).<br \/>\n        If sources conflict, surface the uncertainty and prefer the most recent evidence.<br \/>\n        &#8220;&#8221;&#8221;,<br \/>\n        &#8220;strict&#8221;: True,<br \/>\n        &#8220;parameters&#8221;: {<br \/>\n            &#8220;type&#8221;: &#8220;object&#8221;,<br \/>\n            &#8220;properties&#8221;: {<br \/>\n                &#8220;query&#8221;: {<br \/>\n                    &#8220;type&#8221;: &#8220;string&#8221;,<br \/>\n                    &#8220;description&#8221;: &#8220;Query to be searched on the web.&#8221;,<br \/>\n                },<br \/>\n            },<br \/>\n            &#8220;required&#8221;: [&#8220;query&#8221;],<br \/>\n            &#8220;additionalProperties&#8221;: False<br \/>\n        },<br \/>\n    },<br \/>\n]<\/p>\n<p>client = OpenAI(api_key=os.getenv(&#8220;OPENAI_API_KEY&#8221;))<\/p>\n<p># tracker for the last model&#8217;s response id to maintain conversation&#8217;s state<br \/>\nprev_response_id = None<\/p>\n<p># a list for storing tool&#8217;s results from the function call<br \/>\ntool_results = []<\/p>\n<p>while True:<br \/>\n    # if the tool results is empty prompt message<br \/>\n    if len(tool_results) == 0:<br \/>\n        user_message = input(&#8220;User: &#8220;)<\/p>\n<p>        &#8220;&#8221;&#8221; commands for exiting chat &#8220;&#8221;&#8221;<br \/>\n        if isinstance(user_message, str) and user_message.strip().lower() in {&#8220;exit&#8221;, &#8220;q&#8221;}:<br \/>\n            print(&#8220;Exiting chat. 
Goodbye!&#8221;)<br \/>\n            break<\/p>\n<p>    else:<br \/>\n        # set the user&#8217;s messages to the tool results to be sent to the model<br \/>\n        user_message = tool_results.copy()<\/p>\n<p>        # clear the tool results for the next call<br \/>\n        tool_results = []<\/p>\n<p>    # obtain current&#8217;s date to be passed into the model as an instruction to assist in decision making<br \/>\n    today_date = datetime.now(timezone.utc).date().isoformat()     <\/p>\n<p>    response = client.responses.create(<br \/>\n        model = &#8220;gpt-5-mini&#8221;,<br \/>\n        input = user_message,<br \/>\n        instructions=f&#8221;Current date is {today_date}.&#8221;,<br \/>\n        tools = tool_schema,<br \/>\n        previous_response_id=prev_response_id,<br \/>\n        text = {&#8220;verbosity&#8221;: &#8220;low&#8221;},<br \/>\n        reasoning={<br \/>\n            &#8220;effort&#8221;: &#8220;low&#8221;,<br \/>\n        },<br \/>\n        store=True,<br \/>\n        )<\/p>\n<p>    prev_response_id = response.id<\/p>\n<p>    # Handles model response&#8217;s output<br \/>\n    for output in response.output:<\/p>\n<p>        if output.type == &#8220;reasoning&#8221;:<br \/>\n            print(&#8220;Assistant: &#8220;,&#8221;Reasoning &#8230;.&#8221;)<\/p>\n<p>            for reasoning_summary in output.summary:<br \/>\n                print(&#8220;Assistant: &#8220;,reasoning_summary)<\/p>\n<p>        elif output.type == &#8220;message&#8221;:<br \/>\n            for item in output.content:<br \/>\n                print(&#8220;Assistant: &#8220;,item.text)<\/p>\n<p>        # checks if the output type is a function call and append the function call&#8217;s results to the tool results list<br \/>\n        elif output.type == &#8220;function_call&#8221;:<br \/>\n            # obtain function name<br \/>\n            function_name = globals().get(output.name)<br \/>\n            # loads function arguments<br \/>\n            args = 
json.loads(output.arguments)<br \/>\n            function_response = function_name(**args)<br \/>\n            # append tool results list with the function call&#8217;s id and function&#8217;s response<br \/>\n            tool_results.append(<br \/>\n                {<br \/>\n                    &#8220;type&#8221;: &#8220;function_call_output&#8221;,<br \/>\n                    &#8220;call_id&#8221;: output.call_id,<br \/>\n                    &#8220;output&#8221;: json.dumps(function_response)<br \/>\n                }<br \/>\n            )<\/p>\n<p class=\"wp-block-paragraph\">When you run the code, you can easily chat with the agent to ask questions that require the latest information, such as the current weather or the latest product releases. The agent responds with up-to-date information along with the corresponding sources from the internet. Below is a sample output from the terminal.<\/p>\n<p>User: What is the weather like in London today?<br \/>\nAssistant:  Reasoning &#8230;.<br \/>\nAssistant:  Reasoning &#8230;.<br \/>\nAssistant:  Right now in London: overcast, about 18\u00b0C (64\u00b0F), humidity ~88%, light SW wind ~16 km\/h, no precipitation reported. Source: WeatherAPI (current conditions) \u2014 https:\/\/www.weatherapi.com\/<\/p>\n<p>User: What is the latest iPhone model?<br \/>\nAssistant:  Reasoning &#8230;.<br \/>\nAssistant:  Reasoning &#8230;.<br \/>\nAssistant:  The latest iPhone models are the iPhone 17 lineup (including iPhone 17, iPhone 17 Pro, iPhone 17 Pro Max) and the new iPhone Air \u2014 announced by Apple on Sept 9, 2025. Source: Apple Newsroom \u2014 https:\/\/www.apple.com\/newsroom\/2025\/09\/apple-debuts-iphone-17\/<\/p>\n<p>User: Multiply 500 by 12.<br \/>\nAssistant:  Reasoning &#8230;.<br \/>\nAssistant:  6000<br \/>\nUser: exit<br \/>\nExiting chat. Goodbye!<\/p>\n<p class=\"wp-block-paragraph\">You can see the results with their corresponding web sources. 
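One robustness note on the loop above: it resolves tool names with globals().get(output.name), which returns None if the model ever requests a tool that is not defined, and the subsequent call then raises a TypeError. A hedged sketch of one defensive alternative, an explicit tool registry (the web_search here is a stub standing in for the Tavily-backed version, not the article's code):

```python
# Sketch (not the article's implementation): dispatch model tool calls through
# an explicit registry instead of globals(), so unknown names fail gracefully.
import json

def web_search(query: str, num_results: int = 10):
    # Stub standing in for the Tavily-backed function defined earlier.
    return {"query": query, "results": [], "sources": []}

# Only functions listed here can ever be executed by a model tool call.
TOOL_REGISTRY = {"web_search": web_search}

def dispatch_tool(name: str, arguments: str) -> dict:
    """Safely execute a model-requested tool call and always return a dict."""
    func = TOOL_REGISTRY.get(name)
    if func is None:
        return {"error": f"Unknown tool: {name}"}
    try:
        return func(**json.loads(arguments))
    except Exception as e:  # malformed arguments or tool failure
        return {"error": f"Tool error: {e}"}

print(dispatch_tool("web_search", '{"query": "London weather"}')["query"])  # prints: London weather
print(dispatch_tool("delete_files", "{}"))  # prints: {'error': 'Unknown tool: delete_files'}
```

In the loop, the globals() lookup and call could then be replaced by dispatch_tool(output.name, output.arguments), with the returned dict serialized into the function_call_output as before.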
When you ask it to perform a task that doesn\u2019t require up-to-date information, such as maths calculations or writing code, the agent responds directly without any web search.<\/p>\n<p class=\"wp-block-paragraph\">Note: The web search agent is a simple, single-tool agent. Advanced agentic systems orchestrate multiple specialized tools and use efficient memory to maintain context, plan, and solve more complex tasks.<\/p>\n<p>Conclusion<\/p>\n<p class=\"wp-block-paragraph\">In this post, I explained how an AI agent works and how it extends the capabilities of a large language model to interact with its environment, perform actions, and solve tasks through the use of tools. I also explained function calling and how it enables LLMs to call tools. I demonstrated how to create a tool schema for function calling that defines when and how an LLM should call a tool to perform an action. I defined a web search function using Tavily to fetch information from the web and then showed step by step how to build a web search agent using function calling and GPT-5-mini as the LLM. In the end, we built a web search agent capable of retrieving up-to-date information from the internet to answer user queries.<\/p>\n<p class=\"wp-block-paragraph\">Check out my GitHub repo,\u00a0<a href=\"https:\/\/github.com\/ayoolaolafenwa\/GenAI-Courses\/tree\/main\" rel=\"nofollow noopener\" target=\"_blank\">GenAI-Courses<\/a>,\u00a0where I have published more courses on various Generative AI topics. 
It also includes a guide on building an\u00a0<a href=\"https:\/\/github.com\/ayoolaolafenwa\/GenAI-Courses\/blob\/main\/AI%20Agents\/Agentic_RAG.md\" rel=\"nofollow noopener\" target=\"_blank\">Agentic RAG using function calling.<\/a><\/p>\n<p>Reach out to me via:<\/p>\n<p class=\"wp-block-paragraph\">Email:\u00a0<a href=\"https:\/\/mail.google.com\/mail\/u\/0\/#inbox\" rel=\"nofollow noopener\" target=\"_blank\">[email\u00a0protected]<\/a><\/p>\n<p class=\"wp-block-paragraph\">Linkedin:\u00a0<a href=\"https:\/\/www.linkedin.com\/in\/ayoola-olafenwa-003b901a9\/\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/www.linkedin.com\/in\/ayoola-olafenwa-003b901a9\/<\/a><\/p>\n<p>References<\/p>\n<p class=\"wp-block-paragraph\"><a href=\"https:\/\/platform.openai.com\/docs\/guides\/function-calling?api-mode=responses\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/platform.openai.com\/docs\/guides\/function-calling?api-mode=responses<\/a><\/p>\n<p class=\"wp-block-paragraph\"><a href=\"https:\/\/docs.tavily.com\/documentation\/api-reference\/endpoint\/search\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/docs.tavily.com\/documentation\/api-reference\/endpoint\/search<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"and Large Language Models (LLMs) Large language models (LLMs)\u00a0are advanced AI systems built on deep neural network 
such&hellip;\n","protected":false},"author":2,"featured_media":91726,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[365,618,363,364,8950,6033,2489,111,139,69,145],"class_list":{"0":"post-91725","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-ai-agents","10":"tag-artificial-intelligence","11":"tag-artificialintelligence","12":"tag-deep-dives","13":"tag-llm","14":"tag-machine-learning","15":"tag-new-zealand","16":"tag-newzealand","17":"tag-nz","18":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/91725","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/comments?post=91725"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/91725\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media\/91726"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media?parent=91725"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/categories?post=91725"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/tags?post=91725"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}