An Overview of Function Calling and Its Implications for Building LLM Apps


Function calling is an innovation from OpenAI that has expanded the possibilities for building apps around large language models (LLMs).
However, I have found that it remains misunderstood by some. In this article, I aim to clarify function calling in the time it takes you to make a cup of coffee.
If you have aspirations to build LLM apps, integrate LLMs into your business, or simply expand your knowledge in this area, then this article is for you.
Function calling allows us to develop natural language interfaces atop our existing APIs. If this sounds confusing to you, don’t worry — the details will become clearer as you read on.
So, what does a natural language API look like? I believe it’s best to demonstrate this diagrammatically. Here’s an example app that uses function calling to enable users to find flights.
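Concretely, a flight-search app like this exposes one or more functions to the model. The sketch below uses the OpenAI tool-schema format; the `search_flights` helper, its parameters, and the canned data are all hypothetical, and the model's response is simulated so the snippet runs without an API key.

```python
import json

# Hypothetical flight-search function the model can ask us to call.
def search_flights(origin: str, destination: str, date: str) -> list:
    # A real app would query a flights API here; we return canned data.
    return [{"flight": "BA117", "origin": origin,
             "destination": destination, "date": date}]

# JSON schema describing the function to the model (OpenAI tool format).
tools = [{
    "type": "function",
    "function": {
        "name": "search_flights",
        "description": "Find flights between two airports on a given date.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string", "description": "IATA code, e.g. LHR"},
                "destination": {"type": "string", "description": "IATA code, e.g. JFK"},
                "date": {"type": "string", "description": "ISO date, e.g. 2024-06-01"},
            },
            "required": ["origin", "destination", "date"],
        },
    },
}]

# When the model decides to call the function, it returns the function
# name plus JSON-encoded arguments; our code executes the call and sends
# the result back. Here that model response is hard-coded for illustration.
simulated_tool_call = {
    "name": "search_flights",
    "arguments": '{"origin": "LHR", "destination": "JFK", "date": "2024-06-01"}',
}

args = json.loads(simulated_tool_call["arguments"])
results = search_flights(**args)
print(results)
```

The key design point: the model never executes anything itself. It only emits a structured request, and your application stays in control of what actually runs.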
To implement this type of app without function calling, you would need to collect the flight details through a structured interface: menus, selection boxes, and the like.
Function calling also facilitates the possibility of users making requests by voice. All you would need is an additional transcription service, and voilà, you have an AI personal assistant.
You should now have a clearer picture of function calling and its purpose. Let's cement that knowledge with some technical walkthroughs.
Let's examine the simplest use case atop a weather API: function calling with a single function. To illustrate this, I have modelled an API that provides a temperature forecast — see here.
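A minimal sketch of that single-function setup is below. The `get_temperature` helper and its canned forecasts are assumptions standing in for a real weather API, and the model's tool-call arguments are simulated rather than fetched from a live chat completion.

```python
import json

# Hypothetical local function backing the weather API.
def get_temperature(city: str) -> dict:
    # A real implementation would call a forecast service; canned data here.
    forecasts = {"London": 18.0, "Tokyo": 24.5}
    return {"city": city, "celsius": forecasts.get(city, 20.0)}

# The single function schema passed to the model with the user's message.
tools = [{
    "type": "function",
    "function": {
        "name": "get_temperature",
        "description": "Get the forecast temperature for a city in Celsius.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# The model replies with the arguments it wants passed to the function;
# simulated here so the example runs without an API key.
tool_call_args = json.loads('{"city": "London"}')
result = get_temperature(**tool_call_args)
print(result)
```

In a full app, `result` would be serialised back to the model as a tool message so it can phrase the final answer in natural language.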
This post originally appeared on TechToday.