Function Calling is Structured Output
(or it can be anyway)
This is a brief post about something that confused me a great deal when I started working with LLMs.
Context
Many LLM providers (Anthropic, OpenAI, Google) support “function calling”, AKA “tool use”. In a nutshell:
- When calling the provider’s chat completion APIs, you tell the model “if needed, I can run these specific functions for you.”
- The model responds saying “hey go run function X with arguments Y and Z.”
- You go and run the function with those arguments. Maybe you append the result to the chat so the model has access to it.
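The three steps above can be sketched end-to-end. This is a hypothetical, provider-free simulation: the `ToolCall` struct and the hard-coded "model response" stand in for whatever your provider's SDK actually returns.

```rust
// Minimal sketch of the tool-use loop, with no real provider involved.
use std::collections::HashMap;

// What the model (conceptually) sends back when it wants a tool run.
struct ToolCall {
    name: String,
    args: HashMap<String, String>,
}

// Our client-side implementation of the advertised function.
fn get_temperature(city: &str) -> String {
    // Stubbed lookup; a real version would call a weather API.
    match city {
        "Tokyo" => "18°C".to_string(),
        _ => "unknown".to_string(),
    }
}

// Dispatch a tool call from the model to the matching local function.
fn handle_tool_call(call: &ToolCall) -> String {
    match call.name.as_str() {
        "get_temperature" => {
            let city = call.args.get("city").map(String::as_str).unwrap_or("");
            get_temperature(city)
        }
        other => format!("error: unknown tool {other}"),
    }
}

fn main() {
    // Pretend the model responded: "please call get_temperature(\"Tokyo\")".
    let call = ToolCall {
        name: "get_temperature".to_string(),
        args: HashMap::from([("city".to_string(), "Tokyo".to_string())]),
    };

    // Run the function and append the result to the chat history,
    // so a follow-up completion can use it.
    let result = handle_tool_call(&call);
    let mut history = vec!["user: What's the weather like in Tokyo?".to_string()];
    history.push(format!("tool({}): {}", call.name, result));
    println!("{}", history.join("\n"));
}
```

In a real integration the loop continues: you send the appended history back to the model so it can phrase a final answer.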
Weather lookup is a common example. You tell the model "I have a function `get_temperature(city: String)` that looks up the current temperature in a city", and then when a question like "What's the weather like in Tokyo?" comes up, the model responds to your code with "please call `get_temperature("Tokyo")`".
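Under the hood, that advertised function is just a JSON Schema sent along with the request. A sketch of what the `get_temperature` declaration might look like (field names vary a little between providers; this follows the common shape):

```json
{
  "name": "get_temperature",
  "description": "Look up the current temperature in a city",
  "input_schema": {
    "type": "object",
    "properties": {
      "city": {
        "type": "string",
        "description": "City name, e.g. \"Tokyo\""
      }
    },
    "required": ["city"]
  }
}
```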
Structured Output
All well and good, but where this gets interesting is that function calling is also a good way to get structured data out of LLMs. You can provide a function definition that you have no intention of “calling”, purely to get data in the format you want.
For example, using the Rust `genai` library:
```rust
// Text to analyze
let text = "The quick brown fox jumps over the lazy dog.";

// Define a tool/function for rating grammar
let grammar_tool = Tool::new("rate_grammar")
    .with_description("Rate the grammatical correctness of English text")
    .with_schema(json!({
        "type": "object",
        "properties": {
            "rating": {
                "type": "integer",
                "minimum": 1,
                "maximum": 10,
                "description": "Grammar rating from 1 to 10, where 10 is perfect grammar"
            },
            "explanation": {
                "type": "string",
                "description": "Brief explanation for the rating"
            }
        },
        "required": ["rating", "explanation"]
    }));

// Create a chat request with the text and the grammar tool
let chat_req = ChatRequest::new(vec![
    ChatMessage::system("You are a professional English grammar expert. Analyze the grammar of the given text and provide a rating."),
    ChatMessage::user(format!("Please rate the grammar of this text: '{}'", text)),
]).append_tool(grammar_tool);

// ...and execute it
let chat_res = client.exec_chat("gpt-4o-mini", chat_req, None).await?;
```
The result will include some JSON like:
```json
{
  "rating": 10,
  "explanation": "This sentence is grammatically perfect..."
}
```
…and we’re done. We just used function calling to get structured data, with no intention of calling any functions. This is much nicer and more reliable than string parsing on the raw chat output.
This approach is probably obvious to many people, but it was unintuitive to me at first; I think "function calling" is a misleading name for functionality that can be used for so much more.
Alternative Approaches
This isn’t the only way to get structured data out of an LLM; OpenAI supports Structured Outputs, and Gemini lets you specify a response schema. But for Anthropic, it seems like function calling is still recommended:
> Tools do not necessarily need to be client-side functions — you can use tools anytime you want the model to return JSON output that follows a provided schema.
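For comparison, here is roughly what the grammar-rating schema looks like when expressed through OpenAI's Structured Outputs instead of a tool. This is a sketch of the `response_format` request fragment; check the current OpenAI docs for exact field names and the keywords supported in strict mode:

```json
{
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "grammar_rating",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "rating": { "type": "integer", "description": "Grammar rating from 1 to 10" },
          "explanation": { "type": "string", "description": "Brief explanation for the rating" }
        },
        "required": ["rating", "explanation"],
        "additionalProperties": false
      }
    }
  }
}
```

The model's reply then arrives as message content that is guaranteed to match the schema, rather than as a tool call to intercept.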