
This is a brief post about something that confused me a great deal when I started working with LLMs.

Context

Many LLM providers (Anthropic, OpenAI, Google) support “function calling”, AKA “tool use”. In a nutshell:

  1. When calling the provider’s chat completion APIs, you tell the model “if needed, I can run these specific functions for you.”
  2. The model responds saying “hey go run function X with arguments Y and Z.”
  3. You go and run the function with those arguments. Maybe you append the result to the chat so the model has access to it.

Weather lookup is a common example. You tell the model “I have a function get_temperature(city: String) that looks up the current temperature in a city”, and then when a question like “What’s the weather like in Tokyo?” comes up the model responds to your code with “please call get_temperature("Tokyo")”.
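The exact wire format varies by provider — OpenAI nests the schema under `parameters`, Anthropic under `input_schema` — but a tool definition and the model's resulting tool call look roughly like this (sketched with Anthropic-style field names):

```json
{
  "name": "get_temperature",
  "description": "Look up the current temperature in a city",
  "input_schema": {
    "type": "object",
    "properties": {
      "city": { "type": "string", "description": "City name" }
    },
    "required": ["city"]
  }
}
```

…and the model's reply includes a tool-call block along the lines of:

```json
{
  "name": "get_temperature",
  "arguments": { "city": "Tokyo" }
}
```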

Structured Output

All well and good, but where this gets interesting is that function calling is also a good way to get structured data out of LLMs. You can provide a function definition that you have no intention of “calling”, purely to get data in the format you want.

For example, using the Rust genai library:

// Text to analyze
let text = "The quick brown fox jumps over the lazy dog.";

// Define a tool/function for rating grammar
let grammar_tool = Tool::new("rate_grammar")
    .with_description("Rate the grammatical correctness of English text")
    .with_schema(json!({
        "type": "object",
        "properties": {
          "rating": {
            "type": "integer",
            "minimum": 1,
            "maximum": 10,
            "description": "Grammar rating from 1 to 10, where 10 is perfect grammar"
          },
          "explanation": {
            "type": "string",
            "description": "Brief explanation for the rating"
          }
        },
        "required": ["rating", "explanation"]
    }));

// Create a chat request with the text and the grammar tool
let chat_req = ChatRequest::new(vec![
    ChatMessage::system("You are a professional English grammar expert. Analyze the grammar of the given text and provide a rating."),
    ChatMessage::user(format!("Please rate the grammar of this text: '{}'", text))
]).append_tool(grammar_tool);

// ...and execute it
let chat_res = client.exec_chat("gpt-4o-mini", chat_req, None).await?;

The result will include some JSON like:

{
    "rating": 10,
    "explanation": "This sentence is grammatically perfect..."
}

…and we’re done. We just used function calling to get structured data, with no intention of calling any functions. This is much nicer and more reliable than string parsing on the raw chat output.
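For completeness, consuming the result is just ordinary JSON deserialization, plus re-checking the constraints the schema promises but doesn't enforce. Here's a minimal language-agnostic sketch in Python (the Rust code above would do the same with serde_json; the field names come from the rate_grammar schema):

```python
import json

# The JSON the model returned as the tool call's arguments
raw = '{"rating": 10, "explanation": "This sentence is grammatically perfect..."}'

result = json.loads(raw)

# Schema constraints like "minimum"/"maximum" guide the model but aren't
# guaranteed, so validate before trusting the data.
assert set(result) >= {"rating", "explanation"}, "missing required field"
assert isinstance(result["rating"], int) and 1 <= result["rating"] <= 10

print(result["rating"])
print(result["explanation"])
```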

This approach is probably obvious to many people, but it was unintuitive to me at first; I think “function calling” is a misleading name for functionality that can be used for so much more.

Alternative Approaches

This isn’t the only way to get structured data out of an LLM; OpenAI supports Structured Outputs, and Gemini lets you specify a response schema. But for Anthropic, it seems like function calling is still recommended:

Tools do not necessarily need to be client-side functions — you can use tools anytime you want the model to return JSON output that follows a provided schema.
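One caveat with this trick: by default the model is free to answer in prose instead of calling your tool. Anthropic's API lets you force the call by naming the tool in the tool_choice request field (OpenAI's chat completion API has an equivalent tool_choice parameter); a minimal sketch:

```json
{ "tool_choice": { "type": "tool", "name": "rate_grammar" } }
```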

How I Use LLMs (Sep 2024)

Aider is pretty cool

It feels a bit early to be writing an update to something I wrote 1.5 months ago, but we live in interesting times. Shortly after writing that post, I started trying out Aider with Claude 3.5 Sonnet. Aider’s an open source Python CLI app that you run inside a Git repo with an OpenAI/Anthropic/whatever API key¹.

My Aider workflow

  1. I direct Aider toward a file or multiple files of interest (with /add src/main.rs or similar)
  2. I describe a commit-sized piece of work to do in 1 or 2 sentences
  3. Aider sends some file contents and my prompt to the LLM and translates the response into a Git commit
  4. I skim the commit and leave it as is, tell Aider to tweak it some more, tweak it myself, or /undo it entirely
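Concretely, a session following those steps might look like this (the prompt is made up; /add and /undo are the real Aider commands, and I’ve elided Aider’s own output):

```
> /add src/main.rs
> Add a --dry-run flag that skips the actual DB write
  ...Aider proposes edits and commits them...
> /undo
```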

This works shockingly well; most of the time, Aider+Claude can get it right on the first or second try. This workflow has a few properties that I really like:

  1. It’s IDE-agnostic (no need to switch to something like Cursor)
  2. It’s very low-friction, which encourages trying things out
    1. No need to copy code from a browser, write commit messages, etc.
    2. Undoing work is trivial (just delete the Git commit or run /undo)
  3. It’s pay-as-you-go (I pay Anthropic by the token, no monthly subscription)

Prompts

Here are some examples of the prompts I do in Aider:

  • Library updates should be streamed to all connected web clients over a WebSocket. Add an /updates websocket in the Rust code that broadcasts updated LibraryItems to clients (triggered by a successful call to update_handler). The JS in index.html should subscribe to the WebSocket and call table.updateData() to update the Tabulator table
  • Add a new endpoint (POST or PUT) for adding new items to the library. It will create a new LibraryItemCreatedEvent, save it to the DB, apply it to the in-memory library, then broadcast the new item over the websocket
  • add a nice-looking button that bookmarks the current song. don’t worry about hooking it up to anything just yet
  • Add a new “test-api” command to justfile. It should curl the API exposed by add_item_handler and check that the response status code is CREATED
  • Write a throwaway C# program for benchmarking the same SQLite insert as create_item() in lib.rs

I’m still developing an intuition for how to write these, but with all of these examples I got results that were correct or easily fixed up. Sometimes I am very precise about what I want, and sometimes I am not; it all depends on the task at hand and how confident I am that the LLM will do what I’m looking for.

What does all this mean?

I dunno! The world is drowning in long-winded AI thinkpieces, so I’ll spare you another one.

All I know for a fact is that if I have a commit-sized piece of work in mind, there’s a very good chance that Claude+Aider can do it for me in less than a minute — today. I’m still exploring the implications of that, but Jamie Brandon’s Speed Matters post feels very relevant. I can try out more ideas and generally be more ambitious with my software projects, which is very exciting.


  1. You can also point Aider at a locally-hosted LLM, which is cool, but in my experience the quality is nowhere near as good as Claude. ↩︎

I find LLMs to be pretty useful these days. I don’t consider myself to be on the frontier of LLM experimentation, but when I talk to (technical) people it sounds like my workflow is pretty uncommon, so I should probably write about it.

LLM (the command-line tool)

Simon Willison’s llm command-line tool is the primary way I use LLMs. I sometimes struggle to describe the appeal of llm to people because it’s boring. llm lets you do the following with any popular LLM (hosted or local):

  1. Ask the LLM one-off questions (optionally taking stdin as context)
  2. Start a chat session (optionally starting from the last ad-hoc question)

And that’s about it! It’s one of those lovely tools that does a few things well. I usually start sessions with exploratory questions/requests, sometimes piping in data:

cat xycursor.rs | llm "the end() function in this file is confusing, explain it"

And then if I need to follow up on a question, llm chat --continue drops me into an interactive chat that starts after the last question+response:

> llm chat --continue
Chatting with claude-3-5-sonnet-20240620
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> write a comment explaining that function, using ASCII diagrams

Important things about this workflow:

  1. It’s trivial to “connect” the LLM to other data+files
    1. For example, every week I used to manually rewrite the output of this script to be more readable before publishing it; now I pipe it to llm and tell it to do an initial rewrite first
  2. llm makes it trivial to go from exploratory work to more focused iteration

I have llm set up to use this custom prompt, no matter what underlying LLM it’s using. I find that it helps make responses much more succinct.

GitHub Copilot

It’s good, I use it every day. It’s a lot more widely known than llm so I won’t spill too much ink over it.

Observations

I use LLMs and web search in a similar way: do a quick exploratory investigation into something, taking the initial results with a grain of salt. The skills+knowledge you need to evaluate Google results are very similar to the ones you need to evaluate LLM results!

I mostly use LLMs for computer stuff, and it’s often really easy to verify whether a programming/computing answer is any good; just try it out! LLMs are probably not quite as useful for fields where that’s not the case.

I’m happy with llm but it is, ultimately, a wrapper around a basic chat interface and we can probably do better. Claude Artifacts is very appealing in that it can offer a faster iteration cycle for web development (but is unfortunately coupled to an expensive subscription service), and Aider is interesting as a better way to give an LLM access to context from an entire code base. I’m hoping we’ll see more tools like these that extend what we can do with LLMs.
