Category: Software

The Model Context Protocol (MCP) is a pretty big deal these days. It’s become the de facto standard for giving LLMs access to tools that someone else wrote, which, of course, turns them into agents. But writing tools for a new MCP server is hard, and so people often propose auto-converting existing APIs into MCP tools; typically using OpenAPI metadata (1, 2).

In my experience, this can work but it doesn’t work well. Here are a few reasons why:

Agents don’t do well with large numbers of tools

Infamously, VS Code has a hard limit of 128 tools - but many models struggle with accurate tool calling well before that number. Also, each tool and its description takes up valuable context window space.

Most web APIs weren’t designed with these constraints in mind! It’s fine to have umpteen APIs for a single product area when those APIs are called from code, but if each of those APIs is mapped to an MCP tool the results might not be great.

MCP tools designed from the ground up are typically much more flexible than individual web APIs, with each tool being able to do the work of several individual APIs.

APIs can blow through context windows quickly

Imagine an API that returns 100 records at a time, and each record is very wide (say, 50 fields). Sending those results to an agent as-is will use up a lot of tokens; even if a query can be satisfied with only a few fields, every field ends up in the context window.

APIs are typically paginated by the number of records, but records can vary a lot in size. One record might contain a large text field that takes up 100,000 tokens, while another might contain 10. Putting these API results directly into an agent’s context window is a gamble; sometimes it works, sometimes it will blow up.

The format of the data can also be an issue. Most web APIs these days return JSON, but JSON is a very token-inefficient format. Take this:

[
  {
    "firstName": "Alice",
    "lastName": "Johnson",
    "age": 28
  },
  {
    "firstName": "Bob",
    "lastName": "Smith",
    "age": 35
  }
]

Compare to the same data in CSV format:

firstName,lastName,age
Alice,Johnson,28
Bob,Smith,35

The CSV data is much more succinct - it uses up half as many tokens per record. Typically CSV, TSV, or YAML (for nested data) are better choices than JSON.

None of these issues are insurmountable. You could imagine automatically adding tool arguments that let agents project fields, automatically truncating or summarizing large results, and automatically converting JSON results to CSV (or YAML for nested data). But the tools I’ve seen do none of those things.
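As a sketch of just the JSON-to-CSV step, assuming the records have already been deserialized out of the API response (the Person type and field names here are hypothetical, and real CSV output would need quoting/escaping):

```rust
// Hypothetical record type; a real converter would deserialize this
// from the API's JSON response (e.g. with serde).
struct Person {
    first_name: String,
    last_name: String,
    age: u32,
}

// Render records as CSV: one header row, then one line per record,
// instead of repeating every field name inside every record.
// (No quoting/escaping here; a real converter would handle commas
// and quotes inside values.)
fn to_csv(records: &[Person]) -> String {
    let mut out = String::from("firstName,lastName,age\n");
    for p in records {
        out.push_str(&format!("{},{},{}\n", p.first_name, p.last_name, p.age));
    }
    out
}

fn main() {
    let records = vec![
        Person { first_name: "Alice".into(), last_name: "Johnson".into(), age: 28 },
        Person { first_name: "Bob".into(), last_name: "Smith".into(), age: 35 },
    ];
    print!("{}", to_csv(&records));
}
```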

APIs don’t make the most of agents’ unique capabilities

APIs return structured data for programmatic consumption. That’s often what agents want from tool calls… but agents can also handle other, more free-form instructions.

For example, an ask_question tool could perform a RAG query over some documentation, then return information in plain text that informs the next tool call - skipping structured data entirely.

Or, a call to a search_cities tool could return a structured list of cities and a suggestion of what to call next:

city_name,population,country,region
Tokyo,37194000,Japan,Asia
Delhi,32941000,India,Asia
Shanghai,28517000,China,Asia

Suggestion: To get more specific information (weather, attractions, demographics), try calling get_city_details with the city_name parameter.

That sort of layering and tool chaining can be very effective in MCP servers, and it’s something you’ll miss out on completely if auto-converting APIs to tools.
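To make the layering concrete, here's a sketch of how a search_cities handler might assemble that result, pairing structured rows with a free-form nudge; the with_suggestion helper and its signature are hypothetical:

```rust
// Hypothetical helper: append a free-form suggestion to a structured
// result, steering the agent toward the natural next tool call.
fn with_suggestion(csv: &str, next_tool: &str, param: &str) -> String {
    format!(
        "{}\n\nSuggestion: to get more specific information, try calling {} with the {} parameter.",
        csv.trim_end(),
        next_tool,
        param
    )
}

fn main() {
    let csv = "city_name,population,country,region\nTokyo,37194000,Japan,Asia\n";
    // The agent reads both the rows and the suggestion from one result.
    println!("{}", with_suggestion(csv, "get_city_details", "city_name"));
}
```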

If an agent needs to call an API, it could just do that

Agents like Claude Code are remarkably capable of writing+executing code these days, including scripts that call web APIs. Some people take this so far as to argue that MCP isn’t needed at all!

I disagree with that conclusion, but I do think we should skate to where the puck is going. Sandboxing of agents is improving rapidly, and if it’s easy+safe for an agent to call APIs directly then we might as well do that and cut out the middleman.

Conclusion

Agents are fundamentally different from the typical consumers of APIs. It’s possible to automatically create MCP tools from existing APIs, but doing that is unlikely to work well. Agents do best when given tools that are designed for their unique capabilities and limitations.

Agents all the way down

A pattern for UI in MCP clients

Say you’re working on an agent (a model using tools in a loop). Furthermore, let’s say your agent uses the Model Context Protocol to populate its set of tools dynamically. This results in an interesting UX question: how should you show text tool results to the user of your agent?

You could just show the raw text, but that’s a little unsatisfying when tool results are often JSON, XML, or some other structured data. You could parse the structured data, but that’s tricky too; the set of tools your agent has access to may change, and the tool results you get today could be structured differently tomorrow.

I like another option: pass the tool results to another agent.

The Visualization Agent

Let’s add another agent to our system; we’ll call it the visualization agent. After the main agent executes a tool, it will pass the results to the visualization agent and say “hey, can you visualize this for the user?”

The visualization agent has access to specialized tools like “show table”, “show chart”, “show formatted code”, etc. It handles the work of translating tool results in arbitrary formats into the structures that are useful for opinionated visualization.

And if it can’t figure out a good way to visualize something, well, we can always fall back to text.
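Here's a rough sketch of the shape of that system. The enum variants and routing logic are hypothetical; a real implementation would ask a fast, cheap model to choose among the "show_*" tools rather than matching on surface syntax:

```rust
// Hypothetical set of visualizations the specialized agent can choose.
#[derive(Debug, PartialEq)]
enum Visualization {
    FormattedCode { language: String, source: String },
    PlainText(String), // the fallback when nothing better fits
}

// Stand-in for the visualization agent. A real version would prompt a
// small model with the tool result and the available visualization
// tools; this sketch only routes on surface syntax.
fn visualize(tool_result: &str) -> Visualization {
    let trimmed = tool_result.trim_start();
    if trimmed.starts_with('{') || trimmed.starts_with('[') {
        Visualization::FormattedCode {
            language: "json".into(),
            source: tool_result.into(),
        }
    } else {
        Visualization::PlainText(tool_result.into())
    }
}

fn main() {
    // The main agent hands the raw result over and never re-reads it,
    // so a 10,000-token result isn't duplicated in its own history.
    println!("{:?}", visualize(r#"{"rating": 10}"#));
}
```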

Why do it this way?

The big thing is that we can display arbitrary data to the user in a nice way, without assuming much about the tools our agent will have access to. We could also give the main agent visualization tools (tempting! so simple!), but:

  1. That can be very wasteful of the context window
    1. Imagine receiving 10,000 tokens from a tool, then the agent decides to pass those 10,000 tokens by calling a visualization tool - the 10,000 tokens just doubled to 20,000 in our chat history
  2. The more tools an agent has access to, the more likely it is to get confused
  3. A specialized visualization agent can use a faster+cheaper model than our main agent

It’s not all sunshine and roses; calling the visualization agent can be slow, and it adds some complexity. But I like this approach compared to the others I’ve seen, and we’re not far away from fast local models being widely available. If you’ve got another approach, I’d love to hear from you!

This is a brief post about something that confused me a great deal when I started working with LLMs.

Context

Many LLM providers (Anthropic, OpenAI, Google) support “function calling”, AKA “tool use”. In a nutshell:

  1. When calling the provider’s chat completion APIs, you tell the model “if needed, I can run these specific functions for you.”
  2. The model responds saying “hey go run function X with arguments Y and Z.”
  3. You go and run the function with those arguments. Maybe you append the result to the chat so the model has access to it.

Weather lookup is a common example. You tell the model “I have a function get_temperature(city: String) that looks up the current temperature in a city”, and then when a question like “What’s the weather like in Tokyo?” comes up the model responds to your code with “please call get_temperature("Tokyo")”.
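On the wire, that function declaration is roughly a JSON Schema blob; the exact field names vary by provider (this is approximately Anthropic's shape, and the description strings are made up for the example):

```json
{
  "name": "get_temperature",
  "description": "Look up the current temperature in a city",
  "input_schema": {
    "type": "object",
    "properties": {
      "city": {
        "type": "string",
        "description": "Name of the city, e.g. \"Tokyo\""
      }
    },
    "required": ["city"]
  }
}
```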

Structured Output

All well and good, but where this gets interesting is that function calling is also a good way to get structured data out of LLMs. You can provide a function definition that you have no intention of “calling”, purely to get data in the format you want.

For example, using the Rust genai library:

// Text to analyze
let text = "The quick brown fox jumps over the lazy dog.";

// Define a tool/function for rating grammar
let grammar_tool = Tool::new("rate_grammar")
    .with_description("Rate the grammatical correctness of English text")
    .with_schema(json!({
        "type": "object",
        "properties": {
          "rating": {
            "type": "integer",
            "minimum": 1,
            "maximum": 10,
            "description": "Grammar rating from 1 to 10, where 10 is perfect grammar"
          },
          "explanation": {
            "type": "string",
            "description": "Brief explanation for the rating"
          }
        },
        "required": ["rating", "explanation"]
    }));

// Create a chat request with the text and the grammar tool
let chat_req = ChatRequest::new(vec![
    ChatMessage::system("You are a professional English grammar expert. Analyze the grammar of the given text and provide a rating."),
    ChatMessage::user(format!("Please rate the grammar of this text: '{}'", text))
]).append_tool(grammar_tool);

// ...and execute it
let chat_res = client.exec_chat("gpt-4o-mini", chat_req, None).await?;

The result will include some JSON like:

{
    "rating": 10,
    "explanation": "This sentence is grammatically perfect..."
}

…and we’re done. We just used function calling to get structured data, with no intention of calling any functions. This is much nicer and more reliable than string parsing on the raw chat output.

This approach is probably obvious to many people, but it was unintuitive to me at first; I think “function calling” is a misleading name for functionality that can be used for so much more.

Alternative Approaches

This isn’t the only way to get structured data out of an LLM; OpenAI supports Structured Outputs, and Gemini lets you specify a response schema. But for Anthropic, it seems like function calling is still recommended:

Tools do not necessarily need to be client-side functions — you can use tools anytime you want the model to return JSON output that follows a provided schema.

I tried to use Automerge again, and failed.

For those of you who aren’t familiar, Automerge is a neat library that helps with building collaborative and local-first applications. It’s pretty cool! I work on a collaborative notes application that does not handle concurrent edits very well, and Automerge is one of the main contenders for improving that situation.

I gave Automerge a try in 2023 and wasn’t able to get it working, to my chagrin. This weekend there was an event in Vancouver for local-first software with one of the main Automerge authors, so I decided to attend and give it another try. I made a fair bit of progress, but ultimately gave up after spending ~5 hours on the problem. A few thoughts+observations:

I am going off the beaten path (web)

Automerge’s “golden path” is web apps. The core of Automerge is written in Rust, but it’s primarily used via WASM in the browser.

This approach is unpleasant for me; I like Rust, I have a good understanding of how code runs+executes on a “real computer”, and I do not want to write an application where 99% of the business logic runs in the browser. Instead, I tried to write an application where my Rust backend was the primary Automerge node and browser/JS Automerge nodes would talk to it.

This did not go well; the documentation and ergonomics of the Rust library are lacking, and most tutorials assume that you are using the JS wrapper around the Rust library. And then when I tried to use the JS version in my simple web UI, the docs assumed a level of web development sophistication that I don’t have.

To be clear, this is mostly a me problem: primarily targeting the browser is absolutely the way to go in 2025!

I am going off the beaten path (local-first)

Automerge tries to solve a lot of problems related to local-first software. But I wanted to “start small” and solve the problem of concurrent text editing for an application that isn’t local-first. In retrospect this was a mistake; the documentation was written for a very different audience than me, and I wasn’t especially aligned with what other people at the event were building.

Chrome is winning

Something that was discussed at the event: if you are building entirely in-browser local-first applications you may want to target Chrome, because Firefox is way behind on several new+useful APIs. This is sad, but not surprising.

What next?

I think it’s possible to build an Automerge-based collaborative text editor the way I want, but it’s a lot harder than I expected. I’m going to shelve this and revisit it next time I have time+energy to hack on it.

How I Use LLMs (Sep 2024)

Aider is pretty cool

It feels a bit early to be writing an update to something I wrote 1.5 months ago, but we live in interesting times. Shortly after writing that post, I started trying out Aider with Claude 3.5 Sonnet. Aider’s an open source Python CLI app that you run inside a Git repo with an OpenAI/Anthropic/whatever API key¹.

My Aider workflow

  1. I direct Aider toward a file or multiple files of interest (with /add src/main.rs or similar)
  2. I describe a commit-sized piece of work to do in 1 or 2 sentences
  3. Aider sends some file contents and my prompt to the LLM and translates the response into a Git commit
  4. I skim the commit and leave it as is, tell Aider to tweak it some more, tweak it myself, or /undo it entirely

This works shockingly well; most of the time, Aider+Claude can get it right on the first or second try. This workflow has a few properties that I really like:

  1. It’s IDE-agnostic (no need to switch to something like Cursor)
  2. It’s very low-friction, which encourages trying things out
    1. No need to copy code from a browser, write commit messages, etc.
    2. Undoing work is trivial (just delete the Git commit or run /undo)
  3. It’s pay-as-you-go (I pay Anthropic by the token, no monthly subscription)

Prompts

Here are some examples of the prompts I do in Aider:

  • Library updates should be streamed to all connected web clients over a WebSocket. Add an /updates websocket in the Rust code that broadcasts updated LibraryItems to clients (triggered by a successful call to update_handler). The JS in index.html should subscribe to the WebSocket and call table.updateData() to update the Tabulator table
  • Add a new endpoint (POST or PUT) for adding new items to the library. It will create a new LibraryItemCreatedEvent, save it to the DB, apply it to the in-memory library, then broadcast the new item over the websocket
  • add a nice-looking button that bookmarks the current song. don’t worry about hooking it up to anything just yet
  • Add a new “test-api” command to justfile. It should curl the API exposed by add_item_handler and check that the response status code is CREATED
  • Write a throwaway C# program for benchmarking the same SQLite insert as create_item() in lib.rs

I’m still developing an intuition for how to write these, but with all of these examples I got results that were correct or able to be fixed up easily. Sometimes I am very precise about what I want, and sometimes I am not; it all depends on the task at hand and how confident I am that the LLM will do what I’m looking for.

What does all this mean?

I dunno! The world is drowning in long-winded AI thinkpieces, so I’ll spare you another one.

All I know for a fact is that if I have a commit-sized piece of work in mind, there’s a very good chance that Claude+Aider can do it for me in less than a minute — today. I’m still exploring the implications of that, but Jamie Brandon’s Speed Matters post feels very relevant. I can try out more ideas and generally be more ambitious with my software projects, which is very exciting.


  1. You can also point Aider at a locally-hosted LLM, which is cool, but in my experience the quality is nowhere near as good as Claude. ↩︎
