Land Values and Affordability

The relationship might not work the way you think

I want to get something off my chest: attempts to keep the price of urban land down are not necessarily good. Many people in local politics place a high priority on keeping land prices down. For example, a new Vancouver councillor opposed a church building apartments on its own land:

Land values displace people. This will increase land values.

…and then used that same reason to vote against apartments at a major train station:

I’m worried that filtering will take too long, that land value increases will lead to displacement

Vancouver’s planning staff share these concerns and try to keep land values down when changing zoning. For example, the recent multiplex policy was designed to avoid raising land values:

Proposed density bonus contribution requirements & rates (are) set to… limit any potential land value escalation

That makes sense; if land value is lower, then homes are more affordable, right?

WRONG (if you keep land values down by stopping development).

Land Prices Are Not Housing Prices

The main way people save on housing costs in cities is by using less land. For example, imagine the following uses on a 4000 sqft lot:

Building | Land Per Household
Single-family home | 4000 sqft
Duplex | 2000 sqft
5-unit apartment/condo building | 800 sqft

It is generally much cheaper to buy 800 square feet of land than it is to buy 4000. But where this gets interesting is that those denser uses may cause higher land prices. Let’s walk through how:

  1. Say that 4000 sqft lot is zoned to only allow a single-family home. Richie McRicherson is willing to pay $1M so he can build a house on that land. The land sells for 1 MILLION DOLLARS.
  2. Now, suppose the land is zoned to allow a duplex. 2 households who each have $600k pool their money together and outbid Richie. The land sells for 1.2 MILLION DOLLARS.
  3. Finally, suppose the land is zoned to allow a 5-unit condo building. 5 households who each have $400k pool their money and outbid both Richie and the duplex buyers. The land sells for 2 MILLION DOLLARS.
Building | Land Price | Land Price/Sqft | Land Price/Household
Single-family home | $1,000,000 | $250 | $1,000,000
Duplex | $1,200,000 | $300 | $600,000
5-unit condo building | $2,000,000 | $500 | $400,000

It is really important to note that even though allowing more homes drove land prices up, households are paying less for land.
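The arithmetic behind that table is simple enough to sketch in code. The bids below are the hypothetical ones from the example above, not real market data:

```rust
// Price per square foot rises with density...
fn land_price_per_sqft(bid: u64, lot_sqft: u64) -> u64 {
    bid / lot_sqft
}

// ...but price per household falls, because each household uses less land.
fn land_price_per_household(bid: u64, households: u64) -> u64 {
    bid / households
}

fn main() {
    let lot_sqft = 4000;
    // (building type, winning bid in $, number of households)
    let scenarios = [
        ("single-family home", 1_000_000u64, 1u64),
        ("duplex", 1_200_000, 2),
        ("5-unit condo building", 2_000_000, 5),
    ];
    for (name, bid, households) in scenarios {
        println!(
            "{name}: ${}/sqft, ${} per household",
            land_price_per_sqft(bid, lot_sqft),
            land_price_per_household(bid, households)
        );
    }
}
```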

OK that’s the theory; what about in practice?

It can be hard to observe this in real life, because dense city centres tend to be pretty expensive; that's a complicated topic beyond the scope of this blog post. But there are places in Vancouver where this specific phenomenon is easy to see on a map of land values. For example:

North West Point Grey

Left: cheap land and expensive homes. Right: expensive land and relatively cheap homes

This is one of the most expensive neighbourhoods in Vancouver, by design. Apartments are forbidden everywhere; only houses are allowed. And city planning rules require each house to use up much more land west of Blanca Street:

Area | Minimum Lot Size | Land Price/Sqft | Land Price/Lot
West of Blanca | 12,000-18,000 sqft | usually around $300 | $7M-$30M
East of Blanca | 3000-5400 sqft | usually around $800 | $3M-$8M

This is exactly what we were talking about. When the city lets people use less land per home, land prices per square foot go up and home prices go down. To be clear, $3M still isn’t cheap; we should go a lot further.

Shaughnessy

It’s a similar story in Shaughnessy, historically Vancouver’s most exclusive neighbourhood:

Top: Fairview/South Granville apartments+condos. Bottom: Shaughnessy mansions

South of 16th we zone for mansions on very large lots (making the land relatively cheap), and north of 16th we allow apartments and condo buildings (making the land relatively expensive). If you know Vancouver at all, you know that those apartments are a lot cheaper than the $10M+ Shaughnessy mansions!

Takeaway

It’s important to distinguish between the cost of land per square foot and the cost of land per home. Limiting density does work to drive the former down, but at a terrible cost: it stops people of modest means from pooling their resources to outbid someone much richer.

I had an odd experience with this website, and I’m finally writing it up. The short version:

  1. In August 2024 I wrote a blog post that documented how a local “independent journalist” had written for white nationalist websites.
  2. In October 2024 he filed a DMCA complaint with my host (Netlify).
    1. Netlify support rubber-stamped the complaint without giving me a reasonable way to appeal.
    2. I moved to CloudFlare and cut the blog post back to a few essential facts+links, to make it easier for the next overworked support person to interpret.
  3. In February 2025 CloudFlare approved another DMCA complaint from someone who’d copied my entire post to a content mill and backdated it!

This post will mostly focus on the 2nd DMCA complaint, as it’s the most interesting one.

My post was copied to… MormonFind.com?

On February 14, while on vacation, I received the following email:

Cloudflare received the below copyright infringement complaint regarding your account. If the content identified in the complaint is not removed within 48 hours, Cloudflare will take steps to disable access to the content, consistent with section 512(c) of the Digital Millennium Copyright Act. Please note that these steps will include disabling access to the reported URL on which the content is located, which will affect any other content located on the same URL.

Complaint Information:

Reporter’s Name: Aaron Bennet

Reporter’s Email Address: <redacted>

Reporter’s Title: Copyright Infringement

Reporter’s Company Name: Bennet Media Association

Reporter’s Address: <redacted>

Reported URL(s): hxxps://www[.]reillywood[.]com/blog/riley-donovan/

Original Work Description: https://mormonfind.com/2024/04/10/riley-donovan-contributes-to-white-supremacist-websites/

To respond to this issue, please reply to [email protected].

Agents all the way down

A pattern for UI in MCP clients

Say you’re working on an agent (a model using tools in a loop). Furthermore, let’s say your agent uses the Model Context Protocol to populate its set of tools dynamically. This results in an interesting UX question: how should you show text tool results to the user of your agent?

You could just show the raw text, but that’s a little unsatisfying when tool results are often JSON, XML, or some other structured data. You could parse the structured data, but that’s tricky too; the set of tools your agent has access to may change, and the tool results you get today could be structured differently tomorrow.

I like another option: pass the tool results to another agent.

The Visualization Agent

Let’s add another agent to our system; we’ll call it the visualization agent. After the main agent executes a tool, it will pass the results to the visualization agent and say “hey, can you visualize this for the user?”

The visualization agent has access to specialized tools like “show table”, “show chart”, “show formatted code”, etc. It handles the work of translating tool results in arbitrary formats into the structures that are useful for opinionated visualization.

And if it can’t figure out a good way to visualize something, well, we can always fall back to text.
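Here’s a rough sketch of the shape of this in Rust. The types and the `visualize` function are invented for illustration; a real visualization agent would be an LLM call with “show table”, “show chart”, etc. exposed to it as tools, while here a crude heuristic stands in for the model:

```rust
// Sketch only: these types are invented for illustration.
#[allow(dead_code)]
enum Rendering {
    Table { headers: Vec<String>, rows: Vec<Vec<String>> },
    Code { language: String, source: String },
    PlainText(String),
}

// Stand-in for the visualization agent: given a raw tool result,
// decide how to show it. A real version would prompt a small model.
fn visualize(raw_tool_result: &str) -> Rendering {
    let trimmed = raw_tool_result.trim_start();
    if trimmed.starts_with('{') || trimmed.starts_with('[') {
        // Looks like JSON: render it as formatted code.
        Rendering::Code {
            language: "json".to_string(),
            source: raw_tool_result.to_string(),
        }
    } else {
        // Can't figure out anything better: fall back to plain text.
        Rendering::PlainText(raw_tool_result.to_string())
    }
}

fn main() {
    match visualize(r#"{"city": "Tokyo", "temp_c": 18.5}"#) {
        Rendering::Code { language, .. } => println!("render as {language} code"),
        Rendering::Table { .. } => println!("render as table"),
        Rendering::PlainText(_) => println!("render as text"),
    }
}
```

The important part is the interface: the main agent never learns anything about rendering, it just hands raw results off.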

Why do it this way?

The big thing is that we can display arbitrary data to the user in a nice way, without assuming much about the tools our agent will have access to. We could also give the main agent visualization tools (tempting! so simple!), but:

  1. That can be very wasteful of the context window
    1. Imagine receiving 10,000 tokens from a tool; if the agent then passes those 10,000 tokens along by calling a visualization tool, they just doubled to 20,000 in our chat history
  2. The more tools an agent has access to, the more likely it is to get confused
  3. A specialized visualization agent can use a faster+cheaper model than our main agent

It’s not all sunshine and roses; calling the visualization agent can be slow, and it adds some complexity. But I like this approach compared to the others I’ve seen, and we’re not far away from fast local models being widely available. If you’ve got another approach, I’d love to hear from you!

This is a brief post about something that confused me a great deal when I started working with LLMs.

Context

Many LLM providers (Anthropic, OpenAI, Google) support “function calling”, AKA “tool use”. In a nutshell:

  1. When calling the provider’s chat completion APIs, you tell the model “if needed, I can run these specific functions for you.”
  2. The model responds saying “hey go run function X with arguments Y and Z.”
  3. You go and run the function with those arguments. Maybe you append the result to the chat so the model has access to it.

Weather lookup is a common example. You tell the model “I have a function get_temperature(city: String) that looks up the current temperature in a city”, and then when a question like “What’s the weather like in Tokyo?” comes up the model responds to your code with “please call get_temperature("Tokyo")”.
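The round trip looks roughly like this. The types here are schematic stand-ins rather than any particular provider’s SDK, and `get_temperature` just returns a made-up number:

```rust
// Schematic stand-in for the provider's tool-call response.
struct ToolCall {
    name: String,
    arg: String, // simplified to one string argument instead of JSON
}

// The function we told the model about. A real version would hit a weather API.
fn get_temperature(city: &str) -> f64 {
    if city == "Tokyo" { 18.5 } else { 20.0 } // made-up numbers
}

// Step 3: run the function the model asked for, and return the result
// so it can be appended to the chat for the model to read.
fn run_tool(call: &ToolCall) -> String {
    match call.name.as_str() {
        "get_temperature" => format!("{}", get_temperature(&call.arg)),
        other => format!("unknown tool: {other}"),
    }
}

fn main() {
    // Step 2: the model has responded "please call get_temperature(\"Tokyo\")".
    let call = ToolCall {
        name: "get_temperature".to_string(),
        arg: "Tokyo".to_string(),
    };
    println!("tool result: {}", run_tool(&call));
}
```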

Structured Output

All well and good, but where this gets interesting is that function calling is also a good way to get structured data out of LLMs. You can provide a function definition that you have no intention of “calling”, purely to get data in the format you want.

For example, using the Rust genai library:

// Assuming `genai` and `serde_json` as dependencies
use genai::chat::{ChatMessage, ChatRequest, Tool};
use genai::Client;
use serde_json::json;

let client = Client::default();

// Text to analyze
let text = "The quick brown fox jumps over the lazy dog.";

// Define a tool/function for rating grammar
let grammar_tool = Tool::new("rate_grammar")
    .with_description("Rate the grammatical correctness of English text")
    .with_schema(json!({
        "type": "object",
        "properties": {
          "rating": {
            "type": "integer",
            "minimum": 1,
            "maximum": 10,
            "description": "Grammar rating from 1 to 10, where 10 is perfect grammar"
          },
          "explanation": {
            "type": "string",
            "description": "Brief explanation for the rating"
          }
        },
        "required": ["rating", "explanation"]
    }));

// Create a chat request with the text and the grammar tool
let chat_req = ChatRequest::new(vec![
    ChatMessage::system("You are a professional English grammar expert. Analyze the grammar of the given text and provide a rating."),
    ChatMessage::user(format!("Please rate the grammar of this text: '{}'", text))
]).append_tool(grammar_tool);

// ...and execute it
let chat_res = client.exec_chat("gpt-4o-mini", chat_req, None).await?;

The result will include some JSON like:

{
    "rating": 10,
    "explanation": "This sentence is grammatically perfect..."
}

…and we’re done. We just used function calling to get structured data, with no intention of calling any functions. This is much nicer and more reliable than string parsing on the raw chat output.

This approach is probably obvious to many people, but it was unintuitive to me at first; I think “function calling” is a misleading name for this functionality that can be used for so much more.

Alternative Approaches

This isn’t the only way to get structured data out of an LLM; OpenAI supports Structured Outputs, and Gemini lets you specify a response schema. But for Anthropic, it seems like function calling is still recommended:

Tools do not necessarily need to be client-side functions — you can use tools anytime you want the model to return JSON output that follows a provided schema.

I tried to use Automerge again, and failed.

For those of you who aren’t familiar, Automerge is a neat library that helps with building collaborative and local-first applications. It’s pretty cool! I work on a collaborative notes application that does not handle concurrent edits very well, and Automerge is one of the main contenders for improving that situation.

I gave Automerge a try in 2023 and wasn’t able to get it working, to my chagrin. This weekend there was an event in Vancouver for local-first software with one of the main Automerge authors, so I decided to attend and give it another try. I made a fair bit of progress, but ultimately gave up after spending ~5 hours on the problem. A few thoughts+observations:

I am going off the beaten path (web)

Automerge’s “golden path” is web apps. The core of Automerge is written in Rust, but it’s primarily used via WASM in the browser.

This approach is unpleasant for me; I like Rust, I have a good understanding of how code runs+executes on a “real computer”, and I do not want to write an application where 99% of the business logic runs in the browser. Instead, I tried to write an application where my Rust backend was the primary Automerge node and browser/JS Automerge nodes would talk to it.

This did not go well; the documentation and ergonomics of the Rust library are lacking, and most tutorials assume that you are using the JS wrapper around the Rust library. And then when I tried to use the JS version in my simple web UI, the docs assumed a level of web development sophistication that I don’t have.

To be clear, this is mostly a me problem: primarily targeting the browser is absolutely the way to go in 2025!

I am going off the beaten path (local-first)

Automerge tries to solve a lot of problems related to local-first software. But I wanted to “start small” and solve the problem of concurrent text editing for an application that isn’t local-first. In retrospect this was a mistake; the documentation was written for a very different audience than me, and I wasn’t especially aligned with what other people at the event were building.

Chrome is winning

Something that was discussed at the event: if you are building entirely in-browser local-first applications you may want to target Chrome, because Firefox is way behind on several new+useful APIs. This is sad, but not surprising.

What next?

I think it’s possible to build an Automerge-based collaborative text editor the way I want, but it’s a lot harder than I expected. I’m going to shelve this and revisit it next time I have time+energy to hack on it.
