A notebook about our connected future by Danilo Campos.
  • How to get help from a robot

    Imagine you’re in a cave.

    In every direction, passages and galleries lead away into the dark. There are occasional dots of light in the murky depths, but most of the space around you is steeped in shadow.

    You have somewhere to get within this cave system. You have to define a path between where you are and where you want to go. Given the many branches, the darkness, and the sheer ground to cover, this could take a while.

    Now imagine a pouch on your hip. Inside it is a robot. You can remove the robot and send it off to help map the space for you. You can define its strategy pretty flexibly, following your existing hunches. You can also ask it to survey the darkness, giving you the lay of the land.

    If the robot needs tools to aid its explorations, you can attach them and the robot can use them. If the robot needs information, like existing data about the cave, you can provide it.

    And you can deploy this robot over and over until you’ve covered the territory you need to move confidently.

    You can have this fantasy today, using so-called “agents” driven by LLMs.

    This year, LLM-based agents crossed a threshold in maturity: the technology is now ready to help you solve loads of problems, directly manipulating files and resources on your behalf. They’re comfortable to use and easy to get started with.

    If you are outside this trend, that AI detail might give you pause. You’ve heard a lot about LLM-based products like ChatGPT, and not all of it is great. LLMs are sometimes said to be little more than autocomplete, or prone to hallucinations. You’ve heard that LLMs, applied stupidly, have invented everything from fake case law to fake airline cancellation policy.

    LLMs, from this perspective, might seem unpredictable and inconvenient.

    An ox is willful, and may not always want to plow the field. But combine the ox with a harness and yoke, and now its energy can be reliably focused on a productive task.

    An LLM is much like the ox here, and while its output isn’t perfect, its stamina is. An agent yokes an LLM to both other tools and to a workflow that can dramatically improve its correctness. Correctness can improve further, through the strategic introduction of existing information.

    This isn’t a tool that replaces humans. Properly applied, it’s a tool that amplifies our imagination, discovery and ambition, creating more leverage for our finite time on this planet.

    Most importantly, I think it’s best to see agents as things that explore on your behalf, rather than things that create for you. The agent is best applied as a mechanism for deepening your understanding of a problem, addressing the tedious bits you’d rather not do yourself.

    An agent might nonetheless create a ton of output for you! They’re great for prototypes, boilerplate, and creating structures you’ve defined carefully.

    But the only way to get output you’re happy and confident with is by really understanding the problem you’re working on. Arguments about the usefulness of LLMs often miss this point: agents are incredible tools for figuring out your problems quickly and plumbing their depths efficiently.

    Here’s how to use agents: basic concepts, plus some concrete guidance with tools that work.

    Anatomy of an agent workflow

    Using an agent requires understanding a sandwich of different components, each with their own limitations and leverage. Here’s a quick (and incomplete) survey.

    The model

    The (large language) model is the motive power behind your agent. Think of it as a lossy, biased, incomplete snapshot of human knowledge and culture.

    LLMs are expensive to “train.” It takes time and significant computing resources to create one. The result is something both impressive and quite rigid.

    A completed model can process information with a staggering variety of structures, from plain language to any number of coding languages, and even some file formats. It can generate structured information just as well. A surprising quality of LLMs is how flexible they are at interpreting and replicating all kinds of patterns.

    But the models themselves aren’t changeable. They’re a brick edifice.

    The context

    Information is physical: an LLM runs inside systems with physical constraints, with ceilings imposed by things like memory chips. Context describes the memory limit allocated for a model to do its work in.

    Context is finite.

    By contrast to the model, context is also completely malleable. You can put anything you want in there. Part of the strategy for working with agents is ensuring that your context contains enough information to tilt the model in the direction of productive work. Populating context strategically with examples, reference files, and even instructions can productively bias the model so that it’s more likely to do the thing you want. Many agents give you the tools to add entire files to the context, and this is very handy.

    But remember: context is finite. You can’t stuff the world into it. You need to provide just enough to cue the model while leaving room for actually producing work.

    Because you’ll use that room for things you say to the model, and things it says back to you.

    Unlike the solid brick of the model, the context is more like a sponge. It’s malleable, it can absorb a lot, but it has limits.
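
    To make that budgeting concrete, here’s a minimal Swift sketch of the idea. Everything in it is invented for illustration: the file names, the budget numbers, and the crude token estimate (real tools use the model’s own tokenizer).

    import Foundation

    // A toy illustration of filling a finite context window with reference material.
    // estimateTokens is a crude stand-in; real tools use the model's own tokenizer.
    func estimateTokens(_ text: String) -> Int {
        text.count / 4   // rough rule of thumb: a few characters per token
    }

    let contextBudget = 100_000            // hypothetical window size, in tokens
    let reservedForConversation = 40_000   // leave room for the back-and-forth

    // Hypothetical reference files you'd like the model to lean on.
    let referenceFiles = ["SPEC.md", "notes/approach.md", "examples/good-output.md"]

    var context: [String] = []
    var used = 0

    for path in referenceFiles {
        guard let text = try? String(contentsOfFile: path, encoding: .utf8) else { continue }
        let cost = estimateTokens(text)
        // Stop adding references once they would crowd out the actual work.
        if used + cost > contextBudget - reservedForConversation { break }
        context.append(text)
        used += cost
    }

    print("Loaded \(context.count) reference files, using \(used) of \(contextBudget) tokens.")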

    The agent

    An “agent” is a harness for an LLM. In simple terms, it runs the LLM in a loop, continually prompting it with either automated input, or your own text, so the model can use various tools.

    But this harness is extensible. Agents can swap models, allowing them to improve as new models come online.

    Moreover, agents have convenient interfaces for getting information in and out of the LLM. A common approach to this is a file editor: the same file you’re looking at is also fed to the model. This can get fancy, even piping in details like which lines you’ve got selected. Click and drag over a passage and you can just talk to the agent as though it can see you gesturing at an object.

    These niceties are where an agent differentiates itself. Such a product can earn our allegiance through its reliability, conveniences and ease of use.
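
    In rough terms, the loop inside that harness looks something like the Swift sketch below. Every name in it is a stand-in I’ve made up, not any particular product’s API.

    // A toy agent loop. Every name here is a stand-in, not any real product's API.

    struct Reply {
        let text: String
        let toolCall: String?   // e.g. "read README.md"; nil when the model just answers
    }

    // Stubs so the shape of the loop is visible; a real harness calls an actual model and real tools.
    func callModel(_ context: [String]) -> Reply {
        Reply(text: "Done: renamed every reference.", toolCall: nil)
    }
    func runTool(_ call: String) -> String { "output of: \(call)" }

    var context = ["Rename every reference to the old project name."]   // your request

    while true {
        let reply = callModel(context)        // the LLM proposes the next step
        context.append(reply.text)

        guard let call = reply.toolCall else { break }   // no tool requested: the loop ends
        context.append(runTool(call))         // the harness runs the tool, feeds the result back in
    }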

    The tools

    Agents integrate other services using MCP, short for Model Context Protocol. MCP enables populating the context with information from outside your agent. You could pull in spreadsheet data, documentation, you name it. Any existing service can provide an MCP server. Even if it doesn’t, you can use its existing APIs to build an MCP server yourself. Ask an agent to build it for you!

    But MCP travels both ways: agents can use MCP to manipulate external resources on your behalf. An MCP server could also update a spreadsheet based on your instructions to an agent and the contents of your context.

    MCP servers extend the reach of your agent into other products, domains and data sources. Of all these components, MCP may be the most exciting because you might just use it in a way no one has thought of yet. Multiple MCP connections can be combined in one session, letting your agent act as a mixing valve between distinct services and data sources.
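
    Under the hood, MCP messages are JSON-RPC: small structured requests from the agent to a server, with structured results coming back. As a rough illustration, a request asking a hypothetical spreadsheet server to update a cell would look something like this Swift snippet produces (the tool name and arguments are invented):

    import Foundation

    // Roughly the shape of an MCP tool invocation; MCP messages are JSON-RPC 2.0.
    // The tool name "update_cell" and its arguments are invented for illustration.
    let request: [String: Any] = [
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": [
            "name": "update_cell",
            "arguments": ["sheet": "Budget", "cell": "B7", "value": "1250"]
        ] as [String: Any]
    ]

    if let data = try? JSONSerialization.data(withJSONObject: request, options: [.prettyPrinted]),
       let json = String(data: data, encoding: .utf8) {
        print(json)   // this is what travels from the agent to the MCP server
    }

    The server does the actual work against the real spreadsheet, then sends back a structured result the agent can drop into its context.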

    Put it to use

    My favorite agent tool right now is Claude Code. It runs in your terminal, giving you a robot butler that can do a huge amount of stuff with your projects, and help you solve problems with your tools and computer besides.

    Probably the easiest way to start using it is to grab Visual Studio Code and add the Claude Code extension.

    You choose the places Claude Code has access to, either by opening a folder in VS Code, or using your terminal to navigate somewhere specific, as in:

    cd ~/Documents/my-big-project

    Then you just invoke the tool with one command:

    claude

    From here Claude Code can analyze existing files and projects, building summaries for you—or even for itself, to help in future runs.

    You can point Claude Code to specific files by using the @ symbol: you’ll get an auto-completion menu. Adding documentation and other references to your project, then pointing Claude Code in their direction, can dramatically calibrate its output. 1

    With Claude Code running, the /init command builds a reference file based on a deep dive of the whole project.

    If you need to come back to an existing conversation, use the /resume command.

    Claude Code can also run commands on your behalf. If you’re having problems with version control, or don’t want to learn more than the basics of git, this tool can help you work out the one-off incantations you need for more advanced troubleshooting.

    Claude Code is also great at interpreting errors for you. You can paste log spew, or even ask it to diagnose errors from the commands it runs for you.

    Connect Claude Code to other tools and experiment. If you love Notion, Airtable or Obsidian, MCP servers exist to let your data feed into Claude Code easily, and for the tool to collaborate with you.

    How do I know it’s doing the right thing?

    This is the most important skill to develop in working with these tools: getting to correctness.

    The agent can do a lot, but it will do what you say, not what you mean. So taking some time to really think through your problem, and documenting that thinking for the robot, can be valuable. I like to do this by creating a new file in the project directory. Plain text or Markdown works fine. From here I use writing to think through the problem: why I’m working on something, what I hope to achieve, and the specific approaches I feel are valid.

    In another age, this would be a design document or specification. But here your audience is the robot, and perhaps yourself in the future.

    You don’t need to be exhaustive. In fact, you can work on this document iteratively, enhancing it as the robot tries things for you and your understanding improves.

    If you use Claude Code inside of an editor, you can review changes and new files as they’re generated. You edit them directly as these changes come in, and you can reject directions you don’t like. I’d give this editor/Claude Code combo a try even if you’re not working on code-specific projects. It’s a very handy way to work with all kinds of problems, giving you a tidy menu of files, multiple editing tabs, and your agent right next to it all.

    And again, some of the best uses of the agent don’t demand perfect correctness: one-off scripts that query an API, prototypes that validate your thinking and expectations, frameworks and templates built from existing examples… you can get so much from a robot spelunking the depths of your computer, reading and writing files, and helping you understand the troubleshooting leverage in front of you.

    There’s no guarantee the robot will explore the problem space perfectly, nor create perfect solutions. But you’ve got the same constraints. The robot’s advantage is its speed and stamina. Combine it with your discernment and experience, and find your powers amplified.


    1. Pro-tip: robots can help robots. You can use an LLM research product, like Claude or ChatGPT in “deep research mode” on the web, to gather supporting information for you. These tools can scour message boards for up-to-date information that’s not otherwise collected anywhere. You can then ask them to output this research into a file you can download, add to your project, and pull into your Claude Code context. This is really helpful for cutting-edge tech with poor documentation.


  • The Expanse guys return to economics

    I remember the first time I read about the colonial history of Puerto Rico, where I was born.

    It changed me forever.

    Passed to the United States in 1898 after the end of the Spanish-American War, Puerto Rico was a fertile jewel of the Caribbean. Its soil was productive, it wasn’t far from the mainland, and it could yield exotic crops like coffee and sugarcane that the US could not produce on its own.

    The oppression that followed was motivated by economics. The island’s autonomy, resources and local control were steadily stripped away to serve a Wall Street hungry for profits. A place that was once agriculturally self-sufficient became completely dependent on imports, as soil was dedicated exclusively to the cash crops and profits of distant corporations.

    Puerto Rico wasn’t broken for no reason. It was taken because its labor and resources had value. This animated decades of conflict, trauma and economic squalor. In the most painful way possible, this explained so much of what, in childhood, had felt so inexplicable.

    In their book and television series The Expanse, Daniel Abraham and Ty Franck—“James S.A. Corey”—continue in the tradition of George R.R. Martin, spinning a tale of war and conquest informed by economic incentive. It makes the stories both relatable and instructive: their engine is based on systems and pitfalls we can see in our own lives.

    It’s to their credit that in their new series, The Captive’s War, they continue the lesson.

    Even if it is, at times, a grueling march.

    Why are you doing this to me?

    The Mercy of Gods starts us off and I have to warn you that this is not a very fun time. Which is not to say that it is time wasted.

    But there’s a lot here that hurts because it’s so clearly drawn from human history. This is a story of conquest, slavery and imperial brutality.

    The book explores the subject of humans as chattel, including clear parallels to the Middle Passage experience of the Atlantic slave trade.

    It’s not a spoiler to tell you that there isn’t a happy ending here. At least not in book one of the series. Our characters are subject to vast and overwhelming forces, and they have neither the tools nor the context to fight off their imperial oppressor.

    Sometimes the good guys don’t prevail against the empire. Sometimes the good guys are swallowed up like a whale eating krill.

    History has many stories where empires behave with impunity, getting away with their cruelty and exploitation, for decades or centuries. Slavery, colonialism, the destruction of indigenous peoples… there’s not a lot of silver lining to find there. Just blood, sadness and generational trauma.

    Because science fiction is about truth without preconception

    As a vehicle for how the world works, I think this book delivers. I’ll be honest: I wanted to bail multiple times.

    I was mad at Ty and Daniel for making me read something so sad and brutal, so darkly drawn from a past I’d rather us leave behind. I was mad that they weren’t using their considerable skill for storytelling to do something… brighter.

    At the same time, the pages kept turning. I always wanted to know what happened next. With so many books and episodes to their credit, this team surely knows how to keep us engaged.

    And by the end: they got me. By the end, the mirror they held up not just to history, but to the present, was devastating.

    Instructive.

    Daniel and Ty spend four hundred pages presenting an indictment against all of the modern world. The arrangements we tolerate, the atrocity we look past, the economically-driven cruelty we’re born into… the book is a searing rebuke of all of it.

    We should be outraged. We should be fighting the sense of constant precarity and exploitation we see everywhere. Our utility should not be the measure of the dignity we enjoy.

    But like the characters in this story, we are outnumbered, outgunned, and so very tired. We just want to make it through without losing too much of what we love.

    Meanwhile, we’re cells in a body that doesn’t especially care if any one of us lives or dies. It just wants to keep eating and growing.

    Dammit, buy the book and survive its pages if you can.


  • Where's the agent debugger?

    A computer is a zoetrope: an illusion of persistence and coherence, built on untold numbers of individual frames of detail.

    Even as you read this, beneath the surface of your machine endless lines of code zoom past, at rates so extraordinary the human mind can’t comprehend them. Every second, a modern computer churns through billions of cycles.

    This is great when everything is working.

    But in the history of everything that works, there’s a time when it didn’t. And to get a piece of code out of that place, you’ll need to do some debugging.

    Every crash a crime scene

    When your program crashes, something did it. Something is responsible. Something came smashing into the assumptions that all of your code is relying upon.

    Sometimes it’s simple stuff: we’re missing a value that we expected to exist, and we’ve written no code to handle its absence. Or, we’ve tried to grab the third member of a collection that has just two objects.

    Sometimes it’s much messier: trying to touch memory that no longer belongs to us. Trying to use an object that no longer exists. All of these are violations of the simple contract for reality that our code is built around.

    Its universe shattered, the program has no choice but to end.

    Other bugs are less catastrophic: a mysterious, transient freeze. Something happens twice that should only happen once. Something is missing that really should be present.

    Building software is an endless soap opera of whodunnits where the detective and the perpetrator are often the same person: the luckless programmer trying to make sense of many stacked layers of opaque but powerful computing abstraction.

    It’s hard to reason about so many interlocking parts moving so fast. Past a certain level of complexity, you just can’t keep it all inside your head.

    So, somehow, we have to investigate, shining light into black boxes of our own creation.

    Cowboy style

    The easiest way to debug code you control is to add logging.

    print("Here's the part of the code I really care about. It happened!")

    When the program runs, such log messages print out in a terminal, usually with a timestamp attached by your logging setup.

    To continue the zoetrope analogy, this is like sticking a note on a specific frame and then observing whether and how often you see that note later in the program’s run.

    You can get really far with this approach: depending on how much logging you add, you can develop a clear sense of what’s happening with your program at any given moment.

    But there are also drawbacks: logs can only answer questions that you think of in advance. Worse, the more logging you add, the harder it is to find the one line you actually care about at any given moment.

    Logs are a way of reflecting the internal state of a program, but all they can ever be is a one-way firehose of information. Sometimes that isn’t enough.

    The interactive debugger

    A debugger hands you the reins on the galloping horse that is your program. Instead of following a dizzying path of instructions after they happen, the debugger is an invitation to intervene.

    With a debugger, the zoetrope can stop altogether. Your hand is on the wheel, advancing it at whim. You can move through your program line by line, inspecting which code is being run. Every scrap of data in scope for that code is visible too, allowing you to reason through why something is breaking or behaving unexpectedly.

    Instead of the processor dictating the pace, you control reality. It’s essentially the power of Neo in The Matrix: you can slow time and act upon objects inside the system.

    Even the ones that move very fast.

    An experienced code detective traps their quarry using breakpoints: flags in the program that tell the debugger to stop at a specific place. If your gut says that your crash has something to do with making a network request, you might break at the point where the request is triggered, and break again at the point where the result is written to disk.

    Breakpoints allow the program to whisk you directly to the crime scene. There’s little upside to reviewing all the lines of code you know are working correctly. Instead, you arrive briskly at the area of your investigation again and again, run after run, testing changes and reviewing their consequences.
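
    Here’s what that hunch looks like in a small, invented Swift example. The functions are made up; the comments mark where the two breakpoints would go.

    import Foundation

    // An invented example showing where the two breakpoints from that hunch would go.
    func fetchReport(from url: URL, completion: @escaping (Data?) -> Void) {
        // Breakpoint 1: pause here, just before the request fires, and inspect `url`.
        let task = URLSession.shared.dataTask(with: url) { data, _, _ in
            completion(data)
        }
        task.resume()
    }

    func save(_ data: Data, to fileURL: URL) {
        // Breakpoint 2: pause here, just before the result hits disk, and inspect `data`.
        try? data.write(to: fileURL)
    }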

    Of all the programming skill sets, I think debugging may be the most consequential. Not just understanding the tool, but developing an intuition for how to use it. What is opaque from the outside can become obvious when you step through the code line by line. Interactive debugging gives you more control, more information.

    This is even more important in the age of machine-written code. So far, most people’s robots can’t do this at all.

    What if the robot did it

    In the past, you were the likely perpetrator of your code crimes.

    Today, you might also have a robot accomplice. LLMs can extrude code at a dizzying pace, but all of it requires checking and error correction.

    Type systems and linters are a powerful first line of defense against machine errors. If a symbol or function is out of date or simply does not exist, feedback to that effect can be immediately reported, compelling a coding agent to re-work its output, look things up, and otherwise correct itself.

    If your next step is a compiler, you get even more error correction fodder. A compiler provides (somewhat) clear feedback about where a problem exists: the file, line number, and the broken expectation. Again, this gives an agent plenty to work with. Compiler output provides an agent with more leads, like which libraries it needs to examine more carefully.
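
    Say the model hallucinates a helper that was never written. The Swift snippet below is intentionally broken, and the compiler’s complaint (quoted approximately in the comment) hands the agent exactly that kind of lead:

    // Intentionally broken machine-written code: `sumarize` does not exist anywhere.
    let orderTotals = [12.50, 8.00, 23.75]
    let report = sumarize(orderTotals)
    // The compiler answers with a file, a line number, and the broken expectation,
    // roughly: "error: cannot find 'sumarize' in scope". That's plenty for an agent
    // to act on: fix the name, or go look the missing symbol up.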

    But other errors are more subtle, emerging only at runtime: say, a function in one thread that touches memory in another.

    For such errors, the fastest way to track them down is to fire up the debugger and step through the code.

    But agents don’t know how to do that. Agents debug cowboy-style, shitting logs all over your program and making you paste back what they see. It’s crude stuff.

    There’s no reason it needs to work this way. The consequential information from a debugger could be piped into an agent’s context, and it could navigate the program much like a human developer: setting breakpoints, stepping over and into code. From these investigations, the agent could propose fixes and architectural improvements.
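
    Sketched in Swift, the plumbing might look something like this. Every type and function here is imaginary; no such product exists yet, which is the point.

    // Imaginary glue between a debugger and an agent. Nothing here is a real API;
    // it's only a sketch of the information flow described above.

    protocol DebugSession {
        func setBreakpoint(file: String, line: Int)
        func run()
        func stepOver()
        func currentFrame() -> String   // e.g. file, line, and the variables in scope
    }

    func investigate(_ session: DebugSession, agentContext: inout [String]) {
        session.setBreakpoint(file: "SyncEngine.swift", line: 112)   // invented location
        session.run()

        for _ in 0..<20 {
            // Pipe what the debugger sees into the agent's context...
            agentContext.append(session.currentFrame())
            // ...so the model can reason about live state, then advance one line.
            session.stepOver()
        }
    }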

    When such a tool enters common use, on the scale of Claude Code or Cursor, that’s going to be a leap forward in the trustworthiness and effectiveness of agent coding systems. Not to mention their usefulness.

    Beyond checking and improving its own output, an agent that could walk you through program execution, explaining why things work the way they do, would be powerful indeed.

    But, peering as they do inside of running code, debuggers can be used for lots of things. The security implications of such a tool are surely complicated as well. What if you could set an agent loose on cracking someone’s serial number registration code?

    Life in a paradigm shift, man.


  • LLMs: the opposable thumb of computing

    Our thumbs are opposable: they can, with both strength and precision, touch any part of any other finger. We can grip things, pinch them, hold them carefully or loosely.

    The existence of our thumb makes our hand more useful, more dynamic, and importantly: compatible with a wide range of complex tasks. We can smelt metals, we can play the viola, we can sauté a delicious pan of peppers.

    We can hunt, we can defend ourselves, and we can write down our thoughts.

    All of this, plus every other trade and hobby, thanks to the thumb. Take it away and we are much clumsier, more helpless. Fingers alone can pull and push, but they just aren’t enough for the kind of complexity we built a civilization around.

    With the thumb, our hand is a tool for reshaping the universe. And for millennia, for better or worse, we’ve been doing just that.

    Amidst all the hype, all the skepticism, all the rending of clothes, I am here to tell you:

    The LLM is to computing what the thumb is to our hand.

    The power of for

    for is a fulcrum.

    On one side of the lever, a method of counting. How long are we doing this? Under what conditions do we stop? How many steps per turn?

    On the other side, work.

    Each time we do something, what are we doing?

    Programming uses loops a lot. They’re the essential grammar of computing strategy, since the processor itself is running on a loop.

    Just as prepositions tend to be short words, the syntax for writing a loop is terse.

    Like math, this is one of those places where learning concepts at the same time you’re learning their symbols just makes life harder. I had the damnedest time interpreting loops as a beginning programmer. C-style loops might as well have been hieroglyphics. 1

    for (i = 0; i < 10; i++) {
    	//Do something
    }

    If I’d started with Swift, the syntax would have been much clearer:

    for variety in burritoVarieties {
    	//Variety-specific code
    }

    So we have a collection: burritoVarieties. If we iterate through that collection, for each variety, our code can take unique directions.

    A paper-wrapped, green chile-filled burrito looks very different from a smothered, red chile situation. They’d need different photos, their costs are different.
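
    Filled in with some invented menu details, that variety-specific code might look like:

    // An invented menu model, just to show the loop taking different directions per item.
    struct BurritoVariety {
        let name: String
        let price: Double
        let photoName: String
        let isSmothered: Bool
    }

    let burritoVarieties = [
        BurritoVariety(name: "Green chile, paper-wrapped", price: 9.50, photoName: "green-wrapped", isSmothered: false),
        BurritoVariety(name: "Red chile, smothered", price: 11.00, photoName: "red-smothered", isSmothered: true),
    ]

    for variety in burritoVarieties {
        // Variety-specific code: each pass through the loop handles one burrito's details.
        print("\(variety.name): $\(variety.price), photo: \(variety.photoName)")
        if variety.isSmothered {
            print("  Serve on a plate, not in a foil wrapper.")
        }
    }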

    Your takeout app is a roiling sea of loops.

    This is the central premise of conventional computing. Developers write loops that might occur on human timescales, or might be much faster, controlling radios and network behavior at a dizzying rate.

    Deep at the level of miraculous miniaturization, many billions of cycles conclude every second, as instructions churn through microscopic transistors carved into silicon by beams of light.

    And for decades, that was the story. Learn to work with loops, don’t waste resources inside them, create code that adapts to different situations. Such code is more or less deterministic: you run it 100 times, you get 100 identical results.

    This infinite-iteration paradigm of computing has given us everything from ballistic missiles to email to the Facebook algorithm. You can do a lot with it.

    But it’s fundamentally constrained: you have to write rigid code and anticipate all the conditions it will face. What if there were another way?

    Enter ‘AI’

    People are freaking out.

    I mean the AI discourse is just rancid stuff. People are charged on this topic.

    On some level: I get it.

    The rules are changing. We’re watching a churn that could be as consequential as the microprocessor, which kicked off a 50-year supercycle that’s still playing out.

    But the ways they’re changing are weird.

    After a generation of prosperity and endless career growth, technology workers have faced years of layoffs, stiff competition for roles, and declining flexibility from employers. What was once a safe and growing pie feels like it’s shrinking.

    AI, with its claims of labor savings, arrives at the worst possible moment, compounding these headwinds, and handing perceived leverage to the cost-cutter case.

    AI is seen as a business lotion: slather it on, get better results.

    But the way it actually works is this:

    You give up the deterministic clarity of your for loops.

    In the AI age, we all have the choice to wield a very flexible, somewhat unpredictable technology.

    In trade, you get much greater range of motion. All you have to do is describe, in natural language, what your goals are. In return, large language models can both interpret and generate endless patterns of structured information.

    Instead of a fulcrum, you have a djinn conjuring something that might solve your problem. If you’re cautious, if you’re thoughtful, if your desires are realistically constrained, it can actually happen.

    This ambiguity requires thoughtful strategy to harness, and that strategy is in short supply. So you have a lot of sloppy chaos following AI deployments around like hungry ticks on a dog.

    The one where an LLM invents new corporate policy remains my favorite category of this failure.

    Like all emerging technologies, it’s just not obvious at the beginning how to best use this stuff.

    Still, the power of LLMs to reshape things is formidable at scale. That’s just a lot of djinns conjuring a lot of desire. It’s unsettling stuff.

    But it’s also the destiny of computing to arrive at this place. Now begins the work to make sense of it.

    Thumb and fingers

    The true power of LLMs emerges when they are combined with conventional computing: run in a loop.

    For example:

    • An LLM may generate incorrect code
    • A linter will catch and report errors as they’re written
    • With this error-correction data, the LLM can run again

    This loop can continue until all errors are resolved.
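
    In a Swift sketch, with the model and linter calls as made-up stand-ins, the whole arrangement is about this simple:

    // Stand-ins for the real components: an LLM call and a linter run.
    func generateCode(prompt: String, feedback: [String]) -> String { "generated source code" }
    func lintErrors(in source: String) -> [String] { [] }   // empty means the code passed

    let prompt = "Write a function that parses the order history CSV."   // invented task
    var feedback: [String] = []
    var source = generateCode(prompt: prompt, feedback: feedback)

    // The conventional-computing finger (a loop) pinching against the LLM thumb:
    // keep handing the linter's complaints back to the model until none remain.
    var attempts = 0
    while attempts < 5 {
        feedback = lintErrors(in: source)
        if feedback.isEmpty { break }                    // all errors resolved
        source = generateCode(prompt: prompt, feedback: feedback)
        attempts += 1
    }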

    This is the agent revolution now underway, and it’s poised to change how code gets made. It’s also a great demonstration of how the fingers of conventional computing pinch against the thumb of the LLM.

    Some example fingers:

    • Variables and constants, to precisely map a value to an identity
    • Logic to compare values
    • Algorithms to transform values
    • Loops executing code in sequence
    • State machines, keeping track of a system or complex operation

    That’s a lot of power on its own. But add the thumb:

    • Interpret unstructured or unanticipated input
    • Transform bodies of text
    • Create new text to be interpreted by conventional computers
    • Read an existing pattern of text, then extend it

    The combination of fingers and thumb unlocks an all new epoch for how human imagination solves problems through computing. The rigidity of conventional, deterministic code pinches against the flexible grip of the LLM.

    The result is capabilities that will take a long time to fully explore.

    Come with me if you want to live

    This kind of flexibility has been the quest of computing for as long as we’ve had it. Alan Turing, a father of modern computing, proposed a test he called “The Imitation Game” specifically in anticipation of this capacity. In 1950.

    Now we’ve arrived: not just a dull, mechanistic automaton, but a machine capable of adjusting its approach even to novel, unanticipated input.

    Again: that’s a little scary, right?

    An immediate toxic use case for this new power: poisoning every online conversation with fake people. You can just do that now. If you can afford the computing capacity, you can saturate the replies of any conversation.

    The spam consequences are numerous, and touch every layer of communication. But they’re nothing compared to the larger threats: ubiquitous state surveillance. This opens the door to monitoring and cataloging every kind of communication.

    To say nothing of applying these technologies to war. From psychological operations to target selection, this thumb creates all new opportunities for destruction and mayhem.

    But this happens with every tool. The great wars of the 20th century were exercises in heinous experimentation: combining the new powers of industrial technology and fossil fuels with the ancient rites of our worst impulses.

    Yet, World War II gave us the dawn of modern computing. It gave us all new ways to harness physics to solve our energy problems. It gave us modern antibiotics.

    Humanity is indeed terrifying in its destructive capacity. And since before civilization itself, our tools have been turned toward truly blood-soaked ends.

    But we also use our tools to invent musical instruments, new foods, new medical advancements. Our tools have created the sublime, from fine art to cinema to our favorite video games.

    There are genuinely good reasons to fear our technological capacity. But it’s up to us to turn every tool toward its best and most fruitful end. The advent of the LLM has opened a Pandora’s box, and with it a new chapter of what a computer can accomplish.

    We know the assholes are going to build something scary. They always do. But what sublime accomplishments could be made with this all new power instead? I don’t think putting our heads in the sand is a reasonable response to the downsides.

    I think we have to be the authors of the upside.


    1. This code starts at zero, and increments until it reaches ten. Then the loop exits. In each stage of the loop, my code can know how far along the process I am by reading the value of i, and respond accordingly. Again: a little opaque, but it’s written simply because it’s written constantly.


  • Microsoft's game

    Lego is an ingenious system for building and creation.

    You can learn to use Lego’s system in seconds:

    • Snap bricks together in stacks
    • Separate stacks to disassemble

    Step-by-step guides walk you through complex models, but you can ignore them and build what you want with any given pile of bricks.

    The system depends on a grid of pips. The bottom face of a brick sheathes these pips, gripping them snugly enough to build on, but loose enough for a child to separate.

    Lego bricks are sturdy, injection-molded ABS plastic designed to be snapped and un-snapped thousands of times without losing grip. They’re made to exacting standards, and have been for generations. Unearth a pile of bricks from 1987 and you’ll have no problem using them alongside a kit fresh from the store.

    Every design comes with tradeoffs. Here are Lego’s:

    • Right angles everywhere
    • Vertical construction is effortless; horizontal construction requires serious planning and cleverness
    • You can build anything to centimeter precision; Lego holds the monopoly on millimeter resolution

    It’s in the high-detail pieces where Lego’s design language transitions, from dull geometry to a gift treasured by all ages. Scandinavian minimalism defines it all, from spaceship antennas to medieval castle torches. The minifigs, the helmets, the hairpieces, the greebles that adorn your incredible machine…

    You can detect Lego-ness instantly, no matter the theme or scale.

    It’s the classic platform bargain:

    The vendor makes specific paths easy, while reserving unique control for themselves. They get to make certain decisions that unify all activity, scalably influencing every single project.

    And if you understand Lego, you have everything you need to interpret Microsoft.

    In the beginning

    Microsoft is a leviathan worth your study.

    It has existed for half a century, shaping modern computing from the dawn of the age. Investors value Microsoft at more than three trillion dollars. For a sense of scale, France’s 2023 GDP was about even with this, and it’s 75% of what California produced in 2024.

    Microsoft is world-historically big.

    To achieve this, Microsoft captured and held an invaluable position: deciding how software would be written.

    Early computing was very different from what we know today.

    Microprocessors were new and no one knew exactly what to do with them. For a few thousand (inflation-adjusted) dollars, anyone could buy a computer and try to tell it what to do.

    This required programming languages: conventions defining what you could say to the computer, and how you would say it.

    Bill Gates and his coterie were among the first to understand exactly how you could bridge the world of instruction signals with anything a human could learn in an afternoon. They provided language interpreters for early computers.

    As computers increased in complexity, Microsoft kept pace, building the operating systems and APIs that would eventually let developers target the majority of the world’s computing hardware.

    Microsoft paved the first roads to the electronic frontier.

    A good story about Microsoft

    If you want to understand Microsoft, I like this story about the Xbox.

    The short version is that Sony totally freaked them out.

    It’s one thing to ship a plastic toy that takes plastic cartridges and plays glorified arcade games, as Nintendo had been doing.

    But by 1995, Sony was elevating the living room toward something that looked like computing: optical disks, storage, and eventually: networking.

    As leviathans go, Microsoft has a totalizing appetite. For its stratagems to work, it needs to influence as much of the game board as possible. If they didn’t act, the home could fracture: Microsoft handling the bills and correspondence, Sony owning the fun.

    And who knows where it goes from there.

    For me, this is unusually savvy for a bunch of suits to have figured out, much less taken action upon. Just think about it like this:

    How often do you really need to upgrade a computer that only does spreadsheets and emails? Ceding the future of entertainment would slow down the entire metabolism of PC upgrades, which is no good if you’re selling everything from operating systems to productivity tools.

    Not to mention the OS where most games are published.

    So Microsoft decided to enter the fight with their own console.

    They walked in with a 25-year head start on shaping how developers make software. By the time they were pondering Xbox, this included a deep stack of graphics programming and input technologies. They could pivot the tools and APIs developers were already using to build games for Windows into a new, dedicated platform.

    To do it, they had to do something they’d largely avoided: building and shipping hardware.

    Non-rivalrous goods

    If I’m holding a mug of root beer, I’m the only one who gets to drink it. If I finish that mug without sharing, you don’t get any unless you can find your own.

    But if I’m holding a CD-ROM of software, I can install it, then pass it on to you. 1

    Which means that Microsoft can make loads of money: they can come up with something valuable to put on that CD-ROM, and then sell it to as many people as they can convince to buy it.

    They created and held the perfect position on the game board. They were essential to building computers, but didn’t have to build any themselves.

    Like getting paid to imagine a tire every time someone bought a car.

    It’s great work if you can get it. So you can see why Microsoft would be reluctant to ever dirty their hands with anything so risky as putting an object in a box and waiting for the phone to ring.

    But to hold the living room, they had to play the role only a manufacturer can: hardware cost control.

    The Xbox cost $540 (inflation-adjusted to 2025), which made it competitive with a PlayStation 2. This was a good deal cheaper than all but the crummiest PCs, and unlike those, it could play games pretty well.

    They built it fast, using off-the-shelf PC components. They created a thin container of an operating system to host their graphics libraries, handle storage, and process I/O.

    And it worked. It was an instantly competitive console, and the existing body of Windows game developers had a much shorter path to the living room than ever before.

    Microsoft built this beachhead into a business worth billions annually. More than that, they’ve maintained their prominence as a prime target for game developers.

    And it all came down to their existing advantage: building the tools that make software possible.

    How does this game play in 2025?

    Visual Studio Code is a free tool for building software.

    The cost of building and maintaining it is a pittance compared to the returns: Microsoft owns a default tool for anyone starting a software journey. It’s used by professionals and hobbyists alike.

    So Microsoft continues to influence how software gets made.

    The dream of machine learning models trained on existing software to write more code predates the current age by a lot. People have been waiting for it. I remember a first-week orientation a decade ago where a CEO discussed this vision as far-off but attainable.

    So the moment this approach became viable, Microsoft got to work.

    The first version of GitHub Copilot seems quaint compared to the tools of today, and it’s only been four years. Underwhelming as the tool was at that point, it’s not surprising that Microsoft wanted to be early.

    This leviathan was built to turn totalizing control of the developer experience into money. Historically it did this through controlling tools: languages, interpreters, libraries, development environments. In the age of LLMs, this control now extends to the very structure of new code itself.

    Microsoft is in the position to nudge the everyday work of every developer who relies on their AI products. 50 years into their existence, they are poised to seize a whole new dimension of power, profit and leverage.

    What does this mean?

    Start with feedback loops. Because Microsoft controls so many tools, they’ll be in a position to make them mesh better with LLM workflows, and to make them happier with machine-generated code. Microsoft also owns and controls loads of the GPU hardware needed to make LLMs run. They have the most favorable pricing for these workloads possible.

    So we might expect, for example, that it becomes really easy to prompt into existence a web application built on a resurgent Microsoft stack. It might become really cheap to prototype a game for Windows, because Microsoft builds the scaffolding for LLMs to build them.

    Between owning the hardware that runs LLMs, and the tools that make code happen, Microsoft will be in a position to tilt the whole computing game board so that loads of pieces drift deep into their pockets.

    Of course, that’s if the old rules hold.

    A funny thing about computing: sometimes the game surprises you.


    1. Without guardrails, this turns into “piracy,” according to a legal framework shaped to support companies like Microsoft. But in the context of a corporate sale of hundreds or thousands of licenses, it’s of course quite convenient.

Network Games

©2025 Danilo Campos

"The only game. Survival. When the jungle tears itself down and builds itself into something new."