A notebook about our connected future by Danilo Campos.
  • Where's the agent debugger?

    A computer is a zoetrope: an illusion of persistence and coherence, built on untold numbers of individual frames of detail.

    Even as you read this, beneath the surface of your machine endless lines of code zoom past, at rates so extraordinary the human mind can’t comprehend them. Every second, a modern computer churns through billions of cycles.

    This is great when everything is working.

    But in the history of everything that works, there’s a time where it didn’t. And to get a piece of code out of that place, you’ll need to do some debugging.

    Every crash a crime scene

    When your program crashes, something did it. Something is responsible. Something came smashing into the assumptions that all of your code is relying upon.

    Sometimes it’s simple stuff: we’re missing a value that we expected to exist, and we’ve written no code to handle its absence. Or, we’ve tried to grab the third member of a collection that has just two objects.

    Sometimes it’s much messier: trying to touch memory that no longer belongs to us. Trying to use an object that no longer exists. All of these are violations of the simple contract for reality that our code is built around.

    Its universe shattered, the program has no choice but to end.

    Other bugs are less catastrophic: a mysterious, transient freeze. Something happens twice that should only happen once. Something is missing that really should be present.

    Building software is an endless soap opera of whodunnits where the detective and the perpetrator are often the same person: the luckless programmer trying to make sense of many stacked layers of opaque but powerful computing abstraction.

    It’s hard to reason about so many interlocking parts moving so fast. Past a certain level of complexity, you just can’t keep it all inside your head.

    So, somehow, we have to investigate, shining light into black boxes of our own creation.

    Cowboy style

    The easiest way to debug code you control is to add logging.

    print("Here's the part of the code I really care about. It happened!")

    When the program runs, such log messages print to the terminal, and a logging framework can stamp each one with the time it fired.
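    A minimal sketch of that, assuming Python and its standard logging module, which stamps each message as it fires:

    import logging

    # Configure the standard logging module to timestamp every message.
    logging.basicConfig(
        format="%(asctime)s %(levelname)s %(message)s",
        level=logging.DEBUG,
    )

    logging.debug("Here's the part of the code I really care about. It happened!")
    # 2025-06-01 12:00:00,000 DEBUG Here's the part of the code I really care about. It happened!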

    To continue the zoetrope analogy, this is tagging a specific frame with a sticky note, then observing whether and how often that note turns up later in the program’s run.

    You can get really far with this approach: depending on how much logging you add, you can develop a clear sense of what’s happening with your program at any given moment.

    But there are also drawbacks: logs can only answer questions that you think of in advance. Worse, the more logging you add, the harder it is to find the one line you actually care about at any given moment.

    Logs are a way of reflecting the internal state of a program, but all they can ever be is a one-way firehose of information. Sometimes that isn’t enough.

    The interactive debugger

    A debugger hands you the reins on the galloping horse that is your program. Instead of following a dizzying path of instructions after they happen, the debugger is an invitation to intervene.

    With a debugger, the zoetrope can stop altogether. Your hand is on the wheel, advancing it at whim. You can move through your program line by line, inspecting which code is being run. Every scrap of data in scope for that code is visible too, allowing you to reason through why something is breaking or behaving unexpectedly.

    Instead of the processor dictating the pace, you control reality. It’s essentially the power of Neo in The Matrix: you can slow time and act upon objects inside the system.

    Even the ones that move very fast.

    An experienced code detective traps their quarry using breakpoints: flags in the program that tell the debugger to stop at a specific place. If your gut says that your crash has something to do with making a network request, you might break at the point where the request is triggered, and break again at the point where the result is written to disk.
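    In Python, for instance, the built-in breakpoint() function plants exactly such a flag. A minimal sketch, where fetch_report and write_to_disk are hypothetical stand-ins for the request and storage code:

    def sync_report(url):
        breakpoint()              # the debugger stops here, before the request fires
        data = fetch_report(url)  # hypothetical network call
        breakpoint()              # and stops again, before the result hits disk
        write_to_disk(data)       # hypothetical storage call

    At each stop you can print variables, step line by line, or continue to the next breakpoint.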

    Breakpoints allow the program to whisk you directly to the crime scene. There’s little upside to reviewing all the lines of code you know are working correctly. Instead, you arrive briskly at the scene of your investigation again and again, run after run, testing changes and reviewing their consequences.

    Of all the programming skill sets, I think debugging may be the most consequential. Not just understanding the tool, but developing an intuition for how to use it. What is opaque from the outside can become obvious when you step through the code line by line. Interactive debugging gives you more control, more information.

    This is even more important in the age of machine-written code. So far, most people’s robots can’t do this at all.

    What if the robot did it

    In the past, you were the likely perpetrator of your code crimes.

    Today, you might also have a robot accomplice. LLMs can extrude code at a dizzying pace, but all of it requires checking and error correction.

    Type systems and linters are a powerful first line of defense against machine errors. If a symbol or function is out of date or simply does not exist, feedback to that effect can be immediately reported, compelling a coding agent to re-work its output, look things up, and otherwise correct itself.

    If your next step is a compiler, you get even more error correction fodder. A compiler provides (somewhat) clear feedback about where a problem exists: the file, line number, and the broken expectation. Again, this gives an agent plenty to work with. Compiler output provides an agent with more leads, like which libraries it needs to examine more carefully.

    But other errors are more subtle, emerging only at runtime: say, a function in one thread that touches memory in another.

    For such errors, the fastest way to track them down is to fire up the debugger and step through the code.

    But agents don’t know how to do that. Agents debug cowboy-style, shitting logs all over your program and making you paste back what they see. It’s crude stuff.

    There’s no reason it needs to work this way. The consequential information from a debugger could be piped into an agent’s context, and it could navigate the program much like a human developer: setting breakpoints, stepping over and into code. From these investigations, the agent could propose fixes and architectural improvements.
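    The plumbing barely needs inventing. Python’s pdb, to pick one debugger, already accepts commands over stdin, so a rough sketch might look like this, with app.py, the line number, and ask_llm all hypothetical stand-ins:

    import subprocess

    # Run the target program under pdb, driving it the way a human would
    # at the (Pdb) prompt: set a breakpoint, run to it, inspect a variable.
    proc = subprocess.Popen(
        ["python", "-m", "pdb", "app.py"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    commands = "break app.py:42\ncontinue\np request\nquit\n"
    output, _ = proc.communicate(commands)

    # Pipe what the debugger saw into the agent's context.
    fix = ask_llm("Here is the debugger session:\n" + output)  # hypothetical model call

    A production tool would want something sturdier, like the Debug Adapter Protocol editors already speak, but the shape is the same.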

    When such a tool enters common use, on the scale of Claude Code or Cursor, that’s going to be a leap forward in the trustworthiness and effectiveness of agent coding systems. Not to mention their usefulness.

    Beyond checking and improving its own output, an agent that could walk you through program execution, explaining why things work the way they do, would be powerful indeed.

    But, peering as they do inside of running code, debuggers can be used for lots of things. The security implications of such a tool are surely complicated as well. What if you could set an agent loose on cracking someone’s serial number registration code?

    Life in a paradigm shift, man.


  • LLMs: the opposable thumb of computing

    Our thumbs are opposable: they can, with both strength and precision, touch any part of any other finger. We can grip things, pinch them, hold them carefully or loosely.

    The existence of our thumb makes our hand more useful, more dynamic, and importantly: compatible with a wide range of complex tasks. We can smelt metals, we can play the viola, we can sauté a delicious pan of peppers.

    We can hunt, we can defend ourselves, and we can write down our thoughts.

    All of this, plus every other trade and hobby, thanks to the thumb. Take it away and we are much clumsier, more helpless. Fingers alone can pull and push, but they just aren’t enough for the kind of complexity we built a civilization around.

    With the thumb, our hand is a tool for reshaping the universe. And for millennia, for better or worse, we’ve been doing just that.

    Amidst all the hype, all the skepticism, all the rending of clothes, I am here to tell you:

    The LLM is to computing what the thumb is to our hand.

    The power of for

    for is a fulcrum.

    On one side of the lever, a method of counting. How long are we doing this? Under what conditions do we stop? How many steps per turn?

    On the other side, work.

    Each time we do something, what are we doing?

    Programming uses loops a lot. They’re the essential grammar of computing strategy, since the processor itself runs on a loop.

    Just as prepositions tend to be short words, the syntax for writing a loop is terse.

    Like math, this is one of those places where learning concepts at the same time you’re learning their symbols just makes life harder. I had the damnedest time interpreting loops as a beginning programmer. C-style loops might as well have been hieroglyphics. 1

    for (int i = 0; i < 10; i++) {
    	// Do something
    }

    If I’d started with Swift, the syntax would have been much clearer:

    for variety in burritoVarieties {
    	//Variety-specific code
    }

    So we have a collection: burritoVarieties. If we iterate through that collection, for each variety, our code can take unique directions.

    A paper-wrapped, green chile-filled burrito looks very different from a smothered, red chile situation. They’d need different photos, their costs are different.

    Your takeout app is a roiling sea of loops.

    This is the central premise of conventional computing. Developers write loops that might occur on human timescales, or might be much faster, controlling radios and network behavior at a dizzying rate.

    Deep at the level of miraculous miniaturization, many billions of cycles conclude every second, as instructions churn through microscopic transistors carved into silicon by beams of light.

    And for decades, that was the story. Learn to work with loops, don’t waste resources inside them, create code that adapts to different situations. Such code is more or less deterministic: you run it 100 times, you get 100 identical results.

    This infinite-iteration paradigm of computing has given us everything from ballistic missiles to email to the Facebook algorithm. You can do a lot with it.

    But it’s fundamentally constrained: you have to write rigid code and anticipate all the conditions it will face. What if there were another way?

    Enter ‘AI’

    People are freaking out.

    I mean the AI discourse is just rancid stuff. People are charged on this topic.

    On some level: I get it.

    The rules are changing. We’re watching a churn that could be as consequential as the microprocessor, which kicked off a 50-year supercycle that’s still playing out.

    But the ways they’re changing are weird.

    After a generation of prosperity and endless career growth, technology workers have faced years of layoffs, stiff competition for roles, and declining flexibility from employers. What was once a safe and growing pie feels like it’s shrinking.

    AI, with its claims of labor savings, arrives at the worst possible moment, compounding these headwinds, and handing perceived leverage to the cost-cutter case.

    AI is seen as a business lotion: slather it on, get better results.

    But the way it actually works is this:

    You give up the deterministic clarity of your for loops.

    In the AI age, we all have the choice to wield a very flexible, somewhat unpredictable technology.

    In trade, you get a much greater range of motion. All you have to do is describe, in natural language, what your goals are. In return, large language models can both interpret and generate endless patterns of structured information.

    Instead of a fulcrum, you have a djinn conjuring something that might solve your problem. If you’re cautious, if you’re thoughtful, if your desires are realistically constrained, it can actually happen.

    This ambiguity requires thoughtful strategy to harness, and that strategy is in short supply. So you have a lot of sloppy chaos following AI deployments around like hungry ticks on a dog.

    The one where an LLM invents new corporate policy remains my favorite category of this failure.

    Like all emerging technologies, it’s just not obvious at the beginning how to best use this stuff.

    Still, the power of LLMs to reshape things is formidable at scale. That’s just a lot of djinns conjuring a lot of desire. It’s unsettling stuff.

    But it’s also the destiny of computing to arrive at this place. Now begins the work to make sense of it.

    Thumb and fingers

    The true power of LLMs emerges when they are combined with conventional computing: run in a loop.

    For example:

    • An LLM may generate incorrect code
    • A linter will catch and report errors as they’re written
    • With this error-correction data, the LLM can run again

    This loop can continue until all errors are resolved.
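    A minimal sketch of that loop’s shape, with generate_code standing in for a hypothetical model call and ruff as one example linter:

    import subprocess

    prompt = "Write a function that parses burrito orders."
    code = generate_code(prompt)  # hypothetical LLM call

    for attempt in range(5):  # bound the retries
        with open("orders.py", "w") as f:
            f.write(code)

        # Run a linter over the generated file; ruff is one example.
        result = subprocess.run(
            ["ruff", "check", "orders.py"],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            break  # clean: the generated code passes

        # Feed the errors back and let the model try again.
        code = generate_code(prompt + "\nFix these errors:\n" + result.stdout)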

    This is the agent revolution now underway, and it’s poised to change how code gets made. It’s also a great demonstration of how the fingers of conventional computing pinch against the thumb of the LLM.

    Some example fingers:

    • Variables and constants, to precisely map a value to an identity
    • Logic to compare values
    • Algorithms to transform values
    • Loops executing code in sequence
    • State machines, keeping track of a system or complex operation

    That’s a lot of power on its own. But add the thumb:

    • Interpret unstructured or unanticipated input
    • Transform bodies of text
    • Create new text to be interpreted by conventional computers
    • Read an existing pattern of text, then extend it

    The combination of fingers and thumb unlocks an all-new epoch for how human imagination solves problems through computing. The rigidity of conventional, deterministic code pinches against the flexible grip of the LLM.

    The result is capabilities that will take a long time to fully explore.

    Come with me if you want to live

    This kind of flexibility has been the quest of computing for as long as we’ve had it. Alan Turing, a father of modern computing, proposed a test he called “The Imitation Game” specifically in anticipation of this capacity. In 1950.

    Now we’ve arrived: not just a dull, mechanistic automaton, but a machine capable of adjusting its approach even to novel, unanticipated input.

    Again: that’s a little scary, right?

    An immediate toxic use case for this new power: poisoning every online conversation with fake people. You can just do that now. If you can afford the computing capacity, you can saturate the replies of any conversation.

    The spam consequences are numerous, and touch every layer of communication. But they’re nothing compared to the larger threats: ubiquitous state surveillance. This opens the door to monitoring and cataloging every kind of communication.

    To say nothing of applying these technologies to war. From psychological operations to target selection, this thumb creates all new opportunities for destruction and mayhem.

    But this happens with every tool. The great wars of the 20th century were exercises in heinous experimentation: combining the new powers of industrial technology and fossil fuels with the ancient rites of our worst impulses.

    Yet, World War II gave us the dawn of modern computing. It gave us all new ways to harness physics to solve our energy problems. It gave us modern antibiotics.

    Humanity is indeed terrifying in its destructive capacity. And since before civilization itself, our tools have been turned toward truly blood-soaked ends.

    But we also use our tools to invent musical instruments, new foods, new medical advancements. Our tools have created the sublime, from fine art to cinema to our favorite video games.

    There are genuinely good reasons to fear our technological capacity. But it’s up to us to turn every tool toward its best and most fruitful end. The advent of the LLM has opened a Pandora’s box, beginning a new chapter of what a computer can accomplish.

    We know the assholes are going to build something scary. They always do. But what sublime accomplishments could be made with this all new power instead? I don’t think putting our heads in the sand is a reasonable response to the downsides.

    I think we have to be the authors of the upside.


    1. This code starts at zero, and increments until it reaches ten. Then the loop exits. In each stage of the loop, my code can know how far along the process I am by reading the value of i, and respond accordingly. Again: a little opaque, but it’s written tersely because it’s written constantly.


  • Microsoft's game

    Lego is an ingenious system for building and creation.

    You can learn to use Lego’s system in seconds:

    • Snap bricks together in stacks
    • Separate stacks to disassemble

    Step-by-step guides walk you through complex models, but you can ignore them and build what you want with any given pile of bricks.

    The system depends on a grid of pips. The bottom face of a brick sheathes these pips, gripping them snugly enough to build on, but loose enough for a child to separate.

    Lego bricks are sturdy, injection-molded ABS plastic designed to be snapped and un-snapped thousands of times without losing grip. They’re made to exacting standards, and have been for generations. Unearth a pile of bricks from 1987 and you’ll have no problem using them alongside a kit fresh from the store.

    Every design comes with tradeoffs. Here are Lego’s:

    • Right angles everywhere
    • Vertical construction is effortless; horizontal construction requires serious planning and cleverness
    • You can build anything to centimeter precision; Lego holds the monopoly on millimeter resolution

    It’s in the high-detail pieces where Lego’s design language transitions, from dull geometry to a gift treasured by all ages. Scandinavian minimalism defines it all, from spaceship antennas to medieval castle torches. The minifigs, the helmets, the hairpieces, the greebles that adorn your incredible machine…

    You can detect Lego-ness instantly, no matter the theme or scale.

    It’s the classic platform bargain:

    The vendor makes specific paths easy, while reserving unique control for themselves. They get to make certain decisions that unify all activity, scalably influencing every single project.

    And if you understand Lego, you have everything you need to interpret Microsoft.

    In the beginning

    Microsoft is a leviathan worth your study.

    It has existed for half a century, shaping modern computing from the dawn of the age. Investors value Microsoft at more than three trillion dollars. For a sense of scale, that’s about even with France’s 2023 GDP, and 75% of what California produced in 2024.

    Microsoft is world-historically big.

    To achieve this, Microsoft captured and held an invaluable position: deciding how software would be written.

    Early computing was very different from what we know today.

    Microprocessors were new and no one knew exactly what to do with them. For a few thousand (inflation-adjusted) dollars, anyone could buy a computer and try to tell it what to do.

    This required programming languages: conventions defining what you could say to the computer, and how you would say it.

    Bill Gates and his coterie were among the first to understand exactly how you could bridge the world of instruction signals with anything a human could learn in an afternoon. They provided language interpreters for early computers.

    As computers increased in complexity, Microsoft kept pace, building the operating systems and APIs that would eventually let developers target the majority of the world’s computing hardware.

    Microsoft paved the first roads to the electronic frontier.

    A good story about Microsoft

    If you want to understand Microsoft, I like this story about the Xbox.

    The short version is that Sony totally freaked them out.

    It’s one thing to ship a plastic toy that takes plastic cartridges and plays glorified arcade games, as Nintendo had been doing.

    But by 1995, Sony was elevating the living room toward something that looked like computing: optical disks, storage, and eventually: networking.

    As leviathans go, Microsoft has a totalizing appetite. For its stratagems to work, it needs to influence as much of the game board as possible. If they didn’t act, the home could fracture: Microsoft handling the bills and correspondence, Sony owning the fun.

    And who knows where it goes from there.

    For me, this is unusually savvy for a bunch of suits to have figured out, much less taken action upon. Just think about it like this:

    How often do you really need to upgrade a computer that only does spreadsheets and emails? Ceding the future of entertainment would slow down the entire metabolism of PC upgrades, which is no good if you’re selling everything from operating systems to productivity tools.

    Not to mention the OS where most games are published.

    So Microsoft decided to enter the fight with their own console.

    They walked in with a 25-year head start on shaping how developers make software. By the time they were pondering Xbox, this included a deep stack of graphics programming and input technologies. They could pivot the tools and APIs developers were already using to build games for Windows into a new, dedicated platform.

    To do it, they had to do something they’d largely avoided: building and shipping hardware.

    Non-rivalrous goods

    If I’m holding a mug of root beer, I’m the only one who gets to drink it. If I finish that mug without sharing, you don’t get any unless you can find your own.

    But if I’m holding a CD-ROM of software, I can install it, then pass it on to you. 1

    Which means that Microsoft can make loads of money: they can come up with something valuable to put on that CD-ROM, and then sell it to as many people as they can convince to buy it.

    They created and held the perfect position on the game board. They were essential to building computers, but didn’t have to build any themselves.

    Like getting paid to imagine a tire every time someone bought a car.

    It’s great work if you can get it. So you can see why Microsoft would be reluctant to ever dirty their hands with anything so risky as putting an object in a box and waiting for the phone to ring.

    But to hold the living room, they had to play the role only a manufacturer can: hardware cost control.

    The Xbox cost $540 (inflation-adjusted to 2025), which made it competitive with a PlayStation 2. This was a good deal cheaper than all but the crummiest PCs, and unlike those, it could play games pretty well.

    They built it fast, using off-the-shelf PC components. They created a thin container of an operating system to host their graphics libraries, handle storage, and process I/O.

    And it worked. It was an instantly competitive console, and the existing body of Windows game developers had a much shorter path to the living room than ever before.

    Microsoft built this beachhead into a business worth billions annually. More than that, they’ve maintained their prominence as a prime target for game developers.

    And it all came down to their existing advantage: building the tools that make software possible.

    How does this game play in 2025?

    Visual Studio Code is a free tool for building software.

    The cost of building and maintaining it is a pittance compared to the returns: VS Code is a default tool for anyone starting a software journey, used by professionals and hobbyists alike.

    So Microsoft continues to influence how software gets made.

    The dream of machine learning models trained on existing software to write more code predates the current age by a lot. People have been waiting for it. I remember a first-week orientation a decade ago where a CEO discussed this vision as far-off but attainable.

    So the moment this approach became viable, Microsoft got to work.

    The first version of GitHub Copilot seems quaint compared to the tools of today, and it’s only been four years. Underwhelming as the tool was at that point, it’s not surprising that Microsoft wanted to be early.

    This leviathan was built to turn totalizing control of the developer experience into money. Historically it did this through controlling tools: languages, interpreters, libraries, development environments. In the age of LLMs, this control now extends to the very structure of new code itself.

    Microsoft is in the position to nudge the everyday work of every developer who relies on their AI products. 50 years into their existence, they are poised to seize a whole new dimension of power, profit and leverage.

    What does this mean?

    Start with feedback loops. Because Microsoft controls so many tools, they’ll be in a position to make them mesh better with LLM workflows, and to make them happier with machine-generated code. Microsoft also owns and controls loads of the GPU hardware needed to make LLMs run. They have the most favorable pricing for these workloads possible.

    So we might expect, for example, that it becomes really easy to prompt into existence a web application built on a resurgent Microsoft stack. It might become really cheap to prototype a game for Windows, because Microsoft has built scaffolding for LLMs to do exactly that.

    Between owning the hardware that runs LLMs, and the tools that make code happen, Microsoft will be in a position to tilt the whole computing game board so that loads of pieces drift deep into their pockets.

    Of course, that’s if the old rules hold.

    A funny thing about computing: sometimes the game surprises you.


    1. Without guardrails, this turns into “piracy,” according to a legal framework shaped to support companies like Microsoft. But in the context of a corporate sale of hundreds or thousands of licenses, it’s of course quite convenient.

