Hello! It’s me, Craig Mod. Author of TBOT (amzn | bkshp). Poking my head out into newsletter land. This? Roden, a newsletter you signed up for at some point. Perhaps last week, perhaps fourteen years ago, when I started shooting these out.
I’ve been busy. I’ve been doing something that I’m bad at and am trying to get better at: I’ve been having fun (and trying not to be crushed by the guilt of having fun). I went to LA and then Santa Fe and then Hokkaido with the binding agent of: eating great food with people I love. In Santa Fe I spent a few days meditating at Mountain Cloud Zen Center (more on that below; also, yes, fly from Japan to Santa Fe for Zen; also also, turns out the headquarters of their school is around the corner from my home ha ha ha). My body loves Santa Fe. Loves the crispness of the air. The elevation (once it gets used to it). The sharp light. The salsa. I spent a few mornings writing in Collected Works and generally came away from the whole visit thinking: I’d like to head back, eat More Salsa, spend more time in that corner of the US. In LA, I went deep on LLMs and Claws and all that with Kevin Rose (and also met some Hollywood-adjacent folks about book optioning), eating lots of Doordash’d Gwyneth Paltrow slop bowls and making software. I have to say, these three weeks of doofery have been some of the most fun weeks I’ve had in years. So, thanks for indulging me a bit of newsletter silence as I pretended to be a human out in the wild.
I also just finished six days of walking and looking at things on the Tōkaidō with The Book of John. I wrote a Ridgeline about some old buildings in Toyohashi. I filed my taxes. Then I was in Nagasaki all last week, catching up with the town and doing a televised discussion with the mayor about why I picked the city for the Times’ 2026 52 Places to Visit list. Just got back from that last night. Phew. (Oh, don’t mind that, that’s just my brain melting out of my ears.)
Of course, I’m only sitting still for a couple of days. Then I’m off to Portugal to walk with Kevin Kelly and a fabulous group of humans. And then, mercifully, I’m back home in Japan for a month to write and fight to protect that writing time with all my being … before heading to live in Brooklyn for most of May and June.
For the meat of this issue, here’s an essay (linkable) on meditation, human sensory “resolution,” consciousness, and LLMs:
Language is imprecise. We (humans) tend to forget this. No matter how precise we try to be with language, it’s still fuzzy. We tend to over-index on this presumed precision, as if language were a laser pointer aimed at a Rembrandt. It’s more like lobbing water balloons at cave shadows.
Language has been on my mind because eulogies have been on my mind. Since the start of the year, I’ve read dozens of blog posts mourning the death of coding — as in computer programming — brought on by the advances of Claude Code and other AI-assisted tools. Greg Knauss’ post in particular stands out. As he puts it: “In just the past few months, what was wild-eyed science fiction is now workaday reality.” Open Hacker News, and you’re bound to see more posts like his. A year ago, these sentiments were unthinkable. Now they’re the norm. Sam Altman — ever a paragon of nuance and sanity and, uh, a guy who can definitely read the room — is low-key mega shamemogging all developers who ever lived on X with a straight face. Shock! Shock! We’re watching the real-time loss of a certain nook of an entire profession, a skill set rendered obsolete in the span of months.
Perhaps the furthest you can get from programming is to go and meditate.
So a few weeks ago — as entire industries are feeling the existential complexities of their work being vacuumed up by language models — I was doing just that: sitting in the corner of a (delightful, beautiful, earthen) room in Santa Fe (morning sunlight moving slowly across the polished floor), meditating. And at the front of the room was Henry Shukman — all mega beatific and glowing. The whole room felt like an inverted LLM. Like: if you could flip the models inside out, you’d have found us, serene and quiet and still. We (the group of ten of us) were doing a Zen “sit.” We sat for twenty-five minutes, and then chatted a bit about the experience. Then sat for twenty-five more minutes. Then chatted a bit more. Repeat for seven hours a day for a few days. It was exceedingly pleasant. But more than pleasant, it was exceedingly human.
I’ve been meditating for about twenty-three years. I am terrible at it. I am the world’s least capable meditator. I started in college because I was having panic attacks. I was internalizing so much stress that I passed out during a physics exam from hyperventilation. Intuitively (still mysteriously to me), I reached for meditation. It helped. Not just helped — within a week of meditating for just fifteen minutes a day, I no longer felt like every moment was suffused with the spectre of doom. Meditation allowed me to graduate with honors and do the work I felt compelled to do. So I dig meditation. I’m grateful for it. I return to it when I feel the world tilting off kilter in my mind. But I’ve never really spent much time dwelling on it in a metacognitive way. When you parse it out, it’s objectively bizarre:
You sit still with a certain kind of posture and look “inward.” But what’s inward? And why the hell can we “look” in that direction? And who the hell is doing this looking? Still, “you” do these things — the sitting, the “looking,” the “inward” gazing, the counting of breaths or moving “attention” around the body — whatever they may be, and in doing so you subtly reprogram how your “mind” attends to the world. Often, a soft equanimity pervades. Your parasympathetic nervous system chills out a bit. This is profoundly weird!
When I say we “perform” an “inward gaze,” I assume that most of you know what I mean. This is in spite of the fact that those words describe almost nothing of the actual act itself, which is (the act) — again — profoundly, weirdly, immaterial and diffuse. Yet we “know” how to do this because language is a pointer to “space beyond” the words themselves. And we understand these pointers because we (average humans) exist in this greater, shared space, suffused with gooey human experience; language is our shorthand for those experiences — ones fundamentally indescribable but often universally felt.
Another way to think of it: Airplane cockpits are filled with dials and levers. The dials and levers (words, sentences, paragraphs) can “describe” the properties or state of the machine, and given a certain configuration — a kind of incantation of settings — the machine takes flight. But nowhere in the cockpit is the act of flight itself described.
Alongside all of this, I’ve been plowing through Michael Pollan’s new book, A World Appears, a book about consciousness, and my first-ever experience with a book where the more I read, the more confused I get. Never has the imprecision of words been more apparent.
“Consciousness is felt uncertainty,” he writes.
On consciousness depending on death and entropy (i.e., time):
Damasio did not say this in so many words, but underlying his theory of consciousness is the fundamental fact of our mortality. What possible difference would homeostasis, or feelings (good or bad), make to an immortal being?
One theory (of many) is that consciousness emerges from the body itself, not the mind — that it emerged to help our bodies more efficiently find homeostasis amid competing needs that don’t resolve well on autopilot (“I’m hungry, but also tired; what should I do?”). This particular passage stuck out as speaking to meditation:
… when we’re in that rare state of homeostatic balance, all our biological and emotional needs momentarily met, we experience, in Damasio’s words, “subtler feelings of existence.” This ceaseless flow of feeling that colors experience points to a possible explanation for the phenomenon of qualia—the fact that all our sensory experiences have a hard-to-describe but unmistakable qualitative dimension: the feeling of well-being we get from a first sip of wine, say, or the metallic bite of the air on a frigid December day.
Meditation is a kind of hack to get there faster, to induce homeostasis, to shut up the feeling of needing anything, when we may not need most of it at all.
And finally, on consciousness itself:
The emergence of a self is perhaps the apotheosis of consciousness in humans: the intuitive sense that we each have located somewhere within our heads a continuous, stable, and abiding “I” that is the subject of all our experiences. The self, we assume, is the perceiver of our perceptions and the thinker of our thoughts. Yet many scientists, philosophers, and Buddhists maintain that this self is purely a fabrication, though a useful one. Why do we cling so tightly to this idea of an enduring self at the same time that we go to such lengths to transcend it, whether by way of drugs, meditation, sensory deprivation, extreme sports, or experiences of art and awe?
I want to say that this “self,” this “abiding ‘I’” is, in part, a product of “resolution” — that is, resolution of input data / sensory information we receive and are constantly processing from the world.
In hindsight, it’s not too surprising that coding — human hands typing out lines of actual computer instruction — is the profession first diminished by the models. Coding is unique (“coding” being different from more general engineering or product management). It’s essentially “math.” Coding describes, precisely (though it used to describe it more precisely; think: byte-code, assembly, etcetera), what the machine will do. A compiler works deterministically. Each time you run a program, it is (one hopes) the same program.
Enter: Claude Code. Wild to think it was released just a year ago. I’m not sure we’ve ever seen a technology appear and so rapidly normalize something that was basically voodoo until recently. (The iPhone, perhaps? The breathless tap tap-tapping-away of our lives?) With Claude, you give the machine “imprecise” instruction in language and — because so much open source software exists, because the training corpus is so vast, and because most common engineering issues are not novel (the novelty is piecing them together uniquely) — a working piece of software pops out. The more you Claude, the more you realize the limits of language, its abject imprecision. Often the thing Claude makes is not exactly what you hoped for. It turns out, when you’re working with humans and you ask them to build something, they bring with them a whole universe of context that the machines — the models — lack. And so, with Claude, you go back, and you write more specs, you answer more questions.
You can go bonkers with this stuff. I recently created my ideal accounting software in just a few days, and find it hard to think about much else aside from what can be built using this machinery. It feels, in a way, like we’re in the late 1930s, and everyone has been given their own nuclear reactor for two-hundred bucks a month. Genuinely epochal and bizarre. What will you build with it? How can you not ask that question over and over again? Language is imprecise, but thrown against the wall of mathematics, useful tools can be made.
That imprecision applied to “math” only works because the outputs can be verified precisely.
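To make that point concrete, here’s a toy sketch of my own (not from the newsletter; all function names are hypothetical): the “prompt” is fuzzy English, the code that comes back stands in for model-generated output, and the verifier is the deterministic, mathematical part — it passes or fails the same way every single run.

```python
# A fuzzy English spec ("sort these numbers") can still yield trustworthy
# software, because the *output* is checkable with mathematical precision.

def model_generated_sort(xs):
    # Stand-in for code a language model wrote from an imprecise prompt.
    return sorted(xs)

def verify_sorted(original, result):
    # Precise, deterministic checks -- no ambiguity, unlike the prompt:
    # 1. every adjacent pair is in order
    ordered = all(a <= b for a, b in zip(result, result[1:]))
    # 2. the result is a permutation of the input (nothing lost, nothing invented)
    same_elements = sorted(original) == sorted(result)
    return ordered and same_elements

data = [3, 1, 2, 1]
out = model_generated_sort(data)
assert verify_sorted(data, out)  # deterministic: passes identically on every run
```

The asymmetry is the whole trick: the instruction is a water balloon, but the check is a laser pointer. Fields whose outputs can’t be verified this cleanly don’t get the same free lunch.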
The point of bloviating like this: We watch the LLMs perform these acts — acts that, even five years ago, would have seemed like pure science fiction — and we wrongly (I believe) extrapolate out a kind of intelligence that would be able to make sound decisions on a larger, world-based scale.
Which is to say: LLMs’ operating resolution is severely hamstrung. Whereas we, humans — messy, disgusting, goopy, flawed, miraculous humans — operate at a freakishly high resolution, one we have a preternatural ability to access subconsciously, and one that language can only represent — in broad strokes — as notions in this higher register.
This is why I believe a company like Anthropic is right (for now, an extremely fleeting “now”) to “worry” about rushing to deploy their technology in wars, for vast surveillance. This is why their arguments feel so intuitively correct (for now). We know in our hearts that the world is made of so much more than words, and words barely crack the surface. But these machines are words and words alone. (For now.)
Robin Sloan addresses this conundrum of the messiness of the “real” world in his great “Flood fill vs. the magic circle” essay, giving the example of him mailing a bunch of zines around the world — an act with so much unpredictable complexity that it’s hard to imagine a model “understanding” all the variables any time soon.
This isn’t to say AI models cannot achieve the same resolution (they may even need it), but (for now) this is, perhaps, their greatest weakness. Yann LeCun is betting everything he can on the notion of “world models” — models that look to mimic the inputs we as humans take for granted, and often grossly underestimate.
Will more inputs and a larger footprint in the real world create a sense of homeostatic desire or need in the models?
I’ve been somewhat facetiously, somewhat seriously posing a question to everyone I run into these past few months: Don’t you feel like all meaning is being scrubbed from the world? Like the Langoliers are chomping up purpose, chomping up all the things to which we’ve ascribed purpose these past hundred-thousand years? And that nothing matters?
Really, what I’m asking is: Don’t you think our contemporary education system has long needed an overhaul? That our society has long needed to reconfigure itself? That we need to stop ascribing all our meaning and purpose to being a Web Designer, or Coal Miner, or Airplane Engine Factory Foreman, or Accountant, but instead to being A Good Person, Good Parent, Good Friend, Curious Researcher, Poet, Meditator, Facilitator, or any number of other Ways of Being uncoupled from “work” as we’ve defined it since the industrial revolution? Who is safe from the hunger and capabilities of the models? Yoga instructors?
Of course, jobs disappear, and (lots of) new ones are made. Efficiency increases and work increases. Cheap work makes more work. Ben Evans, whose fabulous weekly newsletter is like reading a stone-cold assassin snipe the world of tech, writes:
The counter to this is to say, well, yes, the precise numbers are wrong, but this is directionally correct: AI will affect accountants more than fitness instructors. You could say the same about the internet: you could have made a numeric score for which industries would be most affected, and the scores would be ‘wrong’, but they’d tell the right story.
I’m not actually a Doomer around AI — don’t worry, I think we’ll be working more than ever, sometimes more interestingly, but mostly, perhaps, more depressingly. What’s special about this moment is that there’s something existential in the air, and that makes us open to reflection and change. I really do believe the denuding of purpose and meaning is coming for many, many jobs (coding being the first). But I also think we’ve long ascribed meaning to the wrong activities. So it’s a good time to start meditating, to spend a few afternoons talking about what you’re doing, why you’re doing it, and maybe what you’d rather be doing.
While meditating and thinking about inputs, about coding, about the loss of meaning, about abstraction, about machines flipping into consciousness, about language (like I said: I’m bad at meditating), I thought too of Ursula Le Guin and her A Wizard of Earthsea (amzn | bkshp). A fabulous book, infused with a subtle prescience that grows with time. I reread this book yearly. Its themes are of wizards, of language used to cast spells. Le Guin elevates it all into poetics. But the coup de grâce of the book is this: the ability to name something with total precision. If you know the “true” name of a thing, you can control that thing. To be able to call out the atomic name of an object is to know it in toto. But we — humans — intuitively know that this is impossible. We know language is always a pointer to something else, never the thing itself. Le Guin inverts this to great effect. By making the act of naming a thing possible — of truly naming it with 1:1 precision — she creates … magic. What a move. Literary perfection. A beautiful, impossible twist.
While that precision may not be possible in real life, through our incredible sensorial machinery we get a “self.” We get an “I.” We get to do the most human thing of all: Sit quietly in a room and “look inward.” For now, the models can’t quite do that. But someday they might. Meditation is a prayer to human consciousness. It’s worth looking at, and taking stock of, everything we consider uniquely “human” at this moment in time, a moment of great change, if there ever was one.
Phew. Hello from down at the bottom of this thing. Thanks for reading that brain dump. Buy my book! (amzn | bkshp)