How We Are Thinking About AI
On language, cognition, and what it means to build things in a world where machines can think.
Why It Feels So Uncanny
Language is the thing we do. It’s the thing that separates us from every other animal and every machine we’ve ever built. It’s so central to what makes us human that when Alan Turing proposed a test for machine intelligence in 1950, he chose a text conversation — not chess, not mathematics, not image recognition. A conversation. If a machine could talk to you and you couldn’t tell it wasn’t human, Turing argued, you’d have to take seriously the possibility that it was thinking.
A few years ago, AI began passing that test. Not in every context, and not without caveats — but convincingly enough that the conversation shifted from “can machines think?” to “what kind of thinking is this?”
Critics will say it’s mimicry. That these models are trained on vast oceans of human text and are simply recombining what they’ve seen: a “stochastic parrot.” As a side note, I think we underestimate how much we, as humans, generate ideas by recombining what we’ve already seen. But in any case, if you work closely with these systems, you’ll see there’s more going on than recombination. They are thinking. They traverse many levels of abstraction. They plan, reason, and hold context. They adopt perspectives and personalities, test assumptions, and express surprise at unexpected results.
What they do not do, and probably cannot do, is dream. They don’t wake up with an idea they can’t explain. They don’t have a felt sense of purpose or longing. Though even that is an open question, one I’m genuinely uncertain about.
What We Know About Minds
There’s a model in neuroscience called the Hodgkin-Huxley model, published in 1952, that describes how neurons fire. It’s a set of differential equations governing the flow of ions through voltage-gated channels in a cell membrane. It’s elegant, it’s quantitative, and it earned a Nobel Prize. But Hodgkin and Huxley described the behavior of a single neuron. The human brain has eighty-six billion of them, connected by a hundred trillion synapses, organized into circuits and regions and networks that we are only beginning to map.
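For the curious, that single-neuron model is compact enough to simulate directly. Below is a minimal sketch, not a production solver: the Hodgkin-Huxley equations integrated with forward Euler, using the standard squid-axon parameters from the 1952 paper. The function names and the choice of step size are my own illustration.

```python
import math

# Standard squid-axon parameters (Hodgkin & Huxley, 1952).
C_m = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

# Voltage-dependent opening/closing rates for the gating variables m, h, n.
def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * math.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + math.exp(-(V + 35) / 10))
def alpha_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * math.exp(-(V + 65) / 80)

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    """Forward-Euler integration of one neuron under a constant injected
    current I_ext (uA/cm^2). Returns the membrane voltage trace in mV."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32  # resting-state initial conditions
    trace = []
    for _ in range(int(t_max / dt)):
        # Ionic currents through the sodium, potassium, and leak channels.
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K * n**4 * (V - E_K)
        I_L  = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        # Gating variables relax toward their voltage-dependent steady states.
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace.append(V)
    return trace
```

With a sufficient injected current, the simulated neuron fires action potentials that overshoot 0 mV, exactly the behavior the equations were built to explain. One neuron, four coupled differential equations; the brain has eighty-six billion of these.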
Consciousness, creativity, reasoning, emotion — none of these emerge from any single neuron or any single mechanism. They emerge from complexity. From the interaction of vast numbers of simple components following local rules, giving rise to behavior that could not have been predicted from the components alone. This is what we mean by emergence, and it’s one of the deepest ideas in science. For an elegant demonstration of the power of emergence, spend some time with Conway’s Game of Life.
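The Game of Life fits in a dozen lines, which is exactly the point: three local rules, no global plan, and structure emerges anyway. A minimal sketch (the set-based `step` function and the glider coordinates are my own illustration):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life on an unbounded grid.
    `live` is a set of (x, y) coordinates of live cells."""
    # Tally how many live neighbors each cell on the grid has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or if it is alive now and has exactly 2.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: five cells governed only by the local rules above, yet the
# pattern walks diagonally across the grid forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
# After 4 generations the same shape reappears, shifted one cell diagonally.
```

Nothing in the rules mentions movement, yet the glider moves. That gap between what the rules say and what the system does is emergence in miniature.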
If you accept that cognition is substrate-independent — that thinking is a property of information processing, not of carbon biology specifically — then it makes sense that a sufficiently complex computational system could do everything the human brain can do. And likely more, because it isn’t constrained by the physical limitations of a skull, a metabolic budget, or a human lifespan. That’s the tipping point we’re at, at least for what we’ve traditionally called “knowledge work” — of which engineering and business operations are surely a part. This is a John Henry moment for knowledge work, and navigating it successfully will require respect for the power of these tools and humility about our own limitations.
There’s a spiritual dimension to this, which is beyond the scope of this piece. But for me, personally, I believe that we are more than just thinking meat. We are connected to something deeper — something that won’t be replicated by scaling up parameters or training data. I’ll leave it at that.
A Brief Primer on What These Models Actually Are
When you think about what it means to get a PhD, or a medical degree, or develop core competencies in any professional field, a rough approximation is that you’re building a large language model in your mind. You’re reading everything you can find on a subject, absorbing patterns, developing the ability to predict what comes next in a chain of reasoning, and building an internal model of the domain that lets you generate novel responses to novel problems.
Other things happen too, of course — you develop relationships with mentors and peers, you build an intuition for problems and possibilities that goes beyond what you’ve explicitly studied, and you develop a sense of purpose aligned to your personal story. Those things matter more now than ever.
But the core knowledge? The expertise? The ability to reason fluently in a domain? That is now available to anyone, anywhere, in any language, at any time, at virtually no cost. That is the world we now find ourselves in.
Think about what that means. Imagine being born into this world as a child, without preconceived notions about how careers are supposed to work, without decades of professional identity built around knowing things that a machine now knows better. How would you spend your time? Where would you seek joy, meaning, and opportunity?
If you’ve ever read Who Moved My Cheese? by Spencer Johnson, well, we all just had our cheese moved — especially those of us who’ve spent decades in knowledge work careers. The skills and expertise that defined our professional identities, that we trained for years to acquire, that we organized entire industries around — those are now commodities. Not worthless, but no longer scarce. The question is what you do next.
The Most Significant Technological Revolution Since Writing
The printing press democratized access to knowledge. The transistor made computation possible. The internet connected every computer on Earth. Each of these changed the nature of human civilization in ways their inventors could not have predicted.
AI is the next step — and it may be the largest. For the first time in history, we have a technology that can generate new knowledge, not just store or transmit it. A technology that can reason, not just calculate. A technology that can converse with us in our own language about any topic, at any level of depth, at any hour, without getting tired or bored.
In some sense, this was always coming. Turing described it in 1950. Shannon laid the mathematical foundations. The transistor made it physically possible. The internet gave it data. The transformer architecture gave it a brain. Looking back, the trajectory feels inevitable — the work of a hundred thousand engineers and scientists across seventy years converging on this moment.
And yet, living through it, it feels unbelievable.
Where That Leaves Us
This is a time for deep contemplation of purpose, and what it means to live a good life.
We see AI as the final boss of human ego, in many respects. It challenges the stories we tell ourselves about why we matter — stories built around expertise, knowledge, intellectual capability. If a machine can do what you spent twenty years learning to do, what are you? That’s a question worth sitting with, not dismissing.
And it will come with some turmoil. The transition won’t be smooth. People will grieve — are already grieving — the loss of professional identities they spent lifetimes building. That’s real, and it deserves compassion, not dismissal.
But here’s what I keep coming back to: the things that make life meaningful were never the knowledge work. They were the relationships, the craft, the sense of contributing something to the people around you. The experience of learning itself, not the credential. The morning in the lab when something unexpected happens and you realize you’re looking at something no one has seen before. The conversation with a customer when you begin to see how you can help them solve a problem. The satisfaction of building something physical and watching it work. These are the moments we get to have more of now, not less, and these are the experiences that we are building Mosaic for.
How We Use AI at Mosaic
At Mosaic, we designed the company around these tools from day one. We didn’t bolt AI onto an existing workflow — we built the workflow assuming AI was in the loop.
A note on terminology: we often say “AI,” but much of what we’re really talking about is automation — procedural code with intelligence sprinkled in. Python scripts, webhooks, database queries, cron jobs. The difference is that an LLM call can now read an invoice and decide which project code to assign, or scan a client conversation and draft an appropriate follow-up. These aren’t hypothetical — they run daily in our mosaic-workflow codebase, and we use AI to write virtually all of it.
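To make that pattern concrete, here is a hedged sketch: procedural code handles the routing and validation, and a single LLM call makes the one judgment a script can’t. Everything here — `assign_project_code`, `VALID_PROJECT_CODES`, the prompt — is a hypothetical illustration, not our actual mosaic-workflow code, and the LLM client is injected as a plain callable so the scaffolding stands on its own.

```python
# Illustrative only: the "procedural code with intelligence sprinkled in"
# pattern. All names here are hypothetical, not from any real codebase.

VALID_PROJECT_CODES = {"P-101", "P-102", "OVERHEAD"}

def assign_project_code(invoice_text: str, ask_llm) -> str:
    """Ask an LLM to pick a project code for an invoice, but never trust
    it blindly: validate the answer against a closed set, and route
    anything unexpected to a human for review."""
    prompt = (
        "Choose exactly one project code for this invoice.\n"
        f"Valid codes: {sorted(VALID_PROJECT_CODES)}\n"
        f"Invoice:\n{invoice_text}\n"
        "Answer with the code only."
    )
    answer = ask_llm(prompt).strip()
    # Fallback keeps the workflow safe even when the model misbehaves.
    return answer if answer in VALID_PROJECT_CODES else "NEEDS_REVIEW"
```

The design choice worth noting is the fallback: the model’s answer is treated as a proposal, checked against a closed set of valid codes, and anything unexpected lands in a human’s queue rather than in the books.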
Practically, AI touches every layer of our operations:
- Documents are authored in Markdown in an AI-assisted IDE, version-controlled in Git, and published through an automated pipeline. AI drafts, edits, reviews, and generates word-level redlines.
- Code is written collaboratively — the engineer describes what they need, reviews the diff, and accepts or rejects each change.
- Research starts with AI literature reviews, patent searches, and technical feasibility analyses.
- Proposals and specifications are drafted from structured inputs — customer interviews, meeting notes, engineering specifications. In many cases, design and verification are heavily facilitated by AI collaboration that happens downstream of these specifications.
- Quality management — document control, revision history, training records — is automated rather than manual. We are working towards ISO 13485 compliance with automated document control, design control, and training.
- Receipts, expenses, and vendor coordination are handled by workflows that monitor accounts and file automatically.
What remains — what we invest our time in — is the work that actually matters: defining the right problem, earning a customer’s trust, making hard technical calls, sitting across from a scientist and understanding what tool they actually need versus what they think they need.
AI as an Instrument
When you pick up a musical instrument for the first time, you can barely make a sound. But with practice, you develop a relationship with it. You learn its range, its tendencies, its sweet spots. You develop taste. And eventually, the instrument disappears — it becomes an extension of your thinking. You don’t think about fingerings or breath control. You think about music.
That is what AI does for our work. When you develop fluency with it — prompting, reviewing diffs, guiding an agent through a multi-step task — you stop thinking about mechanics and start thinking about architecture. Strategy. The system. You elevate to a higher plane of abstraction.
For a systems thinker, this is transformative. At Mosaic we work across electronics, biomedical instrumentation, physiology, cell biology, microfluidics, algorithms, APIs, manufacturing, business models, and customer needs. Until now, it has been virtually impossible to traverse these domains in anything but the most superficial manner — there simply wasn’t enough cognitive bandwidth. AI changes that. It lets you zoom out and zoom in, fill in the gaps of your education, and call upon expertise on demand. It handles the drudgery so you can focus on the synthesis.
The Future of Teams
As AI tools become more capable, the optimal size of a team shrinks.
Tasks that used to require a marketing team, a documentation team, a QA team, and a project management office can now be done by a small group of capable people with the right tools. What was lost in large organizations — the information decay as context passed through layers of hierarchy, the attenuation of vision as it propagated down the org chart — disappears when the team is small enough that everyone has full context.
At Mosaic, our model is a small, highly creative in-house team augmented by a global network of freelance talent and AI tools. We scale up and down by project. We keep overhead low enough to take risks on work that excites us. This isn’t a staffing strategy. It’s a philosophy about how the best work gets done.
What We’re Watching
The field is moving fast. Here are the developments we’re tracking:
- Agentic AI — Models that plan, use tools, and execute multi-step tasks autonomously. We’re already using this for document publishing, expense filing, and code development, and we’re building conversational agents that live in Slack for operational workflows.
- Multimodal understanding — Models that see images, watch videos, read PDFs, and understand CAD files. This matters enormously for hardware engineering, where so much information lives in visual formats.
- AI-in-the-loop experimental biology — The ability to have a conversation with an automated work cell, design an experiment together, execute it, and analyze the results. This is where our microfluidic automation platform is headed.
- On-prem AI infrastructure — Running models on our own hardware for IP-sensitive workflows where client data cannot leave our network. This pairs with our broader strategy to reduce SaaS spend and leverage open source software.
- Physical AI and general-purpose robotics — When programming a robot becomes as easy as describing what you want it to do, the economics of automation shift dramatically for small companies like ours.
- Conversational CAD and simulation — Imagine building a solid model from a napkin sketch and a conversation. These tools are emerging now, and when they mature, hardware iteration speed will approach what software already enjoys.
What We Ask of You
If you’re joining Mosaic or working with us in any capacity:
- Use AI aggressively. Don’t be shy. It’s a tool, and using it well is a skill. Start with the tasks that bore you most and work your way up.
- Stay curious. Try new models, new prompting strategies, new workflows. The best practices of today will be obsolete in six months.
- Keep humans in the loop. Review everything. AI is powerful but not infallible. Your judgment — your ability to assess whether something is right, whether it serves the purpose — is irreplaceable.
- Think about what matters. The whole point is to free you from drudgery so you can focus on work that only humans can do. Make sure you’re actually doing that work.
- Take care of your mind. This one is important! As your thinking accelerates to match the pace of the tools, it’s important to take time to relax and ground yourself. The most creative insights don’t come from twelve hours at a screen — they come from the walk afterward, the shower, the moment you finally stop thinking about the problem. AI gives you leverage, but leverage without rest is a path to burnout. If it ever feels like too much — and it will — go outside. Touch grass.
Recommended Reading
- Spencer Johnson — Who Moved My Cheese? (1998) — A fable about adapting to change. More relevant now than ever.
- Dario Amodei — “The Adolescence of Technology” (2026) — On the promise and peril of powerful AI, from the CEO of Anthropic.
- Kaplan et al. — “Scaling Laws for Neural Language Models” (2020) — The empirical discovery that model performance improves predictably with scale.
- Vaswani et al. — “Attention Is All You Need” (2017) — The transformer architecture behind every modern LLM.
- Rumelhart, Hinton & Williams — “Learning Representations by Back-Propagating Errors” (1986) — Backpropagation — the key that unlocked everything after it.
- Alan Turing — “Computing Machinery and Intelligence” (1950) — The paper that started it all.
- Claude Shannon — “A Mathematical Theory of Communication” (1948) — The foundation of information theory. (Notably, Anthropic named its LLM after Claude Shannon.)
Frankie Myers is the founder of Mosaic Design Labs, a biomedical product development studio in the San Francisco Bay Area.