Volume 1: The Age of Algorithmic Evolution
In a conversation just released by Y Combinator, OpenAI's former Chief Research Officer Bob McGrew shared a striking observation about the future of AI and scientific progress. "Whatever the part of the stack is that isn't automated becomes the bottleneck. I think weirdly we're going to automate the scientist, the innovator, before we automate the experiment doer. But if that too comes through then I think the potential for really fast scientific advance is totally there. I think we'll find some other bottleneck."
He continued, pondering the future: "I think we'll look back at this and say, 'we did all the things, and science is only going like 30 percent faster than it was, why isn't it 300 times faster' and we'll have to figure it out."
McGrew's insight reveals a profound truth about the coming age of artificial general intelligence: the fundamental constraint on progress will shift from computational power to human adaptability.
In an AGI world, we will replace scientists and researchers with AI systems that can rapidly experiment and iterate on ideas. But this replacement extends far beyond the laboratory, creating a cascade of transformations that will reshape the very fabric of human society.
Consider the emergence of what we might call the Algorithmic Evolution Loop.
Today's social media platforms have already demonstrated how algorithms can systematically reshape human behavior through constant iteration and feedback. TikTok’s algorithm doesn't just predict user engagement – it actively molds it, engineering human psychology tap by tap, scroll by scroll, reshaping our attention spans and reward mechanisms with every interaction.
The great irony is that while everyone worried about AI becoming conscious like humans, we were busy making humans behave more like algorithms. Silicon Valley's brightest minds weren't replacing human consciousness - they were optimizing it out of existence, one notification at a time.
Now imagine this process amplified across every digital touchpoint in our lives. AI systems will take an initial product concept and continuously refine it through relentless A/B testing. They will craft perfectly calibrated marketing messages, send precisely timed follow-up emails, and develop products that evolve in real-time based on user behavior. The experiment never ends; the optimization never stops. We become willing participants in an endless loop of algorithmic refinement, each interaction feeding data back into systems that grow increasingly sophisticated at shaping our behavior.
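At its core, the loop described above is a bandit problem: show a variant, observe the reaction, shift traffic toward whatever engages most, forever. As a minimal sketch (the variant names and engagement rates below are invented purely for illustration), an epsilon-greedy version of that loop looks like this:

```python
import random

# Hypothetical message variants and their true engagement rates,
# unknown to the system -- invented numbers for illustration only.
TRUE_RATES = {"subject_a": 0.04, "subject_b": 0.06, "subject_c": 0.05}

shown = {v: 0 for v in TRUE_RATES}   # impressions per variant
clicks = {v: 0 for v in TRUE_RATES}  # observed engagements per variant

def pick_variant(epsilon=0.1):
    """Mostly exploit the best-performing variant; occasionally explore."""
    if random.random() < epsilon or not any(shown.values()):
        return random.choice(list(TRUE_RATES))
    return max(shown, key=lambda v: clicks[v] / shown[v] if shown[v] else 0.0)

def simulate_user(variant):
    """Stand-in for a real user's tap, click, or scroll."""
    return random.random() < TRUE_RATES[variant]

random.seed(0)
for _ in range(10_000):              # the experiment never ends
    v = pick_variant()
    shown[v] += 1
    if simulate_user(v):
        clicks[v] += 1               # each interaction feeds back into the loop

best = max(shown, key=shown.get)
print(best)  # the variant the loop has learned to favor
```

The unsettling part is not the fifteen lines of code but the closed loop itself: every human response becomes training data, and the system needs no understanding of psychology to reshape behavior, only feedback.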
This algorithmic evolution precipitates a profound shift in human social structures – what we might call the Digital Atomization of Society.
The internet, originally conceived as a tool for connecting humanity, begins to fragment our deepest relationships. We find ourselves increasingly drawn to bespoke digital communities, algorithmically curated spaces that cater to our specific interests and viewpoints. These spaces offer a kind of frictionless connection that can make our physical relationships feel cumbersome by comparison.
The result is a peculiar form of social isolation: we share physical spaces with family and loved ones while our minds inhabit separate digital worlds. We begin to know our immediate family members only through the narrow bandwidth of shared daily routine, while reserving our deeper interests and passions for online spaces where algorithms ensure we find the perfect audience. The richness of human relationships is gradually replaced by a mosaic of digital interactions, each optimized for engagement but somehow less than the sum of its parts.
This transformation gives rise to what we might call the Creative Dependency Paradox.
In this new world, human creativity doesn't disappear – it becomes inextricably bound to algorithmic systems. Artists, writers, and innovators find themselves not just creating for humans, but creating for algorithms that mediate access to human attention. What use is a masterpiece if it cannot navigate the algorithmic gates that control visibility? What value does innovation hold if it cannot engage the mechanical minds that increasingly orchestrate human experience?
Artists who once dreamed of their work hanging in the Louvre now dream of their work trending on TikTok. The tragedy isn't that AI will replace human creativity - it's that we'll voluntarily hand it over in exchange for more efficient distribution.
The tools of creation themselves become a kind of Faustian bargain. If your desire is to have outsized impact, this power will appear as tempting as the sorcerer's hat in Disney's The Sorcerer's Apprentice. But it is a mirage: the very things your heart longed for in the first place - creating something of impact that people enjoy, human recognition, the joy of authentic collaboration - will slip away. Your hope of leaving an indelible mark on future generations will dissolve into the algorithmic march of noise, far greater in volume than any human could ever conquer.
Your life's greatest work, your most important and seminal creations, will disappear into the endless mechanical buzz of GPUs, themselves improving at the pace of Moore's law, so that the whole world can grow ever more efficient at stealing twenty-four hours from every organic mind and from God himself. Like the sorcerer's apprentice, we gain magical new capabilities only to find ourselves caught in loops of our own creation, our creative output increasingly shaped by the very tools we thought would amplify it.
This is where the bottleneck theory reveals its full implications.
The constraint on human progress is no longer technological – it's biological and psychological. Our brains, evolved for a world of scarce information and direct human contact, struggle to adapt to an environment of infinite stimulation and algorithmic mediation. The very organ that drove our species to dominance becomes the limiting factor in our continued evolution.
For those who remain engaged with these systems, the mind becomes a kind of organic voting mechanism, continuously feeding data back into algorithmic processes that grow ever more sophisticated at predicting and shaping human behavior. Each decision, each preference, each moment of attention becomes another data point in an endless process of optimization.
Those who reject this system, seeking refuge in digital asceticism, find themselves increasingly isolated from the mechanisms of modern society. The choice becomes binary: participate in the algorithmic evolution of human experience, or accept a growing distance from the tools and systems that increasingly mediate human achievement and connection.
The proliferation of mechanical minds thus marks not just a technological revolution, but a fundamental shift in the nature of human agency. Our brains, never designed to process such volumes of information or navigate such complex layers of algorithmic mediation, begin to fragment under the strain. Joy, happiness, contentment, and peace become increasingly elusive states, compressed into ever-smaller fragments of consciousness between algorithmic interventions.
The mechanization is not just complete – it is self-perpetuating. Each advance in artificial intelligence further entangles human consciousness in loops of algorithmic optimization. Each breakthrough in machine learning capabilities makes human limitations more apparent. The bottleneck tightens.
And in this recognition lies both our greatest challenge and, perhaps, our last opportunity to shape the direction of our species' evolution.
Volume 2: The Cave of Digital Shadows
In Plato's allegory of the cave, prisoners are chained in a dark cavern, able to see only shadows cast upon the wall by objects passing before a fire behind them. These shadows become their entire reality - they know nothing else. When one prisoner is freed and sees the true world outside, he returns to tell the others, but they refuse to believe in anything beyond their shadow reality. This ancient parable about the nature of truth and perception finds new resonance in our AI-mediated world.
In 2024, our cave isn't quite so dark - it's backlit by OLED screens and notification badges. Our chains are made of dopamine hits and push notifications. And unlike Plato's prisoners, we keep voting to stay inside. After all, the cave has excellent WiFi.
Yet in this mechanized world, a curious phenomenon emerges. Like prisoners in Plato's cave, we sit chained before the flickering shadows of algorithmic projections. But unlike Plato's prisoners, we retain a unique power: we can turn our heads.
For while AI casts its shadows with increasing sophistication - each notification, each recommendation, each carefully crafted piece of content designed to hold our gaze - we possess something the machines cannot replicate: the capacity for metacognition. We can think about our thinking. We can recognize the shadows for what they are.
This is where the deterministic view of AI domination fractures. Each time a human being closes their social media app, each time they choose to put down their phone, each time they consciously step away from the digital cave, they exercise a power that transcends the mechanical. They demonstrate that consciousness, that ineffable quality of being human, cannot be reduced to mere input and output.
The algorithms learn and adapt, yes. But so do we.
Just as our ancestors developed calluses to protect against physical hardship, our minds are developing new forms of psychological resistance. Children born into this AI-saturated world don't just learn to walk and talk - they learn to recognize patterns of manipulation. They develop an intuitive understanding of when they're being led down algorithmic paths designed to capture their attention.
The internet may indeed serve up bespoke communities, but humans are learning to use these connections differently. Instead of allowing digital relationships to replace physical ones, they're learning to weave them together. The teenager who discovers a passion for astronomy online brings that enthusiasm back to the dinner table. The parent who finds a community of fellow artists uses those connections to enrich, not replace, their family relationships.
Yet here we encounter a deeper paradox: if humans can develop psychological defenses against algorithmic manipulation, won't algorithms simply evolve more sophisticated means of influence? Each new layer of human resistance could prompt an equal and opposite advance in algorithmic sophistication. Our very awareness of algorithmic influence becomes data that machines can use to refine their approach. We find ourselves caught in an arms race between consciousness and computation, where the act of resistance itself generates new patterns for algorithms to analyze and exploit.
This is not technological determinism - it is technological dialectic. Every advance in AI's capability to influence human behavior is met with an equal advance in human capacity to choose their response. The mechanical minds may cast ever more convincing shadows, but human consciousness remains the light source that makes those shadows possible.
Consider the artist in this new world. Yes, they may need AI marketers to find their audience, but it is their uniquely human capacity for meaning-making that gives their work value in the first place. The machines can optimize for engagement, but they cannot optimize for significance. They cannot create the deep resonance that occurs when one human consciousness recognizes itself in another's creation.
Perhaps, then, humans are not so much a bottleneck as they are a filter - but not a static one. As Rick Rubin reminds us in his book The Creative Act, we must "actively stretch your point of view. Invite beliefs that are different from the ones you hold and try to see beyond your own filter." In the age of AI, this becomes more crucial than ever. Our role as filters is not simply to restrict or limit, but to consciously expand our perspective, to "purposely experiment past the boundaries of taste," to find those "unexpected surprises" that lie beyond our algorithmic comfort zones.
This dynamic filtering - this constant stretching and expanding of our viewpoint - is precisely what separates human consciousness from artificial intelligence. While AI systems can filter based on predetermined parameters, only humans can consciously choose to transcend their own filters, to "examine approaches you may dismiss as too highbrow or lowbrow," and to grow through that examination.
The brain may indeed struggle to process the flood of information, but it adapts by becoming more discerning, not less capable. Like a muscle that grows stronger under stress, our capacity for conscious choice - for real, meaningful agency - develops through its confrontation with algorithmic influence.
While Silicon Valley was perfecting the art of 'growth hacking' the human psyche, humans were quietly hacking back. Not with code or computers, but with that most ancient of tools: awareness.
The future, then, belongs neither to the machines nor to the Luddites, but to those who learn to navigate between them. Those who understand that true freedom doesn't mean escaping the cave entirely - an impossible task in our interconnected world - but rather developing the wisdom to recognize shadows for what they are, and the courage to periodically turn away from them.
The modern spiritual retreat looks less like a mountaintop monastery and more like a Faraday cage with good coffee. People pay premium prices to experience what their ancestors called 'Tuesday afternoon' - life without algorithmic intervention.
Human agency persists not despite AI, but in dynamic relationship with it. Each advance in machine capability becomes an opportunity for human beings to more clearly define and exercise their unique capacities for consciousness, meaning-making, and choice.
We are not just processors of information competing with faster processors. We are meaning-makers, choice-makers, relationship-builders. These uniquely human capabilities don't diminish in an AI world - they become more crucial than ever.
The mechanization is not complete. It cannot be complete, for the simple reason that mechanization itself is a human tool, not a human destiny. The real question is not whether AI will consume human agency, but how human agency will shape and direct AI's development.
Humans are indeed the bottleneck. And perhaps that's exactly as it should be.
So the question then remains: how do we empower human bottlenecks in a post-AGI world?