The camera on my head smells of Loctite Ultragel Super Glue, and the off-gassing is slightly burning my eyes. It's 2025, there's madness everywhere, and while no one was paying attention, computers learned to see. This isn't just a dystopian headline; for me, it's the thrilling, slightly acrid scent of a future I’m actively building, a future envisioned decades ago.
To understand how we got here, and where this head-mounted camera might lead, we need to rewind to Vannevar Bush. He was the scientist's scientist, a veritable superhero of 20th-century science. Bush didn't just help defeat the Nazis; he architected the American innovation ecosystem that has kept the U.S. a global leader in science and technology for generations. Think National Science Foundation, the modern Department of Energy – that's Bush.

For us Computer People, Dr. Bush is the visionary who, in his seminal 1945 article "As We May Think" (published in The Atlantic Monthly; a scan of the original is available directly from The Atlantic), imagined the internet and its interconnected hypertext wonders fifty years before they materialized. He described a fantastical electromechanical system called the "MEMEX" – the Memory Extender.
It was a fusion of period technologies: instant photography, microfilm, projection, and, most notably for my current eye-stinging situation, a head-worn "cyclops camera." Bush saw this as a machine to help scientists make sense of the overwhelming deluge of information, and he went further still, proposing a "Thinking Machine" and a "Supersecretary."
Well, hey internet, guess what? I built a version of the MEMEX with Gemini and that camera glued to my head!
And it actually works. The future, as they say, is NOW.
My "Cyclops Camera" feeds what I see, and what I read, directly into the AI – Gemini 2.5 Flash Preview Native Audio Dialog, a real-time audio/video streaming API connected to a custom version of the Gemini 2.5 model.
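To give a feel for the plumbing, here is a minimal sketch of the kind of loop such a system needs: the camera produces frames far faster than you want to ship them upstream, so you throttle to roughly one frame per second before encoding and sending. The `send_to_model` stub stands in for the actual Gemini Live streaming session; the sampling interval and all names here are illustrative assumptions, not the author's actual code.

```python
# Hypothetical sketch of the cyclops-camera frame loop.
# Real capture (e.g., OpenCV) and the real Gemini Live session are stubbed
# out; only the frame-throttling logic is shown, and it is self-contained.

def throttle(timed_frames, min_interval_s=1.0):
    """Yield only frames whose timestamps are at least min_interval_s apart.

    timed_frames: iterable of (timestamp_seconds, frame) pairs, in order.
    """
    last_sent = None
    for t, frame in timed_frames:
        if last_sent is None or t - last_sent >= min_interval_s:
            last_sent = t
            yield frame

def send_to_model(frame):
    # Placeholder: in the real pipeline this would JPEG-encode the frame
    # and push it over the live audio/video streaming connection.
    print(f"sending {frame}")

# Example: a 30fps-ish burst of frames, tagged with capture times.
stream = [(0.0, "f0"), (0.3, "f1"), (1.1, "f2"), (1.9, "f3"), (2.2, "f4")]
kept = [f for f in throttle(iter(stream))]
# Only f0, f2, and f4 survive the 1-second throttle.
for f in kept:
    send_to_model(f)
```

The design choice is simply bandwidth versus freshness: a page of a book changes slowly, so sampling around 1 Hz keeps the model's view current without flooding the connection.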
Suddenly, collaborative human-machine reading isn't just a concept; it's a reality. This system bridges the gap between the thinking tools of old – the magic of books and paper – and the unlimited access and recall of generative AI. Talking to magic books is no longer a metaphor. The tangible interface retains its high-fidelity interaction: I can use my HANDS, touch the pages, riffle their edges, dog-ear the ones I want to revisit, highlight with actual markers, and make margin notes with a pen. But now this physical engagement is augmented, creating a loop where AI becomes a true partner, a convivial tool in service of accelerated understanding.
Imagine: How could such a system help an expert wrangle sense from fractured narratives? How could it help people learn to read, or even master a new language?
This is why we need to spend our time thinking, talking, and playing with AI – to figure out the amazing things we can do. The hand-wringing about the meta-narrative, the techno-economic factors? Much of that is a dialogue about conditions long since baked into the market dynamics and economies of scale that allowed AI to emerge in the first place.
Is it a tool that tends to agglomerate power into a small number of organizations? Yes, it is. But those organizations are also the ones that built the largest computer networks and supercomputing systems ever conceived. Does it make sense to talk about the lineage of the entire project, from the NSF funding the two Stanford graduate students (hello, Larry & Sergey) who birthed Google, to the fact that AI research has been a DARPA domain since its inception? Absolutely. The internet itself, the computer, the transistor – all of it flows from that same remarkable innovation ecosystem where government collaborates with industry to deliver the future.
The emergence of AI represents a chance to reimagine our relationship with computers: how we use them, how they can augment our capacity for discovery, how they can better connect us to each other, and ultimately, how they can help us become the best versions of ourselves.
These facts do NOT obviate the need for robust conversations about AI safety, the role of open-source AI models, and the imperative for equitable, distributed access to AI inference. My opinion here is that as technologists, designers, and policy futurists, we must engage in intentional play. What we have before us is a technology so powerful that our collective imagination is required to even begin to understand the possibilities it can unlock.
Sure, for now, you might have to wear a camera on your head (Loctite smell and slightly burning eyes included). But just imagine the possibilities. As the conversation about new devices beyond the smartphone—devices promising a persistent connection to AI, sharing the context of our personal perspective—gains momentum, how will we harness this power in holistically positive and humanity-amplifying ways?
That’s the question we need to answer, together.
-Rauchwerk