I used AI for the first time.
Well, not the first time. We’re all using AI all the time, or AI is using us, through our social media feeds, web searches, email solicitations, streaming media recommendations… If you’re touching a smartphone, you’re interacting with AI.
But I did my first real AI project this week, both as a way to get something done that’s just been too big a lift for too long, and as a way of engaging with technology I’ve been avoiding for way too long. I think I was motivated in part by a bunch of campaigns saying not to use AI: AI is the enemy, it’s unethical to ask a single prompt, AI will destroy the environment, you’re taking away someone’s job, you’re feeding the thing that will one day become conscious and kill us all.
As the original spokesperson for Team Human against transhumanists like Ray Kurzweil who argued we should pass the evolutionary torch to our robot successors, I felt it was my responsibility to really learn this thing rather than just opine on its impact.
I admit, as an older GenXer, I was kind of just hoping I could sit this one out. I’ve learned every new tech since programming memory addresses for hardware control of 8-bit systems, through BASIC and Fortran, to C++, HTML, and CSS stylesheets. I learned databases, social media platform architectures, algorithms, Tor networks, and even blockchain from the inside out, just in case people came to their senses and used crypto for something better than extraction.
I learned MIDI, linear editing, non-linear editing, Macromedia Director, Maya, Adobe blah blah blah. And every time, my wealth of knowledge about one tech was obsolesced by the next. Who can keep up with everything? I was thankful when they got rid of Flash because I’d never had time to learn it.
It’s not that I’m a programmer, but if I’m going to speak about these landscapes, their biases, and what they may or may not do to humanity, I should know how they work. Program or Be Programmed, right?
So I was hanging out with my friend Benjamin, who runs a terrific Meta-Crisis Salon in Brooklyn that I’ve been attending, and he had just taken a course in vibe coding. Vibe coding is when people build a whole app or platform without any coding knowledge — just an AI partner. And he was jazzed, showing me all these dashboards he’d created to find trends in his communications impasses, scour the web for events he’d be interested in, monitor climate change, model social networks… you name it. It was like he caught a bug.
And I’m thinking, yeah this stuff is cool, but at what cost? I’ve got friends who are now making animated movies about really deep stuff, in highly crafted detail. There’s real thought and hours going into these creative expressions. We got into the question of whether these cool, potentially useful and socially beneficial programs could be worth the unseen costs of AI, such as the environmental damage and resource depletion and labor displacement. It was hard to know.
So we decided to convene a “hackathon” the next weekend, in the back room of a vegan restaurant in the East Village called Caravan of Dreams that we’re trying to rescue. A hackathon would bring a bunch of new customers into the space, while also seeing what a group of mutual aid and social good pioneers might think to do with AI.
As one of the conveners, I figured I should find out something myself. And I have this website sitting on WordPress, with a bunch of custom code for its little features, and every time WordPress does a security update, one of those things breaks. And I’m stuck thinking, is it worth the time to fix this one, or do I just make a new site? I could go on one of the for-pay platforms like Wix or whatever, where they actually have AIs assisting you in the building of your website, or I can build it from scratch with the coding module of an LLM (Large Language Model) like Claude. Use it as an opportunity to get my hands, or even my soul, a little dirty. See what the fuss is all about before the hackathon.
And, to be honest, the first thing I thought was “what will people think if they find out my website was coded by AI? Will they think I’m a turncoat? What about the web design job he just destroyed?” And so on. Part of the reason I needed to turn to AI is that I am dealing with a lot of data here. A few thousand articles, reviews, interviews, book chapters, videos, podcasts…all in different file types, disorganized. Archives from previous websites in proprietary formats…. A curatorial nightmare I would not wish on any of the students I hire for help.
As for the environmental destruction, well, that’s part of the test here. I wanted to see just how much money it cost to get a job like this done. I am fully aware of the oil, rare earth metals, cobalt, and unseen labor we are leveraging to prompt an AI system. But how much are we really taking, and what can we create in the process? Could the destructive impact of using AI ever be outweighed by the creative output? And how much better or worse is it than all the other ends-justifies-the-means compromises we make in our choice of meal, clothes, energy, entertainment, or transportation? Who uses more energy: me building a website, or the person protesting against that website by serving video on TikTok?
Instead of using a website and asking questions, I installed an AI called Claude into the Terminal of my Mac computer. This way, I could have Claude do tasks for me with my files on my own machine, and then publish those files to my website on GitHub, which is just a place on the Internet where it’s really easy to upload files and test things. I put a folder on my desktop with all the stuff I wanted on my website, and told Claude the basic architecture: what I want on the home page, my books page, an archive of my articles…and so on. Claude kept the website’s local files on my computer until I was ready to publish them on the net. It’s there right now, at Rushkoff.com. Responding to my desire for transparency, Claude posted a menu of the themes we rejected along the way.
It took about five hours, all told, instead of maybe a month. The real achievements are the searchable archives of articles. And now, it’s a totally changeable website. I can go to the website editor on my laptop, paste the link of a talk I’m going to do, and it will create an upcoming event with all the information. I can even say “add a new page for Team Human, with an embedded YouTube.” Or “let’s create a prompt for people to query my entire body of work, that includes a meter showing how many kilowatt-hours of electricity it took to generate the response.”
Speaking of which, when I was done, I held my breath and took a look at how much energy it cost to do all that work. How much water? Not just building the website, but organizing and categorizing all those thousands of files? Given all I’ve read and been told about AI’s massive energy costs, I figured it would be the equivalent of a round-trip flight to Istanbul. So I checked the tokens I’ve used so far for my entire Claude experience. Just under five dollars’ worth.
Even if Anthropic is effectively subsidized and getting energy cheaper and hiding certain externalities, even if we want to be super pessimistic and say that the company is somehow using four times as much energy as they’re charging me for, we’re looking at twenty bucks of energy. And as far as intellectual property, my AI web designer partner was benefiting from the design and user interface strategies of a myriad of human designers, as was I. So were all the alternative platforms I might have tried. It was getting hard to take an absolutist stance against this tech.
I finished just in time to go to the hackathon. People came up with some really good ideas. One person was using AI to do a meta-analysis of intentional communities, to see what the few successful ones had in common, and what were the biggest reasons the great majority of them failed—a research project that could take a dozen grad students years of analysis, and they may still miss the truly relevant variables.
My favorite project was the simplest: an “I have/I need” bulletin board: basically, the original Craigslist on digital steroids. An Amazon killer, where people list what they are willing to give away or provide in service, as well as what they need. The kid who needs a bike gets one from the kid who outgrew hers. And because it’s built on AI, it can be a dynamic database that matches people geographically when it’s a thing like a bike, linguistically when it’s verbal, or stylistically when it’s clothes.
Moreover, because it’s being conceived by social activists instead of a team of hired engineers, it’s valuing horizontalism, mutual aid, and trust instead of expedience, profitability, or scale. At worst, they get a working prototype for something they can bring to a nonprofit, which can bring on real engineers to build the more durable version. Their working prototype didn’t even cost the five bucks of tokens.
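For the technically curious, the matching logic they described could be sketched in a few lines. This is purely my own hypothetical stand-in, not their actual prototype; all the names, fields, and coordinates here are invented for illustration, and a real system would use actual geocoding and language matching:

```python
# Illustrative sketch (all names hypothetical): a tiny "I have / I need"
# matcher that pairs each need with an offer in the same category,
# preferring the geographically nearest offer for physical goods.
from dataclasses import dataclass

@dataclass
class Listing:
    person: str
    category: str       # e.g. "bike", "tutoring"
    physical: bool      # physical goods get matched by distance
    location: tuple     # (x, y) stand-in for real geocoding

def distance(a, b):
    # Plain Euclidean distance between two (x, y) points
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def match(needs, offers):
    """Pair each need with the best remaining offer in its category."""
    matches = []
    available = list(offers)
    for need in needs:
        candidates = [o for o in available if o.category == need.category]
        if not candidates:
            continue  # nothing on the "I have" board yet
        if need.physical:
            best = min(candidates,
                       key=lambda o: distance(o.location, need.location))
        else:
            best = candidates[0]  # a real system might rank by language or style
        matches.append((need.person, best.person, need.category))
        available.remove(best)  # each offer can only be claimed once
    return matches

offers = [Listing("Ana", "bike", True, (0, 0)),
          Listing("Ben", "bike", True, (5, 5))]
needs = [Listing("Kid", "bike", True, (4, 4))]
print(match(needs, offers))  # → [('Kid', 'Ben', 'bike')]
```

The kid who needs a bike gets matched with the nearer of the two people giving one away. The AI layer the hackathon group imagined would sit on top of something like this, handling the fuzzy parts: deciding what counts as the same category, which language a service is offered in, whose clothes match whose style.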
If their vision holds, more and more people will start functioning in the peer-to-peer economy we imagined in the 1990s, before Airbnb and Uber replaced what we used to call couch-surfing and ride-sharing. The excitement in that otherwise unused room in the back of Caravan of Dreams reminded me of the early, cyberpunk-infused Internet era. Are we in the same place? Can we learn from our earlier mistakes?
Is it foolish to think we can use the master’s tools to take down the master’s house? I want to believe. Are we in another moment of great potential, or is this another momentary mirage in the endless march of capital through new incarnations of exploitative technology? Is it already too late? Is this just another ends-justifies-the-means rationalization?
Can we beat them at their own game by bringing our best and brightest to the fore? Or has this technology already been monopolized by those who mean to colonize our last bits of attention and coherence? Can we de-colonize the eschaton?
It won’t be easy. We’re working against some powerful countervailing forces using the same technologies. I don’t just mean the obvious players like Elon Musk or Sam Altman, but tech billionaires who have been working behind the scenes for decades. The scariest one to me is Larry Ellison, founder of Oracle.
Oracle’s first customer was the CIA — the company is named after a CIA database project from 1977. Between 2014 and 2024, Oracle acquired companies like BlueKai (browser tracking), Datalogix (linking online behavior to purchases), and AddThis (device fingerprinting) and merged them into the Oracle Data Cloud. By 2016, Larry Ellison said that Oracle had data on five billion consumers. What’s he doing? (Drey Dossier covers this beat the best.)
But there’s more: Project Stargate, Larry Ellison’s joint venture with OpenAI and SoftBank, funded by all sorts of sovereign wealth funds and announced by President Trump on January 21, 2025, at the White House, was billed as a $500 billion AI infrastructure project. It’s essentially a massive buildout of data centers to power next-generation AI. Ellison says it’s mostly a healthcare database to prevent or cure cancer. But Ellison has also publicly spoken about AI-powered surveillance — telling investors in 2024 that citizens would be “on their best behavior” because of constant recording and reporting. Ellison’s AI would act as an “ever-present supervisor,” analyzing every police body cam and doorbell camera. Nations would unify all citizen data, including genomic data, into a single AI-accessible database.
Just a couple of weeks ago, Oracle published a blog post announcing the U.S. government authorized it to run generative AI on federal government data, including Medicare records and military systems, at the highest civilian and DoD security clearance levels.
Are we contributing to these efforts when we work or play with AI, pay for pro accounts, or even just watch Ellison’s Paramount media channels or Oracle-owned TikTok? Does building on AI platforms work against even the best intentions of the projects we build? Not to mention the untold amounts of human labor and energy and water being extracted under the most exploitative, usurious conditions?
I really, honestly, don’t know. Anthropic, the company behind Claude, is supposed to be the good guys—dedicated to human-centered AI and strict guardrails against all this nastiness. And while they’re putting up a pretty good fight against Trump’s efforts to commandeer all AI technology for his crackdown on dissent, they themselves admitted they have to rescind their initial promise not to release AI models if they can’t guarantee proper risk mitigations in advance.
Now on the one hand, it’s a more honest stance. Who can guarantee anything about a technology like AI, which has emergent properties and behaviors no one can really predict? If they spend time and energy on guardrails that may not even work, they will be outpaced by all the companies who don’t give a shit about such things. But if they don’t, then is it a safe place to build the pro-human, pro-social applications my friends and I are conjuring together in the back room of a vegan restaurant?
I like to think Anthropic’s refusal to become part of the US government’s militarization and surveillance apparatus is more important, and a better place to draw their red line. I’m less afraid of a rogue AI than I am of a rogue president or a dozen rogue tech billionaires using AI.
As I see it, the object of the game is to weigh the positive potentials of these technologies against their extractive and sociopathic ones. To treat them like any other technology with dangerous downsides: Is this car trip worth the gas, the pollution, the oil wars? Is this YouTube post worth the algorithms and data servers? Is this vibe-coded program worth the AI cycles?
But the bigger question remains: Can we lean into the liberating, pro-human capabilities of these technologies before they become unrecognizably incapacitating? In a lot of ways, this feels like the internet in 1994, before it became AOL and then Facebook, when it was still driven by a counterculture looking to expand the collective human imagination. Or the blockchain, when it was characterized more by Occupy Wall Street’s drive toward mutual aid than the investor’s fixation with token speculation. Could this moment be different? Did we learn our lessons? Instead of using AI to make cheap, soulless replicas of Hollywood movies and putting creatives out of work, might the real opportunity here be to build platform cooperatives and community-created, worker-owned alternatives to Uber and Amazon?
You tell me. Let’s decide this together. Use the comments to let me know: have you used AI? Was it worth it? Do you think we can use these technologies to beat those who would control us at their own game? Can we use them to help build widespread networks of sharing and mutual aid? Is the true Luddite position still okay? The Luddites were not against technology, just its use to exploit labor. Could we use AI to move toward an increasingly jobless society with Universal Basic Income, optimized for leisure? Or should we turn the other way, and refuse its shortcuts and its ability to do pattern recognition on a scale beyond anything we’ve known before? How about small language models run on public or community servers?
The negativity surrounding AI today is justified, but it may be too early to write AI’s epitaph. Can we use it to re-invent the virtual infrastructure for good? Even if it starts out small, anything that actually works can be modeled, templated, and shared. Is this our moment, or is it another mirage? Am I seeing something here, or am I just high on my own supply? You tell me.
I’ll be moving this whole Substack over to Patreon in June. The advantage is that with one membership you’ll get access to all my writing, ad-free versions of my podcasts and videos, access to my Discord server, participation in premieres of new episodes and interactive salons with me, and even free access to live events in NYC and beyond. Plus, a little birdie told me that things at Substack may get funky really soon.