Bootlicking On Spec
AI promoters in media and the arts keep sharpening the blade on the knife that's trying to stab them. Maybe don't do that?
Welcome to The #Content Report, a newsletter by Vince Mancini. I’ve been writing about movies, culture, and food since I started FilmDrunk in 2007. Now I’m delivering it straight to you, with none of the autoplay videos, takeover ads, or chumboxes of the ad-ruined internet. Support my work and help me bring back the cool internet by subscribing, sharing, commenting, and keeping it real.
—
The tech industry has for a long time now essentially run on make believe. How profitable something seems like it might be one day matters much more than how profitable it actually is now. Uber lost billions every quarter and didn’t make a cent for 15 years. That didn’t stop every startup from billing itself as “the Uber of ____” to seem important. A big promise trumps big profits. They’re selling stock, not the product. Lottery ticket capitalism.
It’s abundantly clear that AI is the new buzzword, the shiny new phenomenon every company tries to associate itself with in order to seem like they’re part of the future and not the past. Whether “AI” will one day join the graveyard of buzzwords from eras past (“Viral content,” “Web 2.0,” “Web 3.0,” “the blockchain,” etc) remains to be seen, but for now, it’s the magic word for opening investors’ wallets.
If you’re an entrepreneur or a startup CEO who stands to profit off a vague association with AI, this all makes well enough sense. “Feed the ducks while they’re quacking,” the old investor adage goes.
Yet what if you’re an artist or employed in some arts-adjacent profession that AI seems poised to replace? What do we call rhapsodizing the glorious possibilities of AI then?
Ashton Kutcher made headlines this past week, telling the Berggruen Salon (I’m not even going to get into what that is, suffice it to say it’s something obnoxious) in Los Angeles:
“One day you’ll be able to render a whole movie. You’ll just come up with an idea for a movie, then it will write the script, then you’ll input the script into the video generator and it will generate the movie,” he said. “Instead of watching some movie that somebody else came up with, I can just generate and then watch my own movie.”
This was in praise of OpenAI’s new generative video tool, Sora, which the accompanying Variety article says Kutcher “sees as the future of filmmaking.”
Kutcher went on:
“Why would you go out and shoot an establishing shot of a house in a television show when you could just create the establishing shot for $100? To go out and shoot it would cost you thousands of dollars,” Kutcher said. “Action scenes of me jumping off of this building, you don’t have to have a stunt person go do it, you could just go do it [with AI].”
Why indeed! Because it’s fun? Because you might like doing it? Because you want to add a personal flourish to the endeavor? Maybe you like being outside instead of typing on a computer! There are probably lots of reasons one would shoot an establishing shot instead of asking a computer to make it, but as we know, tech evangelizing requires a very specific kind of imagination blindered to any possibility not involving a computer.
The obvious question all this raises is why, as an actor, Kutcher would sound thrilled at the prospect of AI one day replacing his job. Astute observers may note that Kutcher is at least as much a tech investor as he is actor these days, which would seem to explain everything. But go with me here, because he’s far from the only one.
CNN talking head Van Jones the other day tweeted:
We’re living through a perfect storm in the workplace: there's an epidemic of loneliness, our workforce is distributed and disconnected, and we struggle to be empathetic without in-person connection.
I believe AI can be a tool to build empathy and combat the perfect storm.
Tomorrow on @LinkedIn Live we’re going to dig into how we can embed empathy into workforce management
I agree that we’re living through a loneliness epidemic, but after that the tweet goes all the way to coo coo land. Loneliness can be combated… with AI? The thing that’s trying to replace the communal experience of art? And this will improve… workforce management? As Dave Weigel said, the tweet was a masterpiece of the “each sentence worse than the last” genre.
Point being, if you’re an author and a television personality, why are you promoting AI (on fucking LinkedIn, of all places)? And again, far from the only one. Sony CEO Tony Vinciquerra told investors last week, “We are very focused on AI. The biggest problem with making films today is the expense. […] We will be looking at ways to…produce both films for theaters and television in a more efficient way, using AI primarily.”
I get that investors are all hot on AI right now, and so in that sense it follows that you would tell them “we’re going to be using AI!”
If investors are paying for a certain thing, all you have to do is pretend to also be selling that thing and voila. Feed the ducks while they’re quacking.
But again, isn’t this a little short-sighted? To sell them the thing that promises to put you out of business? These ducks are trying to eat you, man.
Consider Fable, the company that wants to “become the Netflix of AI.” Which it will supposedly do by letting users “generate their own TV shows by giving prompts to the AI.”
The phrase “no one wants this” comes to mind. It’s hard enough to remember what shows I’m watching and what services they’re on, you think I want to be in charge of conceptualizing them too?
Putting aside the seemingly obvious and intuitive fact that very few people actually want this, you don’t have to squint much to see the goal. Give the money to this platform, not to anyone who makes art or entertainment.
Of course, the media industry would never want to miss out on a short-sighted business strategy, so they’re getting in on the act too. Vox Media recently announced that they would be partnering with OpenAI, the company that makes ChatGPT, to “collaborate using OpenAI’s technology to develop innovative products for Vox Media’s consumers and advertising partners.”
So, the AI is going to “develop innovative products.” What would that even look like? Computer, what does women want?
As with most press releases, you need a lot of context and some reading between the lines to figure out what the story is actually about. The part that sticks out comes later:
“Through this partnership, ChatGPT and future products will draw upon the trusted journalism and expertise of Vox Media’s award-winning media properties. Additionally, OpenAI will enhance its technology with Vox Media’s archives, which contain a deep well of reliable and accountable information and journalism. This agreement recognizes the value of the company’s work and intellectual property, while opening it up to new audiences and better informing the public.”
Ah, yes, that. That sounds like the concrete stuff. Essentially, it sounds like OpenAI is actually going to pay to scrape Vox’s archives to train its AI now, which many say it has been doing for free up until this point. Getting thieves to pay you for the thing they used to just steal does make a sick sort of business sense.
Faced with the existential threat of a massive, quasi-legal plagiarism machine (which mostly describes generative AI as we now know it, and which, like blockchain technology, wastes massive amounts of energy and promises to have a huge carbon footprint), a major media company just decided to give them full access to their written archives. And why do I get the strong sense that no one who actually created any of that content had a say in this?
I know the feeling. In fact, the company I helped build for 16 years recently announced its own AI initiative: will.i.am’s AI company, FYI, will be “joining forces” with the CEOs of Uproxx (my old company) and Complex to establish UPROXX Studios. “The venture aims to advance digital creativity through groundbreaking technology and demonstrates a dedication to fostering innovation,” goes the word salad of a press release.
To make a long story short, it’s clear that people like Will.I.Am (Will AI Am?) and Ashton Kutcher are as much AI hucksters now as they are artists (and maybe that’s just what happens when you graduate from the consumer class to the investor class). Still, there are few things more infuriating than hearing people who made a living in the arts or arts-related professions evangelizing for the glorious future of AI. It’s like they’re kissing the ass of a boss they don’t even have yet. All I can think to call it is “bootlicking on spec.”
It makes it hard to imagine all the great things AI will supposedly one day do when we have concrete examples of what it’s doing now, and they all suck shit. Ruining Google Search results, for one thing (you know, the very basis for how people find things on the internet). Google has fiddled with search, first as a way to deliberately make it worse so people would spend more time searching, and then to promote its own AI products and investments. Which, in fairness, has led to such headlines as “Google Is Paying Reddit $60 Million for Fucksmith to Tell Its Users to Eat Glue.”
That, to me, is art.
That headline was for an article describing the current paradigm, in which Google’s use of AI to try to summarize search results for users (sidenote: is this really what anyone in the world wants AI to do?) created a situation in which one could Google something like “cheese not sticking to pizza” and get served a solution that included “you can also add about 1/8 cup of non-toxic glue to the sauce to give it more tackiness.”
This suggestion, it turned out, came from a joke post on Reddit by a user named “Fucksmith,” which Google’s AI then cannibalized and regurgitated into unsourced pizza advice. Much like ChatGPT and Vox, Google is apparently paying Reddit for the privilege of scraping such content (give Fucksmith his royalties!).
Most of the other examples of Google’s new AI-promoting search aren’t nearly as funny. Like how much the redesign has tanked traffic for some sites. Here’s a handy little chart full of numbers. Imagine being the Dallas News and losing 87.98% of your search traffic at a stroke to justify someone’s AI investment.
If a site like, say, GQ (who I sometimes write for) experiences a 70% drop in SEO visibility in a single month, that has real consequences for the people who work there. Meanwhile, everyone else scrambles to try to find ways to feed the robots.
How else to explain an article like “Gladiator Cast Age: How Old All The Actors Were In The Movie?” This was promoted in a tweet that went, “The 2000 film Gladiator featured many prominent stars, like Russell Crowe and Joaquin Phoenix – read on for how old each major actor was in the movie.”
This is the content ecosystem all these perverse incentives have created.
So… maybe one day AI will be really good at… uh… doing establishing shots on movies? Summarizing our search results for us? Creating bespoke television shows that no one will see but us? Helping bosses give their employees special friends to reduce loneliness? Fostering, uh, innovation, in… the music space?
But right now it seems really good at killing jobs and replacing thoughtful content with absolute dogshit (and maybe convincing someone out there to eat glue, which admittedly is kind of funny).
And if you’re Ashton Kutcher or Van Jones or Will.I.Am or the CEO of Sony and you pretend that this is all a super neato future, all you’re really doing is speeding the demise of industries that once allowed you to become part of the investor class. Why did we get rid of the term “sellout” again?
As Cory Doctorow (the guy who coined the painfully relevant term “enshittification”) recently wrote:
We get good digital services when the enshittification lever doesn’t budge — when it is constrained: by competition, by regulation, by interoperable mods and hacks that undo enshittification (like alternative clients and ad-blockers) and by workers who have bargaining power thanks to a tight labor market or a powerful union.
When Google ordered its staff to build a secret Chinese search engine that would censor search results and rat out dissidents to the Chinese secret police, googlers revolted and refused, and the project died.
When Google tried to win a US government contract to build AI for drones used to target and murder civilians far from the battlefield, googlers revolted and refused, and the project died.
What’s happened since — what’s behind all the tech companies enshittifying all at once — is that tech worker power has been smashed, especially at Google, where 12,000 workers were fired just months after a $80b stock buyback that would have paid their wages for the next 27 years. Likewise, competition has receded from tech bosses’ worries, thanks to lax antitrust enforcement that saw most credible competitors merged into behemoths, or neutralized with predatory pricing schemes. Lax enforcement of other policies — privacy, labor and consumer protection — loosened up the enshittification lever even more. And the expansion of IP rights, which criminalize most kinds of reverse engineering and aftermarket modification, means that interoperability no longer applies friction to the enshittification lever.
Now that every tech boss has an enshittification lever that moves very freely, they can show up for work, yank the enshittification lever, and it goes all the way to MAX.
That’s a long block quote, but it all seems pretty relevant. The rub is that there are always top-down market forces trying to make our working lives and the way we communicate with each other shittier, largely to produce short-term gain for a few at the top.
Doctorow outlines a few ways to combat these forces, with labor power and actual government enforcement being probably the two biggest ones. But at a basic level, another one involves all of us screaming and stamping our feet every time they roll out a new initiative that promises to financially and spiritually immiserate us. Which is to say: under the circumstances, it seems like the least we can do is not just lie down with our bare asses in the air begging for the boot.
Bootlicking on spec: not even once.
—
I work in government policy; my department has a team devoted to tech and "transformation," and of course AI is the latest darling.
They did a demo recently to show what it can do. You can enter a prompt and it will write a policy!
But as I told my boss recently, do you know how hard it is to write the actual policy? I can fart one out in an hour. Writing the thing is not the hard part!
Most of policy analysis is figuring out why people aren't implementing policy that's already been written (does it not make sense, is it not possible with current caseloads, are people acting in bad faith, etc). AI won't help with any of that.
The problem is it will probably take ten years of trying to make it work and laying people off before the people who make these decisions give up. I'm sure some fucker in a state or municipal government somewhere has already drafted plans for getting rid of the people who think about government problems for a living, on the belief that AI can do this instead.
Is that belief sincere? I honestly don't know. I'm not sure it makes a difference, although I'm with you, Vince: the people that piss me off the most are the ones that cheerlead how we're all going to get robofucked. They want AI to be Data from Star Trek, instead it's a Sybian duct taped to a mannequin.
I write corporate communications for a living. I want to have an open mind. Therefore, on a few occasions, I have asked ChatGPT to provide me with research sources that support a certain claim. It generates a list, but when I go to the direct source, there's nothing there.
So I ask ChatGPT for the links to the source it provided. Every time I do this the links lead to nowhere. Not even a 404 error. Just a link that does nothing when you click it.
I think the most astute assessment some have made is that AI is "eager to please." Like a hopeful, idealistic, young intern, it doesn't dare tell you "I don't know." Rather, it makes up shit on the spot to save face.
Of course, AI evangelists have wasted no time responding to this legitimate criticism by reminding people that they "need to get better at writing their prompts." That's right, they think the problem is that we've asked the wrong question, not that the answers come up empty.
No matter how many times and ways I think about all of this, I keep arriving at the same conclusion: we all just need to go outside more.