(Welcome to the Entertainment Strategy Guy, a newsletter on the entertainment industry and business strategy. I write a weekly Streaming Ratings Report and a bi-weekly strategy column, along with occasional deep dives into other topics, like today’s article. Please subscribe.)
I don’t like hyperbole, especially in headlines. And I don’t want to be mired in negativity, cynicism or pessimism.
But you read the headline, right? “Four Horsemen of the Hollywood Media-pocalypse” sounds pretty hyperbolic and negative, but as I wrote in last week’s “Most Important Story” column, it’s dark out there for the entertainment industry, especially the LA area.
So today, I’m taking a look at four trends that worry me most for Hollywood. Ironically, I’ve got good news for three of the four topics today.
Let’s start by introducing those Horsemen and why I chose them.
Introducing the Four Horsemen
My “Four Horsemen of the Hollywood Apocalypse” are roughly:
- Piracy
- Linear cord-cutting (and its replacement, streaming, which isn’t as lucrative as the old revenue models)
- Death of theaters
- AI
Sans these threats/headwinds, Hollywood would be in a much, much better place than it is now or will be in the future. Piracy enabled cord-cutting to a much greater degree than people knew. Streaming, though profitable now, isn’t nearly as profitable as the old cable bundle, and Hollywood can’t afford to lose another revenue stream like theaters. As for AI, well, it’s unclear whether it will lower costs or just cause widespread job losses.
It’s unclear what will happen with all four issues. They could get better, or they could get worse. And that’s why I want to track them long term.
Because I try to be epistemically careful, I have to ask: what other threats does Hollywood face? What am I leaving out? Here’s what I left on the cutting room floor (for now):
- China boxes out America. Is this a “risk”? More precisely, it “was” one, because it already happened: China boxed American studios (including Netflix) out, and they’re less profitable because of it. If anything, things can only get better from here.
- Aggregeddon. As I wrote earlier this year, we haven’t seen “Aggregeddon” (or bundlers coming to control all the profits from content creators) come to pass yet. The platforms/operating systems still have an enviable position (taking ridiculous cuts, sometimes as high as 30%), but they don’t quite control streaming and don’t look like they will. If anything, the next category could make it better…
- Antitrust. This is a risk for Hollywood studios individually, but collectively, greater antitrust enforcement would be great for the entertainment industry as a whole (more competition is better), so this isn’t a risk.
But I would add one topic:
- The rise of social media/social video/video games.
Basically, filmed entertainment has more competition than ever, to which I would say: yeah, that might actually be a fifth “Horseman of the Hollywood Apocalypse”! More and more eyeballs are going to smaller and smaller screens; the question is how soon, how much, and how fast. Frankly, I think this trend is a bit overhyped, but I want to make that argument in a much longer article.
AI: EntStrategyGuy’s Policy on AI, How I’m Using It, and Why It Isn’t Ready for Prime Time
I want to do something that I think every single writer, reporter and publication out there should do: provide total transparency on my use of AI. This summer, after much hesitation and trepidation, I finally got around to experimenting with AI. And I want to be upfront with you, my readers and audience, on how I’m using it.
First off, some reassurances:
- AI will never write content for anything on the EntStrategyGuy website or newsletter.
- Any data collection will be rigorously double-checked by a human.
So that’s the theory; how’s the practice? How am I actually using AI? In short, I’ve been…
- Using it to put some top ten chart images into spreadsheets.
- Experimenting with writing social media content.
- Trying to use it to format links.
- Trying to use it to find links to IMDb, Wikipedia, and other websites for specific films.
How did it go? Not well! Honestly, I can’t get AI/LLMs to do most things consistently or correctly. The output is borderline incompetent, filled with mistakes, and it’s barely saving me more time than I spend trying to use it. Whenever I use ChatGPT or Claude, the results are littered with mistakes on a wide variety of pretty basic tasks. And to be clear, this was on the most advanced models. (I paid for one LLM and have since cancelled it to switch to another one.)
Just yesterday, I asked ChatGPT to give me the headline, date and author for a link, the exact sort of task I want to use ChatGPT for: to save me time with research. For the first link, it gave me this:
“The headline is: “OpenAI just unleashed an alien of an AI” Understanding AI “27-Sep-2023,” by Brian Chau.”
It managed to get the headline, the date, and the author wrong. The bigger issue is that LLMs often won’t tell you when they can’t find an answer; they just make one up instead.
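For comparison, this is the kind of task a few lines of deterministic code can handle, which is part of what I mean later when I say basic programming beats LLMs on this work. A rough sketch, not my actual script: it assumes the page exposes standard metadata tags, and the URL is a placeholder.

```python
# Rough sketch: pull a page's headline, date, and author from its
# standard metadata tags. Unlike an LLM, if a field is missing,
# this returns None instead of inventing something.
import requests
from bs4 import BeautifulSoup

def page_metadata(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    def meta(*names):
        # Return the first matching <meta> tag's content, if any.
        for name in names:
            tag = (soup.find("meta", attrs={"property": name})
                   or soup.find("meta", attrs={"name": name}))
            if tag and tag.get("content"):
                return tag["content"]
        return None

    return {
        "headline": meta("og:title") or (soup.title.string if soup.title else None),
        "date": meta("article:published_time", "date"),
        "author": meta("author", "article:author"),
    }

# Placeholder URL, for illustration only.
print(page_metadata("https://example.com/some-article"))
```

The point isn’t that this snippet is clever; it’s that when a tag is missing, you get nothing back, not a confidently invented author.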
Here are other mostly failed attempts at using AI, with one tentative success:
- I tried using an LLM to write social media content to promote my articles; it was horrendous. Being trained on social media content, LLMs parrot the worst, most basic, most unreadable social content out there. Even though I don’t prioritize social media (which is exactly why I was hoping an LLM could speed up the task), it couldn’t create anything usable. It also often wrote tweets saying the exact opposite of what the article actually said. (This makes me nervous for people using this to summarize research.)
- The two LLMs I’ve used were unable to find links to Wikipedia, IMDb, or the streamers. In general, they can’t search the web, and for my purposes, that’s a lot of what I’d need them to do. (This may be a problem with the specific LLMs I used, so I have more research to do. It’s also the kind of task a short script handles fine; see the sketch after this list.)
- I use LLMs to put images of charts into spreadsheets, but the process can be pretty spotty. They can summarize the data pretty well, but when I ask them to add columns (like the date) or do anything involving steps or logic, they can’t do it consistently, and I have to fix a lot of mistakes.
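On that link-finding point, here’s the kind of short script I mean. It hits Wikipedia’s public search API directly, so it either returns a real URL or nothing at all; the film title below is just a stand-in example.

```python
# Sketch: find the Wikipedia link for a film via Wikipedia's public
# opensearch API. No LLM involved, so no invented URLs.
import requests

def wikipedia_link(title: str):
    data = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "opensearch",
            "search": title,
            "limit": 1,
            "format": "json",
        },
        timeout=10,
    ).json()
    # opensearch returns [query, [titles], [descriptions], [urls]]
    urls = data[3]
    return urls[0] if urls else None

print(wikipedia_link("The Godfather"))  # stand-in example title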
To be fair, my team and I are still learning how to use AI, and we need to do more work and research. (I’m going to keep using AI/LLMs in some limited fashion in case they really are the future.) In particular, my researcher is looking into the best techniques to get the results we need. But for now, to get the data right, basic programming can do way, way more at a much more consistent quality level.
I refuse, almost on ethical grounds, to use AI to write the Streaming Ratings Report, but let’s be honest: it’s nowhere close to being able to do this right now. (I’ve changed my mind completely on this compared to a year ago.) There’s way too much data, and there’s no world where it could synthesize it coherently. Plus, since it’s been trained on the web, even if I fed it the right data, it would spit out the wrong conclusions. I would know, since I asked it to summarize my article on Formula 1 for tweets, and it wrote about how popular Formula 1 is, even though the article I fed it said the exact opposite.
So…Is AI Ready for Its Revolution?
Again, none of this inspired confidence in AI/LLMs, which leads me to ask: how are people or companies integrating AI or LLMs into their work? (The following complaints don’t apply to more focused machine learning, which can work very well on specific, targeted problems when supplied with accurate data and solid feedback.)
I understand some use cases. AIs and LLMs are excellent at transcribing human speech, writing essays, and brainstorming, but they fail at almost everything else, especially anything novel or one-of-a-kind. Though, to be honest, it wouldn’t surprise me if a lot of transcription has a huge amount of error too, leading to brutal misunderstandings or miscommunications. I have a feeling that a lot of the incredibly optimistic translation use cases for global streamers might be a few years away…and will require incredible amounts of energy and, thus, money.
One famous tech reporter said on a podcast last year that they have AI read research reports and academic articles and summarize the results. Based on my experience, it wouldn’t surprise me if those summaries fundamentally misunderstood the articles.
I worry/suspect that many workers are being tasked with using AI/LLMs and finding that it’s not saving them much, if any, time, but no one’s talking about it. Like this article on how coders aren’t actually becoming more productive with AI, despite what everyone says.
I’d be really, really worried if I were a corporation and used AI to automate lots of work (like data collection for, say, tax returns) and wouldn’t be surprised if, down the line, we find that tons and tons of data is loaded with lots and lots of mistakes. If I’m the governor of California, I wouldn’t be in a rush to make deals with AI companies to solve the homeless crisis, fight fires or analyze the budget.
The internet was expected to make the economy more productive, and those gains never materialized (because it’s also distracting and inefficient); I’m worried that, in the short term, the same will be true of AI. People expect massive improvements that just aren’t there. Yet. And I haven’t even mentioned the price and energy costs of building and running LLMs right now.
The First Horseman: Here’s What Happens When Talent Flexes Their Political Muscle
One sneaky reason I’m starting this series is that I have so much to say about AI that I just need to dedicate a column to discussing it every quarter or so to get these thoughts out. Also, there’s just so much news. Every day, there’s a new headline about AI. And almost every week, there’s a new update on AI and Hollywood.
The main AI theme of the last two months was Gavin Newsom working on behalf of big tech companies (with one big exception). On the negative side, CA legislators were trying to force Google to pay news publishers (à la similar legislation in Australia and Canada, which really helped publishers!), but instead, Gavin Newsom made a deal with Google to pay publishers $250 million (in tax-deductible money)…to train AI to replace reporters.
Then Newsom vetoed ground-breaking legislation to regulate AI, despite a big push from Hollywood talent to pass it and broad popular support. (Nancy Pelosi also came out against it, leading to speculation that she’s setting up her daughter to run for her seat, and they’ll need money from big-pocketed tech donors. Relatedly, voters in CA think that tech companies and Silicon Valley have too much influence in politics.) He also made a deal with Nvidia to increase AI education in California schools, and he announced that California is using AI/LLMs to do things like solve the homeless crisis. (I’d add that Newsom also vetoed other California bills to regulate private equity and healthcare.)
We’re just getting started with this issue, but the rest is for paid subscribers of the Entertainment Strategy Guy, so if you’d like to find out…
- The latest update on the US box office and why there are bright spots
- The latest on cord-cutting
- SAG-AFTRA’s big AI wins and why it matters
- Hollywood finally taking some needed action on piracy
- And more…
…please subscribe! We can only keep doing this great work with your support. If you’d like to read more about why you should subscribe, please read these posts about the Streaming Ratings Report, why it matters, why you need it, and why we cover streaming ratings best.