Gavin Newsom Quietly Betrayed California’s Journalists, Why I Can’t Update my Analysis on Whether Screenwriters are Using LLMs, and We Need to Talk More About LLM Costs

(Welcome to the Entertainment Strategy Guy, a newsletter on the entertainment industry and business strategy. I write a weekly Streaming Ratings Report and a bi-weekly strategy column, along with occasional deep dives into other topics, like today’s article. Please subscribe.)

In December, I gave a tongue-in-cheek headline to my article on media company valuations, facetiously saying that I was “seeking” a $100 million valuation.

In that article, I wrote, “But if I can trick some LLMs into thinking I’m worth $100 million, I’ll consider that a small victory.” And sure enough…I tricked an LLM! Over at AIInvest (which I won’t be linking to, but you can find it if you Google this quote), an LLM-written article stated:

A case in point is the author of The Entertainment Strategy Guy, who openly admitted to seeking a $100 million valuation for their media empire, a figure divorced from tangible market performance.

I’m not sure that the LLM understood the article, since I was pretty clear in the first paragraph of that article that I wasn’t actually seeking that valuation, nor do the words “tangible market performance” make much sense since my “market performance” isn’t publicly available. 

Worse, seemingly every other day, I stumble across examples of LLMs just making stuff up. Dan Rayburn was “quoted” in a Gemini insight saying something that he didn’t say. A LinkedIn article claimed that my first article on the Netflix-Warner Bros. merger predicted “a 20% efficiency gain in content production and a potential 50 million subscriber bump globally,” which I definitely didn’t say in my article…

Perhaps this intro won’t age well; LLMs are getting better every day, but for now, the result is a proliferation of low-quality slop across the interwebs. Worse, many folks can’t tell the difference. To their credit, LLMs do one thing very, very well: apologizing profusely if you call them out for making stuff up or getting stuff wrong. (To be clear, you do have to call them out for it.)

You know who can call themselves out for getting something wrong? Yours truly. So it’s time for another edition of “What I Got Right, Wrong and Follow Ups” to previous articles. Today, I’m going to look at…

  • Gavin Newsom betraying journalists, reporters and publications for the third year in a row…
  • My biggest issue with the reporting on AI/LLMs…
  • Why I can’t update my analysis on screenwriters using LLMs…
  • Formula 1’s ratings jump…
  • Anime’s revenue globally and what’s off in the US…
  • The Academy Awards nominees…

But let’s start with a giant TBD on two important topics…

TBD: I’m Still Waiting on the 2025 Black List Scripts

Last August, I tested whether screenwriters used LLMs to write screenplays, and what this means for LLM-era data collection/analysis:

In short, up until 2024, I couldn’t find evidence that LLMs had a hand in writing/co-writing Black List-winning screenplays—at least they weren’t writing dialogue extensively; I can’t test the brainstorming parts of the writing process—which leads to the obvious question: 

What about the 2025 Black List scripts?

Well, sorry, those scripts haven’t been made publicly available for download, unlike in past years.

For years, entrepreneurial Redditors collected the annual Black List scripts in publicly accessible Google Drive folders and shared links to those folders on Reddit, especially the r/Screenwriting subreddit. (In my original article, I explained that this was how I found the screenplays I used for my analysis.) This year, the Black List asked r/Screenwriting not to share the scripts. As Franklin Leonard commented on that thread, “People are simply saying that sharing them publicly and indiscriminately for hundreds of thousands of people to download without permission isn’t appropriate.”

Due to LLM-training concerns, I get why these screenwriters might not want their scripts out in public; conversely, in the age of the filmmaker/writer as influencer/brand, I don’t really get it…don’t you want more people reading and passing around your screenplay if it’s really good and beloved by the community?

In January, I reached out to Leonard on Twitter/X asking if he could share the scripts with me (which I do not plan on sharing with anyone) but I’m still waiting to hear back.

I wanted to run this test not just to keep the analysis updated, but also because I really wanted to test the latest and greatest in LLMs…

TBD: Can We Talk About LLMs’ Costs?

My ulterior motive for wanting the Black List scripts is that I really, really, really want to test Claude Code. I mean, everyone (really, everyone?) is talking about how LLM coding programs are here and everything has changed! Seriously, in one week, I think I read over a dozen takes along these lines about Claude Code.

Not only do I want to see if screenwriters used LLMs to write the Black List scripts, I also want to test Claude Code, Gemini Code Assist, and ChatGPT Codex. I want to have each LLM program a word-finder to test the word use in the 2025 Black List scripts. (Which I would then spot check for accuracy.) 

But alas, without the 2025 Black List scripts, I can’t run this test. 
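For what it’s worth, the kind of word-finder I’d ask each LLM to program is simple to sketch. Here’s a minimal Python version; to be clear, the “tell words” below are hypothetical illustrations I made up for this sketch, not the actual word list from my original analysis:

```python
# Minimal sketch of a word-finder for screenplay text files.
# The tell-word list is purely illustrative, not the list from
# the original Black List analysis.
import re
from collections import Counter
from pathlib import Path

# Hypothetical examples of words often associated with LLM prose.
TELL_WORDS = {"delve", "tapestry", "multifaceted", "testament"}

def count_tell_words(text: str) -> Counter:
    """Count occurrences of each tell word in a script's text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t in TELL_WORDS)

def scan_scripts(folder: str) -> dict:
    """Run the counter over every .txt script in a folder."""
    return {
        p.name: count_tell_words(p.read_text(errors="ignore"))
        for p in Path(folder).glob("*.txt")
    }
```

The point of having multiple LLMs write the same small program is that the outputs are easy to spot check against each other, and against a hand count, for exactly the kind of counting errors I describe below.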

On Claude Code and others, I do want to make a very spicy observation: 

I’m still struggling to reconcile the hype (and I mean hype) about what people say LLMs can do (write entire articles, perform research tasks, do everything for everyone all day, every day) with my personal experience with the LLMs.

Some days, especially a few months ago, I couldn’t run more than a dozen news searches with Claude before I got capped out for the day…and I’m paying for a monthly subscription! It’s gotten better recently, but in late 2025, Anthropic capped my Claude usage pretty regularly. (UPDATE: As I was about to post, my editor/researcher reported that he just got throttled after asking Claude to do seven news searches and reformat around forty URLs. I believe this cap will reset in a few hours, but I don’t know how many tokens/dollars this usage represents.) And the search results are often horrible. Another example: I still can’t get Claude and ChatGPT to count lists and get the same (correct) result.

But almost every day, my inbox is filled with (non-Ed Zitron) articles about how the AI singularity is here.

So I have a few theories:

  • First, some reporters/pundits are just true believers, and they lack the skepticism that they have on other subjects.
  • Second, some influencers are being straight-up paid for the hype. See this CNBC article. I have to ask: why would any LLM company need to hire influencers to tout its products? If their products are as revolutionary as they claim, they should sell themselves.
  • Third, some reporters’ AI usage might be subsidized by Anthropic/OpenAI/Alphabet. This is a slight distinction, but I suspect some reporters are given free access to high-end LLM models, and as such don’t have to worry about the cost/benefit trade-off of the models. Many folks might have a better experience using a free product with no limitations than they would if they had to pay the actual current costs to use it.
  • Fourth, the higher-end models might perform much better than the lower-end models. I pay for one LLM (or should I say, I got swindled into paying for a year’s worth of Claude when I really only wanted to pay monthly, which is something companies do when their finances are great), so perhaps the $200 a month plan really delivers much, much better results. Scott Alexander wondered the same thing just last week.

If this last point is true, well, then predictions about a future where AI/LLMs create massive inequality might already be here. Higher-end models will make some workers/businesses way, way more effective/profitable, leapfrogging them ahead of everyone else, which will then allow them to spend even more money to use even higher-end models, which will make them even more effective/profitable, allowing them to buy even higher-end models, and so on and so on, ad infinitum. 

Of course, that’s a lot of assumptions about LLMs delivering on higher-value tasks. There’s a good deal of evidence that LLMs are plateauing on an S-curve. Indeed, extraordinary claims require extraordinary evidence, and most of the articles hyping up LLMs reference the same METR analysis; if you’d like to read a terrific debunking of that METR analysis, go here. (For many pundits/analysts, if the topic were something like, say, rent control, they wouldn’t accept this level of evidence.) For example, most outlets report METR’s “50% success rate” graph, when the 80% success rate graph is much more meaningful!

The topic most folks ignore, though, is costs. None of the AI companies are profitable. Even if LLMs deliver benefits, do they cost more to use than the value they deliver?

Which gets to the actual issue here: most of the columns I read touting Claude Code don’t discuss costs, either what the authors are paying to use Claude Code (or getting comped) or how many tokens they’re using to code these software programs. I saw one anecdote of a coder using $20K worth of tokens on the monthly plan. Insane! Ed Zitron is collecting examples of people using $20 worth of tokens a day on the $20 a month plan.

Ultimately, the future depends on how much LLMs cost to use and whether those costs go down over time. (Which is not a guarantee.) Almost every pro-AI pundit talks about capabilities and almost never about prices or costs, when that’s what really matters.

To close, I’d ask that all journalists writing about AI answer these questions in their pieces. Together, they’d act as a “full disclosure” to surface any conflicts of interest that could bias the coverage:

  1. Do any LLM companies provide you with a subsidized model? If so, which one?
  2. If so, is that model rate limited or capped on usage? (And have you asked the provider?)
  3. If your usage isn’t comped, approximately how much do you spend on your AI projects?
  4. Have you been paid to promote AI, either directly or indirectly?

RIGHT: Gavin Newsom Betrayed California’s Media/Journalists Again

Just a quick update on an ongoing pet peeve of mine. A couple of years ago, California Assemblymember Buffy Wicks tried to pass a bill to force Google, Facebook and other tech companies to compensate journalists when their platforms siphon traffic from news websites. Gavin Newsom, an aspiring 2028 presidential candidate and a huge friend and supporter of Big Tech who’s received lots of donations from Big Tech, struck a “deal” with Google to kill the bill and start a fund to train journalists in how to use AI. Last year, he cut the funding he initially promised.

This year, he killed it entirely. And Google isn’t paying into the fund anymore. Perhaps this SFist headline captured it best:

Some quick facts:

  • Google’s AI Overview leads to 58% fewer clicks for the top-surfacing search result.
  • Google spent $11 million trying to kill the bill in 2024.
  • Google made $4.7 billion from news sites in 2018. (We need an updated analysis on this.)
  • Google made over $100 billion in Q3 2025 and has a $4 trillion market cap.
  • Google and Facebook threatened to leave Australia when that country passed a similar law. Instead, the law really helped local news in Australia.

Really, I hope Buffy Wicks, the lawmaker who tried to pass this bill last time, tries again. Relatedly, California lawmakers are trying to pass a bill expanding antitrust law in California; I hope it passes, and I can’t wait to see if Newsom vetoes it.

WRONG: Anime Revenue Is Up Globally

So, on the one hand, anime’s revenue is up! Like way up! Like up 15%, powered by overseas sales, to $25 billion globally. Wow! And two anime films opened big in the US. So take that article on anime skepticism!!!

While everyone talked about Demon Slayer: Infinity Castle—I intentionally and somewhat cheekily wrote my article about anime right after the most recent Demon Slayer film came out, expecting it to be big—the bigger surprise to me was Chainsaw Man, which is by no means an all-time popular anime property (I think).

Lest I turn this into a “I was wrong but actually right” subsection, I’ll admit: Chainsaw Man’s American box office performance ($43 million in the US) genuinely surprised me. Now, I still don’t think this genre is going to “save” the US box office, but it did better in 2025 than I thought it would. 

All that said…Crunchyroll is facing layoffs in America! “…laying off a number of employees as part of a restructuring to shift resources toward high-growth markets outside the U.S.” Clearly, despite headlines arguing the opposite, anime is still niche in America. Even globally, it’s still pretty niche. Brandon Katz analyzed Netflix’s global data drop, and found that it makes up 4% of Netflix’s usage.

As I always try to argue, nuance is key. Anime is a growing, important, but still niche media segment outside of Japan, a country that still makes up 44% of anime’s global revenue.

WRONG, But RIGHT: F1 Up in 2025!!! But ESPN Didn’t Make Formula 1 More Popular

Long time readers know that I’ve been hard, often very hard, on Formula 1. Frankly, it’s one of the most overhyped companies in media, especially the oft-repeated assertion that Netflix’s Drive to Survive made Formula 1 popular, a notion I first debunked at The Ankler. And recently, Sports Media Watch released another visual showing just that:

But oh wow, ESPN PR (which loves hyping up bad statistics) is here to tell you, it’s big! The 2025 season set all-time highs, averaging 1.3 million viewers over 24 races, up 142% from 2017, the last Formula 1 season that wasn’t on ESPN.

I’m not surprised.  

Nielsen changed its methodology this season, so most sports viewership/ratings numbers are up compared to previous seasons. That almost entirely explains the rise over the last three seasons. Plus, Formula 1 races still averaged just 1.3 million viewers; that’s tiny. This is a niche sport that airs at a time when most Americans don’t watch sports: early Sunday morning East Coast time.

Finally, these numbers are so big that ESPN let Formula 1 head over to Apple TV, refusing to pay more money to renew the TV rights.

WRONG: The Academy Didn’t Nominate Enough Popular Films for Best Picture

In my article on this year’s Oscars ceremony, I predicted that “I don’t think the Academy is going to nominate four or five blockbusters/popular films.”

Sure enough, they did! Kind of, nominating Sinners, One Battle After Another, Frankenstein, Marty Supreme, and F1. That’s one blockbuster, one popular film, two probably popular films (depending on whether Frankenstein is actually popular, since it’s on streaming, and whether Marty Supreme crosses $100 million in the US), and one contender, One Battle After Another, with meaningful box office.

I’m still worried. As I’ve written in a few Streaming Ratings Report bonus sections, the tea leaves for the Oscars telecast don’t look great, but if the ratings do go up, it will be interesting to analyze whether this ceremony hit a critical mass of popular-enough films.

To close, here are a bunch of fun stats from Walt Hickey and Michael Domanico of Numlock Awards on the membership of the Academy! I love their analysis, especially their work explaining how Academy Awards nominations now cluster around a few films. 

The Entertainment Strategy Guy

Former strategy and business development guy at a major streaming company. But I like writing more than sending email, so I launched this website to share what I know.
