Bob Iger Puts His Stamp on Disney, Plus My “Solution” To AI’s Potential (Massive) Social Harms

Welcome! This is the Entertainment Strategy Guy’s regular strategy column, where I pick the “most important” story of the last few weeks and explain what it means for entertainment. If you were forwarded this email, please subscribe to get these insights in your inbox every other week (mostly).

The entertainment news has finally picked up over the last couple of weeks. Right after the WGA went on strike, we had a couple of weeks where everything was all strike all the time. I mean, it still kind of is, though even now, most of the trades don’t have any strike stories on their front pages. Regardless, that’s the most important story of the week. Before we dive back into that though…

…again, thank you to everyone who has subscribed or sent me kind words about my recent newsletter anniversaries. If you’d like to join the club (what do we call EntStrategyGuy followers? EntStratHeads?), here’s the link.

Also, I realize that some folks don’t know how to reach me. The best email is on my contact page here. I read every email, but fair warning, I get enough emails that I can’t personally reply to each one. But I do read all of your thoughts, feedback and criticism. I just recorded my first podcast as well, and if you have a podcast, send me a note to schedule an interview.

Most Important Story of the Week – It’s Still the WGA…

Are the writers still on strike? They are? Then that’s the most important story of the week. But to be honest, I’m pretty tapped out on all things WGA. My plan is to keep visualizing the WGA/AMPTP negotiating points/demands, but I’m going to try to hit some other (really fun) stories in the interim. But since you probably still want news on this topic, I’d recommend four articles on it:

1. Matt Stoller’s “Time to Break Up Hollywood”. The highest compliment I can pay another writer is that I wish I had written their article. In this case, Stoller wrote the article I wanted to write, explaining some of the structural problems in the entertainment industry. And now that he’s laid out the problems, someday we can fix them. So yeah, I’ll have to write a deep dive on “How I would structure the entertainment industry”, and how I think that would increase the quality of content, decrease prices for customers, and create value. It would be based on the insights in this piece.

2. Richard Rushfield lays out the “six month” scenario for a writers’ strike. This is a compelling argument. Six months is a long time!

3. Two long takes of “doom” at The Ankler by myself and Sean McNulty. Again, McNulty wrote the article I wish I could have written here, really showing how the decline of linear cable revenues hasn’t been offset by the rise of streaming. I just loved this article, and I expect I’ll be linking to it a lot in the future to explain the somewhat dire financial state of the entertainment industry right now. For my part, in case you missed it, I worried about what content cuts foretell about the WGA negotiations.

So what is the most important story of the last few weeks? I’ll tell you, but first I want to give you my opinion on the biggest tech story going…

My Take On – AI, Piracy, Section 230 and Aligning Value Creation With Negative Externalities

For the last few weeks, I’ve been noodling on one idea regarding AI that’s simple, elegant, and will probably never happen:

The best regulation for AI is to make the companies and individuals developing the technology responsible and liable for the harms it causes, even the harms generated by its users.

Essentially, this is the inverse of “Section 230”, the provision of US law that allows social media and other website operators to avoid responsibility for anything posted on their platforms. If you removed that protection for AI, you’d slow the deployment of the technology, but you’d also really help prevent future harms to society. Usually, I focus on strategy issues, but sometimes I like to give my opinion on the politics of an issue, and AI is advancing so rapidly (it’s clearly the buzzword of 2023) that I want to address it, and my concerns.

Indeed, when I imagine worst case scenarios for AI run amok, it really seems like we should demand responsibility for these technologies:

  • Say someone uses your AI system to mimic voices and scam people into giving up their credit card numbers. Why shouldn’t the company that designed the AI be responsible? Especially when the AI itself will be able to make those phone calls at some point in the future?
  • Say a user uses AI to manufacture misinformation on a massive scale. Say the thousands-of-articles-a-day scale, dwarfing anything seen in 2016 or 2020. Why is the AI creator not responsible? (AIs don’t have free speech rights, do they?)
  • Say a user types in, “create a fake image of a terrorist attack on D.C.” and an AI system complies. Well, why should the AI owners evade responsibility? It’s not like the user created the image; their AI system did! (By the way, this happened this week and sent the stock market briefly tumbling.)
  • Say an AI program libels individuals. How are the creators not the ones responsible for that libel? It’s their machine saying it, after all.

Unless you’re part of the VC world that sees its job as making money and society’s job as cleaning up the messes, it just makes sense to regulate this technology. And while the AI community has talked big about supporting “common sense regulation” (see the CEO of OpenAI testifying to Congress this week), when you dig into the details, the AI companies don’t actually want regulation at all!

Thinking About Value Creation and New Tech

Think of it like this: a new technology comes out, and it has some benefit to society. Say it creates $100 billion in value. That’s great! And if it only cost $20 billion to make, society is better off by $80 billion. Unless, of course, it also generates $40 billion in “negative externalities”, harms to society or individuals. Then society is only better off by $40 billion. Still a win, right? Sure. Maybe.

But that’s not the only possible arrangement. Someone has to pay those costs. Now imagine a scenario where companies have to own any negative externalities they create. They still build the AI, but instead of creating $100 billion in value, they create $60 billion in value (the cost of preventing those harms eats the other $40 billion), and it still costs $20 billion to build. In this case, society is still up $40 billion, but you’ve removed $40 billion of harm from the system. That’s a better system! Less harm is good! I shouldn’t even have to write this!

In the first system, the folks profiting don’t have to worry about the costs, and the folks harmed don’t get to profit from the system either. In the second, you remove the harms from the system. Minimizing harm is good!
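The back-of-the-envelope math above can be sketched in a few lines of code. The dollar figures are the hypothetical examples from this column, not real estimates, and the function name is my own:

```python
# Sketch of the column's hypothetical math, in billions of dollars.
# The numbers are illustrative examples, not real-world estimates.

def net_societal_value(gross_value, build_cost, externalities, internalized=False):
    """Net benefit to society from a new technology.

    If externalities are internalized, the builder spends to prevent the
    harm (shrinking gross value), so the harm never lands on society.
    """
    if internalized:
        # Builder absorbs the cost of preventing harm: less gross value,
        # but zero harm left over for everyone else.
        return (gross_value - externalities) - build_cost
    # Builder keeps full gross value; society eats the harm separately.
    return gross_value - build_cost - externalities

# Scenario 1: harms ignored -> $100B value, $20B cost, $40B in harms.
status_quo = net_societal_value(100, 20, 40, internalized=False)  # 40

# Scenario 2: harms internalized -> the same net number, but the $40B
# of harm is prevented rather than suffered by society.
liability = net_societal_value(100, 20, 40, internalized=True)    # 40

print(status_quo, liability)
```

The bottom-line number is identical in both scenarios; the difference is who bears the $40 billion, which is the whole argument.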

Honestly? We Needed This Fifteen Years Ago

Indeed, one could argue that the biggest problem in our society over the last ten or so years, since the rise of smartphones and social media, is that we’ve absolved the biggest tech platforms of responsibility for what happens on their platforms. So they create these externalities while generating billions and billions in free cash flow (Google, Apple, Amazon, Meta, and soon TikTok, Roblox, etc.), but society is left dealing with a host of issues:

  • Instagram has increased teen suicides and depression. And it might be more than just teenagers! (Listen to Ezra Klein last week on this very topic.)
  • Amazon sells lots of fraudulent and counterfeit goods.
  • Meta (and other social media) deliberately stoke outrage/fear/anger to drive clicks. (And sells counterfeit/stolen goods on its marketplace.)
  • YouTube hosts tons of copyrighted material, and then can funnel users to really bad or inappropriate places. TikTok was built off that too.
  • Roblox is lousy with copyright infringement and maybe worse?
  • Google willingly sends users to websites that host or facilitate piracy of copyrighted materials.
  • Reddit regularly hosts streams of pirated sporting events. (Indeed, Julia Alexander tweeting about this finally prompted me to write this section.)

(Side note on talent: those last two issues are HUGE, in my opinion, for the current WGA/talent negotiations. Piracy arguably takes $30-150 billion (depending on who you ask) out of the entertainment ecosystem globally. Imagine that money going to studios instead: how many more films and TV shows would get made, and who knows how much more pay would go to writers.)

The Key Question: Do Companies Have “Responsibility” Or Not?

The analogy I can imagine folks making is to gun manufacturers. They’d say, “Hey, are gun companies liable for whatever their users do with the guns?” No. (Though by even making that analogy, I can imagine a lot of my readers nodding and saying, “But they should be…”)

But I don’t think that’s the right analogy. I like tobacco or Big Pharma (with opioid sales) better. Those companies knew they were generating huge harms but allowed them, partially, because they didn’t think they’d be held liable. The same goes for social media, in my mind.

With AI, I’d go one step further: this is actually like nuclear reactors. Imagine a company that made nuclear reactors for individual households. They’d say, “Hey, we just sold them the reactor; it’s not our fault it melted down. Blame those individuals for not maintaining their reactors!” That’s why we don’t sell nuclear reactors to individuals! The risk is too great.

AI could genuinely be that powerful.

So yes, AI needs to be heavily regulated, and the companies held responsible for ALL its uses. If that slows down the deployment of AI, I say good. The tech shouldn’t go out into the world until its negative externalities are minimized as much as possible. The only way to ensure that is if the executives in charge truly know they’ll own the damage they could cause to society.

Indeed, I’m fairly optimistic about the prospects of self-driving cars. But since those are actual physical cars that can cause physical damage, the companies (besides one) are being very careful with the technology. And people, governments, and regulators are really worried about this new technology. That’s good! AI can cause real-world harm as well, and we should proceed just as cautiously.

To repeat and clarify what I said in the introduction: this won’t happen, of course. It should. It would be good policy. But the way the sausage gets made in Washington, D.C. will probably prevent a commonsense solution like this from passing into law.

Most Important Story of the Week – Disney CEO Bob Iger Puts His Stamp on Disney

To read the rest of this article, including my take on four Disney news stories, updates on M&A, broadcast shows cutting their budgets, and Prime Video’s syndication unit, please subscribe. I can only keep doing this work with your support.

The Entertainment Strategy Guy

Former strategy and business development guy at a major streaming company. But I like writing more than sending email, so I launched this website to share what I know.

