(For the last few weeks, I’ve been running a series of articles answering a question posed to me by The Ankler’s Richard Rushfield: Will The Irishman Make Any Money? It’s a great question because it gets at so many of the challenges of the business of streaming video. Read the rest here, here, here and here.)
On Twitter, TV journalist and friend of the website Rick Ellis of All Your Screens pointed out that my model is likely not to be accurate:
This is interesting, I'm just not convinced it's especially accurate. That's not a criticism of you, I suspect you're a lot closer than most estimates. I just think there are too many unknown variables at play. But I could wrong, I'm certainly no expert.
— AllYourScreens Rick Ellis (@aysrick) November 23, 2019
You know what, he’s right!
I don’t have leaked Netflix internal accounting and data. This means that I have to make a lot of assumptions to build my models. The more assumptions in an estimate, the more sources of error. While estimates are naturally more accurate than forecasts, they’re still estimates, meaning they are not an exact science.
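To see why each added assumption widens the error bars, here’s a minimal simulation sketch. All the numbers are hypothetical (a made-up baseline of 100, and an assumed ±20% error band on each assumption), not drawn from any real Netflix model:

```python
import random

random.seed(42)

def simulate(n_assumptions, trials=100_000):
    """Spread of outcomes when each assumption can be off by up to +/-20%."""
    results = []
    for _ in range(trials):
        value = 100.0  # hypothetical "true" figure (say, viewers in millions)
        for _ in range(n_assumptions):
            value *= random.uniform(0.8, 1.2)  # each assumption injects error
        results.append(value)
    results.sort()
    # Report the 5th-95th percentile band of outcomes
    low = results[int(trials * 0.05)]
    high = results[int(trials * 0.95)]
    return low, high

for n in (1, 3, 6):
    low, high = simulate(n)
    print(f"{n} assumption(s): 90% of outcomes fall between {low:.0f} and {high:.0f}")
```

Run it and the band widens as assumptions stack up: one assumption keeps you within roughly ±20% of the baseline, while six assumptions spread the outcomes much further in both directions. That’s the core problem with estimating The Irishman’s economics from the outside.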
Before I publish my results for The Irishman (my next article, for those waiting), it’s worth probing what type of analysis this “Great Irishman Project” is. As I’ve been thinking about it, I have three analogies to explain what I’m trying to do here, ranging from weak to strong.
Analogy 1: The Scientific Method
This is the gold standard. Arguably, the very principle that has driven human progress for the last 500 years.
And it doesn’t apply here.
I know, I know. It would be great if it did. But to apply the scientific method means starting with a hypothesis, then gathering data to prove or disprove it. That’s not my approach at all. I’m building models based on my experience and data. The models are “scientific” in that sense, but I can’t “test” them on Netflix’s releases.
The streamers specifically have reams and reams of data. They can set theories and test whether they are correct on marketing, user experience and sometimes content. (However, they often overhype their data and stretch the limits of statistical accuracy.) Unfortunately for yours truly, Netflix and Disney and other companies will never tell me if I am right or even super close with my models, so I can’t ever test my hypotheses.
Since we’re talking scientific method, it’s also worth pointing out that lots of data analysis at technology, consulting and really all companies is “faux scientific method” too. At their best, companies can set out hypotheses, and run tests to see what works best for their customers. At their worst, companies just collect a ton of data and use the data that supports their preexisting beliefs. That’s not the scientific method. (I can’t count the number of times I’ve seen a boss parsing a quad chart for connections that are noise, not signal.)
Analogy 2: Sports
One of my inspirations for this website was the boom in top-notch sports writing. Specifically, the top-tier analysts. Folks like Zach Lowe or Kevin Arnovitz on basketball, Bill Barnwell on football, Pete Zayas for the Lakers, or Chris Harris on fantasy football. These folks don’t just report the news, but analyze the what and how of sports. They provide deeper insights than just the box score, whether they use scouting, film breakdown or analytics.
I’ve tried to fill the same need in the entertainment business. This industry is filled with a phenomenal number of great reporters, but not the rigorous analysis I needed when I worked in entertainment. (There are also a lot of great investor-side analysts, but they usually charge an arm and a leg for their work. Their focus is also less on strategy and more on stock price. Which is great for their clients, but not quite what entertainment professionals need.)
There are two huge differences between sports analysis and entertainment strategy analysis, though. First, I don’t have nearly as much data. For sports, even casual observers can now get advanced analytics, from Basketball Reference to NBA.com to Cleaning the Glass. Second, we don’t get the “results” beyond quarterly earnings reports, and even those can be woefully incomplete for streaming. Imagine how much tougher Zach Lowe’s life would be if he didn’t know the scores to games!
This analogy gets a lot closer to what I’m trying to do. Instead of breaking down how a play is run, I’m breaking down how film finances work. Then, I’m applying the data we do have to that model. Then I’m pulling insights for what that can tell us about entertainment strategy. If I had more data, I could pull even more insights.
To find the perfect analogy, we need a field where making predictions is key, but data is sparse. Well…
Analogy 3: Military Intelligence
In war you have to make predictions about what your enemy is thinking, planning and eventually doing. But the amount of information is usually extremely sparse. Even if you have tons of information, you are terrified that it’s all planted by the enemy in misdirection campaigns. Inevitably, an intelligence officer has to make assumption after assumption to project an enemy’s course of action.
As I laid out in my recent series, I used to be an intelligence officer in the Army, making these types of predictions. I’m used to uncertainty when forecasting enemy actions.
That’s not a bad analogy either for companies in competitive fields. While every company professes to not care what any other company is doing, they’re lying. (All of Hollywood is obsessed with Netflix, even if they say they aren’t. And yes, Netflix is obsessed with Disney+ right now, even if they deny it.) Understanding whether your competitor is making hits or duds, burning cash or wallowing in it, and how they’re doing it, is a key piece of information for making decisions.
Most companies have blind spots into every other company the same way my models have blind spots into Netflix or Disney’s specific performance.
Just because our models have inaccuracies doesn’t mean we shouldn’t make them. And it doesn’t mean they aren’t useful. If you waited in war until you had perfect information, you’d be plagued by indecision. That’s really what I’m doing here: I’m providing my competitive analysis of streaming video, starting with Netflix, at the same level of accuracy I’d aim for if I were doing this for insurgents in Afghanistan or working in a strategic planning group at a streaming company. Even though I could be wrong, I don’t have a choice: we need these assumptions to build estimates at all, and to improve them over time.
And yes, as a bonus, these models will eventually inform my larger series on an “Intelligence Preparation of the Battlefield”. Essentially, the models will tell us a lot about who can and can’t make money in streaming video, and how that can impact all of Hollywood. But that’s for a future article.