(Except on rare occasions, we only like to send out two newsletters a week to keep from spamming your inbox. But this is a rare occasion, since the EntStrategyGuy is putting up a paywall on Friday, 17-June-2022. Subscribe here!
This article lays out my data philosophy, so we felt that it was important to send out to everyone. I’ve also written a series of articles debunking myths about streaming ratings, explaining my philosophy on data, describing what streaming data we do have, and more. If you’d like to read the rest of those articles and a short case for why, click here.
This is our last week with no paywall. To get the Streaming Ratings Report–which will almost always be behind a paywall–and all of our other writing, please subscribe. Until Monday, you have the option to subscribe for $120 per year, locked in forever at our “Founder’s price”.)
Take a gander at this headline in Variety from last year and see if you can spot the problem:
What’s wrong with it? Well, Thunder Force was good but “thunderous” implies something a bit…better? While Thunder Force “won” its week with 15.8 million hours, compared to Netflix films like Extraction (18.5 million hours) or Enola Holmes (19.4 million hours), it was just fine. Compared to Coming 2 America (23.6 million hours), again, it looks fine. But it really pales compared to Hamilton and Red Notice, which earned over 30 million hours for truly “thunderous” openings.
Here’s another headline from Deadline:
Where to start? First, as far as I can tell, Deadline doesn’t cover every release this week, but just highlights the buzziest titles. If you’re an executive, you can’t rely on only the buzziest titles; you need as much data as possible! Second, Deadline uses Hulu’s leaks to call this Hulu’s biggest title, but Hulu didn’t provide any actual numbers to back up their claim. Third, to their credit, Deadline did use Samba TV’s data to measure impact, but they only compare Samba TV’s numbers to a previous season of The Handmaid’s Tale!
Since I’m asking you for money, there is no better time to roll out the old-fashioned infomercial pitch turned Kevin O’Leary tagline, “There has to be a better way!”
If you read nothing else about why you should subscribe to my Streaming Ratings Report, you should probably read this article. (An article I should have written a year ago when I first started my streaming ratings report.) I think that laying out my approach to data, in one place, explains why my approach is both unique in the streaming ratings game and, frankly, worth paying for.
So let’s dig into that philosophy, since it is really how I deliver value.
Why I Have a Different Approach to Streaming Ratings
When it comes to streaming ratings, you can get them in one of three ways:
1. The streamers themselves provide press releases with “datecdotes”.
2. Streaming analytics companies measure and publish their own data, either publicly or privately.
3. Reporters at the trades (and biz press) use either of the two above to write articles.
Each of these groups has its own set of biases to contend with:
– The streamers try to deliver the datecdotes or data that puts their shows in the best light.
– The streaming analytics companies try to release interesting tidbits to go viral.
– And the reporters at the trades usually repeat the data, often to write an article as quickly as possible given the insane output demands by most media companies.
Realistically, I’m competing with that third group of people. But I come at it in a fundamentally different way.
First, I start not with the story in mind, but with the data.
It takes me the better part of three days to collect, analyze, visualize and explain that data. That time allows me to dive deep, finding all the stories and angles, generating my own insights, and putting everything in context. I don’t come in with a narrative—or I try not to—but I see what the data says.
And I analyze multiple data sets simultaneously. Nielsen data is very useful for understanding how much something was watched. Meanwhile, IMDb data tells us if folks liked what they were watching. Multiple data sources paint a fuller picture. Again, most articles in the trades don’t have time to compare multiple data sources. At most, they can focus on one set of data at a time.
Further, I try to put everything in context, having built up databases of shows and films by streamer and by data source. Over time, I’ve collected every publicly available Nielsen data point to date. I’ve found nearly every Netflix datecdote they’ve ever released. Same goes for Samba TV. I’ve built a database (with help from a research assistant) of streaming films and TV shows, then paired them with all their IMDb data, which I’ve updated over time. Since September, we’ve included WhipMedia’s TV Time top ten lists too. I’m also building out a Google Trends data set. And we’ll be adding another data source in July, if not two or three.
This last point, on databases, may be the most important, since it allows me to put things in context. Sure, a lot of people can tell you that 30.7 million people watched Reacher on debut. I can tell you that’s the biggest season one debut weekend of all time.
Lastly, I write a regular Streaming Ratings Report that isn’t driven by specific headlines. I’m not trying to have articles “go viral”, because I’m not focused on clicks or advertising, but my relationship with my readers. I want you to trust what I’m writing, so I write a sober analysis each week, not (necessarily) a flashy headline.
All of this also takes time, another reason why advertising just won’t work for my website. And why I need you to subscribe.
Data Philosophy
That’s why I have a different approach, but I’m also guided by a few principles that shape how I use data and draw my conclusions. Each of these principles could be its own article (and probably will be in the future):
Use multiple data sources to build a better understanding.
If I were using intelligence jargon, I’d call this “multi-source intelligence”. If I were mimicking Nate Silver, I’d call it my “poll of polls” approach to streaming ratings. Multiple pieces of evidence help reveal how well things actually performed and provide nuance to my analysis.
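To make the “poll of polls” idea concrete, here is a minimal sketch, not my actual methodology: it normalizes each source’s number against that source’s own historical high-water mark before averaging. The source names, historical maximums, and scores are all invented for the illustration.

```python
# Illustrative only: a toy "poll of polls" for streaming ratings.
# Source names, historical maximums, and scores are hypothetical.

def poll_of_polls(scores: dict) -> float:
    """Average each source's score after normalizing it to a 0-1 scale
    against that source's own (hypothetical) historical maximum."""
    historical_max = {
        "nielsen_hours_m": 40.0,      # millions of hours watched
        "samba_households_m": 5.0,    # millions of households
        "imdb_rating": 10.0,          # 0-10 user rating
    }
    normalized = [scores[src] / historical_max[src] for src in scores]
    return sum(normalized) / len(normalized)

# A hypothetical title measured three different ways:
title_scores = {"nielsen_hours_m": 18.5, "samba_households_m": 2.1, "imdb_rating": 6.8}
print(round(poll_of_polls(title_scores), 3))
```

The point isn’t the arithmetic; it’s that no single source gets to set the narrative on its own.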
Always compare things “apples-to-apples”.
I will definitely write an article on this concept, because it’s probably the most violated rule in data journalism. By making a bad comparison (not apples-to-apples), articles can wildly overhype trends.
For example, comparing global YouTube video views to U.S.-only linear TV ratings. Which leads to crazy statements like, “More people watched this YouTube Video than the Super Bowl.” But they didn’t, otherwise that stupid YouTube video would have sold tens of million in advertising, right?
Everyone should try to control for as many variables as possible, which I try to do in every issue.
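Here’s a minimal sketch of the YouTube-vs-Super-Bowl fallacy above; every figure is a hypothetical round number, used only to show why holding geography constant changes the answer:

```python
# All numbers below are hypothetical, chosen only to illustrate the comparison.
youtube_views = {"global": 120_000_000, "us_only": 25_000_000}
super_bowl_us_viewers = 96_000_000

# Not apples-to-apples: global views vs. a U.S.-only audience.
misleading = youtube_views["global"] > super_bowl_us_viewers   # True, but meaningless

# Apples-to-apples: hold geography (and ideally the metric) constant.
fair = youtube_views["us_only"] > super_bowl_us_viewers        # False

print(misleading, fair)
```

Note the comparison is still imperfect even then: a “view” (possibly partial, possibly repeated) is not the same unit as a Nielsen “viewer”.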
“Actionable” insights are better than random data points.
If you can’t make decisions based on the analysis, then why do the data analysis? I call this the “No blue uniforms” rule. During the 2018 March Madness tournament, Google touted that its A.I. discovered that teams with blue uniforms did better in the tournament. This statistic is meaningless and random. My goal isn’t to find connections for connection’s sake, but insights that folks can leverage. And if I can’t find a clear conclusion, I’ll let you know that as well, since knowing the “null hypothesis” can also be useful!
Magnitude is better than direction.
Lots of folks provide rankings for their data. These are useful and I use them, but I prefer actual numbers, which show the differences in magnitude/scale (or velocity) as opposed to simply the rankings (or direction).
Think of it like this: both Sonic the Hedgehog and Avengers: Endgame were the number one films at the box office for their opening weekends. One made nearly six times more in box office. So those “number ones” are obviously not equal. That difference is the value in magnitude over direction.
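As a quick sketch of that distinction, using approximate domestic opening-weekend grosses (treat the exact figures as illustrative):

```python
# Approximate domestic opening-weekend grosses; treat as illustrative.
openings = {
    "Avengers: Endgame": 357_000_000,
    "Sonic the Hedgehog": 58_000_000,
}

# "Direction": by rank, both films were #1 in their respective weekends.
ranked = sorted(openings, key=openings.get, reverse=True)

# "Magnitude": the actual numbers show how unequal those two #1s were.
ratio = openings["Avengers: Endgame"] / openings["Sonic the Hedgehog"]
print(f"Both were #1, but Endgame's opening was ~{ratio:.0f}x bigger")
```

A rankings-only chart would render both openings identically; the raw numbers don’t.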
Viewership is king.
Not only are there a lot of streaming analytics companies collecting data on streaming films and TV shows; they’re also measuring different things, like viewership hours, households, interest, and more.
How do I rank all of those different types of measurements to figure out what’s a hit?
– First party viewership
– Second party viewership estimates
– Customer ratings (as in reviews on IMDb or Rotten Tomatoes)
– Interest
…
…
…
[repeat a hundred times]
…
– The “social conversation”
While some folks want to believe there is some value in “loving” a show, at the end of the day, you need people to actually watch content. That’s why viewership is the best measurement we can get.
Some folks might wonder why “social” is last. That’s because it’s the category that is the least representative of society. Some shows are buzzworthy and others aren’t, but those not-buzzy shows still get tons of viewership. So I downplay social metrics in my analysis. (As Joe Biden told his staff, “Twitter isn’t real life.” I agree.)
Always provide a clear “Data 5Ws”.
You know the 5Ws from journalism, right? Who, what, when, where and why? Well, often in bad data analysis you can’t answer all five. Because if you could, it would make the viral headline wrong. Whenever possible, I try to provide these 5Ws so you know where my data comes from.
Look for “Dogs Not Barking”.
It’s easy to see the hits in streaming now. (Squid Game. Manifest. Reacher.) But the streamers release dozens of films and TV shows each week. And those misses go unnoticed.
One of my goals is to look for and highlight those films and shows that fail (Pachinko. Maniac. High School Musical: The Musical.) so that we can have a better understanding of what works and what doesn’t in streaming.
The Tenets of the Streaming Ratings Report
Taking all those pieces together, and trying to solve for all the problems in traditional reporting, I’ve honed my (vastly superior, in my opinion) approach and distilled those improvements into a few key tenets that will feature in every Streaming Ratings Report. Together, they make this the single most informative ratings report on the market:
– Consistent: My biggest complaint following the trade coverage is that which ratings they cover, and when, is completely variable. Usually it’s driven by an analytics company offering a sneak peek for PR reasons. With my report, you can count on it arriving every week.
– Comprehensive: It will also cover as much content as possible. Meaning it isn’t all superheroes, blockbusters and films/TV series that drive clicks. If you work in entertainment, you need to know how everything performs.
– Historical Context: Data points by themselves don’t mean anything; context is what gives them meaning. It isn’t enough to know that a film “won the weekend”; was it the best of this month? This week? Of all time? How does it compare to similar types of content? The only way to know is to leverage a comprehensive database. That’s what I’ll offer.
– Content Context: First, we don’t cover just one streamer; we try to cover every streamer. Right now, Netflix is the most visible, but they are far from the only story in the streaming wars. Second, our content database collects tons of information about every piece of content, enabling the best analysis.
– Multi-source: Right now, we collect data from six regular sources and two variable sources. Our plan is to add any other streaming analytics companies that match our criteria. This means our analysis isn’t driven by any one agenda, but a “poll of polls” that provides the most accurate look.
I’d argue that few other outlets—be they streamers or trades or reporters or the analytics firms themselves—are as upfront about their approach to data as I’ve been in this article.
So, again, please subscribe. (This week’s Streaming Ratings Report is the last one that will be available for free.)