Often in my weekly column (the “most important story of the week”, click here!), I’ve called out the narratives behind one-off events. Take the Super Bowl. Were ratings down because we’re sick of the Patriots? The death of broadcast? Or because football isn’t popular anymore?
Last year, a lot of people asked, “Why did Solo fail?” and identified four or five totally plausible reasons, any of which could have mattered, or not mattered at all. We just don’t know! (I called out the Lego Movie Part II narrative too.)
If you take nothing else from my website, understand that we need to do better than narratives when it comes to the business of entertainment. (With the implication that execs/companies that rely on data instead of narratives will outperform the others.) One-time ratings or box office weekends are noise that we try to force into signal narratives. (Yes, I’m a big Nate Silver fan.)
That said, as a fan of film, the Oscars hold a special place for me. I still remember the first film I rooted for at the Oscars, and how devastated I felt when it didn’t win. (Crouching Tiger, Hidden Dragon, if you’re curious.) And I saw most of the Oscar films each year until I had my first child, even if the films I love the most (big, popular and genre) don’t tend to get nominated.
“Popularity” was the meta-narrative of the Oscars in 2019, after the Academy announced its intention to start a popular film category. (Well, and diversity.) I first looked at this last August, but now we have the ratings for Sunday’s telecast. Since this year saw a big jump in box office, without the new category, we can answer the question:
“With a generally more popular set of films, did that boost Oscar telecast ratings?”
The quick answer is that ratings are up (roughly) 12% over last year’s telecast. But what does that mean? Can this one new data point impart the lesson to the Academy that more popular films lead to higher ratings? Not by itself. We need to analyze the larger trends.
Today, my goal is to answer that question, but I’ll be honest: I can’t. The sample size is too small to draw clear conclusions. Instead, my goal today is just to lay out what data we do have and the limits of what that data can explain.
Oh, and to correct the record. I screwed up some data analysis in August, so I plan to explain what went wrong. (With a really fun learning point.)
How to Craft Narratives
First, let me show how easy it is to craft a narrative. Consider these two narratives:
Narrative 1: Popular films boost Oscars.
Obviously, the more popular films that get nominated, the higher the ratings. Is it really any surprise that the highest ratings of the last 10 years came in 2010 (for the 2009 films) when, uh, Avatar and Up were nominated? Meanwhile, the last two years had mostly sub-$50 million films, so ratings sank to their lowest since 2007’s films, which were so unpopular the Academy changed the rules entirely. With the highest box office total since 2010, it’s not a surprise ratings went up 12% this year. Not to mention, Titanic had the highest ratings of all time!
Narrative 2: Popular films don’t really impact Oscar ratings.
Actually, it really doesn’t matter. The 2011 ratings were tremendous (10 million more viewers than this year), and the most popular film was The Help at $169 million. Or take 2005. The biggest nominee that year was Brokeback Mountain (that’s a fun trivia question to stump your friends) at $83 million. And 38 million people tuned in. Sure, popular films may matter, but even a juggernaut like American Sniper didn’t boost the ratings, which declined from the year prior. (It did way more box office than Bohemian Rhapsody or A Star is Born.) So yeah, if you care so much about Titanic and Avatar, maybe you just need an awards show devoted to James Cameron movies, and leave this awards show alone.
Why is it so easy to craft narratives? A small data set
Narratives don’t help. Instead, we need data. But data alone can’t solve our problems, and I’ll explain why.
Explanation 1 – Small Sample Sizes
In the realm of small sample size, everything can be true. Simultaneously.
I wove those paragraphs above by looking at my Oscar film table, picking high and low years, and cherry-picking the data. With annual data sets, you only get one piece of data each year: the Oscars telecast.
Further, this data set is limited by history. I can’t justify including years before 1998, since that was a time period when broadcast shows like Seinfeld got ratings in the 20s. Since then—and even before—cable has been taking viewers. (Hot take: the biggest driver in the decline in broadcast ratings over the last 25 years has been cable television, not streaming.)
Then, in the middle of that data set, the Oscars expanded from 5 films to 10, then somewhere between 8 and 10 since. That means even my 20ish sample size is arguably only 10. And yeah, cord cutting started in the middle of that latter ten years. So a five-year sample set? That’s small.
Explanation 2 – What Are We Measuring For?
This seems easy—popularity!—but it’s deceptive. Do you take viewers in millions, which is growing, or ratings, which are declining? Or growth per year? Or rolling averages?
Or take the biggest “input”: popularity. Obviously, box office is the best measure of popularity, because paying to see something is the truest expression of intent. But how do we measure those 5 to 10 films?
This year, a lot of people just added up all the box office numbers to get the total box office. But clearly years with 10 films have an advantage over years that only have 8 (or 7, if one was on a streaming platform). So you could use the average to account for that, but then one huge outlier (Avatar in 2009 or Black Panther in 2018) would throw that off. Or maybe not, since I’ve always said this is an industry dominated by logarithmic returns, and the outlier could draw in more viewers.
Still, if you did want to account for the number of films appealing to the most people, you could factor in box office ranks or median box office or the number of popular or blockbuster films. All of which I did in this table (which has been updated to the last weekend of box office):
The point is, I came up with 16 different ways to even ask, “is this set of films popular?” That’s partly why conflicting narratives can arise.
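To make the menu of options concrete, here’s a minimal sketch in Python of a handful of those measures applied to one hypothetical slate. The box office figures and the $100 million/$300 million thresholds are illustrative assumptions, not my exact definitions:

```python
import numpy as np

# Hypothetical nominee domestic box office for one year, in millions.
slate = np.array([700, 215, 210, 175, 95, 70, 45, 20])

measures = {
    "total": slate.sum(),                   # favors years with more nominees
    "mean": slate.mean(),                   # skewed by one huge outlier
    "median": np.median(slate),             # robust to that outlier
    "popular films": (slate >= 100).sum(),  # count over a $100M threshold
    "blockbusters": (slate >= 300).sum(),   # count over a $300M threshold
}
for name, value in measures.items():
    print(f"{name}: {value:g}")
```

Same eight films, five different “popularity” numbers, and you can defend almost any of them.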
Explanation 3 – So Many Variables
Finally, the last difficulty is that beyond popularity, quite a few variables can and do impact the ultimate TV ratings for the Oscar telecast. Off the top of my head:
– Presence of big stars in feature films
– Decline of broadcast TV ratings, in general
– Decline of broadcast TV ratings, because of cord cutting specifically
– Popularity of the host
– Quality of broadcast the previous year
– Politicization of the Oscars (cuts both ways)
– Popular films actually “contending” for Best Picture, not just nominated
And likely more. So we have a small sample set, with many ways to measure our variables, and a lot of potential explanations, most of which we can’t test.
Trying to Answer the Question
C’mon Entertainment Strategy Guy. Do or do not, there is no try. So here’s my try.
Step 1: What is the null hypothesis? What is our hypothesis?
Our “null hypothesis” has to be that popular films don’t affect the ratings. But since we have a strong intuition underlying our hypothesis, we can’t set a low threshold to prove it wrong. I do have a stronger hypothesis, though: that “median” box office measures will tend to forecast TV ratings better than averages, since one huge outlier can throw an average off.
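To show why I lean on medians, here’s a quick sketch in Python, with made-up box office figures (not actual nominee numbers): one Avatar- or Black Panther-sized outlier drags the mean way up while the median barely moves.

```python
import numpy as np

# Illustrative slates of nominee domestic box office, in millions of
# dollars. These are made-up numbers, not actual figures.
typical_year = np.array([60, 55, 50, 45, 40, 35, 30, 25])
outlier_year = np.array([700, 55, 50, 45, 40, 35, 30, 25])  # one mega-hit

for label, slate in [("typical year", typical_year), ("outlier year", outlier_year)]:
    print(f"{label}: mean = {slate.mean():.1f}, median = {np.median(slate):.1f}")

# typical year: mean = 42.5, median = 42.5
# outlier year: mean = 122.5, median = 42.5
```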
Step 2: Update our data
My initial foray into this was a bit off, mainly because several of the films got a “nomination” bump from the Oscars and were still in theaters. So, to be upfront, the films from this year moved up across the popularity measures. Here’s a table showing the gains:
The key takeaway is that our initial use of the data was mostly fine. A few films moved up, but this set of films didn’t become the “most popular year since 2009” in any of the categories. Still, next year I’ll probably wait until after the telecast, or the week of, to draw popularity conclusions. (I’d call this a minor mistake.)
Step 3: Get the data right
(This is my correcting the record from last August.)
Last August, I tried to answer the question: do popular films correlate with a popular Oscar telecast? I answered “maybe” and tried to show why in a couple of tables. But I made a really simple, really stupid mistake: I put the ratings next to the wrong years.
This is an ongoing challenge when discussing Oscar films. The films come out in 2018, for example, but the show airs in 2019. And I had the dates misaligned. I tried to catch this during my regular Excel proofreading, but alas, I missed it. So now I’m making sure year, air date and ratings are lined up:
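If you keep this kind of data in code instead of Excel, the fix is to build the join key explicitly. A minimal sketch in pandas, where the column names and box office figures are hypothetical stand-ins for my actual table:

```python
import pandas as pd

# Film-year data (what the nominees earned) and telecast-year data (who
# watched the show). The median box office numbers are placeholders.
films = pd.DataFrame({"film_year": [2017, 2018], "median_box_office": [52, 108]})
telecasts = pd.DataFrame({"air_year": [2018, 2019], "viewers_millions": [26.5, 29.6]})

# The telecast honoring film year N airs in year N + 1, so derive the
# join key instead of eyeballing adjacent rows in a spreadsheet.
films["air_year"] = films["film_year"] + 1
aligned = films.merge(telecasts, on="air_year")
print(aligned)
```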
That initial article also shows how easy it is to weave a narrative. I took the data and said, “Hey, the ratings did really well, so obviously American Sniper brought in new viewers.” American Sniper was the heavy Fox News favorite that year. But I was wrong, and it turns out the triple whammy of Gravity, American Hustle and Wolf of Wall Street brought home the big ratings. (Though, as I’ll mention later, they also coasted off several years of popular films being nominated. As we’ll see, American Sniper was the start of the slide in Oscar ratings.)
Step 4: Pick my method
The easiest way, which I did before, is to just put up two graphs and see if they look like they match. This isn’t scientific, but I usually try to visualize data as a first step. Here are the two averages plotted against the “millions of viewers” for the Oscar telecast.
If that is confusing (trying to eyeball two sets of numbers for a relationship), we can always turn to a scatter plot. This makes it a bit easier to see where our correlations may come from:
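For anyone who wants to recreate this at home, here’s a bare-bones version of the scatter plot in Python; the data points are invented for illustration, not my actual numbers:

```python
import matplotlib.pyplot as plt

# Invented (x, y) pairs: median nominee box office in $M vs. telecast
# viewers in millions. Swap in real figures to recreate the chart.
median_box_office = [38, 45, 52, 55, 60, 70]
viewers_millions = [27, 32, 34, 33, 37, 40]

plt.scatter(median_box_office, viewers_millions)
plt.xlabel("Median nominee box office ($M)")
plt.ylabel("Oscar telecast viewers (millions)")
plt.title("Eyeball test: popularity vs. viewership")
plt.show()
```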
That’s better, but we need some more data-based methods. The best approach would be to run a regression analysis on the data set, since that’s the best way to determine both the impact and our confidence in the results. But like I said, we have so many variables and such a small sample size that the p-values just won’t matter.
Instead, we’ll “settle” for correlations. A regression is really just a set of correlations, with confidence intervals. Our eyeball tests are really just looking for visual correlations.
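Mechanically, computing that set of correlations is a few lines of pandas. A sketch, where the file name and column names are hypothetical stand-ins for my spreadsheet:

```python
import pandas as pd

# One row per telecast year: popularity measures plus the two ratings
# columns. "oscars.csv" is a hypothetical export of my table.
df = pd.read_csv("oscars.csv")

measures = ["total_box_office", "mean_box_office", "median_box_office",
            "popular_film_count", "blockbuster_count", "median_rank"]
targets = ["viewers_millions", "tv_rating"]

# Pearson correlation of every popularity measure against both ratings
# columns. (Ranks run 1 = most popular, so expect negative signs there.)
corr_table = df[measures + targets].corr().loc[measures, targets]
print(corr_table.round(2))
```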
Step 5: Test for correlations
About 2,000 words in, this is basically the answer to the question. Here is a correlation table with both millions of viewers and TV ratings:
And now with adjusted box office:
Phew. So let’s celebrate: my initial hypothesis is actually correct. It turns out that “median” box office is more correlated with the Oscar telecast ratings than average box office. In a weird turn, total box office is actually slightly negatively correlated, which doesn’t make sense except in a world of small sample sizes. (Ratings were higher before the Academy expanded the category, then the films got unpopular again, so recent years have bigger box office totals but smaller ratings.)
Also, the “box office ranks” turned out to be negatively correlated, so I just reversed those signs to make the color coding work. But the rankings actually tell the story pretty well. Another fun story is that “popular” films tend to matter more than “blockbusters”, which isn’t intuitive. Having a lot of $100 million domestic box office films that a lot of different people saw—this may be the difference from blockbusters—tends to yield better results. This likely explains the strength of the median measures too, which is why median rank and median box office did better than their mean cousins.
But I don’t want to go TOO far, either. While all the measures have some degree of correlation, as expected, some are pretty weak. That partly explains the 2019 telecast: it was a huge year in terms of average box office, and okay in popular films, but the ratings only went up 12%. In other words, I’m worried I’m cherry-picking correlations here, when most of the popular film correlations are weak.
Other Explanations Besides Popularity
Now, if I were me, I’d have some criticisms. More precisely, other explanations for the changes in ratings:
Trends?
Maybe Oscar viewing is correlated with the previous year’s viewership. Say you tune in and see a lot of films you liked; next year, you’re more likely to tune back in. Conversely, say the show was boring, super political, and you didn’t recognize any of the films. Maybe you don’t watch the next year. And the year after that, and so on. In other words, there could be a momentum effect.
It would seem that way just looking at the growth of the curves. I tried a few different ways to calculate this, but didn’t love many of the results.
I would say, though, that the growth and decline years seemed to be lumped together:
That’s still a lot of squinting to pull out the connections, though.
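One less squint-dependent way to test for momentum is a lag-1 autocorrelation: does this year’s audience predict next year’s? A quick sketch, using approximate viewer counts for recent telecasts (double-check the exact figures before relying on them):

```python
import pandas as pd

# Approximate viewers (millions) for the 2012-2019 telecasts, oldest first.
viewers = pd.Series([39.3, 40.3, 43.7, 37.3, 34.4, 32.9, 26.5, 29.6])

# Lag-1 autocorrelation: correlation of each year's audience with the
# prior year's. Values near +1 would suggest a momentum effect.
momentum = viewers.autocorr(lag=1)
print(f"lag-1 autocorrelation: {momentum:.2f}")
```

Of course, with this few data points, even that number is more suggestive than conclusive.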
Cord cutting
In other words, as people cut the cord, the audience will naturally shrink. This is definitely a possible explanation, but when you look at the ratings of ten years ago, it doesn’t explain a 50% overall decline. (Say, 38 million to 20 million.) Cord cutting—despite the hype—is still in the single digits of percentages, and a lot of that is offset by vMVPDs or OTT services that would still carry ABC, and hence the Oscars. (See TV Rev for a table with that data.)
Instead, I’d point to…
Death of broadcast
No doubt, a lot of people just don’t watch broadcast television. It’s not that they don’t have TV—they aren’t cord cutters—they just don’t watch broadcast. Of course, other live, annual events haven’t necessarily had this problem. Super Bowl ratings were up for years as the Oscars slumped, and only recently trended down. The Golden Globes have hit new records, so it isn’t like the death of broadcast impacted them.
Again, if I were the Oscars, this would be my hope: nominate some popular films next year, and the ratings could jump 10 or 20% again.
Conclusion
This isn’t one of those decisions that data can make for us. I could sit here and try to parse even more ways to look at the data. Maybe the growth rates show us something when laid next to the change in popularity from year to year? Maybe we could take viewer reviews of individual telecasts and see if an unpopular show made for an unpopular show the next year? That’s just more p-hacking, though.
Instead, I’ll leave this with the conclusion I had before. The data and logic indicate that you need more popular films. They don’t have to be blockbusters, but they need to get over that $100 million threshold.
So, if I were the Academy, I’d find a way to get more popular films nominated each year. I put my recommendations here, and having a generally more popular slate this year—though not the most popular ever—seems to have had a positive effect. A popular film category likely won’t help—popular animated films have been nominated for decades, and that doesn’t move the needle—but more popular and blockbuster films—as long as they are deserving—likely do boost ratings. And if my hypothesis is right that viewers carry over year to year, then 2019’s telecast could boost 2020’s even higher.
In the end, this graph tells the story, for me: