Category: Analysis

Do Popular Oscar Films Make for Larger Oscar Telecast Ratings? Maybe.

Often in my weekly column (the “most important story of the week”, click here!), I’ve called out the narratives behind one-off events. Take the Super Bowl. Were ratings down because we’re sick of the Patriots? The death of broadcast? Or because football isn’t popular anymore? 

Last year, a lot of people asked, “Why did Solo fail?” and identified four or five totally plausible reasons, any of which could have mattered a lot or not at all. We just don’t know! (I called out the Lego Movie Part II narrative too.)

If you take nothing else from my website, understand that we need to do better than narratives when it comes to the business of entertainment. (With the implication that execs and companies that rely on data instead of narratives will outperform the others.) One-time ratings or box office weekends are noise that we try to force into signal narratives. (Yes, I’m a big Nate Silver fan.)

That said, as a fan of film, the Oscars hold a special place for me. I still remember the first film I rooted for at the Oscars and felt devastated when it didn’t win. (Crouching Tiger, Hidden Dragon, if you’re curious.) And I saw most Oscar films each year until I had my first child, even if the films I love the most (big, popular and genre) don’t tend to get nominated.

“Popularity” was the meta-narrative of the Oscars in 2019, after the Academy announced its intention to start a popular film category. (Well, that and diversity.) I first looked at this last August, but now we have the ratings for Sunday’s telecast. Since this year saw a big jump in box office, without the new category, we can answer the question:

“With a generally more popular set of films, did that boost Oscar telecast ratings?”

The quick answer is that ratings are up (roughly) 12% over last year’s telecast. But what does that mean? Can this one new data point impart the lesson to the Academy that more popular films lead to higher ratings? Not by itself. We need to analyze the larger trends.

Today, my goal is to answer that question, but I’ll be honest: I can’t. The sample size is too small to draw clear conclusions. Instead, my goal today is just to lay out what data we do have and the limits of what that data can explain.

Oh, and to correct the record. I screwed up some data analysis in August, so I plan to explain what went wrong. (With a really fun learning point.)

How to Craft Narratives

First, let me show how easy it is to craft a narrative. Consider these two:

Narrative 1: Popular films boost Oscars. 

Obviously, the more popular films that get nominated, the higher the ratings. Is it really any surprise that the highest ratings of the last 10 years came in 2010 (for the 2009 films) when, uh, Avatar and Toy Story 3 were nominated? Meanwhile, the last two years had mostly sub-$50 million films, so ratings sank to their lowest since 2007’s films, which were so unpopular the Academy changed the rules entirely. With the highest box office total since 2010, it’s not a surprise ratings went up 12% this year. Not to mention, Titanic had the highest ratings of all time!

Narrative 2: Popular films don’t really impact Oscar ratings.

Actually, it really doesn’t matter. The 2011 ratings were tremendous (10 million more viewers than this year) and the most popular film was The Help at $169 million. Or 2005. The biggest nominee that year was Brokeback Mountain (that’s a fun trivia question to stump your friends) with $83 million. And 38 million people tuned in. Sure, popular films may matter, but even a juggernaut like American Sniper didn’t help boost the ratings, as they declined from the year prior. (It did way more box office than Bohemian Rhapsody or A Star is Born.) So yeah, if you care so much about Titanic and Avatar, maybe you just need an awards show devoted to James Cameron movies, and leave this awards show alone.

Why is it so easy to craft narratives? A small data set

Narratives don’t help. Instead, we need data. But data alone can’t solve our problems, and I’ll explain why. 

Explanation 1 – Small Sample Sizes

In the realm of small sample size, everything can be true. Simultaneously. 

I wove those paragraphs above by looking at my Oscar film table and picking high and low years, cherry-picking the data. With annual data sets, you only get one piece of data each year: the Oscars telecast.

Further, this data set is limited by history. I can’t justify including years before 1998, since that was a time period when broadcast shows like Seinfeld got ratings in the 20s. Since then—and even before—cable has been taking viewers. (Hot take: the biggest driver in the decline in broadcast ratings over the last 25 years has been cable television, not streaming.) 

Then in the middle of that data set, the Oscars expanded from 5 films to 10, then somewhere between 8 and 10 since. That means even my 20ish sample size is arguably only 10. And yeah, cord cutting started in the middle of those latter ten years. So a five-year sample set? That’s small.

Explanation 2 – What are we measuring for?

This seems easy—popularity!—but is deceptive. Do you take viewers in millions—which is growing?—or ratings—which is declining? Or growth per year? Or rolling averages? 
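As a quick illustration of how the metric choice changes the story, here is a minimal sketch with invented viewership numbers (not the actual Oscar ratings): the same series can be up double digits year-over-year while its rolling average still points down.

```python
# Invented viewership numbers (millions); NOT actual Oscar ratings.
viewers = [43.7, 40.3, 37.3, 34.4, 32.9, 26.5, 29.6]

def yoy_growth(series):
    """Year-over-year growth rates, as fractions."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def rolling_mean(series, window=3):
    """Trailing rolling average over `window` years."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

growth = yoy_growth(viewers)
smooth = rolling_mean(viewers)

# The latest year is up double digits by YoY growth...
print(f"latest YoY: {growth[-1]:+.1%}")
# ...but the rolling average still trends down across the series.
print(f"rolling means: {[round(x, 1) for x in smooth]}")
```

Both numbers are computed from the same data; which one you quote determines the narrative.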

Or take the biggest “input”: popularity. Obviously, box office is the best measure of popularity, because paying to see something is the truest expression of intent. But how do we measure those 5 to 10 films?

This year, a lot of people just added up all the box office numbers to get the total box office. But clearly years with 10 films have an advantage over years that only have 8 (or 7, if one was on a streaming platform). So you could use the average to account for that, but then one huge outlier (Avatar in 2009 or Black Panther in 2018) would throw that off. Or maybe not, since I’ve always said this is an industry dominated by logarithmic returns, and the outlier could draw in more viewers.

Still, if you did want to account for the number of films appealing to the most people, you could factor in box office ranks or median box office or the number of popular or blockbuster films. All of which I did in this table (which has been updated to the last weekend of box office):

Table: Oscar nominee popularity metrics

The point is, I came up with 16 different ways to even ask, “is this set of films popular?” That’s partly why conflicting narratives can arise.

Explanation 3 – So many variables

Finally, the last difficulty is that beyond popularity, quite a few variables can and do impact the ultimate TV ratings for the Oscar telecast. Off the top of my head:

– Presence of big stars in feature films

– Decline of broadcast TV ratings, in general

– Decline of broadcast TV ratings, because of cord cutting specifically

– Popularity of the host

– Quality of broadcast the previous year

– Politicization of the Oscars (cuts both ways)

– Popular films actually “contending” for Best Picture, not just nominated

And likely more. So a small sample set, with many ways to measure our variables, and a lot of potential explanations, most of which we can’t test.

Trying to Answer the Question 

C’mon Entertainment Strategy Guy. Do or do not, there is no try. So here’s my try. 

Step 1: What is the null hypothesis? What is our hypothesis?

Read More

Some More Entertainment and Media M&A Thoughts I’ve Been Meaning to Publish

If other writers are like me, then when you write a lot on something, a lot of great little tidbits and nuggets just don’t fit in. The thoughts are interesting, but would ultimately disrupt the flow of the article or series of articles. The joy of having my own site is I don’t have to junk those ideas like a Universal exec junking another monster franchise.

For instance, last July I dug pretty deep into the M&A (mergers & acquisitions) landscape as it relates to media, entertainment and all the communications (that’s my term for the pipes, both real, spectral and bundled) that deliver it. I’ve long been fascinated by M&A, having done some deals in my career, but this series gave me the chance to study the trends at a higher level. So I devoted most of July to this topic.

But a lot of thoughts didn’t fit into my initial piece. Consider this the DVD commentary/directors edition of that post along with a slight update into M&A in entertainment, media and communications since the huge surge of 2018. Did the pace continue? Has the consolidation worked? And how has the media covered it?

Data Thought: M&A Is a Messy Data Set

What does this mean? It means that with fuzzy definitions, small sample size and exponential effects, you can make M&A data do lots of things.

Let’s pause on that last sentence. Another way to say it is my least favorite quote of all time: “There are three types of lies: lies, damned lies, and statistics.” This implies all of statistics is a lie. What I’m about to show is how you go about doing that: taking a small sample size, selective dating and fuzzy definitions to weave a narrative.

But the word “narrative” is the key to that last sentence. The quote should be “lies, damned lies and narratives”. Narratives are created by weaving together anecdotes and reasoning from first principles, sometimes using statistics as your anecdotes. Good data analysis is the antidote to bad narratives. The problem is that data analysis is hard to do and takes lots of time.

But maybe if I show you how messy this data set is, the next time The Hollywood Reporter or The New York Times does an M&A article, you can see how they may be selectively pulling data to sell a narrative.

In fact, let’s use the New York Times and Bloomberg to show this. A lot of the inspiration for this series came from the Times June 2018 article showing how huge M&A was in 2018 through the first six months, and expectations it would continue at that frantic pace. Here is the key image from The New York Times:

NY Times M&A by year

Yikes. So M&A in the first half of 2018 was five times the amount of all of 2017. That’s a 5X jump. A jump that big is clearly the signal through the noise in this small sample size data set. So presumably, if Bloomberg wrote a similar article on media & entertainment M&A, we’d see similar results. And here we have that:

Bloomberg M&A Chart

Frankly, it is hard to reconcile these numbers. The New York Times divides up telecommunications and media & entertainment, while Bloomberg combines them. But it doesn’t matter, because the numbers are way off either way. How could Thomson Reuters data differ from Bloomberg’s data by a factor of three in 2017?

I could make this story even crazier. Here’s an article from Variety from October 2018; it uses Thomson Reuters’ data, and it doesn’t even match the New York Times numbers. Then it gives PwC’s numbers, which don’t match either set:

Thomson Reuters reported $145.7 billion worth of media and entertainment deals across the Americas in the first six months of 2018 — up from $141.7 billion for all of 2017. PwC, looking through a different lens, found $82.4 billion worth of U.S. media and telecom deals in the first half of the year, up 197% from last year.

Sometimes M&A numbers don’t even match at the same paper. Take the Hollywood Reporter. Doing research for my series in July, I found two different charts from articles published less than a year apart, one from March 2016 and one from January 2017. Even they don’t match.

H Reporter 2016 early for SITE

H Reporter 2016 M&A

The point is, M&A data is messy, as I wrote way back in July. By choosing when a deal closed or was announced, or what counts as “entertainment,” you can draw very different conclusions. It’s confusing enough that I want to do a quick explainer on it.

A Quick Primer on M&A Data Variables

Read More

Disney-Lucasfilm Deal Part XI: Disney Will Make A 107% Return on Lucasfilm Acquisition (And Other Conclusions)

(This is Part XI of a multi-part series answering the question: “How Much Money Did Disney Make on the Lucasfilm deal?” Previous sections are here:

Part I: Introduction & “The Time Value of Money Explained”
Appendix: Feature Film Finances Explained!
Part II: Star Wars Movie Revenue So Far
Part III: The Economics of Blockbusters
Part IV: Movie Revenue – Modeling the Scenarios
Part V: The Analysis! Implications, Takeaways and Cautions about Projected Revenue
Part VI: The Television!
Part VII: Licensing (Merchandise, Like Toys, Books, Comics, Video Games and Stuff)
Part VIII: The Theme Parks Make The Rest of the Money
Part IX: Bibbidy-Bobbidy-Boo: Put It Together and What Do You Got?
Part X: You’ve Been Terminated: Terminal Values Explained and The Last Piece of the Model

This series has been the equivalent of an all-day trip to Disneyland for me. Arriving when the park gates opened, I stayed all day, walking the park and going on every ride. I’m exhausted, and now all I have to do is wait for the fireworks. My feet are killing me, but I’m almost there. So yes, today is the fireworks of this process, though the rides (articles) have been great along the way.

I spent Tuesday and Wednesday building our exhaustive models, so let’s “generate insights” from the data, since insights are a hot business term. I’ll start with the big numbers. I’m going to do this as a Q&A.

What is the Bottom Line, Up Front?

Or “Bottom Line, 10 Parts Later”? 

Here it is: Disney will NOT lose money on this deal, even discounting for the time value of money. So yes, the people claiming success on behalf of Disney are indeed correct. They crushed it.

To show this, here are the totals for the deal. But, to show what “making money” means, I’ve broken my three scenarios into unadjusted, discounted for cost of capital and discounted for inflation. Again, these totals include my estimates for the last six years, the next ten years, and a terminal value for all future earnings:

Table 1: Totals

(All numbers in millions, by the way.)

Here is how those values relate as a percentage of the initial price ($4.05 billion). (So subtract 100% to get the return.)

Table 2: Percentages

If you said to pick one as “the truth”, I’d pick my median scenario—that’s what the median is for, right?—and I’d choose the cost of capital line. That really is the best way to look at investing in entertainment properties, and Star Wars is as pure entertainment as you get. (It’s also what the finance textbook would tell me to do.) So it is smack dab in the middle of the table.

Using that number, the only conclusion is that Disney crushed it. Disney got a 107% return over the lifetime of the deal. (A 5x deal in unadjusted terms.)

Even looking at the high and low cases, this makes sense. Even the most pessimistic scenario shows a 38% return. (Which is a 3x return in real dollars. Again, huge for a low case.) Bob Iger and Kevin Mayer made a huge bet, and it still had a nice return. In the high case, Disney will make an unadjusted 9x on the asking price. That’s a great deal.

Why do you focus on the discounted numbers compared to the totals?

I ignore “unadjusted” numbers—unadjusted is my best term for it—because I can’t help myself. One of my biggest missions with this series is to remind all my readers of this key finance point. A point—leveraging the time value of money—that the New York Times made when writing about President Trump’s taxes (and which he incorrectly criticized). So it needs to be repeated: A dollar today is worth more than a dollar tomorrow. Financial models need to reflect this reality.

To illustrate, here’s an example. Disney could have taken $4 billion (and yes, they paid half in cash, half in stock) and put it in the S&P 500. If they had done that, they’d have earned a 10.5% inflation-adjusted CAGR from 2013 to 2018. So if Disney had done nothing, they’d have earned 10.5% on their money. This is why the “cost of capital” exists. It accounts for the return you should expect for the risks of a given industry. If you make an investment, it isn’t good enough just to make some money; you need to beat the cost of investing in that industry.
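Here is a minimal sketch of that opportunity-cost logic, using the $4.05 billion price and the 10.5% CAGR from above; the six-year compounding period and the sample cash flow are my assumptions for illustration.

```python
# Opportunity-cost sketch: what the $4.05B purchase price would have
# grown to at the S&P 500's ~10.5% inflation-adjusted CAGR (per the
# article), assuming six annual compounding periods over 2013-2018.
price = 4.05e9      # Lucasfilm purchase price, USD
cagr = 0.105        # inflation-adjusted S&P 500 CAGR
years = 6           # assumption: six compounding periods

baseline = price * (1 + cagr) ** years
print(f"do-nothing baseline: ${baseline / 1e9:.2f}B")

# This hurdle is the intuition behind discounting by a cost of capital:
# divide future cash flows by (1 + r)^t rather than adding them at
# face value. A hypothetical $1B received ten years from now:
cash_flow = 1.0e9
discounted = cash_flow / (1 + cagr) ** 10
print(f"present value of that $1B: ${discounted / 1e6:.0f}M")
```

The point of the exercise: a dollar today really is worth more than a dollar tomorrow, and any deal valuation should clear the do-nothing baseline.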

Well, why did you also include inflation?

It’s easier for many folks to understand. The cost of capital is what we should judge the deal on, but “cost of capital” is a finance term that most of us don’t deal with on a daily basis. Inflation is easier to understand. It is the everyday reality that the things around us get more expensive over time. Inflation is the cost if you don’t do anything with your cash. It’s just another way to look at it. (And while it fluctuates, it’s hovered around 2% for so long that I’m using that as a placeholder.)

How does the cash flow look by time period?

Glad you asked, because I want to answer this question to keep this Q&A flowing. Essentially, this question asks how earnings flow in by our three major periods: what has happened (2013-2018), the near future (2019-2028) and the far future (the terminal value). Here are 3 tables showing this by model:

Table 3 Totals by Period

To make it easier to read, here’s that breakdown in percentage terms of the total for each line.

Read More

Disney-Lucasfilm Deal Part X: You’ve Been Terminated: Terminal Values Explained and The Last Piece of the Model


(This is Part X of a multi-part series answering the question: “How Much Money Did Disney Make on the Lucasfilm deal?” Previous sections are here:

Part I: Introduction & “The Time Value of Money Explained”
Appendix: Feature Film Finances Explained!
Part II: Star Wars Movie Revenue So Far
Part III: The Economics of Blockbusters
Part IV: Movie Revenue – Modeling the Scenarios
Part V: The Analysis! Implications, Takeaways and Cautions about Projected Revenue
Part VI: The Television!
Part VII: Licensing (Merchandise, Like Toys, Books, Comics, Video Games and Stuff)
Part VIII: The Theme Parks Make The Rest of the Money
Part IX: Bibbidy-Bobbidy Boo: Put It Together and What Do You Got?

Yesterday’s article was pretty audacious, trying to estimate 6 years of past revenue and 10 years of future revenue. But the eagle-eyed among you may have noticed I left out a crucial detail:

What about the future? 2029 and beyond? Surely Lucasfilm is worth something then too?

Yes, it is. But predicting the far future is the toughest part. Which ties into one of my biggest pet peeves in valuation. I loathe business models that project near term middling performance (or even losses), but a far future of wild success. 

Usually, this wild success is summarized in an outsized “terminal value”, one of the most crucial concepts in equity valuation. It can be hyper-dependent on the growth rate: if the growth rate rises by a point, the model’s value can shoot through the roof. (And yes, many tech valuations follow this model.)

Yet, terminal values are the best tool we have to solve this problem. If we use them properly. Today I’m adding that last piece to the model, but explaining how I got there and what it is.

Terminal Values…Explained

What is the terminal value? Well, it’s the last number on the spreadsheet, the one that captures all “future” earnings. Look at my model (this is the median scenario), with the new lines added:

Table 1 - Empty ProForma wTerminal Values

In short, the “terminal value” tries to capture the value of all future earnings after your model stops. Say you feel confident you can predict revenue out five years. Okay, good enough. (I mean, no one can really predict revenue, costs and hence earnings, though we still try.) But what about 10 years? 15 years? There are too many variables.

You can see the need for this in the Lucasfilm acquisition. Can I really predict what will happen with release dates of films, even two years out? I already had to remove Indiana Jones 5 from my models. Take another line of business: licensing. If you used the toy sales of 2015 to forecast the future, you’d be much, much too high. (2015 was probably the peak of Star Wars toy sales.) Back when this deal was signed, Disney didn’t know if they were going to launch a streaming service (I assume), but they still could have sold Star Wars TV series. Possibly for even more money. Not selling to others changes the model.

Here is where the science of modeling comes back to the art. (Which isn’t a bad thing, despite the current connotation. Good art is really, really hard to make; great art even harder.) The traditional way to model a terminal value is to take the cash flows of the last year of the model and assume they hold steady into the future. In other words, you make a “perpetuity”, a cash flow stream that continues forever. Alternatively, if your company has large variance in cash flows year to year, you can use a three- or five-year average to get the base number. To be even more conservative, you can assume it is an annuity instead of a perpetuity, where the future revenues only last for a given period of time, say 10 or 20 years. (If you need a refresher on “time value of money”, go here.)
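Here is a minimal sketch of those standard formulas (perpetuity, growing perpetuity and annuity), with placeholder cash flows and rates rather than the model's actual inputs.

```python
# Standard terminal-value formulas. Inputs below are hypothetical
# placeholders, not the article's actual Lucasfilm model numbers.

def perpetuity_value(cash_flow: float, discount_rate: float,
                     growth_rate: float = 0.0) -> float:
    """Present value of a cash flow stream that continues forever.
    Requires discount_rate > growth_rate."""
    return cash_flow / (discount_rate - growth_rate)

def annuity_value(cash_flow: float, discount_rate: float,
                  years: int) -> float:
    """More conservative: the stream only lasts `years` years."""
    return cash_flow * (1 - (1 + discount_rate) ** -years) / discount_rate

cf = 500e6   # hypothetical final-year cash flow
r = 0.08     # hypothetical discount rate

print(f"perpetuity, no growth: ${perpetuity_value(cf, r) / 1e9:.2f}B")
print(f"perpetuity, 2% growth: ${perpetuity_value(cf, r, 0.02) / 1e9:.2f}B")
print(f"20-year annuity:       ${annuity_value(cf, r, 20) / 1e9:.2f}B")
```

Note how adding just two points of growth inflates the perpetuity value, which is exactly the growth-rate sensitivity described above, and why the annuity version is the conservative choice.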

The Specific Terminal Value Calculations

How long will Star Wars be valuable? Davy Crockett was the Star Wars of the 1950s, and it isn’t worth $4 billion. Mickey Mouse has been Mickey Mouse since the 1920s, and he’s worth well more than $4 billion. Which way will Star Wars go? I’m going to assume it stays valuable for a long time, essentially for decades, but with one scenario where it shrinks over time, which I’ll control for by tweaking the discount rate. (Either having it grow or shrink.)

Read More

Disney-Lucasfilm Deal Part IX: Bibbidy-Bobbidy-Boo: Put It Together and What Do You Got?

(This is Part IX of a multi-part series answering the question: “How Much Money Did Disney Make on the Lucasfilm deal?” Previous sections are here:

Part I: Introduction & “The Time Value of Money Explained”
Appendix: Feature Film Finances Explained!
Part II: Star Wars Movie Revenue So Far
Part III: The Economics of Blockbusters
Part IV: Movie Revenue – Modeling the Scenarios
Part V: The Analysis! Implications, Takeaways and Cautions about Projected Revenue
Part VI: The Television!
Part VII: Licensing (Merchandise, Like Toys, Books, Comics, Video Games and Stuff)
Part VIII: The Theme Parks Make The Rest of the Money)

Many of you are interested in knowing how much money Disney made when it bought Lucasfilm for $4.05 billion. How do I know? Well, one of the Google search terms that directs to my site is “disney profit lucasfilm”. (And really, I should rank higher in that search!)

This interest comes from the fact that very few people know the answer. Disney CEO Bob Iger does. Kevin Mayer (Iger’s chief dealmaker) does. Christine M. McCarthy (Iger’s CFO) does. And likely many other Disney employees.

As for the public, though, we haven’t the foggiest. 

Few other news websites have tried to answer this question. It’s too speculative. Instead, they usually rely on some version of a “Disney has grossed more at the box office than the acquisition cost of Lucasfilm” article. These are so obviously wrong—a studio doesn’t collect all of the box office, for one; it doesn’t account for other revenue streams, for two; it doesn’t discount for the time value of money, for three—that many of the Disney and Star Wars super-fans want something more. So I did a bottom-up analysis. (I’m the strategy guy and a super-fan.)

Yet, I’ve left you all wanting. I never finished the damn thing.

Today, it all comes together. Totaling over 66 pages and 30 thousand words with dozens and dozens of charts, tables and financial statements, this article series is my Ulysses. I’ve calculated all the revenues and costs to finally answer the question that started this:

How much money did Disney earn on the Lucasfilm acquisition?

Today, I’m going to walk through building my final model. I will include the final numbers for my three scenarios (through 2028), but today is really about adding in the final estimates to the model. Like a final Harry Potter film—or uncompleted ASOIAF book—this dramatic conclusion will need multiple parts. I’ll explain the model today, tomorrow I’ll calculate the terminal values and then on Thursday, I’ll draw tons of fun conclusions. That’s right, it’s a Lucasfilm week!

Calculating the Final Piece

At first, I was going to make just one model, call it the “average” and be done with it.

But that didn’t make any sense. I’ve been using scenario modeling throughout, building best and worst case options where appropriate. In one case—film—I made 8 different scenarios. Scenarios are great because they account for the inherent uncertainty in predicting the future.

To add everything up, I built three versions, the traditional “best case / worst case / average case”. I’m a big fan of using three versions of a model, if they are all realistic. (If you want to goose your numbers with three scenarios, make the worst case very nearly break even.) I treat the high and low case as the equivalent of our 80% confidence interval. The average then acts as my best guess of what will happen. The final summary model looks like this:

Table 1 Empty Proforma

The shaded green cells are what we need to fill in, based on our past calculations. Sure, it looks like a lot of cells, but it is really just 11 lines. A lot of the time, we act like high finance is really hard. It isn’t. All you do is add and subtract. We don’t even have to do the math ourselves, since Excel does that for us.

As I was building this, I realized that in some lines of business I forecast revenues out to 2027, and in others to 2028. Don’t ask me why I didn’t keep things uniform. For consistency, all these models will go to 2028, the next ten-year estimate. Building this final summary was a good proofread of the Excel models as well.

The Final Calendar

To help build the models, early on I built a calendar that represented my best guess for the future of Lucasfilm under Disney. Remember, this deal was signed in December 2012, so I started the calculations in 2013. This calendar didn’t make sense for any individual article, so I’m putting it here, so that everyone can understand the scale of what Disney is rolling out here:

Table 2 CALENDAR of Lucasfilm

The Three Scenarios

Let’s walk through what I put in each model.

The “Average”: Status Quo Continues

Read More

The Most Popular Oscars Ever? Nope. (Why The Academy STILL Needs Fixes to Make the Oscars More Representative)

Records have nearly been smashed! After decades in the doldrums, in this year’s Oscars telecast—for achievement in the year 2018—popular movies made a comeback. Here’s Todd VanDerWerff explaining for Vox:

For the first time since 2012, the total domestic box office of the eight films nominated for Best Picture topped $1 billion — and that’s without box office receipts for the 10-times-nominated Roma…Indeed, the $1,260,625,731 pulled in by the seven films we have data for is the biggest total for a Best Picture lineup since 2010, when the 10 films nominated (led by Toy Story 3) made $1,357,489,702…the average box office haul of the nominated films we have data for, the number becomes even more impressive: $180,089,390. Though their combined take falls slightly behind those of 2011 and 2010, the average is well ahead of those years ($135,748,970 for 2010; $170,512,813 for 2009), since 10 films were nominated in both those years.

(I changed Vox’s years to the year prior to match the rest of this article.)

This would seem to refute my thesis from last August; I predicted—based on the data—that the Oscars are nominating fewer and fewer popular films. 

So let’s check back in on those metrics I developed now that we have a new year to add to our dataset. But I’ll go above that simple mandate: I want to make an argument for popular films. I think the Academy has a chance to get higher ratings with more popular films and more importantly, I think this would better represent the state of film each year. Let’s start with defining the problem. Before one can solve a problem, one must understand it. Otherwise the solution probably won’t work.

The Problem: The Academy is Nominating Fewer Popular Films

Collecting the Data

This is true. But it’s complicated. My “rule of thumb” when you have a complicated issue that can be measured in multiple ways—like Oscar voting—is to just measure it as many ways as possible, to see where the trends lead you. If most or all the measurements roughly align directionally—meaning one or two measures could be an outlier—then you can generally trust the trend.

(This process is my refutation of the worst quote ever about lies and damned lies. Mark Twain did more to set back statistics than anything.)

Some definitions before I use the metrics. First, I’m calling critically acclaimed/awards-focused films “prestige” going forward. In olden times, we called these films “independent”, but most independent films now have major studio distribution, so that doesn’t make sense. I’m defining “popular” films as films grossing over $100 million in ticket-price-adjusted terms. I’m defining “blockbusters” as films grossing over $250 million. That makes that category very, very small—usually fewer than 10 films per year—but that’s the point of the blockbuster category. Finally, I’m adjusting all box office to 2018 ticket prices, since that’s what my data set used in August.
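As a sketch, the cutoffs above are easy to express in code; the titles and adjusted grosses below are hypothetical placeholders, not the actual dataset.

```python
def classify(adjusted_gross: float) -> str:
    """Bucket a film by ticket-price-adjusted gross, per the cutoffs above:
    over $250M = blockbuster, over $100M = popular, else prestige."""
    if adjusted_gross > 250e6:
        return "blockbuster"
    if adjusted_gross > 100e6:
        return "popular"
    return "prestige"

# Hypothetical adjusted grosses (USD, 2018 ticket prices):
nominees = {
    "Film A": 700e6,
    "Film B": 213e6,
    "Film C": 192e6,
    "Film D": 48e6,
}

counts: dict = {}
for title, gross in nominees.items():
    counts[classify(gross)] = counts.get(classify(gross), 0) + 1
print(counts)
```

Counting films per bucket like this, rather than summing grosses, is one way to ask whether a nominee field has multiple popular films instead of one giant outlier.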

With that out of the way, to the charts and tables. Before we start, know that the Academy nominates a different number of films for Best Picture each year depending on the voting totals. This year it was 8 films, while 2017 and 2016 featured 9 films each. 2014 and 2015 featured 8 films. And 2009 and 2010 filled out ten slots each year. We need to account for that.

(Oh and I’m assuming “box office” is correlated with “popularity”. But feel free to disagree with that, somehow.)

Let’s start with “average box office” per film. This is the metric VanDerWerff quoted above. Crucially, Vox used the mean (or arithmetic) average. With mean averages, you run the risk of one huge outlier skewing the results. (In finance, see the “Bill Gates walks into a bar; everyone is richer, on average” scenario.) Avatar did this in 2009; Black Panther is doing it now. (Also, Black Panther’s box office haul is divided by fewer films (7) compared to Avatar’s fellow ten films.)
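To make the outlier effect concrete, here is a minimal sketch with invented grosses: one Black Panther-sized hit atop a field of mid-sized nominees.

```python
# Invented nominee grosses ($ millions), one huge outlier included;
# NOT the real Best Picture numbers.
from statistics import mean, median

grosses = [700, 192, 96, 70, 48, 43, 25]

print(f"mean:   ${mean(grosses):.0f}M")    # dragged up by the outlier
print(f"median: ${median(grosses):.0f}M")  # what the 'typical' nominee made

# Drop the single outlier and the mean collapses toward the median:
without_outlier = grosses[1:]
print(f"mean without outlier: ${mean(without_outlier):.0f}M")
```

One film more than doubles the mean here, which is why quoting the mean alone can make a mostly mid-sized field look like a popular one.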

One outlier should not mean the films as a body are more popular. To account for this, I calculated both the mean and median average. I wish I had thought of this back in August, but I’m updating it now. Check it out:

box office unadjusted

So by mean average, yes, we’ve done it! The most popular Oscars of all time!

But the “median average” shows a huge split. This is evidence that overall these films aren’t that popular compared to years past, with one tremendous outlier. As I said, though, we can look at this in both adjusted and unadjusted terms. Adjusting box office is the equivalent of accounting for inflation in economics: it’s something you should ALWAYS do. Time value of money, and what not. To be clear, this won’t lower this year’s average, but it raises past years’. So I included both below, again with both the mean and median averages:

box office adjusted v02

The trend lines are the same, but show a little more decline in popularity. However, one of the purposes of nominating the films is to have multiple popular films. Even one blockbuster isn’t enough to get lots of people interested. That’s why I like counting the number of popular and blockbuster films. (Last time, I included these both in percentage terms and adjusted for inflation, but I think the story is roughly the same without those views.)


This looks a little bit better, though arguably we have been flat at 3 popular films per year. (If you use percentages, it may even look a bit better.)


Prediction Time: Forecasting the Effect of Netflix’s Price Increase on US Subscribers

Netflix moves the PR needle. Even I jumped into the Twitter maelstrom to generate clicks based on their two announcements last week, especially the decision to increase prices on US customers.

The problem, for me, is that Twitter, as a medium, is really bad at digging into numbers. It isn’t Twitter’s fault; spreadsheets just don’t really fit. (See my last big analysis article for another debate taken off-Twitter.)

As a result, a lot of the “debate” on Twitter devolves into “this is good” or “this is bad”, with some anecdotes thrown in and the occasional Twitter rant. The fun thing about #StreamingWars2019 is that we’ve all clearly taken a side, and this war will only end with all our heads on pikes. (I’m rereading Game of Thrones/ASOIAF in preparation for April 14th, and George R.R. Martin ends lots of events with that outcome.)

We can do better than Twitter debates. Today, I want to make the subtext of all the discussion on Netflix text. I want to change the terms of the debate around Netflix by moving into concrete specifics. Strategy is numbers, right? 

That means putting our predictions into quantitative terms. I described my process for this with regard to M&A back in July and in my series on Lucasfilm. So here’s the question:

How will Netflix’s price increase in 2019 impact US subscribers in 2019?

The results will come in when Netflix announces its annual/quarterly earnings in January 2020. For the record, Netflix had 58.5 million paid memberships at the end of Q4 2018, spread across three tiers of pricing. Over Q1 and Q2 of this year, they’ll raise prices by $1 to $2, increases of 13% to 18%.

I’m going to walk through my process to make a prediction. First, I’ll explain why I’m predicting customers in 2019, not other financial factors. Second, I’ll evaluate what we know and some good and bad ways to look at the problem. Third, I’ll talk a bit about the data and finally make my prediction. Feel free to leave yours as a comment on this article or in my Twitter feed.

Stating the Problem: If the number of subscribers who leave is lower than 18%, it’s a win.

This is the simplest of simple microeconomics that Netflix is practicing here. If you raise prices, but the number of units sold (in this case, customers) decreases less in percentage terms than the price increases, you make money. (Assuming no increase in costs.) Since this is digital and each additional “unit” sold has a marginal cost of zero, that math works. (Note: this is still an “assumption”. If you need an ever larger content library to woo subscribers, then our magic “marginal cost is zero” isn’t actually true.)
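That rule of thumb can be sketched in a couple of lines. The 18% figure is from above; the 5% churn is purely illustrative. One wrinkle worth noting: the exact break-even churn sits slightly below the headline price increase, because the two percentages compound.

```python
def revenue_multiplier(price_increase: float, churn: float) -> float:
    """Revenue after the change as a multiple of revenue before,
    assuming flat costs (the marginal-cost-of-zero assumption)."""
    return (1 + price_increase) * (1 - churn)

# An 18% price increase with a 5% subscriber decrease still grows revenue.
print(round(revenue_multiplier(0.18, 0.05), 3))  # 1.121, i.e. ~12% more

# Exact break-even churn: (1 + p) * (1 - q) = 1  =>  q = p / (1 + p)
p = 0.18
breakeven = p / (1 + p)
print(f"{breakeven:.1%}")  # 15.3% -- a bit below the 18% rule of thumb
```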

[Chart: simple supply and demand economics model]


Like the “value creation” model, the above chart is the simplest explanation of price and demand and how they interact, but it is woefully incomplete. Many, many other variables ultimately impact the number of units sold or customers who subscribe.

Yet, as a rule of thumb, it works. The number to watch, therefore, is subscriber growth or decline. If Netflix’s paid subscribers fall to 55.6 million, that’s a 5% decrease. Since that is still well below the 18% price increase, the move made financial sense. Thus, the terms of the debate change to, “will Netflix’s subscriber growth continue, slow or halt?” Here’s the past 7 years of subscriber numbers (paid, US):

[Chart: paid US subscribers, from Netflix earnings reports]

Predicting the Effects: How Many Subscribers will Drop from Netflix?

There are a couple of ways to try to triangulate this number, but let’s start with how not to do it.

The Bad Prediction Method: Using yourself as a data point.

Many people, when discussing TV or film, use themselves as the ur-example of a customer. I saw multiple people on Twitter say something along the lines of, “I use Netflix all the time. I don’t care about a $2 increase. Ipso facto, this doesn’t matter.”

Now, if you are a representative sample of America, then congratulations: this analogy works. (Also, I have a ton of other questions for you. Like, who will win the 2020 election? You should know.) If instead you are a single data point, then we need something else.

The Trust Method: Believe in Netflix’s army of economists.
