It never ceases to amaze me how much more there is to learn about this crazy industry. I call myself the “entertainment strategy guy” and things still surprise me. Take M&A (mergers and acquisitions) in entertainment & media.
For years, I thought I closely followed the trends of mergers and acquisitions and all that jazz.
Then, I started to rigorously answer the question from two weeks ago, “How much, if at all, will M&A activity decrease?”. Naturally, I turned to Google to look for big M&A deals. I tried to build myself a little table with every deal I could find. I kept finding deals I’d forgotten about. “Oh yeah, Lionsgate bought Starz!”
There has been a lot more M&A in entertainment than you’d think. It has been a constant flow since the recovery from the Great Recession. That’s what my unscientific table showed and what high-level summaries from PwC (and others) show. And it genuinely surprised me how many deals I’d forgotten about.
Today, I’m going to summarize what I saw in the data and the shape of it.
Gathering the Data: Part 1 – My Own Data
Here’s a snapshot of the table I started filling out and will use a bit today.
Why build a table myself in Excel? Well, it’s the easiest way to click on a few variables and sort the data to discover descriptive details yourself. One of my pet peeves in data analysis is when someone doesn’t actually own the data themselves, so they rely on someone else to draw conclusions. (Also, sorry for the compressed lines. This table violates my “rule of 8”: tables should never have more than eight columns, and six is usually ideal.)
My process for gathering the data was as crude as it was simple: I googled “entertainment and media mergers and acquisitions” and the year to find the biggest deals per year. I later used CrunchBase’s data set to find smaller deals. I would sort by company, starting with the studios and moving to distributors and such.
I really recommend at least trying to collect data yourself whenever possible. It’s harder and takes longer, but by doing it yourself, you force yourself to figure out which variables you want/need per data point. In this case, by looking myself, I learned some things about M&A activity and the data set in general. Even when I later switched to using PwC’s summarized data, I could use these insights to understand PwC’s conclusions.
For example, I learned how important the timing of a deal is. A lot of the articles covering M&A activity neglect to mention what they are tracking in their coverage. Is it when a deal was announced? (For many articles, yes.) But what if a deal doesn’t close? So you sort M&A activity by closed deals, but that could be skewed by how long deals take to close versus the year they started in. If you are trying to summarize the previous year’s M&A, well you’d leave out a lot of deals if you only track deals that close.
Could this affect the data? Absolutely. The AT&T-Time Warner deal could swing one year’s data by $85 billion. The Comcast-NBCU merger swung various year-by-year totals by $35 billion. The failed Comcast-Time Warner Cable deal inflated a few years’ totals by $45+ billion before it was abandoned.
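To make the timing issue concrete, here’s a minimal sketch in Python of how tallying the same deals by announcement year versus closing year produces very different annual totals. The deal names and values here are hypothetical stand-ins, not figures from my table:

```python
from collections import defaultdict

# Hypothetical deals: (name, year announced, year closed or None if abandoned, value in $B)
deals = [
    ("Deal A", 2016, 2018, 85.0),   # a long regulatory review delays closing
    ("Deal B", 2016, 2016, 26.0),   # announced and closed in the same year
    ("Deal C", 2014, None, 45.0),   # announced but later abandoned
    ("Deal D", 2017, 2017, 3.5),
]

by_announced = defaultdict(float)
by_closed = defaultdict(float)

for name, announced, closed, value in deals:
    by_announced[announced] += value   # counts deals that later fall apart
    if closed is not None:
        by_closed[closed] += value     # only deals that actually closed

print(dict(by_announced))  # 2016 looks huge; 2014 includes a deal that never happened
print(dict(by_closed))     # the $85B deal shifts from 2016 to 2018
```

Under the announcement view, 2016 totals $111 billion; under the closed view, most of that value lands in 2018 and the abandoned 2014 deal disappears entirely. That’s the whole measurement problem in four rows of data.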
I also learned that trying to distinguish between “acquirer” and “acquired/target” is tough. Most deals are one company buying another. But sometimes two companies agree to merge, and it isn’t really an acquisition, so who is the acquirer versus the acquired/target? Other times a firm is buying a majority stake in a company in which it already has partial ownership. These little distinctions and differences can plague data analysis when you try to capture them as variables.
What about the deal value? Again, this would seem like a relatively straightforward number, but it can change depending on how stock prices move over time. Or a company may have to raise its offer due to competitor or shareholder pressure. Sometimes the numbers differ by billions, swinging the total deal value by 25% or more. I tried to use the higher number whenever possible; in my scan of the news reports, deals rarely got less expensive.
The last five variables were less about the nuts and bolts of the deal (who bought what for how much) and more about providing some flavor. The pieces I thought would be most useful for data analysis/business strategy were: the industries involved (network, radio, studio, cable, etc.) for both parties, the “direction” (horizontal or vertical) since this came up a lot in the AT&T lawsuit, the status (to account for failed deals), and the stake of ownership. I assumed full ownership unless the reporting clarified otherwise. Also, in this case, industry and direction were my own subjective opinions.
If I could add a piece, I’d add PwC’s description of the business purpose of the deal: consolidation, content, innovation, capabilities extension, or other/stake ownership.
Oh, and in the future I’d include “divestiture” as a final category. Not all deals are accretive, and PwC/Thomson Reuters’ database tracks this. In downswings, companies spin off bad business units, and ideally a good data set on M&A would tell you when that happens.
Gathering the Data: Part 2 – The PwC Data and Others
As I mentioned above, trying to collect all the information on M&A activity by myself was more time-intensive than I thought. Let’s hope I can keep building it through the rest of the year to find additional insights.
In the meantime, I needed a better, quicker look. Fortunately, the good people at PwC, using Thomson Reuters’ data, were able to compile annual snapshots of M&A activity in the sector they call “media, entertainment, and communications,” which I copied in my first post. I found every year’s study I could—in most cases using the articles on it—and compiled it into the table I ran in my last post. Here it is again for those who missed it:
I also found other articles about consolidation or M&A activity in other sub-disciplines in entertainment, again usually through trade press articles. Take this chart from an article in Variety about M&A in TV production, which built its table using IHS Markit data:
(Source: Variety/IHS Markit.)
In addition, I found articles about M&A in the Wall Street Journal, Hollywood Reporter and The New York Times. Where possible, I saved the numbers in the article to bolster my data. I’ve tried to provide links where possible, but I have so many I may save them for a future post.
Quality of the Data
So I have essentially two data sets at this point: my own from readings/capturing news articles and the PwC summaries. The question I had to ask myself—and you should be asking me—is how good do we think this data is?
Most people in data analysis miss this key step, and it’s worth pausing to emphasize it. Just because you have data doesn’t mean it is any good. Do you see potential flaws that you should acknowledge, or that could cause you to throw out the data set? Do you see quirks in the data that signal bias? Always ask these questions of data (or ask your data scientists/consultants these questions).
From year to year and between data sets, M&A data on media, entertainment, and communications (and, I assume, all industries) is plagued by discrepancies and judgment calls. The least reliable variable was the timing of M&A deals. By definition, the number of announced deals exceeds the number that eventually close. So every annual report invariably lowered the previous year’s totals. Sort of like how GDP gets adjusted by the Commerce Department in later reports. This can make each year seem like it exceeded the previous year’s totals, even if it just means that some announced deals won’t end up closing.
Just because we find flaws or inconsistencies doesn’t mean we have to throw the baby out with the bath water. The question is how much precision we need in this data. Since we’re looking for trends here, being off by a few days on when a deal was announced or closed won’t kill us. Same with being off by several hundred million dollars on a price. Given that a few huge deals cause the largest swings, being off by a few hundred million dollars won’t affect the larger trends. Even the announced-versus-closed distinction won’t affect the five-year average of deals, for the most part. (Though it helps if you keep your data consistent/apples-to-apples when possible.)
That said, I wouldn’t try to draw too many strong conclusions from the data set, given its inconsistencies and two other issues I’ll discuss in the next section.
My self-made data set has one other HUGE flaw I don’t want to neglect: even though I tried to find as many deals as possible, I still missed a lot of them. Scouring the internet for deals isn’t a reliable approach, which is why I opted mid-stream to change approaches and focus on high-level summaries. I’d also add that I mostly focused on US-based M&A, which is a mistake. These are global companies, but our focus naturally falls on places that speak our language. (Many companies had multiple Indian deals, but their total value pales in comparison to US-based deals.)
Initial Thoughts on the Data
So we have all these high level summaries and my table. What do we think of this data? What does it look like?
Summary: This is a noisy data set
Even if I had all of Thomson Reuters’ data at my disposal—I don’t—I’d still call this a “noisy” data set. Adopting the terminology of The Signal and the Noise, I mean that trying to draw conclusions about how individual variables impact the data set will be hard, and making precise predictions will be impossible.
Take years, for example. A year is a long time in business terms. But trying to draw conclusions about any given year’s M&A activity is fraught because deals could be categorized multiple ways. As we’ve seen, you could count the AT&T-Time Warner deal in 2016 or 2018 (or later if the appeal delays the deal further), which drastically impacts the value of the deals done in that year. Since timing could change the data set so much, we have to be careful drawing conclusions about any one year of deal-making. This is why I used the five-year average to set our predictions.
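The smoothing trick above is easy to show in code. Here’s a minimal sketch of a trailing five-year average; the annual totals are illustrative placeholders, not PwC’s actual figures:

```python
# Hypothetical annual deal totals in $B; one year is inflated by a mega-deal.
totals = {2013: 60.0, 2014: 90.0, 2015: 70.0, 2016: 180.0, 2017: 80.0}

# Average the most recent five years so no single year dominates.
years = sorted(totals)
window = [totals[y] for y in years[-5:]]
five_year_avg = sum(window) / len(window)

print(f"Five-year average: ${five_year_avg:.1f}B")
```

The 2016 spike still pulls the average up, but far less than if you anchored a prediction to 2016 alone, which is the whole point of using the average as a baseline.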
Or take mega-deals. There are fewer than 18 in any given year. That’s a small sample. So trying to draw conclusions about mega-deals using variables like “direction” or “industry” or “type of deal” is fraught. Or, to be more precise, we can’t have statistical confidence in those conclusions.
Warning: Power-law distributions amplify the effects of small sample sizes.
This data set, and a lot of conclusions drawn from it, is power-law distributed. AT&T bought Time Warner for an entertainment-and-media-deal high of $85 billion. And it was joined in 2016 by 15 other mega-deals of $1 billion or more. But according to PwC, there were 679 deals of any size that year. That means the top two deals (Microsoft’s purchase of LinkedIn being the second biggest deal I found) totaled $111 billion, more than the other 677 deals that year combined.
As a side note, I love explaining power-law distributions to people. This type of distribution shows up throughout entertainment. A power-law distribution means that a small number of deals can have a huge impact on the data set, especially if you focus on the “average” without accounting for size. So if you’re counting/measuring impact by each deal equally (not weighted by value), you could miss a lot of trends.
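You can see the concentration in a few lines of Python. This sketch uses the two real headline values ($85B and $26B) plus hypothetical $100 million values for the remaining deals, just to show the shape of the distribution:

```python
# Two mega-deals plus 677 small deals of $0.1B each: 679 deals total.
# The small-deal values are made up; only the shape matters here.
deal_values = [85.0, 26.0] + [0.1] * 677

top_two = sum(sorted(deal_values, reverse=True)[:2])
rest = sum(deal_values) - top_two

print(f"Top 2 deals: ${top_two:.1f}B; other {len(deal_values) - 2} deals: ${rest:.1f}B")

# The unweighted mean and median tell you almost nothing about the big deals.
mean = sum(deal_values) / len(deal_values)
median = sorted(deal_values)[len(deal_values) // 2]
print(f"Mean: ${mean:.2f}B vs median: ${median:.2f}B")
```

The top two deals dwarf the other 677 combined, while the median deal is a rounding error next to them. That’s why counting deals equally, without weighting by value, can hide the trends that actually move the totals.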
Conclusion: We still need a prediction!
I know, I spent today just reviewing the data about M&A. As I’ve been editing this article, I’ve been asking myself a brutal question: is there enough meat on the bones for this article?
And you know what? I think there is. Every few months, Deadline or The Hollywood Reporter or Variety publishes an article summarizing M&A activity in media and entertainment. And it comes up on their podcasts. But trying to find an explainer or FAQ on where the data comes from? Good luck. It matters whether data sets are imprecise or noisy or flawed, and a lot of the reporting on M&A ignores that crucial context. Hopefully I provided that today.
I swear I’ll make a prediction tomorrow.