(Welcome to the Entertainment Strategy Guy, a newsletter on the entertainment industry and business strategy. I write a weekly Streaming Ratings Report and a bi-weekly strategy column, along with occasional deep dives into other topics, like today’s article. Please subscribe.)
To close out the year/ring in a new one, I like to step back and dig into a topic that everyone should focus on:
Personal productivity.
It’s a bit of an obsession for me, but it’s also vitally important for strategy. For every reader, personal productivity allows you not only to outperform your competitors, but also to live a more meaningful life. It matters, personally and professionally, to work smarter, not harder.
At the end of 2022, I wrote an article on “My Biggest Strategy Recommendation. For Everyone. (That Will Never Happen)”. Last year, I unlocked that article.
That recommendation? Get off email!
Suggesting that workers stop using email might be the most shocking thing you could write in the modern-day business world, but I stand by it. If you work in Hollywood, either your job requires that you’re not on a computer or phone constantly—like many below-the-line workers—or it requires extensive, difficult deep work to create deliverables that set you apart from your competitors or co-workers. Email doesn’t facilitate either of those outcomes.
Of course, in 2025, if you mention productivity, folks assume you mean…AI. Specifically, large language models. CEOs like Jensen Huang want their employees to use AI all the time for every task, presumably because it will improve their productivity.
Thus, how you use AI at work is easily the most important personal productivity topic of the moment. Today, I’m providing six tips on how to use AI/LLMs. I’m firmly treading a moderate lane: LLMs can be useful, but not nearly as useful as most people think. This article will help you tell the difference, based partially on my experience incorporating LLMs into my workflows. And I think I have clearer guidance than articles that either recommend you use AI blindly or suggest avoiding it at all costs.
(Note: in this article, I’ll often use “AI” as a shorthand for LLMs. To be clear, I’m not discussing using “machine learning” models, since those are often carefully applied, one-use machines. Here’s an example of a great use case I shared last year.)
My Work Philosophy
Before we get to the tips, I want to quickly share my guiding principles/values about work:
- Output is greater than communication. Every job has an output, be it a decision, a plan, a report, a marketing strategy, a production schedule, a set design, a shot list, a programming lineup, a closed sale or what have you. Communication can aid in these, but the outputs of your job are what actually matter.
- Focus on deep work. Deep work means doing one to two hours of uninterrupted, undistracted work on the topic that delivers the most value in your job. For me, that’s writing articles or conducting data analysis. For screenwriters, that’s writing screenplays. For executives, that’s strategic planning. And so on. Now, what if you just read that and thought, “My job doesn’t have any deep work”? Well, I have bad news for you, but I’ll save it for when I write about AI in the next edition of the “Four Horsemen of the Media-pocalypse”.
- Writing is thinking. Strategy requires thinking. Therefore, strategy requires writing. The writing process includes brainstorming, deep thought, outlining, re-outlining, writing, then rewriting your work and thinking about it even more.
- Efficiency and quality matter; making work easier does not. Often, the real reason folks use LLMs is that they make work easier, not because they save time or improve the quality of work.
- Time prioritization is greater than time management. Knowing what you’re working on and why matters more than just scheduling your work (though you should be scheduling your time). The smartest, most successful workers cut low-quality, low-value work out of their schedules.
Speaking of time, let’s get to the tips!
Six Tips for Using AI/LLMs
Tip #1: Test Whether Your AI Usage is Saving You Time
If I had to provide just one piece of guidance to every person or company using AI/LLMs, it’s this:
Actually test whether your (and your company’s or division’s) LLM use is saving you time.
No, literally, run a test. An experiment. Take whatever task you have outsourced to an LLM and see if it is actually saving you time by timing yourself doing the same task without using an LLM.
Right now, a lot of people assume that LLMs save them time when they don’t.
Studies show this to be the case. Last summer, a report came out showing that coders assumed LLMs made them more efficient when, in reality, they didn’t. Literally, coders believed that LLMs had sped them up by 20% when the tools had actually slowed them down by 19%. Not great! Don’t let that be you!
My team just ran this experiment. Since I rely only on publicly available information for the Streaming Ratings Report, that often means capturing data from images of top ten lists (from Luminate, Samba TV, JustWatch, Reelgood, TV Time and so on) and digitizing them into a spreadsheet. Since only an image is available (unlike other data sources, like IMDb or Wikipedia), you can’t automate this process.
Then along came LLMs. A godsend! They could automate this process!
Hypothetically. In reality, this simple task is far too complex for an LLM. First, it couldn’t follow even basic instructions (like including a column that counted down from 10 to 1) or handle other simple tasks. After a few weeks, the LLM would drop previous instructions. Once a chat got too long, the entire conversation would slow to a crawl, and the browser constantly crashed. It also couldn’t format the results the way I wanted. Despite a ton of troubleshooting (switching LLMs, asking the LLM to craft better prompts, upgrading to higher-end systems, starting new chats), nothing fixed these issues. My editor/researcher would spend hours waiting for responses.
After using an LLM for most of last year, spending multiple hours per week coaxing answers out of it (often waiting on the LLM and checking Substack in the meantime), my researcher timed how long the process would take if he just did it himself.
He saved three hours on the first day doing the task himself.
Right now, LLMs take a long time to answer difficult prompts. It doesn’t feel like it, but this time adds up and slows many workers down.
Here are two analogies. First, think like a management scientist or operations researcher. If you ran a factory, you’d run tests on any new production process. As a knowledge worker, you owe yourself the same rigor. There’s nothing more “Riverian” (to use Nate Silver’s term) or “rationalist” than this! Yet many LLM evangelists blindly trust this new tech instead of testing it or approaching it skeptically.
Second, the medical analogy. We don’t just give people random pills and ask them, “Did this make you feel better?” and when the patient says, “Yes!” just assume that the drug is effective. We test medicine! AI is medicine for the entire workforce…so test it!
To be clear: I’m not saying you shouldn’t use LLMs. I’m saying you should test whether they’re saving you time or making you/your team more efficient. For me, I know what use cases work for my team, but I’m kicking myself that we weren’t more rigorous, initially, in testing whether LLMs actually saved us time. (I say my team, but honestly, my editor/researcher is pissed that he probably lost over an entire week’s worth of work this year, wasted on this one task.)
So time your LLM use!
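If you want to make this concrete, the test doesn’t need to be fancy. Here’s a minimal sketch in Python of what a stopwatch-and-log script could look like (the file name and task labels are hypothetical stand-ins, not my team’s actual setup):

```python
import csv
import time
from datetime import date

LOG_PATH = "task_timing_log.csv"  # hypothetical log file name


def time_task(label: str) -> float:
    """Start a stopwatch, wait until the task is done, and log the minutes."""
    input(f"Press Enter to START: {label}")
    start = time.perf_counter()
    input(f"Press Enter when DONE: {label}")
    minutes = (time.perf_counter() - start) / 60
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), label, round(minutes, 1)])
    print(f"{label}: {minutes:.1f} minutes")
    return minutes


if __name__ == "__main__":
    # Alternate the same task between the two conditions across days,
    # then compare average minutes per condition in the log.
    time_task("digitize top-ten chart, by hand")
    time_task("digitize top-ten chart, with LLM")
```

Alternate the same task between the two conditions for a week or two, then compare the average minutes per condition in the log. Crude, but it beats guessing.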
Tip #2: Don’t Outsource Your Thinking to LLMs
Multiple studies have come out showing that LLMs can make people worse at thinking:
- Students who used LLMs to help write essays had weaker brain connectivity than students who didn’t use them.
- Researchers at Microsoft showed that workers who use LLMs spend less time thinking critically, writing, “While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.”
I’m pretty skeptical about many social science studies these days (after the replication crisis), but it’s not like LLMs have a body of research showing that they do help humans think better. In this case, the studies match the theory/my philosophy: writing is thinking and better thinking gives you an edge.
Outsourcing your thinking to an LLM will make you worse at your job.
So don’t do it! If you can use an LLM as a tool (for example, I use LLMs to format links), that’s great! If LLMs can help you produce code to analyze data sets in contained environments (both Gemini and Claude have functionality like that), that’s good too! You can check that code’s accuracy yourself, and it could save you time; I’ll sketch what that looks like below. (With one caveat I’ll explore later.)
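To illustrate, here’s a hypothetical sketch of what “checkable” LLM output looks like. Say an LLM drafted a few lines of pandas to summarize weekly viewership data; because it’s plain code running against a file you control, you can force it to prove itself against a figure you computed by hand. (The file name, columns and numbers below are all invented for illustration.)

```python
import pandas as pd

# Hypothetical file: one row per title per week, with hours viewed.
df = pd.read_csv("weekly_viewership.csv")  # columns: title, week, hours_viewed

# The kind of summary an LLM might draft: total hours per title, ranked.
totals = (
    df.groupby("title", as_index=False)["hours_viewed"]
      .sum()
      .sort_values("hours_viewed", ascending=False)
)

# The part YOU add: spot-check the output against a number you computed
# by hand from the raw file, so the LLM-drafted code has to prove itself.
hand_checked_title = "Example Show"  # invented for illustration
hand_checked_total = 1234.5          # computed manually from the raw CSV
actual = totals.loc[totals["title"] == hand_checked_title, "hours_viewed"].iloc[0]
assert abs(actual - hand_checked_total) < 0.1, "Summary disagrees with hand check"

print(totals.head(10))
```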
But if you’re having an LLM do something that requires actual thought or introspection, yikes!
Let me provide a very specific example for Hollywood: story analysis.
(Quick side note: yes, I would argue that telling stories as creatively, innovatively and effectively as possible is Hollywood’s most important job. Sure, I’m a “suit”—as I’ve been insultingly described before—who went to business school, but crafting great stories is every studio and production company’s number one job. Period.)
If you’re a screenwriter and you’ve just finished a screenplay, should you step back, outline your script, and analyze the story yourself, or should you outsource that job to an LLM? Outsourcing might save you time (emphasis on “might”; see Tip #1), but will your ability to analyze story improve or worsen? Will your ability to read scripts critically get better or worse? Will these skills atrophy over time? I’d bet on worse, worse and yes.
Okay, say you’re a development exec. Should you use LLMs to provide coverage on screenplays or should you read them yourself, analyzing and writing up the analysis?
The answer is obvious. Whatever time you might save, the cognitive decline isn’t worth the trade-off.
(I can already anticipate one possible objection to the previous example: my studio/department/production company is flooded with way too many screenplays to read and cover! My researcher saw this firsthand, having worked as a frontline script reader for a major studio. But the real issue is too many low-quality scripts from way too many people, 90% of which the studio would never want to make in the first place. Don’t solve one problem by creating a larger one.)
This also applies to strategic thinking. I’ve already witnessed a rise in folks using LLMs to draft strategic plans…instead of writing the plans themselves. The point of a strategic plan isn’t to have a finished document; the point is to make a great plan, and to understand why it will or won’t work. That requires thinking, and writing that thinking down.
So how can you use LLMs to improve both creative and strategic thinking? Have LLMs provide feedback AFTER you write your analysis. That won’t cost much time, and you get a second set of eyes on your work. But the LLM shouldn’t do the thinking for you.
Tip #3: Never Use LLMs to Draft Communication
The rest of this article is for paid subscribers of the Entertainment Strategy Guy, so please subscribe.
We can only keep doing this great work with your support. If you’d like to read more about why you should subscribe, please read this post about the Streaming Ratings Report, why you need it, and why we cover streaming ratings best.