Large language models, or LLMs, have improved tremendously in recent years. They can write prose and code better than many humans can, and people with creative jobs fear those jobs will be taken. That fear, I'll argue, is misguided.
LLMs generate text faster than any human can, and this ability to combine a massive body of training data with novel input can be applied in surprising ways to automate manual tasks. For example, until recently, processing a batch of files meant writing code. Now an LLM can generate the script from an English description of the task, without being specifically trained to do so. It's no longer only factory workers who worry about machines taking their jobs; creative professionals like programmers worry too.
A writer who focuses on generic content marketing likely won't have much work in the near future, since ChatGPT and countless other AI text generators can produce the same content in seconds. Much to my disappointment, content marketing usually doesn't have to be good to be effective at scale. It just has to catch some keywords, draw people to a website, and convert: convince them to buy a product, or show them ads.
In the above two examples, precision doesn’t matter.
If you’re a programmer who needs to migrate a lot of files while performing tedious transformations on their contents, it takes less time to describe the result you want in English than to learn how to do it in a scripting language. If it’s a one-time task, you don’t care how the script is written, as long as it achieves the result.
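To make the scenario concrete, here is a minimal sketch of the kind of throwaway script an LLM might produce from a one-line English request. The task itself is hypothetical, chosen only for illustration: copy every `.txt` file in a folder into a new folder as `.md`, lowercasing the contents along the way.

```python
import pathlib

def migrate(src_dir: pathlib.Path, dst_dir: pathlib.Path) -> int:
    """Copy every .txt file in src_dir into dst_dir as .md, lowercasing the text.

    Returns the number of files migrated.
    """
    dst_dir.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in src_dir.glob("*.txt"):
        # Keep the original file name, swap the extension, lowercase the body.
        out = dst_dir / (path.stem + ".md")
        out.write_text(path.read_text().lower())
        count += 1
    return count
```

Whether the loop uses `glob`, `os.walk`, or a shell one-liner is exactly the sort of detail you don't care about for a one-time task; you only care that the migrated files come out right.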
If your job is to write blog posts to boost a website’s ranking in a search engine, your writing only needs a vague connection to the topic of the website to achieve the results you’re getting paid for. If it’s an ecommerce store that sells chairs, almost any article about chairs will do. And if any random article about chairs will do, there’s not much need to spend human time writing them.
Beyond use cases like these, LLMs face what I call a “precision problem,” one that should alleviate people’s fear of their jobs being taken.
To illustrate, consider the design of a programming language. After years of working in a domain, someone envisions a language that abstracts away certain concepts to make the work easier. A specific syntax will make it easy for others, even those without a technical background, to read the code and verify that an operation is correct. The operations are complex, and descriptions of them in English are hard to follow. This person could talk to an LLM, state the requirements, and eventually guide it toward creating the desired programming language, but it might take a lot of back-and-forth to get there. It might take so much back-and-forth that the person is tempted to do the work themselves. If you’ve worked with other people, you might have felt this urge too: to take over and do the job yourself, because it will be faster and more accurate than describing what you want while someone else does the work. With an actual person, at least there’s the benefit that they learn from you. An LLM doesn’t care. It’s just a machine that does the work.
If you don’t care about how the end result looks, then you might be fine letting the machine come up with something. If you need a specific result, a natural language conversation with a machine might take more effort than directly manipulating the medium.
In writing a poem, an author imagines a particular wording that evokes certain feelings. A description of that feeling, given to an LLM, might produce similarly good results, but not exactly the one the author imagined. The example is a little silly: if you already know the exact phrase, why try to coax that same phrase out of a chatbot? Just write it yourself! The same goes for other creative work, with the caveat that the work truly has to be creative, subject to the author’s taste. Perhaps writing generic articles to rank a website higher on Google was never creative work after all.
If precision and taste remain valued, then with the right tools, direct manipulation of the medium of expression is more effective than using an LLM to get there.
If you have a painting in mind, you can pick up a brush and paint it directly; no speaking or writing required. Telling someone step by step how to achieve the result you have in mind is ineffective. It adds another layer between you and the output, and with that layer comes an inevitable loss of precision.
An LLM might surprise you with its own creativity, but this creativity means a loss of precision, a loss that you often can’t afford.