Computer-Generated Writing Is Here
AI-generated content isn’t waiting for academics to settle their debates over the nature of intelligence or the limits of deep learning. And it will replace lots of working journalists.
Photo created with OpenAI’s DALL-E 2 engine using the prompt “Teddy Bears working on new AI research underwater in the 1990s technology”
This weekend Steven Johnson posted an excellent story in the Times about the state of the latest generation of AI writing platforms. At more than 8,000 words, it takes a bit to process, but he does a fine job of teasing out the debate among AI researchers over the reliability of these programs. Can we trust them? Is this thinking? Who is designing them? Are they good for us? These are all important questions that will come up in Machined for years to come, but in my mind, none of them really matter.
My takeaway is simpler: this is happening now, like it or not.
Johnson focuses on GPT-3, which I have written about before. GPT-3 is the latest in a series of Large Language Model (LLM) systems that use deep learning to create original texts. LLM systems “get smart” by analyzing millions of digital texts: dictionaries, encyclopedias, books, web pages, comment threads, and even social media posts. They look for patterns and then cobble together text that fits those patterns. Because the source material includes falsehoods, offensive language, and pure fiction, the output can too.
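To make that idea concrete, here is a toy sketch of the “find patterns, then extend them” loop. It is a simple bigram model, nothing like the billions of neural-network parameters inside GPT-3, but the basic move of tallying which words follow which and then sampling accordingly is the same in spirit.

```python
# A toy "learn patterns, then extend them" text generator: a bigram model.
# GPT-3 does this with a deep neural network trained on billions of words;
# this sketch just records which word follows which in a tiny corpus and
# samples from those observations.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record every word observed to follow each word in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate new text by repeatedly sampling an observed continuation.
word = "the"
output = [word]
for _ in range(10):
    if word not in follows:  # dead end: no observed continuation
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
```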
GPT-3 was developed by OpenAI, a company that started as a non-profit in 2015 and then decided its lofty ambitions would require funding, so it created an LLC. Even so, investors' profits are capped and additional profits get funneled back into the non-profit. The company is dedicated to making AI a public resource developed for the good of humanity.
Indeed, anyone can log onto the company’s website and take GPT-3 for a spin. Type queries into the text box and your words will be sent to a supercomputer with 285,000 CPU cores in Iowa. It is worth doing, because this is what the future will look like.
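If you would rather script it than type into the playground, the same models are available through OpenAI’s API. Below is a minimal sketch using the company’s Python library; the engine name and parameters are illustrative choices, and you will need your own API key.

```python
# Minimal sketch of querying GPT-3 through OpenAI's Python library.
# Assumes `pip install openai` and an API key from an OpenAI account;
# the engine name and sampling parameters here are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3 model available at the time of writing
    prompt="What should I do if my computer won't turn on?",
    max_tokens=150,
    temperature=0.7,  # higher values produce more varied completions
)

print(response.choices[0].text.strip())
```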
There are two major sides to the LLM debate. Proponents point out the uncanny ability of these systems to create text that is novel, clear, and understandable. For example, when Johnson fed the system this prompt:
In 1958, Frank Rosenblatt introduced the Perceptron, an early version of a neural net that could only detect the location of a square on a printed card. Fifty years later, neural nets are capable of . . .
GPT-3 continued with this:
. . . detecting and classifying faces, finding patterns in data, and emulating the human brain. While the neural net matured in academia, it also found its way into the tech industry. In the late 1980s and early 90s, neural nets were used in speech recognition and character recognition applications. The growth in computing power and the ability to collect larger data sets allowed other neural net applications to emerge.
Not bad, right?
On the other side of the debate, you will find Gary Marcus, founder and CEO of Geometric Intelligence, a company acquired by Uber in 2016, and the author of five books, including Rebooting AI. He thinks LLM systems are linguistic parlor tricks hiding a fundamental failure of real intelligence. I covered Marcus’s objections a few weeks back, but this is a concise summary:
We may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 has shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.
I honestly don’t know the limits of deep learning, but I know what these systems can do today. And that alone will be incredibly disruptive, particularly for those of us who make our living writing. Let’s say, for example, you want some tech advice. You could seek it from publications with 40 years of trusted advice behind them, or, more likely, just Google your question. Or you could have it delivered to you by an AI system like GPT-3.
What should I do if my computer won't turn on?
There are a few things you can try if your computer won't turn on. First, check to make sure that the power cord is plugged in and that the outlet is working. If the power cord is plugged in and the outlet is working, then try pressing the power button on the computer. If the computer still doesn't turn on, then you can try resetting the power supply by unplugging it from the wall and then plugging it back in. If the computer still doesn't turn on, then you may need to take it to a computer repair shop.
How do I prevent people from tracking me online?
There are a few things you can do to prevent people from tracking you online. First, you can use a VPN, which will encrypt your traffic and make it more difficult for people to track you. Second, you can use a browser extension like Privacy Badger, which will block third-party tracking cookies. Finally, you can use a private browsing mode, which will prevent your browser from saving your browsing history.
What is the best gaming laptop under $1,000?
There are a few different gaming laptops that are available for under $1,000. Some of the laptops that are available in this price range include the Acer Predator Helios 300, the Asus ROG Strix GL503VS, and the Dell G3 15.
So, GPT-3 must be looking at some cut-rate tech sites because those laptops don’t get great reviews on PCMag.com.
Tech advice is pretty straightforward, but as a life-long publishing professional, I assure you that lots of stories get “written” without a lot of thought. The AP has been using AI to write quarterly earnings stories since 2014! For that matter, Johnson’s story was more than 8,000 words long; who would know if a few paragraphs were added by an AI? Is that even a problem?
Debates over the nature of intelligence, cultural biases, and the limits of deep learning are fascinating, but the age of automatic writing is already here.