By Sarah Conrad, Junior Copywriter
Scientific breakthroughs resulting from artificial intelligence often sound like the stuff of sci-fi novels: self-driving cars and programs that compile human features into a realistic—yet nonexistent—face. However, many applications of AI have gone mainstream, whether it’s Siri responding to your verbal commands or Facebook suggesting who to tag when you upload a picture. As amateurs and professionals alike tinker with the possibilities of AI, one thing that’s been on the minds of many is whether they can use it to write coherent strings of text.
The idea that AI can write logical and interesting copy is a popular one, with people creating and training neural networks to recognize patterns in existing writing and then spit out original text. Some people undertake this challenge for the sake of comedy, like Janelle Shane did when she fed a neural network common sayings from conversation hearts and had it write its own, with results ranging from mildly out of the norm (“CUTE KISS”) to plain bizarre (“STANK LOVE”). Others are exploring its ability to create large amounts of useful copy in very little time, such as those who wrote the algorithm that used current research posted on SpringerLink to write a 233-page textbook on lithium-ion batteries.
On a less scientific and more creative route, Ross Goodwin attached a GPS, a microphone, and a camera to his laptop, stuck them on his car, and had his network write a story in the spirit of Jack Kerouac’s On the Road. The novel it created, 1 the Road, contains lines of oddly poetic prose sprinkled throughout predictably clunky text.
Meanwhile, the Alibaba Group’s marketing arm, Alimama, created a network that apparently writes product descriptions that pass the Turing Test. The Turing Test involves a moderator receiving two writing samples, one from a human and one from a computer. If the moderator can’t determine which came from the computer, the AI passes the test. While I’d like to take a look at the copy, I don’t quite trust Google Translate (another example of AI) to render the original Chinese text accurately in English. However, Alimama’s brief product descriptions rely mostly on surface-level details found in images and specs—these don’t require much creativity, making it easy copy for a neural network to churn out.
There are even online communities based on people training neural networks to produce computer-generated fiction, such as LiterAI. This particular website encourages users to create their own artificial stories, with tutorials to help those who have no idea what they’re doing (like me). This, of course, piqued my interest, leading me to try training a neural network myself, ask someone else to build it for me when the confusion set in, and then test it out on my coworkers.
To train the neural network, we followed this tutorial. This type of machine learning generates copy the same way the predictive text feature on smartphones works: it looks at the sequence of letters and words entered so far and uses patterns learned from its training data to predict a plausible next word.
Our data: nail polish names. I wanted something with ridiculous copy so that 1) our results might be able to blend in with the input, and 2) our results would be funnier. In the beginning, we had the network review the data five times; then we asked it to review it 10 times (in machine learning terms, each full pass over the data is called an epoch). The network becomes more “intelligent” with each pass, so the output from the 10-pass run should hew closer in style to the input.
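For the curious, the prediction idea above can be sketched with a toy model. The tutorial we followed uses a recurrent neural network, which is far more capable than what fits in a blog post; purely as a simplified illustration of "look at recent characters, predict a plausible next one," here is a character-level Markov model. The sample names and every function name here are my own invention for the sketch, not our actual training data or the tutorial's code:

```python
import random
from collections import defaultdict

# Stand-in training data: a handful of pun-style nail polish names.
TRAINING_NAMES = [
    "Mauve Over",
    "Berry Naughty",
    "Cherry On Top",
    "Mint To Be",
    "Lilac Attack",
]

START, END = "^", "$"  # sentinels marking the start and end of a name

def build_model(names, order=2):
    """Count which character follows each `order`-length context."""
    model = defaultdict(list)
    for name in names:
        padded = START * order + name.lower() + END
        for i in range(len(padded) - order):
            context = padded[i:i + order]
            model[context].append(padded[i + order])
    return model

def generate(model, order=2, rng=None, max_len=30):
    """Sample one new name, one character at a time."""
    rng = rng or random.Random()
    context = START * order
    out = []
    while len(out) < max_len:
        nxt = rng.choice(model[context])   # pick a plausible next char
        if nxt == END:
            break
        out.append(nxt)
        context = context[1:] + nxt        # slide the context window
    return "".join(out).title()

model = build_model(TRAINING_NAMES)
print(generate(model, rng=random.Random(42)))
```

With only two characters of context, the output mostly stitches fragments of the input names together; a neural network does the same kind of next-token prediction but with a much longer, learned notion of context, which is why its output can feel eerily on-brand.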
Then it was time to conduct the Turing Test. Disclaimer: these results are extremely skewed. I filtered through more than 2,000 lines of output to find the most believable names and then tried to find similar-sounding real ones. After compiling these into a Google Form, I distributed the quiz around the Decode office. The average person got half of the answers right. Since randomly guessing between “human” and “computer” would also score 50 percent, respondents couldn’t reliably tell the generated names from the real ones, so we can conclude that the neural network’s results were decently convincing.
Then hilarity ensued: it was time to share the more outrageous results from the network. The following gems happen to be my favorites:
With the reasonable success of the generated nail polish names under my belt, I wanted to try for longer copy outputs. This time, we fed the network every single Decode blog and had it review the writing 10 times. The results were less than impressive—there was absolutely no point in trying to see if people could believe a human wrote this. Enjoy this excerpt:
“The brand position was a no professional advertiser advertising data from the mobile parents are some agency in the content. This is the story of this morning. We all know you can do a professional opinion. There are a few “ugh yourself in the targeting build and between the workshops and what they’ve been taking into the constance of the same of mobile screen in company, and events or location, and changes solves the based to a company and serving a plan”
Needless to say, I think my job is safe for the time being. While the neural network we used for our outputs was basic compared to what others are utilizing, human writers bring character, depth, insight, and, most importantly, love to their craft that a computer just can’t replicate—yet.