Chatbot/Author

A robot-generated digital image. The request was for a drawing of a “hamster airship.” Photo from NY Times, OpenAI.

Teachers are worried. Screenwriters are watching developments closely. Technology creators are plowing forward, thinking only of the potential payoffs. Other technology creators are scrambling to create countermoves. The chatbot authors are coming.

Writers–don’t panic…yet!

The hullabaloo is over new artificial intelligence (AI) technology which can now plausibly replicate natural language. In December, the company OpenAI released its ChatGPT application, which generated an immediate frenzy. ChatGPT can carry on conversations in concise and plain language, answer questions, and write coherent stories. Some students have already used the application to submit papers; some have already been caught. Microsoft has sunk money into the technology, and more big money is planned. As the NY Times puts it, “the new gold rush” is in full swing.

Our blogger friend Fandango tested out the new Chatbot, and this has prompted the Provocative Question of the Week. He charged the GPT (his was an app called Genie) with writing a story using a handful of prompt words–a typical blog task. His big question is whether this suggests that AI is now surpassing human intelligence, and, if so, what will happen. That is indeed a big and important question, but I’m more interested in the smaller version: Is an algorithm that writes simple stories indistinguishable from a human, and why or why not?

Humans Are Predictable

Isaac Asimov and Philip K. Dick are saying “I told you so.” The idea of programmed speech didn’t just spring up overnight. People have thought about creating proto-humans for as long as there have been people. We’re finally getting close enough to recognize machine helpers as having human shape and characteristics, even though the writers of the 1940s & 50s thought we’d have those androids done by now (2035 for Asimov; 2021 for Dick). Thousands of science fiction plots have already been written about humans being unable–at times–to separate a human speaker from a robot, once that robot has been programmed in a sufficiently complex way by humans.

How is it possible for a chatbot to speak like a human? How is it possible, in particular, for an algorithm to generate what looks like a plausible story? Language follows patterns; stories follow patterns. Stories have beginnings, middles, and ends, and woe to thee who doesn’t include all three. Long stories may even repeat bits.

Students of Greek epics are perfectly familiar with Homer’s “rosy-fingered dawn” and “wine-dark sea.” I’ve been reading The Secret History of the Mongols, and have come across “blown to the wind like hearth ashes,” meaning a tribe utterly destroyed, at least a dozen times. Yet even if the text has some repeating phrases, who could resist the poetry of “We have no friend but our shadow/we have no whip but our horse’s tail”? Could a robot really write that?

Sarah had a strange premonition. She felt like something was about to happen, but she couldn’t quite put her finger on it. She decided to take a walk into town to clear her head. As she walked, she found herself in a dream-like state and she couldn’t shake the strange feeling.

The start to a Chatbot-generated story, prompted by Fandango.
Teachers may have a point. Image from Pinterest.

Cheaters Are Already Downloading Apps

English and history teachers are definitely worried that these new writing applications will make cheating easier. Before you can say CliffsNotes, a cottage industry has already sprung up to manufacture assignments. Mind you, this is just a question of logistics. Those who malinger or are interested in cheating have long paid other students to write papers for them; the Internet made that ever so much easier and cheaper. As a result, teachers came up with ways of spotting plagiarism, and as cheating became more sophisticated, the tools used to catch cheaters also improved.

Within days of the publicity around OpenAI’s announcement, other stories followed about applications that could spot the use of ChatGPT. For instance, an NPR story profiled programmer Edward Tian, who created a countermeasure program called GPTZero to measure a text’s “perplexity” and “burstiness.” Tian characterizes human writing as having a complexity and variation in sentences that isn’t programmed into the chatbot algorithms.
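
To make “burstiness” concrete, here is a minimal, illustrative sketch–not GPTZero’s actual code, and it leaves out perplexity, which requires a language model to score how surprising each word is. It simply measures how much sentence lengths vary, using snippets of the passages quoted in this post.

```python
# Illustrative sketch only: "burstiness" approximated as variation in
# sentence length. Human prose tends to mix long and short sentences;
# chatbot output is often more uniform.
import re
import statistics

def burstiness(text: str) -> float:
    """Return the standard deviation of sentence lengths, in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

human = ("With the stroke of the loss I was so proud of he uttered the cry "
         "of a creature hurled over an abyss. I caught him. Yes, I held him.")
bot = ("Sarah had a strange premonition. She felt like something was about "
       "to happen. She decided to take a walk into town to clear her head.")

print(f"human-ish burstiness: {burstiness(human):.1f}")
print(f"bot-ish burstiness:   {burstiness(bot):.1f}")
```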

Other detector models have followed suit, including an OpenAI output detector hosted on Hugging Face, which spotted the Genie-generated text rather easily.
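
For the curious, checking a passage against such a detector takes only a few lines. This is a hedged sketch: the model id below is my assumption about the Hugging Face-hosted detector, so substitute whichever detector model you actually use.

```python
# Hedged sketch: querying a GPT-output detector hosted on Hugging Face.
# The model id is an assumption; swap in the detector you actually use.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

sample = ("Sarah had a strange premonition. She felt like something was about "
          "to happen, but she couldn't quite put her finger on it.")

print(detector(sample)[0])  # a label (e.g. "Real" or "Fake") with a confidence score
```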

One scholar, who tested GPT on a “1500 word essay comparing Judith Butler’s theses in Gender Trouble with those found in Andrea Dworkin’s corpus of works,” found that it created the correct citations but produced a wholly unoriginal essay that looked like it had been bought on the Internet. If the assignment got even a little more sophisticated, the bot started to make things up. For example, it got the Needham Thesis backward–and what a howling error that would be, one easily spotted by your average medievalist! The professor concluded that an easy way to confound the bot would be to include something only discussed in class, which is one trick teachers have been using since chalk was invented. Perhaps, then, the teacher problem is solved.

Photo from Cartoonstock

Two Chatbots Walk into a Bar…

Should we storytellers be worried? Clearly, Chatbots can write simple stories that follow patterns. And human storytellers have used patterns since cave paintings first depicted “the one that got away.” It is also true that people who are now brought up looking at tiny screens have been trained to read tiny stories. Tropes and memes have replaced longer text, to the point where companies are promoting AI as a way of “helping” writers.

For example, the company HybridChat pitches its “chat fiction” product as support for writers. Its application will convert that messy, complicated text into something much more desirable to read. This is because:

– With the decreasing attention span of readers, huge chunks of text fail to engage them. Story Mode unleashes the text only when users tap on the interface. This makes the readers feel less overwhelmed with information.
– You can generate interest among readers by depicting the synopsis of the book conversationally.
– It eliminates the tedious and boring process of reading a book by reeling in the readers through conversations.

The value of “Story Mode,” according to HybridChat

Not only is this depiction of people insulting, but the reasoning is circular. People don’t want to read because reading is boring. Besides, people now only want to read short, conversational texts. Therefore, the way to “get them to read” is to convert your ol’ boring bunches of paragraphs with all those boring, nasty words in them into conversation.

Join the club, HybridChat. There were comic book versions of Moby Dick and other classics when I was a kid. (My mother was appalled, but they were actually quite good.) There have been TV versions, four-panel cartoon versions, and even musicals to try to make works of literature more palatable. This idea, like everything else, is simply wholly unoriginal.

Which is probably the main point about Chatbot GPT. After all, a robot might indeed create an interesting image of a hamster airship, but only if someone asked it for “a hamster airship.” Would a robot, asked to produce “a picture,” draw a hamster airship?

Meanwhile, although teachers may struggle with students using an app to be wholly unoriginal, those of us who do like to read paragraphs and stories will probably find the Chatbot versions insipid and flavorless. Let’s play a game. See if you can spot below which paragraph was written by the alcoholic Southerner, by the provincial English lady, by the Irishman who wrote racy letters to his wife, by the closeted gay American expatriate, or by the Chatbot.

1. With the stroke of the loss I was so proud of he uttered the cry of a creature hurled over an abyss, and the grasp with which I recovered him might have been that of catching him in his fall. I caught him, yes, I held him—it may be imagined with what a passion; but at the end of a minute I began to feel what it truly was that I held. We were alone with the quiet day, and his little heart, dispossessed, had stopped.

2. When Sarah opened her eyes she found herself in her bed wearing a sweatsuit and her favorite pair of socks, and smiled. She knew that she had experienced something special, even though it was just a fantasy vision, and she was grateful for the premonition that had brought her there.

3. Were I to fall in love, indeed, it would be a different thing! but I have never been in love; it is not my way, or my nature; and I do not think I ever shall. And, without love, I am sure I should be a fool to change such a situation as mine.

4. How did he bank it up, swank it up, the whaler in the punt, a guinea by a groat, his index on the balance and such wealth into the bargain, with the boguey which he snatched in the baggage coach ahead? Going forth on the prowl, master jackill, under night and creeping back, dog to hide, over morning. Humbly to fall and cheaply to rise, exposition of failures. Through Duffy’s blunders and MacKenna’s insurance for upper ten and lower five the band played on.

5. They all talked at once, their voices insistent and contradictory and impatient, making of unreality a possibility, then a probability, then an incontrovertible fact, as people will when their desires become words.

Play spot the quote: which of them was by James Joyce, Henry James, William Faulkner, Jane Austen, or the Chatbot?**

Calling all you real storytellers out there. Finish this one: … three writers and a chatbot walk into a bar…

Do robots prefer absinthe or whisky? Photo from DTStudios.uk.

Which bar scene would you rather listen to? Photo from stackpathcdn.

**The answers, if you couldn’t tell…
1. Henry James, “The Turn of the Screw,” the ending, which I’ve read so many times, including for two classes, and I still can’t tell exactly what happened, which is maybe the point.
2. Hemingway. No, sorry, that’s the Chatbot. (I’m not a Hemingway fan, sorry.)
3. Jane Austen, Emma.
4. James Joyce, Finnegans Wake, which is infamously difficult to decipher; most Joyce scholars propose that it reflects the writer in a dream state.
5. William Faulkner, The Sound and the Fury.

4 Replies to “Chatbot/Author”

  1. I had only a rudimentary overview of AI used for writing before ChatGPT burst onto the scene in November of 2022. To me it seemed to be incredibly expensive for most fiction writing, and clunky even for all the expense. I ignored it.

    When ChatGPT came out, I checked it out to see what all the fuss was about. I can see how a student might use it to cheat on an essay. I can also see it being used by bloggers and freelance writers to generate articles, articles that may or may not be factually accurate. It did not strike me as an effective storyteller.

    As an author, what I do find it useful for is quick background research, and something I call plotstorming after reading an article about the technique at ‘Writer Unboxed.’ Ask it a question and you get an answer, usually with details that go beyond things you’ve thought of on your own. You must, of course, research the details of the answer to make sure they’re accurate, but this approach is still faster than slogging through search engine findings and piecing everything together.

    1. That’s excellent context, Anne. As always, as soon as I publish something that feels anti-technology, useful comments and even other news articles pop up describing how the technology might be useful. I do think the examples of it telling you the capital of Ecuador are rather silly (since Google is just as fast), but having it help with quick outlines and respond in coherent language could be useful. It may also create silly results, in the same manner that autocorrect does for your texts. On the other hand, I’m not anti-tech per se; I use Wikipedia as a quick starting point for information, despite some being horrified by it, because I know how to fan through the sources to discern what’s fake from not. Similarly, some teachers have said Chatbot GPTs may end up being like the calculator: let students live with what it does, and then test in a way that doesn’t let it substitute for their own answers. As for whether chatbots will be “better” than reading those long, boring paragraphs, well, that’s for us writers to take as a challenge, isn’t it?
