
LLMs Dream of Electric Prose: An Interview About Writing With ChatGPT

By Tife Sanusi
Published Jan 18, 2023
Updated Jun 13, 2024

In late November 2022, OpenAI launched ChatGPT—a fine-tuned, chatbot-style implementation of its latest large language model (LLM)—to the general public. It instantly became very popular, and for good reason. ChatGPT can answer complex questions, compose entire articles, and do homework, which helps explain how it reached over one million users in five days. People are using it to write children’s books, to code, and even as their very own personal assistant. While this is all very exciting, for some writers there’s still a little apprehension about what it means for the future of writing.

In the days before Christmas, I spent an afternoon playing around with ChatGPT. I asked it to devise a menu for my holiday dinner, to share its thoughts about how the world will end, and to write the first paragraph of this article. (Note: We edited that paragraph a bit; every writer makes mistakes.) Its answers were slightly delayed (it seemed like all one million users were on ChatGPT at the same time) but surprisingly funny, eerily human-like, and completely soulless. I got a full menu with every single traditional Christmas ingredient, but no matter how much I changed the phrasing of my prompts, I could not get an introduction that sounded anything other than, well, robotic.

Despite its limitations, I still think ChatGPT is a decent conversational partner and capable writer’s assistant. So I decided to ask ChatGPT itself about writing and what the future looks like for human writers.

What Does ChatGPT Have to Say for Itself?

How are you feeling right now?

Do you think that experiencing emotions is integral to writing?

What do you draw on when you’re writing?

Does that give you an advantage over human writers?

People who use you have noticed some bias in your responses, especially with race and gender.

(I got the same response when I asked if that affects its ability to be a neutral and unbiased writer.)

Since you’re better than human writers at some aspects of writing, should we be scared?

What about the far future?

Explaining Everything, Disclaiming Itself

One thing I noticed during my conversations with ChatGPT was how the answers to subjective or somewhat difficult questions almost always came with a caveat. When I asked about emotions in writing, answers started with “As a language model…” and answers to questions about bias began with “As an artificial intelligence...” This aura of ethereal detachment feels like an intentional decision by ChatGPT’s designers: interspersing these robotic admissions ensures that human readers treat its answers with some skepticism. For example, while answering one of my questions, ChatGPT claimed that because it was an artificial intelligence, it could present facts and figures without any bias. This is a very bold claim. I then brought up the known issues of gender and racial bias in AI, and, all too predictably, my interview subject fell back to its “I’m just an artificial intelligence and only repeat what has been fed to me” stance. This makes me think that if ChatGPT doesn’t have a future in journalism, it may have one in public relations.

On the one hand, being told repeatedly that ChatGPT was just a language model, with the limitations of one, helped me lower my expectations when asking these contentious questions. I knew the answers I got would probably read more like a PR statement than an actual answer to my question. And honestly, I prefer getting these caveats and warnings over a declarative answer to the subjective, human-centric questions that AI is just not fully equipped to answer. In my opinion, it is much safer to know upfront what you’re getting. While that may mean adjusting your expectations, it’s just one of the drawbacks of philosophizing with ChatGPT.

Still, I can’t help feeling a bit bamboozled by ChatGPT. While it is true that its responses are limited to the data it has been fed, and as such might contain some bias or fall short on more subjective questions, I still expected it to at least stand by its answers, even the difficult ones. Replying with those caveats does not in any way absolve ChatGPT of its responsibility to be the unbiased model that OpenAI says it is. If anything, it emphasizes how restricted ChatGPT actually is and how much work still needs to be done.

If you have any feedback about this post, or anything else around Deepgram, we'd love to hear from you. Please let us know in our GitHub discussions.
