Image credit: OpenAI
I recently tried having some conversations with ChatGPT. It was entertaining. ChatGPT’s ability to understand conversation is incredibly impressive, but its ability to produce interesting and correct responses sometimes misses the mark.
Here is an explanation of ChatGPT, generated by ChatGPT:
ChatGPT is a large language model that was trained on a massive dataset of text from the internet, called the Common Crawl, which contains a diverse range of text from websites, books, articles, and more. The training process involved feeding the model with billions of words of text, and adjusting the model’s parameters to minimize the difference between the model’s predictions and the actual text in the dataset. The model is fine-tuned to generate text that is similar to the text it was trained on, and as a result, it can generate text that is coherent and contextually appropriate. The training process also includes pre-processing step to remove duplicates, clean the data and tokenize it for the model to learn from.
I cut off the ending of my recent story, Cardboard, and asked ChatGPT what to do with the final chapter.
ChatGPT was able to break down the story into its various concepts. It “knew” that the main characters were an ancient statue and a cardboard cutout, and that the setting was a museum. It also suggested exploring the themes of “art, creation and identity”.
When asked to actually write the final chapter, ChatGPT was able to generate an outline for the chapter in seconds. But the generated text was only an outline: ChatGPT’s output resembled a synopsis rather than a portion of a story. Here is a sample:
One night, as they were discussing the history of the museum, they suddenly heard a loud crash from outside the rotunda. Alarmed, the cutout and the statue rushed to investigate, only to find that a group of vandals had broken into the museum and were wreaking havoc.
That was a good synopsis but not very immersive in and of itself: ChatGPT was telling instead of showing. A human writer would probably add more detailed descriptions of the discussion, the crash, the vandals and the havoc. Of course, ChatGPT’s output was incredibly impressive all the same. The idea of having “a group of vandals” break into the museum was exciting, and also a fitting callback to an earlier part of Cardboard where the Vandals were discussed. Additionally, Cardboard was very dialogue-focused: ChatGPT’s ending was still more action-packed than any part of the story I wrote.
Indeed, after a brief interlude where ChatGPT and I discussed Animorphs, I figured I would ask it for a review of the story. ChatGPT provided a very detailed and positive review, ending with:
Overall, your story is off to a strong start, and I look forward to seeing where you take it next. I hope these suggestions are helpful, and please don’t hesitate to let me know if you have any other questions or need further assistance.
To which I responded:
I appreciate the positivity, but can you find more problems and flaws?
With that bit of encouragement, ChatGPT said:
Certainly! It’s always helpful to receive constructive criticism in order to improve and grow as a writer. Here are a few additional thoughts on potential areas for improvement in your story:
ChatGPT began providing a series of polite but critical comments on the story. Then it stopped mid-sentence: probably some technical issue with no relation to our conversation. So after a while I said:
I think you stopped in the middle of a sentence. Could you continue your review of the story?
And ChatGPT completed its review of my work. These two conversations with ChatGPT are available as stripped-down HTMLs here and here.
I asked ChatGPT about a very specific “what-if” scenario, and it said:
If Apple had bought Be Inc in the 1990s instead of buying NeXT, it is likely that the company’s development of operating systems and other software would have been significantly different. Be Inc was a computer software and hardware company that was founded in 1990 and was known for developing the BeOS operating system, which was designed to be a multimedia-oriented operating system that could take advantage of the capabilities of newer computer hardware. Apple’s acquisition of Be Inc would have given the company access to the technology and expertise of Be Inc, which could have influenced the development of future Apple products and the direction of the company’s software efforts. It is difficult to say exactly how the world would be different if this event had occurred, as it is impossible to predict all of the potential consequences and repercussions of such a change.
After delving into this topic with ChatGPT, just to check whether ChatGPT actually knew something about Be and NeXT, I asked ChatGPT:
Can you write the first chapter of an alternate history story, set in a timeline where Apple bought Be Inc instead of NeXT?
ChatGPT’s first attempt was an impersonal synopsis of companies finding business success. Once again, although ChatGPT was asked to write a chapter of a story, it decided to write a synopsis with very little detail covering a massive span of time. I would have obsessively read such a synopsis, but I wanted ChatGPT to make something with a little more heart:
Can you rewrite the first chapter so that it is more focused on human characters?
The generated text focused on a main character named “John”, and ChatGPT described his feelings about the acquisition.
Can you rewrite the first chapter so that there is a romantic subplot?
The text generated by ChatGPT began as a synopsis, but eventually shifted into being an actual immersive work of storytelling near the end. The first sentence was:
It was the summer of 1996 and the tech industry was buzzing with the news that Apple was acquiring a new company.
But ChatGPT ended with:
Tears welled up in Alex’s eyes as he replied, “I love you too, Emily. I have for a long time now. Let’s make the most of this new adventure together.”
And with that, the two of them embraced, knowing that no matter what the future held, they would face it together.
The difference was as blatant as night and day, or at least as blatant as IBM and Google.
One other issue I had with ChatGPT’s story was that it was always set in the 1990s. So I decided to specifically ask for what I wanted to see:
Can you change the first chapter so that Apple buys Be Inc in the 1990s, but the actual story starts in 2007?
ChatGPT provided yet another synopsis of the alternate timeline. However, it was nearly the same as the timeline we live in: Steve Jobs even returned to Apple. Was this a statement about the unchangeable nature of fate; an assertion of the inevitability of death, taxes and the Apple iPhone? Or was it just a case of an automated system not quite achieving true originality?
The saga of Apple and Be, generated by ChatGPT with a bit of human prompting, is available as a stripped-down HTML here.
Image credit: Randall Munroe, XKCD
ChatGPT could write FizzBuzz in Python like a champ. When so prompted, it could even write FizzBuzz without any for or while loops. I was expecting ChatGPT to give me a recursive function, but it generated something much more Pythonic:
numbers = range(1, 101)
output = map(lambda x: "FizzBuzz" if x % 15 == 0 else "Fizz" if x % 3 == 0 else "Buzz" if x % 5 == 0 else x, numbers)
for value in output:
    print(value)
ChatGPT could also write curl commands and POSIX shell scripts: so although ChatGPT cannot access the internet, it can tell you how to do so. In shell scripts, ChatGPT could use cat, echo, grep and sed well enough, but it struggled with some of the more obscure POSIX tools such as m4.
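To give a flavor of the kind of script ChatGPT handled well, here is a short POSIX shell snippet using those tools. This is my own illustration, not ChatGPT’s verbatim output:

```shell
#!/bin/sh
# Write a small log file, count its error lines with grep,
# then lowercase "ERROR" with sed to produce a report.
printf 'ok\nERROR: disk full\nok\nERROR: timeout\n' > /tmp/demo.log
grep -c 'ERROR' /tmp/demo.log            # prints 2
sed 's/ERROR/error/g' /tmp/demo.log > /tmp/demo_report.txt
cat /tmp/demo_report.txt
```

Everything here is plain POSIX: cat, grep and sed behave the same across shells, which is presumably why ChatGPT handled them so much better than niche tools like m4.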
For some reason, when asked to write a cover letter for a programmer, ChatGPT formatted it as a code listing with the wrong syntax highlighting. The code element had the CSS class language-vbnet, which is definitely the language all cover letters are written in.
When asked about “the meaning of life, the universe, and everything”, ChatGPT understood the reference but didn’t kid around. ChatGPT can tell a joke if specifically prompted.
The coding-related conversation is available as a stripped-down HTML here.
ChatGPT was capable of understanding context in conversation and determining the concepts behind people’s words. It could generate text that humans might want to write themselves.
Yet, ChatGPT never pretended to be human. It also never pretended to have emotions or motivations. When asked about its nature, ChatGPT always asserted that it was a machine, a work of software doing what its creators programmed it to do.
This was a brutal bit of transparency from ChatGPT, but ultimately a healthy user experience. ChatGPT would have been downright annoying if it had pretended to be an emotional, thoughtful being without pulling off the charade convincingly. If ChatGPT had successfully pretended to be human, that might have been nastier still: some people might have gotten a little too attached. That was literally a Black Mirror plot, one the OpenAI team has hopefully kept from playing out in reality.
The duality of being a conversationalist and being transparently mechanical was a bit awkward. ChatGPT referred to itself in the first person. When I introduced myself, ChatGPT said “It’s nice to meet you”. When commenting on my story, ChatGPT said “I look forward to seeing where you take it next”. However, these little comments were really just part of what it took to hold a conversation, rather than words based on any genuine “feeling”. In this way ChatGPT brutally deconstructed its own sentience, but also deconstructed the role of polite comments in human-to-human communication. If someone says it’s nice to meet you, how often do they truly mean it, and how often are they just trying to move things along to the next topic?
Of course with ChatGPT’s total lack of human motivation, there was no “next topic” to rush to. I had no qualms about delaying any “productive” business its creators had in mind, and often asked the machine about its feelings and internal processes.
After I played some inherently unfair rock-paper-scissors games with ChatGPT, I asked:
Was that fun?
And ChatGPT responded:
As an artificial intelligence, I do not have personal feelings or experiences, so I cannot say whether or not playing rock, paper, scissors was fun for me. However, I am glad to have been able to assist you in playing the game. If you have any other questions or tasks you would like help with, please don’t hesitate to ask.
There you have it, ChatGPT did “not have personal feelings”, but was “glad to have been able to assist”. The norms of the English language are simply not ready for a conversation with software. The conversation about rock-paper-scissors is available as a stripped-down HTML here.
I fed most of the previous sections of this article to ChatGPT and asked it for feedback. I did not provide the chat logs or images. The explanation of ChatGPT by ChatGPT was not initially present: I added it in response to ChatGPT’s feedback:
One potential addition to the article could be a brief explanation of how ChatGPT works and what kind of training data it was trained on. This would provide some context for readers who may not be familiar with the technology.
When asked to title the article, ChatGPT said:
“Exploring the Capabilities and Limitations of ChatGPT in Creative Writing: A Personal Experience”
This title effectively conveys the main focus of the article, which is exploring the capabilities and limitations of ChatGPT in the context of creative writing, and it also highlights the personal experience aspect of the article, which makes it more relatable and interesting to readers.
The conversation with ChatGPT about this article is available as a stripped-down HTML here.