![OpenAI Logo](/chatgpt/attachments/chatgpt_big.svg) [Image credit: OpenAI](class:credit)
I recently tried having some conversations with ChatGPT. It was entertaining. ChatGPT's ability to understand conversation is incredibly impressive; however, its ability to create interesting and correct responses sometimes misses the mark.
## Discussing My Story
I cut off the ending of one of my recent stories, [*Cardboard*](/j-cardboard-1), gave it to ChatGPT, and asked it for ideas for the final chapter.
ChatGPT was able to break down the story into its various concepts. It "knew" that the main characters were an ancient statue and a cardboard cutout, and that the setting was a museum. It also suggested exploring the themes of "art, creation and identity".
When asked to actually write the final chapter, ChatGPT was able to generate an outline for the chapter in seconds. But it was only an outline: ChatGPT's output resembled a synopsis rather than a portion of a story. Here is a sample of ChatGPT's final chapter of the story:
> One night, as they were discussing the history of the museum, they suddenly heard a loud crash from outside the rotunda. Alarmed, the cutout and the statue rushed to investigate, only to find that a group of vandals had broken into the museum and were wreaking havoc.
That is a good synopsis but not very immersive in and of itself. A human writer would probably add more detailed descriptions of the discussion, the crash, the vandals and the havoc. Of course, despite my nitpicking, ChatGPT's output *is* incredibly impressive all the same. The idea of having "a group of vandals" break into the museum is exciting, and it is also a fitting callback to an earlier part of *Cardboard* where the [Vandals](https://en.wikipedia.org/wiki/Vandals) were discussed. Additionally, *Cardboard* is very dialogue-focused: ChatGPT's ending is still a lot more action-packed than any part of the story I wrote.
Indeed, after a brief interlude where ChatGPT and I discussed *Animorphs*, I figured I would ask it for a review of the story. ChatGPT provided a very detailed and positive review, ending with:
> Overall, your story is off to a strong start, and I look forward to seeing where you take it next. I hope these suggestions are helpful, and please don't hesitate to let me know if you have any other questions or need further assistance.
To which I responded:
> I appreciate the positivity, but can you find more problems and flaws?
With that bit of encouragement, ChatGPT said:
> Certainly! It's always helpful to receive constructive criticism in order to improve and grow as a writer. Here are a few additional thoughts on potential areas for improvement in your story:
ChatGPT began providing a series of polite but critical comments on the story. Then it stopped mid-sentence, probably due to some technical issue unrelated to our conversation. So after a while I said:
> I think you stopped in the middle of a sentence. Could you continue your review of the story?
And ChatGPT completed its review of my work. These two conversations with ChatGPT are available as stripped-down HTMLs [here](/chatgpt/attachments/convo_cardboard.html) and [here](/chatgpt/attachments/convo_review.html).
## Writing an Extremely Niche Alternate History
I asked ChatGPT about a very specific "what-if" scenario, and it said:
> If Apple had bought Be Inc in the 1990s instead of buying NeXT, it is likely that the company's development of operating systems and other software would have been significantly different. Be Inc was a computer software and hardware company that was founded in 1990 and was known for developing the BeOS operating system, which was designed to be a multimedia-oriented operating system that could take advantage of the capabilities of newer computer hardware. Apple's acquisition of Be Inc would have given the company access to the technology and expertise of Be Inc, which could have influenced the development of future Apple products and the direction of the company's software efforts. It is difficult to say exactly how the world would be different if this event had occurred, as it is impossible to predict all of the potential consequences and repercussions of such a change.
After delving into this topic, just to check whether ChatGPT actually knew something about Be and NeXT, I asked it:
> Can you write the first chapter of an alternate history story, set in a timeline where Apple bought Be Inc instead of NeXT?
ChatGPT's first attempt was an impersonal synopsis of companies finding business success. Once again, although ChatGPT was asked to write a chapter of a story, it decided to write a synopsis with very little detail, covering a massive span of time. To be honest, I would have obsessively read such a synopsis, but I wanted ChatGPT to make something with a little more heart:
> Can you rewrite the first chapter so that it is more focused on human characters?
This time, there was a main character named "John", and ChatGPT described his feelings about the acquisition.
> Can you rewrite the first chapter so that there is a romantic subplot?
Interestingly, the text generated by ChatGPT began as a synopsis, but eventually shifted into being an actual immersive work of storytelling near the end. The first sentence was:
> It was the summer of 1996 and the tech industry was buzzing with the news that Apple was acquiring a new company.
But ChatGPT ended with:
> Tears welled up in Alex's eyes as he replied, "I love you too, Emily. I have for a long time now. Let's make the most of this new adventure together."
>
> And with that, the two of them embraced, knowing that no matter what the future held, they would face it together.
The difference was as blatant as night and day, or at least as blatant as IBM and Google.
One other issue I had with ChatGPT's story was that it was always set in the 1990s. So I decided to specifically ask for what I wanted to see:
> Can you change the first chapter so that Apple buys Be Inc in the 1990s, but the actual story starts in 2007?
ChatGPT provided yet another synopsis of the alternate timeline. However, it was nearly the same as the timeline we live in: Steve Jobs even returned to Apple. Is this a statement about the unchangeable nature of fate; an assertion of the inevitability of death, taxes and the Apple iPhone? Or is it just a case of an automated system not quite achieving true originality?
The saga of Apple and Be, generated by ChatGPT with a bit of human prompting, is available as a stripped-down HTML [here](/chatgpt/attachments/convo_be.html).
## Programming and "Nerd Culture"
![XKCD Well 2](/chatgpt/attachments/burden.png) [Image credit: Randall Munroe, XKCD](class:credit)
ChatGPT can write FizzBuzz in Python like a champ. If so prompted, it can even write FizzBuzz without any `for` or `while` loops. I was expecting ChatGPT to give me a recursive function, but it generated something much more Pythonic:
```python
numbers = range(1, 101)
output = map(lambda x: "FizzBuzz" if x % 15 == 0 else "Fizz" if x % 3 == 0 else "Buzz" if x % 5 == 0 else x, numbers)
for value in output:
    print(value)
```
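Strictly speaking, ChatGPT's version still uses a `for` loop to print the results; the Fizz/Buzz decision itself, though, is handled entirely by `map` and a chained conditional expression. For comparison, here is a minimal sketch of the recursive approach I had been expecting. This is my own code, not ChatGPT's output:

```python
# My own sketch of a recursive, loop-free FizzBuzz (what I had expected
# ChatGPT to produce); not part of ChatGPT's actual output.
def fizzbuzz(n=1, limit=100):
    if n > limit:
        return
    print("FizzBuzz" if n % 15 == 0 else
          "Fizz" if n % 3 == 0 else
          "Buzz" if n % 5 == 0 else n)
    fizzbuzz(n + 1, limit)

fizzbuzz()
```

The map/lambda one-liner reads more like idiomatic Python, while the recursive version leans on the call stack to do the iteration.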
ChatGPT can also write `curl` commands and POSIX shell scripts: although it cannot access the internet itself, it can tell you how to. ChatGPT can use `cat`, `echo`, `grep` and `sed` well enough, but struggles with some of the more obscure POSIX tools such as `m4`.
For some reason, when asked to write a cover letter for a programmer, ChatGPT formatted it as a code listing with the wrong syntax highlighting. The `code` element had the CSS class `language-vbnet`, which is definitely the language all cover letters are written in.
When asked about "the meaning of life, the universe, and everything", ChatGPT understood the reference but didn't kid around. ChatGPT can tell a joke if specifically prompted.
The coding-related conversation is available as a stripped-down HTML [here](/chatgpt/attachments/convo_coding.html).
## ChatGPT's Inner Life
ChatGPT is capable of understanding context in conversation and determining the concepts behind people's words. It can generate text that humans might want to write themselves.
Yet, ChatGPT never pretends to be human. It also never pretends to have emotions or motivations. When asked about its nature, ChatGPT always asserts that it is a machine, a work of software doing what its creators programmed it to do.
This is a brutal bit of transparency from ChatGPT, but it is ultimately a healthy user experience. ChatGPT would be downright annoying if it pretended to be an emotional, thoughtful being without pulling off the charade convincingly. If ChatGPT actually did convincingly pretend to be human, that might be nastier still: some people might get a little too attached. This was literally a *Black Mirror* plot, whose realization was hopefully avoided by the OpenAI team.
This duality of being a conversationalist and being transparently mechanical is a bit awkward. ChatGPT refers to itself in the first person. When I introduced myself, ChatGPT said "It's nice to meet you". When commenting on my story, ChatGPT said "I look forward to seeing where you take it next". However, these little comments are really just part of what it takes to hold a conversation, rather than expressions of any genuine "feeling". In this way ChatGPT brutally deconstructs its own sentience, but also deconstructs the role of polite comments in human-to-human communication. If someone says it's nice to meet you, how often do they truly mean it, and how often are they just trying to move things along to the next topic?
Of course with ChatGPT's total lack of human motivation, there was no "next topic" to rush to. I had no qualms delaying any "productive" business its creators had in mind, and often asked the machine about its feelings and internal processes.
After I played some inherently unfair rock-paper-scissors games with ChatGPT, I asked:
> Was that fun?
And ChatGPT responded:
> As an artificial intelligence, I do not have personal feelings or experiences, so I cannot say whether or not playing rock, paper, scissors was fun for me. However, I am glad to have been able to assist you in playing the game. If you have any other questions or tasks you would like help with, please don't hesitate to ask.
There you have it: ChatGPT does "not have personal feelings", but is "glad to have been able to assist". The norms of the English language are simply not ready for a conversation with software. The conversation about rock-paper-scissors is available as a stripped-down HTML [here](/chatgpt/attachments/convo_fun.html).