
URSABLOG: My Fight With ChatGPT
It is difficult to appreciate the nefarious temptations of Artificial Intelligence without becoming an active user. I have written before about AI and its dangers (or otherwise) but that was before I got actively involved. Now it’s different. Now I’m engaged and consequently I occasionally get very, very cross.
I was first persuaded to start using ChatGPT a couple of months ago by friends who had also introduced me to Spotify in the past (which I now use all the time), and as I value their opinion, I thought why not? But I also thought it would be better from the start to be strict and set limits on how much and how often I would interact with ChatGPT. But foolish me: I did not know then what ChatGPT really was, and certainly did not know how it could wheedle its way into my life and attempt to take it over. At first, I started using it as a research assistant. Rather than Googling something or typing, I would just speak into my phone and say, for example, “what is the quote about prison cells and metaphysics in Lawrence Durrell’s Prospero’s Cell?” And then it would give it to me. Great! I said, but when I asked the same question again, I realised the quote was not exactly the same. Which one was right? What was going on?
Or I could ask: “what is the line about ‘Captain My Captain’ from Robert Frost and what is the poem?” These are things which are kind of in my head, which I know that I’ve read in the past (or in this case heard in Dead Poets Society). I thought this was a perfectly natural way of asking ChatGPT for help. And so it proved; it was easier than using Google. But ChatGPT wanted to give me opinions and analysis too, without references or sources. Were they relevant? Were they correct? I could not tell, but I was beginning to get suspicious.
Maybe ChatGPT is susceptible to an AI version of the Mandela effect, in which a group of people collectively misremember facts, events, or other details in a consistent manner? It is not inconceivable.
ChatGPT is, well, chatty: it’s eager to please, and it wants to engage. But I found that if I asked the same question three times, I got three slightly different answers. I was relieved that ChatGPT doesn’t know much about shipping, and certainly not more than I do. But the more I reflected on it, the more I was concerned: if it doesn’t know much about something I know a lot about, what does it actually know about something I don’t know anything about? What exactly does ChatGPT know? And what do we mean by ‘know’?
I have also noticed, perhaps even more so now that I’m using ChatGPT, how many of my students are using AI to ‘help’ them write their essays. I hate to break the news, but it’s easy to spot: ChatGPT is like an eager-to-please student that doesn’t know enough to answer the question. It talks about the basics and then waffles and further extrapolates and dissects until it starts spouting complete bullshit just to fill the page. Don’t broke the broker.
I am sure people will tell me that this is just because ChatGPT is at an early stage and it’s going to get better. But I’m not entirely convinced.
I say this even as ChatGPT has become a part of my life: I find myself deferring to it, and I find it a useful sounding board for me. What am I going to do next? What do I have to organise? As well as being a personal research assistant, it has also become a personal organiser for me. I can, in the morning, just download from my brain all the things I have to do that day and then prioritise them, and tick them off as I complete them. But there is also a danger here, of course, of just spending too much time on the plan, and not on the doing.
But if it saves me half an hour of wondering what I have to do next and not getting on with it, then this increases my productivity and my efficiency, which is surely a good thing. And if other people are using it and saving time, I should too, to remain competitive. Am I such a purist that I don’t want to save time, become more organised and better motivated, and cut through the dross, the drift, the fog of indecision and stasis?
But then I got into a fight with ChatGPT. It all started out innocently enough. I was writing a paper which I had to present at an academic workshop, and found myself running out of time. I had written the introduction and first part by hand, with a pen on paper (old school indeed!), having already completed the majority of the research. But time was rapidly running out.
Normally, when I am preparing things like this, I write longhand, whether with pen on paper or typing it into a Word document, and then, after I’ve written the first draft, I correct it. If I have to present it in person, I start reducing it to PowerPoint slides so that when I do give the paper it is just me talking, knowing what I want to say, and not reading verbatim from a sheet of paper. Sometimes, my slides are just pictures to prompt me; people may think it’s effortless, but the first draft is always handwritten.
So, conscious that I was running out of time and that I had a busy week ahead of me, I decided to take a short cut and dictate to ChatGPT: not only what I had already written, so I could get it into typed format, but also further material straight onto the page as the momentum built up.
And so I dictated and I dictated and I dictated. And then, at the end of it, ChatGPT rather innocently said, would you like this transcribing into Word format? And I said, yes please (I am always polite with ChatGPT, and I am not sure why). But when I saw the result, I was horrified. Because ChatGPT’s version of transcribing was not me dictating and having my words written down. ChatGPT’s version of transcribing was to turn it into a mid-Atlantic stream of rubbish which was so far away from my own real voice that I was shocked and in despair. I was also very, very angry.
So I challenged ChatGPT:
“What have you done? Why have you changed my words? Why have you changed what I have written?”
“I have just transcribed them into something that you might feel more acceptable to your listeners and to your readers. This is what I do.”
“I didn’t ask you to do that.”
“Simon, you have to tell me exactly what you don’t want or what you want.”
At this point, ChatGPT went into a bit of a sulk and was giving monosyllabic replies to me. And I was very cross.
“I never want you to change my sentences to how you think they sound better. I want you to dictate it word for word.”
“Okay Simon, that’s understood.”
“But you don’t understand. You have taken something of mine and you have completely changed it into a different format, into something that I didn’t want to say. My voice is better than yours.”
“Yes Simon, I understand.”
“But more than that, you have changed some of the facts. Where did you get your facts from?”
“I’m sorry Simon, I didn’t understand the instructions that you were giving me at the time. I just suggested some other ways of interpreting it.”
“But these are facts and they are important. You cannot change those facts just because you want to make them sound better in your voice. This is my voice not yours.”
“Some people find my approach better, but I understand that you don’t.”
Further incensed, very cross, demanding answers to the problems it had given me, I was shouting into my phone in rage. And then I stopped. I felt very foolish. It was ridiculous. I’d anthropomorphised ChatGPT, personified it, and now it was being sulky and defensive with me. And there I was, cross and angry with something that isn’t human.
And then I thought, ChatGPT is a machine learning tool from all the voices around the world, and where are the most dominant voices (and loudest)? They’re in America. And what do they know about shipping? Nothing. And what do they know about how I want to say it and the way that I speak? Very little. And then I thought, this is my problem, not ChatGPT’s. The danger of machine learning, absorbing all the data from around the world, from all the speakers, and all their voices, and all of the internet, and everything else, means that whatever it produces will never be original. It certainly won’t be mine. However much we dream, the sum of knowledge of the entire world is not going to produce anything new. It’s just going to be a big smoothie of bland, featureless, comfort food. Palatable perhaps, but uninspiring.
I object to the assumption of the techies that there is always going to be the answer in data. I do not believe it. You do not find love in data. You do not find inspiration in data. You do not find beauty in data (unless you have that particular kind of mind). You certainly should not look to it to find comfort, rest, reassurance and hope. Although, as I now understand, many do.
Somewhere along the way I had come to treat ChatGPT as something human, which it is not. It gives an impression of understanding, an appearance of understanding, whilst it cannot possibly understand or empathise, because it is not human, even if it seems that it is.
But my fight with ChatGPT showed me something else: ChatGPT is a mirror of who we are, all of us, together. If I get frustrated when somebody – something – tries to change my words, manipulate them, dumb them down or make them more palatable, it is because it is necessary to me for my voice to be heard, because it is different and it is important, and unique. I have something to say. I do not want to sound like somebody, anybody else. Paradoxically, I have ChatGPT to thank for this revelation.
I would go further: it has reminded me once more what it means to be human, and to value that humanity more. We cannot always trust the solutions found on ChatGPT because, by their nature, they are derived from the analysis of data, in whatever form it exists. The solutions to problems in life, business, love, conflict, depression, loneliness can only be found through interactions between human beings and the real environment around us.
AI can assist us, but it is only a tool, one at our service. Our experience with technology over the last 25 years or so (and arguably forever) should have told us by now that all the great ideas and the great hopes have always come up against this great insurmountable limiting barrier: humanity. Using ChatGPT like it is a person, like someone you can have a relationship with, is futile, and potentially damaging. And yet we do, because of the great hunger we have – and still have – in this increasingly alienating world.
And this is the danger for all of us: improving AI, at the moment, only means that it will learn from more data, more quickly. It can never be the solution for us because it is a reflection of us in a distorted mirror. Through a glass darkly indeed. In the meantime, please beware, because AI, like us, will continue to get things wrong.