AI and willing suspension of disbelief

As Generative AI models get more powerful, humans need more critical thinking

David Pereira
3 min read · Dec 15, 2022
A futuristic image of the writer of this article, created by an AI model

Suspension of disbelief, sometimes called willing suspension of disbelief, is the avoidance of critical thinking or logic in examining something unreal or impossible in reality, such as a work of speculative fiction, in order to believe it for the sake of enjoyment (Wikipedia).

GPT-3, DALL·E 2, Stable Diffusion, Midjourney, and the latest ChatGPT from OpenAI: these are only a few examples of really powerful Generative AI models with the power to suspend our disbelief. They seem so capable that they switch off our critical thinking, making us believe that the answers they give to our questions, or the images we ask them to create for us, are real and accurate.

AI has reached a level of maturity at which some of its outputs, whether text, images or even video, are hardly distinguishable from those created by humans; at the same time, these models are so immature that some of those outputs are impressively wrong.

An example of an impressively wrong answer by ChatGPT to a simple question

This is a huge problem, because some people think these models are ready for professional or academic use even when their creators warn us otherwise, extending our willing suspension of disbelief toward AI well beyond the sake of enjoyment.

Even for the sake of enjoyment, these models can be extremely dangerous, as they can spread misinformation at scale. Take the new Lexica Aperture model as an example. It can generate photorealistic images and is especially good with celebrities. Now imagine that someone, just for the sake of enjoyment, decides to create fake images of famous people that could affect their personal lives if others start to think the images are real. Our willing suspension of disbelief, together with the scale of social networks, might become a huge problem.

A fake photorealistic image of Elon Musk and Natalie Portman created by Lexica Aperture

To overcome these challenges, we as AI consumers need to improve our critical thinking. This includes being conscious of our biases, considering the consequences of our actions, doing our own research and, of course, assuming that AIs are not always right (actually, when they are right, it is only by chance):

LLMs are trained to produce sequences of tokens, some of which happen to make sense together
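That caption is worth unpacking: a large language model does not look up facts, it scores candidate next tokens by likelihood and emits a plausible-sounding continuation. Here is a minimal sketch of this, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (chosen for illustration, not one of the models discussed above):

```python
# Minimal sketch: a language model only ranks possible next tokens by
# likelihood; nothing in this computation checks factual truth.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Logits for the token that would follow the prompt
    next_token_logits = model(**inputs).logits[0, -1]

# Top 5 candidate continuations, ranked purely by probability.
# A fluent but wrong token can easily outrank the correct one.
probs = next_token_logits.softmax(dim=-1)
top = torch.topk(probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {float(prob):.3f}")
```

Whatever the printed ranking turns out to be, the point stands: the model optimizes for plausible-sounding text, so a confident, fluent answer is no evidence of a correct one.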

But this is not only a consumer responsibility. Tech companies training these models should make sure they have thorough risk-management, accuracy, robustness and human-oversight audit processes in place before releasing them to the general public.

AI is a fantastic tool, one with the power to help improve our lives and our society, but we need to be critical about its current capabilities and limitations. This is not fiction anymore: powerful AIs are real, but they are still immature and dangerous if we willingly suspend our disbelief when we use them.

David Pereira

Data & Intelligence Partner at NTT DATA Europe & Latam. All opinions are my own. https://www.linkedin.com/in/dpereirapaz/