If you are trained as an architect or designer, you understand the decision fatigue that comes with design, or even with crafting an email, where nuances in wording convey subtleties in meaning. When Generative AI tools such as ChatGPT or DALL-E came along, it seemed too easy. You would type in a prompt, and the rest was left to "fate": you were either wildly surprised and in awe of what it produced or extremely disappointed that it didn't understand your intentions.
As the digital era progresses, businesses across various industries are continually exploring ways to harness the power of artificial intelligence (AI) to enhance their processes and outputs, whether that's a business plan, a short movie trailer, or a sales email. Generative AI, by design, can produce original content (text, images, music, or any other digital form) that reflects the style, context, and nuances of its training data, a feat that often appears magical.
Within this evolving landscape, however, it's essential to understand the intricacies of Generative AI and its role in shaping our perception of artificial intelligence outputs. At the heart of this is a phenomenon we are calling the 'Illusion of Completeness.'
The 'Illusion of Completeness' refers to the perceived accuracy or wholeness of AI-generated content, which might seem perfect to the untrained eye but may not exactly align with the specific requirements, subtleties, or context intended by the user. This illusion arises from a mix of our brain's wiring, cognitive biases, and inherent appreciation of beauty. Let's explore the factors that feed into the 'Illusion of Completeness' in Generative AI.
Value-to-Effort Ratio:
In the realm of AI, the value-to-effort ratio captures the stark contrast between a user's minimal input and the AI's extensive output. From a psychological standpoint, an abundance of output against a minimal input can heighten our perception of completeness. This challenges our traditional understanding of the ratio, where more effort typically equates to more value. Think of the speed drawing challenge from years past, in which artists were tasked with sketching the same subject in 10 minutes, 1 minute, and 10 seconds. Clearly, the quality varied with the time allotted. Yet generative AI such as DALL-E can churn out intricate content from just a brief prompt or a few parameters, producing, for instance, a batch of 10 images in just 20 seconds.
Example of an eye drawing in 10 minutes, 1 minute, and 10 seconds (by rachelthellama).
Consider the following image comparisons:
Left: DALL-E 2 Prompt: 3d realistic render, maya, ambient studio light, splash page image, sci-fi, futurism, greenery, aerial view, A city of bikes, scooters, pedestrians friendly city. Right: DALL-E 2 Prompt: Future of mobility workshop and symposium poster without text.
The left image is perceived as more refined and detailed, fitting the criteria of high value despite low effort. Conversely, the image on the right falls short in quality, illustrating low effort and low value. The distinction is similar to how we'd easily spot a six-fingered hand generated by AI: it's not what we'd recognize as a regular hand. Here, we clearly see the 'magic' in the image on the left and forget about the incomprehensible jumble on the right.
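To make the value-to-effort contrast concrete, here is a minimal sketch of generating a batch of images from one short prompt. It assumes the OpenAI Python SDK with an API key in the OPENAI_API_KEY environment variable; the model name, prompt, and parameters are illustrative, not a recommendation.

```python
# One short prompt in, ten candidate images out: seconds of human
# effort against a batch of machine output.
# Assumes: openai Python SDK (v1+), OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.images.generate(
    model="dall-e-2",  # illustrative model choice
    prompt="Aerial view of a city designed for bikes, scooters, and pedestrians",
    n=10,              # ten images from a single prompt
    size="1024x1024",
)

# Each result arrives as a hosted URL; the whole batch typically
# returns in well under a minute.
for i, image in enumerate(response.data, start=1):
    print(f"Image {i}: {image.url}")
```

The point is not the API call itself but the ratio it exposes: the prompt takes seconds to type, while producing ten comparable images by hand would take hours.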
Perceived Coherence:
While generative AI models like ChatGPT and DALL-E can produce impressive outputs with seemingly little effort from their human counterpart, they lack true comprehension and contextual understanding. Despite these limitations, the results can seem internally coherent, visually complete, and contextually appropriate.
Often, the perception of coherence lies in the ambiguity and conciseness of both the user's inputs and the AI-generated outputs. The prompts provided to generative AI models can be concise and open-ended, leaving room for interpretation, yet users may assume that the machine understands their intentions fully and will generate outputs aligned with their expectations.
Here's an example:
In this example, the response attempts to convey coherence and relevance to the topic of AI's societal impacts. However, the content lacks depth, true understanding, and specific examples to substantiate the claims.

Visual strategies such as bolding and uniform sentence length can achieve the illusion of completeness.
Fill in the _______
From a neuroscientific viewpoint, our brains are naturally wired to fill in missing information, a survival-oriented mechanism that helps us interpret the world around us. A phenomenon called "filling in" occurs, for instance, when the brain supplies the missing visual information in a person's blind spot. Reality is a construction of the brain, and the brain has evolved for survival, not accuracy. Consequently, when we examine an AI output, our brain instinctively completes any apparent gaps, making the result seem 'whole' even when it lacks certain aspects. Often, it's easier to believe that AI can do more than it actually can.
Emotional Attachment and Sense of Ownership:
When users witness generative AI producing something remarkable and aligned with their desires, they may develop an emotional attachment to the output. Specifically, the effort a user exerts in crafting the prompt can create a feeling of "ownership" over the generated output, whether that's text, an animation, or an image. This emotional response further reinforces the belief that the AI has grasped their intent comprehensively.
This sentiment might be explained by the effort heuristic, which suggests that an object's worth is gauged by the perceived effort invested in it. It could explain why users who refine and iterate on their prompts might feel a stronger sense of ownership of the AI-generated content.
Confirmation Bias:
Confirmation bias is a cognitive bias that affects our interaction with AI outputs. It causes us to process new information in a way that affirms our existing beliefs or expectations. So, if the AI's output is somewhat aligned with what we expect, our brain is inclined to view it as more precise than it might be; the certainty with which ChatGPT generates false information, for example, can trick many. On the other hand, if the output is not what we expected, we might dismiss it and give it less weight, or keep editing the prompt until the output coincides with our preexisting beliefs and expectations.
Summary: Illusion of Completeness in Generative AI
These cognitive biases are more complex than described in this article; they are merely a glimpse into why AI feels magical. Is it too good to be true? How can individuals and teams mitigate the Illusion of Completeness in Generative AI and think critically about how we use it?
- Promote Awareness: Users and consumers of generative AI should be educated about the technology's limitations. Schools, governments, local communities, and companies should teach how generative AI works. Understanding that AI models lack genuine comprehension and can produce unpredictable and incorrect results will foster more realistic expectations.
- Iterative Hybrid Initiatives: Encouraging collaboration between AI and human creators, where the AI assists and the human guides, can lead to more reliable and contextually accurate results. The human plays a critical role in providing feedback to fine-tune the machine. This will allow more diligent tracking of AI outputs and user control over generated content.
- Clear Communication of Intent: Users should be educated about prompt design and encouraged to provide specific, explicit prompts to avoid misinterpretation by the AI model. A bad prompt is unclear and ambiguous, leaving much to be "misinterpreted" by the machine and leading to irrelevant, inappropriate, misleading, inadequate, or biased outputs; the sketch after this list contrasts a vague prompt with a specific one.
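As a hedged illustration of prompt specificity, the sketch below sends the same request twice, once vaguely and once with explicit constraints, through the OpenAI chat API. The model name and prompt wording are assumptions for demonstration, not a prescription.

```python
# Contrast a vague prompt with a specific one to see how much
# "interpretation" we otherwise leave to the model.
# Assumes: openai Python SDK (v1+), OPENAI_API_KEY set in the environment;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompts = {
    "vague": "Write about AI's impact on society.",
    "specific": (
        "Write a 150-word paragraph on how generative AI affects creative "
        "professions. Cite one concrete risk and one concrete benefit, "
        "and use a neutral, journalistic tone."
    ),
}

for label, prompt in prompts.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(reply.choices[0].message.content)
```

The specific prompt doesn't make the model understand intent; it simply narrows the space of plausible completions, so there is less for our own "filling in" to do.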
The Illusion of Completeness in Generative AI stems from a combination of factors related to human cognition, AI limitations, and expectations. By being aware of these factors and adopting suitable strategies, we can harness the true potential of generative AI while maintaining realistic expectations about its capabilities. As AI continues to evolve, understanding the nuances of human-AI interactions becomes increasingly critical for creators and consumers of AI-generated content.
Til next time! 🙂