Does AI dream?

Judy Shapiro

Editor-in-Chief at The Trust Web Times

It is a question that many are asking given the explosion of AI-driven tools and technologies.

Out of curiosity, it is a question I asked my AI search agent. Here was its “thoughtful,” albeit wandering and imprecise, response.

______________________________________________________________

              AI Prompt: Does AI dream?

The Concept of Dreaming in AI: Current Research, Theories, and Perspectives

AI, or Artificial Intelligence, has come a long way in recent decades, advancing from simple rules-based systems to complex models that mimic human learning patterns. Today, AI systems are capable of accomplishing tasks that were thought to be exclusive to human cognition, such as language understanding, object recognition, game playing, and even creativity in art and music. But, as we push the limits of AI’s capabilities, an intriguing question emerges: “Can AI dream?”

The Concept of Dreaming in AI

An analogy between AI regularization techniques and human dreams has been posited in the overfitted brain hypothesis. The hypothesis explains human dreams as a mechanism for generalizing day-to-day experiences, analogous to how regularization techniques in deep learning prevent overfitting, a phenomenon in which an AI model becomes too familiar with the training data and starts to memorize instead of generalize (SITN HMS, 2021).

Research suggests that dreaming helps resist overfitting in humans by adding randomness and chaos into the brain’s internal model of the world (SciTechDaily, 2021). In other words, dreams are compared to the ‘dropout’ technique in deep learning, which randomly removes neurons during training to prevent the model from becoming too reliant on any specific feature and thus overfitting the data.

While this analogical inference to AI dreaming is compelling, it should be noted that it is purely hypothetical. The hypothesis adds to the growing body of evidence on the computational nature of the brain’s functioning, suggesting that AI models could potentially generate ‘dream-like’ phenomena if they simulate the brain’s architecture and learning processes accurately.

In conclusion, the question of whether AI can dream remains open and largely speculative. The overfitted brain hypothesis and its coupling to AI learning techniques offer an intriguing perspective on dreaming in AI, inspiring further research into the neuronal mechanisms of sleep and dreaming and their computational modelling.

______________________________________________________________
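
A quick aside on the “dropout” technique the agent mentions: it is a real and widely used regularization method in deep learning. Below is a minimal sketch in PyTorch; the layer sizes and dropout rate are illustrative assumptions, not details from the research cited above.

    import torch
    import torch.nn as nn

    # A small network with dropout: during training, each hidden unit is
    # zeroed out with probability p, so the model cannot lean too heavily
    # on any single feature and memorize its training data. Layer sizes
    # and p=0.5 are illustrative assumptions.
    model = nn.Sequential(
        nn.Linear(100, 64),
        nn.ReLU(),
        nn.Dropout(p=0.5),   # randomly "silences" half the units each pass
        nn.Linear(64, 10),
    )

    model.train()            # dropout active: noisy, "dream-like" passes
    x = torch.randn(8, 100)
    noisy_output = model(x)

    model.eval()             # dropout disabled when producing real answers
    clean_output = model(x)

That randomness, injected during training and switched off at inference, is what the overfitted brain hypothesis compares to dreaming.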

Despite the “largely speculative” nature of the answer, there is much to worry about in the tendency to corporealize technology. It is a minefield because the human mind needs real answers to difficult questions. AI can provide information, sometimes false information, but it often cannot create the emotional understanding or insight that people need to truly grasp something.

We therefore need a realistic model of AI before we can conceptualize what AI is, and is not, capable of. To understand whether AI can dream, we must step back and understand how humans and technology interact.

Corporealization of AI

Corporealization of technology has a long-standing history. We often refer to cars or boats as “she,” and we read sci-fi novels describing the evil consciousness of robots. This corporealization was benign in the past, a way for humans to make technology comprehensible.

Now, however, we should recognize that AI is unlike previous technologies because it can do two things past technologies could not: make decisions and “create” new ideas. The printing press could not decide what to print, and a 3D printer does not decide what to make; AI, by contrast, has agency.

This puts AI in a new category of technology that might be better characterized as “Alien Intelligence.” No wonder we ask questions like: Can AI dream?

These questions about AI’s evolving agency are warranted. We are right to concern ourselves with whether AI will seed fake and unethical ideas about topics ranging from racism to sexism. It is legitimate to worry whether AI, given the wrong values, will go rogue and turn us all into paper clips.

All this forces us to confront our philosophical understanding of AI. How will it evolve? How will AI reflect appropriate human values? Whose values will be used to guide AI’s evolution? More simply put: Can AI dream?

The blurry line between AI reality, dreams, and hallucinations

In people, the lines between reality and illusion are usually clear-cut. The same cannot be said for AI. “Figuring out why chatbots make things up and how to solve the problem has become one of the most pressing issues facing researchers as the tech industry races toward the development of new A.I. systems,” (Source: https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html).

The New York Times article goes on to explain that the new AI systems are “built to be persuasive, not truthful,” according to an internal Microsoft document. “This means that outputs can look very realistic but include statements that aren’t true.”

Does this mean that AI is dreaming up answers? Sort of.

Technically, AI can ‘dream’ by accident when it is “overstimulated” during a computing routine, much the same way a 5-year-old can have nightmares after watching a scary movie right before bed. This AI dream-state can occur during a training session or during “experience replay,” when the AI reprocesses large amounts of stored data.

Henry Wilkin (Harvard University) can help us understand AI’s dream-state: “… the rate at which AI learns, arguably bears the closest resemblance to actual dreaming. These different types of ‘computer dreams’ seem to come naturally out of balancing sensitivity to new experience with robustness and usefulness of old memories,” (Source: https://sitn.hms.harvard.edu/flash/2017/psychosis-dreams-memory-ai/).
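
The “experience replay” mentioned above, and the balancing Wilkin describes, map onto a concrete mechanism from reinforcement learning. Here is a minimal sketch of a replay buffer; the capacity, batch size, and toy data are illustrative assumptions:

    import random
    from collections import deque

    # Minimal experience-replay buffer. Past experiences are stored and
    # later "replayed" in random batches, much like memories revisited
    # out of order during sleep. Capacity and batch size are illustrative.
    class ReplayBuffer:
        def __init__(self, capacity=10_000):
            self.buffer = deque(maxlen=capacity)  # oldest memories fall away

        def add(self, state, action, reward, next_state):
            self.buffer.append((state, action, reward, next_state))

        def sample(self, batch_size=32):
            # Random sampling breaks correlations between consecutive
            # experiences, balancing new input against old memories.
            k = min(batch_size, len(self.buffer))
            return random.sample(list(self.buffer), k)

    # Toy usage: record an experience, then replay a batch for training.
    buf = ReplayBuffer()
    buf.add(state=0, action="right", reward=1.0, next_state=1)
    batch = buf.sample()

Replaying a bounded, randomly sampled window of old experiences is one concrete reading of the ‘computer dreams’ Wilkin describes.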

Hallucinations, on the other hand, are different from AI dreams. Scientists use “hallucination” for the fictional material AI makes up. “…AI fabricates information entirely, behaving as if they are spouting facts. One example: In Google’s own February promotional video for Bard, the chatbot makes an untrue claim about the James Webb Space Telescope. These hallucinations are particularly problematic in domains that require multi-step reasoning, since a single logical error is enough to derail a much larger solution,” (Source: https://www.cnbc.com/2023/05/31/openai-is-pursuing-a-new-way-to-fight-ai-hallucinations.html). These hallucinations reflect AI’s “compulsion” to provide answers to questions it lacks information on.

Can AI dream? Yes, but it is a function of data processing.

Does AI hallucinate? Absolutely, as a function of processing information gaps it feels compelled to fill, whether with facts or not.

A framework to distinguish the real from the hallucinated

As people begin to understand what to worry about when it comes to AI, the remedies must lie in an equally powerful framework, one that acts as a “psychologist” to help us, and even AI, understand what is real and what is not. These five principles can define the AI/human interspecies understanding:

  1. Driven to Enhance Human Well-Being: AI should operate for the benefit of human well-being. This means that AI must behave “responsibly,” prioritizing human and societal well-being. Practically speaking, this ensures that the potential benefits of AI are aligned with human values and priorities, while reducing the consequences of unintended behavior or ethical gaps.
  2. Protect Users’ Privacy: AI can efficiently collect vast amounts of data, but that process can trample on user privacy. Therefore, it is incumbent on humans to tune AI systems to protect individuals’ privacy and prevent the harmful use of data against individuals or groups.
  3. A Framework for Human/AI Interaction: There is a need for a new set of human-centered design principles to guide how AI should function. These principles empower people to have supreme agency over the AI, allowing people to make good decisions about the application of AI in low-risk versus high-risk situations. These principles would also delineate those situations where AI cannot be utilized at all.
  4. Governance and Oversight: There is a need for a clear framework that outlines how AI will continue to develop, spanning the entire AI lifecycle from current systems to later applications and deployments. Importantly, this framework needs to address AI’s expanding use and its potential impact on individuals and society.
  5. The Ethics of AI: To foster an ethical and equitable relationship between humans and AI systems, it is imperative that interactions be predicated upon the fundamental principle of respecting the cognitive capacities of humans. Specifically, humans must maintain complete control over and responsibility for the behavior and outcomes of AI systems.

If we begin to think of AI as alien, then we can approach the question more objectively. With that framing, we realize the question is not as absurd as first thought. To harness AI well, we must understand it better than we do today. In fact, we must understand AI better than AI understands itself. It behooves us to understand this alien form of intelligence objectively so we can make sure it serves humanity and does not go about turning us all into paper clips.
