Have Neural Networks Accidentally Explained Why We Dream?

[Image: A sleeping human figure connected to a glowing neural network, with fragments of abstract thoughts, memories, and AI-generated code floating above.]

Last time, I was talking about ideas — and how timing can make or break them. Right now, I hope I’ve balanced all the axes and found the right moment to share this one, exploring the connection between neural networks and dreams.

Ever since reading Why We Sleep by Matthew Walker, I’ve been fascinated by the mysteries of sleep and, especially, dreams. But in recent months, I’ve also been diving deep into AI and neural networks. One morning, while still in that hazy half-dreaming state between sleep and wakefulness, a strange but compelling idea came to me: what if dreams can be understood through the lens of AI? More specifically, what if our brains generate dreams the same way large language models generate text — and what if the strangeness of dreams has everything to do with something AI engineers call “temperature”?

Let me state clearly: I’m not a neuroscientist or an AI researcher — just a very curious and avid reader, always looking for patterns in creative ways. Still, I believe this idea sits at an interesting crossroads, and I’d love to hear from people more knowledgeable in these fields — whether to debunk it, strengthen it, or point me toward similar lines of thought.

The hypothesis

(If some of the terms in this post aren’t familiar, don’t worry — I’ll give a quick and simple introduction to everything you need to follow along.)

When we dream, our brain is generating narratives, visuals, and emotions without direct input from the outside world. Could this process — this link between neural networks and dreams — be modeled as a kind of generative inference? And could the bizarre, illogical, and hyper-associative nature of dreams be the result of our brain increasing its “sampling temperature”?

In other words: are dreams just what happens when your internal model runs at high temperature?

A quick detour into neural networks

To make the analogy work, let me quickly summarise a few core ideas from the world of AI:

  • Neural networks, especially large language models like GPT, are trained on massive datasets to predict the next word (or token) based on the previous ones.
  • Once trained, you can use the model to generate new content. This is called inference.
  • During inference, the model samples from a probability distribution. Some outputs are more likely than others. If you always pick the most likely option, the output becomes repetitive or boring.
  • That’s where temperature comes in. It’s a parameter that controls the randomness of the sampling process (there’s a short code sketch after this list):
    • At low temperature, the model tends to choose more conservative, high-probability outputs.
    • At high temperature, the model samples from a wider distribution of possibilities, allowing more “creative” or unusual outputs to emerge.
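To make the temperature idea concrete, here’s a minimal sketch in Python (using NumPy; the tiny vocabulary and the logits are invented for illustration, not taken from any real model):

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Sample one token index from raw model scores (logits).

    Dividing logits by the temperature before the softmax flattens
    the distribution (T > 1) or sharpens it (T < 1).
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Invented next-word scores after a prompt like "The cat sat on the ..."
vocab = ["mat", "sofa", "roof", "moon", "algorithm"]
logits = [4.0, 2.5, 1.5, 0.3, -1.0]  # "mat" is by far the most likely

rng = np.random.default_rng(0)
for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_with_temperature(logits, t, rng)] for _ in range(10)]
    print(f"T={t}: {picks}")
# At T=0.2 the model says "mat" almost every time; at T=2.0,
# unlikely words such as "moon" start to slip in.
```

Notice that nothing about the model changes between the three runs; only the sampling gets looser or stricter.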

When you crank up the temperature, the model starts to make unexpected connections. It becomes more associative, more chaotic — sometimes more poetic, sometimes just weird.

Sound familiar?

Connecting the dots

Dreams are often surreal, emotionally intense, and filled with unlikely juxtapositions. They rarely follow strict logic or narrative coherence. In AI terms, they look a lot like what you get when you prompt a generative model with a high temperature.

Now consider that the brain, according to theories like predictive coding or the free energy principle, may itself be a kind of generative model — constantly trying to predict the world and update itself based on new data. During sleep, especially REM sleep, the brain is largely cut off from external sensory input. Instead, it might be generating outputs internally, perhaps as a way to test or refine its model.

And what if, during this process, the brain deliberately increases the diversity or randomness of these internal simulations? That could help us explore new associations, rehearse edge cases, or consolidate emotionally charged memories in novel ways. In other words, it would make sense to raise the temperature.
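At the risk of stretching the analogy, here’s a toy version of that closed loop: a tiny bigram “world model” (the corpus and every name here are invented) that, given no external input, feeds its own output back in as its next input — once at low temperature and once at high:

```python
import numpy as np
from collections import defaultdict

# Invented waking "experience" for a tiny bigram world model.
corpus = ("i walk to work . i drink coffee at work . "
          "i walk home . i dream of coffee . i dream of flying .").split()
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def dream(start, temperature, steps=12, seed=1):
    """Closed-loop generation: no external input; the model's own
    output becomes its next input, like a brain gated off from the senses."""
    rng = np.random.default_rng(seed)
    word, output = start, [start]
    for _ in range(steps):
        options = list(counts[word])
        logits = np.log([counts[word][w] for w in options])
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        word = options[rng.choice(len(options), p=probs)]
        output.append(word)
    return " ".join(output)

print(dream("i", temperature=0.3))  # tends to replay familiar routines
print(dream("i", temperature=2.0))  # drifts into rarer, looser associations
```

The “brain” here never changes between the two calls; only the temperature does, and the same trained model produces either routine replay or dreamlike drift depending on how loosely it samples.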

Interestingly, dreams are often reported to become more bizarre when we are sick, intoxicated, or sleep-deprived — states that could plausibly involve a loss of cognitive control, or a natural increase in the brain’s “sampling temperature”.

Why it matters

If this analogy holds any water, it could offer a new way of thinking about why we dream. Not just to process memories or emotions, but to expand the range of our inner model — to allow space for novelty, variation, and exploration that a low-temperature waking mind might suppress.

At the very least, thinking about dreams through the lens of generative AI gives us a useful metaphor. And maybe, just maybe, it hints at something deeper: that the line between artificial and natural intelligence is thinner than we think.

Final thoughts

Let me reiterate: this is not a scientific conclusion. I’m not an expert in these fields — just someone who’s curious and trying to think creatively about where different areas of knowledge might intersect. Still, I hope it sparks curiosity — especially around the growing fascination with neural networks and dreams, and what this intersection might teach us about intelligence itself.

If you’ve seen similar theories, or have thoughts of your own, I’d love to hear them.
