An LLM may need billions of documents to learn to produce fluent language. Humans learn differently – a baby needs far fewer interactions to learn a language. And yet AI often trains faster for specific tasks. Why does this happen?

AI excels at well-defined, iterative computation within clear boundaries. This leads to two very different learning processes:

  • AI can discover and extract knowledge from large, homogeneous data sets (low-entropy data: regular, redundant, predictable).
  • Humans, by contrast, find connections in small, diverse data sets (high-entropy data: varied and full of rare events) – the sketch after this list makes the distinction concrete.
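
To make the entropy framing concrete, here is a minimal Python sketch (the toy corpora and function name are invented for illustration) that computes the Shannon entropy of a token distribution. A repetitive corpus scores low; a varied one scores higher.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (bits per token) of a sequence's empirical distribution."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A homogeneous corpus repeats the same few tokens: low entropy, easy to model.
homogeneous = ("the cat sat on the mat " * 100).split()

# A diverse corpus has many distinct, rarely repeated tokens: high entropy.
diverse = "every memory feeling and idea differs wildly across a human life".split()

print(shannon_entropy(homogeneous))  # ~2.25 bits: highly predictable
print(shannon_entropy(diverse))      # ~3.46 bits: far less predictable
```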

The result? AI is great at describing and reproducing what happened. But it struggles to explain why it happened or to generate genuinely new hypotheses. It can’t think outside the box. And this is by design; it can’t be fixed with more data.

Humans learn through meaning, unconsciously drawing on a vast, diverse and virtually inexhaustible pool of data (memories, feelings, creativity). This gives us an advantage in explaining and responding to the world, simply because we are an organic, conscious part of it.

The problem arises when people equate human consciousness with a computer and forget to treat AI for what it is: a tool with specific applications.