Artificial intelligence finds and builds patterns in data. But arranging alphabetic symbols into sequences that make sense to us does not, by itself, imply intelligence or an understanding of meaning.

A Stable Diffusion model can help you design cars or houses, but it has no idea what those shapes actually mean. It goes no further than knowing that they belong to a particular archetypal class, a region of its learned ‘latent space’.
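
To make the point concrete, here is a minimal sketch of how such a model is typically driven, using Hugging Face's `diffusers` library (the checkpoint name and prompt are illustrative assumptions, not anything from this text):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion checkpoint (assumed to be available).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The prompt steers the denoising process toward latent points the model
# statistically associates with car-like images; nothing in the pipeline
# represents what a car *is* or what it is for.
image = pipe("concept sketch of a futuristic car").images[0]
image.save("car.png")
```

Everything the model ‘knows’ about cars lives in where car images cluster in that latent space.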

But then… what makes this system work?

Here is the interesting part: some aspects of reality, such as language, turn out to be easier to model than we thought. You can get a machine to speak and understand language without giving it any explicit knowledge of grammar and syntax rules.

The fact that capabilities like language can be learned as an emergent property of statistical training, rather than through explicit grammar and syntax rules, is one of the most interesting findings of the generative AI wave.
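
As a toy illustration of that emergence, consider a character-level bigram model. This is a deliberately minimal sketch (the corpus below is a made-up stand-in, orders of magnitude smaller than real training data), but the principle, predicting the next symbol from observed statistics, is the same one large language models scale up:

```python
import random
from collections import defaultdict

# A made-up miniature corpus; a real model trains on billions of tokens.
corpus = "the cat sat on the mat and the dog sat on the log " * 50

# Learn nothing but co-occurrence counts: how often each character
# follows each other character. No grammar rule appears anywhere.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(prev: str) -> str:
    """Draw the next character in proportion to its observed frequency."""
    chars, weights = zip(*counts[prev].items())
    return random.choices(chars, weights=weights)[0]

# Generate text purely from those statistics.
text = "t"
for _ in range(60):
    text += sample_next(text[-1])
print(text)
```

The output already drifts toward word-like structure; scale the same idea up from character bigrams to transformers over tokens and you get fluent, grammatical text without a single rule of grammar ever being written down.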

And this speaks volumes about how ineffective many educational curricula may be for humans. For example, studying grammar to learn a new language might be a complete waste of time compared to repeated exposure to the right informational stimuli.