GenAGI’s evolution problem

Steve Jones
Apr 9, 2024

In his seminal work “The Blind Watchmaker”, Richard Dawkins described a simple program that let you simulate, with yourself as the selection agent, how evolution works. There is a whole field of AI called “Evolutionary AI” which uses those sorts of principles to create AI solutions that iterate and adapt to solve problems. These principles have also been applied to neural networks, for instance in combination with reinforcement learning, to create models that can be trained to perform tasks.
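For concreteness, here is a minimal sketch of that kind of cumulative selection, in the spirit of Dawkins’ “weasel” example from the same book, with an automated fitness function standing in for the human selector (the names and parameters here are purely illustrative):

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate: str) -> int:
    """Count the positions that already match the target phrase."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    """Copy the parent, flipping each character with a small probability."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in parent
    )

def evolve(brood_size: int = 100) -> int:
    """Cumulative selection: each generation, keep the fittest offspring."""
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generation = 0
    while parent != TARGET:
        offspring = [mutate(parent) for _ in range(brood_size)] + [parent]
        parent = max(offspring, key=fitness)
        generation += 1
    return generation

if __name__ == "__main__":
    print(f"Reached the target phrase in {evolve()} generations")
```

The point is not the program itself but the loop: variation, selection, and inheritance, repeated until something that looks designed falls out.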

There is a key question for AGI: is intelligence an emergent property of information, or is it an emergent property of evolution and adaptation?

I’m betting it is the latter, and I think there are good reasons to not think it is the former.

Neuron counts aren’t all they’re cracked up to be

There is a theory of AGI that says if you build something as big and complex as a human brain using a neural net, then intelligence will become an emergent property of it, and the brain will wake up.

The thing is that elephants have more neurons than people, and Flat Earthers probably have the same number of neurons as a Nobel Prize winner. This indicates that neuron count isn’t the only measure, and that even within a single species like humans, the results from the same basic neuron structures can be wildly different.

[Image: list of the top four animals by neuron count; humans are third, behind short-finned pilot whales and elephants]

Were the Neanderthals intelligent? The Denisovans? Everything seems to indicate they could be pretty sophisticated. Crows can plan for future tool use with a significantly lower neuron count than humans, and way lower than the claims for GPT-4. Indeed, there are remarkable stories of people having half their brain removed and still being smart.

So if Flat Earthers and GPT-4 have more neurons than crows, is there something else we should think about before boasting about AGI potential?

A few billion years of natural selection would like a word

The first multicellular life appeared about 700–600 million years ago, and while the jump from single-celled to multi-celled ‘things’ happened multiple times, the one that led to animals happened only once. From there on, though, things picked up, and by 525 million years ago we’d had the Cambrian explosion. From there to here we’ve had explosions and extinctions, and all the time we’ve had billions of individual experiments around the globe; some of those experiments came to the brink of extinction as recently as a million or so years ago.

In all of this, these experiments, from the first few multicellular creatures through to modern humans, had a training mechanism unlike any other: reality. A reinforcement model played out across countless dimensions and features every day, driving selection of which genes would make it and which would not. At what stage did those creatures begin to plan? When did they start being able to use tools? When could they pass on their knowledge to the next iteration? It would be futile to map out the number of generations that have existed, and impossible to map out the number of interactions and individual choices that trained these experiments, over those generations, into creatures that can both win a Nobel Prize and believe in a Flat Earth.

At some stage in this journey, some creature could be said to have passed a threshold of intelligence, but it almost certainly wasn’t a clear break. It was a slight improvement, and it was built upon by subsequent generations of experiments. Was it linked to language? Was it linked to throwing things? We don’t know, but we know it happened.

Reddit is never going to compete with evolution

Leveraging even the vast information on the internet and expecting that to create intelligence makes two massive assumptions:

  1. That Flat Earthers are less vocal than Nobel Prize winners in the data sources
  2. That intelligence is an emergent property of volume of information

We all know that the first is spectacularly untrue, and the second is a hope rather than a guarantee, a hope that has yet to produce an intelligence as good as a crow’s.

The pursuit of ever larger, ever more energy-consuming models is driven by both of these assumptions: the belief that intelligence can be brute-forced through volume, and that it isn’t an emergent property created and selected by evolution.

No creature in the history of the planet gained intelligence by reading Reddit as an information source; indeed, many have clearly demonstrated that reading such sources has caused their intelligence to devolve. Yet this is a primary goal of many models: to keep increasing in size, and in inefficiency, to be able to do more and more tasks, through more and more learning, until there is a single model that can do everything.

The information these models use is by definition static: it’s a point-in-time view of the summation of human knowledge. The assumption is that if a model can be ‘taught’ this information, it will become intelligent.

Intelligence isn’t required to be useful

Because I’ve seen it happen before, I’ll be very clear:

GenAI models are spectacularly useful; intelligence is not required for them to be useful.

I’m using GenAI models every day, I’m getting them to do things that they are better at than previous models, and I’m using them in conjunction with other techniques to create hugely capable systems. I even think we have to completely redesign how we build applications to take advantage of AI.

I’m hugely bullish on the ability of AI, I just don’t think intelligence is an emergent property of information.

Go big, or go wide?

The other option is to accept that intelligence isn’t emergent and instead go wide: don’t keep making a single model bigger, but instead try to create an ecosystem of models, a collaboration that gives the impression of an intelligent colony even though no individual piece could be considered intelligent.

ChatGPT does this these days: when you ask it complex maths, like adding up six big numbers, it writes and runs a Python script. It goes wide, and while neither the core LLM nor, obviously, the Python script can be said to understand maths, the system gives that impression. This approach of creating a system, where each individual model is great at certain things and can therefore be efficient at them, also enables multiple different techniques to be used as appropriate (for instance, just running that script).
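As a rough sketch of that pattern, and not a claim about how ChatGPT is actually wired internally, a dispatcher might route arithmetic to ordinary code and everything else to a model; the `llm()` stub below stands in for whatever model API is actually being called:

```python
import re

def llm(prompt: str) -> str:
    """Placeholder for a call to a general-purpose language model."""
    return f"[model response to: {prompt!r}]"

def add_numbers(text: str) -> str:
    """Deterministic arithmetic: no 'understanding' required, just correctness."""
    numbers = [int(n) for n in re.findall(r"-?\d+", text)]
    return str(sum(numbers))

def answer(prompt: str) -> str:
    """Route each request to the narrow component best suited to it."""
    if re.search(r"\b(add|sum|total)\b|\+", prompt, re.IGNORECASE):
        return add_numbers(prompt)
    return llm(prompt)

print(answer("Add 381947 + 204958 + 771203 + 90211 + 550004 + 123456"))  # exact sum via code
print(answer("Why are crows good at planning?"))                          # delegated to the model
```

Neither piece “understands” maths, but the combination reliably gets the sum right, which is the whole point of going wide.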

The colony of AIs would not be intelligent, but it would be able to act intelligently, and maybe that will be enough.

Are there rules of thought?

I would be remiss if I didn’t mention the other option: that intelligence can be viewed like aerodynamics, where we tried to imitate birds for thousands of years and then learned the rules behind flight. It is possible that the same could happen with intelligence and thought: that some genius, some Einstein of thought, could discover the fundamental laws of thinking, and that they could be encoded into systems and software.

If that happens, then it won’t think like us, and it truly might change everything.

[Image: the journey from simple digital organisms to today’s sophisticated AI, each evolutionary stage rendered in digital and technological motifs]


My job is to make exciting technology dull, because dull means it works. All opinions my own.