
News & Blog

Some Thoughts on Artificial Intelligence


By Stephen Smith


A few years ago I wrote a blog post on the Singularity, the point where machine intelligence surpasses human intelligence and all predictions past that point are out the window. Recently we’ve seen a number of notable advances in AI, as well as a number of instances where it has gone wrong. On the notable side, Google DeepMind’s AlphaGo program beat the world champion at Go. This is remarkable because the prevailing wisdom, ever since IBM’s Deep Blue program beat the world champion at Chess, was that Go would be much harder than Chess. On the downside, Microsoft’s recent Tay chat bot quickly turned racist, rather tainting the vision Microsoft presented at its Build conference.


This raises the question: are computers getting smarter? Or are they just becoming computationally more powerful without any real intelligence? You could imagine, for instance, that Chess or Go simply require sufficient computational resources to overwhelm a poor old human. Are chat bots like Tay really learning? Or are they just blindly mimicking back what is fed to them? On the mimicking side they get a lot of help from big data, which now provides huge storehouses of all our accumulated knowledge to draw on.

In this article I’ll look at a few comparisons between the brain and the computer, then at some of the stumbling blocks, and finally at where true intelligence might emerge.



Let’s compare a few interesting statistics for humans and computers, starting with initialization. The human genome contains about 3.2 gigabytes of information. This is all the information required to build a human body, including the heart, liver, skin, and then the brain. That means very little of the genome could be dedicated to providing, say, an operating system for the brain. An ISO image of Windows 10 is about 3.8 gigabytes, so clearly the brain doesn’t have something like Windows running at its core.
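As a sanity check on this information-budget argument, here is a rough back-of-envelope calculation. The genome and ISO sizes come from the figures above; the synapse count per neuron is an outside assumption (a commonly quoted rough average), not a figure from this article.

```python
# Back-of-envelope: could the genome possibly encode the brain's wiring?
# Rough, illustrative numbers only.
genome_bytes = 3.2e9          # ~3.2 GB human genome (figure from the article)
windows_iso_bytes = 3.8e9     # ~3.8 GB Windows 10 ISO (figure from the article)

neurons = 86e9                # neurons in the human brain
synapses_per_neuron = 7000    # rough average -- an assumption, not from the article

# Even at just one byte per synapse, describing the wiring takes
# vastly more information than the whole genome contains:
bytes_to_describe_wiring = neurons * synapses_per_neuron  # ~6e14 bytes
print(bytes_to_describe_wiring / genome_bytes)            # ~1.9e5 times too big
```

However you pick the per-synapse numbers, the conclusion is the same: the genome can specify a recipe for growing a brain, but not a wiring diagram.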

The human brain contains about 86 billion neurons. The Intel i7 processor contains about 1.7 billion transistors, and Intel predicts its processors will have as many transistors as the brain has neurons by 2026. The neuron is the brain’s basic computational logic gate, and the transistor is the computer’s. There are differences, of course: a neuron is quite a bit more complicated than a transistor, with many more interconnections, and it works in a somewhat analog fashion rather than being purely digital. However, these differences probably account for only about an order of magnitude in size (so perhaps the computer needs 860 billion transistors to be comparable). Ultimately, though, both are Turing machines, and hence can solve the same set of problems, as Alan Turing showed.
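Intel’s 2026 prediction is easy to sanity-check against Moore’s-law-style growth. The two-year doubling period is my assumption here, not something stated in the article.

```python
import math

# How many doublings to get from an i7's transistor count to the
# brain's neuron count, assuming a doubling roughly every two years?
transistors_now = 1.7e9   # Intel i7 (figure from the article)
neurons = 86e9            # human brain (figure from the article)

doublings = math.log2(neurons / transistors_now)   # ~5.7 doublings needed
years = doublings * 2                              # ~11 years
print(doublings, years)
```

Starting from the mid-2010s, eleven-ish years of doubling lands in the mid-2020s, which is consistent with the 2026 prediction.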

Comparing memory is a bit more difficult, since the brain doesn’t separate memory from computation the way a computer does; the same neurons hold memories and perform computations. Estimates of the brain’s memory capacity range from a few gigabytes to 2.5 petabytes. I suspect it’s unlikely to be anywhere close to a petabyte. Regardless, it seems that computers can already exceed the memory of the brain (especially when networked together).
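The huge spread in these estimates comes down to what you assume each synapse stores. A quick illustration, with every per-synapse number here being an assumption chosen only to show how the arithmetic works:

```python
# Why brain-memory estimates vary so wildly: the answer depends entirely
# on the assumed synapse count and bits stored per synapse.
neurons = 86e9
synapses_per_neuron = 1000   # low-end assumption
bits_per_synapse = 1         # crudest case: synapse present or absent

low_estimate_bytes = neurons * synapses_per_neuron * bits_per_synapse / 8
print(low_estimate_bytes / 1e12)  # ~10.75 terabytes

# With ~7000 synapses per neuron and several bits per synapse, the same
# arithmetic climbs toward the petabyte-scale figures sometimes quoted.
```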

From a speed point of view, computers appear much faster than the brain. A neuron can fire about 200 times per second, which is glacial compared to a 3 GHz processor. However, the brain makes up for it through parallel processing. Modern computers are limited by the von Neumann architecture, in which the computer does one thing at a time, unlike the brain, where all (or at least many) neurons are doing things at the same time. Computers stick with von Neumann architectures because they make programming easier; it’s hard enough to program a computer today, let alone one without the structure this architecture imposes. Generally, computer parallel processing is very limited, achieved either through a handful of cores or through very specific algorithms.
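The parallelism argument can be made concrete with the figures above. This crude comparison counts total neuron firings per second against a serial CPU clock, ignoring that a neuron firing and a clock cycle are very different units of work:

```python
# Crude throughput comparison: total neuron firings per second
# versus a single serial 3 GHz processor.
neurons = 86e9
firings_per_second = 200                              # ~200 Hz per neuron
brain_events_per_sec = neurons * firings_per_second   # ~1.7e13 events/sec

cpu_hz = 3e9                                          # 3 GHz, one thing at a time
print(brain_events_per_sec / cpu_hz)                  # brain wins by ~5,700x
```

Slow elements, massively parallel, still beat a fast element working alone.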


Learning Versus Inherited Intelligence

From the previous comparisons, one striking data point is the size of the human genome. The genome is quite small and doesn’t contain enough information to seed the brain with so-called inherited intelligence. Besides, if we did have inherited intelligence, it would be aligned with what humans needed to survive hundreds of thousands of years ago and wouldn’t, say, tell you how to work your mobile phone. It appears that the genome defines the structure of the brain and the formula for neurons, but doesn’t pre-program them with knowledge, beyond perhaps some really basic things, such as eating when you feel hungry and being afraid of snakes. This means nearly all our intelligence is learned, starting in our earliest years.

This means a brain is programmed quite differently from a computer. The brain has a number of sensory inputs, namely touch, sight, hearing, smell, and taste, and, with the help of adult humans, it learns everything through these senses. A computer, by contrast, is mostly pre-programmed, and the amount of learning it’s capable of is very limited.

It takes many years for a human to develop: language, basic education, physical co-ordination, visual recognition, geography, and so on. Say we want a computer with the intelligence of a ten-year-old human; do we then need to train the computer for ten years before it becomes comparable? If so, that would be very hard on AI researchers, who would need ten years just to test whether each AI works.

Complexity Theory

It seems that computers and brains are both Turing machines. All Turing machines can solve the same problems, though this says nothing about how long they may take. A computer’s logic elements are far faster than neurons, but they suffer from being organized in a von Neumann architecture and thus operate very serially, as opposed to the brain, which does everything in parallel. Even so, both are built from very simple logic elements with a small amount of initial programming. So where does self-aware intelligence arise from?

I believe the answer comes from complexity and chaos theory. When you study dynamic systems of increasing complexity, such as transitions to turbulence in fluid mechanics, cellular automata, or fractals, you find emergent stable solutions (sometimes called strange attractors) that couldn’t be predicted from the initial conditions. A brain with billions of neurons, each performing simple logic operations but all in parallel, is a very complex system. Emergent stable behaviours are all but guaranteed, and evolution has tuned some of them into becoming our intelligence.
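Cellular automata give the quickest demonstration of this idea. The sketch below runs Wolfram’s elementary Rule 30, where each cell’s next state depends only on itself and its two neighbours, yet a famously chaotic pattern unfolds that you could never predict by staring at the rule:

```python
# Complex behaviour emerging from a trivially simple rule: Rule 30.
# Each cell's next state depends only on (left, self, right).
RULE = 30
WIDTH, STEPS = 31, 15

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start with a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # Encode each 3-cell neighbourhood as a number 0-7 and look up
    # the corresponding bit of RULE to get the next state.
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

One live cell and an 8-entry lookup table produce an ever-growing, irregular triangle; nothing in the rule itself hints at that structure.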

What’s Needed

Our computers aren’t quite at a truly self-aware intelligent state yet (at least as far as I know; who knows what might be happening in a secret research lab somewhere). So what is needed to get over the hump and create a true artificial intelligence? I believe we need two things: one on the hardware side and one on the software side.

First, we need the software algorithm the brain uses to learn from its environment. This must be fairly simple, and it must apply to a wide range of inputs; there isn’t enough data in the human genome for anything else. I think we are getting closer with algorithms like the Hidden Markov Models currently used in machine learning. One key part of any such algorithm will be how it can be adapted to scale by running millions of copies in parallel.
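To make the Hidden Markov Model mention concrete, here is a minimal sketch of the forward algorithm, which computes how likely an observed sequence is under hidden states we never see directly. The tiny weather model (states, all probabilities) is entirely made up for illustration:

```python
# Forward algorithm for a toy Hidden Markov Model.
# Hidden states: 0 = rainy, 1 = sunny. Observations: 0 = umbrella, 1 = no umbrella.
start = [0.6, 0.4]                    # P(initial hidden state)
trans = [[0.7, 0.3], [0.4, 0.6]]      # P(next state | current state)
emit = [[0.9, 0.1], [0.2, 0.8]]       # P(observation | state)

def forward(observations):
    """Return P(observations), summing over all hidden state paths."""
    n = len(start)
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = [start[s] * emit[s][observations[0]] for s in range(n)]
    for obs in observations[1:]:
        alpha = [
            sum(alpha[prev] * trans[prev][s] for prev in range(n)) * emit[s][obs]
            for s in range(n)
        ]
    return sum(alpha)

print(forward([0, 0, 1]))  # likelihood of umbrella, umbrella, no umbrella
```

The appeal for learning is that the hidden structure is inferred purely from observations, loosely analogous to a brain inferring the world from its senses.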

Second, we need the hardware to run it. This is a bit controversial: one school of thought holds that once we have the correct algorithm, we can run it on standard hardware, since raw processing speed will overcome the lack of parallel processing. Even hardware like GPUs with hundreds of cores isn’t anywhere near as parallel as the brain. Until we figure out this ideal learning algorithm, we won’t know the exact computer architecture to build. Some people are building very parallel computer hardware that models neurons more precisely, while others feel this is like trying to build an aeroplane by exactly simulating birds flapping their wings.


We’ve solved a lot of difficult problems with artificial intelligence algorithms. We now have self-driving cars, robots that can walk over rugged terrain, computer world champions at Chess and Go, and really good voice and image recognition systems. As these come together, we need just a couple more breakthroughs to achieve true intelligence. Every now and then we predict this is just around the corner, and then we get stuck for a decade or so. Right now we are making great progress, and hopefully we won’t hit another major roadblock; we are certainly seeing a lot of exciting advances.
