Several years ago, when I was coding regularly, I stumbled upon an idea that has stuck with me ever since. I was employed as a development team lead and had just hired a junior programmer. I assigned the new guy to the task of fixing bugs, because what better way to learn a legacy system, right? On his first day on the job, he rewrote a function I had written a year or two earlier, reducing it to a fraction of its size and complexity. My function was around twenty lines of code. His was four. FOUR. The new function produced exactly the same results as the old one; it was simply far more elegant. I learned something important that day: the more elegant the solution to a given problem, the more efficient the process of solving it. From that day on, I strove to write smarter, more elegant code. In that pursuit, I found that my own development began to mirror the emerging work being done in the Artificial Intelligence community. However, something was still missing. At that time, AI was more artificial than intelligent.
The basic building blocks of intelligence were impossible to model, and not for lack of computing power. If we benchmark ‘artificial’ intelligence against the natural function of the human brain, then the net measure of that intelligence is the precision with which it can predict outcomes compared to the human mind. The fallacy in AI lies in believing that intelligence can be logically modeled. Intelligence is not logical; it is intuitive. The human mind makes hundreds of thousands of intuitive predictions to perform even the simplest task, tasks that are incredibly difficult, if not impossible, to model logically. Intuition is what we use to handle everyday problems: predicting our limb positions and controlling muscle movement in order to crawl, walk, or run; understanding and generating speech; reading; analyzing what we see; driving a car. It underlies all of the things we do that we most often take for granted, the things we do "without thinking".
Many of these natural human intuitions are not as trivial as they may initially seem. They have in fact proven difficult or impossible to emulate using current computing technology. Think about the way the adolescent human mind works: children most often employ simple cause and effect in their decision making, and although this approach is rudimentary and admittedly sophomoric, it is logical nonetheless. This is very, very different from the lifetime of effective intuition we call wisdom. Wisdom is what guides us through complex or nuanced social situations, something the adolescent mind struggles with. Intuition also enables a great degree of novelty and innovation. If you consider language an intuitive cognitive function rather than a logical one, then every sentence we speak is in fact a new invention. And from a practical perspective, intuition is much faster than logic: humans make life-and-death decisions in an instant when forced to do so, and the accuracy of these intuitive responses is surprisingly robust. That is why intuition evolved in the first place: it increases our chances of survival. Because intuition is not logical, it does not require a complex logical model, which essentially solves the bootstrapping problem of AI. The rationale is a simple chicken-and-egg scenario: you cannot create a high-level model of intelligence unless you already have intelligence to build it with. To sufficiently advance the field of artificial intelligence, we must start by building illogical artificial intuition.