In Part 3, Patrick Carpenter continues to answer the question…

“What is Artificial Intelligence?”

There are lots of ways to try to answer the question of what makes human intelligence possible. Is intelligence a non-physical process that occurs in a mind separate from the body, or is intelligence an emergent phenomenon caused by electrical activity inside the brain? Clearly, we don’t know the answer to this question, but there are a few reasons we might well assume (if only for the purposes of discussion) that it is the latter.

First, if it’s the former, we will never be able to create AI using computers; the whole discussion becomes moot, and we’re at the end of the road in that case.

Second, we can observe that damage to the human brain seems to cause corresponding damage to aspects of human intelligence; it appears brains are at least necessary for intelligence, if not sufficient.

Third, we observe aspects of intelligence in non-human animals with brains, which tends to support the belief that brains are sufficient for at least some aspects of intelligence.

Finally, another interesting reason for thinking that human intelligence, or at least useful aspects of it, might be explained in purely physical terms is intertwined with the development of mechanical computation in the first half of the 20th century.

British mathematician Alan Turing developed a formal model of computation that has come to be known as the Turing machine. The goal of this model was to explain, in the simplest possible terms, the process by which humans perform computations, so that those step-by-step procedures, called algorithms, could be reasoned about logically.
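
To make the model concrete, here is a minimal sketch of a Turing machine simulator in Python. The simulator and the example machine are my own illustration, not anything taken from Turing’s paper: a transition table tells the machine what to write on the tape, which way to move the head, and which state to enter next.

```python
# A minimal Turing machine simulator, as an illustrative sketch.
# A transition maps (state, symbol) -> (write, move, next_state).

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: flip every bit of a binary string, then halt on a blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flipper, "10110"))  # prints 01001
```

Everything in that loop is ordinary bookkeeping: a tape, a head, a state, and a lookup table. Yet, as discussed below, no physically-realizable computer has been shown to exceed it.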

Turing went on to show that his machines could carry out the kinds of calculations humans perform. Indeed, to date, no calculation, understood broadly as taking some input and producing a definite corresponding output, has been found that some other physically-realizable computing system can perform but a Turing machine cannot. Nor has any calculation been found that a human can perform but no Turing machine can.

The Church-Turing thesis (named after American mathematician Alonzo Church and Alan Turing) claims that Turing machines are capable of any effective computation; that is, anything that can be computed can be computed by some Turing machine. This statement has never been proven, but neither has any counterexample ever been found. If human intelligence makes possible some calculation that cannot be implemented using purely physical processes, we aren’t aware of it yet.

Turing understood the significance of his work with respect to understanding human intelligence and eventually creating AI. In one sense, the normal operation of computers as calculating machines could be considered a kind of artificial intelligence: we consider it a sign of intelligence when humans perform calculations, and we all agree computers can calculate. It’s true we have to program the computer to divide numbers, but we also have to teach human beings how to perform long division. If we admit the ability to calculate and reason formally using logic as at least aspects of intelligence, then AI has been all around us for a long time, for as long as we have had computers.

Turing’s criteria for machine intelligence were much stricter. He developed a test, which has come to be referred to as the Turing test, to determine whether a computing system possessed what he considered to be true (or strong) AI: can the computer trick us into believing it is, itself, human?

This way of thinking about AI, as the recreation of human intelligence, and specifically as the ability to converse with human beings, was prevalent in the early days of AI. Subjects such as knowledge representation, automated logical reasoning, and natural language processing were the focus of much research. Intelligence was seen as the manipulation of symbols to transform inputs into proper outputs, and the resulting approaches to AI, referred to as symbolic AI, remained the dominant approaches to the subject until the 1980s, when progress slowed. The problem was that symbolic approaches to AI are inherently limited by our human ability to represent and reason about the world symbolically.
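
To give a flavor of the symbolic style, here is a toy sketch in Python: hand-written facts and if-then rules, plus a forward-chaining loop that applies the rules until nothing new can be derived. The bird-themed facts and rule names are invented purely for illustration.

```python
# A toy symbolic-AI sketch: explicit facts, explicit if-then rules, and a
# forward-chaining loop that applies rules until no new facts follow.
# (The domain and the rules are invented purely for illustration.)

facts = {"has_feathers", "lays_eggs"}
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "migrates"),
    ({"is_bird"}, "has_beak"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:  # subset test
            facts.add(conclusion)
            changed = True

print(facts)  # derives is_bird and has_beak; migrates needs can_fly
```

Everything such a program can ever conclude has to be written down first, as a symbol, by a human, which is exactly the limitation described above.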

It turns out we’re not able to describe all aspects of intelligence using natural language, logic, and mathematics. New approaches, referred to as sub-symbolic AI, emerged in the 1980s to deal with the limitations of symbolic AI and have remained the focus of AI research to the present day. Rather than teaching computers high-level reasoning, sub-symbolic approaches treat intelligence as an emergent phenomenon that occurs when systems adapt to new stimuli. Provide the computer with a lot of examples and an algorithm that can learn from the data, and the computer might just start acting in ways that we think of as being intelligent. By learning from data in ways we might not have anticipated, systems which use sub-symbolic approaches to AI can sometimes even surprise us.
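
By way of contrast, here is a minimal sub-symbolic sketch in Python: a perceptron, the simplest neural-network building block, learning the logical AND function from labeled examples rather than being handed the rule. The data, learning rate, and number of training passes are made-up illustration values.

```python
# A minimal sub-symbolic sketch: a perceptron that learns a rule from
# labeled examples instead of being given the rule as if-statements.
# (Data, learning rate, and pass count are made up for illustration.)

# Inputs are (x1, x2); the labels happen to follow logical AND, but the
# program is never told that; it only adjusts weights from its errors.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # one weight per input
bias = 0.0
lr = 0.1        # learning rate

for _ in range(20):  # a few passes over the data
    for (x1, x2), label in examples:
        prediction = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = label - prediction
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        bias += lr * error

print(w, bias)  # weights that implement AND, learned rather than written
```

Nobody tells the program what AND means; the rule emerges from exposure to examples, which is the sense in which sub-symbolic systems can surprise us.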

Somebody once asked me what the difference between AI and a bunch of if-statements is. I didn’t have a good answer at the time; I think I said something like, “a bunch of if-statements might work the way you expect it to, sometimes.”

Upon reflection, I think this question hits right at the heart of the issue of what AI is. We’ve assumed that at least some aspects of human intelligence arise from physical activity in our brains. We’ve never found a calculation that humans can do that Turing machines can’t do. Turing machines encode the same kinds of procedures that we write in everyday programming languages and execute on everyday computers.

The answer is that there is no difference between AI and a bunch of if-statements: AI is code that somebody types in and runs on a computer. What distinguishes AI is not what it’s made of (it’s made of the same kind of code we’re already writing) but how it’s arranged and the way it looks at problems. We may get a lot of good ideas by looking at how humans solve problems, consciously and unconsciously, just like the pioneers of artificial flight got good ideas from thinking about how birds fly. As an example, neural networks draw inspiration directly from the biology of brains as interconnected collections of cells. A bunch of computer code is never going to look or work like a human brain, just like a modern jet aircraft doesn’t look or work like a bird. As was the case for airplanes, that might just be a good thing.
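
To put a fine point on it: once training finishes, the little perceptron sketched earlier collapses into exactly one if-statement. The weights below are the values that sketch converges to (exact numbers depend on the learning rate and the order of the examples):

```python
# The trained perceptron from the earlier sketch, written out by hand.
# The only thing that made it "AI" is that these numbers were learned
# from data rather than chosen by a programmer.

def learned_and(x1, x2):
    if 0.2 * x1 + 0.1 * x2 - 0.2 > 0:  # learned weights and bias
        return 1
    return 0
```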

I wanted to write a post like this to help explain what companies like Vergent are interested in as they begin exploring how AI could be incorporated into their business processes. Though fascinating, questions like the mind-body problem and the quest for strong AI will not be our focus. We won’t worry about whether some bit of code is “true” AI if what the code does is new and useful.

In next month’s installment, we’ll survey some of the most widely known tools and techniques in the field of AI and look at how those techniques may or may not be usefully applied to solve business problems we’re facing, and maybe a few we didn’t know we had.