October 17, 2019

The Impact of AI on Our Lives

The primary focus of this essay is the future of Artificial Intelligence (AI). In order to better understand how AI is likely to grow, I will first explore its history and current state. By showing how its role in our lives has changed and expanded so far, I will be better placed to predict its future trends.

John McCarthy first coined the term "artificial intelligence" in 1956 at Dartmouth College. At that time electronic computers, the obvious platform for such a technology, were still less than thirty years old, the size of lecture halls, and had storage and processing systems that were too slow to do the concept justice. It wasn't until the digital boom of the '80s and '90s that the hardware to build the systems on began to gain ground on the ambitions of the AI theorists, and the field really started to pick up. If artificial intelligence can match in the decade to come the advances made last decade, it is set to become as common a part of our daily lives as computers have become in our lifetimes.

Artificial intelligence has had many different descriptions attached to it since its birth, and the most important shift it has made in its history so far is in how it has defined its aims. When AI was young, its aims were limited to replicating the function of the human mind. As the research developed, new intelligent things to replicate, such as insects or genetic material, became apparent. The limitations of the field were also becoming clear, and out of this AI as we understand it today emerged.

The first AI systems followed a purely symbolic approach. Classic AI's approach was to build intelligences on a set of symbols and rules for manipulating them. One of the main problems with such a system is that of symbol grounding. If every piece of knowledge in a system is represented by a set of symbols, and a particular set of symbols ("dog" for example) has a definition made up of other symbols ("canine mammal"), then the definition needs a definition ("mammal: creature with four limbs and a constant internal temperature"), and that definition needs a definition, and so on. When does this symbolically represented knowledge get described in a manner that doesn't need further definition to be complete? These symbols need to be defined outside of the symbolic world to avoid an eternal recursion of definitions.

The way the human mind does this is to link symbols with stimulation. For example, when we think "dog" we don't think "canine mammal"; we remember what a dog looks like, smells like, feels like, and so on. This is known as sensorimotor categorization. By allowing an AI system access to senses beyond a typed message, it could ground the knowledge it has in sensory input in the same manner we do.
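To make the regress concrete, here is a minimal sketch in Python. The vocabulary and its definitions are hypothetical toy entries of my own, not a real knowledge base; the point is only that expanding a symbol yields more symbols, never a ground.

```python
# A toy symbolic knowledge base: every definition is itself made of
# symbols that in turn need defining. All entries here are made up.
definitions = {
    "dog": ["canine", "mammal"],
    "canine": ["carnivorous", "mammal"],
    "mammal": ["creature", "four-limbed", "warm-blooded"],
    "creature": ["living", "organism"],
}

def expand(symbol, depth=0, max_depth=5):
    """Expand a symbol into its defining symbols, recursively.

    In a purely symbolic system this recursion never bottoms out: it
    either loops, or stops only because the dictionary runs out of
    entries, never because a symbol has been truly grounded.
    """
    indent = "  " * depth
    if depth >= max_depth:
        print(indent + symbol + "  <- still just symbols, no ground reached")
        return
    if symbol not in definitions:
        print(indent + symbol + "  <- undefined; the regress stops only by accident")
        return
    print(indent + symbol)
    for part in definitions[symbol]:
        expand(part, depth + 1, max_depth)

expand("dog")

# A grounded system would escape the regress by tying a symbol to
# sensory data rather than to more symbols. The "sensor readings"
# below are invented placeholders, sketching the idea only.
grounded = {
    "dog": {"visual": [0.91, 0.13, 0.77], "smell": [0.64, 0.20]},
}
```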
That's not to say that classic AI was a completely flawed strategy, as it turned out to be successful for many of its applications. Chess-playing algorithms can beat grandmasters, expert systems can diagnose diseases with greater accuracy than doctors in controlled situations, and guidance systems can fly planes better than pilots. This model of AI developed in a time when the understanding of the brain wasn't as complete as it is today. Early AI theorists believed that the classic approach could achieve the goals set out for AI because computational theory supported it. Computation is largely based on symbol manipulation, and according to the Church-Turing thesis, computation can potentially simulate anything symbolically. However, classic AI's methods don't scale up well to more complex tasks.

Turing also proposed a test to judge the worth of an artificially intelligent system, known as the Turing test. In the Turing test, two rooms with terminals capable of communicating with each other are set up. The judge sits in one room; in the second room there is either another person or an AI system designed to emulate a person. The judge communicates with the person or system in the second room, and if the judge eventually cannot distinguish between the person and the system, then the test has been passed. However, this test isn't broad enough (or is too broad...) to be applied to modern AI systems.

The philosopher John Searle made the Chinese room argument in 1980, stating that if a computer system passed the Turing test for speaking and understanding Chinese, this wouldn't necessarily mean that it understands Chinese, because Searle himself could execute the same program. He would give the impression that he understands Chinese while not actually understanding the language, merely manipulating symbols in a system. If he could give the impression of understanding Chinese without actually understanding a single word, then the true test of intelligence must go beyond what this test lays out.
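Searle's point can be sketched in a few lines of Python. The "rulebook" below is a made-up toy standing in for the program Searle imagines executing by hand; nothing in it understands anything.

```python
# A toy "Chinese room": input symbols are mapped to output symbols by
# rote lookup. The rulebook entries are invented examples.
rulebook = {
    "你好吗?": "我很好,谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我叫小明。",      # "What's your name?" -> "My name is Xiaoming."
}

def chinese_room(message):
    """Reply by pure symbol manipulation.

    The function matches one string and emits another, exactly as
    Searle could by following the rulebook, without comprehension.
    """
    return rulebook.get(message, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗?"))
```

For these inputs the room "passes", yet there is no understanding anywhere in the system, which is exactly the force of the argument.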
