 
 
Natural Language Part 4 - The Long-Term Vision
 
 


Natural language processing is both one of the most essential areas of artificial intelligence to develop fully in order to realize human-level intelligence, and an area in which success is very likely to be achieved in the not-too-distant future. The reason why I say it is essential should be fairly obvious-language is the means by which humans communicate, so for a machine to operate at a human level, it needs to be able to communicate in their language. The reason why I say it is likely to be achieved in the fairly near future is that we now have, in the Internet, a tool which offers unprecedented opportunity for advances in natural language and artificial intelligence.

I recall writing, a number of years ago, a proposal for Rama to build a natural language processing system. The system would start with an intelligent chatbot through which humans and computers interact-many such chatbots exist, and have for many years. A very large number of conversations between computers and humans would then be conducted using this chatbot. The idea was to build a very large learning engine which would take an extensive set of dialogs between humans and machines and provide feedback to the chatbot based upon the quality of its interactions with the humans. Factors such as an extended dialog with the human, long responses, or favorable words ("great", "sounds good", etc.) in the human's replies would indicate that the machine was performing well, and this positive and negative information would form the basis of a learning algorithm from which the machine could improve its dialog. The chatbot could try using particular words in particular contexts to see what kind of response it generated-positive or negative-and would gradually learn from its experience. The key would lie both in building the learning engine and in employing a very large number of humans for an extended period to engage in the conversations-initially probably inane, but gradually growing more intelligent-needed to train the chatbots.
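To make the feedback idea concrete, here is a minimal sketch of the kind of scoring and update rule described above. It is purely illustrative-the word lists, weights, and the `score_reply` and `update` functions are all my own invented stand-ins, not anything from the actual proposal:

```python
import re

# Favorable and unfavorable signal words -- invented examples for the sketch.
FAVORABLE = {"great", "good", "thanks", "interesting", "yes"}
UNFAVORABLE = {"boring", "no", "huh", "stupid"}

def score_reply(reply: str, turns_so_far: int) -> float:
    """Score a human reply: positive means the machine seems to be doing well."""
    words = re.findall(r"[a-z']+", reply.lower())
    score = 0.0
    score += 0.5 * sum(w in FAVORABLE for w in words)
    score -= 0.5 * sum(w in UNFAVORABLE for w in words)
    score += 0.05 * len(words)      # long responses are a good sign
    score += 0.02 * turns_so_far    # so is an extended dialog
    return score

# Preference table: (context, phrase the bot tried) -> running value estimate.
prefs: dict = {}

def update(context: str, phrase: str, reward: float, lr: float = 0.1) -> None:
    """Nudge the stored value for this phrase-in-context toward the reward."""
    old = prefs.get((context, phrase), 0.0)
    prefs[(context, phrase)] = old + lr * (reward - old)

# One learning step: the bot tried a phrase, the human replied favorably.
r = score_reply("Great, that sounds good to me!", turns_so_far=6)
update("greeting", "How are you today?", r)
```

Over many thousands of such updates, phrases that tend to draw favorable, lengthy replies in a given context would accumulate higher values and be chosen more often-which is the essence of the proposed learning engine.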

One big drawback to this proposal at the time was the cost of employing these humans, which would have been very large. Today, with the Internet, a chatbot can engage humans in conversation on any of the many chat channels available, so the cost of such interactions is now trivial.

An important facet of this proposal was that as the chatbot learned what language is appropriate to use in particular circumstances, it would not necessarily ascribe any meaning to the words it learned how to use. It would simply converse "intelligently" without really understanding what it was saying. This idea was based upon a fundamental belief of mine-which I'll concede is only an opinion and could be wrong-that words and language really have no underlying meaning, but are simply a collection of symbols used to communicate. I base this partly on how a baby first learns to use language. Essentially, the baby gets feedback from her parents as to which of her sounds come closest to "mama" or "dada", and learns these words not from any fundamental understanding of their meanings, but from the feedback which she receives.

Additionally, the human brain is the original neural-network-based learning machine. However, the human brain-so far as anyone knows-does not explicitly store symbolic information about what particular words mean. Rather, such information is encoded implicitly in the complex network of neurons which the brain maintains. From an occult point of view, explicit symbolic information is in the first attention, and everything else is in the second attention.

So, my basic contention would be that by designing a sophisticated learning algorithm, and having it learn through repeated interactions with humans over the Internet over a period of many years, a chatbot capable of interacting at a human level-of passing the Turing Test-could be created. However, an extensive library of explicit, symbolic knowledge about the world in which humans interact would greatly speed up a computer's ability to learn from its interactions with humans by giving it a reference point from which to start. Fortunately, just such a library is being developed-the Cyc project. Cyc has been in the works for about fifteen years, and the idea is to encode a large array of human knowledge in a vast knowledge base. Cyc includes both an inference engine and a knowledge base (KB) consisting of about one million rules. In addition, an open source version of Cyc is due to be released in late summer 2001.
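Cyc's actual representation language (CycL) and inference machinery are far richer than anything shown here, but as a rough illustration of what "an inference engine plus a rule base" means, here is a toy forward-chaining sketch over invented (subject, relation, object) facts:

```python
# Made-up miniature knowledge base: "isa" for membership, "genls" for
# class generalization. These two relation names echo Cyc terminology,
# but the facts and the single rule below are purely illustrative.
facts = {("Socrates", "isa", "Human"), ("Human", "genls", "Mortal")}

def forward_chain(facts):
    """Repeatedly apply one rule until no new facts appear:
    if X isa C and C genls D, then X isa D."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = {(x, "isa", d)
               for (x, r1, c) in facts if r1 == "isa"
               for (c2, r2, d) in facts if r2 == "genls" and c2 == c}
        if not new <= facts:   # any facts we didn't already know?
            facts |= new
            changed = True
    return facts

derived = forward_chain(facts)
# ("Socrates", "isa", "Mortal") is now derivable from the two base facts.
```

A real inference engine supports many rule forms, backward chaining, and vastly larger fact sets, but the core idea-mechanically deriving new knowledge from stored rules-is the same.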

Once a knowledge base like Cyc is available, it will make it far easier for a learning engine to effectively learn how to communicate with humans. Instead of having to learn randomly by selecting from a very large set of words which might be appropriate to use in a particular context, the chatbot will be able to reference the Cyc knowledge base and choose an appropriate response fairly easily. There will still be a learning curve involved for the chatbot, but it will not be nearly as severe as before.

The applications of such an intelligent agent are probably unlimited, and it is tough to predict in advance precisely how it might be used. At a minimum, however, online e-commerce sites should become vastly more intelligent than they are at present, allowing for very intelligent discourse with the human being about a proposed purchase. Other types of web sites-or other technologies such as handhelds and wireless devices-would be able to provide sophisticated professional advice in fields such as law, medicine, and accounting. Within a few years it should be possible to encode the knowledge of experts in a wide variety of areas. Whereas previous expert systems were limited because their knowledge was so specialized, these new systems would have extensive general knowledge, making meaningful conversation with people quite feasible. Education, too, is likely to change as it becomes more and more possible for people to learn directly from intelligent agents rather than from humans.

When this happens, how will it change the relationship between humans and machines? And will it fundamentally change the nature of human existence? We will explore these questions in more detail in a couple of editions. In the next edition, we will look at Cyc in more detail.

For more information, see the Cyc project's website.

Next edition: Cyc and the Cyc Knowledge Base

 
