Using AI to Fight Terrorism
 
 


There has been a fair bit of attention paid in recent weeks and months to the prospective use of artificial intelligence in fighting terrorism. Before looking too closely at these efforts, however, let's take a step back and look at the history of the relationship between the federal government and artificial intelligence in this country. One of the biggest players in funding artificial intelligence research, both in universities and in private research labs, has been the Defense Advanced Research Projects Agency (DARPA). Conservatively, I would estimate that at least 80% of all AI research in the U.S. has been funded by DARPA (or ARPA, as it was known in a previous life). If you look at published papers in artificial intelligence, you will see that U.S.-based authors almost always acknowledge DARPA as their major source of funding. This has given AI a very definite defense-related flavor. For example, in the early 1990s, Rama gave his ST1 students the task of using AI technologies to identify enemy tanks on a battlefield. (You don't want to misidentify something far away and fire on your own tank, nor do you want to waste ammunition on enemies you've mistakenly judged to be more threatening than they were.) This tank-identification problem, and other problems like it, has been used as a testbed by AI practitioners besides Rama's students. Yet although DARPA-funded projects have tended to have a defense-related flavor, it would be a mistake to assume that all DARPA projects are so obviously related to the military.
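To give a flavor of what such a tank-identification exercise involves, here is a minimal sketch in Python of training a classifier to separate two kinds of signatures. The two sensor features and all of the data are synthetic, invented purely for illustration; this is not a description of what Rama's students actually built.

    import numpy as np

    # Synthetic two-feature "sensor readings" for friendly and enemy
    # vehicles; a real project would extract features from imagery.
    rng = np.random.default_rng(0)
    friendly = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(50, 2))
    enemy = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))
    X = np.vstack([friendly, enemy])
    y = np.array([0] * 50 + [1] * 50)  # 0 = friendly, 1 = enemy

    # Logistic regression trained by simple gradient descent.
    w, b = np.zeros(2), 0.0
    for _ in range(1000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(enemy)
        grad = p - y
        w -= 0.1 * (X.T @ grad) / len(y)
        b -= 0.1 * grad.mean()

    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    print("training accuracy:", ((p > 0.5) == y).mean())

Because firing on your own tank is far more costly than holding fire a moment longer, a fielded classifier would not use a symmetric 0.5 cutoff; the decision threshold would be biased toward the less costly kind of error.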

One reason for this is that the military is a very large institution, and just about everything is probably done somewhere within it. Thus, the requirement that DARPA fund only military-related projects has often been applied rather loosely. Once a potential application of AI had been identified, a researcher often needed only to point to somewhere in the military where it might be useful; if the funding request was otherwise sound, it would be approved.

DARPA often did not follow up to establish that the expected applications of a given project were actually delivered. Its philosophy seemed to be to fund a large number of unclassified basic research projects, and then to follow up with classified research if particular work crossed a threshold of strategic importance. The unclassified research was, as a side effect, publicly available, and could therefore be picked up and used by the business community for corporate purposes.

So we have, at the present time, a somewhat interesting situation. Clearly, artificial intelligence technologies have a lot of potential to help in the fight against terrorism. For example, if we applied machine learning techniques (neural networks, genetic algorithms, and so on) to a large database of information about people's prior brushes with the law, entries into and exits from the U.S.A. (and other countries), countries of origin, driver's license records, and so on, we could efficiently flag potential terrorists for investigation. Alternatively, basic data mining techniques could be used to similar effect. We could also use rule-based systems to augment good human judgment in deciding when to investigate potential terrorists.
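To make the rule-based idea concrete, here is a minimal sketch in Python. Every field name, rule, weight, and threshold below is hypothetical, invented purely for illustration; a real system would encode rules written by domain experts, and would assist, rather than replace, the human judgment just described.

    # A toy rule-based screening aid. All rules, weights, and
    # thresholds here are hypothetical, chosen only to illustrate
    # the technique.

    def screening_score(record):
        """Accumulate evidence from simple hand-written rules."""
        score = 0
        if record.get("prior_brushes_with_law", 0) > 2:
            score += 2
        if record.get("border_crossings_last_year", 0) > 10:
            score += 1
        if not record.get("drivers_license_on_file", True):
            score += 1
        return score

    record = {
        "prior_brushes_with_law": 3,
        "border_crossings_last_year": 12,
        "drivers_license_on_file": False,
    }

    # A score above the (arbitrary) threshold only refers the record
    # to a human investigator; the system itself decides nothing.
    if screening_score(record) >= 3:
        print("refer to a human analyst for review")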

The challenge is going to be in how to deploy the technology. Is it really in a form that the government can use at the present time? Probably not. As we've previously noted, the government, mainly through DARPA, has been a very big player in artificial intelligence, yet it has left behind a hodgepodge of partially completed research projects. The business community, driven by the demands of the marketplace, has been very good at bringing traditional IT projects up to snuff, but so far it hasn't seen much need to invest in AI, at least not as a proportion of its total IT investment. The various government agencies have not faced the same marketplace realities, so their IT infrastructures are not as advanced as the business community's. Their systems are also not necessarily compatible with one another: CIA software doesn't necessarily interface well with FBI software, and so on.

Yet the government (and hence the taxpayer) has a lot invested in AI through DARPA. It would be nice to get a return on this investment at a time when it's clearly needed. It seems to me that a real partnership between the corporate world and government will be needed to do this. The various departments of government clearly have a lot to learn from the business community about how to deploy basic IT applications. I'm talking here not about AI but about the more basic building blocks of IT (databases, networks, intranets and the Internet, security, and so on) that go into a successful IT project. Just as competitors in the business world often manage to maintain compatible IT solutions, the rival departments of government need to find a way to do the same.

However, the business community has a certain amount to learn from government as well. The government has taken the lead in advanced AI research, and the business community hasn't necessarily had the foresight to develop that research into workable technologies, often solving short-term IT problems extremely well while investing little in the long-term vision. The business community needs to find a way to turn interesting AI ideas into workable products; it has lagged in doing so in the past.

Yet, if this effort is successful, it will have effects far beyond simply stopping terrorists in their tracks: it will result in both a much leaner and more effective government and a much stronger business community. AI really remains the crown chakra of IT: it is what we need to aim for, but we need to make sure all the lower chakras are working effectively first.

One promising initiative is the DARPA Agent Markup Language (DAML). DAML is an example of the rule markup techniques we talked about in an earlier edition: an effort to encode commonsense knowledge and reasoning within an XML-based markup language. XML-based standards are a very promising way for different organizations to communicate with each other, without the infrastructure-building overhead that previous standards have required (except possibly for security protocols, which will be very important in fighting terrorism). Different government agencies, and different businesses, could all do things the way they want internally, provided they can agree on an XML-based standard for communicating with each other. Initial markup languages need not encode the full breadth of knowledge needed to build AI systems; that could be added later.
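To make the interchange idea concrete, here is a minimal sketch, using only the Python standard library, of one organization emitting a fact as XML and another parsing it. The namespace and element names are hypothetical stand-ins, loosely in the spirit of DAML-style markup rather than actual DAML vocabulary.

    import xml.etree.ElementTree as ET

    # A hypothetical shared namespace; not actual DAML vocabulary.
    NS = "http://example.org/watchlist#"

    # Agency A serializes a fact about a subject of interest as XML.
    person = ET.Element("{%s}Person" % NS, attrib={"id": "subject-042"})
    ET.SubElement(person, "{%s}priorArrests" % NS).text = "2"
    ET.SubElement(person, "{%s}lastEntryDate" % NS).text = "2001-12-03"
    document = ET.tostring(person, encoding="unicode")
    print(document)

    # Agency B, with entirely different internal systems, can still
    # parse the agreed-upon format and apply its own rules to it.
    parsed = ET.fromstring(document)
    arrests = int(parsed.findtext("{%s}priorArrests" % NS))
    if arrests > 0:
        print("flag for human review")

The point is that neither agency needs to know anything about the other's internal databases; agreement on the shared vocabulary is enough.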

More information about DARPA in general, and about DAML in particular, is available on their respective web sites.

Next edition, we will return to the topic of the Semantic Web, which, I strongly believe, will be an important ingredient in forging the partnerships described in this edition.

 
