Personalization refers to the process by which a Web page is automatically customized to suit the needs of the individual browsing it. For example, when you return to Amazon.com on a regular basis, you will find that sections such as "The Page You Made" or "Recommendations" contain information customized for you in particular: the site attempts to show you books, CDs, and other items you are likely to purchase in the future, given your purchases in the past. Another example occurs on many Web pages, especially search engines like Google, where banner ads are displayed based on text the user has typed. The text is analyzed to determine what kind of product or service the user is likely to be interested in, and banners are displayed for such products.
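To make the banner-ad example concrete, here is a minimal sketch of keyword-based ad targeting. The keyword table, ad inventory, and function name are all invented for illustration; a real ad server would be far more elaborate.

```python
# Map search terms to product categories (hypothetical data for illustration).
KEYWORD_CATEGORIES = {
    "guitar": "musical instruments",
    "amplifier": "musical instruments",
    "mortgage": "financial services",
    "loan": "financial services",
}

# One banner ad per category, plus a generic fallback.
BANNERS = {
    "musical instruments": "Shop guitars and amps at MusicStore!",
    "financial services": "Compare mortgage rates today!",
}

DEFAULT_BANNER = "Welcome to our site!"

def choose_banner(query: str) -> str:
    """Pick a banner ad based on words in the user's search query."""
    for word in query.lower().split():
        category = KEYWORD_CATEGORIES.get(word)
        if category:
            return BANNERS[category]
    return DEFAULT_BANNER

print(choose_banner("cheap guitar strings"))  # musical-instruments banner
print(choose_banner("weather forecast"))      # generic fallback banner
```

Even this toy version shows the basic shape: the user's text is reduced to a product category, and the category selects the content displayed.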

There are two main approaches used for personalization: rule-based approaches and collaborative filtering. We've already talked a certain amount about rule-based approaches. When used for personalization, rule-based systems work in the same manner as elsewhere: an expert in the field gives knowledge engineers information about how best to customize given Web pages for particular users, and the knowledge engineers code that information into the page in a way that gives the site the required behavior.
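Such hand-authored rules might look something like the following sketch. The user profile fields and content-block names are hypothetical; the point is that each rule encodes an expert's judgment directly.

```python
def personalize_homepage(user: dict) -> list:
    """Apply expert-authored rules to choose content blocks for a visitor."""
    blocks = []
    # Rule 1 (from the expert): returning customers see recommendations first.
    if user.get("past_purchases"):
        blocks.append("recommendations")
    # Rule 2: visitors under 25 see a student-discount banner.
    if user.get("age", 0) < 25 and user.get("age", 0) > 0:
        blocks.append("student_discount_banner")
    # Fallback rule: every visitor sees the bestseller list.
    blocks.append("bestsellers")
    return blocks

print(personalize_homepage({"age": 22, "past_purchases": ["book"]}))
# ['recommendations', 'student_discount_banner', 'bestsellers']
```

Note that every rule here had to be thought up and written down by a person; nothing is learned from data, which is exactly the limitation discussed next.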

However, rule-based approaches tend to work better in domains (areas of application) where technology is being applied to old, slow-changing problems, as in medicine, insurance, or credit approval. The problem with applying rule-based approaches to e-commerce is that the rules themselves change so quickly that it is very difficult for a human, even an expert, to keep up with them, much less supply knowledge engineers with the changing rules fast enough for the engineers to make the required updates to the site.

Collaborative filtering, also sometimes known as data mining, has emerged as an approach to help remedy this. The idea behind collaborative filtering is that a great deal of data about a given business, let us say an e-commerce site, is available online in a database. That database can be automatically mined for knowledge about the buying and other patterns of the customers. This data can then be automatically turned into rules, such as that a certain customer is likely to buy a certain product, without a human expert ever needing to be involved. These automatically generated rules are then used to produce the content which is shown to the user.
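A minimal sketch of this idea is to mine purchase histories for rules of the form "customers who bought X usually also bought Y." The transactions below are invented for illustration, and the confidence-threshold approach is just one simple way of generating such rules automatically.

```python
from collections import defaultdict
from itertools import permutations

# Hypothetical purchase histories: one set of items per customer order.
transactions = [
    {"guitar", "strings", "tuner"},
    {"guitar", "strings"},
    {"guitar", "amplifier"},
    {"keyboard", "tuner"},
]

def mine_rules(transactions, min_confidence=0.6):
    """Generate rules X -> Y where the fraction of X-buyers who also
    bought Y is at least min_confidence. No human expert required."""
    item_count = defaultdict(int)
    pair_count = defaultdict(int)
    for basket in transactions:
        for item in basket:
            item_count[item] += 1
        for x, y in permutations(basket, 2):
            pair_count[(x, y)] += 1
    rules = {}
    for (x, y), n in pair_count.items():
        confidence = n / item_count[x]
        if confidence >= min_confidence:
            rules[(x, y)] = confidence
    return rules

rules = mine_rules(transactions)
print(rules[("strings", "guitar")])  # 1.0 -- every strings buyer bought a guitar
```

The resulting rules play the same role the expert's rules did in the rule-based approach, but they are regenerated from the data whenever the data changes, which is what makes the approach suitable for fast-moving e-commerce sites.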

The algorithms used by collaborative filtering are, in general, free to use any information they have about customers in order to better target the right customer with the right product. As such, they definitely raise privacy concerns with some consumers, because there can be a sense that "Big Brother" wants to know as much as possible about consumers in order to gain an economic edge. However, there are definitely two sides to this coin. The smart consumer, knowing this, can make wise decisions about just how much to reveal about themselves online. And the Internet, while definitely creating unprecedented opportunities for large institutions to pry into people's lives, also creates unprecedented opportunities for people to protect their own privacy. It's simply a more complex game than ever.

However, from the point of view of AI, the basic approach of collaborative filtering is definitely to use as much information as is available to automatically generate rules for targeting consumers.

The basic data mining algorithms have been around for some time, because there was already a perceived strong need for data mining even before the advent of the Internet. Previously, data mining--collaborative filtering--was used for things like managing print advertising campaigns and for making credit decisions. Thus the technology already existed prior to the Internet, and so it did not take long after the rise of the Internet for it to be applied to e-commerce. Data mining had already been the focus of quite a number of companies, including (but not limited to) some of Rama's companies.


Next Edition: Genetic Algorithms






All copyrights are maintained by respective contributors and may not be reused without permission. Graphics and scripts may not be directly linked to. Site assets copyright © 2000 RamaLila.com and respective authors.
By using this site, you agree to relinquish all liabilities and claims financial or otherwise against RamaLila and its contributors. Visit this site at your own risk.