Friday, June 20, 2008

My dog taught me everything I know...

My first serious exposure to machine learning occurred in the fall of 1986, when I took an introductory graduate course in robotics. To this day, I still remember how awe-inspiring it was to develop the control language that allowed a robotic arm to pour liquor from a sequence of bottles to create a mixed drink. Admittedly, the bottles had to be arranged in a particular order, but that did not dampen our enthusiasm. We harnessed the cutting-edge technology of the day to perform a task that would entertain any college student: we “trained” a machine to make a cocktail.
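
The original control program is long gone, but a loose sketch in modern Python captures the flavor of what we wrote. Everything here is invented for illustration (the RoboticArm interface, the positions, the pour timings); the telling detail is that the "recipe" is keyed to bottle positions a human fixed in advance.

```python
# Illustrative sketch only; the RoboticArm class and its methods are
# hypothetical stand-ins for whatever arm interface we had in 1986.

class RoboticArm:
    """Hypothetical command interface for the arm."""
    def move_to(self, position):
        print(f"moving to bottle position {position}")
    def grasp(self):
        print("grasping bottle")
    def pour(self, seconds):
        print(f"pouring for {seconds} seconds")
    def release(self):
        print("returning bottle")

# The "training": a human-authored recipe tied to a fixed bottle ordering.
# If anyone rearranges the bottles, the arm happily mixes the wrong drink.
MIXED_DRINK = [
    (1, 2.0),   # position 1: the base spirit, pour 2 seconds
    (2, 1.0),   # position 2: the liqueur
    (3, 1.5),   # position 3: the mixer
]

def mix(arm, recipe):
    for position, seconds in recipe:
        arm.move_to(position)
        arm.grasp()
        arm.pour(seconds)
        arm.release()

mix(RoboticArm(), MIXED_DRINK)
```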

Now, while this feat may not seem very awe-inspiring today, it demonstrates a fundamental principle of machine learning/artificial intelligence: it can never surpass the available technology or separate itself from its dependence on humans. In the twenty-odd years since then, processors have evolved from 8-bit machines to 64-bit machines and beyond, and memory has seen an equally impressive evolution. Robotic arms are now used in many facets of manufacturing in lieu of humans. However, the tasks performed by these machines are still devised and programmed by humans. We still have not developed the capability for machines to teach other machines how to perform a task and, in turn, demonstrate true artificial intelligence.

Even as technology evolves and allows machines to perform more complex tasks, there is still an intrinsic need for a human to identify the task, to develop a process by which a machine can learn the task, and then to determine whether the machine can properly perform it. This process can never be undertaken unless the available human-developed technology supports the creation of the needed machine, and the quality of the machine's performance is intrinsically tied to the human capability to devise a sufficient training schema. Therefore, I assert that machine learning/artificial intelligence, as it stands today, is simply a model or collection of models reflecting the beliefs of its creator. This statement should not be taken as ridicule, but rather as a sober statement of fact. Furthermore, I assert that this understanding should be infused across any application that uses a computer-based system. If we forget that humans are fallible, and that humans create the machines and processes that support machine learning/artificial intelligence, then we as a society will suffer the consequences. If we recognize this fallibility of human design, then the machine learning/artificial intelligence community at large must begin to address, en masse, how to demonstrate that their creations are validated and verified.
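
To make the point concrete, consider the toy sketch below. Every name in it, including the creator_rule function, is a hypothetical stand-in rather than anything from a real system: the task, the training labels, and even the test of success are all supplied by a human, so the best the machine can ever do is mirror its creator's rule, and the "validation" only measures agreement with that same human-chosen schema.

```python
# Illustrative sketch: a supervised-learning loop in which every element
# (task, labels, and the measure of success) comes from a human designer.

import random

def creator_rule(x):
    """Human-devised labeling rule: the 'ground truth' the machine inherits."""
    return 1 if x > 0.6 else 0

# 1. A human decides what the task is and generates the data.
random.seed(0)
train = [(x, creator_rule(x)) for x in (random.random() for _ in range(200))]
test  = [(x, creator_rule(x)) for x in (random.random() for _ in range(50))]

# 2. The machine "learns": here, a one-parameter threshold fit by brute search.
best_t, best_acc = 0.0, 0.0
for step in range(101):
    t = step / 100
    acc = sum((1 if x > t else 0) == y for x, y in train) / len(train)
    if acc > best_acc:
        best_t, best_acc = t, acc

# 3. Validation and verification are also human-defined: the model is judged
#    against the very rule and data its creator chose.
test_acc = sum((1 if x > best_t else 0) == y for x, y in test) / len(test)
print(f"learned threshold={best_t:.2f}, test accuracy={test_acc:.2%}")
```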
