
AlphaGo and the Future

What does it mean that a Deep Learning system recently beat Go champion Lee Sedol? And what did it mean back in 1997 when Deep Blue beat chess champion Garry Kasparov? Is the purpose of AI only to demonstrate that it can win against humans, or is it much more than winning?

Such wins demonstrate the capabilities of AI and open up new avenues for the tools and techniques involved. In the case of Deep Blue, developed by IBM, it was better search and evaluation algorithms, combined with a supercomputer, that defeated a world champion. Similar AI algorithms were later applied to other applications, including search engines.
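To make "search and evaluation" concrete, here is a minimal Python sketch of the idea behind classic chess engines: look a few moves ahead in the game tree with minimax search, prune branches that cannot change the decision (alpha-beta pruning), and score the positions at the bottom with an evaluation function. This is an illustrative toy, not Deep Blue's actual program; the hard-coded tree and leaf numbers stand in for real positions and a real evaluation function.

import math

def alphabeta(node, depth, alpha, beta, maximizing):
    # Leaves are numbers (the evaluation score); internal nodes are lists of children.
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # the opponent would never allow this line
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

# Toy game tree: nested lists are positions, numbers are evaluation scores.
tree = [[3, 5], [6, [9, 1]], [1, 2]]
print(alphabeta(tree, depth=3, alpha=-math.inf, beta=math.inf, maximizing=True))  # prints 6

Deep Blue's edge came from doing exactly this kind of lookahead on specialized hardware, reportedly searching around 200 million chess positions per second with a chess-specific evaluation function.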

The AI community continued its fascination with winning games of intelligence, with IBM Watson going on to win the quiz show Jeopardy! Watson even received the first-place prize of $1 million. The AI techniques Watson used to win, such as Natural Language Processing and Machine Learning, today drive the Watson Cloud Platform to understand unstructured documents and build question answering systems.

AI has come a long way since Deep Blue's win. Recently, Google took up the challenge of creating a Deep Learning based AI called AlphaGo to beat the world champion of Go, and it succeeded. The same algorithms that won at Go also power Google's software that recognizes spoken words, understands natural language, and classifies images.
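For a flavour of what "Deep Learning" means at the smallest scale, the toy sketch below trains a tiny fully connected neural network with gradient descent to separate two classes of points. Systems like AlphaGo and Google's speech and image models use far larger, deeper networks and specialized hardware; this example, which assumes only NumPy, just illustrates the core idea of learning weights from data rather than hand-coding rules.

import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D dataset: class 1 if a point lies inside the unit circle, else 0.
X = rng.uniform(-2, 2, size=(200, 2))
y = (np.linalg.norm(X, axis=1) < 1).astype(float).reshape(-1, 1)

# One hidden layer of tanh units, sigmoid output.
W1 = rng.normal(0, 0.5, (2, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass (gradient of the cross-entropy loss)
    dp = (p - y) / len(X)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = dp @ W2.T * (1 - h**2)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")

In production these models are trained on GPUs with frameworks such as TensorFlow rather than hand-written NumPy, but the ingredients are the same: a forward pass, a loss gradient, and a weight update.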

Deep Learning has now evolved to the point where it can beat a Go champion, and it looks capable of winning at any kind of competitive game involving the human mind. It seems the AI community might have to invent new games to further show the capabilities of AI.

P.S.: Looking forward to seeing a match between robots and the world champions of football, with no red cards, of course.
