
Buddha vs Child Paradigm



“Artificial Intelligence (AI) is like a child, who needs time to learn and adapt, whereas a typical IT system is like Buddha who knows everything about the problem it was supposed to solve.”   
          - Devesh Rajadhyax, Founder, Cere Labs.


Images: By Purshi - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=9827197
By Shaun Mitchem - Diggy starts to learn to walk, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=4762045




In this post, let us elaborate on the Buddha vs Child Paradigm that Devesh coined to differentiate between conventional IT systems and AI. Knowing the difference is essential because it helps in building the right attitude towards implementing AI systems. A typical IT system, such as an ERP, does the job it was implemented for; it is assumed that it will solve the problem it was built to solve. Take, for example, an accounting system like Tally. It will help you manage your accounts in a highly accurate manner. Such a system is like a Buddha, enlightened from the start. We do not expect it to make any mistakes (apart from the initial software bugs that get rectified). Or take a calculator. A calculator will always give you 1 + 2 = 3, no matter how many times you perform the operation, as long as the calculator is in working condition. Thus we expect an IT system to work perfectly, and it does indeed.
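
To make the contrast concrete, here is a minimal sketch of such a "Buddha" system. The function name is ours, purely for illustration: its rule is written in by hand rather than learned, so its answer is exact from the very first call.

# A conventional "Buddha" system: its rule is programmed in, not learned,
# so it is exact from the first call to the millionth.
def add(a, b):
    return a + b

assert add(1, 2) == 3   # always 3, every single time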

But consider an AI system. A typical AI system learns from data. Initially it is like a child, finding it difficult even to crawl, but slowly and steadily it learns to walk as it is exposed to more situations, which in the case of AI means more data. The maturity of an AI system comes with more and more exposure to data. In the calculator example, an AI system would be given many examples such as 1 + 2 = 3, and over a period of time it would learn how to perform addition. Initially it might fail to predict 3, but it will come closer and closer to 3, as sketched below.
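
Here is a minimal, hypothetical sketch of that idea in Python: a tiny two-weight model that is only shown examples of sums and gradually nudges its weights towards the right behaviour. The weights, learning rate, and number of steps are illustrative choices, not anything from an actual Cere Labs system.

import random

# The "child" system: it is never told how to add; it only sees examples
# of inputs and their sums, and adjusts two weights to imitate them.
w1, w2 = random.random(), random.random()   # starts out knowing almost nothing
lr = 0.1                                    # learning rate

for step in range(5000):
    a, b = random.random(), random.random()
    target = a + b                  # the example it is shown, e.g. 1 + 2 = 3
    prediction = w1 * a + w2 * b    # its current guess
    error = prediction - target
    # gradient descent on the squared error: nudge each weight a little
    w1 -= lr * error * a
    w2 -= lr * error * b

print(round(w1, 3), round(w2, 3))   # both weights drift towards 1.0
print(w1 * 1 + w2 * 2)              # early on this misses 3; by now it is close

Run early in training, the last line is noticeably off; run after many examples, it lands very close to 3, which is exactly the maturing behaviour the paradigm describes.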

As you may have noticed, there is a paradigm shift in understanding and using an AI system compared to an IT system. Let us summarize a few differences:


IT System | AI System
Mature enough to solve its intended problem right from the start. | Keeps maturing as it is exposed to more data.
Results are expected to be accurate. | Results keep improving.
Patience from IT executives is required only until implementation. | Patience is required throughout the lifetime of the system.
Mistakes in the output cannot be tolerated. | Mistakes in the output can be tolerated.
100% accuracy. | Accuracy approaches 100%, but is mostly probabilistic.
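
The last two rows can be seen in a few lines of code. The sketch below assumes scikit-learn is installed and uses its bundled digits dataset purely as a stand-in: the same model is trained on progressively more examples, and its test accuracy typically rises with more data while remaining probabilistic rather than a guaranteed 100%.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for n in (50, 200, 800, len(X_train)):      # "more data" means more exposure
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    acc = model.score(X_test, y_test)       # improves with n, but never guaranteed 100%
    print(f"trained on {n:4d} examples -> test accuracy {acc:.3f}")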

AI systems are the only option for solving problems that are not well defined, such as prediction or anomaly detection. That is why people tolerate their childish behavior.

It is therefore essential to understand this difference between an IT system and an AI system. It will help an executive support AI researchers and engineers in achieving great results over a period of time.

By,
Siddhesh Wagle,
Head of Research,
Cere Labs
