
How is AI Saving the Future

Cere Labs, 12-05-2016
Even as talk of AI being the number one risk of human extinction continues, AI is helping humanity in many ways. Recent developments in Machine Learning are helping scientists solve difficult problems, from modelling climate change to finding cures for cancer.

It would be a daunting task for humans to make sense of the enormous amounts of data generated all over the world. Machine Learning helps by giving scientists algorithms that learn from data and find patterns in it.
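To make "learning from data" concrete, here is a minimal sketch of one of the simplest such algorithms, a 1-nearest-neighbour classifier, written from scratch. The data points are invented purely for illustration:

```python
import math

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(train, point):
    """Label a new point with the label of its closest training example."""
    nearest = min(train, key=lambda ex: euclidean(ex[0], point))
    return nearest[1]

# Toy pattern in 2-D: low feature values belong to class 0, high to class 1.
train = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]
print(predict(train, (0.15, 0.15)))  # -> 0
print(predict(train, (0.85, 0.85)))  # -> 1
```

The "pattern" here is trivially simple, but the principle is the same one that scales up in modern Machine Learning: the model is defined by the data it has seen, not by hand-written rules.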

Below are a few of the problems AI is helping to solve, in ways that would otherwise not have been possible:

  • Cancer Diagnostics: Recently, scientists at the University of California, Los Angeles (UCLA) applied Deep Learning to extract features for high-accuracy label-free cell classification. This technique promises faster cancer diagnostics, and could thus save many lives.

  • Low-Cost Renewable Energy: Artificial Intelligence is enabling wind power forecasts of unprecedented accuracy, making it possible for Colorado to use far more renewable energy at lower cost.

  • Global Conservation: Researchers funded by the National Science Foundation (NSF) are using Artificial Intelligence to combat poaching and illegal logging. Their AI-driven application, Protection Assistant for Wildlife Security (PAWS), led to significant improvements when tested in Uganda and Malaysia in 2014, helping protect forests and wildlife.

  • Precision Medicine: AI is turning out to be a powerful tool for precision medicine, where treatments are tailor-made for the patient. Custom diagnostics and treatments now seem possible because of recent advancements in AI.
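To give a feel for the classification idea behind the UCLA work, here is a deliberately tiny, hedged sketch: a perceptron separating "normal" from "abnormal" cells using two hypothetical features (cell size, optical density). The features, data, and model are invented for illustration only; the actual study used Deep Learning on features extracted from time-stretch imaging.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Fit a linear decision boundary with the classic perceptron rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # 0 when correct; +-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Synthetic, linearly separable "cells": label 1 = abnormal.
cells = [((1.0, 1.2), 0), ((1.1, 0.9), 0), ((3.0, 3.2), 1), ((3.1, 2.9), 1)]
w, b = train_perceptron(cells)
print(classify(w, b, 1.05, 1.0), classify(w, b, 3.05, 3.0))  # -> 0 1
```

Real cell populations are of course not linearly separable in two features, which is exactly why deep networks, which learn their own feature hierarchies, make the label-free approach viable.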

We at Cere Labs are continuously thinking about how we can develop AI-based applications that help humanity, especially in healthcare. We will keep you updated on the progress we make in this area. AI is not just about a computer winning a game of chess or Go.

References:

  1. Chen, C. L. et al. Deep Learning in Label-free Cell Classification. Sci. Rep. 6, 21471; doi: 10.1038/srep21471 (2016).

