
How AI Is Saving the Future

While talk of AI being the number one risk of human extinction goes on, there are many ways in which it is already helping humanity. Recent developments in Machine Learning are helping scientists solve difficult problems, ranging from climate change to finding a cure for cancer.

Understanding the enormous amount of data generated all over the world would be a daunting task for humans alone. Machine Learning helps scientists by providing algorithms that learn from data and find patterns in it.
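As a minimal illustration of what "learning patterns from data" means, the sketch below clusters unlabeled points with k-means. The data is synthetic and purely hypothetical, not from any of the projects mentioned below.

# A minimal sketch of pattern-finding: k-means discovers two groups in
# unlabeled 2-D points. The data is synthetic, purely for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two hidden groups of points; the algorithm is never told about them.
data = np.vstack([rng.normal(0.0, 0.5, size=(100, 2)),
                  rng.normal(3.0, 0.5, size=(100, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)  # close to the true centres (0, 0) and (3, 3)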

Below is a list of a few of the problems AI is working on, helping to find solutions that would otherwise not have been possible:

  • Cancer Diagnostics: Recently, scientists at the University of California, Los Angeles (UCLA) applied Deep Learning to extract features and achieve high accuracy in label-free cell classification [1]. This technique will enable faster cancer diagnostics, and thus save many lives (a toy sketch of this kind of feature-based classifier appears after this list).

  • Low Cost Renewable Energy: Artificial Intelligence is enabling wind power forecasts of unprecedented accuracy, making it possible for Colorado to use far more renewable energy at lower cost.

  • Global Conservation: National Science Foundation (NSF) funded researchers are using Artificial Intelligence to fight poaching and illegal logging. They have created an AI-driven application called Protection Assistant for Wildlife Security (PAWS), which led to significant improvements when tested in Uganda and Malaysia in 2014, thus protecting forests and wildlife.

  • Precision Based Medicine: AI is turning out to be a powerful tool for precision medicine, where treatments are tailor-made for each patient. Custom diagnostics and treatments seem possible because of recent advancements in AI.
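To make the cancer diagnostics item concrete, here is a toy sketch of a classifier that learns a diagnostic decision boundary from extracted features. It uses scikit-learn's built-in breast cancer dataset and a small feedforward network; it is purely illustrative and is not the label-free imaging pipeline of the UCLA work [1].

# A purely illustrative sketch, NOT the UCLA pipeline [1]: train a small
# feedforward network on pre-extracted tumor features to predict a
# benign/malignant label.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # 30 numeric features per sample
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale the features, then fit a network with one small hidden layer.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))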

We at Cere Labs are continuously thinking about how we can develop AI-based applications that help humanity, especially in healthcare. We will keep you updated on the progress we make in this area. AI is not just about machines winning a game of chess or a game of Go.

References:

  1. Chen, C. L. et al. Deep Learning in Label-free Cell Classification. Sci. Rep. 6, 21471; doi: 10.1038/srep21471 (2016).


Popular posts from this blog

Implement XOR in Tensorflow

XOR is considered the 'Hello World' of Neural Networks. It is perhaps the best problem for trying out your first TensorFlow program.

TensorFlow makes it easy to build a neural network with a few tweaks. All you have to do is define a graph, and you have a neural network that learns the XOR function.

Why XOR? Well, XOR is the reason backpropagation was invented in the first place. A single-layer perceptron, although quite successful in learning the AND and OR functions, can't learn XOR (Table 1): it is just a linear classifier, and XOR is a linearly inseparable pattern (Figure 1). No matter how it adjusts its weights, the single-layer perceptron simply cannot learn XOR.

The backpropagation algorithm comes to the rescue. With a hidden layer, the network learns XOR by combining two lines, L1 and L2 (Figure 2). This post assumes you know how the backpropagation algorithm works.



Following are the steps to implement the neural network in Figure 3 for XOR in TensorFlow:
1. Import necessary libraries
impo…
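The original snippet is truncated above; as a stand-in, here is a minimal sketch of an XOR network. It uses the Keras API of TensorFlow 2.x rather than the hand-built graph of the original post, so the layer sizes and hyperparameters are illustrative assumptions.

# A minimal XOR sketch, assuming TensorFlow 2.x with the Keras API
# (the original post builds a TF1-style graph by hand instead).
import numpy as np
import tensorflow as tf

# The XOR truth table: four input pairs and their expected outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

# One hidden layer with two units suffices: each unit can learn one of
# the two separating lines (L1 and L2 in Figure 2).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, activation="tanh", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1),
              loss="binary_crossentropy")
model.fit(X, y, epochs=500, verbose=0)

print(model.predict(X).round())  # expected: 0, 1, 1, 0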

Understanding Generative Adversarial Networks - Part II

In "Understanding Generative Adversarial Networks - Part I" you gained a conceptual understanding of how GAN works. In this post let us get a mathematical understanding of GANs.
The loss functions can be designed most easily using the idea of zero-sum games, in which the sum of the costs of all players is 0. This leads to the minimax formulation for GANs. Let's break it down.
Some terminology:

  • V(D, G): the value function for a minimax game
  • E(X): the expectation of a random variable X, also equal to its average value
  • D(x): the discriminator's output for an input x drawn from the real data, representing a probability
  • G(z): the generator's output when it is given z from the noise distribution
  • D(G(z)): combining the above, the output of the discriminator when given a generated image G(z) as input
Now, as explained above, the discriminator is the maximizer, and hence it tries to maximize V(D, G), while the generator tries to minimize it.
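For reference, the standard minimax value function that these terms describe (the usual formulation from the original GAN paper by Goodfellow et al.) can be written as:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

The discriminator tries to drive D(x) toward 1 and D(G(z)) toward 0, while the generator pushes in the opposite direction on the second term.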