
TensorFlow: A new generation of Google's Machine Learning Open Source Library


Although Machine Learning has dominated the Artificial Intelligence scene for a long time, easy access to open source machine learning libraries has only recently become possible. With the launch of TensorFlow, Google has made it possible for businesses to add intelligence to their products and make sense of their data.

TensorFlow joins other popular open source Machine Learning libraries such as Theano and Torch. What sets TensorFlow apart is the strong backing of Google, one of the early pioneers of AI research. Using its first-generation system, DistBelief, Google delivered many successful applications in areas such as Computer Vision, Speech Recognition, Natural Language Processing, Information Extraction, Geographic Information Extraction, Computational Drug Discovery, and Language Translation. TensorFlow is Google's second-generation machine learning system.

Teaching machines has never been easier. TensorFlow gives you access to many of the same machine learning algorithms that Google engineers use to add intelligence to their products.
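To get a feel for how the library works, here is a minimal sketch that fits a straight line to data with gradient descent. It assumes a TensorFlow 2.x installation; the variable names and the toy target function y = 3x + 2 are ours, chosen purely for illustration.

```python
# Minimal TensorFlow sketch: learn w and b in y = w*x + b from toy data.
# Assumes TensorFlow 2.x (pip install tensorflow).
import numpy as np
import tensorflow as tf

# Toy dataset generated from the target function y = 3x + 2.
x = np.linspace(-1.0, 1.0, 100).astype(np.float32)
y = 3.0 * x + 2.0

# Trainable parameters, both starting at zero.
w = tf.Variable(0.0)
b = tf.Variable(0.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(200):
    with tf.GradientTape() as tape:
        # Mean-squared error between predictions and targets.
        loss = tf.reduce_mean((w * x + b - y) ** 2)
    grads = tape.gradient(loss, [w, b])
    opt.apply_gradients(zip(grads, [w, b]))

print(float(w), float(b))  # learned parameters approach 3.0 and 2.0
```

The same pattern — define variables, compute a loss, let TensorFlow differentiate it, and apply the gradients — scales from this toy regression up to deep neural networks.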

To learn more, visit TensorFlow's official website.

With the world heading towards making machines more intelligent, we at CereLabs are closely watching how our research and engineering teams can benefit from the growing set of open source Machine Learning libraries.

As we proceed on this journey, we will keep sharing our experience with TensorFlow. Keep watching this blog for updates.
