
50 Questions about Convolutional Neural Networks

Figure: A typical CNN.

“Any sufficiently advanced technology is indistinguishable from magic.” - Arthur C. Clarke.

Well, the Convolutional Neural Network (CNN) is such a technology: what it does is truly indistinguishable from magic. Read our earlier post, “From Cats to Convolutional Neural Networks”, to understand why CNNs come close to human intelligence. Although the inner workings of a CNN can be explained, the magic remains. Fascinated by CNNs, we compiled fifty questions whose answers help unravel the mystery of why a CNN can classify images, or almost any kind of input, so well.

  1. What is convolution?
  2. What is pooling?
  3. Which pooling function is preferred - Max or Average?
  4. What is the role of activation functions in a CNN?
  5. Why is ReLU preferred over Sigmoid in a CNN?
  6. Why does adding more layers increase the accuracy of the network?
  7. What is the intuition behind a CNN?
  8. What is stride?
  9. Is it necessary to include zero-padding?
  10. What is parameter sharing, and why is it important?
  11. What would happen if we did not include a pooling layer in a CNN? Why is pooling so important?
  12. What brings CNNs closer to biological systems?
  13. How do we decide how much training, test, and validation data to give the network?
  14. What is cross-validation, and why is it important?
  15. Which cross-validation technique is better - bootstrap or k-fold?
  16. When does a CNN fail?
  17. How can we know for certain whether the network fails because of inadequate input or because it has too few layers?
  18. What are the hidden layers doing?
  19. How does the backpropagation algorithm work across the network?
  20. Can one do continuous learning on a CNN, or must training be completed before conducting inference?
  21. Why are GPUs necessary to train a CNN?
  22. Why does using a pre-trained network increase the learning speed for new categories?
  23. When do we say a CNN is not able to learn?
  24. Why is it sufficient to train only the fully connected layer of a pre-trained network to learn new categories?
  25. How important is it to provide the right set of data to train a CNN?
  26. Can we use the features learned by the inner layers of a CNN?
  27. What is generalization?
  28. What is overfitting?
  29. Why is it important to apply distortions to input images when training an image classifier?
  30. What are hyper-parameters?
  31. What is an epoch?
  32. What decides the number of examples per epoch?
  33. What is gradient descent?
  34. What is a loss function?
  35. Why is cross-entropy the preferred cost function in a CNN?
  36. Which one is better - batch gradient descent or stochastic gradient descent?
  37. What is the importance of the learning rate in training a CNN?
  38. Which method is optimal - keeping the learning rate constant or changing it as the network matures?
  39. How has the CNN reduced the job of data scientists in terms of feature selection?
  40. Why is starting the CNN’s training with random weights preferable to starting with zero weights?
  41. Why is a Gaussian distribution the preferred choice for random weights?
  42. How does regularization help in preventing overfitting?
  43. How is a trained CNN evaluated?
  44. What is the importance of bias in training a CNN? Is it that significant?
  45. What are the best practices followed in CNNs?
  46. Why is training a CNN a costly affair?
  47. Why can a CNN be applied to any kind of learning, including images, natural language processing, and speech?
  48. Why is a CNN capable of computing any kind of function?
  49. How do we tweak the number of convolution and pooling functions in each layer?
  50. What does pre-processing in a CNN mean?
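As a concrete taste of the first few questions - convolution, stride, zero-padding, ReLU, and max pooling - here is a minimal NumPy sketch of these building blocks. It is only an illustrative toy (real frameworks implement these far more efficiently, and the `edge` kernel below is an arbitrary choice), but it shows the mechanics:

```python
import numpy as np

def conv2d(image, kernel, stride=1, pad=0):
    """2-D convolution (strictly, cross-correlation, as in most deep-learning
    libraries) with optional zero-padding and stride."""
    if pad:
        image = np.pad(image, pad)          # surround the image with zeros
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)   # one dot product per position
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the strongest response per window."""
    h, w = feature_map.shape
    fm = feature_map[:h - h % size, :w - w % size]   # drop ragged edges
    return fm.reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[-1., 1.]])            # toy horizontal-gradient kernel
fmap = conv2d(image, edge)              # shape (4, 3): kernel width shrinks the map
strided = conv2d(image, edge, stride=2) # shape (2, 2): stride 2 halves each axis
padded = conv2d(image, edge, pad=1)     # shape (6, 5): zero-padding grows the map
activated = np.maximum(fmap, 0)         # ReLU: keep positive responses only
pooled = max_pool(activated)            # shape (2, 1): pooling summarizes regions
```

Notice how each step changes the feature-map size - exactly the trade-offs that stride, padding, and pooling control in a real network.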

Hope we have covered most of the questions needed to justify the magic of Convolutional Neural Networks. If you have any more questions about CNNs, please feel free to add them in the comments.

