
Why Evaluation Metrics Matter


This is a follow-up to "The Importance of F1 Score", in which we covered the technical aspects of evaluating a Machine Learning model. In this article we will see how different evaluation metrics can help us design solutions based on the problem statement and domain.

I would like to distinguish evaluation metrics with respect to the following criteria:

When there is life involved:

In the case of aircraft or ships, a Machine Learning algorithm that misses a failure can be a costly affair, since lives are at stake. Similarly, in cancer detection, failing to catch a true positive can delay treatment and hence be life threatening.

In cases such as detecting failures in aircraft, getting a few false positives is acceptable, but failing to detect true positives can be too expensive in terms of loss of life. Thus it becomes important to use Recall as a measure, since every positive detection matters, even if it turns out to be a false positive. It is reasonable to spend resources analyzing a false positive in such cases.
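To make Recall concrete, here is a minimal sketch (my own illustration, not from the original article) using scikit-learn with made-up labels for a hypothetical failure detector, where Recall = TP / (TP + FN):

from sklearn.metrics import recall_score

# Hypothetical ground truth and predictions for a failure detector:
# 1 = failure, 0 = normal operation (values are illustrative only).
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]

# Recall = TP / (TP + FN): the fraction of real failures that were caught.
print(recall_score(y_true, y_pred))  # 0.8 -> one real failure was missed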

When there is cost involved:

In cases where the cost of responding to a false positive is too high, and where no lives are involved, it is acceptable if a few failures go undetected. Take for example a power plant located in a remote village. Responding to a false positive from a sensor can be costly, since the transportation cost of checking whether a fault has really occurred is high. In such a scenario it becomes important to consider Precision as a measure. The model should therefore be trained for high precision, as the quality of each detection made by the Machine Learning algorithm matters.
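A similar sketch for Precision (again with hypothetical sensor labels, not taken from the article), where Precision = TP / (TP + FP):

from sklearn.metrics import precision_score

# Hypothetical sensor alerts: 1 = fault flagged, 0 = no fault (illustrative only).
y_true = [0, 0, 1, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]

# Precision = TP / (TP + FP): how many of the raised alerts were real faults.
print(precision_score(y_true, y_pred))  # ~0.67 -> one costly trip was wasted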


When two or more models need to be compared:

In cases where we need to compare two or more models, going with a single metric like Precision or Recall may not be enough. In such cases it is advisable to consider the harmonic mean of the two metrics, which is where the F1 score matters. The model with the higher F1 score should be preferred, except when one of the first two criteria applies.
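As a rough sketch of such a comparison (model names and numbers are made up for illustration), the F1 score, being the harmonic mean of Precision and Recall, can be computed for each candidate model:

from sklearn.metrics import f1_score

# Hypothetical predictions from two competing models on the same test labels.
y_true  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
model_a = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
model_b = [1, 1, 1, 1, 1, 1, 0, 0, 1, 0]

# F1 = 2 * Precision * Recall / (Precision + Recall), the harmonic mean.
print(f1_score(y_true, model_a))  # 0.80
print(f1_score(y_true, model_b))  # ~0.83

On these made-up numbers the second model would be preferred on F1 alone, unless one of the first two criteria pushes us towards pure Recall or Precision.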

Understanding evaluation metrics helps you decide which model to choose and which metric matters for which domain. It also helps you interact with your client and understand different Machine Learning algorithms better. Finally, it gives you the ability to understand a problem statement well, and thus to implement the right Machine Learning solution.

Evaluation metrics should be the main area of focus while designing Machine Learning solutions.


By,
Siddhesh Wagle,
Research Consultant,
Cere Labs





 

