
Precision Recall Method: When Accuracy is as Important as Outcome for your ML Model


During the COVID-19 pandemic, over 50 crore (500 million) individuals were infected, and nearly 60 lakh (6 million) people lost their lives, many due to miscalculations in handling the disease. Though different nations came up with different solutions, nothing worked on a large scale.

Why? Categorizing people based on their infection status, i.e., positive or negative, was a severe problem; correctly identifying an infected person as positive was a difficult task. This can be classified as an imbalanced classification problem. Other examples of imbalanced classification problems include spam email detection and financial fraud detection.

Problems like these occupy a special place in the Machine Learning (ML) and Data Science domains. In such problems, accuracy is as important as the outcome, and sometimes it plays an even more significant role than the outcome itself. Identifying a positively infected person in that scenario is as crucial as catching the junk email that could empty your bank account within a fraction of a second.

Therefore, the precision and recall method becomes very important. In this article, we will explore it in detail.

Key concepts to understand before understanding Precision Recall

False positives (FP)

Suppose a person receives an email stating they have won something big. The person believes that email and shares their bank account details in the following email.

What if that email turns out to be spam or a phishing attempt? Can you imagine what will happen to their bank account?

In the precision and recall classification method, this problem is called a False Positive (FP). It is also known as a Type 1 error.

False negatives

Imagine the above scenario again. The person receives an email stating the same thing and asking for the bank account details. This person is smart and knows about phishing and spam emails. They simply mark it as junk and delete it.

But the story doesn’t end here, as there is a catch: what if the email was for a bounty or prize money they had really won? What happens now?

This kind of problem is known as a False Negative (FN) or Type 2 error in the classification precision and recall method.
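These two error types can be counted directly from a model's predictions. Below is a minimal sketch in Python; the labels and predictions (1 = positive, 0 = negative) are made up purely for illustration:

```python
# Hypothetical ground-truth labels and model predictions (1 = spam, 0 = not spam).
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 1, 0, 1, 0, 0, 1, 0]

# Count each outcome by comparing prediction against ground truth.
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives (Type 1 error)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives (Type 2 error)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # true negatives

print(tp, fp, fn, tn)  # 3 1 1 3
```

These four counts form the confusion matrix, from which both precision and recall are computed.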

What is the Precision Recall method?

Precision

Precision is the ratio of true positives (TP) to all predicted positive results. Thus, precision measures how many of the data points our Machine Learning (ML) model flags are actually relevant. In short, precision tries to answer the following question-

What proportion of positive identifications was actually correct? Or, what percentage of our results were relevant?

How to calculate precision?

Mathematically, precision can be defined as-

Precision = True Positives (TP) / (True Positives (TP) + False Positives (FP))

Or

Precision = True Positives (TP) / Total Predicted Positives

(Precision Formula)
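The formula above can be sketched as a small Python helper; the counts passed in are hypothetical, taken for illustration only:

```python
def precision(tp: int, fp: int) -> float:
    """Precision = TP / (TP + FP): the share of positive predictions that were correct."""
    if tp + fp == 0:
        return 0.0  # no positive predictions made; define precision as 0 here
    return tp / (tp + fp)

# Example: 3 true positives and 1 false positive.
print(precision(3, 1))  # 0.75
```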

Recall

Recall is the ratio of true positives (TP) to all actual positive results. Recall helps us measure how many of the truly positive samples our model manages to find. In short, recall tries to answer the following question-

What proportion of actual positives was identified correctly? Or, what percentage of the relevant results did we find?

How to calculate recall?

Mathematically, recall can be defined as-

Recall = True Positives (TP) / (True Positives (TP) + False Negatives (FN))

Or

Recall = True Positives (TP) / Total Actual Positives

(Recall Formula)
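Like precision, recall can be written as a one-line helper; again the counts below are illustrative, not from a real model:

```python
def recall(tp: int, fn: int) -> float:
    """Recall = TP / (TP + FN): the share of actual positives the model found."""
    if tp + fn == 0:
        return 0.0  # no actual positives in the data; define recall as 0 here
    return tp / (tp + fn)

# Example: 3 true positives and 1 false negative (one positive was missed).
print(recall(3, 1))  # 0.75
```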

How to use Precision Recall as an ML classification method?

For better accuracy of your ML model, you should calculate and examine both precision and recall. However, this is a balancing act: tuning the model to increase precision typically reduces recall, and vice versa.
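This trade-off can be seen by sweeping the decision threshold of a classifier. The predicted scores and labels below are invented for illustration; notice that lowering the threshold raises recall but lowers precision:

```python
# Hypothetical predicted probabilities from a classifier, with true labels.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   1,   0,   0]

def pr_at_threshold(t: float) -> tuple[float, float]:
    """Return (precision, recall) when predicting positive for score >= t."""
    preds = [1 if s >= t else 0 for s in scores]
    tp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 1)
    fp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 0)
    fn = sum(1 for p, l in zip(preds, labels) if p == 0 and l == 1)
    prec = tp / (tp + fp) if tp + fp else 1.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

for t in (0.75, 0.5, 0.25):
    print(t, pr_at_threshold(t))
# 0.75 -> precision 1.0,  recall 0.5   (strict: few positives flagged, all correct)
# 0.5  -> precision 0.75, recall 0.75
# 0.25 -> precision ~0.67, recall 1.0  (lenient: all positives found, more mistakes)
```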

So, what is the solution to this problem?

A simple solution is to rely on the priority of your model. Based on its purpose and on which type of error is more costly, you can decide which metric to favor.

What other solutions do we have?

Another important solution is the harmonic mean of precision and recall, also known as the F1 score. The F1 score formula can be written as-

F1 score = 2 × (Precision × Recall) / (Precision + Recall)

(F1 Score Formula)

The F1 score is a more convenient and apt classification metric, as it accounts for both precision and recall in a single number.
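The F1 formula above translates directly into code. The inputs in the example are hypothetical precision and recall values, chosen only to show the behavior of the harmonic mean:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.75, 0.75))            # 0.75 (balanced precision and recall)
print(round(f1_score(1.0, 0.5), 3))    # 0.667 (the weaker metric drags F1 down)
```

Because it is a harmonic mean, F1 is always pulled toward the lower of the two values, so a model cannot score well by excelling at one metric while ignoring the other.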

Why use Precision Recall over other classification methods?

The application of precision and recall depends on the issue being addressed.

When the cost of a false positive is high, that is, when every sample predicted as positive must actually be positive even if some true positives are missed, you should focus on precision.

On the other hand, when the cost of a false negative is high, that is, when you aim to identify every positive sample even if some negative samples are incorrectly flagged as positive, you should focus on recall.

Difference between Precision and Recall

| Recall | Precision |
| --- | --- |
| Only the positive samples are needed to calculate the recall of a model; all negative samples are ignored. | Calculating the precision of a model requires considering the negative samples that get classified as positive, along with the true positives. |
| It tells us how many of the actual positive samples the ML model classified correctly. | It tells us how many of the samples the model classified as positive are actually positive. |
| Recall of the ML model depends only on the positive samples and is independent of the negative samples. | Precision of the ML model depends on both the positive and the negative samples (through false positives). |
| Recall cares about correctly classifying all positive samples; it does not penalize negative samples classified as positive. | Precision penalizes negative samples classified as positive; it does not penalize positive samples classified as negative. |

Use cases: Precision Recall method

In real-life situations, each kind of error, False Positive vs False Negative, has a different interpretation. In most cases, one is more costly than the other.

Let's take a look at some of the real-life use cases of Precision Recall.

1. Email spam detection: (Precision focused)

It is acceptable to miss out on a spam email being detected (low recall), but any legitimate or important email should not be sent into the spam folder (false positive).

2. Tests for medical conditions (Recall focused)

It's okay to diagnose a healthy person with cancer (false positive) and follow up with additional medical tests. However, it is not acceptable to fail to identify a person with cancer or classify them as healthy (false negative) because the patient's life is at risk.

3. Criminal death penalty (Precision focused)

It is acceptable to not punish a criminal (low recall), but it is unacceptable to incriminate an innocent person (false positive).

4. Flagging fraudulent transactions (Recall focused)

It is acceptable to label a legitimate transaction as fraudulent (false positive); it can always be reverified through additional checks. However, it is not acceptable to consider a fraudulent transaction legitimate (false negative).

Precision Recall approach: Closing notes

Thus, the precision recall approach helps optimize our classification-based Machine Learning (ML) models. It is most beneficial when we achieve a balance between precision and recall.

Precision Recall pro tip: You won't get much from memorizing the metrics. Instead, think about which misclassifications are most dangerous and how you can prevent them. Also, keep a healthy balance between recall and precision. Machine learning models are just a means to an end; they don't represent the ultimate goal.

Frequently Asked Questions

What is the difference between precision and recall?

Precision and recall are two crucial criteria for evaluating models. Precision is the percentage of your results that are relevant, whereas recall is the percentage of the total relevant results that your algorithm successfully categorized.

How is a model's accuracy measured?

You use the model to predict the response on an evaluation dataset (held-out data) and then compare the predicted target to the actual answer (ground truth). Machine learning (ML) uses a variety of metrics to measure a model's prediction accuracy, and the ML task determines which accuracy metric to use.

When is recall more important than precision?

Recall is more crucial than precision when the cost of acting is low but the potential cost of passing up a candidate is high.
