Dealing with Blurring

May 25, 2023
Created by Neo Yin
In Progress
Broad Research

What is blurring, technically speaking?

Technically speaking, the operation of blurring is the application of a kernel, via convolution, to a reference image to create a less sharp, smoother look.
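As a concrete illustration, here is a minimal NumPy sketch of Gaussian blurring: build a normalized Gaussian kernel and convolve it with a grayscale image. Function names and padding choices are illustrative, not a reference implementation:

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Build a normalized 2-D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()  # weights sum to 1, so blurring preserves brightness

def blur(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Blur a 2-D grayscale image by direct convolution (same size, edge padding)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i+kh, j:j+kw] * kernel)
    return out
```

Because the kernel is normalized and non-negative, the output is a local weighted average: sharp transitions are spread over neighboring pixels, which is exactly the "less sharp, smoother look."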

A better question, and one I find very intriguing:

However, there is a different question about naturally occurring blurriness, perhaps due to an ill-focused camera, squinting your eyes, myopia, a dirty camera lens, etc. Why is convolution with a kernel a good modelling choice for these naturally occurring image distortions, which kernel should we choose, and how do we justify that choice?


How is the blurring of an image measured?


How to measure a computer vision model’s robustness toward blurring?

The immediate approach is to probe for the minimal sacrifice in image quality metrics (measured between a reference image and its blurred version), applied through blurring data augmentations, that is needed to cause a misclassification.
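This probe might be sketched as a sweep over blur strengths: increase sigma until the model's prediction flips. The sketch below assumes some trained `predict` function; the toy peak-detector classifier is only a stand-in so the example runs end to end:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def minimal_misclassifying_sigma(image, predict, true_label, sigmas):
    """Sweep blur strengths (ascending) and return the first sigma whose
    Gaussian blur flips the prediction away from true_label, or None.
    predict is any image -> label function, standing in for a trained model."""
    for sigma in sigmas:
        if predict(gaussian_filter(image, sigma)) != true_label:
            return sigma
    return None

# Toy stand-in classifier: label 1 iff the image contains a bright peak.
predict = lambda img: int(img.max() > 0.5)

img = np.zeros((16, 16))
img[8, 8] = 1.0  # a single bright pixel, true label 1
sigma_star = minimal_misclassifying_sigma(
    img, predict, 1, np.linspace(0.1, 3.0, 30)
)
```

The returned `sigma_star` is a crude robustness score: the larger it is, the more blurring the model tolerates before misclassifying. A fidelity metric such as SSIM or PSNR at `sigma_star` would turn it into the "sacrifice to image quality" described above.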

A good way to find blur-robustness metrics is to survey the measurements used in papers on techniques that promote blur robustness.


What are methods in deep computer vision that promote model robustness toward blurred and blurring images?

  • Data augmentation:
• The simplest and most widely applicable method is to apply blurring to the training examples, during the pre-training stage, the task-specific training stage, or both.
  • Adversarial training:
• We can train two networks simultaneously. The first network is an image classifier, latent-representation encoder, etc., and the second is an adversary that learns the optimal blurring method for fooling the first, for instance by learning the best blurring kernel. The adversary is regularized by fidelity between the blurred image and the original, to prevent model collapse.
  • Blind Deblurring:
• Blind deblurring consists of feeding images through a sharpening/deblurring algorithm, either fixed or learned, before passing them to the classification model.
We might not want to use the methods below: they sound like they involve changing the architecture specifically to deal with blurring, which may be overkill for what we are interested in, and may sacrifice other things we care about, such as predictive performance and latent-space structure:
  • Blur-invariant architecture:
  • Blur-invariant Losses:
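The data-augmentation option from the list above is simple enough to sketch directly: with some probability, blur each training image by a randomly drawn sigma before it reaches the model. Parameter names and ranges here are illustrative defaults, not recommendations:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def random_blur_augment(batch, rng, p=0.5, sigma_range=(0.5, 2.0)):
    """Blur augmentation for a batch of grayscale images of shape (N, H, W).
    Each image is independently blurred with probability p, using a Gaussian
    kernel whose sigma is drawn uniformly from sigma_range."""
    out = batch.copy()
    for i in range(batch.shape[0]):
        if rng.random() < p:
            sigma = rng.uniform(*sigma_range)
            out[i] = gaussian_filter(batch[i], sigma)
    return out

# Usage: drop this into the input pipeline before the forward pass.
rng = np.random.default_rng(0)
batch = np.zeros((4, 8, 8))
batch[:, 4, 4] = 1.0
augmented = random_blur_augment(batch, rng, p=1.0)
```

Because the augmentation only touches the input pipeline, it works for both the pre-training and task-specific stages without any change to the architecture or loss.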