DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks

Whilst the final image quality might not quite be there yet, there is surely more to come from this extremely promising area of research.

We present an end-to-end learning approach for motion deblurring, based on a conditional GAN and a content loss. It improves on the state of the art in terms of peak signal-to-noise ratio, structural similarity measure and visual appearance. The quality of the deblurring model is also evaluated in a novel way on a real-world problem: object detection on (de-)blurred images. The method is five times faster than its closest competitor. In addition, we present a novel method of generating synthetic motion-blurred images from sharp ones, which allows realistic dataset augmentation. The model, training code and dataset are available at the repository linked below.
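To make the training objective more concrete, here is a minimal PyTorch-style sketch of how an adversarial term and a VGG-based content (perceptual) loss can be combined for the generator. The VGG layer cut-off, the WGAN-style critic term and the weighting factor `lam` are assumptions for illustration only, not the authors' exact implementation (see the official repository below for that).

```python
import torch.nn as nn
from torchvision.models import vgg19

class ContentLoss(nn.Module):
    """Perceptual loss: MSE between VGG-19 feature maps of the restored
    and the sharp image. Cutting the feature extractor at conv3_3 is an
    assumption for this sketch."""
    def __init__(self):
        super().__init__()
        vgg = vgg19(weights="IMAGENET1K_V1").features[:15].eval()
        for p in vgg.parameters():
            p.requires_grad = False  # keep VGG fixed during training
        self.vgg = vgg
        self.mse = nn.MSELoss()

    def forward(self, restored, sharp):
        return self.mse(self.vgg(restored), self.vgg(sharp))

def generator_loss(critic, restored, sharp, content_loss, lam=100.0):
    # Adversarial term (WGAN-style critic score on the restored image)
    # plus a weighted content term anchoring the output to the sharp image.
    adv = -critic(restored).mean()
    return adv + lam * content_loss(restored, sharp)
```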

Source code: https://github.com/KupynOrest/DeblurGAN
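The synthetic-blur augmentation mentioned in the abstract builds blur kernels from simulated camera-motion trajectories. The snippet below is a heavily simplified sketch of that idea: a random-walk trajectory is rasterised into a small kernel and convolved with a sharp image. The kernel size, step count and direction noise are assumed values; the paper's Markov-process trajectory model and sub-pixel interpolation are omitted here.

```python
import numpy as np
from scipy.ndimage import convolve

def random_motion_kernel(size=17, steps=60, seed=None):
    """Rasterise a smooth random-walk trajectory onto a size x size grid
    and normalise it into a motion-blur kernel (simplified stand-in for
    the trajectory-based kernels used in the paper)."""
    rng = np.random.default_rng(seed)
    kernel = np.zeros((size, size))
    x, y = size / 2.0, size / 2.0
    angle = rng.uniform(0, 2 * np.pi)
    for _ in range(steps):
        angle += rng.normal(scale=0.3)          # slowly varying direction
        x = np.clip(x + np.cos(angle), 0, size - 1)
        y = np.clip(y + np.sin(angle), 0, size - 1)
        kernel[int(y), int(x)] += 1.0
    return kernel / kernel.sum()

def motion_blur(sharp, kernel):
    # Convolve each colour channel of an H x W x C image with the kernel.
    return np.stack(
        [convolve(sharp[..., c], kernel, mode="reflect")
         for c in range(sharp.shape[-1])],
        axis=-1,
    )
```

Pairs of (blurred, sharp) images produced this way can then be mixed into the training set alongside real blurred photographs.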


