If you’ve ever taken a photo through an airplane window, a dirty skyscraper window, or a fence at the zoo, you’ve probably ended up with unwanted reflections and obstructions that ruin the shot. MIT and Google have tackled this problem by creating an advanced algorithm that erases unwanted, occluding elements from a digital image.
Essentially, Google and MIT have combined forces to create an algorithm that merges multiple digital images to isolate and remove reflections, fences, unwanted backgrounds, and similar obstructions. The photographer takes several shots of the desired scene while moving the camera slightly to gather more visual information; because the background and the obstruction sit at different distances from the camera, they shift by different amounts between frames. With those images as input, the algorithm can separate the obstruction from the scene behind it.
To get a better idea of how it works, check out this video from MIT and Google that explains the process:
We present a unified computational approach for taking photos through reflecting or occluding elements such as windows and fences. Rather than capturing a single image, we instruct the user to take a short image sequence while slightly moving the camera. Differences that often exist in the relative position of the background and the obstructing elements from the camera allow us to separate them based on their motions, and to recover the desired background scene as if the visual obstructions were not there. We show results on controlled experiments and many real and practical scenarios, including shooting through reflections, fences, and raindrop-covered windows. – from A Computational Approach for Obstruction-Free Photography
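To give a rough sense of the motion-parallax idea the abstract describes, here is a minimal sketch in Python using OpenCV and NumPy. It is not the paper’s actual algorithm (which solves a more sophisticated layer-decomposition optimization); it simply aligns a burst of frames on the background and takes a per-pixel median, so an obstruction that moves differently from the background drops out. The file names and the `remove_obstruction` helper are hypothetical.

```python
# A simplified illustration of motion-based obstruction removal, NOT the
# MIT/Google method itself: register frames on the background, then use a
# per-pixel median so the differently-moving obstruction is rejected.
import cv2
import numpy as np

def remove_obstruction(paths, ref_index=0):
    frames = [cv2.imread(p) for p in paths]
    ref = frames[ref_index]
    ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    aligned = [ref.astype(np.float32)]
    for i, frame in enumerate(frames):
        if i == ref_index:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        matches = sorted(matcher.match(des, des_ref), key=lambda m: m.distance)
        src = np.float32([kp[m.queryIdx].pt for m in matches[:500]]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches[:500]]).reshape(-1, 1, 2)
        # RANSAC fits the dominant motion, which is usually the background's,
        # so the warp registers the background and leaves the fence or
        # reflection misaligned from frame to frame.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        warped = cv2.warpPerspective(frame, H, (ref.shape[1], ref.shape[0]))
        aligned.append(warped.astype(np.float32))

    # With the background registered, the obstruction lands on different
    # pixels in each frame; the median treats it as an outlier and removes it.
    return np.median(np.stack(aligned), axis=0).astype(np.uint8)

result = remove_obstruction(["shot1.jpg", "shot2.jpg", "shot3.jpg",
                             "shot4.jpg", "shot5.jpg"])
cv2.imwrite("background.jpg", result)
```

A plain median works for opaque occluders like fences; for semi-transparent reflections the paper instead decomposes each frame into two layers, which is well beyond this sketch.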
To learn more about the study, click here to read the full paper.