Researchers Develop New Process That Determines Who’s Behind the Camera
Two researchers from the University of Pittsburgh, Christopher Thomas and Adriana Kovashka, recently conducted a study using computer vision to identify the photographer behind the camera. (Computer vision, per Wikipedia, is a field that includes methods for acquiring, processing, analyzing, and understanding images and, in general, high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the form of decisions.) Thomas and Kovashka used a dataset of 119,806 photographs from famous photographers such as Dorothea Lange, Ansel Adams, Jack Delano, and Carl Van Vechten in order to find out who was behind the camera.
Computer vision has previously been applied successfully to attributing paintings to their artists, but not photographs to their photographers. The researchers designed their study, titled “Who’s Behind the Camera? Identifying the Authorship of a Photograph,” to determine whether computer vision could identify the photographer behind a photograph.
According to their study, the task is more challenging than traditional artwork attribution because paintings are much easier to identify through visual cues such as brush strokes. To make the study work, they had to test high-, intermediate-, and low-level features to match photographer to photograph.
The low-level features are cues with no semantic meaning (e.g., color histograms and holistic representations of scene properties such as openness and ruggedness); the intermediate-level features are local features computed with standard detectors and descriptors; and the high-level features include Object Bank, which uses a spatial pooling approach to encode where objects are detected in the descriptor, as well as a deep convolutional network, using 60 million parameters and 500,000 neurons to identify different features.
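To illustrate the simplest of these cues, a low-level color histogram feature can be sketched in a few lines of Python. This is a minimal sketch, not the authors' exact implementation; the bin count and normalization are illustrative assumptions:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Concatenate per-channel intensity histograms into one feature vector.

    image: H x W x 3 array of uint8 RGB values.
    Returns a normalized vector of length 3 * bins.
    """
    feats = []
    for channel in range(3):
        hist, _ = np.histogram(image[:, :, channel],
                               bins=bins, range=(0, 256))
        feats.append(hist)
    vec = np.concatenate(feats).astype(float)
    return vec / vec.sum()  # normalize so images of any size are comparable

# Toy usage: a random stand-in "photograph"
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
feature = color_histogram(img)
```

A feature like this captures the overall color palette of an image while ignoring where any particular object appears, which is exactly why it carries no semantic meaning.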
The study used the work of 25 well-known photographers, randomly sampling 20 images per photographer to create a test set of 500 images. The researchers trained a multiclass Support Vector Machine (SVM) on each of the low-, intermediate-, and high-level features, using linear kernels and class weights so that every photographer carried equal weight during experimentation. They repeated this sampling 10 times, yielding 200 test images per photographer.
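The classification step can be sketched with scikit-learn as a hypothetical stand-in for the authors' actual setup. The synthetic Gaussian "features" below are placeholders; the point is the linear, class-balanced multiclass SVM:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in: 3 "photographers", 50 feature vectors each,
# drawn from shifted Gaussians so the classes are separable.
n_per_class, dim = 50, 24
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, dim))
               for c in range(3)])
y = np.repeat(np.arange(3), n_per_class)

# Linear multiclass SVM; class_weight='balanced' gives each
# photographer equal influence, mirroring the study's equal weighting.
clf = LinearSVC(class_weight="balanced", C=1.0)
clf.fit(X, y)

accuracy = clf.score(X, y)
```

In the real experiment each row of `X` would be one of the low-, intermediate-, or high-level feature vectors described above, and each label in `y` a photographer.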
The results indicate that this computer vision model can use higher-level features, the semantic information collected within photographs, to distinguish the photographer behind them. The findings suggest the researchers have constructed a new way of “fingerprinting” photographers through their images.
With this process, it may become possible to confirm whether or not an image was made by a famous photographer, and to match unknown photographs to photographers. Another application may be helping search engines filter out false positives and correctly attribute an image to its author.
The study has created a new model for determining who is behind the camera, and a process that understands images in a more semantic way, drawing on indicators such as space, color, position, and time frame to connect images with their creators. In the future, this could offer a more effective, software-enabled way of proving copyright and image authorship than the low-level cues, such as brush strokes and the paints used, that experts have traditionally relied on to spot fake paintings and forgeries.