Abstract

In today's multimedia world, billions of photographs and videos are generated each day. Of these, about 30,000 are obtained by the U.S. government for law enforcement and intelligence purposes. These could be photos and videos of people, including child pornography, that are of interest to the FBI; videos from surveillance cameras; or photos and videos obtained from foreign adversaries that may pose a threat to the United States. Material may also be multimedia submitted to news services or posted to the internet that portrays world events that could affect decisions made by our leadership. Some of these images or videos may have been altered to improve the artistic appearance or storytelling aspects of the material, but other alterations may have been made to change the narrative in order to influence events, spread false information, or provide misdirection. The Defense Advanced Research Projects Agency (DARPA) has begun a multiyear program, Media Forensics (MEDIFOR), to automate the detection of modified videos and images. The details of that program are beyond the scope of this paper; it is referenced here to illustrate the seriousness of the problem. This paper describes some of the tests that have been performed manually through the years to identify falsified material without the aid of the technology now being developed for the MEDIFOR program.
