Celebrate the Facts!
2/28/2021
Reddit posters coined the term ‘deepfake’ in 2017 to describe photographic, audio, and video forgeries generated with artificial intelligence (AI) technologies. Media outlets have discussed the use of AI to generate deepfakes even though the technology remains imperfect. The United States federal government is funding research ostensibly because the technology is evolving and could soon prove a disruptive force, but the real reasons for the research are likely far more sinister.
One beneficial use of deepfakes is in Hollywood, where filmmakers use the technology to replicate actors who have died. A digitally recreated Princess Leia appeared in Star Wars: Rogue One, released in 2016, to largely negative reviews. Some film experts predict a future, once the technology matures, in which actors license their images and voices for use, reducing production costs and time.
There have been instances of deepfakes used to manufacture pornographic videos from the likenesses of famous actors and actresses. The frequency of deepfakes used for what is termed revenge porn, in which an enemy modifies an image or a video to superimpose a victim’s face onto a different body, is unknown.
Deepfakes have an as-yet unearned reputation for precision; most are easily detectable by a trained observer using standard technology. For now, it is likely easier to fake reality using conventional video production techniques. Soon enough, however, relatively unskilled provocateurs could download free software tools and, using publicly available data, create convincing bogus content.
United States security concerns include scenarios in which adversaries use deepfakes as part of their information operations in a ‘grey zone’ conflict. The United States Special Operations Command defines grey zone challenges as ‘competitive interactions among and within state and non-state actors that fall between the traditional war and peace duality.’ Enemies could use deepfake technology against the United States to fabricate news reports, influence public discourse, corrode public trust, and attempt to blackmail government officials. In another scenario, AI builds ‘patterns-of-life’ by plotting a person’s digital information against other private data, such as financial habits and job history, to create complete social profiles of service members, intelligence operatives, business officials, and political leaders.
There are other specific concerns about deepfakes. Despite alarming discussion in mainstream media of the potential use of deepfakes in the 2020 elections, very little of that occurred. An unknown entity used conventional editing tools to present a video clip that made it look as though Joe Biden had greeted Floridians as Minnesotans. A jumbled smear against Joe Biden’s son was sponsored by a fake persona with a deepfake profile photo, but the deepfake portion played only a marginal role in that piece of untruth. Another unidentified person or organization slowed an authentic video of Nancy Pelosi, the Speaker of the United States House of Representatives, by 25%, apparently to create the impression that she was slurring her words.
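The slowed-video trick requires no AI at all; it is ordinary resampling. A minimal sketch, in pure Python with a synthetic tone standing in for real audio, of how stretching a signal to 75% speed works (the function name and parameters are illustrative, not taken from any real tool):

```python
import math

def slow_down(samples, speed=0.75):
    """Time-stretch a signal by naive resampling: playing at `speed` (< 1.0)
    stretches duration by 1/speed and lowers the apparent pitch."""
    out_len = int(len(samples) / speed)
    out = []
    for i in range(out_len):
        # Map each output position back to a (fractional) input position.
        pos = i * speed
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        # Linear interpolation between the two neighboring samples.
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# A one-second, 8 kHz synthetic 440 Hz tone stands in for real audio.
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
slowed = slow_down(tone, speed=0.75)
print(len(slowed))  # one second of audio becomes roughly 1.33 seconds
```

Real editing software uses higher-quality interpolation, but the effect is the same: the same content, stretched in time, which is why a trained observer comparing the clip against the original broadcast can spot the manipulation.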
Regardless, the United States government has been spending serious money. The Defense Advanced Research Projects Agency (DARPA) has been developing technologies for identifying the audiovisual inconsistencies present in deepfakes, including inconsistencies in digital integrity, physical integrity, and semantic integrity. DARPA created the backbone of the Internet and developed the Saturn V and Centaur rockets, the first computer mouse, and the first stealth fighter, among many other technological feats.
DARPA’s MediFor (Media Forensics) program intends to change the deepfake game, which currently favors the creators, by developing an AI assessment of the integrity of an image or video. The MediFor technology will detect manipulations and provide other technical information. MediFor received $17.5 million in funding in 2019 and $5.3 million in 2020. After the program’s completion in 2021, DARPA will transition the technologies to the operational portion of the military-intelligence community.
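MediFor’s actual methods are not public in detail, so the following is only a hedged illustration of the ‘digital integrity’ idea: a toy check, in pure Python over synthetic pixel data with hypothetical thresholds, that flags image tiles whose noise statistics differ sharply from the rest of the frame, as a region spliced in from another source often does:

```python
import random
from statistics import mean, pvariance

def block_variances(image, block=8):
    """Split a grayscale image (list of rows) into block x block tiles
    and return the pixel variance of each tile."""
    h, w = len(image), len(image[0])
    variances = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            pixels = [image[y][x]
                      for y in range(by, by + block)
                      for x in range(bx, bx + block)]
            variances.append(pvariance(pixels))
    return variances

def flag_inconsistent(variances, factor=4.0):
    """Flag tiles whose variance differs from the frame average by more
    than `factor`x in either direction -- a crude splice indicator."""
    avg = mean(variances)
    return [i for i, v in enumerate(variances)
            if v > avg * factor or v < avg / factor]

# Synthetic 32x32 frame with uniform sensor noise everywhere...
random.seed(0)
frame = [[128 + random.gauss(0, 5) for _ in range(32)] for _ in range(32)]
# ...except one 8x8 "pasted" patch that is suspiciously smooth.
for y in range(8, 16):
    for x in range(8, 16):
        frame[y][x] = 128.0
print(flag_inconsistent(block_variances(frame)))  # flags the pasted tile
```

Production forensics systems look at far richer signals (compression artifacts, lighting, physiological plausibility), but the underlying principle is the same: a manipulated region rarely matches the statistical fingerprint of the camera that captured the rest of the frame.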
In-Q-Tel Inc., a venture-capital firm located in Virginia, is a thin disguise for research funded by the United States Central Intelligence Agency (CIA). Aside from its CIA affiliation, In-Q-Tel differs from other venture-capital firms in that it is a nonprofit. Its chartered purpose is developing technology to support the CIA mission of intelligence gathering. In-Q-Tel holds an enormous portfolio, including 24 firms involved in AI and machine learning, and received nearly $490 million in taxpayer funding over the five years ending in 2017.
American intelligence agencies have historically used covert strategies to put leaders favorable to United States interests into office. This practice of interference dates to the early days of the CIA in the post-World War II years and was formal policy for containing the Soviet Union during the Cold War. Whether the CIA is interfering, or plans to interfere, with foreign elections using deepfakes is unknown, but the odds are not against it. One can wager the CIA will use this technology soon if it has not already.
A summary of deepfakes and national security is available at https://crsreports.congress.gov/product/pdf/IF/IF11333. More detailed information about AI and national security is available at https://crsreports.congress.gov/product/pdf/R/R45178. Information about CIA funding of AI research is available at https://emerj.com/ai-sector-overviews/artificial-intelligence-at-the-cia-current-applications/. The website for In-Q-Tel is https://www.iqt.org/about-iqt/.
Michael Donnelly investigates societal concerns with an untribal approach: limiting the discussion to facts derived from primary sources so the reader can make more informed decisions.