Online images aren’t always what they seem, especially on social media.
A peace sign from Martin Luther King Jr. becomes a rude gesture. Crowds at President Donald Trump's inauguration appear inflated. Dolphins swim in Venice's Grand Canal, and crocodiles roam the streets of flooded Townsville: all manipulated or misattributed images published as truth.
According to researchers at the QUT Digital Media Research Centre, image-editing software is now so ubiquitous and easy to use that it can reimagine history. And they say time-poor journalists lack the tools to tell the difference, especially when images come from social media.
The study, Visual Mis/Disinformation in Journalism and Public Communication, was published in Journalism Practice. It was prompted by the growing prevalence of fake news and by the difficulty social media platforms and news organizations face in identifying and combating the visual mis/disinformation presented to their audiences.
“When Donald Trump’s staff posted a picture to his official Facebook page in 2019, journalists were able to spot the photoshopped changes to the president’s skin and body because an unedited version existed on the White House’s official Flickr feed,” said lead author Dr. TJ Thomson.
“But what if raw versions aren’t available online and journalists can’t rely on simple reverse image searches to verify whether an image is real or has been tampered with?
“The ability to alter past and present images through methods such as cloning, splicing, cropping, re-touching, or re-scanning risks rewriting history: a very Orwellian scenario.”
Examples highlighted in the report include photos shared by news outlets of crocodiles on the streets of Townsville during last year’s flood, which were later shown to be 2014 images of alligators in Florida. A Reuters employee is also quoted on discovering that a harrowing video shared during coverage of Cyclone Idai, which devastated parts of Africa in 2019, had actually been filmed in Libya five years earlier.
A picture of Dr. Martin Luther King Jr. reacting to the US Senate’s passage of the Civil Rights Act in 1964 was manipulated to make it appear he was flipping the bird at the camera. This edited version was widely circulated on Twitter, Reddit, and the white supremacist website The Daily Stormer.
Dr. Thomson, Associate Professor Daniel Angus, Dr. Paula Dootson, Dr. Edward Hurcombe and Adam Smith surveyed the current social media verification techniques used by journalists and suggested which tools are most effective under which circumstances.
“The number of images created every day – more than 3.2 billion photos and 720,000 hours of video – and the speed at which they are produced, published, and shared, makes it difficult to detect false images,” said Dr. Thomson.
“Other considerations include the digital and visual literacy of those who view them. Yet being able to spot fraudulent edits masquerading as reality is vital.
“While journalists who create visual media are not immune to ethical violations, there is a growing practice of including more user-generated and crowd-sourced visual content in news reports. Scrutiny of social media content must increase accordingly if we are to improve institutional trust and strengthen our democracy.”
Dr. Thomson said a recent quantitative study by the International Center for Journalists (ICFJ) found that social media verification tools are very rarely used in newsrooms.
“The ICFJ surveyed over 2,700 journalists and editorial staff in more than 130 countries and found that only 11% of respondents used social media verification tools,” he said.
“Combined, the lack of easy-to-use forensic tools and poor digital media literacy are the main obstacles for those looking to stem the barrage of visual mis/disinformation online.”
Associate Professor Angus said the study showed an urgent need for better tools developed with journalists to provide greater clarity about the origin and authenticity of images and other media.
“Journalists often know little about the origin and veracity of visual content, yet they must decide quickly whether to republish or amplify it,” he said.
“The many examples of misattributed, doctored, and forged images confirm the importance of accuracy, transparency, and trust in the arena of public discourse. People generally vote and make decisions based on information they receive from friends and family, politicians, organizations, and journalists.”
The researchers cite current manual detection strategies – using reverse image search, examining image metadata, examining light and shadows, and using photo-editing software – but say more tools need to be developed, including more advanced machine-learning methods, to verify visual media on social platforms.
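One of those manual strategies, reverse image search, works by comparing compact image fingerprints rather than raw files. As a rough illustration only (not the method the researchers describe), here is a minimal “average hash” sketch in Python. To stay self-contained it operates on an already-decoded grayscale pixel grid; in practice an image would first be decoded and downscaled to, say, 8×8 pixels with an imaging library. The function names `average_hash` and `hamming_distance` are illustrative, not from any particular tool.

```python
def average_hash(pixels):
    """Compute a simple average-hash fingerprint.

    pixels: 2D list of grayscale values (0-255), e.g. a downscaled
    8x8 version of an image. Each bit is 1 if the pixel is brighter
    than the mean, so near-duplicate images yield near-equal hashes
    even after mild re-touching or re-compression.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = ''.join('1' if p > mean else '0' for p in flat)
    return int(bits, 2)


def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count('1')


# Toy 2x2 "images": identical grids hash identically; a grid with
# one altered pixel differs by only a bit or two.
original = [[0, 255], [255, 0]]
retouched = [[0, 255], [255, 255]]
print(hamming_distance(average_hash(original), average_hash(original)))
print(hamming_distance(average_hash(original), average_hash(retouched)))
```

A distance of zero means an exact fingerprint match; small distances flag likely copies, which is the intuition behind matching a suspect social media photo against archives of earlier images.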
Reference: “Visual Mis/Disinformation in Journalism and Public Communication: Current Verification Practices, Challenges, and Future Opportunities” by TJ Thomson, Daniel Angus, Paula Dootson, Edward Hurcombe, and Adam Smith, October 19, 2020, Journalism Practice.
DOI: 10.1080/17512786.2020.1832139