The Future Is Now

Facing the AI-Generated Image Threat: Why Awareness Is Imperative

AI technology continually blurs the line between reality and fiction, saturating our visual world, from advertising to entertainment, with lifelike images. These images also make it possible to depict recognizable public figures, such as politicians, in fabricated scenes used to spread misinformation or propaganda.

So, what consequences and concerns accompany the surge in AI-generated images?

While AI-generated images and videos bring forth benefits, such as fostering creativity and innovation, they also harbor potential risks. Generative AI technology empowers the creation of highly realistic images depicting events that never occurred, serving as a potent instrument for the propagation of falsehoods and the manipulation of public opinion.

Over the past six months, AI photography, branded as “promptography” by Boris Eldagsen, has reached a chilling level of realism.

It is now possible to conjure images from text that leave viewers questioning their authenticity. These AI-generated photos have deceived judges, won photography contests, and been exploited by scammers during events like the Turkey-Syria earthquake.

Tech conglomerates and governments worldwide have begun implementing measures to shield citizens from the growing menace of AI-generated images. Even photographers themselves are expressing concerns, as the proliferation of AI technology in their craft poses a risk: their work may become indistinguishable from that of their peers.

A rising threat sparking unease globally

Generative AI technologies are evolving rapidly, making it increasingly challenging to differentiate between computer-generated images, also referred to as “synthetic imagery,” and those crafted without the aid of AI systems.

The homogenization of AI-generated images threatens diversity and originality within photography, making it harder for photographers to set their work apart and for audiences to tell one photographer’s work from another’s.

Furthermore, if AI-generated images become the norm, they may devalue the perceived worth of photography: images might no longer be seen as unique or precious, potentially reducing demand for original photographic creations.

Artificial intelligence tools could be exploited to produce child abuse images and terrorist propaganda, as cautioned by Australia’s eSafety Commissioner, who recently announced an industry standard requiring tech firms such as Google, Microsoft (Bing), and DuckDuckGo to eradicate such material from AI-powered search engines.

This new industry code governing search engines demands that these companies eliminate child abuse material from their search results and take preventive measures so that generative AI products cannot be used to create synthetic versions of such material.

Julie Inman Grant, the eSafety Commissioner, stressed the need for companies to take a proactive stance in minimizing the harms stemming from their products. She warned that “synthetic” child abuse material and terrorist propaganda are already emerging, emphasizing the urgency of addressing these issues.

Microsoft and Google have recently announced plans to integrate AI chatbots, ChatGPT and Bard respectively, into their popular consumer search engines. Inman Grant noted that the progress of AI technology necessitates a reevaluation of the “search code” governing these platforms.

Suspected Chinese operatives have also harnessed artificial intelligence to simulate American voters online and disseminate disinformation on divisive political topics as the 2024 US election approaches, according to a warning from Microsoft analysts.

In the past nine months, these operatives have posted striking AI-generated images featuring the Statue of Liberty and the Black Lives Matter movement on social media platforms, with a focus on disparaging US political figures and symbols.

This alleged Chinese influence network employed multiple accounts on Western social media platforms to disseminate AI-generated images. Although the images were computer-generated, real individuals, whether knowingly or unknowingly, shared them on social media, amplifying their impact.

Tech Conglomerates Unite to Safeguard Image Authenticity

Content and technology firm Thomson Reuters has partnered with Canon and Starling Lab, an academic research lab, to launch a pilot program aimed at verifying the authenticity of images used in news reporting. This collaborative initiative seeks to ensure that AI-generated images do not pass as genuine photographs, especially in news content, where accuracy is paramount.

This initiative is particularly timely in the battle against the growing tide of misinformation. Rickey Rogers, Global Editor of Reuters Pictures, emphasized the vital importance of trust in news reporting. 

“Trust in news is paramount. However, recent technological advancements in image generation and manipulation are prompting more individuals to question the authenticity of visual content. Reuters remains committed to exploring new technologies that guarantee the accuracy and trustworthiness of the content we deliver,” said Rogers. 
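The mechanics behind verification pilots like this are typically cryptographic: fingerprint the image bytes at the moment of capture, sign the fingerprint, and let anyone holding the public key confirm the file has not been altered since. The Python sketch below illustrates only that general idea, not the actual Thomson Reuters, Canon, or Starling Lab pipeline; the file name photo.jpg and the key handling are placeholders.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def fingerprint(image_bytes: bytes) -> bytes:
    """A stable digest of the exact image bytes."""
    return hashlib.sha256(image_bytes).digest()

# At capture time: the camera (or newsroom) signs the image fingerprint.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

original = open("photo.jpg", "rb").read()  # placeholder file name
signature = private_key.sign(fingerprint(original))

# At publication time: anyone with the public key can check that the file
# is byte-for-byte identical to what was signed at capture.
candidate = open("photo.jpg", "rb").read()
try:
    public_key.verify(signature, fingerprint(candidate))
    print("verified: matches the signed capture")
except InvalidSignature:
    print("rejected: altered or never registered")
```

A single changed pixel changes the digest, so any post-capture edit, AI-generated or otherwise, fails verification; the hard part such pilots tackle is distributing and trusting the keys, not the math.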

Likewise, Google has launched SynthID, a tool for watermarking and identifying AI-generated images, releasing it in beta in collaboration with Google Cloud. The technology embeds a digital watermark directly into an image’s pixels for later verification, while remaining invisible to the naked eye.

SynthID is initially available to a select group of Vertex AI customers using Imagen, one of Google’s latest text-to-image models, which takes a text prompt as input and produces photorealistic images as output.

Researchers designed SynthID to preserve image quality while keeping the watermark detectable even after alterations such as filters, color changes, or lossy compression of the kind typically applied to JPEGs.
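For intuition about what a pixel-level invisible watermark means, the toy example below hides a single bit in the least significant bit of every channel value, the classic naive scheme. It is emphatically not SynthID’s method: this naive mark would not survive the JPEG compression or filtering that SynthID’s learned watermark is built to withstand. The image here is a random placeholder.

```python
import numpy as np

def embed_bit(pixels: np.ndarray, bit: int) -> np.ndarray:
    """Hide one bit in the least significant bit of every channel value."""
    return (pixels & 0xFE) | bit

def read_bit(pixels: np.ndarray) -> int:
    """Recover the bit by majority vote over least significant bits."""
    return int(round(float((pixels & 1).mean())))

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # placeholder image
marked = embed_bit(image, 1)

assert read_bit(marked) == 1
# Each value changes by at most 1 intensity level: invisible to the eye.
assert np.abs(marked.astype(int) - image.astype(int)).max() <= 1
```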

SynthID employs two deep learning models—one for watermarking and one for identification—trained on a diverse set of photos. The combined model is finely tuned to achieve multiple objectives, including accurate recognition of watermarked information and aesthetic alignment of the watermark with the original content.
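Google has not published SynthID’s architecture, but the two-model description maps onto a well-known pattern in learned-watermarking research: an embedder network adds an imperceptible residual that carries a message, and a detector network is trained jointly to recover it, with one loss term for invisibility and one for decodability. The PyTorch sketch below shows that generic pattern with invented layer sizes and message width; it is not SynthID’s actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

MSG_BITS = 32  # invented message width

class Embedder(nn.Module):
    """Adds an imperceptible residual that carries the message."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + MSG_BITS, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image, message):
        b, _, h, w = image.shape
        msg_planes = message[:, :, None, None].expand(b, MSG_BITS, h, w)
        return image + self.net(torch.cat([image, msg_planes], dim=1))

class Detector(nn.Module):
    """Tries to recover the message from a (possibly edited) image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, MSG_BITS),
        )

    def forward(self, image):
        return self.net(image)

embedder, detector = Embedder(), Detector()
image = torch.rand(4, 3, 64, 64)                      # placeholder batch
message = torch.randint(0, 2, (4, MSG_BITS)).float()  # random watermark bits

marked = embedder(image, message)
logits = detector(marked)  # real training would insert distortions here

# Joint objectives: the mark must stay invisible AND remain decodable.
loss = F.mse_loss(marked, image) + F.binary_cross_entropy_with_logits(logits, message)
loss.backward()
```

Robustness to filters and compression comes from inserting simulated distortions between the embedder and detector during training, which forces the mark into image features that survive those edits.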

Addressing this issue demands action from photographers, AI developers, and the broader photography industry. This may entail the development of ethical guidelines and best practices for utilizing AI in photography and encouraging the exploration of new forms of photography that leverage AI technology’s unique capabilities while preserving the artistic integrity of the field.

Source: mPost
