In search of the foolproof AI watermark


We're inundated with them now — "deepfake" photos that are virtually indistinguishable from real ones (except for the extra fingers), AI-generated articles and term papers that sound realistic (though they still come across as stilted), AI-generated reviews, and many others. Plus, AI systems may be scraping copyrighted material or intellectual property from websites as training data, exposing users to potential violations.


The problem, of course, is that AI content keeps getting better. Will there ever be a foolproof way to identify AI-generated material? And what should AI creators and their companies understand about emerging techniques?

"The initial use case for generative AI was for fun and educational purposes, but now we see a lot of bad actors using AI for malicious purposes," Andy Thurai, vice president and principal analyst with Constellation Research, told ZDNET.

Media content — images, videos, audio files — is especially prone to being "miscredited, plagiarized, stolen, or not credited at all," Thurai added. This means "creators will not get proper credit or revenue." An added danger, he said, is the "spread of disinformation that can influence decisions."

From a text perspective, a key issue is that multiple prompts and iterations against language models tend to wash out watermarks or leave only minimal information, according to a recent paper authored by researchers at the University of Chicago, led by Aloni Cohen, assistant professor at the university. They call for a new approach — multi-user watermarks — "which allow tracing model-generated text to individual users or groups of colluding users, even in the face of adaptive prompting."


The challenge for both text and media is that to digitally watermark language models and AI output, you must implant detectable signals that cannot be modified or removed.
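Published token-biasing schemes give a feel for how such a signal can be implanted in text: a pseudorandom "green list" of vocabulary tokens is derived from each preceding token, generation is biased toward green tokens, and a detector measures how often tokens fall in their predecessor's green list. The sketch below is a toy illustration under those assumptions — the vocabulary, generator, and seeding are all hypothetical, not any production watermark.

```python
import hashlib
import random

# Toy vocabulary standing in for a real language model's token set (assumption).
VOCAB = [f"tok{i}" for i in range(100)]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Derive a deterministic 'green' subset of the vocab from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(length: int, start: str = "tok0") -> list:
    """Toy 'watermarked generator': always sample from the green list."""
    out, prev = [], start
    for _ in range(length):
        nxt = random.choice(sorted(green_list(prev)))
        out.append(nxt)
        prev = nxt
    return out

def green_fraction(tokens: list, start: str = "tok0") -> float:
    """Detector: fraction of tokens that land in their predecessor's green list."""
    prev, hits = start, 0
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    return hits / len(tokens)

watermarked = generate(200)
random_text = [random.choice(VOCAB) for _ in range(200)]
print(green_fraction(watermarked))   # 1.0 — every token came from a green list
print(green_fraction(random_text))   # roughly 0.5 — chance level
```

Because the detector only needs the seeding rule, not the model, anyone holding the key can test a suspect text statistically — which is also why paraphrasing and repeated prompting, as the Chicago paper notes, can erode the signal.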

Industrywide initiatives are underway to develop foolproof AI watermarks. For example, the Coalition for Content Provenance and Authenticity (C2PA) – a joint effort formed through an alliance between Adobe, Arm, Intel, Microsoft, […]
