


Lawmakers eye further state regulations on artificial intelligence

Assemblyman Alex Bores questions witnesses during an Assembly hearing on artificial intelligence on Friday, Sept. 20, at the Legislative Office Building in Albany. Lawmakers are beginning to regulate artificial intelligence in New York, as the technology’s rapid advent has posed some risks when applied across industries.

ALBANY — New York lawmakers are eyeing new regulations to bolster consumer rights amid the rise of generative artificial intelligence, a rapidly developing technology that has raised numerous concerns related to privacy, copyright and deception.

Lawmakers and the state attorney general remain worried about the potential for fraud and other harms that have accompanied the technology's emergence. Many experts have warned of the possible ramifications of unchecked AI in industries such as finance, advertising and health care. And as government agencies themselves begin to deploy the technology, questions remain about how to effectively safeguard the rights of residents and consumers.

The state has taken tentative steps toward promoting artificial intelligence, while regulation has lagged. This year, Gov. Kathy Hochul announced the launch of the Empire AI Consortium, a $275 million project based at the University at Buffalo meant to boost research into the technology.

During a meeting convened by the state Assembly's committees on consumer protection and technology this week, experts weighed how to minimize the civil rights risks associated with "deepfake" technology while protecting innovation in the AI industry. The technology has demonstrated its ability to manipulate and target groups of people who may be vulnerable to scams or racial profiling.

Chief Deputy Attorney General Chris D'Angelo said state officials want a way to require companies to disclose when they're using artificial intelligence, such as digital watermarks that can flag to government regulators and auditors when a product has ties to the technology.

In testimony, D’Angelo pointed to transparency as a core tenet of regulating the fledgling industry.

"We think that the companies that put out AI models that generate this content need to also have an obligation to watermark that content so that people can hold […]
