

A Stunning New AI Has Supposedly Achieved Sentience

Image credit: Devrimb – Getty Images

In March of 2024, U.S.-based AI company Anthropic released Claude 3, an update to its powerful large language model AI.

Its immense capabilities, especially its apparent introspection during testing, left some wondering whether Claude 3 had reached a certain level of self-awareness, or even sentience.

While Claude 3’s abilities are impressive, they’re still a reflection of the AI’s admittedly remarkable ability to identify patterns, and they lack the criteria of intelligence needed to match human sentience.

AI large language models (LLMs)—such as ChatGPT, Claude, and Gemini (formerly Bard)—appear to go through a predictable hype cycle. Posts trickle out about a new model’s impressive capabilities, people are floored by the model’s sophistication (or experience existential dread over losing their jobs), and, if you’re lucky, someone starts claiming that this new-and-improved LLM is displaying signs of sentience.

This hype cycle is currently in full force for Claude 3, an LLM created by the U.S.-based AI company Anthropic. In early March, the company introduced its latest lineup of AI models—Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus, in ascending order of capability. The new models delivered updates across the board, including near-perfect recall, fewer hallucinations (a.k.a. incorrect answers), and quicker response times.

“Opus, our most intelligent model, outperforms its peers on most of the common evaluation benchmarks for AI systems, including undergraduate level expert knowledge (MMLU), graduate level expert reasoning (GPQA), basic mathematics (GSM8K), and more,” Anthropic […]
