AI’s moment of disillusionment

Well, that didn’t take long. After all the “this time it’s different” comments about artificial intelligence (we see you, John Chambers!), enterprises are coming to grips with reality. AI isn’t going to take your job. It’s not going to write your code. It’s not going to write all your marketing copy (not unless you’re prepared to hire back the humans to fix it). And, no, it’s nowhere near artificial general intelligence (AGI) and won’t be anytime soon. Possibly never.

That’s right: We’ve entered AI’s trough of disillusionment, when we collectively stop believing the singularity is just around the corner and start finding ways AI augments, not replaces, humans. For those new to the industry, and hence new to our collective tendency to overhype pretty much everything—blockchain, web3 (remember that?), serverless—this isn’t cause for alarm. AI will have its place; it simply won’t be every place.

So many foolish hopes

AI, whether generative AI, machine learning, deep learning, or you name it, was never going to be able to sustain the immense expectations we’ve foisted upon it. I suspect part of the reason we’ve let it run so far for so long is that it felt beyond our ability to understand. It was this magical thing, black-box algorithms that ingest prompts and create crazy-realistic images or text that sounds thoughtful and intelligent. And why not? The major large language models (LLMs) have all been trained on gazillions of examples of other people being thoughtful and intelligent, and tools like ChatGPT mimic back what they’ve “learned.”

The problem, however, is that LLMs don’t actually learn anything. They can’t reason. They’re great at pattern matching but not at extrapolating from past training data to future problems, as a recent IEEE study found. Software development has been one of the brightest spots for genAI tools, but perhaps not quite to the extent we’ve hoped. For example, GPT-3.5 lacked training data after 2021. As such, it struggled with easy LeetCode coding problems that depended on information released after 2021. The study found that its success rate […]
