Researchers at Georgia Tech are advancing neural networks to mimic human decision-making by training them to exhibit the variability and confidence that characterize human choices, as demonstrated in their study published in Nature Human Behaviour. Their model, RTNet, not only matches human performance in recognizing noisy digits but also exhibits human-like traits such as confidence and evidence accumulation, improving both accuracy and reliability.
Humans make nearly 35,000 decisions each day, ranging from determining if it’s safe to cross the road to choosing what to have for lunch. Each decision involves evaluating options, recalling similar past situations, and feeling reasonably confident about the right choice. What might appear to be a snap decision actually results from gathering evidence from the environment. Additionally, the same person might make different decisions in identical scenarios at different times.
Neural networks typically do the opposite: given the same input, they make the same decision every time. Now, Georgia Tech researchers in Associate Professor Dobromir Rahnev’s lab are training them to make decisions more like humans do. The science of human decision-making is only just being applied to machine learning, but developing a neural network that works more like the actual human brain may make it more reliable, according to the researchers.
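The general mechanism the article describes, accumulating noisy evidence over time until a decision threshold is reached, can be illustrated with a minimal sketch. This is not RTNet's actual implementation; the function name, parameters, and the confidence formula below are illustrative assumptions based on standard sequential-sampling models of decision-making.

```python
import random

def decide(evidence_strength, threshold=5.0, noise=1.0):
    """Accumulate noisy evidence samples until a decision threshold is reached.

    Illustrative sketch only -- parameter names and the confidence
    formula are assumptions, not taken from the RTNet paper.
    Positive accumulated evidence favors option "A", negative favors "B".
    Returns (choice, confidence, n_samples).
    """
    total = 0.0
    n = 0
    while abs(total) < threshold:
        # each sample carries the true signal plus Gaussian noise,
        # so repeated runs on the same input can differ
        total += evidence_strength + random.gauss(0.0, noise)
        n += 1
    choice = "A" if total > 0 else "B"
    # crude confidence proxy: decisions reached in fewer samples feel more certain
    confidence = threshold / (threshold + n)
    return choice, confidence, n
```

Because each evidence sample is noisy, two calls with identical inputs can produce different choices, response times, and confidence levels, which is the human-like variability the researchers are after.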
In a paper in Nature Human Behaviour, a team from the School of Psychology reveals a new neural network trained to make decisions similar to humans.

Decoding Decision
“Neural networks make a decision without telling you whether or not they are confident about their decision,” said Farshad Rafiei, who earned his Ph.D. in psychology at Georgia Tech. “This is one of the essential differences from how people make decisions.”
Large language models (LLMs), for example, are prone to hallucinations. When an LLM is asked a question it doesn’t know the answer to, it will make up something without acknowledging the fabrication. By contrast, most humans in the same situation will admit they don’t know the answer. Building a more human-like […]