Don’t bet with ChatGPT — study shows language AIs often make irrational decisions
For instance, when we used cards or dice instead of coins to frame our bet questions, we found that performance dropped significantly, by over 25 per cent, although it stayed above random selection.
So whether the model can be taught general principles of rational decision-making remains, at best, an open question.
More recent case studies that we conducted using ChatGPT confirm that decision-making remains a nontrivial and unsolved problem even for much bigger and more advanced large language models.
Getting the decision right

This line of study is important because rational decision-making under conditions of uncertainty is critical to building systems that understand costs and benefits. By weighing expected costs and benefits, an intelligent system might have been able to do better than humans at planning around the supply chain disruptions the world experienced during the COVID-19 pandemic, managing inventory or serving as a financial adviser.
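To make the idea of weighing expected costs and benefits concrete, here is a minimal Python sketch of the kind of expected-value comparison a bet question involves. The payoffs, probabilities and framings below are hypothetical illustrations, not the prompts actually used in the study.

    def expected_value(outcomes):
        """Sum of probability-weighted payoffs for a bet."""
        return sum(p * payoff for p, payoff in outcomes)

    # Hypothetical coin-flip bet: win $10 on heads, lose $4 on tails.
    coin_bet = [(0.5, 10), (0.5, -4)]

    # The same stakes reframed with a die: win $10 on an even roll, lose $4 on an odd roll.
    die_bet = [(3 / 6, 10), (3 / 6, -4)]

    # A rational decision-maker treats equivalent framings identically:
    # both bets have the same expected value, so the choice should not change.
    print(expected_value(coin_bet))  # 3.0
    print(expected_value(die_bet))   # 3.0

The point of the sketch is that the arithmetic does not depend on whether the randomness comes from a coin, a card or a die, which is why a drop in model performance under a different framing signals a gap in rational decision-making rather than in calculation.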
Our work ultimately shows that if large language models are used for these kinds of purposes, humans need to guide, review and edit their work.
And until researchers figure out how to endow large language models with a general sense of rationality, the models should be treated with caution, especially in applications requiring high-stakes decision-making.