If you've been following our coverage of CES 2025, you'll have noticed that AI has been the key buzzword of this year's conference. In fact, it's been the buzzword of the last five years. Amid the fog of overblown corporate claims and gimmicks, however, two experts from Princeton University have been cutting through the misconceptions surrounding the technology.
Prof. Arvind Narayanan and former Facebook engineer Sayash Kapoor appeared in a recent conversation exploring the concept of 'AI Snake Oil' and the ways companies use these tools to deceptively market products. One of the cited examples was hiring tools that claim to predict a candidate's job performance yet lack any credible foundation. "Companies are selling products that literally cannot work," Kapoor emphasizes.
Narayanan also sheds light on the limitations of predictive AI systems, which are commonly used in hiring, bail decisions, and health care, and are often riddled with bias and fail to account for individual nuances. Software such as the AI used to automate insurance claims, as famously seen with UnitedHealthcare, operated with an alleged 90% error rate. Similar problems follow criminal risk prediction tools, kidney transplant matching, and job hiring algorithms.
While there's plenty of excitement around claims of AI capability in the world of tech hardware, the Princeton research highlights the importance of tempering it with healthy skepticism.