Artificial Intelligence News - Page 6
It was only a few weeks ago that Elon Musk responded to allegations that his brain chip company, Neuralink, wrongfully caused the deaths of numerous monkeys during testing.
Musk wrote on X that the monkeys that died during testing were "terminal" and "close to death already." However, a recently published Wired investigation, citing public documents and an interview with a former Neuralink employee, contradicts Musk's statements.
According to Wired, veterinary records from the California National Primate Research Center (CNPRC) at UC Davis indicate that up to 12 monkeys suffered brain swelling and partial paralysis following the insertion of a Neuralink brain implant. Among the monkeys in question was "Animal 20," whose implant partially "broke off" during surgery; the monkey later scratched at the implant site, causing an infection that led to its euthanasia the following month.
Google DeepMind has created a new AI system that is capable of detecting genetic mutations that may lead to diseases.
A new study published in the journal Science details a new AI model called AlphaMissense, which builds on AlphaFold, the protein-structure breakthrough DeepMind announced in 2020. As you can probably imagine, this AI model has been "fine-tuned" with genetic data from humans and primates, giving it the ability to detect what are called "missense" mutations, which are changes to a single letter of the DNA code.
Notably, these missense mutations can lead to illnesses such as sickle cell anemia, cystic fibrosis, and cancer, yet human experts have so far classified only 0.1 percent of all possible missense variants as benign or pathogenic. DeepMind's AI tool has now assessed 71 million missense mutations, and of those it has been able to classify 89% of the total variants as "either likely benign or likely pathogenic." This AI data has been released to the wider public in an effort to assist physicians around the world.
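The percentages above can be turned into rough counts. A quick back-of-the-envelope sketch, using only the figures quoted in the article (the benign-versus-pathogenic split is not given here, so only the classified total is computed):

```python
# Back-of-the-envelope arithmetic for the AlphaMissense figures quoted above.
TOTAL_VARIANTS = 71_000_000   # missense mutations assessed by the model
CLASSIFIED_SHARE = 0.89       # fraction labeled likely benign or likely pathogenic
EXPERT_SHARE = 0.001          # ~0.1% previously classified by human experts

classified = int(TOTAL_VARIANTS * CLASSIFIED_SHARE)
previously = int(TOTAL_VARIANTS * EXPERT_SHARE)

print(f"Classified by AlphaMissense: ~{classified:,}")  # ~63,190,000
print(f"Previously classified:       ~{previously:,}")  # ~71,000
```

In other words, the model's 89% coverage amounts to roughly 63 million variants, versus only about 71,000 classified by experts before.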
A team of researchers from the University of Tokyo have penned a new paper detailing the capabilities of a new artificial intelligence-powered system that can translate chicken clucks.
The paper has been published on a pre-print server and is yet to be peer-reviewed, but it details a new "cutting-edge" AI technique the team calls "Deep Emotional Analysis Learning." The University of Tokyo researchers write that they devised a new system capable of "interpreting various emotional states in chickens, including hunger, fear, anger, contentment, excitement, and distress".
As with most things to do with artificial intelligence, the new system is powered by what the researchers call "complex mathematical algorithms" that get better over time as more variations of chicken vocal patterns are added to the database. The study explains that the researchers recorded and analyzed 80 chicken vocal samples, labeling each sound with an "emotional state". According to the researchers, the team was then able to predict a chicken's emotional state accurately.
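The paper does not publish its algorithm, so the following is only a minimal illustrative sketch of the general idea it describes: label recorded vocal samples with emotional states, then match new calls against them. The feature names (mean pitch, call length, energy), the sample values, and the nearest-centroid classifier here are all hypothetical stand-ins, not the team's actual method:

```python
# Hypothetical sketch: classify a chicken call by comparing its acoustic
# features to the average features of previously labeled calls.
import math

# Made-up labeled samples: (mean_pitch_hz, call_length_s, energy) per state.
samples = {
    "hunger":      [(420.0, 0.30, 0.6), (435.0, 0.28, 0.7)],
    "contentment": [(310.0, 0.55, 0.3), (300.0, 0.60, 0.2)],
    "distress":    [(520.0, 0.15, 0.9), (540.0, 0.12, 1.0)],
}

def centroid(vectors):
    """Average feature vector for one emotional state."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(3))

centroids = {state: centroid(vs) for state, vs in samples.items()}

def classify(call):
    """Assign the emotional state whose centroid is nearest to the call."""
    return min(centroids, key=lambda s: math.dist(call, centroids[s]))

print(classify((530.0, 0.14, 0.95)))  # a short, high-pitched, loud call -> distress
```

This also illustrates the "gets better over time" claim: as new labeled samples are appended to `samples`, the centroids are recomputed and the matching sharpens.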
Google has created a prototype AI-powered microscope that is designed to assist doctors in locating potentially cancerous cells and other pathogens.
The new type of microscope is called an "Augmented Reality Microscope" (ARM) and was born out of a collaboration between Google and the Department of Defense. Reports indicate that the new microscope has heads-up-display (HUD) capabilities such as heatmaps, visual indicators, and object borders - all of which speed up the identification process conducted by doctors. Notably, the ARM microscope first came into the public eye in 2018 and has not yet been tested on patients for diagnostic purposes.
Google has designed 13 prototypes of the new device, and according to reports, substantial clinical studies still need to be conducted before ARM can be rolled out to potentially thousands of doctors. However, once the necessary studies have been completed and the results earn a green light, ARM will be rolled out to hospitals and clinics. Google writes that ARM was designed to attach to existing microscopes, or, as Google describes it, to be "retrofitted".
AI-powered tools such as OpenAI's ChatGPT have certainly attracted a lot of attention through their raw power and seemingly endless capabilities.
With their popularity has come growing concern from researchers about the honesty and truthfulness of these tools and the underlying AI models powering them. These concerns stem from the real possibility that AI tools will be able to spread disinformation at an alarming rate, manipulate users toward specific outcomes, or even intentionally mislead or deceive users with a lie. A new article in The Conversation details an example involving Meta's CICERO AI, which the company says was designed to be "largely honest and helpful".
Researchers put the AI model to the test by having it play the board game Diplomacy, and the results were published in a new study in Science. Unlike chess, poker, and Go, Diplomacy requires understanding rival players' motivations and negotiating complex, forward-looking plans. The underlying idea was to see whether CICERO could play the game at the level of a human.
Since the emergence of ChatGPT, many students have taken advantage of the new software to turn in papers and reports.
Educators quickly caught wind of the new phenomenon and adopted many tools that all fell under the umbrella of "AI writing detectors". Some educators copied and pasted work back into ChatGPT and asked the AI whether an AI had generated it. Answers varied: ChatGPT would sometimes reply with certainty and other times give an approximation. Either way, the answers were unreliable, as the underlying language model wasn't designed to detect AI-created content.
Educators turned to other services that were offering AI detection tools, which resulted in some students even being "caught" turning in AI-generated content. While there are certainly some students taking advantage of the new technology to make writing easier, none of these AI detection tools or services are legitimate, at least according to OpenAI, the creators of ChatGPT. In a recently updated FAQ found on the company's website, OpenAI answers the question "Do AI detectors work?", with "In short, no."
A warning has been issued by University of Alberta Faculty of Law assistant professor Dr. Gideon Christian regarding authorities implementing racially biased artificial intelligence systems.
The warning was issued in the form of a press release published by the institution, in which Christian reminds the public that while technology may appear unbiased, there are very real instances where it is not. Notably, the assistant law professor received a $50,000 grant from the Office of the Privacy Commissioner Contributions Program for a research project called Mitigating Race, Gender, and Privacy Impacts of AI Facial Recognition Technology. This initiative aims to study the impact that AI-powered technologies, such as facial recognition, have on race.
Notably, Christian has already claimed that AI-powered facial recognition technology is damaging to people of color, and that the technology, while appearing to be unbiased, has the capacity to replicate human biases. Furthermore, Christian says the technology has a 99% accuracy rate in identifying white male faces, but an accuracy rate of just 35% for the faces of Black women.
Picking wild mushrooms can be risky, which is why it's extremely important to make sure the information you've gathered is correct, as some mushrooms can make you very sick when eaten or even be lethal.
The Guardian has reported that several wild mushroom foraging guidebooks being sold on Amazon appear to have been written by AI, a finding that emerged from an analysis by originality.ai, a US firm dedicated to detecting AI-written content. Four samples analyzed by the firm came back with a 100% rating, meaning its systems are extremely confident that the content within these books was written by an artificial intelligence-powered chatbot, such as ChatGPT.
Experts in the field of mycology have weighed in on the matter, pointing out serious flaws within some of the titles that could lead to health problems. Leon Frey, a foraging guide and field mycologist at Cornwall-based Family Foraging Kitchen, told The Guardian that some of the sample books promoted "smell and taste" as a way to identify mushroom species. "This seems to encourage tasting as a method of identification. This should absolutely not be the case."
Air surveillance technology that is used by the Pentagon to monitor the air space around Washington DC is getting an artificial intelligence-powered upgrade.
In an effort to improve national security, the Pentagon has announced that it will replace the surveillance systems implemented after 9/11 with new machine learning algorithms designed to identify, track, and warn officials of any objects entering the protected airspace around DC. Notably, DC's airspace is governed by special flight rules that require defense officials to identify, track, and locate any aircraft flying within the Baltimore-Washington Metropolitan Area.
The new system is being designed by first-time defense contractor Teleidoscope, and the upgrade to the system is an effort to reduce response times to any potential threats. The system will include a mixture of technology, such as electro-optical and infrared sensors, combined with machine learning, augmented reality, and surveillance cameras.
NVIDIA reported a staggering $13.5 billion in revenue for Q2 2024, up 101% from a year ago and 88% from the previous quarter. The AI boom has clearly been going well for NVIDIA: the figure came in a couple of billion dollars above Wall Street's projections for the quarter, which were already sky-high thanks to the growing demand for NVIDIA's data center hardware for AI.
Of that $13.5 billion in revenue, $10.3 billion came from the data center segment, thanks to unprecedented demand for AI chips. NVIDIA's Gaming segment, which covers GPUs and the GeForce line-up, saw its revenue climb to nearly $2.5 billion - an 11% increase over the previous quarter and a 22% increase over the same period a year ago.
The Gaming segment used to be NVIDIA's main revenue driver, but these results show that data centers and AI, utilizing Hopper H100, Ampere A100, and HGX systems, have driven NVIDIA's revenue, profits, and share price to record highs. NVIDIA made over $6 billion in profit in Q2 2024, representing an 843% year-over-year increase.
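The growth percentages above imply what the comparison periods looked like. A quick sanity-check sketch, using only the figures quoted in the article (the implied prior-period numbers are approximations, since the article rounds its revenue figures):

```python
# Back out the prior-period revenue implied by each quoted growth rate.
def prior(current_billion, growth_pct):
    """Revenue in the earlier period implied by current revenue and % growth."""
    return current_billion / (1 + growth_pct / 100)

revenue_q2 = 13.5  # $ billion, total Q2 2024 revenue
print(f"Implied year-ago revenue:      ${prior(revenue_q2, 101):.1f}B")  # ~$6.7B
print(f"Implied prior-quarter revenue: ${prior(revenue_q2, 88):.1f}B")   # ~$7.2B

gaming_q2 = 2.5    # $ billion, "nearly $2.5 billion" Gaming revenue
print(f"Implied year-ago Gaming revenue: ${prior(gaming_q2, 22):.2f}B")  # ~$2.05B
```

So the 101% jump means NVIDIA roughly doubled a year-ago quarter of about $6.7 billion, while Gaming's 22% gain came off a base of around $2 billion.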