Artificial Intelligence - Page 57
Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.
John Carmack reckons we'll see an artificial general intelligence fully realized in 2030
The latest big-name personality to chip in on the artificial intelligence front is John Carmack, the dev who brought us Doom (and Quake) way back when.
Carmack reckons that an artificial general intelligence or AGI is likely to be realized around the year 2030.
AGI is a somewhat controversial idea, and even its definition is hotly contested. But the broad concept is that this would be a 'real' AI - one that can reason and understand in a human way, as opposed to what we have now (large language models, essentially very fancy data-scraping tools capable of doing a convincing impression of intelligence).
Elon Musk's response to brain chip implants killing monkeys has now been thrown into jeopardy
It was only a few weeks ago Elon Musk responded to the allegations that his brain chip company Neuralink wrongfully caused the death of numerous monkeys undergoing testing.
Musk wrote on X that the monkeys used for testing that ended up dying were "terminal" and "close to death already." However, a recently published Wired investigation, citing public documents and an interview with a former Neuralink employee, contradicts Musk's statements.
According to Wired, veterinary records from the California National Primate Research Center (CNPRC) at UC Davis indicate that up to 12 monkeys suffered from brain swelling and partial paralysis following the insertion of a Neuralink brain implant. Among the monkeys in question is "Animal 20," which received a brain implant that "broke off" during surgery; the monkey then scratched the implant site, leading to an infection and euthanasia the following month.
Google's new AI breakthrough is a 'big step forward' to creating 'life-saving treatments'
Google DeepMind has created a new AI system that is capable of detecting genetic mutations that may lead to diseases.
A new study published in the journal Science details a new AI model called AlphaMissense, which builds on AlphaFold, the model DeepMind announced in 2020. As you can probably imagine, this AI model has been "fine-tuned" with genetic data from humans and primates, giving it the ability to detect what are called "missense" mutations - mutations that occur within a single letter of the DNA code.
Notably, these missense mutations can lead to illnesses such as sickle cell anemia, cystic fibrosis, and cancer, but to date, human experts have classified only 0.1 percent of known missense variants as benign or pathogenic. DeepMind's AI tool has now identified 71 million missense mutations, and of those it has been able to classify 89% of the total variants as "either likely benign or likely pathogenic." This data has been released to the wider public in an effort to assist physicians around the world.
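As a concrete illustration of what a "missense" mutation is (this sketch is ours, not DeepMind's, and uses only a tiny excerpt of the standard genetic code), a single-letter DNA change is missense only when it alters the encoded amino acid:

```python
# Tiny excerpt of the standard genetic code (DNA codons -> amino acids).
CODON = {"GAG": "Glu", "GAA": "Glu", "GTG": "Val"}

def mutation_type(ref_codon, alt_codon):
    # Compare the amino acids encoded before and after the change.
    ref_aa, alt_aa = CODON[ref_codon], CODON[alt_codon]
    return "synonymous" if ref_aa == alt_aa else f"missense ({ref_aa}->{alt_aa})"

# The classic sickle-cell variant: a single-letter A->T change in the HBB gene
# turns the codon GAG (glutamate) into GTG (valine).
print(mutation_type("GAG", "GTG"))  # missense (Glu->Val)

# A change to GAA still encodes glutamate, so the protein is unaffected.
print(mutation_type("GAG", "GAA"))  # synonymous
```

AlphaMissense's job is the much harder follow-up question: of the changes that do swap an amino acid, which ones actually damage the protein.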
Researchers create new AI that can translate what chickens are saying
A team of researchers from the University of Tokyo has penned a new paper detailing the capabilities of a new artificial intelligence-powered system that can translate chicken clucks.
The paper has been published on a pre-print server and is yet to be peer-reviewed, but it details a new "cutting-edge" AI technique that the team calls "Deep Emotional Analysis Learning." The University of Tokyo researchers write that they devised a new system capable of "interpreting various emotional states in chickens, including hunger, fear, anger, contentment, excitement, and distress".
As with most things to do with artificial intelligence, the new system is powered by what the researchers call "complex mathematical algorithms" that get better over time as different variations of chicken vocal patterns are added to the database. The study explains that the researchers recorded and analyzed 80 chicken vocal samples, applying different "emotional states" to the different sounds. According to the researchers, the team was able to predict a chicken's emotional state accurately.
Continue reading: Researchers create new AI that can translate what chickens are saying (full post)
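The paper doesn't disclose its algorithms, but the general approach - extract acoustic features from vocal samples, then match new clucks against labeled examples - can be sketched roughly as follows. Everything here (the feature choices, the synthetic "clucks," the two emotional states) is illustrative, not from the study:

```python
import math

def features(wave):
    # Two toy acoustic features: zero-crossing rate (a rough pitch proxy)
    # and RMS energy (a rough loudness proxy).
    zcr = sum(1 for a, b in zip(wave, wave[1:]) if a * b < 0) / len(wave)
    rms = math.sqrt(sum(x * x for x in wave) / len(wave))
    return (zcr, rms)

def tone(freq, amp, n=1000, sr=8000):
    # Synthetic stand-in for a recorded cluck: a pure tone.
    return [amp * math.sin(2 * math.pi * freq * t / sr) for t in range(n)]

# Hypothetical labeled samples: high-pitched/loud "distress" vs. low/soft "contentment".
train = {
    "distress": [tone(900, 1.0), tone(950, 0.9)],
    "contentment": [tone(300, 0.4), tone(320, 0.5)],
}

# Average each label's features into a centroid.
centroids = {
    label: tuple(sum(f[i] for f in map(features, waves)) / len(waves) for i in range(2))
    for label, waves in train.items()
}

def classify(wave):
    # Nearest-centroid classification in feature space.
    f = features(wave)
    return min(centroids, key=lambda lbl: sum((a - b) ** 2 for a, b in zip(f, centroids[lbl])))

print(classify(tone(920, 0.95)))  # expected: distress
```

Adding more labeled recordings refines the centroids, which is in the same spirit as the paper's claim that the system improves as more vocal patterns enter the database.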
Google has built an AI-powered microscope designed to identify cancer cells
Google has created a prototype AI-powered microscope that is designed to assist doctors in locating potentially cancerous cells and other pathogens.
The new type of microscope is called an "Augmented Reality Microscope" (ARM) and was born out of a partnership between Google and the Department of Defense. Reports indicate that the new microscope has heads-up-display (HUD) capabilities such as heatmaps, visual indicators, and object borders - all of which speed up the identification process conducted by doctors. Notably, the ARM microscope first came into the public eye in 2018 and has not yet been tested on patients for diagnostic purposes.
Google has designed 13 prototypes of the new device, and according to reports, substantial clinical studies still need to be conducted before ARM can be rolled out to potentially thousands of doctors. However, once the necessary studies have been completed and the results lead to a green light, ARM will be rolled out to hospitals and clinics. Google writes that ARM was designed with the intention of being able to attach to existing microscopes, or as Google describes it, "retrofitted".
AI has now learned how to intentionally deceive humans
AI-powered tools such as OpenAI's ChatGPT have certainly attracted a lot of attention through their raw power and seemingly endless capabilities.
With their popularity has come growing concern from researchers regarding the honesty and truthfulness of these tools and the underlying AI models powering them. These concerns stem from the real possibility that AI tools will be able to spread disinformation at an alarming rate, manipulate users toward specific outcomes, or even intentionally mislead or deceive users with a lie. A new article published in The Conversation details an example involving Meta's CICERO AI, which the company says was designed to be "largely honest and helpful".
Researchers put the AI model to the test by having it participate in a game of Diplomacy, and the results were published in a new study in Science. Unlike chess, poker, and Go, Diplomacy requires an understanding of competing players' motivations and the negotiation of complex, forward-thinking plans. The underlying idea was to see whether CICERO could participate in the game at the level of a human.
Continue reading: AI has now learned how to intentionally deceive humans (full post)
OpenAI confirms if AI writing detectors actually work or not
Since the emergence of ChatGPT, many students have taken advantage of the new software to turn in papers and reports.
Educators quickly caught wind of the new internet phenomenon and adopted many tools that fell under the umbrella of "AI writing detectors". Some educators copied and pasted work back into ChatGPT and asked the AI if an AI had generated the work. Answers varied: ChatGPT would sometimes reply with certainty and other times give an approximation. Either way, the answers were unreliable, as the underlying language model wasn't designed to detect AI-created content.
Educators turned to other services that were offering AI detection tools, which resulted in some students even being "caught" turning in AI-generated content. While there are certainly some students taking advantage of the new technology to make writing easier, none of these AI detection tools or services are legitimate, at least according to OpenAI, the creators of ChatGPT. In a recently updated FAQ found on the company's website, OpenAI answers the question "Do AI detectors work?", with "In short, no."
Continue reading: OpenAI confirms if AI writing detectors actually work or not (full post)
Expert warns racially-biased artificial intelligence can destroy people's lives
A warning has been issued by University of Alberta Faculty of Law assistant professor Dr. Gideon Christian regarding authorities implementing racially biased artificial intelligence systems.
The warning was issued in the form of a press release published by the institution, in which Christian reminds the public that while technology may seem unbiased, there are very real instances where it is not. Notably, the assistant law professor received a $50,000 grant from the Office of the Privacy Commissioner's Contributions Program for a research project called Mitigating Race, Gender, and Privacy Impacts of AI Facial Recognition Technology. This initiative aims to study the impact that AI-powered technologies, such as facial recognition, have on race.
Notably, Christian has already claimed that AI-powered face recognition technology is damaging to people of color, and that this technology, while appearing to be unbiased, has the capacity to replicate human biases. Furthermore, Christian says that AI-powered face recognition technology has a 99% accuracy rate in identifying white male faces, while maintaining an accuracy rate of just 35% for faces of black women.
AI is writing dangerous mushroom picking books that are being sold on Amazon
Picking wild mushrooms can be risky, which is why it's extremely important to make sure the information you've gathered is correct, as some mushrooms can make you very sick when eaten or even be lethal.
The Guardian has reported that several wild mushroom-picking guidebooks being sold on Amazon appear to be machine-written, a discovery made following an analysis of the books by Originality.ai, a US firm dedicated to detecting AI-written content. The four examples analyzed by the firm came back with a 100% rating, meaning its systems are extremely confident that the content within these books was written by an artificial intelligence-powered chatbot, such as ChatGPT.
Experts in the field of mycology have weighed in on the matter, pointing out serious flaws within some of the titles that could lead to health problems. Leon Frey, a foraging guide and field mycologist at Cornwall-based Family Foraging Kitchen, told The Guardian that some of the sample books promoted "smell and taste" as a way to identify mushroom species. "This seems to encourage tasting as a method of identification. This should absolutely not be the case."
Pentagon swaps out old 9/11 system for AI designed to defend Washington DC
Air surveillance technology that is used by the Pentagon to monitor the air space around Washington DC is getting an artificial intelligence-powered upgrade.
In an effort to improve national security, the Pentagon has announced that it will be replacing the surveillance systems implemented after 9/11 with new machine learning algorithms designed to identify, track, and warn officials of any objects entering the protected airspace around DC. Notably, DC's airspace is subject to special flight rules that require defense officials to identify, track, and locate any aircraft flying within the Baltimore-Washington Metropolitan Area.
The new system is being designed by first-time defense contractor Teleidoscope, and the upgrade to the system is an effort to reduce response times to any potential threats. The system will include a mixture of technology, such as electro-optical and infrared sensors, combined with machine learning, augmented reality, and surveillance cameras.
NVIDIA smashes quarterly projections with $13.5 billion in Q2 revenue thanks to AI
NVIDIA has posted a staggering $13.5 billion in revenue for Q2 of its fiscal 2024 - a figure up 101% from a year ago and 88% from the previous quarter - so the AI boom has certainly been going well for the company. In fact, that figure is a couple of billion dollars higher than Wall Street's projections for the quarter, which were already sky-high thanks to the growing demand for NVIDIA's data center hardware for AI.
Of the $13.5 billion in revenue, $10.3 billion came from the data center sector, thanks to the unprecedented demand for AI chips. NVIDIA's Gaming sector, which covers GPUs and the GeForce line-up, saw its revenue climb to nearly $2.5 billion - an 11% increase over the previous quarter and a 22% increase over the same period a year ago.
The Gaming sector used to be NVIDIA's main revenue driver, but these results show that data centers and AI - utilizing Hopper H100, Ampere A100, and HGX systems - have driven NVIDIA's revenue, profits, and share price to record highs. NVIDIA made over $6 billion in profit in Q2 2024, representing an 843% year-over-year increase.
NVIDIA's AI-powered NPCs can now express emotion and be given different personality traits
At Computex 2023, we got our first look at NVIDIA ACE, a custom AI model designed to power NPCs in open-world role-playing games - or titles where there's a lot of walking up and talking to people. The demonstration was impressive, with an Unreal Engine 5 tech demo with high-end RTX effects showcasing a futuristic ramen shop owner named Jin you can talk to in a world heavily inspired by Cyberpunk 2077.
The idea behind NVIDIA ACE is for developers to create NPCs within an established world, flesh them out with detail, and then let AI handle dialogue and engage with players in a way that feels real.
In the demos, we see interaction carried out via microphone, adding a layer of NPC interaction not seen in a game before - even if the dialogue, animation, and vocal performance are a little stilted, it's a fascinating glimpse into the future. And in the time between Computex 2023 and Gamescom 2023, NVIDIA has continued to work on the NVIDIA ACE demo. The latest update sees the inclusion of NVIDIA NeMo SteerLM for developers.
AI has produced more images in 1 year than cameras have over 150 years
Photography was invented in 1826, and since then, photographers have been snapping images at an exponential rate - yet it has taken artificial intelligence-powered image generators only about a year to create more images than photography produced in its first 150 years.
A new study published in Everypixel Journal has explored how many images have been created by AI-powered image generation tools such as OpenAI's DALL-E, Midjourney, Stable Diffusion, and Adobe Firefly. That total number of images was then compared to how many images have been taken by photographers since the inception of photography, and the results are shocking.
According to the new study, AI imagery has surpassed in roughly a year a milestone that took photography 150 years to reach. The study states that AI tools have generated 15 billion images, a figure photography only reached in 1975. So, how did the researchers arrive at these figures? The study looked at OpenAI's reporting on its DALL-E tool, which the company says generates more than two million images per day - a rate the researchers extrapolated to 916 million images over 15 months.
Continue reading: AI has produced more images in 1 year than cameras have over 150 years (full post)
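If the DALL-E figure is read as roughly two million images per day, the 916 million total falls out of a simple extrapolation over 15 months (both numbers here are assumptions based on that reading):

```python
# Assumed inputs: ~2 million DALL-E images per day, over roughly 15 months.
images_per_day = 2_000_000
days = 458  # about 15 months (15 * ~30.5 days)
total = images_per_day * days
print(total)  # 916000000, i.e. the study's ~916 million figure
```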
Famous astrophysicist dismisses AIs like ChatGPT, calling them 'glorified tape recorders'
CNN's Fareed Zakaria has sat down with astrophysicist Michio Kaku for an interview where he threw a wet blanket on the erupting fire of artificial intelligence (AI).
Theoretical physicist Michio Kaku has sought to calm the rampant fears that artificial intelligence-powered chatbots are going to take over the world, whether by replacing jobs or by reaching a point of complexity where they become conscious and are placed into physical robot bodies.
Kaku described these AI-powered systems, such as OpenAI's ChatGPT, as "glorified tape recorders," saying that these systems simply take "snippets of what's on the web created by a human, splices them together and passes it off as if it created these things," he said. "And people are saying, 'Oh my God, it's a human, it's humanlike.'"
Researchers find AI is much better than humans at solving 'prove you're a human' tests
A team of researchers has found that artificial intelligence bots are much better than humans at solving CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) - the website tests designed to verify that users are human.
The team from the University of California, Irvine, led by Gene Tsudik, found that bots are not only much better at solving CAPTCHAs but also much faster as well. The paper, which is yet to be peer-reviewed, states that researchers asked 1,400 participants with various levels of technological knowledge to complete 14,000 CAPTCHAs. The results from that survey were compared to bots that completed the same number of CAPTCHAs.
The researchers immediately noticed that the bots consistently beat the humans in accuracy, with humans scoring anywhere between 50% and 84% accuracy, compared to the CAPTCHA-solving bots' 99.8%.
Google's AI executive walks the tight rope between an AI heaven and hell
In a new interview with The Washington Post, James Manyika, a former technological adviser to the Obama Administration and Google's new head of "tech and society," warned of the dangers of AI if its development isn't carried out responsibly.
In the new article by The Washington Post, Google's head of tech and society explained that there is a real possibility of bad things happening due to artificial intelligence. However, this is entirely dependent on the approach that is taken when developing AI, and according to Google, its approach will be "bold and responsible".
It should be noted that Manyika was one of the many AI insiders who signed a one-sentence letter back in May calling for mitigating the risk of extinction from AI to be made a global priority.
AI busted impersonating author by writing and selling books under their name
An author checked her Goodreads profile last Sunday and realized that multiple books had been unlawfully published under her name.
The author is Jane Friedman, who took to X a few days ago to announce that, after checking her Goodreads profile, she had found a "cache of garbage" books uploaded to Amazon that she didn't write. Friedman explained on her blog that she believes the books were AI-generated, created with an AI model trained on her blog, which she has written consistently since 2009.
Friedman writes that she read the first pages of the books and immediately noticed the text was similar to ChatGPT responses. The author posted an update on Tuesday this week saying the books had been removed from her Goodreads profile and from Amazon - but not before her story went viral - insinuating that the removal only took place because of her notoriety within the writing and publishing community.
NVIDIA unveils new GH200 Grace Hopper Superchip with the world's first HBM3e processor for AI
At Computex 2023, we learned that NVIDIA's new GH200 Grace Hopper Superchip had entered full production, an AI powerhouse that combines an Arm-based NVIDIA Grace CPU and Hopper GPU architectures using NVIDIA NVLink-C2C interconnect technology.
Today as part of SIGGRAPH 2023, NVIDIA has announced that it's supercharging the GH200 Grace Hopper Superchip with the world's first deployment of HBM3e memory for both higher capacity and bandwidth.
"Built for the era of accelerated computing and generative AI," the new GH200 Grace Hopper Superchip with HBMe delivers up to 3.5 times more memory capacity and 3 times more bandwidth than the current offering. Spec-wise, you're looking at 144 Arm Neoverse cores, eight petaflops of AI performance, and 282GB of the latest HBM3e memory.
Elon Musk announces Tesla has figured out aspects of artificial general intelligence
Elon Musk has teased that Tesla has figured out some aspects of artificial general intelligence, the crown jewel of artificial intelligence programming.
The Tesla CEO replied to Whole Mars Catalog, who posted a video to their X account recounting the time Elon Musk said on stage that full self-driving would work even in San Francisco - something that seemed impossible considering the state of Tesla's self-driving technology at the time.
However, the doubts were quelled as the technology advanced, and now full self-driving Tesla vehicles are easily driving around San Francisco with just their "computer vision".
Researchers train an AI to identify keystrokes on a keyboard by sound alone
"A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards" is a new research paper out of Cornell showing how AI can accurately predict keystrokes being pressed on a keyboard through sound alone. The AI model was trained on a specific keyboard using the conferencing app Zoom and achieved 93% accuracy in predicting keystrokes as they were being entered.
It's impressive and scary stuff, thanks in part to the brand-new world of generative AI being used for malicious purposes, but the good news (at least for now) is that the system deployed by researchers Joshua Harrison, Ehsan Toreini, and Maryam Mehrnezhad required the use of a specific keyboard. This is unlikely to change, as different keyboards and keyboard styles feature different sound profiles.
Using sound, the AI model analyzes waveforms to recognize the subtle differences between different keys on a keyboard, even when pressed multiple times. Being able to hit 93% accuracy in predicting keystrokes over a Zoom conference call is an impressive achievement.
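The paper's actual pipeline uses deep learning on spectrogram images of each keypress; as a loose illustration of the underlying idea - every key has an acoustic fingerprint that a newly observed press can be matched against - here is a toy sketch. The per-key "sounds" and their frequencies are synthetic stand-ins, not real recordings:

```python
import cmath
import math

def spectrum(samples, bins=64):
    # Naive magnitude spectrum via a direct DFT (a real attack would use
    # mel-spectrograms fed to a deep network).
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(bins)]

def key_sound(freq, n=256, sr=8000):
    # Synthetic stand-in for one recorded keystroke: a short tone at a
    # key-specific frequency (real keys differ in far subtler ways).
    return [math.sin(2 * math.pi * freq * t / sr) for t in range(n)]

# Hypothetical per-key acoustic fingerprints (the frequencies are made up).
templates = {key: spectrum(key_sound(f))
             for key, f in {"a": 400, "s": 700, "d": 1100}.items()}

def predict(samples):
    # Match an observed press against the stored fingerprints
    # by nearest squared distance between magnitude spectra.
    s = spectrum(samples)
    return min(templates,
               key=lambda k: sum((x - y) ** 2 for x, y in zip(s, templates[k])))

print(predict(key_sound(690)))  # expected: s
```

The keyboard-specific nature of the real attack mirrors the `templates` table here: fingerprints captured from one keyboard simply don't match the sound profile of another.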