Artificial Intelligence - Page 94
All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, NVIDIA, OpenAI, ChatGPT, generative AI, impressive AI demos & plenty more - Page 94.
This $40 off A.I. device translates 12 unique languages in real-time
If you have traveled to different countries and run into a strong language barrier, you know how much of a hassle it can be to work out simple things like directions.
Luckily, that problem can now be solved with the ONE Mini. The device began life as a $100,000 Kickstarter campaign and is now sold as the ONE Mini Pocket Multilingual Assistant. This pocket-sized device translates languages in real-time and is a fantastic purchase for any avid traveler.
The device can translate 12 different foreign languages and comes with a multitude of features. Its built-in artificial intelligence captures spoken audio and then produces highly accurate text or audio in the requested language. Arguably the best feature of the ONE Mini is that purchasers gain access to a 24/7 live interpreter service that can assist in more complex conversations.
Continue reading: This $40 off A.I. device translates 12 unique languages in real-time (full post)
World's 1st AI cameras that detect drivers using phones hit Australia
A state in Australia has rolled out the world's first AI cameras that are designed to detect drivers who are using their mobile phones.
New South Wales (NSW), a state in Australia, has said that these new cameras use artificial intelligence to determine whether someone is using their phone while operating a moving vehicle. The system flags a driver suspected of using a phone, and the images are then reviewed by a human before fines are issued. NSW police assistant commissioner Michael Corboy told Australian media that "It's a system to change the culture."
At the moment, the plan is for 45 of these cameras to roll out across the state over the next three years. Australian law is giving drivers a small grace period: for the first three months, a driver caught by one of these cameras will be issued an official warning, and after that, fines of $344 AUD ($233 US) will be sent out along with penalty points. The NSW government has said that having these cameras on the roads could prevent 100 fatal crashes over a five-year period.
Continue reading: World's 1st AI cameras that detect drivers using phones hit Australia (full post)
Quantum milestone reached: world's first quantum computing benchmark
Researchers have created the very first quantum computing benchmark, paving the way for more universal standards of measuring quantum computing performance.
Researchers out of the University of Waterloo have developed a new form of quantum computing benchmarking called 'cycle benchmarking'. This new form of benchmarking allows researchers to see the potential scalability of the quantum computer being tested, while also making it possible to compare results from one machine to another. Joel Wallman, an assistant professor at Waterloo's Faculty of Mathematics and Institute for Quantum Computing, said "This finding could go a long way toward establishing standards for performance and strengthen the effort to build a large-scale, practical quantum computer".
This new method records the total probability of errors under any quantum computing application. This means the benchmark marks the first time researchers will be able to compare the capabilities of quantum processors that are customized for specific applications. The breakthrough couldn't have come at a better time, with companies like Google, IBM, and Microsoft all slowly but surely making progress in the quantum computing field.
Continue reading: Quantum milestone reached: world's first quantum computing benchmark (full post)
Self-driving cars are 25% better at predicting an idiot driver's move
One of the main problems with self-driving cars is that the artificial intelligence inside the vehicle assumes all humans drive and act in the same way. That simply isn't the case.
Luckily, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have begun examining that issue for potential solutions. Using social psychology techniques, they predicted the behavior of other drivers on the road and fed those predictions to the artificial intelligence, helping it classify drivers into two basic categories: collaborative or competitive.
Through these classification techniques, the system was better able to predict drivers' movements during lane merges, fast turns, and more. The paper says that after the techniques were implemented, the artificial intelligence's accuracy increased by 25%. Wilko Schwarting, the lead author on the new paper, said, "Working with and around humans means figuring out their intentions to better understand their behavior. People's tendencies to be collaborative or competitive often spills over into how they behave as drivers."
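As a rough illustration of the idea described above (a hypothetical sketch, not CSAIL's actual model — the scoring rule and the 0.3 weighting are invented for this example), a planner could score how "collaborative" a nearby driver is from observed yield-versus-cut-off behavior, then use that label to bias its prediction of whether that driver will let the car merge:

```python
# Hypothetical sketch: classify a driver as collaborative or competitive
# from observed behavior, then bias a merge prediction accordingly.
# The scoring rule and weights are invented for illustration only.

def social_score(yield_events, cutoff_events):
    """Score in [-1, 1]: positive = collaborative, negative = competitive."""
    total = yield_events + cutoff_events
    if total == 0:
        return 0.0  # no evidence yet; assume neutral
    return (yield_events - cutoff_events) / total

def classify_driver(yield_events, cutoff_events):
    """Map the score onto the two categories used in the article."""
    return "collaborative" if social_score(yield_events, cutoff_events) >= 0 else "competitive"

def merge_probability(base_prob, yield_events, cutoff_events):
    """Shift a baseline probability that the driver lets us merge, clamped to [0, 1]."""
    score = social_score(yield_events, cutoff_events)
    return min(1.0, max(0.0, base_prob + 0.3 * score))

print(classify_driver(4, 1))                      # a driver who mostly yields
print(round(merge_probability(0.5, 4, 1), 2))     # baseline 0.5, nudged upward
```

The point of the sketch is only the structure: behavior history feeds a social classification, which in turn adjusts the motion prediction, rather than treating every driver identically.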
Continue reading: Self-driving cars are 25% better at predicting an idiot driver's move (full post)
'Superhuman' AI beats pro players & hits Grandmaster Starcraft II rank
An artificial intelligence developed by Google's AI firm DeepMind has managed to achieve the highest level of ranking in Starcraft II.
According to the results, which were published in Nature, DeepMind's AI was released onto the European Starcraft II servers and placed within the top 0.15% of the region's 90,000 players. Jon Dodge, an AI researcher at Oregon State University in Corvallis, was shocked at the progress the AI made, saying "I did not expect AI to essentially be superhuman in this domain so quickly, maybe not for another couple of years."
The AI is called AlphaStar, and before it was released onto the European servers its speed was reduced to make for a fairer contest. The researchers also wanted players not to know they were playing against an AI. David Silver, who co-leads the AlphaStar project, said "We wanted this to be like a blind experiment. We really wanted to play under those conditions and really get a sense of, 'how well does this pool of humans perform against us?'"
Continue reading: 'Superhuman' AI beats pro players & hits Grandmaster Starcraft II rank (full post)
Deep neural network achieves human-like character movement & motion
Researchers have showcased a brand new 3D animated character being brought to life. The catch? It moves exactly like humans do.
Computer scientists from the University of Edinburgh and Adobe Research have developed a data-driven technique that uses a deep neural network to accurately guide 3D animated characters. The precision of the neural network lets characters perform a variety of different motions, such as sitting in chairs, picking up objects, running, side-stepping, climbing through obstacles, and more.
Komura, co-author and chair of computer graphics at the University of Edinburgh, commented on the achievement, saying "The technique essentially mimics how a human intuitively moves through a scene or environment and how it interacts with objects, realistically and precisely." A video of the animated character is available in the full post.
Continue reading: Deep neural network achieves human-like character movement & motion (full post)
Facebook's new AI system can 'de-identify' people's faces in real-time
Facebook researchers have announced the development of a new AI system that will assist in de-identifying people's faces.
While you might expect Facebook to be working on facial recognition software designed to identify people's faces, it is doing exactly the opposite. A new announcement out of Facebook Research shines a light on a new AI system designed to ever so slightly distort people's faces.
The example shows Jennifer Lawrence's face being distorted. The AI-generated image isn't that different from the original, but it differs enough that facial recognition technology would have a much harder time identifying that it's her. The technology is designed to help people keep their identity secure from third-party facial recognition software that could potentially be used to scam them. For more information about the new AI, visit the Facebook Research website.
Continue reading: Facebook's new AI system can 'de-identify' people's faces in real-time (full post)
This AI can spot brain hemorrhages with INSANE pixel-level accuracy
Brain hemorrhages have always been a doctor's nightmare, as missing even the tiniest hemorrhage can prove fatal for the patient. Now the responsibility might not rest entirely on doctors: an AI has been developed to shoulder some of it as well.
UC Berkeley and UCSF researchers have developed an algorithm that detects brain hemorrhages with better accuracy than two out of four radiologists. The algorithm was trained on 4,396 CT scans using a convolutional neural network. While that sample size might sound relatively small, it should be noted that the AI was able to detect abnormalities within the scans "at the pixel level".
This means the AI can filter noise and other sources of error (that human doctors may run into) out of the equation, giving a more technical analysis of the brain scans and, in turn, a more accurate assessment of what needs to be done. While you might think this AI will replace doctors, it won't. Instead, it will assist doctors in discovering abnormalities they might have otherwise missed, saving them massive amounts of time.
Continue reading: This AI can spot brain hemorrhages with INSANE pixel-level accuracy (full post)
Deep learning AI beats expert scholars at deciphering ancient texts
Researchers at the University of Oxford built and trained a neural network to fill in the missing letters of ancient texts, and the AI is now better at it than expert scholars.
The researchers tested the AI on ancient Greek inscriptions on objects such as stones, ceramics, and metal. The texts date back to between 1,500 and 2,600 years ago, and according to a report out of New Scientist, the AI creamed the humans in a head-to-head speed test at deciphering the artifacts. "In a head-to-head test, where the AI attempted to fill the gaps in 2949 damaged inscriptions, human experts made 30 percent more mistakes than the AI. Whereas the experts took 2 hours to get through 50 inscriptions, Pythia gave its guesses for the entire cohort in seconds."
New Scientist says the AI, named Pythia, was able to recognize and remember patterns in 35,000 different relics amassing over 3 million words. It was also able to pick up on context such as the shape and layout of the inscriptions. Pythia gives scholars predictions for missing letters or words within a text, and rather than returning a single prediction, it gives multiple predictions along with its level of confidence in each one.
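The multiple-predictions-with-confidence output format can be sketched with a toy model (this is a hypothetical illustration, not Pythia's actual neural network — the corpus and the trigram scoring are invented): for a one-letter gap, rank candidate letters by how often they appear between the surrounding characters in a training corpus, and report each candidate with a normalized confidence.

```python
# Toy sketch of ranked gap-filling with confidences (not Pythia's model):
# candidate letters for a gap are scored by trigram frequency in a
# miniature corpus standing in for the 3-million-word training set.
from collections import Counter

CORPUS = "the cat sat on the mat and the rat ran at the cart"  # invented

def trigram_counts(text):
    """Count every 3-character window in the corpus."""
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def predict_gap(left, right, top_k=3):
    """Return up to top_k (letter, confidence) guesses for left + '_' + right."""
    counts = trigram_counts(CORPUS)
    context = {c: counts[left[-1] + c + right[0]]
               for c in "abcdefghijklmnopqrstuvwxyz "
               if counts[left[-1] + c + right[0]] > 0}
    total = sum(context.values())
    ranked = sorted(context.items(), key=lambda kv: -kv[1])
    return [(c, round(n / total, 2)) for c, n in ranked[:top_k]]

# Gap after "the ": several plausible letters, each with a confidence.
print(predict_gap("the ", "at on"))
```

Returning a ranked list instead of a single answer mirrors the workflow described above: the scholar, not the model, makes the final call, using the confidences as a guide.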
Continue reading: Deep learning AI beats expert scholars at deciphering ancient texts (full post)
OpenAI shows one-handed robot solving a Rubik's cube
We all know our days are numbered, with our AI and robotic overlords planning to overthrow humanity at some point in the future... and it all seems like it'll begin with a Rubik's Cube.
AI research organization OpenAI has been hard at work building a general-purpose, self-learning robot, with its robotics division unveiling the Dactyl humanoid robotic hand in 2018 -- which is now being used to solve a Rubik's cube in less than 4 minutes. OpenAI is working on a number of different robotic parts with its in-house AI software, with this robotic hand just one of those.
Dactyl stumbles, but eventually solves the Rubik's cube -- the team's goal being to see their AI-powered robotic appendages working on real-world tasks. Their AI-packed robots can learn real-world skills without needing to be specifically programmed, meaning Dactyl is a self-learning robotic hand that approaches new tasks just like you or I would.
Continue reading: OpenAI shows one-handed robot solving a Rubik's cube (full post)