Artificial Intelligence News - Page 4
We all know our days are numbered, with our AI and robotic overlords planning to overthrow humanity at some point in the future... and it all seems like it'll begin with a Rubik's Cube.
AI research organization OpenAI has been hard at work building a general-purpose, self-learning robot, with its robotics division unveiling Dactyl, its humanoid robotic hand, in 2018 -- a hand that is now being used to solve a Rubik's Cube in under four minutes. OpenAI is developing a number of different robotic parts with its in-house AI software, with this robotic hand just one of them.
Dactyl stumbles, but eventually solves the Rubik's Cube -- and the team's goal is to see its AI-powered robotic appendages working on real-world tasks. Robots packed with AI can learn real-world skills and won't need to be specifically programmed for each one. This means Dactyl is a self-learning robotic hand that approaches new tasks just as you or I would.
Have you ever wondered if it would be possible for you to pick up your bicycle, fold it into itself and then place it in your pocket? Well, this new super-compressible material could do just that.
Researchers at TU Delft have used artificial intelligence to create a new supercompressible yet strong material. According to Miguel Bessa, assistant professor in materials science and engineering at TU Delft, the idea originated when he was at the California Institute of Technology, working in a corner of the Space Structures Lab. Bessa noticed a satellite structure that could unfold long solar sails from an extremely small package.
This observation inspired Bessa to create a supercompressible material that could be squeezed into a fraction of its volume while still remaining strong. "If this was possible, everyday objects such as bicycles, dinner tables and umbrellas could be folded into your pocket." Bessa and his team used artificial intelligence instead of the traditional trial-and-error process to explore new design possibilities for metamaterials. This reduced experimentation to the absolute minimum, and after some time Bessa fabricated two designs that converted once-brittle polymers into lightweight, recoverable and super-compressible metamaterials.
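The paragraph above describes swapping physical trial-and-error for a model-guided search of the design space. Here's a minimal sketch of that general idea -- not the team's actual method -- using a made-up one-parameter "strut thickness" design space and a toy strength function, where a cheap surrogate model fitted to just five "experiments" locates the best design:

```python
import numpy as np

# Hypothetical stand-in objective: recoverable strength of a lattice as a
# function of one design parameter (a strut thickness ratio). The real TU
# Delft work explored a far richer metamaterial design space.
def measured_strength(t):
    # pretend each call is one costly fabrication-and-test cycle
    return 1.0 - 8.0 * (t - 0.35) ** 2

# Step 1: a handful of "experiments" instead of exhaustive trial and error.
samples = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
results = measured_strength(samples)

# Step 2: fit a cheap surrogate model to the few measurements.
coeffs = np.polyfit(samples, results, 2)

# Step 3: let the surrogate, not the lab, scan the whole design space.
grid = np.linspace(0.0, 1.0, 1001)
best_t = grid[np.argmax(np.polyval(coeffs, grid))]

print(f"predicted best thickness ratio: {best_t:.2f} from {len(samples)} experiments")
```

The point is the economics: five expensive experiments plus a cheap model stand in for a thousand fabricated prototypes.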
While it might seem like a silly idea at first, did you know that people with large hands actually have bigger vocabularies than people with small hands? It's true.
Dr. Gary Marcus, director of the NYU Infant Language Learning Center and a professor of psychology at New York University, has spoken out about this very topic and how artificial intelligence (AI) is thrown into the mix. Marcus says this is an old joke tossed around by statisticians: if you take the entire population and measure everyone's hand size, the people with larger hands will indeed have larger vocabularies. This is purely because people with larger hands tend to be older, and adults tend to know more words than children.
This is correlation, not causation. Age drives both measurements: it causes people to learn new words and causes their hands to grow, which produces an observed correlation between hand size and vocabulary. Claiming that growing your hands made your vocabulary grow would assert causation -- a distinction we humans grasp quite easily, but one that artificial intelligence still struggles with.
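Marcus's joke is easy to demonstrate with a toy simulation (all numbers are invented for illustration): generate a population where age drives both hand size and vocabulary, and watch the impressive correlation vanish once age is controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: age drives BOTH hand size and vocabulary.
age = rng.uniform(2, 40, n)                       # years
hand = 5 + 0.3 * age + rng.normal(0, 1.0, n)      # cm (toy numbers)
vocab = 500 + 900 * age + rng.normal(0, 2000, n)  # words known (toy numbers)

# Raw correlation looks impressive...
raw_r = np.corrcoef(hand, vocab)[0, 1]

# ...but disappears once we control for age: regress both variables
# on age and correlate the leftovers (a partial correlation).
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

partial_r = np.corrcoef(residuals(hand, age), residuals(vocab, age))[0, 1]

print(f"corr(hand, vocab)       = {raw_r:.2f}")   # large
print(f"corr(hand, vocab | age) = {partial_r:.2f}")  # near zero
```

Once the confounder is removed, hand size tells you nothing about vocabulary -- exactly the distinction the joke is about.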
Two new studies published by UT Southwestern have shown evidence that artificial intelligence (AI) can determine whether antidepressants will work on select patients.
With depression a rampant disease in this day and age, scientists are working around the clock to counteract its widespread effects. The team out of UT Southwestern used an AI to identify patterns of brain activity that allow them to determine whether or not an individual patient will respond to certain antidepressants. This evidence suggests that if scientists can accurately scan a patient's brain activity, they may be able to pinpoint whether or not the medication prescribed to them is effective.
The main complication the team faced is that depression is extremely hard to pinpoint, as it can exist in various states of the brain. This makes it hard to get an accurate representation of how it manifests per patient, and requires researchers to scan the brain in different states, e.g. emotion-induced and resting. Dr. Madhukar Trivedi, founding director of UT Southwestern's Center for Depression Research and Clinical Care, said: "Depression is a complex disease that affects people in different ways. Much like technology can identify us through fingerprints and facial scans, these studies show we can use imaging to identify specific signatures of depression in people."
We are quickly descending into the insane reality of deepfake technology, where it will be virtually impossible to know the difference between a real or fake video of someone -- and Google wants to help.
This deepfake of Bill Hader impersonating Tom Cruise would have to be my favorite go-to deepfake, with every single person I've shown it to (even those aware of what a deepfake is) absolutely blown away by it.
The search giant has released a bunch of deepfake videos that will help researchers make deepfake detection tools, because we're at a stage where the fear factor hits 11/10 and people are scared of them being used in the upcoming 2020 presidential elections.
Google filmed actors in a bunch of different poses and scenes, and then used publicly available deepfake generation methods to create 3000 new deepfakes. Researchers now have access to this trove of deepfakes, and can use them to train automated detection tools to get better at telling real videos from not-so-real ones.
Facebook was slammed for its sloppiness in handling the Christchurch mass shooting, and the social network has reacted with news that it will use police body cam footage to train its AI to recognize gun attack videos.
The UK's Metropolitan Police will supply its body cam footage to Facebook for its firearms training exercises, with the social networking giant using the video to train its content moderation programs to "rapidly identify real-life first person shooter incidents and remove them from our platform".
Facebook is also talking to US police departments about acquiring their police body cam footage for similar use with AI.
It looks like YouTube's algorithms are going on the fritz, with its automated system flagging and taking down a bunch of videos of robots fighting, mistaking them for animal cruelty.
Some of the videos, which included footage of BattleBots contestants, were taken down with a message that read: "Content that displays the deliberate infliction of animal suffering or the forcing of animals to fight is not allowed on YouTube".
It's funny... a robot (AI) thinking that two robots (like itself) fighting is animal cruelty. Does this mean YouTube's AI is growing empathy, and doesn't like to see one of its own being battled to the death? Maybe.
Engadget talked with a YouTube spokesperson who explained: "With the massive volume of videos on our site, sometimes we make the wrong call. When it's brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it. We also offer uploaders the ability to appeal removals and we will re-review the content".
Google DeepMind co-founder Mustafa Suleyman has abruptly been placed on leave from the secretive AI-focused division of Google, with the reason being controversy "over some of the projects he led".
WTF was Suleyman working on for him to be placed on leave, from DeepMind of all places?
Suleyman, until he was placed on leave, was working in the "applied" division of DeepMind. This division looked for practical uses for DeepMind's research in health, energy, and other industries. Suleyman was a big public face for DeepMind, with a DeepMind spokeswoman explaining: "Mustafa is taking time out right now after 10 hectic years".
He co-founded DeepMind back in 2010 with CEO Demis Hassabis, and within four years they attracted the bank account of Google, which acquired DeepMind for $500 million. Google acquired DeepMind for its work in AI, and post-acquisition DeepMind pushed into health care research, which led to the company opening a division dedicated to the health care industry: DeepMind Health, with a staff of 100.
While it might seem that a cosmic-level collision, such as one galaxy hitting another head-on, would be hard to miss, astronomers have found an easier way to avoid missing such an event.
Astronomers have trouble determining which galaxies are the result of a past collision and which are just super-bright distant galaxies from the early universe. To assist with this task, they have created an AI system that has been fed 1 million fake Hubble Space Telescope and James Webb Space Telescope images, helping it differentiate between galactic collisions and blurry, super-bright star-forming galaxies.
While the AI was being fed this data, astronomers already knew which ones were which, despite all the images looking extremely similar at first glance. Through this process the astronomers managed to train the AI program to correctly differentiate between the two outcomes. They do say that for the system to become more accurate they must feed it more data covering different types and ages of galaxies, which will give the AI a wider range of detections to work with.
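The underlying recipe -- train a classifier on simulated images whose true class is already known -- can be sketched in miniature. Everything below is a toy stand-in for the astronomers' actual pipeline: invented blob shapes on 16x16 "images" and a simple logistic-regression model instead of their real network.

```python
import numpy as np

rng = np.random.default_rng(42)
SIZE = 16
yy, xx = np.mgrid[0:SIZE, 0:SIZE]

def blob(cx, cy, amp, width):
    """A Gaussian bright spot, our stand-in for a galaxy."""
    return amp * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * width ** 2))

def fake_image(merger):
    """Toy 'telescope image': mergers get two offset blobs,
    lone starburst galaxies get one compact bright blob."""
    img = rng.normal(0, 0.05, (SIZE, SIZE))  # sky noise
    if merger:
        img += blob(*rng.uniform(3, 7, 2), 1.0, 2.0)
        img += blob(*rng.uniform(9, 13, 2), 1.0, 2.0)
    else:
        img += blob(*rng.uniform(6, 10, 2), 2.0, 1.5)
    return img.ravel()

def sigmoid(z):
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

# Labelled synthetic training set -- the real study used ~1 million
# simulated Hubble / James Webb images whose true class was known.
labels = rng.integers(0, 2, 2000)
X = np.stack([fake_image(bool(l)) for l in labels]).astype(float)
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

# Minimal logistic-regression classifier trained by gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - labels) / len(labels)
    b -= 0.1 * (p - labels).mean()

acc = ((sigmoid(X @ w + b) > 0.5) == labels).mean()
print(f"training accuracy: {acc:.0%}")
```

Because the simulator knows the right answer for every image, labels are free -- which is exactly why the astronomers train on fakes before pointing the system at real telescope data.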
Instagram has changed the game today as users can now flag any 'false content' that they see on the platform. These flags will go back to an artificial intelligence system that will use the information to spot more false content.
Instagram will also be tracking the flagged information, and depending on a range of different "signals" -- such as the post's age, its engagement, and the account holder's previous behavior -- Instagram will determine whether or not the flagged post should be reviewed by third-party fact checkers. To flag false content, Instagram users can simply press the three dots at the top right-hand corner of the post; once the dots are pressed, they can select "It's Inappropriate" and then choose "False Information".
It should be noted that if a post is discovered to be false, it won't be removed from the platform and the uploader won't be notified of the discovery. Instead, the post will be "downplayed" on the 'Explore' tab and its hash-tagged pages. Instagram's third-party fact checkers are the same ones that Facebook uses, including 'Full Fact', which recently spoke out saying that Facebook's fact-checking algorithms need work.
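Instagram hasn't published how those signals get combined, but a triage rule of that general shape could look like the sketch below. Every signal name, weight, and threshold here is hypothetical -- this only illustrates the idea of scoring flagged posts to decide which ones are worth a fact checker's time.

```python
from dataclasses import dataclass

@dataclass
class FlaggedPost:
    age_hours: float     # how old the post is
    engagements: int     # likes + comments + shares
    prior_strikes: int   # account holder's past false-content findings
    flag_count: int      # how many users flagged it

def review_priority(post: FlaggedPost) -> float:
    """Combine normalized signals into one score (weights are invented)."""
    recency = max(0.0, 1.0 - post.age_hours / 168)   # fades over a week
    reach = min(1.0, post.engagements / 10_000)
    history = min(1.0, post.prior_strikes / 5)
    reports = min(1.0, post.flag_count / 50)
    return 0.2 * recency + 0.3 * reach + 0.3 * history + 0.2 * reports

def needs_fact_check(post: FlaggedPost, threshold: float = 0.5) -> bool:
    """Only posts scoring above the threshold go to third-party checkers."""
    return review_priority(post) >= threshold

viral_repeat_offender = FlaggedPost(age_hours=6, engagements=25_000,
                                    prior_strikes=4, flag_count=80)
stale_minor_post = FlaggedPost(age_hours=400, engagements=12,
                               prior_strikes=0, flag_count=1)

print(needs_fact_check(viral_repeat_offender))  # escalated for review
print(needs_fact_check(stale_minor_post))       # quietly ignored
```

The point of such a filter is volume: with millions of flags, only the posts most likely to do damage reach a human fact checker.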