Artificial Intelligence News
Researchers at King's College London, Massachusetts General Hospital, and health science company ZOE have created an artificial intelligence capable of predicting whether a person has COVID-19 without a test being conducted.
The AI was built using data from the COVID Symptom Study app, which more than 3.3 million people have downloaded. Users report their daily health status through the app, covering a long list of symptoms such as coughing, fatigue, fever, loss of taste and smell, and more. The researchers analyzed this data to find which symptoms were most strongly associated with a positive result if the person were to be tested.
The researchers then created a mathematical model that can predict with nearly 80% accuracy whether a person is likely to have a COVID-19 infection based on four symptoms: loss of smell or taste, severe or persistent cough, fatigue, and skipping meals. The model also takes the person's age and sex into account. When the researchers applied it to a group of 800,000 app users, it predicted that 17.42% of them were likely to already have a COVID-19 infection.
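To give a feel for how a symptom-plus-demographics model like this works, here is a minimal sketch in the style of a logistic regression. The function name and every coefficient below are invented for illustration; they are not the study's actual model or weights.

```python
import math

def covid_risk(loss_of_smell, persistent_cough, fatigue,
               skipped_meals, age, sex_male):
    """Toy logistic-regression risk score.

    All weights are made up for illustration only; the real study's
    coefficients were not published in this article.
    Symptom inputs are 0 or 1; age is in years; sex_male is 0 or 1.
    """
    z = (-1.32
         + 1.75 * loss_of_smell
         + 0.31 * persistent_cough
         + 0.63 * fatigue
         + 0.82 * skipped_meals
         + 0.01 * age
         + 0.23 * sex_male)
    # Sigmoid squashes the linear score into a 0-1 probability.
    return 1 / (1 + math.exp(-z))

# A 45-year-old man with loss of smell, cough, and fatigue:
print(round(covid_risk(1, 1, 1, 0, 45, 1), 3))
```

The key property of such a model is that each symptom simply shifts the score up or down, which is what makes it cheap enough to run over hundreds of thousands of app users at once.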
AI has predicted that if social distancing restrictions are lifted prematurely, all the coronavirus prevention work done so far will have been for nothing.
MIT researchers have trained an AI to gobble up publicly available data about the spread of the coronavirus and provide insightful predictions about how social distancing has influenced that spread and when new infection counts will plateau. Using statistical data gathered by researchers from January through early March, the AI predicted that the US and Italy will see new case counts begin to level off sometime next week.
The researchers recently released a paper, which stated "Leveraging our neural network augmented model, we focus our analysis on four locales: Wuhan, Italy, South Korea and the United States of America, and compare the role played by the quarantine and isolation measures in each of these countries in controlling the effective reproduction number of the virus."
The coronavirus is undoubtedly a terrible thing to have happened to everyone, but out of the world's misery shines fantastic examples of humans coming together for one common cause.
One of those common causes is Folding@home, a distributed computing project run by a team at Stanford University. The premise of Folding@home is that the public can donate spare computation power from their GPU or CPU, and the project aims that combined power at calculations that deepen scientists' understanding of a given subject. At the moment, Folding@home is focused on understanding the coronavirus, and it has recently hit a new power level: 2.4 exaFLOPS.
In the above Twitter post from the official Folding@home account, it's stated that the collective power of the Folding@home project is almost at 2.5 exaFLOPS, which is "faster than the top 500 supercomputers combined!". The post also contains a graph, and as you can see, Folding@home is clearly exceeding all competing supercomputers, even to the point that it's "15x faster than any current supercomputer". It's absolutely fantastic to see the technology community rally together behind one great cause.
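As a quick sanity check on that "15x faster" claim, here is some back-of-the-envelope arithmetic. I'm assuming roughly 148.6 petaFLOPS for the fastest individual supercomputer of the time (Summit's published Linpack figure); that number is my assumption, not from the article.

```python
folding_at_home = 2.4e18   # reported combined throughput, in FLOPS (2.4 exaFLOPS)
top_machine = 148.6e15     # assumed fastest single supercomputer (~148.6 petaFLOPS)

ratio = folding_at_home / top_machine
print(f"{ratio:.1f}x faster")  # roughly 16x, consistent with the "15x" claim
```

One exaFLOP is 10^18 floating-point operations per second, so the comparison is just a division once both figures are in the same units.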
For some time now, we have known that Facebook is taking an aggressive stance towards misinformation about the coronavirus outbreak, but what if that stance has been letting misinformation run wild and free this whole time?
A new report from Consumer Reports describes a journalist's test of the misinformation AI that regulates content posted to Facebook. Facebook has said that any content that evokes fear in the public, or that aims to "create a sense of urgency, like implying a limited supply, or guaranteeing a cure or prevention", will be removed. Facebook's AI is also designed to remove any misinformation surrounding the coronavirus, including advertisements.
The journalist from Consumer Reports decided to test this AI to see what could get past it. Unfortunately, every ad specifically designed to trigger the AI's misinformation barrier slipped through the cracks: ads claiming the coronavirus is a hoax, that social distancing is fake, and that drinking bleach kills the coronavirus were all approved by Facebook's algorithm.
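Facebook hasn't published how its moderation system works, but the result above illustrates a general weakness of naive content filtering. Here is a toy keyword filter, entirely my own construction, showing how a trivially rephrased claim sails past an exact-phrase match:

```python
# Invented list for illustration; not Facebook's actual policy phrases.
BANNED_PHRASES = ["guaranteed cure", "limited supply", "drink bleach"]

def flag_ad(text: str) -> bool:
    """Flag an ad only if it contains a banned phrase verbatim."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)

print(flag_ad("Drink bleach to kill coronavirus!"))      # exact phrase: flagged
print(flag_ad("Drinking bleach kills the coronavirus"))  # rephrased: missed
```

Real moderation systems use learned classifiers rather than phrase lists, but the same brittleness applies: content written to dodge whatever patterns the model learned tends to get through.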
A new artificial intelligence has been developed to combat the coronavirus outbreak, and the company behind it has claimed that it can detect the virus with 96% accuracy.
If you don't know what Alibaba is, it's essentially the Chinese version of Amazon. Much like Amazon, Alibaba doesn't just participate in e-retail; it has other ventures as well, some of which fall into the artificial intelligence realm. Reports are now coming out of Nikkei Asian Review about a new algorithm developed by Alibaba's research institute, Damo Academy.
According to these reports, the AI has been trained on 5,000 confirmed coronavirus cases and can now distinguish ordinary viral pneumonia from COVID-19 infections with 96% accuracy. Accurately detecting who is infected is the main problem with the coronavirus, so this newly developed AI could be the savior the world needs. The report says the AI can detect an infected human in just 20 seconds, which is pretty damn fast compared to a human doctor, who takes anywhere between five and twenty minutes.
Alibaba has said that this new AI system is being rolled out to more than 100 hospitals in the provinces of Hubei, Guangdong, and Anhui.
Artificial Intelligence (AI) raises some concerns, and one of the most prominent names voicing those concerns is SpaceX and Tesla CEO Elon Musk.
If you didn't know, Musk founded a non-profit organization called OpenAI with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba, and John Schulman. The goal of the organization was to develop new and safe ways to monitor and regulate AI development. Musk has since left the organization but is still voicing his concerns over the oversight of AI development.
Musk took to Twitter in response to a new MIT Technology Review profile of OpenAI. Here's what he said: "OpenAI should be more open imo (in my opinion). All orgs developing advanced AI should be regulated, including Tesla". Judging by Musk's response, it can be assumed that he is now somewhat distant from the organization's operations. Back in 2019, OpenAI started a for-profit arm owned by the non-profit parent company, and also accepted a $1 billion investment from Microsoft.
Deepfake technology is getting uncomfortably good, and while most of what you have probably seen from deepfakes is movie-related, check out this video game one.
Above we have a video from BabyZone, who used deepfake technology to insert various celebrities into Mortal Kombat 11. Using the deepfake tech, BabyZone was able to successfully replace Sub-Zero with Keanu Reeves from John Wick, Terminator with Bruce Campbell, Kabal with Dwayne 'The Rock' Johnson, Johnny Cage with Jean-Claude Van Damme, Kung Lao with Jackie Chan, Liu Kang with Bruce Lee, and finally The Joker with Joaquin Phoenix.
I think Keanu Reeves as Sub-Zero and Jackie Chan as Kung Lao are the two that come out particularly well. Unfortunately, you cannot download these to play yourself, as BabyZone used his own facesets and the footage you are seeing was edited in post. Even so, these deepfakes are extremely impressive.
It was only last week I reported on some law enforcement adopting a new artificial intelligence to assist them in investigations. Now, the company behind that artificial intelligence has been asked to cease and desist by multiple social media platforms.
The artificial intelligence we are talking about here is from Clearview: facial recognition software that identifies people using public personal images scraped from multiple social media platforms. The AI has been fed over 3 billion publicly available images and can now identify almost anyone at the drop of a hat. Clearview, the company behind the AI, has been pushing its product to government and law enforcement, as it believes what it has created can assist in investigations.
According to a Facebook spokesperson who spoke to BuzzFeed News, Facebook has sent "multiple letters" asking Clearview to cease and desist scraping "data, images and media" from Facebook and Instagram. Facebook isn't alone in asking Clearview to 'please stop': back in late January, Twitter also sent letters to Clearview, and YouTube and Venmo are in the same letter-sending boat as the other platforms.
Chicago Police are using an artificial intelligence that could be in clear breach of people's privacy online, as it scans everyone's social media.
The artificial intelligence is called Clearview AI, and it is backed by a massive database of 3 billion photos taken from social media and other platforms such as Facebook, YouTube, and Twitter. Users feed an image of the person they want to find into the system; the AI then cycles through its database and presents a set of images from different platforms that 'match' the fed image. At the moment, the AI is being used by the FBI, Homeland Security, and the Chicago Police Department (CPD).
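Clearview's internals aren't public, but facial-recognition search systems of this kind typically compare "embeddings": numeric vectors summarizing a face, matched by similarity. Here is a minimal sketch of that idea; the photo IDs, vectors, and threshold below are all invented for illustration.

```python
import math

# Hypothetical database of precomputed face embeddings (photo ID -> vector).
DATABASE = {
    "photo_001": [0.9, 0.1, 0.3],
    "photo_002": [0.2, 0.8, 0.5],
    "photo_003": [0.88, 0.12, 0.31],
}

def cosine_similarity(a, b):
    """Similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query, threshold=0.95):
    """Return IDs of database photos whose embedding is close to the query's."""
    return [pid for pid, vec in DATABASE.items()
            if cosine_similarity(query, vec) >= threshold]

print(search([0.89, 0.11, 0.3]))  # matches photo_001 and photo_003
```

In a real system the vectors come from a neural network and have hundreds of dimensions, and the 3-billion-image database would be searched with an approximate nearest-neighbor index rather than a linear scan, but the matching principle is the same.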
While this AI system would definitely save law enforcement some time in tracking criminals, many privacy advocates have said that it is in complete violation of people's privacy rights. New Jersey's attorney general, Gurbir Grewal, said, "Until this week, I had not heard of Clearview AI. I was troubled." According to the ACLU of New Jersey, Grewal "put a moratorium on Clearview AI's chilling, unregulated facial recognition software."
CES 2020 - American Airlines will be using Google Assistant's interpreter mode as a means of making travelers more comfortable in lounges.
If you weren't aware, Google Assistant has a really cool Interpreter Mode that allows people to communicate across the language barrier. Engadget managed to spot Google Assistant's Interpreter Mode at the Los Angeles International Airport's Admirals Club lounge, where American Airlines was testing it out on Google Nest Hubs.
Interpreter Mode can currently translate 29 different languages in real-time, including Arabic, French, German, Japanese, Russian, Spanish, and Vietnamese. Engadget has also said that, according to American Airlines, Interpreter Mode will only be used if a multilingual team member isn't present to assist travelers.