Facebook was slammed for its sloppy handling of the Christchurch mass shooting video, and the social network has responded with news that it will use police body cam footage to train its AI to recognize gun attack videos.
The UK's Metropolitan Police will supply Facebook with body cam footage from its firearms training exercises, with the social networking giant using the video to train its content moderation systems to "rapidly identify real-life first person shooter incidents and remove them from our platform".
Facebook is also talking to US police departments about acquiring their police body cam footage for similar use with AI.
It looks like YouTube's algorithms are going on the fritz, with its automated system flagging and taking down a bunch of videos of robots fighting, mistaking them for animal cruelty.
Some of the videos, which included BattleBots matches, were taken down with a message that read: "Content that displays the deliberate infliction of animal suffering or the forcing of animals to fight is not allowed on YouTube".
It's funny... a robot (AI) thinking that two robots (like itself) fighting is animal cruelty. Does this mean YouTube's AI is growing empathy, and doesn't like to see one of its own being battled to the death? Maybe.
Engadget talked with a YouTube spokesperson who explained: "With the massive volume of videos on our site, sometimes we make the wrong call. When it's brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it. We also offer uploaders the ability to appeal removals and we will re-review the content".
Google DeepMind co-founder Mustafa Suleyman has been abruptly placed on leave from the secretive AI-focused division of Google, with the reason being controversy "over some of the projects he led".
WTF was Suleyman working on for him to be placed on leave, from DeepMind of all places?
Until he was placed on leave, Suleyman ran the "applied" division of DeepMind, which looked for practical uses for DeepMind's research in health, energy, and other industries. Suleyman was a big public face for DeepMind, with a DeepMind spokeswoman explaining: "Mustafa is taking time out right now after 10 hectic years".
He co-founded DeepMind back in 2010 with CEO Demis Hassabis, and within four years the pair had attracted Google's bank account, with the search giant acquiring DeepMind for $500 million on the strength of its AI research. Post-acquisition, DeepMind pushed into health care research, which led to the company opening a dedicated division for the industry: DeepMind Health, with a staff of 100.
While it might seem that a cosmic-level collision, such as one galaxy hitting another head-on, would be hard to miss, astronomers have found an easier way to avoid missing such an event.
Astronomers have trouble determining which galaxies are the result of a past collision and which are just super-bright distant galaxies from the early universe. To assist with this task, astronomers have created an AI system that has been fed 1 million simulated Hubble Space Telescope and James Webb Space Telescope images to help differentiate between galactic collisions and blurry, super-bright star-forming galaxies.
While the AI was being fed this data, astronomers already knew which images were which, despite all of them looking extremely similar at first glance. From this process, the astronomers managed to train the AI to correctly differentiate between the two outcomes. They do say that for the system to become more accurate, they must feed it more data covering galaxies of all types and ages, which will widen the range of detections the AI can work with.
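The training process described above, in which labeled simulated images teach a model to separate two visually similar classes, can be sketched in miniature. This is an illustrative toy only: the astronomers' actual model, image sizes, and training method are not described in detail, so the tiny logistic-regression classifier and synthetic "telescope images" below are stand-ins of my own invention.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, collision):
    """Simulated 8x8 'telescope images': every galaxy gets a bright core,
    and collisions get a second bright core from the merging partner."""
    imgs = rng.normal(0.0, 0.1, size=(n, 8, 8))
    imgs[:, 2, 2] += 1.0          # primary galaxy core
    if collision:
        imgs[:, 5, 5] += 1.0      # second core: the merging partner
    return imgs.reshape(n, -1)    # flatten for the linear classifier

# Labeled training set, as with the 1 million fake Hubble/Webb images
# (label 1 = collision, 0 = single bright galaxy).
X = np.vstack([make_images(200, True), make_images(200, False)])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Logistic regression trained by gradient descent, standing in for
# whatever network the astronomers actually used.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Score the trained model on fresh simulated images it has never seen.
X_test = np.vstack([make_images(50, True), make_images(50, False)])
y_test = np.concatenate([np.ones(50), np.zeros(50)])
pred = (1.0 / (1.0 + np.exp(-X_test @ w)) > 0.5).astype(float)
accuracy = (pred == y_test).mean()
```

The key idea matches the article: because the labels are known in advance for every simulated image, the model can learn the subtle pixel-level differences between the two classes, and feeding it a wider variety of simulated galaxies would widen what it can detect.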
Instagram has changed the game today as users can now flag any 'false content' that they see on the platform. These flags will go back to an artificial intelligence system that will use the information to spot more false content.
Instagram will also track the flagged posts and, depending on a range of different "signals" such as the post's age, its engagement, and the account holder's previous behavior, determine whether or not a flagged post should be reviewed by third-party fact checkers. To flag false content, Instagram users can simply press the three dots at the top right-hand corner of a post, select "it's inappropriate", and then choose "false information".
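Combining "signals" like these into a decision is a standard triage pattern, and it can be sketched as a simple scoring function. To be clear, Instagram's actual signals, weights, and threshold are not public; everything below is a hypothetical illustration of the general approach.

```python
def should_send_to_fact_checkers(post_age_hours, engagement,
                                 prior_strikes, flag_count):
    """Hypothetical triage: combine signals into a score and compare it
    against a threshold. All weights here are invented for illustration."""
    score = 0.0
    # Fresh posts spread fastest, so recency weighs heavily.
    score += 2.0 if post_age_hours < 24 else 0.5
    # High engagement means more reach; cap its influence.
    score += min(engagement / 1000.0, 3.0)
    # An account's past behavior (prior violations) raises priority.
    score += prior_strikes * 1.5
    # The volume of user flags matters, but is also capped.
    score += min(flag_count, 10) * 0.5
    return score >= 4.0  # threshold is purely illustrative

# A fresh, viral post from a repeat offender gets routed to reviewers;
# an old, low-engagement post with one flag does not.
hot = should_send_to_fact_checkers(2, 5000, 1, 6)
cold = should_send_to_fact_checkers(200, 10, 0, 1)
```

The point of such a gate is economics: human fact checkers are scarce, so the platform only escalates posts where the signals suggest real potential for harm.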
It should be noted that if a post is found to be false, it won't be removed from the platform, and the uploader won't be notified of the finding. Instead, the post will be "downplayed" on the 'Explore' tab and on hashtag pages. Instagram's third-party fact checkers are the same ones Facebook uses, including 'Full Fact', which recently spoke out saying that Facebook's fact-checking algorithms need work.