Science, Space, Health & Robotics News
Microsoft is turning to a very interesting platform to help improve artificial intelligence research: Minecraft. Researchers at Microsoft, along with select academic researchers in a private beta program, are using the game in its unstructured mode as a testing ground for AI theories and programming.
Now how would they go about doing that? The AIX platform they're using, developed at a Microsoft lab in Cambridge, U.K., can potentially be programmed to learn in a general way similar to how humans do, directing the Minecraft avatar to go forth and do stuff. The game is a closed environment where researchers can easily and safely observe what the character is doing, and its constructible world offers enough variables to run tests under a lot of different conditions. All without sacrificing real, and expensive, robots.
"Minecraft is the perfect platform for this kind of research because it's this very open world," said Katja Hofmann, the platform's developer. "You can do survival mode, you can do 'build battles' with your friends, you can do courses, you can implement your own games. This is really exciting for artificial intelligence because it allows us to create games that stretch beyond current abilities." Even getting the character to perform simple tasks, like walking, is very beneficial to AI research.
After dominating the five-game Go series 3-0 and looking invincible in the process, Google's DeepMind AI has finally fallen to its human opponent: world Go champion Lee Sedol made the series 3-1 over the weekend after spotting weaknesses in DeepMind's game and mounting a surprising comeback. The match was described as "long and complicated."
The fifth and final match will begin late tonight or early tomorrow morning, depending on where you live, and can be viewed here. It's highly anticipated, and for good reason.
"It seems Lee Sedol can now read AlphaGo better and has a better understanding of how AlphaGo moves," said Korean commentator Song Taegon, 9-dan. "For the fifth match, it will be a far closer battle than before since [they] know each other better."
Yesterday, Google's DeepMind AI took the first game against Go world champion Lee Sedol, to the shock of many. You might think it's just one game and Sedol could just as well come back and win 4-1, but today DeepMind did it again, putting itself up 2-0. Sedol, who after yesterday's loss said he was still very confident he could beat the AI, is now very much on the back foot.
"Yesterday I was surprised but today it's more than that - I am speechless," Lee remarked after the game. "I admit that it was a very clear loss on my part. From the very beginning of the game I did not feel like there was a point that I was leading."
DeepMind founder Demis Hassabis said the AI was confident in victory from the midway point onward.
Back in January, Google's DeepMind AI beat the European champion of the complex board game Go, marking a major achievement for AI. Today -- or yesterday, depending on where you are in the world -- the matchup began against world champion Lee Sedol, widely regarded as significantly more skilled than DeepMind's previous opponent, Fan Hui.
To the surprise of even the commentators, DeepMind took the first game of the five-game series, which will play out over the next few days. While it's just one game, it proves DeepMind can hang with the best of the best -- all the more impressive given it had been projected that AI wouldn't beat even the European champion for another decade.
Astronaut Scott Kelly returned from his Year in Space mission yesterday, and NASA is commemorating it with a look at his achievements during the long voyage.
To say the least, it was a very social voyage, with Kelly hosting the first NASA TweetChat, Tumblr AnswerTime, and Reddit AMA from space. He also Instagrammed the whole time at the President's request. Obama was also sure to give him a warm welcome on his return home.
Non-social achievements for Kelly: harvesting lettuce and zinnia flowers in the VEGGIE facility, the latter of which will assist scientists planning deep-space missions, including the upcoming Mars mission.
Someone has integrated an actual, working electrocardiogram into a small business card. You heard right: a company called MobilECG has done something startlingly clever, putting diagnostic power into everyone's hands. And it's an open-source design that anyone can play with.
The card's built-in sensors measure the electrical signals created by the heart through your thumbs. It's not the most accurate way to take an ECG, but as a first-line diagnostic tool when you're not quite feeling well, it might be a life-saver. A blog post from the company says it should be accurate enough to provide cursory information from the P, Q, R, S, and T waves and prompt people to go to the hospital.
The innovation coming from startups in smaller, more integrated technology is just astounding. Just imagine where medical devices can go if something like this is only at the boundary of our imagination. At the moment, the company is gauging interest in the product, and you can request one if you'd like. They'll sell you one for $29, or less if more people start showing interest.
The AI revolution continues, with a team of Stanford researchers creating a new way of teaching AI systems how to predict a human's response to their actions.
The system, called Augur, mines an online writing community called Wattpad and its archive of over 600,000 stories. The information in these stories is used to train support vector machines -- machine learning algorithms at their core -- allowing AI to better predict what people do in certain situations.
The researchers wrote in their study: "Over many millions of words, these mundane patterns [of people's reactions] are far more common than their dramatic counterparts. Characters in modern fiction turn on the lights after entering rooms; they react to compliments by blushing; they do not answer their phones when they are in meetings".
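Those "mundane patterns" are the whole trick: ordinary action-reaction pairs recur so often across millions of words of fiction that simple statistics over them become predictive. Augur itself trains support vector machines on the Wattpad archive; the sketch below is only a toy illustration of the underlying counting intuition, using a made-up miniature corpus (not Wattpad data or Augur's actual pipeline).

```python
from collections import Counter, defaultdict

# Toy stand-in for a fiction corpus: (activity, observed reaction) pairs.
# In Augur, these pairs would be extracted from story text at scale.
corpus = [
    ("enter room", "turn on lights"),
    ("enter room", "turn on lights"),
    ("enter room", "close door"),
    ("receive compliment", "blush"),
    ("receive compliment", "blush"),
    ("phone rings in meeting", "ignore phone"),
]

# Count how often each reaction follows each activity.
reactions = defaultdict(Counter)
for activity, reaction in corpus:
    reactions[activity][reaction] += 1

def predict_reaction(activity):
    """Return the most frequently observed reaction for an activity."""
    counts = reactions.get(activity)
    return counts.most_common(1)[0][0] if counts else None

print(predict_reaction("enter room"))          # turn on lights
print(predict_reaction("receive compliment"))  # blush
```

Because mundane pairings dominate the counts, even this naive frequency model recovers the paper's examples; Augur's SVMs generalize the same signal across vocabulary the counts alone can't cover.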
Google's Boston Dynamics has an impressive new version of its Atlas robot on display in the video below, which gives off a pretty strong AT-AT vibe. In it, you can see Atlas open doors, walk through rocky, snowy terrain, pick up and move boxes, and get up again when it gets knocked down. Pretty impressive, especially considering this failure at last year's DARPA Robotics Challenge.
As for the source of its mystical robot powers, it features articulated, sensate hands, and an articulated sensor head with stereo cameras and laser range finder.
Researchers at the Salk Institute for Biological Studies have made a pretty startling discovery, finally figuring out just how large a capacity our brain actually has in normal computing terms. With the help of GPU computing, they figured out that we can store around one petabyte, nearly 10 times more than what we originally thought.
Modeling the full function of the brain isn't easy, and even with clusters of GPU powerhouse servers it takes time to properly model the connections between neurons, the synapses. The team of researchers modeled the hippocampus of a rat with startling accuracy, identifying 26 different sizes of synapses. Size matters, apparently: the bigger the synapse, the more storage the accompanying neurons have. They found that the average amount of information one synapse can hold is around 4.7 bits, far more than previously thought.
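That 4.7-bit figure follows directly from the 26 distinguishable synapse sizes: log2(26) ≈ 4.7 bits per synapse. A quick back-of-envelope check in Python, assuming the commonly cited (approximate) figure of around 10^15 synapses in the human brain, lands in the same petabyte territory the researchers describe:

```python
import math

# 26 distinguishable synapse sizes -> bits of information per synapse
bits_per_synapse = math.log2(26)
print(f"{bits_per_synapse:.2f} bits per synapse")  # 4.70 bits per synapse

# Scale up with an assumed ~10^15 synapses in the human brain
# (a rough, commonly cited estimate; purely back-of-envelope).
total_bits = bits_per_synapse * 1e15
petabytes = total_bits / 8 / 1e15  # 1 petabyte = 8e15 bits
print(f"~{petabytes:.2f} petabytes")  # ~0.59 petabytes
```

The result is on the order of a petabyte, consistent with the Salk estimate; the exact number depends heavily on the synapse count you assume.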
The amount of information stored and transferred across those still poorly understood synapses doesn't translate directly into computing terms, however. We obviously don't store our moving memories as GIFs or even H.265-encoded MPEGs. In fact, the brain is incredibly efficient: synapses successfully transmit signals only around 10-20% of the time.
Up until now, the Hubble Space Telescope has been the one looking to the utter edges of the universe, taking photos that blow the world away each time. Well, NASA could retire Hubble before long in favor of its new WFIRST telescope.
The Wide Field Infrared Survey Telescope (or WFIRST) has a field of view 100 times larger than Hubble's, and it's designed to block the glare from individual stars, which will make NASA's job of determining the chemical makeup of exoplanets easier. NASA won't launch WFIRST until the mid-2020s, so the James Webb telescope, due to be finished in 2018, should be the champion until around 2025 or so.
Once NASA has WFIRST online, it will provide a view of space we've never had before. The agency should be better able to understand the shape of the universe and gain more insight into how dark energy and dark matter work, which could solve some very big problems and mysteries we have here on Earth.