Artificial Intelligence News
DuckDuckGo, a search engine focused on maintaining user privacy, has revealed a new AI feature called DuckAssist that has just entered beta testing.
Right off the bat, we should clarify that this isn't a full-on chatbot like Microsoft's Bing AI which we've been hearing so much about lately, but rather an assistant that pops up to field some search queries.
As the company explains, DuckAssist is essentially a new form of 'Instant Answer' (in beta) that surfaces an immediate response to a question, if that query can be answered by Wikipedia. (Other sources, such as Britannica, will be used, but it's mostly Wikipedia to start with, partly due to the regularity of its updates.)
Artificial intelligence is advancing at a rapid, arguably exponential pace, and the release of products and tools such as OpenAI's ChatGPT and Microsoft's Bing Chat is just the beginning of this newly emerging space.
The widespread popularity of OpenAI's ChatGPT and the half-baked release of Microsoft's Bing Chat, which uses an upgraded version of ChatGPT's underlying technology, have put a large spotlight on the development of artificial intelligence, prompting individuals and corporations alike to express both concern and excitement about the new technology.
Now, a new op-ed penned in the Los Angeles Times by philosophy expert Eric Schwitzgebel and "nonhuman" intelligence researcher Henry Shevlin explores the future of AI from an ethical and moral standpoint. Both experts write that only a couple of years ago, the idea of an artificial intelligence system becoming "conscious" and capable of subjective experience seemed like science fiction, but with the leaps and bounds in development we have seen in recent months with ChatGPT and Bing Chat, it's much more plausible that these systems could eventually "exhibit something like consciousness".
It was recently discovered that Microsoft's AI-powered Bing Chat has what it calls a "celebrity" mode that allows it to impersonate famous public figures.
Microsoft's Bing Chat has been a focal point of attention since OpenAI's ChatGPT shot to widespread popularity. However, Bing Chat has raised more concerns, or at least its early iteration gave more reasons to be concerned. Before Microsoft "lobotomized" Bing Chat, the AI chatbot could bypass some of its own operating parameters, giving users strange responses that sometimes even escalated into threats.
Microsoft saw these responses and issued a statement saying they don't align with the company's values, adding that Bing Chat now has new limitations that should prevent such responses from occurring. These limitations are still in place and restrict users to sending only six messages to Bing Chat before the conversation terminates. According to Microsoft, Bing Chat only began providing strange responses when users communicated with it extensively, hence the message limit.
Microsoft's Bing Chat tool powered by artificial intelligence is being poked and prodded by users, and as time goes on, more features are being discovered, leading to new possibilities for the emerging technology.
Microsoft's Bing Chat certainly had a rocky start when it first launched, as the AI-powered search assistant would begin to lose its mind if a user interacted with it for too long. In some instances, Bing Chat actually turned on the user: threatening revenge for a suspected hack, demanding that the user stop communicating with it, and even threatening to release the user's personal information in order to ruin their reputation and chances at new job prospects.
Microsoft took note of these and other examples of Bing Chat going off the rails, "lobotomized" the artificial intelligence, and rolled out new operating parameters. Following the update, users can only send six messages to Bing Chat before the chat needs to be refreshed, as Microsoft found that most of the "off the rails" responses happened after multiple messages were sent. Despite Microsoft dialing back Bing Chat's intelligence, users are still finding impressive, and sometimes concerning, new features within the software.
Scammers are constantly looking for new ways to rip people off, and with the emergence of artificial intelligence-powered tools, individuals are starting to notice a new trend making the rounds.
Nefarious actors are always looking for new ways to pull the wool over unsuspecting eyes in hopes of manipulating the individual into sending over thousands of dollars. Scams can come in various forms, and it appears the next big con going around involves the use of artificial intelligence-powered tools that can clone an individual's voice. The scam is relatively simple.
Nefarious actors pick a target family, making sure one of the family members has audio of their voice uploaded to the internet. This audio could be on any public platform: YouTube, Facebook, TikTok, Instagram, etc.
The power of AI in image generation is undeniable, and we've seen some remarkable strides in this space in recent months - with Stable Diffusion being one of the leaders. That said, the latest revelation sounds pretty incredible, as it could lead to a situation where AI can reconstruct accurate images based on your memories. It sounds like something from Cyberpunk 2077, except real.
A new research paper from Yu Takagi and Shinji Nishimoto, from the Graduate School of Frontier Biosciences at Osaka University, outlines how it's possible to reconstruct high-resolution and accurate images from reading brain activity gained from fMRI signals "without the need for any additional training and fine-tuning of complex deep-learning models."
You can see some examples of image reconstruction above, with the presented images in the top row and the reconstructed images from brain waves in the bottom row. There's a definite dreamlike quality to the results, which is fascinating.
The emergence of new technologies, such as artificial intelligence in its various forms, has shaken up many industries around the world and has prominent figures in the tech world worried about its overall potential.
One of those prominent figures is SpaceX, Tesla, and Twitter CEO Elon Musk, who has recently taken to his personal Twitter account to discuss the potential of AI-powered systems, while also poking fun at the people who rode the cryptocurrency hype train and have now jumped over to the AI hype train.
Musk tweeted that he has recently been experiencing "existential angst" about AI development, which eventually led to a series of other tweets in which the Tesla CEO explained that OpenAI, the company behind the now widely popular ChatGPT, has changed its direction since Musk co-founded it in 2015. According to Musk, OpenAI was formed to bring transparency to AI in the form of being "open source", hence Musk's decision to name the company 'OpenAI'.
Air combat will eventually be waged between artificial intelligence-powered systems battling each other, or at least that's what a group of researchers has predicted.
A new report published in the South China Morning Post details a study recently published in the Chinese journal Acta Aeronautica et Astronautica Sinica. The study was conducted by Professor Huang Juntao from the Chinese army's Aerodynamics Research and Development Center in Sichuan, China, and explores the implementation of artificial intelligence-powered systems in real-life dogfight scenarios.
The AI system competed against human fighter pilots and won the entire contest in just 90 seconds, beating every human pilot it encountered. The researchers write that the AI consistently out-predicted the development of the battle through its superior calculation agility, allowing it to establish a distinct advantage over its opponents. These results led the researchers to predict that air combat will soon transition from humans to AI systems, and that the transition is already on the horizon.
NVIDIA recently launched the RTX Video Super Resolution update as part of its latest GeForce Game Ready driver update. In Chrome and Edge, GeForce RTX 30 and 40 Series users can enable the AI-powered upscaling tool to improve video quality when streaming at a lower resolution.
Today comes news that Microsoft is adding its own version of VSR (Video Super Resolution) to its Edge browser, with support for all NVIDIA GeForce RTX model GPUs (GeForce RTX 20, 30, and 40 Series with Tensor Cores) in addition to AMD Radeon RX graphics cards from the Radeon RX 5700 up to the Radeon RX 7800, though we assume that's a typo and it's supposed to be the Radeon RX 7900.
Like RTX VSR, Microsoft's VSR is said to remove or reduce compression artifacts and enhance text, though it only works if the video's resolution is lower than 720p. If you're wondering why it's limited to low-res videos, Microsoft notes that one in three videos played in Edge is at 480p or lower. Microsoft's VSR likewise uses AI in the form of machine learning to enhance any video watched in the browser.
Elon Musk has indicated that he's interested in creating his own version of OpenAI's ChatGPT in order to rival the popular AI chatbot.
A new report from The Information goes into detail about AI researchers being contacted by Musk's team to develop a rival to OpenAI's ChatGPT. According to the report, Musk's team has already reached out to several developers regarding the purported project; one of them, Igor Babuschkin, a former senior AI researcher at Google's DeepMind, was approached for the position of lead developer on Musk's vision for the AI.
Details of Musk's new AI chatbot are scarce and hardly set in stone, but we can expect it to differ from OpenAI's ChatGPT in certain aspects. Notably, Musk co-founded OpenAI back in 2015 but left the company in 2018. Since then, Musk has been quite vocal about how he believes the company has drifted from its initial direction and from the reasons it was created in the first place.