Reports of an AI drone that ‘killed’ its operator are pure fiction

Some AI stories are so bad they could make a robot facepalm

News of an AI-controlled drone "killing" its operator sped around the world this week. In a story that could be ripped from a sci-fi thriller, the hyper-motivated AI had been trained to destroy surface-to-air missiles only with approval from a human overseer – and when denied that approval, it turned on its handler.

Only, it's no surprise that story sounds fictional – because it is. The story emerged from a report by the Royal Aeronautical Society describing a presentation by US Air Force (USAF) colonel Tucker Hamilton at a recent conference. That report noted the incident was only a simulation, in which there was no real drone and no real risk to any human – a fact missed by many attention-grabbing headlines.

Later, it emerged that even the simulation hadn't taken place, with the USAF issuing a denial and the original report updated to clarify that Hamilton "mis-spoke". The apocalyptic scenario was nothing but a hypothetical thought experiment.

"The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. It appears the colonel's comments were taken out of context and were meant to be anecdotal," a USAF spokesperson told Insider. The USAF didn't respond to New Scientist's request for an interview before publication.

This story is just the latest in a string of dramatic tales told about AI that has at points neared hysteria. In March, Time magazine ran a comment piece by researcher Eliezer Yudkowsky in which he said that the most likely result of building a superhumanly smart AI is that "literally everyone on Earth will die". Elon Musk said in April that AI has the potential to destroy civilisation, while a recent letter from AI researchers said the risk of extinction is so high that dealing with it should be a priority alongside pandemics and nuclear war.

Why do these narratives gain so much traction, and why are we so keen to believe them? "The notion of AI as an existential threat is being promulgated by AI experts, which lends authority to it," says Joshua Hart at Union College in New York – though it's worth noting that not all AI researchers share this view.

Beth Singler at the University of Zurich in Switzerland says that the media has an obvious incentive to publish such claims: "fear breeds clicks and shares". But she says that people also have an innate desire to tell and hear scary stories. "AI seems initially to be science fiction, but it is also a horror story that we want to whisper around the campfire, and horror stories are thrilling and engaging."

One clear factor in the spread of these stories is a lack of understanding around AI. Despite many people having used ChatGPT to write a limerick or Midjourney to conjure up an image, few know how AI works under the hood. And while AI has been a familiar concept for decades, the reality is that the current crop of advanced models display capabilities that surprise experts, let alone laypeople.

"AI is very non-transparent to the public," says Singler. "Wider education about the limitations of AI might help, but our love for apocalyptic horror stories might still win through."
