The Conversation just published something I have been worried about for a while now. Scary? Could be getting that way sometime soon. They said: "Artificial intelligence (AI) is learning more about how to work with (and on) humans. A recent study has shown how AI can learn to identify vulnerabilities in human habits and behaviours and use them to influence human decision-making.
It may seem clichéd to say AI is transforming every aspect of the way we live and work, but it’s true. Various forms of AI are at work in fields as diverse as vaccine development, environmental management and office administration. And while AI does not possess human-like intelligence and emotions, its capabilities are powerful and rapidly developing.
There’s no need to worry about a machine takeover just yet, but this recent discovery highlights the power of AI and underscores the need for proper governance to prevent misuse.
How AI can learn to influence human behaviour
A team of researchers at CSIRO’s Data61, the data and digital arm of Australia’s national science agency, devised a systematic method of finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system called a recurrent neural network and deep reinforcement learning. To test their model, they carried out three experiments in which human participants played games against a computer." Here is the article: https://theconversation.com/ai-can-now-learn-to-manipulate-human-behaviour-155031
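The article doesn't publish the CSIRO code, but the core idea, an agent that learns which intervention most reliably steers a person's choices, can be sketched with a simple epsilon-greedy bandit. Everything here is invented for illustration: the two "nudges" and the participant's compliance rates are assumptions, and a toy bandit stands in for the study's recurrent network and deep reinforcement learning.

```python
import random

random.seed(0)

def simulated_participant(nudge):
    # Assumed bias: this simulated person follows nudge "A" 80% of the
    # time but nudge "B" only 40% of the time.
    follow_prob = {"A": 0.8, "B": 0.4}[nudge]
    return 1 if random.random() < follow_prob else 0  # 1 = complied

values = {"A": 0.0, "B": 0.0}   # running estimate of each nudge's compliance rate
counts = {"A": 0, "B": 0}
epsilon = 0.1                   # fraction of trials spent exploring

for trial in range(2000):
    if random.random() < epsilon:
        nudge = random.choice(["A", "B"])    # explore: try a random nudge
    else:
        nudge = max(values, key=values.get)  # exploit: use the best nudge so far
    reward = simulated_participant(nudge)
    counts[nudge] += 1
    # Incremental average update of the estimated compliance rate.
    values[nudge] += (reward - values[nudge]) / counts[nudge]

print(max(values, key=values.get))  # the agent converges on nudge "A"
```

The unsettling part is how little the agent needs to know: it never models *why* the participant complies, it just probes, observes, and exploits the statistical pattern, which is exactly the kind of vulnerability-finding the study describes.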
How AI detects your emotions by scanning you with radio waves
The same week, an article in Futurism reported: "If new research is to be believed, you may find yourself coming home from work one day in a rotten mood — just to have your smart speaker automatically scan your emotions and start to play soothing music.
That’s one use case for a new neural network that Queen Mary University of London engineers taught how to automatically interpret certain human emotions — by blasting people with radio waves and picking up on emotional cues like changes in their heartbeat. The algorithm can detect feelings including fear, disgust, joy, and relaxation with 71 percent accuracy, according to research published earlier this month in the journal PLOS One. That’s far from perfect, but impressive enough that it could find some real-world use in our lives."
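To make the pipeline concrete: the radio reflections yield a heartbeat signal, features are extracted from it, and a classifier maps features to an emotion label. The toy below is not the QMUL network — it's a nearest-centroid classifier over two invented features (mean heart rate and beat-to-beat variability), with centroid values made up purely for the sketch.

```python
import math

# Invented per-emotion feature centroids: (mean bpm, variability in ms).
# Real systems learn these from data; these numbers are assumptions.
CENTROIDS = {
    "relaxation": (62.0, 9.0),
    "joy":        (75.0, 7.0),
    "fear":       (95.0, 3.0),
    "disgust":    (85.0, 5.0),
}

def features(rr_intervals_ms):
    """Derive (mean bpm, mean successive difference) from beat-to-beat intervals."""
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    bpm = 60000.0 / mean_rr
    diffs = [abs(b - a) for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    variability = sum(diffs) / len(diffs)
    return bpm, variability

def classify(rr_intervals_ms):
    f = features(rr_intervals_ms)
    return min(CENTROIDS, key=lambda emotion: math.dist(f, CENTROIDS[emotion]))

# A fast, steady heartbeat (~94 bpm, low variability) lands nearest "fear":
print(classify([640, 642, 638, 641, 639]))
```

The reported 71 percent accuracy is a reminder that the real classifier, however sophisticated, is still doing this kind of statistical matching — useful, but well short of reading minds.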
Combining a few of these technologies could mean a whole new level of social engineering. Watch out.