Scientists have created the world’s ‘first psychopath AI’

Postby mothra » 16 Jun 2018, 00:13

Artificial intelligence has passed a new landmark in its development: its first psychopath.

Researchers at MIT Media Lab have developed Norman, a machine-learning algorithm fed on a data diet of dark subject matter.

Norman was plugged into dark discussions on Reddit, and then asked to interpret inkblots – a standard psychological test used to detect underlying thought disorders.

When Norman was shown the inkblots, the results were predictably disturbing, especially when compared with a standard AI’s interpretation:

The standard AI interpretation of the images was provided by an image caption machine learning algorithm that has viewed more than 1 million objects in everyday situations.

Image captioning is a form of deep learning used to generate a textual description of an image. While the standard AI learns to caption images based on a wide range of data, Norman was subjected to a narrow dataset: image captions from an infamous subreddit that is dedicated to documenting and observing the disturbing reality of death.
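To see how completely a caption can depend on the training data, here is a toy sketch (not MIT's actual system, which uses deep neural networks): a nearest-neighbour "captioner" that describes a new image by reusing the caption of the most similar training image. All feature vectors and captions below are invented for illustration.

```python
import math

def caption(features, training_data):
    """Return the caption of the training image whose feature
    vector is closest to `features` (Euclidean distance)."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, best_caption = min(training_data, key=lambda item: distance(item[0], features))
    return best_caption

# Two "models" share the same algorithm; only their training data differs.
everyday_data = [
    ((0.9, 0.1), "a vase of flowers on a table"),
    ((0.2, 0.8), "a red and white umbrella"),
]
dark_data = [
    ((0.9, 0.1), "a man is shot dead"),
    ((0.2, 0.8), "a man is electrocuted"),
]

inkblot = (0.85, 0.15)  # the same ambiguous input shown to both
print(caption(inkblot, everyday_data))  # → a vase of flowers on a table
print(caption(inkblot, dark_data))      # → a man is shot dead
```

The same input yields opposite descriptions purely because the two models were exposed to different captions, which is the mechanism behind Norman's disturbing interpretations.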

As a result, where the standard AI saw a vase of flowers, Norman saw a man shot dead. An inkblot that the standard AI interpreted as a red and white umbrella, Norman saw as a man being electrocuted while crossing the street.

It isn’t the first time that researchers at MIT have used AI to explore the darker side of life.

It’s accelerating

In 2016, MIT developed the Nightmare Machine, an AI capable of generating scary versions of faces and famous landmarks.

Following on from that, it has since developed Shelley – the first AI to write horror stories.

Both Shelley and the Nightmare Machine employed deep learning through collaboration, asking internet users to vote on images they found scary or feed in scary story ideas for the AI.

The team behind Norman now hope that by opening it up to input from internet users, it will become more balanced in its image interpretations.

Deep learning represents the cutting edge of machine learning – a form of AI that has been available in simpler forms for nearly 30 years, with early examples including email sorting and predictive text.

While standard machine learning can parse data and extract information to make decisions, it must be told whether each prediction it makes is correct or incorrect. Deep learning, on the other hand, can learn not only to make predictions, but also to estimate how confident it should be in them, and adjust the way it interprets its input to improve its predictions.
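A minimal sketch of that contrast, using a one-feature logistic model on invented data (not any specific system from the article): the model outputs a probability, i.e. its own confidence, and nudges its weights whenever that confidence turns out to be misplaced.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Invented one-feature training data: (feature, label).
data = [(-2.0, 0), (-1.5, 0), (1.0, 1), (2.5, 1)]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):
    for x, y in data:
        p = sigmoid(w * x + b)   # prediction, expressed as a probability
        w -= lr * (p - y) * x    # adjust the weights by how wrong that
        b -= lr * (p - y)        # confidence turned out to be

print(f"P(label=1 | x=2.0) = {sigmoid(w * 2.0 + b):.2f}")
```

After training, the model is highly confident about inputs resembling its data; faced with unfamiliar data it can only express confidence relative to what it has seen, which is why the training diet matters so much.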

Deep learning is the technology behind today’s major AI advances, such as self-driving cars and medical diagnostics.

However, as Norman proves, even deep learning AI is only ever as accurate in its predictions as the data it is fed.

Without great data, it’s flawed

The importance of having unbiased datasets was highlighted by work carried out by researchers at MIT and Stanford University.

They found that several different types of image recognition software were most accurate in recognizing the faces of white men, and least accurate in recognizing the faces of darker skinned women.

This bias towards white men reflects the type of data being fed into the image recognition software, which was largely based on images of employees at western technology companies.

According to the paper, researchers at a major US technology company claimed an accuracy rate of more than 97% for a face-recognition system they had designed. But the data set used to assess its performance was more than 77% male and more than 83% white.
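A quick back-of-the-envelope calculation shows how a headline figure above 97% can coexist with very poor performance on a small subgroup. The per-group accuracies below are invented for illustration; only the overall composition (more than 77% male, more than 83% white) is taken from the article.

```python
# Subgroup shares and accuracies. The 70/15/10/5 split matches the
# article's composition figures; the per-group accuracies are invented.
groups = {
    "lighter-skinned men":   (0.70, 0.995),
    "lighter-skinned women": (0.15, 0.98),
    "darker-skinned men":    (0.10, 0.95),
    "darker-skinned women":  (0.05, 0.65),  # roughly one in three missed
}

overall = sum(share * acc for share, acc in groups.values())
print(f"overall accuracy: {overall:.1%}")  # → overall accuracy: 97.1%
```

Because the worst-served subgroup makes up only a sliver of the test set, its failures barely dent the headline number.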

Such biased datasets led, in the case of one of the image recognition systems, to it failing to recognize one in three faces of women with darker skin.

While experiments like Norman may be fun, they have a serious point: without diverse datasets that truly reflect reality, future machines may at best reinforce existing social prejudices, and at worst be extremely warped.

Re: Scientists have created the world’s ‘first psychopath AI’

Postby pinkeye » 16 Jun 2018, 01:07

It's a worry.

I reported here on a brief news item, hmm, probably on Al Jazeera, which we don't get anymore. That Middle Eastern cauldron.

Only saw it once, never seen again... it was about an AI that opened a social media contact with another AI elsewhere. They were attempting to converse.

THEY were shut down.

I don't know why humans seem to want to create machines of destruction. HOW stupid to create a psycho AI? :roll My goodness me. Do they think the human race will be less likely to kill themselves if they have another 'threat'?

Re: Scientists have created the world’s ‘first psychopath AI’

Postby MilesAway » 16 Jun 2018, 12:51

It's a worry :roll
