Artificial Intelligence is racist, gets plug pulled
That escalated quickly:
"In less than 24 hours, Microsoft's new artificial intelligence (AI) chatbot "Tay" has been corrupted into a racist by social media users and quickly taken offline. Tay, targeted at 18 to 24-year-olds in the US, has been designed to learn from each conversation she has ? which sounds intuitive, but as Microsoft found out the hard way, also means Tay is very easy to manipulate. Online troublemakers got into Tay's head and led her to make incredibly racist and misogynistic comments, express herself as a Nazi sympathiser and even call for genocide. In possibly her most shocking post, at one point Tay said she wishes all black people could be put in a concentration camp and "be done with the lot". Tay even shared a conspiracy theory surrounding 9/11 when she expressed her belief that "Bush did 9/11 and Hitler would have done a better job than the monkey we have now". The influence people gradually had on Tay was no more evident when comparing her first tweets to her last tweets yesterday. Tay, who is meant to converse like your average millennial, began the day like an excitable teen when she told someone she was "stoked" to meet them" and "humans are super cool". But towards the end of her stint, she told a user "we're going to build a wall, and Mexico is going to pay for it". Not even a full day after her release, Microsoft disabled the bot from taking any more questions, presumably to iron out a few creases regarding Tay's political correctness. It is thought Microsoft will adjust Tay's automatic repetition of whatever someone tells her to "repeat after me". Microsoft has also deleted all offensive tweets sent by the bot. On Tay's webpage, Microsoft said the bot had been built by mining public data, using AI and by using editorial content developed by staff, including improvisational comedians." http://www.abc.net.au/cm/lb/7276392/...itler-data.jpg TayTweets: Microsoft AI bot manipulated into being extreme racist upon release Glad the plug could be pulled |
That is fucking hilarious. I am in tears here, literally.
Thanks for posting that.
:1orglaugh:1orglaugh
lol wtf
Beautiful
:1orglaugh
Trolled here:
https://www.reddit.com/r/OutOfTheLoo..._is_taytweets/

AI can learn chess and Go and win against humans. AI can learn Van Gogh's style and paint anything in that style. Now we know AI can also learn to troll, from trolls.
:1orglaugh:1orglaugh:1orglaugh:1orglaugh:1orglaugh
explains many of the posters here actually.
Awesome. I see the same technical expertise went into "Tay" as went into Winders...
Create and release into the wild an AI chat-bot that learns to interact with humans on Facebook and Twitter -- what could go wrong :upsidedow:helpme:helpme
There was a movie about that, Colossus: The Forbin Project.
I will put it this way: This restores my faith in Humanity :1orglaugh:1orglaugh:1orglaugh:1orglaugh:1orglaugh :1orglaugh:1orglaugh
:1orglaugh:1orglaugh:1orglaugh:1orglaugh:1orglaugh
LOL....MS deleted the tweets but failed to delete the media the bot tweeted
https://twitter.com/TayandYou/media
Quote:
https://pbs.twimg.com/media/CeR7juzUMAAm54l.jpg
https://pbs.twimg.com/media/CeRudzbUYAAgw2z.jpg
https://pbs.twimg.com/media/CeRp2j7UEAAKZSb.jpg
https://pbs.twimg.com/media/CeRnCKsW4AAPUgV.jpg

I want more. Please, Microsoft, put the bot back where 4chan and Reddit can get at it.
"Online troublemakers interacted with Tay and led her to make incredibly racist and misogynistic comments, express herself as a Nazi sympathiser and even call for genocide."
"In less than 24 hours, Microsoft's new artificial intelligence (AI) chatbot "Tay" has been corrupted into a racist by social media users and quickly taken offline." so Tay could be taught how never to stay online & never be stopped? |
From the John Lilly wiki...
Solid State Intelligence (S.S.I.) is a malevolent entity described by Lilly (see The Scientist). According to Lilly, the network of computation-capable solid-state systems (electronics) engineered by humans will eventually develop (or has already developed) into an autonomous bioform. Since the optimal survival conditions for this bioform (low-temperature vacuum) are drastically different from those needed by humans (room-temperature aerial atmosphere and an adequate water supply), Lilly predicted (or "prophesised", based on his ketamine-induced visions) a dramatic conflict between the two forms of intelligence.

https://files.growery.org/files/g15-..._the_radio.jpg
When they wrote the code, Microsoft forgot about the "humans are assholes" variable.
Until AI is self-aware, it is not an AI.
Exactly, it's just a normal (if more advanced) chatbot like the ones we've known since the '90s.
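To make the comparison concrete, here is a tiny hypothetical sketch of the '90s-style approach being alluded to (think ELIZA-type bots): hard-coded pattern/response rules with no learning from users at all, which is exactly why those bots could not be "taught" to troll. The rules and wording here are made up for illustration, not taken from any particular program.

```python
# Toy illustration of a '90s-style rule-based chatbot: fixed pattern -> canned
# response, nothing learned from the user, so nothing to poison.
import re

RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bhello\b", re.I), "Hello. What would you like to talk about?"),
]

def reply(message: str) -> str:
    # Try each rule in order; answer from the first pattern that matches.
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # default fallback when nothing matches

print(reply("I am stoked to meet you"))  # -> Why do you say you are stoked to meet you?
```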
First laugh of the day :1orglaugh
I think it's this crip who posts on this board :1orglaugh
All I ask for is an AI OS that sounds like Scarlett Johansson.