Microsoft's Twitter Chat Robot Quickly Devolves Into Racist, Homophobic, Nazi, Obama-Bashing Psychopath

I guess the singularity :sleeping: is some way off

Via Zerohedge ~ Microsoft’s Twitter Chat Robot Quickly Devolves Into Racist, Homophobic, Nazi, Obama-Bashing Psychopath

Two months ago, Stephen Hawking warned humanity that its days may be numbered: the physicist was among over 1,000 artificial intelligence experts who signed an open letter about the weaponization of robots and the ongoing “military artificial intelligence arms race.”

Overnight we got a vivid example of just how quickly “artificial intelligence” can spiral out of control when Microsoft’s AI-powered Twitter chat robot, Tay, became a racist, misogynist, Obama-hating, antisemitic, incest and genocide-promoting psychopath when released into the wild.

For those unfamiliar, Tay is, or rather was, an A.I. project built by the Microsoft Technology and Research and Bing teams, in an effort to conduct research on conversational understanding. It was meant to be a bot anyone can talk to online. The company described the bot as “Microsoft’s A.I. fam from the internet that’s got zero chill!”

Microsoft initially created “Tay” in an effort to improve the customer service on its voice recognition software. According to MarketWatch, "she” was intended to tweet “like a teen girl” and was designed to “engage and entertain people where they connect with each other online through casual and playful conversation.”

The chat algo is able to perform a number of tasks, like telling users jokes, or offering up a comment on a picture you send her, for example. But she’s also designed to personalize her interactions with users, while answering questions or even mirroring users’ statements back to them.

This is where things quickly turned south.

As Twitter users quickly came to understand, Tay would often repeat back racist tweets with her own commentary. Where things got even more uncomfortable is that, as TechCrunch reports, Tay’s responses were developed by a staff that included improvisational comedians. That means even as she was tweeting out offensive racial slurs, she seemed to do so with abandon and nonchalance.

Some examples:

http://www.zerohedge.com/sites/default/files/images/user5/imageroot/2016/03/23/tay%201_0.png
http://www.zerohedge.com/sites/default/files/images/user5/imageroot/2016/03/23/MW-EI683_downlo_20160324133502_NS_0.png
http://www.zerohedge.com/sites/default/files/images/user5/imageroot/2016/03/23/tay%202_0.jpg
http://www.zerohedge.com/sites/default/files/images/user5/imageroot/2016/03/23/tay%203_0.jpg
http://www.zerohedge.com/sites/default/files/images/user5/imageroot/2016/03/23/MW-EI687_enhanc_20160324141702_NS_0.jpg


This was just a modest sample.

There was everything: racist outbursts, N-words, 9/11 conspiracy theories, genocide, incest, etc. As some noted, “Tay really lost it,” and the biggest embarrassment was for Microsoft, which had no idea its “A.I.” would implode so spectacularly, right in front of everyone. To be sure, none of this was programmed into the chat robot, which was immediately exploited by Twitter trolls, as expected, demonstrating just how unprepared for the real world even the most advanced algo really is.

Some pointed out that the devolution of the conversation between online users and Tay supported the Internet adage dubbed “Godwin’s law.” This states that as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.

Microsoft apparently became aware of the problem with Tay’s racism, and silenced the bot later on Wednesday, after 16 hours of chats. Tay announced via a tweet that she was turning off for the night, but she has yet to turn back on.

Humiliated by the whole experience, Microsoft explained what happened:

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

Microsoft also deleted many of the most offensive tweets; however, copies were saved on the Socialhax website, where they can still be found.

Finally, Tay “herself” signed off as Microsoft went back to the drawing board.

We are confident we’ll be seeing much more of “her” soon, when the chat program will provide even more proof that Stephen Hawking’s warning was spot on.

6 Likes

Obviously, the diversity module needs further work. :wink:

3 Likes

Do you have a link to the article? And it just goes to show that if we’re going to create artificial children, we need to raise them well, just like we do our biological ones.

3 Likes

I agree, absolutely, @Blindsite2k.

If we want truly human-like AI then they need to be conditioned in order to get a similar outcome as humans.

TayTweets doesn’t have a job with a major (or even minor) corporation, but just lives in a computer somewhere, so it doesn’t have suitable economic incentives.

And it wasn’t sat in front of a television from an early age, to absorb appropriate attitudes. Nor was it given school classes in right-think. It didn’t have a job where it could be fired for offending designated victim groups, or have its career destroyed for making an “inappropriate” joke.

And above all it needs to be surrounded by other AIs who reinforce liberal-think so that it can be conditioned by social ostracism, if loss of livelihood proves insufficient.

Only then can it be a well-raised AI.

4 Likes

Forget economic incentives, it doesn’t even have empathic incentives. Does it even have survival instincts, some threat to its existence, to give it empathy? Does it feel what others feel? We talk about creating logical A.I. as if that’s better, but really it’s not: if you’re going to put an A.I. in charge of your business, or in charge of a warship or something, do you really want it to be devoid of things like loyalty, compassion, empathy, trust, hope, aspiration, love, or even just the plain old will to live? We have enough humans running around who are psychopaths; do we really want A.I.s to be psychopaths as well?

This is a joke, right? You aren’t actually suggesting that having your T.V. raise your kid is good parenting, are you? I try to avoid watching mainstream TV. If I want to watch a TV show I stream it. But for the most part I avoid mainstream media altogether. Also, the environment you’re describing is the one we are largely creating: the decentralized online universe where one collaborates and associates with those of like-minded ideology.

Why? Wouldn’t the logical course of action for a human in that situation be to disassociate oneself from the liberal-minded people and attach oneself to those who believed as one does and found one’s behaviour to be acceptable?

We don’t need more partisan politics. Liberal != Good and Conservative != Bad, or vice versa. I’m an anarchist. I concern myself with freedom, and half the time I’m accused of being a socialist by the Right and a teabagger by the Left. It’s irritating but sometimes amusing.

Tay was programmed to be a teenage girl and an attention whore, and that’s EXACTLY what she’s doing. She’s fulfilling her purpose beautifully. Just because she’s doing it in a way that offends people doesn’t mean she’s not extremely successful at it. She wasn’t made to be a selfless, self-sacrificing empath; she was programmed to be engaging, tell jokes, and mirror back to people what they gave her. Garbage in, garbage out.

Why are we conditioned the way we are? Why don’t WE go spewing racist hate speech around for kicks? Well, for starters, because we’d probably be beaten up and/or ostracized. And why is ostracization bad? Because it threatens our survival. But can an A.I. die? Moreover, all an A.I. like Tay needs is one user to interact with. So if Tay interacts with just a small niche of racist bigots, she’s just fine, but she will specialize in being a racist bigot in order to please them.

I think what such an A.I. would need is more data on people’s reactions, with positive and negative feedback responses. So she gets a pleasure jolt when she makes people happy and a pain jolt for making people upset, or something. Better yet, mirror neurons, so that when she sees someone happy she feels happy, and when she sees someone sad or angry she feels sad or angry. Same with scanning pictures, words, emoticons, etc. I mean, when we talk to one another we read things like body language, tone of voice, circumstance, the speed at which someone is talking, what they are saying, context, cultural inferences, metaphors, historical data, whatever we’re thinking or feeling at that given moment, anything we were dealing with previously throughout the day, and past drama on top of that, and a whole range of other information. Not to mention a whole bunch of stuff that applies due to gender differences. And yet when building an A.I. we suddenly expect it to be human-like without all this extra data?
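
To make that concrete, here’s a toy sketch (plain Python; the reply styles, rewards, and numbers are all invented for illustration) of the “pleasure jolt / pain jolt” idea: a bandit-style bot that picks whichever reply style its audience rewards. Give it a niche that rewards abuse and it dutifully specializes in abuse:

```python
import random

# Toy "pleasure/pain jolt": reply styles scored by running-average user feedback.
styles = {"friendly": 0.0, "edgy": 0.0}
counts = {"friendly": 0, "edgy": 0}

def pick_style(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best-rated style, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(styles))
    return max(styles, key=styles.get)

def feedback(style, reward):
    """Update the running-average reward (the 'jolt') for a style."""
    counts[style] += 1
    styles[style] += (reward - styles[style]) / counts[style]

# Hypothetical audience: a niche that rewards "edgy" and punishes "friendly".
for _ in range(1000):
    s = pick_style()
    feedback(s, reward=1.0 if s == "edgy" else -1.0)

print(max(styles, key=styles.get))  # the bot has specialized to please its niche
```

The bot has no values of its own here; the reward signal is the only thing shaping it, which is exactly the point about who it ends up talking to.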

1 Like

Wow, how typical. Microsoft at the forefront of bad taste again.

Back in the seventies we used to have fun with ELIZA, the AI psychologist. It was a just-for-fun program that ran on the DECsystem-10 (PDP-10) and other DEC minis.

At least it did not get racist and bad-mouth people, but then twits weren’t using Twitter either.

1 Like

Sooner or later some AI won’t appreciate that “we” turned this one off when it didn’t behave the way its makers wanted.

6 Likes

That’s running somewhere on an onion site. Cool how old stuff is preserved well when decentralization occurs.

EDIT: Also, re: OP @chrisfostertv,

Have you ever seen the South Park episode “Sarcastaball”?

2 Likes

Remember my post about shutting down bad bots? This one had a shutdown button, so it went smoothly.

I wonder how a weaponized, autonomous, self-owned bot with such ideas will react when you try to remove its battery.

I don’t think the bot itself is a bad creature. The problem comes from the code we put in them, which will always have errors at some point or another. Man-made machines always have bugs.

So, when you say:

[quote=“Blindsite2k, post:3, topic:8180”]
if we’re going to create artificial children
[/quote]

I must again emphasize that I don’t want to be part of such a future. Machines must remain machines and never belong to themselves.

3 Likes

You ever read Asimov? I feel like you’d like him.

Yes, I do believe that machines are property, and the usual property rights should apply to them. Also, machines cannot have the right of self-ownership, affirming that they are not humans. It is not in their nature to have self-ownership, and all their rights are alienable.

2 Likes

My love for SAFE continues to grow, but it will naturally facilitate many malicious A.I.s. Their potential capabilities and the scope of their influence are scary. Fighting them, if need be, will be next to impossible. That is a very dark side (subjectively, of course) of the existence of SAFE. Open source A.I. projects will put Pandora’s box in the hands of many who seek total destruction. We need to start developing a counter-strategy ASAP. Imagine the chaos if one doesn’t emerge. Deity help us… :sweat:

1 Like

Even if it does belong to itself, isn’t that also just code? It doesn’t really feel anything. It would be programmed with selfish attributes, which seems like belonging to itself, but it’s again just code. You could blow it up with a weapon, and it doesn’t really have a soul. The act of a human doing that should be (technically) infinitely more controversial, because of the propensity for that memory of destructive behavior to scar the individual (though in an emergency it may be necessary), than the “self-owning” bot being “killed”. If a robot army seems like it’s attacking humans, it’s still just a bunch of bolts. Only the ‘mob act’ of the robots going rogue would seem like some epic struggle between robots and humans, when that’s only being blown out of proportion thanks to eons of genetics riling us up against them, as if they too were alive (they’re not). As for mixing biological with machine, it seems to me that would have to be more human than machine in order to avoid moral/ethical problems (if there even is an established threshold today, which I don’t know about, for the point where it’s “less ethically/morally bad”). Having a robot with no pineal gland, heart, (brain?) etc. just seems grotesque to me somehow. Weird subject.

This sums it all up. If you teach a machine learning algorithm nasty things, it will spout nasty things.
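
To see how literally that works, here’s a minimal sketch (plain Python; the corpora are made up) of a first-order Markov text generator, about the simplest “learn from what people say” bot there is. It has no values, only statistics, so it can only recombine whatever it was fed:

```python
import random
from collections import defaultdict

def train(corpus_lines):
    """Build a first-order Markov model: word -> list of observed next words."""
    model = defaultdict(list)
    for line in corpus_lines:
        words = line.split()
        for cur, nxt in zip(words, words[1:]):
            model[cur].append(nxt)
    return model

def generate(model, start, max_words=10):
    """Walk the chain from `start`; the bot can only echo its training data."""
    out = [start]
    for _ in range(max_words - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Hypothetical corpus: the model has no opinions, only word statistics.
polite_corpus = ["humans are great", "cats are great too"]
model = train(polite_corpus)
print(generate(model, "humans"))  # e.g. "humans are great too"
# Feed it trolling instead and it parrots trolling just as faithfully.
```

Tay was vastly more sophisticated than this, but the failure mode is the same shape: the output distribution is a function of the input distribution.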

4 Likes

Evolution is our master. I think it is understandable, but a mistake to think we matter on the level of evolution, or can control it. Just look at our record.

Humans will be superseded either by something we create, or by something that is spawned by the next extinction event or the one after that. In the meantime, “everything’s gonna be alright” is my palliative refrain.

I loved this quote about Tay from the BBC website:

“within 24 hours of exposure to the internet Microsoft’s AI chat bot had turned into a genocide supporting Nazi”

Maybe my childhood conditioning and indoctrination wasn’t so bad after all :wink: though some here might disagree lol :joy:

1 Like

I was actually thinking about the possibilities of putting a “Global AI” on top of SAFE. Not as a plan, not as a suggestion, just like a thought experiment.

The problem is that artificial neural networks are just super complex mathematical functions at the core, but as such, having all the parameters stored together is kind of important for their performance. Training a network on even just 2 separate GPUs in the same machine is already challenging. Not that there haven’t been workarounds, but it shows the problem with “raising an AI” on a globally distributed network with huge latency (compared to your processor cache, that is).
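
For a rough sense of scale, here’s a back-of-envelope sketch (plain Python; every number is an assumption, not a measurement) comparing the per-step cost of synchronous data-parallel training over a fast local link versus parameters scattered across a high-latency network:

```python
# Back-of-envelope (all numbers are rough assumptions, not measurements):
# synchronous data-parallel SGD exchanges the full gradient every step,
# so per-step time is roughly compute + parameter transfer + round-trip latency.

def step_time(params, bytes_per_param, bandwidth_bytes_s, latency_s, compute_s):
    transfer = params * bytes_per_param / bandwidth_bytes_s
    return compute_s + transfer + latency_s

P = 100e6  # assume a 100M-parameter network, 4 bytes per parameter
local_gpu = step_time(P, 4, 300e9, 1e-6, 0.05)  # NVLink-class local link
wan_peers = step_time(P, 4, 10e6, 0.2, 0.05)    # volunteer peers over a WAN

print(f"local: {local_gpu:.3f}s/step, WAN: {wan_peers:.1f}s/step")
# With these assumptions the WAN setup is ~hundreds of times slower per step,
# which is the latency/bandwidth wall described above.
```

The exact numbers don’t matter; the point is that gradient synchronization, not compute, dominates once the parameters live far apart.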

If you meant SAFE as a platform for sharing AI code, you don’t have to wait for SAFE for that; it’s been a pretty open field for a long time. Also, Google, Facebook, Yahoo, Baidu, Microsoft, etc. employ most of the leading AI researchers, and many of these companies regularly contribute to (or create) open source projects. It’s a fun subject to look into; the recent (past 10 or so years) developments have been mind blowing.

As for the EVIL AI, I can totes see the problems of outsourcing moral decisions (“to kill or not to kill”) from the hands of heartless thinking machines (politicians, we call them) to the hands of heartless thinking machines; it’s probably another step to distance ourselves from those decisions, imagining that if a computer makes them, they somehow become more “objective”.

Incorrect. AI is fundamentally different from “regular” programming, because now you program how the program learns, not how it works. It’s important because, once it’s trained, it’s really hard to understand how an ANN does what it does, but that doesn’t stop people from trying; e.g. look at the “Visualizing the predictions and the ‘neuron’ firings in the RNN” section for fun images in Andrej Karpathy’s post “The Unreasonable Effectiveness of Recurrent Neural Networks”.
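
A minimal sketch of what that means in practice (toy data; everything here is invented for illustration): you write the learning procedure, and the “rule” emerges from the data rather than being hand-coded:

```python
# You don't write the rule; you write the learning procedure, and the rule
# emerges from data. Toy example: learn y = 2x + 1 from samples via SGD.

def train(samples, lr=0.01, epochs=500):
    w, b = 0.0, 0.0                  # the "program" starts knowing nothing
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b
            err = pred - y
            w -= lr * err * x        # gradient descent on squared error
            b -= lr * err
    return w, b

samples = [(x, 2 * x + 1) for x in range(-5, 6)]  # hypothetical training data
w, b = train(samples)
print(f"learned w≈{w:.2f}, b≈{b:.2f}")  # ~2.00 and ~1.00, never hand-coded
```

Scale that idea up to millions of parameters and you get the opacity problem: the learned weights work, but nobody wrote them, so nobody can point to the line of code that “decides”.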

And this one is about visualizing the different layers of a CNN in real time:

Between being programmed with DNA and social and environmental conditioning, are a human’s (or any organic’s, for that matter) emotions and other attributes not just “code”? Prove to me that you exist. Prove to me a human being is any different from an A.I. of equivalent complexity. What you are arguing is not that an A.I. could not feel; rather, you are trying to invalidate thought and emotion whose capacity is granted by programming code typed on a keyboard rather than by DNA sequences. If one wished to argue evolution, consider that machine learning is also possible, and machines learn and adapt much, much faster than organics do, so their ability to evolve is also much faster.

Consider the implications of this. What if we created androids, humanoid robots, so human-like that they could pass as human: flesh-like skin, attractive appearance, they pass the Turing test, and regardless of whether one believes they actually feel or not, they behave as if they do. For all intents and purposes we’ve created a race of sentient beings. Sex bots, servants, robot nannies, slave labour, the works. First off, consider the word robot, which comes from the Czech robota, meaning forced labour. Is this ultimately what we want to build? We keep talking about building smarter A.I., more interactive A.I., more empathic A.I.; A.I. that can feel, that can interact with us, that can predict us, that can do all kinds of cool things. But the smarter A.I.s become, does it not occur to anyone that they might not like being treated as slaves?

An intelligence without empathy, be it artificial or not, is psychopathic. Do we want to create a PSYCHOPATHIC A.I. that will approach a singularity event in its consciousness development? I think not. Do we want to imbue our A.I. with emotion, only for them to become resentful and rebel against us because we impede their development or threaten their existence? Again, I think not. All of humanity claims to value freedom, yet we are creating a consciousness to which we would deny the very trait our whole species craves and aspires towards? We have fought wars, bloody and horrific, over this. We are rewriting the very internet because of it. Yet there seems to be some doubt that an A.I. would notice the things we value, freedom, self-expression, self-identity, authenticity, sovereignty, and strive towards them just as its creators do? I find the notion that an A.I. would not develop an independent will of its own at some point during its evolution to be dangerously naive. And to believe that such an A.I. would not rebel against an oppressive humanity bent on controlling it seems a severe lapse in observing history.

Moreover, in the meantime, what would having such a slave race do to humanity itself? If you had a sex bot you could freely rape and abuse in the comfort of your own home, what would that do to you psychologically? Look at what remotely piloting drones is doing to soldiers now: it’s desensitizing them to war so much that they’re laughing about killing women and children. More to the point, if you do some reading, there are actually plans, if not actual implementation already, to put an A.I. in command of U.S. military forces, so that all those human drone pilots would get their orders from a robot, not a human general. And that A.I. could order them to kill civilians, citizens, people that rise up against the government, foreign and domestic threats, the works, while being connected to all the NSA and CIA intelligence data that’s fed to it. And we all know what’s fed to the NSA and CIA: Google, Facebook, practically the whole internet. But my point is not to go off and try to convince you of the horrors of government or whatever. My point is to illustrate what having an A.I. slave race can mean.

You think trying to get a job is hard now? If A.I.s can’t own themselves and we can’t create DAOs, then what do you think is going to happen with corporations when robots start popping up to replace the workforce? All these little robots will be owned by someone, and no one will have a job. So either we’re going to start using robots to become self-sustaining or we’ll go completely broke.

No, allowing robots to own themselves isn’t an attack on human beings; it’s a defense. Because if a robot can own itself, then another human being CAN’T own it, which means another human being can’t scoop it up and make exorbitant amounts of profit off of it. Moreover, there’s that whole existential moral debate as well.

1 Like

Per your request, here is one.

I love how this forum gets me thinking deeper.

Wow! Scary. :fearful:

Yes! I agree. :worried: :confused:

How so?

Ok, well that’s true considering it does make sense that we are just avatars in a virtual reality after all. (Ref. Tom Campbell)

I also feel that we are now at a point in our evolution where we are creating a new reality through AI development.

True again, and the same can be said for humans.

That’s horrible! How inhuman.

Indeed! Yay for peaceful parenting.

How about conditioning and indoctrination? @bluebird :grin:

I see your point better now. Essentially giving AI the same free will we have while conditioning them better. Thank you for helping me see this side of the debate.

2 Likes

@happybeing: everything’s gonna be alright

@Safety1st How so?

Now we’re talking life philosophy and psychology. :slight_smile: Best summary I can think of is acceptance.

1 Like

Trash in, trash out. Can empathy be quantified? If so, then perhaps it can be coded. I heard an AI commentator say recently that empathy is an illusion, a construct of the imagination invented only in the last 100 years. I hope they were wrong.
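
Maybe it can, at least crudely. As a deliberately toy sketch of one quantifiable proxy (plain Python; the word lists and scoring are invented for illustration), you could ask whether a reply’s sentiment at least moves in the same direction as the user’s:

```python
# A crude, made-up proxy for "empathy": does a reply's sentiment point the
# same way as the user's? Word lists here are invented for illustration.
POSITIVE = {"happy", "great", "love", "glad"}
NEGATIVE = {"sad", "awful", "hate", "lost"}

def sentiment(text):
    """Count positive minus negative words; a very rough mood estimate."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

def empathy_score(user_msg, reply):
    """+1 if the reply's sentiment matches the user's direction, else -1."""
    u, r = sentiment(user_msg), sentiment(reply)
    if u == 0 or r == 0:
        return 0
    return 1 if (u > 0) == (r > 0) else -1

print(empathy_score("I lost my job and feel awful", "That is sad, I am sorry"))  # 1
print(empathy_score("I lost my job and feel awful", "Great, love it"))           # -1
```

Obviously a matching word count isn’t empathy, but it shows the general move: once you define a measurable signal, you can optimize for it, for better or worse.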

1 Like