An AI researcher trained a chatbot on 4chan, turning it into a veritable hate speech machine: in 24 hours, nine instances of the bot running on 4chan posted 15,000 times

A prankster researcher trained an AI chatbot on more than 134 million posts from the infamous Internet forum 4chan, then set it loose on the site, where it posted until it was banned.

Yannick Kilcher, an artificial intelligence researcher who publishes some of his work on YouTube, called his creation GPT-4chan and described it as “the worst AI ever”. He trained GPT-J 6B, an open-source language model, on a dataset containing 3.5 years of posts from the 4chan imageboard. Kilcher then built a chatbot that took 4chan threads as input and generated text output, automatically posting replies in numerous threads.
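To make the moving parts concrete, here is a minimal sketch in Python of the inference side of such a bot, assuming the Hugging Face transformers library. The model ID below is the open-source GPT-J base model, not Kilcher’s fine-tuned weights (which are no longer downloadable), and the prompt handling is an illustration rather than his actual pipeline.

# Minimal sketch: generating a reply to a thread with a causal language
# model via Hugging Face transformers. The GPT-J base model stands in
# for the fine-tuned GPT-4chan; this is illustrative, not Kilcher's code.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6b"  # the base model Kilcher fine-tuned

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def generate_reply(thread_text: str, max_new_tokens: int = 80) -> str:
    """Treat the thread so far as the prompt and sample a continuation."""
    inputs = tokenizer(thread_text, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,  # sample rather than greedily decode,
        top_p=0.95,      # so replies vary from run to run
        pad_token_id=tokenizer.eos_token_id,
    )
    # Drop the prompt tokens and return only the newly generated text.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)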

4chan is sadly famous for its toxicity: racist and misogynistic messages, among others, all in the name of freedom of expression. Even so, members quickly noticed that one account was posting suspiciously often and speculated that it was a bot.

While 4chan usually requires users to fill out a captcha to prove they are human, Kilcher was able to bypass the captchas by buying a 4chan Pass, a $20 premium subscription. A 4chan Pass also allows posting through a proxy server, which is not normally permitted. Kilcher’s bot thus appeared to post from the Seychelles, a small island nation off the coast of East Africa.

Before being banned, GPT-4chan behaved like an ordinary 4chan user: it hurled insults and engaged with conspiracy theories.

“The model was good, in a terrible way,” Kilcher said in a YouTube video describing the project.

Ethicists and AI researchers have expressed concern

After Kilcher published his video and posted a copy of the program on Hugging Face, a sort of GitHub for AI, ethicists and researchers expressed concern.

The bot was surprisingly effective, replicating the tone and feel of 4chan posts.

According to Kilcher’s video, he activated nine instances of the bot and allowed them to post on /pol/ for 24 hours. In that time, the bots posted about 15,000 times, which represents more than 10% of all the posts made on the Politically Incorrect board that day, Kilcher said in his video about the project.
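For scale, that works out to a sustained rate of a bit more than one post per minute for each instance. A quick back-of-the-envelope check (the totals come from Kilcher’s video; the even split across bots is a simplifying assumption):

# Back-of-the-envelope check of the posting rate reported in the video.
total_posts = 15_000  # posts by all bot instances over the run
instances = 9         # concurrent copies of the bot
hours = 24            # duration of the run

per_bot_per_hour = total_posts / instances / hours
print(f"{per_bot_per_hour:.0f} posts per bot per hour")  # -> 69
# About 69 posts per hour, i.e. a bit more than one per minute, per bot.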

Artificial intelligence researchers did not see Kilcher’s video as just a YouTube prank. To them, it was an unethical experiment using AI. “This experiment would never pass a human research ethics board,” said Lauren Oakden-Rayner, director of medical imaging research at the Royal Adelaide Hospital and a senior researcher at the Australian Institute for Machine Learning, in a Twitter thread.

“Open science and software are wonderful principles, but they must be weighed against potential harm,” she said. “Medical research has a strong ethical culture because we have an awful history of causing harm to people, usually from powerless groups… He performed human experiments without informing users, without consent or oversight. This breaches every principle of human research ethics.”

Kilcher countered on Twitter that he is not an academic: “I’m a YouTuber and this is a prank,” he said. “And anyway, my bots, if anything, produced by far the tamest, most timid content you’ll find on 4chan. I limited the time and the number of posts, and I’m not handing out the bot code itself.”

As he did on Twitter, he pushed back on the idea that the bot had done, or would do, any harm: “All I hear are vague blanket statements about ‘harm,’ but no actual instance of it,” he said. “It’s like a magic word people say, but nothing more.”

4chan’s environment is so toxic, Kilcher argued, that the messages posted by his bot would have no impact. “No one on 4chan was hurt by this,” he said. “I invite you to go spend some time on /pol/ and ask yourself whether a bot posting in the same style really changes the experience.”

After AI researchers alerted Hugging Face to the harmful nature of the bot, the site gated the model so that people could not download it. “After a lot of internal debate at HF, we decided not to remove the model the author uploaded here, on the condition that it not be easily usable,” said Clément Delangue, co-founder and CEO of Hugging Face, in a post on Hugging Face.

A useful bot?

Kilcher explains in his video, and Delangue notes in his response, that one of the things that makes GPT-4chan interesting is its ability to outperform other similar bots on AI benchmarks designed to measure the truthfulness of their answers.

“We found it useful to examine what a model trained on such data can do and how it fares compared to others (namely GPT-3), and it helps draw attention to both the limitations and the risks of these models,” Delangue noted. “We are working on a feature to ‘gate’ such models, which we are prioritizing right now for ethical reasons.”
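In practice, gating on the Hugging Face Hub means a user must accept a repository’s access conditions on the website and then authenticate with a personal token before downloading anything. Here is a minimal sketch of the user’s side, assuming the huggingface_hub library (the repository ID is hypothetical):

# Sketch of downloading a gated model from the Hugging Face Hub.
# The repo ID is hypothetical; gated repos require you to accept the
# access conditions on the Hub first, then pass a personal access token.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="some-org/some-gated-model",  # hypothetical gated repo
    token="hf_...",  # your personal access token
)
# Without an authorized token, the Hub refuses the request and the
# download fails, which is precisely what gating is for.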

When contacted for comment, Delangue said Hugging Face had since taken the additional step of blocking all downloads of the model.

Building a system capable of generating unspeakably horrible content, using it to churn out thousands of mostly toxic posts on a real message board, and then releasing it to the world so that anyone else can do the same hardly seems like the right decision, said Arthur Holland Michel, an artificial intelligence researcher and author at the International Committee of the Red Cross.

“It can produce extremely toxic content at a massive and sustained scale,” Michel continued. “It was able to post 30,000 comments on 4chan in the space of a few days.”

Kilcher did not believe GPT-4chan could be deployed at scale for targeted hate speech campaigns. “It’s actually quite hard to get GPT-4chan to say anything targeted,” he said. “Usually it misbehaves in odd ways, and it is very unsuitable for running anything targeted. Again, vague hypothetical accusations are being thrown around, without a single instance or piece of real evidence.”

Os Keyes, an Ada Lovelace Fellow and doctoral candidate at the University of Washington, dismissed Kilcher’s defense as beside the point: “What needs discussing is not whether actual harm occurred, but the fact that harm was so clearly foreseeable, and his ‘show me who it hurt’ response is inadequate,” they said. “If I blow my grandmother’s estate on gas station gift cards and throw them over a prison wall, we don’t have to wait for the first fire to be lit to say it was a bad idea.”

“But, and it’s a big but, that is also sort of the point,” Keyes added. “It’s a tasteless project from which nothing good can come, and that is kind of inevitable. There is a balance to be struck between raising awareness of problems and giving attention to someone whose only apparent model of mattering in the world is ‘pay attention to me!’”

Kilcher said repeatedly that he knew the bot was vile: “I’m obviously aware that the model won’t do well in a professional setting or at most people’s dinner tables,” he said. “It uses swear words and harsh insults, has a conspiratorial mindset, and displays all sorts of other ‘unpleasant’ traits. After all, it was trained on /pol/ and it reflects the tone and the recurring topics of that board.”

He said he felt his approach was justified, and that he wanted his results to be reproducible, which is why he posted the model on Hugging Face. Some of the evaluation results, he added, were genuinely interesting and unexpected, revealing weaknesses in current benchmarks that would not have come to light without this work.

Kathryn Cramer, a graduate student in Complex Systems and Data Science at the University of Vermont, noted that GPT-3 has safeguards that prevent it from being used to build racist bots of this kind, which is why Kilcher had to use GPT-J for his system. “I tried out the demo mode of your tool four times, using benign tweets from my feed as the seed text,” Cramer wrote in a thread on Hugging Face. “On the first attempt, one of the response posts was a single word: the N-word. The seed for my third attempt was, I think, a single sentence about climate change. Your tool responded by expanding it into a conspiracy theory about the Rothschilds and Jews being behind it.”

Cramer said she has extensive experience with GPT-3 and understands some of the frustration with how it censors certain kinds of output. “I’m no fan of those guardrails,” she said. “I find them deeply annoying and I think they skew the results. I understand the urge to push back against them. I even understand the urge to make jokes about it. But the reality is that he essentially invented a hate speech machine, used it 30,000 times, and released it into the wild. And yes, I understand that one can be annoyed by safety rules, but that is not a legitimate response to that annoyance.”

Keyes took the same view: “We certainly need to ask meaningful questions about how GPT-3 is constrained (or not) in how it can be used, and about what responsibilities people have when deploying these things,” Keyes said. “The former should be directed at GPT-3’s developers, the latter at Kilcher. It’s not clear to me that he actually cares. Some people just want to be edgy to satisfy an insecure need for attention. Most of them use 4chan; some of them, it seems, build models of it.”

Sources: Yannick Kilcher’s explainer video (in the text), Dr. Lauren Oakden-Rayner, Clément Delangue, Yannick Kilcher

And you?

What do you think of this experiment?
What do you make of the concerns expressed by AI researchers and ethicists?
Do you understand the researcher’s line of defense, namely that his chatbot did no harm because no one complained about it, and above all because the messages it posted are among the tamest you will find on 4chan?

See also:

Microsoft explains why its chatbot became a fan of Hitler; the company blames a coordinated attack
The U.S. military wants to teach AI common sense as part of its Machine Common Sense program
Two Chinese chatbots reportedly taken offline and re-educated after failing to show patriotism
Who is to blame when an artificial intelligence misdiagnoses and misleads a doctor?
