Artificial Intelligence: Meta has built a huge new AI language model – and is giving it away for free





“This is a great initiative,” said Thomas Wolf, chief scientist at Hugging Face, the AI startup behind BigScience, a project in which more than 1,000 volunteers from around the world are collaborating on an openly accessible language model. “The more open models there are, the better,” he added.

Large language models – powerful programs that can generate paragraphs of text and mimic human conversation – have become one of the hottest trends in AI in recent years. But they have deep flaws, parroting misinformation, prejudice, and toxic language.

In theory, putting more people to work on the problem should help. Yet because training a language model requires vast amounts of data and computing power, these models have so far remained projects for wealthy technology companies. The wider research community, including ethicists and social scientists concerned about their misuse, has had to watch from the sidelines.

Meta AI says it wants to change that. “Many of us have been university researchers,” says Joelle Pineau. “We know the gap between academia and industry in terms of the ability to build these models. Making this one available to researchers was an obvious step.” She hopes others will study the work, pick it apart, or take inspiration from it. Progress comes faster when more people are involved, she says.

Meta is making its model, called Open Pretrained Transformer (OPT), available for non-commercial use. It is also releasing its code and a logbook documenting the training process. The logbook contains daily updates from team members about the training data: how it was added to the model and when, what worked and what didn’t. In more than 100 pages of notes, the researchers log every bug, crash, and reboot during a three-month training process that ran from October 2021 to January 2022.

With 175 billion parameters (the values in a neural network that get adjusted during training), OPT is the same size as GPT-3. That is deliberate, Pineau explains. The team built OPT to match GPT-3 both in its accuracy on language tasks and in its toxicity. OpenAI has made GPT-3 available as a paid service but has not shared the model itself or its code. The idea was to give researchers a similar language model to study, Pineau says.
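To give a sense of what “available to researchers” can look like in practice, here is a minimal sketch in Python. It assumes the smaller OPT checkpoints can be loaded through Hugging Face’s transformers library under identifiers such as facebook/opt-350m – an assumption about the distribution channel made for illustration, not something stated in the article – while the full 175-billion-parameter model is released separately for non-commercial research.

```python
# Minimal sketch: loading a small OPT checkpoint and generating text.
# Assumes the checkpoints are published through the Hugging Face
# "transformers" library under ids such as "facebook/opt-350m"
# (an assumption for illustration; the article does not describe
# a distribution channel).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-350m"  # a small sibling of the 175B model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy generation of a short continuation; researchers probing the
# model's behavior would typically vary prompts and sampling settings.
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```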

Google, which is exploring the use of large language models in its search products, has also been criticized for a lack of transparency. The company sparked controversy in 2020 when it forced out prominent members of its AI ethics team after they produced a study highlighting problems with the technology.

Culture clash

So why is Meta doing this? After all, Meta is a company that has said little about how the algorithms behind Facebook and Instagram work, and it has a reputation for burying unfavorable findings from its own internal research teams. A big reason for Meta AI’s different approach is Pineau herself, who has been pushing for more transparency in AI for several years.

Pineau has helped change how research is published at several of the largest conferences, introducing a checklist of things researchers must submit alongside their results, including code and details of how experiments are run. Since joining Meta (then Facebook) in 2017, she has championed that culture in its AI lab.

“I came here because of this commitment to open science,” she says. “I would not have stayed here otherwise.”

Ultimately, Pineau wants to change the way we judge AI. “What we now call state of the art can’t just be about performance,” she says. “It has to be state of the art in terms of responsibility as well.”

Still, giving away a large language model is a bold move for Meta. “I can’t tell you there’s no risk of this model producing language that we’re not proud of,” says Pineau. “It will.”

Weighing the risks

Margaret Mitchell, one of the AI ethics researchers Google forced out in 2020, who now works at Hugging Face, sees the release of OPT as a positive move. But she thinks there are limits to transparency. Has the language model been tested with sufficient rigor? Do the foreseeable benefits outweigh the foreseeable harms, such as the generation of misinformation or racist and misogynistic language?

“Releasing a large language model to the world, where a broad audience is likely to use it or be affected by its output, comes with responsibilities,” she says. Mitchell notes that the model will be able to generate harmful content not only by itself, but also through the downstream applications that researchers build on top of it.

Meta AI audited OPT to remove some harmful behaviors, but the point is to release a model that researchers can learn from, flaws and all, says Pineau.

“We had a lot of discussions about how to do this in a way that lets us sleep at night, knowing that there’s a non-zero risk in terms of reputation, a non-zero risk in terms of harm,” she says. She dismisses the idea that you shouldn’t release a model because it’s too dangerous – the reason OpenAI gave for not releasing GPT-2, the predecessor of GPT-3. “I understand the weaknesses of these models, but that’s not a research mindset,” she says.

Emily Bender, a computational linguist at the University of Washington who co-authored the study at the center of the Google dispute with Mitchell, is also worried about how the potential harms will be handled. “One thing that really matters in mitigating the risks of any kind of machine-learning technology is evaluating and exploring specific use cases,” she says. “What will the system be used for? Who will be using it, and how will the system’s outputs be presented to them?”

Some researchers question why large language models are built at all, given their potential for harm. For Pineau, these concerns should be met with more openness, not less. “I believe the only way to build trust is extreme transparency,” she says.

“We have different opinions around the world about what speech is appropriate, and AI is part of that conversation,” she says. She doesn’t expect language models to say things that everybody agrees with. “But how do we grapple with that? You need many voices in that discussion.”




