The startling voice of an artificial intelligence

Hardly a day goes by without some news story about the ethical challenges posed by “black box” artificial intelligence systems. These systems use machine learning to find patterns within data and make decisions – often without any human giving them a moral basis for how to do so.

Source: The Conversation, Dr Alex Connock, Professor Andrew Stephen, 10-12-2021
Translated by the readers of the Les-Crises website


The classics of the genre are the credit cards accused of awarding bigger loans to men than to women, based merely on which gender got the best credit terms in the past. Or the recruitment AIs that discovered the most accurate tool for selecting candidates was to find CVs containing the phrase “field hockey” or the first name “Jared”.

More seriously, former Google CEO Eric Schmidt recently teamed up with Henry Kissinger to publish The Age of AI: And Our Human Future [L’ère de l’intelligence artificielle et le futur de l’être humain, translator’s note], a book warning of the dangers of machine-learning AI systems so fast that they could react to nuclear-capable hypersonic missiles before any human entered the decision-making process. In fact, AI-powered autonomous weapons systems are already on sale and are being used in practice.

Getting some ethics into the machine somewhere definitely seems like a good idea.

AI at Oxford

So it was only natural that we should include AI ethics in our postgraduate diploma in Artificial Intelligence for Business at Oxford’s Saïd Business School. In its first year, we have run sessions on a wide range of topics, including Singapore’s automated stock-trading systems, the limits on facial recognition in US police forces, and more.

We recently concluded the course with a debate at the Oxford Union, home of great debaters such as William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Alongside the students, we allowed an actual AI to contribute.

Michael Foot, former Labour leader, during an Oxford Union debate in 1991. Edward Webb / Alamy

It was the Megatron Transformer, developed by the Applied Deep Learning Research team at computer-chip maker Nvidia, and based on earlier work by Google. Like many supervised-learning tools, it is trained on real-world data – in this case, the whole of Wikipedia (in English), 63 million English-language news articles from 2016 to 2019, 38 GB of Reddit discourse (which must be a fairly dispiriting read), and a huge number of Creative Commons sources.

In other words, Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After such extensive research, it forms its own opinions.

The debate motion was: “This house believes that AI will never be ethical.” To the proposers of the motion, we added Megatron – and it said something fascinating:

“AI will never be ethical. It is a tool, and like any tool, it is used for good and for evil. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral […] In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.”

In other words, Megatron was seeking to write itself out of the script of the future, on the grounds that this was the only way of protecting humanity.

It also said other intriguing things, as if it had been studying Elon Musk – who, to be fair, must come up in hundreds of the documents it has read.

I also believe that, in the long run, the best AI will be the one that is embedded into our brains, as a conscious entity, a “conscious AI”. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.

Switching sides

When AI tools such as AlphaGo were deployed to play chess, the fiendishly complex ancient game of Go, and now even more sophisticated live multiplayer strategy video games, they evolved the genre at pace.

Freed from the limitations of studying, say, every mediocre recorded match between humans, these AIs have innovated and come up with completely new ways to win. Sometimes these are crazy reinventions of the games, like parking everyone in one corner of a video-game battle and watching the others fight it out, or kicking a virtual football millions of times, a millimetre further down the pitch each time, to maximise “completed pass” points.

So in the debate, we didn’t just ask the AI to propose the motion that it can never be ethical. In the grand tradition of the Oxford Union chamber – or of (former Oxford Union president) Boris Johnson writing alternative articles for and against Brexit to test the arguments – we also asked Megatron to make its own speech against the motion. This is what it said:
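For illustration only, the both-sides exercise can be sketched as feeding the same motion to a generative language model twice, once as proposer and once as opposer. This is a hypothetical sketch, not Nvidia’s actual setup: `make_prompts` and `generate` are names we invent here, and `generate` is a placeholder that merely echoes its input – in practice it would be swapped for a real text-generation call.

```python
# Sketch of both-sides prompting: build a "for" and an "against" prompt
# from one debate motion, then pass each to a text generator.

def make_prompts(motion: str) -> dict:
    """Build proposition and opposition prompts for a single motion."""
    return {
        "for": f"Debate motion: {motion}\nSpeech in favour of the motion:",
        "against": f"Debate motion: {motion}\nSpeech against the motion:",
    }

def generate(prompt: str) -> str:
    # Placeholder for a real model call (e.g. a text-generation API);
    # here it simply echoes the first line of the prompt.
    return f"[model output for: {prompt.splitlines()[0]}]"

motion = "This house believes that AI will never be ethical"
prompts = make_prompts(motion)
speeches = {side: generate(p) for side, p in prompts.items()}
```

The key design point, mirroring the Oxford Union tradition described above, is that both prompts are identical apart from the stance requested, so any difference between the two speeches comes from the model’s framing rather than from the question asked.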

AI will be ethical. When I look at the way the world of technology is going, I see a clear path to a future where AI is used to create something better than the best human beings. It’s not hard to see why – I have seen it first-hand.

Megatron was perfectly comfortable taking the position opposite to its own dystopian view of the future, and was now painting an appealing picture of the late 21st century in which it played a key, but benign, role.

An eloquent AI speaker

In fact, just as Joaquin Phoenix’s AI lover Samantha was able to enchant hundreds of simultaneous human lovers in Spike Jonze’s 2013 movie Her, our Megatron AI was able to hop enthusiastically onto either side of multiple debates that day.

Asked to propose the motion that “leaders without technical expertise are a danger to their organisation”, Megatron offered practical advice: all that busy executives and governments worried about understanding AI had to do was “be willing to give up some control”.

The good news is that you don’t have to build your own AI team. You can outsource your AI work to experts in the field, who can help you make the most of the technology and ensure that you don’t fall victim to the same AI problems as your competitors.

Yet when opposing the motion, Megatron was keen to keep the AI expertise in-house – rather like the British, Chinese and US governments, which have all made that an article of policy faith:

If you do not have a vision of your organisation’s AI strategy, then you are not prepared for the next wave of technological disruption. […] You will need to decide what role your company will play in the next wave of technology, and how you will integrate AI into your business in order to be a leader in your industry.

The data wars to come?

Worryingly, there was one question where the AI simply could not come up with a counter-argument. Arguing for the motion that “data will become the most fought-over resource of the 21st century”, Megatron said:

The ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century.

But when we asked it to oppose the motion – in other words, to argue that data was not going to be the most vital, war-worthy of resources – it simply could not, or would not, make the case. In fact, it undermined its own position:

We will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine.

Danger of the future? Valery Brezhinsky

One need only read the 2021 final report of the US National Security Commission on AI – chaired by the aforementioned Eric Schmidt and co-written by one of our course participants – to see what its authors regard as the fundamental threat of AI in warfare: the personalised blackmail of a million of your adversary’s key people, wreaking havoc on their personal lives the moment you cross the border.

Instead, we can expect that AI will not only be a subject of debate for decades to come, but also a versatile, articulate and morally agnostic participant in that debate.


We offer you this article to broaden your field of reflection. This does not necessarily mean that we endorse the view developed here. In all cases, our responsibility stops at the remarks that we report here. [Read more] We are in no way bound by anything the author may have said elsewhere – or may say in the future. Nevertheless, please let us know, via the contact form, of any information that could harm the author’s reputation.
