AI that detects your emotions can be abused and should not be made available to everyone, says Microsoft


Microsoft on Tuesday announced plans to stop selling facial recognition technology that predicts a person’s emotions, gender or age, and to restrict access to other artificial intelligence services, because they could expose people to stereotyping, discrimination or unfair denial of service. In a blog post, Microsoft describes its work with researchers both inside and outside the company to create a standard for the use of the technology, and acknowledges that this work has revealed serious problems with the technology’s reliability. Such a commitment is necessary because there are still too few laws regulating the use of machine learning technology; in the absence of such regulation, Microsoft is essentially left to hold itself to doing the right thing.

Microsoft has promised to limit access to AI tools designed to predict emotions, gender and age from images, and to restrict the use of its facial recognition and generative audio models in Azure. The computing giant made the pledge yesterday when it published its Responsible AI Standard, a document in which the American company promises to limit any harm caused by its machine learning software. Under this commitment, the company will assess the impact of its technology, document model data and capabilities, and enforce strict usage guidelines.

The move follows harsh criticism of the technology, which has been used by companies to monitor job applicants during interviews. Facial recognition systems are often trained primarily on databases of white and male faces, so their results can be biased when applied to other groups. “These efforts raised important questions about privacy, the lack of consensus on a definition of emotions, and the inability to generalize the link between facial expression and emotional state across use cases, regions and demographics,” said Sarah Bird, a senior product manager in Microsoft’s Azure AI unit.

Two years ago, Microsoft began a review process aimed at producing a responsible AI standard and building fairer, more reliable AI systems. The company released the results of that effort in a 27-page document on Tuesday. “By introducing Limited Access, we add an additional layer of scrutiny to the use and deployment of facial recognition to ensure that the use of these services aligns with Microsoft’s Responsible AI Standard and contributes to high-value benefits for end users and society,” Bird wrote in a blog post published on Tuesday.

“We recognize that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve,” wrote Natasha Crampton, Microsoft’s chief responsible AI officer, in another blog post. Crampton added that, under the new standard, the company will retire AI capabilities that infer emotional state and identity attributes such as gender, age, smile, facial hair, hair and makeup.

The move comes as lawmakers in the United States and the European Union debate the legal and ethical issues raised by facial recognition technology. Some jurisdictions have already imposed restrictions on the deployment of this technology: starting next year, New York City employers will face increased scrutiny over their use of automated tools to screen job candidates. In 2020, Microsoft joined other technology giants in promising not to sell its facial recognition systems to police departments until federal regulation exists.

But academics and experts have for years criticized tools such as Microsoft’s Azure Face API that claim to detect emotions from videos and photos. Their work has shown that even the best-performing facial recognition systems misidentify women and dark-skinned people at disproportionately high rates.

“The need for this type of practical guidance is growing. AI is becoming an ever greater part of our lives, and yet our laws are lagging behind. They have not caught up with AI’s unique risks or society’s needs. While we see signs that government action on AI is expanding, we also recognize our responsibility to act. We believe we must work to ensure AI systems are responsible by design,” said Natasha Crampton.

Thus, to prevent machine learning developers from using the technology in divisive or discriminatory ways, Microsoft is cutting off access to the tools designed to classify people’s gender, age and emotions, and to analyze their smiles, facial hair, hair and makeup, via its Face API in Azure. New customers will no longer be able to use this API in Microsoft’s cloud, and existing customers have until June 30, 2023 to migrate to other services before the software is officially retired.
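To make concrete what is being retired, here is a minimal sketch, not an official Microsoft sample, of the kind of request developers used to make against the Face API through the azure-cognitiveservices-vision-face Python SDK: asking for emotion, gender, age, smile, facial hair, hair and makeup attributes. The endpoint, key and image URL are placeholders; under the new policy, new customers can no longer make such calls, and existing customers lose access after June 30, 2023.

```python
# Sketch of a Face API attribute request of the kind Microsoft is retiring.
# Endpoint, key and image URL below are placeholders, not real values.
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
KEY = "<your-face-api-key>"                                        # placeholder

client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Request the attribute inferences named in the article:
# emotion, gender, age, smile, facial hair, hair and makeup.
faces = client.face.detect_with_url(
    url="https://example.com/photo.jpg",  # placeholder image
    return_face_attributes=[
        FaceAttributeType.emotion,
        FaceAttributeType.gender,
        FaceAttributeType.age,
        FaceAttributeType.smile,
        FaceAttributeType.facial_hair,
        FaceAttributeType.hair,
        FaceAttributeType.makeup,
    ],
)

for face in faces:
    attrs = face.face_attributes
    print(attrs.age, attrs.gender, attrs.emotion)
```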

Although these capabilities will no longer be offered through its API platform, they will still be used in other parts of Microsoft’s empire. For example, they will remain integrated into Seeing AI, an application that identifies and describes people and objects for visually impaired users.

Access to other Microsoft tools considered risky, such as realistic audio generation (which can make someone appear to say something they never said) and facial recognition (useful for surveillance), will also be restricted. New customers will have to apply to use these tools, and Microsoft will evaluate whether the applications they want to build are appropriate. Existing customers will likewise need approval to continue using these tools in their products after June 30, 2023.

Imitating a person’s voice with the generative AI model without the speaker’s consent is no longer permitted, and products and services built with Microsoft’s Custom Neural Voice software must disclose that the voices are synthetic. The usage guidelines for the company’s facial recognition tools are also stricter when applied in public spaces, and the tools cannot be used to track people for surveillance purposes.

Source: Microsoft (1, 2, 3)

And you?

What is your opinion about it?

See also:

Emotion-recognition technology should be banned because it has little scientific basis, research institute AI Now concludes

Researchers are developing an AI capable of detecting deepfake videos with up to 99% accuracy, a method that spots manipulated facial expressions and identity swaps

Spain: Police Rely on AI to Detect False Fraud Claims, VeriPol Has an 83% Accuracy Rate

Has the pandemic normalized employee monitoring software? Reports indicate that this type of software is spreading rapidly
