Microsoft withdraws its AI facial analysis tool

Microsoft will retire a range of AI features from its Azure Face service. Photo: Shutterstock

As AI applications continue to develop, so do the risks.

If the law doesn’t keep pace, it’s up to tech companies to moderate how their technology is used.

Keen to be on the right side of history, Microsoft has updated its guidelines for treating AI with care and will remove a range of AI features from its Azure Face service, which is used for things like identity verification, contactless access control, and privacy blurring.

These capabilities can be used to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup.
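For context, these attributes were surfaced through the Face API’s detection endpoint. The sketch below shows the kind of request now being retired, assuming a standard Cognitive Services resource; the endpoint, key, and image URL are placeholders, not real values.

```python
import requests

# Placeholders: substitute your own Cognitive Services endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"

response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    # The attribute set being retired: emotion plus identity-style traits.
    params={"returnFaceAttributes": "age,gender,smile,facialHair,hair,makeup,emotion"},
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json={"url": "https://example.com/photo.jpg"},  # publicly reachable image URL
)
response.raise_for_status()
for face in response.json():
    print(face["faceAttributes"])  # e.g. {"age": 31.0, "emotion": {...}, ...}
```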

Citing the lack of scientific consensus on the definition of “emotions,” the company said there are issues in how inferences generalize across use cases, regions, and demographics.

There are also heightened privacy concerns around this type of capability.

“[We] recognize that for AI systems to be trustworthy, they must be appropriate solutions to the problems they are meant to solve,” Microsoft said in a statement.

Limits on voice imitation too

Microsoft has systematically analyzed all of its AI systems that claim to infer people’s emotional states, whether the systems use facial analysis or any other AI technology.

For these other systems, the company explained, it will perform system-specific validity assessments upfront and rely on its Use Case Policy for guidance in high-impact, science-based use cases.

It will also enforce limits on Azure AI’s neural voice technology, which enables the creation of a synthetic voice that sounds almost identical to the original speaker. Amazon, by contrast, is developing a similar feature for its voice assistant Alexa.

Recognizing that the technology could also be used to impersonate speakers and deceive listeners, the company restricts customer access to the service, defines acceptable use cases, and has technical safeguards in place to help ensure the speaker’s active participation when a synthetic voice is created.
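For a sense of what the underlying service does: stock neural voices remain generally available through the Azure Speech SDK, and it is the custom, cloned voices that sit behind the restricted access described above. A minimal sketch, with the key, region, and voice name as illustrative placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
# A stock neural voice; cloned "custom neural" voices require gated access.
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

# With no audio config supplied, output goes to the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello from a neural voice.").get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis finished.")
```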

Its speech-to-text technology also proved problematic.

In 2020, a Stanford study showed how the technology produced error rates for members of some African American communities that were nearly double those of non-African American users.
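Error rates in studies like this are typically reported as word error rate: the number of word substitutions, insertions, and deletions needed to turn the system’s transcript into the reference transcript, divided by the reference length. A minimal, illustrative implementation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words, via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# "the" is dropped and "off" is mistranscribed: 2 errors over 4 words = 0.5
print(word_error_rate("turn the lights off", "turn lights of"))
```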

After the study was published, Microsoft learned during preliminary tests that its speech-to-text technology did not sufficiently account for the diversity of speech among people from different backgrounds and regions.

Following a review by a sociolinguist, the company set out to expand its data collection efforts while researching how best to gather data from communities in a way that engages them appropriately and respectfully.

An evolving set of guidelines for AI

In these initiatives, Microsoft has been guided by its Responsible AI Standard, the second iteration of its policy for steering product development toward outcomes that are both beneficial and equitable.

“This means keeping people and their goals at the center of system design decisions and adhering to enduring values such as fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability,” the company said.

A multidisciplinary group of researchers, engineers and policy experts spent 12 months developing the second version of the policy.

The standard defines the results that teams developing AI systems should strive to achieve.

It breaks down an overarching principle such as “accountability” into its key enablers, such as impact assessments, data governance, and human oversight, along with the steps teams should take to ensure AI systems achieve those goals throughout the system lifecycle.

It also maps available tools and practices to specific requirements, to help teams as they develop AI systems.

While the company is taking its own action, it says broader regulation is lacking: laws have not caught up to the unique risks of AI, or to society’s need for equity and inclusion in this type of technology.

“As we see signs that government action on AI is growing, we also recognize our responsibility to act. We believe we must work to ensure that AI systems are responsible by design.”
