
Microsoft’s Responsible AI Initiative

Thursday, 7 July 2022

Microsoft is taking a responsible position on AI

Imagine an AI tool that could identify your mood as well as recognising your facial features.

Microsoft has decided to retire some of its Artificial Intelligence tools and rework others.
Some experts believe emotion recognition tools violate human rights.

Azure Face is an emotion recognition tool that falls into this highly criticised category.

Microsoft, which has just published an updated version of its Responsible AI Standard, wants AI to be a positive force in the world and has acknowledged that Azure Face’s recognition capabilities have the potential to be misused.
It seems AI facial recognition is here to stay. However, the public will not have open access to it: Microsoft recognises the value of controlled access for specific requirements, such as assistance for the visually impaired.

The service’s ability to infer attributes such as gender, age, hair, and even facial expression has been cut. This follows concerns that cyber criminals could impersonate individuals to commit fraudulent activities.
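For developers who work with the service, here is a minimal sketch of what that change looks like in practice, written in Python with the requests library. The endpoint, key and image URL below are placeholders, not real values. The Detect call can still return neutral attributes such as head pose or blur, but emotion, age and gender can no longer be requested.

import requests

# Placeholder values - substitute your own Azure Face resource details.
ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

# Detect faces in an image by URL. Emotion, age and gender attributes have
# been retired, so only neutral attributes are requested here.
response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={
        "detectionModel": "detection_01",
        "returnFaceId": "false",  # identification features now require approved access
        "returnFaceAttributes": "headPose,blur,glasses",
    },
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/sample-photo.jpg"},
)
response.raise_for_status()

# Each detected face includes a bounding box and the requested attributes.
for face in response.json():
    print(face["faceRectangle"], face["faceAttributes"])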

Microsoft is also limiting which businesses can access its Custom Neural Voice service, a text-to-speech tool said to produce near-lifelike synthetic voices.

What else is Microsoft doing to help protect us from fraud and threats?

It is adding new features to its email service in Microsoft 365, improving the Tenant Allow/Block List, reducing the chances of being caught in a phishing scam and strengthening your security.

The improved feature should go into preview in the near future and is expected to be available by the end of July.
In the meantime, if you have any concerns regarding your business’s email security, get in touch with Pisys.net.

 

Published with permission from Your Tech Updates.
