UN urges moratorium on use of AI that imperils human rights
GENEVA: The UN human rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.
Michelle Bachelet, the UN High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications which don't comply with international human rights law.
Applications that should be prohibited include government "social scoring" systems that judge people based on their behavior and certain AI-based tools that categorize people into clusters such as by ethnicity or gender.
AI-based technologies can be a force for good but they can also "have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights," Bachelet said in a statement.
Her comments came along with a new UN report that examines how countries and businesses have rushed into applying AI systems that affect people's lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.
"This is not about not having AI," Peggy Hicks, the rights office's director of thematic engagement, told journalists as she presented the report in Geneva. "It's about recognizing that if AI is going to be used in these human rights, very critical, function areas, that it's got to be done the right way. And we simply haven't yet put in place a framework that ensures that happens."
Bachelet didn't call for an outright ban on facial recognition technology, but said governments should halt the real-time scanning of people's features until they can show that the technology is accurate, does not discriminate, and meets certain privacy and data protection standards.
While the report did not mention countries by name, China has been among those that have rolled out facial recognition technology, particularly for surveillance in the western region of Xinjiang, where many of its minority Uyghurs live. The report's key authors said naming specific countries wasn't part of their mandate and doing so could even be counterproductive.
"In the Chinese context, as in other contexts, we are concerned about transparency and discriminatory applications that address particular communities," said Hicks.
She cited several court cases in the United States and Australia where artificial intelligence had been wrongly applied.
The report also voices wariness about tools that try to deduce people's emotional and mental states by analyzing their facial expressions or body movements, saying such technology is susceptible to bias and misinterpretation and lacks a scientific basis.
"The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty and to a fair trial," the report says.
The report's recommendations echo the thinking of many political leaders in Western democracies, who hope to tap into AI's economic and societal potential while addressing growing concerns about the reliability of tools that can track and profile individuals and make recommendations about who gets access to jobs, loans and educational opportunities.
European regulators have already taken steps to rein in the riskiest AI applications. Proposed regulations outlined by European Union officials this year would ban some uses of AI, such as real-time scanning of facial features, and tightly control others that could threaten people's safety or rights.
US President Joe Biden's administration has voiced similar concerns, though it hasn't yet outlined a detailed approach to curtailing such risks. A newly formed group called the Trade and Technology Council, jointly led by American and European officials, has sought to collaborate on developing shared rules for AI and other tech policy.
Efforts to limit the riskiest uses of AI have been backed by Microsoft and other US tech giants that hope to guide the rules affecting the technology. Microsoft has worked with and provided funding to the UN rights office to help improve its use of technology, but funding for the report came through the rights office's regular budget, Hicks said.
Western countries have been at the forefront of expressing concerns about the discriminatory use of AI.
"If you think about the ways that AI could be used in a discriminatory fashion, or to further strengthen discriminatory tendencies, it is pretty scary," said US Commerce Secretary Gina Raimondo during a virtual conference in June. "We have to make sure we don't let that happen."
She was speaking with Margrethe Vestager, the European Commission's executive vice president for the digital age, who suggested some AI uses should be off-limits completely in "democracies like ours." She cited social scoring, which can close off someone's privileges in society, and the "broad, blanket use of remote biometric identification in public space."