
Guidelines for proper use of AI explored

By Chang Jun in San Francisco | China Daily | Updated: 2019-11-12 08:54

Experts at a conference discuss ways to monitor technology and minimize risks

Artificial intelligence, the revolutionary, disruptive and diffuse technology that has sparked controversy and awe since its inception more than 50 years ago, has entered a stage that requires the global community - academia, civil society, government and industry - to coordinate regulations that guide it toward serving the common good.

In late October, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) drew hundreds of experts from around the world to a two-day conference on AI ethics, policy and governance to discuss how major stakeholders can work together to supervise AI research, minimize risks and prohibit unethical AI-enhanced practices.

Attendees unanimously agreed that AI has transformed society profoundly. Major progress has been made thanks to the availability of massive data, powerful computing architectures and advances in machine learning. AI is playing an increasing role across domains such as healthcare, education, mobility and smart homes.

However, AI has also raised concerns around the world, mainly over a lack of ethical awareness and intrusions into individual privacy. Applications of AI in facial recognition are a case in point.

Joy Buolamwini, a computer scientist at the MIT Media Lab, a research laboratory at the Massachusetts Institute of Technology, presented findings from her research on intersectional accuracy disparities in commercial gender classification. In the study, Buolamwini showed facial recognition systems developed by tech companies such as Amazon, Microsoft and Google 1,000 faces and asked them to identify gender. The algorithms misidentified Michelle Obama, Oprah Winfrey and Serena Williams - three iconic dark-skinned women - as male.
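
Buolamwini's term "intersectional accuracy disparities" refers to comparing a classifier's accuracy across subgroups defined by more than one attribute at once, such as skin type combined with gender. The short Python sketch below is illustrative only; the field names and sample records are hypothetical and are not drawn from her study.

from collections import defaultdict

# A minimal sketch (not Buolamwini's code) of scoring an intersectional audit:
# group a gender classifier's predictions by skin type and gender, then
# compare per-group accuracy. Field names and sample data are hypothetical.

def subgroup_accuracy(records):
    """records: dicts with 'skin_type', 'gender', 'true_label', 'predicted_label'."""
    totals, correct = defaultdict(int), defaultdict(int)
    for r in records:
        group = (r["skin_type"], r["gender"])  # the intersectional subgroup
        totals[group] += 1
        correct[group] += int(r["predicted_label"] == r["true_label"])
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical audit records, for illustration only.
sample = [
    {"skin_type": "darker", "gender": "female", "true_label": "female", "predicted_label": "male"},
    {"skin_type": "darker", "gender": "female", "true_label": "female", "predicted_label": "female"},
    {"skin_type": "lighter", "gender": "male", "true_label": "male", "predicted_label": "male"},
]

accuracy = subgroup_accuracy(sample)
print(accuracy)
print("disparity:", max(accuracy.values()) - min(accuracy.values()))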

The bias in code can lead to discrimination against underrepresented groups and the most vulnerable individuals, Buolamwini said.

She also founded the Algorithmic Justice League, an initiative through which she aims to highlight the collective and individual harms AI can cause - loss of opportunity, social stigmatization, workplace discrimination and inequality - and to advocate for regulating big tech companies and checking governments' use of AI.

One of the key questions around AI governance and ethics, as a majority of attendees agreed, is how to regulate big tech companies.

This "nascent technology will help us build powerful new materials, understand the climate in new ways and generate far more efficient energy - it could even cure cancer," said Eric Schmidt, former Google CEO and current technical advisor to Alphabet Inc.

This is all good, he continued. "I don't want us, in these complicated debates about what we are doing, to forget that the scientists here at Stanford and other places are making progress on problems which were thought to be unsolvable ... because (without AI) they couldn't do the math at scale."

However, Marietje Schaake, a HAI International Policy Fellow and Dutch former member of the European Parliament who worked to pass the European Union's General Data Protection Regulation, argued that AI's potential shouldn't obscure its potential harms, which the law can help mitigate.

Large technology companies have a lot of power, Schaake said. "And with great power should come great responsibility, or at least modesty. Some of the outcomes of pattern recognition or machine learning are reason for such serious concerns that pauses are justified. I don't think that everything that's possible should also be put in the wild or into society as part of this often quoted 'race for dominance'. We need to actually answer the question, collectively, 'How much risk are we willing to take?'"

Like it or not, the age of AI is coming, and fast, and there is plenty to be concerned about, wrote Stanford HAI co-directors Fei-Fei Li and John Etchemendy.

The two believe the real threat lies in the fact that "Most of the world, including the United States, is unprepared to reap many of the economic and societal benefits offered by AI or mitigate the inevitable risks".

Getting there will take decades, they said. "Yet, AI applications are advancing faster than our policies or institutions at a time in which science and technology are being underfunded, under-supported and even challenged. It's a national emergency in the making."

They asked the US government to commit $120 billion over the next decade to research, data and computing resources, education and startup capital to support a bold human-centered AI framework and retain the United States' competitiveness and leading position in the field.

Open dialogue and collaboration among nations on AI research and governance are important, attendees said. Given cultural differences and the differing motivations of international stakeholders, however, it is unrealistic to expect the whole world to settle on a single AI vision or a once-and-for-all solution to the problems and issues involved.

Nevertheless, governments across continents are taking action.

In Europe, the European Union issued the first draft of its ethical guidelines for the development, deployment and use of AI in December 2018, an important step toward innovative and trustworthy AI "made in Europe".

In February, the US president signed an executive order setting out a cohesive plan for US leadership in AI development. "Continued American leadership in Artificial Intelligence is of paramount importance to maintaining the economic and national security of the United States," he said.

In China, the National New Generation Artificial Intelligence Governance Committee, which is under the Ministry of Science and Technology, in June released the New Generation AI Governance Principles - Developing Responsible AI.

The first official document of its kind issued in China on AI governance ethics, the principles cover harmony and friendship, fairness and justice, inclusiveness and sharing, privacy protection, safety and controllability, shared responsibility, open collaboration and agile governance.

"We want to ensure the reliability and safety of AI while promoting economic, social and ecological sustainable development," said Zhang Xu, deputy director of the strategic planning department under the Ministry of Science and Technology.

"AI is advancing rapidly, but we still have time to get it right - if we act now," said Fei-fei Li.
