Who’s afraid of artificial intelligence in China?
20th September 2018
Something’s definitely up in China — most definitely. Almost every other week, there’s an international conference on artificial intelligence (AI) somewhere in Shanghai. Last month the city hosted AIAAT 2018, a conference on AI applications and technologies. That was quickly followed by IVPAI2018, a meeting on image and video processing and AI. And this week it’s the World Artificial Intelligence Conference (WAIC 2018).
Clearly, China’s got plans for AI. And it being China, those plans are long-term — and big.
“China has a national strategy for becoming the global leader in artificial intelligence and it’s executing that strategy,” says Paul Scharre, a senior fellow at the Center for a New American Security (CNAS) in Washington, DC, and author of a book on autonomous weapons, “Army of None.”
“They’re investing significant amounts of capital in research and development; they’re working to improve public-private partnerships,” says Scharre, before continuing with a big list of indicators. “China has a number of top-tier AI companies like Alibaba, Baidu and Tencent, they have a growing startup culture, and they are working aggressively to increase their human capital through STEM education and by courting experts in Silicon Valley and bringing them to China.”
Go champion Lee Sedol reviews his game after getting crushed by Google DeepMind's artificial intelligence, AlphaGo
None of this is strictly new. China has been working on AI and machine learning systems for years — in government, in academia, and in private and commercial enterprise. In fact, commercial enterprise is one of the main things driving China in the field of AI. And with that “holy trinity” of Alibaba, Baidu and Tencent — plus SenseTime, a company focused on deep learning and facial recognition technologies that boasts the highest valuation of any AI startup in history — they are well on their way.
But there was a turning point in March 2016, when Google DeepMind’s artificial intelligence, AlphaGo, beat a human champion, Lee Sedol, at the Chinese strategy game Go. And that may have implications for the second driver in China’s AI stack — the “intelligentization” of warfare.
“Economic benefits are the primary driving force, but there are definitely national security implications,” says Jeffrey Ding, a researcher with the Governance of AI Program at Oxford University in the United Kingdom. “There have been discussions about AI as a potential revolution in military affairs.”
It was probably already in the works, but efforts seem to have intensified since the “Sputnik moment” of Lee Sedol’s thrashing in the game of Go.
“Go has a unique history in China as a game that military generals would play,” says Ding, who’s written a report called Deciphering China’s AI Dream. “It’s very associated with military strategy.”
From Go to meGo?
That “Sputnik moment” led swiftly to two strategic moves by the Chinese government a year later — moves that got some people talking about a new technology race, similar to the space race of the Cold War. But Scharre, and his co-authors on a paper called Strategic Competition in an Era of Artificial Intelligence, say an AI race “may be even more intense” because it won’t just be between two superpowers but a host of them across an array of sectors and geographical regions.
In early 2017, China added a plan called Artificial Intelligence 2.0 to its list of “Science and Technology Innovation 2030 Megaprojects.” When that list was originally compiled in 2016, it included things like big data and robotics, which are related to AI, but AI itself was not an explicit priority.
AlphaGo changed China’s vision on that.
Then in July 2017, China published A New Generation of Artificial Intelligence Development Plan, which cites AI as a “focus of international competition.” The plan describes three main development stages, culminating, for now, in China’s ambition to become the “world’s primary AI innovation center” by 2030.
A workforce replaced by robots? A McKinsey report says China has the world's greatest potential for automation
By then, China says it “will achieve major breakthroughs in brain-inspired intelligence, autonomous intelligence, hybrid intelligence, swarm intelligence […] and AI should be expansively deepened and greatly expanded into production and livelihood, social governance,” according to a translation of the text by fellows at the think tank New America. (There are other translations, such as one by The Foundation for Law and International Affairs.)
“The goal is ambitious and it’s possible, but I don’t think it’s a certainty,” says Ding. The US, says Ding, still has the best universities that attract the best researchers, and American tech companies, such as social media platforms, command a huge global market share. “So displacing the US to become the primary innovation center by 2030 is unlikely, but it is a good goal to have.”
One nation under AI
This is just the beginning. China recently published its first AI textbook for schools, written by SenseTime founder, Tang Xiaoou. That new development plan specifically calls for AI-related courses to be set up in both primary and secondary schools. So it’s got education firmly in its sights.
And from there it’s probably just a hop, skip and a jump for Chinese schools to install facial recognition cameras in classrooms. Or is it?
“The kids are being monitored,” says Stefan Brehm, a researcher at the Centre for East and South-East Asian Studies at Lund University, Sweden. “Their facial recognition is monitored to see how attentive they are during class. The data is then evaluated by an algorithm and it gives feedback on you as a student.”
If that doesn’t make kids sit still, nothing will. But it goes beyond the school gates. China’s 2017 development plan refers repeatedly to group intelligence and an aim to promote “mutual trust.”
You could call it the third major AI driver in China.
And one way the country wants to achieve mutual trust is through so-called social credit systems. Originally introduced as a form of personal credit rating, these systems collect all kinds of data about individuals — their work, interactions, shopping patterns, behavior while crossing the street, travel plans, social media or browsing history — to create a social aggregate.
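The mechanics of these systems have not been made public, but the basic idea — weighting heterogeneous behavioral signals into a single score — can be sketched as a toy example. The signal names, weights and baseline below are entirely hypothetical, chosen only to show the general shape of such an aggregation:

```python
# Toy illustration only: real social credit systems do not publish
# their inputs or weights. This sketch shows the general shape of
# aggregating heterogeneous behavioral signals into one score.
from dataclasses import dataclass

@dataclass
class CitizenSignals:
    on_time_payments: float   # 0.0-1.0, share of bills paid on time
    traffic_violations: int   # e.g. recorded jaywalking incidents
    flagged_posts: int        # social media posts flagged by moderators

# Hypothetical weights and baseline, for illustration only.
WEIGHTS = {"payments": 400, "traffic": -50, "posts": -100}
BASELINE = 600

def social_score(signals: CitizenSignals) -> int:
    """Combine weighted signals into a single bounded score."""
    score = BASELINE
    score += WEIGHTS["payments"] * signals.on_time_payments
    score += WEIGHTS["traffic"] * signals.traffic_violations
    score += WEIGHTS["posts"] * signals.flagged_posts
    return max(0, int(score))

# A citizen with a clean record scores the maximum in this toy model:
print(social_score(CitizenSignals(1.0, 0, 0)))  # 1000
```

Even this simplified version makes the "carrot and stick" dynamic visible: positive behaviors add to the score, recorded infractions subtract from it, and everyone is run through the same formula.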
A third eye
“AI could help enable China’s mode of social governance. There is a lot of potential — and it’s already happening to some extent — for a more aggressive, expansive surveillance, monitoring of the population, but also softer means of ‘nudging’ people towards what the government views as better forms of citizenship,” says Ding.
It’s a carrot and stick if ever there was one.
“That’s exactly what the government says: We want to punish those who are disloyal, and good citizens will get benefits,” says Brehm.
Commercial social credit systems like Alibaba’s Sesame Credit are fairly common and generally accepted. The government is even considering a compulsory, national social credit system, although details are sketchy.
But we have seen reports by Human Rights Watch into China’s other attempts to monitor communities, like collecting DNA samples and installing QR code systems at the homes of Uyghur Muslims.
And yet opposition to such forms of monitoring is thin on the ground.
Students undergo facial recognition, fingerprint verification and metal detectors, and take exams in radio-shielded rooms
“You’d think that in an authoritarian country people would be worried, but most people welcome it,” says Brehm. “People have realized for instance that social media can be used to track down corruption. The social credit system is an extension of this kind of ‘uncovering of injustice.’ And it gives people back a sense of fairness because you can think of an algorithm as something more objective, where everyone is judged by the same criteria.”
It may be a hard pill to swallow, but the idea that people in China are okay with constant surveillance may be one of a few misconceptions we have on the outside. That includes the idea that we in Europe have a superior take on technological ethics.
China’s AI development plan sets out two other goals. One, that AI research can be open source — open to inspection and collaboration. And two, that the country needs to develop an ethical code of conduct and “strengthen the assessment of the potential hazards and benefits of AI.”
“There are certain topics that seem to be off limits. I’ve looked for discussions on facial recognition or QR code systems for tracking people, and I haven’t found much in the public sphere,” says Ding. “But on other topics that are more in bounds, like personal information protection, privacy, preventing leakages, algorithmic bias — things that are perhaps less politically sensitive — there’s a very active discussion going on.”
Whether it meets its goals by 2030 or not, China is, according to a 2017 report by the McKinsey Global Institute, the “nation with the world’s largest automation potential.” That automation could affect hundreds of millions of people. And they, of all people, would want to be in on the conversation.
Courtesy: DW