How to map a criminal: Chinese Courts and AI tech partners locate “deadbeats”

Dovetail Labs Researcher Shazeda Ahmed published an essay on Medium outlining WeChat’s new mini-program to map debtors, and the implications of this tool for relations between Chinese courts and technology partners.


A notable development in China’s push to create a national social credit system recently emerged: WeChat, the messaging app used by over a billion people, rolled out a mini-program whose map displays the addresses of people in the city of Shijiazhuang who have failed to repay debts. The story first broke in several Chinese state media outlets. However, the mini-program is not the first of its kind; it is one example of a broader trend of partnerships between courts and tech companies to exert greater pressure on people who dodge court orders.

[Photo by Hongmei Zhao on Unsplash]

The “Laolai Map” mini-program allows the user to toggle between two tabs of blacklisted parties: “natural persons” (individuals) and “legal persons” (companies and certain public institutions). Within each tab one can view the names, home addresses (office addresses in the case of legal persons), and the reason each entity is listed as a laolai. Some reasons are as specific as “did not register property owned,” but most are vaguer phrases such as “failed to perform on a court order.”

The colloquial term laolai (老赖) is often translated into English as “deadbeat.” While many people who receive this designation do indeed owe debts, the broader and more important distinction is that all of them are considered “judgment defaulters” (失信被执行人, literally “trust-breakers against whom an order or judgment is executed”): people whom a judge ordered to perform a certain action after losing a lawsuit, who failed to comply with that court order, and who ultimately ended up on court-generated blacklists shared with multiple government bureaus as well as with certain private companies.

Outside observers’ understandable discomfort with this treatment of laolai centers on the implicit decision that their (illegal but non-criminal) transgressions justify a loss of privacy so that public shaming can pressure them to change their behavior. On-the-ground reporting has yet to address how laolai and their neighbors are responding to this development, and whether the social stigma is compelling individuals to comply with their court orders. Chinese media reports argue that providing this information to citizens enables them to “avoid risks,” such as those of doing business with blacklisted people and firms.

Yet a series of additional partnerships preceding this mini-program’s roll-out suggests that even more invasive initiatives have been quietly developed between local courts and major tech firms. How else are China’s courts, local governments, and tech companies cooperatively using AI to identify and punish laolai?

In an example that bears many similarities to Shijiazhuang’s “Laolai Map,” a court in Yantai, Shandong has offered a similar feature since 2017. News portal Sohu published an article explaining how Chinese news platform and AI powerhouse Toutiao (今日头条) signed a memorandum of agreement with Yantai’s intermediate court to send in-app push notifications about laolai within a twenty-kilometer radius of a Yantai-based user. The Shijiazhuang “Laolai Map” makes no claims about integrating AI into its operations, which leaves open the question of how the Toutiao feature actually works.
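
Neither the Sohu article nor the coverage of the Shijiazhuang map explains how this proximity matching is done. Purely as a minimal sketch of what radius-based filtering could look like, the Python below checks geocoded blacklist entries against a user’s location using the haversine great-circle distance; the data schema, coordinates, and function names are hypothetical illustrations, not details of Toutiao’s actual system.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def laolai_nearby(user_location, blacklist, radius_km=20.0):
    """Return blacklist entries whose geocoded address falls within
    radius_km of the user's location. The entry format is a hypothetical
    schema; neither report describes the real one."""
    user_lat, user_lon = user_location
    return [
        entry for entry in blacklist
        if haversine_km(user_lat, user_lon, entry["lat"], entry["lon"]) <= radius_km
    ]

# Illustrative data: a user in central Yantai and two geocoded entries.
blacklist = [
    {"name": "debtor A", "lat": 37.47, "lon": 121.45},  # roughly 1-2 km away
    {"name": "debtor B", "lat": 37.10, "lon": 120.80},  # roughly 70 km away
]
print(laolai_nearby((37.46, 121.44), blacklist))  # only "debtor A" is within 20 km
```

A real deployment would presumably query a spatial index on the server rather than scan a list on the device, but the reported twenty-kilometer radius reduces to a distance check of this kind.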

Other allegedly AI-fueled initiatives to pressure laolai are equally short on details about how exactly AI is deployed for this purpose. According to a Xinhua article, an intermediate court in Wuxi, Jiangsu has partnered with Alibaba to create an AI-driven revamp of the court’s system for assessing the records of people and organizations deemed laolai. These data will be automatically compiled into performance reports that judges can use when deciding how to handle cases.

Praise for the new system insists that it will cut data-entry work that used to take an hour down to a three-minute job and that it will help judges make more precise decisions, invoking the classic slogan that has run through much propaganda about the social credit system: “make the data run more, let the masses run less” [让数据多跑路,让群众少跑腿]. To this end, the system also includes a long-distance teleconferencing feature that automatically records dialogue between judges and people who have cases brought against them but cannot be physically present in the courthouse.

Many questions arise from these partnerships between tech industry titans and the Chinese government. Is Alibaba supplying any of its user data to the Wuxi intermediate court? If written transcripts of the teleconferenced conversations are produced, how accurate are the automatic transcriptions, and do they serve as the official record of a case? What legislation is in place to ensure courts do not create additional burdens for citizens the state considers untrustworthy?

The fragmented, highly localized nature of these partnerships makes them difficult to track, which may be part of why they slip through the cracks. It is notable that they take place outside the spotlight trained on China’s top-tier cities: secondary cities become testing grounds for new governance experiments, and local governments are more likely to view these as prestige projects that will garner government favor and potential funding for other forms of technological upgrading.

Despite local officials’ pride in these AI tools, details remain scant, which obscures the unavoidable and all-too-predictable risks the tools entail. As examples like these become more common, tech companies and local governments will hopefully come under greater media scrutiny at home and abroad, and may have to answer for how these “smart” new systems were developed and whom they actually help, or harm.

- post by Shazeda Ahmed, University of California, Berkeley
