Structural Risks of Artificial Intelligence

Modern ethics discourse on artificial intelligence has tended to focus on a rather constrained category of harms: mistakes and accidents (as the authors of a recent post on Lawfare point out). In doing so, it attends to the immediate uses of AI while neglecting to consider AI's structural risks.

Artificial Intelligence has the potential to disrupt structural relations between labor and capital, to act as a force multiplier that spreads disruption across an entire ecosystem, and to undermine the very basis of what counts as truth or knowledge.


AI research could benefit from forms of institutional review

In the context of biomedicine in the U.S., these kinds of structural risks have been partially addressed through the recommendations of the Belmont Report, which recognized the structural risk posed by physicians leading clinical trials. Clinical research physicians were structurally responsible both for the care of their patients and for the production of generalizable medical knowledge, in a sense serving two masters. In recognition of this structural risk, the Belmont Report recommended the creation of Institutional Review Boards (IRBs) to uphold the autonomy and dignity of research subjects, and to ensure the use of research for beneficial purposes.

Artificial Intelligence research could benefit from similar forms of institutional review (see Metcalf and Crawford 2016), even if its structural risks and harms are quite different from those of biomedical research. A recent NSF-funded initiative, the PERVADE project, is seeking to clarify exactly how this might be done by evaluating and recommending forms of institutional review suitable for AI research. This effort, along with others like it, is a step in the right direction.

Social scientists are experts in understanding how bad outcomes can arise as unintended consequences, even of good intentions. This expertise stands somewhat in contrast to recent recommendations made by OpenAI to instrumentalize social science in the AI development pipeline, a perspective that looks more like social engineering than ethical research and development. Not only does AI research need reliable institutional review; the skills and expertise of social scientists should also inform this crucial work of addressing the structural risks of AI research.

  • by Manny Moss, PhD Candidate, CUNY Graduate Center; Research Analyst, Data & Society
