Does your company have an AI ethics dilemma?

Dovetail Labs co-founders Alexa Hagerty and Igor Rubinov, in an article in Information Age, tackle the question of whether companies that are new to AI need to 'care' about ethics. They offer five steps your company can take now.

While added agility, capacity, and capabilities are welcome in any enterprise, blind adoption of AI technologies can carry unintended and unforeseen consequences. Most businesses may not be building AI themselves, but many are beginning to use tools and processes that fall under the AI umbrella (through AWS, Google Cloud, Azure, and others). Companies that are new to these AI tools may be asking themselves: do we have to “care” about AI ethics? In short, the answer is yes.

As AI tools interface with the real world, different organisational structures and business processes can create a variety of challenges. No company that uses AI systems will be exempt from the moral ramifications of their deployment. Luckily, there are steps businesses can take.

As social scientists working on issues of ethical technology, we recommend that all companies, whether creators or adopters, commit to a serious and sustained engagement in order to understand the ethical implications and social impacts of AI. We’ve already seen that ‘off the shelf’ AI can be used in problematic ways (as the infamous Wang and Kosinski study of sexual orientation illustrates), so ‘only’ being an adopter does not excuse anyone from engaging with its ethical implications.

Here are five ways to address AI ethics in an organisation.

  1. Develop Ethics Principles. Most companies have a mission that is not clearly operationalised. It is imperative that every company align its work with its mission by establishing parameters to ensure that AI tools are used beneficially and in line with its values. These principles need to be clearly laid out and deployed across the organisation. This engagement must be rigorous, ongoing, and involve a range of stakeholders. A few informal conversations or a single meeting are not enough to tackle the complex issues that AI-driven technologies present.

  2. Talk with an AI Ethics Consultant. Consulting a specialist well versed in AI ethics helps a company see beyond its current engagements and gain industry-wide or global perspectives. By taking a deep dive into the actual uses of the technology, businesses can anticipate upcoming challenges and find ways to avoid future pitfalls. An AI ethics consultant can provide targeted, strategic guidance on making sense of an AI toolkit and accounting for its ethical implications.

  3. Provide Ethics Training. Training employees in AI ethics is immensely valuable. While this process can build on the ethics principles outlined above, training may also involve creating new tools, modules, and educational systems to help people throughout the company recognise and respond to ethical challenges. Online courses from edX and Coursera explore how to apply ethical and legal frameworks to data initiatives, along with practical approaches to the analytics problems AI poses. Enterprise-ready ethics training is a valuable resource for deploying current insights across an organisation.

  4. Discuss the Ethical OS Toolkit. A valuable toolkit for thinking about ethical issues has been developed by the Institute for the Future, with support from the Omidyar Network. The Ethical Operating System (Ethical OS) is a practical framework that helps tech makers, such as engineers and product managers, anticipate the impact of AI technologies. Even if an organisation is not a tech firm creating AI tools, this straightforward framework can help kick-start the conversation. Beyond helping to identify risk zones of potential social harm and to play out scenarios assessing the technology's long-term impacts, the Ethical OS toolkit can help generate future-proofing strategies that allow a company to take ethical action now.

  5. Hire an Ethics Director. One of the more robust steps a company can take is to allocate dedicated staff time and resources to tackling AI ethics (as Salesforce, 23andMe, Uber, and other technology companies have done). Depending on its AI footprint, a business may seek to hire a director or specialist who manages this work full-time. Alternatively, existing employees can take on this challenge and help ensure that ethics principles and training are deeply integrated into the work of the company at all levels.

Time to take action

While researchers are only beginning to understand the implications of AI, resources are emerging to help companies be proactive and resolve dilemmas before they cause harm. The five steps above are a good place to start thinking about the ethical implications of AI.

As AI systems continue to grow in importance and capacity, their impact on our workplaces, communities, and societies will only become more pronounced. These steps can help establish a robust AI ethics strategy that anticipates future risks and challenges. AI ethics is not going away. Companies need to be prepared.

Igor Rubinov