Companies across industries are using artificial intelligence, machine learning, and algorithmic decision making (collectively, ‘AI’) in their everyday business practices. To ensure that their use of AI complies with the law, companies must understand how their AI works and be able to explain it.
As legal regimes around the world work to keep up with AI, the direction of travel is clear: companies must be transparent about their AI practices.
Consider the following:
- In California, the California Consumer Privacy Act (CCPA) requires a business to inform consumers how it uses the personal information it collects. And the state’s Bolstering Online Transparency (B.O.T.) Act requires businesses to disclose when they use bots to communicate with consumers online in order to incentivize a sale or influence a vote.
- Regulators, such as the Federal Trade Commission, have recognized AI’s potential for bias and harm to vulnerable populations, and have stressed the importance of increased transparency.
- In the European Union and Brazil, the General Data Protection Regulation (GDPR) and the Lei Geral de Proteção de Dados (LGPD), respectively, require transparency and explainability. This means companies that use AI to process personal data and make decisions must, upon consumer request, be able to explain how those decisions are made in a concise, transparent, and intelligible way, using clear and plain language.
- International human rights and technology organizations such as the Council of Europe, World Economic Forum, and Institute of Electrical and Electronics Engineers have supported or developed guidelines that emphasize AI transparency.
Transparency means knowing how your AI works and being able to explain it to diverse audiences. This is easier said than done. AI involves complex and fast-paced technical data processing that, once activated, can be incredibly difficult (sometimes even impossible) to unpack, fix, or reverse. Companies with leading AI transparency practices recognize these inherent risks and dedicate time and human resources to understanding their AI technology: why and where they are deploying AI, how humans manage it, what the inputs are, how it is being trained, what it is designed to accomplish (and to avoid), and what inadvertent downstream harms could arise and how they can be mitigated.
This detailed understanding is a prerequisite to being able to explain how AI is being used. Under the existing legal landscape, companies should already be prepared to explain their AI to external audiences, whether through data policies or in response to proper consumer requests, regulator inquiries, media questions, or other demands. AI inputs and processing are complex and cannot be meaningfully explained without careful examination and attention; companies that wait until they are asked to think through these questions risk being unable to produce an answer.
So get to know your AI. Companies that do so will reap prudential benefits and be prepared for inquiries into their legal compliance for years to come. Companies that fail to do so face downstream risks that may be impossible to mitigate without potentially business-crippling limitations on their use of AI.
About the author:
Michael H. Rubin represents companies in high-stakes and complex litigation in courts throughout the United States and in regulatory matters internationally.