With AI usage skyrocketing, questions continue to be raised about ethical and responsible practices. Several comprehensive AI laws and regulations are working their way through legislative processes, such as the EU’s draft AI Act, and a host of standards, frameworks, and guidance has been issued, including NIST’s AI Risk Management Framework (AI RMF), ISO/IEC 23894 guidance on AI risk management, the OECD’s AI Principles, and others.
But as the Biden-Harris Administration noted, it’s not just about law – “when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.”
Below are some practical steps you can take to establish a foundation to minimize risks related to AI and machine learning (ML) technologies and promote the responsible use of AI.
3 steps to implement responsible AI
1. Establish your team
Given the nature of AI, privacy professionals find themselves in a unique position to lead the charge on responsible AI throughout their organization, balancing AI innovation with user privacy. However, the development and use of AI require a variety of roles to come together to understand, mitigate, and manage the risks that may arise.
NIST’s AI RMF highlights the benefit of treating AI risks alongside other critical risks; such an approach “will yield a more integrated outcome and organizational efficiencies”. In addition, the governance structure that emerges from bridging these stakeholders within your company will ensure a systematic approach to decisions concerning AI.
2. Develop an AI inventory
Make a list of all products, features, processes, and projects related to AI and ML, whether they're built in-house or sourced externally. From a privacy or IT risk management program perspective, you can build on your existing data maps or inventories. Otherwise, start by carrying out a data mapping exercise that covers both the processing of personal information and your AI and ML technologies. Data discovery can also help when determining how your AI systems will interact with different categories of data.
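Even a lightweight, structured record per system can make the inventory actionable. The sketch below is illustrative only; the field names, example entry, and status values are assumptions to adapt to your existing data maps and risk tooling, not a prescribed schema.

```python
# Illustrative sketch of an AI/ML inventory record. Field names and values are
# assumptions, not a prescribed schema; adapt them to your existing inventories.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemRecord:
    name: str                       # product, feature, process, or project
    owner: str                      # accountable team or individual
    source: str                     # "in-house" or the vendor's name
    purpose: str                    # what the system is used for
    data_categories: List[str] = field(default_factory=list)
    processes_personal_data: bool = False
    assessment_status: str = "not started"   # e.g., PIA or vendor assessment status

inventory = [
    AISystemRecord(
        name="Support ticket triage model",
        owner="Customer Operations",
        source="in-house",
        purpose="Route inbound tickets by topic and urgency",
        data_categories=["contact details", "ticket text"],
        processes_personal_data=True,
    ),
]

# Flag entries that touch personal data and still need an assessment
for record in inventory:
    if record.processes_personal_data and record.assessment_status == "not started":
        print(f"Assessment needed: {record.name} ({record.owner})")
```

A structure like this also makes it straightforward to feed the inventory into the assessment workflows described in the next step.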
3. Map your efforts against a framework
The NIST AI RMF aims to provide organizations that develop AI systems and technologies with a practical and adaptable framework for measuring and protecting against potential harm to individuals and society. Mapping your efforts against a framework may help in understanding how to expand or create new governance structures. For example, AI risk questions can be embedded into existing assessments and user workflows, which may be in the form of privacy impact assessments (PIAs) or vendor assessments. Policies, processes, and training can also be updated to include your organization’s approach to AI where necessary.
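To make that embedding concrete, the sketch below shows one way AI risk questions, loosely grouped under the NIST AI RMF functions (Govern, Map, Measure, Manage), might be appended to an existing PIA or vendor questionnaire. The question wording and data structure are illustrative assumptions, not language drawn from the framework itself.

```python
# Illustrative sketch of embedding AI risk questions into an existing assessment
# workflow, such as a PIA or vendor assessment. Question wording and structure
# are assumptions to adapt, not a prescribed checklist.

AI_RISK_QUESTIONS = [
    {"ref": "Map", "question": "What is the intended purpose and context of use of the AI system?"},
    {"ref": "Map", "question": "Which categories of data, including personal data, does the system use?"},
    {"ref": "Measure", "question": "How is the system evaluated for accuracy, bias, and robustness?"},
    {"ref": "Manage", "question": "How are AI incidents monitored, escalated, and remediated?"},
    {"ref": "Govern", "question": "Who is accountable for decisions about deploying this system?"},
]

def extend_assessment(existing_questions):
    """Append AI risk questions to an existing PIA or vendor questionnaire."""
    return list(existing_questions) + AI_RISK_QUESTIONS

# Example: a single existing PIA question extended with the AI risk questions
pia = [{"ref": "Privacy", "question": "What personal data is collected, and why?"}]
for item in extend_assessment(pia):
    print(f"[{item['ref']}] {item['question']}")
```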
Championing responsible AI together
AI and ML have already been among the most talked-about issues of the year. The speed at which the technology, its risks, and the policy around them are developing makes this a unique challenge for organizations and professionals. By building a dedicated, cross-functional team, developing your AI inventory, and embracing a framework to structure your efforts, you can minimize the risks linked to AI usage and foster responsible AI practices. The ethical and moral imperative of responsible AI calls for a collective effort from organizations, developers, and policymakers to ensure that AI innovation remains synonymous with the protection of rights, safety, and trust for everyone.
To learn more about how your organization can get started with AI governance, download the whitepaper, “Navigating responsible AI: a privacy professional’s guide”.