Establishing performance-based metrics for AI systems gives consumers, businesses, and government an opportunity to better compare performance across different systems, as well as set minimum performance requirements.
Artificial intelligence (AI) has the potential to create significant economic and social benefits. However, concerns about the technology have prompted calls for policymakers to craft laws and regulations that ensure "responsible AI" without stifling innovation in the field.
In a new report, Daniel Castro, Director of the Center for Data Innovation, highlights 10 principles to guide policymakers in crafting and evaluating regulatory proposals for AI without harming innovation. These are summarised below as a quick reference for HR practitioners and employers:
1. Avoid pro-human biases: Allow AI systems to do what is legal for humans (and prohibit what is illegal too).
Rationale: Holding AI systems to a higher standard than applies to humans disincentivises the technology's use.
2. Regulate performance, not process: Address concerns about AI safety, efficacy, and bias by regulating outcomes rather than creating specific rules for the technology.
Rationale: Performance-based regulations allow for flexibility in how to meet objectives and do not impose potentially costly and unnecessary rules on AI systems.
3. Regulate sectors, not technologies: Set rules for specific AI applications in particular sectors rather than creating broad rules for AI technologies generally.
Rationale: Context matters. An AI system that drives a vehicle is different from one that automates stock trades or diagnoses illnesses, even if they use similar underlying technologies.
4. Avoid AI myopia: Address the whole problem rather than fixate on the portion of a problem involving AI.
Rationale: Many problems need to be solved regardless of whether they involve AI. Focusing only on the AI portion of a problem often distracts from resolving the bigger issue.
5. Define AI precisely: Define AI clearly to avoid inadvertently including other software and systems within the scope of new regulations.
Rationale: AI covers a broad range of technology and is integrated into many products. Policymakers should not use broad definitions of AI if they only intend to regulate machine learning or deep learning systems.
6. Enforce existing rules: Hold AI systems accountable for adhering to existing regulations.
Rationale: Many laws already address common concerns about AI, such as those relating to worker safety, product liability, discrimination, and more.
7. Ensure benefits outweigh costs: Consider the full potential costs and benefits of regulations.
Rationale: Costs, including both direct compliance costs and indirect innovation and competitiveness costs, impact the merits of a regulatory proposal.
8. Optimise regulations: Maximise the benefits and minimise the costs of regulations.
Rationale: Policymakers should find the most efficient way to achieve their regulatory objective.
9. Treat firms equally: Apply rules equally to firms regardless of their size or where they are domiciled.
Rationale: Exempting certain firms from regulations creates an uneven playing field and puts consumers at risk.
10. Seek expertise: Augment regulatory expertise with technical and industry expertise.
Rationale: Technical experts can help regulators understand the impact of regulatory options.
Report author Castro noted: "Poorly crafted laws and regulations could delay or stall the adoption of technologies that could save lives, increase wages, and improve quality of life. Therefore, policymakers should proceed with caution and be guided by these core principles so that their quest for responsible AI does not result in the creation of irresponsible regulation."