A pivotal moment in artificial intelligence (AI) regulation unfolded when Colorado Governor Jared Polis signed Senate Bill (SB) 24-205 into law. This legislation, titled “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” and known as the Colorado AI Act, marks a groundbreaking step in the conscientious oversight of AI. Its primary intent is to impose anti-discrimination safeguards on the developers (creators) and deployers (users) of AI systems that pose a “high risk” of facilitating “algorithmic discrimination.”
The initiative marks a significant step toward prioritizing AI governance across the United States. Here is what you need to know about Colorado’s new AI legislation.
The Criteria for “High-Risk” AI Systems in Colorado
Colorado defines “high-risk” AI systems as those that make, or are a “substantial factor” in making, “consequential decisions.” The Colorado AI Act defines “consequential decisions” as those with a significant effect on Colorado residents’ access to essential services, opportunities, entitlements, and matters related to employment or employment opportunities. Within this regulatory scope, a “substantial factor” is one that meets the following conditions:
i. Assists in making a consequential decision
ii. Is capable of altering the outcome of a consequential decision
Defining “Algorithmic Discrimination”
Algorithmic discrimination encompasses any instance in which an AI system treats individuals unfavorably on the basis of protected characteristics. The definition is intentionally broad, covering a wide range of characteristics, such as age, race, and disability, that could give rise to discrimination. Developers and deployers must therefore exercise vigilance in adhering to the Colorado AI Act, given the many contexts in which algorithmic discrimination can occur. A few examples of AI applications that can lead to algorithmic discrimination include:
- Healthcare Algorithms – Algorithms used in healthcare, such as those that forecast patient outcomes or prioritize care, can unintentionally exhibit bias against particular demographics. For instance, if a predictive model relies solely on medical expenses to assess patient need, it could unfairly prioritize younger or more affluent patients at the expense of older or lower-income individuals.
- Recruitment Algorithms – Some companies use recruitment algorithms to assess job applications, and these algorithms may unintentionally discriminate against specific groups based on factors such as name, gender, or ZIP code. For example, an algorithm trained on past hiring patterns may learn a preference for a particular demographic and perpetuate that bias by favoring candidates with similar traits.
- Automated Decision-Making in Criminal Justice – In some jurisdictions, algorithms aid in bail, sentencing, and parole determinations. These algorithms may be trained on biased data, however, producing more severe outcomes for specific groups. For example, an algorithm that considers a defendant’s socioeconomic status may propose higher bail amounts or longer sentences.
- Credit Lending Algorithms – In credit lending, ZIP codes or neighborhood data may serve as proxies for race or socioeconomic status. Because historically marginalized groups are often concentrated in certain areas, using ZIP codes as a factor in credit decisions can indirectly discriminate against them.
If your organization uses algorithmic or machine learning processes like the examples above, it is important to invest in a robust AI risk assessment framework to reduce the likelihood of algorithmic discrimination; even a simple statistical check, like the one sketched below, can surface patterns worth investigating. To that end, the Colorado AI Act includes several requirements designed to help Colorado developers and deployers combat the wide range of discriminatory patterns seen throughout AI.
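To make that concrete, here is a minimal sketch in Python of one widely used screening check: the disparate impact ratio, often read against the informal “four-fifths rule.” The data, group labels, and 0.8 threshold are illustrative assumptions, not anything prescribed by the Colorado AI Act.

```python
# Minimal sketch: disparate impact ratio ("four-fifths rule") as a
# first-pass screen for algorithmic discrimination. The data, group
# labels, and 0.8 threshold are illustrative assumptions.

from collections import defaultdict

def disparate_impact_ratio(decisions, groups):
    """Return (min selection rate / max selection rate, per-group rates).

    decisions: iterable of 0/1 outcomes (1 = favorable decision)
    groups:    iterable of group labels aligned with decisions
    """
    favorable = defaultdict(int)
    total = defaultdict(int)
    for outcome, group in zip(decisions, groups):
        favorable[group] += outcome
        total[group] += 1
    rates = {g: favorable[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical loan decisions across two groups of applicants.
decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0]
groups = ["A"] * 6 + ["B"] * 6

ratio, rates = disparate_impact_ratio(decisions, groups)
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common four-fifths rule of thumb
    print("Potential adverse impact -- investigate before relying on this system.")
```

A ratio this far below 0.8 would not by itself prove discrimination, but it is exactly the kind of signal a risk assessment framework should surface and document.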
Deployer Duties Under the Colorado AI Act
Under the Act, deployers of high-risk AI systems must implement a risk management policy and program governing their use, and the statute points to recognized frameworks such as the NIST AI Risk Management Framework (AI RMF) as a benchmark for what counts as reasonable. As a member of the U.S. AI Safety Institute Consortium (AISIC), a group of 200+ leading AI stakeholders dedicated to promoting the ethical practice and deployment of AI across sectors, Drummond has the resources to maximize the efficacy of your anti-discrimination policies by performing thorough third-party AI risk assessments using the NIST AI RMF. Regular evaluations help ensure your AI systems operate in a manner that minimizes the likelihood of algorithmic bias, paving the way for AI solutions that prioritize fairness, accountability, and integrity throughout deployment.
In addition to a risk management program, impact assessments for high-risk AI systems are mandatory: annually, and again following any significant modification to the system. Like New York City’s AI law (Local Law 144), these assessments examine the system’s purpose, potential risks of discrimination, data processing, performance metrics, transparency measures, and user safeguards. Deployers must also:
- Notify consumers before a covered AI system is used to make a consequential decision about them, with detailed disclosures about the system’s purpose and nature.
- Give adversely affected consumers additional notice, an opportunity to correct inaccurate data, and an avenue to appeal.
- Maintain a clear and accessible website notice outlining their use of AI systems, their risk management strategies, and their data collection practices.
- Report any algorithmic discrimination they discover to Colorado’s attorney general within ninety days of detection.
Because these assessments recur, it helps to keep them in a structured, repeatable form; one possible shape is sketched below.
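The Act names the contents of an impact assessment but prescribes no format. Here is a minimal sketch, assuming the assessment is tracked as a structured record; the class and field names are our own illustrative choices, not statutory language.

```python
# Hypothetical record for the impact-assessment contents the Act
# enumerates. Field names are illustrative, not a mandated schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str                        # stated purpose of the system
    discrimination_risks: list[str]     # known or foreseeable risks
    data_processed: list[str]           # categories of data processed
    performance_metrics: dict[str, float]
    transparency_measures: list[str]    # notices and disclosures given
    user_safeguards: list[str]          # correction and appeal avenues
    assessed_on: date = field(default_factory=date.today)

assessment = ImpactAssessment(
    system_name="resume-screener-v2",
    purpose="Rank applicants for interview scheduling",
    discrimination_risks=["proxy bias via ZIP code", "gendered language cues"],
    data_processed=["resume text", "application metadata"],
    performance_metrics={"selection_rate_ratio": 0.91, "auc": 0.84},
    transparency_measures=["pre-use notice to applicants"],
    user_safeguards=["human review on request", "appeal process"],
)
print(assessment)
```

Keeping assessments in a form like this makes the annual re-assessment, and the re-assessment after significant modifications, a matter of updating fields rather than rewriting documents.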
To encourage the continuous reduction of algorithmic discrimination across the AI sector, Colorado also built an affirmative defense into the Colorado AI Act. In this context, an affirmative defense means that if a deployer uncovers and cures a violation through user feedback, adversarial testing or red teaming (as defined by NIST), or internal review, it may be absolved of liability or face reduced penalties. It therefore behooves organizations to be proactive in their ongoing risk management efforts when deploying such systems; the counterfactual test sketched below illustrates one simple form such testing can take.
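One lightweight form of adversarial testing is a counterfactual flip test: change only a protected attribute and check whether the model’s output changes. The sketch below uses a stand-in scoring function and illustrative record fields; it is an assumption-laden starting point, not a NIST-defined procedure.

```python
# Minimal counterfactual flip test: vary one protected attribute while
# holding everything else fixed, and flag any change in the model's
# score. `score_applicant` is a stand-in for a real model, and the
# record fields are illustrative assumptions.

def score_applicant(record: dict) -> float:
    # Stand-in model; a real deployment would call the trained system.
    return min(0.5 + 0.01 * record["years_experience"], 1.0)

def counterfactual_flip_test(record, attribute, alternatives, tol=1e-6):
    """Return the baseline score and any attribute values that change it."""
    baseline = score_applicant(record)
    flagged = []
    for alt in alternatives:
        variant = {**record, attribute: alt}
        score = score_applicant(variant)
        if abs(score - baseline) > tol:
            flagged.append((alt, score))
    return baseline, flagged

record = {"years_experience": 7, "gender": "female", "zip": "80202"}
baseline, flagged = counterfactual_flip_test(record, "gender", ["male", "nonbinary"])
print(f"Baseline score: {baseline:.3f}")
print("Score-changing flips:", flagged or "none detected")
```

A clean result on one record proves little; in practice you would run flips across many records and attributes, and keep the outputs as evidence of the kind of internal review the affirmative defense credits.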
Developer Duties Under the Colorado AI Act
Under the Colorado AI Act, developers of high-risk AI systems must provide the following documentation and information to deployers or other developers of the system, as well as to the Colorado Attorney General within 90 days of a request:
- An overarching statement delineating the foreseeable and detrimental uses of the high-risk AI system.
- Documentation that provides:
  - Concise overviews of the data types used to train the high-risk AI system.
  - Known or anticipated limitations of the high-risk AI system, including the potential for algorithmic discrimination stemming from its intended uses.
  - The purpose of the high-risk AI system.
  - The projected benefits and applications of the high-risk AI system.
  - All other information the deployer needs to comply with its legal obligations.
- Evaluation methodologies used to assess the performance of the high-risk AI system and to mitigate algorithmic discrimination before the system is made available.
- Data management strategies addressing training datasets and assessments of data source appropriateness, potential biases, and mitigative measures.
- The intended results of the high-risk AI system.
- Steps taken by the developer to minimize recognized or foreseeable risks of algorithmic discrimination potentially arising from the anticipated deployment of the high-risk AI system.
- Guidelines detailing the appropriate usage, non-usage, and monitoring procedures for individuals when the high-risk AI system is involved in consequential decision-making.
- Any supplementary documentation deemed necessary to aid the deployer in comprehending the outputs and monitoring the high-risk AI system’s performance for algorithmic discrimination risks.
Where possible, these details and documentation should be made available through industry-standard artifacts such as model cards, dataset cards, or other impact assessments; a hypothetical example of a minimal model card follows below. These artifacts are crucial for deployers or third-party assessors conducting the mandated impact assessments. Notably, developers who both develop and deploy a high-risk AI system are exempt from producing this documentation unless the system is handed over to an unaffiliated deployer.
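There is no mandated schema for such artifacts, but the disclosure items above map naturally onto one. A minimal sketch follows, assuming the card is kept as structured data; the keys are illustrative, and many teams maintain model cards as YAML or Markdown instead.

```python
# Hypothetical minimal model card covering the developer disclosures
# listed above. The keys are illustrative assumptions, not a schema
# defined by the Colorado AI Act.

import json

model_card = {
    "model": "credit-risk-scorer-v3",
    "purpose": "Estimate default risk for consumer loan applications",
    "intended_uses": ["loan underwriting support"],
    "known_harmful_uses": ["employment screening"],
    "training_data_summary": "Anonymized loan outcomes, 2015-2023",
    "known_limitations": [
        "ZIP code may act as a proxy for protected class",
        "thin-file applicants are underrepresented",
    ],
    "evaluation": {
        "performance": {"auc": 0.86},
        "discrimination_testing": "selection-rate ratios compared across groups",
    },
    "risk_mitigations": ["ZIP code removed from features", "quarterly re-testing"],
    "usage_and_monitoring_guidance": "Review selection-rate ratios monthly",
}

print(json.dumps(model_card, indent=2))
```

A card like this doubles as the developer-to-deployer handoff, since it carries most of the items a deployer needs for its own impact assessment.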
What Comes Next?
As the Colorado AI Act marks yet another stride in AI system regulation, the need for comprehensive AI assessment services is pressing. Drummond’s AI assessment services can support these requirements, helping to ensure compliance with the stringent obligations outlined in the Colorado AI Act. With our help, your organization can conduct robust assessments that scrutinize the impact of high-risk AI systems and root out potential sources of algorithmic discrimination.