New York City recently passed a law aimed at combating bias in AI hiring systems. Starting July 5th, the law requires companies to disclose their use of AI in hiring and to conduct annual audits to ensure their systems do not discriminate on the basis of race or sex. The audits must be done by third-party entities, which will scrutinize these systems for bias, intentional or not, that may be ingrained within them. Job candidates can also request information about what data is collected about them. Companies that fail to comply face fines of up to $1,500 per violation.
The law took effect amid widespread concern about bias in AI hiring tools. The chair of the US Equal Employment Opportunity Commission has estimated that about 80% of companies use some form of automation in hiring.
Companies are not just collecting data from resumes. Some, for example, apply AI tools to video interviews, analyzing facial expressions and word choice. Critics worry that such systems may perpetuate existing biases and discrimination, leading to unfair treatment of certain groups of applicants.
Advocates of the law argue that it marks an encouraging first step in regulating AI and mitigating its risks, even if it falls short of perfection. It compels companies to gain a deeper understanding of the algorithms they employ, forcing them to confront any potential biases against women or people of color. It also represents a rare triumph in US AI policy, paving the way for similar local regulations elsewhere. However, the efficacy of independent audits remains in question, given the relative immaturity of the auditing industry: it is unclear how much access auditors will have to a company's information and how deeply they will be able to probe its operations. It remains to be seen how effective the law will be in practice, but it represents an important step toward ensuring fairness and equality in the use of AI in hiring.
Further Possibilities
1. AI Ethics and Compliance Training
Offer training programs or workshops to educate HR professionals and hiring managers about the ethical use of AI in recruitment. Provide guidance on mitigating biases, understanding legal requirements, and implementing best practices.
2. Diverse and Representative Training Data
Develop training data for AI hiring systems that is diverse and representative of the population. Incorporate data from a variety of sources and demographics to reduce the risk of biased outcomes. This includes taking steps to address historical imbalances and the underrepresentation of certain groups in the training data.
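As an illustration, underrepresentation in training data can be surfaced with a simple share comparison. The sketch below assumes each training example carries a demographic group label and compares group shares against reference population shares; the `representation_gaps` helper and the 5% tolerance are illustrative assumptions, not a legal or statistical standard.

```python
from collections import Counter

def representation_gaps(samples, reference, tolerance=0.05):
    """Compare group shares in a training sample against reference
    population shares, flagging groups that fall short of their
    reference share by more than `tolerance`."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference.items():
        actual = counts.get(group, 0) / total
        if expected - actual > tolerance:
            gaps[group] = round(expected - actual, 3)
    return gaps

# Hypothetical sample: group B is underrepresented vs. a 50/50 reference.
sample = ["A"] * 80 + ["B"] * 20
print(representation_gaps(sample, {"A": 0.5, "B": 0.5}))  # {'B': 0.3}
```

In practice the reference shares would come from census or applicant-pool statistics, and flagged gaps would prompt targeted data collection or resampling.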
3. AI Bias Detection and Mitigation Tools
Develop sophisticated AI-powered tools that detect and mitigate common biases in the hiring process. These tools can remind hiring managers of unconscious biases, such as anchoring bias, affinity bias, similarity bias, halo effect, and confirmation bias.
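One concrete check such tools could implement is the impact ratio used in adverse-impact analysis: each group's selection rate divided by the highest group's rate, with ratios below 0.8 often treated as a red flag under the EEOC's "four-fifths rule." A minimal sketch, where the `impact_ratios` helper and the audit numbers are hypothetical:

```python
def impact_ratios(selections):
    """selections maps group -> (hired, applied). Returns each group's
    selection rate divided by the highest group's selection rate.
    Ratios below 0.8 are commonly treated as evidence of adverse
    impact under the four-fifths rule."""
    rates = {g: hired / applied for g, (hired, applied) in selections.items()}
    top = max(rates.values())
    return {g: round(rate / top, 3) for g, rate in rates.items()}

# Hypothetical audit numbers: group_b is selected at half group_a's rate.
ratios = impact_ratios({"group_a": (60, 100), "group_b": (30, 100)})
print(ratios)  # {'group_a': 1.0, 'group_b': 0.5}
```

A bias-detection tool could run this on every screening stage and alert hiring managers whenever a group's ratio drops below the threshold.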
4. Open-Source AI Bias Libraries
Create open-source libraries specifically dedicated to mitigating bias in AI hiring systems. These libraries would feature pre-trained models, datasets, and tools designed to help developers build more inclusive and unbiased AI algorithms. By fostering collaboration and knowledge sharing, this approach democratizes AI bias mitigation and accelerates progress in the field.
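As a sketch of what such a library might contain, the "reweighing" preprocessing technique of Kamiran and Calders assigns each training example the weight w(g, y) = P(g) * P(y) / P(g, y), so that group membership and hiring outcome become statistically independent in the weighted data. The `reweighing` function below is an illustrative self-contained implementation, not an excerpt from any existing library:

```python
from collections import Counter

def reweighing(groups, labels):
    """Compute instance weights w(g, y) = P(g) * P(y) / P(g, y).
    Examples from (group, label) pairs that are rarer than independence
    would predict get weights above 1, and vice versa."""
    n = len(groups)
    pg = Counter(groups)             # counts per group
    py = Counter(labels)             # counts per label
    pgy = Counter(zip(groups, labels))  # counts per (group, label) pair
    return [
        (pg[g] / n) * (py[y] / n) / (pgy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group A is favorably labeled more often than group B.
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweighing(groups, labels)
# weights ≈ [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

The weights would then be passed to any learner that accepts per-sample weights, upweighting the underrepresented (group, outcome) combinations during training.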
5. Candidate Data Privacy Solutions
Create secure platforms or services that enable candidates to control their personal data and understand how it is collected and used during the hiring process. These solutions can provide transparency, giving candidates peace of mind and fostering trust between job seekers and potential employers.
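One way such a platform could deliver transparency is a structured disclosure record that a candidate can request. The schema below is purely hypothetical; the class name, field names, and retention period are illustrative assumptions, not requirements of the law:

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class DataDisclosure:
    """Hypothetical record of what a hiring tool collected about a
    candidate, returned to the candidate on request."""
    candidate_id: str
    collected_on: date
    data_sources: list = field(default_factory=list)   # e.g. resume, video interview
    features_used: list = field(default_factory=list)  # signals the model consumed
    retention_days: int = 365                          # illustrative retention period

disclosure = DataDisclosure(
    candidate_id="c-1001",
    collected_on=date(2023, 7, 5),
    data_sources=["resume"],
    features_used=["years_experience", "skills"],
)
print(asdict(disclosure)["data_sources"])  # ['resume']
```

Serializing the record with `asdict` makes it straightforward to return as JSON from a candidate-facing API.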