Impact on Employers
AI’s Opportunity and Disruption in the Workplace
The increased use of generative AI in human resources (HR) and labor management decision-making is fraught with the risk of inaccuracy and bias. The opportunities these helpful new tools present are matched only by their ethical and legal risks. Without proper vetting and examination of AI tools and their underlying training data, subjective biases and improper stereotypes might infiltrate workplace processes and decisions, thus exposing a business to potential liability for discrimination.
“An employer that uses an online tool to manage any part of the recruitment, hiring, and interviewing process may become a focus for local, state, and federal enforcement agencies.”
Algorithmic Bias and Risk Management
Organizations are pivoting to AI systems and machine learning algorithms to help automate decision-making processes, both simple and complex. We’ve seen these tools assist in the recruiting, hiring, evaluation, promotion, compensation, workforce management, and DEI processes. They can take many forms, including but not limited to:
- Recruitment and outreach tools
- Screening software
- Chatbots
- Facial recognition technology
With more and more of these tools in use, the legal risk posed by bias within these algorithms continues to grow. These algorithms can unintentionally mirror and magnify human biases by forming assumptions about specific groups or individuals from data such as demographics, identities, backgrounds, and more. It’s important to ask, "Does the use of our algorithm prevent bias or amplify it, and what can we do about it?"
If left unchecked, biased algorithms can lead to harmful decisions that affect individuals. These algorithms are already governed by federal, state, and local laws intended to regulate the decision-making process and ensure fairness, equity, and transparency, and organizations must make sure their tools comply.
In some cases, the law requires auditing or approval of an algorithm. Organizations must manage their legal risks and ensure compliance with these requirements. Three key legal risk management tactics include:
- Testing for bias and conducting bias mitigation consulting, even when auditing is not required (a minimal illustration follows this list).
- Risk mitigation techniques that may make a product more likely to be approved by a government agency, such as the U.S. Food and Drug Administration (FDA).
- Risk mitigation techniques, including transparency and explainability, that may reduce the risk of civil liability, such as product liability.
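To make the first tactic concrete, here is a minimal sketch of one common bias test, the "four-fifths rule" used in adverse impact analysis under EEOC guidance: compare each group's selection rate to the highest group's rate and flag any ratio below 0.8. The data and column names are hypothetical, and a real audit would require far more statistical and legal rigor.

```python
# Minimal sketch of a four-fifths (80%) rule check on hypothetical
# screening-tool outcomes. Column names and data are illustrative only;
# a real bias audit requires statistical rigor and legal review.
import pandas as pd

# Hypothetical applicant outcomes from an automated screening tool.
outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   1,   0,   0,   1],
})

# Selection rate per group: the share of applicants the tool advanced.
rates = outcomes.groupby("group")["selected"].mean()
benchmark = rates.max()  # rate of the most-selected group

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"group {group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```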
Wage and Hour Compliance
AI is poised to disrupt employment law. AI may alter a worker’s overtime-exempt status or contribute to discriminatory compensation decisions. Epstein Becker Green predicts that the proliferation of AI workplace tools will affect the Fair Labor Standards Act and how it is applied. Key issues include whether AI could erode the duties test for overtime exemptions and whether classifying workers as independent contractors could become more difficult.
AI has the power to both cause and ameliorate pay gaps. Many state and local governments have implemented salary history bans, which prohibit employers from inquiring about an applicant’s past compensation or benefits when making interview, hiring, or compensation decisions. AI tools may violate these laws if they use past compensation to screen or select applicants. On the other hand, AI tools may give organizations a way to combat pay inequity by analyzing compensation across employees in similar roles and fields, identifying anomalies, and addressing them. Employers choosing to use such algorithms should make sure the tools comply with the salary history bans and wage and hour laws of the jurisdictions in which they operate.
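As a rough illustration of the pay-equity use case, the sketch below groups employees by role and flags salaries that deviate sharply from the role median. The fields, data, and the 20% threshold are assumptions for illustration; a defensible pay equity analysis would also control for legitimate factors such as tenure, location, and performance, and would typically be conducted under attorney-client privilege.

```python
# Hypothetical pay-equity anomaly scan: flag salaries that deviate
# more than 20% from the median for the same role. Fields, data, and
# the 20% threshold are illustrative assumptions only.
import pandas as pd

employees = pd.DataFrame({
    "employee_id": [1, 2, 3, 4, 5, 6],
    "role":        ["analyst", "analyst", "analyst", "engineer", "engineer", "engineer"],
    "salary":      [70000, 72000, 54000, 95000, 98000, 118000],
})

# Median pay per role serves as the comparison point.
employees["role_median"] = employees.groupby("role")["salary"].transform("median")
employees["deviation"] = (employees["salary"] - employees["role_median"]) / employees["role_median"]

# Flag anyone more than 20% above or below the role median for review.
anomalies = employees[employees["deviation"].abs() > 0.20]
print(anomalies[["employee_id", "role", "salary", "role_median", "deviation"]])
```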
Intellectual Property and AI
You’ve likely seen AI tools that can produce automated text, images, videos, and more. While generative AI can automate many activities and improve productivity and efficiency, organizations must consider how to navigate this new technology. As ethical, legal, and privacy issues emerge, it is critical for organizations to implement policies on the use of generative AI to ensure that intellectual property (IP), trade secrets, and other confidential information are not lost, stolen, or disclosed.
Examples of IP risks posed by AI:
- Potential copyright infringement claims from third parties
- Inability to copyright work produced and authored by AI
- Ownership of the tool and the product produced
Under a new policy from the U.S. Copyright Office, works created with the assistance of AI may be eligible for copyright protection, but proof of human authorship is still required. Organizations must understand the risks associated with these tools and how to protect themselves if they choose to use them.
Data Privacy and Cybersecurity
AI and machine learning applications process an ever-increasing universe of data, much of it personal, health care, financial, and other confidential business data. With that phenomenon comes a magnified risk of error, bias, and data insecurity.
It is imperative that organizations create and administer effective AI governance and compliance programs that keep pace with new AI regulations. This includes counseling clients on how to properly store and protect sensitive employee, patient, and personal data from unauthorized access by third parties or exposure through a breach.
It is also critical to manage cybersecurity and data privacy risks within the AI supply chain (a minimal technical sketch follows the list below). This often includes managing:
- Data privacy requirements and developments in different jurisdictions
- Risks in products/applications that use AI
- Contracts and contracting issues involving AI (both with AI vendors that supply services to an organization and with its customers)
- Contractual representations and warranties, indemnities, and data protection agreements
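As one minimal illustration of a technical safeguard in the AI supply chain, the sketch below pseudonymizes direct identifiers in an employee record before it is shared with a hypothetical third-party AI vendor. The field names and keyed-hash approach are illustrative assumptions; a real program would pair such measures with encryption, access controls, and the contractual protections listed above.

```python
# Minimal sketch: pseudonymize direct identifiers in an employee record
# before sharing it with a third-party AI vendor. Field names and the
# keyed-hash approach are illustrative assumptions, not a complete program.
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-this-in-a-secrets-manager"  # placeholder

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed hash so the vendor never sees it."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {
    "name": "Jane Doe",              # direct identifier: pseudonymize
    "email": "jane.doe@example.com", # direct identifier: pseudonymize
    "role": "analyst",               # non-identifying: pass through
    "tenure_years": 4,
}

IDENTIFIERS = {"name", "email"}
safe_record = {
    k: pseudonymize(v) if k in IDENTIFIERS else v
    for k, v in record.items()
}
# safe_record can now be sent to the vendor; the mapping back to real
# identities stays inside the organization.
print(safe_record)
```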
© 2024 Epstein Becker & Green, P.C. All rights reserved. Attorney advertising.