Effective October 1, 2025
California has taken a groundbreaking step in regulating artificial intelligence in the workplace. As of October 1, 2025, the state’s Civil Rights Council has implemented comprehensive regulations under the Fair Employment and Housing Act (FEHA) that fundamentally change how employers can use automated decision systems in hiring.
If your company uses AI tools, algorithms, or any automated software in recruitment, you need to understand these rules—because ignorance is no longer a defense.
The Bottom Line: No AI Shield from Liability
Here’s what every California employer needs to know: Using AI or automated tools does not protect you from discrimination liability. Period.
The Civil Rights Council has made it crystal clear that decisions made through automated systems are treated as the employer’s own actions. Whether a human or an algorithm screens resumes, ranks candidates, or flags applicants for rejection, your company bears full responsibility for any discriminatory outcomes.
This isn’t about whether AI is good or bad—it’s about accountability. Software used in hiring must now be treated like any other component of your hiring process: subject to bias scrutiny, oversight, and thorough documentation.
What Are Automated Decision Systems (ADS)?
Before we dive into compliance requirements, let’s clarify what falls under these regulations. Automated decision systems include any AI-driven, algorithmic, or rule-based tool used in recruitment, such as:
- Resume screening software that filters applications
- Profile matching algorithms that rank candidate fit
- Assessment tests with automated scoring
- Video interview platforms with AI-based evaluation
- Targeted job advertising with algorithmic delivery
- Chatbots that pre-screen candidates
- Predictive analytics tools that forecast candidate success
If it uses code, rules, or algorithms to help make hiring decisions, it’s likely covered.
Key Action #1: Inventory & Classify All ADS Tools
The first step toward compliance is knowing exactly what you’re using. This isn’t optional—it’s foundational.
Map Every Tool in Your Hiring Stack
Start by creating a comprehensive inventory of every automated tool that touches your recruitment process. Don’t overlook anything. That “simple” resume parser? It counts. The personality assessment test? Absolutely. The targeted LinkedIn job ads? Those too.
For each tool, you need to document (a sample inventory record is sketched after this list):
- Vendor name and contact information
- Software version
- Data sources the tool uses to make decisions
- Update frequency for the tool’s underlying logic
- Decision-making logic (if available from the vendor)
- Integration points with your human decision-making steps
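If your team tracks this inventory programmatically, a structured record keeps the documentation consistent across tools. Here is a minimal sketch in Python; the field names and the example vendor are illustrative assumptions, not anything prescribed by the regulations.

```python
from dataclasses import dataclass

@dataclass
class ADSInventoryRecord:
    """One entry in the ADS inventory. Field names are illustrative."""
    tool_name: str
    vendor_name: str
    vendor_contact: str
    software_version: str
    data_sources: list[str]              # inputs the tool uses to make decisions
    update_frequency: str                # how often the underlying logic changes
    decision_logic: str                  # whatever the vendor has disclosed
    human_integration_points: list[str]  # where humans review or override output

# Hypothetical example entry:
resume_screener = ADSInventoryRecord(
    tool_name="Resume Screener",
    vendor_name="Acme HR Tech",                    # fictional vendor
    vendor_contact="compliance@acme.example",
    software_version="4.2",
    data_sources=["resume text", "application form fields"],
    update_frequency="quarterly model updates",
    decision_logic="keyword and skills matching; vendor audit pending",
    human_integration_points=["recruiter reviews all auto-rejected applications"],
)
```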
Demand Transparency from Vendors
This is where employer-vendor relationships get tested. You need to ask tough questions:
- What anti-bias testing protocols have been implemented?
- Can you provide audit results or validation data?
- What disparate impact testing has been conducted?
- Who carries the burden of proof if a FEHA claim arises—you or the vendor?
That last question is critical. In a disparate impact lawsuit, someone will need to prove the tool doesn’t discriminate. Make sure you know whether your vendor contract addresses this, or if you’re on your own.
If a vendor can’t or won’t answer these questions, that’s a massive red flag. You may need to reconsider the partnership entirely.
Classify Tools by Risk Level
Not all automated tools carry equal risk. California employers should classify their ADS tools into risk categories:
High Risk: Tools that REJECT candidates
- Automated resume screeners that eliminate applicants
- Assessment tests with automatic disqualification thresholds
- AI interview platforms that can independently remove candidates from consideration
Medium Risk: Tools that RANK candidates
- Algorithms that score and order applicant pools
- Matching systems that create priority lists
- Predictive analytics that rate likelihood of success
Lower Risk: Tools that SUGGEST or SURFACE information
- Systems that recommend candidates for human review
- Dashboards that highlight applications
- Tools that organize information without making autonomous decisions
Your highest-risk tools should receive the most scrutiny, documentation, and human oversight.
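One way to keep this classification consistent across your hiring stack is to derive the tier from the most autonomous action a tool can take. The sketch below illustrates that rule; the capability flags and tier names are assumptions for the example, not regulatory categories.

```python
from enum import Enum

class RiskLevel(Enum):
    HIGH = "rejects candidates"     # can eliminate applicants on its own
    MEDIUM = "ranks candidates"     # scores or orders the applicant pool
    LOWER = "suggests or surfaces"  # informs humans without deciding

def classify_ads_tool(can_reject: bool, can_rank: bool) -> RiskLevel:
    """Classify a tool by the strongest action it can take without a
    human in the loop; the capability flags are illustrative."""
    if can_reject:
        return RiskLevel.HIGH
    if can_rank:
        return RiskLevel.MEDIUM
    return RiskLevel.LOWER

# An assessment test with an automatic disqualification threshold:
print(classify_ads_tool(can_reject=True, can_rank=True))  # RiskLevel.HIGH
```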
What Happens If You Don’t Comply?
The consequences of non-compliance can be severe. FEHA allows for:
- Individual lawsuits from affected candidates
- Class action litigation
- Civil Rights Department investigations
- Compensatory and punitive damages
- Attorney’s fees and costs
- Injunctive relief requiring changes to hiring practices
More importantly, if you can’t document your ADS tools, demonstrate bias testing, or show appropriate oversight, you’ll be in an extremely weak position defending against discrimination claims.
Taking Action: Your Next Steps
If you’re using AI or automated tools in hiring, here’s what you should do immediately:
- Audit your hiring technology stack – Create that comprehensive inventory we discussed
- Engage with your vendors – Ask for anti-bias testing documentation and clarify liability
- Assess your risk exposure – Classify tools and identify which require enhanced oversight
- Document everything – Create records of your due diligence and decision-making processes
- Train your HR team – Ensure everyone understands the new liability framework
- Establish human oversight protocols – Define when and how humans review automated decisions (one possible review rule is sketched after this list)
- Consult legal counsel – Consider having an employment attorney review your ADS usage and vendor contracts
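For the oversight step, it helps to write the review rule down as an explicit policy rather than leaving it to case-by-case judgment. The sketch below shows one possible rule, building on the risk tiers from the classification example above; the thresholds and action names are illustrative, not a legal standard.

```python
ADVERSE_ACTIONS = {"reject", "disqualify"}  # outcomes that remove a candidate

def requires_human_review(tool_risk: str, proposed_action: str) -> bool:
    """Illustrative policy: adverse actions from high- or medium-risk
    tools are held for human confirmation before they take effect."""
    return tool_risk in {"high", "medium"} and proposed_action in ADVERSE_ACTIONS

# A high-risk screener proposing a rejection is flagged for review:
print(requires_human_review("high", "reject"))    # True
print(requires_human_review("lower", "surface"))  # False
```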
The Bigger Picture
California’s regulations represent a significant shift in how we think about AI in hiring. Rather than treating automation as a neutral way to reduce bias or streamline processes, the law now recognizes these tools as extensions of the employer’s decision-making authority, and of the employer’s liability.
Other states are watching California’s approach closely. What happens here often becomes a template for national standards. Employers who get ahead of these requirements now will be better positioned as similar regulations emerge elsewhere.
Final Thoughts
AI in hiring isn’t going away, and it doesn’t have to. When designed and monitored properly, technology can help identify talent, reduce manual workload, and even mitigate certain types of bias.
But these new regulations send a clear message: Employers cannot outsource accountability to algorithms. The decision to use automated tools must come with a commitment to transparency, testing, documentation, and human oversight.
If you’re using AI in hiring, treat it like what it legally is—your own decision-making process. Because under California law, that’s exactly what it is.
Need help navigating these regulations? Consider consulting with employment counsel who understands both FEHA requirements and automated decision systems. The investment in compliance now can save substantial legal exposure down the road.
This blog post provides general information and does not constitute legal advice. Employers should consult with qualified legal counsel regarding their specific circumstances.