The Problem
Employers are using AI to screen resumes, score candidates, conduct video interviews, and even make termination recommendations. The scale is impressive — but so is the legal exposure. NYC Local Law 144 requires annual bias audits for AI hiring tools. Illinois requires consent for AI video interviews. Colorado's AI Act mandates impact assessments. The EEOC has made clear that employers — not AI vendors — are liable for discriminatory hiring decisions, regardless of whether an algorithm made the call.
- AI resume screening may systematically disadvantage protected classes
- Candidate scoring models lack explainable rationale for individual decisions
- No structured documentation for NYC Local Law 144 bias audit requirements
- Adverse impact analysis requires decision-level data most AI tools don't produce
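The last bullet is concrete: both NYC Local Law 144 and the EEOC's four-fifths rule work from selection rates computed per demographic group, which is exactly the decision-level data many AI tools discard. A minimal sketch of that computation (the data shape and function names are our own illustration, not any vendor's API):

```python
def selection_rates(decisions):
    """Per-group selection rates from decision-level records.

    decisions: list of (group, selected) pairs, e.g. [("A", True), ...]
    """
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(decisions):
    """Impact ratio per group: its selection rate divided by the
    highest group's rate. Under the EEOC four-fifths rule, a ratio
    below 0.8 is an adverse-impact indicator."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Group A selected at 40%, group B at 20% -> B's ratio is 0.5, flagged.
decisions = ([("A", True)] * 40 + [("A", False)] * 60
             + [("B", True)] * 20 + [("B", False)] * 80)
print(impact_ratios(decisions))
```

Without per-decision records of who was scored and who advanced, this ratio cannot be computed at all, which is why the audit requirement forces structured logging.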
What Gets Submitted
What gets submitted when an AI hiring decision is audited
How the Gate Works
Submit Evidence
AI decision + evidence payload submitted for structured evaluation
Review Against Policy
Decision evaluated against HR & Hiring regulations and policy context
Verdict & Audit Trail
Structured verdict with failure categories, corrections, and immutable audit record
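The three steps above can be sketched as a single data flow: a decision plus its evidence goes in, and a verdict plus a tamper-evident audit record comes out. Every field name and the `evaluate` function below are hypothetical illustrations of the pattern, not Bookbag's actual schema or API:

```python
import hashlib
import json
from datetime import datetime, timezone

# Step 1: an illustrative evidence payload (field names are hypothetical).
submission = {
    "decision": {"candidate_id": "c-1042", "outcome": "reject", "score": 0.31},
    "evidence": {
        "criteria_weights": {"experience": 0.5, "skills_match": 0.5},
        "job_requirements": ["3+ years Python", "SQL"],
        "disclosures_sent": True,
    },
}

def evaluate(payload):
    """Steps 2 and 3 as a toy policy check: flag missing criteria
    weights or candidate disclosures, then emit a structured verdict
    plus an audit record that hashes the exact payload evaluated."""
    failures = []
    if not payload["evidence"].get("criteria_weights"):
        failures.append("Partial scoring - missing criteria weights")
    if not payload["evidence"].get("disclosures_sent"):
        failures.append("Missing required disclosure to candidate")
    verdict = {"pass": not failures, "failure_categories": failures}
    audit_record = {
        "verdict": verdict,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload_sha256": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest(),
    }
    return verdict, audit_record

verdict, audit_record = evaluate(submission)
```

Hashing the canonicalized payload into the audit record is one common way to make a log immutable in practice: any later change to the evidence no longer matches the stored digest.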
Evaluation Taxonomy
Failure Categories
- Protected characteristic proxy detected
- Employment gap penalty (policy violation)
- Culture fit score lacks objective basis
- Adverse impact indicator
- Missing required disclosure to candidate
- Scoring criteria not job-related
Business Impact
- EEOC complaint
- State AG investigation
- NYC LL 144 violation
- Class action employment discrimination
- Candidate trust and employer brand damage
Evidence Sufficiency
- Complete application with scoring breakdown
- Partial scoring — missing criteria weights
- Critical scoring factor undocumented
- Scoring conflicts with stated job requirements
Example Verdict
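A verdict of the kind described above might look like the following. The structure and field names are illustrative only, drawn from the failure categories and evidence-sufficiency tiers listed in this section, not from Bookbag's actual output format:

```json
{
  "verdict": "fail",
  "failure_categories": [
    "Employment gap penalty (policy violation)",
    "Culture fit score lacks objective basis"
  ],
  "corrections": [
    "Remove employment-gap feature from the scoring model",
    "Replace culture-fit score with documented, job-related criteria"
  ],
  "evidence_sufficiency": "Partial scoring - missing criteria weights",
  "audit_record": "illustrative-example-only"
}
```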
Compliance Frameworks
Frequently Asked Questions
Related Use Cases
Government Benefits
Ensure AI-driven eligibility determinations are fair, documented, and compliant with federal oversight mandates.
Legal Compliance
Ensure AI-assisted legal analysis, contract review, and compliance assessments are accurate, cited, and ethically sound.
Education
Ensure AI-driven admissions, grading, intervention recommendations, and student assessments are fair, explainable, and FERPA-compliant.
See how Bookbag audits AI decisions
Join the teams shipping safer AI with real-time evaluation, audit trails, and continuous improvement.