Turn Technologies, Inc. Effective Date: March 31, 2026. Last Reviewed: March 31, 2026.
Turn Technologies, Inc. (“Turn,” “we,” “us”) provides AI-powered background screening and workforce compliance infrastructure to employers, staffing platforms, and integration partners. Artificial intelligence and machine learning (“AI”) are central to how Turn delivers faster, more accurate, and more compliant screening services.
This Responsible AI Policy (“Policy”) describes how Turn develops, deploys, governs, and monitors its AI systems. It is intended to provide transparency to our partners, their customers, job applicants, regulators, and the public about our AI practices, and to document our commitment to operating AI that is fair, accountable, and legally compliant.
This Policy satisfies disclosure obligations under applicable laws including the Colorado Anti-Discrimination in AI Law (SB 24-205) and reflects our voluntary commitment to responsible AI transparency consistent with the EU AI Act and emerging U.S. state frameworks.
This Policy applies to all AI systems operated by Turn in connection with its background screening, identity verification, and compliance platform (the “Services”), including the capabilities described below.
Turn currently operates the following AI-powered capabilities in connection with the Services:
- **The Assistant** — provides contextual guidance to platform users interpreting screening reports, navigating FCRA compliance requirements, and understanding regulatory obligations. The Assistant is informational only and does not modify report data or influence employment outcomes.
- **ClarifAI** — translates complex criminal history and motor vehicle record (MVR) language into clear, standardized summaries designed to improve consistency and fairness in human decision-making. ClarifAI outputs are informational and subject to human review.
- **Record matching** — uses machine learning models to match records accurately across multiple data sources, reducing false positives and improving report precision. All matches are validated by qualified Turn personnel before inclusion in a final report.
- **Identity and document verification** — authenticates identity documents and detects fraud by analyzing data from IDs, consent forms, and credit bureau records, identifying forgery, tampering, and synthetic identity manipulation. Human review is required for all flagged items.
- **Operational AI** — automates internal quality control, order management, and workflow processes to improve efficiency. Operational AI does not interact directly with candidate-facing outputs without human review.
Turn may introduce new AI capabilities from time to time. Material new capabilities that affect data use or compliance obligations will be disclosed to partners prior to deployment.
Turn’s AI governance is grounded in four core principles:
No Turn AI System makes independent final determinations affecting employment decisions or report content. Every AI-influenced output is subject to review and validation by qualified Turn personnel before it affects a candidate’s report, screening status, or compliance outcome. Human oversight is enforced as a technical control within the platform, not merely stated as a policy.
Turn is committed to designing and monitoring AI Systems to avoid bias and discriminatory outcomes. We conduct regular bias audits measuring disparate impact across protected characteristics, including race, sex, national origin, religion, disability, and age, consistent with EEOC guidance, Colorado SB 24-205, and applicable state and federal law.
Where bias risks are identified, we take prompt corrective action including data re-balancing, model adjustment, or enhanced human review. Turn does not use protected class characteristics as model features.
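As an illustrative sketch only (not Turn's actual audit tooling), the kind of disparate-impact screen referenced above is often implemented using the EEOC "four-fifths rule": a group whose selection rate falls below 80% of the most favored group's rate is flagged for further review. The function names and example figures below are hypothetical.

```python
# Hypothetical sketch of a four-fifths-rule disparate impact check.
# Not Turn's production code; names and data are illustrative only.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected_count, total_count)."""
    return {group: sel / tot for group, (sel, tot) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate / best) < threshold for group, rate in rates.items()}

# Hypothetical example: group_b's rate (0.30) is 62.5% of group_a's (0.48),
# below the 80% threshold, so it would be flagged for review.
flags = four_fifths_flags({"group_a": (48, 100), "group_b": (30, 100)})
```

A flag under this rule is a trigger for the corrective actions described above (data re-balancing, model adjustment, or enhanced human review), not an automatic finding of discrimination.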
Turn maintains internal documentation of its AI systems’ purpose, operation, versioning, validation results, and performance benchmarks.
Turn processes personal data through its AI Systems only to the extent necessary to deliver contracted Services. Turn does not use live, identifiable, or partner-specific candidate data to train or improve its AI models.
All AI model improvements rely exclusively on aggregated, anonymized, or synthetic data generated through Turn’s internal validation and audit processes. Customer Data remains the exclusive property of the Partner or its customers.
All AI features undergo rigorous pre-deployment testing including accuracy benchmarking, bias evaluation, and adversarial testing before production release. New features require documented validation results and formal internal approval.
Turn monitors deployed AI Systems continuously for accuracy, performance drift, and bias. Where performance falls below internal thresholds, corrective action is initiated promptly.
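As a hedged illustration of what "performance drift" monitoring can look like (this is not a description of Turn's internal thresholds or tooling), one common technique is the Population Stability Index (PSI), which compares a model's current score distribution against a baseline; values above roughly 0.2 are conventionally treated as significant drift.

```python
import math

# Hypothetical PSI drift sketch; not Turn's production monitoring code.
def psi(baseline, current, eps=1e-6):
    """baseline, current: lists of score-bin proportions, each summing to 1.

    Returns the Population Stability Index between the two distributions.
    """
    total = 0.0
    for b, c in zip(baseline, current):
        b, c = max(b, eps), max(c, eps)  # guard against log(0)
        total += (c - b) * math.log(c / b)
    return total

# Identical distributions yield a PSI of zero (no drift).
no_drift = psi([0.25, 0.25, 0.25, 0.25], [0.25, 0.25, 0.25, 0.25])
```

In a monitoring pipeline, a PSI reading above the chosen internal threshold would trigger the corrective action described above.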
All material updates to AI models, including retraining and new feature releases, are subject to Turn’s formal change management process requiring documentation, validation, and human approval. Partners receive advance notice of material changes to data use or compliance obligations.
Turn maintains governance records sufficient to demonstrate compliance with applicable laws and its Responsible AI Principles. Partners may request a summary review of Turn’s AI governance documentation no more than once per calendar year, subject to confidentiality requirements and reasonable advance notice.
Turn will notify affected partners within seventy-two (72) hours of confirming a material AI-related incident, including system malfunctions or data breaches materially affecting the accuracy, integrity, or lawful operation of the Services. Turn will document findings, take corrective action, and share a remediation summary upon request.
Turn’s AI governance framework is designed to comply with, and support partner compliance with, applicable laws including:
| Law / Regulation | Jurisdiction | Relevance to Turn |
|---|---|---|
| Fair Credit Reporting Act (FCRA) | United States (Federal) | Core compliance requirement for all consumer reports and AI outputs used in employment screening |
| EEO / EEOC Algorithmic Fairness Guidance | United States (Federal) | Bias testing and non-discrimination standards |
| Colorado ADAI (SB 24-205) | Colorado | Public disclosure requirement; algorithmic discrimination prevention; impact assessments |
| EU AI Act (Regulation 2024/1689) | European Union | High-risk AI employment system obligations effective Aug 2, 2026 |
| California FEHA — Automated-Decision Systems | California | Agency liability for AI developers in employment decisions; effective Oct 1, 2025 |
| Illinois Human Rights Act (HB 3773) | Illinois | Prohibition on AI with discriminatory effect in employment; effective Jan 1, 2026 |
| GDPR / UK GDPR | EU / United Kingdom | Data protection, lawful basis for processing, data subject rights |
| Applicable State ADM Laws | 18+ U.S. states | Consumer rights to opt out of automated processing in employment decisions |
Turn monitors evolving AI laws and regulations and updates its practices accordingly. This Policy is reviewed no less than annually and updated when material changes in law, Turn’s AI systems, or governance practices require.
Turn recognizes that background screening decisions affect real people. To the extent required by applicable law, and through coordination with our Partners:
Individuals will be informed when AI has materially influenced a screening output that affects their employment opportunity, consistent with FCRA adverse action requirements and applicable state AI transparency laws.
Individuals may request that a qualified human review any AI-influenced determination. Partners are responsible for routing and honoring such requests, and Turn will support Partners in fulfilling them.
In jurisdictions where applicable law provides a right to opt out of automated processing for employment decisions, Turn will work with Partners to support that right.
The FCRA’s existing dispute and reinvestigation process applies to all Turn screening outputs, including AI-assisted outputs. Information on dispute procedures is included in every consumer disclosure.
Turn does not use its AI Systems for, and contractually prohibits Partners from directing Turn’s AI toward, purposes outside the Services described in this Policy.
Questions, concerns, or requests regarding this Policy or Turn’s AI practices may be directed to:
Turn Technologies, Inc. — Compliance Team
Email: compliance@turn.ai
Address: 311 West Monroe Street, 3rd Floor, Chicago, IL 60606
Turn maintains an internal review process for ethical concerns raised by employees, partners, regulators, or members of the public regarding AI behavior or outcomes. Confirmed issues are investigated, documented, and remediated.
This Policy will be reviewed and updated at least annually, and whenever Turn introduces material new AI capabilities or when applicable law requires revision. The current version is always available at turn.ai/legal/responsible-ai-policy. Material updates will be communicated to active partners prior to the effective date.