Responsible AI Policy

Turn Technologies, Inc. Effective Date: March 31, 2026. Last Reviewed: March 31, 2026.

1. Introduction and Purpose

Turn Technologies, Inc. (“Turn,” “we,” “us”) provides AI-powered background screening and workforce compliance infrastructure to employers, staffing platforms, and integration partners. Artificial intelligence and machine learning (“AI”) are central to how Turn delivers faster, more accurate, and more compliant screening services.

This Responsible AI Policy (“Policy”) describes how Turn develops, deploys, governs, and monitors its AI systems. It is intended to provide transparency to our partners, their customers, job applicants, regulators, and the public about our AI practices, and to document our commitment to operating AI that is fair, accountable, and legally compliant.

This Policy satisfies disclosure obligations under applicable laws including the Colorado Anti-Discrimination in AI Law (SB 24-205) and reflects our voluntary commitment to responsible AI transparency consistent with the EU AI Act and emerging U.S. state frameworks.

2. Scope

This Policy applies to all AI systems operated by Turn in connection with its background screening, identity verification, and compliance platform (the “Services”). It covers:

  • AI systems that process personal data about job applicants or workers;
  • AI systems that generate outputs used in, or capable of influencing, employment screening decisions;
  • AI systems used in Turn’s internal operations that interface with partner or applicant data.

3. Turn’s AI Systems

Turn currently operates the following AI-powered capabilities in connection with the Services:

3.1 Turn AI Assistant (Beta)

Provides contextual guidance to platform users interpreting screening reports, navigating FCRA compliance requirements, and understanding regulatory obligations. The Assistant is informational only and does not modify report data or influence employment outcomes.

3.2 ClarifAI

Translates complex criminal history and motor vehicle record (MVR) language into clear, standardized summaries designed to improve consistency and fairness in human decision-making. ClarifAI outputs are informational and subject to human review.

3.3 AI Record Matcher

Uses machine learning models to accurately match records across multiple data sources, reducing false positives and improving report precision. All matches are validated by qualified Turn personnel before being included in a final report.
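The matching flow described above (score a candidate record against the applicant, discard weak matches, and route ambiguous ones to human review) can be sketched in simplified form. This is an illustrative toy, not Turn's production matcher: the `SequenceMatcher` similarity and both thresholds are hypothetical stand-ins for learned models and validated cut-offs.

```python
from difflib import SequenceMatcher

# Hypothetical thresholds for illustration only; a production system
# would use trained models and empirically validated cut-offs.
AUTO_REJECT = 0.60   # below this, the candidate record is discarded
REVIEW_BAND = 0.90   # between the thresholds, a human adjudicates

def name_similarity(a: str, b: str) -> float:
    """Crude string similarity between two normalized names."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def route_match(applicant_name: str, record_name: str) -> str:
    """Route a candidate record: discard, human review, or confirm.

    Even 'confirmed' outcomes are validated by qualified personnel
    before inclusion in a final report, per the policy above.
    """
    score = name_similarity(applicant_name, record_name)
    if score < AUTO_REJECT:
        return "discard"
    if score < REVIEW_BAND:
        return "human_review"
    return "confirmed_pending_validation"
```

The key design point the policy requires is that no branch of this routing bypasses human validation before a record reaches a final report.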

3.4 Document Intelligence

Authenticates identity documents and detects fraud by analyzing data from IDs, consent forms, and credit bureau records. Identifies forgery, tampering, and synthetic identity manipulation. Human review is required for all flagged items.

3.5 Operational AI

Automates internal quality control, order management, and workflow processes to improve efficiency. Operational AI does not interact directly with candidate-facing outputs without human review.

Turn may introduce new AI capabilities from time to time. Material new capabilities that affect data use or compliance obligations will be disclosed to partners prior to deployment.

4. Responsible AI Principles

Turn’s AI governance is grounded in four core principles:

4.1 Human Oversight

No Turn AI System makes independent final determinations affecting employment decisions or report content. Every AI-influenced output is subject to human review and validation by qualified Turn personnel before it affects a candidate’s report, screening status, or compliance outcome. Human oversight is enforced as a technical control, not merely as a policy statement.

4.2 Fairness and Non-Discrimination

Turn is committed to designing and monitoring AI Systems to avoid bias and discriminatory outcomes. We conduct regular bias audits measuring disparate impact across protected characteristics, including race, sex, national origin, religion, disability, and age, consistent with EEOC guidance, Colorado SB 24-205, and applicable state and federal law.

Where bias risks are identified, we take prompt corrective action including data re-balancing, model adjustment, or enhanced human review. Turn does not use protected class characteristics as model features.
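One widely used screening metric in this kind of audit is the four-fifths (80%) rule from EEOC-style adverse impact analysis: the selection rate of the lowest-rated group is compared to that of the highest-rated group. The sketch below shows the arithmetic only; the group labels and counts are hypothetical, and a real audit uses validated outcome data and additional statistical tests.

```python
# Illustrative four-fifths (80%) rule check. All figures are
# hypothetical; this is not Turn's audit methodology.

def selection_rate(passed: int, total: int) -> float:
    """Fraction of a group receiving the favorable outcome."""
    return passed / total

def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

rates = {
    "group_a": selection_rate(480, 600),  # 0.80
    "group_b": selection_rate(270, 400),  # 0.675
}
ratio = adverse_impact_ratio(rates)  # 0.675 / 0.80 = 0.84375
flagged = ratio < 0.8  # a ratio below 0.8 would trigger corrective review
```

A ratio below 0.8 is conventionally treated as evidence of adverse impact warranting the corrective actions described above.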

4.3 Transparency and Explainability

Turn maintains internal documentation of its AI systems’ purpose, operation, versioning, validation results, and performance benchmarks.

  • Partners may request summary-level information explaining how AI Systems influence outputs relevant to their Services.
  • Candidates whose screening involves AI are entitled to know that AI was used, consistent with applicable law.
  • Turn will not obscure the role of AI in any output that materially affects an individual’s employment opportunity.

4.4 Data Integrity and Privacy

Turn processes personal data through its AI Systems only to the extent necessary to deliver contracted Services. Turn does not use live, identifiable, or partner-specific candidate data to train or improve its AI models.

All AI model improvements rely exclusively on aggregated, anonymized, or synthetic data generated through Turn’s internal validation and audit processes. Customer Data remains the exclusive property of the Partner or its customers.

5. Governance Framework

5.1 Pre-Deployment Testing

All AI features undergo rigorous pre-deployment testing including accuracy benchmarking, bias evaluation, and adversarial testing before production release. New features require documented validation results and formal internal approval.

5.2 Ongoing Monitoring

Turn monitors deployed AI Systems continuously for accuracy, performance drift, and bias. Where performance falls below internal thresholds, corrective action is initiated promptly.
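A threshold-based check of this kind can be sketched minimally as follows. The metric (agreement with human review) and the threshold value are illustrative assumptions, not Turn's internal values.

```python
# Minimal sketch of threshold-based performance monitoring; the
# ACCURACY_FLOOR value and the metric itself are hypothetical.
from statistics import mean

ACCURACY_FLOOR = 0.95  # hypothetical internal threshold

def rolling_accuracy(outcomes: list[bool]) -> float:
    """Share of recent validated outputs that matched human review."""
    return mean(outcomes)

def needs_corrective_action(outcomes: list[bool]) -> bool:
    """True when recent performance falls below the internal floor."""
    return rolling_accuracy(outcomes) < ACCURACY_FLOOR

recent = [True] * 97 + [False] * 3  # 97% agreement with human review
# needs_corrective_action(recent) -> False, since 0.97 >= 0.95
```

In practice such checks also track drift in input distributions and bias metrics, not accuracy alone.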

5.3 Change Control

All material updates to AI models, including retraining and new feature releases, are subject to Turn’s formal change management process requiring documentation, validation, and human approval. Partners receive advance notice of material changes to data use or compliance obligations.

5.4 Documentation and Audit Rights

Turn maintains governance records sufficient to demonstrate compliance with applicable laws and its Responsible AI Principles. Partners may request a summary review of Turn’s AI governance documentation no more than once per calendar year, subject to confidentiality requirements and reasonable advance notice.

5.5 Incident Response

Turn will notify affected partners within seventy-two (72) hours of confirming a material AI-related incident, including system malfunctions or data breaches materially affecting the accuracy, integrity, or lawful operation of the Services. Turn will document findings, take corrective action, and share a remediation summary upon request.

6. Regulatory Compliance

Turn’s AI governance framework is designed to comply with, and support partner compliance with, applicable laws including:

Law / Regulation | Jurisdiction | Relevance to Turn
Fair Credit Reporting Act (FCRA) | United States (Federal) | Core compliance requirement for all consumer reports and AI outputs used in employment screening
EEOC Algorithmic Fairness Guidance | United States (Federal) | Bias testing and non-discrimination standards
Colorado ADAI (SB 24-205) | Colorado | Public disclosure requirement; algorithmic discrimination prevention; impact assessments
EU AI Act (Regulation 2024/1689) | European Union | High-risk AI employment system obligations effective Aug 2, 2026
California FEHA Automated-Decision System Regulations | California | Agency liability for AI developers in employment decisions; effective Oct 1, 2025
Illinois Human Rights Act (HB 3773) | Illinois | Prohibition on AI with discriminatory effect in employment; effective Jan 1, 2026
GDPR / UK GDPR | EU / United Kingdom | Data protection, lawful basis for processing, data subject rights
Applicable state ADM laws | 18+ U.S. states | Consumer rights to opt out of automated processing in employment decisions

Turn monitors evolving AI laws and regulations and updates its practices accordingly. This Policy is reviewed no less than annually and updated when material changes in law, Turn’s AI systems, or governance practices require.

7. Rights of Affected Individuals

Turn recognizes that background screening decisions affect real people. To the extent required by applicable law, and in coordination with our Partners, Turn supports the following rights:

Disclosure

Individuals will be informed when AI has materially influenced a screening output that affects their employment opportunity, consistent with FCRA adverse action requirements and applicable state AI transparency laws.

Human Review

Individuals may request that a qualified human review any AI-influenced determination. Partners are responsible for routing and honoring such requests, and Turn will support Partners in fulfilling them.

Opt-Out

In jurisdictions where applicable law provides a right to opt out of automated processing for employment decisions, Turn will work with Partners to support that right.

Accuracy Disputes

The FCRA’s existing dispute and reinvestigation process applies to all Turn screening outputs, including AI-assisted outputs. Information on dispute procedures is included in every consumer disclosure.

8. Prohibited Uses

Turn’s AI Systems are not used to, and Turn contractually prohibits Partners from directing Turn’s AI to:

  • Make fully automated final employment decisions without human review;
  • Evaluate candidates on the basis of protected class characteristics (race, color, religion, sex, national origin, age, disability, or other legally protected status);
  • Process biometric data for identification purposes in jurisdictions where such processing is prohibited or where required consent has not been obtained;
  • Predict criminal behavior using demographic profiling;
  • Engage in any practice prohibited under the EU AI Act, applicable U.S. federal law, or applicable state law.

9. Contact and Accountability

Questions, concerns, or requests regarding this Policy or Turn’s AI practices may be directed to:

Turn Technologies, Inc. — Compliance Team
Email: compliance@turn.ai
Address: 311 West Monroe Street, 3rd Floor, Chicago, IL 60606

Turn maintains an internal review process for ethical concerns raised by employees, partners, regulators, or members of the public regarding AI behavior or outcomes. Confirmed issues are investigated, documented, and remediated.

10. Policy Updates

This Policy will be reviewed and updated at least annually, and whenever Turn introduces material new AI capabilities or when applicable law requires revision. The current version is always available at turn.ai/legal/responsible-ai-policy. Material updates will be communicated to active partners prior to the effective date.
