The Connecticut Senate this week approved one of the most far-reaching artificial intelligence policy packages ever considered by the state Legislature.
SB 5, as amended, was approved on the Senate floor by a vote of 32-4 and now heads to the House for consideration.
Commonly referred to as the “Online Safety” bill, the revised version of the bill is more than 70 pages long and goes far beyond consumer protections.
Employers and businesses that use AI-driven tools, especially in hiring and other employment decisions, will be subject to costly new compliance obligations.
Among its many provisions, SB 5 establishes a new regulatory framework for the use of automated employment-related decision technology, including software used to screen applicants, rank candidates, evaluate performance, or support promotion, discipline, and termination decisions.
Why SB 5 is important for employers
For Connecticut employers navigating an already tight labor market and increasing compliance responsibilities, these provisions add new notice, disclosure, and documentation requirements and create new legal risks when AI tools are used in the workplace.
Under the bill, “employment-related automated decision-making technology” includes systems that:
- Process personal data
- Use calculations to generate outputs such as scores, rankings, predictions, classifications, or recommendations
- Serve as an important factor in determining, or significantly influencing, employment decisions
This definition is intentionally broad and can be applied to many commonly used tools, including third-party recruiting platforms, resume screening software, assessment tools, scheduling algorithms, and performance analysis systems.
Certain everyday technologies, such as word processing, spreadsheets, email, basic data storage, or tools used only on an occasional basis, are excluded, but any system that significantly influences employment decisions may be covered.
New employer responsibilities
Starting October 1, 2027, employers implementing covered tools will face several new obligations.
If an automated system is a key element in an employment decision, the employer must notify the affected individual before the decision is made. The notification must disclose:
- That an automated employment decision tool is being used
- The purpose of the tool and the types of employment decisions it relates to
- The name of the technology or product
- The categories and sources of personal data analyzed
- How the data will be evaluated
- The employer's contact information
Additionally, employers must disclose when an applicant or employee is directly interacting with an automated system, unless it is obvious to a reasonable person.
If a tool is developed by a third-party vendor, SB 5 obligates the developer to provide the adopter (the employer) with sufficient information to meet its compliance obligations, unless the tool is not commercially available or is not intended to materially influence employment decisions.
The bill would allow developers and employers to contractually assign compliance responsibilities, but the roles would need to be clearly defined in writing.
Limited trade secret protections
Although SB 5 explicitly preserves the protection of trade secrets and confidential information, employers or vendors who keep information private must explain what is being kept private and why, adding another compliance step.
No AI defense to discrimination claims
The bill would amend Connecticut’s anti-discrimination law to clarify that the use of automated decision-making technology is not a defense to employment discrimination claims.
Courts and regulators may consider evidence of anti-bias testing or similar proactive efforts, but such testing does not preclude liability.
For employers, this means AI systems must be treated like any other employment decision-making tool: the employer remains fully accountable for discriminatory outcomes, whether the decision is automated or human-driven.
Enforcement and penalties
Violations of the automated employment technology provisions would be treated as unfair or deceptive trade practices and would be enforceable exclusively by the Connecticut Attorney General.
Although the bill does not create a private right of action, enforcement could include:
- Civil penalties
- Injunctive relief
- Investigation and compliance obligations
Notably, through the end of 2027, the Attorney General has discretion to issue notices allowing violations to be cured, providing limited flexibility during initial implementation.
What else is in Senate Bill 5?
In addition to regulating the use of AI in employment decisions, SB 5 establishes a comprehensive framework covering consumer protections, generative AI, frontier models, online platforms, and state AI governance.
AI Subscription Transparency (Section 1). Subscription-based providers of AI tools must clearly disclose important terms, including functionality limitations and the provider’s discretion to restrict access, before billing or renewing a subscription. Violations are enforceable as unfair trade practices by the Attorney General only; there is no private right of action.
Frontier AI models and “catastrophic risks” (Section 2). Large-scale developers training high-compute frontier models must establish whistleblower protections and reporting processes for catastrophic risks (mass casualties, large-scale cyberattacks, and similar events). These provisions apply to a limited group of large, sophisticated AI developers rather than to typical business users.
AI Regulatory Sandbox (Section 3). The Department of Economic and Community Development has been directed to design an AI regulatory sandbox that will allow companies to test innovative AI products under relaxed regulatory requirements, with the aim of supporting innovation and competitiveness.
AI companions and mental health safeguards (Sections 4-6). Companies offering AI “companions” with human-like interactions must:
- Disclose that the user is interacting with an AI rather than a human
- Implement safeguards against self-harm and the promotion of violence
- Introduce enhanced protection measures for minors, such as usage restrictions and parental controls
Violations will be enforced by the Attorney General. These provisions primarily affect consumer-facing AI platforms rather than internal business tools.
Provenance of Generative AI Content (Section 15). Large providers of consumer generative AI (with over 1 million monthly users) must comply with new technical standards to embed content provenance data in AI-generated or significantly modified images, audio, and video. B2B uses and many narrow tools are exempt.
AI Verification Pilot Program (Section 33). The Department of Consumer Protection will launch a pilot program to approve up to five independent third-party AI verifiers. While verification may be admissible in certain civil cases, verification is not a defense in state enforcement actions.
Social Media and Algorithmic Feeds for Minors (Section 39). Online platforms that algorithmically recommend content must significantly limit how they personalize content for minors, impose default time limits, limit notifications, and prominently display health warnings. These provisions are aimed at social media platforms, not general business websites.
State Agency AI Oversight (Sections 37-38). State agencies must inventory their AI systems, conduct impact assessments, and comply with governance standards before deploying AI that affects the public interest or individual rights. These requirements apply to the government’s use of AI, not directly to private employers.
Labor, Education, and Small Business Provisions (Sections 17-21, 26, 29). SB 5 establishes the Connecticut AI Academy, expands AI workforce training, requires employers to disclose when layoffs are related to the use of AI, and directs state agencies to help small businesses implement AI responsibly and competitively.
As Connecticut pursues one of the most expansive AI regulatory frameworks in the nation, business transparency and real-world compliance will be critical to ensuring innovation, equity, and economic competitiveness go hand in hand.
CBIA will continue to track SB 5 and provide updates on changes impacting Connecticut employers.
For more information, contact Chris Davis at CBIA at 860.244.1931.
