Which states have AI laws in effect today? This tracker summarizes key AI laws that may impact your business.
| State/Territory | AI Scope | Relevant Law | Citation | Effective Date | Key Requirements | Enforcement & Penalties |
|---|---|---|---|---|---|---|
| California | AI Calling | AI Call Disclosures Law | AB 2905 | January 1, 2025 | • Requires callers using an automatic dialing-announcing device to inform the person called if the prerecorded message uses an artificial voice generated or significantly altered using artificial intelligence. | Up to $500 per violation. |
| California | AI Definition | AI Definition Bill | AB 2885 | January 1, 2025 | • Generally establishes a uniform definition for artificial intelligence (AI) in California Law: “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” | N/A |
| California | AI Healthcare | AI Healthcare Utilization Law | SB 1120 | January 1, 2025 | • Requires health care service plans and disability insurers that use an artificial intelligence, algorithm, or other software tool for the purpose of utilization review or utilization management functions to ensure compliance with specified requirements, including that the tool bases its determination on specified information and is fairly and equitably applied. | Criminal penalties. |
| California | AI CSAM | Amendment of California CSAM Laws | Cal. Penal Code §§ 311–312.7 (Part 1, Title 9, Chapter 7.5) | January 1, 2025 | • Expands the scope of existing child pornography statutes to include matter that is digitally altered or generated by the use of AI. | Existing criminal penalties apply. |
| California | AI Intimate Images | Amendment of California Law Governing Distribution of Intimate Images | SB 926 | January 1, 2025 | • Extends prohibitions on the distribution of intimate images to include the intentional creation and distribution of any sexually explicit image of another identifiable person that was created in a manner that would cause a reasonable person to believe the image is an authentic image of the person depicted, under circumstances in which the person distributing the image knows or should know that distribution of the image will cause serious emotional distress, and the person depicted suffers that distress. | Existing criminal penalties apply. |
| California | AI Likeness | Amendment to Deceased Personality Protections | AB 1836 | January 1, 2025 | • Makes it unlawful for a person to produce, distribute, or make available the digital replica of a deceased personality’s voice or likeness in an expressive audiovisual work or sound recording without appropriate consent. | Greater of $10,000 or the actual damages suffered by a person controlling the rights to the deceased personality’s likeness. |
| California | AI in Political Advertising | Amendment to the Political Reform Act | AB 2355 | January 1, 2025 | • Requires any committee that creates, originally publishes, or originally distributes a qualified political advertisement to include in the advertisement a specified disclosure that the advertisement was generated or substantially altered using artificial intelligence. | Up to $5,000 per violation. |
| California | AI Healthcare | Artificial Intelligence in Health Care Services | Cal. Health & Safety Code § 1339.75 (AB 3030) | January 1, 2025 | • Requires health facilities, clinics, physician’s offices, and offices of a group practice that use generative AI to generate written or verbal patient communications pertaining to patient clinical information to ensure those communications include both: - A disclaimer that indicates to the patient that the communication was generated by generative artificial intelligence; and - Clear instructions describing how a patient may contact a human healthcare provider, employee, or other appropriate person. • Exempts from these requirements AI-generated written communications that are reviewed by a licensed or certified healthcare provider. | Existing regulatory enforcement mechanisms. |
| California | AI Transparency | Artificial Intelligence Training Data Transparency Act | AB 2013 | January 1, 2026 | • Requires AI developers to post information on the data used to train their generative AI on their websites, including a high-level summary of the datasets used, the sources or owners of the datasets, a description of how the data is used, the number of data points in the set, whether copyrighted / IP protected or licensed data is included, and the time period the data was collected (among other information). | Not specified. |
| California | AI Transparency | California AI Transparency Act, as amended by AB 853 (2025) | Cal. Bus. & Prof. Code § 22757 et seq. | August 2, 2026 | • Requires providers of certain covered generative AI systems to: - Offer users the option to include in AI-generated image, video, or audio content an indicator that the content is AI-generated content; - Include a detectable, latent disclosure in AI-generated image, audio, and video content created by the provider’s AI system that the content was generated by the system; and - Develop and make available tools to detect whether specified content was generated by the provider’s system. • Starting January 1, 2027, prohibits a GenAI system hosting platform from knowingly making available a GenAI system that does not place the disclosures identified above. • Starting January 1, 2027, requires certain large online platforms to: - Detect whether any provenance data that is compliant with widely adopted specifications adopted by an established standards-setting body is embedded into or attached to content distributed on the platform; and - Provide a user interface to disclose the availability of system provenance data that reliably indicates that the content was generated or substantially altered by a GenAI system or captured by a capture device. • Starting January 1, 2028, requires "capture device manufacturers" (e.g., camera and mobile phone manufacturers) to: - Embed a latent disclosure in content captured by the device by default; and - Offer users the option to include a disclosure containing certain information in content captured by the capture device. | Up to $5,000 per violation. |
| California | AI in Social Media & Online Platforms | California AI Transparency Act, as amended by AB 853 (2025) | Cal. Bus. & Prof. Code § 22757 et seq. | August 2, 2026 | • Starting January 1, 2027, requires certain large online platforms to: - Detect whether any provenance data that is compliant with widely adopted specifications adopted by an established standards-setting body is embedded into or attached to content distributed on the platform; and - Provide a user interface to disclose the availability of system provenance data that reliably indicates that the content was generated or substantially altered by a GenAI system or captured by a capture device. | Up to $5,000 per violation. |
| California | User-Facing AI | California Bot Act | Cal. Bus. & Prof. Code §§ 17940–17943 | July 1, 2019 | • Prohibits any person from using a bot online to communicate or interact with a person in California with the intent to mislead the person about the bot’s artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a commercial transaction or influence a vote in an election. • Provides a safe harbor from liability where the person clearly and conspicuously discloses, in a manner reasonably designed to inform the relevant person, that a bot is in use. | Up to $2,500 per violation. |
| California | User-Facing AI | California Companion Chatbot Law | Cal. Bus. & Prof. Code § 22601 et seq. | January 1, 2026 | • Requires operators of a "companion chatbot platform" to: - Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human where a reasonable person interacting with the bot would be misled to believe that the person is interacting with a human; - Maintain and publish online details about a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user (including automated notification to the user that refers them to crisis service providers if they express such ideas); - Implement measures where the operator knows a user is a minor (under 18) to disclose that the user is interacting with AI, provide a clear and conspicuous notification at least every 3 hours that reminds the user to take a break and that the bot is AI, and institute reasonable measures to prevent the bot from producing visual material of sexually explicit conduct or directly stating the minor should engage in sexually explicit conduct; - Disclose on its platform that companion chatbots may not be suitable for some minors; and - Report annually on its compliance. | Provides a private right of action for anyone injured by a violation to seek injunctive relief, reasonable attorneys’ fees, and damages in an amount equal to the greater of actual damages or $1,000 per violation. |
| California | AI Privacy | California Consumer Privacy Act | AB 1008 | January 1, 2025 | • Amends the definition of “personal information” under the CCPA to clarify personal information can exist in various formats, including, but not limited to, “abstract digital formats, including compressed or encrypted files, metadata, or artificial intelligence systems that are capable of outputting personal information.” | N/A |
| California | Automated Decision-Making | California Consumer Privacy Act Regulations | 11 CCR § 7001 et seq. | January 1, 2027 | Addresses the use of automated decision-making technology ("ADMT") when used to make significant decisions regarding consumers (those relating to financial or lending services, housing, education, employment, and healthcare): • Businesses must conduct a risk assessment when using ADMT to make significant decisions, or when using personal information to train ADMT. • Businesses that make ADMT (trained on personal information) available to another business to make a significant decision must provide to the recipient-business all facts available to the business that are necessary for the recipient-business to conduct its own risk assessment. • Businesses must provide pre-use notices to inform consumers about the use of ADMT, details about the ADMT, and the right to opt-out and access further information. • Businesses must allow consumers to opt out and provide consumers with access to information about the ADMT's use and logic. Other obligations and restrictions may apply depending on the type of data processed. | Up to $7,500 per violation. |
| California | AI in Government | California Government AI Inventory Law | Cal. Gov. Code § 11546.45.5 | January 1, 2024 | • Requires the California Department of Technology to inventory all high-risk automated decision systems used or proposed by state agencies on or before September 1, 2024, detailing their functions, benefits, data usage, and risk mitigation measures. • Requires the California Department of Technology to submit a report of the comprehensive inventory to specified committees of the California Legislature annually until January 1, 2029. | N/A |
| California | Algorithmic Pricing | Cartwright Act Common Pricing Algorithm Amendment | AB 325 | January 1, 2026 | • Makes it unlawful for a person to use or distribute a common pricing algorithm as part of a contract, combination in the form of a trust, or conspiracy to restrain trade or commerce in violation of the law. • Makes it unlawful for a person to use or distribute a common pricing algorithm if the person coerces another person to set or adopt a recommended price or commercial term recommended by the common pricing algorithm for the same or similar products or services in California. • Defines "common pricing algorithm" as "any methodology, including a computer, software, or other technology, used by two or more persons, that uses competitor data to recommend, align, stabilize, set, or otherwise influence a price or commercial term." | Criminal and civil penalties are available, including up to $6 million in criminal penalties or double either the gross gain derived from the violation or the gross loss suffered by the victim, whichever is greater. |
| California | AI Liability | Civil Actions | Cal. Civ. Code § 1714.46 | January 1, 2026 | • Establishes that in an action against a defendant who developed, modified, or used artificial intelligence that is alleged to have caused a harm to the plaintiff, it shall not be a defense, and the defendant may not assert, that the artificial intelligence autonomously caused the harm to the plaintiff. | N/A |
| California | AI Transparency | Data Broker Registration AI Disclosures | Cal. Civ. Code § 1798.99.82 | January 1, 2026 | • Requires data brokers to provide additional information upon registration to the California Privacy Protection Agency, including whether the data broker has shared or sold consumers’ data to a developer of a GenAI system or model in the past year. | Up to $200 for each day the data broker fails to register, an amount equal to the fees that were due during the period it failed to register, and reasonable expenses incurred by the California Privacy Protection Agency during the investigation. |
| California | AI in Political Advertising | Deceptive Media in Election Advertisements (narrowly enjoined by Kohls v. Bonta, E.D. Cal.) | AB 2839 | September 17, 2024 | • Prohibits a person, committee, or other entity from knowingly distributing an advertisement or other election communication that contains certain materially deceptive deepfake content with malice within 120 days of an election in California and, in specified cases, 60 days after an election. | General or special damages. |
| California | AI in Social Media & Online Platforms | Defending Democracy from Deepfake Deception Act of 2024 (broadly enjoined by Kohls v. Bonta, E.D. Cal., No. 2:24cv2527) | AB 2655 | January 1, 2025 | • Requires large online platforms with at least one million California users to develop and implement procedures for the use of state-of-the-art techniques to identify and either remove or label (depending on proximity to an election) materially deceptive political deepfake content. • Requires the large online platform to also provide an easily accessible way for California residents to report such content to the platform. | Injunctive or other equitable relief by the Attorney General, any district attorney, or city attorney. |
| California | AI in Social Media & Online Platforms | Digital Identity Theft Act | SB 981 | January 1, 2025 | Requires a social media platform to: • Provide a reasonably accessible mechanism to California users to report to the social media platform any sexually explicit image or video of them posted on that platform that was created or altered through digitization without their consent (i.e., “sexually explicit digital identity theft”); • Temporarily block any covered material from being publicly viewable on the social media platform pending the social media platform’s determination on the report; and • Remove any covered material from being publicly viewable on the social media platform once the platform determines there is a reasonable basis to believe the reported material is sexually explicit digital identity theft. | Not specified. |
| California | AI in Employment | Civil Rights Council Employment Regulations Regarding Automated-Decision Systems, issued pursuant to the California Fair Employment and Housing Act | Cal. Gov. Code §§ 12935(a), 12940, 12941 | October 1, 2025 | Regulations clarify the application of existing antidiscrimination laws in the workplace in the context of new and emerging technologies, including AI that makes a decision or facilitates human decision-making regarding an employment benefit ("Automated-Decision System"): • Employers must not use automated-decision systems that discriminate against applicants or employees on the basis of protected characteristics. • Employers must maintain employment records, including automated-decision system data, for a minimum period of four years. | Existing enforcement mechanisms. |
| California | AI in Government | Generative Artificial Intelligence Accountability Act | SB 896 | January 1, 2025 | • Requires the Office of Emergency Services to perform a risk analysis of potential threats posed by the use of GenAI to California’s critical infrastructure, and certain other state agencies / actors to take AI into account in various government processes. • Requires a state agency or department that utilizes generative AI to directly communicate with a person regarding government services and benefits to ensure that those communications include both (i) a disclaimer that indicates to the person that the communication was generated by generative artificial intelligence and (ii) clear instructions describing how the person may contact a human employee of the state agency or department. | N/A |
| California | AI Healthcare | Health Advice From Artificial Intelligence | Cal. Bus. & Prof. Code § 4999.9 | January 1, 2026 | • Extends to AI technology providers pre-existing prohibitions on the use of any terms, letters, or phrases to indicate or imply (i) possession of a license or certificate to practice a healthcare profession without one or (ii) that the services being offered are being provided by a licensed or certified health care professional (where such claim is not true). | Appropriate health care professional licensing boards and enforcement agencies may take whatever action they are authorized by law to take in response to such a violation. |
| California | AI in Government | Law Enforcement Usage of Artificial Intelligence | Cal. Civ. Code § 13663 | January 1, 2026 | • Requires law enforcement agencies to maintain a policy requiring that any official report prepared by the law enforcement agency (or one of its members) that is generated fully or partially using AI contain: - A disclosure on each page of the report (or within the body of the text) that the report was written either fully or in part using AI and the identity of every specific AI program used; and - The signature of the law enforcement officer or member who prepared the official report verifying that they reviewed the contents of the report and that the facts contained in the report are true and correct. • Requires law enforcement agencies that use AI to create an official report, whether fully or partially, to retain the first draft created and to maintain an audit trail for as long as the official report is retained. • Prohibits contracted vendors from sharing, selling, or otherwise using information provided by a law enforcement agency to be processed by AI, except for the contracted law enforcement agency's purposes or pursuant to a court order (with certain exceptions for accessing such data for troubleshooting, bias mitigation, accuracy improvement, or system refinement). | N/A |
| California | AI in Real Estate | Real Estate Digitally Altered Images Disclosures | Cal. Bus. & Prof. Code § 10140.8 | January 1, 2026 | • Requires real estate brokers, salespersons and persons acting on their behalf who include a digitally altered image (including AI altered images) in an advertisement or other promotional material for the sale of real property to include a statement disclosing that the image has been altered and a link to a publicly accessible internet website, URL, or QR code that includes, and clearly identifies, the original, unaltered image. | Disciplinary action under the California Real Estate Regulations, including potential revocation or suspension of real estate licenses. |
| California | AI Likeness | Replica of Voice or Likeness Law | AB 2602 | January 1, 2025 | • Makes any provision in an agreement for the performance of personal or professional services unenforceable where: - The provision allows for the creation and use of a digital replica of the individual’s voice or likeness in place of work the individual would otherwise have performed in person; - The provision does not include a reasonably specific description of the intended uses of the digital replica; and - The individual was not represented (i) by legal counsel or (ii) by a labor union. | Unenforceability of a violating contractual provision. |
| California | Frontier or General-Purpose AI | Transparency in Frontier Artificial Intelligence Act | SB 53 | January 1, 2026 | Imposes obligations on developers who have trained, or initiated training of, a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations ("Frontier Model"), including: • Publishing a report on its website with information about the model. • Reporting any critical safety incident involving its models to the California Office of Emergency Services (OES) within 15 days of discovery (and within 24 hours to an appropriate authority in certain circumstances). • Refraining from making materially false or misleading statements regarding the catastrophic risks of its models or how those risks are managed. • Refraining from taking action against an employee for (or issuing policies or contracts that attempt to stop an employee from) reporting serious safety risks or legal violations (plus requiring implementation of certain other whistleblower procedures). Large frontier model developers with over $500 million in annual revenue must also publish and follow an AI safety and oversight framework, submit a summary of catastrophic risk assessments to the OES, provide additional detail in their online reports, and implement additional whistleblower procedures. | Civil penalty of up to $1 million per violation by a large frontier developer, enforced by the state Attorney General. |