New York

Information is updated monthly and is shown only for states with defined AI laws.

Which states have AI laws in effect today? This tracker summarizes key AI laws that may impact your business.

Each entry below lists the state or territory, AI scope, relevant law and citation, effective date, key requirements, and enforcement and penalties.

State: New York
AI Scope: AI Likeness
Relevant Law: Amendment to Deceased Personality Protections (S8391)
Effective Date: December 11, 2025
Key Requirements:
• Makes it unlawful for a person to use a deceased performer's digital replica in an audiovisual work, sound recording, or live performance of a musical work without appropriate consent.
• A "digital replica" is defined as a newly created, computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that is embodied in a sound recording, image, audiovisual work (including an audiovisual work that does not have any accompanying sounds), or transmission in which: (i) the actual individual did not actually perform or appear; or (ii) the actual individual did perform or appear, but the fundamental character of the performance or appearance has been materially altered.
• A "digital replica" does not include the electronic reproduction, use of a sample of one sound recording or audiovisual work into another, remixing, mastering, or digital remastering of a sound recording or audiovisual work authorized by the copyright holder.
Enforcement & Penalties: Greater of $2,000 or the compensatory damages suffered by the injured party, and any profits from the unauthorized use that are attributable to such use and are not taken into account in computing the compensatory damages.

State: New York
AI Scope: AI Intimate Images
Relevant Law: Amendment to the New York Statute Prohibiting Unlawful Dissemination or Publication of Intimate Images (SB1042A)
Effective Date: November 28, 2023
Key Requirements:
• Expands the definition of unlawful dissemination or publication of an intimate image to include “deep fake” images created by digitization.
Enforcement & Penalties: Existing penalties apply.

State: New York
AI Scope: User-Facing AI
Relevant Law: Artificial Intelligence Companion Models (New York General Business Law Article 47)
Effective Date: November 5, 2025
Key Requirements:
• Prohibits any operator from providing an AI companion unless that AI companion contains a protocol to take reasonable efforts to detect and address suicidal ideation or self-harm expressed by a user to the AI companion, including a notification to the user that refers them to crisis service providers.
• Operators must clearly and conspicuously display a notification, stating verbally or in writing that the user is not communicating with a human, at the beginning of any AI companion interaction (which need not be provided more than once per day) and at least every three hours during a continuing interaction.
Enforcement & Penalties: Up to $15,000 per day of violation.

State: New York
AI Scope: AI in Political Advertising
Relevant Law: Artificial Intelligence Deceptive Practices Act (N.Y. Election Law § 14-106)
Effective Date: April 20, 2024
Key Requirements:
• Requires any person who distributes or publishes any political communication that was produced by or includes materially deceptive media (including AI deepfakes), and who has actual knowledge that it is materially deceptive, to provide proper disclosure.
Enforcement & Penalties: Injunctive relief.

State: New York
AI Scope: AI Likeness
Relevant Law: Artificial Intelligence Deceptive Practices Act (N.Y. Civ. Rights Law § 50 et seq.)
Effective Date: April 20, 2024
Key Requirements:
• Extends New York’s Right of Privacy protecting an individual’s picture, likeness, or voice to also cover AI-generated uses of the individual’s picture, likeness, or voice.
Enforcement & Penalties: Varies based on violation.

State: New York
AI Scope: AI in Government
Relevant Law: Automated Employment Decision-Making in State Government (A433)
Effective Date: July 1, 2025
Key Requirements:
• Any state agency that uses an automated employment decision-making tool must publish a list of such tools on its website.
• The state agency must maintain an inventory of its artificial intelligence systems.
Enforcement & Penalties: None specified.

State: New York
AI Scope: AI Likeness
Relevant Law: Contracts for the Creation and Use of Digital Replicas (General Obligations Law, Chapter 24-A, Article 5, Title 3)
Effective Date: January 1, 2025
Key Requirements:
• Makes any provision in an agreement for the performance of personal or professional services unenforceable where:
  - The provision allows for the creation and use of a digital replica of the individual’s voice or likeness in place of work the individual would otherwise have performed in person;
  - The provision does not include a reasonably specific description of the intended uses of the digital replica; and
  - The individual was not represented (i) by legal counsel or (ii) by a labor union.
Enforcement & Penalties: Unenforceability of the violating contractual provision.

State: New York
AI Scope: Algorithmic Pricing
Relevant Law: New York Algorithmic Pricing Disclosure Act (New York General Business Law Article 22-A, § 349-A)
Effective Date: November 10, 2025
Key Requirements:
• Requires any entity that sets the price of goods or services using personalized algorithmic pricing, and that advertises, promotes, labels, or publishes a statement, display, image, offer, or announcement of personalized algorithmic pricing to a consumer in New York using personal data specific to the consumer, to include a clear and conspicuous disclosure that states: "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA."
Enforcement & Penalties: Up to $1,000 per violation (after a preliminary cease-and-desist notice).

State: New York
AI Scope: Algorithmic Pricing
Relevant Law: New York Landlord Algorithmic Pricing Law (New York General Business Law Article 22, § 340-B)
Effective Date: December 15, 2025
Key Requirements:
• Makes it unlawful for a person or entity to knowingly, or with reckless disregard, facilitate an agreement between or among two or more residential rental property owners or managers not to compete with respect to residential rental dwelling units, including by operating or licensing a software, data analytics service, or algorithmic device that performs a coordinating function on behalf of, or between and among, such residential rental property owners or managers.
Enforcement & Penalties: Class E felony. For a corporation, a fine not exceeding $1 million; for an individual, a fine not exceeding $100,000, imprisonment for not longer than four years, or both.

State: New York
AI Scope: AI Likeness
Relevant Law: New York State Fashion Workers Act (S 9832)
Effective Date: June 19, 2025
Key Requirements:
• Requires model management companies and those who receive modeling services to obtain clear written consent, separately from any representation agreement, for the creation or use of a model's digital replica, detailing the scope, purpose, rate of pay, and duration of such use.
Enforcement & Penalties: Up to $3,000 for the first violation; up to $5,000 for subsequent violations.

State: New York
AI Scope: AI in Employment
Relevant Law: NYC AI Employment Law (Local Law 144)
Effective Date: January 1, 2023
Key Requirements:
• Prohibits employers and employment agencies from using an automated employment decision tool to screen a candidate or employee for an employment decision unless the tool has been subject to a recent bias audit and a summary of the audit is made publicly available on the website of the employer or agency.
• Requires employers and agencies using automated employment decision tools to notify each candidate that such a tool will be used and to provide information about the qualifications and characteristics the tool will use in the assessment, the type of data collected by the tool for the assessment, and the employer’s data retention policy.
• Requires employers and agencies using automated employment decision tools to allow a candidate to request an alternative selection process or accommodation.
Enforcement & Penalties:
• Up to $500 per violation on the first day of violations.
• Up to $1,500 per subsequent violation.

State: New York
AI Scope: AI Transparency
Relevant Law: Synthetic Performer Disclosures (S8420A)
Effective Date: June 9, 2026
Key Requirements:
• Requires any person who, for any commercial purpose, produces or creates an advertisement respecting any property or service in which they deal to conspicuously disclose when a "synthetic performer" is used in the advertisement.
• Defines "synthetic performer" to mean "a digitally created asset created, reproduced, or modified by computer, using generative artificial intelligence or a software algorithm, that is intended to create the impression that the asset is engaging in an audiovisual and/or visual performance of a human performer who is not recognizable as any identifiable natural person."
Enforcement & Penalties: $1,000 for a first violation and $5,000 for any subsequent violation.

State: New York
AI Scope: AI in Government
Relevant Law: The LOADinG Act: Legislative Oversight of Automated Decision-Making in Government Act (S 7543B)
Effective Date: December 21, 2024
Key Requirements: Imposes several requirements on state agency use of AI and automated decision-making tools, including:
• Requires state agencies to publish an online list of the automated decision-making tools they use.
• Requires state agencies to conduct, submit, and publish an impact assessment for the lawful application and use of automated decision-making tools.
• Requires state agencies and any entity acting on their behalf to operationalize meaningful human review of automated decision-making tools that are used to allocate public assistance benefits or that may otherwise impact an individual's rights, safety, or welfare. Such tools must also be subject to an initial impact assessment, to be repeated at least every two years.
• Prohibits state agencies from using automated decision-making systems to make internal employment decisions if they may result in discharge, displacement, loss of position, or impairment of collective bargaining agreements.
Enforcement & Penalties: N/A

State: New York
AI Scope: Frontier or General-Purpose AI
Relevant Law: The Responsible AI Safety and Education (RAISE) Act, New York General Business Law, Article 44-B (passed by the Legislature as A 6453A and to be enacted through chapter amendments reflected in A 9449)
Effective Date: January 1, 2027
Key Requirements:
Requires developers who have trained, or initiated the training of, a frontier model and who, individually or together with their affiliates, had annual gross revenues in excess of $500 million in the preceding calendar year (i.e., "large frontier developers") to comply with obligations imposed on frontier models that are developed, deployed, or operated in whole or in part in New York, including:
• Writing, implementing, complying with, and publishing a frontier AI framework detailing how the large frontier developer handles certain risk-related and other expectations.
• Publishing a transparency report providing information about the developer and the model.
• Filing a disclosure statement with the Department of Financial Services.
• Retaining an unredacted copy of any of the documents above when the published version is redacted for permitted reasons.
• Transmitting to the Department of Financial Services a summary of any assessment of catastrophic risk resulting from internal use of a frontier model.
• Reporting any critical safety incident pertaining to its frontier model to the Department of Financial Services within 72 hours of learning of the incident or of facts sufficient to establish a reasonable belief that a critical safety incident has occurred (within 24 hours to an authority if the incident poses an imminent risk of death or serious physical injury).
• Refraining from making materially false or misleading statements about catastrophic risk from its frontier models, its management of catastrophic risk, or its frontier AI framework.

"Frontier model" is defined to mean an AI model that (a) is trained on a broad data set, is designed for generality of output, and is adaptable to a wide range of distinctive tasks (i.e., a "foundational model"), and (b) was trained using a quantity of computing power greater than 10^26 integer or floating-point operations.
Enforcement & Penalties: Up to $1 million for a first violation and up to $3 million for subsequent violations.
