Michigan has joined a growing number of states confronting one of the most disturbing uses of artificial intelligence: the non-consensual creation or spread of sexualized deepfakes. Beginning August 26, 2025, House Bills 4047 and 4048 take effect, creating clear civil remedies and criminal penalties for “intimate deepfakes.” If you are an employer with operations or employees in Michigan, or a platform that hosts user-generated content accessible in the state, this law changes your risk profile in immediate and practical ways. What follows is a comprehensive, plain-English guide to what the law does, how it fits with other obligations you already have, and what steps you should take now. Where helpful, we reference Tishkoff PLC’s explainer video and thumbnail artwork as an example of how to educate your workforce and customers on the new rules of the road.
At its core, Michigan’s law recognizes that AI tools can fabricate realistic depictions of a real person engaged in sexual conduct or exposing intimate parts, producing images so convincing that an ordinary viewer could believe they are genuine. This is not the realm of satire or obvious parody; it is the realm of synthetic content designed to deceive, humiliate, intimidate, harass, or extort. Legislators responded to cases in schools, workplaces, and online communities where victims discovered fabricated pornographic content of themselves circulating in group chats or social feeds. The technology is accessible, the harm is immediate, and the legal system historically lagged behind the speed at which deepfakes can proliferate. The new act closes that gap by defining a category of prohibited conduct, creating a private right of action, and layering criminal consequences for certain violations.
Michigan’s statute defines “deepfake” in functional terms that reflect how the technology is actually used. The key is whether a piece of media (most often a photo or video, but potentially other formats) has been created or altered by technical means so that a reasonable person would believe it depicts a real, identifiable individual in an intimate context. The person must be identifiable, either from the content itself or through accompanying details that tie the image to them. The depiction must be sexual in nature: exposure of intimate parts, simulated sexual acts, or other erotic imagery that would typically be characterized as sexually explicit. The law’s boundaries are designed to capture manipulations that pass as real to the casual observer, without sweeping in obvious caricatures or protected forms of artistic expression that no reasonable person would mistake for reality.
Equally important, the law is not limited to a single creator sitting behind a keyboard; it reaches anyone who knowingly disseminates intimate deepfakes without consent, including those who forward or repost content after learning or having good reason to know that it is synthetic and intimate. This is especially relevant for group administrators, moderators, or managers of workplace collaboration tools. The act contemplates not just the original act of fabrication, but the modern reality that harm is often multiplied by rapid sharing.
In drafting and committee hearings, lawmakers focused on three ideas that sharpen liability: consent, knowledge, and realistic appearance. Consent is the bright line. Without written, explicit, and informed consent from the depicted person, the creation or distribution of an intimate deepfake triggers the act’s civil remedies and can expose a person to criminal liability in aggravated circumstances. Knowledge functions as a second gate: individuals who did not create the content but share it can be liable if they knew or reasonably should have known it was a deepfake and that it depicted someone in an intimate way. Finally, the realistic-appearance threshold prevents the statute from ensnaring obvious jokes or art; the media must be realistic enough that a reasonable viewer could believe it is real.
This “realism” requirement is not an escape hatch for sophisticated creators. Advances in consumer tools mean that highly convincing composites often require little technical skill. The law anticipates that a judge or jury may consider how the media was made, the quality of the manipulation, and whether the context would lead an ordinary viewer to believe it genuine.
The civil side of the law is intentionally robust. A person depicted in an intimate deepfake without consent can bring a lawsuit against the creator, anyone who knowingly disseminated it, and in some cases those who materially contributed to its spread. Plaintiffs can seek an injunction to halt further distribution (crucial when a video is ricocheting across platforms), and they can recover monetary relief that includes compensatory damages for emotional distress and reputational harm. The law also allows recovery of profits attributable to the wrongful content, and in particularly egregious cases it authorizes additional statutory or punitive-style relief to ensure that harms which are otherwise hard to measure are not left uncompensated.
Because timing matters, the statute pairs its remedies with expedited procedures for injunctive relief. Courts can issue temporary restraining orders and preliminary injunctions to force a takedown or to prevent further sharing while the case proceeds. In practice, that means a victim can ask a court for immediate intervention as soon as they discover the content. Employers and platforms should anticipate receiving time-sensitive subpoenas or orders that attach to this expedited schedule.
While the civil cause of action gives victims a direct path to redress, lawmakers also created criminal penalties for certain conduct. A first offense can be charged as a misdemeanor when the circumstances involve willful creation or dissemination of intimate deepfakes without consent. Aggravated cases, such as repeat offenses, activity tied to extortion or financial gain, or distribution targeted at minors or vulnerable individuals, can escalate to felonies with the possibility of incarceration. The criminal provisions are a strong signal that the state views intimate deepfakes as a form of sexual exploitation rather than a mere prank. That signal matters to employers and platforms because it will influence how local police and prosecutors respond to victim complaints and how quickly investigations move once a report is made.
To avoid chilling legitimate uses of synthetic media, the statute includes narrow exemptions for law-enforcement training and investigations, certain medical and legal proceedings, and research conducted under appropriate safeguards. These carve-outs are not blanket permissions; they are tied to context and purpose. A defense based on satire or political commentary will be evaluated against the statute’s realism and consent requirements, and courts are likely to examine both the content and the circumstances of its presentation. The exemptions do not shelter a creator who cloaks a humiliating fabrication in a thin veneer of “commentary” while leaving viewers with the impression that the depiction is genuine.
Michigan already had a “revenge porn” statute addressing the disclosure of real intimate images without consent. The deepfake law extends those protections to manipulated media and brings civil and criminal enforcement into harmony for a digital era in which the image may never have existed in reality. For employers, that means misconduct involving real images and misconduct involving synthetic images now stands on equal footing from a risk perspective; both can trigger duties under harassment, hostile-environment, and retaliation laws.
On the federal side, the landscape is in motion. Section 230 of the Communications Decency Act continues to provide certain immunities for platforms regarding user-generated content, but state civil claims for intimate deepfakes can coexist with platform obligations to respond to lawful orders, to retain data in the face of litigation holds, and to take reasonable steps once on notice of unlawful material. If federal legislation addresses intimate deepfakes nationwide, Michigan’s statute will likely operate alongside those rules rather than being preempted outright, except where Congress expressly occupies the field. In practical terms, platforms should plan for state-law compliance that includes Michigan’s definitions and timelines while tracking federal developments that could add takedown or notice protocols.
Workplaces are where digital culture meets legal accountability. Even if an intimate deepfake originates off the clock, the moment it enters work channels (email, messaging platforms, collaboration tools, shared drives) or is wielded to harass an employee, it becomes a workplace issue. Michigan’s civil rights and anti-harassment laws require employers to take prompt, effective action when they know or should know about harassment. An intimate deepfake of an employee, contractor, or job applicant that circulates among coworkers can create a hostile environment, undermine career prospects, and cause lasting psychological harm. The new statute strengthens the argument that a reasonable employer must treat such incidents as serious misconduct warranting investigation and corrective action.
Liability risk is multi-layered. First, there is potential direct liability if a manager or supervisor participates in creating or spreading intimate deepfakes, or if company resources are used to do so. Second, there is exposure if HR or leadership receives a complaint and fails to respond promptly, allowing the content to continue circulating. Third, there is risk when an employer disciplines a victim who reports the incident or takes action to protect themselves; retaliation claims can be as damaging as the original misconduct. The lesson is simple: treat intimate deepfakes as you would any other form of sexual harassment, but add the urgency required by their viral nature.
Policy work is the foundation. Update your written anti-harassment, social-media, and acceptable-use policies to address synthetic sexual content explicitly. Define intimate deepfakes in plain language, prohibit creation or dissemination in any work-adjacent context, explain how employees should report incidents, and describe the range of disciplinary consequences. If your policies reference “images or videos of a sexual nature,” ensure that manipulated or AI-generated material is included within that scope.
Training should be refreshed to match the policy. Use human-centered examples that show how deepfakes arise in real life: a doctored image shared in a group chat; a composite posted to a burner account; a manipulated photo dropped into a Slack channel as a “joke.” Your managers should understand that they must escalate any such report immediately, preserve evidence, and avoid making promises about outcomes that HR or legal will ultimately control. Short, scenario-based modules can be slotted into annual harassment training or rolled out as micro-learning.
Investigation protocols deserve attention too. Identify who will triage and who will investigate, and equip them with a basic playbook: freeze team access to the channel where the content appears; capture screenshots and message logs with timestamps; request platform logs that show who posted or forwarded the item; and issue a litigation hold covering any potentially relevant devices or accounts. Investigators should avoid deleting content without preserving it first, even if everyone agrees it is harmful, because evidence is often needed to obtain court orders or to support disciplinary action. In parallel with the internal process, be ready to assist a victim who wishes to contact law enforcement or to pursue a civil injunction.
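To make the preservation step concrete, here is a minimal sketch of an evidence-intake script, assuming Python and an append-only JSON Lines manifest; the case label, file names, and manifest format are hypothetical, not a prescribed standard. Recording a cryptographic hash and a UTC timestamp at collection makes it easier to show later that screenshots and exported logs were not altered.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def preserve_evidence(case_id: str, files: list[str], manifest_path: str) -> None:
    """Record a SHA-256 hash and UTC timestamp for each captured item."""
    with open(manifest_path, "a", encoding="utf-8") as manifest:
        for name in files:
            data = Path(name).read_bytes()
            entry = {
                "case_id": case_id,
                "file": name,
                "sha256": hashlib.sha256(data).hexdigest(),
                "collected_at": datetime.now(timezone.utc).isoformat(),
            }
            # One JSON object per line; past entries are never rewritten.
            manifest.write(json.dumps(entry) + "\n")

# Hypothetical usage: capture first, hash immediately, and only then quarantine.
preserve_evidence("HR-2025-014", ["chat_export.html", "screenshot_01.png"], "manifest.jsonl")
```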
Bring-your-own-device arrangements complicate the response. If employees use personal phones for work chat apps, you may need to collect or review data on a device that your company does not own. Your policy should make clear that, as a condition of using personal devices for work communication, employees consent to reasonable evidence preservation related to workplace investigations, subject to legal and privacy safeguards. That clause should be narrowly tailored: it should authorize collection of relevant communications and metadata while emphasizing that private, unrelated content will not be reviewed.
Privacy also matters when assisting a victim. Employers should respect the victim’s autonomy in deciding whether to involve law enforcement. HR should offer options without pressure: internal investigation, protective steps to limit further spread inside the company, connection to external counsel, and information about the availability of restraining orders or civil suits under the new statute. Confidentiality should be maintained to the extent practicable, with disclosures only to those who need to know to effectuate the response.
Unionized workplaces and academic institutions present additional considerations. Where a collective bargaining agreement sets discipline procedures or investigatory timelines, employers must follow those procedures while still acting promptly to contain harm. Coordination with union representatives can help avoid unnecessary delays and preserve fairness for all parties. In higher education, “academic freedom” does not authorize harassment or exploitation; policies should state that while robust debate and critical inquiry are protected, intimate deepfakes are not an acceptable instrument of criticism or satire when they realistically depict an identifiable person without consent.
The new law will generate subpoenas and preservation demands. Employers should pre-assign a point of contact in legal or compliance to receive and process orders for takedowns, message archives, and account information. Document retention schedules should be reviewed to ensure that collaboration-tool logs, audit trails, and backups are preserved long enough to respond to a typical civil discovery timeline. Consider whether to adjust default message-retention settings in high-risk channels. At the same time, do not fall into the trap of excessive surveillance; the aim is to maintain reliable, lawful records of business communications so that, when a problem arises, you can respond in a measured way.
If you operate a platform that hosts user content (social networks, forums, gaming communities, messaging apps, marketplaces, or smaller community boards), Michigan’s law affects you in at least three ways. First, you will likely see an increase in user reports and legal requests tied to intimate deepfakes, including emergency takedown demands and court orders for removal. Second, you may need to adjust your terms of service and public-facing policies to prohibit intimate deepfakes explicitly, define them consistently with the statute, and explain how users can report violations. Third, you should refine your notice-and-action pipeline to respond quickly when an identifiable person reports a synthetic sexual depiction of themselves.
Operationally, platforms should build a transparent intake process: a single, well-publicized reporting channel; a checklist of information needed to evaluate a claim; identity verification steps to prevent abuse of the reporting system; and a clear, time-bound commitment to initial review. Moderation teams should have access to guidance on what constitutes a “realistic” depiction and how to weigh contextual clues that identify the depicted person. Repeat-offender policies and account-level sanctions should be aligned to the seriousness of the conduct, with heightened scrutiny for users who appear to be engaged in coordinated harassment.
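As one illustration, a structured intake record might look like the sketch below; the field names and the 24-hour initial-review target are assumptions made for the example, not statutory requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class DeepfakeReport:
    """One record per report; field names here are illustrative."""
    report_id: str
    content_url: str
    reporter_contact: str
    claims_self_depiction: bool      # reporter states they are the person depicted
    identity_verified: bool = False  # guards against abuse of the reporting channel
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def review_deadline(self, hours: int = 24) -> datetime:
        """Time-bound commitment to initial review; 24 hours is an assumed target."""
        return self.received_at + timedelta(hours=hours)

report = DeepfakeReport(
    report_id="R-1001",
    content_url="https://example.com/post/123",
    reporter_contact="reporter@example.com",
    claims_self_depiction=True,
)
print("Initial review due by:", report.review_deadline())
```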
Some platforms will reach for AI-based detection tools to flag potential intimate deepfakes. These tools can be part of a responsible strategy, but they cannot replace human judgment. False positives carry their own risks, including unjustified account suspensions and speech concerns. A layered approach works best: use hashing to prevent re-uploads of content that has already been identified as a deepfake; apply automated similarity checks to triage new reports; and ensure that trained reviewers make the final call based on the statute’s criteria and the available context. Where feasible, inform users about the outcome of their report and the reasons for the decision, a practice that can build trust and reduce perceptions of arbitrariness.
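The sketch below illustrates that layered approach. It assumes the open-source Pillow and ImageHash Python libraries for perceptual hashing; the similarity threshold and the sets of known hashes are placeholders you would tune and populate from previously confirmed cases.

```python
import hashlib

import imagehash       # third-party: pip install ImageHash
from PIL import Image  # third-party: pip install Pillow

KNOWN_EXACT: set[str] = set()                     # SHA-256 digests of confirmed deepfakes
KNOWN_PERCEPTUAL: list[imagehash.ImageHash] = []  # perceptual hashes of the same content
SIMILARITY_THRESHOLD = 8                          # max Hamming distance to flag; tune this

def triage_upload(path: str) -> str:
    """Route an upload: 'block', 'priority_review', or 'normal_queue'."""
    with open(path, "rb") as f:
        raw = f.read()
    # Layer 1: exact hashing stops byte-identical re-uploads of known content.
    if hashlib.sha256(raw).hexdigest() in KNOWN_EXACT:
        return "block"
    # Layer 2: perceptual hashing survives re-encoding, cropping, and light edits.
    phash = imagehash.phash(Image.open(path))
    if any(phash - known <= SIMILARITY_THRESHOLD for known in KNOWN_PERCEPTUAL):
        return "priority_review"  # a trained human reviewer makes the final call
    return "normal_queue"
```

Note the division of labor: automation blocks only exact re-uploads of content already adjudicated, while near-matches are escalated to people, because perceptual hashes produce false positives and the statute’s criteria (realism, identifiability, consent) require contextual judgment.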
The internet does not stop at state lines, but liability can attach where harm occurs or where content is accessible. If your platform serves users nationwide, you should assume that Michigan residents can invoke the state’s law even if your company is headquartered elsewhere. That means your takedown processes need to work at scale and across jurisdictions. You do not have to be perfect; you do have to be responsive, consistent, and able to show that you acted reasonably upon notice. Maintaining a log of reports, actions taken, and timelines will help if your moderation decision is later challenged.
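A minimal sketch of such a log, assuming an append-only JSON Lines file; the event names are hypothetical, and the essential property is that every report, action, and timestamp is written once and never rewritten.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "moderation_audit.jsonl"  # hypothetical append-only log file

def log_event(report_id: str, action: str, actor: str, note: str = "") -> None:
    """Append one timestamped moderation event; never edit past entries."""
    event = {
        "report_id": report_id,
        "action": action,  # e.g., "received", "quarantined", "removed", "declined"
        "actor": actor,    # reviewer ID, or "automation" for automated steps
        "note": note,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_event("R-1001", "received", "automation")
log_event("R-1001", "quarantined", "reviewer_42", "pending human review")
```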
Organizations often overlook the contractual side of deepfake risk. Review your cyber liability, media liability, and employment practices policies to see whether claims involving synthetic sexual content are covered. Some policies exclude intentional conduct but still cover defense costs and certain vicarious-liability scenarios. In vendor agreements, audit whether indemnity and data-security clauses would help in a case where a third-party tool used by your organization is implicated in creating or disseminating intimate deepfakes. For platforms that host user content, ensure your creator-terms and API-terms require compliance with laws prohibiting intimate deepfakes and reserve the right to remove content and suspend access quickly when violations occur.
When a victim decides to pursue criminal charges or civil relief, employers and platforms are often asked to assist. That assistance might include preserving logs, identifying account holders, or implementing court-ordered takedowns. Cooperation should follow a well-defined protocol that protects user privacy and respects legal process. Verify the scope and authenticity of any order; designate a custodian of records; and produce only what is requested, nothing more. Where orders appear overbroad or ambiguous, seek clarification rather than guessing. Being a responsible steward of data is not at odds with assisting a victim; it is part of the same responsibility.
The most effective compliance programs are those people actually understand. Draft a short, plain-language announcement to employees or users explaining the law’s purpose, the basic rule (no creating or sharing intimate deepfakes without consent), the reasons the organization takes it seriously, and how people can report violations. Avoid legal jargon. Emphasize dignity and safety. Pair the message with concise internal FAQs that answer predictable questions: What if someone posted a doctored image of me? What happens after I report? Will I get in trouble for saving a copy to show HR or to a moderator? If you maintain a public trust and safety page, include a section on intimate deepfakes, and use clear, compassionate language to outline the path from report to resolution. Your video and thumbnail assets can be embedded as evergreen educational content that stays available for future reference.
Because momentum matters, here is how a well-run response typically unfolds. As soon as a report lands, acknowledge receipt and advise the reporter not to share the content further. Preserve the evidence in place by capturing URLs, message IDs, timestamps, and screengrabs. Quarantine the post or thread so it is not publicly visible while review proceeds, but do not delete the only copy before preservation is complete. Alert legal or HR to evaluate whether a litigation hold is necessary. If the victim requests, provide information about obtaining an emergency court order under Michigan’s statute to halt further distribution. When the facts indicate internal policy violations, begin a fair, prompt investigation, interviewing those who shared the content and documenting their knowledge and intent. Communicate interim steps to the victim so they are not left in the dark. When a decision is reached, whether disciplinary action, account suspension, or referral to law enforcement, explain the outcome within the boundaries of privacy rules.
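One way to keep that ordering honest is to encode it, so that content cannot be quarantined or removed before preservation is complete. The stage names and transitions in this sketch are illustrative, not drawn from the statute.

```python
from enum import Enum

class Stage(Enum):
    REPORTED = "reported"
    PRESERVED = "preserved"      # URLs, message IDs, timestamps, screengrabs captured
    QUARANTINED = "quarantined"  # hidden from public view, original copy retained
    RESOLVED = "resolved"        # discipline, suspension, or referral communicated

# Allowed transitions enforce the order described above: preserve before
# quarantining, and never reach a resolution that skips preservation.
ALLOWED = {
    Stage.REPORTED: {Stage.PRESERVED},
    Stage.PRESERVED: {Stage.QUARANTINED},
    Stage.QUARANTINED: {Stage.RESOLVED},
    Stage.RESOLVED: set(),
}

def advance(current: Stage, target: Stage) -> Stage:
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition: {current.name} -> {target.name}")
    return target

stage = Stage.REPORTED
stage = advance(stage, Stage.PRESERVED)    # evidence captured first
stage = advance(stage, Stage.QUARANTINED)  # then the post is hidden
```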
No statute can freeze technology in place. New models will generate more convincing fabrications, and bad actors will experiment with ways to evade detection. The good news is that legal norms are converging: non-consensual intimate fabrications are increasingly treated as a form of sexual exploitation and harassment, not as edgy humor or protected speech. Michigan’s law codifies that social reality and provides concrete levers for victims to pull. For employers and platforms, the next phase is not just about checking the compliance box; it is about building cultures and systems that prevent and deter abuse before it becomes a headline or a lawsuit.
Starting August 26, 2025, intimate deepfakes are not only a policy violation in Michigan; they are a legal liability with real civil and criminal consequences. Employers should update policies, training, and investigation playbooks. Platforms should refine notice-and-action pipelines, clarify their terms, and prepare for increased demand on moderation and trust-and-safety teams. Everyone should communicate clearly and compassionately about the harm these fabrications cause and the steps available to stop them. If you need a concise primer to distribute, our video explainer and thumbnail artwork can serve as a model for internal communications and public education.
Contact Tishkoff
Tishkoff PLC specializes in business law and litigation. For inquiries, contact us at www.tish.law/contact/, and check out Tishkoff PLC’s website (www.Tish.Law/), eBooks (www.Tish.Law/e-books), blogs (www.Tish.Law/blog), and references (www.Tish.Law/resources).
Sources:
- Michigan Legislature, “House Bill 4047 of 2025 (Public Act 11 of 2025), Protection from Intimate Deep Fakes Act,” bill text and history. https://www.legislature.mi.gov/Bills/Bill?ObjectName=2025-HB-4047
- Michigan Senate Fiscal Agency, “Bill Analysis: HB 4047 and HB 4048 — Protection from Intimate Deep Fakes Act,” 2025. www.legislature.mi.gov/documents/2025-2026/billanalysis/Senate/pdf/2025-SFA-4047-S.pdf
- Michigan Public (NPR affiliate), “ ‘Deepfake’ videos and images involving sexual situations are now against Michigan law,” August 28, 2025, reporting by Tracy Samilton. https://www.michiganpublic.org/criminal-justice-legal-system/2025-08-28/deepfake-videos-and-images-involving-sexual-situations-are-now-against-michigan-law
- WDIV Local 4 / ClickOnDetroit, “Gov. Whitmer signs bills that criminalize sexually explicit deepfakes,” August 26, 2025. https://www.facebook.com/Local4/posts/gov-gretchen-whitmer-signed-bills-into-law-that-make-it-a-crime-to-create-and-di/1207566478082586/
- Bridge Michigan, “ ‘Deepfake’ pornography ban passes Michigan House with bipartisan support,” June 13, 2024. https://bridgemi.com/author/jordyn-hermani/page/18/
