Salary.com Compensation & Pay Equity Law Review

Let's Not Use AI for Sexual Harassment

NEWSLETTER VOLUME 3.26 | June 27, 2025

Editor's Note

Let's Not Use AI for Sexual Harassment

Yes. It is possible to create photos and videos of just about anyone that show them naked and having sex. Of course it's possible. It was probably one of the first deepfake use cases. The porn industry is often on the leading edge of tech. And that's been true for a very long time. Although they may be less enamored with the DIY versions.

But let's not create photos of naked people or other sexual material, and let's not distribute it to others, particularly at work.

I'm trying to be as calm, measured, and lawyerly as possible here. But c'mon. This is a horrible thing to do to someone, especially someone who doesn't know about it, never agreed, and is now the subject of views and comments by work colleagues. It violates privacy law, can violate laws about how our names and likenesses are used, and can be the basis for criminal charges. Mostly, though, it shows a complete disregard for human dignity and personal autonomy.

Dignity and autonomy are fundamental to being free and human. Without them, we are merely property. We outlawed that a long time ago—at least on the books. Many of us are still fighting for the rights to simply be ourselves and make decisions about what happens with our bodies.

If I were advising a company where deepfakes were used to sexually harass someone or were distributed behind their back, I would immediately terminate everyone involved. This type of material does not belong at work. And it does not belong in the world, unless the person depicted knows and explicitly consents.

Last, a quick note to Jerry Zhang (who co-authored the article below with Ivie Serious at Littler). Congratulations on finishing law school and good luck on the bar exam. You did excellent work on this article, which provides great research and background on the issue, how some states are addressing it, what employers should watch for, and how to respond.

- Heather Bussing

At a Glance

  • AI-generated videos, images, and audio are being weaponized in the workplace to harass, impersonate, and intimidate employees, often with devastating consequences.
  • While there are no workplace-specific federal laws that address deepfake harassment, new laws like the TAKE IT DOWN Act and Florida’s Brooke’s Law, signed in May and June 2025, respectively, target this growing digital threat.
  • Outdated policies, untrained staff, and unclear protocols leave organizations vulnerable. Now is the time to audit, train, and prepare.

    The landscape of workplace harassment has evolved beyond physical offices, after-hours texts and off-site events. Employers now face a sophisticated and deeply unsettling threat: deepfake technology. Once the domain of tech experts, AI-powered tools that generate hyper-realistic but fabricated videos, images, and audio are now widely accessible — even to those with minimal technical skills.

    As of 2023, 96% of deepfakes were sexually explicit, overwhelmingly targeting women without their consent. By 2024, nearly 100,000 explicit deepfake images and videos were being circulated daily across more than 9,500 websites. Alarmingly, a significant portion of these featured underage individuals.

    While image-based sexual abuse is not new, AI has dramatically amplified its scale and impact. In the workplace, deepfakes can be weaponized to harass, intimidate, retaliate, or destroy reputations—often with limited recourse under traditional employment policies.1

    For HR leaders, legal counsel, and executives, the question is no longer if deepfakes will affect your workforce but when, and how prepared your organization is to respond.

    The Rise of Deepfakes in the Workplace

    Deepfakes are synthetic media: content created or manipulated using machine learning, particularly deep learning models trained on large datasets of images, voices, or videos, to create false (and typically malicious) content. With minimal effort, bad actors can now impersonate coworkers, executives, or clients—making deepfakes a potent tool for fraud, impersonation, and harassment.

    Employers are increasingly encountering:
  • fake explicit videos falsely attributed to employees;
  • voice deepfakes used to send inappropriate messages; and
  • manipulated recordings simulating insubordination or offensive conduct.

These incidents cause severe reputational and psychological harm to victims and place employers in a difficult position when making credibility determinations, especially when they are relying on outdated policies and investigative procedures.2

Evolving Legal Framework

While federal law has yet to catch up, there are still existing sources of litigation that employers should keep in mind:

  • Employers may be liable under Title VII if deepfakes affect workplace dynamics and create a hostile work environment, even if the content was created off-hours.
  • Failure to act on known or reasonably foreseeable deepfake harassment may also expose employers to negligent supervision or retention claims.

Emerging Federal/State Laws and Initiatives

  • The federal TAKE IT DOWN Act, 47 U.S. Code § 223(h) (signed May 19, 2025): This bipartisan law provides a streamlined process for minors and victims of non-consensual intimate imagery to request removal from online platforms. Platforms must comply within 48 hours or face penalties.3
  • Florida’s “Brooke’s Law” (HB 1161) (signed June 10, 2025): Requires platforms to remove non-consensual deepfake content within 48 hours or face civil penalties under Florida’s Deceptive and Unfair Trade Practices Act.4
  • The EEOC’s 2024–2028 Strategic Enforcement Plan emphasizes scrutiny of technology-driven discrimination and digital harassment.5
  • Proposed amendments to Federal Rule of Evidence 901 and a proposed new FRE 707 would require parties to authenticate AI-generated evidence and meet expert witness standards for machine outputs, especially in cases involving deepfakes or algorithmic decision-making.

While these laws primarily target content platforms, they signal a growing legislative intolerance for deepfake abuse—especially when it intersects with sexual harassment or reputational harm. Employers should treat the creation or circulation of deepfake content as serious misconduct, regardless of where or when it occurs.

Key Employer Risks and Blind Spots

Employers face several legal and operational vulnerabilities:

  • Policy Gaps: Most handbooks don’t address synthetic media or manipulated content.
  • Delayed Response: Without clear protocols, investigations may be slow or ineffective.
  • Liability Exposure: Employers may face lawsuits from employees or third parties harmed by unaddressed deepfake harassment.
  • Reputational Harm: Public exposure of deepfake incidents can erode trust and damage workplace culture.

What Employers Can Do Now

Employers conducting internal investigations have typically assumed that any photo, video, or audio of concerning behavior was real, putting the onus on the accused to prove it wasn't. Deepfakes upend that reflex, and at least for now, most victims of deepfakes are fighting against that presumption. The most practical mind-shift for employers concerns whom to believe and how they evaluate the basis of that belief. With that in mind, employers can take the following steps:

  1. Audit Existing Policies.
    Review harassment, acceptable use, and social media policies to ensure they cover synthetic content and image-based abuse.
  2. Develop Clear Response Plans.
    Establish protocols for investigating and responding to digital impersonation and synthetic harassment.
  3. Train Key Personnel.
    Equip HR, legal, and IT teams to recognize and respond to deepfake incidents effectively.
  4. Update Employee Training.
    Incorporate deepfake awareness into harassment prevention and cybersecurity training.
  5. Review Insurance Coverage.
    Confirm whether your employment practices liability or cyber insurance policies cover synthetic media-related claims.
  6. Monitor Legal Developments.
    Stay informed on evolving federal and state legislation, including New York’s expanding AI regulatory framework.

Conclusion

Deepfakes represent a fast-evolving threat to workplace safety, dignity, and trust. But with preemptive planning, employers can mitigate risk, protect employees, and uphold a respectful workplace culture. By treating synthetic media as a serious form of harassment—and updating policies, training, and response protocols accordingly—organizations can stay ahead of the curve and demonstrate leadership in this emerging area.

*Pre-Bar Associate

Footnotes

1 See Chase Perkins, et al., Synthetic Reality & Deep Fakes: Considerations for Employers and Implications of the Rise of Deep Fakes in the Workplace, Littler Report (June 2019).

2 Jesse Dill, AI and Deepfakes Complicate Evidence in Workplace Investigations, Bloomberg Law (Feb. 27, 2024).

3 TAKE IT DOWN Act, S. 146, 119th Cong. (2025-2026), Congress.gov, Library of Congress.

4 Florida’s “Brooke’s Law” (HB 1161).

5 See EEOC Strategic Enforcement Plan FY 2024–2028, EEOC (Aug. 22, 2022).

This content is licensed and was originally published by JD Supra
