This quarterly update highlights key legislative, regulatory, and litigation developments in the fourth quarter of 2023 and early January 2024 related to technology issues.  These included developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), data privacy, and cybersecurity.  As noted below, some of these developments provide companies with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Executive Developments on AI

The Executive Branch and U.S. federal agencies had an active quarter, which included the White House’s October 2023 release of the Executive Order (“EO”) on Safe, Secure, and Trustworthy Artificial Intelligence.  The EO declares a host of new actions for federal agencies designed to set standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; protect vulnerable groups such as consumers, patients, and students; support workers; promote innovation and competition; advance American leadership abroad; and effectively regulate the use of AI in government.  The EO builds on the White House’s prior work surrounding the development of responsible AI.  Concerning privacy, the EO sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques and strengthening privacy-preserving research and technologies (e.g., cryptographic tools).  Regarding equity and civil rights, the EO calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination.  The EO also sets out requirements for developers of AI systems, including requiring companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.

Federal Legislative Activity on AI

Congress continued to evaluate AI legislation and proposed a number of AI bills, though none of these bills are expected to progress in the immediate future.  For example, members of Congress continued to hold meetings on AI and introduced bills related to deepfakes, AI research, and transparency for foundational models.

  • Deepfakes and Inauthentic Content:  In October 2023, a group of bipartisan senators released a discussion draft of the NO FAKES Act, which would prohibit persons or companies from producing an unauthorized digital replica of an individual in a performance or hosting unauthorized digital replicas if the platform has knowledge that the replica was not authorized by the individual depicted. 
  • Research:  In November 2023, Senator Thune (R-SD), along with five bipartisan co-sponsors, introduced the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312), which would require covered internet platforms that operate generative AI systems to provide their users with clear and conspicuous notice that the covered internet platform uses generative AI. 
  • Transparency for Foundational Models:  In December 2023, Representative Beyer (D-VA-8) introduced the AI Foundation Model Transparency Act (H.R. 6881), which would direct the Federal Trade Commission (“FTC”) to establish transparency standards for foundation model deployers in consultation with other agencies.  The standards would require companies to provide consumers and the FTC with information on a model’s training data and mechanisms, as well as information regarding whether user data is collected in inference.
  • Bipartisan Senate Forums:  Senator Schumer’s (D-NY) AI Insight Forums, which are a part of his SAFE Innovation Framework, continued to take place this quarter.  As part of these forums, bipartisan groups of senators met multiple times to learn more about key issues in AI policy, including privacy and liability, long-term risks of AI, and national security.

Federal Regulatory Updates

  • Federal Communications Commission (“FCC”):  The FCC adopted a Notice of Inquiry (“NOI”) to better understand how AI impacts illegal and unwanted robocalls and texts.  The NOI sought to understand AI benefits and risks to allow the FCC to better combat harms, utilize AI’s benefits, and protect consumers.  In addition, the FCC announced that it will re-establish the Communications Security, Reliability, and Interoperability Council (“CSRIC”), which will focus on how AI and machine learning can enhance the security, reliability, and integrity of communications networks.  This will be the FCC’s ninth charter of CSRIC, with an expected first meeting in June 2024.
  • FTC:  The FTC announced an exploratory challenge to address the harms associated with AI-enabled voice cloning, citing concerns about the ways that voice cloning technology could be used to harm consumers.  The FTC also hosted a virtual roundtable on the Creative Economy and Generative AI, during which speakers emphasized their view that the FTC must treat generative AI like any previous technological development that could harm consumers and competition.
  • National Institute of Standards & Technology (“NIST”):  NIST released a Request for Information (“RFI”) seeking information to assist in carrying out several of its responsibilities under the EO, including a request for public input on guidelines for AI safety and security, AI content, and responsible global standards for AI development.  Public comments are due by February 2nd.
  • U.S. Copyright Office (“USCO”):  The USCO received more than 10,300 initial and reply comments in response to its NOI on AI and Copyright, which sought input on a range of legal and technical topics, including training, transparency and recordkeeping, copyrightability, infringement, fair use, and labeling.  Additionally, the USCO issued a Notice of Proposed Rulemaking (“NPRM”) that set forth proposals for renewed and new exemptions to the Digital Millennium Copyright Act’s (“DMCA”) anti-circumvention provisions.  One proposed exemption would permit circumvention of technological measures that control access to “copyrighted generative AI models solely for the purpose of researching biases within the models,” including the sharing of research, techniques, and methodologies that expose and address such biases.
  • Cybersecurity and Infrastructure Security Agency (“CISA”):  CISA announced that it was jointly releasing Guidelines for Secure AI System Development with the United Kingdom’s National Cyber Security Centre.  The Guidelines are aimed at providers of AI systems and are focused on four main areas: (1) secure design; (2) secure development; (3) secure deployment; and (4) secure operation and maintenance.  The Guidelines aim to “help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorized parties.”

AI Litigation Activities

Plaintiffs have brought and tested various theories in lawsuits against companies developing AI models and tools, including copyright infringement, violations of the DMCA, negligence, privacy harms, unjust enrichment, breach of contract, trademark infringement, right of publicity violations, and defamation, among others.  A number of high-profile lawsuits have focused on copyright infringement, generally alleging that: (a) the defendants developed or used generative AI models, including large language models (“LLMs”), that were trained on copyrighted works without the copyright owners’ consent; and (b) the model and/or its outputs infringe.  Q4 litigation developments include, for example:

  • Copyright Dismissals: On November 20th, the district court in Kadrey v. Meta Platforms Inc., 3:23-cv-03417 (N.D. Cal.), dismissed without prejudice most of the claims brought by plaintiff Sarah Silverman (who filed a separate case against Microsoft and OpenAI) and other authors alleging copyright infringement based on the use of their works to develop and deploy LLMs.  The court found that the plaintiffs had not sufficiently alleged that the LLMs themselves are directly infringing derivative works, or that there is sufficient similarity between the contents of any LLM output and their copyrighted works.  Additionally, on October 30th, the district court in Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal.) dismissed without prejudice all of the plaintiffs’ claims except for direct copyright infringement of training materials.
  • Amended Complaints: The Kadrey plaintiffs filed an amended complaint on December 12th, which narrowed their claims to alleged copying of the plaintiffs’ books for use as training material with supplemental supporting allegations.  The Andersen plaintiffs filed an amended complaint on November 29th, which included additional examples of AI-generated outputs and also included a trade dress infringement theory based on plaintiffs’ alleged artistic styles.
  • New Complaints by Authors & Publishers: On December 27th, the complaint in New York Times v. Microsoft, 1:23-cv-11195 (S.D.N.Y.) was filed, alleging that the defendants unlawfully used millions of New York Times articles to train LLMs.  The complaint includes several examples of ChatGPT and Bing Chat allegedly generating “near-verbatim copies of significant portions” of copyrighted articles, and asserts that using such materials to train LLMs does not serve a transformative purpose.  Microsoft and OpenAI face other similar lawsuits, including (1) Sancton v. OpenAI, Inc., 1:23-cv-10211 (S.D.N.Y.); (2) Tremblay v. OpenAI, Inc., 3:23-cv-3223 (N.D. Cal.); and (3) Silverman v. OpenAI, Inc., 3:23-cv-3416 (N.D. Cal.).  The plaintiffs in these cases are various authors who allege, among other things, that the defendants reproduced their copyrighted works to train LLMs without authorization.  Similarly, on October 17th, a group of authors including former Arkansas Governor Mike Huckabee brought suit against Meta, Microsoft, and Bloomberg in Huckabee v. Meta Platforms, Inc., No. 1:23-cv-09152 (S.D.N.Y.).
  • October 2023 Complaint by Music Publishers:  A group of eight music publishers filed suit on October 18th in Concord Music Group, Inc. v. Anthropic PBC, No. 3:23-cv-01092 (M.D. Tenn.), alleging that their copyrighted lyrics were directly and vicariously infringed by Anthropic’s Claude AI tool.  Anthropic has since moved to dismiss or transfer the suit for lack of personal jurisdiction and venue.
  • December 2023 Antitrust Complaint: The complaint in Helena World Chronicle, LLC v. Google et al., 1:23-cv-03677 (D.D.C.), filed on December 11th, asserts claims under federal antitrust law.  The plaintiff alleges that Google abused its monopoly power in the search advertising market by, in part, scraping material to create a generative AI program, launching the Bard chatbot without sufficient development in order to undermine competition, and introducing “search generative experiences,” which respond to user searches by directing users to a summary of other websites rather than to the websites themselves.

II. Connected & Automated Vehicles

  • The White House EO:  The White House’s EO on Safe, Secure, and Trustworthy Artificial Intelligence, referenced above, included a number of CAV-related provisions.  The EO directed the Secretary of Transportation to, within 30 days, direct the Nontraditional and Emerging Transportation Technology Council to assess the need for information and guidance regarding the use of AI in transportation, including by supporting existing and future initiatives to pilot transportation-related applications of AI.  Under the EO, the Secretary of Transportation also must direct appropriate Federal Advisory Committees of the Department of Transportation (“DOT”) to provide advice on the safe and responsible use of AI in transportation by the end of January 2024.  Finally, within 180 days of the EO, the Secretary of Transportation must direct the Advanced Research Projects Agency-Infrastructure to explore the transportation-related opportunities and challenges of AI, including software-defined AI enhancements impacting autonomous mobility ecosystems. 
  • NHTSA Notice and Request for Comment on Driving Automation Systems:  The National Highway Traffic Safety Administration (“NHTSA”) took steps to increase its understanding of potential safety issues implicated by driving automation systems (“DAS”), issuing a notice and request for comments on a request for approval of a new information collection regarding human interaction with DAS on December 12th.  NHTSA proposed to perform research involving the collection of information from the public as part of a multi-year effort to learn about how humans interact with DAS, which will “support NHTSA in understanding the potential safety challenges associated with human-DAS interactions, particularly in the context of mixed traffic interactions where some vehicles have DAS and others do not” and where some vehicles are equipped with DAS that have varying levels of automation.  The proposed project will examine driving performance measures (such as takeover time and reaction time), measure understanding of and trust in the automation through questionnaires, and measure risk taking through questionnaires.
  • FCC Letters to Carmakers Regarding Connectivity and Domestic Violence:  In early January 2024, the FCC took steps to increase its understanding of certain safety issues implicated by connected vehicles by sending letters to several automotive manufacturers regarding the potential for wireless connectivity and location data to negatively impact partners in abusive relationships.  To help the FCC understand how it can better fulfill its duties under the Safe Connections Act – which provides the FCC with authority to assist survivors of domestic violence and abuse with secure access to communications – the FCC requested that letter recipients respond to a series of questions about current and planned connectivity options, policies in place to remove access to connected apps at the request of domestic violence survivors, and how the company retains, shares, and/or sells a driver’s geolocation data collected by connected apps.
  • Funding Opportunities:  The federal government announced two funding opportunities this past quarter.  On November 15th, the Federal Transit Administration announced the opportunity to apply for $4.7M in FY23 funding under the Innovative Coordinated Access and Mobility pilot program.  This funding opportunity “seeks to improve coordination to enhance access and mobility to vital community services for older adults, people with disabilities, and people of low income.”  The Notice of Funding Opportunity provides that if an applicant is “proposing to implement autonomous vehicles or other innovative motor vehicle technology, the application should demonstrate that all vehicles will comply with applicable safety requirements,” including those administered by NHTSA and the Federal Motor Carrier Safety Administration (e.g., the Federal Motor Vehicle Safety Standards and Federal Motor Carrier Safety Regulations).  Applicants must submit completed proposals by February 13th.  Additionally, on December 13th, the DOT announced a $25M funding opportunity for its Rural Autonomous Vehicle research program.  Accredited universities may apply for the six-year cooperative agreement program.  Recipients will use program funding to conduct research on the benefits and responsible application of AVs and associated mobility technologies in rural and Tribal communities.
  • Stakeholder Advocacy:  On the stakeholder front, on December 7th, eighteen organizations sent a letter to DOT Secretary Pete Buttigieg stating that the CAV industry is “at a critical juncture and in need of strong leadership from USDOT” and urging the Department to “use existing authorities to assert its jurisdiction over the design, construction, and performance of motor vehicles, including those deploying emerging technology.”  The letter specifically encouraged DOT to move forward with a Notice of Proposed Rulemaking on the ADS-equipped Vehicle Safety, Transparency, and Evaluation Program (“AV STEP”) – a program announced in July wherein NHTSA would consider applications for deploying noncompliant ADS vehicles, subject to review processes, terms, and conditions, to collect data and enhance research into AV safety and performance.  DOT has yet to respond to the letter or issue a Notice of Proposed Rulemaking.
  • Updated FHWA Manual:  Finally, on December 18th, the Federal Highway Administration published the 11th Edition of the Manual on Uniform Traffic Control Devices 2023.  The Manual includes considerations for agencies to prepare roadways for automated vehicle technologies and to support the safe deployment of automated driving systems.

III. Data Privacy & Cybersecurity

Privacy

With respect to privacy, California and Colorado state regulators advanced a number of regulations to better define the scope of their respective state privacy laws, including rules for opt-out preference signals and other new regulations, and the FTC brought a number of enforcement actions.

  • New Rules for Opt-out Signals:  At its December 8th board meeting, the California Privacy Protection Agency (“CPPA”) included a legislative proposal that would require vendors of web browsers to include a feature that would allow consumers to exercise data subject rights through opt-out preference signals.  The Colorado Attorney General also announced that the Global Privacy Control (“GPC”) will become the first universal opt-out mechanism the Attorney General considers valid under the Colorado Privacy Act.
  • Additional CCPA Regulations:  The CPPA also proposed draft rules on additional topics, including opt-out and access rights for automated decisionmaking technology, privacy risk assessments, and cybersecurity audits.  As a next step, the CPPA will initiate formal rulemaking, at which point, the public can provide comments on the proposed rules.
  • Key FTC Enforcement Actions:  The FTC continued to bring enforcement actions related to companies’ privacy practices.  For example, on December 19th, the FTC announced that it reached a settlement with Rite Aid Corporation and Rite Aid Headquarters Corporation to resolve allegations that the companies violated Section 5 of the FTC Act.  The FTC alleged that the companies used facial recognition in stores without taking reasonable measures to prevent harm to consumers, including by failing to test the accuracy of the facial recognition technology and failing to oversee and train employees.

Cybersecurity

Cybersecurity regulation and enforcement continued to be a priority for both federal and state regulators, including with respect to infrastructure and finance.

  • Infrastructure:  On October 16th, the U.S. Cybersecurity and Infrastructure Security Agency (“CISA”) released updated guidance on Security-by-Design and Security-by-Default principles for technology and software manufacturers, which it originally published in April 2023.  The latest guidance – published in coordination with the U.S. Federal Bureau of Investigation, U.S. National Security Agency, and thirteen international partners – provides additional recommendations for software manufacturers (including manufacturers of artificial intelligence software systems and models) to improve the security of their products. 
  • Finance:  On October 27th, the FTC amended its Safeguards Rule to require non-banking financial institutions to report data security breaches.  The amendment requires non-bank financial institutions to report when they discover that information affecting 500 or more people has been acquired without authorization.  Additionally, on November 1st, the New York Department of Financial Services (“NYDFS”) announced that it had finalized an amendment to its “first-in-the-nation” cybersecurity regulation.  The amendment implemented many of the changes that NYDFS originally proposed in prior versions of the regulation.  These include: (1) removing the previously proposed requirement that each class A company conduct independent audits of its cybersecurity program “at least annually” – the regulation does require each class A company to conduct such audits based on its risk assessments; (2) requiring confirmation that a covered entity’s management has allocated sufficient resources to implement and maintain a cybersecurity program; and (3) removing a proposed additional requirement to report certain privileged account compromises to NYDFS while retaining requirements for covered entities to report certain ransomware deployments or extortion payments.

We will continue to update you on meaningful developments in these quarterly updates and across our blogs.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has almost three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for almost twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Nicholas Xenakis

Nick Xenakis draws on his Capitol Hill experience to provide regulatory and legislative advice to clients in a range of industries, including technology. He has particular expertise in matters involving the Judiciary Committees, such as intellectual property, antitrust, national security, immigration, and criminal justice.

Nick joined the firm’s Public Policy practice after serving most recently as Chief Counsel for Senator Dianne Feinstein (D-CA) and Staff Director of the Senate Judiciary Committee’s Human Rights and the Law Subcommittee, where he was responsible for managing the subcommittee and Senator Feinstein’s Judiciary staff. He also advised the Senator on all nominations, legislation, and oversight matters before the committee.

Previously, Nick was the General Counsel for the Senate Judiciary Committee, where he managed committee staff and directed legislative and policy efforts on all issues in the Committee’s jurisdiction. He also participated in key judicial and Cabinet confirmations, including of an Attorney General and two Supreme Court Justices. Nick was also responsible for managing a broad range of committee equities in larger legislation, including appropriations, COVID-relief packages, and the National Defense Authorization Act.

Before his time on Capitol Hill, Nick served as an attorney with the Federal Public Defender’s Office for the Eastern District of Virginia. There he represented indigent clients charged with misdemeanor, felony, and capital offenses in federal court throughout all stages of litigation, including trial and appeal. He also coordinated district-wide habeas litigation following the Supreme Court’s decision in Johnson v. United States (invalidating the residual clause of the Armed Career Criminal Act).

Phillip Hill

Phillip Hill focuses on complex copyright matters with an emphasis on music, film/TV, video games, sports, theatre, and technology.

Phillip’s global practice includes all aspects of copyright and the DMCA, as well as trademark and right of publicity law, and encompasses the full spectrum of litigation, transactions, counseling, legislation, and regulation. He regularly represents clients in federal and state court, as well as before the U.S. Copyright Royalty Board, Copyright Office, Patent & Trademark Office, and Trademark Trial & Appeal Board.

Through his work at the firm and prior industry and in-house experience, Phillip has developed a deep understanding of his clients’ industries and regularly advises on cutting-edge topics like generative artificial intelligence, the metaverse, and NFTs. Phillip has been recognized by Billboard as one of its Top Music Lawyers.

In addition to his full-time legal practice, Phillip serves as Chair of the ABA Music and Performing Arts Committee, frequently speaks on emerging trends, is active in educational efforts, and publishes regularly.

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including related to data privacy and advertising topics. She also helps clients articulate their perspectives through the rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Shayan Karbassi

Shayan Karbassi is an associate in the firm’s Washington, DC office. He represents and advises clients on a range of cybersecurity and national security issues. As a part of his cybersecurity practice, Shayan assists clients with cyber and data security incident response and preparedness, government and internal investigations, and regulatory compliance. He also regularly advises clients with respect to risks stemming from U.S. criminal and civil anti-terrorism laws and other national security issues, to include investigating allegations of terrorism-financing and litigating Anti-Terrorism Act claims.

Shayan maintains an active pro bono litigation practice with a focus on human rights, freedom of information, and free media issues.

Prior to joining the firm, Shayan worked in the U.S. national security community.

Olivia Dworkin

Olivia Dworkin minimizes regulatory and litigation risks for clients in the medical device, pharmaceutical, biotechnology, eCommerce, and digital health industries through strategic advice on complex FDA issues, helping to bring innovative products to market while ensuring regulatory compliance.

With a focus on cutting-edge medical technologies and digital health products and services, Olivia regularly helps new and established companies navigate a variety of state and federal regulatory, legislative, and compliance matters throughout the total product lifecycle. She has experience counseling clients on the development, FDA regulatory classification, and commercialization of digital health tools, including clinical decision support software, mobile medical applications, general wellness products, medical device data systems, administrative support software, and products that incorporate artificial intelligence, machine learning, and other emerging technologies.

Olivia also assists clients in advocating for legislative and regulatory policies that will support innovation and the safe deployment of digital health tools, including by drafting comments on proposed legislation, frameworks, whitepapers, and guidance documents. Olivia keeps close to the evolving regulatory landscape and is a frequent contributor to Covington’s Digital Health blog. Her work also has been featured in the Journal of Robotics, Artificial Intelligence & Law, Law360, and the Michigan Journal of Law and Mobility.

Prior to joining Covington, Olivia was a fellow at the University of Michigan Veterans Legal Clinic, where she gained valuable experience as the lead attorney successfully representing clients at case evaluations, mediations, and motion hearings. At Michigan Law, Olivia served as Online Editor of the Michigan Journal of Gender and Law, president of the Trial Advocacy Society, and president of the Michigan Law Mock Trial Team. She excelled in national mock trial competitions, earning two Medals for Excellence in Advocacy from the American College of Trial Lawyers and being selected as one of the top sixteen advocates in the country for an elite, invitation-only mock trial tournament.

Jorge Ortiz

Jorge Ortiz is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and the Technology and Communications Regulation Practice Groups.

Jorge advises clients on a broad range of privacy and cybersecurity issues, including topics related to privacy policies and compliance obligations under U.S. state privacy regulations like the California Consumer Privacy Act.

Jemie Fofanah

Jemie Fofanah is an associate in the firm’s Washington, DC office. She is a member of the Privacy and Cybersecurity Practice Group and the Technology and Communication Regulatory Practice Group. She also maintains an active pro bono practice with a focus on criminal defense and family law.

Andrew Longhi

Andrew Longhi advises national and multinational companies across industries on a wide range of regulatory, compliance, and enforcement matters involving data privacy, telecommunications, and emerging technologies.

Andrew’s practice focuses on advising clients on how to navigate the rapidly evolving legal landscape of state, federal, and international data protection laws. He proactively counsels clients on the substantive requirements introduced by new laws and shifting enforcement priorities. In particular, Andrew routinely supports clients in their efforts to launch new products and services that implicate the laws governing the use of data, connected devices, biometrics, and telephone and email marketing.

Andrew assesses privacy and cybersecurity risk as a part of diligence in complex corporate transactions where personal data is a key asset or data processing issues are otherwise material. He also provides guidance on generative AI issues, including privacy, Section 230, age-gating, product liability, and litigation risk, and has drafted standards and guidelines for large language models to follow. Andrew focuses on providing risk-based guidance that can keep pace with evolving legal frameworks.

Vanessa Lauber

Vanessa Lauber is an associate in the firm’s New York office and a member of the Data Privacy and Cybersecurity Practice Group, counseling clients on data privacy and emerging technologies, including artificial intelligence.

Vanessa’s practice includes partnering with clients on compliance with federal and state privacy laws and FTC and consumer protection laws and guidance. Additionally, Vanessa routinely counsels clients on drafting and developing privacy notices and policies. Vanessa also advises clients on trends in artificial intelligence regulations and helps design governance programs for the development and deployment of artificial intelligence technologies across a number of industries.

Zoe Kaiser

Zoe Kaiser is an associate in the firm’s San Francisco office, where she is a member of the Litigation and Investigations, Copyright and Trademark Litigation, and Class Actions Practice Groups. She advises on cutting-edge topics such as generative artificial intelligence.

Zoe maintains an active pro bono practice, focusing on media freedom.

Madeleine Dolan

Madeleine (Maddie) Dolan is a litigation associate in the Washington, DC office. She is a member of the Product Liability and Mass Torts litigation group and has an active pro bono criminal defense practice.

Maddie is a stand-up litigator, having first-chaired a pro bono trial as a member of a team that secured their client's acquittal on first-degree murder charges; she has also taken a deposition in a commercial litigation matter. In addition, she has extensive experience drafting dispositive motions, leading document reviews, developing expert reports, and preparing for depositions and trial.

Prior to joining Covington, Maddie served as a law clerk to U.S. District Judge Mark R. Hornak of the Western District of Pennsylvania in Pittsburgh, PA. She also previously worked as a consultant and strategic communications director, managing marketing campaigns for federal government agencies.