Lindsey Tonsager

Lindsey Tonsager co-chairs the firm’s global Data Privacy and Cybersecurity practice. She advises clients in their strategic and proactive engagement with the Federal Trade Commission, the U.S. Congress, the California Privacy Protection Agency, and state attorneys general on proposed changes to data protection laws, and regularly represents clients in responding to investigations and enforcement actions involving their privacy and information security practices.

Lindsey’s practice focuses on helping clients launch new products and services that implicate the laws governing the use of artificial intelligence, data processing for connected devices, biometrics, online advertising, endorsements and testimonials in advertising and social media, the collection of personal information from children and students online, e-mail marketing, disclosures of video viewing information, and new technologies.

Lindsey also assesses privacy and data security risks in complex corporate transactions where personal data is a critical asset or data processing risks are otherwise material. In light of a dynamic regulatory environment where new state, federal, and international data protection laws are always on the horizon and enforcement priorities are shifting, she focuses on designing risk-based, global privacy programs for clients that can keep pace with evolving legal requirements and efficiently leverage the clients’ existing privacy policies and practices. She conducts data protection assessments to benchmark against legal requirements and industry trends and proposes practical risk mitigation measures.

Earlier this month, the Kentucky legislature passed comprehensive privacy legislation, H.B. 15 (the “Act”), joining California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Oregon, Texas, Florida, Delaware, New Jersey, and New Hampshire.  The Act is awaiting the Governor’s signature. If signed into

Continue Reading Kentucky Passes Comprehensive Privacy Bill

On January 9, the FTC published a blog post discussing privacy and confidentiality obligations for companies that provide artificial intelligence (“AI”) services.  The FTC described “model-as-a-service” companies as those that develop, host, and provide pre-trained AI models to users and businesses through end-user interfaces or application programming interfaces (“APIs”).  According to the FTC, when model-as-a-service

Continue Reading FTC on Models-as-a-Service

Ahead of its December 8 board meeting, the California Privacy Protection Agency (CPPA) has issued draft “automated decisionmaking technology” (ADMT) regulations.  The CPPA has yet to initiate the formal rulemaking process and has stated that it expects to begin formal rulemaking next year.  Accordingly, the draft ADMT regulations are subject to change.  Below are the key takeaways:

Continue Reading CPPA Releases Draft Automated Decisionmaking Technology Regulations

On October 3, the Federal Trade Commission (“FTC”) released a blog post titled Consumers Are Voicing Concerns About AI, which discusses consumer concerns about artificial intelligence (“AI”) that the FTC received via its Consumer Sentinel Network, as well as priority areas the agency is watching.  Although the FTC acknowledged that it did not investigate whether the cited concerns in fact corresponded to actual AI applications and practices, it found that these concerns fell into three general categories:

Continue Reading FTC Publishes Blog Post Summarizing Consumer Concerns with AI Systems

Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes.  The use of AI for these purposes has been scrutinized and will now be regulated in New York City.  The New York City Department of Consumer

Continue Reading Artificial Intelligence & NYC Employers: New York City Seeks Publication of Proposed Rules That Would Regulate the Use of AI Tools in the Employment Context

On July 17, 2020, the High-Level Expert Group on Artificial Intelligence set up by the European Commission (“AI HLEG”) published The Assessment List for Trustworthy Artificial Intelligence (“Assessment List”). The purpose of the Assessment List is to help companies identify the risks of AI systems they develop, deploy or procure, and implement appropriate measures to mitigate those risks.

The Assessment List is not mandatory, and there is not yet a self-certification scheme or other formal framework built around it that would enable companies to signal their adherence to it.  The AI HLEG notes that the Assessment List should be used flexibly; organizations can add or ignore elements as they see fit, taking into consideration the sector in which they operate.  As we discussed in our previous blog post here, the European Commission is currently developing policies and legislative proposals relating to trustworthy AI, and it is possible that the Assessment List may influence the Commission’s thinking on how organizations should operationalize requirements relating to this topic.

Continue Reading AI Update: EU High-Level Working Group Publishes Self Assessment for Trustworthy AI

Last week, Senators Amy Klobuchar (D-MN) and Lisa Murkowski (R-AK) introduced the Protecting Personal Health Data Act (S. 1842), which would establish new privacy and security rules, administered by the Department of Health and Human Services (“HHS”), for technologies that collect personal health data, such as wearable fitness trackers, social-media sites focused on health data or conditions, and direct-to-consumer genetic testing services, among other technologies.  Specifically, the legislation would direct the HHS Secretary to issue regulations relating to the privacy and security of health-related consumer devices, services, applications, and software.  These new regulations would also cover a new category of personal health data that is not otherwise protected health information under HIPAA.
Continue Reading IoT Update: Senators Introduce Legislation to Regulate Privacy and Security of Wearable Health Devices and Genetic Testing Kits

The Federal Communications Commission received over 300 comments from the public regarding its proposals to allow broadcast television stations to voluntarily participate in an auction of their spectrum to mobile broadband providers and to involuntarily repack remaining television stations into a smaller television spectrum band.  Broadcast television station groups, individual stations, mobile broadband providers, wireless microphone operators, proponents of unlicensed spectrum uses, equipment manufacturers, radio astronomers, wireless medical device makers, and a variety of trade associations weighed in on the Commission’s proposals.  There was significant disagreement on a number of the FCC’s proposals — including the extent to which viewers’ existing television services should be preserved in the repacking, the timeframe to complete the repacking, and how to address wireless microphones and unlicensed uses in the spectrum band.  However, at least three key areas of general industry agreement emerged:
Continue Reading SpectrumWatch: 3 Key Areas of Industry Agreement Regarding the FCC’s Spectrum Auction and Repacking Proposals

The Federal Communications Commission published a reminder that service providers and equipment manufacturers offering advanced communications services — such as e-mail, instant messaging, Voice over Internet Protocol, and interoperable video conferencing services — or telecommunications services subject to Section 255 of the Communications Act must, by January 30, 2013, begin maintaining records of the efforts they take to make their services and equipment accessible.
Continue Reading Recordkeeping Reminder for Service Providers and Equipment Manufacturers Offering Advanced Communications Services and Telecommunications Services

Path, a social networking mobile app, has agreed to enter into a settlement with the Federal Trade Commission (“FTC”) regarding charges that the company deceived consumers by collecting contact information from users’ mobile address books without notice and consent.  The agreement also resolves charges that the company violated the Children’s Online Privacy Protection Act (“COPPA”) by collecting personal information from children under 13 years old without parental notice and consent.  Path did not admit any liability by entering into the consent decree, which is for settlement purposes only.

The FTC alleged that the Path application included an “Add Friends” feature that allowed users to make new connections within the app.  Users were given three options when using the “Add Friends” functionality:  “Find friends from your contacts,” “Find Friends from Facebook,” or “Invite friends to join Path by email or SMS.”  Regardless of which option was chosen, Path automatically collected and stored contact information from the address book on the user’s mobile phone.  The FTC argued that this practice was contrary to representations made in the company’s privacy policy that only certain technical information, such as IP address, browser type, and site activity information, was automatically collected from the user.  Under the settlement, Path agreed to implement a comprehensive privacy program and obtain biennial, independent privacy assessments for the next twenty years.
Continue Reading FTC Settles Deception, COPPA Charges Against Social Networking App Path