On January 9, the FTC published a blog post discussing privacy and confidentiality obligations for companies that provide artificial intelligence (“AI”) services.  The FTC described “model-as-a-service” companies as those that develop, host, and provide pre-trained AI models to users and businesses through end-user interfaces or application programming interfaces (“APIs”).  According to the FTC, when model-as-a-service companies misrepresent how customer data is used, omit material facts related to the use and collection of customer data, or adopt data practices that harm competition, they may be exposed to enforcement action. 

  • Misrepresentation of Data Practices.  The FTC stated that model-as-a-service companies have an obligation to abide by their privacy commitments to users and customers, including promises not to use customer data for training or updating AI models, regardless of how or where those commitments are made. 
  • Failure to Disclose Data Practices.  The FTC added that model-as-a-service companies may not omit material facts that would affect a customer’s decision to purchase their services, including the company’s collection and use of customer data. 
  • Harm to Competition from Data Practices.  Finally, the FTC stated that misrepresentations, material omissions, and misuse related to a model-as-a-service company’s data practices could undermine competition.    

The FTC’s blog post is not legally binding, but it is another example of how the FTC is trying to influence the AI industry and reiterates that AI is an enforcement priority for the agency.

Libbie Canter

Libbie Canter represents a wide variety of multinational companies on managing privacy, cyber security, and artificial intelligence risks, including helping clients with their most complex privacy challenges and the development of governance frameworks and processes to comply with U.S. and global privacy laws. She routinely supports clients on their efforts to launch new products and services involving emerging technologies, and she has assisted dozens of clients with their efforts to prepare for and comply with federal and state laws, including the California Consumer Privacy Act, the Colorado AI Act, and other state laws. As part of her practice, she also regularly represents clients in strategic transactions involving personal data, cybersecurity, and artificial intelligence risk and represents clients in enforcement and litigation postures.

Libbie represents clients across industries, but she also has deep expertise in advising clients in highly regulated sectors, including financial services and digital health companies. She counsels these companies — and their technology and advertising partners — on how to address legacy regulatory issues and the cutting-edge issues that have emerged with industry innovations and data collaborations.

Chambers USA 2025 ranks Libbie in Band 3 Nationwide for both Privacy & Data Security: Privacy and Privacy & Data Security: Healthcare. Chambers USA notes that Libbie is “incredibly sharp and really thorough. She can do the nitty-gritty, in-the-weeds legal work incredibly well but she also can think of a bigger-picture business context and help to think through practical solutions.”

Lindsey Tonsager

Lindsey Tonsager co-chairs the firm’s global Data Privacy and Cybersecurity practice. She advises clients in their strategic and proactive engagement with the Federal Trade Commission, the U.S. Congress, the California Privacy Protection Agency, and state attorneys general on proposed changes to data protection laws, and regularly represents clients in responding to investigations and enforcement actions involving their privacy and information security practices.

Lindsey’s practice focuses on helping clients launch new products and services that implicate the laws governing the use of artificial intelligence, data processing for connected devices, biometrics, online advertising, endorsements and testimonials in advertising and social media, the collection of personal information from children and students online, e-mail marketing, disclosures of video viewing information, and new technologies.

Lindsey also assesses privacy and data security risks in complex corporate transactions where personal data is a critical asset or data processing risks are otherwise material. In light of a dynamic regulatory environment where new state, federal, and international data protection laws are always on the horizon and enforcement priorities are shifting, she focuses on designing risk-based, global privacy programs for clients that can keep pace with evolving legal requirements and efficiently leverage the clients’ existing privacy policies and practices. She conducts data protection assessments to benchmark against legal requirements and industry trends and proposes practical risk mitigation measures.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experience to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.