Algorithms define online shopping, enabling individualized product recommendations driven by customer data. To date, this technology has spurred little litigation, and few, if any, courts have explicitly ruled on responsibilities related to AI-driven product recommendation software.

Still, developers should be aware of the potential legal risks this novel technology presents. For example: What happens if AI recommends a product to a shopper and the product injures the shopper because of a defect? Can the shopper bring a product liability claim, such as one for strict liability or negligence, against the algorithm’s developer? Although precedent on this issue is sparse, the risk appears limited.

Is Software a Product Subject to Strict Liability?

As a threshold issue, recommendation software (even when embedded in an IoT device) may not be recognized as the type of “tangible personal property” that is subject to product liability claims. “Intangible” items, such as software, do not normally qualify as a “product,” but are closer to “thought and expression” — in this case, thought and expression about products available on the Internet.

Does a Recommendation Cause Injury?

Even if a plaintiff convinced a court that recommendation software constituted a product, it would still be difficult to demonstrate that a shopping algorithm legally caused an alleged injury. Causation under the law would involve two questions: (1) If the algorithm had not suggested the product, would the shopper still have purchased it and been injured? (2) Can the algorithm be treated as the “proximate cause” of the injury?

The first question may depend greatly on the product, algorithm, online platform, and customer at issue (e.g., whether a shopper was already inclined to purchase the product). But the second question — on proximate causation — will likely be a significant barrier for any consumer seeking to hold a company liable for algorithmic recommendations.

Proximate causation normally requires that an alleged act or omission “substantially contribute” to the plaintiff’s injury and demands some direct relation between the asserted injury and actionable conduct. An AI-driven shopping algorithm, in contrast, is relatively passive. It suggests a product based on consumer-specific data or aggregate market data.
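
To make that point concrete, below is a minimal, hypothetical sketch of such a ranking step; the function name, inputs, and weighting are illustrative assumptions rather than any actual platform’s implementation. Nothing in it evaluates whether a product is safe or defective; it only orders candidates by inferred interest.

```python
from collections import Counter

def recommend(purchase_history, market_popularity, catalog, top_n=3):
    """Rank catalog items using only consumer-specific and aggregate data.

    Hypothetical sketch: purchase_history is a list of the shopper's past
    items, market_popularity maps product IDs to aggregate sales counts,
    and catalog is the candidate pool. Note that the scoring below never
    inspects product safety; it only estimates likely interest.
    """
    # Consumer-specific signal: how often the shopper bought each category.
    affinity = Counter(item["category"] for item in purchase_history)

    def score(item):
        # Aggregate market signal plus a weighted personal-affinity term.
        return market_popularity.get(item["id"], 0) + 2 * affinity[item["category"]]

    return sorted(catalog, key=score, reverse=True)[:top_n]
```

The output is simply an ordering of candidates; the purchase decision, and any subsequent injury, lie well downstream of it.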

The links between the manufacture or design of a product, the development of an algorithm, the algorithm’s output recommending the product, the consumer’s purchasing decision, and the ultimate injury are attenuated, involving multiple actors, decisions, and events. As a result, a court is unlikely to find that an algorithm substantially contributed to an injury merely by recommending a product, and causation will likely be a problematic element of any tort claim based on AI-enabled product suggestions.

What Duty Exists in Recommending Products?

Additionally, to prove negligence (as distinct from strict liability), a shopper must demonstrate that an AI developer violated a “legal duty to exercise due care” in creating its software. Although the standard for determining the existence of a legal duty varies from state to state, no court has found that developers have a duty to design algorithms that ensure the safety of recommended products and prevent the purchase of defective ones.

Such a standard of care may in fact be infeasible, as it would demand a level of knowledge about the suggested products that developers cannot practically be expected to have. Still, it may be helpful for developers to consider the types of inputs their algorithms consume, and what safety-related limits might be placed on each input, so that any such duty would be satisfied; one hypothetical sketch of that approach follows below.
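
By way of illustration only, the sketch below extends the hypothetical ranking function above with one such safety-related limit: products appearing in an assumed recall feed are excluded before anything is scored. The recalled_ids input, and the feed behind it, are assumptions made for this example rather than any established or required standard of care.

```python
from collections import Counter

def recommend_with_limits(purchase_history, market_popularity, catalog,
                          recalled_ids, top_n=3):
    """Apply one safety-related limit to an input before ranking.

    recalled_ids is an assumed input: a set of product IDs the platform has
    flagged (for example, from a public recall feed). Excluding those items
    up front is one way a developer might document care over its inputs.
    """
    # Safety-related limit applied to the catalog input itself.
    eligible = [item for item in catalog if item["id"] not in recalled_ids]

    # Same hypothetical scoring as in the earlier sketch.
    affinity = Counter(item["category"] for item in purchase_history)

    def score(item):
        return market_popularity.get(item["id"], 0) + 2 * affinity[item["category"]]

    return sorted(eligible, key=score, reverse=True)[:top_n]
```

Analogous limits could be weighed for other inputs, such as purchase histories or popularity data, though whether any of this is legally required remains, as noted above, unsettled.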

Emily Ullman

Emily Ullman is an experienced civil litigator with a focus on complex product liability and mass tort matters, particularly for clients in the life sciences, consumer goods, and technology sectors. She represents major manufacturers and suppliers in high-stakes disputes across federal and state courts and regularly advises on transactions, regulatory issues, and strategic decisions that carry potential tort exposure.

Emily has defended some of the nation’s most significant product liability, class action, and multidistrict litigations. Her experience includes serving as national coordinating counsel to Mead Johnson in litigation around Enfamil premature infant formula; representing McKesson Corporation at trial in the opioids litigation; and defending AstraZeneca and Bristol Myers Squibb in an MDL involving Type 2 diabetes medication. She currently represents TikTok in consumer protection litigation challenging the platform’s suitability for minors.

Emily has been widely recognized for her accomplishments. She is ranked by Chambers USA (2022–2025), with clients describing her as “great on her feet as an oral advocate,” “a really sharp, tough cross-examiner,” and “one of the smartest people I have known — talented in mass tort and class action litigation.” She has also been named a Law360 Product Liability Rising Star and recognized multiple times by AmLaw Litigation Daily, including as a “Litigator of the Week” runner-up for obtaining complete victories on summary judgment—affirmed on appeal—in consolidated federal and state litigations surrounding the diabetes medication Onglyza. She was also recognized for her role in defending TikTok against state-led consumer protection and First Amendment challenges.