Earlier this week, the FCC released a Second Report and Order revising and expanding requirements to identify and disclose whether any “leased” broadcast program is sponsored by an agent of a foreign government.  The new order followed a decision in 2022 by the U.S. Court of Appeals for the D.C. Circuit to strike down a component of the original rule adopted by the FCC.  The new rule was adopted on a 3-to-2 vote, with the FCC’s two Republican members dissenting.  While the FCC has underscored that these rules are intended to provide broadcasters with flexible and simple options for compliance, failure to comply with these new information gathering and retention requirements could lead to enforcement action, including monetary forfeitures. 

Continue Reading FCC Adopts Revised Foreign Sponsorship Disclosure Requirements

A.    Starting point in Germany

Why is the classification of employees relevant? In Germany, it has considerable consequences: these range from the applicability of employee protection standards (classically, protection against dismissal) to potential criminal liability for a client who turns out to be the employer and has not paid social security contributions. Compliance with the legal framework in this area is therefore highly relevant.

B.    The European way

Such risks are increasingly important at the European level as well. At the end of April, the national labor ministries adopted the compromise proposal for the European platform work directive. Once the draft has been formally adopted, promulgated and published, the member states will have two years to transpose it into national law, likely by 2026. The directive seeks to resolve the unclear classification of crowdworkers as employees versus self-employed persons. It is intended to give misclassified crowdworkers the benefit of employee protections and to protect the personal data of all crowdworkers. It clarifies the terms “platform work” and “intermediary”, and it stipulates a presumption of an employment relationship between the platform worker and the intermediary. The person concerned only has to provide factual evidence indicating employment; the intermediary can then rebut the presumption. Official EU sources estimate that around 5.5 million of 28 million crowdworkers are misclassified; other sources estimate around 1.7 to 4.1 million. However, the directive does not create a new, lower-threshold “employee” test for all purposes: the national definition of employee remains unaffected. In addition, the European legislator leaves implementation in tax, social security and criminal law to the member states, which may reduce the risk of wider liability and criminal responsibility.

C.    Responsibility of the Member States

The obligation of the member states to provide suitable guidelines on the correct classification of crowdworkers is a positive step. The competent authorities must make clear in which cases a crowdworker is an employee and what consequences the presumption, including its rebuttal, has for the classification. However, the directive does not itself resolve which crowdworkers are employees and which are self-employed. Rather, the current draft leaves it to the member states to determine employment status using suitable procedures that draw on local laws, collective bargaining regulations and customary law, taking into account the (rebuttable) presumption in favor of employee status. The concrete transposition of the presumption is deliberately left to the member states; the recitals merely provide the usual principles, such as effectiveness and efficiency. It will be interesting to see how the member states implement systems for presenting and checking the evidence that triggers the presumption of an employment relationship.

D.    AI in the draft directive

It is also noteworthy that the draft directive includes provisions on the use of AI in the employment context. The new regulations mandate that the termination of a crowdworker must involve a human final decision and cannot be based solely on an algorithm or automated decision-making system. Although this prohibition aligns with the prevailing interpretation under the GDPR, which prohibits termination solely based on an AI decision as per Article 22 of the GDPR, the draft directive does not provide exceptions like the GDPR does (e.g., consent). The rules in the crowdworker directive clarify that such exceptions will not apply to crowdworkers. Additionally, crowdworkers are entitled to receive explanations for algorithmic decisions in individual cases.

E.    Conclusion

Since the Covid pandemic, platforms such as food delivery and driving services have become indispensable, and the corresponding apps are part of the standard smartphone repertoire. Crowdworkers certainly need protection, but treating them all alike and forcing them into an employment relationship, even against their contractual will, does not do justice to the individual legal assessment of whether employment status truly applies. Member states should carefully consider these consequences when transposing the directive.

Tomorrow, the Federal Senate of the Brazilian National Congress may have its first vote on the country’s new artificial intelligence (AI) legal framework, which takes a human rights, risk management, and transparency approach.

The bill, to be marked up by the Senate Temporary Committee on Artificial Intelligence (“CTIA”), creates a broad and detailed legal framework. It contains rules on the rights of affected persons and groups, risk categorization and management, governance of AI systems, civil liability, penalties for non-compliance, and copyright protection. It also includes specific provisions for government use of AI, best practices and self-regulation, and communication of serious security incidents. Finally, it establishes an inter-agency regulatory system at the federal government level, whose main regulator will be chosen by the executive branch.

Continue Reading Key Vote Expected on Brazil’s Artificial Intelligence Legal Framework

This year, the UK’s Competition and Markets Authority (“CMA”) is set to gain a range of new enforcement powers under the Digital Markets, Competition and Consumers (“DMCC”) Act (the final text is now available here). The DMCC Act received Royal Assent on 24 May 2024. However, with certain exceptions, the Act’s provisions will not come into force until secondary legislation is passed. The CMA initially expected its new responsibilities to become operational in the Autumn, but this timeline may be delayed due to the UK’s election on 4th July. On the same day as the DMCC Act became law, the CMA published for consultation its new Digital Markets Competition Regime Guidance.

An outline of the key provisions of the DMCC Act can be found here. As the CMA sets the groundwork for exercising its powers under this new regime, this blog post considers five practical considerations for firms active in the UK.

Key takeaways:

  1. The CMA will administer the new regime through a specialist Digital Markets Unit, which was established over three years ago.
  2. The DMCC Act may diverge from the EU’s Digital Markets Act, both in the companies being designated, and the obligations imposed on designated companies.
  3. The interplay between the DMCC regime and existing regulatory obligations – particularly the GDPR – is likely to raise practical challenges.
  4. We expect the CMA to exercise its powers under the digital markets regime alongside existing antitrust tools (which the DMCC Act amends).
  5. The CMA’s jurisdictional thresholds to review mergers under the UK’s merger control regime will change as a result of the DMCC Act.
Continue Reading The UK’s New Digital Markets Regime: Some Key Takeaways

The Digital Markets, Competition and Consumers (“DMCC”) Act received Royal Assent on 24 May 2024 (the final text is now available here). The DMCC Act will only enter into force, however, when secondary commencement legislation has been enacted (with some minor exceptions). This is expected to occur in Autumn 2024, but it could be delayed due to the General Election taking place on 4th July. This secondary legislation could also stagger the dates on which separate provisions become effective.

This legislation ushers in a new rulebook for the largest digital firms active in the UK, alongside some consequential changes to the broader UK competition law framework. In relation to digital markets specifically:

  • The Competition and Markets Authority (“CMA”) may designate certain companies active in digital markets in the UK as holding “Strategic Market Status” (“SMS“) in relation to a specific digital activity. Companies designated with SMS will need to comply with tailored conduct requirements imposed by the CMA and report certain transactions to the CMA ahead of completion.
  • The CMA can make “pro-competition interventions” (“PCIs“) to impose requirements to remedy or prevent conduct in relation to digital activities which the CMA considers to have an adverse effect on competition.
  • The CMA will be able to impose fines of up to 10% of worldwide group turnover for non-compliance with SMS conduct requirements or “pro-competition orders”.

More broadly, the DMCC Act includes some amendments to the UK’s existing competition law regime which apply to all sectors of the economy. In particular, the DMCC Act introduces a new merger control jurisdictional review threshold designed to capture vertical and conglomerate mergers where the parties do not overlap, applicable to any industry. Additionally, the DMCC Act introduces a fast-track route to a Phase 2 review without the need to concede at Phase 1 that there is a realistic prospect that the merger gives rise to a substantial lessening of competition.

This post outlines each of these changes. For an analysis of some key practical considerations for companies in light of the DMCC Act, please see this separate post here.

Continue Reading Overview of the UK’s New Digital Markets Regime

On June 6, the Texas Attorney General published a news release announcing that the Attorney General has opened an investigation into several car manufacturers.  The news release states that the investigation was opened “after widespread reporting that [car manufacturers] have secretly been collecting mass amounts of data about drivers directly from their vehicles and then selling that data to third parties.”  Further, the release states that “car manufacturers and the third parties to whom they sold data are being instructed to produce documents relevant to their conduct . . . [and] to produce documents showing the disclosures they made to customers about the extent of their data collection practices and subsequent sale of their customers’ data.”  This announcement follows an earlier news release from the Attorney General describing the launch of a data privacy and security initiative, which will enforce Texas’s privacy protection laws, including the Texas Data Privacy and Security Act that goes into effect on July 1.

The Federal Communications Commission (FCC) recently adopted two Notices of Apparent Liability (NALs) in connection with its investigation into AI-based “deepfake” calls made to New Hampshire voters on January 21, 2024.  The NALs follow a cease-and-desist letter sent on February 6 to Lingo Telecom, LLC (Lingo), a voice service provider that originated the calls, demanding that it stop originating unlawful robocall traffic on its network, which we previously blogged about here.

Continue Reading FCC Proposes Fines for AI-based “Deepfake” Robocalls Before New Hampshire Primary

On May 17, 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “Convention”).  The Convention represents the first international treaty on AI that will be legally binding on the signatories.  The Convention will be open for signature on September 5, 2024. 

The Convention was drafted by representatives from the 46 Council of Europe member states, the European Union and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay).  The Convention is not directly applicable to businesses – it requires the signatories (the “CoE signatories”) to implement laws or other legal measures to give it effect.  The Convention represents an international consensus on the key aspects of AI legislation that are likely to emerge among the CoE signatories.

Continue Reading Council of Europe Adopts International Treaty on Artificial Intelligence

On May 20, 2024, a proposal for a law on artificial intelligence (“AI”) was laid before the Italian Senate.

The proposed law sets out (1) general principles for the development and use of AI systems and models; (2) sectorial provisions, particularly in the healthcare sector and for scientific research for healthcare; (3) rules on the national strategy on AI and governance, including designating the national competent authorities in accordance with the EU AI Act; and (4) amendments to copyright law. 

We provide below an overview of the proposal’s key provisions.

Continue Reading Italy Proposes New Artificial Intelligence Law

Updated May 28, 2024.  Originally posted May 10, 2024.

The U.S. Federal Communications Commission (FCC) is set to reopen the public comment period on potential further amendments to its orbital debris mitigation rules, providing space industry stakeholders with a new opportunity to provide input on regulations with far-reaching implications.  Further illustrating the FCC’s commitment to leadership in regulating commercial space operations, stakeholders have until Thursday, June 27 to provide input on the agency’s regulation of orbital debris.  Today’s Federal Register sets this comment deadline, as well as a cutoff of Friday, July 12 for any reply comments.

Continue Reading FCC’s Space Bureau Seeks Further Input on Regulation of Orbital Debris; Comments Due June 27