Joint Public Inquiry by the Federal Trade Commission and the US Department of Justice on Potential Additional Guidance Regarding Collaboration Among Competitors

Updated guidance should recognize that in the AI market, collaboration among firms can be not merely benign but affirmatively procompetitive

Agencies: Federal Trade Commission and US Department of Justice
Comment Period Opens: February 23, 2026
Comment Period Closes: April 24, 2026
Comment Submitted: April 21, 2026
Docket No. ATR-2026-0001

I thank the Federal Trade Commission and the US Department of Justice (“the Agencies”) for the opportunity to comment on their public inquiry on potential additional guidance regarding collaboration among competitors.1

Established in 1980, the Mercatus Center at George Mason University serves as a leading university-based hub for market-oriented research, dedicated to connecting academic insights with real-world policy challenges. Through its graduate programs, research initiatives, and economic analysis, Mercatus works to deepen understanding of how markets function and how they can improve lives. Its mission is to advance knowledge about the institutions that support prosperity and to identify lasting solutions that remove obstacles to individual freedom, peace, and economic well-being. This comment reflects that mission and is not submitted on behalf of any particular interest group. Rather, it is intended to inform and support the decision-making process.

I am a senior research fellow at the Mercatus Center. My research focuses on competition policy, regulation, international trade, and intellectual property. I am a former general counsel of the Federal Trade Commission (2018–21) and an adjunct professor at the Scalia Law School at George Mason University.

Discussion

The Agencies have sought public comment on whether to issue updated guidance on collaborations among competitors, with that guidance building on the now-withdrawn 2000 Antitrust Guidelines for Collaborations Among Competitors.2 The Agencies’ February 23, 2026, joint public inquiry is timely and welcome. In markets defined by rapid technological change, high fixed costs, and complementary assets—especially artificial intelligence (“AI”) and related digital technologies—collaboration among firms can be not merely benign, but affirmatively procompetitive.

From a market-oriented law-and-economics perspective, the central antitrust question is not whether competitors coordinate in some formal sense, but whether the collaboration is likely to reduce consumer welfare by facilitating output restriction, price coordination, exclusion, or the suppression of innovation. That distinction matters enormously in dynamic markets. Competitor collaborations may sometimes facilitate cartel behavior, but they may also reduce transaction costs, internalize spillovers, pool complementary capabilities, accelerate innovation, and expand output. Antitrust guidance should be designed to separate these categories with clarity, not to chill efficient coordination through ambiguity or speculation.

This comment therefore urges the Agencies to adopt updated guidance that gives greater certainty to four categories of AI-related collaboration that are often procompetitive and socially valuable: (1) research-and-development joint ventures and similar innovation collaborations in nascent technologies; (2) infrastructure collaborations needed to build the land, energy, and physical capacity required for data centers and advanced compute; (3) standards, safety-testing, and interoperability coordination; and (4) information-sharing arrangements directed to privacy-enhancing technologies, trusted privacy benchmarks, and data portability. In each of these settings, firms’ conduct should ordinarily be analyzed under the rule of reason and, where appropriate, protected by safe harbors or explicit enforcement presumptions of legality.

That general approach is consistent with existing statutory policy. Congress enacted the National Cooperative Research and Production Act (NCRPA)3 to promote innovation and competitiveness by clarifying the rule-of-reason treatment of covered joint ventures and standards-development organizations and by limiting damages exposure for properly notified ventures. Updated guidance should build on that institutional framework rather than adopting a more suspicious stance toward collaboration in AI and other nascent technologies.

The most serious policy risk in this area is overdeterrence. When firms are uncertain whether beneficial collaboration will later be deemed unlawful coordination, they will predictably forgo projects whose private returns are uncertain even if the expected social returns are substantial. In innovation markets, such false positives are unusually costly because the lost products, infrastructure, and standards may never materialize, and because the relevant markets are too fluid to assume that collaboration among current participants necessarily dampens future rivalry. Updated guidance should therefore be explicit about error costs and should recognize that static market-share snapshots are poor guides to the competitive consequences of collaboration in technologically dynamic sectors.

I. AI and Nascent-Technology R&D Collaborations

The Agencies should expressly state that bona fide AI and nascent-technology R&D joint ventures are presumptively procompetitive when they are reasonably limited to research, testing, validation, and pre-commercialization development and when they do not involve unnecessary restraints on downstream competition. The economics are straightforward. Research generates positive spillovers; firms cannot usually appropriate the full social returns from foundational discovery; and innovation projects in AI often require the combination of complementary assets, specialized labor, compute resources, and highly uncertain investment. Joint ventures can mitigate these market failures by pooling inputs and internalizing some of the gains from innovation.

This logic is especially strong in AI. The innovation process spans chips, cloud infrastructure, model training, evaluation, security, privacy, and downstream deployment. No single firm necessarily possesses all relevant capabilities, and many necessary inputs are indivisible or costly to replicate. Collaboration may therefore lower barriers to entry and experimentation by permitting the sharing of technical infrastructure, testing environments, and evaluation tools that no individual firm would efficiently build alone.

Updated guidance should say clearly that the Agencies ordinarily will not challenge an R&D collaboration where the venture is directed to legitimate research or pre-commercialization objectives, ancillary restraints are reasonably necessary to the project, participants remain free to compete independently in downstream markets, and any exchange of competitively sensitive information unrelated to the project is subject to firewalls and other safeguards.4 The guidance should also make clear that the relevant inquiry is functional rather than formal: The fact that firms are horizontal competitors in some dimension should not itself create suspicion if the collaboration expands innovative capacity or reduces duplication in socially valuable research.

The Agencies should be particularly careful in nascent and dynamic areas, where market boundaries are fluid and the identity of eventual winners is highly uncertain. In these settings, antitrust risks are frequently overstated owing to static conceptions of competition. A collaboration among firms today may intensify rivalry tomorrow by speeding technological development, creating interoperable tools, or enabling smaller firms to participate in downstream markets. The law should not assume that every coordination among current rivals threatens durable market power in markets that barely yet exist.

II. AI Infrastructure Collaborations

The Agencies should also provide safe harbors or strong presumptions of legality for collaborations that help scale the domestic infrastructure required for the AI economy. The United States will not lead in AI through software innovation alone. It must also build the physical assets on which advanced model training and deployment depend: land, energy supply, transmission access, interconnection, cooling, networking, and large-scale data-center facilities. These projects are capital-intensive, lumpy, and often delayed by coordination failures.

From a law-and-economics perspective, joint procurement and related infrastructure collaboration are not inherently suspect. Aggregating demand can lower transaction costs, solve holdout problems, support efficient investment, and accelerate the creation of productive capacity. In the AI context, purchasing collaborations or other joint arrangements may help firms rapidly develop vacant land, secure sufficient electricity, coordinate construction inputs, or make large-scale site development economically viable. The relevant effect of such arrangements is often increased output and faster capacity expansion, not supracompetitive pricing.

The Agencies should therefore state that they ordinarily will not challenge infrastructure collaborations aimed at expanding productive capacity where participants do not use the arrangement to coordinate downstream prices, allocate customers, suppress labor-market competition, or exclude rivals without technical or investment-based justification. If some degree of exclusivity is necessary to support sunk investment in a site or facility, guidance should explain that such exclusivity will be assessed under a rule-of-reason framework and in light of the capacity-creating efficiencies of the collaboration.

This point has added force because public and private analyses increasingly identify electricity demand, interconnection, and large-scale data-center buildout as important constraints on AI deployment.5 Antitrust guidance should not become an additional bottleneck when collaboration can help bring new domestic capacity online more quickly.

III. AI Standards, Safety Testing, and Interoperability

Updated guidance should further explain that standards development, technical testing, interoperability coordination, and safety-related collaboration in AI are ordinarily procompetitive. Antitrust law has long recognized that standards can reduce transaction costs, improve compatibility, facilitate entry, and expand consumer choice. Congress likewise extended rule-of-reason treatment and damages limitations to standards-development organizations engaged in standards activity.6 Those principles map naturally onto AI.

AI markets feature acute coordination needs. Firms may need common testing methodologies, red-teaming protocols,7 incident-reporting taxonomies, provenance and watermarking approaches, secure evaluation procedures, or shared baseline expectations for robustness and cybersecurity. Such collaboration can improve trust, reduce external harms, and create a more competitive ecosystem by lowering uncertainty for users, developers, and complementary providers. The fact that technical coordination may influence product design does not make it anticompetitive; standards almost always shape design choices. The relevant question is whether the process is used to advance legitimate technical aims or instead to exclude rivals without sound justification.

The same is true of interoperability. Properly designed interoperability protocols often weaken lock-in, promote switching and multi-homing, and enlarge competitive opportunities for entrants and downstream complements. Guidance should therefore make clear that firms may coordinate on interoperability protocols for AI systems, interfaces, documentation, security reporting, and related technical matters where doing so reduces switching costs and expands participation. This coordination is especially important in fast-moving digital markets, where proprietary fragmentation can otherwise become an artificial barrier to entry.

Likewise, guidance should allow collaboration on measures to prevent AI models from producing dangerous content, being used by bad actors, or exposing national-security vulnerabilities. Coordination around abuse-prevention indicators, secure deployment practices, safety evaluation baselines, or technical safeguards may internalize harms that private firms would otherwise impose on third parties or on society more broadly.8 Antitrust enforcement should not penalize such conduct merely because it is collective, provided that safety is not used as a pretext for cartelization or exclusion.

IV. Information Sharing, Privacy-Enhancing Technologies, and Data Portability

The Agencies should also give clear guidance that certain forms of information sharing among industry participants can be procompetitive, particularly when directed to privacy protection, technical benchmarking, and secure user mobility. The standard antitrust concern with information exchange is that it may reduce strategic uncertainty in ways that facilitate collusion. But not all information sharing has that character. In many technology settings, information exchange reduces technical uncertainty, improves compatibility, and facilitates quality competition.

This is especially true for privacy-enhancing technologies (“PETs”), such as methods that enable useful analytics or collaboration while limiting exposure of sensitive data. Public institutions, including the National Institute of Standards and Technology and the FTC, have recognized the importance of evaluating and benchmarking PETs and of substantiating claims about their capabilities.9 Updated guidance should therefore make clear that firms may collaborate on sharing best practices in PETs, benchmarking existing techniques, and developing standards for trusted privacy technologies, so long as the collaboration does not become a conduit for coordination on competitively sensitive terms unrelated to the technical objectives.

The same principle should apply to data portability. The FTC’s own work has recognized that thoughtfully designed data portability can promote competition and innovation while raising manageable privacy and security issues.10 Industry coordination on portability principles, transfer formats, authentication tools, and privacy-protective technical standards can reduce switching costs, facilitate multi-homing, and give users more control over their data. Those are classically procompetitive effects. Guidance should say explicitly that such collaboration is ordinarily lawful when it is directed to portability and user choice rather than to the degradation of interoperability or the imposition of strategically exclusionary conditions.

Here, too, administrable safeguards are available. The Agencies can explain that information-sharing collaborations are less likely to raise concern when they use aggregation, anonymization, independent administrators, clean rooms, access controls, or other tools designed to prevent the exchange from becoming a means of downstream coordination. These are exactly the sorts of practical compliance tools that modern guidance should encourage.

V. Suggested Safe Harbors and Enforcement Principles

To maximize clarity and minimize chilling effects, the Agencies should adopt concrete safe harbors and examples for AI-related collaborations. At a minimum, the guidance should state that the Agencies ordinarily will not challenge: (1) R&D ventures reasonably limited to research, testing, or pre-commercialization development; (2) infrastructure collaborations designed to create or expand domestic AI-related capacity; (3) standards-development or interoperability efforts directed to objective technical and safety goals; and (4) information-sharing arrangements reasonably necessary to benchmark privacy, security, or portability tools.

The guidance should also identify the features that would take a collaboration outside those safe harbors. These include naked agreements on price, output, wages, or customer allocation; restraints unrelated to the venture’s legitimate objectives; exclusionary access rules lacking technical or investment-based justification; and the exchange of competitively sensitive information unnecessary to the collaboration. Framing the analysis this way would give businesses clearer direction while preserving ample room for enforcement against genuinely anticompetitive arrangements.

Such an approach is not deregulatory in any pejorative sense. It is simply economically coherent. It distinguishes between cartel behavior and productive cooperation, pays due regard to error costs, and recognizes that innovation markets are especially vulnerable to overdeterrence.11 In the AI setting, where the relevant technologies, business models, and market boundaries remain highly unsettled, that disciplined approach is especially important.12

Conclusion

The Agencies should use this inquiry to provide modern guidance that promotes innovation without compromising traditional antitrust safeguards against naked collusion and exclusion. For AI and related nascent technologies, the sound economic presumption is that bona fide R&D collaboration, infrastructure coordination, standards and interoperability work, and privacy- and portability-related information sharing are often procompetitive and welfare enhancing.

A legal regime that leaves such conduct perpetually uncertain will predictably discourage investment, delay capacity expansion, impede interoperability, and slow the development of safety and privacy tools. A clearer regime would instead allow firms to cooperate where cooperation expands output and innovation, while continuing to prohibit coordination that substitutes collusion for competition. The Agencies should adopt guidance that reflects that distinction with candor and precision.

Notes

[1] Federal Trade Commission, “Federal Trade Commission and Department of Justice Seek Public Comment for Guidance on Business Collaborations,” news release, February 23, 2026; US Department of Justice, “Justice Department and Federal Trade Commission Seek Public Comment for Guidance on Business Collaborations,” news release, February 23, 2026.

[2] Federal Trade Commission and US Department of Justice, Antitrust Guidelines for Collaborations Among Competitors, April 2000.

[3] See National Cooperative Research and Production Act of 1993, 15 U.S.C. §§ 4301–06; Federal Trade Commission Legal Library, “National Cooperative Research and Production Act of 1993,” accessed April 13, 2026, https://www.ftc.gov/legal-library/browse/statutes/national-cooperative-…; US Department of Justice, Antitrust Division, “Filing a Notification Under the NCRPA,” last updated April 1, 2025, https://www.justice.gov/atr/filing-notification-under-ncrpa.

[4] These principles are consistent with the guidance set forth in the 2000 Antitrust Guidelines for Collaborations Among Competitors, note 2 supra.

[5] See Andreas Exarheas, “EIA Launches Pilot Studies to Assess Data Center Energy Demand,” RIGZONE, March 31, 2026, https://www.rigzone.com/news/eia_launches_pilot_studies_to_assess_data_….

[6] See Standards Development Organization Advancement Act of 2004, Pub. L. No. 108-237, tit. I, 118 Stat. 661, 661–66 (codified as amended at 15 U.S.C. §§ 4301–06); Federal Trade Commission Legal Library, “Standards Development Organization Act of 2004,” accessed April 13, 2026, https://www.ftc.gov/legal-library/browse/statutes/standards-development….

[7] Red-teaming protocols are structured, adversarial simulation exercises designed to test an organization's security, technology, or strategies by mimicking real-world attacks. See “What Is AI Red Teaming? An Essential Cybersecurity Guide for Digital-First Enterprises in the AI Age,” Protectt.ai, January 14, 2026, https://protectt.ai/blog/ai-red-teaming-guide.

[8] See Cass R. Sunstein, “AI, Reducing Internalities and Externalities,” SSRN Electronic Journal, May 8, 2024, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4817483.

[9] See Federal Trade Commission, “Keeping Your Privacy Enhancing Technology (PET) Promises,” Technology Blog, February 8, 2024, https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/02/keepin…; National Institute of Standards and Technology, “PETs Testbed,” last updated March 19, 2026, https://www.nist.gov/itl/applied-cybersecurity/privacy-engineering/pets….

[10] See Federal Trade Commission, “Data to Go: An FTC Workshop on Data Portability,” September 22, 2020, https://www.ftc.gov/news-events/events/2020/09/data-go-ftc-workshop-dat…; Simon Fondrie-Teitler, “Keeping Your Privacy Enhancing Technology (PET) Promises,” February 1, 2024, https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/02/keepin….

[11] See Aurelien Portuese, The Digital Markets Act: European Precautionary Antitrust (Information Technology and Innovation Foundation, May 24, 2021), https://itif.org/publications/2021/05/24/digital-markets-act-european-p….

[12] Concerns that a disciplined approach might be too slow and deliberate are belied by the lack of evidence that AI markets present a near-term risk of monopoly. See Geoffrey A. Manne, Dirk Auer, Kristian Stout, Lazar Radic, and Mario A. Zúñiga, ICLE Comments to DOJ on Promoting Competition in Artificial Intelligence (International Center for Law and Economics, July 15, 2024), https://laweconcenter.org/resources/icle-comments-to-doj-on-promoting-c…; Mario Zúñiga, “The Great AI Monopoly That Wasn’t,” Truth on the Market, March 18, 2026, https://truthonthemarket.com/2026/03/18/the-great-ai-monopoly-that-wasn….
