Capital Confusion at the New York Times

That’s quite an indictment! Fortunately, Klein also offers solutions. Phew!
But really, it’s difficult to understand how someone could describe the highly innovative, dynamic, and efficient U.S. credit-card system as “broken.” If it were, why would U.S. credit-card networks like Visa, Mastercard, American Express, and Discover be the world leaders, accounting for more than 60% of global market share?
And far from being “predatory,” credit cards have been hugely valuable to American families at all income levels. During the pandemic, they literally saved lives by enabling people in lockdown to buy food online and have it delivered to their homes. As Klein himself notes:
The pandemic changed how we buy things, significantly increasing the share of transactions put on credit cards rather than conducted in cash.
Klein tries to turn this positive into a negative, noting that the increase in card use added “to the swipe fees merchants pay.” What he fails to mention is that the switch from cash to credit has, in general, reduced merchants’ total costs, because the costs of handling cash are so much higher. When a customer pays in cash, checkout takes longer than it does with a card (especially a contactless one). For larger stores, that means more cash-register operators must be hired. For smaller stores, it means fewer resources are available for other tasks, such as taking inventory or restocking shelves.
Cash also presents a greater risk of theft, which means merchants must invest in security systems both in-store and for moving cash to the bank. And cash must be counted and deposited, both of which take time.
Study after study has shown that, when all relevant costs are taken into account, cash costs merchants more than payment cards. A 2018 study by the IHL Group, for example, found that the cost of accepting cash averaged about 9% and ranged from 4.7% for larger grocery stores to 15.5% for bars and restaurants. By contrast, the all-in cost of processing credit-card payments is typically less than 3%.
Klein’s sources don’t inspire much confidence. The link in his opening paragraph is not to an academic study, but to a video on the Times’ own website that spins an elaborate tale of how a frying pan bought using credit-card rewards was actually paid for by MJ, the owner of a local convenience store. In essence, the video asserts that, by using a rewards credit card to buy $100 of goods every week for a year at MJ’s store, enough rewards were accrued to pay for the frying pan.
Let’s suppose that the bank that issued the rewards card charged the maximum interchange fee on the transactions at MJ’s store, which in 2023 was 3.15%. Further assume that MJ’s merchant-account provider charges her on an “interchange-plus” basis. If MJ used Helcim, she would pay the interchange fee plus 0.4%, plus $0.08 per transaction.
So, of the $5,200 spent over the course of the year by the customer using a rewards card, $163.80 would go to the issuing bank and $28.80 to Helcim, leaving MJ with a net of $5,007.40. By contrast, if MJ had been paid in cash, she would have netted $4,768.40 (based on the 8.3% average cost that IHL identified for convenience stores).
While the Times video wants our hearts to bleed for MJ having to pay for a customer’s All-Clad D5 12” frying pan, had the customer paid her in cash instead, she would have made $239 less. And the customer would have been less happy, because he would have effectively paid about $250 more (the cost of the pan). In other words, paying with cash would make the merchant and the consumer worse off to the tune of nearly $500.
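For readers who want to check the arithmetic, here is a minimal sketch in Python. The rates are the ones cited above; the transaction count is an assumption, since the video specifies only $100 of purchases per week (the $28.80 processor fee works out to roughly 100 separate transactions).

```python
# A minimal sketch of the card-vs-cash comparison above. The rates are those
# cited in the text; the transaction count is an assumption.

annual_spend = 5_200.00     # $100 per week for a year
interchange_rate = 0.0315   # 2023 maximum interchange fee
plus_rate = 0.004           # Helcim's "interchange-plus" markup
fixed_fee = 0.08            # Helcim's per-transaction fee
n_transactions = 100        # assumed; the $28.80 figure implies ~100
cash_cost_rate = 0.083      # IHL's average cash-handling cost, convenience stores

interchange = annual_spend * interchange_rate                       # $163.80
processor = annual_spend * plus_rate + n_transactions * fixed_fee   # $28.80
net_card = annual_spend - interchange - processor                   # $5,007.40
net_cash = annual_spend * (1 - cash_cost_rate)                      # $4,768.40

print(f"net with card: ${net_card:,.2f}")
print(f"net with cash: ${net_cash:,.2f}")
print(f"merchant's card advantage: ${net_card - net_cash:,.2f}")    # $239.00
```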
Credit cards also have other benefits that Klein ignores in his simplistic story. By providing credit—including up to 45 days interest free for cardholders who pay off their balance each month—credit cards enable people to smooth their spending so that they can buy items even when they don’t have money in the bank. They also provide fraud and theft protection for the cardholder, making it far less risky to carry a card than a bundle of cash. Many cards also include payment-protection insurance, rental-car insurance, and travel insurance.
Finally, credit cards make it far easier to trace payments, because the card issuer knows the identity and address of the legitimate user. This makes it more difficult to make illegal purchases using credit cards (compared to relatively untraceable cash) and easier to enforce sales taxes.
This highlights a fundamental problem with Klein’s analysis: he counts the costs of paying with credit cards but fails to consider what would happen in the alternative. He wants us to believe that:
the rest of us, whether we pay with cash, a debit-card or a middle-of-the-road credit card, wind up paying more—because we are subsidizing these rewards cards for whom only the wealthiest qualify.
Except that’s just not true.
First, as noted, in most cases, cash purchases are more costly for store owners, so credit-card users are subsidizing cash users. Second, as Todd Zywicki, Ben Sperry, and I have noted, and as can be seen in the figure below from the most recent Consumer Financial Protection Bureau (CFPB) report on the consumer credit-card market, access to rewards credit cards is less a function of income and more a function of the cardholder’s credit score. Third, as the figure also shows, more than 90% of all credit-card transactions are made using rewards cards.
Fourth, as can be seen in the chart below, and as the CFPB noted in the accompanying text: “Earning rates are about the same across credit score tiers for those with rewards cards, except for consumers with scores above 800.” Perhaps Mr. Klein does not have access to those higher-value rewards cards, but if so, he is the exception, not the rule.
These data suggest that, for all the critics’ bluster, the value of rewards per dollar spent varies relatively little from card to card; what differs is the types of rewards and other associated benefits. Innovations along these lines have been important drivers in the shift from cash and checks to credit cards, with attendant benefits for consumers, merchants, and society as a whole.
Nonetheless, Klein is correct that the fees charged for different cards can vary significantly. Indeed, the story is rather more complicated than Klein makes out: interchange fees typically vary not only by card, but also by type of merchant. This is, at least in part, because different merchants pose different risks of fraud and chargebacks.
Moreover, in contrast to Klein’s assertion that low-dollar sales typically have high fees, networks often discount the fees on small-ticket items in order to encourage adoption. For transactions of $15 or less, Visa’s small-ticket interchange fee for credit cards carries no fixed-fee component. For transactions of $5 or less, Mastercard’s fixed fee is only $0.04. And in some cases, such as for gasoline purchases, they cap the total amount (for example, Mastercard charges a maximum of $0.95 and Visa a maximum of $1.10 for gas).
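A rough sketch shows why these fee structures matter for small-ticket merchants. The fixed fees and caps below come from the text; the 1.65% rate is an illustrative placeholder, not any network’s published figure.

```python
# Illustrative interchange-fee calculator. The fixed fees and caps are the
# ones cited in the text; the 1.65% rate is a placeholder assumption.

def interchange_fee(amount, rate, fixed, cap=None):
    """Percentage-plus-fixed interchange fee, optionally capped."""
    fee = amount * rate + fixed
    return min(fee, cap) if cap is not None else fee

# A $10 sale with and without a fixed-fee component:
with_fixed = interchange_fee(10.00, 0.0165, 0.10)    # $0.265, or 2.65% effective
small_ticket = interchange_fee(10.00, 0.0165, 0.00)  # $0.165, or 1.65% effective

# An $80 fuel purchase under Mastercard's $0.95 cap (cited above):
gas = interchange_fee(80.00, 0.0165, 0.10, cap=0.95) # capped at $0.95, ~1.2%

print(f"$10 sale, fixed fee:    ${with_fixed:.3f}")
print(f"$10 sale, small ticket: ${small_ticket:.3f}")
print(f"$80 gas, capped:        ${gas:.2f}")
```

Waiving the fixed fee cuts the effective rate on a $10 sale by a full percentage point, which is precisely the point of small-ticket discounts.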
Nonetheless, Klein offers the anecdote that, one year, his “oldest friend’s small coffee shop paid more in card processing costs than for coffee beans.” In fairness, the coffee shop in question, Bump n Grind in Maryland, roasts its own beans, and therefore buys green beans that are considerably less expensive than pre-roasted beans. It also sells much more than just coffee, including vinyl records. So it probably has many costs that exceed what it pays for beans, including rent, equipment, utilities, and staff.
It may also use a merchant acquirer or a payment gateway (such as Square or Stripe) that offers blended rates (that is, a single rate for all transactions regardless of the type of card or size of transaction), which would mean that it is unable to take advantage of the small-ticket discount available on “interchange-plus” plans.
Klein lays much of the blame on the U.S. Supreme Court for the alleged problem of consumers being charged different swipe fees for different cards issued on the same payment network. He claims that “a 2018 Supreme Court ruling effectively forces merchants to accept either every type of card – from, say, a basic Green Card to the Platinum Card – from an issuer like Amex or none of them.” And he goes on to assert that “the ruling also barred merchants from incentivizing consumers to use cheaper ones.”
There’s just one problem with these claims: they’re not true.
What the Supreme Court did in Ohio v. Amex was to prevent the state from overriding the contractual “anti-steering” provisions that had long been established by credit-card networks in their agreements with merchants (either directly, in the case of three-party cards such as Amex and Discover, or via agreements with issuers, in the case of four-party cards like Visa and Mastercard). The Court explained its rationale clearly:
Respondent… Amex… operate[s] what economists call a “two-sided platform,” providing services to two different groups (cardholders and merchants) who depend on the platform to intermediate between them. Because the interaction between the two groups is a transaction, credit-card networks are a special type of two-sided platform known as a “transaction” platform. The key feature of transaction platforms is that they cannot make a sale to one side of the platform without simultaneously making a sale to the other. Unlike traditional markets, two-sided platforms exhibit “indirect network effects,” which exist where the value of the platform to one group depends on how many members of another group participate. Two-sided platforms must take these effects into account before making a change in price on either side, or they risk creating a feedback loop of declining demand. Thus, striking the optimal balance of the prices charged on each side of the platform is essential for two-sided platforms to maximize the value of their services and to compete with their rivals.
Visa and MasterCard—two of the major players in the credit-card market—have significant structural advantages over Amex. Amex competes with them by using a different business model, which focuses on cardholder spending rather than cardholder lending. To encourage cardholder spending, Amex provides better rewards than the other credit-card companies. Amex must continually invest in its cardholder rewards program to maintain its cardholders’ loyalty. But to fund those investments, it must charge merchants higher fees than its rivals. Although this business model has stimulated competitive innovations in the credit-card market, it sometimes causes friction with merchants. To avoid higher fees, merchants sometimes attempt to dissuade cardholders from using Amex cards at the point of sale—a practice known as “steering.” Amex places antisteering provisions in its contracts with merchants to combat this.
While these anti-steering provisions are important, they are not the provisions to which Klein refers, which are known as “honor-all-cards” provisions and which prevent merchants from discriminating against cards bearing the network’s brand. Nonetheless, honor-all-cards provisions are likewise important to the functioning of two-sided payment networks (as Nobel laureate economist Jean Tirole has noted) because they enable card networks to create a range of offerings, thereby facilitating innovation by and competition among issuers.
Without the honor-all-cards provisions, merchants might refuse to accept cards with higher interchange fees. Klein seems to think this is a good idea. He proposes that:
Congress should legislatively correct the Supreme Court’s mistake. For starters, give merchants the power to reject the priciest credit cards, and let’s see if their users are willing to pay the true cost of their rewards.
But this would result in a race to the bottom in which card issuers were unable to offer rewards or otherwise differentiate their products, leading to a decline in the use of cards. This, in turn, would reduce spending, harming both consumers and merchants.
Klein supports his argument that Congress could override the honor-all-cards provision by citing the Durbin amendment, which imposed price controls on debit-interchange fees. A recent study by Vladimir Mukharlyamov of Georgetown University and Natasha Sarin of the University of Pennsylvania found that this had the effect of reducing covered banks’ annual revenues by about $5.5 billion. Seeking to recoup some of the lost revenue, banks on average doubled their monthly fees on checking accounts; increased the minimum deposit required for “free” checking by 21%; and reduced the availability of accounts with no-minimum free checking by about half.
Likely as a direct consequence, hundreds of thousands of the poorest Americans left the banking system altogether. Meanwhile, merchants passed through little, if any, of the savings resulting from the reduced debit-interchange fees, so those on low incomes who kept their accounts but paid monthly fees were measurably worse off.
To make matters worse, Klein wants “brave policymakers” to “start taxing reward points.” At least he is clear that this is really just about taxing the rich:
The richer you are, the more likely you qualify for bigger rewards. Progressive taxation rates mean that exempting rewards from taxation makes them nearly four times as valuable to those in the top tax bracket as the bottom.
As it happens, some credit-card rewards probably are taxable; it depends on their function. But if policymakers were to make all rewards taxable, the tax would sweep in rewards that function primarily as rebates and loyalty incentives—such as airline miles earned on co-branded cards. And that, in turn, would harm the co-brand partners, such as airlines and hotels.
Klein’s final proposal is more troubling. He suggests that “we could require all merchants have access to the same swipe-fee pricing, regardless of size.” His concern here is that some larger merchants leverage their bargaining power to obtain lower interchange fees. In part, larger merchants benefit from economies of scale and can implement transaction monitoring and security systems that smaller merchants simply can’t afford.
Meanwhile, a few large merchants (such as Costco) operate membership-based systems that enable them to forgo some customer convenience and strike exclusive deals with specific card issuers and networks, thereby obtaining lower swipe fees. Neither of these factors applies to individual smaller merchants, so the suggestion that swipe fees could be reduced by mandate to the levels negotiated by larger merchants is totally unrealistic.
In his last few paragraphs, Klein returns to the merger between Capital One and Discover, the hook on which he has hung a series of shibboleths about credit cards that serve as the premise for his terrible policy proposals. And here again, he repeats those shibboleths, moaning that:
Capital One already seems to be competing with American Express for wealthy customers who like elite airport lounges and big travel perks …
And in the next paragraph:
As the economy continues to digitize with more micropayments, the credit card burden will keep growing, particularly on smaller businesses.
And in the final paragraph:
Until legislators are willing to change the system that showers tax-free rewards on the upper middle class, the cash register will continue to exacerbate the wealth gap.
What utter tosh.
US v. Apple Lawsuit Has Big Implications for Competition and Innovation

At the heart of the complaint is the DOJ’s assertion that:
[Apple’s] anticompetitive acts include, but are not limited to, its contractual restrictions against app creation, distribution, and access to APIs that have impeded apps and technologies including, but not limited to, super apps, cloud streaming, messaging, wearables, and digital wallets.
The DOJ will have to show that those actions have no explanation apart from an effort to harm competition. Apple will be able to stress that they are all designed to create a curated experience for customers, enhance security, and thereby confer major benefits on Apple consumers—a significant efficiency. It will thereby claim that those actions are not “exclusionary” in the antitrust sense, and thus there is no monopolization or attempted monopolization under Sherman Act Section 2.
In short, there is a strong argument that Apple’s actions benefit Apple consumers. The fact that Apple consumers are willing to pay far more for iPhones than for Android phones indicates that they value them far more (obtain more consumer surplus). Apple will argue that interfering with Apple’s practices that create this value would harm Apple consumers, make iPhones more like Android phones, and degrade dynamic competition in iPhones. While some Android customers might prefer better access to messages sent by iPhones or to a few other iPhone features, Apple does not have a legal duty to provide such access.
Specifically, under Supreme Court precedent in, e.g., Verizon v. Trinko, Apple has no antitrust duty to assist its competitors or to afford them special access to aspects of its platform. The heart of DOJ’s complaint “sounds in” assisting Android phone makers, and Apple has no duty to do this.
According to revenue statistics compiled by Backlinko, Android phones accounted for roughly 69% to 75% of the global smartphone market (measured by revenue) over the 2016-2023 period. Over the same time frame, iPhone shares fluctuated in the 19% to 29% range—hardly the mark of a dominant firm, let alone a monopolist.
In the U.S. smartphone market (assumed to be the relevant market), however, iPhone shares fell in the 53% to 59% range during that period, while Android shares fluctuated from roughly 41% to 45%. There is no showing that Android is rapidly losing ground, let alone that it is about to be marginalized.
Apple will argue that it is obviously not the global market leader. Moreover, it is not a monopolist in the United States. It is merely a very successful competitor in the smartphone market. The iPhone’s share therefore reflects consumer demand for its special attributes, such as enhanced security and a curated experience.
The U.S. v. Microsoft case is inapposite. At the time of the case, Microsoft’s Windows operating system had a clear monopoly in the desktop operating-system market, and no close substitutes were in or about to enter that market.
Under existing Supreme Court precedent, it appears highly unlikely that the DOJ will win in court. During the expected years of litigation, however, Apple will have a reduced incentive to compete aggressively or to invest and innovate in dynamic areas, such as cloud computing. This could slow U.S. innovation overall in an area in which it has been a leader, harming U.S. economic welfare and the international competitiveness of our digital economy.
Even if DOJ somehow were to win after many years, the relief it would be able to obtain—an injunction covering the many practices (and possibly others) alluded to in DOJ’s complaint—would be virtually impossible to administer by a court. This would lead to constant litigation and haggling over whether future actions by Apple fell within the decree, thereby hobbling Apple and diminishing its future innovation. Once again, this would undermine the competitiveness of the U.S. digital sector and harm the American economy.
With this lawsuit, the United States is following the example of the European Union, in its effort to micromanage digital competition and hobble major American competitors—including, of course, Apple. Unlike the United States, the EU has produced no leading platforms and very little real innovation in the digital sector. Why does the U.S. government want to emulate that failed experience?
Notably, as an example of harmful bureaucratic stupidity, the European Commission failed miserably in forcing Microsoft to sell versions of Windows without the Windows Media Player, a remedy that European consumers soundly rejected.
In sum, the DOJ’s Apple lawsuit is an assault on American innovation that will harm, rather than help, American consumers. It ignores the Supreme Court’s teaching that antitrust protects the competitive process, not individual competitors.
Notably, of course, major globally powerful foreign firms, such as Samsung, are among the beneficiaries of the lawsuit. These highly capitalized rival companies are fully free to protect their interests and to develop new sorts of smartphones and digital products, if they so desire. They do not need special protection from the U.S. government.
Significantly, in the 117th Congress, from 2021 to 2023, Congress considered, but elected not to pass, antitrust legislation that would have imposed new statutory duties of cooperation on giant digital companies. The U.S. government should not be authorized to impose new antitrust duties on digital innovators through litigation that it was unable to obtain through legislation.
Finally, the Apple lawsuit also weakens the competitive position of the U.S. digital sector vis-à-vis China, a result counter to American strategic interests in the economic sphere.
Antitrust at the Agencies Roundup: Supply Chains, Noncompetes, and Greedflation

Two quick observations: First, the complaint opens with an anecdote from 2010 that suggests lock-in (a hard case under antitrust law), but demonstrates nothing. Second, the anecdote is followed by a statement that “[o]ver many years, Apple has repeatedly responded to competitive threats… by making it harder or more expensive for its users and developers to leave than by making it more attractive for them to stay.”
I’m not going to pretend to bless every bit of Apple’s conduct “over many years”—not least because I haven’t reviewed nearly all of it, and there really can be complex issues in that very big mix—however curious things seem at 30,000 feet. But the second part of the DOJ’s gloss on their allegation does make one wonder.
I happen to have an iPhone now (partly because my employer gave it to me). But “over many years,” I’ve had phones from several manufacturers—even two at once. Could I switch from Apple? Sure. I’d have to pay for a different phone with my own money (ick). But I could buy an alternative and I’d be able to use it just fine. I would not have to consult my (computer-science student) son or my wife (my partner in all things except tech support; she is my tech support and I’ve got nothing to give in that area).
Has cell-phone quality improved over the many years? Features? Um . . . yeah. That doesn’t prove that there wouldn’t be more or better but for someone’s conduct, but . . . there are other phones, all sorts of cool features, and I HAD A BLACKBERRY. Heck, my first cell phone was mostly just a phone.
Today also featured an open meeting of the Federal Trade Commission (FTC), with two or three orders of business, depending on how you count—among them, the release of the staff report on grocery supply chains.
A few words on the report. First, an apology for (and to) the staff: it’s not a good report, but that’s not really the staff’s fault. Read the section on study design at Page 3 and the announcement of the “study,” and you might well conclude that no economists were harmed or even mildly inconvenienced in the study’s design. Assign a handful of smart, conscientious lawyers a hopeless task, hundreds of thousands of documents, and no systematic or uniform data and . . . it could have been a lot worse.
On study design, they responsibly note that:
the conclusions reported here are based on specific information, but they do not measure the wider prevalence of observed practices or the magnitude of their impact on competition.
Right. One might even say that one shouldn’t jump to any conclusions about the present state of competition in the grocery retail, wholesale, and production sectors, or about anticompetitive conduct in those sectors, based on the report. (Ok, I’m saying it.)
As a descriptive matter, the overview of the many moving parts in the various supply chains is potentially useful, as are descriptions of various complications, both isolated and interrelated.
One could even attempt an apology for the commission. Supply-chain disruptions were far from trivial, and pressure came from all corners (including both ends of Pennsylvania Avenue) to do something. Chair Lina Khan’s remarks at the open meeting reflect some of that, noting a “whole of government” approach and administration initiatives spread across several executive agencies.
Still, the document requests (via Section 6(b) orders, that is—compulsory process) to three large grocery retailers, three large wholesalers, and three large producers were a recipe for disaster, and it’s the commission that set the study’s terms and blessed issuing the report.
Why is the n nine, you might ask (or three sets of three, or not really an n)? Orders issued under Section 6(b) of the FTC Act can go to numerous firms, but if the commission asks questions of 10 or more firms, the Paperwork Reduction Act requires it to jump through various hoops and receive approval from the Office of Management and Budget. Sometimes the commission does just that, but it’s no small thing. It’s nine so as to avoid the no small thing.
Given the three, three, and three, why those firms? It’s plain enough that those are all big players, but there’s no real explanation of the sample selection. So there’s that. Or isn’t. Amazon is a large retailer, not just (or mainly) via Whole Foods but, well, the FTC seems to really like investigating Amazon. We call that “revealed preference.”
While the report’s conclusions are qualified, those conclusions—and Chair Khan’s gloss on them—do sound in the “big is bad” chorus and intimate a need to investigate (contemplate? initiate?) intervention. For example, the report notes that, “on one measure of annual profits for food and beverage retailers” (emphasis added), profits rose, reaching 7% in the first three quarters of 2023, versus 5.6% in 2015. According to the report:
This casts doubt on assertions that rising prices at the grocery store are simply moving in lockstep with retailers’ own rising costs. These elevated profit levels warrant further inquiry by the Commission and policymakers.
But wait, there’s more:
Some firms seem to have used rising costs as an opportunity to further hike prices to increase their profits, and profits remain elevated even as supply chain pressures have eased.
Right. The cover story. Maybe not?
But first, note a disclaimer in the report: “This study did not test whether the specific companies that received 6(b) Orders increased their prices by more or less than their input cost increases.” So, it’s not a finding that certain retailers, or certain large retailers, or any retailer in particular, “increased their prices by more or less than their input cost increases.”
Was the Amazon delta bigger than that at the bodega where I get tamales? Well, the FTC didn’t issue orders to the bodega, or to any other small or mid-sized retailer, and they don’t have a finding on that, anyway.
I suppose one can infer something other than perfect, idealized, undifferentiated competition in retail-grocery markets—at least, if one likes that one measure. But so what? A monopolist doesn’t need “cover” to raise prices. And while it will likely pass on some of its cost increases to consumers, a profit-maximizing monopolist will not pass on all of its cost increases. That is, we can expect its markups to decline, not rise, as costs increase.
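To see why, consider a stylized textbook example. Linear demand and constant marginal cost are assumptions for illustration, not anything estimated from the FTC’s data:

```python
# Stylized monopoly pass-through with linear demand P = a - b*Q and constant
# marginal cost c. Maximizing (p - c) * (a - p) / b gives p* = (a + c) / 2,
# so only half of a cost increase is passed on and the dollar markup falls.

def monopoly_price(a, c):
    """Profit-maximizing price under linear demand and constant marginal cost."""
    return (a + c) / 2

a = 10.0  # demand intercept (illustrative)
for c in (2.0, 4.0):  # marginal cost before and after a $2 cost shock
    p = monopoly_price(a, c)
    print(f"cost = {c:.2f}  price = {p:.2f}  markup = {p - c:.2f}")

# Output: the price rises by only $1 after the $2 cost shock, and the
# markup falls from $4.00 to $3.00 -- costs up, markup down.
```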
If this is unfamiliar or unintuitive, my International Center for Law & Economics (ICLE) colleague Brian Albrecht has a nice clear post on markups, pricing, and purported explanations (like greed), for those interested in an accessible primer (here). And there’s this, and this on “greedflation” and price theory from Josh Hendrickson, as a useful addition.
More on the “greedflation” business (or nonsense) below.
Also, while the report doesn’t quite say so, Chair Khan’s open-meeting remarks made clear that she’s wondering if there might be some good Robinson-Patman cases in this area. Good, bad, or ugly, I’ll bet that she finds something.
As the FTC notes in its “Deception Statement,” one can violate Section 5 of the FTC Act by a “representation, omission or practice that is likely to mislead the consumer.” (emphasis added) That is, the FTC Act contemplates sins of omission, as well as sins of commission. Or, if not sins, then federal civil violations. I’ll get back to that.
From time to time (for example, here and here), I’ve applauded the resurfacing of the FTC’s much-lauded competition-advocacy program (see Alden Abbott’s post here; former Chairman Bill Kovacic’s “FTC at 100” report here; former Acting Chair and Commissioner Maureen Ohlhausen here; Todd Zywicki, James Cooper, and Paul Pautler here; and Andy Gavil here). The program is not what it once was, but it’s “not dead yet,” and that’s been a good thing, for the most part.
Still, a recent piece of advocacy—a letter from Office of Policy Planning Director Hannah Garden-Monheit to Oregon state Sen. Deb Patterson (D-Salem) about a proposal to, among other things, prohibit noncompete agreements for medical professionals—seems odd to me in several ways.
It’s not about the topic. Restrictions on labor-market competition are a legitimate area of interest. The FTC has settled several matters in which it alleged that specific noncompete agreements were inconsistent with the FTC Act. This FTC has also published a notice of proposed rulemaking (NPRM) on noncompetes in the Federal Register.
I have many concerns about the FTC’s NPRM (see here and here, for example, and add Brian Albrecht here and here; Alden Abbott here; and Greg Werden here). Still, noncompete agreements—at least some terms, in some markets—can raise antitrust concerns, among others. These are acknowledged in, among many other places, a very critical review of the FTC’s NPRM by ICLE and numerous scholars of law and economics. There are, as well, reasons to be concerned about some physician noncompete agreements specifically, as I reviewed in a recent paper here.
So what’s so strange? One oddity is the emphasis on anecdote: about half of the evidence in Garden-Monheit’s letter consists of excerpts from statements submitted to the FTC by individual physicians about their personal experiences of noncompete restrictions. Of course, personal accounts can make problems vivid to policymakers, but they are, after all, just anecdotes. One might wonder to what extent the seven selected excerpts represent the perspectives of the roughly 1 million practicing U.S. physicians, much less the real impact of varied noncompete terms on health-care providers, patients, or other payers. And the FTC’s potential value add—one expects or hopes—is that of an agency with special expertise in antitrust law and economics, not piquant narratives.
Another is a conspicuous lacuna in the four-page single-spaced letter. It notes that the FTC’s NPRM on noncompetes includes an “extensive discussion of the literature, studies, and evidence on the effects of non-compete clauses,” and, specifically, that it covers “[e]vidence that non-compete clauses reduce earnings for both workers who are and who are not covered by non-compete clauses…”
Maybe. Sort of. There is very little literature on the direct impact of noncompete agreements themselves, but there are papers suggesting that, for example, the greater “enforceability” of noncompetes under state law is associated, on average, with lower wages for certain classes of workers. Most of those studies were not designed to show what’s causing lower wages, but let’s leave that aside.
A 2019 literature review from the FTC’s own Bureau of Economics noted that studies of wage effects—among others—report “mixed” results. (see John McAdams here). “Mixed” does not mean uniform or, in this case, even directionally consistent.
That brings me to a paper that’s not mentioned at all, even though it was cited in the NPRM and was discussed at the FTC’s 2019 workshop on noncompetes: a 2020 paper by Kurt Lavetti, Carol Simon, and William White on “The Impacts of Restricting Mobility of Skilled Service Workers: Evidence from Physicians.”
Why not mention a published and peer-reviewed paper that seems precisely on point, not to mention the only published empirical study of the impact of noncompete terms (and noncompete “enforceability”) on physicians? Might it have something to do with the paper finding that, for example, “noncompetes increase the annual rate of earnings growth by an average of 8 percentage points in each of the first 4 years of a job, with a cumulative effect of 35 percentage points after 10 years on the job”? Which really does seem contrary to the FTC narrative?
One might not think the physician-compensation paper to be definitive. Fair enough. It’s an interesting and useful paper, but it’s one paper—subject to certain limitations—and the economic literature on noncompete terms is very much a work in progress.
The McAdams literature review observes that “the more credible empirical studies tend to be narrow in scope, focusing on a limited number of specific occupations . . . or potentially idiosyncratic policy changes with uncertain and hard-to-quantify generalizability.” Certain issues run throughout the body of literature (in addition to my article above, the McAdams review, and the ICLE comments, see, e.g., Norman Bishara and Evan Starr here; the Global Antitrust Institute here; and ICLE here).
Still, we’re not worried about generalizing from one profession to the very same profession. Any reservations one might have about the physician-compensation estimates go double (at least) for the paper the FTC cites on price effects (which is also an interesting paper, but subject to many questions and presenting an entirely dubious estimate—see my discussion here again).
Bottom line: if an expert agency offers to summarize its “extensive discussion of the literature, studies, and evidence on the effects of non-compete clauses,” including “[e]vidence that non-compete clauses reduce earnings,” and the expert agency neglects to mention contrary results, including those of the only extant study of non-competes and physician earnings . . .
Let’s put it this way: if the FTC were a private firm selling its advocacy position to Oregon, would this count as a material omission, in violation of Section 5 of the FTC Act? I mean, it’s not, not misleading.
I wrote “that’s been a good thing, for the most part,” and I meant it, but several recent pieces of interagency advocacy are just plain odd. Back in February, I noted another sort of omission entirely in the FTC’s advocacy for “an expansive and flexible approach to march-in rights.” That is, the advocacy wasn’t missing a citation or two, it was missing anything recognizable as a competition argument.
As Zywicki, Cooper, and Pautler explain:
Competition advocacy, broadly, is the use of FTC expertise in competition, economics, and consumer protection to persuade governmental actors at all levels of the political system and in all branches of government to design policies that further competition and consumer choice.
The 2022 and 2023 comments on certificates of public advantage (COPA) and other state-based attempts to shield certain health-care providers from antitrust scrutiny fit that definition well, building on decades of institutional experience and expertise on the topic. Some of the latest advocacies . . . not so much, and not so well. Maybe, and I’m just spit-balling here, a bit less politics and a bit more of that old expertise?
More hands across the agencies and the whole wide world of government: On March 5, the FTC, DOJ, and U.S. Department of Health and Human Services (HHS) separately announced that they’d “jointly launched a cross-government public inquiry into private-equity and other corporations’ increasing control over health care.”
They also each did it somewhat differently. The HHS press release is titled “HHS, DOJ, and FTC Issue Request for Public Input as Part of Inquiry into Impacts of Corporate Ownership Trend in Health Care.” The announcement from the DOJ’s Antitrust Division is nearly identical; they just swapped the order of the agencies to put the DOJ first. And the FTC continued the me-first trend, but followed with a different spin: “Federal Trade Commission, the Department of Justice, and the Department of Health and Human Services Launch Cross-Government Inquiry on Impact of Corporate Greed in Health Care.”
Technically, the three agencies jointly issued a request for information (RFI). And what better and less-biased way to solicit diverse and informative public input and commence an inquiry into ownership trends?
Interestingly—or “interestingly”—the language in the FTC press release mirrors that of a White House fact sheet, which noted that the administration is “[l]aunching a cross-government public inquiry into corporate greed in health care.”
I’ve yet to see a study design, much less a report on findings. For all I know, it’s just a coincidence that the “independent agency” used the White House language, while the cabinet-level executive agencies did not. After all, a press release is just a press release. Still, I wonder whether an inquiry into the moral fiber or foibles of corporate persons is the best way to learn about health-care markets or, for that matter, quite in the wheelhouse of the FTC, the division, or HHS.
That’s not to say that everyone’s an angel or that health-care providers cannot violate the antitrust laws, among others. The next antitrust violation in the health-care sector will not be the first, second, or third. And such violations can do real harm to consumers—human persons, among others. Still, it’s a large and heterogeneous sector, and an RFI should signal the beginning of an inquiry, not its conclusion.
And by the way, yes, some people seem greedy, in the colloquial sense. But “greed” has no clear meaning in antitrust law and economics, and no explanatory value when it comes to analyzing pricing or acquisitions. None.
But the greedy refrain . . . oy. “Corporate greed in health care,” “greedflation,” etc. The supply-chain study. As an explanation for inflation?
Colorful rhetoric, perhaps, but economic nonsense. Again, the links from Brian Albrecht (here) and Josh Hendrickson (here and here) that I mentioned above may be helpful.
Murthy Oral Arguments: Standing, Coercion, and the Difficulty of Stopping Backdoor Government Censorship

In the International Center for Law & Economics’ (ICLE) amicus brief in the case, we argued that the First Amendment protects a marketplace of ideas, and government agents can’t intervene in that marketplace by coercing social-media companies into removing disfavored speech. But if the oral arguments are any indication, there are reasons to be skeptical that the Court will uphold the preliminary injunction the district court issued against the government officials (later upheld in a more limited form by the 5th U.S. Circuit Court of Appeals).
While it is certainly difficult to rely on injunctions to police informal government activity, there are potentially catastrophic implications if the Court’s opinion here ultimately serves to erect barriers so high that no one could successfully challenge the kinds of opaque-to-the-public government censorship alleged in this case.
Much of Monday’s oral argument focused not on determining when government pressure campaigns might overstep the line into coercion, but rather, whether the Court should consider the question at all. Some of the justices’ questions appeared to suggest that, unless a person whose speech was suppressed could allege a specific action by a government official that targeted them individually, they couldn’t even challenge the underlying censorship regime.
For instance, Justice Elena Kagan focused several times on the problem of traceability—i.e., how the government’s conduct caused speech to be suppressed:
JUSTICE KAGAN: Could we go back to the standing question? And – and if I ask you for the single piece of evidence – and maybe this is the – the piece that you were describing earlier. I just wanted to make clear what your answer was. The single piece of evidence that most clearly shows that the government was responsible for one of your clients having material taken down, what is that evidence and, you know, what does it say about how the government was responsible?
AGUIÑAGA: Sure, Your Honor. So, as I say, I think Jill Hines is the best example for us on standing. To give you one more example, look at page —
JUSTICE KAGAN: Yeah, but even on that one, I guess I just didn’t understand in what you were saying how you drew the link to the government. I mean, we know that there’s a lot of government encouragement around here. We also know that there’s – the platforms are actively content moderating, and they’re doing that irrespective of what the government wants. So how do you decide that it’s government action as opposed to platform action?
AGUIÑAGA: Your Honor, I think the clearest way – if I understand – so let me answer your question directly, Your Honor. The way — the link that I was drawing there was a temporal one. If you look at JA 715 to 716, that’s a May 2021 email. Two months later after that email, calls were targeting health groups just like Jill Hines’s group. She experiences the first example of that kind of group being —
JUSTICE KAGAN: Yeah. So, in two months, I mean, a lot of things can happen in two months. So that decision two months later could have been caused by the government’s email, or that government email might have been long since forgotten because, you know, there are a thousand other communications that platform employees have had with each other, that – a thousand other things that platform employees have read in the newspaper. I mean, why would we point to one email two months earlier and say it was that email that made all the difference?
…
JUSTICE KAGAN: I mean, you can say that about pretty much everything that’s in your brief, that there’s just nothing where you can say, okay, the government said take down that communication. The government is making some broad statements about the kinds of communications it thinks harmful. Facebook has a lot of opinions on its own about various kinds of communications it thinks harmful. I guess, if you’re going to use standard ideas about traceability and redressability, I guess what I’m suggesting is I don’t see a single item in your briefs that would satisfy our normal tests.
If the Court decides to avoid the merits of this case on such lack-of-standing grounds, it would allow government agents to engage in egregious censorship activity so long as they did a good job of not creating a record of asking for particular individuals’ speech to be suppressed. The government could do this by calling for entire types of content or viewpoints to be censored without targeting specific people.
In fact, this is pretty much what has been alleged: that government officials—often using government-funded nonprofits in partnership with government entities—algorithmically monitored viewpoints deemed inconsistent with the government’s viewpoint and reported those instances to social-media companies for suppression. Alternatively, government officials could use private meetings and phone calls (instead of discoverable emails or other written communications) to threaten intermediaries outright. Such interactions would presumably never be made public unless the intermediaries in question (those being coerced) made it public themselves. This also allegedly happened here with social-media companies prior to the release of the Twitter Files, and when discovery was conducted pursuant to this case.
If the Court vacates the preliminary injunction on traceability grounds, affected individuals in future cases may never know or be able to obtain discovery in order to prove their speech was suppressed due to government action, except where the pressured intermediary lets them know. This would allow covert censorship to continue unabated.
Questions from a few of the justices appeared to imply an exceedingly high bar to find state action under a coercion theory. For example, government officials privately berating an entity into suppressing speech on their behalf may not be regarded as coercion, but just an example of persuasion. Public speeches by government officials condemning speech carried by online platforms might just be acceptable use of the bully pulpit. And even threatening firms with more stringent regulation is only coercion if a particular entity ties it explicitly to a particular censorship request.
For instance, both Justices Kagan and Brett Kavanaugh drew on their experience as government attorneys who would call reporters to try to shape media coverage:
JUSTICE KAVANAUGH: You’re speaking on behalf of the United States. Again, my experience is the United States, in all its manifestations, has regular communications with the media to talk about things they don’t like or don’t want to see or are complaining about factual inaccuracies. I’d be interested in what you want to describe about that.
…
JUSTICE KAGAN: I mean, can I just understand because it seems like an extremely expansive argument, I must say, encouraging people basically to suppress their own speech. So, like Justice Kavanaugh, I’ve had some experience encouraging press to suppress their own speech. You just wrote a bad editorial. Here are the five reasons you shouldn’t write another one.
You just wrote a story that’s filled with factual errors. Here are the 10 reasons why you shouldn’t do that again.
I mean, this happens literally thousands of times a day in the federal government.
JUSTICE KAGAN: So back in – this – this still happens now – decades ago, it happened all the time, which is somebody from the White House got in touch with somebody from The Washington Post and said this will – this will just harm national security, and The Washington Post said, okay, whatever you say.
I mean, that was all – we didn’t know enough, but that was – that was coercion?
For these justices, government officials pleading for newspapers not to run specific speech does not constitute coercion. An adverse government action must be threatened, as in the theoretical example Kavanaugh offered: “if you publish the story, we’re going to pursue antitrust action against you.”
There are, of course, some important differences between newspapers and social-media platforms. Social-media platforms primarily curate third-party content from their users. While newspapers may publish some contributions from outside parties, most of their content is produced in-house. They also take legal responsibility for those op-eds or other pieces from third parties that they edit and choose to run. Thus, when government officials seek to convince newspapers to suppress their own speech, it does seem fundamentally different than trying to get platforms to suppress third-party speech on the basis of content or viewpoint.
It may be the case that more is required to constitute censorship than attempts by government agents to persuade. But the Court should not look to judge each government communication in isolation to determine whether there was coercion. The whole must be considered, which in this case includes allegations of public threats of investigation, enforcement, and regulation, along with private follow-ups to ensure particular viewpoints are suppressed.
What may appear in isolation to be an innocent ask from a government agent may not be so innocent in cases where a speech platform knows that it will likely be subject to continued government investigations and further regulation (especially when they are publicly threatened with both). The incentives to “play ball” are powerful, even if the government officials in question can’t do anything directly about the platform’s refusal to cooperate in censorship efforts.
Justice Samuel Alito made this point well in the NRA v. Vullo oral arguments, where New York officials allegedly coerced private insurance companies to no longer offer an insurance program in cooperation with the National Rifle Association (NRA):
JUSTICE ALITO: So does that mean that really the New York officials could have achieved what they wanted to achieve if they hadn’t done it in such a ham-handed manner? So, instead of having the meeting with Lloyd’s and – they just gave speeches about the terror – about guns and how bad the NRA is and they spoke about social backlash against guns and those who advocate for gun rights in the wake of the terrible Parkland shooting, but in all of that, they don’t mention anything about any regulatory authority, and then, after harping on that for a while, then they make general statements about the importance of every insurance company taking into account reputational risk, and then they sit back and they see whether that’s achieved the desired result, basically, that’s what your position is, isn’t it?
If the Court adopts an approach that would allow government officials with regulatory authority over social-media platforms to combine public threats with opaque badgering in order to get those companies to censor speech, then it doesn’t matter whether the government is censoring directly; “free speech” would be rendered a mere formality.
It seems likely that the Court will either vacate or further pare back the preliminary injunction in Murthy. But the justices must avoid the dangerous path that would essentially immunize the government from oversight for what amounts to backdoor censorship. The First Amendment, which starts with “Congress shall make no law…” is, in fact, supposed to hamstring the government.
Systemic Risk and Copyright in the EU AI Act

Among the key features emerging from the legislation are its introduction of “general-purpose AI” (GPAI) as a regulatory category and the ways that GPAI models might interact with copyright rules. Moving forward in what is rapidly becoming a global market for generative-AI services, it also bears reflecting on how the AI Act’s copyright provisions contrast with current U.S. copyright law.
Currently, U.S. copyright law may appear to offer a more permissive environment for AI training, while posing challenges for rightsholders who want to restrict the use of their creative works as training inputs for AI systems. Nevertheless, there are also ways that the U.S. copyright law framework may be more flexible, in that it can (at least, in theory) be modified by Congress to allow for incremental adjustments. Such tweaks could promote negotiations between rightsholders and AI developers by fostering markets for AI-generated outputs that could, in turn, offer compensation to rightsholders.
This approach contrasts with the EU’s AI Act, which risks cementing the current dynamic between rightsholders and AI producers. The act’s provisions may offer more immediate protection for rightsholders, but this rigidity could stifle the evolution of mutually beneficial markets. Therefore, while EU rightsholders might currently enjoy more favorable terms, the adaptable nature of the U.S. legal system could ultimately yield more innovative solutions that would better satisfy stakeholders in the long run.
The AI Act suggests that GPAI poses some degree of “systemic risk,” defined in Article 3(65) as:
[A] risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.
But rather than invoking the longstanding notion of “systemic risk” used in the financial sector to refer to the risk of cascading failures, the AI Act’s definition bears a closer resemblance to the “Hand formula” in U.S. tort law. Derived from the case United States v. Carroll Towing Co., the Hand formula is a means of determining whether a party has acted negligently by failing to take appropriate precautions.
The formula weighs the burden of taking precautions (B) against the probability of harm (P), multiplied by the severity of the potential harm (L). If the burden of precautions is less than the probability of harm multiplied by the severity (B < PL), then the party has acted negligently by failing to take those precautions. Similarly, the AI Act’s notion of “systemic risk” considers the potential for AI systems to cause harm on a large scale due to their extensive reach and impact on society.
The designation of an AI system as posing a “systemic risk” is based on an assessment of the likelihood and severity of potential negative effects on public health, safety, security, fundamental rights, and society as a whole. This assessment—like the Hand formula—involves a balancing of factors to determine whether the risks posed by an AI system are acceptable or require additional regulatory intervention. Also like the Hand formula, the “systemic risk” designation appears to contemplate systems operating at scale that could pose very minor risks in any particular case, but that aggregate harms in a meaningful way.
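The parallel is easy to make concrete. Here is a toy sketch of the Hand-formula logic and of how aggregation at scale flips the answer; every number is invented for illustration and drawn from neither the act nor any case.

```python
# Toy illustration of the Hand formula, B < P*L, and of how scale changes
# the answer. All numbers are invented for illustration.

def precaution_warranted(burden, p_harm, loss):
    """Hand formula: a precaution is warranted when B < P * L."""
    return burden < p_harm * loss

BURDEN = 1_000.0   # cost of taking the precaution (B)
P_HARM = 1e-6      # probability of harm in any single use (P)
LOSS = 50_000.0    # severity of the harm if it occurs (L)

# Judged case by case, the risk looks trivial:
print(precaution_warranted(BURDEN, P_HARM, LOSS))  # False: $1,000 > $0.05

# Aggregated over 100 million uses of a large-scale model -- roughly what
# the act's "systemic risk" framing does -- expected harm dwarfs the burden:
uses = 100_000_000
print(BURDEN < uses * P_HARM * LOSS)               # True: $1,000 < $5,000,000
```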
There are, however, some key differences between the two concepts. The Hand formula is applied on a case-by-case basis in tort law to determine whether a specific party has acted negligently in a particular situation. By contrast, the AI Act’s “systemic risk” designation is a broader regulatory classification that applies to entire categories of AI systems based on their potential for widespread harm. It thus creates a presumption of risk wherever a large-scale GPAI is operating.
Moreover, while the Hand formula focuses on the actions of individual parties, the “systemic risk” designation places the burden on AI providers to proactively address and mitigate potential risks associated with their systems. This would appear to have the potential for massive unintended consequences in inviting myriad opportunities for unwarranted regulatory intervention. As usual, whether this threat of harmful regulation will ultimately manifest comes down to how the law is implemented.
The AI Act imposes several copyright-related obligations on GPAI providers. As Andres Guadamuz of the University of Sussex notes in his analysis:
The main provision for GPAI models regarding copyright can be found in Art 53, under the obligations for providers of GPAI models. This imposes a transparency obligation that includes the following:
- Draw up and keep up-to-date technical documentation about the model’s training. This should include, amongst others, its purpose, the computational power it consumes, and details about the data used in training.
- Draw up and keep up-to-date technical documentation for providers adopting the model. This documentation should enable providers to comprehend the model’s limitations while respecting trade secrets and other intellectual property rights. It can encompass a range of technical data, including the model’s interaction with hardware and software not included in the model itself.
- “put in place a policy to respect Union copyright law in particular to identify and respect, including through state of the art technologies, the reservations of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790”.
- “draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office”.
Particularly relevant, according to Guadamuz, is the interaction between the exception for “Text and Data Mining” (TDM) in the Digital Single Market Directive and the potential for AI training:
Firstly, the requirement to establish policies that respect copyright essentially serves as a reminder to abide by existing laws. More crucially, however, providers are mandated to implement technologies enabling them to honour copyright holders’ opt-outs. This is due to Article 4 introducing a framework for utilising technological tools to manage opt-outs and rights reservations, good news for the providers of such technologies. Additionally, it now appears unequivocally clear that the exceptions for TDM in the DSM Directive include AI training, as it is specified in the AI Act. The need for clarification is needed because there were some doubts that TDM covered AI training, but its inclusion in a legal framework specifically addressing AI training suggests that the TDM exception indeed covers it.
This suggests complex interactions among GPAI producers (who need to train their models on large corpuses of text, images, audio, and/or video); rightsholders (who will enjoy an opt-out entitlement in the EU); and the producers of technical measures to facilitate opt-outs.
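What the "state of the art technologies" for honoring rights reservations will look like remains unsettled. One mechanism already in wide use (though the AI Act does not name it, and whether it satisfies Article 4(3) is an open question) is the venerable robots.txt exclusion protocol, which several AI crawlers now consult. As a minimal sketch, with a hypothetical crawler name, here is how a GPAI provider might check a rightsholder's opt-out using Python's standard library:

```python
from urllib import robotparser

# Hypothetical user-agent string; a real GPAI provider would publish its own.
CRAWLER_UA = "ExampleAITrainingBot"

def may_ingest(page_url: str, robots_url: str) -> bool:
    """Return True only if the site's robots.txt permits this crawler."""
    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # fetch and parse the site's robots.txt
    return rp.can_fetch(CRAWLER_UA, page_url)

# Skip any page whose publisher has reserved rights against this agent.
if not may_ingest("https://example.com/article",
                  "https://example.com/robots.txt"):
    print("Rights reserved; excluding page from the training corpus.")
```

More granular approaches, such as the draft TDM Reservation Protocol developed under the W3C's community-group process, aim to express reservations in machine-readable form at the level of individual works or collections.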
Moreover, the AI Act’s transparency requirements may themselves shape future litigation against GPAI producers, for better or worse. As Guadamuz notes: “A recurring theme in ongoing copyright infringement cases has been the use of training content disclosure by plaintiffs, those who have disclosed training data have tended to be on the receiving end of suits.”
Another issue the act raises concerns the question of so-called “deepfakes,” which have proven particularly contentious in the United States. One concern expressed on this side of the Atlantic is that banning deepfakes could hamper creators’ ability to make legitimate replicas of individuals’ likenesses that—while meeting the technical definition of a deepfake—are used for new artistic purposes, such as a biopic.
The AI Act addresses the regulation of deepfakes through its provisions on transparency obligations for certain AI systems. Article 50 requires providers and deployers of AI systems that generate or manipulate image, audio, or video content to disclose when the content has been artificially generated or manipulated. This obligation applies to deepfake content, defined as:
AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.
The act provides an exception for content that “forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme.” Such content would, however, still need to include disclosures about any deepfakes that are present “in an appropriate manner that does not hamper the display or enjoyment of the work.”
This provision is notable in that it would appear to create a means to identify AI-generated content, which could have implications for the copyrightability of such content in jurisdictions that restrict AI authorship based on human-authorship requirements. But it’s unclear how broadly the “artistic, creative,” and “satirical” exceptions will be interpreted. There are almost certainly many other benign uses of deepfakes that should be permitted, but that could nonetheless run afoul of the act.
As noted earlier, the AI Act’s impact on GPAI will depend largely on how it is implemented. At this point, little can be said with certainty regarding what effects its GPAI provisions will have on producers of large models. It’s probably fair to assume, however, there will be a major scramble among producers to try to stand up compliance mechanisms.
Drawing from the experience of the General Data Protection Regulation (GDPR), there is concern that the “systemic risk” category could become a significant lever for regulators to intervene in how firms like Mistral and Anthropic develop and release their products. Ideally, AI deployment should not be blocked in the EU absent evidence of tangible harms, as European citizens stand to gain from developing and accessing cutting-edge tools.
In the U.S. context, copyright law’s “fair use” exemption has become a bone of contention in a growing body of litigation. I remain skeptical that the fair-use defense is as clear-cut as defendants currently appear to believe it is. As outlined in the International Center for Law & Economics’ (ICLE) submission to the U.S. Copyright Office, existing U.S. copyright law may not support the use of copyrighted material for training AI systems under fair use.
This does not, however, mean that copyright should stand in the way of AI development. There is a broad middle ground of legislative reforms that Congress could explore that would more appropriately balance protecting rightsholders’ interests and fostering the development of GPAI. Whether it will do so remains an open question.
As suggested in our Copyright Office submission, it appears the best path forward—on either side of the Atlantic—is to facilitate bargaining among rightsholders and AI producers to create a new kind of market. Indeed, excessive focus on the use of copyrighted work in AI training may ultimately just lead to unproductive negotiations.
While it is possible that U.S. copyright law will be amended or reinterpreted to provide greater flexibility for AI producers, the AI Act appears to create a stronger bulwark for rightsholders to protect their works against use by GPAI. One can hope that it will also provide sufficient flexibility to facilitate bargaining among the parties. If it fails in this respect, the EU risks hindering the development of AI within its borders.
The post Systemic Risk and Copyright in the EU AI Act appeared first on Truth on the Market.
The post Section 214: Title II’s Trojan Horse appeared first on Truth on the Market.
In the Trojan War, the Greeks conquered Troy by hiding their soldiers inside a giant wooden horse left as a gift to the besieged Trojans. Section 214 hides a potential takeover of the broadband industry inside the putative gift of improving national security.
Section 214 requires providers to obtain the FCC’s approval before constructing new networks, offering new services, discontinuing outdated offerings, or transferring control of licenses. George Ford of the Phoenix Center dubs this a “Mother, May I?” process. But “Mother, May I?” is a children’s game that loses steam after about 15 minutes. Section 214, by contrast, is no game; it’s serious business, with real long-term consequences.
Faced with lengthy regulatory delays and uncertainty over whether they will get the green light, firms may make the rational decision to forgo network upgrades or expansions. AT&T, for example, argues that it would be hindered from retiring legacy copper lines in favor of modern fiber deployments if it had to seek permission each time. The prospect of enduring protracted reviews could deter startup ISPs from even entering the market.
These concerns aren’t hypothetical. Economic studies have found that past periods of utility-style Title II regulation corresponded with depressed broadband investment to the tune of billions of dollars annually, when compared to light-touch regimes. Perversely, saddling providers with more regulatory costs could slow the very broadband buildout that the Biden administration aims to accelerate through its “Internet for All” initiative.
Would Section 214 even apply to satellite broadband? As they say on just about every single episode of every single podcast: “Good question.”
The FCC’s notice of proposed rulemaking (NPRM) is clear that it intends to regulate satellite broadband (such as Starlink and Project Kuiper) under Title II:
We also propose to remain consistent with the Commission’s conclusions in prior Orders to include in the term “broadband Internet access service” those services provided over any technology platform, including but not limited to wire, terrestrial wireless (including fixed and mobile wireless services using licensed or unlicensed spectrum), and satellite.
Section 214 refers to “lines,” which the average person on the street might take to mean physical connections, such as copper wire or fiber-optic cable. Section 214, however, has a much more expansive definition of “lines”:
As used in this section the term “line” means any channel of communication established by the use of appropriate equipment, other than a channel of communication established by the interconnection of two or more existing channels …
“Any channel of communication” seems to be doing a lot of heavy lifting here. If the FCC adopts Title II regulation and does not forbear from applying Section 214, then it seems that satellite-broadband providers would also be covered.
If that’s the case, then every satellite launch, every decommissioning, and every investment to upgrade (or downgrade) service would require FCC approval under Section 214. Starlink plans to build 30,000 satellites. Project Kuiper plans to add more than 3,200 satellites to orbit. That’s a lot of approvals.
Just the process of preparing Section 214 applications imposes significant expense on providers, especially for smaller ISPs unaccustomed to this process. Hiring lawyers, compiling documentation, and navigating bureaucratic reviews could consume funds better spent on network improvements. If foreign investment triggers additional national-security reviews, costs could escalate even higher. WISPA, an industry group representing internet service providers, highlights some of these costs in its comments to the FCC:
Broadband providers across the country who never needed to obtain prior consent when adding a General Partner, refinancing, or engaging in numerous other kinds of purely domestic transactions would now need to file for and obtain prior Commission consent to an assignment or transfer of control. This alone will result in many more applications filed under Section 214 each year than have ever been filed before. … The Commission’s most recent Communications Marketplace Report states that as of the end of 2021 there were 2,021 entities providing broadband. More recent compilations indicate the number today could be closer to 3,000. If only five percent of these companies (the vast majority of which are small) file to assign or transfer control of their Section 214 authorizations each year, the number of applications filed annually will increase at least four times, and possibly up to six times. Even assuming the majority of those applications are subject to streamlined treatment, this would constitute a huge administrative burden on Commission resources, likely leading to far longer processing times than are already faced by applicants.
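The arithmetic behind WISPA’s projection is easy to reproduce, taking the comment’s own figures (roughly 3,000 providers and a 5% annual transaction rate) as assumptions rather than hard data:

$$ 3{,}000 \times 0.05 = 150 \text{ applications per year} $$

Set against the implied historical baseline of a few dozen filings annually, that yields the projected four- to six-fold increase.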
Inevitably, backlogs would generate longer delays to approve new services, investments, and pro-consumer mergers.
Several comments to the FCC questioned why broadband deployments should face utility-style “public convenience and necessity” tests, given that they connect with the decentralized internet, rather than traditional wireline telephone networks. Indeed, the FCC’s 2015 Open Internet Order concluded that Section 214 was unnecessary in light of the competitive environment and existing regulations.
Ironically, the FCC’s proposal to impose Section 214 potentially undermines its self-professed motivation of bolstering national security and cybersecurity. If providers require approval to retire outdated technologies, they may be forced to divert resources to maintaining legacy systems, rather than migrating fully to modern, more secure networks. Mandatory discontinuance reviews create uncertainty that could lock in suboptimal or vulnerable technologies, as discussed in AT&T’s comments to the FCC:
For example, in many areas, AT&T originally offered DSL internet access services over legacy copper networks, which still cost billions of dollars annually to maintain. Subjecting those services to Section 214 discontinuance obligations could indefinitely force AT&T to continue allocating capital to these outdated networks rather than investing in modern fiber networks. Further, the Commission’s current Section 214 rules were developed for legacy telephone service, not internet service—and certainly not for outdated DSL-based internet access services.
Conversely, streamlined transitions to new technologies allow providers to concentrate resources on improving network integrity and resiliency. Prompt incident response and continual hardware/software upgrades are vital to thwarting cyber threats, priorities that could be hampered if ISPs get bogged down in regulatory quarrels over how and when to sunset obsolete services.
A well-known saying in economics is “barriers to exit are barriers to entry.” It’s as true in broadband as anywhere else. While intended to preserve service availability, Section 214 discontinuance reviews could perversely deter broadband buildouts in higher-cost areas by threatening to lock providers into operating networks indefinitely, even if they become commercially unviable. With no opportunity to eventually exit, a provider may decide not to enter.
The FCC’s national-security rationale rests on its recent revocations of Chinese carriers’ international Section 214 licenses—actions that have been affirmed by courts even under the pre-existing Title I “information service” classification of broadband. It’s doubtful that imposing domestic Section 214 on all broadband providers would augment security in any meaningful way.
Moreover, other robust mechanisms already exist to vet foreign-investment and supply-chain risks to communications networks. These include the interagency Committee on Foreign Investment in the United States (CFIUS), as well as the interagency Committee for the Assessment of Foreign Participation in the United States Telecommunications Services Sector (previously, the much less-of-a-mouthful “Team Telecom”) reviews of international licenses. There are also new rules allowing the U.S. Commerce Department to restrict transactions with “foreign adversary” tech suppliers. The FCC has not made a clear or convincing case that its oversight under Section 214 would add any value beyond these existing processes.
Thus far, Section 214 looks like a parade of horribles for both the industry and the federal government. To which a proponent of Section 214 might say: “Oh, but the FCC would never do that”—whatever that bad thing is.
For example, in comments to the FCC on the proposed rules, some suggested the commission could grant itself limited authority to revoke authorizations if national-security concerns explicitly arose regarding particular broadband providers. Others suggested that the FCC might grant blanket domestic Section 214 entry authority, as it’s done in the past.
Unfortunately, we have no idea how far the FCC intends to go with its Section 214 authority. Will it apply to domestic providers or providers with a foreign financial interest? Will it apply to satellite broadband? Will it apply to mergers? Will it apply to copper-to-fiber replacements? We don’t know.
To a certain extent, it doesn’t matter. If the FCC imposes Title II regulations on broadband and includes Section 214 in that regulation, then the commission could easily change its enforcement regime down the road through the rulemaking process or its enforcement discretion. Promises made by today’s FCC members may be very different from the priorities of a future commission, as vividly noted in TechFreedom’s comments to the FCC:
Despite the FCC’s promise of forbearance, the core powers of Title II will loom over the broadband industry like Chekhov’s proverbial gun: “If you say in the first act that there is a rifle hanging on the wall, in the second or third act it absolutely must go off. If it’s not going to be fired, it shouldn’t be hanging there.”
In summary, adding Section 214 regulation to Title II’s already-onerous other provisions would stifle investment and innovation, hinder new entry, and impose enormous costs, with no discernible benefits to consumers, national security, or public safety. It would saddle the FCC and other agencies with a morass of regulatory red tape, adding pressure to their already-stretched budgets.
The post Section 214: Title II’s Trojan Horse appeared first on Truth on the Market.
The post Mi Mercado Es Su Mercado: The Flawed Competition Analysis of Mexico’s COFECE appeared first on Truth on the Market.
there are elements to preliminarily determine that there are no conditions of effective competition in the Relevant Market of Sellers and in the Relevant Market of Buyers, as well as the existence of three Barriers to Competition that generate restrictions on the efficient functioning of said markets. (Emphasis added).
The report alleges three such “Barriers to Competition.”
To eliminate these alleged barriers, the report proposes three remedies to be applied to Amazon and Mercado Libre.
We see at least three fundamental flaws with the report.
Rather than an “abuse of dominance” procedure, the market investigation that led to the report was a “quasi-regulatory procedure.” But the wording of Article 94 of the Mexican Federal Economic Competition Act (under which the investigation was authorized) strongly suggests that COFECE has to establish (not simply assert) an “absence of effective competition.” This would entail either that there is a “market failure” that impedes competition, or that there is an economic agent with a dominant position. The report tries to follow the second option, but we think it does a poor job.
To determine if any given company has a “dominant position” (monopoly power) competition agencies must first define a “relevant market” in which the challenged conduct or business model has an effect. Although it is common for antitrust enforcers to define relevant markets narrowly (often, the smaller the market, the easier it is to find that the hypothetical monopolist is, in fact, a monopolist), COFECE has gone too far in this case.
The Mexican competition watchdog regrettably follows the bad example of its American counterpart, the Federal Trade Commission (FTC). As one of us has explained in a post about the FTC’s recent monopolization complaint against Amazon, the agency:
describes two relevant markets in which anticompetitive harm has allegedly occurred: (1) the “online superstore market” and (2) the “online marketplace services market.” Because both markets are exceedingly narrow, they grossly inflate Amazon’s apparent market share and minimize the true extent of competition. Moreover, by lumping together wildly different products and wildly different sellers into single “cluster markets,” the FTC misapprehends the nature of competition relating to the challenged conduct.
COFECE does something similar in its report. By alleging that these large online marketplaces “have positioned themselves as an important choice,” the agency appears to feel free to ignore competition from other online and offline retailers. Again, while it’s possible the full report will offer a deeper explanation (and evidence), COFECE’s report appears to ignore other e-commerce platforms—like China’s Shein and Temu—that have gained both popularity and advertising-market share. The report also neglects to mention e-commerce aggregators like Google Shopping, which allow consumers to search for almost any product, compare them, and find competitive offers; as well as competition from e-commerce websites owned by sellers, such as Apple or Adidas.
It also appears to ignore that consumers can switch to brick-and-mortar retailers should Amazon or Mercado Libre try to exploit their market power. Of course, how many consumers might switch, and the extent to which that would affect the marketplaces, are empirical questions. But there is no question that some consumers might switch (and remember, competition happens on the margins; we don’t need all consumers to switch to affect a company’s sales).
The report does mention selling through social media, but does not include it in the relevant market. How can we disregard social media as a reasonable substitute for Amazon and Mercado Libre if 85% of small and medium enterprises turned to Facebook, Instagram, and WhatsApp during the COVID-19 pandemic to advertise and sell their products?
There is empirical evidence that Amazon not only competes, but competes intensively with other distribution channels, and has a net-positive welfare effect on Mexican consumers. A 2022 paper (Campos Vázquez et al., “Amazon’s Effect on Prices: The Case of Mexico”) found that:
e-commerce and brick-and-mortar retailers in Mexico operate in a single, highly competitive retail market
And that:
Amazon’s entry has generated a significant pro-competitive effect by reducing brick-and-mortar retail prices and increasing product selection for Mexican consumers.
The paper finds the market entry of products sold and delivered by Amazon gave rise to price reductions of up to 28%. How is that not competition?
As if this narrow definition were not bad enough, the report conflates Amazon and Mercado Libre’s market shares, to conclude that:
Amazon and Mercado Libre are the economic agents that have the largest market share; together, both hold more than 85% of the sales and transactions in the Relevant Seller Market during the period analyzed and the HHI exceeds two thousand points. Likewise, in the Relevant Buyers Market, the HHI was estimated, for 2022, at 1,614 units and the main three participants concentrate 61% (sixty-one percent) of the market. In both markets, the other participants have a significantly smaller share.
Why combine the market share of Amazon and Mercado Libre, as if they were acting as a single economic agent? Given this market definition, are Amazon and Mercado Libre not at least competing with each other? Unsurprisingly, the market’s continuous growth and the evolution of the companies’ respective market shares indicate that they do.
It is only on the basis of this distorted depiction of the market that COFECE jumps to the conclusion that Amazon and Mercado Libre have the power to fix prices (another way of saying that they have “market power”).
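For context, the Herfindahl-Hirschman Index (HHI) that the report invokes is simply the sum of squared market shares:

$$ HHI = \sum_i s_i^2 $$

where each s_i is a firm’s percentage share, so the index runs from near zero (atomistic competition) to 10,000 (pure monopoly). Using purely hypothetical shares for illustration, two leaders at 45% and 40% with the remainder fragmented would yield an index of at least 45² + 40² = 3,625, comfortably above the 1,800-2,500 thresholds that competition agencies variously use to mark “highly concentrated” markets. The catch is that the HHI is only as meaningful as the market definition behind it: draw the market narrowly enough, and nearly any successful firm will look dominant.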
Suppose we accept COFECE’s definition of the relevant market. Even if Amazon and Mercado Libre have significant market share, they could face competition from new entrants attracted by the higher prices (or other “exploitative” conditions) charged to consumers. According to COFECE, alas:
There are barriers to entry related to the high amounts of investment for the development of the marketplace, as well as for the development of technological tools integrated into it…. In addition, high investment amounts are required related to the development of logistics infrastructure and in working capital related to funds necessary to cover operating expenses, inventories, accounts receivable and other current liabilities.
There are barriers to entry related to considerable investments in advertising, marketing and public relations. To attract a significant number of buyers and sellers to the platform that guarantees the success of the business, it is imperative to have a well-positioned, recognized brand with a good reputation.
These are costs, not “barriers to entry.” As Richard Posner explained long ago in his treatise “Antitrust Law” (at 73-74), the term “barrier to entry” is commonly used to describe any obstacle or cost faced by entrants. But by this definition (embraced by COFECE, apparently), any cost is a barrier to entry. Relying on George Stigler’s more precise definition, Posner suggested defining a barrier to entry as “a condition that imposes higher long-run costs of production on a new entrant than are borne by the firms already in the market.” In other words, properly understood, a barrier to entry is a cost borne by new entrants that was not borne by incumbents.
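Stated formally (a stylized rendering of the Stigler-Posner definition, not Posner’s own notation): if incumbents produce at long-run average cost c_I(q) and an equally efficient entrant would face c_E(q) for the same output q, a true barrier to entry exists only where

$$ c_E(q) > c_I(q) $$

Costs that incumbents also had to incur when they entered, such as warehouses, software, advertising, and reputation, raise both cost curves alike and thus are not barriers in this sense.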
Of course Amazon and Mercado Libre have advantages over other firms in terms of their infrastructure, know-how, scale, and goodwill. But those advantages didn’t fall from the sky. Amazon and Mercado Libre built them over time, investing (and continuing to invest) enormous amounts to do so.
A digital platform does not need to invest in all of those things, all at once, everywhere, before entering a new market. Any firm with a sufficiently interesting or beneficial idea could enter the market and gain traction, as the above-mentioned examples of Shein and Temu demonstrate.
Even if we were to accept COFECE’s suggested market definition and its assessment of market power, the report’s proposed remedies would harm consumers rather than benefit them. Those remedies can be summarized as mandates to unbundle Amazon’s and Mercado Libre’s streaming services from their loyalty programs (like Amazon’s Prime) and to make (at least part of) their platforms “interoperable” with other logistics services.
Amazon Prime provides consumers with many attractive benefits: access to video and music streaming; special deals and discounts; and last, but not least, two-day free shipping. According to COFECE, “this is an artificial strategy that attracts and retains buyers and, at the same time, hinders buyers and sellers from using alternative marketplaces.”
It’s not entirely clear what “artificial” means in this context, but it appears to imply something outside of the bounds of “natural” competition. Yet what COFECE describes is the very definition of competition.
A mandate to unbundle streaming services would actually degrade the experience enjoyed by consumers, who would instead have to contract and pay for those services separately (see here and here). The independent provision of such services would not benefit from Amazon’s economies of scale and scope and would, therefore, be more expensive. And providing more benefits for consumers at a given price is what we want competitors to do. Identifying consumer benefit as a harm turns competition enforcement—and, indeed, the very notion of competition itself—on its ear.
On the other hand, the report also proposes to mandate opening the Buy Box and modifying its rules to be neutral to all logistics providers. To mandate that such providers be allowed to offer their services on Amazon or Mercado Libre amounts to treating these platforms as “common carriers,” much like the old telephone networks of the 20th century (see, relatedly, here). This classification and the rules that follow from it (neutrality and price regulation, among others) were designed for markets with natural monopolies, where competition is not possible, and may even be undesirable.
Digital platforms are much more competitive. In this context, common-carrier rules would only create free riding and negative incentives for investment and innovation (by both incumbents and new entrants). Sellers and logistics providers have many other options to access consumers. There is no economic or legal justification to mandate their access to Amazon or Mercado Libre’s platforms.
In sum, COFECE’s bad analysis leads to even worse remedies. Such remedies would not promote competition in Mexico and would not benefit consumers. Thankfully, this report is only a recommendation, and COFECE commissioners will have the chance to deviate from its conclusions. For the sake of Mexico’s consumers, let’s hope they correct course.
The post Mi Mercado Es Su Mercado: The Flawed Competition Analysis of Mexico’s COFECE appeared first on Truth on the Market.
The post The Broken Promises of Europe’s Digital Regulation appeared first on Truth on the Market.
Many of these changes are the result of a new European regulation called the Digital Markets Act (DMA), which seeks to increase competition in online markets. Under the DMA, so-called “gatekeepers” must allow rivals to access their platforms. The regulation took effect March 7, and firms must now comply with it, which explains why we are seeing these changes unfold today.
But all is not well. When it was passed, European policymakers like Margrethe Vestager and Thierry Breton assured the public that the far-reaching regulation would not compromise security, lead to costlier services, or otherwise degrade users’ online experience. They also argued that it would be fast and easy to apply, thus avoiding the lengthy litigation that has come to be associated with competition enforcement.
As the effects of the DMA start to play out, however, these promises appear increasingly fanciful.
The biggest concern is that Europeans’ online safety is being compromised. Apple has warned that it will not be able to guarantee the safety of rival app stores and payment systems that can now access its ecosystem. If this sounds abstract, it is worth noting that these sorts of security flaws facilitated the Oct. 7 attacks carried out by Hamas. They also increase more mundane risks of identity theft and fraud.
Similarly, Amazon will struggle to exclude nefarious goods, sellers, and shippers from its online marketplace. Commenting on similar issues in the United States, the company surmised that it risked losing “customer trust by advertising something that is not a good deal for them.” This loss of consumer trust would, in turn, harm the bottom lines of the roughly two million businesses that rely on the platform.
The DMA is also making it increasingly difficult for platforms to offer certain functionalities in Europe. Google has been forced to remove features like maps, hotel bookings, and reviews from its search results. Until it can accommodate competitors who offer similar services (if this is even possible), these specialized search results will remain buried several clicks away from users’ general searches. Not only is this inconvenient for consumers, but it has important ramifications for business users. Early estimates suggest that clicks from Google ads to hotel websites decreased by 17.6% as a result of the DMA.
Last but not least, the DMA is forcing firms like Meta and Google, who operate several interrelated platforms, to gather user consent for run-of-the-mill targeted advertising (such as using data from one service to advertise on another owned by the same platform). This may sound like a feature of regulation, but the reality is more problematic.
Targeted advertising has important benefits. A 2023 study by Nielsen found that users are 68% more likely to click on personalized ads than non-personalized ones. Research has also shown that users are more likely to click through Google search results when ads are mixed in with organic results. Thus, both consumers and websites are better off when ads are displayed. Unfortunately, the DMA limits platforms’ ability to explain these benefits to users, as this may be construed as interfering with their “freely given” consent.
And because untargeted ads are less likely to hit their mark, they generate less revenue for platforms. Not only will this undermine investment in online services, but it will also accelerate the trend toward paid tiers. To wit, Meta’s introduction of subscriptions for European users is widely perceived as a response to the DMA and the GDPR.
This has knock-on effects for other players in the ecosystem. Research shows that GDPR enforcement, which requires firms to gather user consent for data processing, has a negative impact on startup investment. There is every reason to believe the DMA, which contains similar provisions, will have similar effects.
All of these harms would not be so problematic if the DMA actually delivered on its promise of increasing competition online. But there, too, doubts are starting to creep in. For instance, rivals like Meta and Epic Games are finding it harder than they expected to offer competing app stores or payment services.
At least some of this is due to the reality that offering safe online services is a costly endeavor. Apple reviews millions of apps every year to weed out bad actors. Replicating this business is easier said than done.
But instead of acknowledging these difficulties, officials and rivals are cutting an increasingly combative figure. Thierry Breton said the European Commission would take “strong action” if Apple’s compliance plan was not “good enough.” And there is mounting discontent from rivals. All of this suggests that there is little room or appetite for compromise. Litigation is looking increasingly likely.
The upshot is that the DMA was passed with great haste and fanfare, but there is mounting evidence that too little thought was given to its likely consequences. Addressing these issues is a thorny problem, but acknowledging that they exist and taking responsibility for them is a necessary first step. Whether policymakers do so is another question. On this score, alas, the European Commission does not have a reputation for introspection.
The post The Broken Promises of Europe’s Digital Regulation appeared first on Truth on the Market.
The post A Closer Look at Spotify’s Claims About Apple’s App-Store Practices appeared first on Truth on the Market.
Spotify’s first claim is that Apple imposes a “discriminatory” “tax” for use of its in-app purchase system. Right off the bat, this is deliberately misleading. Apple’s fee is no more a “tax” than Spotify’s subscription fee is a “tax.” As Lazar pointed out in a previous post, it’s worth going over how the App Store actually works.
Apple collects a fee for the use of its proprietary software and iOS (as well as for access to a customer base with which Apple has built considerable goodwill over the years) through a commission on its in-app purchasing system (IAP). Apple also charges a commission on paid apps. But most app downloads (86%) are free, in which case Apple charges nothing. This arrangement can be sustained because Apple cross-subsidizes those free downloads by charging a commission on in-app purchases and paid downloads. This was acknowledged in the 9th U.S. Circuit Court of Appeals’ decision in Epic v. Apple.
In other words, Apple’s 30% “tax” (which, it should be noted, is the industry standard also charged by platforms like Microsoft, Google, and Sony) is, in fact, a fee for access to Apple’s proprietary software, platform, and user-base—all of which have taken Apple years of investment and goodwill to build. Even if Apple didn’t charge 30% through the IAP, it would still be entitled to collect a fee through some other means. With the current model, the cost of joining the App Store is lower, which particularly benefits small and non-game developers. Under a new model, free apps might have to shoulder some of that financial burden.
As for Apple’s fee being “discriminatory,” it is only “discriminatory” insofar as it is applied to paid apps and in-app purchases, but not to free apps. That is, by most accounts, a good thing, as it allows smaller apps (which are generally distributed free) to gain a foothold in the market.
This is not, however, what Spotify is claiming. Instead, Spotify claims that, e.g., Uber and Deliveroo do not pay the 30% fee simply because these services do not compete with Apple:
Does Apple Music pay it? No. Does Uber pay it? No. Deliveroo? No. Apple does not compete with Uber and Deliveroo. But in music streaming, Apple gives the advantage to their own services.
The suggestion is that Apple charges a fee to downstream competitors to give a leg up to its own services (in this case, presumably, Apple Music). But this is simply not true. Uber and Deliveroo’s exemption from the App Store fee is not due to the fact that they do not compete with Apple. Rather, Apple’s 30% fee only applies to digital goods and services (although not all digital goods and services; there are exceptions and counter-exceptions) that are subject to the App Store payment system.
Deliveroo and Uber, by contrast, both deliver physical goods and provide physical services and, thus, these companies utilize their own payment methods.
Like most of Spotify’s claims, this one needs to be put into context. Spotify can share deals through any other means—just not through the app (and even this is only half true). For example, Apple’s anti-steering provisions don’t prevent Spotify from advertising on billboards or buying TV spots (see here, here, and here). The company still has ample means to reach consumers.
The fact that Apple doesn’t allow Spotify to advertise on iOS in its preferred manner—i.e., directly and for free—doesn’t mean that Apple is a monopolist or that its conduct is anticompetitive. After all, Spotify also doesn’t allow businesses to advertise without paying a fee. Nor does it allow users to listen to music for free. That is because, like Apple, Spotify has costs that it has to cover, such as royalties, and investments it needs to recoup, such as app maintenance and quality-of-life improvements. It does this by charging users and advertisers a subscription.
Spotify also complains that iOS users cannot upgrade to Spotify Premium with “ease.” But what is “ease” in this context? What Spotify means is that Apple doesn’t provide the kind of ease that is most ideal for Spotify: a conspicuous one-click option to subscribe to Spotify Premium inside the app, without paying Apple a fee.
But “ease” doesn’t mean “easiest,” or “as easy as Spotify would like it to be.” In the context of an exploitative-abuse case like this one, the question should be whether the process imposed for upgrading to Premium truly downgrades the user experience to such an extent that it harms consumers.
Two concrete reasons suggest otherwise. First, the path to upgrading to Premium is not difficult unless one lowers the bar for the average Spotify user to new, unrealistic depths (on regulatory paternalism, see here). Second, despite Apple’s anti-steering provisions, Spotify can inform iOS users about the existence of premium plans inside the app—just not about pricing. It doesn’t take a genius to figure out that the next stop is an internet search.
Let’s start with the former. As Spotify, Netflix, and other app developers have found, the way around Apple’s IAP is no Sisyphean ordeal. Spotify doesn’t use Apple’s IAP, so it pays Apple nothing whenever a user subscribes to premium (in fact, Spotify doesn’t pay Apple anything ever). Payment happens entirely outside of the App Store. Users open a browser on their phone, tablet, or computer, sign up, and pay for the service. After that, they can access content via the app on their phone or tablet. This is a one-off process, except when subscribers seek to either upgrade or downgrade their plans—in which case, the process is to be repeated.
While this may require additional steps relative to making in-app purchases, it takes minimal effort even for a user with below-average internet-browsing skills, and adds a paltry 30 seconds to the registration/upgrade process (iOS even lets you sign up with facial ID and pay with Apple Pay on the website). Further, the marginal information cost goes from very low to zero after the user learns that payment needs to happen outside of the app, and does it for the first time. Or, put differently, once you know that you need to subscribe and upgrade from outside the app, you never have to relearn it again.
Next, Spotify claims that it can’t inform users that they have to use a browser to upgrade their plans or to point them in the right direction. But Spotify clearly informs users through the app that the app cannot be used to upgrade a subscription plan.
Spotify is also allowed to inform users about the existence of premium accounts and their advantages. When a user clicks on one of those premium plans in the app, he or she is taken to Spotify’s website to complete the transaction.
For free users, Spotify also schedules a vocal prompt to subscribe to premium after every two or three songs (“skip the ads, go Premium”). Once users learn that there is a premium version of the app (assuming they, in 2024, somehow didn’t know that when downloading the app), where else would they go?
It is clear that consumers are logically guided to the web by Spotify. If you can’t perform an act on an app version, the only logical inference is that it can be performed on the web version. This is especially obvious with a service where creating the account and paying for the recurring service took place through the web version.
Apple’s restrictions are in place to nudge distributors to use its IAP, which is how it recoups investments in its iOS and monetizes the App Store. As a result, a marginal group of consumers who would have otherwise subscribed to premium may keep using the free version—which, by the way, still gives Spotify significant advertising revenue. Of course, Spotify’s revenue would be higher still if everyone switched to premium, which appears to be the source of its gripe with Apple. But not actively helping Spotify to maximize its profits is not an antitrust offense, and nor should it be.
Even if one rejects the procompetitive justification for Apple’s anti-steering provision, as the Commission did in the decision it published Monday, it is questionable whether this constitutes consumer harm, and particularly whether it is the sort of harm that warrants antitrust intervention. Not everything that falls below the bar of Spotify’s ideal notion of “ease” harms consumers.
Spotify further claims that Apple blocks app updates and enhancements that would improve the user experience. This is a strange claim, and one that we admit we are less competent to appraise without additional information. We do know, however, that Apple is notoriously cautious about the apps and updates it allows on iOS. A single bad player or a single broken app can compromise the integrity of Apple’s hardware and the user experience that Apple strives to uphold across its products. This, in part, is why Apple has the safest operating system out there, especially compared to Android (see, for example, here).
Apple applies (or at least, used to apply, before the Digital Markets Act entered into force today) a two-tiered app-review process combining machine and human review to ensure that apps not only work properly and are free of scams and malware, but also do not include any objectionable content (such as pornography, or social-engineering apps like the “Blue Whale Challenge”). The alleged rejection of some of Spotify’s enhancements could be due to these safety or security concerns.
What would Apple stand to gain from undercutting Spotify, anyway? Is it that making Spotify worse would boost Apple Music, as the company seems to imply? Perhaps, but why compromise the quality of iOS to monopolize a relatively marginal adjacent market? Apple Music accounts for only about 6% of Apple’s total revenue, compared to 59% from the sale of phones and tablets. That would be like killing the goose that lays the golden eggs for one good meal (for a similar point in the Amazon/iRobot merger case, see here).
We are also skeptical of the suggestion that Spotify runs worse on iOS than elsewhere due to Apple blocking quality-of-life improvements. Anecdotal evidence suggests otherwise. According to one Reddit user, Spotify runs silky smooth on iOS compared to Android. One thread on Spotify’s official community forum is titled “Android app is horrible compared to iOS.” Are Spotify’s enhancements also being denied on all Android phones and tablets? That would be very unlikely (and we are not aware that Spotify has made such a claim).
Conversely, it seems that the Spotify/iOS combo has traditionally been the better deal because there were certain features available on iOS but not on Android, such as swiping left to put a song in the queue or right to add it to a playlist; visualizing a song’s remaining time; or the “equalizer” option. All of these features were pioneered on iOS before reaching Android. How could that be, if Apple is blocking Spotify’s enhancements?
Maybe the putative issues Spotify claims it faces on iPhones and iPads don’t actually have much to do with Apple maliciously denying the company the ability to improve its app. As Occam’s Razor posits, the simplest explanation is often the correct one. Maybe Spotify is simply using the momentum of the DMA and its 10-year-long campaign against Apple to blame the company for its developers’ own shortcomings, and to extract some rents in the process. To use a term popular in European regulatory circles and the DMA’s lexicon, that seems unfair.
According to Spotify, Apple’s rules exist for one reason only:
to give Apple an unfair advantage over the many other services that are working hard to compete for fans. For competition to work and innovation to thrive, Apple needs to play fair.
While Spotify may perceive certain aspects of Apple’s policies to be restrictive or unfair, it is essential to evaluate these claims critically and consider the broader context. Doing that, we see that some of these claims are overblown, taken out of context and, indeed, unfair in their own right. Ultimately, not everything that falls short of Spotify’s ideal version of its relationship with Apple is anticompetitive, unfair, exploitative, or a herald of monopolization.
The post A Closer Look at Spotify’s Claims About Apple’s App-Store Practices appeared first on Truth on the Market.
The post Blackout Rebates: Tipping the Scales at the FCC appeared first on Truth on the Market.
Enter the Federal Communications Commission (FCC) with a bold proposal to reduce the likelihood of programming blackouts. The proposed rules would require cable and satellite providers to give rebates to customers when there’s a blackout due to failed retransmission agreements with broadcast stations, networks, and channel-group owners.
Given the history of retransmission and the current regulatory and competitive environment, the FCC’s proposal looks to be a fool’s errand that may end up doing more harm than good.
Imagine it’s the late 1940s or early 1950s and you live in a small or mid-sized town. There are miles of hills and valleys separating your town from the major metropolis with a television station, so you can’t pick up the signal.
One day, the local radio station owner comes up with an idea. He’ll set up a big antenna on top of one of the tallest buildings in town to get the TV signals from the big city. Then, he’ll run a cable from that antenna down into town, where he can run separate cables to households and their TVs. The households will pay a set-up fee and a monthly charge to cover the entrepreneur’s costs.
That’s the basic parable of how community-antenna television—or CATV, now known as cable TV—came to be. And it was a game changer for small and rural communities that never before enjoyed big-city TV signals.
At first blush, this would seem to be what is known as a Pareto improvement—one where someone is made better off, but no one is made worse off. Households got previously unavailable TV programming, the entrepreneur turned a profit, and the big-city TV station expanded its viewership, which offered opportunities for greater advertising revenues.
These benefits were amplified by the FCC’s “freeze” on issuing new television licenses between 1948 and 1952, when demand for TV was increasing. CATV was a market solution to a regulatory hiccup.
Once the FCC began issuing licenses again, conflicts around CATV arose. Smaller and more rural areas that were previously unserved by broadcast received licenses for TV stations. These stations rightly saw CATV providing “out of market” signals as competition for viewers and advertising, as discussed in Carter Mountain Transmission Corp. v. FCC (D.C. Cir. 1963):
If [CATV] appellant’s application were granted, the service which could be offered by the [CATV] Western systems would be improved. Western could offer subscribers a better picture. It is probable that new subscribers would be attracted and that many, if not most, of the subscribers would view only the stations on the CATV cable. The conclusion is certainly warranted on these facts that the local [broadcast] station would find it increasingly difficult to sell advertising, its best source of income, in the light of this potential shift in listener-viewer reception, and its survival would be seriously jeopardized.
Under the Radio Act of 1927 and the Communications Act of 1934, radio stations were prohibited from rebroadcasting another station’s programming without the originating station’s consent. With an eye toward these regulations, TV broadcasters expected the same rules would apply to cable retransmission of broadcast programming. That expectation was misguided.
According to Jack Goodman, in 1959, the FCC decided that the Communications Act did not require cable providers to obtain consent before retransmitting over-the-air TV signals. In addition, subsequent court decisions in the 1960s and 1970s held that cable retransmission did not constitute a “performance” under the Copyright Act, and that cable providers were therefore not required to obtain consent from, or pay royalties to, copyright holders.
In this regulatory landscape, over-the-air broadcasters believed (perhaps correctly) that cable providers were free-riding on broadcasters’ investments in programming. To say that a lot happened between then and now among Congress, the FCC, and the courts would be an understatement. But that interlude is a story for another day.
Fast forward to the Cable Act of 1992. Under the Cable Act, each cable operator with 12 or more channels must carry the signals of local commercial television stations and qualified low-power broadcasting stations. Broadcasters have two options to exercise their rights under this provision: “must carry” or “retransmission consent.”
It’s important to note that the choice is up to the broadcaster, not the cable operator. The broadcaster could either exercise must carry or negotiate retransmission consent. Under must carry, the broadcaster is guaranteed carriage, but will not receive any fees or royalties. In negotiating retransmission consent, the broadcaster can receive carriage fees, but also runs the risk that the cable operator refuses the deal and drops the broadcaster from the cable operator’s channel lineup.
In the early years of the Cable Act’s framework, Charles Lubinsky reports that 80% of commercial TV stations and 90% of network-television affiliates chose retransmission consent over must carry. Interestingly, most of these agreements did not involve cash payments. Instead, the broadcasters negotiated for carriage of new broadcast services, according to Lubinsky:
Fox leveraged the introduction of fX; ABC leveraged ESPN2; NBC leveraged America’s Talking and renewals for CNBC; and some local stations leveraged a new channel, a regional news station, local news updates on cable news stations, or other specialty stations.
Jeffrey Eisenach notes that the first “significant” retransmission agreement to involve monetary compensation from a cable provider to a broadcaster was in 2005. By 2008, retransmission fees totaled $500 million, according to Variety.
It was around this time that networks began imposing “reverse transmission compensation” on their affiliates. Previously, networks paid local affiliates for airtime to run network advertisements during their programming. The new arrangement reversed the flow of compensation, such that affiliates were expected to compensate the networks. A 2011 Variety article explains:
Station owners also face increased pressure to secure top fees for their retrans rights because their Big Four network partners now demand that affiliate stations fork over a portion of their retrans windfall to help them pay for pricey franchises like the NFL, “American Idol” and high-end scripted series.
It was also around this time that local-TV advertising revenues began to stagnate and shrink, as shown in the figure below. Higher retransmission fees were seen as a way to offset reduced ad revenues. By 2020, S&P Global reported that annual retransmission fees were approximately $12 billion.
With the wide range of programming and delivery options, it’s probably unwise to generalize who has the greater bargaining power in the current system. But if one had to choose, it seems that networks and, to a lesser extent, local broadcasters are in a slightly superior position. They have the right to choose must carry or retransmission and, in some cases, have alternative outlets (such as streaming) to distribute their programming.
I say slightly superior, because if the decks were firmly stacked in their favor, there would be fewer retransmission disputes. As many have noted, retransmission disputes that resulted in programming blackouts increased during the 2010s, peaking in 2020, as shown in the figure below.
As noted up top, in a step toward reducing the number of blackouts, the FCC recently proposed rules requiring cable and direct-broadcast satellite providers to give rebates to subscribers when they are deprived of programming during blackouts associated with retransmission-consent disputes.
At first glance, this may seem like a fair proposition. Consumers are merely bystanders in the dispute and they should not be on the hook to pay for programming they can’t access.
But as with most issues in law & economics, the question is more complex than simple consumer protection. If we believe that consumers are bystanders and that they are harmed by blackouts, then who is to blame for the blackouts? Or, in law & economics lingo, who is the least-cost avoider? This is an important question, because it can be argued that the party in the better position to avoid the blackout should bear more of the cost of the blackout.
So who is responsible for programming blackouts?
It turns out that there is no good answer. Eun-A Park, Rob Frieden, and Krishna Jayakar use a database of nearly 400 retransmission agreements executed between 2011 and 2018 to evaluate the factors most likely to explain blackouts.
One factor is perhaps the most obvious—we’re seeing more blackouts because more stations have eschewed must carry in favor of negotiating retransmission consent. Park and her co-authors note that, even if the likelihood of a negotiation leading to a blackout is stable over time, the increased number of negotiations would be associated with an increase in the number of blackouts.
But Park et al.’s data don’t support this hypothesis: There’s no significant relationship between the number of “deals” and the number of blackouts. Instead, Park and her colleagues identify three factors that are associated with more frequent and longer blackouts.
To be blunt, everyone’s to blame, so no one’s to blame. Ultimately, Park et al. conclude, “the statistical analysis is not able to identify the parties or the tactics responsible for blackouts.” Based on this research, it’s not clear which party is the least-cost avoider of blackouts.
Which takes us back to the FCC’s proposal to mandate that cable and satellite providers rebate consumers for lost programming during blackouts. The proposed rules would put a thumb on the scale to favor broadcasters over cable and satellite providers. The rebates clearly impose a monetary cost, but there’s also a non-monetary cost.
If the rules are approved, the FCC would be tacitly indicating that the federal government believes cable and satellite providers are liable for programming disruptions resulting from retransmission disputes. This suggestion may drive more consumers to “cut the cord” and switch to an alternative multichannel video programming distributor (MVPD), such as a streaming service.
The rules may backfire on programming providers. If cable and satellite providers are liable for the costs of the blackout, then they have incentives—and a rationale—for offering lower compensation to broadcasters to offset these costs. This could be especially harmful to smaller or local programmers who rely on retransmission fees to fill in the gap left by declining advertising revenue.
Finally, consumers may cheer the idea of receiving a rebate for missed programming during a blackout. Practically speaking, however, the rebates they receive would almost certainly be less than $10 and likely less than $5. The costs of imposing, monitoring, and enforcing the FCC’s proposed rules may end up being significantly larger than any consumer benefits.
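A back-of-the-envelope calculation, using hypothetical but plausible figures, illustrates how small these rebates would be. Take a $100 monthly bill covering a 100-channel lineup, or about $1 per channel per month, and suppose a two-week blackout of a single channel:

$$ \$1 \times \frac{14}{30} \approx \$0.47 $$

Even several blacked-out channels over a full month would rarely push a prorated rebate past a few dollars per subscriber.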
The current retransmission regulations are imperfect, but perfection can never be achieved in such a complex and dynamic environment. The current rules have worked for three decades, and there’s no indication that the world has shifted in a way that demands the radical shakeup the FCC is proposing.
The post Blackout Rebates: Tipping the Scales at the FCC appeared first on Truth on the Market.
The post The Law & Economics of the Capital One-Discover Merger appeared first on Truth on the Market.
Credit analysts like Matt Schulz of LendingTree note that “if Capital One sees that there’s a bunch of overlap between what they have and what Discover brings to the table, and they want to combine the two instead of keeping them as separate brands, you could end up seeing some of those offers get reduced.” The Wall Street Journal reports that Capital One intends to maintain the Discover brand and shift some of its debit and credit cards from the Visa and Mastercard payment networks to Discover’s payment network.
The proposed deal requires shareholder approval, as well as regulatory approvals from the Federal Reserve Board, Office of the Comptroller of the Currency (OCC), and the Federal Deposit Insurance Corp. (FDIC). Although it is expected to take a year to consummate, and even longer for its effects to be felt, the deal has already been condemned from some quarters.
The critics include Sens. Josh Hawley (R-Mo.) and Elizabeth Warren (D-Mass.), who have demanded that regulators block it. Warren claims that, by increasing concentration in the credit-card sector, the deal would erode financial stability, reduce competition, and hurt consumers through higher fees and credit costs. The deal could also be challenged by the U.S. Justice Department (DOJ), which has promised increased scrutiny of financial-sector mergers. It will also be carefully scrutinized by the OCC, which recently announced changes to its merger-review process that are in-line with Biden administration statements promising to combat the perceived issue of merger-driven concentration in the banking sector.
The merged firm would be the sixth-largest bank in the United States by total assets and deposits. It would also be the country's biggest credit-card issuer, with a market share of 19%, ahead of current market leader JPMorgan Chase's 16%.
Capital One and Discover differ significantly, however, in their operations and business models. Capital One is the ninth-largest U.S. bank, with roughly 260 branches nationwide. Like most card issuers, it issues credit cards to its customers primarily through the Visa and Mastercard payment-processing networks. By contrast, Discover operates just one full-service branch, and issues credit cards through its own payment-processing network, which is the fourth-largest nationwide, behind Visa, Mastercard, and American Express.
More importantly, while Visa and Mastercard operate so-called "four-party" networks and are not themselves direct issuers, Discover's business model is broadly similar to that of American Express, in that while both firms offer some "four-party" network services, they primarily serve as "three-party" networks for cards they issue themselves. The four major payment networks also vary in terms of merchant acceptance, fraud protection, their ability to offer benefits and rewards programs, and foreign transaction fees. As WalletHub notes:
Visa and Mastercard boast a significant advantage in terms of worldwide acceptance, while Amex and Discover supplement their payment facilitation business by issuing cards directly to consumers. Card network rental car insurance, extended warranty, and fraud liability policies vary widely as well.
As the merger would give Capital One access to its own payment network, it represents both a horizontal merger and vertical integration. When it comes to the horizontal merger, the merged entity’s 19% share of the credit-card issuer (or revolving small loan) market is well below the threshold that would typically raise regulatory concerns about market power sufficient to substantially reduce competition.
When it comes to vertical aspects of the merger, U.S. antitrust law is concerned with whether the merged entity would have both the incentive and ability to reduce competition and harm consumers. To its credit, as my Mercatus Center colleague Alden Abbott notes:
Capital One has used recent acquisitions to create new products that generate economic benefits, including enhancements in Capital One Shopping (through the acquisition of Wikibuy), Capital One Travel (through a partnership with Hopper), and Capital One Dining (through a collaboration with SevenRooms). These improvements make consumers better off.
To understand the likely implications of the merger, we must consider the history and current status of Discover, as well as the nature and circumstances of the credit-card market. Though it is the fourth-largest credit-card payment network in the country, Discover significantly trails its three larger rivals, and its share of balances as a credit-card issuer is just 8%, behind Capital One’s 10%.
Among the four major credit-card networks, Discover accounts for just 4% of the market by purchase volume, well behind Visa (52.6%), Mastercard (23.7%), and even American Express (19.6%). In terms of number of cards in circulation, as of 2022, Discover fared slightly better, with a market share of 7.13%, only slightly behind American Express' 10.17%. This suggests that, among the two credit-card networks that issue their own cards, American Express customers tend to use the cards more extensively than Discover customers, despite their similar business models. Here too, Visa (41.7%) and Mastercard (27.4%), which exclusively offer their networks to other card issuers (including banks like Capital One), are well ahead as market leaders.
It is no surprise, then, that Capital One and various financial analysts present the acquisition as an opportunity to create a powerful competitor that can challenge the dominance of Visa and Mastercard. The two top payment networks have been criticized for high fees, with Visa currently facing a DOJ investigation over the same. Capital One customers account for $300 billion in American credit, about 10% of the total credit-card volume on the Visa and Mastercard networks, giving Capital One the opportunity to steer a large volume of customers and transactions to the Discover network. This poses a significant potential competitive threat.
The merger could also give the combined entity the economies of scale and transaction volume necessary to improve rewards-program offerings, offer improved security and fraud protections, or lower card-transaction fees, thereby placing further competitive pressure on the other networks. Consider that Discover originally got its start in 1985 as a division of the Sears retail chain, in an era when Sears was such a significant retailer that its card offering was a viable competitor to the longstanding incumbent card networks. Capital One notes that the merged firm would bring together more than 100 million customers.
Discover's lack of scale economies, given its smaller customer base, has historically handicapped the firm. Though it does continue to distinguish itself by not charging annual fees, the company's 1990s-era strategy of branding itself as a lower-fee alternative to Amex that could attract merchants by charging them lower fees to use its network was hampered by Amex's use of vertical-trade-restraint contracts with merchants. These forbade merchants from steering or incentivizing customers to pay through any other credit-card payment network if they wanted to continue to use the Amex network. Given the high volume of transactions from Amex customers and the tendency of Amex customers to shop elsewhere if stores refused to accept Amex, most major merchants in the United States chose to accept these restraints rather than forgo access to Amex's network.
The U.S. Supreme Court confirmed the legality of Amex's restraints on steering customers in its 2018 Ohio v. American Express decision. The Court found that the restraints have procompetitive benefits in the two-sided credit-card market by supporting Amex's generous rewards programs, as keeping Amex customers within the card network ensures the network's continued viability, as well as its ability to bring higher purchase volumes to merchants.
The decision does remain controversial among some law and economics scholars and segments of the financial-sector policy community, as the court found evidence that Amex's ability to charge merchants higher fees that benefit Amex customers resulted in merchants raising prices that were passed on to customers who do not use Amex cards (among other purported anticompetitive effects). Conversely, other scholars have supported the ruling for recognizing dynamic competition in two-sided markets, and the procompetitive benefits of steering restraints in enabling firms like Amex to fund generous rewards programs and more secure, reliable payment networks.
Whether Discover-Capital One will revisit the strategy of competing on lower fees is uncertain, though there would likely be at least some cost savings where the card issuer (Capital One) and the merchant's bank are one and the same. The combined entity may instead choose to maintain or even increase its fees. This does not, however, necessarily mean that customers would be worse off. For instance, interchange fees charged to merchants frequently support payment-network development, rewards programs, and enhanced security and fraud prevention.
The merged firm may even choose to compete with Visa and Mastercard in the "four-party" card market, as Discover already does to a limited extent. It is also possible that the merged firm could use its synergies and cost savings to both lower fees for merchants and customers and expand rewards programs, cybersecurity, and other features. All three of these possibilities constitute increased competition and a challenge for Capital One and Discover's competitors.
Capital One anticipates that the merger will generate $1.5 billion in cost synergies and $1.2 billion in network synergies in 2027. The merger would also generate other welfare-enhancing, pro-competitive efficiencies.
In addition to these potential synergies, regulators and antitrust enforcers may also consider the relative position of Discover, which has faced challenges recently that hindered its ability to compete at its full potential. Last year, it was forced to set aside $365 million to cover liabilities arising from consumer-compliance process flaws and misclassification of its customers' credit-card accounts, a controversy that was followed by the resignation and replacement of its CEO and president. Capital One's resources could help Discover weather its setbacks and facilitate the commitments that Discover has made to the FDIC to improve and secure its consumer-compliance processes.
Notably, and despite its concerns about mergers in the financial sector, the Biden administration has also nominally welcomed deals that could “rescue” struggling institutions, though it remains to be seen whether the administrative agencies and regulators will consider this to be applicable to Discover. Recent aggressive actions by Biden administration agencies, including the DOJ’s move to block JetBlue’s proposed acquisition of Spirit Airlines, indicate that their sympathies for struggling-firm arguments only go so far.
In evaluating the Capital One-Discover merger, regulators and antitrust enforcers should consider evidence of the above pro-competitive synergies, potential improvements and efficiencies, and the firm’s ability and incentives to pass these on to consumers. Conversely, a focus on superficial increases in market concentration that fail to account for the deal’s effects on the competitive process or consumers could lead to less competition in the credit-card market and harm to consumers and innovation. This would only benefit incumbent dominant firms, including those that are the current targets of antitrust investigation and scrutiny.
The post The DMA's Missing Presumption of Innocence appeared first on Truth on the Market.
Under competition law, even dominant companies are presumed innocent; bigness alone is not a crime. Almost any kind of business conduct can potentially be justified if it can be proven to lower price, increase output, or improve quality for consumers. Yet the DMA's per se approach removes the important safeguards that ensure pro-competitive, value-creating, and welfare-enhancing behaviors can continue. While other technology companies—even those with proven market power—can still justify such conduct on pro-competitive grounds, designated gatekeepers cannot (except on narrow grounds of security).
The DMA creates a two-tiered legal regime in which some digital companies—that is, those not subject to the DMA—are "more equal" than others. If applied too strictly, a host of pro-competitive conduct will be prohibited. Companies that benefit from the existing digital ecosystems' efficiencies and network effects will lose out, as will consumers. But why is that the law, and can these negative outcomes be avoided?
Proponents of the DMA say that it’s not about targeting large U.S. technology companies, but you wouldn’t know that from looking at the list of designated core platform services (five out of the six designated companies are from the United States, and none is from the EU). The conduct of concern is largely the same as that already covered by competition law, and the obligations to be imposed are modeled on competition enforcement, with some based on ongoing investigations (or those still subject to judicial review). There is, however, no effects-based analysis or case-by-case assessment for application of the DMA. The Commission’s projections on what this will mean economically are shaky at best.
The Commission itself admits that, for some of the DMA’s interventions, “there is no decision or judgment confirming its effects on the market” (at para. 155), and those intimately familiar with the law’s details note that the DMA “also covers practices that have not been yet the subject of antitrust investigations in the EU or any of its Member States.” Unlike competition law, however, these novel obligations do not apply to all dominant companies—only those labeled as “gatekeepers.”
These rules are already forcing the designated platform owners to redesign their products and services, reducing their quality and exposing them to vulnerabilities. While the conduct has not been found illegal in court, the defendants (and their users) will be punished, and they cannot contest the obligations either by pointing to the absence of a theory of harm or by demonstrating efficiencies, because the latter are not cognizable. They will be forced to relinquish their technology, infrastructure, and—in some cases—trade secrets and intellectual property to their rivals "with the overall aim of ensuring the contestability of gatekeepers' digital services."
European policymakers like to present the DMA as, in effect, an embodiment of the adage from Spider-Man’s Uncle Ben that “with great power comes great responsibility.” But all companies have a responsibility to follow the law and, under antitrust law, all companies are prohibited from restricting competition.
Importantly, the DMA goes a step further. If taken literally, it treats these companies like state-owned public utilities, directed by the regulator to pursue a particular form of European industrial policy. Instead of competition law's focus on consumer welfare, gatekeepers' products and services will be geared towards "contestability" and the European Commission's particular notions of "fairness." Unlike all other companies built on private investment, successful risk-taking, ingenuity, and hard work, these designated companies could be tasked by the regulator to pursue an ever-moving series of goalposts, with no defenses in sight.
In competition cases, enforcers must not only show harm, but also afford the defendants a chance to provide defenses. Defendants can argue that their behavior was pro-competitive; that it led, on balance, to increased competition and improved consumer welfare; or that it lowered prices or led to increased quality or innovation. Dominant platforms that engage in the same kind of anticompetitive conduct that is prohibited (for gatekeepers) by the DMA will continue to have the right to defend themselves on these grounds — for conduct that is essentially the same as that covered by the DMA (self-preferencing, lack of interoperability, use of nonpublic third-party data, anti-steering provisions, MFNs, etc.).
But under the DMA, designated gatekeepers do not have the right to defend themselves on these same grounds. This goes against the recommendation of experts, who recognize that the conduct in question can be pro-competitive and should not be prohibited per se. Even the DMA-modeled regulatory proposals of some U.S. lawmakers have added “affirmative defenses,” while the UK’s Digital Markets Competition and Consumers Bill contains a “countervailing benefits exemption” that at least theoretically looks at consumer welfare, so that companies have a chance to prove their innocence, and compete fairly on the market.
The lack of pro-competitive defenses in Europe under the DMA means that there is a very real risk that some of the law’s provisions could end up prohibiting procompetitive conduct when applied in the wrong context.
Legislators around the world have been considering their own models of antitrust reform, and how best to address the enforcement challenges posed by the digital industrial revolution. But there shouldn’t be a debate on whether antitrust reform includes basic legal pillars like the right of defense or judicial review, or whether the law ought to unfairly target specific companies with a different legal standard. Punishing companies’ market conduct without proof of harm, and instrumentalizing them to achieve certain market outcomes without consideration of competitive effects or consumer welfare, is not a sound basis for reform.
The DMA was first mooted as a reform that would update antitrust law to address existing flaws. But the explosion of new enforcement actions in the digital sector might itself be sufficient to show that evolution of the existing tools makes such reform unnecessary.
Then why do we have the DMA? There’s a saying that “the purpose of a system is what it does.” To put it plainly, the DMA removes legal protections from a handful of large technology companies in order to apply far-reaching economic interventions that go beyond existing competition-law precedents.
European stakeholders rightfully take umbrage at foreign governments that would punish European companies arbitrarily and without rights of defense. U.S. lawmakers do, as well, and this unified front helps ensure that open-market economies can continue to deal on fair terms when trading abroad. The EU benefits greatly from this, and yet has created these new rules that could force leading U.S. tech firms to subsidize European rivals for their services (and potentially pay for the privilege, as well).
This can hardly be seen as “fair.” But it is a reality that, unless European policymakers find some limiting principles for the DMA, they could soon find European champions facing similar difficulties abroad.
The DMA is law and it must be enforced. There is, however, room in the enforcement regime to take account of “the specific circumstances of the gatekeeper” (Article 8(3)). Moreover, proportionality is a general principle of European law. There is room to avoid the worst outcomes and to ensure that, in practice, the DMA promotes consumer welfare, innovation, and value creation. Some of the conduct prohibited by the DMA will inevitably be beneficial and should be permitted.
There is a lot of commentary suggesting guiding principles that could limit unintended consequences. The question remains whether the Commission will investigate thoroughly and reflect before enforcing changes that could have harmful consequences. After all, with great power comes great responsibility, and the greatest power is the power of governments.
The post The CFPB's Misleading Slant on Competition in Credit-Card Markets appeared first on Truth on the Market.
New @CFPB research reveals that large banks are offering worse credit card terms & interest rates than small banks and credit unions, regardless of credit risk. For an average cardholder, the difference can amount to $400-$500 in additional interest/year. https://t.co/gsliqy1ohn
— Lina Khan (@linakhanFTC) February 20, 2024
Hmmm, does it? How so? And what ought one to do with that information?
A caveat: I've spent many years on competition issues, but I haven't done much work on credit-card competition. I'll focus on some rather straightforward points, but for deeper dives on specific issues to do with credit cards and competition (and regulation), see my International Center for Law & Economics (ICLE) colleague Julian Morris on the Credit Card Competition Act here; and Julian, Todd Zywicki, and Geoff Manne on payment-card interchange fees here.
Of course, this issue is not exactly in the FTC’s wheelhouse, either. As they say on their credit-card web page, “most credit cards are issued by banks, which are outside FTC’s jurisdiction.” And a tweet is just a tweet. Still, there are staff at the FTC with considerable experience in economic research and the FTC does have jurisdiction over nonbanks that deceptively market credit cards.
There are other connections between the FTC and the CFPB. For example, the agencies share enforcement responsibility for the Fair Credit Reporting Act (FCRA), which sets out requirements for companies that use data to determine creditworthiness, insurance eligibility, suitability for employment, and to screen tenants. And, as it happens, Rohit Chopra, the current CFPB director, served as an FTC commissioner from 2018 right up until he assumed leadership of the CFPB (and perhaps even a bit after that, via “zombie votes”).
Turning back to Khan’s tweet (yes, it’s just a tweet, not an article, or congressional testimony, or a lawsuit) touting the CFPB finding, it links not to CFPB research, but to a press release, which itself links to a “data spotlight” based on, among other things, an October 2023 CFPB report on the consumer credit-card market.
That's fine, in and of itself. The underlying 175-page report is required by statute and issued every two years. Both the CFPB and the FTC have reporting obligations. Moreover, consumer education can be a useful and low-cost intervention that better enables consumers to participate in competitive markets. So, an agency—independent or otherwise—might do actual research, and it might report on that research in various ways for various readers. Primary research can be translated into a substantial (if, perhaps, less technical) report for lawmakers and others. A long report might get an executive summary. And a report can inform more accessible publications aimed at consumers, businesses, or others. A gloss here and a graphic there can provide easily digested, material information.
In this case, the report itself responsibly notes some of its limitations:
The limitations inherent to the CFPB’s methodology in this report are substantially similar to those inherent in the CFPB’s previous reports on the credit card market. All results reported from data throughout this report aggregate results from multiple industry participants. Each source has particular limitations, as not all data rely upon consistent definitions or cover the same periods, products, or phenomena. Additionally, the available data generally do not allow for definitive identification of causal relationships. Accordingly, correlations presented throughout this report do not necessarily indicate causation.
So, among other things, we see here lots of talk about averages, disparate—perhaps not easily integrated—data, and correlations, but not very much about causation.
There’s a fair bit going on in the report, if not the data spotlight. For today’s purposes, I want to focus on just a couple of things.
Much of the report's discussion regards competition among cards and issuers, describing various dimensions of competition and innovation over a highly differentiated product space. Consumers likely know that card offers may vary along multiple dimensions—including, among others, signing bonuses; rewards programs; interest rates (reported as maximum annual percentage rates); over-limit charge policies; fees (late fees, cash-advance fees, etc.); and, perhaps of specific relevance to offerings from larger banks, international fees and purchase protection.
As the report notes, over-limit transaction fees are highly regulated, and have largely been eliminated. And, of course, a given consumer’s credit limit may vary (and may be adjusted) even for a given card, obtained at a time certain.
Two credit cards, both from the same issuer (say, for example, Citibank) using the same network (say, for example, Visa) obtained in the same week under the same credit rating (e.g., a certain FICO score) do not necessarily—or even likely—offer the same bundle of terms. That makes for some complexity. Then again, it’s a complex space (consumers, cards, issuers, networks, retailers, etc.), inputs vary, and different consumers value different terms differently.
So far, so good. But then there’s this:
About 4,000 financial institutions offer credit cards, yet a handful of issuers represent an overwhelming majority of credit card debt. The top 10 issuers by average credit card outstandings represented 83 percent of credit card loans in 2022, continuing a decline from 87 percent in 2016. The next 20 issuers by reported credit card debt accounted for 12 percent, an increase of four percentage points over the past six years. 3,800 smaller banks and credit unions account for the remaining five to six percent of the market. No single issuer outside the top 15 represented more than one percent of total credit card loans in regulatory filings.
What’s the point? Well, basic structural features of the market may be of interest, and may signal something, even if—as many have noted in discussing the new FTC/U.S. Justice Department (DOJ) merger guidelines (and as I did here):
economic learning and agency experience have tended to diminish the role of structural presumptions over the course of several decades (at least). My ICLE colleagues and I spent a good many pages (and citations) on this in our response to the draft merger guidelines. The structure/conduct/performance paradigm has been largely abandoned, because it's widely recognized that market structure is not outcome-determinative. The view is shared, as we note, by scholars across the political spectrum.
We link to this from Fiona Scott Morton, Martin Gaynor, and Steven Berry, and this from the Global Antitrust Institute, but there are scores of relevant comments based on a well-developed body of literature.
Still, it seems a bit odd, and not just because 10 firms is not “a handful.” It seems odder still if we look at the CFPB data spotlight, which tells us that: “Lack of competition likely contributes to higher rates at the largest credit card companies.”
Does it?
Specifically, we are told: “CFPB research has found high levels of concentration and evidence of anticompetitive behavior in the consumer credit card market.”
Well, that sounds bad, even if it’s not so easily found in the underlying study.
The data spotlight further explains: “the top 30 credit card companies represent about 95 percent of credit card debt, and the top 10 dominate the marketplace.” More specifically, the report tells us that “[a]bout 4,000 financial institutions issue credit cards,” with the top 10 issuers (by average outstanding debt) accounting for 83% of credit-card loans in 2022 and the next 20 issuers accounting for another 12%. That tracks the numbers in the report, which, not incidentally, indicates declining market share by the top 10, from 87% in 2016.
But, 30 companies? That’s the high level of concentration?
According to the 2023 merger guidelines, markets with a Herfindahl–Hirschman index (HHI) score of greater than 1,800 are “highly concentrated.” Under the 2010 Horizontal Merger Guidelines, “highly concentrated” markets were those with an HHI greater than 2,500, and “moderately concentrated markets” were those with an HHI between 1,500 and 2,500.
Many readers know that HHI is a concentration measure that sums the squares of each individual market participant's market share. For the sake of simplicity, a market with five firms, where each possesses a 20% market share, would have an HHI of 2,000; that is, "highly concentrated" under the new guidelines, but only "moderately concentrated" under the 2010 guidelines.
If there are 30 firms, each with what I’ll round to a 3.3% market share, the HHI is 30(10.89) = 326.7. The new merger guidelines do not say what constitutes an “unconcentrated market,” but the 2010 Horizontal Merger Guidelines did: any market with an HHI below 1,500. Which is greater than 326.7, yes?
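If you'd like to check that arithmetic yourself, here's a minimal Python sketch (the hhi helper and the 2010-guidelines labels are mine, written to match the thresholds described above):

```python
# Minimal sketch: HHI is the sum of squared market shares (shares in percent).
def hhi(shares_pct):
    return sum(s ** 2 for s in shares_pct)

def label_2010(h):
    # Thresholds from the 2010 Horizontal Merger Guidelines.
    if h < 1500:
        return "unconcentrated"
    return "moderately concentrated" if h <= 2500 else "highly concentrated"

print(hhi([20] * 5), label_2010(hhi([20] * 5)))
# 2000 moderately concentrated  ("highly concentrated" under the 2023 guidelines)

print(round(hhi([3.3] * 30), 1), label_2010(hhi([3.3] * 30)))
# 326.7 unconcentrated
```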
Of course, the issuers don't all have equal market shares. According to at least one source, the top 10 have the following shares: 17.9%; 13.3%; 12.4%; 11.4%; 10.8%; 4.5%; 3.8%; 2.6%; 2.4%; 2.3%: voila, that's 81.4% (just did it in my head, and hoping I haven't botched the sum).
What happens when we sum the squares of those numbers?
320.41 + 176.89 + 153.76 + 129.96 + 116.64 + 20.25 + 14.44 + 6.76 + 5.76 + 5.29 = 950.16.
For those who have trouble with the number line, that’s less than 1,000, which is less than 1,500, which is less than 1,800, which is less than 2,500.
Oops, I forgot the other 18.6% share of outstanding credit card debt. Well, let's keep it simple and add a nice big square. Assume that there's just one more firm (not nearly 4,000). If that were true (it's not), we'd add 18.6 squared, which is 345.96 (and, of course, the largest sum of squares for any decomposition of that 18.6%).
Add that to 950.16 and we get about 1,296, still shy of 1,500 (the bottom rung of "moderately concentrated" under the 2010 guidelines). Which is, of course, below both 1,800 and 2,500.
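And here's the same check in code, using the shares quoted above and lumping the residual 18.6% into a single hypothetical firm (deliberately overstating concentration):

```python
# Top-10 issuer shares as quoted above (percent of outstanding credit-card debt).
top10 = [17.9, 13.3, 12.4, 11.4, 10.8, 4.5, 3.8, 2.6, 2.4, 2.3]

print(round(sum(top10), 1))                  # 81.4
squares = sum(s ** 2 for s in top10)
print(round(squares, 2))                     # 950.16
residual = 100 - sum(top10)                  # 18.6, treated as one firm (an upper bound)
print(round(squares + residual ** 2, 2))     # 1296.12 -- still below the 1,500 cutoff
```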
Well, there may be examples of anticompetitive or otherwise unlawful behavior, but by its own accounting, the CFPB has not found high levels of concentration or market failure.
Economists at the FTC don’t much rely on structural thresholds these days—not for actual competition analysis—even as HHIs might serve as quick and dirty preliminary signals and might be usefully cited before judges or juries in arguing a case. But to the extent that they are useful at all, they don’t support the CFPB’s having “found” a highly concentrated market or any issuer’s ability to exert market power. Not one iota.
Director Chopra knows that.
The spotlight tells us that “small issuers offer lower rates,” and then that their “median APRs are significantly lower than the largest institutions’ rates.” CFPB says its survey data, gathered on 643 credit cards from 156 issuers, indicates a considerable variation in the reported purchase APR, with a “spread between the largest (top 25) and small issuers across credit tiers (of) between eight to 10 percentage points.”
I suppose that could be useful consumer information. Certainly, the fact that terms vary considerably should be useful to those consumers who do not know it—perhaps a significant subset of consumers, even if it's a minority. Maybe that should be the leading message? A direct pointer to low rates and favorable terms, or to how to find them (and identify them as such), might be even more helpful.
As one might guess, the underlying details can be complicated. That’s not to say that high-level findings, clearly articulated, are not valuable. They can be. Depending on the audience, they might be far more valuable than the underlying research. But there’s an art to finding the nuggets of information gold, and another to clear communication.
Consider, for example, that they are identifying issuers, which each issue multiple cards. Pointing to issuers might be useful to consumers, who might note that an issuer is, e.g., Citibank, Capital One, or the Bank of Missouri, and might do well to seek out issuers that offer favorable terms. But if the real issue is the APR (or the APR plus the other terms), the average or median APR of an issuer’s cards might be less important than the APR of a given card available to a given consumer (or given set of consumers). And a range of readily available APRs (however contingent) might be more useful than the large/small issuer divide.
Also, based on the CFPB’s own numbers, the list of issuers reporting “at least one product” (one card or more) with a maximum APR of more than 30% includes nine issuers from among the top 25 and six smaller issuers. So, while the large/small distinction may be useful, somehow, it’s not exactly a clean split.
Among the reasons it's complicated: that's the maximum APR for one card and, e.g., Citibank doesn't just issue one card. Some cards charge different interest rates for, e.g., purchases and cash advances, and not all consumers pay the maximum APR (or, indeed, any). For those who pay their balances in full in a timely fashion each month, the APR may be a curiosity, at best. For those who do not, it's a key bit of information, although not the only one, and it's agreement-specific, not issuer-generic.
Consumers might also want to consider annual fees. The data spotlight tells us that "[i]n general, large institutions were more likely to charge annual fees than small institutions," and to charge higher fees, on average, at that. But it's "27% of large issuers' card products" that are reported to charge an annual fee, which seems to be a way of saying that 73% of their products do not. Perhaps the more important information would be that annual fees, like APRs, vary.
Some of this is obvious to those with a bit of experience with credit, and the CFPB reminds us that most consumers have more than one credit card. Still, not everybody has such experience. And other terms might be relevant, such as the differences between the terms of an “initial offer” and those that follow; various fees, such as late fees; and, of course, rewards, which may come in the form of “miles” or kick-back credits on purchases (or even cash). One of my cards gives me a real-time credit equal to the amount of state sales tax charged to my purchases.
And for consumers with deep subprime, subprime, and near-prime scores—that’s the bottom 18.7% of consumers, according to Experian—the ability to secure a credit card at all may be of primary concern. Try renting a car with cash.
The CFPB's biennial report divides consumers by credit scores into six categories, for most purposes (into five, for some, and into seven, for others), using the 300-850-point scale employed by both the commonly used FICO and VantageScore credit scores.
The data spotlight uses only three categories of consumers: those with “poor,” “good,” and “great” credit scores (619 or less; 620-719; and 720 or greater, respectively).
There are many ways we might divide the 300-850-point scale into categories, and I don’t know how many categories are best (or for whom, or what purpose). I suppose that simplicity is a good thing, all else equal. But one might wonder whether three of them are adequate to capture useful distinctions (for consumers, lenders, lawmakers, or anyone else) among consumers and, if so, where to draw the lines.
The CFPB report and Experian both mark scores of 579 or less as “poor”; and both place scores of 800-or-higher in their top categories. Most consumers fall in between. What about them?
Some good news is that average FICO scores in the United States have been climbing steadily for more than a decade: from 689 in 2010 and 2011 to 715 in 2023. I’d rather have them report median scores, but it’s a large n and what’s very likely a normal(ish) distribution, if truncated, so we’ll work with what we’re given. More good news: according to Experian’s numbers, 71.3% of consumers have scores falling into the good or better range, with about half (49.3%) falling into the “very good” or “exceptional” categories (so maybe the mean and median are not too far apart).
What about the three categories from the CFPB data spotlight? That 715 average falls pretty darn close to the very top of the “good” category in the data spotlight and just five points shy of “great,” even if it’s only a middling “good” score according to Experian, falling below not just “exceptional” credit scores but “very good” ones, as well as the top two (plus) categories in the CFPB’s own report.
Why tell consumers with scores in the 650-669 range that they have “good” credit, when their credit scores are (a) well-below average; (b) not considered “good” by major credit-reporting firms; and (c) in the fourth tranche from the top (of six overall) in the CFPB’s own report? Should they expect average terms?
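To see the mismatch concretely, here's a small sketch comparing the data spotlight's three bins with Experian's published FICO ranges (the cutoffs are those described above; the function names are mine):

```python
def spotlight_tier(score):
    """The data spotlight's three bins, per the boundaries quoted above."""
    if score <= 619:
        return "poor"
    return "good" if score <= 719 else "great"

def experian_tier(score):
    """Experian's published FICO ranges (300-850 scale)."""
    if score <= 579:
        return "poor"
    if score <= 669:
        return "fair"
    if score <= 739:
        return "good"
    return "very good" if score <= 799 else "exceptional"

for s in (660, 715, 720):
    print(s, spotlight_tier(s), experian_tier(s))
# 660: "good" per the spotlight, but only "fair" per Experian
# 715: "good" in both, yet just five points shy of the spotlight's "great"
# 720: "great" per the spotlight, still a middling "good" per Experian
```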
A hallmark of good consumer guidance—of good agency guidance, generally—is consistency between the underlying research and the accessible distillation. Sometimes the question of consistency is plain, and sometimes it's a judgment call. If this move from five, six, or seven categories to three is a judgment call, it seems a poor one. Telling consumers that large, well-known issuers do not necessarily offer the best terms could be useful consumer information, ideally coupled with other information. Telling low-score consumers that they have good credit ratings—not so much.
As presented, it seems overly simplified, obscuring differences across consumers, cards, and issuers, and tilting toward a “big is bad” line central to this administration’s competition policy. A crude implication of the report seems to be that varied terms and profits signal market power and, perhaps worse (as a consequence or as an inference), anticompetitive conduct.
First, the credit-card data spotlight touts some misleading numbers.
Second, the data spotlight and the underlying report provide (at best) an odd view of concentration in the industry, and the data spotlight’s conjecture about a “lack of competition” seems just rubbish.
Third, like the FTC’s comments to NIST in support of expanded march-in rights (the subject of my most recent Agency Roundup), it’s too much cant and not enough information. More signal, less noise, please.
Fourth, they know better.
And fifth (through nth), enough with “big is bad.” It might be, sometimes, but it needn’t be, and bigness might also offer countervailing advantages. For example, I don’t want artisanal-complex molecule drugs purchased on Etsy, thanks; and I don’t really want my own personal social network (it’s so hurtful when I don’t get any “likes” from my “friends” and my friends are me).
Do I want to deal with large banks? Well, large enough. And, for credit purposes, I’ll shop around.
The post Apple Fined at the 11th Hour Before the DMA Enters into Force appeared first on Truth on the Market.
The timing of the fine, and its seemingly arbitrary amount, are both curious. Announced as being the result of a four-year investigation, the decision amounts to an exclusionary-abuse turned exploitative-conduct case, and is underpinned by a flimsy theory of harm in a market that has seen exponential growth over the past decade. The fact that the Commission fined Apple for conduct that would be banned per se just two days later raises questions as to why the DMA was necessary and whether the Commission has faith in the new law's effectiveness.
The timing of this fine is somewhat puzzling, as Apple would have been precluded from including anti-steering provisions in its contracts with developers following the DMA’s entry into force on 6 March (i.e., two days after the fine). Moreover, the €1.8 billion total, while significant, doesn’t appear to follow from any objective methodology. In fact, the fine would have been closer to €40 million had a rather arbitrarily determined “lump sum” not been added to “ensure the overall fine imposed is sufficiently deterrent.”
It's unclear why such deterrence would be needed, given that the DMA allows the Commission to impose fines of up to 10% of a company's total worldwide turnover for infringements, and up to 20% for repeated infringements. Further, such a haphazardly imposed sum is a first in EU competition law, and is therefore likely to be overturned on appeal.
So, if not deterrence, then what does the Commission hope to achieve? Perhaps it is using competition law to punish Apple for what it sees as a lackluster compliance plan with the DMA (Apple recently published its DMA compliance plan, which was criticized by some of the company’s usual opponents). Or maybe it would just like to punish Apple—full stop.
The Commission and the EU have made no bones about their hostility to “big tech” these past five years, and have used every opportunity to antagonize Apple in particular. This could be part of the broader strategy to “discipline” big tech and to “level down” gatekeepers that have “too much power.” Or maybe this is a grudge match fueled by a regulator with a bad track record of enforcement litigation against Apple; perhaps the Commission wanted one last shot at the company before the DMA entered into force.
Or maybe this is simply a case the Commission has had in the pipeline for a long time, and it decided to pull the trigger right before the DMA's entry into force because it thought Apple would be unlikely to appeal a fine over conduct remedies with which it has to comply anyway. After all, €1.8 billion makes headlines, and it also lines the Commission's pockets. What's not to love? If that was the strategy, it should be noted that Apple has already said that it will appeal.
The substance of the case is dubious, at best. It would be interesting to see where the Commission has found evidence of harm in a digital-streaming music market that has grown exponentially. Indeed, in just eight years, the digital-streaming music market has gone from 25 million subscribers to almost 160 million—a roughly 27% annual growth rate. As Herbert Hovenkamp recently pointed out in an interview for the Financial Times, this is not the sort of market that screams "anticompetitive harm," even if some complementors, like Spotify, would like to get a better deal (because of course they would).
To a significant extent, [enforcers] are grappling with the wrong things. Which industries should the antitrust authorities pursue? You look for industries that are characterised by slow growth, oligopoly, rigid market shares, not very big increases in productivity — industries of poor performance. You try to make those industries perform better. With Big Tech, we’re looking at probably the most productive part of the economy.
In this supposedly anti-competitive environment, which has persisted for years, Spotify, based in Sweden, has been the biggest winner. Today, Spotify is dominant in the music-streaming market—with a 56% share, double its closest competitor—all the while paying Apple nothing.
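As a quick arithmetic check on the subscriber-growth figure cited a moment ago (assuming eight years and the 25 million-to-160 million path):

```python
# Compound annual growth rate implied by the subscriber figures above.
start, end, years = 25e6, 160e6, 8
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")   # 26.1% -- roughly the 27% annual rate cited
```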
It's worth going over how the App Store works. Apple charges for the use of its proprietary software and the iOS ecosystem (as well as for access to a customer base with which it has built considerable goodwill over the years) through a commission on its in-app payment system (IAP). Apple also charges a commission on paid apps. But most app downloads (86%) are free, meaning that Apple charges nothing. This arrangement can be sustained because Apple cross-subsidizes those free downloads by charging a commission on in-app purchases and paid downloads.
Spotify has effectively circumvented paying this fee because users don’t pay to download the app and because there are no in-app purchases. Users instead subscribe to Spotify through their web browser. Indeed, the fact that app developers have this power to either bypass the fee or to negotiate better terms may be considered evidence that the DMA’s “gatekeeper” designation is not as powerful as the Commission argues that it is. If an app offers a sufficiently cool service or product, users will follow that app to wherever they need to go and help the company to “win” the negotiation.
Today, Spotify is 2.3 times bigger than Apple’s own Apple Music competitor in the UK; 1.6 times bigger in France; and 1.9 times bigger in Germany. Deezer, a French music-streaming service, is roughly the same size as Apple Music in France. Given this, and given the general growth of the sector, it is difficult to argue convincingly that Apple has used its dominant position in the (extremely narrowly defined) “market for the distribution of music streaming apps to iOS users” to exclude rivals. (For more on gerrymandering in antitrust markets, see here.)
This may have been why the Commission switched its appraisal of the impermissible conduct under scrutiny here from the initial “exclusionary” to “exploitative conduct.” Under Article 102 of the Treaty on the Functioning of the European Union (TFEU), exclusionary conduct indirectly harms consumers by excluding rivals. By contrast, exploitative conduct directly harms consumers. Traditionally, the Commission has prioritized exclusionary-abuse cases, as it was understood that this type of conduct was more harmful in the long run (but see here).
As I have pointed out above, an exclusionary-abuse case without evidence of exclusion, and with plenty of evidence of competitors thriving, would be a hard sell. According to the Commission, Apple harmed consumers because it obstructed them from taking informed and effective decisions on where and how to purchase music-streaming subscriptions for use on their devices. As a result, they paid more and had fewer options. The Commission found that:
Apple’s anti-steering provisions led to non-monetary harm in the form of a degraded user experience: iOS users either had to engage in a cumbersome search before they found their way to relevant offers outside the app, or they never subscribed to any service because they did not find the right one on their own.
But were consumers really precluded from making an informed decision? It doesn’t seem overly hard to find out about Spotify and its membership fees. Users can’t subscribe through the app on iOS, but they can use the browser on their phone (see for yourself). Depending on how precipitously one lowers the bar for the average consumer, just about any intervention might be justified, in principle (see also here). This then inevitably leads to paternalism and overenforcement, and facilitates rent-seeking by those who—like Spotify—would use the antitrust laws out of convenience.
Incidentally, these non-pricing harms are the same ones that the Commission argued warranted bumping up the fine from €40 million to €1.8 billion because, in its view:
Such lump sum fine was necessary in this case because a significant part of the harm caused by the infringement consists of non-monetary harm, which cannot be properly accounted for under the revenue-based methodology as set out in the Commission’s 2006 Guidelines on Fines.
In other words, a more-or-less arbitrary €1.8 billion is appended to a theory of harm that must posit an unlikely level of incompetence from the average iOS user to be even minimally plausible as a theory of anticompetitive harm. Let's see what the European General Court finds once it hears the appeal that Apple has already announced it will file.
The decision also raises an interesting question about the relationship between EU competition law and the DMA. Anti-steering provisions such as the one at stake here are prohibited by the DMA. The decision thus confirms that there is a continuum between antitrust investigations and DMA obligations, despite the Commission consistently denying this link (for the opposite view, see here).
This also raises a broader question about the DMA’s necessity, which should give pause to those countries looking to mimic it. If DMA obligations can be imposed through competition law, why is the DMA necessary? Adopting and then enforcing DMA-style regulation is anything but costless. It costs money to adopt, it costs money to enforce, and it can result in unintended consequences and chill pro-competitive conduct in ways that impose additional costs on society (see here, here, and here).
Furthermore, if the Commission is so confident that the DMA will be effective, why has it used competition law to impose a fine for conduct that will become a legal obligation in the EU in just two days?
The post The Whole Wide World of Government appeared first on Truth on the Market.
Once upon a time (July 9, 2021, to be precise), President Joe Biden issued an executive order on "Promoting Competition in the American Economy," which declared that "a whole-of-government approach is necessary to address overconcentration, monopolization, and unfair competition in the American economy."
It was a big deal and a hot mess. It was also the birth of a mantra, chanted hither and yon among the agencies. It was touted most recently in the Federal Trade Commission’s (FTC) comments to the National Institute of Standards and Technology (NIST) in support of expanded march-in rights. The U.S. Justice Department (DOJ) went so far as to celebrate the order’s one-year anniversary.
“Whole-of-government approach,” which has its own Wikipedia page, seems like the sort of thing that could be a good idea or a terrible one, depending on what it means (supposing that it does mean something specific). It’s often trotted out in favor of interagency cooperation, which certainly can be a good thing, even if it’s nothing new.
Of course, the FTC is supposed to be an independent agency headed by a bipartisan commission. That doesn't mean it can't cooperate with other federal agencies, but its enabling statute stipulates that independence and that "[n]ot more than three of the Commissioners shall be members of the same political party." (Although it does not promise that the president will nominate, or that the Senate will confirm, commissioners from more than one party, so sometimes—today, for instance—it's more of a partisan affair and seems a bit less independent.)
Moreover, executive agencies (independent or not) are created, empowered, funded, and overseen by Congress, which enacts laws and has the power of the purse.
Surely, a whole-of-government approach shouldn’t mean that the Constitution’s balance of powers ought to be cast aside so that the whole (of) government just means, say, the executive branch, much less any one unelected appointee running (solo or with fellow commissioners) an agency of greater or lesser independence.
Which brings us to a Feb. 22 interim staff report from the U.S. House Judiciary Committee. The report’s title (“Abuse of Power, Waste of Resources, and Fear: What Internal Documents and Testimony from Career Employees Show About the FTC under Chair Lina Khan”) is a bit of a spoiler. There seems to be some friction between the committee and the commission, as also evidenced by—among many other examples—this letter from committee leadership a little over a year ago.
Unsurprisingly, the interim staff report is a political document, but it’s also thoroughly—some might say depressingly—documented, and well-studded with quotations from FTC management and staff. I don’t see any of my old emails quoted, and for that I am grateful.
Here’s my own spoiler alert: most of the quoted material has a familiar ring to it, and most of the criticisms seem to me to be warranted, even if it’s not the report that I would have written, and not the style in which I’d have written it.
One might wonder about selection of staff and management views, of course. The report’s legitimate focus on areas of concern also shouldn’t obscure the fact that there remain experienced people doing familiar—indeed, important—work at both federal antitrust agencies. It’s not all neo-Brandeisian power plays all the time. Still, a great deal has not been business as usual; this is not the testimony of some select malcontent; and there are real issues there.
Most of the material seems familiar, but not all of it. I cannot pretend to have seen everything in my last year-and-a-half at the FTC, and I didn’t talk to everyone, even if I talked quite a bit. I don’t doubt the quoted material, but I didn’t personally observe a “culture of fear,” even if I observed no small amount of frustration.
Going through some of the report's findings one by one:
Yep. A substantial portion of that consolidation of authority in the chair's office was officially enshrined in changes to the FTC's rules of practice. See, for example, the section on administrative-law recommenders in my post here, which also covered changes to the rules on rulemaking, as well as the dissenting statement from then-Commissioners Christine Wilson and Noah Phillips.
Post-change, FTC rulemaking would still be overseen by a presiding officer, but it would now be one chosen by the chair, not by an independent administrative law judge. The interim staff report rightly cites additional dissents from Commissioners Wilson and Phillips (here and here) on omnibus resolutions that were adopted on strict 3-2 party-line votes. These, among other things, gave the chair the ability to authorize issuance of compulsory process—more broadly, to initiate and oversee investigations—in many instances that had previously required a vote of the bipartisan commission.
Check. The interim staff report says that career staff were “silenced internally and externally.” Certainly, that happened. There was the well-publicized (if eventually moderated) moratorium on outside speaking (at conferences, etc.) by FTC staff. Not this or that staffer, but the whole darn staff (including me, at the time). It’s embarrassing when you’re told to pull out of a conference panel discussion with, say, an hour-and-a-half to go. It should be embarrassing to the agency, as well. And it’s super-embarrassing to be told that a draft report (or two) cannot be shared with sitting FTC commissioners.
Many people inside the building were not happy. See pages 32-36 of the interim staff report, discussing the Federal Employee Viewpoint Surveys, which are independently conducted by the Office of Personnel Management. Still, as I wrote before on the FTC's "Theater of Listening," and allowing that I can be a bit obtuse from time to time, I didn't feel intimidated or fearful.
Remember the rule changes on rulemaking? Wilson and Phillips pointed out that the changes eliminated the requirement of an expert staff report. They also eliminated Bureau of Economics reviews of preliminary staff reports, diminishing both the transparency of the rulemaking process and the opportunity for independent input (and see me here and ICLE here on pp. 48-49).
It’s not just about the process. It’s about the collective expertise of a trained and experienced staff. With due and considerable respect to some of the fantastic appointees I’ve seen over the years, it’s that cumulative and collective expertise and experience that makes an expert agency expert, in anything.
Chair Lina Khan is highly intelligent and, as far as I know, believes in what she’s doing. She was duly nominated to the commission and, after her Senate confirmation, duly (if somewhat irregularly) named chair nearly immediately. The chair has always had some agenda-setting and hiring authority—a sort of first among equals. But not this much. And process, priors, prejudgment, and the foundations of her antitrust views aside, why place someone who is not even five years out of law school at the head of an important government agency? Someone who has neither staffed nor run an enforcement investigation? Or, as the interim staff report notes, tried a case?
That’s on President Biden. Let’s hope that no present or future president does the same at the Defense Department, State Department, Treasury Department, or the DOJ.
There have been self-inflicted wounds too. Consider, as a baseline, the FTC chiefs of staff appointed before Chair Khan’s tenure. Joe Simons, an FTC veteran, appointed a chief of staff with considerable agency experience and excellent contacts throughout the agency. Maureen Ohlhausen, his immediate predecessor and another FTC veteran, also appointed someone who knew the agency well and who’d worked in both of the FTC’s enforcement bureaus. Edith Ramirez, Jon Leibowitz, Bill Kovacic, and so on all sought help from those who knew the proper business of the agency, its resources, its processes, and its people.
Chair Khan's first chief of staff was someone unqualified for most staff positions in the Bureau of Competition (no antitrust experience, no law degree), the Bureau of Consumer Protection (still no law degree), or the Bureau of Economics (no economics degree), and who had spent only a short stint working at the FTC for Commissioner Rohit Chopra, where she was known for antagonizing FTC staff. That sort of appointment is a way to shoot oneself in the foot.
Khan’s first chief technologist was someone who, like the chief of staff, had worked for Commissioner Chopra, but had no law degree, no economics degree, and no technology degree. To be fair, that chief technologist had worked on tech issues of one sort or another. But hardcore technical expertise of a sort that FTC staff couldn’t deliver on their own? No, sorry. No such expertise. So much the worse if such a person also had a sub rosa “appointment” intervening between the Office of Policy Planning (and its experienced acting director and staff) and the Chair’s Office.
That’s no way to run a railroad. No wonder so many fled (see me here, Bloomberg here). Others hunkered down and hid.
The post Will the FTC Scupper the Kroger/Albertsons Merger? appeared first on Truth on the Market.
The FTC's administrative process could drag on for an extended period, assuming an administrative law judge's decision, an appeal to the full commission, and a possible appeal of the FTC's determination to a federal court of appeals. The threat of this prolonged review period could cause the parties to drop the deal or, in the alternative, impose substantial delay-related costs on the companies (and also consumers).
The FTC alleges likely harm to consumers in a “supermarket market” that excludes club stores (such as BJ’s and Costco), specialty grocery stores (Trader Joe’s and Aldi), and online retailers (Amazon and others). The merging parties argue, to the contrary, that the transaction will enhance their offerings and make them more effective competitors against other mega-retailers.
Based on a nontraditional FTC Act Section 5 theory, the commission also claims harm to unionized Kroger and Albertsons workers in the form of reduced bargaining power. The FTC neglects to mention that this posited bargaining power shift could benefit consumers in the form of lower prices. Moreover, the exercise of additional bargaining power, which often is output-enhancing, is generally far different from the exercise of additional market power, which reduces output.
A reviewing court would likely find the FTC’s labor story hard to square with the sort of monopsony-power theory that is cognizable under federal antitrust law (see here). (Relatedly, for a brief critique of claims that there is substantial monopsony power in U.S. labor markets, see here.) Moreover, the notion that there should be a separate market of unionized labor suppliers in this field is peculiar to say the least. As Geoff Manne stated in a Feb. 28 tweet:
I believe most grocery workers come from/move to non-grocery jobs, to say nothing of non-union jobs. (Union jobs are only something like 4% of wholesale/retail jobs nationwide). The notion that non-union jobs don’t compete for labor with union jobs is virtually impossible.
— Geoffrey Manne (@geoffmanne) February 28, 2024
In short, mark me unconvinced by this latest FTC foray into mergerland. The labor-market theory appears to be a nonstarter. The non-labor supermarket consumer-harm theory conceivably might be plausible, but we need hard facts, and I am very skeptical. (In particular, note that a California federal court judge “twice agreed to dismiss . . . [a private] [consumers] lawsuit [challenging the merger], ruling in December [2023] that the consumers did not show how they would be harmed by the merger.”)
I am not skeptical, however, about the reality that FTC administrative litigation and its attendant delays, if they occur as proposed, will impose major costs, including likely harm to business efficiency and, quite possibly, to consumer welfare. Kroger’s response to the FTC’s announcement it would challenge the Kroger/Albertsons merger is credible:
Contrary to the FTC’s statements, blocking Kroger’s merger with Albertsons Companies will actually harm the very people the FTC purports to serve: America’s consumers and workers.
The post NetChoice, the Supreme Court, and the State Action Doctrine appeared first on Truth on the Market.
In the oral arguments in this week’s NetChoice cases, several questions from Justices Clarence Thomas and Samuel Alito suggested that they believed social-media companies engaged in “censorship,” conflating the right of private actors to set rules for their property with government oppression. This is an abuse of language, and completely inconsistent with Supreme Court precedent that differentiates between state and private action.
This is well-worn ground. In Manhattan Cmty. Access Corp. v. Halleck, the Court (including both Thomas and Alito) was crystal clear about the distinction between government and private actors, emphasizing that the state-action doctrine was fundamental to understanding the First Amendment’s protections. Just a short survey of quotes from the case makes abundantly clear that the First Amendment only applies to government actors:
The Free Speech Clause of the First Amendment constrains governmental actors and protects private actors. To draw the line between governmental and private, this Court applies what is known as the state-action doctrine.
Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law … abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law ….” § 1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech.
In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty.
When the government provides a forum for speech (known as a public forum), the government may be constrained by the First Amendment, meaning that the government ordinarily may not exclude speech or speakers from the forum on the basis of viewpoint, or sometimes even on the basis of content…
By contrast, when a private entity provides a forum for speech, the private entity is not ordinarily constrained by the First Amendment because the private entity is not a state actor. The private entity may thus exercise editorial discretion over the speech and speakers in the forum.
Providing some kind of forum for speech is not an activity that only governmental entities have traditionally performed. Therefore, a private entity who provides a forum for speech is not transformed by that fact alone into a state actor. After all, private property owners and private lessees often open their property for speech. Grocery stores put up community bulletin boards. Comedy clubs host open mic nights…
In short, merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.
If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether…. The Constitution does not disable private property owners and private lessees from exercising editorial discretion over speech and speakers on their property.
It is sometimes said that the bigger the government, the smaller the individual. Consistent with the text of the Constitution, the state-action doctrine enforces a critical boundary between the government and the individual, and thereby protects a robust sphere of individual liberty. Expanding the state-action doctrine beyond its traditional boundaries would expand governmental control while restricting individual liberty and private enterprise.
Thus, it is quite perplexing to see the embedded assumptions that Justices Thomas and Alito appear to be making in questions like these:
JUSTICE THOMAS: So I’m just trying to get more of a – more specificity as to what the speech is in this case. They are censoring, as far as I can tell, and I don’t know of any protected – speech interests in censoring other speech, but perhaps there is something else.
JUSTICE THOMAS: Mr. Clement, if the government did what your clients are doing, would that be government speech?
JUSTICE THOMAS: can you give me one example of a case in which we have said the First Amendment protects the right to censor?
JUSTICE ALITO: There’s a lot of new terminology bouncing around in these cases, and just out of curiosity, one of them is “content moderation.” Could you define that for me?

JUSTICE ALITO: Is it — is it anything more than a euphemism for censorship? Let me just ask you this. If somebody in 1917 was prosecuted and thrown in jail for opposing U.S. participation in World War I, was that content moderation?
JUSTICE ALITO: Well, I mean, the particular word that you use matters only to the extent that some may want to resist the Orwellian temptation to recategorize offensive conduct in seemingly bland terms. But, anyway, thank you.
JUSTICE THOMAS: The – Mr. Clement said the difference is that if the government does it, it is censoring. If a private party does it, it is – I forget – content moderation. These euphemisms bypass me sometimes. But – or elude me. The – do you agree with that distinction?
Put simply, the state-action doctrine explains the difference between censorship (of the concerning governmental sort) and editorial discretion (ordinary private “censorship”), which, in the case of social-media companies, is often called content moderation.
Social-media companies can kick you off their platform or restrict your ability to post, but that’s about it. They can’t put you in jail. However much social media is the “modern public square,” it remains private property, and they have the right to exercise editorial discretion. The only thing Orwellian is to conflate this obvious distinction. The euphemism would be saying that you are in “Facebook jail” or some other nonsense that obfuscates that you were found in violation of the terms of service you agreed to abide by when using the platform.
As Justice Brett Kavanaugh put it:
JUSTICE KAVANAUGH: Just pick up on the word “censorship” because I think it’s being used in lots of different ways. So, when the government censors, when the government excludes speech from the public square, that is obviously a violation of the First Amendment.
When a private individual or private entity makes decisions about what to include and what to exclude, that’s protected generally editorial discretion, even though you could view the private entity’s decision to exclude something as “private censorship.”
JUSTICE KAVANAUGH: When I think of “Orwellian,” I think of the state, not the private sector, not private individuals. Maybe people have different conceptions of “Orwellian,” but the state taking over media, like in some other countries. And in Tornillo, we made clear, the Court made clear, that we don’t want to be that – that country, that we have a different model here and have since the beginning, and we don’t want the state interfering with these private choices.
This is why I think (although I am admittedly biased, as a co-author) that the International Center for Law & Economics’ (ICLE) amicus brief in these cases is especially relevant. The approach we took was not only to discuss the economic reasons that multisided platforms like social-media companies have to engage in content moderation, but to show that a “common-carriage” regulatory regime for social media would be functionally the same as the “company town” state-action theory rejected in Halleck. As we put it:
The challenged Florida and Texas laws treat social-media platforms essentially as company towns. But social-media platforms simply do not demonstrate the requisite characteristics sufficient to treat them as company towns whose moderation decisions are subject to court review for viewpoint discrimination. Instead, consistent with their economic function, they are private actors with their own rights to editorial discretion protected from government interference.
While one might be rightfully concerned about how social-media companies exercise their editorial discretion, that doesn’t mean you can restrict their rights in a way that would be consistent with the First Amendment, any more than you can use government force to suppress speech you don’t like. The answer for bad uses of editorial discretion comes from the marketplace of ideas itself, where social-media users and advertisers can exit the platform if they don’t like its content-moderation policies or want to be associated with its reputation.
As the Court put it in Miami Herald Publishing Co. v. Tornillo:
The power of a privately owned newspaper to advance its own political, social, and economic views is bounded by only two factors: first, the acceptance of a sufficient number of readers—and hence advertisers —to assure financial success; and, second, the journalistic integrity of its editors and publishers.
In sum, to directly answer Justices Thomas and Alito, social-media companies really can’t “censor” in ways that the government can. The best they can do is exclude those who violate the rules they set up for using their property. Constitutionally, the only limitation on the editorial discretion of these companies comes from the marketplace of ideas itself.
The post FCC’s Digital-Discrimination Rules: An Open Invitation to Flood the Field with Schlock appeared first on Truth on the Market.
This has the hallmarks of a significant case that will almost certainly involve the U.S. Supreme Court’s emerging “major questions” doctrine, and likely will also be affected by the Court’s forthcoming decisions in Loper Bright and Relentless. But lurking in the background is the spirit of LabMD.
Section 60506 of the Infrastructure Investment and Jobs Act (IIJA) required the FCC to adopt rules “preventing digital discrimination of access based on income level, race, ethnicity, color, religion, or national origin.” The law also required the FCC and the U.S. attorney general to “ensure that Federal policies promote equal access to robust broadband internet access service by prohibiting deployment discrimination.” [emphasis added]
Under this reading, one could be forgiven for concluding that Congress intended the administration to focus narrowly on preventing digital discrimination in deployment policies and practices.
Indeed, soon after the IIJA was enacted, FCC Chair Jessica Rosenworcel announced the formation of a digital-discrimination task force. In the announcement, she indicated “final rules to facilitate equal access to broadband service that prevents digital discrimination and promotes equal access to robust broadband internet access” would be accomplished by “prohibiting deployment discrimination.”
But, as they say in the theater, a funny thing happened on the way to the forum. Instead of focusing on deployment discrimination, the FCC approved sweeping digital-discrimination rules that cover nearly every aspect of broadband service, including speeds, capacities, data caps, credit checks, marketing, and advertising, as well as pricing and discounts.
The FCC’s digital-discrimination order (¶ 105) explicitly includes pricing within the scope of the agency’s rules. Even so, the commission assured the public that this inclusion is “not an attempt to institute rate regulation.” Instead, it claims the agency is merely attempting to facilitate “the equal opportunity to subscribe to an offered service that provides comparable quality of service ‘for comparable terms and conditions’” [emphasis provided by FCC].
The order goes on to argue that pricing is a key term—if not the key term—that consumers consider when choosing a broadband service. As such, the FCC claims authority under the order to “determine whether prices are ‘comparable’ within the meaning of the equal access definition.”
To be fair, most reasonable people would agree that it would be egregious for a provider to intentionally charge a low-income household more for service than a higher-income household for the same service. A disparate-intent standard would readily address such allegations. But the FCC has gone a step further. In addition to a disparate-intent standard, the FCC has adopted a disparate-effects approach.
Under a disparate-effects approach, even if a provider doesn’t intend to discriminate based on income or some other protected class, they could be liable for discrimination if a policy results in different outcomes for protected classes, as described in the FCC’s rules (¶ 39):
[W]here evidence of a statistical disparity is shown to support a complaint of disparate impact, liability is properly limited where (1) the challenged policy or practice is shown to cause the disparity complained about, and (2) business owners are permitted to explain the valid interests served by the challenged policy or practice.
Broadly speaking, this sets up a three-step process to investigate and enforce allegations of discrimination:
- A complainant presents evidence of a statistical disparity affecting a protected class;
- The challenged policy or practice must be shown to have caused that disparity; and
- The accused provider must justify the policy by demonstrating the valid interests it serves, subject to the rules’ “technical and economic feasibility” standards.
These last two steps amount to costly and time-consuming litigation, with massive discovery served on accused providers, a bevy of experts on each side, and “technical and economic feasibility” standards stacked against the accused.
But the first step—the step that kicks off the costly and time-consuming quasi-judicial administrative proceeding—is trivially easy. It’s especially easy when income is considered a protected class. In an International Center for Law & Economics (ICLE) issue brief, we concluded:
Because … other factors are correlated with income level—and with other protected characteristics—applying an effects-based statistical analysis is likely to produce a false positive concluding the presence of digital discrimination, even when there was an explicit effort to avoid such discrimination. This is a version of Nobel laureate Ronald Coase’s well-known quote: “If you torture the data long enough, it will confess.”
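To see how easily such a false positive can arise, consider a toy simulation. Everything in it is hypothetical: prices are set entirely by the number of competing providers, never by income, yet because competition happens to correlate with neighborhood income, a naive comparison across income groups still “finds” a disparity.

```python
# A toy illustration of the false-positive problem (all numbers hypothetical).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(60_000, 15_000, n)  # household income, in dollars

# Wealthier areas happen to attract more competing providers.
competitors = np.clip(np.round(income / 30_000 + rng.normal(0, 0.5, n)), 1, 4)

# Monthly price is driven ONLY by competition, never by income.
price = 90 - 10 * competitors + rng.normal(0, 2, n)

low, high = income < 45_000, income > 75_000
gap = price[low].mean() - price[high].mean()
print(f"Low-income areas pay ${gap:.2f}/month more on average")
# The gap is positive, so an effects-based statistical test would flag
# "discrimination" even though income never entered the pricing rule.
```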
Nor is the concern merely theoretical. In fact, the FCC’s report and order cites several reports claiming to have found statistical disparities in broadband deployment and pricing. In particular, the FCC seven times cites a report by The Markup claiming to show that—within certain selected localities—AT&T, Verizon, EarthLink, and CenturyLink offered slower connection speeds to lower-income and nonwhite areas for the same price that higher speeds were offered to other parts of the cities surveyed. The authors argued:
By failing to price according to service speed, these companies are demanding some customers pay dramatically higher unit prices for advertised download speed than others.
Without saying so directly, The Markup’s authors imply that these observations amount to discrimination. But to say that there’s been some pushback on The Markup’s data and methodology would be a huge understatement. Even the authors themselves admit that their approach and data could not produce reliable statistical tests:
We purposely did not run statistical tests with p-values because, as advised by statisticians we consulted, we can’t assume independence between addresses’ offers, an assumption required for Student’s t-tests, Chi-squared tests, and z-tests.
Perhaps one reason why the authors couldn’t produce any statistical tests is that there’s actually little variation in the pricing. I downloaded the data for Portland, Oregon—which includes only CenturyLink and Earthlink (the latter is a contract carrier for CenturyLink)—and summarized it by ZIP code. For each provider, there was no variation at all from ZIP code to ZIP code.
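For readers who want to replicate that sort of check, the summary takes only a few lines. The sketch below assumes the address-level offers have been saved locally as a CSV; the file name and the column names (“provider,” “zipcode,” “price”) are hypothetical stand-ins for whatever schema The Markup’s published dataset actually uses.

```python
# A minimal sketch of summarizing address-level broadband offers by
# provider and ZIP code (file and column names are hypothetical).
import pandas as pd

offers = pd.read_csv("portland_offers.csv")

# For each provider, how much does the quoted monthly price vary by ZIP code?
summary = (
    offers.groupby(["provider", "zipcode"])["price"]
          .agg(["count", "mean", "std"])
          .reset_index()
)
print(summary)

# Sanity check: how many distinct prices does each provider quote citywide?
# A value of 1 means no variation at all, and hence nothing for a
# disparity statistic to detect.
print(offers.groupby("provider")["price"].nunique())
```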
With such unreliable data, as well as an unreliable approach, it would be reasonable to conclude that The Markup’s study should be ignored. But under the FCC’s digital-discrimination rules, the agency can use even unreliable studies as a first step to open a full-blown investigation. The FCC says that it “will evaluate all data relevant to a claim of digital discrimination of access on a case-by-case basis, including all Commission and external data sources and studies.” (¶ 167)
Even worse, under the FCC’s expansive definition of “equal opportunity to subscribe” to mean “an offered service that provides comparable quality of service ‘for comparable terms and conditions,’” providers could be accused of discrimination in pricing if the prices do not reflect differences in available speeds. They could also be accused of discrimination in deployment if some parts of town have different ranges of available speeds than other parts of town. It’s an open invitation to the spaghetti approach to litigation—throw a bunch of allegations against the wall and see what sticks.
The California Community Foundation and Digital Equity LA Coalition’s 2022 report on disparities in advertised broadband pricing has been credited with the Los Angeles City Council’s approval of the nation’s first city-level digital-discrimination policy, as well as a pending California State Assembly bill to codify the FCC’s digital-discrimination rules into state law.
The report’s authors claimed to show that Charter’s broadband service (Spectrum) is more expensive and slower in high-poverty neighborhoods in Los Angeles County, while wealthier neighborhoods in the county are offered higher speeds at lower prices. But as with The Markup report, the approach and data provided are unreliable.
For starters, the data sample covers only 165 addresses in an area with nearly 3.4 million households. Even a simple yes/no survey would require almost 400 observations to produce the standard levels of statistical significance and margin of error (the arithmetic is sketched below). But aside from the sample-size problem, the LA study does not demonstrate—even on its own terms—income discrimination or discrimination against a protected class. Instead, the report’s data appear to show that competition is a key factor driving differences in prices throughout the region: the prices reported for 500 Mbps download speeds, for example, track the presence or absence of competing providers in a given neighborhood.
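The sample-size arithmetic referenced above is standard. For a yes/no (proportion) survey, the required sample is n = z²p(1 − p)/e²; plugging in the conventional values (95% confidence, a ±5 percentage-point margin of error, and the most conservative proportion, p = 0.5) yields the figure of almost 400:

```python
# Required sample size for a simple yes/no survey, n = z^2 * p * (1 - p) / e^2.
z = 1.96   # z-score for a 95% confidence level
p = 0.5    # most conservative assumed proportion
e = 0.05   # +/-5 percentage-point margin of error

n = (z**2 * p * (1 - p)) / e**2
print(round(n))  # ~384 observations needed, versus the study's 165 addresses
```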
Again, with such unreliable data and an unreliable approach, it would be reasonable to conclude that the Los Angeles study should be ignored. But under the FCC’s digital-discrimination rules, the agency can use even unreliable studies as a first step to open a full-blown investigation.
It’s been argued that one reason for the variation in Spectrum prices throughout the region may be related to promotions, as noted in NCTA’s comments:
Charter also extends limited promotional offers based on various market dynamics, including competitive actions and seasonal trends (such as back-to-school).
It could be argued that offering promotions is a reasonable competitive response to rival providers’ offers. But under the FCC’s broad definition of “covered services,” even promotions are subject to digital-discrimination scrutiny. Moreover, it’s not clear that competition from rivals would provide a safe harbor under the FCC’s onerous “technical and economic feasibility” provisions.
With its promise to consider all “external data sources and studies” alleging digital discrimination, the FCC has sent out an open invitation to flood the field with schlocky studies to trigger an investigation. Anyone who’s been in the world of regulation long enough knows that, in some cases, the process can be the punishment, and settlements can be shakedowns. The Federal Trade Commission’s (FTC) treatment of LabMD is perhaps one of the most extreme examples:
Since the early 2000s, the FTC has brought charges against more than 150 companies alleging they had bad security or privacy practices. LabMD was one of them, when its computer system was compromised by professional hackers in 2008. The FTC claimed that LabMD’s failure to adequately protect customer data was an “unfair” business practice.
Challenging the FTC can get very expensive and the agency used the threat of litigation to secure settlements from dozens of companies. It then used those settlements to convince everyone else that those settlements constituted binding law and enforceable security standards.
Because no one ever forced the FTC to defend what it was doing in court, the FTC’s assertion of legal authority became a self-fulfilling prophecy. LabMD, however, chose to challenge the FTC. The fight drove LabMD out of business, but public interest law firm Cause of Action and lawyers at Ropes & Gray took the case on a pro bono basis.
Under the FCC’s digital-discrimination rules, every internet service provider or organization involved with providing or facilitating broadband service runs the risk of being a LabMD. This could have been avoided if only the FCC had exercised some regulatory humility and limited the scope of its rules to intentional discrimination in deployment.
In LabMD, the 11th U.S. Circuit Court of Appeals ruled that the FTC’s approach violated the basic legal principle that the government can’t punish someone for conduct that the government hasn’t previously explained is problematic. The FCC may face a similar challenge with respect to its digital-discrimination enforcement.
The post The FTC Should Not Enact a Deceptive or Unfair Marketing Earnings-Claims Rule appeared first on Truth on the Market.
[The Deceptive or Unfair ANPRM was aimed at] challenging bogus money-making claims used to lure consumers, workers, and prospective entrepreneurs into risky business ventures that often turn into dead-end debt traps. If finalized, a rule in this area would allow the Commission to recover redress for defrauded consumers, and seek steep penalties against the multilevel marketers, for-profit colleges, “gig economy” platforms, and other bad actors who prey on people’s hopes for economic advancement.
The FTC has not yet proposed a final rule in this matter.
A just-released Mercatus Center policy brief by my colleague Tracy Miller concludes that the FTC should not issue a new Magnuson-Moss rule (pursuant to 15 U.S.C. § 57a) directed at deceptive or unfair marketing earnings claims (footnote references omitted):
The FTC has a long history of enforcement actions—in the form of adjudication—against firms that make unfair and deceptive earnings claims. It also has provided extensive guidance concerning practices that it considers unfair and deceptive. While a rule could make it easier to seek and obtain monetary relief, rulemaking, especially given the commission’s current membership, can result in more onerous rules that inhibit innovation and entrepreneurship in firms’ communications about earnings opportunities. Rather than completing the proposed rulemaking process, the FTC would be better off addressing deceptive claims by combining its penalty offense authority with continued adjudication.
Miller’s recommendation is sound. The legal and policy reasons that militate against enactment of an FTC earnings-claim rule are summarized below.
Prior to April 2021, the FTC could proceed directly against a private party for equitable monetary relief under its injunctive authority, found in Section 13(b) of the FTC Act. The U.S. Supreme Court, however, held in AMG Capital Management LLC v. FTC that Section 13(b) does not authorize the commission to obtain court-ordered monetary relief (such as restitution or disgorgement). As such, the FTC was denied an important vehicle that it could use to deter (and sometimes extract money settlements against) specific conduct it deemed illegal—including deceptive or unfair earnings claims. (I discussed the future of FTC equitable monetary relief following AMG in a 2021 Truth on the Market commentary.)
The FTC, nevertheless, still retains two potential monetary-enforcement powers that it might employ to discourage or prosecute problematic earnings claims—its penalty-offense authority and its Magnuson-Moss rulemaking authority.
The FTC’s penalty-offense authority (POA) (Section 5(m)(1)(B) of the FTC Act, 15 U.S.C. §45(m)(1)(B)) (see here) authorizes the FTC to seek civil penalties against a company that acted unfairly or deceptively. The commission can obtain POA penalties if it proves that:
- the practice at issue was previously determined to be unfair or deceptive in a final litigated FTC administrative order; and
- the company had actual knowledge of that prior determination when it engaged in the practice.
In order to trigger this authority, the commission can send companies a “notice of penalty offenses.” This notice is a document listing certain types of conduct that the commission has determined, in one or more final litigated administrative orders (not consent orders), to be unfair or deceptive in violation of the FTC Act. Companies are sent a notice to ensure that they understand the law, and that they are deterred from breaking it.
Firms that receive this notice and nevertheless engage in prohibited practices can face civil penalties of up to $50,120 per violation. (As required by federal statute, the FTC adjusts the amounts of its civil-penalty maximums for inflation every January.) Because the maximum penalty can be assessed for each day that the firm violates a rule, the penalty can greatly exceed a firm’s gains from unfair or deceptive claims. This may be necessary to provide optimal deterrence when the likelihood of detecting wrongdoing is small. (This point was made in a 2021 law review article by former FTC Commissioner Rohit Chopra and current FTC Bureau of Consumer Protection Director Samuel Levine.)
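The deterrence arithmetic is straightforward. In the sketch below, the per-violation maximum comes from the text above; the duration, gain, and detection probability are purely hypothetical illustrations:

```python
# Per-day accumulation of the statutory maximum (duration is hypothetical).
max_penalty_per_violation = 50_120
days_in_violation = 90
print(max_penalty_per_violation * days_in_violation)  # $4,510,800 in exposure

# Classic optimal-deterrence logic: with detection probability q, the
# expected penalty q * F deters only if it exceeds the firm's gain G,
# so the penalty must satisfy F >= G / q.
G = 500_000   # hypothetical gain from the deceptive claims
q = 0.10      # hypothetical probability the conduct is detected
print(G / q)  # a $5,000,000 penalty is needed to offset a $500,000 gain
```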
In October 2021, the FTC announced that it had sent a POA notice informing more than 1,100 businesses that promoted money-making ventures that they would face civil penalties if they deceived or misled consumers about potential earnings.
As the FTC explained, the notice outlined a number of practices that the FTC had determined to be unfair or deceptive in 12 prior administrative cases. Broadly, the cases found that it was unlawful to make false, misleading, or deceptive representations concerning the profits or earnings that may be anticipated by a participant in a money-making opportunity. This includes, for example, representations that participants will make a profit, or that represented profits are typical.
The notice also described seven other practices used by sellers or marketers of money-making opportunities that the FTC had determined involved deceptive representations (e.g., falsely telling consumers they do not need experience to earn income or that they must act immediately to participate).
As Miller’s earnings-claims brief notes, the FTC’s POA has been used with some success recently, although there are questions about the degree of its effectiveness in challenging deceptive earnings claims (footnote references omitted):
In 2022, the FTC assessed a penalty of $1.7 million in a settlement with WealthPress based on allegations that the company made “outlandish and false claims” deceiving consumers about its investment advisory services. Recently the FTC has issued notices of penalty offenses concerning money-making opportunities and a variety of other topics.
There is some uncertainty about the efficacy of using the penalty offense authority in earnings claims cases. If the authority is challenged in a particular case, the courts would be likely to rule against the FTC if it is too difficult to demonstrate that the current claim is sufficiently like conduct in the prior proceeding(s), which forms the basis of the FTC’s claim. The penalty offense authority includes strong due process protections for the defendants, such as the requirement that the parties must have had actual knowledge of the commission’s prior determination that a specific practice similar to the one they engaged in was unlawful. Nevertheless, it has been used effectively in several cases. Because civil penalties are available, and firms are notified in advance of the offenses that are subject to penalty, firms are more likely to comply, and the need is reduced “to bring enforcement actions for similar conduct over and over again.” In the early years after the commission gained this authority, most firms that received notice complied voluntarily.
An earnings-claim rule, which would be promulgated pursuant to the Magnuson-Moss Act, should not be adopted unless it is welfare-superior to POA enforcement. It is not.
In a December 2023 Truth on the Market commentary on FTC rulemaking, I summarized key problems that beset Mag-Moss proceedings (hyperlinks omitted):
[R]ulemakings . . . are resource-intensive, and may take years to come to fruition. With respect to consumer protection [former FTC Bureau of Consumer Protection Director] Jessica Rich explains that, despite 2021 FTC procedural changes to “streamline” Magnuson-Moss (Mag-Moss) rulemaking under Section 18 of the FTC Act, “the hurdles remain high” to the enactment of Magnuson-Moss rules. . . .
Specifically, Rich explains that Mag-Moss initiatives must still follow cumbersome statutory steps prior to enactment. Significantly, the FTC:
- Must seek public comment on a draft rule and hold public hearings if requested;
- Must have “reason to believe” targeted practices are prevalent (that requires hard evidence, not just assertions); and
- Must publish a final rule setting forth a cost-benefit regulatory analysis that must also demonstrate why the rule was chosen over alternatives.
Also, judicial review of a Mag-Moss rule is far more exacting than under the APA’s [Administrative Procedure Act’s] requirements (the relatively lenient “arbitrary and capricious” standard). A court may, of course, choose to strike down a poorly justified Mag-Moss rule under the relatively lenient APA “arbitrary and capricious standard.” But even if a Mag-Moss rule survives APA review, the FTC may still lose in court. Under Mag-Moss, a court may direct the FTC to consider additional submissions, may set aside the rule if it is not supported by “substantial evidence,” and may “set aside the rule if FTC’s limits on rebuttal or cross examination precluded disclosure of material facts.”
Finally, and perhaps most significantly, a reviewing court may decide that the FTC has done an inadequate job of demonstrating that a Mag-Moss rule would be cost-beneficial.
These considerations strongly militate against the enactment of an earnings-claims rule on opportunity-cost grounds. The substantial resources devoted to such a rule would achieve a far higher return in economic surplus if devoted to hardcore fraud, which imposes major harm on consumers and is subject to minimal error costs in application.
Unlike hardcore-fraud enforcement, the application of an earnings-claims rule would entail substantial error costs and raise First Amendment concerns to boot. As Miller’s earnings-claims brief notes, citing scholarly critiques of the Earnings Claims ANPRM (footnotes omitted):
Opponents of an earnings claim rule express concern that it “will be too rigid in regulating first amendment protected commercial and non-commercial speech.” The commission argues that the rules may make it easier to determine whether a particular earnings claim is deceptive. But clarifying the rules may also make them more rigid, hindering the ability of some businesses to communicate the unique features of the opportunities they are offering. [[I]n particular, as noted by a former FTC Bureau of Consumer Protection Director, specific restrictions on how an earnings claim may be stated could interfere with the communication process, since restrictive rules likely interfere with communicating idiosyncratic aspects of a business or investment opportunity.] In its ANPRM the commission seems to disparage the use of testimonials, lifestyle claims, or claims about earnings that are atypical, even if such claims are accompanied by disclaimers.
More specifically, future case-specific First Amendment concerns posed by a rigid earnings-claims rule would be grave, even if the rule passed initial judicial muster. The First Amendment sharply limits government’s power to impose content-based restrictions on speech. See Reed v. Town of Gilbert, 576 U.S. 155, 163 (2015). Such restrictions are disfavored and “presumptively invalid.” See R.A.V. v. City of St. Paul, 505 U.S. 377, 382 (1992). Moreover, to the extent an earnings-claim rule were applied to restrain non-commercial speech (for example, discussion of earnings claims in informational newsletters and webinars), it would be subject to judicial strict scrutiny and would be presumed unconstitutional “absent a need to further a state interest of the highest order.” Smith v. Daily Mail Publ’g Co., 443 U.S. 97, 103 (1979).
Even to the extent that an earnings-claims rule applied primarily or almost always to commercial speech (business advertising), its invocation would confront substantial First Amendment hurdles. It would be subject to intermediate scrutiny, which, under Central Hudson Gas & Electric Corp. v. Public Service Commission, 447 U.S. 557 (1980), assesses speech restrictions under four exacting criteria:
- whether the speech concerns lawful activity and is not misleading;
- whether the asserted governmental interest is substantial;
- whether the regulation directly advances that interest; and
- whether the regulation is no more extensive than necessary to serve that interest.
A rule that barred or severely limited certain lawful communications regarding actual examples of large earnings, for example, could well be seen to be an overly severe (“more extensive than necessary”) constraint on the ability of businesses to communicate their potential (particularly if serious questions existed as to the materiality of the earnings-claims communications to consumer decision making).
Finally, the FTC would have an extremely difficult time demonstrating that the benefits of an earnings-claims rule would exceed its costs, as required by Mag-Moss. The rule’s interference with businesses’ ability to communicate effectively with consumers, combined with major First Amendment risks posed by its application, could well convince a reviewing judge that the rule would impose enormous costs. The magnitude of those costs might well be seen to vastly outweigh the rule’s speculative benefits, given likely difficulties in showing that specific earnings claims materially affected consumer decision making.
Enactment of an FTC earnings-claims rule is uncalled for. The FTC already has major POA powers (which commendably protect due process rights) to challenge any clearly unfair or deceptive earnings claims on a case-specific basis. The significant resources devoted to an earnings-claims rule would yield far greater economic-welfare gains (consumers’ plus producers’ surplus) if applied instead to attacking hardcore consumer fraud.
Furthermore, the high error costs and First Amendment problems inherent in a one-size-fits-all earnings-claims rule underscore such a rule’s lack of substantive merit and inability to withstand required cost-benefit scrutiny. It follows that the FTC should withdraw the Earnings Claims ANPRM forthwith.
The post From Europe, with Love: Lessons in Regulatory Humility Following the DMA Implementation appeared first on Truth on the Market.
The first lesson regards “regulatory humility.” Designing ex ante regulation to promote competition by applying blanket prohibitions to digital platforms as different as search engines, operating systems, social networks, and streaming platforms (among others) is no simple task. It was foreseeable that these blanket prohibitions could have unintended consequences.
To comply with the DMA, digital platforms will have to adapt their business models, governance, and even their “digital architecture,” which will affect how they provide services and monetize their assets. These changes will be felt not only by the platforms themselves, but also by the services that run on them (whether called “business users” or “complementors”) and by consumers, all of whom will be forced to grapple with new risks or a potential reduction in quality.
This was even predicted. For instance, Giuseppe Colangelo and Oscar Borgogno have explained that:
… such regulatory proposals may ultimately harm consumers. Indeed, by questioning the core of digital platform business models and affecting their governance design, these interventions entrust public authorities with mammoth tasks that could ultimately jeopardize the profitability of app-store ecosystems. They also overlook the differences that may exist between the business models of different platforms, such as Google and Apple’s app stores.
Along the same lines, Lazar Radic has pointed out that:
… there are a range of risks and possible unintended consequences associated with the DMA, such as the privacy dangers of sideloading and interoperability mandates; worsening product quality as a result of blanket bans on self-preferencing; decreased innovation…
Well, some of these consequences are already materializing. Google, for instance, will implement additional consent screens for “linked services.” If users do not provide consent for each service (and that is not easy, given regulations like the General Data Protection Regulation), they could end up receiving a product of inferior quality (e.g., a search for a restaurant not being linked to a reservation option or to a Google Map address) or not receiving valuable recommendations at all (e.g., “what to watch” on YouTube).
Apple has been more explicit about the DMA’s consequences:
The new options for processing payments and downloading apps on iOS open new avenues for malware, fraud and scams, illicit and harmful content, and other privacy and security threats.
Apple has also implemented a “core technology fee” to be paid for apps distributed on alternative app marketplaces. Some have considered this a form of “bypassing” the rules imposed by the DMA, but the regulation included no explicit prohibition of this kind of fee (and it is only reasonable that the owner of a valuable platform would like to receive compensation for access to it).
One developer has complained that “the tech giant [Apple] treats iPhones as its territory.” Well, they are Apple’s territory. Of course, consumers own their smartphones, but when we buy an iPhone we are also signing up for a platform (or platforms) that has rules, security concerns, and maintenance costs. The operating system (iOS), the application store (AppStore), and the native applications that come with an iPhone all have costs that are not necessarily included in the price. Commissions help to cover those costs. Some people, like Spotify’s CEO, consider the fees set by Apple “too high” (“unaffordable”), but unless you want to impose some kind of price regulation (and we already know how those can end), it’s better to leave that to the market.
This is related to a second lesson. The reaction to some of the gatekeepers’ announcements regarding their DMA-compliance plans shows how we could quickly be thrown into a downward spiral in which regulations beget more regulations. Once the first layer of regulations fails to yield the desired results, politicians, consumers, and business users demand more regulation. This leads, in the end, to more heavy-handed rules like the aforementioned price controls or structural separations.
As we learned, however, from the deregulation movement that began in the late 1970s—focused primarily on transportation, telecommunications, energy, and financial services—the regulation of competitive industries has anticompetitive effects: with less entry comes less innovation and higher prices (see, e.g., Richard Posner’s “The Effects of Deregulation on Competition: The Experience of the United States“).
Finally, there is a third lesson: being “late” in the regulatory race could actually be a good thing. Several jurisdictions have been rushing to approve their own DMA-like regulations, with regulators openly boasting about being the first to regulate new products like artificial intelligence (AI). Countries that take their time, however, to study markets, perform proper regulatory impact analysis, and enact a serious notice-and-comment process will be those most able to learn from the experience of other regulators and markets. These regulatory impact analyses should, of course, also consider the possibility that the regulation in question may not be necessary at all, as I think is the case.
All in all, it may be wise to follow the example of South Korea, which has hit the pause button on its proposal to regulate digital markets.
The post Whose Failure Is the Failed Amazon/iRobot Merger? appeared first on Truth on the Market.
Recently, and in anticipation of a negative clearance decision from the Commission, Amazon and iRobot jointly announced they had terminated their acquisition agreement.
The Commission’s position is built on a series of legal and economic fallacies. Amazon has earned its reputation as a valuable retail platform through its varied selection of affordable products. By de-listing, reducing visibility, limiting access, and/or raising the prices of products sold on the Amazon platform, the company would effectively ruin its marketplace and its credibility as a provider of goods, pushing consumers to alternative marketplaces (which, yes, include brick-and-mortar shops).
This strategy would only make sense under an extremely skewed vision of reality in which Amazon is more interested in monopolizing a marginal vacuum-cleaner market than in preserving the goose that lays the company’s golden eggs: its marketplace platform. Further, the notion that Amazon would even be able to monopolize the sale of robot vacuum cleaners (RVCs) in the European Union necessarily ignores the plethora of other venues through which RVCs are sold on the continent, including other online stores, websites, and physical shops. Finally, the Commission’s approach in this case recalls its misguided crusade against vertical integration more generally, which refuses to see the many legitimate and procompetitive reasons that a platform and a seller might choose to merge.
Amazon is often said to occupy an “online superstore market” thanks to its enormous variety of products, fast shipping, and large consumer base. But this classification can serve to obscure the competitive pressure that a company like Amazon faces from other sales channels that are technically not “online superstore markets” (here). For example, despite the Commission’s assertions to the contrary, Amazon’s acquisition of iRobot won’t prevent other RVC companies from reaching consumers, either online or through physical stores. Nor will it prevent rational consumers from choosing different sales channels if they feel they offer better products or services at better prices. Amazon may be an “important” sales channel for RVCs, but it is simply false to suggest it is or will be the only one.
If Amazon were to delist or raise the prices of iRobot’s rival RVCs, consumers could take their business to another online store, like Aliexpress, Mediamarkt, El Corte Inglés online, or Worten online, or to a brick-and-mortar shop like Carrefour, Costco, Fnac, or El Corte Inglés. They also could buy directly from RVC sellers like Cecotec or Roborock, among others. Note that this doesn’t account for the new shops that are guaranteed to mushroom if Amazon were to decide not to offer a strong selection of RVCs on its marketplace.
Similarly, concerns about Amazon self-preferencing by “reducing visibility of rival RVCs in both non-paid (i.e., organic) and paid results (i.e., advertisements) displayed in Amazon’s marketplace” fail to understand the structure of Amazon’s marketplace. If Amazon decided to increase its own products’ visibility over that of better and/or better-priced alternatives, rational consumers are likely to scroll down to find those alternatives (88% of users scroll through two or more pages on Amazon) or to explore other online marketplaces. Why would Amazon trade the quality and credibility of its primary service offering to monopolize a marginal market that, in all likelihood, it could not monopolize without buying out all competing retail channels? (Even then, there would still be a strong potential for new market entrants.)
The Commission’s statement of objections missed this important dimension, and rested on a simple (and simplistic) equation: that, because selling more Amazon RVCs would be better for the company, the acquisition must therefore be anticompetitive, as Amazon would have an incentive to foreclose. But what about Amazon’s stronger incentive not to prioritize inferior products (or, at least, products that its users do not want) in order to preserve the attractiveness of its online marketplace? The same could be said, mutatis mutandis, about “delisting.”
As for denying iRobot’s rivals the opportunity to qualify for “commercially attractive” labels, why would Amazon undercut sales from its own platform, even if these sales come from products that compete with its own label? Amazon has an interest in bolstering its marketplace, not monopolizing the peripheral markets for nails, batteries, smart doorbells or, in this case, RVCs. Where Amazon’s products compete with third parties, Amazon often allows those third-party products to keep the labels.
There are many legitimate and/or procompetitive reasons that Amazon might have for acquiring iRobot. It could have chosen to buy iRobot simply because it truly believes iRobot’s RVCs are the best. Amazon General Counsel David Zapolsky asserted in a company statement that the Amazon team “have always been fans of iRobot’s products, which delight consumers and solve problems in ways that improve their lives.”
Additionally, iRobot (IRBT) has been a pioneer in the robotic-vacuum market and a top seller on Amazon’s marketplace. This is consistent with the deal’s rationale. If Amazon didn’t believe that iRobot made the best RVCs out there, why did it buy iRobot, and not another RVC seller? It certainly has the financial muscle to take its pick.
This turns the logic of anticompetitive self-preferencing on its head in a way that is becoming increasingly difficult for competition authorities to grasp. On this view, self-preferencing is a natural consequence of acquiring the best downstream company in a market, not the last step in an anticompetitive scheme. In other words: self-preferencing is the symptom of a rational business decision, not an indicator of foreclosure.
If this is the case, then Amazon also has an interest in making the purchase worthwhile by improving iRobot’s products. Indeed, Amazon could have wanted to buy iRobot to bring it to greater scale and/or integrate its products into Amazon’s pipeline.
Take the example of Amazon’s acquisition of Ivona in 2013. In just a few years, Amazon scaled that company beyond the founders’ wildest dreams. Ivona went from 13 to 1,000 (highly paid) workers, and served as a basis for Alexa. Alexa, in turn, enabled a range of Amazon products that today’s consumers love, including the Amazon Echo Smart Speaker, Echo Dot, and Tap speakers. As of 2018, 10,000 employees worked on Alexa and Alexa-related products. As of 2023, Amazon had sold more than 500 million Alexa and Alexa-enabled devices.
Could iRobot have been next? Amazon spokesperson Alexandra Miller declared that the company could “offer a company like iRobot the resources to accelerate innovation and invest in critical features while lowering prices for consumers.” This would help iRobot compete in the global marketplace of RVCs and create better products. As Zapolsky reflected on the deal’s termination:
Amazon and iRobot were excited to see what our teams could build together… This outcome will deny consumers faster innovation and more competitive prices, which we’re confident would have made their lives easier and more enjoyable. Mergers and acquisitions like this help companies like iRobot better compete in the global marketplace, particularly against companies, and from countries, that aren’t subject to the same regulatory requirements in fast-moving technology segments like robotics.
This is also consistent with recent findings in the literature examining acquisitions by large tech firms: acquired products are often not killed, but scaled; post-merger industry output demonstrably increases; and the relevant markets remain dynamic post-transaction.
The Commission’s hard-nosed approach to the Amazon/iRobot merger arguably stems from undue hostility to vertical integration across the board. Increasingly, competition authorities see vertical integration as suspect (here). The misguided crusade against self-preferencing is largely to blame for this.
But insofar as the ability to self-preference is one of the primary reasons for firms to vertically integrate (here), removing the option to do so is likely to dampen incentives for vertical mergers and destroy the many benefits that flow from them. Indeed, vertical integration is far from the anticompetitive boogeyman some make it out to be.
One of the main purposes of vertical integration is to transfer technology between firms and reduce the inefficiencies and discoordination that can occur in supply chains. This benefits consumers “through a number of mechanisms that allow for reduced costs and better product quality.” As Alden Abbott has written on the case at hand, “Amazon’s acquisition of iRobot would likely promote efficiencies, raise welfare, and enhance competition.”
Acquisition is a key pathway to exit for entrepreneurs. Many startups are created by founders with the explicit plan of being acquired by a larger tech company. European startups have traditionally looked to Big Tech’s deep pockets as a way to maximize their growth plans (here).
If this avenue is foreclosed, where do startups go from there? How can they scale without such opportunities? Do they all have to go public? According to a recent Financial Times article, the Commission’s response to the Amazon/iRobot deal has faced criticism from the startup community:
Some entrepreneurs are concerned that if Amazon can’t buy a maker of vacuum cleaners, it sends a signal that it will be difficult for Big Tech to buy anything at all — and that might be a blow for their exit strategies and for innovation as a whole.
Further, precluding acquisition by larger tech companies stifles an important exit strategy for founders and, by extension, an important incentive to invest in startups in the first place. In an interview for that same FT article, Stefan Moritz of the lobbying group European Entrepreneurs said:
It’s a bad sign if the EU intervenes so heavily…in the long run nobody will want to be an entrepreneur, many companies will shut down or be bought if they have any remaining valuable assets.
Perhaps this is what the EU really wants? German Member of European Parliament Andreas Schwab has stated that:
It’s good for the economy that startups should not rely on a few Big Tech players but that we push innovative companies with new products to penetrate the market by themselves, thereby diversifying institutional channels.
But it is not good for the economy if founders are discouraged from creating startups because a vital exit strategy is cut off by regulators. Harold Demsetz—one of the most important regulatory economists of the past century—coined the phrase “nirvana fallacy” to critique would-be regulators’ tendency to justify policies on the basis of the discrepancy between the messy, real-world economic circumstances they see and the idealized alternatives they imagine. Wishful thinking, in other words (here).
Schwab’s comments fall into the nirvana fallacy of assuming that, without the possibility of acquisition, startups would be able to grow, scale, and challenge incumbents organically. But it is much more likely that many of these startups would not exist or would fall by the wayside due to an inability to secure funding.
We recently praised the Commission for taking market realities into account when it exempted iMessage from gatekeeper status, despite meeting the Digital Markets Act’s (more-or-less arbitrary) quantitative thresholds. But in the Amazon/iRobot case, the Commission somehow manages to completely ignore those same market realities. The assumption that the Amazon/iRobot merger would lead to a harmful outcome for other suppliers of RVCs was based on flawed premises and an unlikely theory of harm that failed to see the bigger picture.
When firms like Amazon abandon procompetitive deals, the effects ripple through the rest of society, well beyond the comparatively narrow confines of antitrust and the optics of consumer welfare. Jobs, workers, and entrepreneurs are all affected—largely for the worse. While this is not a concern of direct relevance to merger control, those who defend broader conceptions of antitrust’s goals—who, somewhat ironically, tend to include many of those celebrating the abandoned iRobot deal as a victory against “big tech”—should care.
The post DMA: Setting the Goalposts appeared first on Truth on the Market.
By March 7, companies that were designated as “gatekeepers” in September 2023 will be required to meet the obligations of Articles 5, 6, and 7 of the DMA Regulation. With the exception of ByteDance Ltd., the Chinese owner of TikTok, all of the designated companies have, by now, presented compliance proposals. The DMA’s expected beneficiaries (and, arguably, the loudest in favor of its passage) have been disappointed by some of these proposals, and seek more. But should the European Commission grant them what they are asking for?
It bears remembering that the DMA applies to only a handful of mostly U.S. tech companies (a far narrower and more targeted set than initially advertised). It designates companies based on quantitative thresholds, not any analysis of market power (as evidenced by the fact that there are multiple services within nearly every category of “core platform service”). It imposes competition-law remedies drawn from a series of competition-law investigations, not settled case law. It also applies these remedies out of context, and with very limited safeguards (no consideration of value creation, or of what would be best for the ecosystem overall).
For example, the DMA mandates that rivals have access to the infrastructure, features, and functionalities of designated platforms on equal terms to those of the platform owner. It does this ostensibly to promote ambiguous notions of “fairness” and “contestability,” which some say opens the door to discretionary enforcement, moving targets, and shifting goalposts. If true, the European Commission can do effectively whatever it wants to achieve its aims.
Given this, there is a clear need for goalposts to ascertain whether enforcement is delivering benefits to consumers. There will even be some at the Commission who recognize that, in the absence of limiting principles, they will be subject to rent-seeking, requests for protectionism, and unending lobbying demands for ever-greater concessions. This is not an ideal outcome. Commission officials are also cognizant of the risk that interventions will have unexpected and undesirable consequences, particularly where complainants’ arguments are based on specious theories of harm.
Until now, however, the common refrain has been that the DMA establishes a “clear list of ‘dos and don’ts’.” Companies know the “rules of the game” and the only question is whether or not they choose to follow them. In other words, what matters for the Commission is whether companies follow the letter of the law. The goalposts were presumably set from the start.
Not so many years ago, European Commissioner for Competition Margrethe Vestager said that “[w]e don’t need a new rule of fairness in our system. Because fair markets are just what competition is about.” In other words, at least under competition law, a fair competitive process is the policy objective, not some arbitrary notion of what is fair to whoever complains the most or is the most politically favored. Competition law protects the process of competition, not “fair” outcomes for rivals (as the latter increases the risk of regulatory capture, which some have dubbed “swampetition”).
This distinction between a fair process and fair outcomes is even more important under the DMA. Commission officials have stated that the DMA is about creating the opportunity for platforms’ business users and rivals to take advantage of the DMA’s access provisions. But the coming months will test whether they stick to their guns.
If the goal is to have a fair process—rather than particular outcomes—then successful enforcement (and compliance) does not require that rivals actually take advantage of the opportunities offered by the DMA, nor that users choose to switch to their services. After all, if users choose not to switch to European alternatives, it could simply be because users still deem the services of the designated companies as superior, on the merits. It’s a possibility that Commission officials have to consider.
For example, there may be a plethora of new cross-platform mobile-app stores available post-DMA, but consumers may find their selection of apps unappealing; app developers may find it uneconomical to port their apps and maintain separate distribution channels; and some of these app stores may never approach the scale of the stores most closely associated with the device operating system. It could be that users have chosen their devices specifically because of the ecosystem, and don’t actually care for the services of other ecosystems.
It would be hard to convince the average citizen that a regulation ostensibly about fairness means forcing users to adopt services they don’t want. It will therefore be incumbent on the Commission to disentangle causation and correlation, better understand evolving market dynamics, and dismiss complainants’ calls for ever more interventionist enforcement where it’s clearly inconsistent with market demand. As former European Commission Director General for Competition Johannes Laitenberger has said: “far from being a ‘weasel word,’ used to justify voluntaristically desired outcomes, fairness only rings true if it is understood as a call to rigour, coherence and consistency.”
It will also be important for the Commission to consider what level of compliance costs (for both firms and users) are acceptable. Indeed, there are concerns that, even if the DMA leads to a more competitive landscape, this may come at the cost of more expensive goods and a degraded online experience for European users. Such an outcome could hardly be described as “success.”
For a while, industry analysts have been raising concerns that DMA compliance will lead to a worse experience for users (pointing to the consent-fatigue failures of the GDPR and the likely proliferation of choice screens).
Some of these concerns have seemingly been vindicated by the compliance plans announced by gatekeepers. Apple has repeatedly warned users that alternative app stores will provide a less-secure environment. Some have argued that Google’s (forced) decision to unbundle its services increases friction for users. Likewise, the consent forms that are central to Meta’s compliance plan may be a waste of time for the majority of users who are not especially privacy-conscious, or who simply prefer more personalized advertising.
The upshot is that, even if the DMA leads to more competition on the market, this will not be a victory for the law unless European users are ultimately more satisfied with their online experience.
Given these important uncertainties, it will be essential for enforcers and politicians to clearly signal that the rules of the game have been set, and to measure the DMA’s success free from preconceived notions of how the competitive process should unfold. Unfortunately, early pronouncements suggest this is unlikely to be the case.
Potentially sensing that the DMA would not lead to the outcomes that were initially promised, Commission officials have recently raised the possibility of going beyond the “clear list” of obligations and requiring gatekeepers to achieve particular market outcomes. On a trip to Silicon Valley, Vestager expressed her expectation that designated companies will “work with market participants to test if this can bring real changes in the digital marketplace.” But what is a “real change,” other than a particular outcome?
Likewise, Commissioner Thierry Breton—one of the DMA’s architects—said that “[i]f the proposed solutions are not good enough, we will not hesitate to take strong action.” But when will the solutions ever be “good enough,” especially for those rivals who continue to fail in the marketplace? At the very least, this seems to contradict claims that the DMA would be deemed successful if companies complied with its “clear list” of obligations.
MEP Stéphanie Yon-Courtin, who also helped draft the legislation as one of the lead rapporteurs, has even suggested that some of the compliance measures could amount to circumvention, punishable by significant fines.
For those with a legal background, this can all be rather frustrating. Clients don’t like to hear “it depends” as an answer, especially if “it depends” on how favorably competitors react. For a business that has to make investment or engineering decisions on the basis of clear legal rules, it can be maddening.
It remains to be seen how much these warnings from Europe’s most powerful officials can be brushed off as pre-campaigning for the 2024 European elections. Several politicians think “Europe’s battle to rein in Big Tech” will lead to electoral success. But will citizens be impressed if their favorite digital services are dis-integrated and de-personalized? Will regulators be tempted to push even harder, to try and force market outcomes, at the expense of users and overall ecosystem health?
There has been an antitrust revolution brewing in Europe for quite some time, but there’s a risk it will look a bit too much like some older, idealistic, and ultimately misguided economic revolutions. If that day comes, who in Europe will remember how far the goalposts have moved from “the principle of an open market economy with free competition, favouring an efficient allocation of resources,” as laid out in TFEU Article 120?
The post DMA: Setting the Goalposts appeared first on Truth on the Market.
The post Navigating the AI Frontier, Part I appeared first on Truth on the Market.
Over the coming months, we will be delving into the nuances of the proposed text, aiming to illuminate the potential challenges and interpretive dilemmas that lie ahead. This series will serve as a guide to understanding and preparing for the AI Act’s impact, ensuring that stakeholders are well-informed and equipped to adapt to the regulatory challenges on the horizon.
The AI Act was approved unanimously by representatives of EU national governments on Jan. 26 (the approved text is available here). But this is not the end of the legislative process. Most importantly, the European Parliament has yet to approve the law’s final text, with a final vote scheduled for April 10-11. It is generally expected that Parliament will give its approval.
Once the AI Act is enacted, we will still need to wait, likely until the summer of 2026, before it is fully applicable. Some of its provisions will, however, become binding sooner than that. For example, the act’s prohibitions on practices such as using AI systems with “subliminal techniques” will come into force after just six months, and the codes of practice will become effective after nine months (perhaps around Easter 2025). The rules on general-purpose AI models, meanwhile, are expected to take effect about a year after enactment, or roughly the summer of 2025.
For this post, we want to offer some thoughts on potential issues that could arise from how the act defines an “AI system,” which will largely determine the law’s overall scope of application.
As we have written previously, there has been a concern, dating back to the first drafts of the AI Act, that it will not be “at all limited to AI, but would be sweeping legislation covering virtually all software.” The act’s scope is determined primarily by its section defining an “AI system” (Article 3(1)). This definition has undergone various changes, but the end result remains very broad:
‘AI system’ is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
“Varying levels of autonomy” may still be read to include low levels of autonomy. “Inferring” from “input” to generate “content” also could have a very broad reading, covering nearly all software.
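To see how elastic that language is, consider a deliberately mundane sketch of our own (a hypothetical illustration, not anything drawn from the act): a thermostat controller written as plain, human-authored rules. Read literally, it is a “machine-based system,” it operates with some level of autonomy, and it generates outputs (decisions that influence a physical environment) from the input it receives.

```python
# A hypothetical illustration (not from the AI Act): ordinary rule-based
# software that, read literally, generates "decisions that can influence
# physical or virtual environments" from the input it receives.
def thermostat(temp_celsius: float) -> str:
    """Return a heating decision derived from a temperature reading."""
    if temp_celsius < 18.0:
        return "heat_on"   # a "decision" influencing a physical environment
    if temp_celsius > 22.0:
        return "heat_off"
    return "hold"

print(thermostat(16.5))  # prints "heat_on"
```

No one would call this artificial intelligence, which is precisely the point: whether it falls outside the definition turns almost entirely on how words like “infer” and “autonomy” are read.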
Some helpful clarity is provided in the act’s preamble (preambles are used to aid interpretation of EU legislation)—i.e., in Recital 6, which explicitly states that the definition:
should be based on key characteristics of artificial intelligence systems, that distinguish it from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations.
Even here, however, some risks of excessive scope remain. Programmers already widely use AI tools like GitHub Copilot, which generates code, partially automating the programmers’ jobs. Hence, some may argue that the code they create includes rules that are not “defined solely by natural persons.”
Moreover, the recital characterizes the capacity to “infer” in a way that could be interpreted broadly, including software that few would characterize as “AI.” The recital attempts to clarify that “[t]he capacity of an AI system to infer goes beyond basic data processing, enabl[ing] learning, reasoning or modelling.” The concepts of “learning,” “reasoning,” and “modeling” are, however, all contestable. Some interpretations of those concepts—especially older interpretations—could be applied to what most today would see as ordinary software.
Given this broad definition, there is a palpable risk that traditional software systems such as expert systems, search algorithms, and decision trees all might inadvertently fall under the act’s purview, despite the disclaimer in Recital 6 that the definition “should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations.”
The ambiguity arises from the evolving nature of these technologies and their potential overlap with AI functionalities. Indeed, in one sense, these technologies might be considered under the umbrella of “AI” insofar as they attempt to approximate human learning. But in another sense, they do not. These techniques have been employed for decades and can be, as they long have been, used in ways that do not implicate more recent advances in artificial-intelligence research.
For instance, expert systems (in use since 1965) are designed to employ rule-based processing to mimic human experts’ decision-making abilities. These, it could be argued, infer from inputs in ways that are not entirely dissimilar to AI systems, particularly when they are enhanced with sophisticated logical frameworks that allow for a degree of dynamic response to new information. Similarly, search algorithms—particularly those that employ complex heuristics or optimization techniques to improve search outcomes—might blur the lines between traditional algorithmic processing and AI’s inferential capabilities.
Decision trees (also in use since the 1960s) further complicate this picture. In their simplest form, decision trees are straightforward, rule-based classifiers. When they are used within ensemble learning methods like random forests or boosted trees, however, they contribute to a system’s ability to learn from data and make predictions, edging closer to what might be considered AI.
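The blurred line can be made concrete with a minimal sketch (our own illustration, written with scikit-learn; the loan-screening scenario and every threshold in it are invented for the example). The hand-written rule below would seem to fall within Recital 6’s carve-out for rules “defined solely by natural persons,” while the decision tree that learns equivalent thresholds from data arguably “infers” within the meaning of Article 3(1), even though both map the same inputs to the same outputs.

```python
# Hypothetical contrast between rules "defined solely by natural persons"
# and rules induced from data. All names and thresholds are invented.
from sklearn.tree import DecisionTreeClassifier

# (a) Hand-written rule: every threshold chosen by a human.
def approve_by_hand(income: float, debt: float) -> bool:
    return income > 50_000 and debt / income < 0.4

# (b) A decision tree that learns equivalent thresholds from examples.
X = [[60_000, 10_000], [30_000, 20_000], [80_000, 50_000], [55_000, 5_000]]
y = [approve_by_hand(i, d) for i, d in X]  # labels produced by the human rule

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Both systems produce the same kind of output for the same input, yet
# only (b) derived its rules from data rather than from a person.
print(approve_by_hand(70_000, 15_000))            # True
print(bool(tree.predict([[70_000, 15_000]])[0]))  # also True
```

Swap the single tree for a random forest trained on thousands of records and the system is unambiguously “learning”; strip it back to the hand-coded function and it is unambiguously not. The act’s definition must somehow draw a line along that continuum.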
Thus, although some of these techniques might be considered AI, they are, in many cases, components of software that have been used for quite a long time without cause for concern. Regulatory focus on such software techniques is almost certain to miss the mark and be either underinclusive or overinclusive. This is because regulation at that level is, in a sense, an attempt to regulate the use of math.
That’s why it would seem that a much better approach to address risks arising from the uses of AI (or, indeed, any computer systems) is via legal regimes focused on—and well-tested in dealing with—specific harms (e.g., copyright law) associated with the use of these systems. The EU’s alternative approach to regulating AI technology faces the heavy burden of demonstrating that existing laws are insufficient to handle such harms. We are skeptical that EU legislators have satisfied that burden.
Nonetheless, assuming the EU maintains its current course, the interpretive ambiguities surrounding the AI Act raise substantial concerns for software developers. Without greater clarity, the law potentially subjects a wide array of software systems to its regulatory framework, regardless of whether or not they employ learning algorithms. This uncertainty threatens to cast a shadow over the entire software industry, potentially requiring developers of even traditional software to navigate the AI Act’s compliance landscape.
Such a scenario would inevitably inflate compliance costs, as developers might need to conduct detailed analyses to determine whether their systems—at a granular level—fall within the act’s scope, even when using well-established, non-learning-based techniques. This not only burdens developers with additional regulatory overhead, but also risks stifling innovation through the imposition of undue constraints on the development and deployment of software solutions.
Inevitably, there is going to be some degree of uncertainty as to the AI Act’s scope. This uncertainty could be partially alleviated by, e.g., explicitly limiting the act to specific techniques that currently are broadly considered to be “AI” (even “machine learning” itself is excessively broad).
The AI Act mandates that the European Commission develop guidelines on the application of the law’s definition of an AI system (currently: Article 82a). One can hope that the guidelines will be sufficiently specific to address the concern that “the public is told that a law is meant to cover one sphere of life” (AI), “but it mostly covers something different” (software that few today would consider AI).
The post Navigating the AI Frontier, Part I appeared first on Truth on the Market.
The post March-Right-on-In Rights? appeared first on Truth on the Market.
What are “march-in” rights? In brief, they provide for certain very limited exceptions to patent holders’ (and their licensees’) ability to control the use of their inventions.
Intellectual-property rights (including, but not limited to, patent rights) are supposed to provide incentives to invest, at risk, in the R&D necessary to develop innovative products. Or, as the Patent and Copyright Clause of the U.S. Constitution (Article 1, Section 8, Clause 8) says:
To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.
On this point, we could also look to the Patent Act of 1790; various subsequent developments in IP law, including contemporary patent law (Title 35 of the U.S. Code); the writings of Joseph Schumpeter; a large body of literature; and, once upon a time, the advocacy of the U.S. Justice Department (DOJ) and the Federal Trade Commission (FTC).
Patent and other IP rights do not guarantee market power or economic profits, but when there is demand for IP-protected products and no close substitutes in the market, they do permit supra-competitive pricing. That’s what they are for. The possibility of those profits is the incentive to invest, at risk, in product development in the first place.
IP rights are not limitless, and the Bayh-Dole Act of 1980 authorizes their suspension under certain limited circumstances. When IP-protected products are developed with the support of federal funding, Bayh-Dole permits government agencies to “march in” on patent holders’ IP rights under special conditions outlined in the statute. But in a break from more than 40 years of precedent, and with no clear statutory authorization, NIST’s new draft guidance framework asserts that where:
the price or other terms at which the product is currently offered to the public are not reasonable, agencies may need to further assess whether march-in is warranted.
Current leadership at the FTC appears to think the framework is a very good idea: “The FTC supports NIST’s expansive and flexible approach to march in.” Expansive indeed.
My International Center for Law & Economics (ICLE) colleague Kristian Stout (rightfully) criticizes both the NIST framework and the FTC’s cheerleading. I would also highlight a critical letter to President Joe Biden from, among others, former secretaries of the U.S. Commerce Department, former heads of the U.S. Patent and Trademark Office (USPTO), and former heads of NIST itself, across both Democratic and Republican administrations. The letter states that the newly proposed criteria under which the government can exercise march-in rights “are all problematic.” As, indeed, they are.
In addition to that letter and Kristian’s post, I would also recommend one by my former FTC colleague (and former FTC general counsel) Alden Abbott. And while we’re at it, comments to NIST from intellectual-property experts (here and here) and additional comments from Alden (here).
Indeed, Alden, Kristian, and nearly everyone not pushing the current NIST/White House agenda are right: the Bayh-Dole Act’s authorization (under specific circumstances) of government march-in rights does not contemplate patent holders’ failure to offer a “reasonable price” as a trigger for those rights. It’s sure as hell not written in the statute itself. And, as the letter from former Commerce, USPTO, and NIST officials reminds us:
That price was never meant to be one of the triggers for march-in rights is not in doubt. In 2002, Senators Bayh and Dole – the original authors – made clear that this omission was purposeful. And earlier, in the late 1990s, Congress rejected amendments that would have added price as a fifth trigger. The repeated, failed attempts clearly demonstrate that even proponents of using march-in rights as price controls recognized that only Congress, not the executive branch, has the authority to amend the Bayh-Dole Act and add price as a trigger.
Before looking at the details, one can understand a certain high-level impetus for highly circumscribed march-in rights. If the American people foot the bill for a product’s development, why leave it to a private party to decide whether to manufacture and distribute the product at all? What if there’s a public emergency that might be ameliorated by production? And why, under such circumstances, leave pricing to a private party?
But given the importance of innovation incentives to drug development and competition, it is crucial to consider and delineate the circumstances under which patent rights are supposed to step aside. What counts as a public “need”? How much federal funding? How much relative to private funding? At what stage of development? And when and how (if ever) should the government intervene?
As I said, current FTC leadership supports NIST’s recent frolic and detour into the ancient art of pillaging.
My colleague Kristian was incredulous:
But if NIST takes the FTC’s … contribution seriously, such an expansion could lead to overregulation that would ultimately hurt consumers and destroy the incentives that firms have to develop and commercialize lifesaving medicines.
A few things stand out about the FTC comment. For one, it makes a mockery of the commission’s once-lauded (if now vastly diminished) competition-advocacy program. In its comment to NIST, the FTC rightly notes its experience in health-care competition matters and its interest in the intersection of IP and competition policy.
That’s all fine, as far as it goes. I worked on competition-advocacy matters at the FTC for about 16 years, and I honestly have no idea what the heck they are talking about. Vigorous enforcement of the antitrust laws? Sure, that’s their statutory mission. Disagreement about evidentiary thresholds, burden shifting, etc.? Understandable.
But this is not that.
As for the rest of it: if you can find a competition argument in the FTC’s comments, you should win a pony (or, if not a pony, some prize you actually value). According to the FTC, the Bayh-Dole Act is “a statute designed to safeguard public health needs against patent holders’ private interests.” Perhaps, after a fashion, under certain limited and congressionally specified circumstances.
But patent rights are granted by Congress. And the antitrust laws are not public-health laws, exceptions to IP law, or tools for price regulation. Proper enforcement of IP and antitrust laws can facilitate both innovation and competition, to the benefit of health-care consumers. But IP and competition policy do not guarantee low prices for costly products or convenient supply in emergency circumstances. Bayh-Dole provides for special-case exceptions to established IP and competition policy; and the FTC comment does not even attempt to explain how Bayh-Dole solves a competition problem.
Advocating for “an expansive and flexible approach to march-in rights, including providing that agencies can march in on the basis of high prices” has nothing recognizable to do with established antitrust law or the FTC’s expertise and experience. What is the pro-competition argument for the ad hoc suspension of property rights, or for price regulation?
The FTC suggests that its comment to NIST:
draws on its experience in promoting competition and combatting anticompetitive practices in the pharmaceuticals industry. Lack of competition in pharmaceutical markets can lead to inflated pricing, rendering some lifesaving treatments out of reach for many Americans. Nearly three in 10 Americans report rationing or skipping their medications due to high costs. Contrary to industry claims that high drug prices are necessary to fund research and development (R&D), drug prices often depend more on whether the drug faces competition than the drug’s R&D costs. At the same time, pharmaceutical firms enjoy hundreds of billions of dollars of taxpayer investment in R&D. March-in rights are an essential check to ensure that taxpayer-funded inventions are affordable and accessible to the public.
What’s wrong with that? First, the NIST framework has nothing to do with enforcing the federal antitrust laws to combat anticompetitive mergers and practices in the pharmaceutical industry. As the comments rightly note, the FTC (and DOJ, and state enforcers, and sometimes private plaintiffs) already enforce the antitrust laws in the pharmaceutical sector.
Second, the economic argument is something of an embarrassment: yes, prices are subject to competitive pressures, but IP policy (whether optimal or a kludge) is not an ex post reward for successful investments; it’s an ex ante incentive for firms to make billions of dollars of investments, at risk, in R&D that might or might not lead to successful drug products. Is it the socially optimal reward? We don’t know, partly because we don’t have a neat consensus model of the optimal tradeoffs. But, as explained below, it is eminently clear that Congress has recognized and balanced incentives for both innovation and present competition across a complex set of express statutory provisions in drug and patent law.
Third, yes, federal research funding related to drug development is considerable. On the other hand, private investment in drug development is considerably greater. According to a 2018 Proceedings of the National Academy of Sciences of the United States of America (PNAS) report, National Institutes of Health (NIH) funding contributed to research related to “every one of the 210 new drugs approved by the Food and Drug Administration from 2010-2016,” with roughly $115 billion spent on research over that period. That’s real money, even in Washington.
That said, the same report notes that most of the funding (more than 90% of it) “represents basic research related to the biological targets for drug action rather than the drugs themselves.” Such basic research is important to drug development, but it serves diverse research interests. Studies fitting the PNAS report’s criteria were conducted from 1985-2016, and were not limited to treatments or drug therapies, much less to the specific drugs approved during the period studied.
By contrast, according to the Congressional Budget Office (CBO), R&D spending by pharmaceutical companies on actual drug development (not just research on biological targets somehow related to disease development) was considerably greater: about $83 billion in 2018 alone (for those who don’t like doing sums in their heads, 7 x $83 billion = $581 billion). Moreover, those investments are made at risk; that is, the 210 approved products were hardly the only ones investigated. As the CBO report also notes:
Developing new drugs is a costly and uncertain process, and many potential drugs never make it to market. Only about 12 percent of drugs entering clinical trials are ultimately approved for introduction by the FDA.
Studies of drug-development costs provide varying estimates, partly depending on sample selection. Median per-drug estimates vary according to therapeutic area, from $0.8 billion to $2.8 billion per new drug. Consistent with CBO observations that pharmaceutical-industry R&D investment has risen more than tenfold, adjusted for inflation, from the 1980s to 2019, more recent studies tend to suggest higher average costs (here and here).
Neither the NIST framework nor the FTC comments consider the question of what incentives would be necessary to create the socially desirable level of drug development. Or the costs—including risks to public health—likely to attend frequent and unpredictable suspension of IP protections.
One suspects that no economists were harmed (or even mildly inconvenienced) in drafting the FTC comment. And it’s hard to see a competition argument there, besides an argument for actually enforcing the antitrust laws (which don’t depend on the NIST framework at all).
Bayh-Dole’s statutory language does provide that march-in rights might be triggered when “action is necessary to alleviate health or safety needs” that are not being met by the rights holder, but it establishes a high threshold for such intervention. As the letter from former NIST and IP officials notes:
Previously, the bar for whether something constituted a “health or safety need” was universally recognized to be extremely high. For instance, the government briefly considered invoking march-in rights on Cipro, an antibiotic capable of counteracting anthrax, in the aftermath of 9/11 and the ensuing anthrax scare, when the prospect of a terrorist attack using the deadly pathogen loomed large and necessitated building a stockpile of millions of doses quickly. However, the government was ultimately able to secure sufficient quantities of the drug without resorting to such an extraordinary measure.
The NIST framework isn’t just about drug products, but while we’re on the topic of Cipro, let’s talk about drugs. The development and marketing of pharmaceutical and biological drugs are subject to complex federal laws and regulations, including IP provisions beyond those generally provided under federal patent law. The federal Food, Drug, and Cosmetic Act (FDCA) and its implementing regulations govern drug testing and approval based on safety and effectiveness, and the labeling and marketing of drug products, but the statute does more than that. Under the Hatch-Waxman Act’s amendments to the FDCA and the patent laws, Congress created a complex set of incentives for both generic entry (facilitating immediate competition) and the IP rights needed to encourage investment in new products.
These incentives include, among other things, an abbreviated (and lower-cost) approval pathway for generic drugs (21 U.S.C. §355(j)(1)); a statutory exemption from patent-law limitations on generic-drug studies (35 U.S.C. §271(e)(1)); and a 180-day generic exclusivity period for the first generic applicant (21 U.S.C. §355(j)(5)(B)(iv)) to test an innovator firm’s claimed IP rights.
To further encourage innovation, beyond the incentives provided by patent protection, Hatch-Waxman provides for the possibility of patent-term extensions of up to five years to make up for the considerable time and expense required for regulatory approval of a new drug (35 U.S.C. §156); a five-year exclusivity period for new drugs that are “new chemical entities” (21 U.S.C. §355(j)(5)(F)(ii)); a three-year new-clinical-study exclusivity period (21 U.S.C. §355(j)(5)(F)(iii)-(iv)); and a 30-month stay of approval of a generic application if the patent holder sues for infringement (pushing back against the generic applicant’s challenge).
There are other provisions. For example, a couple of years prior to Hatch-Waxman, Congress enacted the Orphan Drug Act, which provides additional incentives (both streamlined approval and seven years of exclusivity) to foster further development of drugs to treat rare diseases. Add to that, among other pieces of legislation, the Biologics Price Competition and Innovation Act of 2009.
In brief, Congress enacted very specific provisions governing the IP rights—and limitations to those rights—available for new drugs. These rights provide considerable incentives to innovators above and beyond those provided under the general patent laws. Congress also recognized the importance of drug-price competition by providing both cover and IP incentives for generic entrants. That is, express statutory provisions provide for a complex balancing of incentives for innovation and competition.
So what about fairness? First, it’s complicated by many factors. Among others, there’s the question of how to balance our interest in new drug development with our interest in vigorous present competition and low prices, and to do that under conditions of uncertainty (which face Congress, as well as pharmaceutical firms and patients). That is, it’s not really an issue of being fair to pharmaceutical companies at all. It’s about the IP policy that best serves society’s interests, including those of both present and future patients.
Second is a question of expertise. I worked at the FTC for some 16 years. I collaborated—off and on, in various ways—with the U.S. Department of Health and Human Services (HHS) and other government agencies. In earlier days, I was also a guest researcher at NIH and a faculty member at a medical school. In these roles, I worked with some very fine people, some of whom are still in government.
But I’d be hard-pressed to argue either that the politically appointed commissioners of the FTC or anyone at NIST has any special expertise in deciding fundamental issues of fairness. By “hard-pressed” I mean “there’s no way I would.” And no, inclusion of the word “unfair” twice in Section 5(a) of the FTC Act doesn’t suggest otherwise. At all.
Congress has created a detailed set of countervailing incentives regarding drug development and competition, and Congress has expressly specified certain limited circumstances under which those incentives might be set aside. But Congress has not expressly included “price” among those circumstances.
An expansive notion of march-in rights is simply at odds with the idea of a limited, special-case stopgap exception to IP policy. If Congress has specified both the general policy and the stopgap, administrative agencies should be cautious and disciplined, not “expansive and flexible,” when it comes to suspending negotiated pricing and the congressionally established complex of IP rights (and limitations).
Given all the detail in the Bayh-Dole statute, and in the IP rights and limitations in the drug and patent statutes, the FTC’s support for “an expansive and flexible approach to march-in rights, including on price” (and, while we’re at it, for expansive and flexible notions of health and safety needs and public needs) is anomalous. Whatever else it is, it’s not competition advocacy, or IP advocacy.
What could possibly go wrong? Well, nothing, if we don’t care about any future investments in drug development or what Congress might have to say about them. But don’t we?
The post March-Right-on-In Rights? appeared first on Truth on the Market.
The post ICLE’s Amicus Briefs on the Future of Online Speech appeared first on Truth on the Market.
The basic premise we have outlined is that online platforms ought to retain the right to engage in the marketplace of ideas by exercising editorial discretion, free from government interference. A free marketplace of ideas best serves both the users of these platforms and society at large.
In December, we filed an amicus brief with the Supreme Court in the NetChoice v. Paxton and Moody v. NetChoice cases, arguing that social-media companies are best positioned to balance the speech interests of their users, and that the First Amendment protects their right to exercise editorial discretion by enforcing their moderation policies. We also argue that the “common carriage” label is inappropriate for social-media platforms, which require users—even before they have created their member profiles—to agree to moderation policies that include restricting speech believed to harm others.
In other words, the online platforms do not hold themselves out to be open to all comers or all speech. Thus, Texas and Florida’s state laws not only violate the First Amendment, but also reduce social-media platforms’ value to users by requiring them to carry “lawful but awful” speech.
Last month, ICLE filed an amicus brief in the Court of Common Pleas of Delaware County, Ohio, in Ohio v. Google, in which we argued that Google Search is not a common carrier and that Google has a First Amendment interest in its own search results. Google does not qualify as a common carrier because it offers individualized answers to users’ queries, which may be based, in part, on their location, search history, and other factors. In other words, online search is not an undifferentiated product like railroad carriage or telephone service.
In fact, as several federal district courts have found, search results are themselves protected by the First Amendment. They constitute speech, as they amount to search engines giving their opinion as to the best answers to various queries. The ordering of such search results—even if it gives preference to Google’s own products and services—is therefore protected editorial discretion. There is no basis to assume that Google’s users are harmed, particularly when they can easily choose to use other general or specialized search engines if they don’t like the integration that Google provides.
Most recently, just this past Friday, ICLE filed an amicus brief in Murthy v. Missouri, in which we argue that the balance that social-media companies strike in exercising editorial discretion to benefit their users is upset when government actors intervene in the marketplace of ideas by coercing such companies into censorship.
Under the First Amendment, government actors may not suppress speech (in this case, speech that the government actors deem “misinformation”), even if the suppression is accomplished by pressuring private actors to do so on their behalf. The government may participate in the marketplace of ideas through counterspeech, but it may not coerce social-media companies into removing lawful speech or speakers, even in the name of combating misinformation. This benefits both the supply side of the marketplace of ideas (i.e., speakers) and the demand side (i.e., listeners), and redounds to society’s benefit at large, as it empowers the people to make democratic decisions.
The uniting factor in each of these briefs is a proper understanding of the digital platforms as multi-sided markets that participate in the marketplace of ideas by exercising editorial discretion to their users’ benefit. As I put it in a previous post on our amicus in the NetChoice cases:
[T]he First Amendment’s protection of the “marketplace of ideas” requires allowing private actors—like social-media companies—to set speech policies for their own private property. Social-media companies are best-placed to balance the speech interests of their users, a process that requires considering both the benefits and harms of various kinds of speech. Moreover, the First Amendment protects their ability to do so, free from government intrusion, even if the intrusion is justified by an attempt to identify social media as common carriers.
If social-media companies are to create a useful product for their users, they must be able to strike a delicate balance between what people want to post and what they want to see and hear. As multisided platforms that rely on advertising revenue, they must also make sure to keep high-value users engaged on the platform. Moderation policies are an attempt to create community rules to strike this balance. This may include limits on otherwise legal speech in ways that are not viewpoint neutral. For instance, to keep users and advertisers, social-media platforms may choose to restrict pro-Nazi speech. But in order to enforce these rules, they need the ability to exclude those who refuse to abide by them. This is private ordering: the ability of private actors to create rules for their own property and to enforce them through technological and legal means.
Similarly, in the Ohio v. Google case, the search engine must be able to exercise editorial discretion in its search results in order to provide the best answers to its users, or risk losing users to competitors and thus becoming less valuable to advertisers. This could include integrating its own products and services into search results. As we put it in our amicus:
Google’s mission is to “organize the world’s information and make it universally accessible and useful.” … Google does this at zero price, otherwise known as free, to its users. This generates billions of dollars of consumer surplus per year for U.S. consumers… This incredible deal for users is possible because Google is what economists call a multisided platform… On one side of the platform, Google provides answers to queries of users. On the other side of the platform, advertisers pay for access to Google’s users, and, by extension, subsidize the user-side consumption of Google’s free services.
In order to maximize the value of its platform, Google must curate the answers it provides in its search results to the benefit of its users, or it risks losing those users to other search engines. This includes both other general search engines and specialized search engines that focus on one segment of online content (like Yelp or Etsy or Amazon). Losing users would mean the platform becomes less valuable to advertisers.
If users don’t find Google’s answers useful, including answers that may preference other Google products, then they can easily leave and use alternative methods of search. Thus, there are real limitations on how much Google can self-preference before the incentives that allowed it to build a successful platform unravel as users and therefore advertisers leave. In fact, it is highly likely that users of Google search want the integration of direct answers and Google products, and Google provides these results to the benefit of its users.
Whether by imposing common-carriage requirements that force social-media companies (or Google Search) to change how they exercise editorial discretion, or by pressuring social-media companies to take down alleged misinformation, government actors violate the First Amendment when they seek to intervene in the marketplace of ideas, and ultimately harm users of those platforms.
The best answer for the future of online speech is found in the First Amendment’s protection of the marketplace of ideas from government intervention. Competition in the idea market requires a hands-off approach. Appeals to preventing “bias,” “unfairness,” or “misinformation” are insufficient to justify departing from established constitutional norms.
The post ICLE’s Amicus Briefs on the Future of Online Speech appeared first on Truth on the Market.
The post Using Bayh-Dole March-in to Set Patent Price Controls: An Assault on American Innovation appeared first on Truth on the Market.
The law does not list the pricing of a license as grounds justifying march-in.
In December 2023, the National Institute of Standards and Technology (NIST) released for public comment draft interagency guidance outlining when the government should exercise its march-in rights, which have never before been utilized. The draft NIST framework makes clear that high price is an appropriate basis for exercising march-in rights.
Recently, Kristian Stout excoriated the Federal Trade Commission’s (FTC) Feb. 6 comment to NIST, which argued that the “march-in” rights provision under the Bayh-Dole Act authorizes the government to impose price controls on patents developed through federally funded research. According to Kristian:
if NIST takes the FTC’s (unexpected, but ultimately unsurprising) contribution seriously, such an expansion could lead to overregulation that would ultimately hurt consumers and destroy the incentives that firms have to develop and commercialize lifesaving medicines.
Stout’s commentary was in harmony with comments conveyed in a recent letter to President Joe Biden by former secretaries of the U.S. Commerce Department, as well as former directors of both NIST and the U.S. Patent and Trademark Office (USPTO) in opposition to the misuse of march-in. As Steve Brachmann explains:
Tracing back to the Clinton Administration, these former government officials note that every U.S. President, including Joseph Biden, previously concluded that the Bayh-Dole Act did not give the Executive Branch proper authority to order the relicensing of patent rights based on commercial pricing.
Also notable is a thorough letter signed by 22 scholars, former judges, and former government officials who are experts in patent law, patent licensing, and innovation policy. The letter explains in detail why the federal government lacks the statutory authority to regulate patent pricing through march-in. The letter concludes that:
The Guidance Framework proposes the addition of “reasonable price” as an unprecedented criterion for exercising the march-in powers specified in § 203 of the Bayh-Dole Act. This is a legally unjustified and unauthorized arrogation of power by NIST. The Bayh-Dole Act does not state in its plain text a congressional authorization for federal agencies to consider “reasonable price” as a criterion for imposing price controls on all Bayh-Dole patented products or services that are commercialized in the marketplace. In addition to lack of authorization in the plain text of § 203, the Guidance Framework’s inclusion of “reasonable price” as a march-in criterion contradicts the function of Bayh-Dole in promoting the commercialization of inventions by patent owners in the marketplace. The NIH has consistently and repeatedly confirmed this lack of statutory authorization in § 203 to impose price controls across bipartisan administrations over several decades in rejecting all march-in petitions seeking to impose price controls.
My Feb. 2 comment submitted to NIST (posted on Feb. 7) concludes that:
[T]he proposed Framework would twist Bayh-Dole and weaken the U.S. intellectual property system. [It] is misguided and will harm market competition, consumer access to new technologies, and our strategic global interests in technology leadership. I strongly urge NIST to withdraw this proposed framework and uphold the Bayh-Dole Act as written. Thank you for the opportunity to comment on this critical issue.
Key excerpts from my comment to NIST (footnote references to hyperlinks omitted) are set forth below:
The Bayh-Dole Act was passed on a bipartisan basis in 1980 to respond to the rampant waste of federally-funded R&D dollars in national labs and large research universities. Congress correctly noted that taxpayer-funded research ended up sitting on the proverbial shelf because licensing was centralized in an opaque federal bureaucracy. Prior to the law’s passage, less than five percent of federally-funded inventions were ever licensed.
Bayh-Dole’s innovation was simple but revolutionary. The law decentralized licensing decisions to university technology transfer offices, which had every incentive to find private sector partners willing to commercialize their research discoveries. This unleashed a flood of commercial activity that created millions of high-paying jobs, thousands of new startups, and more than $1 trillion in economic output. Across critical sectors including the life sciences, agriculture, telecommunications, semiconductors, and more, Bayh-Dole inventions underpin transformative technologies that save lives, benefit consumers, and ensure the United States’ continued technological leadership.
Unfortunately, the proposed framework threatens to dismantle this progress by twisting the triggering provisions for march-in under the Bayh-Dole Act to include price—contrary to congressional intent and without an express grant of that authority.
As the White House notes, this is the first time the federal government has asserted a right to forcibly relicense a Bayh-Dole patent solely on the basis of price. Price is not mentioned once in the legislation, and as the authors of the law noted, march-in was never intended to be a tool to lower the price of a Bayh-Dole invention. Over the past four decades, administrations from both parties have consistently rejected repeated requests to twist Bayh-Dole into a price control mechanism.
The negative repercussions of such a move cannot be overstated.
The new framework undermines the patent system, which underpins innovation in a wide variety of high-tech industries. As the USPTO reports, IP-intensive industries account for 41 percent of U.S. GDP and directly employ a third of the American workforce.
Empirical research consistently demonstrates that patents do not confer monopoly power to inventors, but rather encourage competitors to “design around” a patent — that is, to draw on its basic insight in a way that does not infringe on it. This phenomenon spurs further innovation, resulting in superior consumer welfare gains. Weakening patent protections will slow market competition, keeping prices artificially high and depriving patients and consumers of crucial new products and improvements.
Driving the proposed reinterpretation of Bayh-Dole is the desire for lower drug prices. Setting aside several additional cogent critiques of this position (including that most drugs on the market today are protected by several patents, many of which do not fall under Bayh-Dole), note that the proposed framework applies to all technologies, not just to pharmaceuticals. Bayh-Dole inventions power U.S. leadership in sectors including agriculture, telecommunications, and semiconductors. If federal officials deemed the price of any product derived from federally funded research too high, the government could march in on it.
Not only would the proposed framework reduce market competition in precisely the industries the United States is relying on to maintain strategic dominance; it would also undermine several bipartisan initiatives to leverage government investment and public-private partnerships to promote national security and secure domestic supply chains for critical technologies.
Take, for example, the $52 billion CHIPS and Science Act, which emphasizes “nanotechnology, clean energy, quantum computing, and artificial intelligence.” The proposed framework could impose price controls on any start-up or firm that leverages federally-funded research in these sectors. This will inevitably chill private investment in these technologies.
With the United States in a generational competition with China, we cannot afford to weaken our most innovative technologists. A study from last year discovered that Chinese research institutions lead the world in 37 out of 44 critical technology sectors. Damaging the U.S. tech transfer ecosystem, which currently drives a “virtuous cycle” of investment and reinvestment into our top institutions, would be catastrophic for our national security and technological leadership.
In sum, the proposed Framework would twist Bayh-Dole and weaken the U.S. intellectual property system. It is misguided and will harm market competition, consumer access to new technologies, and our strategic global interests in technology leadership. I strongly urge NIST to withdraw this proposed Framework and uphold the Bayh-Dole Act as written.
The post Using Bayh-Dole March-in to Set Patent Price Controls: An Assault on American Innovation appeared first on Truth on the Market.
The post The FTC’s Misguided Campaign to Expand Bayh-Dole ‘March-In’ Rights appeared first on Truth on the Market.
Enacted in 1980, the Bayh-Dole Act fundamentally altered the landscape of American innovation by allowing universities, small businesses, and nonprofits to own and commercialize patents on inventions that resulted from federally funded research. The legislation has been instrumental in catalyzing the commercialization of research, leading to the development of new drugs, technologies, and industries that have bolstered the U.S. economy and improved global well-being.
“March-in rights” are a provision of the original Bayh-Dole Act—in essence, an exemption from the law’s overall thrust—that allows the federal government to intervene and grant licenses to other parties to use a patented invention in exceptional circumstances where the original patent holder has not made the invention available to the public on reasonable terms, or where public health or safety needs are not being met. The mechanism is intended to ensure that inventions that arise from federally funded research are accessible to the public and serve the public interest.
The FTC, however, advocates that march-in rights be used to intervene when prices for drugs developed from federally funded research are deemed “too high.” Apart from the fact that price controls like those proposed by the FTC never work out well and typically guarantee an undersupply of the goods in question, the proposed march-in expansion would not be limited to pharmaceuticals. The law covers any patented invention that received some federal funding at any point in its development. Thus, it implicates technology in general, biomedical devices, and just about any other patented discovery that relied to any extent on government-funded research.
Bayh-Dole was not designed as a tool for price controls, but as a mechanism to foster innovation and ensure that inventions arising from federal funding reach the public. By securing patent rights for inventors and small businesses, the law created incentives for the private sector to invest in the high-risk process of transforming basic research into marketable products. This incentive to commercialize is especially needed in sectors where development costs are exorbitant and the risk of failure is high. And that framework has proven pivotal in making the United States a global leader in patent-reliant industries generally, and biotechnology and pharmaceuticals, in particular.
Price controls of the sort the FTC advocates would completely undermine the law’s goals, and would almost certainly deter investment in drug commercialization. The prospect of march-in rights being exercised based on drug pricing could inject uncertainty into the drug-development lifecycle, making it less attractive to investors. The development of new drugs is a resource-intensive process, often requiring billions of dollars and taking more than a decade to come to fruition. Investors’ willingness to fund this risky endeavor is predicated on exclusive rights to commercialize successful products (and most drugs in-development are not ultimately successful). Introducing the risk that these rights could be revoked or undermined over pricing concerns would lead to a decrease in available capital for research and development, thereby slowing innovation and limiting the introduction of new treatments.
Further, liberalizing the use of march-in rights will encourage firms in patent-reliant industries to invest more in regulatory gamesmanship, such as by complaining to regulators about their competitors’ pricing strategies. Even if such moves are unsuccessful, this dynamic would become a drag on production and commercialization and ultimately harm consumers.
The potential economic implications of expanded march-in rights extend beyond the pharmaceutical industry. By setting a precedent for government intervention in patent rights based on product pricing, it could discourage private-sector investment in all sectors of innovation that benefit from federal research funding. This shift could stifle American innovation, impacting economic growth and job creation.
Moreover, while it is understandable that consumers would desire lower drug prices, access to innovative new treatments is equally important. The use of march-in rights as a tool for price controls could lead to fewer new drugs entering the market, ultimately harming consumers who depend on medical innovation for improved health outcomes.
The FTC’s preference for price controls should not become federal policy. Federal agencies are simply not well-positioned to adequately second-guess the incredibly complex set of factors that firms must balance in order to commercialize products. Even if one could point to isolated examples of patented drugs that appear to be priced “too high,” there are far too many pieces of data that go into deciding which products to commercialize, and on what terms, for it to be reasonable to expect that federal agencies could outperform the wisdom of markets. Positioning agencies as centralized price setters would mean arbitrarily choosing winners and losers. And ultimately, we should expect that the process will be guided by regulatory capture, leading to outcomes that are even worse than arbitrary.
The Bayh-Dole Act’s legacy is a testament to the power of policy to drive innovation and economic growth. Misinterpreting its intent, or applying its provisions in ways that threaten the delicate balance of incentives that fuel the biopharmaceutical-innovation ecosystem, could have far-reaching negative consequences.
While it is crucial that lifesaving drugs are made affordable, it would serve no one to achieve that outcome by undermining the foundational principles that have made the United States a leader in innovation. Ensuring access to lifesaving medications requires a nuanced understanding of the innovation process and a commitment to fostering an environment in which new and effective treatments can be developed and brought to market. Policymakers should tread carefully, considering the long-term impacts of regulatory changes on innovation, economic growth, and consumer welfare.
The post The FTC’s Misguided Campaign to Expand Bayh-Dole ‘March-In’ Rights appeared first on Truth on the Market.
In recent comments to the Federal Communications Commission (FCC), the International Center for Law & Economics (ICLE) argues that the economics behind early-termination fees (ETFs) tell a different story, one in which both consumers and providers typically benefit.
ETFs are a longstanding practice and are fairly ubiquitous in modern life. One of the first recorded ETFs can be found in a 2,200-year-old rental agreement from the Greek city of Teos, in modern-day Turkey. The agreement specified a steep penalty for a tenant backing out:
If the tenant does not ratify the contract on the day on which he is chosen or on the following day, we shall choose another tenant, and if the bid price is less, he shall owe ten times the difference to the lessors.
Today, ETFs in some form or another apply to a wide range of services, including mortgage loans, gym memberships, hotel reservations, and doctor’s appointments. Cable and satellite subscriptions are no exceptions. That’s because ETFs play a crucial role across the economy. They allow companies to plan effectively by offering discounts or lower rates in exchange for a customer’s commitment to stick around for the length of an agreement.
In most cases, consumers have a choice of entering an agreement with an ETF or without one. This is especially true in the cable and direct broadcast satellite (DBS) industries. In written testimony to the FCC, the NCTA (formerly known as the National Cable & Telecommunications Association) reported that cable plans with ETFs are “always optional.” DirecTV testified that its DBS customers have a choice of plans, either with or without an ETF.
Many customers opt for contracts with an ETF provision, largely because such agreements offer lower prices over the term of the contract, and because these customers expect to stick with the contract for the entire term. In 2020, New America and the Open Technology Institute calculated that the monthly cost of plans with ETFs was about $17 less than that of plans without one. NCTA testified that "the amount many providers charge for ETFs is significantly less than the discount the customer is provided for agreeing to a term contract."
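To see why many consumers take that deal, here is a back-of-the-envelope sketch. The $17 monthly discount comes from the estimate cited above; the contract length, ETF amount, and cancellation probabilities are hypothetical assumptions, not figures from the studies.

```python
# A stylized look at the consumer's choice between an ETF plan and a no-ETF
# plan. Only the ~$17/month discount is from the estimate cited above; the
# 24-month term, $240 fee, and cancellation probabilities are assumptions.

monthly_discount = 17.0  # cited estimate of the ETF-plan discount
term_months = 24         # assumed contract length
etf = 240.0              # assumed early-termination fee

savings_if_completed = monthly_discount * term_months  # $408 over the term

# Expected net benefit of the ETF plan, assuming any early cancellation
# happens at the midpoint (12 months of discount enjoyed, then the fee).
for p in (0.0, 0.25, 0.5):
    expected = (1 - p) * savings_if_completed + p * (monthly_discount * 12 - etf)
    print(f"P(cancel early) = {p:.0%}: expected net benefit = ${expected:+.0f}")
```

Under these assumed numbers, the ETF plan comes out ahead even for a consumer with a 50% chance of cancelling midway through the term, which is consistent with the testimony that the discount typically exceeds the fee.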
ETF critics tend to counter that far too many people fall prey to confusing terms or fail to read the fine print. For example, one provider briefly made national headlines for charging an ETF to a 102-year-old woman who died in the middle of her contract (although the provider quickly apologized and waived the fee). This year, new FCC transparency rules—what Chair Jessica Rosenworcel calls "nutrition labels"—go into effect and should help to eliminate such surprises going forward.
Even so, in many cases, customers are fully aware of the ETF when they enter the agreement. They just don’t want to pay it if they break their contract. One consumer testified to the FCC, “I knew when I signed up for cellular service with Verizon that I was obligated to agree to the early termination fee” but “tried to dispute or reverse the charges.”
Outlawing ETFs altogether could backfire for consumers. Companies depend on long-term commitments and the reduced turnover that ETFs encourage in order to make investment decisions that allow them to keep rates low overall. Banning ETFs will likely lead to higher prices for those consumers who benefit from contracts with the provision.
In essence, ETFs allow cable companies to offer discounts to keep otherwise fickle customers subscribed. Customers gain by committing to a subscription for a minimum term. Ban the fees, and the savings may also disappear.
That's because there's a real cost to customer turnover, or churn. In its latest quarterly report to the U.S. Securities and Exchange Commission, Dish Network reported that it incurs "subscriber-acquisition costs of $1,065 per new DISH TV subscriber." The company also reported that it incurs "significant" costs to retain existing subscribers. These retention costs include upgrading and installing equipment, free programming, and promotional pricing "in exchange for a contractual commitment to receive service for a minimum term."
If consumers can switch providers willy-nilly, these subscriber acquisition and retention costs will skyrocket, with little opportunity to ameliorate these costs via ETFs. Thus, for cable and DBS companies, ETFs help ensure a reliable revenue stream to justify keeping rates low. For many customers, ETF provisions provide discounts that outweigh the future risk of incurring the fee.
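A rough sketch of that churn arithmetic helps. The $1,065 acquisition cost is Dish's reported figure from above; the monthly contribution margin and churn rates are assumptions for illustration, not figures from Dish's filings.

```python
# Why churn matters for pricing: with a constant monthly churn rate, a
# subscriber's expected tenure is 1/churn months. The $1,065 acquisition
# cost is Dish's reported figure; the margin and churn rates are assumed.

acquisition_cost = 1065.0  # Dish's reported cost per new DISH TV subscriber
monthly_margin = 30.0      # assumed contribution margin per subscriber-month

print(f"break-even tenure: {acquisition_cost / monthly_margin:.1f} months")

for churn in (0.01, 0.02, 0.03):
    expected_tenure = 1 / churn  # expected months before the subscriber leaves
    lifetime_margin = expected_tenure * monthly_margin - acquisition_cost
    print(f"churn {churn:.0%}: tenure {expected_tenure:.0f} months, "
          f"expected margin {lifetime_margin:+,.0f} per subscriber")
```

Under these assumed numbers, moving from 2% to 3% monthly churn flips the average subscriber from profitable to loss-making, which is precisely why commitment devices like ETFs let providers hold rates down.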
Early-termination fees let cable providers and subscribers strike a mutually beneficial bargain into which consumers voluntarily choose to enter. Viewed through an economic lens, ETFs look less like traps and more like tools to improve consumer and provider wellbeing. Thus, for consumers who oppose ETFs, rather than a federally imposed ban on the practice, the best solution is the simplest: Choose a contract without an ETF.
The post Are Early-Termination Fees ‘Junk’ Fees? appeared first on Truth on the Market.
But more than half a year later, the notice of inquiry (NOI) has not been issued and none of the information from the data portal has been publicly released.
Does that mean the FCC has lost interest in data caps? Not likely. Data caps will probably make an appearance somewhere, under the cover of "digital discrimination" or "net neutrality." But until that day draws nearer, let's focus on what people mean by "data caps."
The FCC has a long history of slapping prejudicial labels on innocuous conduct. If you are an internet service provider (ISP), you are labeled a BIAS (broadband internet access services) provider—because bias is bad, and neutrality is good. Providers who bill in monthly (rather than daily) increments are mischaracterized as charging a "billing cycle fee," or BCF, on consumers who terminate in the middle of the month. The FCC pejoratively calls BCFs and early termination fees (ETFs) "junk fee billing practices."
This history continues. What the FCC calls “data caps” aren’t really caps at all, as the American Enterprise Institute’s Daniel Lyons points out (here and here):
[T]he phrase “data caps” is a misnomer. A cap implies a hard limit on the amount of data a customer may consume each month. That’s not an accurate description of most [usage-based pricing] offers, which are perhaps better characterized as pay-as-you-go plans. Customers pay in advance for a certain amount of data, and if they exceed that amount, they can purchase an additional amount. In other words, customers on these plans have unlimited data—they just pay for what they consume, just as they do with most other goods in society.
Lyons is correct. What most people think of when they hear “data caps” in policy circles is very different from how internet providers price consumer usage. To illustrate what I mean, let’s offer some simple examples.
Consider a hypothetical case of a typical consumer in an all-you-can-eat data plan (Figure 1). Under this plan, the user pays an upfront flat fee for unlimited data use. The flat fee is equal to the consumer’s willingness to pay for unlimited data. This is equal to the area under the entire demand curve (A + B). Because the price per unit is zero, the consumer will use data until his or her marginal benefit is zero (Q1). Because the price paid is equal to the consumer’s willingness to pay, consumer surplus is zero. The provider receives revenues equal to A + B, but also incurs costs equal to B + C, so the producer surplus is A – C.
Next, consider a high-demand consumer under the same plan (Figure 2). This consumer will use Q2 amount of data, which is much more than the typical consumer above. She is willing to pay much more for this data (A + B + C + D + E + F), but only pays A + B under her plan, providing a much bigger consumer surplus (C + D + E + F) than for a typical consumer. In contrast, the provider is much worse off with the high-demand consumer under this plan. The provider receives A + B in revenue, but incurs B + C + E + F + G in costs, for a producer surplus of A – (C + E + F + G), which is substantially less than with a typical consumer.
In a perfect world, the provider would be able to identify which consumers have low, typical, or high demand, and to craft an all-you-can-eat plan tailored for each consumer type. But the real world is not so perfect. It may be impossible to accurately identify each type ex ante. More importantly, there are real risks of both adverse selection and moral hazard.
With adverse selection, consumers with large demand would identify themselves as low-demand users to obtain the lower price. With moral hazard, users facing a zero price for data have incentives to find new and additional ways of using data. For example, instead of listening to NPR on their radio, they may have it stream to their smart device. Or instead of watching TV via a cable provider, they “cut the cord” and switch to streaming services. (Note, these are both entirely legitimate practices, which is why, like “junk fees,” “moral hazard” is a pejorative that misrepresents normal practices.)
In this imperfect world, providers can implement programs that simultaneously benefit consumers while improving their bottom line. One alternative is usage-based pricing, such as a version of a two-part tariff. Figure 3 illustrates a program that charges an upfront flat fee for unlimited data use up to a specific quantity (Q1). For any data used in excess of that quantity, the consumer is charged a per-unit price equal to the provider's marginal cost (MC). The consumer will use data until the marginal benefit equals the price charged (Q3, which is more than Q1 and less than Q2). In this case, the consumer would pay the flat fee of A + B, plus a per-unit amount equal to E, which is equal to P × (Q3 – Q1). This amount is less than the consumer is willing to pay, which is equal to A + B + C + D + E. Thus, the consumer surplus is C + D, which is less than under all-you-can-eat pricing, but more than a typical consumer's surplus.
The provider receives the flat fee of A + B, plus a per-unit amount equal to E and incurs costs equal to B + C + E, yielding a producer surplus of A – C.
Another alternative that a provider could consider would be a "hard" data cap at a specified quantity. Under this policy, the consumer pays an upfront flat fee for unlimited data use up to a specific quantity, Q1, but would not be able to use any data in excess of that amount.
Figure 4 illustrates the result. As with the example of the typical customer in Figure 1, under a “hard” cap, the provider receives A + B in revenue and incurs B + C in costs, yielding a producer surplus of A – C.
The consumer is willing to pay A + B + C + d for this plan, but only pays A + B, providing a consumer surplus of C + d. (Note that I use a lower case “d” in Figure 4 because it is smaller than “D” in Figures 2 & 3—i.e., d = D – Z.)
But here’s where things get interesting. The consumer is willing to pay more for the option to exceed the cap. He or she would be willing to pay up to Z + E to use up to Q3 and willing to pay an additional amount F for unlimited data. And if the consumer made that offer, the provider would accept it, because it would increase the provider’s producer surplus.
Put simply, it is in neither the consumer’s nor the provider’s interest to set a hard data cap. The consumer is willing to pay for more data and the amount the consumer is willing to pay is greater than the additional cost to the provider. That is, not only are hard caps subjectively “bad” for consumers, but they are also bad business, because they leave money on the table. There’s no need to ban hard caps, because the market has already banned them. Consumers don’t want hard caps and providers don’t want to impose them.
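For readers who prefer numbers to areas, here is a minimal sketch of the logic in Figures 1-4. It assumes linear inverse demand curves and a constant marginal cost; every parameter value is a hypothetical illustration, not a figure from this post.

```python
# A minimal numerical sketch of Figures 1-4, assuming linear inverse demand
# (p = a - b*q) and a constant marginal cost per unit of data. All parameter
# values are hypothetical illustrations.

MC = 1.0  # provider's marginal cost per unit of data (assumed)

def wtp(a, b, q):
    """Total willingness to pay for the first q units: area under p = a - b*q."""
    return a * q - 0.5 * b * q * q

def satiation(a, b, price):
    """Quantity demanded when the per-unit price equals `price`."""
    return max(0.0, (a - price) / b)

typical = (10.0, 1.0)  # Figure 1: stops at q = 10 when data is free
heavy = (10.0, 0.5)    # Figure 2: flatter demand, stops at q = 20 when free

# All-you-can-eat plan: flat fee equals the typical consumer's full WTP,
# so the typical consumer's surplus is zero, as in Figure 1.
a, b = typical
q1 = satiation(a, b, 0.0)
flat_fee = wtp(a, b, q1)

for name, (a, b) in (("typical", typical), ("high-demand", heavy)):
    q = satiation(a, b, 0.0)
    cs = wtp(a, b, q) - flat_fee
    ps = flat_fee - MC * q
    print(f"all-you-can-eat, {name:11s}: use={q:4.1f}  CS={cs:5.1f}  PS={ps:5.1f}")

# Two-part tariff (Figure 3): the flat fee covers the first q1 units; data
# beyond q1 is priced at marginal cost.
def usage_two_part(a, b, q1, mc):
    beyond = satiation(a, b, mc)  # demand at the overage price
    return beyond if beyond > q1 else min(q1, satiation(a, b, 0.0))

for name, (a, b) in (("typical", typical), ("high-demand", heavy)):
    q = usage_two_part(a, b, q1, MC)
    overage = MC * max(0.0, q - q1)
    cs = wtp(a, b, q) - flat_fee - overage
    ps = flat_fee + overage - MC * q
    print(f"two-part tariff, {name:11s}: use={q:4.1f}  CS={cs:5.1f}  PS={ps:5.1f}")

# Hard cap at q1 (Figure 4): the joint surplus both sides forgo by barring
# any use beyond the cap (WTP for the extra units minus their cost).
a, b = heavy
q3 = satiation(a, b, MC)
forgone = (wtp(a, b, q3) - wtp(a, b, q1)) - MC * (q3 - q1)
print(f"hard cap leaves {forgone:.1f} of joint surplus on the table")
```

Under these assumed curves, the two-part tariff leaves the provider no worse off serving the heavy user than the typical one, gives the heavy user substantial surplus, and shows the hard cap forgoing positive gains from trade, which mirrors the area-by-area argument above.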
The point of this whole hypothetical hootenanny is to demonstrate that usage-based pricing is an elegant solution to profitably accommodate high-use consumers and improve their well-being without affecting other, lower-demand consumers. A program that charges an upfront flat fee for unlimited data use up to a specific quantity, and a per-unit price thereafter, is not in any way a “data cap.” Rather, it is an efficient way to ensure that consumers who want to pay for more data get more data.
Maybe the FCC has already realized this, and that's why the agency's data-caps investigation has gone missing. Or maybe the FCC hasn't realized this, and decided instead that it can shoehorn data-cap regulation into one or more of its other recent expansive regulatory undertakings. Let's hope it's the former.
The post The Curious Case of the Missing Data Caps Investigation appeared first on Truth on the Market.
While there is a need to coordinate the detection of and response to potential pandemics, it is not clear what role, if any, the WHO should have. Perhaps more importantly, it is uncertain what policies should be put in place (and by whom) to prevent, limit, and respond to any future pandemic. The U.S. government should encourage the WHO to delay both changes to the International Health Regulations (IHRs) and the introduction of a new treaty until several issues are satisfactorily resolved.
If we knew the source of the COVID-19 pandemic, it might be easier to prevent a new one. Unfortunately, the Chinese government removed evidence that might well have explained the virus' origins and/or exonerated Chinese actors of creating the virus in a lab or of failing to have in place adequate measures to prevent its accidental release. But the WHO has shrugged its shoulders rather than press China for all relevant data. How times have changed: when SARS killed hundreds in 2003, China was likewise the source of the disease, but the WHO then had a public spat with China over its attempted cover-up.
Given that most new viruses appear to come from China, and that genetic research capable of creating new pathogens is rife in China, we're likely to see history repeat itself unless better means can be found to reduce the likelihood that new pathogens are released accidentally, and to ensure that information about any new pathogens that do emerge—whether from the wild or the lab—is shared widely and quickly. If the WHO is to oversee this process, more robust and honest WHO leadership is an absolute necessity.
This is especially true if SARS-CoV-2 stemmed from government-supported gain-of-function research. Imagine that a plane crashes and the agency responsible for air safety (in the United States, this would be the Federal Aviation Administration) showed no interest in the reason for the crash. It would not happen, and it should not happen with COVID.
The WHO has been particularly reluctant even to acknowledge the various approaches taken by jurisdictions that did not follow its advice. A good example is Sweden, which never "locked down," allowed schools and businesses to stay open, and relied on the good sense of the Swedish people to socially distance and quarantine where required. Since Sweden has had among the lowest mortality rates in the world over the past four years, perhaps it has lessons for the rest of us. If so, those lessons should be taken into consideration by any organization seeking to provide advice in the event of a future pandemic.
If the WHO wants to play that role, it should be more open to evidence of effectiveness from Sweden and other countries that took heterodox approaches. Moreover, even if Sweden is an outlier for idiosyncratic reasons, it’s crucial to understand why that is the case in order to better inform responses more broadly.
Perhaps in part underpinning its lack of curiosity, the WHO appears rather too certain of the best ways to combat a future pandemic: lockdowns, mask mandates, testing mandates, and vaccine mandates.
But according to a recent Cochrane Library review—the gold standard for evaluations of health interventions—mask mandates are simply ineffective. And according to a comprehensive meta-analysis, while lockdowns may have prevented deaths during at least the initial phase of the COVID-19 pandemic, they increased deaths from other diseases and imposed enormous social and economic costs.
Meanwhile, data from several countries that followed quite different policies—which also include Taiwan, Germany, and South Korea—show that other approaches were equally successful, if not more so. The paths these countries took demonstrate that pandemic policies are not "one size fits all," that mandates don't always work, and that the tradeoffs in shuttering schools and businesses might ultimately cause more harm than good.
When mandates are under consideration, government agencies should—at a bare minimum—assess whether their costs outweigh their benefits. The WHO should acknowledge this reality and use its position to provide a comprehensive picture of the various ways that countries successfully responded to the exigencies of the pandemic.
The WHO acknowledges that health misinformation can cost lives. Yet during the pandemic, it inhibited the free flow of information and effectively contributed to the dissemination of misinformation.
Taiwan first alerted the WHO to the threat coming out of Wuhan, yet the WHO backed Beijing’s claims that it could contain the virus. In this case, the problem resulted from the fact that Taiwan is not a member of the UN or any of its agencies, including the WHO. Its statements are therefore not recognized by the WHO, due to the UN’s official “One China” policy. The unfortunate result is that the WHO provided misinformation. Unless the WHO becomes more inclusive, it cannot be trusted to act as the information coordinator in a pandemic.
The WHO says we should “follow the science.” But science is a process, not the opinion of senior figures. Science requires robust debate, which the WHO sought to shut down.
As a case in point, during the early phases of the pandemic, public-health officials assessed the infection fatality rate (IFR, a measure of how many people die when infected) using data from hospitalizations. But those data biased the IFR upward, since most of those infected were either asymptomatic or not sick enough to need a hospital. The falsely high IFR led to even greater calls for lockdowns.
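A stylized illustration of that denominator problem follows; all of the numbers are invented for illustration and are not estimates of actual COVID-19 rates.

```python
# The IFR's denominator should be total infections. Early estimates used
# observed (mostly hospitalized, severe) cases instead, shrinking the
# denominator and inflating the rate. All numbers here are invented.

deaths = 3_000
true_infections = 1_000_000        # includes asymptomatic and mild cases
observed_hospital_cases = 30_000   # early data captured mostly severe cases

true_ifr = deaths / true_infections                 # 0.3%
hospital_based_ifr = deaths / observed_hospital_cases  # 10.0%

print(f"true IFR: {true_ifr:.1%}; hospital-based estimate: {hospital_based_ifr:.1%}")
```

In this invented example, sampling only severe cases overstates the IFR by a factor of more than 30, which is the mechanism the paragraph above describes.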
Moreover, of those hospitalized, physicians tried many drugs to combat the infection and its symptoms, including steroids, hydroxychloroquine, and ivermectin. While some of these efforts failed or remain unproven, others worked in various contexts. Encouraging trial and error and communication about successes and failures is important.
The WHO's decisions about which new approaches to support appear arbitrary, with no clear explanation of what the agency promotes or why it would be useful. That applies, as well, to its assessments of the efficacy and risks of vaccines. COVID vaccines have saved countless lives, and many concerns about them have been overblown or simply incorrect. But the available evidence shows vaccination rates declining in the United States because the WHO (among other actors) helped shut down debate, which had the downstream effect of sowing suspicion of the vaccines in the popular imagination.
The one other WHO treaty—the Framework Convention on Tobacco Control—has the noble aim of reducing the harmful use of tobacco. It is, however, highly exclusionary and has long had opaque processes.
For example, because the international police agency INTERPOL has received funds from the tobacco company Philip Morris (to work on a project on illicit cigarettes), the WHO banned it from observing—let alone participating in—treaty negotiations and follow-up meetings until INTERPOL had ended its association with Philip Morris. INTERPOL, it should be noted, probably knows more about illicit production and smuggling than any other single group, and surely should be consulted on policies to combat the tens of billions of illicit cigarettes sold every year.
It seems almost inevitable that, unless the concerns I raise here are addressed effectively, a new WHO pandemic treaty would suffer similar defects.
There are no doubt many other lessons we could draw from a better understanding of the history of COVID's emergence and our response to it. While there are ongoing efforts to draw those lessons, governments do not appear to be heeding them.
For example, the UK government's inquiry into lockdowns likewise avoids discussing COVID's origins and other sensitive topics. As some have alleged, the inquiry's goal may be more about scoring political points—especially assigning and avoiding blame for policy failures—than about finding the truth.
What is required is a thorough assessment of all aspects of COVID, from understanding its origin to assessing ongoing vaccine policy, so that the best advice about practices and products can be disseminated rapidly. An understanding of which political level (local, state, federal, international) is best to address specific aspects of the problem is also important.
Perhaps the WHO should have a role—even the central role—in some of these efforts. But it would be premature to give it more power to direct responses without some accounting for past failures. There are glaring examples of where the WHO got things wrong (e.g., its assertion that China would be transparent with its data and that China would adequately control the virus; that lockdowns and vaccine and mask mandates are essential; that vaccines prevent transmission; that vaccines are required even if you just had COVID), or where it erroneously failed to show interest (in COVID’s origins, in the successes of places like Sweden and Taiwan).
Some of those failures appear to be systemic, as they are a result of the WHO’s structure, funding, lack of transparency, and authoritarian leadership. The U.S. government must prevent expansion of the WHO’s powers until these issues are thoroughly investigated and solutions are agreed upon and implemented.
The U.S. government made some of the same mistakes as the WHO. Congressional oversight of U.S. agencies, policies, and officials is also often slow and incomplete. But such oversight can and does happen, and changes are undertaken. This oversight is required now.
By contrast, oversight of UN bodies (including the WHO) is very weak, and the only real constraint is the threat to withhold funds. But with an increasing share of WHO funds coming from private actors (such as the Gates Foundation) and for specific projects, governmental threats to withhold general funds are becoming weaker. As such, the U.S. government should only agree to grant the WHO new powers when it is sure both that it is the correct body to wield such powers and that it can execute said powers fairly and effectively. At the moment, neither of those conditions has been met.
It’s tempting to just move on from COVID, and even to assume that a WHO treaty could prevent a future pandemic. But to do so would be to invite a new pandemic that, when it arrives, could lead to even more draconian policies than during COVID. The result could well be lasting harms to our economy, our health, and our children’s education.
The post The WHO’s Insufficient Curiosity and Humility appeared first on Truth on the Market.
This unfortunate tendency is exemplified in the Federal Trade Commission's (FTC) recent complaint against Amazon, which describes two relevant markets in which anticompetitive harm has allegedly occurred: (1) the "online superstore market" and (2) the "online marketplace services market." Because both markets are exceedingly narrow, they grossly inflate Amazon's apparent market share and minimize the true extent of competition. Moreover, by lumping together wildly different products and wildly different sellers into single "cluster markets," the FTC misapprehends the nature of competition relating to the challenged conduct.
What follows is a distillation of my just-published ICLE Issue Brief analyzing these market definition problems in the FTC’s Amazon case, “Gerrymandered Market Definitions in FTC v. Amazon,” available here.
According to the complaint, the online-superstore market is limited to online stores that have an “extensive breadth and depth” of products. This means online stores that carry virtually all categories of products, from sporting goods to consumer electronics, and that also have an extensive variety of brands within each category. In practice, this definition excludes leading brands’ private channels, such as Nike’s online store, as well as online stores that focus on a particular category of goods, such as Wayfair’s focus on furniture. It also excludes brick-and-mortar stores, which still account for the vast majority of retail transactions. Firms with significant online and brick-and-mortar sales might count, but only their online sales would be considered part of the market.
To see how this market definition tilts the balance, consider the FTC’s allegation that Amazon dominates the online-superstore market, with approximately 82% market share. To reach that figure, the FTC determined that Amazon’s offerings constitute 82% of the gross merchandise value (GMV) of U.S. online sales in a market that excludes perishables and includes only those goods sold online by Amazon, Walmart, Target, or eBay. While Amazon’s share of overall online retail is, indeed, substantial, it’s actually less than half the figure in the market the FTC has gerrymandered, at 37.6%. Indeed, if one were to count total retail sales, Walmart actually leads Amazon, not vice versa. And while e-commerce may be substantial and growing, it represents only about 15% of U.S. retail.
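A quick arithmetic check makes the gerrymandering vivid. Assuming Amazon's GMV is the same numerator in both of the shares just cited, the two figures imply how much of U.S. online retail the FTC's alleged market actually covers.

```python
# Same numerator, different denominators: if Amazon's GMV is fixed, the two
# shares cited above imply the size of the FTC's "online superstore market"
# relative to all U.S. online retail. Both shares are from this post.

share_in_ftc_market = 0.82      # Amazon's alleged share of the FTC's market
share_of_online_retail = 0.376  # Amazon's share of all U.S. online retail

implied_coverage = share_of_online_retail / share_in_ftc_market
print(f"{implied_coverage:.0%} of online retail GMV")  # ~46%
# Under the complaint's definition, less than half of online retail counts as
# "superstore" sales, and online retail is itself only ~15% of U.S. retail.
```

In other words, the 82% figure is reached by excluding more than half of online retail GMV from the denominator, before even considering brick-and-mortar sales.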
Not only does defining a relevant market with reference to a single retailer’s particular product offerings not “identify the competitive process alleged to be harmed,” as Werden put it, but it doesn’t actually identify a product at all. Instead, it ends up excluding a host of competing sellers that offer economic substitutes for the products that consumers actually buy. Consumers could prevent a hypothetical monopolist within the “online superstore market” from raising prices by switching to other online channels that don’t qualify as a “superstore,” as defined by the FTC. They could, for instance, switch to a brick-and-mortar retailer. How many might switch, and the extent to which that constrains the monopolist’s pricing, are empirical questions, but there is no question that some consumers might switch: retail multi-homing is common.
Further, despite its repeated emphasis on the “depth and breadth” offered by online superstores, the FTC’s complaint ignores e-commerce aggregators, which allow consumers to search products and pricing across an incredible variety of retailers. Google Shopping, the most notable example, is curiously absent from the complaint. Google Shopping and other aggregators allow consumers to browse extensive results in one place for almost any product, including across all categories and across many brands. Indeed, surveys find that roughly half of all shoppers said they use Google both to discover new items and to research their planned online purchases. And Google Shopping is not alone, as buying through social media has boomed. Instagram, for example, has become an online-shopping juggernaut.
The FTC’s complaint limits its definition of the online-marketplace-services market to those online platforms that provide access to a “significant base of shoppers”; a search function to identify products; a means for the seller to set prices and present product information; and a method to display customer reviews. For instance, the complaint distinguishes online marketplaces from online retailers where the seller functions as a vendor and those where sellers provide their own storefronts or sell directly through social media and other aggregators using “software-as-a-service” (“SaaS”) to market products.
This implies that current Amazon sellers can’t reach consumers through mechanisms that don’t incorporate all of these specific functions, even though consumers regularly use multiple services and third-party sites that accomplish the same thing, including Google Shopping, Shopify, and Instagram. Moreover, it implies that these myriad alternative channels do not constrain how Amazon prices its services.
The complaint alleges that neither operating as a vendor nor utilizing SaaS is “reasonably interchangeable” with online marketplace services—the key language from the U.S. Supreme Court’s 1962 decision in Brown Shoe Co. v. United States. But merely saying so doesn’t make it true. Differentiated competition exists in service markets, just as it does in product markets. Superficial differences among services do not establish that they are not competitors.
For example, if a hypothetical monopolist online marketplace increased prices or decreased quality for selling a product, why would, say, Nike not transfer its products away from the monopolist and toward Foot Locker, Macy's, or any other number of retailers where Nike operates as a vendor? Or why not rely on Nike's own website, selling directly to the consumer? In fact, Nike did exactly this in 2019, when it stopped selling products to Amazon because it was dissatisfied with Amazon's efforts to limit counterfeit products. Nike instead opted to sell directly to its consumers or through its other retailers (both online and offline, of course).
The same can be said for sellers without well-known brands or those who opt to use SaaS to sell their products. Certainly, there are differences between SaaS and online-marketplace services, but that doesn’t mean a seller can’t or won’t use SaaS in the face of increased prices or decreased quality from an online marketplace. Notably, Shopify claims to be the third-largest online retailer in the United States, with 820,000 merchants selling through the platform. It’s remarkable that it is completely absent from the FTC’s market definition.
For its part, Instagram allows sellers to use Meta’s “Checkout on Instagram” service to process orders directly on Instagram, and to use logistics services like Shopify or ShipBob to manage their supply chains and fulfill sales, replicating the core functionality of a vertically integrated storefront like Amazon.
Indeed, one thing that Amazon, SaaS providers, and other similar platforms have in common is that they invest significantly in designing and operating user interfaces, matching algorithms, marketing channels, and innumerable other functionalities to convert undifferentiated masses of consumers and sellers into a functional retail experience. Amazon’s value for sellers in providing access to customers must be balanced by the reality that, in doing so, large “superstores” like Amazon also necessarily put disparate sellers all in the same unified space.
For obvious reasons, sellers don’t necessarily value selling their products in the same location as other sellers. They do, of course, want access to consumers. But Amazon’s “marketplace” or “superstore” aspects simultaneously facilitate that access while also impeding it by congesting it with other sellers and products. In this sense, a specialized outlet may, in fact, offer the optimal selling environment: all consumers seeking the seller’s category of goods (but somewhat fewer consumers), and fewer sellers impeding discovery and access (though more selling the same category of goods). There is little to no reason to think that, by virtue of also offering batteries, clothes, and bolt cutters, Amazon offers anything truly unique to a furniture seller that it can’t get by selling through another distribution channel with a different business model.
The FTC’s casual use of “cluster markets,” which lump together distinct types of products and different types of sellers into single markets, may severely undermine the commission’s case. Indeed, despite their widespread use, the economic logic of cluster markets is, at best, poorly established.
It’s one thing to group, say, all recorded music into a single market (despite the lack of substitutability between, say, death metal and choral Christmas music), but it’s another entirely to group batteries and bedroom furniture into a single “market,” just because Amazon happens to facilitate sales of both.
Courts have recognized that such an approach—using “cluster markets” to assess a group of disparate products or services in a single market—can be appropriate for the sake of “administrative[ ]convenience.” As the 6th U.S. Circuit Court of Appeals noted in Promedica Health v. FTC, “[t]his theory holds, in essence, that there is no need to perform separate antitrust analyses for separate product markets when competitive conditions are similar for each.”
A second basis for clustering is the “transactional-complements” theory, relabeled by the 6th Circuit as the “‘package-deal’ theory.” This approach clusters products together for relevant market analysis when “‘most customers would be willing to pay monopoly prices for the convenience’ of receiving certain products as a package.”
The Supreme Court put its imprimatur on the notion of a cluster market in Philadelphia National Bank, accepting the lower court’s determination that “commercial banking” constituted a relevant market because of the distinctiveness, cost advantages, or consumer preferences of the constituent products. But while the Court suggested some reasons why, in its own telling, “some commercial banking products or services” may be insulated from competition, that still leaves open the possibility that others aren’t, and that the relevant insulating characteristics could be eroded by simple product repositioning, different pricing strategies, or changes in reputation and brand allegiance.
Perhaps the best example of a rigorous defense of cluster markets came in the first Staples/Office Depot merger matter, where ordinary-course documents played a role in the FTC’s review, but were by no means core to the staff’s analysis. The FTC Bureau of Economics applied considerable econometric analysis of price data to establish that office superstore chains constrained each other’s pricing in a way that other vendors of office supplies did not. But it is notable that the exercise was undertaken at all. That is, it was assumed to be a crucial question whether other types of retailers (those with fewer products or catalog-only sales) constrained the pricing power of office-supply “superstores.” Moreover, the groupings of products analyzed were based on detailed analyses of pricing and price sensitivity over identified products, not superficial, subjective impressions of the market.
While the Amazon case is only at the complaint stage, there is no evidence in the complaint that the FTC even considered the possibility that different products and different sellers would need to be considered separately. The complaint offers no evidence to support the assertion of similar competitive conditions, no analysis of cross-elasticities of demand or supply across product categories, and no empirical evidence that a price increase for, say, furniture could be offset by increased sales of batteries. Nor does the complaint consider more granular markets—like furniture, or sporting goods, or books—that would better capture these critical differences. Instead, the complaint appears to assume that, if Amazon offers a grouping of products, or offers services to different types of sellers, this constitutes an economically rigorous "relevant market." (Spoiler alert: It does not.)
The implication of all this is that it seems highly dubious that furniture and batteries face sufficiently similar competitive conditions across online superstores for them to be grouped together in a single “cluster market.” While there may be superficial similarities in the website or technology connecting buyers and sellers, the underlying economics of production, distribution, and consumption seem to vary enormously.
Indeed, it’s quite possible that narrower markets would demonstrate that Amazon faces real competition in some areas but not others. Grouping disparate products together risks obscuring situations where market power—and thus potentially anticompetitive effects from Amazon’s conduct—might exist in some product spaces but not others. The failure to properly define the relevant market for antitrust analysis doesn’t inherently imply a particular outcome; it just means that no outcome can properly be determined.
The relevant markets alleged in the FTC’s complaint draw a distinct line between the seller and buyer sides of Amazon’s platform, thereby implicitly rejecting cross-market effects as justification for Amazon’s business conduct. Some of the FTC’s specific concerns—e.g., the alleged obligation imposed on sellers to use Amazon’s fulfillment services to market their products under Amazon’s Prime label—have virtually opposite implications for the seller and buyer sides of the market. Arbitrarily cordoning off such conduct to one market or the other based on where it purportedly causes harm (and thus ignoring where it creates benefit) mangles the two-sided, platform nature of Amazon’s business and would almost certainly lead to its erroneous over-condemnation.
If Amazon’s practices vis-à-vis sellers cause the sellers to lower their prices, improve the quality of the products available through the marketplace, or otherwise lower costs and whittle down the seller’s profits, then consumers would benefit. Similarly, if Amazon’s practices with sellers improve the quality of consumers’ experience on its marketplace, then consumers would also benefit. The question is whether gain on one side should offset any harms on the other.
Limiting access to the “Buy Box” by sellers of products that are available for less elsewhere, for example, ensures that consumers pay less and builds Amazon’s reputation for reliability; bundling Prime services may mean some consumers pay for services they don’t use in order to get fast shipping, but it also attracts more Prime customers, enabling Amazon to raise revenue sufficient to guarantee same-, one-, or two-day shipping and providing a larger customer base for the benefit of its sellers.
The bifurcated market approach that the FTC appears to be pursuing here conflicts with the Supreme Court’s holding in Ohio v. American Express. In Amex, the Court held that there must be net harm to both sides of a two-sided market (like Amazon) before a violation of the Sherman Act may be found. And even the decision’s critics recognize the need to look at effects on both sides of the market (whether they are treated as a single market, as in Amex, or not).
The economic literature shows that two-sided markets exhibit interconnectedness between their sides. It would thus be improper to consider effects on only one side in isolation. Yet that is what artificially narrow market definitions facilitate—letting plaintiffs make out a prima facie case of harm in one discrete area. This selective focus then gets upended once defendants demonstrate countervailing efficiencies outside that narrow market. (For a discussion of this problem in the context of mergers (though with relevance for Section 2 cases), see my TOTM post with my colleagues Dan Gilman and Brian Albrecht).
But why define markets so narrowly if weighing interrelated effects is ultimately essential? Doing so seems certain to heighten false-positive risks. Moreover, cabining market definitions and then trying to “take account” of interdependencies is analytically incoherent. It makes little sense to start with an approach prone to miss the forest for the trees, only to try correcting the distorted lens part way into the analysis. If interconnectedness means single-market treatment is appropriate, the market definition should match from the outset.
But I think the FTC is aiming not for the most accurate approach, but for the one that (it believes) simply permits it to ignore procompetitive effects in other markets, despite its repeated acknowledgment of the “feedback loops” between them. Certainly, FTC Chair Lina Khan is well aware of the possible role that Amex could play, and has even stated previously that she believes Amex does apply to Amazon. Instead, the agency is hoping (incorrectly, I believe) that the Court’s decision in Amex won’t apply, and that its decisions in PNB and Topco will ensure that each market be considered separately and without allowance for “out-of-market” effects occurring between them. Such an approach would make it much easier for the FTC to win its case, but would do nothing to ensure an accurate result.
Ultimately, what determines the proper scope of relevant markets is economic analysis based on empirical data. But based on the FTC’s complaint, public data, and common sense (the best we have to go on, for now), it seems implausible that the FTC’s conception of distinct, and distinctly narrow, relevant markets will comport with reality.
An artificially narrow and gerrymandered market definition is a double-edged sword. If the court accepts it, it’s much easier to show market power. But the odder the construction, the more likely it is to strain the court’s credulity. The FTC has the burden of proving its market definition, as well as competitive harm. By defining these markets so narrowly, the FTC has ensured it will face an uphill battle before the courts.
The post How the FTC’s Amazon Case Gerrymanders Relevant Markets and Obscures Competitive Processes appeared first on Truth on the Market.
The post What Do We Do with Presumptions in Antitrust? appeared first on Truth on the Market.
That's big news in antitrust, even though the new merger guidelines do not have the force of law. There's more on the guidelines below. But what else is new?
For one thing, the agencies have won a few. The FTC secured a preliminary injunction against the proposed IQVIA Holdings/Propel Media merger in the U.S. District Court for the Southern District of New York. It also succeeded in blocking the Illumina/Grail merger by partly winning in the 5th U.S. Circuit Court of Appeals, which rejected some of Illumina’s constitutional claims, left others for the U.S. Supreme Court, and agreed that the FTC had made out a prima facie case under the burden-shifting framework commonly attributed to Baker Hughes.
The decision was not, however, an FTC sweep. The court held that the FTC had applied the wrong legal standard in evaluating Illumina’s “open offer” (see me here; Alden Abbott here and here; Jonathan Barnett here; and the International Center for Law & Economics’ (ICLE) amicus brief here). The open offer was a contractual tool (already in force with some parties) designed to eliminate the risk of harm (however great or slight) that the FTC alleged.
Contra the FTC, the court ruled that the open offer should have been considered at the liability stage, not the remedy stage, and that the FTC had therefore analyzed the open offer under a more stringent standard than it should have. Thus, the case was remanded to the FTC. Illumina abandoned the deal—no doubt for various reasons, including an estimation that continuing the dispute would be costly. This was perhaps not least due to an inkling that the FTC would view its own case favorably on remand.
The FTC also touted its settlement of the Amgen/Horizon matter (here and here) although, as I explained in December, the core of the consent order—reached on the eve of trial—had been proposed by Amgen all along. The settlement of a case that shouldn’t have been brought seems roughly the right outcome (see ICLE’s amicus brief here). Tanking the Illumina/Grail merger seemed the wrong one, although it’s worth noting that the FTC’s case was based on an established theory of harm and did not depend on anything terribly novel from, e.g., the new merger guidelines or the FTC’s expansive (if not downright fanciful) Section 5 statement.
Just last week, on Jan. 16, Judge William G. Young of the U.S. District Court for the District of Massachusetts ruled in favor of the DOJ in enjoining JetBlue's acquisition of Spirit Airlines. The decision is, in many respects, grounded in established law, applying the Baker Hughes burden-shifting framework, and considering price effects (and consumer welfare); likelihood of entry; and, indeed, likely merger efficiencies.
Yet, in one very fundamental way, it's unfortunate. Judge Young recognized that, as a matter of fact, the merger would be procompetitive, on net, on a national level. But because he agreed with the government that the merger was likely to harm competition "in at least some relevant markets"—that is, on some of the specific city-to-city routes among the hundreds identified in the government's complaint—he ruled it a violation of Section 7 of the Clayton Act. That decision was not baseless, in law or fact, but it was hardly necessary, and it's unfortunate for both competition and consumers. On that sort of balancing, see me and my ICLE colleagues Brian Albrecht and Geoff Manne on out-of-market effects here.
Much has been made of the antitrust agencies’ win/loss record under the Biden administration (even as we remind ourselves that the FTC is—or is supposed to be—an independent agency headed by a bipartisan commission). Initially, it seemed dismal. It’s improved. Jan Rybnicek—a former FTC attorney advisor—has a useful thread about it here. As he observes, the win/loss ratio has improved, but it remains unremarkable, both in terms of cases brought and in terms of cases won. He also notes that the wins have tended to rest on established theories of harm.
And three years are but three years; that is, not that long and not enough cases to make much of a trend, one way or the other. The future of the agencies’ more creative endeavors remains to be seen. Still, losses and wins both count.
Back to the agencies’ December 2023 gift to the world of antitrust (if not to consumers and competition): many think it’s an improvement over the July draft. In some ways it is, although views on the magnitude of improvement vary, ranging from slightly less bad to notably better but still problematic.
For a refresher on issues to do with the July draft guidelines, see . . . lots. ICLE’s comments on the draft are here, and my posts here and here identify (with links) useful commentary from many others.
The agencies have given me much to kvetch about (thanks?). For today, I’ll leave myself stuck on first base, with Guideline 1.
Guideline 1 alters the treatment of structural presumptions in merger analysis in several ways. Most conspicuously, it changes the thresholds for structural presumptions. The 2010 Horizontal Merger Guidelines identified three categories of market concentration: “unconcentrated markets” (those with a Herfindahl–Hirschman index (HHI) below 1,500), “moderately concentrated markets” (those with an HHI between 1,500 and 2,500 (inclusive)), and “highly concentrated markets” (those with an HHI above 2,500).
Under the new guidelines, markets with an HHI above 1,800 are deemed “highly concentrated.” The new guidelines do not define “unconcentrated” or “moderately concentrated” markets.
The new guidelines also announce that “mergers raise a presumption of illegality when they significantly increase concentration in a highly concentrated market.” Under the guidelines, there are two ways to raise that presumption. First, there is a structural presumption of illegality if the post-merger HHI is (a) greater than 1,800 and (b) the change in HHI is greater than 100. Second, the presumption is triggered when the merged firm’s market share is (c) greater than 30% and (d) the change in HHI is greater than 100.
The 2023 guidelines, like the 2010 Horizontal Merger Guidelines (and the 1992 guidelines), use the HHI as a measure of concentration. For any given product (or service) and geographic market, the HHI is simply the sum of the squares of each market participant's market share. So, for a true one-participant (100% market share) monopoly, the HHI is 10,000. If there are two participants, each with a 50% market share, the HHI is 5,000.
So, the 2023 Merger Guidelines, like the 2010, 1992, and even 1982 guidelines, employ structural presumptions and use HHI as a measure of concentration. But much has changed.
One obvious change is in the simple number of structural thresholds: one, under the 2023 guidelines, versus three, under the 2010 guidelines. Another is a change in the variety of presumptions. Under the 2010 guidelines, either of two presumptions might be applied to mergers in or to highly concentrated markets, depending on the change in HHI:
Mergers resulting in highly concentrated markets that involve an increase in the HHI of between 100 and 200 points potentially raise significant competitive concerns and often warrant scrutiny. Mergers resulting in highly concentrated markets that involve an increase in the HHI of more than 200 points will be presumed to be likely to enhance market power. The presumption may be rebutted by persuasive evidence showing that the merger is unlikely to enhance market power. (emphasis added)
Moreover, the 2010 guidelines told us that small changes in concentration (HHI deltas smaller than 100 points) were “unlikely to have adverse competitive effects and ordinarily require no further analysis.” Similarly, “[m]ergers resulting in unconcentrated markets are unlikely to have adverse competitive effects and ordinarily require no further analysis.” Mergers that resulted in “moderately concentrated markets” (HHI greater than 1,500 but not greater than 2,500) and an increase in HHI of more than 100 points “potentially raise significant competitive concerns and often warrant scrutiny.”
As described above, there’s only one defined structural presumption: a presumption of illegality. That is, while the 2023 guidelines expressly identify a single structural presumption, the 2010 guidelines identified four different presumptions, depending on the post-merger HHI and the change in HHI.
Plainly, there’s been a substantial drop in the threshold for a “highly concentrated” market (from 2,500 to 1,800), while the change in concentration triggering the strongest presumption in the 2010 guidelines (and the only presumption in the 2023 guidelines) has been cut in half (from 200 to 100).
This is not about mergers to monopoly or two-to-one mergers. As we illustrate in our out-of-market piece, under the new guidelines, a market is deemed “highly concentrated” if, e.g., seven competitors have market shares of 30%, 20%, 15%, 15%, 9%, 8%, and 3% (HHI = 1,904). If the largest firm were to acquire the firm with 9% market share, the HHI would jump 540 points, to 2,444. What was deemed a merger from and to a moderately concentrated market under the 2010 guidelines would trigger both structural presumptions of illegality under the new guidelines.
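For the arithmetically inclined, here is a minimal sketch verifying the example above and applying the structural triggers as this post describes them; the threshold logic is a paraphrase of Guideline 1, not the guidelines' own text.

```python
# Verifying the seven-firm example above and the two structural triggers as
# described in this post (Guideline 1 of the 2023 merger guidelines).

def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared market shares, in points."""
    return sum(s * s for s in shares)

pre = [30, 20, 15, 15, 9, 8, 3]
post = [39, 20, 15, 15, 8, 3]  # the 30% firm acquires the 9% firm

delta = hhi(post) - hhi(pre)
print(hhi(pre), hhi(post), delta)  # 1904, 2444, 540

# 2023 guidelines, as described above: presumed illegal if post-merger HHI
# exceeds 1,800 with a delta over 100, or if the merged firm's share exceeds
# 30% with a delta over 100.
trigger_1 = hhi(post) > 1800 and delta > 100
trigger_2 = max(post) > 30 and delta > 100
print(trigger_1, trigger_2)  # True, True: both presumptions fire

# Under the 2010 guidelines, both 1,904 and 2,444 sit in the 1,500-2,500
# band: a "moderately concentrated" market before and after the merger.
```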
Then there’s the question of the nature of the presumption. While the 2010, 1992, and 1982 guidelines did not quite specify safe harbors (they disavowed “rigid screens” at either end of the spectrum), they came close to it in identifying classes of mergers that are “unlikely to have adverse competitive effects and ordinarily require no further analysis.”
What about the strongest structural presumption? In 2010, the most worrisome mergers were “presumed likely to enhance market power.” In 2023, we are told that mergers to highly concentrated markets (a lower threshold) with HHI changes of at least 100 points (a smaller change) are “presumed to substantially lessen competition or tend to create a monopoly.” The phrase “substantially lessen competition or tend to create a monopoly” is lifted straight from Section 7 of the Clayton Act. And the heading to Guideline 1 tells us that the presumption is a “[p]resumption of illegality.”
On the one hand, antitrust has long been concerned with the acquisition and exploitation of market power; that’s recognized in the case law, and in the 2010 guidelines. Mergers in or to highly concentrated markets that involved HHI changes greater than 200 raised red flags and were bound to be subject to careful scrutiny at either agency. So, apart from the substantially revised thresholds, the difference in language may seem a small difference—perhaps much smaller than the difference in the numbers.
Or not. There’s that phrase “presumption of illegality,” which does not appear in the 2010 Horizontal Merger Guidelines (or the 1992 guidelines, or the 2020 Vertical Merger Guidelines, for that matter). If we dial the clock back 40-plus years, we come close to it, as the DOJ’s 1982 guidelines identified mergers the agency was “likely to challenge.” But that was the DOJ in 1982, with 42 years and lots of water (agency experience, economic research, and case law) under the bridge.
And what about the more basic question: what to make of structural presumptions as a general matter? As many have noted, economic learning and agency experience have tended to diminish the role of structural presumptions over the course of several decades (at least). My ICLE colleagues and I spent a good many pages (and citations) on this in our response to the draft merger guidelines. The structure/conduct/performance paradigm has been largely abandoned, because it's widely recognized that market structure is not outcome-determinative. The view is shared, as we note, by scholars across the political spectrum.
To take one prominent recent example, professors Fiona Scott Morton (deputy assistant attorney general for economics in the DOJ Antitrust Division under President Barack Obama, and now teaching at Yale University), Martin Gaynor (former director of the FTC Bureau of Economics under Obama, now serving as special advisor to Assistant U.S. Attorney General Jonathan Kanter, on leave from Carnegie Mellon University), and Steven Berry (an industrial-organization economist at Yale) surveyed the industrial-organization literature and found that presumptions (and analyses) based on measures of concentration are unlikely to provide sound policy guidance:
In short, there is no well-defined "causal effect of concentration on price," but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. … Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates.
We (and they) were hardly alone. The Global Antitrust Institute (GAI) filed three distinct comments on the draft merger guidelines, with one focused specifically on “the 2023 Draft Merger Guidelines emphasis on structural antitrust.” As they observe, the structuralist approach of the 2023 draft (which was maintained in the final 2023 guidelines) harkened back to that of the 1968 Merger Guidelines, which “was also supported by ‘the Structure-Conduct-Performance (SCP) paradigm dominant in industrial organization economics at the time.’” Since 1968, however:
[A]dvances in economic knowledge exposed the flaws associated with the SCP paradigm, undermining the empirical basis for structural antitrust contained in the 1968 Guidelines. This research demonstrated that the SCP paradigm’s empirical methodology could not identify the causal effect of industrial concentration on market performance. The correlations between market concentration and economic performance produced by SCP studies could not be used to predict the causal effect of a merger nor relied upon by Agencies or courts for the purposes of rational antitrust merger review. Moreover, economic analysis also exposed the lack of robustness of the paradigm’s main empirical underpinnings. (internal citations omitted)
And there are many more such examples, including Gregory Werden (former chief counsel for economics at DOJ) here, and Nathan Miller et al. (many co-authors, including Aviv Nevo, director of the FTC’s Bureau of Economics, eight former directors of the FTC’s Bureau of Economics, several former chief economists at the DOJ Antitrust Division, and leading academics) in 2022 here.
Notably, 2022 comments submitted to the agencies by John Asker, Kostis Hatzitaskos, Bob Majure, Ana McDowall, Nathan Miller, and, again, the FTC’s Aviv Nevo had the following to say:
As noted by DOJ and FTC staff and front office economists in late 2020, the 2010 HMG continue to accurately reflect the practices of the agencies and highlight practices and techniques of continued relevance to modern practice. We agree. Despite concerns voiced by commentators and raised in some academic studies, our read of the evidence is that it does not support large deviations from the approach in the 2010 HMG or the 2020 VMG. Some proponents favor strengthening structural presumptions and lowering the presumption thresholds. This, they argue, would be an important step toward strengthening enforcement and reducing the number of harmful mergers passing unchallenged. Our view is that this would not be the most productive route for the agencies to pursue to successfully prevent harmful mergers, and could backfire by putting even further emphasis on market definition and structural presumptions. If the agencies were to substantially change the presumption thresholds, they would also need to persuade courts that the new thresholds were at the right level. Is the evidence there to do so? The existing body of research on this question is, today, thin and mostly based on individual case studies in a handful of industries. Our reading of the literature is that it is not clear and persuasive enough, at this point in time, to support a substantially different threshold that will be applied across the board to all industries and market conditions.
I don’t present that as a “gotcha” on Nevo: agency guidelines are just that, and even the director of the Bureau of Economics cannot exert complete control over the particulars. Rather, like some of the comments and articles cited above, it’s a serious appraisal from established economists with a demonstrated willingness to support vigorous enforcement of the antitrust laws, and to consider pro-enforcement reform. And it’s very much at odds with the new guidelines.
That’s not to say that nobody thinks there’s any signaling value to HHI measures. Some recent research suggests there might be, although it points more toward the significance of changes in HHI than the absolute post-merger measure, and it suggests different results depending on the nature of the transaction and its associated (likely) efficiencies.
But what sort of signaling value? This is not simply a question about whether or not to return to the 1992 HHI thresholds. It’s a question of the presumption that’s triggered at any given threshold. The new guidelines, unlike the 2010 guidelines, or even the 1992 guidelines, describe a “presumption of illegality,” and they attach it to substantially lower thresholds.
Collectively, that seems a very big change, indeed.
So what do the agencies have to say to justify the new presumptions of illegality being attached to very old thresholds? In a footnote, they acknowledge the 2010 thresholds and say, “based on experience and evidence developed since, the Agencies consider the original HHI thresholds to better reflect the law and the risks of competitive harm suggested by market structure and have therefore returned to those thresholds.”
What experience and evidence? There are no supporting citations to the literature, or descriptions of agency analysis of particular merger matters from the intervening years. None. They do cite three cases decided prior to 2010: Chicago Bridge & Iron Co. N.V. v. FTC; FTC v. H.J. Heinz Co.; and FTC v. Univ. Health, Inc. While they are correct in noting that those decisions cite prior editions of the guidelines as persuasive authority, not one of the cases depends on anything remotely like the new thresholds.
Three cases, no post-merger HHIs below 3,200, considerably larger changes in HHI than any specified in the 2010 guidelines or the 2023 guidelines, and more going on than just the HHI measures.
That all seems like scant evidence to double down on structural presumptions, borrowing “highly concentrated” numbers from 1992 and a presumption of illegality at least as strong as that signaled by the DOJ in 1982. I use “scant” here as a term of art: really, no good evidence at all. And frankly, given the considerable controversy raised by the draft guidelines, that’s just plain lazy.
Once upon a time, there was a TV sitcom called The Odd Couple, based on Neil Simon’s play of the same name. In one episode, there’s a courtroom scene that’s become a television classic (yes, there are such things—sorry, Professor Adkins). One of the characters, Felix, representing himself pro se, gets the witness to admit that she “just assumed” something material. With a great (self-perceived) sense of drama, and the aid of a blackboard, Felix tells her that “you should never assume, because [as he writes the word “ASSUME,” and underlines its constituent parts in sequence] when you assume, you make an ASS of U and Me.”
He said “assume,” not “presume,” and yet . . .
The post What Do We Do with Presumptions in Antitrust? appeared first on Truth on the Market.
iRobot, headquartered in Bedford, Massachusetts, is an American success story:
Founded in 1990 by Massachusetts Institute of Technology roboticists with the vision of making practical robots a reality, iRobot has sold more than 40 million robots worldwide. The company has developed some of the world’s most important robots, and has a rich history steeped in innovation. Its robots have revealed mysteries of the Great Pyramid of Giza, found harmful subsea oil in the Gulf of Mexico, and saved thousands of lives in areas of conflict and crisis around the globe. iRobot inspired the first Micro Rovers used by NASA, changing space travel forever, deployed the first ground robots used by U.S. Forces in conflict, brought the first self-navigating FDA-approved remote presence robots to hospitals and introduced the first practical home robot with Roomba [a robotic vacuum cleaner], forging a path for an entirely new category in home cleaning.
Amazon and iRobot signed an agreement in August 2022 under which Amazon would acquire the robotics company. Subsequently, Amazon explained the rationale behind the acquisition:
iRobot, which faces intense competition from other vacuum cleaner suppliers, offers practical and inventive products. We believe Amazon can offer a company like iRobot the resources to accelerate innovation and invest in critical features while lowering prices for consumers.
A September 2022 iRobot investor filing revealed that the Federal Trade Commission (FTC) had made a “second request” for additional information pertaining to the transaction. Surprise, surprise, this followed hard on the heels of a letter to the FTC from 24 interventionist-leaning “public interest” groups (including the Open Markets Institute), requesting that the commission challenge the acquisition (which, the letter alleged, “would endanger fair competition and open markets”). The FTC has not yet announced its position on the merger.
The acquisition was also reviewed by the European Commission and by the UK Competition and Markets Authority (CMA). In June 2023, the CMA announced that it had cleared the transaction, finding that it would not lead to competitive concerns in the UK market.
The European Commission, however, had a different view. In a November 2023 press release, the Commission announced that it “ha[d] informed Amazon of its preliminary view that its proposed acquisition of iRobot may restrict competition in the market for robot vacuum cleaners [RVCs].” The key concern was that “Amazon may have the ability and the incentive to foreclose iRobot’s rivals by engaging in several foreclosing strategies aimed at preventing rivals from selling RVCs on Amazon’s online marketplace and/or at degrading their access to it.”
Subsequently, it was reported last week that the Commission plans to block Amazon’s acquisition of iRobot.
A closer look at this matter indicates that a decision to block the merger would harm consumer welfare and undermine dynamic competition.
The robotic vacuum-cleaner market is highly competitive, according to data gathered by market-research company Mordor Intelligence. Mordor has conducted detailed studies of the evolution of this market and has fly-specked 2023 and 2024 market-share data in reaching its conclusion. According to a 2024 Mordor report:
The robot vacuum cleaners market is very competitive primarily due to the presence of major players such as iRobot Corporation and Neato Robotics (Vorwerk). Furthermore, the probability of new players entering the market is moderately high, which could further intensify the market competition. Product launch, high expense on research and development, and strategic partnerships and acquisitions are the prime growth strategy followed by the companies to sustain the intense competition.
Not only is the robot vacuum-cleaner market competitive, but iRobot has been lagging competitively over the last two years, the period during which it has faced merger-review uncertainty. At the same time, Chinese robot vacuum-cleaner producers have been on the rise.
In November 2023, China Daily reported that Chinese robot vacuums had captured nearly 50% of the category, and more specifically, that Chinese robotic-vacuum brands had 68% and 55% of the category in Southeast Asia and Europe, respectively. According to Nikkei Asia, Chinese companies like Ecovacs are competing globally with cutting-edge features and affordable prices, gaining market share at iRobot’s expense.
These recent robotics-market dynamics suggest that the real effect of blocking iRobot’s acquisition by Amazon would not be to prevent some anticompetitive foreclosure (a dubious theory at best, given the CMA’s findings), but rather, to strengthen the competitive position of fast-rising Chinese competitors relative to an American rival.
Has Amazon attempted to promote iRobot products at the expense of its rivals? Do iRobot vacuum cleaners enjoy disproportionate attention and favorable publicity compared to those made by its rivals? Is there really a vibrant, intensely competitive online market for robotic vacuum cleaners? Whatever the current state of the market, would Amazon’s acquisition be likely to substantially lessen competition? In sum, is the Mordor research that portrays a highly competitive market accurate? Let’s see what some basic web-surfing I recently undertook reveals.
A simple Google search on Jan. 20 that included the terms “Amazon” and “vacuum cleaners” linked immediately to a landing page entitled “2024 Best Robot Vacuums New Year Deals Today,” which stressed that “[w]ith a wide range of models available, you can find the perfect robot vacuum to suit your needs and budget. Some popular options include the iRobot Roomba, Roborock, the Eufy RoboVac, and the Shark IQ Robot.” The landing page featured equal-sized photographs with direct links and prices of 20 different robot vacuums (five made by iRobot) from eight different manufacturers (including one huge multinational, Samsung). The first of the four lines of photographed robots on the landing page included models by Roborock (“special deal”); iRobot (“special deal”); Shark (“special deal”); Eufy (“special deal”); and ECOVACS. The iRobot model had the highest price among the three “bargain-priced” first line robots.
A one-click Google search using the terms “Walmart” and “robot vacuum” linked to a Walmart landing page with an even larger number of robots, mostly priced lower than those featured on Amazon’s page. A one-click Google search under “robot vacuum” led to a large number of landing pages featuring many brands of robots at varying prices. A one-click Bing search using “robot vacuum” led to multiple landing pages (including Amazon’s). The top of the Bing search display was a New York Times link to a Wirecutter landing page with an article titled “The Best Robot Vacuums.” None of the three products featured in the article (two from Roborock, one from Eufy) came from iRobot.
Similar basic web searches featuring various related terms on Google and Bing yielded the same results: easy-to-find links to a wide variety of products, most of them not made by iRobot. Furthermore, reviews of robotic vacuum cleaners tout many non-iRobot products.
My “quick and dirty” anecdotal online searches undoubtedly are non-scientific casual empiricism, but they nevertheless are instructive. They reveal that, at this time, online shoppers enjoy a wide variety of robot vacuum choices from a large number of manufacturers in a range of prices, with iRobot devices being only a few of many. Moreover, iRobot does not appear to enjoy any particular online market supremacy, in terms of either pricing or prestige.
Amazon’s acquisition of iRobot would not, I believe, materially affect the thrust of my results for the casual browser entering basic search terms. Even with greater post-acquisition favoritism shown toward iRobot products on Amazon’s website, consumers would be offered a huge number of alternative attractive choices, easily accessible online. In short, it strains credulity to believe that Amazon’s acquisition of iRobot is a threat of any sort to effective competition in the robot vacuum-cleaner market (and, I suspect, in the market for other consumer robotic devices, based on my search).
But what if European Commission enforcers are concerned about Amazon strengthening its hand by giving specially favored treatment to iRobot devices? As noted above, even assuming this were the case, effective competition would almost certainly be preserved online. But apart from that, Amazon is one of the six gatekeepers subject to the EU’s Digital Markets Act (DMA), which (among other obligations) forbids gatekeepers to “treat services and products offered by the gatekeeper itself more favourably in ranking than similar services or products offered by third parties on the gatekeeper’s platform.” Huge fines could be imposed on Amazon if it did not comply.
In short, the specter of DMA enforcement would short-circuit any conceivable incentive Amazon might have to engage in anticompetitive foreclosure post-merger. (Of course, that incentive was lacking in the first place, given the CMA’s findings. The real danger is that the hyper-regulatory DMA will harmfully interfere with efficient platform management by Amazon and other American platforms (see here and here).)
In short, a European Commission decision to block Amazon’s acquisition of iRobot would serve no procompetitive purpose. It would, however, preclude Amazon from realizing substantial efficiencies through iRobot, including promoting iRobot’s ability to introduce innovative new welfare-enhancing robotic products and to lower the prices of its entire lineup. As such, a Commission decision to prevent this merger would harm consumer welfare and innovation—a result at odds with sound competition policy.
Amazon’s acquisition of iRobot would likely promote efficiencies, raise welfare, and enhance competition. There is no sound justification for preventing this merger. Attempting to do so would not only harmfully undermine innovation in a highly competitive market, but would have broader ramifications, as well. It would dissuade large companies from contemplating welfare-creating complementary acquisitions, to the detriment of innovative welfare enhancement in a large number of markets. It would, once again, single out without justification a highly successful American digital platform, which, with its fellow U.S. platforms, has generated enormous welfare gains for consumers (see here).
The Chinese government and its large firms (which, by the way, are not listed DMA gatekeepers) must be laughing at the wounds that the “sophisticated” competition authorities of the Western world continue unnecessarily to inflict on Western (and, in particular, American) technology giants—wounds that harm consumers and Western economies.
In a normal world, the U.S. government, drawing on the expert advice of the FTC and the U.S. Justice Department (DOJ), would be working hard to convince the Commission not to block the Amazon-iRobot merger. Unfortunately, we are living in a world in which the FTC and DOJ have abandoned consumer-welfare promotion and sound economic analysis altogether, in their pursuit of fatuous neo-Brandeisian dreams. Let us hope that U.S. antitrust enforcers and their European counterparts both come to their senses soon.
The post A European Commission Challenge to iRobot’s Acquisition Is Unjustified and Would Harm Dynamic Competition appeared first on Truth on the Market.
That’s partly right. The district court had correctly rejected Epic’s federal antitrust claims against Apple (and had ruled against Epic on Apple’s breach-of-contract counterclaim); the 9th Circuit upheld the trial court’s decision; and the Supreme Court’s refusal to grant cert leaves those Epic losses undisturbed.
But Apple was denied a sweep at the district court, which ruled in favor of Epic’s claim under California’s Unfair Competition Law (UCL). The 9th Circuit likewise sustained that state-law ruling, and the Supreme Court has thus left both it and the district court’s nationwide injunction undisturbed.
The state law decision was not a trivial matter, and its practical ramifications present four distinct challenges.
First, the district court’s injunction is overly broad, as it applies to all app developers on the App Store, not just to Epic. Second, the district court’s finding that Apple’s anti-steering provisions violate California’s UCL is inconsistent with its conclusion on the federal antitrust law counts. Third, and relatedly, this discrepancy effectively enables a single state’s unfair competition law to undermine federal antitrust policy nationwide. Fourth, but related specifically to federal antitrust law, the Supreme Court could have taken this opportunity to clarify some of the contentious questions surrounding the fourth step of the rule-of-reason framework.
Ultimately, the case is a victory for no one. After costly and complex litigation on both federal and state competition claims, the biggest “change” is that Apple now has to delete its anti-steering provisions. Apple, however, remains entitled to charge a commission for use of its iOS and App Store—as it no doubt will continue to do. Epic’s attempt to circumvent Apple’s IAP fees has thus, for now, been for naught. If anything, Apple’s new method of collecting its commission may actually be more cumbersome, and therefore worse for developers.
It is therefore not clear what the case has achieved, other than debilitating Apple’s ability to enforce strict privacy and security standards on its platform, thanks to an overly broad nationwide injunction.
In the original complaint, Epic challenged as a violation of federal antitrust law Apple’s prohibition of third-party app stores and in-app-payment (IAP) systems from operating on its proprietary iOS platform. The U.S. District Court for the Northern District of California ruled against Epic, finding that the company’s real concern was its own business interests in the face of Apple’s business model—in particular, the commission that Apple charges for use of its IAP system—rather than harm to consumers and to competition more broadly. It also found that Apple’s IAP and App Store restrictions were an integral part of its “walled garden” model, which benefitted users through increased privacy and security.
At the same time, District Court Judge Yvonne Gonzalez Rogers found that Apple’s anti-steering provision—i.e., the prohibition on informing users about payment options other than Apple’s IAP—violated California’s UCL. She issued an injunction forcing Apple to allow links and other “calls to action” that would bypass Apple’s payment system.
Both parties appealed to the 9th Circuit, which affirmed in part and reversed in part the district court’s judgment, including affirming the injunction against Apple’s anti-steering provisions. At the time, we at the International Center for Law & Economics (ICLE) filed an amicus brief in favor of Apple’s rehearing request, in which we argued that, if Apple’s IAP and App Store restrictions did not violate federal antitrust law, they could not violate the UCL, either:
The panel’s holdings that (1) Apple’s conduct with respect to its close control over the App Store and restrictions on in-app payments…do not give rise to an antitrust violation, but that (2) its anti-steering provisions nevertheless violate California’s Unfair Competition Law… are incongruent. The anti-steering provisions violate the UCL only if they constitute an “incipient violation of an antitrust law, or . . . [cause harm] comparable to or the same as a violation of the law.” Cel-Tech Commc’ns, Inc. v. L.A. Cellular Tel. Co., 20 Cal. 4th 163, 186-87 (1999). But provisions limiting app developers’ ability to steer consumers to alternative payment options exist merely to further the goals of the lawful IAP restrictions, and thus the anti-steering provisions cannot constitute incipient antitrust violations or cause harm comparable to such violations.
Having affirmed the District Court’s finding that Apple’s IAP policies are procompetitive, the panel should have ruled that Apple’s anti-steering provisions— which constitute a less restrictive means of pursuing the same procompetitive objective—are not unfair under the UCL.
We contended that, if left to stand, the court’s decision risked chilling procompetitive conduct by deterring investment in efficiency-enhancing business practices, such as Apple’s “walled garden” iOS. More egregiously, it risked creating a fundamental contradiction by enjoining conduct under the UCL that is benign—and even beneficial—under antitrust law. Indeed, the district court had recognized that Apple had non-pretextual, legally cognizable, pro-competitive reasons for its IAP restrictions.
Apple also moved to stay the portion of the appeals court’s ruling requiring the company to undo its anti-steering provisions. The circuit court granted the motion and ordered a stay on that part of the ruling in July 2023, giving Apple 90 days to petition the Supreme Court and see whether its appeal would be taken up. Epic moved to block the petition before the Supreme Court, but was denied.
Apple and Epic each filed a petition for a writ of certiorari before the Supreme Court, with the outcome of the Epic Games v Apple saga hinging on whether the Court would take up the dispute. This week’s decision resolves that question.
What it means, in a nutshell, is that Apple wins on the federal antitrust counts, but will be forced to remove its anti-steering provisions in line with the district court’s injunction, which is based on California’s UCL. That injunction is now in effect, meaning that developers can now include in their apps “buttons, external links, or other calls to action that direct customers to purchasing mechanisms, in addition to IAP.”
Overall, this is a blow to Epic’s efforts to open iOS to competing stores and payment systems, and thus to its ultimate goal: free access to iOS users. As we argued in our amicus brief filed before the 9th Circuit:
Ultimately this case boils down to Epic wanting a free ride for its own Epic Games Store and its own IAP on iOS.
Judged from this perspective, Epic’s legal crusade has been a fiasco. There are, however, a string of problems with the nationwide injunction against Apple’s anti-steering provision that dilute Apple’s victory and undermine the consistency of the district court’s ruling.
While Epic v. Apple was not a class-action lawsuit, the district court’s nationwide injunction applies to millions of non-party app developers. In other words, this ruling allows one aggrieved party, which is no longer even present on the App Store, to dictate the terms and conditions for millions of app developers. These developers signed Apple’s guidelines on the exclusive use of the IAP and the related anti-steering provisions, and may reasonably prefer their apps to benefit from the full advantage of Apple’s walled-garden model, rather than risk it being compromised by lesser, third-party IAPs.
Incidentally, as part of a settlement in Cameron v. Apple Inc—a class-action lawsuit filed in the very same Northern District of California that involved some 6,700 developers—Apple removed a prohibition on targeted communications between developers and consumers outside of the app, meaning that developers are now free to communicate outside the apps about external purchasing options (or anything else). But that settlement did not require Apple to modify or remove the anti-steering provision, making it even more jarring that a case involving a single plaintiff would do just that. Not only does this contradict the principle that “injunctive relief should be no more burdensome to the defendant than necessary to provide complete relief to the plaintiffs,” it could also cause serious harm to nonparties who had no opportunity to argue for more limited relief.
The district court’s nationwide anti-steering injunction is also difficult to reconcile with the court’s simultaneous rejection of Epic’s antitrust claim.
In its decision, the district court recognized that Apple’s walled-garden model yields procompetitive consumer benefits—including greater privacy and data security—and that such benefits are cognizable under federal antitrust law.
The district court conceded that Apple’s “closed” distribution model allows the company to curate the App Store’s apps and payment options. For example, Apple’s guidelines exclude apps that pose data-security threats, threaten to impose physical harm on users, or undermine child-safety filters. These rules increase trust between users and previously unknown developers, because users do not have to fear that their apps contain malware. The terms also alleviate user fears about payment fraud. By increasing the total value of the platform, these benefits increase the total number of transactions it facilitates. Indeed, those wondering about the pro-consumer aspects of Apple’s walled-garden model might consider Epic’s 2023 settlement with the Federal Trade Commission (FTC): Epic agreed to pay $245 million to settle charges that it “trick[ed] players into making unwanted purchases and let children rack up unauthorized charges without any parental involvement.”
In addition, anti-steering provisions (especially in two-sided markets) have other, legitimate procompetitive benefits, such as preventing free riding. “Free riding” occurs when someone uses a valuable resource without paying for it. In this case, Apple owns a valuable resource that it has created and steadily improved: the iPhone and iOS ecosystem, including the App Store. Apple currently charges commissions of between 15% and 30% for digital goods sold through the App Store, including for certain in-app purchases. Epic would like to access that ecosystem without paying.
But while Epic may benefit from its long-term strategy to reduce the fees it pays to Apple, consumers might not. If reductions in revenue from the iOS ecosystem mean that Apple has less incentive to invest in it, Epic’s gain may come at the consumer’s expense. In other words, by preventing free riding, anti-steering provisions maintain Apple’s incentives to invest in its iOS, to the ultimate benefit of both sides of the market: consumers and developers.
The district court correctly rejected Epic’s primary claim, as Epic failed to establish under antitrust law any cognizable harm from Apple’s prohibition of third-party app stores and IAPs. In essence, that foreclosed Epic’s ability to directly circumvent the App Store and pay a lower commission, or none at all. But in granting a nationwide injunction against Apple’s anti-steering provisions, the district court facilitated precisely that type of free riding. And, since Apple’s practice of vetting unsafe payment systems and malware on its App Store depends on its ability to prevent third parties from “steering” consumers towards purchase mechanisms other than Apple’s secure IAP system, the district court also undercut the very security and privacy benefits which it recognized as valid procompetitive justifications for Apple’s policy.
The fact that anti-steering provisions are procompetitive should be a relevant factor in whether a federal court grants nationwide injunctive relief. To interpret California’s UCL as the district court has done—in a way at loggerheads with federal antitrust law, while permitting a nationwide injunction—is to undermine the fundamental goal of antitrust policy, and to do so on a national level.
As discussed above, the district court recognized Apple’s security arguments as a key procompetitive factor that determines Apple’s success and increases output across the platform, ultimately benefiting both consumers and developers. Yet the court issued an unnecessarily broad injunction against Apple’s anti-steering provisions that risks chilling procompetitive conduct by deterring investment in efficiency-enhancing business practices, such as Apple’s walled-garden iOS (e.g., by facilitating free riding).
Even more egregious is that the district court’s injunction risks undermining federal antitrust law by enjoining conduct under state unfair competition law that is recognized as benign—and even beneficial—under federal antitrust law. If the district court’s remedy is left to stand, state laws will be stretched beyond their territorial remit and used to contradict federal antitrust laws nationally, thus eviscerating federal antitrust policy from the bottom-up. This is not an unrealistic prospect: California has already shown an appetite to use its UCL to seek nationwide injunctions (see here).
Apple’s petition for certiorari arises from Epic’s state law claims, on which Apple lost. Epic’s petition, on the other hand, arises from the rule-of-reason framework under federal antitrust law, on which Epic lost. Epic contends that there is no need to consider costs in assessing less-restrictive alternatives (LRAs) under step three of the rule-of-reason analysis, and argues that, as a matter of law, courts should undertake a fourth step of “balancing” competitive effects.
On the former point, both parties agreed that, in line with the 9th Circuit’s 2015 O’Bannon decision:
[T]o be viable . . . an alternative must be “virtually as effective” in serving the procompetitive purposes of the [challenged restraints], and “without significantly increased cost.”
Further, as we argued in our amicus brief to the 9th Circuit in Epic v Apple, the reliance on LRA in this case is misplaced for at least two reasons. First, by failing to show net anticompetitive harm accounting for both sides of a two-sided market, Epic failed at step one of the rule-of-reason analysis, thus rendering LRAs irrelevant (see also here). Second, forcing Apple to adopt the “open” platform that Epic champions would reduce interbrand competition and improperly permit antitrust plaintiffs to commandeer the judiciary to modify routine business conduct any time a plaintiff’s attorney or district court can imagine a less-restrictive version of a challenged practice, and to do so independent of whether the practice promotes consumer welfare. This is particularly problematic in the context of two-sided platform businesses, where such an approach would sacrifice interbrand, systems-level competition for the sake of a superficial increase in competition among a small subset of platform users (see also here).
On the “fourth step” point, it should be noted that the Supreme Court’s most recent rulings in this area of law—i.e., Alston and Amex—did not require a fourth step. And why would they? Cost-benefit analysis is already baked into the rule of reason. As the 9th Circuit itself recognizes:
We are skeptical of the wisdom of superimposing a totality-of-the-circumstances balancing step onto a three-part test that is already intended to assess a restraint’s overall effect.
Further:
Several amici suggest that balancing is needed to pick out restrictions that have significant anticompetitive effects but only minimal procompetitive benefits. But the three-step framework is already designed to identify such an imbalance: A court is likely to find the purported benefits pretextual at step two, or step-three review will likely reveal the existence of viable LRAs.
It is therefore unclear what benefits a fourth step would offer. In most cases, it would only serve to “briefly [confirm] the result suggested by a step-three failure: that a business practice without a less restrictive alternative is not, on balance, anticompetitive.”
The “fourth step” question was complicated by the 9th Circuit, which held (albeit reluctantly) that where the plaintiff fails to show an LRA as part of the “third step” of the rule-of-reason analysis, a fourth step is required to weigh the procompetitive against the anticompetitive effects. The problem with this logic is twofold. First, it is circular. If, as the 9th Circuit notes, the rule of reason is not a “rotary list,” why was the district court’s failure to undertake a fourth step seen as a mistake (even if, by the circuit court’s own admission, a harmless one)? Shouldn’t it be enough that the district court weighed the procompetitive and anticompetitive effects correctly? And second, it could grant plaintiffs not one, but two last-ditch (and unjustified) attempts to make their case, even after having failed at previous steps.
While the district court’s decision (where these blemishes are less visible) stands, the Supreme Court could have used the opportunity to set the record straight. The interpretation of LRAs and the fourth step of the rule-of-reason that the circuit court espoused could have ramifications not just for the parties in the present dispute, but for antitrust law more broadly.
That Epic “lost” because it didn’t get exactly what it wanted doesn’t mean that Apple won. Nor does Apple’s loss on the state claims do much for Epic, either.
Apple’s removal of the anti-steering provisions is unlikely to benefit Epic, or any other developers. Even if Apple jettisons its anti-steering provisions (and thus can no longer rely on its IAP to collect a commission on every sale), it is still allowed to recoup its investment through other means.
For instance, Apple could allow independent payment processors to compete, and charge an all-in fee of 30% when Apple’s IAP is chosen. To recoup the costs of developing and running its App Store, Apple could also charge app developers a reduced, mandatory per-transaction fee (on top of developers’ “competitive” payment to a third-party IAP provider) when Apple’s IAP is not used. Indeed, where a similar remedy has been imposed already, Apple has taken similar steps. In the Netherlands, for example, where Apple was required by the Authority for Consumers and Markets to uncouple distribution and payments for dating apps, Apple has adopted a policy under which any apps that want to use a non-Apple payment provider must still “pay Apple a commission on transactions” of 3 percentage points less than normal (so, 27%, for most transactions), a slightly “reduced rate that excludes value related to payment processing and related activities.” (see here).
Something similar is likely to happen in the United States. Indeed, it seems to have happened already. While Apple now lets developers link to outside payments, it is still charging a 27% commission, even where buyers obtain digital goods and services from a website linked to from within the app (see here).
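A rough comparison shows why. In the sketch below (Python; the 30% and 27% commissions are the figures reported above, while the roughly 3% cost of a third-party payment processor is an assumed, illustrative number rather than anything from the ruling), a developer’s net take on a $100 sale is essentially identical either way:

```python
# Rough comparison of a developer's take on a $100 in-app purchase.
# The 30% IAP and 27% link-out commissions are as reported above; the
# ~3% third-party payment-processing cost is an assumed figure.

price = 100.00

# Option 1: Apple's IAP, all-in 30% commission (processing included).
net_iap = price * (1 - 0.30)

# Option 2: steer the buyer to an external payment flow.
apple_link_commission = price * 0.27   # still owed to Apple
processor_fee = price * 0.03           # assumed third-party processing cost
net_external = price - apple_link_commission - processor_fee

print(f"Net via Apple IAP:     ${net_iap:.2f}")       # $70.00
print(f"Net via external link: ${net_external:.2f}")  # $70.00
```

On these assumptions, the two routes net the developer the same $70, and the external route adds its own tracking and compliance burdens.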
Materially speaking, then, the injunction changes practically nothing. Developers still have to pay Apple an almost identical commission as before, whether they use Apple’s IAP or not. That is the sense in which Epic CEO Tim Sweeney was right to regard the Supreme Court’s denial of certiorari as a wholesale loss. If anything, this outcome could be even more cumbersome, and therefore worse, for developers. As Ben Thompson has written:
I wouldn’t be surprised if Apple does the same in this case: developers who steer users to their website may be required to provide auditable conversion numbers and give Apple 27%, and oh-by-the-way, they still have to include an in-app purchase flow (that costs 30% and includes payment processor fees and converts much better). In other words, nothing changes.
[…]
The 7-day attribution period is pretty aggressive, and gets closer to the worst-case scenario I described above. Now not only will Apple collect whenever a user initiates a purchase within an app, but they also insist on collecting even if a user comes back to the webpage (not app!) at any time within a week after clicking the app. That, by extension, means that developers will need to track users to know if they arrived on the website from said link.
Furthermore, it could also reduce the overall attractiveness of Apple’s platform by making it more vulnerable to the security and privacy threats posed by third-party IAPs.
Another question is whether or not Apple’s “victory” will be overturned by the imminent entry into force of the European Union’s Digital Markets Act, which will supposedly force Apple to allow alternative IAPs and App Stores on its iOS (for a skeptical view, see here). For now, however, Apple’s only victory is that it didn’t lose.
The post Four Problems with the Supreme Court’s Refusal To Hear the Epic v Apple Dispute appeared first on Truth on the Market.
The Affordable Connectivity Program (ACP) is a federal program, administered by the Federal Communications Commission (FCC), that provides eligible low-income households with discounts of up to $30 per month for broadband-internet service, and up to $100 for a laptop, desktop computer, or tablet from a participating provider. Congress created the program in 2021 as part of the COVID-19 relief package.
The ACP has been funded by a $14 billion appropriation, but that money is expected to run out this year. In October 2023, FCC Chair Jessica Rosenworcel requested $6 billion to keep the program running through the end of 2024. If additional funds are not appropriated, the program will shutter by May 2024, affecting nearly 23 million currently enrolled households, or about 17% of all U.S. households.
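A back-of-the-envelope calculation, using only the figures above, shows why the money runs out on roughly that timetable. The sketch below is illustrative only: it assumes every enrolled household draws the full $30 benefit, holds enrollment flat, and ignores device subsidies and the larger benefit available on Tribal lands.

```python
# Back-of-the-envelope ACP burn rate, using the figures cited above.
# Assumptions (mine, not the post's): every enrolled household draws the
# full $30/month benefit; enrollment stays flat; device subsidies and the
# larger Tribal-lands benefit are ignored.

households = 23_000_000   # currently enrolled households
benefit = 30              # maximum monthly discount, in dollars

monthly_outlay = households * benefit
print(f"Approximate monthly outlay: ${monthly_outlay / 1e9:.2f} billion")
# -> roughly $0.69 billion per month

requested = 6e9           # Chair Rosenworcel's October 2023 request
print(f"$6 billion lasts about {requested / monthly_outlay:.0f} months")
# -> roughly 9 months, i.e., from the request through the end of 2024
```

On those assumptions, a $6 billion appropriation covers roughly nine months of benefits, which lines up with the request to keep the program running through the end of 2024.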
Legislation was introduced last week—the Affordable Connectivity Program Extension Act—to appropriate $7 billion. Despite bipartisan support for the bill, several powerful Republican members of Congress sent a letter to Rosenworcel decrying the program as “wasteful.” The legislators—Sens. John Thune (R-S.D.) and Ted Cruz (R-Texas), and Reps. Cathy McMorris Rodgers (R-Wash.) and Bob Latta (R-Ohio)—complain that the program is ineffective in connecting nonsubscribers to the internet, and that the FCC has shirked its obligation to collect data on broadband adoption by first-time subscribers under the ACP.
These criticisms are somewhat valid, but also misleading in some ways. In a recent International Center for Law & Economics issue brief, we find, on the one hand, that the ACP has faced difficulties in stimulating sufficient interest among some segments of the roughly 5% of households that could access the internet but do not take up service. On the other hand, the ACP’s subsidies appear to have successfully enabled already-subscribed households to maintain at-home internet service through the COVID-19 pandemic and afterward. In other words, the benefits may be less about connecting the unconnected and more about helping households stay connected.
With no deal imminent, the FCC plans to wind down the ACP. The agency will stop accepting new enrollees after Feb. 7 and will bar providers from joining thereafter. Those providers also will have to barrage beneficiaries with warnings about looming rate hikes. Under the FCC’s winddown order, households losing their ACP subsidies must affirmatively opt in to continue their internet service at the higher unsubsidized rates.
An ACP winddown could have far-reaching—and unintended—consequences. The median household receiving ACP subsidies reports paying $40 a month for internet service. Loss of the $30 ACP subsidy would push that bill to $70, a 75% increase in those subscribers’ monthly internet bills. Such a steep increase may cause many households to think twice about continuing their internet service. Many also may see service discontinued because they were unaware they had to opt in or failed to take the proper steps to opt in. This could amount to a double-whammy that serves to disconnect millions of households from the internet.
In addition, the ACP prompted many internet providers to offer low-priced plans targeted to ACP households. Without the ACP subsidies driving demand, providers may take steps to eliminate these plans. If that’s the case, we may face a triple-whammy.
In addition to these foreseeable consequences, the end of the ACP could ripple through to other programs, such as the Broadband Equity, Access, and Deployment (BEAD) Program, which was established by the Infrastructure Investment and Jobs Act (IIJA) and is administered by the National Telecommunications and Information Administration (NTIA). As explained by the Information Technology & Innovation Foundation (emphasis added):
The IIJA and NTIA’s Notice of Funding Opportunity were therefore explicit that networks funded through BEAD include a low-cost service option for eligible consumers, which NTIA will evaluate based on the total recurring charges to the consumer, accounting for any subsidies like the Affordable Connectivity Program (ACP). Per NTIA guidance, states’ “Initial Proposals” for the BEAD funds, which are all due by year’s end, must include the expected price of this low-cost option, or their formula for arriving at it, for NTIA’s approval—prior to the disbursement of funds.
Thus, if the ACP is wound down, then BEAD proposals that rely on the ACP to support a low-cost option may be rejected, imperiling some states’ access to BEAD funding. ITIF points out that states could reduce this risk by developing low-cost options that do not rely on ACP subsidies, but this would likely impose additional administrative burdens on an already-burdensome process marked by red tape.
The ACP can be thought of as a demand-side program that reduces the cost of internet adoption for households. The BEAD program can be thought of as a supply-side program that boosts the deployment of internet service to underserved areas. If the winddown of the ACP slows BEAD deployment, then internet service may experience both a decrease in demand and a slowdown in supply. That means 2024 could be the first year in history in which the United States sees a decrease in the number of households connected to the internet.
As we conclude in ICLE’s issue brief, despite its shortcomings, the ACP is a much better policy than other alternatives—such as direct rate regulation or municipal broadband. Rate regulation would discourage investment and innovation in the broadband market. Municipal broadband would create unfair competition and waste local taxpayer money. If the ACP goes away, these inferior policies will likely be trotted out and gain some traction with policymakers. The ACP is not perfect, but it’s good enough, and it’s better than the alternatives.
The post Slouching Toward Disconnection and the End of the ACP appeared first on Truth on the Market.
That would appear to simplify antitrust analysis, and it certainly can. But as is so often the case in antitrust, apparent simplicity can be confounding in application. Is it really true that harm in any market, however narrow, is grounds to block a merger, whatever its broader effects? Is that the best reading of legal precedent? Is it required? And is it either practicable or desirable?
This post will be the first of several. Here, we focus on the question of out-of-market merger effects: efficiencies and other merger benefits under the Clayton Act. We note, however, that questions about the domain of inquiry—of in-market vs. out-of-market effects—can arise in conduct cases brought under Section 1 of the Sherman Act (as in Ohio v American Express, where the Supreme Court considered both sides of a two-sided transactional platform as a single market) or under Section 2 (as in Aspen Skiing Co., where the Court, considering allegedly exclusionary conduct, held that “it is appropriate to examine the effect of the challenged pattern of conduct on consumers, on respondent, and on petitioner itself.”) And analogous questions can confound market definition in either merger or conduct cases.
This topic has found new salience, given a heightened emphasis on labor-competition issues at the federal antitrust agencies and, specifically, an express concern with labor effects in merger policy. This can be seen in, e.g., the merger guidelines jointly issued by the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) in December 2023, and in the FTC’s November 2022 Policy Statement Regarding the Scope of Unfair Methods of Competition Under Section 5 of the Federal Trade Commission Act, which finds harm to competition (or “a tendency… to negatively affect competitive conditions”) in conduct—including mergers—that harms “consumers, workers, or other market participants.” In addition, while the DOJ’s case blocking the merger of Penguin Random House with Simon & Schuster identified output effects in a product market—in addition to alleged impact on certain skilled labor—it was also styled as a labor-monopsony case.
The new merger guidelines declare ongoing—indeed, heightened—concern with labor-market effects, arguing that “Labor markets frequently have characteristics that can exacerbate the competitive effects of a merger between competing employers.” In that regard, the guidelines note, e.g., switching costs and search frictions that, of course, may be high or low and may impose costs on employers as well as employees. Guideline 10 doesn’t merely emphasize a potential concern with labor-market effects. It doubles down on the “any market” issue—first, by marrying an express concern with labor-market effects with likely (or perhaps merely potential) impact on competition “in any line of commerce and in any section of the country” (emphasis in original; that is, in the guidelines, not the Clayton Act) and, second, by stating that “a merger’s harm to competition among buyers is not saved by benefits to competition among sellers.” That’s of special relevance because the merger of two national firms serving national product markets may often implicate numerous labor markets in any given locale, as well as labor markets in different locales.
Policy statements from the agencies suggest that a transaction’s likely (or perhaps just possible) impact on workers should be a routine consideration in merger scrutiny, not merely in reviews of labor-specific conduct. Are mergers to be challenged—and, if challenged, blocked—if they harm workers in a single labor market, even if they are procompetitive (and pro-consumer) in the relevant product market? Some labor markets may be national or even international—say, the search for a full professor and endowed chair in the economics department at MIT, or for an NHL goalie. But the large majority of labor markets are local. What if a merger harms workers in one local labor market but benefits workers in another?
At the core of the debate is a larger discussion regarding how antitrust deals with “out-of-market efficiencies” and “trading partner welfare.” A recent, interesting discussion on X (the platform formerly known as Twitter), involving Geoff Manne, Herb Hovenkamp, Steve Salop, and others illustrated some of the complexity in implementation.
Hovenkamp is right that, as a general matter, Article III judges are not experienced at balancing costs and benefits in the way that general consideration of cross-market efficiencies might suggest. Indeed, a single-market guideline might make a reasonable heuristic in many cases where out-of-market benefits are unclear or hard to substantiate.
But if the “any market” rule is truly a rule (taken both literally and seriously), why haven’t the agencies, over, say, the past 100 years, opposed more mergers on labor grounds, while refusing to consider productive efficiencies or other merger benefits on the grounds that they are “cross-market” or “out-of-market” efficiencies and, consequently, that there was no proper justification for harm to a single labor market?
Consider, for example, two auto manufacturers proposing to merge—in no small part, to streamline both manufacturing and distribution across their various models and lines. For an historical example, in the wake of the 1950 Celler-Kefauver amendments to the Clayton Act (which ought to be old enough for FTC Chair Lina Khan and Assistant U.S. Attorney General Jonathan Kanter), consider the 1954 merger that created American Motors (AMC).
Suppose that the horizontal merger does not raise any competition concerns in auto manufacturing or distribution markets—indeed, it could improve competition in both of those markets. But suppose, too, that there is at least one local labor market in which employment will drop as a result of the streamlining. An old inefficient manufacturing site is shuttered, and some workers are offered the opportunity to transfer, but others are not, and some who could move choose not to. The suppression of employment/output in a given local labor market—say, welders within commuting distance of Flint, Michigan—would seem to be a cognizable harm in a line of commerce, in a single market. Would that sink the merger? Should it?
One could take a deep dive into the 1954 merger, but the problem is ubiquitous across diverse industries or lines of commerce and was characteristic of, e.g., industrialization. The more we look around, the more we see the potential to identify these harms in a single line of (labor) commerce.
Building on the AMC example, suppose a merger will consolidate manufacturing facilities in two states. The acquiring firm will expand production at a relatively new facility in State A, but its post-merger plans are to shutter an older, less-efficient facility owned and operated by the target firm, 1,100 miles away in State B. Numerous labor markets are involved, because, e.g., welders do not compete for jobs with fork-lift operators or engineers. Most, but not all, workers in the old facility will be offered the opportunity to move and work at the new facility; it’s likely that some will accept such offers and others will not. And suppose that the contemplated move is competitively significant in at least some of the labor markets. By assumption, suppose the merger increases efficiency: for any level of output, the merged party can produce it at lower cost than the separate companies could. The downstream product market is relatively competitive, so some of those efficiency gains will be passed on to consumers in the form of lower prices. Suppose there’s a national product market with a very large number of consumers relative to workers in the local labor market at issue.
To be sure, in any given case, out-of-market efficiencies (and other merger benefits) may be slight, poorly substantiated, unlikely, or even pretextual. The same can be said of in-market efficiencies. But that ain’t necessarily so. And in the aggregate, such merger benefits might well swamp the harm done in a particular local labor market. That could be true if consumer benefits are realized nationwide, across a large number of consumers, and harms are felt by a relatively small number of workers facing somewhat lower demand for their services in a specific local labor market. It could also be true if there are net gains to labor across states A and B, where harms in one labor market are offset by gains in another.
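A deliberately stylized numerical sketch makes the aggregation point concrete. Every magnitude below is invented for illustration; the only point is that small per-consumer gains spread across a national market can dwarf a concentrated harm in one local labor market.

```python
# Stylized illustration only: every number is invented to show how widely
# dispersed consumer gains can swamp a concentrated local labor harm.

consumers = 50_000_000        # buyers in the national product market
saving_per_consumer = 2.00    # assumed annual price reduction passed through

affected_workers = 1_500      # e.g., welders in one local labor market
wage_loss_per_worker = 4_000  # assumed annual earnings reduction, dollars

consumer_gain = consumers * saving_per_consumer       # $100,000,000
labor_harm = affected_workers * wage_loss_per_worker  # $6,000,000

print(f"Aggregate consumer gain:    ${consumer_gain:,.0f}")
print(f"Aggregate local labor harm: ${labor_harm:,.0f}")
print(f"Net effect:                 ${consumer_gain - labor_harm:,.0f}")
# Under a literal 'any market' rule, the presumption would attach to the
# local labor market regardless of the (much larger) offsetting gains.
```

Nothing in the sketch says such offsetting gains will always exist, or always dominate; the point is only that a rule indifferent to them can condemn transactions that are welfare-enhancing in the aggregate.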
What do we do about that? What would be done—or could be done—under the 2023 Merger Guidelines? A given merger might implicate many distinct local labor markets, all of which cross-cut the geographic and product markets (originally) at issue.
That’s a complication, but it’s a complication that cuts both ways: there is no practicable way for the agencies to scrutinize (and analyze) all such markets for every proposed merger; at least, not with anything resembling current staffing, experience, and expertise (not to mention statutory timetables). Even if the agencies confined themselves to obvious overlaps in specific labor markets (occupational and geographic), there’d be no real possibility of litigating to block all such mergers. It might not turn Chair Khan’s “98% go through without even any second questions being asked by the agencies” figure on its head, but it could be applied to a great many mergers. What if the attorney general of State B sues, in parens patriae, to block the merger? A settlement is reached that’s beneficial to a labor market in State B but detrimental to a labor market in State A. Can the attorney general in State A intervene?
These general concerns about applying a strong reading of “in any market” to labor concerns are heightened by the agencies’ emphasis on structural presumptions in merger analysis, which is at odds with several decades of economic learning. Guideline 1 states that “[m]ergers raise a presumption of illegality when they significantly increase concentration in a highly concentrated market.” The 2010 Horizontal Merger Guidelines also discussed concentration measures, but different ones, and did not use the expression “presumption of illegality.” Under the 2010 guidelines, mergers “resulting in highly concentrated markets that involve an increase in the HHI of between 100 points and 200 points potentially raise significant competitive concerns and often warrant scrutiny” (emphasis added); and “[m]ergers resulting in highly concentrated markets that involve an increase in the HHI of more than 200 points will be presumed likely to enhance market power.” Under the 2010 guidelines, a highly concentrated market was one with an HHI above 2,500. Under the 2023 guidelines, a highly concentrated market is one with an HHI above 1,800, and a presumption of illegality applies with a change in HHI of greater than 100. The new guidelines also impugn any merger resulting in a firm with a market share greater than 30% and an increase in HHI greater than 100.
In brief:

- Under the 2010 guidelines, a merger was presumed likely to enhance market power only if it produced a highly concentrated market (post-merger HHI above 2,500) and an increase in HHI of more than 200 points.
- Under the 2023 guidelines, a merger is presumed illegal if it produces a post-merger HHI above 1,800 with an increase in HHI of more than 100 points, or a merged-firm share above 30% with an increase in HHI of more than 100 points.
This is not a default concern with, e.g., mergers to monopoly or 3-to-2 mergers. For example, firms in a market with seven viable employers of a given class of employees might, pre-merger, have market shares of 30%, 20%, 15%, 15%, 9%, 8%, and 3%. Under the new guidelines, that would already be deemed highly concentrated, with an HHI (the sum of the squared market shares) of 1904. If the firm with 30% market share (in any single job category in any locale) were to acquire the firm with 9% market share, the HHI would jump to 2444—an increase of 540 points. That is, the acquisition would be presumed illegal under both of Guideline 1’s structural presumptions. Moreover, the presumption could not be rebutted by, e.g., any benefits to consumers in the downstream product market, no matter their magnitude (and no matter the number of consumers relative to the number of workers). Indeed, as noted above, it’s not at all clear the agencies would consider benefits to other labor markets, whether defined by different occupational categories or by different locales.
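For concreteness, here is a minimal sketch of that arithmetic and of the two sets of structural screens (the thresholds are as described above; the seven-firm market is the hypothetical one just given):

```python
# Structural screens under the 2010 and 2023 guidelines, applied to the
# hypothetical seven-employer labor market described above.

def hhi(shares):
    """HHI is the sum of squared market shares (in percentage points)."""
    return sum(s ** 2 for s in shares)

pre_merger = [30, 20, 15, 15, 9, 8, 3]
post_merger = [39, 20, 15, 15, 8, 3]   # the 30% firm acquires the 9% firm

pre, post = hhi(pre_merger), hhi(post_merger)
delta = post - pre
print(pre, post, delta)  # 1904 2444 540

# 2010 guidelines: presumed likely to enhance market power only if the
# post-merger market is highly concentrated (HHI > 2500) and delta > 200.
presumed_2010 = post > 2500 and delta > 200

# 2023 guidelines: presumption of illegality if post-merger HHI > 1800
# and delta > 100, or if the merged share exceeds 30% and delta > 100.
presumed_2023 = (post > 1800 and delta > 100) or \
                (max(post_merger) > 30 and delta > 100)

print(presumed_2010, presumed_2023)  # False True
```

Note that the same transaction would not even have triggered the 2010 presumption, since the post-merger HHI of 2444 falls short of 2,500; under the 2023 guidelines, it is presumptively illegal twice over.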
That seems a bit of a conundrum, as the Clayton Act does not prohibit all (or nearly all) mergers. That’s not in the statute; it’s not in established agency practice; and it’s not evident in a century of case law. So why don’t we see more precedents in favor of cross-market efficiencies, at least as they might offset local labor market harms? Perhaps because it’s hard; and perhaps because the agencies just haven’t brought those labor cases, even if they’ve brought a few others. Not in this century. Not in the last one either.
Of course, the agencies could manage selective enforcement under this sort of labor theory, but that would lend itself to arbitrary enforcement of the merger laws, perhaps largely blocking what would otherwise appear to be pro-competitive or benign mergers. This is true of any change in the guidelines—such as lowering HHI thresholds—that increases the possible cases the agencies could bring without increasing the resources that the agencies have to bring them. In both cases, the result is more arbitrary (and possibly political) case selection.
The 2010 Horizontal Merger Guidelines noted a measure of prosecutorial discretion. Perhaps the history of labor antitrust is the exercise of just that sort of discretion on the part of the agencies and the courts—leaving most labor concerns to federal and state labor law, while reserving antitrust scrutiny of labor issues to those more readily parsed from established merger-enforcement concerns. That would still be a matter of discretion, but at least a predictable and practicable one. And perhaps that’s all for the best.
Perhaps not. If not, however, there seems to be a very general problem here, and possibly an intractable one. At least, there would seem to be a problem if one still cares about consumer welfare, and about the possibility that a non-trivial number of transactions might foster it.
The post FTC v. Illumina/Grail – A Rare FTC Merger Victory? (Actually, a Loss for Consumers) appeared first on Truth on the Market.
The press release crowed that the 5th U.S. Circuit Court of Appeals “issued an opinion in the case finding that there was substantial evidence supporting the Commission’s ruling that the deal was anticompetitive.”
True enough. Although the 5th Circuit’s assessment of the FTC’s competitive-harm arguments certainly is open to criticism, that is not the purpose of this commentary.
The FTC also, however, was constrained to note that the circuit court “vacated the Commission’s order and remanded it for further proceedings based on the standard the Commission applied when reviewing one aspect of Illumina’s rebuttal evidence.” In so noting, the commission failed to explain that the reason for the remand went to the heart of Illumina’s argument that its acquisition of Grail was not anticompetitive.
The circuit court’s opinion based its remand on Illumina’s “open offer”—i.e., the long-term supply agreement under which Illumina offered to make its platform available to all cancer blood-test developers. The 5th Circuit agreed with former Commissioner Christine S. Wilson that the commission should have enabled Illumina to show the open offer’s competitive effects as part of its rebuttal to the prima facie case. The FTC had failed to do so, instead claiming that the open offer could only be considered at the remedy stage, following a finding of liability.
The court instead viewed the open offer as “a post-signing, pre-closing adjustment to the status quo implemented by the merging parties to stave off concerns about potential anticompetitive conduct.” The 5th Circuit agreed “with those courts [in other cases involving AT&T and Microsoft] that such agreements should be addressed at the liability—not remedy—stage of the Section 7 proceedings.”
The 5th Circuit’s broader message is that antitrust enforcers should fully consider proposed safeguards designed by merging parties to ensure that possible future competitive harms are avoided. This is important. To the extent that other courts agree with this common-sense message (as they should), antitrust enforcers will no longer be able to ignore actual fixes by merging parties in rendering Section 7 liability assessments.
Given the court’s remand order, why did Illumina immediately acquiesce and state that it would divest itself of Grail? One can only speculate, but it is quite possible that Illumina believed the commission inevitably would dismiss the open offer as inadequate (despite the fact that it facially appears to be more than adequate) and once again find liability. Illumina would then have to decide whether to spin off Grail or to appeal and wait for yet another court decision. Such a decision might well prove unfavorable to Illumina, to the extent the court deferred to FTC “reasoned fact finding” purporting to show the open offer’s inadequacies. Illumina may simply have decided that direct litigation costs, plus legal uncertainty and the diversion of corporate resources, militated in favor of cutting its losses and letting Grail go.
In sum, the FTC’s “success” in the Illumina/Grail matter amounts to far less than a full legal victory for the commission. Far more significant is the negative signal that this case sends to sophisticated tech-savvy firms that want to acquire (or reacquire) complementary assets to enhance their offerings and speed up the adoption of welfare-enhancing innovations.
As law & economics experts have pointed out (see, for example, here and here), the FTC’s Illumina/Grail case has been a travesty from start to finish, focusing on possible theoretical harms in future markets while ignoring real harm in existing markets. It may even cost future lives (see here). Let us hope that this supposed FTC “win” is not misinterpreted as a victory for sound merger-enforcement policy. It should properly be viewed as a misguided welfare-inimical antitrust crusade of the sort that should be avoided.
The post In Reforming Its Antitrust Act, Argentina Should Not Ignore Its Institutional Achilles Heel appeared first on Truth on the Market.
Deregulation necessarily means relying to a greater degree on free-market competition to procure the best possible prices and quality for consumers and citizens. It therefore makes sense to strengthen competition law. As Judge Richard Posner has put it:
Because deregulation contemplates the substitution of competition for regulation as the “regulator” of the deregulated markets, deregulation increases the importance of antitrust law as a means of preventing unregulated firms from eliminating competition among themselves by mergers or price-fixing agreements.[2]
This was the case in Latin America after several countries adopted market-oriented reforms in the early 1990s.[3] But do Milei’s proposed modifications to the Argentinian Competition Act actually achieve that?
The question might be a red herring, as I would argue that the more pressing issue facing Argentinian competition law is that the act currently in force is not actually enforced. The most important reform introduced by Act No. 27.442 of 2018 is that it created an independent agency—the National Competition Authority (Autoridad Nacional de la Competencia or ANC)—to replace the Comisión Nacional de Defensa de la Competencia (CNDC), a body dependent on the secretary of commerce and, as such, subject to political pressure.[4] As CNDC Commissioner Pablo Trevisan has pointed out, the lack of an independent competition authority had been the Argentinian antitrust law’s Achilles Heel for decades.
Six years on, however, not only is there still not an ANC, but the members of the incumbent CNDC are all still political appointees.
While the text of Milei’s omnibus bill has a provision “creating” a new independent competition agency, this was already included in Act No. 27.442. The proposed bill thus still does not address the Achilles Heel, and how could it? Like many other things in Argentina, the absence of an independent competition authority is primarily a political issue, rather than a legal one. Successive governments have not had the political will or the power to implement an independent agency with independent officials.
The agency’s lack of independence is important because an agency that is not independent can be used to weaponize antitrust law and distort competition, rather than to protect it. This is especially important where the law assigns a great degree of discretion to enforcers, as is the case in Argentina. Although Argentinian enforcers have declared that “Act No. 27.442 does not include express provisions regarding non-competition aims. Decisions of the CNDC and the NCA (when established) should exclusively be focused on competition issues,” the act’s first article prohibits agreements, mergers, and abuses of a dominant position that harm the “general economic interest.”
Additionally, as Argentinian competition attorney Julián Peña aptly observes, the CNDC published new merger-control regulations in May 2023 that, among other changes, expanded the definition of “general economic interest” in merger analysis. The new definition requires defendants to show benefits related to employment, import substitution, investment, the environment, or gender policies. Rather than subject the long arm of the government to a workable standard of proof of harm (such as the consumer-welfare standard), this broadly defined notion of “general economic interest” allows highly discretionary interventions against business models and practices that benefit society overall, in order to protect specific groups (i.e., small businesses, local businesses, etc.).
One of the modifications proposed by the omnibus bill is a good example of the importance of independent authorities for legal stability and predictability (and it certainly is not freedom-increasing). Article 10 of the proposed new Antitrust Act allows the government to require (mandate) the notification of merger transactions if there are “reasonable indications that the economic concentration operation in question may constitute, protect or strengthen a dominant position,” even if the operation has not met the established thresholds. The rule’s broad language (“reasonable indications”) gives the government significant discretionary power, especially when combined with the expansive definition given to the “general economic interest.”
This language also resembles that used in the Peruvian Merger Control Act. When the Peruvian act was under discussion, I argued that such a rule “distorts the function that the notification thresholds should fulfill, which is to be an objective and easy indicator of whether or not it is appropriate to pass the review.” Such a rule would require the merging parties to carry out additional legal and economic analysis prior to the operation’s notification—very similar to what would be carried out after the notification—to determine whether it entails any substantial risks to competition. An efficient merger-control regime should consider its administrative costs and the incentives it creates for all merger transactions, not just the ones it actually reviews.
Certainly, one could hope that a more libertarian government would not use its political power to interfere with its antitrust agency’s decisions, but even a libertarian government has self-interested officials. Let’s not forget that underenforcement of competition law can be a problem, too (particularly in the case of cartels, where Argentina lacks a strong record).[5] More importantly, however, we should consider that this power could later be used by more interventionist politicians (where Argentina does have a “strong” record).
Any changes proposed to Argentinian antitrust law must begin by finally addressing its obvious Achilles Heel.
[1] For a full review of the proposed competition law, I would suggest checking this detailed review by Esteban Greco, former president of the Argentinian Competition Defense Commission (Comisión Nacional de Defensa de la Competencia or “CNDC”).
[2] POSNER, Richard. The Effects of Deregulation on Competition: The Experience of the United States, 23 Fordham Int’l L.J. S7 (1999). p. S18.
[3] “After the Washington Consensus (1990), Latin American countries decided to change the Protectionist Model that ruled economic theory until then, and evolved into a development model based on markets that were opened to international free trade, and consequentially more competitive pressure. Pursuant to this change of strategy, all Latin American Countries included a principle of free competition in their Constitutions and issued a new wave of competition laws that they have been increasingly applying since then.” MIRANDA, Alfonso. Competition Law in Latin America. Main Trends and Features. Centro de Estudios de Derecho de la Competencia. Bogotá (2012). Available at: https://centrocedec.files.wordpress.com/2010/06/cornell-lacompetition-20123.pdf (last visited, Jan. 9, 2024).
[4] An indicator of the political nature of the CNDC is that all its members submitted their resignation en masse, in December 2023, due to the change of government.
[5] ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT – INTERAMERICAN DEVELOPMENT BANK. Follow-up to the Nine Peer Reviews of Competition Law and Policy of Latin American Countries: Argentina, Brazil, Chile, Colombia, El Salvador, Honduras, Mexico, Panama and Peru (2012). p. 43. Available at: https://web-archive.oecd.org/2013-08-14/244069-2012Follow-upNinePeer%20Review_en.pdf (last visited, Jan. 9, 2024).
The post Three Problems with Accelerated Access: Will They Be Overcome? appeared first on Truth on the Market.
Improved T-cell count was determined to reliably predict fewer infections in AIDS patients and was accepted as a surrogate endpoint that could be used to demonstrate the efficacy of HIV/AIDS drugs. AZT, the first medicine approved to combat HIV, improved T-cell counts and was provisionally approved on March 20, 1987, based on the surrogate-endpoint data.
The AZT approval convinced many scientists, physicians, and regulators that a successful drug approval could be predicated on the use of surrogate endpoints. As consensus grew about their utility in clinical-trial design, the FDA embraced drug-approval reform and promulgated regulations formalizing the accelerated-approval pathway in 1992.
Weighing risks and benefits underlies all drug approvals. Even when a clinical benefit is shown in a trial, that benefit will not necessarily be sustained for all—or even most—patients. But it is obvious that relying on a surrogate endpoint instead of a demonstrated clinical benefit lowers the odds of a successful clinical outcome. As a result, the FDA has demanded that any company seeking accelerated approval based on surrogate-endpoint data must conduct a post-marketing confirmatory study once the drug is approved. If that study does not confirm a significant clinical benefit, then the drug should probably be withdrawn.
The FDA has discussed whether accelerated approval was creating a different standard, which might require changes in regulatory oversight and even the law, but it has to date rejected the notion of a separate legal standard. On this, the agency is unequivocal: “[t]he evidence available at the time of approval under this rule will meet the statutory standard, in that there must be evidence from adequate and well-controlled studies showing that the drug will have the effect it is represented to have in its labeling.” Accelerated approval did not represent a “lower standard,” nor one “inconsistent with section 505(d) of the Act,” but rather an approval based on assessment of a different type of data demonstrating “that the same statutory standard has been met.”[1]
In 2012, Congress codified the accelerated-approval pathway by enacting the Food and Drug Administration Safety and Innovation Act (FDASIA), which amended the Food, Drug, and Cosmetic Act. In codifying the accelerated-approval pathway, Congress acknowledged the vital role it served for patients with poorly served diseases, and expressed its hope that it would bring lifesaving drugs to the market expeditiously. Congress also affirmed the FDA’s conclusion that accelerated approval did not create a different standard for drug approval, stating that accelerated approval “may result in fewer, smaller, or shorter clinical trials… without compromising or altering the high standards of the FDA for the approval of drugs.”[2]
That Congress and the FDA would consider accelerated approval to involve the same standard as normal approval is both understandable and problematic. It is understandable because the FDA was approving a medicine based on some positive change (a surrogate endpoint) that was linked to a clinical benefit. The agency claimed that this positive change meant the legal standard had been met and that the change was reflected in what was claimed on the label. Approval on that basis allows both the agency and its overseers in Congress to avoid acknowledging the true risk analysis involved, and to avoid changing any laws or loosening their grip on the power of approval.
It is problematic for the same reason. The surrogate-endpoint change may turn out to be clinically irrelevant for many patients. Without a real assessment of risk and benefit, and of how the speed of approval based on surrogate-endpoint data changes that calculus, accelerated approval is a limited change in policy. After all, decisionmaking on more speculative data means more risk is assumed by the patient and physician (and also the insurer or other payer). The increased importance of these other players in making decisions is not acknowledged by either the FDA or Congress.
The ramifications are significant. Will physicians be sued by patients or patients’ relatives when there is a bad outcome? And perhaps more significantly, are all payers duty-bound to cover accelerated-approval medicines in the same way as normally approved medicines? Notably, the Centers for Medicare & Medicaid Services (CMS) and some U.S. states (Oregon, in particular) want to limit payment for some accelerated-approval medications, or to remove such medications from key formularies until they pass regular FDA approval. The concept of risk taking and the more practical payment mechanisms are not directly linked, but a greater appreciation of the former could inform discussions about who pays for drugs and under what circumstances. These issues will be discussed in future posts, as well as later in this post, in the section about approving drugs with marginal efficacy.
Most public and media-reported complaints about accelerated approval appear to be driven by a distrust of pharmaceutical companies. Some of these complaints are entirely legitimate. A key failure of the existing system is that some companies are not fulfilling their obligations to conduct confirmatory studies on products approved under accelerated approval. As National Public Radio recently reported, the manufacturer of one oncology drug (Clolar) approved 20 years ago has still not completed the required confirmatory study. When companies get accelerated approval, they promise to continue studying the medicines, to complete existing trials, and, if required, to start and finish completely new ones. Continuing to sell drugs without doing the trials (and, potentially, finding out the drug doesn’t work) is financially appealing to companies, but it is suboptimal as a strategy. Over time, companies can lose the trust of regulators and, potentially, the benefits of the accelerated-approval system. There are good reasons why trials take a long time for rare diseases, since it is especially difficult to find patients and, hence, trial participants. But continuing to study drugs is the most important requirement of accelerated approval. If companies do not complete the required studies, their products should have approval withdrawn.
Completion of trials may not be a priority for some companies, but patients remain supportive of fast approval, nonetheless. As NPR reports, even parents of children who have died from cancer—in cases where the accelerated drug (Clolar and others) failed or was potentially harmful—support faster access to drugs, even if those treatments lack confirmatory studies. They are not interested in suing the company, but they are critical of the companies for not completing the trials, and of the FDA for not making them do so. This finding is echoed in my own interviews with oncologists, the vast majority of whom said patients’ families had no interest in causing problems for companies making potential breakthrough drugs, even if those companies flouted regulations on study timelines.
Some medications probably aren’t withdrawn quickly enough. For example, some cancer medications stay on the market even after studies fail to show a benefit. This is arguably a far greater issue. If a drug is approved early based on certain surrogate endpoints but then fails to deliver the expected clinical benefit, it should be pulled immediately. The failure of drug companies to complete legislatively required follow-up studies, or to withdraw ineffective medicines, provides support for those who would slow approval processes.
But as explained in the previous post, the vast majority of trials are completed, and drugs that were approved are either withdrawn (if ineffective) or moved to regular approval. In total, 26 cancer approvals have been withdrawn for failing to demonstrate clinical benefit after confirmatory studies, whereas 96 accelerated approvals to treat cancers have gone on to show undeniable clinical benefit. The remainder are still approved and still being investigated, with some clinical benefits shown, but to an uncertain extent.
Over the past dozen years—during which oncology medicines dominated accelerated approvals—the median time from approval to withdrawal of approval was 3.5 years, and conversion from accelerated to regular approval took a median of 2.3 years.
The third problem links the first two. Is the FDA approving medicines with no obvious benefits?
Most of the complaints about underperforming drugs center on treatments for Alzheimer’s disease. The FDA approved Aduhelm (aducanumab) in 2021 for the treatment of AD with limited clinical data to support improvement in patient outcomes, but with evidence that it did reduce amyloid plaques, which are strongly associated with the disease. There was debate about whether this approval should have been given, and even more debate about whether insurers and Medicare should cover the drug.
The Aduhelm decision may have hardened resolve against faster approvals. It certainly has among a subset of physicians and academics, if the remarks in the CMS-required public consultation are any guide. At the same time, approving Aduhelm encouraged other researchers (and their venture-capital backers) and drug companies to push even more aggressively to combat this terrible affliction, which prior to Aduhelm had no treatment options targeting the disease itself.
At one level, with millions suffering from the condition and extremely limited treatment options (the last of which was approved in 2003), it seems it was the right decision to let patients who want to take the risk try a new and marginally effective drug. But it was unlike most prior FDA decisions, partly due to the enormous potential market for the product. It’s a case almost custom-designed to create debate over accelerated approval. Some physicians and physician groups have criticized the FDA, and there is a congressional investigation into the decision to approve Aduhelm.
The drug had safety risks and marginal benefits, but it was left to physicians and patients to decide whether to take it. At least, that was the plan, but insurers, the CMS, and other payers refused to fund it outside of very restrictive clinical trials.
It did, however, encourage other trial sponsors to file for approval. In 2023, lecanemab (Leqembi) was approved, with a better clinical profile than Aduhelm. And in March 2023, the U.S. Department of Veterans Affairs (VA) announced it would provide lecanemab to veterans who meet key criteria, something that had not happened with Aduhelm. This set the VA at odds with Medicare, which has still not agreed to cover lecanemab.
Approving Aduhelm was arguably the correct decision, but it certainly shifted decisionmaking from the FDA to physicians and payers, leading to heated arguments about the future of rapid approvals.
[1] 21 C.F.R. §§ 314.500 et seq. & 601.40 et seq.; 57 Fed. Reg. 58942 (Dec. 11, 1992); id. at 58943–44 (emphasis added); id. at 58944.
[2] See 21 U.S.C. § 356(c). In the codified version, Congress made two notable modifications to the agency’s regulations. First, the original requirement that the new drug provide a meaningful therapeutic benefit to patients over existing treatments was replaced with the more flexible “tak[e] into account . . . the availability or lack of alternative treatments.” Second, Congress specified that accelerated approval “may” be subject to one or both of two requirements: (1) that appropriate post-approval studies be conducted to verify and describe the predicted effect on the surrogate or intermediate clinical endpoint; and (2) pre-dissemination review of promotional materials. Id. In practice, these changes have not altered the FDA’s practices. See H.R. Rep. 112-495, *35–36 (2012); 158 Cong. Rec. H3825-01, H3848 (2012).
The post The Porcine 2023 Merger Guidelines (The Pig Still Oinks) appeared first on Truth on the Market.
The two agencies try to put lipstick on this pig by claiming that the guidelines “emphasize the dynamic and complex nature of competition,” an approach that supposedly “enables the agencies to assess the commercial realities of the United States’ modern economy when making enforcement decisions.” But no amount of verbal makeup prevents this porker from oinking, despite the valiant best efforts of the antitrust agencies’ talented and highly respected chief economists (Susan Athey and Aviv Nevo) to argue otherwise.
The cosmetic changes to the previous draft merger guidelines consist primarily of softening the language regarding the impact of structural presumptions. The reality, however, is that the draft’s structural presumptions remain, as does the draft’s reduction in concentration numbers, lower Herfindahl-Hirschman Index thresholds and all. In response to some public comments, economic and evidentiary analysis previously relegated to appendices was inserted into the main guidelines. But this, once again, is merely a cosmetic fix (akin to “freshening up” the pig’s rouge), not a substantive change.
Also, the 13 special guidelines (really, theories of alleged competitive harm) in the previous draft were trimmed to 11. This was done by removing an economically challenged characterization of vertical mergers (the previous Guideline 6) and deleting language indicating that the special guidelines’ theories were “not exhaustive of the ways that a merger may substantially lessen competition” (former Guideline 13). Old Guideline 6 is not specifically disavowed, however, and the Guideline 13 limitation is retained, showing up in revised words that carry the same message on page 4 of the final guidelines (“factors contemplated in these Merger Guidelines neither dictate nor exhaust the range of theories or evidence that the Agencies may introduce in merger litigation”). The retained “non-exhaustive” language implicitly rules out any safe harbor for nonproblematic mergers, thereby injecting costly uncertainty into merger planning.
Most regrettably, the final guidelines, like the draft version, fail to recognize the substantial economic benefits that countless mergers generate. Such benefits include efficiency-induced cost reductions; innovation-induced quality improvements and new product generation; and reallocation of resources to higher-valued uses. Prior merger guidelines recognized the real possibility of efficiencies, and vowed to provide guidance to let nonproblematic mergers proceed. Not so the final 2023 guidelines.
Furthermore, the final guidelines also adopt a very stringent view of cognizable efficiencies, imposing conditions that will almost never be met in the real world. They also blandly assert that alternatives to mergers, such as contracts, may be employed to achieve claimed efficiencies, without considering that such alternatives may not be achievable in the real world.
Finally, in selectively citing cases, the guidelines ignore the immense changes in U.S. antitrust case law over the last four decades, reflecting an economics-based appreciation for the role of business arrangements in advancing efficiency and consumer welfare. The specter of the discredited primary focus on “increasing concentration” and harm to competitors, which animated old and dated antitrust case law, still haunts these guidelines.
In short, the pig is still a pig. Sophisticated courts hopefully will hear the oinks and agree with Geoffrey Manne that “[t]he primary effect of these updated guidelines is to reduce their utility to courts as a reflection of current legal and economic understanding.” The courts should also recognize that the 2023 guidelines are, in essence, little more than an extended discussion of theories of anticompetitive behavior that provides no true guidance as to which mergers will not be challenged. As such, the guidelines do not guide; they pontificate. They should be withdrawn as soon as possible.
The post The View from Turkey: A TOTM Q&A with Kerem Cem Sanli appeared first on Truth on the Market.
I am a full-time professor of competition law at Bilgi University in Istanbul. I first became interested in the application of competition law in digital markets when a PhD student of mine, Cihan Dogan, wrote his PhD thesis on the topic in 2020. We later co-authored a book together (“Regulation of Digital Platforms in Turkish Law”). Ever since, I have been following these increasingly prominent issues closely.
Turkey has responded to emerging competition issues on two fronts. First, the Turkish Competition Authority (TCA) has been very vigilant in enforcing competition law, specifically Article 6 (abuse of dominance), against digital platforms. The TCA has conducted almost 20 investigations of digital platforms (both local and global), most of which have concluded with infringement decisions. These investigations involve both local players (Sahibinden, Nadirkitap, Yemeksepeti) and global ones (Meta, Google I-IV, Trendyol, Booking), and relate to very different abusive behaviors, such as data portability, unfair pricing, MFNs, and self-preferencing. Alongside these investigations, the TCA has also conducted market studies concerning digital marketplaces, digital advertising, and digital payments. There is also an ongoing study on mobile ecosystems.
Another front is legislation. Two drafts regulating digital platforms have been prepared, one by the Ministry of Trade and the other by the TCA. The former, an amendment to the E-Commerce Law (No. 6563), has been enacted by the Parliament and is in force. The other, a draft revision of the Turkish Competition Act, is expected to be enacted by the legislature in 2024, according to the “medium term programme” of the presidency of Turkey.
There is still room to influence the revision of the Turkish Competition Act. In fact, we have been told that the draft has already been changed since its last public appearance. The forthcoming legislative process is not clear, but it is fair to assume that the latest version (which we have not analyzed) will be circulated to collect public opinion. After the opinions are collected, we expect there will be further changes. Also, considering the usual legislative process, some last-minute revisions typically take place in the Parliament.
Unfortunately, there was no impact assessment. Only the preamble of the E-Commerce Act, which mainly regulates the behaviors of the platforms in marketplaces, generally refers to some e-commerce statistics that indicate the prevalent use of platforms in e-commerce.
Consider the act’s provisions: limitations on advertisement and promotion budgets (separate limitations); prohibitions of certain activities (especially in financial services); and, most importantly, the obligation to pay license fees. It seems likely that the act will produce counterproductive results. It will hamper investment and economic growth.
The draft Competition Act refers to the market studies conducted by the TCA. Of course, there are other market studies, as well. Hence, it is likely that the draft will be affected by the outcomes of these studies. We could, in theory, expect better and more informed legislation on that front.
The TCA has conducted several studies of digital markets and, based on these studies, has published extensive market-study reports. The e-marketplace platforms industry inquiry preliminary report was the first of its kind, and it was followed by the final report. Another was related to the e-payment system; the full name of that report is the “Financial Technologies Concerning Payment Services Inquiry Report.” A third relates to the online-advertising industry; its preliminary report has been published and is going to be presented and discussed with the public on the 20th of December in Ankara. There is another study, not yet completed, concerning mobile ecosystems; it was initiated in April 2023 and is still being conducted. Apart from these reports, the TCA has launched and completed numerous investigations of digital platforms and, as part of these investigations, has carried out micro market studies as well. Therefore, it is fair to say the TCA is one of the authorities with the most extensive information on the features and dynamics of these markets.
This is hard to answer. Of course, the same theoretical economic considerations (network externalities, economies of scale and scope, big data, and customer ignorance) also hold true in Turkish markets. Besides, Google, Meta, and Microsoft dominate certain digital markets in Turkey, as well. However, to what extent these considerations require economic regulation is an open question.
Hence, without delving into this grand matter, there are at least three arguments that can be made for the Turkish case, which point to the fact that a DMA-type regulation of platforms is not urgent.
The first is that the EU’s enforcement of the DMA would likely yield positive externalities for third-country markets like Turkey. As the companies subject to the DMA’s prohibitions have global operations, it is likely that their compliance efforts will change market practices globally, and this will affect Turkish markets as well (this is not unique to Turkey, though it is still important).
Second, as we have limited information about the probable economic consequences of DMA-type regulations, it would be wiser to monitor the early enforcement practice in the EU and make legislation only if these consequences are likely to create better market outcomes.
Third, as already mentioned, the TCA is already very active in digital markets and, given this vigorous enforcement practice, it is hardly urgent to call for a new regulatory tool. The TCA can quickly respond to market failures and has done so with several interlocutory injunctions in digital markets (Trendyol, Meta, and Yemeksepeti).
I should note that the e-commerce regulation, which is in force at the moment, is a platform regulation that differs from the DMA. Hence (as the rules are only partly DMA-style), I did not consider this regulation when responding to this question. If we do consider it, we can say that (apart from the fact that it is not a good law) there are serious informational problems on the side of the enforcers, as this law is implemented by the Ministry of Trade.
There are various provisions in the law (or in the amendment) that make the legislation seriously problematic from a competition-policy perspective. I will provide some examples.
The first is the private-label prohibition (additional article 2/1(a)). The E-Commerce Law prohibits platforms from selling their own PLs on their platforms. This prohibition applies to all platforms, independent of market power or size. The concern is understandable—i.e., self-preferencing or discrimination in favor of the PL. However, this prohibition is evidently disproportionate. These concerns could easily be addressed by less drastic and intrusive measures; for example, the platform’s PL unit could be prohibited from using platform data. In fact, the law already governs this issue: medium-sized platforms are prohibited from using sellers’ data in cases where they also compete with the seller (additional article 2/2(a)). The PL prohibition will certainly result in welfare losses, since creating and selling PL brands not only increases consumer choice but also enables platforms to offer goods in line with consumer preferences at competitive prices. This prohibition interferes with these consumer benefits.
A second example is the limitation on advertisement and promotion/discount budgets (additional article 2/3(a) and (b)). The relevant provisions impose a cap on the advertising and discount spending of large-scale platforms. In both provisions, the budget cap is determined according to the same formula and is therefore equivalent. Accordingly, the platform may spend 2% on advertising or discounts for up to 30 billion Turkish liras of the net transaction value arising from sales it brokered in the previous year, and 0.3% for the portion above that limit. The amounts so determined constitute the upper limits of the discount and advertising budgets. To explain with an example (sketched in code below), for a platform with a net transaction volume of 50 billion Turkish lira, 60 million Turkish lira (0.3% of 20 billion) will be added to the base expenditure limit of 900 million Turkish lira. In this case, the advertising budget will be 960 million Turkish lira. The discount-budget cap is determined in the same way under the continuing paragraph, so the advertising and discount-budget limits are equivalent.
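For concreteness, here is a minimal sketch of that bracketed cap in Python. The figures come from the worked example above (a base allowance of 900 million lira on the first 30 billion lira of net transaction volume, plus 0.3% of the portion above the threshold); the exact statutory rates should be checked against the law itself.

```python
def budget_cap(net_volume, base_allowance=900e6, threshold=30e9, excess_rate=0.003):
    """Advertising (or discount) budget cap: a fixed allowance on the first
    `threshold` lira of net transaction volume, plus `excess_rate` on the rest.
    Figures follow the worked example in the post, not the statute itself."""
    excess = max(net_volume - threshold, 0.0)
    return base_allowance + excess_rate * excess

# The post's example: a platform with 50 billion lira in net transaction volume.
print(f"{budget_cap(50e9):,.0f}")  # 960,000,000 -- the 960-million-lira cap above
```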
What is the rationale for these two provisions? The basic logic is similar in both cases: the platform’s economic power allows it to implement exclusionary strategies against its rivals, and the incumbent platform makes it difficult for competitors to operate through excessive advertising. This justification bears traces of the “raising rivals’ costs” theory from the economic literature. The rationale for the discount-budget limitation is the proposition that large-scale platforms engage in predatory pricing through discounts, leading to the exclusion of competitors. Therefore, both restrictions are meant to discourage strategies and behaviors that weaken competition in the markets in which platforms operate, which will, in turn, encourage entry. That, at least, is the logic suggested in the preamble.
One does not need to elaborate on every problem associated with these rules. Not only is the law hard to interpret and apply as written, since some of its concepts are vague and indeterminate, but the rules will also certainly have anticompetitive effects and therefore decrease social welfare. Take the advertising-budget limitation. The first problem concerns the scope of the concept of advertising. Although the law and secondary regulation seek to specify which activities will be considered within the scope of advertising, problems of determination and interpretation are inevitable, which will likely create transaction costs and increase uncertainty. Moreover, advertising is a socially beneficial activity, due to its informational and quality-signaling functions for branded products. Limiting the advertising budget will impose a social cost in this respect. Advertising benefits not only the platform but also the sellers, and limiting it may harm the sellers as well.
For a moment, one might think it reasonable to incur these costs because preventing excessive advertising will increase competition in the market. However, although the law refers to the phenomenon of excessive advertising, there is no scientific study on this subject. Therefore, it is unclear whether this type of risk actually exists. As a matter of fact, the economic theory of raising rivals’ costs through excessive advertising has found little traction in practical application. In this respect, the economic basis of the ban in question is weak (even if we assume for a moment that there is a real problem in this regard). In addition, it is unclear how the monetary criterion underlying the advertising-budget limitation was determined. Considering that the same criterion is used for both advertising and discount budgets, we can say that it was determined largely arbitrarily.
Similar criticisms apply to the discount-budget limitation. Moreover, the benefit of discounts to both consumers and sellers is even less controversial than the benefit of advertising. Failure to provide a discount results in a net consumer loss. We can easily say that, due to this limitation, the products sold on the platform will increase in price and the amount of output will decrease.
A third example is the obligation to obtain a license from the Ministry of Trade and to pay an annual license fee. The law foresees gradually increasing license fees, based on net transaction volume, for all platforms exceeding a certain size. Under the system in the law, this annual license fee increases as the net transaction volume increases. What is striking is the size of the increase (see the sketch below). To put it concretely, a platform with a net transaction volume of between 15 billion and 30 billion Turkish liras is obliged to pay a fee of 0.03% of its net transaction volume, while a platform with a net transaction volume of more than 90 billion liras is obliged to pay a license fee of 25% of its net transaction volume. Considering that the fee is calculated on net transaction volume, not turnover, these fees clearly constitute a barrier to economic growth. For large platforms, these costs not only cause price increases for their services, but also serve as a disincentive to compete.
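A minimal sketch of that fee schedule, using only the two brackets named above (the statute’s intermediate brackets are not spelled out in this post, so they are left unimplemented):

```python
def license_fee(net_volume):
    """Annual license fee as a share of net transaction volume (in lira),
    using only the two brackets named in the post; the intermediate
    brackets are not specified here."""
    if 15e9 <= net_volume <= 30e9:
        return 0.0003 * net_volume   # 0.03% of net transaction volume
    if net_volume > 90e9:
        return 0.25 * net_volume     # 25% of net transaction volume
    raise NotImplementedError("bracket not spelled out in the post")

print(f"{license_fee(30e9):,.0f}")   # 9,000,000 lira at 30 billion in volume
print(f"{license_fee(100e9):,.0f}")  # 25,000,000,000 lira at 100 billion
```

At the top bracket, the fee consumes a quarter of everything sold through the platform, which makes the disincentive to grow past the thresholds plain.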
There are other examples as well, which indicate that the law’s primary goal is not to protect competition but rather competitors, and that efficiency and productivity are not its concerns. Therefore, from a competition-policy perspective, this is clearly a bad law.
Well, if we consider the E-Commerce Law, there are two risks: conflict and overregulation. In certain cases, platforms will be subject to investigations both by the Trade Ministry (under the E-Commerce Law) and by the TCA. There is also a chance that both laws may impose financial costs on the platforms, as the fines can be applied simultaneously. And, interestingly, these rules are not fully aligned with each other, which will create uncertainties for the platforms. Unfortunately, the E-Commerce Law does not set out specific provisions as to which law should prevail in the event of a conflict.
If the Turkish legislature enacts the draft Competition Act, then the situation will become much worse. Three laws will regulate the behavior of platforms and, unless the draft Competition Act sets out specific (well-tuned) provisions for overlap/conflict, there will be chaos.
I have sought to elaborate on the likely effects of the E-Commerce Law above. As I said, the law is anticompetitive; it will likely hamper competition and increase prices in digital markets. It will slow growth and innovation.
It is hard to foresee the economic effects of the draft amendment to the Competition Act.
One thing is sure: these laws will increase platforms’ compliance costs and, given that two laws regulate similar behaviors, there will be some overlap and overenforcement, which will compound the uncertainties already present in these regulations.
It is not hard to predict that FDI will be negatively affected by the E-Commerce Law. Regulating digital platforms with two similar laws will likewise have adverse effects on FDI.
The enforcement practice of the TCA is quite innovative, in the sense that new harm theories under Article 6 (abuse of dominance) are readily adopted and applied. Hence, the cases against Google, Trendyol, and Meta can be exemplary for other jurisdictions.
What could we learn from the EU and the United States? As there is limited enforcement practice in the United States, I do not think there are many exemplary cases in U.S. practice, whereas EU practice is a different story. The TCA has already been inspired by EU practice in various matters, such as the cases against Google. Besides, as said above, Turkey can benefit (and learn) from the early enforcement practice of the DMA and prepare its legislation accordingly.
Turkey has serious economic problems, and concentration in digital markets is not even on the list. So, from a macro perspective, regulating digital platforms does not seem to be a priority. However, there is a general tendency toward regulating markets, as Turkey moves away from the free-market approach. And for some reason, regulation of the digital world (in this and other respects) has been deemed important; regulating digital platforms can be regarded as part of this general approach. Besides, the draft regulation in the Competition Act was prepared by the Competition Authority and, perhaps to increase its authority and control over the markets, the authority is pushing for the legislation. As a consequence, a proposal for a new regulation was included in the medium-term plan of the presidency.
The post A Holiday Hootenanny Hiatus, But First, Some Title II Talk appeared first on Truth on the Market.
The FCC has pushed telecom folks to crank out more content than James Patterson. So, we can be forgiven for pouring ourselves a cup of cheer, turning on “The Muppet Christmas Carol,” and taking a brief hiatus.
Yet again, the FCC intends to reclassify broadband internet-access services under Title II of the Communications Act of 1934. Among the multitude of rules this move entails, the FCC would impose so-called “net neutrality” conditions by banning providers from blocking or throttling content and engaging in paid-prioritization practices.
If approved, the Title II rules would work hand-in-hand with the FCC’s sweeping digital-discrimination rules, which explicitly subject broadband pricing, discounts, incentives, and other terms and conditions to scrutiny and enforcement. FCC Commissioner Brendan Carr has described Title II and digital discrimination as “fraternal twins.” While the FCC has been clear that it will not (for now) regulate rates under Title II, the digital-discrimination rules nonetheless reach broadband pricing directly.
In comments to the FCC, the International Center for Law & Economics (ICLE) notes that some critics see the FCC’s pursuit of common-carrier regulation of broadband internet as an attempt to “control” an industry with vast economic and political significance. And that may be true. A more charitable criticism, however, is that the commission mistakenly believes that the provision of broadband internet is a natural monopoly best served by utility-style regulation.
Alternatively, it could be argued that the FCC mistakenly believes that a dynamic and competitive industry marked by rapid innovation, improving quality, and falling prices can be effectively regulated as if it were a public utility. Indeed, ICLE—along with many other commenters—points out that, despite several recent opportunities to regulate broadband internet under Title II, Congress has never explicitly provided the FCC the authority to do so.
In particular, the 2021 Infrastructure Investment and Jobs Act (IIJA) was a perfect opportunity for Congress to legislate net neutrality or Title II regulation, if that’s what it wanted the FCC to do. The IIJA was the legislation that mandated the FCC to issue rules to prevent digital discrimination. It was also the legislation that allocated more than $42 billion to build out broadband infrastructure under the Broadband Equity, Access and Deployment (BEAD) program. Tellingly, Congress empowered the National Telecommunications and Information Administration (NTIA), rather than the FCC, to implement BEAD. Mandating Title II classification would’ve taken less than a page of the 1,000-plus-page IIJA. Its omission can be seen as a clear sign that Congress—with Democratic majorities in both houses at the time—had little interest in such expansive regulatory intervention.
By most measures, U.S. broadband competition is vibrant and has strengthened dramatically since both the repeal of Title II rules in 2018 and the advent of the COVID-19 pandemic in 2020.
With apologies to Frank Loesser and Ella Fitzgerald:
Ah, but in case I stand one little chance,
Here comes the major question in advance.
Will Title II be an MQ, MQD?
Oh, will it be an MQ, MQD?
A big question—some would say a major question—regarding the FCC’s latest attempt at Title II classification is: Can they do it?
Under what is now known as the “major questions doctrine” or “MQD,” the U.S. Supreme Court has said: “We expect Congress to speak clearly if it wishes to assign to an agency decisions of vast ‘economic and political significance.’” That is, the MQD requires that Congress give an agency clear congressional authorization to act in such cases. In other words, an ambiguous grant of authority is not enough.
In their comments to the FCC, Gus Hurwitz and Christopher Yoo conclude that the FCC itself seems to think that Title II regulation is a major question of “economic and political significance”:
Rather, the fact that an agency feels it is necessary to ask whether its decisions raise major questions suggests that those questions may well be major. This alone should give the agency pause about taking such decisions—especially in an era of intense judicial scrutiny of agency action, it would be a curious decision for any agency to instead seek to structure its decisions so as to avoid the appearance of their having vast economic or political significance. [emphasis added]
In addition, Hurwitz & Yoo note:
The vast significance of the proposed rules cannot be overstated—though the NPRM seems notably to attempt to understate it. Discussing broadband in 2015, former Chair Tom Wheeler described the Internet as “the most powerful network in the history of mankind.” This was echoed in the 2015 Open Internet Order. The first sentence of the 2015 Order asserted that “[t]he open Internet drives the American economy and serves, every day, as a critical tool for America’s citizens to conduct commerce, communicate, educate, entertain, and engage in the world around them.” Similarly, the NPRM for that Order began with: “The Internet is America’s most important platform for economic growth, innovation, competition, [and] free expression . . . . [It] has been, and remains to date, the preeminent 21st century engine for innovation and the economic and social benefits that follow.” [emphasis added, citations omitted]
Thus, ICLE’s comments conclude, (1) the major questions doctrine is now clearly recognized by the Supreme Court, (2) the decision to apply Title II to broadband services is a decision of “vast economic and political significance” under the MQD, and (3) the Court’s Brand X decision found that the Communications Act is ambiguous as to whether broadband is a “telecommunications service.” As such, the decision to reclassify broadband service again will likely fail under the MQD.
Happy Holidays, Hootenanniers. We’ll see you in the New Year for more telecom-policy shenanigans.
The post A Consumer-Welfare-Centric Reform Agenda for the Federal Trade Commission appeared first on Truth on the Market.
It bears emphasizing that these 12 suggested reforms should not be viewed as partisan in nature. They are designed to move the FTC back toward the largely bipartisan approach that characterized decision making for more than 30 years, spawning the Janet D. Steiger, Robert Pitofsky, Timothy J. Muris, Deborah Platt Majoras, William Kovacic, Jon Leibowitz, Edith Ramirez, Maureen K. Ohlhausen (acting), and Joseph J. Simons chairmanships.
Over those three decades, both the FTC’s Democratic and Republican leaders consistently focused on advancing the interests of consumers as their guiding principle. Promoting consumer welfare was the recognized goal of antitrust enforcement, and combating harm to consumers the centerpiece of consumer-protection policy. Enforcement priorities changed on the margin, but consistently broad agreement on the nature of the commission’s general mission was retained from administration to administration.
In sharp contrast, under the Biden administration, FTC Chair Lina Khan has repudiated consumer-welfare enhancement as the guiding light of competition enforcement. She has signaled that other considerations—such as civil rights, labor, the environment, and equity—will also inform policy. Such a multi-factor approach leads to unpredictability and arbitrary decision making, at odds with the rule of law (see here and here, for examples). It repudiates more than three decades of rational consumer-oriented antitrust and consumer-protection enforcement, informed by sound economics. It is to be hoped that this “neo-Brandeisian” interlude is but a blip in time, and that new leadership will restore the FTC’s tried and true consumer-welfare-centric mission.
The reform recommendations set forth below are designed to spark a dialogue on FTC policy changes. The specifics are worthy of debate, in scholarly and political circles. Although many will disagree with some of my recommendations, I believe that they could establish the groundwork for a bipartisan dialogue and evaluation of alternatives. My hope is that this dialogue may build support for specific reforms that merit swift implementation after new commission leadership takes office.
Having set the stage, let me not leave you in suspense. My 12 recommended suggestions for FTC reform initiatives (to be undertaken by the new chair, or, if required, by a majority vote of the commission) follow. A more detailed description and justification for these reforms is set forth in my just-released Mercatus Center policy brief, Reforming the Federal Trade Commission.
Under Chair Khan, the FTC initiated major procedural and substantive policy changes that may run counter to the rule of law and threaten to impose substantial harm on American businesses and consumers. These dramatic changes—many of them implemented at the direction of the chair and without appropriate consultation—are at odds with a decades-long bipartisan tradition of incremental changes at the FTC.
The 12 specific reforms identified in this brief could, if implemented, repair much of the damage stemming from Chair Khan’s program. New FTC leadership undoubtedly will want to closely scrutinize these and other possible reforms, as it determines the best course of action to restore the FTC as a respected deliberative body committed to economically sound, welfare-enhancing antitrust and consumer-protection enforcement.
The post Oncology Drives Most Recent Accelerated Approvals appeared first on Truth on the Market.
This post estimates the benefits of accelerated approval and describes how oncology medicines became the treatment category with the most accelerated approvals.
Accelerated approval can facilitate the development of drugs indicated to treat rare diseases, many of which are forms of cancer. More than 25 million Americans suffer from rare diseases, which are particularly likely to be serious and life-threatening conditions with unmet medical needs. Of the 7,000 rare diseases that have been identified, more than 90% have no FDA-approved treatment.
Many facets of rare diseases make them particularly difficult to study in clinical trials targeting direct clinical benefit. It is widely acknowledged that developing drugs for rare diseases can be challenging due to characteristics like small heterogeneous patient populations, long timeframes for disease progression, a poor understanding of the disease’s natural history, and a lack of prior clinical studies. This makes accelerated approval a particularly important tool to develop treatments for these diseases.
And cancers—many of which affect only a few thousand people a year—have been targeted the most for accelerated approval. The reasons include scientists’ long-standing drive to combat cancer, past success in treating some cancers, a well-organized lobby to raise awareness and funds, drug companies that have spent decades combating the disease, and payers already accustomed to spending large sums on cancer treatments that cleared the U.S. Food and Drug Administration’s (FDA) regular approval process.
By the end of 2020, FDA had approved a total of 253 new drugs under the accelerated-approval pathway. From 1992 through roughly 2010, accelerated approval was primarily used to approve drugs indicated for treatment of HIV (39.7% of approvals); cancer (35.6%); and other rare disease treatments and specialty drugs (24.7%). Since then, the use of accelerated approval has shifted dramatically to focus on oncology drugs. Indeed, approximately 85% of accelerated approvals from 2010 to 2020 were for oncology indications.
Many of the additional approvals are for new indications for drugs previously approved to treat a different cancer. Keytruda is a major oncology drug developed by Merck and approved to treat multiple cancers, notably breast cancers. Naturally, given its impressive performance against myriad cancers, it has been tried against nearly all of them.
In some instances, its use to treat a new indication is withdrawn. For example, Keytruda was approved for gastric-cancer treatment in September 2017, but the accelerated approval was withdrawn in February 2022, after confirmatory studies failed to show a significant clinical benefit. This is the way the system is supposed to work. If a drug shows some promise, it is given early approval. But if this promise is not confirmed, the approval is withdrawn.
In total, 26 cancer approvals have been withdrawn for failing to demonstrate clinical benefit after confirmatory studies, whereas 96 accelerated approvals to treat cancers have gone on to show undeniable clinical benefit. The remainder are still approved—with some clinical benefits shown, but to an uncertain extent—and are still being investigated.
Over the past dozen years, during which time oncology medicines have dominated accelerated approvals, the median time from approval to withdrawal of approval was 3.5 years, and conversion from accelerated to regular approval took a median of 2.3 years.
To test the impact of accelerated approval, the industry-backed group Vital Transformations calculated the net present value (NPV) of 93 primary accelerated-approval therapies from 2001 to 2021 (drugs with at least one FDA-approved indication). The group built an economic model that tested the impact on each therapy’s NPV of delaying full FDA marketing approval by two, three, four, or five years.
For most of the orphan conditions currently lacking treatment in the United States, each condition affects at most 330 people—an incidence rate of less than one in a million. Without the accelerated-approval pathway, the Vital Transformations research suggests, developing therapies for many of these rare diseases would be economically untenable. As the authors told me: “If removing accelerated approval leads to a five-year delay, the percentage of therapies with a negative NPV would rise to between 51% and 73%.”
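To make the mechanics concrete, here is a minimal sketch of how such a delay model works. Every figure in it (development cost, annual revenue, exclusivity window, discount rate) is a hypothetical placeholder of my own, not an input from the Vital Transformations study; the point is only to show why a multi-year delay can flip a rare-disease therapy from a positive to a negative NPV.

```python
# A minimal sketch of an NPV-under-delay model, in the spirit of the Vital
# Transformations analysis. All numbers are hypothetical placeholders, not
# the study's actual inputs (which this post does not reproduce).

def npv_with_delay(dev_cost, annual_revenue, exclusivity_years, delay, rate=0.10):
    """NPV (in $ millions) of a therapy whose launch is delayed `delay` years.

    Development cost is paid at year 0. Revenue flows only from launch
    (year delay + 1) until exclusivity expires, so a delay both discounts
    revenues more heavily and shortens the selling window, since
    exclusivity runs on a fixed clock.
    """
    npv = -dev_cost
    for t in range(delay + 1, exclusivity_years + 1):
        npv += annual_revenue / (1 + rate) ** t
    return npv

# Hypothetical rare-disease therapy: $300M development cost, $60M/year in
# net revenue, a 12-year exclusivity window, and a 10% discount rate.
for delay in (0, 2, 3, 4, 5):
    value = npv_with_delay(dev_cost=300, annual_revenue=60,
                           exclusivity_years=12, delay=delay)
    print(f"{delay}-year delay: NPV = {value:+.1f}M")
```

Under these made-up assumptions, the therapy is modestly profitable with immediate approval, roughly breaks even with a two-year delay, and is deeply underwater at five years. Directionally, that is the same pattern the authors report across their 93-therapy sample.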
As will be discussed in more detail in a later post, some firms may not be conducting their statutorily required confirmatory studies in a timely fashion. But Vital Transformations suggests that 80% of confirmatory trials are filed within five years of accelerated approval. The smaller the disease population, the longer it takes to find trial participants. Hence, trials are harder to complete, and some are not finished within the requisite time.
But the authors noted that this does not mean these companies are “bad actors”; just that they are trying to solve really difficult problems for very few people. It is a “logical pattern of behavior” and “predictable that trials for small populations take longer.” These are less likely to be “profitable drugs,” so it is not in the companies’ interest to delay trials.
Vital Transformations’ arguments may constitute an overly positive gloss on the benefits of accelerated approval, but it is undeniable that there have been benefits. The current pathway has led to the approval of many drugs that have helped patients.
For example, according to Adam Brufsky, a clinical oncologist at the University of Pittsburgh Medical Center, at least two types of medicines—notably CDK4/6 inhibitors and trastuzumab deruxtecan, a successor to trastuzumab (Herceptin)—benefited from accelerated approval and have been remarkable at treating breast cancer. Brufsky told me that such drugs have improved the lives of hundreds of thousands, maybe millions, of women since trastuzumab was first approved in 1998. Furthermore, breast-cancer patients (and their support networks) know that, when they fail to respond to one treatment, disease progression can be rapid, so fast approvals and the right to try pending drugs are high on their list of priorities.
Brufsky’s insights help explain why cancer medications are the category most often approved rapidly. Cancer can progress quickly, and new treatments are constantly required. This is true across many types of cancer and, as such, the entire cancer-advocacy network pushes for rapid approval of new and exciting medicines.
As a result, oncologists are likely more familiar with these medicines than are other physicians. To find out how familiar, I decided to interview oncologists about their prescribing practices, to see how many had knowingly prescribed products granted accelerated approval. This is an ongoing survey, which will be published later, but some of the more general responses are discussed briefly below.
Of 131 oncologists approached to participate in a brief survey, more than three-quarters (104) responded in full to the questions posed.
Within the previous five years, 76% of the oncologists (79 of 104) had prescribed a medicine that was approved through the accelerated pathway, while only 19% said they were fairly sure they had not. Additionally, 62% had prescribed a medicine “off-label” (i.e., not for an FDA-approved indication) in the previous five years. Perhaps more importantly, 25% said there were medicines they had read about, and/or that were in trials, that they wished they could try for certain patients.
Also, roughly two-thirds (65%) thought that medicines that had received accelerated approval should be available for them to prescribe, rather than being allowed only in clinical trials or in very restrictive ways (as is the case with some approvals). As one put it: “real world clinical feedback can be just as useful as clinical trial data, and it should be for my patient and I to decide whether a drug is taken…not insurers or payers.” Furthermore, 75% of oncologists had declined to prescribe at least one medication they otherwise would have because of cost: they knew their patient couldn’t afford it, even though most patients, presumably, had insurance coverage or were on Medicare.
All oncologists who ventured an opinion saw a role for the FDA in approving and regulating medicines, with only 6% thinking clinical-trial data—even peer-reviewed data from pharmaceutical companies—were sufficiently reliable without FDA oversight.
But many physicians (75%) expressed concern that the FDA was too slow in approving new medicines. As two physicians put it to me, they are specialists in a disease area—those seeing patients and often at the cutting edge of research—and felt they could evaluate trial data and decide on a case-by-case basis whether a patient could use an unproven medicine. And while they relied on the FDA to assess the overall data, they felt they should be the ones deciding whether to prescribe a medicine.
I broadened the discussion to the idea of rapid “challenge” trials, which, with small numbers of participants, would be cheaper and faster than current trials. Nearly three-quarters of oncologists thought rapid trials a good idea where no treatment exists for extremely sick patients. For at least a third, frustration runs high that patients die while waiting for long-term trial results.
I will return to this ongoing survey in future posts.
Overall, while there is frustration among surveyed oncologists that approvals are not quicker, there is no doubt that the accelerated-approval process has been a boon to those developing, prescribing, and taking oncology medicines.
The post Oncology Drives Most Recent Accelerated Approvals appeared first on Truth on the Market.
The headline sounds great. One wonders about the extent to which the subhead is true.
It was true for years—at least, at the FTC. The FTC has a very solid tradition of pro-consumer, research-based enforcement in the health-care sector, both in conduct matters and, especially since the early aughts of this century, in hospital-merger matters (here, here, and here). Significant contributions to the economic literature by FTC Bureau of Economics staff and management have helped to undergird and refine provider-merger scrutiny. (For just a few examples, see here, here, here, here, here, and here.)
Under present leadership, it has continued to do well, if not uniformly so, when it has brought complaints under established theories and evidence (here and here). Hospital-merger enforcement can be especially challenging, given the opportunities for regulatory rent-seeking via both certificate-of-need (CON) regimes (see, e.g., here, here, and here) and certificate-of-public-advantage (COPA) legislation at the state level (here). With regard to COPA (if not CON), we’ve seen a few bright spots, where the commission has permitted its long-running and highly successful competition-advocacy program to continue (here).
What’s less clear is the payoff from some of the more innovative matters brought under present FTC leadership. The press release describes more than 30 initiatives across the three agencies—enforcement matters, regulation, and policy statements. A comprehensive review would be the longest Truth on the Market post ever, by a long shot. Hoping to keep at least a few readers, I’ll focus on just a few.
First, I’ll note in passing that some of the DOJ matters appear to be straightforward anti-price-fixing cases, brought under long-established precedent condemning price- and wage-fixing agreements and market-allocation agreements (see here and here) (and not creative applications of 40- and 50-year-old precedents).
Turning to FTC matters, the commission touts its proposed noncompete rule, which doesn’t specifically address health care but would sweep across all occupations economywide, including those in the health-care sector. According to the FTC:
By stopping this practice, the agency estimates that the new proposed rule could increase wages by nearly $300 billion per year and expand career opportunities for about 30 million Americans, including those working in the health care industry. Consumers would save up to $148 billion annually on health care costs under the proposed rule according to FTC estimates.
Wow. Or maybe not. It’s not that there’s nothing there. As I noted way back in April, “there are contexts—perhaps many contexts—in which noncompete agreements raise legitimate policy concerns.” Those include contexts in which such agreements may be anticompetitive, although most of the issues do not appear to be antitrust issues. And some contexts are not all contexts. There are legitimate business reasons for at least some of the restrictions, and the specifics are tricky.
The FTC’s discussion of the literature in its notice of proposed rulemaking (NPRM) was in some ways careful and in some ways both slanted and strained. For one thing, the NPRM bent over backward to discount inconvenient empirical findings, while glossing over limitations to seemingly favorable results. It simply ignored solid work that raised significant questions about the empirical basis of the proposed rule. See, for example, John M. McAdams; Norman Bishara & Evan Starr; and Jonathan M. Barnett & Ted Sichelman. As McAdams noted in 2019:
[T]he more credible empirical studies tend to be narrow in scope, focusing on a limited number of specific occupations . . . or potentially idiosyncratic policy changes with uncertain and hard-to-quantify generalizability.
For general critiques of the FTC proposal, see those submitted to the record by the International Center for Law & Economics (ICLE) and the Global Antitrust Institute.
This is a rule that the FTC, in its current form, couldn’t possibly enforce. Even leaving that aside, there are a few more wrinkles—some small, and some not. First, it’s not at all clear that the proposed rule has “helped lower costs,” not least because the commission hasn’t adopted any final noncompete rule. And presumably (or one aspires to presume), a final rule, if adopted, would reflect the many serious critical comments submitted to the FTC, and not just the cheerleading.
Second, the question of whether the FTC has the sort of general competition-rulemaking authority on which the rule would depend is controversial. It’s not baseless, but old agency hands—including, among others, former FTC General Counsel Alden Abbott; former FTC Commissioner and Acting Chair Maureen Ohlhausen (with Ben Rossen here); former FTC Commissioner Noah Phillips; and Gregory Werden, former chief counsel for economics at the DOJ Antitrust Division—have argued that there is no such authority, and noted administrative-law scholar Thomas Merrill agrees. As Abbott explains:
[T]he structure of the FTC Act indicates that Section 6(g) [the pertinent provision] is best understood as authorizing procedural regulations, not substantive rules. What’s more, Section 6(g) rules raise serious questions under the U.S. Supreme Court’s nondelegation and major questions doctrines … and under administrative law (very broad unfair methods of competition rules may be deemed ‘arbitrary and capricious’ and raise due process concerns).
Third, as the FTC itself recognizes, its competition authority is limited with regard to not-for-profit organizations, and most hospitals and their networks are not-for-profit. It’s unclear what portion of the health-care workforce would be covered, were the FTC to adopt the rule as proposed, and were the rule sustained. It’s also unclear to what extent such a rule would change the law in the 15 states—plus the District of Columbia—that already place certain restrictions on the enforcement of noncompete agreements for physicians (some covering health-care professionals more broadly), not to mention the four states that have general restrictions on noncompete agreements.
Fourth, the literature on health-care noncompetes is very limited: there is one paper investigating the impact on physician compensation (which finds that physicians with noncompetes are more highly compensated than those without) and one paper investigating the impact of certain policy changes on health-care prices. This is a very large and complex space, and two papers do not a body of literature make. One ought to be cautious before basing sweeping, nationwide regulatory reform on a couple of preliminary estimates.
And as discussed in ICLE’s comments to the FTC proposal, and as I discuss in detail here, the paper on health-care prices does not employ a causal design, does not use data on hospital-based services (ambulatory-care services only), and rests on both market definitions and analytical methods largely repudiated by research from staff in the FTC’s own Bureau of Economics—research undergirding the commission’s own, very successful, health-care merger-enforcement program. The paper is a thoughtful one and, in many ways, a useful starting point to investigate the impact of noncompete agreements on downstream health-care prices. But its estimates are highly dubious, as FTC staff are well aware, and it is the only basis the FTC has for its estimate of $148 billion in annual health-care savings. Indeed, it’s the only empirical paper that the FTC can cite that says anything about downstream price effects in any product or service market.
Then again, they said “could increase” wages (not strictly inconsistent with any increase or decrease) and “up to” $148 billion (not strictly inconsistent with zero).
The FTC also touts its settlement of the Amgen/Horizon merger matter “to address the potential competitive harm that would result from Amgen’s $27.8 billion acquisition of Horizon Therapeutics plc.”
Sort of. But as I explained in September, “the core of the consent agreement has Amgen doing what they’d offered to do all along.” It’s just as well that they settled; that saved the agency scarce resources. And as I explained way back in May, and as my ICLE colleagues and I—together with noted scholars of law and economics—argued in an amicus brief, it was a bad case to bring in the first place. The merger would not have increased anyone’s market share, much less market power, one iota. The FTC had sued “to block a likely procompetitive conglomerate merger based on harms supposed to arise from a chain of conjectured post-transaction events, where each link in the chain is highly speculative. It is unlikely they will all come to pass and cause the harm the FTC posits.” And the risk, however slight, was easily remediable in any case.
Finally, on the eve of trial, the FTC basically accepted Amgen’s standing offer and tacked on a few pointless extras. Better late than never—and better to settle on the eve of trial than to litigate to conclusion, and a loss. But that deserves just one cheer, not three, as it was not really a savings for American consumers so much as a saving of face.
This week’s press release doesn’t tout a similar matter that’s now before the 5th U.S. Circuit Court of Appeals—that is, the FTC’s decision to block Illumina’s acquisition of Grail, notwithstanding the decision of FTC Administrative Law Judge D. Michael Chappell to dismiss the FTC’s challenge in 2022. This is, like the Amgen/Horizon matter, a nonhorizontal merger case based on highly speculative harms that could, in any case, be easily addressed by contractual provisions already in place—provisions that could be further reinforced if they were incorporated in settlement orders. For a detailed critique, see the ICLE amicus brief signed by 28 scholars of law and economics. And, among others, there’s me (here and here); Alden Abbott; Thom Lambert; various law and economics scholars (here); former FTC Bureau of Economics Director Bruce Kobayashi and former FTC Chairman Tim Muris (here); and Kobayashi & Muris again, this time with Jessica Melugin and Kent Lassman, here.
As a number of us pointed out, there was a serious downside to the FTC getting this wrong. That is, the merger appeared to be very likely procompetitive—in fact, beneficial to consumers, specifically, cancer patients. There is a conspicuous and very real risk that the FTC’s action will slow the development of life-saving multi-cancer early detection (MCED) tests. Antitrust officials should, first, do no harm.
One more matter the agencies touted: the Eyeglass Rule. As the press release points out, the FTC proposed updating its Ophthalmic Practice Rules—known as the Eyeglass Rule—back in December 2022. I have no objection. Actually, I helped a bit with this issue before departing in September 2022. But the rule dates to 1978, and we’ve yet to see what the proposal will bring.
Basically, the rule requires that a prescribing eye doctor—an ophthalmologist (MD) or optometrist—must provide patients with a copy of their prescription after a refractive exam (supposing the exam indicates corrective lenses). There can be no extra charges (beyond those for the exam), and no artificial impediments to comparison shopping.
Under the new proposal, prescribers would be required to document the prescription release, because such documentation could facilitate enforcement. This mirrors a proposed amendment to the FTC’s Contact Lens Rule (CLR), which was first adopted in 2004, and which implemented the federal Fairness to Contact Lens Consumers Act enacted in 2003. The CLR, analogously, requires prescribers to release such prescriptions automatically, without an extra charge, upon completion of an exam, so that consumers (patients) can shop for contact lenses where they like, instead of being forced to buy them from their prescribing doctors.
That was all well and good, but the “10-year” rule review revealed that the rule was hard to enforce with the FTC’s limited resources. In fact, despite numerous complaints of noncompliance—and survey evidence of widespread noncompliance among prescribers—the FTC had not enforced the central prescription-release requirement of the Contact Lens Rule even once in the first 12 years that it was in force. Which might be a bit shy of establishing a credible threat of penalties for violations.
In 2016, a record-keeping provision was proposed—later modified in 2019—that was supposed to facilitate enforcement. I’m not objecting to any of that. The FTC should enforce the Contact Lens Rule, not least because the statutory charge to the agency is clear. And it should be able to do so, though that’s not easy. But proposing these amendments to the Contact Lens Rule (20 years after enactment of the statutory command) and to the Eyeglass Rule (some 45 years after its initial adoption) is not yet a great accomplishment for American consumers, even if they prove to be steps in the right direction.
And perhaps there’s an implication there for the many new regulatory proposals brought forth by the current commission, including some truly vast proposals, like the “commercial surveillance” ANPR and the noncompete NPRM. Regulations are not self-enforcing, even when rules are simple and clear. Not all rules are that.
If, given present resources, it’s this hard to refine and enforce simple prescription-release provisions for eyeglasses and contact lenses—and leaving aside the key question of whether the other proposed rules are, in some abstract sense, good rules—what’s the wisdom in pursuing rules that would pertain to, say, every labor agreement in the United States where the employer is not the government or a not-for-profit? Or every commercial use of “personal data,” however personal data ends up being defined?
Does the commission imagine it could enforce either rule effectively? If so, what on earth are they thinking? And if not, why are they spinning their wheels when—grand ambitions for remaking antitrust and consumer-protection law aside—there are good and important health-care matters waiting in the wings, as well as patients who care about such things?
Health-care competition is important. It’s a significant sector of the economy and one closely tied to consumer welfare. The new record seems very much a mixed bag, and more mixed than it had been.
And now for something completely different: Happy Chanukah to all who are celebrating the holiday, and a good weekend to all.
The post Hands Across the Agencies appeared first on Truth on the Market.
If social-media companies are to create a useful product for their users, they must be able to strike a delicate balance between what people want to post and what they want to see and hear. As multisided platforms that rely on advertising revenue, they must also make sure to keep high-value users engaged on the platform. Moderation policies are an attempt to create community rules to strike this balance. This may include limits on otherwise legal speech in ways that are not viewpoint neutral. For instance, to keep users and advertisers, social-media platforms may choose to restrict pro-Nazi speech. But in order to enforce these rules, they need the ability to exclude those who refuse to abide by them. This is private ordering: the ability of private actors to create rules for their own property and to enforce them through technological and legal means.
The First Amendment’s protection of private ordering is exemplified by the Court’s jurisprudence on state action and the right to editorial discretion. As stated in Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019):
The text and original meaning [of the First and Fourteenth Amendments], as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech.
One of the exceptions to this general rule is when a private actor exercises a function “traditionally and exclusively” performed by the government. Id. at 1929 (emphasis in original). The paradigmatic example is a company town, as in Marsh v. Alabama, 326 U.S. 501 (1946). If it is “not an activity that only governmental entities have traditionally performed,” a private actor providing a forum for speech retains “editorial discretion over the speech and speakers in the forum.” Halleck, 139 S. Ct. at 1930.
Moreover, the First Amendment’s reach does not grow when private-property owners open their property for speech. If such property owners were “subject to First Amendment constraints” and thus “lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum,” then they would “face the unappetizing choice of allowing all comers or closing the platform altogether.” Id. at 1930.
The application to social-media platforms is obvious: they are private actors with a right to exercise editorial discretion over the forum they open to third-party speech. They are not exercising a function traditionally and exclusively performed by the government in hosting speech. In other words, social media is not a company town.
Finally, the common-carriage rationales raised by Florida and Texas are simply inapplicable to social-media platforms. For instance, social-media companies do not open up their property to the public on an indiscriminate basis to say whatever they want. Instead, by their terms of service, they require users to agree to their moderation policies, and they retain their editorial discretion to enforce those policies. And insofar as market power is relevant, neither Texas nor Florida made any attempt to show (nor could they) that all those companies subject to their laws have market power.
The summary of argument is below, and the whole brief is here.
“The most basic of all decisions is who shall decide.” Thomas Sowell, Knowledge and Decisions 40 (2d ed. 1996). Under the First Amendment, the general rule is that private actors get to decide what speech is acceptable. It is not the government’s place to censor speech or to require private actors to open their property to unwanted speech. The market process determines speech rules on social media platforms[1] just as it does in the offline world.
The animating principle of the First Amendment is to protect this “marketplace of ideas.” “The theory of our Constitution is ‘that the best test of truth is the power of the thought to get itself accepted in the competition of the market.’” United States v. Alvarez, 567 U.S. 709, 728 (2012) (quoting Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting)). To facilitate that competition, the Constitution staunchly protects the liberty of private actors to determine what speech is acceptable, largely free from government regulation of this marketplace. See Halleck, 139 S. Ct. at 1926 (“The Free Speech Clause of the First Amendment constrains governmental actors and protects private actors….”).
Importantly, one way private actors participate in the marketplace of ideas is through private ordering—by setting speech policies for their own private property, enforceable by common law remedies under contract and property law. See id. at 1930 (a “private entity may thus exercise editorial discretion over the speech and speakers in the forum”).
Protecting private ordering is particularly important with social media. While the challenged laws concern producers of social media content, producers are only a sliver of social media users. The vast majority of social media users are content consumers, and it is for their benefit that social media companies moderate content. Speech, even when lawful and otherwise protected by the First Amendment, can still be harmful, at least from the point of view of listeners. Social media companies must balance users’ demand for speech with the fact that not everyone wants to consume every possible type of speech.
The issue is how best to optimize the benefits of speech while minimizing negative speech externalities. Speech produced on social media platforms causes negative externalities when some consumers are exposed to speech they find offensive, disconcerting, or otherwise harmful. Those consumers may stop using the platform as a result. On the other hand, if limits on speech production are too extreme, speech producers and consumers may seek other speech platforms.
To optimize the value of their platforms, social media companies must consider how best to keep users—both producers and consumers of speech—engaged. Major social media platforms mainly generate revenue through advertisements. This means a loss in user engagement could reduce the value to advertisers, and thus result in less advertising revenue. In particular, a loss in engagement by high-value users could result in less advertising, and that, in turn, diminishes incentives to invest in the platform. Optimizing a platform requires satisfying users who are valuable to advertisers.
Major social media platforms have developed moderation policies in response to market demand to protect their users from speech those users consider harmful. This editorial control is protected First Amendment activity.
On the other hand, the common carriage justifications Texas and Florida offer for their restrictions on social media platforms’ control over their own property do not save the States’ impermissible intervention into the marketplace of ideas. Two of the most prominent legal justifications for common carriage regulation—holding one’s property open to all-comers and market power—do not apply to social media companies. Major social media companies require all users to accept terms of service, which limit what speech is allowed. And assuming market power can justify common carriage, neither Florida nor Texas even attempted to make such a finding, offering at best mere assertions.
The States’ intervention is more like treating social media platforms as company towns—an outdated approach that this Court should reject as inconsistent with First Amendment doctrine and utterly unsuitable to the Internet Age.
[1] Throughout this brief, the term “platform” as applied to the property of social media companies is used in the economic sense, as these companies are all what economists call multisided platforms. See David S. Evans, Multisided Platforms, Dynamic Competition, and the Assessment of Market Power for Internet-Based Firms, at 6 (Coase-Sandor Inst. for L. & Econ. Working Paper No. 753, Mar. 2016).
The post ICLE Files Amicus in NetChoice Social-Media Regulation Cases appeared first on Truth on the Market.
Khan’s arrival at the FTC in the summer of 2021 seemed to herald a new era of competition and consumer-protection rulemaking.
Khan’s first FTC statement of regulatory priorities, issued in December 2021, called for possible consumer-protection rules that would, among other things, authorize penalties for “data abuses” and for “abuses . . . from surveillance-based business models.” Relatedly, rules to combat illegal discrimination stemming from algorithmic decision making were also mentioned. More generally, the statement envisioned rules to “define with specificity unfair or deceptive acts or practices.”
In addition, drawing upon recommendations in President Joe Biden’s July 2021 Executive Order on Promoting Competition, the statement announced that the FTC would consider “competition rulemakings” dealing with: noncompete clauses; surveillance; the right to repair; pay-for-delay pharmaceutical agreements; unfair competition in online marketplaces; occupational licensing; real-estate listing and brokerage; and industry-specific practices that substantially inhibit competition.
Also in late 2021, the FTC announced it was pursuing possible updates and modifications of existing consumer-protection rules, including the rule implementing the Children’s Online Privacy Protection Act; the Health Breach Notification Rule (HBNR); identity-theft rules; and the FTC Safeguards Rule (providing additional requirements for financial institutions’ privacy programs).
Flash forward to February 2023. In an article highlighting the FTC’s transformation into a major regulator, former FTC Bureau of Consumer Protection Director Jessica Rich pointed to the commission’s major ongoing rulemaking initiatives.
During 2023, the FTC’s rulemaking activity has focused on taking further steps to implement the proposals described above, and on issuing updates on reviews of possible modifications to existing rules. (See here for Federal Register announcements of these initiatives.) Two proposed actions merit particular note because of their potentially substantial economic impacts.
On June 27, 2023, the FTC—with the concurrence of the U.S. Justice Department (DOJ)—announced sweeping proposed amendments to the rules implementing the Hart-Scott-Rodino Act’s premerger-notification requirements. Those changes would require firms proposing to merge to provide huge amounts of business information that has no bearing on the potential competitive effects of their transaction. Amended pre-notification rules would be subject to deferential (to the FTC) “arbitrary and capricious” review under the Administrative Procedure Act (APA).
A Sidley & Austin analysis points out that “[t]he FTC estimates that the changes, as proposed, will quadruple the filing preparation burden. Some experts believe that is a significant underestimate.” These added costs would outweigh any benefits that the amended rules might generate. As explained in a Sept. 27, 2023 public comment by the International Center for Law & Economics (ICLE), the proposed regulatory change “would increase compliance costs for merging parties generally, with disproportionate impact on small and first-time filers; they would impose additional burdens on agency staff; yet it is unlikely that they would provide countervailing benefits to competition and consumers.”
On Oct. 11, the FTC announced a specific proposed rule that would ban “junk fees.” (The commission published and requested public comments on the rule on Nov. 11.) Specifically, the proposed rule would “[p]rohibit businesses from advertising prices that hide or omit mandatory fees” and “[p]rohibit sellers from misrepresenting fees and require sellers to disclose upfront the amount and purpose of fees and whether the fees are refundable.”
The junk fees proposal is quite problematic. Former FTC economist Mary Sullivan stresses that, by requiring additional fee-related disclosures, the proposal “has the potential to clutter advertising, making it less effective and more confusing” and “could result in enormous compliance and administrative costs, especially if applied to all industries.”
Sullivan adds that, depending on how it is implemented, the new rule might be used “to regulate optional add-ons in cases where regulators determine that the add-ons do not add enough value or consumers reasonably assume them to be included in the advertised price. Regulating add-ons would create inefficiencies and restrict firms’ freedom to design their own products.” More generally, the proposed rule may be seen as an unprecedented regulatory interference in business-pricing practices that could render affected markets less efficient.
In sum, while the Khan FTC has spawned an unprecedented number of significant regulatory proposals, they do not appear close to enactment (except perhaps the Hart-Scott-Rodino regulatory amendments, see here). Furthermore, it is entirely possible that a new FTC leadership might revisit at least some (if not all) of the proposals that have attracted significant controversy.
It is not surprising that, shortly after President Biden designated the neo-Brandeisian Lina Khan as chair, the FTC announced plans to propose a large number of new rules. As explained by Christine Wilson and Adam Cella, neo-Brandeisians reject core classical-liberal tenets, including the importance of the rule of law and due process. Neo-Brandeisians also are skeptical of capitalism, view mergers as inherently harmful, and distrust large firms. Furthermore, and relatedly, they are critical of individualism and believe in central planning.
It therefore follows that neo-Brandeisians would support government control of markets through regulation. In particular, FTC regulation would be seen as superior to reliance on market forces, because it allows the agency to prescribe particular rules governing business conduct—rules that are deemed inherently superior to what unruly markets might yield.
At first blush, greater reliance on rulemaking seems to further the goal of putting the FTC “in charge of the entire economy,” and avoids the problems inherent in case-by-case litigation. Litigation is cumbersome (it requires a lot of discovery, consistent with due process), time-consuming, and lacks the economywide impact of rules. What’s worse, judicial case-law precedents and statutory language (such as the “balancing test” for consumer-protection unfairness found in Section 5(n) of the FTC Act) sharply limit the FTC’s ability to change the law dramatically by bringing enforcement actions.
It might have appeared initially to Khan that greater reliance on promulgating economywide FTC rules, which specify what businesses may or may not do, would avoid the limitations of litigation. But a closer look reveals the problems with a rulemaking-heavy approach.
Notably, rulemakings themselves are resource-intensive, and may take years to come to fruition. With respect to consumer protection, Jessica Rich explains that, despite 2021 FTC procedural changes to “streamline” Magnuson-Moss (Mag-Moss) rulemaking under Section 18 of the FTC Act, “the hurdles remain high” to the enactment of Magnuson-Moss rules. (Consumer-protection rules that implement specific congressional enactments are subject to the less exacting standards of the APA. But much of the FTC’s recent “innovative” consumer-protection rulemaking activity has focused on problems not specifically addressed by Congress.)
Specifically, Rich explains that Mag-Moss initiatives must still follow a series of cumbersome statutory steps prior to enactment.
Also, judicial review of a Mag-Moss rule is far more exacting than under the APA’s relatively lenient “arbitrary and capricious” standard. A court may, of course, choose to strike down a poorly justified Mag-Moss rule even under that lenient standard. But even if a Mag-Moss rule survives APA review, the FTC may still lose in court. Under Mag-Moss, a court may direct the FTC to consider additional submissions, may set aside the rule if it is not supported by “substantial evidence,” and may “set aside the rule if FTC’s limits on rebuttal or cross examination precluded disclosure of material facts.”
Finally, and perhaps most significantly, a reviewing court may decide that the FTC has done an inadequate job of demonstrating that a Mag-Moss rule would be cost-beneficial.
Given these high hurdles, the resource-constrained FTC would be expected to handle, at most, a couple of Mag-Moss rulemakings at a time. But it currently contemplates almost 10, based on its public pronouncements since 2021. Moreover, given the sweeping breadth of some of its announced initiatives, it would have a particularly hard time building a factual case that could withstand scrutiny for its more ambitious initiatives (such as business-data privacy and security). As such, it appears unlikely that the FTC will be able to bring even a single Mag-Moss rule to successful conclusion prior to the end of the current presidential term.
The outlook for FTC competition rules is even bleaker. Such rules are virtually unprecedented and stand very little chance of being upheld, due to a lack of legal authority to support their promulgation (see here). Nevertheless, the FTC has put forth an extremely detailed draft competition rule that would ban most noncompete agreements in the U.S. economy, an exercise that is highly problematic from both a legal and an economic standpoint (see here, for example). (If a final noncompete rule is issued in 2024, it will almost certainly fail judicial scrutiny.) Perhaps the realization that far-reaching competition rules are legal longshots (at best), due to a lack of statutory authority, explains why the FTC has not followed through on announcing additional proposals drawn from the 2021 FTC statement of regulatory priorities.
So why has the FTC announced it is considering a large number of very expansive potential Mag-Moss rules that would tread new ground?
Perhaps this is merely legal “vaporware,” meant to show that the FTC intends to second-guess a variety of well-established (and often efficient) business practices that affect large swaths of the economy. Khan might hope that some risk-averse firms may decide to avoid those practices, even if no legal action is imminent, in order to avoid future problems with the government. She might also think that Mag-Moss rules, once developed, may garner additional public support and stand a reasonable chance of success in court.
Khan might also view potential Mag-Moss rule announcements as a means of spotlighting specific alleged “market failures,” in the hope of prodding future congressional action to deal with them (and perhaps grant the FTC specific rulemaking authority to fill in the details). This might be a form of “setting the stage” for a dramatic long-term expansion in federal regulatory activity when the time is ripe (that is, when the consciousness of legislators has been suitably raised to enable enlightened bureaucrats to exercise far broader authority).
Whatever Khan’s motives may have been, the FTC’s blizzard of Mag-Moss rulemaking initiatives (and one major competition rulemaking) has been economically wasteful. The substantial resources directed to developing dubious rulemaking proposals (many, if not all, of which could not satisfy cost-benefit scrutiny) could have been far better allocated to clearly beneficial enforcement activity—in particular, combating burgeoning mass-market consumer fraud (which remains a serious and seemingly growing source of consumer harm; see, for example, here, here, and here). Note that those wasted resources are sunk costs, and thus there is no justification to continue to allocate resources to those rulemaking projects “because they are there” (to do so would be to fall prey to the sunk-cost fallacy).
It follows that FTC leadership should discontinue work on those rulemakings, perhaps after quick cost-benefit analyses by the FTC Bureau of Economics to assure itself that the proposals put forth fail a cost-benefit test. It should then redirect the rulemaking resources to the most welfare-enhancing uses it can identify (I would opt for hardcore-fraud enforcement). Of course, such a course of action must await the departure of Chair Khan, and the installation of new and enlightened leadership at the commission.
The FTC’s highly publicized proposals previewing the issuance of new rules, beginning in 2021, have not brought us much closer to the issuance of final rules, with perhaps one or two exceptions. The FTC has, however, wasted scarce staff resources on launching one competition rulemaking (banning noncompete clauses) and a large number of dubious consumer-protection (“unfair or deceptive acts or practices”) Magnuson-Moss rulemakings. Those are sunk costs, and the FTC should not waste additional resources on further developing those rules. (The FTC, of course, will remain obligated to issue or update rules dealing with discrete topics, as required by specific congressional statutory enactments.) New FTC leadership will, however, be required to bring about this salutary change in direction.
The continued allocation of significant resources to these troubled rulemaking initiatives could entail more than the large opportunity costs of added waste and foregone welfare-superior FTC enforcement activity. It could lead to economic inefficiency and harm by deterring some legitimate efficient business conduct that falls within “the shadow of rulemaking.” It could also impose further harm, to the extent that particular welfare-inimical rules were finalized and upheld in court. (There is at least a reasonable possibility that one or more Mag-Moss rules would survive judicial review.)
New FTC leadership should keep these sobering realities in mind as it considers whether to “pull the plug” on the Khan-era rulemaking folly.
The post Where Are the New FTC Rules? appeared first on Truth on the Market.
This week, a U.S. House subcommittee hearing featured testimony from all five members of the Federal Communications Commission (FCC). The majority on the House Energy and Commerce Subcommittee on Communications and Technology did away with the question mark, titling the hearing “Oversight of President Biden’s Broadband Takeover.”
While it might be a stretch to call the administration’s broadband-policy agenda a “takeover,” one can be forgiven for concluding that the FCC is moving forward with so many massive and comprehensive interventions in nearly every aspect of the broadband market that it looks a lot like a takeover.
The agency’s newly enacted digital-discrimination rules cover any entity even remotely involved in the deployment and delivery of broadband, and cover just about any activity involved in deployment and delivery. The FCC’s full-steam-ahead drive to impose Title II common-carrier obligations on broadband will involve similarly massive and comprehensive interventions. On top of those policies, the FCC is also considering regulating broadband data caps and fees for early termination of broadband subscriptions.
The commissioners’ written testimony reveals deep divisions within the FCC. Republican Commissioner Brendan Carr accused the administration of choosing “partisan ideology over smart policy” to increase government control. Democratic Chair Jessica Rosenworcel defended the FCC’s actions as necessary to ensure “every consumer” has “fast, open, and fair” broadband access.
“The theme running through the Biden administration’s internet policies is not one of connectivity or capacity—it is control,” Carr said, highlighting the administration’s support for bringing back Title II utility-style regulations on broadband, including net-neutrality rules. Carr argued that these amount to outdated and sweeping government controls.
In contrast, Rosenworcel argued that the FCC is working to “keep pace with the rapidly evolving communications landscape and bring high-speed connectivity to everyone.”
During the hearing, Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.) took issue with the FCC’s sweeping digital-discrimination rules:
I’m equally concerned about the agency’s new so-called digital discrimination rules. The FCC went far beyond its congressional mandate by adopting far-reaching rules that could result in the agency micromanaging basic business decisions made by providers like prices, contract terms, even marketing campaigns and regulating industries outside of its jurisdiction including landlords and banks. … The IIJA does not give the FCC authority to regulate these practices or industries. Where did the FCC find this authority and what expertise does the FCC even have to regulate these practices?
Rosenworcel responded that Congress gave the FCC “a very broad mandate” to “prevent and eliminate digital discrimination,” and that this “exceptionally broad” mandate was not limited to just “some internet providers” and “certain terms and conditions.”
Carr noted that both the Title II rules and the digital-discrimination rules allow for rate regulation. Rosenworcel herself has in the past been adamant that the FCC has no intention to regulate broadband rates (“Nope. No how, no way”), although her written testimony for this hearing was notably silent on the topic of rate regulation. When asked by McMorris Rodgers whether the FCC would use digital-discrimination rules to regulate rates, Rosenworcel responded: “No, actually in the text of it, we make clear there will be no rate regulation.”
Technically, Rosenworcel is correct. The agency’s digital-discrimination rules do not regulate rates. Instead, the rules set up a complaint process through which claimants can allege discrimination in pricing, with the FCC acting as judge and jury in the proceeding. So, while the digital-discrimination rules do not provide for ex ante rate regulation, the FCC gave itself authority to impose ex post rate regulation in the name of eliminating digital discrimination.
In comments submitted to the FCC, the U.S. Chamber of Commerce summarized the consequences of the agency’s ex post approach:
These policies would render it impossible for businesses and the marketplace to make rational investment decisions. The scope of the services that the Draft covers is so broad that it does not provide meaningful guidance for how to comply. And because the Draft fails to grant sufficient guidance, it does not give fair notice of how to avoid liability. Consequently, investment in broadband innovation would disappear and consumers would have to pay higher costs for less efficient services.
Let’s take the chair at her word that, so long as she’s chair, the FCC will not engage in rate regulation. Alas, she won’t be chair forever. Under current rules, there’s little to stop a future FCC from imposing rate regulation, either through Title II or via enforcement of the agency’s digital-discrimination rules.
This is not a new concern. When the FCC imposed Title II regulation in 2015, then-Commissioner Ajit Pai noted in his dissent that forbearance from rate regulation merely meant that “the FCC will not impose rules ‘for now.’”
Despite disagreement among Democratic and Republican commissioners on Title II and digital discrimination, their written testimony was unanimous in asking Congress to restore the FCC’s spectrum-auction authority.
For his part, Carr chastised the administration for focusing so much on “command and control” broadband policies, rather than spectrum. He criticized the recently released National Spectrum Strategy as a “spectrumless spectrum plan,” adding: “After nearly three years of study, the administration’s spectrum plan commits to freeing up exactly zero MHz of spectrum. That is not a typo.”
So, has the Biden administration taken over broadband? Under Betteridge’s Law, the answer is no—but that’s only because the takeover isn’t yet complete.
The post Has the Biden Administration Taken Over Broadband? appeared first on Truth on the Market.
The post Google, Amazon, Switching Costs, and Red Herrings appeared first on Truth on the Market.
Not wishing to show excessive dis-favoritism, the FTC in November 2023 brought its third complaint of the calendar year against Amazon (here) (see here and here for the others, and that’s not counting the commission’s February 2023 statement about Amazon’s acquisition of One Medical). Of course, both Meta and Amazon are large firms engaging in varied conduct, and no firm gets a free pass on either the consumer-protection or competition side of Section 5 of the FTC Act. But with that said, this, too, seems a lot. Whereas the Meta onslaught entailed two antitrust actions and one consumer-protection matter, Amazon faces one antitrust “monopoly maintenance” complaint and two on the consumer-protection side.
On the competition side, Amazon has been in FTC Chair Lina Khan’s cross-hairs since, at least, the 2017 publication of her much-discussed student note in the Yale Law Journal, “Amazon’s Antitrust Paradox” (which is not to say that she’s pre-judged any Amazon matters, or that a reasonable person might suspect as much . . . although, I mean . . . right).
All of this windup about the FTC, Meta, and Amazon brings me to—well, bear with me—the U.S. Justice Department (DOJ) and Google, and the very recently concluded trial of the DOJ’s “monopoly maintenance” case. The complaint was filed in October 2020 in U.S. District Court for the District of Columbia, and Judge Amit Mehta has scheduled closing arguments for May 2024, so it ain’t over until it’s over. In the meantime, there’s a Truth on the Market symposium on “The Google Lawsuits” that provides useful discussion of various aspects of the DOJ’s case (among others). There’s also a useful “tl;dr” explainer by Sam Bowman and Geoff Manne (here). Greg Werden—former senior antitrust counsel at the DOJ Antitrust Division—blogged about it here. I’ve also blogged about it and opined here that, based on the public evidence presented at trial, the DOJ hasn’t made its case. That’s the tip of the iceberg. It has, to a first approximation, been in all the papers.
In case you’re not yet sated, I thought I’d focus on one specific line of argument in the Google antitrust trial that echoes one in the FTC’s “dark patterns” consumer-protection case against Amazon. It has to do with switching costs. As I and everyone else have noted, at issue in the Google case are two sets of agreements under which Google is the default search engine on various browsers and cell phones. Default status means what it sounds like: Google is pre-loaded and ready to roll “out of the box.” It doesn’t mean that consumers cannot use other search engines, or that they cannot alter their settings so that alternative search engines (and/or alternative browsers) become their defaults. It simply means that they have to take affirmative steps to suspend or change the Google default.
Similarly, the FTC’s “dark patterns” case against Amazon alleged, first, that Amazon somehow tricked consumers into registering with Prime and, darkly, that Amazon made it unduly difficult for consumers to cancel Prime. (I’m here; Manne is here, and here with Lazar Radic; former FTC General Counsel Alden Abbott says that the FTC’s complaint is “Perhaps the Greatest Affront to Consumer and Producer Welfare in Antitrust History” here).
Neither switching-cost allegation struck me as convincing. I use Google often, but I’ve used other search engines too; and I’ve gone through the steps of changing defaults back and forth, just for giggles. I haven’t canceled Prime (I don’t want to), but I’ve checked out the various screens required, and cancellation doesn’t seem all that onerous either. Really. That’s me, but that’s not just me. The FTC complaint itself alleges that Prime cancellation required six clicks of a computer mouse (or substitute) prior to streamlining by Amazon; and it’s easier than that now. Six clicks, or three or four, are more than one or two, but something short of running a marathon or fighting a winter ground campaign in Russia. Or something and a half.
Before we ask about the antitrust implications, we might simply ask: How hard is it? For whom? And how do we know?
As I (and others) have noted, Android devices in Europe are required to offer a choice screen: no default. The consumer benefits of the mandate are unclear. Google’s share of general search on European mobile phones is reported to be about 96%. And while Microsoft preloads its Edge browser and Bing search engine as defaults on computers with the Windows operating system, Bing still lags way behind Google—way, way behind—on general search on U.S. desktops. Both observations suggest that it’s possible—at least possible—for consumers to switch to whatever search engine they prefer.
A recent Washington Post column stated—misleadingly, if not just plain wrongly—that Google spent billions “to hide this setting from you.” It’s not entirely clear what the column’s author, Geoffrey A. Fowler, is talking about. First, he says: “I’m talking about your search engine.” Then he suggests it’s the default setting itself, which isn’t hidden at all—that’s the point of the default: convenience. It’s right there, ready to roll, and conspicuously so (a design feature that serves a positive function, not an engineered impediment to the use of Bing).
It’s not as if the challenged agreements redesigned the settings functions on, e.g., Apple iPhones, to make it especially difficult (or even more difficult than it was) for consumers to change the default. That’s not even among the DOJ’s allegations. Rather, the complaint argues that it’s simply too darn hard for consumers to switch. Here’s the complaint:
Even where users can change the default, they rarely do. This leaves the preset default general search engine with de facto exclusivity.
“Even” is not in dispute. Users can change the default. What about the rest of it? Testimony for the government by Antonio Rangel, Bing [yep] Professor of Neuroscience, Behavioral Biology, and Economics at the California Institute of Technology, was widely reported (see, e.g., The New York Times here and Bloomberg here). Here’s some of Rangel’s testimony, as reported by Bloomberg:
Rangel said in testimony Wednesday and Thursday that his research on the prime placement of cereal boxes in stores was relevant to his assessment of search engine defaults. He found that getting prominent real estate on a web browser or mobile phone discourages people from switching to rival search engines. Consumers are reluctant to change behaviors that have hardened into habit, he said.
“Search engine defaults generate a sizable and robust bias towards the default,” Rangel said. “Defaults have a powerful impact on consumer decisions.” Often consumers don’t even realize they are making a choice by default and they don’t know how to change it, he said.
This has never been precisely my field, but in my misspent youth (ok, my late 30s), I did some work at the National Institutes of Health (NIH), in the Laboratory of Neuropsychology (LN), so I thought I’d take a look at the underlying research. On the one hand, it’s interesting research. On the other hand, it’s clear enough that Rangel and his colleagues didn’t actually study browser or search-engine defaults, or the degree to which default status would, or would not, impede switching.
Rather, there were studies of mechanisms of visual attention, with particular experiments displaying certain food choices to investigate—e.g., the influence of visual salience on the amount of attention devoted to a display. Generalizing findings from, e.g., cereal-box placement to the durability of search-engine defaults seems a stretch (or entirely speculative).
For example, experiments on visual saliency and consumer choice (e.g., here, here, and here) suggest that the visual properties of, e.g., packaging can influence visual attention, which can, in turn, bias choice. In those experiments, items were displayed on marked screen "shelves" (not actual grocery shelves) while subjects' eye movements were tracked. Rangel and colleagues observed that eliminating the peripheral display of alternatives approximately doubled the attention bias—the amount of attention devoted to a displayed food item. The authors "suggest that individuals might be influenceable by settings in which only one item is shown at a time, such as e-commerce."
Perhaps, but it’s not really a measurement of the extent to which, say, varied cereal displays in grocery stores (for which manufacturers might pay slotting fees) actually modulate the choices made by either marginal consumers or those with established preferences. Manipulating the relative amount of attention that research subjects pay to appetitive (desirable) and aversive food items, it was observed that appetitive items were 6-11% more likely to be chosen with long fixation (exposure)—with a similar but somewhat different bias for aversive items. What that says about the degree to which a given default setting for a browser or search engine might or might not be a barrier to switching is anyone’s guess. All of it is very context-dependent. For example, visual salience has an especially significant effect when consumers are (experimentally) forced to make very rapid choices. That’s interesting, and intuitive, but it constrains generalization across both experimental and real-world contexts.
At best, we have some careful research with findings that are consistent with the general observation that visual salience can have an effect on attention, and that attention can have an effect on choice. And so what? I don’t mean to denigrate the underlying research as behavioral research, but nobody ever disputed that making something prominent could have some bearing on choice. That’s what advertising is for. And while the government’s expert seems not to have studied the defaults in question at all, neither Google nor anybody else disputed that they were worth something, rather than nothing.
The evidentiary stretch seems endemic. There’s much of interest in behavioral economics, and some serious work seems to promise productive application to competition policy. Yet the intersection of behavioral findings and antitrust remains, if not necessarily the empty set, then hazy at best. That might change, but thus far the space seems to recall Gertrude Stein’s take on Oakland, California: there’s no there, there.
Switching costs could matter, and in a different context, there's some precedent regarding switching costs and "lock-in." In Eastman Kodak Co. v. Image Technical Services, Inc., the U.S. Supreme Court held that a defendant's (Kodak's) lack of market power in a primary-equipment market (photocopiers, etc.) does not preclude, as a matter of law, the possibility of its having and exploiting market power in derivative aftermarkets (parts and service, etc.). Kodak lost on the question of whether it was entitled to summary judgment, so Kodak's refusal to sell its spare parts could, in theory, prove anticompetitive.
But Kodak-type lock-in allegations have been few, and have been difficult to prove. Carl Shapiro has explained why consumer harm in lock-in-type cases is both possible and unlikely. And jointly with David Teece, he has expanded on the analysis. In aftermarkets cases, high switching costs are a necessary, but not sufficient, condition for establishing the relevant market power. For example, firms that have made substantial investments in employee training and computer programming (sunk costs) might face substantial new training and programming costs in switching. Information costs could also be pertinent, if sufficiently high (and of a sort) to prevent customers from anticipating life-cycle aftermarket costs in their initial purchase decisions.
But Kodak-type lock-in is rightly hard to establish; and it seems a poor fit for either the Google antitrust case or the Amazon "dark patterns" case. Prime customers have many ways to investigate, and considerable experience investigating, price and nonprice features of retail goods available from channels other than Prime; and apart from three, four, or even six clicks, it's unclear that they have any switching costs at all, much less those requisite for lock-in.
And neither Kodak nor any other precedent says that switching costs need to be zero, or that a firm accused of exclusionary conduct needs to take affirmative steps to make them so. Pace Rangel's research on attention and choice, there's really nothing besides the DOJ's bald allegations to suggest the task of switching to a new search engine or browser is so onerous that Google's default agreements (with other firms, such as Apple) are de facto exclusive agreements—that is, that they render switching practically impossible. Is the DOJ suggesting that consumers, having trained themselves to search via Google, need to make considerable investments in training before they can search via Bing or DuckDuckGo?
That’s it, really. There are many issues on the table, and many facts in dispute. But important to both the Amazon consumer-protection case and the Google antitrust case are allegations that switching costs are extremely high. The agencies can say so, but they haven’t yet shown it. And trotting out sophisticated neuroscientists who haven’t actually studied the switching in question seems a red herring. At best.
The post Google, Amazon, Switching Costs, and Red Herrings appeared first on Truth on the Market.
Lone inventors, groups of scientists, serial entrepreneurs and, more recently, large corporations have been introducing new medicines, medical devices, and other products to improve our lives for millennia. Some of the early products were merely placebos, but a few—like tonics and vitamins—helped to prevent or remediate disease. Especially over the past century, many of these products have saved lives and improved the quality and length of human life. But in a few instances, these products have proven deadly or harmful, and led to calls for stricter regulation to ensure safety and efficacy.
Reputation matters in business. Companies like Crosse & Blackwell in the United Kingdom and Heinz in the United States were among the first to ensure quality. As a result, they thrived, while cost-cutting competitors with often-contaminated products failed. Today, companies like Merck, Pfizer, Novartis, BMS, GSK, and others have reputations for quality. But every company has made mistakes, and establishing the safety, efficacy and, hence, value of a medicine is even harder than doing so for food products. Over time, independent evaluators of quality were both demanded and introduced.
The FDA began its modern regulatory functions with the passage of the 1906 Pure Food and Drugs Act, which prohibited interstate commerce in adulterated and misbranded food and drugs. The FDA has grown enormously since then, in terms of staff and budget. Today, it oversees roughly a quarter of all U.S. economic activity. The expansion of FDA oversight often followed public-health disasters. The two most notable examples involved diethylene glycol and thalidomide.
The first antibiotics had scarcely been invented before they were misused, with fatal consequences. In 1937, a newly developed class of antibiotics called sulfonamides was wildly popular, and experiments with hundreds of formulations led to new life-saving products. But one of these new formulations included an even newer compound: diethylene glycol, a sweet-tasting, syrupy solvent.
Chemists at the highly respectable drug company Massengill mixed sulfanilamide with diethylene glycol to make the drug easier for patients to swallow. The problem is that diethylene glycol is fatally toxic: it killed 105 Americans within weeks of the product's introduction. At the time, the FDA could only prosecute Massengill for mislabeling its product. Public outcry led to the 1938 Federal Food, Drug, and Cosmetic (FD&C) Act, which required companies to perform product-safety tests prior to marketing. This disaster drove public support for the regulator's watchdog role.
Diethylene glycol has continued to kill people across the world: in South Africa in 1969 and in Nigeria in 1990. In 2022, 60 children died in the Gambia after taking medicines contaminated with the same compound. In total, at least 850 deaths in 11 countries have been attributed to drugs (often imported from India) contaminated with diethylene glycol—all in poorer nations without respected and well-funded drug regulators, and without widespread social media or an independent press to provide rapid feedback loops.
The most significant expansion of the FDA's authority over drugs came two decades later. Thalidomide was developed as a sleeping pill, and was also expected to treat nausea and headaches. After its introduction in Germany, it was used to treat morning sickness, without any trial in pregnant women. Indeed, pregnant women would never normally be enrolled in a clinical trial, due to the potential risks to the fetus. The drug was nonetheless marketed from 1957 by German manufacturer Chemie Grunenthal as a safe sedative to combat morning sickness in pregnant women. Thalidomide resulted in about 8,000 birth defects in Europe.
Manufacturer Richardson-Merrell had wanted to market the drug in the United States, but FDA reviewer Frances Kelsey read reports of harm published in the British Medical Journal. She ultimately refused to approve the drug for widespread use in the United States because she was not satisfied with the safety reports provided. Not all Americans escaped unscathed, however: 17 American babies were born with birth defects because their mothers took the drug, after some U.S. doctors had been persuaded to use it experimentally.
The media avidly reported the largely averted disaster in the United States, which elevated the political clamor for stronger drug laws and led to the 1962 amendments to the FD&C Act. Known colloquially as the "Kefauver-Harris amendments," the legislation established that drugs had to be both safe and effective prior to FDA approval; the 1938 law had required only proof of safety. The major change in the 1962 act would have made no difference in the thalidomide case, because safety, already covered by the 1938 law, was the issue there. Nevertheless, the experience with thalidomide drove stricter controls for advanced testing of new drugs.
Ironically, larger or more tightly controlled clinical trials would not have found the thalidomide problem, because pregnant women would not have been included in a trial. Testing thalidomide on pregnant mice might have identified the teratogenic effects seen in humans, but expanded human testing would not have. Real-world data about pregnant women using the drug, not a controlled trial, is what alerted authorities to the problem.
For an expanded FDA to find a future thalidomide, what was needed was an enhanced feedback mechanism for real-world use of medicines. Problems can be found in clinical trials, but they often arise with medicines when used as intended by the general population post-trial or—as with thalidomide—when used "off-label" (to treat a condition for which the drug was not tested in a clinical trial). Post-thalidomide, the FDA's efforts, budget, staff, and authority all grew enormously, leading to a massive expansion of clinical trials.
Though the initial changes demanded in the 1960s and early 1970s were not too onerous for the drug companies, the continued increase in demands for testing and, hence, costs led to the demise of many firms from 1973 onward. Economists like Sam Peltzman have argued that these changes reduced the flow of new drugs entering the market. And it is possible, perhaps even probable, that the drug lag induced by the 1962 act has caused more deaths than the FDA’s extra caution has saved. This is certainly Peltzman’s conclusion: “FDA’s proof of efficacy requirement was a public health disaster, promoting much more sickness and death than it prevented.”
It is hard to accurately assess the historical costs and benefits of delayed approvals but, as later posts will discuss, the costs probably significantly outweigh the benefits. What is certain is that the incentive structure for FDA officials is quite simple: you may be criticized for delays, but to allow a second thalidomide-like incident would be terminal to a career. The bias toward caution certainly exists.
As will be discussed in later posts, there should be a constant tension between safety concerns and faster approvals. Patients and the patient groups representing them want the right to try speculative medicines, but they are concerned about safety. Patients and the small companies developing cutting-edge therapies might benefit from faster approval, with appropriate liability waivers and risk acceptance by patients. But staff at the FDA and the large multinational companies they oversee benefit from the barrier to entry that stricter safety enforcement creates. In other words, most of the insiders benefit from delay, while only some of the outsiders want faster approval.
The FDA implemented the efficacy requirement for new drugs by promulgating regulations that detailed the scientific principles of adequate and well-controlled clinical investigations; this is how the FDA's standards for demonstrating efficacy through clinical trials evolved. FDA regulations defined "adequate and well-controlled" clinical investigations as permitting "a valid comparison with a control to provide a quantitative assessment of drug effect." In practice, this meant studies that typically were randomized, blinded, and placebo-controlled, generating data from which clinical benefit could be assessed.
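To make that trial-design logic concrete, here is a minimal sketch in Python—a hypothetical illustration, not FDA code or real trial data. It randomizes simulated subjects between drug and placebo arms and then makes a valid comparison with the control to quantify the drug effect; the response rates and sample size are assumptions chosen purely for illustration.

```python
# Minimal sketch of an "adequate and well-controlled" comparison:
# randomize subjects between drug and placebo, then quantify the drug
# effect against the control. Illustrative assumptions throughout.
import random
import math

random.seed(42)

# Assumed (hypothetical) true response rates, unknown to the investigator
P_PLACEBO, P_DRUG = 0.30, 0.45
N_PER_ARM = 500

# Randomized assignment: each simulated subject responds (1) or not (0)
placebo = [1 if random.random() < P_PLACEBO else 0 for _ in range(N_PER_ARM)]
drug = [1 if random.random() < P_DRUG else 0 for _ in range(N_PER_ARM)]

p1, p2 = sum(placebo) / N_PER_ARM, sum(drug) / N_PER_ARM
effect = p2 - p1  # the "quantitative assessment of drug effect"

# Two-proportion z-test against the null hypothesis of no difference.
# (Blinding matters for how a real trial is conducted; this sketch
# covers only the statistical comparison.)
pooled = (sum(placebo) + sum(drug)) / (2 * N_PER_ARM)
se = math.sqrt(pooled * (1 - pooled) * 2 / N_PER_ARM)
z = effect / se

print(f"placebo {p1:.1%}, drug {p2:.1%}, effect {effect:+.1%}, z = {z:.2f}")
```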
The FDA engaged with experts to improve the design and execution of these clinical trials, establishing external advisory committees that changed study designs and drove greater understanding of what data were required across various therapeutic areas. Critically, the FDA wanted trial sponsors to demonstrate clinical benefits, such as improved survival rates or improved function.
To reach this end, the FDA claimed that more than one adequate and well-controlled investigation was necessary, since a single trial might have biases that falsely demonstrated efficacy. As a result, two clinical trials became the standard. While this lowered the chance of a Type One error (falsely concluding that an ineffective drug works), it massively increased the chances of Type Two errors (failing to approve an effective drug, thereby delaying patients' access to it).
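To put rough numbers on that trade-off, consider a minimal sketch with illustrative assumptions—a conventional 5% per-trial false-positive rate, 80% per-trial power, and independent trials. These are textbook values, not figures drawn from FDA practice:

```python
# Illustrative sketch: how requiring two successful trials, rather than
# one, shifts the balance between Type One and Type Two errors.
# Both rates below are conventional textbook assumptions, not FDA figures.

alpha = 0.05   # assumed per-trial false-positive rate (Type One)
power = 0.80   # assumed per-trial probability of detecting a real effect

# One-trial standard
one_type1 = alpha            # ineffective drug passes:  5.00%
one_type2 = 1 - power        # effective drug fails:    20.00%

# Two-trial standard (both trials must succeed; independence assumed)
two_type1 = alpha ** 2       # ineffective drug passes twice:       0.25%
two_type2 = 1 - power ** 2   # effective drug fails at least once: 36.00%

print(f"Type One error: {one_type1:.2%} -> {two_type1:.2%}")
print(f"Type Two error: {one_type2:.2%} -> {two_type2:.2%}")
```

On these assumed numbers, the false-approval rate falls twentyfold, while the share of genuinely effective drugs that fail at least one trial—and are therefore delayed or abandoned—nearly doubles.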
Demonstrating clinical benefit over two trials takes a long time, and is expensive. By the 1980s, it became obvious that the cost of drug development was a significant barrier for developing drugs for rare diseases (small markets) or for diseases with no current therapies (where the science was new or uncertain).
Wider awareness of patients in dire need of treatment drove calls for reform, and the Orphan Drug Act of 1983 aimed to simplify approval for drugs with small markets, but its impact was minimal at best. There was significant distrust of the pharmaceutical industry—especially that it would cut corners on safety and overprice products. Suggestions or demands that approvals be made more rapidly were met with skepticism. This changed with the arrival of HIV.
While rare diseases attracted some media attention, a disaster like HIV led to much more significant concern and, ultimately, reform. From mid-1981 until the end of 1982, the U.S. Centers for Disease Control and Prevention (CDC) received reports of more than 650 AIDS cases; roughly 40% of those cases had already proven fatal.
The AIDS epidemic was an emergency in health care and a catalyst for change everywhere, including clinical-trial requirements. AIDS drove a reevaluation of what was essential to demonstrate efficacy, including how the FDA defined "adequate and well-controlled studies." By 1987, activist sit-ins outside FDA headquarters and widespread media attention to HIV/AIDS had become daily occurrences, building pressure to speed up access.
The FDA created a new class of investigational new drug (IND) application, the "treatment IND," which allowed patients to receive investigational treatments in an unblinded setting. The FDA still allowed trial sponsors to use data collected through such treatments in new drug applications for full approval. The process remained limited, even for cases like HIV, since the FDA did not want to undermine blinded studies, which remained the gold standard for drug approvals.
By allowing HIV patients to take investigational treatments, the FDA opened up research into how drugs could be approved faster. Trial experts sought ways to streamline trials by focusing on surrogate endpoints, which—while not direct measures of clinical benefit—were demonstrably correlated with improved clinical outcomes.
For example, improved T-cell count was determined to reliably predict fewer infections in AIDS patients and was accepted as a surrogate endpoint that could be used to demonstrate the efficacy of HIV/AIDS drugs. AZT, the first medicine approved to combat HIV, improved T-cell counts and was provisionally approved March 20, 1987. The time between the first demonstration that AZT was active against HIV in the laboratory and its approval was 25 months.
In 1990, AZT was approved for all HIV patients. It was initially administered in significantly higher dosages than today, typically 400 mg every four hours, day and night, compared to the modern dosage of 300 mg twice daily. While the drug's side effects (especially anemia) were significant, the decision to approve was widely supported, given the alternative of a slow and painful death from AIDS.
The AZT approval convinced many that success could be predicated on the use of surrogate endpoints. As consensus grew about the utility of surrogate endpoints in clinical-trial design, the FDA came under pressure to accept drug-approval reform. As a result, the FDA formalized the accelerated approval pathway in 1992.
The FDA could now expedite the approval of, and patient access to, drugs intended to treat serious and life-threatening diseases and conditions for which there were unmet medical needs. By relying on surrogate endpoints or other intermediate clinical endpoints that could be measured earlier than irreversible morbidity or mortality, development programs could be accelerated. Patients would generally be well-served, and the "substantial evidence of efficacy" standard would still largely be met.
In short, for patients with serious or life-threatening illnesses, and where there was an unmet medical need, there could be a different risk-benefit calculation: The more serious the illness and the greater the effect of the drug on that illness, the greater the acceptable risk from the drug. If products “provide meaningful therapeutic benefit over existing treatment for a serious or life-threatening disease, a greater risk may also be acceptable.”
Whether accelerated approval would be a success was uncertain in 1992, but it offered hope for AIDS patients otherwise destined to die for lack of effective treatment.
The post A Brief History of the US Drug Approval Process, and the Birth of Accelerated Approval appeared first on Truth on the Market.