Dismal Science Fictions: Network Effects, Microsoft, and Antitrust Speculation

 

 

by Stan Liebowitz & Stephen E. Margolis

 

 

To Be Published as a Cato Policy Analysis on or about Sept 1, 1998

Executive Summary

 

In the antitrust action against Microsoft, both the Justice Department and the private parties that have aligned against Microsoft have invoked novel economic theories to provide grounds for new antitrust doctrines and give new life to old ones. These theories imply that in high technology markets, a product or technology with a head start or large market share may have an insurmountable advantage over its rivals. The theories posit influences called network effects, increasing returns, or path dependence, any of which is alleged to create lock-in, a condition in which markets get stuck with inferior products or technologies. We demonstrate that these theories leave out important elements of real-world markets that may alleviate this kind of market failure, and therefore can be evaluated only by reference to real-world evidence. Our research, surveyed here, demonstrates that the claimed examples of lock-in are not market failures.

We also consider arguments that Microsoft’s allegedly locked-in position allows it to engage in exclusionary or predatory behavior. We argue that exclusion and predation do not explain Microsoft’s behavior. We further show that the rights to determine the use of the desktop are of limited value, since final consumers can alter the desktop quite easily. Such desktop rights will go where they are most valuable. Finally, we argue that progress in software inevitably involves increased functionality. A legal rule against adding function to software products would impede progress in the software industry.

_____________________________________________________________

Stan Liebowitz is a Professor of Economics in the Management School of the University of Texas at Dallas. Stephen E. Margolis is a Professor of Economics at North Carolina State University.

Introduction

Revolutions in science and technology, while bringing benefits to large numbers of people, also bring stresses of various sorts. New technologies can alter the scale of business activities, the geographic distribution of these activities, the types of firms that are involved in production and distribution, and the distribution of wealth. The benefits are many: consumers may enjoy cheaper goods and new products; firms that implement the new technology may make very substantial profits; workers may enjoy higher wages, new types of careers, and generally expanded opportunities. At the same time, some businesses and workers will lose as new skills and methods of commerce supplant old ones.

In these circumstances, interested parties have often enlisted legislation or regulation to preserve old interests or defend new ones. The historical motivations for U.S. antitrust law have been at least in part an attempt by various parties to defend their stakes in the economy. The antitrust debates over new computer technologies in general, and Microsoft in particular, are consistent with this pattern. Today, as in the past, there are calls for restrictions on the leading firms in new technology industries. While the focus of scrutiny is Microsoft, the effects are likely to reach much further. As with past generations of antitrust law, the precedents and enforcement practices established in the current debate are likely to have a wide and long-lasting influence.

In the policy debates surrounding the antitrust campaign against Microsoft, both the Justice Department and various parties that have aligned against Microsoft have invoked novel and incomplete economic theories to justify action against a firm with a large market share. In markets for high technology products, it is alleged, a company with a head start or the largest market share will have what may prove to be an insurmountable advantage over its rivals. These new theories are associated with terminology such as increasing returns, network effects, path dependence, or lock-in.

According to these theories, where industries exhibit increasing returns, certain old-fashioned beliefs about market outcomes and market processes should be cast aside in favor of the following: First, the success of products in the marketplace comes from size, good timing, aggressive strategies, or luck, rather than from their inherent value. Second, we can have no confidence that new products, technologies, or standards will be able to displace their established counterparts, even if they offer important advantages. Finally, and perhaps most directly of interest to the antitrust enforcers, any action taken by a market leader that might increase market share, such as lowering price, should receive heightened scrutiny, since such actions have the likely consequence of locking out superior products.

Widespread acceptance of such theories would necessitate a radical rethinking of antitrust policy. Further, it appears that such theories are holding considerable sway in today’s antitrust debates. For example, Business Week reported:

Instead of basing his attack against Microsoft on outdated economic theories that demonize bigness, Assistant Attorney General Joel I. Klein is relying on a developing body of antitrust thinking that warns that the threat of anticompetitive behavior could be even greater in high technology than in traditional industries. This research on "network externalities" deserves to be taken seriously . . . . The Microsoft case is one of the first ever in which Justice has made use of network theory.

Even the pundits at the Wall Street Journal, a publication not known for embracing radical expansions of antitrust law, have fallen for lock-in theory. Alan Murray recently opined, on the paper’s front page, that:

[H]igh-tech industries might be more susceptible to antitrust problems than their low-tech brethren. That’s because consumers often feel a need to buy the product that everyone else is using, even if it isn’t the best, so their equipment is compatible. Economists call this "network externalities."

It’s why most people use a keyboard that begins clumsily with the letters QWERTY; why most videos are now available only in VHS format; and why Windows has become the dominant operating system.

These new theories provide a convenient solution for those who would bring antitrust claims to bear against market leaders such as Microsoft. Those "outdated economic theories," so cavalierly dismissed in the above quote, might fail to support antitrust enforcement against the current generation of market leaders in high-tech industries. Standard theories of monopoly, which have long provided what economic foundation there was for antitrust, hold that monopoly restricts output in order to elevate prices; on that view, monopoly is harmful for those reasons alone. In contrast, what we seem to see in high technology markets is falling prices and increasing quantities, even as the market shares of the leaders become extremely large. Absent an allegation of high prices, antitrust authorities have looked to these new lock-in theories to provide some economic support for their actions against such high technology firms.

The problem with all this is that these new economic theories are fundamentally flawed. Our writings, appearing in academic journals since 1990, show that the case for lock-in is an extraordinarily weak one.

While our work has criticized lock-in theories as being based on overly restricted assumptions, the more telling criticism has to do with the lack of empirical support for these theories. Alleged examples of lock-in seem, when held up to critical scrutiny, to be more the products of wishful thinking than the fruits of serious study. This essay reviews the case against the economic theory of lock-in and analyzes the lock-in claims levied against Microsoft in recent months. With regard to Microsoft, as elsewhere, neither theory nor fact supports the call for antitrust enforcement measures.

The Economics of Increasing Returns, Network Effects, and Path Dependence

There is a closely related group of ideas that come together under the theory of lock-in. Increasing returns are said to occur wherever the net benefits of an activity increase with the scale of the activity. Within a firm, increasing returns are present if the average cost of producing goods decreases as the output of the firm increases. All firms are thought to have increasing returns at outputs that are small. Economists have also long considered cases in which increasing returns are more persistent, such that even if a single firm were supplying the entire industry demand, it would still experience decreases in average costs as output increased. In such a case, known as natural monopoly, monopoly is the likely evolution of a free market. Further, in this instance, monopoly offers society the opportunity for the lowest possible production cost because it takes full advantage of decreasing costs: In this instance monopoly is socially desirable. The problem, however, is that even though such a monopoly would minimize costs, it would be expected to restrict output and elevate price.

Many of the industries regarded as experiencing these persistent economies of scale are those treated as public utilities: electricity, telephone, natural gas, cable TV, and others. The policy response to this circumstance has been price (or rate-of-return) regulation. It is interesting, particularly in the context of the antitrust debate over operating systems, that the industries we have traditionally regarded as natural monopolies are now being widely deregulated.

Network effects, also sometimes called network externalities, may be understood as a special case of increasing returns. With a network effect, the benefit that someone gets from purchasing a product depends upon the number of other users of the product. For example, people who buy fax machines will find them more valuable as other people buy compatible fax machines. The relationship to increasing returns is straightforward. As a product becomes more popular, it becomes more valuable to consumers, giving it an ever-increasing advantage over its smaller rivals. As a result, smaller rivals are likely to disappear. We may settle on a single format for videocassette recorders or a single communication protocol for fax machines. Whether this constitutes a monopoly in the usual sense of a single firm depends on ownership of the standard, but in many cases it will.

As is the case for natural monopoly, a monopoly outcome may be socially desirable. This observation is critical for consideration of antitrust policies. Of course, the potential for monopoly price elevation still applies, and if it occurred, such price elevation might result in social losses. But this has not been the concern of the network externality literature that apparently has influenced the Justice Department.

The problem of path dependence, or lock-in, begins with the observation that monopoly is a likely outcome of network effects. But the concern here shifts to whether the best technology or product is chosen. The allegation of this literature is that we are likely to get the wrong monopolist, producing the wrong product, not that the monopolist charges the wrong price or produces the wrong quantity.

Theories of Lock-in

A useful starting point in understanding the theory of lock-in is an example presented by Brian Arthur, one of the leading figures in the literature of lock-in.

Table 1 presents his example. In Arthur's table, society faces an opportunity to develop one of two technologies. For each technology, the greater the number of adopters, the greater the payoffs to those adopters. Network effects, for example, could produce this relationship.

Individuals make decisions based on their private interests and receive payoffs as shown in the table. The first adopter of technology A would expect a payoff of 10. Similarly, the first adopter of technology B would expect a payoff of 4. Under these circumstances, Arthur notes, the very first adopter of any technology would certainly choose A, thus receiving 10 instead of 4. A second adopter would reach the same conclusion, and so on, since the advantage of technology A over technology B would only increase with additional adoptions of A. But notice that if the number of adopters does eventually become large, technology B offers greater payoffs. Thus for Arthur, the table tells a story of lock-in to an undesirable outcome.

There are problems with this table and the lessons drawn from it. First, note that the payoffs in the table must increase more steeply for B than for A if this story is to unfold as presented. Yet there is no reason to think that, among competing technologies, the one with the greatest payoffs at large numbers of users will not also have the greater payoffs at small numbers of users. At a minimum, this restriction narrows the set of possible lock-ins.

Table 1: ADOPTION PAYOFFS

Number of Previous Adoptions:    0   10   20   30   40   50   60   70   80   90  100
Technology A:                   10   11   12   13   14   15   16   17   18   19   20
Technology B:                    4    7   10   13   16   19   22   25   28   31   34
Also, the table does not allow adopters to anticipate or influence the outcome. But people clearly do both. If the first person faced with the opportunity to purchase a fax machine had assumed that he was going to be the only user, we would still be waiting for this technology to catch on. If a technology is owned, the owner may assure adopters that they will receive the highest available payoffs by leasing applications with a cancellation option, publicizing current and planned adoptions, or simply bribing adopters through low prices or other compensation. Of course, the owners of both of these technologies can do this, but the owner of the technology that creates more wealth can profitably invest more to win such a contest.
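To make these dynamics concrete, here is a minimal simulation of Arthur's adoption model, in Python. The payoff schedules are read directly off Table 1; the population size and the expectation rule in the second part are our own illustrative assumptions, not part of Arthur's example.

    def payoff_a(n):
        """Payoff to a new adopter of A when n others have already chosen A (Table 1)."""
        return 10 + 0.1 * n

    def payoff_b(n):
        """Payoff to a new adopter of B when n others have already chosen B (Table 1)."""
        return 4 + 0.3 * n

    # Part 1: Arthur's myopic adopters. Each picks whichever technology
    # pays more, given the adoptions made so far.
    a_count = b_count = 0
    for _ in range(100):
        if payoff_a(a_count) >= payoff_b(b_count):
            a_count += 1
        else:
            b_count += 1
    print(a_count, b_count)  # 100 0 -- every adopter chooses A; B never gets started

    # Part 2: adopters who anticipate that the market will converge on a
    # single technology compare payoffs at the expected scale instead.
    expected_adoptions = 100  # illustrative assumption
    choice = "A" if payoff_a(expected_adoptions) > payoff_b(expected_adoptions) else "B"
    print(choice)  # B -- a payoff of 34 beats 20, so anticipation undoes the lock-in

With myopic adopters the market never leaves technology A, exactly as the table suggests; once adopters form expectations about eventual market size, or an owner of B credibly guarantees payoffs at scale, the inferior outcome disappears.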

Lock-in theory is often argued in the context of formal models with multiple equilibria. In these models, several different outcomes are equally likely, even though they may not be equally desirable, and the choice among them is largely a matter of coincidence. Better products, it is shown, may not win. All of this comes to us dressed in the full rigor of mathematical proofs, supported by theorems on stochastic processes. These models, although seemingly more complex than Arthur's simple table, again abstract from most of the things that companies do to win technological races and most of the things that consumers do to purchase well. Including these factors is difficult, and doing so cannot prove that markets always choose correctly. Therefore, whether we are considering the increasing-returns story of Table 1 or the multiple-equilibrium models, we are left with an empirical question: Have these models captured something important about the way that markets work? The next section addresses that question.

Evidence for Lock-in in the Economy: Do We Get the Wrong Monopolist?

Facts, or empirical evidence, must be the final arbiters of these theories, as they are with all theories. Given the extensive publicity received by these theories, one might conclude that they are supported by a large body of evidence. Nothing, however, could be further from the truth. The little support that has been offered consists of a few key examples where markets have supposedly settled on the wrong system or standard and failed to change to a purportedly better system or standard. The key examples are presented in the following subsections.

The QWERTY Keyboard

The most commonly cited example in the network-externality, path-dependence literature is the prosaic typewriter keyboard. The importance of this example can be gleaned from Paul Krugman’s 1994 book "Peddling Prosperity." In that book Krugman speaks glowingly of this entire literature in a chapter entitled "The Economics of QWERTY." He does, however, appear to have altered his views when made aware of the facts presented below.

QWERTY refers to the letters in the upper left-hand portion of the typewriter (and computer) keyboard. The received story is that the QWERTY arrangement was able to minimize the problem of jamming keys in the first typewriters, by slowing typing speed. The story continues that QWERTY’s ascendance was due to a serendipitous association with the winner of a famous typing contest who by happenstance used the QWERTY design.

The QWERTY design is reputed to be far inferior to the "scientifically" designed Dvorak keyboard, which allegedly offered a 40% increase in typing speed. Supposedly, the Navy conducted experiments during the Second World War demonstrating that the costs of retraining typists on the new keyboard could be fully recovered within ten days. The story is claimed to validate path dependence: No typists learn Dvorak because too many others use QWERTY, which increases the value of QWERTY all the more.

This is an ideal example, since the number of dimensions of performance is small and in these dimensions, the Dvorak keyboard appears overwhelmingly superior. Yet upon investigation, this story appears to be based on nothing more than wishful thinking and a shoddy reading of the history of the typewriter keyboard. The QWERTY keyboard, it turns out, is about as good a design as the Dvorak keyboard, and was better than most competing designs that existed in the late 1800s when there were many keyboard designs maneuvering for a place in the market.

Ignored in these stories of Dvorak's superiority is a carefully controlled experiment conducted under the auspices of the General Services Administration in the 1950s comparing QWERTY with Dvorak. That experiment contradicted the claims made by advocates of Dvorak and concluded that it made no sense to retrain typists on the Dvorak keyboard. Modern research in ergonomics also finds little advantage in the Dvorak keyboard layout, confirming the results of the GSA study.

So on what basis were the claims of Dvorak's superiority made? Critical examination shows that most, if not all, of the claims can be traced to the patent owner, Professor August Dvorak. His book on the relative merits of QWERTY versus his own keyboard has about as much objectivity as a modern infomercial on late-night television.

The wartime Navy study turns out to have been conducted under the auspices of the Navy's chief expert in time-motion studies--Lt. Commander August Dvorak--and the results of that study were clearly fudged. There is far more to this story, but it all leads to the conclusion that the QWERTY story qualifies as no better than a convenient myth.

The acceptance of this story, wrong as it is in almost every detail, illustrates both the desire of path dependence theorists for empirical support and their reluctance to check the facts. The economic historian who wrote an influential paper on the keyboard story, and who cites the Navy study as support for path dependence theories, never actually examined that study.

We published a very detailed account of all this in the spring of 1990. Yet in spite of our paper, which has not been factually disputed, Garth Saloner, who is certainly aware of it, used the keyboard example as recently as last fall at Ralph Nader's anti-Microsoft conference. One could hardly find better evidence of this theory's lack of empirical support than the continued use of a result that is known to be incorrect.

Beta-VHS

The second most popular example of how markets allegedly get locked in to poor standards is the Beta-VHS videorecorder format struggle. It is sometimes claimed that Beta was a better format and that VHS won the competition between formats only because it fortuitously got a large market share early in the competition with Beta. But this story turns out to be just as inaccurate as the keyboard story.

In 1969 Sony developed a cartridge-based videorecorder, the U-matic, which it hoped to sell to households. Since other companies had such products in the works, Sony invited Matsushita and JVC to produce the machine jointly and to share technology and patents, which they did. Sony hoped by this behavior to achieve a standard, which indicates considerable foresight on the part of the market participants. But the U-matic was not a success.

In the mid-1970s, Sony developed the Betamax. Sony demonstrated the machine to Matsushita and JVC and disclosed technical details, hoping to establish a new set of agreements. Months later, when JVC demonstrated its new machine to Sony, Sony engineers concluded that JVC had expropriated their ideas. The resulting bitterness left Sony and Matsushita-JVC each to go their separate ways.

The only real technical difference between Beta and VHS was the manner in which the tape was threaded and, more importantly, the size of the cassette. Sony believed that a paperback-sized cassette, allowing easy transportability (although limiting recording time to 1 hour), was paramount to the consumer, whereas Matsushita believed that a 2-hour recording time, allowing the taping of complete movies, was essential. The larger VHS tape meant that for any given state of the recording technology, VHS machines could provide longer playing time, or higher quality playback, or some combination of the two.

The behavior of the antagonists in this competition is a wonderful example of forward-looking behavior. They used partnerships, advertising, pricing and any other tool at their disposal. The behavior was nothing like the passive adoption story that the theoretical models of lock-in present.

In an attempt to increase market share, Sony contracted to have its Beta machines sold under Zenith's brand name, a highly unusual move for Sony, and licensed the format to Toshiba and Sanyo. To counter this move, Matsushita allowed RCA to put its name on VHS machines and brought Hitachi, Sharp, and Mitsubishi into its camp. Sony slowed down tape speed to increase its playing time to two hours; VHS did the same and increased playing time to four hours. RCA radically lowered price and came up with a simple but effective ad campaign: "Four hours. $1000. SelectaVision." Zenith responded by lowering the price of its Beta machine to $996.

The market's referendum on playing time versus tape compactness was decisive and rapid. Beta had an initial monopoly for almost two years. But within six months of VHS's introduction in the US, VHS was outselling Beta. These results were repeated in Europe and Japan as well. By mid-1979 VHS was outselling Beta by more than 2 to 1 in the US. By 1983 Beta's world share was down to 12 percent. By 1984 every VCR manufacturer except Sony had adopted VHS.

Not only did the market not get stuck on the Beta path; it was able to make the switch to the slightly better VHS path. Although Beta was first, VHS was able to overtake Beta very quickly. This, of course, is the exact opposite of what path dependence theory predicted: that the first product to reach the market is likely to win the race even if it is inferior to later rivals.

Now listen to the version of this story found in Brian Arthur’s work:

Both systems were introduced at about the same time and so began with roughly equal market shares . . . . Increasing returns on early gains eventually tilted the competition toward VHS: . . . if the claim that Beta was technically superior is true, then the market’s choice did not represent the best outcome.

The story is little more than an inaccurate anecdote. The elevation of poorly researched anecdotes to the category of ‘proof’ for narrowly constructed theories reappears in the current discussions surrounding Microsoft, as shown below.

Other Purported Examples, Including the Macintosh

Path dependence advocates have sometimes claimed that the continued use of FORTRAN by academics and scientists is an example of getting stuck on a wrong standard. But one doesn't have to peruse too many computer magazines to realize that FORTRAN was long ago superseded by languages such as Pascal, C, C++, and now, perhaps, Java. Individuals continue to use FORTRAN not because they want to be like everyone else, but because their cost of switching is too high. Network effects, as normally modeled, should have induced them to switch years ago. This is a story of ordinary sunk costs, not of network "externality" or other market failure.

Path dependence proponents have also sometimes claimed that the gasoline-powered engine might have been a mistake, and that steam or electricity might have been a superior choice for vehicle propulsion. This is in spite of the fact that in the century since automobiles became common, with all of the applications of motors and batteries in other endeavors, and with all the advantages of digital electronic power-management systems, today's most advanced electric automobiles do not yet equal the state-of-the-art internal-combustion automobiles of the late 1920s.

The most captivating of these other stories, however, is the success of the PC over the Macintosh. Mac users naturally favor the claim that they chose operating systems wisely whereas the rest of the world ignorantly opted for Microsoft’s products. The presence of this large and somewhat embittered audience probably explains why the idea of getting stuck with an inferior product resonates so strongly in the Microsoft case, playing as it does in the arena of personal computer aficionados.

Yet even here the facts do not support the lock-in thesis. Yes, Macintosh owners were forward-looking when they made their purchases in the early and mid-1980s. At that time, the advantages of a graphical interface were clearly understood, if not fully implemented. The lack of implementation, however, had to do with the high price, in terms of speed and cost, that owners of graphical computers had to pay at a time when processors were slow and memory was expensive.

A graphical user interface requires considerably more computing power to get the job done, which significantly increased costs at a time when that power was hard to come by. Macintosh owners had to wait for their screen displays to change, whereas PC owners had almost instantaneous updates. True, Macintosh owners could see italics and bold onscreen, but to print the screen as they saw it required a PostScript printer, and such printers cost in the vicinity of a thousand dollars more than ordinary laser printers. The graphical user interface allowed users to learn programs much more easily, but in many business settings a computer tended to be used for only a single application. In that environment, the operator had very little interaction with the operating system interface, and once the operator had learned the application, the advantages of the graphical user interface were diminished.

The case for DOS, therefore, was stronger than it appears from the vantage of the 1990s, with our multimegabyte memories and multigigabyte hard drives. Now that we routinely use computers that, compared to those old DOS machines, run thirty times as fast, with fifty times the memory and one hundred times the hard drive capacity, the requirements of a graphical operating system seem rather puny. But they were enormous back in the days of DOS.

As processors became faster, memory cheaper, and hard drives larger, the advantages of a graphical user interface should have overwhelmed those of any command-based (text) system such as DOS. If we were still using DOS, that would certainly be an example of being stuck with an inferior product. But we are not using DOS.

Instead we are using a Mac-like graphical user interface. If someone went to sleep in 1983 and awoke in 1995 to see a modern PC, they most likely would think that the Macintosh graphical user interface had been colorized and updated, with a second button added to the mouse. Our modern Rip van Winkle might be surprised to learn, however, that the owner of the graphical user interface was not Apple, but Microsoft.

The movement from DOS to Windows was costly, yet it occurred quite rapidly. As in the other examples, the evidence shows quite the opposite of what the path dependence pundits predict: not markets getting stuck in ruts, but markets that make changes when there is a clear advantage in doing so.

Microsoft’s Dispute with the Justice Department

Historically, new antitrust doctrines have developed in connection with the big cases of the times. These big cases most often involved the biggest and most successful companies. That pattern is being repeated today. Various antitrust actions against Microsoft have been the main venue for discussions and actions that propose and explore new economic foundations for antitrust and new interpretations of old antitrust doctrines.

Microsoft's antitrust problems began with a government investigation of Microsoft's pricing of software sold to original equipment manufacturers (OEMs). Microsoft agreed to end the challenged practices in a highly publicized 1994 Consent Decree with the Department of Justice (DOJ). Whether or not those practices were anticompetitive, there can be little doubt that they had little to do with Microsoft's successes in the market.

The Consent Decree did little, however, to end Microsoft's legal problems with the DOJ. When Microsoft attempted to purchase Intuit, a maker of financial software, the DOJ opposed the deal. And in a highly publicized ruling, Judge Stanley Sporkin temporarily rejected the Consent Decree itself; his decision, later overturned on appeal, appears to mark the first time that path dependence theory reached the point of having a serious influence on policy.

There were other skirmishes as well. The DOJ examined Microsoft’s inclusion of the Microsoft Network icon on the Windows 95 desktop. It was claimed that consumers would be unwittingly forced into acceptance of this product to the detriment of competition in the online service industry.

The most recent twist in the DOJ’s continuing investigation is its interest in Microsoft’s channel partners on its ‘active desktop’. The antitrust theory behind this investigation is still unclear, but appears to be related to the exclusionary claims being made against Microsoft with regard to Internet Explorer.

The DOJ's primary focus appears to be an investigation, initiated in 1996, of the competition between Netscape and Microsoft. This investigation erupted into activity recently when the DOJ accused Microsoft of violating the 1994 Consent Decree. The current issue revolves around Microsoft's inclusion of its web browser in the Windows operating system, and Microsoft's insistence that the browser not be removed by OEMs. The next section discusses that dispute.

Newspaper accounts and public statements by Department of Justice officials and other participants indicate that the economics behind these investigations are either partly or completely based on the theories of path dependence. The most famous, and perhaps most influential, attempt to connect these theories to antitrust is a series of briefs prepared by Gary Reback, a lawyer working for several of Microsoft’s competitors, along with two economists who have played prominent roles in this literature: Brian Arthur and Garth Saloner.

These briefs actually go much farther than the economics literature has gone. Reback does not stop with the traditional path dependence claim that a market-based economy is likely to choose all sorts of wrong products. Nor does he stop with the claim that innovation might be eliminated in the computing industry. Instead, Reback portrays Microsoft as an evil empire intent on nothing less than world domination. To hear him tell it, the American Way of Life will be imperiled if Microsoft is not reined in by the government: "It is difficult to imagine that in an open society such as this one with multiple information sources, a single company could seize sufficient control of information transmission so as to constitute a threat to the underpinnings of a free society. But such a scenario is a realistic (and perhaps probable) outcome."

These are fantastic claims indeed. They were repeated at the conference on Microsoft recently held by Ralph Nader. Brian Arthur, Gary Reback, and Garth Saloner all made presentations.

Antitrust Doctrines and Network Technologies

Both the Justice Department and some of Microsoft's private competitors have used theories of lock-in to support a call for heightened antitrust scrutiny of Microsoft. By itself, lock-in would seem not to constitute an antitrust offense. There is nothing in the law that makes it a crime to have technologies that are less than the best available or less than the best imaginable. Instead, lock-in theories offer an alternative way to claim harm in the absence of the usual monopoly problem of elevated prices and restricted outputs. Lock-in stories also give new life, and a contemporary spin, to old antitrust doctrines. The following two subsections consider some of the antitrust issues that have been raised in the software industry. The first describes why monopoly leverage requires special conditions that make it nearly impossible. The second describes why no smart monopolist would try predatory bundling.

Monopoly Leverage, Tie-ins, and Bundling

In theory, monopoly leverage occurs when a firm uses its monopoly in one industry to win a monopoly in another industry. Tie-in sales and bundling are contractual practices that are sometimes alleged to facilitate monopoly leverage, but tie-ins and bundling do not have to create a new monopoly to be profitable, nor do they necessarily harm consumers. In fact, as this subsection explains, the theory of monopoly leverage requires so many special conditions that it seems certain to remain just that: a theoretical problem.

Economists have long been skeptical that monopoly leverage is either feasible or profitable. In most circumstances, forcing consumers to purchase some other product so as to create a second monopoly will not add to a firm’s profits. A monopolist can instead simply extract the value of its monopoly through the pricing of the good in the market where it has its first monopoly.

Suppose, for example, that a firm held a monopoly on oil furnaces. Such a monopoly might be quite profitable; oil furnaces are useful things that offer some advantages over other kinds of furnaces. The monopolist’s ability to maximize profits would face some limits, of course, such as the availability of substitutes like propane and electric heating. Still, the monopolist could devise a pricing system that captures the extra value of using an oil furnace rather than a competing source of heat. The lower the price of heating oil relative to the price of propane or electricity, the greater that value would be. If the furnace monopolist were to become the oil monopolist too, he might raise the price of heating oil, but that would only reduce what he could extract through the furnace price.
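The arithmetic behind this point is easy to check. Here is a minimal sketch of the furnace example in Python; all of the dollar figures are hypothetical illustrations, not numbers from any actual market.

    # One-monopoly-profit arithmetic for the furnace example (hypothetical figures).
    VALUE_OIL_HEAT = 10_000        # consumer's gross value of heating with oil
    VALUE_ALTERNATIVE = 7_000      # net value of the best alternative (propane, electric)
    COMPETITIVE_FUEL_BILL = 2_000  # lifetime oil bill at competitive prices

    def rents(fuel_markup):
        """Furnace premium and total rent when heating oil carries a given markup."""
        # The furnace can sell for at most the surplus that oil heat offers
        # over the alternative, after the consumer pays for fuel.
        premium = VALUE_OIL_HEAT - COMPETITIVE_FUEL_BILL - fuel_markup - VALUE_ALTERNATIVE
        if premium < 0:
            return 0, 0  # priced out: the consumer switches to the alternative
        return premium, premium + fuel_markup

    for markup in (0, 250, 500, 1_000):
        premium, total = rents(markup)
        print(f"fuel markup {markup:>5}: furnace premium {premium:>5}, total rent {total:>5}")
    # Total rent is 1,000 at every markup: each dollar added to the oil
    # price comes straight out of the furnace price, so the second
    # monopoly adds nothing to profits.

However the monopolist splits its take between the furnace price and the fuel markup, the total is the same; there is only one monopoly profit to collect.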

Consider this analogy: regardless of whether or not it worried you that someone had a key to the front door of your house, it would not worry you more if that person also had a key to your back door. Nevertheless, the idea that the second monopoly could be used for something has intuitive appeal. Even if the monopoly in furnaces could be used to extract everything that can be extracted from the furnace users, could not a monopoly in heating oil be used to extract something from people who use heating oil for another purpose? It turns out that, yes, there is a circumstance in which a second monopoly is worth something. That circumstance is a very limited one, however. If the furnace monopolist could also monopolize the heating oil industry, he could extract additional monopoly rents from heating-oil users who were not also his furnace customers.

The question then arises whether one monopoly could ever be extended to capture customers who buy only in the second market. The answer again is yes, it is possible--but, again, only under very special circumstances. If there were economies of scale in the heating oil industry, and if too few customers bought heating oil for non-furnace uses to support a separate supply of heating oil, then the furnace seller could lever his monopoly in furnaces into a monopoly in heating oil by preventing furnace customers from buying heating oil from other sources. By assumption, the non-furnace customers would not offer a large enough market to support any independent oil supplier, and the furnace monopolist could then extract new monopoly rents in this other market. This explanation of leverage is sometimes referred to as market foreclosure. Ironically, the larger the furnace monopolist is relative to the heating oil industry, the less likely it is to benefit from monopolizing heating oil, since it will already have virtually all the potential customers.
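The knife-edge character of these conditions can be seen with a few hypothetical numbers (again, none of them drawn from any real market):

    # Foreclosure arithmetic for the heating-oil example (hypothetical figures).
    FIXED_COST = 100_000  # scale economies: fixed cost of an independent oil supplier
    MARGIN = 50           # margin per customer a supplier earns at competitive prices
    MONOPOLY_MARKUP = 30  # extra rent per customer if heating oil is monopolized

    def foreclosure_gain(non_furnace_customers):
        """New rent from foreclosure, given customers the monopolist doesn't already serve."""
        # An independent supplier survives on non-furnace demand alone only
        # if its margin on those customers covers the fixed cost.
        if non_furnace_customers * MARGIN >= FIXED_COST:
            return 0  # foreclosure fails: an independent supplier stays in the market
        # Otherwise the only new rents come from the non-furnace customers.
        return non_furnace_customers * MONOPOLY_MARKUP

    for n in (500, 1_500, 3_000):
        print(f"{n:>5} non-furnace customers: new rent from foreclosure = {foreclosure_gain(n)}")
    # 500 customers: foreclosure works but yields little (15,000 in new rent).
    # 1,500 customers: foreclosure works and pays 45,000.
    # 3,000 customers: the segment is big enough (150,000 >= 100,000) to
    # support an independent supplier, and foreclosure fails entirely.

The levered market must be large enough to be worth capturing yet too small to support an independent supplier--precisely the Goldilocks problem described next.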

This explanation is a theoretical possibility of harmful monopoly leverage--but it requires very special conditions. The levered market must be big enough to matter, but not so big as to allow competitive independent suppliers to survive. There must be some economies of scale in the levered market, but not enough to have caused prior monopolization of the market. The levered market must have many of the same customers as the initial monopoly, so as to provide control of the new market, but not too many, or there will be no new rents to extract by establishing the second monopoly. In short, leveraging can be viewed as the Goldilocks theory of monopoly extension--everything has to be just the right size.

Do the facts of the Microsoft case fit within the leverage story at all? If Microsoft requires each customer to buy one copy of some other Microsoft product, this would, in and of itself, add nothing to its profits. That sort of tie-in sale with fixed proportions has long been understood to offer no particular advantage to the monopolist. So the issue becomes whether Microsoft could crowd out any rivals that sell to customers who do not use Microsoft’s own operating system.

Here the application of the market foreclosure theory to Microsoft runs into trouble. If the products allegedly crowded out by Microsoft's bundling are products that run only under the Windows operating system, then monopoly leverage offers Microsoft no advantage.

To illustrate this point, consider a hypothetical example of successful tying-foreclosure using personal software products, such as Quicken and Microsoft Money. Both are sold in the Macintosh market and the Windows market. If Microsoft were to build Microsoft Money into the Windows operating system, and if this eliminated Quicken in the Windows market, and if the Macintosh market were too small to allow a product like Quicken to be produced at reasonable average cost in that market alone, and if Microsoft continued to sell the product separately in the Macintosh market (now at a monopoly price), and if there were few additional costs for Microsoft in creating a Macintosh version, then, and only then, would Microsoft benefit from leveraging monopoly.

Has this been done? Does Microsoft sell in the Macintosh market the disk compression, backup, fax, or other programs that are included in the Windows operating system? Although we have not performed an exhaustive search, the only product that comes to mind is a Macintosh version of Internet Explorer. But Microsoft gives away this product in the Macintosh market, and promises a permanent price of zero. If Microsoft sticks to its promise, it cannot profit from including the browser in the operating system. Even then, the other required conditions for market foreclosure (the Macintosh market being too small to support Navigator, and the cost to Microsoft of creating a Macintosh version not being too large) may very well fail to obtain.

A simple rule to prevent this type of foreclosure would bar Microsoft from including in its operating system any program that it sells separately in another market. But although such a rule might remove the risk of this sort of market leverage, it would also penalize customers in other markets, who would be excluded from the benefits of these programs in cases where no market leverage was contemplated. Given all the special conditions required for successful leveraging, it would be unwise to implement such a rule without further investigation of the potential harm of denying Microsoft products to consumers in tied markets.

Predatory Bundling

The most recent allegations against Microsoft concern predatory use of its ownership of the Windows operating system. The specific allegation is that Microsoft's integration of its browser into the operating system is largely predatory in intent, aimed at forcing other firms out of the browser market. The implications of this issue, however, reach well beyond the browser market, to the very questions of what an operating system can be and the nature of progress in the software industry.

Antitrust law defines as predatory those actions that are inconsistent with profit maximizing behavior except when they succeed in driving a competitor out of business. In predatory pricing, for example, a would-be monopolist allegedly charges a price that is so low that other firms cannot sell their outputs at prices that will cover even their variable costs. These other firms are then forced either into bankruptcy or to exit the industry because they have become unprofitable. Upon completing the predatory episode, the predator then gets to enjoy the benefits of monopoly pricing. It should be noted that during the predatory episode, consumers benefit greatly from the low prices, so it is only the later monopoly pricing that causes harm to consumers.

Economists are generally skeptical of claims that price cuts or other actions have predatory intent because they have determined, both in theory and in practice, that predatory campaigns are unlikely to have profitable endings. First, the predatory action is likely to be more expensive for the predator than for the prey. The predator cannot just cut price; it must also meet market demand at the lower price. Otherwise, customers will be forced to patronize the prey, even at higher prices. If the predator is a large firm, it stands to lose money at a faster rate than the prey. Second, even if the predation succeeds in bankrupting the prey, there is no guarantee that the prey, or some new firm, will not simply enter the industry once the predator establishes monopoly pricing. If there are fixed investments in the industry, such as durable specialized equipment, the predator cannot establish monopoly prices as long as those durable assets can return to the market. If there are no durable assets, then the prey can cheaply exit the industry and re-enter when monopoly prices return. Either way, the predatory episode drains the predator while imposing uncertain burdens on the prey.

Another problem with predation is that almost any action that a firm takes to become more attractive to consumers can be alleged to be predatory. If customers like something a firm is doing, its competitors will not. In the most elementary case, a price cut or product improvement will damage the prospects for some competitor. It bears noting that most of the alleged cases of predation have been demonstrated to be false.

Predatory bundling, like predatory pricing, is a simple idea that ultimately has the same failings as pure predation. If a firm with a controlling share of one product bundles in some other product, competitors who sell the bundled-in product will have to compete with a product that, to the consumer, has a zero cost. If Microsoft includes in its operating system a piece of software that competes with other vendors in what had been a separate market, Microsoft ensures that virtually all purchasers of computers then have a copy of the new software.

Suppose Microsoft bundles a fax program into Windows98. If Microsoft's fax program, relative to its cost, is better than other fax products, then the bundling cannot really be predatory. The Microsoft product would win in the marketplace anyway, and adding it to the operating system costs less than its value to consumers. If the product is worth more to consumers than the cost of creating it, then bundling will be profitable without any exclusionary consequences. In contrast, if Microsoft's fax program, again considering its cost, is inferior to alternatives or provides less value than its cost, then Microsoft would profit only if the bundling caused other firms to exit the market and Microsoft were then able to raise the price of the operating system by the now-higher implicit monopoly price of its fax product.
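A minimal numeric sketch of this calculus, with hypothetical values (none of these figures describe any actual product):

    # Bundling calculus for the fax example (hypothetical figures).
    MS_VALUE, MS_COST = 40, 25  # consumer value and per-copy cost of Microsoft's fax program
    RIVAL_VALUE = 55            # consumer value of a superior rival fax program

    # Bundling pays for itself, with no exclusion needed, whenever the
    # bundled program is worth more to consumers than it costs to supply.
    print(MS_VALUE > MS_COST)   # True: 40 > 25, so this bundle is not predatory

    # Once every Windows buyer already has the bundled program, the most a
    # rival can charge is its quality edge over that program.
    print(RIVAL_VALUE - MS_VALUE)  # 15: a superior rival survives at the price of its edge

    # The predatory case requires MS_VALUE < MS_COST: Microsoft then loses
    # (MS_COST - MS_VALUE) on every copy it gives away and recoups only if
    # rivals exit and the operating system price can later rise by a
    # monopoly premium on the fax function.

The second calculation anticipates the point developed in the next paragraph: a genuinely better rival is never priced out of the market, only repriced.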

As a strategy, however, predatory bundling has the same liabilities as predatory pricing. As in predatory pricing, Microsoft stands to lose money (relative to not including the fax software) faster than its rivals if its fax program costs more to produce than its value to consumers. Moreover, a rival with a superior fax program could keep the product on the market for a price that reflects the advantages that it offers over the bundled product. The rival could not charge more than that because the Windows consumer would already have the inferior fax program. The rival could still capture the extra value that its own intellectual property contributes, however. While it may lose profits or market share, the rival will retire its fax program only if it is inferior to Microsoft’s.

From a social or consumer welfare perspective, then, Microsoft's bundling action would do no harm. The rival software is a fixed asset in the industry; it does not wear out. In the extreme case, a bankrupt producer might put its fax program on the web, making it freely available to all. That would limit what consumers would be willing to pay for the program bundled into Windows98 to its extra value over the free program, which is zero. Thus Microsoft would be unable to charge a higher price for the bundled software despite having incurred the costs of creating the fax program. Microsoft would lose money and fail to monopolize the market. Furthermore, the creative talents used to make the rival fax program would still exist, ready for some other firm to hire should Microsoft ever achieve a monopoly price on the fax program.

Of course, an antitrust enforcer might reply that the OS producer has distribution or coordination advantages that an independent rival lacks. But if these are real advantages that outweigh any quality advantages of the rival, then it is efficient for the OS producer to bundle its fax program.

All this suggests that bundling, as a predatory action, is unlikely to succeed. Furthermore, the software industry has very important non-predatory reasons to bundle functions into operating systems and other software products. As we explain below, new sales of software will require continual additions to functionality.

In the Netscape case, antitrust enforcers might allege that Microsoft is not interested in defeating the Netscape browser so much as destroying Netscape as a company. Industry pundits have often theorized that web browsers might constitute a means of establishing a new operating system. Netscape, they allege, constitutes a threat to Microsoft’s position in the operating system market. Regardless of the technical reasonableness of this claim, however, it runs into the same problems as other allegations of predation.

Here, as elsewhere, predation would not destroy the durable assets of the prey. Netscape’s software will hardly disappear if Microsoft bundles a browser into Windows. Indeed, Netscape has already made the source code for its Navigator program publicly available. Even if Microsoft still tried to destroy Netscape in order to protect Windows’ market share, it would ultimately fail. Any of Microsoft’s several large and vigorous competitors, such as IBM or Sun, would happily purchase Netscape, or hire its engineers, if they thought that by so doing they could share some of Microsoft’s enviable profits.

The Rate of Innovation

Putative Dangers

One concern that has been raised by the Justice Department, in the Judiciary Committee hearings, by some journalists, and by several path dependence theorists is that Microsoft's dominant position in the market will somehow inhibit innovation. The suggestion is that Microsoft will be able to so dominate the software market that no small firm will dare compete with it. Firms will be unwilling to create new products in any market that is likely to attract Microsoft's attention, especially products that are possible additions to the operating system. It is not clear that current antitrust law addresses such concerns. If valid, however, and if not addressed by antitrust law, they might encourage new legislation. Of course, the impact of such legislation would probably reach beyond the computer industry.

Concerns about lock-in drive the accusations against Microsoft. Consumers are viewed as being so locked in to Microsoft's products that even if the Wintel platform fell far behind the cutting edge of computer technology, no other combination of an operating system, applications, and support could displace it. Obviously, no one can empirically disprove the claim that products that might have been created would have been better than currently existing products. Instead, the analysis here focuses on whether lock-in theory correctly concludes that Microsoft will stifle innovation in the computer industry.

Certainly there are instances in which Microsoft has incorporated programs into the operating system and the former providers of such programs have gone on to other things. Disk compression and memory management programs are two examples. Fax programs, backup programs, and disk-defragmenting programs are counterexamples, where the inclusion of such programs in the operating system has not eliminated the separate market. The difference appears to lie in whether the programs Microsoft includes in its operating system are as good as the separate programs. When Microsoft produces a product as good as or better than the competition, the separate market usually does disappear. It is difficult, however, to conceive of consumer harm in this case.

The general claim that innovation will suffer if Microsoft is allowed to grow and add programs to the operating system has several shortcomings. The claim does not rest merely on the assumption that creative ideas are more likely to come from small startup companies than from Microsoft; that assumption is likely to be true, since the number of outside programmers developing products for Windows is more than fifteen times the number of programmers working for Microsoft. Instead, it assumes that Microsoft could not, or would not, use those programmers to produce as much creative activity as they would produce if they continued to work independently.

It is, of course, conceivable that large firms produce less innovation than small firms do (adjusting for size). But this has been investigated at length in the economics literature with no clear consensus. If there were a reason to believe that the software industry would be different from most other industries in this regard, it would tend to support a view that large software firms will continue to innovate.

Firms benefit from good new ideas; profits increase when new products are brought to market. Monopolists benefit just as much from an extra dollar of profit as competitive firms do. The argument that large firms might innovate less than small firms do usually relies on some variation of the view that large firms are fat and lazy--that is, that they do not innovate because they do not have to. Still, a dollar is a dollar. Most investors are just as eager for their large-firm stocks to perform well as they are for their small-firm stocks to perform well. For the fat-and-lazy condition to hold, it must be that large firms with dispersed ownership of their stock do not have the same incentives to maximize shareholder value and profits as do small firms, which are usually closely held. This real possibility is known as the problem of separation of ownership and control.

With regard to Microsoft and many other successful high technology firms, however, this argument would seem to have little force. The ownership of Microsoft and most other high tech firms is not widely dispersed. For example, Bill Gates owns almost 25% of Microsoft, and several other early Microsoft investors own very substantial stakes. This may in fact explain why Microsoft is still considered such an intense competitor.

Alternatively, it is vaguely suggested that Microsoft stifles innovation because it copies software ideas from others, leaving these other firms no reward for their efforts. If there were any truth to this claim, the problem would appear to lie in intellectual property law, not in any potential monopoly power on the part of Microsoft. After all, if Microsoft could copy the ideas of its rivals, so could a host of other large (or small) firms in the industry, in each instance lowering the profits of the innovator, large or small.

It would be a serious problem if innovators in software were not being properly rewarded for their efforts. The purpose of intellectual property laws is to allow innovators to collect economic rewards for their efforts. Without such laws, imitators could free ride off the efforts of innovators and produce similar products at lower cost, driving true innovators out of business. So, while deserving of investigation, these problems do not seem fundamental in any way to Microsoft, or its ownership of the operating system. Perhaps a reevaluation of intellectual property laws would be in order. But this claim seems to have little to do with antitrust.

There are some factual matters that do not seem consistent with the claim that Microsoft reduces innovation. Microsoft’s behavior toward its developers, for example, does not seem to square with the claim that it is intent on driving out independent software producers:

Microsoft doesn’t court only the powers from other industries. It’s also spending $85 million this year ministering to the needs of 300,000 computer software developers. It subsidizes trade-show space for hundreds of partners. And it’s not above lavishing attention on small companies when it needs their support. . . . "The platforms that succeed are the ones that appeal to developers," admits Alan Baratz, president of Sun Microsystem Inc.’s JavaSoft division. He calls Microsoft’s hold on the developer community its "crown jewel."

More broadly, there seems to be a paucity of evidence to support the concern that the pace of innovation is insufficiently rapid. The pace of innovation in the computer industry is generally regarded with some awe. Certainly, the Windows market does not appear to have suffered from stifled development of applications.

Finally, there seem to be tremendous rewards to those who do innovate in this industry. Even in the instance of Netscape, a supposed victim of Microsoft’s power, the founders walked away with hundreds of millions of dollars. Does this discourage others from taking the same path? Unless and until careful research answers these sorts of questions, any antitrust action would be premature and potentially dangerous to the software industry and the economy as a whole.

A Real Danger to Innovation

The nature of software markets requires that software producers continually add functionality to their products. Unlike most other products, software never wears out. If Big Macs never change, McDonald's can keep selling them because consumers still want to purchase Big Macs that are just like the ones they ate the day before. The same is true for most goods, which eventually need replacement. But because software lasts forever, with no diminution in quality, there is no reason for consumers to purchase a word processor or operating system more than once unless new, improved versions come to market. Undoubtedly, improvement will mean additional functionality.

To aid in understanding this, consider what it means to improve software. Software could be made faster and perhaps more intuitive, with no additional functionality. But this is not likely to win over many new customers. First, consumers will discover that real speed improvements are likely to come from the inevitable speed increases that occur when they replace their old computers with faster ones. Further, although intuitive interfaces are useful, practice overcomes inherent design imperfections. So the natural inclination of consumers would be to stick with any familiar version of a program (or operating system) unless the newer version could perform some useful tasks not available in the old version. This requires adding functionality not found in previous versions.

Added functionality can be seen in every category of software. Word processors now include spell and grammar checkers, mail-merge capabilities, and thesauruses, none of which came with the original generation of word processors. Spreadsheets, database programs, and virtually every other category of program also have far more functionality than before. That is one reason why new software seems to fill our ever-expanding hard drives, which have hundreds or thousands of times the storage capacity of earlier machines.

The consumer benefits in many ways from this added functionality. These large programs almost always cost far less than the sum of the prices that the individual component products used to command. The various components also tend to work together far better than separate components because they are made for each other. If this were not the case, consumers would not purchase new generations of software products.

As this process of adding functionality to programs continues, it is possible that the number of small companies specializing in add-ons will shrink. But is that any reason to prevent creators of word processors from including grammar checkers and thesauruses? Should the producers of dominant programs be forbidden to add functionality while producers of less successful programs are allowed to add new functions? That hardly seems a recipe for success. Do we really believe that innovation was retarded because add-on companies feared being put out of business? Do we even know whether they have been put out of business, or whether those programmers are no longer working on new ideas? Again, questionable logic and a dearth of evidence make these claims suspect.

Yet it appears that some of Microsoft’s critics, including some within the government, have proposed freezing the operating system, putting an end to added functionality. If this proposal were accepted for the operating system, it would also seem to apply to other categories of software. The results would be disastrous: for software producers, who would have no new sales except to new computer users; for computer manufacturers, who would find little demand for more capable hardware; and, most importantly, for users, who would be stuck with seriously crippled software. The proposal to freeze Windows reflects a view that all the useful things have already been invented. Few proposed antitrust policies are as dangerous as this one.

Who Should Get to Assign Desktop Icons? The Irrelevance of the ‘Browser Wars’

At the Senate hearings, and in the media, considerable attention has been given to the claim that Microsoft’s desire to prevent OEMs from removing the Internet Explorer icon from the desktop was somehow inimical to competition. This section explains why Microsoft and OEMs might each want to control the placement of desktop icons and provides an economic framework for deciding who should be allowed to control them. Ultimately, though, it turns out that icon placement should probably not matter much even to the computer and software industries, much less to antitrust enforcers.

Control of the desktop might be valuable since, as a practical matter, all computer users see the desktop. In principle, desktop placements of ‘advertisements,’ whether a program or a message, could be sold to companies interested in such exposure. For example, assume that an icon for America Online appears on the desktop. Consumers interested in an online service might just click on the icon and begin the process of becoming an America Online customer. Knowing this, a company such as America Online might be willing to pay the controller of the desktop for a good placement of its icon.

Assume for the moment, then, that these icon placements are indeed valuable. The next subsection explains why, nonetheless, regulators should not care whether Microsoft or OEMs control icon placement. Following that, the discussion critically re-examines the assumption that control of icons should matter even to the computer industry.

A simple theory of ‘desktop rights’

If revenues can be generated by placing icons on the desktop, it should not be surprising that both OEMs and the owner of the operating system (Microsoft) will claim the right to place the icons. Economic analysis allows us to examine whether it makes any difference who has this right. It may also provide some guidance as to who should get it.

The Coase theorem can help to explain the tradeoffs involved. If the rights to place desktop icons were well defined, and if there were no transaction costs or wealth effects, the Coase theorem tells us that regardless of who initially holds these rights, they will end up where they have the greatest value. Consider the following example. If the rights to sell desktop placement were worth $5 to Microsoft and $10 to OEMs, then OEMs would wind up controlling the desktop icons regardless of who initially had the rights. If Microsoft initially controlled the desktop, OEMs would be willing to pay up to $10 to Microsoft for these rights, and Microsoft would be better off selling them. It would do this by raising the price of the operating system by more than $5 (but no more than the $10 that OEMs would pay) and granting OEMs the right to place the icons.

If, on the other hand, OEMs initially control desktop placements, Microsoft would be willing to lower the price of the operating system by up to $5 in exchange for the right to control icon placements. OEMs would prefer to keep the rights themselves, however, since they can generate more than $5 in revenue by maintaining this control. In either case, OEMs wind up with the rights, and the two parties share the $10 in extra revenue brought about by icon placement sales. Although the two parties might be expected to fight over the rights, it makes no difference to the rest of us who gets them. By analogy, as virtually all microeconomics textbooks explain, if the government subsidizes gasoline purchases it makes no difference whether automobile drivers or service stations receive the subsidy; in either case the subsidy is shared in exactly the same way.
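To make the bargaining arithmetic concrete, here is a minimal sketch in Python of the logic just described. It is ours, not part of any formal model in the literature; the $5 and $10 figures are the hypothetical valuations from the example, and the even split of the transfer price is purely illustrative, since any price between the two valuations supports the trade.

    # A minimal sketch (ours) of the Coase-theorem logic in the example above.
    # The $5 and $10 valuations are the hypothetical figures from the text.

    def final_rights_holder(ms_value, oem_value, initial_holder):
        """Return (final holder of the desktop rights, payment for any transfer).

        With well-defined rights and no transaction costs, the rights end up
        with whichever party values them more, regardless of the initial
        assignment. Any transfer price between the two valuations works; we
        split the difference evenly purely for illustration.
        """
        high = "OEMs" if oem_value > ms_value else "Microsoft"
        if initial_holder == high:
            return high, 0.0  # rights already efficiently placed; no trade occurs
        return high, (ms_value + oem_value) / 2.0

    # Whoever starts with the rights, OEMs (valuing them at $10) end up
    # holding them; only the payment differs.
    print(final_rights_holder(5.0, 10.0, "Microsoft"))  # ('OEMs', 7.5)
    print(final_rights_holder(5.0, 10.0, "OEMs"))       # ('OEMs', 0.0)

The only thing the initial assignment changes is how the gains are divided; the final resting place of the rights, which is all that efficiency cares about, is the same either way.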

Sometimes the assumptions of the Coase theorem are not met. For example, if negotiations between OEMs and Microsoft were not feasible, efficiency considerations would require that the property rights be assigned to the party who can generate the highest value from desktop placements. Since Microsoft and OEMs already negotiate over other terms of their relationship (the price of the operating system, for example), however, there is little reason to believe that the market will not work efficiently. Since this is a matter of contract, the property rights can be defined and transacted within the contract.

The current anxiety regarding desktop placements is misplaced. So long as the parties freely enter into new contracts, neither party will benefit from a legal stipulation of who initially controls the desktop. It should not matter at all to the government who has the rights.

The reader may naturally ask: if it makes no difference, why is there fighting over who places the icons? There are two answers. First, there is no evidence that Microsoft and OEMs disagree; it is Microsoft’s competitors who are complaining. Second, it is not unusual in such circumstances for there to be contract disputes or strategic behavior. Two parties can negotiate a contract and subsequently dispute their understanding of its terms. If, for example, OEMs received a lower price from Microsoft because Microsoft thought it controlled desktop placement, but OEMs now have a chance to sell icon placement while remaining under a fixed contractual price for Windows, it would not be surprising if a dispute arose.

Is icon placement valuable?

In order for icon placement to be valuable, it must generate future revenues. America Online benefits in the previous example because consumers could not use its services without paying a monthly fee. Having its icon on the desktop increased the chances that consumers would sign up for the service.

For a typical software product to appear on the desktop, however, the software must usually already be installed on the computer, and thus already purchased. The icon placement only increases the likelihood that the program will be used. The only additional benefit to the software producer from having the consumer use the software after purchasing it is that the consumer might then buy upgrades or ancillary products.

For the Netscape and Microsoft browsers there are several reasons why icon placement might be important. (This analysis ignores any future revenues from upgrades, since both companies have agreed not to charge for browsers or upgrades.) It is possible that Netscape and Microsoft might be able to trade on the success of their browsers to sell software specializing in serving up web pages (known as server software), thanks to their large installed base of users and the (presumably) assured compatibility with their browsers.

There is another possible reason for the web browser icon to have value. When a browser is first put into use, it goes to a default starting location on the Internet. If large numbers of web users (surfers) view a particular location, advertising revenues can be generated, as some popular locations on the Internet, such as Yahoo, have discovered. Yahoo in fact paid Netscape many millions of dollars to provide Netscape users an easy way to reach the Yahoo page. Netscape and Microsoft, although somewhat late to this game, are both working on new start pages (to which their browsers will be preprogrammed to go) in the hope of enticing users to stay at their web sites. It is thought that browsers might become a potent revenue-generating force by leading consumers to particular pages.

There are serious reasons to doubt the claim that browser icons are valuable for the control they provide over the start page, however. First, it is quite easy for users to change the start page. Would it make sense for radio stations to pay automobile dealers to have car radios set to certain stations when the cars leave the new-car lot? This is virtually a perfect analogy to the browser icon story, yet it seems hard to believe that radio stations would benefit, mainly because it is so easy to change stations. Is it really that much more difficult for consumers to change the icons on the desktop? This is an empirical question whose answer may change as consumers become more accustomed to the operating system.

There is, however, a more fundamental impediment to the claim that desktop placement is important for browsers. Just having the icon on the desktop is insufficient to gain access to the Internet: clicking on that icon will not connect users to the Internet. For that they will have to use one of many Internet service providers, and the Internet service provider will almost certainly provide its own browser, independent of what icon is on the desktop. It is therefore hard to see how the icon on the desktop at the time of sale provides much value at all.

Finally, the concept of detailed governmental control over desktop placement leads to a seemingly endless series of absurd questions. What about the Start button in Windows, where the order of programs tends to be alphabetical? Should the government be concerned about the ordering of these programs, and who gets the rights to order them? Has anyone investigated whether the various color schemes found in Windows work to benefit Microsoft’s icons over the alternatives? Is the screen saver in Windows that shows the Microsoft Windows icon moving around anticompetitive in its subliminal effects? In conclusion, and in all seriousness, we should ask this: Should the government really be involved in these types of decisions?

Implications

The theories of path dependence and lock-in are relatively new to the economic literature. After years of debate, they have not won over the economics profession, and they have not made their way into many economics textbooks. Nor do these theories draw on first principles in obvious and secure ways. That does not make theories of path dependence and lock-in bad economics, or wrong economics, or inappropriate topics for academic research. On the contrary, it makes the academic debate that much more important. It does, however, make these theories a poor foundation for public policies that could affect the progressiveness of the American economy.

If we were treating a dying industry, even speculative economic medicine might be worth a try. But the computer and software industries continue to astound most people both with the rates at which products improve and at which prices decline. It makes no sense to submit such a robust patient to the risks of economic quackery.

In our academic writings summarized above, we have shown that there is a poor connection between theories of path dependence and the real-world behaviors of entrepreneurs and consumers. Our work also demonstrates that there is no connection between the alleged empirical support for these theories and real events. Contrary to the lock-in claim, and contrary to some popular stories of markets gone awry, good products do seem to displace bad ones. Since there is no real support for the supposition that markets fail in increasing returns environments, there is no more basis for antitrust in increasing returns markets than in any others.

There might even be less reason to apply antitrust to such markets. Our most basic theory of increasing returns implies that monopoly or near-monopoly equilibria are likely. Where people do value compatibility, or where increases in firm scale really do lower costs, dominant formats or single producers will probably result at any particular moment.
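To illustrate the point, here is a stylized numerical sketch in Python. It is ours, with purely hypothetical cost figures, and it captures only the cost side of the increasing-returns story: when a fixed cost is large relative to marginal cost, dividing a market among several firms simply duplicates the fixed cost.

    # A stylized sketch (ours) of scale economies with hypothetical numbers:
    # a fixed cost F and constant marginal cost c give falling average cost,
    # so splitting a market of size Q across n firms duplicates F and raises
    # industry-wide cost.

    F, c, Q = 100.0, 1.0, 1000.0  # hypothetical fixed cost, marginal cost, market size

    def industry_cost(n_firms):
        """Total industry cost when output Q is split evenly across n firms."""
        output_per_firm = Q / n_firms
        return n_firms * (F + c * output_per_firm)

    for n in (1, 2, 4):
        print(n, "firm(s):", industry_cost(n))
    # 1 firm(s): 1100.0  -- a single producer minimizes total cost
    # 2 firm(s): 1200.0
    # 4 firm(s): 1400.0

Network effects push in the same direction from the demand side: the value of joining a network rises with the number of users already on it, so demand, like cost, favors concentration on a single format.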

Furthermore, consumers want it that way. Anything else will frustrate the urge for compatibility, unnecessarily raise costs, or both. So monopoly outcomes need not imply that anything has gone wrong or been done wrong. Monopolies that are undone by the government may lead only to monopolies that are redone in the market. The faces may change, but market structure may not. If we insist that natural monopolies be populated by several firms kept at inefficiently small shares, we are likely to find these markets taken over by foreign companies without such restrictions.

In such markets, firms will compete to be the monopolist. It is in this competition that products that create more value for consumers prevail against those that create less. Notice what that means. The very acts of competition that bring about the market tests of these products--the strategies that save us from inferior keyboards--will look like monopolizing acts. That is because they are. They determine which monopoly prevails until better products prompt new campaigns to capture an increasing returns market.

Many of the other claims that surround the new antitrust debate are disconnected, not only from real-world observations, but also from any real theoretical support. One such claim is that Microsoft would like to crush any would-be direct competitor. It probably would. Theory and history, however, do not tell us how predation could ever work in a world in which assets are perfectly durable. Further, Microsoft has been visibly unsuccessful in crushing anything except where its products are better than the opposition. It had to resort to an (attempted) purchase of Intuit, which it could not crush; it has barely dented America Online with the much-ballyhooed Microsoft Network; and it only began to erode Netscape’s near monopoly when its own browser came up to snuff. Microsoft’s products that dominate in the Windows environment are the very ones that have dominated elsewhere.

There is, finally, the vaguely posed claim that Microsoft stifles innovation--another disconnect. The claim fails to conform with several prominent features of the PC landscape. First, Microsoft courts and supports its many software developers, who now number in the hundreds of thousands. Second, the personal computing industry, by any practical standard of comparison, seems to be astonishingly innovative.

Finally and most importantly, antitrust doctrines brought to bear against Microsoft cannot be constructed to apply to Microsoft alone. If a doctrine emerges that the biggest operating system must be kept on a short leash, then why not also a big producer of database software that sets the standards for that activity, or the biggest producer of printers, or scanners, or modems, or microprocessors, and so on? If these new technologies do exhibit increasing returns, or important reliance on standards, or network effects, then we are likely to see high concentration in all of these areas. Unless we are to embark on a relentless attack on whatever it is that succeeds, we need to acknowledge that the constructive competitive actions that firms take in this environment--new products, new capabilities, new deals--will often hurt competitors by the very fact that they make consumers better off.

Selected Readings

Liebowitz, S. J. & Margolis, S. E., "Fable of the Keys," Journal of Law and Economics 33 (1990): 1-25.

Liebowitz, S. J. & Margolis, S. E., "Network Externality: An Uncommon Tragedy," Journal of Economic Perspectives 8 (1994): 133-150.

Liebowitz, S. J. & Margolis, S. E., "Path Dependence, Lock-in, and History," Journal of Law, Economics and Organization 11 (1995): 205-226.

Liebowitz, S. J. & Margolis, S. E., "Are Network Externalities a New Source of Market Failure?" Research in Law and Economics 17 (1995): 1-22.

Liebowitz, S. J. & Margolis, S. E., "Should Technology Choice be a Concern for Antitrust," Harvard Journal of Law and Technology 9 (1996): 283-318.