Chicken Little Comes Home to Roost: A Misplaced and Flawed Economic Theory Bedevils Microsoft.

by Stan Liebowitz and Stephen Margolis

Introduction

There is a famous quote attributed to Keynes to the effect that politicians are apt to be slaves to the ideas of long-deceased economists. The view that politicians care very much about ideas and theories may be overly optimistic in this day and age, but it clearly is the case that economic theories are seriously influencing regulatory and antitrust thinking regarding the computer industry. Microsoft's aborted purchase of Intuit, and Judge Sporkin's (later overturned) decision to invalidate the consent decree between Microsoft and Justice, were both heavily influenced by the same economic theories. Not just any theories, mind you, and not an old theory from a deceased economist, but a modern economic theory that has been percolating through Silicon Valley. The most famous expression of these theories is the White Paper, prepared by Gary Reback and others, provided to Judge Sporkin and reprinted in Upside several months ago. Reback's paper reads much like an Oliver Stone script: short on fact, long on conspiracy. The White Paper could even have chosen a title patterned on one of Stone's movies: Natural Born Monopolists.

The economic theories at the center of the Reback paper have been used to support claims that a market based economy is likely to choose all sorts of wrong products. For example, proponents of these theories have claimed that we type on the wrong keyboards, tape with the wrong videorecorders, and drive cars with the wrong types of engines. In these instances, we are told, we might be better off relying on the government, in its wisdom, to pick for us the products that will provide us the greatest value. Al Gore, for example, as the current administration's leader on matters of technology, might be relied upon to have a clearer vision of the course of technological change than would private-market actors such as Bill Gates.

The White Paper is an extreme example of this line of reasoning. Brian Arthur, one of the two economists who assisted in the writing of the White Paper, and a leading figure in this literature, is quoted in the Wall Street Journal of March 8 as claiming: "My greatest fear [of allowing the Intuit merger to occur] is that an inferior technology could be locked in and bring progress to a halt." Bringing progress to a halt might seem to be a rather breathtaking claim, but as readers of the White Paper know, this is almost trivial compared with the claims put forward in that document. The White Paper puts forward the Orwellian claim that the American way of life would have been imperiled if Microsoft had purchased Intuit: "It is difficult to imagine that in an open society such as this one with multiple information sources, a single company could seize sufficient control of information transmission so as to constitute a threat to the underpinnings of a free society. But such a scenario is a realistic (and perhaps probable) outcome."

These are extravagant claims indeed. And they appear to have been taken seriously. But what are the bases for these theories that offer such a bleak vision of the future? Have they been subject to serious scrutiny, are they regarded as having general applicability, and most important of all, do they have any empirical foundation? Since these theories seem to be moving with unusual speed out of the world of academic debate and into public policy, they certainly are worth a careful examination. Given the strong claims that are being made in the public policy arena, readers may be surprised at both the narrowness of these theories and their complete lack of empirical foundation.

In what follows, we present an overview of these theories and consider what they say and what they don't say about the subject matter of the White Paper. After that we will discuss the evidence, such as it is, that has been put forward to support these theories. Our discussion addresses the broad themes of the White Paper. If you have the February Upside you might want to consult the White Paper to confirm the details.

Network Effects

About two thirds of the way through the White Paper, Reback and company present a discussion of the economic theories that allegedly are the underpinning of their grave concerns about our economy. A premise of these theories is that one person's decision to purchase a product is influenced by the number of other users of that product. The term used to describe this phenomenon, one likely to have a familiar ring to readers of Upside, is "network effect". Undoubtedly, there are many instances where the number of users of a product influences the value that a single user of a product receives. The greater the number of fax machines in use, for example, the more valuable it is to have a fax machine. There is nothing revolutionary or particularly objectionable about this idea. But what is objectionable is the next step in their argument, which is that markets work poorly under these circumstances. That step in turn rests upon the claim that large networks (producers) have an ineluctable advantage over smaller networks.

Let us take a hypothetical market for a product with network effects (say commercial telecommunications networks, such as CompuServe). If CompuServe's cost per user did not increase as the number of users increased [or if costs were zero, as is assumed in most of these theories], the networks with the largest number of consumers would always have an advantage over smaller networks, since consumers, by assumption, get more value from having a larger number of other users of the network. Consumers would be willing to pay higher subscription prices for CompuServe, with its large number of users, than they would for Delphi or GEnie, with their smaller numbers of users. In this world there are no offsetting advantages from being small that would allow small networks to compete. Since consumers derive greater value as the number of users increases, profit per user must be continuously increasing with the number of users. Note that this means not only that larger networks have larger profits, but that they have larger profit rates. This logic leads to the conclusion that the largest producer will inevitably eliminate the small producers in the give and take of competition.
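For readers who want to see this mechanism laid bare, here is a minimal sketch of the tipping logic. It is our own illustration, not anything taken from the White Paper or its sources; the linear value function and the starting subscriber counts are invented for the example.

```python
# A minimal sketch of the tipping argument (our illustration, not the
# White Paper's): with a value function that rises with network size and
# zero costs, every subscriber prefers whichever network is currently
# larger, so the smaller network drains away one defector at a time.

def value(n, a=4.0, b=0.1):
    """Hypothetical benefit each subscriber gets from a network of n users."""
    return a + b * n

big, small = 60, 40                  # invented starting subscriber counts
while small > 0 and value(big) > value(small):
    big, small = big + 1, small - 1  # one subscriber defects to the larger network

print(big, small)                    # 100 0 -- the market "tips" to a monopoly
```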

An industry in which large firms always have advantages over smaller firms is known in the economics literature as a natural monopoly. Natural monopoly has been understood by generations of economists, and discussions of it can be found in most intermediate microeconomics textbooks. This is the economic theory of public utilities. Of course the textbook natural monopoly is somewhat different from the one arising from network effects. Natural monopolies are usually presumed to arise when average costs continuously fall as the firm gets larger. Gas, electric, telephone and cable companies are generally thought to be natural monopolies, which is the reason that they are granted legal monopolies by the state, in return (usually) for regulation of prices. It is normally considered to be a good idea to grant a legal monopoly to a natural monopoly, so as to allow production costs to be as low as possible. If a monopoly were not granted, costs would be higher, since dividing industry output among several firms would mean that each firm has higher average costs than would be the case if only one firm produced all the output. Similarly, for some network effects, having several competing producers (standards) would reduce the value received by consumers relative to the case where all consumers used the same network. The normal prescription for network effects, as they have been presented in this literature, would be to welcome monopoly. With regulations, of course. This is not to say that industries with network effects are natural monopolies (in fact we have argued in print that they are not) but only that if they were the natural monopolies that they have been made out to be, the implied solution would be legal monopoly of one sort or another.
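The textbook cost logic can be made concrete with a few lines of arithmetic. The sketch below is ours, with invented numbers: a fixed cost F and a constant per-unit cost c give an average cost of F/q + c, which falls as output q grows.

```python
# Textbook natural-monopoly arithmetic (invented numbers): with a fixed
# cost F and constant marginal cost c, average cost F/q + c falls as a
# firm's output q grows, so splitting industry output between two firms
# raises each firm's average cost.

def average_cost(q, fixed=1000.0, marginal=2.0):
    return fixed / q + marginal

industry_output = 100
print(average_cost(industry_output))      # 12.0 with a single producer
print(average_cost(industry_output / 2))  # 22.0 for each of two firms splitting output
```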

If this were all there were to the network-externality/path-dependence literature, economists might have responded with a collective yawn. Natural monopoly is just not a new or exciting problem. It would not likely be considered sexy or current enough to warrant publication in economics journals or to generate grants from the National Science Foundation. Furthermore, some of the industries that economists had long believed to be natural monopolies now appear not to be, and perhaps never were, natural monopolies, a realization that is in part responsible for the current trend toward deregulation.

Network effect models, therefore, often incorporate an additional concern. The focus of these models is not on whether to have a monopoly, but rather on which particular firm to have as the natural monopolist. What if the firm that initially grows to be the largest does not have the best product or format? Could it still dominate the market because of its large size? These theories have concluded that consumers might be reluctant to switch to a better product (excess inertia) or that they might be too eager to switch to a product that is not better (excess momentum). Although economists might get goose-bumps reading about such intriguing possibilities, such theories have been of little practical value since they give no clear predictions and appear to be untestable.

The White Paper does not follow these arguments to their logical conclusion. It does not conclude that all software standards should be provided by regulated monopolies. Nor can it make a case against Microsoft on the basis of the contradictory conclusions of excess inertia or momentum. Instead, the White Paper buttresses its rhetoric by turning to the related literature of path dependence.

Path Dependence

The path dependence literature assumes natural monopoly, and then argues that society often gets stuck with the wrong natural monopoly when it relies on markets. Since network effects are presumed to lead to natural monopoly, these theories dovetail nicely.

The logic of path dependence can be illustrated with the following table, reproduced from Brian Arthur's papers. (We've added the Beta and VHS notations.)

Number of adopters:    0    10    20    30    40    50    60    70    80    90   100
Beta:                 10    11    12    13    14    15    16    17    18    19    20
VHS:                   4     7    10    13    16    19    22    25    28    31    34

Assume that consumers have a choice between products based on two competing technologies (Beta and VHS videorecorders, for example). Consumers come one at a time and must choose one of the two technologies (or video formats). Let the benefits that consumers receive from the purchase of a videorecorder be given by the numbers in the above table (benefits to producers are ignored for now). For example, if there are fewer than 11 users of Beta, they each receive benefits of 10, whereas each user receives a benefit of 16 when there are 61 users. Since the benefits increase as the number of adopters of the technology increases, these numbers exhibit the network effects discussed above. According to this theory, the first consumers, looking at the rewards from choosing Beta or VHS, will prefer Beta to VHS since there is a larger reward associated with Beta (10 for Beta vs. 4 for VHS). As more consumers purchase Beta, the advantage of Beta over VHS continually widens, and so Beta will prove to be the eventual choice of the market. Yet if the ultimate number of consumers is large, VHS is clearly superior to Beta (compare, for example, the benefits to each consumer when the number of adopters is 100). In the terminology of path dependence, we would say that society gets "locked-in" to Beta even though VHS is superior. Using a slightly different terminology, it is claimed that the market has "tipped" toward Beta although VHS was better.
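These dynamics are simple enough to simulate. The sketch below is our own rendering of the adoption story, with payoff functions read off the table above; each consumer is assumed, as in Arthur's model, to look only at the current payoffs.

```python
# A sketch of the adoption dynamics behind Arthur's table (our code, not
# his).  Payoffs are read off the table: Beta starts higher (10 vs. 4),
# but VHS improves faster with adoption (3 per ten adopters vs. 1).

def beta_payoff(n):   # benefit of choosing Beta given n prior Beta adopters
    return 10 + n // 10

def vhs_payoff(n):    # benefit of choosing VHS given n prior VHS adopters
    return 4 + 3 * (n // 10)

n_beta = n_vhs = 0
for _ in range(100):  # 100 myopic consumers arrive one at a time
    if beta_payoff(n_beta) >= vhs_payoff(n_vhs):
        n_beta += 1
    else:
        n_vhs += 1

print(n_beta, n_vhs)                      # 100 0 -- everyone "locks in" to Beta
print(beta_payoff(100), vhs_payoff(100))  # 20 34 -- though VHS would have paid more
```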

This is the underlying logic, presumably, for Arthur's concern that progress might come to a halt, since according to his theory, we might remain with Beta even in the face of ever improving alternatives. In other writings we have referred to this as the "Chicken Little Theory."

There are, however, many problems with this story. For one thing, it assumes that each individual consumer has no foresight, but merely makes choices based on a narrow, myopic reading of the single column in the table that gives the payoff from his purchase. Second, it assumes that the providers of technologies, or technology related goods, have no ability to influence the outcome of the competition between the two technologies. Third, it assumes that consumers get value from a larger number of other users without regard to who these users might be. Fourth, it assumes a particular structure of rewards that is unlikely to occur. Let's examine each of these problems in turn.

It is ironic that in this model, which has been applied to various high-tech products, there is no recognition of foresight. Once foresight is allowed, this particular problem of path dependence goes away. If consumers have foresight, they can easily see that VHS is the better technology in the long run, and they know that all other consumers are aware of this. Some of the familiar features of the market will work to coordinate the outcome. Decision makers will rely on consumer and trade publications to keep up on the characteristics of technologies. Retailers play a role by committing their marketing energies, and to a degree staking their reputations, on the basis of their predictions of these contests. The assumed lack of any foresight on the part of these decision makers is a very serious shortcoming of Arthur's analysis. In a world without foresight, CD players, automobiles, and most any new technology would never get started. CD players at first had almost no disks, CD-ROMs (and computers) at first had very little software, automobiles did not have gas stations, and so forth. Clearly consumers must form some expectation of the future if they are to act at all, even if that foresight is imperfect.

Arthur's story assumes that each decision maker constitutes only a single adoption of a technology, that each consumer purchases only one unit. Some corporate customers, however, might be sufficiently large that they can realize the advantage of the superior technology even if, and perhaps particularly if, no one else uses it. Thus fax machines at first were used largely by companies wishing to send information and pictures within the firm. The effect of large customers is to tend to swing the entire market toward the efficient solution.

Even if customers do not have foresight, producers of these products probably do. Producers will have both the reason and the means to influence the outcomes of these competitions between technologies. They can, for example, subsidize or give away their products in order to demonstrate the values of their products or to create positive network effects. In the table above, the amount of wealth that can be created by large-scale adoption of VHS is greater than the corresponding amount for Beta. Since the owners of a technology ordinarily would be expected to appropriate some or all of this wealth, the owners of VHS would have a greater potential gain than the owners of Beta (the table presumes that costs are identical for these products). With any form of rational capital markets, the owners of VHS will be able to enlist allies with deeper pockets than will the owners of Beta. But in the theories of path dependence that have been promulgated, owners of technologies, standards, or products have no such roles to play. Again, this is a remarkable deficiency in an analysis applied to the computer industry.

Surely, firms and individuals have some foresight, even if only imperfect foresight. Imperfection is inevitable, even in markets. But for the reason argued above, where there is a significant difference in the relative advantages of competing formats (or technologies, networks, standards), we would expect the choices made in markets to be the correct choices most of the time. Furthermore, the relevant question for public policy is not whether markets are imperfect but rather whether they are more imperfect than governments. So, even where we identify imperfections in market outcomes, we still must raise the question of whether government can do any better.

Finally, there is another special aspect to this table that is easy to overlook. A plot of the returns in the table would show two lines that cross. That is, Beta is better when the numbers of adopters are small, VHS is better when the numbers are large. Figure 1 is constructed to reflect this: the slopes of the payoff lines differ, with the slope of V being steeper than the slope of B. Without this "crossing" effect, the technology that is chosen first will always be the better of the two, and no harmful lock-in can occur. In order for the paths to cross, the network effects, or economies of scale in production, must be much stronger for the technology that is less desirable prior to these network or scale effects. That is, the one that starts off badly must get better faster. The instinct to root for the underdog notwithstanding, there is no reason to believe that this overtaking characteristic is a likely characteristic of technologies. For the technologies that are often mentioned in this literature, there is every reason to believe that this overtaking characteristic is very unlikely.
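The crossing condition itself is simple arithmetic. In the sketch below (our own, using straight lines fitted to the table), Beta's payoff line is B(n) = 10 + 0.1n and VHS's is V(n) = 4 + 0.3n:

```python
# The crossing condition, worked out for the table's implied payoff lines
# (our arithmetic).  Beta's line is B(n) = 10 + 0.1n and VHS's is
# V(n) = 4 + 0.3n; harmful lock-in requires that the two lines cross.

def crossing_point(a1, b1, a2, b2):
    """Adoption level n at which a1 + b1*n equals a2 + b2*n, or None if parallel."""
    if b1 == b2:
        return None              # equal slopes: the early leader stays better forever
    return (a2 - a1) / (b1 - b2)

print(crossing_point(10, 0.1, 4, 0.3))  # 30.0 -- beyond 30 adopters VHS pays more
print(crossing_point(10, 0.2, 4, 0.2))  # None -- equal network effects, no crossing
```

With equal slopes the lines are parallel, the product that starts out better stays better, and no harmful lock-in is possible -- the point we turn to next.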

Considering software, for example, does it seem reasonable that the benefit curves can cross in this way? Interestingly, there is no attempt in the literature that Reback cites to ponder whether the network effects are likely to differ in this way. Are network effects likely to differ between two different word-processors? We can see no reason why the slopes should differ. The value of compatibility should be independent of the choice of format. The benefit from the ability to exchange files with others (the network effect for software) would not seem to depend on the particular attributes of the word processor itself. If the value of compatibility is independent of the product itself, the slopes of the benefits curves for competing products would be the same and thus could not intersect. So another instrumental assumption of the theory seems inconsistent with reality. [The same story should hold for videorecorders, where the value of compatibility is the larger selection of pre-recorded movies available].

Do Those Orwellian Conclusions Come from This Theory?

It is worth noting that the theories of path dependence and network externality, however flawed or speculative they might be, still do not get us to the White Paper's frightening conclusions. These theories may suggest that some industries will become monopolies, or even that we may end up with the wrong monopoly, but it takes a marvelous leap of faith to get to the result that our basic freedoms are in serious peril. That conclusion is apparently a flourish brought to us by the authors of the White Paper, going well beyond the economic theories that the White Paper claims to rely upon. Their nightmare is a mixture of one part Intuit, one part the Microsoft Network, and many parts hyperbole. The White Paper contends that consumers are on the verge of becoming so locked in to Microsoft's technology that they will be compelled to purchase Microsoft products even when those products are far inferior to rival products and far afield from traditional computer products. As a result, Microsoft will gain control over every market it enters, and it will have an incentive to enter lots of markets. The leveraging of control from one market to another is an area that has captured the attention of economists for at least several decades. The key point for the reader to understand, however, is that Microsoft can not expect consumers to pay more for the bundle of Microsoft products than the overall value of these products relative to the next best alternatives. Thus, Microsoft can't expect consumers to leave the old technology unless the new technology provides greater value, limiting the ability of Microsoft to extract excessive prices. Similarly, Microsoft could not load up the system that it delivers to consumers with tied-in inferior products without eventually prompting consumers to switch to alternatives that are not so burdened. With competitors such as Apple, IBM and Novell, Microsoft could not extract profits from a host of other markets without inducing consumers to switch to these alternatives.

The White Paper implies that Microsoft's acquisition of Intuit would have forced all computer based home-banking, and eventually all banking, to go through Microsoft's (Intuit's) interface. [Conveniently, Reback never explains how it is that Microsoft was unable to use its mythic powers of monopolization to make its product, Money, rather than the rival Quicken, the number one product, given all the network effects and path dependence that supposedly exist]. Microsoft's incipient dial-in network was then supposedly going to have hooks built in to Intuit's software, requiring that all Intuit users go through the Microsoft Network. Since all shopping would require using Intuit, it too would require Microsoft's network. As books, newspapers, and other sources of information become increasingly tied to computer network transmission, these too would require use of Microsoft controlled networks. Libraries, bookstores, and newsstands would be replaced by the universal PC screen, and all of it would be under the control of Microsoft in Reback's adaptation of "1984".

Eventually, according to this scenario, Microsoft would write, or at least approve, all the books, plays, poetry, and movie scripts. All the music and lyrics. All political commentary. These are the "threats to the underpinnings of a free society" that Reback contemplates. What evidence does Reback present for these claims? Aside from speculation, only this: Microsoft's CD-ROM encyclopedia, Encarta, provides a flattering biography of Big Brother Bill (Gates). Nevertheless, conspiracy theories are awfully hard to disprove, especially to the conspiracy theorist. Ordinarily their influence is limited by their general implausibility and their lack of substantiating fact. But the White Paper is lent credibility by its association with the economic theories of path dependence and network externality. Roughly the last third of the White Paper appears under the heading "Economic Evaluation." This is introduced with the statement that "the arguments draw upon what has become an extensive and rigorous literature on increasing returns economics." What we have shown here is that the theoretical arguments of this literature apply only in very limited circumstances and, even then, are far from airtight. What is more damning is that these theories are, so far, entirely without empirical support, notwithstanding a few claims to the contrary.

These Theories Have No Empirical Support

The proof of the pudding is in the eating. Either there is a phenomenon of path dependence making us the prisoners of inferior technologies, or there is not. Either there are cases where markets choose and stick with wrong products or there are not. Theories, after all, are acceptable only when they are capable of explaining actual events. Facts, or empirical evidence, are the final arbiters of this theory, as they are with all theories. And it is in the realm of actual examples that the theories relied upon by Reback et al. are most sorely lacking. The White Paper alludes to an extensive and rigorous literature. Those unfamiliar with the contemporary economic literature might conclude from this that these theories are supported by a large body of evidence. Nothing, however, could be further from the truth. The little support that has been offered consists of a few key examples where markets have supposedly settled on the wrong system or standard and failed to change to a purportedly better system or standard. We turn now to those examples.

The QWERTY Keyboard

The single example that is found over and over again in the network-externality/path-dependence literature is the rather quaint but well-known example of the typewriter keyboard. Paul Krugman, in his recent book "Peddling Prosperity," speaks glowingly of this entire literature in a chapter entitled "The Economics of QWERTY." The significance of the keyboard example to this literature can not be overstated.

QWERTY refers to the letters in the upper left hand portion of the typewriter (and computer) keyboard. One commonly hears the claim that, to keep the mechanisms of the early typewriters from jamming, the mechanics who created the keyboard used trial and error to find a design that actually slowed down typing speed. The claim is made that QWERTY's ascendance was due to a serendipitous association with the world's first touch typist, who won a famous typing contest using the QWERTY design. The QWERTY design is reputed to be far inferior to the "scientifically" designed Dvorak keyboard, which was claimed to offer a 40% increase in typing speed. Supposedly, the Navy conducted experiments during the Second World War demonstrating that the costs of retraining typists on the new keyboard could be fully recovered within ten days! According to the path dependence theories, no producers found it profitable to create Dvorak keyboards since everyone already knew QWERTY, and no one learned Dvorak because there were no Dvorak keyboards.

This is an ideal example, which accounts for its continued use by virtually every author looking for an example of path dependence. The dimensions of performance are few, and in these dimensions the Dvorak keyboard appears overwhelmingly superior.

Yet upon investigation, this story appears to be based on nothing more than wishful thinking and a shoddy reading of the history of the typewriter keyboard. The QWERTY keyboard, it turns out, is about as good a design as the Dvorak keyboard, and was better than most competing designs that existed in the late 1800s when there were many keyboard designs maneuvering for a place in the market.

Ignored in these stories of Dvorak's superiority is a carefully controlled experiment conducted under the auspices of the General Services Administration in the 1950s comparing QWERTY with Dvorak. That experiment contradicted the claims made by advocates of Dvorak and concluded that it made no sense to retrain typists on the Dvorak keyboard. This study, which was influential in its time, brought to an end any serious efforts to shift from QWERTY to Dvorak. Modern research in ergonomics reaches similar conclusions. This research consists of simulations and experiments that compare various keyboard designs. It finds little advantage in the Dvorak keyboard layout, confirming the results of the GSA study.

So on what basis were the claims of Dvorak's superiority made? We discovered that most, if not all, of the claims of Dvorak's superiority can be traced to the patent owner, Professor August Dvorak. His book on the relative merits of QWERTY versus his own keyboard has about as much objectivity as a modern infomercial found on late night television. The wartime Navy study turns out to have been conducted under the auspices of the Navy's chief expert in time-motion studies -- Lt. Commander August Dvorak -- and the results of that study were clearly fudged. The study also appears to be lacking in anything remotely related to objectivity. The difficulties that we had getting a copy of the Navy study, and the fact that it is mentioned but never actually cited, convinced us that those economists enamored of the Dvorak fable never actually perused a copy of that study.

Many other aspects of the received story were also erroneous. It turns out that there was intense competition between producers of various keyboard designs early in the history of the typewriter keyboard. And contrary to prior claims, there were many typing competitions between touch typists on various keyboard designs, and QWERTY won its share of such competitions. Thus QWERTY was put through a fairly severe set of tests by the market, and the reason QWERTY survives seems to be that it is a reasonably good design.

We published a very detailed account of all this in the Journal of Law and Economics in the spring of 1990. Yet in spite of this five year old paper, which has not been factually disputed, economists working on path dependence topics continue to use the QWERTY keyboard as the main example to support their theory that markets can not be trusted to choose products. One could hardly find better evidence of this theory's lack of empirical support than the continued use of a result that is known to be incorrect. The QWERTY story, by the way, is cited in Reback's paper (his footnote 44).

Beta-VHS

The second most popular example is the Beta-VHS videorecorder format tussle (see footnote 35 of Reback's paper). It is sometimes claimed that Beta was a better format and that VHS won only because it fortuitously acquired a large market share early in the competition. But this story turns out to be just as inaccurate as the keyboard story.

In 1969 Sony developed a cartridge based videorecorder, the U-matic, which it hoped to sell to households. Since other companies had such products in the works, Sony invited Matsushita and JVC to produce the machine jointly and to share technology and patents. This was for the very purpose of achieving a standard, which indicates considerable foresight on the part of the market participants. But the U-matic was not a success as a home machine, though it did find a niche in educational markets. Many other attempts to break into the home market were made by various companies, American, Japanese, and European, but all met with failure.

In the mid 1970's, Sony developed the Betamax. Believing that with the Betamax it finally had a machine that would succeed in the home, Sony again offered the machine to Matsushita and JVC. Once again, Sony hoped to establish a standard that would cut through the clutter of competing formats. Sony provided technical details of the Betamax, including an advance in azimuth recording that helped eliminate the problem of crosstalk. But at a meeting at Matsushita's headquarters many months later, where JVC demonstrated its new machine, Sony engineers concluded that JVC had expropriated their ideas. Needless to say, this apparent usurping by JVC of the Sony technological advances created bitterness between the one-time allies, leaving Sony and Matsushita-JVC to go their own separate ways. The only real technical differences between Beta and VHS were the manner in which the tape was threaded and, more importantly, the size of the cassette. The choice of cassette size was based on a different perception of consumer desires. Sony believed that a paperback sized cassette, allowing easy transportability (although limiting recording time to 1 hour), was paramount to the consumer, whereas Matsushita believed that a 2 hour recording time, allowing the taping of complete movies, was essential.

The larger VHS cassette accommodated more tape. For any given speed of tape this implied a greater recording time. Slowing the tape increases the recording time, but also decreases picture quality. VHS, because of its larger cassette, could always offer a more advantageous combination of picture quality and playing time. This difference was to prove crucial.

The conduct of the antagonists in this competition is a wonderful example of forward looking behavior, even if there was some misperception on the part of the players. Both sides attempted to influence expectations and sales in every way they could. They used partnerships, advertising, pricing and any other tool at their disposal. The behavior was nothing like the passive adoption story that is presented with Arthur's table.

Sony, in an attempt to increase market share, allowed its Beta machines to be sold under Zenith's brand name, a highly unusual move for Sony. To counter this move, Matsushita allowed RCA to put its name on VHS machines. Although Sony was able to recruit Toshiba and Sanyo to the Beta format, Matsushita was able to bring Hitachi, Sharp, and Mitsubishi into its camp. Beta slowed down the tape and increased its playing time to two hours; VHS did the same and increased playing time to four hours. RCA radically lowered price and came up with a simple but effective ad campaign which touted VHS' advantage: "Four hours. $1000. SelectaVision." Zenith responded by lowering the price for its Beta machine to $996.

The market's referendum on playing time versus tape compactness was decisive and rapid. Beta had an initial monopoly for almost two years. But within six months of VHS' introduction in the US, VHS was outselling Beta. These results were repeated in Europe and Japan as well. By mid 1979 VHS was outselling Beta by more than 2 to 1 in the US. By 1983 Beta's world share was down to 12 percent. By 1984 every VCR manufacturer except Sony had adopted VHS. Not only did the market not get stuck on the Beta path, but it was able to make the switch to the slightly better VHS path. Notice that this is not path dependence. Even though Beta got there first, VHS was able to overtake Beta very quickly. This, of course, is the exact opposite of the predictions of path dependence, which implies that the first product to reach the market is likely to win the race even if it is inferior to later rivals.

Now listen to the version of this story found in Brian Arthur's work: "The history of the videocassette recorder furnishes a simple example of positive feedback. The VCR market started out with two competing formats selling at about the same price: VHS and Beta. .....Both systems were introduced at about the same time and so began with roughly equal market shares; those shares fluctuated early on because of external circumstance, "luck" and corporate maneuvering. Increasing returns on early gains eventually tilted the competition toward VHS: it accumulated enough of an advantage to take virtually the entire VCR market. Yet it would have been impossible at the outset of the competition to say which system would win, which of the two possible equilibria would be selected. Furthermore, if the claim that Beta was technically superior is true, then the market's choice did not represent the best outcome."

The lesson of the path dependence literature is that markets can not be trusted to choose the right products. We would argue that a better lesson is that public policies and legal theories should not rest on a literature supported by only the most casual sort of empirical analysis.

Other Examples

Path dependence advocates have sometimes claimed that the continued use of FORTRAN by academics and scientists is an example of getting stuck on a wrong standard. But one doesn't have to peruse too many computer magazines to realize that FORTRAN has long since been superseded by languages such as C. Thus this example can hardly be taken as support for a claim that we get stuck with inferior standards or products.

Arthur also has claimed that the gasoline powered engine might have been a mistake, and that steam or electricity might have been a superior choice for vehicle propulsion. Never mind that, even with all of the applications of motors and batteries in the century since, and with all the advantages of digital electronic power-management systems, the most advanced electric automobiles that anyone has been able to make do not yet equal the state of the art in internal-combustion automobiles as of the early nineteen-twenties. Never mind that electric automobiles actually were commercially viable in the early stages of the industry, and that electric power has been viable ever since in the nearby technologies of smaller industrial and recreational vehicles. Never mind that in the technologies in which steam has been dominant, railroads and ocean-going ships, it has gradually been eclipsed by diesel, electric, and hybrid designs. Surely it is a bad idea to base public policy on science fiction instead of science. Yet we fear that this will be the unintended result of following theories that appear to be based on little more than casual storytelling.

Applications to Microsoft

Even if these theories were based on fairly general assumptions, and even if the theories had strong empirical foundations, the relevance of these theories to Microsoft or Intuit is very tenuous, contrary to the assertions of Reback and company.

Irrelevance of Network Effects for Intuit's Products

Network effects are notably unimportant for the products sold by Intuit. This minor fact seems to have been overlooked in the Reback paper. Notwithstanding the numerous references to network effects in Reback's paper, and notwithstanding his claim (Section III.B.4.c) that Intuit's products embody network effects, Intuit's products simply do not embody such effects.

Think for a minute about the way that personal finance software is used. Network effects imply that consumers derive additional value from the fact that other consumers are using the same product, in general because this enhances compatibility. But personal financial information would seem to be one clear exception to this. Do most users exchange personal financial information with each other? Do we value the ability to exchange such information? Surely the answer to the last two questions is a resounding "no". If anything, we prefer to keep this type of software away from prying eyes, since the fewer people who have access to one's personal data the safer one usually feels. Indeed, it is hard to imagine a category of software less influenced by network effects than personal finance software.

What this means is that the types of network effects usually associated with software do not exist in this case. Thus it is a non sequitur to claim, as Reback did, that network effects provided an economic basis against the Intuit merger. Reback claimed that consumers tend to get "locked-in" to their financial software, but "lock-in" in this case has nothing to do with network effects or path dependence. Lock-in has a particular meaning in the context of path dependence. We are locked-in to eating and breathing for obvious reasons. This type of lock-in might be thought of as positive lock-in and does no harm. Lock-in, as it is used in the path dependence literature, means that users continue to use product A, say, when in fact they would prefer to use product B, but because everyone else uses product A they feel compelled to use product A as well. Such an instance can be thought of as negative lock-in.

Reback is clearly talking about negative lock-in. Thus to say customers are locked-in to TurboTax or Quicken, Reback must mean that, if left to their own devices, most consumers would actually prefer one of the alternative programs, but since they must interact with others they use TurboTax or Quicken. Since there are no network effects for personal finance software, no interaction with other consumers, this argument makes no sense. Consumers are not negatively "locked-in" to Quicken or TurboTax but merely prefer these programs to the alternatives. Note that a majority of individuals still do their banking the old fashioned way, i.e. without computers. Would Reback believe this majority is "locked-in" to pens and pencils, checkbooks, and the US postal service? Would such a lock-in imply that pencil manufacturers are nascent threats to our freedom?

Windows and Competition

Reback has argued that Microsoft has used its operating system software, Windows, to restrict competition. But Reback ignores the pro-competitive aspects of this type of operating software. Prior to Windows, software producers each had to design their own printer drivers, video drivers (for graphics displays), menuing systems, and so forth. This provided large software producers such as Lotus and WordPerfect with a major advantage over smaller rivals, since it was costly to write hundreds of drivers for every different printer, video card and so forth. Windows allowed software authors to dispense with these efforts and concentrate on the program itself. It allowed users to have one setup for printers, networks, etc., and have all applications be able to access these peripherals. Contrary to the view put forth by Reback, Windows actually allowed smaller firms, who might not have had the resources to write drivers for each video card and printer, to compete with more established firms.
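The arithmetic of the driver burden is worth making explicit. Here is a toy sketch, with invented counts of vendors and devices (ours, purely illustrative):

```python
# A toy illustration of the driver economics (our example, with invented
# counts): before a shared OS layer, every application vendor wrote a
# driver for every device; Windows moved that burden to one shared layer.

applications = 50   # hypothetical number of application vendors
devices = 300       # hypothetical number of printers and video cards

print(applications * devices)  # 15000 drivers if every vendor writes every one
print(devices)                 # 300 drivers if the operating system supplies them once
```

The per-application burden falls from hundreds of drivers to zero, a saving that matters most to the smallest developers.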

It is also worth remembering that earlier versions of Windows (2.0, for example) were quite unsuccessful. Microsoft was able to wean consumers away from DOS only when Windows 3.0 was able to demonstrate a clear superiority. This was a window of opportunity for developers wishing to oust the market leaders. A developer who bet on Windows might have been able to surpass dominant rivals from the DOS world if the Windows version of the product was sufficiently better than the DOS version. This is how Microsoft was able to come to dominate the applications market.

Microsoft already had a successful history as a developer of applications based on graphical interfaces. It produced spreadsheets and word processors in its role as a major developer for the Macintosh platform. Lotus and WordPerfect, seeing the poor results of Windows 2.0, and largely ignoring the Macintosh market, might have thought it prudent at the time to put Windows 3.0 versions of their products on the back burner. They also were not well versed in writing quality GUI applications, since DOS programs were still their bread and butter.

Most readers of this magazine can probably remember the less than enthusiastic reviews of the early Windows versions of 1-2-3 and WordPerfect in the computer press. At that time the interesting question was why consumers continued to use these inferior spreadsheets and word processors when the Windows versions of Word and Excel were more highly regarded. The decisions to continue focusing on DOS were made in the corporate boardrooms of Lotus and WordPerfect. Those decisions are now clearly seen to have been in error. It is disingenuous for the officers of these companies to now try to shift the blame. The current attempt to claim that Microsoft's market success is due to its control of the operating system, and not to the creation of better products at lower prices, is merely an attempt to rewrite history so as to promulgate antitrust theories that might be used to erase the errors of Microsoft's competitors.

The Microsoft Network

Reback has argued that if the Intuit merger had occurred, Microsoft would have been able to leverage users to its on-line service because of Quicken, TurboTax, and Windows 95. As this article is being written, after the Intuit merger has been called off, the Justice Department is investigating the Microsoft Network.

Microsoft's claimed transgression appears to be that the Microsoft Network will be made available to all purchasers of Windows 95. Microsoft's critics take the success of Windows 95 as a given and then claim that Microsoft has a great advantage by including the software for its network with the operating system. We are told that there is a button in Windows 95 which makes the Microsoft Network so simple to use that consumers will be unable to resist. Yet these critics forget to mention that consumers will need to pay monthly fees if they are to use this product. They do not mention that one of course needs a modem before one can use the product. They do not mention that the purchase of a modem most frequently includes free software from other commercial on-line services, such as America Online and Prodigy. They do not mention that software for other commercial services can be easily procured for free. They do not mention that all software that runs under Windows works by pushing a button (icon) on the screen. In reality, the inclusion of such software in Windows 95 merely matches the practices of other on-line services, albeit at a somewhat lower cost. It is also less well targeted, since it reaches computer users as a whole rather than just the modem owners who can actually use the software.

If Microsoft's on-line service does not provide sufficient value relative to market competitors (CompuServe, Prodigy, America Online, GEnie, Delphi and the Internet), it will not succeed. Reback's extravagant claims for Microsoft's power notwithstanding, it is useful to remember that Microsoft could not leverage consumers into using its Money product. Consumers need a reason to pick particular products, and will use the product that provides the greatest utility for the money. Reback points out that Computer Associates gave away free copies of Simply Money yet could not gain significant market share. Why expect that the Microsoft Network will do any better? Unless, of course, these critics believe that the product itself is one that consumers will want. Microsoft is going up against the likes of IBM-Sears (Prodigy), AT&T (Interchange), and General Electric (GEnie). There is no reason to think that these companies can not fend for themselves, although these large companies have not done well against their smaller rivals (America Online and CompuServe).

Implications

Taken seriously, the White Paper would have us mobilize antitrust law to limit the activities of Microsoft and perhaps other software producers that have the temerity to be successful. If we follow that advice we may be handicapping a sector of the economy that has been a powerful source of growth, innovation and vitality in domestic and international markets. This mobilization would come not on the basis of well supported theories of monopoly behavior, but rather on the basis of a legal theory built on highly speculative economic theories that are without any empirical support. Further, the legal theory carries these economic theories to outlandish extremes well beyond the scope of their claims. While some of Microsoft's competitors might take delight in the impact that these theories have had in derailing some of Microsoft's plans and imposing costs on Microsoft, the misuse of economic theory for public policy purposes can not be in the country's long-run interest. Consumers, manufacturers, regulators and economists will all be better off when our discourse is based on models of the world that conform to the reality that exists outside our windows, whether glass or Microsoft's.

Computer software does pose interesting problems for economic analysis. It may be that some types of software products should be produced by only a single supplier. But this is not the claim made in the White Paper. There might be reason to intervene in the market if there were evidence that rivalry in the marketplace were moribund. But the evidence would seem to be overwhelmingly to the contrary. Or there might be reason to intervene if there were evidence that these industries were seriously deficient in technological progress. But there is no such evidence. There might be reason to overturn the market's selection of a standard if it could be shown that markets are systematically deficient at such choices. But as we have shown, there is as yet no evidence for such a view.

Of course, there would be reason to overhaul entire industries, and damn the consequences, if we really could be led to believe that our most fundamental freedoms were in serious peril. This, it appears, is the tack that is taken in the White Paper. Since conventional efficiency arguments about monopoly don't carry much force here, perhaps a large dose of hyperbole will stir the populist pot. And so arguments are presented to the court and to the public that if Microsoft succeeds in one more market, our freedoms are at risk. But such claims stretch credibility beyond the breaking point. More likely, the true danger would come from relying on technical wizards in the judiciary and government to choose our technologies for us, or from policy set by lawyers and professors who habitually get their facts wrong. Similarly, abridgments of our freedoms are far more likely to come from a government that can compel our behavior than from a corporation that requires, for its own survival, our voluntary purchases of its products.

Selected Readings

Liebowitz, S. J. and Margolis, S. E., "The Fable of the Keys," Journal of Law and Economics, April 1990, pp. 1-25.

Liebowitz, S. J. and Margolis, S. E., "Network Externality: An Uncommon Tragedy," Journal of Economic Perspectives, Spring 1994, pp. 133-150.

Liebowitz, S. J. and Margolis, S. E., "Path Dependence, Lock-In, and History," Journal of Law, Economics and Organization, Spring 1995, pp. 205-226.

Liebowitz, S. J. and Margolis, S. E., "Are Network Externalities a New Source of Market Failure?" Research in Law and Economics, forthcoming, 1995.