The economic theories at the center of the Reback paper have been used to support claims that a market-based economy is likely to choose all sorts of wrong products. For example, proponents of these theories have claimed that we type on the wrong keyboards, tape with the wrong videorecorders, and drive cars with the wrong types of engines. In these instances, we are told, we might be better off relying on the government, in its wisdom, to pick for us the products that will provide us the greatest value. Al Gore, for example, as the current administration's leader on matters of technology, might be relied upon to have a clearer vision of the course of technological change than would private-market actors such as Bill Gates.
The White Paper is an extreme example of this line of reasoning. Brian Arthur, one of the two economists who assisted in the writing of the White Paper, and a leading figure in this literature, is quoted in the Wall Street Journal of March 8 as claiming: "My greatest fear [of allowing the Intuit merger to occur] is that an inferior technology could be locked in and bring progress to a halt." Bringing progress to a halt might seem to be a rather breathtaking claim, but as readers of the White Paper know, this is almost trivial compared with the claims put forward in that document. The White Paper puts forward the Orwellian claim that the American way of life would have been imperiled if Microsoft had purchased Intuit: "It is difficult to imagine that in an open society such as this one with multiple information sources, a single company could seize sufficient control of information transmission so as to constitute a threat to the underpinnings of a free society. But such a scenario is a realistic (and perhaps probable) outcome."
These are extravagant claims indeed. And they appear to have been taken seriously. But what are the bases for these theories that offer such a bleak vision of the future? Have they been subject to serious scrutiny, are they regarded as having general applicability, and most important of all, do they have any empirical foundation? Since these theories seem to be moving with unusual speed out of the world of academic debate and into public policy, they certainly are worth a careful examination. Given the strong claims that are being made in the public policy arena, readers may be surprised at both the narrowness of these theories and their complete lack of empirical foundation.
In what follows, we present an overview of these theories and consider what they say and what they don't say about the subject matter of the White Paper. After that we will discuss the evidence, such as it is, that has been put forward to support these theories. Our discussion addresses the broad themes of the White Paper. If you have the February Upside you might want to consult the White Paper to confirm the details.
Let us take a hypothetical market for a product with network effects (say commercial telecommunications networks, such as CompuServe). If the cost per user to CompuServe of the communication system did not increase as the number of users increased [or if costs were zero, as is assumed in most of these theories], those networks with the largest number of consumers would always have an advantage over smaller networks since consumers, by assumption, get more value from having a larger number of other users of the network. Consumers would be willing to pay higher subscription prices for CompuServe, with its large number of users, than they would for Delphi or Genie, with their smaller number of users. In this world there are no offsetting advantages from being small that would allow small networks to compete. Since consumers derive greater value as the number of users increases, profit per user must be continuously increasing with the number of users. Note that this means not only that larger networks have larger profits, but that they have larger profit rates. This logic leads to the conclusion that the largest producer will inevitably eliminate the small producers in the give and take of competition.
An industry in which large firms always have advantages over smaller firms is known in the economics literature as a natural monopoly. Natural monopoly has been understood by generations of economists, and discussions of natural monopoly can be found in most intermediate microeconomics textbooks. This is the economic theory of public utilities. Of course the textbook natural monopoly is somewhat different from the one arising from network effects. Natural monopolies are usually presumed to arise when average costs continuously fall as the firm gets larger. Gas, electric, telephone and cable companies are generally thought to be natural monopolies, which is the reason that they are granted legal monopolies by the state, in return (usually) for regulation of prices. It is normally considered to be a good idea to grant a legal monopoly to a natural monopoly, so as to allow production costs to be as low as possible. If a monopoly were not granted, costs would be higher, since dividing industry output among several firms would mean that each firm had higher average costs than would be the case if only one firm produced all the output. Similarly for some network effects, having several competing producers (standards) would reduce the value received by consumers relative to the case where all consumers used the same network. The normal prescription for network effects, as they have been presented in this literature, would be to welcome monopoly. With regulations, of course. This is not to say that industries with network effects are natural monopolies (in fact we have argued in print that they are not) but only that if they were the natural monopolies that they have been made out to be, the implied solution would be legal monopoly of one sort or another.
If this were all there were to the network-externality/path-dependence literature, economists might have sighed a collective yawn. Natural monopoly is just not a new or exciting problem. Natural monopoly would not likely be considered sexy or current enough to warrant publication in economics journals or to generate grants from the National Science Foundation. Furthermore, some of the industries that economists had long believed to be natural monopolies now appear not to be, and perhaps never were, natural monopolies, which is in part responsible for the current trends toward deregulation.
Network effect models, therefore, often incorporate an additional concern. The focus of these models is not on whether to have a monopoly, but rather on which particular firm to have as the natural monopolist. What if the firm that initially grows to be the largest does not have the best product or format? Could it still dominate the market because of its large size? These theories have concluded that consumers might be reluctant to switch to a better product (excess inertia) or that they might be too eager to switch to a product that is not better (excess momentum). Although economists might get goose-bumps reading about such intriguing possibilities, such theories have been of little practical value since they give no clear predictions and appear to be untestable.
The White Paper does not follow these arguments to their logical conclusion. It does not conclude that all software standards should be provided by regulated monopolies. Nor can it make a case against Microsoft on the basis of the contradictory conclusions of excess inertia or momentum. Instead, the White Paper buttresses its rhetoric by turning to the related literature of path dependence.
The logic of path dependence can be illustrated with the following table, reproduced from Brian Arthur's papers. (We've added the Beta and VHS notations).
Assume that consumers have a choice between products based on two competing technologies (Beta and VHS videorecorders, for example). Consumers come one at a time and must choose one of the two technologies (or video formats). Let the benefits that consumers receive from the purchase of a videorecorder be given by the numbers in the above table (benefits to producers are ignored for now). For example, if there are fewer than 11 users of Beta, they each receive benefits of 10, whereas each user receives a benefit of 16 when there are 61 users. Since the benefits increase as the number of adopters of the technology increases, these numbers exhibit the network effects discussed above. According to this theory, the first consumers, looking at the rewards from choosing Beta or VHS, will prefer Beta to VHS, since there is a larger reward associated with Beta (10 for Beta vs. 4 for VHS). As more consumers purchase Beta, the advantage of Beta over VHS continually widens, and so Beta will prove to be the eventual choice of the market. Yet if the ultimate number of consumers is large, VHS is clearly superior to Beta. (Compare, for example, the benefits to each consumer when the number of adopters is 100.) In the terminology of path dependence, we would say that society gets "locked-in" to Beta even though VHS is superior. Using a slightly different terminology, it is claimed that the market has "tipped" toward Beta although VHS was better.
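For readers who like to see the mechanics, the table's logic can be sketched as a short simulation. The payoff functions below are our own illustrative stand-ins, chosen only to match the figures quoted above (Beta pays 10 to its early adopters and 16 at 61; VHS starts at 4 but improves faster); the myopic, one-at-a-time choice rule is the one the theory assumes.

```python
# Sequential-adoption sketch of Arthur's lock-in table.
# The payoff functions are hypothetical, constructed to be consistent
# with the numbers quoted in the text; they are not Arthur's exact table.

def beta_payoff(n):
    # Benefit to each user when n consumers have adopted Beta:
    # 10 for the first ten adopters, rising by 1 per additional ten.
    return 10 + (n - 1) // 10

def vhs_payoff(n):
    # Benefit when n consumers have adopted VHS:
    # starts lower (4) but rises twice as fast.
    return 4 + 2 * ((n - 1) // 10)

def simulate(consumers=100):
    counts = {"Beta": 0, "VHS": 0}
    for _ in range(consumers):
        # Each myopic consumer joins whichever format pays more right now,
        # with no foresight about where the market is headed.
        if beta_payoff(counts["Beta"] + 1) >= vhs_payoff(counts["VHS"] + 1):
            counts["Beta"] += 1
        else:
            counts["VHS"] += 1
    return counts

counts = simulate()
print(counts)                              # every consumer chooses Beta
print(beta_payoff(100), vhs_payoff(100))   # 19 vs. 22: all-VHS would have been better
```

Because the first consumer sees 10 versus 4, Beta gets the first adoption, and VHS never gets a chance to climb its curve: the simulated market "locks in" to Beta even though universal VHS adoption would leave every consumer better off. Note how completely the result depends on the assumed absence of foresight.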
This is the underlying logic, presumably, for Arthur's concern that progress might come to a halt, since according to his theory, we might remain with Beta even in the face of ever improving alternatives. In other writings we have referred to this as the "Chicken Little Theory."
There are, however, many problems with this story. For one thing, it assumes that each individual consumer has no foresight, but merely makes choices based on his narrow and myopic view of the one column in the table that is the payoff from his purchase. Second, it assumes that the providers of technologies, or technology related goods, have no ability to influence the outcome of the competition between the two technologies. Third, it assumes that consumers get value from a larger number of other users without regard to who these users might be. Fourth, it assumes a particular structure of rewards that is unlikely to occur. Let's examine each of these problems in turn.
It is ironic that in this model, which has been applied to various high-tech products, there is no recognition of foresight. For once foresight is allowed, this particular problem of path dependence goes away. If consumers have foresight, they can easily see that VHS is the better technology in the long run, and they know that all other consumers are aware of this. Some of the familiar features of the market will work to coordinate the outcome. Decision makers will rely on consumer and trade publications to keep up on the characteristics of technologies. Retailers play a role by committing their marketing energies, and to a degree staking their reputations, on the basis of their predictions of these contests. The assumed lack of any foresight on the part of these decision makers is a very serious shortcoming of Arthur's analysis. In a world without foresight, CD players, automobiles, and most any new technology would never get started. CD players at first had almost no disks, CD-ROMs (and computers) at first had very little software, automobiles did not have gas stations, and so forth. Clearly consumers must form some expectation of the future if they are to act at all, even if that foresight is imperfect.
Arthur's story assumes that each decisionmaker constitutes only a single adoption of a technology, that each consumer purchases only one unit. Some large (corporate) customers, however, might be sufficiently large that they can realize the advantage of the superior technology even if, and perhaps particularly if, no one else uses it. Thus fax machines at first were used largely by companies wishing to send information and pictures within a firm. The effect of large customers is to tend to swing the entire market toward the efficient solution.
Even if customers do not have foresight, producers of these products probably do. Producers will have both the reason and the means to influence the outcomes of these competitions between technologies. They can, for example, subsidize or give away their products in order to demonstrate the values of their products or to create positive network effects. In the table above, the amount of wealth that can be created by large-scale adoption of VHS is greater than the corresponding amount for Beta. Since the owners of a technology ordinarily would be expected to appropriate some or all of this wealth, the owners of VHS would have a greater potential gain than the owners of Beta (the table presumes that costs are identical for these products). With any form of rational capital markets, the owners of VHS will be able to enlist allies with deeper pockets than will the owners of Beta. But in the theories of path dependence that have been promulgated, owners of technologies, standards, or products have no such roles to play. Again, this is a remarkable deficiency in an analysis applied to the computer industry.
Surely, firms and individuals have some foresight, even if only imperfect foresight. Imperfection is inevitable, even in markets. But for the reason argued above, where there is a significant difference in the relative advantages of competing formats (or technologies, networks, standards), we would expect the choices made in markets to be the correct choices most of the time. Furthermore, the relevant question for public policy is not whether markets are imperfect but rather whether they are more imperfect than governments. So, even where we identify imperfections in market outcomes, we still must raise the question of whether government can do any better.
Finally, there is another special aspect to this table that is easy to overlook. A plot of the returns in the table would cross. That is, Beta is better when the numbers of adopters are small, VHS is better when the numbers are large. Figure 1 is constructed to reflect this: The slopes of the payoff lines differ, with the slope of V being steeper than the slope of B. Without this "crossing" effect, the technology that is chosen first will always be the better of the two and no harmful lock-in can occur. In order for the paths to cross, the network effects, or economies of scale in production, must be much stronger for the technology that is less desirable prior to these network or scale effects. That is, the one that starts off badly must get better faster. The instinct to root for the underdog notwithstanding, there is no reason to believe that this overtaking characteristic is a likely characteristic of technologies. For the technologies that are often mentioned in this literature, there is every reason to believe that this overtaking characteristic is very unlikely.
Considering software, for example, does it seem reasonable that the benefit curves can cross in this way? Interestingly, there is no attempt in the literature that Reback cites to ponder whether the network effects are likely to differ in this way. Are network effects likely to differ between two different word-processors? We can see no reason why the slopes should differ. The value of compatibility should be independent of the choice of format. The benefit from the ability to exchange files with others (the network effect for software) would not seem to depend on the particular attributes of the word processor itself. If the value of compatibility is independent of the product itself, the slopes of the benefits curves for competing products would be the same and thus could not intersect. So another instrumental assumption of the theory seems inconsistent with reality. [The same story should hold for videorecorders, where the value of compatibility is the larger selection of pre-recorded movies available].
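The point about slopes can be made concrete with a small calculation (the numbers are hypothetical): if the compatibility benefit per additional user is the same for two word processors, the gap between their benefit curves is constant at every adoption level, so the curves can never cross and no harmful lock-in is possible.

```python
# Sketch of the equal-slopes argument, with made-up numbers.
# If the value of compatibility (the slope) is the same for both products,
# the product with the better stand-alone quality wins at every adoption level.

slope = 0.5          # value of compatibility per additional user (same for both)
standalone_a = 10    # stand-alone quality of word processor A (hypothetical)
standalone_b = 7     # stand-alone quality of word processor B (hypothetical)

def benefit(standalone, n):
    # Total benefit to each user: intrinsic quality plus network effect.
    return standalone + slope * n

# The gap between the two benefit curves at every adoption level 0..100.
gaps = [benefit(standalone_a, n) - benefit(standalone_b, n) for n in range(101)]
print(min(gaps), max(gaps))   # the gap never changes: A stays ahead by 3 at every n
```

Only by assuming a steeper slope for the initially inferior product, as Arthur's table quietly does, can the curves be made to cross.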
The White Paper implies that Microsoft's acquisition of Intuit would have forced all computer based home-banking, and eventually all banking, to go through Microsoft's (Intuit's) interface. [Conveniently, Reback never explains how it is that Microsoft was unable to use its mythic powers of monopolization to make its product, Money, rather than the rival Quicken, the number one product, given all the network effects and path dependence that supposedly exist]. Microsoft's incipient dial-in network was then supposedly going to have hooks built in to Intuit's software, requiring that all Intuit users go through the Microsoft Network. Since all shopping would require using Intuit, it too would require Microsoft's network. As books, newspapers, and other sources of information become increasingly tied to computer network transmission, these too would require use of Microsoft controlled networks. Libraries, bookstores, and newsstands would be replaced by the universal PC screen, and all of that is under the control of Microsoft in Reback's adaptation of "1984".
Eventually, according to this scenario, Microsoft would write, or at least approve, all the books, plays, poetry, and movie scripts. All the music and lyrics. All political commentary. These are the "threats to the underpinnings of a free society" that Reback contemplates. What evidence does Reback present for these claims? Aside from speculation, only this: Microsoft's CD-ROM encyclopedia, Encarta, provides a flattering biography of Big Brother Bill (Gates). Nevertheless, conspiracy theories are awfully hard to disprove, especially to the conspiracy theorist. Ordinarily their influence is limited by their general implausibility and their lack of substantiating fact. But the White Paper is lent credibility by its association with the economic theories of path dependence and network externality. Roughly the last third of the White Paper appears under the heading "Economic Evaluation." This is introduced with the statement that "the arguments draw upon what has become an extensive and rigorous literature on increasing returns economics." What we have shown here is that the theoretical arguments of this literature apply only in very limited circumstances and even then, are far from airtight. What is more damning is that these theories are, so far, entirely without empirical support, notwithstanding a few claims to the contrary.
QWERTY refers to the letters in the upper left hand portion of the typewriter (and computer) keyboard. One commonly hears the claim that to keep the old-fashioned mechanisms from jamming on the early typewriters the mechanics who created the keyboard used trial and error to find a design that actually slowed down typing speed. The claim is made that QWERTY's ascendance was due to a serendipitous association with the world's first touch typist, who won a famous typing contest using the QWERTY design. The QWERTY design is reputed to be far inferior to the "scientifically" designed Dvorak keyboard which claimed to offer a 40% increase in typing speed. Supposedly, the Navy conducted experiments during the Second World War demonstrating that the costs of retraining typists on the new keyboard could be fully recovered within ten days! According to the path dependency theories, no producers found it profitable to create Dvorak keyboards since everyone already knew QWERTY, and no one learned Dvorak because there were no Dvorak keyboards.
This is an ideal example, which accounts for its continued use by virtually every author looking for an example of path dependence. The number of dimensions of performance are few and in these dimensions the Dvorak keyboard appears overwhelmingly superior.
Yet upon investigation, this story appears to be based on nothing more than wishful thinking and a shoddy reading of the history of the typewriter keyboard. The QWERTY keyboard, it turns out, is about as good a design as the Dvorak keyboard, and was better than most competing designs that existed in the late 1800s when there were many keyboard designs maneuvering for a place in the market.
Ignored in these stories of Dvorak's superiority is a carefully controlled experiment conducted under the auspices of the General Services Administration in the 1950s comparing QWERTY with Dvorak. That experiment contradicted the claims made by advocates of Dvorak and concluded that it made no sense to retrain typists on the Dvorak keyboard. This study, which was influential in its time, brought to an end any serious efforts to shift from QWERTY to Dvorak. Modern research in ergonomics also reaches similar conclusions. This research consists of simulations and experiments that compare various keyboard designs. It finds little advantage in the Dvorak keyboard layout, confirming the results of the GSA study.
So on what basis were the claims of Dvorak's superiority made? We discovered that most, if not all, of the claims of Dvorak's superiority can be traced to the patent owner, Professor August Dvorak. His book on the relative merits of QWERTY versus his own keyboard has about as much objectivity as a modern infomercial found on late night television. The wartime Navy study turns out to have been conducted under the auspices of the Navy's chief expert in time-motion studies -- Lt. Commander August Dvorak, and the results of that study were clearly fudged. The study also appears to be lacking in anything remotely related to objectivity. The difficulties that we had getting a copy of the Navy study, and the fact that it is mentioned, but never actually cited, convinced us that those economists enamored of the Dvorak fable never actually perused a copy of that study.
Many other aspects of the received story were also erroneous. It turns out that there was intense competition between producers of various keyboard designs early in the history of the typewriter keyboard. And contrary to prior claims, there were many typing competitions between touch typists on various keyboard designs, and QWERTY won its share of such competitions. Thus QWERTY was put through a fairly severe set of tests by the market, and the reason QWERTY survives seems to be that it is a reasonably good design.
We published a very detailed account of all this in the Journal of Law and Economics in the spring of 1990. Yet in spite of this five year old paper, which has not been factually disputed, economists working on path dependence topics continue to use the QWERTY keyboard as the main example to support their theory that markets cannot be trusted to choose products. One could hardly find better evidence of this theory's lack of empirical support than the continued use of a result that is known to be incorrect. The QWERTY story, by the way, is cited in Reback's paper (his footnote 44).
In 1969 Sony developed a cartridge based videorecorder, the U-matic, which it hoped to sell to households. Since other companies had such products in the works, Sony invited Matsushita and JVC to produce the machine jointly and to share technology and patents. This was for the very purpose of achieving a standard, which indicates considerable foresight on the part of the market participants. But the U-matic was not a success as a home machine, though it did find a niche in educational markets. Many other attempts to break into the home market were tried by various companies, American, Japanese, and European, but all met with failure.
In the mid 1970's, Sony developed the Betamax. Believing that with the Betamax it finally had a machine that would succeed in the home, Sony again offered the machine to Matsushita and JVC. Once again, Sony hoped to establish a standard that would cut through the clutter of competing formats. Sony provided technical details of the Betamax, including an advance in azimuth recording that helped eliminate the problem of crosstalk. But at a meeting at Matsushita's headquarters many months later, where JVC demonstrated its new machine, Sony engineers concluded that JVC had expropriated their ideas. Needless to say, this apparent usurping by JVC of the Sony technological advances created bitterness between the one-time allies, leaving Sony and Matsushita-JVC to go their own separate ways. The only real technical difference between Beta and VHS was the manner in which the tape was threaded and, more importantly, the size of the cassette. The choice of cassette size was based on a different perception of consumer desires. Sony believed that a paperback sized cassette, allowing easy transportability (although limiting recording time to 1 hour), was paramount to the consumer, whereas Matsushita believed that a 2 hour recording time, allowing the taping of complete movies, was essential.
The larger VHS cassette accommodated more tape. For any given speed of tape this implied a greater recording time. Slowing the tape increases the recording time, but also decreases picture quality. VHS, because of its larger size cassette, could always have an advantageous combination of picture quality and playing time. This difference was to prove crucial.
The behavior of the antagonists in this competition is a wonderful example of forward looking behavior, even if there was some misperception on the part of the players. Both sides attempted to influence expectations and sales in every way they could. They used partnerships, advertising, pricing and any other tool at their disposal. The behavior was nothing like the passive adoption story that is presented with Arthur's table.
Sony, in an attempt to increase market share, allowed its Beta machines to be sold under Zenith's brand name, a highly unusual move for Sony. To counter this move, Matsushita allowed RCA to put its name on VHS machines. Although Sony was able to recruit Toshiba and Sanyo to the Beta format, Matsushita was able to bring Hitachi, Sharp, and Mitsubishi into its camp. Beta slowed down the tape and increased its playing time to two hours; VHS did the same and increased playing time to four hours. RCA radically lowered price and came up with a simple but effective ad campaign which touted VHS' advantage: "Four hours. $1000. SelectaVision." Zenith responded by lowering the price for its Beta machine to $996.
The market's referendum on playing time versus tape compactness was decisive and rapid. Beta had an initial monopoly for almost two years. But within six months of VHS' introduction in the US, VHS was outselling Beta. These results were repeated in Europe and Japan as well. By mid 1979 VHS was outselling Beta by more than 2 to 1 in the US. By 1983 Beta's world share was down to 12 percent. By 1984 every VCR manufacturer except Sony had adopted VHS. Not only did the market not get stuck on the Beta path, but it was able to make the switch to the slightly better VHS path. Notice that this is not path dependence. Even though Beta got there first, VHS was able to overtake Beta very quickly. This, of course, is the exact opposite of the predictions of path dependence, which implies that the first product to reach the market is likely to win the race even if it is inferior to later rivals.
Now listen to the version of this story found in Brian Arthur's work: "The history of the videocassette recorder furnishes a simple example of positive feedback. The VCR market started out with two competing formats selling at about the same price: VHS and Beta. ... Both systems were introduced at about the same time and so began with roughly equal market shares; those shares fluctuated early on because of external circumstance, "luck" and corporate maneuvering. Increasing returns on early gains eventually tilted the competition toward VHS: it accumulated enough of an advantage to take virtually the entire VCR market. Yet it would have been impossible at the outset of the competition to say which system would win, which of the two possible equilibria would be selected. Furthermore, if the claim that Beta was technically superior is true, then the market's choice did not represent the best outcome."
The lesson of the path dependence literature is that markets cannot be trusted to choose the right products. We would argue that a better lesson is that public policies and legal theories should not be based on a literature that is based on only the most casual sort of empirical analysis.
Arthur also has claimed that the gasoline powered engine might have been a mistake, and that steam or electricity might have been a superior choice for vehicle propulsion. Never mind that even with all of the applications of motors and batteries in the century since, and that with all the advantages of digital electronic power-management systems, the most advanced electric automobiles that anyone has been able to make do not yet equal the state of the art in internal-combustion automobiles as of the early nineteen-twenties. Never mind that electric automobiles actually were commercially viable in the early stages of the industry, and that electric power has been viable ever since in the nearby technologies of smaller industrial and recreational vehicles. Never mind that in the technologies in which steam has been dominant, railroads and ocean-going ships, it has gradually been eclipsed by diesel, electric, and hybrid designs. Surely it is a bad idea to base public policy on science fiction instead of science. Yet we fear that this will be the unintended result of following theories that appear to be based on little more than casual storytelling.
Think for a minute about the way that personal finance software is used. Network effects imply that consumers derive additional value from the fact that other consumers are using the same product, in general because this enhances compatibility. But personal financial information would seem to be one clear exception to this. Do most users exchange personal financial information with each other? Do we value the ability to exchange such information? Surely the answer to the last two questions is a resounding "no". If anything, we prefer to keep this type of software away from prying eyes, since the fewer people who have access to one's personal data the safer one usually feels. Indeed, it is hard to imagine a category of software less influenced by network effects than personal finance software.
What this means is that the types of network effects usually associated with software do not exist in this case. Thus it is a non sequitur to claim that network effects provided an economic basis against the Intuit merger, as Reback did. Reback claimed that consumers tend to get "locked-in" to their financial software, but "lock-in" in this case has nothing to do with network effects or path dependence. Lock-in has a particular meaning in the context of path dependence. We are locked-in to eating and breathing for obvious reasons. This type of lock-in might be thought of as positive lock-in and does no harm. Lock-in, as it is used in the path dependence literature, means that users continue to use product A, say, when in fact they would prefer to use product B, but because everyone else uses product A they feel compelled to use product A as well. Such an instance can be thought of as negative lock-in.
Reback is clearly talking about negative lock-in. Thus to say customers are locked-in to TurboTax, or Quicken, Reback must mean that if left to their own devices, most consumers would actually prefer one of the alternative programs, but since they must interact with others they use TurboTax or Quicken. Since there are no network effects for personal finance software, no interaction with other consumers, this argument makes no sense. Consumers are not negatively "locked-in" to Quicken or TurboTax but merely prefer these programs to the alternatives. Note that a majority of individuals still do their banking the old fashioned way, i.e. without computers. Would Reback believe this majority is "locked-in" to pens and pencils, checkbooks, and the US postal service? Would such a lock-in imply that pencil manufacturers are nascent threats to our freedom?
It is also worth remembering that earlier versions of Windows (such as 2.0) were quite unsuccessful. Microsoft was able to wean consumers away from DOS only when Windows 3.0 was able to demonstrate a clear superiority. This was a window of opportunity for developers wishing to oust the market leaders. A developer who bet on Windows might have been able to surpass dominant rivals from the DOS world if the Windows version of the product was sufficiently better than the DOS version. This is how Microsoft was able to come to dominate the applications market.
Microsoft already had a successful history as a developer of applications based on graphical interfaces. It produced spreadsheets and word processors in its role as a major developer for the Macintosh platform. Lotus and WordPerfect, seeing the poor results of Windows 2.0, and largely ignoring the Macintosh market, might have thought it prudent at the time to put Windows 3.0 versions of their products on the back burner. They also were not well versed in writing quality GUI applications, since DOS programs were still their bread and butter.
Most readers of this magazine can probably remember the less-than-enthusiastic reviews of the early Windows versions of 1-2-3 and WordPerfect in the computer press. At that time the interesting question was why consumers continued to use these inferior spreadsheets and word processors when the Windows versions of Word and Excel were more highly regarded. The decisions to continue focusing on DOS were made in the corporate boardrooms of Lotus and WordPerfect, and those decisions are now clearly seen to have been in error. It is disingenuous for the officers of these companies to now try to shift the blame. The current attempt to claim that Microsoft's market success is due to its control of the operating system, and not to the creation of better products at lower prices, is merely an attempt to rewrite history so as to promulgate antitrust theories that might be used to erase the errors of Microsoft's competitors.
Microsoft's claimed transgression appears to be that the Microsoft Network will be made available to all purchasers of Windows 95. Microsoft's critics take the success of Windows 95 as a given and then claim that Microsoft gains a great advantage by including the software for its network with the operating system. We are told that there is a button in Windows 95 that makes the Microsoft Network so simple to use that consumers will be unable to resist. Yet these critics forget to mention that consumers will need to pay monthly fees if they are to use this product. They do not mention that one of course needs a modem before one can use the product. They do not mention that the purchase of a modem most frequently includes free software from other commercial on-line services, such as America Online and Prodigy. They do not mention that software for other commercial services can easily be procured for free. They do not mention that all software that runs under Windows works by pushing a button (icon) on the screen. In reality, the inclusion of such software in Windows 95 merely matches the practices of the other on-line services, albeit at a somewhat lower cost. And it is less well targeted, since the software reaches computer users as a whole rather than just the modem owners who can actually use it.
If Microsoft's on-line service does not provide sufficient value relative to market competitors (CompuServe, Prodigy, America Online, GEnie, Delphi, and the Internet), it will not succeed. Reback's extravagant claims for Microsoft's power notwithstanding, it is useful to remember that Microsoft could not leverage its operating-system position to induce consumers to use its Money product. Consumers need a reason to pick particular products, and will use the product that provides the greatest utility for the money. Reback points out that Computer Associates gave away free copies of Simply Money yet could not gain significant market share. Why expect that the Microsoft Network will do any better? Unless, of course, these critics believe that the product itself is one that consumers will want. Microsoft is going up against the likes of IBM-Sears (Prodigy), AT&T (Interchange), and General Electric (GEnie). There is no reason to think that these companies cannot fend for themselves, although these large companies have not done well against their smaller rivals (America Online and CompuServe).
Computer software does pose interesting problems for economic analysis. It may be that some types of software products should be produced by only a single supplier. But this is not the claim made in the White Paper. There might be reason to intervene in the market if there were evidence that rivalry in the marketplace were moribund. But the evidence would seem to be overwhelmingly to the contrary. Or there might be reason to intervene if there were evidence that these industries were seriously deficient in technological progress. But there is no such evidence. There might be reason to overturn the market's selection of a standard if it could be shown that markets are systematically deficient at such choices. But as we have shown, there is as yet no evidence for such a view.
Of course, there would be reason to overhaul entire industries, and damn the consequences, if we really could be led to believe that our most fundamental freedoms were in serious peril. This, it appears, is the tack taken in the White Paper. Since conventional efficiency arguments about monopoly don't carry much force here, perhaps a large dose of hyperbole will stir the populist pot. And so arguments are presented to the court and to the public that if Microsoft succeeds in one more market, our freedoms are at risk. But such claims stretch credibility beyond the breaking point. More likely, the true danger would come from relying on technical wizards in the judiciary and government to choose our technologies for us, or from policy set by lawyers and professors who habitually get their facts wrong. Similarly, abridgments of our freedoms are far more likely to come from a government that can compel our behavior than from a corporation that requires, for its own survival, our voluntary purchases of its products.
Liebowitz, S. J. and Margolis, S. E., "Network Externality: An Uncommon Tragedy," The Journal of Economic Perspectives, Spring 1994, pp. 133-150.
Liebowitz, S. J. and Margolis, S. E., "Path Dependence, Lock-in, and History," The Journal of Law, Economics and Organization, Spring 1995, pp. 205-226.
Liebowitz, S. J. and Margolis, S. E., "Are Network Externalities a New Source of Market Failure?" Research in Law and Economics, forthcoming, 1995.