The Economics of Knowledge-Based Goods

In reality, all goods are knowledge-based. But for goods at the leading edge of technology, the cost of the knowledge is usually a more important part of the cost of production. For example, the design cost of a microprocessor is likely to be a larger share of its total production cost than the design cost of an automobile is of its total cost. The R&D in developing a new drug is likely to be a higher percentage of its total cost than R&D would be for the production of, say, corn.

In many respects, this course will really be about the creation of knowledge (as a product of markets). The knowledge itself can be separated from the physical embodiments of that knowledge in particular products. The writing of the code for a word processor, for example, can be separated from the duplication of the embodiment of that code on a floppy disk or CD-ROM.

The course will begin by examining how markets create knowledge. We will use a theoretical construct called a public good. Then we will examine several other characteristics often found in markets for new, high-technology products, such as economies of scale, network economies, standard-setting problems, and pirating.

Before we get to this, however, we will review some material on price discrimination that will be essential for understanding the pricing used by producers of knowledge-based goods.

PRICE DISCRIMINATION: Charging different prices for essentially the same good.

Perfect price discrimination: each unit sells for its maximum price.

The demand curve is now also the MR curve, since the price on earlier units does not have to be lowered in order to sell additional units.

A profit-maximizing firm produces the same output as a perfectly competitive market, so there is no deadweight loss. The difference is that the entire surplus goes to the producer.

Problem: how to prevent arbitrage, and how to learn the max price for each unit and who is willing to pay it.

Examples: there is no such thing as a pure perfect price discriminator, but there are cases where producers sell at multiple prices:

a.     automobiles: salesmen try to determine just what a consumer is willing to pay.  Why do we find this for cars and not for food? Was this more prevalent or less prevalent 100 years ago?


b.     Medical doctors back in the days when they made house calls and set their own rates.

Ordinary Price Discrimination

The key concept is the equalizing of marginal revenues. The logic is simple: if goods sell at identical prices in two markets but the marginal revenues differ, profits can be increased by expanding sales in the market with high marginal revenue and cutting sales in the market with low marginal revenue. Doing so tends to equalize the marginal revenues.

Rule: increase P in the low-MR market; decrease P in the high-MR market.

Before price discrimination, the price is the same in both markets, so that

MR1 = P1(1 - 1/E1); MR2 = P2(1 - 1/E2), but P1 = P2.

MR1 does not equal MR2 (since the elasticities are not equal, even though the prices are).
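A quick numeric sketch (the numbers are made up for illustration) of this point: at equal prices, the market with the higher elasticity has the higher marginal revenue.

```python
# Marginal revenue in terms of price and elasticity: MR = P * (1 - 1/E).
def marginal_revenue(price, elasticity):
    return price * (1 - 1 / elasticity)

p = 10.0                                 # same price in both markets
mr_low_e = marginal_revenue(p, 2.0)      # E1 = 2 -> MR1 = 5.0
mr_high_e = marginal_revenue(p, 5.0)     # E2 = 5 -> MR2 = 8.0
assert mr_high_e > mr_low_e   # higher elasticity -> MR closer to the price
```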


To discriminate, a seller must be able to:

·       identify customers as belonging to a particular group

·       know the price elasticity of demand in each group (or the reservation price)

·       prevent ARBITRAGE -- buying low and selling high by middlemen.

NOTE: Prior to discrimination, the market with the high marginal revenue is also the market with the high elasticity. This follows from the formula MR = P(1 - 1/E).

New rule: increase price in the market with low elasticity (low responsiveness); lower price in the other market. Continue to do this until the marginal revenues are equalized.


Note that when the markets are merged, the same price exists in each, but the marginal revenues in the two markets are almost certain to differ. Profits can be increased by shifting output from the low-marginal-revenue market to the high-marginal-revenue market, since we give up only a small amount of revenue in the first market and replace it with a larger amount of revenue in the second. Before price discrimination occurs, that is, when the price is the same in both markets, there exists a clear relationship between the marginal revenues in the two markets and the elasticities. In particular, since MR = P(1-(1/n)), where n is the price elasticity of demand, the market with the higher n must have a marginal revenue that is closer to the price, and vice versa. Thus markets with higher marginal revenues also have higher elasticities (note: only when the prices are the same!).

 Since we can increase profit by transferring output from low to high marginal revenue markets, we need to lower price in high marginal revenue markets (to increase sales) and raise price in low marginal revenue markets (to decrease sales).

This rule translates into the following: raise price in the market with lower elasticity; lower price in the market with higher elasticity. Continue to do this until the marginal revenues are equated (note: when price is lowered in the high-MR market, the MR falls, and vice versa).
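The endpoint of this process can be sketched directly. Assuming constant elasticities and a constant marginal cost (an illustrative simplification, not something the notes specify), setting MR = MC in each market gives each market's discriminatory price in closed form:

```python
# With constant elasticity E and constant marginal cost c, setting
# MR = P*(1 - 1/E) equal to c gives the discriminatory price P = c / (1 - 1/E).
# (Constant elasticities are an assumption made for this sketch.)
def discriminatory_price(mc, elasticity):
    return mc / (1 - 1 / elasticity)

mc = 4.0
p_inelastic = discriminatory_price(mc, 1.5)   # E = 1.5 -> P = 12
p_elastic = discriminatory_price(mc, 3.0)     # E = 3.0 -> P = 6
assert p_inelastic > p_elastic   # the less elastic market pays the higher price
```

At these prices MR equals MC in both markets, so the marginal revenues are equalized and no further shifting of output is profitable.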

 Methods used to discriminate

 Movies: lower prices for those over 65 and for children. Why? Over-65s are presumably more elastic: they have lots of time and other activities, and in particular they are not likely to come during peak periods, when the least elastic demanders view movies. The cost of serving them is also lower. Children tend to bring their parents; they also eat a lot, raising the cost of cleanup, but they too come during off-peak hours. Weekend evenings are prime time. Working people and teenagers are hard-pressed to see movies at other times and have very inelastic demands, which is why we want to charge them the higher price. Others get the low price, but only children and old people can be distinguished at low cost.

 Stamps (S&H Green Stamps, etc.)

  Stores pay the stamp company for the stamps, which they give out to customers based on the dollar amount of products purchased. The stamp company redeems the stamps for various products it has purchased, which consumers can order once they have enough stamps. Stores using stamps have higher costs of doing business, since they have to pay for the stamps to cover the costs of the stamp companies. These stores, therefore, have to raise their prices.

  Not all customers redeem the stamps. Some customers lose them, throw them out, or give them away. These customers therefore pay a price which includes the cost of the stamps, but receive nothing extra in return. Those customers who do redeem the stamps are reimbursed for the higher prices by the value of the merchandise they get. In fact, these consumers are subsidized by those who do not redeem the stamps. Thus these two groups (redeemers and non-redeemers) are charged different prices for merchandise sold by the store, and price discrimination is occurring.

 Why do we want to lower the price to those customers who redeem stamps? Presumably these customers are more elastic. After all, they are sensitive enough to price differentials that they are willing to go to the effort of collecting and redeeming the stamps. They presumably would react strongly to any price change.

 Cents-off coupons found in newspapers and product containers.

 Either retailers or manufacturers give back cash to consumers who bring the coupon to the retailer or return the coupon back to the manufacturer.

 Same basic idea as stamps. Those customers who are willing to take the time and effort to cut the coupons out of the paper, carry it with them, and present it to the cashier are fairly dedicated to getting a lower price for their purchases. They would seem to be the more elastic customers and they do receive a lower price than those customers who are not interested in redeeming the coupons.  how else can you explain coupons?

 Regular interval sales: sales, such as tire sales at Sears or food sales, which are quite predictable.

 People who are willing or able to wait can get the lower price. If your battery has died, or your tire has blown you need to get a replacement immediately and generally will have to pay the higher price. If you are not in such a hurry you can get a lower price. Once again, elastic customers get the lower price.

 Dumping: Mercedes had higher prices in the US than in Europe, relative to other makes.

  Geographical discrimination. American customers of this product are less elastic: Mercedes has a better reputation here than there. In Japan, on the other hand, automobile and other producers charge more at home than they do abroad. This is typical, since the reputation of a car is usually strongest at home and weakest where it is least known. Japanese consumers much prefer Japanese products, so they are less elastic toward Japanese producers.

 Airline fares: requirements for discount = 30 days in advance, stay over weekend.

 Discrimination here is between business and non-business customers. Business customers are less elastic, since the value of a business trip is often far greater than the cost of the air fare and little preparation time is available. Business travelers do not, as a rule, stay over weekends, so they cannot take advantage of the lower fare; they also rarely have the luxury of planning 30 days in advance. People on vacation, on the other hand, usually want to stay over a weekend and plan far in advance.

 Hardcover and paperback: markups on hardcover books are much higher than on paperbacks. Customers are separated according to their urgency to purchase, or the value they place on hardcover books. By delaying the paperback introduction, the publisher forces impatient customers to pay a high price. These are probably low-elasticity customers. Also, wealthier customers probably have higher reservation prices than others and prefer hardcover books to a greater extent. The current practice charges a higher price to each group. The same logic applies to movies (tapes, cable, network) and to books turned into screenplays.



Commodity Bundling (Block Booking)

Bundling consists of selling two or more products together as a package. It differs from tie-in sales in that tie-ins do not fix the quantities of the two goods up front, whereas bundling fixes the relative amounts of the two goods at the initial purchase. In other words, tie-in sales allow customers to use different amounts of the tied good, whereas bundling forces the goods to be purchased in fixed proportions. Office suites are a well-known bundle, as are computers that come with preinstalled software, or complete meals versus a la carte items on a menu.

  Computers and stereo systems are interesting examples. Complete computer systems are bundles, which normally include monitors, printers, and hard drives. Stereo systems consist of speakers, receivers, tape recorders, turntables, compact disc players, etc. Manufacturers may sell complete bundles, or they may prefer to sell each component separately, or both.

 It is interesting to note that high-end stereos tend to be separate components, whereas high-end computers tend to be sold as systems. Why might this be?

Two types of bundling: mixed and pure.

Pure bundling: Producer only sells the goods as a bundle

mixed bundling: producer sells products both as a bundle and as separate units.

 Look at the diagram (figure bundle 1). Assume that consumers only buy 1 unit of good X and good Y. The points (large round dots) in this diagram represent people and their values (reservation prices - the maximum price they are willing to pay) for goods X and Y. The point labeled "Mr. A" represents Mr. A's maximum willingness to pay for X (Pax) and Y (Pay). The information on this diagram is the same as the information contained in the market demands for good X and good Y.

Px and Py are the normal profit-maximizing prices for the two commodities X and Y. These prices were determined in the normal way. That is to say, the reservation prices for X are ranked in descending order to arrive at the demand curve for X (this is how a demand curve is derived, right?). This information is combined with the cost curves and used by the seller to derive the profit-maximizing price, Px. The same is done for good Y to determine Py.

 Figure bundle 2 shows the profit-maximizing prices Px and Py. This diagram also demonstrates which customers will buy which combinations of products. In the northeast quadrant customers buy both X and Y. In the southwest they buy neither good. In the northwest they buy Y only. In the southeast they buy X only.

 Figure bundle 3 demonstrates the case of pure bundling. The downward-sloping line represents the price of a bundle. The line must have a slope of -1 since dollars are on both axes, and the price of the bundle can be read off either axis. Customers who have a combined value for X and Y greater than the price of the bundle purchase the bundle; those are the customers to the northeast of the bundle price line. Customers with a combined value less than the price of a bundle don't buy the bundle; those are the customers to the southwest of the bundle line.

How does pure bundling compare with no bundling?

Figure bundle 4 gives a particular pair of prices and a bundle price equal to the sum of the individual prices. In choosing between pure bundling and no bundling, what considerations must be made?

In areas B and D, consumers used to buy only one of the two goods; now they buy both. In areas A and C, consumers used to buy one of the two goods; now they buy neither. What happens to profits?

Assume that A=20, B=30, C=25 and D=35.

B-C = change in sales of X = 30-25 = +5

D-A = change in sales of Y = 35-20 = +15

Therefore, at the bundle price Px + Py, a larger quantity of the two goods is demanded than with no bundling. This allows the price of the bundle to be increased and total profits to go up. Of course, if A and C are large relative to B and D, total profits will fall.
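The arithmetic above can be checked directly:

```python
# Numbers of customers in each area, from the example in the text.
A, B, C, D = 20, 30, 25, 35

# Switching from separate prices Px, Py to a pure bundle at Px + Py:
#   areas B and D: bought one good before, now buy both;
#   areas A and C: bought one good before, now buy neither.
delta_x = B - C    # change in units of X sold
delta_y = D - A    # change in units of Y sold
assert (delta_x, delta_y) == (5, 15)   # more of both goods is sold
```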


 Another way of examining these changes is to calculate the marginal profit for X and Y. The change in sales of X multiplied by the marginal profit of X (this is imprecise, since the marginal profit of X changes as the quantity of X changes) plus the change in sales of Y multiplied by the marginal profit of Y equals the change in total profits.

Bottom line: sometimes pure bundling is better than no bundling, sometimes not. It depends on the values of customers.



This is a special case of bundling. In block booking, consumers are forced to purchase blocks of products or none at all. Famous antitrust cases involving block booking concerned movies being sold in blocks to theaters. Television broadcasters currently buy 'libraries' of movies as opposed to individual titles. Why don't the producers of these products sell the titles one at a time?

Stigler provides an answer.

His answer can be translated into the bundling scheme provided above. He assumes an inverse correlation between the demand for goods X and Y. This translates into tastes represented by a diagram such as that shown in figure 1. Because of the inverse correlation, there are no separate prices for X and Y that can take away most of the consumer surplus without also greatly reducing the number of customers actually purchasing the products. In this case, imposing bundling in place of separate pricing can greatly increase profits.

Figure 2 demonstrates this possibility. The bundle is priced so as to remove almost all the consumer surplus, yet the number of consumers is at a maximum. In this case pure bundling (block booking) beats pure pricing.

But this doesn't have to be the case. There will be many instances when pure pricing will be superior to pure bundling. It is easy to imagine an example where tastes are aligned in such a way that pure pricing can extract virtually all the surplus from consumers without deterring any consumers from purchasing the products. Yet no bundle could achieve anywhere near that level of results.

The basic intuition underlying these results is that when tastes (reservation prices) for a good are fairly homogeneous across consumers, pricing will extract most of the surplus. And when tastes (reservation prices) for bundles are homogeneous across consumers, bundling will work particularly well. This essentially means that if demand curves for individual products are very flat, pure pricing will do very well at removing consumer surplus, but steeper curves mean that pure pricing will leave much potential consumer surplus untapped. Sometimes the demand for bundles will be flatter than the demand for individual goods, and that is when bundling works best.
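A small simulation illustrates Stigler's case. The reservation prices below are invented for the sketch: they are perfectly negatively correlated across the two goods, every customer values the bundle identically, and marginal cost is taken to be zero.

```python
# Invented reservation prices (value of X, value of Y), negatively correlated.
customers = [(9, 1), (7, 3), (5, 5), (3, 7), (1, 9)]

def best_separate_revenue(buyers):
    # Try each candidate price for each good; a buyer takes a good whenever
    # its price is at or below his reservation price for it.
    best = 0
    for px in {vx for vx, _ in buyers}:
        for py in {vy for _, vy in buyers}:
            rev_x = px * sum(1 for vx, _ in buyers if vx >= px)
            rev_y = py * sum(1 for _, vy in buyers if vy >= py)
            best = max(best, rev_x + rev_y)
    return best

def best_bundle_revenue(buyers):
    # A buyer takes the bundle whenever his combined value covers its price.
    best = 0
    for pb in {vx + vy for vx, vy in buyers}:
        best = max(best, pb * sum(1 for vx, vy in buyers if vx + vy >= pb))
    return best

sep = best_separate_revenue(customers)   # best separate pricing: 15 + 15 = 30
bun = best_bundle_revenue(customers)     # bundle priced at 10 sells to all: 50
assert bun > sep
```

With these tastes the demand for the bundle is perfectly flat (everyone values it at 10), so a single bundle price extracts the entire surplus, while no pair of separate prices comes close.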

This practice is still common in the film industry. Here is a quote from Silver Screen Partners IV 1991 Annual Report:

A portfolio of films is often likened to a train, with a small number of "locomotives" providing the steam as the films travel through markets around the world. In the Silver Screen Partners IV portfolio, there are some powerful "locomotives" that were major box-office successes and have enormous value. Principal among them are: Beauty and the Beast, Pretty Woman, The Little Mermaid, Dick Tracy, Dead Poets Society, Turner and Hooch and The Rescuers Down Under.

In all forms of television throughout the world, including network and syndicated television and basic and pay cable, films are licensed as packages rather than on a film-by-film basis. As in any portfolio of films, in the Silver Screen IV portfolio there is a wide range of value between the weakest and the strongest films. The art of packaging is to combine the "locomotives" with a variety of other films to maximize the value of the entire portfolio.


Public Goods

There are two definitions in the literature. Samuelson coined the term when explaining some economic difficulties with the way markets produce products such as television broadcasts.

a) Nonrivalrous consumption

b) Nonrivalrous consumption plus non-excludability of users


Are both of the factors totally dependent on the good itself, or do social conventions play a role? Clearly, a) is a function of the good itself, while non-excludability depends on the law and its enforcement.

Non-excludability causes problems in markets for private goods as well as in markets for nonrivalrous goods. If you cannot exclude people from using what you create, you won't create it. If you can't exclude people from using what you own (your car, say), you won't purchase it. All markets break down.

We will adopt the first definition since it depends only on the good itself.


Why do we care about public goods?

Ideas, inventions, and designs are public goods. They are also the basis for new technologies. Interestingly, the economic analysis of public goods is very different from the economic analysis of private goods.

How do markets produce public goods?

For private goods: the industry demand curve is the horizontal sum of the demands of all consumers at a given price.

Assume there are four individuals in the market, Mr. A, Ms. B, Mr. C, and Mr. D.

For any given price, we find the quantity that each individual would want. We then add those quantities together to get the quantity the market demands at that price.

If a market had 100,000 consumers who wished to purchase 1 unit and 100,000 who wished to purchase 2 units, the total market demand at that price would be 300,000

For public goods, the analysis is quite different. The industry demand curve is the vertical sum of the demands of all consumers at a given quantity.

For any given quantity, we find the price that each individual would be willing to pay. We then add those prices together to get the total price the market is willing to pay for that unit.

If a market had 100,000 consumers who wished to purchase the third unit at a price of ten cents, and 100,000 who wished to purchase the third unit at a price of twenty cents, total market demand would be a willingness to pay $30,000 for the third unit of output.
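Both additions from the two examples above can be written out explicitly:

```python
# Private goods: add QUANTITIES across consumers at a given price.
# 100,000 consumers want 1 unit each and 100,000 want 2 units each.
market_quantity = 100_000 * 1 + 100_000 * 2
assert market_quantity == 300_000

# Public goods: add WILLINGNESS TO PAY across consumers at a given quantity.
# For the third unit, 100,000 consumers would pay $0.10 and 100,000 would
# pay $0.20.
market_wtp_third_unit = 100_000 * 0.10 + 100_000 * 0.20
assert abs(market_wtp_third_unit - 30_000) < 1e-6
```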


Analyzing Public Goods

Think of book titles as public goods; physical copies of a single book title are private goods that embody a public good.

Several questions arise: how many titles are optimal to publish? How many copies of each title would be optimal? How do competitive markets work? Monopolies? Finally, is it possible to produce public goods efficiently?


Production of a Single book title

Take the case of the production of books. First we start with the production of copies of a single book title; then we move to the production of titles themselves. There are problems with the usual assumption of homogeneous units on the quantity axis. In order to come as close as possible to our usual assumptions, let us assume that these books are books by a single author, such as John Grisham or Stephen King.

In the above diagram, the producer, having a copyright and therefore a monopoly, produces Qm units of the book and sells them at a price of Pm. Note that the profit-maximizing quantity is found by equating the MC of printing with the MR curve [for items like software, the appropriate MC would be the cost of reproduction]. The cost of writing the book doesn't enter into these calculations at all. In order to determine whether this book will be written, the profit (area 3+4) needs to be contrasted with the cost of writing the book. As long as the profit is greater, the market will be able to contract to have the book written.

Note that to achieve economic efficiency, any book whose cost of writing is less than 1+2+3+4+7 should be written since it can provide value of 1+2+3+4+5+6+7+8 if sold at the cost of printing. By giving a copyright, some readers of books who get a value greater than the cost of producing the copy will not be able to buy a copy because the price is above MC of printing. Therefore, too few copies of books will be produced compared to the ideal. Note also that if the MC of printing is zero, the publisher (author) would produce up to the point where revenue was maximized (elasticity =1).


Production of Book titles


The demand for titles is supposed to be the vertical sum of the demands for titles by individual consumers. According to the diagram below, if the price of a title is P, this consumer will demand 7 titles. He will also receive consumer’s surplus of 1+2.

If the producer of books could perfectly discriminate, the amount that any consumer would pay would equal the value they place on each book title, and there would be no consumer surplus.

Determining the demand for titles is where the public good vertical addition of demands comes in. The supplier of titles is the author(s), who gets a payment for each title written. This price of a title is not the price that individual consumers pay, but is instead the total revenue net of printing costs that is available to the author (we assume that book producers only keep the printing costs which include a normal return on investment because the publishing market is competitive).

When determining the market output for this author, or for these close substitute titles, we can imagine two different demand curves. The “true” demand curve, which reflects the total value that society could in theory receive from the writing of the title is given by the perfectly discriminating demand curve. It is truly the sum of the demands of all individual consumers. The theoretically optimal output is Q*, such that every title which has a potential value greater than the cost of writing it gets produced.

The practical (attainable) demand curve represents the revenues that are actually attainable in the market for each individual title. The difference between these two demand curves is related to the areas 1+2 in the figure representing an individual's demand for titles. Q** is the best output we can get given the imperfect attainability of revenue. The producer of these titles, however, if there is monopoly power, will restrict output to Qm. From this we can conclude that the production of public goods will be less than the ideal level. However, it is unclear that there is any reasonable hope of improving this imperfect situation.


Joint Products

Joint products are products that are jointly produced by a single production process and which can serve several noncompeting uses. For example, cows provide many different types of meat, as well as leather. The production of Tylenol creates a nitrogen-based fertilizer as a by-product.

[Our interest in this model is that public goods can be thought of as a joint product (a single product can serve more than one consumer). The difference is that joint products are normally independent (beef and leather) whereas public goods are the same product sold to different individuals. When we discuss the economics of copying public goods we will treat originals and copies as two separate products that are imperfectly substitutable (imperfectly independent). But we will use this model.]

 The profit-maximizing and socially efficient outputs of these products are somewhat more complicated than those of simple products.

The accompanying diagram represents some of these difficulties. It represents the markets for steers, beef, and hides. After steers are slaughtered, producers are left with both beef and hides. The demands for beef and hides are presented in the diagram, along with their marginal revenues.

Competitive solution:  Q=QC ; PH + PB = PS in figure.

The competitive solution is straightforward. The demand for steers, given by the vertical sum of the demand for hides and the demand for beef, intersects the marginal cost of producing steers at Qc. Competitive producers will continue to produce steers as long as the price of a steer is greater than the MC of producing it. Since any producer takes the price as given, PB + PH is compared to the MC of production. This is also the socially efficient solution. It is possible that either PB or PH could equal zero. If the demand for hides, say, were low enough, the price of hides at Qc might very well be zero. It would be inefficient to prevent anyone from consuming the good since it has already been produced. Competition in the production of steers will lower the price of hides to zero (hides become a by-product of producing beef and have a zero marginal economic value).
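A sketch of the competitive solution with assumed linear demands (the numbers are invented for illustration; free disposal keeps either joint product's price from falling below zero):

```python
# Invented linear inverse demands: beef P_B = 20 - Q, hides P_H = 5 - Q.
# Free disposal means neither joint product's price can fall below zero.
def steer_price(q):
    p_beef = max(20.0 - q, 0.0)
    p_hides = max(5.0 - q, 0.0)
    return p_beef + p_hides          # vertical sum: what one steer fetches

mc_steer = 9.0   # assumed constant marginal cost of producing a steer

# Competitive output: the largest output at which a steer still fetches
# its marginal cost.  Search over a fine grid of outputs.
qc = max(q / 100 for q in range(0, 3001) if steer_price(q / 100) >= mc_steer)

# Hides run out of positive value at Q = 5, so at the margin the steer
# price is beef alone: 20 - Q = 9 gives Qc = 11, with hides a free by-product.
assert qc == 11.0
assert max(5.0 - qc, 0.0) == 0.0     # the price of hides is zero at Qc
```

This is the zero-price case described above: at the competitive output, hides sell for nothing even though they are consumed, which is efficient because they have already been produced.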

Monopoly solution: Figure 2. Simple-minded solution: equate the MC of steers with the MR of steers.

This solution may in fact work. If the marginal revenue in both the beef and the hide market is positive, then this is the profit-maximizing solution. If the marginal revenue in one of the markets is negative, the profit-maximizing solution will require throwing away some of one of the products.

If the hide market has negative marginal revenue at output Q*, then total revenue in the hide market would obviously increase if the output of hides were decreased (leading to an increase in price). After all, this would remove the sale of units which were decreasing revenue, so profits would have to rise. In fact, total revenue in the hide market is maximized when quantity is set at the point where marginal revenue equals zero, QH2 (with price PH2). Remember that the marginal cost of hides to the left of Qc is zero, since the steers are being produced anyway for their beef.

If the quantity of hides is reduced to QH2, then the marginal revenue of steers beyond QH2 becomes identical to the marginal revenue of beef, since only beef is being sold. Total profit maximization then requires that output Q2* of steers be produced. The price of beef is PB2, and the quantity of beef sold is QB2.

The market for new and used goods

This is a variation of the previous set of models. Here you have a good (say a textbook) which can, in its two-period life, be both a new and a used book. Assume, for a moment, that new and used books are essentially not substitutes for one another, just as beef and hides are not substitutes. Then there would be separate demands for new and used books. Benjamin and Kormendi is the best reading on this.

Producers of books, if they can rent them out for one period, will generate payments equal to the vertical sum of the demand curves. Now what happens if the book producers sell the books? If there is a resale market, we would expect the resale price to equal the rental price of used books, and the net demand for new books becomes the vertical sum of the pure demand for new books and the demand for used books. And what if there were no market for used books? Then consumers of used books would have to switch to new books if they wanted the books at all. This would lead to the horizontal addition of demand curves (if new books were very good substitutes for used books). The less perfect new books are as substitutes for used books, the lower the demand, until it equals the pure demand for new books if they are not substitutes at all.

This is illustrated in the accompanying diagram. For convenience, the demand for new books is assumed identical to the demand for used books. D1 is the demand for new books and D2 is the demand for used books. If new and used books are substitutes, and if the used book market is eliminated, the demanders of used books would switch to the new book market. The addition of new and used demanders in the new book market would lead to a net demand for new books of Dh, the horizontal sum of D1 and D2. If new and used books were only imperfect substitutes, then the net demand curve would lie between D1 and Dh. The less substitutable they are, the closer the curve would lie to D1.

If the used book market is allowed to exist, the net demand is the vertical sum Dv. When would a firm be better off, with or without a used market? This will depend on what the marginal cost curve looks like. It could look like MC1 or MC2. With a curve like MC1, Dv lies above Dh at the intersection of the demand with the MC curve. If the industry were competitive, the equilibrium would lie at the intersection of the demand and MC curves, which would mean the industry would be better off if the used good market existed (implying that Dv was the appropriate demand curve). A monopoly will prefer to produce a smaller output than a competitive industry, so a monopoly would also produce where Dv lies above Dh, meaning that with a marginal cost curve like MC1 the monopoly would be better off with the used market.

With a curve like MC2, the competitive market will be better off eliminating the used market (Dv) and having Dh in its place since Dh leads to both a higher price and a higher quantity. A monopolist may or may not be better off eliminating the used market.
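The comparison of Dv and Dh can be sketched with assumed identical linear demands for new and used books (the numbers are invented for the sketch):

```python
# Invented identical linear demands for new and used books: P = 10 - Q each.
def d_v(q):
    # Vertical sum: the used market exists, so a new book eventually earns
    # both the new-reader payment and the used-reader payment.
    return max(2 * (10.0 - q), 0.0)

def d_h(q):
    # Horizontal sum: the used market is eliminated; used-book demanders
    # switch to new books, adding quantities at each price.
    return max(10.0 - q / 2, 0.0)

# The two curves cross where 2*(10 - q) = 10 - q/2, i.e. at q = 20/3.
# At small outputs the vertical sum lies above the horizontal sum, so a
# producer whose MC curve cuts demand there gains from the used market.
assert d_v(2) > d_h(2)
# At large outputs the ranking reverses: eliminating the used market pays.
assert d_h(9) > d_v(9)
```

Which regime the producer prefers thus depends on where its marginal cost curve intersects the demand, exactly as in the MC1 versus MC2 discussion above.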

If the used market does restrict the profitability of the producer there are several ways in which to enhance profits:

 a. Eliminate used market through legislation.

 b. Rent the items instead of selling them.

 c. Reduce the durability of the product.

 All three methods have costs. Eliminating the used market may not be practical. It may not be legal. It may be expensive.

 Renting the product allows the producer to keep complete control over the quantity of the item available in every period. There is a cost involved with renting, however.

The main cost in renting is the monitoring required to make sure that the renter is not mistreating the product. Renters, since they don't suffer the full consequences of mistreating durable goods, have less incentive to care for them properly. This tends to increase the number of repairs that must be made, decreases the product's life expectancy, and lowers its resale value. This is one reason that rented houses cost more than the mortgage-carrying costs to the landlord (of course, large deposits can alleviate this risk to the landlord). The same should hold true for rented versus sold automobiles, except that so often, for the first 3 or 4 years, the purchaser doesn't own the car, and thus someone besides the owner (the bank which made the auto loan) bears some of the costs if the purchaser decides to mistreat the automobile and walk away from the loan.

What does this imply for the producers of goods which can be copied?

Public goods: goods such that one person's consumption doesn't reduce anyone else's possible consumption. Ideas, computer programs, songs, stories, etc. are examples of this type of good.

When people can make copies of originals, they are willing to pay more for the originals. Making tapes of records, copies of software, etc., are all examples of this. A key element here is the number of copies made of each original. If, for example, every purchaser of a record made exactly one cassette for use in an automobile, then the net demand for records would be the vertical sum of the demand for record use alone and the demand for cassettes, and record producers should be able to collect revenues from the use of cassettes by raising the price of records. But if some users made 100 copies and others made none, then the vertical sum would consist of two segments, and it would be hard to collect revenues from those making tapes without charging a price too high to keep the non-copiers in the market.
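The one-copy-per-buyer case can be sketched with made-up willingness-to-pay numbers: the net demand is the vertical sum of the two demands, and a single higher price for originals captures the value of the copying.

```python
# Hypothetical willingness-to-pay numbers (illustrative only): five buyers,
# each valuing the record itself and, separately, one home-made cassette copy.
record_value = [10, 9, 8, 7, 6]
cassette_value = [3, 3, 3, 3, 3]   # every buyer makes exactly one copy

# Net demand for records is the vertical sum of the two demands:
net_wtp = [r + c for r, c in zip(record_value, cassette_value)]

def best_revenue(wtp):
    """Best single-price revenue: everyone whose value is >= p buys at p."""
    return max(p * sum(1 for v in wtp if v >= p) for p in wtp)

print(best_revenue(record_value))  # pricing the record alone
print(best_revenue(net_wtp))       # raising the price to capture the copy's value
```

Revenue rises from 30 to 45, exactly the five copies' value of 3 each. If instead some buyers made 100 copies and others none, no single record price could capture the copiers' value without driving the non-copiers out of the market.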

Site licensing is a form of pricing which explicitly recognizes the extra value in making copies.

Higher journal prices to libraries is another form of this pricing.

A test of this model: Liebowitz 1985

Dependent variable and regressor labels as reported in Liebowitz 1985; t-statistics in brackets.

                                 (1)              (2)
Commercial/NonProfit Dummy     .0065 [1.99]     .0071 [2.14]
Age of Journal                 .65   [4.14]     .578  [3.36]
(additional regressor)                          -.16  [1.01]

R-squared = .17, n = 80


[Figure: Ratio of Book to Journal Expenditures, US Academic Libraries, comparing libraries that price discriminate with libraries that don't.]

Application to Napster

The entertainment industry has always exaggerated the damage that each new copying technology would do to it, from reel-to-reel tapes and videorecorders to MP3s and Napster. Crying 'wolf' too many times, however, shouldn't by itself negate claims that a new technology will harm copyright owners. Napster is one of those cases where the harm is real.

When record companies estimate the harm they suffer from illicit copying activities, they incorrectly assume that every unauthorized copy substitutes for a sale of an original. No less an authority than Alan Greenspan, when he was still a civilian economist with record companies as his clients, was willing to estimate the harm in this manner.

There are two key factors that actually determine whether copying harms copyright owners, however. First is the question of whether the material being copied substitutes for a sale of an original. Obviously, not everyone willing to use a pirated copy of a work would also be willing to purchase an original. The second, more subtle factor, which I first examined two decades ago, is whether it is possible for copyright owners to indirectly collect revenues from the copying activity.

This last point can be illustrated with the following example. If all purchasers of CDs were to make a single cassette, say for use in their automobiles, record producers need merely raise the price of the CD by an amount that roughly captures the additional value consumers receive from making the cassette. This would allow record companies to indirectly capture the revenue from the copying activity. Illicit copying then increases the price that consumers will pay for CDs and record producers are not harmed.

Alternatively, if certain users made numerous copies, and those users could be identified and charged a higher price than other users, the copyright owner might also benefit from the copying activity. This is what currently happens with photocopying in libraries. Libraries pay a price two, three, or even four times as much as personal subscribers for the same heavily copied journals, and this price differential only arose after the introduction of photocopiers.

Note that if this unauthorized copying were eliminated, copyright holders might actually be worse off. In a world with no copying, record producers might find that consumers are unwilling to pay as much for CDs, lowering revenues and profits (it is not clear how many, if any, of the former copiers would purchase legal copies).

The Betamax case, so called because at the time the case was brought VHS had not yet begun its obliteration of the Beta video format, represented another instance where copying was unlikely to harm copyright owners, although for slightly different reasons. Almost all viewing in the early 1980s was of advertising-based over-the-air broadcasts, particularly the big three networks—ABC, CBS, and NBC. Viewers made tapes to time-shift programs for more convenient viewing. Although remote controls made it possible for viewers to fast-forward through commercials, close attention had to be paid to ensure that the viewer wouldn't also skip by the programming. Combined with the fact that the amount of time-shifting had to be small, it is clear that Betamax was not going to harm copyright owners.

Why would the amount of time shifting be small? Because there was too little free viewing time. The average household viewed six or seven hours of TV a day, including virtually complete participation in prime time programming. There was little free viewing time to watch tapes since a family could not both watch a tape and record a program on their single videorecorder.

Thus it was proper to conclude that videorecorders would not harm the revenues of copyright holders. Fortunately, the courts managed to get it right. Several years later, Hollywood learned that by lowering the price of prerecorded movies from $100 to $20, they could sell a ton of them, so that now Hollywood’s sale of videotaped movies generates more revenue than theatrical showings.

Fast-forward to 2000. Napster’s supporters claim that the online sharing of songs is a latter-day Betamax scenario. They claim that Napster users actually purchase more CDs because Napster allows listeners to sample music with which they might otherwise be unfamiliar. Although some such effect undoubtedly occurs, it seems most unlikely that it would outweigh the negative impacts on copyright holders.

Unlike the cassette example mentioned above, Napster does not allow record companies to indirectly capture the value of the copies being made from legal originals since some originals will have dozens or hundreds of copies made and others none. Nor does it seem likely that the amount of copying will be small—there are no time constraints or confusing instructions preventing widespread copying. Finally, copies are likely to serve as substitutes for the purchase of originals in this case. The people making the copies are the very group that was expected to purchase originals (that is why it is not surprising that surveys indicate that Napster users are among the heaviest purchasers of CDs).

Record companies are right to fear Napster. The Internet, however, should prove a boon to them, once they can get the right pricing. As was true in the video example, record companies need to learn that they are currently charging way too much for music downloads. When they learn that it is more profitable to lower their prices, even if it largely destroys record stores, the old distribution methodology will be seen for what it is—primitive and inefficient.


Network Effects

A little background: review the concept of natural monopoly, and the tragedy of the commons.

Natural monopoly: the AC of a firm falls continuously. In an industry where firms have such cost curves, a single firm is likely to become dominant, thus the term natural monopoly. These were the ‘public utilities’.

Define external effect and externality.

Tragedy of the commons: The ‘negative’ externalities that fishermen have on each other cause them to overuse the lake.

Network externality has been defined as a change in the benefit, or surplus, that an agent derives from a good when the number of other agents consuming the same kind of good changes (Katz and Shapiro 1985). As fax machines increase in popularity, for example, your fax machine becomes increasingly valuable since you will have greater use for it.

Sometimes called network externalities, but this is a lazy usage.

Two types of network effects have been identified. Direct network effects have been defined as those generated through a direct physical effect of the number of purchasers on the value of a product (e.g. fax machines). Indirect network effects are “market mediated effects” such as cases where complementary goods (e.g. toner cartridges) are more readily available or lower in price as the number of users of a good (laser printers) increases.

Putting aside definitional concerns, the import of network effects comes largely from the belief that they are endemic to new, high-tech industries, and that accordingly such industries experience problems that are different in character from the problems that arise for more ordinary commodities. The purported problems due to network effects are several, but the most arresting is the claim that markets may adopt an inferior product or network in place of some superior alternative, which we shall investigate below.

Read the material in Liebowitz and Margolis (chapter 5) to understand how our model of network effects shows that getting stuck is possible, but not likely.

1.  Levels of Network Related Activities

The difference between a network effect and a network externality lies in whether the impact of an additional user on other users is somehow internalized. Since the synchronization effect is almost always assumed to be positive in this literature, the social value from another network user will always be greater than the private value. If network effects are not internalized, the equilibrium network size may be smaller than is efficient. For example, if the network of telephone users were not owned, it would likely be smaller than optimal since no agent would capture the benefits that an additional member of the network would confer on other members. (Alternatively, if the network effects were negative a congestion externality might imply that networks tend to be larger than optimal.) Where networks are owned, this effect is internalized and under certain conditions the profit maximizing network size will also be socially optimal. (see Liebowitz and Margolis 1995b.)
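The internalization point can be illustrated with a small numerical sketch; the standalone values, synchronization coefficient, and joining cost below are all hypothetical:

```python
# A hypothetical sketch: private equilibrium vs. socially optimal network size.
# Each user i has a standalone value v[i]; a member of a network of size n
# also receives a*(n-1) in synchronization benefits. Joining costs `cost`.
a, cost = 0.5, 6.0
v = [9, 8, 7, 6, 5, 4, 3, 2, 1]    # potential users, sorted by standalone value

def equilibrium_size(v, a, cost):
    """Users join only if their own benefit covers the cost (uninternalized)."""
    n = 0
    for i, vi in enumerate(v, start=1):
        if vi + a * (i - 1) >= cost:   # benefit of joining i-1 existing members
            n = i
    return n

def optimal_size(v, a, cost):
    """Size maximizing total surplus, counting benefits conferred on others."""
    def welfare(n):
        return sum(v[:n]) + a * n * (n - 1) - cost * n
    return max(range(len(v) + 1), key=welfare)

print(equilibrium_size(v, a, cost))   # the uninternalized network is too small
print(optimal_size(v, a, cost))       # the planner would include more users
```

Only seven users join on their own, because each marginal user ignores the benefit conferred on existing members; a planner, or a network owner who can price accordingly, would include all nine.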

Perhaps surprisingly, the problem of internalizing the network externality is largely unrelated to the problem of choice between competing networks that is taken up in the next section.  In the case of positive network effects, all networks are too small. Therefore, it is not the relative market shares of two competing formats but rather the overall level of network activity that will be affected by this difference between private and social values. This is completely compatible with standard results on conventional externalities. For reasons that we will expand on below, this is a far more likely consequence of uninternalized network effects than the more exotic cases of incorrect choices of networks, standards or technologies.

Network size is a real and significant issue that is raised by network effects.  Nevertheless, this issue has received fairly little attention in contemporary discussions of network externality, perhaps because it is well handled by more conventional economic  models.


The 3 Meanings of Lock-In or Path Dependence

Three distinct forms of path dependence.

First-degree path dependence: Complete knowledge of the future when the decision is made, but at some points we have what appears to be regret and inefficient results. One plans on having a family and buys a large house. Things work out as predicted. When the kids leave, the house is too big for just the parents. But this was predictable, and could not be improved upon. Or, you buy a computer. A year later a better computer comes out, as you knew it would, and you wish you had that one. But on the whole you made the best decision.

Second-degree path dependence is durability in the presence of imperfect information. Information is never perfect. Here you buy the big house but wind up getting divorced, which was not planned. This is real regret, and the results are not what you would have wanted had you known what would happen. But you didn't. Or you buy a computer, and the next year one comes out that is much better than you anticipated; you would have waited had you known. You made the wrong decision, but it was correct given your information. The error is not remediable.

Third-degree path dependence involves remediable error. You know you are going to get divorced but buy the new house anyway, knowing it is a bad idea. You know it would be better to wait, but you buy the computer now anyway.

The failure to distinguish among these three discrete forms of path dependence has led to some unfortunate mistakes. The error here involves transferring the plausibility of the empirical and logical support for the two weaker forms of path dependence (first- and second-degree) to the strongest implications of third-degree path dependence. Although it is fairly easy to identify allocations, technologies, or institutions that are path-dependent in some form, it is very difficult to establish the theoretical case or empirical grounding for path-dependent inefficiency.

First- and second-degree path dependence are commonplace.

Only third degree is new.

It seems impossible to read David or Arthur and not conclude that this third degree lock-in to inferior technologies is the centerpiece of their theories.

A Popular Version of Lock-In

Here is the way the story usually goes:


JANUARY 18, 1996



This week Apple Computer, one of the giants of American technology, announced large losses and a painful reorganization. The story provides reason to wonder why some inventions like Apple's Macintosh have trouble in the marketplace. Business Correspondent Paul Solman of WGBH-Boston reports.


PAUL SOLMAN: In the event you've been on Mars for the past few months, Microsoft, the Tyrannosaurus Rex of computer software, has successfully launched Windows 95. One goal of the new program is to make the IBM type computers it runs, so-called PC's, easier to operate. Microsoft's other goal: To bury rival systems, most notably the one that runs the Apple Macintosh.

MAN: (Apple Commercial) Hey, you want to see some dinosaurs?

CHILD: Yeah, dinosaurs.

MAN: Loading DOS CD into Windows 95.

CHILD: Where are the dinosaurs, dad?

MAN: I'm not sure.

PAUL SOLMAN: Apple has counter-advertised, touting the legendary user-friendliness of its Macintosh, with its Mac operating system.

ANNOUNCER: If you're looking for a computer that's easy to use--

MAN: Where are you going to kiddo?

CHILD: To the Crandells; they have a Mac.

ANNOUNCER: (Apple Computer) There's still only one way to go.

PAUL SOLMAN: Actually, there's not only one way to go in computer operating systems, at least not yet, and Apple's lucky there isn't, for if there were, the one way would probably be PC's like the IBM running on Microsoft Windows, not the Mac, despite the fact that the Mac technology has widely been considered superior to Microsoft for a decade.

SPOKESMAN: Quick Time VR is a brand new technology that we brought out about a year ago, which allows us to capture environments like this with a standard 35-millimeter camera, bring those images to the computer, and have our computer stitch the images together, and you'd get a 360-degree panoramic scene.

PAUL SOLMAN: Apple's struggle to compete with Microsoft-driven PC's may seem like inside baseball for businessmen but it actually provides a key insight for understanding the world of technology around us. Among the most famous quotes in business history is Ralph Waldo Emerson's: "If a man make a better mousetrap than his neighbor, though he build his house in the woods, the world will make a beaten path to his door." In fact, there are better mousetraps than this one, the ultrasonic pest repeller, for example, yet, this remains the standard. Similarly, the Apple Macintosh may be the better mousetrap in computing, yet, the world has beaten a path to the IBM PC and Microsoft. The question is: Why? Well, one useful answer is an idea known as path dependency, i.e., once enough people follow a particular path in technology, that path becomes the standard one on which future technology and products depend. Consider keyboard technology. Using the same text and equally skilled typists, it was demonstrated back on the silent film in the 1930's that you could type 165 words a minute with the keyboard on the right versus 131 on the left, and which is the one we all use? The apparently slower, older one on the left. The Dvorak System on the right claims to demand less of a left hand, less row to row finger hopping, no irksome pinky stretches. The arrangement of the letters seems to be more efficient but almost no one uses it.

DON NORMAN, Psychology Professor: Dvorak in the 1930's did a whole host of human factor studies and made a keyboard that was far superior--too late. Once you have an installed base, once you have tens of millions of people using the typewriter, it's too expensive to change. And for a small improvement in learning and typing speed, it is not worth it.

PAUL SOLMAN: Don Norman, a long-time psychology professor, is trying to forge new paths for Apple, making its technology ever easier for the consumer. As for Emerson's quote about the better mousetrap, he's blunt.

DON NORMAN: Just not true.

PAUL SOLMAN: And it's not true because?

DON NORMAN: Because Herbert Simon had invented this wonderful concept of satisficing. When something is satisfactory, you don't need to have perfection, and so if something is good enough and serves your needs, then people will buy it, and if people find others buying it, then they will buy it, and soon more and more people buy it. And then soon if somebody comes out with a better thing, like the Dvorak keyboard, well, but this one seems good enough, why should I make an effort to switch?

PAUL SOLMAN: It's arguably the same story with every technology, from the keyboard to the paper clip, which Henry Petroski of Duke has studied.

HENRY PETROSKI, Duke University: What we want the paper clip to do is to sit there, preferably not crease the paper, preferably not leave any permanent marks in the paper, not rust. We'd like it not to come off accidentally. Of course, we wanted it to hold tight while it was on the papers. We don't want it to tear the paper or, or rip the paper when it's coming off.

PAUL SOLMAN: The standard Gem clip falls short on each of these counts. Since its invention in the late 1800's, rust-proof, angular, ribbed, and butterfly clips have all challenged the flawed Gem unsuccessfully.

HENRY PETROSKI: People adapt to technologies that have limitations or have shortcomings, and after a while, we adapt so well that we don't notice the shortcomings.

PAUL SOLMAN: Or for that matter the computer. Apple was the first to come out with an easy to use graphic operating system. Click on an icon and voila, the machine responds. Microsoft did develop its own graphic operating system, Windows, for the IBM PC, but years late, and a few features short. So why does the well-beaten path now lead to PC's with Windows and not Macs with the Apple operating system? Partly, it's Apple's own fault. When Apple launched the Mac with its famous 1984 TV ad, it made what is now seen as a strategic business blunder, refusing to license its graphic user-friendly operating system to other manufacturers. By contrast, Microsoft, which owned the software system to run IBM PC's, licensed its technology to all comers. Today, 10 years later, Microsoft has some 85 percent of the market. As more software programs are written exclusively for Microsoft-driven PC's, it becomes more difficult for loyal Mac users to resist the IBM Microsoft path. Software designer Richard Anders.

RICHARD ANDERS, Software Designer: Normally, if you're a developer and you look at the numbers and you see that Apple, depending on the market, has anywhere from 10 to maybe on the high end in educational markets or something like that 20 or 30 percent, when you look at those numbers, you start to think, these are very grim; if I'm going to develop software, I want it to be like Willie Sutton said, where the money is, and the money is on the PC side, where everybody else is.

PAUL SOLMAN: If you go into a computer store today, says Anders, there are seven aisles of PC software written for Windows for every aisle of software written for the Mac.

ANNOUNCER: (commercial) Oh, the things people do to decide between two TV shows they want to watch.

PAUL SOLMAN: Now, there's a recent precedent for the Microsoft-Macintosh battle in which the better mousetrap also didn't win: Sony Betamax versus VHS. Again, Apple's Don Norman.

DON NORMAN: Yes, Beta was superior to VHS, but there was a deadly marketing war going on, where the other Japanese companies banded together to teach Sony a lesson, because Sony was being too arrogant and trying to retain all of the property rights for Beta.

PAUL SOLMAN: Beta was better, but Matsushita, JVC, and the rest had the better strategy, teaming up to set a common standard, which induced more movies to be put onto VHS, more consumers to buy the machines to play the movies, you get the picture. Microsoft has, in effect, done the same thing, promoted a sharing of strategy to create an industry standard and to be sure, it's also marketed like mad, throwing its weight around monopoly-like, some would say illegally, to keep the competition at bay. It's in this context that the new improved Windows 95 is luring more consumers down the Microsoft path, while Apple, as it happens, has been stumbling, with manufacturing delays, batteries catching on fire, key executives leaving, and talk of a failed merger with IBM. Now, with newly-reported losses, the company is actually planning significant layoffs. But, says Apple, all is not lost. The company's counting on loyalty to keep its current customers, innovation to attract new ones.

SPOKESMAN: Let's talk about computer voice synthesis.

PAUL SOLMAN: Moreover, since Apple, unlike Microsoft, produces both the software and the hardware for the computers, themselves, it says it can develop and build new ideas into its machines more quickly and cheaply than the competition.

COMPUTER SYNTHESIZER: My name is Bruce. I am generally considered to be one of the best voice synthesizers in the industry today.

PAUL SOLMAN: Also, Apple's finally sharing, having licensed its operating system to other companies to make cheaper Apple clones, but perhaps Apple's best hope for the future is the Internet.


PAUL SOLMAN: That high speed network of telephone cables and modems connecting millions of computers worldwide. From company headquarters in Cupertino, California, I'm using Apple Quick Time Conferencing software to play tic tac toe on the Internet with Larry Duffy at the jet propulsion lab in Pasadena over a satellite photo of Mars he just sent me.

SPOKESMAN: You've beaten me. I'm overwhelmed.

PAUL SOLMAN: Information sent on the Internet all adheres to a common standard. It doesn't matter whether it's a Mac or PC at the end of the line, and that gives Apple executives like Don Norman hope.

DON NORMAN: We now suddenly have a way that makes it easy to move around the world and it doesn't matter what computer you're using, and if we move that way, and the new Internet is an example of how it happens, then it's a whole new game again, a completely new game, where the best products can compete and can win.

PAUL SOLMAN: Well, Don Norman may be right and then again he may not be. After all, future paths will depend on all sorts of things that haven't yet happened. But for the present, path dependence can explain a lot about how and why the world of technology around us has taken the shape it has and why the better mousetrap doesn't necessarily prevail.

2. The Simple Logic of Third-Degree Path Dependence and Lock-In

If larger competitors have a forever widening advantage over smaller firms, we have entered the realm of natural monopoly, which is exactly where most models of network effects find themselves. Traditionally it has been assumed that the natural monopolist who comes to dominate a market will be at least as efficient as any other producer. This assumption is challenged in the network literature although specifics differ across the many models populating it.

 The mere existence of network effects and increasing returns is not sufficient to lead to the choice of an inferior technology, however. For that, some additional assumptions are needed.

The path dependence literature assumes natural monopoly, and then argues that society often gets stuck with the wrong natural monopoly when it relies on markets. Since network effects are presumed to lead to natural monopoly, these theories dovetail nicely.

The logic of path dependence can be illustrated with the following table, reproduced from Brian Arthur's papers. (We've added the Beta and VHS notations).




Number of Previous Adoptions     0    10    20    30    40    50    60    70    80    90

Technology B (Beta)             10    11    12    13    14    15    16    17    18    19

Technology V (VHS)               4     7    10    13    16    19    22    25    28    31

Assume that consumers have a choice between products based on two competing technologies (Beta and VHS videorecorders, for example). Consumers come one at a time and must choose one of the two technologies (or video formats). Let the benefits that consumers receive from the purchase of a videorecorder be given by the numbers in the above table (benefits to producers are ignored for now). For example, if there are fewer than 11 users of Beta, they each receive benefits of 10, whereas each user receives a benefit of 16 when there are 61 users. Since the benefits increase as the number of adopters of the technology increases, these numbers exhibit the network effects discussed above.

According to this theory, the first consumers, looking at the rewards from choosing Beta or VHS, will prefer Beta to VHS since there is a larger reward associated with Beta (10 for Beta vs. 4 for VHS). As more consumers purchase Beta, the advantage of Beta over VHS continually widens, and so Beta will prove to be the eventual choice of the market. Yet if the ultimate number of consumers is large, VHS is clearly superior to Beta (compare, for example, the benefits to each consumer when the number of adopters is 100). In the terminology of path dependence, we would say that society gets "locked in" to Beta even though VHS is superior. Using a slightly different terminology, it is claimed that the market has "tipped" toward Beta although VHS was better.
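The adoption dynamics can be simulated directly. The payoff schedules below assume the increments in Arthur's original table (Beta starting at 10 and rising by 1 per ten adopters, VHS starting at 4 and rising by 3), which match the figures quoted in the text:

```python
# A simulation of the sequential-adoption story with myopic consumers.
def payoff_beta(n):   # n = previous adoptions of Beta
    return 10 + n // 10

def payoff_vhs(n):    # n = previous adoptions of VHS
    return 4 + 3 * (n // 10)

n_beta = n_vhs = 0
for _ in range(100):  # 100 adopters arrive in turn, each choosing what pays more today
    if payoff_beta(n_beta) >= payoff_vhs(n_vhs):
        n_beta += 1
    else:
        n_vhs += 1

print(n_beta, n_vhs)                    # everyone locks in to Beta
print(payoff_beta(90), payoff_vhs(90))  # yet VHS pays more when adoption is large
```

Every one of the 100 myopic adopters chooses Beta, even though VHS would have delivered 31 rather than 19 per user had the market coordinated on it.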

This is the underlying logic, presumably, for Arthur's concern that progress might come to a halt, since according to his theory, we might remain with Beta even in the face of ever improving alternatives. In other writings we have referred to this as the "Chicken Little Theory."

There are, however, many problems with this story. For one thing, it assumes that each individual consumer has no foresight, but merely makes choices based on his narrow and myopic view of the one column in the table that is the payoff from his purchase. Second, it assumes that the providers of technologies, or technology related goods, have no ability to influence the outcome of the competition between the two technologies. Third, it assumes that consumers get value from a larger number of other users without regard to who these users might be. Fourth, it assumes a particular structure of rewards that is unlikely to occur. Let's examine each of these problems in turn.

It is ironic that in this model, which has been applied to various high-tech products, there is no recognition of foresight. For once foresight is allowed, this particular problem of path dependence goes away. If consumers have foresight, they can easily see that VHS is the better technology in the long run, and they know that all other consumers are aware of this. Some of the familiar features of the market will work to coordinate the outcome. Decision makers will rely on consumer and trade publications to keep up on the characteristics of technologies. Retailers play a role by committing their marketing energies, and to a degree staking their reputations, on the basis of their predictions of these contests.

The assumed lack of any foresight on the part of these decisionmakers is a very serious shortcoming of Arthur's analysis. In a world without foresight, CD players, automobiles, and most any new technology would never get started. CD players at first had almost no disks, CD-ROMs (and computers) at first had very little software, automobiles did not have gas stations, and so forth. Clearly consumers must form some expectation of the future if they are to act at all, even if that foresight is imperfect.

Arthur's story assumes that each decisionmaker constitutes only a single adoption of a technology, that each consumer purchases only one unit. Some large (corporate) customers, however, might be sufficiently large that they can realize the advantage of the superior technology even if, and perhaps particularly if, no one else uses it. Thus fax machines at first were used largely by companies wishing to send information and pictures within a firm. The effect of large customers is to tend to swing the entire market toward the efficient solution.

Even if customers do not have foresight, producers of these products probably do. Producers will have both the reason and the means to influence the outcomes of these competitions between technologies. They can, for example, subsidize or give away their products in order to demonstrate the values of their products or to create positive network effects. In the table above, the amount of wealth that can be created by large-scale adoption of VHS is greater than the corresponding amount for Beta. Since the owners of a technology ordinarily would be expected to appropriate some or all of this wealth, the owners of VHS would have a greater potential gain than the owners of Beta (the table presumes that costs are identical for these products). With any form of rational capital markets, the owners of VHS will be able to enlist allies with deeper pockets than will the owners of Beta. But in the theories of path dependence that have been promulgated, owners of technologies, standards, or products have no such roles to play. Again, this is a remarkable deficiency in an analysis applied to the computer industry.

Surely, firms and individuals have some foresight, even if only imperfect foresight. Imperfection is inevitable, even in markets. But for the reason argued above, where there is a significant difference in the relative advantages of competing formats (or technologies, networks, standards), we would expect the choices made in markets to be the correct choices most of the time. Furthermore, the relevant question for public policy is not whether markets are imperfect but rather whether they are more imperfect than governments. So, even where we identify imperfections in market outcomes, we still must raise the question of whether government can do any better.

Finally, there is another special aspect to this table that is easy to overlook. A plot of the returns in the table would cross: Beta is better when the number of adopters is small, VHS is better when the number is large. Figure 1 is constructed to reflect this: the slopes of the payoff lines differ, with the slope of V being steeper than the slope of B. Without this "crossing" effect, the technology that is chosen first will always be the better of the two and no harmful lock-in can occur. In order for the paths to cross, the network effects, or economies of scale in production, must be much stronger for the technology that is less desirable prior to these network or scale effects. That is, the one that starts off badly must get better faster. The instinct to root for the underdog notwithstanding, there is no reason to believe that this overtaking characteristic is a likely feature of technologies. For the technologies often mentioned in this literature, there is every reason to believe it is very unlikely.

One common assumption that can generate a prediction of inefficient network choice is that the network effect differs across the alternative networks. In particular, it is sometimes assumed that the network offering the greatest surplus when network participation is large also offers the smallest surplus when participation is small. This condition, however, is not likely to be satisfied, since synchronization effects are likely to be uniform. For example, if there is value in a cellular telephone network becoming larger, this should be equally true whether the network is digital or analog. Similarly, the network value of an additional user of a particular video recorder format is purported to be the benefit of having more opportunities to exchange video tapes. But this extra value does not depend on the particular format of video recorder chosen. If network effects are the same for all versions of a given product, it is very unlikely that the wrong format would be chosen if both are available at the same time.

A further restriction in the modelling is that the value consumers receive when another consumer joins a network is undifferentiated, regardless of who the new consumer is. If economists, for example, much prefer to have other economists join their network as opposed to, say, sociologists, then a sociologist generates a smaller network effect for them than another economist does. Such differential network impacts make it possible for economists to form a coalition that switches to a new standard even if the new standard fails to attract many sociologists. This latter point will prove to be of great importance when examining empirical examples of choosing the wrong standard, where large entities such as multinational firms and governments play an important role.

An Example: The video recorder market

Read chapter 6 in Liebowitz and Margolis. Or JLEO paper 1995.

What Does our Model say about third degree lock-in?

See the end of chapter 5 in Liebowitz and Margolis.

The QWERTY Story

Read Chapter 1 in Liebowitz and Margolis or JLE 1990.


What is left of third degree lock-in?

It is natural enough to wonder how the proponents of path dependence and lock-in deal with the fact that their empirical examples all seem to be wrong. The answer is revealed in the following interview with Brian Arthur. When confronted with facts that he cannot answer, he throws dirt at the deliverers of those facts (Margolis and myself). Note that he wants to claim that lock-in is everywhere, even when it is lock-in to perfectly good products. What can the term lock-in even mean in these circumstances? The words themselves imply that whoever is locked in would like to get out but can't. In first degree and second degree path dependence, getting out is not worth the effort, even though at zero cost we would make the change. In third degree cases, it is worth the effort, but it requires coordination that is not achieved for some reason. In this interview Arthur seems to define lock-in to include cases where we don't want to change our current position, even if the costs were zero. By doing so he eviscerates the term so that it no longer has any logical meaning.

Here is The [slightly annotated] Wit and Wisdom of Brian Arthur:

At the end of April, as the Microsoft case approached a new climax, Professor Arthur gave an interview with PreText magazine editor Dominic Gates. For the first time in a public forum he spoke extensively about his theories, his critics, and the Microsoft case.

[Jumping to 'his critics']

Gates: Is the theory of increasing returns still controversial?

Arthur: Absolutely not. This is now completely taken for granted in Silicon Valley. I don't have to go around California telling the Marc Andreesens of the world or the Andy Groves of the world that there are increasing returns. Intuitively, the smart people in high tech knew this all along.

Gates: Do you think Bill Gates realizes this?

Arthur: Absolutely, and has done for a long, long time, independent probably of any academic theories.

I don't know anyone who would describe themselves as a market capitalist, as a typical business person, who would find increasing returns threatening. On the contrary, what we're finding is that these are a body of theories that resonate very deeply with their own intuitions. Folks at Sun Microsystems, or other places, are using these theories. Sun used my theories to launch Java. In return they gave me a high end Sun workstation.

And all the academic battles about increasing returns were over around 1990. That's when the controversy stopped over whether it was correct, and the controversy started as to who had thought of it first.

Gates: So you don't see these ideas in opposition to classical economic theories, the Chicago school?

Arthur: Not at all.

However, some people who are great proponents of Chicago neoclassical economics seem to get uptight every so often in the opinion pages of the Wall Street Journal. The source of the problem is that if I say that markets can lock in to one product or one company, not necessarily the best, [note how he forgets this part of his story a few paragraphs down]  then that's taken as a threat to the whole ideology of capitalism.

The only controversies are ideological ones. I think it's inevitable that any important theory, or any new theory of any importance, does have a trail of flat-earthers behind it, a trail of creationists; people who won't get it and don't get it, for one or other ideological reason.

So there is a rearguard battle being fought between academic economists and, how shall I put it, capitalist ideologues. Two or three years ago the Economist magazine, which can be quite ideological, seemed to be negative towards these ideas. Now it is no longer denying any of this.

Gates: And the Wall Street Journal?

Arthur: Well, the Wall Street Journal itself has story after story of increasing returns and the dynamics of these markets. Only the opinion page of the Wall Street Journal is a lagging indicator of economic thinking.

Gates: So these theories are no threat to capitalism?

Arthur: On the contrary. Markets operate according to diminishing or increasing returns. Those are just like the laws of physics. Markets operate the way they do. Capitalism is a structure that's built on top of those markets, and it seems to me that standard capitalism of the sort that we have now does very well indeed under increasing returns.

But it does tend to make [the more] highly open capitalistic markets, as there are in high tech, seem to be somewhat more unstable. People in high tech know that you can lock-in a market almost before it starts. I don't think that is a threat to capitalism, but it makes for a less leisurely capitalism. And it makes for maybe more intense competition.

Gates: Let's talk about some of your critics. Stan Leibowitz and Stephen Margolis have written a critique, which was the subject of a recent story in the Wall Street Journal [note: this story was not in the editorial section but in the same section that had previously run a flattering article on Arthur.] They claim to debunk the historical basis of path dependence theory, specifically the famous QWERTY story [that the familiar QWERTY arrangement of the keys on a typewriter was deliberately designed in the 19th century to slow typists down, because early manual typewriters tended to jam. Once typewriter manufacturers were locked into QWERTY, an alternative design that allowed faster typing failed to supplant it.] Leibowitz and Margolis call this story a "fable" and the Wall Street Journal refers to it as an "urban legend."

Arthur: It is perfectly demonstrable that we are indeed locked into a single QWERTY keyboard. There are legions of examples of lock-in. I'm not sure even Margolis or Leibowitz would deny that.

Gates: Right. But what they were saying was that it wasn't an inferior technology that locked in, and that the historical story which claimed it to be so was simply not true.

Arthur: It's absurd to think that any theories of increasing returns hinge upon whether QWERTY is better or worse. That is nonsense.

If you shine the appropriate light on it, you could demonstrate that under certain circumstances something that locked in -- like QWERTY -- wasn't so bad after all. I don't know anybody who is saying QWERTY is wonderful, but it's not clear to me that QWERTY is that great.

Take another example in the Wall Street Journal article: Microsoft DOS. I know of no independent computer scientist who thinks that DOS was a wonderful operating system, even when it was produced, though you can find ingenious ways to show that it was in some strange sense superior.

Gates: Well, they claimed that DOS beat out Apple because it was cheaper.

Arthur: One can take anything that locks in and at the time it locks in, normally it's better; that's why people are buying it. It's more convenient, or it's out there, or it's what you run across. But the point is that there could have been something else that might have locked in that, in the long run, may well have been better.

Not so long after DOS came out, the Macintosh [operating system] was demonstrably better. I think that Microsoft itself has acknowledged that fact by designing Windows to look just like it. And if people are saying, "Yes, but DOS was cheaper", well, think of all the wasted hours trying to use the damned thing. In computer science circles, DOS was a joke.

As far as I can see, the Leibowitz and Margolis arguments are ideological arguments for the far right. [Presumably Hitler himself would have liked our arguments.]  I don't see that there is a debate on increasing returns. You can have a debate as to whether what locked in might, under certain lights you shine on it, actually be better than what was locked out. You can make a case that gasoline engines are better than any alternative could have been. But frankly I don't know how to settle that, because you're talking about what might have been versus what is.

Gates: These theories are often discussed via particular examples or counter examples. One that you have cited is VHS versus Beta Max. Leibowitz and Margolis say that there was a good reason why VHS won: VHS tapes could record longer. And so, there was a reason why it locked in; it's not an example of path dependence.

Arthur: It is an example of path dependence.

The question of whether the product at the start was better or worse is moot. Yes, people may adopt VHS because it has a longer recording time. But the point of increasing returns is that if it gets ahead it locks in. Not what is better or what is worse. That's only a point for ideologues and the back pages of the Wall Street Journal. [Putting us in our place.]

Gates: But isn't an important part of your contribution your pointing out that things that get locked in aren't necessarily the best? It's not just to demonstrate lock-in, but to demonstrate lock-in of something that wasn't good for consumers. [this is just repeating what Arthur himself had said a few paragraphs ago]

Arthur: Well, again, you only get excited about that if you belong to the right wing of American ideology.

This notion that the market is always wonderful and perfect is a right-wing ideological idea. [Can you say the words 'Ad Hominem'?] People don't expect that all the friends they have are the most optimal friends. People get married; sometimes it's wonderful and sometimes it isn't. Lock-ins occur; sometimes for the best, sometimes not.

The theory doesn't say that what locks in has to be inferior. The theory says that it's not necessarily superior. [And that is the empirical question - are there any cases where a 'not necessarily superior product', which to normal human beings includes inferior products, wins.]

Gates: That same Wall Street Journal article concluded that there is "an emerging consensus . . . that the path dependence school has yet to come up with the smoking gun it needs to show the market-place locked into a manifestly inferior technology."

Do you have a smoking gun for increasing returns?

Arthur: I find I'm puzzled by all of this [you can say that again] because it's a bit like debating evolution with creationists: "But if you believe in evolution, the inference is that angels must have evolved their wings, and that would upset all of theology." [This is the ultimate put-down; being compared to a creationist far far worse even than being called a right-wing ideologue.] For me it's moot. The onus isn't on me or anyone else, to show that we're locked in to any inferior thing. [Why should he have to prove that anything he says is true? Is he supposed to be a scientist or something?]  The onus is on the opinion page of the Wall Street Journal and the libertarians to show that all things that we're using in the economy are not just the best they could have been at the time, but are the best that could possibly have emerged. [So let me understand this. He has a theory whose only novel component is saying that we might get stuck with second rate products, but it is not up to him to show that it is ever true, requiring but a single case, but up to his critics to demonstrate that it is always false, requiring an infinite number of cases.]  Nobody in computer science believes that about DOS. As for the QWERTY keyboard, if Margolis and Liebowitz can prove it's the best, my hat is off to them.

Gates: Let me throw at you some more of these free enterprise think tank critiques of your theories. Clyde Wayne Crews went beyond saying that lock-in to inferior goods was a myth; he claims that lock-in is a myth. The examples he cited were: color TVs did supersede black and white; CDs did replace vinyl records. In another piece, Robert Levy of the Cato Institute, added a couple more examples: Word Perfect once looked unassailable as a word processing product; Lotus 123 once had no competition in spreadsheets. All of those actually failed to lock in and exclude the competition.

Arthur: Not at all. They all locked in, but then the next wave of technology took over. We were indeed locked in.

The fact is, technology comes in waves. No one I know who talks of increasing returns says that lock-in is forever. We are locked in to English, temporarily. In 500 years time it'll be a different language. Three-hundred years ago people were locked into Latin as the international means of discourse. No one said a lock-in is forever. In fact, it's taken for granted in high tech that lock-ins typically last anywhere between a year or two and five years.

Let me give you a very specific example here again. Netscape, as you know, had a heavy lock-in in the browser market. And it wasn't dislodged by means of a new wave of technology: no new software product came along to supplant the browser; instead it was steamrollered aside by the Microsoft juggernaut, Internet Explorer.

Gates: [Pointing out Arthur's characteristic hyperbole.] But it hasn't exactly been steamrollered out of the way. It still actually has a bigger share of the market than Internet Explorer.

Arthur: Well, you can certainly claim that its lock in isn't as heavy as it was two years ago. I'm just saying that a lock-in is only good until the next wave of technology, until the game changes. And even if the game doesn't change -- it didn't with the browser market -- if you have enough guns, you can dislodge the lock-in.

Gates: [trying to figure out what Arthur could possibly mean by lock-in] Isn't lock-in just another word for standardization? Britain and the U.S. drive on different sides of the road. Wouldn't it be better if they both drove on the same side and you only had to make one kind of car? Similarly, the European Union has a single cell phone standard and the United States has three incompatible technologies.

What's wrong with standardization?

Arthur: Increasing returns are about the dynamics of markets. If a market locks in to something, it's not necessarily the best; on the other hand, as you were saying, there may well be advantages to locking in to a single standard. So any theories of increasing returns aren't necessarily pro- or anti-Microsoft. Under increasing returns, you can lock into a single standard and that might have enormous advantages.

Judging the pros and cons of increasing returns markets is case specific. Let me give you one example. If a market, say, software, locked into a single standard, say, Microsoft Windows/Explorer, you could argue that there's some advantage to that. It would be like having a single railway gauge all the way from Calais to Moscow a hundred years ago, so you didn't have to change trains at each border.

So my answer is yes, there are many advantages to increasing returns, and certainly one of them is that we can use a certain standard. Basically the entire Internet is the result of a telecommunications/computing standard: TCP/IP. The existence of that standard made the World Wide Web possible. So yes, there are advantages in standardization.

Increasing returns are in a particular industry. They're either present or they're not. I want to get my point very clear on this. Increasing returns have to do with how markets work. Whether that is good or bad is somewhat case specific. [Except that he can't find any cases where it is bad that turn out to be true]


Economics of the Internet


The value-profit paradox, the Cruelty of Competition and the drivers of success.

The Internet is going to create great value. That is not in dispute. Consumers are going to be better off as they voluntarily adopt this new technology. Consumers, after all, wouldn’t make the switch unless they were better off.[i] Everyone recognizes this creation of value, which is why firms have been stampeding to get a piece of the action.

Does this mean that producers must gain wealth as this new technology filters through the economy? The commonly accepted answer is “yes,” and that is myth number 1, because the correct answer is actually ‘no’.[ii]

Notice that I am not saying that no producers will do very well, only that the typical or average producer need not do well even though the market is growing. This is what happened to DRAM memory chip manufacturers, who earned very low profits even in the face of ever-increasing sales of computers and an ever-increasing appetite for memory.

This seeming paradox is due to the fact that value creation doesn’t necessarily get converted into wealth creation for producers. If there are few enough producers, a situation commonly known as monopoly, much of the value will go to producers. If there are too many producers, much of the value will go to consumers.
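A standard textbook Cournot model, with purely illustrative parameters (this is a generic model, not one from the course materials), makes the point concrete: as more producers enter, each firm's profit and total industry profit fall, and more of the value created flows to consumers:

```python
# Textbook symmetric Cournot model (illustrative parameters, not data):
# linear demand P = A - Q, constant marginal cost c. With n symmetric
# firms, each earns ((A - c) / (n + 1))**2 in equilibrium.

A, c = 100.0, 20.0  # demand intercept and marginal cost (assumptions)

def profit_per_firm(n):
    """Equilibrium profit of one of n symmetric Cournot firms."""
    return ((A - c) / (n + 1)) ** 2

def industry_profit(n):
    return n * profit_per_firm(n)

for n in (1, 2, 5, 20):
    print(f"n={n:2d}: per-firm={profit_per_firm(n):8.1f}  "
          f"industry={industry_profit(n):8.1f}")
```

With one firm (monopoly) the industry keeps most of the value; with twenty firms, industry profit is a small fraction of the monopoly level even though total output, and hence value to consumers, is much larger.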

This is not very different from the odds of striking it rich in a gold rush. Those odds are much greater when few people know about the gold than when everyone knows about it. This simple point seems almost entirely overlooked in today’s overheated market, but it has strongly negative implications for profits. Today, just about everyone is headed for the fabled gold out there in the ether.

It is also important to understand that profit generation in product markets works just the opposite of stock market profits. In the case of stocks, the more people that jump on the bandwagon, the higher the stock price goes and the greater everyone’s profits. For firms competing with one another, the more producers that enter the industry, the lower everyone’s profits turn out to be. The willingness of the capital market to fund untested Internet companies, and the publicity and expectations surrounding the Internet, augur poorly for the likelihood that these companies will do well in the real market.

The divorce between value creation and dollar generation can be traced to a famous analysis known as the diamond-water paradox. The meaning of economic value, and how markets translate (or fail to translate) such value into revenue, profit, and wealth is at the heart of this issue. 

Water (or air) can be used to illustrate the example of a product that provides enormous value but very little profit or revenue.[iii] The very abundance of air and water makes it impossible to generate wealth in spite of the great value created. Therefore, the creation of great value is not by itself the key to producing profits. Creating value when there is little competition, on the other hand, is the key to creating wealth. Diamonds, unlike water, provide little value but command high prices, profits, and revenues. It is the difficulty of producing diamonds, and the resulting lack of supply and competition, that leads to their high price.
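The diamond-water arithmetic can be sketched with a linear demand curve (a hypothetical example; all numbers are assumptions). The abundant good generates far more total value but far less revenue:

```python
# Illustrative diamond-water arithmetic (all numbers assumed).
# Linear demand: willingness to pay for the q-th unit is P(q) = A - B*q.

A, B = 100.0, 1.0  # demand intercept and slope (assumptions)

def total_value(q):
    """Area under the demand curve up to q: total value consumers receive."""
    return A * q - 0.5 * B * q ** 2

def revenue(q):
    """Price of the marginal unit times quantity sold."""
    return (A - B * q) * q

# 'Water': so abundant that quantity is pushed near where price hits zero.
q_water = 99
# 'Diamonds': scarce, so only a few units reach the market.
q_diamond = 10

print(f"Water:    value={total_value(q_water):8.1f}  revenue={revenue(q_water):8.1f}")
print(f"Diamonds: value={total_value(q_diamond):8.1f}  revenue={revenue(q_diamond):8.1f}")
```

Here the abundant good creates roughly five times the total value of the scarce good, yet earns only a small fraction of its revenue: value goes to consumers as surplus rather than to producers as wealth.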

Producers understand this logic, which is why we see farmers lobbying to make it more difficult to sell agricultural products, taxicab owners making it more difficult to own a taxicab, doctors trying to impose barriers restricting the training of new doctors, and numerous other examples. In each of these markets strenuous attempts were made to reduce entry, with the purpose of raising the prices and revenues of the sellers in those markets. The fact that it costs $250,000 in New York for a piece of paper that grants permission to own a taxicab reveals the success of that program in generating profits. The taxi market is both fascinating and a useful illustration of the impact of entry on competition and profits.

The general rule, then, is for entrepreneurs to go where consumers swarm but to avoid going where producers congregate. This rule will hold equally for Internet firms. Nothing about the Internet or network effects alters this strategy.

Because free entry is a current characteristic of the Internet, I expect profits will be driven down to an ordinary level in the long run. In the shorter period of the next few years, however, the question is whether the current investment is large enough to meet the coming demand, or whether it is too small or too large. I do not claim to know the answer to this question. But I will discuss the size of the investments in Internet markets—by IPOs, venture capitalists, and traditional firms. The enormous size of these investments suggests that short-run profits may be negative or at least below normal.


The Internet creates value by reducing the costs of transmitting information. That, in a nutshell, is it. I put it this way not to belittle what the Internet accomplishes. After all, automobiles merely lowered the costs of transportation.

 This reduction in transmission costs, while creating value, will reduce the ability of firms participating on the Internet to create brand name loyalty and make it difficult for them to take advantage of consumer ignorance and inertia. It will also make it more difficult for price spreads to exist, and for firms to engage in differential pricing. These factors will affect profitability—negatively.

Many new products associated with the Internet, such as computers and software, have a property known as network effects—the product becomes more useful to individual consumers the more other people there are using it. Everything else equal, network effects should lead to winner-take-all markets. Many computer products, such as software and central processing chips, seem to have winner-take-all characteristics, so we find ourselves with one dominant operating system (Windows), one dominant spreadsheet (Excel), one dominant financial package (Quicken), one dominant chip maker (Intel), and so on.

But any factor that causes large firms, solely because of their size, to have an economic advantage over small firms, will tend to lead to winner-take-all results, not just network effects. Network effects are often given credit for the winner-take-all characteristics in the aforementioned industries. Nonetheless, there are two other possibly more important factors that can also lead to winner-take-all results—economies of scale and instant scalability. Economies of scale occur when large producers have cost advantages over small producers, solely because of their size. Instant scalability is the ability of a firm to meet market demand in almost no time, tending to cause any favored product to get the lion’s share of the market.

It is commonly thought that most firms operating on the Internet have network effects, because the Internet is a network, and that they are therefore winner-take-all. For example, Michael Mauboussin, chief investment strategist at Credit Suisse First Boston, was quoted in the Wall Street Journal as saying: “Most of these [technology companies] are winner-take-most or winner-take-all markets.”[iv]

This is myth number 2, and it is due to a misunderstanding of network effects. Many Internet companies, when properly analyzed, are seen to have few if any network effects—Amazon, Etoys, PeaPod, and most other Internet retailers have no network effects to speak of. Whether we are talking about selling sirloin steaks, Furbys, or recordings of Elvis, the value of the retailing services to individual consumers bears no relationship to the number of consumers serviced by the online sellers.

Whether these firms have the characteristics of economies of scale or instant scalability depends on the specifics of their products and the manner in which they use the Internet. One has to examine each industry on a case-by-case basis to determine whether winner-take-all is likely, and I will provide the tools to do so by going through several examples such as Amazon, Expedia, Priceline, and so forth.

If there were few differences in product quality, and if consumers were relatively uniform in their tastes, winner-take-all markets might also be first-mover-wins markets, since under these circumstances there is no reason for consumers to shift from the first firm to later competitors. Although this is the main subject of chapter 4, I set up the theory here since these two concepts are so closely related, sometimes even being mistaken for one another.

There is a stronger claim made by some theorists that many winner-take-all markets will also be first-mover-wins markets even when there are quality differentials. In this view, the initial entrant gets the largest market share and network effects propel the firm forward. This is supposed to be true even if later firms have a superior product, with the term ‘lock-in’ being used to describe this situation. A few famous examples have been used to support this view: the QWERTY typewriter keyboard and the VHS VCR. The supposed object lesson for firms is to get to market first and ignore relative quality, at least up to a point.

My research with Stephen Margolis (which formed the subject of my book with him), however, found none, nada, nothing in the way of support for claims of lock-in; little support for first-mover-wins results; but strong support for winner-take-all results. That explains why Altair, VisiCalc, and Ampex—the first firms to produce PCs, spreadsheets, and VCRs respectively—are not the leaders today. It also explains why Excel and VHS have such large market shares.



Might the Tortoise Win the E-race?

This is the subject of my first column in CIO magazine, co-authored with Margolis

Firms operating in markets with first-mover-wins characteristics should exert immense effort to gain market share. In these markets, free entry and competition can be expected to have little impact on profits, at least for the market leader, because small competitors are always at a severe disadvantage.

It is often asserted that being first is of paramount importance in the Internet age, far more important, say, than for brick-and-mortar industries. For example, the famed Morgan Stanley stock-market analyst closely associated with the Nasdaq and Internet stock run-ups, Mary Meeker, said in a 1997 report: “Our Internet team thinks first-mover advantage for Web retailers may be important. The retail group, by contrast, doesn’t think being first matters much, since barriers to entry will likely remain low on the Web.”[v] The view of Morgan Stanley’s Internet team reflects myth number 3.

This is not to say that no Internet markets are first-mover-wins. Even though some Internet markets have this characteristic (AOL messaging, Geocities), most do not. I will go through several cases to provide the reader with the tools to answer this question for any particular case they might be considering.

For the most part, online retailing will not have the characteristics of winner-take-all or first-mover-wins. Most online retailers will not exhibit characteristics of network effects or instant scalability. Economies of scale, on the other hand, could be important, but there is little reason to think that brick-and-mortar firms in the same industry would not possess equivalent economies of scale.

Take the case of Amazon, the firm most famous for its strategy of forgoing current profits in order to establish its brand name and produce a large market share, a firm willing to lose fifty cents for each dollar of sales in the name of market share growth. Is this a smart move? Does online bookselling exhibit the economic characteristics that will lead to winner-take-all or first-mover-wins?

The creation of the web site is a fixed cost. This component of cost, therefore, exhibits economy of scale effects since the average website cost falls as output increases. But warehousing, shipping, customer relations and personnel costs change as output changes. These other costs are likely to swamp the cost of creating the web site. Therefore, the fixed cost component will be too small to dominate Amazon’s overall average costs.
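A rough sketch of this cost structure (the dollar figures are invented for illustration, not Amazon's actual costs) shows why a fixed web-site cost buys only weak economies of scale once variable costs dominate:

```python
# A rough sketch (all cost figures hypothetical) of why a fixed web-site
# cost yields only weak economies of scale when variable costs dominate.

FIXED_SITE_COST = 5_000_000     # one-time web-site development (assumption)
VARIABLE_COST_PER_ORDER = 20.0  # warehousing, shipping, service (assumption)

def average_cost(orders):
    """Average cost per order: spread fixed cost plus per-order cost."""
    return FIXED_SITE_COST / orders + VARIABLE_COST_PER_ORDER

for orders in (100_000, 1_000_000, 10_000_000):
    print(f"{orders:>10,} orders: average cost = ${average_cost(orders):.2f}")
```

Average cost falls with output, but it flattens out near the variable cost per order, so the cost advantage of a giant seller over a merely large one quickly becomes small.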

Network effects for Amazon are also very limited—things like product reviews by users, purchase circle information, and little else. Product reviews have network effect characteristics because they make the retailing activity more valuable to users, and the number of product reviews depends on the number of other users. But product reviews are likely to be of only modest value to most users. I will also explain ways in which Barnes and Noble could counteract these effects without needing as large a base of users.

Instant scalability is certainly not a characteristic of Amazon’s business—it cannot increase output at a moment’s notice the way a software company can produce additional CDs. Amazon’s winner-take-all characteristics, therefore, will be largely limited to those enjoyed by brick-and-mortar booksellers.

Amazon, therefore, will have characteristics very similar to brick-and-mortar sellers. If brick-and-mortar bookselling is not winner-take-all (and for all the bookstore agglomeration in recent years, Barnes and Noble and Borders each hold only about 10% of the book retailing market), then online bookselling will not be winner-take-all either. Amazon’s generation of enormous losses may have been totally wasteful.

As we saw, the general rule is for entrepreneurs to go where consumers swarm but to avoid going where producers congregate. Probably the leading strategy allowing firms to resist competition (other than petitioning the government to restrict entry) is to produce better products, since that is a hard strategy for other firms to imitate. This conclusion is more than just anecdotal: a study I conducted for McKinsey in 1999 will come in handy here.

In that study I examined the financial performance of firms in twenty industries for which product quality ratings existed. There was a very strong relation between producing the best quality product, earning above normal profit, and generating high stock market returns. Since PC manufacturers, software producers, and web site portals were all included in the study, it will provide some specific cases to support this claim that being better is most important, even in high tech markets. For example, in personal computer production, being first didn’t count for much. Dell didn’t achieve its success by being first, but by having better performing products needing fewer repairs. Packard Bell gained a large market share with low prices, but was plagued with poor quality and service, and essentially went bankrupt. Similarly, Yahoo not only was first, but also was a higher quality portal than its competitors, and that is why it is one of the few profitable web portals.



Margins and Profits on the Net:

In the long run, virtual stores have certain advantages and disadvantages relative to brick-and-mortar retailers. The main advantage, the lack of physical storefronts, should translate into lower costs. It has often been presumed that these lower costs will translate into above normal profits for a lengthy period of time.

This is myth number 4.

In fact, these lower costs will lead to lower margins in a competitive environment. Even Internet skeptics, such as the Perkins brothers, do not realize that the margins will be smaller, not larger, for the online versions of retailing businesses.

The key is to understand that online retailers are going to compete mainly with other online retailers, not brick-and-mortar retailers. That is because consumers will segment themselves into those who prefer virtual and those who prefer brick-and-mortar retailing.

Brick-and-mortar retailers will coexist side-by-side with online retailers, just as mail order has coexisted with brick-and-mortar retailing. Therefore, the cost advantage of online retailers over brick-and-mortar retailers is largely irrelevant for profitability, just as discount houses are not necessarily more profitable than full service providers.

In the long run, entry (or exit from an overpopulated Internet marketplace) will return profits to a normal level. A normal level, however, implies very small margins for virtual retailers, since their investment relative to sales is even smaller than for brick-and-mortar retailers. This low investment relative to sales is why brick-and-mortar grocery stores have always had very small margins.
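This relationship can be sketched with a few lines of arithmetic. Under free entry, profit settles at a normal return on invested capital, so the margin on sales is roughly the required return times the ratio of investment to sales. The dollar figures and the 10% required return below are invented purely for illustration.

```python
# A "normal" profit is a normal return on investment, so the implied
# margin on sales = required_return * (investment / sales).

def normal_margin(required_return, investment, sales):
    """Profit margin on sales implied by a normal return on investment."""
    return required_return * investment / sales

# Brick-and-mortar grocer: hypothetical $10m invested, $50m annual sales.
brick = normal_margin(0.10, investment=10e6, sales=50e6)   # 2% margin

# Virtual retailer: hypothetically half the investment for the same sales.
virtual = normal_margin(0.10, investment=5e6, sales=50e6)  # 1% margin

print(f"brick-and-mortar margin: {brick:.1%}")
print(f"virtual retailer margin: {virtual:.1%}")
```

The lower the investment needed to support a dollar of sales, the thinner the margin that competition leaves behind, which is the point being made about virtual retailers.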

I will provide an examination of margins for brick-and-mortar grocery stores, book retailers, drugstores, and other retail outlets that are moving to the Internet. This should be an upper bound for the margins likely to be earned for the Internet versions of these retailers.

During their startup phase, new and growing markets are frequently more profitable than older established markets. This is because demand in these markets tends to outstrip supply, although capacity eventually catches up with demand. But it need not work out this way, particularly if suppliers over-anticipate demand and in so doing oversupply the market, leading to below normal profits during this startup phase. In either case, however, the market will eventually shake out and establish its own profit level that will depend on the level of competition and the variations in efficiency and quality.

It is hard to imagine that the current rush to the Internet is not indicative of over investment. Surely, the immensely large negative profits generated by most Internet retailers appear to be more than just normal startup costs, and this is before the costs of the massive 1999 Christmas advertising season have been incorporated.



The Ubiquity of E-tailing?

“Creation of New Distribution Channels Create Opportunities for Retailers. The Internet represents the potential creation of the greatest, most efficient distribution vehicle in the history of the planet.”                                    Mary Meeker, Morgan Stanley

"Economies and interest rates can come and go, but every business over the next few years must aggressively become an e-business."                                    Robert Austrian, Banc of America.[1]

The last few years have seen numerous Internet companies formed to sell everything from airline tickets to dog food. Everything, it seemed, was going to be sold over the Internet. The opinions expressed in the quotations above have been taken quite seriously. This claim that the Internet is a good vehicle for selling all products is myth number 5.

Consumers like to see and touch many of the items they buy. They are used to and tend to demand instant gratification. They also like to save money. They like to avoid lines. All of these are somewhat contradictory desires, and only some are better met by virtual retailing.

The Internet does provide some advantages to consumers: a large selection can be offered, there are no lineups at the register, and pricing is perhaps lower. Yet there are also many disadvantages. You cannot touch, smell, squeeze, shake, or feel products on the Internet. Transportation costs are higher, delivery is less than immediate, and the Internet's current status as sales-tax-free is likely to be short-lived.

Some products, such as airline tickets or stocks, can easily move to the web since these products have no disadvantage being sold on the web – no transportation costs, no examination required, no instant gratification. In these cases the Internet is the natural retail outlet. We would also expect software, music, videos, and other ‘digital’ intellectual products to be sold over the web.

More interesting are the online groceries. They are not going to replace brick-and-mortar groceries any time soon. The items they sell are bulky relative to their value, making them poor candidates for economical delivery. Perishable and frozen items cannot be left on the porch, particularly in the summer, thus requiring pinpoint delivery times, which is a necessarily more expensive form of delivery. Various items, such as fruit and meat, are experience goods in the sense that consumers like to look at or squeeze them. There are compelling economic reasons why grocery delivery was always a small niche market, and the Internet does nothing to change those conditions. Firms such as Webvan, however, are trying to change some of the underlying economics of warehousing and transportation costs by automating warehouses and developing special trucks. It is conceivable that this could work, but it seems most unlikely, and there is no reason in principle that it couldn’t have worked using fax or mail.

This analysis can also be applied to other goods, and will explain why most clothing, automobiles, furniture, prescription drugs, and many other products are not likely to migrate to the web. Automobiles are an interesting special case, particularly since there is a web of state laws protecting current dealers from competition. I plan to discuss many of these markets in some detail in order to provide an understanding of the strengths and weaknesses of the Internet as a primary conduit of exchange for these products.


Can Advertising Revenues Support the Net?

Currently, the market appears to have rejected subscription fees, since many content providers that had tried subscription fees (Slate, Salon, TheStreet, Microsoft Investor) have reverted to covering their costs through advertising revenues alone. The view that advertising revenue alone can support numerous Internet sites is myth number 6.

This reliance on advertising ignores the history of other eyeball-based media, such as magazines and newspapers. Advertising-based over-the-air television is a fluke due to its technological inability to charge a subscription fee, and it is being rapidly surpassed and replaced by cable networks, which use a dual revenue stream. A combination of subscription and advertising revenues is almost certain to replace pure advertising as the revenue model on the Internet because a dual revenue system has many advantages.

It is also unlikely that sufficient advertising revenues could be generated to support all the sites counting on it. First, the audience (measured in total viewing-hours) is not large compared to television, nor is it likely to be terribly large until television migrates to the net. Current estimates of time spent on the web average 30 minutes per day for the typical user, compared to 3 hours for television viewing.

Second, Internet advertising will remain less effective than television advertising as long as it is so easy to avoid. If it becomes intrusive, not allowing the Internet user to move forward until the advertisement is viewed, there will likely be a backlash from users.

Third, advertising budgets are not terribly malleable, and Internet advertising will have to come largely at the expense of other media. Taking away share from other media will become increasingly difficult. Although Internet advertising is very good at segmenting the population according to tastes, these ‘narrowcast’ messages will be insufficient to support mainstream content.

 One area where Internet advertising will shine is classified advertising, a very large advertising market. The sites that carry classifieds are quite specialized and do not support content provision. Auction pricing may be most common on some of these sites.

Finally, competition for eyeballs will drive up the price of content. Television pays viewers to see the advertisements by providing costly-to-produce programming for free. If advertising-based sites such as Yahoo earn above normal returns, entrants will compete by providing more expensive (and/or higher quality) programming, and eventually profits will only be normal. We are already seeing a primitive form of this with contests and giveaways intended to lure viewers to sites.



Auctions –  Back to the Bazaar?

The success of companies sponsoring auctions, such as Ebay, Yahoo, and Priceline, has caused some commentators to suggest that auctions are going to play an increasing role in future sales. For example, Clay Shirky, a professor of media studies at Hunter College, recently opined in the pages of the Wall Street Journal that “the real importance of the name-your-price model... [is that it is] a harbinger of a revolution being wrought by the information economy: the disappearance of fixed retail prices.”[vi]

This is myth number 7.  In fact, the Internet is going to have the exact opposite effect. Instead of charging different prices to each customer, Internet retailing is going to reduce the current variability in prices.

Historically, bargaining was common until the very modern period, and it worked to the seller's advantage. The high cost of time and low cost of information made it uneconomic for sellers in modern economies to engage in this type of activity, except for the most expensive items, such as automobiles. The Internet is not going to send us back to the bazaar.

The increased availability of price information made possible by the Internet will make it increasingly difficult to sell new, homogeneous products at differential prices. It will just be too easy for consumers to compare prices, and to know what is the best available price for a product.

Sellers, on the other hand, will only wish to engage in auctions if they can receive a price at least as high as is available through traditional outlets. But buyers, aware of the lower prices elsewhere, will not pay higher prices in auctions. This will keep auctions from becoming a dominant retailing format unless the entertainment value of auctions is such that consumers are willing to pay a premium.

Interestingly, in the current excitement brought about by auctions, buyers have been paying more for a large percentage of auctioned goods (though less than half) than they would have paid for the identical items sold at retail. However, as this information becomes more widely known (through outlets such as Consumer Reports), consumers will become more wary of auctions, and auctions are likely to be relegated to those items for which they make the most sense.

Auctions are a good way to sell out-of-season, clearance, and one-of-a-kind merchandise, and for these reasons may largely replace brick-and-mortar discounters, who have tended to specialize in such merchandise. Net auctions are also likely to displace the classified advertising market, since the Internet allows easy searching and is national or international in scope. Markets that were too thin at the local geographic level (meaning that too few buyers or sellers existed to ensure that the price settled at a reasonable level) can function more efficiently on the Internet.


Internet Stock Mania

It has been repeated numerous times that traditional valuation techniques cannot be used when discussing Internet companies. This is myth number 8.

The Internet stock market at the turn of the twenty-first century is, in my opinion, a once-in-a-lifetime event, and will go down as one of the great financial bubbles. American adults may never again see these levels of valuation placed on unproven concepts. Imagine how foolish it will look just a few years down the road for investors to have put so much money in the hands of youngsters with little or no business experience.

The usual caution associated with investing in startup firms has been thrown out the window, due to the first-mover-wins mentality. The IPO market has been bringing ever less seasoned firms and ideas to market. Tracking stocks appear to exist merely to take advantage of the bubble.

What was at one time merely an amusing little market out of control has become larger and larger. The market capitalization of Internet companies is probably in the vicinity of a trillion dollars, which makes it large enough to start having an impact on the economy as a whole should the market bubble burst, as opposed to slowly deflating.

Japan had a similar market in its real estate when the nominal value of land in Japan surpassed that of all of North America and Europe combined. Although that seems absurd in retrospect, analysts then, as now, were able to put forward reasonable-sounding explanations for why it was rational.

The bottom line on all this is that many investors in dotcom stocks are going to be left holding worthless shares. The Perkins brothers’ very good book underestimates the problem because they assume that margins for Internet companies will match brick-and-mortar margins, which, as I discuss in chapter 5, overstates the margins. They think that the overall bubble is twice the ‘true value’. I suspect that it is considerably more inflated than that.

I don’t want to dwell on the stock market aspects for fear that it will overshadow the rest of the book. But the implications are too important to not spend some time discussing the valuation of some of the major companies such as Amazon, Yahoo, AOL and so forth. The material in the previous chapter will make it easier to discuss these valuations cogently but without taking up too much space.

The true cost of an Internet melt down is that it might reverse other beneficial trends in the economy—participation in the equity markets by large numbers of Americans, a healthy respect for markets, and a healthy disrespect for government intervention in markets. One of my goals in writing this book is to prevent the bubble from growing even larger and reducing the chance that an investing calamity would provide the excuse for the government to start playing a larger role in the everyday economy.

Chapter 1: What, if anything, is different about the new economy?

One might get the impression from the previous chapters that there really is nothing new or different about the Internet, or how business is conducted on the Internet. That is not quite accurate.

Because the Internet enhances the transmission of information, the role of information is enhanced. Information, as a product, has some very different characteristics from more traditional goods that can be touched, used up, and easily defined.

Because information is not ‘used up’ when it is ‘consumed’, the most basic and traditional economic models, which assume an ‘opportunity cost’ for all actions, no longer apply to the production and distribution of information. The information contained in a book is not used up, although the book itself can be. This is not to say that scarcity doesn’t exist, as has been incorrectly claimed by several commentators, including Kevin Kelly, but rather that additional copies of these scarce goods are no longer scarce.

Although this type of good (known as a public good in the economics literature) is different from typical goods, it is not new, and it has been analyzed at some length. The reason that this has little to do with most Internet companies is that most of these companies are not selling information per se. Certainly, retailers aren’t.

Information sellers have traditionally been newspapers, radio, television, book and magazine publishers, schools, and so forth. The Internet basically provides a new transmission mechanism, and provides tools to make the creation of knowledge easier, but doesn’t by itself create any new knowledge. And because bandwidth is limited, the Internet is not a public good.

So the Internet may help make information relatively more important, but this is a somewhat indirect impact, and not something that will generate revenues for Internet companies in any clear manner.


Software as a possible incubator for lock-in, and the Microsoft antitrust case

DoJ Claims in Microsoft case

     Microsoft exercises monopoly power.

     Network Effects “lock-in” (enhance) its monopoly power.

     Netscape Navigator a threat to Windows.

     Microsoft used its monopoly power to exclude Netscape from the market.

     Including the browser in the operating system is an illegal tie-in.

Our Analysis of Software Markets

     Three main questions.

   Is there any evidence that network effects entrench market leaders?

   How do software makers (and Microsoft)  achieve success?

   Has Microsoft exercised monopoly power (harmed consumers)?

Network Effects

     Possible Characteristics of network markets.

   Winner-take-all result (when tastes are fairly homogeneous).

   Lock-in, or inertia. Discussed on next slide.

   ‘Tipping’ might occur. After some critical point, market share suddenly accelerates in growth

   Instant scalability: the ability to increase output at will by using non-specialized duplicating machines.

   Increasing prices as networks grow (this aspect has been neglected).


Lock-In, Inertia, and Antitrust

     Lock-in now apparently viewed as a barrier-to-entry.

     Franklin Fisher, government’s economics expert claims: “The barriers to entry in the present case include two phenomena known respectively as economies of scale and network effects.”

   Do they really protect market leaders?

   Totally untested.

   We examine actual causes of market success in software.


Testing for Lock-in and Inertia in Software

Do better products win or lose in software markets?


Examination of Market Success & Quality

     What to look for on the next set of slides

   Note the relationship between market share changes and product quality.

   Note the dramatic and rapid changes in market share. No sign of inertia, although we don’t have good benchmarks.

   Note the lack of ‘tipping’ points.

   Book is far more detailed.

Read chapters 7 & 8 in Winners, Losers and Microsoft.


Findings on Quality:

     Product quality is key to success.

   Inferior products lose market share amazingly rapidly. No evidence of lock-in, inertia, or protected monopoly.

   Success seems to come only to the  #1  product. Avis would be in trouble.

   Price appears to play a role for consumer products.

   Microsoft was only successful when it produced better products at lower prices.


 The question is: when does price cutting go beyond normal competition and enter the realm of predation? Predation is defined, in general, as one firm driving others out of business so as to monopolize the industry. There is no universal agreement among economists or the courts on how to discern acts of predation.

Utah Pie was an important case in that debate. In that case a dominant firm in a local market was protected from price cutting competitors by the Supreme Court's interpretation of Robinson-Patman. The decision was quite controversial.

In the last 30 years researchers have re-examined this issue and now generally conclude that predatory pricing is not a very intelligent way to remove rivals from the market, and actual historical cases are considered quite rare. [See McGee] This thinking has worked its way into the legal system. A reasonable case can be made that firms would never cut prices below their average variable costs, since a firm that cannot even cover its average variable costs would earn higher profits by not producing any output at all.




 The reasoning here is quite simple. Costs can be apportioned into fixed and variable components - those that do not change as output changes and those that do. Average cost in figure 2 is assumed to have the standard U-shaped curve. Average variable cost differs from average cost only by average fixed cost, which falls continuously as output increases (the same fixed cost divided by a larger and larger quantity). Therefore the average variable cost curve lies closer to the average cost curve as quantity increases. Marginal cost must go through the bottom of both average cost curves.

If price drops below the average cost curve but stays above the bottom of the average variable cost curve (e.g. P2), the firm maximizes its profits by continuing to produce in the short run, although it will shut down in the long run (i.e. it won't reinvest in this industry). If a firm cannot cover its variable costs, it will lose all its fixed costs plus some additional amount, since variable costs exceed revenues. If the firm shuts down it will only lose its fixed costs. Thus a firm should never produce if the price is below average variable cost (e.g. P1).

Areeda and Turner take prices below average variable cost as evidence that the firm is not maximizing profits, and are willing to presume that such a firm is engaging in illegal price cutting in order to drive a competitor out of business. They do propose looking at other factors as well.
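The shutdown logic can be checked with a small numerical sketch. The cost figures below are invented for illustration; P2 stands for a price between the two average cost curves, P1 for a price below average variable cost.

```python
# Hypothetical firm: fixed cost $100; producing q = 10 units incurs
# variable cost $80, so AC = $18 and AVC = $8 at that output.
FIXED_COST = 100.0
VARIABLE_COST = 80.0
Q = 10.0

def loss_if_produce(price):
    """Short-run loss if the firm produces Q units at the given price."""
    return (FIXED_COST + VARIABLE_COST) - price * Q

loss_if_shut_down = FIXED_COST  # shutting down forfeits the fixed cost only

# P2 = $12: below AC ($18) but above AVC ($8) -> the loss is smaller
# if the firm keeps producing in the short run.
print(loss_if_produce(12.0), loss_if_shut_down)  # 60.0 vs 100.0

# P1 = $6: below AVC ($8) -> producing loses more than shutting down.
print(loss_if_produce(6.0), loss_if_shut_down)   # 120.0 vs 100.0
```

At any price above AVC, each unit sold recovers all of its variable cost plus something toward the fixed cost, which is why production continues even at a loss.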

  A definition of predatory pricing based on this logic has been proposed by Areeda and Turner and has received a favorable response so far in the courts. While the Supreme Court has not yet ruled directly on this matter, lower courts have seemed to accept the general premise. It has become much more difficult to demonstrate predatory intent, and will probably continue to become even more difficult in the future, since at a minimum one would have to show that the "predator" was selling below his average costs and possibly even below his average variable costs.

 There are, however, difficulties with this definition. Free samples, for example, might be misconstrued as instances of predatory pricing. More importantly, it is not possible to accurately measure average variable costs, since (among other problems) categorizing costs into fixed and variable components is a very difficult task. And firms with many products often have costs that are jointly shared by several of those products, making apportionment imprecise at best. Therefore the implementation of this rule is difficult and somewhat risky. Bottom line: firms with large market shares should be aware of this potential hazard when they aggressively set prices. Keep an eye on average costs and average variable costs, since it appears safe to lower price only until those levels are reached. Do not lower price below any level which may have been calculated by the firm and discussed in internal firm documents. And do not, in internal documents, refer to your competitive thrusts as attempts to "kill", "squash" or "destroy" your opponents.







Examining Microsoft’s Monopoly (Pricing)


Regime Change

     Monopoly has high prices, competition low prices. No prediction regarding price changes.

     Early period had Lotus and WordPerfect as market leaders - prices were high.



     When Microsoft became the leader, prices fell.

     Consistent with change to competitive regime. Once low price is reached, no reason to expect continued fall.


PC - Macintosh comparison

     In late 1980s:

   Microsoft is dominant in the Macintosh market, and also-ran in the PC market.

   DoJ market structure theories would predict a higher price for Microsoft products in Macintosh market.

   False prediction. Macintosh consumers paid prices for Excel and Word that were 25% lower than PC prices. 

Microsoft’s overall impact on prices

     Categorized software markets by whether Microsoft competed or not.

     Looked at average prices over time.

     3 categories:

   no competition

   direct competition

     competition with operating system (utilities)

     Microsoft was moving out of operating systems into applications.


The categories where Microsoft competes are: midrange desktop publishing, personal finance, presentation graphics, spreadsheets, word processors, database, project management, and integrated software. The categories where Microsoft does not have an entrant are: accounting, draw and paint, high-end desktop publishing, and forms. The categories that compete with the operating system are utilities/application and communications.



Compare DTP prices with and without Microsoft


Summing up on prices

     Microsoft  lowered prices after it became the market leader.

     Microsoft charged lower prices in markets it controlled (Macintosh) than in markets where it was (likely) a price follower (PC).

     Prices didn’t fall nearly as much in markets where Microsoft did not participate.

     Conclusions: no evidence of monopoly pricing or that consumers have been harmed.


     Virtually identical theoretically to economies of scale (natural monopolies) - except that it implies increased prices as network grows.


Does software pricing have anything to do with hardware pricing?

      Warren-Boulton: “Microsoft's Monopoly Power is Reflected in its Prices, in its Margins, and in the Market Value of Its Equity...Although accurate historical data on Microsoft's operating system product license fees are not readily available, it is my understanding that since at least 1987 the operating system has accounted for a steadily increasing share of the cost of a PC.” (WB, p 27 of direct testimony)

      Fisher:  Next, you can look at the price for Windows, for  the operating system, relative to the price for other components of the PC.  And the price of other components of the PC has been coming down and coming down quite  rapidly at a time that the price for windows has been going up.    (redirect, Jan 11, afternoon session , p 43)



     Monopoly and competition differ in levels of prices, not price changes.

     Software and hardware are complements.

     Holding everything else constant, when the price of one complement falls, the other’s price _____? Is this evidence of monopoly?

     Could hardware be getting more competitive?



$10 billion harm? A Nonsense Study

     Consumer Federation of America (CFA) study compared ‘quality adjusted’ software prices to non-quality-adjusted operating system software prices. They compared apples to oranges because they didn’t understand the economics articles they were reading (the author had a sociology Ph.D.).

     They also overstated operating system price increase. CFA study compared Windows without DOS in 1991 to Windows98 which includes DOS.


     Quality differentials induce rapid market share changes. Leaders are more vulnerable to dramatic losses than in most other markets.

     Microsoft has a low price strategy. No evidence that consumers have been harmed in Microsoft case.

     Network effects may work differently than thought (lack of price increases, no tipping, other explanations for winner-take-all).

No lock-in detected. The idea of network effects entrenching monopoly is unsupported.


TIE-IN SALES: Requiring consumers to buy good A (the tied good) in order to be allowed to buy good B (the tying good).

Tie-ins are often between machines and articles used with the machines - e.g. IBM computing machines and Hollerith cards, Xerox machines and toner, etc. The machine is called the tying good, and the other item is known as the tied good. It is common for the price of the tying good to be set below cost and the price of the tied good to be set well above normal market levels.

 There are several explanations for this type of behavior. The most common are:

1) Extension of monopoly

The courts are very fond of this explanation, although economists are much less happy with it. It is essentially false. The logic is very simple: the firm has a monopoly or near monopoly in the tying good. By imposing a tie-in sale on its customers it effectively creates a new monopoly for the tied good. Two monopolies are better than one (from the firm's point of view), and this increases profits and deadweight losses. Thus the courts have a habit of declaring tie-in sales illegal.

But this logic does not stand up to scrutiny.

When two goods are used together in fixed proportions it is clear that a monopoly on one is just as good as a monopoly on both, if the second market would be competitive if it weren't monopolized.

The accompanying diagram shows the situation when there is a retail and a manufacturing sector. If they were integrated under common ownership, the profit maximizing output would be Q* and the price would be Pm.


If manufacturing were monopolized but retailing were competitive, the manufacturer could charge the retail sector a price equal to Pm − MCretailing. Retailers, being a competitive market, would merely add MCretailing as their markup, and the final price would be Pm. Quantity would be Q* and the manufacturer would get all the profit.

Only if the retail sector was not perfectly competitive would the manufacturer wish to have the two monopolies instead of one. But the courts always assumed that the second market was competitive.
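A small numerical sketch makes the "one monopoly is as good as two" point concrete. The linear demand curve P = 100 − Q and the marginal costs below are invented for illustration: the monopolist manufacturer sets its wholesale price at Pm minus the retailers' marginal cost, competitive retailers pass their cost through, and the integrated outcome is reproduced exactly.

```python
# Hypothetical market: inverse demand P = 100 - Q,
# manufacturing MC = 10, retailing MC = 10.
A, MC_MANUF, MC_RETAIL = 100.0, 10.0, 10.0

def price(q):
    """Inverse demand."""
    return A - q

# Integrated monopolist: for linear demand P = A - Q, the profit-
# maximizing output is Q* = (A - total MC) / 2.
q_star = (A - (MC_MANUF + MC_RETAIL)) / 2                   # 40 units
p_m = price(q_star)                                          # Pm = 60
integrated_profit = (p_m - MC_MANUF - MC_RETAIL) * q_star    # 1600

# Monopolist manufacturer selling through competitive retailers:
# charge wholesale w = Pm - MC_retail; retailers add their MC as markup.
w = p_m - MC_RETAIL                                          # 50
final_price = w + MC_RETAIL                                  # 60, same as Pm
manuf_profit = (w - MC_MANUF) * q_star                       # 1600

print(final_price == p_m, manuf_profit == integrated_profit)  # True True
```

The manufacturer captures the entire integrated monopoly profit with a monopoly in one market only, so there is nothing extra to gain by monopolizing the competitive retail tier.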

2) Quality control

 Firms can argue that they want to keep their products from deteriorating and thus need to ensure the high quality of complementary products. Thus IBM did not want calculator users to use low quality cards for fear that the machines would be blamed for breakdowns. The courts rejected this because they claimed IBM could have required users to buy minimum quality without the tie-in. And certainly changing the relative prices of tying and tied goods was unnecessary.

3) Monitor Cartel:

 If a firm suspects other cartel members of cheating on an agreement, it can charge a high (monopoly) price on the tied good but offer to match any other firm's price on it. If a customer wants the lower price, he will state where he received it, and the cheater is caught. It is thought that railroads, which sold timber, used this mechanism (the transport of the timber was tied to the sale of the timber rights).


4) Price discrimination (metering)

 IBM tied calculating machine cards to its calculating machines in the 1930's, meaning that consumers of the machines had to buy all the cards they used from IBM. IBM did not make the cards it sold, but bought them from outside vendors. IBM sold the cards at very high prices, while the machines were sold at low prices.

The traditional story, which is wrong in most of its elements, goes as follows. Assume that there are two types of users, heavy and light. Heavy users are presumed to be the less elastic users (why should this be?). By raising the price of cards and lowering the price of machines you raise the price paid by the heavy users relative to the light users. E.g., if the heavy user uses 100 cards a day and the light user 20 cards a day, with cards at $1 each and machines at $50 each, both sold at cost, the heavy user would pay $150 and the light user $70. If the machine price were lowered to $20 but cards increased to $2, the heavy user would pay $220 and the light user $60. The heavy user now pays relatively more than the light user, which seems consistent with the price discrimination hypothesis.

So why do we generally reject this old view? First, there is no good reason for the heavy users to be less elastic. Cards may be a bigger percentage of total inputs for heavy users, tending to make their demand more elastic, and bigger demands are not necessarily less elastic. Second, the cost of providing calculating services may differ by intensity of use. Heavy users probably use more machines - in fact they use up more machines - so the example above is not correct when it assumes that the heavy user uses the same amount of machine as the light user.
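The arithmetic in the traditional story can be checked directly, using the same figures as the example above:

```python
# Daily outlay under the example's figures: heavy user needs 100 cards
# per day, light user 20.

def daily_outlay(machine_price, card_price, cards_used):
    """Total daily payment for one machine plus the cards it consumes."""
    return machine_price + card_price * cards_used

# Sold at cost: machine $50, cards $1 each.
print(daily_outlay(50, 1, 100))  # heavy user: 150
print(daily_outlay(50, 1, 20))   # light user: 70

# Tie-in pricing: machine lowered to $20, cards raised to $2.
print(daily_outlay(20, 2, 100))  # heavy user: 220 (pays more)
print(daily_outlay(20, 2, 20))   # light user: 60 (pays less)
```

Shifting the price from the machine onto the cards meters usage, raising the effective price to heavy users relative to light users.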


 The following explanation makes more sense. The consumers of the machines and cards are unsure what kind of year they will have. In a good year they would do better with the calculating machine; in a bad year they would do worse. The calculating machine thus increases the variance of their profits (their risk). It may also increase their expected profits. But if firms are risk averse they won't buy the calculating machine even though the expected returns are positive.

 IBM, on the other hand, also knows these facts. It knows that firms in the industry should buy the machine, but that they do not. To reduce the buyers' risk, IBM uses a tie-in, lowering the price of the machine and raising the price of the cards. If a bad year occurs, the firm doesn't pay much for the machine plus cards, and so doesn't lose as much as it might without the tie-in. The net effect is that the machine does not increase the variance of profits as much as it otherwise would. Risk-averse firms are more likely to buy it because it has the same positive expected value and a smaller risk than before.
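A stylized sketch of this risk argument, using assumed numbers (not from the text): two equally likely years, where the machine's benefit and card usage are both high in the good year and low in the bad year. The prices are chosen so that IBM's expected revenue, and hence the buyer's expected profit, is identical under both schemes; the tie-in only shrinks the spread of outcomes.

```python
from statistics import mean, pvariance

# Assumed figures for illustration: (benefit from the machine, cards used)
# in two equally likely states of the world.
states = [(300, 100),  # good year: big benefit, heavy card use
          (50, 20)]    # bad year: small benefit, light card use

def net_profits(machine_price, card_price):
    """Buyer's net profit from the machine in each state."""
    return [benefit - machine_price - cards * card_price
            for benefit, cards in states]

# Sell-at-cost pricing: machine $50, cards $1.
flat = net_profits(machine_price=50, card_price=1.0)
# Tie-in pricing: cheaper machine, dearer cards. Expected payment is the
# same in both schemes ($50 + 60*$1 = $20 + 60*$1.50 = $110).
tied = net_profits(machine_price=20, card_price=1.5)

# Same expected profit for the buyer, but a smaller variance under the tie-in.
print(mean(flat), pvariance(flat))
print(mean(tied), pvariance(tied))
```

Because the tie-in shifts payments from the fixed machine price onto cards, the buyer pays more in good years and less in bad ones, so a risk-averse firm prefers it even at the same expected price.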




[1] Is There Rationale That Can Justify New Economy's Lofty Stock Prices? By E.S. BROWNING, Wall Street Journal, March 23, 2000

[i] Unless one believes a result of the ‘network externality’ literature that the market cannot coordinate behavior and that there is either excess inertia or excess momentum. This seems little more than a theoretical musing, however, since my extensive research with Margolis shows that there is no evidence to support these beliefs.

[ii] This point, of course, is neither shocking nor new, but it has been largely ignored in the current discussions about the Internet economy. As the stock market swoons this point of view will come out of the closet. But it should have been heard earlier, and it is true always and not just for the Internet.

[iii] Food can also serve as an example: even though world population continues to increase to unprecedented levels, food production capacity continues to outstrip it, and prices fall along with profits. That is why less land is devoted to farms, and why the government has had to step in and try to artificially restrict output so as to increase the profits of farmers.

[iv] “Nasdaq Swings Are Unprecedented But Consumers Are Not Spooked,” Wall Street Journal, April 14, 2000, Page A1, by Greg Ip and E.S. Browning.

[v] “The Internet Retailing Report,” Morgan Stanley, May 28, 1997, by Mary Meeker (Internet) and Sharon Pearson (Retail).

[vi] “Haggling Goes High-Tech,” April 10, 2000, Wall Street Journal.
