The Possibility of Spectrum as a Public Good.
The FCC is considering opening up additional spectrum to unlicensed uses — the same kind of regulatory change that gave rise to Wifi. Much of the spectrum being considered for unlicensed use is currently allocated to broadcasters, however, so the FCC’s proposal creates tension between incumbents and groups that want to take advantage of the possibilities inherent in unlicensed spectrum.
Most issues the FCC deals with, even contentious ones like limits on the ownership of radio and television stations, are changes within regulatory schemes. The recent proposal to move the maximum media market reach from 35% to 45% took the idea of an ownership cap itself at face value, and involved a simple change of amount.
Unlicensed spectrum is different. In addition to all the regulatory complexities, an enormous philosophical change is being proposed. Transmuting spectrum from licensed to unlicensed changes what spectrum is. This change is possible because of advances in the engineering of wireless systems.
This matters, a lot, because with the spread of unlicensed wireless, the FCC could live up to its mandate of managing spectrum on behalf of the public, by allowing for and even encouraging engineering practices that treat spectrum itself as a public good. A public good, in economic terms, is something that is best provisioned for everyone (an economic characteristic called non-excludability) and which anyone can use without depleting the resource (a characteristic called non-rival use — individual users aren’t rivals for the resource.)
This transformation will be no easy task, because the proposed change differs radically from the current regulatory view of spectrum, which is two parts physics to two parts engineering.
Two Parts Physics.
Though the details can be arcane, the physics of spectrum is relatively simple. Spectrum, in the aggregate, is just a collection of waves, and a wave is defined by its characteristic frequency, measured by counting the number of waves that pass a given point in a second — the more waves, the higher the frequency. (Wavelength is a corollary measurement — the more waves that pass a point per second, the shorter the length of each wave; therefore, the greater the frequency, the shorter the wavelength. Wavelength and frequency are just alternate ways of expressing the same characteristic.)
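To make that relationship concrete, here is a minimal sketch in Python (the example frequencies are illustrative): wavelength is just the speed of light divided by frequency.

```python
# Wavelength and frequency are two views of the same wave:
# wavelength = speed_of_light / frequency.

SPEED_OF_LIGHT = 299_792_458  # meters per second

def wavelength_m(frequency_hz: float) -> float:
    """Return the wavelength in meters for a given frequency in hertz."""
    return SPEED_OF_LIGHT / frequency_hz

# A few illustrative points on the spectrum:
for label, freq in [("AM radio (1 MHz)", 1e6),
                    ("FM radio (100 MHz)", 1e8),
                    ("Wifi (2.4 GHz)", 2.4e9),
                    ("Visible light (~545 THz)", 5.45e14)]:
    print(f"{label}: {wavelength_m(freq):.2e} m")
```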
The easiest part of spectrum to understand is light — light is simply that collection of frequencies the eye can detect. Other than that, though, there is no real difference between light and radio waves; they are all part of the same electromagnetic spectrum. Light has a very high frequency compared to almost all useful communications spectrum. Like all high-frequency waves, light can’t pass through walls, while lower frequencies can — in fact, the lower the frequency, the better the penetration. This makes low frequencies more valuable for long-range communication, particularly in urban areas.
The second important characteristic of spectrum is power. Like the diminishing height of waves that emanate outward from a rock dropped in a pond, the power of a wave radiating outward from a broadcasting antenna falls as the distance from the antenna increases. Worse, this falloff isn’t just proportional to distance, but to the square of that distance. This pattern, called the inverse square law, says that power at distance N will be 1/N² — two miles from a given broadcaster, the signal will be 1/4th the strength of the signal at one mile; at three miles, it will be 1/9th, and so on.
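A quick sketch of the falloff, in Python, with distances chosen only for illustration:

```python
# Inverse square law: received power scales as 1 / distance^2.

def relative_power(distance: float, reference_distance: float = 1.0) -> float:
    """Power at `distance`, relative to the power at `reference_distance`."""
    return (reference_distance / distance) ** 2

for miles in [1, 2, 3, 10]:
    print(f"At {miles} mile(s): {relative_power(miles):.4f} of the 1-mile signal")
# At 2 miles the signal is 1/4 as strong, at 3 miles 1/9, at 10 miles 1/100.
```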
Two Parts Engineering.
Frequency and power are real attributes of the waves that make up spectrum. The questions revolving around regulation of that spectrum, though, aren’t about those characteristics. Instead, they are about the engineering of systems that make use of the characteristics of frequency and power. Right now, the FCC’s regulations make two assumptions about such systems, based largely on radio engineering as it existed for most of the 20th century.
First, frequency. Current regulation assumes that a given frequency is like a virtual wire. For a sender and receiver to communicate, they need to be communicating on a single, agreed-on frequency. Though our experience of receiving these frequencies is sometimes discrete (changing the channel on TV) and sometimes variable (turning the dial on a radio), the process is always the same — making the receiver listen in to the specific frequency being sent by the transmitter.
Treating frequency as a wire also sets limits on the amount of data that can be transmitted, since the data is encoded as minor changes to the waves themselves. In the frequency-as-wire model, the higher the frequency of the waves, the higher the data rate, and the lower the frequency, the lower the rate. Because of the tradeoff between penetration and data rate, most of the useful radio frequencies are in the kilohertz (kHz) to gigahertz (GHz) range — low enough to travel through walls, high enough to carry the data required for voice or video signals.
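This tradeoff can be made precise with the standard Shannon-Hartley formula, which ties a channel’s maximum data rate to its bandwidth (the width of the frequency slice it occupies); higher-frequency bands support higher rates largely because wider slices are available there. A minimal sketch in Python, with illustrative numbers:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr: float) -> float:
    """Maximum error-free data rate of a channel (Shannon-Hartley theorem)."""
    return bandwidth_hz * math.log2(1 + snr)

# A narrow voice channel vs. a Wifi-sized channel, both at a
# signal-to-noise ratio of 1000 (30 dB) -- purely illustrative values.
print(f"{shannon_capacity_bps(3_000, 1000):,.0f} bps")       # ~30,000: voice
print(f"{shannon_capacity_bps(20_000_000, 1000):,.0f} bps")  # ~200 million: video
```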
Second, power. Because a given frequency is treated like a wire, and because power falls off so rapidly as it radiates outwards from the broadcasting antenna, the communication between sender and receiver relies on no other broadcaster using that same frequency in the same geographic area. If two or more broadcasters are using the same frequency, a standard receiver won’t be able to discern one signal from another. Though engineering parlance calls this interference, the waves aren’t actually interfering with one another — rather the profusion of signals is interfering with the receiver’s ability to listen to one specific signal.
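A toy demonstration of the distinction, in Python with NumPy; the station parameters are invented:

```python
import numpy as np

# Two stations broadcasting on the same 1 kHz "virtual wire".
t = np.linspace(0, 0.01, 1000)                        # 10 ms of samples
station_a = np.sin(2 * np.pi * 1000 * t)              # station A's signal
station_b = 0.8 * np.sin(2 * np.pi * 1000 * t + 1.3)  # station B, same frequency

received = station_a + station_b
# `received` is itself just a single sine wave at 1 kHz; nothing in it tells
# a conventional receiver which portion came from station A. The waves are
# unharmed -- it is the receiver's ability to discriminate that fails.
```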
In the early decades of radio, interference was a ubiquitous problem — no receiving hardware could distinguish between two signals of similar frequencies. This model of interference required strict limits on use of a particular frequency, in order to ensure reception — a sender had to “own” a frequency to use it. In its role as the regulator of spectrum, the FCC has been in the business of managing these engineering tradeoffs, determining who gets to use what spectrum (based in part on requirements for penetration of buildings and carriage of data, and in part on what’s available.) Once spectrum has been allocated, the FCC then enforces rights over the spectrum on behalf of the owners, in order to ensure that no other signals risk confusing receivers in proximity to the antenna.
With the old model of transmitters locked on one frequency and receivers unable to do anything but listen, this was the right answer. Accordingly, almost all the usable spectrum was licensed to a small number of parties, especially the Government and broadcasters. These organizations in turn use only a tiny fraction of this spectrum, treating the rest of it as “white space”, a buffer zone against competition from other signals. (This imbalance between used and unused signal is actually getting more extreme as broadcasters transition to digital signal, which requires an even narrower slice of frequency than analog signals do.)
Thus, because of engineering assumptions, the FCC treats spectrum as property, a regulatory approach that creates enormous difficulty, since spectrum isn’t actually property. The necessary characteristics of property are the opposite of the characteristics of a public good.
Things like shoes, cars, and houses are all property. Property is excludable — it is easy to prevent others from using it — and rival — meaning that one person’s use of it will interfere with another person’s use of it. Spectrum has neither characteristic. Spectrum is purely descriptive — a frequency is just a particular number of waves a second — so no one can own a particular frequency of spectrum in the same way no one can own a particular color of light.
Instead, when an organization ‘owns’ spectrum, what they really have is a contract guaranteeing Federal prosecution if someone else broadcasts on their frequency in their area. The regulatory costs of forcing spectrum to emulate property are enormous, but worthwhile so long as it leads to better use of spectrum than other methods can. That used to be true. No longer.
The Philosopher’s Stone.
In the handling of spectrum, technological improvement is the philosopher’s stone, capable of turning one kind of material into another. Since the treatment of spectrum as property is an artifact of current regulatory structure, itself an artifact of engineering assumptions, changing the engineering can change what spectrum is, at least in a regulatory setting. This matters, because the inefficiencies and distortions arising from treating spectrum as property create obstacles to more economically efficient and flexible uses of wireless communication.
There have been two critical changes in the engineering of radio systems since the FCC’s implicit model was adopted. The first is computationally smart devices that can coordinate with one another. One possible use of such smart devices is to allow the sender to broadcast not with as much power as possible, but with as little. Because smart senders and receivers can coordinate, they can agree on different degrees of broadcast power in different situations, in the same way people modulate their volume around a dinner table. Because the sender no longer has to use maximum power to maximize the receiver’s ability to ‘hear’ the signal, we can reduce the overall power required in the system (and thus the cause of traditional interference), even if no other aspect of radio engineering were to change.
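A deliberately simplified sketch of that negotiation in Python; real protocols measure signal quality rather than calling a function, and `link_ok` here is just a stand-in for the receiver’s acknowledgment:

```python
def negotiate_power(link_ok, max_power: float, step: float = 0.1) -> float:
    """Step transmit power down until just above the point the link would fail.

    `link_ok(power)` stands in for the receiver acknowledging that it can
    still hear the sender at a given power level.
    """
    power = max_power
    while power - step > 0 and link_ok(power - step):
        power -= step
    return power

# Toy link: this receiver needs at least 0.25 units of power to hear us,
# so the sender settles near 0.3 instead of shouting at 1.0.
print(round(negotiate_power(lambda p: p >= 0.25, max_power=1.0), 2))  # -> 0.3
```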
The second, and much more significant, change is the invention of spread-spectrum radio. As the name suggests, spread-spectrum encodes data on several frequencies simultaneously. This has two critical advantages. First, it breaks the link between the frequency of a particular signal and the amount of data that can be sent between devices, allowing data transfer rates much higher than the carrying capacity of a frequency considered as a virtual wire. Second, because both sender and receiver are computationally smart, they can agree on ways of sending and receiving data that largely avoid the traditional form of interference.
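One concrete flavor of the idea is frequency hopping, among the simplest spread-spectrum schemes: sender and receiver derive the same pseudo-random channel schedule, so they rendezvous on a new frequency for every symbol while looking like scattered noise to a fixed-frequency listener. A minimal sketch in Python, with channel numbers and a shared seed invented for the example:

```python
import random

CHANNELS = list(range(2400, 2484))  # 1 MHz slots in the 2.4 GHz band (illustrative)

def hop_sequence(shared_seed: int, length: int) -> list:
    """Sender and receiver derive the same pseudo-random schedule from a seed."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS) for _ in range(length)]

message = "hello"
schedule = hop_sequence(shared_seed=42, length=len(message))

# The sender transmits one symbol per time slot, hopping to that slot's channel.
transmissions = [(slot, channel, symbol)
                 for slot, (channel, symbol) in enumerate(zip(schedule, message))]

# The receiver, following the same schedule, retunes every slot and recovers
# the whole message.
received = "".join(symbol for slot, channel, symbol in transmissions
                   if channel == schedule[slot])
print(received)  # -> hello
```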
Neither smart radios nor spread spectrum existed in 1934, the year of the FCC’s birth, and the context in which many of its most basic engineering assumptions were set. We have good theoretical reasons to believe that these techniques can transform the way we treat spectrum. We also have a good practical reason to believe it — Wifi.
The Example of Wifi.
Wifi, operating in a slice of unlicensed spectrum at the relatively high frequency of 2.4 GHz, has been one of the bright economic spots during the tech downturn, with base stations and cards shipping at a torrid pace throughout the last few years. (The number of Wifi PC cards shipped is expected to top 20 million this year.) Wifi is also a giant demonstration project of what can happen when the problem of non-interference is left up to smart devices, rather than arranged by fiat.
The first-order value of this is obvious: You and I can be neighbors, both running Wifi routers that broadcast signal into one another’s apartments, without generating anything that looks like the old model of interference. This lowering of coordination costs between participants in the system has had a hugely beneficial effect on the spread of the technology (incredibly rapid for hardware), because no one has to ask for help or permission to set up a Wifi node, nor do they have to coordinate with anyone else making the same set of choices.
There is a surprising second-order value of the Wifi revolution as well: an alternate model of capitalization. Most wireless technology, whether TV, radio, or phones, requires a huge investment up front in broadcasting equipment, an investment which limits what can be done with the technology, since all subsequent uses require extracting money from the users or third parties like advertisers, in order to recoup the investment and cover the ongoing expenses.
Wifi networks, by contrast, are capitalized by the users, one hotspot or PC card at a time. This model has provided an enormous amount of flexibility in business models, from the Wireless ISP model being pursued by T-Mobile and Starbucks; to the civic infrastructure model, as with Emenity unwiring parks and other public spots; to the office LAN model, where a business treats Wifi access as part of the cost of doing business. And then, of course, there’s the home user model, where the user sets up an access point in their house and uses it themselves, as they would a toaster or a TV, without needing to offer access to anyone else, or to come up with a business model to cover the small one-time charge.
There are two ways to build $10 billion in network infrastructure. The first is to get ten large firms to pony up a billion each, and the second is to get 100 million users to spend a hundred dollars each. Wifi fits that second model, and has created an explosion of interest and experimentation that would be impossible to create in a world where the 2.4 GHz band was treated as property.
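The arithmetic, spelled out as a trivial check:

```python
# Two routes to the same $10 billion of network infrastructure.
centralized = 10 * 1_000_000_000    # ten firms at a billion dollars each
user_funded = 100_000_000 * 100     # a hundred million users at $100 each
print(centralized == user_funded)   # -> True
```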
The Definition of a Public Good.
The Wifi story has two parts — 20 years ago, the FCC decided to allow communications tools to operate in the 2.4 GHz band, but refused to police interference the old way. Instead, the devices had to be able to operate in noisy environments.
Then, for 15 of those 20 years, nothing much happened other than the spread of garage door openers. The regulatory change alone wasn’t enough. The second part of the story was the development of Wifi as a standard that any company could build products for, products that would then be interoperable.
These two characteristics — unlicensed spectrum and clear engineering standards — helped ignite the Wifi revolution. The 2.4 GHz spectrum is not treated as property, with the FCC in the ungainly role of a “No Trespassing” enforcer; instead, it is being treated as a public good, with regulations in place to require devices to be good neighbors, but with no caps or other restrictions on deployment or use.
Though Wifi-enabled hardware is property, of course, the slice of spectrum the hardware uses isn’t. Anyone can buy a Wifi base station or card to make use of the 2.4 GHz spectrum (that is, the spectrum is non-excludable.) Furthermore, anyone can use it without interfering with others’ uses of it (it is non-rival as well.)
The right to broadcast on the 2.4 GHz band is almost worthless, since everyone has that right in an unlicensed regime. But the economic value created by uses of 2.4 GHz is almost certainly higher than that of any other section of spectrum, and it is still growing rapidly.
So What Could Be Bad?
The problem with Wifi, however, is that it sits in the wrong frequency for wide deployment. 2.4 GHz is the frequency of baby monitors and cordless phones — applications designed to operate at short distances, and usually to be contained within the walls of a house, which Wifi signals penetrate only weakly. Though it’s possible to do a remarkable amount to extend the range of a Wifi signal through amplification and antenna design, the basic physics of higher frequencies means that Wifi isn’t appropriate for uses meant to cover distances of miles rather than yards. Wifi is great for unwiring a home, or carpeting an urban park, but lousy for getting bandwidth to rural or remote areas, or indeed to any place that doesn’t already have a wired connection to the internet to extend.
With Wifi as proof of concept, it should be easy to argue that other, lower-frequency spectrum should be transmuted from licensed to unlicensed (which is to say, from a synthetic property model to a public good.) The argument runs into trouble, though, over the fact that almost all useful spectrum is presently regulated like property, meaning that any such re-assignment of spectrum will involve the current license holders of the spectrum in question.
The broadcasters have a legitimate concern about old-style interference, of course. After 70 years of hearing that anyone else broadcasting in their spectrum would be catastrophic, they are understandably leery of models that adopt alternate models of interference, even models that only operate in their unused “white space.”
Unlike the 2.4 GHz band, which was already used by microwave ovens and other appliances, the broadcasters’ spectrum is used only for communications, so they will have to be shown that new devices can not only cooperate with one another, but also operate without disrupting current signals. (The prospects for this are good — in a related test in February concerning low-power radio, the company performing the interference tests concluded, “Due to the lack of measurable interference produced by [low-power] stations during testing, the listener tests and economic analysis scheduled for Phase II of the LPFM field tests and experimental program should not be done.”)
Beneath the simple challenge of avoiding interference, though, is a deeper and more hidden fear. Spectrum is currently valuable because it is scarce, and it is scarce because it is treated like property. Even if novel uses of spectrum can be shown not to interfere with the current broadcast model, evidence that spectrum can be transmuted from a property-rights model to being treated as a public good might not be welcome, in part because it could call into question the hold the broadcasters have on spectrum. This is especially true now that over 85% of television viewers get their TV from cable and satellite, not from traditional broadcast.
The potential threat to spectrum holders is clear. We have a set of arguments for creating and enforcing property rights for things that aren’t actually property. We usually apply this artificial scarcity to intellectual property — patents, trademarks, copyright — and grant these rights to protect certain forms of abstract work or communications.
The rationale for all these rights, however, is to reward their creators for novel intellectual work. This does not offer much relief to spectrum holders seeking a justification for continued Government enforcement of scarcity. None of the current holders of spectrum have created any of it — a wavelength is a physical property that cannot be created or destroyed. If spectrum can be regulated without the traditional licensing regime, it’s hard to argue that the Government has a compelling interest in creating and enforcing scarcity.
And this is what makes the current fight so interesting, and so serious. There are simple arguments about interference, but the ramifications of these arguments are about essence — what kind of thing is spectrum? We have the opportunity to create a world where cheap but smart equipment allows for high utility and low coordination costs between users.
As we’ve seen with Wifi, a small slice of spectrum can become an enormous platform for innovation and user-created value. The kinds of economic activity we’ve seen in the limited example of Wifi can be realized on a much broader scale. The only issue now is whether and how the FCC manages its proposed transmutation of small slices of spectrum away from property rights and towards a model that regulates spectrum as a public good.