Policy by the Numbers

Data for sound policymaking from Google and friends


Modeling a Market for White Space

Thursday, November 29, 2012

Kate Harrison is a graduate student at UC Berkeley and Anant Sahai is an Associate Professor at UC Berkeley.

Using TV white spaces means allowing wireless devices (e.g. wireless routers) to transmit on frequencies previously exclusive to over-the-air TV. The goal is not to eliminate over-the-air TV but instead to increase efficiency by maximizing existing resources. A useful analogy is pouring sand into a jar of large rocks, where the rocks in the jar naturally leave gaps for sand to fill in. We can think of the signals for TVs, called primaries, as the rocks, which leave room for signals from new devices, the secondaries, our sand.

The principal concern is preventing harm to primaries. Secondaries must be "quiet" enough that TV sets can still "hear" TV signals (in communications lingo, the signal-to-noise ratio must not drop too much). Consequently, we must enforce a limit on the collective "volume" of secondaries.
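To make the "volume" budget concrete, here is a toy sketch of the underlying arithmetic: given a TV signal level, a noise floor, and a minimum acceptable signal-to-noise ratio, how much aggregate secondary interference can the TV receiver tolerate? All of the numbers (signal level, noise floor, protection threshold) are hypothetical illustrations, not actual FCC parameters.

```python
import math

def max_secondary_interference(signal_dbm, noise_dbm, min_snr_db):
    """Aggregate secondary interference (in dBm) that keeps the TV
    signal-to-(noise + interference) ratio at or above min_snr_db."""
    # Total noise-plus-interference the receiver can tolerate, in dBm.
    tolerable_dbm = signal_dbm - min_snr_db
    # Powers add in milliwatts, not in dB, so convert before subtracting
    # the existing noise floor from the tolerable total.
    tolerable_mw = 10 ** (tolerable_dbm / 10)
    noise_mw = 10 ** (noise_dbm / 10)
    budget_mw = tolerable_mw - noise_mw
    if budget_mw <= 0:
        return None  # no headroom: the channel is already at the SNR limit
    return 10 * math.log10(budget_mw)

# Hypothetical numbers: a -70 dBm TV signal, a -95 dBm noise floor, and a
# 15 dB protection threshold leave roughly -85 dBm of headroom for all
# secondaries combined, as heard at the TV receiver.
budget = max_secondary_interference(-70.0, -95.0, 15.0)
```

The key point the sketch makes is that the budget applies to secondaries *collectively*: every new device spends from the same interference allowance.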

The standard approach is to hard-code the per-device limit on transmit power ("volume"). This works where devices have roughly the same requirements regardless of location. However, as the map below shows, white space availability varies greatly.


Large variation in the number of available white space channels.

The natural response to a variable environment is to adapt to it. To be legal, white space devices must contact servers to register and get permission to transmit, which ensures they don’t get too close to protected TV signals (in this sense, the policy is already data-driven). With this setup, it’s easy to simultaneously assign a custom transmit power. We showed with data-driven simulations that there is a power limit function which allows significantly higher mobile data rates without hurting TV coverage:
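The register-then-transmit flow above is essentially a database lookup: the device reports its location, and the server answers with per-channel power limits based on distance to protected TV contours. The sketch below illustrates that idea; the contours, distance bands, and power values are purely made up for illustration and are not the actual database rules.

```python
import math

# Hypothetical protected TV contours: channel -> (lat, lon, radius_km).
PROTECTED = {
    21: (37.77, -122.41, 50.0),
    36: (37.77, -122.41, 80.0),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def allowed_power(lat, lon):
    """Return {channel: max transmit power in watts} for a device at (lat, lon)."""
    limits = {}
    for ch, (clat, clon, radius) in PROTECTED.items():
        d = haversine_km(lat, lon, clat, clon)
        if d <= radius:
            limits[ch] = 0.0      # inside the protected contour: no transmission
        elif d <= radius + 20:
            limits[ch] = 0.04     # near the contour: low power only
        else:
            limits[ch] = 4.0      # far from any contour: full power
    return limits
```

Because the server already answers a yes/no question per channel, returning a location-dependent power number instead of a flat limit requires no change to the devices themselves.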


Left: single transmit power everywhere. Right: transmit power varies by location.

These maps were created in Matlab using US 2010 Census data by tract, the ITU propagation model, and a list of the 8,186 US TV towers, assuming white space ISP towers are placed to serve 2,000 people each. Find the code here.

Notice that data rates are much higher and more uniform in the variable-power map than in the single-power map. But an infinite number of functions satisfy the (linear) constraints of the problem (i.e. preserving TV reception). How should we pick in a principled way?

The traditional economics approach is to assign prices (using real or pretend money) to transmit power and allow people to trade freely until everyone is satisfied. Given the quantity of wireless devices, this is practically infeasible: imagine asking people to manually adjust the power of their wireless routers or even determine their valuation for a unit of power.

However, if we make the simple assumption that all devices (users) crave data rate, we can actually simulate their actions in a hypothetical market. This lets us approximate the optimal outcome easily without requiring any human interactions.

In our award-winning paper, "Seeing the bigger picture: context-aware regulations," we created a proof-of-concept “market” under the additional assumption that fair access to white space services is important to society. For example, San Franciscans will need more per-channel power than Montanans because they have fewer available white space channels. This hypothetical market is just a min/max convex optimization problem which can be solved quickly using today’s data centers and scales well even with thousands of constraints.
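As a stand-in for that optimization, here is a deliberately tiny max-min example using only the standard library: two regions share one aggregate power budget, each region's rate follows a simple log2(1 + power) law per channel, and we maximize the minimum rate by bisecting on the target rate. This toy model and all its numbers are our own simplifications for illustration; the paper's actual formulation is richer.

```python
import math

def power_needed(rate, channels):
    """Minimum total power for a region with `channels` channels to reach `rate`.
    Because log2(1 + p) is concave, spreading the rate evenly across
    channels minimizes the power required."""
    per_channel = rate / channels
    return channels * (2 ** per_channel - 1)

def max_min_rate(channel_counts, total_power, tol=1e-9):
    """Largest rate every region can achieve under a shared power budget.
    A target rate r is feasible iff the summed power needed fits the
    budget, so bisection on r finds the max-min optimum."""
    lo, hi = 0.0, 60.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        need = sum(power_needed(mid, n) for n in channel_counts)
        if need <= total_power:
            lo = mid
        else:
            hi = mid
    return lo

# A channel-poor "city" (5 free channels) and a channel-rich "rural" region
# (30 free channels) share a budget of 100 power units. Fairness pushes far
# more power per channel toward the channel-poor region.
rate = max_min_rate([5, 30], total_power=100.0)
```

Even this toy exhibits the effect in the post: at the optimum, each of the city's 5 channels carries much more power than each rural channel, precisely because the city has fewer channels over which to earn its rate.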

Since white space access already requires communication with a data center, we can easily apply changes there without deploying new white space devices. This lets us refine algorithms over time—including testing them in small regions before deploying them to the entire nation—in order to improve data rates for users. Through this, the white spaces could open up an exciting new realm of real-time data-driven policy.

2 comments:

Richard Bennett said...

I think you've got this backwards: "For example, San Franciscans will need more per-channel power than Montanans because they have fewer available white space channels."

You get more capacity by limiting radio power and harnessing the capacity gains that come from spatial re-use.

Anant Sahai said...

Excellent comment @RichardBennett. Indeed, the per-device power in high-pop-density areas will be lower because of enhanced spectral reuse if the devices can talk over shorter ranges. However, the total *aggregate* transmit power in a particular area can be allowed to be higher. (Some devices might want longer ranges, higher spectral efficiency, building penetration, etc...) Where all the devices have to share fewer frequencies, they would appreciate the higher total power to do that.

In the interests of simplicity in our exposition, we didn't call out this difference between per-device power and total power density in an area in the blog post itself. But it is aggregate power in a locality that must be kept under control to prevent loss of service to primaries far away. And this is what a Cloud-oriented whitespace architecture allows because it knows how many devices are registering in a given area...