Last time out, I went over (with graphs!) operators’ rhetoric about needing to raise data prices to discourage usage, since their networks can’t keep up with demand. Today, we’ll look at what’s behind the data traffic increase, and what (other than raising rates) operators can do about it.
I closed that last post by saying the black-and-white choice many operators have presented — if we have flat-rate (i.e. cheap) data plans, the networks are rendered useless; if we have usage-based (i.e. more expensive) pricing, everything will be dandy — is a false dilemma. Since I wrote that, you might have noticed that Apple announced the iPad. I bring that up only because of the connectivity plan El Jobso announced along with it: $30 per month for unlimited data in the US on AT&T, with no contract required.
You might also remember that AT&T was one of the operators talking about the need to go back to usage-based pricing to save their network. So it’s only natural they’d come in with another unlimited plan on a device that’s largely designed for web surfing, which should deliver another decent bump in data traffic on their already strained network. But I digress…
So what’s driving this data traffic? That’s pretty simple, really: smartphones and 3G dongles for PCs, making operators’ networks victims of their own success, to some extent. But that rise in traffic doesn’t by itself explain the congestion issue, which involves network capacity as well. And there are several different capacity factors to consider, as Dean Bubley points out: downlink capacity, uplink capacity, backhaul capacity, even parts of the core network.
The default assumption (which operators do little to counter) is that network capacity is finite. This is true to some extent, as the amount of data that can be carried in a fixed amount of bandwidth is finite. But short of this limit, the constraining factors are essentially built into the network, which is to say that operators can increase capacity (by upgrading backhaul, adding cell sites, etc.). So when operators say that “networks weren’t built to handle this level of traffic”, it’s worth asking whether that means the network itself wasn’t built to handle the traffic, or, as is often assumed, that the level of traffic is beyond the theoretical capacity of a given type of network in a given amount of spectrum.
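That hard ceiling is the Shannon–Hartley limit: capacity grows linearly with bandwidth but only logarithmically with signal quality. Here’s a minimal sketch of the formula — the spectrum and SNR figures are purely illustrative assumptions, not any operator’s actual numbers:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley theorem: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative only: 5 MHz of spectrum at a linear SNR of 15 (~11.8 dB)
capacity = shannon_capacity_bps(5e6, 15)
print(f"{capacity / 1e6:.1f} Mbit/s")  # 20.0 Mbit/s
```

The ceiling applies per unit of spectrum per cell, which is the point: adding cell sites or spectrum raises the total, so “the network can’t handle it” usually means “we haven’t built enough of it”, not “physics forbids it”.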
Of course, increasing capacity isn’t free, and that’s the rub. To increase capital expenditures while maintaining a desired level of profitability, revenues have to increase as well, hence the desire for price increases. Part of the issue is that these capacity problems are recent. It seems unlikely that so many operators would have leapt into the mobile broadband market so wholeheartedly if they had seen it would so quickly lead to a need for increased capex. So in that sense, you could say that operators underbuilt their networks, deferring the spending necessary to carry today’s (or tomorrow’s) level of traffic, and now it’s time to pay the piper. (I should stress that such a course of action is reasonable from a financial standpoint.)
So what’s the solution? Add more capacity — either in the physical sense of adding more infrastructure, or by implementing other solutions. In the post linked above, Dean lists 10 different types of offloading solutions available to operators. There’s really no magic sauce that’s going to solve this problem easily and cheaply for operators, and that includes going back to usage-based pricing, which holds far more downside potential (by holding back usage) than it offers revenue upside.
Returning to usage-based pricing, or even tiers of usage, would certainly help from a traffic perspective, but it would also kill revenues. The mental transaction cost of asking “what is looking at this page/video/map going to cost me?” will put a lid on usage, killing the goose along with its golden eggs.
In the short term, a solution is to build out more WiFi hotspots, particularly in areas of heavy usage. I’d venture that much of the heavy bandwidth usage (such as from dongle-connected laptops) comes from “nomadic” rather than truly mobile users — people who are on the go, but stop and use their device while stationary. So if you’re an operator, you figure out your heavy usage areas, then get as much WiFi as you can in public areas and coffee shops and libraries and the like. If you have a fixed-line network, I think you could build a credible case for giving away DSL and WiFi routers, or offering them at cut-rate prices, in these zones, giving up some fixed-line profit for the value of offloading traffic.
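To make the “figure out your heavy usage areas” step concrete, here’s a minimal sketch that ranks cell sites by total traffic to surface WiFi-offload candidates. The site names and byte counts are made-up illustrative data, not anyone’s real measurements:

```python
from collections import defaultdict

# Hypothetical usage records: (cell_site, megabytes_transferred)
records = [
    ("downtown", 1200), ("suburb", 300), ("downtown", 800),
    ("campus", 950), ("suburb", 150), ("campus", 400),
]

# Aggregate traffic per cell site
totals = defaultdict(int)
for site, mb in records:
    totals[site] += mb

# The busiest sites are the best candidates for WiFi offload
offload_candidates = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(offload_candidates)  # [('downtown', 2000), ('campus', 1350), ('suburb', 450)]
```

An operator would be working from billing or probe data rather than a hard-coded list, but the principle is the same: rank by load, then put hotspots where the list says.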
If operators want to introduce differentiated pricing, they’re going to have to go about it cleverly. The default option would likely be peak/off-peak rates, but that’s hardly a solution, and the overhead of such tariffs probably makes them unattractive for introduction to end users. They could be easier to apply with connected-device vendors. Just as an example to illustrate, take Sprint’s arrangement with Amazon for data service on the Kindle: delivering content to the devices at off-peak hours (when networks aren’t as congested) could be priced much lower than doing so during data-traffic rush hour. Content such as audio or video could be downloaded overnight and stored locally for on-demand use. My DirecTV satellite service already uses such a model, in which certain pay-per-view movies are automatically pushed to my set-top box and stored on a reserved portion of its hard drive, from where I can watch them any time I want (and I’m only charged if I do).
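As a sketch of how such a tariff might work for a connected device — all the rates and the off-peak window below are assumptions for illustration, not any operator’s actual pricing — the cost of a delivery could simply depend on the hour:

```python
# Hypothetical tariff: a cheaper per-MB rate during an overnight off-peak window
PEAK_RATE_PER_MB = 0.10      # $/MB (made-up)
OFF_PEAK_RATE_PER_MB = 0.02  # $/MB (made-up)
OFF_PEAK_HOURS = set(range(0, 6)) | {22, 23}  # assumed 10pm-6am window

def delivery_cost(megabytes: float, hour: int) -> float:
    """Cost of delivering content at a given hour of day (0-23)."""
    rate = OFF_PEAK_RATE_PER_MB if hour in OFF_PEAK_HOURS else PEAK_RATE_PER_MB
    return megabytes * rate

# A 500 MB push at 3am costs a fifth of the same push at 2pm
print(delivery_cost(500, 3))   # 10.0
print(delivery_cost(500, 14))  # 50.0
```

A Kindle-style vendor could then schedule bulk content pushes into the cheap overnight window and reserve peak hours for genuinely on-demand requests.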
Obviously that model doesn’t work with every type of device or content, but it’s an example of how operators will have to do more than simply raise prices to deal with this issue. None of the solutions are perfectly simple or completely inexpensive, but that’s a reality that will have to be swallowed. Expecting these solutions, or additional network capacity, to be paid for by a return to usage-based pricing is unrealistic.