Driven by computing power, not Newton or newtons.
New Zealand’s Canterbury Plains are hosting Google’s latest idea-that’s-so-goofy-it-might-work, appropriately named Project Loon. Thirty high-altitude balloons carrying data relay equipment were released to drift over Christchurch, generally heading east towards the telecoms-starved Chatham Islands. The concept Google is testing is to put enough balloons into the air to create a fleet of atmospheric satellites that can talk to each other and to the ground, and relay Internet service to hard-to-reach places.
With one major exception, the technology behind it is reasonably well established. In the late 1990s, several projects emerged that incorporated various aspects of Project Loon. Teledesic, for example, proposed to launch more than 800 small satellites into low earth orbit and ring the globe with Internet access. SkyTower, a project I worked on for southern California drone pioneer AeroVironment, was an attempt to use a solar-powered, unmanned airplane to maintain station over an area, at about the same altitude Google is testing, and serve as a low-hanging satellite.
Teledesic, like the far less ambitious but actually launched Iridium and GlobalStar constellations, foundered because of the astronomical capital cost of building birds that would last for years and then getting them into orbit. The limitations of solar cell and battery technology did the same for AeroVironment. Google’s balloons solve both problems: they’re cheap to build and launch, can be easily recovered and refurbished, and the solar panels only have to power telecoms equipment and controls, not propellers.
The huge challenge for Project Loon is to figure out how to manage a free floating balloon fleet by modelling wind patterns and navigating simply by going up or down to find the right airstream. And do it in a way that maintains a usable telecommunications network architecture. That is a horribly complex computing problem: just the sort of thing Google is good at.
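To make the steering problem concrete, here’s a toy sketch. The wind data and the greedy one-step strategy are my own illustrative assumptions, not Google’s actual approach: each balloon can only move up or down, so "steering" means picking the altitude layer whose wind pushes it most directly toward a target.

```python
import math

# Hypothetical wind field: altitude (km) -> (east, north) wind velocity in m/s.
WINDS = {18: (12.0, -3.0), 19: (-5.0, 8.0), 20: (2.0, 15.0), 21: (-14.0, 1.0)}

def best_altitude(position, target, winds=WINDS):
    """Greedily pick the altitude whose wind moves the balloon
    most directly toward the target (dot product with the goal direction)."""
    dx, dy = target[0] - position[0], target[1] - position[1]
    dist = math.hypot(dx, dy)
    goal = (dx / dist, dy / dist)  # unit vector toward the target

    def progress(wind):
        # Speed component (m/s) of this wind in the direction of the target.
        return wind[0] * goal[0] + wind[1] * goal[1]

    return max(winds, key=lambda alt: progress(winds[alt]))

# A balloon due west of its target should ride the strongest eastward wind.
print(best_altitude(position=(0.0, 0.0), target=(100.0, 0.0)))  # -> 18
```

Doing this for one balloon is easy; doing it for a whole fleet, with forecast uncertainty, while keeping contiguous radio coverage on the ground, is where the hard computing problem lives.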
Big state, big farms.
The version of the federal omnibus farm bill that was approved by the U.S. Senate last week improves the chances of actually building broadband infrastructure in areas of California where no service currently exists. That’s assuming the lack of service can be documented and withstand challenges from competing providers who might claim otherwise, which is a separate can of worms.
The legislation, which still has to be approved by the House, allows the Rural Utilities Service (RUS) to give outright grants to pay for broadband projects, in addition to its existing loan program. Adding grants to the mix eliminates a problem faced by rural broadband start-ups: creating a credible business plan that can repay the total cost of construction out of operating revenues.
On a per household basis, it’s expensive to build facilities – particularly the fiber infrastructure that’s needed for the gigabit-class service referenced in the bill – in sparsely populated areas. The challenge is compounded by lower rural income levels that keep revenue down and make operating profits harder to achieve. Reducing the capital overhang – the bill allows up to 50% grant funding for a project – makes a sustainable business case much more achievable.
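A back-of-the-envelope sketch shows why the grant share matters. All the numbers below are made up for illustration; they’re not drawn from the bill or any actual project:

```python
# Illustrative (made-up) numbers for a fiber build in a sparsely populated area.
homes_passed = 1000
cost_per_home = 4000        # construction cost per home passed, dollars
take_rate = 0.40            # share of homes passed that subscribe
monthly_revenue = 70        # dollars per subscriber
monthly_opex = 30           # dollars per subscriber

capex = homes_passed * cost_per_home
subscribers = homes_passed * take_rate
annual_margin = subscribers * (monthly_revenue - monthly_opex) * 12

def payback_years(grant_share):
    """Years of operating margin needed to repay the un-granted capital."""
    return capex * (1 - grant_share) / annual_margin

print(round(payback_years(0.0), 1))   # loan only: ~20.8 years
print(round(payback_years(0.5), 1))   # 50% grant: ~10.4 years
```

Halving the capital to be repaid halves the payback period, turning a plan no lender would touch into one a start-up can credibly finance.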
RUS loans largely go to rural utility cooperatives that have existing customers, capital assets and operational resources to build upon. Those kinds of co-ops, though, are vanishingly rare in California, where agribusiness is not generally based on multitudes of small farms, as in the midwest and south. Deploying rural broadband infrastructure here often requires starting a business from scratch, which usually makes it impossible to meet current RUS loan requirements.
One can argue that the federal government shouldn’t be paying for broadband projects at all. Fair enough. But if Congress does decide to subsidise rural infrastructure, it should do so in a way that recognises the differences in rural economies across the country.
Akamai, a leading content delivery network provider, publishes periodic performance reports that rank global Internet service by country. Its latest figures put South Korea, Japan and Hong Kong at the top of the chart.
That doesn’t tell the whole story, though. The Akamai numbers show how fast traffic is moving on its network, not how much of it there is. So having, say, super-fast connections in gaming centers and clusters of homes with gigabit-class connections can skew the rankings.
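A quick illustration of that skew, using made-up numbers: a small cluster of gigabit connections pulls the average far above what a typical connection actually sees.

```python
from statistics import mean, median

# Hypothetical sample of measured speeds (Mbps): 90 modest links
# plus a cluster of 10 gigabit-class connections.
speeds = [10] * 90 + [1000] * 10

print(mean(speeds))    # average pulled up to 109 by the gigabit cluster
print(median(speeds))  # the typical connection is still 10
```

Rankings built on averages reward countries with fast clusters, even when the median user’s experience is unremarkable.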
The top three countries have a high proportion of people living in large apartment blocks concentrated in core urban areas (albeit very big cores – Seoul and Tokyo are immense). Which makes modern fiber deployments relatively cheap on a per household basis. Doesn’t mean the people living in buildings along the route all subscribe, although many do and that pulls average speeds up in Asia.
Despite lower population densities, North America still leads on a per capita consumption basis, and is expected to do so for some years to come, at least according to Cisco’s projections. Fifty-year-old coaxial cables and 100-year-old telephone wires are there and are well used, albeit not at gigabit speeds. Compared to world population as a whole – rural and urban – the state of the Internet in North America looks a lot better than many critics believe.
That’s not to advocate relying on ageing plant going forward. As wireline incumbents pull back on rural service, focus capital in “high potential” urban markets and try to exploit monopoly/duopoly positions for as long as possible, North American advantages will quickly fade. It’s no cause for panic, but it’s good justification for independent broadband investments, public and private, based on realistic and rigorous financial criteria.
Bringing down the vertical market.
Machine-to-machine communication protocols are proprietary, frequently established by low-volume vertical applications that are bolted onto existing mobile networks. There’s no established way to make M2M equipment that can roam across a large ecosystem of different networks. But similar to the GSM and CDMA standards that were originally developed for voice, carriers are starting to group together, with four European carriers – Telecom Italia, Deutsche Telekom, Orange and TeliaSonera – forming the Global M2M Association (GMA) and a larger group – which includes NTT Docomo, SingTel, Telefonica, O2 and Optus – coalescing around a proprietary platform developed by Jasper Wireless.
A unified standard doesn’t need to emerge from the rival groups. Mobile equipment manufacturers can support two standards. In fact, they seem to like it that way. Having two viable alternatives can lead to healthy competition that keeps costs down and keeps the pressure on to innovate.
Samsung, for example, isn’t happy relying solely on Google’s Android operating system. Apple’s iOS gives consumers a choice, but Apple doesn’t license it to other manufacturers. So Samsung is partnering with Intel and others to develop phones based on the Tizen OS.
In order for Ericsson’s expectation of 50 billion connected devices (or any of the other “billions and billions” predictions) to come true, manufacturers have to start knocking out M2M modules like popcorn. And they can’t do that in today’s fragmented, proprietary market.
Once operators start deploying infrastructure that supports either or both standards and seamless roaming becomes possible, expect to see a burst of new M2M products, applications and networks.
Seamless offloading of cellular data traffic onto WiFi networks is a big step closer. Apple announced that version 7 of iOS and the next generation of iPhones will support the Hotspot 2.0 standard. The new capability should start appearing this fall.
The idea is to allow users to automatically authenticate on a WiFi hotspot blessed by their carrier when it’s available. Data traffic would then be routed via WiFi until the user moves out of range. At that point, it would switch back to the cellular network.
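The switching behaviour described above can be sketched as a toy decision rule. This is my simplification for illustration only; the actual Hotspot 2.0 mechanics also cover network discovery, authentication and carrier policy:

```python
def route(carrier_hotspot_in_range):
    """Toy offload rule: use the carrier-blessed hotspot when it's
    reachable, otherwise fall back to the cellular network."""
    return "wifi" if carrier_hotspot_in_range else "cellular"

# A user walking past a hotspot: offload while in range, then switch back.
coverage = [False, True, True, False]
print([route(in_range) for in_range in coverage])
# -> ['cellular', 'wifi', 'wifi', 'cellular']
```

The point of the standard is that this hand-off happens without the user ever seeing a login page or toggling a setting.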
AT&T is already moving ahead with establishing compatible WiFi capacity, as it struggles to move traffic off of its mobile network. It needs support from handset manufacturers, and Apple’s decision to do so makes it a lot more likely that others will follow suit. Apple has a history of playing the market leader role when it comes to popularising standards – it was the first to get into WiFi in a big way. The calculation for other manufacturers will change, as they weigh the risk of letting Apple have a lock on what could be a very popular feature against the cost of integrating the new technology.
Having a widely accepted standard for integrating WiFi authentication and payment with mobile traffic management makes it easier for carriers to use third party capacity, and it gives hotspot operators a financial incentive to expand coverage. With a third of smartphone traffic already going via WiFi and overall bandwidth consumption continuing to boom, that kind of flexibility is a necessity.
Don’t fence me in.
There’s nothing new about local governments getting into the utilities business. Nearly all waste water utilities and many (most?) water utilities are publicly owned and managed, either by a primary agency (i.e. city or county) or a special district or equivalent. Plenty of publicly owned electric and solid waste utilities are around too.
So long as the go/no-go decision is made by the taxpayers involved – indirectly by representative government or directly by vote, as they prefer – it’s little different from a corporation and its shareholders deciding to commit capital. Keeping the decision local is the key that allows for a variety of choices within a general area.
Broadband is in a grey area. Sewer and water infrastructure is a true natural monopoly and broadband shares many of those characteristics. Local circumstances tip it one way or the other. Where sufficient competition exists, taxpayers (and usually their representatives) do not support publicly owned competitors. There are a few exceptions, all of which (that I’m aware of) have turned out badly. In most of those cases, it’s the taxpayers and not the private sector that have blinked first and cut their losses.
Where insufficient competition exists, taxpayers are more willing to back a public option. It’s generally in circumstances where the barriers to entry – particularly the economic characteristics of the market – tip broadband toward the natural monopoly column. In those circumstances, creating a competitive market where a publicly owned system succeeds or fails on its own merit is less intrusive than perpetual regulation.
Telecoms companies direct capital toward competitive markets and sectors within markets, which is their right. However, people living in a non-competitive area are not obligated to accept the status quo. Challenging it through market-based competition is economically healthy and politically legitimate.
Thank you for your input.
Organizational budgets and goals are set in the C-suite, defining the resources and limiting the options available to IT executives. Then it’s up to them to find solutions that maximize employees’ chances of meeting those goals while minimizing the pain and staying within the budget.
IT executives have to balance the arts of managing up and implementing down. The best outcome occurs when everyone’s needs, wants and dreams are fulfilled. It’s a tough job that requires a diverse set of skills.
But it’s not the same set of skills that brings success on the consumer side of the Internet service, pay television and telecommunications business. There, every customer sits in his or her own C-suite. Consumers set their own goals, work within their own budgets and determine which solutions suit them best.
That’s why I maintain a healthy degree of skepticism whenever I see FTTH initiatives that are led by executives with blue chip credentials in the IT world but lack a commensurate level of consumer market experience. They are well equipped to engineer effective solutions and evangelize the benefits, but they’re accustomed to having the option of making top down decisions. You don’t close sales on a desktop-by-desktop basis in corporations or institutions.
The difference between customers and employees is that customers are sovereign. Both can say “no”, but it doesn’t mean the same thing. In corporations and institutions, that’s when decisions are made and the debate ends. In the consumer market, that’s the point when selling begins.
There’s a variety of methods IT departments use to manage bring-your-own-device (BYOD) users. They range from limiting access to the internal network – no different, say, from accessing your business email from home – to putting managed apps on devices to installing a ring-fenced operating environment. SAP, for example, provides companies with a way of creating a sealed-off area on consumer-grade phones.
Limiting access isn’t intrusive, but it greatly limits the company resources an employee can access with his or her phone. You might be able to read your email but not access the company’s CRM (customer relationship management) system, for example.
Creating a sealed-off area on the phone gives the most security and offers the greatest access to corporate resources. It’s also the most expensive way for a company to do it. It is intrusive, in the sense that you don’t have the same level of control over what’s in the sealed-off area as you do over your own apps, but it has little or no effect on those apps. And the IT department, in theory, can’t access your data, just the stuff that’s inside the ring-fence. Most people might even like it – they don’t have to worry about their kids picking up the phone and accidentally emailing clients.
It’s the middle ground – installing corporately managed apps – where IT management can be intrusive, because security is provided, in part, by usage policies rather than largely invisible technology. It’s frustrating for both employees and IT managers. Companies will eventually move to one end of the spectrum or the other, or both, and get out of the middle ground. No one likes the arm wrestling involved and ultimately it’s doomed to fail.
Boys prefer helicopters?
What might be the most revolutionary technology poking its nose into the market right now is just a toy. NeuroSky makes a headset that controls devices by reading your brainwaves. Their first shot at a product was a tiara with cat ears that reacted to the wearer’s mood. A big hit with girls. A helicopter for the boys followed.
There’s a long and proud tradition of breakthrough technology getting its first consumer foothold in toy stores. Last year marked the fortieth anniversary of Pong. It began as an arcade game but quickly moved into homes.
Pong was the launching pad for video games, personal computers and, after a few early adopters forgot to turn it off, screen savers. It was four years ahead of the Apple I. Its descendants – like the Atari 400, my first personal computer – were early Silicon Valley market leaders.
Breakthrough technologies first move through a toy phase because no one is really sure what to do with them, or willing to rely on unstable products for anything except a moment’s entertainment. Toys grab our interest. A lot of the value comes from figuring out what to do with them.
When Wham-O (another Californian garage start-up) introduced the Frisbee, there was nothing much to do with it except toss it back and forth. It caught on because the heavy lifting of product development was done for free. And for fun.
Toys drive technology because the cost of failure is low and the intrinsic value is high. Companies like NeuroSky, Sphero and Road Narrows are learning while they make a living. That’s how you get a head start on the land rush into completely new industries.
You’re gonna have to answer to the Coca-Cola company.
What we’ve seen over the past week might be news, but it’s not new. Telecoms and information service providers are in an ever tightening squeeze, as public and private interests use congressional influence to access customer information for their own ends.
It’s still not clear just how enthusiastically Google, Facebook, Verizon, AT&T and others have cooperated with federal spying efforts. So far, when companies have commented, it’s been along the lines of “we’re only doing what we’re required by law to do, and nothing more.”
In that sense, it’s no different than the cooperation telecoms companies are required to give intellectual property owners under the Digital Millennium Copyright Act. And they don’t seem too upset at having to share customer information. In fact, they’ve figured out how to use the law to their own advantage too.
Copyright holders are far more brazen about throwing their congressionally granted weight around than the government agencies in the news lately. Of course, the fear caused by a letter from Warner Brothers demanding twenty bucks is a few orders of magnitude less than the terror of an IRS audit or an FBI investigation.
But small encroachments on personal choice and privacy add up, as lobbyists focus their attention and largesse on a relative handful of political decision makers.
Whether it’s entertainment company lawyers or CIA drone pilots that threaten, the power to access information we’ve placed in service provider hands has come from our elected representatives. They’re doing it with the consent of the governed. If you don’t like it, withdraw your consent the next time you vote.