The disruption in cryptocurrency markets this week, when Bitcoin effectively split in two, was the result of disagreements between different interests over the technology and the crowd-sourced methods used to run it. It was also inevitable and purposeful – cryptocurrencies are intended to rise and fall according to the cumulative decisions of millions – eventually, billions – of sovereign, individual users, who won’t always agree with each other.
Bitcoin’s underlying software can’t keep up with the growing number and speed of transactions between its users. The limits of the software have been a known problem for years, but the urgency of solving it has increased in the past few months as the strain on the system began to slow down transactions.
The solution is simple: upgrade the software. But sometimes simple things are supremely difficult, and so it is with Bitcoin.
It’s nothing like updating a commercial application like Excel or iTunes that’s owned by a single company – Microsoft or Apple just do it. It’s not even much like Linux or other widely used open source software that can comfortably exist with many different versions – distros – floating around. Linux might be open source, but any given installation is a closed system – so long as you’re satisfied with the way your preferred version runs on your hardware, all is well. Operationally, it doesn’t matter if the person sitting next to you uses a different distro.
But if you’re exchanging information with other people – which is what Bitcoin is all about – then everyone has to format and process the data in the same way. Email works because everyone has more or less settled on a set of open standards that are periodically updated by industry groups that include big companies, like Google and Microsoft. If enough of the major players agree then pretty much everyone else has to follow along, or risk being shut out.
The same principle applies to cryptocurrencies like Bitcoin. But because schisms like the one we saw this week produce competing versions that, so far, have added value to the overall market and can be freely exchanged within their respective universes, there’s also an incentive not to standardise. By preventing consolidation into a single, monopoly platform, that balance has kept an ecosystem of independent cryptocurrencies alive.
The basic blockchain technology that underpins bitcoin and other cryptocurrencies could find its way into the basic infrastructure of the global financial system. A group of nine of the world’s biggest banks is taking the first steps towards adopting the blockchain concept, initially as a way of recording transactions. According to a story on Reuters, the group has engaged a financial technology company, R3, to develop a common blockchain-based platform…
[R3’s CEO David] Rutter said the initial focus would be to agree on an underlying architecture, but it had not yet been decided whether that would be underpinned by bitcoin’s blockchain or another one, such as the one being built by Ethereum, which offers more features than the original bitcoin technology.
The group will not do transactions via cryptocurrencies, at least not in the foreseeable future. Instead, the banks have decided that the blockchain method of reliably and transparently documenting transactions is potentially a better way of keeping track of who has bought what. At this point, there’s no plan to use it to buy or sell anything.
It’s a big endorsement of the open source method of developing key cybersecurity technologies. The bitcoin blockchain has remained secure throughout its lifetime, despite the huge incentive someone would have for cracking it. Flaws have been found in it, but widespread scrutiny – the result of the parallel incentive honest users have to keep it secure – has meant that bugs have been squashed and not exploited. Other aspects of the bitcoin ecosystem, online exchanges for example, have been successfully attacked, but the underlying technology that the banks are evaluating has proven rock solid.
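To illustrate why that underlying record-keeping is so hard to tamper with, here is a minimal, hypothetical sketch in Python of the hash-chaining idea at the heart of a blockchain. The function names and the transaction format are invented for illustration; a real Bitcoin block has a far more elaborate structure (Merkle trees, proof-of-work, and so on), but the core property is the same: each block commits to the hash of the one before it, so altering any past entry breaks every link after it.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's full contents, including the previous block's
    # hash, so each block commits to the entire chain before it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    return {"transactions": transactions, "prev_hash": prev_hash}

def verify_chain(chain):
    # The chain is valid only if every block's stored prev_hash
    # matches the recomputed hash of the block before it.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block(["coinbase"], "0" * 64)
b1 = make_block(["alice->bob 5"], block_hash(genesis))
b2 = make_block(["bob->carol 2"], block_hash(b1))
chain = [genesis, b1, b2]

print(verify_chain(chain))               # True
b1["transactions"] = ["alice->bob 500"]  # quietly rewrite history
print(verify_chain(chain))               # False: b2 no longer links to b1
```

The point for the banks is the second half: once a transaction is recorded and built upon, rewriting it is detectable by anyone who can recompute the hashes, which is why widespread scrutiny keeps the ledger honest.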
San Francisco assemblyman Phil Ting was the first of more than a dozen speakers, talking about the need for consistent and timely release of public data in a useful way, and the bill he’s sponsoring to encourage it.
There was general agreement about the value of releasing public data – $3 trillion, if you believe a report from the McKinsey Global Institute – and the woeful state of affairs in California, where paper forms still dominate workflows at some public agencies.
What was largely missing, I thought, was discussion of two critical problems: establishing standards and convincing agency employees to embrace information publishing as a routine part of the job.
“We have to move away from the form-driven mindset that has driven state government for too long”, said Jodi Remke, the chairwoman of the California Fair Political Practices Commission. She made a convincing case for the need to do so, but didn’t really have an answer. Mike Wilkening, from the department of health and human services, talked about his approach of working through offices one by one to bring people on board, rather than issuing a top-down “make it so” order. I would have liked to hear more about that.
Several speakers shared their experience developing and rolling out web and app-based tools for accessing particular information, but only Nicole Neditch from Code for America dove into the problem of creating common, open source tools for doing so. There was barely a mention, though, of establishing common standards for data publishing, which is a critical first step if open data initiatives are to break out of internal, let alone agency by agency, silos.
If there are going to be 50 billion connected devices by 2020 – which is the goal set by Ericsson – then interoperability and interconnection standards will be necessary, according to Ulf Ewaldsson, the company’s CTO. He was speaking at a CES panel session on corporate research and development. Those standards aren’t there yet, but the likeliest path will be through open source collaboration, rather than proprietary technology.
“Open source creates both standards and it creates a more rapid development process than before”, he said. “Open source is a very rapid way to increase the pace of software development”.
“Increasingly, gone are the days when a company can make a proprietary standard and make it successful”, said Todd Rytting, CTO for Panasonic North America. Particularly, killer apps “don’t often come from predictable sources”.
Both emphasised that corporate involvement in open source efforts has to be active and wide-ranging if it’s to be effective. “There isn’t one open source consortium that rules them all, there’s a need for many different flavors”, Rytting said. “You have to participate. Participate isn’t just taking it in and using it, it’s about contributing”.
One role corporations can play in open source projects is to help turn the results into something that’s easily deployable. Ewaldsson pointed out that open source technology does not usually come in a “ready to go” condition, but companies like Ericsson are good at packaging software – open source and otherwise – and making it accessible to less technically capable users.
But being big isn’t necessarily an advantage, particularly when it comes to recruiting the kind of engineering talent needed to develop cutting edge software. “It’s interesting to try to compete against the start ups when you’re a big giant company”, Rytting said.
Windows 8 will survive as a mobile operating system. It'll have a place in enterprise networks, because its integration with desktop computing will appeal to some IT managers. It could even edge out RIM if the Blackberry 10 OS fails to impress. But I didn't talk to a single consumer-facing app developer who is coding for anything other than Android and iOS.
Makers started moving into CES this year. 3D printing grabbed everyone's attention, with printer manufacturers' booths jammed and a few garage-scale start-ups showing products. Expect a lot more next year.
Wearable computing and home automation are closer to being commonplace. Near term, wristwatch-style Bluetooth devices like Pebble will provide quick text and incoming call notifications, plus limited control functions for your smart phone. Long term, eyeglass mounted video displays and health monitors will become self contained and fully functional, with or without a phone.
There's no clear leader in the space, but there might not need to be. Whether it's by automatically associating to a home WiFi network, talking to a networked hub or connecting directly to mobile networks, smart home devices will get their smarts from cloud-based middleware platforms. Consumers can just plug and forget. Apps and web pages will provide information and control.
It's fair to call the International CES a technology event rather than a dedicated consumer electronics show. Distinctions between consumer and enterprise markets, and shrink wrapped products and core technologies are largely irrelevant. Calling it global is still a stretch. Although attendees come from all over, only a quarter of the world's countries were represented on the exhibit floor. Two continents – Africa and South America – were all but absent. India's presence barely registered. Big as this year's show was, there's room to grow in 2014.