Tag Archives: geek stuff

Covid-19 was barely a sniffle for the Internet, study finds

by Steve Blum

Fixed broadband weighted median download

Broadband networks in the U.S. and around the world held up well as countries locked down and work, school and play moved online in March. Anna-Maria Kovacs, a visiting scholar at Georgetown University in Washington, D.C., took a brief look at worldwide Internet speed test data collected by Ookla and traffic data from Sandvine, and found that the crush of traffic put a temporary downward bend – and only that – on planetary network speeds.

It is not unusual, of course, for internet traffic to grow…What is unusual in the Covid–19 environment is the suddenness of the traffic growth. Rather than growing 30% in a year, traffic grew about that much in a month. Sandvine reports a “staggering increase in volume for network operators to cope with and absorb.” During March, according to Sandvine, global traffic grew 28.69% with an additional 9.28% during April, for a total of 38% over the two months. Upstream traffic growth was even more stunning, up 123.18% in March before leveling off…

The U.S. networks’ fixed-broadband speed bottomed out within three weeks, as did the global index, while the speeds of the EU, EU–4, and OECD continued to decline for another three weeks.
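A side note on Sandvine's arithmetic: the quoted 38% two-month total only adds up if both monthly gains are measured against the same pre-surge baseline, rather than compounded month over month. A quick check, using only the figures from the quote above:

```python
# Sandvine's reported global traffic growth (figures from the quote above).
march_growth = 0.2869   # +28.69% in March, vs. the pre-surge baseline
april_growth = 0.0928   # +9.28% more in April

# Adding both against the same baseline matches the quoted "total of 38%".
additive_total = march_growth + april_growth
print(f"additive: {additive_total:.1%}")      # 38.0%

# Compounding April's gain on top of March's level gives a higher figure,
# so the 38% total implies both percentages are taken from the baseline.
compounded_total = (1 + march_growth) * (1 + april_growth) - 1
print(f"compounded: {compounded_total:.1%}")  # 40.6%
```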

Kovacs’ conclusions – the apparent superior performance of U.S. networks is due to the beneficence of telecoms companies and the somnambulance of the Federal Communications Commission – are unsupported. Her top level observations might correlate to Ookla’s network traffic data, but she offers no analytical rigor or evidence of causation. The data sets she relies on aren’t necessarily globally consistent. For example, Ookla’s Speedtest.net is based in Seattle and relies on crowdsourced data. It’s a mistake to assume that the crowd generating the data in the U.S. is largely identical to the crowd in the E.U. It might or might not be.

Comparisons of the same population over a few weeks’ time are valid, though. The top line conclusion stands: globally, networks withstood the initial covid–19 induced surge, and adapted to higher traffic levels within a few weeks.

The Internet was originally designed to ride out a nuclear war. It works just fine in a pandemic, too.

Apple’s rumored move to ARM-based Macs aims for a world of continuous connectivity

by Steve Blum

Technological tipping points are easy to see in the rearview mirror – do you remember what the world was like before the iPhone? – but hard to spot in advance. One might be on the way. A well-respected analyst, Ming-Chi Kuo, who works for TF Securities, predicts that Apple will start using ARM-based chips it designs and makes itself in Macintosh computers.

According to a story on Apple Insider by Malcolm Owen:

Kuo forecasts that Apple will be using a 5-nanometer process at the core of its new products in 12 to 18 months’ time. As part of this, Kuo believes there will be a “new 1H21 Mac equipped with the own-design processor”…

Shifting over to an ARM-based chip would also give some context to Apple’s decision to move away from supporting 32-bit apps in macOS Catalina, as well as Apple’s work on Catalyst. In theory, this could allow Apple to use the same chips in the Mac as it does in iPhones and iPads, reducing its overall costs and enabling apps to be more usable throughout the entire Apple ecosystem.

It would be the first time a major personal computer maker abandons processors built on Intel’s 40-year-old x86 architecture and switches to chips based on ARM designs. Other hardware companies have dabbled with ARM-based servers and PCs – just as Blackberry and Palm dipped their toes into the pre-iPhone smartphone market – but if Kuo is right, Apple will be the first to 1. shift an entire line of mainstream computers off of Intel and onto ARM, and 2. build a complete smartphone-tablet-computer-consumer device ecosystem around a single chip architecture.

There’s a broadband angle to this move, too. If the Apple device universe collapses into a single, integrated hardware and operating system platform, with the only distinction between devices being form factor and peripheral functions like sensors and telephone network access, then its value will be maximised by giving those devices seamless access to a common set of data, content, applications and services via persistent and ubiquitous connectivity.

It’s one thing to rely on file and data syncing across a family of products, as Apple does now, but it’s quite another to build a costly lineup of hardware, software and content on top of the assumption that connectivity can be taken for granted anywhere you go.

Wearables graduate from accessories to hardware platform status as CES opens

by Steve Blum

Smart watch

CES is underway in Las Vegas. What used to be called the Consumer Electronics Show but now goes by the less modest appellation of “CES 2020, the world’s largest and most influential technology event” kicked off this weekend with pre-show and preview events. Today is press day and the show floor opens tomorrow.

From a product perspective, the consumer electronics technology industry is collapsing into a handful of all-purpose products – smart phones, cars, and computers and big screens of one sort or another. That list will grow this year as wearables become full-featured hardware platforms that can support complete ecosystems of apps, services and content.

The wearables market is about form factor, not specific device function. That’s true whether it’s smart watches, fitness trackers, sleep monitors or something else. Smart phones are networked, handheld computers that are a convenient parking spot for any app, sensor or content that you can imagine. It’s an accident of history that we call them phones. Similarly, what we’ll end up calling a smart watch will just be a wrist-mounted platform for whatever can conveniently ride on it. I’m seeing fewer and fewer Fitbits and other dedicated fitness wearables on people, and more and more Apple watches, which are often used for step counting and other fitness tracking purposes.

Batteries are the major limiting factor inhibiting the collapse of everything into a single smart watch. There are two problems: battery life and recharging. So far I haven’t found a smart watch that can operate with everything running, including GPS, for more than about eight hours straight. That’s inconvenient for people who just want to put it on in the morning and let it do its thing all day long. It’s a deal killer for people who need that level of functionality for long durations – cyclists, hikers and triathletes, for example. Recharging requires users to take the watch off once or twice a day and leave it somewhere to charge. That can limit its usefulness as a sleep monitor, for example. It also demands more fuss than we’re used to giving our watches.

But there’s a potential solution to both problems. If someone can figure out a system for wirelessly recharging smart watches with ambient energy, it’ll be a game changer. At that point, it won’t be just fitness trackers that collapse into smart watches, but also many smart phone functions as well. Maybe a low level magnetic field on keyboards, steering wheels, handlebars or anything else that’s regularly near your wrist for more than a few minutes a day?

California’s marquee industries are two halves of the same brain

by Steve Blum

Egghead

Disney and Apple launched online video services this month, with both companies falling short of perfection. It’s interesting to compare the two platforms, dubbed Disney+ and Apple TV+. One is the brainchild of an entertainment giant struggling with technology, the other was created by a tech giant struggling with content.

When Disney+ went live last week, demand outstripped capacity and users were locked out. Apple TV+, on the other hand, had no such problems. Its programming could be seen by anyone interested enough to log in. Unfortunately, the content offered has not excited anyone. It was reckoned workmanlike, at a moment when Apple needed blockbuster pizzazz to break out of the over-the-top pack.

Disney’s server problem was solved in hours, if not minutes. By now, I doubt many people remember it. Fixing technical issues is a left brain, linear process. Apple, on the other hand, has to contend with a chaotic, right brain challenge. You don’t create world class content by assigning more engineers and spinning up more servers. So now there’s talk of former HBO chief Richard Plepler doing a deal with Apple – he has a proven track record. That’s no guarantee in the entertainment business, but it’s the way to bet.

Silicon Valley and Hollywood have a lot more in common than people realise. In both ends of California it’s about finding executives who can manage very talented, highly mobile people who can create marvels out of thin air. A track record of success, even if liberally sprinkled with failures, will attract investors in Los Angeles and San Francisco alike. Both cities are magnets for risk-tolerant capital, outrageous concepts and creative talent. The difference is that in Silicon Valley fortune seekers of modest gifts end up in cubicles making a hundred grand or two a year, while in Hollywood they’re waiting on tables.

For now, anyway.

Huawei’s U.S. troubles jumpstart push for new mobile operating systems

by Steve Blum

Huawei press conference ces 5jan2019

With the impact of a U.S. trading ban growing, Huawei launched its own operating system, initially aimed at Internet of Things devices but with the potential to compete with Android in the mobile phone ecosystem. Branded HarmonyOS (and called Hongmeng in China) it is designed to be lightweight and very secure. Huawei isn’t installing it in its smart phones, but that could change.

A deep dive into Huawei’s relationship with Google by The Information’s Juro Osawa highlights how Chinese companies have flirted with developing independent operating systems, but ultimately backed away from investing in a risky corporate strategy that could find no executive champions…

In 2016, a top Huawei executive passed on an opportunity to partner with the maker of an Android alternative called Sailfish, seeing little need for a Plan B…

After the meeting, [Huawei consumer division chief Richard] Yu didn’t follow up on the idea of working with Jolla. He showed little interest in an alliance with another maker of operating systems.

But even though interest in reducing dependence on operating systems controlled by foreign companies is now coming from the Chinese government, according to Osawa’s article, Huawei didn’t take the threat seriously…

“In China, companies that supply products to the government are under growing pressure to use domestic software as well as hardware,” said Canalys analyst Nicole Peng. “Major Chinese tech companies like Huawei are feeling obliged to develop their own homegrown operating systems.”

Huawei’s renewed effort to develop its own OS was halfhearted, prompted in part by the company’s need to conform to Beijing’s homegrown software push…few executives viewed it as an Android replacement because the chances of Google ending its work with the Chinese company seemed remote.

Huawei lost that bet, and is now trying to play catch up. The result could be further isolation of technology and online services behind national firewalls. Or it might be the impetus the industry needs to finally break out of operating system architectures that were drafted nearly fifty years ago.

Caltech turns eastern California fiber network into earthquake detector

by Steve Blum

Caltech readout

Fiber optic networks do more than just ride out major earthquakes without dropping a bit. They can also detect and collect data on the quakes themselves. Two major quakes – magnitude 6.4 and 7.1 – hit eastern California on 4 and 5 July 2019 respectively, in the high desert of Kern and San Bernardino counties, where seismometers aren’t thick on the ground. To understand what happened, and what continues to happen, Caltech scientists needed to quickly get more sensors into the field.

Fortunately, the eastern slope of the Sierra Nevada – Mono, Inyo, Kern and San Bernardino counties in California, and Washoe and Douglas counties and Carson City in Nevada – has fast, earthquake ready fiber connectivity.

The Digital 395 open access fiber optic network, which links Reno to Barstow along the eastern Sierra, runs right through the area that was hardest hit. By connecting “surveillance technology initially developed for military and general security applications that can detect ground movement” to a single fiber strand, an underground fiber route – or sections of it, at least – can be used for “pre-shock detection of P and S waves across the fibers”, according to Michael Ort, CEO of Praxis Associates/Inyo Networks, which built and operates Digital 395. In other words, fiber optic networks can be used to detect the big incoming shockwaves a few critical seconds before they hit, as well as provide valuable scientific data about the event.

Preliminary discussions about installing distributed acoustic sensing equipment had been held with Caltech, but everything went into high gear when the quakes began hitting Ridgecrest. Zhongwen Zhan, a Caltech scientist, asked about using one of Digital 395’s strands, and got a quick yes from Ort.

He hooked up his instruments on 9 July 2019, four days after the 7.1 quake and while the ground was still shaking with aftershocks. The results were immediate, with multiple (mostly small) quakes detected every minute, beginning as soon as the equipment was turned on.

“The fiber gave them about 5,000 sample points over 10 km of fiber. Before they had only a handful of sample points in the area. So you got only ‘discrete points’ of these, not the overall picture”, Ort said.
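Those figures work out to a sensing channel every couple of meters. A quick sketch of the density gain (the fiber length and sample count come from Ort's quote; the conventional seismometer count is an illustrative guess, not from the article):

```python
# Measurement density implied by the Digital 395 sensing experiment.
fiber_length_m = 10_000  # 10 km of fiber used for sensing
sample_points = 5_000    # sensing channels reported along that stretch

spacing_m = fiber_length_m / sample_points
print(f"one sensing channel every {spacing_m:.0f} m")  # every 2 m

# Compared to a handful of conventional seismometers covering the same
# ground (illustrative count only):
seismometers = 5
print(f"roughly {sample_points // seismometers}x more sample points")
```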

“This first time ever use of fiber has given us many data points, making our observations more complete and natural”, said Mark Simons, JPL chief scientist and Caltech professor of geophysics. “It’s a true breakthrough that will revolutionise our perspective and help with early warning”.

Digital 395 was built with money from the 2009 federal stimulus program and from the California Advanced Services Fund (CASF). It was the first and the longest of the open access middle mile fiber routes funded by CASF, before the California legislature bowed to pressure (and money) from incumbent telephone and cable companies and banned those types of projects.

The eternal why not WiFi question has an eternal answer

by Steve Blum


The retro look.

Every so often someone asks me something like why can’t we just use WiFi to deliver broadband service? For those of us who’ve been working in the community broadband sector for a decade or more, the question was settled with the collapse of the Great Muni WiFi Bubble more than ten years ago. But for most, that’s a relic of the distant and dim pre-iPhone past, when rocking good service was measured in kilobits and the fastest way to download a movie was to drive to a store and rent a video.

The answer is that WiFi technology was originally designed as an indoor substitute for short distance ethernet cables, and not for outdoor or wide area service. It uses unlicensed spectrum with power determined by federal regulations and propagation characteristics set by the laws of physics.

The primary factors that determine the practical service radius of a WiFi-based network are transmit power (again, limited by law) and antenna design and position. Other factors, such as foliage, interference/noise level and the limitations of the WiFi protocol, come into play, but raw power and antenna capabilities are the big ones.

So if you have a top-of-the-line WiFi access point bolted to a light pole, with a well-designed omni-directional antenna and maximum permitted transmit power, it can communicate at reasonably high speeds with a similar access point over something like 400 meters, assuming there are no major obstructions.

But if that access point is communicating through clear air with a laptop or mobile phone or similar mass market device, that effective distance drops to 100 meters or less. If there’s a wall between the device and the access point – i.e. the user is inside a home or business – the distance is considerably, maybe impossibly, less. The transmit power and antenna design of the user’s equipment counts, too. If the user has a special gizmo – a WiFi bridge with higher power and a better antenna – the effective range might go up as high as 200 meters, and it might be useable indoors. Might be.
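Those distance figures can be sanity-checked with a simple free-space path-loss budget. The numbers below (EIRP, antenna gain, receiver sensitivity) are illustrative assumptions, not measurements, and free space is the best case: an exterior wall typically costs another 5 to 15 dB, which is where indoor coverage goes to die.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# Illustrative 2.4 GHz downlink budget (assumed values, not from the article).
eirp_dbm = 30            # around the U.S. regulatory cap for 2.4 GHz
rx_gain_dbi = 2          # typical phone or laptop antenna
rx_sensitivity_dbm = -70 # roughly what a mid-rate WiFi link needs

for d in (100, 200, 400):
    rx_dbm = eirp_dbm + rx_gain_dbi - fspl_db(d, 2.4e9)
    margin = rx_dbm - rx_sensitivity_dbm
    print(f"{d:>3} m: received {rx_dbm:5.1f} dBm, margin {margin:+.1f} dB")
```

The clear-air margins look comfortable even at 400 meters, which is why access-point-to-access-point links work at that range; subtract a wall or two plus the feeble transmitter and antenna in a consumer device, and the margin is gone.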

But while might be is good enough for an occasional free connection to a hotspot, it isn’t an acceptable standard for mainstream, consumer grade broadband service. That’s why we need something better: appropriately designed, professionally engineered and sufficiently provisioned copper, fiber or wireless infrastructure.

Cutting off Huawei could kill it, or kill tech monopolies

by Steve Blum

Huawei press conference ces 6jan2014

Conventional wisdom is that Huawei can’t survive without access to U.S. technology. It was cut off from access to U.S. customers and vendors last week, although the toughest sanctions were delayed for three months earlier this week. If and when those sanctions take full effect, two companies – ARM and Google – say they’ll stop selling Huawei licenses to use two essential building blocks of the mobile industry – ARM’s chip designs and Google’s Android ecosystem. Huawei could be cut off from similarly essential technology in other industry segments, for example the Windows operating system.

It’s dangerous to assume, however, that any company, let alone one as big and ambitious and well supported as Huawei, will just roll over and die. The company has said it’s kept a Plan B on the back burner for several years, which requires it to launch its own operating system, to replace Android and Windows, and develop advanced chip technology in house.

There’s a lot of skepticism about a Huawei OS. The assumption is that it would be based on the open source bits of Android, but wouldn’t be able to gain any more uptake than past alternate mobile OS attempts, such as Tizen, Firefox OS or Sailfish. The counter argument is that the Chinese market is already semi-isolated from the global app and service ecosystem. If Huawei gets developer support and user adoption on its home turf – not a far out possibility – it could become the mythical third mobile OS that so many competitors – Microsoft, Nokia, Samsung, Canonical, Mozilla, [Blackberry](https://www.tellusventure.com/blog/blackberry-shares-the-big-one-with-the-cops/) – have failed to capture.

Chipsets are a tougher problem, but there could be hardware workarounds, according to a TechRepublic article by James Sanders:

In terms of hardware, Huawei is far from self-sufficient. Their HiSilicon division licenses the Arm ISA for use in Kirin smartphone SoCs and Kunpeng server CPUs. HiSilicon already possesses the requisite information to manufacture chips based on the technology, and they can continue to design ARMv8-powered chips without the involvement of Arm Holdings, which has cut ties with Huawei. The actual production of these is handled by TSMC, which is one of the few organizations continuing work with Huawei…

There are still options for Huawei…Samsung, LG, and BOE are potential vendors for displays, and Sony and Leica can provide lenses and sensors for cameras. Flash storage and RAM may be an issue, as Toshiba and Micron are used, though SK Hynix provides RAM on some devices, and Samsung can likewise supply both.

It’s too soon to know with any degree of certainty how this battle in the U.S.-China trade war will play out. It could just be another round of brinkmanship, and president Donald Trump has all but admitted that’s what this is all about. But if it isn’t, the result could be a global scale competitor to some cherished de facto technology monopolies, which are either based in the U.S. or dependent on intellectual property that’s rooted here. That would be good for the market, but it’s not exactly what the Trump administration has in mind.

Merry Christmas! Because that’s what today is

by Steve Blum

Christmas vacation

Thank you, Gentle Reader, for the best Christmas present a writer can wish for: an audience. If you’re reading this on Christmas morning, you are doubly valued and thrice blessed. And you might even be interested in a blog post about the blog. If you aren’t, please forgive me and be assured my usual ~~rants~~ ~~insights~~ typing will resume tomorrow. If I were reading this, I’d just click here and listen to Jimmy Buffett and Linda Ronstadt instead.

The top three posts for 2018 were about 4K television, with the number one slot going to an analysis of 4K bandwidth requirements. With video already the biggest source of Internet traffic, upgrades to 4K and 8K formats, and beyond, will determine network capacity requirements for years to come. Big thanks goes to Danielle Cassagnol at the Consumer Technology Association for the stats.

The top ten included two posts about Tim Draper’s second attempt to break up California, this time into three states. The news that it was blocked by Californian judges finished far down the rankings, though. Frontier’s California travails also hit the list twice. The top ten was rounded out by posts about vertical integration, fiber maps and wildfire prevention.

It’s tricky to estimate how many people read this blog. I think my audience is something like 5,000 unique readers a month, including social media distribution, but it’s hard to know for sure. It’s stayed more or less even over the past year. If I include my occasional articles for Santa Cruz Tech Beat, which are usually republished here, the average goes up by untold thousands. Special thanks goes to SCTB editor Sara Isenberg for her patronage.

I’ve been posting every day, seven days a week for more than six years. At one point, my plan was to cut back to something like five days a week, but I couldn’t let go. For 2019, I really mean it. After CES, anyway. I made a deal with myself, and please hold me to it: write fewer but better posts. I’ll occasionally post on weekends when something is happening, and I might skip a holiday, when something is not. During the work week, I’ll maintain the schedule. Other changes are in the works, too.

Again, thank you for reading!

Will California earthquakes move faster than mobile networks?

by Steve Blum

Earthquakes happen quickly, but not instantly. The shaking can last anywhere from a few seconds to more than a minute for a major quake. The shock waves spread out from the epicenter at a few kilometers per second (the speed of sound in rock), so it can be a few minutes before everything stops moving everywhere. The initial underground movement can also be detected by instruments before it’s felt on the surface.

Data networks, on the other hand, run at nearly the speed of light. So the right sensors combined with fast, smart computers and ubiquitous broadband coverage can give a few seconds of warning to people via smart phones. In the case of a massive 9.1 magnitude quake in Japan, where such a system is already in place, Tokyo residents had a minute and a half to prepare.
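The arithmetic behind that warning window is simple: the seismic wave's travel time minus the alert system's delivery delay. A sketch with illustrative numbers (the wave speed and latencies are assumptions, not figures from any of the systems described here):

```python
# Seconds of warning = seismic travel time minus alert delivery delay.
# All values here are illustrative assumptions.
s_wave_speed_kms = 3.5  # destructive S-waves move at roughly 3-4 km/s

def warning_seconds(distance_km, alert_latency_s):
    """Seconds between the alert arriving and the shaking arriving."""
    return distance_km / s_wave_speed_kms - alert_latency_s

# A Tokyo-like case: strong shaking from 300 km away, 2 s alert pipeline.
print(warning_seconds(300, 2))   # about 84 seconds of warning
# Same quake, but the alert crawls through a slow mobile network for 60 s.
print(warning_seconds(300, 60))  # about 26 seconds left
# A quake 30 km away with that slow network: the shaking wins.
print(warning_seconds(30, 60))   # negative, no warning at all
```

The third case is the worry raised below: for nearby quakes, every second of network latency comes straight out of a warning window that is only a few seconds wide to begin with.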

There are a couple of early earthquake warning systems under development in California. One is about to be tested by the City of Los Angeles, which partnered with AT&T to develop it after the project was put out to bid last year. Another system, developed by a private company, Early Warning Labs, and the U.S. Geological Survey, is also nearing the test phase in California.

But there is a big if in those assumptions: mobile networks have to perform flawlessly for it all to work. There’s concern that Californian wireless networks are not up to the job, according to a Los Angeles Times article by Rong-Gong Lin II:

Another big challenge faced by the system is how slow cellphone networks and other communications can be in transmitting warnings to the public. The Federal Emergency Management Agency’s Wireless Emergency Alert system is not fast enough to support earthquake early warnings; there have been reports of tens of seconds to even minutes of delays in receiving such messages.

The government and phone carriers are working to improve speed, but an ideal fix could take years to implement.

5G technology, which is designed in part to shorten data transmission times, will help. At least where it’s fully deployed. Communities that are lucky enough – affluent enough – to meet mobile carriers’ return on investment goals will see that happen over the next ten years. For everyone else, what you have is what you’ll get when the Big One hits.