Tag Archives: geek stuff

Tired of 5G hype? Refresh yourself with 6G speculation

by Steve Blum


While AT&T, Verizon and T-Mobile squabble over each other’s claims of 5G dominance and their theories of 5G Evolution, it’s a good time to pause and reflect on how nothing changes in the mobile business. They had the same fights over 4G and they will do it all over again when 6G arrives.

Yes, 6G.

Expect to hear more about it in the not too distant future. 6G is undefined now, but there’s an assumption that it will be developed over the next 10 years, and that it will be something like total immersion in a sea of data.

FCC commissioner Jessica Rosenworcel talked about 6G at the Mobile World Congress show in Los Angeles a couple of years ago – the first time I heard someone try to define it. She described it as continual network densification. Samsung calls it “hyper-connectivity involving humans and everything”.

5G technology is all about network densification at the city block and factory floor level. 6G will be about densifying networks at a personal level.

6G development is likely to take the diverse development path that 4G took, rather than the internationally coordinated standards setting process that led to 5G. It'll be developed in bits and pieces over the next ten years, and then eventually bundled into a package with a 6G label on it. As with other technologies, initial attempts might be for military applications. Technology that allows troops, equipment and weapons to be continually and comprehensively linked to AI-class analysis, command and control would be a game changer.

It’s not simple connectivity, of any generation, that’ll make the difference. Superiority – military or economic – will be gained or lost on the basis of the applications, data and devices that use it. 5G’s potential has barely been tapped and there’s a lot of work that has to be done before it runs out of steam.

But, ya know, 5G is so 2020.

Upload demand up, download demand down during covid-19 quarantine, report says

by Steve Blum

OpenVault's upstream traffic growth chart, 2Q 2020.

The covid-19 emergency buried the tired argument that consumers want fast download speeds to watch video and don't need, or care about, fast upload speeds. If the flood of anecdotal reports about online classes freezing and telework grinding to a halt as upstream bandwidth gridlocked wasn't convincing enough, a report published by a broadband data consultancy might finally do the trick.

OpenVault just published its network analysis for the second quarter of 2020, the first full quarter under covid-19 restrictions. It found that the need for upload speed jumped – largely due to video conferencing – even as downstream demand dipped…

In contrast to quarter-over-quarter declines in downstream usage, upstream consumption was up 5.3% in 2Q20, when compared with 1Q20. It is likely that this reflects increased use of videoconferencing as a business, educational and lifestyle tool…

The trends of higher demand for bandwidth consumption and faster speeds appear to be forming that new broadband normal…As more people work and learn from home, the demand for upstream bandwidth will continue to multiply. Two-way video communication for videoconferencing and remote learning is helping drive this surge in upstream bandwidth demand. This demand spiked in 2Q20, growing by 56% over 2Q19.

The stats confirmed what a handful of California senators told the author of an industry-backed bill that aims to keep California's broadband standard at the ridiculously low level of 6 Mbps download and 1 Mbps upload speeds.

“There are people, and the kids, who either totally lack Internet access or have very slow…or just people who have what would be considered normal Internet service and it’s still terrible”, senator Scott Wiener (D – San Francisco) said during a committee hearing to consider assembly bill 570, carried by assemblywoman Cecilia Aguiar-Curry (D – Yolo).

The 6 Mbps down/1 Mbps up standard in the bill was subsequently raised a bit, to 25 Mbps down/3 Mbps up, but with a catch. Comcast, Charter, AT&T and Frontier want to make sure they're the only Internet service providers that get taxpayer subsidies supporting those still-slow speeds, so AB 570 was amended to give them the right to claim for themselves any projects proposed by independents, and the money that goes with them. This right of the first night would effectively lock out competition and lock in their monopoly grip on Californians' broadband service.

Keeping California's broadband speed limit low suits business models that rely on extracting monopoly profits from decaying rural telephone systems while directing investment to high income communities. What we need, though, is modern broadband infrastructure in every community. That's why senate bill 1130, authored by senator Lena Gonzalez (D – Los Angeles), sets 25 Mbps as the minimum acceptable broadband speed for both downloads and uploads.
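For a sense of scale, here's a back-of-the-envelope sketch comparing the upload side of the three standards. The roughly 3 Mbps of upstream needed for a single HD video call is my assumption – it tracks the requirements videoconferencing vendors publish – not a figure from either bill:

```python
# Back-of-the-envelope comparison of the upload standards in play.
# The ~3 Mbps upstream figure for one HD video call is an illustrative
# assumption, roughly in line with published videoconferencing
# requirements -- it is not a number from either bill.

HD_CALL_UPSTREAM_MBPS = 3.0

standards = {
    "AB 570 as introduced (6 down/1 up)": 1.0,   # upload Mbps
    "AB 570 as amended (25 down/3 up)": 3.0,
    "SB 1130 (25 down/25 up)": 25.0,
}

for name, upload_mbps in standards.items():
    calls = upload_mbps / HD_CALL_UPSTREAM_MBPS
    print(f"{name}: about {calls:.1f} simultaneous HD calls")
```

On those assumptions, a 1 Mbps household can't sustain even one HD call, the amended standard supports exactly one, and SB 1130's symmetrical 25 Mbps handles a whole family working and learning at once.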

Both bills are still alive and moving in Sacramento. Key decisions are due by the end of next week, just ten days before California's 2020 legislative session ends.

Covid-19 was barely a sniffle for the Internet, study finds

by Steve Blum

Ookla's weighted median download speeds for fixed broadband.

Broadband networks in the U.S. and around the world held up well as countries locked down and work, school and play moved online in March. Anna-Maria Kovacs, a visiting scholar at Georgetown University in Washington, D.C., took a brief look at worldwide Internet speed test data collected by Ookla and traffic data from Sandvine, and found that the crush of traffic put a temporary downward bend – and only that – on planetary network speeds.

It is not unusual, of course, for internet traffic to grow…What is unusual in the Covid-19 environment is the suddenness of the traffic growth. Rather than growing 30% in a year, traffic grew about that much in a month. Sandvine reports a "staggering increase in volume for network operators to cope with and absorb." During March, according to Sandvine, global traffic grew 28.69% with an additional 9.28% during April, for a total of 38% over the two months. Upstream traffic growth was even more stunning, up 123.18% in March before leveling off…

The U.S. networks' fixed-broadband speed bottomed out within three weeks, as did the global index, while the speeds of the EU, EU-4, and OECD continued to decline for another three weeks.
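A footnote on the arithmetic in that quote: the "total of 38%" is the simple sum of the two monthly figures. Compounding March's growth into April's gives a slightly higher number, as a quick check shows:

```python
# Checking the quoted Sandvine growth figures. The "total of 38%" appears
# to add the monthly percentages; compounding them gives roughly 40.6%.

march_growth = 0.2869
april_growth = 0.0928

compounded = (1 + march_growth) * (1 + april_growth) - 1
additive = march_growth + april_growth

print(f"compounded over two months: {compounded:.1%}")  # ~40.6%
print(f"simple sum of the months:   {additive:.1%}")    # ~38.0%
```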

Kovacs' conclusions – the apparent superior performance of U.S. networks is due to the beneficence of telecoms companies and the somnambulance of the Federal Communications Commission – are unsupported. Her top level observations might correlate with Ookla's speed test data, but she offers no analytical rigor or evidence of causation. The data sets she relies on aren't necessarily globally consistent. For example, Ookla's Speedtest.net is based in Seattle and relies on crowdsourced data. It's a mistake to assume that the crowd generating the data in the U.S. is largely identical to the crowd in the E.U. It might or might not be.
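To illustrate why the makeup of the crowd matters, here's a minimal sketch of a weighted median – the statistic behind headline speed charts like Ookla's – run on made-up numbers. It's my own illustration of the general technique, not Ookla's actual methodology:

```python
# A weighted median sketch with hypothetical data. The weights are the
# point: if the crowd running speed tests skews differently in two
# regions, the same underlying networks produce different medians.

def weighted_median(values, weights):
    """Return the value at which cumulative weight first reaches half the total."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2
    cumulative = 0.0
    for value, weight in pairs:
        cumulative += weight
        if cumulative >= half:
            return value

# Hypothetical download speed tiers (Mbps) and test counts per tier
speeds = [10, 50, 100, 300]
crowd_a = [100, 300, 400, 200]  # testers skew toward faster tiers
crowd_b = [300, 400, 200, 100]  # testers skew toward slower tiers

print(weighted_median(speeds, crowd_a))  # 100
print(weighted_median(speeds, crowd_b))  # 50
```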

Comparisons of the same population over a few weeks' time are valid, though. The top line conclusion stands: globally, networks withstood the initial covid-19-induced surge, and adapted to higher traffic levels within a few weeks.

The Internet was originally designed to ride out a nuclear war. It works just fine in a pandemic, too.

Apple’s rumored move to ARM-based Macs aims for a world of continuous connectivity

by Steve Blum

Technological tipping points are easy to see in the rearview mirror – do you remember what the world was like before the iPhone? – but hard to spot in advance. One might be on the way. A well-respected analyst, Ming-Chi Kuo, who works for TF Securities, predicts that Apple will start using ARM-based chips of its own design in Macintosh computers.

According to a story on AppleInsider by Malcolm Owen…

Kuo forecasts that Apple will be using a 5-nanometer process at the core of its new products in 12 to 18 months' time. As part of this, Kuo believes there will be a "new 1H21 Mac equipped with the own-design processor"…

Shifting over to an ARM-based chip would also give some context to Apple’s decision to move away from supporting 32-bit apps in macOS Catalina, as well as Apple’s work on Catalyst. In theory, this could allow Apple to use the same chips in the Mac as it does in iPhones and iPads, reducing its overall costs and enabling apps to be more usable throughout the entire Apple ecosystem.

It would be the first time a major personal computer maker abandons processors built on Intel's 40-year-old x86 architecture and switches to chips based on ARM designs. Other hardware companies have dabbled with ARM-based servers and PCs – much as Blackberry and Palm dipped their toes into the pre-iPhone smartphone market – but if Kuo is right, Apple will be the first to 1. shift an entire line of mainstream computers off of Intel and onto ARM, and 2. build a complete smartphone-tablet-computer-consumer device ecosystem around a single chip architecture.

There’s a broadband angle to this move, too. If the Apple device universe collapses into a single, integrated hardware and operating system platform, with the only distinction between devices being form factor and peripheral functions like sensors and telephone network access, then its value will be maximised by giving those devices seamless access to a common set of data, content, applications and services via persistent and ubiquitous connectivity.

It’s one thing to rely on file and data syncing across a family of products, as Apple does now, but it’s quite another to build a costly lineup of hardware, software and content on top of the assumption that connectivity can be taken for granted anywhere you go.

Wearables graduate from accessories to hardware platform status as CES opens

by Steve Blum


CES is underway in Las Vegas. What used to be called the Consumer Electronics Show but now goes by the less modest appellation of “CES 2020, the world’s largest and most influential technology event” kicked off this weekend with pre-show and preview events. Today is press day and the show floor opens tomorrow.

From a product perspective, the consumer electronics industry is collapsing into a handful of all-purpose products – smart phones, cars, computers and big screens of one sort or another. That list will grow this year as wearables become full-featured hardware platforms that can support complete ecosystems of apps, services and content.

The wearables market is about form factor, not specific device function. That’s true whether it’s smart watches, fitness trackers, sleep monitors or something else. Smart phones are networked, handheld computers that are a convenient parking spot for any app, sensor or content that you can imagine. It’s an accident of history that we call them phones. Similarly, what we’ll end up calling a smart watch will just be a wrist-mounted platform for whatever can conveniently ride on it. I’m seeing fewer and fewer Fitbits and other dedicated fitness wearables on people, and more and more Apple watches, which are often used for step counting and other fitness tracking purposes.

Batteries are the major limiting factor inhibiting the collapse of everything into a single smart watch. There are two problems: battery life and recharging. So far I haven't found a smart watch that can operate with everything running, including GPS, for more than about eight hours straight. That's inconvenient for people who just want to put it on in the morning and let it do its thing all day long. It's a deal killer for people who need that level of functionality for long durations – cyclists, hikers and triathletes, for example. Recharging requires users to take the watch off once or twice a day and leave it somewhere to charge. That can limit its usefulness as a sleep monitor, for example. It's also a lot more fuss than we're used to giving our watches.
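The arithmetic is unforgiving. As a rough sketch – the capacity and power draw figures below are my assumptions for illustration, not measurements of any particular watch – runtime is just battery capacity divided by average draw:

```python
# Rough runtime arithmetic. A watt-hour-class battery is typical of
# current smart watches; both draw figures are assumptions chosen to
# illustrate the gap between all-sensors-on and watch-face-only use.

battery_wh = 1.1             # assumed battery capacity, watt-hours
draw_everything_on_w = 0.14  # assumed average draw with GPS and sensors running
draw_mostly_idle_w = 0.02    # assumed draw showing the watch face and little else

print(f"everything on: {battery_wh / draw_everything_on_w:.1f} hours")  # ~7.9
print(f"mostly idle:   {battery_wh / draw_mostly_idle_w:.1f} hours")    # ~55
```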

But there’s a potential solution to both problems. If someone can figure out a system for wirelessly recharging smart watches with ambient energy, it’ll be a game changer. At that point, it won’t be just fitness trackers that collapse into smart watches, but also many smart phone functions as well. Maybe a low level magnetic field on keyboards, steering wheels, handlebars or anything else that’s regularly near your wrist for more than a few minutes a day?

California’s marquee industries are two halves of the same brain

by Steve Blum


Disney and Apple launched online video services this month, with both companies falling short of perfection. It's interesting to compare the two platforms, dubbed Disney+ and Apple TV+. One is the brainchild of an entertainment giant struggling with technology, the other was created by a tech giant struggling with content.

When Disney+ went live last week, demand outstripped capacity and users were locked out. Apple TV+, on the other hand, had no such problems. Its programming could be seen by anyone interested enough to log in. Unfortunately, the content on offer hasn't excited anyone. It was reckoned workmanlike, at a moment when Apple needed blockbuster pizzazz to break out of the over-the-top pack.

Disney’s server problem was solved in hours, if not minutes. By now, I doubt many people remember it. Fixing technical issues is a left brain, linear process. Apple, on the other hand, has to contend with a chaotic, right brain challenge. You don’t create world class content by assigning more engineers and spinning up more servers. So now there’s talk of former HBO chief Richard Plepler doing a deal with Apple – he has a proven track record. That’s no guarantee in the entertainment business, but it’s the way to bet.

Silicon Valley and Hollywood have a lot more in common than people realise. At both ends of California, it's about finding executives who can manage very talented, highly mobile people who can create marvels out of thin air. A track record of success, even if liberally sprinkled with failures, will attract investors in Los Angeles and San Francisco alike. Both cities are magnets for risk-tolerant capital, outrageous concepts and creative talent. The difference is that in Silicon Valley fortune seekers of modest gifts end up in cubicles making a hundred grand or two a year, while in Hollywood they're waiting on tables.

For now, anyway.

Huawei’s U.S. troubles jumpstart push for new mobile operating systems

by Steve Blum

Huawei press conference at CES, 5 January 2019.

With the impact of a U.S. trade ban growing, Huawei launched its own operating system, initially aimed at Internet of Things devices but with the potential to compete with Android in the mobile phone ecosystem. Branded HarmonyOS (and called Hongmeng in China), it is designed to be lightweight and very secure. Huawei isn't installing it in its smart phones yet, but that could change.

A deep dive into Huawei’s relationship with Google by The Information’s Juro Osawa highlights how Chinese companies have flirted with developing independent operating systems, but ultimately backed away from investing in a risky corporate strategy that could find no executive champions…

In 2016, a top Huawei executive passed on an opportunity to partner with the maker of an Android alternative called Sailfish, seeing little need for a Plan B…

After the meeting, [Huawei consumer division chief Richard] Yu didn’t follow up on the idea of working with Jolla. He showed little interest in an alliance with another maker of operating systems.

But even though the push to reduce dependence on operating systems controlled by foreign companies now comes from the Chinese government itself, according to Osawa's article, Huawei didn't take the threat seriously…

“In China, companies that supply products to the government are under growing pressure to use domestic software as well as hardware,” said Canalys analyst Nicole Peng. “Major Chinese tech companies like Huawei are feeling obliged to develop their own homegrown operating systems.”

Huawei’s renewed effort to develop its own OS was halfhearted, prompted in part by the company’s need to conform to Beijing’s homegrown software push…few executives viewed it as an Android replacement because the chances of Google ending its work with the Chinese company seemed remote.

Huawei lost that bet, and is now trying to play catch up. The result could be a further isolation of technology and online services behind national firewalls. Or it might be the impetus the industry needs to finally break out of operating system architectures that were drafted nearly fifty years ago.

Caltech turns eastern California fiber network into earthquake detector

by Steve Blum

Caltech's readout of seismic data from the fiber.

Fiber optic networks do more than just ride out major earthquakes without dropping a bit. They can also detect and collect data on the quakes themselves. Two major quakes – magnitude 6.4 and 7.1 – hit eastern California on 4 and 5 July 2019 respectively, in the high desert of Kern and San Bernardino counties, where seismometers aren’t thick on the ground. To understand what happened, and what continues to happen, Caltech scientists needed to quickly get more sensors into the field.

Fortunately, the eastern slope of the Sierra Nevada – Mono, Inyo, Kern and San Bernardino counties in California, and Washoe and Douglas counties and Carson City in Nevada – has fast, earthquake ready fiber connectivity.

The Digital 395 open access fiber optic network, which links Reno to Barstow along the eastern Sierra, runs right through the area that was hardest hit. By connecting "surveillance technology initially developed for military and general security applications that can detect ground movement" to a single fiber strand, an underground fiber route – or sections of it, at least – can be used for "pre-shock detection of P and S waves across the fibers", according to Michael Ort, CEO of Praxis Associates/Inyo Networks, which built and operates Digital 395. In other words, fiber optic networks can be used to detect the big incoming shockwaves a few critical seconds before they hit, as well as provide valuable scientific data about the event.
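Those few critical seconds come from basic wave physics: P waves travel faster than the more damaging S waves, so a network that detects the P wave can raise an alarm before the S wave arrives. A rough sketch, using textbook crustal velocities rather than anything specific to Caltech's system:

```python
# Warning time available between P-wave detection and S-wave arrival.
# Velocities are typical textbook values for shallow crust.

P_WAVE_KM_S = 6.0  # faster, less damaging primary wave
S_WAVE_KM_S = 3.5  # slower, more damaging secondary wave

for distance_km in (20, 50, 100):
    warning_s = distance_km / S_WAVE_KM_S - distance_km / P_WAVE_KM_S
    print(f"{distance_km} km from the epicenter: ~{warning_s:.1f} seconds of warning")
```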

Preliminary discussions about installing distributed acoustic sensing equipment had been held with Caltech, but everything went into high gear when the quakes began hitting Ridgecrest. Zhongwen Zhan, a Caltech scientist, asked about using one of Digital 395’s strands, and got a quick yes from Ort.

Zhan hooked up his instruments on 9 July 2019, four days after the 7.1 quake and while the ground was still shaking with aftershocks. The results were immediate, with multiple (mostly small) quakes detected every minute, beginning as soon as the equipment was turned on.

"The fiber gave them about 5,000 sample points over 10 km of fiber. Before they had only a handful of sample points in the area. So you got only 'discrete points' of these, not the overall picture", Ort said.
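Put another way, the numbers Ort cites work out to a virtual sensor every couple of meters:

```python
# Sensor density implied by Ort's figures: 5,000 sample points
# along 10 km of fiber, versus a handful of conventional seismometers.

sample_points = 5000
fiber_km = 10

spacing_m = fiber_km * 1000 / sample_points
print(f"one sample point every {spacing_m:g} meters of fiber")  # 2 meters
```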

"This first time ever use of fiber has given us many data points, making our observations more complete and natural", said Mark Simons, JPL chief scientist and Caltech professor of geophysics. "It's a true breakthrough that will revolutionise our perspective and help with early warning".

Digital 395 was built with money from the 2009 federal stimulus program and from the California Advanced Services Fund (CASF). It was the first and the longest of the open access middle mile fiber routes funded by CASF, before the California legislature bowed to pressure (and money) from incumbent telephone and cable companies and banned those types of projects.

The eternal "why not WiFi" question has an eternal answer

by Steve Blum


The retro look.

Every so often someone asks me something like why can’t we just use WiFi to deliver broadband service? For those of us who’ve been working in the community broadband sector for a decade or more, the question was settled with the collapse of the Great Muni WiFi Bubble more than ten years ago. But for most, that’s a relic of the distant and dim pre-iPhone past, when rocking good service was measured in kilobits and the fastest way to download a movie was to drive to a store and rent a video.

The answer is that WiFi technology was originally designed as an indoor substitute for short distance ethernet cables, and not for outdoor or wide area service. It uses unlicensed spectrum with power determined by federal regulations and propagation characteristics set by the laws of physics.

The primary factors that determine the practical service radius of a WiFi-based network are transmit power (again, limited by law) and antenna design and position. Other factors, such as foliage, interference/noise level and the limitations of the WiFi protocol, come into play, but raw power and antenna capabilities are the big ones.

So if you have a top-of-the-line WiFi access point bolted to a light pole, using maximised omnidirectional antenna design and transmit power, it can communicate at reasonably high speeds with a similar access point over something like 400 meters, assuming there are no major obstructions.

But if that access point is communicating through clear air with a laptop or mobile phone or similar mass market device, that effective distance drops to 100 meters or less. If there's a wall between the device and the access point – i.e. the user is inside a home or business – the distance is considerably, maybe impossibly, less. The transmit power and antenna design of the user's equipment count, too. If the user has a special gizmo – a WiFi bridge with higher power and a better antenna – the effective range might go up as high as 200 meters, and it might be usable indoors. Might be.
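Those distances fall out of simple link-budget arithmetic. Here's a hedged sketch using the standard log-distance path loss model – the exponent, sensitivity and wall-loss numbers are illustrative assumptions, not measurements:

```python
# Range estimates from a log-distance path loss model at 2.4 GHz.
# All parameter values are illustrative assumptions; real-world results
# vary widely with antennas, interference and obstructions.

def max_range_m(tx_eirp_dbm, rx_sensitivity_dbm, path_loss_exponent,
                extra_loss_db=0.0):
    """Distance at which received power falls to the sensitivity threshold."""
    pl_at_1m_db = 40.0  # approximate free-space loss at 1 meter, 2.4 GHz
    budget_db = tx_eirp_dbm - rx_sensitivity_dbm - pl_at_1m_db - extra_loss_db
    return 10 ** (budget_db / (10 * path_loss_exponent))

SENSITIVITY_DBM = -75.0  # assumed threshold for a useful data rate
EXPONENT = 2.7           # assumed outdoor exponent with some clutter

# Access point to access point: both ends at the 36 dBm EIRP legal limit
print(f"AP to AP:          ~{max_range_m(36, SENSITIVITY_DBM, EXPONENT):.0f} m")
# Access point to phone or laptop: the weak ~15 dBm uplink sets the range
print(f"AP to handset:     ~{max_range_m(15, SENSITIVITY_DBM, EXPONENT):.0f} m")
# Same link through an exterior wall, assuming ~10 dB of extra loss
print(f"AP to indoor user: ~{max_range_m(15, SENSITIVITY_DBM, EXPONENT, 10):.0f} m")
```

Under those assumptions the model lands close to the distances above: roughly 430 meters between well-equipped access points, about 70 meters to an ordinary handset, and around 30 meters once a wall gets in the way.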

But while "might be" is good enough for an occasional free connection to a hotspot, it isn't an acceptable standard for mainstream, consumer grade broadband service. That's why we need something better: appropriately designed, professionally engineered and sufficiently provisioned copper, fiber or wireless infrastructure.

Cutting off Huawei could kill it, or kill tech monopolies

by Steve Blum

Huawei press conference at CES, 6 January 2014.

Conventional wisdom is that Huawei can't survive without access to U.S. technology. It was cut off from access to U.S. customers and vendors last week, although the toughest sanctions were delayed for three months earlier this week. If and when those sanctions take full effect, two companies – ARM and Google – say they'll stop selling Huawei licenses to use two essential building blocks of the mobile industry – ARM's chip designs and Google's Android ecosystem. Huawei could be cut off from similarly essential technology in other industry segments, for example the Windows operating system.

It's dangerous to assume, however, that any company, let alone one as big, ambitious and well supported as Huawei, will just roll over and die. The company has said it's kept a Plan B on the back burner for several years, which requires it to launch its own operating system to replace Android and Windows, and to develop advanced chip technology in house.

There's a lot of skepticism about a Huawei OS. The assumption is that it would be based on the open source bits of Android, but wouldn't be able to gain any more uptake than past alternative mobile OS attempts, such as Tizen, Firefox OS or Sailfish. The counter argument is that the Chinese market is already semi-isolated from the global app and service ecosystem. If Huawei gets developer support and user adoption on its home turf – not a far out possibility – it could become the mythical third mobile OS that so many competitors – Microsoft, Nokia, Samsung, Canonical, Mozilla, Blackberry – have failed to capture.

Chipsets are a tougher problem, but there could be hardware workarounds, according to a TechRepublic article by James Sanders…

In terms of hardware, Huawei is far from self-sufficient. Their HiSilicon division licenses the Arm ISA for use in Kirin smartphone SoCs and Kunpeng server CPUs. HiSilicon already possesses the requisite information to manufacture chips based on the technology, and they can continue to design ARMv8-powered chips without the involvement of Arm Holdings, which has cut ties with Huawei. The actual production of these is handled by TSMC, which is one of the few organizations continuing work with Huawei…

There are still options for Huawei…Samsung, LG, and BOE are potential vendors for displays, and Sony and Leica can provide lenses and sensors for cameras. Flash storage and RAM may be an issue, as Toshiba and Micron are used, though SK Hynix provides RAM on some devices, and Samsung can likewise supply both.

It’s too soon to know with any degree of certainty how this battle in the U.S.-China trade war will play out. It could just be another round of brinkmanship, and president Donald Trump has all but admitted that’s what this is all about. But if it isn’t, the result could be a global scale competitor to some cherished de facto technology monopolies, which are either based in the U.S. or dependent on intellectual property that’s rooted here. That would be good for the market, but it’s not exactly what the Trump administration has in mind.