Tag Archives: security

Privacy and digital security is a personal responsibility. It can’t be anything else

by Steve Blum

Gagged by privacy

Three unrelated stories that broke within 24 hours demonstrate why digital security is a personal responsibility, and how blindly trusting third parties – individuals or private companies or governments – to look after your best interests is no solution:

  • The European Court of Justice nixed a data sharing safe harbor deal between the European Union and the U.S., pointing out in its decision that “the requirements of US national security, public interest and law enforcement have primacy”, which makes any promises of privacy meaningless.
  • Western intelligence agencies took the unusual step of calling out Russia by name, and blaming its spooks for breaking into systems used by researchers working on a covid-19 vaccine.
  • Crackers punked Twitter employees, and got the keys to the kingdom. Or at least sufficient credentials to take over Bill Gates’, Warren Buffett’s and Joe Biden’s accounts, among others.

Twitter’s explanation for its breach is as succinct a description of the fundamental problem as I’ve ever seen…

We detected what we believe to be a coordinated social engineering attack by people who successfully targeted some of our employees with access to internal systems and tools.

I have no doubt that Twitter takes security and customer privacy seriously and takes the steps it truly believes are necessary to safeguard its systems. I believe the same about medical researchers.

And the National Security Agency too. But that, thankfully, did not prevent Edward Snowden from blowing the whistle on its mindless and pervasive surveillance of electronic communications, thanks to AT&T’s “extreme willingness to help” and similar assistance from other compliant telecoms companies.

Good intentions and diligent efforts are not enough. With U.S. law enforcement agencies continuing to press for backdoors into secure systems and breakable encryption, the problem will only get worse.

People will always have to have “access to internal systems”. Trustworthy, competent people, to be sure, but people with human frailties and fallibility. Perfect privacy and security is impossible. All we can do is vigorously accept personal responsibility for individual privacy and security, and resist anyone’s claim of greater need or superior authority.

Telecoms, data center infrastructure infiltrated, Bloomberg stories say, mystery deepens despite denials

by Steve Blum

Taken at face value, a pair of articles on Bloomberg by Jordan Robertson and Michael Riley details how Chinese government intelligence agencies snuck tiny chips into computer servers used by Amazon and Apple, and by at least one major U.S. telecoms company. The devices – as small as the tip of a pencil – could be used to listen to communications going in and out, or to dive deeper into those systems.

If true, Bloomberg’s reporting means that the Chinese government, and possibly other intelligence agencies and criminal groups, have a backdoor that leads deep into U.S. telecoms and data processing infrastructure. It is flatly denied by some U.S. government security officials, by Apple and Amazon, and, according to a story by Jason Koebler, Joseph Cox, and Lorenzo Franceschi-Bicchierai on Motherboard, by most major U.S. telecoms companies…

Motherboard has reached out to 10 major US telecom providers, and the four biggest telecoms in the US have denied to Motherboard that they were attacked: In an email, T-Mobile denied being the one mentioned in the Bloomberg story. Sprint said in an email that the company does not use SuperMicro equipment, and an AT&T spokesperson said in an email that “these devices are not a part of our network, and we are not affected.” A Verizon spokesperson said: “Verizon’s network is not affected.”

A CenturyLink spokesperson also denied that the company is the subject of Bloomberg’s new story. A Cox Communications spokesperson said in an email: “The telecom company referenced in the story is NOT us.” Comcast also said it’s not the company in the Bloomberg story.

Charter Communications and Frontier Communications, two of California’s biggest telecoms companies, aren’t on the “not me” list, but that might be the result of poor response by their press relations people or, less likely, because they weren’t contacted by Motherboard.

Although Bloomberg’s stories have been disputed by U.K. intelligence agencies, their U.S. counterparts have been silent, as is common practice. Which leaves the door open to uncomfortable speculation: they could have discovered the backdoors and be taking advantage of them too. And if they can, so can other national governments and criminal organisations. Unfortunately, U.S. government spy agencies put a higher priority on their own access to cracked systems than on defending public cyberspace.

Until this mystery is solved, we’ll have to cope with the possibility that our data centers and telecoms networks are hopelessly compromised.

Quickest way to defeat cyber security is to not engage it

by Steve Blum

Newsflash! Bad software development practices cause bad results. That’s the gist of a press release issued by Appthority, an IT security company specialising in the mobile enterprise sector.

What Appthority found isn’t a particular revelation. Developers will often hard code their own login credentials into apps while writing and debugging early versions, just to keep things simple. If they forget to remove that data before moving into beta testing and launch phases, it’s there for the taking. And exploiting.
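The mistake is easy to picture. A minimal sketch of the anti-pattern, and one safer alternative, is below – the variable names and the environment-variable approach are illustrative assumptions on my part, not anything taken from Appthority’s report:

```python
import os

# The anti-pattern: credentials baked into source while debugging.
# Anyone who unpacks the shipped app can read them.
# API_USER = "dev_admin"   # hypothetical hard-coded login
# API_PASS = "hunter2"     # forgotten before release

# A safer pattern: pull credentials from the runtime environment (or a
# platform keystore), so nothing secret ships inside the binary itself.
def get_api_credentials():
    user = os.environ.get("API_USER")
    password = os.environ.get("API_PASS")
    if not user or not password:
        raise RuntimeError("API credentials not configured")
    return user, password
```

Mobile platforms offer purpose-built keystores that are better still, but even the environment-variable version keeps the secret out of the distributed package.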

And that’s what Appthority claims it found in hundreds of mobile applications, including “an app for secure communication for a federal law enforcement agency”.

The core problem is developer laziness. It’s tempting for a coder to take shortcuts while developing an app, with the sincere intent of cleaning things up later. Except later never comes. With apps often the work of a single person or a small team, quality control checks are sparse – a problem not confined to small shops, by the way. Right now, it’s up to the stores – Apple’s App Store and Google Play, primarily – to do the final QC work. They’re effectively the last line of defence and I’d bet they’re taking a look at how they can better target this particular problem.

One measure they should consider is barring repeat offenders from their developer programs. It’s easy to make a mistake out of ignorance, but failing to learn from the experience is pure stupidity.

As far as high security applications go, it’s up to the end user to confirm that an app meets spec. A lack of IT talent and, even more importantly, work ethic is an increasingly worrisome problem at the federal level, at least judging by the most recent GAO report.

Appthority says that it notified the companies most involved, but there are still 170 affected apps “which are live in the official app stores today”. It didn’t release a list of the apps, though, so there’s no way of knowing whether any are sitting on your phone now.

Federal agencies ignore cyber security while breaches continue

by Steve Blum

Cyber security at federal agencies continues to be so bad that the Government Accountability Office is throwing up its hands and saying we’ve already told you what needs to be done, so just do it…

While federal agencies are working to carry out their [Federal Information Security Modernization Act]-assigned responsibilities, they continue to experience information security program deficiencies and security control weaknesses in all areas including access, configuration management, and segregation of duties. In addition, the inspectors general evaluations of the information security program and practices at their agencies determined that most agencies did not have effective information security program functions. We are not making new recommendations to address these weaknesses because we and the inspectors general have previously made hundreds of recommendations. Until agencies correct longstanding control deficiencies and address our and agency inspectors general’s recommendations, federal IT systems will remain at increased and unnecessary risk of attack or compromise.

The report is a good primer on cyber security threats and best practices. It includes some telling examples. The Internal Revenue Service’s website allowed access to private data, using personally identifiable information about taxpayers that’s available elsewhere. In another breach, thousands of treasury department documents walked out the door with a former employee…

Concurrent with a new policy that restricted employees’ use of removable media devices to prevent users from downloading information onto the devices without approval and review, the agency began reviewing employee downloads to removable media devices. During the review, it identified a significant change in download patterns for a former employee in the weeks before the employee’s separation from the agency. The former employee had downloaded approximately 28,000 files that may have contained controlled unclassified information onto two encrypted external thumb-drive devices. As of October 2016, the agency had been unable to recover the devices storing the files.

The next time a federal agency demands a back door into private sector platforms or encryption systems, this report accompanied by a simple no should be all the answer that’s required.

NSA shares blame with criminals for massive ransomware attack

by Steve Blum

Cybercriminals successfully penetrated more than 200,000 computer systems in 150 countries in a continuing attack that began late last week. The initial assault was unwittingly blocked by a security blogger who triggered an off switch while trying to figure out what was going on. But that didn’t help systems that were already infected – it can still spread from computer to computer within a network – and a new version, without the kill switch, is reported to be already out and running wild.

The ransomware encrypts data on infected networks, and demands a bitcoin payment of $300 to free it up.
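The accidental off switch worked because the malware checked a hard-coded, unregistered domain before doing anything else; once the blogger registered that domain, the lookup started succeeding and the code stood down. A rough sketch of that logic, with a placeholder domain rather than the real one:

```python
import socket

# Illustrative reconstruction of a WannaCry-style kill switch check.
# The real malware queried a specific hard-coded domain; this is a
# placeholder in the reserved .invalid TLD, which never resolves.
KILL_SWITCH_DOMAIN = "some-killswitch-domain.invalid"

def kill_switch_tripped(domain=KILL_SWITCH_DOMAIN):
    try:
        socket.gethostbyname(domain)
        return True   # domain resolves: someone registered it, stand down
    except socket.gaierror:
        return False  # unresolvable: the malware's original "keep going" state
```

Registering the domain flipped every subsequent infection’s check from False to True – which is why it stopped new infections but did nothing for machines already encrypted.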

It did not have to happen. The ransomware exploited a flaw in Microsoft’s Windows operating system that was 1. known to the U.S. National Security Agency and 2. leaked into the public domain earlier this year. It gives the lie to the claims of the NSA, FBI and other national security and law enforcement agencies that they can be trusted to safeguard and wisely use software and encryption backdoors, as the Washington Post’s Brian Fung explains…

The NSA leak in April showed that even those vulnerabilities thought to be under control by responsible state actors can find themselves on the black market. The story of Wanna Decryptor, ultimately, is the story of nearly all weapons technology: Eventually, it will get out. And it will fall into the wrong hands.

“These attacks show that we can no longer say that vulnerabilities will only be used by the ‘good guys,’ ” said Simon Crosby, the co-founder of Bromium, a California-based computer security firm. Crosby likened the unauthorized leak of the NSA’s hacking tools to “giving nuclear weapons to common criminals.”

The NSA’s conduct was irresponsible. When it discovered the Windows exploit, it should have notified Microsoft so that the vulnerability could be fixed immediately. Instead, it kept open a backdoor into millions upon millions of computers and networks – one that criminals would eventually have found and used, even if the NSA hadn’t managed its own security so incompetently.

Trump broadband policy boots up slowly

by Steve Blum

The first day of Donald Trump’s presidency wasn’t the blockbuster Day One he promised during the campaign. D-Day is Monday in his reckoning. That’s when he says he’ll start pounding the beach with the heavy guns of executive orders, although the door is open for weekend maneuvers and he took a few ranging shots immediately after taking the oath of office.

Following a custom established by Ronald Reagan, Trump sat down in the President’s Room in the U.S. Capitol and took his first actions as president. He signed a stack of mostly cabinet level nominations, along with one proclamation that declares a “national day of patriotism”. Ajit Pai, or whomever Trump plans to anoint as chair of the Federal Communications Commission – permanent or interim – didn’t make the cut. Presumably, that decision will wait until Monday.

Other actions included an executive order regarding ObamaCare and memos that were sent to executive departments ordering regulatory and hiring freezes. Few details were available, but under normal circumstances such orders would not be sent to the FCC, since it’s nominally an independent agency. For now though, I make no assumptions.

If nothing else, Trump offered a clue about where telecommunications ranks on his infrastructure priority list, or rather, where it doesn’t. In his inaugural speech, which he apparently wrote himself, Trump said “we will build new roads and highways and bridges and airports and tunnels and railways all across our wonderful nation”.

No mention of fiber or conduit or Pony Express stations.

A complete makeover of the White House website appeared within seconds of Trump’s swearing in. The new site briefly appeared as a redirect, and then quickly settled down into its proper whitehouse.gov location. Initial content was sparse, and looked familiar to anyone who has browsed the transition team’s website. The official presidential Twitter account – @POTUS – was also handed over.

The new administration posted policy briefings on half a dozen issues, none of which mention an infrastructure spending plan, let alone broadband. But the online world is clearly on the Trump administration’s military radar, with one position paper calling out cyberwarfare as “an emerging battlefield” and making the development of “defensive and offensive cyber capabilities” a priority.

A known cyber threat is no threat to those who know it

by Steve Blum


Vermont municipal electric utility employees read the cyber security alert jointly published by the FBI and the federal homeland security department, and did what it suggested: check their computers for the specific type of malware detailed in the report. According to a press release from the City of Burlington’s Electric Department…

U.S. utilities were alerted by the Department of Homeland Security (DHS) of a malware code used in Grizzly Steppe, the name DHS has applied to a Russian campaign linked to recent hacks. We acted quickly to scan all computers in our system for the malware signature. We detected the malware in a single Burlington Electric Department laptop not connected to our organization’s grid systems. We took immediate action to isolate the laptop and alerted federal officials of this finding.

There are three important takeaways here. First, don’t trust the first thing you hear about such events from general news outlets. The Washington Post broke the story and made it sound like the nation’s electric grid was about to come crashing down around us. Not so. It was a single, properly isolated, if perhaps improperly used, laptop. Nothing to see here. Move along.

Second, when malware or bugs are reported running around loose, check to see if your system has been compromised. No one is going to do it for you.
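That kind of self-check doesn’t require exotic tooling. Assuming the alert supplies indicator hashes – which such advisories commonly do, though the specific values here are placeholders of mine – a minimal scan looks something like this:

```python
import hashlib
from pathlib import Path

# Hypothetical indicator set: SHA-256 values published in an alert.
# The value below is the SHA-256 of an empty file, used as a stand-in.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_tree(root):
    """Walk a directory tree and flag files matching a known-bad hash."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in KNOWN_BAD_SHA256:
            hits.append(str(path))
    return hits
```

Real endpoint tools add signature databases, memory scanning and behavioral checks, but the principle – compare what’s on your systems against published indicators – is the same one Burlington’s staff applied.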

Third, and most importantly, this kind of information has to be released quickly and fully by law enforcement and security agencies as soon as they discover it. They can’t wait until it turns into an international controversy, as Russia’s cracking of Democratic Party computers did. Or until they themselves have no more use for the exploit. And they certainly can’t continue to demand that technology companies deliberately weaken products in order to make their lives easier and, in doing so, our lives less secure.

The only way to fight clandestine cyber attacks – state sponsored or not, good guys or bad – is to expose the attackers and their weapons to the full light of day.

FBI wants network administrators to tighten security, up to a point

by Steve Blum

Crackers working for the Russian government broke into the computer system of “a U.S. political party” during the last election cycle. That’s the unsurprising top line conclusion of a joint report issued by the federal homeland security department and the FBI. Two separate teams working for Russian intelligence agencies phished more than a thousand party functionaries and eventually gained access to administrator level privileges on the target system.

Beneath that top line, though, lurks a fascinating, and ironic, description of how state-sanctioned crackers can penetrate workaday IT networks maintained by corporations and government agencies, and what can be done to stop them.

It’s worth reading, although the first item on the report’s list of top seven good security practices seems like a no brainer: keep your software up to date…

Patch applications and operating systems – Vulnerable applications and operating systems are the targets of most attacks. Ensuring these are patched with the latest updates greatly reduces the number of exploitable entry points available to an attacker. Use best practices when updating software and patches by only downloading updates from authenticated vendor sites.

The irony lies in the fact that the FBI and other law enforcement and national security agencies not only routinely exploit such software vulnerabilities instead of quickly and publicly squashing such bugs, but also want technology companies to build back doors and weaken encryption to make that job easier.

You can’t have it both ways, as a recent congressional report pointed out (albeit with much handwringing over the need to try). Either the FBI and its fellow travellers are working 24/7 to plug security holes for everyone, or they’re playing on the same team as the Russian, Chinese and other state sponsored cyber spies who are routinely, and correctly, accused of subverting democratic processes and stomping out personal liberty at every opportunity.

Mobile OS security gains strength as a selling proposition

by Steve Blum

They mind their own business.

A reason for Sailfish’s existence, and perhaps even for the $12 million investment it received earlier this year, is becoming clearer. It’s an alternative mobile operating system – a competitor to Android and iOS – that arose from the ashes of Nokia’s MeeGo operating system, which was scrapped when Microsoft bought Nokia’s handset business.

But it didn’t buy everything, and the Finnish engineers who stayed behind started a new company, Jolla, and kept working on it. And now they’ve found a big customer in the Russian government. According to a press release from Jolla…

Sami Pienimaki, CEO of Jolla Ltd. comments: “Sailfish OS development in Russia is an important part of Jolla’s wider agenda, aiming to power various countries’ mobile ecosystems. Our solution is based on open source code and contribution models with partners, which makes it possible to ramp up local systems effectively in 6 months. We have now done this in Russia with a local partner and using this experience we are looking forward to ramping up similar projects in other countries.”

In Russia, Sailfish OS is the only mobile operating system, which has been officially accepted to be used in governmental and government controlled corporations’ upcoming mobile device projects.

Customers in China and South Africa – two other countries that don’t put complete trust in the developed world’s good intentions – are also reported to be giving Sailfish a close look.

Sailfish’s selling proposition is security, and it makes good on that promise in a couple of different ways. First, it’s open source, which means anyone who installs it can inspect the code for bugs and gain a level of confidence that there are no backdoors or otherwise compromised encryption systems, as with the BlackBerry OS or as the U.S. government seeks for iOS and Android.

Second, Finland has strong privacy laws. It’s why Turing Robotics, a tiny mobile phone maker that also aims for the security minded side of the market, moved its mobile phone operations there from California.

FCC approves stricter consumer privacy rules for ISPs and telcos

by Steve Blum

Secure shopping.

The Federal Communications Commission voted 3 to 2 along party lines yesterday to implement privacy requirements for Internet service providers. If your ISP wants to, say, sell your web browsing history to Facebook, it will need to get your permission first. Facebook, on the other hand, will still be running under the Federal Trade Commission’s looser rules, since it’s an edge provider and isn’t regulated by the FCC.

We don’t know what the rules actually say – that’s a secret, despite the open vote – but a revised summary released afterwards clears up a few outstanding questions.

The post-vote summary was largely identical to the pre-vote summary released earlier this month. Grammatical tweaks aside, there were some changes to reporting requirements, more details about what ISPs can do with your private data for their own purposes, and a promise to “address mandatory arbitration requirements in contracts for communications services” early next year. The concern is that the legal boilerplate ISPs flash at customers unfairly restricts their legal rights. The summary implies that subscribers will be able to take advantage of the FCC’s existing dispute resolution process, regardless of whether a mandatory arbitration clause is in effect.

ISPs will have to get positive, opt-in permission from customers to share or sell sensitive information to third parties. The definition of sensitive information remains the same, and includes precise geolocation data, web browsing and app usage history and the content of communications, as well as things like social security numbers and medical information.

Non-sensitive information, such as a customer’s service tier, is assumed to be shareable unless customers specifically ask that it not be. In some cases, for example if an ISP wants to sell extra services to a subscriber or do routine things like send bills or troubleshoot a line, that consent is inferred regardless. Or so it seems – it’ll be interesting to read the actual text of the rules to see how the FCC proposes to draw those lines.
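One way to picture the two consent regimes the summary describes – this is my reading of the summary, not the FCC’s actual rule text, and the category names are illustrative – is as a simple decision function:

```python
# Toy model of the summary's consent regimes: sensitive data needs
# explicit opt-in; non-sensitive data is shareable unless the customer
# opted out; routine service functions carry inferred consent either way.
SENSITIVE = {
    "geolocation", "browsing_history", "app_usage",
    "communications_content", "ssn", "medical",
}
ROUTINE_USES = {"billing", "troubleshooting", "service_offers"}

def may_share(data_type, purpose, opted_in=False, opted_out=False):
    if purpose in ROUTINE_USES:
        return True            # consent inferred for routine functions
    if data_type in SENSITIVE:
        return opted_in        # opt-in regime: silence means no
    return not opted_out       # opt-out regime: silence means yes
```

The interesting disputes will be at the boundaries – which purposes count as routine, and which data counts as sensitive – which is exactly where the actual rule text will matter.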

Telephone companies will have to play by the same rules – the FCC said that call records and such are also considered sensitive information.

Absent legal and procedural challenges – not a good assumption, actually – the meat of the new privacy rules will take effect in a little over a year, with small ISPs given two years to comply.