News
Computing Profession

Unintended Consequences

Artist's representation of a hardware trojan.
Current computer security research suggests the ban on Huawei will not provide the Trump administration with any kind of magic bullet that instantly provides trustworthy 5G.

In mid-May, the Trump administration banned Huawei of Shenzhen, China, from selling its technology into the American information technology and telecom sectors on the grounds that it could pose a threat to U.S. national security. The White House fears the firm's 5G mobile networks, in particular, could be used to pass commercial and military secrets to Beijing under China's sweeping 2017 National Intelligence Law, which mandates that networks pass on data to the government if asked to do so.

While Huawei consistently maintains it would never comply with such a request, doubt hangs over its ability to resist such a demand from the totalitarian state's intelligence services.

Yet Trump's ban on Huawei quickly led to unintended consequences: it emerged that Huawei's handset business could lose access to Android and app updates from U.S.-based Google for smartphones it launches in the future. Ironically, for an issue all about trust in technology, this could leave users of future Huawei phones running unpatched, insecure apps.

"It may be a ticking security time bomb," says Eoin Keary, CEO of Edgescan, a Dublin, Ireland-based cybersecurity firm.

That issue, alongside U.S. chip industry concerns over component sales lost to Huawei, and a threat from Huawei that it may close its U.S.-based research lab at the cost of 850 jobs (the company announced July 23 it was laying off more than 600 of its U.S. workers), has the administration backtracking somewhat, leaving the situation one of great confusion.

Was Trump's move any way to boost trust in technology?

As the ramifications of the attempted ban continue to unfold, current computer security research suggests it will not provide the Trump administration with any kind of magic bullet that instantly provides trustworthy 5G. The reason: technical and economic change mean new fronts are opening up in the field of covert data theft, with computer firmware and hardware itself now vulnerable to attack, potentially in ways so difficult to detect that even systems from 'trusted' suppliers could leak confidential data to adversaries.

Programmable risks

These new attacks include firmware trojans that maliciously modify the basic, built-in, boot-level control software in a system, perhaps sabotaging security by preventing software updates and vulnerability patches from being applied, and/or allowing data exfiltration. Such attacks can be aided, it turns out, by the increasing use of a popular and versatile logic circuit: the field programmable gate array, or FPGA.

In addition, the globalization of the microchip supply chain means that chips are now designed, fabricated, tested, packaged, and delivered by different low-cost, outsourced providers all over the world—with the fabs often in China, even for Huawei's more trusted rivals—providing multiple points where small, maliciously inserted circuits called "hardware trojans" could be introduced.

"Supply chains for telecommunications networks have become global and complex," admits Norman Lamb, chair of the British parliament's Science and Technology Committee, after taking evidence from the industry. "Many vendors use equipment that has been manufactured in China, so a ban on Huawei equipment would not remove potential Chinese influence from the supply chain."

Sometimes comprising just a few hundred hard-to-find transistors among millions (or billions) of devices, hardware trojans can act as service-denying kill-switches or, when triggered, leak data to attackers through covert paths dubbed side channels. Finding them is difficult: "How can we interrogate a circuit for malice when we don't trust the circuit in the first place?" asked Kenneth Plaks, a program manager at the U.S. Defense Advanced Research Projects Agency (DARPA) working on ways to obfuscate chip circuit layouts so attackers cannot modify them.
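Why are a few hundred malicious transistors so hard to find? One reason is that trojan triggers are designed to be rare: ordinary functional testing almost never exercises them. The following software model is purely illustrative (the trigger value, the chip's "legitimate function," and the secret are all invented for the sketch), but it shows why random testing is a poor detector.

```python
# Illustrative sketch only: a software model of a "rare trigger" hardware
# trojan. All names and values are hypothetical, not from any real design.
import random

TRIGGER = 0xDEADBEEF  # a rare 32-bit input pattern chosen by the attacker

def chip_output(data: int, secret: int) -> int:
    """Model of a logic block with a hidden trojan comparator inside."""
    if data == TRIGGER:       # trojan fires on exactly one input in 2**32
        return secret         # leak path (or kill-switch) activates
    return data ^ 0x5A5A5A5A  # the chip's legitimate function

# Random functional testing almost never hits the trigger:
random.seed(0)
hits = sum(1 for _ in range(1_000_000)
           if random.getrandbits(32) == TRIGGER)
print(hits)  # almost certainly 0: a million tests sample 0.02% of inputs
```

A tester comparing outputs against a specification would see correct behavior on essentially every input tried, which is why researchers pursue structural and side-channel detection instead of black-box testing.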

Like DARPA, other security teams are concerned that hardware trojans need containment, too. "Users may believe their systems are secure if they run only trusted software. However, trusted software is only as trustworthy as the underlying hardware," warned Hansen Zhang and colleagues from Princeton University at April's ACM International Conference on Architectural Support For Programming Languages and Operating Systems (ASPLOS 2019) in Providence, RI. "Even if users run only trusted software, attackers can gain unauthorized access to sensitive data by exploiting hardware errors or by using backdoors inserted at any point during design, manufacture, or deployment."

A straw in the wind

That a serious firmware vulnerability (albeit an accidental one) can be a risk was revealed the same week the White House ban on Huawei was announced. Indeed, in what was a very bad week for cybersecurity all around, vulnerabilities were revealed for Microsoft, WhatsApp, Intel, and Cisco Systems products. It was Cisco's flaw that stood out as very different, however: cyberanalysts at Red Balloon Security in New York City found a vulnerability, which they have dubbed Thangrycat, in the Trust Anchor module that Cisco uses to securely boot many of its products, such as network routers, switches, and firewalls.

In Trust Anchor, the firmware of an FPGA is stored in a flash memory chip, rather than in a read-only memory (ROM). When fed to the FPGA, this firmware "bitstream" dictates the way logic gates are connected in a Boolean circuit to ensure that, at boot time, software updates and patches are applied. However, Red Balloon found that by modifying the bitstream, they could rearrange the FPGA's logic circuit "so an attacker can remotely and persistently bypass Cisco's secure boot mechanism and lock out all future software updates," the company says.
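The underlying lesson is general: a secure-boot check is only as trustworthy as the storage holding the thing it checks against. This minimal sketch (not Cisco's actual design; the function names and data are invented) shows why an anchor held in writable memory can be subverted, while the same check backed by ROM could not.

```python
# A minimal, hypothetical sketch of why a secure-boot root of trust must be
# immutable: if the expected digest lives in writable storage, an attacker
# who can modify that storage can "bless" tampered firmware.
import hashlib

def boot_check(firmware: bytes, anchor: dict) -> bool:
    """Allow boot only if the firmware hash matches the stored anchor."""
    return hashlib.sha256(firmware).hexdigest() == anchor["expected_sha256"]

good_fw = b"legitimate firmware image"
anchor = {"expected_sha256": hashlib.sha256(good_fw).hexdigest()}

evil_fw = b"tampered firmware image"
assert boot_check(good_fw, anchor) and not boot_check(evil_fw, anchor)

# If the anchor sits in flash rather than ROM, the attacker rewrites it,
# and the tampered image now passes the "secure" boot check:
anchor["expected_sha256"] = hashlib.sha256(evil_fw).hexdigest()
assert boot_check(evil_fw, anchor)
```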

It was an important find, says Alan Woodward, a visiting professor of cybersecurity and digital forensics at the University of Surrey in Guildford, U.K. "This finding really matters. The root of trust in embedded devices quite often relies upon FPGAs. So if you can do this, you can effectively circumvent secure boot processes and you have a strong attack vector."

Andrew Tierney, a consultant with penetration testing firm PenTestPartners, in Buckingham, U.K., says Thangrycat revealed an exploitable gap in the product design. "The interesting aspects here are the mistakes that Cisco made in the implementation. It seems remiss to develop a secure boot system that doesn't provide secure boot." Most devices that do offer secure boot, he says, tend to do so using read-only data; in Cisco's design, the bitstream sat in writable flash memory.

"We haven't seen many attacks against FPGAs yet. This is possibly due to obscurity; taking a bitstream, the firmware of an FPGA, reverse-engineering it, and then modifying it is very challenging," Tierney says, adding that it's also time-consuming and expensive.

Despite the challenges and expense, researchers remain concerned about future attacks on firmware, especially in the kind of critical infrastructure that will use a litany of communications links, the kind 5G can provide as the supposed future enabler of Internet of Things applications. After all, the U.S./Israeli attack on Iran's Natanz nuclear enrichment plant using the Stuxnet worm —which allegedly shook 400 uranium centrifuges to pieces by injecting them with sabotaged motor control data—proved the viability of firmware attacks on embedded programmable logic.

A firmware trojan family tree

As a result, a team from the New York University Polytechnic School of Engineering, led by Charalambos Konstantinou, has drawn up a taxonomy of the forms firmware trojans could feasibly take to disable a sample critical infrastructure application (such as a smart power grid), so that defense mechanisms against their distribution can be established in smart-grid testbeds.

Usefully for security researchers, the team developed a raft of sample firmware trojans, too, with insertion mechanisms ranging from delivery via test ports or simple communications links to, for the really determined, money-no-object attacker, "chip-off forensics." In the latter, the top of a memory chip is removed and the die exposed, allowing data to be read, rewritten, and reinjected to scurrilous ends.

How attackers deliver the threat is also an issue. Stuxnet relied on simple social engineering, with USB sticks left in public places such as cafes and car parks near the target plant. Delivering a threat based on something like Thangrycat's mechanism, however, would be far tougher. "It's not that easy to exploit, as you need administrator access, but if you were in the equipment supply chain somewhere, that might be possible," says Woodward.

Out of sight, out of trust?

It is in the supply chain—the extended, global, highly outsourced, out-of-sight/out-of-mind semiconductor microchip one—that specialists believe hardware trojans will come from. One type of hardware trojan could simply sabotage the chip's chemistry so that connections or transistor channels burn out and the chip fails after a set time: effectively, a kill-switch on a timer. More likely, others might use a trigger signal to activate circuitry added to the chip at some stage of manufacture so that it delivers a result (a disabling kill-switch command, perhaps, or data in a memory chip, such as a stored cryptographic key).

To do that, however, attackers need to know the circuit layout so they can see, for instance, where they can place their malicious circuit elements. That is something that can be fought, Johann Knechtel and colleagues at the Tandon School of Engineering at New York University told the International Conference on Omni-Layer Intelligent Systems (COINS) on the Greek island of Crete, in early May.

In their COINS paper, the NYU team showed how a design can effectively be frozen by describing the list of logic-gate connections (known to IC designers as the "netlist") in code, then encrypting it and storing it on the chip in tamper-proof memory. At runtime, that stored code is compared to a code generated dynamically from the in-use circuit; the hashes should match if the chip is untainted.
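The control flow of that runtime check can be sketched in a few lines. Real implementations derive the runtime digest from hardware state, not from a text string, and the netlist fragments below are invented placeholders; this only illustrates the compare-against-a-golden-digest idea the paper describes.

```python
# Hedged sketch of the netlist hash-comparison idea: a digest of the
# design-time netlist is stored at manufacture (in tamper-proof memory),
# and a digest derived from the in-use circuit is checked against it.
import hashlib

def digest(netlist: str) -> str:
    """Stand-in for a hardware-derived digest of the gate connections."""
    return hashlib.sha256(netlist.encode()).hexdigest()

GOLDEN_NETLIST = "AND(g1,a,b); OR(g2,g1,c)"     # hypothetical design
golden_digest = digest(GOLDEN_NETLIST)          # stored at manufacture

def chip_untainted(observed_netlist: str) -> bool:
    return digest(observed_netlist) == golden_digest

assert chip_untainted("AND(g1,a,b); OR(g2,g1,c)")
# Any inserted trojan gate changes the digest and fails the check:
assert not chip_untainted("AND(g1,a,b); OR(g2,g1,c); XOR(t1,g2,k)")
```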

In another technique, dummy logic gates can be added to the circuit to camouflage the physical design, altering its appearance and limiting the sites where trojan creators can place things.

The NYU team is not alone. "By using a combination of techniques, our goal is to make the placement and triggering of hardware trojans more difficult and their detection easier," says DARPA's Plaks.

One of the countermeasures the Defense research unit is investigating involves placing a radio frequency (RF) intrusion sensor circuit on a chip die. Made purposely fragile, it breaks if the chip is tampered with, disabling the device.

In addition, Plaks says DARPA is investigating how the addition of trojan transistors and logic gates affects the timing of circuits, as parasitic capacitances from malicious add-ons will change it in hopefully telltale ways.
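The timing idea reduces to comparing measured path delays against a "golden" reference and flagging deviations beyond normal process variation. The sketch below is a toy, with invented delay figures and a single fixed tolerance; real detection must separate trojan-induced skew from manufacturing variation statistically.

```python
# Toy sketch of timing-based trojan detection: parasitic load from added
# trojan gates slows some signal paths. All numbers here are invented.
GOLDEN_DELAYS_NS = [1.20, 0.95, 1.43, 2.10]  # reference path delays (ns)
TOLERANCE_NS = 0.05                           # allowed process variation

def looks_tampered(measured_ns: list) -> bool:
    """Flag a chip if any path delay strays too far from the golden model."""
    return any(abs(m - g) > TOLERANCE_NS
               for m, g in zip(measured_ns, GOLDEN_DELAYS_NS))

assert not looks_tampered([1.22, 0.94, 1.41, 2.12])  # within variation
assert looks_tampered([1.22, 0.94, 1.61, 2.12])      # extra load on path 3
```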

Zhang's team at Princeton is taking another tack: it has developed TrustGuard, a sentry-like system that checks that microchips issue data only in formats expected by the design, raising an alarm if out-of-the-ordinary transmissions, such as data leaks on side channels, occur.
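A toy illustration of that sentry idea (not TrustGuard's actual mechanism, which works at the hardware level; the message formats below are invented) is an allow-list of expected output patterns, with anything else flagged as a possible leak:

```python
# Hypothetical sketch of a sentry that permits only expected output formats
# and flags everything else as a possible data leak.
import re

ALLOWED_FORMATS = [
    re.compile(r"STATUS:(OK|FAIL)"),  # invented device status message
    re.compile(r"TEMP:\d{1,3}"),      # invented telemetry message
]

def sentry_permits(message: str) -> bool:
    """Return True only if the outgoing message matches an expected format."""
    return any(p.fullmatch(message) for p in ALLOWED_FORMATS)

assert sentry_permits("STATUS:OK") and sentry_permits("TEMP:42")
# An unexpected transmission, e.g. a leaked key, is blocked and flagged:
assert not sentry_permits("KEY:deadbeefcafef00d")
```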

All these anti-trojan measures need resources, however, and so carry speed and power-drain penalties (TrustGuard, for example, reduces performance by 15%), so hardware trojan defenses remain very much works in progress, in need of improvement.

Yet if all this innovation in hardware and firmware trojan countermeasures tells us one thing, it is that trust in technology is a moveable feast. Like all of cybersecurity to date, it is an arms race, and all the manufacturer bans in the world will not guarantee systems are trustworthy now that, thanks to globalization,  device manufacture has been cast to the four winds.

Paul Marks is a technology journalist, writer, and editor based in London, U.K.

Opinion
Computing Profession: Cerf's Up

Unintended Consequences

Vinton G. Cerf

When the Internet was being developed, scientists and engineers in academic and research settings drove the process. In their world, information was a medium of exchange. Rather than buying information from each other, they exchanged it. Patents were not the first choice for making progress; rather, open sharing of designs and protocols was preferred. Of course, there were instances where hardware and even software received patent and licensing treatment, but the overwhelming trend was to keep protocols and standards open and free of licensing constraints. The information-sharing ethic contributed to the belief that driving down barriers to information and resource sharing was an important objective. Indeed, the Internet as we know it today has driven the barrier to the generation and sharing of information to nearly zero. Smartphones, laptops, tablets, Web cams, sensors, and other devices share text, imagery, video, and other data with a tap of a finger or through autonomous operation. Blogs, tweets, social media, Web page updates, email, and a host of other communication mechanisms course through the global Internet in torrents (no pun intended). Much, if not most, of the information found on the Internet seems to me to be beneficial; a harvest of human knowledge. But there are other consequences of the reduced threshold for access to the Internet.

The volume of information is mind-boggling. I recently read one estimate that 1.7 trillion images were taken (and many shared) in the past year. The Twittersphere is alive with vast numbers of brief tweets. The social media have captured audiences and contributors measured in the billions. Incentives to generate and share content abound—some monetary, some for the sake of influence, some purely narcissistic, some to share beneficial knowledge, to list just a few. A serious problem is that the information comes in all qualities, from incalculably valuable to completely worthless and in some cases seriously damaging. Even setting aside malware, DDoS attacks, hacking, and the like, we still have misinformation, disinformation, "fake news," "post-truth alternate facts," fraudulent propositions, and a raft of other execrable material often introduced to cause deliberate harm to victims around the world. The vast choice of information available to readers and viewers leads to bubble/echo chamber effects that reinforce partisan views, prejudices, and other societal ills.


The question before us is what to do about the bad stuff.


There are few international norms concerning content. Perhaps child pornography qualifies as one type of content widely agreed to be unacceptable and which should be filtered and removed from the Internet. There are national norms that vary from country to country regarding legitimate and illegitimate/illegal content. The result is a cacophony of fragmentation and misinformation that pollutes the vast majority of useful or at least innocuous content to be found on the Internet. The question before us is what to do about the bad stuff. It is irresponsible to ignore it. It is impossible to filter in real time. YouTube alone gets 400 hours of video uploaded per minute (that is 16.7 days of a 24-hour television channel). The platforms that support content are challenged to cope with the scale of the problem. Unlike other media that have time and space limitations (page counts for newspapers and magazines; minutes for television and radio channels) making it more feasible to exercise editorial oversight, the Internet is limitless in time and space, for all practical purposes.

Moreover, automated algorithms are subject to error or can be misled by the action of botnets, for example, that pretend to be human users "voting" in favor of deliberate or accidental misinformation. Purely manual review of all the incoming content is infeasible. The consumers of this information might be able to use critical thinking to reject invalid content, but that takes work, and some people are unwilling or unable to do that work. If we are to cope with this new environment, we are going to need new tools, better ways to validate sources of information and factual data, and broader agreement on transnational norms, all the while striving to preserve the freedom of speech and freedom to hear enshrined in the Universal Declaration of Human Rights.a I hope our computer science community will find or invent ways to engage, using powerful computing, artificial intelligence, machine learning, and other tools to enable better quality assessment of the ocean of content contained in our growing online universe.
