This article explores the threat of cybercrime and how hardware is evolving to better resist cyberattacks. It also covers the pros and cons of security by design versus security by obscurity, and the practical issues involved in open-source secure silicon designs.
The cost of cybercrime
Cybercrime was predicted to cost the world $10.5 trillion in 2025, a figure growing by 10% annually. Major businesses regularly appear in the news following cyberattacks that leave them unable to run their operations, which has brought cybersecurity to the top of the agenda of governments, businesses and individuals.
As the Internet of Things takes off and millions of embedded devices controlling core infrastructure provide a large potential attack surface, the problem is only going to get worse, and governments have already reacted by introducing new legislation such as the EU’s 2024 Cyber Resilience Act.
Similarly, governments are aware of the security threats created by the availability of quantum computers, and are already mandating the availability of post-quantum cryptography in future electronic systems.
Electronics vendors have reacted to both customer demand and government pressure, and for years they have been incorporating security features into their products.
Hardware security has a cost (e.g. in silicon area or an extra device on the PCB), and how far designers are willing to invest in it depends on the market and application where the product will be used. However, the trend is to invest much more in hardware security than was done in the past.
Basic security features in electronic products
Most electronic products implement some degree of security. The basic security you should expect from any connected device covers:
- Encryption keys programmed in the foundry through fuses. These keys cannot be read by software; they can only be used to encrypt or decrypt data through hardware blocks such as an AES engine.
- Secure boot and secure firmware update. This feature ensures that only authentic software from a trusted source (who has the right cryptographic key) can be used to boot the system, and provides a mechanism to update such firmware as required.
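The verify-before-boot flow described above can be sketched in a few lines. This is a hypothetical illustration, not OpenTitan code: real secure boot uses asymmetric signatures (e.g. RSA or ECDSA) verified against a public key anchored in ROM or fuses, but here an HMAC stands in for the signature so the sketch runs on the Python standard library alone.

```python
import hashlib
import hmac

# Stand-in for a device-unique key held in fuses; in real hardware this
# value is never readable by software, only usable by crypto blocks.
ROOT_KEY = b"device-unique-key-from-fuses"

def sign_firmware(image: bytes) -> bytes:
    """Vendor side: tag a firmware image (stand-in for real signing)."""
    return hmac.new(ROOT_KEY, image, hashlib.sha256).digest()

def secure_boot(image: bytes, tag: bytes) -> bool:
    """Device side: agree to boot only if the image authenticates."""
    expected = hmac.new(ROOT_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time compare

firmware_v1 = b"bootloader + application v1"
tag = sign_firmware(firmware_v1)

assert secure_boot(firmware_v1, tag)            # authentic image boots
assert not secure_boot(b"tampered image", tag)  # modified image is rejected
```

A secure firmware update is then just shipping a new image with a fresh tag: the device applies the same check before accepting it, so only images from the key holder can ever run.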
These features are important because they provide a way to restore the system even after it is hacked: without them, compromised boot code would render the system inoperable and it would have to be sent back to the factory. In the better scenario where you find a vulnerability in your software before it is exploited, secure firmware update lets you patch the software to remove the vulnerability and prevent a future attack.
These security features form the core of a “root of trust”: the part of the system that is trusted not to have been compromised, and which must therefore never be compromised. In the past the root of trust was implemented in software as part of the boot code, but that code was the target of heavy cyberattacks and was often cracked. As a result, most chips today rely on a hardware root of trust, which we strongly advocate as a foundational technology that every computer system should have.
Secure enclaves and hardware root of trust
As the complexity of SoCs increases it gets harder and harder to keep them secure. As a result, most complex chips now incorporate a secure enclave. For example, Darjeeling, the integratable top level of OpenTitan®, and COSMIC can be used as secure enclaves.
A secure enclave is an “island” (a subsystem) in the SoC which is optimized for security. It keeps the device’s cryptographic keys and other secrets (e.g. biometric data to authorize access) and it is responsible for the chip’s boot process. The secure enclave is only accessible from the main CPU(s) through mailboxes, instead of the usual shared memory, to better insulate it against a cyberattack.
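The mailbox pattern above can be modelled in software. The sketch below is purely illustrative (the `SecureEnclave` class and its message format are invented for this example, not an OpenTitan interface): the point is that the host posts requests to a mailbox and reads responses back, and no call exists that returns the key itself.

```python
import hashlib
import hmac
import queue
import threading

class SecureEnclave(threading.Thread):
    """Toy model of a security island reached only through mailboxes."""

    def __init__(self):
        super().__init__(daemon=True)
        self.inbox = queue.Queue()   # host -> enclave requests
        self.outbox = queue.Queue()  # enclave -> host responses
        self._key = b"secret-key-inside-the-enclave"  # never leaves this object

    def run(self):
        while True:
            op, payload = self.inbox.get()
            if op == "mac":  # use the key without ever exposing it
                mac = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
                self.outbox.put(mac)
            else:
                self.outbox.put("error: unknown op")

enclave = SecureEnclave()
enclave.start()

# The host CPU can request cryptographic operations...
enclave.inbox.put(("mac", b"message to authenticate"))
print(enclave.outbox.get())  # a MAC over the message, computed inside

# ...but the mailbox protocol offers no way to read the key out.
```

Constraining all traffic to a narrow message interface, rather than shared memory, is what makes it practical to reason about (and defend) everything that can reach the enclave.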
The next significant step in hardware security is the introduction of a secure element. An example is the Trusted Platform Module (TPM), now standard in laptops and desktops. A hardware root of trust extends protection to a broader range of attacks, particularly attacks from people with physical access to the hardware and the lab equipment needed to attempt to hack it.
Examples of those attacks include:
- Tearing down the device: removing the top of the package, and using a microscope to work out the hardware design, or micro-probes to measure signals on the actual silicon
- Supply chain attacks: replacing chips, tampering with chips or the PCB during the manufacturing or shipment of the product
- Side channel analysis attacks: running different software and exploring the response of the hardware to them (e.g. timing, power consumption, electromagnetic radiation) to work out the value of the cryptographic keys
- Fault injection attacks: using electromagnetic radiation, lasers or other equipment to try to toggle bits in the silicon, change its behaviour and expose the cryptographic keys
Hardware root of trust devices like those based on the Earl Grey top level in OpenTitan are designed specifically to deal with all of the above attacks.
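The timing variant of side channel analysis is easy to demonstrate in software. In the sketch below (illustrative only, and far simpler than attacking real silicon), a naive byte-by-byte comparison returns as soon as two bytes differ, so its running time reveals how many leading bytes of a guess are correct; a constant-time comparison such as `hmac.compare_digest` examines every byte regardless.

```python
import hmac

SECRET = b"0123456789abcdef"  # stand-in for a secret an attacker guesses at

def naive_check(guess: bytes) -> bool:
    """Leaky comparison: exits at the first mismatching byte."""
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:  # early exit: time taken depends on the match length
            return False
    return True

def constant_time_check(guess: bytes) -> bool:
    """Comparison whose duration does not depend on where bytes differ."""
    return hmac.compare_digest(guess, SECRET)

# Both agree on the answer; only the naive one leaks via timing.
assert not naive_check(b"0123000000000000")
assert constant_time_check(b"0123456789abcdef")
```

An attacker who can time `naive_check` precisely can recover the secret one byte at a time instead of brute-forcing the whole value, which is why hardened hardware and firmware use constant-time primitives throughout.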
Security by design, security by obscurity
Modern cryptography is based on Kerckhoffs’ principle, which states that a cryptosystem should be secure even if the entire system’s design, algorithm, and implementation are public knowledge, with the sole exception of the secret key.
This is the basis of the cryptographic algorithms everybody uses, such as AES and RSA. The security of these algorithms relies entirely on the secrecy of the key, not on keeping the algorithm hidden. The benefits of this approach are clear in terms of interoperability, quality and cost.
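Kerckhoffs' principle can be illustrated with a toy cipher whose algorithm is entirely public: a SHA-256 counter-mode keystream XORed with the plaintext. This is a sketch for illustration only (it is not AES and not production cryptography); everything about it is visible in the code, yet only the right key recovers the message.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Public algorithm: hash (key || counter) blocks into a keystream."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream derived from the key."""
    ks = keystream(key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

decrypt = encrypt  # XOR stream ciphers encrypt and decrypt identically

msg = b"attack at dawn"
ct = encrypt(b"secret key", msg)

assert ct != msg                              # ciphertext hides the message
assert decrypt(b"secret key", ct) == msg      # the right key recovers it
assert decrypt(b"wrong key", ct) != msg       # a wrong key yields garbage
```

The attacker is assumed to know `keystream` and `encrypt` completely; the system stands or falls on the secrecy of the key alone, which is exactly the property Kerckhoffs' principle demands.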
At lowRISC® our whole approach is based on Kerckhoffs’ principle and we have adopted it in the design of OpenTitan. We do not believe that the design of a secure chip can be kept secret indefinitely – a determined organisation with enough money and resources can find a way to steal it. Instead, by being open we can fully benefit from contributions of project members, friendly organisations and individuals to keep improving the security of the design.
That said, for some parts of the design there is little value in openness: for example, the details of anti-tampering analog IP, which specify exactly the conditions under which the device detects a hardware attack. This is a decision for the silicon vendor making the chip; adding some degree of obscurity here makes the device more secure with few disadvantages.
Practical considerations in secure open-source silicon
Designing secure silicon in the open has drawbacks. For example, an open design repository might inadvertently reveal vulnerabilities to potential attackers, and open-source systems could be susceptible to sabotage.
We believe that leveraging the expertise of companies like lowRISC is valuable to realise the benefits of open source. Our approach rests on three pillars:
- Transparency: not only in the actual design and its verification infrastructure, but also in the processes we follow to accept or reject submissions, and report vulnerabilities
- Quality: by adopting strong digital and formal verification and the latest techniques against side channel analysis, fault injection and other attacks. Also by being vigilant and carrying out the necessary in-depth code and security reviews on submissions
- Ecosystem: by engaging with security experts, friendly hackers, experienced silicon vendors and product users who can help us continuously improve the design and take it to market