Leave the Internet of Things Alone

Don't let government regulators stifle internet innovation.

Cyber-attacks and device vulnerabilities, including the recent "WannaCry" ransomware incident, have kept us on high alert lately. As the number of internet-connected devices – collectively called the "internet of things" – continues to grow, so could the number of attack vectors resulting from poor security. While that's a valid concern, we need to make sure that our "solutions" don't merely create more problems.

One common proposal is that the government should step in to regulate connected devices and change the current liability scheme to place more pressure on those who build, test and sell software. Computer security expert Bruce Schneier made this argument recently in a New York Times article titled "What Happens When Your Car Gets Hacked?"

Schneier points out that attacks on traditional computers, like WannaCry, can be remedied relatively quickly because major technology firms have teams of capable software engineers on hand to patch bugs. But manufacturers of connected devices often lack such in-house technical know-how, he argues, so the products they sell may be less secure by default.

He believes that we simply can't trust private companies to solve this problem on their own, and that "considerable government involvement" is needed to improve security. Though short on specifics, his proposal calls for a top-down regulatory regime that would test and approve connected devices. He also wants the government to make software companies liable for vulnerabilities.

To Schneier's credit, he readily admits that this proposal would be very expensive. But there are good reasons to doubt that such government involvement would "go a long way toward improved security," as he claims.

First, it's important to note that the traditional software industry has managed to provide security without such government intervention, as Schneier himself acknowledges. This did not happen overnight. Over time, through trial and error, technology firms built out the knowledge and labor pool needed to patch software vulnerabilities quickly. After all, providing good security is an important way that businesses compete. Companies that lag behind on security take a reputational hit and are eventually left in the dust by companies that prioritize it. The process is not perfect, but it is a superior alternative to imposing a "Department of Internet Security" for devices.

The problem with creating such a federal bureaucracy is that we could expect the pace of innovation to slow considerably. Subjecting businesses to expensive pre-market approval introduces compliance costs and uncertainty that many small firms simply can't bear. This has been the case with Food and Drug Administration regulation, under which most new pharmaceuticals are introduced by a handful of huge companies that can withstand the years-long, multimillion-dollar approval process.

Furthermore, we would need to trust such an agency to have the wherewithal to promulgate appropriate security guidelines in the first place. As Mercatus Center research has pointed out, federal agencies consistently fail to meet their own security guidelines and suffer record numbers of information security breaches each year. It is hard to imagine that the federal government can improve the security of our nation's devices when it cannot even get its own house in order.

Introducing a supercharged liability scheme for software developers has major downsides as well. Software vendors can already be held liable for negligence when flaws that fall short of industry best practices expose users to harm. But raising liability risks to the point where software must be virtually perfect before release would significantly slow the introduction of new software. (Indeed, some proponents of this option admit that this is the intention!) And again, only the largest companies would likely be able to bear these costs, leading to industry consolidation, less innovation and diminished consumer choice.

Instead, policymakers should seek to empower, not punish, software developers. Where possible, agencies could promote and assist existing industry self-regulation efforts, such as the many information sharing and analysis centers that coordinate responses to cyber-attacks. Private certification and standards bodies could audit and recommend companies that provide good security, as the Institute of Electrical and Electronics Engineers and Underwriters Laboratories currently do for other technologies. And if any internet of things firm proves especially negligent, the Federal Trade Commission can investigate it for "unfair or deceptive practices," as it has already done with some connected-device companies.

There is a lot that must be done to improve our cybersecurity, but heavy-handed regulatory solutions based on worst-case thinking won't help. Ironically, Schneier himself wrote a powerful essay on worst-case scenario thinking in 2010 in which he noted that it "substitutes imagination for thinking, speculation for risk analysis, and fear for reason."

That warning is just as applicable to internet of things regulation, and it's important that we not allow fears of catastrophe to unduly burden a new technology.