Information security matters. From securing military networks, to industrial control systems, to personal data, there are real human and economic costs associated with poor information security practices. To some extent, the market provides for reasonable security practices: a company has a lot to lose if its industrial control systems fail. In markets where customer data and consumer trust are valuable business assets, there is a commensurate incentive for businesses to use good security practices.
However, data is non-rivalrous: if someone steals my database, I probably still have a copy. If a company’s user data is compromised and copied, its customers may never learn about it. This creates an incentive problem for corporations. Even if a corporation knows that there has been a data breach, it doesn’t want to tell its users, because doing so will cause a loss of trust. This is a perverse situation that the market can’t solve: users can’t judge the reliability of corporations on information security, so corporations can’t meaningfully compete on this basis.
Since the market can’t deal with this situation, regulation may be required. California has an effective solution to this problem: a legal requirement to report data breaches. If a corporation loses data on a sufficient number of Californians, it has an obligation to report this to the state of California. It turns out that any major data breach in the USA includes enough Californians to qualify under this law, making California a de facto data-breach reporting venue for the country.
This law is effective. Forcing corporations to report data breaches is a good in its own right: it allows individuals whose data has been compromised to take measures to protect themselves. It also has the instrumental effect of allowing corporations to compete credibly on the basis of their data protection practices.
We can take this principle further. In an effective marketplace, a rational actor will implement a predictable level of security: exactly enough so that the probability-weighted cost of a breach is equal to the cost of the security measures. However, as indicated previously, a security breach for one actor sometimes imposes costs on another, as when a company’s breach exposes its users to risks. These externalities are not accounted for in a single actor’s security planning.
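The balance described above can be illustrated with a toy model (the numbers and the breach-probability curve here are entirely hypothetical): a rational actor picks the spending level that minimises its security spend plus the probability-weighted loss from a breach, but leaves the losses borne by its users out of the calculation.

```python
def breach_probability(spend):
    """Toy model: breach probability falls off as security spending rises."""
    return 1.0 / (1.0 + spend)

def expected_cost(spend, private_loss, external_loss=0.0):
    """Security spend plus the probability-weighted loss from a breach."""
    return spend + breach_probability(spend) * (private_loss + external_loss)

def optimal_spend(private_loss, external_loss=0.0, step=0.01, limit=100.0):
    """Brute-force search for the spending level minimising expected cost."""
    best = min((expected_cost(s * step, private_loss, external_loss), s * step)
               for s in range(int(limit / step)))
    return best[1]

# A firm facing a private loss of 100 (in some unit) from a breach...
private_optimum = optimal_spend(private_loss=100)
# ...versus the social optimum when users bear an extra 300 of harm.
social_optimum = optimal_spend(private_loss=100, external_loss=300)
print(private_optimum, social_optimum)  # the private optimum is lower
```

Under any model of this shape, the private optimum falls short of the social one: the firm rationally stops spending once further measures cost more than *its own* expected losses, even though each extra unit of security would still reduce its users' expected losses.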
Users aren’t the only ones who might be willing to pay for others to implement better security practices. Corporate systems constitute a national security and economic asset, and therefore a valid target for attack. It’s possible that the security standard the market promotes leaves corporations (and individuals) at risk from sophisticated attacks; given the recent attacks on Google’s infrastructure, this is not implausible. It’s also possible that the security threats are difficult to anticipate: state actors tend not to give precise details about their cyber-attack capabilities.
All this adds up to a situation where, just as with data breach reporting, the market does not optimise results. And as with data breach reporting, legislation can offer a remedy. Government policy could be used to penalise poor security practices and encourage good ones. This sort of motivator could shift the security/breach economic balance in favour of more secure systems.
I certainly don’t mean that there should be some sort of federal mandate about what constitutes a secure computer system. That’s a recipe for disaster, one that could never keep up with the rapidly-evolving security landscape. However, it’s much easier to define a failed security system. Imposing increased penalties for security breaches would raise the cost of a breach, encouraging better security.
However, security is hard; making good security easier to implement would go some way towards improving security practices across the board. We could use federal funds to support an NSF-like organization responsible for the development of good security software, protocols and practices. This organization’s work would help to reduce the cost of implementing and auditing security practices, and thereby promote better practices in general. Grants for security development work would encourage privately developed security work to remain in the public domain, which also improves general practice.
Of course, ubiquitously available open security is useful everywhere, not just to us and our allies. However, right now, we’re one of the most digitally-integrated nations: in civil society, our economy, and our military. We have free, open networks, due process, and all our eggs online, giving us the most to lose from cyber attacks. That means that we also have the most to gain from doing things right.