No security capability operates exactly as intended. Even with perfect data, perfect planning, and perfect foresight, small differences between our assumptions and reality quickly add up to unpredictable situations. Security faces the proverbial butterfly flapping its wings in Brazil and producing a tornado in the United States.
The term "butterfly effect" was coined by Edward Lorenz, a meteorologist and the father of chaos theory. It all started when the limitations of computing led to limitations in forecasting. It’s a pattern that still plays out today, and one that leads some to point to the need for chaos engineering.
Edward Lorenz was working on one of the first desktop computers: the Royal McBee LGP-30. Desktop in the sense that the computer was, in fact, the size of a desk. It also cost nearly half a million dollars in today’s US currency. We’re talking state-of-the-art vacuum tube technology. A teletype machine, the Friden Flexowriter, provided both input and output. It printed at a glacial ten characters per second.
The constraints of this machine inspired Lorenz. But I’m getting ahead of myself.
So there Lorenz was, modeling the weather. To save memory, he printed the results to charts as he ran the calculations. At ten characters a second, this was tedious. To save time, he printed only three decimal places.
The LGP-30 would hum and pop while it calculated a value to six decimal places. The Flexowriter would bang and punch out the result to three decimal places. Calculate 0.573547 and print 0.574. Again and again, line by line, while Lorenz waited.
This shouldn’t have been a big deal. The differences between the calculated results and printed values were quite small. But when Lorenz retyped the numbers and reran the models, he noticed something extraordinary. Weather on the original chart and the new chart would track for a day or two. But pretty soon, they’d differ widely, unexpectedly. What was once a calm day suddenly turned into a tornado. All due to the tiny differences in the source data. Edward Lorenz had discovered chaos theory.
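You can see the same effect with a few lines of code. Lorenz’s actual model was a set of weather equations; the sketch below uses the logistic map, a textbook chaotic system, purely as a stand-in. The starting value mirrors the truncation example above; the map and its parameter are illustrative assumptions, not Lorenz’s program.

```python
# Two runs that differ only by rounding the starting value to three
# decimal places, iterated through a simple chaotic system.

def logistic(x, r=3.9):
    """One step of the logistic map, chaotic for r near 4."""
    return r * x * (1 - x)

full = 0.573547            # the value the machine kept internally
rounded = round(full, 3)   # the value printed to three decimals: 0.574

for step in range(1, 31):
    full = logistic(full)
    rounded = logistic(rounded)
    if step % 5 == 0:
        print(f"step {step:2d}: full={full:.6f}  "
              f"rounded={rounded:.6f}  diff={abs(full - rounded):.6f}")
```

Run it and the two columns agree at first, then drift apart entirely. The original chart and the retyped chart, in miniature.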
“Complexity. It’s extremely difficult to predict all the outcomes of one seemingly small change,” David Lavezzo of Capital One wrote in the book Security Chaos Engineering. “Measurement is hard.” And even when we have metrics, which we rarely do, these small changes compound and lead us into unforeseen territory.
You can’t just rely on the temperature numbers predicted at the beginning of the week. You have to actually go outside. See if you need a jacket. See if you should be wearing shorts. The same is true of security. We can’t rely on our long-range forecast. We need to check the reality on the ground. Regularly. From there, adapt according to our principles.
We future-proof our security architecture by choosing versatility. We design for adaptability by prioritizing principles over rules-based approaches. But when we get to implementation, we should expect that we’ve missed something. Expect that people and applications and devices and butterflies will behave in ways a few decimal places beyond what we had considered.
We need some applied chaos to test and harden our implementation. The emerging domain of security chaos engineering is providing some useful techniques. Inject some evidence. Change some settings. Run some exploits. Validate that the security controls continue to operate. Security chaos engineering provides a way to explore the unexpected.
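As one concrete illustration of that loop, here is a hedged sketch of a security chaos experiment: confirm steady state, inject a change, verify the control still operates, then roll back. The endpoint and the fault helpers are hypothetical placeholders, not any particular tool’s API.

```python
# A sketch of a security chaos experiment. All names here (the test URL,
# the inject/revert helpers) are assumed placeholders for your environment.
import urllib.request
import urllib.error

TEST_URL = "https://staging.example.com/admin"  # hypothetical protected endpoint

def control_blocks_unauthenticated_access() -> bool:
    """Steady-state check: an unauthenticated request should be rejected."""
    try:
        urllib.request.urlopen(TEST_URL, timeout=5)
        return False                          # request succeeded: control failed
    except urllib.error.HTTPError as err:
        return err.code in (401, 403)         # rejected as expected
    except urllib.error.URLError:
        return True                           # blocked at the network layer

def run_experiment(inject_fault, revert_fault):
    """Inject a change, confirm the control still holds, then roll back."""
    assert control_blocks_unauthenticated_access(), "control broken before experiment"
    inject_fault()                            # e.g., push a config change in staging
    try:
        if control_blocks_unauthenticated_access():
            print("PASS: control held under the injected change")
        else:
            print("FAIL: control no longer blocks; investigate before production")
    finally:
        revert_fault()                        # always restore the original state
```

The shape matters more than the specifics: establish what “normal” looks like, change one thing on purpose, observe whether the control holds, and always put the environment back the way you found it.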
But ultimately, the takeaway from Edward Lorenz is one of humility. We simply don’t know what will come. With the data we have, we can’t predict what will happen. Decades of advances in computing since the Royal McBee LGP-30 haven’t changed this equation. When implementing security, pilot with chaos to prepare for the unforeseen.
This article is part of a series on designing cyber security capabilities. To see other articles in the series, including a full list of design principles, click here.