Steven Hofmeyr just gave a fascinating talk on immunological approaches to security. The basic idea is that you generate detectors, which are just bitstrings, starting from random patterns. Throw away the ones that match patterns in the self set. You're left with detectors for non-self. Give them finite lifetimes, and constantly regenerate them, so things stay dynamic. A key point about these detectors: they evolve within, and are specific to, local environments.
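The generate-and-censor loop can be sketched in a few lines. This is a toy: the self set, detector length, and exact-match test are all placeholder choices, not Hofmeyr's actual parameters.

```python
import random

# Hypothetical "self" patterns -- bitstrings describing normal activity.
SELF = {"0110", "1010", "0011"}
DETECTOR_LEN = 4

def random_detector():
    """Start from a random bitstring pattern."""
    return "".join(random.choice("01") for _ in range(DETECTOR_LEN))

def generate_detectors(n):
    """Negative selection: throw away candidates that match self,
    keeping only detectors for non-self."""
    detectors = set()
    while len(detectors) < n:
        candidate = random_detector()
        if candidate not in SELF:   # exact match, for simplicity
            detectors.add(candidate)
    return detectors
```

In a running system each detector would also carry a finite lifetime, with expired detectors replaced by fresh random candidates run through the same censoring step, which is what keeps the repertoire dynamic.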
The real-world experiment: try to protect a LAN. Characterize patterns of communication as triples of source, target, and service (e.g. SMTP). Each datapath is a string. Detectors are strings of the same length as these datapath triples (e.g. 184.108.40.206, 220.127.116.11, smtp). The test monitored 50 computers on a LAN; each computer had a set of 100 detectors. False positives ran at fewer than 2 per day. Events detected: port scanning, address-space probing, stealth probing, failed intrusions, and successful intrusions.
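One way to picture this: encode each (source, target, service) triple as a bitstring, and compare detectors against traffic with a partial-match rule. The encoding below and the r-contiguous-bits rule are illustrative assumptions, not necessarily the exact scheme from the experiment.

```python
def triple_to_bits(src_ip, dst_ip, service_port):
    """Encode a datapath triple as one bitstring:
    32 bits per IP address plus 16 bits for the service port."""
    def ip_bits(ip):
        return "".join(f"{int(octet):08b}" for octet in ip.split("."))
    return ip_bits(src_ip) + ip_bits(dst_ip) + f"{service_port:016b}"

def r_contiguous_match(detector, sample, r=8):
    """Partial matching: detector fires if it agrees with the sample
    in at least r contiguous bit positions."""
    run = 0
    for a, b in zip(detector, sample):
        run = run + 1 if a == b else 0
        if run >= r:
            return True
    return False
```

Partial matching is what lets a modest repertoire (100 detectors per machine) cover a huge space of possible non-self datapaths.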
Real immunological systems suggest more sophisticated strategies:
- The B-cell adaptive response. Cells in your lymph adapt to recognize pathogens according to Darwinian selection. We can imagine using a genetic algorithm to evolve our detectors.
- Migration of detectors. In immunological systems, cloning and distributing detectors is a rapid-response mechanism to a fast-spreading threat -- like a worm.
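The genetic-algorithm idea might look something like this: score detectors by how much anomalous traffic they've matched, keep the best, and refill the population by cloning survivors with mutation. The fitness function and parameters here are assumptions for the sake of illustration.

```python
import random

def mutate(detector, rate=0.05):
    """Point-mutate a bitstring detector at the given per-bit rate."""
    return "".join(
        b if random.random() > rate else random.choice("01")
        for b in detector
    )

def evolve(detectors, fitness, keep=0.5):
    """One generation of Darwinian selection: rank by fitness
    (e.g. count of non-self matches), keep the top fraction,
    and refill by cloning + mutating survivors."""
    ranked = sorted(detectors, key=fitness, reverse=True)
    survivors = ranked[: max(1, int(len(ranked) * keep))]
    children = [
        mutate(random.choice(survivors))
        for _ in range(len(detectors) - len(survivors))
    ]
    return survivors + children
```

Cloning survivors is also the hook for the migration idea: a detector that proves useful on one machine could be copied, with variation, to its neighbors.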
What about Schneier's issue of diversity vs. monoculture? Two points. First, there's more diversity than you might think: systems vary by patch level and configuration profile, and their vulnerability varies accordingly. That said, there is also a need to manufacture more diversity in software, and there's research on having compilers do this. Mass-customizing software, with varying patterns of no-ops, might be one defense against buffer overflows. Maybe software needs to evolve introns -- that is, the mysterious inactive stretches of DNA.
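A toy illustration of that compiler idea, with instructions modeled as strings: each "build" sprinkles no-ops at random points, so code layout differs from copy to copy, and an exploit tuned to one copy's offsets misfires on another. Everything here is a stand-in, not how a real diversifying compiler works.

```python
import random

def diversify(instructions, nop="NOP", p=0.3, seed=None):
    """Mass-customization sketch: insert no-ops at random points so
    every build has the same behavior but a different layout."""
    rng = random.Random(seed)
    out = []
    for ins in instructions:
        while rng.random() < p:  # maybe pad before each instruction
            out.append(nop)
        out.append(ins)
    return out

prog = ["LOAD", "ADD", "STORE", "RET"]
build_a = diversify(prog, seed=1)
build_b = diversify(prog, seed=2)
# Both builds compute the same thing; their layouts differ.
```

The no-ops play the role of introns: inactive filler that doesn't change what the program computes, but makes each copy a slightly different target.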
All this was a hopeful counterpart to Bruce Schneier's brutally cold assessment. Biological systems are incredibly robust. It's hard to make them fail; they tend toward health. That's clearly not true of computers, software, and networks. There will always be risk, and the need to manage it, but that needn't be the whole bleak story. Learning how to make systems inherently healthy is a great new challenge with, let's hope, a bright future.
Former URL: http://weblog.infoworld.com/udell/2002/05/15.html#a241