GEOFF HUSTON holds a B.Sc. and an M.Sc. from the Australian National University. He has been closely involved with the development of the Internet for the past two decades, particularly within Australia, where he was responsible for the initial build of the Internet within the Australian academic and research sector. Huston is currently the Chief Scientist in the Internet area for Telstra. He is also the Executive Director of the Internet Architecture Board. He was an inaugural Trustee of the Internet Society and served as Chair and Secretary of the Board of Trustees in that period. He has been a member of the Executive Council of the Asia Pacific Network Information Centre. He is the author of a number of books on Internet technology and ISP operation.

Precis: The Trashing of the Commons

Geoff Huston

The Internet has often been compared to the Commons, where a communal resource was owned by no one, yet was used to the benefit of all. It is not the concept of the commons itself that has become entrenched in our vocabulary, but the "tragedy of the commons," in which the unmanaged common resource was abused to the point of destruction. This tragedy of the commons sounds all too familiar for the Internet.

The Internet Commons is quickly turning from a valued medium for communal activity into a hostile wasteland. We are seeing an inexorable increase in the levels of unsolicited email. Reports indicate that 60% of all email carried on the Internet is spam of one form or another. For many individuals whose mail address has been public for some years, spam is now running at some 500 messages per day. That's double the rate of some five months ago, and the trend continues sharply upward. What will email look like when the rate of spam is ten times current levels, or higher? It may only be a matter of time before the medium is destroyed by this relentless form of abuse.

There are also various forms of attack on the Internet, ranging from hostile programs embedded in mail messages to direct attacks on the wire. Having your computer system fatally corrupted and losing the entire contents of your file system is not just a remote possibility. Every computer system openly connected to the Internet is being continually probed, sniffed, checked out, and tested for vulnerabilities. And it would appear that pretty much every user has fallen prey to this continual assault more than once.

How much Internet traffic is hostile? How much of the traffic on the network is either initial attempts to probe the vulnerability of remote systems or the output of infected zombies sending out a further torrent of digital noise? How rapidly can software vendors deliver a continuous sequence of patches and updates to counter the efforts of others to exploit vulnerabilities in these systems? How disruptive is the combination of a virus and spam, where the infected host starts to send out virus-infected messages, masquerading as the local system's user, mailing them to every party listed in the local user's mail contacts, sometimes even going to the extent of borrowing fragments of stored mail in order to look like genuine mail? And maybe these are not the central questions. Perhaps the more worrisome question is what the Internet will look like when this traffic increases by a factor of ten or more over current levels.
How much will this form of abuse collectively cost us, both financially and in terms of an ever-rising sense of impotent frustration?

The commons of the Internet falls prey to such hostile abuse. Internet service providers are increasingly being pushed into an undesirable position. In order to provide 'normal' service to their clients, they have to provision their systems to manage the massive overloads caused by these waves of attack, rather than constructing systems dimensioned for 'normal' use. There have been a number of well-reported incidents in which large public mail server systems have been unable to cope with the torrent of junk mail generated by infected systems spewing out vast quantities of infected mail. The forced response has been to spend additional resources to increase the capacity of these systems to cope with this overhead while still attempting to pass genuine traffic through without hindrance. Whether your role is provisioning sufficient capacity in DNS servers to handle not only the normal traffic load but also the additional load imposed by various forms of attack and abuse, or operating a public mail service that must cope with fluctuations in load where attack peaks can impose overload conditions orders of magnitude greater than genuine message-processing rates, you are facing a common problem. We are now dimensioning our servers to handle abuse, not genuine use.

Technical approaches to the problem have so far proved ineffectual. Aggressive attempts by suppliers to fix vulnerabilities often expose those vulnerabilities to subsequent attack, since the user base tends to lag well behind in installing the latest set of updates. Attempts to automate much of this update process have in turn made the update systems themselves the target of attacks, often exposing vulnerabilities that are then used as the basis of further attacks.

What are we doing about it? As with the commons, individual actions in the face of this assault can offer only some degree of individual mitigation, again at the expense of the common resource. Perhaps it is the end-to-end model of the Internet that is failing here, and perhaps we need to consider a networked environment where trust is mediated between endpoints by network-based services. Perhaps it is time to look at applications in a new light and understand how they can use proxy servers and agents within the network to provide additional assurance to the ultimate end points of a transaction of the bona fides of the other party and the integrity of the transaction.

Unfortunately, this is not entirely good news for the Internet Commons as we've come to know it. All of this is a step back from the original model of a simple switching network with a capable and agile collection of end systems engaging in a peer-to-peer communication environment. As we retreat into our walled gardens of limited trust, install guards at the gates, and control the perimeters with attack dogs, the Internet commons may fall into further neglect. Perhaps intermediary-assisted communications systems are not technically required, but socially they have become a simple imperative.