The ISP Column
An occasional column on things Internet


10 Years Later
June 2008


Geoff Huston

In this article I'd like to present a personal perspective on the evolution of the Internet over the last decade, highlighting my impressions of what has worked, what has not, and what has changed over this period.

It has been an extraordinary decade for the Internet, encompassing an economic boom and an ensuing bust that would rate up there with history's finest episodes of exuberant mania, a comprehensive restructuring of the entire global communications enterprise, and a set of changes that have already altered the way in which each of us now works and plays. That's some decade!

By 1998 any lingering doubts about the ultimate success of the Internet had been thoroughly dispelled. The Internet was not just a research experiment any longer, or an intermediate waystop on the road to adoption of the Open Systems Interconnect (OSI) framework. There was nothing else left standing in the data communications landscape that could serve our emerging needs for data communications. IP was now the communications technology of the day, if not the coming century, and the industry message at the time was a clear one that said: "adopt the Internet into every product and service or imperil your entire future in this business". No longer did the traditional telecommunications enterprises view the Internet with some polite amusement or even overt derision. It was now time to acknowledge that they had wrongly ignored the Internet in the mid-90s, and to scramble desperately to be part of this revolution in one of the world's major activity sectors. The largest enterprises in this sector, the old world ex-monopoly telcos, had been caught wrong-footed by one of the biggest changes in the industry for many decades, and this time the concurrent wave of deregulation and competition meant that the communications industry's entire future was being handed over to the most agile and flexible Internet players. By 1998 the Internet had, finally, made it into the big time. The job was apparently done, and the Internet had prevailed.

But the story was not over. Communications continues to drive our world, and the Internet continues to evolve and change. What has happened in the last decade of the Internet? What aspects of Internet technology have changed, and why?

The evolutionary path of any technology can often take strange and unanticipated turns and twists. At some points simplicity and minimalism can be replaced by complexity and ornamentation, while at other times a dramatic cut-through exposes the core concepts of the technology and removes layers of superfluous additions. The technical evolution of the Internet appears to be no exception, and contains these same forms of unanticipated turns and twists.

Rather than offer a set of unordered observations about the various changes and developments over the past decade, I'll use the IP protocol model as a template, starting with the underlying transmission media, then looking at IP, the transport layer, then applications and services, and closing with a look at the business of the Internet.

The Transmission Media Layer

It seems like it was in an entirely different earlier lifetime, but the Internet Service Provider business of 1998 was still centrally involved in the technology of dial-up modems. The state of the art in modem speed had been continually refined, from 9600bps to 14.4kbps, to 28.8kbps and, finally, to 56kbps, squeezing every last bit out of the phase-amplitude space contained in an analogue 3kHz voice circuit. Modems were the bane of an ISP's life. They were capricious, constantly being superseded by the next technical refinement, unreliable, difficult for customers to use and, on top of all that, slow! Almost everything else on the Internet was forced to be tailored to download reasonably quickly over a modem connection. Web pages were carefully composed with compressed images to ensure a rapid download, and plain text was the dominant medium as a consequence.

The evolution of access networks has seen a shift away from modems to a number of digital access methods, including DSL, cable modems and high speed wireless services. The copper pair of the telco network has proved surprisingly resilient, and DSL has been able to achieve speeds of tens of megabits per second through this network, with hundred megabit systems now in prospect. Whether this surprising longevity of the copper pair is a result of the telcos' continuing ownership of this access infrastructure and a consequent residual monopoly position in access infrastructure, or just an interim holding position while the search continues for a viable business model capable of underwriting the costs of deployment and use of open fibre-based access networks, is a matter of speculation. In any case, the result so far has been a technology refinement effort that has extracted far greater levels of data performance from the venerable copper loop infrastructure than was ever considered possible even a decade ago.

Not all forms of Internet access were based on dial-up in 1998. ISDN was in use in some places, but it was never cheap enough as a retail service to take over as the ubiquitous access method for the Internet. There were also access services based on Frame Relay, X.25 and various forms of digital data services. At the high end of the speed spectrum there were T-1 access circuits clocked at 1.5Mbps, and T-3 circuits clocked at 45Mbps.

If you were an ISP you leased circuits from a telco. In 1998 the ISP industry was undergoing a general transition of its trunk IP infrastructure from T-1 circuits to T-3 circuits. The upgrade path was not going to stop there, but squeezing even more capacity from the network was now proving to be a challenge. 622Mbps IP circuits were being deployed, although many of these were constructed from four 155Mbps ATM circuits operated in parallel, with router load balancing used to share the IP load across them. Gigabit circuits were just around the corner, and the initial exercises of running IP over 2.5Gbps SDH circuits were being undertaken in 1998.

In some ways 1998 was a pivotal year for IP transmission. Until this time IP was still just another data application, positioned as just another customer of the telco's switched circuit infrastructure. This telco infrastructure was designed and constructed primarily to support telephony. From the analogue voice circuits to the 64K digital circuit through to the higher speed trunk bearers, IP had been running on top of the voice network's infrastructure. Communications infrastructure connected population centres where there was call volume, and as long as the total round trip delay of the system was kept under 300ms or so, telephony worked just fine. The Internet had different demands. Internet traffic patterns did not mirror voice traffic, and IP performance is sensitive to every additional millisecond of delay. Constraining the Internet to the role of an overlay placed on top of a voice network was starting to show signs of stress, and by 1998 things were changing. The Internet had started to make ever larger demands on transmission capacity, and the driver for further growth in the network's infrastructure was now not voice, but data. It made little sense to provision an ever larger voice-based switching infrastructure just to repackage it as IP, and by 1998 the industry was starting to consider just what an all-IP high speed network would look like, building an IP network all the way from the photon in a fibre optic cable through to the design of the Internet application.

At the same time the fibre optic systems were changing with the introduction of Wave Division Multiplexing (WDM). Older fibre equipment with electro-optical repeaters and PDH multiplexors allowed a single fibre pair to carry around 560Mbps of data. WDM allowed a fibre pair to carry multiple channels of data using different wavelengths, with each channel supporting a data rate of up to 10Gbps. Channel capacity in a fibre strand was between 40 and 160 channels using Dense WDM (DWDM). Combined with the use of all-optical amplifiers, the most remarkable part of this entire evolution in fibre systems was that a cable system capable of an aggregate capacity of a terabit can be constructed today for much the same cost as a 560Mbps cable system of the mid '90s. That's a cost efficiency improvement of roughly a thousandfold in a decade. The drive to deploy these high capacity DWDM fibre systems was never based on expansion of telephony. The explosive growth of the industry was all to do with supporting the demand for IP. So it came as no surprise that, at the same time as there was increasing demand for IP transmission, there was a shift in the transmission model: instead of plugging routers into telco switching gear and using virtual point-to-point circuits for IP, we started to plug routers into wavelengths of the DWDM equipment, and operate all-IP networks in the core of the Internet.
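
As a rough back-of-the-envelope check on those numbers, the following sketch (in Python, using only the figures quoted above, and simply taking the cost side as "much the same", as described) works out the aggregate DWDM capacity and the resulting improvement factor:

    # Rough capacity arithmetic for the fibre systems described above.
    # All figures are those quoted in the text; this is illustrative only.
    pdh_capacity_bps = 560e6                 # mid-90s system, per fibre pair
    channels_low, channels_high = 40, 160    # DWDM channels per fibre strand
    per_channel_bps = 10e9                   # up to 10Gbps per wavelength

    dwdm_low = channels_low * per_channel_bps      # 400 Gbps
    dwdm_high = channels_high * per_channel_bps    # 1.6 Tbps

    print(f"DWDM aggregate: {dwdm_low/1e9:.0f}Gbps to {dwdm_high/1e12:.1f}Tbps")
    print(f"Improvement at similar cost: {dwdm_low/pdh_capacity_bps:.0f}x to "
          f"{dwdm_high/pdh_capacity_bps:.0f}x")
    # => roughly 700x to 2,900x, or around a thousandfold in a decade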

In terms of transmission, the last 10 years have seen the network migrate from an overlay system of kilobit per second access with multi-megabit trunks, operating as a customer of the telco switched network, to today's picture of comprehensive IP networking with access networks that deliver megabits per second and multi-gigabit network trunks, or a thousand-fold increase in basic network capacity in this period.

The Internet's demand for capacity continues, and we are seeing work on standardising 40G and 100G transmission systems in the IEEE at the moment, and the prospect of terabit transmissions is now taking shape for the Internet.

The Internet Layer

If transmission has seen dramatic changes in the past decade then what has happened at the IP layer over the same period?

The glib answer is "absolutely nothing"! But that answer would be glossing over a large amount of activity in this area. We've tried to change many parts of IP in the past decade, but, interestingly, none of the proposed changes have managed to gain any significant traction out there in the network, and IP today is largely no different from the IP of a decade ago. Mobility [1], Multicast [2] and IP Security (IPSec) [3] remain poised in the wings, still awaiting adoption by the mainstream of the Internet.

Quality of Service (QoS) was a hot topic in 1998, and it involved the search for a reasonable way for some packets to take the fast path while others took a more leisurely stroll through the network. We experimented with various forms of signalling, packet classifiers, queue management algorithms and interpretations of the Type of Service bits in the IPv4 packet header, and we explored the QoS architectures of Integrated and Differentiated Services in great detail. However, QoS never managed to get a toehold in mainstream Internet service environments. In this case the Internet took a simpler direction: rather than installing additional mechanisms in the network, in the host protocol stack and even in the application in order to ration whatever capacity you have, the response to insufficient network capacity has been simply to expand the network to meet the total level of demand. So far the simple approach has prevailed in the network, and QoS remains largely unused [4].
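
For what it's worth, the hooks for this kind of service differentiation are still present in host protocol stacks. The following is a minimal sketch, assuming a Unix-like platform that supports the IP_TOS socket option, of an application marking its traffic with the Expedited Forwarding code point; whether any network element along the path actually honours the marking is, as noted, another matter entirely.

    import socket

    # The DiffServ code point for Expedited Forwarding (EF) is 46; it occupies
    # the top six bits of the old IPv4 Type of Service byte.
    DSCP_EF = 46
    TOS_VALUE = DSCP_EF << 2            # 0xB8

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

    # Datagrams sent on this socket now carry the EF marking, which a
    # DiffServ-aware network could use to place them on a priority queue.
    sock.sendto(b"probe", ("192.0.2.1", 5000))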

We've experimented with putting circuits back into the IP datagram architecture in various ways, most notably with the Multi-Protocol Label Switching (MPLS) technology [5]. This technology used the label swapping approach previously used in X.25, Frame Relay and ATM virtual circuit switching systems, and created a collection of virtual paths from each network ingress to each network egress across the IP network. The idea was that in the interior of the network you no longer needed to load a complete routing table into each switching element, and instead of performing a destination address lookup you could perform a much smaller, and hopefully faster, label lookup. This performance differentiator did not eventuate, and switching packets using the 32-bit destination address in a fully populated forwarding table continued to offer much the same cost efficiency at the hardware level as virtual circuit label switching. When you add the overhead of another level of indirection in terms of the operational management of these MPLS overlay circuits, MPLS becomes another technology that so far has just not managed to achieve traction in mainstream Internet networks. However, MPLS is by no means a defunct technology, and one place where it has enjoyed considerable deployment is the corporate service sector, where many Virtual Private Networks [6] are constructed using MPLS as the core technology, steadily replacing a raft of legacy private data systems that used X.25, Frame Relay, ATM, SMDS and switched Ethernet over the past decade.
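
The intended contrast between the two lookup styles can be sketched in a few lines of Python (a purely illustrative toy; the labels, prefixes and interface names are invented):

    import ipaddress

    # Label swapping: an exact-match lookup keyed on the incoming label.
    label_table = {
        1001: ("out-if-3", 2044),       # (outgoing interface, outgoing label)
        1002: ("out-if-1", 3090),
    }

    def label_switch(in_label):
        return label_table[in_label]    # a single exact-match lookup

    # Destination-based forwarding: longest-prefix match on the full table.
    prefix_table = {
        ipaddress.ip_network("203.0.113.0/24"): "out-if-3",
        ipaddress.ip_network("203.0.0.0/10"):   "out-if-1",
        ipaddress.ip_network("0.0.0.0/0"):      "out-if-0",
    }

    def longest_prefix_match(destination):
        addr = ipaddress.ip_address(destination)
        matches = [p for p in prefix_table if addr in p]
        return prefix_table[max(matches, key=lambda p: p.prefixlen)]

    print(label_switch(1001))                    # ('out-if-3', 2044)
    print(longest_prefix_match("203.0.113.7"))   # 'out-if-3'

As the paragraph above notes, dedicated forwarding hardware turned out to perform the prefix match just as cheaply as the label lookup, which removed much of the original rationale.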

There was of course one change at the IP level of the protocol stack that was meant to have happened in the past decade, but has not, and that's IP version 6 [7]. In 1998 we were forecasting that we would have consumed all the remaining unallocated IPv4 addresses by around 2008. We were saying at the time that, as we had completed the technical specification of IPv6, the next step was that of deployment and transition. There was no particular sense of urgency, and there was the comfortable expectation that with a decade to go we did not need to ring the emergency bell or raise any alarms. And this plan has worked, to some extent, in that today's popular desktop operating systems, Windows, MacOS and Unix, all have IPv6 support. But other parts of this transition have been painfully slow. It was only a few months ago that the root of the Domain Name System (DNS) was able to answer queries using the IPv6 protocol as transport, and provide the IPv6 addresses of the root nameservers. There are very few mainstream services out there that are configured in a dual stack fashion, and the prevailing view is still that the case for IPv6 deployment just hasn't reached the necessary threshold yet. Current usage measurements for IPv6 point to a level of IPv6 deployment of around one thousandth of the IPv4 network, and, perhaps more worrisome, this metric has not changed by any appreciable level over the past four years.

So what about that projection of IPv4 unallocated pool exhaustion by 2008? How urgent is IPv6 now? The good news is that IANA still has some 16% of the address space in its unallocated pool, so IPv4 address exhaustion is unlikely to occur this year. The bad news is that the global consumption rate of IP addresses is now at a level such that the remaining address pool can fuel the Internet for less than a further three years, and the exhaustion prediction is now some time around 2010 to 2011.
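
A back-of-the-envelope version of that projection, using the 16% figure above and an assumed consumption rate of around 230 million addresses a year (an illustrative round number of the order being observed at the time), runs as follows:

    # Rough projection of IANA IPv4 pool exhaustion. The remaining fraction is
    # the figure quoted above; the consumption rate is an assumed round number.
    total_ipv4 = 2 ** 32                          # about 4.29 billion addresses
    remaining = total_ipv4 * 0.16                 # about 687 million addresses
    assumed_rate_per_year = 230e6                 # assumed consumption rate

    years_left = remaining / assumed_rate_per_year
    print(f"Remaining pool: {remaining/1e6:.0f} million addresses")
    print(f"Projected exhaustion: around {2008 + years_left:.1f}")
    # => roughly three years, or some time around 2010 to 2011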

So why haven't we deployed IPv6 more seriously yet? And if we are not going to deploy IPv6, then what's the alternative?

Of all the technical refinements to IP that have occurred, one technology that received little fanfare when it was first published has enjoyed massive deployment over the past decade, and that's the technology of Network Address Translators (NATs) [8]. Today NATs are ubiquitous. It seems like every home access unit, every corporate firewall, every data centre, and every service includes a NAT. One measure of NAT's ubiquity is the transformation that has occurred in the application space. By 2008 applications have either adopted a strict client server approach, where the client always initiates the network transaction, or, where there is some form of peer interaction, the application is now equipped with NAT behaviour discovery, NAT binding management, application level name spaces and multi-party rendezvous mechanisms in order to allow the application to function across NATs. So far we've managed to offload the problem of looming address scarcity in the Internet onto NATs, and the really significant change at the IP level over the past decade is the default assumption about the semantics of an IP address. An IP address is no longer synonymous with the persistent identity of a remote party that anyone can use to initiate a communication, but is instead a temporary token that allows a single transaction to complete. As a consequence, most Internet services have retreated into data centres and the business of hosting services has thrived. And the change that would have preserved the coherent end-to-end architecture of the IP layer of the Internet, namely IPv6, is still waiting in the wings.
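
That change in the meaning of an address can be seen in even a toy model of a NAT's binding table, sketched below. This is a deliberate simplification; real NATs vary widely in their binding and filtering behaviours, which is exactly why applications need the discovery and rendezvous machinery described above.

    import itertools

    class ToyNat:
        """A simplified outbound-only NAT: bindings are created by outbound
        traffic, and inbound packets are admitted only if they match one."""

        def __init__(self, public_ip):
            self.public_ip = public_ip
            self.next_port = itertools.count(40000)
            self.bindings = {}   # (private ip, private port) -> public port
            self.reverse = {}    # public port -> (private ip, private port)

        def outbound(self, private_ip, private_port):
            key = (private_ip, private_port)
            if key not in self.bindings:
                port = next(self.next_port)
                self.bindings[key] = port
                self.reverse[port] = key
            return self.public_ip, self.bindings[key]

        def inbound(self, public_port):
            # Unsolicited packets to a port with no binding have nowhere to go.
            return self.reverse.get(public_port)

    nat = ToyNat("192.0.2.1")
    print(nat.outbound("10.0.0.5", 5060))   # ('192.0.2.1', 40000) - a temporary token
    print(nat.inbound(40000))               # ('10.0.0.5', 5060)   - replies get back in
    print(nat.inbound(40001))               # None - no binding, so no inbound session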

The next few years promise to be "interesting" in every sense of the word! The exhaustion of the remaining IPv4 address pool is imminent, and if we are going to comprehensively substitute IPv6 in place of IPv4 then it appears that we simply don't have enough time to achieve this before the remaining IPv4 address pool is depleted. And while NATs have conveniently pushed the problem of increasing address scarcity off the network and onto the edge devices and applications so far, it's not clear that this approach can sustain an ever-growing Internet indefinitely. We've yet to understand just what a "carrier-grade NAT" might be, and yet to understand whether it can even work in any useful manner at this level of scaling. NATs were an accidental addition to the Internet, and there is no clear idea of their role in the coming years as NAT attempts to head towards the inner core of the network as well as living at the edges.

The early '90s saw a flurry of activity in the routing space, and protocols were quickly developed and deployed. By 1998 the "standard" Internet environment was the use of either IS-IS or OSPF as a large scale interior routing protocol, and BGP-4 as the inter-domain routing protocol [9]. This picture has remained constant over the past decade. In some ways it is reassuring to see a technology that is capable of sustaining quite a dramatic growth rate, but perhaps that's not quite the complete picture.

We never quite got around to completing the specification of a "next" inter-domain routing protocol, and BGP-4 is now showing signs of stress [10]. The pool of Autonomous System (AS) numbers is forecast to run out early in 2011, and by then we will have to deploy a new variant of BGP that is capable of operating with a much larger pool of AS numbers [11]. Fortunately the technology development for BGP has been completed and an approach that allows incremental deployment has been devised, so this is not quite the traumatic transition that is associated with IPv6. But deployment is slow, and the current level of adoption of the larger AS number set is, oddly enough, comparable to that of IPv6, at a level of around one thousandth of the total AS number pool. The routing system has also been growing inexorably, and the capability of switching systems to cope with ever larger routing tables, and to do so while offering continual improvements in cost efficiency, is now looking less certain. So, once again, we appear to be re-examining routing protocol theory and practice, and looking at alternate approaches to routing that can offer superior scaling properties to BGP for the future.
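
As an aside on what "a much larger pool of AS numbers" actually means: the original AS number field is 16 bits wide, giving 65,536 values, while the extended field is 32 bits wide. The larger numbers are commonly written in the "asdot" style, splitting the value into two 16-bit halves, as the small sketch below illustrates (the example AS number is invented):

    def to_asdot(asn):
        """Render an AS number in 'asdot' notation: plain decimal for values
        that fit in 16 bits, high.low otherwise."""
        if asn < 2 ** 16:
            return str(asn)
        return f"{asn >> 16}.{asn & 0xFFFF}"

    def from_asdot(text):
        if "." not in text:
            return int(text)
        high, low = (int(part) for part in text.split("."))
        return (high << 16) + low

    print(2 ** 16)                 # 65536 possible 16-bit AS numbers
    print(2 ** 32)                 # 4294967296 possible 32-bit AS numbers
    print(to_asdot(65546))         # '1.10' - an example 4-byte AS number
    print(from_asdot("1.10"))      # 65546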

No listing of the major highlights in IP over the past decade would be complete without some mention of the perennial issue of location and identity. One of the original simplifications in the IP architecture was to place the semantics of identity, location and forwarding into a single IP address. While that has proved phenomenally effective in terms of the simplicity of applications and the simplicity of IP networks, it has posed some serious challenges when considering mobility, routing, protocol transition and network scaling. Each of these aspects of the Internet would benefit considerably if the Internet architecture allowed identity to be distinct from location. Numerous efforts have been directed at this problem over the past decade, particularly in IPv6, but so far we really haven't arrived at an approach that feels truly comfortable in the context of IP. The problem we appear to have been stuck on for the past decade is that if we create a framework of applications that use identity as a rendezvous mechanism on top of an IP layer that requires location, then how is the mapping between identity and location distributed in an efficient and suitably robust manner?

So while it is possible to observe that not much has happened at the IP level in the past decade that has managed to be deployed in the Internet, and IP is still IP, there is still a considerable agenda to tackle at the Internet layer!

The Transport Layer

A decade ago in 1998 the transport layer of the IP architecture consisted of UDP and TCP, and the network use pattern was around 95% TCP and 5% UDP. Here, as well, not much has changed in the intervening 10 years.

We've developed two new transport protocols, the Datagram Congestion Control Protocol (DCCP) and the Stream Control Transmission Protocol (SCTP) [12], which can be regarded as refinements of TCP to cover flow control for datagram streams in the case of DCCP and flow control over multiple reliable streams in the case of SCTP. However, in the world of transport-aware middleware that is today's Internet, the ability to actually deploy these new protocols in the public Internet is marginal at best. These more recent transport protocols are not recognised by firewalls, NATs and similar middleware, and as a result the prospects of widescale deployment are not good.

TCP has proved to be remarkably resilient over the years, but as the network increases in capacity, the ability of TCP to continue to deliver ever faster data rates over distances that span the globe is becoming a significant issue. There has been much work in recent times to devise revised TCP flow control algorithms that still share the network fairly with other concurrent TCP sessions, yet can ramp up to multi-gigabit per second data transfer rates and sustain that rate over extended periods [13]. At this stage much of this work is still in the area of research and experimentation, and TCP as deployed on the Internet today is much the same as the TCP of a decade ago, with perhaps a couple of notable exceptions. The latest TCP stack from Microsoft in Vista uses dynamic tuning of the receive window, a larger inflation factor for the send window in congestion avoidance where there is a large bandwidth delay product, and improved loss recovery algorithms that are particularly useful in wireless environments. Linux now includes an implementation of BIC, which undertakes a binary search to re-establish a sustainable send rate. Both of these approaches can improve the performance of TCP, particularly when driving a TCP session over long distances while trying to maintain high transfer speeds.
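
The flavour of that difference can be seen in a much-simplified sketch of the binary-search increase used by BIC, set against the classic one-segment-per-round-trip increase of standard congestion avoidance. The parameters and window sizes here are purely illustrative, and the real algorithms carry considerably more machinery.

    def standard_increase(cwnd):
        # Classic congestion avoidance: roughly one segment per round trip.
        return cwnd + 1

    def bic_increase(cwnd, w_max, s_max=32):
        """Simplified BIC-style growth: step towards the window size at which
        the last loss occurred, jumping half the remaining distance each round
        trip but never more than s_max segments at a time."""
        if cwnd < w_max:
            return cwnd + min((w_max - cwnd) / 2.0, s_max)
        return cwnd + 1      # past the old maximum: probe cautiously (simplified)

    # After a loss at a window of 8000 segments, watch each variant climb back
    # from 7800 segments towards the old operating point.
    std, bic = 7800.0, 7800.0
    for rtt in range(1, 9):
        std = standard_increase(std)
        bic = bic_increase(bic, w_max=8000)
        print(f"RTT {rtt}: standard={std:7.1f}  bic={bic:7.1f}")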

As well as extending the performance range of TCP to long haul single stream gigabit sessions, there has been considerable work in trying to make TCP operate efficiently over wireless networks. TCP assumes a largely error-free transmission system, and interprets packet loss, whether caused by congestion or by data corruption, as a signal of network congestion. This, in turn, causes TCP to reduce its sending rate, and it takes a number of round trip cycles to recover the original data transfer rate. For wireline systems with very low bit error rates this assumption is a sound one. For wireless systems with the potential for bit error bursts this assumption does not hold, and high speed TCP over wireless degrades quickly once the bit error level rises. The typical response so far has been to keep the signal to noise ratio favourable by using wireless in localised contexts for high speed, and using lower speeds when the coverage area of the wireless system increases. This is not an entirely satisfactory response, and there have been various efforts to 'tune' TCP to react in more efficient ways to bit error bursts. One promising direction is the use of the Explicit Congestion Notification (ECN) bits in the packet header, which allow a congested network element to mark packets rather than drop them, giving the sender an explicit congestion signal that is distinct from loss caused by packet corruption. Again, however, deployment of this approach in the Internet has not happened, as the necessary changes to host stacks, and the benign acceptance of the packet header bits by middleware, are not a given.
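
The marking mechanism itself is straightforward to sketch: the two low-order bits of the old Type of Service byte carry the ECN code points, and a congested router can mark an ECN-capable packet rather than discard it. The fragment below is a schematic illustration only; real active queue management decides when to mark in more subtle ways.

    # Schematic ECN handling at a congested router queue (code points per RFC 3168).
    NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11
    ECN_MASK = 0b11                    # the low two bits of the old TOS byte

    def on_congestion(tos_byte):
        """Return (new tos byte, action) for a packet meeting a congested queue."""
        ecn = tos_byte & ECN_MASK
        if ecn in (ECT_0, ECT_1):
            # ECN-capable transport: set Congestion Experienced and forward.
            return (tos_byte & ~ECN_MASK) | CE, "forward, marked CE"
        # Not ECN-capable: congestion can only be signalled by dropping.
        return tos_byte, "drop"

    print(on_congestion(0b10111000 | ECT_0))   # ECN-capable packet is marked
    print(on_congestion(0b00000000))           # legacy packet is dropped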

The Application and Service Layer

This area, unlike the transport layer, has seen quite profound changes over the past decade. A decade ago the Internet was on the cusp of portal mania, when LookSmart was the darling of the Internet boom and everyone was trying to promote their own favourite "one stop shop" for all your Internet needs. We were still using various forms of hand-compiled directories, and navigation of the Internet was still the subject of various courses and books.

By 1998 AltaVista had made its debut, and the winds of change were already making themselves felt. This change, from directories and lists to active search, completely changed the Internet. These days we simply assume that we can type any query we like into a search engine and the search machinery will deliver a set of pointers to relevant documents. And every time this occurs our expectations about the quality and utility of search engines are reinforced, and now we've moved beyond swapping URLs as pointers and are simply exchanging search terms as an implicit reference to the material. Content is also changing as a result, as users no longer remain on a "site" and navigate around it. Instead users are driving the search engines, and pulling the relevant page from the target site without reference to any other material.

Another area of profound change has been the rise of active collaboration over content, best typified by wikis. Wikipedia is perhaps the most cited example of user-created content, but almost every other aspect of content generation is also being sucked into the active user model, as YouTube, Flickr, Joost and similar services illustrate.

Underlying these changes is another significant development, namely the change in the content economy. In 1998 content providers and ISPs were eyeing each other off in a fight for user revenue. Content providers were unable to make pay per view and other forms of direct financial relationship with users work in their favour, and were arguing that ISPs should fund content. After all, they argued, the only reason why users paid for Internet access was because of the perceived value of the content that they found on the Internet. ISPs, on the other hand, took up the stance that content providers were enjoying a free ride across the ISP-funded infrastructure, and content providers should contribute to network costs. The model that has gained ascendancy as a result of this unresolved tension is that of advertiser-funded content services, and this model has been capable of sustaining a vastly richer, larger and more compelling content environment.

At the same time peer-to-peer networks have emerged, and from their beginnings as a music sharing subsystem, the distributed data model of content sharing now dominates the Internet, with audio, video and large data sets now using this form of content distribution and its associated highly effective transport architecture. Various measurements of Internet traffic have placed P2P content movement at between 40% and 80% of the network's overall traffic profile.

In many ways applications and services have been the high frontier of innovation in the Internet over the past decade. An entire revolution in the open interconnection of content elements is embraced under the generic term Web 2.0, and "content" is now a very malleable concept. It's no longer a case of "my computer, my applications, my workspace", but an emerging model where not only is the workspace for each user held in the network, but the applications themselves are part of the network, and all are accessed through a generic browser interface.

And I suppose any summary of the evolution of the application space over the last decade would not be complete without noting that, while in 1998 the Internet was still an application that sat on top of the network infrastructure used to support the telephone network, by 2008 voice telephony was just another application layered on the infrastructure of the Internet, and the Internet had even managed to swallow the entire telephone number space into its DNS, using an approach called ENUM [14].
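
The ENUM mapping itself is a simple textual transformation: take the digits of the E.164 number, reverse them, separate them with dots, and append e164.arpa; the resulting DNS name can then hold NAPTR records describing how to reach that number. A small sketch, using a purely illustrative phone number:

    def enum_domain(e164_number):
        """Map an E.164 telephone number to its ENUM domain name under e164.arpa."""
        digits = [c for c in e164_number if c.isdigit()]   # drop '+', spaces, dashes
        return ".".join(reversed(digits)) + ".e164.arpa"

    print(enum_domain("+44 20 7946 0123"))
    # => 3.2.1.0.6.4.9.7.0.2.4.4.e164.arpa
    # A DNS query for NAPTR records at this name could then return, for example,
    # a SIP URI to contact in place of making a conventional telephone call.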

The Business Layer

As much as the application environment of the Internet has been on a wild ride over the past decade, the business environment has also had its ticket on the roller coaster, and the list of business winners and losers includes some of the historical giants of the telephone world as well as the Internet-bred new wave of entrants.

In 1998, despite the growing momentum of public awareness, the Internet was still largely a curiosity. It was an environment inhabited by geeks, game players and academics, whose rites of initiation were quite arcane. As a part of the data networking sector, the Internet was just one further activity among many, and the level of attention from the mainstream telco sector was still relatively small. Most Internet users were customers of independent ISPs, and the business relationship between the ISP sector and the telco was tense and acrimonious. The ISPs were seen as opportunistic leeches on the telco industry; they ordered large banks of phone lines, but never made any calls; their customers did not hang up after 3 minutes, but kept their calls open for hours or even days at a time; and they kept on ordering ever larger inventories of transmission capacity, yet had business plans that made the back of an envelope look professional by comparison. The telco was unwilling to make large long term capital investments in additional infrastructure to pander to the extravagant demands of a wildcat set of Internet speculators and their fellow travellers. The telco, on the other hand, was slow, expensive, inconsistent, ill-informed and hostile to the ISP business. The telco wanted financial settlements and bit level accounting, while the ISP industry appeared to manage quite well with a far simpler system of peering and tiering that avoided putting a value on individual packets or flows [15]. This was never a relationship that was going to last, and it resolved itself in ways that in retrospect were quite predictable. From the telco perspective it quickly became apparent that the only reason why the telco was being pushed to install additional network capacity at ever increasing rates was the ISP sector. From the ISP perspective the only way to grow at a rate that matched customer demand was to become one's own carrier and take over infrastructure investment. And, in various ways, both outcomes occurred. Telcos bought up ISPs, and ISPs became infrastructure carriers.

All this activity generated considerable investor interest, and the rapid value escalation of the ISP industry and then the entire Internet sector generated the levels of wild-eyed optimism that are only associated with an exceptional boom. By 2000 almost anything associated with the Internet, whether it was a simple portal, a new browser development, a search engine, or an ISP, attracted investor attention, and the valuations of internet startups achieved dizzying heights. Of course one of the basic lessons of economic history is that every boom has an ensuing bust, and in 2001 the Internet collapse happened. The bust was as inevitable and as brutal as the preceding boom was euphoric. But, like the railway boom and bust of the 1840's, once the wreckage was cleared away, what remained was a viable, and indeed a valuable, industry.

By 2003 the era of the independent retail ISP was effectively over. ISPs still exist, but those that are not competitive carriers tend to operate as IT business consultants who provide services to niche markets. Their earlier foray into the mass market paved the way for the economies of scale that only the carrier industry could bring to bear on the market.

But the grander aspirations of these larger players have not been met, and effective monopoly positions in many Internet access markets have not translated into effective control over the user's experience of the Internet, or anything even close to such control. The industry was already "unbundled," with intense competition occurring at every level of the market, including content, search, applications, and hosting. The efforts of the telco sector to translate their investment in mass market Internet access into a more comprehensive control over content and its delivery in the Internet have been continually frustrated. The content world of the Internet has been reinvigorated by the successful introduction of advertiser-funded models of content generation and delivery, and this has been coupled with the more recent innovations of turning back to the users themselves as the source of content, so that the content world is once again the focus of a second wave of optimism, bordering on euphoria.

And Now?

It's been a revolutionary decade for us all, and in the last ten years the Internet has directly or indirectly touched the lives of almost every person on this planet. Current estimates put the number of regular Internet users at 19% of the world's population.

Over this decade some of our expectations were achieved and then surpassed with apparent ease, while others remained elusive. And some things occurred that were entirely unanticipated. At the same time very little of the Internet we have today was confidently predicted in 1998, while many of the problems we saw in 1998 remain problems today.

What we have today is not the technical Internet we thought we were building a decade ago. It is not a coherent end-to-end network with clear signalling across a commodity packet switching fabric, with IP as the universal adaptor, but a network that is replete with all forms of active middleware [16], from NATs to firewalls [17] and filters, including packet shapers, torrent detectors, Voice over IP (VoIP) blockers and load balancers. It is not a secure or a safe network, but one that includes a continual barrage on end hosts in the form of over a million different forms of viruses [18], worms and assorted malware [19], as well as a barrage on users in the form of torrents of spam [20]. The network is host to a litany of hostile attacks, from gigabit traffic swamping attacks to redirection, inspection, passing off and denial of service attacks [21]. The attacks are directed at links, routers [22], the routing protocols [23] [24], hosts, and applications. Our ability to effectively defend the network and its connected hosts continues to be, on the whole, ineffectual. Our level of interest in paying a premium to support highly secure systems still remains slight. But somehow we are not deterred by all this. Somehow each of us has found a way to make the Internet work for us.

I'm not sure that the next decade will bring the same level of intensity of structural change to the global communications sector, and perhaps that's a good thing, given the collection of other challenges that are confronting us all in the coming decades. At the same time I think it would be good to believe that the past decade of the Internet's development has completely rewritten what it means to communicate, rewritten the way in which we can share our experience and knowledge, and, hopefully, rewritten the ways in which we can work together on these challenges.

References

The Internet Protocol Journal has published articles on all the major aspects of the Internet's technical evolution over the past decade. To illustrate the extraordinary breadth of these articles I've included as references here only pointers to articles that have been published in IPJ.

Disclaimer

The above views do not necessarily represent the views of the Asia Pacific Network Information Centre.

About the Author

GEOFF HUSTON holds a B.Sc. and a M.Sc. from the Australian National University. He has been closely involved with the development of the Internet for many years, particularly within Australia, where he was responsible for the initial build of the Internet within the Australian academic and research sector. He is author of a number of Internet-related books, and is currently the Chief Scientist at APNIC, the Regional Internet Registry serving the Asia Pacific region. He was a member of the Internet Architecture Board from 1999 until 2005, and served on the Board of the Internet Society from 1992 until 2001.

www.potaroo.net