How fast is a terabit a second?

Most of us are now used to the Internet. The ability to send and receive email and to browse and download web pages has its fans and its detractors, but the technology is here to stay. One criticism concerns the value of the content on the Internet, though my guess is that less time is wasted there than in watching TV.

However, a common and valid technical criticism of the Internet concerns its performance. Users frequently find its reliability and responsiveness unsatisfactory. The departments of Computer Science and Electrical and Electronic Engineering at UCL are world-leading research centres addressing these problems. Our research centres on the application of advanced switching techniques, and of the software to control such systems, to provide faster and more robust transmission.

In the past, Internet performance was limited partly by the high price of communications capacity. Recently, after a steady drop in the price of capacity due to intense competition, we find that the bottlenecks of the information highway are the intersections rather than the number of lanes on the roadways. Deregulation has brought the emergence of new businesses from old, such as satellite and terrestrial analogue and digital TV, and Internet Service Providers. We expect the use of relatively simple digital access links from the home and office to the network, and the widespread deployment of new transmission technologies, including fibre for telephony and cable TV trunks, new copper techniques, and new wireless technology for digital mobile phones and satellite communications, to allow users to reach this ever faster communications matrix.

The basic measure of information is the binary digit, or "bit", and in communications the standard measure of effective capacity is bits per second (bps). To get a feel for the capacity needed for typical Internet communication, consider transmitting this article by electronic mail, or retrieving it from the web. This article is around 200 lines of text, or 1000 words, or 6000 letters, which takes something like 50,000 bits to store digitally. To transmit the article as fast as you can read it (say 10 words per second) takes 100 seconds, needing only 500 bps, but to transmit it as fast as the fastest printer could print it (i.e. in 1 second) might require a communications link running at 50,000 bps. This is just about within the reach of the typical current Internet user: for around 5 pounds per month, modem access over traditional phone lines to "backbone" networks can provide this performance. Interestingly, this is about the same performance as is needed for digital speech, low-rate video, and modest "AM radio" quality music.

However, the same copper phone lines can be used quite differently to deliver much higher data rates. By removing the old telephone equipment at each end of the wire, which places special limits on the use of the copper, it is possible to use smart electronic techniques to increase this to 1 to 10 Mbps (20 to 200 times as much). This article would then take between a twentieth and a two-hundredth of a second to send; put another way, you could retrieve 20 to 200 articles a second, or perhaps a whole book in a second. If several people shared such a link in turn, it would be more than sufficient for their typical needs. The networks we are now experimenting with in the labs at UCL are capable of a terabit a second on a single fibre, which is a thousand times a gigabit a second, or one million megabits a second. These can support millions of users quite easily. Cable TV and digital satellite transmission offer other pathways to fast access to the Information Superhighway. Researchers in the UCL Computer Science and EE departments are working with industrial partners such as Nortel, BT, Cable London and Eutelsat to make this type of technology cheap and efficient enough to deploy to the millions of homes and offices that will need it.
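For readers who like to check the arithmetic, here is a back-of-the-envelope sketch in Python (the article size and link speeds are the round estimates from the paragraphs above, not measurements):

    # Figures from the paragraphs above, in round numbers.
    ARTICLE_WORDS = 1000
    ARTICLE_BITS = 50_000              # ~6000 letters at roughly 8 bits each

    reading_time_s = ARTICLE_WORDS / 10      # at 10 words per second: 100 s
    print(ARTICLE_BITS / reading_time_s)     # 500.0 bps keeps up with a reader

    # Time to send one article, and articles per second, at each link speed.
    for name, bps in [("modem / fastest printer", 50_000),
                      ("bare copper, low end",    1_000_000),
                      ("bare copper, high end",   10_000_000),
                      ("terabit fibre",           10**12)]:
        print(name, ARTICLE_BITS / bps, "s,", bps // ARTICLE_BITS, "articles/s")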

UCL CS was a pioneer on the Internet, having been connected since 1973! The Internet can be implemented over any transmission technology. However, it operates best when implemented as "close to the wire" as possible, with a computer as the key controlling entity directly connected at every point in the path. If a large number of users have access links capable of carrying millions of bits per second, how can the poor network cope? Let's see how fast it must be to support, say, 1% of the population using this kind of data rate to reach each other for digital video walls, gaming, and whatever other applications come along. There are about 35 million users in the UK, so 1% of them makes 350,000 simultaneous streams of 1 million bits per second. This is around 350 Gbps (G for giga, 10**9) total, or about a third of a terabit (10**12). How can this much data be routed and switched to the right place at the right time? Nick McKeown at Stanford has a working prototype of the answer, called the "Tiny Tera". This is a router based on a telecommunications digital switch design: essentially, a special very fast digital switching fabric is built that can take all the data in and route it to the right output port without incurring any extra delays. The forwarding decisions need to be made about 10**8 times per second. UCL EE are working on optical technology which will extend far beyond this, collaborating with CS researchers on how to provide efficient and effective interfaces for today's electronic computers to control tomorrow's optical switches!
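The aggregate figure is easy to reproduce. A small Python sketch follows; the average packet size used at the end is an assumption for illustration, not a figure from the article:

    # Aggregate demand, in the round numbers used above.
    users_uk        = 35_000_000
    active_fraction = 0.01
    stream_bps      = 1_000_000          # 1 Mbps per stream

    aggregate_bps = users_uk * active_fraction * stream_bps
    print(aggregate_bps)                 # 3.5e11, i.e. 350 Gbps

    # Assuming an average packet of ~4000 bits (an assumed value), the
    # fabric must make a forwarding decision roughly this often:
    print(aggregate_bps / 4000)          # ~8.75e7 per second, order 10**8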

Once upon a time, this type of decision making seemed as if it would need special-purpose hardware logic. However, a Pentium processor is now in the labs (and will be a commodity-priced device next year) that runs at a clock speed of 1 GHz. Dividing its billion or so instructions per second by the 10**8 forwarding decisions per second needed means there is something of the order of 10 instructions per packet of data to decide where the packet goes, without even using special hardware. Recent work at Microsoft suggests that this is not only adequate, but that it should be relatively easy to build low-cost systems containing reasonable numbers of such processors.
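The instruction budget is just the clock rate divided by the decision rate; a one-line sketch, assuming roughly one instruction per clock cycle:

    clock_hz     = 10**9     # the 1 GHz processor mentioned above
    decisions_hz = 10**8     # forwarding decisions per second, from earlier

    # Assuming roughly one instruction per clock cycle:
    print(clock_hz / decisions_hz)       # 10.0 instructions per packet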

All this new technology is all very well, but isn't it supposed to save us travel? And isn't it supposed to allow us to live in a pleasant environment in Pembrokeshire or the Western Isles and Highlands instead of the hurly-burly (sorry, hustle and bustle) of the inner city? Yes; however, the cost of deploying higher speed access is very much distance-related. In other words, access (whether at all, or at speed) will always lag the further you are from the centre of things. Nevertheless, it is already possible to telework very effectively. LEARNET (the London East Anglia Research Network) is an ultra-fast network facility which connects BT, UCL, Essex and Cambridge universities, looking into how people will use such capacity (including remote teaching and learning) and what kinds of quality and pricing mechanisms can be deployed on these 21st-century technologies. Should there be a charging scheme to deter heavy use, at the cost of deploying a charging system, or is it cheaper to provide extra capacity than to send a bill? Only the future can tell, but UCL will have a large role in that future.