Fifth Progress Report on the HICID Project,

December 1, 1998 - March 15, 1999.

Panos Gevros, Fulvio Risso, Peter T. Kirstein and Jon Crowcroft

March 20, 1999

  1. Introduction

LEARNET is now becoming a QoS testbed. The current configuration is shown and discussed in Section 2. Certain changes are still needed before the configuration is really suitable for performing experiments with QoS algorithms in the routers. We have continued to work with our laboratory testbed, and have largely completed some simple measurements of algorithms. In Section 3 we describe our local QoS testbed and the measurements we have been doing. Our progress with IPv6 is considered in Section 4, and our future plans are discussed in Section 5.

  2. The Current LEARNET QoS Testbed

Two PC routers have been installed, one at UCL-EE and one at Essex U; they would be ready for QoS activities if the configuration were suitable. The current configuration is shown in Fig. 1:

[Fig. 1: current LEARNET QoS testbed configuration (diagram not reproduced)]

Here the dotted lines indicate ATM PVCs which have been installed.

There are still some problems with the current Fig. 1 installation:

The UCL-CS (UCL-CS-PC) and Essex U (Essex-PC) CAIRN routers are connected directly to ATM switches. The ATM switches at UCL-EE (UCL-EE-ATM), UCL-CS (UCL-CS-ATM), Essex (Essex-ATM) and BT (BT-ATM) are also connected together directly. Finally, the three CISCO routers (UCL-CS-R, BT-R and Essex-R) are each connected to their local switch. This gives the requisite connectivity. In fact the local configurations in Fig. 1 are not quite accurate; for instance, on the real routes there are two ATM switches at Essex.
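The connectivity described above can be checked mechanically by expressing it as an adjacency list. A minimal Python sketch follows; the node names are those of Fig. 1, but the assumption that the four ATM switches are pairwise linked is ours and the real link set may differ:

```python
from collections import deque

# Adjacency list for the Fig. 1 topology as described in the text.
# Assumption: the four ATM switches form a full mesh ("connected
# together directly"); the actual switch-switch links may differ.
links = [
    ("UCL-CS-PC", "UCL-CS-ATM"), ("Essex-PC", "Essex-ATM"),   # CAIRN routers
    ("UCL-CS-R", "UCL-CS-ATM"), ("BT-R", "BT-ATM"),           # CISCO routers
    ("Essex-R", "Essex-ATM"),
    ("UCL-EE-ATM", "UCL-CS-ATM"), ("UCL-EE-ATM", "Essex-ATM"),
    ("UCL-EE-ATM", "BT-ATM"), ("UCL-CS-ATM", "Essex-ATM"),
    ("UCL-CS-ATM", "BT-ATM"), ("Essex-ATM", "BT-ATM"),
]

graph = {}
for a, b in links:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def reachable(src, dst):
    """Breadth-first search: is there any path from src to dst?"""
    seen, todo = {src}, deque([src])
    while todo:
        node = todo.popleft()
        if node == dst:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            todo.append(nxt)
    return False

print(reachable("Essex-PC", "UCL-CS-PC"))   # the CAIRN router path
```

Note that UCL-EE-PC does not appear: it is not yet attached to a switch, which is exactly extension 1 proposed below.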

For many purposes we would also like to investigate multiple hops and multicast. Two slight extensions to Fig. 1 are still needed to enrich the topology:

  1. Connect the CAIRN PC router (UCL-EE-PC) to the ATM switch UCL-EE-ATM;
  2. Add a CAIRN router (BT-PC) at BT.

It is already possible to set up an all-CISCO and an all-PC-router path at the ATM level. If both extensions were made, it would be possible to establish ATM VCs between any set of CISCO or CAIRN routers.
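For reference, setting up one such PVC on a CISCO router takes only a few lines of IOS configuration. The fragment below is illustrative only - the interface, VPI/VCI values and addresses are invented, and the exact commands depend on the IOS release:

```
interface ATM0.1 point-to-point
 ip address 10.1.1.1 255.255.255.252
 pvc 1/100
  encapsulation aal5snap
```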

In addition to the above, an ATM host, the HIGHVIEW video server, is connected directly to an Essex ATM switch. There are also various UCL-CS hosts attached to the CAIRN and CISCO routers at UCL-CS.

The first experiments have been carried out between a video server at Essex U and a client at UCL-CS, using both the all-CISCO topology and the all-PC-router one. Qualitatively the performance has been satisfactory; no real QoS measurements have been made yet.

 

  3. The Laboratory Testbed
    1. Introduction

      We have extended our tests on our small-scale laboratory testbed to experiment with QoS. In our experiments we use research prototypes (mainly, but not only, those under the ALTQ framework) and measure how effective the traffic management mechanisms are. The problems with the ALTQ ATM driver and its capabilities that were highlighted in the previous report have now been resolved; we are using the latest FreeBSD release (2.2.7) and the ALTQ-1.1.1 distribution.
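To give a flavour of what the CBQ experiments involve, the fragment below sketches an ALTQ-1.1-style altq.conf for CBQ on an ATM interface. The class names, percentages and interface name are invented for illustration, and the exact keyword syntax should be checked against the ALTQ distribution:

```
# CBQ on the ATM interface, 10 Mbps PVC (illustrative values)
interface pvc0 bandwidth 10M cbq
class cbq pvc0 root_class  NULL        pbandwidth 100
class cbq pvc0 def_class   root_class  borrow pbandwidth 95 default
class cbq pvc0 video_class root_class  pbandwidth 30
```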

      In the CAIRN, our colleagues are also starting to use another research prototype from Carnegie Mellon University, the Hierarchical Fair Service Curve (HFSC) scheduler, used for link sharing (like CBQ). There are some advantages in this system - not least that other colleagues are exploring its use, and CMU has offered to configure it on our routers (from the viewpoint of easy configuration it is not as well finished a product as ALTQ). We have extended our tests to include HFSC. The main part of this report concerns our experiences with CBQ; these are given much more fully by Risso and Gevros in [1], but the salient points are reported below.
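The link-sharing idea common to CBQ and HFSC can be illustrated with a toy calculation: each class receives its allocated share of the link, and a class that is allowed to borrow may take bandwidth left unused by its siblings. A minimal sketch, with an invented class tree and rates (real CBQ/HFSC scheduling is per-packet and far more subtle):

```python
# Toy model of hierarchical link sharing (the idea behind CBQ and HFSC):
# classes get their allocation, and unused bandwidth is redistributed to
# classes permitted to borrow, in proportion to their allocation.
LINK_MBPS = 10.0

# (name, allocated share of link, offered load in Mbps, may borrow?)
classes = [
    ("video", 0.30, 1.0, False),   # under its 3 Mbps allocation
    ("data",  0.70, 9.0, True),    # wants more than its 7 Mbps
]

def distribute(classes, link):
    used = {}
    spare = 0.0
    for name, share, load, _ in classes:
        alloc = share * link
        used[name] = min(load, alloc)
        spare += max(0.0, alloc - load)
    # borrowers split the spare capacity (single parent, one round)
    borrowers = [(n, s, l) for n, s, l, b in classes if b and l > used[n]]
    total_share = sum(s for _, s, _ in borrowers)
    for name, share, load in borrowers:
        extra = spare * share / total_share
        used[name] = min(load, used[name] + extra)
    return used

print(distribute(classes, LINK_MBPS))
```

Here the data class ends up carrying 9 Mbps: its own 7 Mbps plus the 2 Mbps the video class leaves unused.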

       

    2. Test environment

      We have continued our tests with the test environment below:

      [Figure: laboratory test environment (diagram not reproduced)]

      Here Ammon and Kiki are PCs running FreeBSD, Truciolo is a PC running Microsoft Windows, and Thud is a Solaris Sparc-5.

      The main test environment consists of two PCs (Ammon and Kiki) running FreeBSD, connected by a 155 Mbps ATM link: one runs the CBQ daemon and the other acts as the network capture machine. The ttcp program is used as the traffic generator; it normally runs on Truciolo, but in some cases also directly on Ammon. This double location is essential because the switched Ethernet between the Win95 machine and the CBQ router can affect the results in high-bandwidth tests. The TTT package is used as the traffic monitor, in certain cases running directly on the second BSD machine, in others running the TTTProbe on the BSD machine and the TTTView on the Solaris machine. Since the TTT graphical interface uses a lot of CPU resources, the second option is used to avoid CPU overload in some high-bandwidth tests.
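The ttcp measurement itself is simple in principle: write a fixed number of fixed-size buffers over a TCP connection and divide the byte count by the elapsed time. A small self-contained Python sketch of the same idea over the loopback interface (ttcp itself is a C program; the buffer sizes here are arbitrary):

```python
# Minimal ttcp-style throughput test: send NBUF buffers of BUFLEN bytes
# over TCP and report the achieved rate, as ttcp -t / ttcp -r would.
import socket
import threading
import time

BUFLEN = 8192      # bytes per write (ttcp's -l)
NBUF = 256         # number of buffers (ttcp's -n)

def receiver(srv, counts):
    """Accept one connection and count bytes until the sender closes."""
    conn, _ = srv.accept()
    total = 0
    while True:
        data = conn.recv(65536)
        if not data:
            break
        total += len(data)
    conn.close()
    counts.append(total)

def run_test():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    counts = []
    t = threading.Thread(target=receiver, args=(srv, counts))
    t.start()
    cli = socket.create_connection(srv.getsockname())
    start = time.time()
    buf = b"x" * BUFLEN
    for _ in range(NBUF):
        cli.sendall(buf)
    cli.close()
    t.join()
    elapsed = time.time() - start
    srv.close()
    mbps = counts[0] * 8 / elapsed / 1e6
    return counts[0], mbps

nbytes, rate = run_test()
print(f"{nbytes} bytes received, {rate:.1f} Mbit/s")
```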

       

      Kiki: AMD K6/200, 64 MB RAM; 1 Ethernet 10 Mbps, 1 ATM 155 Mbps (PVCs set at various speeds, generally between 1 and 10 Mbps); FreeBSD 2.2.7. Packages: ALTQ 1.1.1, TTCP (recv), TTT, TTTProbe. Task: network capture; in certain cases also network monitor.

      Ammon: Intel P166, 32 MB RAM; 1 Ethernet 10 Mbps, 1 ATM 155 Mbps (PVCs set at various speeds, generally between 1 and 10 Mbps); FreeBSD 2.2.7. Packages: ALTQ 1.1.2, TTCP (send). Task: CBQ daemon; in certain cases also traffic generator.

      Thud: Sun Sparc-5; Ethernet 10 Mbps; Solaris 2.5.1. Packages: TTTView. Task: network monitor.

      Truciolo: Intel PII-266; Ethernet 10 Mbps; Windows 95. Packages: TTCP (send). Task: traffic generator.

       

    3. CBQ Performance

Most of our work has been devoted to understanding the performance problems we have been observing. We now have a fair understanding of them; they are reported in [1]. We have found some actual bugs in the ALTQ implementation, including some CBQ kernel bugs and some ATM driver ones, which have been fixed. In addition, we have found some problems with TCP buffer management when used with CBQ. Finally, there are probably problems with the CBQ algorithms themselves, which do not allow the full bandwidth to be used. We are still trying to see how to overcome these problems.
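One aspect of the TCP buffer problem is easy to quantify: TCP can send at most one window per round-trip time, so when CBQ limits a class to a few Mbps and its queue raises the effective RTT, the default socket buffer can become the bottleneck before the CBQ limit does. A back-of-envelope calculation (the 20 ms RTT is an invented example figure, not a measured one):

```python
def min_buffer_bytes(rate_mbps, rtt_ms):
    """Bandwidth-delay product: the smallest TCP window (and hence socket
    buffer) that can sustain rate_mbps over a path with rtt_ms RTT."""
    return int(rate_mbps * 1e6 / 8 * rtt_ms / 1e3)

# CBQ class rates in the range used on the testbed (1-10 Mbps)
for rate in (1, 5, 10):
    print(f"{rate} Mbps, 20 ms RTT -> buffer >= {min_buffer_bytes(rate, 20)} bytes")
```

If the sender's socket buffer is smaller than this product, measured throughput falls short of the configured CBQ rate even when the scheduler itself is behaving correctly.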

 

  4. Progress with IPv6

It is not yet quite clear how much IPv6 can be used in the HICID/HIGHVIEW/JAVIC projects. It is clear that IPv6 will eventually have much more support for QoS, so we would like to use it for HICID. This requires, however, that the applications we are using themselves support IPv6. Mainly with effort from outside the project, we are building up an IPv6 capability for our testbed. Our current progress is summarised below.

    1. Stacks

      UCL-CS has put up the IPv6 stacks from Microsoft/NT, DASSAULT/NT, Solaris-7 and FreeBSD. We have done inter-working experiments, and now understand the different application APIs. We will put up the LINUX one from Lancaster U and the FreeBSD one from CMU; we will need these for our mobile work but not for HICID routers.
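The key API difference across the stacks is that IPv6-capable applications must stop hard-coding AF_INET and instead resolve names through the protocol-independent getaddrinfo() interface (RFC 2553), trying each returned address in turn. A minimal sketch of the pattern, given here in Python for brevity (the actual applications use the C API):

```python
# Protocol-independent name resolution: getaddrinfo returns candidate
# (family, socktype, proto, canonname, sockaddr) tuples covering both
# IPv4 and IPv6, so the application never names a family explicitly.
import socket

def resolve(host, port):
    """Return candidate (family, sockaddr) pairs for host:port."""
    infos = socket.getaddrinfo(host, port, socket.AF_UNSPEC,
                               socket.SOCK_STREAM)
    return [(fam, sockaddr) for fam, _, _, _, sockaddr in infos]

for fam, addr in resolve("localhost", 80):
    name = {socket.AF_INET: "IPv4", socket.AF_INET6: "IPv6"}.get(fam, str(fam))
    print(name, addr)
```

An application then tries socket()/connect() on each pair in order until one succeeds, which is what makes the same binary work over either protocol.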

      The CAIRN and CISCO routers we are using both support IPv6; there is no problem at that level. However, the QoS support in the routers is much more limited. The CAIRN router supports only HFSC; it does not yet support ALTQ under IPv6. There is a version of ALTQ distributed by INRIA; this has not yet been evaluated by anyone in the CAIRN community - including UCL. The CISCO QoS support is also much more limited under IPv6 than under IPv4.

      The NT stacks are used only in end-Hosts - not in routers. In practice the DASSAULT/NT stack is still unstable. This is being discussed with DASSAULT.

       

    2. Applications

    There has been substantial progress in making two applications, RAT and VIC, work with IPv6. These now work with all the stacks - though we are still working on making the applications more uniform across the different stacks. We have received a port of SDR for IPv6 from ISI; we have not yet investigated it.

  5. Future Work

Our main activities during the next quarter are the following:

 

References

1. Fulvio Risso, "ALTQ Package - CBQ Testing" (personal communication).