Third Progress Report on the HICID Project

June 1, 1998 - August 31, 1998

Panos Gevros, Peter T. Kirstein and Jon Crowcroft

September 2, 1998


  1. Introduction

    In the absence of the LEARNET, and given the lack of preparedness of the different HIGHVIEW and JAVIC partners' applications for QoS, we decided first to make some more fundamental measurements of QoS algorithms. We have decided to base our work on the CAIRN router, mainly using altq; the recent addition of an implementation from CMU has persuaded us to look at this as well. We have set up a laboratory testbed and are making some simple measurements of the algorithms. The testbed is described in Section 2, and the measurements we are doing in Section 3. Our early measurements are outlined in Section 4, and our future plans in Section 5.

  2. The Laboratory Testbed

    We currently operate a small-scale testbed in our laboratory to experiment with QoS. In our experiments we use research prototypes (mainly, but not only, those under the altq framework) and measure how effective the traffic management mechanisms are. The problems with the altq ATM driver and its capabilities that were highlighted in the previous report have now been resolved; we are using the latest FreeBSD releases (2.2.6/7) and the altq-1.1.1 distribution.

    In the CAIRN, our colleagues are also starting to use another research prototype, from Carnegie Mellon University: the Hierarchical Fair Service Curve (HFSC) discipline, used for link sharing (like CBQ). There are some advantages in this system, not least that other colleagues are exploring its use, and CMU has offered to configure it onto our routers (it is not as well finished a product as altq from the viewpoint of easy configuration). We intend to include HFSC in future tests.

    The aim of our tests is to ascertain the relative advantages of the different algorithms. The method is to measure a number of parameters experimentally, and to analyse the results to determine why some mechanisms are preferable. The metrics under test are throughput and delay, compared both against standard FIFO and against what the configuration parameters promise. We use ATM links with different capacities, and traffic mixes consisting of TCP and UDP traffic. So far we have tested scheduling mechanisms used for link sharing (mainly CBQ, but also WFQ); in the future we plan to use RED and diffserv mechanisms, such as RIO, which have become available recently.

    2.1 The Testbed Configuration

    We have set up two PC-routers: one forwards traffic and the other acts as the data sink; a third host serves as the data source. The two PC-routers are connected by a permanent virtual circuit (PVC) of the Unspecified Bit Rate (UBR) service class; we vary the PVC capacity between experiments in order to ensure a bottleneck. The alternative queuing discipline is applied on interface "2" (see diagram below). As traffic generator and sink we use the 'ttcp' network performance benchmark. The link between the source host and Router A is currently an Ethernet; since we want the link under test to be the bottleneck, the bandwidth of the ATM link should not exceed 10 Mb/s.

    Figure 1. Schematic of simple experimental configuration
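
    For illustration, typical invocations for such a run are sketched below; the host name 'sink-host', the buffer sizes and the ping count are placeholder values of ours, not figures from our experiments. ttcp reports the achieved throughput at the end of each transfer.

        # on the data sink (start the receiver first)
        ttcp -r -s

        # on the source host: TCP transfer of 2048 buffers of 8192 bytes
        ttcp -t -s -l 8192 -n 2048 sink-host

        # the same transfer over UDP (both ends need -u)
        ttcp -u -t -s -l 8192 -n 2048 sink-host

        # sample the delay across the loaded bottleneck while ttcp runs
        ping -c 100 sink-host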

    A more complex configuration is shown below.

    Figure 2. Schematic of a three machine configuration

    In this three-machine configuration the bottleneck capacity is effectively limited to below 5 Mb/s. We are considering using ATM links between all three machines in order to test high-speed links of up to 155 Mb/s.

  3. The Measurements

    We now present briefly our findings and our future plans.

    3.1 Applying CBQ

      We have run 'cbqd' on router A with a configuration file in which the network interface and the link-sharing rules are specified (the class hierarchy, whether borrowing from the parent class is permitted, and class attributes such as protocols, addresses, ports and allocated bandwidth). For a more detailed description refer to the 'cbqd' manual page.
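
      Purely as a sketch, and assuming the configuration-file style of the altq distribution (the interface name 'pvc0', the class names and the percentages below are our illustrative choices; the exact keywords should be checked against the manual pages of the release in use), such a link-sharing specification might look like:

          # interface under test: 10 Mb/s total, managed by CBQ
          interface pvc0 bandwidth 10M cbq
          # root class owning the whole link
          class cbq pvc0 root_class NULL pbandwidth 100
          # TCP class: 70% of the link, may borrow spare capacity from the root
          class cbq pvc0 tcp_class root_class borrow pbandwidth 70
          filter pvc0 tcp_class 0 0 0 0 6
          # UDP class: 30% of the link; also the default for unclassified traffic
          class cbq pvc0 udp_class root_class pbandwidth 30 default
          filter pvc0 udp_class 0 0 0 0 17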

      We have tested different scenarios, varying the configuration parameters and the traffic mix; we have observed a clear departure from FIFO behaviour, with the throughput achieved within each class consistent with the ratio of the bandwidth allocations between the classes. An example is given below:


      Figure 3. Schematic of CBQ Class Hierarchy
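
      For instance, with a 70%/30% split such as the one sketched above on a 10 Mb/s PVC (our illustrative figures, not measured results), one would expect the TCP and UDP classes to achieve roughly 7 Mb/s and 3 Mb/s respectively when both are backlogged, less ATM cell and protocol header overheads.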

    3.2 Applying WFQ

      The process of testing WFQ is very similar to that of testing CBQ. We have been collaborating with George Uhl of NASA, who has been doing similar experiments in the CAIRN context. We have agreed to put in a direct connection between us and to carry out more detailed measurements during the coming quarter. This activity arose partly from discussions Peter Kirstein had with NASA on a recent visit to the NASA-Ames facility in California.

    3.3 Further tests

      We plan to deploy RED shortly. We currently use static configuration of the QoS parameters; during the next few months this will be extended to dynamic configuration using RSVP. For each mechanism we will carry out the same sort of measurements that we have started with CBQ and WFQ.
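
      As an indication only, and again assuming the interface-oriented syntax of the altq configuration files (the interface name 'pvc0' is our placeholder), enabling RED on the bottleneck link might require no more than a line such as the following, with the queue thresholds and drop probability subsequently tuned from their defaults:

          # RED queue management on the 10 Mb/s bottleneck (illustrative)
          interface pvc0 bandwidth 10M red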

  4. Actual Measurements

    We shall put the first results on the Web shortly. As new capabilities (for example, the diffserv RIO mechanism) become available in newer altq releases, we plan to include them in our experiments.

    We have, in fact, already made extensive measurements, and typical graphs are provided at: http://www.cs.ucl.ac.uk/staff/jon/arpa/altq/fulvio-cbq-1.html. Some of these graphs are reproduced in Figures 4 and 5 below. There was, however, a slight error in the experimental set-up; thus, while the figures show the sort of measurements we will produce, no conclusions can be drawn from these past measurements.


    Figure 4. Traffic as a function of time for a particular set of parameters

    Figure 5. Traffic as a function of time for another set of parameters

  5. Deployment

    Wider deployment has been held up by the delays in the LEARNET. We now have connectivity through that network, but not using the ATM circuits that were expected initially, because only UCL has been configured with an ATM switch so far. We understand that BTRL is interested in similar work; we intend to explore with BT whether they can also incorporate such a switch and allow us to do wide-area tests. Because Essex U has a Cisco router with an ATM interface, which can act as an ATM switch, we expect to extend our work to Essex U shortly thereafter. Since Essex U also runs a video server, this will allow us to carry out the first true HICID investigations in collaboration with one of the other projects.