The HICID Role in Umbrella Experiments

Final Report

 

Panos Gevros, Fulvio Risso, Peter Kirstein and Jon Crowcroft

Department of Computer Science,

University College London,

Gower Street,

London WC1E 6BT, U.K.

{f.risso, pgevros, p.kirstein, j.crowcroft}@cs.ucl.ac.uk

 

Abstract

This report summarises the work done under the BT-sponsored HICID project. We describe the network configuration and QoS facilities put in place to support the HICID QoS experiments with multimedia streams, the JAVIC work on audio and video codecs, and the HIGHVIEW work on user assessment. We were able to provide good connectivity using LEARNET, with QoS support from the FreeBSD ALTQ stack. We also provided a traffic generator and monitoring facilities, so that network measurements could be correlated with measurements made at other levels and with user reaction to the facilities provided.

 

1. Overview

The overall aim of the HIGHVIEW/JAVIC/HICID projects was to show how good multimedia quality can be obtained over networks by using layered Codecs and QoS support for the flows the Codecs produce. In JAVIC, different audio and video Codecs were to be used to distribute high-quality audio and video over packet-switched networks like SuperJANET and LEARNET; the project was concerned both with the development of the Codecs and with the evaluation of their quality from a technical viewpoint. HIGHVIEW was concerned both with running specific high-quality video streams over the SuperJANET and LEARNET packet-switched networks, and with the user evaluation of the quality of the streams provided, both in its own experiments and in those of JAVIC. HICID was to make the network connectivity possible at the lower levels and to provide network QoS support for the JAVIC experiments. This support was to include the monitoring, measurement and background traffic generation required for the evaluation of the JAVIC Codecs and, most importantly, the appropriate resource management framework driven by higher-level objectives.

     

Fairly early on, some decisions were made by the other projects that narrowed the HICID activity. It was decided that King's College London and Bristol U, who were part of JAVIC, would not themselves do on-line experiments; their Codecs would be integrated at UCL into the UCL tools, and the experimentation would be done over LEARNET. HIGHVIEW also decided that it would use only LEARNET; as a result, the HICID network experiments could be restricted to LEARNET alone.

     

It was still not clear exactly what router connectivity would be required; for this reason, HICID implemented reasonably complete connectivity, to allow whatever experimentation might be desired over the LEARNET topology. We wished to be able to provide Quality of Service for any combination of flows that JAVIC or HIGHVIEW might require, and therefore provided more flexible connectivity than was eventually used in the experiments.

     

The HICID project provided both the network infrastructure and the instrumentation required for the JAVIC/HIGHVIEW experiments; it also provided some extra experimental results exploring parts of the QoS space not investigated in the JAVIC experiments. The intention was to do multimedia JAVIC experiments using layered Codecs for both audio and video. In the event, no suitable JAVIC video Codecs were available: the layered video Codec from Bristol U could not work in real time, and was therefore agreed to be unsuitable for the tests. BT offered to provide an H.263+ layered Codec; however, the JAVIC group had difficulty in making it operate and was unable to get assistance from BT. For these reasons, JAVIC decided to do only audio experiments, and HICID agreed to support them in this activity. It was of no direct concern to HICID which Codecs were used; both the development of the Codecs and the reasons for their deployment were of direct concern only to the JAVIC project. Our function was to provide QoS, network and instrumentation support matching the needs of the JAVIC-chosen Codecs, and to understand how the different choices made by JAVIC affected the QoS classes required. We were also concerned with providing hooks in the media tools to signal QoS, whether or not these were eventually used by the JAVIC Codecs.

     

This note is the Final Report for the HICID project. Most of the material has been presented at various Umbrella meetings; fuller versions of the results are given in the references.

     

Our work in HICID covered four areas; each is described briefly in a separate section. In Section 2 we describe the network infrastructure, which could have been used for many different purposes with a variety of sites; we also mention the communications-related modules developed for the later experiments. In Section 3 we discuss the components provided under HICID, including the experiments done on the QoS module itself to understand its capabilities, and the measurements made on it to evaluate the parameters needed for the QoS set-up. In Section 4 we consider the HICID role in the collaborative Umbrella experiments. Finally, in Section 5 we draw some conclusions and suggest areas requiring further work.

     

2. The Network Infrastructure

In the early days of the project, it was not clear which sites would be involved, except that at most it would involve the JAVIC and HIGHVIEW partners: King's College London, Bristol U, Essex U and BTRL. It soon became clear that neither King's College nor Bristol U intended to participate in on-line experiments; moreover, there were going to be changes in SuperJANET, so that putting special routers there might prove a nuisance. For this reason we decided to concentrate the experiments on LEARNET, and on the Essex U, BTRL and UCL sites. We had a choice of two sets of routers: production routers from Cisco, and research routers from the CAIRN community running FreeBSD on PCs. We were not clear which would be the more appropriate, or what facilities would be required, so we provided complete connectivity with both sets of routers. The connectivity provided is shown in Fig. 1.

Figure 1 The LEARNET Connectivity provided under HICID

     

As can be seen from Fig. 1, the topology could be set up flexibly using the ATM switches. This would have allowed us to use all four sites through an all-Cisco, an all-CAIRN, or a mixed router network; for this reason the ATM VCs shown were set up. In practice, the CAIRN PC routers proved more flexible, and only these were used in the experiments. Moreover, it became clear that neither BTRL nor UCL-EE was needed in these experiments. As a result, only a subset of Fig. 1 was actually used, mainly the ATM VCs between Essex U and UCL-CS.

     

For the actual experiments, it was usually simpler to run everything from one site, even when LEARNET was used for the transmission. A typical set-up for experiments carried out from UCL is shown in Fig. 2.

Figure 2 A typical experimental set-up for experiments run from UCL-CS

     

     

Figure 2 shows the topology used in our final experiments. The multicast routers (kirki, essex, calypso) run CBQ on their ATM interfaces (thin lines); the local networks are 100 Mbit/s switched Ethernets, and two further hosts are used for traffic generation (TrafGen1, TrafGen2). The topology uses two ATM PVCs between UCL and Essex U running over LEARNET; this enabled us to run the experiments entirely at UCL, where they could be better coordinated, while still retaining their validity as tests performed over wide-area ATM connections.

     

There were variants on this theme, both in the number of machines used for the media data and in the reflected nature of the data; some of these matters are considered further in Section 4. However, some experiments were done between servers at Essex U and clients at UCL, as shown schematically in Fig. 3.

     

Figure 3 Schematic of experiments between a Server at Essex U and clients at UCL

     

3. Components Provided under HICID

In addition to the network connectivity, we provided a number of components, partly under the HICID project. These included a detailed investigation of QoS mechanisms, the provision of a traffic generator, and the porting of some of the applications to IPv6. Each is described briefly below.

3.1 Quality of Service and the ALTQ Performance

A major part of the HICID work was setting up and understanding mechanisms for providing QoS control to the JAVIC and HIGHVIEW applications. This work needed neither the applications from the other projects nor even wide-area networks. Initially we looked at the available scheduling algorithms, such as Weighted Fair Queuing (WFQ) and the Hierarchical Fair Service Curve (HFSC) scheduler from Carnegie Mellon U. After a fairly detailed evaluation of the status of each at the time, we decided to concentrate on the Class Based Queuing (CBQ) module from the ALTQ package, running on FreeBSD PC routers. Here it was necessary to set up classes and to arrange various parameters giving specific weightings to the different classes. A full description of the package is given in [2]. In this work much of the effort went into keeping up with the releases both of ALTQ and of the underlying FreeBSD on which it ran. As a result of our experiments, we found specific faults in the implementations. We reported these to the package developers, and then had to install and test the resulting improvements.
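For illustration, CBQ link sharing in ALTQ is configured through a class hierarchy in altq.conf, with per-class weights, priorities and packet filters. The sketch below shows only the general shape of such a configuration; the interface name, percentages, port number and assumed filter field layout are invented for the example, not taken from our actual set-ups.

    # altq.conf sketch (illustrative values throughout)
    interface pvc0 bandwidth 10M cbq
    #
    # the root class covers the whole link; the default class takes
    # unclassified traffic and may borrow spare bandwidth
    class cbq pvc0 root_class NULL priority 0 pbandwidth 100
    class cbq pvc0 def_class root_class borrow priority 1 pbandwidth 70 default
    #
    # a higher-priority class reserved for audio traffic
    class cbq pvc0 audio_class root_class priority 6 pbandwidth 30
    #
    # classify UDP datagrams to port 5004 into the audio class
    # (assumed filter fields: dst addr, dst port, src addr, src port, proto)
    filter pvc0 audio_class 0 5004 0 0 17

The pbandwidth weights, the priorities and the borrow flag are exactly the parameters whose interaction we had to understand in the experiments described below.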

       

      Most of the testing was done with test traffic. There are detailed documents on the measurements, but a typical configuration is shown in Fig. 4.

Figure 4 Typical test environment

       

The first part of these experiments was concerned with understanding the working of the ALTQ module that provided the Class Based Queuing (CBQ) shown in Fig. 4. Here we examined the effects of changing the priorities of the classes and the performance of the borrowing mechanism. We then tried to understand the correctness and performance problems observed; many of these are discussed in [1]. We found some actual bugs in the ALTQ implementation, including some CBQ kernel bugs and some ATM driver bugs, which had to be fixed. In addition, we found problems with TCP buffer management when used with CBQ and ATM (due to the large ATM MTU of 8192 bytes). Finally, there were problems with the CBQ internal mechanisms, which did not allow full link utilisation: borrowing from one class by another was not handled correctly, and the operation differed between UDP and TCP because of the way the borrowing mechanism was triggered (it required an adequate queue backlog). In the course of these experiments, we had to pay attention to the properties of the traffic generators and network monitors as well as of the CBQ implementations.
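As an aside, one mitigation that can be sketched for the buffer interaction is simply to enlarge the per-socket buffers, so that more than a couple of 8192-byte ATM-MTU segments can be outstanding at once. This is a hedged illustration of the idea only, with an assumed buffer size; the actual analysis of the problem is given in [1].

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Enlarge a TCP socket's send and receive buffers so that several
     * 8192-byte segments fit; 128 KB is an illustrative value, not the
     * one used in the HICID tests. */
    static void enlarge_buffers(int sock)
    {
        int bufsize = 128 * 1024;
        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) < 0)
            perror("SO_SNDBUF");
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) < 0)
            perror("SO_RCVBUF");
    }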

3.2 Improvements in the Media Tools to Ease the Application of QoS

The wide-area experiments with media tools are of considerable interest to the HICID PIs in the context of providing QoS for conferencing, particularly where that QoS can be provided in the underlying network. It is for this reason that we also moved the media tools over to working with IPv6. This will prove particularly valuable when we work with the Codec developers to mark their packets as requiring different priorities; the I and P frames in MPEG are obvious examples. While this work was done during the lifetime of the HICID project, and sometimes by the people working on HICID, it is not really part of the HICID/Umbrella projects.

3.3 The Traffic Generator and Monitor

A key requirement in the Umbrella projects was to provide measurements and evaluations at different levels. For this reason, under the HICID project we provided tools for both traffic generation and measurement.

Real network conditions were emulated using cross-traffic from a custom-developed traffic generator based on Netperf. Its purpose was to load the PC routers in a way that closely resembles real network traffic, where the TCP traffic classes are populated both with "large" MSS-sized packets and with "small" 40-byte ACK packets.

For routers employing round-robin scheduling mechanisms (i.e. the WRR used within CBQ), the packet size distribution of the traffic inserted into a class can seriously affect performance. For the noise used to oversubscribe the UDP multimedia classes, we developed another traffic generator by modifying the ttcp program to send, on demand, CBR UDP flows of a specified rate (in Kbit/s) and packet size (in bytes).

Note that producing an accurate CBR UDP flow in a real implementation is not trivial. In simulations the UDP CBR flows can be of arbitrary precision, but in the real world, on non-real-time operating systems, there are problems. One cannot clock out packets with microsecond precision: standard Unix system calls such as select and gettimeofday control the inter-packet gap only at millisecond granularity, rather than the microseconds required. We tried busy-waiting in the sending process, but this disturbs other processes related to the experiments running on the same machine. Because of these difficulties, the packets were clocked out in small bursts; over reasonably long time intervals, the average rate of the noise generator was then almost exactly the prescribed one.

For the video noise we had two options: use the CBR noise generator described above, or use another "real" video source. We took the second path, since video traffic (as produced by VIC with the H.261 codec) is VBR in nature and the video traffic class would contain other VBR sources. For the TCP traffic patterns, the noise generator used exponentially distributed connection inter-arrival times (with average lambda connections/sec), and the duration of each connection was drawn from a bimodal distribution with mean values of 1 s and 30 s respectively.
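The burst-clocking technique can be sketched in a few lines of C. This is a minimal illustration of the approach, not the modified ttcp itself; the destination address, port, rate and packet size are invented for the example.

    /*
     * cbrsend.c - sketch of a burst-clocked CBR UDP sender.
     * Build: cc -o cbrsend cbrsend.c
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        double rate_kbps = 1024.0;   /* target rate, Kbit/s (illustrative) */
        int    pktsize   = 1024;     /* UDP payload size in bytes */
        double pps = (rate_kbps * 1000.0) / (pktsize * 8.0); /* packets/sec */

        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family      = AF_INET;
        dst.sin_port        = htons(9);               /* discard port */
        dst.sin_addr.s_addr = inet_addr("10.0.0.2");  /* hypothetical sink */

        char *buf = calloc(1, pktsize);
        struct timeval start, now;
        long sent = 0;
        gettimeofday(&start, NULL);

        for (;;) {
            /* Compute how many packets should have left by now and send
             * the deficit as a small burst, since select()/gettimeofday()
             * only give millisecond, not microsecond, control of the gap. */
            gettimeofday(&now, NULL);
            double elapsed = (now.tv_sec - start.tv_sec)
                           + (now.tv_usec - start.tv_usec) / 1e6;
            long due = (long)(elapsed * pps);
            while (sent < due) {
                sendto(s, buf, pktsize, 0,
                       (struct sockaddr *)&dst, sizeof(dst));
                sent++;
            }
            struct timeval tick = { 0, 1000 };  /* sleep roughly 1 ms */
            select(0, NULL, NULL, NULL, &tick);
        }
        /* not reached */
    }

Over intervals of seconds, the deficit-based loop keeps the average rate at the prescribed value, even though individual packets leave in millisecond-spaced bursts.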


3.4 Typical Local HICID Experiments on Different Classes

In the context of the Umbrella experiments themselves, we carried out a number of CBQ experiments locally with our own media tools. We set up various class structures for the CBQ, using the set-up of Fig. 4, as illustrated in Fig. 5.

Figure 5 Two typical sets of CBQ experiments with different class structures

       

These could reflect the sharing of channels between different agencies, the allocation of different priorities and bandwidths to different media, or the treatment of different layers of a medium.

       

We then used the Mbone tools as traffic sources, giving differential priority to the audio, video and shared-workspace tools. We then introduced layered media tools from the MECCANO project; here we had a limited layered video capability, but no layered audio. Not only did these experiments provide some interesting results [2], they also taught us how to configure the routers for the joint experiments with JAVIC and HIGHVIEW.
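To give the flavour of such a class structure, a layered-audio hierarchy might be written in altq.conf style as below. The class names, priorities and percentages are invented for illustration; they are not the configurations used in the experiments reported in [2].

    # layered-audio link sharing (illustrative values throughout)
    interface pvc0 bandwidth 10M cbq
    class cbq pvc0 root_class NULL priority 0 pbandwidth 100
    class cbq pvc0 def_class root_class borrow priority 1 pbandwidth 40 default
    #
    # the audio base layer runs at the highest priority; the enhancement
    # layers run at successively lower priorities and may borrow spare
    # bandwidth, so they are the first to suffer under congestion
    class cbq pvc0 audio_base root_class priority 7 pbandwidth 15
    class cbq pvc0 audio_enh1 root_class borrow priority 5 pbandwidth 15
    class cbq pvc0 audio_enh2 root_class borrow priority 3 pbandwidth 15
    #
    # a single class for the (VBR) video sources
    class cbq pvc0 video_class root_class borrow priority 4 pbandwidth 15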

4. The Umbrella Experiments

First we set up the basic network infrastructure of Section 2 and did the basic experiments on ALTQ of Section 3. Next, we ran the total system locally, with class configurations that might match the needs of the JAVIC project and its layered codecs. We were then in a position to assist the JAVIC and HIGHVIEW projects directly.

     

In this context, we first dimensioned the CBQ link-sharing structures to cater for the streams generated by the layered audio codec from King's, which had been integrated under JAVIC. UCL experimenters from JAVIC and HIGHVIEW worked with this codec and looked at user acceptability, correlated with the parameter variability that we could provide. The experiments were first done locally, using the set-up of Fig. 2 without LEARNET; they were then repeated using the complete configuration of Fig. 2, going over LEARNET under varying congestion conditions. The layered codec allowed us to give different priorities to the different layers, with various amounts of noise added; the results of these experiments are discussed in [3]. The system was then extended to deal with both layered audio and single-layer video, because JAVIC had not been able to integrate any of the layered video codecs by the end of the project.

     

    In a separate activity, we set up the configuration of Fig. 3 to allow HIGHVIEW to do server-based experiments.

     

The actual results of these experiments are given in the HIGHVIEW [5] and JAVIC [6] Final Reports.

     

     

5. Conclusions and Further Work

At the outset of this project, it was clear that it would be possible to set up configurations for QoS experiments, but not what we would be able to conclude from them. In the event, we found it difficult to draw many conclusions from this type of experiment on a network carrying real traffic: there were too many unknown activities, and we had too little control. However, where we had access to a network over which we had sufficient control, the experiments were quite feasible. For this purpose the use of ATM, to establish a virtual network with known parameters, proved invaluable.

 

Providing an understandable QoS environment proved harder than expected. There were many unexpected characteristics of the particular mechanism we adopted, even though it is probably one of the best and most flexible mechanisms currently available. We suffered all the expected problems of updates, both to the package itself and to the operating system version on which it ran; this is not a problem we believe will be resolved in the next couple of years. Moreover, configuring the QoS at the router level to deal with the exact traffic characteristics we wished to investigate proved hard. It would clearly be much easier if the application marked its packets, and the router could be configured to operate more blindly on the marked packet streams; indeed, this approach is clearly essential for realistic traffic volumes.

 

The technology of layered codecs was much less advanced than we expected; we never really got any video codecs to work well with different layers in a flexible way. To apply QoS properly to the different layers, it is necessary to ensure that the Codec developer has designed the encoder, and particularly the decoder, for such use.

 

We were able to carry out complete experiments in which we could apply QoS, operate with different Codecs, measure user satisfaction, and correlate the performance at the different layers. While this required close collaboration between the different tasks, it proved quite feasible. It was not so easy to find applications where the impact of quality on user performance could be correlated accurately with the quality of the audio and video. It is clearly much easier to carry out such experiments locally than over a real wide-area network; nevertheless, verifying the local experiments with a limited wide-area experimental activity proved both straightforward and reliable.

 

We did not manage to carry out one vital experiment that BT wanted, namely to prove conclusively the value of layered coding with QoS. There were two reasons for this. First, there was the obvious lack of suitable layered codecs; we believe this problem has now largely been resolved, though it will take further work to put the codecs into a form in which they could be used in these experiments. Secondly, it takes considerable experimentation to obtain objective results on the evaluation of any single set of parameters for audio/video streams. Moreover, the quality of one medium may affect the perception of the other.

 

As is usual in this type of project, we are sure that we would now be in a position to answer the sort of questions BT really wants answered with considerably less effort than was spent on this experiment. Nevertheless, we do not think it would take less than the ten man-months officially devoted to the HICID work.

 

References

  1. P. Gevros, F. Risso and P. Kirstein, "CBQ Traffic Management for Layered Audio Transmission", UCL CS Technical Report, Sep. 1999.
  2. F. Risso, CBQ Testing (Personal Communication).
  3. P. Gevros and P. Kirstein, "CBQ Traffic Management for Layered Audio and High Quality Video", UCL CS Technical Report, Part II, Oct. 1999.
  4. F. Risso and P. Gevros, "Operational and Performance Issues of a CBQ Router", ACM Computer Communication Review, Vol. 29, No. 5, Oct. 1999.
  5. HIGHVIEW Final Report.
  6. JAVIC Final Report.