Institution: University College London

Principal Investigators: Profs Peter Kirstein and Jon Crowcroft

Tel: 0171-380-7286, Fax: 0171-387-1397, e-mail: kirstein@cs.ucl.ac.uk

Title: High Performance Interactive Conferencing and Information Distribution (HICID)

1. Objectives

The objective of this project is to demonstrate the feasibility of providing high quality multimedia conferencing and information distribution using multicast techniques over SuperJANET. The aim is to provide an appropriate infrastructure to the HIGHVIEW and JAVIC projects so that they can demonstrate different qualities of multimedia over SuperJANET - including ascertaining the limitations of the different mechanisms for traffic prioritisation and of the different network technologies. Other projects will determine the suitability of their tools and these mechanisms for real teaching and collaboration applications. The development of tools or teaching material is not an objective; existing tools will be taken from various sources and other projects.

2. Potential Benefits

Multi-service networks would provide very important services for the end user, provided that they can deliver the relevant Quality of Service (QoS) for real-time information including audio, video and data. Past work has indicated that the use of multicast IP is particularly suitable for delivering such services. We expect this work to provide a number of important benefits.

3. The HIGHVIEW and JAVIC Projects, and the Background to HICID

3.1 The background

The HIPMUCID project was proposed in response to the BT/JISC Call for Proposals of March 15, 1997. It was a proposal for investigating the impact of Quality of Service (QoS) over SuperJANET, with a number of real applications and distributed sites. The collaborating institutions were to have been Aberystwyth U, Edinburgh U, and the University of London Computing Centre. Unfortunately, insufficient money was available to fund the proposal, but BT signalled their interest in the subject matter and indicated that much more limited funds would be available.

At the time of the original proposal, UCL was also participating in two other responses to the same Call, the HIGHVIEW and JAVIC proposals, and was awaiting a response from the European Commission on the MECCANO project. Because all these projects were awaiting funding, it was not possible to rely on them in a separate proposal. All of them have now been funded. As a result it is possible to propose here a reduced project which investigates as large a subset of the original HIPMUCID proposal as is possible with the reduced funding. We discuss below the aspects of the other projects which are relevant to HICID, and develop a proposal which relies on the existence of those projects.

3.2 The HighView project

The HIGHVIEW project recognises that successful transmission of high-quality video and audio over existing IP networks requires progress in four related areas:

  (a) source data-traffic management (i.e. scaleable codec and transcoding technology, loss recovery, etc.);
  (b) network protocols (including buffer/delay management);
  (c) network and service management (including queuing strategies and bandwidth reservation);
  (d) the proper understanding of end-users' perception of service quality, which in turn is sensitive to the type of application.

The HIGHVIEW project will focus on areas (a), (b) and (d). Their deliverables will be a set of recommended techniques and metrics, backed by practical demonstrations of specimen applications. These deliverables will provide essential requirements (i.e. objective targets related to subjective thresholds) for those working in area (c), namely packet-switched network and service management. The project involves Essex U and UCL, and most of the applications will be mounted between these two sites.

The HIGHVIEW Work-plan will include both server and real-time multicast environments, using both their own software and a selection of commercial offerings (Oracle, Precept, Real, etc.) on workstations and high-end PCs. Their activity on multilayer codecs will take into account constraints imposed by practical computational environments, modest-cost networking environments, and the range of appropriate end-user interfaces.

The HICID project will mesh well with this, since it will mainly attack the problems of (c) above, though it will have to provide some extensions to the end-stations to allow these to be tackled.

3.3 The JAVIC project

The JAVIC project is concentrating on the provision of high quality audio codecs, and will use them in a number of interesting applications. Their requirements are for very high quality, including video. The institutions involved are Bristol U (BU), King’s College London (KCL), and UCL. Though that proposal would again benefit from the existence of a high quality infrastructure, it makes no suggestions on how such an infrastructure could be provided.

Again, this is an area where the HICID project will mesh well, by providing that infrastructure.

3.4 The MECCANO Project

The MECCANO project is a European Telematics project, in which Professor Kirstein is the Project Director. It concentrates on the provision of multimedia conferencing and server tools to support appropriate quality multimedia conferencing over heterogeneous networks. Parts of this project include the provision of filtering, mixing and transcoding gateways, integration of MPEG coders into the Mbone conferencing suite, use of layered codecs, and experimentation on Quality of Service over heterogeneous networks.

The work in MECCANO is entirely relevant to the HICID project. Some of the experimentation in MECCANO on Quality of Service and RSVP will ease the set-up and experimentation problems in HICID; the gateways will be completely relevant; and more tools and servers will be provided than could be found in either the HIGHVIEW or the JAVIC activities. Connectivity includes sites in Belgium, Canada, France, Germany and Norway.

4. Technical Activities

4.1 The WAN Topology

The basic WAN configuration for the project is shown in Fig. 1. While all the sites have unicast communication through SuperJANET, we will set up the multicast trees as shown in Fig. 1. This multicast tree is not the same as the general multicast tree on SuperJANET - at least during most of the project - and will have the capability of much higher speeds. Ideally this could be as high as 10 - 20 Mbps for some of the highest performance activity, but most could be done at much lower speeds - typically down to 1.5 Mbps for individual streams. Lower quality multimedia could be delivered at much lower speeds still, and one part of the work will be to stress the network with many parallel multimedia streams, each with a lower individual bandwidth requirement.
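To make the style of delivery concrete, the short Python sketch below shows how an end-station at one of the nodes of Fig. 1 could subscribe to a single multicast stream: it joins an IP multicast group and counts the datagrams it receives. The group address and port are purely illustrative assumptions; in the project itself subscriptions will be made by the Mbone conferencing and server tools rather than by hand-written receivers.

    # Minimal multicast receiver sketch (illustrative addresses only).
    import socket
    import struct

    GROUP = "239.1.2.3"   # hypothetical administratively scoped group
    PORT = 5004           # hypothetical UDP port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Report membership via IGMP so the local router grafts this host onto the tree.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    received = 0
    while received < 1000:                  # stop after a fixed number of packets
        data, sender = sock.recvfrom(2048)  # one media packet per datagram
        received += 1
    print("received", received, "datagrams from group", GROUP)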

Not shown in Fig. 1 are the additional LAN and MAN networks behind each individual node. Typically these have the capability of communicating directly with the SuperJANET gateways at speeds up to 8 Mbps on individual streams, and significantly faster on multiple streams. There are also links to other sites - e.g. to BT-Martlesham and Cambridge U via LEARNET.

Figure 1: Proposed high performance multicast tree for the project (the figure shows the project sites interconnected over SuperJANET and the London MAN).

Most of the activities will be between the nodes shown; these are the only ones funded by the project. However, we would be delighted to extend our activities to other sites - if they are also funded under this initiative or another, and wish to participate. For example, it is probable that the network of Fig. 1 will be extended significantly by providing the media services also over the MECCANO network. We will also experiment with early versions of the PRECEPT software using the UCL research link to the US CAIRN network - albeit with only single streams, because of our limited transatlantic bandwidth.

 

4.2 Site Facilities

UCL and KCL currently have ATM links into the London MAN. Bristol, Essex, and UCL have SMDS links. UCL has a 2 Mbps ATM link into the US CAIRN network - intended specifically for the style of working proposed here - and links to several European sites as part of its JAMES connectivity for the MERCI/MECCANO projects; the MECCANO project will retain links to Brussels, Ottawa, Oslo, Sophia-Antipolis and Stuttgart.

 

4.3 End User Equipment

At each node, there will be substantial terminal equipment, including at least the following: one or more multimedia-equipped workstations capable of receiving multicast streams, high performance UNIX and PC workstations, and multicast routers. Bristol and UCL also have lecture rooms which will be equipped for interactive multicast lectures. However, this aspect of any trials is directly controlled by the HIGHVIEW, JAVIC and MERCI/MECCANO projects.

There will be extensive multimedia servers. The UCL one will run both the PRECEPT IP/TV and the UCL MMCR software; Essex will provide their ORACLE Server to HIGHVIEW. The servers we are using are able to deliver a substantial number of streams at up to 1.5 Mbps each - based on H.261 and MPEG-1; the PRECEPT Server will be providing MPEG-2 by the end of 1997.
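As a rough illustration of what a single 1.5 Mbps stream involves at the packet level, the following Python sketch paces UDP datagrams onto a multicast group at that nominal rate (about 134 packets per second of 1400 bytes each). The group, port, packet size and rate are illustrative assumptions only; the real traffic in the trials will come from the servers and tools named above.

    # Sketch of a paced UDP multicast source at a nominal bit rate
    # (group, port, rate and packet size are illustrative assumptions).
    import socket
    import time

    GROUP, PORT = "239.1.2.3", 5004
    RATE_BPS = 1_500_000                     # nominal 1.5 Mbps stream
    PACKET_BYTES = 1400                      # bytes carried per datagram
    INTERVAL = PACKET_BYTES * 8 / RATE_BPS   # ~7.5 ms between packets

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)  # allow it off the local LAN

    payload = bytes(PACKET_BYTES - 4)
    next_send = time.monotonic()
    for seq in range(2000):
        # A 4-byte sequence number lets receivers estimate loss (see Section 5.1).
        sock.sendto(seq.to_bytes(4, "big") + payload, (GROUP, PORT))
        next_send += INTERVAL
        time.sleep(max(0.0, next_send - time.monotonic()))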

 

4.4 Resource Management Software

The project will deploy mechanisms for resource reservation and priority routing over several specific network groupings, set up Mbone facilities with different controllable resources, apply different classes of multimedia traffic over these facilities, and interconnect sites running both Internet and ITU-T styles of protocol.

  (a) Resource reservation and priority routing. Here we will investigate implementations of RSVP to make reservations and RED to drop packets, using PIM or DVMRP as routing protocols and WFQ or CBQ to provide traffic priorities (a sketch of such an early-dropping queue discipline follows this list). Now that RSVP has moved to Proposed Standard, and with the rapid progress in the other technologies, we anticipate that stable implementations of the above, and others, will be provided over the next 12 months on the different platforms in use. Most will come from the suppliers, but some may also come from the CAIRN programme (in which SUN, DEC, Cisco, and coincidentally UCL are participating). We expect to proceed as follows.

We will provide various traffic types for the experiments - see (f) below. There will be technical activity in setting up the network for different algorithms, in setting up the management structures to enter the relevant parameters into the algorithms, and in ensuring that the resultant system is controllable and stable.

  (b) Network Technology. Here we will use mainly ATM and SMDS - as exemplified in SuperJANET and the various MANs - with transmission over several sites in each. We will also test any configurations over the European JAMES network (as part of MERCI) and the US CAIRN.
  (c) Routers and Switches. As far as possible, commercial routers will be used. It may be necessary, however, to put experimental routers, for instance those from CAIRN, at the boundaries. This may be necessary for some activities where it is desired to apply route filtering at routers to constrain the multicast traffic entering a part of the network; at present the necessary code exists only from Xerox PARC. There will have to be careful tuning of router parameters to achieve high performance without impacting other uses of the networks. In addition, we will work with the vendors on examining the availability and efficacy of multicast-aware switching and switched network "edge" equipment - in the context of some of the schemes in (b).
  (d) Gateways. Three styles of gateway will be incorporated and compared: filtering, mixing and transcoding.

  (e) Classes of Traffic. We will investigate three classes of traffic: low quality in the 128 - 512 Kbps range, medium quality in the 1 - 2 Mbps range, and high quality in the 10 - 20 Mbps range. For video, the first uses the coding currently prevalent in the Mbone; the second will focus on MPEG-1 and Motion JPEG; the third on MPEG-2 from the PRECEPT Server, and on some of the high quality versions deployed in the Scottish MAN or emerging in the work of Essex U and others - provided that these are made available. Using the different techniques mentioned under gateways, high quality streams will be receivable also at lower quality by less privileged recipients.
  (f) Types of Traffic. We will use the standard traffic types: video, audio, shared workspace and injected multimedia server streams. We will use conferencing and lecturing facilities, as are currently deployed over MERCI and the SuperJANET video service, and Video-on-Demand - as used in the PRECEPT IP/TV and the Essex U Oracle Server. The MERCI conferencing tools are VIC (video), RAT (audio) and Teledraw. Other traffic will be provided from HIGHVIEW and JAVIC. More of the technical tests will be based on the server work, because it is easier to provide controllable traffic at high rates.
  (g) Types of Application. We will investigate the use of the system for standard interactive lecturing, conferencing, media broadcast and media-on-demand applications. Many of these will be in realistic sessions, but the amount of teaching attempted will be limited.
  (h) Measurement and Monitoring. The end-stations, and where possible the intermediate nodes, will be instrumented to allow measurement of traffic loss during sessions. Here we will include the network performance tools being used by the applications and the network measurement tools assembled in the MERCI project.
  (i) Performance Assessment. There will be subjective assessment of the different sessions, correlated with the objective measurements. This project will not develop assessment techniques; it is expected that at least one of the other projects to be funded under this call will be providing such techniques. If that is not the case, some relevant tools have been developed under both the RELATE and MERCI projects.
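To make concrete the style of early-dropping queue discipline referred to under (a), the following Python sketch shows the core of a simplified RED queue: an exponentially weighted moving average of the queue length, with a drop probability that rises linearly between a minimum and a maximum threshold. It is an illustration of the mechanism only, with arbitrary example parameters, and not the router implementations that will actually be deployed and tuned in the project.

    # Simplified Random Early Detection (RED) queue; illustrative parameters only.
    import random

    class REDQueue:
        def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002, limit=30):
            self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
            self.weight = weight   # EWMA weight for the averaged queue length
            self.limit = limit     # hard buffer limit, in packets
            self.queue = []
            self.avg = 0.0         # exponentially weighted average queue length

        def enqueue(self, packet):
            # Update the average queue length seen by arriving packets.
            self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
            if len(self.queue) >= self.limit or self.avg >= self.max_th:
                return False       # forced drop: buffer full or average too high
            if self.avg >= self.min_th:
                # Early drop with probability rising linearly between the thresholds.
                p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
                if random.random() < p:
                    return False
            self.queue.append(packet)
            return True

        def dequeue(self):
            return self.queue.pop(0) if self.queue else None

A scheduler such as WFQ or CBQ would then serve several such queues with different weights, providing the traffic priorities referred to in (a).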

Clearly this breadth of programme would be impossible within the resources requested and with the number of sites proposed. However, we believe that the programme is feasible because of the other projects in which the proposed partners are involved. We propose to bring into this project tools, modules and subjective assessment techniques developed principally in the other projects mentioned below. These will include other projects under this initiative, with which we expect to collaborate.

 

5. Applications and Verifications

5.1 Verification

The majority of the experimentation will test the behaviour of the traffic under different network and reservation conditions. Here we will rely heavily on objective measurements of traffic loss, but it will be essential to supplement these with subjective views on the success of the trials; it is notoriously difficult to gauge the suitability of such technologies from objective measurements of traffic loss alone. This form of testing will deliberately stress the networks beyond their capability - within the constraints permitted by the network operators. For example, we would take a given set of bandwidth and reservation parameters, and then attempt to run additional multimedia streams from a server until the quality becomes clearly unacceptable.
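As an indication of the kind of objective loss measurement intended, the fragment below estimates the loss fraction from the sequence numbers carried in an RTP-like media stream, in the spirit of the receiver statistics the Mbone tools already report. It is only a sketch, assuming broadly in-order delivery and a 16-bit wrapping sequence field; the actual measurements will use the application and MERCI measurement tools mentioned under Measurement and Monitoring above.

    def loss_fraction(seq_numbers, wrap=1 << 16):
        # Estimate the fraction of packets lost from the sequence numbers actually
        # received, assuming broadly in-order delivery and a sequence field that
        # wraps at `wrap` (16 bits here, as in RTP).
        if not seq_numbers:
            return 0.0
        expected, received = 1, 1
        prev = seq_numbers[0]
        for seq in seq_numbers[1:]:
            gap = (seq - prev) % wrap   # how far the sender has advanced
            if gap == 0:
                continue                # duplicate packet; ignore it
            expected += gap
            received += 1
            prev = seq
        return (expected - received) / expected

    # Example: packets 3 and 4 missing out of 0..6 gives a loss fraction of 2/7.
    print(loss_fraction([0, 1, 2, 5, 6]))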

Another aspect of the technology is the manageability of Quality of Service; here it is not the quality itself, but the management effort and the stability of the management which is being tested. We expect to obtain both better implementations and a better understanding of the parameters to be used during the course of the project; nevertheless, at any given time we will have a good indication of the ranges of parameters which should be acceptable for real usage. It is for these ranges of parameters that we will run the applications of Section 5.2.

 

5.2 Applications

Running full applications is normally very expensive, and would be unaffordable within the budgets proposed by JISC; it would be possible either to do almost no new technology integration but to undertake real applications, or vice versa. We will undertake real applications - but they will be ones to which HIGHVIEW, JAVIC and MECCANO are committed in any case for other reasons. These include video conferences, seminars, taught courses and group collaboration.

6. Work Plan

NO | ACTIVITY | COMPLETED (Month)
1 | Set up workstations, conference rooms, networks, basic routers | 4
2 | Set up gateways, measurement facilities, servers, routers with PIM and with DVMRP | 2
3 | Do test experiments with PIM and DVMRP; set up RSVP, CBT, WFQ and RED | 3
4 | Do test experiments with WFQ + RSVP, CBT and RSVP, WFQ + RED | 6
5 | Do application level experiments and pilot trials on MECCANO, HIGHVIEW and JAVIC tools with different servers, with single streams | 9
6 | Extend test and application trials to use the later versions of the above tools, with improved QoS reservations | 15

 

The table above gives the schedule of deliverables, many of which could also be demonstrations.

 

7. Deliverables

The deliverables will be a set of recommendations and demonstrations, based on controlled performance assessment, both subjective and objective, of the different components considered above, using the different classes of traffic in real applications. Each of the activities of Section 5, and the demonstrations of Section 4, will be documented as a Deliverable.

 

8. Resources Requested

8.1 Resources needed

The resources required to carry out the full programme are indicated below:

 

ITEM | COSTS (£)
Staff Costs | 22,324
O/H on Staff | 7,873
Recurrent | 1,500
Equipment | 9,000
Travel | 100
Other | 0
Total Budget |
Person months | 10
 

The Proposed Budget for each Partner

In the above, the staff costs are based on current salary scales. The recurrent costs include those for computer supplies and maintenance. The overhead on staff is lower than that normal in EPSRC contracts, in view of the fact that JISC normally pays no overheads. The rates charged do not reflect the actual costs of the programme to the institutions, and are only possible because of the other projects each is undertaking. The overhead costs could be listed specifically; they include a number of direct marginal costs in each institution. The equipment costs are all associated with UCL; in practice, we expect to provide routers to the other institutions based on the PC routers used in the CAIRN project.

9. Outside Dependencies

The work will benefit from other projects in which the partners are involved. We state below what is expected from projects already funded.

9.1 Current Projects

All the basic technology needed for this project is either available already or will come from currently funded projects. Much of the technology of the media tools, the gateways, the measurement tools and the conference room set-up depends on the CEC MERCI project - which is funded until the end of 1997. Suitable user interfaces, audio tools and assessment technology come from the EPSRC- and JISC-funded RELATE and MEDAL projects. Much of the routing technology and the MMCR recorder are being developed under the BT URI on multi-service networks, and under the DARPA multi-service network project.

 

9.2 New Projects Starting Later in the Year

Projects which have not yet started, funded by the EPSRC, JISC and the EC, will also feed into this project. The contributions of the HIGHVIEW, JAVIC and MECCANO projects have already been described.