The Atlanta Demonstration for RADIOACTIVE/SCAMPI

Peter T. Kirstein, Piers O’Hanlon

16 Aug 2000

  1. The Players

We clearly must put on a demonstration at the DARPA Active Networks PI meeting at Georgia Tech, December 6-8, 2000. The facilities will be available from December 4, but the PI meeting itself will be at the end of the week.

    1.1. The Partners

At the meeting, it was agreed that the right partners for us would be:

However, at the meeting, Gary said that he would summarise a note PTK had written, and would lead the work. Since then we have been unable to get any response from him. We contacted Joe Evans from Kansas, who promised to ping Gary, but we have still heard nothing. While we continue to hope that he will collaborate, this note is written without that assumption. For this reason, any mention of Kansas is put in brackets. This note is being sent also to Gary; it may be revised if we hear from him.

It was never clear where UCSC would fit in; without Gary to organise this from the US, we are willing only to collaborate with ISI.

    1.2. The Non-UCL Activities

[Kansas] U has relevant activities in three areas:

It is not obvious that the second and third meld with a demonstration of UCL work. The first could be very useful, provided it is a viable platform on which several of the joint demonstrations could be done. Our only concern in this regard is that we have received no information on its characteristics – for instance, what OSs it supports, or how it can be controlled remotely. We believe it should have been comparatively straightforward to make an integrated demo, and we would have learnt a lot from it. However, in the absence of any response from Gary, we assume that it will not happen.

UC Santa Cruz is mainly concerned with multicast routing and reliable multicast. While this has little to do with Active Networks, it is part of Doug’s programme. From our viewpoint, it could be fitted in well with the UCL model of RADIOACTIVE. Nevertheless, we are not willing to co-ordinate this from the UK.

ISI, in the X-Bone activity, has a way of setting up a set of Hosts as a virtual network, with the set-up and operation secured as a VPN.

    1.3. UCL Activities

UCL in RADIOACTIVE has a set of Application Level Active Services. It has its own execution environment – currently running as an Execution Environment for Proxylets (EEP). The Proxylets themselves are written in Java 1.2, and should run on any system with a Java virtual machine. The proxylets are loaded from Web Servers. Two existing proxylets are an HTTP webcache and the UTG (UCL transcoding gateway). The HTTP webcache operates as a proxy for the client and analyses requests for appropriate MIME types; it down-line loads a proxylet from a trusted server onto a ‘close’ EEP. The UTG allows remote unicast connection to multicast sessions – incorporating transcoding and bandwidth control.
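As a sketch of the execution model just described, the following Java fragment shows how an EEP might down-line load a Proxylet class from a trusted web server. This is illustrative only: the `Proxylet` interface, the `MiniEEP` class and the `EchoProxylet` example are our assumptions, not the actual RADIOACTIVE API, which is not given in this note.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Assumed interface: the real RADIOACTIVE Proxylet API is not shown in
// this note, so these method names are illustrative.
interface Proxylet {
    void start(String[] args) throws Exception;  // begin the service
    void stop();                                 // shut it down cleanly
}

// A minimal EEP sketch: fetches a named Proxylet class from a (trusted)
// web server's code base and instantiates it, much as the webcache
// proxylet is down-line loaded onto a 'close' EEP.
class MiniEEP {
    Proxylet load(URL codeBase, String className) throws Exception {
        ClassLoader loader = new URLClassLoader(new URL[] { codeBase });
        return (Proxylet) loader.loadClass(className)
                                .getDeclaredConstructor()
                                .newInstance();
    }
}

// A trivial proxylet used only to exercise the loader locally.
class EchoProxylet implements Proxylet {
    private boolean running;
    public void start(String[] args) { running = true; }
    public void stop() { running = false; }
    boolean isRunning() { return running; }
}
```

In a real deployment the code base URL would point at one of the Proxylet web servers (S11, S51, S21), and the EEP would verify the server's identity before loading.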

Proxylets can, of course, call other proxylets. Proxylets may be self-contained, or they may act as controllers for other proxylets; these may include dynamic protocol stacks. Several more relevant proxylets are the following:

In addition we hope to secure the RMI connections between the EEPs using SSL. We may make use of XML-based policies for control of system behaviour. The demonstrations could be based on a combination of these Proxylets and specific applications.
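The RMI-over-SSL idea above could be realised with custom RMI socket factories, as in this sketch. Keystore configuration and cipher selection are omitted, and the class names are ours; this shows the mechanism, not the project's actual implementation.

```java
import java.io.IOException;
import java.io.Serializable;
import java.net.ServerSocket;
import java.net.Socket;
import java.rmi.server.RMIClientSocketFactory;
import java.rmi.server.RMIServerSocketFactory;
import javax.net.ssl.SSLServerSocketFactory;
import javax.net.ssl.SSLSocketFactory;

// Wraps every outgoing RMI connection in an SSL socket. The client
// factory must be Serializable because RMI ships it to clients.
class SslClientFactory implements RMIClientSocketFactory, Serializable {
    public Socket createSocket(String host, int port) throws IOException {
        return SSLSocketFactory.getDefault().createSocket(host, port);
    }
}

// Accepts incoming RMI connections over SSL on the server side.
class SslServerFactory implements RMIServerSocketFactory {
    public ServerSocket createServerSocket(int port) throws IOException {
        return SSLServerSocketFactory.getDefault().createServerSocket(port);
    }
}

// An EEP would then export its remote objects with, for example:
//   UnicastRemoteObject.exportObject(obj, 0,
//       new SslClientFactory(), new SslServerFactory());
```

Key material would come from the deployment's keystore (set via the standard `javax.net.ssl.keyStore` properties); without it, handshakes between EEPs will fail.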

  2. The Nature of the UCL Demonstrations

We need to discuss carefully what the point of the demonstrations is, what we want to show in some detail, and how we want to show it.

    2.1. The Point of the Demonstrations

Clearly an important driver is DARPA’s interest in showing that it has an integrated programme. However, we should also be keen to demonstrate the important aspects of the technology being developed. Moreover, doing this well for RADIOACTIVE should also help other projects.

Earlier projects have provided a number of components of secure conferencing and multicast applications. We do not need RADIOACTIVE to demonstrate these. However, in these applications we did discover the importance of having active components inside the network – at boundaries between different technologies, particularly at the edges. These components have also been developed, but have had to be placed manually. This required a priori information about the topology of the network, its users and its facilities. The main purpose of RADIOACTIVE is to show the self-management that application-level active networks provide.

The purpose, therefore, must be to place stubs at many salient points in the networks, and to have these stubs respond both to qualitative heterogeneity of network services and to quantitative heterogeneity of network technology.

    2.2. The Levels of the Demonstrations

The levels we could demonstrate include:

      1. The applications themselves;

      2. The Proxylets/EEPs;

      3. The Abone or some other Network level topology.

We will consider the last first. The Abone provides a consistent view of a network of EEPs, and the EEPs then make up for underlying inconsistencies – e.g. some paths between EEPs can provide QoS, some cannot; some paths can provide multicast, some cannot. If it is convenient, we may try to show our applications operating over the Abone. In any case, it is desirable to show that they can be re-configured in various ways; here the VPN technology of the X-Bone from ISI surely helps and should be demonstrated.

    2.3. Conferencing Demonstrations

At the first level, we have some existing conferencing applications: RAT, VIC and NTE. These already do limited adaptation. They can operate over both unicast and multicast, and can modify their coding of the media streams. NTE needs reliable multicast – but this is provided by every NTE end-point. Nevertheless, there are some Proxylets that help with these applications, and they should be demonstrated. These are:

These should be demonstrated with those applications in the conferencing environment. We should show the various Proxylets started up automatically, at the most appropriate location, when a receiver joins the conference.
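On "the most appropriate location": one plausible selection policy – an assumption on our part, not a project-defined algorithm – is to start each Proxylet on the candidate EEP with the lowest measured round-trip time to the joining receiver. A minimal Java sketch:

```java
import java.util.Map;

// Illustrative EEP placement policy: given measured RTTs (in ms) from
// each candidate EEP (e.g. G11, G23, G24) to the joining receiver,
// pick the closest one. Names and metric are assumptions for the sketch.
class EepSelector {
    static String closestEep(Map<String, Double> rttMillis) {
        String best = null;
        double bestRtt = Double.MAX_VALUE;
        for (Map.Entry<String, Double> e : rttMillis.entrySet()) {
            if (e.getValue() < bestRtt) {
                bestRtt = e.getValue();
                best = e.getKey();
            }
        }
        return best;  // null if no candidates were measured
    }
}
```

In practice the RTT measurements would come from the stub activity on the gateways; richer metrics (loss, available bandwidth) could be substituted without changing the shape of the code.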

    2.4. Other Applications

Another set of less smart applications can also benefit from the Active Service environments. These include the download streams of audio, video and data from normal unicast web servers to ordinary, non-adaptive, non-QoS-capable browsers. Using the Active Service environment, we could demonstrate some new functionality:

All of these require the following:

This could be controlled by policies imported from web repositories (which might limit capabilities for some of the middle layer in the manner of a VPN) through some standard XML-specified interface.
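A minimal sketch of reading such an XML-specified policy is below. The policy schema and element names are invented for illustration, since the note does not define them; only the idea of fetching limits from an XML document is from the text.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Reads one named limit (e.g. a bandwidth cap for a middle-layer VPN
// capability) out of an XML policy document. The element names used by
// callers are hypothetical; a real repository would define a schema.
class PolicyReader {
    static String readLimit(String xml, String limitName) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        Element root = doc.getDocumentElement();
        // Return the text of the first element with the requested name.
        return root.getElementsByTagName(limitName).item(0).getTextContent();
    }
}
```

In the envisaged system the XML string would be fetched from a web repository rather than passed in directly, and the parsed limits would constrain what the middle-layer Proxylets are allowed to do.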

It would be nice to demonstrate this, but we doubt that it will be possible for the December demonstration.

  3. Demonstration Configuration

The Atlanta PI demonstration will need to be worked out carefully, at least between [Kansas] U, UCL and ISI. However, this report makes a stab at a configuration. The demonstrations will run at least the applications of Section 2.3, and possibly those of Section 2.4, on a collection of Hosts, gateways and networks. It is improbable that the applications being demonstrated will impact the configurations, which are shown in Fig. 1 below. Here the Hosts are labelled as for the applications of Section 2.3, but will not differ much for those of Section 2.4.

    Figure 1: Schematic of the DARPA PI Atlanta Configuration

The demonstration will include a number of networks (Nxx), Gateways (Gxx), Hosts (Hxx) and Web Servers (Sxx). The Networks will include Internet-2 connecting UCL (Net1), [Kansas], ISI (Net5) and Georgia Tech (Nets 2, 3 and 4). Net2 will be a wired Ethernet, connected to two separate WAVELAN networks (Nets 3 and 4). G1x and G2x run the audio transcoding and reliable multicast proxylets. H21 will be a fixed host, and H31 and H41 will wander. The VPN consisting of all the hosts, routers and gateways will use machines at UCL, ISI and the demo site; if [Kansas] come in, it may also use their computers. The VPN from Joe should include the fixed end points; for this reason it can include all the components in Net 1, and the end-user devices (and gateways) on Nets 3 and 4. Thus the Servers, Hosts, Edge Nodes and Application Gateways will all be considered. The whole configuration will need further discussion – including whether it is feasible. It should be possible, because the mobile nodes can be multi-homed; for this reason they could be stated to be on both networks, even though they really go from one to the other. There will be Web Servers S11, S51 and S21 at UCL, ISI and Georgia Tech to allow down-loading of Proxylets. It would seem that the whole configuration could be set up as a VPN by Joe, with the Proxylets and applications configured from UCL or Atlanta.

  4. The Demonstrations

Based on our proxylets, we will show audio, video and collaborative text editing using RAT, VIC and NTE. We will set up a VPN with some Hosts running RAT, VIC and NTE; some will be connected to a "terrestrial network" N1, and others to the two "mobile networks" N3 and N4. We will run applications between Host H11 attached to N1, and H31 and H41 on networks N3 and N4. There will be Gateways G11, G12, G23 and G24, which perform the application-level activity on Hosts sited near the routers on the three network boundaries to N1, N3 and N4.

The demonstrations run audio, video and NTE on the collection of Hosts. For this demonstration, all components will run IPv4; the Java support for IPv6 is still too chancy. However, we do expect to move over to IPv6 shortly thereafter, and would like to explore Joe’s X-Bone abilities in this regard. The audio transcoding and reliable multicast proxylets run between G1x and G23 or G24. The VPN is as described in Section 3; during testing, it will also use computers at [Kansas] U.

The UCL Proxylets should be fired up when roving laptops try to come in to a conference via the WAVELANs. There must be some stub activity which can bring in the Proxylets when a Join is requested from one of the mobile nets. As a new machine comes in, or as the user moves to the other net, the new Proxylet is fired up. When fired up, the Proxylet must do topology discovery for the reliable multicast, multicast-unicast conversion, reliable multicast itself, and perhaps transcoding.
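The stub behaviour just described can be sketched as follows. The proxylet names and the fixed launch order are our assumptions; the note specifies only that topology discovery, multicast-unicast conversion, reliable multicast and possibly transcoding must happen when a Join arrives from a mobile net.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the stub that reacts to a Join from a mobile net (N3 or N4)
// by firing up the needed proxylets in order. In the real system each
// launch() would down-line load and start a Proxylet on a nearby EEP;
// here it just records what was started.
class JoinStub {
    private final List<String> started = new ArrayList<>();

    // mobileNet identifies which WAVELAN the Join came from; this sketch
    // does not use it, but a real stub would pick the matching gateway.
    void onJoin(String mobileNet, boolean needsTranscoding) {
        launch("topology-discovery");          // find the multicast tree
        launch("multicast-unicast-gateway");   // bridge to the mobile host
        launch("reliable-multicast");          // needed for NTE
        if (needsTranscoding) {
            launch("transcoder");              // e.g. for low-bandwidth audio
        }
    }

    private void launch(String proxylet) { started.add(proxylet); }

    List<String> startedProxylets() { return started; }
}
```

When a user moves from N3 to N4, the same sequence would be re-run against the other gateway, which is how the multi-homed mobile nodes can appear on both networks.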

  5. Schedule

With the commitments of the various parties, it is going to be necessary to set fairly tight deadlines if the demonstration is to work. The schedule that would best fit what we understand from everyone is the following:

    5.1. Demonstration Schedules

The leader should say by the middle of August roughly what is needed; full details are needed by October 1. Georgia Tech will provide excellent connectivity to the Internet-2 and other networks. They are also prepared to do some connectivity testing, and will provide monitors as needed. Monday and Tuesday will be available to set things up, and Wednesday to Friday for the demonstrations. Ellen Segura is the person coordinating this at Georgia Tech.

    5.2. Requirements for Demo

      5.2.1. From ISI

A web server and an EEP machine running jdk1.2 (Linux or WinNT)

      5.2.2. From Georgia Tech

1. Demo title:

Multimedia Services over Active Service Interfaces in a VPN

    The RADIOACTIVE and X-Bone Projects

2. Contact Names:

3. Major Equipment we plan to bring

4. Equipment to borrow from GT

Ethernet Hub with 4 desktop PCs (2 of which require two Ethernet interfaces each) and 2 laptops

5. Network services required in addition to DNS and Reverse DNS


6. What remote machine connections will be needed in the demo

Connections to ISI and UCL over Internet-2

7. Display machines/equipment for demo

We plan to use borrowed machines at GT and laptops for display. Without knowing the GT demo environment, we initially suggest that we will require two display projectors.


  6. OS Considerations

The UCL proxylets will be run under FreeBSD. We must still evaluate the pros and cons of FreeBSD 3.x vs. 4.x. We would like to remain stable from September 1 to the end of 2000; to achieve that, we must evaluate which version is best for the following: