UCL Demonstrations discussed at DARPA PI Meeting
Portland, Oregon, May 25/26, 2000
Peter T. Kirstein, 13 June 2000
We will be doing a collaborative RADIOACTIVE demonstration at the December DARPA meeting at Georgia Tech in Atlanta, Georgia, from December 4 to December 8. The setup time for the demo will be December 4/5.
After considerable discussion, it seemed that the right partners for us would be:
· Garry Minden (Kansas U, Lawrence, Kansas; email@example.com)
· JJ Garcia-Luna (UCSC, Santa Cruz, California)
· Joe Touch (ISI, Santa Monica, California; firstname.lastname@example.org)
· Peter Kirstein/Jon Crowcroft (UCL, London, UK; email@example.com / firstname.lastname@example.org)
Kansas U has relevant activities in three areas:
It is not obvious that the second and third activities meld with a demonstration of UCL work. The first could be very useful provided we treat it as a platform with which several of the joint demonstrations could be done. Nevertheless, it looks comparatively straightforward to make an integrated demo, and we would learn a lot from it. I will assume that it happens.
UC Santa Cruz is mainly concerned with multicast routing and reliable multicast. While this has little to do with Active Networks, it is part of Doug’s programs. From our viewpoint, it could fit in well with the UCL model of RadioActive.
ISI, in the X-Bone activity, has a way of setting up a set of Hosts as a virtual network, with the set configured and run as a secured VPN.
UCL in RadioActive has a set of Application Level Active Services. It has its own execution environment, currently an Execution Environment for Proxylets (EEP). The proxylets themselves are written in Java 1.2, and should run on any system with a Java JVM. The proxylets are loaded from Web Servers. Two existing proxylets are HTTP and XML parsers.
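As a rough illustration of the proxylet idea, a minimal sketch in Java follows. The interface and class names are assumptions for illustration only; the actual EEP API is not specified in this note.

```java
// Hypothetical minimal proxylet interface; the real RadioActive EEP API
// is not given in this note, so these names are illustrative only.
interface Proxylet {
    String process(String input);
}

// Toy proxylet in the spirit of the HTTP parser mentioned above:
// it extracts the request path from an HTTP request line.
class HttpRequestLineProxylet implements Proxylet {
    public String process(String requestLine) {
        String[] parts = requestLine.split(" ");
        return parts.length > 1 ? parts[1] : "";
    }
}

public class ProxyletDemo {
    public static void main(String[] args) {
        Proxylet p = new HttpRequestLineProxylet();
        // Run the proxylet on a sample request line; prints /index.html
        System.out.println(p.process("GET /index.html HTTP/1.1"));
    }
}
```

Because a proxylet is just a Java class, any JVM-capable host can load and run it, which is what makes loading from Web Servers straightforward.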
Proxylets can, of course, call other proxylets. Proxylets may be self-contained, or they may act as controllers for other proxylets; these may include dynamic protocol stacks. Several more relevant proxylets are the following:
2. Media stream filtering based on UTG
3. Transcoding of audio based on RAT
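A control proxylet chaining other proxylets into a dynamic stack, as described above, could be sketched as follows. This is a hedged illustration: the class name `ChainProxylet` and the stage contents are assumptions, not the RadioActive API.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical "control" proxylet that chains other stages into a
// dynamic stack; names are illustrative, not the RadioActive API.
class ChainProxylet {
    private final List<UnaryOperator<String>> stages;

    ChainProxylet(List<UnaryOperator<String>> stages) {
        this.stages = stages;
    }

    // Pass the data through each stage in order, as a media-stream
    // filter followed by an audio transcoder might do.
    String process(String input) {
        String out = input;
        for (UnaryOperator<String> stage : stages) {
            out = stage.apply(out);
        }
        return out;
    }
}

public class ChainDemo {
    public static void main(String[] args) {
        ChainProxylet chain = new ChainProxylet(Arrays.asList(
            s -> s.trim(),        // stand-in for a filtering stage
            s -> s.toLowerCase()  // stand-in for a transcoding stage
        ));
        // prints: pcm audio
        System.out.println(chain.process("  PCM AUDIO  "));
    }
}
```

The point of the design is that the controller need not know what its stages do; stacks can therefore be assembled dynamically from whatever proxylets have been loaded.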
Based on our proxylets, we could do applications like audio and collaborative text processing using RAT and NTE. We could set up a VPN with some Hosts running RAT and NTE; some will be connected to a “terrestrial network” N1, and others to a couple of “mobile networks” N3 and N4. We will run applications between Host H11 attached to N1, and Hosts H31 and H41 on networks N3 and N4. There are Gateways G11, G12, G13, G23 and G24, which do the application-level activity on Hosts sited near the routers on the three network boundaries to Net 1, Net 3 and Net 4. A schematic is given in Fig. 1.
Figure 1 Schematic of Proposed Configuration
The demonstrations run audio and NTE on the collection of Hosts. The G1x gateways run the reliable multicast and audio transcoding proxylets. The networks N3 and N4 are routed by JJ’s multicast routing and reliable multicast procedures. The VPN, consisting of all the Hosts, routers and gateways, uses machines at UCL, ISI and the demo site; during testing, it will also use computers at Kansas U. The VPN from Joe should include the fixed end points; for this reason it can include all the components in Net 1, and the end-user devices (and gateways) on Nets 3 and 4. Thus the Servers, Hosts, Edge Nodes and Application Gateways will all be considered.
It is probable that the interiors of Nets 2, 3 and 4 are not part of the VPN; they will use some of the active network routing, which cannot co-exist with the Joe Touch VPN. The intention is that Nets 3 and 4 are JJ’s Radio nets, and Net 2 is Garry’s. Whether this is possible needs further discussion. It should be, because the mobile nodes can be multi-homed; they could therefore be stated to be on both networks, even though they really move from one to the other.
It would seem that everything on Net 1 can be done by setting up a VPN from Joe, together with the Proxylets and applications from UCL. The aim in Net 2 is to include some of Garry’s activity. As I understand it, Net 2 is a complete Active Net (PLAN etc.); I do not know whether there are any problems in connecting Net 1 and Net 2. Nets 3 and 4 are entirely JJ’s; it would be possible to connect them directly to Net 1, without Net 2, if the Net 2 connection added no value or posed technical problems.
With the commitments of the various parties, it is going to be necessary to set fairly tight deadlines if the demonstration is to work. The schedule which would best fit in with what I understand from everyone would be the following:
1. UCL makes sure that a minimal set of Proxylets works on several FunnelWeb Hosts locally. Ideally we would like to run the Mbone tools over the concatenation. A first version by the end of June.
2. UCL and ISI establish a VPN embracing all the components involved with hosts that support the tools – by the end of June.
3. UCL gets more information on Garcia S/w by June 20.
4. Connectivity to be established with Kansas by mid-July.
5. Attempt to use the Kansas remote cluster by the end of July.
6. Integration with Garcia and Minden, using VPN, established by mid-August.
7. Review of progress in the last week of August.
The demonstration leader should say by the end of June roughly what is needed; full details are needed by October 1. Georgia Tech will provide excellent connectivity to Internet2 and other networks, is prepared to do some connectivity testing, and will provide monitors as needed. Demonstrators will have Monday/Tuesday to set things up, and Wednesday to Friday for the demonstrations. Ellen Segura is coordinating this at Georgia Tech.
We should consider carefully what we need for our demonstrations; we clearly do not want to put together too much, but it really does need to be of critical size.
I think that we should look again at the way we have built our rack. This sort of device is important, but it should be transportable and loadable. I would like to check whether we could buy such a device, say with 12 processors that can be reconfigured independently. Alternatively, we might build something exactly like Garry Minden’s. I suggest we explore this option.