This document provides details of the Trial Business Systems defined in FlowThru and the Components of which they are composed. It is preceded by details of the business model used and the methodology followed in arriving at the system and component models.
This document complements the briefer first year report of FlowThru.
OVERVIEW OF TRIAL BUSINESS SYSTEMS
DEVELOPMENT GUIDELINES DETAILS
A Development Process for Reusable Management Components
Reuse using Facades
Guidelines for Business Process Analysis
Multi-Domain Management Development using Reusable Management Components
Configuration Management- Network Management Component
Access Session Control Component
Usage Session Control Component
TINA Trouble Reporting System (TTRS)
Fault manager component and simulator
Overview of Trial Business Systems
In the reporting period the project plan has been carefully examined and detailed in order to ensure that the project goals can be effectively achieved within project resources. This is being done by developing guidelines on both development methodologies and integration technology, exercised through a set of Trial Business Systems that are driven by business process flow requirements and the reuse of existing components.
The Trial Business Systems have therefore been chosen carefully both to reflect the needs of industry and to support the goals of the project. They have been selected to cover the three major business process areas identified by the TM Forum, in order to demonstrate the widest range of industrial applicability while at the same time focussing on specific project goals.
All the Trial Business System and component analysis and design activities have used the project's development guidelines, and therefore provide a working case study of the application of the methodology. This will be validated by examination of the models produced and by questioning the people who had to generate and understand those models. Other specific project goals have been addressed by each Trial Business System, as shown in the table in figure 1. The table also shows which components are used in which Trial Business Systems.
| Goals | Fulfilment | Assurance | Accounting |
|---|---|---|---|
| To demonstrate integrated management | Integrating service ordering with network planning and network configuration | Integrating network fault detection with service level trouble ticketing | Integrating network and service level usage metering with accounting |
| To demonstrate integration technology | CORBA Components | CORBA-CMIP gateway | Workflow |
| Components used | Subscription Management, Configuration Management - Network Management, Network Planning | TINA Webstore, TINA Trouble Reporting System, TMN/IMA-TTS, Network Management OSF, Fault Manager Component and Simulator | Access and Usage Session Control, Configuration Management - Connection Management, ATM Accounting, Service Level Accounting |
| Reuse and integrate existing components from other projects | Prospect, REFORM | Prospect, TRUMPET, SUSIE, P.612 | Prospect, VITAL, SUSIE, ReTINA |
TeleManagement Forum Business Process Model
The TMF business model is described in the TMF Telecoms Operations Map. The primary aim of this model is to provide a reference against which the definition of standardised interfaces between service providers and customers, suppliers or other service providers can be conducted. It was based on surveys of existing providers, and framed so as to enable discussion of industrial agreements without having to expose the possibly sensitive internal structures of any particular TMF member. A simplified view of the model is presented in figure 2. The model is partitioned into processes that relate directly to the customer, to internal service development and maintenance, and to the management of the provider's networks and systems. The processes are also grouped vertically into major service management areas, i.e. the fulfilment/delivery of the service, the assurance/maintenance of the service and the billing/accounting for the service. Individual processes are defined in terms of activities within the process and of input and output triggers to the process.
TINA Business Model and Reference Points
The TINA approach to business modelling is to identify a number of business roles and define the reference points that exist between them. The TINA business model defines the following business roles:
In a business situation where a TINA conformant system is required to support inter-domain interactions, the different business administrative domains involved may be characterised by the TINA business roles they play with respect to each other. This determines the TINA business relationships they have with respect to each other. The identification of business relationships then allows inter-domain conformance specifications to be defined by the amalgamation of the reference points related to the business relationships, plus any service specific interactions required. These reference points are defined in segments, with common segments used to cover the core parts of TINA system functionality. The primary segmentation is between access functionality and usage functionality. The access segment is concerned with authentication and authorisation of users, the selection of services and the setting up of the context for the use and management of the service. The usage segment is subdivided into primary usage segments covering the functionality that is the main objective of the service, and ancillary usage segments that address administrative and management functionality. This segmentation of reference point definitions enables any inter-domain reference point to be defined with the minimum set of functionality needed for the business relationships being analysed.
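The segment-based composition described above can be sketched in a few lines; the segment names, feature names and the simple union-based composition rule are illustrative assumptions for the sketch, not definitions from the TINA reference point specifications.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    name: str
    kind: str                 # "access", "primary usage" or "ancillary usage"
    features: frozenset       # interactions the segment covers

def compose_reference_point(segments, service_specific=()):
    """Amalgamate the segments needed for a business relationship, plus any
    service-specific interactions, into one conformance feature set."""
    features = set(service_specific)
    for seg in segments:
        features |= seg.features
    return features

# Hypothetical segments for one inter-domain business relationship.
access = Segment("Access", "access",
                 frozenset({"authenticate", "select_service"}))
usage = Segment("StreamUsage", "primary usage",
                frozenset({"start_stream"}))

rp = compose_reference_point([access, usage],
                             service_specific={"negotiate_qos"})
```

The point of the sketch is that a reference point carries only the minimum set of functionality for the relationship analysed: segments not relevant to the relationship simply never enter the composition.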
Mapping TMF Business Processes to TINA Business Roles
This mapping formed one of the early achievements of the project, having already been accepted for publication and also forming the basis of the project's close collaboration with both TINA and the TM Forum.
Though TINA defines a fairly restrictive architecture, it has been demonstrated that certain service control and management concepts can be applied to other frameworks, e.g. Internet services. Mapping the more flexible TMF business model onto the TINA one enables us to determine where specific TINA management solutions can be more widely applied. It also provides us with an example of how the TMF business model can be used in the analysis of other management frameworks.
Before examining such a mapping, however, the core differences between the two models must be appreciated. Firstly, the TMF Operations Map defines general business processes in existing telecoms businesses. These may be human based processes or automated ones. Part of the intention of the Operations Map is to help providers identify and prioritise which processes they wish to automate, and therefore which inter-process interactions would benefit from industry agreements. The TINA model restricts itself to reference points that will yield automated interfaces. Also, the TMF Operations Map is concerned only with service and network management processes, while the TINA reference points cover issues of service and network control as well. TINA also assumes its Distributed Processing Environment (essentially CORBA) will be used to implement RP interactions, while the TMF Operations Map makes no assumptions about implementation technology. Functionally, TINA management is aimed specifically at managing TINA services (multimedia, multiparty, multiway, mobile) and network resources (connection oriented, broadband), while the TM Forum model is less specific, being derived from the management of more established services and networks, i.e. POTS, Frame Relay etc. TINA also specifically covers information services, while these have probably not influenced the initial TMF Operations Map to a large extent. Finally, the TM Forum Operations Map prioritises issues of process interaction and information flow between processes, while the TINA BM and RPs are focused on the development of detailed RP specifications, based on other ODP-based TINA specifications, with little direct attention paid to information flows.
The basic approach to mapping the TM Forum Operations Map onto the TINA business model and reference points is to identify which TM Forum processes operate in which TINA business roles. Note that some TM Forum processes may be present in more than one TINA business role and that some TINA business roles may contain no TM Forum business processes. This reflects the relative differences in the concerns of the two bodies; the TM Forum, for instance, sees little current application for the Broker role in management.
An initial mapping of these TM Forum business processes onto TINA business roles is given in figure 4.
Given the possibility of merging the TM Forum and TINA models, one pressing issue is to align the notation and process used in defining both TM Forum Solution Sets and TINA RPs. Though both aim to define open interfaces between business roles, the TM Forum approach is based on analysing business process information flows, while the TINA approach involves integrating component interfaces into suitably modular segments. A methodological approach to integrating these two approaches is addressed in FlowThru, which is liaising with both groups with the aim of achieving an agreement on this.
Development Guidelines Details
The FlowThru Development guidelines present three ‘elements’ to ensuring successful and flexible development of multi-domain service management systems, namely: a development process which is customised to support management system component development and component re-use; the development of business models capable of representing the underlying business processes for these systems; and integration strategies designed to assist the flexible and timely co-operation of these management components both within a single domain as well as across domain boundaries to realise multi-domain management systems.
The guidelines present a categorisation of the types of organisations that influence the operation of such delivery chains. The guidelines then present the emergent trends in management systems development processes, business process modelling and reusable component development. They also identify several possible management integration technologies and techniques. They then describe guidelines for a development process for reusable management components, a business process representation to support multimedia telecommunication management systems and guidelines as to the application of two integration technologies to support management component co-operation and interoperability. The guidelines illustrate how these three elements can be combined to successfully develop co-operative management solutions across service delivery chains.
Multi-Domain Management Development Process
This section proposes a customised management development process. The development process facilitates both the design of reusable components as well as the overall multi-domain systems development composed of reusable management components. The development process indicates where architectural and integration technologies influence the design of the components and systems. The development process generates the relevant information required by a workflow management system to perform automated, flexible component integration and interoperability.
The FlowThru development process supports both component reuse and analysis of business processes as part of a telecommunications management system development, and draws heavily on the Object Oriented Software Engineering (OOSE) development methodology. This methodology has been further extended to cover software reuse, an extension that also benefits from the application of UML notations to the process.
The proposed multi-domain management development process is an incremental process which supports the phases of requirements capture, analysis modelling, design modelling, implementation, testing, deployment and integration. Figure 5 presents the design process as a cycle. The inner cycle illustrates the classical OOSE phases of the development cycle. The main cycle (signified with large circular arrows) identifies the actual steps that need to be performed at each stage of the development of the management components and systems. The text external to the design process illustrates the influences and constraints particular to the development of management systems within the context of the target implementation environment.
The development process above exhibits a co-existence of OO and component-based development methodologies. In the context of multi-domain management systems, an OO development methodology is used to describe a set of business processes that will be supported by a number of reusable management components solving specific business problems.
A Development Process for Reusable Management Components
Reusable components are typically presented to system developers as sets of libraries, i.e. as a set of software modules and definitions of the individual operations of the component. Thus a component is presented in terms of that component's design model and its software. This can cause problems if changes are required in order to reuse the component. Consider, for example, a component which is part of a framework. The framework may be general, e.g. CORBA Services, or aimed at supporting systems that solve problems in a particular problem domain, e.g. the TINA Service Architecture. In either case the framework will provide (or assume!) some high level architectural and technological guidance on how components can be integrated together and how they can support the development of the system. Such frameworks are often considered at the analysis stage, so that the analysis model is structured in a way that accommodates the inclusion of the framework's components into the design model. This situation is depicted in figure 6a. However, frameworks typically only give general guidance on the use of components, and the suitability or otherwise of individual components in satisfying requirements still needs to be considered in the design process.
For telecommunications management systems development, such a typical component reuse situation is difficult to apply because there is no commonly accepted framework that supports a suitably wide range of components. Currently this is a problem domain where several different, though probably complementary, frameworks can be used, such as TMN and its SMFs, CORBA and TINA. The TM Forum aims to define a framework that encompasses these and other suitable architectural and technological frameworks; however, no such over-arching framework is currently available.
This development process for management component reuse is motivated by the absence of such a well defined, common framework. Instead it attempts to provide guidance on how components can be specified in a more self-contained manner that is commonly understood. In this way, decisions about reuse can be made based on the suitability of individual components rather than a wider assessment of the suitability of an entire framework. The approach is aimed at making decisions based on the architectural and functional aspects of a component rather than its technology. The technology is treated as an orthogonal issue that is handled primarily through the employment of suitable integration technologies and techniques.
The basis of the approach for reusable components is that components are not presented just as units of design and software, but should be packaged together with the analysis model of the component, rather than being strongly integrated into a specific component framework. If the modelling techniques used for the analysis model of the component are similar to that used for the modelling of the system in which they might be included, then it is much easier to ensure that the component is suitable for use in the system. In addition the system’s analysis model can directly use the abstractions of the various components it reuses, easing the task of requirements analysis, and ensuring at an early stage compatibility between components and the system requirements. This analysis model-based reuse approach is depicted in figure 6b.
The presentation of a component for reuse is known as a facade. A facade presents the re-user of a component with the information needed to effectively reuse the component, while at the same time hiding from the re-user unnecessary design and implementation details about the component. In the analysis model-based reuse approach, the facade consists not just of reusable code and the design model details needed for reuse, but also the analysis model relevant to the design model part of the facade. The following section provides more details on the generation of facades.
A component may present several different facades, possibly aimed at different types of re-users, e.g. black-box or white-box re-users. A component may have various releases of a facade to reflect the evolution of the component. The usefulness of the facade is strengthened if there is clear traceability between the different models, so that within the facade re-users can easily determine which parts are useful for them, working down from the analysis model level.
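The facade packaging and top-down traceability described above can be sketched as a simple data structure; the class, its fields and the example content are illustrative assumptions for the sketch, not definitions from the FlowThru guidelines.

```python
from dataclasses import dataclass

@dataclass
class Facade:
    """A reuse package: the analysis model plus the design-model slice and
    software artefacts needed by a particular class of re-user."""
    component: str
    release: str
    audience: str            # e.g. "black-box" or "white-box"
    analysis_model: dict     # use case -> analysis objects involved
    design_model: dict       # analysis object -> design elements exposed
    artefacts: list          # e.g. IDL files, libraries

    def trace(self, use_case):
        """Work down from the analysis level to the design elements that
        realise a given use case (the traceability mentioned above)."""
        objects = self.analysis_model.get(use_case, [])
        return {obj: self.design_model.get(obj, []) for obj in objects}

# Hypothetical facade for a subscription component.
facade = Facade(
    component="Subscription Management", release="1.0", audience="black-box",
    analysis_model={"Subscribe Customer": ["Subscription Registrar"]},
    design_model={"Subscription Registrar": ["subscribe()", "cancel()"]},
    artefacts=["subscription.idl"],
)
```

A re-user deciding on suitability would call `facade.trace("Subscribe Customer")` and inspect only the design elements relevant to that use case, never the component's full internal models.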
Obviously the construction of a facade from a component's internal models, generated during its software development process, will be greatly eased if the same type of modelling approach was used for that process. Strong traceability between the component's internal models will ease the selection of parts of the model for inclusion in the facade, based, for instance, on a set of use cases. However, it is not a requirement for the component to have been originally developed and documented using an OOSE-like process, and part of the benefit of facades should be that they can hide the internal models if necessary.
For instance, if a component has been developed and documented using ODP viewpoints, the following mappings can be made, effectively reverse-engineering the ODP model into the facade format.
While OOSE provides good traceability through the development cycle of a system, at the analysis stage use cases only help us analyse the interactions between a system and the actors that use it at the system boundary. Use cases are not good at describing the internal operation of a system. Even if sub-systems are identified, and use cases for each subsystem generated, these use cases may only interact with external roles, or subsystems modelled as roles. There is no well-understood mechanism for tracing the interactions between different use cases in different subsystems. Use cases in different subsystems may be related by generalisation relationships, e.g. "uses" or "extends", but these do not define details of the interactions, only that they are related. Use case text can include details of interactions with different subsystems, but this quickly becomes unwieldy, generating in effect a textual description of the system’s internal functionality.
However, the identification of interactions between subsystems is typically the kind of analysis performed in business process re-engineering activities. Here, businesses aim to define the major processes they provide to their customers, which at a high level can be adequately captured with use cases. However, the aim of business process re-engineering is to analyse the internal processes of an organisation to understand how they interact to provide value to customers, and how the structure of these processes and their interactions can be changed to improve customer services and reduce costs. This problem is complicated for management systems by the interactions often required with processes in other organisations. The framework's development methodology must therefore address the problem of analysing the requirements for a multi-domain management task that must be performed by interactions between management processes, some of which will support automated interactions and some of which will not.
The TM Forum Operations Map provides a model of business processes which can fairly confidently be taken to reflect the typical operations of a service provider. This can therefore provide a starting point for the analysis and design of common solutions to management problems. It must also, however, allow the analysis of a specific provider's business processes in order to identify where existing solutions, available as reusable components, can be applied. Analysis of business processes is typically performed by identifying discrete activities and the events that propagate the control of execution of a task between activities. This may involve different events being triggered in different conditions, and thus different sequences of activities being followed in the execution of a task. Such an analysis should also represent parallel activities, the conditions for their completion and the synchronisation of control. Specific activities may also be broken down hierarchically into finer grained activities.
A common representation of such control flow is event-driven process chains, a graphical modelling technique that allows activities to be associated with organisational roles and with objects representing business information. The inclusion of activity diagrams in UML allows it to support a similar type of modelling diagram. The analysis of a specific management task in a specific business scenario can therefore be initially described in terms of a use case giving the interactions of the system with human roles. The use case may not necessarily mention the internal business processes. These processes therefore need to be analysed using activity diagrams. The activities can be placed within swim-lanes representing TM Forum business processes, possibly residing in different administrative domains. This will ease the identification of where existing TM Forum business agreements match the requirements of the task at hand.
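The activity/swim-lane analysis described above can be pictured as a small event-driven process chain in code; the activity names, event names and TM Forum process lanes below are invented for the example.

```python
from collections import defaultdict

class ProcessChain:
    """Activities sit in swim-lanes (TM Forum business processes) and
    events pass the control of execution between them."""

    def __init__(self):
        self.lane_of = {}                   # activity -> swim-lane
        self.next_on = defaultdict(dict)    # activity -> {event: next activity}

    def activity(self, name, lane):
        self.lane_of[name] = lane

    def link(self, src, event, dst):
        self.next_on[src][event] = dst

    def run(self, start, events):
        """Follow the chain from `start`, consuming one event per step;
        returning (activity, lane) pairs makes lane crossings visible."""
        path, current = [(start, self.lane_of[start])], start
        for ev in events:
            current = self.next_on[current][ev]
            path.append((current, self.lane_of[current]))
        return path

chain = ProcessChain()
chain.activity("validate order", "Order Handling")
chain.activity("allocate NAP", "Network Provisioning")
chain.link("validate order", "order accepted", "allocate NAP")
```

Running `chain.run("validate order", ["order accepted"])` traces a task as it crosses from the Order Handling lane into Network Provisioning, which is exactly the kind of inter-process boundary the analysis aims to expose.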
Multi-Domain Management Development using Re-usable Management Components
The proposed multi-domain management development process comprises the modelling of the business processes and their constituent activities, the mapping between these management activities and reusable management components, and the integration of the components. The development process outlined below is customised for the use of a workflow engine to perform component integration. The development cycle identifies the steps that need to be performed at each stage of the development of the management systems. The text external to the design process illustrates the influences and constraints particular to the development of management systems formed from reusable management components within the context of the target implementation environment.
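One way to picture the integration step, in which a workflow engine drives components according to the activity-to-component mapping produced by the development process, is sketched below; the engine, the bindings and the component stubs are all hypothetical, not FlowThru's actual workflow technology.

```python
class WorkflowEngine:
    """Executes a process definition by invoking the management component
    bound to each activity, threading a shared task context through them."""

    def __init__(self, bindings):
        self.bindings = bindings     # activity name -> component callable

    def execute(self, process):
        context = {}
        for activity in process:
            component = self.bindings[activity]
            context = component(context)
        return context

# Hypothetical component stubs standing in for real management components.
def order_handler(ctx):
    return {**ctx, "order": "accepted"}

def network_provisioner(ctx):
    return {**ctx, "nap": "activated"}

engine = WorkflowEngine({"handle order": order_handler,
                         "provision network": network_provisioner})
result = engine.execute(["handle order", "provision network"])
```

The design point is that the components know nothing of each other: the ordering of activities lives entirely in the process definition, so the same components can be recombined for a different business process without code changes.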
System Model Details - Trial Business Systems
The Fulfilment Business System
The FlowThru Fulfilment Business System aims to provide an example of the interactions between processes involved in service and network management related to the provision of a service to a customer. The fulfilment business process is shown in figure 2. The scenario we consider in FlowThru does not address the Sales or the Service Planning/Development processes. Though links to some of the other TM Forum processes are identified, the operation of these other processes is also not explicitly addressed. The business processes addressed are therefore Order Handling, Service Configuration, Network Planning/Development, Network Provisioning and Network Inventory Management.
System Description
In FlowThru, the Fulfilment Business System concentrates on aspects associated with the appropriate network planning and provisioning activities for the delivery of services that a customer has been subscribed to. In particular, we consider a scenario that deals with the pre-service phase processes. That is, system set-up and ordering/subscription as well as the appropriate network planning and provisioning with respect to the established subscription contracts.
In essence, the system deals with the provisioning of ATM connectivity services to customers. The customer subscribes to an ATM service offered by a connectivity provider that is capable of offering ATM connectivity with guaranteed QoS, in terms agreed by the customer upon creation of his subscription contract. The connectivity provider undertakes appropriate network planning and configuration activities based on a predicted usage model that is built upon existing customers’ subscription contracts information.
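A predicted usage model of the kind described above might be built as follows; the aggregation rule, field names and figures are assumptions made for the sketch, not FlowThru's actual planning algorithm.

```python
def predicted_busy_hour_load(contracts):
    """Sum the guaranteed bandwidth per (source, destination) NAP pair,
    weighted by each contract's expected busy-hour activity factor."""
    load = {}
    for c in contracts:
        key = (c["src_nap"], c["dst_nap"])
        load[key] = load.get(key, 0.0) + c["bandwidth_mbps"] * c["activity"]
    return load

# Hypothetical subscription contracts between two NAPs.
contracts = [
    {"src_nap": "A", "dst_nap": "B", "bandwidth_mbps": 10.0, "activity": 0.5},
    {"src_nap": "A", "dst_nap": "B", "bandwidth_mbps": 4.0,  "activity": 1.0},
]
load = predicted_busy_hour_load(contracts)
```

The connectivity provider would compare such a predicted load against the capacity of the current Virtual Path logical network to decide whether reconfiguration or new installation is needed.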
Figure 8 depicts the high-level system description of the Fulfilment Business System. The figure depicts the involved components and identifies the boundary objects where interactions cross component boundaries.
The following components are included in the system:
The Fulfilment Business System will be used for planning and providing network connectivity at the ‘pre-service’ phase of the service lifecycle. We assume the following preconditions to hold:
Lastly, one of the future enhancements envisaged to the Network Planner boundary is the interaction with a Network Performance Verification component that could be used for verifying the estimated traffic predictions with respect to the actual network load and emit performance degradation alerts when needed.
The Assurance Business System
The Assurance Business System concentrates on problem handling aspects of the in-service phase. It makes use of Subscription / Service Level Agreement (SLA) information (as defined in the Fulfilment Business System) to identify SLA violations. Since SLA violations have an impact on billing, a discounting scenario has been considered within the Assurance Business System to highlight closely related interactions with the Accounting Business System.
The system presented here aims to provide an example of the interactions between processes involved in the TMF customer access, service and network management levels. It will demonstrate the information flow needed to improve the quality of service (QoS) provided to a customer. The Assurance Business System will be based on the processes identified in the TMF Business Process Model (see figure 2). The specific scenarios adopted here focus on the QoS management processes (CQM and SQM). Though links to some of the other TMF BPM processes are identified, the operation of these other processes is not addressed. The business process areas addressed are therefore Problem Handling, Customer QoS Management, Service Problem Resolution, Service Quality Management and Network Maintenance and Planning. Since an SLA violation may have an impact on the payment of a customer, the link to the rating and discounting process will also be demonstrated in the assurance demo.
System Description
Figure 9 gives a high-level system description of the Assurance Business System including the various components. The diagram also identifies the boundary objects where interactions cross either domain boundaries or component boundaries. The following components comprise the system:
The Assurance Business System is used to demonstrate six trial scenarios for distributed fault management: connectivity problems/QoS degradations detected by the customer or by the connectivity provider (#1 or #2); TINA service (WebStore) problems/QoS degradations detected by the customer or by the WebStore provider; identification of SLA violations; and granting of discounts caused by SLA violations. The execution of these scenarios will require interworking between all of the components depicted in figure 9. Each identified use case corresponds to a trial scenario, although more complicated scenarios exhibiting more than one use case might be considered for the FlowThru Trial demonstration.
The Accounting Business System
The accounting system is a realisation of the TM Forum Billing Business Process. The system consists of several co-operating service and network management components that provide a combined and coherent accounting interface to the service customer.
With respect to the TM Forum Business Process Model, the billing business process comprises the Invoice and Collection, Rating and Discounting, and Network Data Management processes.
Since the existing components that FlowThru will re-engineer and integrate for accounting were originally built on TINA architectural and modelling concepts, a mapping of these components onto TINA business roles has proven useful. In the context of FlowThru, the components map onto TINA business roles as follows:
The specific scenario adopted in the Accounting Business System focuses on aspects of the accounting process for the connectivity provider, the third-party (3-Pty) service provider and the invoicing (but not collection) part of the retailer.
System Description
Figure 10 shows the boundary objects of all the components comprising the accounting demonstration system, their interfaces and their interactions. The system consists of six separate components, covering aspects of service and network control and management. The components are:
Accounting Business Scenarios Overview
The components outlined above are envisaged to co-operate in the following fashion.
In the ‘pre-service’ phase:
System Model Details - Components
Subscription management comprises all management functions needed in order to define service offerings, administer customers and users, and manage the details of service provisioning. For instance, the component allows for authorisation and barring of users' access to specific services, and the addition and removal of network sites from which these users may access the service. The design is based on the subscription model developed in TINA [tina-sa].
The subscription component use cases have been developed based on the simple business model of a service customer and a service provider having a contractual relationship with each other. The subscription component is located in the provider’s administrative domain and is used by two types of actors: administrators of the service provider and of the service customer.
The use cases implemented by the component cover the management of services, subscribers, and subscriptions. Service management is the definition and adaptation of offered telecommunications services, from a management point of view. Management of subscribers consists of the creation and deletion of subscribers, the update of subscriber details, and the definition of subscribers' end-user groups and network sites. Finally, subscription management covers subscribing customers to services, updating subscription details, cancelling subscriptions, and the authorisation of end-users for specific services.
The subscription Computational Objects (CO) define generic capabilities needed to manage, from the subscription point of view, subscriber organisations and end-users, different types of services, and subscriptions to these services. The COs include: Service Template Handler, Subscriber Manager, Subscription Registrar, and Subscription Agent.
The Subscriber Manager manages information associated with subscribers, e.g. subscriber details and user groups. The Service Template Handler is responsible for handling service templates that represent service capabilities provided by the provider. It also provides operations, such as verification of subscription parameters, to the corresponding Subscription Registrar. The Subscription Registrar acts on behalf of a provider in dealing with subscribers wishing to subscribe to a service. A single Subscription Registrar can collect information for one service from one or more subscribers and will handle exactly one service, since each service is likely to have different requirements for subscribing a customer to it. Among other things, a Subscription Registrar is responsible for the management of subscription contracts. The Subscription Agent is closely related to the User Agent CO, which manages the access session in the TINA service architecture. It sends appropriate subscription information to the User Agent in reply to its requests.
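The one-service-per-registrar split, with parameter verification delegated to the Service Template Handler, might look roughly as follows; the operation names and subscription parameters are illustrative assumptions, not the Prospect IDL.

```python
class ServiceTemplateHandler:
    """Holds service templates and verifies subscription parameters
    on behalf of the Subscription Registrars."""

    def __init__(self, templates):
        self.templates = templates          # service -> required parameter names

    def verify(self, service, params):
        required = self.templates[service]
        return required <= set(params)      # all required parameters present?

class SubscriptionRegistrar:
    """Handles exactly one service, collecting subscription information
    from subscribers and managing the resulting contracts."""

    def __init__(self, service, template_handler):
        self.service = service
        self.templates = template_handler
        self.contracts = {}                 # subscriber -> contract details

    def subscribe(self, subscriber, params):
        if not self.templates.verify(self.service, params):
            raise ValueError("subscription parameters incomplete")
        self.contracts[subscriber] = dict(params)
        return True

# Hypothetical service template and subscription.
handler = ServiceTemplateHandler({"webstore": {"qos_class", "sites"}})
registrar = SubscriptionRegistrar("webstore", handler)
ok = registrar.subscribe("acme", {"qos_class": "gold", "sites": ["dublin"]})
```

Because each registrar is tied to a single service template, per-service subscription rules stay local to that registrar, which is the rationale given above for the one-service-per-registrar design.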
The subscription component may be accessed by a separate management application or a dedicated management tool. The implementation used was developed in the Prospect project, where it was reused in several different services.
Return to table of contents
Configuration Management - Network Management Component
The Configuration Management (ConfM) component is part of the fulfilment scenario in FlowThru. Since the components taking part in this scenario are organised in a hierarchical, layered fashion, the ConfM component sits at the lowest level of this hierarchy, providing configuration management capabilities for the underlying ATM network.
The key functionality of the ConfM component is to keep a logical database of the ATM network configuration, which comprises both static resources (e.g. cross-connects/switches, links, ports) and semi-dynamic resources (e.g. ATM Virtual Paths), which form a logical network over the underlying physical network. In addition, the ConfM component provides configuration capabilities upon the request of its clients, being able to set up and tear down ATM Virtual Paths and to activate/deactivate a particular termination point. In terms of the TMN layered hierarchy, ConfM covers both network and element management functionality.
In the fulfilment scenario, ConfM is accessed by the Order Handler and Network Planner components. The Order Handler interacts with ConfM in order to verify whether a Network Access Point (NAP) is available for a new subscriber and to subsequently activate it. The Network Planner interacts with ConfM in order to reconfigure the existing ATM Virtual Path logical network when the addition of new subscribers reaches a threshold that implies a different expected utilisation pattern. If the existing resources are no longer adequate for the "busy-hour" traffic, the installation of new nodes and links will be requested before the reconfiguration of the logical network takes place.
The ConfM consists internally of two sub-components: a passive sub-component, the Network Map (ConfM-NM), which keeps the current network topology, and an active sub-component, the Configuration Manager (ConfM-CM), which undertakes the manipulation of network elements. The ConfM component is based on CORBA, uses the TINA Network Resource Information Model (NRIM) and borrows ideas from the TMN in the sense that the ConfM-NM component offers a CMIS-like interface in CORBA IDL, including facilities such as scoping and filtering.
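The CMIS-like scoping and filtering offered by the Network Map can be sketched as a query over a containment tree of managed objects. The object model and function names below are illustrative assumptions, not the component's actual CORBA IDL.

```python
# Minimal sketch of a CMIS-like scoped and filtered query over the
# Network Map, assuming a simple containment tree of managed objects.

class ManagedObject:
    def __init__(self, name, obj_class, **attrs):
        self.name, self.obj_class, self.attrs = name, obj_class, attrs
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def scoped_get(base, level, filter_fn=lambda mo: True):
    """Return objects in the subtree down to `level` below `base`
    (CMIS 'scope'), keeping only those matching `filter_fn`
    (CMIS 'filter')."""
    result = []
    frontier = [(base, 0)]
    while frontier:
        mo, depth = frontier.pop()
        if filter_fn(mo):
            result.append(mo)
        if depth < level:
            frontier.extend((c, depth + 1) for c in mo.children)
    return result
```

A client such as the Order Handler could then, for example, scope two levels below the network object and filter for free ports when checking NAP availability.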
Return to table of contents
The Network Planner (NP) component is located at the network management level of the TMN hierarchy. From the TMF viewpoint, it maps to the "Network Planning and Development" process.
This component is responsible for designing and maintaining the VP layer network on the basis of the predicted traffic and the supported classes of service (CoSs). More specifically, the Network Planner consists of the following three subsystems:
The Class Of Service Model (CoSM) subsystem.
The CoSM maintains a repository of the CoSs supported by the network. Each CoS is defined in terms of its bandwidth characteristics, performance targets and restoration characteristics. It should be noted that the CoSM is based on, but also constrains, the services offered to users through the Subscription Management component.
The Predicted Usage Model (PUM) subsystem.
The functionality of the PUM encompasses the maintenance of a valid model of the anticipated traffic to the network. Based on this model, the PUM supplies the VPC_TD (see below) with the traffic predictions required for the design of the VP layer. The VP layer (the set of VPCs) and the static aspects of the VC layer (the set of admissible routes per source-destination pair and CoS) are constructed so as to satisfy the predicted traffic demand.
Anticipated traffic is modelled in terms of the numbers of connection requests per CoS between source-destination pairs. The model details how the connection requests and characteristics (e.g. holding time) change: hour by hour over the day; day by day over the week; and week by week over the year.
The anticipated traffic model will be acquired from information regarding the subscribers of the network. This information is given by the Subscription Management component and it describes the number of users that are subscribed to use the network according to specific CoS characteristics as well as the destination distribution for their connectivity sessions given specific sources (i.e. NAPs). Also, data regarding historical network traffic will be taken into account in order to make predictions as accurate as possible.
Normally, the model needs to be validated and modified against actual network usage regarding end-to-end connectivity. However, this will not be considered within the context of FlowThru.
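The anticipated-traffic model described above can be sketched as a small data structure keyed by source, destination and CoS, holding hourly connection-request counts. The layout is an assumption for illustration only.

```python
# Sketch of the Predicted Usage Model: connection requests per CoS
# between source-destination pairs, varying hour by hour over the day.
from collections import defaultdict

class PredictedUsageModel:
    def __init__(self):
        # (source, dest, cos) -> list of 24 hourly connection-request counts
        self._demand = defaultdict(lambda: [0] * 24)

    def set_hourly_demand(self, src, dst, cos, hourly_counts):
        if len(hourly_counts) != 24:
            raise ValueError("expected 24 hourly counts")
        self._demand[(src, dst, cos)] = list(hourly_counts)

    def busy_hour_demand(self, src, dst, cos):
        """Peak hourly demand, as used for dimensioning the VP layer
        against 'busy-hour' traffic."""
        return max(self._demand[(src, dst, cos)])
```

The day-by-day and week-by-week variations mentioned above would extend this structure with further time dimensions in the same manner.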
The VPC Topology Design (VPC_TD) subsystem.
The main task of VPC_TD is to design and redesign, whenever necessary, working VPCs and sets of admissible routes taking into account the existing CoSs. In addition, the existing component (made available by the REFORM project) also designs suitable protection VPCs and allocates the appropriate amount of protection resources, thus providing for a resilient network design. VPC_TD’s task is based on the predicted traffic and the physical constraints (e.g. connectivity, capacity of links) of the network. These input parameters are subject to changes over the lifetime of a network. VPC_TD offers a certain level of flexibility to adapt to such changes.
The tasks of designing VPCs and routes based on them should not be seen in isolation as they are tightly coupled; routes are based on VPCs; and VPCs are defined for routing. Along with these tasks, the tasks of designing appropriate protection VPCs and subsequently allocating protection resources across the network could be considered. These tasks cannot be done in isolation but in an integrated manner and should follow the design of working VPCs and defined routes. All these tasks are part of an iterative process involving complex optimisation problems aiming at providing cost-effective design solutions. The optimisation targets are set according to the business policy of the network operator. Part of this process will be to identify all possible routes between source and destinations and to choose a certain subset of these, according to the performance targets of the CoSs to be carried. As a result, not all routes defined between a source destination pair might be admissible to all CoSs.
The VPC_TD functionality has both static and dynamic aspects. The static aspect is related to the network planning activity and is used to initially design the network in terms of working VPCs, routes per CoS based on them and protection VPCs and the appropriate amount of protection resources to cope with faults. This part is performed at network start-up time.
The dynamic aspects cater for changes in the predicted network traffic, for prediction inaccuracies that could not be resolved by the lower level components, for the creation of new VPC or the deletion of old ones and for handling fault situations (including changes in the physical topology). As a result, the VPC network and the protection VPCs are reconfigured.
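The route-admissibility step described above (admitting only a subset of candidate routes per CoS) can be sketched as follows. The delay metric and budget are illustrative assumptions; the real optimisation involves several coupled performance targets.

```python
# Sketch of route admissibility per CoS: of all candidate routes between
# a source-destination pair, only those meeting the CoS's performance
# target (here, a simple end-to-end delay budget) are admitted.

def admissible_routes(candidate_routes, link_delay, cos_delay_budget):
    """candidate_routes: list of routes, each a list of link ids.
    link_delay: dict mapping link id -> delay contribution.
    Returns the routes whose total delay is within the CoS budget."""
    admitted = []
    for route in candidate_routes:
        total = sum(link_delay[link] for link in route)
        if total <= cos_delay_budget:
            admitted.append(route)
    return admitted
```

As the text notes, a route admissible for a loose CoS may thus be excluded for a more demanding one between the same source-destination pair.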
Return to table of contents
Access Session Control Component
The access session represents a secure relationship between a user and a provider of services. A user needs to have an access session established before engaging in any service. The access session control component originates from the VITAL project and is aligned with the TINA Ret RP and Service Component Specifications. The functionality provided by the VITAL component can be summarised as:
Return to table of contents
Usage Session Control Component
The service session represents an instance of a specific service and can involve a service provider and any number of service users. For each of the users involved in the service session, this service session needs to be associated with an established access session with the provider. The usage session control component originates from the VITAL project and is aligned with the TINA Ret RP and Service Component Specifications. The functionality provided by the VITAL component implements the TINA session model feature sets, which can be summarised as:
The WebStore will be used in the assurance scenario as a sample TINA service application to demonstrate TINA service fault scenarios and the related trouble and problem handling. The WebStore system can be seen in the enhanced component model (see figure below). The service offers a Web interface to customers for uploading and downloading documents to a centralised or distributed document store. The TINA Middleware Environment covers the required Service Architecture components embedding the WebStore components depicted in figure 11, including the TTRS, provider agent, user agent, user and service session managers, subscription, accounting and the TINA DPE.
An initial version of the WebStore was developed in the ACTS project Prospect. It has been used in a tele-education scenario to gain experience in multi-domain service management, multi-provider service composition and technology integration (Internet (SNMP), OSI (CMIS/CMIP) and TMN-based network management, and CORBA/TINA-based service management). The FlowThru version will be adapted to the current TINA 5.0 specifications and will provide an additional interface to the TTRS, as well as direct upload and download of documents within the underlying databases.
Return to table of contents
TINA Trouble Reporting System (TTRS)
The TINA Trouble Reporting System (TTRS) will be implemented as a TINA service that realises the problem handling management process at the service layer of the FlowThru system. The service layer is based on the TINA service architecture, thus, the TINA TR service should be considered as a particular TINA management service providing all the TINA service model functionality.
The TINA TRS provides facilities for handling so-called trouble reports (TRs, at the service level) and trouble tickets (TTs, at the network/connectivity service level), which inform customers about malfunctions of the services to which they are subscribed. The TINA TRS provides a service-level view of assurance to customers in order to handle their problems: the TTRS hides from the customer the details of the failure (network or service) and of the recovery process. The contract between the customer and the service provider is based on Service Level Agreements (SLAs). The second objective of the TINA TRS is therefore to verify over time whether the QoS for each customer and each provided service (e.g. WebStore) is fulfilled during the lifecycle of the service subscription. If not, an SLA violation is generated and the customer's bill is discounted according to the tariff and the SLA agreed during the fulfilment phase.
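The SLA-verification and discounting step can be sketched as below. The availability metric, threshold and flat discount rate are illustrative assumptions; real tariffs and SLA clauses would be considerably richer.

```python
# Sketch of SLA verification and bill discounting: measured QoS is
# compared against the agreed SLA, and a violation yields a discount.

def check_sla(measured_availability, sla_availability):
    """Return True if the SLA is violated, i.e. the measured service
    availability fell below the agreed level."""
    return measured_availability < sla_availability

def apply_discount(bill, violated, discount_rate=0.10):
    """Discount the bill by `discount_rate` when the SLA was violated;
    the flat rate stands in for the tariff agreed at fulfilment time."""
    return bill * (1 - discount_rate) if violated else bill
```

In the FlowThru setting the violation event would be raised by the TTRS and the discount applied by the accounting component when producing the bill.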
The TINA TRS realises two main processes of the assurance process model:
Return to table of contents
Trouble Ticketing (TT) is a widely used term among Telecommunication Network Operators (TNOs) and Service Providers (SPs), describing a mechanism to control and record problem handling and repair progress. It is also used to drive processes triggered by the detection of a quality-of-service violation in the context of a defined SLA.
Inter-domain trouble ticketing is needed when the provision of an end-to-end service to a customer involves two or more organisations, as occurs, for instance, with International Private Leased Data Circuits. In the case of, say, two TNOs, the first may have a problem reported to it by a customer and, after having carried out analysis/testing and diagnosis, may determine that the fault lies within the domain of the second TNO. A standardised TTS at the network service layer should be able to interconnect the proprietary (pre-existing Remedy ARS) TT systems of the two inter-operating TNOs/SPs ("legacy system integration").
The IMA-TTS is a TMN compliant implementation of an Inter-operable Trouble Ticketing System based on ITU-T X.790 Recommendation and EURESCOM P612 specifications.
The IMA-TTS basically consists of a manager-agent pair (following TMN definitions) communicating via the CMIP protocol. Both run on Hewlett-Packard OpenView DM™ platforms and were developed using the HP OV Managed Object Toolkit (MOT) and the C++ programming language. An agent can serve more than one manager. For both components the human interface is realised in Java, enabling remote execution of the GUIs and their download through the World Wide Web.
IMA-TTS implements a number of Management Service Components (MSCs) that are each composed of Management Function Sets (MFSs) which support and specify the desired functionality. The MFS decomposition reflects roughly the natural life cycle of a TT.
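The trouble-ticket life cycle that the MFS decomposition reflects can be sketched as a small state machine. The states and transitions below loosely follow ITU-T X.790 but are simplified for illustration and do not reproduce the Recommendation's full state model.

```python
# Toy state machine for a trouble ticket's life cycle; states and events
# are a simplified, illustrative subset inspired by ITU-T X.790.

TT_TRANSITIONS = {
    "queued":     {"open": "openActive"},
    "openActive": {"defer": "deferred", "clear": "cleared"},
    "deferred":   {"open": "openActive"},
    "cleared":    {"close": "closed", "reopen": "openActive"},
    "closed":     {},
}

class TroubleTicket:
    def __init__(self, tt_id):
        self.tt_id, self.state = tt_id, "queued"

    def fire(self, event):
        """Advance the ticket through its life cycle; invalid events
        in the current state are rejected."""
        try:
            self.state = TT_TRANSITIONS[self.state][event]
        except KeyError:
            raise ValueError(f"event {event!r} invalid in state {self.state}")
        return self.state
```

Each MFS can then be thought of as operating on tickets in a particular region of this life cycle (creation, progress tracking, clearance, closure).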
Owing to the way the MFSs have been defined, the CMIP interface can be either an interface to the customer (X.user) or a cooperative interface between TNO/SP domains (X.coop), providing the following MSC functions:
In the FlowThru project, the IMA-TTS Agent will be reused in three ways:
Return to table of contents
The chief aim of the ATM Network OS is to provide end-to-end network connections between its customers in a flexible, efficient and economical way. Apart from providing network connections the Network OS provides management functionality to support the needs of its customers in terms of communication quality of service and behaviour in response to network faults.
The public network infrastructure in FlowThru consists of interconnected ATM cross-connects in a topology of simulated elements. The ATM VP network management service is offered through a CMIP interface providing a layered view of the ATM VP layer. Each of the separate ATM cross-connects is viewed as a subnet partition in the VP layer network. FlowThru applies a distributed object model, where the management and control of the subnetwork partitions can be arbitrarily distributed among the cooperating subnetwork OSs. The public ATM VP network management service is fully implemented on CMIS/CMIP technology using the Q3ADE TMN development tool. The network model applied within the public network is based on the ETSI NA43316 Generic Object Model Library (GOM), which has been specialised to ATM technology.
The fundamental idea behind the GOM network model is the recursive decomposition of networks into subnetworks interconnected by links. This decomposition process stops at the lowest level where a subnetwork is "equal" to an ATM cross connect. The ATM network element level is controllable via an ITU-T I.751 Q3 interface.
The issue of distribution of the network management application is in particular relevant for ATM networks, where the granularity of the object model and the vast number of object instances renders a centralised management application inadequate for large ATM networks. When implementing a decentralised network management application it is important to allow for a distributed route determination process, and to allow for subnetworks at higher partitioning levels to make qualified route decisions concerning routes through an opaque subnetwork at a lower partitioning level. The concept of opaque subnetworks and associated concepts for the handling of these within a Network OS provides the means to distribute the implementation of the network service at will. In order to facilitate opaque subnetworks FlowThru (inherited from Prospect) has added route management objects to the network model. Within the partitioning hierarchy these managed objects allow subnetworks at higher partitioning levels to query route information from subnetworks at a lower partitioning level in order to make qualified decisions about which subnetworks a requested connection should be routed through.
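The recursive partitioning and the route-management query described above can be sketched as follows: a higher-level subnetwork asks an opaque lower-level partition whether it can connect two of its edge points, without seeing its internal topology. All names and the flat route representation are illustrative assumptions.

```python
# Sketch of recursive subnetwork partitioning (GOM-style) with a
# route-management query against opaque lower-level partitions.

class Subnetwork:
    def __init__(self, name, internal_routes=None):
        self.name = name
        self.children = []  # lower-level subnetwork partitions
        # pairs (entry point, exit point) this subnetwork can connect;
        # for the lowest level this stands in for a cross-connect's matrix
        self._routes = set(internal_routes or [])

    def add_partition(self, sub):
        self.children.append(sub)
        return sub

    def query_route(self, entry, exit_):
        """Route-management query: can this (possibly opaque) subnetwork
        carry a connection from `entry` to `exit_`? The caller never
        sees the internal topology, only the yes/no answer."""
        if (entry, exit_) in self._routes:
            return True
        return any(c.query_route(entry, exit_) for c in self.children)
```

This is the essence of the added route management objects: higher partitioning levels obtain qualified routing answers without the opaque partitions exposing their internals.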
Return to table of contents
Fault manager component and simulator
Surveillance of faults in end-to-end connections is achieved by the introduction of a generic fault correlation function. If any of the managed objects involved in an end-to-end ATM connection changes to a faulty state, the managed object which represents the end-to-end ATM connection will be put into a fault state by the correlation function. This concept has been developed in an entirely generic way, which allows the implementation to be reused in many similar situations without special programming effort.
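The generic correlation idea can be sketched as a dependency map from connections to supporting managed objects: a connection is marked faulty as soon as any object it depends on is. The object model here is an illustrative assumption.

```python
# Sketch of generic fault correlation: end-to-end connections are
# registered with the managed objects they depend on, and fault reports
# on those objects propagate to the connection's state.

class FaultCorrelator:
    def __init__(self):
        self._deps = {}      # connection id -> set of supporting object ids
        self._faulty = set() # object ids currently in a faulty state

    def register(self, connection_id, supporting_objects):
        self._deps[connection_id] = set(supporting_objects)

    def report_fault(self, object_id):
        self._faulty.add(object_id)

    def connection_state(self, connection_id):
        # Faulty if any supporting managed object is faulty.
        if self._deps[connection_id] & self._faulty:
            return "faulty"
        return "enabled"
```

Because the correlator knows nothing about what the objects represent, the same mechanism works for links, termination points or any other supporting resource, which is what makes the function reusable.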
Configuration Management - Connection Management Component
The Configuration Management component was designed and developed in the REFORM project. It conforms to a relatively loose interpretation of the TINA Network Resource Architecture (NRA) and to its Network Resource Information Model (NRIM), which it modifies and extends.
Its key functionality is the following:
ATM Virtual Paths are set up by a Connection Management object (ConfM-CM). This object provides an interface that conforms loosely to TINA Connection Management, extended so that a particular route for the VP can be specified. In addition, it simplifies TINA connection management by avoiding a hierarchical tree of network and element management layer connection performers (NML and EML CPs), which are all "collapsed" into one object for simplicity and efficiency.
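The "collapsed" connection performer can be sketched as a single object that sets up a VP along an explicitly specified route, rather than delegating through NML/EML layers. The interface and capacity bookkeeping below are assumptions for illustration.

```python
# Sketch of the collapsed ConfM-CM: one object reserves capacity along
# an explicit route and records the VP, instead of a CP hierarchy.

class ConnectionManager:
    def __init__(self, link_capacity):
        self.link_capacity = dict(link_capacity)  # link id -> free bandwidth
        self.vps = {}                              # vp id -> (route, bw)

    def setup_vp(self, vp_id, route, bandwidth):
        """Set up a VP along `route`; the explicit route is the extension
        over plain TINA CM, which does not let the client pin the route."""
        if any(self.link_capacity[l] < bandwidth for l in route):
            raise RuntimeError("insufficient capacity on route")
        for l in route:
            self.link_capacity[l] -= bandwidth
        self.vps[vp_id] = (list(route), bandwidth)

    def teardown_vp(self, vp_id):
        """Tear down a VP and release its capacity on every link."""
        route, bandwidth = self.vps.pop(vp_id)
        for l in route:
            self.link_capacity[l] += bandwidth
```

The Network Planner would supply the route (from its VPC topology design), and this object applies it directly to the element level.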
Return to table of contents
The ATM Accounting component is designed to capture ATM-based charging schemes and apply these schemes to usage data gathered from an ATM service machine (Network Element OS). The system is capable of producing individual charges, or amalgamated charges (bills), based on this usage data, which are stored in a relational database or produced in simple report form. The overall component is broken down into two sub-components.
The metering sub-component communicates with an ATM Service Machine (Network Element OS) to extract Connection Detail Records (CDRs) using an API specifically developed for that purpose. These CDRs describe completed SVC or PVC sessions and are derived according to the CANCAN CDR specification. The CDRs are buffered/logged by the ATM Network OS and periodically dispatched to the Metering Manager. The Metering Manager also filters and stores the CDRs in a generic, more persistent form, referred to as a Service Detail Record (SDR).
The charging sub-component accepts the SDRs from the metering sub-component. Its central function is then to rate these SDRs and write the rated SDRs, which we call Charge Records, into persistent storage. The rate at which these charge records are produced depends on two factors:
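The CDR-to-SDR reduction and the rating step can be sketched as a small pipeline. The record fields and the two-part tariff are illustrative assumptions; they do not reproduce the CANCAN CDR layout.

```python
# Sketch of the metering/charging pipeline: a technology-specific CDR is
# reduced to a generic SDR, which is then rated into a Charge Record.

def cdr_to_sdr(cdr):
    """Reduce an ATM-specific CDR to a generic Service Detail Record."""
    return {"user": cdr["user"],
            "service": "atm-vp",
            "volume_cells": cdr["cells_sent"] + cdr["cells_received"],
            "duration_s": cdr["duration_s"]}

def rate_sdr(sdr, tariff_per_cell=0.0001, tariff_per_second=0.002):
    """Rate an SDR into a Charge Record using a simple two-part
    (volume + duration) tariff; the rates are placeholders."""
    charge = (sdr["volume_cells"] * tariff_per_cell
              + sdr["duration_s"] * tariff_per_second)
    return {"user": sdr["user"], "service": sdr["service"],
            "charge": round(charge, 4)}
```

Keeping the SDR generic is what lets the charging sub-component stay independent of the ATM-specific CDR format.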
Return to table of contents
Service Level Accounting Component
The Flowthru accounting component is based on TINA accounting management defined in the second version of the TINA Service Architecture, and was developed within the Prospect project. Some extensions of the Prospect system were deemed necessary by the presence of network level accounting in the Flowthru project. The accounting component contains functionality that covers all areas of the TMF "Billing" high-level business process. It maps certain TMF processes onto TINA business roles, and in some cases concentrates on specific low-level activities within the processes.
For the TMF "Invoicing and Collection" process, the accounting component deals only with the production of the final bill (invoicing), not the payment of that bill (collection). It also offers "hot-billing" functionality, enabling users and administrators to see the charges accrued so far in a particular service session.
Bill production is accomplished by the BillControlCO sub-component of the accounting subsystem (or system component), while the TMF "Rating and Discounting" business process maps onto the operational interfaces of the ChargeControlCO sub-component. Rate calculation is accomplished via control interfaces on the sub-component that are outside the scope of the TMF business process model.
The TMF concept of "Network Data Management" is expanded to cover both network- and service-level usage data collection, with the collection, collation and aggregation of service-level data achieved by the UMDataCO (Usage Metering Data) sub-component, and the aggregation of charges based on network- and service-level usage effected in the BACO (Billing Aggregation) sub-component.
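The BACO aggregation step can be sketched as summing charge records from both levels per subscriber. The record format is an assumption for illustration.

```python
# Sketch of BACO-style charge aggregation: network- and service-level
# charge records are summed per subscriber to feed bill production.
from collections import defaultdict

def aggregate_charges(charge_records):
    """Sum charge records (from any level) per subscriber."""
    totals = defaultdict(float)
    for rec in charge_records:
        totals[rec["subscriber"]] += rec["charge"]
    return dict(totals)
```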
The Accounting component interacts with the following components within the overall accounting business process demonstration system:
Return to table of contents
Integration Technology Details
The approach taken to defining integration technology is similar to that of the TMF Technology Integration Map: identify the different technologies used in or applicable to the area, and then discern where integration technology is required. However, FlowThru goes further, with an emphasis on how integration technology can support the integration of components from different sources, including commercial off-the-shelf components.
In addition to CORBA-CMIP gateways, CORBA Components and workflow technology, this section details some other integration technologies being applied in the Trial Business Systems.
Return to table of contents
The TM Forum specification set NMF040-NMF043 defines a standard Application Programming Interface (API) for use with the international standards for network and systems management. The objective of this API is to provide a straightforward mechanism to write portable and interoperable management application programs using C++.
The specification of this API is divided into four parts. An overview is provided in NMF 043, which discusses the conventions used and documents shared or common areas of the API. The remaining parts document three separable APIs that are designed to work together.
FlowThru intends to use the TMN/C++ API as a means to integrate the accounting and network components.
Return to table of contents
Databases as an Integration Technology
Database integration can be seen as a 'last resort' integration technology, to be used mainly in two cases:
Return to table of contents
The CORBA-CMIP gateway (CORBAGW) used in FlowThru was initially developed in the Prospect project. In FlowThru it will be used in the accounting and assurance scenarios to bridge the technology gap between the CMIP and CORBA worlds.
The CORBAGW effectively brings CMIP managed objects into the CORBA object domain. The CORBAGW acts as a CORBA server in the CORBA domain, effectively carrying out the invocations on all the managed objects that reside in CMIP agents. It does this by acting as a manager (client) in the CMIP domain, translating CORBA method invocations to CMIP requests. The CORBAGW thus provides CORBA-based management systems with access to existing CMIP-based agents with standardised information models.
The CORBAGW implements a CORBA server/client process that enables CORBA based management applications to view/control objects in CMIP based Q3 agents, and to receive events emitted by CMIP agents. The CORBAGW adheres to the interworking principles defined by the Open Group/NMF Joint Inter-Domain Management task force (JIDM).
The primary requirements pursued in the design of the CORBAGW are to:
The CORBAGW includes a JIDM-compliant GDMO-to-IDL compiler, which translates GDMO definitions into CORBA IDL and provides the mapping information needed by the CORBAGW kernel.
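The flavour of such a compiler can be conveyed with a toy name translation: hyphenated GDMO labels become IDL-style identifiers, and each attribute yields accessor operations. This is only an illustration; the actual JIDM specification translation rules are considerably more involved and are not reproduced here.

```python
# Toy illustration of GDMO-to-IDL translation; NOT the real JIDM rules.

def gdmo_label_to_idl(label):
    """Turn a hyphenated GDMO label into an IDL-style identifier,
    e.g. 'operational-state' -> 'operationalState'."""
    head, *rest = label.split("-")
    return head + "".join(part.capitalize() for part in rest)

def translate_class(gdmo_class, attributes):
    """Emit a skeletal IDL interface for a GDMO managed object class,
    with simple get/set operations per attribute."""
    name = gdmo_label_to_idl(gdmo_class)
    iface = name[0].upper() + name[1:]
    lines = [f"interface {iface} {{"]
    for attr in attributes:
        a = gdmo_label_to_idl(attr)
        lines.append(f"  any {a}Get();")
        lines.append(f"  void {a}Set(in any value);")
    lines.append("};")
    return "\n".join(lines)
```

The mapping information produced alongside such IDL is what lets the gateway kernel route a CORBA invocation back to the right CMIP request at run time.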
The CORBAGW has numerous fields of application.
Return to table of contents
Back to FlowThru homepage