
NAVAL EXPEDITIONARY WARFARE (NEW) INTERNETWORKING

The Naval Expeditionary Warfare Internetworking project (“NEW” for short) was funded by ONR at the University of California, Santa Cruz (UCSC). This three-year project ended in 2002.

At UCSC, this project was part of the research carried out within the Computer Communication Research Group (CCRG) of the Baskin School of Engineering.

The principal investigator of this project was J.J. Garcia-Luna-Aceves.


Objective

The communication requirements of a Naval Expeditionary Warfare (NEW) environment cannot be met simply by applying existing COTS Internet protocols and architectures. A NEW environment is subject to drastic and continuous changes in radio-link quality, connectivity, and the location of nodes and services. The applications running in this environment may need different application objects to be delivered with very different reliability and deadlines, and the quality of service needed by one application or NEW node may be very different from the quality required by others. By contrast, the Internet architecture is based on offering the same basic datagram service to all network users; TCP breaks down in the presence of link losses or packets arriving out of order; TCP and UDP offer very restrictive transport services to applications (reliable byte streams or user datagrams); and COTS routing and multicasting protocols provide few mechanisms, if any, to support the provision of qualities of service, relying instead on easy-to-break single paths and fairly static trees for packet dissemination.

The NEW Internetworking project sought to develop new communication protocols for the NEW environment. The innovations introduced in this project are:

  1. A Federated Internet Architecture for NEW: We extended the existing IP Internet model to an architecture in which IP is used for signaling, and data-plane protocols tailored for the NEW environment and applications are used for information distribution.
  2. End-to-End Transport: We extended the error-recovery and flow-control mechanisms available today in TCP to (a) permit applications to exchange objects requiring different transport services (e.g., unreliable, resilient, reliable) within the same connection, and (b) operate with lossy links and multipath routing protocols. A sketch of such per-object delivery classes follows this list.
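
To make the per-object transport service concrete, here is a minimal Python sketch of how a sender might tag application objects with different delivery classes within one connection. The names (DeliveryClass, TransportObject, should_retransmit) and the bounded-retry policy are illustrative assumptions, not the project's actual interface.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class DeliveryClass(Enum):
        """Per-object service classes named in the project goals."""
        UNRELIABLE = 0   # best effort: never retransmit
        RESILIENT = 1    # bounded retransmission effort
        RELIABLE = 2     # retransmit until acknowledged

    @dataclass
    class TransportObject:
        """One application object queued on a single connection."""
        object_id: int
        payload: bytes
        delivery: DeliveryClass
        deadline_ms: Optional[int] = None   # per-object delivery deadline

    def should_retransmit(obj: TransportObject, attempts: int, elapsed_ms: int) -> bool:
        """Illustrative retransmission policy (assumed, not the project's)."""
        if obj.deadline_ms is not None and elapsed_ms > obj.deadline_ms:
            return False                    # stale data: retransmission is wasted
        if obj.delivery is DeliveryClass.UNRELIABLE:
            return False
        if obj.delivery is DeliveryClass.RESILIENT:
            return attempts < 3             # assumed retry budget
        return True                         # RELIABLE: keep trying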

Impact

The results from the proposed research have direct applicability to the development and implementation of much-needed solutions in support of the Naval Expeditionary Warfare (NEW) environment. This environment comprises command ships, support ships, Naval surface fire support ships, air rescue, air fire support, small unit operation (SUO) teams, command posts ashore, and many ground sensors. The NEW environment constitutes a very dynamic internetwork with heterogeneous applications and nodes. However, even though we address an internetworking problem, the characteristics of the applications and nodes, and the dynamics of the environment, are very different from the stable networking environment assumed in the Internet architecture and protocols today. Lessons learned in the Task Force XXI experiments indicate that using COTS internetworking solutions will not suffice to satisfy the communication needs of the NEW environment. We intend to submit protocols resulting from our research as Internet Drafts to the IETF, some of them through the MANET Working Group, in order to influence the Internet community and foster the adoption of some of our solutions by commercial vendors. Furthermore, the proposed federated internet architecture permits the Navy to maintain full interoperation with COTS internetworking products and Internet standards, while ensuring that communication needs in the NEW environment are met using communication protocols tailored to the Navy’s needs. Many of the features of the proposed end-to-end transport solutions can be implemented as part of the network layer (between IP and TCP) or as a TCP option that can be negotiated at the start of a TCP session. The research in NEW was part of the thesis research of a number of Ph.D. students.


Approach

Our approach consisted of advancing the state of the art in the following areas. We defined a new model of internetworking and determined how it can deliver better functionality than the IP Internet model. The problems we addressed to accomplish this include the following:

  • Defining the signaling required to permit a family of hosts and routers, interconnected through heterogeneous or homogeneous transmission media, to agree on the names, addresses, and protocols to be used for the delivery of user data. We call such a federated family of hosts and routers a virtual network.
  • Defining the delivery protocols required to transport user data across a virtual network. The main objective of this part of our research is to demonstrate that the proposed architecture interoperates with the IP Internet and is more efficient than simply sending encapsulated IP packets across any transmission media and forcing all applications to adhere to the same notions of QoS. A sketch of the kind of agreement such signaling must produce follows this list.
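
As a purely illustrative sketch of the agreement such signaling must produce, the snippet below lets prospective virtual-network members advertise their names, addresses, and supported delivery protocols, then intersects those capabilities. Every message field and name here is our assumption, not the project's protocol.

    from dataclasses import dataclass

    @dataclass
    class VirtualNetAdvertisement:
        """Hypothetical signaling message for forming a virtual network."""
        node_name: str
        addresses: list            # addresses at which this node is reachable
        delivery_protocols: set    # data-plane protocols the node speaks

    def common_protocols(ads):
        """Data-plane protocols that every prospective member supports."""
        protos = None
        for ad in ads:
            protos = set(ad.delivery_protocols) if protos is None else protos & ad.delivery_protocols
        return protos or set()

    # IP remains the signaling substrate and the delivery fallback.
    ads = [
        VirtualNetAdvertisement("ship-1", ["10.0.0.1"], {"ip", "new-dp-v1"}),
        VirtualNetAdvertisement("suo-team-a", ["10.0.1.7"], {"ip", "new-dp-v1"}),
        VirtualNetAdvertisement("sensor-12", ["10.0.2.3"], {"ip"}),
    ]
    print(common_protocols(ads))   # {'ip'}: the weakest member forces plain IP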

We developed a new end-to-end transport protocol that (a) provides richer delivery semantics than TCP does, and (b) eliminates TCP’s performance problems over dynamic networks by providing better error-recovery and flow-control algorithms. We implemented the error-recovery and flow-control algorithms of our new protocol, assuming the same delivery semantics as TCP, as a TCP option using the extra 40 bytes available in the TCP options field.
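
The 40-byte figure comes from the TCP header format: the 4-bit data-offset field caps the header at 60 bytes, and the fixed header takes 20, leaving 40 bytes for options. The sketch below packs a hypothetical option into that budget; the option kind, length, and field layout are assumptions for illustration (253 is an experimental option kind per RFC 4727), not the published TCP-SC wire format.

    import struct

    TCP_MAX_OPTION_BYTES = 40    # 60-byte max TCP header minus 20 fixed bytes
    OPT_KIND_EXPERIMENTAL = 253  # experimental TCP option kind (RFC 4727)

    def pack_tcp_sc_option(data_seq: int, tx_timestamp: int) -> bytes:
        """Pack a hypothetical TCP-SC option: kind, length, two 32-bit fields."""
        body = struct.pack("!II", data_seq & 0xFFFFFFFF, tx_timestamp & 0xFFFFFFFF)
        option = struct.pack("!BB", OPT_KIND_EXPERIMENTAL, 2 + len(body)) + body
        assert len(option) <= TCP_MAX_OPTION_BYTES
        return option

    opt = pack_tcp_sc_option(data_seq=1234, tx_timestamp=987654)
    print(f"{len(opt)} of the {TCP_MAX_OPTION_BYTES} option bytes used")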


Accomplishments

1999

We started the definition of our federated internetworking architecture. A salient feature of the architecture is its use of anycasting services. A detailed description of the new architecture is planned to be ready by the end of the summer, and a paper will be submitted for publication at that time.

We completed the first design of TCP-Santa Cruz (TCP-SC). TCP-SC is a new implementation of TCP that can be deployed as a TCP option by utilizing the extra 40 bytes available in the options field. TCP-SC not only detects the initial stages of congestion, but can also identify the direction of congestion: it determines whether congestion is developing in the forward path, and then isolates the forward throughput from events such as congestion on the reverse path. The direction of congestion is determined by estimating the relative delay that one packet experiences with respect to another; this relative delay is the foundation of our congestion control algorithm. Our approach differs significantly from rate-controlled congestion control approaches, e.g., TCP-Vegas, as well as from those that use increasing round-trip time (RTT) as the primary indication of congestion, in that we do not use RTT estimates in any way for congestion control. This is a fundamental improvement over the latter approaches, because round-trip time measurements are inherently misleading: they cannot distinguish whether delay variation comes from increases or decreases on the forward or the reverse path of the connection.
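
A minimal sketch of the relative-delay computation, assuming the sender knows its transmit times and learns receiver arrival times from acknowledgments: the delay of packet j relative to packet i cancels out all reverse-path effects, so its sign directly reflects forward-path queueing. The trace values are invented for illustration.

    def relative_delay(tx_i, tx_j, rx_i, rx_j):
        """Delay of packet j relative to packet i.

        Positive: packet j queued longer on the forward path (congestion
        building); negative: queues draining. Reverse-path delays never
        enter, since only receiver-side arrival spacing is compared with
        sender-side transmission spacing.
        """
        return (rx_j - rx_i) - (tx_j - tx_i)

    # Invented trace: transmit times and receiver arrival times, in ms.
    tx = [0, 10, 20, 30]
    rx = [50, 61, 74, 90]   # arrival spacing stretches: forward queue grows

    for i in range(1, len(tx)):
        d = relative_delay(tx[i - 1], tx[i], rx[i - 1], rx[i])
        trend = "building" if d > 0 else ("draining" if d < 0 else "steady")
        print(f"packet {i}: relative delay {d:+d} ms, forward queue {trend}")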

TCP-SC provides a better error-recovery strategy than Reno and Tahoe through a mechanism that performs RTT estimates for every packet transmitted, including retransmissions. This eliminates the need for Karn’s algorithm and avoids the timer-backoff strategies that can lead to long idle periods on the links. In the absence of congestion in the reverse path or asymmetric links (i.e., if all transmitted acknowledgment packets are received), TCP-SC also provides a mechanism to avoid retransmission by timeout when there are several losses per window.
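
A sketch of how per-packet RTT sampling can sidestep Karn’s algorithm: if every transmission, including a retransmission, carries a unique id that the ACK echoes, no sample is ever ambiguous. The id/echo mechanism shown is an assumed stand-in for TCP-SC’s actual tagging.

    import time

    class PerPacketRTT:
        """One RTT sample per transmission, retransmissions included.

        Every (re)transmission carries a fresh id that the receiver echoes
        in its ACK, so a sample can never be attributed to the wrong
        transmission and Karn's algorithm becomes unnecessary.
        """

        def __init__(self):
            self._next_id = 0
            self._sent = {}                     # transmission id -> (seq, send time)

        def on_send(self, seq: int) -> int:
            tx_id = self._next_id
            self._next_id += 1
            self._sent[tx_id] = (seq, time.monotonic())
            return tx_id                        # carried in the packet

        def on_ack(self, echoed_tx_id: int):
            entry = self._sent.pop(echoed_tx_id, None)
            if entry is None:
                return None                     # duplicate or unknown echo
            _seq, sent_at = entry
            return time.monotonic() - sent_at   # unambiguous RTT sample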

Several simulation experiments were run to show the performance improvements of TCP-SC over Reno, Tahoe and Vegas.

2000

The definition of our federated internetworking architecture continued.

Recent research on TCP has focused on the problems associated with TCP performance in the presence of wireless links and on ways to improve that performance. During this reporting period, we extended TCP-SC to improve TCP performance over lossy wireless links. TCP has no mechanism to differentiate random losses on the wireless link from congestion, and therefore treats all losses as congestive. We developed a simple method by which TCP-SC differentiates these random losses, thereby avoiding the rate-halving approach taken by standard TCP whenever any loss is detected. We compared the performance of our protocol against TCP Reno using simulation experiments running in ns-2, which show that TCP-SC achieves higher throughput and lower end-to-end delay than today’s TCP.
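
The sketch below shows the shape of such a decision rule, assuming the relative-delay signal described earlier is available: if forward-path queueing is not growing around a loss, treat it as random and keep the sending rate. The threshold and the cwnd reaction are illustrative stand-ins, not the published algorithm.

    def classify_loss(relative_delays_near_loss, threshold=0.0):
        """'congestion' if forward-path queueing grew around the loss, else 'random'."""
        queue_growth = sum(relative_delays_near_loss)
        return "congestion" if queue_growth > threshold else "random"

    def react_to_loss(cwnd, kind):
        """Standard TCP halves cwnd on every loss; here only congestive ones do."""
        return max(1, cwnd // 2) if kind == "congestion" else cwnd

    cwnd = 32
    for delays in ([+2, +3, +4], [-1, 0, +1]):   # relative delays (ms) near a loss
        kind = classify_loss(delays)
        print(f"{delays} -> {kind}: cwnd {cwnd} -> {react_to_loss(cwnd, kind)}")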

In addition to our work on TCP, we addressed the use of Forward Error Correction (FEC) techniques in reliable multipoint communication. More specifically, we studied group loss probabilities of FEC codes in shared-loss multicast communication. Most other research to date has studied independent loss in multicast trees, extending non-FEC, single-packet analysis to the FEC realm. We revised this form of analysis with state equations specific to C(n,k) codes, in which any k out of n packets suffice to decode an entire group. Rather than analyzing the number of transmissions of a particular packet, we studied an FEC group as a whole. Using recursive equations, we found the cumulative distribution function of the probability that all leaf nodes in a shared-loss tree successfully decode a C(n,k) FEC transmission group. Our analysis also yields a method for computing the probability mass function of the number of leaf nodes r that successfully decode a transmission group on the first transmission. We can also compute, for a particular leaf node, the expected number of packets received on a successful decode and the expected number of missing packets on a decode failure. Our analytic results closely match simulation runs performed over a variety of configurations.
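
For a single receiver with independent per-packet loss probability p, a C(n,k) group decodes exactly when at least k of the n packets arrive; the recursive tree analysis composes quantities of this kind across shared links. A small sketch of the single-receiver case, with an invented example configuration:

    from math import comb

    def decode_probability(n: int, k: int, p: float) -> float:
        """P(receive at least k of n packets), independent per-packet loss p."""
        return sum(comb(n, r) * (1 - p) ** r * p ** (n - r) for r in range(k, n + 1))

    # Invented example: C(12, 10), i.e., 10 data + 2 repair packets, at 5% loss.
    print(f"{decode_probability(12, 10, 0.05):.4f}")   # ~0.9804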

To analyze retransmissions, we developed a set of theorems to calculate the packet correlation between leaf nodes in a full regular tree. We were able to compute the probability mass function that all leaf nodes hold the same m packets in common, given that the source transmitted an ordered sequence of n packets.
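
To illustrate what “packets held in common” means, the sketch below handles the simplest shared-loss tree, one shared link feeding two leaves, where a packet is common to both leaves exactly when it survives the shared link and both leaf links, making the common count binomial. This two-leaf reduction and its loss rates are our simplification; the project’s theorems cover general full regular trees.

    from math import comb

    def common_packet_pmf(n, p_shared, p_leaf1, p_leaf2):
        """pmf of how many of n packets BOTH leaves receive.

        A packet is common exactly when it survives the shared link and
        both leaf links, so with independent losses the common count is
        Binomial(n, q) with q = (1-p_shared)(1-p_leaf1)(1-p_leaf2).
        """
        q = (1 - p_shared) * (1 - p_leaf1) * (1 - p_leaf2)
        return [comb(n, m) * q ** m * (1 - q) ** (n - m) for m in range(n + 1)]

    for m, prob in enumerate(common_packet_pmf(8, 0.10, 0.05, 0.05)):
        print(f"P(exactly {m} packets in common) = {prob:.4f}")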

We applied our analysis to several multicast topologies. We evaluated packet reception and ACK/NAK generation for the first transmission of an FEC group, and studied retransmissions both with and without subcasting. We also compared retransmissions with and without FEC enhancement.


Publications

The following is the list of published papers describing our research results in this project. A more complete list of CCRG publications, including PDF versions, is available on the CCRG web site.

  • Chris Parsa and J.J. Garcia-Luna-Aceves, “Improving TCP Congestion Control over Internets with Heterogeneous Transmission Media,” Proc. 7th IEEE International Conference on Network Protocols (ICNP ’99), Toronto, Canada, October 31 – November 3, 1999.
  • Chris Parsa and J.J. Garcia-Luna-Aceves, “Differentiating Congestion vs. Random Loss: A Method for Improving TCP Performance over Wireless Links,” Proc. 2nd IEEE Wireless Communications and Networking Conference (WCNC 2000), Chicago, IL, September 23–28, 2000.
  • M. Mosko and J.J. Garcia-Luna-Aceves, “An Analysis of Packet Loss Correlation in FEC-Enhanced Multicast Trees,” Proc. 8th IEEE International Conference on Network Protocols (ICNP 2000), Osaka University Convention Center, Osaka, Japan, November 14–17, 2000.