
Frame Relay / ATM

2.2.5 Frame Relay
With increasing demand for higher-bandwidth, lower-latency packet switching, communications providers introduced Frame Relay. Although the network layout appears similar to that of X.25, available data rates are commonly up to 4 Mbps, with some providers offering even higher rates.
Frame Relay differs from X.25 in several respects. Most importantly, it is a much simpler protocol that works at the data link layer rather than the network layer.
Frame Relay implements no error or flow control. The simplified handling of frames leads to reduced latency, and measures taken to avoid frame build-up at intermediate switches help reduce jitter.
Most Frame Relay connections are PVCs rather than SVCs. The connection to the network edge is often a leased line, but dial-up connections are available from some providers over ISDN lines. The ISDN D channel is used to set up an SVC on one or more B channels. Frame Relay tariffs are based on the capacity of the connecting port at the network edge. Additional factors are the agreed capacity and committed information rate (CIR) of the various PVCs through the port.
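As a rough illustration of how port capacity and CIR interact in such a tariff, the following Python sketch adds up the CIRs agreed for several PVCs and compares the total with the access port's speed. The port speed, DLCI numbers, and CIR values are hypothetical figures chosen for the example, not taken from the text.

```python
# Hypothetical Frame Relay tariff inputs: the access port's capacity versus
# the CIRs of the PVCs carried through that port. All figures are made up.

PORT_SPEED_KBPS = 512          # capacity of the connecting port at the network edge

# Committed information rate (CIR) agreed for each PVC, keyed by its DLCI
pvc_cir_kbps = {
    100: 128,   # PVC to branch office A
    200: 128,   # PVC to branch office B
    300: 192,   # PVC to the data centre
}

total_cir = sum(pvc_cir_kbps.values())
print(f"Sum of CIRs: {total_cir} kbps on a {PORT_SPEED_KBPS} kbps port")

# Providers often allow the sum of CIRs to exceed the port speed
# (oversubscription), on the assumption that not every PVC bursts at once.
if total_cir <= PORT_SPEED_KBPS:
    print("The port can honour every CIR simultaneously.")
else:
    ratio = total_cir / PORT_SPEED_KBPS
    print(f"The port is oversubscribed by a factor of {ratio:.2f}.")
```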
Frame Relay provides permanent, shared, medium-bandwidth connectivity that carries both voice and data traffic, which makes it ideal for connecting enterprise LANs. The router on the LAN needs only a single interface, even when multiple VCs are used. The short leased line to the Frame Relay network edge allows cost-effective connections between widely scattered LANs.
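The point about a single router interface carrying several VCs can be pictured as a mapping from each VC's local identifier, the DLCI, to the remote site it reaches. The interface name, DLCI numbers, and site names in this sketch are invented purely for illustration.

```python
# One physical interface into the Frame Relay cloud; each VC on it is
# identified locally by a DLCI. Names and numbers here are hypothetical.

INTERFACE = "Serial0/0"

dlci_to_site = {
    102: "Branch-Madrid",
    103: "Branch-Lisbon",
    104: "HQ-London",
}

def remote_site(dlci: int) -> str:
    """Return the remote site reached over the VC with this DLCI."""
    try:
        return dlci_to_site[dlci]
    except KeyError:
        raise ValueError(f"No VC with DLCI {dlci} on {INTERFACE}") from None

for dlci in sorted(dlci_to_site):
    print(f"{INTERFACE} DLCI {dlci} -> {remote_site(dlci)}")
```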
2.2.6 ATM
Communications providers saw a need for a permanent shared network technology that offered very low latency and jitter at much higher bandwidths. Their solution was Asynchronous Transfer Mode (ATM), which offers data rates beyond 155 Mbps. As with the other shared technologies, such as X.25 and Frame Relay, a diagram of an ATM WAN looks much the same.
ATM is a technology that is capable of transferring voice, video, and data through private and public networks. It is built on a cell-based architecture rather than on a frame-based architecture. ATM cells are always a fixed length of 53 bytes: a 5-byte ATM header followed by 48 bytes of ATM payload. Small, fixed-length cells are well suited to carrying voice and video traffic because this traffic is intolerant of delay. Video and voice traffic do not have to wait for a larger data packet to be transmitted.
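To make the fixed cell format concrete, the sketch below chops an arbitrary payload into 48-byte chunks and prepends a 5-byte header to each. The header bytes here are placeholders rather than a faithful encoding of the real ATM header fields, and the 1500-byte packet size is just an example.

```python
# Segment a payload into fixed-length ATM cells: 5-byte header + 48-byte payload.
# Header bytes are placeholders; a real header carries VPI/VCI, PTI, CLP and HEC.

HEADER_LEN = 5
PAYLOAD_LEN = 48
CELL_LEN = HEADER_LEN + PAYLOAD_LEN   # always 53 bytes

def segment(data: bytes) -> list[bytes]:
    cells = []
    for offset in range(0, len(data), PAYLOAD_LEN):
        chunk = data[offset:offset + PAYLOAD_LEN]
        # Pad the final chunk so every cell is exactly 53 bytes long.
        chunk = chunk.ljust(PAYLOAD_LEN, b"\x00")
        cells.append(bytes(HEADER_LEN) + chunk)
    return cells

packet = bytes(1500)                   # e.g. a maximum-size Ethernet payload
cells = segment(packet)
print(f"{len(packet)} bytes -> {len(cells)} cells, "
      f"{len(cells) * CELL_LEN} bytes on the wire")
assert all(len(cell) == CELL_LEN for cell in cells)
```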
The 53-byte ATM cell is less efficient than the bigger frames and packets of Frame Relay and X.25. Furthermore, the ATM cell carries at least 5 bytes of overhead for each 48-byte payload. When the cell is carrying segmented network layer packets, the overhead is higher still, because the ATM switch must be able to reassemble the packets at the destination. A typical ATM line needs almost 20% greater bandwidth than Frame Relay to carry the same volume of network layer data.
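The roughly 20% figure depends on the mix of packet sizes, but the cell tax is easy to estimate for a single packet. The sketch below compares the bytes ATM and Frame Relay would put on the wire for the same network layer packet. The 8-byte AAL5 trailer with padding to a 48-byte multiple, and the 6 bytes assumed for Frame Relay framing, are my assumptions; the text does not specify them, so treat the percentages as rough illustrations only.

```python
# Rough cell-tax estimate for a few packet sizes.
# Assumptions (not from the text): segmentation adds an 8-byte AAL5-style
# trailer and pads to a multiple of 48 bytes; Frame Relay framing adds ~6 bytes.

import math

def atm_wire_bytes(packet_len: int) -> int:
    padded = math.ceil((packet_len + 8) / 48) * 48   # trailer + padding
    return (padded // 48) * 53                        # 53 bytes per cell

def frame_relay_wire_bytes(packet_len: int) -> int:
    return packet_len + 6                             # flags, header, FCS (assumed)

for size in (64, 512, 1500):
    atm = atm_wire_bytes(size)
    fr = frame_relay_wire_bytes(size)
    extra = (atm - fr) / fr * 100
    print(f"{size:>5}-byte packet: ATM {atm} B vs Frame Relay {fr} B "
          f"(~{extra:.0f}% more bandwidth for ATM)")
```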
ATM offers both PVCs and SVCs, although PVCs are more common with WANs.
As with other shared technologies, ATM allows multiple virtual circuits on a single leased line connection to the network edge.
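One way to picture several ATM virtual circuits sharing a single physical connection is to sort incoming cells by the VPI/VCI pair carried in each cell header. The cells below are modelled as simple tuples rather than real 53-byte cells, and the VPI/VCI numbers are invented for the example.

```python
# Count cells per virtual circuit arriving on one ATM link, keyed by (VPI, VCI).
# Cells are modelled as (vpi, vci, payload) tuples purely for illustration.

from collections import Counter

incoming_cells = [
    (0, 100, b"voice"),
    (0, 200, b"data"),
    (0, 100, b"voice"),
    (1, 300, b"video"),
    (0, 200, b"data"),
]

cells_per_vc = Counter((vpi, vci) for vpi, vci, _payload in incoming_cells)

for (vpi, vci), count in sorted(cells_per_vc.items()):
    print(f"VC VPI={vpi} VCI={vci}: {count} cell(s) on the same physical link")
```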
