Tuesday, March 23, 2010

Flow control

11.1.2 This page will describe how the transport layer provides flow control.


As the transport layer sends data segments, it tries to ensure that data is not lost. Data loss may occur if a host cannot process data as quickly as it arrives. The host is then forced to discard the data. Flow control ensures that a source host does not overflow the buffers in a destination host. To provide flow control, TCP allows the source and destination hosts to communicate. The two hosts then establish a data-transfer rate that is agreeable to both.
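
As a concrete illustration, the receive buffer that flow control protects is visible through the standard socket API. The sketch below is Python and uses the real SO_RCVBUF socket option; the 16 KB figure is an arbitrary choice. It shows how a receiving application can inspect and request a different buffer size, which is the value a TCP implementation typically derives its advertised data-transfer limit from.

import socket

# Minimal sketch: the kernel's receive buffer is what flow control protects.
# Its size bounds how much unprocessed data the sender may have outstanding
# before the receiver tells it to slow down.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

print("default receive buffer:",
      sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF), "bytes")

# Requesting a smaller buffer (the OS may round or cap the value) lowers the
# amount of in-flight data the receiving host will agree to.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16 * 1024)
print("adjusted receive buffer:",
      sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF), "bytes")

sock.close()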

The next page will discuss data transport connections.
Session establishment, maintenance, and termination
11.1.3 This page discusses transport functionality and how it is accomplished on a segment-by-segment basis.


Applications can send data segments on a first-come, first-served basis. The segments that arrive first are processed first. These segments can be routed to the same or different destinations. Multiple applications can share the same transport connection in the OSI reference model. This is referred to as the multiplexing of upper-layer conversations. Numerous simultaneous upper-layer conversations can be multiplexed over a single connection.
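
A minimal sketch of this idea follows. The port numbers, handler names, and segment tuples are invented for illustration; the point is only that segments belonging to several upper-layer conversations can share one connection and are handed to the right application by their destination port, in the order they arrive.

# Sketch of multiplexing: segments from several upper-layer conversations
# share one connection and are told apart by destination port number.
# Ports, handler names, and payloads are invented for the example.

def handle_web(payload):
    print("web conversation got:", payload)

def handle_mail(payload):
    print("mail conversation got:", payload)

def handle_name_lookup(payload):
    print("name-lookup conversation got:", payload)

handlers = {80: handle_web, 25: handle_mail, 53: handle_name_lookup}

incoming_segments = [          # (destination port, payload), in arrival order
    (80, b"GET / HTTP/1.1"),
    (25, b"MAIL FROM:<user@example.com>"),
    (80, b"Host: example.com"),
    (53, b"query www.example.com"),
]

for dst_port, payload in incoming_segments:   # first-come, first-served
    handlers[dst_port](payload)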

One function of the transport layer is to establish a connection-oriented session between similar devices at the application layer. For data transfer to begin, the source and destination applications inform their operating systems that a connection will be initiated. One node initiates a connection that must be accepted by the other. Protocol software modules in the two operating systems exchange messages across the network to verify that the transfer is authorized and that both sides are ready.

The connection is established and the transfer of data begins after all synchronization has occurred. The two machines continue to communicate through their protocol software to verify that the data is received correctly.
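
In practice, the protocol software modules are the TCP implementations in the two operating systems, and an application triggers the exchange simply by calling the socket API. The sketch below is a minimal, self-contained Python example on the loopback address (the port number is arbitrary): accept() and connect() return only after the synchronization has completed, and only then is data transferred.

import socket
import threading

HOST, PORT = "127.0.0.1", 54321        # loopback address; any free port works
listening = threading.Event()

def destination():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)                  # willing to accept a connection
        listening.set()
        conn, peer = srv.accept()      # returns once synchronization completes
        with conn:
            print("destination: connection established with", peer)
            print("destination: received", conn.recv(1024))

t = threading.Thread(target=destination)
t.start()
listening.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as src:
    src.connect((HOST, PORT))          # the OS performs the synchronization here
    src.sendall(b"hello")              # data transfer begins only afterwards
t.join()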

The figure shows a typical connection between two systems. The first handshake requests synchronization. The second handshake acknowledges the initial synchronization request and synchronizes connection parameters in the opposite direction. The third handshake segment is an acknowledgment used to inform the destination that both sides agree that a connection has been established. After the connection has been established, data transfer begins.
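
To make the three handshakes concrete, the small model below represents each segment as a record with synchronization and acknowledgment flags. The field names and initial sequence numbers are illustrative only, not a real packet format.

from dataclasses import dataclass

@dataclass
class Segment:                 # illustrative model, not a real packet layout
    syn: bool = False          # requests synchronization
    ack: bool = False          # acknowledges the other side's request
    seq: int = 0
    ack_num: int = 0

client_isn, server_isn = 100, 300      # arbitrary initial sequence numbers

first  = Segment(syn=True, seq=client_isn)
second = Segment(syn=True, ack=True, seq=server_isn, ack_num=client_isn + 1)
third  = Segment(ack=True, seq=client_isn + 1, ack_num=server_isn + 1)

for number, segment in enumerate((first, second, third), start=1):
    print(f"handshake {number}: {segment}")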

Congestion can occur for two reasons:

• First, a high-speed computer might generate traffic faster than a network can transfer it.

• Second, if many computers simultaneously need to send datagrams to a single destination, that destination can experience congestion, although no single source caused the problem.

When datagrams arrive too quickly for a host or gateway to process, they are temporarily stored in memory. If the traffic continues, the host or gateway eventually exhausts its memory and must discard additional datagrams that arrive.
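
The simulation below sketches this behavior with arbitrary numbers: datagrams arrive at twice the rate the device can forward them, the buffer that stands in for its memory fills, and every further arrival is discarded.

from collections import deque

MEMORY_LIMIT = 5               # arbitrary buffer capacity for the sketch
buffer = deque()
dropped = 0

# Datagrams arrive twice as fast as the device can forward them.
for tick in range(20):
    for n in range(2):                         # two arrivals per tick
        if len(buffer) < MEMORY_LIMIT:
            buffer.append(f"datagram-{tick}-{n}")
        else:
            dropped += 1                       # memory exhausted: discard it
    if buffer:
        buffer.popleft()                       # one datagram forwarded per tick

print("still buffered:", len(buffer), "discarded:", dropped)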

Instead of allowing data to be lost, the TCP process on the receiving host can issue a “not ready” indicator to the sender. This indicator signals the sender to stop data transmission. When the receiver can handle additional data, it sends a “ready” transport indicator. When this indicator is received, the sender can resume segment transmission.
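
The sketch below models this stop-and-start behavior with two threads and a small bounded queue. In real TCP the “not ready” and “ready” indicators correspond to a closed window and a later window update; here they are simply an event flag, and the buffer size and timings are arbitrary.

import queue
import threading
import time

inbox = queue.Queue(maxsize=3)     # small receive buffer for the sketch
ready = threading.Event()
ready.set()                        # the receiver starts out ready

def receiver():
    for _ in range(10):
        time.sleep(0.1)                        # a deliberately slow consumer
        print("receiver processed", inbox.get())
        if inbox.qsize() < inbox.maxsize:
            ready.set()                        # "ready": buffer space is free again

def sender():
    for n in range(10):
        ready.wait()                           # stop while "not ready" is in effect
        inbox.put(f"segment-{n}")
        print("sender transmitted", f"segment-{n}")
        if inbox.full():
            ready.clear()                      # buffer full: "not ready" takes effect

t = threading.Thread(target=receiver)
t.start()
sender()
t.join()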

At the end of data transfer, the source host sends a signal that indicates the end of the transmission. The destination host acknowledges the end of transmission and the connection is terminated.
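
With the socket API, the end-of-transmission signal maps naturally onto shutting down the sending side of the connection, which the destination observes as an empty read. The sketch below (again on the loopback address with an arbitrary port) shows that sequence; the remaining teardown exchange is handled by the two TCP implementations when the sockets are closed.

import socket
import threading

HOST, PORT = "127.0.0.1", 54322        # loopback address; arbitrary port
listening = threading.Event()

def destination():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        listening.set()
        conn, _ = srv.accept()
        with conn:
            while True:
                data = conn.recv(1024)
                if not data:               # empty read: the source signalled the end
                    print("destination: source signalled end of transmission")
                    break
                print("destination: received", data)

t = threading.Thread(target=destination)
t.start()
listening.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as src:
    src.connect((HOST, PORT))
    src.sendall(b"last segment")
    src.shutdown(socket.SHUT_WR)           # signal that the transmission is complete
t.join()                                   # the sockets close and the connection ends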

The next page will define three-way handshakes.
