Wednesday, March 24, 2010

Acknowledgment / TCP

Acknowledgment
11.1.6 This page will discuss acknowledgments and the sequence of segments.


Reliable delivery guarantees that a stream of data sent from one device is delivered through a data link to another device without duplication or data loss. Positive acknowledgment with retransmission is one technique that guarantees reliable delivery of data. Positive acknowledgment requires a recipient to communicate with the source and send back an ACK when the data is received. The sender keeps a record of each data packet, or TCP segment, that it sends and expects an ACK. The sender also starts a timer when it sends a segment and will retransmit a segment if the timer expires before an ACK arrives.
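
To make the timer-and-retransmit idea concrete, here is a minimal stop-and-wait sketch in Python. It is not TCP itself: it sends one numbered segment over a UDP socket, waits for an ACK, and retransmits when the timer expires. The peer address and the "ACK <n>" reply format are assumptions made purely for illustration.

```python
import socket

# Stop-and-wait positive acknowledgment with retransmission (sketch).
# The peer address and the "ACK <n>" reply format are hypothetical.
PEER = ("192.0.2.10", 5000)   # example address from the documentation range
TIMEOUT = 2.0                 # seconds before the retransmission timer fires
MAX_RETRIES = 5

def send_reliably(data: bytes, seq: int) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(TIMEOUT)                 # start the timer on each wait
        segment = f"{seq}:".encode() + data
        for _ in range(MAX_RETRIES):
            sock.sendto(segment, PEER)           # transmit the segment
            try:
                reply, _ = sock.recvfrom(1024)
            except socket.timeout:
                continue                         # timer expired: retransmit
            if reply == f"ACK {seq + 1}".encode():
                return                           # receiver now expects seq + 1
        raise RuntimeError(f"segment {seq} was never acknowledged")
```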

The figure shows a sender that transmits data packets 1, 2, and 3. The receiver acknowledges receipt of the packets with a request for packet 4. When the sender receives the ACK, it sends packets 4, 5, and 6. If packet 5 does not arrive at the destination, the receiver acknowledges with a request to resend packet 5. The sender resends packet 5 and then receives an ACK to continue with the transmission of packet 7.

TCP provides sequencing of segments with a forward reference acknowledgment. Each segment is numbered before transmission. At the destination, TCP reassembles the segments into a complete message. If a sequence number is missing in the series, that segment is retransmitted. Segments that are not acknowledged within a given time period will result in a retransmission.
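
The receiving side of the same scheme can be sketched just as briefly: buffer segments by sequence number, deliver them in order, and flag the first gap so that segment can be retransmitted. The segment numbering and sample data here are assumptions for illustration.

```python
# Reassembly sketch: segments may arrive out of order; rebuild the message in
# sequence and report the first missing number so it can be retransmitted.
def reassemble(segments: dict[int, bytes], expected_count: int) -> bytes:
    message = bytearray()
    for seq in range(1, expected_count + 1):
        if seq not in segments:
            # A gap in the sequence: this segment must be retransmitted.
            raise LookupError(f"segment {seq} missing; request retransmission")
        message.extend(segments[seq])
    return bytes(message)

# Example: segment 2 arrives after segment 3, yet reassembly still succeeds.
arrived = {1: b"Hel", 3: b"orld", 2: b"lo w"}
print(reassemble(arrived, 3))   # b'Hello world'
```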

The next page will describe TCP in more detail.

TCP
11.1.7 This page will discuss the protocols that use TCP and the fields included in a TCP segment.


TCP is a connection-oriented transport layer protocol that provides reliable full-duplex data transmission. TCP is part of the TCP/IP protocol stack. In a connection-oriented environment, a connection is established between both ends before the transfer of information can begin. TCP breaks messages into segments, reassembles them at the destination, and resends anything that is not received. TCP supplies a virtual circuit between end-user applications.
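
As a minimal illustration of the connection-oriented model, the sketch below, written in Python with a placeholder host, port, and request, establishes a TCP connection before any application data is exchanged; the operating system performs the handshake inside the connect call.

```python
import socket

# Connection-oriented transfer: the connection (including the handshake) is
# established by create_connection() before any application data moves.
# Host, port, and the HTTP request are placeholders for illustration.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = conn.recv(1024)        # TCP delivers the reply reliably and in order
    print(reply.decode(errors="replace"))
```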

The following protocols use TCP:

• FTP
• HTTP
• SMTP
• Telnet

The following are the definitions of the fields in the TCP segment; a header-parsing sketch follows the list:

• Source port – Number of the port that sends data
• Destination port – Number of the port that receives data
• Sequence number – Number used to ensure the data arrives in the correct order
• Acknowledgment number – Next expected TCP octet
• HLEN – Number of 32-bit words in the header
• Reserved – Set to zero
• Code bits – Control functions, such as setup and termination of a session
• Window – Number of octets that the sender will accept
• Checksum – Calculated checksum of the header and data fields
• Urgent pointer – Indicates the end of the urgent data
• Option – Optional header information; the most commonly used option specifies the maximum TCP segment size
• Data – Upper-layer protocol data
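
As a rough illustration of this layout, the following sketch unpacks the fixed 20-byte portion of a TCP header with Python's struct module. The sample bytes are fabricated for the example; a real segment would come from a packet capture.

```python
import struct

# Parse the fixed 20-byte TCP header (options and data, if any, follow it).
# Field order: source port, destination port, sequence number,
# acknowledgment number, HLEN/reserved/code bits, window, checksum, urgent ptr.
def parse_tcp_header(segment: bytes) -> dict:
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "source_port": src_port,
        "destination_port": dst_port,
        "sequence": seq,
        "acknowledgment": ack,
        "hlen_words": offset_flags >> 12,        # header length in 32-bit words
        "code_bits": offset_flags & 0x3F,        # URG, ACK, PSH, RST, SYN, FIN
        "window": window,
        "checksum": checksum,
        "urgent_pointer": urgent,
    }

# Fabricated example: port 1025 -> 80, SYN set, window of 8192 octets.
sample = struct.pack("!HHIIHHHH", 1025, 80, 1000, 0, (5 << 12) | 0x02, 8192, 0, 0)
print(parse_tcp_header(sample))
```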

The next page will define UDP.

Tuesday, March 23, 2010

Windowing

Windowing
11.1.5 This page will explain how windows are used to transmit data.


Data packets must be delivered to the recipient in the same order in which they were transmitted to have a reliable, connection-oriented data transfer. The protocol fails if any data packets are lost, damaged, duplicated, or received in a different order. An easy solution is to have a recipient acknowledge the receipt of each packet before the next packet is sent.

If a sender had to wait for an ACK after each packet was sent, throughput would be low. Therefore, most connection-oriented, reliable protocols allow multiple packets to be sent before an ACK is received. The time interval after the sender transmits a data packet and before the sender processes any ACKs is used to transmit more data. The number of data packets the sender can transmit before it receives an ACK is known as the window size, or window.

TCP uses expectational ACKs. This means that the ACK number refers to the next packet that is expected.

Windowing refers to the fact that the window size is negotiated dynamically during the TCP session. Windowing is a flow-control mechanism that requires the source device to receive an ACK from the destination after a certain amount of data is transmitted. The destination host reports a window size to the source host in each ACK it sends. This window specifies the number of packets that the destination host is prepared to receive.

With a window size of three, the source device can send three bytes to the destination. The source device must then wait for an ACK. If the destination receives the three bytes, it sends an acknowledgment to the source device, which can now transmit three more bytes. If the destination does not receive the three bytes, because of overflowing buffers, it does not send an acknowledgment. Because the source does not receive an acknowledgment, it knows that the bytes should be retransmitted, and that the transmission rate should be decreased.
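
A toy simulation, assuming a fixed three-segment window and a receiver that never loses data, shows the send-then-wait rhythm that windowing creates. Real TCP windows are measured in octets and renegotiated continuously; the fixed count here is a simplification.

```python
# Toy sliding-window sender: transmit up to `window` segments, then wait for
# the expectational ACK (the number of the next segment the receiver wants).
def windowed_send(segments: list[bytes], window: int) -> None:
    next_to_send = 0
    while next_to_send < len(segments):
        burst = segments[next_to_send:next_to_send + window]
        for i, seg in enumerate(burst, start=next_to_send):
            print(f"send segment {i}: {seg!r}")
        # Simulated receiver: everything in the burst arrived, so it asks
        # for the segment after the burst (an expectational ACK).
        ack = next_to_send + len(burst)
        print(f"receive ACK {ack} (window {window})")
        next_to_send = ack

windowed_send([b"a", b"b", b"c", b"d", b"e"], window=3)
```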

In the figure, the sender transmits three packets before it expects an ACK. If the receiver can handle only two packets, it drops packet three, specifies three as the next expected packet, and indicates a new window size of two. The sender sends the next two packets but still specifies a window size of three, which means it will continue to expect a three-packet ACK from the receiver. The receiver replies with a request for packet five and again specifies a window size of two.

The next page describes the acknowledgment process.

Three-way handshake


Three-way handshake
11.1.4 This page will explain how TCP uses three-way handshakes for data transmission.


TCP is a connection-oriented protocol. TCP requires a connection to be established before data transfer begins. The two hosts must synchronize their initial sequence numbers to establish a connection. Synchronization occurs through an exchange of segments that carry a synchronize (SYN) control bit and the initial sequence numbers. This solution requires a mechanism that picks the initial sequence numbers and a handshake to exchange them.

The synchronization requires each side to send its own initial sequence number and to receive a confirmation of it in an acknowledgment (ACK) from the other side. Each side must receive the initial sequence number from the other side and respond with an ACK. The sequence is as follows:

1. The sending host (A) initiates a connection by sending a SYN packet to the receiving host (B) indicating its ISN = X:

A -> B SYN, seq of A = X

2. B receives the packet, records that the seq of A = X, replies with an ACK of X + 1, and indicates that its ISN = Y. The ACK of X + 1 means that host B has received all octets up to and including X and is expecting X + 1 next:

B -> A ACK, seq of A = X, SYN seq of B = Y, ACK = X + 1

3. A receives the packet from B, learns that the seq of B = Y, and responds with an ACK of Y + 1, which finalizes the connection process:

A -> B ACK, seq of B = Y, ACK = Y + 1

This exchange is called the three-way handshake.
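
The sequence-number bookkeeping of the handshake can be traced with a few lines of Python. This is only a simulation with arbitrary initial sequence numbers; a real TCP stack performs the exchange inside the operating system.

```python
import random

# Simulate the three-way handshake bookkeeping (arbitrary ISNs, no real I/O).
isn_a = random.randrange(2**32)          # host A picks its ISN (X)
isn_b = random.randrange(2**32)          # host B picks its ISN (Y)

# 1. A -> B: SYN, seq = X
print(f"A -> B  SYN      seq={isn_a}")

# 2. B -> A: SYN+ACK, seq = Y, ack = X + 1 (B expects octet X + 1 next)
print(f"B -> A  SYN+ACK  seq={isn_b}  ack={(isn_a + 1) % 2**32}")

# 3. A -> B: ACK, ack = Y + 1 (the connection is now established)
print(f"A -> B  ACK      ack={(isn_b + 1) % 2**32}")
```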

A three-way handshake is necessary because sequence numbers are not based on a global clock in the network and TCP implementations may use different mechanisms to choose the initial sequence numbers. The receiver of the first SYN would not know if the segment was delayed unless it kept track of the last sequence number used on the connection. If the receiver does not have this information, it must ask the sender to verify the SYN.

The next page will discuss the concept of windowing.

Flow control

Flow control
11.1.2 This page will describe how the transport layer provides flow control.


As the transport layer sends data segments, it tries to ensure that data is not lost. Data loss may occur if a host cannot process data as quickly as it arrives. The host is then forced to discard the data. Flow control ensures that a source host does not overflow the buffers in a destination host. To provide flow control, TCP allows the source and destination hosts to communicate. The two hosts then establish a data-transfer rate that is agreeable to both.
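
One concrete handle on that negotiation is the receive buffer of the destination host: TCP advertises a window that reflects the buffer space still available. The sketch below uses only the standard Python socket module; the buffer size chosen is arbitrary, and the operating system may round or cap it.

```python
import socket

# Flow control in practice: the receive buffer bounds the window a host can
# advertise, so adjusting it changes how fast a peer is allowed to send.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

default_buf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"default receive buffer: {default_buf} bytes")

# Request a larger buffer; the operating system may round or cap the value.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
print(f"adjusted receive buffer: {sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)} bytes")

sock.close()
```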

The next page will discuss data transport connections.

Session establishment, maintenance, and termination
11.1.3 This page discusses transport functionality and how it is accomplished on a segment-by-segment basis.


Applications can send data segments on a first-come, first-served basis. The segments that arrive first will be taken care of first. These segments can be routed to the same or different destinations. Multiple applications can share the same transport connection in the OSI reference model. This is referred to as the multiplexing of upper-layer conversations. Numerous simultaneous upper-layer conversations can be multiplexed over a single connection.

One function of the transport layer is to establish a connection-oriented session between similar devices at the application layer. For data transfer to begin, the source and destination applications inform the operating systems that a connection will be initiated. One node initiates a connection that must be accepted by the other. Protocol software modules in the two operating systems exchange messages across the network to verify that the transfer is authorized and that both sides are ready.

The connection is established and the transfer of data begins after all synchronization has occurred. The two machines continue to communicate through their protocol software to verify that the data is received correctly.

The figure shows a typical connection between two systems. The first handshake requests synchronization. The second handshake acknowledges the initial synchronization request and synchronizes the connection parameters in the opposite direction. The third handshake segment is an acknowledgment that informs the destination that both sides agree a connection has been established. After the connection has been established, data transfer begins.

Congestion can occur for two reasons:

• First, a high-speed computer might generate traffic faster than a network can transfer it.

• Second, if many computers simultaneously need to send datagrams to a single destination, that destination can experience congestion, although no single source caused the problem.

When datagrams arrive too quickly for a host or gateway to process, they are temporarily stored in memory. If the traffic continues, the host or gateway eventually exhausts its memory and must discard additional datagrams that arrive.

Instead of allowing data to be lost, the TCP process on the receiving host can issue a “not ready” indicator to the sender. This indicator signals the sender to stop data transmission. When the receiver can handle additional data, it sends a “ready” transport indicator. When this indicator is received, the sender can resume the segment transmission.

At the end of data transfer, the source host sends a signal that indicates the end of the transmission. The destination host acknowledges the end of transmission and the connection is terminated.

The next page will define three-way handshakes.

Introduction to the TCP/IP transport layer

Introduction to the TCP/IP transport layer
11.1.1 This page will describe the functions of the transport layer.


The primary duties of the transport layer are to transport and regulate the flow of information from a source to a destination, reliably and accurately. End-to-end control and reliability are provided by sliding windows, sequence numbers, and acknowledgments.

To understand reliability and flow control, think of someone who studies a foreign language for one year and then visits the country where that language is used. In conversation, words must be repeated for reliability. People must also speak slowly so that the conversation is understood, which relates to flow control.

The transport layer establishes a logical connection between two endpoints of a network. Protocols in the transport layer segment and reassemble data sent by upper-layer applications into the same transport layer data stream. This transport layer data stream provides end-to-end transport services.

The two primary duties of the transport layer are to provide flow control and reliability. The transport layer defines end-to-end connectivity between host applications. Some basic transport services are as follows:

• Segmentation of upper-layer application data

• Establishment of end-to-end operations

• Transportation of segments from one end host to another

• Flow control provided by sliding windows

• Reliability provided by sequence numbers and acknowledgments

TCP/IP is a combination of two individual protocols. IP operates at Layer 3 of the OSI model and is a connectionless protocol that provides best-effort delivery across a network. TCP operates at the transport layer and is a connection-oriented service that provides flow control and reliability. When these protocols are combined, they provide a wider range of services. Together they form the basis of the TCP/IP protocol suite, upon which the Internet is built.

The next page will explain how the transport layer controls the flow of data.

Module 11: TCP/IP Transport and Application Layers

Overview
The TCP/IP transport layer transports data between applications on source and destination devices. Familiarity with the transport layer is essential to understand modern data networks. This module will describe the functions and services of this layer.


Many of the network applications that are found at the TCP/IP application layer are familiar to most network users. HTTP, FTP, and SMTP are acronyms that are commonly seen by users of Web browsers and e-mail clients. This module also describes the function of these and other applications from the TCP/IP networking model. This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.

Students who complete this module should be able to perform the following tasks:

• Describe the functions of the TCP/IP transport layer
• Describe flow control
• Explain how a connection is established between peer systems
• Describe windowing
• Describe acknowledgment
• Identify and describe transport layer protocols
• Describe TCP and UDP header formats
• Describe TCP and UDP port numbers
• List the major protocols of the TCP/IP application layer
• Provide a brief description of the features and operation of well-known TCP/IP applications

Summary of Module 10

Summary
This page summarizes the topics discussed in this module.


IP is referred to as a connectionless protocol because no dedicated circuit connection is established between source and destination prior to transmission. IP is referred to as unreliable because it does not verify that the data reached its destination. If verification of delivery is required, then a combination of IP and a connection-oriented transport protocol such as TCP is required. If verification of error-free delivery is not required, IP can be used in combination with a connectionless transport protocol such as UDP. Connectionless network processes are often referred to as packet-switched processes. Connection-oriented network processes are often referred to as circuit-switched processes.

Protocols at each layer of the OSI model add control information to the data as it moves through the network. Because this information is added at the beginning and end of the data, this process is referred to as encapsulating the data. Layer 3 adds network, or logical, address information to the data and Layer 2 adds local, or physical, address information.

Layer 3 routing and Layer 2 switching are used to direct and deliver data throughout the network. Initially, the router receives a Layer 2 frame with a Layer 3 packet encapsulated within it. The router must strip off the Layer 2 frame and examine the Layer 3 packet. If the packet is destined for local delivery the router must encapsulate it in a new frame with the correct local MAC address as the destination. If the data must be forwarded to another broadcast domain, the router must encapsulate the Layer 3 packet in a new Layer 2 frame that contains the MAC address of the next internetworking device. In this way a frame is transmitted through networks from broadcast domain to broadcast domain and eventually delivered to the correct host.

Routed protocols, such as IP, transport data across a network. Routing protocols allow routers to choose the best path for data from source to destination. These routes can be either static routes, which are entered manually, or dynamic routes, which are learned through routing protocols. When dynamic routing protocols are used, routers use routing update messages to communicate with one another and maintain their routing tables. Routing algorithms use metrics to process routing updates and populate the routing table with the best routes. Convergence describes the speed at which all routers agree on a change in the network.

Interior gateway protocols (IGP) are routing protocols that route data within autonomous systems, while exterior gateway protocols (EGP) route data between autonomous systems. IGPs can be further categorized as either distance-vector or link-state protocols. Routers using distance-vector routing protocols periodically send routing updates consisting of all or part of their routing tables. Routers using link-state routing protocols use link-state advertisements (LSAs) to send updates only when topological changes occur in the network, and send complete routing tables much less frequently.

As a packet travels through the network, devices need a method of determining which portion of the IP address identifies the network and which portion identifies the host. A 32-bit address mask, called a subnet mask, indicates the bits of an IP address that are used for the network address. The default subnet mask for a Class A address is 255.0.0.0. For a Class B address, the subnet mask starts out as 255.255.0.0, and a Class C subnet mask begins as 255.255.255.0. The subnet mask can be used to split an existing network into subnetworks, or subnets.

Subnetting reduces the size of broadcast domains, allows LAN segments in different geographical locations to communicate through routers, and provides improved security by separating one LAN segment from another.

Custom subnet masks use more bits than the default subnet masks by borrowing these bits from the host portion of the IP address. This creates a three-part address:

• The original network address

• The subnet address made up of the bits borrowed

• The host address made up of the bits left after borrowing some for subnets

Routers use subnet masks to determine the subnetwork portion of an address for an incoming packet. This process is referred to as logical ANDing.
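
A short worked example of logical ANDing, using Python's ipaddress module with an address and mask chosen purely for illustration:

```python
import ipaddress

# Logical ANDing: AND each bit of the IP address with the subnet mask to
# recover the subnetwork address. Address and mask are illustrative only.
address = ipaddress.IPv4Address("172.16.45.14")
mask = ipaddress.IPv4Address("255.255.255.0")   # Class B network with 8 borrowed bits

network = ipaddress.IPv4Address(int(address) & int(mask))
print(network)   # 172.16.45.0 -> the subnet toward which the packet is routed
```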