Friday, March 26, 2010

WANs / Introduction to WANs

Introduction to WANs
1.1.1 A WAN is a data communications network that spans a large geographic area such as a state, province, or country. WANs often use transmission facilities provided by common carriers such as telephone companies.

These are the major characteristics of WANs:

They connect devices that are separated by wide geographical areas.

They use the services of carriers such as the Regional Bell Operating Companies (RBOCs), Sprint, MCI, and VPM Internet Services, Inc. to establish the link or connection between sites.

They use serial connections of various types to access bandwidth over large geographic areas.

A WAN differs from a LAN in several ways. For example, unlike a LAN, which connects workstations, peripherals, terminals, and other devices in a single building, a WAN makes data connections across a broad geographic area. Companies use a WAN to connect various company sites so that information can be exchanged between distant offices.

A WAN operates at the physical layer and the data link layer of the OSI reference model. It interconnects LANs that are usually separated by large geographic areas. WANs provide for the exchange of data packets and frames between routers and switches and the LANs they support.

The following devices are used in WANs:

Routers offer many services, including internetworking and WAN interface ports.

Modems, which interface voice-grade services; channel service units/digital service units (CSU/DSUs), which interface T1/E1 services; and Terminal Adapters/Network Termination 1 (TA/NT1) devices, which interface Integrated Services Digital Network (ISDN) services.

Communication servers concentrate dial-in and dial-out user communication.

WAN data link protocols describe how frames are carried between systems on a single data link. They include protocols designed to operate over dedicated point-to-point, multipoint, and multi-access switched services such as Frame Relay. WAN standards are defined and managed by a number of recognized authorities, including the following agencies:

International Telecommunication Union-Telecommunication Standardization Sector (ITU-T), formerly the Consultative Committee for International Telegraph and Telephone (CCITT)

International Organization for Standardization (ISO)
Internet Engineering Task Force (IETF)
Electronic Industries Association (EIA)
The next page will describe routers. This information is important to further understand WANs.

CCNA 2 :- Module 1 Router and Routing Basic Overview

Overview
A wide-area network (WAN) is a data communications network that connects user networks over a large geographical area. WANs have several important characteristics that distinguish them from LANs. The first lesson in this module will provide an overview of WAN technologies and protocols. It will also explain how WANs and LANs are different, and ways in which they are similar.


 
It is important to understand the physical layer components of a router. This knowledge builds a foundation for other information and skills that are needed to configure routers and manage routed networks. This module provides a close examination of the internal and external physical components of the router. The module also describes techniques for physically connecting the various router interfaces.

 
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.

Students who complete this module should be able to perform the following tasks:

  • Identify organizations responsible for WAN standards  
  • Explain the difference between a WAN and LAN and the type of standards and protocols each uses  
  • Describe the role of a router in a WAN  
  • Identify internal components of the router and describe their functions  
  • Describe the physical characteristics of the router  
  • Identify LAN and management ports on a router  
  • Properly connect Ethernet, serial WAN, and console ports

 

Thursday, March 25, 2010

Notice for all viewers :)

Notice

The first semester of CCNA has been published with 11 chapters. Please send feedback to my email; if any readers have questions, please write back. I will be happy to hear from you.

The second semester will be posted in a few days; it is still in progress. I hope everyone will enjoy it.

Aqeel Haider
(Writer)

Summary of Module 11

Summary
This page summarizes the topics discussed in this module.


The primary duties of the transport layer, Layer 4 of the OSI model, are to transport and regulate the flow of information from the source to the destination reliably and accurately.

The transport layer multiplexes data from upper layer applications into a stream of data packets. It uses port (socket) numbers to identify different conversations and delivers the data to the correct application.

The Transmission Control Protocol (TCP) is a connection-oriented transport protocol that provides flow control as well as reliability. TCP uses a three-way handshake to establish a synchronized circuit between end-user applications. Each datagram is numbered before transmission. At the receiving station, TCP reassembles the segments into a complete message. If a sequence number is missing in the series, that segment is retransmitted.

Flow control ensures that a transmitting node does not overwhelm a receiving node with data. The simplest method of flow control used by TCP involves a “not ready” signal that notifies the transmitting device that the buffers on the receiving device are full. When the receiver can handle additional data, the receiver sends a “ready” transport indicator.

Positive acknowledgment with retransmission is another TCP protocol technique that guarantees reliable delivery of data. Because having to wait for an acknowledgment after sending each packet would negatively impact throughput, windowing is used to allow multiple packets to be transmitted before an acknowledgment is received. TCP window sizes are variable during the lifetime of a connection.

If an application does not require flow control or an acknowledgment, as in the case of a broadcast transmission, User Datagram Protocol (UDP) can be used instead of TCP. UDP is a connectionless transport protocol in the TCP/IP protocol stack that allows multiple conversations to occur simultaneously but does not provide acknowledgments or guaranteed delivery. A UDP header is much smaller than a TCP header because of the lack of control information it must contain.

Some of the protocols and applications that function at the application level are well known to Internet users:

• Domain Name System (DNS) - Used in IP networks to translate names of network nodes into IP addresses

• File Transfer Protocol (FTP) - Used for transferring files between networks

• Hypertext Transfer Protocol (HTTP) - Used to deliver hypertext markup language (HTML) documents to a client application, such as a WWW browser

• Simple Mail Transfer Protocol (SMTP) - Used to provide electronic mail services

• Simple Network Management Protocol (SNMP) - Used to monitor and control network devices and to manage configurations, statistics collection, performance and security

• Telnet - Used to login to a remote host that is running a Telnet server application and then to execute commands from the command line

SMTP / SNMP / Telnet

SMTP
11.2.5 This page will discuss the features of SMTP.


Email servers communicate with each other using the Simple Mail Transfer Protocol (SMTP) to send and receive mail. The SMTP protocol transports email messages in ASCII format using TCP.

When a mail server receives a message destined for a local client, it stores that message and waits for the client to collect the mail. There are several ways for mail clients to collect their mail. They can use programs that access the mail server files directly or collect their mail using one of many network protocols. The most popular mail client protocols are POP3 and IMAP4, which both use TCP to transport data. Even though mail clients use these special protocols to collect mail, they almost always use SMTP to send mail. Since two different protocols, and possibly two different servers, are used to send and receive mail, it is possible that mail clients can perform one task and not the other. Therefore, it is usually a good idea to troubleshoot e-mail sending problems separately from e-mail receiving problems.

When checking the configuration of a mail client, verify that the SMTP and POP or IMAP settings are correctly configured. A good way to test if a mail server is reachable is to Telnet to the SMTP port (25) or to the POP3 port (110). The following command format is used at the Windows command line to test the ability to reach the SMTP service on the mail server at IP address 192.168.10.5:

C:\>telnet 192.168.10.5 25
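A similar reachability check can be scripted. The sketch below, which assumes the same example address 192.168.10.5 from the text, simply opens a TCP connection to the SMTP and POP3 ports with Python's standard socket module and prints the greeting banner if one arrives.

import socket

def check_port(host, port, timeout=5):
    # Returns the server greeting if the port answers, or None if it is unreachable.
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(1024).decode(errors="replace")
    except OSError:
        return None

print(check_port("192.168.10.5", 25))    # SMTP greeting, normally a 220 banner
print(check_port("192.168.10.5", 110))   # POP3 greeting, normally a +OK banner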

The SMTP protocol does not offer much in the way of security and does not require any authentication. Administrators often do not allow hosts that are not part of their network to use their SMTP server to send or relay mail. This is to prevent unauthorized users from using their servers as mail relays.

The next page will describe the features of SNMP.

SNMP
11.2.6 This page will define SNMP.


The Simple Network Management Protocol (SNMP) is an application layer protocol that facilitates the exchange of management information between network devices. SNMP enables network administrators to manage network performance, find and solve network problems, and plan for network growth. SNMP uses UDP as its transport layer protocol.

An SNMP managed network consists of the following three key components:

• Network management system (NMS) – NMS executes applications that monitor and control managed devices. The bulk of the processing and memory resources required for network management are provided by NMS. One or more NMSs must exist on any managed network.

• Managed devices – Managed devices are network nodes that contain an SNMP agent and that reside on a managed network. Managed devices collect and store management information and make this information available to NMSs using SNMP. Managed devices, sometimes called network elements, can be routers, access servers, switches, and bridges, hubs, computer hosts, or printers.

• Agents – Agents are network-management software modules that reside in managed devices. An agent has local knowledge of management information and translates that information into a form compatible with SNMP.

The next page will describe Telnet.

Telnet
11.2.7 This page will explain the features of Telnet.


Telnet client software provides the ability to log in to a remote Internet host that is running a Telnet server application and then to execute commands from the command line. A Telnet client is referred to as a local host. A Telnet server, which runs special software called a daemon, is referred to as a remote host.

To make a connection from a Telnet client, the connection option must be selected. A dialog box typically prompts for a host name and terminal type. The host name is the IP address or DNS name of the remote computer. The terminal type describes the type of terminal emulation that the Telnet client should perform. The Telnet operation uses none of the processing power from the transmitting computer. Instead, it transmits the keystrokes to the remote host and sends the resulting screen output back to the local monitor. All processing and storage take place on the remote computer.

Telnet works at the application layer of the TCP/IP model. Therefore, Telnet works at the top three layers of the OSI model. The application layer deals with commands. The presentation layer handles formatting, usually ASCII. The session layer transmits. In the TCP/IP model, all of these functions are considered to be part of the application layer.

This page concludes this lesson. The next page will summarize the main points from the module.

FTP and TFTP / HTTP

FTP and TFTP
11.2.3 This page will describe the features of FTP and TFTP.


FTP is a reliable, connection-oriented service that uses TCP to transfer files between systems that support FTP. The main purpose of FTP is to transfer files from one computer to another by copying and moving files from servers to clients, and from clients to servers. When files are copied from a server, FTP first establishes a control connection between the client and the server. Then a second connection is established, which is a link between the computers through which the data is transferred. Data transfer can occur in ASCII mode or in binary mode. These modes determine the encoding used for the data file, which in the OSI model is a presentation layer task. After the file transfer has ended, the data connection terminates automatically. When the entire session of copying and moving files is complete, the command link is closed when the user logs off and ends the session.
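As an illustration of the control and data connections described above, the following sketch uses Python's standard ftplib module to download a file. The host name, credentials, and file name are hypothetical placeholders.

from ftplib import FTP

with FTP("ftp.example.com") as ftp:            # control connection on TCP port 21
    ftp.login("user", "password")
    with open("report.txt", "wb") as f:
        # A separate data connection is opened for the actual transfer.
        ftp.retrbinary("RETR report.txt", f.write)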

TFTP is a connectionless service that uses User Datagram Protocol (UDP). TFTP is used on the router to transfer configuration files and Cisco IOS images and to transfer files between systems that support TFTP. TFTP is designed to be small and easy to implement. Therefore, it lacks most of the features of FTP. TFTP can read or write files to or from a remote server but it cannot list directories and currently has no provisions for user authentication. It is useful in some LANs because it operates faster than FTP and in a stable environment it works reliably.

The next page will discuss HTTP.

HTTP
11.2.4 This page will describe the features of HTTP.


Hypertext Transfer Protocol (HTTP) works with the World Wide Web, which is the fastest growing and most used part of the Internet. One of the main reasons for the extraordinary growth of the Web is the ease with which it allows access to information. A Web browser is a client-server application, which means that it requires both a client and a server component in order to function. A Web browser presents data in multimedia formats on Web pages that use text, graphics, sound, and video. The Web pages are created with a format language called Hypertext Markup Language (HTML). HTML directs a Web browser on a particular Web page to produce the appearance of the page in a specific manner. In addition, HTML specifies locations for the placement of text, files, and objects that are to be transferred from the Web server to the Web browser.

Hyperlinks make the World Wide Web easy to navigate. A hyperlink is an object, word, phrase, or picture, on a Web page. When that hyperlink is clicked, it directs the browser to a new Web page. The Web page contains, often hidden within its HTML description, an address location known as a Uniform Resource Locator (URL).

In the URL http://www.cisco.com/edu/, the "http://" tells the browser which protocol to use. The second part, "www.cisco.com", is the hostname or name of a specific machine with a specific IP address. The last part, "/edu/", identifies the specific folder location on the server that contains the default web page.
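The same breakdown of a URL into protocol, host, and folder can be seen with Python's standard urllib.parse module, using the example URL from the text:

from urllib.parse import urlparse

url = urlparse("http://www.cisco.com/edu/")
print(url.scheme)    # 'http' - tells the browser which protocol to use
print(url.netloc)    # 'www.cisco.com' - the host name
print(url.path)      # '/edu/' - the folder location on the web server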

A Web browser usually opens to a starting or "home" page. The URL of the home page has already been stored in the configuration area of the Web browser and can be changed at any time. From the starting page, click on one of the Web page hyperlinks, or type a URL in the address bar of the browser. The Web browser examines the protocol to determine if it needs to open another program, and then determines the IP address of the Web server using DNS. Then the transport layer, network layer, data link layer, and physical layer work together to initiate a session with the Web server. The data that is transferred to the HTTP server contains the folder name of the Web page location. The data can also contain a specific file name for an HTML page. If no name is given, then the default name as specified in the configuration on the server is used.

The server responds to the request by sending to the Web client all of the text, audio, video, and graphic files specified in the HTML instructions. The client browser reassembles all the files to create a view of the Web page, and then terminates the session. If another page that is located on the same or a different server is clicked, the whole process begins again.
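The request and response exchange described above can be sketched with Python's standard http.client module. The host and folder below are hypothetical placeholders, and a real browser would of course also resolve DNS, render the HTML, and fetch any referenced files.

import http.client

conn = http.client.HTTPConnection("www.example.com", 80)
conn.request("GET", "/edu/")              # ask the server for the page in that folder
response = conn.getresponse()
print(response.status, response.reason)   # e.g. 200 OK
html = response.read()                    # the HTML that the browser would render
conn.close()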

The next page will describe the protocol used to send e-mail.

Introduction to the TCP/IP application layer / DNS

Introduction to the TCP/IP application layer
11.2.1 This page will introduce some TCP/IP application layer protocols.


The session, presentation, and application layers of the OSI model are bundled into the application layer of the TCP/IP model. This means that representation, encoding, and dialog control are all handled in the TCP/IP application layer. This design ensures that the TCP/IP model provides maximum flexibility at the application layer for software developers.

The TCP/IP protocols that support file transfer, e-mail, and remote login are probably the most familiar to users of the Internet. These protocols include the following applications:

• DNS
• FTP
• HTTP
• SMTP
• SNMP
• Telnet

The next page will discuss DNS.

DNS
11.2.2 This page will describe DNS.


The Internet is built on a hierarchical addressing scheme. This scheme allows for routing to be based on classes of addresses rather than on individual addresses. The problem this creates for the user is associating the correct address with the Internet site. It is very easy to forget the IP address of a particular site because there is nothing to associate the contents of the site with the address. Imagine the difficulty of remembering the IP addresses of tens, hundreds, or even thousands of Internet sites.

A domain naming system was developed in order to associate the contents of a site with the address of that site. The Domain Name System (DNS) is a system used on the Internet for translating names of domains and their publicly advertised network nodes into IP addresses. A domain is a group of computers that are associated by their geographical location or their business type. A domain name is a string of characters, numbers, or both. Usually a name or abbreviation that represents the numeric address of an Internet site will make up the domain name. There are more than 200 top-level domains on the Internet, examples of which include the following:

.us – United States
.uk – United Kingdom

There are also generic top-level domains, examples of which include the following:

.edu – educational sites
.com – commercial sites
.gov – government sites
.org – non-profit sites
.net – network service
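The name-to-address translation that DNS performs can be seen with a single call to Python's standard socket module. The host name below is a hypothetical example.

import socket

print(socket.gethostbyname("www.example.com"))   # the IP address that DNS returns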

The next page will discuss FTP and TFTP.

Wednesday, March 24, 2010

UDP

UDP
11.1.8 This page will discuss UDP. UDP is the connectionless transport protocol in the TCP/IP protocol stack.


UDP is a simple protocol that exchanges datagrams without guaranteed delivery. It relies on higher-layer protocols to handle errors and retransmit data.

UDP does not use windows or ACKs. Reliability is provided by application layer protocols. UDP is designed for applications that do not need to put sequences of segments together.

The following protocols use UDP:

• TFTP
• SNMP
• DHCP
• DNS

The following are the definitions of the fields in the UDP segment (a short packing example follows the list):

• Source port – Number of the port that sends data
• Destination port – Number of the port that receives data
• Length – Number of bytes in header and data
• Checksum – Calculated checksum of the header and data fields
• Data – Upper-layer protocol data
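The sketch below packs these fields into the 8-byte UDP header wire format with Python's struct module. The port numbers and payload are hypothetical, and the checksum is simply left at zero, which IPv4 allows to mean that no checksum was computed.

import struct

payload = b"hello"
source_port = 50000               # dynamically assigned by the sending host
destination_port = 69             # TFTP's well-known port
length = 8 + len(payload)         # header (8 bytes) plus data
checksum = 0                      # no checksum computed

header = struct.pack("!HHHH", source_port, destination_port, length, checksum)
segment = header + payload
print(len(segment), "bytes on the wire")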

The next page discusses port numbers used by both TCP and UDP.

TCP and UDP port numbers
11.1.9 This page examines port numbers.


Both TCP and UDP use port numbers to pass information to the upper layers. Port numbers are used to keep track of different conversations that cross the network at the same time.

Application software developers agree to use well-known port numbers that are issued by the Internet Assigned Numbers Authority (IANA). Any conversation bound for the FTP application uses the standard port numbers 20 and 21. Port 20 is used for the data portion and Port 21 is used for control. Conversations that do not involve an application with a well-known port number are assigned port numbers randomly from within a specific range above 1023. Some ports are reserved in both TCP and UDP. However, applications might not be written to support them. Port numbers have the following assigned ranges:

• Numbers below 1024 are considered well-known port numbers.
• Numbers above 1024 are dynamically assigned port numbers.
• Registered port numbers are for vendor-specific applications. Most of these are above 1024.

End systems use port numbers to select the proper application. The source host dynamically assigns source port numbers. These numbers are always greater than 1023.
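The sketch below shows a well-known port lookup and a dynamically assigned source port using Python's standard socket module. It assumes the local services database lists the FTP entries.

import socket

print(socket.getservbyname("ftp", "tcp"))        # 21, the FTP control port
print(socket.getservbyname("ftp-data", "tcp"))   # 20, the FTP data port

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("", 0))                # port 0 asks the operating system for an ephemeral port
print(s.getsockname()[1])      # typically a number above 1023
s.close()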

This page concludes this lesson. The next lesson will focus on the application layer. The first page provides an introduction.

Acknowledgment / TCP

Acknowledgment
11.1.6 This page will discuss acknowledgments and the sequence of segments.


Reliable delivery guarantees that a stream of data sent from one device is delivered through a data link to another device without duplication or data loss. Positive acknowledgment with retransmission is one technique that guarantees reliable delivery of data. Positive acknowledgment requires a recipient to communicate with the source and send back an ACK when the data is received. The sender keeps a record of each data packet, or TCP segment, that it sends and expects an ACK. The sender also starts a timer when it sends a segment and will retransmit a segment if the timer expires before an ACK arrives.

Figure shows a sender that transmits data packets 1, 2, and 3. The receiver acknowledges receipt of the packets with a request for packet 4. When the sender receives the ACK, it sends packets 4, 5, and 6. If packet 5 does not arrive at the destination, the receiver acknowledges with a request to resend packet 5. The sender resends packet 5 and then receives an ACK to continue with the transmission of packet 7.

TCP provides sequencing of segments with a forward reference acknowledgment. Each segment is numbered before transmission. At the destination, TCP reassembles the segments into a complete message. If a sequence number is missing in the series, that segment is retransmitted. Segments that are not acknowledged within a given time period will result in a retransmission.
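The sketch below is a simplified simulation of positive acknowledgment with retransmission: the sender transmits a segment, starts a timer, and resends the segment if no ACK arrives before the timer expires. It is only an illustration of the idea, not a real TCP implementation, and the segment numbers are hypothetical.

import time

TIMEOUT = 0.5        # seconds to wait for an ACK before retransmitting

def send_reliably(segment, transmit, acks_received):
    attempts = 0
    while True:
        transmit(segment)                    # send (or resend) the segment
        attempts += 1
        deadline = time.monotonic() + TIMEOUT
        while time.monotonic() < deadline:
            if segment in acks_received:     # positive acknowledgment arrived
                return attempts
            time.sleep(0.05)
        # Timer expired with no ACK, so loop around and retransmit.

# Example: the first copy of segment 5 is "lost", so it is sent twice.
acks = set()
copies_sent = []
def fake_transmit(seg):
    copies_sent.append(seg)
    if len(copies_sent) > 1:                 # only the second copy gets through
        acks.add(seg)

print(send_reliably(5, fake_transmit, acks))   # prints 2 (one retransmission)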

The next page will describe TCP in more detail.

TCP
11.1.7 This page will discuss the protocols that use TCP and the fields included in a TCP segment.


TCP is a connection-oriented transport layer protocol that provides reliable full-duplex data transmission. TCP is part of the TCP/IP protocol stack. In a connection-oriented environment, a connection is established between both ends before the transfer of information can begin. TCP breaks messages into segments, reassembles them at the destination, and resends anything that is not received. TCP supplies a virtual circuit between end-user applications.

The following protocols use TCP:

• FTP
• HTTP
• SMTP
• Telnet

The following are the definitions of the fields in the TCP segment (a packing example follows the list):

• Source port – Number of the port that sends data
• Destination port – Number of the port that receives data
• Sequence number – Number used to ensure the data arrives in the correct order
• Acknowledgment number – Next expected TCP octet
• HLEN – Number of 32-bit words in the header
• Reserved – Set to zero
• Code bits – Control functions, such as setup and termination of a session
• Window – Number of octets that the sender will accept
• Checksum – Calculated checksum of the header and data fields
• Urgent pointer – Indicates the end of the urgent data
• Option – One option currently defined, maximum TCP segment size
• Data – Upper-layer protocol data
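The sketch below packs these fields into the 20-byte TCP header wire format with Python's struct module. All values are hypothetical, and the checksum is left at zero; a real TCP stack computes it over the segment and a pseudo-header.

import struct

source_port, destination_port = 50000, 80
sequence_number = 1001
acknowledgment_number = 0
hlen = 5                                # 5 x 32-bit words = 20-byte header, no options
code_bits = 0x02                        # SYN bit set, as at the start of a handshake
offset_and_flags = (hlen << 12) | code_bits
window = 8192                           # octets the sender will accept
checksum = 0
urgent_pointer = 0

header = struct.pack("!HHLLHHHH",
                     source_port, destination_port,
                     sequence_number, acknowledgment_number,
                     offset_and_flags, window, checksum, urgent_pointer)
print(len(header), "byte TCP header")   # 20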

The next page will define UDP.

Tuesday, March 23, 2010

Windowing

Windowing
11.1.5 This page will explain how windows are used to transmit data.


Data packets must be delivered to the recipient in the same order in which they were transmitted to have a reliable, connection-oriented data transfer. The protocol fails if any data packets are lost, damaged, duplicated, or received in a different order. An easy solution is to have a recipient acknowledge the receipt of each packet before the next packet is sent.

If a sender had to wait for an ACK after each packet was sent, throughput would be low. Therefore, most connection-oriented, reliable protocols allow multiple packets to be sent before an ACK is received. The time interval after the sender transmits a data packet and before the sender processes any ACKs is used to transmit more data. The number of data packets the sender can transmit before it receives an ACK is known as the window size, or window.

TCP uses expectational ACKs. This means that the ACK number refers to the next packet that is expected.

Windowing refers to the fact that the window size is negotiated dynamically in the TCP session. Windowing is a flow-control mechanism. Windowing requires the source device to receive an ACK from the destination after a certain amount of data is transmitted. The destination host reports a window size to the source host. This window specifies the number of packets that the destination host is prepared to receive. The first packet is the ACK.

With a window size of three, the source device can send three bytes to the destination. The source device must then wait for an ACK. If the destination receives the three bytes, it sends an acknowledgment to the source device, which can now transmit three more bytes. If the destination does not receive the three bytes, because of overflowing buffers, it does not send an acknowledgment. Because the source does not receive an acknowledgment, it knows that the bytes should be retransmitted, and that the transmission rate should be decreased.

In Figure , the sender sends three packets before it expects an ACK. If the receiver can handle only two packets, it drops packet three, specifies three as the next packet, and indicates a new window size of two. The sender sends the next two packets, but still specifies a window size of three. This means that the sender will still expect a three-packet ACK from the receiver. The receiver replies with a request for packet five and again specifies a window size of two.
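The sketch below is a simplified simulation of this exchange: the sender transmits up to a window's worth of packets, the receiver buffers what it can, and the expectational ACK names the next packet wanted along with the advertised window size. It models only the bookkeeping, not real TCP behavior.

def simulate(total_packets, receiver_capacity):
    next_to_send = 1
    window = 3                                    # initial window size
    while next_to_send <= total_packets:
        last = min(next_to_send + window - 1, total_packets)
        burst = list(range(next_to_send, last + 1))
        print("send", burst)
        delivered = burst[:receiver_capacity]     # receiver can only buffer this many
        ack = delivered[-1] + 1                   # expectational ACK: next packet wanted
        window = receiver_capacity                # receiver advertises its window size
        print("  ack", ack, "window", window)
        next_to_send = ack

simulate(total_packets=6, receiver_capacity=2)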

The next page describes the acknowledgment process.

Three-way handshake


Three-way handshake
11.1.4 This page will explain how TCP uses three-way handshakes for data transmission.


TCP is a connection-oriented protocol. TCP requires a connection to be established before data transfer begins. The two hosts must synchronize their initial sequence numbers to establish a connection. Synchronization occurs through an exchange of segments that carry a synchronize (SYN) control bit and the initial sequence numbers. This solution requires a mechanism that picks the initial sequence numbers and a handshake to exchange them.

The synchronization requires each side to send its own initial sequence number and to receive a confirmation of exchange in an acknowledgment (ACK) from the other side. Each side must receive the initial sequence number from the other side and respond with an ACK. The sequence is as follows:

1. The sending host (A) initiates a connection by sending a SYN packet to the receiving host (B) indicating its INS = X:

A -> B SYN, seq of A = X

2. B receives the packet, records that the seq of A = X, replies with an ACK of X + 1, and indicates that its INS = Y. The ACK of X + 1 means that host B has received all octets up to and including X and is expecting X + 1 next:

B -> A ACK, seq of A = X, SYN seq of B = Y, ACK = X + 1

3. A receives the packet from B, learns that the seq of B = Y, and responds with an ACK of Y + 1, which finalizes the connection process:

A -> B ACK, seq of B = Y, ACK = Y + 1

This exchange is called the three-way handshake.
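The bookkeeping of the exchange can be sketched in a few lines of Python. X and Y stand for the initial sequence numbers each host picks; this models only the numbers carried in the three segments, not a real TCP implementation.

import random

x = random.randint(0, 2**32 - 1)        # host A's initial sequence number
y = random.randint(0, 2**32 - 1)        # host B's initial sequence number

syn     = {"flags": "SYN",     "seq": x}                           # 1. A -> B
syn_ack = {"flags": "SYN,ACK", "seq": y, "ack": syn["seq"] + 1}    # 2. B -> A
ack     = {"flags": "ACK",     "seq": x + 1,
           "ack": syn_ack["seq"] + 1}                              # 3. A -> B

print(syn, syn_ack, ack, sep="\n")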

A three-way handshake is necessary because sequence numbers are not based on a global clock in the network and TCP protocols may use different mechanisms to choose the initial sequence numbers. The receiver of the first SYN would not know if the segment was delayed unless it kept track of the last sequence number used on the connection. If the receiver does not have this information, it must ask the sender to verify the SYN.

The next page will discuss the concept of windowing.

Flow control

Flow control
11.1.2 This page will describe how the transport layer provides flow control.


As the transport layer sends data segments, it tries to ensure that data is not lost. Data loss may occur if a host cannot process data as quickly as it arrives. The host is then forced to discard the data. Flow control ensures that a source host does not overflow the buffers in a destination host. To provide flow control, TCP allows the source and destination hosts to communicate. The two hosts then establish a data-transfer rate that is agreeable to both.

The next page will discuss data transport connections.

Session establishment, maintenance, and termination
11.1.3 This page discusses transport functionality and how it is accomplished on a segment-by-segment basis.


Applications can send data segments on a first-come, first-served basis. The segments that arrive first will be taken care of first. These segments can be routed to the same or different destinations. Multiple applications can share the same transport connection in the OSI reference model. This is referred to as the multiplexing of upper-layer conversations. Numerous simultaneous upper-layer conversations can be multiplexed over a single connection.

One function of the transport layer is to establish a connection-oriented session between similar devices at the application layer. For data transfer to begin, the source and destination applications inform the operating systems that a connection will be initiated. One node initiates a connection that must be accepted by the other. Protocol software modules in the two operating systems exchange messages across the network to verify that the transfer is authorized and that both sides are ready.

The connection is established and the transfer of data begins after all synchronization has occurred. The two machines continue to communicate through their protocol software to verify that the data is received correctly.

Figure shows a typical connection between two systems. The first handshake requests synchronization. The second handshake acknowledges the initial synchronization request and synchronizes the connection parameters in the opposite direction. The third handshake segment is an acknowledgment used to inform the destination that both sides agree that a connection has been established. After the connection has been established, data transfer begins.

Congestion can occur for two reasons:

• First, a high-speed computer might generate traffic faster than a network can transfer it.

• Second, if many computers simultaneously need to send datagrams to a single destination, that destination can experience congestion, although no single source caused the problem.

When datagrams arrive too quickly for a host or gateway to process, they are temporarily stored in memory. If the traffic continues, the host or gateway eventually exhausts its memory and must discard additional datagrams that arrive.

Instead of allowing data to be lost, the TCP process on the receiving host can issue a “not ready” indicator to the sender. This indicator signals the sender to stop data transmission. When the receiver can handle additional data, it sends a “ready” transport indicator. When this indicator is received, the sender can resume the segment transmission.

At the end of data transfer, the source host sends a signal that indicates the end of the transmission. The destination host acknowledges the end of transmission and the connection is terminated.

The next page will define three-way handshakes.

Introduction to the TCP/IP transport layer

Introduction to the TCP/IP transport layer
11.1.1 This page will describe the functions of the transport layer.


The primary duties of the transport layer are to transport and regulate the flow of information from a source to a destination, reliably and accurately. End-to-end control and reliability are provided by sliding windows, sequencing numbers, and acknowledgments.

To understand reliability and flow control, think of someone who studies a foreign language for one year and then visits the country where that language is used. In conversation, words must be repeated for reliability. People must also speak slowly so that the conversation is understood, which relates to flow control.

The transport layer establishes a logical connection between two endpoints of a network. Protocols in the transport layer segment and reassemble data sent by upper-layer applications into the same transport layer data stream. This transport layer data stream provides end-to-end transport services.

The two primary duties of the transport layer are to provide flow control and reliability. The transport layer defines end-to-end connectivity between host applications. Some basic transport services are as follows:

• Segmentation of upper-layer application data

• Establishment of end-to-end operations

• Transportation of segments from one end host to another

• Flow control provided by sliding windows

• Reliability provided by sequence numbers and acknowledgments

TCP/IP is a combination of two individual protocols. IP operates at Layer 3 of the OSI model and is a connectionless protocol that provides best-effort delivery across a network. TCP operates at the transport layer and is a connection-oriented service that provides flow control and reliability. When these protocols are combined they provide a wider range of services. The combined protocols are the basis for the TCP/IP protocol suite. The Internet is built upon this TCP/IP protocol suite.

The next page will explain how the transport layer controls the flow of data.

Module 11: TCP/IP Transport and Application Layers

Overview
The TCP/IP transport layer transports data between applications on source and destination devices. Familiarity with the transport layer is essential to understand modern data networks. This module will describe the functions and services of this layer.


Many of the network applications that are found at the TCP/IP application layer are familiar to most network users. HTTP, FTP, and SMTP are acronyms that are commonly seen by users of Web browsers and e-mail clients. This module also describes the function of these and other applications from the TCP/IP networking model. This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.

Students who complete this module should be able to perform the following tasks:

• Describe the functions of the TCP/IP transport layer
• Describe flow control
• Explain how a connection is established between peer systems
• Describe windowing
• Describe acknowledgment
• Identify and describe transport layer protocols
• Describe TCP and UDP header formats
• Describe TCP and UDP port numbers
• List the major protocols of the TCP/IP application layer
• Provide a brief description of the features and operation of well-known TCP/IP applications

Summary of Module 10

Summary
This page summarizes the topics discussed in this module.


IP is referred to as a connectionless protocol because no dedicated circuit connection is established between the source and destination prior to transmission. IP is referred to as unreliable because it does not verify that the data reached its destination. If verification of delivery is required, then a combination of IP and a connection-oriented transport protocol such as TCP is required. If verification of error-free delivery is not required, IP can be used in combination with a connectionless transport protocol such as UDP. Connectionless network processes are often referred to as packet-switched processes. Connection-oriented network processes are often referred to as circuit-switched processes.

Protocols at each layer of the OSI model add control information to the data as it moves through the network. Because this information is added at the beginning and end of the data, this process is referred to as encapsulating the data. Layer 3 adds network, or logical, address information to the data and Layer 2 adds local, or physical, address information.

Layer 3 routing and Layer 2 switching are used to direct and deliver data throughout the network. Initially, the router receives a Layer 2 frame with a Layer 3 packet encapsulated within it. The router must strip off the Layer 2 frame and examine the Layer 3 packet. If the packet is destined for local delivery the router must encapsulate it in a new frame with the correct local MAC address as the destination. If the data must be forwarded to another broadcast domain, the router must encapsulate the Layer 3 packet in a new Layer 2 frame that contains the MAC address of the next internetworking device. In this way a frame is transmitted through networks from broadcast domain to broadcast domain and eventually delivered to the correct host.

Routed protocols, such as IP, transport data across a network. Routing protocols allow routers to choose the best path for data from source to destination. These routes can be either static routes, which are entered manually, or dynamic routes, which are learned through routing protocols. When dynamic routing protocols are used, routers use routing update messages to communicate with one another and maintain their routing tables. Routing algorithms use metrics to process routing updates and populate the routing table with the best routes. Convergence describes the speed at which all routers agree on a change in the network.

Interior gateway protocols (IGP) are routing protocols that route data within autonomous systems, while exterior gateway protocols (EGP) route data between autonomous systems. IGPs can be further categorized as either distance-vector or link-state protocols. Routers using distance-vector routing protocols periodically send routing updates consisting of all or part of their routing tables. Routers using link-state routing protocols use link-state advertisements (LSAs) to send updates only when topological changes occur in the network, and send complete routing tables much less frequently.

As a packet travels through the network, devices need a method of determining which portion of the IP address identifies the network and which portion identifies the host. A 32-bit address mask, called a subnet mask, is used to indicate the bits of an IP address that are being used for the network address. The default subnet mask for a Class A address is 255.0.0.0. For a Class B address, the subnet mask always starts out as 255.255.0.0, and a Class C subnet mask begins as 255.255.255.0. The subnet mask can be used to split up an existing network into subnetworks, or subnets.

Subnetting reduces the size of broadcast domains, allows LAN segments in different geographical locations to communicate through routers and provides improved security by separating one LAN segment from another.

Custom subnet masks use more bits than the default subnet masks by borrowing these bits from the host portion of the IP address. This creates a three-part address:

• The original network address

• The subnet address made up of the bits borrowed

• The host address made up of the bits left after borrowing some for subnets

Routers use subnet masks to determine the subnetwork portion of an address for an incoming packet. This process is referred to as logical ANDing.

Calculating the resident subnetwork through ANDing

Calculating the resident subnetwork through ANDing
10.3.6 This page will explain the concept of ANDing.


Routers use subnet masks to determine the home subnetwork for individual nodes. This process is referred to as logical ANDing. ANDing is a binary process by which the router calculates the subnetwork ID for an incoming packet. ANDing is similar to multiplication.

This process is handled at the binary level. Therefore, it is necessary to view the IP address and mask in binary. The IP address and the subnetwork address are ANDed with the result being the subnetwork ID. The router then uses that information to forward the packet across the correct interface.
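The sketch below performs the ANDing described above with Python's standard ipaddress module. The host address and /27 mask are hypothetical examples; the bitwise AND of the two 32-bit values yields the subnetwork ID.

import ipaddress

ip = int(ipaddress.IPv4Address("192.168.10.75"))
mask = int(ipaddress.IPv4Address("255.255.255.224"))   # a /27 subnet mask

subnet_id = ip & mask                                   # logical AND, bit by bit
print(ipaddress.IPv4Address(subnet_id))                 # 192.168.10.64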

Subnetting is a learned skill. It takes many hours of practice exercises to develop flexible and workable addressing schemes. A variety of subnet calculators are available on the web. However, a network administrator must know how to manually calculate subnets in order to effectively design the network scheme and ensure the validity of the results from a subnet calculator. The subnet calculator will not provide the initial scheme, only the final addressing. Also, no calculators of any kind are permitted during the certification exam.

This page concludes this lesson. The next page will summarize the main points from the module.

Subnetting Class A and B networks

Subnetting Class A and B networks
10.3.5 This page will describe the process used to subnet Class A, B, and C networks.


The Class A and B subnetting procedure is identical to the process for Class C, except that there may be significantly more bits involved. There are 22 bits available for assignment to the subnet field in a Class A address, while a Class B address has 14 bits available.

Assigning 12 bits of a Class B address to the subnet field creates a subnet mask of 255.255.255.240 or /28. All eight bits were assigned in the third octet resulting in 255, the total value of all eight bits. Four bits were assigned in the fourth octet resulting in 240. Recall that the slash mask is the sum total of all bits assigned to the subnet field plus the fixed network bits.

Assigning 20 bits of a Class A address to the subnet field creates a subnet mask of 255.255.255.240 or /28. All eight bits of the second and third octets were assigned to the subnet field and four bits from the fourth octet.

In this situation, it is apparent that the subnet mask for the Class A and Class B addresses appear identical. Unless the mask is related to a network address it is not possible to decipher how many bits were assigned to the subnet field.

Whichever class of address needs to be subnetted, the following rules are the same:

Total subnets = 2 to the power of the bits borrowed
Total hosts = 2 to the power of the bits remaining
Usable subnets = 2 to the power of the bits borrowed minus 2
Usable hosts = 2 to the power of the bits remaining minus 2

The next page will discuss logical ANDing.

Applying the subnet mask

Applying the subnet mask
10.3.4 This page will teach students how to apply a subnet mask.
Once the subnet mask has been established, it can then be used to create the subnet scheme. The chart in Figure is an example of the subnets and addresses created by assigning three bits to the subnet field. This will create eight subnets with 32 hosts per subnet. Start with zero (0) when numbering subnets. The first subnet is always referenced as the zero subnet.

When filling in the subnet chart three of the fields are automatic, others require some calculation. The subnetwork ID of subnet zero is the same as the major network number, in this case 192.168.10.0. The broadcast ID for the whole network is the largest number possible, in this case 192.168.10.255. The third number that is given is the subnetwork ID for subnet number seven. This number is the three network octets with the subnet mask number inserted in the fourth octet position. Three bits were assigned to the subnet field with a cumulative value of 224. The ID for subnet seven is 192.168.10.224. By inserting these numbers, checkpoints have been established that will verify the accuracy when the chart is completed.

When consulting the subnetting chart or using the formula, the three bits assigned to the subnet field will result in 32 total hosts assigned to each subnet. This information provides the step count for each subnetwork ID. Adding 32 to each preceding number, starting with subnet zero, the ID for each subnet is established. Notice that the subnet ID has all binary 0s in the host portion.

The broadcast field is the last number in each subnetwork, and has all binary ones in the host portion. This address has the ability to broadcast only to the members of a single subnet. Since the subnetwork ID for subnet zero is 192.168.10.0 and there are 32 total hosts the broadcast ID would be 192.168.10.31. Starting at zero the 32nd sequential number is 31. It is important to remember that zero (0) is a real number in the world of networking.

The balance of the broadcast ID column can be filled in using the same process that was used in the subnetwork ID column. Simply add 32 to the preceding broadcast ID of the subnet. Another option is to start at the bottom of this column and work up to the top by subtracting one from the preceding subnetwork ID.
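The chart described above can also be generated with a short loop. The sketch below assumes the 192.168.10.0 network with three borrowed bits, giving eight subnets of 32 addresses each.

step = 32                                 # addresses per subnet with three borrowed bits
for subnet in range(8):                   # subnets are numbered starting at zero
    subnet_id = subnet * step
    broadcast_id = subnet_id + step - 1   # all host bits set to one
    print(f"subnet {subnet}: ID 192.168.10.{subnet_id}  broadcast 192.168.10.{broadcast_id}")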

The next page will discuss subnetting for Class A, B, and C networks.

Establishing the subnet mask address

Establishing the subnet mask address
10.3.3 This page provides detailed information about subnet masks and how they are established on a network.


Selecting the number of bits to use in the subnet process will depend on the maximum number of hosts required per subnet. An understanding of basic binary math and the position value of the bits in each octet is necessary when calculating the number of subnetworks and hosts created when bits were borrowed.

The last two bits in the last octet, regardless of the IP address class, may never be assigned to the subnetwork. These bits are referred to as the last two significant bits. Use of all the available bits to create subnets, except these last two, will result in subnets with only two usable hosts. This is a practical address conservation method for addressing serial router links. However, for a working LAN this would result in prohibitive equipment costs.

The subnet mask gives the router the information required to determine in which network and subnet a particular host resides. The subnet mask is created by using binary ones in the network bit positions. The subnet bits are determined by adding the position value of the bits that were borrowed. If three bits were borrowed, the mask for a Class C address would be 255.255.255.224. This mask may also be represented, in the slash format, as /27. The number following the slash is the total number of bits that were used for the network and subnetwork portion.

To determine the number of bits to be used, the network designer needs to calculate how many hosts the largest subnetwork requires and the number of subnetworks needed. As an example, the network requires 30 hosts and five subnetworks. A shortcut to determine how many bits to reassign is to use the subnetting chart. By consulting the row titled "Usable Hosts", the chart indicates that for 30 usable hosts three bits are required. The chart also shows that this creates six usable subnetworks, which will satisfy the requirements of this scheme. The difference between usable hosts and total hosts is a result of using the first available address as the ID and the last available address as the broadcast for each subnetwork. Borrowing the appropriate number of bits to accommodate the required subnetworks and hosts per subnetwork can be a balancing act and may result in unused host addresses in multiple subnetworks. The ability to use these addresses is not provided with classful routing. However, classless routing, which will be covered later in the course, can recover many of these lost addresses.

The method that was used to create the subnet chart can be used to solve all subnetting problems. This method uses the following formula:

Number of usable subnets = two to the power of the assigned subnet bits or borrowed bits, minus two. The minus two accounts for the two reserved subnets: the all-zeros subnet and the all-ones subnet.

(2^borrowed bits) – 2 = usable subnets

(2^3) – 2 = 6

Number of usable hosts = two to the power of the bits remaining, minus two (reserved addresses for subnet id and subnet broadcast).

(2^remaining host bits) – 2 = usable hosts

(2^5) – 2 = 30
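The two formulas can be checked with a couple of lines of Python, using the three borrowed bits from the example:

borrowed_bits = 3
remaining_host_bits = 8 - borrowed_bits        # a Class C address starts with 8 host bits

usable_subnets = 2 ** borrowed_bits - 2        # (2^3) - 2 = 6
usable_hosts = 2 ** remaining_host_bits - 2    # (2^5) - 2 = 30
print(usable_subnets, usable_hosts)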

The next page will explain how a subnet mask is applied.

Classes of network IP addresses

Classes of network IP addresses
10.3.1 This page will review the classes of IP addresses. The classes of IP addresses offer networks that range in size from 256 addresses (Class C) to 16.8 million addresses (Class A).


To efficiently manage a limited supply of IP addresses, all classes can be subdivided into smaller subnetworks. Figure provides an overview of the division between networks and hosts.

The next page will explain why subnetting is important.

Introduction to and reason for subnetting
10.3.2 This page will describe how subnetting works and why it is important.


To create the subnetwork structure, host bits must be reassigned as network bits. This is often referred to as ‘borrowing’ bits. However, a more accurate term would be ‘lending’ bits. The starting point for this process is always the leftmost host bit, the one closest to the last network octet.

Subnet addresses include the Class A, Class B, or Class C network portion, plus a subnet field and a host field. The subnet field and the host field are created from the original host portion of the major IP address. This is done by reassigning bits from the host portion to the original network portion of the address. The ability to divide the original host portion of the address into the new subnet and host fields provides addressing flexibility for the network administrator.

In addition to the need for manageability, subnetting enables the network administrator to provide broadcast containment and low-level security on the LAN. Subnetting provides some security since access to other subnets is only available through the services of a router. Further, access security may be provided through the use of access lists. These lists can permit or deny access to a subnet, based on a variety of criteria, thereby providing more security. Access lists will be studied later in the curriculum. Some owners of Class A and B networks have also discovered that subnetting creates a revenue source for the organization through the leasing or sale of previously unused IP addresses.

Subnetting is an internal function of a network. From the outside, a LAN is seen as a single network with no details of the internal network structure. This view of the network keeps the routing tables small and efficient. Given a local node address of 147.10.43.14 on subnet 147.10.43.0, the world outside the LAN sees only the advertised major network number of 147.10.0.0. The reason for this is that the local subnet address of 147.10.43.0 is only valid within the LAN where subnetting is applied.

The next page will discuss subnet masks.

Routing protocols

Routing Protocols
10.2.9 This page will describe different types of routing protocols.
RIP is a distance-vector routing protocol that uses hop count as its metric to determine the direction and distance to any link in the internetwork. If there are multiple paths to a destination, RIP selects the path with the least number of hops. However, because hop count is the only routing metric used by RIP, it does not always select the fastest path to a destination. Also, RIP cannot route a packet beyond 15 hops. RIP Version 1 (RIPv1) requires that all devices in the network use the same subnet mask, because it does not include subnet mask information in routing updates. This is also known as classful routing.

RIP Version 2 (RIPv2) provides prefix routing, and does send subnet mask information in routing updates. This is also known as classless routing. With classless routing protocols, different subnets within the same network can have different subnet masks. The use of different subnet masks within the same network is referred to as variable-length subnet masking (VLSM).

IGRP is a distance-vector routing protocol developed by Cisco. IGRP was developed specifically to address problems associated with routing in large networks that were beyond the range of protocols such as RIP. IGRP can select the fastest available path based on delay, bandwidth, load, and reliability. IGRP also has a much higher maximum hop count limit than RIP. IGRP uses only classful routing.

OSPF is a link-state routing protocol developed by the Internet Engineering Task Force (IETF) in 1988. OSPF was written to address the needs of large, scalable internetworks that RIP could not.

Intermediate System-to-Intermediate System (IS-IS) is a link-state routing protocol used for routed protocols other than IP. Integrated IS-IS is an expanded implementation of IS-IS that supports multiple routed protocols, including IP.

Like IGRP, EIGRP is a proprietary Cisco protocol. EIGRP is an advanced version of IGRP. Specifically, EIGRP provides superior operating efficiency, such as faster convergence and lower bandwidth overhead. EIGRP is an advanced distance-vector protocol that also uses some link-state protocol functions. Therefore, EIGRP is sometimes categorized as a hybrid routing protocol.

Border Gateway Protocol (BGP) is an example of an Exterior Gateway Protocol (EGP). BGP exchanges routing information between autonomous systems while guaranteeing loop-free path selection. BGP is the principal route advertising protocol used by major companies and ISPs on the Internet. BGP4 is the first version of BGP that supports classless interdomain routing (CIDR) and route aggregation. Unlike common Interior Gateway Protocols (IGPs), such as RIP, OSPF, and EIGRP, BGP does not use metrics like hop count, bandwidth, or delay. Instead, BGP makes routing decisions based on network policies, or rules that use various BGP path attributes.

The Lab Activity will help students understand the price of a small router.

This page concludes this lesson. The next lesson will focus on the mechanics of subnetting. The first page covers the different classes of IP addresses.

IGP and EGP / Link state and distance vector

IGP and EGP
10.2.7 This page will introduce two types of routing protocols.


An autonomous system is a network or set of networks under common administrative control, such as the cisco.com domain. An autonomous system consists of routers that present a consistent view of routing to the external world.

Two families of routing protocols are Interior Gateway Protocols (IGPs) and Exterior Gateway Protocols (EGPs).

IGPs route data within an autonomous system:

• RIP and RIPv2
• IGRP
• EIGRP
• OSPF
• Intermediate System-to-Intermediate System (IS-IS) protocol

EGPs route data between autonomous systems. An example of an EGP is BGP.

The next page will define link-state and distance vector protocols.

Link state and distance vector
10.2.8 Routing protocols can be classified as either IGPs or EGPs. Which type is used depends on whether a group of routers is under a single administration or not. IGPs can be further categorized as either distance-vector or link-state protocols. This page describes distance-vector and link-state routing and explains when each type of routing protocol is used.


The distance-vector routing approach determines the distance and direction, or vector, to any link in the internetwork. The distance may be the hop count to the link. Routers using distance-vector algorithms send all or part of their routing table entries to adjacent routers on a periodic basis. This happens even if there are no changes in the network. By receiving a routing update, a router can verify all the known routes and make changes to its routing table. This process is also known as “routing by rumor”. The understanding that a router has of the network is based on the adjacent router's perspective of the network topology.
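The sketch below is a simplified illustration of a distance-vector update: a router merges a neighbor's advertised routes into its own table, adds one hop for the link to that neighbor, and keeps whichever path has the lower hop count. The network numbers and router names are hypothetical.

def merge_update(my_table, neighbor_name, neighbor_routes):
    # my_table maps network -> (hop count, next hop); neighbor_routes maps network -> hop count.
    for network, hops in neighbor_routes.items():
        candidate = hops + 1                     # one extra hop to reach the neighbor
        if network not in my_table or candidate < my_table[network][0]:
            my_table[network] = (candidate, neighbor_name)

table = {"10.1.0.0": (0, "directly connected")}
merge_update(table, "RouterB", {"10.2.0.0": 0, "10.3.0.0": 1})
print(table)     # routes to 10.2.0.0 and 10.3.0.0 learned by rumor from RouterB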

Examples of distance-vector protocols include the following:

• Routing Information Protocol (RIP) – The most common IGP in the Internet, RIP uses hop count as its only routing metric.

• Interior Gateway Routing Protocol (IGRP) – This IGP was developed by Cisco to address issues associated with routing in large, heterogeneous networks.

• Enhanced IGRP (EIGRP) – This Cisco-proprietary IGP includes many of the features of a link-state routing protocol. Because of this, it has been called a balanced-hybrid protocol, but it is really an advanced distance-vector routing protocol.

Link-state routing protocols were designed to overcome the limitations of distance-vector routing protocols. Link-state routing protocols respond quickly to network changes, sending triggered updates only when a change has occurred. They also send periodic updates, known as link-state refreshes, at longer intervals, such as every 30 minutes.

When a route or link changes, the device that detected the change creates a link-state advertisement (LSA) concerning that link. The LSA is then transmitted to all neighboring devices. Each routing device takes a copy of the LSA, updates its link-state database, and forwards the LSA to all neighboring devices. This flooding of LSAs is required to ensure that all routing devices create databases that accurately reflect the network topology before updating their routing tables.

Link-state algorithms typically use their databases to create routing table entries that prefer the shortest path. Examples of link-state protocols include Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS).
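To make the shortest-path step concrete, the sketch below runs Dijkstra's algorithm over a small, hypothetical topology database. The router names and link costs are invented; protocols such as OSPF and IS-IS build the equivalent database from flooded LSAs and use interface costs as the edge weights.

```python
import heapq

# Shortest-path (Dijkstra) sketch over a link-state database.
# The topology dictionary is hypothetical: {router: {neighbor: link_cost}}.

def shortest_paths(topology, source):
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry
        for neighbor, cost in topology[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(pq, (nd, neighbor))
    return dist

topology = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R3": 5},
    "R3": {"R1": 1, "R2": 5},
}
print(shortest_paths(topology, "R1"))   # {'R1': 0, 'R3': 1, 'R2': 6} -- R1 reaches R2 through R3
```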

The Interactive Media Activity will identify the differences between link-state and distance vector routing protocols.

The next page will discuss routing protocols.

Routing tables / Routing algorithms and metrics

Routing tables
10.2.5 This page will describe the functions of a routing table.


Routers use routing protocols to build and maintain routing tables that contain route information. This aids in the process of path determination. Routing protocols fill routing tables with a variety of route information, which varies based on the routing protocol used. Routing tables contain the information necessary to forward data packets across connected networks. Because Layer 3 devices interconnect broadcast domains or LANs, a hierarchical addressing scheme is required for data transfer.

Routers keep track of the following information in their routing tables:

• Protocol type – Identifies the type of routing protocol that created each entry.

• Next-hop associations – Tell a router that a destination is either directly connected to the router or that it can be reached through another router, called the next hop, on the way to the destination. When a router receives a packet, it checks the destination address and attempts to match this address with a routing table entry.

• Routing metric – Different routing protocols use different routing metrics. Routing metrics are used to determine the desirability of a route. For example, RIP uses hop count as its only routing metric. IGRP uses bandwidth, load, delay, and reliability metrics to create a composite metric value.

• Outbound interfaces – The interface out of which the data must be sent to reach the final destination.

Routers communicate with one another to maintain their routing tables through the transmission of routing update messages. Some routing protocols transmit update messages periodically. Other protocols send them only when there are changes in the network topology. Some protocols transmit the entire routing table in each update message and some transmit only routes that have changed. Routers analyze the routing updates from directly-connected routers to build and maintain their routing tables.
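The four kinds of information listed above map naturally onto the fields of a simple record. The sketch below is a hypothetical representation for illustration, not the internal format of any particular router.

```python
from dataclasses import dataclass
from ipaddress import IPv4Network

# Hypothetical routing table entry holding the information described above.
@dataclass
class RouteEntry:
    network: IPv4Network   # destination network
    protocol: str          # routing protocol (or "connected") that created the entry
    next_hop: str          # next-hop router, or "directly connected"
    metric: int            # meaning depends on the protocol: hop count, composite value, ...
    interface: str         # outbound interface toward the destination

routing_table = [
    RouteEntry(IPv4Network("192.168.1.0/24"), "connected", "directly connected", 0, "FastEthernet0/0"),
    RouteEntry(IPv4Network("10.0.0.0/8"), "RIP", "192.168.1.2", 3, "FastEthernet0/0"),
]
```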

The next page will explain routing algorithms and metrics.

Routing algorithms and metrics
10.2.6 This page will define algorithms and metrics as they relate to routers.


An algorithm is a detailed solution to a problem. Different routing protocols use different algorithms to choose the port to which a packet should be sent. Routing algorithms depend on metrics to make these decisions.

Routing protocols often have one or more of the following design goals:

• Optimization – This is the capability of a routing algorithm to select the best route. The route will depend on the metrics and metric weights used in the calculation. For example, one algorithm may use both hop count and delay metrics, but may consider delay metrics as more important in the calculation.

• Simplicity and low overhead – The simpler the algorithm, the more efficiently it will be processed by the CPU and memory in the router. This is important so that the network can scale to large proportions, such as the Internet.

• Robustness and stability – A routing algorithm should perform correctly when confronted by unusual or unforeseen circumstances, such as hardware failures, high load conditions, and implementation errors.

• Flexibility – A routing algorithm should quickly adapt to a variety of network changes. These changes include router availability, router memory, changes in bandwidth, and network delay.

• Rapid convergence – Convergence is the process of agreement by all routers on available routes. When a network event causes changes in router availability, updates are needed to reestablish network connectivity. Routing algorithms that converge slowly can cause data to be undeliverable.

Routing algorithms use different metrics to determine the best route. Each routing algorithm interprets what is best in its own way. A routing algorithm generates a number called a metric value for each path through a network. Sophisticated routing algorithms base route selection on multiple metrics that are combined in a composite metric value.
Typically, smaller metric values indicate preferred paths.


Metrics can be based on a single characteristic of a path, or can be calculated from several characteristics. The following metrics are most commonly used by routing protocols (a composite-metric sketch follows the list):

• Bandwidth – Bandwidth is the data capacity of a link. Normally, a 10-Mbps Ethernet link is preferable to a 64-kbps leased line.

• Delay – Delay is the length of time required to move a packet along each link from a source to a destination. Delay depends on the bandwidth of intermediate links, the amount of data that can be temporarily stored at each router, network congestion, and physical distance.

• Load – Load is the amount of activity on a network resource such as a router or a link.

• Reliability – Reliability is usually a reference to the error rate of each network link.

• Hop count – Hop count is the number of routers that a packet must travel through before reaching its destination. Each router is equal to one hop. A hop count of four indicates that data would have to pass through four routers to reach its destination. If multiple paths are available to a destination, the path with the least number of hops is preferred.

• Ticks – The delay on a data link using IBM PC clock ticks. One tick is approximately 1/18 second.

• Cost – Cost is an arbitrary value, usually based on bandwidth, monetary expense, or other measurement, that is assigned by a network administrator.
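As an example of how several of these metrics can be folded into a single number, the sketch below follows the commonly documented IGRP-style composite metric with its default constants (K1 = K3 = 1, K2 = K4 = K5 = 0). Treat the exact scaling as illustrative rather than a definitive implementation.

```python
# IGRP-style composite metric sketch: bandwidth, delay, load, and reliability
# are combined into a single value; smaller values indicate preferred paths.

def composite_metric(min_bandwidth_kbps, total_delay_usec,
                     load=1, reliability=255,
                     k1=1, k2=0, k3=1, k4=0, k5=0):
    bw = 10_000_000 // min_bandwidth_kbps     # inverse of the slowest link on the path
    delay = total_delay_usec // 10            # delay expressed in tens of microseconds
    metric = k1 * bw + (k2 * bw) // (256 - load) + k3 * delay
    if k5 != 0:                               # reliability term is only used when K5 is set
        metric = metric * k5 // (reliability + k4)
    return metric

print(composite_metric(1544, 20_000))   # T1 path: smaller metric, preferred
print(composite_metric(64, 20_000))     # 64-kbps path: much larger metric
```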

The next page will discuss two types of routing protocols.

Thursday, March 18, 2010

Routed versus routing / Path determination

Routed versus routing
10.2.3 This page explains the differences between routing protocols and routed protocols.


Routed or routable protocols are used at the network layer to transfer data from one host to another across a router. Routed protocols transport data across a network. Routing protocols allow routers to choose the best path for data from a source to a destination.

Some functions of a routed protocol are as follows:

• Includes any network protocol suite that provides enough information in its network layer address to allow a router to forward it to the next device and ultimately to its destination

• Defines the format and use of the fields within a packet

The Internet Protocol (IP) and Novell Internetwork Packet Exchange (IPX) are examples of routed protocols. Other examples include DECnet, AppleTalk, Banyan VINES, and Xerox Network Systems (XNS).

Routers use routing protocols to exchange routing tables and share routing information. In other words, routing protocols enable routers to route routed protocols.

Some functions of a routing protocol are as follows:

• Provides processes used to share route information

• Allows routers to communicate with other routers to update and maintain the routing tables

Examples of routing protocols that support the IP routed protocol include RIP, IGRP, OSPF, BGP, and EIGRP.

Path determination
10.2.4 This page will explain how path determination occurs.


Path determination occurs at the network layer. A router uses path determination to compare a destination address to the available routes in its routing table and select the best path. The routers learn of these available routes through static routing or dynamic routing. Routes configured manually by the network administrator are static routes. Routes learned from other routers by means of a routing protocol are dynamic routes.

The router uses path determination to decide which port to send a packet out of to reach its destination. This process is also referred to as routing the packet. Each router that the packet encounters along the way is called a hop. The hop count is the distance traveled. Path determination can be compared to a person who drives from one location in a city to another. The driver has a map that shows which streets lead to the destination, just as a router has a routing table. The driver travels from one intersection to another just as a packet travels from one router to another in each hop. At any intersection, the driver can choose to turn left, turn right, or go straight ahead. This is similar to how a router chooses the outbound port through which a packet is sent.

The decisions of a driver are influenced by factors such as traffic, the speed limit, the number of lanes, tolls, and whether or not a road is frequently closed. Sometimes it is faster to take a longer route on a smaller, less crowded back street instead of a highway with a lot of traffic. Similarly, routers can make decisions based on the load, bandwidth, delay, cost, and reliability of a network link.

The following process is used to determine the path for every packet that is routed (a minimal lookup sketch follows the list):

• The destination IP address is obtained from the received packet.
• The router compares this address to the entries in its routing table.
• The mask of the first entry in the routing table is applied to the destination address.
• The masked destination and the routing table entry are compared.
• If there is a match, the packet is forwarded to the port that is associated with that table entry.
• If there is not a match, the next entry in the table is checked.
• If the packet does not match any entries in the table, the router checks to see if a default route has been set.
• If a default route has been set, the packet is forwarded to the associated port. A default route is a route that is configured by the network administrator as the route to use if there are no matches in the routing table.
• If there is no default route, the packet is discarded. A message is often sent back to the device that sent the data to indicate that the destination was unreachable.
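A minimal, hypothetical version of this lookup is sketched below using Python's ipaddress module. Real routers prefer the longest (most specific) matching prefix; to stay close to the steps above, this sketch simply walks the table in order and falls back to a default route.

```python
from ipaddress import ip_address, ip_network

# Hypothetical routing table: (destination network, outbound interface).
ROUTING_TABLE = [
    (ip_network("192.168.1.0/24"), "FastEthernet0/0"),
    (ip_network("10.0.0.0/8"),     "Serial0/0"),
    (ip_network("0.0.0.0/0"),      "Serial0/1"),   # default route, if one is configured
]

def forward(destination):
    dst = ip_address(destination)
    for network, out_interface in ROUTING_TABLE:
        if dst in network:          # the entry's mask is applied and compared
            return out_interface
    return None                     # no match and no default route: discard the packet

print(forward("10.1.2.3"))      # Serial0/0
print(forward("172.16.0.1"))    # Serial0/1, caught by the default route
```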

The next page will explain how routing protocols build and maintain routing tables.

Routing versus switching

Routing versus switching
10.2.2 This page will compare and contrast routing and switching. Routers and switches may seem to perform the same function. The primary difference is that switches operate at Layer 2 of the OSI model and routers operate at Layer 3. This distinction indicates that routers and switches use different information to send data from a source to a destination.


The relationship between switching and routing can be compared to local and long-distance telephone calls. When a telephone call is made to a number within the same area code, a local switch handles the call. The local switch can only keep track of its local numbers. The local switch cannot handle all the telephone numbers in the world. When the switch receives a request for a call outside of its area code, it switches the call to a higher-level switch that recognizes area codes. The higher-level switch then switches the call so that it eventually gets to the local switch for the area code dialed.

The router performs a function similar to that of the higher-level switch in the telephone example. The figure shows ARP tables for Layer 2 MAC addresses and routing tables for Layer 3 IP addresses. Each computer and router interface maintains an ARP table for Layer 2 communication. The ARP table is only effective for the broadcast domain to which it is connected. The router also maintains a routing table that allows it to route data outside of the broadcast domain. Each ARP table entry contains an IP-MAC address pair.

The Layer 2 switch builds its forwarding table using MAC addresses. When a host has data for a non-local IP address, it sends the frame to the closest router. This router is also known as its default gateway. The host uses the MAC address of the router as the destination MAC address.

A switch interconnects segments that belong to the same logical network or subnetwork. For non-local hosts, the switch forwards the frame to the router based on the destination MAC address. The router examines the Layer 3 destination address of the packet to make the forwarding decision. Host X knows the IP address of the router because the IP configuration of the host contains the IP address of the default gateway.
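The host-side decision can be sketched as a simple mask comparison: if the destination falls inside the host's own subnet, the frame is addressed to the destination directly; otherwise it is addressed to the default gateway. The addresses below are hypothetical.

```python
from ipaddress import ip_address, ip_interface

host = ip_interface("192.168.1.10/24")     # host X's IP address and mask
default_gateway = "192.168.1.1"            # router interface on the same subnet

def layer2_target(destination_ip):
    if ip_address(destination_ip) in host.network:
        return destination_ip              # local destination: address the frame to it directly
    return default_gateway                 # non-local: frame it to the default gateway's MAC

print(layer2_target("192.168.1.20"))   # 192.168.1.20
print(layer2_target("10.4.4.4"))       # 192.168.1.1
```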

Just as a switch keeps a table of known MAC addresses, the router keeps a table of IP addresses known as a routing table. MAC addresses are not logically organized. IP addresses are organized in a hierarchy. A switch can handle a limited number of unorganized MAC addresses since it only has to search its table for addresses within its segment. Routers require an organized address system that can group similar addresses together and treat them as a single network unit until the data reaches the destination segment.

If IP addresses were not organized, the Internet would not work. This could be compared to a library that contained millions of individual pages of printed material in a large pile. This material is useless because it is impossible to locate an individual document. If the pages are identified and organized into books and each book is listed in a book index, it will be a lot easier to locate and use the data.

Another difference between switched and routed networks is that switched networks do not block broadcasts. As a result, switches can be overwhelmed by broadcast storms. Routers block LAN broadcasts, so a broadcast storm only affects the broadcast domain from which it originated. Because routers block broadcasts, they also provide a higher level of security and bandwidth control than switches.

The next page will compare routing and routed protocols.

Routing overview

Routing overview
10.2.1 This page will discuss routing and the two main functions of a router.


Routing is an OSI Layer 3 function. Routing relies on a hierarchical organizational scheme that allows individual addresses to be grouped together. These individual addresses are treated as a single unit until the destination address is needed for final delivery of the data. Routing finds the most efficient path from one device to another. The primary device that performs the routing process is the router.

The following are the two key functions of a router:

• Routers must maintain routing tables and make sure other routers know of changes in the network topology. They use routing protocols to communicate network information with other routers.

• When packets arrive at an interface, the router must use the routing table to determine where to send them. The router switches the packets to the appropriate interface, adds the frame information for the interface, and then transmits the frame.

A router is a network layer device that uses one or more routing metrics to determine the optimal path along which network traffic should be forwarded. Routing metrics are values that are used to determine the advantage of one route over another. Routing protocols use various combinations of metrics to determine the best path for data.

Routers interconnect network segments or entire networks. Routers pass data frames between networks based on Layer 3 information. Routers make logical decisions about the best path for the delivery of data. Routers then direct packets to the appropriate output port to be encapsulated for transmission. Stages of the encapsulation and de-encapsulation process occur each time a packet transfers through a router. The router must de-encapsulate the Layer 2 data frame to access and examine the Layer 3 address. As shown in the figure, the complete process of sending data from one device to another involves encapsulation and de-encapsulation on all seven OSI layers. The encapsulation process breaks up the data stream into segments, adds the appropriate headers and trailers, and then transmits the data. The de-encapsulation process removes the headers and trailers and then recombines the data into a seamless stream.
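A toy sketch of that wrapping and unwrapping is shown below. The field names and MAC/IP values are hypothetical, and only three layers are modeled; the point is that the Layer 2 frame changes at the router while the Layer 3 packet is carried end to end.

```python
# Toy encapsulation/de-encapsulation sketch; all field names and values are hypothetical.

def encapsulate(data, src_ip, dst_ip, src_mac, dst_mac):
    segment = {"payload": data}                                          # Layer 4 segment
    packet = {"src_ip": src_ip, "dst_ip": dst_ip, "segment": segment}    # Layer 3 packet
    return {"src_mac": src_mac, "dst_mac": dst_mac, "packet": packet}    # Layer 2 frame

def reframe_at_router(frame, new_src_mac, new_dst_mac):
    packet = frame["packet"]       # de-encapsulate Layer 2 to examine the Layer 3 address
    # ...a routing table lookup on packet["dst_ip"] would select the outgoing interface...
    return {"src_mac": new_src_mac, "dst_mac": new_dst_mac, "packet": packet}

frame = encapsulate("hello", "192.168.1.10", "10.1.1.5", "AAAA.AAAA.AAAA", "BBBB.BBBB.BBBB")
next_frame = reframe_at_router(frame, "CCCC.CCCC.CCCC", "DDDD.DDDD.DDDD")
print(next_frame["packet"]["dst_ip"])   # 10.1.1.5 -- the Layer 3 address is unchanged
```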

This course focuses on the most common routable protocol, which is IP. Other examples of routable protocols include IPX/SPX and AppleTalk. These protocols provide Layer 3 support. Non-routable protocols do not provide Layer 3 support. The most common non-routable protocol is NetBEUI. NetBEUI is a small, fast, and efficient protocol that is limited to frame delivery within one segment.

The next page will compare routing and switching.

Anatomy of an IP packet

Anatomy of an IP packet
10.1.5 IP packets consist of the data from upper layers plus an IP header. This page will discuss the information contained in the IP header:


• Version – Specifies the format of the IP packet header. The 4-bit version field contains the number 4 if it is an IPv4 packet and 6 if it is an IPv6 packet. However, this field is not used to distinguish between IPv4 and IPv6 packets. The protocol type field present in the Layer 2 envelope is used for that.

• IP header length (HLEN) – Indicates the datagram header length in 32-bit words. This is the total length of all header information and includes the two variable-length header fields.
• Type of service (ToS) – 8 bits that specify the level of importance that has been assigned by a particular upper-layer protocol.
• Total length – 16 bits that specify the length of the entire packet in bytes. This includes the data and header. To get the length of the data payload, subtract the HLEN from the total length.
• Identification – 16 bits that identify the current datagram. This is the sequence number.
• Flags – A 3-bit field in which the two low-order bits control fragmentation. One bit specifies if the packet can be fragmented and the other indicates if the packet is the last fragment in a series of fragmented packets.
• Fragment offset – 13 bits that are used to help piece together datagram fragments. This field allows the previous field to end on a 16-bit boundary.
• Time to Live (TTL) – A field that specifies the number of hops a packet may travel. This number is decreased by one as the packet travels through a router. When the counter reaches zero the packet is discarded. This prevents packets from looping endlessly.
• Protocol – 8 bits that indicate which upper-layer protocol such as TCP or UDP receives incoming packets after the IP processes have been completed.
• Header checksum – 16 bits that help ensure IP header integrity.
• Source address – 32 bits that specify the IP address of the node from which the packet was sent.
• Destination address – 32 bits that specify the IP address of the node to which the data is sent.
• Options – Allows IP to support various options such as security. The length of this field varies.
• Padding – Extra zeros are added to this field to ensure that the IP header is always a multiple of 32 bits.
• Data – Contains upper-layer information and has a variable length of up to approximately 64 kilobytes, bounded by the 16-bit total length field.

While the IP source and destination addresses are important, the other header fields have made IP very flexible. The header fields list the source and destination address information of the packet and often indicate the length of the message data. The information needed to route the message is also contained in the IP header, which can become long and complex.
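The fixed 20-byte part of the header can be pulled apart with a few lines of Python, as a way of seeing the fields listed above in raw bytes. The sample packet bytes are hypothetical.

```python
import struct
from ipaddress import IPv4Address

# Parse the fixed 20-byte IPv4 header; the sample bytes below are hypothetical.
def parse_ipv4_header(raw):
    (ver_ihl, tos, total_length, identification, flags_frag,
     ttl, protocol, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,                       # 4 for IPv4
        "header_length_bytes": (ver_ihl & 0x0F) * 4,   # HLEN is counted in 32-bit words
        "tos": tos,
        "total_length": total_length,
        "identification": identification,
        "flags": flags_frag >> 13,
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl": ttl,
        "protocol": protocol,                          # e.g. 1 = ICMP, 6 = TCP, 17 = UDP
        "header_checksum": checksum,
        "source": str(IPv4Address(src)),
        "destination": str(IPv4Address(dst)),
    }

sample = bytes.fromhex("45000054abcd40004001f00ac0a8010a0a010105")
print(parse_ipv4_header(sample))   # version 4, TTL 64, 192.168.1.10 -> 10.1.1.5
```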

This page concludes this lesson. The next lesson will focus on IP routing protocols. The first page provides a routing overview.

Connectionless and connection-oriented delivery

Connectionless and connection-oriented delivery
10.1.4 This page will introduce two types of delivery systems: connectionless and connection-oriented.


These two services provide the actual end-to-end delivery of data in an internetwork.

Most network services use a connectionless delivery system. Different packets may take different paths to get through the network. The packets are reassembled after they arrive at the destination. In a connectionless system, the destination is not contacted before a packet is sent. A good comparison for a connectionless system is a postal system. The recipient is not contacted to see if they will accept the letter before it is sent. Also, the sender does not know if the letter arrived at the destination.

In connection-oriented systems, a connection is established between the sender and the recipient before any data is transferred. An example of a connection-oriented network is the telephone system. The caller places the call, a connection is established, and then communication occurs.

Connectionless network processes are often referred to as packet-switched processes. As packets pass from source to destination, they can take different paths and possibly arrive out of order. Devices make the path determination for each packet based on a variety of criteria. Some of the criteria, such as available bandwidth, may differ from packet to packet.

Connection-oriented network processes are often referred to as circuit-switched processes. A connection with the recipient is first established, and then data transfer begins. All packets travel sequentially across the same physical or virtual circuit.

The Internet is a gigantic, connectionless network in which the majority of packet deliveries are handled by IP. TCP adds Layer 4, connection-oriented reliability services to IP.
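The two delivery models map onto the familiar UDP and TCP socket calls, sketched below. The address 192.0.2.10 and the port numbers are hypothetical placeholders, so the TCP connect would only succeed against a real listener.

```python
import socket

# Connectionless delivery: no handshake, the datagram is simply sent toward the
# destination with no guarantee that it arrives.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"status ping", ("192.0.2.10", 5000))
udp.close()

# Connection-oriented delivery: a connection is established before data transfer.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("192.0.2.10", 5001))    # the TCP three-way handshake happens here
tcp.sendall(b"hello over an established connection")
tcp.close()
```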

The next page will discuss the IP header.

Packet propagation and switching within a router

Packet propagation and switching within a router
10.1.3 This page will explain the process that occurs as a packet moves through a network.


As a packet travels through an internetwork to its final destination, the Layer 2 frame headers and trailers are removed and replaced at every Layer 3 device. This is because Layer 2 data units, or frames, are for local addressing. Layer 3 data units, or packets, are for end-to-end addressing.

Layer 2 Ethernet frames are designed to operate within a broadcast domain with the MAC address that is burned into the physical device. Other Layer 2 frame types include PPP serial links and Frame Relay connections, which use different Layer 2 addressing schemes. Regardless of the type of Layer 2 addressing used, frames are designed to operate within a Layer 2 broadcast domain. When the data is sent to a Layer 3 device the Layer 2 information changes.

As a frame is received at a router interface, the destination MAC address is extracted. The address is checked to see if the frame is directly addressed to the router interface, or if it is a broadcast. In either situation, the frame is accepted. Otherwise, the frame is discarded since it is destined for another device on the collision domain.

The CRC information is extracted from the frame trailer of an accepted frame. The CRC is calculated to verify that the frame data is without error.

If the check fails, the frame is discarded. If the check is valid, the frame header and trailer are removed and the packet is passed up to Layer 3. The packet is then checked to see if it is actually destined for the router, or if it is to be routed to another device in the internetwork. If the destination IP address matches one of the router ports, the Layer 3 header is removed and the data is passed up to Layer 4. If the packet is to be routed, the destination IP address will be compared to the routing table. If a match is found or there is a default route, the packet will be sent to the interface specified in the matched routing table statement. When the packet is switched to the outgoing interface, a new CRC value is added as a frame trailer, and the proper frame header is added to the packet. The frame is then transmitted to the next broadcast domain on its trip to the final destination.
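The whole per-hop sequence can be condensed into a short, runnable sketch. The frame and packet layouts, the interface names, and the use of zlib.crc32 as a stand-in for the Ethernet frame check sequence are all hypothetical simplifications.

```python
from ipaddress import ip_address, ip_network
import zlib

ROUTER_MAC = "AAAA.AAAA.AAAA"
LOCAL_IPS = {"192.168.1.1", "10.0.0.1"}                      # the router's own addresses
ROUTES = [(ip_network("10.0.0.0/8"), "Serial0/0"),
          (ip_network("0.0.0.0/0"), "Serial0/1")]            # default route checked last

def fcs(packet):
    return zlib.crc32(repr(packet).encode())                 # stand-in for the frame CRC

def process_frame(frame):
    if frame["dst_mac"] not in (ROUTER_MAC, "FFFF.FFFF.FFFF"):
        return "discard: frame addressed to another device"
    packet = frame["packet"]                                  # strip Layer 2 header and trailer
    if frame["crc"] != fcs(packet):
        return "discard: CRC check failed"
    if packet["dst_ip"] in LOCAL_IPS:
        return "destined for the router itself: pass up to Layer 4"
    for network, out_interface in ROUTES:
        if ip_address(packet["dst_ip"]) in network:
            return f"re-encapsulate with a new header and CRC, transmit out {out_interface}"
    return "discard and send a destination unreachable message"

packet = {"dst_ip": "10.1.2.3", "data": "hello"}
frame = {"dst_mac": ROUTER_MAC, "packet": packet, "crc": fcs(packet)}
print(process_frame(frame))   # re-encapsulate ... transmit out Serial0/0
```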

The next page will describe two types of delivery services.

Routed Protocol / IP as a routed protocol

Routed Protocol
Routable and routed protocols
10.1.1 This page will define routed and routable protocols.


A protocol is a set of rules that determines how computers communicate with each other across networks. Computers exchange data messages to communicate with each other. To accept and act on these messages, computers must have sets of rules that determine how a message is interpreted. Examples include messages used to establish a connection to a remote machine, e-mail messages, and files transferred over a network.

A protocol describes the following:

• The required format of a message
• The way that computers must exchange messages for specific activities

A routed protocol allows the router to forward data between nodes on different networks. A routable protocol must provide the ability to assign a network number and a host number to each device. Some protocols, such as IPX, require only a network number. These protocols use the MAC address of the host for the host number. Other protocols, such as IP, require an address with a network portion and a host portion. These protocols also require a network mask to differentiate the two numbers. The network address is obtained by ANDing the address with the network mask.

A network mask is used so that groups of sequential IP addresses can be treated as a single unit. If this grouping were not allowed, each host would have to be mapped individually for routing. This would be impossible, because according to the Internet Software Consortium there are approximately 233,101,500 hosts on the Internet.
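The AND operation itself is a one-liner, shown below with a hypothetical address and mask; Python's ipaddress module performs the same grouping when given a prefix length.

```python
from ipaddress import IPv4Address, IPv4Network

address = IPv4Address("172.16.122.204")
mask = IPv4Address("255.255.255.0")

# The network number is the bitwise AND of the address and the mask.
network_number = IPv4Address(int(address) & int(mask))
print(network_number)                                   # 172.16.122.0

# Equivalent grouping expressed with a prefix length:
print(IPv4Network("172.16.122.204/24", strict=False))   # 172.16.122.0/24
```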

The next page will discuss IP.
IP as a routed protocol
10.1.2 This page describes the features and functions of IP.


IP is the most widely used implementation of a hierarchical network-addressing scheme. IP is a connectionless, unreliable, best-effort delivery protocol. The term connectionless means that no dedicated circuit connection is established prior to transmission. IP determines the most efficient route for data based on the routing protocol. The terms unreliable and best-effort do not imply that the system is unreliable and does not work well. They indicate that IP does not verify that data sent on the network reaches its destination. If required, verification is handled by upper layer protocols.

As information flows down the layers of the OSI model, the data is processed at each layer. At the network layer, the data is encapsulated into packets. These packets are also known as datagrams. IP determines the contents of the IP packet header, which includes address information. However, it is not concerned with the actual data. IP accepts whatever data is passed down to it from the upper layers.

The next page examines how a packet travels through a network.