Transport layer
9.1.3 This page will explain how the transport layer provides transport services from the source host to the destination host.
The transport layer provides a logical connection between a source host and a destination host. Transport protocols segment data sent by upper-layer applications and reassemble it into the same data stream, or logical connection, between endpoints.
The Internet is often represented by a cloud. The transport layer sends data packets from a source to a destination through the cloud. The primary duty of the transport layer is to provide end-to-end control and reliability as data travels through this cloud. This is accomplished through the use of sliding windows, sequence numbers, and acknowledgments. The transport layer also defines end-to-end connectivity between host applications. Transport layer protocols include TCP and UDP.
The functions of TCP and UDP are as follows:
• Segment upper-layer application data
• Send segments from one end device to another
The functions of TCP are as follows:
• Establish end-to-end operations
• Provide flow control through the use of sliding windows
• Ensure reliability through the use of sequence numbers and acknowledgments
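To make the contrast concrete, here is a minimal Python sketch (not part of the curriculum) that sends data over TCP and over UDP using the standard socket module. The address 192.0.2.10 and the port numbers are placeholder assumptions, and the TCP portion assumes a server is listening at that address.

import socket

# TCP: connection-oriented. The three-way handshake happens in connect(), and the
# stack handles segmentation, sequencing, acknowledgments, and retransmission.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_sock.connect(("192.0.2.10", 5000))
tcp_sock.sendall(b"reliable, ordered byte stream")
tcp_sock.close()

# UDP: connectionless. Each sendto() is an independent datagram with no delivery
# guarantee, no ordering, and no flow control.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"best-effort datagram", ("192.0.2.10", 5001))
udp_sock.close()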
The Interactive Media Activity will help students become familiar with the transport layer protocols.
The next page will describe the Internet layer.
Internet layer
9.1.4 This page explains the functions of the TCP/IP Internet layer.
The purpose of the Internet layer is to select the best path through the network for packets to travel. The main protocol that functions at this layer is IP. Best path determination and packet switching occur at this layer.
The following protocols operate at the TCP/IP Internet layer:
• IP provides connectionless, best-effort delivery routing of packets. IP is not concerned with the content of the packets but looks for a path to the destination.
• Internet Control Message Protocol (ICMP) provides control and messaging capabilities.
• Address Resolution Protocol (ARP) determines the data link layer address, or MAC address, for known IP addresses.
• Reverse Address Resolution Protocol (RARP) determines the IP address for a known MAC address.
IP performs the following operations:
• Defines a packet and an addressing scheme
• Transfers data between the Internet layer and network access layer
• Routes packets to remote hosts
IP is sometimes referred to as an unreliable protocol. This does not mean that IP will not accurately deliver data across a network. IP is called unreliable because it does not perform error checking and correction. That function is handled by upper-layer protocols at the transport or application layers.
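To illustrate what it means for IP to define a packet and an addressing scheme, here is a minimal Python sketch (not part of the curriculum) that packs and unpacks the fixed 20-byte IPv4 header. The addresses 192.0.2.1 and 192.0.2.2 are placeholder values. Note that the header checksum covers only the header; IP does not check or correct the payload, which is why reliability is left to upper-layer protocols such as TCP.

import socket
import struct

def parse_ipv4_header(raw):
    # Field layout: version/IHL, DSCP/ECN, total length, identification,
    # flags/fragment offset, TTL, protocol, header checksum, source IP, destination IP
    ver_ihl, tos, total_len, ident, frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_length_bytes": (ver_ihl & 0x0F) * 4,
        "ttl": ttl,
        "protocol": proto,                    # 1 = ICMP, 6 = TCP, 17 = UDP
        "source": socket.inet_ntoa(src),
        "destination": socket.inet_ntoa(dst),
    }

# Build a sample header for 192.0.2.1 -> 192.0.2.2 carrying TCP (protocol 6).
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.0.2.1"), socket.inet_aton("192.0.2.2"))
print(parse_ipv4_header(sample))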
The Interactive Media Activity will help students become familiar with the protocols used in the Internet layer.
The next page will discuss the network access layer.
Network access layer
9.1.5 This page will discuss the TCP/IP network access layer, which is also called the host-to-network layer.
The network access layer allows an IP packet to make a physical link to the network media. It includes the LAN and WAN technology details and all the details contained in the OSI physical and data link layers.
Drivers for software applications, modem cards, and other devices operate at the network access layer. The network access layer defines the procedures used to interface with the network hardware and access the transmission medium. Modem protocol standards such as Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP) provide network access through a modem connection. Many protocols are required to determine the hardware, software, and transmission-medium specifications at this layer. This can lead to confusion for users. Most of the recognizable protocols operate at the transport and Internet layers of the TCP/IP model.
Network access layer protocols also map IP addresses to physical hardware addresses and encapsulate IP packets into frames. The network access layer defines the physical media connection based on the hardware type and network interface.
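As a rough illustration of that encapsulation step, here is a minimal Python sketch (not part of the curriculum) that wraps an IP packet in an Ethernet II frame header. The MAC addresses are placeholder values, and the frame check sequence normally appended by the NIC hardware is omitted.

import struct

ETHERTYPE_IPV4 = 0x0800

def ethernet_encapsulate(dst_mac, src_mac, ip_packet):
    # Ethernet II header: 6-byte destination MAC, 6-byte source MAC, 2-byte EtherType.
    header = struct.pack("!6s6sH", dst_mac, src_mac, ETHERTYPE_IPV4)
    return header + ip_packet

dst = bytes.fromhex("ffffffffffff")    # broadcast MAC address
src = bytes.fromhex("00005e005301")    # placeholder source MAC
ip_packet = bytes(20)                  # stand-in for a real IP packet
frame = ethernet_encapsulate(dst, src, ip_packet)
print(len(frame), "bytes: 14-byte Ethernet header plus the IP packet")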
Here is an example of a network access layer configuration that involves a Windows system set up with a third-party NIC. Some versions of Windows would detect the NIC automatically and install the proper drivers. In an older version of Windows, the user would have to specify the network card driver. The card manufacturer supplies these drivers on disks or CD-ROMs.
The Interactive Media Activity will help students become familiar with the network access layer protocols.
The next page explains the similarities and differences between the TCP/IP model and the OSI reference model.
History and future of TCP/IP / Application layer
History and future of TCP/IP
9.1.1 This page discusses the history and the future of TCP/IP.
The U.S. Department of Defense (DoD) created the TCP/IP reference model because it wanted a network that could survive any conditions. To illustrate further, imagine a world, crossed by multiple cable runs, wires, microwaves, optical fibers, and satellite links. Then imagine a need for data to be transmitted without regard for the condition of any particular node or network. The U.S. DoD required reliable data transmission to any destination on the network under any circumstances. The creation of the TCP/IP model helped to solve this difficult design problem. The TCP/IP model has since become the standard on which the Internet is based.
Think about the layers of the TCP/IP model in relation to the original intent of the Internet. This will help reduce confusion. The four layers of the TCP/IP model are the application layer, transport layer, Internet layer, and network access layer. Some of the layers in the TCP/IP model have the same names as layers in the OSI model. It is critical not to confuse the layer functions of the two models, because the layers include different functions in each model. The present version of TCP/IP was standardized in September of 1981.
The next page will discuss the application layer of TCP/IP.
Application layer
9.1.2 This page describes the functions of the TCP/IP application layer.
The application layer handles high-level protocols, representation, encoding, and dialog control. The TCP/IP protocol suite combines all application-related issues into one layer. It ensures that the data is properly packaged before it is passed on to the next layer. TCP/IP includes Internet and transport layer specifications such as IP and TCP as well as specifications for common applications. TCP/IP has protocols to support file transfer, e-mail, and remote login, in addition to the following:
• File Transfer Protocol (FTP) – FTP is a reliable, connection-oriented service that uses TCP to transfer files between systems that support FTP. It supports bi-directional binary file and ASCII file transfers.
• Trivial File Transfer Protocol (TFTP) – TFTP is a connectionless service that uses the User Datagram Protocol (UDP). TFTP is used on the router to transfer configuration files and Cisco IOS images, and to transfer files between systems that support TFTP. It is useful in some LANs because it operates faster than FTP in a stable environment.
• Network File System (NFS) – NFS is a distributed file system protocol suite developed by Sun Microsystems that allows file access to a remote storage device such as a hard disk across a network.
• Simple Mail Transfer Protocol (SMTP) – SMTP administers the transmission of e-mail over computer networks. It does not provide support for transmission of data other than plain text.
• Telnet – Telnet provides the capability to remotely access another computer. It enables a user to log into an Internet host and execute commands. A Telnet client is referred to as a local host. A Telnet server is referred to as a remote host.
• Simple Network Management Protocol (SNMP) – SNMP is a protocol that provides a way to monitor and control network devices. SNMP is also used to manage configurations, statistics, performance, and security.
• Domain Name System (DNS) – DNS is a system used on the Internet to translate domain names and publicly advertised network nodes into IP addresses (see the sketch after this list).
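As a small, hedged illustration of two of these services, the Python sketch below resolves a name through DNS and opens an FTP session over TCP. The names example.com and ftp.example.com are placeholders, and the FTP portion assumes a reachable server that allows anonymous login.

import socket
from ftplib import FTP

# DNS: translate a domain name into an IP address.
print(socket.gethostbyname("example.com"))

# FTP: reliable, connection-oriented file transfer over TCP.
ftp = FTP("ftp.example.com")   # assumes such a server exists
ftp.login()                    # anonymous login
print(ftp.nlst())              # list the files in the current directory
ftp.quit()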
The next page will discuss the transport layer.
Module 9: TCP/IP Protocol Suite and IP Addressing Overview
Overview
The Internet was developed to provide a communication network that could function in wartime. Although the Internet has evolved from the original plan, it is still based on the TCP/IP protocol suite. The design of TCP/IP is ideal for the decentralized and robust Internet. Many common protocols were designed based on the four-layer TCP/IP model.
It is useful to know both the TCP/IP and OSI network models. Each model uses its own structure to explain how a network works. However, there is much overlap between the two models. A system administrator should be familiar with both models to understand how a network functions.
Any device on the Internet that wants to communicate with other Internet devices must have a unique identifier. The identifier is known as the IP address because routers use a Layer 3 protocol, the Internet Protocol (IP), to find the best route to that device. The current version of IP is IPv4, which was designed before there was a large demand for addresses. Explosive growth of the Internet has threatened to deplete the supply of IP addresses. Subnets, Network Address Translation (NAT), and private addresses are used to extend the supply of IP addresses. IPv6 improves on IPv4 and provides a much larger address space, allowing administrators to reduce or eliminate the need for the workarounds used with IPv4.
In addition to the physical MAC address, each computer needs a unique IP address to be part of the Internet. This is also called the logical address. There are several ways to assign an IP address to a device. Some devices always have a static address. Others have a temporary address assigned to them each time they connect to the network. When a dynamically assigned IP address is needed, a device can obtain it several ways.
For efficient routing to occur between devices, issues such as duplicate IP addresses must be resolved.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.
Students who complete this module should be able to perform the following tasks:
• Explain why the Internet was developed and how TCP/IP fits the design of the Internet
• List the four layers of the TCP/IP model
• Describe the functions of each layer of the TCP/IP model
• Compare the OSI model and the TCP/IP model
• Describe the function and structure of IP addresses
• Understand why subnetting is necessary
• Explain the difference between public and private addressing
• Understand the function of reserved IP addresses
• Explain the use of static and dynamic addressing for a device
• Understand how dynamic addresses can be assigned with RARP, BootP, and DHCP
• Use ARP to obtain the MAC address to send a packet to another device
• Understand the issues related to addressing between networks
Summary of Module 8
Summary
This page summarizes the topics discussed in this module.
Ethernet is a shared media, baseband technology, which means only one node can transmit data at a time. Increasing the number of nodes on a single segment increases demand on the available bandwidth. This in turn increases the probability of collisions. A solution to the problem is to break a large network segment into parts and separate it into isolated collision domains. Bridges and switches are used to segment the network into multiple collision domains.
A bridge builds a bridge table from the source addresses of the frames it processes. An address is associated with the port the frame came in on. Eventually the bridge table contains enough address information to allow the bridge to forward a frame out a particular port based on the destination address. This is how the bridge controls traffic between two collision domains.
Switches learn in much the same way as bridges but provide a virtual connection directly between the source and destination nodes, rather than the source collision domain and destination collision domain. Each port creates its own collision domain. A switch dynamically builds and maintains a Content-Addressable Memory (CAM) table, holding all of the necessary MAC information for each port. CAM is memory that essentially works backwards compared to conventional memory. Entering data into the memory will return the associated address.
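The following Python sketch (not part of the curriculum) models that learning behavior in a simplified way: a dictionary stands in for the CAM table, and the forwarding decision floods unknown or broadcast destinations, forwards known unicasts, and filters frames whose source and destination share a port.

class LearningSwitch:
    def __init__(self, num_ports):
        self.mac_table = {}          # MAC address -> port number (stand-in for the CAM table)
        self.ports = range(num_ports)

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn the source address
        if dst_mac == "ff:ff:ff:ff:ff:ff" or dst_mac not in self.mac_table:
            # broadcast or unknown unicast: flood out every port except the inbound one
            return [p for p in self.ports if p != in_port]
        out_port = self.mac_table[dst_mac]
        # known unicast: forward to one port, or filter when source and destination
        # share the same port (same collision domain)
        return [] if out_port == in_port else [out_port]

sw = LearningSwitch(4)
print(sw.receive(0, "aa:aa:aa:aa:aa:aa", "ff:ff:ff:ff:ff:ff"))  # flood: [1, 2, 3]
print(sw.receive(1, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # forward: [0]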
Two devices connected through switch ports become the only two nodes in a small collision domain. These small physical segments are called microsegments. Microsegments connected using twisted pair cabling are capable of full-duplex communications. In full duplex mode, when separate wires are used for transmitting and receiving between two hosts, there is no contention for the media. Thus, a collision domain no longer exists.
There is a propagation delay for signals that travel along the transmission medium. Additionally, as signals are processed by network devices, further delay, or latency, is introduced.
How a frame is switched affects latency and reliability. A switch can start to transfer the frame as soon as the destination MAC address is received. Switching at this point is called cut-through switching and results in the lowest latency through the switch. However, cut-through switching provides no error checking. At the other extreme, the switch can receive the entire frame before sending it out the destination port. This is called store-and-forward switching. Fragment-free switching reads and checks the first sixty-four bytes of the frame before forwarding it to the destination port.
Switched networks are often designed with redundant paths to provide for reliability and fault tolerance. Switches use the Spanning-Tree Protocol (STP) to identify and shut down redundant paths through the network. The result is a logical hierarchical path through the network with no loops.
Using Layer 2 devices to break up a LAN into multiple collision domains increases available bandwidth for every host. But Layer 2 devices forward broadcasts, such as ARP requests. A Layer 3 device is required to control broadcasts and define broadcast domains.
Data flow through a routed IP network involves data moving across traffic management devices at Layers 1, 2, and 3 of the OSI model. Layer 1 is used for transmission across the physical media, Layer 2 for collision domain management, and Layer 3 for broadcast domain management.
What is a network segment?
8.2.7 This page explains what a network segment is.
As with many terms and acronyms, segment has multiple meanings. The dictionary definition of the term is as follows:
• A separate piece of something
• One of the parts into which an entity or quantity is divided or marked off by or as if by natural boundaries
In the context of data communication, the following definitions are used:
• Section of a network that is bounded by bridges, routers, or switches.
• In a LAN using a bus topology, a segment is a continuous electrical circuit that is often connected to other such segments with repeaters.
• Term used in the TCP specification to describe a single transport layer unit of information. The terms datagram, frame, message, and packet are also used to describe logical information groupings at various layers of the OSI reference model and in various technology circles.
To properly define the term segment, the context of the usage must be presented with the word. If segment is used in the context of TCP, it refers to a separate piece of the data. If segment is used in the context of physical networking media in a routed network, it refers to one of the parts or sections of the total network.
This page concludes this lesson. The next page will summarize the main points from the module.
Broadcast domains / Introduction to data flow
Broadcast domains
8.2.5 This page will explain the features of a broadcast domain.
A broadcast domain is a group of collision domains that are connected by Layer 2 devices. When a LAN is broken up into multiple collision domains, each host in the network has more opportunities to gain access to the media. This reduces the chance of collisions and increases the available bandwidth for every host. However, Layer 2 devices forward broadcasts, and excessive broadcasts can reduce the efficiency of the entire LAN. Broadcasts have to be controlled at Layer 3, since Layer 1 and Layer 2 devices cannot control them. A broadcast domain includes all of the collision domains that process the same broadcast frame. This includes all the nodes that are part of the network segment bounded by a Layer 3 device. Broadcast domains are controlled at Layer 3 because routers do not forward broadcasts. Routers actually work at Layers 1, 2, and 3. Like all Layer 1 devices, routers have a physical connection and transmit data onto the media. Routers also have a Layer 2 encapsulation on all interfaces and perform the same functions as other Layer 2 devices. Layer 3 allows routers to segment broadcast domains.
In order for a packet to be forwarded through a router, it must have already been processed by a Layer 2 device and the frame information stripped off. Layer 3 forwarding is based on the destination IP address rather than the MAC address. For a packet to be forwarded, it must contain a destination IP address that is outside the range of addresses assigned to the LAN, and the router must have a destination for the packet in its routing table.
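A minimal sketch of that Layer 3 decision, using Python's ipaddress module: the subnet 192.168.1.0/24 and the test addresses are placeholder assumptions, and a real router would also need a matching route (or default route) in its routing table before forwarding.

import ipaddress

local_lan = ipaddress.ip_network("192.168.1.0/24")

def needs_routing(destination_ip):
    # A packet is handed to the router only when its destination lies outside
    # the local LAN's address range.
    return ipaddress.ip_address(destination_ip) not in local_lan

print(needs_routing("192.168.1.25"))   # False: delivered locally at Layer 2
print(needs_routing("203.0.113.7"))    # True: forwarded through the router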
Introduction to data flow
8.2.6 This page discusses data flow.
Data flow in the context of collision and broadcast domains focuses on how data frames propagate through a network. It refers to the movement of data through Layer 1, 2, and 3 devices and how data must be encapsulated to effectively make that journey. Remember that data is encapsulated at the network layer with an IP source and destination address, and at the data link layer with a MAC source and destination address.
A good rule to follow is that a Layer 1 device always forwards the frame, while a Layer 2 device wants to forward the frame. In other words, a Layer 2 device will forward the frame unless something prevents it from doing so. A Layer 3 device will not forward the frame unless it has to. Using this rule will help identify how data flows through a network.
Layer 1 devices do no filtering, so everything that is received is passed on to the next segment. The frame is simply regenerated and retimed and thus returned to its original transmission quality. Any segments connected by Layer 1 devices are part of the same domain, both collision and broadcast.
Layer 2 devices filter data frames based on the destination MAC address. A frame is forwarded if it is going to an unknown destination outside the collision domain. The frame will also be forwarded if it is a broadcast, multicast, or a unicast going outside of the local collision domain. The only time that a frame is not forwarded is when the Layer 2 device finds that the sending host and the receiving host are in the same collision domain. A Layer 2 device, such as a bridge, creates multiple collision domains but maintains only one broadcast domain.
Layer 3 devices filter data packets based on IP destination address. The only way that a packet will be forwarded is if its destination IP address is outside of the broadcast domain and the router has an identified location to send the packet. A Layer 3 device creates multiple collision and broadcast domains.
Data flow through a routed, IP-based network involves data moving across traffic management devices at Layers 1, 2, and 3 of the OSI model. Layer 1 is used for transmission across the physical media, Layer 2 for collision domain management, and Layer 3 for broadcast domain management.
The next page defines a network segment.
Layer 2 broadcasts
8.2.4 This page will explain how Layer 2 broadcasts are used.
To communicate with all collision domains, protocols use broadcast and multicast frames at Layer 2 of the OSI model. When a node needs to communicate with all hosts on the network, it sends a broadcast frame with a destination MAC address of 0xFFFFFFFFFFFF. This is an address to which the NIC of every host must respond.
Layer 2 devices must flood all broadcast and multicast traffic. The accumulation of broadcast and multicast traffic from each device in the network is referred to as broadcast radiation. In some cases, the circulation of broadcast radiation can saturate the network so that there is no bandwidth left for application data. In this case, new network connections cannot be made and established connections may be dropped. This situation is called a broadcast storm. The probability of broadcast storms increases as the switched network grows.
A NIC must rely on the CPU to process each broadcast or multicast group it belongs to. Therefore, broadcast radiation affects the performance of hosts in the network. The figure shows the results of tests that Cisco conducted on the effect of broadcast radiation on the CPU performance of a Sun SPARCstation 2 with a standard built-in Ethernet card. The results indicate that an IP workstation can be effectively shut down by broadcasts that flood the network. Although extreme, broadcast peaks of thousands of broadcasts per second have been observed during broadcast storms. Tests in a controlled environment with a range of broadcasts and multicasts on the network show measurable system degradation with as few as 100 broadcasts or multicasts per second.
A host does not usually benefit if it processes a broadcast when it is not the intended destination. The host is not interested in the service that is advertised. High levels of broadcast radiation can noticeably degrade host performance. The three sources of broadcasts and multicasts in IP networks are workstations, routers, and multicast applications.
Workstations broadcast an Address Resolution Protocol (ARP) request every time they need to locate a MAC address that is not in the ARP table. Although the numbers in the figure might appear low, they represent an average, well-designed IP network. When broadcast and multicast traffic peak due to storm behavior, peak CPU loss can be much higher than average. Broadcast storms can be caused by a device that requests information from a network that has grown too large. So many responses are sent to the original request that the device cannot process them, or the first request triggers similar requests from other devices that effectively block normal traffic flow on the network.
As an example, when the command telnet mumble.com is issued, the host name is translated into an IP address through a Domain Name System (DNS) search, and an ARP request is then broadcast to locate the MAC address. Generally, IP workstations cache 10 to 100 addresses in their ARP tables for about 2 hours. The ARP rate for a typical workstation might be about 50 addresses every 2 hours, or 0.007 ARPs per second. Therefore, 2000 IP end stations will produce about 14 ARPs per second.
The routing protocols that are configured on a network can increase broadcast traffic significantly. Some administrators configure all workstations to run Routing Information Protocol (RIP) as a redundancy and reachability policy. Every 30 seconds, RIPv1 uses broadcasts to retransmit the entire RIP routing table to other RIP routers. If 2000 workstations were configured to run RIP and, on average, 50 packets were required to transmit the routing table, the workstations would generate 3333 broadcasts per second. Most network administrators configure RIP on only five to ten routers. For a routing table that has a size of 50 packets, 10 RIP routers would generate about 16 broadcasts per second.
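The arithmetic in the two paragraphs above can be reproduced directly; the sketch below (not part of the curriculum) simply restates those rate calculations in Python.

# ARP: about 50 ARP requests per workstation every 2 hours
arp_rate_per_station = 50 / (2 * 3600)      # roughly 0.007 ARPs per second
print(2000 * arp_rate_per_station)          # about 14 ARPs per second for 2000 stations

# RIPv1: the routing table (assumed to be 50 packets) is broadcast every 30 seconds
print(2000 * 50 / 30)                       # about 3333 broadcasts per second for 2000 workstations
print(10 * 50 / 30)                         # about 16 broadcasts per second for 10 routers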
IP multicast applications can adversely affect the performance of large, scaled, switched networks. Multicasting is an efficient way to send a stream of multimedia data to many users on a shared-media hub. However, it affects every user on a flat switched network. A packet video application could generate a 7-MB stream of multicast data that would be sent to every segment. This would result in severe congestion.
The next page will describe broadcast domains.