Friday, February 26, 2010

Collision domains / Segmentation

Collision domains
8.2.2 This page will define collision domains.


Collision domains are the connected physical network segments where collisions can occur. Collisions cause the network to be inefficient. Every time a collision happens on a network, all transmission stops for a period of time. The length of this period of time varies and is determined by a backoff algorithm for each network device.

The types of devices that interconnect the media segments define collision domains. These devices have been classified as OSI Layer 1, 2 or 3 devices. Layer 2 and Layer 3 devices break up collision domains. This process is also known as segmentation.

Layer 1 devices such as repeaters and hubs are mainly used to extend Ethernet cable segments, which allows more hosts to be added. However, every host that is added increases the amount of potential traffic on the network, and Layer 1 devices forward all data that is sent on the media. As more traffic is transmitted within a collision domain, collisions become more likely. This results in diminished network performance, which is even more pronounced if all the computers use large amounts of bandwidth. Layer 1 devices can also overextend the length of a LAN, which results in more collisions.

The four-repeater rule in Ethernet states that no more than four repeaters or repeating hubs can be between any two computers on the network. For a repeated 10BASE-T network to function properly, the round-trip delay calculation must be within certain limits. This ensures that all the workstations will be able to hear all the collisions on the network. Repeater latency, propagation delay, and NIC latency all contribute to the four-repeater rule. If the rule is violated, the maximum delay limit may be exceeded and late collisions can occur. A late collision is one that happens after the first 64 bytes of the frame have been transmitted. The chipsets in NICs are not required to retransmit automatically when a late collision occurs. These late collision frames add delay that is referred to as consumption delay. As consumption delay and latency increase, network performance decreases.
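
The four-repeater rule exists because the worst-case round-trip delay must fit inside the 512-bit-time slot time of 10-Mbps Ethernet; otherwise a station can finish transmitting before a collision reaches it. A minimal sketch of such a delay-budget check, using made-up per-component delay values rather than the figures from the standard:

```python
# Round-trip delay budget check for a repeated 10BASE-T path.
# The per-component delay figures below are illustrative placeholders,
# not values taken from the IEEE 802.3 specification.

SLOT_TIME_BITS = 512  # 10 Mbps Ethernet slot time: 512 bit times

def round_trip_bit_times(segments, repeaters,
                         seg_delay_bits=25.6,   # assumed one-way delay per segment
                         rep_delay_bits=23.0,   # assumed one-way delay per repeater
                         nic_delay_bits=13.8):  # assumed one-way NIC latency
    """Estimate the worst-case round-trip delay in bit times."""
    one_way = (segments * seg_delay_bits
               + repeaters * rep_delay_bits
               + 2 * nic_delay_bits)            # one NIC at each end
    return 2 * one_way

# Five segments joined by four repeaters -- the maximum the rule allows:
delay = round_trip_bit_times(segments=5, repeaters=4)
print(f"{delay:.1f} bit times:",
      "OK" if delay <= SLOT_TIME_BITS else "late collisions possible")
```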

The 5-4-3-2-1 rule requires that the following guidelines should not be exceeded:

• Five segments of network media
• Four repeaters or hubs
• Three host segments of the network
• Two link segments with no hosts
• One large collision domain

The 5-4-3-2-1 rule also provides guidelines to keep round-trip delay time within acceptable limits.

The next page will discuss segmentation.

Segmentation
8.2.3 This page will explain how Layer 2 and 3 devices are used to segment a network.


The history of how Ethernet handles collisions and collision domains dates back to research at the University of Hawaii in 1970. In its attempts to develop a wireless communication system for the islands of Hawaii, university researchers developed a protocol called Aloha. The Ethernet protocol is actually based on the Aloha protocol.

One important skill for a networking professional is the ability to recognize collision domains. A collision domain is created when several computers are connected to a single shared-access medium that is not attached to other network devices. This situation limits the number of computers that can use the segment. Layer 1 devices extend but do not control collision domains.

Layer 2 devices segment or divide collision domains. They use the MAC address assigned to every Ethernet device to control frame propagation. Layer 2 devices are bridges and switches. They keep track of the MAC addresses and their segments. This allows these devices to control the flow of traffic at the Layer 2 level. This function makes networks more efficient. It allows data to be transmitted on different segments of the LAN at the same time without collisions. Bridges and switches divide collision domains into smaller parts. Each part becomes its own collision domain.

These smaller collision domains will have fewer hosts and less traffic than the original domain. The fewer hosts that exist in a collision domain, the more likely the media will be available. If the traffic between bridged segments is not too heavy, a bridged network works well. Otherwise, the Layer 2 device can slow down communication and become a bottleneck.

Layer 2 and 3 devices do not forward collisions. Layer 3 devices divide collision domains into smaller domains.
Layer 3 devices also perform other functions. These functions will be covered in the section on broadcast domains.

The next page will discuss broadcasts.

Spanning-Tree Protocol / Shared media environments

Spanning-Tree Protocol
8.1.6 This page will introduce STP.


When multiple switches are arranged in a simple hierarchical tree, switching loops are unlikely to occur. However, switched networks are often designed with redundant paths to provide reliability and fault tolerance. Redundant paths are desirable, but they can have undesirable side effects. Switching loops are one such side effect: they can occur by design or by accident, and they can lead to broadcast storms that will rapidly overwhelm a network. STP is a standards-based Layer 2 protocol that is used to avoid switching loops. Each switch in a LAN that uses STP sends messages called Bridge Protocol Data Units (BPDUs) out all its ports to let other switches know of its existence. This information is used to elect a root bridge for the network. The switches then use the spanning-tree algorithm (STA) to identify and shut down the redundant paths.
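
A minimal sketch of the root-bridge election described above: each switch advertises a bridge ID made up of a priority and its MAC address, and the lowest ID wins. The switch names, priorities, and MAC addresses below are made up for illustration:

```python
# Simplified root-bridge election: the switch whose bridge ID
# (priority first, then MAC address as a tiebreaker) is lowest
# becomes the root. Names, priorities, and MACs are made up.

switches = [
    {"name": "SW1", "priority": 32768, "mac": "00:0c:29:aa:10:01"},
    {"name": "SW2", "priority": 4096,  "mac": "00:0c:29:aa:10:02"},
    {"name": "SW3", "priority": 32768, "mac": "00:0c:29:aa:10:03"},
]

root = min(switches, key=lambda s: (s["priority"], s["mac"]))
print("Root bridge:", root["name"])  # SW2 wins on its lower priority
```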

Each port on a switch that uses STP exists in one of the following five states:

• Blocking
• Listening
• Learning
• Forwarding
• Disabled

A port moves through these five states as follows:

• From initialization to blocking
• From blocking to listening or to disabled
• From listening to learning or to disabled
• From learning to forwarding or to disabled
• From forwarding to disabled

STP is used to create a logical hierarchical tree with no loops. However, the blocked alternate paths remain available if an active path fails.
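
The five states and the legal moves between them form a small state machine. A minimal sketch that encodes the transitions listed above (real STP also runs timers, such as the forward delay, between states, which this sketch omits):

```python
# The five STP port states and the legal transitions listed above,
# modeled as a simple state machine.

ALLOWED = {
    "initialization": {"blocking"},
    "blocking":       {"listening", "disabled"},
    "listening":      {"learning", "disabled"},
    "learning":       {"forwarding", "disabled"},
    "forwarding":     {"disabled"},
    "disabled":       set(),
}

def transition(state, new_state):
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = "initialization"
for nxt in ("blocking", "listening", "learning", "forwarding"):
    state = transition(state, nxt)
print(state)  # forwarding
```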

This page concludes this lesson. The next lesson will discuss collision and broadcast domains. The first page covers shared media environments.

Shared media environments
8.2.1 This page explains Layer 1 media and topologies to help students understand collisions and collision domains.


Here are some examples of shared media and directly connected networks:

• Shared media environment – This occurs when multiple hosts have access to the same medium. For example, if several PCs are attached to the same physical wire or optical fiber, they all share the same media environment.

• Extended shared media environment – This is a special type of shared media environment in which networking devices can extend the environment so that it can accommodate multiple access or longer cable distances.

• Point-to-point network environment – This is widely used in dialup network connections and is most common for home users. In this environment, one device is connected to only one other device, so the medium is shared by just those two devices. An example is a PC that is connected to an Internet service provider through a modem and a phone line.

Collisions only occur in a shared environment. A highway system is an example of a shared environment in which collisions can occur because multiple vehicles use the same roads. As more vehicles enter the system, collisions become more likely. A shared data network is much like a highway. Rules exist to determine who has access to the network medium. However, sometimes the rules cannot handle the traffic load and collisions occur.

The next page will focus on collision domains.

Switch operation / Latency / Switch modes

Switch operation
8.1.3 This page describes the operation of a switch.


A switch is simply a bridge with many ports. When only one node is connected to a switch port, the collision domain on the shared media contains only two nodes. The two nodes in this small segment, or collision domain, consist of the switch port and the host connected to it. These small physical segments are called microsegments. Another capability emerges when only two nodes are connected. In a network that uses twisted-pair cabling, one pair is used to carry the transmitted signal from one node to the other node. A separate pair is used for the return or received signal. It is possible for signals to pass through both pairs simultaneously. The ability to communicate in both directions at once is known as full duplex. Most switches are capable of supporting full duplex, as are most NICs. In full duplex mode, there is no contention for the media. A collision domain no longer exists. In theory, the bandwidth is doubled when full duplex is used.

In addition to faster microprocessors and memory, two other technological advances made switches possible. Content-addressable memory (CAM) works in reverse compared to conventional memory: instead of an address being supplied to retrieve data, data is supplied and the memory returns the associated address. This allows a switch to find the port that is associated with a MAC address without search algorithms. An application-specific integrated circuit (ASIC) is an integrated circuit with functionality customized for a particular use rather than for general-purpose use. An ASIC allows some software operations to be done in hardware. Together, these technologies greatly reduced the delays caused by software processes and enabled a switch to keep up with the data demands of many microsegments and high bit rates.
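
The "backward" behavior of CAM can be pictured as a lookup keyed by the data itself. A minimal software sketch using a hash table, which gives the same search-free lookup that CAM provides in hardware (the MAC addresses and port numbers are made up):

```python
# CAM "works backward": present the data (a MAC address) and the
# memory returns the associated location (here, the switch port).

cam_table = {
    "00:1a:2b:3c:4d:01": 1,
    "00:1a:2b:3c:4d:02": 2,
    "00:1a:2b:3c:4d:03": 3,
}

def port_for(mac):
    return cam_table.get(mac)  # None means the switch must flood the frame

print(port_for("00:1a:2b:3c:4d:02"))  # 2
print(port_for("00:1a:2b:3c:4d:ff"))  # None -> frame would be flooded
```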

The next page will define latency.

Latency
8.1.4 This page will discuss some situations that cause latency.


Latency is the delay between the time a frame begins to leave the source device and when the first part of the frame reaches its destination. A variety of conditions can cause delays:

• Media delays may be caused by the finite speed that signals can travel through the physical media.
• Circuit delays may be caused by the electronics that process the signal along the path.
• Software delays may be caused by the decisions that software must make to implement switching and protocols.
• Delays may be caused by the content of the frame and the location of the frame switching decisions. For example, a switch cannot forward a frame to its destination port until the destination MAC address has been read.

The next page will discuss switch modes.

Switch modes
8.1.5 This page will introduce the three switch modes.


How a frame is switched to the destination port is a trade-off between latency and reliability. A switch can start to transfer the frame as soon as the destination MAC address is received. This is called cut-through packet switching and results in the lowest latency through the switch. However, no error checking is available. The switch can also receive the entire frame before it is sent to the destination port. This gives the switch software an opportunity to verify the Frame Check Sequence (FCS). If the frame is invalid, it is discarded at the switch. Since the entire frame is stored before it is forwarded, this is called store-and-forward packet switching. A compromise between cut-through and store-and-forward packet switching is fragment-free mode. Fragment-free packet switching reads the first 64 bytes, which include the frame header, and starts to send out the frame before the entire data field and checksum are read. This mode verifies the reliability of the addresses and LLC protocol information to ensure the data will be handled properly and arrive at the correct destination.
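
The trade-off among the three modes comes down to how many bytes of the frame must arrive before forwarding can begin. A minimal sketch (1518 bytes is the maximum Ethernet frame size; the byte thresholds follow the description above):

```python
# How much of a frame each switching mode must receive before it can
# begin forwarding. Simplified: in store-and-forward mode the switch
# buffers the whole frame and verifies the FCS before sending.

DEST_MAC_BYTES = 6        # cut-through: forward once the destination MAC is in
FRAGMENT_FREE_BYTES = 64  # fragment-free: past the window in which
                          # collision fragments (runts) can occur

def bytes_before_forwarding(mode, frame_length):
    if mode == "cut-through":
        return DEST_MAC_BYTES
    if mode == "fragment-free":
        return FRAGMENT_FREE_BYTES
    if mode == "store-and-forward":
        return frame_length
    raise ValueError(f"unknown mode: {mode}")

for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(mode, bytes_before_forwarding(mode, frame_length=1518))
```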

When cut-through packet switching is used, the source and destination ports must have the same bit rate to keep the frame intact. This is called symmetric switching. If the bit rates are not the same, the frame must be stored at one bit rate before it is sent out at the other bit rate. This is known as asymmetric switching. Store-and-forward mode must be used for asymmetric switching.

Asymmetric switching provides switched connections between ports with different bandwidths. Asymmetric switching is optimized for client/server traffic flows in which multiple clients communicate with a server at once. More bandwidth must be dedicated to the server port to prevent a bottleneck.

The next page will discuss the Spanning-Tree Protocol (STP).

Layer 2 bridging / Layer 2 switching

Layer 2 bridging 
8.1.1 This page will discuss the operation of Layer 2 bridges.
As more nodes are added to an Ethernet segment, use of the media increases. Ethernet is a shared medium, which means only one node can transmit data at a time. The addition of more nodes increases the demands on the available bandwidth and places additional loads on the media. This also increases the probability of collisions, which results in more retransmissions. A solution to the problem is to break the large segment into parts and separate it into isolated collision domains.

To accomplish this, a bridge keeps a table of MAC addresses and the associated ports. The bridge then forwards or discards frames based on the table entries. The following steps illustrate the operation of a bridge:

• The bridge has just been started so the bridge table is empty. The bridge just waits for traffic on the segment. When traffic is detected, it is processed by the bridge.
• Host A pings Host B. Since the data is transmitted on the entire collision domain segment, both the bridge and Host B process the frame.
• The bridge adds the source address of the frame to its bridge table. Since the address was in the source address field and the frame was received on Port 1, the frame must be associated with Port 1 in the table.
• The destination address of the frame is checked against the bridge table. Since the address is not in the table, even though it is on the same collision domain, the frame is forwarded to the other segment. The address of Host B has not been recorded yet.
• Host B processes the ping request and transmits a ping reply back to Host A. The data is transmitted over the whole collision domain. Both Host A and the bridge receive the frame and process it.
• The bridge adds the source address of the frame to its bridge table. Since the source address was not in the bridge table and was received on Port 1, the source address of the frame must be associated with Port 1 in the table.
• The destination address of the frame is checked against the bridge table to see if its entry is there. Since the address is in the table, the port assignment is checked. The address of Host A is associated with the port the frame was received on, so the frame is not forwarded.
• Host A pings Host C. Since the data is transmitted on the entire collision domain segment, both the bridge and Host B process the frame. Host B discards the frame since it was not the intended destination.
• The bridge adds the source address of the frame to its bridge table. Since the address is already entered into the bridge table the entry is just renewed.
• The destination address of the frame is checked against the bridge table. Since the address is not in the table, the frame is forwarded to the other segment. The address of Host C has not been recorded yet.
• Host C processes the ping request and transmits a ping reply back to Host A. The data is transmitted over the whole collision domain. Both Host D and the bridge receive the frame and process it. Host D discards the frame since it is not the intended destination.
• The bridge adds the source address of the frame to its bridge table. Since the address was in the source address field and the frame was received on Port 2, the frame must be associated with Port 2 in the table.
• The destination address of the frame is checked against the bridge table to see if its entry is present. The address is in the table but it is associated with Port 1, so the frame is forwarded to the other segment.
• When Host D transmits data, its MAC address will also be recorded in the bridge table. This is how the bridge controls traffic between two collision domains.

These are the steps that a bridge uses to forward and discard frames that are received on any of its ports.
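
The learning-and-forwarding behavior in the steps above condenses to a short procedure: learn the source address on the incoming port, then forward, filter, or flood based on the table entry for the destination. A minimal sketch that mirrors the walkthrough (Hosts A and B on Port 1, Hosts C and D on Port 2, as in the example):

```python
# Transparent-bridge logic condensed from the steps above: learn the
# source address, then forward, filter, or flood based on what the
# bridge table says about the destination address.

bridge_table = {}  # MAC address -> port it was learned on

def handle_frame(src_mac, dst_mac, in_port, ports=(1, 2)):
    bridge_table[src_mac] = in_port      # learn or refresh the source entry
    out_port = bridge_table.get(dst_mac)
    if out_port is None:                 # unknown destination: flood
        return [p for p in ports if p != in_port]
    if out_port == in_port:              # same segment: filter (do not forward)
        return []
    return [out_port]                    # known, on the other segment: forward

print(handle_frame("A", "B", in_port=1))  # [2] -- B unknown, frame is flooded
print(handle_frame("B", "A", in_port=1))  # []  -- A known on Port 1, filtered
print(handle_frame("A", "C", in_port=1))  # [2] -- C unknown, frame is flooded
print(handle_frame("C", "A", in_port=2))  # [1] -- A known on Port 1, forwarded
```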

The next page will describe Layer 2 switching.

Layer 2 switching
8.1.2 This page will discuss Layer 2 switches.


Generally, a bridge has only two ports and divides a collision domain into two parts. All decisions made by a bridge are based on MAC or Layer 2 addresses and do not affect the logical or Layer 3 addresses. A bridge will divide a collision domain but has no effect on a logical or broadcast domain. If a network does not have a device that works with Layer 3 addresses, such as a router, the entire network will share the same logical broadcast address space. A bridge will create more collision domains but will not add broadcast domains.

A switch is essentially a fast, multi-port bridge that can contain dozens of ports. Each port creates its own collision domain. In a network of 20 nodes, 20 collision domains exist if each node is plugged into its own switch port. If an uplink port is included, one switch creates 21 single-node collision domains. A switch dynamically builds and maintains a content-addressable memory (CAM) table, which holds all of the necessary MAC information for each port.

The next page will explain how a switch operates.

Module 8: Ethernet Switching Overview

Ethernet Switching Overview
Shared Ethernet works extremely well under ideal conditions. If the number of devices that try to access the network is low, the number of collisions stays well within acceptable limits. However, when the number of users on the network increases, the number of collisions can significantly reduce performance. Bridges were developed to help correct performance problems that arose from increased collisions. Switches evolved from bridges to become the main technology in modern Ethernet LANs.


Collisions and broadcasts are expected events in modern networks. They are engineered into the design of Ethernet and higher layer technologies. However, when collisions and broadcasts occur in numbers that are above the optimum, network performance suffers. Collision domains and broadcast domains should be designed to limit the negative effects of collisions and broadcasts. This module explores the effects of collisions and broadcasts on network traffic and then describes how bridges and routers are used to segment networks for improved performance.

This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.

Students who complete this module should be able to perform the following tasks:

• Define bridging and switching
• Define and describe the content-addressable memory (CAM) table
• Define latency
• Describe store-and-forward and cut-through packet switching modes
• Explain Spanning-Tree Protocol (STP)
• Define collisions, broadcasts, collision domains, and broadcast domains
• Identify the Layers 1, 2, and 3 devices used to create collision domains and broadcast domains
• Discuss data flow and problems with broadcasts
• Explain network segmentation and list the devices used to create segments

Summary of Module 7

Summary
This page summarizes the topics discussed in this module.


Ethernet is a technology that has increased in speed one thousand times, from 10 Mbps to 10,000 Mbps, in less than a decade. All forms of Ethernet share a similar frame structure and this leads to excellent interoperability. Most Ethernet copper connections are now switched full duplex, and the fastest copper-based Ethernet is 1000BASE-T, or Gigabit Ethernet. 10 Gigabit Ethernet and faster are exclusively optical fiber-based technologies.

10BASE5, 10BASE2, and 10BASE-T Ethernet are considered Legacy Ethernet. The four common features of Legacy Ethernet are timing parameters, frame format, transmission process, and a basic design rule.

Legacy Ethernet encodes data on an electrical signal. The form of encoding used in 10 Mbps systems is called Manchester encoding. Manchester encoding uses a change in voltage to represent the binary numbers zero and one. An increase or decrease in voltage during a timed period, called the bit period, determines the binary value of the bit.
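
A minimal sketch of Manchester encoding, using 0 and 1 to stand for the low and high voltage levels within each bit period (the transition convention follows IEEE 802.3):

```python
# Manchester encoding sketch: each bit period is split into two halves
# with a guaranteed voltage transition in the middle. Convention here,
# following IEEE 802.3: a low-to-high transition encodes binary 1,
# a high-to-low transition encodes binary 0.

def manchester_encode(bits):
    signal = []
    for bit in bits:
        signal += [0, 1] if bit else [1, 0]  # (first half, second half)
    return signal

print(manchester_encode([1, 0, 1, 1]))  # [0, 1, 1, 0, 0, 1, 0, 1]
```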

In addition to a standard bit period, Ethernet standards set limits for slot time and interframe spacing. Different types of media can affect transmission timing and timing standards ensure interoperability. 10 Mbps Ethernet operates within the timing limits offered by a series of no more than five segments separated by no more than four repeaters.

A single thick coaxial cable was the first medium used for Ethernet. 10BASE2, which uses a thinner coaxial cable, was introduced in 1985. 10BASE-T, which uses twisted-pair copper wire, was introduced in 1990. Because it used multiple wire pairs, 10BASE-T offered the option of full-duplex signaling. 10BASE-T carries 10 Mbps of traffic in half-duplex mode and 20 Mbps in full-duplex mode.

10BASE-T links can have unrepeated distances of up to 100 m. Beyond that, network devices such as repeaters, hubs, bridges, and switches are used to extend the scope of the LAN. With the advent of switches, the four-repeater rule is not so relevant. You can extend the LAN indefinitely by daisy-chaining switches. Each switch-to-switch connection, with a maximum length of 100 m, is essentially a point-to-point connection without the media contention or timing issues of using repeaters and hubs.

100-Mbps Ethernet, also known as Fast Ethernet, can be implemented using twisted-pair copper wire, as in 100BASE-TX, or fiber media, as in 100BASE-FX. 100 Mbps forms of Ethernet can transmit 200 Mbps in full duplex.

Because the higher frequency signals used in Fast Ethernet are more susceptible to noise, two separate encoding steps are used by 100-Mbps Ethernet to enhance signal integrity.

Gigabit Ethernet over copper wire is accomplished by the following:

• Category 5e UTP cable and careful improvements in electronics are used to boost 100 Mbps per wire pair to 125 Mbps per wire pair.
• All four wire pairs are used instead of just two. This allows 125 Mbps per wire pair, or 500 Mbps for the four wire pairs.
• Sophisticated electronics allow what are effectively permanent "collisions" on each wire pair: both ends transmit simultaneously, and the electronics separate the two signals so that each pair runs in full duplex, doubling the 500 Mbps to 1000 Mbps (see the sketch after this list).
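
The arithmetic in the list above can be checked directly; a minimal sketch following the text's own figures:

```python
# 1000BASE-T throughput arithmetic as presented in the list above.
per_pair_mbps = 125                     # 100 Mbps per pair boosted to 125 Mbps
pairs = 4                               # all four wire pairs carry data
one_direction = per_pair_mbps * pairs   # 500 Mbps across the four pairs
full_duplex = one_direction * 2         # full duplex doubles the total
print(one_direction, full_duplex)       # 500 1000
```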

On Gigabit Ethernet networks, bit signals occur in one tenth of the time of 100-Mbps networks and 1/100 of the time of 10-Mbps networks. With signals occurring in less time, the bits become more susceptible to noise. The issue becomes how fast the network adapter or interface can change voltage levels to signal bits in a way that can still be detected reliably one hundred meters away at the receiving NIC or interface. At this speed, encoding and decoding data becomes even more complex.

The fiber versions of Gigabit Ethernet, 1000BASE-SX and 1000BASE-LX, offer the following advantages: noise immunity, small size, and increased unrepeated distances and bandwidth. The IEEE 802.3 standard recommends Gigabit Ethernet over fiber as the preferred backbone technology.

10-Gigabit Ethernet architectures / Future of Ethernet

10-Gigabit Ethernet architectures
7.2.6 This page describes the 10-Gigabit Ethernet architectures.


As with the development of Gigabit Ethernet, the increase in speed comes with extra requirements. The shorter bit time duration because of increased speed requires special considerations. For 10 GbE transmissions, each data bit duration is 0.1 nanosecond. This means there would be 1,000 GbE data bits in the same bit time as one data bit in a 10-Mbps Ethernet data stream. Because of the short duration of the 10 GbE data bit, it is often difficult to separate a data bit from noise. 10 GbE data transmissions rely on exact bit timing to separate the data from the effects of noise on the physical layer. This is the purpose of synchronization.
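
The 0.1-nanosecond figure and the 1,000-to-1 ratio follow directly from the fact that bit time is the reciprocal of the bit rate; a quick check:

```python
# Bit time is the reciprocal of the bit rate: at 10 Gbps each bit lasts
# 0.1 ns, so 1,000 such bits fit into the 100 ns bit time of 10 Mbps.

def bit_time_ns(rate_bps):
    return 1e9 / rate_bps

print(bit_time_ns(10e9))                      # 0.1 (ns per bit at 10 Gbps)
print(bit_time_ns(10e6))                      # 100.0 (ns per bit at 10 Mbps)
print(bit_time_ns(10e6) / bit_time_ns(10e9))  # 1000.0
```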

In response to these issues of synchronization, bandwidth, and Signal-to-Noise Ratio, 10-Gigabit Ethernet uses two separate encoding steps. By using codes to represent the user data, transmission is made more efficient. The encoded data provides synchronization, efficient usage of bandwidth, and improved Signal-to-Noise Ratio characteristics.

Complex serial bit streams are used for all versions of 10GbE except for 10GBASE-LX4, which uses Wide Wavelength Division Multiplexing (WWDM) to multiplex four simultaneous bit streams as four wavelengths of light launched into the fiber at one time.

The figure represents the particular case of four laser sources that operate at slightly different wavelengths. Upon receipt from the medium, the optical signal stream is demultiplexed into four separate optical signal streams. The four optical signal streams are then converted back into four electronic bit streams as they travel, in approximately the reverse process, back up through the sublayers to the MAC layer.

Currently, most 10GbE products are in the form of modules, or line cards, for addition to high-end switches and routers. As the 10GbE technologies evolve, an increasing diversity of signaling components can be expected. As optical technologies evolve, improved transmitters and receivers will be incorporated into these products, taking further advantage of modularity. All 10GbE varieties use optical fiber media. Fiber types include 10 µm single-mode fiber and 50 µm and 62.5 µm multimode fibers. A range of fiber attenuation and dispersion characteristics is supported, but they limit operating distances.

Even though support is limited to fiber optic media, some of the maximum cable lengths are surprisingly short. No repeater is defined for 10-Gigabit Ethernet since half duplex is explicitly not supported.

As with 10 Mbps, 100 Mbps and 1000 Mbps versions, it is possible to modify some of the architecture rules slightly. Possible architecture adjustments are related to signal loss and distortion along the medium. Due to dispersion of the signal and other issues the light pulse becomes undecipherable beyond certain distances.

The next page will discuss the future of Ethernet.

Future of Ethernet
7.2.7 Ethernet has gone through an evolution from Legacy —> Fast —> Gigabit —> MultiGigabit technologies. While other LAN technologies are still in place (legacy installations), Ethernet dominates new LAN installations, so much so that some have referred to Ethernet as the LAN "dial tone". Ethernet is now the standard for horizontal, vertical, and inter-building connections. Newly developed versions of Ethernet are blurring the distinction between LANs, MANs, and WANs.


While 1-Gigabit Ethernet is now widely available and 10-Gigabit products are becoming more available, the IEEE and the 10-Gigabit Ethernet Alliance are working on 40, 100, or even 160 Gbps standards. The technologies that are adopted will depend on a number of factors, including the rate of maturation of the technologies and standards, the rate of adoption in the market, and cost.

Proposals for Ethernet arbitration schemes other than CSMA/CD have been made. However, the problem of collisions seen with the physical bus topologies of 10BASE5 and 10BASE2, and with 10BASE-T and 100BASE-TX hubs, is no longer common. The use of UTP and optical fiber with separate Tx and Rx paths, together with the decreasing cost of switches, has made single shared-media, half-duplex connections much less important.

The future of networking media is three-fold:

1. Copper (up to 1000 Mbps, perhaps more)
2. Wireless (approaching 100 Mbps, perhaps more)
3. Optical fiber (currently at 10,000 Mbps and soon to be more)

Copper and wireless media have certain physical and practical limitations on the highest frequency signals that can be transmitted. This is not a limiting factor for optical fiber in the foreseeable future. The bandwidth limitations on optical fiber are extremely large and are not yet being threatened. In fiber systems, it is the electronics technology (such as emitters and detectors) and fiber manufacturing processes that most limit the speed. Upcoming developments in Ethernet are likely to be heavily weighted toward laser light sources and single-mode optical fiber.

When Ethernet was slower, half duplex, subject to collisions, and governed by a "democratic" process for prioritization, it was not considered to have the Quality of Service (QoS) capabilities required to handle certain types of traffic, such as IP telephony and video multicast.

The full-duplex, high-speed Ethernet technologies that now dominate the market are proving sufficient to support even QoS-intensive applications. This makes the potential applications of Ethernet even wider. Ironically, the need for end-to-end QoS capability helped drive a push for ATM to the desktop and to the WAN in the mid-1990s, but now it is Ethernet, not ATM, that is approaching this goal.

This page concludes this lesson. The next page will summarize the main points from the module.