Thursday, October 17, 2013

Frame transmission modes

4.3.2 This page will describe the three main frame transmission modes: 
  • Cut-through - A switch that performs cut-through switching reads only the destination address of the frame as it arrives. The switch begins to forward the frame before the entire frame arrives. This mode decreases the latency of the transmission, but provides poor error detection. There are two forms of cut-through switching:
    1. Fast-forward switching - This type of switching offers the lowest level of latency by immediately forwarding a packet after receiving the destination address. Latency is measured from the first bit received to the first bit transmitted, or first in, first out (FIFO). This mode provides poor error detection.
    2. Fragment-free switching - This type of switching filters out collision fragments, which are the majority of packet errors, before forwarding begins. Collision fragments are usually smaller than 64 bytes. Fragment-free switching waits until the received packet has been determined not to be a collision fragment before forwarding it. Latency is also measured as FIFO.
  • Store-and-forward - The entire frame is received before any forwarding takes place. The destination and source addresses are read and filters are applied before the frame is forwarded. Latency occurs while the frame is being received. Latency is greater with larger frames because the entire frame must be received before the switching process begins. The switch has time available to check for errors, which allows more error detection.
  • Adaptive cut-through - This transmission mode is a hybrid mode that is a combination of cut-through and store-and-forward. In this mode, the switch uses cut-through until it detects a given number of errors. Once the error threshold is reached, the switch changes to store-and-forward mode.
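The adaptive mode described above can be sketched as a simple state change (a minimal illustration; the error threshold, class name, and frame-checking hook are assumptions, not a vendor implementation):

```python
# Hypothetical sketch of adaptive cut-through switching.
# The threshold value and frame-checking interface are assumptions.

ERROR_THRESHOLD = 10  # errors tolerated before falling back to store-and-forward

class AdaptiveSwitchPort:
    def __init__(self):
        self.mode = "cut-through"
        self.error_count = 0

    def frame_checked(self, had_error):
        # Called after each forwarded frame is verified (e.g. by its FCS).
        if had_error:
            self.error_count += 1
            if self.error_count >= ERROR_THRESHOLD:
                # Error threshold reached: switch to store-and-forward mode.
                self.mode = "store-and-forward"

port = AdaptiveSwitchPort()
for _ in range(ERROR_THRESHOLD):
    port.frame_checked(had_error=True)
print(port.mode)  # store-and-forward
```

A real switch would typically also age the error count over time so that a transient burst of errors does not leave the port in store-and-forward mode permanently.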
The next page will explain how switches learn about the network. 

Switch Operation / Functions of Ethernet switches

4.3.1 This page will describe the following two main functions of Ethernet switches:
  • Isolate traffic among segments
  • Achieve greater amount of bandwidth per user by creating smaller collision domains
The first function is to isolate traffic among segments. Segments are the smaller units into which a network is divided by Ethernet switches. Each segment uses the carrier sense multiple access/collision detection (CSMA/CD) access method to maintain data traffic flow among the users on that segment. Such segmentation allows multiple users to send information at the same time on different segments without slowing down the network.
The second function of an Ethernet switch is to ensure each user has more bandwidth by creating smaller collision domains. Ethernet and Fast Ethernet switches segment LANs by creating smaller collision domains. Each segment becomes a dedicated network link, like a highway lane functioning at up to 100 Mbps. Popular servers can then be placed on individual 100-Mbps links. In modern networks, a Fast Ethernet switch will often act as the backbone of a LAN, with Ethernet hubs, Ethernet switches, or Fast Ethernet hubs providing the desktop connections in workgroups.

This page will review the functions of an Ethernet switch.
A switch is a device that connects LAN segments using a table of MAC addresses to determine the segment on which a frame needs to be transmitted. Both switches and bridges operate at Layer 2 of the OSI model. 
Switches are sometimes called multiport bridges or switching hubs. Switches make decisions based on MAC addresses and are therefore Layer 2 devices. In contrast, hubs regenerate the Layer 1 signals out of all ports without making any decisions. Since a switch has the capacity to make path selection decisions, the LAN becomes much more efficient. Usually, in an Ethernet network the workstations are connected directly to the switch. Switches learn which hosts are connected to a port by reading the source MAC address in frames. The switch opens a virtual circuit between the source and destination nodes only. This confines communication to those two ports without affecting traffic on other ports. In contrast, a hub forwards data out all of its ports so that all hosts see the data and must process it, even if the data is not intended for them. High-performance LANs are usually fully switched: 
  • A switch concentrates connectivity, making data transmission more efficient. Frames are switched from incoming ports to outgoing ports. Each port or interface can provide the full bandwidth of the connection to the host.
  • On a typical Ethernet hub, all ports connect to a common backplane or physical connection within the hub, and all devices attached to the hub share the bandwidth of the network. If two stations establish a session that uses a significant level of bandwidth, the network performance of all other stations attached to the hub is degraded.
  • To reduce degradation, the switch treats each interface as an individual segment. When stations on different interfaces need to communicate, the switch forwards frames at wire speed from one interface to the other, to ensure that each session receives full bandwidth.
To efficiently switch frames between interfaces, the switch maintains an address table. When a frame enters the switch, it associates the MAC address of the sending station with the interface on which it was received.
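The learning step described above can be sketched as a simple table update (an illustrative sketch; a real switch holds this table in content-addressable memory and ages entries out, and the port names here are assumptions). The sketch also shows what happens when the destination is still unknown: the frame is flooded out every port except the one it arrived on.

```python
# Minimal sketch of MAC address learning and forwarding decisions.
# Port names and MAC addresses are illustrative assumptions.

mac_table = {}  # MAC address -> interface on which it was learned

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    # Learn: associate the sender's MAC with the receiving interface.
    mac_table[src_mac] = in_port
    # Forward: use the table if the destination is known; otherwise
    # flood out every port except the one the frame came in on.
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]
    return [p for p in all_ports if p != in_port]

ports = ["Fa0/1", "Fa0/2", "Fa0/3"]
# Unknown destination: frame from host A on Fa0/1 is flooded.
out = handle_frame("AA:AA", "BB:BB", "Fa0/1", ports)   # ["Fa0/2", "Fa0/3"]
# Host B replies; both addresses are now known, so the frame is switched.
out2 = handle_frame("BB:BB", "AA:AA", "Fa0/2", ports)  # ["Fa0/1"]
```

After the exchange, traffic between the two hosts is confined to their two ports, which is the virtual-circuit behavior described above.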
The main features of Ethernet switches are:
  • Isolate traffic among segments
  • Achieve greater amount of bandwidth per user by creating smaller collision domains
The first feature, isolate traffic among segments, provides for greater security for hosts on the network. Each segment uses the CSMA/CD access method to maintain data traffic flow among the users on that segment. Such segmentation allows multiple users to send information at the same time on the different segments without slowing down the network. 
By using the segments in the network fewer users and/or devices are sharing the same bandwidth when communicating with one another. Each segment has its own collision domain. Ethernet switches filter the traffic by redirecting the datagrams to the correct port or ports, which are based on Layer 2 MAC addresses.
The second feature is called microsegmentation. Microsegmentation allows the creation of dedicated network segments with one host per segment. Each host receives access to the full bandwidth and does not have to compete for available bandwidth with other hosts. Popular servers can then be placed on individual 100-Mbps links. In modern networks, a Fast Ethernet switch will often act as the backbone of the LAN, with Ethernet hubs, Ethernet switches, or Fast Ethernet hubs providing the desktop connections in workgroups. As demanding new applications such as desktop multimedia or video conferencing become more popular, certain individual desktop computers will have dedicated 100-Mbps links to the network. 
The next page will introduce three frame transmission modes. 

Two switching methods

4.2.10 This page will introduce store-and-forward and cut-through switching.
The following two switching modes are available to forward frames: 
  • Store-and-forward - The entire frame is received before any forwarding takes place. The destination and source addresses are read and filters are applied before the frame is forwarded. Latency occurs while the frame is being received. Latency is greater with larger frames because the entire frame must be received before the switching process begins. The switch is able to check the entire frame for errors, which allows more error detection.
  • Cut-through - The frame is forwarded through the switch before the entire frame is received. At a minimum the frame destination address must be read before the frame can be forwarded. This mode decreases the latency of the transmission, but also reduces error detection.
The following are two forms of cut-through switching: 
  • Fast-forward - Fast-forward switching offers the lowest level of latency. Fast-forward switching immediately forwards a packet after reading the destination address. Because fast-forward switching starts forwarding before the entire packet is received, packets may occasionally be relayed with errors. This occurs infrequently, and the destination network adapter discards the faulty packet upon receipt. In fast-forward mode, latency is measured from the first bit received to the first bit transmitted.
  • Fragment-free - Fragment-free switching filters out collision fragments before forwarding begins. Collision fragments are the majority of packet errors. In a properly functioning network, collision fragments must be smaller than 64 bytes. Any frame of 64 bytes or more is a valid packet and is usually received without error. Fragment-free switching waits until the packet is determined not to be a collision fragment before forwarding. In fragment-free mode, latency is also measured from the first bit received to the first bit transmitted.
The latency of each switching mode depends on how the switch forwards the frames. To accomplish faster frame forwarding, the switch reduces the time for error checking. However, reducing the error checking time can lead to a higher number of retransmissions.
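The 64-byte rule behind fragment-free switching can be expressed as a simple check (a deliberately simplified sketch; a real switch makes this decision on the fly as the first bits of the frame arrive):

```python
# Sketch of the fragment-free decision rule.
# 64 bytes is the minimum valid Ethernet frame size; anything shorter
# that stops arriving is assumed to be a collision fragment.
MIN_FRAME_SIZE = 64

def fragment_free_forward(frame):
    """Return True if the frame should be forwarded, False if discarded."""
    if len(frame) < MIN_FRAME_SIZE:
        return False  # collision fragment: filter it out
    return True       # at least 64 bytes: treat as a valid frame

print(fragment_free_forward(bytes(40)))   # False - collision fragment
print(fragment_free_forward(bytes(512)))  # True - valid frame
```

Note that the check only guards against collision fragments; a 64-byte-or-larger frame with a corrupted payload would still be forwarded, which is the error-detection trade-off described above.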
This page concludes this lesson. The next lesson will describe Ethernet switches. The first page will explain the main functions of switches.

Symmetric and asymmetric switching / Memory buffering

Symmetric and asymmetric switching 
4.2.8 This page will explain the difference between symmetric and asymmetric switching.
LAN switching may be classified as symmetric or asymmetric based on the way in which bandwidth is allocated to the switch ports. A symmetric switch provides switched connections between ports with the same bandwidth. An asymmetric LAN switch provides switched connections between ports of unlike bandwidth, such as a combination of 10-Mbps and 100-Mbps ports.
Asymmetric switching enables more bandwidth to be dedicated to the server switch port in order to prevent a bottleneck. This allows smoother traffic flows when multiple clients are communicating with a server at the same time. Memory buffering is required on an asymmetric switch so that frames can flow between ports of different data rates.
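A little arithmetic shows why buffering is needed between unlike port speeds (illustrative figures using the maximum standard Ethernet frame):

```python
# Why an asymmetric switch needs buffers: a frame arrives from a fast
# port faster than a slow port can transmit it.
FRAME_BITS = 1518 * 8  # maximum standard Ethernet frame, in bits

# Serialization time for one frame at each port speed, in microseconds.
t_100mbps = FRAME_BITS / 100e6 * 1e6  # ~121.4 us on the 100-Mbps server port
t_10mbps = FRAME_BITS / 10e6 * 1e6    # ~1214.4 us on a 10-Mbps client port

# The frame finishes arriving roughly ten times sooner than it can be
# drained, so it must sit in the memory buffer in the meantime.
print(round(t_100mbps, 1))
print(round(t_10mbps, 1))
```

Sustained traffic from the fast port toward the slow port will eventually fill any finite buffer, which is why buffering smooths bursts rather than eliminating the rate mismatch.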
The next page will discuss memory buffers.
Memory buffering 
4.2.9 This page will explain what a memory buffer is and how it is used.
An Ethernet switch may use a buffering technique to store and forward frames. Buffering may also be used when the destination port is busy. The area of memory where the switch stores the data is called the memory buffer. This memory buffer can use two methods for forwarding frames, port-based memory buffering and shared memory buffering. 
In port-based memory buffering frames are stored in queues that are linked to specific incoming ports. A frame is transmitted to the outgoing port only when all the frames ahead of it in the queue have been successfully transmitted. It is possible for a single frame to delay the transmission of all the frames in memory because of a busy destination port. This delay occurs even if the other frames could be transmitted to open destination ports.
Shared memory buffering deposits all frames into a common memory buffer which all the ports on the switch share. The amount of buffer memory required by a port is dynamically allocated. The frames in the buffer are linked dynamically to the destination port. This allows the packet to be received on one port and then transmitted on another port, without moving it to a different queue.
The switch keeps a map of frame to port links showing where a packet needs to be transmitted. The map link is cleared after the frame has been successfully transmitted. The memory buffer is shared. The number of frames stored in the buffer is restricted by the size of the entire memory buffer, and not limited to a single port buffer. This permits larger frames to be transmitted with fewer dropped frames. This is important to asymmetric switching, where frames are being exchanged between different rate ports.
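The difference between the two buffering methods can be illustrated with a short simulation (a simplified sketch; the queue contents, port names, and "busy port" condition are assumptions):

```python
from collections import deque

# Destination ports that cannot transmit right now.
busy_ports = {"Fa0/3"}

# Port-based buffering: one FIFO queue per incoming port. A frame
# destined for a busy port blocks every frame behind it in the queue.
def drain_port_queue(queue):
    sent = []
    while queue and queue[0][1] not in busy_ports:
        sent.append(queue.popleft())
    return sent

in_queue = deque([("f1", "Fa0/3"), ("f2", "Fa0/2"), ("f3", "Fa0/1")])
print(drain_port_queue(in_queue))  # [] - f1 blocks f2 and f3, even though
                                   # their destination ports are open

# Shared memory buffering: frames are linked to destination ports
# dynamically, so frames for open ports transmit regardless of order.
def drain_shared_buffer(buffer):
    sent = [f for f in buffer if f[1] not in busy_ports]
    buffer[:] = [f for f in buffer if f[1] in busy_ports]
    return sent

shared = [("f1", "Fa0/3"), ("f2", "Fa0/2"), ("f3", "Fa0/1")]
print(drain_shared_buffer(shared))  # [('f2', 'Fa0/2'), ('f3', 'Fa0/1')]
```

The first case is the head-of-line delay described above for port-based buffering; the second shows how a shared buffer lets frames for open ports proceed.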
The next page will describe two switching methods.

Wednesday, October 16, 2013

Ethernet switch latency / Layer 2 and Layer 3 switching

Ethernet switch latency
4.2.6 Switch latency is the period of time from when a frame enters a switch to when the frame exits the switch. Latency is directly related to the configured switching process and the volume of traffic. 
Latency is measured in fractions of a second. Network devices operate at incredibly high speeds so every additional nanosecond of latency adversely affects network performance.
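To get a feel for the scale, the serialization delay alone (the time just to clock a frame's bits onto the wire) can be computed directly (illustrative arithmetic, not vendor latency figures):

```python
# Serialization delay is only one component of switch latency,
# but it sets the scale the switching logic must compete with.
frame_bits = 64 * 8  # minimum-size Ethernet frame: 512 bits

for rate_bps in (10e6, 100e6, 1e9):  # 10 Mbps, 100 Mbps, 1 Gbps
    delay_us = frame_bits / rate_bps * 1e6
    print(f"{rate_bps / 1e6:.0f} Mbps: {delay_us:.2f} microseconds")
```

At gigabit speeds a minimum-size frame occupies the wire for well under a microsecond, which is why even nanoseconds of extra processing per frame become measurable.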
The next page will describe Layer 2 and Layer 3 switching.
Layer 2 and Layer 3 switching 
4.2.7 There are two methods of switching data frames, Layer 2 switching and Layer 3 switching. Routers and Layer 3 switches use Layer 3 switching to switch packets. Layer 2 switches and bridges use Layer 2 switching to forward frames.
The difference between Layer 2 and Layer 3 switching is the type of information inside the frame that is used to determine the correct output interface. Layer 2 switching is based on MAC address information. Layer 3 switching is based on network layer addresses, or IP addresses. The features and functionality of Layer 3 switches and routers have numerous similarities. The only major difference between the packet switching operation of a router and a Layer 3 switch is the physical implementation. In general-purpose routers, packet switching takes place in software, using microprocessor-based engines, whereas a Layer 3 switch performs packet forwarding using application specific integrated circuit (ASIC) hardware.
Layer 2 switching looks at a destination MAC address in the frame header and forwards the frame to the appropriate interface or port based on the MAC address in the switching table. The switching table is contained in Content Addressable Memory (CAM). If the Layer 2 switch does not know where to send the frame, it broadcasts the frame out all ports to the network. When a reply is returned, the switch records the new address in the CAM.
Layer 3 switching is a function of the network layer. The Layer 3 header information is examined and the packet is forwarded based on the IP address.
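The distinction can be sketched as which header field drives the table lookup (a schematic illustration; the table entries and the trivial /24 prefix match are assumptions, not how real CAM or routing lookups are implemented):

```python
# Layer 2: the destination MAC selects the output port from the CAM table.
cam_table = {"00:1B:44:11:3A:B7": "Fa0/2"}

# Layer 3: the destination IP's network selects the exit from a routing table.
routing_table = {"10.1.2.0/24": "Gi0/1"}

def l2_forward(dst_mac):
    # None means the address is unknown and the frame would be flooded.
    return cam_table.get(dst_mac)

def l3_forward(dst_ip):
    # Trivial stand-in for longest-prefix matching: assume a /24 network
    # and match on the first three octets.
    network = ".".join(dst_ip.split(".")[:3]) + ".0/24"
    return routing_table.get(network)

print(l2_forward("00:1B:44:11:3A:B7"))  # Fa0/2
print(l3_forward("10.1.2.99"))          # Gi0/1
```

The decision logic is the same shape in both cases; the difference is the header field used as the key, which is why a Layer 3 switch can do in ASIC hardware what a router does in software.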
Traffic flow in a switched or flat network is inherently different from the traffic flow in a routed or hierarchical network. Hierarchical networks offer more flexible traffic flow than flat networks.
The next page will discuss symmetric and asymmetric switching.

Basic operations of a switch

 4.2.5 Switching is a technology that decreases congestion in Ethernet, Token Ring, and Fiber Distributed Data Interface (FDDI) LANs. Switches use microsegmentation to reduce collision domains and network traffic. This reduction results in more efficient use of bandwidth and increased throughput. LAN switches often replace shared hubs and are designed to work with cable infrastructures already in place. 
The following are the two basic operations that switches perform:
  • Switch data frames - The process of receiving a frame on a switch interface, selecting the correct forwarding switch port(s), and forwarding the frame.
  • Maintain switch operations - Switches build and maintain forwarding tables. Switches also construct and maintain a loop-free topology across the LAN.
 The next page will discuss latency.

LAN segmentation with routers / LAN segmentation with switches

LAN segmentation with routers 
4.2.3 Routers provide network segmentation, which adds a latency factor of 20 to 30 percent over a switched network. The increased latency occurs because routers operate at the network layer and use the IP address to determine the best path to the destination node. The figure shows a Cisco router.
Bridges and switches provide segmentation within a single network or subnetwork. Routers provide connectivity between networks and subnetworks.
Routers do not forward broadcasts, while switches and bridges must forward broadcast frames.
The Interactive Media Activities will help students become more familiar with the Cisco 2621 and 3640 routers.
The next page will discuss switches.

LAN segmentation with switches 
4.2.4 Switches decrease bandwidth shortages and network bottlenecks, such as those between several workstations and a remote file server. The figure shows a Cisco switch. Switches segment LANs into microsegments, which decreases the size of collision domains. However, all hosts connected to a switch are still in the same broadcast domain.
In a completely switched Ethernet LAN, the source and destination nodes function as if they are the only nodes on the network. When these two nodes establish a link, or virtual circuit, they have access to the maximum available bandwidth. These links provide significantly more throughput than Ethernet LANs connected by bridges or hubs. This virtual network circuit is established within the switch and exists only when the nodes need to communicate.
The next page will explain the role of a switch in a LAN.