Thursday, October 17, 2013

Two switching methods

Two switching methods 
4.2.10 This page will introduce store-and-forward and cut-through switching.
The following two switching modes are available to forward frames: 
  • Store-and-forward - The entire frame is received before any forwarding takes place. The destination and source addresses are read and filters are applied before the frame is forwarded. Latency occurs while the frame is being received. Latency is greater with larger frames because the entire frame must be received before the switching process begins. The switch is able to check the entire frame for errors, which allows more error detection.
  • Cut-through - The frame is forwarded through the switch before the entire frame is received. At a minimum, the frame destination address must be read before the frame can be forwarded. This mode decreases the latency of the transmission but also reduces error detection.
The following are two forms of cut-through switching: 
  • Fast-forward - Fast-forward switching offers the lowest level of latency. It immediately forwards a packet after reading the destination address. Because fast-forward switching starts forwarding before the entire packet is received, packets may occasionally be relayed with errors. This occurs infrequently, and the destination network adapter discards the faulty packet upon receipt. In fast-forward mode, latency is measured from the first bit received to the first bit transmitted.
  • Fragment-free - Fragment-free switching filters out collision fragments before forwarding begins. Collision fragments account for the majority of packet errors. In a properly functioning network, collision fragments must be smaller than 64 bytes. Anything 64 bytes or larger is a valid packet and is usually received without error. Fragment-free switching waits until the packet is determined not to be a collision fragment before forwarding. In fragment-free mode, latency is also measured from the first bit received to the first bit transmitted.
The latency of each switching mode depends on how the switch forwards the frames. To accomplish faster frame forwarding, the switch reduces the time for error checking. However, reducing the error checking time can lead to a higher number of retransmissions.
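The trade-off above reduces to a single question: how much of the frame must arrive before forwarding may begin? A minimal Python sketch (the byte thresholds come from the text and the Ethernet frame layout; this is illustrative only, not vendor code):

```python
# How many bytes of a frame each switching mode must receive
# before it can begin forwarding.

def bytes_before_forwarding(mode: str, frame_len: int) -> int:
    """Return how many bytes must arrive before forwarding can start."""
    if mode == "fast-forward":
        return 6              # destination MAC is the first 6 bytes of the frame
    if mode == "fragment-free":
        return 64             # wait out the collision-fragment window
    if mode == "store-and-forward":
        return frame_len      # the entire frame, so errors can be checked
    raise ValueError(f"unknown mode: {mode}")

# For a maximum-size (1518-byte) Ethernet frame:
for mode in ("fast-forward", "fragment-free", "store-and-forward"):
    print(mode, bytes_before_forwarding(mode, 1518))
```

The gap between 6 bytes and 1518 bytes is exactly the latency-versus-error-detection trade the lesson describes.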
This page concludes this lesson. The next lesson will describe Ethernet Switches. The first page will explain the main functions of switches.

Symmetric and asymmetric switching / Memory buffering

Symmetric and asymmetric switching 
4.2.8 This page will explain the difference between symmetric and asymmetric switching.
LAN switching may be classified as symmetric or asymmetric based on the way in which bandwidth is allocated to the switch ports. A symmetric switch provides switched connections between ports with the same bandwidth. An asymmetric LAN switch provides switched connections between ports of unlike bandwidth, such as a combination of 10-Mbps and 100-Mbps ports.
Asymmetric switching enables more bandwidth to be dedicated to the server switch port in order to prevent a bottleneck. This allows smoother traffic flows where multiple clients are communicating with a server at the same time. Memory buffering is required on an asymmetric switch. Buffers hold frames so they can be passed intact between ports that operate at different data rates.
The next page will discuss memory buffers.
Memory buffering 
4.2.9 This page will explain what a memory buffer is and how it is used.
An Ethernet switch may use a buffering technique to store and forward frames. Buffering may also be used when the destination port is busy. The area of memory where the switch stores the data is called the memory buffer. This memory buffer can use two methods for forwarding frames, port-based memory buffering and shared memory buffering. 
In port-based memory buffering, frames are stored in queues that are linked to specific incoming ports. A frame is transmitted to the outgoing port only when all the frames ahead of it in the queue have been successfully transmitted. It is possible for a single frame to delay the transmission of all the frames in memory because of a busy destination port. This delay occurs even if the other frames could be transmitted to open destination ports.
Shared memory buffering deposits all frames into a common memory buffer which all the ports on the switch share. The amount of buffer memory required by a port is dynamically allocated. The frames in the buffer are linked dynamically to the destination port. This allows the packet to be received on one port and then transmitted on another port, without moving it to a different queue.
The switch keeps a map of frame to port links showing where a packet needs to be transmitted. The map link is cleared after the frame has been successfully transmitted. The memory buffer is shared. The number of frames stored in the buffer is restricted by the size of the entire memory buffer, and not limited to a single port buffer. This permits larger frames to be transmitted with fewer dropped frames. This is important to asymmetric switching, where frames are being exchanged between different rate ports.
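The head-of-line blocking that port-based buffering can suffer, and that shared buffering avoids, can be sketched with a toy model (the frame names, port numbers, and "busy port" below are assumptions for illustration, not a real switch design):

```python
from collections import deque

# Frames are (frame_id, destination_port); port 2 is busy and
# cannot transmit for the moment.
busy_ports = {2}
incoming = [("f1", 2), ("f2", 3), ("f3", 4)]

# Port-based buffering: one FIFO per input port. f1 (stuck waiting
# for busy port 2) delays f2 and f3 even though ports 3 and 4 are free.
queue = deque(incoming)
sent_port_based = []
while queue and queue[0][1] not in busy_ports:
    sent_port_based.append(queue.popleft()[0])

# Shared buffering: frames are linked to destination ports directly,
# so any frame whose output port is free can be transmitted.
sent_shared = [fid for fid, port in incoming if port not in busy_ports]

print(sent_port_based)  # [] -- head-of-line blocking
print(sent_shared)      # ['f2', 'f3']
```

The difference in the two result lists is the whole argument for shared memory buffering on asymmetric switches.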
The next page will describe two switching methods.

Wednesday, October 16, 2013

Ethernet switch latency / Layer 2 and Layer 3 switching

Ethernet switch latency
4.2.6 Switch latency is the period from the time a frame enters a switch to the time the frame exits the switch. Latency is directly related to the configured switching process and the volume of traffic.
Latency is measured in fractions of a second. Network devices operate at incredibly high speeds so every additional nanosecond of latency adversely affects network performance.
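As a worked example of why frame size matters to store-and-forward latency, the serialization delay alone (the time just to receive the frame in full, ignoring any processing time) is the frame length in bits divided by the link rate:

```python
# Serialization delay: the store-and-forward component of latency.
# Link rates and the 1518-byte maximum frame size are standard
# Ethernet figures; the function itself is a simple illustration.

def serialization_delay_us(frame_bytes: int, link_bps: float) -> float:
    """Time in microseconds to clock the whole frame in off the wire."""
    return frame_bytes * 8 / link_bps * 1e6

# Maximum-size 1518-byte frame:
print(serialization_delay_us(1518, 10e6))    # ~1214.4 us on 10 Mbps
print(serialization_delay_us(1518, 100e6))   # ~121.44 us on 100 Mbps
```

A cut-through switch avoids most of this delay by forwarding before the frame has fully arrived, which is the distinction the next lesson's switching methods formalize.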
The next page will describe Layer 2 and Layer 3 switching.
Layer 2 and Layer 3 switching 
4.2.7 There are two methods of switching data frames, Layer 2 switching and Layer 3 switching. Routers and Layer 3 switches use Layer 3 switching to switch packets. Layer 2 switches and bridges use Layer 2 switching to forward frames.
The difference between Layer 2 and Layer 3 switching is the type of information inside the frame that is used to determine the correct output interface. Layer 2 switching is based on MAC address information. Layer 3 switching is based on network layer addresses, or IP addresses. The features and functionality of Layer 3 switches and routers have numerous similarities. The only major difference between the packet switching operation of a router and a Layer 3 switch is the physical implementation. In general-purpose routers, packet switching takes place in software, using microprocessor-based engines, whereas a Layer 3 switch performs packet forwarding using application specific integrated circuit (ASIC) hardware.
Layer 2 switching looks at a destination MAC address in the frame header and forwards the frame to the appropriate interface or port based on the MAC address in the switching table. The switching table is contained in Content Addressable Memory (CAM). If the Layer 2 switch does not know where to send the frame, it broadcasts the frame out all ports to the network. When a reply is returned, the switch records the new address in the CAM.
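The learn-then-forward-or-flood behavior described above can be sketched in a few lines of Python (the CAM table is modeled here as a plain dictionary; a real switch implements the lookup in specialized hardware):

```python
# Minimal sketch of Layer 2 learning and forwarding.

cam = {}                      # MAC address -> port (the "CAM table")
ALL_PORTS = {1, 2, 3, 4}

def handle_frame(src_mac: str, dst_mac: str, in_port: int) -> set:
    """Learn the source MAC, then return the set of ports to forward out."""
    cam[src_mac] = in_port                    # learn/refresh the source
    if dst_mac in cam:
        return {cam[dst_mac]}                 # known destination: one port
    return ALL_PORTS - {in_port}              # unknown: flood all others

print(handle_frame("aa:aa", "bb:bb", 1))      # {2, 3, 4} -- flooded
print(handle_frame("bb:bb", "aa:aa", 2))      # {1} -- learned from frame 1
```

The second frame's reply is exactly the mechanism the text describes: once the reply arrives, the switch records the new address and stops flooding for it.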
Layer 3 switching is a function of the network layer. The Layer 3 header information is examined and the packet is forwarded based on the IP address.
Traffic flow in a switched or flat network is inherently different from the traffic flow in a routed or hierarchical network. Hierarchical networks offer more flexible traffic flow than flat networks.
The next page will discuss symmetric and asymmetric switching.

Basic operations of a switch

Basic operations of a switch
 4.2.5 Switching is a technology that decreases congestion in Ethernet, Token Ring, and Fiber Distributed Data Interface (FDDI) LANs. Switches use microsegmentation to reduce collision domains and network traffic. This reduction results in more efficient use of bandwidth and increased throughput. LAN switches often replace shared hubs and are designed to work with cable infrastructures already in place. 
The following are the two basic operations that switches perform:
  • Switch data frames - The process of receiving a frame on a switch interface, selecting the correct forwarding switch port(s), and forwarding the frame.
  • Maintain switch operations - Switches build and maintain forwarding tables. Switches also construct and maintain a loop-free topology across the LAN.
 The next page will discuss latency.

LAN segmentation with routers / LAN segmentation with switches

LAN segmentation with routers 
4.2.3 Routers provide network segmentation, which adds a latency factor of twenty to thirty percent over a switched network. The increased latency occurs because routers operate at the network layer and use the IP address to determine the best path to the destination node. The figure shows a Cisco router.
Bridges and switches provide segmentation within a single network or subnetwork. Routers provide connectivity between networks and subnetworks.
Routers do not forward broadcasts, while switches and bridges must forward broadcast frames.
The Interactive Media Activities will help students become more familiar with the Cisco 2621 and 3640 routers.
The next page will discuss switches.

LAN segmentation with switches 
4.2.4 Switches alleviate bandwidth shortages and network bottlenecks, such as those between several workstations and a remote file server. The figure shows a Cisco switch. Switches segment LANs into microsegments, which decreases the size of collision domains. However, all hosts connected to a switch are still in the same broadcast domain.
In a completely switched Ethernet LAN, the source and destination nodes function as if they are the only nodes on the network. When these two nodes establish a link, or virtual circuit, they have access to the maximum available bandwidth. These links provide significantly more throughput than Ethernet LANs connected by bridges or hubs. This virtual network circuit is established within the switch and exists only when the nodes need to communicate.
The next page will explain the role of a switch in a LAN.

Saturday, September 7, 2013

Introduction to LAN Switching / LAN segmentation / LAN segmentation with bridges

Introduction to LAN Switching
LAN segmentation 
4.2.1 This page will explain LAN segmentation.
A network can be divided into smaller units called segments. The figure shows an example of a segmented Ethernet network. The entire network has fifteen computers: six are servers and nine are workstations. Each segment uses the CSMA/CD access method and maintains traffic between users on the segment. Each segment is its own collision domain.
Segmentation allows network congestion to be significantly reduced within each segment. When data is transmitted within a segment, the devices within that segment share the total available bandwidth. Data that is passed between segments is transmitted over the backbone of the network through a bridge, router, or switch.
The next page will discuss bridges.

LAN segmentation with bridges
4.2.2 This page will describe the main functions of a bridge in a LAN.
Bridges are Layer 2 devices that forward data frames based on the MAC address. Bridges read the source MAC address of the data packets to discover the devices that are on each segment. The MAC addresses are then used to build a bridging table. This allows bridges to block packets that do not need to be forwarded from the local segment. 
Although bridges are transparent to other network devices, the latency on a network increases by ten to thirty percent when a bridge is used. The increased latency is because of the decisions that bridges make before the packets are forwarded. A bridge is considered a store-and-forward device. Bridges examine the destination address field and calculate the cyclic redundancy check (CRC) in the Frame Check Sequence field before the frame is forwarded. If the destination port is busy, bridges temporarily store the frame until that port is available.   
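The CRC check a bridge performs before forwarding can be sketched with Python's zlib.crc32, which uses the same CRC-32 polynomial as the Ethernet FCS (the on-wire bit ordering and complementing details of the real FCS are glossed over in this illustration):

```python
import zlib

# Sketch of a store-and-forward integrity check: append a CRC-32,
# then verify it on receipt before forwarding.

def append_fcs(frame: bytes) -> bytes:
    """Append a 4-byte CRC-32 check value to the frame."""
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(frame_with_fcs: bytes) -> bool:
    """Recompute the CRC-32 and compare it to the appended value."""
    frame, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(frame).to_bytes(4, "little") == fcs

good = append_fcs(b"payload bytes")
bad = good[:-1] + bytes([good[-1] ^ 0xFF])   # flip bits in the last byte

print(fcs_ok(good))   # True  -- the bridge forwards the frame
print(fcs_ok(bad))    # False -- the bridge discards the frame
```

This recompute-and-compare step is what forces a bridge (and a store-and-forward switch) to hold the entire frame before making a forwarding decision.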
The next page will discuss routers.

The benefits of using repeaters / Full-duplex transmitting

The benefits of using repeaters 
4.1.8 This page will explain how a repeater can be used to extend the distance of a LAN.
The distance that a LAN can cover is limited due to attenuation. Attenuation means that the signal weakens as it travels through the network. The resistance in the cable or medium through which the signal travels causes the loss of signal strength. An Ethernet repeater is a physical layer device on the network that boosts or regenerates the signal on an Ethernet LAN. When a repeater is used to extend the distance of a LAN, a single network can cover a greater distance and more users can share that same network. However, the use of repeaters and hubs adds to problems associated with broadcasts and collisions. It also has a negative effect on the overall performance of the shared media LAN. 
The Interactive Media Activity will teach students about the Cisco 1503 Micro Hub.
The next page will discuss full-duplex technology.

Full-duplex transmitting 
4.1.9 This page will explain how full-duplex Ethernet allows the transmission of a packet and the reception of a different packet at the same time. This simultaneous transmission and reception requires the use of two pairs of wires in the cable and a switched connection between each node. This connection is considered point-to-point and is collision free. Because both nodes can transmit and receive at the same time, there are no negotiations for bandwidth. Full-duplex Ethernet can use a cable infrastructure already in place, as long as the medium meets the minimum Ethernet standards.
To transmit and receive simultaneously, a dedicated switch port is required for each node. Full-duplex connections can use 10BASE-T, 100BASE-TX, or 100BASE-FX media to create point-to-point connections. The NICs on all connected devices must have full-duplex capabilities.
The full-duplex Ethernet switch takes advantage of the two pairs of wires in the cable and creates a direct connection between the transmit (TX) circuit at one end and the receive (RX) circuit at the other end. With the two stations connected in this manner, a collision-free environment is created because the transmission and receipt of data occur on separate, non-competing circuits.
Ethernet can usually only use 50 to 60 percent of the available 10 Mbps of bandwidth because of collisions and latency. Full-duplex Ethernet offers 100 percent of the bandwidth in both directions. This produces a potential 20 Mbps throughput, which results from 10 Mbps TX and 10 Mbps RX.
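The bandwidth arithmetic in the paragraph above, spelled out (the percentages are the rough figures quoted in the text, not measurements):

```python
# Half-duplex vs. full-duplex usable bandwidth on a 10 Mbps link.

link_mbps = 10
half_duplex_effective = link_mbps * 0.5    # ~50-60% usable under CSMA/CD
full_duplex_total = link_mbps + link_mbps  # 10 Mbps TX + 10 Mbps RX

print(half_duplex_effective)  # 5.0
print(full_duplex_total)      # 20
```

Note the 20 Mbps figure is aggregate throughput across both directions, not a faster link speed in either single direction.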
The Interactive Media Activity will help students learn the different characteristics of two full-duplex Ethernet standards.
This page concludes this lesson. The next lesson will introduce LAN switching. The first page describes LAN segmentation.