
Ethernet switch latency / Layer 2 and Layer 3 switching

Ethernet switch latency
4.2.6 Switch latency is the time from the moment a frame enters a switch to the moment it exits the switch. Latency depends directly on the configured switching process and on the volume of traffic.
Latency is measured in tiny fractions of a second. Network devices operate at such high speeds that every additional nanosecond of latency adversely affects network performance.
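One component of latency that is easy to quantify is serialization delay, the time needed to clock a whole frame onto the wire. The sketch below (not part of the original text; frame sizes and link speeds are illustrative) shows why latency is measured in microseconds and why faster links shrink it:

```python
# Rough sketch of the serialization component of store-and-forward latency.
# A store-and-forward switch must receive the entire frame before forwarding,
# so this delay is incurred on every hop.

def serialization_delay_us(frame_bytes: int, link_bps: float) -> float:
    """Time to transmit a whole frame, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

# A maximum-size 1518-byte Ethernet frame at three common link speeds:
for name, bps in [("100 Mb/s", 100e6), ("1 Gb/s", 1e9), ("10 Gb/s", 10e9)]:
    print(f"{name}: {serialization_delay_us(1518, bps):.3f} us")
```

Total switch latency also includes lookup and queuing time, but this simple calculation shows the scale involved: roughly 12 microseconds just to receive a full-size frame at 1 Gb/s.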
The next page will describe Layer 2 and Layer 3 switching.
Layer 2 and Layer 3 switching 
4.2.7 There are two methods of switching data frames: Layer 2 switching and Layer 3 switching. Routers and Layer 3 switches use Layer 3 switching to switch packets. Layer 2 switches and bridges use Layer 2 switching to forward frames.
The difference between Layer 2 and Layer 3 switching is the type of information inside the frame that is used to determine the correct output interface. Layer 2 switching is based on MAC address information. Layer 3 switching is based on network layer addresses, or IP addresses. The features and functionality of Layer 3 switches and routers have numerous similarities. The only major difference between the packet switching operation of a router and a Layer 3 switch is the physical implementation. In general-purpose routers, packet switching takes place in software, using microprocessor-based engines, whereas a Layer 3 switch performs packet forwarding in application-specific integrated circuit (ASIC) hardware.
Layer 2 switching looks at the destination MAC address in the frame header and forwards the frame to the appropriate interface or port based on the MAC address in the switching table. The switching table is contained in Content Addressable Memory (CAM). If the Layer 2 switch does not know where to send the frame, it floods the frame out all ports except the port on which it arrived. When a reply returns, the switch records the new address in the CAM.
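The learn-and-flood behavior just described can be sketched in a few lines. This is an illustrative model only; the class name, port numbers, and MAC strings are made up, and a real switch implements the CAM lookup in hardware:

```python
# Minimal sketch of Layer 2 learning and forwarding.
class Layer2Switch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.cam = {}  # MAC address -> port (the switching table held in CAM)

    def receive(self, src_mac, dst_mac, in_port):
        # Learn: associate the frame's source MAC with its ingress port.
        self.cam[src_mac] = in_port
        # Forward: a known destination goes out exactly one port; an unknown
        # destination is flooded out every port except the one it arrived on.
        if dst_mac in self.cam:
            return {self.cam[dst_mac]}
        return self.ports - {in_port}

sw = Layer2Switch(ports=[1, 2, 3, 4])
print(sw.receive("aa:aa", "bb:bb", in_port=1))  # unknown dst: flood {2, 3, 4}
print(sw.receive("bb:bb", "aa:aa", in_port=2))  # reply: dst now known -> {1}
```

Note how the second frame is enough for the switch to learn both stations, after which traffic between them stops being flooded.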
Layer 3 switching is a function of the network layer. The Layer 3 header information is examined and the packet is forwarded based on the IP address.
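The core of that Layer 3 forwarding decision is a longest-prefix-match lookup against the routing table. A small sketch, using Python's standard `ipaddress` module with a made-up routing table (the networks and interface names are illustrative, not from the original text):

```python
# Illustrative longest-prefix-match lookup for Layer 3 forwarding.
import ipaddress

routes = {
    ipaddress.ip_network("10.0.0.0/8"): "GigabitEthernet0/1",
    ipaddress.ip_network("10.1.0.0/16"): "GigabitEthernet0/2",
    ipaddress.ip_network("0.0.0.0/0"): "Serial0/0",  # default route
}

def lookup(dst_ip: str) -> str:
    """Return the egress interface for the most specific matching route."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in routes if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(lookup("10.1.2.3"))   # matches /8, /16 and /0; the /16 wins
print(lookup("192.0.2.1"))  # only the default route matches
```

A router performs this lookup in software; a Layer 3 switch performs the equivalent match in ASIC hardware, which is exactly the implementation difference noted above.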
Traffic flow in a switched or flat network is inherently different from the traffic flow in a routed or hierarchical network. Hierarchical networks offer more flexible traffic flow than flat networks.
The next page will discuss symmetric and asymmetric switching.
