
Module 4 Summary

Summary
An understanding of the following key points should have been achieved:
  • The history and function of shared, half-duplex Ethernet
  • Collisions in an Ethernet network
  • Microsegmentation
  • CSMA/CD
  • Elements affecting network performance
  • The function of repeaters
  • Network latency
  • Transmission time
  • The basic function of Fast Ethernet
  • Network segmentation using routers, switches, and bridges
  • The basic operations of a switch
  • Ethernet switch latency
  • The differences between Layer 2 and Layer 3 switching
  • Symmetric and asymmetric switching
  • Memory buffering
  • Store-and-forward and cut-through switching modes
  • The differences between hubs, bridges, and switches
  • The main functions of switches
  • Major switch frame transmission modes
  • The process by which switches learn addresses
  • The frame-filtering process
  • LAN segmentation
  • Microsegmentation using switching
  • Forwarding modes
  • Collision and broadcast domains
  • The cables needed to connect switches to workstations
  • The cables needed to connect switches to switches 

This page summarizes the topics discussed in this module.

Ethernet is the most common LAN architecture, and it is used to transport data between devices on a network. Originally, Ethernet was a half-duplex technology: a host could either transmit or receive at any one time, but not both. When two or more Ethernet hosts transmit at the same time on a shared medium, the result is a collision. The time a frame or packet takes to travel from the source station to its final destination is known as latency, or delay. The three sources of latency are NIC delay, actual propagation delay, and delay added by specific network devices.
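As a rough illustration of how these delay sources combine, the sketch below adds them up for a full-size frame on a 10-Mbps segment. All of the specific delay values are assumptions chosen only for the example, not figures from this module.

# Rough illustration only: the delay values below are assumptions for the example.
FRAME_BITS = 1518 * 8          # maximum-size Ethernet frame, in bits
LINK_SPEED_BPS = 10_000_000    # 10-Mbps shared Ethernet
NIC_DELAY_S = 1e-6             # assumed NIC processing delay per end
PROP_DELAY_S_PER_M = 5.56e-9   # assumed propagation delay over copper (~0.556 us per 100 m)
CABLE_LENGTH_M = 100           # assumed segment length
REPEATER_DELAY_S = 2e-6        # assumed delay added by one repeater in the path

transmission_time = FRAME_BITS / LINK_SPEED_BPS            # time to place the frame on the wire
propagation_delay = PROP_DELAY_S_PER_M * CABLE_LENGTH_M    # time for the signal to cross the cable
total_latency = 2 * NIC_DELAY_S + propagation_delay + REPEATER_DELAY_S + transmission_time

print(f"transmission time: {transmission_time * 1e6:.1f} microseconds")
print(f"total one-way latency: {total_latency * 1e6:.1f} microseconds")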
Bit time, or slot time, is the basic unit of time in which one bit can be sent. There must be some minimum duration during which the bit is on or off in order for a device to recognize a binary one or zero.
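For a concrete sense of scale, the short calculation below simply evaluates one divided by the bit rate to print the bit time for common Ethernet speeds.

# Bit time = 1 / (bits per second).
for name, bits_per_second in [("10-Mbps Ethernet", 10e6),
                              ("100-Mbps Fast Ethernet", 100e6),
                              ("1000-Mbps Gigabit Ethernet", 1e9)]:
    bit_time_ns = 1 / bits_per_second * 1e9
    print(f"{name}: bit time = {bit_time_ns:g} ns")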
Attenuation means that a signal weakens as it travels through the network, which limits the distance a LAN can cover. A repeater can extend the distance of a LAN, but it also has a negative effect on the overall performance of the LAN.
Full-duplex transmission between stations is achieved by using point-to-point Ethernet connections. Full-duplex transmission provides a collision-free transmission environment. Both stations can transmit and receive at the same time, and there are no negotiations for bandwidth. The existing cable infrastructure can be utilized as long as the medium meets the minimum Ethernet standards.
Segmentation divides a network into smaller units to reduce congestion and enhance security. Each segment uses the CSMA/CD access method to manage traffic between the users on that segment. Segmentation with a Layer 2 bridge is transparent to other network devices, but it increases latency significantly; the more work a network device does, the more latency it introduces into the network. Routers also segment networks, but they can add 20% to 30% more latency than a switched network, because a router operates at the network layer and uses the IP address to determine the best path to the destination node. A switch can segment a LAN into microsegments, which decreases the size of collision domains; however, all hosts connected to the switch remain in the same broadcast domain.
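The sketch below contrasts the two cases implied here, using a hypothetical group of 24 hosts: attached to a hub they share one collision domain, while attached to a switch each port becomes its own collision domain, yet both arrangements remain a single broadcast domain.

# Hypothetical comparison: 24 hosts on a single hub versus a single switch.
HOSTS = 24

# Hub: one shared medium, so one collision domain and one broadcast domain.
hub = {"collision_domains": 1, "broadcast_domains": 1}

# Switch (microsegmentation): each port is its own collision domain,
# but every port still belongs to the same broadcast domain.
switch = {"collision_domains": HOSTS, "broadcast_domains": 1}

print("hub:   ", hub)
print("switch:", switch)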
Switching is a technology that decreases congestion in Ethernet, Token Ring, and Fiber Distributed Data Interface (FDDI) LANs. Switching is the process of receiving an incoming frame on one interface and delivering that frame out another interface. Routers use Layer 3 switching to route a packet. Switches use Layer 2 switching to forward frames. A symmetric switch provides switched connections between ports with the same bandwidth. An asymmetric LAN switch provides switched connections between ports of unlike bandwidth, such as a combination of 10-Mbps and 100-Mbps ports.
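The Layer 2 address-learning and frame-filtering process described in this module can be sketched in a few lines. The LearningSwitch class, port numbers, and MAC strings below are hypothetical simplifications, not a real switch implementation.

# Hypothetical sketch of Layer 2 address learning and frame filtering/forwarding.
class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}   # MAC address -> port on which it was learned

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: record the port on which the source address was seen.
        self.mac_table[src_mac] = in_port
        out_port = self.mac_table.get(dst_mac)
        if out_port is None:
            # Unknown destination: flood out every port except the arrival port.
            return [p for p in self.ports if p != in_port]
        if out_port == in_port:
            # Destination is on the arrival segment: filter (drop) the frame.
            return []
        # Known destination on another port: forward out that port only.
        return [out_port]

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "00AA", "00BB"))   # 00BB unknown -> flooded to ports [2, 3, 4]
print(sw.receive(2, "00BB", "00AA"))   # 00AA learned on port 1 -> forwarded to [1]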
A memory buffer is an area of memory in which a switch stores data. A switch can use one of two memory-buffering methods: port-based memory buffering or shared memory buffering.
There are two modes used to forward frames. Store-and-forward receives the entire frame before forwarding it, while cut-through begins forwarding the frame as soon as it is received, which decreases latency. Fast-forward and fragment-free are the two forms of cut-through forwarding.
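To see why cut-through lowers latency, the sketch below compares how long each mode waits before it can start forwarding a full-size frame on an assumed 100-Mbps port: store-and-forward must receive all 1518 bytes, fast-forward only the 6-byte destination address, and fragment-free the first 64 bytes.

# Illustrative numbers; a real switch also adds internal processing delay.
FRAME_BITS = 1518 * 8          # full-size Ethernet frame
DEST_MAC_BITS = 6 * 8          # fast-forward starts after the destination MAC arrives
FRAGMENT_FREE_BITS = 64 * 8    # fragment-free waits for the first 64 bytes
PORT_SPEED_BPS = 100e6         # assumed 100-Mbps port

for mode, bits in [("store-and-forward", FRAME_BITS),
                   ("cut-through (fast-forward)", DEST_MAC_BITS),
                   ("cut-through (fragment-free)", FRAGMENT_FREE_BITS)]:
    print(f"{mode}: waits {bits / PORT_SPEED_BPS * 1e6:.2f} microseconds before forwarding")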
