
Switch Operation / Functions of Ethernet switches

4.3.1 This page introduces the following two main functions of Ethernet switches:
  • Isolate traffic among segments
  • Achieve greater amount of bandwidth per user by creating smaller collision domains
The first function is to isolate traffic among segments. Segments are the smaller units into which a network is divided by Ethernet switches. Each segment uses the carrier sense multiple access/collision detect (CSMA/CD) access method to maintain data traffic flow among the users on that segment. Such segmentation allows multiple users to send information at the same time on different segments without slowing down the network.
The second function of an Ethernet switch is to ensure each user has more bandwidth by creating smaller collision domains. Ethernet and Fast Ethernet switches segment LANs by creating smaller collision domains. Each segment becomes a dedicated network link, like a highway lane functioning at up to 100 Mbps. Popular servers can then be placed on individual 100-Mbps links. In modern networks, a Fast Ethernet switch will often act as the backbone of a LAN, with Ethernet hubs, Ethernet switches, or Fast Ethernet hubs providing the desktop connections in workgroups.

This page will review the functions of an Ethernet switch.
A switch is a device that connects LAN segments using a table of MAC addresses to determine the segment on which a frame needs to be transmitted. Both switches and bridges operate at Layer 2 of the OSI model. 
Switches are sometimes called multiport bridges or switching hubs. Switches make decisions based on MAC addresses and are therefore Layer 2 devices. In contrast, hubs regenerate the Layer 1 signals out of all ports without making any decisions. Since a switch has the capacity to make path selection decisions, the LAN becomes much more efficient. Usually, in an Ethernet network the workstations are connected directly to the switch. Switches learn which hosts are connected to a port by reading the source MAC address in frames. The switch opens a virtual circuit between the source and destination nodes only. This confines communication to those two ports without affecting traffic on other ports. In contrast, a hub forwards data out all of its ports, so all hosts see the data and must process it, even if the data is not intended for them. High-performance LANs are usually fully switched:
  • A switch concentrates connectivity, making data transmission more efficient. Frames are switched from incoming ports to outgoing ports. Each port or interface can provide the full bandwidth of the connection to the host.
  • On a typical Ethernet hub, all ports connect to a common backplane or physical connection within the hub, and all devices attached to the hub share the bandwidth of the network. If two stations establish a session that uses a significant level of bandwidth, the network performance of all other stations attached to the hub is degraded.
  • To reduce degradation, the switch treats each interface as an individual segment. When stations on different interfaces need to communicate, the switch forwards frames at wire speed from one interface to the other, to ensure that each session receives full bandwidth.
To efficiently switch frames between interfaces, the switch maintains an address table. When a frame enters the switch, the switch associates the source MAC address of the sending station with the interface on which the frame was received.
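The learn-then-forward behavior described above can be sketched in a few lines of Python. This is an illustrative model only, not any vendor's implementation; the port numbers, frame fields, and the `Switch` class itself are hypothetical:

```python
# Minimal sketch of a switch's MAC address table (illustrative model).
class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # learned source MAC -> interface

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded out of."""
        # Learn: associate the sender's MAC with the incoming interface.
        self.mac_table[src_mac] = in_port
        out = self.mac_table.get(dst_mac)
        if out is None:
            # Unknown destination: flood out every port except the source.
            return [p for p in self.ports if p != in_port]
        if out == in_port:
            # Destination is on the same segment: filter, do not forward.
            return []
        # Known destination: forward out that single port only.
        return [out]
```

For example, the first frame from an unknown destination is flooded like a hub would, but once the switch has seen a frame from each host, traffic between two hosts no longer touches the other ports.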
The main features of Ethernet switches are:
  • Isolate traffic among segments
  • Achieve greater amount of bandwidth per user by creating smaller collision domains
The first feature, isolating traffic among segments, provides greater security for hosts on the network. Each segment uses the CSMA/CD access method to maintain data traffic flow among the users on that segment. Such segmentation allows multiple users to send information at the same time on different segments without slowing down the network.
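Within each segment, CSMA/CD resolves collisions with truncated binary exponential backoff: after the nth collision, a station waits a random number of slot times drawn from a range that doubles with each attempt. A minimal sketch, assuming the 51.2-microsecond slot time of 10-Mbps Ethernet (the function name is illustrative):

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10-Mbps Ethernet, in microseconds

def backoff_slots(attempt):
    """Slot times to wait after the nth consecutive collision (n >= 1)."""
    k = min(attempt, 10)              # the range stops growing after 10 tries
    return random.randrange(2 ** k)   # wait 0 .. 2^k - 1 slot times

# After the first collision a station waits 0 or 1 slots;
# after the third, anywhere from 0 to 7 slots:
delay_us = backoff_slots(3) * SLOT_TIME_US
```

Because each station draws its delay independently, two colliding stations are unlikely to retransmit at the same moment again.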
When the network is segmented, fewer users and devices share the same bandwidth when communicating with one another. Each segment has its own collision domain. Ethernet switches filter traffic by redirecting frames to the correct port or ports based on Layer 2 MAC addresses.
The second feature is called microsegmentation. Microsegmentation allows the creation of dedicated network segments with one host per segment. Each host receives access to the full bandwidth and does not have to compete for available bandwidth with other hosts. Popular servers can then be placed on individual 100-Mbps links. In modern networks, a Fast Ethernet switch will often act as the backbone of the LAN, with Ethernet hubs, Ethernet switches, or Fast Ethernet hubs providing the desktop connections in workgroups. As demanding new applications such as desktop multimedia or video conferencing become more popular, certain individual desktop computers will have dedicated 100-Mbps links to the network.
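The bandwidth gain from microsegmentation is simple division: hosts on a shared hub contend for one collision domain, while each switched port dedicates the full link rate to its host. A small illustration with hypothetical numbers (the function is not from any standard):

```python
def per_host_bandwidth_mbps(link_mbps, hosts, switched):
    # On a shared hub, all hosts contend for one collision domain;
    # on a switch, each port is its own collision domain.
    return link_mbps if switched else link_mbps / hosts

# 10 hosts on a 100-Mbps Fast Ethernet link:
hub_share = per_host_bandwidth_mbps(100, 10, switched=False)  # 10.0 Mbps average
switched  = per_host_bandwidth_mbps(100, 10, switched=True)   # 100 Mbps per host
```

The hub figure is an average under ideal sharing; real contention and collisions push effective per-host throughput lower still.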
The next page will introduce three frame transmission modes. 
