Layer 2 bridging / Layer 2 switching

Layer 2 bridging 
8.1.1 This page will discuss the operation of Layer 2 bridges.
As more nodes are added to an Ethernet segment, use of the medium increases. Ethernet is a shared medium, which means that only one node can transmit data at a time. Adding more nodes increases the demand on the available bandwidth and places additional load on the medium. This also increases the probability of collisions, which results in more retransmissions. A solution to this problem is to break the large segment into smaller, isolated collision domains.

To accomplish this, a bridge keeps a table of MAC addresses and their associated ports. The bridge then forwards or discards frames based on the entries in that table. The following steps illustrate the operation of a bridge:

• The bridge has just been started, so the bridge table is empty. The bridge simply waits for traffic on the segment. When traffic is detected, the bridge processes it.
• Host A pings Host B. Since the data is transmitted on the entire collision domain segment, both the bridge and Host B process the frame.
• The bridge adds the source address of the frame to its bridge table. Since the address was in the source address field and the frame was received on Port 1, the address must be associated with Port 1 in the table.
• The destination address of the frame is checked against the bridge table. The address of Host B has not been recorded yet, so it is not in the table, and the frame is forwarded to the other segment even though Host B is on the same collision domain.
• Host B processes the ping request and transmits a ping reply back to Host A. The data is transmitted over the whole collision domain. Both Host A and the bridge receive the frame and process it.
• The bridge adds the source address of the frame to its bridge table. Since the address was not yet in the bridge table and the frame was received on Port 1, the address must be associated with Port 1 in the table.
• The destination address of the frame is checked against the bridge table to see if its entry is there. Since the address is in the table, the port assignment is checked. The address of Host A is associated with the port the frame was received on, so the frame is not forwarded.
• Host A pings Host C. Since the data is transmitted on the entire collision domain segment, both the bridge and Host B process the frame. Host B discards the frame since it was not the intended destination.
• The bridge checks the source address of the frame against its bridge table. Since the address is already in the table, the entry is simply refreshed.
• The destination address of the frame is checked against the bridge table. The address of Host C has not been recorded yet, so it is not in the table, and the frame is forwarded to the other segment.
• Host C processes the ping request and transmits a ping reply back to Host A. The data is transmitted over the whole collision domain. Both Host D and the bridge receive the frame and process it. Host D discards the frame since it is not the intended destination.
• The bridge adds the source address of the frame to its bridge table. Since the address was in the source address field and the frame was received on Port 2, the address must be associated with Port 2 in the table.
• The destination address of the frame is checked against the bridge table to see if its entry is present. The address is in the table but it is associated with Port 1, so the frame is forwarded to the other segment.
• When Host D transmits data, its MAC address will also be recorded in the bridge table. This is how the bridge controls traffic between the two collision domains.

These are the steps that a bridge uses to forward or discard frames received on any of its ports; the sketch below walks through the same logic in code.
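
To make these steps concrete, here is a minimal Python sketch of the learning-and-forwarding logic, assuming a two-port bridge with Hosts A and B on Port 1 and Hosts C and D on Port 2. The Frame and Bridge classes, port names, and single-letter MAC addresses are illustrative only; a real bridge also ages out table entries, which this sketch omits.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    src: str   # source MAC address
    dst: str   # destination MAC address

class Bridge:
    def __init__(self, ports):
        self.ports = ports   # e.g. ["Port1", "Port2"]
        self.table = {}      # MAC address -> port it was learned on

    def receive(self, frame, in_port):
        # Learning: associate the source MAC with the port the frame arrived on.
        self.table[frame.src] = in_port

        # Forwarding decision based on the destination MAC.
        out_port = self.table.get(frame.dst)
        if out_port is None:
            # Unknown destination: forward out every other port (flood).
            return [p for p in self.ports if p != in_port]
        if out_port == in_port:
            # Destination is on the same segment: filter (do not forward).
            return []
        # Destination known on another port: forward only there.
        return [out_port]

# Walk through the ping example above.
bridge = Bridge(["Port1", "Port2"])
print(bridge.receive(Frame(src="A", dst="B"), "Port1"))  # ['Port2'] - B unknown, flooded
print(bridge.receive(Frame(src="B", dst="A"), "Port1"))  # [] - A is on the same port, filtered
print(bridge.receive(Frame(src="A", dst="C"), "Port1"))  # ['Port2'] - C unknown, flooded
print(bridge.receive(Frame(src="C", dst="A"), "Port2"))  # ['Port1'] - A known on Port 1, forwarded
```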

The next page will describe Layer 2 switching.

Layer 2 switching
8.1.2 This page will discuss Layer 2 switches.


Generally, a bridge has only two ports and divides a collision domain into two parts. All decisions made by a bridge are based on MAC or Layer 2 addresses and do not affect the logical or Layer 3 addresses. A bridge will divide a collision domain but has no effect on a logical or broadcast domain. If a network does not have a device that works with Layer 3 addresses, such as a router, the entire network will share the same logical broadcast address space. A bridge will create more collision domains but will not add broadcast domains.
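
This distinction can be seen directly in the bridge sketch above: a broadcast destination address never appears as a source, so it is never learned, and the bridge always floods such frames to the other segment. The snippet below reuses the Bridge and Frame classes from that sketch and is illustrative only.

```python
# Broadcast frames are always flooded, so both segments remain one
# broadcast domain even though they are separate collision domains.
bridge = Bridge(["Port1", "Port2"])
broadcast_frame = Frame(src="A", dst="ff:ff:ff:ff:ff:ff")
print(bridge.receive(broadcast_frame, "Port1"))  # ['Port2'] - the broadcast crosses the bridge
```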

A switch is essentially a fast, multi-port bridge that can contain dozens of ports. Each port creates its own collision domain. In a network of 20 nodes, 20 collision domains exist if each node is plugged into its own switch port. If an uplink port is included, one switch creates 21 single-node collision domains. A switch dynamically builds and maintains a content-addressable memory (CAM) table, which holds all of the necessary MAC information for each port.
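
Because a switch behaves like a multi-port bridge, the same sketch generalizes directly: give it one port per node and its MAC table plays the role of the CAM table. The port names and MAC addresses below are illustrative, and the snippet reuses the Bridge and Frame classes defined earlier.

```python
# A 24-port switch is, logically, the learning bridge above with more ports.
switch = Bridge([f"Fa0/{n}" for n in range(1, 25)])

# Each port is its own collision domain; an unknown destination is
# flooded out of every port except the one the frame arrived on.
flooded = switch.receive(Frame(src="AA", dst="FF"), "Fa0/1")
print(len(flooded))  # 23 - all other ports receive the flooded frame

# Once "FF" is learned, frames for it go out a single port only.
switch.receive(Frame(src="FF", dst="AA"), "Fa0/7")
print(switch.receive(Frame(src="AA", dst="FF"), "Fa0/1"))  # ['Fa0/7']
```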

The next page will explain how a switch operates.
