Summary

This page summarizes the topics discussed in this module.
An essential difference between link-state routing protocols and distance vector protocols is how they exchange routing information. Link-state routing protocols respond quickly to network changes, send triggered updates only when a network change has occurred, send periodic updates known as link-state refreshes, and use a hello mechanism to determine the reachability of neighbors.
A router running a link-state protocol uses the hello information and LSAs it receives from other routers to build a database about the network. It also uses the shortest path first (SPF) algorithm to calculate the shortest route to each network.
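The SPF calculation is Dijkstra's algorithm run over the link-state database. A minimal sketch, assuming the database has already been reduced to an adjacency map keyed by router ID (the router names and link costs below are hypothetical):

```python
import heapq

def spf(lsdb, root):
    """Dijkstra's shortest path first over a link-state database.

    lsdb maps each router to {neighbor: link_cost}; returns the
    lowest total cost from root to every reachable router.
    """
    dist = {root: 0}
    heap = [(0, root)]
    while heap:
        cost, router = heapq.heappop(heap)
        if cost > dist[router]:
            continue  # stale heap entry; a cheaper path was already found
        for neighbor, link_cost in lsdb.get(router, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Hypothetical three-router topology: the direct R1-R3 link is slow,
# so the two-hop path through R2 wins.
lsdb = {
    "R1": {"R2": 10, "R3": 64},
    "R2": {"R1": 10, "R3": 10},
    "R3": {"R1": 64, "R2": 10},
}
print(spf(lsdb, "R1"))  # R1 reaches R3 via R2 at cost 20, not 64
```

Every router runs this same calculation over the same synchronized database, each rooting the tree at itself.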
To overcome the limitations of distance vector routing protocols, link-state routing protocols use link-state advertisements (LSAs), a topological database, the SPF algorithm, the resulting SPF tree, and a routing table of paths and ports to each network to determine the best paths for packets.
A link is the same as an interface on a router. The state of the link is a description of an interface and its relationship to its neighboring routers. Link-state routers use LSAs to advertise the states of their links to all other routers in the area so that each router can build a complete link-state database. They form special relationships, called adjacencies, with their neighbors and other link-state routers. Link-state routing protocols are a good choice for complex, scalable networks. The benefits of link-state routing over distance vector protocols include faster convergence and improved bandwidth utilization. Link-state protocols also support classless interdomain routing (CIDR) and variable-length subnet masks (VLSM).
Open Shortest Path First (OSPF) is a link-state routing protocol based on open standards. The "Open" in OSPF means that it is publicly available and non-proprietary. To reduce the number of routing-information exchanges among several neighbors on the same network, OSPF routers elect a Designated Router (DR) and a Backup Designated Router (BDR) that serve as focal points for the exchange. OSPF selects routes based on cost, which in the Cisco implementation is derived from bandwidth. OSPF selects the fastest loop-free path from the shortest-path-first tree as the best path in the network, so it guarantees loop-free routing, whereas distance vector protocols may cause routing loops. When a router starts an OSPF routing process on an interface, it sends a hello packet and continues to send hellos at regular intervals. The rules that govern the exchange of OSPF hello packets are called the Hello protocol. If all required parameters in the hello packets agree, the routers become neighbors.
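In the Cisco implementation, the default cost of an interface is the reference bandwidth (10^8 bps, i.e. 100 Mbps) divided by the interface bandwidth, truncated to an integer with a floor of 1. That formula can be sketched as:

```python
def ospf_cost(bandwidth_bps, reference_bps=100_000_000):
    """Cisco default OSPF cost: reference bandwidth / interface bandwidth.

    The result is truncated to an integer and never drops below 1,
    which is why links of 100 Mbps and faster all default to cost 1
    unless the reference bandwidth is raised.
    """
    return max(1, reference_bps // bandwidth_bps)

print(ospf_cost(10_000_000))     # 10-Mbps Ethernet  -> 10
print(ospf_cost(1_544_000))      # T1 serial link    -> 64
print(ospf_cost(100_000_000))    # Fast Ethernet     -> 1
print(ospf_cost(1_000_000_000))  # Gigabit Ethernet  -> 1 (floor applies)
```

Because faster links get lower costs, the SPF tree naturally prefers higher-bandwidth paths.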
Each router sends link-state advertisements (LSAs) in link-state update (LSU) packets. Each router that receives an LSA from a neighbor records it in the link-state database, and the process repeats for all routers in the OSPF network. When the databases are complete, each router uses the SPF algorithm to calculate a loop-free logical topology to every known network. The lowest-cost path is used in building this topology, so the best route is selected.
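The database-synchronization step can be sketched as follows: a router installs a received LSA only if it is newer than the copy it already holds (OSPF tracks freshness with a 32-bit sequence number), and then re-floods it out its other interfaces. The data structures here are illustrative, not the actual OSPF packet formats:

```python
def receive_lsa(lsdb, lsa, arrived_on, interfaces):
    """Install an LSA if it is newer than the stored copy, then
    re-flood it out every interface except the one it arrived on.

    lsa is a dict with 'id' (advertising router) and 'seq' fields;
    returns the list of interfaces the LSA was flooded out of.
    """
    current = lsdb.get(lsa["id"])
    if current is not None and current["seq"] >= lsa["seq"]:
        return []  # duplicate or older copy: install nothing, stop flooding
    lsdb[lsa["id"]] = lsa
    return [ifc for ifc in interfaces if ifc != arrived_on]

lsdb = {}
flooded = receive_lsa(lsdb, {"id": "R2", "seq": 5}, "eth0",
                      ["eth0", "eth1", "eth2"])
print(flooded)  # ['eth1', 'eth2'] -- a new LSA is re-flooded
print(receive_lsa(lsdb, {"id": "R2", "seq": 5}, "eth1",
                  ["eth0", "eth1", "eth2"]))  # [] -- duplicate, flooding stops
```

The "do not re-flood duplicates" rule is what lets flooding terminate once every router holds the same copy.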
This routing information is then maintained. When a link state changes, routers use a flooding process to notify the other routers on the network of the change. The dead interval of the Hello protocol provides a simple mechanism for determining that an adjacent neighbor is down.
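The dead-interval check amounts to comparing the time since the last hello was heard against a timeout, which by default is four times the hello interval (10 and 40 seconds on broadcast networks). A minimal sketch, with hypothetical names:

```python
HELLO_INTERVAL = 10                 # seconds between hellos (broadcast default)
DEAD_INTERVAL = 4 * HELLO_INTERVAL  # 40 s: neighbor declared down after this

def is_neighbor_down(last_hello_time, now, dead_interval=DEAD_INTERVAL):
    """A neighbor is declared down once no hello has been heard
    for a full dead interval."""
    return now - last_hello_time > dead_interval

print(is_neighbor_down(last_hello_time=100, now=135))  # False: 35 s elapsed
print(is_neighbor_down(last_hello_time=100, now=141))  # True: dead interval exceeded
```

Because both timers must match for routers to become neighbors in the first place, every router on a segment applies the same timeout.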
