RIP routing process 
7.2.1
This page will provide an overview of the RIP routing process.
The open standard version of RIP, sometimes referred to as IP RIP, is formally defined in two documents: RIP Version 1 is described in Request for Comments (RFC) 1058, and RIP Version 2 is described in RFC 2453, which is also published as Internet Standard (STD) 56.
RIP has evolved over the years from a Classful Routing Protocol, RIP Version 1 (RIP v1), to a Classless Routing Protocol, RIP Version 2 (RIP v2). RIP v2 enhancements include the following:
  • Ability to carry additional packet routing information
  • Authentication mechanism to secure table updates
  • Support for variable-length subnet mask (VLSM)
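The VLSM support listed above comes from a concrete change in the update format: each RIP v2 route entry carries a subnet mask (and a next-hop address) alongside the network address, while a RIP v1 entry does not, forcing the receiver to infer the mask from the address class. The following Python sketch illustrates this difference; the field names are illustrative, not the actual on-wire packet layout:

```python
from dataclasses import dataclass
from ipaddress import IPv4Address, IPv4Network

@dataclass
class RipV1Entry:
    # RIP v1 is classful: no mask travels with the route, so the
    # receiver must assume the classful /8, /16, or /24 boundary.
    network: IPv4Address
    metric: int

@dataclass
class RipV2Entry:
    # RIP v2 is classless: the subnet mask is carried with each
    # route entry, which is what makes VLSM possible.
    network: IPv4Network     # address and mask together, e.g. 10.1.4.0/22
    next_hop: IPv4Address
    metric: int

# A /22 route that RIP v1 could not describe:
entry = RipV2Entry(IPv4Network("10.1.4.0/22"), IPv4Address("10.1.1.1"), 3)
```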
To prevent indefinite routing loops, RIP limits the number of hops allowed in a path from a source to a destination. The maximum number of hops in a valid path is 15. When a router receives a routing update that contains a new or changed entry, it increases the metric value by 1 to account for itself as a hop in the path. If this increases the metric above 15 (that is, to 16), the destination network is considered unreachable.

RIP also includes a number of features that are common to other routing protocols. For example, RIP implements split horizon and holddown mechanisms to prevent the propagation of incorrect routing information.
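The hop-count rule above can be sketched as a simple distance-vector metric update, where 16 stands for "unreachable". This is an illustration of the rule, not an actual RIP implementation:

```python
INFINITY = 16  # in RIP, a metric of 16 means the route is unreachable

def updated_metric(advertised_metric: int) -> int:
    """Add one hop for the receiving router, capping at INFINITY."""
    return min(advertised_metric + 1, INFINITY)

def is_reachable(metric: int) -> bool:
    return metric < INFINITY

# A route advertised at 15 hops becomes 16 on receipt, so the
# destination is considered unreachable.
assert not is_reachable(updated_metric(15))
```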
The next page will teach students how to configure RIP.