RIP Version 2

RIP history 1.2.1

This page will explain the functions and limitations of RIP. The Internet is a collection of autonomous systems (AS). Each AS is generally administered by a single entity. Each AS can use a routing technology that differs from that of other autonomous systems. The routing protocol used within an AS is referred to as an Interior Gateway Protocol (IGP). A separate protocol used to transfer routing information between autonomous systems is referred to as an Exterior Gateway Protocol (EGP). RIP is designed to work as an IGP in a moderate-sized AS. It is not intended for use in more complex environments.
RIP v1 is a classful, distance vector IGP. It broadcasts the entire routing table to each neighbor router at predetermined intervals; the default interval is 30 seconds. RIP uses hop count as its metric, with 15 as the maximum number of hops. A route with a metric of 16 is considered unreachable.
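The distance vector update described above can be sketched in a few lines. This is a minimal, hypothetical model (the table layout and function name are illustrative; real RIP messages carry more fields), showing how a router adds one hop to each advertised metric and caps it at the RIP "infinity" of 16:

```python
# Minimal sketch of a RIP-style distance vector update.
RIP_INFINITY = 16  # a hop count of 16 means "unreachable"

def process_update(table, neighbor, advertised):
    """Merge a neighbor's advertised routes into our routing table.

    table:      {network: (metric, next_hop)}
    advertised: {network: metric} as received from `neighbor`
    """
    for network, metric in advertised.items():
        # Cost through this neighbor is its metric plus one hop,
        # capped at the RIP "infinity" of 16.
        cost = min(metric + 1, RIP_INFINITY)
        current = table.get(network)
        # Install the route if it is new, cheaper, or a fresh update
        # from the neighbor we already route through.
        if current is None or cost < current[0] or current[1] == neighbor:
            table[network] = (cost, neighbor)
    return table
```

Note that an update from the current next hop always overwrites the stored metric; this is how a worsening route (including one poisoned with a metric of 16) propagates.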
If a router receives information about a network, and the receiving interface belongs to the same major network but a different subnet, the router applies the subnet mask that is configured on the receiving interface. If the update is for a different major network, the router applies the default classful mask:
  • For Class A addresses, the default classful mask is 255.0.0.0.
  • For Class B addresses, the default classful mask is 255.255.0.0.
  • For Class C addresses, the default classful mask is 255.255.255.0.
RIP v1 is a popular routing protocol because virtually all IP routers support it. Its popularity is based on its simplicity and near-universal compatibility. RIP v1 can load balance over as many as six equal-cost paths, with four paths as the default.
RIP v1 has the following limitations:
  • It does not send subnet mask information in its updates.
  • It sends updates as broadcasts on 255.255.255.255.
  • It does not support authentication.
  • It is not able to support VLSM or classless interdomain routing (CIDR).
RIP v1 is simple to configure, as shown in the figure.
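Since the figure is not reproduced here, a minimal sketch of a typical RIP v1 configuration on a Cisco router follows (the network numbers are illustrative, not from the original figure):

```
Router(config)# router rip
Router(config-router)# network 10.0.0.0
Router(config-router)# network 192.168.1.0
```

The `network` command takes classful network numbers; RIP then runs on every interface whose address falls within those networks.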
The next page will introduce RIP v2.
