
Network latency / Ethernet 10BASE-T transmission time

Network latency
4.1.6 This page will help students understand the factors that increase network latency.
Latency, or delay, is the time a frame or packet takes to travel from the source station to the final destination. It is important to quantify the total latency of the path between source and destination for both LANs and WANs. In an Ethernet LAN specifically, understanding latency and its effect on network timing matters because latency determines whether CSMA/CD can work properly.
Latency has at least three sources:
  • First, there is the time it takes the source NIC to place voltage pulses on the wire and the time it takes the destination NIC to interpret these pulses. This is sometimes called NIC delay, typically around 1 microsecond for a 10BASE-T NIC.
  • Second, there is the actual propagation delay as the signal takes time to travel through the cable. Typically, this is about 0.556 microseconds per 100 m for Cat 5 UTP. A longer cable and a slower nominal velocity of propagation (NVP) both result in more propagation delay.
  • Third, latency is added based on network devices that are in the path between two computers. These are either Layer 1, Layer 2, or Layer 3 devices.
Latency does not depend solely on distance and the number of devices. For example, if three properly configured switches separate two workstations, the workstations may experience less latency than if two properly configured routers separated them. This is because routers perform more complex and time-intensive functions: a router must analyze Layer 3 data.
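The three latency sources above can be sketched as a simple sum in Python. This is a rough illustration using the figures quoted on this page (about 1 microsecond per 10BASE-T NIC and about 0.556 microseconds per 100 m of Cat 5 UTP); the function name and the per-device delay figure in the example are our own assumptions, not values from the curriculum.

```python
# Rough one-way latency estimate for a 10BASE-T path.
NIC_DELAY_US = 1.0               # per NIC, figure quoted above
PROP_DELAY_US_PER_100M = 0.556   # Cat 5 UTP, figure quoted above

def path_latency_us(cable_m, device_delays_us):
    """Sum NIC, propagation, and intermediate-device delays (microseconds)."""
    nic = 2 * NIC_DELAY_US                          # source and destination NICs
    prop = (cable_m / 100.0) * PROP_DELAY_US_PER_100M
    devices = sum(device_delays_us)                 # Layer 1/2/3 devices in the path
    return nic + prop + devices

# Example: 100 m of cable through one intermediate device with an
# assumed 1-microsecond delay (hypothetical figure for illustration).
print(path_latency_us(100, [1.0]))
```

Swapping the device-delay list lets you compare paths, e.g. three switches versus two routers, once you have measured or vendor-quoted delay figures for each device.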
The next page will discuss transmission time. 

Ethernet 10BASE-T transmission time 
4.1.7 This page will explain how transmission time is determined for 10BASE-T.
All networks have what is called a bit time. Many LAN technologies, such as Ethernet, define the bit time as the basic unit of time in which one bit can be sent. For the electronic or optical devices to recognize a binary one or zero, there must be some minimum duration during which the bit is on or off.
Transmission time equals the number of bits to be sent multiplied by the bit time for a given technology. Another way to think about transmission time is as the interval between the start and end of a frame transmission, or between the start of a frame transmission and a collision. Small frames take less time to transmit; large frames take more.
Each 10-Mbps Ethernet bit has a 100 ns transmission window. This is the bit time. A byte equals eight bits. Therefore, 1 byte takes a minimum of 800 ns to transmit. A 64-byte frame, which is the smallest 10BASE-T frame that allows CSMA/CD to function properly, has a transmission time of 51,200 ns or 51.2 microseconds. Transmission of an entire 1000-byte frame from the source requires 800 microseconds. The time at which the frame actually arrives at the destination station depends on the additional latency introduced by the network. This latency can be due to a variety of delays including all of the following:
  • NIC delays
  • Propagation delays
  • Layer 1, Layer 2, or Layer 3 device delays
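The arithmetic above can be checked with a short sketch. The 100 ns bit time is the 10BASE-T figure from this page; the function name is our own.

```python
BIT_TIME_NS = 100  # one bit window at 10 Mbps

def transmission_time_ns(frame_bytes):
    """Transmission time = number of bits * bit time (nanoseconds)."""
    return frame_bytes * 8 * BIT_TIME_NS

print(transmission_time_ns(64))    # minimum CSMA/CD frame: 51,200 ns = 51.2 us
print(transmission_time_ns(1000))  # 800,000 ns = 800 us
```

These values match the worked examples in the text: a single byte takes 800 ns, the 64-byte minimum frame takes 51.2 microseconds, and a 1000-byte frame takes 800 microseconds, before any network latency is added.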
The Interactive Media Activity will help students determine the 10BASE-T transmission times for different frame sizes.
The next page will describe the benefits of repeaters.
