
Ethernet timing
6.2.3 This page explains the importance of slot times in an Ethernet network.


The basic rules and specifications for proper operation of Ethernet are not particularly complicated, though some of the faster physical layer implementations are becoming so. Despite the basic simplicity, when a problem occurs in Ethernet it is often quite difficult to isolate the source. Because of the common bus architecture of Ethernet, also described as a distributed single point of failure, the scope of the problem usually encompasses all devices within the collision domain. In situations where repeaters are used, this can include devices up to four segments away.

Any station on an Ethernet network wishing to transmit a message first “listens” to ensure that no other station is currently transmitting. If the cable is quiet, the station will begin transmitting immediately. The electrical signal takes time to travel down the cable (delay), and each subsequent repeater introduces a small amount of latency in forwarding the frame from one port to the next. Because of the delay and latency, it is possible for more than one station to begin transmitting at or near the same time. This results in a collision.
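To make the listen-then-transmit behavior concrete, here is a minimal sketch of a half-duplex CSMA/CD send loop. The medium object and its methods are hypothetical stand-ins for the shared cable, not a real NIC API; the backoff rule shown is the standard truncated binary exponential backoff, which the next page covers in detail.

```python
import random
import time

SLOT_TIME_BITS = 512   # slot time for 10/100-Mbps Ethernet, in bit-times
MAX_ATTEMPTS = 16      # 802.3 gives up after 16 transmission attempts

def csma_cd_send(medium, frame, bit_time_ns):
    """Simplified half-duplex CSMA/CD send loop.

    `medium` is a hypothetical object with .is_busy(), .transmit(frame)
    (returns True if a collision occurred while sending), and .send_jam().
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        # Carrier sense: defer while another station is transmitting.
        while medium.is_busy():
            pass

        collided = medium.transmit(frame)
        if not collided:
            return True                 # frame sent successfully

        medium.send_jam()               # 32-bit jam so all stations see the collision

        # Truncated binary exponential backoff: wait r slot times,
        # where r is drawn from 0 .. 2^min(attempt, 10) - 1.
        r = random.randrange(2 ** min(attempt, 10))
        time.sleep(r * SLOT_TIME_BITS * bit_time_ns * 1e-9)

    return False  # excessive collisions: give up on this frame
```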

If the attached station is operating in full duplex then the station may send and receive simultaneously and collisions should not occur. Full-duplex operation also changes the timing considerations and eliminates the concept of slot time. Full-duplex operation allows for larger network architecture designs since the timing restriction for collision detection is removed.

In half duplex, assuming that a collision does not occur, the sending station transmits 64 bits of timing synchronization information, known as the preamble. The sending station then transmits the following information, sketched in code after the list:

• Destination and source MAC addressing information

• Certain other header information

• The actual data payload

• Checksum (FCS) used to ensure that the message was not corrupted along the way
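Taken together, the preamble (prepended by the hardware) and the fields above define the frame layout. The sketch below assembles a minimum-size Ethernet II frame using Python's standard struct and zlib modules; the MAC addresses, EtherType, and payload are made up for illustration.

```python
import struct
import zlib

def build_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble an Ethernet II frame (without preamble/SFD, which the
    hardware prepends). Pads the payload so the frame meets the
    64-octet minimum, then appends the 32-bit FCS."""
    if len(payload) < 46:                 # 46 + 14 header + 4 FCS = 64 octets
        payload = payload.ljust(46, b"\x00")
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    fcs = struct.pack("<I", zlib.crc32(header + payload))  # CRC-32, wire byte order
    return header + payload + fcs

frame = build_frame(b"\xff" * 6,                  # broadcast destination
                    b"\x02\x00\x00\x00\x00\x01",  # made-up source MAC
                    0x0800,                       # IPv4 EtherType
                    b"hello")
print(len(frame))  # 64 octets: the minimum legal frame size
```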

Stations receiving the frame recalculate the FCS to determine if the incoming message is valid and then pass valid messages to the next higher layer in the protocol stack.
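The receiver-side check amounts to recomputing the CRC-32 over the frame and comparing it with the trailing four octets. A minimal, self-contained sketch with placeholder frame contents:

```python
import struct
import zlib

def fcs_is_valid(frame: bytes) -> bool:
    """Recompute CRC-32 over everything except the trailing 4-octet FCS
    and compare it with the FCS that arrived with the frame."""
    body, received_fcs = frame[:-4], frame[-4:]
    return struct.pack("<I", zlib.crc32(body)) == received_fcs

body = bytes(60)                               # placeholder 60-octet frame body
frame = body + struct.pack("<I", zlib.crc32(body))

print(fcs_is_valid(frame))                     # True: frame arrived intact
corrupted = frame[:10] + b"\x01" + frame[11:]  # flip one octet in transit
print(fcs_is_valid(corrupted))                 # False: receiver discards it
```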

10 Mbps and slower versions of Ethernet are asynchronous. Asynchronous means that each receiving station uses the eight octets of timing information to synchronize the receive circuit to the incoming data and then discards them. 100 Mbps and higher speed implementations of Ethernet are synchronous. Synchronous means the timing information is not required; however, for compatibility reasons, the Preamble and Start Frame Delimiter (SFD) are still present.

For all speeds of Ethernet transmission at or below 1000 Mbps, the standard requires that a transmission be no smaller than the slot time. Slot time for 10- and 100-Mbps Ethernet is 512 bit-times, or 64 octets. Slot time for 1000-Mbps Ethernet is 4096 bit-times, or 512 octets. Slot time is calculated assuming maximum cable lengths on the largest legal network architecture, with all hardware propagation delay times at their legal maximums and the 32-bit jam signal used when collisions are detected.
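Converting these figures to wall-clock time shows how the slot shrinks as speed increases; the arithmetic below uses only the numbers quoted above.

```python
# Slot time expressed in wall-clock terms for each half-duplex speed.
SLOT_TIMES_BITS = {10e6: 512, 100e6: 512, 1000e6: 4096}  # bits per slot

for rate, slot_bits in SLOT_TIMES_BITS.items():
    bit_time_ns = 1e9 / rate            # 100 ns, 10 ns, 1 ns per bit
    slot_us = slot_bits * bit_time_ns / 1000
    print(f"{rate/1e6:>6.0f} Mbps: {slot_bits:>4} bit-times "
          f"({slot_bits // 8} octets) = {slot_us:.2f} microseconds")
# 10 Mbps: 51.20 us;  100 Mbps: 5.12 us;  1000 Mbps: 4.10 us
```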

The actual calculated slot time is just longer than the theoretical amount of time required for a signal to travel between the farthest points of the collision domain, collide with another transmission at the last possible instant, and then have the collision fragments return to the sending station and be detected. For the system to work, the first station must learn about the collision before it finishes sending the smallest legal frame size. To allow 1000-Mbps Ethernet to operate in half duplex, the extension field was added when sending small frames, purely to keep the transmitter busy long enough for a collision fragment to return. This field is present only on 1000-Mbps, half-duplex links and allows minimum-sized frames to be long enough to meet slot time requirements. Extension bits are discarded by the receiving station.
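A short sketch of the extension rule, assuming the 4096-bit-time (512-octet) gigabit slot quoted above; the function name is illustrative, not from any standard API.

```python
GIGABIT_SLOT_OCTETS = 4096 // 8   # 512 octets: gigabit half-duplex slot time

def extension_octets_needed(frame_len: int) -> int:
    """Carrier extension octets appended after the FCS so a short frame
    occupies the full gigabit slot time (half duplex only)."""
    return max(0, GIGABIT_SLOT_OCTETS - frame_len)

print(extension_octets_needed(64))    # 448: a minimum frame needs heavy extension
print(extension_octets_needed(512))   # 0: already fills the slot time
print(extension_octets_needed(1518))  # 0: large frames never need extension
```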

On 10-Mbps Ethernet, one bit at the MAC layer requires 100 nanoseconds (ns) to transmit. At 100 Mbps that same bit requires 10 ns to transmit, and at 1000 Mbps only 1 ns. As a rough estimate, 20.3 cm (8 in) per nanosecond is often used for calculating propagation delay down a UTP cable. For 100 meters of UTP, this means that it takes just under 5 bit-times for a 10BASE-T signal to travel the length of the cable.
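The arithmetic behind the "just under 5 bit-times" figure, extended to the faster speeds using the same 20.3 cm/ns estimate:

```python
PROPAGATION_CM_PER_NS = 20.3   # rough UTP figure from the text
CABLE_M = 100                  # maximum UTP segment length

delay_ns = CABLE_M * 100 / PROPAGATION_CM_PER_NS   # ~492.6 ns end to end
for rate_mbps in (10, 100, 1000):
    bit_time_ns = 1000 / rate_mbps                 # 100, 10, 1 ns per bit
    print(f"{rate_mbps:>4} Mbps: {delay_ns / bit_time_ns:6.1f} bit-times "
          f"to cross {CABLE_M} m")
# 10 Mbps: ~4.9 bit-times; 100 Mbps: ~49.3; 1000 Mbps: ~492.6
```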

For CSMA/CD Ethernet to operate, the sending station must become aware of a collision before it has completed transmission of a minimum-sized frame. At 100 Mbps the system timing is barely able to accommodate 100 meter cables. At 1000 Mbps special adjustments are required as nearly an entire minimum-sized frame would be transmitted before the first bit reached the end of the first 100 meters of UTP cable. For this reason half duplex is not permitted in 10-Gigabit Ethernet.
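The timing squeeze described here can be checked directly: compare the time to transmit a minimum frame against the worst-case round trip over 100 m of UTP, using the same 20.3 cm/ns estimate and ignoring repeater latency (which is what makes the 100-Mbps case so tight in practice).

```python
MIN_FRAME_BITS = 512                 # 64-octet minimum frame
CABLE_DELAY_NS = 100 * 100 / 20.3    # ~492.6 ns one way over 100 m of UTP

for rate_mbps in (10, 100, 1000):
    bit_time_ns = 1000 / rate_mbps
    tx_time_ns = MIN_FRAME_BITS * bit_time_ns   # time to send a minimum frame
    round_trip_ns = 2 * CABLE_DELAY_NS          # collision news must come back
    ok = tx_time_ns > round_trip_ns
    print(f"{rate_mbps:>4} Mbps: min frame takes {tx_time_ns:7.0f} ns, "
          f"round trip {round_trip_ns:4.0f} ns -> "
          f"{'collision detectable' if ok else 'needs carrier extension'}")
# 10 Mbps: 51200 ns vs 985 ns (ample margin); 100 Mbps: 5120 vs 985 (tight
# once repeater latency is added); 1000 Mbps: 512 vs 985 (impossible without
# carrier extension, and why 10-Gigabit Ethernet drops half duplex entirely)
```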

The next page defines interframe spacing and backoff.
