Summary of Module 6

Summary
This page summarizes the topics discussed in this module.


Ethernet is not one networking technology, but a family of LAN technologies that includes Legacy Ethernet, Fast Ethernet, and Gigabit Ethernet. When Ethernet needs to be expanded to add a new medium or capability, the IEEE issues a new supplement to the 802.3 standard. Each supplement is given a one or two letter designation, such as 802.3u. Ethernet relies on baseband signaling, which uses the entire bandwidth of the transmission medium.

Ethernet operates at two layers of the OSI model: the physical layer and the lower half of the data link layer, known as the MAC sublayer. At Layer 1, Ethernet involves interfacing with the media, the signals and bit streams that travel on the media, the components that put signals on the media, and various physical topologies. The bits at Layer 1 need structure, so OSI Layer 2 frames are used. The MAC sublayer of Layer 2 determines the type of frame appropriate for the physical media.

The one thing common to all forms of Ethernet is the frame structure. This is what allows the interoperability of the different types of Ethernet.

Some of the fields permitted or required in an 802.3 Ethernet frame are listed below, with a rough layout sketch after the list:

• Preamble
• Start Frame Delimiter
• Destination Address
• Source Address
• Length/Type
• Data and Pad
• Frame Check Sequence
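
As a rough illustration (not part of the curriculum text), the Python snippet below records the usual field widths in octets and works out the commonly quoted minimum and maximum frame sizes, which are measured from the Destination Address through the FCS:

```python
# A rough sketch of the 802.3 field widths, in octets.
FIELD_OCTETS = {
    "Preamble": 7,
    "Start Frame Delimiter": 1,
    "Destination Address": 6,
    "Source Address": 6,
    "Length/Type": 2,
    "Data and Pad": "46-1500 (data padded up to 46)",
    "Frame Check Sequence": 4,
}

# Frame-size limits are usually quoted from the Destination Address through
# the FCS, with the Preamble and SFD excluded from the count.
MIN_FRAME = 6 + 6 + 2 + 46 + 4      # 64 octets
MAX_FRAME = 6 + 6 + 2 + 1500 + 4    # 1518 octets
print(MIN_FRAME, MAX_FRAME)         # -> 64 1518
```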

In 10 Mbps and slower versions of Ethernet, the Preamble provides the timing information a receiving node needs in order to interpret the electrical signals it is receiving. The Start Frame Delimiter marks the end of the timing information. 10 Mbps and slower versions of Ethernet are asynchronous; that is, they use the preamble timing information to synchronize the receive circuit to the incoming data. 100 Mbps and higher speed implementations of Ethernet are synchronous, meaning the timing information is not required; however, the Preamble and SFD are retained for compatibility.

The address fields of the Ethernet frame contain Layer 2, or MAC, addresses.

All frames are susceptible to errors from a variety of sources. The Frame Check Sequence (FCS) field of an Ethernet frame contains a number that is calculated by the source node based on the data in the frame. At the destination the FCS is recalculated and compared with the transmitted value to verify that the frame arrived complete and error-free.
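
The FCS is a 32-bit cyclic redundancy check. As a minimal sketch of the recompute-and-compare idea, the snippet below uses Python's zlib.crc32; the exact bit ordering and complementing used on the wire are glossed over, so this is illustrative rather than a faithful 802.3 implementation:

```python
import zlib

def attach_fcs(frame_without_fcs: bytes) -> bytes:
    """Sender side: append a CRC-32 computed over the frame contents."""
    fcs = zlib.crc32(frame_without_fcs)
    return frame_without_fcs + fcs.to_bytes(4, "little")

def frame_is_intact(frame: bytes) -> bool:
    """Receiver side: recompute the CRC over the same bytes and compare."""
    body, received_fcs = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "little") == received_fcs

original = attach_fcs(b"example payload")
corrupted = bytes([original[0] ^ 0x01]) + original[1:]   # flip one bit in transit

print(frame_is_intact(original))    # True
print(frame_is_intact(corrupted))   # False: the recomputed CRC no longer matches
```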

Once the data is framed, the Media Access Control (MAC) sublayer is also responsible for determining which computer in a shared-medium environment, or collision domain, is allowed to transmit the data. There are two broad categories of Media Access Control: deterministic (taking turns) and non-deterministic (first come, first served).

Examples of deterministic protocols include Token Ring and FDDI. The carrier sense multiple access with collision detection (CSMA/CD) access method is a simple non-deterministic system. The NIC listens for the absence of a signal on the media and then starts transmitting. If two or more nodes transmit at the same time, a collision occurs. When a collision is detected, the nodes wait a random amount of time and retransmit.
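
The snippet below is a toy model, not real driver code, of that listen / transmit / collide / back off cycle. It uses the commonly cited truncated binary exponential backoff values (a 512-bit-time slot, a backoff exponent capped at 10, and an attempt limit of 16); the collision_chance parameter is just a stand-in for actual collision detection:

```python
import random

SLOT_TIME_BIT_TIMES = 512     # one slot time is 512 bit times at 10 and 100 Mbps
MAX_BACKOFF_EXPONENT = 10     # the random range stops doubling after 10 attempts
MAX_ATTEMPTS = 16             # after 16 failed attempts the frame is dropped

def backoff_slots(attempt: int) -> int:
    """Truncated binary exponential backoff: wait 0 .. 2^k - 1 slot times."""
    k = min(attempt, MAX_BACKOFF_EXPONENT)
    return random.randint(0, 2 ** k - 1)

def send_with_csma_cd(collision_chance: float = 0.3) -> bool:
    """Toy model of the listen / transmit / collide / back off cycle."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        # Carrier sense would happen here: wait for the medium to go quiet,
        # then transmit.
        collided = random.random() < collision_chance   # stand-in for detection
        if not collided:
            return True                                  # frame delivered
        wait_bit_times = backoff_slots(attempt) * SLOT_TIME_BIT_TIMES
        print(f"attempt {attempt}: collision, backing off {wait_bit_times} bit times")
    return False                                         # excessive collisions: give up

print(send_with_csma_cd())
```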

The minimum spacing between two non-colliding frames is called the interframe spacing. Interframe spacing is required to ensure that all stations have time to process the previous frame and prepare for the next one.
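
The interframe gap is commonly specified as 96 bit times, so its duration in real time shrinks as the bit rate goes up. A quick back-of-the-envelope calculation:

```python
# 96 bit times expressed in microseconds at different Ethernet speeds.
IFG_BIT_TIMES = 96

for name, bits_per_second in [("10 Mbps", 10e6), ("100 Mbps", 100e6), ("1 Gbps", 1e9)]:
    gap_microseconds = IFG_BIT_TIMES / bits_per_second * 1e6
    print(f"{name}: {gap_microseconds:.3f} microseconds")
# 10 Mbps:  9.600 microseconds
# 100 Mbps: 0.960 microseconds
# 1 Gbps:   0.096 microseconds
```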

Collisions can occur at various points during transmission. A collision in which a signal is detected on the receive and transmit circuits at the same time is referred to as a local collision. A collision that is detected only as a damaged frame shorter than the minimum length with an invalid FCS, without those local symptoms, is called a remote collision; it occurred on the far side of a repeater. A collision that occurs after the first sixty-four octets of data have been sent is considered a late collision. The NIC will not automatically retransmit after this type of collision.
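
A simplified way to express those distinctions in code (illustrative only; real NICs and protocol analyzers look at more evidence than this):

```python
def classify_collision(octets_sent_before_collision: int,
                       seen_on_local_tx_and_rx: bool) -> str:
    """Rough classification following the descriptions above."""
    if octets_sent_before_collision >= 64:
        # Past the first 64 octets: an error, and the NIC will not retransmit
        # on its own.
        return "late collision"
    if seen_on_local_tx_and_rx:
        return "local collision"    # signal on RX while this node was transmitting
    return "remote collision"       # only the damaged, undersized frame is seen

print(classify_collision(20, True))    # local collision
print(classify_collision(20, False))   # remote collision
print(classify_collision(200, True))   # late collision
```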

While local and remote collisions are considered a normal part of Ethernet operation, late collisions are considered an error. Ethernet errors also result from frames that are longer or shorter than the standards allow, and from excessively long or illegal transmissions called jabber. Runt is a slang term for a frame smaller than the legal minimum size.
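
Using the usual 64-octet minimum and 1518-octet maximum (measured from the Destination Address through the FCS), a simplified size check might look like this:

```python
MIN_FRAME_OCTETS = 64      # destination address through FCS
MAX_FRAME_OCTETS = 1518

def frame_size_error(frame_octets: int) -> str | None:
    """Flag frames outside the legal size range (a simplified check)."""
    if frame_octets < MIN_FRAME_OCTETS:
        return "runt (short frame)"
    if frame_octets > MAX_FRAME_OCTETS:
        return "long frame / possible jabber"
    return None                # within the legal range

print(frame_size_error(50))    # runt (short frame)
print(frame_size_error(9000))  # long frame / possible jabber
print(frame_size_error(512))   # None
```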

Auto-Negotiation detects the speed and duplex mode, half-duplex or full-duplex, of the device on the other end of the wire and adjusts to match those settings.
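
A sketch of the idea: the two link partners compare their advertised abilities and pick the best one they have in common. The priority order below is illustrative (fastest and full-duplex first, with several media types omitted); the authoritative priority table is defined in the 802.3 Auto-Negotiation clauses.

```python
# Illustrative priority order: higher entries are preferred.
PRIORITY = [
    ("1000BASE-T", "full"),
    ("1000BASE-T", "half"),
    ("100BASE-TX", "full"),
    ("100BASE-TX", "half"),
    ("10BASE-T", "full"),
    ("10BASE-T", "half"),
]

def resolve(local_abilities, remote_abilities):
    """Return the best (speed, duplex) pair advertised by both link partners."""
    common = set(local_abilities) & set(remote_abilities)
    for ability in PRIORITY:
        if ability in common:
            return ability
    return None   # no common ability at these settings

print(resolve({("100BASE-TX", "full"), ("10BASE-T", "half")},
              {("1000BASE-T", "full"), ("100BASE-TX", "full")}))
# -> ('100BASE-TX', 'full')
```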
