Friday, February 26, 2010

History and future of TCP/IP / Application layer

History and future of TCP/IP
9.1.1 This page discusses the history and the future of TCP/IP.


The U.S. Department of Defense (DoD) created the TCP/IP reference model because it wanted a network that could survive any conditions. To illustrate further, imagine a world crossed by multiple cable runs, wires, microwave links, optical fibers, and satellite links. Then imagine a need for data to be transmitted without regard for the condition of any particular node or network. The U.S. DoD required reliable data transmission to any destination on the network under any circumstances. The creation of the TCP/IP model helped to solve this difficult design problem. The TCP/IP model has since become the standard on which the Internet is based.

Think about the layers of the TCP/IP model in relation to the original intent of the Internet. This will help reduce confusion. The four layers of the TCP/IP model are the application layer, transport layer, Internet layer, and network access layer. Some of the layers in the TCP/IP model have the same names as layers in the OSI model. It is critical not to confuse the layer functions of the two models, because the layers include different functions in each model. The present version of TCP/IP was standardized in September of 1981.

The next page will discuss the application layer of TCP/IP.

Application layer
9.1.2 This page describes the functions of the TCP/IP application layer.


The application layer handles high-level protocols, representation, encoding, and dialog control. The TCP/IP protocol suite combines all application-related issues into one layer. It ensures that the data is properly packaged before it is passed on to the next layer. TCP/IP includes Internet and transport layer specifications such as IP and TCP as well as specifications for common applications. TCP/IP has protocols to support file transfer, e-mail, and remote login, in addition to the following:

• File Transfer Protocol (FTP) – FTP is a reliable, connection-oriented service that uses TCP to transfer files between systems that support FTP. It supports bi-directional binary file and ASCII file transfers.

• Trivial File Transfer Protocol (TFTP) – TFTP is a connectionless service that uses the User Datagram Protocol (UDP). TFTP is used on the router to transfer configuration files and Cisco IOS images, and to transfer files between systems that support TFTP. It is useful in some LANs because it operates faster than FTP in a stable environment.

• Network File System (NFS) – NFS is a distributed file system protocol suite developed by Sun Microsystems that allows file access to a remote storage device such as a hard disk across a network.
• Simple Mail Transfer Protocol (SMTP) – SMTP administers the transmission of e-mail over computer networks. It does not provide support for transmission of data other than plain text.
• Telnet – Telnet provides the capability to remotely access another computer. It enables a user to log into an Internet host and execute commands. A Telnet client is referred to as a local host. A Telnet server is referred to as a remote host.
• Simple Network Management Protocol (SNMP) – SNMP is a protocol that provides a way to monitor and control network devices. SNMP is also used to manage configurations, statistics, performance, and security.
• Domain Name System (DNS) – DNS is a system used on the Internet to translate domain names and publicly advertised network nodes into IP addresses.
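Application layer protocols like those above define how requests are encoded on the wire. As a rough illustration of the kind of encoding DNS uses, the sketch below builds a minimal DNS standard-query packet by hand, following the RFC 1035 wire format (the hostname and query ID here are arbitrary examples):

```python
import struct

def build_dns_query(hostname, query_id=0x1234):
    """Build a minimal DNS standard-query packet (RFC 1035 wire format)."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, other counts zero
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split("."))
    qname += b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN)
    question = qname + struct.pack("!HH", 1, 1)
    return header + question

packet = build_dns_query("example.com")
```

In practice a resolver would send this packet over UDP port 53; the point here is only that the application layer is responsible for this kind of encoding before the data is handed to the transport layer.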

The next page will discuss the transport layer.

Module 9: TCP/IP Protocol Suite and IP Addressing Overview

Overview
The Internet was developed to provide a communication network that could function in wartime. Although the Internet has evolved from the original plan, it is still based on the TCP/IP protocol suite. The design of TCP/IP is ideal for the decentralized and robust Internet. Many common protocols were designed based on the four-layer TCP/IP model.


It is useful to know both the TCP/IP and OSI network models. Each model uses its own structure to explain how a network works. However, there is much overlap between the two models. A system administrator should be familiar with both models to understand how a network functions.

Any device on the Internet that wants to communicate with other Internet devices must have a unique identifier. The identifier is known as the IP address because routers use a Layer 3 protocol, the Internet Protocol (IP), to find the best route to that device. The current version of IP is IPv4, which was designed before there was a large demand for addresses. Explosive growth of the Internet has threatened to deplete the supply of IP addresses. Subnets, Network Address Translation (NAT), and private addresses are used to extend the supply of IP addresses. IPv6 improves on IPv4 and provides a much larger address space. With IPv6, administrators can reduce or eliminate the need for the workarounds used with IPv4.
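The subnetting and private-addressing techniques mentioned above can be explored with Python's standard `ipaddress` module. The sketch below splits an RFC 1918 private network into subnets and checks whether an address is private (the specific addresses are arbitrary examples):

```python
import ipaddress

# Split a /24 network into four /26 subnets
net = ipaddress.ip_network("192.168.1.0/24")
subnets = list(net.subnets(prefixlen_diff=2))

# RFC 1918 private addresses are flagged by the library
addr = ipaddress.ip_address("192.168.1.77")
private = addr.is_private
```

Each `/26` subnet holds 62 usable host addresses, which is the kind of trade-off an administrator weighs when extending a limited address supply.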

In addition to the physical MAC address, each computer needs a unique IP address to be part of the Internet. This is also called the logical address. There are several ways to assign an IP address to a device. Some devices always have a static address. Others have a temporary address assigned to them each time they connect to the network. When a dynamically assigned IP address is needed, a device can obtain it several ways.

For efficient routing to occur between devices, issues such as duplicate IP addresses must be resolved.

This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.

Students who complete this module should be able to perform the following tasks:

• Explain why the Internet was developed and how TCP/IP fits the design of the Internet
• List the four layers of the TCP/IP model
• Describe the functions of each layer of the TCP/IP model
• Compare the OSI model and the TCP/IP model
• Describe the function and structure of IP addresses
• Understand why subnetting is necessary
• Explain the difference between public and private addressing
• Understand the function of reserved IP addresses
• Explain the use of static and dynamic addressing for a device
• Understand how dynamic addresses can be assigned with RARP, BootP, and DHCP
• Use ARP to obtain the MAC address to send a packet to another device
• Understand the issues related to addressing between networks

Summary of Module 8

Summary
This page summarizes the topics discussed in this module.


Ethernet is a shared-media, baseband technology, which means only one node can transmit data at a time. Increasing the number of nodes on a single segment increases demand on the available bandwidth. This in turn increases the probability of collisions. A solution to the problem is to break a large network segment into parts and separate it into isolated collision domains. Bridges and switches are used to segment the network into multiple collision domains.

A bridge builds a bridge table from the source addresses of packets it processes. An address is associated with the port the frame came in on. Eventually the bridge table contains enough address information to allow the bridge to forward a frame out a particular port based on the destination address. This is how the bridge controls traffic between two collision domains.

Switches learn in much the same way as bridges but provide a virtual connection directly between the source and destination nodes, rather than the source collision domain and destination collision domain. Each port creates its own collision domain. A switch dynamically builds and maintains a Content-Addressable Memory (CAM) table, holding all of the necessary MAC information for each port. CAM is memory that essentially works backwards compared to conventional memory. Entering data into the memory will return the associated address.

Two devices connected through switch ports become the only two nodes in a small collision domain. These small physical segments are called microsegments. Microsegments connected using twisted pair cabling are capable of full-duplex communications. In full duplex mode, when separate wires are used for transmitting and receiving between two hosts, there is no contention for the media. Thus, a collision domain no longer exists.

There is a propagation delay for the signals traveling along transmission medium. Additionally, as signals are processed by network devices further delay, or latency, is introduced.

How a frame is switched affects latency and reliability. A switch can start to transfer the frame as soon as the destination MAC address is received. Switching at this point is called cut-through switching and results in the lowest latency through the switch. However, cut-through switching provides no error checking. At the other extreme, the switch can receive the entire frame before sending it out the destination port. This is called store-and-forward switching. Fragment-free switching reads and checks the first sixty-four bytes of the frame before forwarding it to the destination port.

Switched networks are often designed with redundant paths to provide for reliability and fault tolerance. Switches use the Spanning-Tree Protocol (STP) to identify and shut down redundant paths through the network. The result is a logical hierarchical path through the network with no loops.

Using Layer 2 devices to break up a LAN into multiple collision domains increases available bandwidth for every host. But Layer 2 devices forward broadcasts, such as ARP requests. A Layer 3 device is required to control broadcasts and define broadcast domains.

Data flow through a routed IP network involves data moving across traffic management devices at Layers 1, 2, and 3 of the OSI model. Layer 1 is used for transmission across the physical media, Layer 2 for collision domain management, and Layer 3 for broadcast domain management.

What is a network segment?

What is a network segment?
8.2.7 This page explains what a network segment is.


As with many terms and acronyms, segment has multiple meanings. The dictionary definition of the term is as follows:

• A separate piece of something
• One of the parts into which an entity or quantity is divided or marked off by or as if by natural boundaries

In the context of data communication, the following definitions are used:

• Section of a network that is bounded by bridges, routers, or switches.
• In a LAN using a bus topology, a segment is a continuous electrical circuit that is often connected to other such segments with repeaters.
• Term used in the TCP specification to describe a single transport layer unit of information. The terms datagram, frame, message, and packet are also used to describe logical information groupings at various layers of the OSI reference model and in various technology circles.

To properly define the term segment, the context of the usage must be presented with the word. If segment is used in the context of TCP, it would be defined as a separate piece of the data. If segment is being used in the context of physical networking media in a routed network, it would be seen as one of the parts or sections of the total network.

This page concludes this lesson. The next page will summarize the main points from the module.

Broadcast domains / Introduction to data flow

Broadcast domains
8.2.5 This page will explain the features of a broadcast domain.


A broadcast domain is a group of collision domains that are connected by Layer 2 devices. When a LAN is broken up into multiple collision domains, each host in the network has more opportunities to gain access to the media. This reduces the chance of collisions and increases available bandwidth for every host. Broadcasts are forwarded by Layer 2 devices. Excessive broadcasts can reduce the efficiency of the entire LAN. Broadcasts have to be controlled at Layer 3 since Layer 1 and Layer 2 devices cannot control them. A broadcast domain includes all of the collision domains that process the same broadcast frame. This includes all the nodes that are part of the network segment bounded by a Layer 3 device. Broadcast domains are controlled at Layer 3 because routers do not forward broadcasts. Routers actually work at Layers 1, 2, and 3. Like all Layer 1 devices, routers have a physical connection and transmit data onto the media. Routers also have a Layer 2 encapsulation on all interfaces and perform the same functions as other Layer 2 devices. Layer 3 allows routers to segment broadcast domains.

In order for a packet to be forwarded through a router it must have already been processed by a Layer 2 device and the frame information stripped off. Layer 3 forwarding is based on the destination IP address and not the MAC address. For a packet to be forwarded it must contain an IP address that is outside of the range of addresses assigned to the LAN and the router must have a destination to send the specific packet to in its routing table.
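That two-part forwarding condition can be sketched as a small function using Python's `ipaddress` module (the addresses and routing-table entries below are made-up examples):

```python
import ipaddress

def router_forwards(dst_ip, local_net, routing_table):
    """Return True if a router would forward a packet to dst_ip:
    the destination must lie outside the local LAN, and some entry
    in the routing table must cover it."""
    dst = ipaddress.ip_address(dst_ip)
    if dst in ipaddress.ip_network(local_net):
        return False  # destination is on the local segment; no routing needed
    # Forward only if a route covers the destination
    return any(dst in ipaddress.ip_network(route) for route in routing_table)
```

A destination with no matching route is dropped rather than forwarded, which mirrors the requirement that the router "must have a destination to send the specific packet to in its routing table."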

Introduction to data flow
8.2.6 This page discusses data flow.


Data flow in the context of collision and broadcast domains focuses on how data frames propagate through a network. It refers to the movement of data through Layers 1, 2 and 3 devices and how data must be encapsulated to effectively make that journey. Remember that data is encapsulated at the network layer with an IP source and destination address, and at the data-link layer with a MAC source and destination address.

A good rule to follow is that a Layer 1 device always forwards the frame, while a Layer 2 device wants to forward the frame. In other words, a Layer 2 device will forward the frame unless something prevents it from doing so. A Layer 3 device will not forward the frame unless it has to. Using this rule will help identify how data flows through a network.

Layer 1 devices do no filtering, so everything that is received is passed on to the next segment. The frame is simply regenerated and retimed and thus returned to its original transmission quality. Any segments connected by Layer 1 devices are part of the same domain, both collision and broadcast.

Layer 2 devices filter data frames based on the destination MAC address. A frame is forwarded if it is going to an unknown destination outside the collision domain. The frame will also be forwarded if it is a broadcast, multicast, or a unicast going outside of the local collision domain. The only time that a frame is not forwarded is when the Layer 2 device finds that the sending host and the receiving host are in the same collision domain. A Layer 2 device, such as a bridge, creates multiple collision domains but maintains only one broadcast domain.
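The filtering rule in that paragraph can be sketched as a small function. Here the MAC-address table is simply a Python dict mapping addresses to port numbers, which is a made-up simplification of what a bridge maintains:

```python
def bridge_forwards(dst_mac, ingress_port, mac_table):
    """Layer 2 filtering decision, simplified: forward the frame unless
    the destination is known to sit on the same port it arrived on."""
    if dst_mac == "ff:ff:ff:ff:ff:ff":
        return True                     # broadcasts are always forwarded
    known_port = mac_table.get(dst_mac)
    if known_port is None:
        return True                     # unknown destination: flood
    return known_port != ingress_port   # filter within one collision domain
```

The only `False` case corresponds to the text's one exception: sender and receiver are known to share a collision domain.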

Layer 3 devices filter data packets based on IP destination address. The only way that a packet will be forwarded is if its destination IP address is outside of the broadcast domain and the router has an identified location to send the packet. A Layer 3 device creates multiple collision and broadcast domains.

Data flow through a routed, IP-based network involves data moving across traffic management devices at Layers 1, 2, and 3 of the OSI model. Layer 1 is used for transmission across the physical media, Layer 2 for collision domain management, and Layer 3 for broadcast domain management.

The next page defines a network segment.

Layer 2 broadcasts

Layer 2 broadcasts
8.2.4 This page will explain how Layer 2 broadcasts are used.


To communicate with all collision domains, protocols use broadcast and multicast frames at Layer 2 of the OSI model. When a node needs to communicate with all hosts on the network, it sends a broadcast frame with a destination MAC address of 0xFFFFFFFFFFFF. This is an address to which the NIC of every host must respond.
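A NIC's accept-or-ignore decision can be sketched as follows. This is a simplified model: real NICs also filter in hardware for the specific multicast groups the host has joined, rather than accepting any frame with the group bit set.

```python
def nic_accepts(frame_dst, own_mac):
    """Should a NIC pass this frame up the stack? (simplified sketch)"""
    dst = frame_dst.lower()
    if dst == "ff:ff:ff:ff:ff:ff":
        return True                     # broadcast: every NIC must respond
    if dst == own_mac.lower():
        return True                     # unicast addressed to this host
    first_octet = int(dst.split(":")[0], 16)
    return bool(first_octet & 0x01)     # group (multicast) bit in first octet
```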

Layer 2 devices must flood all broadcast and multicast traffic. The accumulation of broadcast and multicast traffic from each device in the network is referred to as broadcast radiation. In some cases, the circulation of broadcast radiation can saturate the network so that there is no bandwidth left for application data. In this case, new network connections cannot be made and established connections may be dropped. This situation is called a broadcast storm. The probability of broadcast storms increases as the switched network grows.

A NIC must rely on the CPU to process each broadcast or multicast group it belongs to. Therefore, broadcast radiation affects the performance of hosts in the network. The figure shows the results of tests that Cisco conducted on the effect of broadcast radiation on the CPU performance of a Sun SPARCstation 2 with a standard built-in Ethernet card. The results indicate that an IP workstation can be effectively shut down by broadcasts that flood the network. Although such conditions are extreme, broadcast peaks of thousands of broadcasts per second have been observed during broadcast storms. Tests in a controlled environment with a range of broadcasts and multicasts on the network show measurable system degradation with as few as 100 broadcasts or multicasts per second.

A host does not usually benefit if it processes a broadcast when it is not the intended destination. The host is not interested in the service that is advertised. High levels of broadcast radiation can noticeably degrade host performance. The three sources of broadcasts and multicasts in IP networks are workstations, routers, and multicast applications.

Workstations broadcast an Address Resolution Protocol (ARP) request every time they need to locate a MAC address that is not in the ARP table. Although the numbers in the figure might appear low, they represent an average, well-designed IP network. When broadcast and multicast traffic peak due to storm behavior, peak CPU loss can be much higher than average. Broadcast storms can be caused by a device that requests information from a network that has grown too large. So many responses are sent to the original request that the device cannot process them, or the first request triggers similar requests from other devices that effectively block normal traffic flow on the network.

As an example, the command telnet mumble.com translates into an IP address through a Domain Name System (DNS) search. An ARP request is broadcast to locate the MAC address. Generally, IP workstations cache 10 to 100 addresses in their ARP tables for about 2 hours. The ARP rate for a typical workstation might be about 50 addresses every 2 hours or 0.007 ARPs per second. Therefore, 2000 IP end stations will produce about 14 ARPs per second.
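The arithmetic behind those figures checks out, as the following sketch confirms:

```python
stations = 2000
arps_per_station_per_s = 50 / (2 * 3600)   # 50 ARP requests every 2 hours
total_arps_per_s = stations * arps_per_station_per_s
# ~0.007 ARPs/s per station, ~14 ARPs/s for 2000 end stations
```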

The routing protocols that are configured on a network can increase broadcast traffic significantly. Some administrators configure all workstations to run Routing Information Protocol (RIP) as a redundancy and reachability policy. Every 30 seconds, RIPv1 uses broadcasts to retransmit the entire RIP routing table to other RIP routers. If 2000 workstations were configured to run RIP and, on average, 50 packets were required to transmit the routing table, the workstations would generate 3333 broadcasts per second. Most network administrators only configure RIP on five to ten routers. For a routing table that has a size of 50 packets, 10 RIP routers would generate about 16 broadcasts per second.
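Those RIP figures come from the same kind of back-of-the-envelope arithmetic:

```python
packets_per_table = 50   # packets needed to carry one RIP routing table
interval_s = 30          # RIPv1 broadcasts its table every 30 seconds

workstation_rate = 2000 * packets_per_table / interval_s  # ~3333 broadcasts/s
router_rate = 10 * packets_per_table / interval_s         # ~16.7 broadcasts/s
```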

IP multicast applications can adversely affect the performance of large, scaled, switched networks. Multicasting is an efficient way to send a stream of multimedia data to many users on a shared-media hub. However, it affects every user on a flat switched network. A packet video application could generate a 7-MB stream of multicast data that would be sent to every segment. This would result in severe congestion.

The next page will describe broadcast domains.

Collision domains / Segmentation

Collision domains
8.2.2 This page will define collision domains.


Collision domains are the connected physical network segments where collisions can occur. Collisions cause the network to be inefficient. Every time a collision happens on a network, all transmission stops for a period of time. The length of this period varies and is determined by a backoff algorithm for each network device.
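For Ethernet, the backoff algorithm mentioned above is a truncated binary exponential backoff. A simplified sketch, assuming the 51.2 µs slot time of 10 Mbps Ethernet:

```python
import random

def backoff_time_us(attempt, slot_time_us=51.2):
    """Truncated binary exponential backoff, simplified: after the n-th
    collision, wait a random number of slot times in [0, 2^k - 1],
    where the exponent k = min(n, 10)."""
    k = min(attempt, 10)
    slots = random.randrange(2 ** k)
    return slots * slot_time_us
```

Because the waiting window doubles with each successive collision, repeated collisions spread the contending stations' retransmissions further apart in time.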

The types of devices that interconnect the media segments define collision domains. These devices have been classified as OSI Layer 1, 2 or 3 devices. Layer 2 and Layer 3 devices break up collision domains. This process is also known as segmentation.

Layer 1 devices such as repeaters and hubs are mainly used to extend the Ethernet cable segments. This allows more hosts to be added. However, every host that is added increases the amount of potential traffic on the network. Layer 1 devices forward all data that is sent on the media. As more traffic is transmitted within a collision domain, collisions become more likely. This results in diminished network performance, which will be even more pronounced if all the computers use large amounts of bandwidth. Layer 1 devices can cause the length of a LAN to be overextended and result in collisions.

The four repeater rule in Ethernet states that no more than four repeaters or repeating hubs can be between any two computers on the network. For a repeated 10BASE-T network to function properly, the round-trip delay calculation must be within certain limits. This ensures that all the workstations will be able to hear all the collisions on the network. Repeater latency, propagation delay, and NIC latency all contribute to the four repeater rule. If the four repeater rule is violated, the maximum delay limit may be exceeded. A late collision is one that happens after the first 64 bytes of the frame have been transmitted. The chipsets in NICs are not required to retransmit automatically when a late collision occurs. These late collision frames add delay that is referred to as consumption delay. As consumption delay and latency increase, network performance decreases.

The 5-4-3-2-1 rule requires that the following guidelines should not be exceeded:

• Five segments of network media
• Four repeaters or hubs
• Three host segments of the network
• Two link sections with no hosts
• One large collision domain

The 5-4-3-2-1 rule also provides guidelines to keep round-trip delay time within acceptable limits.

The next page will discuss segmentation.

Segmentation
8.2.3 This page will explain how Layer 2 and 3 devices are used to segment a network.


The history of how Ethernet handles collisions and collision domains dates back to research at the University of Hawaii in 1970. In its attempts to develop a wireless communication system for the islands of Hawaii, university researchers developed a protocol called Aloha. The Ethernet protocol is actually based on the Aloha protocol.

One important skill for a networking professional is the ability to recognize collision domains. A collision domain is created when several computers are connected to a single shared-access medium that is not attached to other network devices. This situation limits the number of computers that can use the segment. Layer 1 devices extend but do not control collision domains.

Layer 2 devices segment or divide collision domains. They use the MAC address assigned to every Ethernet device to control frame propagation. Layer 2 devices are bridges and switches. They keep track of the MAC addresses and their segments. This allows these devices to control the flow of traffic at the Layer 2 level. This function makes networks more efficient. It allows data to be transmitted on different segments of the LAN at the same time without collisions. Bridges and switches divide collision domains into smaller parts. Each part becomes its own collision domain.

These smaller collision domains will have fewer hosts and less traffic than the original domain. The fewer hosts that exist in a collision domain, the more likely the media will be available. If the traffic between bridged segments is not too heavy a bridged network works well. Otherwise, the Layer 2 device can slow down communication and become a bottleneck.

Layer 2 and Layer 3 devices do not forward collisions. Layer 3 devices divide collision domains into smaller domains. Layer 3 devices also perform other functions, which will be covered in the section on broadcast domains.

The next page will discuss broadcasts.

Spanning-Tree Protocol / Shared media environments

Spanning-Tree Protocol
8.1.6 This page will introduce STP.


When multiple switches are arranged in a simple hierarchical tree, switching loops are unlikely to occur. However, switched networks are often designed with redundant paths to provide for reliability and fault tolerance. Redundant paths are desirable, but they can have an undesirable side effect: switching loops. Switching loops can occur by design or by accident, and they can lead to broadcast storms that will rapidly overwhelm a network. STP is a standards-based Layer 2 protocol that is used to avoid switching loops. Each switch in a LAN that uses STP sends messages called Bridge Protocol Data Units (BPDUs) out all its ports to let other switches know of its existence. This information is used to elect a root bridge for the network. The switches use the spanning-tree algorithm (STA) to identify and shut down the redundant paths.

Each port on a switch that uses STP exists in one of the following five states:

• Blocking
• Listening
• Learning
• Forwarding
• Disabled

A port moves through these five states as follows:

• From initialization to blocking
• From blocking to listening or to disabled
• From listening to learning or to disabled
• From learning to forwarding or to disabled
• From forwarding to disabled

STP is used to create a logical hierarchical tree with no loops. However, the alternate paths are still available if necessary.
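The legal state transitions listed above can be written as a small lookup table; the helper below is a made-up convenience for checking a proposed move:

```python
# Legal STP port-state transitions, as listed above
STP_TRANSITIONS = {
    "initialization": {"blocking"},
    "blocking": {"listening", "disabled"},
    "listening": {"learning", "disabled"},
    "learning": {"forwarding", "disabled"},
    "forwarding": {"disabled"},
}

def can_transition(current, target):
    """True if STP allows a port to move from current to target state."""
    return target in STP_TRANSITIONS.get(current, set())
```

Note that a port can never jump straight from blocking to forwarding; it must pass through listening and learning first, which gives the network time to converge before traffic flows.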

This page concludes this lesson. The next lesson will discuss collision and broadcast domains. The first page covers shared media environments.

Shared media environments
8.2.1 This page explains Layer 1 media and topologies to help students understand collisions and collision domains.


Here are some examples of shared media and directly connected networks:

• Shared media environment – This occurs when multiple hosts have access to the same medium. For example, if several PCs are attached to the same physical wire or optical fiber, they all share the same media environment.

• Extended shared media environment – This is a special type of shared media environment in which networking devices can extend the environment so that it can accommodate multiple access or longer cable distances.

• Point-to-point network environment – This is widely used in dialup network connections and is most common for home users. It is a shared network environment in which one device is connected to only one other device. An example is a PC that is connected to an Internet service provider through a modem and a phone line.

Collisions only occur in a shared environment. A highway system is an example of a shared environment in which collisions can occur because multiple vehicles use the same roads. As more vehicles enter the system, collisions become more likely. A shared data network is much like a highway. Rules exist to determine who has access to the network medium. However, sometimes the rules cannot handle the traffic load and collisions occur.

The next page will focus on collision domains.

Switch operation / Latency / Switch modes

Switch operation
8.1.3 This page describes the operation of a switch.


A switch is simply a bridge with many ports. When only one node is connected to a switch port, the collision domain on the shared media contains only two nodes. The two nodes in this small segment, or collision domain, consist of the switch port and the host connected to it. These small physical segments are called microsegments. Another capability emerges when only two nodes are connected. In a network that uses twisted-pair cabling, one pair is used to carry the transmitted signal from one node to the other node. A separate pair is used for the return or received signal. It is possible for signals to pass through both pairs simultaneously. The ability to communicate in both directions at once is known as full duplex. Most switches are capable of supporting full duplex, as are most NICs. In full duplex mode, there is no contention for the media. A collision domain no longer exists. In theory, the bandwidth is doubled when full duplex is used.

In addition to faster microprocessors and memory, two other technological advances made switches possible. CAM is memory that works backward compared to conventional memory. When data is entered into the memory, it will return the associated address. CAM allows a switch to find the port that is associated with a MAC address without search algorithms. An application-specific integrated circuit (ASIC) is an integrated circuit with functionality customized for a particular use rather than for general-purpose use. An ASIC allows some software operations to be done in hardware. These technologies greatly reduced the delays caused by software processes and enabled a switch to keep up with the data demands of many microsegments and high bit rates.
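Functionally, a CAM table maps a MAC address directly to a port. A Python dictionary gives the same address-in, port-out behavior; this is a model of the function only, not of how the hardware is built:

```python
class CamTable:
    """MAC-address-to-port table, modeled with a dictionary."""

    def __init__(self):
        self._entries = {}

    def learn(self, mac, port):
        self._entries[mac] = port      # associate source MAC with ingress port

    def lookup(self, mac):
        # None means the destination is unknown and the frame must be flooded
        return self._entries.get(mac)
```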

The next page will define latency.

Latency
8.1.4 This page will discuss some situations that cause latency.


Latency is the delay between the time a frame begins to leave the source device and the time the first part of the frame reaches its destination. A variety of conditions can cause delays:

• Media delays may be caused by the finite speed that signals can travel through the physical media.
• Circuit delays may be caused by the electronics that process the signal along the path.
• Software delays may be caused by the decisions that software must make to implement switching and protocols.
• Delays may be caused by the content of the frame and the location of the frame switching decisions. For example, a device cannot route a frame to a destination until the destination MAC address has been read.
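The first of those sources, media delay, is easy to estimate: signals in copper or fiber travel at roughly two-thirds the speed of light. A sketch, where the velocity factor of 0.66 is an assumed typical value for twisted-pair cable:

```python
def propagation_delay_us(distance_m, velocity_factor=0.66):
    """Time for a signal to cross a cable run, in microseconds."""
    c = 3.0e8                          # speed of light in a vacuum, m/s
    return distance_m / (c * velocity_factor) * 1e6
```

By this estimate, a 100 m segment contributes about half a microsecond of delay in each direction, before any circuit or software delays are added.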

The next page will discuss switch modes.

Switch modes
8.1.5 This page will introduce the three switch modes.


How a frame is switched to the destination port is a trade-off between latency and reliability. A switch can start to transfer the frame as soon as the destination MAC address is received. This is called cut-through packet switching and results in the lowest latency through the switch. However, no error checking is available. The switch can also receive the entire frame before it is sent to the destination port. This gives the switch software an opportunity to verify the Frame Check Sequence (FCS). If the frame is invalid, it is discarded at the switch. Since the entire frame is stored before it is forwarded, this is called store-and-forward packet switching. A compromise between cut-through and store-and-forward packet switching is the fragment-free mode. Fragment-free packet switching reads the first 64 bytes, which includes the frame header, and starts to send out the packet before the entire data field and checksum are read. This mode verifies the reliability of the addresses and LLC protocol information to ensure the data will be handled properly and arrive at the correct destination.
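The three modes differ mainly in how much of the frame the switch reads before forwarding begins. A made-up helper capturing that trade-off (the 6-byte figure assumes the destination MAC is the first field of the Ethernet header):

```python
def bytes_before_forwarding(mode, frame_len):
    """How many bytes a switch reads before it starts forwarding (sketch)."""
    if mode == "cut-through":
        return 6             # destination MAC only: lowest latency, no checks
    if mode == "fragment-free":
        return 64            # enough to rule out collision fragments
    if mode == "store-and-forward":
        return frame_len     # whole frame buffered; the FCS can be verified
    raise ValueError("unknown switch mode: " + mode)
```

Reading more bytes raises latency but catches more damaged frames, which is exactly the trade-off the paragraph describes.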

When cut-through packet switching is used, the source and destination ports must have the same bit rate to keep the frame intact. This is called symmetric switching. If the bit rates are not the same, the frame must be stored at one bit rate before it is sent out at the other bit rate. This is known as asymmetric switching. Store-and-forward mode must be used for asymmetric switching.

Asymmetric switching provides switched connections between ports with different bandwidths. Asymmetric switching is optimized for client/server traffic flows in which multiple clients communicate with a server at once. More bandwidth must be dedicated to the server port to prevent a bottleneck.

The Interactive Media Activity will help students become familiar with the three types of switch modes.

The next page will discuss the Spanning-Tree Protocol (STP).

Layer 2 bridging / Layer 2 switching

Layer 2 bridging 
8.1.1 This page will discuss the operation of Layer 2 bridges.
As more nodes are added to an Ethernet segment, use of the media increases. Ethernet is a shared media, which means only one node can transmit data at a time. The addition of more nodes increases the demands on the available bandwidth and places additional loads on the media. This also increases the probability of collisions, which results in more retransmissions. A solution to the problem is to break the large segment into parts and separate it into isolated collision domains.

To accomplish this a bridge keeps a table of MAC addresses and the associated ports. The bridge then forwards or discards frames based on the table entries. The following steps illustrate the operation of a bridge:

• The bridge has just been started, so its bridge table is empty. The bridge waits for traffic on the segment. When traffic is detected, it is processed by the bridge.
• Host A pings Host B. Since the data is transmitted on the entire collision domain segment, both the bridge and Host B process the frame.
• The bridge adds the source address of the frame to its bridge table. Since the address was in the source address field and the frame was received on Port 1, the address is associated with Port 1 in the table.
• The destination address of the frame is checked against the bridge table. Since the address is not in the table, even though it is on the same collision domain, the frame is forwarded to the other segment. The address of Host B has not been recorded yet.
• Host B processes the ping request and transmits a ping reply back to Host A. The data is transmitted over the whole collision domain. Both Host A and the bridge receive the frame and process it.
• The bridge adds the source address of the frame to its bridge table. Since the source address was not yet in the bridge table and the frame was received on Port 1, the source address of the frame is associated with Port 1 in the table.
• The destination address of the frame is checked against the bridge table to see if its entry is there. Since the address is in the table, the port assignment is checked. The address of Host A is associated with the port the frame was received on, so the frame is not forwarded.
• Host A pings Host C. Since the data is transmitted on the entire collision domain segment, both the bridge and Host B process the frame. Host B discards the frame since it was not the intended destination.
• The bridge adds the source address of the frame to its bridge table. Since the address is already entered into the bridge table the entry is just renewed.
• The destination address of the frame is checked against the bridge table. Since the address is not in the table, the frame is forwarded to the other segment. The address of Host C has not been recorded yet.
• Host C processes the ping request and transmits a ping reply back to Host A. The data is transmitted over the whole collision domain. Both Host D and the bridge receive the frame and process it. Host D discards the frame since it is not the intended destination.
• The bridge adds the source address of the frame to its bridge table. Since the address was in the source address field and the frame was received on Port 2, the address is associated with Port 2 in the table.
• The destination address of the frame is checked against the bridge table to see if its entry is present. The address is in the table but it is associated with Port 1, so the frame is forwarded to the other segment.
• When Host D transmits data, its MAC address will also be recorded in the bridge table. This is how the bridge controls traffic between two collision domains.

These are the steps that a bridge uses to forward and discard frames that are received on any of its ports.
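The learning, forwarding, and filtering steps above can be sketched in a few lines of Python. The port numbers and MAC address strings are hypothetical, and a real bridge would also age out stale table entries:

```python
# Minimal sketch of transparent bridge behavior: learn source addresses,
# then forward, flood, or filter based on the bridge table.

class Bridge:
    def __init__(self):
        self.table = {}  # MAC address -> port it was last seen on

    def receive(self, src, dst, in_port):
        self.table[src] = in_port        # learn/refresh the source address
        out_port = self.table.get(dst)
        if out_port is None:
            return "flood"               # unknown destination: send to the other segment
        if out_port == in_port:
            return "filter"              # destination on the same segment: discard
        return "forward to port %d" % out_port

bridge = Bridge()
print(bridge.receive("AA", "BB", in_port=1))  # table empty, so the frame is flooded
print(bridge.receive("BB", "AA", in_port=1))  # AA known on port 1, same port: filtered
print(bridge.receive("CC", "AA", in_port=2))  # AA known on port 1, frame from port 2: forwarded
```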

The next page will describe Layer 2 switching.

Layer 2 switching
8.1.2 This page will discuss Layer 2 switches.


Generally, a bridge has only two ports and divides a collision domain into two parts. All decisions made by a bridge are based on MAC or Layer 2 addresses and do not affect the logical or Layer 3 addresses. A bridge will divide a collision domain but has no effect on a logical or broadcast domain. If a network does not have a device that works with Layer 3 addresses, such as a router, the entire network will share the same logical broadcast address space. A bridge will create more collision domains but will not add broadcast domains.

A switch is essentially a fast, multi-port bridge that can contain dozens of ports. Each port creates its own collision domain. In a network of 20 nodes, 20 collision domains exist if each node is plugged into its own switch port. If an uplink port is included, one switch creates 21 single-node collision domains. A switch dynamically builds and maintains a content-addressable memory (CAM) table, which holds all of the necessary MAC information for each port.

The next page will explain how a switch operates.

Module 8: Ethernet Switching Overview

Ethernet Switching Overview
Shared Ethernet works extremely well under ideal conditions. If the number of devices that try to access the network is low, the number of collisions stays well within acceptable limits. However, when the number of users on the network increases, the number of collisions can significantly reduce performance. Bridges were developed to help correct performance problems that arose from increased collisions. Switches evolved from bridges to become the main technology in modern Ethernet LANs.


Collisions and broadcasts are expected events in modern networks. They are engineered into the design of Ethernet and higher layer technologies. However, when collisions and broadcasts occur in numbers that are above the optimum, network performance suffers. Collision domains and broadcast domains should be designed to limit the negative effects of collisions and broadcasts. This module explores the effects of collisions and broadcasts on network traffic and then describes how bridges and routers are used to segment networks for improved performance.

This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.

Students who complete this module should be able to perform the following tasks:

• Define bridging and switching
• Define and describe the content-addressable memory (CAM) table
• Define latency
• Describe store-and-forward and cut-through packet switching modes
• Explain Spanning-Tree Protocol (STP)
• Define collisions, broadcasts, collision domains, and broadcast domains
• Identify the Layers 1, 2, and 3 devices used to create collision domains and broadcast domains
• Discuss data flow and problems with broadcasts
• Explain network segmentation and list the devices used to create segments

Summary of Module 7

Summary
This page summarizes the topics discussed in this module.


Ethernet is a technology that has increased in speed one thousand times, from 10 Mbps to 10,000 Mbps, in less than two decades. All forms of Ethernet share a similar frame structure, which leads to excellent interoperability. Most Ethernet copper connections are now switched full duplex, and the fastest copper-based Ethernet is 1000BASE-T, or Gigabit Ethernet. 10-Gigabit Ethernet and faster are exclusively optical fiber-based technologies.

10BASE5, 10BASE2, and 10BASE-T Ethernet are considered Legacy Ethernet. The four common features of Legacy Ethernet are timing parameters, frame format, transmission process, and a basic design rule.

Legacy Ethernet encodes data on an electrical signal. The form of encoding used in 10 Mbps systems is called Manchester encoding. Manchester encoding uses a change in voltage to represent the binary numbers zero and one. An increase or decrease in voltage during a timed period, called the bit period, determines the binary value of the bit.

In addition to a standard bit period, Ethernet standards set limits for slot time and interframe spacing. Different types of media can affect transmission timing and timing standards ensure interoperability. 10 Mbps Ethernet operates within the timing limits offered by a series of no more than five segments separated by no more than four repeaters.

A single thick coaxial cable was the first medium used for Ethernet. 10BASE2, using a thinner coaxial cable, was introduced in 1985. 10BASE-T, using twisted-pair copper wire, was introduced in 1990. Because it used multiple wire pairs, 10BASE-T offered the option of full-duplex signaling. 10BASE-T carries 10 Mbps of traffic in half-duplex mode and 20 Mbps in full-duplex mode.

10BASE-T links can have unrepeated distances of up to 100 m. Beyond that, network devices such as repeaters, hubs, bridges, and switches are used to extend the scope of the LAN. With the advent of switches, the 4-repeater rule is less relevant. A LAN can be extended indefinitely by daisy-chaining switches. Each switch-to-switch connection, with a maximum length of 100 m, is essentially a point-to-point connection without the media contention or timing issues of repeaters and hubs.

100-Mbps Ethernet, also known as Fast Ethernet, can be implemented using twisted-pair copper wire, as in 100BASE-TX, or fiber media, as in 100BASE-FX. 100 Mbps forms of Ethernet can transmit 200 Mbps in full duplex.

Because the higher frequency signals used in Fast Ethernet are more susceptible to noise, two separate encoding steps are used by 100-Mbps Ethernet to enhance signal integrity.

Gigabit Ethernet over copper wire is accomplished by the following:

• Category 5e UTP cable and careful improvements in electronics are used to boost the 100 Mbps per wire pair to 125 Mbps per wire pair.
• All four wire pairs are used instead of just two. At 125 Mbps per wire pair, this provides 500 Mbps across the four pairs.
• Sophisticated electronics allow permanent collisions on each wire pair and run the signals in full duplex, doubling the 500 Mbps to 1000 Mbps.

On Gigabit Ethernet networks bit signals occur in one tenth of the time of 100 Mbps networks and 1/100 of the time of 10 Mbps networks. With signals occurring in less time the bits become more susceptible to noise. The issue becomes how fast the network adapter or interface can change voltage levels to signal bits and still be detected reliably one hundred meters away at the receiving NIC or interface. At this speed encoding and decoding data becomes even more complex.

The fiber versions of Gigabit Ethernet, 1000BASE-SX and 1000BASE-LX offer the following advantages: noise immunity, small size, and increased unrepeated distances and bandwidth. The IEEE 802.3 standard recommends that Gigabit Ethernet over fiber be the preferred backbone technology.

10-Gigabit Ethernet architectures / Future of Ethernet

10-Gigabit Ethernet architectures
7.2.6 This page describes the 10-Gigabit Ethernet architectures.


As with the development of Gigabit Ethernet, the increase in speed comes with extra requirements. The shorter bit time that results from the increased speed requires special considerations. For 10 GbE transmissions, each data bit lasts 0.1 nanosecond. This means there would be 1,000 10 GbE data bits in the same bit time as one data bit in a 10-Mbps Ethernet data stream. Because of the short duration of the 10 GbE data bit, it is often difficult to separate a data bit from noise. 10 GbE data transmissions rely on exact bit timing to separate the data from the effects of noise on the physical layer. This is the purpose of synchronization.
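The bit-time arithmetic can be checked directly, since a bit's duration is simply the reciprocal of the bit rate:

```python
# One bit lasts 1 / rate seconds; expressed here in nanoseconds.

def bit_time_ns(mbps):
    return 1e9 / (mbps * 1e6)

for name, mbps in [("10 Mbps", 10), ("100 Mbps", 100),
                   ("1000 Mbps", 1000), ("10 GbE", 10_000)]:
    print(name, bit_time_ns(mbps), "ns per bit")

# 10 GbE bits last 0.1 ns, so 1,000 of them fit into the single
# 100-ns bit time of 10-Mbps Ethernet.
```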

In response to these issues of synchronization, bandwidth, and Signal-to-Noise Ratio, 10-Gigabit Ethernet uses two separate encoding steps. By using codes to represent the user data, transmission is made more efficient. The encoded data provides synchronization, efficient usage of bandwidth, and improved Signal-to-Noise Ratio characteristics.

Complex serial bit streams are used for all versions of 10GbE except for 10GBASE-LX4, which uses Wide Wavelength Division Multiplexing (WWDM) to multiplex four simultaneous bit streams onto four wavelengths of light launched into the fiber at one time.

The figure represents the particular case of four laser sources operating at slightly different wavelengths. Upon receipt from the medium, the optical signal stream is demultiplexed into four separate optical signal streams. The four optical signal streams are then converted back into four electronic bit streams as they travel, in approximately the reverse process, back up through the sublayers to the MAC layer.

Currently, most 10GbE products are in the form of modules, or line cards, for addition to high-end switches and routers. As the 10GbE technologies evolve, an increasing diversity of signaling components can be expected. As optical technologies evolve, improved transmitters and receivers will be incorporated into these products, taking further advantage of modularity. All 10GbE varieties use optical fiber media. Fiber types include 10 µm single-mode fiber, and 50 µm and 62.5 µm multimode fibers. A range of fiber attenuation and dispersion characteristics is supported, but they limit operating distances.

Even though support is limited to fiber optic media, some of the maximum cable lengths are surprisingly short. No repeater is defined for 10-Gigabit Ethernet since half duplex is explicitly not supported.

As with the 10 Mbps, 100 Mbps, and 1000 Mbps versions, it is possible to modify some of the architecture rules slightly. Possible architecture adjustments are related to signal loss and distortion along the medium. Due to dispersion of the signal and other issues, the light pulse becomes undecipherable beyond certain distances.

The next page will discuss the future of Ethernet.

Future of Ethernet
7.2.7 Ethernet has evolved from Legacy to Fast to Gigabit to MultiGigabit technologies. While other LAN technologies are still in place as legacy installations, Ethernet dominates new LAN installations, so much so that some have referred to Ethernet as the LAN “dial tone”. Ethernet is now the standard for horizontal, vertical, and inter-building connections. Recently developed versions of Ethernet are blurring the distinction between LANs, MANs, and WANs.


While 1-Gigabit Ethernet is now widely available and 10-Gigabit products are becoming more available, the IEEE and the 10-Gigabit Ethernet Alliance are working on 40-, 100-, and even 160-Gbps standards. The technologies that are adopted will depend on a number of factors, including the rate of maturation of the technologies and standards, the rate of adoption in the market, and cost.

Proposals have been made for Ethernet arbitration schemes other than CSMA/CD. The collision problems of the physical bus topologies of 10BASE5 and 10BASE2, and of 10BASE-T and 100BASE-TX hubs, are no longer common. The use of UTP and optical fiber with separate Tx and Rx paths, and the decreasing cost of switches, have made shared-media, half-duplex connections much less important.

The future of networking media is three-fold:

1. Copper (up to 1000 Mbps, perhaps more)
2. Wireless (approaching 100 Mbps, perhaps more)
3. Optical fiber (currently at 10,000 Mbps and soon to be more)

Copper and wireless media have certain physical and practical limitations on the highest frequency signals that can be transmitted. This is not a limiting factor for optical fiber in the foreseeable future. The bandwidth limitations on optical fiber are extremely large and are not yet being threatened. In fiber systems, it is the electronics technology (such as emitters and detectors) and fiber manufacturing processes that most limit the speed. Upcoming developments in Ethernet are likely to be heavily weighted toward laser light sources and single-mode optical fiber.

When Ethernet was slower, half duplex, subject to collisions, and governed by a “democratic” prioritization process, it was not considered to have the Quality of Service (QoS) capabilities required to handle certain types of traffic, such as IP telephony and video multicast.

The full-duplex, high-speed Ethernet technologies that now dominate the market are proving sufficient to support even QoS-intensive applications. This makes the potential applications of Ethernet even wider. Ironically, end-to-end QoS capability helped drive a push for ATM to the desktop and to the WAN in the mid-1990s, but now it is Ethernet, not ATM, that is approaching this goal.

This page concludes this lesson. The next page will summarize the main points from the module.

Saturday, February 6, 2010

Gigabit Ethernet architecture / 10-Gigabit Ethernet

Gigabit Ethernet architecture
7.2.4 This page will discuss the architecture of Gigabit Ethernet.


The distance limitations of full-duplex links are determined only by the medium, not by the round-trip delay. Since most Gigabit Ethernet is switched, the values in the figures are the practical limits between devices. Daisy-chaining, star, and extended star topologies are all allowed. The issue then becomes one of logical topology and data flow, not timing or distance limitations.

A 1000BASE-T UTP cable is the same as 10BASE-T and 100BASE-TX cable, except that link performance must meet the higher quality Category 5e or ISO Class D (2000) requirements.

Modification of the architecture rules is strongly discouraged for 1000BASE-T. At 100 meters, 1000BASE-T is operating close to the edge of the ability of the hardware to recover the transmitted signal. Any cabling problems or environmental noise could render an otherwise compliant cable inoperable even at distances that are within the specification.

It is recommended that all links between a station and a hub or switch be configured for Auto-Negotiation to permit the highest common performance. This will avoid accidental misconfiguration of the other required parameters for proper Gigabit Ethernet operation.

The next page will discuss 10-Gigabit Ethernet.

10-Gigabit Ethernet
7.2.5 This page will describe 10-Gigabit Ethernet and compare it to other versions of Ethernet.


IEEE 802.3ae was adopted to include 10 Gbps full-duplex transmission over fiber optic cable. The basic similarities between 802.3ae and 802.3, the original Ethernet standard, are remarkable. This 10-Gigabit Ethernet (10GbE) is evolving not only for LANs, but also for MANs and WANs.

With a frame format and other Ethernet Layer 2 specifications compatible with previous standards, 10GbE can meet increased bandwidth needs while remaining interoperable with existing network infrastructure.

A major conceptual change for Ethernet is emerging with 10GbE. Ethernet is traditionally thought of as a LAN technology, but 10GbE physical layer standards allow both an extension in distance to 40 km over single-mode fiber and compatibility with synchronous optical network (SONET) and synchronous digital hierarchy (SDH) networks. Operation at 40 km distance makes 10GbE a viable MAN technology. Compatibility with SONET/SDH networks operating up to OC-192 speeds (9.584640 Gbps) make 10GbE a viable WAN technology. 10GbE may also compete with ATM for certain applications.

To summarize, how does 10GbE compare to other varieties of Ethernet?

• Frame format is the same, allowing interoperability between all varieties of legacy, fast, gigabit, and 10 gigabit, with no reframing or protocol conversions.
• Bit time is now 0.1 nanoseconds. All other time variables scale accordingly.
• Since only full-duplex fiber connections are used, CSMA/CD is not necessary.
• The IEEE 802.3 sublayers within OSI Layers 1 and 2 are mostly preserved, with a few additions to accommodate 40 km fiber links and interoperability with SONET/SDH technologies.
• Flexible, efficient, reliable, relatively low cost end-to-end Ethernet networks become possible.
• TCP/IP can run over LANs, MANs, and WANs with one Layer 2 transport method.

The basic standard governing CSMA/CD is IEEE 802.3. An IEEE 802.3 supplement, entitled 802.3ae, governs the 10GbE family. As is typical for new technologies, a variety of implementations are being considered, including:

• 10GBASE-SR – Intended for short distances over already-installed multimode fiber; supports a range of 26 m to 82 m
• 10GBASE-LX4 – Uses wavelength division multiplexing (WDM), supports 240 m to 300 m over already-installed multimode fiber and 10 km over single-mode fiber
• 10GBASE-LR and 10GBASE-ER – Support 10 km and 40 km over single-mode fiber
• 10GBASE-SW, 10GBASE-LW, and 10GBASE-EW – Known collectively as 10GBASE-W, intended to work with OC-192 synchronous transport module SONET/SDH WAN equipment

The IEEE 802.3ae Task force and the 10-Gigabit Ethernet Alliance (10 GEA) are working to standardize these emerging technologies.

10-Gbps Ethernet (IEEE 802.3ae) was standardized in June 2002. It is a full-duplex protocol that uses only optical fiber as a transmission medium. The maximum transmission distances depend on the type of fiber being used. With single-mode fiber as the transmission medium, the maximum transmission distance is 40 kilometers (25 miles). Some discussions among IEEE members suggest the possibility of standards for 40-, 80-, and even 100-Gbps Ethernet.

The next page will discuss the architecture of 10-Gigabit Ethernet.

1000BASE-T / 1000BASE-SX and LX

1000BASE-T
7.2.2 This page will describe 1000BASE-T.


As Fast Ethernet was installed to increase bandwidth to workstations, this began to create bottlenecks upstream in the network. The 1000BASE-T standard, which is IEEE 802.3ab, was developed to provide additional bandwidth to help alleviate these bottlenecks. It provided more throughput for devices such as intra-building backbones, inter-switch links, server farms, and other wiring closet applications, as well as connections for high-end workstations. 1000BASE-T was designed to function over Category 5 copper cable that passes the Category 5e test. Most installed Category 5 cable can pass the Category 5e certification if it is properly terminated. It is important for the 1000BASE-T standard to be interoperable with 10BASE-T and 100BASE-TX.

Since Category 5e cable can reliably carry up to 125 Mbps of traffic, obtaining 1000 Mbps, or 1 gigabit, of bandwidth was a design challenge. The first step toward 1000BASE-T was to use all four pairs of wires instead of the traditional two pairs used by 10BASE-T and 100BASE-TX. This requires complex circuitry that allows full-duplex transmission on the same wire pair, which provides 250 Mbps per pair. With all four wire pairs, this provides the desired 1000 Mbps. Since the information travels simultaneously across the four paths, the circuitry has to divide frames at the transmitter and reassemble them at the receiver.

1000BASE-T uses 4D-PAM5 line encoding on Category 5e, or better, UTP. Transmission and reception of data happen in both directions on the same wire pair at the same time. As might be expected, this results in a permanent collision on the wire pairs. These collisions produce complex voltage patterns. With complex integrated circuits that use techniques such as echo cancellation, Layer 1 Forward Error Correction (FEC), and prudent selection of voltage levels, the system achieves 1-Gigabit throughput.

In idle periods there are nine voltage levels found on the cable, and during data transmission periods there are 17 voltage levels found on the cable. With this large number of states and the effects of noise, the signal on the wire looks more analog than digital. Like analog, the system is more susceptible to noise due to cable and termination problems.

The data from the sending station is carefully divided into four parallel streams, encoded, transmitted and detected in parallel, and then reassembled into one received bit stream. The figure represents the simultaneous full duplex on four wire pairs. 1000BASE-T supports both half-duplex and full-duplex operation, but the use of full-duplex 1000BASE-T is widespread.
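As a rough illustration of the divide-and-reassemble idea, a round-robin split across four lanes can be sketched as follows. This operates on whole bytes for clarity; the actual PHY works on PAM-5 symbols, so treat it purely as an analogy:

```python
# Hypothetical sketch: distribute a frame round-robin across four wire
# pairs, then interleave the per-pair streams back into one frame.

LANES = 4

def split(frame):
    return [frame[i::LANES] for i in range(LANES)]

def reassemble(streams):
    out = bytearray()
    for chunk in zip(*streams):  # assumes length divisible by the lane count
        out.extend(chunk)
    return bytes(out)

frame = bytes(range(8))
assert reassemble(split(frame)) == frame
print(split(frame))  # four interleaved byte streams
```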

The next page will introduce 1000BASE-SX and LX.

1000BASE-SX and LX
7.2.3 This page will discuss single-mode and multimode optical fiber.


The IEEE 802.3 standard recommends that Gigabit Ethernet over fiber be the preferred backbone technology.

The timing, frame format, and transmission are common to all versions of 1000 Mbps Ethernet. Two signal-encoding schemes are defined at the physical layer. The 8B/10B scheme is used for optical fiber and shielded copper media, and pulse amplitude modulation 5 (PAM5) is used for UTP.

1000BASE-X uses 8B/10B encoding converted to non-return to zero (NRZ) line encoding. NRZ encoding relies on the signal level found in the timing window to determine the binary value for that bit period. Unlike most of the other encoding schemes described, this encoding system is level driven instead of edge driven. That is, whether a bit is a zero or a one is determined by the level of the signal rather than by when the signal changes levels.

The NRZ signals are then pulsed into the fiber using either short-wavelength or long-wavelength light sources. The short-wavelength uses an 850 nm laser or LED source in multimode optical fiber (1000BASE-SX). It is the lower-cost of the options but has shorter distances. The long-wavelength 1310 nm laser source uses either single-mode or multimode optical fiber (1000BASE-LX). Laser sources used with single-mode fiber can achieve distances of up to 5000 meters. Because of the length of time to completely turn the LED or laser on and off each time, the light is pulsed using low and high power. A logic zero is represented by low power, and a logic one by high power.

The Media Access Control method treats the link as point-to-point. Since separate fibers are used for transmitting (Tx) and receiving (Rx), the connection is inherently full duplex. Gigabit Ethernet permits only a single repeater between two stations. The figure is a 1000BASE Ethernet media comparison chart.

The next page describes the architecture of Gigabit Ethernet.

Fast Ethernet architecture / 1000-Mbps Ethernet

Fast Ethernet architecture
7.1.9 This page describes the architecture of Fast Ethernet.


Fast Ethernet links generally consist of a connection between a station and a hub or switch. Hubs are considered multi-port repeaters and switches are considered multi-port bridges. These are subject to the 100-m (328 ft) UTP media distance limitation.

A Class I repeater may introduce up to 140 bit times of latency. Any repeater that converts between one Ethernet implementation and another is a Class I repeater. A Class II repeater is restricted to a smaller timing delay, 92 bit times, because it immediately repeats the incoming signal to all other ports without a translation process. To achieve the smaller timing delay, Class II repeaters can only connect to segment types that use the same signaling technique.
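Since one bit lasts 10 ns at 100 Mbps, these bit-time budgets translate directly into wall-clock latency:

```python
# Repeater latency in real time at 100 Mbps, where one bit time is 10 ns.

BIT_TIME_NS = 10

def latency_ns(bit_times):
    return bit_times * BIT_TIME_NS

print("Class I: ", latency_ns(140), "ns")  # 1400 ns
print("Class II:", latency_ns(92), "ns")   # 920 ns
```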

As with 10-Mbps versions, it is possible to modify some of the architecture rules for 100-Mbps versions. Modification of the architecture rules is strongly discouraged for 100BASE-TX. 100BASE-TX cable between Class II repeaters may not exceed 5 m (16 ft). Links that operate in half duplex are not uncommon in Fast Ethernet. However, half duplex is undesirable because the signaling scheme is inherently full duplex.

The figure shows architecture configuration cable distances. 100BASE-TX links can have unrepeated distances of up to 100 m. Switches have made this distance limitation less important. Most Fast Ethernet implementations are switched.

This page concludes this lesson. The next lesson will discuss Gigabit and 10-Gigabit Ethernet. The first page describes 1000-Mbps Ethernet standards.

1000-Mbps Ethernet
7.2.1 This page covers the 1000-Mbps Ethernet or Gigabit Ethernet standards. These standards specify both fiber and copper media for data transmissions. The 1000BASE-T standard, IEEE 802.3ab, uses Category 5, or higher, balanced copper cabling. The 1000BASE-X standard, IEEE 802.3z, specifies 1 Gbps full duplex over optical fiber.


1000BASE-T, 1000BASE-SX, and 1000BASE-LX use the same timing parameters, as shown in the figure. They use a bit time of 1 ns (0.000000001 second, or one billionth of a second). The Gigabit Ethernet frame has the same format as that used for 10- and 100-Mbps Ethernet. Some implementations of Gigabit Ethernet may use different processes to convert frames to bits on the cable. The figure shows the Ethernet frame fields.

The differences between standard Ethernet, Fast Ethernet and Gigabit Ethernet occur at the physical layer. Due to the increased speeds of these newer standards, the shorter duration bit times require special considerations. Since the bits are introduced on the medium for a shorter duration and more often, timing is critical. This high-speed transmission requires higher frequencies. This causes the bits to be more susceptible to noise on copper media.

These issues require Gigabit Ethernet to use two separate encoding steps. Data transmission is more efficient when codes are used to represent the binary bit stream. The encoded data provides synchronization, efficient usage of bandwidth, and improved signal-to-noise ratio characteristics.

At the physical layer, the bit patterns from the MAC layer are converted into symbols. The symbols may also represent control information such as start-of-frame, end-of-frame, and idle conditions on a link. The frame is coded into control symbols and data symbols to increase network throughput.

Fiber-based Gigabit Ethernet, or 1000BASE-X, uses 8B/10B encoding, which is similar to the 4B/5B concept. This is followed by the simple nonreturn to zero (NRZ) line encoding of light on optical fiber. This encoding process is possible because the fiber medium can carry higher bandwidth signals.

The next page will discuss the 1000BASE-T standard.

100BASE-TX / 100BASE-FX

100BASE-TX
7.1.7 This page will describe 100BASE-TX.


In 1995, 100BASE-TX, which uses Category 5 UTP cable, became the standard and a commercial success.

The original coaxial Ethernet used half-duplex transmission so only one device could transmit at a time. In 1997, Ethernet was expanded to include a full-duplex capability that allowed more than one PC on a network to transmit at the same time. Switches replaced hubs in many networks. These switches had full-duplex capabilities and could handle Ethernet frames quickly.

100BASE-TX uses 4B/5B encoding, which is then scrambled and converted to Multi-Level Transmit (MLT-3) encoding. The figure shows four waveform examples. The top waveform has no transition in the center of the timing window. No transition indicates a binary zero. The second waveform shows a transition in the center of the timing window. A transition represents a binary one. The third waveform shows an alternating binary sequence. The fourth waveform shows that signal changes indicate ones and horizontal lines indicate zeros.
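A small sketch makes the encoding chain concrete. The 4B/5B data code groups below follow the published table; the scrambling step that 100BASE-TX applies between 4B/5B and MLT-3 is omitted for brevity:

```python
# 100BASE-TX encoding sketch: 4B/5B code groups, then MLT-3 line coding.
# (Scrambling, which sits between these steps in the real PHY, is omitted.)

FOUR_B_FIVE_B = {  # standard data code groups: nibble -> 5-bit group
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def mlt3(bits):
    """MLT-3: a one steps through the level cycle 0, +1, 0, -1; a zero holds."""
    cycle = [0, +1, 0, -1]
    idx, levels = 0, []
    for b in bits:
        if b == "1":
            idx = (idx + 1) % 4
        levels.append(cycle[idx])
    return levels

code = FOUR_B_FIVE_B[0xE]      # nibble 0xE -> "11100"
print(code, "->", mlt3(code))  # three transitions, then the level holds
```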

The figure shows the pinout for a 100BASE-TX connection. Notice that two separate transmit and receive paths exist. This configuration is identical to that of 10BASE-T.

100BASE-TX carries 100 Mbps of traffic in half-duplex mode. In full-duplex mode, 100BASE-TX can exchange 200 Mbps of traffic. The concept of full duplex will become more important as Ethernet speeds increase.

100BASE-FX
7.1.8 This page covers 100BASE-FX.



When copper-based Fast Ethernet was introduced, a fiber version was also desired. A fiber version could be used for backbone applications, connections between floors and buildings where copper is less desirable, and high-noise environments. 100BASE-FX was introduced to satisfy this desire. However, 100BASE-FX was never widely adopted, due to the timely introduction of Gigabit Ethernet copper and fiber standards. Gigabit Ethernet standards are now the dominant technology for backbone installations, high-speed cross-connects, and general infrastructure needs.


The timing, frame format, and transmission are the same in both versions of 100-Mbps Fast Ethernet. In Figure , the top waveform has no transition, which indicates a binary 0. In the second waveform, the transition in the center of the timing window indicates a binary 1. In the third waveform, there is an alternating binary sequence. In the third and fourth waveforms it is more obvious that no transition indicates a binary zero and the presence of a transition is a binary one.


Figure summarizes a 100BASE-FX link and pinouts. A fiber pair with either ST or SC connectors is most commonly used.


The separate Transmit (Tx) and Receive (Rx) paths in 100BASE-FX optical fiber allow for 200-Mbps transmission.


The next page will explain the Fast Ethernet architecture.

10BASE-T wiring and architecture / 100-Mbps Ethernet

10BASE-T wiring and architecture
7.1.5 This page explains the wiring and architecture of 10BASE-T.


A 10BASE-T link generally connects a station to a hub or switch. Hubs are multi-port repeaters and count toward the limit on repeaters between distant stations. Hubs do not divide network segments into separate collision domains. Bridges and switches divide segments into separate collision domains. The maximum distance between bridges and switches is based on media limitations.

Although hubs may be linked, it is best to avoid this arrangement. A network with linked hubs may exceed the limit for maximum delay between stations. Multiple hubs should be arranged in hierarchical order like a tree structure. Performance is better if fewer repeaters are used between stations.

An architectural example is shown in Figure . The distance from one end of the network to the other places the architecture at its limit. The most important aspect to consider is how to keep the delay between distant stations to a minimum, regardless of the architecture and media types involved. A shorter maximum delay will provide better overall performance.
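To get a feel for why distance drives delay, the one-way propagation time over a copper run can be estimated. This is a rough sketch; the 0.6c nominal velocity of propagation (NVP) used here is an assumed typical value for UTP and varies by cable.

```python
C = 299_792_458  # speed of light in a vacuum, m/s
NVP = 0.6        # assumed nominal velocity of propagation for UTP (fraction of c)

def propagation_delay_ns(length_m, nvp=NVP):
    """One-way signal travel time over a cable run, in nanoseconds."""
    return length_m / (nvp * C) * 1e9

# A full 100 m run costs roughly 550-560 ns one way -- dozens of bit
# times at 100 Mbps -- before any repeater or hub delay is added.
print(round(propagation_delay_ns(100)))  # 556
```

Each hub or repeater in the path adds its own forwarding delay on top of this, which is why a flat hierarchy with few repeaters between stations performs better.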

10BASE-T links can have unrepeated distances of up to 100 m (328 ft). While this may seem like a long distance, it is typically used up when wiring an actual building. Hubs can solve the distance issue but will allow collisions to propagate. The widespread introduction of switches has made the distance limitation less important. If workstations are located within 100 m (328 ft) of a switch, the 100-m distance limit starts over at the switch.

The next page will describe Fast Ethernet.

100-Mbps Ethernet
7.1.6 This page will discuss 100-Mbps Ethernet, which is also known as Fast Ethernet. The two technologies that have become important are 100BASE-TX, which uses a copper UTP medium, and 100BASE-FX, which uses a multimode optical fiber medium.


Three characteristics are common to 100BASE-TX and 100BASE-FX: the timing parameters, the frame format, and parts of the transmission process. Note that one bit time at 100 Mbps = 10 ns = 0.01 microseconds = one 100-millionth of a second.
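The bit-time arithmetic can be checked directly; this is a quick sanity check, not anything standard-specific:

```python
bit_rate = 100_000_000   # 100 Mbps
bit_time = 1 / bit_rate  # seconds per bit = one 100-millionth of a second

print(bit_time * 1e9, "ns")  # ~10 ns
print(bit_time * 1e6, "us")  # ~0.01 microseconds
```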

The 100-Mbps frame format is the same as the 10-Mbps frame.

Fast Ethernet is ten times faster than 10BASE-T. The bits that are sent are shorter in duration and occur more frequently. These higher frequency signals are more susceptible to noise. In response to these issues, 100-Mbps Ethernet uses two separate encoding steps. The first part of the encoding uses a technique called 4B/5B; the second part is the actual line encoding specific to copper or fiber.
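The 4B/5B step maps each 4-bit nibble to a 5-bit symbol chosen so that the line never goes long without a transition. The table below is the standard 100BASE-X data-symbol set; control symbols are omitted, so treat this as an illustrative sketch rather than a complete encoder:

```python
# Standard 4B/5B data-symbol table: each 4-bit value maps to a
# 5-bit code with enough transitions for clock recovery.
FOUR_B_FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_4b5b(data):
    """Encode a byte string nibble-by-nibble into 5-bit symbols."""
    out = []
    for byte in data:
        out.append(FOUR_B_FIVE_B[byte >> 4])   # high nibble first
        out.append(FOUR_B_FIVE_B[byte & 0x0F])
    return "".join(out)

print(encode_4b5b(b"\x5A"))  # '0101110110'
```

The 5-bits-for-4 expansion is why 100BASE-TX signals at 125 Mbaud on the wire in order to carry 100 Mbps of data.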

The next page will discuss the 100BASE-TX standard.

10BASE2

7.1.3 This page covers 10BASE2, which was introduced in 1985.


Compared to 10BASE5, installation was easier because of the smaller size, lighter weight, and greater flexibility of the thinner coaxial cable. 10BASE2 still exists in legacy networks. Like 10BASE5, it is no longer recommended for network installations. It has a low cost and does not require hubs.

10BASE2 also uses Manchester encoding. Computers on a 10BASE2 LAN are linked together by an unbroken series of coaxial cable lengths. These lengths attach to a T-shaped BNC connector on each NIC.
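Manchester encoding represents each bit as a mid-bit transition, so the receiver can recover the clock from the signal itself. A minimal sketch, using the IEEE 802.3 convention that a binary 1 is a low-to-high transition and a binary 0 is high-to-low:

```python
def manchester_encode(bits):
    """Split each bit period into two half-periods: a 1 is low then
    high (a rising mid-bit transition), a 0 is high then low (falling)."""
    halves = {1: "01", 0: "10"}
    return "".join(halves[b] for b in bits)

# Every bit produces a transition, so even long runs of 0s or 1s
# keep the receiver's clock synchronized.
print(manchester_encode([1, 0, 1, 1]))  # '01100101'
```

The cost of this guaranteed transition is bandwidth: the signal changes up to twice per bit, which is acceptable at 10 Mbps but is why the 100-Mbps standards switched to 4B/5B with a separate line code.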

10BASE2 has a stranded central conductor. Each of the maximum five segments of thin coaxial cable may be up to 185 m (607 ft) long and each station is connected directly to the BNC T-shaped connector on the coaxial cable.

10BASE2 uses half-duplex transmission, so only one station can transmit at a time or a collision will occur. The maximum transmission rate of 10BASE2 is 10 Mbps.

There may be up to 30 stations on a 10BASE2 segment. Only three out of five consecutive segments between any two stations can be populated.
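The segment limits above are the classic 5-4-3 rule: at most five segments joined by at most four repeaters between any two stations, with no more than three of those segments populated by stations. As an illustrative check (the function name and interface here are hypothetical, not part of any standard):

```python
def path_is_valid(segments, repeaters, populated_segments):
    """Check a station-to-station path against the 5-4-3 rule."""
    return (segments <= 5
            and repeaters <= 4
            and populated_segments <= 3)

print(path_is_valid(5, 4, 3))  # True  -- at the limit, still legal
print(path_is_valid(5, 4, 4))  # False -- too many populated segments
```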

The next page will discuss 10BASE-T.


10BASE-T
7.1.4 This page covers 10BASE-T, which was introduced in 1990.


10BASE-T used cheaper, easier-to-install Category 3 UTP copper cable instead of coaxial cable. The cable plugged into a central connection device that contained the shared bus. This device was a hub. It sat at the center of a set of cables that radiated out to the PCs like the spokes of a wheel. This is referred to as a star topology. As additional stars were added and the cable distances grew, this formed an extended star topology. Originally 10BASE-T was a half-duplex protocol, but full-duplex features were added later. Ethernet came to dominate LAN technology during its explosion in popularity in the mid-to-late 1990s.

10BASE-T also uses Manchester encoding. A 10BASE-T UTP cable has a solid conductor for each wire. The maximum cable length is 90 m (295 ft). UTP cable uses eight-pin RJ-45 connectors. Though Category 3 cable is adequate for 10BASE-T networks, new cable installations should be made with Category 5e or better. All four pairs of wires should be used with either the T568-A or T568-B pinout arrangement. This type of installation supports the use of multiple protocols without the need to rewire. Figure shows the pinout arrangement for a 10BASE-T connection. The pair that transmits data on one device is connected to the pair that receives data on the other device.

Half duplex or full duplex is a configuration choice. 10BASE-T carries 10 Mbps of traffic in half-duplex mode and 20 Mbps in full-duplex mode.

The next page describes the wiring and architecture of 10BASE-T.