
Factors that impact network performance / Elements of Ethernet/802.3 networks

Factors that impact network performance 
4.1.2 This page will describe some factors that cause LANs to become congested and overburdened. In addition to a large number of network users, several other factors have combined to test the limits of traditional LANs:
  • The multitasking environment present in current desktop operating systems such as Windows, Unix/Linux, and Mac OS X allows for simultaneous network transactions. This increased capability has led to an increased demand for network resources.
  • The use of network-intensive applications, such as the World Wide Web, has increased. Client/server applications allow administrators to centralize information, which makes it easier to maintain and protect.
  • Client/server applications do not require workstations to maintain information or provide hard disk space to store it. Given the cost benefit of client/server applications, such applications are likely to become even more widely used in the future.
The next page will discuss Ethernet networks.

Elements of Ethernet/802.3 networks 
4.1.3 This page will describe some factors that can have a negative impact on the performance of an Ethernet network.
Ethernet is a broadcast transmission technology. Therefore, network devices such as computers, printers, and file servers communicate with one another over a shared network medium. The performance of a shared-medium Ethernet/802.3 LAN can be negatively affected by several factors:
  • The data frame delivery of Ethernet/802.3 LANs is of a broadcast nature.
  • The carrier sense multiple access/collision detect (CSMA/CD) access method allows only one station to transmit at a time; contending stations must back off and retry, as sketched in the example after this list.
  • Bandwidth-intensive multimedia applications, such as video and Internet traffic, coupled with the broadcast nature of Ethernet, can create network congestion.
  • Normal latency occurs as frames travel across the network medium and through network devices.
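The contention described in the CSMA/CD bullet above can be illustrated with a small slot-based simulation. The following is a deliberately simplified sketch, not the real CSMA/CD state machine: the station names, slot counts, and frame length are arbitrary illustrative values, and actual Ethernet measures backoff in multiples of the 512-bit slot time rather than abstract simulation slots.

```python
# Simplified, slot-based sketch of CSMA/CD contention on a shared segment.
# All names and constants below are illustrative, not taken from any standard.
import random

SLOTS = 40          # total time slots to simulate
FRAME_SLOTS = 3     # slots needed to place one frame on the wire (illustrative)

class Station:
    def __init__(self, name):
        self.name = name
        self.remaining = 0      # slots left in the current transmission
        self.backoff = 0        # slots to wait after a collision
        self.collisions = 0     # consecutive collisions for the current frame

def simulate(stations):
    for slot in range(SLOTS):
        # Carrier sense: the medium is busy if any station is still transmitting.
        busy = any(s.remaining > 0 for s in stations)

        # Stations attempt only when they sense an idle medium and their
        # backoff timer has expired.
        attempts = [] if busy else [s for s in stations
                                    if s.remaining == 0 and s.backoff == 0]

        if len(attempts) == 1:
            s = attempts[0]
            s.remaining = FRAME_SLOTS
            s.collisions = 0
            print(f"slot {slot:2}: {s.name} transmits")
        elif len(attempts) > 1:
            # Collision detect: every attempting station backs off for a random
            # number of slots (truncated binary exponential backoff).
            print(f"slot {slot:2}: collision between "
                  + ", ".join(s.name for s in attempts))
            for s in attempts:
                s.collisions += 1
                s.backoff = random.randint(0, 2 ** min(s.collisions, 10) - 1)

        # Advance timers; stations that just collided keep their fresh backoff.
        for s in stations:
            if s.remaining > 0:
                s.remaining -= 1
            elif s.backoff > 0 and s not in attempts:
                s.backoff -= 1

simulate([Station("A"), Station("B"), Station("C")])
```

Running the sketch shows why collisions are a normal part of shared Ethernet: whenever two stations sense an idle medium in the same slot they collide, and only the random backoff eventually lets one of them through. As more stations contend, more slots are lost to collisions and backoff, which is exactly the congestion effect described above.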
Ethernet uses CSMA/CD and can support fast transmission rates. Fast Ethernet, or 100BASE-T, provides transmission speeds up to 100 Mbps. Gigabit Ethernet provides transmission speeds up to 1000 Mbps and 10-Gigabit Ethernet provides transmission speeds up to 10,000 Mbps. The goal of Ethernet is to provide a best-effort delivery service and allow all devices on the shared medium to transmit on an equal basis. Collisions are a natural occurrence on Ethernet networks and can become a major problem.  
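To put those data rates in perspective, the following back-of-the-envelope calculation shows how long a single full-sized frame occupies the wire at each speed; this serialization delay is one component of the normal latency mentioned above. The 1518-byte frame size is simply the common maximum Ethernet frame, and legacy 10 Mbps Ethernet is included as a baseline for comparison.

```python
# Illustrative arithmetic: time to serialize one maximum-size (1518-byte)
# Ethernet frame at the data rates discussed above.
FRAME_BYTES = 1518

for name, mbps in [("Ethernet (10BASE-T)", 10),
                   ("Fast Ethernet (100BASE-T)", 100),
                   ("Gigabit Ethernet", 1000),
                   ("10-Gigabit Ethernet", 10000)]:
    bits = FRAME_BYTES * 8
    seconds = bits / (mbps * 1_000_000)
    print(f"{name:26} {mbps:>6} Mbps -> {seconds * 1e6:8.1f} microseconds per frame")
```

At 10 Mbps a full frame ties up the shared medium for roughly 1.2 ms, while at 10 Gbps the same frame takes only about 1.2 microseconds, which is why higher-speed Ethernet relieves, though does not eliminate, contention on a shared segment.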
The next page will describe half-duplex networks.
