
Throughput / Data transfer calculation


Throughput


2.2.5 Bandwidth is a measure of the amount of information that can move through a network in a given period of time. Therefore, the amount of available bandwidth is a critical part of a network's specification. A typical LAN might be built to provide 100 Mbps to every desktop workstation, but this does not mean that each user can actually move 100 megabits of data through the network during every second of use. That would be true only under the most ideal circumstances.

Throughput refers to actual measured bandwidth, at a specific time of day, using specific Internet routes, and while a specific set of data is transmitted on the network. Unfortunately, for many reasons, throughput is often far less than the maximum possible digital bandwidth of the medium that is being used. The following are some of the factors that determine throughput:

• Internetworking devices
• Type of data being transferred
• Network topology
• Number of users on the network
• User computer
• Server computer
• Power conditions

The theoretical bandwidth of a network is an important consideration in network design, because the network bandwidth will never be greater than the limits imposed by the chosen media and networking technologies. However, it is just as important for a network designer and administrator to consider the factors that may affect actual throughput. By measuring throughput on a regular basis, a network administrator will be aware of changes in network performance and changes in the needs of network users. The network can then be adjusted accordingly.

The next page explains data transfer calculation.


Data transfer calculation


2.2.6 Network designers and administrators are often called upon to make decisions regarding bandwidth. One decision might be whether to increase the size of the WAN connection to accommodate a new database. Another decision might be whether the current LAN backbone is of sufficient bandwidth for a streaming-video training program. The answers to problems like these are not always easy to find, but one place to start is with a simple data transfer calculation.

The formula transfer time = size of file / bandwidth (T = S/BW) allows a network administrator to estimate several important aspects of network performance. If the typical file size for a given application is known, dividing the file size by the network bandwidth yields an estimate of the fastest time in which the file can be transferred.
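
As a rough illustration, the following sketch (Python, with hypothetical values, not part of the curriculum) applies T = S/BW when the file size and the bandwidth are already expressed in the same units:

    def transfer_time(file_size_megabits, bandwidth_mbps):
        # Best-case transfer time in seconds, using T = S / BW.
        # Both values must be in matching units: megabits and megabits per second.
        return file_size_megabits / bandwidth_mbps

    # Hypothetical example: a 100-megabit file over a 100 Mbps LAN
    print(transfer_time(100, 100))  # 1.0 second, best case only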

Two important points should be considered when doing this calculation.

• The result is an estimate only, because the file size does not include any overhead added by encapsulation.

• The result is likely to be a best-case transfer time, because available bandwidth is almost never at the theoretical maximum for the network type. A more accurate estimate can be attained if throughput is substituted for bandwidth in the equation.

Although the data transfer calculation is quite simple, one must be careful to use the same units throughout the equation. In other words, if the bandwidth is measured in megabits per second (Mbps), the file size must be in megabits (Mb), not megabytes (MB). Since file sizes are typically given in megabytes, it may be necessary to multiply the number of megabytes by eight to convert to megabits.
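
As a sketch of the unit conversion described above (Python, with a hypothetical file size and link speed), the file size in megabytes is multiplied by eight before it is divided by the bandwidth in Mbps:

    def megabytes_to_megabits(size_in_megabytes):
        # 1 byte = 8 bits, so multiply megabytes by 8 to get megabits.
        return size_in_megabytes * 8

    # Hypothetical example: a 25 MB file over a 10 Mbps link
    size_megabits = megabytes_to_megabits(25)   # 200 megabits
    bandwidth_mbps = 10                          # megabits per second
    print(size_megabits / bandwidth_mbps)        # 20.0 seconds, best case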

Try to answer the following question, using the formula T=S/BW. Be sure to convert units of measurement as necessary.

Would it take less time to send the contents of a floppy disk full of data (1.44 MB) over an ISDN line, or to send the contents of a 10 GB hard drive full of data over an OC-48 line?
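
One way to check your answer is the short calculation below. It assumes line rates that the question does not state: an ISDN BRI line at 128 kbps and an OC-48 line at roughly 2.488 Gbps, with 1 GB treated as 1000 MB:

    # Assumed line rates (not given in the question):
    isdn_mbps = 0.128        # ISDN BRI, 2 x 64 kbps B channels
    oc48_mbps = 2488.32      # OC-48, approximately 2.488 Gbps

    floppy_megabits = 1.44 * 8              # 1.44 MB -> 11.52 megabits
    hard_drive_megabits = 10 * 1000 * 8     # 10 GB -> 80,000 megabits

    print(floppy_megabits / isdn_mbps)      # about 90 seconds over ISDN
    print(hard_drive_megabits / oc48_mbps)  # about 32 seconds over OC-48

Under these assumptions, the 10 GB of data over the OC-48 line arrives in roughly half a minute, while the floppy disk over ISDN takes about a minute and a half.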

The next page will compare analog and digital signals.
