Maintaining healthy network performance is crucial, as billions of data transfers occur every day.
The year 2020 began with approximately 44 zettabytes of data, which is about 40 times greater than the total number of stars in the observable universe, according to the World Economic Forum.
And this number is only going to grow in the coming years.
That’s an insane number, but true!
Therefore, there’s a dire need to ensure your network exhibits adequate performance to meet the growing demands.
Now, the elephant in the room is network performance, which depends on several factors, to name a few: Latency, Time To First Byte (TTFB), Bandwidth, and Throughput.
If you want to maintain a stable and high-performing network, you ought to optimize these factors. Optimizing them also helps you avoid penalties from major browsers and search engines.
So, prepare to understand these network terminologies and how you can optimize them for high performance.
What is Latency?
Latency literally means delay.
In the network ecosystem, latency is the time a request or piece of data takes to travel from its source to its destination. Here, a user’s action is the request, and latency is how long the web application takes to respond to it.
This delay also includes the time a server takes to process the request. Hence, latency is measured as a round trip: the total time for a request to be captured, processed through several devices, and then received and decoded by the user.
If the delays in data transmission are relatively small, it is low latency, which is desirable. But longer delays or high latency is not desirable as it deteriorates user experience.
But how will you know if your network exhibits high latency?
Some typical signs include:
Websites or applications take ‘forever’ to load
Accessing servers and web applications becomes slow.
Sending information takes longer, for instance, emails with large attachments.
So, when you come across such signs, the cause is probably a high-latency network.
Network latency is measured in milliseconds (ms), and some of it is unavoidable because multiple factors influence how networks communicate with one another. But you can reduce latency by implementing certain measures, which I will discuss in the upcoming sections.
Before that, let’s discuss the reasons behind network latency.
Server errors, such as 5xx errors, can affect the performance of applications and might also prevent visitors from reaching your website.
Poor backend database optimization, which may result from over-utilized databases, long fields, large tables, improper index usage, and complicated calculations.
Hardware issues arising from routers, Wi-Fi access points, switches, security devices, load balancers, firewalls, IPS, etc.
Transmission mediums like wireless connections, optical fiber cables, etc. have limitations.
When memory runs low, the operating system struggles to provide the RAM that programs need, which impacts system performance.
End-user issues, such as too few CPU or memory cycles to process responses within a reasonable time frame.
How to measure ⏱️ latency?
If you use networking monitoring tools from brands like SolarWinds and Datadog, you can examine network latency automatically.
But is there a manual way for this?
The answer is YES.
Just open the command prompt in your operating system and type tracert (or traceroute on Linux and macOS), followed by the destination you want to query.
After you run the command, a list of all the routers in the network path leading to the site address will appear, along with a time measurement in milliseconds for each hop.
Add up all the measurements to estimate the latency associated with your network.
There are certain methods to measure latency:
Time To First Byte (TTFB)
The time from the moment a request leaves a user’s device to the moment the ‘first’ byte of the response arrives back is known as Time To First Byte (TTFB).
TTFB is a crucial measure of network latency and server responsiveness as well.
Round Trip Time (RTT)
The total time a data packet takes while traveling from the source to its destination and then back to its source is referred to as Round Trip Time (RTT). It can deliver accurate results, but things may go hazy when the data packets take a different return path.
Generally, sysadmins use ping tests to measure how long 32 bytes of data take to reach the server and how long the response takes to come back. These tests can check different servers on a network at the same time and report overall latency and performance.
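If ICMP ping happens to be blocked on your network, a rough alternative is to time a TCP handshake. The sketch below does this in a few lines of Python; it is only an estimate, since the handshake takes roughly one round trip but also includes a little connection-setup overhead on both ends.

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Approximate round trip time by timing a TCP handshake.

    The three-way handshake completes in roughly one round trip,
    so the connect time is a reasonable RTT estimate when ICMP
    ping is unavailable.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        elapsed = time.perf_counter() - start
    return elapsed * 1000.0  # convert seconds to milliseconds
```

For example, `tcp_rtt_ms("example.com", 443)` returns the handshake time in milliseconds; averaging several samples smooths out jitter.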
How to reduce latency?

Use optimization tools that can efficiently address congestion in your network and improve routing.
You can also use amplifiers or regenerators to strengthen the signal if the issues lie in the transmission medium.
Compression and caching
Requests that traverse great distances add transmission time. If you manage an edge server situated near your end-users, it reduces the travel time and boosts page loading speed.
Apart from that, techniques like image optimization and file compression also reduce the bandwidth required to transfer large data volumes.
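To get a feel for how much compression can save, here is a small sketch using Python’s built-in gzip module on a made-up, highly repetitive HTML payload. Real-world savings depend on the content: text and markup compress well, while already-compressed images do not.

```python
import gzip

# A hypothetical, highly repetitive HTML page (markup compresses well).
page = b"<html><body>" + b"<p>Hello, world!</p>" * 500 + b"</body></html>"

compressed = gzip.compress(page)

print(f"original:   {len(page)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"ratio:      {len(compressed) / len(page):.1%}")
```

Web servers apply the same idea automatically when the client sends an `Accept-Encoding: gzip` header.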
Peering

Peering means allowing two or more networks to connect and exchange traffic directly without paying a third party to carry the traffic across the web.
So, maintain a properly connected network having several network paths available across the internet.
Optimize network protocols
Optimize network protocols for interoperability and the lowest possible latency while complying with the relevant standards.
HTTP/2 helps reduce server latency through fewer round trips and parallelized transfers. In addition, keep the number of external HTTP requests, including images, JS, and CSS files, to a minimum.
While this may not necessarily reduce network latency, it improves your website’s performance in terms of page loading speed.
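One way to audit how many external requests a page makes is to count the scripts, stylesheets, and images it references. This sketch uses Python’s built-in html.parser on a sample page invented for illustration; a real audit would also follow CSS imports and dynamically loaded assets.

```python
from html.parser import HTMLParser

class AssetCounter(HTMLParser):
    """Collects the URLs of scripts, stylesheets, and images in a page."""

    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and "src" in attrs:
            self.assets.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.assets.append(attrs["href"])
        elif tag == "img" and "src" in attrs:
            self.assets.append(attrs["src"])

# A hypothetical page with three external assets.
html = """
<html><head>
  <link rel="stylesheet" href="/styles.css">
  <script src="/app.js"></script>
</head><body>
  <img src="/logo.png">
</body></html>
"""
counter = AssetCounter()
counter.feed(html)
print(f"{len(counter.assets)} external assets: {counter.assets}")
```

Each asset found is a separate HTTP request a browser may have to make, so fewer entries in the list generally means fewer round trips.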
Time To First Byte (TTFB)
As I have already discussed, TTFB is a metric of server responsiveness; now let’s move on to its other aspects.
TTFB can help you identify the weak points in your connection process. If you can determine where the delays happen, you can tweak your services for faster, more reliable performance.
Not to mention, TTFB also impacts SEO; hence, it is crucial for your online visibility as well.
What factors impact TTFB?
Three actions impact TTFB:
Sending the server a request
Once a user makes a request, TTFB begins; how quickly the server receives the request depends on factors like DNS lookup time, network speed, server distance, and more.
Processing the request
Upon receiving the request, the server must process it and generate a response. This involves database calls, communication with other systems within the network, running scripts, and so on.
Sending the response back

Next, the server transmits its generated response back to the user who made the request. This depends on the connection speed of both the user and the enterprise’s network.
Here, TTFB is the time measured from when the user sends the request to when the first byte of the response begins to arrive.
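You can time this step yourself. The sketch below measures TTFB with Python’s built-in http.client; the clock starts before the connection is opened, so the figure captures DNS lookup, connection setup, server processing, and the network return trip.

```python
import time
from http.client import HTTPConnection

def measure_ttfb_ms(host: str, port: int = 80, path: str = "/") -> float:
    """Time from starting the request until the first response byte arrives."""
    start = time.perf_counter()
    conn = HTTPConnection(host, port, timeout=10)
    conn.request("GET", path)
    response = conn.getresponse()  # returns once the status line is read
    response.read(1)               # make sure at least one body byte arrived
    ttfb = (time.perf_counter() - start) * 1000.0
    conn.close()
    return ttfb
```

Running `measure_ttfb_ms("example.com")` a few times and averaging gives a more stable number than a single sample.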
How to improve TTFB?
Latency can occur on either side: your server or the user’s. Although you can’t control your users’ connection speeds, you can certainly work on your server speed. Try to reduce server load by leveraging CDNs, which place static content close to users and increase page loading speed.
Fast DNS resolve
A DNS resolve shouldn’t take longer than 100 milliseconds. If it does, consider optimizing your DNS settings. You may also change your DNS provider if the issue persists.
Upgrade your website hosting
A TTFB of more than 200 ms is not good for your website. The reason for a slow TTFB may be your hosting provider’s congested network and overworked servers.
If this is the case, you can raise the issue with your hosting provider or upgrade your plan. Otherwise, you can move your website to another provider. If you are using WordPress, then check out these premium hosting platforms.
Improve your backend performance
If you do not normalize or index your databases properly, it can drag down response times. Therefore, normalize and index your databases for prompt queries. Indexes also let your database find the relevant rows quickly instead of examining every one of them.
Reduce processor load and database queries by keeping frequently needed files and data in the server cache.
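As a tiny illustration of that caching idea, here is a hedged Python sketch using functools.lru_cache to keep the results of a frequently repeated query in memory. The database lookup is simulated with a sleep; in a real backend it would be an actual query.

```python
import time
from functools import lru_cache

def slow_db_lookup(user_id: int) -> dict:
    """Stand-in for an expensive database query (simulated with a sleep)."""
    time.sleep(0.05)
    return {"id": user_id, "name": f"user-{user_id}"}

@lru_cache(maxsize=256)
def fetch_user_profile(user_id: int) -> dict:
    # Only the first call per user_id hits the "database";
    # repeats are served from the in-memory cache.
    return slow_db_lookup(user_id)

fetch_user_profile(42)   # slow: goes to the database
fetch_user_profile(42)   # fast: served from the cache
print(fetch_user_profile.cache_info())
```

Production servers usually use a shared cache such as Redis or Memcached instead, but the principle of serving repeated reads from memory is the same.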
If you have optimized your TTFB by implementing a few of these tactics, that’s great!
And you will want it to stay that way.
But changes in software updates, hardware, site updates, and others may cause TTFB to increase. So, pay attention to these before it’s too late.
What is Bandwidth?

The maximum capacity of a network, shaped by factors such as network size and server processing capacity, is referred to as Bandwidth.
Simply put, it is a measure of the amount of data that can be sent and/or received in a given amount of time. It is measured in bits per second, megabits per second, or gigabits per second.
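A common trip-up with these units is that bandwidth is quoted in bits while file sizes are usually in bytes. This small sketch does the conversion to estimate an ideal transfer time, ignoring protocol overhead:

```python
def transfer_time_seconds(size_bytes: float, bandwidth_mbps: float) -> float:
    """Ideal time to move size_bytes over a link of bandwidth_mbps.

    Bandwidth is in megabits per second, so multiply bytes by 8
    to get bits before dividing.
    """
    bits = size_bytes * 8
    return bits / (bandwidth_mbps * 1_000_000)

# A 100 MB file over a 50 Mbps link:
print(transfer_time_seconds(100_000_000, 50))  # 16.0 seconds
```

Real transfers take longer than this ideal figure because of protocol headers, retransmissions, and shared links.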
Factors affecting Bandwidth
An internet connection has a particular maximum bandwidth, but certain factors can limit the bandwidth available to a specific device, resulting in a slower connection. The factors listed below can originate in the nature of the connection or in the user’s computer.
Bandwidth is affected by the total number of tasks you perform on your device. As the number of simultaneous tasks increases, the speed slows down. Hence, consider serializing your tasks.
Upstream & downstream bandwidth
Data flowing FROM a device is upstream while the data flowing TO a device is downstream.
In general, internet processes include more downstream use as compared to upstream; hence, internet connections emphasize more on downstream bandwidth.
So, bandwidth gets affected when the need for upstream bandwidth increases during large data transmissions, video chats, remote access, voice-over-IP calls, etc.
Bandwidth is also affected by the number of concurrent downloads and uploads occurring on your device.
Single connection, multiple users
As the number of users on a single network increases, server load increases and data transmission slows down.
If your device is placed closer to the router, you will experience higher bandwidth than when it is placed farther away.
How to improve Bandwidth?
Use QoS Settings
Quality of Service (QoS) settings help networks support important applications by applying traffic rules that prioritize certain traffic types. Hence, critical applications don’t have to struggle for bandwidth.
Cloud-based applications might help.
Move towards the cloud to enhance network performance. Try offloading certain parts of your traffic to private and public cloud networks to lift pressure from your own network.
Eliminate non-essential traffic
You can block non-essential traffic that provides no value to your productivity during business hours. This way, your bandwidth could be used only for running essential operations.
Periodic updates and backups
Updates and backups of your data and software are crucial from both a performance and a security point of view. But these processes take up a huge amount of network bandwidth.
This is why you should strategically schedule them, preferably outside normal business hours.
What is Throughput?
Network throughput is the measure of the total amount of data that can be transmitted from a source to the destination in a specified time-frame.
In other words, throughput measures the number of packets that arrive successfully at a destination. It is calculated in bits per second.
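In code, that calculation is just delivered bits over elapsed time. The sketch below computes throughput from bytes received and seconds elapsed; the sample numbers are made up for illustration.

```python
def throughput_mbps(bytes_delivered: int, seconds: float) -> float:
    """Throughput in megabits per second: successfully delivered bits / time."""
    return (bytes_delivered * 8) / (seconds * 1_000_000)

# 600 MB delivered in 60 seconds:
print(throughput_mbps(600_000_000, 60.0))  # 80.0 Mbps
```

Comparing this measured figure against the link’s rated bandwidth shows how much capacity congestion, latency, and packet loss are eating up.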
Factors affecting Throughput
Limitation of a transmission medium
Bandwidth or theoretical capacity associated with a transmission medium limits throughput.
For example, if the bandwidth is 100 Mbps, throughput won’t climb above that, no matter what. In practice, the usable rate would be around 95% of it, more or less.
Network congestion 🚸
In a highly congested network, throughput would be reduced.
High latency

If the latency on a particular network is high, throughput is going to suffer.
Protocol

The protocol carrying and delivering the data packets on a network can also affect throughput.
Packet loss or errors
In some types of traffic, packet errors and losses can affect throughput. It’s because those packets need to get re-transmitted, which further reduces throughput.
The reason behind compromised packets could be security attacks, damaged devices, and more.
If you want to measure throughput, you can use tools like SolarWinds, iPerf, etc.
How to improve Throughput?
One of the first things you should do is try to minimize network latency because it greatly impacts network performance and results in a poor user experience.
You can take help from the above-mentioned sections to address this issue.
Clear network bottlenecks
Avoid network bottlenecks by upgrading routers and reducing the total number of nodes, which shortens the traveling distance of packets. Hence, it can reduce congestion for better throughput.
Keep an eye on applications consuming too much bandwidth.
If some applications use far more than their fair share of your internet connection, throughput will drop. Hence, close applications that use too much bandwidth unless they are necessary.
Reboot the network
Schedule periodic reboots of your network systems, like modems, routers, etc. When they start working again, they can deliver better performance.
Check your hardware
Don’t let faulty hardware compromise throughput. Check your network hardware to find out whether any discrepancy is occurring at this end.
Have a word with your ISP
If you have everything in place, and you still encounter a poor throughput rate, consider communicating the same to your internet service provider (ISP). Maybe the fault is on their side.
I hope things are a bit clear about latency, TTFB, bandwidth, and throughput at your end. Try implementing the tips and tricks that I mentioned above to enhance the performance of your network.