Azure Traffic Manager

When you’re running an online business, or even just maintaining an online presence, you want your site to be highly available (and you’re using Azure Load Balancer for that!), but you also want it to load fast, no matter where your customers are.

So if you’re hosting your site in the EU, you want customers from the US and Asia to still have a good experience!

Network Latency

Latency is a term that gets used all the time, but many people don’t know what it actually means! Simply put, it’s the time it takes for a client to send a packet to a server, for the server to process it, and for the client to receive the response. In other words, latency is the round-trip time from browser to server and back.
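To make that concrete, here’s a minimal Python sketch that approximates latency by timing a TCP handshake against a server (the host name is just a placeholder, and a handshake is roughly one round trip):

```python
import socket
import time

def estimate_latency(host: str, port: int = 443) -> float:
    """Time a TCP handshake to approximate one round trip to the server."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # the connection completing means a round trip happened
    return (time.perf_counter() - start) * 1000  # milliseconds

# Example: measure the round trip to a (placeholder) site three times.
for _ in range(3):
    print(f"{estimate_latency('example.com'):.1f} ms")
```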

When you’re looking at network optimizations, or even just designing a network, latency is not the only thing you have to think about. There’s also bandwidth and throughput!

  • Bandwidth: How much data you can push over your network at once. It’s often described as narrow or wide, and you can think of it as a road: a narrow road can’t carry many cars at the same time, while a wide road can.
  • Latency: How long it takes for data to travel across the network from the client to the server and back. Think of it as a speed limit. The ultimate speed limit is the speed of light, so latency will never be lower than the time light needs to cover the distance the packet has to travel.
  • Throughput: How much data we actually transfer in a given time period. This is how many cars we’re really pushing through.

So if we have low latency (it takes little time to reach our servers) but low bandwidth (we can’t put much data on the network at once), our throughput will be low. If we have low latency and high bandwidth, we’ll have high throughput.
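As a rough sketch of how these three interact, here’s a small Python calculation: a single connection’s throughput is capped both by the link’s bandwidth and by how much data can be “in flight” per round trip (window size divided by latency). The numbers are made up purely for illustration:

```python
def max_throughput_mbps(bandwidth_mbps: float, window_bytes: int,
                        latency_ms: float) -> float:
    """A connection can't exceed the link bandwidth, and it also can't
    exceed one window of data per round trip (window / RTT)."""
    window_limit_mbps = (window_bytes * 8 / 1_000_000) / (latency_ms / 1000)
    return min(bandwidth_mbps, window_limit_mbps)

# Low latency + high bandwidth: high throughput (~105 Mbps).
print(max_throughput_mbps(bandwidth_mbps=1000, window_bytes=65_536, latency_ms=5))

# Same bandwidth, but high latency caps throughput far below the link speed (~3.5 Mbps).
print(max_throughput_mbps(bandwidth_mbps=1000, window_bytes=65_536, latency_ms=150))
```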

It’s not always this black and white, but it’s good to keep these three in mind when you’re working on a network.

So what causes latency?

Latency is normal and to be expected. After all, we haven’t found a way to go faster than the speed of light yet. But other factors can increase your latency, such as congested networks, or even silly things like double-encrypting traffic. The transmission medium, in the end, is just the road the packets travel on; it’s usually the network devices along the way that cause slowdowns. Then again, if you have a bad road, traffic will slow down or come to a complete halt…
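To put a number on that speed-of-light floor, here’s a quick back-of-the-envelope Python calculation for a round trip between the EU and the US east coast (the distance and fiber speed are rough assumptions):

```python
SPEED_OF_LIGHT_KM_S = 299_792                     # in a vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3    # light is roughly 1/3 slower in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, ignoring all processing delays."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

# Roughly Amsterdam to New York (~5,900 km, an assumption):
print(f"{min_rtt_ms(5_900):.0f} ms")  # ~59 ms before any device adds delay
```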

How can we then optimize customer experiences?

The first instinct would be to build more roads… and that’s not necessarily a bad thought! We’re all human, so we think a certain way. But perhaps it’s not the best solution. If you have customers all over the world, it makes more sense to scale out to different regions…

With Azure, we can deploy exact copies of your service in multiple regions. So we can run our website in regions all over the world, bringing it close to our end users!

Well, now we’re close to our users with our exact copies, but how do we ensure that users connect to the closest endpoint?

Traffic Manager to the rescue!

When deployed, Traffic Manager works at the DNS level: when a client looks up your domain, Traffic Manager answers with the globally distributed endpoint that best fits your routing method, typically using the location of the client’s DNS resolver to decide. It never sees the traffic that passes between client and server; it just directs the client’s web browser to a preferred endpoint. Traffic Manager supports several routing methods, such as sending users to the endpoint with the lowest latency.
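You can observe this DNS-level behavior yourself. As a sketch (the profile name below is hypothetical), resolving a Traffic Manager domain from different locations returns different endpoints:

```python
import socket

# Hypothetical Traffic Manager profile; resolving it returns whichever
# endpoint Traffic Manager picks for *your* location and routing method.
name, aliases, addresses = socket.gethostbyname_ex("contoso.trafficmanager.net")

print("Resolved via:", aliases)    # the CNAME chain Traffic Manager hands out
print("Endpoint IPs:", addresses)  # run this from the US and the EU: they differ
```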

But you don’t have to deploy only to Azure to use Traffic Manager: you can throw your on-premises network into the mix and add it as an external endpoint. It’s up to you to make the right call on where you host.
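As a rough sketch of what that looks like with the Azure Python SDK (azure-mgmt-trafficmanager), here’s how an on-premises server could be registered as an external endpoint. The subscription, resource group, profile, and host names are all placeholders, so treat this as an outline rather than a recipe:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import Endpoint

# Placeholder subscription ID; authenticate with whatever credential you use.
client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Register an on-premises server as an *external* endpoint in an existing profile.
client.endpoints.create_or_update(
    resource_group_name="my-resource-group",       # placeholder
    profile_name="my-traffic-manager-profile",     # placeholder
    endpoint_type="ExternalEndpoints",             # on-premises = external to Azure
    endpoint_name="onprem-datacenter",
    parameters=Endpoint(
        target="onprem.example.com",               # your on-premises host
        endpoint_status="Enabled",
        endpoint_location="West Europe",           # needed for performance/geographic routing
    ),
)
```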
