
CacheFly Blog

Capacity Upgrades and HTTP/2 Support

As part of our commitment to provide our customers with the fastest content delivery while handling ongoing bandwidth demand and growth in our customer base, we’ve added capacity and servers in the following locations:

  • Dallas
  • San Jose
  • London
  • Chicago

HTTP/2 Support Now Available for All Customers

In addition, we’ve been rolling out support for HTTP/2 since Q3 of last year, and we’re happy to announce that it’s now available to all customers. Addressing many of the intricacies and limitations of HTTP/1.1, HTTP/2 brings significant improvements for rapid delivery, with features including reduced HTTP header size, fewer connections between the client and server, multiplexing, and HPACK header compression.

With HTTP/2 + CDN, customers can experience:

  • Faster load time and reliability
  • Lower bandwidth usage
  • Better security
  • Higher ranking on Google’s SERP

Since browser vendors only support HTTP/2 over HTTPS connections, HTTP/2 is automatically enabled for all of our customers who use an encrypted connection (SSL certificate). End users whose browsers support HTTP/2 will experience the switch automatically. Customers using only a non-secured HTTP connection should consider making the switch to HTTPS to take advantage of HTTP/2’s many benefits. If you are a customer interested in adding SSL support to your website, our support team is happy to assist.

DIY CDN – Friend or Foe?

We are witnessing a hyperconnectivity era with everything and anything pushed to the Internet to take advantage of its broad footprint. Users are scattered everywhere and they all want consistent service independent of connected device or medium. Everyone has high expectations and no one is willing to wait for a slow page load or buffered video. Nowadays, service performance is critical but are today’s protocols prepared for this new age of connectivity?

The old protocols of the Internet are still in use today. Yes, there are tweaks and optimisations in an attempt to match current conditions, but the base mechanisms are the same. Because they have never been redesigned since inception, performance will always be a cat-and-mouse game. If the protocols had initially been designed for long RTTs, we wouldn’t have some of the application performance problems we see today.

IP is a simple protocol, consisting of only a few fields. However, the TCP applications that run on top of it are very advanced, some requiring strict performance metrics to operate at peak levels. The content that is carried for the application is no longer tied to single area deployments. More often than not, we have dispersed server and user base locations, creating performance challenges as to how content is optimally served.

So now we have high expectations and performance requirements from a variety of locations; however, the Internet is an unreliable place with unpredictable asymmetric patterns, bufferbloat and many other performance-related challenges. So how did we all get here?

We can’t change the speed of light!

The foundation of the Internet was based on the principle that everyone could talk to everyone. The original vision was universal reachability; networks were classed as equal citizens. There was an element of trust, and performance was never too much of an issue. We then moved to segment networks by putting clients behind Network Address Translation (NAT) devices, and a first-class/second-class citizen relationship emerged: the client-server model.

Throughout these movements, the role of the Internet stayed the same: connectivity regardless of location. With this type of connectivity model, distance and physical proximity play a significant role in application performance. Over time, connectivity models change, but one thing engineers can’t change is the laws of physics. The closer the user is to the content, the better the performance they experience. Keeping all content in the same location, while an attractive option from a data management perspective, satisfies only a small proportion of customers from a performance perspective. Dispersed users with content in one location does nothing for anyone.

Users on the opposite side of the planet from their content will suffer regardless of buffer sizes or other device optimisations. Everything is stuffed into HTTP, which rides on top of TCP. However, TCP is chatty, with a lot of back-and-forth communication that is poorly suited to serving content. Long RTTs are incurred as packets travel back and forth many times before actual data transmission begins. Caching and traffic redirection are used on long-distance links but have limitations. The only real way forward is to move the data right under the user’s nose.
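The cost of that chattiness is easy to quantify. As a rough sketch (the RTT values and the four-round-trip count for a TCP plus TLS 1.2 handshake are illustrative assumptions, not measurements), compare the time to first byte for a distant origin versus a nearby edge:

```python
# Rough model of time-to-first-byte over HTTPS: TCP handshake (1 RTT)
# + TLS 1.2 handshake (2 RTTs) + HTTP request/response (1 RTT).
# The RTT figures below are illustrative assumptions, not measurements.

def time_to_first_byte_ms(rtt_ms, round_trips=4):
    return rtt_ms * round_trips

distant_origin = time_to_first_byte_ms(150)  # user far from the content
nearby_edge = time_to_first_byte_ms(10)      # user close to a CDN PoP

print(distant_origin)  # 600 ms before the first byte arrives
print(nearby_edge)     # 40 ms from a nearby edge
```

No amount of buffer tuning removes those round trips; only shrinking the RTT does, which is exactly what moving content to the edge achieves.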

Not all content is pushed to the edge, only the content that is most used. A CDN works on a similar analogy to a user going to a library or a bookstore: one goes to a bookstore to buy a particular book, and if it’s not there, the owner can order it. There is no need to go to the library if you know which book you want.

Similarly, when a user requests a piece of content that is not cached, the CDN fetches it. This style of networking offers an entirely different perspective on connectivity. Instead of a simple universal reachability model where everyone speaks to everyone, we are now looking at intelligent ways to deliver content close to user locations. The money on the Internet is shifting from transit providers to content providers, and we are seeing a lot of cross-network communication flowing through CDN providers. If you want to move 200GB around a network, then a CDN is the way to do it.
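That fetch-on-miss behaviour can be sketched in a few lines (the cache, the origin_fetch stand-in, and the paths are all hypothetical):

```python
# Minimal sketch of a CDN edge cache: serve from the cache on a hit,
# fetch from the origin on a miss. All names here are illustrative.

cache = {}

def origin_fetch(path):
    # Stand-in for a real HTTP request back to the origin server.
    return f"content-of-{path}"

def serve(path):
    if path not in cache:                  # cache miss
        cache[path] = origin_fetch(path)   # one trip to the origin
    return cache[path]                     # later requests are cache hits

serve("/img/hero.jpg")  # first request: fetched from the origin
serve("/img/hero.jpg")  # second request: served from the edge
```

Real edge caches add TTLs, eviction, and purge on top, but the hit/miss split above is the core of the model.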

For optimised content delivery, we need a CDN. There is no doubting this, as there is no other way to deliver content closer to the user. And as much as we would like to, we can’t change the speed of light. So the next question becomes: how do we go about doing this?

Arguing Build vs Buy

There is a lot of misconception regarding the best way to implement a CDN. Is it better to build, or to buy from a trusted source? The availability of open source software and the variety of cloud providers should enable quick and cheap engineering, right? These days it’s easy to build a CDN. But this wasn’t always the case. Just turn the clocks back 10 years to the physical world of networking.

The traditional world of physical appliances presented a significant barrier during the build phase of a global footprint. CDN builders required vendor-specific appliances for firewalling, load balancing, and other network services. PoP locations required a costly and time-consuming physical build, not to mention the human element necessary to engineer all of this.

Back in those days, it was expensive to build any network; most often, venture capitalists were required. There was plenty of vendor lock-in, and open source was not too much of an option. Networking was static and lacked any agility.

It became easier with the push for open source products and x86 platforms. The birth of virtualization gave rise to the Virtual Machine (VM), and virtual appliances in VM format enabled Network Function Virtualisation (NFV) environments with the ability to chain services. The cloud offers ready-made, low-cost data centres.

Simply install your VMs in different cloud providers’ high-availability zones, implement Varnish for caching, and deploy Pingdom and GTmetrix for monitoring. We now have the capability to turn up a CDN in under an hour at very little cost. Building a CDN these days takes a little time and engineering skill, but the misconception is that the building is the hard part. This part is easy.

In these trying Internet times, it’s harder to achieve what you initially set out to do when you first built the CDN. If you’re not careful, you will end up with a CDN that performs worse than content stored in a single location. The cost of cloud VMs is practically nothing, but you will also require an outsourced traffic management solution, which introduces new relationship touchpoints at additional cost.

The most challenging part is not the build phase, but the operation and monitoring side of things. What do you do if your CDN is not performing well? How do you know? And who is responsible? – this is the hard part.

What makes a successful CDN?

It’s not about the build-versus-buy phase; it’s all about running one. A CDN is a serious technological infrastructure investment that needs care and feeding. To make a successful CDN, you need an experienced operations team with the right monitoring, so that when something goes wrong, it’s detected and fixed.

Performance-related goals must be set for the type of content you are serving. These metrics can only be derived from sound knowledge and a history of working within CDNs, and they are key to operating a successful CDN. If you don’t have performance goals and aren’t measuring performance, why are you trying to build a CDN? If you want to be faster than everyone, then how much faster? And more importantly, what steps do you take when you are not reaching performance targets?

CacheFly has been in operation for 12 years. That’s a long time to focus on one technology set, and the lessons learned are priceless to an operations team. This type of experience gives you the power to make the right decisions. Various customer engagements and hands-on Internet knowledge have led CacheFly to focus on a core technology set, making them the leader in that class.

CacheFly’s promise is superior performance all the time, with 100% availability. To achieve this, they have made core technology decisions and stuck to them. They don’t spread their operations and monitoring departments thin implementing every optimisation available; instead, they focus on static content, performance, and 100% reliability.

Streamlined Operations

A CDN of any size will have plenty of monitoring and backend systems used for the technical representation of its network. These could be either open source or commercial platforms. Commercial tools are expensive to buy, with recurring costs and relationship touchpoints. Many who build a CDN as part of their overlay network may have to choose a combination of open source tools that fulfil only a subset of the functionality of a commercial platform. Each tool may also require a different skillset and engineer.

What makes a successful, streamlined operations department is the full integration of these tools and the automation around the process. Is the billing system fully integrated, and does billing start at the correct time – on activation, not the order date? A single call from a backend system should signal device provisioning and changes, and all of these should be fully integrated with every other system for better operations.

If you build a bespoke CDN, many operational duties are performed manually, which is cumbersome and time-consuming. Instead of a manual approach requiring human intervention, every process, such as billing, should be integrated as part of the overlay delivery, with automatic notification of any network changes that affect billing.

Successful networks are never just about configurations. Streamlined operations should be present right from order delivery through to operational support. Device configurations and network monitoring are just part of running a CDN. The entire set of systems, departments, and processes must be aligned and integrated. Once the systems and processes are streamlined, automation of these events leads to a fully fledged, mature CDN. This is rarely the case with a CDN quickly spun up; it takes many years of lessons learned to formulate and fine-tune.

Building a CDN is easy! It’s the operational activities that pose the biggest challenges.

Contact CacheFly to schedule an appointment with their CDN experts, who can evaluate your needs and determine a custom-fit solution for you.
Request Free Assessment >


This guest contribution is written by Matt Conran, Network Architect for Network Insight. Matt Conran has more than 17 years of networking industry experience with entrepreneurial start-ups, government organisations and others. He is a lead Network Architect and has successfully delivered major global greenfield service provider and data centre networks.


Image Credit: Pixabay

Now Serving Dubai and Seoul

I’m pleased to announce that we’ve just added edge locations in Seoul and Dubai to better serve your users in both regions. Over the past few years, we’ve been rapidly expanding our network coverage with strategically placed edge locations to provide your end users with the fastest content delivery. The new additions bring our total PoP count to 43.

If you’re an existing CacheFly customer, no custom configuration is needed. Users who request content near Seoul or Dubai will automatically receive content from the new PoPs. For new customers, special pay-as-you-go pricing for Dubai will begin at 17¢/GB for the first 10TB, while delivery from Seoul starts at 15¢/GB for the first 10TB.

Holiday Readiness Begins Now

Black Friday and Cyber Monday are approaching in a few months… Will your retail website and apps handle the traffic?

For retailers, Black Friday and Cyber Monday can be the most lucrative opportunities, yet the biggest threats, of the year. Just last November, Cyber Monday surpassed $3B in online sales (up 16% from 2014)—marking the biggest online spending day in U.S. history. Despite the sales jump, high traffic volume managed to temporarily shut down retail sites for Target, Saks, ShutterFly, Footlocker, and Neiman Marcus—which missed Black Friday altogether with a full-day extended outage.



Target’s 404 page after record traffic made it crash on Cyber Monday 2015

Impact of Delays and Downtime on Retailers

What we learn from these delays and outages is the considerable impact they have on revenue. Today’s consumer wants instant gratification. Studies show that if a retailer’s site is making $100k per day, a delay as small as 1 second can potentially cost $2.5M in lost sales every year.
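The arithmetic behind that figure is straightforward, assuming the commonly cited estimate that a 1-second delay reduces conversions by roughly 7%:

```python
# Back-of-the-envelope for the lost-revenue figure above, assuming the
# commonly cited estimate that a 1-second delay cuts conversions by ~7%.

daily_revenue = 100_000   # $100k per day, as in the example above
conversion_loss_pct = 7   # assumed: ~7% conversion drop per 1s of delay

annual_loss = daily_revenue * conversion_loss_pct * 365 // 100
print(annual_loss)  # 2555000 -> roughly the $2.5M cited above
```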

If that’s not enough, the blow is compounded by intangible costs that leave a lasting impression in the mind of your customers—and on your brand’s reputation.

  • Damage to Brand Reputation: Outages and delays make news headlines. Customers take to social media, such as Twitter, to express their frustration and discontent.
  • Lost Competitive Advantage: If customers can’t access your site or app, they take their money elsewhere—to your competition.
  • Loss of Staff Productivity: IT, web operations and customer service teams are left dealing with the aftermath, having to troubleshoot and quickly resolve the issue while customer service handles irate customers.
  • Loss in Marketing and Advertising Spend: Efforts spent on promotions are wasted when customers reach a retail website, app or landing page, but can’t access it.

Upgrade your infrastructure, or risk losing revenue

For retailers, uptime is mission-critical. When the busiest shopping days of the year approach, you can’t afford to lose out on the opportunity. If you have doubts as to whether your retail site can withstand the growing volume of holiday traffic, the time to upgrade your infrastructure is now.

Ensure Reliability and Availability

It’s important to have a plan in place to ensure reliability and uptime. Retailers can prevent downtime by using a Content Delivery Network (CDN) that offers nothing less than a 100% availability SLA. The difference in service levels may seem marginal, but if a provider’s availability is 99.5%, that equates to 43 hours and 49 minutes of potential downtime per year, while a 99.99% SLA allows 52 minutes and 35 seconds of potential downtime per year. Is that acceptable—on the busiest shopping day of the year?
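Those downtime figures fall straight out of the SLA percentage (using a 365.25-day year):

```python
# Annual downtime permitted by an availability SLA, on a 365.25-day year.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

def downtime_minutes(availability_pct):
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

m_995 = downtime_minutes(99.5)    # ~2629.8 minutes per year
m_9999 = downtime_minutes(99.99)  # ~52.6 minutes per year

print(int(m_995 // 60), "h", int(m_995 % 60), "min")  # 43 h 49 min
print(int(m_9999), "min", int(m_9999 % 1 * 60), "s")  # 52 min 35 s
```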

While there are many CDNs out there to choose from, CacheFly CDN offers features critical for retailers during the busy holiday season:

  • Highest Availability—100% SLA: While no single server can guarantee 100% uptime, CacheFly CDN provides instant failover that automatically reroutes requests to the nearest available servers, delivering 100% availability.
  • Fastest File Throughput Performance: Independent network monitoring tests reveal that CacheFly has the fastest file throughput performance, to deliver your website and app files fast—no matter the location.
  • Infinite Scalability: Manage peak traffic on Black Friday and Cyber Monday with ease, knowing no matter how many global requests you receive, your site and mobile apps will seamlessly scale on demand.
  • Instant Purge: If merchandise is out of stock, or the wrong pricing or item is listed, it’s important to remove it immediately. With instant purge, items can be removed easily, and CacheFly’s super-fast replication pushes updates to global users within seconds.
  • Gzip Compression: Most retail sites consist of lots of product images, including large hero images, which can cause slowdowns. Gzip compression reduces file sizes ‘on the fly’—most effective for text assets such as HTML, CSS, and JavaScript—to optimize transfer over the Internet and save on bandwidth costs.
  • Security: With security features such as SSL support and URL/referrer blocking, CacheFly ensures your customers’ confidence in purchasing from your retail website securely.
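To give a feel for the gzip item above, here is a quick demonstration of how well repetitive markup compresses (the HTML snippet is invented for illustration; already-compressed formats like JPEG see little benefit):

```python
# Compress a chunk of repetitive HTML with gzip and compare sizes.
# The markup is invented for illustration; text compresses very well,
# while already-compressed formats (JPEG, PNG) barely shrink.

import gzip

html = b"<div class='product'><img src='hero.jpg' alt='Hero'></div>" * 200
compressed = gzip.compress(html)

print(len(html) > 10_000)                 # True: ~11 KB of markup
print(len(compressed) < len(html) // 10)  # True: over 90% smaller here
```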

Using a CDN can help you capitalize on the busiest holiday shopping days of the year by ensuring availability and scalability, faster delivery, and a better overall user experience to increase conversions and profits. In some cases, it can mean the difference between making the most sales of the year and suffering complete system failure. The time to prepare is now.

Contact us to schedule an appointment with our CDN experts, who can evaluate your needs and determine a custom-fit solution for you.
Request Free Assessment >

Related Topics:
Measuring Throughput Performance: DNS vs. TCP Anycast
Is Your Site Ready for the Busiest Cyber Monday?
Cloud Benchmarking Provider Finds CacheFly Leads in Global Throughput
Application Downtime: A Critical Concern for SaaS and Web App Providers

Using a CDN to Improve Your SEO Ranking

People hate to wait.

Almost half of web visitors will abandon a web page that takes longer than 3 seconds to load. When you fail to deliver your site fast enough, you will lose visitors (and conversions)—and Google thinks you should too.

In 2010, Google began using Site Speed (how quickly a website responds to web requests) as one of the 200 factors that determine search rankings. If your page load speed is too slow, it can hinder your site’s ability to be found on Google’s SERP. While the impact of site speed on SERP rankings affects fewer than 1% of search queries, Google’s acknowledgement of speed as a critical factor speaks volumes. Google constantly adjusts its algorithms to provide Google users with the best possible user experience—so it makes sense that a site’s speed can affect its SERP position on Google.

How Fast Should My Avg. Page Load Speed Be?

Maile Ohye of Google says, “2 seconds is the threshold for e-commerce website acceptability. At Google, we aim for under a half second.”

How Google Measures Site Speed

When measuring page load speed, Google considers the following:

  • Time to First Byte (TTFB): The time from the initial request to the first byte of data returned, and
  • Critical Render Path: When all files ‘above the fold’ are fully rendered.

Google PageSpeed Insights evaluates a site’s desktop and mobile page speed and provides recommendations to optimize performance.

How Do I Optimize My Page Load Speed?

  • Use a Content Delivery Network (CDN): A CDN caches your website files (HTML, JavaScript, CSS, images, videos, etc.) on geographically distributed servers, to serve them physically closer to your end users—resulting in faster delivery.

However, not all CDNs are built the same. If you choose a CDN provider known for outages, you wind up with downtime, which is not only bad for your users but also for your SEO—since search bots are unable to crawl your site. You must choose a CDN that can offer 100% uptime.

In addition to offering a 100% uptime SLA, CacheFly CDN provides you with:

  • Fastest file throughput delivery—which reduces TTFB, thereby decreasing your average page load speed.
  • Global POP coverage, distributed in six continents—to deliver your site closer and faster to your users.
  • Automatic gzip compression, to compress your image files on the fly—optimizing delivery and reducing bandwidth costs.
  • Infinite scalability to scale on demand to traffic bursts—eliminating the risk of crashes and timeouts.
  • Security, including token-based auth, URL/referrer blocking, and origin shield.

What you get is a faster loading site and the highest availability, which will help increase your SERP ranking and more importantly—provide your users a better experience.

Don’t let slow load times affect your SERP ranking. Start optimizing your site speed today. Contact us to schedule an appointment with our CDN experts, who can evaluate your needs and determine a custom-fit solution for you.
Request Free Assessment >