
CacheFly Blog

Capacity Upgrades and HTTP/2 Support

As part of our commitment to providing our customers with the fastest content delivery while keeping pace with ongoing bandwidth demand and growth in our customer base, we’ve added capacity and servers in the following locations:

  • Dallas
  • San Jose
  • London
  • Chicago

HTTP/2 Support Now Available for All Customers

In addition, we’ve been rolling out support for HTTP/2 since Q3 of last year, and we’re happy to announce that it’s now available to all customers. Addressing many of the intricacies and limitations of HTTP/1.1, HTTP/2 brings significant improvements for rapid delivery, including multiplexing over fewer connections between the client and server and reduced HTTP header size through HPACK compression.

With HTTP/2 + CDN, customers can experience:

  • Faster load times and improved reliability
  • Lower bandwidth usage
  • Better security
  • Higher ranking on Google’s SERP

Since browser vendors only support HTTP/2 over HTTPS, HTTP/2 is enabled automatically for all customers who serve content over an encrypted connection with an SSL certificate. End users whose browsers support HTTP/2 will experience the switch automatically. Customers using only a non-secured HTTP connection should consider making the switch to HTTPS to take advantage of HTTP/2’s many benefits. If you are a customer interested in adding SSL support to your website, our support team is happy to assist.
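If you’d like to verify the switch yourself, one quick way is to check which protocol a TLS endpoint negotiates via ALPN, the mechanism browsers use to select HTTP/2. The short Python sketch below is illustrative only and uses the standard library; the hostname is a placeholder for your own CDN hostname.

```python
# Minimal sketch: report the ALPN-negotiated protocol for a TLS endpoint.
# "h2" means the connection will use HTTP/2.
import socket
import ssl

def negotiated_protocol(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])  # offer HTTP/2 first
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol() or "http/1.1"

print(negotiated_protocol("username.cachefly.net"))  # placeholder hostname
```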

Anycast – “Think” before you talk – Part II

This post is Part II of Anycast – “Think” before you talk.

Directing Users to PoPs

Traditional methods of routing users to the closest PoP rely on DNS-based geographic load balancing. DNS attempts to map user requests to the nearest PoP by handing out DNS records based on the user’s latitude and longitude. Essentially, the IP address of the PoP is handed to the user based on the IP of the resolver, not the actual client IP address.

Benefits of DNS-based LB

One of the main advantages of DNS-based load balancing is control: administrators can direct any DNS request to any node, which is useful for traffic management purposes. It also offers flexibility in deployment, as you don’t need common carriers or to deal with the intricacies of Anycast routing on the Internet.

However, there are plenty of tradeoffs: unlike the Anycast-based approach, it does not handle topology changes well out of the box.

Challenges of DNS-based LB

The majority feel it’s a very naive approach with plenty of shortcomings and suboptimal user placement. The issues mainly arise from PoP failover, resolver proximity, low TTLs and timeouts.

Suboptimal decisions are made because the DNS mapping is based on the IP of the user’s DNS resolver, not the client’s actual IP address, which makes DNS-based load balancing an inaccurate method for client-proximity routing. As a result, administrators can only ever optimise performance for the location of the DNS resolver. This has improved in recent years with EDNS Client Subnet, but it is not yet implemented in all resolvers.
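To make the limitation concrete, here is a small illustrative sketch of DNS-based geo mapping (not CacheFly’s resolver logic): the nearest PoP is computed from the resolver’s coordinates, so a client using a distant resolver gets a distant PoP. The coordinates and the geo_lookup function are assumptions for illustration only.

```python
# Illustrative sketch of DNS-based geo load balancing. The PoP is chosen from
# the *resolver's* location, which is exactly why placement can be suboptimal
# when the resolver sits far from the client.
from math import radians, sin, cos, asin, sqrt

POPS = {"Chicago": (41.88, -87.63), "London": (51.51, -0.13), "San Jose": (37.34, -121.89)}

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) pairs in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def pick_pop(resolver_ip, geo_lookup):
    # geo_lookup is a hypothetical GeoIP function returning (lat, lon).
    # Note the decision key: the resolver's IP, not the end user's IP.
    where = geo_lookup(resolver_ip)
    return min(POPS, key=lambda pop: haversine_km(POPS[pop], where))
```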

DNS also doesn’t fail very gracefully when you are in the middle of a session and need to be rerouted to a different PoP. Users may have to close and reopen their browsers before things start working again.

DNS TTLs can also cause lag and performance issues. Whenever you need to change your answer, you have to wait for the DNS TTL to expire; during a failover scenario, the TTL of the cached response must be reached before clients change locations. Unfortunately, some applications hold on to these values for a long time. Setting a low TTL mitigates this, but the tradeoff is performance, as resolvers must frequently re-request the same DNS record.

The Anycast Approach

Instead of handing out different IPs, Anycast is a mechanism that announces the same IP address from multiple locations. Anycast is nothing special – it’s simply a route with multiple next hops. It’s not very different from Unicast; an NLRI object has multiple next hops instead of one. All the magic happens where the packet is delivered, not in the underlying network transporting it.

When you advertise the same prefix from multiple locations, the shortest path is chosen based on the user’s location. Traffic therefore organically lands where it should, as opposed to being directed explicitly based on GeoIP. Anycast does not rely on a potentially stale GeoIP database; performance rests on the natural flow of the Internet.
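As a toy illustration (not a BGP implementation), the routing decision reduces to picking the shortest of the paths a client’s network hears for the shared prefix; the PoP names and path lengths below are made up.

```python
# Toy model: the same prefix is announced from several PoPs, and the client's
# network simply uses the announcement with the shortest path it hears.
# No GeoIP database is consulted anywhere.
paths_seen_by_client = {   # PoP announcing the shared prefix -> path length
    "Chicago": 3,
    "London": 2,
    "San Jose": 4,
}

best_pop = min(paths_seen_by_client, key=paths_seen_by_client.get)
print(best_pop)  # "London": the client lands there purely through routing
```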

The resolver IP is not used; instead, the client IP drives anycast routing. This subtle difference offers a more accurate view of where users are located. Users can use whatever resolver they want and will still get the same assignment. As a result, the choice of DNS resolver becomes trivial: whatever the question, the answer will be the same.

With an Anycast design, there are trade-offs between performance and stability. Anycast works best with a metro- or regional-level design and single-PoP-per-location deployments; multiple PoPs per location might run you into problems. As a general best practice, the more physical distance you have between your PoPs, the more stable the overall architecture will be.

Anycast Organic Traffic

Anycast is natively not load aware. Large volumes of inbound traffic could potentially saturate a PoP. While this is also true for Unicast traffic, DNS-based routing offers better control over PoP placement, as you can hand out specific IP blocks for specific locations.

The DNS answer may be suboptimal, but it still offers a better level of supervision for traffic management purposes. With the Anycast approach to PoP placement, organic traffic flows naturally to each PoP location; you can’t control this. Some control was given up in moving from traditional DNS-based routing to a TCP-Anycast CDN. So what’s the best course of action under these circumstances? Should you oversubscribe each PoP to account for the lack of control?

First and foremost, when it happens you need to be aware of it; it’s not acceptable to miss a flood of traffic entering your network. The right monitoring tools need to be in place, along with a responsive and active monitoring team. Much of the cause of large inbound flows sits upstream, for example when a provider breaks something. So it will happen; it’s just a matter of time. The best way to deal with it is through active monitoring and preparation.

CacheFly has the experience and monitoring in place to detect and mitigate large volumes of inbound traffic. The network architecture consists of private connections between all PoP locations, streamlining the shedding of traffic to undersubscribed PoPs as the need arises. In the event of high inbound traffic flows, CacheFly’s proactive monitoring and intelligent network design shift traffic between locations, mitigating the effects of uneven traffic flows inherent to an Anycast design.

Benefits of Anycast

Anycast is deemed to fail over more quickly than DNS, to perform better, and to be simpler to operate. Anycast doesn’t suffer from any of the DNS correlation issues, and it doesn’t matter which DNS server the request came from. The client takes the fastest path from its own location, as opposed to the fastest path from wherever the DNS resolver sits.

Anycast is a simple, less complex way for user assignment. You’re pushing the complexity and responsibility to the Interior Gateway Protocol (IGP) of the upstream provider, relying on the natural forwarding of the Internet to bring users to the closest PoP.

With an Anycast design, the next time you click a link on a page, or whenever your browser goes out and refreshes content, you are on your way to a new PoP. Anycast is faster because traffic shifts can happen much more quickly, and you don’t have to degrade user performance by keeping a low DNS TTL.

Upon network failure, Anycast fails over far more quickly than Unicast. Suppose you are having routing issues between location X and location Y. With Anycast, a TCP RST is received and the client immediately reconnects to the new location. Without Anycast, clients will continually attempt to reach the server in location X; as it’s not available, they remain stuck until either:

a) The providers converge, but until then, users are waiting, timing out, reloading and timing out over and over again.

b) If location X is down, the client has to wait for the geo DNS to notice and hand out a new IP address. The old IP must time out in the application or resolver, and users may need to close and reopen the web browser for things to start working again.

Anycast, on the other hand, breaks quickly and gets back to work quickly. During outages, traffic is seamlessly routed to the next best location without requiring browser restarts, a type of convergence not possible with traditional DNS solutions.

Anycast enables the use of high TTLs, as the actual IP address of the endpoints never changes. This allows resolvers to cache responses, improving the overall end-user experience and network efficiency.

It’s also a great tool in a DDoS mitigation solution. With botnet armies reaching terabit-scale attacks, the only cost-effective defence is to distribute your architecture, naturally absorbing the attack with an Anycast network.

Everything is debatable

Anycast does, however, require some form of stickiness so that all packets of a flow get the same forwarding treatment; as a result, per-packet load balancing can break Anycast. Per-packet load balancing is rarely seen these days, though there’s a chance it exists somewhere in a far-flung ISP. Generally speaking, we are designing better networks these days.
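To see why stickiness matters, here is a simplified sketch of per-flow ECMP hashing (an illustration, not any vendor’s actual hash): every packet of a TCP flow maps to the same next hop, so the session keeps hitting the same PoP, whereas per-packet balancing could scatter one flow across PoPs.

```python
# Sketch of flow-level ("per-flow") hashing with a simplified ECMP hash.
import hashlib

NEXT_HOPS = ["pop-a", "pop-b", "pop-c"]  # hypothetical anycast PoPs

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return NEXT_HOPS[digest % len(NEXT_HOPS)]

# The same 5-tuple always maps to the same PoP:
assert ecmp_next_hop("198.51.100.7", "203.0.113.1", 54321, 443) == \
       ecmp_next_hop("198.51.100.7", "203.0.113.1", 54321, 443)
```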

TCP/IP uses a separate protocol, ICMP, for out-of-band signalling. As a result, those messages (Path MTU Discovery, for example) may receive different forwarding treatment and may not reach the intended receiver. Technically this is still an issue, but it is not widely a problem on TCP/Anycast networks.

Anycast endpoint selection is based on hop count. That does not mean routing is based on lowest latency or best-performing links; fewer hops do not mean lower latency. Some destinations may be one hop away, but that hop could be a high-latency intercontinental link. More often than not, though, traffic doesn’t have to traverse intercontinental links to reach its final destination. With intelligent PoP placement, content is placed close to the user in each region.

Anycast does take control away from the administrator and place it in the hands of the Internet. As user requests organically land at the closest PoP, strict supervision over where users land is removed, potentially leading to capacity-management issues at each edge location. As already discussed, this is overcome with experienced monitoring teams, which is another reason why you shouldn’t go with a DIY CDN.

Summary

People overestimate how unreliable the Internet is with regard to broad events, underestimate the impact of those events on Unicast, and overestimate the impact on Anycast. The unreliability of the Internet is built into its design; the Internet is designed to fail! Yet we assume that, under a failure, if we are using TCP/Anycast and a session terminates at the wrong place, the world stops and everything else breaks.

If, during an intermediate failure or misconfiguration event, a TCP segment destined for Server X lands on Server Y, then since Server Y does not have a matching active TCP session, it will, as it should, send an RST back to the client. But if your application doesn’t handle network interactions like this very well, you really shouldn’t be running it on the Internet.
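On the client side, handling that RST can be as simple as retrying the request. The sketch below is an illustration of that pattern under stated assumptions (placeholder URL, standard library only), not CacheFly guidance.

```python
# Minimal client-side retry: if a request dies with a connection reset because
# the flow landed on a different anycast PoP, retrying opens a fresh TCP
# session towards whichever PoP routing now selects.
import time
import urllib.error
import urllib.request

def fetch_with_retry(url, attempts=3, backoff=0.5):
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()
        except (ConnectionResetError, urllib.error.URLError):
            if attempt == attempts - 1:
                raise
            time.sleep(backoff * (attempt + 1))  # brief pause, then retry
```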

Networks are built to fail, and they will fail! If you are looking for 100% network reliability and the application can’t handle failures, then you should maybe look to rebuild the application.


This guest contribution is written by Matt Conran, Network Architect for Network Insight. Matt Conran has more than 17 years in the networking industry with entrepreneurial start-ups, government organisations and others. He is a lead Network Architect and has successfully delivered major global greenfield service provider and data centre networks.

 

Image Credit: Pixabay

Anycast – “Think” before you talk

Part I

Introduction

How you experience the performance of an application boils down to where you stand on the Internet. Global reachability means everyone can reach everyone, but not everyone gets the same level of service. The map of the Internet has different perspectives for individual users and proximity plays a significant role in users’ experience.

Why can’t everyone everywhere have the same level of service, and why do we need optimisations in the first place? Mainly, it boils down to the protocols used for reachability and old congestion-management techniques. The Internet comprises old protocols that were never designed with performance in mind. As a result, there are certain steps we must take to overcome their shortcomings.

Performance Challenges

The primary challenge arises from how the Transmission Control Protocol (TCP) operates under both normal conditions and stress, once the inbuilt congestion control mechanisms kick in.

Depending on configuration, it can take anything from 3 to 5 RTTs before data starts flowing back to the client. In addition, TCP’s congestion control mechanisms typically allow only 10 segments to be sent in the first RTT, increasing after that.

Unfortunately, this is the basis for congestion control on the Internet, and it hinders application performance, especially for applications with large payloads.
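To make that concrete, here is a back-of-the-envelope sketch (assumed 1460-byte MSS, initial window of 10 segments, ideal doubling each RTT, no handshake or loss) of how payload size translates into round trips on a cold connection.

```python
# Rough slow-start model: count the round trips needed to push a payload.
MSS = 1460  # bytes per segment (an assumption)

def rtts_to_send(payload_bytes, initcwnd=10):
    cwnd, sent, rtts = initcwnd, 0, 0
    while sent < payload_bytes:
        sent += cwnd * MSS
        cwnd *= 2            # exponential growth while in slow start
        rtts += 1
    return rtts

print(rtts_to_send(1_000_000))  # about 7 RTTs for a 1 MB payload
```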

Help at Hand

There are a couple of things we can engineer to help with this. The main one is to move content closer to users by rolling out edge nodes (PoPs) that proxy requests and cache static content. Edge nodes increase client-side performance because all connections terminate close to the users. In simple terms, the closer you are to the content, the better the performance.

Other engineering tricks involve tweaking how TCP operates. This works to a degree, making bad less bad or good better, but it doesn’t necessarily turn a bad connection into a good one.

The right peering and transit also play a role; careful selection based on population and connectivity density is a key factor. Essentially, optimum performance comes down to many factors working together, all in the service of reducing latency as much as possible.

PoP locations, peering, transit and all the available optimisations are only part of the puzzle. The next big question is: how do we get users to the right PoP location? And during failure events, how efficiently do users fail over to alternative locations?

In theory, we have two main options for PoP selection:

a) Traditional DNS based load balancing,
b) Anycast.

Before we address these mechanisms, let’s dig deeper into some key CDN technologies to better understand which approach is optimal.

Initially, we had web servers in central locations serving content locally. As users became more dispersed, so did the demand for content. You cannot serve the entire world from a single stamp! There needs to be some network segregation, which gives rise to edge nodes, or PoPs, placed close to the user.


What PoPs Solve

Employing a PoP decreases connection time, as we terminate the connection at the local PoP. When the client sends an HTTP GET request, the PoP forwards it to the data centre over an existing hot TCP connection. The local PoP and the central data centre are continually talking, so the congestion control windows are large, allowing even 1MB of data to be sent in one RTT. This greatly improves application performance; the world without PoPs would be a pretty slow one.
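As rough arithmetic for that claim (the window size here is purely an illustrative assumption), once the persistent connection’s congestion window has grown past roughly 700 segments, a 1MB response fits inside a single window and therefore a single round trip.

```python
MSS = 1460                 # bytes per segment (an assumption)
warm_cwnd_segments = 720   # a warmed-up window on the persistent connection
print(warm_cwnd_segments * MSS >= 1_000_000)  # True: 1MB fits in one RTT
```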


Selecting the right locations for PoP infrastructure plays an important role in overall network performance and user experience. The most important site-selection criterion is to go where the eyeball networks are; you should always try to maximise the number of eyeball networks you are close to when you roll out a PoP. As a result, two fundamental aspects come into play: physical and topological distance.

Well-advanced countries have well-developed peering strategies, while others are not so lucky, with less peering diversity due to size or government control. An optimal design targets large population centres with high population and connectivity density. With power and space a secondary concern, diverse connectivity is king when it comes to selecting the right PoP location.


New Architectures

If you had built a content delivery network ten years ago, the design would have consisted of heavy physical load balancers and separate appliances to terminate Secure Sockets Layer (SSL). Current best-practice architecture has moved away from this; it’s now all about lots of RAM, SSD and high-performance CPUs piled into compute nodes. Modern CPUs are just as good at handling SSL, and it’s cleaner to terminate everything at the server level rather than on costly dedicated appliances.

CacheFly pushes network complexity to its high-performing servers and runs equal-cost multipath (ECMP) right down to the host level. Pushing complexity to the edge of the network is the only way to scale and reduce central state. ECMP down to the host gives you a routerless design and gets rid of centralised load balancers, allowing incoming requests to be load balanced in hardware on the Layer 3 switch while the TCP magic happens on the host.

CacheFly operates a Route Reflector design consisting of iBGP internally and eBGP to the WAN.


Forget about State

ECMP designs are not concerned with scaling an appliance that holds lots of state. Stateful devices are always hard to scale, and load balancing with pure IP is much easier. It allows you to do inbound load balancing in hardware without the high costs and operational complexity of multiple load balancers and expensive routers. With the new architectures, everything looks like an IP packet, and all switches forward packets in hardware. With appliances, there usually also need to be two for redundancy, plus additional spares in stock just in case; costly physical appliances sitting idle in a warehouse are good for no one.

We have already touched on the two methods for getting clients to the PoP: traditional DNS-based load balancing and Anycast. Anycast is now deemed the superior option, but in the past it has met some barriers. Anycast has long been popular in the UDP world and now offers the same benefits to TCP-based applications. Yet there have been barriers to adoption, mainly down to inaccurate information and a lack of testing.

Barriers to TCP Anycast

The biggest problem for TCP/Anycast is not route convergence or application timeouts; it’s that most people think it doesn’t work. People believe they know, without knowing the facts or putting anything into practice to get those facts.

If you haven’t tested it, then you shouldn’t talk; if you have used it and experienced problems, let’s talk. People think that routes are going to converge constantly and bounce between multiple locations, causing TCP resets. This doesn’t happen as much as you’d think, and it’s much worse when Anycast is not used.

There is a perception that the Internet, end to end, is an awful place. While there are many challenges, it’s not as bad as you might think, especially if the application is built correctly. The Internet is never fully converged, but is this a major problem? If we have deployed an Anycast network, how often would the Anycast IP announced from a given PoP change? Almost never.

The Internet may not have a steady state, but what does change is 1) the association of a prefix to an Autonomous System (AS) and 2) the peering between ASes, two factors that can alter best-path selection. As a result, we need reliable peering relationships, but this has nothing to do with the Anycast-versus-Unicast debate.

Building better Networks

Firstly, we need to agree that there is no single global rulebook for network design, and the creative art of networking comes into play in every service provider design. While SP networks share similar connectivity goals, each and every SP network is configured and designed differently: some with per-packet load balancing, but most without. Still, as a network design community we are rolling out better networks.

There will always be an individual art to network design unless we fully automate the entire process from design to device configuration, which will not happen on a global scale any time soon. There are many ways and approaches to network design, but the consensus is that we are building and operating better networks. The modern Internet, a network that never fully converges, is overall pretty stable.

Nowadays, we are building better networks, and we are forced to do so because networks are crucial to service delivery. If the network is down or not performing adequately, the services that run on top of it are useless. This pressure has pushed engineers towards a business-oriented approach to networking, with automation introduced as an integral part of maintaining network stability.


New Tools

In the past, we had primitive debugging and troubleshooting tools, with ping and traceroute the most widely used. Both are crude ways to measure performance and only tell administrators whether something is “really” broken. Today, we have an entirely new range of telemetry systems at our disposal that show administrators where the asymmetric paths are and overlay network performance with numerous Real User Monitoring (RUM) metrics.

Continue Reading Anycast – “Think” before you talk – Part II


This guest contribution is written by Matt Conran, Network Architect for Network Insight. Matt Conran has more than 17 years in the networking industry with entrepreneurial start-ups, government organisations and others. He is a lead Network Architect and has successfully delivered major global greenfield service provider and data centre networks.

Image Credit: Pexels

DIY CDN – Friend or Foe

We are witnessing a hyperconnectivity era, with everything and anything pushed to the Internet to take advantage of its broad footprint. Users are scattered everywhere, and they all want consistent service regardless of device or medium. Everyone has high expectations, and no one is willing to wait for a slow page load or a buffering video. Service performance is critical, but are today’s protocols prepared for this new age of connectivity?

The old protocols of the Internet are still in use today. Yes, there are tweaks and optimisations in an attempt to match current conditions, but the base mechanisms are the same. As they have never been fundamentally redesigned since inception, performance will always be a cat-and-mouse game. If these protocols had been designed with long RTTs in mind, we wouldn’t have some of the application performance problems we see today.

IP is a simple protocol, consisting of only a few fields. However, the TCP applications that run on top of it are very advanced, some requiring strict performance metrics to operate at peak levels. The content carried for these applications is no longer tied to single-area deployments; more often than not, we have dispersed server and user locations, creating performance challenges around how content is optimally served.

So now we have high expectations and performance requirements from a variety of locations; however, the Internet is an unreliable place with unpredictable asymmetric patterns, bufferbloat and many other performance-related challenges. So how did we all get here?

We can’t change the speed of light!

The foundation of the Internet was based on the principle that everyone could talk to everyone. The original vision was universal reachability; networks were classed as equal citizens. There was an element of trust, and performance was never too much of an issue. We then moved to segment networks by putting clients behind Network Address Translation (NAT) devices, and a first-class/second-class citizen relationship emerged: the client-server model.

Throughout these movements, the role of the Internet stayed the same: connectivity regardless of location. With this type of connectivity model, distance and physical proximity play a significant role in application performance. Connectivity models change over time, but one thing engineers can’t change is the laws of physics: the closer the user is to the content, the better the performance. Keeping all content in the same location, while attractive from a data-management perspective, satisfies only a small proportion of customers from a performance perspective. Dispersed users with content in one location does nothing for anyone.

Users on the other side of the planet from their content will suffer regardless of buffer sizes or other device optimisations. Everything is stuffed into HTTP, which rides on top of TCP. However, TCP is chatty, with a lot of back-and-forth communication poorly suited to serving content: long RTTs are experienced as packets travel back and forth many times before any actual data is transmitted. Caching and traffic redirection help on long-distance links but have limitations. The only real way forward is to move data right under the user’s nose.

Not all content is pushed to the edge, only the content that is used most. A CDN works much like a bookstore: one goes to a bookstore to buy a particular book, and if it’s not there, the bookstore owner can order it. There is no need to go to the library if you know which book you want.

Similarly, if a user requests a piece of content that isn’t cached, the CDN fetches it. This style of networking offers an entirely different perspective on connectivity. Instead of a simple universal-reachability model where everyone speaks to everyone, we are now looking at intelligent ways to deliver content close to user locations. The money on the Internet is shifting from transit to content providers, and a lot of cross-network communication now flows through CDN providers. If you want to move 200GB around a network, a CDN is the way to do it.

For optimised content delivery, we need a CDN. There is no doubting this, as there is no other way to deliver content closer to the user; and as much as we would like to, we can’t control the speed of light. So the next question becomes: how do we go about doing this?

Arguing Build vs Buy

There is a lot of misconception about the best way to implement a CDN. Is it better to build one or to buy from a trusted source? The availability of open source and the variety of cloud providers should enable quick and cheap engineering, right? These days it’s easy to build a CDN, but this wasn’t always the case. Just turn the clock back ten years to the physical world of networking.

The traditional world of physical appliances presented a significant barrier to building a global footprint. CDN builders required vendor-specific appliances for firewalling, load balancing and other network services. PoP locations required a costly and time-consuming physical build, not to mention the human element necessary to engineer all of this.

Back in those days, it was expensive to build any network; most often, venture capitalists were required. There was plenty of vendor lock-in, and open source was not too much of an option. Networking was static and lacked any agility.

It became easier with the push towards open-source products and x86 platforms. The birth of virtualisation gave rise to the Virtual Machine (VM), and virtual appliances in VM format enabled Network Function Virtualisation (NFV) environments with the ability to chain services. The cloud provides ready-made, low-cost data centres.

Simply install your VMs in different cloud providers’ availability zones, implement Varnish for caching, and deploy Pingdom and GTmetrix for monitoring. We now have the capability to turn up a CDN in under an hour at very little cost. Building a CDN these days takes a little time and engineering skill, but the misconception is that everyone thinks the building is the hard part! This part is easy.

In these trying Internet times, the harder task is achieving what you initially set out to do when you first built the CDN. If you’re not careful, you will end up with a CDN that performs worse than content served from a single location. The cost of cloud VMs is practically nothing, but you will also require an outsourced traffic-management solution, which brings new relationship touchpoints and additional cost.

The most challenging part is not the build phase but the operation and monitoring side of things. What do you do if your CDN is not performing well? How do you know? And who is responsible? That is the hard part.

What makes a successful CDN?

It’s not about the build-or-buy phase; it’s all about running one. A CDN is a serious technological infrastructure investment that needs care and feeding. To make a CDN successful, you need an experienced operations team with the right monitoring, so that when something goes wrong, it’s detected and fixed.

Performance-related goals must be set for the type of content you are serving. These metrics can only be derived from sound knowledge and a history of working with CDNs, and they are key to operating a successful CDN. If you don’t have performance goals and aren’t measuring performance, why are you trying to build a CDN? If you want to be faster than everyone else, how much faster? And more importantly, what steps do you take when you are not reaching your performance targets?

CacheFly has been in operation for 12 years. That’s a long time to focus on one technology set, and the lessons learned are priceless to an operations team. This type of experience gives you the power to make the right decisions. Various customer engagements and hands-on Internet knowledge have led CacheFly to focus on core technology sets, making them the leader in that class.

CacheFly’s promise is superior performance all the time, with 100% availability. To achieve this, they have made core technology decisions and stuck to them. They don’t spread their operations and monitoring departments thin implementing every optimisation available; instead, they focus on static content, performance and 100% reliability.

Streamlined Operations

A CDN of any size will have plenty of monitoring and backend systems providing a technical representation of the network. These can be either open-source or commercial platforms. Commercial tools are expensive to buy, with recurring costs and relationship touchpoints. Many who build a CDN as part of their own network have to settle for a combination of open-source tools that fulfil only a subset of the functionality of a commercial platform, and each tool may also require a different skillset and engineer.

What makes a streamlined operations department successful is the full integration of these tools and the automation around the process. Is the billing system fully integrated, and does billing start at the correct time, on activation rather than order date? A single call from a backend system should trigger device provisioning and changes, and all of these should be fully integrated with every other system for better operations.

If you build a bespoke CDN, many operational duties end up being handled manually, which is cumbersome and time-consuming. Instead of a manual approach requiring human intervention, processes such as billing should be integrated into the delivery workflow, with automatic notification of any network changes that affect billing.

Successful networks are never just about configuration. Streamlined operations should be present from order delivery right through to operational support. Device configuration and network monitoring are only part of running a CDN; the systems, departments and processes must all be aligned and integrated. Once the systems and processes are streamlined, automating these events leads to a fully fledged, mature CDN. This is rarely the case with a CDN that has been quickly spun up; it takes many years of lessons learned to formulate and fine-tune.

Building a CDN is easy! It’s the operational activities that pose the biggest challenges.

Contact CacheFly to schedule an appointment with their CDN experts, who can evaluate your needs and determine a custom-fit solution for you.
Request Free Assessment >

 


This guest contribution is written by Matt Conran, Network Architect for Network Insight. Matt Conran has more than 17 years in the networking industry with entrepreneurial start-ups, government organisations and others. He is a lead Network Architect and has successfully delivered major global greenfield service provider and data centre networks.

 

Image Credit: Pixabay

Troubleshooting 404 Errors

When a client accesses CacheFly’s CDN to request your content, every request we receive is answered with an HTTP status code. These status codes represent how each request was handled. A brief overview of the status code classes:

  • 1xx Informational
  • 2xx Success
  • 3xx Redirection
  • 4xx Client Error
  • 5xx Server Error

Let’s focus on 4xx errors – specifically, the 404 Not Found status code, the common reasons behind it, and how to resolve them.
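For reference, the class is just the leading digit of the code; a tiny illustrative helper makes the mapping explicit.

```python
# Illustrative only: map an HTTP status code to the classes listed above.
def status_class(code: int) -> str:
    return {1: "Informational", 2: "Success", 3: "Redirection",
            4: "Client Error", 5: "Server Error"}.get(code // 100, "Unknown")

print(status_class(404))  # "Client Error"
```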

How to Find Out if Your Files Are 404ing

If you haven’t yet heard about it from an end-user, or stumbled upon a 404 page on your own, you can find out for certain if any of your files are 404ing via your CacheFly portal.

To generate a report, go to:  Statistics > Report Type > 404 by File

 


Here’s how to troubleshoot where you’re 404ing

When using Reverse Proxy, take the CacheFly URL that is 404ing and replace the ‘username.cachefly.net’ portion of the URL with your origin domain. This is the URL that we use to pull from your origin server. If the object returns a 404 on your origin server, then we’re not able to proxy and cache it.
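A small helper can automate that comparison. The sketch below is illustrative, not CacheFly tooling; the URLs are placeholders for your own 404ing CacheFly URL and origin domain.

```python
# Compare the status code returned by the CDN URL with the same path on the
# origin: a 404 at the origin explains the 404 at the edge.
import urllib.error
import urllib.parse
import urllib.request

def status_of(url):
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        return exc.code

cdn_url = "http://username.cachefly.net/images/logo.png"  # the URL that is 404ing
origin_url = urllib.parse.urlparse(cdn_url)._replace(netloc="www.example-origin.com").geturl()

print(status_of(cdn_url), status_of(origin_url))  # a 404 at the origin means we can't cache it
```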

Common Reasons for 404 Errors

If you’ve run a report and found errors, it’s time to troubleshoot why your page(s) are 404ing. Here are some common reasons for 404 errors and how to resolve them.

File is Not Uploaded (push)
A 404 can occur if the requested objects have not yet been uploaded to ftp.cachefly.com.
Visit our helpful starter guide for instructions on how to properly upload to CacheFly.
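If you script your uploads, a push can be as simple as the sketch below (the credentials and file names are placeholders; see the starter guide for the supported workflow).

```python
# Hedged sketch: push a file over FTP so requests for it stop 404ing.
from ftplib import FTP

with FTP("ftp.cachefly.com") as ftp:
    ftp.login("username", "password")         # your CacheFly FTP credentials
    with open("logo.png", "rb") as fh:
        ftp.storbinary("STOR logo.png", fh)   # upload into the account root
```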

Inaccurate URL /Misspelled URL
Inaccurate URLs are a common reason for 404s. This can be caused by misspellings/typos or a misconfigured plugin.

Incorrect HTTP Referer
An incorrect HTTP referer occurs when you’ve set up referrer-blocking rules in the customer portal to prevent hotlinking.
To resolve this, include a valid referrer in your test request
(e.g. curl -I -e 'http://www.valid-referrer.com' http://username.cachefly.net/test_file.jpg).

Incorrect Origin Definition
If, after your initial setup, all objects start 404ing from the CDN delivery edge, this is due to an incorrect origin definition.
To remediate the issue, verify that your origin domain resolves to the correct host. This should be a publicly reachable web server that will respond to our pull requests with an HTTP 200.
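A quick way to sanity-check the resolution step (the hostname is a placeholder):

```python
# Illustrative check: confirm the origin hostname resolves to the IP you expect.
import socket

origin_host = "origin.example.com"   # placeholder for your origin domain
print(socket.gethostbyname(origin_host))
```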

Incorrect CNAME Configuration
If you’ve created a CNAME record to alias your CacheFly-provided subdomain, you’ll need to enter that alias in the hostname manager.
Here’s a helpful tutorial on how to configure CNAMEs.

HTTP Links on an HTTPS Page
Unless we’ve added a Subject Alternative Name to our edge certificate, SSL requests made using a CNAME record will 404.
To resolve this, contact support@cachefly.com for pricing and availability. For more information, read about the types of SSL we support.

If you still need help, contact us at support@cachefly.com. We are always happy to assist!