Why Settle on a Hosting Provider? Bandwidth liquidity and other issues

At the Trenton Computer Festival Professional Conference in April 2009, I presented “Web Efficiency: Using XHTML, CSS, and Server-side to Maximize Efficiency.” The focus of my presentation was that efficiency, scale, and costs are inextricably connected.

Put simply, the resources required to produce content for a web user vary dramatically depending upon the implementation of the web request. Implementations or techniques that are good for a single content display or a handful of displays in a “proof of concept” can ruinously expend resources in production. This often manifests when systems are scaled for tens, hundreds, or thousands of users at a time. This article is about one aspect of this problem: self-hosting versus using hosting providers, and the collateral issue of bandwidth availability on a moment-to-moment basis, what I will call “bandwidth liquidity.”

Efficient use of bandwidth as discussed here is but one aspect of efficient IT operations. Overall efficiency is a more extensive topic which I will cover in future postings.

During my Trenton Computer Festival Professional Conference presentation, a question arose: “Which is better: using a hosting provider or self-hosting?” Unsurprisingly, the very phrasing of the question is part of the challenge. Posing this question as “either/or” represents a false dichotomy. The problem is ill-posed for two reasons:

First, self-hosting is typically questioned on two grounds: robustness and bandwidth availability. However, these two concerns do not capture the true nature of the challenge.

Second, many choices exist, from the use of owned servers at hosting facilities (generally referred to as “co-location”), to decisions on whether to use virtual or physical servers, to choices of shared or dedicated servers, whether co-located or not. None of these choices, in and of itself, necessarily solves the problem.

Irrespective of hosting choice, unpredictability remains the fundamental challenge of operating on the web. Dramatic, unforeseen surges in demand are one such challenge. A single reference on a popular talk show, blog, or news outlet can result in a quantum jump in demand, creating a totally unexpected tsunami of requests. The aggregate bandwidth required to respond to such a tsunami is staggering.

Demand tsunamis are not new. So-called “virtual mobs” have been a problem since the earliest consumer adoption of the Internet. These mobs often arise in response to scheduled events, such as Victoria’s Secret’s 1999 online fashion show. They can also arise spontaneously, in an instant, their growth accelerated by the frictionless communications environment. A single comment, propagated by mass media, social networking channels (e.g., Twitter or Facebook), or geolocation services (e.g., Foursquare, Brightkite, or Loopt), can lead to a tsunami of requests. There is no gradual ramp-up of activity or warning sign of popularity. The transition from zero or low activity to frenzy occurs in the blink of an eye.

A related problem arises when a site with modest bandwidth requirements offers bandwidth-intensive presentations or multimedia files. Aggregate bandwidth may not measurably increase, but there is an instantaneous need to transmit a large volume of data, even at modest request rates. Serving an unpredictable request tsunami requires comparable short-term bandwidth. This demand for instantaneous bandwidth is the heart of the problem. Aggregate bandwidth over time is not the issue; it may very well be adequate to serve most requests. Rather, the problem is a lack of bandwidth on a short-term basis.

One could simply purchase bandwidth to satisfy the worst-case demand. However, provisioning high-speed connections for peak load is expensive, particularly if the peak load is significantly greater than the average load. Fortunately, installing a faster broadband circuit is not the only potential solution. The key is the dramatic difference between short-term instantaneous bandwidth requirements and long-term average utilization: in most cases, the surges are irregular and episodic.

Business people are well aware of the difference between long-term and short-term sufficiency. It is common to have long-term sufficiency simultaneously with a short-term drought. In finance, this mismatch is referred to as a “liquidity crisis.” It is nothing more complex than having a bill due today while being scheduled to receive your paycheck next week. Companies of all sizes, from small businesses to major corporations, can fall victim to liquidity crises. In 1970, Penn Central Company filed what was then the largest bankruptcy in US history; it had significant assets, but no cash. The 2008 failure of the investment firm Bear Stearns was similar: a short-term cash shortage, not a long-term lack of capital and assets. While much of the writing on “liquidity crises” refers to finance, liquidity crises can occur in any sphere where an intermediary moderates the differential between short-term and long-term requirements, albeit sometimes under different phraseology. An instantaneous shortfall in communications bandwidth is, in effect, no different from an overbooked airline flight or hotel room.

For simplicity, this article will use the financial terminology. I will refer to this mismatch between long-term adequacy and short-term bandwidth availability as a question of “bandwidth liquidity.” More precisely, what is relevant is the “bandwidth liquidity ratio”: the ratio of provisioned (installed) bandwidth to promised (sold) bandwidth.
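To make the ratio concrete, here is a minimal sketch in Python; the two providers and their figures are purely hypothetical, chosen only to illustrate the calculation.

    # Bandwidth liquidity ratio: provisioned (installed) bandwidth divided by
    # promised (sold) bandwidth. All figures below are hypothetical.

    def bandwidth_liquidity_ratio(installed_mbit_s: float, sold_mbit_s: float) -> float:
        """Fraction of the bandwidth promised to customers that is actually installed."""
        return installed_mbit_s / sold_mbit_s

    # A conservative provider: 1,000 Mbit/s installed, 1,200 Mbit/s sold.
    print(bandwidth_liquidity_ratio(1000, 1200))    # ~0.83 -- close to 100%

    # A heavily oversold provider: 1,000 Mbit/s installed, 100,000 Mbit/s sold.
    print(bandwidth_liquidity_ratio(1000, 100000))  # 0.01 -- a highly "leveraged" 1%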

Even the most inexpensive broadband connections have uplink speeds of hundreds of kilobits per second (kbit/s). One common speed is 384 kbit/s. This translates to 48 kbytes/second. Over the course of a day, it is theoretically possible to transfer over 4 gigabytes.[1] The theoretical capacity of higher-bandwidth circuits is correspondingly larger.
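A quick back-of-the-envelope check of those figures, assuming full utilization and decimal units:

    # Theoretical daily transfer of a 384 kbit/s uplink at full utilization.
    uplink_kbit_s = 384
    kbytes_per_second = uplink_kbit_s / 8              # 48 kbytes/second
    bytes_per_day = kbytes_per_second * 1000 * 86_400  # 86,400 seconds per day
    print(bytes_per_day / 1e9)                         # ~4.15 gigabytes per day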

While a modest website may not exceed the daily transfer bandwidth of such a circuit, shortfalls may still occur over shorter time intervals. Individual web site visitors experience this as the delay between click and complete page refresh. This delay (or lack of it) determines perceived quality. It is on this far shorter time scale that even a small website can quickly exceed the bandwidth of a low-end DSL connection. Activity surges only exacerbate the problem. This is often the motivation for clients asking: Which hosting provider should I use?
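A hypothetical example shows how quickly the short time scale dominates; the page size and visitor count below are assumptions chosen only for illustration.

    # One 500-kbyte page served over the 48 kbytes/second uplink from the example above.
    page_kbytes = 500
    uplink_kbytes_s = 48
    seconds_for_one_visitor = page_kbytes / uplink_kbytes_s
    print(seconds_for_one_visitor)        # ~10.4 seconds for a single visitor
    print(5 * seconds_for_one_visitor)    # ~52 seconds if five visitors click at once

The daily transfer for those five page views is a trivial 2.5 Mbytes, yet the perceived delay would drive visitors away.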

This question is closely connected to the operations required to service web requests. Web browsers and intermediate nodes are free to maintain local copies of static material, while dynamic content must be transferred anew each time it is required. This reality has a direct impact on content-serving efficiency.

While much content is subject to change, change is not the complete story. Web pages are composed of many elements. The elements that constitute the overwhelming mass of data transferred rarely change, although the assortment used on a single page may vary dramatically. Images of all kinds, from product catalog images to icons and backgrounds, are highly stable, and changes are infrequent. Many other components are not dynamic on a per-page-view basis (e.g., long-term historical stock statistical data) and likewise change infrequently. Some components do change often, sometimes in response to user activities. It is this dynamic content, and the performance challenges it presents, that I turn to next.
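As a sketch of how that distinction can be made explicit, the fragment below (assuming, purely for illustration, a Python-based test server, a one-week lifetime for images, and an arbitrary list of static suffixes) marks stable assets as cacheable while requiring everything else to be revalidated:

    # Illustration only: mark stable assets (images, icons, stylesheets) as cacheable
    # so browsers and intermediate nodes may keep local copies, while dynamic pages
    # are always revalidated. Suffixes and lifetimes are arbitrary assumptions.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    STATIC_SUFFIXES = (".png", ".jpg", ".gif", ".ico", ".css")

    class CacheAwareHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            if self.path.lower().endswith(STATIC_SUFFIXES):
                self.send_header("Cache-Control", "public, max-age=604800")  # one week
            else:
                self.send_header("Cache-Control", "no-cache")
            super().end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), CacheAwareHandler).serve_forever()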

Dynamically generated content does have appropriate uses. The question is: “When is dynamic content necessary?” The process of balancing on-demand content against offline-generated content is complex, with many tradeoffs. Content generated dynamically, page by page, is orders of magnitude more expensive to serve to users. This may not be significant when a single page is served as a proof of concept, but when scaled to production volumes it can create demand requiring rooms full of server hardware.

Those who focus solely on dynamic technologies for generating web content suffer from tunnel vision. The answer to efficiently serving material to web users is to minimize the use of dynamic web technologies. This minimizes resource consumption and maximizes return on investment.

It is far more productive to divide content into different classes and distribute the load to different web servers, or if need be different server farms. The technologies used for this are not the web-centric technologies related to dynamic server pages, but the underlying technologies of the Internet, most notably DNS, the Domain Name System.

Changing content is not always dynamic. Some variable content is generated by applications on a per-page-view basis; other changing content may change far less frequently. Still other data may be sufficiently sensitive that, while it can be served via the web to those authorized, it would be inappropriate, unwise, or merely infeasible to locate it offsite. Customer financial data and medical records are examples of such data.

The cost side of the comparison shows a dramatic difference. High-bandwidth hosting accounts are available for far less than the expense of even low-end DSL circuits. For example, in New York City, a low-end DSL circuit costs between US$ 50.00 and US$ 100.00 per month. A hosting account at a major hosting provider, with far higher peak bandwidth, costs approximately US$ 10.00 per month. Why is this the case? There is a market reality consistent with this dramatic difference in costs: actual website utilization.

Hosting providers can offer these low prices because they act as intermediaries. In this role, they are the bandwidth ecosystem's analogue to stocking distributors, consolidators, and banks. They provision datacenters and broadband connectivity, reselling capacity to smaller users at a markup. Like any stocking distributor, they give their customers the illusion of a large inventory by taking advantage of the reality of fractional utilization: not everyone will attempt to fully draw on their inventory at the same time.

Profitability depends on carefully balancing the degree to which the installed capacity is oversold. Unused bits per second have no value; shortfalls are damaging to customers. This is the opposite side of the coin from a leased broadband circuit, which generally has a Committed Data Rate (CDR) as part of the underlying agreement or tariff. In contrast, hosting accounts typically measure data volume over time, often using gigabytes per month (Gbytes/month) as the unit of measure, with the instantaneous data rate left unstated.

Virtual machine or co-located servers often include a Committed Data Rate (CDR) as a guaranteed minimum of provided bandwidth. In some cases, such arrangements also specify a Peak Information Rate (PIR), similar to that used with ATM (Asynchronous Transfer Mode) and Frame Relay networks, offered on an “as available” basis. This is dramatically different from many shared serving arrangements, which often commit only to a monthly transfer volume, with a peak data rate that differs from the average data rate by several orders of magnitude.
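The gap is easy to quantify. The plan and port speed below are hypothetical, but they show why a monthly transfer allowance says almost nothing about the instantaneous rate:

    # Hypothetical shared-hosting plan: 100 Gbytes/month of transfer, no stated peak rate.
    monthly_gbytes = 100
    seconds_per_month = 30 * 86_400
    average_mbit_s = (monthly_gbytes * 8 * 1000) / seconds_per_month   # Gbytes -> Mbits
    print(average_mbit_s)                 # ~0.31 Mbit/s implied average rate

    # If the shared server sits on a 100 Mbit/s port, the available peak rate
    # exceeds the implied average by more than two orders of magnitude.
    print(100 / average_mbit_s)           # ~324x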

The non-CDR hosting model relies upon the high probability that website usage over time is not constant, and that while a website may need momentary bursts of bandwidth, a steady stream of high bandwidth is not required. Thus, a hosting provider offering what seems like a dramatically low cost is relying on the same phenomenon as any other so-called “unlimited usage” business (buffets, mobile phone plans, rail/transit passes): limited actual utilization.

The combination of virtual machine or co-located servers with a Committed Data Rate, and shared serving arrangements with best-efforts service at a fraction of the stated data rate, creates “tranches”: classes of service with differing commitments and properties.

When events go as planned, everyone benefits. A hosting provider can host many web sites on the presumption that not every site will require full bandwidth at the same instant in time.

When the amount of sold bandwidth greatly exceeds the actual bandwidth installed, customers experience problems and delays in access. This is the case of a highly leveraged provider, with a very small fractional bandwidth liquidity ratio.

Just as it is less expensive to stay in a hotel for an occasional trip, but cheaper to make other arrangements for a stay of several weeks or months, virtual server/co-location with a Committed Data Rate is a higher-grade commitment than shared hosting.

However, bandwidth liquidity, the ratio of bandwidth installed to bandwidth committed or sold, remains a crucial indicator of how realistic a provider's guarantee is. A provider whose ratio approaches 100% (committed bandwidth equals installed bandwidth) is offering a far more reliable solution than a provider who has sold 100 or 1,000 times the installed bandwidth. The fractional utilization presumption is what allows modern banks to use most of their deposit base for lending, rather than keeping cash sitting in the vault lest all depositors appear asking for their cash.[2] This same presumption allows hosting providers to provide bandwidth at far less cost than is possible using directly provisioned circuits. Hosting is no different from any other service provided by an outsider, from electric power to catching a cab to the airport: the presumption in using a shared service is that not everyone will require the service at the same time. When this presumption is invalidated, the system becomes clogged.

When any system is provisioned to maximize efficiency by sharing resources with others, the problem of a demand spike or “run” on those resources is inevitable. While the analogy is not precise, the phenomenon of too many “hot” websites requiring instantaneous bandwidth is strikingly similar to a “bank panic”, a liquidity shortfall caused by an unexpectedly large number of withdrawals.[3] The odds of such a shortfall depend directly on how well the hosting provider is managed and on the degree to which the installed bandwidth has been over-committed. The previously mentioned bandwidth liquidity ratio is an indicator of the degree to which providers can successfully service unexpected surges in activity.

The high-intensity load related to images and multimedia does not require complex traffic-management hardware or software. What is needed is a modicum of foresight when generating the code that forms the web site (be it HTML or other code, such as ASP, JSP, or CSS).

Most web sites presume that all files reside in a single directory tree. The root of this tree is known as the “document root.” For my firm’s web site, which hosts this blog, the root is http://www.rlgsc.com/... .

Providing for the possibility of shifting the relatively unchanging image files to a high-bandwidth hosting provider requires nothing more than changing the Uniform Resource Locators (URLs) of images to reference a different root (e.g., http://images.rlgsc.com/...).[4]
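A minimal sketch of that change follows; it assumes the existing pages reference images with site-relative paths such as /images/..., it uses the images.rlgsc.com host name from the example above, and the helper script itself is hypothetical.

    # Hypothetical helper: repoint image references at a separate high-bandwidth host.
    # Assumes pages currently use site-relative paths such as src="/images/logo.png".
    import re
    from pathlib import Path

    IMAGE_HOST = "http://images.rlgsc.com"   # host name from the example above

    def rewrite_image_urls(html: str) -> str:
        # src="/images/foo.png"  ->  src="http://images.rlgsc.com/images/foo.png"
        return re.sub(r'src="(/images/[^"]+)"', rf'src="{IMAGE_HOST}\1"', html)

    for page in Path(".").glob("*.html"):
        page.write_text(rewrite_image_urls(page.read_text()))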

The elegance of this approach is that the underlying infrastructure is independent of the provisioning solution chosen to achieve higher or lower bandwidth and bandwidth liquidity. Whether the files are relocated to a different host within the organization, a co-located server, or a shared server managed by a hosting provider, the process is essentially identical. The only necessary actions are copying the files en masse, then changing the A or CNAME record for the site's images host (e.g., images.rlgsc.com) to point to the new server. This change is completely transparent to the end user of the web site. The only observable change for users should be improved performance.
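Once the record has been updated, a quick sketch such as the following (using the images.rlgsc.com host name from the example above) confirms where the name now resolves:

    # Confirm where images.rlgsc.com resolves after the A or CNAME change propagates.
    import socket

    addresses = {info[4][0] for info in socket.getaddrinfo("images.rlgsc.com", 80)}
    print(addresses)   # should list the address(es) of the new high-bandwidth host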

Segmenting traffic further is no more complicated. Images are large, but multimedia, video, and audio files are orders of magnitude larger still. A similar approach may be applied separately to those files as well (e.g., http://multimedia.rlgsc.com/...).

Simple DNS management is the only technology required to implement this segmenting of traffic by size and volatility. This allows you to leverage outside resources while keeping control of the critical components of your web site.

In overall effect, this is a very low-budget implementation of a content delivery network.

The choice of a hosting provider in many ways comes down to both the available bandwidth and the robustness of that bandwidth. Variations in website design and end-user utilization create challenges for maintaining consistent performance. Proper website design and operational management go a long way toward mitigating or solving these issues. Not all solutions reside within web servers. Efficiencies may often be gained by re-allocating resources, including the judicious use of external providers. In this way, the potentially massive effects of short-term bandwidth shortages may be effectively eliminated by a combination of client-side caching and externally provided high-bandwidth servers handling the bulk of the supporting content. This approach is the cornerstone of high-end, high-volume content delivery networks. In a very real sense, the techniques outlined in this article enable you to create a low-budget content delivery implementation with much of the benefit and relatively little of the complexity or cost.

Notes

[1] Assuming full utilization of the link. Local contention, server load, and competing traffic at the ISP level all tend to reduce aggregate capacity.
[2] P. Samuelson and W. Nordhaus, Economics, 18th Edition.
[3] Ibid., “The Contagion of Bank Panics,” p. 520.
[4] Just because the host name is different does not imply that the actual directories are separate. It merely implies that they are reached through a different host name. This is an important consideration when transitioning content, as old and new HTML files may coexist for a period. Transitioning to this scheme requires creating new web server settings, then new DNS records (e.g., for the images host), establishing the high-bandwidth hosting arrangements, placing a copy of the files on the high-bandwidth hosting location, changing the DNS pointers to the high-bandwidth hosting arrangement, and then changing URLs contained in HTML, CSS, and other files.

References

  • J. McNamara, Technical Aspects of Data Communication, Digital Press, Bedford, Massachusetts, 1977.
  • R. Gezelter, “”, Commerce in Cyberspace, February 1996.
    Retrieved from http://www.rlgsc.com/tcb/plaintalk.html on May 12, 2010.
  • P. Samuelson and W. Nordhaus, Economics, 18th Edition.
