It’s a delight to see major service providers coming around to the message that the cloud isn’t just about network-hosted services; it concerns the whole of networking, resources in the data center, and security to protect customers’ data both in motion and at rest.
But there’s been a disconnect between the network-hosted data center part of the cloud and the network itself. The (virtualized, multi-tenant) data center component is flexible and highly scalable. Need more compute power or storage capacity? Adjust the dial as needed. Networking has been much more static. Companies moving large loads from their premises into and out of cloud-based data centers, or shuttling data between data centers, depend on having enough spare bandwidth at each site to complete the transfers in a reasonable time frame.
Enter bandwidth-on-demand, with a disclaimer. Personally, I’ve disliked the bandwidth-on-demand model that has been around to date, and have given anyone who asks an earful about why I think the current model is limited. The current concept uses bandwidth/QoS sliders accessible via a web portal, which a network administrator can adjust as needed, pushing a button to confirm the order. The IP/MPLS VPN ports between locations make the bandwidth available within minutes, assuming it can be provisioned end-to-end. The customer is usually billed daily for the extra bandwidth between locations.
The current bandwidth-on-demand paradigm uses a lot of carrier-side technology to solve a business problem that can be mostly duplicated by burstable and tiered (i.e. flat-rate plus burstable overage) billing plans. Tata Communications, for example, can also shift between classes of service at Layer 3 via an Inter-CoS bursting feature, and supports burstable bandwidth at Layer 2 via a Priority Stretch feature. If the unpredictability of variable billing is an issue, an enterprise IT department can always rate-limit ports at the CPE router level. Few services mix Inter-CoS features with burstable billing, but even traffic marked Best Effort has pretty impressive performance nowadays. Best Effort is usually good enough when it comes to moving large volumes of data.
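That CPE-level rate-limiting safety valve is, at its core, a token-bucket policer: the router forwards traffic up to a committed rate plus a burst allowance and drops (or remarks) the rest, which caps what a burstable billing plan can ever charge. A minimal sketch in Python (the class and parameter names are illustrative, not any vendor's configuration syntax):

```python
import time

class TokenBucket:
    """Illustrative token-bucket policer: traffic above the committed
    rate and burst allowance is rejected, capping usage (and thus
    capping burstable-billing exposure)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes/second
        self.capacity = burst_bytes       # maximum burst size in bytes
        self.tokens = burst_bytes         # bucket starts full
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        # Refill tokens for the time elapsed since the last packet.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                   # conforming: forward
        return False                      # exceeding: drop or remark

# A 10 Mbit/s committed rate with a 100 KB burst allowance:
policer = TokenBucket(rate_bps=10_000_000, burst_bytes=100_000)
```

Real CPE routers implement the same idea in their QoS policing features; the point is that the enterprise, not the carrier, decides where the ceiling sits.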
But it looks like the bandwidth-on-demand model is evolving, and the new concepts look like a big improvement. Some service providers have been talking about software-defined networking, in the form of opening up application programming interfaces (APIs) that let applications themselves request bandwidth and QoS in an automated way.
Application-driven bandwidth-on-demand is probably a scary concept to enterprise IT departments that need to control costs. But as part of a cloud service, it’s an elegant, sensible solution. The IT administrator turns up the pay-as-you-go cloud and agrees on a rate. The data center and network are aligned when it comes to uploading and downloading data: The network opens the spigots wide, while the data center supplies the compute and storage resources needed for the task. The limits remain whatever maximum bandwidth is available end-to-end, with the access network the most likely choke point.
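To make the application-driven model concrete, here is a rough sketch of the kind of order an application might assemble and send to a carrier's bandwidth API. Everything here is invented for illustration: the field names, units, and the `build_bandwidth_request` helper do not correspond to any specific provider's interface.

```python
import json

# Hypothetical request an application might send to a carrier's
# bandwidth-on-demand API before a large inter-data-center transfer.
# All field names and units are illustrative assumptions.

def build_bandwidth_request(site_a, site_b, mbps, hours, cos="best-effort"):
    """Assemble an order for temporary point-to-point bandwidth."""
    return {
        "endpoints": [site_a, site_b],   # provider port identifiers
        "bandwidth_mbps": mbps,          # requested rate
        "duration_hours": hours,         # auto-expires: no forgotten orders
        "class_of_service": cos,         # Best Effort is often sufficient
    }

# An application about to replicate a large dataset between two
# data centers might request a six-hour, 500 Mbit/s boost:
order = build_bandwidth_request("DC-NYC-01", "DC-LON-02", mbps=500, hours=6)
payload = json.dumps(order)
# The application would POST `payload` to the provider's API, then
# release the extra bandwidth automatically when the transfer completes.
```

The key design point is the expiry: when the application, not an administrator with a web slider, opens and closes the spigot, the bandwidth is matched to the actual job rather than to someone remembering to turn it off.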
There are several ways for service providers to approach automated bandwidth. But it has to start with changing companies’ static networking mindset. Tata Communications’ Network as a Service is an example that reframes customers’ thinking, shifting away from buying ports and toward delivering high-quality application experiences. In the future, that sort of high-quality customer experience may be best served by an OpenFlow CPE overlay; by intelligence embedded across the carrier network; or alternatively (and never to be underestimated) by the raw power of large amounts of attractively priced, static bandwidth.
Brian Washburn is reachable at firstname.lastname@example.org. Brian also blogs for Current Analysis’ IT Connection service (itcblogs.currentanalysis.com).