F5 Leverages Equinix to Extend Deployment, Security to Microservices

In a move intended to ease mid-level businesses’ transition to modern microservices in colocated environments, application delivery provider F5 Networks announced today the addition of application-centric and even container-centric deployment and access control services to its portfolio.  In doing so, the firm is taking advantage of its existing partnership with colocation leader Equinix, providing these new services over Equinix Performance Hub.

It’s not an easy development to explain, so we’ll go slowly:  Mid-level enterprises are looking for ways to transition their global online presence to a model that works much more fluidly, like Google or Netflix.  This transition involves the use of containerization as a more manageable model for applications, as well as a weaning from traditional, virtual machine-centered hosting environments.

At the same time, these enterprises are looking to more easily deploy certain of their applications to the public cloud, as necessary, on a per-application basis.  Containerization eliminates the overhead of deploying huge virtual machines just to support them.

Extend the Plank

F5 perceives a viable market there, specifically for businesses that want a hassle-free mechanism for deploying, maintaining, and securing new microservices models.  So it’s tapping into the pool of businesses that have already demonstrated their willingness to bypass the public Internet and interface directly with Equinix’s network of high-speed interconnects.

“Application Connector gives you the ability to put an app out in the public cloud — you want it to have the same security policies,” explained Lori MacVittie, F5’s principal technical evangelist, in an interview with Data Center Knowledge.  “That would be anything from dealing with DDoS at the TCP level and the HTTP level, to having WAF policies [Web Application Firewall] move with that application.  So it allows those applications, when they’re launched in the cloud… to dial home and get the right policies provisioned, so that they’re automatically there.”
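The “dial home” flow MacVittie describes can be sketched as a minimal lookup: an application launched in the public cloud asks a central policy service for the security policy bound to it, so the same WAF and DDoS settings follow the app wherever it runs.  This is a hypothetical illustration only — the policy store, app IDs, and field names below are invented for the example and are not F5’s actual API.

```python
# Hypothetical sketch of the "dial home" flow: a newly launched cloud
# instance fetches the security policy provisioned for it. All names
# here are invented for illustration; this is not F5's actual API.

POLICY_STORE = {
    # App-level policies that should follow the app wherever it runs.
    "billing-app": {"waf_profile": "strict", "tcp_ddos_threshold": 10000},
    "catalog-app": {"waf_profile": "standard", "tcp_ddos_threshold": 50000},
}

def dial_home(app_id, store=POLICY_STORE):
    """On startup, fetch the policy provisioned for this application."""
    policy = store.get(app_id)
    if policy is None:
        raise LookupError(f"no policy registered for app {app_id!r}")
    return policy

# A newly launched cloud instance provisions itself:
policy = dial_home("billing-app")
print(policy["waf_profile"])  # -> strict
```

The point of the pattern is that the policy lives with the management plane, not the app image, so redeploying the app to a different venue doesn’t require re-baking its security configuration.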

So the same access control policies that apply to an application when it’s hosted on-premises (or within the customer’s leased domain) can be extended to that application when it’s deployed to public cloud — and also, in this case, through Equinix Cloud Exchange.  VMware is offering a similar concept, although it requires the use of its NSX network virtualization platform and also its vSphere operating environment. Plus, it requires partners such as Microsoft Azure and IBM Cloud to be on board.

Nerve Center

Similarly, F5’s new Container Connector intends to do much the same thing for applications hosted in Docker and other container formats (e.g., OCI, rkt).  But it’s a much more complex undertaking, especially since managing microservices bears a closer resemblance to herding cats.

For extending access control policy to containers, F5 needs a go-between.  In this case, it’s the orchestrator — the mechanism that automates the deployment of individual containers to available resources in real time.  As MacVittie confirmed, Container Connector will interact with Kubernetes — at present, the most prominent open source orchestrator in the space — by means of its native API.

Using Kubernetes, she told us, Container Connector will facilitate the deployment of BIG-IP, F5’s application delivery controller (ADC), as well as an Application Services Proxy (ASP, not to be confused with Microsoft’s technology of the same name).  These will act as load balancers within the container environment, as well as access controllers.

“Each service will get its own lightweight proxy,” MacVittie explained, “that might then be dealt with by something upstream.  Every time a container comes up, you need to have either a service registry, or some way to tell these load balancers — whether they’re upstream, or sitting with the containers — that there’s a new one, and please add this to your pool.  Conversely, if you take one out of rotation, you have to get it out of there.  And this can happen very fast; container lifetimes are highly variable, more than we’re used to seeing.”
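The pool-membership bookkeeping MacVittie describes can be sketched with a toy event loop.  Assuming the orchestrator emits added/deleted events — as a Kubernetes watch stream does — a load balancer only has to fold those events into its pool.  The event shape and class name below are invented for illustration, not taken from F5’s or Kubernetes’ actual interfaces.

```python
class PoolBalancer:
    """Toy load balancer that keeps its pool in sync with orchestrator
    events, in the spirit of a Kubernetes watch stream. The event shape
    and class name are invented for illustration."""

    def __init__(self):
        self.pool = set()
        self._rr = 0  # round-robin cursor

    def handle_event(self, event):
        kind, endpoint = event["type"], event["endpoint"]
        if kind == "ADDED":
            self.pool.add(endpoint)      # new container: add to the pool
        elif kind == "DELETED":
            self.pool.discard(endpoint)  # taken out of rotation

    def pick(self):
        """Round-robin over whatever is currently in the pool."""
        members = sorted(self.pool)
        if not members:
            raise RuntimeError("pool is empty")
        choice = members[self._rr % len(members)]
        self._rr += 1
        return choice

lb = PoolBalancer()
for ev in [
    {"type": "ADDED", "endpoint": "10.0.0.4:8080"},
    {"type": "ADDED", "endpoint": "10.0.0.5:8080"},
    {"type": "DELETED", "endpoint": "10.0.0.4:8080"},  # short-lived container
]:
    lb.handle_event(ev)

print(lb.pool)  # only 10.0.0.5:8080 remains in rotation
```

Because membership is driven entirely by the event stream, the balancer never has to poll containers whose lifetimes, as MacVittie notes, are highly variable.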

Deeper Dive

What F5 is trying to establish is a kind of communications mechanism for dispatching policy updates throughout a fast-moving system.  It must use the orchestrator as the go-between here; although it’s technically feasible to field calls from active containers directly, that job would be akin to tagging mosquitoes, releasing them back into the wild, and expecting regular reports from them.  Since the orchestrator determines the lifespan of containers in a system, several of them could cease to exist whenever the orchestrator makes that determination — right in the midst of a policy operation.
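The hazard described above — containers being reaped mid-operation — suggests why any policy push has to tolerate targets vanishing under it.  Here is a minimal sketch of that idea; the function, transport callable, and addresses are all hypothetical, invented only to show the failure-tolerant loop.

```python
def push_policy(members, policy, send):
    """Push a policy update to a snapshot of pool members, tolerating
    members that vanish mid-operation (the orchestrator may reap a
    container at any moment). `send` is a hypothetical transport
    callable; all names here are invented for illustration."""
    applied, gone = [], []
    for member in members:
        try:
            send(member, policy)
            applied.append(member)
        except ConnectionError:
            # The container died mid-operation; defer to the
            # orchestrator's view of the world rather than retrying.
            gone.append(member)
    return applied, gone

# Simulated transport: one member has already been reaped.
alive = {"10.0.0.5:8080", "10.0.0.6:8080"}

def fake_send(member, policy):
    if member not in alive:
        raise ConnectionError(member)

applied, gone = push_policy(
    ["10.0.0.4:8080", "10.0.0.5:8080", "10.0.0.6:8080"],
    {"waf_profile": "strict"},
    fake_send,
)
```

Routing this through the orchestrator’s API, rather than contacting containers directly, means the list of live members is authoritative at the moment the push begins — which is the design choice the article attributes to F5.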

F5’s choice of interface should avoid this contingency.  However, it does require a concession to the new architectural model, which means that Container Connector must itself become a container, amid all the others in the system.  MacVittie confirmed that F5 will distribute pre-built containers, with its new components included, through its own Web site.

“We’re also providing visibility data back from the services proxy [ASP],” she added.  “So if that’s sitting in front of containers, we’re able to provide things like uptime intervals, response times, and the metrics around how these things are running and what their status is.

“We’re trying to make sure we provide not only the basic services of load balancing and making sure things are available and can scale up and down.  Let’s also make sure the DevOps community has the necessary metrics they need, in order to understand what’s going on in their environments, how their applications are performing, and what feedback they need so that they understand what’s going on.”
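The visibility data MacVittie describes the services proxy reporting — uptime, response times, request status — amounts to a per-service metrics collector sitting in front of the containers.  The sketch below is a toy version of that idea; the class and field names are invented, not F5’s actual telemetry schema.

```python
import statistics
import time

class ServiceMetrics:
    """Toy per-service metrics collector in the spirit of the visibility
    data a services proxy might report. Class and field names are
    invented for illustration."""

    def __init__(self, service, now=time.monotonic):
        self.service = service
        self._now = now
        self.started = now()
        self.durations = []  # observed response times, in seconds

    def observe(self, seconds):
        """Record one request's response time."""
        self.durations.append(seconds)

    def snapshot(self):
        """Report uptime, request count, and mean response time."""
        return {
            "service": self.service,
            "uptime_s": self._now() - self.started,
            "requests": len(self.durations),
            "mean_response_s": (
                statistics.mean(self.durations) if self.durations else None
            ),
        }

m = ServiceMetrics("catalog")
for d in (0.012, 0.020, 0.016):
    m.observe(d)
snap = m.snapshot()
```

In practice such snapshots would be scraped or streamed upstream, giving DevOps teams the scale-up/scale-down feedback the quote describes.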