By now you may have heard about container technology, as any discussion about cloud native software today is bound to include more than a few mentions of containers and tools such as Docker, Mesos and Kubernetes.

Containers are a solution to the problem of getting software to run reliably when moved from one computing environment to another. All too often, software developers are bedeviled by software that runs well in a development environment but breaks in a test environment, and again when moving from testing to staging, and from staging into production.

So how does containerization actually work?

Containers offer a new approach to building, shipping and running applications by isolating processes at the operating system (OS) kernel level. A container consists of an entire runtime environment: an application plus all its dependencies, libraries and other binaries, and the configuration files needed to run it, bundled into one package.
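To make this concrete, here is a rough sketch of how such a package might be built and started using the Docker SDK for Python (the ./myapp directory, its Dockerfile and the myapp:1.0 tag are hypothetical, and a local Docker daemon is assumed):

```python
import docker  # Docker SDK for Python (docker-py); assumes a local Docker daemon

client = docker.from_env()

# Build a single image from the (hypothetical) ./myapp directory. The Dockerfile
# there copies in the application, its libraries and binaries, and its
# configuration files, so everything needed at runtime travels in one package.
image, build_logs = client.images.build(path="./myapp", tag="myapp:1.0")

# Start the packaged application as an isolated process; no guest OS is booted.
container = client.containers.run("myapp:1.0", detach=True, name="myapp")
print(container.name, container.status)
```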

Containerization offers an alternative to launching a virtual machine for each application, where each virtual machine includes an entire operating system as well as the application. A physical server running three virtual machines would have a hypervisor and three separate operating systems running on top of it. In contrast, a server running three containerized applications with Docker runs a single operating system, and each container shares the operating system kernel with the other containers. This lightweight approach makes containers far more efficient in their memory, CPU and storage requirements.
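One simple way to see this kernel sharing, sketched here with the Docker SDK for Python (the Alpine image tag is just an example), is to compare the kernel release reported inside a container with the one reported by the host; on a Linux host the two values match:

```python
import platform

import docker  # Docker SDK for Python; assumes a local Docker daemon is running

client = docker.from_env()

# The image is Alpine Linux, but no second kernel is started: the container is
# just an isolated process, so `uname -r` inside it reports the host's kernel.
in_container = client.containers.run("alpine:3.19", "uname -r", remove=True)

print("host kernel:     ", platform.release())
print("container kernel:", in_container.decode().strip())
```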

The growing adoption of containers is driven by a number of factors: portability, ease of use, and the need to support modern cloud native software architecture with DevOps-style continuous delivery. Containers are considered the best practice approach to develop and deploy microservices, one of the fundamental building blocks of cloud native software. As research analyst Kris Szaniawski from Ovum wrote in a recent research note, “the implementation of a microservices architecture and containerization enables the continuous delivery of large, complex applications as loosely coupled services, thus enabling shorter innovation cycles, increased agility, improved scalability, and reduced Opex.” Szaniawski noted Amdocs as one of the leading telecom vendors moving to microservices architecture and cloud native applications.

Containerizing an application together with its dependencies abstracts away differences in OS distributions and underlying infrastructure. The container, which is independent of the host's Linux kernel version, platform distribution or deployment model, can be moved to another host running a container engine and executed without compatibility issues.
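For illustration, one way to move such a package between hosts is to export the image to a tar archive and load it on the target machine; the sketch below uses the Docker SDK for Python, and the myapp:1.0 image name is hypothetical (in practice a registry push and pull is more common):

```python
import docker  # Docker SDK for Python; assumes a local Docker daemon

client = docker.from_env()

# On the source host: export the packaged application to a portable tar archive.
image = client.images.get("myapp:1.0")
with open("myapp-1.0.tar", "wb") as archive:
    for chunk in image.save():
        archive.write(chunk)

# On the target host (any machine with a container engine, whatever its Linux
# distribution), the same archive can be loaded and run unchanged:
# with open("myapp-1.0.tar", "rb") as archive:
#     client.images.load(archive.read())
# client.containers.run("myapp:1.0", detach=True)
```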

The overall benefit of containerization is to simplify application development and deployment. Developers can focus on an application's business logic rather than on the infrastructure needed to run it, which speeds the delivery of new software and reduces problems when operating it.

However (and there always seems to be that ‘but’), the reality of container portability is far less rosy than you might be led to believe. There are significant roadblocks related to Linux distribution incompatibilities, commercial licenses, security and networking. This partly explains why most of the companies using containers at scale, typically web-scale players, develop, test and deploy their own software end to end internally. Most enterprises, by contrast, have complex IT ecosystems that rely on software developed by multiple external vendors, and this is where the roadblocks appear, as detailed in a recent report by Michael Azoff, another leading analyst at Ovum.

Azoff’s report dives deep into these challenges, based on his extensive discussions with Tal Barenboim, Cloud Evangelist at the Amdocs Cloud Center of Excellence, and Zeev Likwornik, Head of the Amdocs Cloud Center of Excellence. Azoff highlights the following key challenges:

•  Linux distributions have library incompatibilities, and the leading vendors only support their own distributions, which leads to vendor lock-in

•  The GPL open source license was not designed for a container world

•  Container network controllers still have challenges leading to performance issues

Containers will surely be part of the path to cloud native software – that much is clear. But the challenges limiting portability must be clearly understood and tackled. The Amdocs Cloud Center of Excellence is working closely with leading vendors in the industry, including Red Hat, Docker and others, to research and solve the complex operability challenges of containers, so that companies can benefit from container technology.

For further details, here is a link to the Ovum report.  

Yifat is a product marketing manager within Technology & New Offerings at Amdocs. With over 13 years of experience in marketing, business development and strategy in the communications industry, Yifat has extensive knowledge of the dynamic forces shaping the market as well as expert knowledge of the systems and solutions required to deliver business success. Yifat has worked directly with major operators to help them identify and define their needs and strategy when it comes to cloud, Big Data analytics and other cutting-edge technologies. She has held positions at Deloitte Consulting and Morgan Stanley, and holds an Executive MBA from Kellogg-Recanati (Northwestern University).
