I’m excited to kick off this project, as no other topic in IT seems to have so many different aspects to consider and so many angles from which to attack it. As with, hopefully, all our EMA research projects, we will look at this topic from all sides without any predetermined outcomes. The project will be guided exclusively by what’s best for the customer when it comes to ramping up a container strategy.
Research Goal: Lower Enterprise Decision Risk
‘Betting on the wrong horse’ in terms of container management products for large-scale production use is a significant operational risk, especially as container management is a young market in a ‘gold rush’ stage, with software vendors and consultants positioning themselves to absorb the biggest share of the pie. All of our recommendations will look ‘behind the scenes’ so the reader understands the factors that might motivate a specific vendor to make a certain recommendation.
The CEO Says: “Now go and do something with containers to make us more competitive”
First of all, I’m very familiar with standing in a product manager’s shoes and being tasked to “Please present us with options for how to make our product more efficient and competitive by leveraging containers.” That’s often as specific as executive advice gets. Then our IT ops, DevOps, or development lead starts Googling, going to conferences, calling peers, downloading trials, and consulting with his or her psychotherapist about the best course of action: one that shows some early traction without carrying a large risk of failure.
The Dev Lead Goes Ahead with Adopting Docker Containers but Doesn’t Worry about Production Readiness
Second of all, I’m very familiar with shadow IT and dev groups adopting whatever technologies make them more productive in the moment, as I’ve been the culprit myself more than once. This means that many organizations are already using containers, sometimes in production, without the IT ops guys being aware of it. This doesn’t matter until the next compliance audit comes along, or until users start complaining about performance, availability, or other usability issues that they’d like resolved. Of course, the IT guys can’t fix what they don’t know about, and they can’t plan for technologies they are not familiar with.
Digital Transformation Consultants Sell PaaS as the “Cleanest” Path to Becoming a Digital Aggressor
To external consultants, adopting PaaS is the optimal course of action for a well-governed digital transformation. However, this has not worked out with development groups or with IT operators, and here is why: developers need to be able to experiment quickly, easily, cheaply, and without consequences. This means that deploying a new app or service based on technologies or product versions that are not part of the corporate PaaS environment is a must. Containers enable them to do exactly that by simply packaging the entire runtime with their code and throwing it onto any server for everyone to try and offer feedback on.
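As a minimal sketch of what that packaging looks like (the base image and file names here are hypothetical, not taken from any specific project), a short Dockerfile is all it takes to pin a runtime version the corporate PaaS may not offer and ship it together with the code:

```dockerfile
# Hypothetical example: pin an exact runtime the corporate PaaS does not offer
FROM python:3-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Bundle the application code into the same image
COPY . .

# The identical image runs on the developer's laptop or on any server
CMD ["python", "app.py"]
```

Anyone on the team can then build and try it with `docker build -t experiment .` followed by `docker run experiment`, with no PaaS onboarding in the way.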
Compliance and Security
Container networking is a different animal compared to networking VMs. To avoid relying on individual development groups each following their own security approach, security has to be part of the container management architecture itself. To be able to do this, the enterprise needs skilled staff, new tools, and a clear plan to integrate container networking and security with its existing infrastructure.
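As one concrete, hedged example of platform-enforced security, assuming a Kubernetes-based environment (the namespace name is hypothetical): a default-deny NetworkPolicy applies to every workload in a namespace, no matter which development group deployed it.

```yaml
# Hypothetical default-deny policy: the platform, not each dev team,
# decides that no pod in this namespace accepts inbound traffic by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments        # hypothetical namespace
spec:
  podSelector: {}            # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                # Ingress listed with no rules denies all inbound traffic
```

Individual teams then have to request explicit allow rules, which gives security and compliance a single control point instead of many ad hoc ones.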
Staff Expertise Is a Critical Bottleneck
By definition, there are not many architects and operators with significant experience in running containers in production and at scale. Containers need an entirely new set of tools for deployment, monitoring, and management, and they also need a new way of thinking. For example, updating a containerized app or microservice means replacing the entire container with the later version, ideally combined with blue-green testing, where both the old and the new versions of the container can be observed under production conditions and instant rollback is possible. While this sounds simpler than server upgrades across multiple environments (dev, test, staging, prod), this new process requires a different skillset from architects and operators.
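For illustration only, here is one common way to sketch blue-green cutover on Kubernetes (the app and label names are hypothetical): the old and new versions run side by side as separate Deployments, and a Service label selector decides which one receives production traffic.

```yaml
# Hypothetical blue-green setup: Deployments "myapp-blue" and "myapp-green"
# both run in production; this Service routes traffic to the selected track.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    track: blue        # change to "green" to cut over; change back to roll back
  ports:
    - port: 80
      targetPort: 8080
```

The cutover is a one-line edit and the rollback is the reverse edit, while both versions stay observable under real production load.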
The Big Question: Should I Migrate Existing Enterprise Apps Without Re-architecting Them First?
Whenever we ask enterprise IT guys whether they are planning on lifting and shifting their existing enterprise apps into containers, we get this “you analysts live in your own dream world” kind of look. In order to containerize legacy apps, there needs to be a strong pain point, as otherwise the migration risk is not justified. This means that it can absolutely make sense to lift and shift a few existing apps to get them off of old hardware, but bulk migrations of hundreds or even thousands of enterprise apps do not make sense today if the only purpose is to utilize fewer server or storage resources.
Looking Past Containers and Toward the Next Big Thing
The logical next step when looking past containers is the transition to ‘serverless functions’ or ‘serverless computing.’ Of course, there are servers in serverless computing, and there are containers too, but both technologies are well hidden behind an automation layer that provides APIs, CLIs, and GUIs so that development groups can just upload and run their code without worrying about container scheduling, or about infrastructure in general.
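As a small, hedged sketch of what ‘just upload and run your code’ means in practice, here is a function written in the style of an AWS Lambda Python handler (the event field is hypothetical); the developer writes and uploads only this, while containers and scheduling remain the platform’s problem:

```python
import json

# Lambda-style entry point: the platform invokes this per event/request;
# no container, scheduler, or server is visible to the developer.
def handler(event, context):
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```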
Docker’s Role in the Container Game
Docker is in the difficult position of having made a bet on its own container scheduling and orchestration technology, Swarm, which has recently been overtaken by the market. This happened because most large players (Google, VMware, Red Hat, IBM, Microsoft, Dell Technologies, and even Amazon) are now all betting on Kubernetes becoming the standard container scheduler and orchestrator. Since Docker only gets paid when someone orders its Docker Enterprise container management stack, and that stack supports Swarm but not Kubernetes, the company is in a difficult situation: customers are more and more concerned about not being able to leverage the leading scheduler.
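The divergence is visible right at the command line; as a rough sketch (the file and application names are hypothetical), the same application is deployed through two entirely different toolchains and manifest formats:

```bash
# Swarm: reuse a Compose file to deploy a stack
docker stack deploy -c docker-compose.yml myapp

# Kubernetes: apply manifests written against the Kubernetes resource model
kubectl apply -f deployment.yaml
```

A shop that standardizes on one of these toolchains cannot simply switch to the other, which is exactly why the choice of scheduler feels so risky to customers.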
Docker Does Not Get Paid When Developers Use Docker as a Packaging Tool
Docker is great for development groups, as they can rely on the fact that the application will run in its target environment just as it does on their laptops. However, the Docker company only gets paid when customers adopt its Docker Enterprise offering to manage production container environments. Docker does not get paid when customers use container management products from Amazon, Red Hat, Google, Microsoft, Dell Technologies, or anyone else.
Where Does VMware Stand: Pivotal Kubernetes Service
https://youtu.be/73jot9r1bTM
Bare metal containers catching on too quickly would be problematic for VMware’s entire business case, which relies on server, network, and storage virtualization. At this point, most container environments run on the VMware virtualization platform, meaning that VMware is ideally positioned to demonstrate that containers run better on virtual infrastructure than on bare metal. Pivotal Kubernetes Service (PKS), also often referred to as Pivotal Container Service, aims to do exactly that. PKS is VMware’s big opportunity to show how, in a VMware environment, operators and developers can work together more efficiently than on bare metal. Ultimately, PKS needs to become part of VMware’s set of SaaS-based management tools (VMware Cloud Services).
LXD Containers by Ubuntu
LXD containers, also called machine containers, are in effect VMs that share the host OS kernel but are separated from each other and from the host OS by a Linux daemon and some other Linux ‘magic.’ Admins can log into LXD containers just as they log into virtualized operating systems. Key here is to understand that LXD containers run directly on Linux, without a hypervisor, which by definition saves licensing and operations cost. Interestingly, Docker containers can run within LXD containers, and this opens up a number of interesting possibilities.
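A minimal sketch of that nesting, assuming an Ubuntu host with LXD installed (the container and image names are illustrative):

```bash
# Launch a machine container; nesting must be enabled to run Docker inside
lxc launch ubuntu:18.04 dev-host -c security.nesting=true

# Log in, much as you would into a VM's operating system
lxc exec dev-host -- bash

# Inside the LXD container: install Docker and run application containers
apt-get update && apt-get install -y docker.io
docker run -d --name web -p 8080:80 nginx
```

The LXD container behaves like a lightweight VM for the operations team, while developers keep using their familiar Docker workflow inside it.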