As Fortune 500 companies, SMEs, and other sizable organizations across the public and private sectors continue their digital transformations, the use of cloud services is exploding. Save for those with extreme security needs, organizations can now distribute their digital infrastructure across numerous application environments. Google Kubernetes Engine, Microsoft Azure Kubernetes Service, and Amazon Elastic Kubernetes Service are cloud-native platforms that support a massive share of the economy's digital infrastructure, with other players like Oracle, IBM, and Alibaba not far behind.
These cloud platforms have already revolutionized how enterprises deliver services to customers, with new developments in container technology occurring on a near-daily basis. IT professionals across development, operations, and security teams now face the monumental task of staying ahead of (or rather, keeping up with) rapid change in container software. Kubernetes has become one of the frontrunners in this space since its development and rollout by Google, with half of all container orchestration users incorporating it into their systems.
Unfortunately, this mass adoption is not necessarily translating to more effective service delivery, at least not for organizations that have yet to implement proper governance policies across their Kubernetes environments.
Issues with Managing and Securing Kubernetes
The primary benefit of adopting Kubernetes for cloud-native applications is its incredible flexibility, extensibility, and automation. Agile creation and deployment, continuous integration and delivery, and predictable application performance are all possible through Kubernetes. But the dynamic nature of this container system also leads to problems with development, operations, and security when proper governance practices are absent.
Understanding each of the components that fall under the Kubernetes umbrella and properly securing clusters at every level are just two of the many issues born from the chaos of an ungoverned container management system. Below are a few of the major problems that enterprise IT departments run into when trying to optimize their Kubernetes environments without the necessary oversight.
One of the major pain points in running a multi-cloud application environment with Kubernetes is understanding and leveraging its many constituent parts. Because the container management software is composed of several components, enterprise IT departments need to follow best practices within, across, and between each segment. From the kube-apiserver to the cloud-controller-manager to any of the optional add-ons, Kubernetes clusters are finely tuned systems with numerous points of failure. Providing a seamless multi-cloud experience requires that each of these components is properly integrated, and attempting to do so manually can be catastrophic for service delivery.
Just because Kubernetes has the potential for optimal service delivery doesn't mean this will automatically be the case in every enterprise situation. Not without additional guardrails, at least. The shift towards cloud-based containers to run applications makes it more difficult to monitor resource use on each node in a cluster. Some nodes may be exhausting all available CPU, while others might be running at only 20% capacity. In either case, digital infrastructure is not being used optimally, leading to wasted resources or unhappy clients.
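Much of this imbalance stems from workloads that never declare what they need. A minimal sketch of per-container resource requests and limits, which give the Kubernetes scheduler the information it needs to place workloads sensibly (the pod and container names here are purely illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend          # hypothetical workload name
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:               # what the scheduler reserves on a node
        cpu: "250m"
        memory: "256Mi"
      limits:                 # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "512Mi"
```

Requests drive scheduling decisions, while limits cap runtime consumption; overly tight CPU limits are a common cause of the kind of throttling problem described next.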
The fully remote social engagement experts at Buffer ran into exactly this kind of performance problem in the form of CPU throttling, and they have shared how they navigated those troubling waters to improve site speed by over 20x. While Buffer is a relatively small operation, the distributed nature of its organization necessitates properly governed Kubernetes environments.
At the large end of the enterprise spectrum, a company's multi-cloud application environment may comprise dozens or even hundreds of clusters. Depending on the specific needs of the organization, enterprises may use large shared clusters, small single-use clusters, or any configuration along that continuum. In addition to its implications for cost, management, and performance, cluster size has a massive impact on security.
The latest release of Kubernetes theoretically supports clusters of up to 5,000 nodes, but meeting security goals often necessitates multiple, smaller clusters. When clusters get too large, it becomes difficult for DevSecOps teams to determine when malicious users have infiltrated an environment. A high-profile instance of just such a breach occurred a few years ago, when cryptominers infiltrated a JW Player Kubernetes cluster to exploit untapped CPU resources.
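One lightweight guardrail against this kind of resource hijacking is a per-namespace ResourceQuota, which caps how much compute any one team (or intruder) can consume inside a shared cluster. A sketch, with hypothetical namespace and quota names:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota         # hypothetical quota name
  namespace: team-a          # hypothetical namespace
spec:
  hard:
    requests.cpu: "8"        # total CPU the namespace may request
    limits.cpu: "16"         # total CPU ceiling across all pods
    pods: "50"               # cap on the number of pods
```

A quota like this cannot stop an intrusion by itself, but it bounds the blast radius: a compromised namespace cannot silently absorb every idle core in the cluster.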
Clearly, leaving Kubernetes clusters ungoverned is not an option for any enterprise company looking to stay ahead in today’s competitive marketplace. But with the sheer size and scale of most multi-cloud application environments, how exactly is a company to prevent IT employees from committing costly errors?
Using Kyverno to Apply Custom Container Management Policy
The Cloud Native Computing Foundation recommends that any enterprise deploying Kubernetes also implement a proper governance structure. These guardrails not only protect intricate multi-cloud environments from malicious users but also protect companies from another major risk to service delivery: their own DevOps teams. Without them, CIOs and executives risk their CI/CD pipelines through the inevitable errors that arise in iterative software development and delivery.
To help prevent this kind of internal error, the CNCF recently accepted Nirmata's Kyverno policy engine into its Sandbox. This Kubernetes-native governance tool allows companies to automate best practices, company policies, and even industry regulations. Nirmata has been deeply invested in the container software space for years, creating tools like Kyverno and leveraging extensive cloud strategy expertise to improve outcomes for enterprise organizations. Kyverno is built to extend this cloud expertise to users in an automated, comprehensive fashion.
Kyverno: Nirmata's Kubernetes Policy Engine
Unlike other policy engines, Kyverno is designed specifically for Kubernetes environments. This means there is no complex new language for IT professionals to learn, and the governance system integrates seamlessly into the Kubernetes environment. Kyverno exerts its policy management via three main functions: validation, mutation, and generation.
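Because Kyverno policies are themselves Kubernetes resources, they are written in plain YAML rather than a separate policy language. A minimal validation sketch that requires a `team` label on every Pod (the policy and label names are illustrative; consult the Kyverno documentation for the full rule schema):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label         # hypothetical policy name
spec:
  validationFailureAction: Enforce # reject non-compliant resources
  rules:
  - name: check-team-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "The label `team` is required on all Pods."
      pattern:
        metadata:
          labels:
            team: "?*"             # any non-empty value
```

With `validationFailureAction` set to `Enforce`, the API server rejects any Pod missing the label; set to `Audit`, the same rule merely reports violations, which is useful when rolling a new policy out gradually.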
Automating these functions through Policy-as-Code means that each DevOps task is checked against central rules to ensure no security violations occur. It also allows authorized individuals to modify policies or create new ones to keep pace with a rapidly changing environment. The continuous compliance that Kyverno provides is especially powerful for large-scale enterprises with diverse Kubernetes clusters.
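Mutation rules work the same way: rather than rejecting a non-compliant resource, Kyverno patches it at admission time. A sketch that fills in default CPU and memory requests when a container omits them (names and values are illustrative, loosely following the pattern shown in Kyverno's documentation):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-requests   # hypothetical policy name
spec:
  rules:
  - name: set-container-requests
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          - (name): "*"            # anchor: apply to every container
            resources:
              requests:
                +(cpu): "100m"     # added only if not already set
                +(memory): "128Mi"
```

A defaulting policy like this pairs naturally with a validation rule: mutation fixes the common case automatically, while validation catches whatever slips through.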
There are certainly many strategies for managing a multi-cloud application environment. However, the missteps of previous enterprises on their journeys to optimize Kubernetes clusters make one thing certain: governance is key to success.
About the Author
Ritesh Patel is the Co-Founder of Nirmata, a cloud computing company responsible for Kyverno and other Kubernetes policy management solutions. Ritesh has spent decades in the tech industry, and his experience covers a wide range of roles and responsibilities, including software engineering, market strategy, and business development.