The ABC of Evolution Towards Cloud Infrastructure

Written by: Comunicaciones Moveapps

 

The computer industry is, by far, the sector that periodically experiences significant evolutions in its infrastructure approaches. It is inconceivable without increasingly rapid changes that leave behind whatever was once considered new or cutting-edge. An article by Gartner argues that this ongoing process began with the transition from mainframes to minicomputers in the 1970s, followed by the adoption of client/server architecture based on industry-standard hardware and software in the 1980s and 1990s, and by the rise of virtual machines in the early 2000s.

Today, cloud-native infrastructure is establishing itself as the new “game changer.” This is how the cloud has progressed and continues its ascent to the throne.

 

Evolution Towards Cloud Infrastructure

Gartner broadly defines the term “cloud-native” as something created to enable or leverage cloud features. Cloud-native infrastructure is used to provide platforms whose agility mirrors the agile processes used to deliver cloud-native applications. Cloud-native infrastructure must therefore be programmable, resilient, immutable, modular, elastic, and declarative (PRIMED).
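
To make the “declarative” and “elastic” properties concrete, here is a minimal, illustrative sketch: a plain Python dict mirroring a Kubernetes Deployment manifest, with hypothetical names. The desired state declares how many replicas should run; scaling is a change to that declaration rather than an imperative command.

```python
# Illustrative only: a declarative, elastic workload definition expressed as a
# Python dict that mirrors a Kubernetes Deployment manifest. The name "web"
# and the image tag are hypothetical.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,                      # desired state, not a command
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [{
                    "name": "web",
                    "image": "nginx:1.25",  # immutable, versioned artifact
                }],
            },
        },
    },
}

def scale(manifest: dict, replicas: int) -> dict:
    """Elasticity in a declarative model: return a new desired state."""
    return {**manifest, "spec": {**manifest["spec"], "replicas": replicas}}

if __name__ == "__main__":
    print(scale(deployment, 10)["spec"]["replicas"])  # -> 10
```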

There are different ways to deploy cloud-native infrastructure, but in practice, large-scale cloud-native initiatives will likely be based on containers and Kubernetes. As Kubernetes becomes the foundation for an increasing number of applications, both developed internally and supplied by ISVs, it effectively becomes the “infrastructure” on which these applications are deployed.

Compared to machine-centric virtual infrastructure, cloud-native infrastructure is fundamentally application-centric.

When based on Kubernetes, cloud-native infrastructure introduces some practical changes: pods effectively become the units of compute, persistent volume claims (PVCs) become the data storage devices, and service connectivity layers, such as service meshes, become the network.
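
As a rough sketch of that mapping (hypothetical names; plain Python dicts mirroring Kubernetes manifests), a Pod declares its compute needs through resource requests and mounts a PersistentVolumeClaim instead of referencing a physical machine or disk device:

```python
import json

# Illustrative only: a PVC stands in for a storage device, and a Pod's
# resource requests stand in for CPUs and memory.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web"},
    "spec": {
        "containers": [{
            "name": "web",
            "image": "nginx:1.25",
            "resources": {"requests": {"cpu": "500m", "memory": "256Mi"}},
            "volumeMounts": [{"name": "data", "mountPath": "/var/data"}],
        }],
        "volumes": [{
            "name": "data",
            "persistentVolumeClaim": {"claimName": "app-data"},
        }],
    },
}

if __name__ == "__main__":
    # The desired state can be written out and applied with `kubectl apply -f`,
    # or committed to a Git repository for a GitOps workflow.
    print(json.dumps({"pvc": pvc, "pod": pod}, indent=2))
```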

Cloud-native infrastructure will also leverage the evolution of computing, storage, and networking technologies at lower levels of the stack, such as running containers on bare-metal servers, offloading tasks to specialized function accelerator cards (FACs), using processors based on architectures like ARM, and executing code with lightweight approaches such as micro-VMs and WebAssembly (Wasm).

Most importantly, and unlike previous waves of infrastructure evolution, adopting cloud-native infrastructure will require more than new architectural and technological principles. New operational practices such as GitOps, which leverages Kubernetes’ active control plane and declarative configuration, and consumption-based models for infrastructure provisioning are also essential to its implementation.
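
A rough sketch of the GitOps idea follows, assuming hypothetical helper functions (read_desired_state, read_live_state, apply) that stand in for what tools such as Argo CD or Flux do in practice: the Git repository holds the declared state, and a controller continuously reconciles the cluster toward it.

```python
# Illustrative-only GitOps reconciliation loop. The helpers below are
# hypothetical stand-ins: read the declared state from Git, compare it with
# the live cluster state, and converge via the Kubernetes control plane.

def read_desired_state(repo_path: str) -> dict:
    """Hypothetical: parse manifests committed to the Git repository."""
    return {"web": {"replicas": 3}}

def read_live_state() -> dict:
    """Hypothetical: query the cluster's current state via its API."""
    return {"web": {"replicas": 2}}

def apply(name: str, desired: dict) -> None:
    """Hypothetical: submit the declared object to the control plane."""
    print(f"applying {name}: {desired}")

def reconcile(repo_path: str) -> None:
    desired = read_desired_state(repo_path)
    live = read_live_state()
    for name, spec in desired.items():
        if live.get(name) != spec:
            apply(name, spec)  # converge live state toward the declaration

if __name__ == "__main__":
    # A real controller watches for changes; a single pass is shown here.
    reconcile("./infra-repo")
```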

However, to achieve the full potential of cloud-native infrastructure, these three aspects — architecture and technology, operational practices, and consumption models — must be addressed holistically.

The goal of deploying cloud-native infrastructure is to support a self-service platform for developing and/or delivering applications based on a cloud-native architecture. Kubernetes is often the core of cloud-native infrastructure today, but over time it will become less visible as it is increasingly delivered with a serverless experience, even as it pushes further to the edge. Ultimately, product teams will expect to work in an environment where low-level computing, storage, and networking resources are abstracted away.



Originally published on July 18, 2023; modified on August 4, 2023.