Consider this scenario. An organization that runs its own data center wants to hyperscale: it needs to add an analytics component to an application that requires data parallelism at scale, and the data center may or may not be able to grow to meet that demand. With a hybrid cloud, the enterprise can offload the analytics compute to the cloud environment, which provides the virtualized hardware, including CPU, memory, general-purpose computation on graphics processing units (GPGPU), and more.
Such scenarios are slowly but surely driving the adoption of container-based technologies for multi-clouds, despite some confusion and misunderstanding about the technology. A survey from IBM last year found that less than half (41 percent) of enterprises have a multi-cloud strategy, and that only 38 percent have the tools necessary to operate multi-clouds.
Without question, virtual machines (VMs) will continue to exist for many years. However, for hybrid- or multi-cloud development and deployment scenarios, CIOs are increasingly turning to containers, which give enterprises the ability to scale deployments as needed. As such, it’s important for CIOs to understand both the short- and long-term benefits of containers, as well as the challenges.
Container conundrums
One of the biggest conundrums has been demonstrating the value of containers during the transition. In many cases, CIOs choose containers for new initiatives; for existing systems that reside on VMs or bare metal, however, migrating to a container-based system is more costly. Some CIOs have even had to forgo containers until those costs can be mitigated.
Containers do provide a number of cost benefits over VMs. For instance, the compute for an analytics component does not have to run all of the time: containers can be launched on demand in seconds, whereas provisioning a VM takes several minutes, so VM-based deployments often must keep capacity running continuously.
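As one illustration of this on-demand model, a batch analytics task can be packaged as a short-lived Kubernetes Job that starts when needed and releases its resources on completion. The sketch below is a minimal, hypothetical example; the job name, image, and GPU request are placeholders, not details from this article:

```yaml
# Hypothetical Kubernetes Job for an on-demand analytics task.
# The container runs once, finishes, and frees its resources;
# no always-on VM is required for the workload.
apiVersion: batch/v1
kind: Job
metadata:
  name: analytics-batch                # illustrative name
spec:
  template:
    spec:
      containers:
        - name: analytics
          image: registry.example.com/analytics:latest   # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 1        # GPU is held only while the job runs
      restartPolicy: Never             # a batch task should not restart as a service
  backoffLimit: 2                      # retry a failed run at most twice
```

In this pattern, the cluster schedules the container when the job is submitted and reclaims the CPU, memory, and GPU as soon as it exits, which is the cost advantage the paragraph above describes.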