Until today I thought that running containers always depended on the existence of a host and an OS of some kind, but then I came across this project: vSphere Integrated Containers. It is a runtime environment that allows developers to run containers as VMs, instead of running containers in VMs.
There is a good read to understand the contrast between traditional containers and containerVMs.
vSphere Integrated Containers can be deployed on:
- vCenter Server with a cluster
- vCenter Server with one or more standalone ESXi hosts
- A standalone ESXi host
This architecture relies on a Virtual Container Host. The VCH is an endpoint used to start, stop, and delete containers across the datacenter.
“The Virtual Container Host (VCH) is the means of controlling, as well as consuming, container services – a Docker API endpoint is exposed for developers to access, and desired ports for client connections are mapped to running containers as required. Each VCH is backed by a vSphere resource pool, delivering compute resources far beyond that of a single VM or even a dedicated physical host. Multiple VCHs can be deployed in an environment, depending on business requirements. For example, to separate resources for development, testing, and production.”
The binaries can be downloaded from here :
Untar the compressed file :
$ tar xvzf vic_3711.tar.gz
This is the content of the tar file :
Setting up an ESXi host :
- download the ISO file from the VMware website : https://my.vmware.com/en/web/vmware/evalcenter?p=free-esxi6
- use VirtualBox or VMware Fusion to create a VM and install ESXi on it ( http://www.vmwareandme.com/2013/10/step-by-step-guide-how-to-install.html#.V6a8rZNViko)
Creating a Virtual Container Host :
$ vic-machine-darwin create --target 172.16.127.130 --user root --image-datastore datastore1
INFO[2016-08-06T14:05:48-05:00] Please enter ESX or vCenter password:
INFO[2016-08-06T14:05:50-05:00] ### Installing VCH ####
INFO[2016-08-06T14:05:50-05:00] Generating certificate/key pair - private key in ./virtual-container-host-key.pem
INFO[2016-08-06T14:05:50-05:00] Validating supplied configuration
INFO[2016-08-06T14:05:51-05:00] Firewall status: ENABLED on "/ha-datacenter/host/localhost.localdomain/localhost.localdomain"
INFO[2016-08-06T14:05:51-05:00] Firewall configuration OK on hosts:
INFO[2016-08-06T14:05:51-05:00] "/ha-datacenter/host/localhost.localdomain/localhost.localdomain"
WARN[2016-08-06T14:05:51-05:00] Evaluation license detected. VIC may not function if evaluation expires or insufficient license is later assigned.
INFO[2016-08-06T14:05:51-05:00] License check OK
INFO[2016-08-06T14:05:51-05:00] DRS check SKIPPED - target is standalone host
INFO[2016-08-06T14:05:51-05:00] Creating Resource Pool “virtual-container-host”
INFO[2016-08-06T14:05:51-05:00] Creating VirtualSwitch
INFO[2016-08-06T14:05:51-05:00] Creating Portgroup
INFO[2016-08-06T14:05:51-05:00] Creating appliance on target
INFO[2016-08-06T14:05:51-05:00] Network role “client” is sharing NIC with “external”
INFO[2016-08-06T14:05:51-05:00] Network role “management” is sharing NIC with “external”
INFO[2016-08-06T14:05:52-05:00] Uploading images for container
INFO[2016-08-06T14:05:52-05:00] "bootstrap.iso"
INFO[2016-08-06T14:05:52-05:00] "appliance.iso"
INFO[2016-08-06T14:06:00-05:00] Waiting for IP information
INFO[2016-08-06T14:06:18-05:00] Waiting for major appliance components to launch
INFO[2016-08-06T14:06:18-05:00] Initialization of appliance successful
INFO[2016-08-06T14:06:18-05:00]
INFO[2016-08-06T14:06:18-05:00] vic-admin portal:
INFO[2016-08-06T14:06:18-05:00] https://172.16.127.131:2378
INFO[2016-08-06T14:06:18-05:00]
INFO[2016-08-06T14:06:18-05:00] DOCKER_HOST=172.16.127.131:2376
INFO[2016-08-06T14:06:18-05:00]
INFO[2016-08-06T14:06:18-05:00] Connect to docker:
INFO[2016-08-06T14:06:18-05:00] docker -H 172.16.127.131:2376 --tls info
INFO[2016-08-06T14:06:18-05:00] Installer completed successfully
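The installer output above already gives a Docker client everything it needs. Here is a minimal sketch of pointing the client at the VCH, reusing the example address from the log (the address is specific to this walkthrough, so adjust it for your own deployment):

```shell
# VCH Docker endpoint reported by vic-machine above (example value)
VCH_DOCKER_HOST=172.16.127.131:2376

# Either pass the endpoint on every command:
#   docker -H "$VCH_DOCKER_HOST" --tls info
# ...or export it once so plain `docker` commands target the VCH:
export DOCKER_HOST="$VCH_DOCKER_HOST"
echo "Docker client now targets $DOCKER_HOST"
```

With DOCKER_HOST exported, the rest of the commands in this post can drop the -H flag.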
You can use the vSphere or ESXi web client to take a look :
Creating a containerVM :
$ docker --tls run --name container1 ubuntu
The container has been created :
Conclusion :
ContainerVMs seem to have the following distinctive characteristics compared to traditional containers :
- There is no default shared filesystem between the container and its host
  - Volumes are attached to the container as disks and are completely isolated from each other
  - A shared filesystem could be provided by something like an NFS volume driver
- The way that you do low-level management and monitoring of a container is different. There is no VCH shell.
  - Any API-level control plane query, such as docker ps, works as expected
  - Low-level management and monitoring uses exactly the same tools and processes as for a VM
- The kernel running in the container is not shared with any other container
  - This means that there is no such thing as an optional privileged mode. Every container is privileged and fully isolated.
  - When a containerVM kernel is forked rather than booted, much of its immutable memory is shared with a parent template
- There is no such thing as unspecified memory or CPU limits
  - A Linux container will have access to all of the CPU and memory resources available on its host if no limits are specified
  - A containerVM must have memory and CPU limits defined, either derived from a default or specified explicitly
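To make that last point concrete, here is a hedged sketch of what a run with explicit limits could look like. The -m flag is a standard docker run option; the idea is that the VCH uses it to size the backing VM. The endpoint and container name are just placeholders carried over from this walkthrough, so the command is only echoed here rather than executed:

```shell
# Placeholder VCH endpoint from earlier in the post
VCH=172.16.127.131:2376

# A containerVM gets explicit limits (or falls back to a default);
# -m sets the memory limit used to size the backing VM.
CMD="docker -H $VCH --tls run -d --name limited1 -m 2g ubuntu"
echo "$CMD"
```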
That said, traditional containers like Docker are definitely a more mature solution, and they offer more tools for orchestration and scaling.