FinOps and shifting left your cloud finances

The old vs the new model of infrastructure finance

The old procurement model :

In the old days of infrastructure, ops teams, like any other part of the organization, had to follow a traditional procurement model: hardware requirements were set months or years in advance and needed to account for growth. The process was long and subject to multiple budget approvals and delays, and both OpEx and CapEx budgets had to be determined and approved.

The new cloud model

In the era of cloud, operations teams have more direct access to cloud resources and cloud spending, without having to go through a budget approval process. With this new flexibility in provisioning, the challenge becomes how to build a good governance model around cloud spending that doesn't become a roadblock for development and operations teams, while still giving finance teams real-time visibility into the cloud bill.

What is FinOps?

FinOps is a culture and practice of financial awareness and transparency around cloud spending. Teams should use tools to understand their spend, forecast future spend, and take action to manage that spend as needed. FinOps is about using data to drive better decisions. It’s about efficiently using resources for the best return on investment. FinOps is about understanding how your organization is using cloud resources, and using that information to make strategic decisions about how to use your cloud spend more effectively.

FinOps Personas

Different people, or personas, are involved in a FinOps team. Each has a different role to play, but it is all centered around cloud provisioning and spending:

FinOps Practitioner : Drive best practices into the organization through education, standardization, and cheerleading

Execs : Leverage technology to give the business a market and competitive advantage

Product Owner : Quickly bring new products and features to market with an accurate price point

Finance : Accurately budget, forecast and report cloud costs

Engineering : Deliver faster and high quality services to the organisation, whilst maintaining business as usual

The Prius Effect

J.R. Storment and Mike Fuller, in their book Cloud FinOps, talk about the Prius Effect. It's an analogy to driving a hybrid car: when you put your foot on the pedal you can instantly see the flow of energy from the battery to the engine, and when you lift your foot the flow goes the other way; the feedback is instantaneous.

Real-time reporting on cloud spend should follow the same model, giving everyone the transparency to see the effect of cloud provisioning. This is a capability that needs to be built and offered internally, making use of the cloud providers' billing data, cost estimators, and so on.

Unit Economics

Unit economics is the idea that cloud spend should be measured against a business metric: the number of orders delivered per day, the cost of serving a daily active user (DAU), revenue per ride for a ride-sharing company, and so on.

This approach changes the narrative around cloud spend from simply reducing the cloud bill to correlating that amount with a business goal, which helps organizations distinguish between good cloud spend and bad.

Shifting left cloud finances with engineering and operations teams:

Just like DevOps and DevSecOps allowed us to shift left application delivery and application security, by getting developers and operations teams to work closely together to release faster and embed security practices in the pipelines, FinOps aims to eliminate the friction between ops teams and finance teams. One aspect of this collaboration can happen at the code level:

Imagine an operations team that wants to provision additional EC2 instances. The team writes a Terraform template and sends a pull request to your code repository.

Your pull request can then generate a report showing the forecasted expenses for such a change. Tools like infracost can produce a report comparing the cost of the current infrastructure with the proposed one, in a way that helps the decision-making process: product managers can take a look at the report and decide to approve or deny the pull request.
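For example, a pipeline step along these lines can attach a cost estimate to the pull request (a rough sketch; the file names are examples and the flags may vary between infracost versions):

# estimate the cost of the Terraform code on the main branch
infracost breakdown --path . --format json --out-file infracost-base.json

# on the pull request branch, show the cost difference against that baseline
infracost diff --path . --compare-to infracost-base.json

The diff output can then be posted as a pull request comment, so reviewers see the forecasted increase or decrease before merging.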

While the scope of this step is limited to the engineering teams, it helps create more transparency around cloud expenses and lets operations teams take better ownership of their cloud bill.

My Thoughts :

FinOps will have to become a culture adopted by every organization using cloud resources, driven by transparency and ownership of cloud expenses, but also by the management of cloud investments. At the end of the day FinOps is not only about saving money, it's also about making money, for instance when more resources are needed for new projects.

There are a lot of talks, blogs, and webinars about cloud cost optimization, but they are often targeted only at engineering and operations teams, using very specific technical nomenclature that makes it almost impossible for non-technical stakeholders to follow.

J.R. Storment and Mike Fuller recommend having an actual FinOps team, composed of the different personas, with access to real-time reporting and a shared language and terminology that makes sense for everyone, while keeping in mind that one of the driving factors of cloud expenditure is unit economics.

Making your publisher/subscriber app more cloud-agnostic using Dapr

Publisher/Subscriber Model

Publish/subscribe messaging, or pub/sub messaging, is a form of asynchronous service-to-service communication used in serverless and microservices architectures. In a pub/sub model, any message published to a topic is immediately received by all of the subscribers to the topic. Pub/sub messaging can be used to enable event-driven architectures, or to decouple applications in order to increase performance, reliability and scalability.

Depending on which pub/sub technology you end up choosing, your application code can become tightly coupled to it, which makes it difficult to switch from one provider to another without changing your application code.

Dapr (Distributed Application Runtime) gives you the possibility to abstract your pub/sub layer and write your code independently of your pub/sub provider (AWS SQS, Azure Service Bus, Kafka, Azure Event Hubs, RabbitMQ, Redis Streams).

Example :

In this example we will build an order processing app where the publisher pushes orders to the messaging queue.

Application layer: publisher and subscriber, both Python programs

DAPR pubsub Component : we will use Azure Service Bus

1- creating Azure Service Bus :

for more details about how to create an Azure Service Bus, you can read this article

2- Install Dapr

curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash

3- initialize dapr

in a terminal window run : dapr init

4- Configure the Dapr pub/sub component

create a yaml file called pubsub.yaml and place it under ~/.dapr/components
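A minimal component file for Azure Service Bus can look like the sketch below; the component name (orderpubsub) is an assumption reused by the publisher code later, and the exact component type string may differ slightly depending on your Dapr version:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: orderpubsub
spec:
  type: pubsub.azure.servicebus
  version: v1
  metadata:
  - name: connectionString
    value: "<your Azure Service Bus connection string>"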

5- Start Dapr Application

dapr run --app-id daprapptest --dapr-http-port 3601

6- Run publisher application

Save the publisher app code to a publisher.py file.
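A minimal sketch of such a publisher, using the Dapr HTTP publish API exposed by the sidecar (the pubsub component name orderpubsub and the topic name orders are assumptions and must match your pubsub.yaml):

import time
import requests

DAPR_HTTP_PORT = 3601          # matches --dapr-http-port in the commands above
PUBSUB_NAME = "orderpubsub"    # assumption: must match metadata.name in pubsub.yaml
TOPIC_NAME = "orders"          # assumption: any topic name works

publish_url = f"http://localhost:{DAPR_HTTP_PORT}/v1.0/publish/{PUBSUB_NAME}/{TOPIC_NAME}"

for order_id in range(1, 11):
    order = {"orderId": order_id}
    # the Dapr sidecar delivers the message to Azure Service Bus
    # (or whatever broker the pubsub component points to)
    response = requests.post(publish_url, json=order)
    print(f"published order {order_id}, status: {response.status_code}")
    time.sleep(1)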

Starting the publisher app :

dapr run --app-id daprapptest --app-port 6001 --dapr-http-port 3601 --app-protocol grpc python3 publisher.py

The application starts publishing messages to the Azure Service Bus.

My Thoughts:

Modern cloud-native microservices applications need to be agnostic of the underlying technologies in order to achieve more independence and ease of deployment across platforms. In the example above we tried to make the publisher application code more independent of the technology: we used Azure Service Bus as the queue technology, but we could easily change it to AWS SQS, RabbitMQ, or any other platform that Dapr supports without having to make any changes to the application code.

Automating AWS server builds from ServiceNow requests.

A lot of organizations use ServiceNow to manage requests for creating new compute resources in the cloud (i.e. servers, Lambda functions, containers…).

Usually, CMP tools like Scalr, or even ServiceNow Cloud Management, can (to a certain degree) automate this process, but there are two major issues with these approaches:

1 – To be used properly, these tools need lengthy customization to fit the business architecture of the organization: mapping all the business units, integrating with existing tools, and so on.

2 – It can be expensive; the price-per-node model can add up quickly for companies with a large number of servers deployed.

Here is a way to automate server builds using the ServiceNow Orchestration module and the Serverless Framework to build the integration.

snow

Creating ServiceNow Form

 

Screen Shot 2018-10-15 at 10.16.32 AM

Screen Shot 2018-10-15 at 10.17.35 AM

 

ServiceNow workflow

Screen Shot 2018-10-14 at 5.38.03 PM

To simplify the process I've only used a simple RunScript step; other activities can be added, like CMDB integration, approvals, notifications, etc.

here is a look at what the script does :

  • Gets all the parameters from the form
  • Makes an API call to the API Gateway endpoint
  • Lambda then triggers the creation of the EC2 instance with those parameters

[code]

var os = current.variables.os;
var inst_size = current.variables.inst_size;

var request = new sn_ws.RESTMessageV2();
request.setEndpoint('https://9drnhpoorj.execute-api.us-east-1.amazonaws.com/dev/instance');
request.setBasicAuth("apikey1", "mutKJWAolpsdfsdflksd8ndew02234LNFQQvq");
request.setHttpMethod('POST');
request.setRequestHeader("Accept", "application/json");
request.setRequestHeader('Content-Type', 'application/json');

// form values forwarded to the API
request.setStringParameterNoEscape('os', current.variables.os);
request.setStringParameterNoEscape('inst_size', current.variables.inst_size);
request.setStringParameterNoEscape('volume_size', current.variables.volume_size1);

request.setRequestBody('{"os":"' + os + '","inst_size":"' + inst_size + '"}');
var response = request.execute();

[/code]

 

The AWS side

To build things on the AWS side, I used the Serverless Framework.

check out the code at the github repo
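For context, the Lambda behind that endpoint essentially turns the form parameters into an EC2 run_instances call. A minimal sketch (this is an illustration, not the exact code from the repo; the AMI lookup table, defaults, and handler name are placeholders):

import json
import boto3

# assumption: map the 'os' choice from the ServiceNow form to an AMI ID
AMI_MAP = {
    'amazon_linux': 'ami-xxxxxxxx',   # replace with a real Amazon Linux AMI ID
    'ubuntu': 'ami-yyyyyyyy',         # replace with a real Ubuntu AMI ID
}

def create_instance(event, context):
    # with the API Gateway Lambda proxy integration, the POST body arrives as a string
    body = json.loads(event['body'])
    os_choice = body.get('os', 'amazon_linux')
    inst_size = body.get('inst_size', 't2.micro')

    ec2 = boto3.client('ec2')
    result = ec2.run_instances(
        ImageId=AMI_MAP.get(os_choice, AMI_MAP['amazon_linux']),
        InstanceType=inst_size,
        MinCount=1,
        MaxCount=1,
    )
    instance_id = result['Instances'][0]['InstanceId']
    return {
        'statusCode': 200,
        'body': json.dumps({'instanceId': instance_id}),
    }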

Demo

If I request an EC2 instance that's an m1.small running Amazon Linux:

Screen Shot 2018-10-14 at 5.49.25 PM

If I take a look at the AWS console, I can see that the server is getting created:

Screen Shot 2018-10-15 at 10.23.14 AM

With the right parameters :

Screen Shot 2018-10-15 at 10.21.09 AM

 

Using AWS GuardDuty to stop compromised instances and send notifications.

GuardDuty (announced at AWS re:Invent 2017) is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It monitors for activity such as unusual API calls or potentially unauthorized deployments that indicate a possible account compromise. GuardDuty also detects potentially compromised instances or reconnaissance by attackers.

With a minimal amount of code and a few clicks in the AWS console, we can set up GuardDuty to scan EC2 fleets for potential threats, notify a Lambda function that stops the compromised instances, and send an SMS notification using the AWS SNS service:

Screen Shot 2018-01-04 at 9.43.11 AM

1- Testing a few threats:

1-a – Bitcoin mining: one of the possible threats is your EC2 instances being used for bitcoin mining. I started a bitcoind container on my EC2 instance:

Screen Shot 2018-01-04 at 9.53.26 AM

1-b – SSH brute force: I'm not using any username and password dictionaries

Screen Shot 2018-01-04 at 9.55.03 AM

2- SNS topic : create an SNS topic called guardduty_alerts, with an SMS subscription
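The console works fine for this step; for reference, a rough boto3 equivalent looks like this (the phone number is a placeholder):

import boto3

sns = boto3.client('sns', region_name='us-east-1')

# topic that the Lambda function below publishes to
topic = sns.create_topic(Name='guardduty_alerts')

# SMS subscription; the endpoint is the phone number that receives the alerts
sns.subscribe(
    TopicArn=topic['TopicArn'],
    Protocol='sms',
    Endpoint='+15555550100',
)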

3- Lambda: for stopping instances and sending notifications

import boto3

def lambda_handler(event, context):
    print('loading handler')
    # print(event)

    sns = boto3.client('sns')
    topic_arn = 'arn:aws:sns:us-east-1:9999999999:guardduty_alerts'

    # the event passed in by the CloudWatch rule is already a dict, no json.loads needed
    instance_id = event['detail']['resource']['instanceDetails']['instanceId']
    finding = event['detail']['description']
    message = "your EC2 instance " + instance_id + " has been compromised by attack of " + finding + ", it will be stopped"

    sns.publish(
        TopicArn=topic_arn,
        Message=message
    )

    ec2 = boto3.client('ec2', region_name='us-east-1')
    ec2.stop_instances(InstanceIds=[instance_id])
    return

4- CloudWatch rule: create a CloudWatch Events rule that triggers the Lambda function we created previously.
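If you prefer code over the console, a rough boto3 sketch of that rule is below (the rule name is an example and the Lambda ARN is a placeholder); you also need to allow events.amazonaws.com to invoke the function, for example with lambda add-permission:

import json
import boto3

events = boto3.client('events', region_name='us-east-1')

# match every GuardDuty finding; the pattern can be narrowed with a 'detail' filter
events.put_rule(
    Name='guardduty-findings',
    EventPattern=json.dumps({
        'source': ['aws.guardduty'],
        'detail-type': ['GuardDuty Finding'],
    }),
    State='ENABLED',
)

# send matching events to the Lambda function from step 3
events.put_targets(
    Rule='guardduty-findings',
    Targets=[{'Id': 'guardduty-lambda', 'Arn': '<lambda function ARN>'}],
)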

 

Et voilà, all the threats we generated earlier show up in the GuardDuty findings:

Screen Shot 2018-01-04 at 10.36.08 AM

Stopping the compromised instances:

Screen Shot 2018-01-04 at 10.42.33 AM

sending notifications:

Screen Shot 2018-01-04 at 10.43.29 AM

 

Local (and S3) cloud storage server using Minio

Minio is a local cloud object storage server. It's open source, released under the Apache License v2.0, and allows developers and DevOps teams to use both local and public cloud storage to:

  • back up VMs
  • back up containers
  • store unstructured data (photos, files, …)
  • store objects in AWS S3
  • store objects using SDKs (Go, JavaScript, Java; see the sketch after this list)
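As an illustration of the SDK route, here is a minimal sketch using Minio's Python client (the bucket name, keys, and file are placeholders; the Go, JavaScript, and Java SDKs follow the same pattern), assuming the local server described below is running:

from minio import Minio

# keys are the ones the server prints at startup (placeholders here)
client = Minio("localhost:9000",
               access_key="<ACCESS_KEY>",
               secret_key="<SECRET_KEY>",
               secure=False)

client.make_bucket("mybucket")
client.fput_object("mybucket", "photo.jpg", "./photo.jpg")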

To start a server you can use the Minio container image available on Docker Hub:

minio/minio

You can pull it with:

docker pull minio/minio

to start the server run :

 docker run -p 9000:9000 minio/minio server /export

you can access the web UI at http://localhost:9000

screen-shot-2017-01-02-at-7-21-48-pm

the access key and secret key for the local server are generated at the start of the server

create a bucket :
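A minimal way to do it with the Minio client (mc), assuming an alias named local has been added for the server we just started (the bucket name is just an example):

mc config host add local http://localhost:9000 <ACCESS_KEY> <SECRET_KEY>
mc mb local/mybucket
mc cp ./photo.jpg local/mybucket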

screen-shot-2017-01-02-at-9-15-52-pm

accessible also through the web UI:

screen-shot-2017-01-02-at-9-18-00-pm

using your AWS S3 storage :

We need to add the AWS S3 endpoint to the list of hosts:

mc config host add my-aws https://s3.amazonaws.com YOUR_ACCESS_KEY  YOUR_SECRET_KEY

create a bucket in S3 :

screen-shot-2017-01-02-at-10-30-19-pm

and it’s created  :

screen-shot-2017-01-02-at-10-36-10-pm

CI and code promotion for Chef cookbooks with Jenkins – POC

 

I have been browsing the internet for blogs or articles that help Chef developers promote their cookbook code: a way of vetting code and avoiding having it go from the operations team straight to production.

I found a lot of theoretical articles on building a CI pipeline for Chef cookbooks, but not a lot of practical ones, so I decided to make a proof of concept for the public and my team as well.

When it comes to integration tools, I like Jenkins: it's open source and the community is very active in adding and updating plugins.

In this example I will use a Java cookbook as a code base, and I will be running 4 tests (a quick local invocation of each is sketched after the list):

  • Foodcritic : a helpful lint tool you can use to check your Chef cookbooks for common problems. It comes with 61 built-in rules that identify problems ranging from simple style inconsistencies to difficult to diagnose issues that will hurt in production.
  • ChefSpec : a unit testing framework for testing Chef cookbooks. ChefSpec makes it easy to write examples and get fast feedback on cookbook changes without the need for virtual machines or cloud servers.
  • Rubocop :  a Ruby static code analyzer. Out of the box it will enforce many of the guidelines outlined in the community Ruby Style Guide.
  • Test Kitchen : a test harness tool to execute your configured code on one or more platforms in isolation. A driver plugin architecture is used which lets you run your code on various cloud providers and virtualization technologies such as Amazon EC2,  Vagrant, Docker, LXC containers, and more. Many testing frameworks are already supported out of the box including Bats, shUnit2, RSpec, Serverspec, with others being created weekly.
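For reference, each of these can be run locally from the cookbook directory before wiring them into Jenkins; a rough sketch, assuming the tools are installed (they all ship with the ChefDK mentioned below) and the ChefSpec examples live under spec/:

rubocop                  # Ruby style and static analysis
foodcritic .             # cookbook lint rules
chef exec rspec          # ChefSpec unit tests
kitchen test             # converge the cookbook on a throwaway VM and verify it

The Jenkins test jobs below essentially run the same commands inside an Execute shell build step.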

Of course, you can have all these tools in one package… the famous ChefDK.

Code Promotion :

The concept of code promotion helps the CI process distinguish between good and bad builds; I like to define a good build as one where ALL the tests are successful.

Jenkins helps you implement this concept with a community plugin: the Promoted Builds Plugin.

Based on the status of your build (promoted or not), you can control the code that goes into your repository (GitHub or GitLab); you can set up hooks to deny merge requests from builds that are not promoted.

Jobs:

Let's set up our jobs; we will have two categories of jobs:

  • Build Jobs
  • Test Jobs

 

screen-shot-2016-09-18-at-9-00-30-pm

Whenever a build job is successful, it will trigger all the test jobs.

screen-shot-2016-09-19-at-9-24-50-am

Build-java-cookbook: clones the code repo and creates a temporary artifact. This is the config section for this job:

screen-shot-2016-09-18-at-9-04-18-pm

Rubocop Test: copies the temporary artifact, decompresses it to get the whole code repo, and runs RuboCop on the code:

screen-shot-2016-09-18-at-9-18-58-pm

ChefSpec Test :

screen-shot-2016-09-19-at-9-26-48-am

FoodCritic : 

screen-shot-2016-09-19-at-9-34-11-am

Test Kitchen : 

screen-shot-2016-09-19-at-9-39-28-am

Test Kitchen will spin up a Vagrant box (ubuntu-14.04), run the cookbook on it, and test the results.

First Run : 

With the configurations above, we run the build and test jobs:

Result :

screen-shot-2016-09-19-at-9-54-56-am

The RuboCop test failed; by looking at the execution log we can see why:

+ rubocop
Inspecting 44 files
.......CC...CC........C..C..................

Offenses:

providers/alternatives.rb:34:38: C: Use shell_out("#{alternatives_cmd} --display #{cmd} | grep #{alt_path} | grep 'priority #{priority}$'").exitstatus.zero? instead of shell_out("#{alternatives_cmd} --display #{cmd} | grep #{alt_path} | grep 'priority #{priority}$'").exitstatus == 0.
      alternative_exists_same_prio = shell_out("#{alternatives_cmd} --display #{cmd} | grep #{alt_path} | grep 'priority #{priority}$'").exitstatus == 0
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/alternatives.rb:35:28: C: Use shell_out("#{alternatives_cmd} --display #{cmd} | grep #{alt_path}").exitstatus.zero? instead of shell_out("#{alternatives_cmd} --display #{cmd} | grep #{alt_path}").exitstatus == 0.
      alternative_exists = shell_out("#{alternatives_cmd} --display #{cmd} | grep #{alt_path}").exitstatus == 0
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/alternatives.rb:43:18: C: Use remove_cmd.exitstatus.zero? instead of remove_cmd.exitstatus == 0.
          unless remove_cmd.exitstatus == 0
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/alternatives.rb:57:18: C: Use install_cmd.exitstatus.zero? instead of install_cmd.exitstatus == 0.
          unless install_cmd.exitstatus == 0
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/alternatives.rb:66:28: C: Use shell_out("#{alternatives_cmd} --display #{cmd} | grep \"link currently points to #{alt_path}\"").exitstatus.zero? instead of shell_out("#{alternatives_cmd} --display #{cmd} | grep \"link currently points to #{alt_path}\"").exitstatus == 0.
      alternative_is_set = shell_out("#{alternatives_cmd} --display #{cmd} | grep \"link currently points to #{alt_path}\"").exitstatus == 0
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/alternatives.rb:72:16: C: Use set_cmd.exitstatus.zero? instead of set_cmd.exitstatus == 0.
        unless set_cmd.exitstatus == 0
               ^^^^^^^^^^^^^^^^^^^^^^^
providers/alternatives.rb:87:50: C: Use cmd.exitstatus.zero? instead of cmd.exitstatus == 0.
    new_resource.updated_by_last_action(true) if cmd.exitstatus == 0
                                                 ^^^^^^^^^^^^^^^^^^^
providers/ark.rb:39:20: C: Omit parentheses for ternary conditions.
    package_name = (file_name =~ /^server-jre.*$/) ? 'jdk' : file_name.scan(/[a-z]+/)[0]
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/ark.rb:134:14: C: Use 0o for octal literals.
        mode 0755
             ^^^^
providers/ark.rb:149:14: C: Closing method call brace must be on the line after the last argument when opening brace is on a separate line from the first argument.
            ))
             ^
providers/ark.rb:150:16: C: Use cmd.exitstatus.zero? instead of cmd.exitstatus == 0.
        unless cmd.exitstatus == 0
               ^^^^^^^^^^^^^^^^^^^
providers/ark.rb:157:16: C: Use cmd.exitstatus.zero? instead of cmd.exitstatus == 0.
        unless cmd.exitstatus == 0
               ^^^^^^^^^^^^^^^^^^^
providers/ark.rb:164:16: C: Use cmd.exitstatus.zero? instead of cmd.exitstatus == 0.
        unless cmd.exitstatus == 0
               ^^^^^^^^^^^^^^^^^^^
providers/ark.rb:172:14: C: Use cmd.exitstatus.zero? instead of cmd.exitstatus == 0.
      unless cmd.exitstatus == 0
             ^^^^^^^^^^^^^^^^^^^
recipes/ibm.rb:44:8: C: Use 0o for octal literals.
  mode 00755
       ^^^^^
recipes/ibm_tar.rb:36:8: C: Use 0o for octal literals.
  mode 00755
       ^^^^^
recipes/ibm_tar.rb:49:8: C: Use 0o for octal literals.
  mode 00755
       ^^^^^
recipes/set_java_home.rb:27:8: C: Use 0o for octal literals.
  mode 00755
       ^^^^^
recipes/set_java_home.rb:32:8: C: Use 0o for octal literals.
  mode 00755
       ^^^^^
resources/ark.rb:39:54: C: Use 0o for octal literals.
attribute :app_home_mode, kind_of: Integer, default: 0755
                                                     ^^^^

44 files inspected, 20 offenses detected
Build step 'Execute shell' marked build as failure
Finished: FAILURE

 

 

let’s go ahead and fix these offenses and commit the code :

screen-shot-2016-09-19-at-10-38-02-am

we restart the build :

and here everything is green this time :

screen-shot-2016-09-19-at-11-40-27-am

From this point on you can do two things:

  • save your cookbook in a private supermarket with a corresponding version number
  • upload this cookbook to chef server

 

Promotion status :

After the completion of all tests, this build can now be promoted.

screen-shot-2016-09-19-at-11-48-31-am

 

Running containerVMs on an ESXi VMware host

 

Until today I thought that running containers always depended on the existence of a host and an OS of some kind, but then I came across this project: vSphere Integrated Containers. It's a runtime environment allowing developers to run containers as VMs, instead of running containers in VMs.

there is a good read to understand the contrast between traditional containers and containerVMs

vic-machine:
is a CLI tool that allows creating containerVMs in the following setups:
  • vCenter Server with a cluster
  • vCenter Server with one or more standalone ESXi hosts
  • A standalone ESXi host

This architecture relies on a Virtual Container Host; the VCH is an endpoint used to start, stop, and delete containers across the datacenter.

“The Virtual Container Host (VCH) is the means of controlling, as well as consuming, container services – a Docker API endpoint is exposed for developers to access, and desired ports for client connections are mapped to running containers as required. Each VCH is backed by a vSphere resource pool, delivering compute resources far beyond that of a single VM or even a dedicated physical host. Multiple VCHs can be deployed in an environment, depending on business requirements. For example, to separate resources for development, testing, and production.”

the binaries can be downloaded from here :

https://bintray.com/vmware/vic-repo/build/view#files

untar the compressed file :

$ tar xvzf vic_3711.tar.gz

this is the content of the tar file :

Screen Shot 2016-08-06 at 11.54.13 PM

Setting up the ESXi host:

  • download the ISO file from the VMware website: https://my.vmware.com/en/web/vmware/evalcenter?p=free-esxi6
  • use VirtualBox or VMware Fusion to create the ESXi host (http://www.vmwareandme.com/2013/10/step-by-step-guide-how-to-install.html#.V6a8rZNViko)

Screen Shot 2016-08-06 at 11.50.51 PM

Creating a Virtual Container Host :

$ vic-machine-darwin create --target 172.16.127.130 --user root --image-datastore datastore1
INFO[2016-08-06T14:05:48-05:00] Please enter ESX or vCenter password:
INFO[2016-08-06T14:05:50-05:00] ### Installing VCH ####
INFO[2016-08-06T14:05:50-05:00] Generating certificate/key pair - private key in ./virtual-container-host-key.pem
INFO[2016-08-06T14:05:50-05:00] Validating supplied configuration
INFO[2016-08-06T14:05:51-05:00] Firewall status: ENABLED on "/ha-datacenter/host/localhost.localdomain/localhost.localdomain"
INFO[2016-08-06T14:05:51-05:00] Firewall configuration OK on hosts:
INFO[2016-08-06T14:05:51-05:00] "/ha-datacenter/host/localhost.localdomain/localhost.localdomain"
WARN[2016-08-06T14:05:51-05:00] Evaluation license detected. VIC may not function if evaluation expires or insufficient license is later assigned.
INFO[2016-08-06T14:05:51-05:00] License check OK
INFO[2016-08-06T14:05:51-05:00] DRS check SKIPPED - target is standalone host
INFO[2016-08-06T14:05:51-05:00] Creating Resource Pool "virtual-container-host"
INFO[2016-08-06T14:05:51-05:00] Creating VirtualSwitch
INFO[2016-08-06T14:05:51-05:00] Creating Portgroup
INFO[2016-08-06T14:05:51-05:00] Creating appliance on target
INFO[2016-08-06T14:05:51-05:00] Network role "client" is sharing NIC with "external"
INFO[2016-08-06T14:05:51-05:00] Network role "management" is sharing NIC with "external"
INFO[2016-08-06T14:05:52-05:00] Uploading images for container
INFO[2016-08-06T14:05:52-05:00] "bootstrap.iso"
INFO[2016-08-06T14:05:52-05:00] "appliance.iso"
INFO[2016-08-06T14:06:00-05:00] Waiting for IP information
INFO[2016-08-06T14:06:18-05:00] Waiting for major appliance components to launch
INFO[2016-08-06T14:06:18-05:00] Initialization of appliance successful
INFO[2016-08-06T14:06:18-05:00]
INFO[2016-08-06T14:06:18-05:00] vic-admin portal:
INFO[2016-08-06T14:06:18-05:00] https://172.16.127.131:2378
INFO[2016-08-06T14:06:18-05:00]
INFO[2016-08-06T14:06:18-05:00] DOCKER_HOST=172.16.127.131:2376
INFO[2016-08-06T14:06:18-05:00]
INFO[2016-08-06T14:06:18-05:00] Connect to docker:
INFO[2016-08-06T14:06:18-05:00] docker -H 172.16.127.131:2376 --tls info
INFO[2016-08-06T14:06:18-05:00] Installer completed successfully

You can use the vSphere or ESXi web client to take a look:

Screen Shot 2016-08-07 at 12.20.56 AM

creating ContainerVM :

$ docker --tls run --name container1 ubuntu

the container has been created :

Screen Shot 2016-08-07 at 12.25.32 AM

 

Conclusion :

ContainerVMs seem to have the following distinctive characteristics over traditional containers:

  1. There is no default shared filesystem between the container and its host
    • Volumes are attached to the container as disks and are completely isolated from each other
    • A shared filesystem could be provided by something like an NFS volume driver
  2. The way that you do low-level management and monitoring of a container is different. There is no VCH shell.
    • Any API-level control plane query, such as docker ps, works as expected
    • Low-level management and monitoring uses exactly the same tools and processes as for a VM
  3. The kernel running in the container is not shared with any other container
    • This means that there is no such thing as an optional privileged mode. Every container is privileged and fully isolated.
    • When a containerVM kernel is forked rather than booted, much of its immutable memory is shared with a parent template
  4. There is no such thing as unspecified memory or CPU limits
    • A Linux container will have access to all of the CPU and memory resource available in its host if not specified
    • A containerVM must have memory and CPU limits defined, either derived from a default or specified explicitly

But traditional containers like Docker are definitely a more mature solution, offering more tools for orchestration and scaling.

Riak Cluster Using Docker Compose

Riak is some hot stuff lately, with the increasing need for clusterization in the world of NoSQL data stores.

Riak is a solution to the big data problem; it is based on the Amazon Dynamo design, built to respond to requests at very large scale.

Basho introduced Riak as fault tolerant, simple, scalable, and high-availability friendly.

It's fairly easy to create and add nodes to a Riak cluster with riak-admin, but combining this with docker-compose capabilities gives you the possibility of an easily deployable and scalable cluster of Riak nodes.

I created a docker-compose YAML file to specify all the components of the cluster, basically a seed and a set of nodes.

I used hectcastro's Riak Docker image, because that's the beauty of containers: they are reusable!

$git clone https://github.com/mbenachour/riak-cluster-compose.git
$cd riak-cluster-compose

 

  • start compose as a daemon :
$docker-compose up -d
  • you will have one seed and one node running :

Screen Shot 2015-12-15 at 5.00.52 PM

  • list all running containers

Screen Shot 2015-12-15 at 5.02.33 PM

  • let's create an "artists" bucket and add an object using node1 (port: 32812)
$curl -i -d '{"name":"Bruce"}' -H "Content-Type: application/json" \
localhost:32812/riak/artists/Bruce
  • check the bucket was created :

Screen Shot 2015-12-15 at 5.30.05 PM
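For a quick command-line check, the old-style Riak HTTP API can also list the keys in the bucket (the port is whatever Docker mapped for node1, 32812 in this run):

$curl 'localhost:32812/riak/artists?keys=true'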

Scale time ! : 

  • to scale your cluster you can add 3 more nodes:
    $docker-compose scale riak_node=4

Screen Shot 2015-12-15 at 5.32.32 PM

  •  let’s query node4 for the list of buckets (port: 32818)

Screen Shot 2015-12-15 at 5.36.08 PM

works !

If you are interested in scaling across multiple hosts, you can combine this with Docker Swarm.