
Join me for “Serverless Machine Learning with TensorFlow” at O’Reilly Strata in London, UK

If you are at the O’Reilly Strata Data Conference in London on May 22nd, I’d be delighted to see you! During the session, I’ll walk you through building a complete machine learning pipeline, from ingest, exploration, training, and evaluation to deployment and prediction. The session will be conducted on Google Cloud Platform (GCP) and will use GCP’s infrastructure to run TensorFlow.

Specifically, the session will cover:

  • Data pipelines and data processing: How to explore and split large datasets correctly (using SQL and pandas on BigQuery and Cloud Datalab)
  • Model building: How to develop a wide-and-deep machine learning model in TensorFlow on a small sample locally (using Apache Beam for preprocessing so that the same operations can also be applied in streaming mode, and Cloud Dataflow and Cloud ML Engine for preprocessing and training of the model)
  • Model inference and deployment: How to deploy the trained model as a REST microservice and invoke predictions from a web application

For more, check out: https://conferences.oreilly.com/strata/strata-eu/public/schedule/detail/65484

Outpace your rivals and transform to MobileFirst

This is the original version of my article on IBM Cloud computing news titled “How a mobile development platform can help IT pros cut through clutter.”

Have you ever felt overwhelmed by the number of mobile gadgets you see every day?

If so, you are not alone. In 2015, the total number of mobile devices worldwide (7.9 billion) eclipsed the world’s population (7.4 billion). Though smartphone manufacturers often pitch their products as if they are fashion accessories, a recent study by the IBM Institute for Business Value uncovered that companies around the globe are driving the adoption of mobile because it makes good financial sense. Sixty-two percent of the executives surveyed as part of the study said that their top mobile initiatives achieved return on investment (ROI) in 12 months or less.

Chief information officers, enterprise architects, software development managers and other information technology professionals should plan for a growing number of mobile projects.

IBM is a recognized leader in helping enterprises launch and accelerate their mobility efforts. The research report “The Forrester Wave: Mobile Development Platforms, Q4 2016,” Forrester Research Inc, 24 October 2016, “included 12 vendors in the assessment” and “evaluated vendors against 32 criteria.”

It states: “The MobileFirst Foundation on-premises offering was once the most full-featured of IBM’s solutions, but the Bluemix cloud solution is now functionally equivalent, driving IBM’s move to the Leaders category.”

According to the report, IBM MobileFirst Platform delivered from the IBM Bluemix cloud proved to be a stronger offering than services delivered on clouds from Microsoft and AWS.

Customers increasingly demand conversational interfaces for interactions with brands.

Such customer experiences are complex to implement with only traditional application development skills and tools. Companies like Elemental Path, the grand prize recipient of the Watson Mobile Developer Challenge, chose IBM Watson to simplify the task of building conversational interfaces. Availability of IBM Watson technologies for developers is just one of the features that differentiate the IBM MobileFirst Platform from other mobile development platforms on the market.

Forrester reported that “IBM is best fit for shops that focus on data integration, especially complex integration scenarios.”

The IBM hybrid cloud infrastructure helps mobile application developers ensure a high degree of application and data isolation, security, auditability, and compliance with data privacy and other regulatory requirements. According to the Forrester report, IBM “customers cited the openness of the platform as a reason for purchase, particularly its front-end tooling partnerships with Cordova and Ionic.”

IBM’s expertise in mobile is based in part on the experience of working in partnership with Apple to help global businesses transform enterprise mobility.

As part of the Apple partnership, IBM developed and delivered more than 100 applications using the MobileFirst platform, covering 14 industries. The applications transformed work for professions ranging from wealth advisors to flight attendants.

For example, working with SAS, the largest airline in Scandinavia, IBM developed the Passenger+ app, which gives flight attendants a 360-degree view of each passenger’s past flight preferences, interests, and purchasing decisions. With this information, IBM MobileFirst became essential in helping SAS deliver a more personalized, premium flying experience.

 

Composable Architecture Patterns for Serverless Computing Applications – Part 4

This post is the fourth in a series on serverless computing (see Part 1, Part 2, and Part 3) and will focus on the differences between serverless architectures and the more widely known Platform-as-a-Service (PaaS) and Extract-Transform-Load (ETL) architectures. If you are unsure about what serverless computing is, I strongly encourage you to go back to the earlier parts of the series to learn the definition and to review concrete examples of microflows, which are applications based on a serverless architecture. This post will also use the applications developed previously to illustrate a number of serverless architecture patterns.

How’s serverless different?

Serverless and cloud. On the surface, the promise of serverless computing sounds similar to the original promise of cloud computing, namely helping developers abstract away from servers, focus on writing code, and avoid issues related to under- or over-provisioning of capacity, operating system patches, and so on. So what’s new in serverless computing? To answer this question, it is important to remember that cloud computing defines three service models: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Since serverless fits the definition of a PaaS[1], it offers many of the same benefits. However, unlike Cloud Foundry, OpenShift, Heroku, and other traditional PaaSes focused on supporting long-running applications and services, serverless frameworks offer a new kind of platform for running short-lived processes and functions, also called microflows.

The distinction between long-running processes and microflows is subtle but important. When started, long-running processes wait for an input, execute some code when an input is received, and then continue waiting. In contrast, microflows are started once an input is received, execute some code, and are terminated by the platform after the code in the microflow finishes executing. One way to describe microflows is to say that they are reactive, in the sense that they react to incoming requests and release resources after finishing work.
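
To make the shape of a microflow concrete, here is a minimal sketch (not code from this series) of a Node.js function written in the style OpenWhisk uses for JavaScript actions: the platform invokes it once per request, it performs a short-lived unit of work, and the platform is then free to reclaim resources.

// Hypothetical microflow written as a Node.js function in the style of an
// OpenWhisk JavaScript action: invoked once per request, short-lived by design.
function main(params) {
    // react to the incoming request
    var name = params.name || 'world';
    // perform a small unit of work and return the result; after this returns,
    // the platform can reclaim the resources used by the microflow
    return { greeting: 'Hello, ' + name + '!' };
}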

Serverless and microservices. Today, an increasing number of cloud-based applications that are built to run on PaaSes follow a cloud-native, microservices architecture[2]. Unlike microflows, microservices are long-running processes designed to continuously require server capacity (see microservices deployment patterns[3]) while waiting for a request.

For example, a microservice deployed to a container-based PaaS (e.g. Cloud Foundry) consumes memory and some fraction of the CPU even when not servicing requests. In most public cloud PaaSes, this continuous claim on CPU and memory resources directly translates to account charges. Further, microservice implementations may have memory leak bugs that result in increasing memory usage depending on how long a microservice has been running and how many requests it has serviced. In contrast, microflows are designed to be launched on demand, upon the arrival of a specific type of request to the PaaS hosting the microflow. After the code in the microflow finishes executing, the PaaS is responsible for releasing any resources allocated to the microflow at runtime, including memory. Although in practice the hosting PaaS may not fully release its memory resources in order to preserve a reusable, “hot” copy of a microflow for better performance, the PaaS can prevent runaway memory leaks by monitoring the microflow’s memory usage and restarting it.

Microflows naturally complement microservices by providing a means for microservices to communicate asynchronously, as well as to execute one-off tasks, batch jobs, and other operations described later in this post as serverless architecture patterns.

Serverless and ETL. Some may argue that serverless architecture is a new buzzword for the well-known Extract-Transform-Load (ETL) technologies. The two are related; in fact, AWS advertises its serverless computing service, Lambda, as a solution for ETL-type problems. However, unlike microflows, ETL applications are implicitly about data: they focus on a variety of data-specific tasks, like import, filtering, sorting, transformation, and persistence. Serverless applications are broader in scope: they can extract, transform, and load data (see Part 3), but they are not limited to these operations. In practice, microflows (serverless applications) are as much about data operations as they are about calling services to execute operations like sending a text message (see Part 1 and Part 2) or changing the temperature on an Internet-of-Things enabled thermostat. In short, serverless architecture patterns are not the same as ETL patterns.

Serverless architecture patterns

The following is a non-exhaustive and non-mutually exclusive list of serverless computing patterns. The patterns are composable, in the sense that a serverless application may implement just one of them, as in the examples in Part 1 and Part 2, or may be based on any number of them, as in the example in Part 3.

The command pattern describes serverless computing applications designed to orchestrate service requests to one or more services. The requests, which may be handled by microservices, can target a spectrum ranging from business services (such as sending text messages to customers), to application services (such as handling webhook calls), to infrastructure services (for example, provisioning additional virtual servers to deploy an application).

The enrich pattern is described in terms of the V’s framework popularized by Gartner and IT vendors to describe qualities of Big Data[4]. The framework provides a convenient way to describe the features of serverless computing applications (microflows) that are focused on data processing. The enrich pattern increases the Value of the microflow’s input data by performing one or more of the following (see the sketch after this list):

  • improving data Veracity, by verifying or validating the data
  • increasing the Volume of the data by augmenting it with additional, relevant data
  • changing the Variety of the data by transforming or transcoding it
  • accelerating the Velocity of the data by splitting it into chunks and forwarding the chunks, in parallel, to other services
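
As a concrete illustration of the enrich pattern, the following is a minimal sketch, not code from this series: it validates an incoming address (Veracity) and augments it with geocoordinates (Volume and Value). The geocode argument is a placeholder for any third-party lookup service, such as the geolocation API used in Part 3.

// Hypothetical enrich-pattern microflow step. The geocode() function is a
// stand-in for a call to an external geolocation service.
function enrich(address, geocode, callback) {
    if (!address || !address.city || !address.country) {
        return callback(new Error('invalid address'));  // improve Veracity by rejecting bad input
    }
    geocode(address, function (err, coords) {
        if (err) return callback(err);
        address.lat = coords.lat;  // increase Volume by augmenting the input with relevant data
        address.lon = coords.lon;
        callback(null, address);
    });
}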

The persist pattern describes applications that more closely resemble traditional ETL apps than is the case with the other two patterns. When a microflow is based solely on this pattern, the application acts as an adapter or a router, transforming input data arriving at the microflow’s service endpoint into records in one or more external data stores, which can be relational databases, NoSQL databases, or distributed in-memory caches. However, as illustrated by the example in Part 3, applications typically use this pattern in conjunction with the other patterns, processing input data through the enrich or command patterns and then persisting the data to a data store.
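
The following is a minimal sketch of the adapter role played by a persist-pattern microflow, again not code from this series: a JSON document arriving at the microflow’s endpoint is transformed into a row in a relational data store. The node-postgres client, connection string, and table layout are illustrative assumptions.

// Hypothetical persist-pattern microflow step using the node-postgres client.
var pg = require('pg');

function persist(connString, doc, callback) {
    pg.connect(connString, function (err, client, done) {  // obtain a pooled connection
        if (err) return callback(err);
        client.query(
            'INSERT INTO address (address, city, country) VALUES ($1, $2, $3)',
            [doc.address, doc.city, doc.country],
            function (err, result) {
                done();               // return the connection to the pool
                callback(err, result);
            });
    });
}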

References

[1] https://en.wikipedia.org/wiki/Platform_as_a_service
[2] https://en.wikipedia.org/wiki/Microservices
[3] http://microservices.io/patterns/index.html
[4] http://www.ibmbigdatahub.com/infographic/four-vs-big-data

Start with serverless computing on IBM Cloud – Part 3

Until recently, platform as a service (PaaS) clouds offered competing approaches to implementing traditional Extract-Transform-Load (ETL) style workloads in cloud computing environments. Vendors like IBM, AWS, and Google are starting to support serverless computing in their clouds as a way to handle ETL and other stateless, task-oriented applications. Building on the examples from Part 1 and Part 2, which described serverless applications for sending text messages, this post demonstrates how an OpenWhisk action can be used to validate unstructured data, add value to the data using third-party services and APIs, and persist the resulting higher-value data in a database managed by IBM Compose. The stateless, serverless action executed by OpenWhisk is implemented as a Node.js app packaged in a Docker container.

[Diagram: Serverless computing]

Getting started

To run the code below, you will need to sign up for the following services:

  • Docker Hub, to host the Docker image that implements the OpenWhisk action
  • IBM Bluemix, which provides both OpenWhisk and the Cloudant document database
  • IBM Compose, for the managed Postgres database
  • Pitney Bowes, for the geolocation API used to validate addresses

NOTE: before proceeding, configure the following environment variables from your command line. Use the Docker Hub username for the USERNAME variable and the Pitney Bowes application ID for the PBAPPID variable.

export USERNAME=''
export PBAPPID=''

Create a Postgres database in IBM Compose

When signing up for a Compose trial, make sure that you choose Postgres as your managed database.

Once you are done with the Compose sign-up process and your Postgres database deployment is complete, open the Deployments tab of the Compose portal and click on the link for your Postgres instance. You may already have a default database called compose in the deployment. To check that this database exists, click on the sub-tab called Browser and verify that there is a link to a database called compose. If the database does not exist, you can create one using the corresponding button on the right.

Next, open the database by clicking on the compose database link and choose the sub-tab named SQL. At the bottom of the SQL textbox add the following CREATE TABLE statement and click the Run button.

CREATE TABLE "address" ("address" "text", "city" "text", "state" "text", "postalCode" "text", "country" "text", "lat" "float", "lon" "float");

The output at the bottom of the screen should contain a “Command executed successfully” response.

You also need to export the connection string for your database as an environment variable. Open the Deployments tab and the Overview sub-tab, and copy the entire connection string with the credentials included. You can reveal the credentials by clicking on the Show / Change link next to the password.

Insert the full connection string between the single quotes below and execute the command.

export CONNSTRING=''

NOTE: This connection string will be needed at a later step when configuring your OpenWhisk action.

Create a Cloudant document database in IBM Bluemix

Download a CF command line interface for your operating system using the following link

https://github.com/cloudfoundry/cli/releases

and then install it.

From your command line type in

cf login -a api.ng.bluemix.net

to authenticate with IBM Bluemix and then enter your Bluemix email, password, as well as the deployment organization and space as prompted.

To export your selection of the deployment organization and space as environment variables for the future configuration of the OpenWhisk action:

export ORG=`cf target | grep 'Org:' | awk '{print $2}'`
export SPACE=`cf target | grep 'Space:' | awk '{print $2}'`

To create a new Cloudant database, run the following commands from your console

cf create-service cloudantNoSQLDB Shared cloudant-deployment

cf create-service-key cloudant-deployment cloudant-key

cf service-key cloudant-deployment cloudant-key

The first command creates a new Cloudant deployment in your IBM Bluemix account, the second creates a set of credentials for the Cloudant deployment, and the third outputs those credentials as a JSON document similar to the following.

{
 "host": "d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix.cloudant.com",
 "password": "5555ee55555a555555c8d559e248efce2aa9187612443cb8e0f4a2a07e1f4",
 "port": 443,
 "url": "https://"d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix:5555ee55555a555555c8d559e248efce2aa9187612443cb8e0f4a2a07e1f4@d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix.cloudant.com",
 "username": "d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix"
}

You will need to put these Cloudant credentials in environment variables to create a database and populate the database with documents. Insert the values from the returned JSON document in the corresponding environment variables in the code snippet below.

export USER=''
export PASSWORD=''
export HOST=''

After the environment variables are correctly configured you should be able to create a new Cloudant database by executing the following curl command

curl https://$USER:$PASSWORD@$HOST/address_db -X PUT

On successful creation of a database you should get back a JSON response that looks like this:

{"ok":true}

Clone the OpenWhisk action implementation

The OpenWhisk action is implemented as a Node.js application that will be packaged as a Docker image and published to Docker Hub. You can clone the code for the action from GitHub by running the following from your command line

git clone https://github.com/osipov/compose-postgres-openwhisk.git

This will create a compose-postgres-openwhisk folder in your current working directory.

Most of the code behind the action is in the server/service.js file, in the functions listed below. As the function names suggest, once the action is triggered with a JSON object containing address data, the process is to first query the Pitney Bowes geolocation API to validate the address and obtain the latitude and longitude coordinates. Next, the process retrieves a connection to the Compose Postgres database, runs a SQL INSERT statement to put the address along with the coordinates into the database, and returns the connection to the connection pool (see the sketch after the list below).

queryPitneyBowes
connectToCompose
insertIntoCompose
releaseComposeConnection
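
The sketch below shows one way these functions could be chained to implement the flow just described; the function signatures are assumptions for illustration, and the actual code in the repository may differ.

// Hypothetical composition of the four functions listed above.
function run(address, callback) {
    queryPitneyBowes(address, function (err, coords) {             // validate the address and geocode it
        if (err) return callback(err);
        connectToCompose(function (err, client, done) {            // obtain a database connection
            if (err) return callback(err);
            insertIntoCompose(client, address, coords, function (err, result) {  // run the SQL insert
                releaseComposeConnection(done);                    // return the connection to the pool
                callback(err, result);
            });
        });
    });
}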

The code that integrates with the OpenWhisk platform is in the server/app.js file. Once executed, the code starts a server on port 8080 and listens for HTTP POST requests to the server’s /init and /run endpoints. Each of these endpoints delegates to the corresponding method implementation in server/service.js. The init method simply logs its invocation and returns an HTTP 200 status code, as expected by the OpenWhisk platform. The run method executes the process described above to query for geocoordinates and to insert the retrieved data into Compose Postgres.
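
For reference, a minimal Express-based sketch of such an app.js is shown below. It is not the repository’s code; the service.run() signature and the error handling are assumptions, but the /init and /run endpoints on port 8080 are what OpenWhisk expects from a Docker-based action.

// Hypothetical sketch of server/app.js exposing the endpoints OpenWhisk calls.
var express = require('express');
var bodyParser = require('body-parser');
var service = require('./service');

var app = express();
app.use(bodyParser.json());

// OpenWhisk calls /init once when the action container is started.
app.post('/init', function (req, res) {
    console.log('init called');
    res.status(200).send();
});

// OpenWhisk calls /run for every activation; parameters arrive in req.body.value.
app.post('/run', function (req, res) {
    service.run(req.body.value, function (err, result) {
        if (err) return res.status(500).json({ error: String(err) });
        res.status(200).json(result);
    });
});

app.listen(8080);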

Build and package the action implementation in a Docker image

If you don’t have Docker installed, it is available per the instructions provided in the link below. Note that if you are using Windows or OSX, you will want to install Docker Toolbox.

https://docs.docker.com/engine/installation/

Make sure that your Docker Hub account is working correctly by trying to log in using

docker login -u $USERNAME

You will be prompted to enter your Docker Hub password.

Change to the compose-postgres-openwhisk directory and execute the following commands to build the Docker image with the Node.js-based action implementation and to push the image to Docker Hub.

docker build -t $USERNAME/compose .
docker push $USERNAME/compose

 


Use your browser to log in to https://hub.docker.com after the docker push command is done. You should be able to see the compose image in the list of your Docker Hub images.

Create a stateless, Docker-based OpenWhisk action

To get started with OpenWhisk, download and install a command line interface using the instructions from the following link

https://new-console.ng.bluemix.net/openwhisk/cli

Configure OpenWhisk to use the same Bluemix organization and space as your Cloudant instance by executing the following from your command line

wsk property set --namespace $ORG\_$SPACE

If your $ORG and $SPACE environment variables are not set, refer back to the section on creating a Cloudant database.

Next update the list of packages by executing

wsk package refresh

One of the bindings listed in the output should be named Bluemix_cloudant-deployment_cloudant-key

The following commands need to be executed to configure your OpenWhisk instance to run the action whenever a new document is placed in the Cloudant database.

The first command sets up a Docker-based OpenWhisk action called composeInsertAction that is implemented using the $USERNAME/compose image from Docker Hub. The second configures the action with the Compose connection string and the Pitney Bowes application ID. The third creates a trigger that fires whenever a document changes in the address_db Cloudant database, and the fourth creates a rule that invokes composeInsertAction whenever composeTrigger fires.

wsk action create --docker composeInsertAction $USERNAME/compose
wsk action update composeInsertAction --param connString "$CONNSTRING" --param pbAppId "$PBAPPID"
wsk trigger create composeTrigger --feed /$ORG\_$SPACE/Bluemix_cloudant-deployment_cloudant-key/changes --param includeDoc true --param dbname address_db
wsk rule create --enable composeRule composeTrigger composeInsertAction

Test the serverless computing action by creating a document in the Cloudant database

Open a separate console window and execute the following command to monitor the result of running the OpenWhisk action

wsk activation poll

 

In another console, create a document in Cloudant using the following curl command

curl https://$USER:$PASSWORD@$HOST/address_db -X POST -H "Content-Type: application/json" -d '{"address": "1600 Pennsylvania Ave", "city": "Washington", "state": "DC", "postalCode": "20006", "country": "USA"}'

On success, the console running wsk activation poll should show a response similar to the following

[run] 200 status code result
{
  "command": "SELECT",
  "rowCount": 1,
  "oid": null,
  "rows": [
    {
      "address": "1600 Pennsylvania Ave",
      "city": "Washington",
      "state": "DC",
      "postalcode": "20006",
      "country": "USA",
      "lat": 38.8968999990778,
      "lon": -77.0408
    }
  ],
  "fields": [
    {
      "name": "address",
      "tableID": 16415,
      "columnID": 1,
      "dataTypeID": 25,
      "dataTypeSize": -1,
      "dataTypeModifier": -1,
      "format": "text"
    },
    {
      "name": "city",
      "tableID": 16415,
      "columnID": 2,
      "dataTypeID": 25,
      "dataTypeSize": -1,
      "dataTypeModifier": -1,
      "format": "text"
    },
    {
      "name": "state",
      "tableID": 16415,
      "columnID": 3,
      "dataTypeID": 25,
      "dataTypeSize": -1,
      "dataTypeModifier": -1,
      "format": "text"
    },
    {
      "name": "postalcode",
      "tableID": 16415,
      "columnID": 4,
      "dataTypeID": 25,
      "dataTypeSize": -1,
      "dataTypeModifier": -1,
      "format": "text"
    },
    {
      "name": "country",
      "tableID": 16415,
      "columnID": 5,
      "dataTypeID": 25,
      "dataTypeSize": -1,
      "dataTypeModifier": -1,
      "format": "text"
    },
    {
      "name": "lat",
      "tableID": 16415,
      "columnID": 6,
      "dataTypeID": 701,
      "dataTypeSize": 8,
      "dataTypeModifier": -1,
      "format": "text"
    },
    {
      "name": "lon",
      "tableID": 16415,
      "columnID": 7,
      "dataTypeID": 701,
      "dataTypeSize": 8,
      "dataTypeModifier": -1,
      "format": "text"
    }
  ],
  "_parsers": [
    null,
    null,
    null,
    null,
    null,
    null,
    null
  ],
  "rowAsArray": false
}

Start with serverless computing on IBM Cloud – Part 2

This post is Part 2 in a series on serverless computing. The last post described how to build a simple but useful text messaging application written in Python, packaged in a Docker image on Docker Hub, and launched using the OpenWhisk serverless computing framework. The app was implemented to be entirely stateless, which is common in serverless computing but can be limiting for many practical use cases.

For example, applications that send text messages may need to make a record of the text message contents, the date and time when the message was sent, and other useful state information. This post will describe how to extend the application built in Part 1 to persist the text message metadata in Cloudant, a CouchDB-based JSON document database available from IBM Bluemix. Since OpenWhisk integrates with Cloudant, it is possible to set up OpenWhisk to automatically trigger the Docker-based action to send the SMS once the text message contents are in Cloudant. An overview of the process is shown in the following diagram.

[Diagram: Serverless2 – overview of the Cloudant-triggered text messaging flow]

Before you start

Make sure that you have completed the steps in Part 1 of the series and have a working textAction in OpenWhisk that can send text messages using Twilio. You will also need to make sure you are registered for IBM Bluemix. To sign up for a 30-day trial Bluemix account, register here: https://console.ng.bluemix.net/registration/

Next, download a Cloud Foundry command line interface for your operating system using the following link
https://github.com/cloudfoundry/cli/releases
and then install it.

Create a Cloudant deployment in IBM Bluemix

In your console, type in

cf login -a api.ng.bluemix.net

to authenticate with IBM Bluemix and then enter your Bluemix email, password, as well as the deployment organization and space as prompted.

To export your selection of the deployment organization and space as environment variables for configuration of the OpenWhisk action:

export ORG=`cf target | grep 'Org:' | awk '{print $2}'`
export SPACE=`cf target | grep 'Space:' | awk '{print $2}'`

To create a new Cloudant database, run the following commands from your console

cf create-service cloudantNoSQLDB Shared cloudant-deployment

cf create-service-key cloudant-deployment cloudant-key

cf service-key cloudant-deployment cloudant-key

The first command creates a new Cloudant deployment in your IBM Bluemix account, the second creates a set of credentials for the Cloudant deployment, and the third outputs those credentials as a JSON document similar to the following.

{
"host": "d5555abd-d00e-40ef-1da6-1dc1e1111f63-bluemix.cloudant.com",
"password": "5555ee55555a555555c8d559e248efce2aa9187612443cb8e0f4a2a07e1f4",
"port": 443,
"url": "https://"d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix:5555ee55555a555555c8d559e248efce2aa9187612443cb8e0f4a2a07e1f4@d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix.cloudant.com",
"username": "d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix"
}

You will need to put these Cloudant credentials in environment variables to create a database and populate the database with documents. Insert the values from the returned JSON document in the corresponding environment variables in the code snippet below.

export USER=''
export PASSWORD=''
export HOST=''

After the environment variables are correctly configured you should be able to create a new Cloudant database by executing the following curl command

curl https://$USER:$PASSWORD@$HOST/sms -X PUT

On successful creation of a database you should get back a JSON response that looks like this:

{"ok":true}

Integrate Cloudant with OpenWhisk rules and triggers

Configure OpenWhisk to use the same Bluemix organization and space as your Cloudant instance by executing the following from your command line

wsk property set --namespace $ORG\_$SPACE

If your $ORG and $SPACE environment variables are not set, refer back to the section on creating the Cloudant database.

Next update the list of packages by executing

wsk package refresh

One of the bindings listed in the output should be named Bluemix_cloudant-deployment_cloudant-key

Run the following commands to configure OpenWhisk to start the action whenever a new document is placed in the Cloudant sms database.

wsk trigger create textTrigger --feed /$ORG\_$SPACE/Bluemix_cloudant-deployment_cloudant-key/changes --param includeDoc true --param dbname sms

wsk rule create --enable textRule textTrigger textAction

The first command creates a trigger that listens for changes to the Cloudant database. The second command creates a rule indicating that whenever the trigger is activated by a document in Cloudant, the text messaging action (the textAction created in the previous post) needs to be invoked.

Test the OpenWhisk trigger by logging the text message to the Cloudant database

Open a separate console window and execute the following command to monitor the OpenWhisk log

wsk activation poll

In another console, create a document in Cloudant using the following curl command, replacing the to value to specify the phone number and the msg value to specify the text message contents:

curl https://$USER:$PASSWORD@$HOST/sms -X POST -H "Content-Type: application/json" -d "{\"from\": \"$TWILIO_NUMBER\", \"to\": \"867-5309\", \"msg\": \"Jenny I got your number\"}"

On success, the console running wsk activation poll should show a response similar to the following

{
    "status": [
        {
            "success": "true"
        },
        {
            "message_sid": "SM5ecc4ee8c73b4ec29e79c0f1ede5a4c8"
        }
    ]
}

Start with serverless computing on IBM Cloud – Part 1

Once Forbes starts to cover serverless computing[1] you know that it is time to begin paying attention. Today, there are many frameworks that can help you get started with serverless computing, for example OpenWhisk[2], AWS Lambda, and Google Cloud Functions.

This post will help you build a simple but useful serverless computing application with OpenWhisk on IBM Cloud. The app is implemented using Python with Flask and can help you send text messages via a Twilio SMS API[3].

If you would like to skip the introductions and geek out with the code, you can access it from the following GitHub repository: https://github.com/osipov/openwhisk-python-twilio. Otherwise, read on.

So why OpenWhisk? One reason is that it stands out thanks to its elegant, Docker-based architecture, which enables a lot more flexibility than competing frameworks from AWS and Google. For example, AWS Lambda limits developers to Python, Java, or JavaScript[4] for the implementation of serverless computing functions. Google Cloud Functions are JavaScript only and must be packaged as Node.js modules[5].

OpenWhisk’s use of Docker means that any server side programming language supported by Docker can be used for serverless computing. This is particularly important for organizations that target hybrid clouds, environments where legacy, on-premise code needs to be integrated with code running in the cloud. Also, since Docker is a de facto standard for containerizing applications, serverless computing developers don’t need to learn yet another packaging mechanism to build applications on IBM Cloud.

You can use the sample app described in this post to figure out whether OpenWhisk works for you.

Overview

The post will walk you through the steps to clone existing Python code and package it as a Docker image. Once the image is in Docker Hub, you will create an OpenWhisk action[6] that knows how to launch a Docker container with your code. To send a text message, you will use OpenWhisk’s command line interface to pass it the text message contents. In response, OpenWhisk instantiates the Docker container holding the Python app which connects to Twilio’s text messaging service and sends an SMS.

Before you start

The OpenWhisk serverless computing environment is hosted on IBM Bluemix. To sign up for a 30-day trial Bluemix account, register here: https://console.ng.bluemix.net/registration/

This app uses Twilio for text messaging capabilities. To sign up for a Twilio account, visit: https://www.twilio.com/try-twilio. Once you have a Twilio account, make sure you also obtain the account SID and authentication token, and register a phone number with SMS capability.

OpenWhisk uses Docker Hub to execute Docker based actions. You will need a Docker Hub account; to sign up for one use: https://hub.docker.com

NOTE: To make it easier to use the instructions, export your various account settings as environment variables:

  • your Docker Hub username as DOCKER_USER
  • your Twilio Account SID as TWILIO_SID
  • your Twilio Auth Token as TWILIO_TOKEN
  • your Twilio SMS capable phone number as TWILIO_NUMBER

export DOCKER_USER=''
export TWILIO_SID=''
export TWILIO_TOKEN=''
export TWILIO_NUMBER=''

Clone the OpenWhisk action implementation

The OpenWhisk action is implemented as a Python Flask application, which is packaged as a Docker image and published to Docker Hub. You can clone the code for the action from GitHub by running the following from your command line

git clone https://github.com/osipov/openwhisk-python-twilio.git

This will create an openwhisk-python-twilio folder in your current working directory.

All of the code for the OpenWhisk action is in the py/service.py file. There are two functions, init and run, that correspond to the Flask app routes /init and /run. The init function is called on an HTTP POST request and returns an HTTP 200 status code, as expected by the OpenWhisk platform. The run function verifies that an incoming HTTP POST request is a JSON document containing the Twilio configuration parameters and the content of the text message. After configuring a Twilio client and sending the text message, the function returns an HTTP 200 status code and a JSON document with a success status message.

Build and package the action implementation in a Docker image

If you don’t have Docker installed, it is available per the instructions provided in the link below. Note that if you are using Windows or OSX, you will want to install Docker Toolbox from:

https://docs.docker.com/engine/installation/

Make sure that your Docker Hub account is working correctly by trying to log in using

docker login -u $DOCKER_USER

You will be prompted to enter your Docker Hub password.

Run the following commands to build the Docker image with the OpenWhisk action implementation and to push the image to Docker Hub.

cd openwhisk-python-twilio
docker build -t $DOCKER_USER/openwhisk .
docker push $DOCKER_USER/openwhisk

Use your browser to log in to https://hub.docker.com after the docker push command is done. You should be able to see the openwhisk image in the list of your Docker Hub images.

Create a stateless, Docker-based OpenWhisk action

To get started with OpenWhisk, download and install a command line interface using the instructions from the following link:

https://new-console.ng.bluemix.net/openwhisk/cli

The following commands need to be executed to configure your OpenWhisk action instance:

wsk action create --docker textAction $DOCKER_USER/openwhisk
wsk action update textAction --param account_sid "$TWILIO_SID" --param auth_token "$TWILIO_TOKEN"

The first command sets up a Docker-based OpenWhisk action called textAction that is implemented using the $DOCKER_USER/openwhisk image from Docker Hub. The second command configures the textAction with the Twilio account SID and authentication token so that they don’t need to be passed to the action execution environment on every action invocation.

Test the serverless computing action

Open a dedicated console window and execute

wsk activation poll

to monitor the result of running the OpenWhisk action.

In a separate console, execute the following command, replacing the to value to specify the phone number and the msg value to specify the text message contents:

wsk action invoke --blocking --result -p from "$TWILIO_NUMBER" -p to "867-5309" -p msg "Jenny I got your number" textAction

Upon successful action execution your to phone number should receive the text message and you should be able to see an output similar to the following:

{
  "status": [
    {
      "success": "true"
    },
    {
      "message_sid": "SM5ecc4ee8c73b4ec29e79c0f1ede5a4c8"
    }
  ]
}

[1] http://www.forbes.com/sites/janakirammsv/2016/03/22/five-serverless-computing-frameworks-to-watch-out-for
[2] https://developer.ibm.com/openwhisk/what-is-openwhisk/
[3] https://www.twilio.com/sms
[4] http://docs.aws.amazon.com/lambda/latest/dg/deployment-package-v2.html
[5] https://cloud.google.com/functions/writing
[6] https://console.ng.bluemix.net/docs/openwhisk/openwhisk_actions.html

IBM Cloud Strategy for Data-Intensive Enterprises

A few weeks back I had an opportunity to speak about IBM Cloud strategy to an enterprise involved in managing and analyzing big data as a core part of their business. Here’s a video recording of an edited version of that presentation. The blog post that follows the video has the slides, the transcription, and the audio of the presentation.

To understand IBM Cloud strategy and its direction, it helps to understand the history of where it is coming from. This slide shows examples of the business computing systems that IBM created over roughly the past 100 years.

The horizontal scale is time. The vertical scale is about computer intelligence, which is closely related to the qualities of data that these systems processed. For example, tabulating systems were keeping statistics for the US National Census in the early 20th century. At that time, the systems were challenged to deal with volumes on the scale of kilobytes of data.

Now if we fast-forward a few decades, the programmable systems that started appearing in the 1950s were very exciting because most of the computers around you today, from laptops and tablets to smartphones, are examples of programmable systems that use their own memory to store instructions about how to compute.

The programmable approach worked well when the systems had to work with megabytes to gigabytes of structured data. But over the past two decades they have been increasingly challenged by greater volumes of data on the internet, which also comes in greater variety, including natural language text, photos, and videos. For example, to beat the human champions of the TV show Jeopardy!, IBM researchers had to build a new, cognitive system, called IBM Watson, which processed on the order of 10 TB of data (terabytes: a 1 followed by 12 zeros), and that was mostly unstructured data from Wikipedia, dictionaries, and books.

The increasing velocity of data is also a challenge for many enterprises. Think about streams of twitter messages about gift purchases. Around Christmas time these tweets arrive at faster and faster rates.

The rest of this presentation is about the role that cloud computing and IBM Cloud play in addressing these challenges. Regardless of whether your business is operating programmable systems that are common today or cognitive systems that are increasingly popular, IBM Cloud can help your company achieve better business outcomes.

IBM Cloud is both the name of the technology and the name of the cloud computing business unit that IBM has been operating since the start of 2015.

Just a few words about the business: it has had high double-digit growth every quarter since the unit was launched. Today, IBM Cloud is used by over 30,000 customers around the world and by 47 of the Fortune 50 companies. According to some analysts, IBM Cloud is the largest hybrid cloud provider by revenue. Forrester released a study earlier this year saying that IBM technology is the top choice for enterprises that want to build hybrid clouds that need to span multiple vendors, for example, clouds that include services from IBM’s SoftLayer as well as Amazon’s AWS and Microsoft Azure. IDC, another independent analyst firm, ranked IBM as #1 at helping enterprises transition to cloud.

So why do enterprises want to work with IBM Cloud? IBM offers clients an open cloud architecture to run their IT solutions. As illustrated on this slide, IT solutions are built from software, software needs comprehensive and flexible platforms, and platforms need a resilient infrastructure. Enterprises also have a spectrum of legacy systems and applications, so they need a choice of where to tap into this stack, which can be anywhere from the infrastructure layer, where IBM offers a choice of virtual servers, storage, or even bare metal hardware, to the solutions layer, for capabilities like the IBM Watson cognitive system that helps with unstructured data.

Our clients also want a choice of delivery models, whether public, to take advantage of the economies of scale possible when sharing infrastructure with other customers; dedicated, a delivery model where a customer gets their own private cloud instance on an isolated infrastructure on IBM’s premises; or local, a delivery model where an IBM cloud is deployed to a client’s own infrastructure behind the client’s firewall.

With so many choices, IBM Cloud is designed to ensure a consistent experience for clients. In fact, IBM has over 500 developers working on the open standards architecture and the platform to support consistency in delivery and in service management.

IBM also stands out in the marketplace as an open, hybrid cloud with a focus on enterprise grade requirements. For example, Gartner recognized IBM as the leader in cloud disaster recovery services, and notably AWS and Microsoft are absent from the upper right quadrant.

As enterprises migrate to cloud, it is important to ensure that the migration does not disrupt existing backup and disaster recovery plans. This problem is compounded by the fact that the cloud’s ease of use can often result in IT organizations managing thousands, or in some cases tens of thousands, of virtual servers, because it is so simple to create them in the cloud.

IBM offers a scalable endpoint management capability to keep these virtual servers secure with the latest patches and updates, keep them up to date on backups, and keep them ready in case of disaster recovery events. And again, to the point of consistency, IBM can provide endpoint management regardless of whether the virtual server is deployed to IBM SoftLayer, to AWS, or to Microsoft Azure.

Many enterprises have complex audit and compliance requirements. IBM Cloud can offer fine-grained auditability of the underlying hardware used to run enterprise workloads. For example, IBM Cloud can provide the details about the hardware used for an application, including everything from the rack location in an IBM data center down to the serial number and the firmware version.

IBM Cloud is also designed to securely integrate and interoperate with existing, legacy infrastructure and solutions used by enterprises. IBM has over 40 data centers worldwide, close to the existing infrastructure of many of IBM’s customers. IBM partnered with leading telecommunications providers to ensure that it is possible to set up a direct and secure network link from a customer’s premises to an IBM data center.

IBM has a long history of defining and influencing open standards and open source software. For example, in 2000 IBM put the Linux operating system on the map for many enterprises when it committed to invest $1B in Linux support.

Similarly, in the cloud space, IBM is a key player in the most influential open source projects. For instance, IBM is a board member of the OpenStack Foundation, where IBM is working with companies like AT&T and Intel to advance an open standard around cloud infrastructure, including compute, storage, and networking.

In the platform as a service space, IBM is working with Cloud Foundry and Docker projects to advance a set of open standards for cloud application portability.

So why should enterprises care that their cloud investments back an open architecture? Because it is tied to a culture of productivity in IT organizations. For example, Docker can help introduce a culture of agile IT to developers and operations in an enterprise. To illustrate this with an example: at this year’s IBM InterConnect conference, an IBM Cloud customer called Monetize, which does digital transactions in Europe, talked about pushing 78 changes to their application a day. I work with CIOs of large enterprises who tell me that they don’t do 78 changes to their applications a year!

In addition to having a more productive IT workforce, investing in open also means that an enterprise has the freedom of switching to a different cloud vendor. Open standards help ensure portability of workloads in the cloud and thus help avoid vendor lock-in.

All right, so we’ve spent some time on technology; now let’s bring the focus back to business and talk about how IBM Cloud can work with capabilities like mobile and social to help enterprises achieve better business outcomes.

At IBM we use this phrase: the API Economy. It refers to the part of the economy where commercial transactions are executed digitally over interfaces that exist in the cloud. API stands for application programming interface, and APIs are how an enterprise can participate in this economy: by unlocking the value of its data, insights, competencies, and business services and making them accessible to third parties or even within the enterprise. According to some estimates, by 2018 the API Economy will become a $2.2 trillion market.

IBM is uniquely positioned to help our clients achieve better business outcomes in the API Economy. IBM Cloud helps enterprises run both legacy and cloud native applications more effectively to help generate insights from data. This is possible with technologies like Spark, which are available as a service from IBM Cloud. Cognitive technologies like IBM Watson are also available to help with analysis of unstructured data like natural language text, photos, and videos.

IBM Cloud can help enterprises monetize data, and insights about data, by helping to build, run, and manage applications that publish APIs to the enterprise’s customers. Beyond that, an enterprise can also use IBM Cloud to deliver mobile applications to its customers, to provide them with user experiences that drive the use of the APIs. The APIs can also be integrated into social channels, for example to provide personalized experiences for users of social networks like Twitter or Facebook.

Ultimately, what’s so exciting about the API Economy, as illustrated on this slide, is that an enterprise can use APIs to create a virtuous cycle: insights about data can be exposed and monetized via APIs, and the data streams about the mobile and social uses of those APIs can in turn generate better insights about the enterprise’s customers, accelerating the cycle.

Evolving landscape of the (mostly) open source container ecosystem

Ever since the 2015 DockerCon in San Francisco, I have been seeing an increasing number of questions from my cloud customers about various container technologies, how they fit together, and how they can be managed as part of the application development lifecycle. In the chart below I summarized my current understanding of the container ecosystem landscape. It is a pretty busy slide, as the landscape is complex and rapidly evolving, so the rest of the post will provide an overview of the various parts of the stack and point to more information about the individual technologies and products. Although the chart is fairly comprehensive, it doesn’t cover some experimental technologies like Docker on Windows.

[Chart: Container Landscape]

Hypervisors: Type 1, Type 2, and Bare Metal. Most of the cloud providers today offer Docker running on a virtualized infrastructure (Type 1 hypervisors and/or low-level paravirtualization) while developers often use Docker in a Type 2 hypervisor (e.g. VirtualBox). Docker on bare metal Linux is a superior option as it ensures better I/O performance and has the potential for greater container density per physical node.

Operating System. While Docker and other container technologies are available for a variety of OSes, including AIX and Windows, Linux is the de facto standard container operating system. Due to this close affinity between the two technologies, many Linux companies began to offer lightweight distributions designed specifically for running containers: Ubuntu Snappy Core and Red Hat Atomic are just two examples.

Container Extensions. The Docker codebase is evolving rapidly; however, as of version 1.7 it is still missing a number of capabilities needed for enterprise production scenarios. Docker Plugins exist to help support emerging features that weren’t accepted by the Docker developers into the Engine codebase. For example, technologies like Flocker and CRIU provide operating system and Docker extensions that enable live migration and other high availability features for Docker installations.

Container Host Runtime. Until recently there was a concern about fragmentation in the Linux container management space, with Rocket (rkt) appearing as a direct competitor to Docker. Following the announcement of the Open Container Format (OCF) at DockerCon 2015, and support from the CoreOS team for runC, the reference container runtime for OCF, there are fewer reasons to worry. Nonetheless, container runtimes are evolving very rapidly and it is still possible that developers will change their preferred approach for managing containers. For example, Cloud Foundry Garden is tightly integrated into the rest of the Cloud Foundry architecture and shields developers who simply want to run an application from having to execute low-level container management tasks.

Container Networking. Support for inter-container networking had to be completely rewritten for Docker v1.7, and network communication across Docker containers hosted on different network nodes is still experimental in that release. As this particular component of Docker is evolving, other technologies have been used in production to fill the gap. Some Docker users adopted CoreOS Flannel while others rely on Open vSwitch. Concerns about Docker networking are also delaying adoption of container clustering and application scheduling in the Docker ecosystem and are in turn helping to drive users to the Cloud Foundry stack.

Container Clustering. Developers want to be able to create groups of containers for load balancers, web servers, map reduce nodes, and other parallelizable workloads. Technologies like Docker Swarm, CoreOS Fleet, and Hashicorp Serf exist to fill this requirement but offer inconsistent feature sets. Further, it is unclear whether developers prefer to operate on low level container clustering primitives such as the ones available from Swarm, or to write declarative application deployment manifests as enabled by application scheduling technologies like Kubernetes.

Application Scheduling. Cloud Foundry is the most broadly adopted open source platform as a service (PaaS) and relies on manifests, which are declarative specifications for deploying an application, provisioning its dependencies and binding them together. More recently, Kubernetes emerged as a competing manifest format and a technology for scheduling an application across container clusters according to the manifest. This layer of the stack is also highly competitive with other vendors like Mesosphere and Engine Yard trying to establish their own application manifest format for developers.

Imperative and Declarative Configuration Management & Automation. As with traditional (non-container-based) environments, developers need to be able to automate the steps related to the configuration of complex landscapes. Chef and Puppet, both Ruby-based domain-specific languages (DSLs), offer a more programmatic, imperative style for defining deployment configurations. On the other end of the spectrum, tools like Ansible and Salt support declarative formats for configuration management and provide a mechanism for specifying extensions to the format. Dockerfiles and Kubernetes deployment specifications overlap with some of the capabilities of the configuration management tools, so this space is evolving as vendors clarify the use cases for container-based application deployment.

DevOps, Continuous Integration & Delivery. Vendors and independent open source projects are extending traditional build tools with plugins to support Docker and other technologies in the container ecosystem. In contrast to traditional deployment processing, use of containers reduces the dependency of build pipelines on configuration management tools and instead relies more on interfaces to Cloud Foundry, Docker, or Kubernetes for deployment.

Log Aggregation. Docker does not offer a standard approach for aggregation of log messages across a network of containers and there is no clear marketplace leader for addressing this issue. However this is an important component for developers and is broadly supported in Cloud Foundry and other PaaSes.

Distributed Service Discovery and Configuration Store. Clustered application deployments face the problem of coordination across cluster members, and various tools have been adopted by the ecosystem to provide a solution. For example, Docker Swarm deployments often rely on etcd or ZooKeeper, while Consul is common in the HashiCorp ecosystem.

Container Job/Task Scheduling. Operating at a lower level than fully featured application schedulers like Cloud Foundry Diego, Mesosphere Marathon, or Kubernetes, job schedulers like Mesosphere Chronos exist to execute transactional operations across networks. For example, a job might be to execute a shell script in guest containers distributed across a network.

Container Host Management API / CLI. Docker Machine and similar technologies like boot2docker have appeared to help with the problem of installing and configuring the Docker host across various operating systems (including non-Linux ones) and different hosting providers. VMware AppCatalyst serves a similar function for organizations using the VMware hypervisor.

Container Management UI. The traditional CLI-based approach to managing containers can be easily automated but is arguably error-prone in the hands of novice administrators. Tools like Kitematic and Panamax offer a lower learning curve.

Container Image Registry. Sharing and collaborative development of images is one of the top benefits that can be realized by adopting container technologies. Docker and CoreOS registries are the underlying server-side technologies that enable developers to store, share, and describe container images with metadata like 5-star ratings, provenance, and more. Companies like IBM are active players in this space, offering curated image registries and focusing on image maintenance, patching, and upgrades.

Container Image Trust & Verification. Public image registries like Docker Hub have proven popular with developers but carry security risks. Studies have shown that over 30% of containers instantiated from Docker Hub images have security vulnerabilities. Organizations concerned about trusting container images have turned to solutions that provide cryptographic guarantees about image provenance, implemented as cryptographic signing of images, and of changes to them, by a trusted vendor.

Choose IBM’s Docker-based Container Service on Bluemix for your I/O intensive code

A few weeks ago IBM announced the general availability of a new Docker-based container service[1] as part of the Bluemix PaaS[2]. The service had been in beta since the beginning of 2015 and looks very promising, especially for I/O-heavy workloads like databases and analytics. This post will help you create your own container instance running on Bluemix and provide some pointers on how you can evaluate whether the I/O performance of the instances matches your application’s needs. It will also describe the nuances of using Docker Machine[5] if you are running Mac OS X or Windows.

Even if you are not familiar with Docker, chances are you know about virtual machines. When you order a server hosted in a cloud, in most cases you get a virtual machine instance (a guest) running on a physical server (a host) in your cloud provider’s data center. There are many advantages in getting a virtual machine (as opposed to a physical server) from a cloud provider and arguably the top one is quicker delivery. Getting access to a physical server hosted in a data center usually takes hours while you can get a virtual machine in a matter of minutes. However, many workloads like databases and analytics engines are still running on physical servers because virtual machine hypervisors introduce a non-trivial penalty on I/O operations in guest instances.

Enter Linux Containers (LXC) and Docker. Instead of virtualizing the hardware (as is the case with traditional virtual machines), containers virtualize the operating system. Startup time for containers is as good as or better than for virtual machines (think seconds, not minutes), and the I/O overhead all but disappears. In addition, Docker makes it easier to manage both containers and their contents. Containers are not a panacea, and there are situations where virtual machines make more sense, but that’s a topic for another post.

In this post, you can follow along with the examples to learn whether I/O performance of Docker containers on IBM Bluemix matches your application’s needs. Before starting, make sure that you have access to Bluemix[3] and to the Container Service[4].

Getting Started

The following steps describe how to use Docker Machine[5] for access to a Docker installation. You can skip the Docker Machine instructions if you are already running Docker 1.6 or higher on your Linux instance. Otherwise, you'll want Docker Machine, which deploys a VirtualBox guest running Tiny Core Linux on OS X or Windows; you can then ssh into the guest to access the docker CLI.

Start of Docker Machine specific instructions

Install Docker Machine as described here[5]. Use the following command to connect to your Docker Machine Tiny Core Linux guest:

docker-machine ssh dev
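
If this fails because no machine named dev exists yet, you may first need to create it; a minimal sketch assuming the VirtualBox driver (dev matches the name used above):

# Provision a VirtualBox guest running the lightweight Docker host image
docker-machine create --driver virtualbox dev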

Install Python and the ice CLI tool in your guest so you can interface with the IBM Container Service environment. The approach used in these steps to install Python is specific to Tiny Core Linux and shouldn't be used on other distros.

# Install Python using the Tiny Core Linux package manager
tce-load -wi python
# Install pip and setuptools
curl https://bootstrap.pypa.io/get-pip.py -o - | sudo python
curl https://bootstrap.pypa.io/ez_setup.py -o - | sudo python
# Download and unpack the Cloud Foundry (cf) CLI
curl -o cf.tgz -L -O https://cli.run.pivotal.io/stable?release=linux64-binary
sudo tar -zxvf cf.tgz -C /usr/bin/
# Download and install the ice CLI
curl -O https://static-ice.ng.bluemix.net/icecli-3.0.zip
sudo pip install icecli-3.0.zip
# Clean up the downloaded archives
rm -f cf.tgz icecli-3.0.zip setuptools-18.0.1.zip

End of Docker Machine specific instructions

If you are not using Docker Machine, you should follow the standard ice CLI installation instructions[6]

Before proceeding, you will create a public/private key pair which you'll use to connect to your container. The following steps save your private key to id_rsa and your public key to id_rsa.pub, both in your local directory. The second step ensures that the private key file is ignored by Docker and doesn't get included in your image.

sudo ssh-keygen -f id_rsa -t rsa
echo id_rsa > .dockerignore

Make sure that you have access to the IBM Container Service from the Bluemix dashboard as described here[8], then log in using the ice CLI with the command below. Note that when prompted you'll need to enter your Bluemix credentials and specify the logical organization as well as the space where your container will reside.

ice login

The login command should complete with a Login Succeeded message.

Next, you will pull one of the base IBM Docker images and customize it with your own Dockerfile:

docker pull registry.ng.bluemix.net/ibmnode:latest

Once the image has finished downloading, you will create a Dockerfile that customizes the image with your newly created credentials (so you can ssh into it) and with sysbench scripts for performance testing.

Create a Dockerfile using your favorite editor and the following contents:

# Start from the IBM-provided Node.js base image hosted in the Bluemix registry
FROM registry.ng.bluemix.net/ibmnode:latest
MAINTAINER Carl Osipov

# Copy the sysbench wrapper scripts (io.sh and cpu.sh, created below) into the image
ADD *.sh /bench/

# Allow ssh access using the public key generated earlier
EXPOSE 22
COPY id_rsa.pub /root/.ssh/
RUN cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

# Install the sysbench benchmarking tool
RUN apt-get update && apt-get install -y sysbench

Next, create io.sh with the following contents:

#!/bin/sh
# Total size of the test files for the sysbench fileio test, e.g. 100M
SIZE="$1"
# Create the test files, run a 30-second random read/write benchmark, then remove the files
sysbench --test=fileio --file-total-size=$SIZE prepare
sysbench --test=fileio --file-total-size=$SIZE --file-test-mode=rndrw --init-rng=on --max-time=30 --max-requests=0 run
sysbench --test=fileio --file-total-size=$SIZE cleanup

And cpu.sh containing:

#!/bin/sh
# Upper bound for the prime number calculation performed by the sysbench CPU test
PRIME="$1"
sysbench --test=cpu --cpu-max-prime=$PRIME run

Add execute permissions to both scripts:

chmod +x *.sh

At this point your custom Docker image is ready to be built. Run

docker build -t sysbench .

which should finish with a Successfully built message followed by an ID.
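
Optionally, you can confirm that the image is now in your local image cache (sysbench matches the tag used in the build command above):

docker images sysbench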

Push your custom Docker image to Bluemix

When you first accessed the Container Service via the Bluemix dashboard[4], you should have specified a custom namespace to be used when provisioning your containers. Note that in the steps below, `ice namespace get` is evaluated to retrieve this custom namespace, which is unique to your account.

# Tag the local image into your Bluemix registry namespace
docker tag sysbench registry.ng.bluemix.net/`ice namespace get`/sysbench
# Push the tagged image to the Bluemix registry
docker push registry.ng.bluemix.net/`ice namespace get`/sysbench
# Start a container from the pushed image, exposing port 22 for ssh
ice run -p 22 -n sysbench `ice namespace get`/sysbench

After you’ve executed the commands above, your container should be in a BUILD stage which changes to the Running stage after a minute or so. You can verify that by executing

ice ps

Request a public IP address from the Container Service and note its value.

ice ip request

Bind the provided public IP address to your container instance with

ice ip bind <public_ip_address> sysbench

Now you can go ahead and ssh into the container using

sudo ssh -i id_rsa root@<public_ip_address>
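
If ssh rejects the private key because of its file permissions or ownership (possible since the key pair was generated with sudo), the following adjustment may help; id_rsa is the key file created earlier:

# Ensure the current user owns the private key and only the owner can read it
sudo chown $(whoami) id_rsa
chmod 600 id_rsa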

Once there, notice it is running Ubuntu 14.04

lsb_release -a

on a 48-core server with Intel(R) Xeon(R) E5-2690 v3 CPUs running at 2.60GHz

cat /proc/cpuinfo
...
processor : 47
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz

Now you can also test out the I/O performance using

. /bench/io.sh 100M
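
The 100M argument keeps the run short; to reduce the influence of the operating system page cache on the results, you may want to repeat the test with a larger file size (the value below is only an example):

. /bench/io.sh 4G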

Just for comparison, I ordered a SoftLayer virtual machine (running on a Xen hypervisor) and ran the same Docker container and benchmark there. In my experience, the I/O benchmark results were roughly twice as good on the Container Service as on a SoftLayer VM. You can also get a sense of relative CPU performance using

. /bench/cpu.sh 5000

Conclusions

Benchmarks are an artificial way of measuring performance, and better benchmark results don't necessarily mean that your application will run better or faster. However, benchmarks can help you understand whether there is potential for better performance and help you design or redesign your code accordingly.

In the case of the Container Service on IBM Bluemix, I/O benchmark results are significantly better than those from a SoftLayer virtual machine. This shouldn't be surprising, since the Container Service runs on bare metal SoftLayer servers. However, unlike hardware servers, containers can be delivered to you in seconds rather than the hours it takes to provision bare metal. This level of responsiveness and workload flexibility enables Bluemix application designers to create exciting web applications built on novel and dynamic architectures.

References

[1] https://developer.ibm.com/bluemix/2015/06/22/ibm-containers-on-bluemix/
[2] https://console.ng.bluemix.net
[3] https://apps.admin.ibmcloud.com/manage/trial/bluemix.html
[4] https://console.ng.bluemix.net/catalog/?org=5b4b9dfb-8537-48e9-91c9-d95027e2ed77&space=c9e6fd6b-088d-4b2f-b47f-9ee229545c09&containerImages=true
[5] https://docs.docker.com/machine/#installation
[6] https://www.ng.bluemix.net/docs/starters/container_cli_ov_ice.html#container_install
[7] https://www.ibm.com/developerworks/community/blogs/1ba56fe3-efad-432f-a1ab-58ba3910b073/entry/ibm_containers_on_bluemix_quickstart?lang=en&utm_content=buffer894a7&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
[8] https://www.ng.bluemix.net/docs/containers/container_single_ov.html#container_single_ui

Top 10 takeaways the day after #DockerCon

1. Containers are too important to the industry to be left to a single vendor

No question, the single biggest announcement of this DockerCon was the launch of the Open Container Project with support from a broad range of companies including such unlikely bedfellows as Google, IBM, Cisco, Amazon, Microsoft, and of course Docker. https://blog.docker.com/2015/06/open-container-project-foundation

2. Docker gave away the reference implementation for open containers but the specification is still a work in progress

We have the opencontainers.org website and the runc command line interface tool donated by Docker to the project, but the companies involved with the Open Container Project still have to work out the details of the specification. Today, the specifications repository on GitHub returns a 404: https://github.com/opencontainers/specs (EDIT: @philips from CoreOS fixed this as of June 25)

3. Containers are building blocks for hybrid clouds and an enabler for hybrid IT

The success of Docker showed developers the value of containers as a lightweight, reproducible application packaging format that can be used on premises or in the cloud. Now it is up to developers to use containers to deploy applications in hybrid clouds spanning on-premises and cloud data centers. https://developer.ibm.com/bluemix/2015/06/19/deploy-containers-premises-hybrid-clouds-ibm-docker/

4. There is an enterprise market for secured and trusted Docker containers

IBM and other companies have signed up to resell and to provide services around Docker Trusted Registry, a place for developers to store and share Docker images while ensuring secure, role-based access control as well as auditing and event logging. https://www-03.ibm.com/press/us/en/pressrelease/47165.wss

5. Containers instantiated from trusted Docker registries still need vulnerability scanning

IBM announced that its Container Service, a bare metal based hosting environment for Docker containers, will provide automated image security and vulnerability scanning with Vulnerability Advisor to alert developers of security weaknesses before the container is deployed to production. https://developer.ibm.com/bluemix/2015/06/22/containers-the-answer-to-modern-development-demands/

6. Containers are showing up in surprising places including IBM mainframes, Power systems, and the Raspberry Pi

Vendors are working to support containers across operating systems and hardware platforms. At DockerCon, IBM did a live demo of deploying a Docker Compose application to a z13 mainframe http://mainframeinsights.com/ibm-z-systems-leverage-the-power-of-docker-containers/

7. Multiplatform container applications combining Linux and Windows are on the horizon

Not to be outdone, Microsoft showed off how to integrate ASP.NET code running in a container on Linux with Node.js code also running in a container on Windows Server.

8. Live migration for containers is here and it looks impressive

With a Quake 3 deathmatch running in a Docker container, live migration never looked this exciting! https://twitter.com/pvnovarese/status/613499654219526144/photo/1 (link to the video)

9. The Docker ecosystem (Swarm, Machine, libnetwork) is maturing rapidly but is not yet ready for the enterprise

Most IT pros will proceed cautiously after learning how much of the underlying Docker code was rewritten for the Docker Engine 1.7 release and the associated releases of Swarm, Compose, Machine, and especially libnetwork. Multi-host networking is an experimental feature for now. http://blog.docker.com/2015/06/compose-1-3-swarm-0-3-machine-0-3/

10. There is so much going on in the Docker community that we need another DockerCon in just 4 months!

DockerCon Europe is in Barcelona on November 16th and 17th. http://blog.docker.com/2015/06/announcing-dockercon-2015-europe/