Service Item Selection (Updated)

Here is a question I get a lot:

“How can I make a service with multiple service items, but then conditionally drop some of them during the deployment?”

E.g. you have a Service Dialog like this one here;


Screen Shot 2014-10-10 at 14.11.00

Giving the user the option to select QA, Test or Production.

The decision would evaluate to one of three backing Service Items, shown here in this diagram;


Each service item here may be RHEV, OpenStack or VMware. They might all be AWS/EC2 but with different AMIs, etc.!

So back to the use case,

How do I remove the service items I do not want to use from the service bundle? (Nobody mention ZSTOP!)

Let's go through the solution;

1. You need to have a state machine for EACH service item. It can be the same state machine, but either way you need to specify one.

So, when you create each service item (I am not concerned with the contents here: EC2, RHOS, RHEV, VMware, etc.), you need to specify an entry point to a state machine that you have edit rights to, as shown here;

Screen Shot 2014-10-10 at 14.10.36

Screen Shot 2014-10-10 at 14.10.49

Screen Shot 2014-10-10 at 14.10.43

The bundle that holds these three service items can be whatever you like. Here is mine; it just uses its own state machine, as you should do, but nothing special is happening here.

Screen Shot 2014-10-10 at 14.10.24

So, looking at the Automate browser, what do these state machines look like in file view?

Screen Shot 2014-10-10 at 20.13.35

2. The state machine has many steps: Pre1, Pre2, Pre3, Provision, Post Provision, etc. We want to edit Pre1 and put in something for the state machine to process. We are going to call out to a method that decides either to keep this service item or to dump it. So it is exactly like a conditional processor, but done using some simple Ruby code rather than the nice GUIs we see in Control, Reporting and Filtering (that may come to ManageIQ later, hopefully).

The code simply takes the value of an attribute from the Dialog. In my case it is “Dialog_Environment”, and the possible values for this attribute are “Test”, “QA” or “Production”.

Next we take the value from the state machine we are running, so I have added an attribute field called “State_Environment” to the state machine schema. Now I know both what the user selected and which service item I am currently running.

So you should edit the schema of the <YourDOMAIN>/Service/Provisioning/StateMachines/ServiceProvision_Template class to include this new attribute called “State_Environment”. Here is a picture showing the finished edit;

Screen Shot 2014-10-10 at 20.20.31

If the value of Dialog_Environment is NOT the same as State_Environment, then we want to dump this Service Item.
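Stripped of the Automate plumbing, the keep-or-dump decision is just a case-insensitive string comparison. Here is a minimal standalone sketch (the helper name keep_service_item? is mine, not part of CloudForms):

```ruby
# Hypothetical helper: dialog_env is what the user picked in the dialog,
# state_env is the State_Environment attribute on this item's state machine.
def keep_service_item?(dialog_env, state_env)
  dialog_env.to_s.downcase == state_env.to_s.downcase
end

keep_service_item?("QA", "qa")          # => true  : process this item normally
keep_service_item?("Production", "qa")  # => false : dump this item
```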

Here is the method. It is called Stopper, and it is a state on each of the Service Item State Machines; I shall show you this next.

# mark_task_invalid flags this task (and its children) with our own
# 'Invalid' marker, which update_serviceprovision_status looks for later.
def mark_task_invalid(task)
  task.miq_request_tasks.each do |t|
    2.times { $evm.log("info", "********************************DUMPING TASK #{t}*************************************") }
    mark_task_invalid(t)      # recurse into any child tasks
  end
  task.message = 'Invalid'    # our special marker; not a stock CloudForms status
end

10.times { $evm.log("info", "*********************************************************************") }

stp_task = $evm.root["service_template_provision_task"]
parent_task = $evm.vmdb('miq_request_task', stp_task.get_option(:parent_task_id))
dialog_options = parent_task.get_option(:dialog)

$evm.log("info", "Dialog_Environment #{dialog_options['dialog_environment'].downcase}")
$evm.log("info", "State_Environment #{$evm.root['State_Environment'].downcase}")

if dialog_options['dialog_environment'].downcase != $evm.root['State_Environment'].downcase
  $evm.log("info", "NO MATCH - DUMPING Service from resolution")
  mark_task_invalid(stp_task)
  exit MIQ_STOP
end

$evm.log("info", "MATCH FOUND - Processing Service Normally")

10.times { $evm.log("info", "*********************************************************************") }

Download the Stopper method here Stopper.rb

So, the Stopper method needs to be placed somewhere; I instructed you to do this in Pre1 of EACH service item state machine. Here is an example of the QA state machine; this is the entry point for the service item QA. I have used the ON_ENTRY state, but you must leave the other states intact; you will see why in point 3, coming next.

Screen Shot 2014-10-10 at 20.16.30

3. Now we have a small issue to deal with. Notice in the previous step that the code exits with MIQ_STOP. This is great on one hand, because it stops this state machine from processing any further, but it breaks things on the other, as ae_result is populated with “error”; when the bundle starts to look at its children for status it sees “error” and barfs out the entire Service Bundle. Not good. So we have to fake the ae_result back to “ok” once we know that the reason for being “error” is because we want it to be, and not because it is a genuine error. Make sense?

So we have an OOTB method called “update_serviceprovision_status”; it is the job of this method to watch the service deployments and bump the return status around depending on its value. In here we simply add a check:

Does the item we are looking at have a status value of “Invalid”? This is unique and not a stock CloudForms status; it has been set specially by us for this use case. If it is “Invalid”, then we know that this service needs its ae_result forced to “ok” and to exit MIQ_OK, making everyone happy. The service bundle thinks it has provisioned the service when it actually did not, and moves on to the next service item in the bundle.
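The status translation boils down to one tiny rule. This is just an illustrative sketch of that logic (effective_result is a made-up name), not the OOTB method itself:

```ruby
# Our deliberate MIQ_STOP leaves the task in "error", but carrying the
# 'Invalid' marker that Stopper set. Anything else in "error" is a
# genuine failure and must stay that way.
def effective_result(message, ae_result)
  message == 'Invalid' ? 'ok' : ae_result
end

effective_result('Invalid', 'error')    # => 'ok'    (our deliberate stop)
effective_result('Provisioned', 'ok')   # => 'ok'
effective_result('Timed out', 'error')  # => 'error' (a genuine failure)
```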

Here is the NEW update_serviceprovision_status method that you need to create in your domain. I would simply copy the one from;


Then either copy this code in (just place it near the top, before the processing of the objects);

# Bypass errors for Invalid instances
if prov.message == 'Invalid'
  $evm.log("info", "Skipping Invalid Services")
  $evm.root['ae_result'] = 'ok'
  message = 'Service Provisioned Successfully'
  exit MIQ_OK
end

Or download the new one from here: update_serviceprovision_status.rb

I hope this all works for you; it is a great use case and one we talk about all the time.

Obviously there is more than one way to do this, but most other routes either fail the bundle (bad) or require generic service items and loads more code. This one is re-usable, could be productised, and is easily repeatable without a lot of effort. With the new domains in 3.1 you can have this in your toolbox.

I will raise a discussion on to share this, and get input on having the GUI conditional processor available in the Service Item designer phase.

Shell-Shock – Bash Code Injection Vulnerability via Specially Crafted Environment Variables (CVE-2014-6271, CVE-2014-7169)

UPDATED: The video version of this blog can be found here…


So this Shell-Shock stuff is hitting the press quite a bit!

Fancy finding out really quickly if your Red Hat Enterprise Linux 6.5 systems are patched correctly, even if they are turned off right now? Wow, that is clever; not even the virtual infrastructure players can do that… I know, it's cool. Here it is…

Using Cloudforms (or ManageIQ, for FREE!), download this policy and import it into Control, then assign the policy to your targets. The policy will only check Linux systems, though it could do with a makeover to check only RHEL 6.5 systems too.


Download and import the following policy profile -

To note: the policy is valid only for the fix packages as defined in the article for RHEL 6.5 systems. Feel free to modify the policy to fit your needs and share with the community at 

1. Ensure that your VM has a recent Smart State scan completed successfully. You can check by clicking on the Configuration/Packages link as follows;

Screen Shot 2014-09-28 at 20.13.12

Search the list of packages for the “bash” package. Select the package and you will be presented with something like the following;

Screen Shot 2014-09-28 at 19.59.53

OK, so we have confirmed we have package detail about bash in the VMDB for this virtual machine.

2. Assign the policy. You can assign the policy anywhere you like that has coverage of the test virtual machine.

Once assigned, simply click on a VM you wish to check and select the menus “Policy” and “Check Compliance”.

Screen Shot 2014-09-28 at 20.09.52

You will have noticed that your compliance status is probably as follows;

Screen Shot 2014-09-28 at 19.51.17




Once the compliance check is complete, the compliance area of the screen will report how old the current report is.

Screen Shot 2014-09-28 at 20.18.54

Next, click on the status of the compliance to drill further into the detail;

Screen Shot 2014-09-28 at 19.57.16

As you can see, the policy has failed the compliance check.

Now we want to remediate the issue, and re-run the compliance check.

Screen Shot 2014-09-28 at 20.21.08

So I run a “yum update bash” and, as you can see, “4.1.2-15.el6_5.2” has been applied to my system. Let's have Cloudforms check against this now.

So, first run a Smart State scan against your test virtual machine.

Screen Shot 2014-09-28 at 20.02.57

Once complete, run the compliance check once more on the virtual machine;

Screen Shot 2014-09-28 at 20.08.21


This time the compliance check passes; click on the status and drill further into the detail.

Screen Shot 2014-09-28 at 20.08.30

And as a final check you can go back to the virtual machine and take a look at the package entry for “bash”;

Screen Shot 2014-09-28 at 20.05.39

As you can see, Cloudforms has been updated with the latest RPM data from the “yum update bash” we ran.

So there you go; checking for Shell-Shock using Cloudforms really is that easy.



Creating an OSE Service BluePrint and Ordering it

This video shows how to use the OpenShift for Cloudforms materials to create a blueprint that can deploy a multi-machine setup. I demonstrate creating a catalog, two catalog items, a bundle, and assigning the OpenShift policy as we desire. Finally we order the service and see it instantiate fully as a working multi-machine OpenShift infrastructure.

Creating an OSE Service BluePrint and Ordering it –

Monitoring the progress of an OSE Service – Part 2

In the previous post I showed the consumer use case of going into Cloudforms and requesting a service for deployment, namely a service that deploys OpenShift Enterprise.

The following link is a video that shows how you can monitor the installation. The state machine that deploys OpenShift from Cloudforms will automatically send the consumer emails on the progress of the installation, as follows;

Email 1 – Verify that the Virtual Machines/Instances deployed in the service are capable of taking OpenShift Enterprise

Email 2 – Confirms that the workflow for the OpenShift installer has been written and that the install has been spawned.

Email 3 – Confirmation of the finished state of the installation; if successful, the email contains links to the various components.

Along with the emails, you can also monitor the installation using the tags created by the state machine on the service; these tags allow you to easily locate the unique log file for the deployment of that service.

Monitoring the progress of OSE Deployment – Part 2 –

YouTube – CLOUDFORMSNOW channel.

I think most who follow this blog know that I have started posting some video content on Cloudforms, as that is quite an easy way to digest it or see it for real.

Here are a couple of links to videos on my CLOUDFORMSNOW YouTube channel:

Service-Now - Demonstrating Service-Now deploying new instances to Amazon EC2 via Cloudforms orchestration and provisioning.

Docker, Kubernetes and Cloudforms – This topic is on fire; everyone is talking about Docker and Kubernetes. You have VMware claiming a fire-and-forget demo, and even Microsoft have shown Azure with containers. So it is Cloudforms' turn to show its capabilities here, and as you will see we have a nice story, especially with our lifecycle abilities to manage after the drop-and-run.

SOAP – SAVON v2 Syntax



So, for the past few years Savon v1.1.0 has been the default GEM in the appliance. Here's a scoop! Future releases, and the upstream builds, have Savon v2.

What does this mean? Well, v2 has a slightly different syntax for its connections and function calls. Here is a v2 SOAP example;

require 'savon'

# Savon v2 style: configuration happens inside the client block
client = Savon.client do |globals|
  globals.wsdl "https://x.x.x.x/vmdbws/wsdl"
  globals.basic_auth ["admin", "smartvm"]
  globals.ssl_verify_mode :none
  globals.ssl_version :TLSv1
end

body_hash = {}
body_hash['version']    = '1.1'
body_hash['uri_parts']  = "namespace=Sample|class=Methods|instance=InspectMe|message=create"
body_hash['parameters'] = ""
body_hash['requester']  = "auto_approve=true"

# v2 takes the operation name as a symbol; create_automation_request is the
# vmdbws operation these body keys belong to
response = client.call(:create_automation_request, message: body_hash)

Notice that the client call is different, and also that the document option is now just wsdl.
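If you are wondering about the uri_parts string passed in the body, it is just a pipe-delimited set of key=value pairs. Here is a tiny hypothetical helper (uri_parts is my own name, not part of the API) to build it:

```ruby
# Hypothetical helper: builds the pipe-delimited uri_parts string that
# the automation request expects. 'klass' because 'class' is a Ruby keyword.
def uri_parts(namespace:, klass:, instance:, message:)
  { 'namespace' => namespace, 'class' => klass,
    'instance'  => instance,  'message' => message }
    .map { |k, v| "#{k}=#{v}" }.join('|')
end

uri_parts(namespace: 'Sample', klass: 'Methods',
          instance: 'InspectMe', message: 'create')
# => "namespace=Sample|class=Methods|instance=InspectMe|message=create"
```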

Another option, if you wish to keep working with older v1.1.0-syntax scripts, is simply to add this to the top of your scripts;

require 'rubygems'
gem 'savon', '= 1.1.0'
require 'savon'

This will allow you to use your script as-is, but forces it to use the v1.1.0 Savon GEM. I have documented this route on the assumption that you are running your script REMOTE to the appliance. This is not a proposed route for modifying existing scripts running in an appliance.

Really, the advice I give here is:

  • If you are running your scripts REMOTE to the appliance then either upgrade your syntax to v2 or force the scripts to use v1 Savon gem.
  • If your script is running LOCAL to the appliance, change your syntax to v2.
  • If you are writing a NEW script, use the v2 syntax (and specify the version in your script like a happy scripter would do!)

Hope this helps.

Savon web site is