Deploying OpenShift Origin with .NET Support

This deployment guide walks you through setting up a complete deployment of OpenShift Origin that includes the OpenShift.NET enhancements. When you have finished, you will have a fully functional OpenShift Origin service of your own.

Please check out the Uhuru support page for additional information and to get access to our community support forum where you can ask questions:

Installing Linux

A CentOS 6.5 VM with a minimal installation is needed. You can find a guide on how to do this here:

Installing deployment dependencies

First you need to set up Extra Packages for Enterprise Linux (EPEL); to do this you can use the following commands:

yum install wget -y
rpm -Uvh epel-release-6*.rpm
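The wget step above needs a download URL that is not shown; assuming the standard EPEL 6 mirror layout (this URL is an assumption; verify the current package name before running), the full sequence would be:

```shell
# Fetch and install the EPEL release package for EL6 (URL is an assumption
# based on the standard EPEL mirror layout; adjust if it has moved).
yum install wget -y
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6*.rpm
```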

Next you need to set up the OpenShift Origin dependencies repository:

cat > /etc/yum.repos.d/openshift-origin-deps.repo <<"EOF"
[openshift-origin-deps]
name=OpenShift Origin Dependencies - EL6
baseurl=<deps_repo_base_url>
enabled=1
gpgcheck=0
EOF

Replace <deps_repo_base_url> with the dependencies repository URL provided by Uhuru.

And finally install the deployment dependencies:

yum install -y ruby puppet ruby193-ruby unzip openssh-clients \
augeas-1.0.0-5.el6_5.1.x86_64 \
bind-9.8.2-0.23.rc1.el6_5.1.x86_64 httpd-tools \
ruby193-ruby-devel.x86_64

Install Origin

Use the tested version of Origin stored in the Uhuru repository to avoid any incompatibilities that may occur with untested newer versions. The repo base can be found here:

To install Origin you can use the one-liner below, substituting the installer script URL provided by Uhuru:

sh <(curl -s <installer_script_url>)

When asked about specifying usernames and passwords for the services being configured:

  • Say yes
  • Set up and write down the MCollective user
  • Set up and write down the MCollective password. These values will be used when installing the Windows Node, as described later in this document.
  • When asked about making changes to the subscription info:
    • Say yes
    • Keep the same subscription type (yum)
    • Change the base repository URL to be the one provided by Uhuru:
    • Keep the same setting for JBoss repository (-)
    • Keep the same setting for Jenkins repository (-)
    • Keep the same setting for the yum operating system repository (-)
    • Keep the same setting for the “Optional” operating system repository (-)
    • Your subscription info should look like this:
      | Setting           | Value                             |
      | type              | yum                               |
      | repos_base        | |
      | jboss_repo_base   | -                                 |
      | jenkins_repo_base |  |
      | os_repo           | -                                 |
      | os_optional_repo  | -                                 |

After the install finishes we need to do some manual steps:

yum install dbus -y # Install message bus 
chkconfig cgconfig on # Enable cgconfig service at startup 
chkconfig cgred on # Enable cgred service at startup 
scl enable ruby193 "gem install --version '> 2.0.2' jquery-rails" # Install jquery-rails gem
scl enable ruby193 "gem install --version '>= 1.2.0' net-ssh-multi" # Install net-ssh-multi gem
scl enable ruby193 "gem install archive-tar-minitar" # Install archive-tar-minitar gem
chkconfig openshift-broker on # Enable openshift-broker service
chkconfig openshift-watchman on # Enable openshift-watchman service

We now need to set the available platforms, telling the broker that we are going to have a Windows node. To do this, edit /etc/openshift/broker.conf and add the following line:


To finish the installation reboot the VM.
For deployment validation you should run:

oo-diagnostics --verbose

Note: There should not be any errors, but you might get some warnings.

Setting up DNS

This section is described in detail in our Windows Node Deployment Documentation.

Setting up Windows

Installing Prerequisites
You can find an extensive guide on how to set up the OpenShift Windows Node prerequisites in our Windows Node Deployment Documentation.

Installing the OpenShift Windows node
The OpenShift Windows node installer can be downloaded from here: just download the latest version.

Run the installer; after you accept the license, it will open a PowerShell prompt. To install, you can use this one-liner:

.\install.ps1 -publicHostname <your_public_hostname> -brokerHost <broker_hostname> -cloudDomain <cloud_domain> -sqlServerSAPassword <sql_srvr_password> -mcollectivePskPlugin unset -mcollectiveActivemqUser <mcollective_user> -mcollectiveActivemqPassword <mcollective_password>

The <mcollective_user> and <mcollective_password> are the values set up in the Install Origin section of this document.

More information about the OpenShift Windows Node installer can be found in our Windows Node Deployment Documentation.

Importing Cartridges

Before we run our first app we need to import the cartridges from both the Windows and Linux nodes. To do so, run the following command on the Linux VM:

oo-admin-ctl-cartridge -c import-node --activate --force
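To verify the import you can list the registered cartridges with the same tool (assuming the list action is available in your Origin build); both Linux and Windows cartridges should appear:

```shell
# List all cartridges known to the broker; the Windows cartridges
# (e.g. uhuru-dotnet) should show up alongside the Linux ones.
oo-admin-ctl-cartridge -c list
```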

Running your first App

Next you should set up the client tools (rhc) on your workstation. The guide on how to install them can be found here.

The default credentials for your new OpenShift PaaS are demo/changeme.

After you set up rhc, running rhc cartridges should show a list of both Linux and Windows cartridges, as shown below:

uhuru-dotnet-4.5 DotNet 4.5 web
jenkins-1 Jenkins Server web
nodejs-0.10 Node.js 0.10 web
nodejs-0.6 Node.js 0.6 web
perl-5.10 Perl 5.10 web
php-5.3 PHP 5.3 web
php-5.4 PHP 5.4 web
python-2.6 Python 2.6 web
python-2.7 Python 2.7 web
python-3.3 Python 3.3 web
ruby-1.8 Ruby 1.8 web
ruby-1.9 Ruby 1.9 web
winsample-1.0 Windows Sample web
diy-0.1 Do-It-Yourself 0.1 web
10gen-mms-agent-0.1 10gen Mongo Monitoring Service Agent addon
cron-1.4 Cron 1.4 addon
jenkins-client-1 Jenkins Client addon
mongodb-2.4 MongoDB 2.4 addon
mssql-2008 MS SQL Server 2008 addon
mssql2012-2012 MS SQL Server 2012 addon
mssql-2012 MS SQL Server 2012 addon
mysql-5.1 MySQL 5.1 addon
mysql-5.5 MySQL 5.5 addon
phpmyadmin-4 phpMyAdmin 4.0 addon
postgresql-8.4 PostgreSQL 8.4 addon
postgresql-9.2 PostgreSQL 9.2 addon
haproxy-1.4 Web Load Balancer addon

All OpenShift applications that contain a Windows cartridge must be configured as scalable. When you use rhc to create a Windows application, make sure to specify the -s flag. Example:

rhc create-app myapp dotnet -s
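Building on that, a typical workflow for a .NET application with a database backend might look like this (myapp is just an example name; the cartridge names are taken from the list above):

```shell
# Create a scalable application using the Windows .NET cartridge
rhc create-app myapp uhuru-dotnet-4.5 -s

# Add the SQL Server 2012 add-on cartridge to the application
rhc cartridge add mssql-2012 -a myapp
```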

The end

Thanks for reading, and let us know if you run into trouble. You can find us on Google Groups and on our freenode IRC channel.
You can use the resources below to learn more about how to deploy Origin and Windows Support:

Dr Nic speaks about the immutability of BOSH

I had the pleasure of interviewing Dr Nic Williams about his experience working with server configuration and provisioning tools like BOSH and Chef. Dr Nic has become an institution in the Cloud Foundry community and is constantly speaking at meetups and conferences on BOSH.

He has authored a bunch of Open Source tools to make working with BOSH simpler. He also offers consulting services to organizations deploying or using Cloud Foundry through his company Stark & Wayne. (Who wouldn’t trust their businesses to Ironman and Batman?)

If anyone can make sense of the trade-offs in working with BOSH, Chef or Puppet, it's Dr Nic.

Uhuru: How did you get started with BOSH in the first place?

Dr Nic: When I was with Engine Yard we ran a business around deploying apps using Chef on Amazon, and I was intensely aware of the problems inherent in that architecture. When BOSH emerged out of the Cloud Foundry world, I liked the approach it took to solving many of the same problems. These are the problems you only get to see when you've observed many production applications, and many hundreds or thousands of servers, over a few years.

For example, creating custom builds of Gentoo Linux and provisioning them on Amazon required mystical rules and was far from simple. BOSH included a shell-based packaging system that seemed much simpler to understand and use.

Uhuru: Why do you think BOSH is so much better than Chef or Puppet?

Dr Nic: I haven’t worked with Puppet as much as with Chef, but BOSH has clear advantages over both because it manages the life cycle of infrastructure. That is, provisioning and de-provisioning virtual servers, machine images, disks, and binding to networking. Conversely, Chef and Puppet presuppose that you somehow get your machines provisioned before you start configuring them. This is a huge gap in process and functionality. Everyone has to provision and maintain servers, disks, and machine images. If they are using Chef or Puppet only, then it means they have to make up their own orchestration. Or do it manually. And manually means there is room for human error and lack of reproducibility.

BOSH can then be compared to Chef and Puppet for configuration management. I love the simplicity of BOSH. It only does a few things.

  • Start things
  • Stop things
  • Attach persistent disk
  • Upgrade

If you keep it simple, you can create a simple tool.

BOSH has a very simple concept of packages. If things are in a folder, they go in the package. If something is outside the folder it isn’t in the package. There is a very thin contract between BOSH and the operating system. You can avoid unpredictable or inconsistent deployment tools like apt-get update and apt-get install.

It is remarkably simple to change the type of machine being used by BOSH. For example, moving from an Amazon M1 small instance to an M1 medium instance couldn’t be easier. BOSH orchestrates it during “bosh deploy”. Change the size of a persistent disk attached to a server and BOSH will provision and mount the new disk, copy the content, throw away the old disk and re-mount the resized volume to the original mount point.

Simple is good. Simple allows us to do powerful things.

By contrast, Chef and Puppet rely on running scripts on existing machines. It is hard to guarantee that the end result will always be the same. “apt-get update” will do different things over time. The recipes and cookbooks in Chef quickly become crazy complex. And of course, you have to find some other means to setup those machines in the first place when you are using Chef and Puppet.

Uhuru: Why is the consistency with BOSH machine provisioning so important?

Dr Nic: If you can’t trust how a machine is provisioned and configured then you can’t be sure it will behave as you expect. Chef is very poor at offering consistency guarantees from one machine to the next. You are never surprised with BOSH. If there is a problem with a machine provisioned by BOSH you just kill and create a new one that complies with the original template. BOSH creates an immutable infrastructure you can depend on. You never have to try and repair something on a given machine.

The minute someone has to SSH into a machine and try fixing something you can no longer rely on the system state. It is far better to just delete failed machines and re-create them from a known template.

Uhuru: Can BOSH be used for more than just deploying Cloud Foundry?

Dr Nic: Cloud Foundry is the killer app for BOSH, but BOSH lends itself nicely to deploying any app or service that is either ephemeral or requires a resizable persistent disk backend.

Uhuru: Are there any limitations to BOSH? Some people complain about the complexity of creating YML manifests.

Dr Nic: I would like to know what the people complaining about BOSH manifests are comparing it to. There isn’t anything completely simple in this industry. If you don’t specify how you want something provisioned and what the defaults should be then you will have no predictability about the final state of the machines you are deploying.

There are also tools you can use to make it easier to create BOSH manifests. I have created some Open Source BOSH manifest tools myself.

Uhuru: Does PivotalCF make BOSH complexity easier to deal with?

Dr Nic: Yes, PivotalCF makes it very easy to deploy Cloud Foundry using BOSH. However, it has limitations. PivotalCF only works with VMware vCenter and it isn’t Open Source.

Uhuru: Are you worried that commercial products using BOSH like PivotalCF could hurt BOSH consultancies like Stark & Wayne?

Dr Nic: I am not concerned about losing business at all. Right now a lot of Stark & Wayne business is involved with deployment and configuration of Cloud Foundry with BOSH but this is only a short-term need. The BOSH automation tools and knowledge will improve and doing simple Cloud Foundry deployments will no longer be something people need help with.

However, the real work begins once people get Cloud Foundry deployed. We will find lots of business helping companies architect and build cloud compatible applications optimized for Cloud Foundry.

Uhuru: What things would you like to see improved with BOSH?

Dr Nic: I would like to see improvements to make it easier to deploy services that require a consistent state. Right now it isn’t very easy to deploy databases like MySQL or PostgreSQL in a way that preserves the data even when the machine instance is destroyed. There is nothing that sits between your app and the data services to give a high degree of persistence. You get some of this resilience when you use Cloud Foundry in conjunction with BOSH, but it would be nice to have some of this in BOSH itself.

Also, it would be handy to have a smaller and less complex version of Cloud Foundry that could be used just for deploying a single app upon BOSH. There are some interesting things happening in this regard by community members working on the Mesos project.

OpenShift extensions for Visual Studio

It’s now possible to deploy an application to an OpenShift PaaS from Microsoft Visual Studio. Check out this video that shows how the new Uhuru extensions for integrating OpenShift with Visual Studio work.

The video also demonstrates how to use the OpenShift web console and command line for managing .NET applications.

Just check out our Open Source repositories if you want to try this at home:

Windows Isolation


Sandboxing Windows Applications

The lack of secure sandboxing mechanisms to isolate processes on Windows Server has made it a poor choice for multi-tenant hosting situations like OpenShift or Cloud Foundry. Uhuru Software has created the Windows Isolation engineering project to allow Windows Server to run untrusted software from 3rd parties without fear that harm could be done to other applications or services on the server.

App isolation is critical in multi-tenant situations to prevent server failures caused by either malicious behavior or simple mistakes. Not every service outage occurs because of nefarious attacks; sometimes all it takes to bring everything crashing down is an app trying to seize resources that are being used by something else. Application isolation can protect us from ourselves.

Windows Isolation goes a long way toward closing the gap between Windows Server and Linux for secure multi-tenant hosting, but it isn’t perfect, as we discuss later in this article. Uhuru is eager to work with the community to make Windows Isolation even better.

In creating Windows Isolation we apply general requirements for any sandboxing solution in a cloud environment:

  • Integrity of the system
    A sandboxed process should not be able to modify system DLLs or change any system wide configurations like the network IP.
  • Mandatory privacy between the containers
    A process from one container should not be able to grant access to any of its resources (e.g. files, pipes) to any process from another container.
  • Resource quotas
    The processes from a container should have restricted access to memory, CPU and disk space and share all those resources in a fair manner.
  • Preventing denial of service to the system
    Processes from a container should not be able to crash the box or other processes by abusing specific system calls or Windows APIs.

These requirements apply equally to both client-side graphical interface applications and back-end server-side apps (e.g. web services, build processes, databases). However, for now we have focused our efforts on isolating and testing server-side applications and processes. Isolating server-side applications has proven to be a challenge in its own right. Legacy applications like Microsoft SQL Server are particularly difficult to isolate due to their design, which requires administrative privileges.

Windows OS features for Sandboxing

The Windows Job Object is one of the key tools we have used to create Windows Isolation. Job Objects provide a way to manage multiple processes as a unit and allow restrictions on things like the total number of processes, virtual private memory and CPU throttling. Any process spawned from a process inside a Job Object will also be included in the parent process’ Job Object. This can be a powerful mechanism for tracking the number of active processes and for resource accounting, if and only if there is no way to break away from the Job Object.

Discretionary Access Control Lists (DACLs) also provide some level of privacy and integrity for most Windows Server resources: files, directories, semaphores, threads, pipes, shared memory, etc. DACLs give more granular control than Linux file permissions.

We use the Windows Integrity Mechanism to add system integrity with mandatory policies. It is a limited form of Mandatory Access Control (MAC), nowhere near SELinux or AppArmor, and it is useless for isolating containers from each other.

File system filters and registry filters can be used to implement file system namespacing (something similar to chroot) and registry namespacing, respectively. To create a namespacing engine for files and registries, kernel-level drivers have to be developed.

The Windows Filtering Platform (WFP) is very useful for filtering any TCP/IP network operation. It is designed to be a general-purpose framework for implementing firewall solutions on Windows.

ASP.NET Partial Trust comes in handy as a mechanism to enforce constraints on .NET apps. It is useful, but not sufficient to provide true sandboxing for ASP.NET apps.

Windows Limitations

Although Uhuru has been able to greatly improve the application isolation capabilities of Windows Server, there are some areas that still need work.

None of the Windows Server security features we have explored allow us to restrict a contained process from giving access to processes in other containers. DACLs don’t provide this type of granular restriction, and neither does the Windows Integrity Mechanism. Consequently, it is possible for a process to set an empty ACL on a shared memory object, which processes in other containers can then read and write. So far, we know of nothing that can stop this.

Windows Server has no feature like Linux’s namespace isolation to provide a mechanism to hide disk drive letters, network interfaces or other running processes. Namespace isolation is imperative on Windows Server to allow Docker-type functionality. Even though file and registry namespace isolation is not built in, file system and registry filters could be used to create a driver that provides namespace functionality.

We have also been unable to find a way to restrict access to specific Windows API calls from sandboxed processes. We have investigated using API hooks at the kernel level to provide a mechanism for restricting API access but the Kernel Patch Protection feature of Windows Server makes hacking the kernel unworkable. Only GUI system calls may be restricted by setting the DisallowWin32kSystemCalls mitigation policy for a process.

Use of a command-line console application, or a call to the AllocConsole Win32 API, spawns a conhost.exe process that is not contained in the Job Object and provides a way for the isolated process to run code outside the Job Object. We have informed Microsoft about this scenario and hope that they will provide a solution.

There are so many old features in the aging Windows memory manager that it is difficult to watch them all for quota enforcement and accounting. Just one of the issues we struggle with is preventing applications from allocating shared memory. We haven’t found a way to track the shared memory that applications allocate. We have had to resort to looking at total system commits and calculating a rough estimate of allocated shared memory. This leaves open the possibility of memory leaks that bring down a server by starving the system of memory.

Other Sandboxing Solutions and Mechanisms

Chrome has a pretty good sandbox for the purpose of isolating the renderer. Its strength lies in assigning a fully restricted access token to isolated processes, which grants no access to securable objects. Unfortunately, this strategy is too restrictive for Windows Isolation because we need to allow third-party code to run that may need to access various system resources (e.g. system DLLs).

The new Windows Store (Metro-style) apps use some new undocumented AppContainer features implemented in the kernel. Even though it is new and looks promising, the isolation it provides is too limited. Most of the server-side applications we need to enable with Windows Isolation need greater access to the system. Windows Store app security also relies on an app certification process before apps are published to the store. The only AppContainer feature that may be useful for our purposes is the BaseNamedObjects namespacing, which prevents object-name-squatting attacks. Parts of the undocumented AppContainer security APIs may prove useful in addressing compatibility issues for many existing applications.

The Sandboxie software provides sandboxing for interactive processes like web browsers or text editors. Its main purpose is to provide integrity to the system, and it can even sandbox apps that require administrator or root privileges using a copy-on-write mechanism. After Microsoft released Vista and Kernel Patch Protection, it took years for Sandboxie to support those features on 64-bit versions of Windows. This approach is not ideal for us because it is very intrusive and hard to maintain.

The IF-Warden sandboxing containers created as part of the Cloud Foundry project have very limited functionality in the Windows version. The original Linux Warden is similar to Docker and uses the same Linux sandboxing functionality: cgroups and namespace isolation. The Linux version of Warden is managed by a protobuf API instead of a REST API. Unfortunately, the IF-Warden implementation for .NET provides almost no sandboxing at all, doing little more than creating a local user and starting processes with the .NET Process class. A simple fork bomb would create a denial of service.

Virtuozzo Containers, by Parallels, is an operating-system-level virtualization solution. It could be useful, but the latest stable release only supports the older Windows Server 2008 R2.

Overview of mechanisms we use to isolate processes

Windows Isolation creates a limited local user with a random password as the base of each application container. This user provides the processes with a profile and its own registry hive.
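As an illustration only (the user name and password here are hypothetical, and the exact group membership depends on your policy), creating such a limited local user from an elevated Windows prompt might look like:

```shell
# Create a dedicated, limited local user for one container
# (hypothetical name/password; Windows Isolation generates these randomly).
net user container_001 "S0me-R4nd0m-Pw1" /add /expires:never

# Ensure the user has no administrative rights; membership in the
# default Users group is enough to load a profile and registry hive.
net localgroup Administrators container_001 /delete
```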

Each process in Windows Isolation is started in suspended mode and then added to a Job Object. Job Objects are used to account for and cap CPU utilization, and they also provide a way to terminate all running processes in a single transaction.

Due to the conhost and Job Object limitations, we also plan to use a monitoring process that periodically checks for processes outside the Job Object and adds them back. The monitoring process will also enforce shared memory usage.

Firewall rules will be applied per container user to prevent access to unauthorized IPs and ports, and will also prevent the processes from receiving incoming connections on unauthorized ports.

NTFS quotas are used to enforce disk quotas per user if all containers run on the same file system. The system enforces the quota per user on the whole file system. System Virtual Address Space quotas are also set in the registry for the user to enforce the paged pool and non-paged pool limits.
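For example (the drive letter, limits and user name are illustrative), per-user NTFS quotas can be set with the built-in fsutil tool:

```shell
# Enable quota tracking and enforcement on the volume that hosts the containers
fsutil quota track C:
fsutil quota enforce C:

# Give the container user a 1 GB hard limit with a warning threshold at 900 MB
fsutil quota modify C: 943718400 1073741824 container_001
```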


The Windows Isolation project goes a long way to close the gap Windows Server has with Linux in offering secure app sandboxing in multi-tenant environments. However, limitations of the Windows Server operating system make it difficult to provide complete parity with Linux isolation functionality.

We hope that Microsoft will be able to implement new technologies like PicoProcesses from their very own Drawbridge research that will allow us to make Windows Isolation even better.



Practical Windows Sandboxing

Sandboxes for Exploit Mitigation Slides.

Escaping from Microsoft’s Protected Mode Internet Explorer:

Application Sandboxes: A Pen-Tester’s Perspective:

Extraordinary String Based Attacks:

Windows Internals Book:

Mysteries of Memory Management Revealed, with Mark Russinovich (Shared Memory @ 1h:08m):

Create your own custom .NET buildpack

Buildpacks offer a lot of flexibility in Cloud Foundry 2. The standard .NET buildpack Uhuru has put together for our .NET WinDEA lets you publish your apps with the features and capabilities you need.

Better still, you can create your very own customized .NET buildpack that incorporates almost any library or component you desire. Here is an example of a custom .NET buildpack you can use as a guide to create your own.
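As a small illustration of the buildpack contract (the detection logic below is a sketch, not Uhuru's actual buildpack), a buildpack's bin/detect script decides whether the buildpack applies to a pushed application:

```shell
# Create a minimal bin/detect for a hypothetical .NET buildpack.
mkdir -p /tmp/dotnet-buildpack/bin
cat > /tmp/dotnet-buildpack/bin/detect <<'EOF'
#!/bin/sh
# Succeed (exit 0) and print the framework name if the app looks like .NET.
BUILD_DIR="$1"
if ls "$BUILD_DIR"/*.csproj >/dev/null 2>&1 || [ -f "$BUILD_DIR/Web.config" ]; then
  echo ".NET"
  exit 0
fi
exit 1
EOF
chmod +x /tmp/dotnet-buildpack/bin/detect

# Try it against a fake application directory containing a Web.config
mkdir -p /tmp/fake-app && touch /tmp/fake-app/Web.config
/tmp/dotnet-buildpack/bin/detect /tmp/fake-app   # prints ".NET"
```

The bin/compile and bin/release scripts complete the contract: compile stages the app and its dependencies into the build directory, and release prints the metadata used to start the app.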

You can always try out your buildpacks on our public trial Cloud Foundry 2 services.

OpenShift and Cloud Foundry – A Contributor’s Perspective

This post summarizes the insights that we (at Uhuru Software) gathered while working on both the OpenShift and Cloud Foundry projects over the last two years. As you can imagine, in order to be able to provide support for Windows, we had to really dig into the code base of both solutions.

Uhuru has released both implementations to the Open Source community – see here. If you’re interested in piloting either environment with first-rate Windows support feel free to get in touch with us.

It is great to see how well Windows integration with both OpenShift and Cloud Foundry turned out. Windows Server functions as a first-class citizen of the PaaS. You can publish and manage your .NET applications just like their Linux siblings.

I’m assuming you have some high level familiarity with both offerings. If not, then these links should help to provide a nice overview:

Our approach began by reading the code, understanding what it does, then implementing a flavor of that in C# on Windows. So we have a pretty good idea of how these two systems are built. Uhuru is one of the few companies that has extended core services for both OpenShift and Cloud Foundry, and this post is intended to share our hard-won insights on these leading open source PaaS platforms.

I’m not going to provide 100% coverage of all the features, scenarios and use cases supported by both these communities, but rather demonstrate the major differences that were observed between the offerings based on our hands-on experience with the projects beyond marketing and documentation.

Developing and Contributing

For us, extending these projects meant understanding the Ruby (and sometimes Go) code used to create these platforms. With OpenShift, most of the effort was spent on the Windows version of the Node component. We also implemented changes to the Broker, allowing it to interact with more than one platform. For Cloud Foundry we had to implement counterparts to all of the following components: the DEA, the NATS client, service nodes and gateways (brokers), and a BOSH agent.

Code structure

The code structures for these projects are quite different. OpenShift has all components in one repository, whereas Cloud Foundry has one component per repository and heavy use of submodules for BOSH releases. This Stack Exchange discussion sums up why we prefer the one repo model to the other.


Cloud Foundry is split into many components. Each of these components has a particular role in the Cloud Foundry system. The DEA (Droplet Execution Agent), for example, is very specific in its role – it’s the service that runs your app and manages its environment. That probably resulted in a smaller amount of code that we had to write for the Windows DEA compared to the OpenShift Windows Node.

The communication layer used by the two projects is also different. OpenShift uses MCollective on top of ActiveMQ. Cloud Foundry uses NATS. Since on OpenShift the communication mechanism is decoupled from the implementation of the Node, we did not have to write any communication components. We use the same MCollective agent DDL for both Windows and Linux.

In OpenShift there are two major components – the Node and the Broker. The Windows Node we’ve built started out as a complete mirror of the Linux version, implementing the entire API that is exposed via MCollective. It turned out that not all of the APIs had to be covered. We chose to always use a Linux HAProxy as the load balancer – even for Windows apps. This means we did not have to re-implement things like SSL support and handling of OpenShift web proxy cartridges inside the Windows Node.

Extending these two systems from a services perspective was simpler for OpenShift, because we merely had to implement cartridges. In Cloud Foundry you have to implement a service node that becomes a new Cloud Foundry component.

From the perspective of a developer trying to extend the platform, the OpenShift codebase provided much better documentation than Cloud Foundry, but was a bit more difficult to understand at first, because it’s split into fewer components. As engineers, we like smaller components of code when we can get them.

Open Source Software

Contributing to these projects was a completely different experience for us.

We have attempted to provide Windows support to Cloud Foundry (v1 and v2) in the past, but we were not able to do so, probably because Pivotal is not yet ready to accept such a large contribution from the community. Trying to merge pull requests and get feedback was a bumpy road. The new community processes that have been created might help.

On the other hand, Red Hat has well-established processes in place for working with outside contributors. We were amazed at how easy it is to work with the OpenShift community on development. The whole process of getting help and advice, and submitting our code for inclusion in Origin, has been incredibly smooth. If you are interested, you can check out the pull request on GitHub.

Deploying a PaaS

This is a long story and it varies greatly depending on what scenario you have in mind. If you simply want to deploy a medium-sized PaaS – say 50 Nodes or DEAs (I think most of us fit in this bucket at this point) – Red Hat has an edge, because the system administrator is able to get started immediately on deploying OpenShift (either Origin or Enterprise). He or she has plenty of documentation and all the command-line tools to operate the system. Given that most Linux in the enterprise is Red Hat Enterprise Linux (RHEL), administrators will be familiar with these tools and will probably require little, if any, new training in order to get started. The OpenShift deployment strategy is based on Puppet, which is very popular.

On the Cloud Foundry side administrators may have a harder time kicking the tires. The deployment mechanism provided (BOSH) will be unfamiliar, so they will most likely need training. BOSH, however, would appear to reduce the amount of time needed to manage the PaaS in the long term. By default BOSH downloads a lot of additional bits from the web, including Stemcells. Many administrators will not be comfortable running clones of Pivotal’s images and will have to build their own.

About BOSH

BOSH deserves a section on its own – it’s a great tool that scales up and down very well, and is very helpful as your PaaS grows. This is the mechanism that allows Cloud Foundry to be easily updated and maintained without downtime.

But BOSH is a great tool irrespective of Cloud Foundry. In my opinion Pivotal should position it in a way that is closer to their Big Data strategy. From my perspective, BOSH could help standardize the deployment of these complicated systems in the enterprise.

For small and medium deployments, BOSH should be hidden behind something easy to use, with a normal learning curve. This is why we’ve built UCC and URM. These two pieces put together should allow users to easily manage stemcells, software releases and deployments while hiding BOSH completely. Pivotal One also includes something similar, but the tool seems to be targeted for Cloud Foundry and their Big Data services. More importantly, it’s not open source.


Load Balancing Mechanisms

OpenShift and Cloud Foundry have different styles of handling how traffic flows towards applications. In Cloud Foundry, you have the router component that is deployed on one or more VMs. These routers act as dynamic reverse proxies and serve the content of your app to clients.

With OpenShift every node has a public IP address and it integrates with your DNS. The reverse proxy in this case is a special type of cartridge (web proxy cartridges). By default, the web proxy cartridge for OpenShift is HAProxy, but you can write your own.
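At its core, a web proxy cartridge such as HAProxy spreads incoming requests across the gears that run your code. A toy round-robin sketch of that idea follows; the gear addresses are invented, and a real HAProxy configuration is far richer than this:

```python
from itertools import cycle

# Toy illustration of what a web proxy cartridge does: rotate incoming
# requests across the gears running an application's code. The gear
# addresses are invented; a real HAProxy config is far richer.
gears = ["10.0.1.11:8080", "10.0.1.12:8080", "10.0.1.13:8080"]
backend = cycle(gears)

def route(request_path):
    """Pick the next gear for an incoming request (round robin)."""
    return next(backend)

# Four requests wrap around the three gears.
print([route("/req%d" % i) for i in range(4)])
```

Real proxy cartridges also handle health checks and sticky sessions, but the routing decision is essentially this loop.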

Application idling

A very nice feature of OpenShift is that it idles applications when they're not being used. Every node runs an httpd service that handles HTTP traffic, so OpenShift can tell when an application has not received requests for some time. If an idled application receives a request, the service loads the application back into memory and processes the HTTP request.

Cloud Foundry does not have a concept like this. This means that you can have much higher application density on OpenShift than on Cloud Foundry.

While running our trial service for two years we've learned that application idling is very important when it comes to improving application density. The very first version of the Windows DEA that we built (for Cloud Foundry v1) did not use the IIS Hostable Web Core to run .NET applications; we set up websites directly inside IIS. Because of this, we were able to take advantage of IIS app pool recycling and had higher densities on Windows. So the fact that OpenShift already has this is a major plus.
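The idle-detection logic described above can be sketched in a few lines. This is an illustration of the idea, not OpenShift's actual implementation; the threshold and application names are made up:

```python
import time

# Illustrative sketch of idle detection -- not OpenShift's actual code.
# The node's httpd front end records the last request time per app and
# idles any app that has been quiet longer than a threshold.
IDLE_AFTER = 24 * 3600  # hypothetical threshold: one day without traffic

now = time.time()
last_request = {
    "blog": now - 2 * 24 * 3600,  # quiet for two days -> candidate
    "shop": now - 60,             # served a request a minute ago
}

def apps_to_idle(current_time):
    """Return the apps that should be unloaded from memory."""
    return sorted(app for app, seen in last_request.items()
                  if current_time - seen > IDLE_AFTER)

print(apps_to_idle(now))
```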

Buildpacks vs. Cartridges

Buildpacks are more prevalent because of Heroku, but cartridges are wider in scope. What I mean by this is that buildpacks are restricted to encapsulating a web server and a framework, like Apache and PHP. Cartridges, however, can contain custom code, database services, or help you with your application life-cycle and continuous integration (see the Jenkins cartridge). A simple mechanism connects cartridges to one another using environment variables, and it gives you a lot of flexibility.
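That environment-variable wiring can be illustrated with a short sketch. The `OPENSHIFT_MYSQL_DB_*` names follow OpenShift's cartridge naming conventions, but the values here are stand-ins so the example runs anywhere:

```python
import os

# Sketch of the environment-variable wiring between cartridges. The
# OPENSHIFT_MYSQL_DB_* names follow OpenShift's cartridge conventions;
# in a real gear the platform sets them, here we fake them to run.
os.environ["OPENSHIFT_MYSQL_DB_HOST"] = "127.0.0.1"
os.environ["OPENSHIFT_MYSQL_DB_PORT"] = "3306"
os.environ["OPENSHIFT_MYSQL_DB_USERNAME"] = "appuser"

# A web cartridge (or your app code) consumes what the DB cartridge
# published, without either side knowing the other's internals.
db_url = "mysql://{user}@{host}:{port}/app".format(
    user=os.environ["OPENSHIFT_MYSQL_DB_USERNAME"],
    host=os.environ["OPENSHIFT_MYSQL_DB_HOST"],
    port=os.environ["OPENSHIFT_MYSQL_DB_PORT"])
print(db_url)
```

Because the contract is just named environment variables, cartridges can be swapped without changing the consuming code.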

Deploying your app

OpenShift uses git. There's probably nothing easier you can do for developers than letting them deploy their applications via git. Whether you use the command line, a GUI, or an IDE, git is the easiest option. OpenShift also gives you the option to deploy a binary package – for people who compile their applications and don't want their source code in the cloud.

The lifecycle of the application looks quite different between these two platforms, and that difference starts with the mechanisms used to deploy your applications.

In OpenShift you create an application and the system will provision your own little bit of space in the PaaS, called a gear (you can have several). Keep in mind that your custom code hasn’t come into play yet. Once your app is created, you can already browse your app, because OpenShift will put a default website there. Then you can push your code to the app via git. If you want a one-line creation of your app from scratch, you can do it using a git URL – you tell OpenShift to deploy code from the git URL instead of the default template of the cartridge.

On Cloud Foundry everything starts with you pushing your bits. The platform will take your code, analyze it, combine it with a buildpack and then deploy it on a DEA.

I like the idea of separating the creation process from the deployment of code/bits. When a deployment fails on Cloud Foundry, it's more difficult to tell why: was there a failure in provisioning your corner of the cloud? Is there something wrong with the buildpack? Do you have a bug in your application?

So OpenShift gives the user a bit more control and more predictability. Along the same lines, another thing you should be aware of is that every time you push your application in Cloud Foundry, your corner of the cloud will be different; in OpenShift, pushing changes to your app does not recreate your gear.

Deployment process

In OpenShift you create your application via the broker API and then the broker will search for a node that has enough available resources to process the request. Next, a gear is created on that node and the specified web cartridge is added. This is a short description of what happens for a non-scalable Linux application. For auto-scaling apps (more than one instance), you need to mark it as such when creating it. In that case, OpenShift will also deploy a web proxy cartridge next to the web cartridges.
After you’ve created your application, it’s ready to be cloned using git. Then the power is yours – there are many ways to get your code in the platform, including adding this new git server as a remote to your git repo or simply copying and pasting your code to a clone of the app’s git repo.
After you push, git hooks run within the gear. Depending on the cartridge deployed, these hooks run pre or post start scripts, build scripts or lifecycle control (start, stop, restart). Additionally, for auto-scaling apps, your code is synced from a master gear to the others using rsync.
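The broker's node-selection step can be sketched roughly as follows. The capacity numbers and the least-loaded policy are invented for illustration and are not OpenShift's actual algorithm:

```python
# Illustrative sketch of the broker's node-selection step -- the
# capacity numbers and least-loaded policy are invented, not the real
# algorithm OpenShift uses.
nodes = [
    {"name": "node1", "max_gears": 100, "active_gears": 97},
    {"name": "node2", "max_gears": 100, "active_gears": 42},
    {"name": "node3", "max_gears": 100, "active_gears": 100},  # full
]

def pick_node(candidates):
    """Pick a node with free capacity, preferring the least loaded."""
    free = [n for n in candidates if n["active_gears"] < n["max_gears"]]
    if not free:
        raise RuntimeError("no node has capacity for a new gear")
    return min(free, key=lambda n: n["active_gears"])["name"]

print(pick_node(nodes))
```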

On Cloud Foundry, when you push your application the cf command line will bundle your code and send it over to the cloud controller. Before packaging, the controller tries to figure out which of the resources are already available on the cloud, so you do not have to upload them again. For example, if someone has already pushed some large file, the controller knows and the file is not uploaded again. The diff is not as good as git, but it does have the advantage of building this index of common resources across the platform.
The next step is to ‘stage’ your application – this is where Cloud Foundry tries to detect what buildpack to use for your application, then bundle the buildpack and your code/bits and deploy them on a DEA. In my experience, buildpack auto-detection is not that useful. The developer will always know what technology he or she used to write their application. Auto-detection of a buildpack is a superfluous process that is susceptible to naive detection techniques.
An advantage when it comes to scaling in Cloud Foundry is that applications with one instance are treated the same as applications with multiple instances. So all you have to do is tell Cloud Foundry how many you want, you don’t need to flag your application when you create it. Cloud Foundry does not have automatic scaling though.
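The resource-matching step mentioned above works on fingerprints of file contents. Here is a minimal sketch of the idea – not Cloud Foundry's actual API, and the file names and contents are made up:

```python
import hashlib

# Minimal sketch of the resource-matching idea -- not Cloud Foundry's
# actual API. The controller keeps an index of fingerprints of files
# it has already stored; the client uploads only unknown files.
def fingerprint(data):
    return hashlib.sha1(data).hexdigest()

already_stored = {fingerprint(b"contents of jquery-1.7.js")}

def files_to_upload(app_files):
    """Return only the files the platform does not already have."""
    return [name for name, data in app_files.items()
            if fingerprint(data) not in already_stored]

app_files = {
    "app.rb": b"require 'sinatra'",                    # new file
    "public/jquery.js": b"contents of jquery-1.7.js",  # already stored
}
print(files_to_upload(app_files))
```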

OpenShift is more like evolution (git, HAProxy, rsync, et al.) and Cloud Foundry is more like revolution (most of the mechanisms are new). That said, neither platform has everything right, but both are working hard to improve their respective solutions in this area.

Browsing Your Application Files

With OpenShift the story is as simple as it can be: you can SSH into your gears. Cloud Foundry has a special directory service implemented in the DEA to support browsing the filesystem and tailing files. Some developers might appreciate the extra flexibility offered by OpenShift by being able to SSH into the application, if necessary.


Services

As you may have realized already, services in Cloud Foundry and OpenShift are a little bit different. In Cloud Foundry services are components on their own (deployed by BOSH). In OpenShift, they are cartridges.

In Cloud Foundry you connect your application to a service by binding them together. This tells the Cloud Controller that it should create a set of credentials for the service and then make them available to the application. Binding your app to a service will cause it to be staged again. You can bind one application to many services and many services to an application. You don’t get full access to the service (i.e. you are not the admin) and your credentials work only within the confines that Cloud Foundry has setup for you.
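On Cloud Foundry, the credentials created by binding are handed to the application through the `VCAP_SERVICES` environment variable. The sketch below shows the general shape of reading it; the service label and credential fields are made up for illustration:

```python
import json
import os

# Hedged example: VCAP_SERVICES is the JSON environment variable Cloud
# Foundry injects with the bound services' credentials. The service
# label and credential fields below are made up for illustration.
os.environ["VCAP_SERVICES"] = json.dumps({
    "mysql-5.5": [{
        "name": "mydb",
        "credentials": {"hostname": "10.0.0.5", "port": 3306,
                        "username": "uXyz", "password": "secret"},
    }]
})

# The application reads its credentials at startup.
services = json.loads(os.environ["VCAP_SERVICES"])
creds = services["mysql-5.5"][0]["credentials"]
print("%s:%s" % (creds["hostname"], creds["port"]))
```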

On the OpenShift side your application is a grouping of gears. Some of these can be services, and the information published by them (such as credentials) is made available to gears that need it (like the gears that run your code). Credentials are generated once when the cartridge is added and connection information is not published to gears outside of an application. This means you can’t easily connect multiple applications to the same service. However, with OpenShift you are the admin of the service and in complete control as a developer. This allows you to create multiple databases and your application can access all of them.

On both Cloud Foundry and OpenShift, service connection information is passed to buildpacks and cartridges via environment variables. These can make life easier for developers by auto-configuring applications that conform to certain standards.


Connecting to Your Services

Another important feature that many developers find useful is the ability to connect to your services from your local network. In OpenShift this is done via SSH tunnels, a well-known solution – it works very well and it's fast.

In Cloud Foundry talking to your services from the outside is done through an HTTP tunnel (these are called caldecott tunnels). In the future this mechanism might support web sockets. Currently it uses a polling mechanism which slows down data transfers.

The End

These were a few of the points I thought would be useful to write about. It is not a complete analysis – that is a far larger topic, so to that end we’ll try to write more about the following in the future:

  • Security and isolation for Windows
  • Integration with service marketplaces
  • Logging and monitoring of applications
  • Monitoring of the platforms themselves
  • Metering and billing support
  • Keeping the systems up-to-date
  • Capacity planning
  • Auto-scaling of applications

Thanks for reading!

Now playing on an OpenShift PaaS near you: Windows

Uhuru is proud to announce the availability of .NET applications on Red Hat’s OpenShift Platform-as-a-Service (PaaS). We have worked closely with Red Hat engineers to build a comprehensive OpenShift integration for the Microsoft application stack through a community-driven effort in OpenShift Origin.

OpenShift users can now use the same tools they love for managing their Linux apps with .NET. Likewise, Windows users can now take advantage of the powerful OpenShift environment for rapidly deploying, managing, and scaling their applications without sacrificing compatibility or functionality of the .NET platform they know. This initial collaboration with the OpenShift Origin community will enable Microsoft environment integration capabilities for OpenShift Online and Enterprise customers in the future.

The consistent model that OpenShift provides for managing both Linux and Windows systems allows organizations to achieve greater efficiency and agility.

Windows is now a full-fledged member of the open source world of OpenShift. In keeping with the spirit of open source, Uhuru has made all of its OpenShift integration software for Windows available to the community and has made a pull request to have it officially integrated into the OpenShift core.

Uhuru will continue to work closely with Red Hat engineers and the broader OpenShift community to test this software and make it fully integrated into the OpenShift core in the coming weeks.

Read here for a description of the work Uhuru is doing for OpenShift:

You can get the source code and join Uhuru’s developer community. We also have a community and resources for people who are using Uhuru’s software.

You can read about Red Hat’s technology preview for .NET here:

VMware’s walk on the wild side – Cloud Foundry and Open Source

In this episode of the Uhuru podcast Dave McCrory and Patrick Chanezon, both senior staff at VMware, explain how the decision to introduce the Cloud Foundry Platform as a Service (PaaS) as an Open Source project has helped far more than it has hurt. Dave and Patrick talk about how they had their first community bug-fix pull requests within the first day of releasing Cloud Foundry to the Open Source community. After a year in the Open Source realm they feel Cloud Foundry has matured far quicker than it ever could have otherwise.

There is a certain logic here. For a technology like a PaaS aimed squarely at developers, sharing the source code and engaging the very people intended to use the service can lead to a lot of great benefits.

Still, it can be a little frightening to see the community take your life’s labours in directions you may not have intended, or dreamed of, as the example Dave and Patrick cite of Smalltalk support being added to Cloud Foundry illustrates, not to mention .NET support and other sundry enhancements that companies like Uhuru have added.

You can listen to the entire podcast here:

Testimony of a cloud convert

In this episode of the Uhuru podcast show Tim Genge, a senior software architect at RealmSoft, talks about his recent conversion to the cloud. Cloud computing used to be nothing more than marketing double-speak as far as Tim was concerned. Now that he is running his own business and working with customers in the banking sector he “gets” it. Yes, many aspects of the cloud are similar to traditional client/server software models but the way in which people use it to transform their businesses is truly impressive.

Tim has been looking seriously at Azure as a platform for his software, but he needs it to run on private networks as well, since his customers won’t trust everything to the public internet. Tim also has a client-side component to his software, which makes it difficult to find the perfect cloud solution since no one seems to focus on that area.

As an addendum to the interview, I should point out that after the discussion we clarified that Azure does NOT, in fact, have a local version that Tim’s customers can run on their local networks. Tim says he is going to investigate the Uhuru PaaS as an alternative .NET PaaS (which just happens to run in private networks just fine).

The database hitman tries the PaaS

In this episode of the Uhuru podcast show Anil Mahadev, a Principal Software Consultant aka DB2Hitman, shares his experience of working with the Uhuru PaaS. He was amazed at how quickly he was able to deploy a .NET app. Moreover, it was easy for him to configure his application to work with the Microsoft SQL Server service by following a few steps. By contrast, it took Anil a lot longer to get up and running with Azure.

No question about it, Anil feels that the Uhuru PaaS is great for developers and test engineers. The big question is whether IT is ready to go the way of the PaaS.

Click here to read Anil’s blog, and his post about his Uhuru PaaS experience.

You can listen to the entire interview here.

The Virtual Machine perspective

In this episode of the Uhuru podcast show, Shalin Jirawla (a software engineering student) shares his thoughts about cloud computing. He talks about his interest in using the cloud for real-time computing applications. While Platforms as a Service are interesting to Shalin, he feels that complete access to virtual machines is necessary for most of the things he wants to do for now. His experiments with Microsoft’s Azure have left him underwhelmed.

That’s the kind of challenge Uhuru loves! Let’s just see if the Uhuru PaaS can’t help Shalin with his work…

Here is the full recording of the interview.

The next step of the cloud – making the office virtual

In this episode of the Uhuru podcast show Ben Yehooda (the CEO of Levaman) talks about how his company is building cloud services to make the virtual office of the future a reality today. Just as the cloud has made it possible for companies to avoid having to build their own physical IT infrastructure, Ben is working hard to eliminate the need for physical office infrastructure too.

Why can’t employees just work together remotely from wherever they are? Why invest in expensive office space? Ben explains why he thinks tools like Skype and Office 365 are limited in their abilities to re-create the vibrant office environments with serendipitous conversations and sharing of information.

Levaman practices what they preach. Their employees are scattered around the world and they have never owned a physical server.

You can hear Ben talk about his vision of the cloud and virtual offices here:

Software testing moves to the cloud

In this episode of the Uhuru podcast show Feza Pamir, the VP of Marketing at QASymphony, explains how his company uses the cloud to offer their own cloud service to help software test teams manage bugs and the entire quality assurance process. QASymphony currently uses Amazon virtual machines for hosting their service, but supports local deployments on private networks and is looking at other service providers that might have different levels of security that would allow them to meet varying customer needs.

Feza makes it clear that choice in cloud hosting is critical because his customers have such different needs – even private clouds have a role to play since there are still situations where organizations need to keep data on their own network.

While platforms as a service, like Uhuru, are an interesting hosting option to Feza, the technology his company has built involves so many low-level customizations that they need the greater control of configuring the operating system to their needs. If he had to do it all over again, it would be nice to build the QASymphony applications in a way that would make them more compatible with PaaSes.

The health of the cloud

In this episode of the Uhuru podcast Terry Montgomery, a Project Manager at Health Roster, talks about his experience managing IT projects in the cloud. According to Terry, the medical industry may be slow to adopt new technologies, but the value of cloud computing is so compelling that they are moving fast. Amazingly, it is the small operations like physician offices that are leading in cloud adoption, with products like Practice Fusion. Hospitals and large health-care organizations are dipping their toes into cloud computing with virtualization of their private data centers. Compliance issues, and HIPAA, may have created some initial caution with cloud adoption in health-care, but Terry sees that these issues are being addressed and that the industry is ripe for the cloud now.

PaaS Requires a New Set of Skills

In this episode of the Uhuru podcast Imran Ahmad, President of Cloudanum, talks about his experience helping major organizations roll out cloud computing projects. The new Platform as a Service space solves a lot of the cloud management issues Imran’s customers face, but it takes a new skillset and a new way of thinking about the cloud. Many of Imran’s clients are exploring Platforms as a Service (PaaS), but not one has rolled out a production service yet. Too many PaaSes require changes in how apps are written to be immediately applicable to existing applications.

I will just have to get Imran to try the Uhuru PaaS to see how it IS possible for apps to be deployed on the cloud without re-architecting.

You can read more of Imran’s ideas on his blog:

Medical Imaging In the Cloud – Just a Matter of Time

In this episode of the Uhuru podcast Dr Victor Fang, a research scientist with Riverain Technology, talks about how the medical industry is well on its path to moving IT to the cloud. While medical diagnostic images (like x-rays and MRIs) are still stored locally today, most companies are looking at ways of putting them in the cloud. It has just taken a while for cloud services to reach the level of security and privacy needed to comply with government regulations. At conferences Dr Fang has attended recently he has noticed the explosion in businesses offering cloud hosting, which is just one indicator of where the medical industry is moving. Dr Fang also couldn’t help but notice that hospitals where he’s worked are typically behind in adopting stateless web applications, relying primarily on traditional Windows-based client/server solutions that are harder to move to the cloud.

The health care industry can move slowly at times, but it makes huge waves when it does move, and the cloud is just around the corner.

You can follow Dr Fang on his web site.

Pioneer Of The PaaS

In this Uhuru podcast episode Wely Lau, a software architect at NCS Private Ltd, talks about his pioneering work with the Azure Platform as a Service (PaaS). He was using Azure as an early tester even before it was released! There have been growing pains along the way, but today Wely is building complex applications on Azure that save his customers loads of money. He describes one fascinating project where he reduced the time to process thousands of records to prioritize work for a shipping company from over 4 hours to 2 minutes by dynamically spinning up additional application instances as needed. This is the dream of the cloud – pay for what you need, when you need it, rather than having large sunk costs in systems that are only used on a periodic basis. Wely is also closely following other PaaS services such as Amazon’s new Beanstalk, and now the Uhuru AppCloud (of course!).

You can follow Wely on his blog:

When The PaaS Isn’t Enough

In this episode of the Uhuru podcast Andrey Cherkasin tells about his decision to abandon a Platform as a Service and move to a custom hosting solution. Andrey was an early adopter of the Cloud Foundry PaaS technology and was thrilled to be able to set up his own private PaaS. Unfortunately, his need to support numerous Ruby platforms made a PaaS impractical for him. He now relies on a custom cloud solution using Chef scripts and SmartOS.

You can follow Andrey on his blog:

The Long Path To The Cloud

In this episode of the Uhuru podcast Jason Nappi, a software developer with SmartPak, talks about the journey he has been on to move his .NET application to the cloud. Everything is now running in virtualized instances that can be easily replicated to handle additional load, but the monolithic database model used by his software doesn’t lend itself well to most cloud platforms. It would be nice to utilize the scalability possible on Azure, but that just won’t be possible without re-architecting the database. Jason is closely watching technologies like database auto-sharding to find ways to further improve the performance of his apps in the cloud.

You can read more about Jason’s ideas and trials on the cloud at his blog:

Gimme That PaaS Source Code!

In this episode of the Uhuru podcast show Krum Bakalsky, a software engineer, shares his passion for participating in Open Source projects. In particular, VMware’s decision to place Cloud Foundry into Open Source has inspired Krum to become a significant contributor to the community effort. The Cloud Foundry community has been the perfect place for Krum to learn about the guts of building enterprise-class software and to gain real experience as a participant. The fragmented nature of the Open Source world can be frustrating at times, such as when Krum discovered other programmers had already been working on something he was doing, but the collaborative environment more than makes up for these deficiencies.

You can read about Krum’s Open Source exploits and ideas on his blog:

The Stateless App

In this episode of the Uhuru podcast Andy Piper, a Cloud Foundry developer advocate at VMware, talks about how to build great apps for the cloud. In his work with developers Andy has found that stateless apps are the easiest to transition to cloud platforms such as Cloud Foundry. Looking for particular files or settings on a server prevents your apps from benefiting from the automatic deployment, scaling, or redundancy features of new hosted Platforms as a Service. Luckily, many apps are ready to run on a PaaS already. At recent hackathons Andy has seen many of the existing apps deploy on Cloud Foundry without any changes at all.

You can read more about Andy’s advice for building apps in the cloud on his blog:

Cloud Foundry Bridges the Clouds

In this episode of the Uhuru podcast, Dekel Tankel, a Cloud Foundry marketing manager at VMware, explains how one of the things that interested him most about Cloud Foundry was its ability to cross the chasm between clouds, allowing customers to easily move their applications between cloud service providers. To Dekel, Cloud Foundry is all about giving more choices to developers without locking people into a particular service or technology.

Developers can start with the Cloud Foundry micro-cloud as they build their app prototypes, running locally on a laptop, and then deploy to a cloud service when their code has matured. No other cloud service offers this degree of flexibility ranging from the private to public clouds.

At times the enthusiasm for Cloud Foundry is daunting to Dekel (they had 10,000 developers register in the first 72 hours of launch) and the volume of Open Source contributions is hard to keep up with. It would be easier to control and contain the progress of Cloud Foundry if it had been closed and followed a proprietary architecture, but it’s all been worth it.

You can follow the Cloud Foundry blog here:

You can register for your own free Cloud Foundry account here (use the word “cloudtoday” for the promotion code for immediate approval):

Of course, you can also register for a free account on the Uhuru PaaS, which is based on Cloud Foundry and offers extra enhancements like support for .NET applications.

Taking .NET to the cloud is a process

In this episode of the Uhuru podcast show Michael Collier, an architect at Neudesic, shares his experiences in bringing .NET applications to the Azure Platform as a Service (PaaS). Michael explains how migrating existing .NET apps to a PaaS is a process. Managers have to be educated about the cloud so they feel comfortable using it. He has often had to rewrite parts of the .NET apps to make them compatible with the Azure PaaS. The migration process is different for each app, depending on how it was built. .NET apps that follow best practices can run on a PaaS with almost no changes at all (e.g. not storing state on the server, etc). Other apps can be much more difficult to migrate (e.g. using COM, relying on server state, etc).

In the end, the migration to the cloud is well worth it. Developers can focus on what they love doing: writing great apps!

You can find Michael’s blog here:

Shadow IT and the cloud – Déjà vu all over again

In this episode of the Uhuru podcast show Brian Gracely, Director of Solutions at EMC and host of the CloudCast podcast, reminisces on how cloud computing is filling the same role of empowering users as the PC and LANs did back in the ’80s and ’90s. With this phenomenon of “shadow IT,” developers and small departments are able to take advantage of cloud services and completely bypass traditional IT departments. All this empowerment does come with risks. Putting critical data on insecure cloud services with little traceability can come back to haunt you. There is still value in involving IT with cloud projects. Of course, forward-thinking IT departments should show their users that they bring value to cloud initiatives if they don’t want to see themselves shut out of even grass-roots efforts.

Brian also cautions that you shouldn’t expect cloud services to suddenly reduce bottom-line costs. The cloud may offer unprecedented flexibility and start-up velocity, but it isn’t always as cheap as you think.

You can check out Brian’s blog here:

His cloud computing podcast show is here:

The most common mistakes in selling a cloud service

In this episode of the Uhuru podcast show Peter Cohen, the founder of SaaS Marketing Strategy Advisors, explains how selecting the right technologies for a cloud service requires a good dose of business and marketing acumen to succeed. Picking the right technology or platform to build a cloud service is just one piece of the puzzle.

Peter has seen wanna-be internet businesses make all manner of rookie mistakes, such as underspending on marketing (who knew) and selling on cost alone (you’re dead if your only differentiator is price). Unfortunately, there is no silver bullet for success as a cloud service. Good marketing and pricing strategies will vary based on the specific business. Just make sure you are flexible and measure, measure, measure the results of everything you do.

You can find out more of Peter’s SaaS marketing ideas at his web site:

RavenDB – the NoSQL database for the rest of us

In this episode of the Uhuru podcast show Oren Eini, a software developer and avid blogger, talks about the passion for bringing NoSQL database technology to Windows that led him to become a major contributor to the Open Source RavenDB project. While the NoSQL database offerings on Linux are pioneering a lot of great ideas, they are inflexible and extremely difficult to configure and use. RavenDB is different. Almost anyone can get RavenDB up and running quickly, and it doesn’t require an immersion in esoteric configuration settings to tune.

Since RavenDB uses simple REST APIs it can be used by any application, whether it runs on Linux or Windows. RavenDB runs well on cloud infrastructure services like Amazon Web Services and is also available as a dedicated service.

You can read more about Oren’s ideas at his blog (where he writes under a pseudonym).

You can read about RavenDB here:

Not all .NET roads lead to Microsoft

In this episode of the Uhuru podcast Troy Hunt, a software architect and Microsoft MVP for developer security, talks about his great experience using the AppHarbor Platform as a Service to host his .NET applications. He has looked at Microsoft’s Azure PaaS but found the requirements to rewrite his .NET apps to be prohibitive. Moreover, the AppHarbor integration with GitHub offers source control management that Troy hasn’t seen anywhere else.

The .NET PaaS takes away all the pain of having to manage servers. There is no going back to traditional hosting on virtual machines for Troy.

You can find Troy’s blog here:

You can find Troy’s app that tests the security of .NET web sites here:

The clouds are different down under

In this episode of the Uhuru podcast show Pabich Pawel, a Senior Consultant at Readify, talks about his experiences helping companies in Australia bring their web applications to the cloud. Unfortunately, the dearth of local cloud computing services often means that Australian companies have to put up with barely tolerable latencies accessing off-shore clouds. The closest Azure hosting service is in Singapore. It also doesn’t help that many of the services Pabich’s customers want, like Microsoft’s MSMQ, aren’t even available on the cloud (just try and find MSMQ on Azure).

In other respects, migrating to the cloud is no different for Australian IT professionals than for anyone else. Moving to virtualized environments on the cloud is always a good first step, since that way you can easily move existing apps no matter how messy they are. As more and more apps are built to be cloud friendly, the Platform as a Service (PaaS) offerings become more practical. Pabich likes using AppHarbor for his personal projects. The Git integration on AppHarbor is a big plus.

You can find Pabich’s blog here:

Does your desktop belong in the cloud?

In this episode of the Uhuru podcast show Brian Byrne, the founder of MeshIP, talks about the benefits of running desktops in the cloud. For the same reasons that cloud computing makes sense for web servers, there are compelling reasons to run your personal productivity and desktop tools in the cloud as well. Brian’s company offers a hosted VDI solution which allows centralized management and backups. Running your desktop in the cloud might not be cheaper than buying a low cost PC for your desk, but when you add in the reliability improvements and IT management savings, the advantages of desktop hosting in the cloud are substantial.

You can find more about Byrne’s company here:

Secrets of a master blogger

In this episode of the Uhuru podcast show Mukesh Agarwal shares the secrets that have led him to be a master blogger, able to make an actual income from his sites. It turns out that the key to blogging success is simple: deliver what readers want. Mukesh explains how he has been able to constantly hone his blogs by watching which subjects attract the most interest and then focusing on them. He is constantly creating new blogs targeted to the subjects readers are most interested in. Of course, it helps to have useful content and a real interest in the topics you write about. Mukesh’s natural interest in VOIP has made blogging about methods to better use the internet for audio conversations a natural fit.

His web sites on the Uhuru service consistently outrank the thousands of other sites in traffic levels.

Mukesh has found that the Uhuru service offers the kind of advanced features he is looking for in running his sites, but he has certainly felt the pain of some of the Uhuru service outages while it is in beta and is eager to see it go into commercial operation. Even an hour of downtime a month is a big deal for Mukesh’s blogs, which get hundreds of thousands of visitors a day.

You can reach Mukesh by using the contact form at

Voting now open on the Uhuru Lifetime Hosting contest!

Create your own site to nominate.

Voting is now open on nominees for the Uhuru Lifetime Hosting contest. We will provide lifetime free premium hosting to the 5 nominated web sites that receive the most votes from the community.

You can see all the nominees and vote for your favorites here:

Additional web sites hosted on the Uhuru service will be accepted as nominees until April 22nd. This competition will end at 12:00am Pacific Time on April 30th, 2013.

You can nominate new sites here:

Please send all questions about the contest to the Uhuru community e-mail distribution list. NOTE: You must join the list as a member before you can send e-mails to it.

You can subscribe to the Uhuru community e-mail discussion list here:

Here is the e-mail address for the Uhuru community discussion list:

Contest rule details are listed below.

Michael Surkan

Director of Marketing, Uhuru Software, Inc.


The nominated web sites with the most votes from the community will win lifetime free premium hosting on the Uhuru service.


[list_item]The owners of winning web sites will be invited to be guests on the Uhuru podcast show.[/list_item]
[list_item]Nominated web sites must be hosted on the Uhuru service.[/list_item]
[list_item]There is no restriction on the type of web site eligible for the contest. Just some examples are:[/list_item]



[list_item]A mobile Android app that just uses the Uhuru service as a back-end database.[/list_item]
[list_item]A customer resource management web site for managing sales engagements for your own products.[/list_item]
[list_item]An application for managing students and grades at a college.[/list_item]
[list_item]A blog about cuddly dogs.[/list_item]



[list_item]Nominations for web sites engaged in illegal or immoral activities will be rejected and cannot participate in the contest. Read the Uhuru service terms of use for details about which activities violate our hosting policies.[/list_item]
[list_item]Only one vote per person is allowed.[/list_item]
[list_item]You are allowed to nominate any web site running on the Uhuru service whether it is your own or someone else’s.[/list_item]
[list_item]You can nominate as many of your own web sites as you choose.[/list_item]
[list_item]Nominations which leave incomplete information on the registration form will be rejected.[/list_item]
[list_item]Lifetime hosting is defined as the period of time that Uhuru continues to offer a hosting service for web applications. As long as Uhuru is operating a hosting service a site with lifetime hosting will continue to get free service.[/list_item]
[list_item]Premium hosting is defined as 1GB RAM, 512MB shared file storage and 512MB shared database storage.[/list_item]
[list_item]If a web site which exceeds the free resources offered in the lifetime premium hosting offer should win the contest, the free hosting prize will be extended to whichever site the owner of the winning web site chooses. In other words, there is no obligation for the winning site itself to be the site which receives the lifetime free hosting. The free hosting prize can be transferred to whichever web site the winner chooses.[/list_item]
[list_item]The URL submitted for the nominated web site does not have to be in the format. It can be a private domain that has been redirected to[/list_item]
[list_item]Uhuru reserves the right to extend the voting period.[/list_item]
[list_item]A web site must receive at least 10 unique votes to win the contest.[/list_item]



The app is currently deployed on the academic intranet of , with popular features like integration with the most used social networks and, soon, with Google Apps.

The app also lets teachers upload multiple files to the cloud, such as images, documents and presentations. It also includes news and information related to the high school, which may include a media gallery, all shared in a stream.

Continue reading

Uhuru and eFactor team up to help entrepreneurs

Uhuru is proud to become one of the preferred resources offered to the eFactor community. With well over a million members eFactor is one of the most useful entrepreneurial communities around. This vibrant community provides one of the best places to share experiences and get help with business. From writing a business plan to developing products or getting funding, eFactor can help.

We are excited to be able to help the eFactor community with their application hosting needs.



Welcome to the Core

Uhuru is proud to be a founding member of VMware, Inc.’s Cloud Foundry™ Core initiative, featuring hosted services that are compatible with the Cloud Foundry framework and APIs. It is a remarkable achievement to have a whole ecosystem of hosting services, from different companies, which are completely compatible with one another. If you are able to publish an app on one Cloud Foundry Core service you can easily publish the same app on another Cloud Foundry Core environment.

What a novel idea this is: giving people the freedom to choose different hosting services without the lock-in that makes the cost of moving apps between providers prohibitive. Cloud Foundry is the only platform that does this. Everyone else has their own proprietary cloud.

Better still, from Uhuru’s perspective, there remains ample room for innovation as a Cloud Foundry partner. We have not only brought .NET and Microsoft SQL Server support to our Cloud Foundry compatible service but created rock-solid developer tools, a persistent file service and an innovative web console to make using and managing your apps even easier. You can expect to see even more helpful enhancements in the next release of the Uhuru AppCloud.

Here is the link to VMware’s blog post on the Cloud Foundry Core compatibility initiative:

Here is information about Uhuru’s participation in Cloud Foundry Core:

Uhuru on FreeAir podcast

Michael had a great time talking with Seyi Ogunyemi and Ikenna Okpala on the FreeAir podcast show about the evolving cloud. It was particularly interesting to discuss all the amazing things happening in emerging markets. There are a lot of entrepreneurs taking advantage of cloud services like Uhuru to make their ideas a reality.

You can listen to the podcast here:

Improved reliability and security for Uhuru Cloud Admin tool

On Wednesday, August 29th, we released a new version of the Cloud Admin tool. This Windows application now features encrypted tunnels for improved security when uploading or downloading data on the databases or file service. We have also made a lot of improvements to the reliability and performance of the tunnels. You can now transfer data more securely and reliably than ever from your local Windows PC to the Uhuru AppCloud service.

If you are running an older version of the Cloud Admin tool, be sure to install the new one. You can check the version of the Cloud Admin tool you currently have installed in the Help -> About menu. The new version number is

You can download the latest Cloud Admin tool here:

We are now using the .NET ClickOnce feature to make it easy to automatically update the Cloud Admin. Once you install this latest release you will automatically be notified when new versions of the Cloud Admin are available.


2000 Uhuru AppCloud Users since June 25th!

The Uhuru AppCloud beta just passed 2000 registered users today (August 16th)! All these people have signed up since we made the AppCloud beta public on June 25th.

That’s amazing! Even more incredible, the Uhuru AppCloud is now hosting over 1000 web sites!

The large number of questions we’ve been getting on our support site has been hard to keep up with for a small company like ours, but these are the kinds of problems an upstart like us dreams of.

Interestingly, around 60% of the web sites on the Uhuru AppCloud are currently using PHP and 27% are using .NET.

NOTE: Uhuru is offering a year’s free premium hosting once we go commercial to anyone who blogs about the experience with our hosted service. You can get details about how to get free premium hosting here:

If the French like Uhuru, it must be good

Another user, from France, posted a blog article talking about his experience with the Uhuru AppCloud beta.

In the blog’s words: “But given the youth of the service, it is already a great PAAS to try, especially for .NET developers who look for a handy platform to test out their applications (use case, load tests, validation, etc…) in no time.”

There are a lot of great suggestions for our engineers to look over here. We love this feedback!

Don’t forget that you too could qualify for 1 year free premium hosting once the Uhuru AppCloud goes commercial by sharing your ideas in a blog post too.

The Heroku for .NET

Read Chin Wye Jin’s take on the Uhuru AppCloud. In his words, “It is a “Heroku” for .Net Guys like me”.

It is also great to hear Chin’s feedback on things he would like to see improved. This is still a beta after all, and we can use all the suggestions our users have to offer.

Don’t forget that you too could qualify for 1 year free premium hosting once the Uhuru AppCloud goes commercial by sharing your ideas in a blog post too.

GIS and the cloud – the perfect marriage

In this episode of the Uhuru podcast show Ming Lee, the manager of on-line operations at ESRI UK, talks about the tremendous benefits his company has seen in taking their Geographic Information Systems (GIS) software to the cloud. By moving to virtual machines on hosting services like Amazon, Ming has reduced IT costs and increased capacity to handle large amounts of traffic and processing all at the same time. Ming can spin up new machines as fast as he wants, and take them down when no longer needed. Ming has looked at Platforms as a Service (PaaS) but found that the highly customized apps he supports won’t run well on them. However, new GIS apps are being written with the cloud in mind from the start (such as not relying on the OS for state), which will enable even more productivity improvements in the future.

Be sure to check out the GIS web site Ming helped make possible that tracks Diamond Jubilee events throughout the UK, in celebration of Queen Elizabeth II.

Another great GIS site Ming supports focuses on the Titanic:

You can read Ming’s blog here:

A cloud for every season – who says you only need one cloud solution?

In this episode of the Uhuru podcast show Rob Reynolds, a senior software developer and creator of Chocolatey, explains how he uses a variety of different cloud computing solutions for hosting his applications. Amazon’s EC2 is great for streaming and AppHarbor is brilliant for integration with Git and easy hosting of apps. Azure was overpriced and difficult, but they have made a lot of improvements recently which Rob thinks might warrant a second look. Rob suggests that anyone trying to offer a cloud computing product should consider the freemium model. Offer some capabilities for free and charge for add-ons.

It was fascinating to hear Rob talk about his work creating the Chocolatey and Chuck Norris Open Source initiatives that are adding useful capabilities to the Windows world. Who says Open Source is only about Linux?

You can read Rob’s blog here:

No Worries

Never touch an operating system or virtual machine again. Leave that up to us.

Just upload your applications and let us automatically push them to dozens, or hundreds, of servers.

Our automatic load balancing and fault tolerance options ensure that your applications will be running at all times and able to handle the largest of demands.

[button url="" target="_self" size="small" style="tealgrey" ]Read more[/button]

Swift Upload

Upload your custom app to our service in minutes. Most apps don’t require any modifications.

Use the command line, Microsoft Management Console or the popular Eclipse or Microsoft Visual Studio integrated development environments to deploy and manage your applications.

[button url="" target="_self" size="small" style="tealgrey" ]Read more[/button]

Any Template

Pick one of our pre-configured templates of common applications and you can have your own web site in minutes without even having to have a web application of your own to start with.

Just choose WordPress, Magento, Sugar CRM or any of our other app templates and we’ll handle everything else.

You don’t even have to worry about uploading an application. Just login to your site and get started.

[button url="" target="_self" size="small" style="tealgrey" ]Read more[/button]

Any App

Take your pick of C#, Java, PHP, Ruby or Node.js; RTG runs them all unmodified. Don’t waste time re-architecting your app for the cloud; most apps work without any changes.

Fault tolerance and load balancing are automatic, allowing any app to easily handle whatever capacities are needed. Focus on writing great applications, not managing servers or virtual machines.

[button url="" target="_self" size="small" style="tealgrey" ]Read more[/button]

Everyone Needs a Cloud – Even If They Don’t Think They Do

In this Uhuru podcast episode Jonathan Schnittger, a senior developer at iQuate, talks about how even companies where cloud computing doesn’t make sense for their own products can still benefit from back-office cloud services like e-mail, file sharing, etc. Jonathan’s company creates security scanning software that has to be run on local networks, which precludes cloud hosting. Many of his customers are using cloud services like Amazon. Even the large enterprises he works with are hosting more and more of their applications on the cloud. Desktop replacement with cloud services is another hot area Jonathan sees organizations adopting, but the solutions for this are still immature.

You can follow Jonathan on Twitter: @JonnySchnittger

New and Improved Uhuru Tools

We just released updated versions of our Uhuru PaaS client tools.

These new versions address a number of reliability issues that resulted in the tools hanging on occasion as well as some usability enhancements. You will also see that creating and using database management tunnels with the Microsoft Management Console is much faster.

These new versions can be downloaded on our Instructions and Tools page.

If you were using an older version of our client tools you will need to uninstall it before installing the new ones.

  • To remove the old Uhuru MMC snap-in go to the Windows Control Panel and uninstall the “Uhuru Cloud Foundry Manager”.
  • To remove the old Uhuru Visual Studio extension go to the Extension Manager under the Tools menu of Visual Studio and uninstall “Uhuru Visual Studio Extensions for Cloud Foundry”.

A Balanced Perspective of the Uhuru PaaS

Sayak Saha, a software engineer at NetApp, took the Uhuru PaaS for a test drive and wrote about his experience deploying a .NET app to the cloud. It was over-all positive, but he did make some useful suggestions for improvements.

Way to go Sayak! This is exactly the kind of feedback that we at Uhuru need to improve our product.

Sayak is now the second recipient of a lifetime account on the Uhuru trial PaaS. There are now just 8 more lifetime accounts left for beta testers who give feedback.

Michael Surkan


The Versatile Platform as a Service

The PaaS that gives you choices in how you want to develop and host web applications in the cloud.

Support for all major languages and platforms

Deploy apps in minutes

Support for public and private clouds

[button url="" target="_self" size="small" style="tealgrey" ]Read more[/button]

Superior productivity

Never touch an OS again!

Deploy apps in minutes

Run existing apps unmodified

Extensions for popular Integrated Development Environments

Automatic fail-over

Automatic redundancy

[button url="" target="_self" size="small" style="tealgrey" ]Read more…[/button]


The best of Open Source and Windows

Why choose when you can have it all?

Support for .NET, Java, PHP, Ruby and Node.js

.NET apps can access Open Source data services

Open Source apps can access Windows data services

[button url="" target="_self" size="small" style="tealgrey" ]Read more[/button]


No lock-in

No app customization required

No custom APIs

Runs on any network, public or private

Easily move apps between clouds

[button url="" target="_self" size="small" style="tealgrey" ]Read more[/button]

Everything you wanted to know about databases on an Uhuru PaaS.

Here are the detailed instructions for working with databases that our Uhuru PaaS beta testers have been asking for. The in-depth database deployment guide walks you step by step through how to set up a data service in the Uhuru PaaS and configure your applications to use it. We also have 3 videos that show you how to deploy apps that use databases with both the Microsoft Management Console and Visual Studio.

Welcome to the NEW way of thinking about deploying apps to the cloud. As these videos show, getting a database set up on an Uhuru PaaS only takes minutes, and your applications only need to have configuration strings changed to point to the new database name. It’s as simple as that. Unlike most other PaaS offerings, you can even use the Uhuru database management tunnel to work with most popular database management tools.
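The "change a configuration string" step can be as small as this sketch suggests. Cloud Foundry-style platforms expose service credentials to the app through the VCAP_SERVICES environment variable; the service label ("mssql") and credential field names below are assumptions for illustration, and Python is used only for brevity:

```python
# Sketch: build a database connection string from platform-provided
# credentials instead of hard-coding it. The VCAP_SERVICES layout below
# (service label "mssql", credential field names) is an assumption.
import json

def connection_string(vcap_json: str, label: str = "mssql") -> str:
    """Pick the first bound service under `label` and format its credentials."""
    services = json.loads(vcap_json)
    creds = services[label][0]["credentials"]
    return (f"Server={creds['host']},{creds['port']};"
            f"Database={creds['name']};"
            f"User Id={creds['username']};Password={creds['password']};")

# A stand-in for os.environ["VCAP_SERVICES"] on a real deployment:
fake_vcap = json.dumps({"mssql": [{"credentials": {
    "host": "10.0.0.5", "port": 1433, "name": "mydb",
    "username": "app", "password": "secret"}}]})
print(connection_string(fake_vcap))
# Server=10.0.0.5,1433;Database=mydb;User Id=app;Password=secret;
```

In a deployed app the JSON would come from the environment rather than a literal, so repointing the app at a new database means redeploying with new bindings, not editing code.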

Deploying a database application with the Microsoft Management Console

Deploying a database application with Visual Studio

Using the database management tunnel

The first recipient of the Uhuru trial PaaS Lifetime account

It is my great pleasure to award Debashish Mishra, an Architect with CSC, as the very first recipient of a free lifetime account on the Uhuru trial PaaS. The blog post Debashish wrote about his experience with the Uhuru PaaS, and the suggestions he offers, is exactly the kind of thing Uhuru needs to make our product even better.

Of course, you too can get your very own lifetime account on the Uhuru trial PaaS by trying it out and giving feedback.

Read about the details for getting a lifetime account here. Now that Debashish has a lifetime account, there are 9 more available.

So, you want free hosting on the Uhuru trial PaaS?

At the suggestion of some of the beta testers of the trial Uhuru Platform as a Service (PaaS) we are implementing a special offer of free premium hosting for 1 year to anyone who hosts an app!

Our normal Uhuru PaaS trial accounts only have 512MB of shared RAM. Our free 1 year hosting includes 1GB of RAM and 10GB of both file and SQL storage. The trial is a fully functional hosting service which is ideal for staging, prototyping, and production hosting. We can host applications using almost any language (e.g. .NET, Java, Ruby, PHP, Node.js). Applications in our beta can also use any of the following database services we host – MySQL, Microsoft SQL Server, RabbitMQ, MongoDB, Postgres and Redis.

We have pre-packaged templates for WordPress, Magento and SugarCRM that you can deploy and start using in seconds but you can also upload your own applications.

The 1 year free hosting offer is only for the first 100 beta testers who qualify (see instructions below).

Here is how you can get a year’s free hosting in the Uhuru trial PaaS:

  • Host a functional app that uses a database on the Uhuru trial PaaS. The app must have a real function and not simply be of the “hello world” variety. NOTE: It doesn’t have to be an app you wrote. Feel free to use an Open Source or publicly available app.
  • You must write a blog post of no less than 500 words describing your experience with the Uhuru PaaS and include at least one screen shot. You can post it wherever you like. Alternatively, you can send a video of no less than 30 seconds in length describing your experience with the Uhuru trial PaaS which Uhuru can publish on its web site.
  • The URL for your blog post must be published as one of the comments in the discussion about the Uhuru PaaS account offer on the Uhuru LinkedIn group.

We are eager to hear what you didn’t like, and suggestions for improvement, just as much as the things you liked. In fact, getting improvement suggestions or bug fixes from the community makes my job of helping the engineers know what to do next easier!

To start, just register for an account in our trial Uhuru PaaS.

Please write this word in the promo code box on the registration page: alberta

NOTE: Keep in mind that our trial PaaS is not yet in commercial operation. We don’t provide backups or guarantee 24/7 support and availability. You are responsible for your own backups.

Welcome to the Uhuru Micro-Cloud

The Uhuru micro-cloud was fun, but unfortunately it has come to an end. As the Uhuru PaaS evolved into the second beta release it became just too difficult to maintain a micro-cloud version, so we have discontinued it.

Feel free to sign up for a free trial account on the Uhuru AppCloud. If you are interested in setting up an Uhuru PaaS on your own network just drop us a note. We would be delighted to talk with you about it.

Uhuru LinkedIn group hits 3000 members!

Our Uhuru group membership just keeps on growing. We just crossed the 3000 member mark. This makes us the 3rd largest LinkedIn group when searching for “Azure” now. The largest group is a job seeking group, so I don’t think that really counts…

Don’t forget to check the group out.

Here is a view of the Uhuru LinkedIn group statistics on February 10th, 2012.

Uhuru LinkedIn group is now 2000 strong

Wow! Our Uhuru LinkedIn cloud computing group has just reached the 2000 member mark. There are a lot of great discussions going on. Some of them have over 200 comments (like the one about whether .NET is a second class citizen of the cloud). Don’t forget to check it out.

Here is a view of the Uhuru LinkedIn group statistics on January 26, 2012.

Connect the dot .NET hosting for the rest of us

Check out our new how-to video by Uhuru engineering architect Vlad Iovanov. With Vlad’s step-by-step instructions any developer or IT administrator can discover how easy it is to deploy .NET applications themselves with the Uhuru .NET Services for Cloud Foundry. We start at the beginning with configuring Cloud Foundry and go right through to using Visual Studio and the Microsoft Management Console to deploy your .NET applications.


Michael Surkan

Director of Product Marketing

It’s alive!

It’s been many weeks of late nights (and all-nighters), but we finally shipped our very first software release! And to think that as recently as 4 months ago not a single piece of Uhuru code existed… It just goes to show how much a small dedicated team can accomplish in a short time frame.

Life is fun at a startup like Uhuru. We have the luxury to work in a whole new space of Cloud Computing. At Uhuru it’s very much an attitude of “get the work done in the best way you can”. How refreshing!

Technology sure makes transatlantic collaboration far easier than anyone could have imagined even a few short years ago. Now we all just need to catch some sleep. I don’t know how Vlad, our engineering lead, manages to keep going during our marathon video conference sessions as he shares his screen, coding in real time, fixing bugs before our very eyes. Talk about a coding wizard!

No shortcuts here

We have made a point of doing things the right way, from the very beginning. It would have been far simpler to have just taken existing Ruby or Java Cloud Foundry code and built wrappers for Windows, but we knew that this kind of short-cut just wouldn’t meet our goals of performance, scalability, or native integration with Windows. Now that .NET Services for Cloud Foundry has been released we will know soon enough if the broader community is as excited about these design decisions as we are.

As the Product Marketing Director I will now be focused on getting the word out about our Uhuru LinkedIn community so that we can find people who will want to participate in trying our software as well so we can get that all important feedback.

This is what software development is all about these days. Get your code out as soon as you possibly can and start getting feedback before you waste time making more changes that customers won’t want. Sure, we could have waited another 6 months trying to add every little feature and conducting masses of customer research, but in the end we will learn far more from letting people just try our code.

Are you ready for the test drive?


Michael Surkan

Director of Product Marketing


