OpenShift and Cloud Foundry – A Contributor’s Perspective

This post summarizes the insights that we (at Uhuru Software) gathered while working on both the OpenShift and Cloud Foundry projects over the last two years. As you can imagine, in order to be able to provide support for Windows, we had to really dig into the code base of both solutions.

Uhuru has released both implementations to the open source community – see here. If you’re interested in piloting either environment with first-rate Windows support, feel free to get in touch with us.

It is great to see how well Windows integration with both OpenShift and Cloud Foundry turned out. Windows Server functions as a first-class citizen of the PaaS: you can publish and manage your .NET applications just like their Linux siblings.

I’m assuming you have some high-level familiarity with both offerings. If not, these links should provide a nice overview:

Our approach began with reading the code, understanding what it does, and then implementing a flavor of that in C# on Windows, so we have a pretty good idea of how these two systems are built. Uhuru is one of the few companies that has extended core services for both OpenShift and Cloud Foundry, and this post is intended to share our hard-won insights on these leading open source PaaS platforms.

I’m not going to provide 100% coverage of all the features, scenarios, and use cases supported by these two communities. Instead, I’ll demonstrate the major differences we observed between the offerings, based on hands-on experience with the projects that goes beyond marketing and documentation.

Developing and Contributing

For us, extending these projects meant understanding the Ruby (and sometimes Go) code used to create these platforms. With OpenShift, most of the effort was spent on the Windows version of the Node component. We also implemented changes to the Broker, allowing it to interact with more than one platform. For Cloud Foundry we had to implement counterparts to all of the following components: the DEA, the NATS client, the service nodes and gateways (brokers), and a BOSH agent.

Code structure

The code structures for these projects are quite different. OpenShift keeps all components in one repository, whereas Cloud Foundry has one component per repository and makes heavy use of submodules for BOSH releases. This Stack Exchange discussion sums up why we prefer the one-repo model to the other.

Components

Cloud Foundry is split into many components, each with a particular role in the system. The DEA (Droplet Execution Agent), for example, is very specific in its role – it’s the service that runs your app and manages its environment. That narrow scope probably meant we had to write less code for the Windows DEA than for the OpenShift Windows Node.

The communication layer used by the two projects is also different. OpenShift uses MCollective on top of ActiveMQ. Cloud Foundry uses NATS. Since on OpenShift the communication mechanism is decoupled from the implementation of the Node, we did not have to write any communication components. We use the same MCollective agent DDL for both Windows and Linux.
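
To give a sense of how lightweight the NATS side is, here is a minimal Ruby sketch using the nats gem; dea.advertise and router.register are two of the subjects Cloud Foundry actually uses, while the payloads and broker address are illustrative.

    require 'nats/client'

    NATS.start(:uri => 'nats://127.0.0.1:4222') do
      # Each DEA periodically announces its available capacity on this subject.
      NATS.subscribe('dea.advertise') do |msg|
        puts "DEA advertised: #{msg}"
      end

      # Publishing is just as simple – components register routes this way.
      NATS.publish('router.register', '{"host":"10.0.0.5","port":8080}')
    end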

In OpenShift there are two major components – the Node and the Broker. The Windows Node we built started out as a complete mirror of the Linux version, implementing the entire API exposed via MCollective. It turned out that not all of the APIs had to be covered: we chose to always use a Linux HAProxy as the load balancer – even for Windows apps – which meant we did not have to re-implement things like SSL support and the handling of OpenShift web proxy cartridges inside the Windows Node.

Extending these two systems from a services perspective was simpler for OpenShift, because we merely had to implement cartridges. In Cloud Foundry you have to implement a service node that becomes a new Cloud Foundry component.

From the perspective of a developer trying to extend the platform, the OpenShift codebase provided much better documentation than Cloud Foundry’s, but it was a bit more difficult to understand at first because it is split into fewer, larger components. As engineers, we like smaller components of code when we can get them.

Open Source Software

Contributing to these projects was a completely different experience for us.

We attempted to contribute Windows support to Cloud Foundry (v1 and v2) in the past, but we were not able to get it accepted, probably because Pivotal is not yet ready to take such a large contribution from the community. Trying to merge pull requests and get feedback was a bumpy road. The new community processes that have been created might help.

Red Hat, on the other hand, has well-established processes in place for working with outside contributors. We were amazed at how easy it is to work with the OpenShift community on development. The whole process of getting help and advice and submitting our code for inclusion in Origin has been incredibly smooth. If you are interested, you can check out the pull request on GitHub.

Deploying a PaaS

This is a long story, and it varies greatly depending on the scenario you have in mind. If you simply want to deploy a medium-sized PaaS – say 50 Nodes or DEAs (I think most of us fit in this bucket at this point) – Red Hat has an edge, because the system administrator can get started on deploying OpenShift (either Origin or Enterprise) immediately. He or she has plenty of documentation and all the command-line tools needed to operate the system. Given that most Linux in the enterprise is Red Hat Enterprise Linux (RHEL), administrators will be familiar with these tools and will probably require little, if any, new training to get started. The OpenShift deployment strategy is based on Puppet, which is very popular.

On the Cloud Foundry side, administrators may have a harder time kicking the tires. The deployment mechanism provided (BOSH) will be unfamiliar, so they will most likely need training. BOSH, however, appears to reduce the amount of time needed to manage the PaaS in the long term. By default BOSH downloads a lot of additional bits from the web, including stemcells. Many administrators will not be comfortable running clones of Pivotal’s images and will have to build their own.

About BOSH

BOSH deserves a section of its own – it’s a great tool that scales up and down very well, and it becomes more and more helpful as your PaaS grows. This is the mechanism that allows Cloud Foundry to be easily updated and maintained without downtime.
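
If you have not seen BOSH before, the basic workflow with the Ruby-based CLI of that era looks roughly like this; the director address, file names, and manifest name are illustrative.

    # Point the CLI at a BOSH director
    bosh target https://10.0.0.6:25555

    # Upload a stemcell (base VM image) and a software release
    bosh upload stemcell bosh-stemcell-ubuntu.tgz
    bosh upload release cf-release.tgz

    # Select a deployment manifest, then deploy
    bosh deployment cf-deployment.yml
    bosh deploy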

But BOSH is a great tool irrespective of Cloud Foundry. In my opinion Pivotal should position it in a way that is closer to their Big Data strategy: from my perspective, BOSH could help standardize the deployment of such complicated systems in the enterprise.

For small and medium deployments, BOSH should be hidden behind something easy to use, with a gentler learning curve. This is why we built UCC and URM. Put together, these two pieces allow users to easily manage stemcells, software releases, and deployments while hiding BOSH completely. Pivotal One also includes something similar, but that tool seems to be targeted at Cloud Foundry and their Big Data services. More importantly, it’s not open source.

Scalability

Load Balancing Mechanisms

OpenShift and Cloud Foundry have different styles of handling how traffic flows towards applications. In Cloud Foundry, you have the router component that is deployed on one or more VMs. These routers act as dynamic reverse proxies and serve the content of your app to clients.

With OpenShift, every node has a public IP address and integrates with your DNS. The reverse proxy in this case is a special type of cartridge (a web proxy cartridge). By default, the web proxy cartridge for OpenShift is HAProxy, but you can write your own.

Application idling

A very nice feature of OpenShift is that it idles applications when they’re not being used. Every node has an httpd service that handles HTTP traffic, so OpenShift can tell when an application has not received requests for some time. If an idled application receives a request, the service loads the application back into memory and then processes the request.

Cloud Foundry has no concept like this, which means you can achieve much higher application density on OpenShift than on Cloud Foundry.

While running our trial service on uhurucloud.com for two years, we learned that application idling is very important when it comes to improving application density. The very first version of the Windows DEA we built (for Cloud Foundry v1) did not use the IIS Hostable Web Core to run .NET applications; we set up websites directly inside IIS. Because of this we were able to take advantage of IIS app pool recycling, and we achieved higher densities on Windows. So the fact that OpenShift already has idling is a major plus.

Buildpacks vs. Cartridges

Buildpacks are more prevalent because of Heroku, but cartridges are wider in scope. What I mean by this is that buildpacks are restricted to encapsulating a web server and a framework, like Apache and PHP. Cartridges, on the other hand, can contain custom code, database services, or helpers for your application life-cycle and continuous integration (see the Jenkins cartridge). A simple mechanism connects cartridges to one another using environment variables, which gives you a lot of flexibility – see the sketch below.
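
For example, the OpenShift MySQL cartridge publishes its connection details as OPENSHIFT_MYSQL_* environment variables that other cartridges – and your own code – can simply read. A minimal Ruby sketch, assuming the mysql2 gem is bundled with the app:

    require 'mysql2'

    # The MySQL cartridge publishes these variables into every gear that needs
    # them; the default database is named after the application.
    client = Mysql2::Client.new(
      :host     => ENV['OPENSHIFT_MYSQL_DB_HOST'],
      :port     => ENV['OPENSHIFT_MYSQL_DB_PORT'].to_i,
      :username => ENV['OPENSHIFT_MYSQL_DB_USERNAME'],
      :password => ENV['OPENSHIFT_MYSQL_DB_PASSWORD'],
      :database => ENV['OPENSHIFT_APP_NAME']
    )
    puts client.query('SELECT VERSION()').first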

Deploying your app

OpenShift uses git. There’s probably nothing easier you can do for a developer than letting them deploy their application via git. Whether you use the command line, a GUI, or an IDE, git is the easiest option. OpenShift also gives you the option to deploy a binary package – for people who compile their applications and don’t want their source code in the cloud.

The lifecycle of the application looks quite different between these two platforms, and that difference starts with the mechanisms used to deploy your applications.

In OpenShift you create an application and the system provisions your own little bit of space in the PaaS, called a gear (you can have several). Keep in mind that your custom code hasn’t come into play yet. Once your app is created you can already browse it, because OpenShift puts a default website there. Then you push your code to the app via git. If you want one-line creation of your app from scratch, you can do it with a git URL – you tell OpenShift to deploy code from that URL instead of the cartridge’s default template.
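
With the rhc client, that flow looks roughly like this; app and cartridge names are illustrative, and --from-code is the option behind the one-line git-URL creation:

    # Create an app backed by the PHP cartridge; a default site is live immediately
    rhc app create myapp php-5.3

    # rhc clones the app's git repository for you – push your own code to deploy
    cd myapp
    git push

    # Or create and deploy in one step from an existing repository
    rhc app create myapp2 php-5.3 --from-code https://github.com/example/myapp.git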

On Cloud Foundry everything starts with you pushing your bits. The platform will take your code, analyze it, combine it with a buildpack and then deploy it on a DEA.
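
With the cf client the whole flow is a single command run from your application’s directory (app name illustrative):

    # Upload the bits, let the platform stage them, then run the result on a DEA
    cd myapp
    cf push myapp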

I like the idea of separating the creation process from the deployment of code/bits. When a deployment fails on Cloud Foundry, it’s more difficult to tell why it failed: was there a failure in provisioning your corner of the cloud? Is there something wrong with the buildpack? Do you have a bug in your application?

So OpenShift gives the user a bit more control and more predictability. Along the same lines, another thing you should be aware of is that every time you push your application in Cloud Foundry, your corner of the cloud will be different; in OpenShift, pushing changes to your app does not recreate your gear.

Deployment process

In OpenShift you create your application via the Broker API, and the Broker searches for a Node that has enough available resources to process the request. Next, a gear is created on that node and the specified web cartridge is added. This is a short description of what happens for a non-scalable Linux application. For auto-scaling apps (more than one instance), you need to mark the application as scalable when creating it; in that case, OpenShift also deploys a web proxy cartridge next to the web cartridges.

After you’ve created your application, it’s ready to be cloned using git. Then the power is yours – there are many ways to get your code into the platform, including adding the new git server as a remote to your existing repo or simply copying your code into a clone of the app’s git repo.

After you push, git hooks run within the gear. Depending on the cartridge deployed, these hooks run pre- or post-start scripts, build scripts, or lifecycle controls (start, stop, restart); you can also hook into this process yourself, as sketched below. Additionally, for auto-scaling apps, your code is synced from a master gear to the others using rsync.
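
For instance, OpenShift picks up action hook scripts from your repository; here is a minimal sketch of a post_deploy hook (the file location is the real convention, the contents are illustrative):

    #!/bin/bash
    # .openshift/action_hooks/post_deploy
    # Runs inside the gear after each push, once the new code is live.
    echo "Deployed at $(date)" >> "$OPENSHIFT_DATA_DIR/deployments.log"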

On Cloud Foundry, when you push your application the cf command line bundles your code and sends it over to the Cloud Controller. Before uploading, the client checks with the controller which resources are already available on the cloud, so you do not have to upload them again. For example, if someone has already pushed some large file, the controller knows about it and the file is not uploaded a second time. The diff is not as good as git’s, but it does have the advantage of building an index of common resources across the platform.
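
Conceptually, the client fingerprints each file and asks the Cloud Controller which fingerprints it already knows – roughly like this Ruby sketch (directory name illustrative; in the v2 API this is the resource_match endpoint):

    require 'digest/sha1'
    require 'json'

    # Fingerprint every file in the application directory.
    resources = []
    Dir.glob('myapp/**/*').each do |path|
      next unless File.file?(path)
      resources << { 'sha1' => Digest::SHA1.file(path).hexdigest,
                     'size' => File.size(path) }
    end

    # The client sends this list to the Cloud Controller, which answers with
    # the subset it already has cached; only the remaining files are uploaded.
    puts JSON.pretty_generate(resources)
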
The next step is to ‘stage’ your application – this is where Cloud Foundry tries to detect which buildpack to use, then bundles the buildpack together with your code/bits and deploys the result on a DEA. In my experience, buildpack auto-detection is not that useful: the developer always knows what technology he or she used to write the application, so auto-detection is a superfluous process that is susceptible to naive detection techniques.

An advantage when it comes to scaling in Cloud Foundry is that applications with one instance are treated the same as applications with multiple instances, so all you have to do is tell Cloud Foundry how many instances you want – you don’t need to flag your application when you create it. Cloud Foundry does not have automatic scaling, though.

OpenShift is more like evolution (git, HAProxy, rsync, et al.) and Cloud Foundry is more like revolution (most of the mechanisms are new). That said, neither platform has everything right, but both are working hard to improve their respective solutions in this area.

Browsing Your Application Files

With OpenShift the story is as simple as it can be: you can SSH into your gears. Cloud Foundry implements a special directory service in the DEA to support browsing the filesystem and tailing files. Some developers will appreciate the extra flexibility OpenShift offers by letting you SSH into the application when necessary.

Services

As you may have realized already, services in Cloud Foundry and OpenShift are a little bit different. In Cloud Foundry services are components on their own (deployed by BOSH). In OpenShift, they are cartridges.

In Cloud Foundry you connect your application to a service by binding them together. This tells the Cloud Controller that it should create a set of credentials for the service and then make them available to the application. Binding your app to a service causes it to be staged again. You can bind one application to many services and many applications to one service. You don’t get full access to the service (i.e. you are not the admin), and your credentials work only within the confines that Cloud Foundry has set up for you.

On the OpenShift side, your application is a grouping of gears. Some of these can be services, and the information they publish (such as credentials) is made available to the gears that need it (like the gears that run your code). Credentials are generated once, when the cartridge is added, and connection information is not published to gears outside the application. This means you can’t easily connect multiple applications to the same service. However, with OpenShift you are the admin of the service and in complete control as a developer; this allows you to create multiple databases, for example, and your application can access all of them.

On both Cloud Foundry and OpenShift, service connection information is passed to buildpacks and cartridges via environment variables. These can make life easier for developers by auto-configuring applications that conform to certain standards – see the sketch below.
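
As an illustration, here is how a Ruby app might read its MySQL credentials on each platform; VCAP_SERVICES and the OPENSHIFT_* variables are the real mechanisms, while the exact credential key names vary slightly between service versions.

    require 'json'

    if ENV['VCAP_SERVICES']
      # Cloud Foundry: credentials for all bound services arrive as one JSON document.
      services = JSON.parse(ENV['VCAP_SERVICES'])
      mysql    = services.values.flatten.find { |s| s['label'].to_s =~ /mysql/ }
      creds    = mysql['credentials']
      host, user, pass = creds['hostname'], creds['user'], creds['password']
    else
      # OpenShift: each service cartridge publishes its own prefixed variables.
      host = ENV['OPENSHIFT_MYSQL_DB_HOST']
      user = ENV['OPENSHIFT_MYSQL_DB_USERNAME']
      pass = ENV['OPENSHIFT_MYSQL_DB_PASSWORD']
    end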

Tunneling

Another important feature that many developers find useful is the ability to connect to your services from your local network. In OpenShift this is done via SSH tunnels – a solution that is very well known, works very well, and is fast.
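
With rhc this is a one-liner; the manual ssh form below uses an illustrative gear UUID and domain.

    # Forward the app's service ports (MySQL, MongoDB, ...) to localhost
    rhc port-forward -a myapp

    # Or build the tunnel by hand with plain ssh
    ssh -L 3306:127.0.0.1:3306 <gear-uuid>@myapp-mydomain.example.com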

In Cloud Foundry, talking to your services from the outside is done through an HTTP tunnel (a Caldecott tunnel). In the future this mechanism might support WebSockets; currently it uses a polling mechanism, which slows down data transfers.

The End

These were a few of the points I thought would be useful to write about. It is not a complete analysis – that is a far larger topic – so we’ll try to write more about the following in the future:

  • Security and isolation for Windows
  • Integration with service marketplaces
  • Logging and monitoring of applications
  • Monitoring of the platforms themselves
  • Metering and billing support
  • Keeping the systems up-to-date
  • Capacity planning
  • Auto-scaling of applications

Thanks for reading!

4 thoughts on “OpenShift and Cloud Foundry – A Contributor’s Perspective”

  1. Excellent post. Out of curiosity, which one took less time to build your extensions for? How stable are the interfaces, or do you see more maintenance with one platform over another?

    1. Hi Isaac and thank you.

      It took less time to build the OpenShift extensions. Interfaces (assuming you mean communication layer) are stable for both projects.
      If you are asking which project requires more maintenance from a coding perspective, it would be Cloud Foundry – more components plus maintaining BOSH releases.

      Cheers

  2. Great post! If you would recommend a platform after this time, which should it be? I mean in terms of a future perspective and strategic interop with ZeroVM, Docker and so on…
