In this series of posts, we want to give you a look at the launch version of the Thermo.io Cloud platform. Then we will share our near-term roadmap, including Volumes, Object Storage, Firewalls, and passthrough GPUs, to name a few.

The Cloud Platform

We are proud to have put together a team of world-class Thermo Physicists, led by Hopper, our beloved mascot and leader.

Together, Hopper and team have come up with a formula for success premised on keeping the platform simple, reliable, and evolving.

We spent a great deal of time evaluating the components that make up our Cloud platform, ultimately landing on an OpenStack-based solution. Hopper’s team put emphasis on automation and orchestration (Ansible and Kolla) and portability of services (Docker) – all within an ecosystem that was iterating rapidly with strong community support (OpenStack).

OpenStack

The choice to use OpenStack was fairly straightforward. Although the landscape is littered with both commercial and open source options, our requirements kept bringing us back to either OpenStack or a homegrown solution.

Reinventing the wheel of on-premise Cloud with a homegrown solution seemed impractical. There is an enormous community of developers behind OpenStack and an ecosystem of resources and features that plug straight into it. OpenStack's pace of evolution and its operational model also fit us well.

Here are a few key points from our decision process that pointed to OpenStack:

  • A rapid but not overly aggressive release cycle of about twice a year
  • A REST API that is thoughtful, well-documented, and feature-complete in each OpenStack release (see the sketch after this list)
  • A series of deployment and orchestration options, allowing us to select one that was easily repeatable and resilient
  • A proven track record of thoughtful leadership, maturity in existing core components, and the introduction of emerging features
  • And finally, Hopper’s love affair with open source
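
To make that API bullet concrete, here is a minimal sketch of driving those REST endpoints from Python with the official openstacksdk client. The endpoint and credentials below are illustrative placeholders, not our real values:

    # Authenticate against Keystone, then talk to Nova and Glance
    # through the official openstacksdk library.
    import openstack

    # Placeholder credentials for illustration only.
    conn = openstack.connect(
        auth_url="https://cloud.example.com:5000/v3",
        project_name="demo-project",
        username="demo-user",
        password="demo-password",
        user_domain_name="Default",
        project_domain_name="Default",
    )

    # List compute instances (Nova) and images (Glance).
    for server in conn.compute.servers():
        print(server.name, server.status)
    for image in conn.image.images():
        print(image.name)

Every OpenStack service exposes its features through the same style of versioned REST endpoints, which is a large part of why the API earned a spot on this list.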

Our release version of Thermo.io Cloud is based on OpenStack 15 (Ocata). The roadmap to launch took us from proof-of-concept and Alpha on release 13 (Mitaka), through a Beta launch on release 14 (Newton), to our production launch on release 15 (Ocata).

The future roadmap calls for an upgrade to OpenStack 16 (Pike) in March 2018 or earlier. That keeps Thermo.io Cloud about one release behind the OpenStack project, a trailing window of roughly six months.

That trailing period allows Hopper's team to validate the maturity of the new release, plan for integration and upgrades, get any new features over to our development and frontend teams, and train Thermo Physicists on using the latest and greatest.

We believe OpenStack will provide a world-class experience for our customers, one that will eventually eclipse features available from our competitors.

Ansible and Kolla

Our team came into the office one day and found Hopper standing by a giant whiteboard. On it was written a single word: Ansible.

Well, not quite, but close. Our team, which has deep experience with Puppet, initially chose it when setting out to deploy OpenStack. At this point, we were in the proof-of-concept stage on OpenStack Mitaka, working out how we would deploy, manage, and upgrade on an ongoing basis.

We kept at it for a while. The many months spent tooling away on Puppet-based OpenStack felt like an entropic system of ever-increasing frustration. This felt forced, and it wasn’t right for us. We needed a new option.

Hopper. Whiteboard. Ansible.

At this point, we had already started to dive into Ansible in other areas, but nothing at the level of OpenStack orchestration. This would require a different level of love for Ansible. In order for it to work, we needed a proven deployment framework. Bring on the Kolla project.

The aim of Kolla is to provide “production-ready containers and deployment tools” (read: Ansible Playbooks) “for operating OpenStack clouds that are scalable, fast, reliable, and upgradeable using community best practices” (their words, not ours).

This was a tall order for a mission statement. At this point, the Kolla project was relatively unheard of but had shown promise in early development through a couple of releases. We were cautiously optimistic.

The more we dived into Kolla, the more we fell in love with it. We were hooked by its choice of container architecture, the rapid development by an engaged community, and a novel approach to orchestrating OpenStack.

Over the last year, we’ve seen the Kolla project explode in adoption. 12% of all new OpenStack installations use Kolla. In that time, we’ve gained a great deal of experience with Kolla by breaking, fixing, and testing it.

Today, we have a set of Ansible playbooks that automate on top of the Kolla project. Our playbooks take care of the boring stuff like metal prep, network configuration, dependency validation, custom tooling setup, and the like. From there, the star of the show is all Kolla.
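
Our real checks are Ansible tasks, but as a rough, hypothetical illustration of the kind of pre-flight dependency validation we mean, a sketch in Python might look like this (the commands and ports below are examples, not our actual list):

    # Hypothetical pre-flight checks of the sort run before handing
    # control to Kolla. Illustrative only.
    import shutil
    import socket
    import sys

    REQUIRED_COMMANDS = ["docker", "ansible-playbook"]  # example dependencies
    REQUIRED_PORTS = [5000, 9292]  # example: Keystone and Glance defaults

    def main() -> int:
        problems = []
        # Dependency validation: required binaries must be on PATH.
        for cmd in REQUIRED_COMMANDS:
            if shutil.which(cmd) is None:
                problems.append("missing command: " + cmd)
        # Network validation: ports the services need must still be free.
        for port in REQUIRED_PORTS:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                if sock.connect_ex(("127.0.0.1", port)) == 0:
                    problems.append("port already in use: %d" % port)
        for problem in problems:
            print("pre-flight failure:", problem)
        return 1 if problems else 0

    if __name__ == "__main__":
        sys.exit(main())

Catching these failures before Kolla runs keeps a long deployment from dying halfway through for a reason we could have spotted up front.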

It is our intention to keep making the most of Kolla, be an active participant in the project, and work with the community to build the project into the go-to standard for OpenStack deployments.

Docker

The Docker ecosystem has grown exponentially in recent years. The word Docker is now synonymous with portable, manageable applications whose dependencies are contained.

Hopper doesn’t like messy dependencies. Enter OpenStack. There is an oft-unspoken truth about OpenStack and its Python dependencies. Traditionally, OpenStack deploys on metal, with multiple services sharing an operating system. When – not if – it’s necessary to upgrade or patch a single OpenStack service, the dependency chain can force an upgrade of all services, or even break them.

OpenStack is often viewed as a giant monolithic application. That view is wrong. In fact, OpenStack is a collection of separate service applications, each of which communicates over a shared message queue and a central database.
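
In production those services talk through OpenStack's oslo.messaging layer on top of a broker such as RabbitMQ. As a loose illustration of the pattern only (this is not OpenStack code), here is a sketch using the pika AMQP client, assuming a RabbitMQ broker on localhost:

    # Two decoupled services coordinating over a shared message queue.
    # Illustrative only; assumes pika and a local RabbitMQ broker.
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="compute_tasks")

    # One service publishes a request onto the queue...
    channel.basic_publish(
        exchange="",
        routing_key="compute_tasks",
        body=b"boot instance demo-1",
    )

    # ...and another service, in its own process or container, consumes
    # it. Neither side needs the other's Python dependencies installed.
    def handle(ch, method, properties, body):
        print("received task:", body.decode())
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="compute_tasks", on_message_callback=handle)
    channel.start_consuming()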

That architecture lends itself very well to OpenStack service applications being containerized. This is exactly what the Kolla project introduces using Docker containers.

By using Docker containers for all our OpenStack services, we can decouple them from the hardware and from each other's software dependencies. It opens up new ways to test platform updates and new releases. Backup and disaster recovery of services becomes a trivial set of commands, while images allow for rapid deployment of new services to scale.
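
As a sketch of what "a trivial set of commands" can look like, here is the same idea expressed through the Docker SDK for Python; the container name is an illustrative placeholder, not one of Kolla's actual names:

    # Snapshot a running service container as a tagged image, then
    # bring it back (or scale it out) by running the image again.
    # Requires the docker package and a local Docker daemon.
    import docker

    client = docker.from_env()

    # Back up: commit the service container's state to an image.
    container = client.containers.get("example_nova_api")
    container.commit(repository="backup/example_nova_api", tag="pre-upgrade")

    # Restore: launch a new container from the snapshot image.
    client.containers.run(
        image="backup/example_nova_api:pre-upgrade",
        name="example_nova_api_restored",
        detach=True,
    )

In practice a Kolla service container also carries mounted configuration, so a real backup covers those volumes too, but the service itself reduces to a couple of SDK or CLI calls like these.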

There are countless additional benefits to container-based services for OpenStack, as well as new challenges. However, the benefits far outweigh the challenges, and most of the ones we hit stemmed from the Docker learning curve.

The container portion of our infrastructure is not an afterthought. It is a deliberate approach to how we deploy our OpenStack infrastructure. We have a great deal of confidence in this approach, and it is one shared by the broader OpenStack community.

We hope you’ve enjoyed this introduction to the technology that powers the Thermo.io Cloud platform. We’ll be back soon with dives into our hardware infrastructure, roadmap features like passthrough GPUs, and much more.

Our Thermo Physicists are looking forward to hearing from you. Leave us a comment or take a look around Thermo.io. You may even catch Hopper replying to your comments or chatting you up on the site!