Cloud Connected

Cloud CMS and Two-way Replication

When we designed Cloud CMS, we wanted to give our customers the choice of running the platform both in the cloud and on-premise.  The cloud makes sense for a lot of infrastructure needs but we recognize that some of our customers will want to have their own hosted installation of the platform.

We also wanted to give our customers the ability to push and pull data between their on-premise installations and the cloud platform (whether our public installation or a private cloud the customer runs).  There are times when it’s better to take advantage of the elastic storage and capacity of the cloud and other times when data is better suited to live on-premise.

To achieve this, Cloud CMS offers two-way replication.  It’s very similar to Git in that you can push and pull changes between your local installation and a cloud installation.

To pull, you browse the cloud and find a resource you would like to bring down into a local copy.  You export that resource into an archive and then, on your local installation, simply import the archive via its URL.  Your local copy of Cloud CMS downloads the archive and seamlessly produces a replica of the data on your local instance.

Archives are stored in vaults and comprise a snapshot of the data along with all of its dependencies.  Archives are a lot like Maven artifacts (if you’re familiar with Maven, Ivy, Gradle or other dependency-management / build-lifecycle tools).  They can contain either all of the data of your export or a partial subset, depending on whether you’ve bounded the export by date or by changeset IDs (in the case of changeset-versioned branches).

You’re free to work locally on your data and, at any time, you can push your data back to the cloud.  It’s the same operation in reverse: you export an archive and then either have the cloud pull the archive from your local installation or push it from the local installation to the cloud.  The former works only if your local installation is visible from the cloud (depending on your firewall / IT restrictions), so the latter is often preferable.  In the end, both accomplish the same thing.
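The pull and push flows described above can be sketched as follows. This is an illustrative sketch only: the endpoint path and payload shape are assumptions made for the example, not the actual Cloud CMS API. The point is that both directions reduce to "export an archive on one side, then import it by URL on the other."

```javascript
// Hypothetical helper that builds an "import archive by URL" request.
function buildImportRequest(vaultId, archiveUrl) {
  return {
    method: "POST",
    path: `/vaults/${vaultId}/archives/import`, // hypothetical endpoint
    body: { url: archiveUrl }                   // the archive to replicate
  };
}

// Pulling: export in the cloud, then import on the local installation.
const pull = buildImportRequest(
  "local-vault",
  "https://cloud.example.com/archives/site-backup.zip"
);

// Pushing is the same operation in reverse: export locally, import in the cloud.
const push = buildImportRequest(
  "cloud-vault",
  "https://local.example.com/archives/site-backup.zip"
);
```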

Cloud CMS looks to Git and Mercurial as examples of versioning systems that get it right in terms of being distributed, changeset-driven and replication-friendly.  We didn’t seek to reinvent the wheel but instead opted to give our customers access to some wonderful tools for collaboration which, prior to Cloud CMS, were only available to command-line developers.

Building Applications with Ratchet JS MVC

Over the past few days, I’ve had a chance to delve back into ratchet.js which is a JavaScript MVC framework that I had a hand in building in 2010. By this point, there are a lot of JavaScript MVC frameworks that you can utilize. However, at the time we built it, we were very inspired by sammy.js, backbone.js and knockout.js.

A few points on these libraries:

  • I particularly liked sammy.js for its simplicity. Its developers do a great job of minimizing the work required of the developer, and the library’s interesting “chaining” approach during the rendering phase was inspirational. We liked chaining enough to use it in Ratchet as well as in our own Cloud CMS JavaScript Driver.
  • Both backbone.js and knockout.js are fantastic frameworks for defining scoped, observable variables in the model. They solve things like how to update content on the page in real time, how to build components that listen for update events, and how to pass messages between controls or elements on the site.
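To make the observable idea concrete, here is a tiny observable in the spirit of knockout.js. This is an illustrative pattern only, not knockout’s actual implementation: a single function doubles as reader and writer, and subscribers are notified on every write.

```javascript
// Minimal observable: call with no arguments to read, with one to write.
function observable(initial) {
  let value = initial;
  const subscribers = [];
  function obs(newValue) {
    if (arguments.length === 0) return value; // read
    value = newValue;                         // write
    subscribers.forEach((fn) => fn(value));   // notify listeners
  }
  obs.subscribe = (fn) => subscribers.push(fn);
  return obs;
}

// Usage: a component subscribes and reacts whenever the model changes.
const title = observable("draft");
const seen = [];
title.subscribe((v) => seen.push(v));
title("published"); // seen is now ["published"]
```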

We sought to produce an MVC library that gave us the singular foundation that we needed to build really great HTML5 and JavaScript-based applications. Furthermore, we wanted a framework that would be ideal for real-time, cloud-connected applications. Thus, while it’s important to get the foundation bits right in terms of observables, components, templates, routes and so forth, we also felt it was very important to define an asynchronous rendering pipeline that could manage state for the backend, stream content forward and aggregate it into HTML5.
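The chaining idiom behind that rendering pipeline can be sketched in a few lines. This is a hypothetical miniature, not the actual ratchet.js interface: each call queues a step, steps run in order against a shared model, and each step signals completion (possibly asynchronously) by calling `next()`.

```javascript
// Minimal sketch of a chainable, asynchronous rendering pipeline.
function chain(model) {
  const queue = [];
  let running = false;
  const api = {};

  function next() {
    const step = queue.shift();
    if (!step) { running = false; return; }
    step(model, next); // each step calls next() when done (possibly async)
  }

  api.then = function (fn) {
    queue.push(fn);
    if (!running) { running = true; next(); }
    return api; // returning the api object is what enables chaining
  };
  return api;
}

// Usage: build up the model step by step, then "render" the result.
const rendered = [];
chain({ items: [] })
  .then((m, next) => { m.items.push("header"); next(); })
  .then((m, next) => { m.items.push("body"); next(); })
  .then((m, next) => { rendered.push(...m.items); next(); });
```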

None of that is really too outlandish. A few years ago, for those old enough to recall (not that it was that long ago), everyone was crazy about mashups. The basic idea behind mashups was that content would be sourced from different locations and presented together as a single experience. That idea hasn’t gone away, and with the explosion of cloud-based services, including Cloud CMS for all of your content and application services, we think it’s high time that a JS framework was built to address that need.

So that’s where we’re headed going forward. I find it an absolute joy to work with ratchet.js and would recommend that readers take a look. It’s a purely open-source project under the Apache 2.0 license. All of the source code is available on GitHub.

Dynamic ProxyPassReverseCookieDomain with Apache 2

We do a lot of HTML5 and JavaScript application hosting at Cloud CMS.  Our platform lets you build HTML5 applications and deploy them to our cloud infrastructure with just a couple of clicks.  As a result, we’ve gotten pretty friendly with Apache 2, virtual hosts, mod_rewrite, proxies and more.

Applications built on our platform use OAuth2 over SSL.  We support all of the authentication flows, even for HTML5/JS applications.  Inherently, these applications are considered “untrusted” in any two-legged flow (such as username/password).  And with good reason - it’s simply impossible to securely store private credentials within the browser.

Heck, even your aunt Mildred could “view source” and poke around to find passwords, IDs or other important “private” things sitting in your source code.  Plus, anything you put into source could be cached.  Searched.  And made public.  How ‘bout them apples?

With Cloud CMS powered apps, we recommend using a three-legged OAuth2 flow for all HTML5 and JS applications.  With a three-legged flow, the application never “sees” your private credentials at all.  There’s a bit more hopping around but, for the most part, it’s totally seamless for the end user.
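The first “leg” of a three-legged flow is simply redirecting the user to the provider’s authorize endpoint, so the provider (not the app) collects the credentials. Here is a sketch of building that redirect URL per the standard OAuth2 authorization code grant; the endpoint, client ID and URLs below are placeholders, not real Cloud CMS values.

```javascript
// Build the authorize URL for an OAuth2 authorization code flow.
function buildAuthorizeUrl(baseUrl, clientId, redirectUri, state) {
  const params = new URLSearchParams({
    response_type: "code",   // ask for an authorization code, not a token
    client_id: clientId,
    redirect_uri: redirectUri,
    state: state             // CSRF protection: verify it on the way back
  });
  return baseUrl + "?" + params.toString();
}

const url = buildAuthorizeUrl(
  "https://auth.example.com/oauth/authorize", // placeholder endpoint
  "my-client-id",                             // placeholder client id
  "https://app.example.com/callback",
  "xyz123"
);
// The user signs in at the provider; the app never sees the password.
```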

[Image: might resemble your aunt Mildred]

Cloud CMS goes a step further and gives you some security enhancements that you can use for two-legged scenarios.  For one, we let you dynamically configure your domains so that you can lock down exactly which hosts are allowed to authorize.  You can also do things like create secondary client and user key/password credentials.  If a pair goes awry, you can shut them down and issue new ones.  Voilà!

A lot of these security enhancements are built around enabling users to deploy Cloud CMS-backed applications to new domains quickly.  Setting up web servers is an art and often takes a good deal of manpower.  Thus, we’ve built out a fully dynamic solution that uses Apache 2, mod_rewrite and proxies.

As you might guess, we use a wildcard SSL certificate and virtual host configuration.  One of the things we need to do is hand off requests from the HTML5 app through a proxy back to our content management API.  Our API handles the request and responds.  On the way back out, all cookies need to be mapped into the domain of the browser.

It turns out this isn’t the most intuitive thing to do with Apache 2.  The ProxyPassReverseCookieDomain directive is good, but it doesn’t support dynamic variables straight away.  In our case, we have a wildcard virtual host and we’d like to be able to tell certain Set-Cookie headers to swap their domains for our matched wildcard host.  How does one do this?

mod_rewrite: first to Alpha Centauri

Our solution was to use mod_rewrite to set an Apache environment variable.  Interesting, eh?  Not exactly what mod_rewrite was intended for (perhaps).  But frankly, mod_rewrite seems like one of those modules that can do just about anything in the universe.  I am sure that when they get to Alpha Centauri, they will discover that mod_rewrite was there first.

I digress.  So we use mod_rewrite to copy the %{HTTP_HOST} variable into an environment variable.  And then we use mod_proxy’s interpolate feature to plug the environment variable into the cookie domain.

It looks like this:

 <VirtualHost *:80>
 # "wildcard.example.com" is a placeholder; ServerAlias * is what makes
 # this virtual host match any host name
 ServerName wildcard.example.com
 ServerAlias *
 DocumentRoot "/apps"

 # Use mod_rewrite to copy %{HTTP_HOST} into
 # an Apache environment variable called "host"
 RewriteEngine On
 RewriteRule .* - [E=host:%{HTTP_HOST}]

 # Stop! Proxy Time!
 ProxyRequests Off
 ProxyPassInterpolateEnv On
 ProxyPass /proxy http://localhost:8080 interpolate
 ProxyPassReverse /proxy http://localhost:8080 interpolate
 ProxyPassReverseCookieDomain localhost ${host} interpolate
 </VirtualHost>

Hopefully this proves useful for others.  If you’re interested in learning more about Cloud CMS, just hop on over to our site and sign up for an account.

We’ll post more cool tips as we find them!