Cloud Connected

Thoughts and Ideas from the Gitana Development Team

Building Applications with Ratchet JS MVC

Over the past few days, I’ve had a chance to delve back into ratchet.js, a JavaScript MVC framework that I had a hand in building in 2010. By this point, there are a lot of JavaScript MVC frameworks to choose from. However, at the time we built it, we were very much inspired by sammy.js, backbone.js and knockout.js.

A few points on these libraries:

  • I particularly liked sammy.js for its simplicity. The developers of that library did a great job of minimizing the work involved and used an interesting “chaining” approach during the rendering phase, which was inspirational. We liked the chaining approach so much that we used it in Ratchet as well as in our own Cloud CMS JavaScript Driver.
  • Both backbone.js and knockout.js are fantastic frameworks for defining scoped, observable variables in the model. They solve problems like updating content on the page in real time, building components that listen for update events, and passing messages between controls or elements on the site.

We sought to produce an MVC library that gave us the singular foundation we needed to build really great HTML5 and JavaScript-based applications. Furthermore, we wanted a framework that would be ideal for real-time, cloud-connected applications. Thus, while it’s important to get the foundational bits right in terms of observables, components, templates, routes and so forth, we also felt it was essential to define an asynchronous rendering pipeline that could manage state for the backend, stream content forward and aggregate it into HTML5.

None of that is really too outlandish. A few years ago, for those old enough to recall (not that it was that long ago), everyone was crazy about mashups. The basic idea behind mashups was that content would be sourced from other locations and presented singularly. That idea hasn’t gone away, and with the explosion of cloud-based services, including Cloud CMS for all of your content and application services, we think it’s high time that a JS framework was built to address that need.

So that’s where we’re headed going forward. I find it an absolute joy to work with ratchet.js and would recommend that readers take a look. It’s a purely open-source project under the Apache 2 license. All of the source code is available on GitHub.

Dynamic ProxyPassReverseCookieDomain with Apache 2

We do a lot of HTML5 and JavaScript application hosting at Cloud CMS.  Our platform lets you build HTML5 applications and deploy them to our cloud infrastructure with just a couple of clicks.  As a result, we’ve gotten pretty friendly with Apache 2, virtual hosts, mod_rewrite, proxies and more.

Applications built on our platform use OAuth2 over SSL.  We support all of the authentication flows, even for HTML5/JS applications.  Inherently, these applications are considered “untrusted” in any two-legged flow (such as username/password).  And with good reason - it’s simply impossible to keep private information secret within the browser.

Heck, even your aunt Mildred could “view source” and poke around to find passwords, ids or other important “private” things sitting in your source code.  Plus, anything you put into source could be cached.  Searched.  And made public.  How ‘bout them apples?

With Cloud CMS powered apps, we recommend using a three-legged OAuth2 flow for all HTML5 and JS applications.  With a three-legged flow, the application never “sees” your private credentials at all.  There’s a bit more hopping around but, for the most part, it’s totally seamless for the end user.
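To make that concrete, here is a rough sketch of the three-legged (authorization code) dance as seen from the browser.  The hostnames, endpoint paths and parameter names below are illustrative placeholders, not Cloud CMS specifics:

// Step 1: send the user to the provider's authorize endpoint.
// Only the public client id travels with the request -- the client
// secret stays on the server and never appears in the browser.
var clientId = "MY_PUBLIC_CLIENT_ID";                      // placeholder
var redirectUri = "https://myapp.example.com/callback";    // placeholder

window.location = "https://api.example.com/oauth/authorize" +
    "?response_type=code" +
    "&client_id=" + encodeURIComponent(clientId) +
    "&redirect_uri=" + encodeURIComponent(redirectUri);

// Step 2: the user logs in on the provider's own page (the app never
// sees the password).
// Step 3: the browser comes back to redirectUri with ?code=..., and a
// server-side component exchanges that short-lived code for an access
// token.  Neither the user's credentials nor the client secret ever
// sit in the HTML5 app's source.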


[Image: Might resemble your aunt Mildred]

Cloud CMS goes a step further and gives you some security enhancements that you can use for two-legged scenarios.  For one, we let you dynamically configure your domains so that you can lock down exactly which hosts are allowed to authorize.  You can also do things like create secondary client and user key/password credentials.  If a pair goes awry, you can shut them down and issue new ones.  Voilà!

A lot of these security enhancements are built around the ability for users to deploy Cloud CMS backed applications to new domains quickly.  Setting up web servers is an art and often takes a good deal of manpower.  Thus, we’ve built out a fully dynamic solution that uses Apache 2, mod_rewrite and proxies.

As you might guess, we use a wildcard SSL certificate and a virtual host configuration.  One of the things we need to do is hand off requests from the HTML5 app through a proxy back to our content management API.  Our API handles the request and responds.  On the way back out, all cookies need to be mapped into the domain of the browser.

It turns out this isn’t the most intuitive thing to do with Apache 2.  The ProxyPassReverseCookieDomain directive is good, but it doesn’t support dynamic variables straight away.  In our case, we have a wildcard virtual host and we’d like to be able to tell certain Set-Cookie headers to swap their domains for our selected wildcard match.  How does one do this?


mod_rewrite: first to Alpha Centauri

Our solution was to use mod_rewrite to set an Apache environment variable.  Interesting, eh?  Not exactly what mod_rewrite was intended for (perhaps).  But frankly, mod_rewrite seems like one of those modules that can do just about anything in the universe.  I am sure when they get to Alpha Centauri, they will discover that mod_rewrite was there first.

I digress.  So we use mod_rewrite to copy the %{HTTP_HOST} variable into an environment variable.  And then we use the proxy pass interpolate feature to plug the environment variable into the cookie domain.

It looks like this:

 
<VirtualHost *:80>
    ServerName wassup.com
    ServerAlias *.wassup.com
    DocumentRoot "/apps"

    ...

    # Use mod_rewrite to copy %{HTTP_HOST} into
    # an Apache environment variable called "host"
    RewriteEngine On
    RewriteRule .* - [E=host:%{HTTP_HOST}]

    # Stop! Proxy Time!
    ProxyRequests Off
    ProxyPassInterpolateEnv On
    ProxyPass /proxy http://localhost:8080 interpolate
    ProxyPassReverse /proxy http://localhost:8080 interpolate
    ProxyPassReverseCookieDomain localhost ${host} interpolate

    ...

</VirtualHost>
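To see what that last directive buys us (hostnames and cookie values made up for illustration): if the browser requested myapp.wassup.com and the backend API replied with

Set-Cookie: JSESSIONID=abc123; Domain=localhost; Path=/

then on the way back out, Apache interpolates ${host} and the browser receives

Set-Cookie: JSESSIONID=abc123; Domain=myapp.wassup.com; Path=/

which keeps the cookie valid for whatever wildcard domain the request actually came in on.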

Hopefully this proves useful for others.  If you’re interested in learning more about Cloud CMS, just hop on over to our site at http://www.cloudcms.com and sign up for an account.  

We’ll post more cool tips as we find them!

The Beauty of Cloud CMS Chaining

Chaining is a common technique that modern JavaScript libraries have widely adopted for stringing method calls together.

The goal of chaining is to produce elegant, concise code that is easy to understand and maintain. For example, if you are a jQuery developer, you probably write code like this on a daily basis:

$('#mydiv').empty().html('Hello World!').css('font-size','10px');

However, most popular JavaScript libraries only support “static” chaining, e.g. DOM object manipulation. If a method to be chained makes an Ajax call, you have to resort to callbacks, which require a very strict and verbose syntax.

Again, let us take a look at a jQuery example that chains two serial Ajax calls.

$.ajax({
  url: 'service/endpoint1',
  success: function(data1) {
    // we have successfully made our first ajax call
    // and we are ready for our second ajax call
    $.ajax({
      url: 'service/endpoint2',
      success: function(data2) {
        // we have successfully chained two ajax calls
      },
      error: function(error2) {
        // handle error from the second ajax call
      }
    });
  },
  error: function(error1) {
    // handle error from the first ajax call
  }
});

The callback approach used in the above example looks clean and will do exactly what we expect. The challenge is that when we need to chain a large number of Ajax calls, the code grows significantly. With many levels of nesting, it becomes very unpleasant to read and maintain, and you will really miss the simplicity that chaining brings.

Before we introduce Cloud CMS chaining, let us look at another common use case: making parallel Ajax calls. It is easy to fire off concurrent Ajax calls; the challenge is finding out when all of the parallel calls have finished so that we can execute the code that processes the returned results.

A simple and effective approach is to maintain a counter that tracks the number of finished Ajax calls.

function processResults(result1, result2) {
  // process returns from two parallel ajax calls
}

var count = 0, result1, result2;

$.ajax({
  url: 'service/endpoint1',
  success: function(data1) {
    result1 = data1;
    count++;
    if (count == 2) {
      processResults(result1, result2);
    }
  },
  error: function(error1) {
    // handle error from the first ajax call
  }
});

$.ajax({
  url: 'service/endpoint2',
  success: function(data2) {
    result2 = data2;
    count++;
    if (count == 2) {
      processResults(result1, result2);
    }
  },
  error: function(error2) {
    // handle error from the second ajax call
  }
});

In the above example, we make sure both Ajax calls have returned before executing the processResults function. Just as with serial calls, you will definitely prefer chaining over using callbacks and explicitly managing the state of the Ajax calls.

Now let us talk about the dynamic chaining that Cloud CMS introduces.

Cloud CMS is a content platform that helps you build cloud-connected applications. When you build a Cloud CMS application, your application interacts with the Cloud CMS platform through its REST APIs. In order to provide a pleasant programming experience to developers, it is critical for Cloud CMS to have a driver that sequences and coordinates multiple REST/Ajax calls in an easy manner. That is why Cloud CMS provides a JavaScript driver that comes with support for dynamic chaining.

The driver provides a Chain class that allows you to instantiate, extend or end a chain. A chain carries an underlying proxied object, which performs proxied Ajax calls to the REST services that Cloud CMS provides.

A chain can also be extended with a different proxied object if needed. The driver provides a list of “Chainable” classes that can be instantiated as the proxied objects for the chaining. For example, the Repository class provides methods for operations on the repository and its sub-level objects, such as updating the repository or creating a new branch.

So when we make a call to

repository.readBranch('master');

it will make a proxied GET call to retrieve details of the master branch of the repository. 

The call will look like

http://localhost/proxy/repositories/aaa718b9bd29f76f443b/branches/master?metadata=true&full=true&cb=1338678288323

where aaa718b9bd29f76f443b is the repository id.

Please note that the Cloud CMS JavaScript driver doesn’t reinvent the way Ajax calls are managed.

Under the hood, it still uses the callback approach to sequence the Ajax calls and the counter approach to keep track of the parallel calls. It just provides utilities and simple APIs that shield developers from dealing with those details.
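If you are curious what that looks like in miniature, here is a sketch of the general idea: queue up steps that run one at a time via callbacks, and use a counter to tell when a parallel batch has finished. This is purely illustrative (the MiniChain name and its done-callback convention are ours), not the actual driver internals:

// Illustrative only -- each unit of work is a function that calls
// "done" when its (possibly asynchronous) job has finished.
function MiniChain() {
  var queue = [];       // pending steps, run one at a time
  var running = false;

  function next() {
    var step = queue.shift();
    if (!step) { running = false; return; }
    step(next);         // run the step; it calls next() when finished
  }

  this.then = function(work) {
    var fns = (work instanceof Array) ? work : [work];
    queue.push(function(done) {
      var count = 0;    // the counter tracks the parallel branches
      for (var i = 0; i < fns.length; i++) {
        fns[i](function() {
          count++;
          if (count === fns.length) { done(); }
        });
      }
    });
    if (!running) { running = true; next(); }
    return this;        // this is what makes the calls chainable
  };
}

// Usage: one step, then two parallel steps, then a final step.
new MiniChain().then(function(done) {
  setTimeout(function() { console.log("step 1"); done(); }, 100);
}).then([
  function(done) { setTimeout(function() { console.log("2a"); done(); }, 50); },
  function(done) { setTimeout(function() { console.log("2b"); done(); }, 10); }
]).then(function(done) {
  console.log("all steps finished");
  done();
});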

Let us start with a simple example that manages the life cycle of a simple chain that doesn’t make any Ajax calls.

// Create a new chain
Chain().then(function() {
  var data = 'Mary ';
  // Extend the chain
  this.subchain().then(function() {
    data += 'has a little ';
  }).then(function() {
    data += 'lamb.';
    // We should have "Mary has a little lamb." at the end of the chain.
  });
});

Now let us take a step further and try to use a chain with proxied objects to create a new node under the master branch of a given repository.

new Gitana({
   "clientId": "SOMEID",
   "clientSecret": "SOMESECRET"
}).authenticate({
   "username": "SOMEUSERNAME",
   "password": "SOMEPASSWORD"
}).readRepository("SOMEREPOSITORYID").readBranch('master').createNode().then(function() {
   // we have successfully created a new node
});

In the above example, we first create a new client with the correct credentials. Once authenticated, the driver creates a new chain with a proxied Platform object. The Platform object is then used to read the repository with the given ID.

The chain is then extended, and the underlying proxied object is switched to a proxied Repository object populated from the response of the previous Ajax call. The chain keeps going by reading the master branch (switching the proxied object to a Branch) and then creating a new node under it (switching the proxied object to a Node).

If we want to extend the example to create three new nodes in serial, it looks like this:

new Gitana({
   "clientId": "SOMEID",
   "clientSecret": "SOMESECRET"
}).authenticate({
   "username": "SOMEUSERNAME",
   "password": "SOMEPASSWORD"
}).readRepository("SOMEREPOSITORYID").readBranch('master').then(function() {
    this.createNode();
    this.createNode();
    this.createNode();
    this.then(function() {
     // we have successfully created three new nodes by making serial calls
    });
});

Now, if we want to create the new nodes in parallel, we can do something like this:

new Gitana({
   "clientId": "SOMEID",
   "clientSecret": "SOMESECRET"
}).authenticate({
   "username": "SOMEUSERNAME",
   "password": "SOMEPASSWORD"
}).readRepository("SOMEREPOSITORYID").readBranch('master').then(function() {
   var f = function(){
      this.createNode();
   };
   this.then([f,f,f]).then(function() {
      // we have successfully created three new nodes by making parallel calls
   });	
});

As you can see from the above examples, the driver significantly simplifies the code with chaining. It makes the code easier to read and less error prone. It follows our natural intuition for synchronous chaining while dealing with the actual asynchronous Ajax calls under the hood.

Before we wrap up this blog, let us take a look at a more complex example that mixes serial and parallel Ajax calls. It also shows how to manually switch the underlying proxied object by using the subchain method.

new Gitana({
   "clientId": "SOMEID",
   "clientSecret": "SOMESECRET"
}).authenticate({
   "username": "SOMEUSERNAME",
   "password": "SOMEPASSWORD"
}).readRepository("SOMEREPOSITORYID").readBranch('master').then(function() {
   var node1, node2, node3;
   var f1 = function() {
      this.createNode().then(function() {
         node1 = this;
      });
   };
   var f2 = function() {
      this.createNode().then(function() {
         node2 = this;
      });
   };
   var f3 = function() {
      this.createNode().then(function() {
         node3 = this;
      });
   };            
   this.then([f1,f2,f3]).then(function() {
      // we have successfully created three new nodes by making parallel calls
      // we now associate node2 to node1 (node1 ==> node2)
      this.subchain(node1).associate(node2).then(function() {
         // we have successfully associated node2 with node1
      });
      // At this point, node2 has already been associated with node1
      // we then associate node2 to node3 (node3 ==> node2)
      this.subchain(node2).associateOf(node3).then(function() {
         // we have successfully associated node2 with node3
         // The chain will end at this point.
      });
   });
});

For live chaining examples, please check out our online JavaScript Samples.