Cloud Connected

Thoughts and Ideas from the Gitana Development Team

New reporting options

We’ve recently added expanded reports for both Workflow and Logging.

Since its introduction, workflow has remained a key component of our platform.  The process of defining and executing business processes in Cloud CMS is well documented.  Now you can track and analyze each step involved to streamline operations even further.

Review any historical sequence of events, or those currently in flight, to find patterns, trends, and bottlenecks that need attention, especially where system log details are involved.

Also, if you need to build any custom reports, these are now called “Data Views”.

They’re functionally equivalent to the reporting outlined in this video. To see any of the above in action, log in to your tenant or sign up for a Trial!

CEM - shuffling deck chairs on the Titanic

Going back 15 years, we’ve seen the core of delivering websites shift across various types of “platforms”: from Web Content Management (WCM) to Enterprise Content Management (ECM) to Customer Experience Management (CEM).  Each iteration brought another set of technologies, along with the associated migration headaches.

Each expansion also consumed more and more of the resulting presentation tier.  At first, this was mostly a good thing, as no standard mechanisms existed to facilitate these efforts.  In the last few years, however, key frameworks have arisen to tackle this job much more effectively, such as Backbone.js (2010), AngularJS (2010), and Ember.js (2011).  Suddenly, the templating systems embedded within a CMS platform were not only unnecessary but were actively getting in the way of good website design, much like a certain iceberg.

Cloud CMS allows you to fly over this barrier with an API-first approach.  No more shuffling deck chairs; sorry, Leo.  We’ll take it from here…

With Cloud CMS, you’re free to select any presentation framework and build a customized look-and-feel that focuses on both your content and your customers.  No meddling in page delivery, no unsupported add-ons, no unnatural or customized CMS configurations: just a pure API-driven solution, done right and provided natively.

How we use Docker at Cloud CMS

At Cloud CMS, we use Docker to provision our cloud infrastructure servers on top of Amazon Web Services. Our stack consists of five different clusters:

  • Cloud CMS API
  • Cloud CMS UI
  • Cloud CMS App Server
  • Elasticsearch
  • MongoDB

With the exception of MongoDB, all of these clusters sit behind elastic load balancing and are architected so that we can spin up new servers and tear down old ones as demand changes. That is to say, they are fully elastic in design. The product components were all built so that cache state is fully distributed and requests naturally fail over to alternate servers as the configuration changes over time.

Note: MongoDB is an exception to this, but it’s no less scalable; it simply has a different architecture. Here, we utilize a sharded MongoDB backend with replica sets. The Cloud CMS primary object identifier (_doc) serves as the shard key to distribute objects evenly across the backend shards.
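For illustration, setting this up from the MongoDB shell looks roughly like the following sketch; the database and collection names are assumptions here, and only the _doc shard key comes from our actual setup:

    # Run against a mongos router: enable sharding on the database,
    # then shard a collection on the _doc identifier
    # (database and collection names are illustrative)
    mongo --eval 'sh.enableSharding("cloudcms")'
    mongo --eval 'sh.shardCollection("cloudcms.nodes", { _doc: 1 })'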

The DevOps tasks of releasing new product updates, allocating new servers and bouncing users between servers are all highly automated. We made a fundamental design decision early on to be stateless, so that server-side session management would not be needed. This means that any server anywhere in the cluster can handle any request. OAuth 2.0 bearer tokens are passed with every request, and a distributed object cache helps ensure consistent performance without reloading objects every time.
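As a minimal sketch of what this looks like from a client’s point of view, the exchange below obtains a token and then calls the API; the endpoint paths, grant type and parameter names are illustrative rather than exact:

    # Obtain an OAuth 2.0 bearer token (endpoints and parameters are illustrative)
    TOKEN=$(curl -s -u "$CLIENT_KEY:$CLIENT_SECRET" \
      -d "grant_type=password&username=$USERNAME&password=$PASSWORD" \
      https://api.cloudcms.com/oauth/token | jq -r .access_token)

    # Because the servers are stateless, any API server behind the
    # load balancer can answer this call
    curl -H "Authorization: Bearer $TOKEN" https://api.cloudcms.com/repositories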

Docker plays a key role in all of this. With Docker, we’re able to define the various images that comprise these clusters. Docker build files (Dockerfiles) describe all of the software that must be installed, everything from yum updates to third-party libraries like ImageMagick or FFmpeg. They set up users, lay down permissions and produce an image that is a perfect snapshot of a fresh environment.
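A minimal sketch of such a Dockerfile might look like this; the base image, package list and user name are illustrative, not our actual build:

    # Illustrative Dockerfile (base image, packages and user are assumptions)
    FROM centos:7

    # Apply OS updates and install third-party libraries
    # (FFmpeg typically requires enabling an extra repository such as EPEL)
    RUN yum -y update && \
        yum -y install ImageMagick && \
        yum clean all

    # Set up a dedicated runtime user and lay down permissions
    RUN useradd -r cloudcms && \
        mkdir -p /opt/cloudcms && \
        chown -R cloudcms:cloudcms /opt/cloudcms

    USER cloudcms
    WORKDIR /opt/cloudcms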

Docker then lets us launch these images as containers locally. On my laptop, I can spin up a full Cloud CMS infrastructure; every image forms the basis for one or more containers. With a single command (sketched below, after the list), I am able to launch 10 Docker containers:

  • 2 Cloud CMS API Servers (cluster)
  • 2 Cloud CMS UI Servers (cluster)
  • 2 Cloud CMS App Servers (cluster)
  • 2 Elasticsearch Servers (cluster)
  • 2 MongoDB Shards (cluster)

Each of these servers exposes a port binding which is dynamically mapped according to the Docker launch script. It takes a few seconds for these containers to spin up. I can then work against this Cloud CMS infrastructure locally. This is the basis for how we do development and testing.
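The launch script amounts to something like the following sketch; the image names are hypothetical, and -P stands in for the dynamic port mapping described above:

    # Spin up two containers per tier; -P publishes each exposed container
    # port on a dynamically assigned host port (image names are illustrative)
    for i in 1 2; do
      docker run -d --name "api-$i"   -P cloudcms/api
      docker run -d --name "ui-$i"    -P cloudcms/ui
      docker run -d --name "app-$i"   -P cloudcms/app
      docker run -d --name "es-$i"    -P cloudcms/elasticsearch
      docker run -d --name "mongo-$i" -P cloudcms/mongodb
    done

    # Show the containers along with their dynamically mapped host ports
    docker ps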

The other beautiful thing about this approach is that we’re able to deploy the exact same containers to AWS as we use locally. Thus, we can test the precise bits that are about to go into production (ahead of them actually going into production). This provides considerable assurance that our deployment model is sound. And frankly, it’s one of the reasons we’re all able to sleep well at night!

In addition, we offer these very same Docker images to our enterprise subscribers. This allows our customers to run Cloud CMS on-premise, either within their own data center or on their own Amazon AWS private cloud. This gives them more control over their backend infrastructure and provides them with more autonomy over their costs and services. It’s running on their hardware and they can make decisions about how much bandwidth, storage or data transfer to throw at it.

Cloud CMS offers support for these customers through a software support model, complete with updated Docker images as they are released. Customers are then free to take new images as they wish and are not beholden to Cloud CMS for product updates and bug fixes on the public infrastructure.

And finally, we offer a standalone Docker image for development purposes. Enterprise customers can distribute this image to their developers so that they can achieve faster iteration and the same fluidity of development that our engineering team enjoys. They can run Cloud CMS locally. They can bring it up, tear it down, start over and iterate as quickly as they’d like. They can work locally, whether on a plane or in a coffee shop, running Cloud CMS right on their laptops in a Docker container. No internet access required. No stepping on anyone else’s toes.
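Day to day, that workflow amounts to little more than the following; the image name and port are illustrative:

    # Bring up a local standalone instance (image name and port are illustrative)
    docker run -d --name cloudcms-dev -p 8080:8080 cloudcms/standalone

    # Tear it down and start over whenever you like
    docker rm -f cloudcms-dev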

Using the Cloud CMS import/export transfer capabilities, these same developers can then push and pull content against their private or public cloud instances. This is fully distributed content replication, very similar in spirit to GitHub: you work on something locally and push it to the cloud via a command line tool.

At Cloud CMS, we’re very excited about Docker and appreciate it greatly for all of the DevOps challenges it has solved for us. We spent a lot of time early on experimenting with Chef, Puppet and Ansible only to realize how much faster things are when done with a Docker mindset. So far, we haven’t had to rely on any external tools to manage the construction and assembly of our Docker images. Furthermore, the commitment to Docker within Amazon AWS has been very beneficial. We certainly feel that we’ve picked the right technology to get the job done.