How we built the www.zanbato.com public website

Last year, we launched a new Zanbato public website that explains what the Zanbato web application does and who we are. It is much flashier than the old site, with big, high-quality splash images and a responsive design, and all of the credit for that goes to our designers. However, we also changed the entire architecture for how we generate and serve the website. We explored a few different options before converging on our current setup, and this post will review that thought process.

Zanbato Public Site

At a high level, we had 2 technical goals for the new public site. First, it had to be fast. We wanted to minimize page load times to create the best user experience. Second, we wanted it to be easy to update and deploy from an engineering perspective to keep the site stable and content fresh.

Option 1: Integrated with the web application

Before the redesign, our public website was part of the Zanbato application. It was in the same repository and hosted from the same Django server as the rest of our site. Of course, this meant that the public site was served by the same servers as our web application.

One really nice part about it was that the handoff between the public landing page and the application was seamless. Users who wanted to sign in didn’t have to jump domains, and we could integrate signup features easily across the public pages and the backend for the rest of the site.

On the other hand, it was a pain to have it in the same codebase because it didn’t move in lockstep with the rest of the site. When we redesigned the application, we had to keep the old public site styles around. When we wanted to change the public site content, we had to wait for our application deployment schedule. The truth was that the public site and web application were mostly unrelated, and keeping them together created friction with little benefit.

Option 2: Separate web application

Most noticeably, the public site was slow because of the overhead (such as middleware, database connections, etc.) associated with the application. We wanted to simplify our setup, so the natural first step was to separate the public site into its own repository. We split out the views and HTML for these specific pages and were able to cut most parts of the stack. For example, we basically didn’t even need a database. However, we still had an entire Django application dedicated to serving what was effectively static HTML, which was far more machinery than the job required. It was also more expensive to maintain an EC2 instance just to serve our public website.
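To give a sense of how little was left after the split, the settings for a template-only public site can be pared down to roughly the following. This is an illustrative sketch, not our actual configuration, and the module names in it are hypothetical:

```python
# settings.py -- sketch of a pared-down, template-only public site project.
# Illustrative only; names and paths here are hypothetical.

DEBUG = False
ALLOWED_HOSTS = ["www.zanbato.com"]

# No database at all: the public pages only render templates.
DATABASES = {}

# Only the pieces needed to render templates and collect static assets.
INSTALLED_APPS = [
    "django.contrib.staticfiles",
]

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": ["templates"],
        "APP_DIRS": True,
    }
]

ROOT_URLCONF = "public_site.urls"  # hypothetical module name
STATIC_URL = "/static/"
STATIC_ROOT = "static_build"
```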

Option 3: Static Files hosted out of S3

Our next goal was to cut out the application layer entirely. However, we had gotten used to the ease of Django’s HTML templating and shared base files for generating common components like headers and footers. Thankfully, there are plenty of static site generators, but we decided to use django-medusa, a library for rendering Django views out to HTML files. Staying in the Django ecosystem allowed us to keep the exact same templating we were used to and did not require us to restructure our public site repository at all.
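For a flavor of how django-medusa works: a renderer declares the URL paths to generate, and a management command renders each one through the normal view/template stack and writes it out as a file. The sketch below follows the pattern from the library’s README; the class name and paths are hypothetical, not our actual site map:

```python
# renderers.py -- sketch of a django-medusa renderer. The class name and
# paths are hypothetical stand-ins for the real public site URLs.
from django_medusa.renderers import StaticSiteRenderer


class PublicSiteRenderer(StaticSiteRenderer):
    def get_paths(self):
        # Each path is rendered through the usual Django views and templates,
        # then written out as a static HTML file.
        return frozenset([
            "/",
            "/about/",
            "/contact/",
        ])


renderers = [PublicSiteRenderer]
```

With the disk renderer configured in settings (MEDUSA_RENDERER_CLASS and MEDUSA_DEPLOY_DIR), running the library’s staticsitegen management command writes out the full HTML tree, ready for upload.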

The easiest transition would have been to serve the HTML off of an EC2 instance using Apache, but it turns out that a much cheaper way is to use S3, which offers dedicated functionality for static website hosting. The process for setting up hosting was relatively straightforward. To deploy changes to the S3 bucket, we used django-pipeline to send static assets and the AWS CLI tools to copy the generated HTML from django-medusa up to the bucket.
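For the curious, enabling website hosting on a bucket can be scripted as well as done through the console. Here is a rough boto3 sketch with a hypothetical bucket name; our actual deploy step used the AWS CLI, as noted in the comment:

```python
# Sketch: enable static website hosting on an S3 bucket with boto3.
# The bucket name is hypothetical. Our actual deploy copied the
# django-medusa output up with the AWS CLI, roughly:
#   aws s3 sync <MEDUSA_DEPLOY_DIR> s3://www.example.com/
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="www.example.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "404.html"},
    },
)
```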

One other nice feature provided by S3 hosting is that you can set up automatic redirects between http://yourdomain.com and http://www.yourdomain.com.
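The redirect works by creating a second, empty bucket for the bare domain that forwards every request to the www bucket. A rough boto3 sketch, again with hypothetical domain names:

```python
# Sketch: a second bucket on the bare domain redirects all requests to the
# www bucket. Domain names are hypothetical.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="example.com",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {
            "HostName": "www.example.com",
            "Protocol": "http",
        }
    },
)
```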

One catch to moving to a fully static site is that you can’t POST to the server. This generally isn’t a problem, but we do have a contact form built into our public site. We solved this by using a separate form builder site (in our case, Wufoo) to act as the backend for this form. Handily, they offer the option to generate raw HTML for a form, which was sufficient for us to figure out the structure of the form and POST the correct data. There was a minor bit of JavaScript trickery in that a typical POST to this endpoint assumes there will be a redirect at the end, but we catch that and just close the popup on our site.

In the end, hosting out of S3 costs just cents per month to maintain. However, we noticed that the site was actually still slow because of the large image assets used throughout the site. Additionally, S3 website hosting doesn’t support https, which was a dealbreaker. Fortunately, we had already used the technology that solves both problems for our web application: a CDN.

Option 4: Static Files served from CloudFront

Content is made fast on the internet through CDNs, and S3/CloudFront integration is well supported. Thanks to two extremely helpful guides, we were able to get our site served off of a CDN, which is extremely fast. SSL is baked into the configuration, and the development and deployment workflow works exactly as described in the step above. I’m typically too lazy to invalidate older versions of files explicitly and just wait a few hours for changes to propagate, but we don’t update the site too often anyway.
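If you do want to invalidate cached files explicitly after a deploy rather than waiting for them to expire, it looks roughly like the following boto3 sketch; the distribution ID is hypothetical:

```python
# Sketch: invalidate all cached paths in a CloudFront distribution after a
# deploy. The distribution ID is hypothetical.
import time

import boto3

cloudfront = boto3.client("cloudfront")
cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        # CallerReference must be unique for each invalidation request.
        "CallerReference": str(time.time()),
    },
)
```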

The one catch to using CloudFront is that it does not support the redirect between subdomains available from S3. As such, you will note that you can visit both https://www.zanbato.com and https://zanbato.com, and they both resolve without a redirect. It is possible to use a different endpoint that does redirect, but we haven’t implemented that yet.

Conclusion

The new public site has now been live for just over a year, and it has worked out great. It is painless to update, fast, stable, and cheap. Were I to start this project again today, I don’t think I would change much about the final result, though I probably would use Jinja2 or another static site generator instead of Django. The architecture has worked well and solved all of the problems discussed above. Hopefully it was helpful to see our journey through these problems!

Posted by: Kevin Leung, VP of Engineering