Instance store HVM AMIs for Amazon EC2

January 29, 2014

At SmugMug, we have a rule for EC2 instances: do not use EBS for production systems unless instance store is not an option for the instance type. EBS is networked storage, which has many useful capabilities, but in our production environment we have automated builds for our servers and have no need for the added complexity of networked storage.

We’ve been successful with this goal for the most part, but the one place we couldn’t use instance store was on instances that required HVM virtualization, such as cc2.8xlarge and i2.* instances. EC2 offers two ways to boot instances, from instance store (local disk) or from EBS, and two types of virtualization, Hardware Virtual Machine (HVM) and Paravirtual (PV). Recently AWS added instance store volumes to these HVM-only instance types, so we wanted to take advantage of that. To create instance-store HVM AMIs, AWS guided us through booting an EBS-backed HVM AMI and converting it to instance store.

Before beginning, make sure you have both access keys and X.509 certificates for use. You must have both types of credentials for these steps to work. See bundle AMI prerequisites if you need guidance.

Here are the steps for creating instance-store HVM AMIs:

  1. Boot an EBS HVM instance. I used ami-dfa98cb6 (Ubuntu 12.04.3) on a c3.large instance (do not use 20131205 AMIs as they don’t boot on i2.* due to a bug)
  2. Set up the instance with your environment; we use masterless Puppet to configure the host with our base environment
  3. Copy the below make-hvm-s3.sh script to the instance
  4. Copy your cert.pem and key.pem to /tmp
  5. export AWS_SECRET_KEY="foo"
  6. Run the make-hvm-s3.sh script. The script creates the machine image, uploads it to S3, and registers it with EC2 so it is available for use
  7. Document the AMI created somewhere

make-hvm-s3.sh script:

#!/bin/bash
set -e

USER='root'
if [ `whoami` != $USER ]; then
    sudo -u $USER -H AWS_SECRET_KEY=$AWS_SECRET_KEY $0 "$@"
    exit $?
fi

if [ -z "$AWS_SECRET_KEY" ] ; then
    echo "ERROR: \$AWS_SECRET_KEY not set"
    echo "    export AWS_SECRET_KEY=foo"
    exit 2
fi

AWS_ACCESS_KEY="foo"
AWS_ACCOUNT="123412341234"
REGION="us-east-1"
BUCKET="s3-bucket"

STAMP=`/bin/date +%s`
PREFIX="hvm-s3"
export EC2_HOME=/opt/ec2-ami-tools-1.4.0.10/
export EC2_AMITOOL_HOME=/opt/ec2-ami-tools-1.4.0.10/

apt-get -y install ruby1.8 gdisk kpartx grub unzip python-pip
apt-get -y autoremove

if ! [ -d /opt/ec2-ami-tools-1.4.0.10 ] ; then
    curl -o/tmp/ec2-ami-tools-1.4.0.10.zip http://s3.amazonaws.com/aws-dev-support/beta/ec2-ami-tools-1.4.0.10.zip
    unzip /tmp/ec2-ami-tools-1.4.0.10.zip -d /opt
fi

/bin/rm -f /var/cache/apt/archives/*deb

# Switch the console to ttyS0 and add xen_emul_unplug=unnecessary; without
# this change to menu.lst (plus grub 0.97) our HVM AMIs would not boot.
sed -i 's;ro console=hvc0;ro console=ttyS0 xen_emul_unplug=unnecessary;' /boot/grub/menu.lst

$EC2_HOME/bin/ec2-bundle-vol \
    --privatekey /tmp/key.pem \
    --user $AWS_ACCOUNT \
    --cert /tmp/cert.pem \
    --arch x86_64 \
    --partition mbr \
    --prefix $PREFIX-$STAMP \
    --block-device-mapping ami=sda,root=/dev/sda1,ephemeral0=sdb,ephemeral1=sdc,ephemeral2=sdd,ephemeral3=sde \
    --exclude `find /tmp | tail -n+2 | tr '\n' ','` \
    --include `find / -name '*.gpg' -o -name '*.pem' -o -name 'authorized_keys' | grep -v '^/mnt\|^/tmp' | tr '\n' ','`

$EC2_HOME/bin/ec2-upload-bundle \
    --bucket $BUCKET \
    --manifest /tmp/$PREFIX-$STAMP.manifest.xml \
    --access-key $AWS_ACCESS_KEY \
    --secret-key $AWS_SECRET_KEY \
    --batch \
    --location US \
    --retry

echo -e "[default]\nregion = $REGION\naws_access_key_id = $AWS_ACCESS_KEY\naws_secret_access_key = $AWS_SECRET_KEY" > /tmp/aws
export AWS_CONFIG_FILE="/tmp/aws"
pip install awscli
aws ec2 register-image \
    --image-location $BUCKET/$PREFIX-$STAMP.manifest.xml \
    --name $PREFIX-$STAMP \
    --virtualization-type hvm

Notes for the script:

  • Set the AWS_ACCESS_KEY, AWS_ACCOUNT, REGION, and BUCKET variables near the top of the script to match your environment
  • We’re using a beta version of ec2-ami-tools that supports HVM on instance store (EC2 AMI tools v1.4.0.10 beta); previous versions of these tools will not work
  • AWS_SECRET_KEY is passed through your shell so it doesn’t accidentally get baked into the AMI
  • You may need to adjust the include/exclude lines for ec2-bundle-vol as appropriate for your setup
  • With Ubuntu/Debian, you need to include *.gpg, *.pem, and authorized_keys files; otherwise you’ll have problems connecting and performing apt-get operations
  • Adjusting menu.lst ended up being critical for getting this working, as well as using an older grub (0.97). Without these two changes, our AMIs would not boot
  • AWS list of AMI types to instance types

Thanks to Joshua F. from AWS support for help getting this going.

– Shane Meyers, SmugMug Operations

Categories: DevOps

Using PhantomJS at scale

December 17, 2013

About a year ago SmugMug had a dilemma. Our upcoming site-wide redesign and refactor (aka the new SmugMug) moved all of our rendering code into client-side JavaScript using YUI. We had a problem: SEO is critical for our customers, and search engines couldn’t index the new site.

Possible solutions were thrown around: do we duplicate our code in PHP? Use an artificial DOM? What about PhantomJS? Duplicating code would be a monumental effort and a continued burden when writing new features. Initial tests of fake/artificial DOMs proved unreliable. A small prototype Node.js web server that hooked into PhantomJS proved promising. Node.js’ async model would be perfect for handling things that wait for I/O like rendering webpages. We came up with the project name ‘The Phantom Renderer’ soon after.

The prototype

I spent a few days whipping up a prototype proxy server that worked like so:

  • Node.js web server accepts a URL in the query string
  • Send that URL to a newly-spawned PhantomJS process that listens on stdin
  • PhantomJS fetches the page; 500ms after the last HTTP request is sent, we grab the rendered content via the page.content property
  • Send content back to Node.js
  • Send content back to search bot

We thought we had a fairly simple and working solution.

The Reality

While our prototype worked (mostly), we knew we had a lot of work to do. Our pages were complex JavaScript applications with many HTTP requests and expectations that they would live in a ‘traditional’ desktop browser. GoogleBot sometimes would crawl us at over 500 reqs/s. PhantomJS can be CPU and memory intensive (and randomly crashes or freezes). We had to be absolutely sure we were sending back fully rendered pages.

Problem 1: When is a webpage ‘complete’?

In our prototype app we assumed that a webpage was ‘finished’ 500ms after the last HTTP request had begun. As you can probably already guess, this is incredibly naive. Our site loads dozens of images, scripts and stylesheets (not to mention lots of analytics code). Some load instantly, some take > 500ms to return content. What happens if a request completely fails? If the page is redirected (301, 302 or even via JS/meta tag)?  404s? We had to handle all those cases appropriately and gracefully.

At first, we had many pages that looked like this after ‘rendering’:

(screenshot: a blank page)

Obviously, this wasn’t going to work.

Through a lot of manual testing and QA we eventually came to a solution where we track each and every HTTP request PhantomJS makes and watch every step of the transaction (start, progress, end, failed). Only once every single request has completed (or failed) do we start ‘waiting’. We give the page 500ms to either start making more requests or finish adding content to the DOM. After that timeout we assume the page is done.
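To make that concrete, here is a minimal sketch of the tracking approach using PhantomJS’s resource callbacks. This is not the Phantom Renderer code, just the shape of the idea; redirect and failure handling are simplified and the quiet period is hard-coded:

// Minimal sketch of the request-tracking idea (not the Phantom Renderer code)
var webpage = require('webpage');
var page = webpage.create();

var pending = {};          // request id -> true while the request is in flight
var doneTimer = null;
var QUIET_MS = 500;        // how long the page must stay quiet before we call it done

function maybeDone() {
    if (Object.keys(pending).length > 0) { return; }
    clearTimeout(doneTimer);
    doneTimer = setTimeout(function () {
        // no new or outstanding requests for 500ms: grab the rendered DOM
        console.log(page.content);
        phantom.exit(0);
    }, QUIET_MS);
}

page.onResourceRequested = function (req) {
    clearTimeout(doneTimer);         // the page is busy again
    pending[req.id] = true;
};

page.onResourceReceived = function (res) {
    if (res.stage === 'end') {       // fires for 'start' and 'end'; we only care about 'end'
        delete pending[res.id];
        maybeDone();
    }
};

page.onResourceError = function (err) {   // failed requests count as finished too
    delete pending[err.id];
    maybeDone();
};

page.open('http://www.example.com/', function () {
    maybeDone();
});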

Once we did that, we had a 100% success rate for rendering pages and finally saw fully rendered pages instead of blank ones.

Much better! But we weren’t out of the woods yet…

Problem 2: PhantomJS and Node.js Bugs


Getting PhantomJS to render pages correctly during testing was a lot of work, but dealing with PhantomJS bugs made us tear our hair out on occasion. When you are dealing with > 500 requests/second you uncover sporadic, random bugs that most people don’t. We were also using a large percentage of the PhantomJS API, which made us more likely to hit bugs or undocumented behavior. And since we were new to PhantomJS, there was plenty of user error :)

Some of these fun bugs and problems we dealt with were:

  • If PhantomJS got in a redirect loop it would hog all CPU and rapidly fill up memory until it crashed itself or the server it was on
  • Random ECONNRESET errors from child processes upon termination
  • Small percentage of PhantomJS processes simply not returning
  • PhantomJS’ onResourceRequested and onResourceReceived returning different URLs for the same resource due to url encoding. This causes problems if you are tracking requests.
  • Expecting PhantomJS processes to terminate cleanly. Instead tell it to exit, then kill the process. Double tap!
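A rough Node.js sketch of that last point; the render script name and grace period here are made up, not our actual values:

// Ask PhantomJS to exit, then force-kill it if it hasn't gone away. Double tap!
var child_process = require('child_process');

var phantom = child_process.spawn('phantomjs', ['render.js']);   // hypothetical render script

function stopPhantom(proc) {
    proc.kill('SIGTERM');                     // ask nicely first
    var killer = setTimeout(function () {
        proc.kill('SIGKILL');                 // the double tap
    }, 5000);
    proc.on('exit', function () {
        clearTimeout(killer);                 // it exited cleanly; cancel the force-kill
    });
}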

Problem 3: Scaling PhantomJS and NodeJS


Since this was a brand new project and we knew rendering web pages was CPU intensive, we spent a lot of time running benchmarks (and learning how to benchmark).

Our testing infrastructure consisted of a test Phantom Renderer box and a separate server running http_load that was used to send varying amounts of traffic. We created a list of 600 public gallery URLs from our most popular customer sites and repeatedly slammed our test server with varying load to determine the best combination of processes, CPU and RAM.

It’s important to also document raw number of requests/sec and response time. A server isn’t very useful if it can handle hundreds or thousands of requests/sec but takes far too long to complete them.

When performance testing we learned a few things:

  • Don’t test against your normal QA/test environment. This will make your QA and dev teams unhappy.
  • Do make sure that any dependent services can also handle additional load/traffic!
  • Do use workloads and data as close to production as possible.
  • Do repeat your tests multiple times to allow for services to ‘warm up’.
  • Do test multiple configurations (number of processes, max connections, etc) on the same hardware.
  • Do write down all your results and extra data
  • Do test for long periods of time (hours at least). You’ll probably uncover issues that won’t occur during a short performance test.

We also had a few problems scaling PhantomJS once it was in production and running for long periods of time:

  • Setting PhantomJS’ cache size too big, causing all 64 PhantomJS processes to slam the disk with reads and writes when the cache filled up and needed items removed.
  • Running too many PhantomJS instances, filling up RAM over a period of a few hours and causing processes to be killed.
  • Node.js’ Cluster module on Ubuntu not load balancing equally between processes, causing server CPU to be underutilized (fix is to put HAProxy in front of Node.js)
  • Setting too high of a limit on number of connections on our HAProxy servers, overloading our servers.

We also spent some time optimizing PhantomJS to load pages quickly by turning off image loading, allowing it to use a small disk cache and keeping the PhantomJS processes alive instead of respawning them for every request. We also spawn a separate Node.js process for each processor core, allowing for massive parallelization.
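The PhantomJS tuning itself is done with command-line flags (e.g. --load-images=false and --disk-cache=true), and the per-core parallelization is the standard Node.js cluster pattern. A stripped-down sketch of the latter (not our actual server, and as noted above we ended up putting HAProxy in front of the workers anyway):

// Fork one Node.js worker per CPU core and respawn workers that die.
var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
    os.cpus().forEach(function () {
        cluster.fork();
    });
    cluster.on('exit', function (worker) {
        console.log('worker ' + worker.process.pid + ' died, respawning');
        cluster.fork();
    });
} else {
    http.createServer(function (req, res) {
        // in the real renderer, this is where the URL is handed to a
        // long-lived PhantomJS process and the rendered HTML is returned
        res.end('rendered page goes here\n');
    }).listen(8080);
}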

The importance of logs


During testing and tuning Phantom Renderer, we developed one strong habit: log everything. When we first started the project, we had no logging whatsoever. Debugging issues was easy at first when the codebase was small, but once it grew in size and complexity, debugging became much more difficult. While Phantom Renderer was being tested it was difficult to determine the cause of bugs and errors (or even what PhantomJS and Node.js were doing).

About midway through the project we started using Winston, a great logging utility for Node.js. With Winston in place we added logging to every single step of the render process in PhantomJS and of the HTTP handling in Node.js. We also used Winston’s log levels to allow different levels of logging for debugging and production. Combining that with Splunk gave us deep insight into how specific requests were handled and how often certain errors were occurring in production. If you’re starting a new project, logging should be a required piece of it.

The future of The Phantom Renderer

We’re hoping to open source The Phantom Renderer sometime in the near future. Hopefully it will be useful for web apps that have a mix of different frontend and backend technologies. Let us know if it’s something your team or company is interested in using!

We’ll be posting more in-depth posts about our experience with PhantomJS and NodeJS. Stay tuned!



Push-button deployments with Arduino

January 30, 2013

TL;DR: Thanks to an Arduino, an Arduino Ethernet shield, some Arduino code, buttons from eBay, LEDs from Fry’s, and SmugMug’s deployment web app, we’ve created a push-button deployment process.

When I first started working at SmugMug, we deployed infrequently by manually merging branches, tagging, double-checking, then running a bunch of commands (some via sudo and some not). It was an eleven step process that not all developers had access to. Developers were usually uneasy about pushing due to the complexity involved.

Eventually the process was consolidated into a shell script, which still had to be run via sudo on a designated server. More recently, the shell script was wrapped in a web app that made things much easier.

(screenshot: “Pushit”, the deployment web app)

While the web app is pretty awesome and easy to use, I thought using a real physical button to deploy code would be even better:

Introducing the SmugMug Deployinator 5000!

(Photos: the front of the Deployinator 5000, the inside of the enclosure, the buttons, and the Arduino with its Ethernet shield.)

The Deployinator 5000 is built from an Arduino with an Ethernet shield, a toggle switch, a key lock, two momentary push-button switches, and a handful of status LEDs.

The setup is relatively simple. The toggle switch, key lock, and two momentary switches are all wired in series so the Arduino sees them as one button. When all four are engaged, the Arduino makes an HTTP POST request to our deployment server, which then pushes any pending code live. While the Arduino is waiting for the deployment to finish, it blinks the yellow LED. When the push is deployed, the green LED lights up. If something goes horribly wrong, the red LED strikes fear into the deployer’s heart.
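For the curious, the Arduino side can be sketched in a few dozen lines. This is not the Deployinator’s actual firmware; the pin numbers, hostname, and endpoint are made up, and the LED and error handling are heavily simplified:

#include <SPI.h>
#include <Ethernet.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
EthernetClient client;

const int BUTTON_PIN = 2;   // toggle, key lock, and both momentary switches in series
const int GREEN_LED  = 5;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(GREEN_LED, OUTPUT);
  Ethernet.begin(mac);                            // DHCP
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {           // everything engaged
    if (client.connect("deploy.example.com", 80)) {
      client.println("POST /deploy HTTP/1.1");    // kick off the deployment
      client.println("Host: deploy.example.com");
      client.println("Content-Length: 0");
      client.println("Connection: close");
      client.println();
      client.stop();                              // we don't bother reading the response here
      digitalWrite(GREEN_LED, HIGH);              // real code watches the deploy and blinks LEDs
    }
    delay(5000);                                  // crude debounce; don't double-deploy
  }
}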

It wasn’t too hard to wire up the Arduino, the buttons, and the LEDs, even for someone with no electronics experience (although I had a bit of help from other SmugMug employees with more experience). The fun and challenging part was finding an enclosure and mounting all the pieces inside it. Trips to Weird Stuff and Home Depot solved that problem easily!

After gutting the enclosure, I superglued the Arduino holder to the inside and drilled holes into the backing plate to mount the buttons with machine screws. I then reattached a few wires and circuitry for dramatic effect.

Long-term ideas for the Deployinator include adding larger lights, a disco ball, and playing music when a push occurs. PowerSwitch Tails would allow the Arduino to control anything that runs on 120V power.

Deploying code doesn’t have to be boring!

- Ryan Doherty, SmugMug DevOps

Categories: DevOps Tags: ,

Scaling Puppet in EC2

January 14, 2013

At SmugMug, we’re constantly scaling up and down the number of machines running in ec2. We’re big fans of puppet, using it to configure all of our machines from scratch. We use generic Ubuntu AMIs provided by Canonical as our base, saving ourselves the trouble of building custom AMIs every time there is a new operating system release.

To help us scale automatically without intervention (we use AutoScaling), we run puppet in a nodeless configuration. This means we do not run a puppet master on any machines in our infrastructure. All machines run puppet independent of any other, removing dependencies and improving reliability.

I will first explain nodeless puppet, then I will dive into how we use it.

Understanding Nodeless Puppet

Most instructions for setting up puppet tell you to create a puppet master instance that all puppet agents talk to. When an agent needs to apply a configuration, the master compiles a config and hands it back to the agent. With nodeless, the puppet agent compiles its own configuration and applies it to the host.

We start with a simple puppet.conf file that is pretty generic:

[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
templatedir=$confdir/templates
modulepath=$confdir/modules

Then we create a top-level manifest called mainrun.pp. In our setup this manifest lives in a directory called /manifests. An example mainrun.pp:

include ntp
include puppet
include ssh
include sudo

if $hostname == "foo" {
    include apache2
}

There is also a /modules directory that contains puppet modules. Each include statement in the mainrun.pp manifest exists as a module.

Once we have all of our modules created and listed appropriately in mainrun.pp, we run puppet with the apply command: sudo puppet apply /etc/puppet/manifests/mainrun.pp. Puppet will then run and do all that our manifests tell it to.

Scaling Puppet

Upon booting, all machines download their entire puppet manifest code tree from an Amazon S3 bucket. Then puppet is run and the machine is configured. By using S3, we’re leveraging Amazon’s ability to provide highly available access to files despite server or data center outages.

To help keep our changes to puppet sane, we use a git repository. When anyone does a push to the central repository server, it copies our files to our Amazon S3 bucket. The S3 bucket has custom IAM access rules applied so puppet can only see its bucket and no other.
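A minimal sketch of the kind of hook that does that copy (the checkout path here is illustrative, and the bucket matches the BUCKET_PUPPET value used below):

#!/bin/sh
# post-receive hook on the central repository server: check out the pushed
# tree and sync it to the puppet bucket in S3
GIT_WORK_TREE=/srv/puppet-checkout git checkout -f master
s3cmd -c /etc/s3cmd.cfg sync --no-progress --delete-removed \
    /srv/puppet-checkout/ s3://puppet/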

When we launch a new instance in ec2, we use the --user-data-file option in ec2-run-instances to run a first-boot script that sets us up with puppet.

A simple first-boot script:

#!/bin/sh
AWS_ACCESS_KEY="abcdefghijlmnopqrstu"
AWS_SECRET_KEY="abcdefghijlmnopqrstuabcdefghijlmnopqrstu"
BUCKET_PUPPET="puppet"

apt-get update
apt-get --yes --force-yes install puppet s3cmd
S3CMDCFG="/etc/s3cmd.cfg"
wget --output-document=$S3CMDCFG http://s3.amazonaws.com/$BUCKET_PUPPET/s3cmd.cfg
sed -i -e "s#__AWS_ACCESS_KEY__#$AWS_ACCESS_KEY#" \
    -e "s#__AWS_SECRET_KEY__#$AWS_SECRET_KEY#" $S3CMDCFG
chmod 400 $S3CMDCFG

until \
    s3cmd -c $S3CMDCFG sync --no-progress --delete-removed \
    s3://$BUCKET_PUPPET/ /etc/puppet/ && \
    /usr/bin/puppet apply /etc/puppet/manifests/mainrun.pp ; \
do sleep 5 ; done

s3cmd.cfg in S3 is a publicly accessible template file containing placeholders for the access key and secret key, which are filled in by the first-boot script. As s3cmd.cfg is a publicly accessible file, do not place any real credential data in it.
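A minimal version of that template might look like the following (a real s3cmd.cfg generated by s3cmd --configure has many more settings; only the placeholder lines matter here):

[default]
access_key = __AWS_ACCESS_KEY__
secret_key = __AWS_SECRET_KEY__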

Puppet will install additional tools for keeping puppet running on all machines.

Keeping Puppet Running

As puppet is not running in agent mode, there is no daemon to periodically apply manifest changes made after boot. Instead, we use cron to run puppet every 30 minutes. Our cron entry:

*/30 * * * * sleep `perl -e 'print int(rand(300));'` && /usr/local/sbin/puppet-run.sh > /dev/null

The random sleep of up to five minutes ensures that machines run puppet at staggered intervals. This prevents all machines from restarting a service at the same moment (Apache, for example), which would cause an interruption for our customers.

We have three simple scripts that are also installed by puppet for puppet:
/usr/local/sbin/puppet-run.sh:

#!/bin/sh
/usr/local/sbin/puppet-update.sh
/usr/local/sbin/puppet-apply.sh

/usr/local/sbin/puppet-update.sh:

#!/bin/sh
BUCKET_PUPPET="puppet"
/usr/bin/s3cmd -c /etc/s3cmd.cfg sync --no-progress \
    --delete-removed s3://$BUCKET_PUPPET/ /etc/puppet/

/usr/local/sbin/puppet-apply.sh:

#!/bin/sh
/usr/bin/puppet apply /etc/puppet/manifests/mainrun.pp

We split puppet runs into three scripts to help with manual maintenance of a box. We run sudo puppet-run.sh if we simply want puppet to run immediately. sudo puppet-apply.sh is handy when making manual changes to the puppet manifests and modules for testing purposes; once testing is complete we copy our changes back into the git repository. sudo puppet-update.sh is infrequently run manually, mostly for resetting the puppet config when making manual testing changes.

Final Thoughts

As you can imagine there is a lot more involved in our manifests. There are a large number of conditional operators that enable and disable different parts of the manifests depending on what role an instance has.

EC2 tags have proven to be invaluable for us; each machine is assigned two tags that exactly describe its role. A script that reads the EC2 tags at boot, combined with a custom fact, exposes those tags to puppet.

Future posts about puppet may include:

  • How we use EC2 tags for determining instance roles
  • Speeding up initial booting of instances
  • Using custom facts to enable one-off configurations for testing or debugging

– Shane Meyers, SmugMug Operations

Categories: DevOps Tags: , , ,

Speeding up SmugMug Search

August 9, 2012

SmugMug users have uploaded millions of awesome photos, and one of the things we work hard on is making it easy and fun for people to discover them. SmugMug Search is an important part of this, since it allows anyone on the web to search among all public photos on SmugMug. It also helps drive traffic to our Pros, many of whom make a living selling prints of their work.

Naturally we want Search to be fast, intuitive, and beautiful. But more importantly, we want to showcase our users’ gorgeous photos. When people search for photos on SmugMug, they want to see photos, not a bunch of pagination links and other user interface clutter. So, a few months ago, we launched a redesigned Search page that does just that.

SmugMug search results for "sunset"

We put those big gorgeous photos front and center, and we got rid of the ugly pagination links in favor of infinite scrolling—as you scroll down, more results are loaded automatically. This looks great and works well, but keeping the interface fast and responsive presented a lot of challenges, especially on older browsers or slower computers.

Performance Problems

The first issue we faced is that executing JavaScript as the user scrolls—which is necessary in order to implement infinite scrolling—can really bog things down if you’re not careful, especially when the browser has to render all those big beautiful images. The solution was conceptually simple: just do less stuff as the user scrolls! In practice, though, that’s not always as simple as it sounds.

Behind the scenes, the search results are loaded into a YUI Model List via XHR and rendered using YUI Views. As the user scrolls down, more results are loaded automatically, appended to the list, and rendered.

In the initial version of the search page, both a Model instance and a View instance were created for each image—a classic MVC approach. This made logical sense and made the images easy to work with in code, but it really sucked from a performance perspective. Creating all those Model and View instances as the user scrolls meant that not only was the browser trying to load and render lots of images, it was also having to execute lots of complex JavaScript at the same time. Even in modern browsers on fast machines, this could be a burden.

Even worse, as the user scrolled further and further down and more results were loaded, memory usage skyrocketed. With potentially thousands of results on the page at once and a Model and View instance for each, users without lots of RAM sometimes saw things grind to a halt as the browser was forced to rely on virtual memory. This clearly wasn’t a good experience.

Performance Improvements

To improve things, we dug deep into each of these performance issues, profiled the hell out of everything (Google Chrome’s CPU and Heap profilers were a godsend for this), and then refactored every line of JavaScript on the search page based on our findings.

Here are some of the things we did to speed stuff up:

  • We wrote a new LazyModelList class that extends YUI’s ModelList and makes it possible for the list to store and manipulate plain JavaScript objects rather than fully-instantiated Model instances. Plain objects are much cheaper to create and work with, both in terms of CPU overhead and memory usage, and LazyModelList makes it easy to revive a simple object into a full Model instance as needed, saving the work of doing it up front for every item. We contributed LazyModelList back to YUI, and it’s now available for everyone to use in YUI 3.6.0.

  • Instead of creating a View instance for each image tile, we now use a single master View instance for the entire list of results. As new results are added to the page, the master result view is responsible for rendering those new results without re-rendering any of the existing results on the page. Now, even when there are thousands of images on the page, there’s just a single view managing them all.

  • We wrote a new ScrollInfo plugin for YUI’s Node component, which provides a highly efficient, throttled wrapper around the browser’s native scroll event. Since the scroll event can fire hundreds of times per second, throttling ensures that our event handlers only run, say, once every 50 or 100 milliseconds rather than on every single event. This puts less of a burden on the browser and ensures that more system resources are available to render content and keep the page feeling responsive as the user scrolls. This plugin isn’t yet available as part of YUI, but we’ve sent a pull request and we hope they’ll accept it. (A bare-bones illustration of the throttling idea follows this list.)

  • Our profiling revealed several memory leaks in YUI’s event system, which we were able to work around to improve memory usage even above and beyond our other improvements. The YUI team is aware of these issues. Some have already been fixed in YUI 3.6.0, and others will be addressed in an upcoming release.

  • Naturally we also took the opportunity to make lots of other minor improvements and fix lots of little bugs, but those weren’t directly related to the performance effort.
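To illustrate the throttling idea mentioned above, here is a bare-bones version using a plain DOM scroll listener. This is not the ScrollInfo plugin itself (which does quite a bit more), and the 100ms interval is just an example:

// Run the expensive scroll work at most once every 100ms.
var lastRun = 0;
var THROTTLE_MS = 100;

window.addEventListener('scroll', function () {
    var now = Date.now();
    if (now - lastRun < THROTTLE_MS) {
        return;                     // too soon; skip this event
    }
    lastRun = now;
    // expensive work goes here: measure scroll position, load more results, etc.
}, false);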

Benchmarks & Pretty Graphs

Here are some pretty graphs demonstrating the effect of the performance improvements we made. Results were gathered using Google Chrome’s profiling tools on a Mac Pro (2.8 GHz Quad-Core) running OS X 10.8.

We also created a jsPerf test for LazyModelList to demonstrate how much faster it is than ModelList.

That Ain’t All

It took some work, but now we’ve got a gorgeous search page that performs well even in older browsers and on slower machines. We’re pretty happy with it, and we hope you are too. But that’s not all! We’ve still got plenty more improvements planned, so keep your eyes peeled.

— Ryan Grove and Brian Strong, SmugMug Sorcerers

Using HTML5’s Fullscreen API for Fun and Profit

June 6, 2012

For the past few weeks I’ve been working on a new super magical awesome feature that involves using the new HTML5 Fullscreen API. As with most brand spankin’ new web APIs, its support and implementation varies per browser. I think it’s worth the effort considering how freaking awesome it is to do fullscreen web apps.

The Basics

OK, let’s get started with the basics of how this new API works. Via the JavaScript function requestFullscreen you tell the browser you want a specific HTML DOM element to fill the entire screen with no browser chrome displayed.

var myNode = document.querySelector("#myFullscreenNode");
myNode.requestFullscreen();

This is not the normal fullscreen mode that many browsers have, where the browser’s viewport is simply stretched to the edges of the screen and the browser chrome is hidden. As far as I know that type of fullscreen mode is not standardized and is not accessible via JavaScript.

Currently Firefox, Safari and Chrome support the fullscreen API. But of course each implements it slightly differently, which is exactly why I’m writing this article and you’re reading it.

Getting started

According to the W3C specification, the first thing you should do is determine if the browser supports the fullscreen API and is currently in a state where it’s safe to go fullscreen. This is achieved via the `fullscreenEnabled` property on the document object. If the property exists and is true this means you can request the browser’s fullscreen mode. (Note the terminology: request. There’s no guarantee it will always work so don’t expect it to.)

You want to use this flag (if available) because a browser may support the fullscreen API but be in a state where it can’t go fullscreen (still loading content, a browser preference pane may be focused, etc).

To determine if fullscreen mode is available, check the .fullscreenEnabled property on the document object like this:

if(document.fullscreenEnabled) {
	var myNode = document.querySelector("#myFullscreenNode");
	myNode.requestFullscreen();
} else {
	dont();
}

Currently only Firefox has this property on the document object as ‘mozFullScreenEnabled’ (note the capitalization), so it’s not worth relying on unless you really want to adhere to a draft spec.

The easier way to check if a browser supports the fullscreen API is to create a test HTML Node object and check if it has the requestFullscreen function on it:

var testNode = document.createElement('div');

if(testNode.requestFullscreen) {
	var myNode = document.querySelector("#myFullscreenNode");
	myNode.requestFullscreen();
} else {
	//Fail
}

The above snippet is the spec format; in Firefox use .mozRequestFullScreen and in Chrome/Safari use .webkitRequestFullScreen (again, note the capitalization!).

Are we there yet?

Let’s assume we have a browser that supports fullscreen mode. We can just call requestFullscreen() on the DOM Node we specify and we’re golden, right? Wrong! Just because we call the function doesn’t mean we’re guaranteed to go fullscreen. The user could press the Escape key during the transition to fullscreen or something could occur in the browser itself where it needs to abort. This is where listening for the events ‘fullscreenchange’ and ‘fullscreenerror’ is helpful (both are available prefixed in Firefox and Chrome, fullscreenerror is not available in Safari).

These events are fired on the document object, not on the node that was requested to go fullscreen. Adding to our code snippet above we get this:

var testNode = document.createElement('div');

if(testNode.requestFullscreen) {
	document.onfullscreenchange = function(event) {
		//Fullscreen mode has changed
	}

	document.onfullscreenerror = function(event) {
		//Error!
	}

	var myNode = document.querySelector("#myFullscreenNode");
	myNode.requestFullscreen();
}

Again, the above code is per the spec, for Firefox and Chrome/Safari use ‘onmozfullscreenchange’ and ‘onwebkitfullscreenchange’, respectively.

Given there’s a fullscreen change event object you’d assume that it will tell you which mode the browser is currently in, right? Wrong! You can’t tell which mode the browser is in from the event fired. Luckily there is a document property to determine which mode the browser is in. (Are we having fun yet?!)

To determine which mode the browser is in check the ‘fullscreenElement’ property of the document object. If this property is not null the browser is in fullscreen mode (and the value is the DOM node that is fullscreen). Firefox, Chrome and Safari all support this property (namespaced).

if(document.fullscreenElement) {
    //Yay, we're fullscreen!
}

Checking for errors

Even if we’ve checked the ‘fullscreenEnabled’ and ‘fullscreenElement’ properties before we request fullscreen mode, it’s still possible that the browser will deny our request. When this happens the browser will fire a ‘fullscreenerror’ event on the document object.

This can happen if there’s a user preference, security risk or platform limitation regarding fullscreen mode. Fullscreen mode is also only triggerable via user input (click, key press, etc), so if it is requested outside of those events the fullscreenerror event will be fired.
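In practice that means kicking off the request from something like a click handler (the element IDs here are just examples):

document.querySelector('#goFullscreen').addEventListener('click', function() {
	var myNode = document.querySelector("#myFullscreenNode");
	myNode.requestFullscreen();   // or the prefixed variant for your browser
}, false);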

var testNode = document.createElement('div');
if(testNode.requestFullscreen) {
	document.onfullscreenerror = function(event) {
		//Error!
	}
}

Firefox and Chrome support the onfullscreenerror events (prefixed), Safari does not.

All together now

Combining all our code examples above we get the following:

if(document.fullscreenEnabled) {
	document.onfullscreenchange = function(event) {
		if(document.fullscreenElement) {
			//We are fullscreen! Rejoice!
		} else {
			//We're not fullscreen :(
		}
	}

	document.onfullscreenerror = function(event) {
		//Something went wrong...
	}

	var fullscreenNode = document.querySelector("#myFullscreenNode");
	fullscreenNode.requestFullscreen();
}

Unfortunately none of this code will work in any current browsers. A lot of conditional logic is needed to determine which set of fullscreen APIs are available (Firefox, Chrome and Safari all differ).

Fortunately for you, I’ve wrapped it all up into a convenience function that will return the correct set of events and fullscreen function for the browser or false if the current browser does not have fullscreen support:

function FullScreenSupport() {
    var TEST_NODE = document.createElement('div'),
        REQUEST_FULLSCREEN_FUNCS = {
            'requestFullscreen': {'change':'onfullscreenchange',
                                  'request':'requestFullscreen',
                                  'error':'onfullscreenerror',
                                  'enabled':'fullscreenEnabled',
                                  'cancel': 'exitFullscreen',
                                  'fullScreenElement':'fullscreenElement'
            },
            'mozRequestFullScreen':{'change':'onmozfullscreenchange',
                                    'request':'mozRequestFullScreen',
                                    'error':'onmozfullscreenerror',
                                    'cancel': 'mozCancelFullScreen',
                                    'enabled':'mozFullScreenEnabled',
                                    'fullScreenElement':'mozFullScreenElement'
            },
            'webkitRequestFullScreen':{'change': 'onwebkitfullscreenchange',
                                       'request': 'webkitRequestFullScreen',
                                       'cancel': 'webkitCancelFullScreen',
                                       'error': 'onwebkitfullscreenerror',
                                       'fullScreenElement': 'webkitCurrentFullScreenElement'
            }
        },

        fullscreen = false;

        for(var prop in REQUEST_FULLSCREEN_FUNCS) {
            if(REQUEST_FULLSCREEN_FUNCS.hasOwnProperty(prop)) {
                if(prop in TEST_NODE) {
                    fullscreen = REQUEST_FULLSCREEN_FUNCS[prop];
                    //Still need to verify all properties are there as
                    //Chrome and Safari have different versions of Webkit
                    for(var item in fullscreen) {
                        if(!(fullscreen[item] in document) &&
                            !(fullscreen[item] in TEST_NODE)) {
                            delete fullscreen[item];
                        }
                    }
                }
            }

            if(fullscreen) {
                break;
            }
        }

        return fullscreen;
}

It ain’t pretty, but it does work. The function will return false if the browser doesn’t support fullscreen mode. Also note that just because a browser supports fullscreen mode doesn’t mean every function and property related to it is available; make sure the one you need is in the object FullScreenSupport returns.
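As a quick example of putting it to use (checking first that the pieces you need survived the feature check):

var fs = FullScreenSupport();

if (fs && fs.request) {
	if (fs.change) {
		document[fs.change] = function(event) {
			if (fs.fullScreenElement && document[fs.fullScreenElement]) {
				//We are fullscreen!
			}
		};
	}

	var myNode = document.querySelector("#myFullscreenNode");
	myNode[fs.request]();   // e.g. mozRequestFullScreen or webkitRequestFullScreen
}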

Styling it all

Thought you were done? :)

Along with the long list of JavaScript functions for fullscreen mode, there’s a little bit of CSS styling that is applied to the element that is shown fullscreen. According to the spec, an element that is fullscreen gets this CSS:

  position:fixed;
  top:0; right:0; bottom:0; left:0;
  margin:0;
  box-sizing:border-box;
  width:100%;
  height:100%;
  object-fit:contain;

Firefox applies this by default along with background-color: black; Safari and Chrome do not apply the width and height properties. To make Chrome and Safari match the spec and Firefox’s behavior, you can use the (prefixed) :fullscreen pseudo class to apply these styles:

#myFullscreenNode:-webkit-full-screen { /* webkit prefix */
  width:100%;
  height:100%;
  background-color: black;
}

Combine that with the FullScreenSupport function and you’ll have a relatively easy to use fullscreen API in three browsers! Also, if you happen to know anyone on the IE team please let them know they should implement it!


Categories: HTML5 Tags: , ,

Deriving JSON types in Go

April 6, 2012

At SmugMug, I am currently writing code in Go to support a proxy to a remote service that formats messages in JSON. A good strategy in Go is to create a type that matches the shape of the expected data. Below is a trivial example of matching JSON of a known shape to a native type:

package main

import (
        "encoding/json"
        "fmt"
)

type MyJSONType struct {
        A int
        B string
}

func main() {
        s := "{\"A\":123,\"B\":\"hello\"}"
        var mjt MyJSONType
        um_err := json.Unmarshal([]byte(s),&mjt)                                  
        if um_err == nil {
                fmt.Printf("%v\n",mjt)                                            
        }                                                                         
}

Using a technique like this, we can use robust native typing to help us detect when the JSON doesn’t match its expected shape.

But what do we do when the JSON has a shape that varies? Typically in Go we utilize interface {} to match structures of an unknown shape, and let Go automatically create an opaque data structure. But what if we want to derive more useful native types? Consider some variations in JSON that show up in messages seen from our remote service:

{"S":"a string"}

{"N":"123456"}

{"SS":["a string","another string"]}

{"NS":["123","456"]}

Go has powerful value inspection mechanisms for helping us determine how to get these bits into native types, and we can also use some intuition – we know that when we see a key of either SS or NS, we have lists of interface {} values. Using what we know, four functions emerge to help derive native types:

// take an interface{} string and turn it into a real string
func To_S(i interface{}) (string,error) {
        i_str,ok := i.(string)
        if !ok {
                e := fmt.Sprintf("cannot convert %v to string\n",i)
                return "", errors.New(e)
        }
        return i_str,nil
}

// take an interface{} and turn it into a real int64
func To_N(i interface{}) (int64,error) {
        i_int64,ok := i.(int64)
        if !ok {
                e := fmt.Sprintf("cannot convert %v to int64\n",i)
                return 0, errors.New(e)
        }
        return i_int64,nil
}

// take an interface{} string and turn it into a list of real strings.
// also return a list of error elements if there were any
func To_SS(i interface{}) ([]string,[]string,error) {
        i_int,ok := i.([]interface {})
        if !ok {
                e := fmt.Sprintf("cannot convert %v to []interface {}\n",i)
                return nil,nil, errors.New(e)
        }
        var i_str_list []string
        var error_list []string
        for _,v := range i_int {
                i_str,ok := v.(string)
                if ok {
                        i_str_list = append(i_str_list,i_str)
                } else {
                        e_str := fmt.Sprintf("%v",v)
                        error_list = append(error_list,e_str)
                }
        }
        if len(error_list) > 0 {
                return i_str_list,error_list,errors.New("some conversion errs")
        }
        return i_str_list,error_list,nil
}

// take an interface{} int64 and turn it into a list of real int64s.
// also return a list of error elements if there were any
func To_NS(i interface{}) ([]int64,[]string,error) {
        i_int,ok := i.([]interface {})
        if !ok {
                e := fmt.Sprintf("cannot convert %v to []interface {}\n",i)
                return nil,nil, errors.New(e)
        }
        var i_int64_list []int64
        var error_list []string
        for _,v := range i_int {
                i_int64,ok := v.(int64)
                if ok {
                        i_int64_list = append(i_int64_list,i_int64)
                } else {
                        e_str := fmt.Sprintf("%v",v)
                        error_list = append(error_list,e_str)
                }
        }
        if len(error_list) > 0 {
                return i_int64_list,error_list,errors.New("some conversion errs")
        }
        return i_int64_list,error_list,nil
}

Now we have native types that preserve the safety of the rest of our system.
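For completeness, here is one way these helpers might be wired up. This is a sketch, not the proxy’s actual dispatch code; it assumes the To_* functions above live in the same package (with the errors package imported):

package main

import (
        "encoding/json"
        "fmt"
)

func main() {
        s := "{\"SS\":[\"a string\",\"another string\"]}"
        var generic map[string]interface{}
        if err := json.Unmarshal([]byte(s), &generic); err != nil {
                fmt.Println(err)
                return
        }
        // dispatch on the key we find; a real proxy would handle S, N, SS and NS
        if v, ok := generic["SS"]; ok {
                ss, bad, err := To_SS(v)
                fmt.Println(ss, bad, err)
        }
}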

submitted by Brad Clawsie

Categories: Uncategorized