Efficient and Scalable Off-Site Backup to Amazon Glacier

By Ted Reed

The strength of IMVU is our large catalog of User-Generated Content (UGC). With more than ten million items in our virtual catalog, losing our UGC would be crippling to our business.  We in the Operations Team take the preservation of this data seriously. We recently upgraded our aging UGC backup system to use Amazon Glacier as the storage medium. This post will briefly explain the old backup system before detailing the new one.

In The Beginning

We store our UGC in a MogileFS instance, a system for cheap and efficient storage of files across commodity hardware. Our offsite backups originally took the form of USB drives onto which a process would copy the files as they were written to Mogile. As each drive filled up, we would then transport it from our colocation facility to a fire safe in our office. To cover the period of time when the disk was still being written to, we would keep a synced copy of the disk on a machine in a server closet in our offices. This copy would then be deleted once the USB disk had been safely stored.

As time went on, the rate at which our customers were adding photos or uploading products to our catalog grew and grew. We also started permitting higher-quality photos and made it easier to upload and manage them. To keep up with that growth, we started buying larger and larger USB drives. This past summer we reached the point where we had the biggest USB drives we could reasonably get, and we were still filling them up in about a week. Each time a drive filled up, one of us had to drive out to the facility to retrieve it.

Auditing the data also became woefully inconvenient, as the number of individual drives one had to juggle during the audit skyrocketed. It was an annoyingly manual process, even with helper scripts to manage everything but plugging and unplugging drives. And annoyingly manual processes tend not to get done as often as they ought to.

Enter Amazon Glacier

We spent some time looking over options for better off-site backups. We talked to many vendors, and even for those who could handle our “monumental amount of data” (an actual term used by a vendor who shall remain nameless), the quotes left us wondering whether we might not be better off just putting some cheap servers in another data center somewhere.

While we were evaluating these options in late August 2012, Amazon announced Glacier, an “extremely low-cost storage service” intended for long-term archival of infrequently-accessed data. At just one cent per gigabyte per month, it handily beat out every other vendor we’d seen in terms of price. We poked around and tested the service out, and found it to be right up our alley, although the pricing was annoyingly confusing. (I ended up writing a small Haskell program to re-implement the math from their FAQ. It rounds in weird places.)

The Backup Process

The backup process begins with a Perl daemon which manages worker pools of Fetchers and Uploaders. The master process figures out which files need to be backed up and hands work off to the Fetchers, which talk to Mogile and pull the file down to a local directory. Once a directory has about 25 MB of files, it’s closed off and handed to an Uploader. The Uploader will then tar the directory and send the tar to Glacier.
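The daemon itself is Perl and specific to our infrastructure, but the bundle-and-upload step can be sketched roughly as follows. This is a Python illustration, not our actual code; the vault name, the staging-directory layout, and the way the 25 MB threshold is checked are all assumptions made for the sake of the example.

```python
import os
import tarfile
import boto3  # boto3's Glacier client, purely for illustration

BUNDLE_THRESHOLD = 25 * 1024 * 1024   # roughly 25 MB of fetched files per archive
VAULT_NAME = "ugc-backup"             # hypothetical vault name

glacier = boto3.client("glacier", region_name="us-east-1")

def staging_dir_size(path):
    """Total bytes of the files a Fetcher has pulled into this directory so far."""
    return sum(os.path.getsize(os.path.join(path, f)) for f in os.listdir(path))

def upload_bundle(path):
    """Tar a closed-off staging directory and push it to Glacier as one archive."""
    tar_path = path.rstrip("/") + ".tar"
    with tarfile.open(tar_path, "w") as tar:
        tar.add(path, arcname=os.path.basename(path))
    with open(tar_path, "rb") as body:
        response = glacier.upload_archive(vaultName=VAULT_NAME, body=body)
    # The returned archive ID (plus region and vault) is what gets recorded in
    # the database so the bundle can be located again at restore time.
    return response["archiveId"]

def maybe_hand_off(path):
    """Close the directory off and upload it once it crosses the size threshold."""
    if staging_dir_size(path) >= BUNDLE_THRESHOLD:
        return upload_bundle(path)
    return None
```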

It’s worth pointing out here that the 25 MB bundling is actually pretty important if you don’t want to pay a great deal of money. Glacier charges five cents per thousand uploads. My first tests ignored that cost and we pretty quickly ran up an unexpectedly large bill. After some analysis, we found that 25 MB was about right as a middle point between cost and flexibility. This middle point is likely to differ for your use case and budget.

The state of the backup process is stored in a MySQL database, which has tables for archives and the files that go into them. We record roughly the same information that Mogile does about each file so that we can accurately restore it if the file is accidentally deleted from Mogile. For each archive, we store both its size as well as information about where to find it (region/vault/id). We also have tables to record information about backup failures for later investigation.
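To make the bundle-size tradeoff above concrete, here is the back-of-the-envelope arithmetic, using only the prices quoted in this post (one cent per gigabyte-month of storage, five cents per thousand uploads):

```python
STORAGE_PER_GB_MONTH = 0.01    # dollars per GB per month, Glacier's advertised rate
COST_PER_1000_UPLOADS = 0.05   # dollars per thousand upload requests

def request_cost_per_gb(bundle_mb):
    """One-time upload-request cost for 1 GB of data split into bundle_mb archives."""
    requests_per_gb = 1024.0 / bundle_mb
    return requests_per_gb * COST_PER_1000_UPLOADS / 1000.0

for bundle_mb in (1, 5, 25, 100):
    cost = request_cost_per_gb(bundle_mb)
    months = cost / STORAGE_PER_GB_MONTH
    print(f"{bundle_mb:>4} MB bundles: ${cost:.4f}/GB in request fees "
          f"(~{months:.1f} months of storage)")
```

At 1 MB bundles the request fees alone cost more than five months of storage for the same data; at 25 MB they drop to a fraction of a single month’s bill, while each archive stays small enough that restoring a handful of files doesn’t mean retrieving a huge bundle.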

Restoration

The restoration process begins with a simple front-end script, which can take a source and destination. The source can be in terms of Mogile ID, Mogile Domain/Key, or one of our URLs (the latter two are dereferenced to Mogile ID). The destination can be Mogile itself, a local file, or S3. There’s also a batch mode which takes input where each line represents one source and one destination. The script will record each file to restore in the database. It will then issue requests to Glacier to retrieve the archives needed to restore the files, unless there is already an active and uncompleted request for the same archive.
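The restore script is internal, but the Glacier half of the request looks roughly like the sketch below. Again this is Python/boto3 for illustration only; the vault name, SNS topic ARN, and the idea of stashing our database ID in the job description are assumptions about how you might wire it up.

```python
import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

def request_archive_retrieval(vault_name, archive_id, restore_request_id, sns_topic_arn):
    """Ask Glacier to stage one archive for download; completion is announced on SNS."""
    response = glacier.initiate_job(
        vaultName=vault_name,
        jobParameters={
            "Type": "archive-retrieval",
            "ArchiveId": archive_id,
            "SNSTopic": sns_topic_arn,
            # Carrying the restore request's database ID in the job description is
            # one way to let the listener tie the notification back to its row.
            "Description": f"restore-request-{restore_request_id}",
        },
    )
    # Retrieval jobs take hours to complete; the job ID is what we match against
    # when the completion notification eventually arrives.
    return response["jobId"]
```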

Elsewhere in our network, we have a daemon which sits and listens to an SQS queue tied to the SNS topic for restoration. The notification will include the database ID for the restore request. The daemon will pull the information needed to do the restoration and then clear the request afterwards.
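A minimal sketch of that listener, assuming the SQS queue is subscribed to the restoration SNS topic and that the notification carries enough to identify the job (the queue URL and output path here are hypothetical):

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
glacier = boto3.client("glacier", region_name="us-east-1")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/glacier-restores"  # hypothetical

def poll_once(vault_name):
    """Handle any Glacier job-completion notifications waiting on the queue."""
    messages = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    ).get("Messages", [])
    for message in messages:
        # SNS wraps the Glacier job notification in the "Message" field of its envelope.
        notification = json.loads(json.loads(message["Body"])["Message"])
        if notification.get("StatusCode") == "Succeeded":
            job_id = notification["JobId"]
            output = glacier.get_job_output(vaultName=vault_name, jobId=job_id)
            # Save the staged archive locally; the real daemon unpacks it, writes the
            # requested files back to Mogile, a local path, or S3, and then clears
            # the restore request from the database.
            with open(f"/tmp/{job_id}.tar", "wb") as fh:
                fh.write(output["body"].read())
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```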

Auditing and Validation

One side benefit of a backup system that doesn’t involve managing disks is that we can automate testing the backups and our restoration process. There are two aspects that we want to test. First, we want to prove that the backups are intact and accurate. Second, we want to prove that we are able to restore the data. Glacier permits you to pull up to 5% of your total data per month, presumably for purposes such as these. You still need to pay bandwidth egress charges if the data leaves AWS, which informed our design for the automated audit process.

In order to audit the backups, there’s a cron on our side which runs hourly and selects a random sampling of one millionth of the archives we’ve uploaded. If we had 3,000,000 archives, we’d audit 3 per hour. For each archive, this cron pulls the files from Mogile and generates md5sums. It issues a request to Glacier, with a different SNS topic than the one we use for ordinary restoration. It then uploads the md5sums as a file to an EC2 micro instance which runs a daemon listening for that SNS topic. As the archives become available, the daemon does the same md5sum generation on the Glacier data, comparing it to the list uploaded beforehand. If there is a discrepancy, it sends an email to a mailing list that we check at least daily.
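The cron itself is tied to our MySQL schema and Mogile client, but the sampling and checksum side can be sketched like this (the function names and the shape of the inputs are illustrative, not our actual code):

```python
import hashlib
import random

SAMPLE_FRACTION = 1.0 / 1_000_000   # audit one millionth of the archives each hour

def pick_archives_to_audit(archive_ids):
    """Choose a random one-millionth sample of the archives (always at least one)."""
    sample_size = max(1, round(len(archive_ids) * SAMPLE_FRACTION))
    return random.sample(archive_ids, sample_size)

def md5sum_manifest(files):
    """Build md5sum-style lines ("<digest>  <name>") for (name, content) pairs,
    where content is the bytes of a file pulled from Mogile."""
    lines = [f"{hashlib.md5(content).hexdigest()}  {name}" for name, content in files]
    return "\n".join(lines) + "\n"
```

The same manifest is regenerated from the Glacier copy on the EC2 side once the retrieval job completes, and any line that differs means that file’s backup is suspect.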

To handle verifying the restoration path, there is a smaller monthly process. It will generate a list of 10 random files and use our normal restoration tool to restore them to test buckets in both Mogile and S3. If the files haven’t been restored within 24 hours, a notification goes to the same mailing list as for audit failures.
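A rough sketch of the two halves of that monthly check, with the restore tool’s command line, the test destinations, and the request bookkeeping all treated as hypothetical stand-ins:

```python
import random
import subprocess
from datetime import datetime, timedelta

def start_monthly_check(all_file_ids, restore_tool="ugc-restore"):
    """Restore 10 random files into throwaway test locations in Mogile and S3."""
    for file_id in random.sample(all_file_ids, 10):
        subprocess.run([restore_tool, str(file_id), "mogile://restore-test"], check=True)
        subprocess.run([restore_tool, str(file_id), "s3://restore-test-bucket/"], check=True)

def overdue_restores(requests, now=None):
    """Return restore requests still unfinished 24 hours after being issued;
    anything returned here triggers a mail to the same list as audit failures."""
    now = now or datetime.utcnow()
    return [r for r in requests
            if r["completed_at"] is None
            and now - r["requested_at"] > timedelta(hours=24)]
```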

In The End

We’re still pushing our historical data into Glacier, but this project is already looking pretty successful. We’ve been adding new data since last November, and are in the process of checking through everything to gain full trust in the system before we finally pull the plug on the old system and build a mighty throne out of the USB drives.

The backup process was mostly optimized to be able to push these historical files up to Amazon in a reasonable timeframe. As a result, the process which backs up our real-time uploads is quite fast, with plenty of room to grow as our usage scales up. Files added to Mogile are in the backup process within seconds, and are typically in Glacier in under a minute. I looked at our system during peak time for an example, and found that files were in Glacier 35 seconds after being uploaded to us by a user. Knowing this gives me great peace of mind that our UGC will be safe in the event that disaster strikes.

What about other forms of backup? We briefly looked at storing our offsite database backups in Glacier, but decided against this. Any data you put into Glacier will be billed for at least three months. Adding daily backups and then deleting some of them later makes for a rather expensive solution. There’s also little to gain as the process which moves the database backups to our office isn’t nearly as painful as our older UGC backup process was.
