We built our own solution for this by creating a plugin that works with our digital asset management system (ResourceSpace) to individually back up files to Amazon S3. Because S3 is replicated across multiple data centers, it provides a fairly high level of redundancy. And because it's an object-based web service, we can access any given object individually through a URL derived from the file's original storage path within our system.
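As a rough illustration of the path-to-URL mapping described above, here is a minimal sketch in Python. The filestore root and bucket URL are hypothetical placeholders, not the actual values from our system, and the real plugin runs inside ResourceSpace rather than as standalone code.

```python
# Hypothetical filestore root and mirror bucket -- placeholders only.
FILESTORE_ROOT = "/var/www/resourcespace/filestore/"
BUCKET_URL = "https://s3.amazonaws.com/example-dam-mirror/"

def s3_url(local_path):
    """Map a file's local storage path to its mirrored S3 object URL.

    Each backed-up file becomes an individually addressable object
    whose key mirrors its path under the filestore root.
    """
    if not local_path.startswith(FILESTORE_ROOT):
        raise ValueError("path is outside the filestore")
    return BUCKET_URL + local_path[len(FILESTORE_ROOT):]
```

For example, `s3_url("/var/www/resourcespace/filestore/1/2/abc.tif")` yields `"https://s3.amazonaws.com/example-dam-mirror/1/2/abc.tif"`, so the website can link straight to the mirrored copy.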
This also lets us take advantage of S3 for images on our website. All of the images in our online collections database are served straight from S3, which diverts load from our public web server. When we launch zoomable images later this year, all of the tiles will also be generated locally in the DAM and then served to the public from the mirrored copy in S3.
The current pricing is around $0.08/GB/month for 1-50 TB, which I think is fairly reasonable for what we're getting. They just dropped the price substantially a few months ago.
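For a back-of-the-envelope sense of what that rate means, a quick calculation (the 5 TB collection size below is just an illustrative figure, not our actual holdings):

```python
# S3 storage rate quoted above for the 1-50 TB tier.
RATE_PER_GB_MONTH = 0.08

def monthly_storage_cost(terabytes):
    """Estimate the monthly S3 storage cost for a collection of the
    given size, converting TB to GB at 1024 GB/TB."""
    return terabytes * 1024 * RATE_PER_GB_MONTH

# e.g. a 5 TB collection: monthly_storage_cost(5) -> 409.60 per month
```

Note this covers storage only; S3 also bills separately for requests and data transfer out.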
DuraCloud http://www.duracloud.org/ supposedly offers an abstraction layer so you can build something like this that is portable across different cloud storage providers, but I haven't looked into it yet.
Systems Librarian/Archivist, Historic New England
141 Cambridge Street, Boston, MA 02114
[log in to unmask]
>>> Joshua Welker <[log in to unmask]> 1/10/2013 5:20 PM >>>
We are starting a digitization project for some of our special collections, and we are having a hard time setting up a backup system that meets the long-term preservation needs of digital archives. The backup mechanisms currently used by campus IT are short-term full-server backups; what we are looking for is more granular, file-level backup over the very long term. Does anyone have recommendations for software, a service, or a technique? We are looking into LOCKSS but haven't dug too deeply yet. Can anyone who uses LOCKSS tell me a bit about their experiences with it?
Electronic/Media Services Librarian
Southwest Baptist University