
Re: Archiving Websites



Shannon Peevey wrote:
> Your backup means depend on what operating system the web server is on.  If it
> is an apache web server on a linux platform, you can use a utility called "tar"
> to basically combine all of the files in the directory into one.

I think it's important to distinguish internal backups from a variety of other capture scenarios in
which archivists may be involved.  First, regular backups may not necessarily address the selection,
appraisal and preservation concerns of an archivist.  Second, archivists who are collecting sites from
the outside -- i.e. using a crawler or some other means to capture the files through HTTP (the way
browsers get the files off the server) rather than carrying out routine file captures based on direct
access to all the files on the server -- will often not need to worry about what server software or
operating system is running on the host computer.  If there are databases on the server that need to
be captured, then the underlying platform (e.g. Linux/Apache as opposed to Windows NT/Site Server) can
be relevant, but simply harvesting a bunch of HTML files from someone else's site should generally
(apart from a few technical details such as file naming) be the same, regardless of how those HTML
files are being generated.  That's one of the great things about the Web: we can share files across
numerous hardware and software environments.
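Just to make that point concrete, here is a rough sketch (in Python, with a made-up URL) of capturing
a single page over HTTP the way a browser does.  Nothing in the sketch knows or cares what operating
system or web server software is running on the other end:

import urllib.request

url = "http://www.example.org/index.html"   # placeholder, not a real collection target

with urllib.request.urlopen(url) as response:
    page_bytes = response.read()                    # the raw bytes, exactly as served
    print(response.headers.get("Content-Type"))     # e.g. "text/html"

# Save the captured bytes locally for later appraisal and processing.
with open("index.html", "wb") as captured:
    captured.write(page_bytes)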

Capturing pages does ultimately raise the issue of what formats to use when storing the pages in the
digital repository.  One option is simply to store all the pages as flat files, in their original
directory structure, within the filesystem of whatever computer system the archives is using.  If the
digital repository is running a Microsoft operating system, then the underlying filesystem will be
different from what it would be if the repository were running Linux.  The former will probably be
based on either FAT32 or NTFS and the latter will probably be based on Ext2fs.  This underlying
filesystem is not a concern to the average user, since both present much the same view -- a set of
files within a hierarchical structure of directories or folders -- but dependency on that filesystem
can be a concern when it comes to long-term preservation.
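To illustrate the flat-file option, here is a rough sketch (Python again; the archive root, URL and
page content are made up) of mapping a captured URL onto a path inside a local directory tree that
mirrors the original site structure:

import os
from urllib.parse import urlparse

def local_path_for(url, archive_root="captured_site"):
    """Map a captured URL onto a path within the local archive directory."""
    parsed = urlparse(url)
    relative = parsed.path.lstrip("/") or "index.html"
    return os.path.join(archive_root, parsed.netloc, relative)

path = local_path_for("http://www.example.org/about/staff.html")
os.makedirs(os.path.dirname(path), exist_ok=True)   # create the mirrored hierarchy
with open(path, "wb") as f:
    f.write(b"...captured page bytes...")

Whether those directories end up on NTFS or Ext2fs makes no difference to the script itself, but the
stored hierarchy is now tied to whatever filesystem the repository happens to be running.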

With all of this said, I would agree with Shannon that the tar format can be a very promising option
as a long-term preservation format.  Translating a hierarchy of files into a tar file yields a byte
stream that is much less dependent on any particular underlying filesystem.  This is one of the
recommendations laid out by the recently completed CEDARS project in their Guide to Digital
Preservation Strategies:
http://www.leeds.ac.uk/cedars/guideto/dpstrategies/
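As a rough illustration (using Python's standard tarfile module; the directory and archive names are
placeholders), packing a captured hierarchy into a single tar byte stream is a one-step operation:

import tarfile

# Recursively add every file under the capture directory, preserving the relative
# paths so that the original structure can be reconstructed later.
with tarfile.open("captured_site.tar", "w") as archive:
    archive.add("captured_site", arcname="captured_site")

The resulting .tar file can then be copied between filesystems and operating systems without losing
the directory structure it encodes.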

Of course, this approach requires that the tar format itself remain understandable over time.
Luckily, there are plenty of public domain documents and applications to ensure that the format will
be supported for a while.  And if archivists preserve those documents and applications themselves, the
window could be even longer.

The tar format is only one of many options for the preservation copy of Web content.  The work of the
San Diego Supercomputer Center (SDSC) with the National Archives and Records Administration suggests
"wrapping" the files into eXtensible Markup Language (XML) objects and then storing them in a very
robust, scalable and extensible storage system (e.g. the Storage Resource Broker).  One important
thing that the SDSC work has in common with the CEDARS work is the recognition that files should not
be preserved in formats that depend heavily on a proprietary software environment.
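To be clear, the following is not the SDSC/NARA approach itself -- just a generic sketch of what
"wrapping" a captured file and a bit of descriptive metadata into an XML object might look like.  The
element names, date and filename are all invented for illustration:

import base64
import xml.etree.ElementTree as ET

# Read the captured page and encode it so the bytes can travel inside XML text.
with open("captured_site/index.html", "rb") as f:
    payload = base64.b64encode(f.read()).decode("ascii")

record = ET.Element("archivedObject")
ET.SubElement(record, "originalUrl").text = "http://www.example.org/index.html"
ET.SubElement(record, "captureDate").text = "2001-11-05"
ET.SubElement(record, "content", encoding="base64").text = payload

ET.ElementTree(record).write("index.xml", encoding="utf-8")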

Whatever preservation approach is adopted, there is still the access system to consider.  For speed
and efficiency, the access version of the captured pages could reside in a much more software-specific
environment than the preservation version.  There is also the possibility of setting up a system that
carries out "migration on request," another suggestion from the CEDARS project.  In this case,
translation from the preservation format to the access format would happen at the time that a file is
requested, potentially saving a lot of intervening migration steps along the way.
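A rough sketch of what migration on request might look like in practice (Python again; the archive
and member names are placeholders): the preservation copy stays in the tar file, and an individual
page is pulled out -- and could be converted to a current access format -- only when a reader asks
for it:

import tarfile

def deliver(member_name, archive_path="captured_site.tar"):
    """Return the requested file's bytes, pulled from the preservation copy."""
    with tarfile.open(archive_path, "r") as archive:
        member = archive.extractfile(member_name)
        data = member.read()
    # A conversion step from the preservation format to whatever access format
    # is current would go here, at request time.
    return data

page = deliver("captured_site/index.html")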

The ideal approach will probably depend on things like the quantity and type of materials being
preserved, the relationship that the archives has with the producer of the Web pages, and cost-benefit
factors such as doing up-front system development work as opposed to delaying it until later.  It's
also important to consider the period of preservation you intend to aim for.  An approach that aims
for 5 years of access is very different from one that aims for 100 years.

=========================================================
 Cal Lee
 University of Michigan             Phone: 734-647-0505
 School of Information
 http://www-personal.si.umich.edu/~calz/
   "To the extent that archival organizations are concerned
    with current record practices and the content of future
    holdings, they cannot long delay plans for appraising,
    accessioning, and servicing this new [computer] media
    of communication."
                                  - Meyer Fishbein, 1970
