
Re: archiving websites



> I think it's important to distinguish internal backups from a variety of other capture scenarios in
> which archivists may be involved.  First, regular backups may not necessarily address the selection,
> appraisal and preservation concerns of an archivist.  Second, archivists who are collecting sites from
> the outside -- i.e. using a crawler or some other means to capture the files through HTTP (the way
> browsers get the files off the server) rather than carrying out routine file captures based on direct
> access to all the files on the server -- will often not need to worry about what server software or
> operating system is running on the host computer.  If there are databases on the server that need to
> be captured, then the underlying platform (e.g. Linux/Apache as opposed to Windows NT/Site Server) can
> be relevant, but simply harvesting a bunch of HTML files from someone else's site should generally
> (apart from a few technical details such as file naming) be the same, regardless of how those HTML
> files are being generated.  That's one of the great things about the Web: we can share files across
> numerous hardware and software environments.

Thanks for the information!  As you can tell, I am more of a techie than
an archivist, but I am enjoying learning every bit that I can :)  But I
would take issue with your comment about using HTTP calls to pull the
pages from a web site.  As many sites now use server-side, dynamically
generated content, you will lose all of the code that is relevant to
the page and only receive the generated HTML.  This is not an accurate
representation of the site.  I assume that, as an archivist, you want
the purest form of the site, and that brings up another question.  The
repository holding the archived sites will have to have all of the
capabilities of the original web server.  Therefore, the archive must be
able to run ASP, PHP, ColdFusion, JSP, etc...  This would be very
difficult, even for a web server admin like myself ;)  Therefore, it
might be more beneficial for a group of archives to band together and
host the content that they have the knowledge to support.  (e.g. UNT
hosts a ColdFusion-based archive of web sites, and TAMU hosts an ASP
archive of web sites, but we work together as a group to maintain the
archives as a whole.)  This also brings up the question of backends...
Who hosts sites that use ColdFusion with a SQL Server backend, or
ColdFusion with MySQL?  Do we make a second-level split of hosted
content, or do we run multiple backends with snapshots of the data
loaded onto them?  I do agree with you about using a spider, but again,
the spider has no way of knowing which sites are using server-side
content generators, so people relying on this method alone are not
getting an accurate picture of those sites.
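
To make the point concrete, here is a minimal sketch (in Python, with a
made-up URL) of what an HTTP harvest actually captures: only the HTML
the server sends back, never the server-side source that generated it.

    # Minimal sketch of an HTTP capture.  The URL is hypothetical.
    # The PHP/ASP/ColdFusion source that built the page never crosses
    # the wire; all the crawler can save is the generated HTML.
    import urllib.request

    url = "http://www.example.edu/news/index.php"
    with urllib.request.urlopen(url) as response:
        html = response.read()              # generated HTML only

    with open("index.html", "wb") as f:     # flat file; the server-side code is gone
        f.write(html)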

>
> Capturing pages does ultimately raise the issue of what formats to use when storing the pages in the
> digital repository.  One option is simply to store all the pages as flat files within their original
> directory structure within the filesystem of whatever computer system the archives is using.  If the
> digital repository is using a Microsoft operating system, then the underlying filesystem will be
> different from what it would be if the repository were using Linux.  The former will probably be based
> on either FAT32 or NTFS and the latter will probably be based on Ext2fs.  This underlying filesystem
> is not a concern to the average user, since both systems operate pretty much like a set of files
> within a hierarchical structure of directories or folders, but dependency on that filesystem can be a
> concern when it comes to long-term preservation.
>

Good question!  Do we rewrite the A HREFs to point to the originating
machine, or do we leave them as is and use the same file structure on
our hosting machines to replicate the original layout?  (Or pay the
price of broken links!)  Hmmm....   Looks like this is getting more and
more complicated :)
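
For what it's worth, one way to split the difference is to keep the
original directory structure on the archive machine and rewrite any
absolute links so they resolve within that mirrored tree rather than
back at the originating server.  A rough sketch (the host name is
made up):

    # Rough sketch: rewrite absolute links to the (hypothetical) original
    # host so they resolve within the archive's own mirrored file structure.
    import re

    ORIGINAL_HOST = "http://www.example.edu"

    def relativize_links(html: str) -> str:
        # href="http://www.example.edu/dept/page.html" -> href="/dept/page.html"
        pattern = r'href="' + re.escape(ORIGINAL_HOST) + r'(/[^"]*)"'
        return re.sub(pattern, r'href="\1"', html)

Leaving the links untouched gives you the purest copy, but then you pay
the broken-link price the moment the original server goes away.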

> With all of this said, I would agree with Shannon that the tar format can be a very promising option
> as a long-term preservation format.  By translating a hierarchy of files into a tar file, the
> resulting byte stream is less dependent on any underlying filesystem.  This is one of the
> recommendations laid out by the recently completed CEDARS project in their Guide to Digital
> Preservation Strategies:
> http://www.leeds.ac.uk/cedars/guideto/dpstrategies/
>
> Of course, this certainly requires that the tar format itself is still understandable over time.
> Luckily, there are lots of public domain documents and applications to ensure that the format will be
> supported for a while.  And if archivists preserve those documents and applications, the window could
> be even longer.


The tar utility is not so much a format as a way of concatenating files
one after the other into one file.  The files in the tar file retain
their native format.  (i.e. text will remain text, binary will remain
unreadable ;)  )  Therefore, for a plain-text representation, you can
tar the files into a tarball and still read them in Notepad, WordPad,
vi, or pico...  As far as the readability of plain text goes, I don't
think it will go away any time soon.  All the configuration files in
both Unix and Linux are plain text, and I will only back up my systems
using plain text, after being bitten in the b*tt by a proprietary
format.
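
Here is a quick sketch (the directory and file names are hypothetical)
of packing a harvested site into a tarball with Python's standard
tarfile module and pulling a page back out as plain text:

    # Rough sketch: bundle a harvested site into a tar archive, preserving
    # the directory structure, then read one member back out later.
    import tarfile

    with tarfile.open("site-snapshot.tar", "w") as tar:
        tar.add("harvested_site")    # hypothetical directory of captured pages

    with tarfile.open("site-snapshot.tar", "r") as tar:
        page = tar.extractfile("harvested_site/index.html")
        print(page.read().decode("utf-8", errors="replace"))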

>
> The tar format is only one (among many) of the options for the preservation copy of the Web content.
> The work of the San Diego Supercomputer Center (SDSC) with the National Archives and Records
> Administration would suggest "wrapping" the files into eXtensible Markup Language (XML) objects and
> then storing them in a very robust, scalable and extensible system (e.g. using the
> Storage Resource Broker).  One important thing that the SDSC work has in common with the CEDARS work
> is the recognition that files should not be preserved in formats that depend heavily on a proprietary
> software environment.
>
> Whatever preservation approach is adopted, there is still the access system to consider.  For speed
> and efficiency, the access version of the captured pages could reside in a much more software-specific
> environment than the preservation version.  There is also the possibility of setting up a system that
> carries out "migration on request," another suggestion from the CEDARS project.  In this case,
> translation from the preservation format to the access format would happen at the time that a file is
> requested, potentially saving a lot of intervening steps of migration along the way.
>
> The ideal approach will probably depend on things like the quantity and type of materials being
> preserved, the relationship that the archives has with the producer of the Web pages, and cost-benefit
> factors such as doing up-front work on system development as opposed to delaying it until later.  It's
> also important to consider the period of preservation you intend to aim for.  An approach that aims
> for 5 years of access is very different from one that aims for 100 years.
>
> =========================================================
>  Cal Lee
>  University of Michigan             Phone: 734-647-0505
>  School of Information
>  http://www-personal.si.umich.edu/~calz/
>    "To the extent that archival organizations are concerned
>     with current record practices and the content of future
>     holdings, they cannot long delay plans for appraising,
>     accessioning, and servicing this new [computer] media
>     of communication."
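
One more thought on the "migration on request" idea above: that is
something I can get behind as a techie, since the preservation copy
never has to be touched.  A rough sketch (the tarball and member names
are hypothetical, and the "migration" here is just a stand-in decode
for whatever format translation would really be needed):

    # Rough sketch of migration on request: the preserved bytes stay in the
    # tarball untouched, and translation into an access format happens only
    # at the moment a user asks for the page.
    import tarfile

    def fetch_for_access(member_name: str) -> str:
        with tarfile.open("site-snapshot.tar", "r") as tar:
            preserved = tar.extractfile(member_name).read()
        # Stand-in "migration": decode to text for delivery; a real system
        # would translate to whatever access format is current at the time.
        return preserved.decode("utf-8", errors="replace")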

Good stuff :)  Thanks for the reply!
--
Shannon Peevey
Central Web Support
UNT-Computing Center
speeves@unt.edu
940-369-8876

A posting from the Archives & Archivists LISTSERV List!

To subscribe or unsubscribe, send e-mail to listserv@listserv.muohio.edu
      In body of message:  SUB ARCHIVES firstname lastname
                    *or*:  UNSUB ARCHIVES
To post a message, send e-mail to archives@listserv.muohio.edu

Or to do *anything* (and enjoy doing it!), use the web interface at
     http://listserv.muohio.edu/archives/archives.html

Problems?  Send e-mail to Robert F Schmidt <rschmidt@lib.muohio.edu>