
Re: Archiving Websites



speeves speeves wrote:
> But, I would contend with your comment about using HTTP calls to pull the pages
> from a web site.  As many sites are now using server-side, dynamically
> generated content, you will lose all of the coding that is relevant to
> the page, and only receive the generated HTML.  This is not an accurate
> representation of the site.  I assume, as an archivist, you want the
> purest form of the site, and that brings up another question.  The
> repository for holding the archived sites will have to have all of the
> capabilities of the original web server.

I would say that this isn't necessarily true.  Many of the capabilities
of the server are simply there to facilitate the management of different
chunks of pages, what is often currently called "content management."
For example, Microsoft's Active Server Pages (ASP), Cold Fusion or
server-side includes on Apache allow the Web administrator/developer to
make changes to the site much more easily, since a footer/header or some
other chunk of text can just be changed once and that change will then
be applied to all pages on the site the next time someone visits it.
All of these chunks can sit on the server in separate places and then
only come together as an entire HTML page once the URL of the page is
requested through an HTTP call (from a browser, user agent, search
engine crawler or whatever).  While this dynamic generation of pages is
very convenient for short-term management, it's a nightmare for someone
concerned with preservation, since (as you suggest) maintaining the
pages requires keeping all of the correct server software and files in
working order.
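
To make that concrete, here is a minimal sketch (in Python, and not based
on any particular product's implementation) of the kind of assembly an
SSI-style server performs.  The file names are hypothetical; the point is
simply that only the assembled result ever reaches the browser:

# A minimal sketch of server-side assembly: the page on disk is only a
# fragment with include directives, and the complete HTML that a browser
# receives is produced at request time.  File names are hypothetical.
import re

def expand_includes(template_path):
    """Return the complete HTML page a client would receive."""
    with open(template_path, encoding="utf-8") as f:
        html = f.read()
    # Replace each <!--#include file="..." --> directive with the
    # contents of the named chunk (header, footer, navigation, etc.).
    def _insert(match):
        with open(match.group(1), encoding="utf-8") as chunk:
            return chunk.read()
    return re.sub(r'<!--#include file="([^"]+)"\s*-->', _insert, html)

# Changing footer.html once changes every page assembled from it,
# but only the assembled result is what the browser sees.
print(expand_includes("index.shtml"))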

A much better approach for creating long-term preservation copies of
sites is to save as many files as possible as static HTML pages.  The
manual version of this approach would be simply visiting all of the
pages on a site, using the File | Save menu to save the base HTML
files to the repository, and then right-clicking on each image in
the page and saving it to the appropriate directory in the
repository.  Of course, this would be extremely labor-intensive, and
there's lots of software out there to do such capturing automatically
simply by pointing the software at the appropriate URLs.
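
For illustration, here is a rough Python sketch of what such automated
capture boils down to.  The URL and output directory are made up, and
real capture tools (wget, HTTrack and the like) handle recursion, link
rewriting and error handling much more carefully than this:

# A minimal sketch of automated capture: fetch a page over HTTP, save
# the HTML exactly as served, and pull down the images it references.
import os
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class ImageLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.images = []
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.images.append(src)

def capture(page_url, out_dir="capture"):
    os.makedirs(out_dir, exist_ok=True)
    html = urllib.request.urlopen(page_url).read()
    with open(os.path.join(out_dir, "index.html"), "wb") as f:
        f.write(html)                      # the page exactly as served
    parser = ImageLister()
    parser.feed(html.decode("utf-8", errors="replace"))
    for src in parser.images:
        img_url = urljoin(page_url, src)   # resolve relative references
        name = os.path.basename(urlparse(img_url).path) or "image"
        with open(os.path.join(out_dir, name), "wb") as f:
            f.write(urllib.request.urlopen(img_url).read())

capture("http://www.example.org/")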

Such an approach can lose the interactive properties of some pages,
which is exactly why I suggested earlier that many Web captures will be
greatly facilitated by making direct arrangements with the producing
institutions or individuals.  When it comes to dynamically generated
content, I think there are three very important things to keep in mind:

- Many pages that are generated "dynamically" - i.e. based on separate
chunks of content being assembled together at the time of request rather
than sitting on the server as entire HTML pages - are still really just
static pages once they're put together for the user.  If the only thing
the server is doing is inserting an image, footer, some meta tags or
whatever, none of that content will be lost if the HTML page that's
received by browsers is saved.

- Even with truly interactive sites, not all original functionality will
be necessary for future users.  In the archival literature, there's a
long-standing distinction between primary and secondary users.  In the
case of a Web site, we could think of the primary users as those who
visit the site in order to carry out the activity for which it was
originally intended.  Secondary users, on the other hand, would be those
who want to find out something about an old site but do not necessarily
intend to carry out the activity for which it was originally designed.
A hundred years from now, someone may want to know that the Amazon
site had a "shopping cart" feature, but (assuming Amazon is out of
business by then), it may not provide much value to maintain the actual
functionality of that shopping cart.  A more extreme example of this
could be the preservation of a site like Yahoo!  It might be interesting
to look back at and navigate around today's version of Yahoo! a hundred
years from now, but allowing future users to actually query Yahoo! and
get back the exact same results as they would today would require a
great deal of work, including copying of massive databases owned by
other companies.

- Finally, cost-benefit analysis will often require us to abandon some
of the original site's functionality, even though we'd ideally like to
preserve it all.  If there's some property of a site that can only be
preserved by continuing to maintain some very specific, proprietary
server-side software, then the cost of preservation may not be
justified, unless the value of that property for future users is
estimated as being very high.  Of course, some of the functionality of
the server-side software could be translated into a more open version
(e.g. writing and sharing some Perl scripts that do the same thing), but
this will also require resources.  In some cases, those resources will
be justified, particularly if it seems that the tools developed can be
used by other archivists with similar materials.  But in some other
cases we'll just have to sacrifice some bells and whistles in order to
adopt a viable preservation strategy.

> Therefore, the archive must be
> able to generate ASP, PHP, ColdFusion, JSP, etc...  This would be very
> difficult, even for a webserver admin such as myself ;)  Therefore, it might
> be more beneficial for a group of archives to gather together to host
> the content that they have the knowledge about.  (ie UNT hosts a
> ColdFusion-based archive of web sites, and TAMU hosts an ASP archive of
> web sites, but we work together as a group to maintain the archives as a
> whole.)

Certainly collaboration will be essential, both for developing the
repositories that hold the preservation copies and for developing the
access systems.  The access systems will probably often be based on
"ASP, PHP, ColdFusion, JSP, etc." but I would again argue that the
particular server setup for serving the access copies of old Web pages
need not necessarily resemble the exact setup of the servers on which
they originally resided.

> Good question!  Do we resolve the A HREF's to the originating machine,
> or do we leave them as is, and use the same filestructure on our hosting
> machines to replicate the original format.  (Or, pay the price of broken
> links!)  Hmmm....   Looks like this is getting more and more complicated
> :)

The resolution of links is an issue with many online materials.  It
seems that the two simplest approaches are either to change the HREF
text within the files themselves, or to use an external index and
software tool to resolve the original references to their appropriate
locations.  The latter is probably much more viable over time, since
collections (particularly distributed ones) will often be unable to
depend on the persistence of a system-specific file reference for very
long.  There are several interesting approaches currently available for
persistent links and identifiers.  Some of them can be found at:
http://www-personal.si.umich.edu/~calz/ermlinks/stan_rid.htm
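
As a toy illustration of the second approach (the external index), here
is a Python sketch in which the archived HTML is left untouched and a
separate table maps original references to their locations in the
repository.  The entries and paths are invented for the example:

# A minimal sketch of the "external index" approach: original HREFs stay
# as they were captured, and a lookup table points to the archive copies.
link_index = {
    "http://www.example.org/about.html": "/repository/example-org/2001/about.html",
    "http://www.example.org/logo.gif":   "/repository/example-org/2001/logo.gif",
}

def resolve(original_href):
    """Return the repository copy for an original reference, if we hold one."""
    try:
        return link_index[original_href]
    except KeyError:
        return None   # an external link we never captured; it may simply break

print(resolve("http://www.example.org/about.html"))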

The CEDARS project, which I mentioned earlier, also has a proposal that
they call the CEDARS reference ID (CRID).  Several other digital
preservation projects have proposed their own mechanisms for naming and
locating files.

In general, there must be some way of providing an identifier that is
unique within a given domain, and then some way of registering and
identifying all of the existing domains.  With reference linking in
electronic publications, the picture gets more complicated, since
there's the issue of ensuring that a user is provided with the
"appropriate copy" of a digital object, based on the rights and services
available through his/her own host institution.  There will also be lots
of external links that break over time, since we can't control all the
naming practices of sites maintained on the Web that are not preserved
in our repositories.
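
A very rough Python sketch of that two-level idea, with invented domain
names and resolver addresses: an identifier only has to be unique within
its own naming domain, and a registry records who answers for each domain:

# A minimal sketch of domain-scoped identifiers plus a domain registry.
domain_registry = {
    "cedars":   "http://resolver.cedars.example/",    # e.g. a CRID-style resolver
    "umich-si": "http://resolver.si.umich.example/",
}

def resolve_identifier(identifier):
    """Split 'domain:local-id' and hand the local part to that domain's resolver."""
    domain, local_id = identifier.split(":", 1)
    base = domain_registry[domain]      # fails loudly if the domain is unregistered
    return base + local_id

print(resolve_identifier("umich-si:web/1999/archives-list/0042"))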

> The tar utility is not necessarily a format, as it is a way of
> concatenating files one after the other into one file.  The files
> in the tar file retain their native format.  (ie text will remain text,
> binary will remain unreadable ;)  )

Very good point.  My reference to it as a format could be a bit
misleading, since it's not a file format like HTML, MS Word, or
whatever.  This points to why tar doesn't provide a complete answer.
There must still be software to read and interpret the files within the
tarball and/or any other filesystem being used.  For HTML or plain ASCII
text files, that probably won't be a problem.  But highly complex
formats will require more preservation effort through approaches such as:
- translating them to some standard preservation formats,
- continuously migrating them over time with each new generation of
technology,
- preserving the original byte stream and ensuring that there is
software around to read and manipulate that byte stream (through
emulation or other means), or
- some hybrid of the above.
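
To make the tar point concrete, here is a small Python sketch (the
tarball name and the list of "readable" extensions are assumptions)
showing that the bundle itself is easy to walk, while each member still
needs software that understands its particular format:

# tar only bundles the bytes; the repository still needs software that
# can recognize and interpret each member.
import tarfile

READABLE_AS_TEXT = (".html", ".htm", ".txt")

with tarfile.open("site-snapshot.tar") as archive:
    for member in archive.getmembers():
        if not member.isfile():
            continue
        data = archive.extractfile(member).read()
        if member.name.lower().endswith(READABLE_AS_TEXT):
            # HTML and plain ASCII will likely remain readable as-is.
            text = data.decode("utf-8", errors="replace")
        else:
            # Anything else is just a byte stream until some current or
            # emulated software can interpret it (migration, emulation, etc.).
            print(member.name, len(data), "bytes in an opaque format")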

In short, such preservation will take some forethought and active
planning at the time that pages are first accessioned (or in the
language of the Reference Model for an Open Archival Information System,
"ingested").
http://ssdoo.gsfc.nasa.gov/nost/isoas/ref_model.html

=========================================================
  Cal Lee
  University of Michigan
  School of Information
  Phone: 734-647-0505
  http://www-personal.si.umich.edu/~calz/
  "How can I know what I think because I forgot what I said?"
                                          - Karl Weick, 1979
