
Re: Archiving Websites



Shannon Peevey wrote:
> Though true on some aspects, I would say that the most important use of
> dynamic content languages, is to dynamically pull data from a database.
> (Such as a calendar, that is constantly changing, or members list.)
> These are the reasons for the web pages, and is information that we will
> lose if we do not get a snapshot of the database.  (Therefore, the HTML
> document is not a very complete representation of the site, for our
> purposes.  Unless, of course, we are trying to archive the site for
> visual design purposes.)

You're very right that sites are increasingly relying on database-driven approaches, but I don't think
this refutes my basic point.  Archivists are most often concerned with a fixed, authentic version of a
digital object.  Take your example of a calendar.  If the site referenced a particular date at a given
point in time, it will often be appropriate to include that specific date as the content of the page
captured.  The same can be said for the members list.  If one periodically crawls and takes snapshots
of the pages that make reference to the members list, then the changes will be reflected within
the content of those captured pages.  If the database changes much more frequently than the snapshot
schedule, then obviously some of the changes made to the database in the interim will not be reflected.
If that's an issue, then the schedule needs to be sped up or someone needs to take snapshots of the
database separately.

I think the vital point here is that we're talking about snapshots, which represent the site (or parts
of the site) at given points in time.  I would agree that there are three reasons why simply crawling
the site and making hypertext transfer protocol (HTTP) calls might be a problem with heavily
database-driven content:

- The previously mentioned concern about snapshots not being undertaken frequently enough, so that
many of the changes to the underlying database are lost.  Note, of course, that this is always a
problem when dealing with the retention of database content.  Ideally, archivists would like database
software and administrators to provide detailed documentation of the history of all field values and
the changes to those values.  This is feasible to do, but it is generally not done.  The result is
that we are left with whatever snapshots of the database we can get, sadly often only the final state
of the database when it is finally abandoned by the organization.  Building components into database
administration so that it reflects good electronic recordkeeping is an important area, but it is not
unique to database-driven sites, and we won't always have control over this situation.

- Software that crawls sites often has trouble with database-driven content.  For example, some
software simply won't follow URLs that contain funky characters like the ? (generally indicating a
query to a database).  We need to make sure that the software we're using gets the right pages and
doesn't get trapped in endless loops of dynamic content (see the sketch after this list).  In some
cases, as you suggest, the best approach may be to make arrangements with the producers of the site
to get copies of components directly from their server.  It's important to note, though, that lots
of database-driven content doesn't actually
have this problem.  Check out all of the versions of the Yahoo! site that have been captured as part
of the Internet Archive:
http://web.archive.org/web/*/http://www.yahoo.com
Yahoo! is one of the earliest commercially successful examples of a database-driven site, and the
Internet Archive has captured it precisely by the sort of crawling and snapshots we're discussing.
Brewster Kahle, the founder of the Internet Archive, and his colleagues do not call up each individual
company and ask for direct FTP access to all their internal files.  They just copy the files that come
back when their software makes HTTP calls to the correct URLs.

- Finally, when the database behind the site is big and complex enough, it may simply be impractical
to try copying its contents through isolated HTTP calls.  In these cases, we may either decide that
capturing the entire database isn't really our goal anyway and just move on, or we can see if the
database manager is willing to share it with us.  The latter approach does raise many more social and
legal barriers, though.  It is probably much easier to argue that content accessed through HTTP by
visiting a site is within the public sphere and thus ripe for copying by an archives, than it is to
argue that everyone who has a database on her server is obligated to regularly share the entire
contents of that database with the archives.  There are still important intellectual property issues
to address even when HTML is being copied and then redistributed by an archives, but they are not
nearly as thorny as with the database case.  In principle, the rights to both types of material rest
exclusively with the creator.  But we have very different traditions and norms for dealing with
content that has been published directly to the Web and that which is within the four walls of a
company and considered one of its most valuable business assets.
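
To make the crawling point above a bit more concrete, here is a minimal sketch (in Python, with an
invented example.org starting address and arbitrary limits) of the kind of snapshot crawl I have in
mind.  It is not the code of any real archiving tool, just an illustration of how a simple "seen" list
and a single-host restriction let a crawler follow query-string URLs without chasing dynamic content
forever:

# crawl_snapshot.py -- a minimal sketch of snapshot crawling over HTTP.
# The starting address, output directory, and page limit are illustrative.
import os
import time
import urllib.parse
import urllib.request
from html.parser import HTMLParser

START_URL = "http://www.example.org/"            # hypothetical site to capture
SNAPSHOT_DIR = time.strftime("snapshot-%Y%m%d")  # one directory per crawl date
MAX_PAGES = 500                                  # guard against endless dynamic loops

class LinkParser(HTMLParser):
    """Collect href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def local_name(url):
    """Turn a URL (query string included) into a safe local file name."""
    quoted = urllib.parse.quote(url, safe="")
    return os.path.join(SNAPSHOT_DIR, quoted[:200] + ".html")

def crawl():
    os.makedirs(SNAPSHOT_DIR, exist_ok=True)
    host = urllib.parse.urlparse(START_URL).netloc
    seen, queue = set(), [START_URL]
    while queue and len(seen) < MAX_PAGES:
        url = queue.pop(0)
        if url in seen:
            continue                      # the "seen" set is the loop guard
        seen.add(url)
        try:
            with urllib.request.urlopen(url) as response:
                body = response.read()
        except OSError:
            continue                      # dead or unreachable link; move on
        with open(local_name(url), "wb") as f:
            f.write(body)                 # the captured page, query URLs included
        parser = LinkParser()
        parser.feed(body.decode("utf-8", errors="replace"))
        for href in parser.links:
            absolute = urllib.parse.urljoin(url, href)
            # follow links (even those with a ?) only within the target host
            if urllib.parse.urlparse(absolute).netloc == host:
                queue.append(absolute)

if __name__ == "__main__":
    crawl()

Any real tool would also have to respect robots.txt, log errors, and deal with non-HTML files, but the
basic pattern of making HTTP calls and saving whatever comes back is the same.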

This is not to say that we shouldn't be concerned about how organizations are managing their own
databases.  One of the major themes of the archival literature on electronic records is that we need to
be
more active in advising content creators and systems designers about how to adequately manage their
records.  This is an important objective, regardless of whether we take custody of the specific
materials in question or the creating institution maintains them over time.  It's quite likely that
we'll often have a hybrid situation, capturing some things for preservation by the archives and
providing guidance to creators to maintain some things themselves.

> Yeah.  Perl, or an open-source spider would be a great free alternative
> to the proprietary software out there.

There are several such tools freely available.

> Don't agree, as stated above.  And this is changing even more
> dramatically now, with more and more content being hosted as objects in
> a database.  (check out www.zope.org, and I know that the new version of
> WebCT, a distance learning tool, is going to do the same thing.  I
> predict, that static content, as we have known it, is going to have a
> limited future.)

I would still argue strongly for a distinction between dynamically GENERATED content and content that
is inherently interactive.  The former is ripe for capture as a set of fixed hypertext markup language
(HTML) pages, whereas the latter becomes more complicated from a preservation perspective.  There are
cases in which we want to maintain the original functionality of a site, which requires ongoing
interaction with software sitting on the server, but I would consider these to be the minority of
sites.  Recall again my distinction between primary and secondary use.  The idea is not to maintain
the site forever as an active set of business activities.  Instead, the idea is to capture it in a way
that allows future users to see and understand what the site contained, how it was used, and what
functions it supported.  A tool like Zope can be extremely valuable, particularly since it's
open-source software and uses a very cool object-oriented approach.  But maintaining a site based on
Zope requires the maintenance of lots of separate files, as well as an understanding of exactly how
its particular object model and syntax work.  There may be cases in which we need to keep all the
objects going, but we should try to minimize such cases, since they add considerably to the complexity
of preservation.  There are also cases in which the interactive components may be fairly
free-standing, e.g., Java applets, in a way that allows them to be ported to another system.

As you said earlier, we should definitely aim for collaboration in those cases when we do need to work
with the server and database software directly.  Any time we can come up with solutions that can be
generalized to other collections, we have saved a lot of valuable resources for the profession.  It
would be great for us to start developing repositories of tools and documents that assist in the
preservation (not just the active management) of the most common software.

> That's true.  But, what about "on this date in history?" functions?
> Such as what were the headlines on the Los Angeles Times on such date,
> or Yahoo...  Is this a function that future generations might like to
> have?

That would be great to have.  The Internet Archive is based on the concept I think you're suggesting.
Check out the WayBack Machine.  If you know the right URL (a serious constraint of the current
interface), you can try to visit that site as it existed at various dates in the past:
http://www.archive.org

The Internet Archive is not a complete copy of the Web, and one of the reasons is the database-driven
content discussed above.  Not only are there technical issues involved with getting at the "hidden
Web," but there are also big organizational and legal issues.  For example, the Los Angeles Times may
be willing to provide access to a database of all their old newspapers, but they will most likely
charge for this service.  If they don't want their articles captured by the Internet Archive or other
crawlers, they can indicate this preference in a robots.txt file.
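
For what it's worth, that exclusion mechanism is just a plain text file at the top level of the site.
A robots.txt along the following lines (the crawler that feeds the Internet Archive identifies itself
as ia_archiver; the location shown is only an illustration) asks that crawler to stay away entirely:

# http://www.latimes.com/robots.txt (illustrative location)
User-agent: ia_archiver
Disallow: /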

Many libraries are currently tackling the issues of how to manage relationships with publishers in
ways that ensure ongoing access.  In addition to licensing and long-term resource commitment, there is
also the "appropriate copy" issue, which I mentioned in a previous message.  In order to live up to
their end of the arrangements, libraries often have to ensure that publications are provided in
appropriate ways to only authorized individuals.

There are also intranets and extranets, which carry their own set of considerations.  For an
institutional archive, these can be important.  Since such networks generally sit behind a firewall,
copying files from them obviously requires special arrangements.

> But if we link to external sites, we run the risk of the external site
> going down, or changing ip/hostname.  I, personally, think that it would
> be better to run everything internally, so that we are not dependent on
> external sites, and we can administer the sites much more efficiently.
> (Basically, leaving it alone once it is up.

I'm not quite sure what you're suggesting here.  Don't you still think there needs to be some limit to
the scope of a given Web site collection?  Wherever we draw the boundaries, there will always be links
to sites outside.  We can provide an indication that the link existed, and in some cases even include
additional metadata about what the external site may have been, but there's no way to ensure the
long-term availability of every page to which a given page points.

> That's why I say plain text.  Plain text will always be used, until
> Microsoft takes over the world ;), or until we can all read binary :)
> Plain text has been used for at least thirty years in the Unix world,
> and I would venture to say, that it extends before that.  I think that
> it is the safest alternative, and it doesn't take up too much room.

I agree that it can be very helpful to use formats that have ASCII (or increasingly Unicode) as their
method for encoding the bytes, but this doesn't provide a complete solution.  HTML and JavaScript, for
example, are generally both stored in a form that's based on a string of ASCII text, but one still
needs to know how to read and interpret HTML and JavaScript in order to make use of them.  eXtensible
Markup Language (XML) objects are also just strings of ASCII or Unicode text, but making meaningful
use of them requires the ability to both parse XML and deal with the semantics, schemas, namespaces,
unique data typing, etc. involved in the objects.  ASCII is great, and it's been around for decades,
but it really only provides the lowest layer of meaning, except for the rare file that's simply stored
as a free-standing ASCII text document.  Even email, which is often just text, requires an
understanding of MIME headers, attachments, MIME encapsulation and (God forbid) all the other funky
things going on with more proprietary email client components.
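
A tiny illustration of that layering, using an invented XML fragment and Python purely for
demonstration: the bytes are plain ASCII in both steps, but only the second step recovers what the
record actually says.

# layers_of_meaning.py -- an invented fragment showing that ASCII encoding
# is only the lowest layer of interpretation.
import xml.etree.ElementTree as ET

raw = b'<member><name>Jane Doe</name><joined>1998-04-01</joined></member>'

# Layer 1: the bytes decode as ASCII, but this is just a string of characters.
text = raw.decode("ascii")

# Layer 2: parsing the XML recovers the structure and semantics of the record.
record = ET.fromstring(text)
print(record.find("name").text)     # -> Jane Doe
print(record.find("joined").text)   # -> 1998-04-01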

> (Compression is another question, as the algorithms change over time, we
> will have to decompress and recompress with the new algorithms
> occasionally, so that we can always access the text that is our document
> :) )

Yes, we need to either manage the compression algorithms or simply save files in uncompressed form for
preservation purposes.  There are many approaches.  The key is not to overlook something like
compression or encryption, since addressing them is essential to maintaining readable and
comprehensible digital objects.
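
As a rough sketch of what that kind of migration can look like in practice (assuming gzip-compressed
holdings and Python's standard library; the file names and the choice of bzip2 as the replacement
algorithm are purely illustrative):

# recompress.py -- a minimal sketch of migrating compressed holdings.
import bz2
import gzip
import shutil

OLD_FILE = "records-1999.txt.gz"   # hypothetical compressed holding
NEW_FILE = "records-1999.txt.bz2"  # same content, different algorithm

# Decompress with the old algorithm and recompress with the new one,
# streaming so that large files never sit fully in memory.
with gzip.open(OLD_FILE, "rb") as old, bz2.open(NEW_FILE, "wb") as new:
    shutil.copyfileobj(old, new)

# Or simply keep an uncompressed copy for preservation purposes.
with gzip.open(OLD_FILE, "rb") as old, open("records-1999.txt", "wb") as plain:
    shutil.copyfileobj(old, plain)

Either way, the point is that the compression layer has to be managed deliberately rather than
discovered as a surprise years later.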

=========================================================
 Cal Lee
 University of Michigan             Phone: 734-647-0505
 School of Information
 http://www-personal.si.umich.edu/~calz/
   "To the extent that archival organizations are concerned
    with current record practices and the content of future
    holdings, they cannot long delay plans for appraising,
    accessioning, and servicing this new [computer] media
    of communication."
                                  - Meyer Fishbein, 1970
