
Re: Archiving Websites



Christopher Albert Lee wrote:
>
> speeves speeves wrote:
> > But I would take issue with your comment about using HTTP calls to pull
> > the pages from a web site.  As many sites now use server-side, dynamically
> > generated content, you will lose all of the coding that is relevant to
> > the page and only receive the generated HTML.  This is not an accurate
> > representation of the site.  I assume that, as an archivist, you want the
> > purest form of the site, and that brings up another question: the
> > repository holding the archived sites will have to have all of the same
> > capabilities as the original web server.
>
> I would say that this isn't necessarily true.  Many of the capabilities
> of the server are simply there to facilitate the management of different
> chunks of pages, what is often currently called "content management."
> For example, Microsoft's Active Server Pages (ASP), Cold Fusion or
> server-side includes on Apache allow the Web administrator/developer to
> make changes to the site much more easily, since a footer/header or some
> other chunk of text can just be changed once and that change will then
> be applied to all pages on the site the next time someone visits it.
> All of these chunks can sit on the server in separate places and then
> only come together as an entire HTML page once the URL of the page is
> requested through an HTTP call (from a browser, user agent, search
> engine crawler or whatever).  While this dynamic generation of pages is
> very convenient for short-term management, it's a nightmare for someone
> concerned with preservation, since (as you suggest) maintaining the
> pages requires keeping all of the correct server software and files in
> working order.
>

Though true in some respects, I would say that the most important use of
dynamic content languages is to pull data from a database (such as a
calendar that is constantly changing, or a members list).  That data is
the reason the web pages exist, and it is information we will lose if we
do not get a snapshot of the database.  (Therefore, the HTML document is
not a very complete representation of the site for our purposes, unless,
of course, we are trying to archive the site for visual design purposes.)
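
For what it's worth, here is a rough sketch of the sort of database
snapshot I mean, in Perl with the DBI module.  The database name, host,
table, user and password are all made up for illustration; a real site
would have many tables, and a tool like mysqldump would do the same job
in one command, but the point is that the data behind the dynamic pages
ends up preserved as plain text:

#!/usr/bin/perl -w
# Rough sketch: dump one table of a site's database to a tab-delimited
# plain-text file, so the data behind the dynamic pages is captured.
# Database name, host, table, user and password are all hypothetical.
use strict;
use DBI;

my $dbh = DBI->connect("DBI:mysql:database=sitedb;host=localhost",
                       "archivist", "secret", { RaiseError => 1 });

my $sth = $dbh->prepare("SELECT * FROM calendar");
$sth->execute();

open(OUT, "> calendar-snapshot.txt") or die "Cannot write snapshot: $!";
# First line holds the column names, so the snapshot is self-describing.
print OUT join("\t", @{ $sth->{NAME} }), "\n";
while (my $row = $sth->fetchrow_arrayref) {
    print OUT join("\t", map { defined $_ ? $_ : "" } @$row), "\n";
}
close(OUT);
$dbh->disconnect();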

> A much better approach for creating long-term preservation copies of
> sites is to save as many files as possible as static HTML pages.  The
> manual version of this approach would be simply visiting all of the
> pages on a site, clicking on the File | Save menu and saving the base
> HTML files to the repository and then right-clicking on all the images
> in the page and saving them to the appropriate directories in the
> repository.  Of course, this would be extremely labor-intensive, and
> there's lots of software out there to do such capturing automatically
> simply by pointing the software at the appropriate URLs.
>

Yeah.  Perl, or an open-source spider, would be a great free alternative
to the proprietary software out there.
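
Just to make that concrete, here is a very rough sketch of the kind of
spider I have in mind, using the LWP::Simple, HTML::LinkExtor and URI
modules from CPAN.  The starting URL and output directory are just
placeholders, and a real crawler would also need robots.txt handling,
politeness delays, error checking and a smarter URL-to-filename mapping:

#!/usr/bin/perl -w
# Minimal single-site spider sketch: fetch pages over HTTP, save the
# generated HTML to disk, and follow links that stay on the same host.
use strict;
use LWP::Simple qw(get);
use HTML::LinkExtor;
use URI;

my $start  = "http://www.example.edu/";   # placeholder starting URL
my $outdir = "archive";                   # placeholder output directory
mkdir $outdir unless -d $outdir;

my $host = URI->new($start)->host;
my (%seen, @queue);
push @queue, $start;

while (my $url = shift @queue) {
    next if $seen{$url}++;
    my $html = get($url);
    next unless defined $html;

    # Save the page under a filename derived from its URL.
    (my $name = $url) =~ s{[^\w.-]+}{_}g;
    open(PAGE, "> $outdir/$name.html") or next;
    print PAGE $html;
    close(PAGE);

    # Extract the links and queue the ones that stay on this site.
    my $extor = HTML::LinkExtor->new(undef, $url);
    $extor->parse($html);
    for my $link ($extor->links) {
        my ($tag, %attrs) = @$link;
        next unless $tag eq 'a' && defined $attrs{href};
        my $abs = URI->new_abs($attrs{href}, $url)->canonical;
        next unless $abs->scheme eq 'http';
        $abs->fragment(undef);
        push @queue, $abs->as_string if $abs->host eq $host;
    }
}

GNU wget's mirroring mode will do much the same thing from the command
line, if we would rather not maintain our own code.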

> Such an approach can lose the interactive properties of some pages,
> which is exactly why I suggested earlier that many Web captures will be
> greatly facilitated by making direct arrangements with the producing
> institutions or individuals.  When it comes to dynamically generated
> content, I think there are three very important things to keep in mind:
>

Agreed!  We should work in conjunction with the site personnel, as they
know the site best anyway and can facilitate an accurate archive.

> - Many pages that are generated "dynamically" - i.e. based on separate
> chunks of content being assembled together at the time of request rather
> than sitting on the server as entire HTML pages - are still really just
> static pages once they're put together for the user.  If the only thing
> the server is doing is inserting an image, footer, some meta tags or
> whatever, none of that content will be lost if the HTML page that's
> received by browsers is saved.
>

Don't agree, as stated above.  And this is changing even more
dramatically now, with more and more content being hosted as objects in
a database.  (Check out www.zope.org, and I know that the new version of
WebCT, a distance-learning tool, is going to do the same thing.  I
predict that static content, as we have known it, has a limited
future.)

> - Even with truly interactive sites, not all original functionality will
> be necessary for future users.  In the archival literature, there's a
> long-standing distinction between primary and secondary users.  In the
> case of a Web site, we could think of the primary users as those who
> visit the site in order to carry out the activity for which it was
> originally intended.  Secondary users, on the other hand, would be those
> who want to find out something about an old site, but they do not
> necessarily intend to carry out the original activity for which it was
> intended.  100 years from now, someone may want to know that the Amazon
> site had a "shopping cart" feature, but (assuming Amazon is out of
> business by then), it may not provide much value to maintain the actual
> functionality of that shopping cart.  A more extreme example of this
> could be the preservation of a site like Yahoo!  It might be interesting
> to look back at and navigate around today's version of Yahoo! a hundred
> years from now, but allowing future users to actually query Yahoo! and
> get back the exact same results as they would today would require a
> great deal of work, including copying of massive databases owned by
> other companies.
>

That's true.  But what about "on this date in history" functions, such
as what the headlines were in the Los Angeles Times or on Yahoo on a
given date?  Is this a function that future generations might like to
have?

> - Finally, cost-benefit analysis will often require us to abandon some
> of the original site's functionality, even though we'd ideally like to
> preserve it all.  If there's some property of a site that can only be
> preserved by continuing to maintain some very specific, proprietary
> server-side software, then the cost of preservation may not be
> justified, unless the value of that property for future users is
> estimated as being very high.  Of course, some of the functionality of
> the server-side software could be translated into a more open version
> (e.g. writing and sharing some Perl scripts that do the same thing), but
> this will also require resources.  In some cases, those resources will
> be justified, particularly if it seems that the tools developed can be
> used by other archivists with similar materials.  But in some other
> cases we'll just have to sacrifice some bells and whistles in order to
> adopt a viable preservation strategy.
>

Hmmm...  Interesting.  I see a job, and lots of money, in this paragraph
;)

>
> > Good question!  Do we resolve the A HREFs to the originating machine,
> > or do we leave them as is and use the same file structure on our hosting
> > machines to replicate the original format?  (Or pay the price of broken
> > links!)  Hmmm....   Looks like this is getting more and more complicated
> > :)
>
> The resolution of links is an issue with many online materials.  It
> seems that the two simplest approaches are either to change the HREF
> text within the files themselves, or to use an external index and
> software tool to correctly resolve the original references to their
> appropriate locations.  The latter is probably much more viable over time,
> since collections (particularly distributed ones) will often be unable to
> depend on the persistence of a system-specific file reference for very
> long.  There are several interesting approaches currently available for
> persistent links and identifiers.  Some of them can be found at:
> http://www-personal.si.umich.edu/~calz/ermlinks/stan_rid.htm
>
> The CEDARS project, which I mentioned earlier, also has a proposal that
> they call the CEDARS reference ID (CRID).  Several other digital
> preservation projects have proposed their own mechanisms for naming and
> locating files.
>
> In general, there must be some way of providing an identifier that is
> unique within a given domain, and then some way of registering and
> identifying all of the existing domains.  With reference linking for
> electronic publications, the picture gets more complicated, since
> there's the issue of ensuring that a user is provided with the
> "appropriate copy" of a digital object, based on the rights and services
> available through his/her own host institution.  There will also be lots
> of external links that break over time, since we can't control all the
> naming practices of sites maintained on the Web that are not preserved
> in our repositories.
>

But if we link to external sites, we run the risk of the external site
going down or changing its IP or hostname.  I personally think that it
would be better to run everything internally, so that we are not
dependent on external sites and can administer the sites much more
efficiently.  (Basically, leaving it alone once it is up.)
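
And if we do decide to rewrite the HREFs inside the captured files so
that everything points at our own copies, a rough sketch of that pass
might look like the following.  It assumes the pages have already been
mirrored into a local directory, the host name is a placeholder, and it
uses a deliberately simple regular expression; a production version
should use a real HTML parser instead:

#!/usr/bin/perl -w
# Sketch: rewrite absolute links to the original host so they become
# site-relative and point at our local copies instead of the live server.
use strict;

my $original_host = "www.example.edu";   # hypothetical original site
my @files = glob("archive/*.html");      # pages captured earlier

for my $file (@files) {
    local $/;                            # slurp whole files at once
    open(IN, "< $file") or next;
    my $html = <IN>;
    close(IN);

    # http://www.example.edu/some/page becomes /some/page, so the
    # archived copy no longer depends on the original server being up.
    $html =~ s{http://\Q$original_host\E(/[^"'\s>]*)?}
              { defined $1 ? $1 : "/" }gei;

    open(OUT, "> $file") or next;
    print OUT $html;
    close(OUT);
}

The external-index-plus-resolver approach Cal describes keeps the
captured files untouched, which is probably why it holds up better over
time, but then the resolver itself becomes one more piece of software we
have to maintain.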

> > The tar utility is not so much a format as a way of concatenating
> > files one after the other into one file.  The files in the tar file
> > retain their native format.  (i.e. text will remain text, and
> > binary will remain unreadable ;)  )
>
> Very good point.  My reference to it as a format could be a bit
> misleading, since it's not a file format like HTML, MS Word, or
> whatever.  This points to why tar doesn't provide a complete answer.
> There must still be software to read and interpret the files within the
> tarball and/or any other filesystem being used.  For HTML or plain ASCII
> text files, that probably won't be a problem.  But highly complex
> formats will require more preservation effort through approaches such as:
> - translating them to some standard preservation formats,
> - continuously migrating them over time with each new generation of
> technology,
> - preserving the original byte stream and ensuring that there is
> software around to read and manipulate that byte stream (through
> emulation or other means), or
> - some hybrid of the above.
>

That's why I say plain text.  Plain text will always be used, until
Microsoft takes over the world ;), or until we can all read binary :)
Plain text has been used for at least thirty years in the Unix world,
and I would venture to say that it extends back before that.  I think
that it is the safest alternative, and it doesn't take up too much room.
(Compression is another question: as the algorithms change over time, we
will have to decompress and recompress with the new algorithms
occasionally, so that we can always access the text that is our document
:) )
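
As a rough sketch of that recompression step, assuming we had been
storing gzip'd tarballs and wanted to move to bzip2 (the IO-Compress
modules below are from CPAN, and the filenames are made up; for really
large archives you would stream file-to-file rather than holding the
tarball in memory):

#!/usr/bin/perl -w
# Sketch: migrate an archive from one compression algorithm to another
# without ever touching the plain-text files inside the tarball.
use strict;
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);
use IO::Compress::Bzip2    qw(bzip2  $Bzip2Error);

my $old = "site-snapshot.tar.gz";    # hypothetical existing archive
my $new = "site-snapshot.tar.bz2";   # same content, newer algorithm

my $tarball;                         # uncompressed tar stream, in memory
gunzip $old => \$tarball
    or die "gunzip failed: $GunzipError\n";
bzip2 \$tarball => $new
    or die "bzip2 failed: $Bzip2Error\n";

print "Recompressed $old as $new\n";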

> In short, such preservation will take some forethought and active
> planning at the time that pages are first accessioned (or in the
> language of the Reference Model for an Open Archival Information System,
> "ingested").
> http://ssdoo.gsfc.nasa.gov/nost/isoas/ref_model.html
>

Amen.  Way to go Cal!  Woo-hoo!  (Sorry, I'm a Croc Hunter fan :) )


--
Shannon Peevey
Central Web Support
UNT-Computing Center
speeves@unt.edu
940-369-8876
