Normalizing Greek for indexing

NathanSmith
Posts: 62
Joined: June 10th, 2011, 12:38 am
Location: Portland, OR, USA

Normalizing Greek for indexing

Post by NathanSmith »

Nearly two years ago I posted some questions about the normalization of Greek words. In recent weeks I have renewed my interest in this topic, as I have been examining the Elasticsearch platform. In addition to query and document retrieval, it can include metadata tagging for documents, so that parsing information could be included (and queried) as well.

An important element of indexing (and querying) is to normalize the documents being indexed as well as inbound queries. This way you don't miss results because of minor differences between the query text and the indexed text (e.g. punctuation, capitalization, pluralization, etc.). In the case of Koine Greek, I'd like to use normalization to control for orthography, accentuation, phonetic assimilation, etc.

To that end I'm developing a Python library for normalizing Koine Greek. Currently I have the following implemented (a rough sketch in code follows the example below):
  • (Optional) convert from betacode
  • Remove punctuation
  • Convert to lowercase
  • Convert final sigma to normal sigma
  • Remove diacritics (breathings, accents, iota subscripts)
  • Expand elisions
  • Normalize unicode (NFC if you are familiar with such things)
So, for example, you go from this:
ἀπεκρίθη Νικόδημος καὶ εἶπεν αὐτῷ· πῶς δύναται ταῦτα γενέσθαι;
To this:
απεκριθη νικοδημοσ και ειπεν αυτω πωσ δυναται ταυτα γενεσθαι
This has provided a decent baseline. Remember, this is normalization for indexing. It is going to break and mangle things. :-)
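
In case it's useful to anyone, here is a minimal sketch of the core steps using only Python's standard library (unicodedata). This is not the actual library code, and it skips the betacode conversion and elision expansion steps:

    import unicodedata

    def normalize(text):
        """Lossy normalization of Koine Greek for indexing."""
        # Decompose so accents, breathings, and iota subscripts
        # become separate combining characters
        text = unicodedata.normalize("NFD", text)
        # Drop combining marks (category Mn) and punctuation (categories P*)
        text = "".join(
            ch for ch in text
            if not unicodedata.combining(ch)
            and not unicodedata.category(ch).startswith("P")
        )
        # Lowercase, then fold final sigma to medial sigma
        text = text.lower().replace("ς", "σ")
        # Recompose to NFC so the index and queries agree on one form
        return unicodedata.normalize("NFC", text)

    print(normalize("ἀπεκρίθη Νικόδημος καὶ εἶπεν αὐτῷ· πῶς δύναται ταῦτα γενέσθαι;"))
    # απεκριθη νικοδημοσ και ειπεν αυτω πωσ δυναται ταυτα γενεσθαι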

If I run the MorphGNT through the normalizer, it makes the "word" and "normalized word" columns the same, with the exception of phonetic assimilation (mainly movable ν). The challenge with assimilation is that I'm not aware of a programmatic way to identify instances. Elision could be handled easily due to the relatively small number of examples in the corpus, but there are many, many more instances of movable ν thanks to verb morphology.
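
For the elision step, since there are so few elided forms in the corpus, a simple lookup table is enough. Something like this hypothetical pass (the entries are only illustrative, and the apostrophe character would need to match whatever the source text actually uses, e.g. U+2019 or the koronis):

    # Illustrative, not exhaustive; this runs before accents are stripped
    ELISIONS = {
        "δ’": "δέ",
        "ἀλλ’": "ἀλλά",
        "ἀπ’": "ἀπό",
        "ἐπ’": "ἐπί",
        "μετ’": "μετά",
        "καθ’": "κατά",
        "ὑπ’": "ὑπό",
    }

    def expand_elision(token):
        return ELISIONS.get(token, token)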

The Apache Lucene project (on which Elasticsearch is based) includes a modern Greek normalizer, so I am going to see if there are any other broad categories I might be missing. It also includes a stemmer, which can be useful to control for inflection, but I think I'd rather do that by storing the lemma in metadata.

Anyway, this is what I'm interested in, and I'm sharing it to get any feedback or other discussion. Thanks.
Nigel Chapman
Posts: 74
Joined: June 3rd, 2011, 4:55 pm
Location: Sydney Australia

Re: Normalizing Greek for indexing

Post by Nigel Chapman »

Hi Nathan,

If PHP is any use to you, you'll find some useful utils here (with a test suite):

https://github.com/eukras/koinos/blob/m ... /Greek.php
https://github.com/eukras/koinos/blob/m ... ekTest.php

Esp. check the functions lowercase, gravesToAcutes, stripAccents, stripBreathings, fixDuplicateCharacters.

I think that last function is part of what you mean by normaliseUnicode; I haven't implemented a way to combine combining characters. Also, I have an old Betacode script, but it's not part of Koinos at present.

It's all MIT licensed, so reuse freely.

Koinos is a part of this site -> http://hexap.la/matt+22.34-40'mark+12.2 ... e+10.25-28

Nigel.
"When eras die their legacies are left to strange police." -- Clarence Day
Nigel Chapman | http://chapman.id.au
Jonathan Robie
Posts: 4158
Joined: May 5th, 2011, 5:34 pm
Location: Durham, NC

Re: Normalizing Greek for indexing

Post by Jonathan Robie »

Nathan, this would be extremely useful, e.g. for doing full-text searches over the openly available Greek corpus. I would love to see that happen!
ἐξίσταντο δὲ πάντες καὶ διηποροῦντο, ἄλλος πρὸς ἄλλον λέγοντες, τί θέλει τοῦτο εἶναι;
http://jonathanrobie.biblicalhumanities.org/
NathanSmith
Posts: 62
Joined: June 10th, 2011, 12:38 am
Location: Portland, OR, USA

Re: Normalizing Greek for indexing

Post by NathanSmith »

Nigel Chapman wrote:If PHP is any use to you, you'll find some useful utils here (with a test suite):

https://github.com/eukras/koinos/blob/m ... /Greek.php
https://github.com/eukras/koinos/blob/m ... ekTest.php

Esp. check the functions lowercase, gravesToAcutes, stripAccents, stripBreathings, fixDuplicateCharacters.

I think that last function is part of what you mean by normaliseUnicode; I haven't implemented a way to combine combining characters.
Great, thanks Nigel. At the risk of invoking a religious war, I prefer Python. But PHP is a great language too.

The Unicode normalization I do is to combine into NFC form. However, since my diacritic stripping removes the combining characters anyway, there probably isn't any difference between NFD and NFC at that point. FWIW I don't think it really matters which form you choose; you just have to stick with one, because submitting a decomposed query against a composed index will result in a query miss.
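
To illustrate the miss: the two forms of the same word look identical on screen but compare unequal, so an exact-match index built on one form won't match queries submitted in the other:

    import unicodedata

    composed = unicodedata.normalize("NFC", "πῶς")    # precomposed code points
    decomposed = unicodedata.normalize("NFD", "πῶς")  # base letter + combining mark
    print(composed == decomposed)          # False
    print(len(composed), len(decomposed))  # 3 4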

Jonathan, I am pretty excited by the prospects as well. I hope to have a proof-of-concept up on the web before too long, and I'll share that with you. I'd love to have a Perseus-like scope and search functionality, without the tremendous weight of Perseus. :-)

One project which inspired this was xtas, a tool designed for the automated annotation (name your NLP task) of documents in Elasticsearch.

An interesting question for the structure of document storage in an index is how you choose what comprises a "document" - in other words the boundaries which will constrain search results. I think a sentence may be the best linguistic answer, but in the case of biblical texts, most everyone is looking for book, chapter, verse. Lots of interesting things to consider.
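
If verses end up being the unit, I imagine each one becoming its own Elasticsearch document, with the display text and the normalized form stored side by side. A rough sketch with the Python client (the index and field names here are just made up, and the client API details may differ):

    from elasticsearch import Elasticsearch

    es = Elasticsearch()
    es.index(index="gnt", doc_type="verse", body={
        "book": "John",
        "chapter": 3,
        "verse": 9,
        "text": "ἀπεκρίθη Νικόδημος καὶ εἶπεν αὐτῷ· πῶς δύναται ταῦτα γενέσθαι;",
        "normalized": "απεκριθη νικοδημοσ και ειπεν αυτω πωσ δυναται ταυτα γενεσθαι",
    })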
Stephen Hughes
Posts: 3323
Joined: February 26th, 2013, 7:12 am

Re: Normalizing Greek for indexing

Post by Stephen Hughes »

Jonathan Robie wrote:Nathan, this would be extremely useful, e.g. for doing full text searches over the Greek corpus that is openly available. I would love to see that happen!
If it could be tailored to a Firefox plug-in, that would be great too.

If I use the find magnifying glass for a word in a page, it needs to be accented accurately. If the words on a page I was browsing could (optionally) be seen by the find magnifying glass as their dictionary forms, so that every instance of a word would show up, that would be a great help. Can your software create another layer of data behind a webpage and search that while still displaying the original page? Can it work with Firefox?
Γελᾷ δ' ὁ μωρός, κἄν τι μὴ γέλοιον ᾖ
(Menander, Γνῶμαι μονόστιχοι 108)
Nigel Chapman
Posts: 74
Joined: June 3rd, 2011, 4:55 pm
Location: Sydney Australia

Re: Normalizing Greek for indexing

Post by Nigel Chapman »

Hi Nathan,

I wasn't suggesting a language switch... Just that the array/hash mappings and other code used to normalize the Greek words are easy to borrow and adapt.

Nigel.
"When eras die their legacies are left to strange police." -- Clarence Day
Nigel Chapman | http://chapman.id.au
NathanSmith
Posts: 62
Joined: June 10th, 2011, 12:38 am
Location: Portland, OR, USA

Re: Normalizing Greek for indexing

Post by NathanSmith »

Nigel Chapman wrote:Hi Nathan,

I wasn't suggesting a language switch... Just that the array/hash mappings and other code used to normalize the Greek words are easy to borrow and adapt.

Nigel.
Understood. I can read PHP well enough that I can probably adapt what you've already accomplished.

Stephen Hughes wrote:If I use the find magnifying glass for a word in a page, it needs to be accented accurately. If the words on a page I was browsing could (optionally) be seen by the find magnifying glass as their dictionary forms, so that every instance of a word would show up, that would be a great help. Can your software create another layer of data behind a webpage and search that while still displaying the original page? Can it work with Firefox?
What you are describing is a slightly different application of exactly what I am going for with this normalization library. To use it in a webpage, you could easily add the normal form as metadata in the underlying HTML, which would not be displayed but could be queried by a browser plugin or JavaScript.
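
For instance, a sketch (reusing the hypothetical normalize() function from my earlier post): each word gets wrapped in an element whose data attribute carries the normalized form, invisible to the reader but matchable by a script.

    def tag_word(word):
        # The reader sees the original word; a plugin or script can
        # match against the hidden data-norm attribute instead
        return '<span data-norm="%s">%s</span>' % (normalize(word), word)

    print(tag_word("Νικόδημος"))
    # <span data-norm="νικοδημοσ">Νικόδημος</span>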

Right now I've got the LXX and NT normalized in Elasticsearch, but without metadata. I need to do some research on how best to configure the Elasticsearch schema to allow for complex searches (like what are available in Bible software).

Other applications I have thought of with this platform would be a spell-checker (based on fuzzy queries of an index of just words) and parsing hints (similar to before, but using exact matching with metadata attached). These could both be used to help accelerate the acquisition of new texts into the digital realm.
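
The spell-checker could start as little more than a fuzzy query against a words-only index. A hypothetical sketch with the Python client (again, the index and field names are made up):

    from elasticsearch import Elasticsearch

    es = Elasticsearch()
    res = es.search(index="words", body={
        "query": {
            "fuzzy": {"normalized": {"value": "γενεσθε", "fuzziness": 2}}
        }
    })
    for hit in res["hits"]["hits"]:
        print(hit["_source"]["normalized"])  # candidate corrections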