An important element of indexing (and querying) is to normalize the documents being indexed as well as inbound queries. That way you don't miss results because of minor differences in the query text (punctuation, capitalization, pluralization, etc.). In the case of Koine Greek, I'd like to use normalization to control for orthography, accentuation, phonetic assimilation, and so on.
To that end I'm developing a Python library for normalizing Koine Greek. Currently I have the following implemented (a rough sketch of the pipeline follows the example below):
- (Optional) convert from betacode
- Remove punctuation
- Convert to lowercase
- Convert final sigma to normal sigma
- Remove diacritics (breathings, accents, iota subscripts)
- Expand elisions
- Normalize Unicode (NFC, if you are familiar with such things)
Run on a verse, the pipeline converts this:

ἀπεκρίθη Νικόδημος καὶ εἶπεν αὐτῷ· πῶς δύναται ταῦτα γενέσθαι;

to this:

απεκριθη νικοδημοσ και ειπεν αυτω πωσ δυναται ταυτα γενεσθαι

This has provided a decent baseline. Remember, this is normalization for indexing: it is going to break and mangle things.
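For the curious, here is a minimal sketch of that kind of pipeline using only the standard library. It skips the betacode conversion and elision expansion steps (both need lookup tables), and the function is illustrative rather than the library's actual API:

```python
import unicodedata

def normalize(text: str) -> str:
    """Normalize Koine Greek text for indexing (a rough sketch)."""
    # Remove punctuation: any character whose Unicode category starts
    # with "P", which covers ano teleia and the Greek question mark.
    text = "".join(c for c in text if not unicodedata.category(c).startswith("P"))
    # Lowercase.
    text = text.lower()
    # Convert final sigma to medial sigma.
    text = text.replace("ς", "σ")
    # Remove diacritics: decompose (NFD), drop combining marks
    # (breathings, accents, iota subscript), then recompose as NFC.
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return unicodedata.normalize("NFC", stripped)

print(normalize("ἀπεκρίθη Νικόδημος καὶ εἶπεν αὐτῷ· πῶς δύναται ταῦτα γενέσθαι;"))
# -> απεκριθη νικοδημοσ και ειπεν αυτω πωσ δυναται ταυτα γενεσθαι
```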
If I run the MorphGNT through the normalizer, it makes the "word" and "normalized word" columns the same, with the exception of phonetic assimilation (mainly movable ν). The challenge with assimilation is that I'm not aware of a programmatic way to identify instances. Elision could be handled easily due to the relatively small number of examples in the corpus, but there are many, many more instances of movable ν thanks to verb morphology.
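One way to surface those residual differences is to normalize both columns and diff them. The sketch below assumes the seven-column MorphGNT format (bcv, pos, parse, text, word, norm, lemma), a placeholder filename, and the normalize() sketch above:

```python
from collections import Counter

residuals = Counter()
# "61-Mt-morphgnt.txt" is a placeholder path; any MorphGNT book file works.
with open("61-Mt-morphgnt.txt", encoding="utf-8") as f:
    for line in f:
        cols = line.split()
        # Columns 5 and 6 are "word" and "normalized word".
        w, n = normalize(cols[4]), normalize(cols[5])
        if w != n:
            residuals[(w, n)] += 1

# If the baseline is doing its job, what's left should be dominated by
# movable ν (pairs differing only by a final ν) and other assimilation.
for (w, n), count in residuals.most_common(10):
    print(f"{w} -> {n}  ({count})")
```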
The Apache Lucene project (on which Elasticsearch is built) includes a Modern Greek normalizer, so I'm going to review it for any broad categories I might be missing. It also includes a stemmer, which can be useful for controlling for inflection, but I think I'd rather do that by storing the lemma in metadata.
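To make the metadata idea concrete, here is one possible shape for it; the names (add_token, the posting layout) are mine, not anything Lucene or my library actually defines:

```python
# Index on the normalized surface form; carry the lemma alongside
# instead of stemming. Uses the normalize() sketch above.
index: dict[str, list[tuple[str, str]]] = {}

def add_token(ref: str, surface: str, lemma: str) -> None:
    index.setdefault(normalize(surface), []).append((ref, lemma))

add_token("John 3:9", "ἀπεκρίθη", "ἀποκρίνομαι")

# A query normalized the same way matches the surface form, and the
# lemma is available for grouping or display without a lossy stem.
print(index[normalize("ἀπεκρίθη")])  # [('John 3:9', 'ἀποκρίνομαι')]
```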
Anyway, this is what I'm interested in, and I'm sharing it to invite feedback and discussion. Thanks.