Publication ideas

Anything related to Biblical Greek that doesn't fit into the other forums.
Alan Bunning
Posts: 299
Joined: June 5th, 2011, 7:31 am
Contact:

Publication ideas

Post by Alan Bunning »

After completing phase 1 of the Center for New Testament Restoration project, in which I produced electronic transcriptions of the Greek manuscripts up to 400 AD containing portions of the New Testament, I have acquired various kinds of knowledge that may be of interest to others, and I want to explore the possibility of publishing an article in a journal. Here are some of the topics I thought of:

Technical
1. Almost every popular Greek text found in Bible programs and on the Internet contains errors which continue to be copied and distributed. This has resulted in a kind of electronic textual criticism, where the source of the original electronic transcription can often be identified by checking just a few telltale words.
2. None of the Bibles or online texts I have examined follows the verse boundaries originally introduced by Robert Estienne (Stephanus) in his 1551 text, and a survey could document where the boundaries differ.
3. An algorithm for aligning variant words in multiple texts without designating one of them as a base text (in other words, treating all texts equally).
4. The set of rules that enables a computer to generate all the verb forms of the New Testament when given the irregular principal parts.

Textual Criticism
1. Examples of how the current apparatuses contain errors, are incomplete, and are not very useful for doing serious work in textual criticism.
2. Using textual affinity based on statistics to replace the usual text-type theories (or perhaps to confirm their existence); a rough sketch of this kind of analysis follows this list.
3. An algorithm for having the computer automatically generate a base text approximating the original autographs in an unbiased manner (without human intervention), based on five objective external criteria, and why that would be better than the texts we use now.
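
To make the "textual affinity" idea in Textual Criticism topic 2 concrete, here is a minimal sketch of the kind of statistic involved. The witnesses, readings, and variation units below are invented for illustration; this is not the CNTR's actual method.

```python
# Toy sketch: pairwise textual affinity from agreement at variation units.
# The witnesses and readings are invented, not CNTR data.
from itertools import combinations

# Each witness is mapped to its reading at a handful of variation units
# (None = the witness is not extant at that unit).
readings = {
    "P66": ["a", "b", "a", None, "c"],
    "P75": ["a", "b", "a", "a",  "c"],
    "01":  ["a", "a", "b", "a",  "c"],
    "03":  ["a", "b", "a", "a",  "c"],
}

def affinity(w1, w2):
    """Percentage agreement where both witnesses are extant."""
    pairs = [(x, y) for x, y in zip(readings[w1], readings[w2])
             if x is not None and y is not None]
    if not pairs:
        return None
    return sum(x == y for x, y in pairs) / len(pairs)

for w1, w2 in combinations(readings, 2):
    print(f"{w1} ~ {w2}: {affinity(w1, w2):.0%}")
```

A matrix of such percentages could then be clustered to see whether the traditional text-types emerge from the data or not.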

I was wondering how much interest there would be in any of these topics (maybe you could help rank them in terms of importance). Perhaps some of these papers have already been written? If any of these topics have merit, what would you recommend as appropriate journals to submit them to? Open-access or otherwise ungated journals would be preferred. (Or perhaps one of them should become a “Working Paper,” as discussed a couple of weeks ago.)

Thanks,

Alan Bunning
Stephen Hughes
Posts: 3323
Joined: February 26th, 2013, 7:12 am

Errors in electronic texts

Post by Stephen Hughes »

I am interested in the errors-in-electronic-texts topic.

I am also interested to know your five external criteria for textual criticism that are valid in all situations.
Γελᾷ δ' ὁ μωρός, κἄν τι μὴ γέλοιον ᾖ
(Menander, Γνῶμαι μονόστιχοι 108)
Ken M. Penner
Posts: 881
Joined: May 12th, 2011, 7:50 am
Location: Antigonish, NS, Canada
Contact:

Re: Publication ideas

Post by Ken M. Penner »

Alan Bunning wrote: Technical
1. Almost every popular Greek text found in Bible programs and on the Internet contains errors which continue to be copied and distributed. This has resulted in a kind of electronic textual criticism, where the source of the original electronic transcription can often be identified by checking just a few telltale words.
2. None of the Bibles or online texts I have examined follows the verse boundaries originally introduced by Robert Estienne (Stephanus) in his 1551 text, and a survey could document where the boundaries differ.
3. An algorithm for aligning variant words in multiple texts without designating one of them as a base text (in other words, treating all texts equally).
4. The set of rules that enables a computer to generate all the verb forms of the New Testament when given the irregular principal parts.

Textual Criticism
1. Examples of how the current apparatuses contain errors, are incomplete, and are not very useful for doing serious work in textual criticism.
2. Using textual affinity based on statistics to replace the usual text-type theories (or perhaps to confirm their existence).
3. An algorithm for having the computer automatically generate a base text approximating the original autographs in an unbiased manner (without human intervention), based on five objective external criteria, and why that would be better than the texts we use now.
Some of these are interesting topics, but not substantial enough for a full journal article. Technical topics 1, 2, and 4 and Textual Criticism topic 1 fall in this category. You might find a journal to publish these as Short Notes, but I think a blog post is probably the way to go for these.
Technical topic 3 is already being done (see for example the Online Critical Pseudepigrapha at http://ocp.tyndale.ca).
That leaves Textual Criticism topics 2 and 3. What makes these viable is the useful result they would produce: a new text-type theory and a new critical text, respectively.

http://www.sbl-site.org/SBLcommittees_JBLBoard.aspx
Ken M. Penner
Professor and Chair of Religious Studies, St. Francis Xavier University
Co-Editor, Digital Biblical Studies
General Editor, Lexham English Septuagint
Co-Editor, Online Critical Pseudepigrapha pseudepigrapha.org
James Spinti
Posts: 103
Joined: June 1st, 2011, 6:01 pm
Location: Red Wing MN
Contact:

Re: Publication ideas

Post by James Spinti »

Fascinating. I would be interested in seeing information on Technical topic 1 and Textual Criticism topic 3.

I must admit I'm not sure you can be unbiased; you still have to choose what the five criteria will be. Doesn't that choice itself involve bias?

James
Proofreading and copyediting of ancient Near Eastern and biblical studies monographs
Alan Bunning
Posts: 299
Joined: June 5th, 2011, 7:31 am
Contact:

Re: Publication ideas

Post by Alan Bunning »

Ken M. Penner wrote: Technical topic 3 is already being done (see for example the Online Critical Pseudepigrapha at http://ocp.tyndale.ca).
I could not find an example of what I was proposing on that website. Can you point me to where they discuss it in particular? You can see an example of the algorithm I am using at http://bunning.gweb.io/CNTR/manuscripts.htm. But then again, no one has expressed interest in that item so far. Perhaps this topic would be more appropriate for a computer science journal?
Ken M. Penner
Posts: 881
Joined: May 12th, 2011, 7:50 am
Location: Antigonish, NS, Canada
Contact:

Re: Publication ideas

Post by Ken M. Penner »

Alan Bunning wrote:
Ken M. Penner wrote: Technical topic 3 is already being done (see for example the Online Critical Pseudepigrapha at http://ocp.tyndale.ca).
I could not find an example of what I was proposing on that website. Can you point me to where they discuss it in particular?
Ian Scott and I presented on our project about 9 years ago. Our handout from the talk can be found at https://stfx.academia.edu/KenPenner/Con ... sentations
Ken M. Penner
Professor and Chair of Religious Studies, St. Francis Xavier University
Co-Editor, Digital Biblical Studies
General Editor, Lexham English Septuagint
Co-Editor, Online Critical Pseudepigrapha pseudepigrapha.org
Alan Bunning
Posts: 299
Joined: June 5th, 2011, 7:31 am
Contact:

Re: Publication ideas

Post by Alan Bunning »

Ken M. Penner wrote:
Alan Bunning wrote:
Ken M. Penner wrote: Technical topic 3 is already being done (see for example the Online Critical Pseudepigrapha at http://ocp.tyndale.ca).
I could not find an example of what I was proposing on that website. Can you point me to where they discuss it in particular?
Ian Scott and I presented on our project about 9 years ago. Our handout from the talk can be found at https://stfx.academia.edu/KenPenner/Con ... sentations
Oh, I see now. Yes, you have done something similar, but instead of showing multiple texts in parallel, you chose to show one text and then the differences based on it. I didn’t see any information on the algorithm you used, though. Is that publicly available information? I am not so interested in the fact that data like this can be generated (for there are several programs which can do something similar), but more interested in the science of the algorithm used. I was surprised to see that most of the research on these algorithms is being done in the area of genetics (along these lines http://en.wikipedia.org/wiki/Multiple_s ... _alignment). But what a similarity to the New Testament, where there was an original text and then variants emerged, much like mutations in genes. Yet in the case of genetics, where there is such a huge volume of data, the efficiency of the algorithm is very important, and my algorithm may be unique in this regard. Thus, I am thinking a computer science journal might be more appropriate to pursue.
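
For readers unfamiliar with the sequence-alignment literature mentioned above, the standard building block is pairwise alignment by dynamic programming (Needleman-Wunsch), applied here at the word level. This is only the textbook technique, not the CNTR algorithm; a true multiple alignment that treats all witnesses equally (for example, a star or progressive alignment over every pair) is where the efficiency questions arise.

```python
# Minimal word-level alignment of two transcriptions using the standard
# dynamic-programming (Needleman-Wunsch) approach from sequence alignment.
def align(a, b, gap=-1, match=1, mismatch=-1):
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    # Trace back to recover the aligned word pairs ("-" marks a gap).
    out, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            out.append((a[i-1], b[j-1])); i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            out.append((a[i-1], "-")); i -= 1
        else:
            out.append(("-", b[j-1])); j -= 1
    return list(reversed(out))

print(align("εν αρχη ην ο λογος".split(), "εν αρχη ο λογος ην".split()))
```

Extending this to dozens of witnesses without privileging any one of them is exactly the combinatorial problem that makes algorithmic efficiency matter.
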
Ken M. Penner
Posts: 881
Joined: May 12th, 2011, 7:50 am
Location: Antigonish, NS, Canada
Contact:

Re: Publication ideas

Post by Ken M. Penner »

For algorithms, I could put you in touch with Nat Dyskstra of http://water.twu.ca
Ken M. Penner
Professor and Chair of Religious Studies, St. Francis Xavier University
Co-Editor, Digital Biblical Studies
General Editor, Lexham English Septuagint
Co-Editor, Online Critical Pseudepigrapha pseudepigrapha.org
Jonathan Robie
Posts: 4159
Joined: May 5th, 2011, 5:34 pm
Location: Durham, NC
Contact:

Re: Publication ideas

Post by Jonathan Robie »

Alan Bunning wrote: An algorithm for having the computer automatically generate a base text approximating the original autographs in an unbiased manner (without human intervention), based on five objective external criteria, and why that would be better than the texts we use now.
Any such algorithm is based on an underlying theory, and it's easy to imagine algorithms that take different approaches. An algorithm might give all texts equal weight, or it might favor the earliest texts, perhaps weighting the rankings based on the age of the text. Similar texts might be grouped into families, with the best reading computed for each family and the families' best readings then compared to create one text (perhaps ranking one family more highly than others). You could use the algorithms from multiple sequence alignment, or clustering algorithms.
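
As an illustration of the "grouped into families" option, here is a toy sketch using single-linkage clustering on a simple disagreement measure. The witnesses, readings, and threshold are invented, and this is not any established text-type method.

```python
# Toy sketch: group witnesses into "families" by single-linkage clustering
# on the fraction of variation units where two witnesses disagree.
readings = {
    "A": ["a", "b", "a", "c"],
    "B": ["a", "b", "a", "c"],
    "C": ["a", "a", "b", "c"],
    "D": ["a", "a", "b", "d"],
}

def distance(x, y):
    return sum(p != q for p, q in zip(readings[x], readings[y])) / len(readings[x])

# Start with every witness in its own family and merge the closest pair
# until no two families are within the threshold.
families = [{w} for w in readings]
THRESHOLD = 0.3
while True:
    best = min(
        ((f1, f2) for i, f1 in enumerate(families) for f2 in families[i+1:]),
        key=lambda p: min(distance(x, y) for x in p[0] for y in p[1]),
        default=None,
    )
    if best is None or min(distance(x, y) for x in best[0] for y in best[1]) > THRESHOLD:
        break
    families.remove(best[0]); families.remove(best[1])
    families.append(best[0] | best[1])

print(families)   # e.g. [{'A', 'B'}, {'C', 'D'}]
```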

None of these theories is really a proven way of determining what the original text is; each seems plausible to me, and the different approaches likely have different strengths and weaknesses. You could test theories like this, e.g. by constructing similar tasks based on known, modern texts in English, having people copy them by hand in scenarios similar to those we think were used in transmission, and demonstrating that your algorithm can reconstruct the original text from the later copies. Have any of the schools of textual criticism ever proven that their methods work with experiments along these lines?
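
A tiny simulation along the lines of that experiment might look like the following. The English "autograph", the copying model (random word substitutions only), and the error rate are all invented stand-ins for real scribal behavior.

```python
# Simulate transmission of a known text, then see whether a simple
# word-by-word majority vote over the surviving copies recovers it.
import random

random.seed(1)
autograph = "in the beginning was the word and the word was with god".split()

def copy_text(source, error_rate=0.1):
    """Make a copy that randomly corrupts some words (substitutions only)."""
    return [w if random.random() > error_rate
            else random.choice(["GOD", "a", "thee", w.upper()])
            for w in source]

# Two generations of copying: copies of the autograph, then copies of copies.
first_gen = [copy_text(autograph) for _ in range(4)]
second_gen = [copy_text(random.choice(first_gen)) for _ in range(8)]
extant = first_gen[2:] + second_gen      # pretend the earliest copies are lost

# Reconstruct by majority vote at each word position.
reconstruction = [max(set(words), key=words.count) for words in zip(*extant)]

print("autograph:     ", " ".join(autograph))
print("reconstruction:", " ".join(reconstruction))
print("exact match:   ", reconstruction == autograph)
```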

I once knew something about exploratory data analysis, and to me, that's what this is. Back then, we found that the best approach was often to apply multiple plausible algorithms and compare the results. I believe you plan to make your transcriptions available, and that's one of the best parts of this: people can analyze the data in different ways and compare. If you can't analyze someone's data, you can't really have much confidence in their results.
ἐξίσταντο δὲ πάντες καὶ διηποροῦντο, ἄλλος πρὸς ἄλλον λέγοντες, τί θέλει τοῦτο εἶναι;
http://jonathanrobie.biblicalhumanities.org/
Alan Bunning
Posts: 299
Joined: June 5th, 2011, 7:31 am
Contact:

Re: Publication ideas

Post by Alan Bunning »

Jonathan Robie wrote:
Alan Bunning wrote: An algorithm for having the computer automatically generate a base text approximating the original autographs in an unbiased manner (without human intervention), based on five objective external criteria, and why that would be better than the texts we use now.
Any such algorithm is based on an underlying theory, and it's easy to imagine algorithms that take different approaches. An algorithm might give all texts equal weight, or it might favor the earliest texts, perhaps weighting the rankings based on the age of the text. Similar texts might be grouped into families, with the best reading computed for each family and the families' best readings then compared to create one text (perhaps ranking one family more highly than others). You could use the algorithms from multiple sequence alignment, or clustering algorithms.

None of these theories is really a proven way of determining what the original text is; each seems plausible to me, and the different approaches likely have different strengths and weaknesses. You could test theories like this, e.g. by constructing similar tasks based on known, modern texts in English, having people copy them by hand in scenarios similar to those we think were used in transmission, and demonstrating that your algorithm can reconstruct the original text from the later copies. Have any of the schools of textual criticism ever proven that their methods work with experiments along these lines?

I once knew something about exploratory data analysis, and to me, that's what this is. Back then, we found that the best approach was often to apply multiple plausible algorithms and compare the results. I believe you plan to make your transcriptions available, and that's one of the best parts of this: people can analyze the data in different ways and compare. If you can't analyze someone's data, you can't really have much confidence in their results.
Yes, the point of the algorithm is to use the five objective external criteria to automatically generate the text, with the emphasis being on “objective” data rather than subjective human decisions. These five criteria could indeed be weighted in several different ways to produce different results, but I think people will be very surprised at how close the computer will come to generating something like the Nestle-Aland text (and, I would claim, something better than the Nestle-Aland text) without having to argue over internal evidence. Ideally, there should be a discussion to see if there can be agreement on how these external criteria should be weighted BEFORE seeing the results the formula produces. For example, few would disagree that the manuscript date should be weighted relatively heavily in the formula, but there are other things to consider as well. But you are right that once the data is available, people could generate all kinds of base texts simply by tweaking the formula to their liking. There is much, much more I have to say about all of this, but that is why I am proposing to write a paper on it.
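
As a purely illustrative sketch of how such a weighted formula could work: manuscript date is the one criterion actually named above, so the second field ("quality") and all of the weights below are placeholders, not the CNTR's actual five criteria.

```python
# Toy sketch: score each witness from external data, then let a weighted
# vote pick the reading at each variation unit. Dates, "quality" scores,
# and weights are invented placeholders for illustration only.
from collections import defaultdict

witnesses = {
    # name: (approximate date AD, placeholder "quality" score 0..1)
    "P75": (200, 0.9),
    "01":  (350, 0.8),
    "03":  (325, 0.9),
    "05":  (400, 0.6),
}

WEIGHTS = {"date": 0.7, "quality": 0.3}   # the "formula" people could tweak

def witness_score(name):
    date, quality = witnesses[name]
    earliness = (500 - date) / 500        # earlier manuscripts score higher
    return WEIGHTS["date"] * earliness + WEIGHTS["quality"] * quality

def choose_reading(unit):
    """unit maps witness name -> its reading at one variation unit."""
    totals = defaultdict(float)
    for name, reading in unit.items():
        totals[reading] += witness_score(name)
    return max(totals, key=totals.get)

print(choose_reading({"P75": "a", "03": "a", "01": "b", "05": "b"}))  # -> "a"
```

Changing the weights (or adding further criteria) and re-running the same vote is the "tweaking the formula" step described above.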