Publication ideas

Stephen Hughes
Posts: 3323
Joined: February 26th, 2013, 7:12 am

Subjective / Objective

Post by Stephen Hughes »

For questions of style in a particular context, the borderline between objective and subjective is difficult to recognise. Even recognising what is good Greek, acceptable Greek, and Greek requiring attention becomes subjective, especially in the middle of that range.

Subjective human beings made subjective choices as they copied. Aside from errors made unintentionally while actually trying to copy word for word, there is the inherent subjectivity of a scribe "working" with the text - producing a text based on what the copyist thought was better Greek or more understandable Greek, taking out what was against their own beliefs, or adding things to make the text clearer.

To follow the subtleties of those changes requires more than algorithmic interaction with the text. Knowing what was better Greek per se is probably one of the easier types of subjectivity to follow; one would just need good experience in the better authors. Recognising where errors in understanding might happen and how they would have been avoided takes the understanding of Greek to the next level. Understanding what was objectionable to various groups involves understanding heresy and orthodoxy. Finally, adding clarity to the text and filling in details is probably the most subjective thing.

In any case, I think that an ability to comprehend various forms of subjectivity would be useful at some points in the text. It might be better still if your algorithm could identify which parts of the text look like they would benefit from user intervention - that is, the parts where the simplicity of the modelling process is not enough to explain the variation.
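
Something like the following rough sketch could serve as a first pass - flag the variation units where no single reading clearly dominates, so that whatever a simple frequency model cannot explain gets routed to a human. (This is only an illustration in Python; the data format, threshold, and unit names are my assumptions, not anyone's actual tooling.)

    from collections import Counter
    from math import log2

    def flag_for_review(variation_units, threshold=0.9):
        # Flag variation units whose reading distribution is too even
        # for a simple frequency model to explain (high normalised entropy).
        flagged = []
        for unit_id, readings in variation_units.items():
            counts = Counter(readings.values())
            if len(counts) < 2:
                continue  # all witnesses agree; nothing to explain
            total = sum(counts.values())
            entropy = -sum((c / total) * log2(c / total)
                           for c in counts.values())
            if entropy / log2(len(counts)) >= threshold:
                flagged.append(unit_id)
        return flagged

    # Hypothetical data: for each variation unit, each witness's reading.
    units = {
        "unit 1": {"P46": "ημων", "01": "ημων", "02": "ημων", "03": "υμων"},
        "unit 2": {"P46": "α", "01": "β", "03": "γ"},
    }
    print(flag_for_review(units))  # ['unit 2'] - the evenly split unit
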
Γελᾷ δ' ὁ μωρός, κἄν τι μὴ γέλοιον ᾖ
(Menander, Γνῶμαι μονόστιχοι 108)
Stephen Carlson
Posts: 3351
Joined: May 11th, 2011, 10:51 am
Location: Melbourne

Re: Publication ideas

Post by Stephen Carlson »

There is considerable interest now in computer-assisted forms of textual criticism for the New Testament. For example, the INTF in Muenster has developed a technique called the Coherence-Based Genealogical Method. As another example, I will be publishing a monograph on the use of techniques from computational biology to edit the text of Galatians. But these approaches use internal evidence, though in different ways. The CBGM uses internal evidence as an input into the process, while I use it at the back end by the human textual critic after the computer work is done.
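
The pairwise agreement rates that feed the CBGM's pregenealogical coherence are, at bottom, simple to compute. Here is a rough Python sketch (the data format is my assumption, not the INTF's actual code):

    def agreement(witness_a, witness_b):
        # Proportion of agreement between two witnesses over the
        # variation units where both are extant.
        shared = [u for u in witness_a if u in witness_b]
        if not shared:
            return 0.0
        return sum(witness_a[u] == witness_b[u] for u in shared) / len(shared)

The CBGM then asks, for each disagreement, which reading is prior - and that judgment is internal evidence supplied by the critic.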

Certainly a computer algorithm can implement any number of possible text critical approaches. A copy text approach is trivial, and a majority text approach is not much harder. Those can easily be programmed on a computer.
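
For instance, a majority text approach reduces to a few lines of Python (a sketch, assuming each variation unit is a dict mapping witness to reading):

    from collections import Counter

    def majority_text(variation_units):
        # At each variation unit, adopt the reading attested by the
        # most witnesses; ties are broken arbitrarily here.
        return {unit: Counter(readings.values()).most_common(1)[0][0]
                for unit, readings in variation_units.items()}

A copy text approach is simpler still: reproduce the base witness and depart from it only where a stated rule says to.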

It may be interesting to see how close one can get to the NA text without internal evidence, using some external proxies. A computer can implement that, and it may point out areas of the critical text that merit further examination.
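
Measuring that closeness is itself mechanical. A sketch, with both editions reduced to readings keyed by variation unit (a simplifying assumption):

    def distance_from_na(generated, na_readings):
        # Count the variation units where the generated text departs
        # from the NA reading (both texts as {unit: reading} dicts).
        return sum(1 for unit, reading in generated.items()
                   if na_readings.get(unit) != reading)
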

Ultimately, however, most textual critics I know are adamantly opposed to the disavowal of internal evidence. Indeed, you will not be able to argue that your computer-generated text is better than the NA without making appeals to internal evidence. In fact, the field has historically shown a lot of hostility to the use of computer or mathematical techniques. For example, the highly respected Günther Zuntz had the following to say:
Zuntz 1953:12 wrote:It follows that textual criticism, in our field, still can, and must, use the traditional methods (if adapted to its subject); and that it cannot be carried out mechanically. At every stage the critic has to use his brains. Were it different, we could put the critical slide-rule into the hands of any fool and leave it to him to settle the problems of the New Testament text.
Zuntz 1953:58 wrote:[T]he textual criticism of the New Testament cannot be carried out by statistical methods. . . . None but commensurable entities can be reduced to figures, and no two variants are strictly commensurable. Readings of all shades between good and bad; slips of the pen and intentional alterations; attestation by anything between one and a thousand witnesses: what is their common denominator? Variant readings can fruitfully be compared and grouped on more than one principle, but they cannot reasonably be added up or reduced to percentages like the factor of an arithmetical sum. What is the sum total of, say, an egg plus a grape plus a unicorn?
In a similar vein, you have Housman's famous essay The Application of Thought to Textual Criticism to contend with.
Stephen C. Carlson, Ph.D.
Melbourne, Australia
Barry Hofstetter
Posts: 2159
Joined: May 6th, 2011, 1:48 pm

Re: Publication ideas

Post by Barry Hofstetter »

How have I missed Housman's essay all these years? My one question is this: Has Bart Ehrman ever read it? :o
N.E. Barry Hofstetter, M.A., Th.M.
Ph.D. Student U of FL
Instructor of Latin
Jack M. Barrack Hebrew Academy
καὶ σὺ τὸ σὸν ποιήσεις κἀγὼ τὸ ἐμόν. ἆρον τὸ σὸν καὶ ὕπαγε.
Jonathan Robie
Posts: 4159
Joined: May 5th, 2011, 5:34 pm
Location: Durham, NC

Re: Publication ideas

Post by Jonathan Robie »

Barry Hofstetter wrote:How have I missed Housman's essay all these years? My one question is this: Has Bart Ehrman ever read it? :o
Indeed, what a wonderful essay! And what a great post, Stephen!

I do wonder how Housman is using the term "science" in this essay. Clearly, what he is describing is an interpretive discipline, qualitative research. (For that matter, I'm generally confused by what German theologians call science ...)
Stephen Carlson wrote:There is considerable interest now in computer-assisted forms of textual criticism for the New Testament. For example, the INTF in Muenster has developed a technique called the Coherence-Based Genealogical Method. As another example, I will be publishing a monograph on the use of techniques from computational biology to edit the text of Galatians. But these approaches use internal evidence, though in different ways. The CBGM uses internal evidence as an input into the process, while I use it at the back end by the human textual critic after the computer work is done.
This mirrors what people are doing in big data, exploratory data analysis, and a lot of other areas where people slog through data to come up with insights. Computers are good at being consistent, every time. They are bad at noticing exceptions that they weren't programmed to think about and interpreting them intelligently.
Stephen Carlson wrote:Certainly a computer algorithm can implement any number of possible text critical approaches. A copy text approach is trivial, and a majority text approach is not much harder. Those can easily be programmed on a computer.
So how different is a computer-generated text from the texts we typically use? How close does it get to a Nestle-Aland or a Robinson-Pierpont, if you choose an approach that seeks to approximate such a text? If it differs by, say, a few hundred readings, are they the same readings that are discussed in the text-critical notes in standard works?
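
That last check would be easy to mechanise once the differences are in hand. A sketch (the data shapes and the idea of a precompiled set of discussed units are hypothetical):

    def split_by_apparatus(generated, na_readings, discussed_units):
        # Partition the differing units by whether they are among the
        # units discussed in the standard text-critical notes.
        diffs = {u for u, r in generated.items() if na_readings.get(u) != r}
        return diffs & discussed_units, diffs - discussed_units

If most of the differences fell outside the set of much-discussed units, that would itself be an interesting result.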
ἐξίσταντο δὲ πάντες καὶ διηποροῦντο, ἄλλος πρὸς ἄλλον λέγοντες, τί θέλει τοῦτο εἶναι;
http://jonathanrobie.biblicalhumanities.org/
Jonathan Robie
Posts: 4159
Joined: May 5th, 2011, 5:34 pm
Location: Durham, NC

Re: Publication ideas

Post by Jonathan Robie »

Zuntz 1953:58 wrote:[T]he textual criticism of the New Testament cannot be carried out by statistical methods. . . . None but commensurable entities can be reduced to figures, and no two variants are strictly commensurable. Readings of all shades between good and bad; slips of the pen and intentional alterations; attestation by anything between one and a thousand witnesses: what is their common denominator? Variant readings can fruitfully be compared and grouped on more than one principle, but they cannot reasonably be added up or reduced to percentages like the factor of an arithmetical sum. What is the sum total of, say, an egg plus a grape plus a unicorn?
Precisely. You can compare them, but those comparisons are interpretive and require human judgement.

http://www.dilbert.com/strips/2014-03-23/
ἐξίσταντο δὲ πάντες καὶ διηποροῦντο, ἄλλος πρὸς ἄλλον λέγοντες, τί θέλει τοῦτο εἶναι;
http://jonathanrobie.biblicalhumanities.org/
Alan Bunning
Posts: 299
Joined: June 5th, 2011, 7:31 am

Re: Publication ideas

Post by Alan Bunning »

I didn't really want to get into a complicated discussion about textual criticism at this point, but will simply say that some of you are going to be surprised. Instead, I want to return to the original question about the viability of some topics for publication. So far the main interest has been in technical topic #1 and textual criticism topics #2 and #3. Could anyone suggest some appropriate journals to submit them to? Which ones do you actually read? Again, open-access or ungated journals would be preferred.
Alan Bunning
Posts: 299
Joined: June 5th, 2011, 7:31 am

Re: Publication ideas

Post by Alan Bunning »

Alan Bunning wrote:Textual Criticism
1. Examples of how the current apparatuses contain errors, are incomplete, and are not very useful for doing any serious work in textual criticism.
This thread appears to be dead now, but I was kind of surprised that I didn't get any responses about this item. I wonder whether that is because everybody already knows this is the case, nobody knows it is the case, or nobody really cares much about the topic. Anyone want to elaborate?
Ken M. Penner
Posts: 881
Joined: May 12th, 2011, 7:50 am
Location: Antigonish, NS, Canada

Re: Publication ideas

Post by Ken M. Penner »

Alan Bunning wrote:
Alan Bunning wrote:Textual Criticism
1. Examples of how the current apparatuses contain errors, are incomplete, and are not very useful for doing any serious work in textual criticism.
This thread appears to be dead now, but I was kind of surprised that I didn't get any responses about this item. I wonder whether that is because everybody already knows this is the case, nobody knows it is the case, or nobody really cares much about the topic. Anyone want to elaborate?
Those who care about this topic enough for it to make a difference already know that "serious work in textual criticism" requires one to examine the manuscripts.
Ken M. Penner
Professor and Chair of Religious Studies, St. Francis Xavier University
Co-Editor, Digital Biblical Studies
General Editor, Lexham English Septuagint
Co-Editor, Online Critical Pseudepigrapha pseudepigrapha.org
Stephen Hughes
Posts: 3323
Joined: February 26th, 2013, 7:12 am

Those who don't care too much about apparatuses

Post by Stephen Hughes »

Ken M. Penner wrote:
Alan Bunning wrote:
Alan Bunning wrote:Textual Criticism
1. Examples of how the current apparatuses contain errors, are incomplete, and are not very useful for doing any serious work in textual criticism.
This thread appears to be dead now, but I was kind of surprised that I didn't get any responses about this item. I wonder whether that is because everybody already knows this is the case, nobody knows it is the case, or nobody really cares much about the topic. Anyone want to elaborate?
Those who care about this topic enough for it to make a difference already know that "serious work in textual criticism" requires one to examine the manuscripts.
Those who don't care much about this topic leave it to those who do care to do the caring about it.
Γελᾷ δ' ὁ μωρός, κἄν τι μὴ γέλοιον ᾖ
(Menander, Γνῶμαι μονόστιχοι 108)
Barry Hofstetter
Posts: 2159
Joined: May 6th, 2011, 1:48 pm

Textual Criticism

Post by Barry Hofstetter »

Let me also point out that textual criticism is not one of the topics that B-Greek is about. We usually discuss variants only if they offer interesting insight into the way Greek works as a language...
N.E. Barry Hofstetter, M.A., Th.M.
Ph.D. Student U of FL
Instructor of Latin
Jack M. Barrack Hebrew Academy
καὶ σὺ τὸ σὸν ποιήσεις κἀγὼ τὸ ἐμόν. ἆρον τὸ σὸν καὶ ὕπαγε.