From: John Hudson (tiro@tiro.com)
Date: Thu May 27 2004 - 13:59:10 CDT
Peter Constable wrote:
> That question has been answered. So far, the responses to the answers
> provided haven't been exactly deafening. Does nobody in the
> pro-unification camp have any response? Is nobody willing to give
> acknowledgement to the problems presented?
My response was to ask what need there was for a plain-text distinction in the circumstance in
which such a distinction was identified as a 'need'. The only response I received was
John Jenkins's 'What if someone were trying to read their Hebrew e-mail at an Internet cafe
that only had a Palaeo-Hebrew font installed?' hypothesis. Aside from the sheer
unlikeliness of that scenario, it threw the discussion back on the legibility criterion, but I'd
really like to know what prompts other users of 'Phoenician' to *need* a separate
encoding. I understand that some people may *want* a separate encoding, but I have not yet
seen a *need*, i.e. something that requires a distinction in plain-text. And I would like
to see such a need clearly explained, with material examples, because then we could stop
arguing until Michael submits his next Semitic script proposal.
The concern I have is not so much with the Phoenician encoding per se, but with the
encoding of 'significant nodes' -- to use Michael's phrase -- on a script continuum. While
this might make sense to scholars dealing with isolated atomic instances of that
continuum, it is not going to make sense to scholars dealing with the continuum as a
whole, for whom the structural identity of the 'diascripts' within the continuum is much
more important than their visual dissimilarity at specific places and times. There are
'technical issues' -- in the same sense that there are technical issues prompting some
people to want a separate Phoenician encoding, i.e. usage issues -- that arise in trying
to do scholarly work in a script continuum that is variously encoded as multiple scripts.
These issues may not be sufficient to overcome the conflicting 'needs' of other scholars,
but they should not be ignored on that basis. In particular, if Unicode encodes a number
of 'significant nodes' on the Semitic script continuum, how should the standard be used
to encode texts that fall between the nodes? This is an issue even if one accepts the
concept of nodes, i.e. of a linear continuum with clearly identifiable chronological or
cultural script instances. Dean has, convincingly I think, presented examples of
overlapping use of such 'nodes' among ancient communities, making it harder to
distinguish them from within the continuum.
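[Editor's note: a minimal sketch, not part of the original mail, of the kind of usage issue
described above. It assumes a separately encoded Phoenician block, using hypothetical code
points in the range proposed for it (U+10900..), and shows that a plain-text search for a
Hebrew string fails against the 'same' letters encoded as Phoenician until one encoding is
folded into the other.]

    # Partial, illustrative folding table from Phoenician to Hebrew code points.
    PHOENICIAN_TO_HEBREW = {
        0x10900: 0x05D0,  # ALF  -> HEBREW LETTER ALEF
        0x10901: 0x05D1,  # BET  -> HEBREW LETTER BET
        0x10902: 0x05D2,  # GAML -> HEBREW LETTER GIMEL
        0x10903: 0x05D3,  # DELT -> HEBREW LETTER DALET
        # ... the remaining letters would be mapped the same way
    }

    def fold_to_hebrew(text):
        # str.translate accepts a dict mapping code point to code point
        return text.translate(PHOENICIAN_TO_HEBREW)

    hebrew_query    = "\u05D0\u05D1"           # aleph, bet (Hebrew block)
    phoenician_text = "\U00010900\U00010901"   # the same letters, Phoenician block

    print(hebrew_query in phoenician_text)                  # False: distinct code points
    print(hebrew_query in fold_to_hebrew(phoenician_text))  # True: after folding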
John Hudson
--
Tiro Typeworks        www.tiro.com
Vancouver, BC         tiro@tiro.com

Currently reading:
Typespaces, by Peter Burnhill
White Mughals, by William Dalrymple
Hebrew manuscripts of the Middle Ages, by Colette Sirat