From: Peter Kirk (peterkirk@qaya.org)
Date: Sat Oct 11 2003 - 07:33:35 CST
On 11/10/2003 05:37, Gautam Sengupta wrote:
> ...
>
>[Gautam]: I did hedge my claim by saying that I was
>going to cite a rather minor example. But why would I
>want to do this extra bit of computing - however
>trivial - when I could have avoided it by adopting a
>more "appropriate" encoding in the first place? After
>all, what I am suggesting is that the VIRAMA model
>once adopted ought to have been implemented in full.
>Is there any particular reason why it should be
>adopted for CC but not for CV sequences?
>
>Encoding /ki/ as <K><i> (using lowercase vowels to
>denote combining forms and letters within slashes to
>denote phonemes rather than characters) is also
>semantically inappropriate. <K> stands for /ka/ not
>/k/, and <i> being a combining form of <I> simply
>stands for /i/. So <K><i> should stand for /kai/
>rather than /ki/ unless a VIRAMA is inserted between
>the <K> and the <i> to remove the default inherent
>vowel /a/ from <K>.
>
>I hope this makes sense. Best, Gautam.
>
I see where you are coming from. But you seem to be trying to redefine
one of the basic characteristics of Indic scripts: that an explicit
vowel replaces the implicit vowel. If you start down this path, you may
as well take the further simplification of doing away with the confusing
virama altogether and using a simple phonetic encoding, i.e. <k, a> for ka,
<k, i> for ki, and <k> alone for k with a virama mark. Smart font
technology can easily substitute the required ligatures, virama marks
etc., in much the same way that it handles Arabic shaping, ligatures and
so on. And if we were starting from scratch we might well have decided
that this was the better way to go. But we are not; we are starting from
where we are, so we should probably make as few changes as we can.
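
For concreteness, here is a minimal sketch in Python (not part of the original
mail; the helper names are invented for illustration, though the Devanagari
code points U+0915 KA, U+093F VOWEL SIGN I and U+094D VIRAMA are the real ones)
of how such a phonetic <consonant, vowel> transcription could be mapped
mechanically onto the standard virama-model encoding, i.e. the "extra bit of
computing" in question:

    # Sketch: map a phonemic <consonant, vowel> transcription onto the
    # standard virama-model encoding. An explicit vowel sign replaces the
    # inherent /a/; a bare consonant gets an explicit virama.
    CONSONANTS = {"k": "\u0915", "t": "\u0924"}                   # letters carry inherent /a/
    VOWEL_SIGNS = {"i": "\u093F", "ii": "\u0940", "u": "\u0941"}  # combining vowel signs
    VIRAMA = "\u094D"

    def phonetic_to_virama_model(syllables):
        """syllables: list of (consonant, vowel-or-None) pairs, e.g. [("k", "i")]."""
        out = []
        for consonant, vowel in syllables:
            out.append(CONSONANTS[consonant])
            if vowel is None:
                out.append(VIRAMA)               # dead consonant: bare /k/
            elif vowel != "a":
                out.append(VOWEL_SIGNS[vowel])   # explicit vowel replaces inherent /a/
            # vowel == "a": inherent vowel, nothing more to append
        return "".join(out)

    print(phonetic_to_virama_model([("k", "a")]))   # ka -> U+0915
    print(phonetic_to_virama_model([("k", "i")]))   # ki -> U+0915 U+093F
    print(phonetic_to_virama_model([("k", None)]))  # k  -> U+0915 U+094D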
--
Peter Kirk
peter@qaya.org (personal)
peterkirk@qaya.org (work)
http://www.qaya.org/