On Sat, 9 Mar 2013 18:58:45 -0700
"Doug Ewell" <doug_at_ewellic.org> wrote:
> Richard Wordingham wrote:
> > The general feeling seems to be that computers don't do proper
> > decimal points, and so the raised decimal point is dropping out of
> > use.
> Any discussion of whether "computers" handle decimal points properly
> can't happen without talking about number-to-string conversion
> routines in programming languages and frameworks.
The question is what users will demand. Expectations have been low
enough that the loss of decimal points has been accepted.
Additionally, striving for an apparently hard-to-get raised decimal
point risks being forced to fall back on the readily achievable decimal
comma.
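That said, some conversion routines can be talked into it if you
configure the symbols by hand. A minimal sketch using Java's
DecimalFormatSymbols (Java is purely my illustrative choice here, not
something anyone in this thread relies on):

  import java.text.DecimalFormat;
  import java.text.DecimalFormatSymbols;
  import java.util.Locale;

  public class RaisedPoint {
      public static void main(String[] args) {
          // Start from an English locale, then override by hand;
          // no stock locale requests U+00B7, so this is purely manual.
          DecimalFormatSymbols symbols = new DecimalFormatSymbols(Locale.UK);
          symbols.setDecimalSeparator('\u00B7'); // MIDDLE DOT as raised point
          symbols.setMinusSign('\u2212');        // MINUS SIGN, not HYPHEN-MINUS
          DecimalFormat df = new DecimalFormat("#,##0.###", symbols);
          System.out.println(df.format(-1234.5)); // prints −1,234·5
      }
  }

Whether the result actually looks right still depends on the font, of
course.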
> Conversion routines are often able to choose between full stop and
> comma as the decimal separator, based on locale, but I'm not aware of
> any that will use U+00B7.
> The same is true for using U+2212, or even U+2013, as the "negative"
> sign instead of U+002D, which looks just terrible for this purpose in
> many fonts.
U+2212 is not necessary for English (see the CLDR exemplar characters),
so CLDR policy (if not its rules) does not allow it in CLDR conversion
rules.
I feel lucky to have got away with using it in documents for a few
years now, but maybe I've only succeeded because we've been cutting and
pasting from a Unicode-aware environment (Windows) to an 8-bit
environment (ill-maintained Solaris, hated by management).
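The 8-bit end of that pipeline is exactly where it breaks down. A quick
check (my own illustration, assuming the Solaris side is effectively
Latin-1):

  import java.nio.charset.Charset;
  import java.nio.charset.CharsetEncoder;

  public class EightBitCheck {
      public static void main(String[] args) {
          CharsetEncoder latin1 = Charset.forName("ISO-8859-1").newEncoder();
          // MIDDLE DOT is in Latin-1, so a raised point survives the trip;
          // MINUS SIGN is not, so U+2212 gets mangled or substituted.
          System.out.println(latin1.canEncode('\u00B7')); // true
          System.out.println(latin1.canEncode('\u2212')); // false
      }
  }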
Richard.