It's fun to consider the introduction (after emojis) of imojis, amojis,
umojis and omojis for individual people (or named pets), alien species
(E.T. wants to be able to call home with his own language and script!),
unknown things, and obfuscated entities. Also fun for new "trollface"
characters. In fact you could represent every individual, or even every
single atom in the universe created since the Big Bang!
But unlike people and social entities, the set of characters to encode
doesn't grow exponentially; it grows linearly, and at a slowing pace.
Unicode characters are not exploding the way Internet addresses
(organizations, users, computers, phones) did. The IPv4 space exploded
only because the rate at which people acquired equipment accelerated;
that growth is now slowing, dominated by replacement of existing
equipment, and only the explosion of IoT still drives some growth. That
too will rapidly reach a cap, so even the IPv6 address space will never
be filled, even though it is much larger than the UCS encoding space. I
would expect a maximum of roughly ~300 billion devices at most, given
the planet's resources and the fact that the global population cannot
grow exponentially and will necessarily cap, plus ~100 million
services/organizations. Everything else will die and be replaced, and
even if we wait 100 years before reusing the IPv6 addresses of dead
devices and people, that leaves plenty of room: we'll never reach even
a small fraction of the number of entities in the universe. We are also
completely unable to make any physical measurement with that many
digits of precision. Even 64 bits is already extremely large; 128 bits
was chosen for IPv6 mainly to allow random allocation without needing
excessive centralized management. An IPv6 address is even a good
substitute for the whole DNS system and its overvalued black market of
domain names: IPv6 is extremely economical!
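To make the scale concrete, here is a back-of-the-envelope check in
Python (the ~300 billion devices, ~100 million organizations, and the
100-year retention factor are my rough guesses from above, not
measured data):

    # How much of the 2^128 IPv6 space would the estimates above use?
    total_addresses = 2 ** 128   # full IPv6 address space (~3.4e38)
    devices = 300e9              # ~300 billion devices (rough guess)
    organizations = 100e6        # ~100 million services/orgs (rough guess)
    generations = 100            # keep dead addresses unused ~100 years

    used = (devices + organizations) * generations
    print(f"used    : {used:.3e}")
    print(f"total   : {total_addresses:.3e}")
    print(f"fraction of space consumed: {used / total_addresses:.3e}")

The fraction comes out around 1e-25 of the space, which is why purely
random allocation works in practice: collisions are negligible.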
2018-04-03 3:06 GMT+02:00 Mark E. Shoulson via Unicode <unicode_at_unicode.org>:
> On 04/02/2018 08:52 PM, J Decker via Unicode wrote:
>
>
>
> On Mon, Apr 2, 2018 at 5:42 PM, Mark E. Shoulson via Unicode <
> unicode_at_unicode.org> wrote:
>
>> For unique identifiers for every person, place, thing, etc, consider
>> https://en.wikipedia.org/wiki/Universally_unique_identifier which are
>> indeed 128 bits.
>>
>> What makes you think a single "glyph" that represents one of these 3.4⏨38
>> items could possibly be sensibly distinguishable at any sort of glance
>> (including long stares) from all the others? I have an idea for that: we
>> can show the actual *digits* of some encoding of the 128-bit number. Then
>> just inspecting for a different digit will do.
>>
>
> there's no restriction that it be one character cell in size... rendered
> glyphs could be thousands of pixels wide...
>
>
> Yes, but at that point it becomes a huge stretch to call it a
> "character". It becomes more like a "picture" or "graphic" or something.
> And even then, considering the tremendohunormous number of them we're
> dealing with, can we really be sure each one can be uniquely recognized as
> the one it's *supposed* to be, by everyone?
>
> ~mark
>
>
>
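For what it's worth, Mark's "just show the digits" approach already
works with stock 128-bit UUIDs; a minimal sketch in Python (the hex
rendering is just one arbitrary choice of encoding):

    import uuid

    # Two random 128-bit identifiers of the kind discussed above.
    a = uuid.uuid4()
    b = uuid.uuid4()

    # Rendered as digits, telling them apart is a plain string
    # comparison; nobody needs to memorize 3.4e38 distinct glyphs.
    print(a)
    print(b)
    print("identical" if a == b else "distinguishable digit by digit")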