From: Martin Duerst (duerst@w3.org)
Date: Mon Sep 30 2002 - 23:46:41 EDT
At 07:37 02/09/26 +0900, jarkko.hietaniemi@nokia.com wrote:
>I would be happy if just this
>
><meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
>
>would be enough to convince the browsers that the page is in UTF-8...
>It isn't if the HTTP server claims that the pages it serves are in
>ISO 8859-1. A sample of this is http://www.iki.fi/jhi/jp_utf8.html,
>it does have the meta charset, but since the webserver (www.hut.fi,
>really, a server outside of my control) thinks it's serving Latin 1,
>I cannot help the wrong result. (I guess some browsers might do a better
>job of sniffing the content of the page, but at least IE6 and Opera 6.05
>on Win32 seem to believe the server rather than the (HTML of the) page.)
Sniffing isn't a good idea in the long term. It may work
for simple web page serving, but as soon as you go XML and
start to move data around without the user frequently having
a chance to see it, you'll end up with a big mess.
Also, 'guessing' is very ill-defined. You might serve
a document to your favorite browser, and it looks okay.
But other browsers might guess a bit differently, or
a new version of your favorite browser may guess a bit
differently, and then you're out of luck.
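The precedence rule behind the problem above can be sketched in a few lines. This is a hypothetical helper (not part of any library), assuming the HTML 4.01 / HTTP/1.1 behavior the thread describes: a charset on the HTTP Content-Type header beats the meta declaration, and text/* defaults to ISO-8859-1 when neither is given.

```python
def resolve_charset(http_content_type, meta_charset, default="iso-8859-1"):
    """Return the effective charset a conforming browser should use.

    Hypothetical sketch of the precedence rule: HTTP header charset
    wins over <meta http-equiv>, which wins over the HTTP default.
    """
    # 1. A charset parameter in the HTTP Content-Type header wins outright.
    for param in http_content_type.split(";")[1:]:
        name, _, value = param.strip().partition("=")
        if name.lower() == "charset" and value:
            return value.strip('"').lower()
    # 2. Otherwise the meta declaration applies, if present.
    if meta_charset:
        return meta_charset.lower()
    # 3. Otherwise the HTTP/1.1 default for text/* applies.
    return default

# The situation described above: server says Latin-1, page says UTF-8.
print(resolve_charset("text/html; charset=ISO-8859-1", "utf-8"))  # iso-8859-1
# Only when the header carries no charset does the meta tag get its say:
print(resolve_charset("text/html", "utf-8"))  # utf-8
```

This is why fixing the server configuration (not the page) is the only reliable cure for www.hut.fi's case.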
Regards, Martin.
This archive was generated by hypermail 2.1.5 : Tue Oct 01 2002 - 01:03:12 EDT