Unicode® Technical Standard #58

Unicode Link Detection and Formatting:
URLs and Email Addresses

Version 17.0
Editors Mark Davis, Markus Scherer
Date 2026-02-02
This Version https://www.unicode.org/reports/tr58/tr58-2.html
Previous Version https://www.unicode.org/reports/tr58/tr58-1.html
Latest Version https://www.unicode.org/reports/tr58/
Latest Proposed Update https://www.unicode.org/reports/tr58/proposed.html
Revision 2

Summary

When URLs are stored and exchanged in structured data, the start and end of each URL are clear, and the URL can be parsed according to the relevant specifications. However, when URLs appear as unmarked strings in text content, detecting their boundaries can be challenging. For example, some characters that are often used as sentence-level punctuation in text, such as parentheses, commas, and periods, can also be valid characters within a URL. Implementations often do not behave intuitively or consistently in such cases.

When a URL is inserted into text, non-ASCII characters and “special” characters can be percent-encoded, which can make it easy for a later process to find the start and end of the URL. However, escaping more characters than necessary, especially normal letters, can make the URL illegible for a human reader.

Similar problems exist for email addresses.

This document specifies two consistent, standardized mechanisms that address these problems:

  1. link detection: detecting URLs and email addresses embedded in plain text, in a way that properly handles non-ASCII characters, and
  2. minimal escaping: escaping as few non-ASCII code points as possible in the Path, Query, and Fragment portions of a URL.

The focus is on links with the Schemes http:, https:, and mailto: — and links where those Schemes are missing but implied. For these cases, the two mechanisms of detecting and formatting are aligned, so that a minimally escaped URL string between two spaces in flowing text is accurately detected, and a detected URL works when pasted into the address bar of a major browser.

Status

This document has been reviewed by Unicode members and other interested parties, and has been approved for publication by the Unicode Consortium. This is a stable document and may be used as reference material or cited as a normative reference by other specifications.

A Unicode Technical Standard (UTS) is an independent specification. Conformance to the Unicode Standard does not imply conformance to any UTS.

Please submit corrigenda and other comments with the online reporting form [Feedback]. Related information that is useful in understanding this document is found in the References. For more information see About Unicode Technical Reports and the Specifications FAQ. Unicode Technical Reports are governed by the Unicode Terms of Use.

1 Introduction

1.1 URLs

The standards for URLs and their implementations in browsers generally handle Unicode quite well, permitting people around the world to use their writing systems in those URLs. This is important: in writing their native languages, the majority of humanity uses characters that are not limited to A-Z, and they expect their characters to work equally well. To make these characters work seamlessly requires attention to issues often overlooked. For example, consider the common practice of providing user handles such as:

The first three of these work well in practice. Copying from the address bar and pasting into text provides a readable result. However, the last example contains non-ASCII characters: many browsers currently don't put the desired Unicode string onto the clipboard, and instead put an unreadable string there, as the following shows.

The names also expand in size and turn into very long strings:

While many people cannot read "महात्मा_गांधी", nobody can read %E0%A4%AE%E0%A4%B9%E0%A4%BE%E0%A4%A4%E0%A5%8D%E0%A4%AE%E0%A4%BE_…. This unintentional obfuscation also happens with URLs using Latin-script characters with accents:

Such cases are common, as few languages that use Latin-script characters are limited to the ASCII letters A-Z; English is a notable exception. This situation is doubly frustrating because un-obfuscated URLs such as https://www.youtube.com/@핑크퐁 and https://en.wikipedia.org/wiki/Antonín_Dvořák work fine as plain text: you can copy and paste them back into your address bar, and they go to the right page and display properly.

Notes

1.2 Email Addresses

Email addresses should also work seamlessly for all languages, and linkification is part of that. For example, an email client recognizes each email address in plain text and "linkifies" it for the recipient's convenience. Getting this to work as expected requires attention to the issues described below. With most email programs, when someone pastes in the plain text:

and sends to someone else, they receive it as:

1.3 Displaying Unmarked URLs and Email Addresses

URLs are linkified in many applications, such as when pasting into a word processor (triggered by typing a space afterwards, for example). However, many products (text messaging apps, video messaging chats, etc.) completely fail to recognize any non-ASCII characters other than in the domain name itself. And even among those that do recognize such non-ASCII characters, there are gratuitous differences in where they detect the end of the link.

Linkification is the process of adding links to URLs and email addresses in plain text, such as in email body text, text messaging, or video meeting chats. The first step in this process is link detection, which is determining the boundaries of each span of text that contains a URL. Each of these spans can then have a link applied to it. The functions that perform these operations are called a link detector and a linkifier, respectively. The specifications that define the URL format don’t specify how to handle link detection, because they are only concerned with the structure in isolation, not when it is embedded within flowing text.
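For illustration only, the distinction between a link detector and a linkifier might look like the following Python sketch; the LinkSpan type and the detect parameter are hypothetical, not part of this specification.

    from typing import Callable, Iterable, NamedTuple

    class LinkSpan(NamedTuple):
        start: int   # offset of the first code point of the link
        limit: int   # offset just past the last code point

    def linkify(text: str,
                detect: Callable[[str], Iterable[LinkSpan]]) -> str:
        """Toy linkifier: wraps each span found by a link detector in an
        HTML anchor. (A real linkifier must also HTML-escape the text.)"""
        out, prev = [], 0
        for span in detect(text):
            out.append(text[prev:span.start])
            url = text[span.start:span.limit]
            out.append('<a href="{0}">{0}</a>'.format(url))
            prev = span.limit
        out.append(text[prev:])
        return ''.join(out)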

The lack of a clear specification for link detection also leads many implementations to overuse percent escaping for non-ASCII characters when converting URLs into plain text.

While implementations often differ in how they linkify URLs and email addresses that contain only ASCII characters, the differences are even greater when non-ASCII characters are present. Such inconsistent handling of letters across writing systems can have a huge impact on usability. For example, which of the following would be more readable to the user?

For example, take the lists of links on [List of articles every Wikipedia should have] in the available languages. When those links are tested with major products, there are significant discrepancies: any two implementations may terminate the linkification at different places, or not linkify the URL at all. Such inconsistencies make it very difficult to exchange URLs between products within plain text, which is done surprisingly often.

This lack of predictable behavior causes problems for users and software companies alike. Having consistent rules for linkification can also lead to solutions for reported problems such as the following:

As linkification behavior becomes more predictable across platforms and applications, applications can limit escaping to what is minimally required. For example, in the following URL only one character would need escaping: the %29, an ASCII “)”. It would still need escaping because it is an unmatched closing parenthesis.

This specification provides a consistent, predictable solution in the form of standardized algorithms to define the behavior, one that works across the world’s languages. The corresponding Unicode character properties cover all Unicode characters, not just a small subset.

1.4 Focus

This specification currently focuses on the detection and formatting of the Path, Query, and Fragment and unquoted email local-parts, not on the Scheme or Host, or quoted email local-parts.

Internationalized domain names have strong limitations on their structure. They basically consist of a sequence of labels separated by label separators ("."), where each label consists of a sequence of one or more valid characters. (This is a basic overview: there are some edge cases.) There are some additional syntactic constraints as well. Characters outside of the valid characters and label separators definitely terminate the domain name (either at the start or end). (For more information, see UTS #46, Unicode IDNA Compatibility Processing.)

The start of a URL is also easy to determine when it has a known Scheme (such as “https://”). For domain names, there are structural limitations imposed by ICANN on TLDs (top-level domains, like .fr or .com). For example, a TLD cannot contain digits, hyphens, or CONTEXTJ or CONTEXTO characters, nor can it be less than a minimal length (single letters are not allowed for ASCII). (For more details, see [RZ-LGR].) Implementations also make use of the fact that there is a list of valid top-level domains [TLD List] — however, that list should not be used unless the implementation updates its copy regularly and frequently. There are other considerations when detecting domain names: consult Section 8 Security Considerations.

The parsing up to the Path, Query, or Fragment is as specified in [WHATWG URL: 4.4. URL parsing]. Implementations use this information and the structure of domain names to identify the Scheme and Host in link detection, and to format them with human-readable characters (instead of Punycode). For example, implementations must not include in link detection a host with a forbidden host code point, or a domain with a forbidden domain code point, and must not linkify if a domain is not a registrable domain. The terms forbidden host code point, forbidden domain code point, and registrable domain are defined in [WHATWG URL: Host representation]. An implementation would parse to the end of each of https://some.example.com, foo.рф, and xn--j1ay.xn--p1ai.
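As an illustration, a pre-check for such hosts might look like the following Python sketch. The two sets are transcribed from [WHATWG URL: Host representation] as of this writing; since that is a living standard, an implementation should track it directly rather than rely on this copy.

    # Forbidden host code points per WHATWG URL "Host representation":
    # NULL, TAB, LF, CR, SPACE, #, /, :, <, >, ?, @, [, \, ], ^, |
    FORBIDDEN_HOST = set('\x00\t\n\r #/:<>?@[\\]^|')
    # Forbidden domain code points add the C0 controls, '%', and DELETE.
    FORBIDDEN_DOMAIN = FORBIDDEN_HOST | {chr(c) for c in range(0x20)} | {'%', '\x7f'}

    def host_is_linkifiable(host, is_domain=True):
        """Reject hosts containing a forbidden (host or domain) code point."""
        forbidden = FORBIDDEN_DOMAIN if is_domain else FORBIDDEN_HOST
        return not any(cp in forbidden for cp in host)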

Similarly, quoted email local-parts, such as "Jane Doe"@example.com are already well specified. However, they are rarely used. This specification does not apply to quoted local-parts.

When it comes to the Path, Query, and Fragment, many implementations don't handle them well: it is much less clear to implementers how to handle the many different types of Unicode characters correctly in these Parts of the URL. The same is true of email local-parts; hence the focus of this specification.

2 Conformance

UTS58-C1. For a given version of Unicode, a conformant implementation shall replicate the same link detection results as those produced by Section 3 URL Link Detection.

UTS58-C2. For a given version of Unicode, a conformant implementation shall replicate the same minimal escaping results as those produced by Section 4 URL Minimal Escaping.

UTS58-C3. For a given version of Unicode, a conformant implementation shall replicate the same email link detection results as those produced by Section 5 Email Addresses.

3 URL Link Detection

The following table shows the relevant parts of a URL. For clarity, the separator characters are included in the examples. For more information see [WHATWG URL: Example URL Components].

Table 3-1. Parts of a URL

Scheme     Host (incl. Domain)   Port    Path                Query                      Fragment
https://   docs.foobar.com       :8000   /knowledge/area/    ?name=article&topic=seo    #top

Notes:

3.1 Processes

There are two main processes involved in Unicode link detection.

  1. Initiation. This requires determining the point within plain text where the parsing of a URL starts. When the Scheme is present for a URL (such as “http://”), determining the start of link detection is simple. However, the Scheme for a URL is commonly omitted when URLs are represented in text. For example, the string “adobe.com” should be recognized as being a URL when it occurs in the body of an email message, even though it does not have a Scheme.
  2. Termination. This requires determining the point within plain text where the parsing of a URL ends. A formal reading of the URL specs allows almost any character in certain URL parts, so those specs alone are insufficient for separating the end of the URL from the non-URL text after it.

There are two special cases. Both of these introduce some complications in the algorithm, because each of the Parts has different internal syntax and different initial characters, and can be followed by different Parts.

  1. "Soft" characters are not included in the link, unless they are followed by other characters that would be included. Here’s an example with ‘!’:
    1. “See abc.com?def!” — not included.
    2. “See abc.com?def!ghi” — is included.
  2. Closing brackets are not included in the link, unless they have a matching opening bracket — that doesn’t cross syntax characters. Here’s an example with ‘)’:
    1. “(See abc.com?def=a). And…” — not included.
    2. “See abc.com?def=(a). And…” — is included.

The algorithm is a single-pass algorithm with backup, that is, remembering the latest ‘safe’ point to break, and returning that where necessary. It also has a stack, so that it can determine when a closing bracket matches.

3.2 Initiation

As discussed in Section 1.4 Focus, the determination of the start of a URL is outside of the scope of this specification; the focus is on the part of a URL extending after the domain name.

3.3 Termination

Termination is much more challenging, because of the presence of characters from many different writing systems. While small, hard-coded sets of characters suffice for an ASCII implementation, there are over 150,000 Unicode characters, many with quite different behavior from ASCII characters. While in theory almost any Unicode character can occur in certain URL parts, in practice many characters have very restricted usage in URLs.

Initiation stops at any Path, Query, or Fragment, so the termination process takes over with a “/”, “?”, or “#” character. Each Path, Query, or Fragment can contain most Unicode characters. The key is to be able to determine, given a URL Part (such as a Query), when a character or sequence of characters should cause termination of the link detection, even though it would be valid per the URL specification.

It is impossible for a link detection algorithm to match user expectations in all circumstances, given the variation in usage of various characters both within and across languages. So the goal is to cover use cases as broadly as possible. Exceptional cases (URLs that need to use characters that would terminate) can still be appropriately linkified if those few characters are represented with % escapes.

At a high level, this specification defines three features:

  1. A method for identifying when to terminate link detection, based on Unicode character properties that define contexts for terminating the parsing of a URL.
    • This addresses, for example, the question of when a trailing period should be included in a link.
  2. A method for identifying balanced quotes and brackets that enclose a URL.
    • This addresses the distinction, for example, between enclosing the entire URL in parentheses and URLs that contain a segment that is enclosed in parentheses.
  3. An algorithm for doing the above, together with an enumerated property and a mapping property.

The focus is on the most common cases.

One of the goals is also predictability; it should be relatively easy for users to understand the link detection behavior at a high level.

3.4 Properties

This specification defines two properties for URL link detection and formatting. There is an additional property for email, defined in Section 5 Email Addresses.

The short property names are identical to the long property names.

3.4.1 Link_Term Property

Link_Term is an enumerated property of characters with five values: {Include, Hard, Soft, Close, Open}.
The short property value aliases are the same as the long ones.

Table 3-2. Link_Term Property Values

Value Description / Examples
Include There is no stop before the character; it is included in the link.
Example: letters
  • https://ja.wikipedia.org/wiki/アルベルト・アインシュタイン
Hard The URL terminates before this character.
Example: a space
  • Go to https://ja.wikipedia.org/wiki/アルベルト・アインシュタイン to find the material.
Soft The URL terminates before this character, if it is followed by a sequence of zero or more characters with the Soft value followed by a Hard value or end of string. That is: /\p{Link_Term=Soft}*(\p{Link_Term=Hard}|$)/
Example: a question mark
  • https://ja.wikipedia.org/wiki/アルベルト・アインシュタイン??abc
  • https://ja.wikipedia.org/wiki/アルベルト・アインシュタイン?? abc
  • https://ja.wikipedia.org/wiki/アルベルト・アインシュタイン??
Close If the character is paired with a previous character in the same URL Part (path, query, fragment) and within the same sequence of characters delimited by separators as described in the Termination Algorithm below, it is treated as Include. Otherwise it is treated as Hard.
Example: an end parenthesis
  • https://ja.wikipedia.org/wiki/(アルベルト)アインシュタイン
  • (https://ja.wikipedia.org/wiki/アルベルト)アインシュタイン
  • (https://ja.wikipedia.org/wiki/アルベルトアインシュタイン
Open Used to match Close characters.
Example: same as under Close

3.4.2 Link_Bracket Property

Link_Bracket is a string property of characters that, for each character in \p{Link_Term=Close}, returns the corresponding character with \p{Link_Term=Open}.

Example

  1. Link_Bracket('}') == '{'

The specification of the characters with each of these property values is given in Section 6.1 Property Assignments.

3.5 Termination Algorithm

The termination algorithm assumes that a domain (or other host) has been successfully parsed to the start of a Path, Query, or Fragment, as per the algorithm in [WHATWG URL: 3. Hosts (domains and IP addresses)].

This algorithm then processes each final URL Part [path, query, fragment] of the URL in turn. It stops when it encounters a code point that meets one of the terminating conditions and reports the last location in the current URL Part that is still safely considered inside the link. The algorithm terminates when encountering:

More formally:

The termination algorithm begins after the Host (and optionally Port) have been parsed, so there is potentially a Path, Query, or Fragment. In the algorithm below, each Part has three sets of Action strings that affect transitions within and between Parts:

Sequence Set         Action
Initiator            Starts the Part
Terminator Set       Terminates the Part
ClearStackOpen Set   Clears the stack of open brackets within the Part

Here are the sets of zero or more strings in each Sequence Set for each Part.

Table 3-3. Link Termination by URL Part

Part                 Initiator   Terminator set   ClearStackOpen set
path                 '/'         [?#]             [/]
query                '?'         [#]              [=\&]
fragment             '#'         [{:~:}]          []
fragment directive   ':~:'       []               [\&,{:~:}]

Fragment directives:

3.5.1 URL Link Detection Termination Algorithm

In the following:


  1. Set lastSafe = link_start; this marks the offset after the last code point that is included in the link detection (so far).
  2. Set part = none.
  3. Set limit = 125.
  4. Clear the openStack.
  5. Loop from i = start to n - 1
    1. If part ≠ none and one of the part.terminators matches at i
      1. Set previousPart = part.
      2. Set part = none.
    2. If part == none then try to match one of the URL Part initiators at i.
      1. If none of the initiators match, then stop and return lastSafe.
      2. Set part according to which URL Part’s initiator matches.
      3. If part is a Fragment Directive and previousPart is neither a Fragment nor a Fragment Directive, then stop and return lastSafe.
      4. Set i to just after the matched part.initiator.
      5. Set lastSafe = i.
      6. Clear the openStack.
      7. Continue loop
    3. If one of the part.clearStackOpen elements matches at i
      1. Set i to just after the matched part.clearStackOpen element.
      2. Set lastSafe = i.
      3. Clear the openStack.
      4. Continue loop
    4. Set LT = Link_Term(cp[i]).
    5. If LT == Include
      1. Set lastSafe = i + 1.
      2. Continue loop
    6. If LT == Soft
      1. Continue loop
    7. If LT == Hard
      1. Stop and return lastSafe
    8. If LT == Open
      1. If openStack.length() == limit, then stop and return lastSafe.
      2. Push cp[i] onto openStack
      3. Set lastSafe = i + 1.
      4. Continue loop.
    9. If LT == Close
      1. If openStack.isEmpty(), then stop and return lastSafe.
      2. Set lastOpen = openStack.pop().
      3. If Link_Bracket(cp[i]) == lastOpen
        1. Set lastSafe = i + 1.
        2. Continue loop.
      4. Else stop and return lastSafe.
  6. After the loop terminates, set link_limit to lastSafe and return.

For ease of understanding, this algorithm does not include all features of URL parsing. Any implementation that produces the same results as this algorithm is conformant. Such implementations can be optimized in various ways, and adapted to use a single-pass algorithm.
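For illustration, the following Python sketch mirrors the algorithm above under simplifying assumptions: the Link_Term and Link_Bracket property data are replaced by toy ASCII-only stand-ins, and fragment directives are omitted. It is not normative.

    from enum import Enum

    class LT(Enum):
        INCLUDE = 0
        HARD = 1
        SOFT = 2
        OPEN = 3
        CLOSE = 4

    # Toy stand-ins for the Link_Term and Link_Bracket property data;
    # the real assignments come from the data files in Section 6.
    SOFT_CHARS = set(".,;:!?")
    BRACKET = {')': '(', ']': '[', '}': '{', '>': '<'}   # Link_Bracket

    def link_term(cp):
        if cp.isspace():
            return LT.HARD
        if cp in SOFT_CHARS:
            return LT.SOFT
        if cp in BRACKET.values():
            return LT.OPEN
        if cp in BRACKET:
            return LT.CLOSE
        return LT.INCLUDE

    # Table 3-3, without the fragment-directive row:
    # part -> (initiator, terminator set, clearStackOpen set)
    PARTS = {
        'path':     ('/', {'?', '#'}, {'/'}),
        'query':    ('?', {'#'},      {'=', '&'}),
        'fragment': ('#', set(),      set()),
    }

    def terminate(text, start, limit=125):
        """Return link_limit: the offset just past the last code point
        included in the link. `start` must point at a Part initiator
        ('/', '?', or '#')."""
        last_safe = start
        part = None
        open_stack = []
        i, n = start, len(text)
        while i < n:
            cp = text[i]
            if part is not None and cp in part[1]:   # Part terminator matches
                part = None
            if part is None:                         # try to start a new Part
                part = next((p for p in PARTS.values() if p[0] == cp), None)
                if part is None:
                    return last_safe
                i += 1                               # just after the initiator
                last_safe = i
                open_stack.clear()
                continue
            if cp in part[2]:                        # clearStackOpen element
                i += 1
                last_safe = i
                open_stack.clear()
                continue
            lt = link_term(cp)
            if lt is LT.HARD:
                return last_safe
            if lt is LT.OPEN:
                if len(open_stack) == limit:
                    return last_safe
                open_stack.append(cp)
            elif lt is LT.CLOSE:
                if not open_stack or BRACKET.get(cp) != open_stack.pop():
                    return last_safe
            if lt is not LT.SOFT:                    # Include, Open, matched Close
                last_safe = i + 1
            i += 1
        return last_safe                             # end of text

For example, for text = "See abc.com?def=(a). And…", calling terminate(text, 11) (the offset of the "?") returns 19, the offset just past ")": the matched parentheses are included and the trailing period is excluded, matching the examples in Section 3.1.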

4 URL Minimal Escaping

The goal is to generate a serialized form of a URL that:

  1. is correctly parsed by modern browsers and other devices
  2. minimizes the use of percent-escapes
  3. is completely link-detected when isolated.

Note that if the serialized form is not isolated (not bounded by start/end of string or Hard characters), the linkification may extend beyond its bounds. For example, a URL serialized as abc.com/path1./path2%2E would fail to linkify correctly if pasted between the two X's in "See XX for more information.", resulting in “See Xabc.com/path1./path2%2EX for more information”.

The minimal escaping algorithm is parallel to the link detection algorithm. When serializing a URL, a character in a Path, Query, or Fragment is basically only percent-escaped if it is one of the following:

The minimally escaped result should be used whenever a URL is visible to end users. For example, bücher.de/bücher should appear — not xn--bcher-kva.de/b%C3%BCcher — in the following:

4.1 URL Minimal Escaping Algorithm

This algorithm only handles the formatting of the Path, Query, and Fragment URL Parts. Formatting of the Scheme, Host, and Port should be done as is customary for those URL Parts. For the Host (domain name), see also UTS #46: Unicode IDNA Compatibility Processing and its ToUnicode operation.

In the following:


  1. Set output = ""
  2. For each URL part in any non-empty Path, Query, Fragment, successively:
    1. Append to output: part.initiator
    2. Set copiedAlready = 0
    3. Clear the openStack
    4. Loop from i = 0 to n - 1
      1. If one of the part.terminators matches at i
        1. Set LT = Hard
      2. Else set LT = Link_Term(cp[i])
      3. If one of the part.clearStackOpen elements matches at i, clear the openStack.
      4. If LT == Include
        1. Append to output: any code points between copiedAlready (inclusive) and i (exclusive)
        2. Append to output: cp[i]
        3. Set copiedAlready = i + 1
        4. Continue loop
      5. If LT == Hard
        1. Append to output: any code points between copiedAlready (inclusive) and i (exclusive)
        2. Append to output: percentEscape(cp[i])
        3. Set copiedAlready = i + 1
        4. Continue loop
      6. If LT == Soft
        1. Continue loop
      7. If LT == Open
        1. If openStack.length() == 125, then do the same as LT == Hard.
        2. Else push cp[i] onto openStack and do the same as LT == Include
      8. If LT == Close
        1. Set lastOpen = openStack.pop(), or 0 if the openStack is empty
        2. If Link_Bracket(cp[i]) == lastOpen
          1. Do the same as LT == Include
        3. Else do the same as LT == Hard
    5. If part is not last
      1. Append to output: all code points between copiedAlready (inclusive) and n (exclusive)
    6. Else if copiedAlready < n
      1. Append to output: all code points between copiedAlready (inclusive) and n - 1 (exclusive)
      2. Append to output: percentEscape(cp[n - 1])
  3. Return output.

Any implementation that produces the same results is conformant. Such implementations can be optimized in various ways, and adapted to use single-pass processing.
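Continuing the (non-normative) sketch from Section 3.5.1, the minimal escaping algorithm might be rendered as follows; it reuses the toy LT, link_term, BRACKET, and PARTS stand-ins defined there, and takes the already-parsed Parts as input.

    def percent_escape(cp):
        """UTF-8 percent-escape one code point, e.g. '.' -> '%2E'."""
        return ''.join('%{:02X}'.format(b) for b in cp.encode('utf-8'))

    def minimally_escape(parts):
        """`parts` is a list of (part_name, text) pairs for the non-empty
        Path, Query, and Fragment, e.g. [('path', 'a/(b.'), ('query', 'x=1')]."""
        output = []
        for index, (name, text) in enumerate(parts):
            initiator, terminators, clear_stack = PARTS[name]
            output.append(initiator)
            copied = 0                         # copiedAlready in the spec
            open_stack = []
            n = len(text)
            for i, cp in enumerate(text):
                lt = LT.HARD if cp in terminators else link_term(cp)
                if cp in clear_stack:
                    open_stack.clear()
                if lt is LT.SOFT:
                    continue                   # deferred: copied later if rescued
                if lt is LT.OPEN:
                    if len(open_stack) == 125:
                        lt = LT.HARD
                    else:
                        open_stack.append(cp)
                        lt = LT.INCLUDE
                elif lt is LT.CLOSE:
                    last_open = open_stack.pop() if open_stack else None
                    lt = LT.INCLUDE if BRACKET.get(cp) == last_open else LT.HARD
                output.append(text[copied:i])  # flush any deferred Soft run
                output.append(cp if lt is LT.INCLUDE else percent_escape(cp))
                copied = i + 1
            if index < len(parts) - 1:
                output.append(text[copied:])   # a following initiator rescues Softs
            elif copied < n:
                output.append(text[copied:n - 1])
                output.append(percent_escape(text[n - 1]))
        return ''.join(output)

For example, minimally_escape([('query', 'def=(a).')]) yields “?def=(a)%2E”: the matched parentheses are copied as-is, and only the trailing period is escaped.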

Higher-level implementations can percent-escape additional characters to reduce confusability, especially when they are confusable with URL syntax characters, such as the glottal stop ‘Ɂ’ in a path. See Section 8 Security Considerations.

5 Email Addresses

Email link detection applies similar principles to URL Link Detection. An email address is of the form local-part@domain-name. The local-part can include unusual characters by quoting: enclosing it in "…", and using backslash to escape those characters. For example, "john\ doe"@example.com contains an escaped space. While the quoted local-part format can be easily supported if desired, it is also very rarely implemented in practice, so it is out of scope for this specification.

The email link detection algorithm is invoked whenever an '@' character is encountered at index n, followed by a valid domain name. The algorithm scans backward from the '@' sign to find the start of the local-part. If there is a "mailto:" before the local-part, then that is also included.

The only complications are introduced by the requirement in the specifications that the local-part cannot start or end with a ".", nor contain "..". For details of the format, see [RFC6530].

5.1 Link_Email Property

This specification defines one property for email link detection and formatting.

Link_Email is a binary property of characters, indicating the characters that can normally occur in the local-part of an email address, such as σωκράτης@example.com.

Example

  1. Link_Email('σ') == 'Yes'

The specification of the characters with this property value is given in Section 6.1 Property Assignments.

The short property name is identical to the long property name.

5.2 Email Detection Algorithm

The algorithm uses the property Link_Email to scan backwards, as follows.

In the following:


  1. If n = 0, fail to match.
  2. If cp[n - 1] == '.', fail to match.
  3. Scan backward through the text from i = n - 1 down to 0.
    1. If cp[i] == '.'
      1. If cp[i + 1] == '.', fail to match.
      2. Else continue scanning backward.
    2. Else if cp[i] is not in Link_Email, set start = i + 1 and terminate scanning.
    3. Else continue scanning backward. (If the scan reaches the beginning of the text without terminating, set start = 0.)
  4. If cp[start] == '.', fail to match.
  5. If start = n, fail to match.
  6. If "mailto:" is immediately before start, then set start = start-7.
  7. Set link_start to start and return.

As usual, any algorithm that produces the same results is conformant. Such algorithms can be optimized in various ways, and adapted to be a single pass algorithm for processing.
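As a non-normative illustration, the backward scan might look like the following Python sketch; the link_email predicate is a toy stand-in (ASCII atext per RFC 5322, plus, as a rough approximation of the UAX #31-based assignment in Section 6.1, any non-ASCII alphanumeric).

    # RFC 5322 atext (the ASCII characters allowed in an unquoted local-part).
    ATEXT = set("abcdefghijklmnopqrstuvwxyz"
                "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                "0123456789!#$%&'*+-/=?^_`{|}~")

    def link_email(cp):
        return cp in ATEXT or (ord(cp) > 0x7F and cp.isalnum())

    def email_local_part_start(text, n):
        """Sketch of Section 5.2: text[n] is '@' (with a valid domain after
        it); returns the start offset of the link, or None if no match."""
        if n == 0 or text[n - 1] == '.':
            return None
        start = 0                          # if the scan reaches the text start
        for i in range(n - 1, -1, -1):
            cp = text[i]
            if cp == '.':
                if text[i + 1] == '.':
                    return None            # ".." is not allowed
            elif not link_email(cp):
                start = i + 1
                break
        if text[start] == '.' or start == n:
            return None
        if start >= 7 and text[start - 7:start] == "mailto:":
            start -= 7                     # include a preceding "mailto:"
        return start

For instance, given the text "Contact mailto:john@example.com" with n at the '@', the function returns the offset of the 'm' in "mailto:".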

Table 5-1. Email Address Link Detection Examples

Contact john@example.com                     Stop backing up when a space is hit
Contact john.doe@example.com                 Include the medial dot
Contact アルベルト.アルベルト@example.com     Handle non-ASCII

Contact @example.😎                          No valid domain name
Contact @example.com                         No local-part
Contact john.@example.com                    No valid local-part
Contact john..doe@example.com                No valid local-part
Contact .john@example.com                    No valid local-part

In the last three examples, where the dots are illegal, linkification fails entirely. In principle, a customized implementation could stop in front of the problematic dots in the last two examples, linkifying only "doe@example.com" and "john@example.com", respectively. However, that is more error-prone.

5.3 Email Minimal Quoting Algorithm

The minimal email quoting algorithm for email addresses is trivial. If any characters are not in Link_Email, and yet the text is valid according to [RFC6530], then the entire local part needs to be in quotation marks (with backslashes for the ASCII characters that require them: double-quote and backslash).
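A sketch of this, reusing the toy link_email predicate from the sketch in Section 5.2, might look as follows; validity checking per [RFC6530] is assumed to happen elsewhere.

    def quote_local_part(local):
        """If every code point is in Link_Email (or is a legal '.'), return
        the local-part unchanged; otherwise wrap it in double quotes,
        backslash-escaping backslash and double quote."""
        if all(link_email(cp) or cp == '.' for cp in local):
            return local
        return '"' + local.replace('\\', '\\\\').replace('"', '\\"') + '"'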

6 Property Data

The assignments of Link_Term and Link_Bracket property values are defined by the following files:

6.1 Property Assignments

The initial property assignments are based on the following descriptions. However, their values may deviate from these descriptions in future versions. See Section 9 Stability. Note that most characters that cause link termination are still valid, but require % encoding.

Link_Term=Hard

Whitespace, non-characters, deprecated characters, controls, private-use, surrogates, unassigned,...

Link_Term=Soft

Termination characters and ambiguous quotation marks:

Link_Term=Open, Link_Term=Close

if Bidi_Paired_Bracket_Type(cp) == Open then Link_Term(cp) = Open

else if Bidi_Paired_Bracket_Type(cp) == Close then Link_Term(cp) = Close

else if cp == "<" then Link_Term(cp) = Open

else if cp == ">" then Link_Term(cp) = Close

Link_Term=Include

All other code points

Link_Bracket

if Bidi_Paired_Bracket_Type(cp) == Close then Link_Bracket(cp) = Bidi_Paired_Bracket(cp)

else if cp == ">" then Link_Bracket(cp) = "<"

else Link_Bracket(cp) = <none>

Only characters with Link_Term=Close have a Link_Bracket mapping.

See Bidi_Paired_Bracket_Type.
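Expressed as code, the derivation above might look like this sketch, where bpt and bpb are assumed accessors for the Bidi_Paired_Bracket_Type and Bidi_Paired_Bracket properties (available, for example, through a library such as ICU); they are not defined by this specification.

    def derive_open_close(cp, bpt, bpb):
        """Derive (Link_Term, Link_Bracket) for the Open/Close cases.
        All other code points fall to the Hard/Soft/Include rules above."""
        if bpt(cp) == 'Open' or cp == '<':
            return 'Open', None            # only Close characters get a mapping
        if bpt(cp) == 'Close':
            return 'Close', bpb(cp)
        if cp == '>':
            return 'Close', '<'
        return None, None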

Link_Email

In the ASCII range, the characters are as specified for ASCII, as per RFC 5322, Section 3.2.3. That is:

Outside of the ASCII range, the characters follow UAX31 identifiers. That is:

The reasons for this are that non-ASCII characters in the local-part are less commonly supported at this point, and local-parts supported on mail servers that go beyond ASCII are likely to have restrictions similar to those on programming identifiers. Implementations could also customize the set, and it can be broadened in the future.

7 Test Data

The following test files supply data for testing conformance to this specification. The format of each test is explained in the header of the test.

The test files are not applicable to results that are modified by a higher-level algorithm, as discussed in Security Considerations.

8 Security Considerations

Linkification in plain text is a service to users, and the end goal is to make it as useful as possible. It is a balancing act, because linkifying every substring in plain text that has a syntactically valid domain would both be a bad user experience (e.g., M.Sc.) and introduce security concerns.

The security considerations for the Path, Query, and Fragment are less critical than for domain names. See UTS #39: Unicode Security Mechanisms for more information about domain names.

A conformant implementation can use a fast low-level detection algorithm that simply finds all syntactically valid link opportunities — matching this specification — and then apply additional security checks at a higher level (linkification). The result of such checks could be to reject particular link detection results entirely, or to alter the bounds of the detected link.

For example, an implementation of linkification could completely reject detection for the following:

Beyond just security considerations, usability is also a factor: an implementation might refrain from linkifying helpers.py if there is no scheme before it, or when the context is a discussion of Python programming.

A higher level implementation could also adjust the boundaries from link detection, as in the following example:

In this example, it might move the start boundary so that the domain name doesn't contain two adjacent characters with different values for (Line_Break=Ideographic OR Complex_Context). This is a bit tricky, though, because it would block some reasonable URLs, like 最高のSONY製品.com.

Note that simply forcing characters to be percent-escaped in link formatting doesn't generally solve any problems; if anything, percent-escaping obfuscates characters even more than showing their regular appearance to users.

However, there are some exceptions. When characters can be confused with syntax characters, it is best to percent-escape them to reduce confusability and limit spoofing. See Section 4.1 URL Minimal Escaping Algorithm.

Right-to-left characters open up additional opportunities for spoofing, because their presence can alter the display order of characters. This is especially true of characters with the property value Bidi_Control=Yes; these will be percent-escaped by the Minimal Escaping algorithm. For display of BIDI URLs, see also HL4 in UAX #9, Unicode Bidirectional Algorithm.

Many real-world linkifiers and validators have length limits for URLs and email addresses, either as wholes or for certain Parts of them. This can help performance, avoid DoS attacks, and improve usability. Implementations of this specification are not required to support unlimited-length link detection or minimal escaping. It is unclear what the best limits are in practice; some guidance may be added in future versions of this specification.

There are documented cases of Format characters being used to sneak malicious instructions into LLMs; see “Invisible text that AI chatbots understand and humans can’t?”. URLs are just a small aspect of the larger problem of feeding clean text to LLMs, both in building them and in querying them: making sure the text does not have malformed encodings, is in a consistent Unicode Normalization Form (NFC), and so on.

For security implications of URLs in general, see UTS #39: Unicode Security Mechanisms. For related issues, see UTS #55: Unicode Source Code Handling.

9 Stability

As with other Unicode Specifications, the algorithms as well as property values and derivations may change in successive versions to adapt to new information and feedback from developers and end users.

The practical impact is expected to be very limited. Any unassigned characters will be escaped in formatting. Newly assigned characters are typically low frequency, and it will take a while before they show up in URLs, giving implementations ample time to upgrade. The worst case would be the very rare instance where a character is not escaped on the formatting system, but terminates the link on the detecting system. In that case, the link would be cut short, and the user would need to adjust it manually.

10 Migration

The easiest way for an implementation to get the benefit of the new mechanisms described here is to use an imported library that implements them. However, that can be disruptive, so the following provides some examples of how an implementation can achieve the same results with minimal modifications to its use of existing link detection and formatting code:

Migration: Link Detection

The implementation may call its existing code library for link detection, but then post-process the results. Such post-processing retains the existing performance and feature characteristics of the code library, including the recognition of the Scheme and Host, and then refines the results for the Path, Query, and Fragment. A typical problem is that the code library terminates too early. For implementations that 'mostly' handle non-ASCII characters, this will affect only a fraction of the detected links.

  1. Call the existing code library.
  2. Let S be the start of the link in plain text as detected by the existing code library, and E be the offset at the end of that link.
  3. If E is at the end of the string, or if the code point at E (meaning the code point immediately after the offset at the end of the detected link) has the value Link_Term=Hard, then return S and E.
  4. Scan backwards to find the last initiator of a Path, Query, or Fragment URL Part.
  5. Follow the Termination Algorithm from that point on.
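Put together, the steps above might look like the following sketch, where legacy_detect is a hypothetical stand-in for the existing library's detector and terminate is the function from the sketch in Section 3.5.1.

    def detect_link(text, legacy_detect):
        """Post-process a legacy detector per the migration steps above.
        `legacy_detect(text)` is assumed to return a (S, E) span."""
        s, e = legacy_detect(text)
        # Step 3: the string ends at E, or the next code point is Hard;
        # the legacy result is already correct.
        if e == len(text) or link_term(text[e]) is LT.HARD:
            return s, e
        # Step 4: scan backward to the last Path/Query/Fragment initiator.
        for i in range(e - 1, s - 1, -1):
            if text[i] in '/?#':
                # Step 5: re-run the Termination Algorithm from that point.
                # (A production version would guard against shrinking the span.)
                return s, terminate(text, i)
        return s, e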

Migration: Link Formatting

The implementation calls its existing code library for the Scheme and Host. It then invokes code implementing the URL Minimal Escaping algorithm for the Path, Query, and Fragment.

References

[RFC6530] J. Klensin, Y. Ko, Overview and Framework for Internationalized Email, RFC 6530, February 2012
https://datatracker.ietf.org/doc/html/rfc6530
[RZ-LGR] Internet Corporation for Assigned Names and Numbers (ICANN), Root Zone Label Generation Rules (RZ LGR-6): Overview and Summary, 23 September 2025
https://www.icann.org/sites/default/files/lgr/rz-lgr-6-overview-23sep25-en.pdf
[TLD List] Internet Assigned Numbers Authority (IANA), Domain Name Services: Root Zone Database
https://www.iana.org/domains/root/db
[UnicodeSet] Unicode Technical Standard #35: Unicode Locale Data Markup Language (LDML)
https://www.unicode.org/reports/tr35/#Unicode_Sets
[URL Fragment Text Directives] W3C Draft Community Group Report, URL Fragment Text Directives
https://wicg.github.io/scroll-to-text-fragment/#syntax
[WHATWG URL: 3. Hosts (domains and IP addresses)] WHATWG URL: 3. Hosts (domains and IP addresses)
https://url.spec.whatwg.org/#hosts-(domains-and-ip-addresses)
[WHATWG URL: 4.4. URL parsing] WHATWG URL: 4.4. URL parsing
https://url.spec.whatwg.org/#url-parsing
[WHATWG URL: Example URL Components] WHATWG URL: Example URL Components
https://url.spec.whatwg.org/#example-url-components
[WHATWG URL: Host representation] WHATWG URL: Host representation
https://url.spec.whatwg.org/#host-representation

Acknowledgments

Mark Davis authored the bulk of the text, under direction from the Unicode Technical Committee.

Thanks to the following people for their contributions or feedback on this document or on test cases: Arnt Gulbrandsen, Asmus Freytag, Dennis Tan, Elika Etemad, Geraldo Ferreira, Hayato Ito, Jim Hunt, Josh Hadley, Jules Bertholet, Markus Scherer, Mathias Bynens, Peter Constable, Pitinan Kooarmornpatana, Robin Leroy, Sarmad Hussain. Thanks especially to Asmus Freytag for his thorough review.

Modifications

The following summarizes modifications from the previous revision of this document.

Revision 2