diff --git a/source b/source index 8d97a837fcf..448c4b728fc 100644 --- a/source +++ b/source @@ -2509,6 +2509,19 @@ a.setAttribute('href', 'https://example.com/'); // change the content attribute syntax is defined in Media Fragments URI. MEDIAFRAG
+The following terms are defined in URL Pattern: URLPATTERN
+ URLPatternInit
The following terms are defined in MIME Sniffing: MIMESNIFF
@@ -2582,6 +2597,7 @@ a.setAttribute('href', 'https://example.com/'); // change the content attribute
about:blank
Sec-Purpose`
The following terms are defined in The No-Vary-Search HTTP Response Header + Field: NOVARYSEARCH
+ No-Vary-Search`
The following terms are defined in Selectors: SELECTORS
<selector-list>
The following features are defined in CSS Values and Units:
@@ -4530,6 +4567,7 @@ a.setAttribute('href', 'https://example.com/'); // change the content attribute
The following terms are defined in Storage: STORAGE
Ensure that the CSS :link/:visited/etc. pseudo-classes
are updated appropriately.
For each descendant of document's + shadow-including descendants:
+ +If descendant is a script element whose result is a speculation rules parse result,
+ then:
Let oldResult be descendant's result.
Let newResult be the result of creating a speculation rules parse result given descendant's + child text content and descendant's node document.
Update speculation rules given descendant's relevant global + object, oldResult, and newResult.
Consider speculative loads given document.
This means that changing the base
@@ -25247,13 +25311,60 @@ document.body.appendChild(wbr);
url to url.
-When elements implementing the HTMLHyperlinkElementUtils mixin are created, and
- whenever those elements have their href content
- attribute set, changed, or removed, the user agent must set the url.
When elements implementing the HTMLHyperlinkElementUtils mixin are created, the
+ user agent must set the url.
The HTML element insertion steps for a and area
+ elements, given insertedNode, are:
If insertedNode is not connected, then return.
Consider speculative loads given insertedNode's node + document.
The HTML element removing steps for a and area elements,
+ given removedNode and oldParent, are:
This is only observable for blob: URLs as
- parsing them involves a Blob URL Store lookup.
If oldParent is not connected, then return.
Consider speculative loads given oldParent's node + document.
The HTML element moving steps for a and area elements,
+ given movedNode, are:
Consider speculative loads given movedNode's node + document.
The following attribute change
+ steps, given element, localName, oldValue,
+ value, and namespace, are used for all a and area
+ elements:
If namespace is not null, then return.
If oldValue equals value, then return.
If localName is href, then set the url given element.
This is only observable for blob: URLs as
+ parsing them involves a Blob URL Store lookup.
If localName is href, referrerpolicy, or rel, then consider speculative loads given
+ element's node document.
An element implementing the HTMLHyperlinkElementUtils mixin has an associated
reinitialize url algorithm, which runs these
@@ -25740,19 +25851,13 @@ document.body.appendChild(wbr);
If hyperlinkSuffix is non-null, then append it to urlString.
Let referrerPolicy be the current state of subject's referrerpolicy content attribute.
If subject's link
- types includes the noreferrer keyword, then set
- referrerPolicy to "no-referrer".
Navigate targetNavigable to urlString using subject's node document, with referrerPolicy set to referrerPolicy, userInvolvement set to userInvolvement, - and sourceElement set to subject.
+ referrerPolicy set to subject's + hyperlink referrer policy, userInvolvement set to userInvolvement, and + sourceElement set to subject. Unlike many other types of navigations, following hyperlinks does not have
special "replace" behavior for when
@@ -25762,6 +25867,18 @@ document.body.appendChild(wbr);
The hyperlink referrer policy for an element subject is the value + returned by the following steps:
+ +If subject's link
+ types includes the noreferrer keyword, then return
+ "no-referrer".
Return the current state of subject's referrerpolicy content attribute.
Setting the attribute to an ASCII case-insensitive match for "speculationrules" means that the script defines a speculation rule
+ set, containing JSON that will be used to describe speculative loads.
Setting the attribute to any other value means that the script is a data block, which is not processed by the user agent, but instead by author script or other tools. Authors @@ -63546,6 +63668,16 @@ interface HTMLScriptElement : HTMLElement {
The contents of inline script elements for import
maps must conform with the import map authoring requirements.
The contents of inline script elements for speculation rule sets must conform with the speculation rule set authoring
+ requirements.
When used to include data blocks, the data must be embedded
inline, the format of the data must be given using the type
attribute, and the contents of the script element must conform to the requirements
@@ -63779,6 +63915,9 @@ interface HTMLScriptElement : HTMLElement {
If type is "importmap", then return
true.
If type is "speculationrules", then return
+ true.
Return false.
A script element has a type, which is
- either null, "classic", "module", or "importmap", initially null. It is determined when the element is prepared, based on the type attribute of the element at that time.
classic", "module", "importmap", or "speculationrules", initially null. It is
+ determined when the element is prepared, based on
+ the type attribute of the element at that time.
A script element has a result, which is either "uninitialized", null (representing an error), a script, or an import map parse result. It is
- initially "uninitialized".
a script, an import map parse result, or a speculation rules parse result. It is initially "uninitialized".
A script element has steps to run when the result
is ready, which are a series of steps or null, initially null. To mark as ready a
- script element el given a script, import map parse result, or null
- result:
script element el given a result:
Set el's result to @@ -64094,11 +64232,33 @@ document.body.append(script1, script2);
Prepare the script element given insertedNode.
The script children changed steps are:
The script HTML element removing steps given removedNode
+ are:
Run the script HTML element post-connection steps, given the
- script element.
If removedNode's result is a + speculation rules parse result, then:
+ +Unregister speculation rules given removedNode's relevant + global object and removedNode's result.
Set removedNode's already started to false.
Set removedNode's result to + null.
The script children changed steps given changedNode
+ are:
Run the script HTML element post-connection steps, given
+ changedNode.
importmap".
+ Otherwise, if the script block's type string is an ASCII
+ case-insensitive match for the string "speculationrules", then set
+ el's type to "speculationrules".
Otherwise, return. (No script is executed, and el's type is left as null.)
Let cspType be "script speculationrules" if
+ el's type is "speculationrules"; otherwise, "script".
If el does not have a src content attribute, and the Should element's inline
behavior be blocked by Content Security Policy? algorithm returns "Blocked" when given el, "script", and
- source text, then return. CSP
If el has an event attribute and a
If el's type is "importmap", then queue an element task on the DOM
- manipulation task source given el to fire
- an event named error at el, and
- return.
If el's type is "importmap" or "speculationrules", then queue an
+ element task on the DOM manipulation task source given el to
+ fire an event named error at el, and return.
External import map scripts are not currently supported. See WICG/import-maps issue #235 for - discussions on adding support.
+External import maps and speculation rules are not currently supported. See WICG/import-maps issue #235 and WICG/nav-speculation issue #348 + for discussions on adding support.
Let src be the value of el's
Mark as ready el given result.
"speculationrules"
Let result be the result of creating a speculation rules parse result given source text + and document.
Mark as ready el given result.
"speculationrules"
Register speculation rules given el's relevant global + object and el's result.
pre-media".
+ If navigationParams's navigable
+ is a top-level traversable, then process the `Speculation-Rules`
+ header given document and navigationParams's response.
This is conditional because speculative loads are only considered for + top-level traversables, so it would be wasteful to fetch these rules otherwise.
+Potentially free deferred fetch quota for document.
Return document.
Speculative loading is the practice of performing navigation actions, such as prefetching, + ahead of navigation starting. This makes subsequent navigations faster.
+ +Developers can initiate speculative loads by using speculation rules. User agents might also perform speculative loads in certain + implementation-defined scenarios, such as typing into the address bar.
+ +Speculation rules are how developers instruct the + browser about speculative loading operations that the developer believes will be beneficial. They + are delivered as JSON documents, via either:
+ +inline script elements with their type
+ attribute set to "speculationrules"; or
resources fetched from a URL specified in the `Speculation-Rules` HTTP
+ response header.
The following JSON document is parsed into a speculation rule set specifying a + number of desired conditions for the user agent to start a referrer-initiated navigational + prefetch:
+ +{
+ "prefetch": [
+ {
+ "urls": ["/chapters/5"]
+ },
+ {
+ "eagerness": "moderate",
+ "where": {
+ "and": [
+ { "href_matches": "/*" },
+ { "not": { "selector_matches": ".no-prefetch" } }
+ ]
+ }
+ }
+ ]
+}
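For instance, a rule set like the one above could be delivered inline as a data block (an illustrative sketch; the markup and URL are examples only):

<script type="speculationrules">
{
  "prefetch": [
    { "urls": ["/chapters/5"] }
  ]
}
</script>

The same JSON could instead be served as an application/speculationrules+json resource whose URL is listed in a `Speculation-Rules` response header.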
+ A JSON document representing a speculation rule set must meet the following + speculation rule set authoring requirements:
+ +It must be valid JSON. JSON
The JSON must represent a JSON object, with at most three keys "tag",
+ "prefetch" and "prerender".
In this standard, "prerender" is optionally converted to
+ "prefetch" at parse time. Some
+ implementations might implement different behavior for prerender, as specified in
+ Prerendering Revamped. PRERENDERING-REVAMPED
The value corresponding to the "tag" key, if present, must be a
+ speculation rule tag.
The values corresponding to the "prefetch" and "prerender" keys, if present, must be arrays of valid speculation rules.
A valid speculation rule is a JSON object that meets the following requirements:
+ +It must have at most the following keys: "source", "urls", "where", "relative_to",
+ "eagerness", "referrer_policy", "tag", "requires", "expects_no_vary_search", or "target_hint".
In this standard, "target_hint" is ignored.
The value corresponding to the "source" key, if present, must be
+ either "list" or "document".
If the value corresponding to the "source" key is "list", then the "urls" key must be present, and the
+ "where" key must be absent.
If the value corresponding to the "source" key is "document", then the "urls" key must be absent.
The "urls" and "where" keys must not both be
+ present.
If the value corresponding to the "source" key is "document" or the "where" key is present, then the "relative_to" key must be absent.
The value corresponding to the "urls" key, if present, must be an
+ array of valid URL strings.
The value corresponding to the "where" key, if present, must be a
+ valid document rule predicate.
The value corresponding to the "relative_to" key, if present, must
+ be either "ruleset" or "document".
The value corresponding to the "eagerness" key, if present, must be
+ a speculation rule eagerness.
The value corresponding to the "referrer_policy" key, if present,
+ must be a referrer policy.
The value corresponding to the "tag" key, if present, must be a
+ speculation rule tag.
The value corresponding to the "requires" key, if present, must be
+ an array of speculation rule
+ requirements.
The value corresponding to the "expects_no_vary_search" key, if
+ present, must be a string that is parseable as a `No-Vary-Search` header value.
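For example, the following is a valid speculation rule that uses several of the optional keys (the URL, tag, and `No-Vary-Search` values are illustrative):

{
  "source": "list",
  "urls": ["next.html"],
  "relative_to": "document",
  "eagerness": "moderate",
  "referrer_policy": "no-referrer",
  "tag": "chapter-links",
  "expects_no_vary_search": "params=(\"session\")"
}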
A valid document rule predicate is a JSON object that meets the following + requirements:
+ +It must contain exactly one of the keys "and", "or", "not", "href_matches", or
+ "selector_matches".
It must not contain any keys apart from the above or "relative_to".
If it contains the key "relative_to", then it must also contain the
+ key "href_matches".
The value corresponding to the "relative_to" key, if present, must
+ be either "ruleset" or "document".
The value corresponding to the "and" or "or"
+ keys, if present, must be arrays of valid document
+ rule predicates.
The value corresponding to the "not" key, if present, must be a
+ valid document rule predicate.
The value corresponding to the "href_matches" key, if present, must
+ be either a valid URL pattern input or an array of valid URL pattern inputs.
The value corresponding to the "selector_matches" key, if present,
+ must be either a string matching <selector-list> or an array of
+ strings that match <selector-list>.
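For example, the following is a valid document rule predicate; it matches links whose URL matches the given pattern (resolved against the document base URL) and which do not carry an opt-out class (the pattern and class name are illustrative):

{
  "and": [
    { "href_matches": "/articles/*", "relative_to": "document" },
    { "not": { "selector_matches": ".no-prefetch" } }
  ]
}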
A valid URL pattern input is either:
+ +a scalar value string that can be successfully parsed as a URL pattern constructor string, or;
a JSON object whose keys are drawn from the members of the URLPatternInit
+ dictionary and whose values are scalar value
+ strings.
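For example, both of the following are valid URL pattern inputs (with illustrative values): the string form

"https://example.com/books/*"

and the object form

{ "pathname": "/books/*", "search": "lang=en" }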
A speculation rule set is a struct with the following items:
+ +prefetch rules, a list of speculation rules, initially empty
In the future, other rules will be possible, e.g., prerender rules. See + Prerendering Revamped for such not-yet-accepted extensions. + PRERENDERING-REVAMPED
+ +A speculation rule is a struct with the following items:
+ +URLs, an ordered set of URLs
predicate, a document rule predicate or + null
eagerness, a speculation rule + eagerness
referrer policy, a referrer + policy
tags, an ordered set of speculation rule tags
requirements, an ordered set + of speculation rule requirements
No-Vary-Search hint, a URL search + variance
A document rule predicate is one of the following:
+ +a document rule conjunction;
a document rule disjunction;
a document rule negation;
a document rule URL pattern predicate; or
a document rule selector predicate.
A document rule conjunction is a struct with the following items:
+ +clauses, a list of document rule predicates
A document rule disjunction is a struct with the following items:
+ +clauses, a list of document rule predicates
A document rule negation is a struct with the following items:
+ +clause, a document rule predicate
A document rule URL pattern predicate is a struct with the following + items:
+ +patterns, a list of URL patterns
A document rule selector predicate is a struct with the following items:
+ +selectors, a list of selectors
A speculation rule eagerness is one of the following strings:
"immediate"
The developer believes that performing the associated speculative loads is very likely to + be worthwhile, and they might also expect that load to require significant lead time to complete. + User agents should usually enact the speculative load candidate as soon as practical, subject + only to considerations such as user preferences, device conditions, and resource limits.
"eager"
User agents should enact the speculative load candidate on even a slight suggestion that + the user may navigate to this URL in the future. For instance, the user might have moved the + cursor toward a link or hovered it, even momentarily, or paused scrolling when the link is one of + the more prominent ones in the viewport. The author is seeking to capture as many navigations as + possible, as early as possible.
"moderate"
User agents should enact the candidate if user behavior suggests the user may navigate to
+ this URL in the near future. For instance, the user might have scrolled a link into the viewport
+ and shown signs of being likely to click it, e.g., by moving the cursor over it for some time.
+ The developer is seeking a balance between "eager" and
+ "conservative".
"conservative"
User agents should enact the candidate only when the user is very likely to navigate to + this URL at any moment. For instance, the user might have begun to interact with a link. The + developer is seeking to capture some of the benefits of speculative loading with a fairly small + tradeoff of resources.
A speculation rule eagerness A is less eager than another speculation rule + eagerness B if A follows B in the above list.
+ +A speculation rule eagerness A is at least as eager as another speculation rule + eagerness B if A is not less + eager than B.
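For example, "eager" is less eager than "immediate", and is at least as eager as "moderate" and "conservative" (as well as itself).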
+ +A speculation rule tag is either an ASCII string whose code points are all in the range U+0020 to U+007E inclusive, or + null.
+ +This code point range restriction ensures the value can be sent in an HTTP header + with no escaping or modification.
+ +A speculation rule requirement is the string "anonymous-client-ip-when-cross-origin".
In the future, more possible requirements might be defined.
+ +Since speculative loading is a progressive enhancement, this standard is fairly conservative + in its parsing behavior. In particular, unknown keys or invalid values usually cause parsing + failure, since it is safer to do nothing than to possibly misinterpret a speculation rule.
+ +That said, parsing failure for a single speculation rule still allows other speculation rules + to be processed. It is only in the case of top-level misconfiguration that the entire speculation + rule set is discarded.
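For example, in the following rule set (with illustrative URLs), the second rule contains an unrecognized key and is therefore discarded, while the first rule is still parsed and used:

{
  "prefetch": [
    { "urls": ["/next"] },
    { "urls": ["/other"], "unknown_option": true }
  ]
}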
+To parse a speculation rule set string given a string input,
+ a Document document, and a URL baseURL:
Let parsed be the result of parsing a JSON string to an Infra value given input.
If parsed is not a map, then throw a
+ TypeError indicating that the top-level value needs to be a JSON object.
Let result be a new speculation rule set.
Let tag be null.
If parsed["tag"] exists:
If parsed["tag"] is not a speculation rule
+ tag, then throw a TypeError indicating that the speculation rule tag is
+ invalid.
Set tag to parsed["tag"].
Let typesToTreatAsPrefetch be « "prefetch" ».
The user agent may append "prerender" to typesToTreatAsPrefetch.
Since this specification + only includes prefetching, this allows user agents to treat requests for prerendering as + requests for prefetching. User agents which implement prerendering, per the Prerendering + Revamped specification, will instead interpret these as prerender requests. + PRERENDERING-REVAMPED
+For each type of + typesToTreatAsPrefetch:
+ +If parsed[type] exists:
+ +If parsed[type] is a list, then for each rule of parsed[type]:
+ +Let rule be the result of parsing + a speculation rule given rule, tag, document, and + baseURL.
If rule is null, then continue.
Append rule to result's + prefetch rules.
Otherwise, the user agent may report a warning to the console indicating + that the rules list for type needs to be a JSON array.
Return result.
To parse a speculation rule given a map
+ input, a speculation rule tag rulesetLevelTag, a
+ Document document, and a URL baseURL:
If input is not a map:
+ +The user agent may report a warning to the console indicating that the rule + needs to be a JSON object.
Return null.
If input has any key other than "source", "urls", "where", "relative_to", "eagerness", "referrer_policy", "tag", "requires", "expects_no_vary_search", or "target_hint":
The user agent may report a warning to the console indicating that the rule + has unrecognized keys.
Return null.
"target_hint" has no impact on the processing model in this standard. However,
+ implementations of Prerendering Revamped can use it for prerendering rules, and so
+ requiring user agents to fail parsing such rules would be counterproductive.
+ PRERENDERING-REVAMPED.
Let source be null.
If input["source"] exists, then set source to input["source"].
Otherwise, if input["urls"] exists and input["where"] does not exist, then set source to "list".
Otherwise, if input["where"] exists and input["urls"] does not exist, then set source to "document".
If source is neither "list" nor "document":
The user agent may report a warning to the console indicating that a source + could not be inferred or an invalid source was specified.
Return null.
Let urls be an empty list.
Let predicate be null.
If source is "list":
If input["where"] exists:
The user agent may report a warning to the console indicating that there + were conflicting sources for this rule.
Return null.
If input["relative_to"] exists:
If input["relative_to"] is neither "ruleset" nor "document":
The user agent may report a warning to the console indicating that the + supplied relative-to value was invalid.
Return null.
If input["relative_to"] is "document", then set baseURL to document's
+ document base URL.
If input["urls"] does not exist or is not a list:
The user agent may report a warning to the console indicating that the + supplied URL list was invalid.
Return null.
For each urlString of input["urls"]:
If urlString is not a string:
+ +The user agent may report a warning to the console indicating that the + supplied URL must be a string.
Return null.
Let parsedURL be the result of URL parsing + urlString with baseURL.
If parsedURL is failure, or parsedURL's scheme is not an HTTP(S) scheme:
+ +The user agent may report a warning to the console indicating that the + supplied URL string was unparseable.
Continue.
Append parsedURL to + urls.
If source is "document":
If input["urls"] or input["relative_to"] exists:
The user agent may report a warning to the console indicating that there + were conflicting sources for this rule.
Return null.
If input["where"] does not exist, then set predicate to a document rule
+ conjunction whose clauses is an empty
+ list.
Such a predicate will match all links.
+Otherwise, set predicate to the result of parsing a document rule predicate given input["where"], document, and baseURL.
If predicate is null, then return null.
Let eagerness be "immediate" if
+ source is "list"; otherwise, "conservative".
If input["eagerness"] exists:
If input["eagerness"] is not a speculation rule
+ eagerness:
The user agent may report a warning to the console indicating that the + eagerness was invalid.
Return null.
Set eagerness to input["eagerness"].
Let referrerPolicy be the empty string.
If input["referrer_policy"] exists:
If input["referrer_policy"] is not a referrer
+ policy:
The user agent may report a warning to the console indicating that the + referrer policy was invalid.
Return null.
Set referrerPolicy to input["referrer_policy"].
Let tags be an empty ordered set.
If rulesetLevelTag is not null, then append + rulesetLevelTag to tags.
If input["tag"] exists:
If input["tag"] is not a speculation rule
+ tag:
The user agent may report a warning to the console indicating that the + tag was invalid.
Return null.
Append input["tag"]
+ to tags.
If tags is empty, then append null to tags.
Assert: tags's size is either 1 or + 2.
Let requirements be an empty ordered set.
If input["requires"] exists:
If input["requires"] is not a list:
The user agent may report a warning to the console indicating that the + requirements were not understood.
Return null.
For each requirement of
+ input["requires"]:
If requirement is not a speculation rule requirement:
+ +The user agent may report a warning to the console indicating that the + requirement was not understood.
Return null.
Append requirement to + requirements.
Let noVarySearchHint be the default URL search variance.
If input["expects_no_vary_search"] exists:
If input["expects_no_vary_search"] is not a
+ string:
The user agent may report a warning to the console indicating that the
+ `No-Vary-Search` hint was invalid.
Return null.
Set noVarySearchHint to the result of parsing a URL search variance given input["expects_no_vary_search"].
Return a speculation rule with:
+ +To parse a document rule predicate given a value input, a
+ Document document, and a URL baseURL:
If input is not a map:
+ +The user agent may report a warning to the console indicating that the + document rule predicate was invalid.
Return null.
If input does not contain exactly one of "and", "or", "not", "href_matches", or "selector_matches":
The user agent may report a warning to the console indicating that the + document rule predicate was empty or ambiguous.
Return null.
Let predicateType be the single key found in the previous step.
If predicateType is "and" or "or":
If input has any key other than + predicateType:
+ +The user agent may report a warning to the console indicating that the + document rule predicate had unexpected extra options.
Return null.
If input[predicateType] is not a list:
+ +The user agent may report a warning to the console indicating that the + document rule predicate had an invalid clause list.
Return null.
Let clauses be an empty list.
For each rawClause of + input[predicateType]:
+ +Let clause be the result of parsing a document rule predicate given rawClause, + document, and baseURL.
If clause is null, then return null.
Append clause to + clauses.
If predicateType is "and", then return a
+ document rule conjunction whose clauses is
+ clauses.
Return a document rule disjunction whose clauses is clauses.
If predicateType is "not":
If input has any key other than "not":
The user agent may report a warning to the console indicating that the + document rule predicate had unexpected extra options.
Return null.
Let clause be the result of parsing a document rule predicate given + input[predicateType], document, and + baseURL.
If clause is null, then return null.
Return a document rule negation whose clause is clause.
If predicateType is "href_matches":
If input has any key other than "href_matches" or "relative_to":
The user agent may report a warning to the console indicating that the + document rule predicate had unexpected extra options.
Return null.
If input["relative_to"] exists:
If input["relative_to"] is neither "ruleset" nor "document":
The user agent may report a warning to the console indicating that the + supplied relative-to value was invalid.
Return null.
If input["relative_to"] is "document", then set baseURL to document's
+ document base URL.
Let rawPatterns be input["href_matches"].
If rawPatterns is not a list, then set rawPatterns to + « rawPatterns ».
Let patterns be an empty list.
For each rawPattern of + rawPatterns:
+ +Let pattern be the result of building a URL pattern from an Infra value given rawPattern + and baseURL. If this step throws an exception, catch the exception and set + pattern to null.
If pattern is null:
+ +The user agent may report a warning to the console indicating that the + supplied URL pattern was invalid.
Return null.
Append pattern to + patterns.
Return a document rule URL pattern predicate whose patterns is patterns.
If predicateType is "selector_matches":
If input has any key other than "selector_matches":
The user agent may report a warning to the console indicating that the + document rule predicate had unexpected extra options.
Return null.
Let rawSelectors be input["selector_matches"].
If rawSelectors is not a list, then set rawSelectors + to « rawSelectors ».
Let selectors be an empty list.
For each rawSelector of + rawSelectors:
+ +Let parsedSelectorList be failure.
If rawSelector is a string, then set parsedSelectorList to the + result of parsing a selector given + rawSelector.
If parsedSelectorList is failure:
+ +The user agent may report a warning to the console indicating that the + supplied selector list was invalid.
Return null.
For each selector of + parsedSelectorList, append selector + to selectors.
Return a document rule selector predicate whose selectors is selectors.
Assert: this step is never reached, as one of the previous branches was + taken.
A speculative load candidate is a struct with the following items:
+ +URL, a URL
No-Vary-Search hint, a URL search + variance
eagerness, a speculation rule + eagerness
referrer policy, a referrer + policy
tags, an ordered set of + speculation rule tags
A prefetch candidate is a speculative load candidate with the following + additional item:
+ +anonymization policy, a + prefetch IP anonymization policy
A prefetch IP anonymization policy is either null or a cross-origin prefetch + IP anonymization policy.
+ +A cross-origin prefetch IP anonymization policy is a struct whose + single item is its origin, an + origin.
+ +A speculative load candidate candidateA is redundant with another speculative load + candidate candidateB if the following steps return true:
+ +If candidateA's No-Vary-Search hint + is not equal to candidateB's No-Vary-Search + hint, then return false.
If candidateA's URL is not + equivalent modulo search variance to candidateB's URL given candidateA's No-Vary-Search hint, then return false.
Return true.
The requirement that the No-Vary-Search hints be + equivalent is somewhat strict. It means that some cases which could theoretically be treated as + matching, are not treated as such. Thus, redundant speculative loads could happen.
+ +However, allowing more lenient matching makes the check no longer an equivalence relation, and + producing such matches would require an implementation strategy that does a full comparison, + instead of a simpler one using normalized URL keys. This is in line with the best practices for + server operators, and attendant HTTP cache implementation notes, in No + Vary Search § 6 Comparing.
+ +In practice, we do not expect this to cause redundant speculative loads, since server
+ operators and the corresponding speculation rules-writing web developers will follow best
+ practices and use static `No-Vary-Search` header values/speculation rule hints.
Consider three speculative load + candidates:
+ +A has a URL of https://example.com?a=1&b=1 and a No-Vary-Search hint parsed from params=("a").
B has a URL of https://example.com?a=2&b=1 and a No-Vary-Search hint parsed from params=("b").
C has a URL of https://example.com?a=2&b=2 and a No-Vary-Search hint parsed from params=("a").
With the current definition of redundant + with, none of these candidates are redundant with each other. A speculation rule + set which contained all three could cause three separate speculative loads.
+ +A definition which did not require equivalent No-Vary-Search hints could consider A and + B to match (using A's No-Vary-Search + hint), and B and C to match (using B's No-Vary-Search hint). But it could not consider + A and C to match, so it would not be transitive, and thus not an + equivalence relation.
+Every Document has speculation rule sets, a
+ list of speculation rule sets, initially
+ empty.
Every Document has a consider speculative loads microtask queued, a
+ boolean, initially false.
To consider speculative loads for a Document document:
If document's node navigable is not a top-level + traversable, then return.
+ +Supporting speculative loads into child + navigables has some complexities and is not currently defined. It might be possible to + define it in the future.
+If document's consider speculative loads microtask queued is true, + then return.
Set document's consider speculative loads microtask queued to + true.
Queue a microtask given document to run the following steps:
+ +Set document's consider speculative loads microtask queued to + false.
Run the inner consider speculative loads steps for + document.
In addition to the call sites explicitly given in this standard:
+ +When style recalculation would cause selector matching results to change, the user agent
+ must consider speculative loads for the relevant Document.
When the user indicates interest in hyperlinks, in one of the + implementation-defined ways that the user agent uses to implement the + speculation rule eagerness heuristics, the user agent may consider + speculative loads for the hyperlink's node document.
+ +For example, a user agent which implements "conservative" eagerness by watching for pointerdown events would want to consider speculative
+ loads as part of reacting to such events.
In this standard, every call to consider speculative loads is given just a
+ Document, and the algorithm re-computes all possible candidates in a stateless way.
+ A real implementation would likely cache previous computations, and pass along information from
+ the call site to make updates more efficient. For example, if an a element's href attribute is changed, that specific element could be
+ passed along in order to update only the related speculative load candidate.
Note that because of how consider speculative loads queues a microtask, by the + time the inner consider speculative loads steps are run, multiple updates (or cancelations) might be processed + together.
+The inner consider speculative loads steps for a Document
+ document are:
If document is not fully active, then return.
Let prefetchCandidates be an empty list.
For each ruleSet of document's speculation rule sets:
+ +For each rule of ruleSet's prefetch rules:
+ +Let anonymizationPolicy be null.
If rule's requirements contains "anonymous-client-ip-when-cross-origin", then set
+ anonymizationPolicy to a cross-origin prefetch IP anonymization
+ policy whose origin is document's origin.
For each url of rule's URLs:
+ +Let referrerPolicy be the result of computing a speculative load referrer policy given + rule and null.
Append a new prefetch candidate with
+ +to prefetchCandidates.
+If rule's predicate is not null:
+ +Let links be the result of finding + matching links given document and rule's predicate.
For each link of links:
+ +Let referrerPolicy be the result of computing a speculative load referrer policy given + rule and link.
Append a new prefetch candidate + with
+ +to prefetchCandidates.
+For each prefetchRecord of + document's prefetch records:
+ +If prefetchRecord's source is
+ not "speculation rules", then continue.
Assert: prefetchRecord's state is not "canceled".
If prefetchRecord is not still being speculated given + prefetchCandidates, then cancel + and discard prefetchRecord given document.
Let prefetchCandidateGroups be an empty list.
For each candidate of + prefetchCandidates:
+ +Let group be « candidate ».
Extend group with all items in prefetchCandidates, apart from candidate itself, + which are redundant with candidate + and whose eagerness is at least as eager as candidate's + eagerness.
If prefetchCandidateGroups contains + another group whose items are the same as group, + ignoring order, then continue.
Append group to + prefetchCandidateGroups.
The following speculation rules generate two redundant prefetch candidates:
+ +{
+ "prefetch": [
+ {
+ "tag": "a",
+ "urls": ["next.html"]
+ },
+ {
+ "tag": "b",
+ "urls": ["next.html"],
+ "referrer_policy": "no-referrer"
+ }
+ ]
+}
+
+ This step will create a single group containing them both, in the given order. (The second
+ pass through will not create a group, since its contents would be the same as the first group,
+ just in a different order.) This means that if the user agent chooses to execute the "may" step
+ below to enact the group, it will enact the first candidate, and ignore the second. Thus, the
+ request will be made with the default referrer policy, instead of using "no-referrer".
However, the collect tags from speculative load candidates algorithm will
+ collect tags from both candidates in the group, so the `Sec-Speculation-Tags`
+ header value will be `"a", "b"`. This indicates to server operators that
+ either rule could have caused the speculative load.
For each group of + prefetchCandidateGroups:
+ +The user agent may run the following steps:
+ +Let prefetchCandidate be group[0].
Let tagsToSend be the result of collecting tags from speculative load candidates given + group.
Let prefetchRecord be a new prefetch record with
+ +speculation rules"Start a referrer-initiated navigational prefetch given + prefetchRecord and document.
When deciding whether to execute this "may" step, user agents should consider + prefetchCandidate's eagerness, in + accordance with the current behavior of the user and the definitions of speculation rule + eagerness.
+ +prefetchCandidate's No-Vary-Search + hint can also be useful in implementing the heuristics defined for the + speculation rule eagerness values. For example, a user hovering over a link whose + URL is equivalent modulo search + variance to prefetchCandidate's URL + given prefetchCandidate's No-Vary-Search + hint could indicate to the user agent that performing this step would be useful.
+ +When deciding whether to execute this "may" step, user agents should prioritize user + preferences (express or implied, such as data-saver or battery-saver modes) over the eagerness + supplied by the web developer.
+To compute a speculative load referrer policy given a speculation rule
+ rule and an a element, area element, or null
+ link:
If rule's referrer policy is not the + empty string, then return rule's referrer + policy.
If link is null, then return the empty string.
Return link's hyperlink referrer policy.
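For example, if a rule has no "referrer_policy" key (so its referrer policy is the empty string), and it matched an a element whose referrerpolicy attribute is in the "no-referrer" state, then the computed policy is "no-referrer". If the rule does specify a referrer policy, that value is used regardless of the link's own policy.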
To collect tags from speculative load candidates given a list of speculative load candidates candidates:
+ +Let tags be an empty ordered set.
For each candidate of + candidates:
+ +For each tag of candidate's + tags: append + tag to tags.
Sort in ascending order tags, with + tagA being less than tagB if tagA is null, or if tagA + is code unit less than tagB.
Return tags.
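For example, collecting tags from two candidates whose tags are « "b", "a" » and « null » returns « null, "a", "b" », since null sorts before any string and strings sort in code unit order.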
To find matching links given a Document document and a
+ document rule predicate predicate:
Let links be an empty list.
For each shadow-including descendant + descendant of document, in shadow-including tree order:
+ +If descendant is not an a or area element with an
+ href attribute, then continue.
If descendant is not being rendered or is part of skipped contents, then continue.
+ +Such links, though present in document, aren't available for the + user to interact with, and thus are unlikely to be good candidates. In addition, they might + not have their style or layout computed, which might make selector matching less efficient in + user agents which skip some or all of that work for these elements.
+If descendant's url is null, or + its scheme is not an HTTP(S) scheme, then + continue.
If predicate matches + descendant, then append descendant to + links.
Return links.
A document rule predicate predicate matches an a or area element
+ el if the following steps return true, switching on predicate's type:
For each clause of predicate's + clauses:
+ +If clause does not match + el, then return false.
Return true.
For each clause of predicate's + clauses:
+ +If clause matches el, + then return true.
Return false.
If predicate's clause matches el, then return false.
Return true.
For each pattern of predicate's + patterns:
+ +If performing a match given pattern + and el's url gives a non-null value, + then return true.
Return false.
For each selector of predicate's + selectors:
+ +If performing a match given + selector and el with the scoping root set to + el's root returns success, then return true.
Return false.
Speculation rules features use the speculation rules task source, which is a + task source.
+ +Because speculative loading is generally less important than processing tasks for + the purpose of the current document, implementations might give tasks enqueued here an especially low priority.
+ +For now, the navigational prefetching process is defined in the Prefetch + specification. Moving it into this standard is tracked in issue #11123. PREFETCH
+ +This standard refers to the following concepts defined there:
+ +Speculation-Rules` headerThe `Speculation-Rules` HTTP response header allows the
+ developer to request that the user agent fetch and apply a given speculation rule set
+ to the current Document. It is a structured
+ header whose value must be a list of
+ strings that are all valid URL strings.
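For example, the following response header (with an illustrative URL) requests that a rule set be fetched from /speculation-rules.json, relative to the document's base URL:

Speculation-Rules: "/speculation-rules.json"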
To process the `Speculation-Rules` header given a Document
+ document and a response response:
Let parsedList be the result of getting a structured field value
+ given `Speculation-Rules` and "list" from
+ response's header list.
If parsedList is null, then return.
For each item of parsedList:
+ +If item is not a string, then continue.
Let url be the result of URL parsing + item with document's document base URL.
If url is failure, then continue.
In parallel:
+ +Optionally, wait for an implementation-defined amount of time.
+ +This allows the implementation to prioritize other work ahead of loading
+ speculation rules, as especially during Document creation and header
+ processing, there are often many more important things going on.
Queue a global task on the speculation rules task source given + document's relevant global object to perform the following + steps:
+ +Let request be a new request whose
+ URL is url, destination is "speculationrules", and mode is
+ "cors".
Fetch request with the following processResponseConsumeBody steps given response response and null, failure, or a + byte sequence bodyBytes:
+ +If bodyBytes is null or failure, then abort these steps.
If response's status is + not an ok status, then abort these steps.
If the result of extracting a MIME type
+ from response's header list
+ does not have an essence of
+ "application/speculationrules+json", then abort these steps.
Let bodyText be the result of UTF-8 + decoding bodyBytes.
Let ruleSet be the result of parsing a speculation rule set string given bodyText, + document, and response's URL. If this throws an exception, then abort these + steps.
Append ruleSet to + document's speculation rule + sets.
Consider speculative loads for document.
Sec-Speculation-Tags` headerThe `Sec-Speculation-Tags` HTTP request header specifies
+ the web developer-provided tags associated with the speculative navigation request. It can also be
+ used to distinguish speculative navigation requests from speculative subresource requests, since
+ `Sec-Purpose` can be sent by both categories of requests.
The header is a structured header whose value must
+ be a list. The list can contain either token or string values. String values represent
+ developer-provided tags, whereas token values represent predefined tags. As of now, the only
+ predefined tag is null, which indicates a speculative navigation request
+ with no developer-defined tag.
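For example (the developer-provided tags are illustrative), a speculative navigation request caused only by untagged rules would be sent with

Sec-Speculation-Tags: null

while one caused by rules tagged "a" and "b" would be sent with

Sec-Speculation-Tags: "a", "b"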
Speculative loads can be initiated by web pages to cross-site destinations. However, because + such cross-site speculative loads are always done without credentials, as explained + below, ambient authority is limited to + requests that are already possible via other mechanisms on the platform.
+ +The `Speculation-Rules` header can also be used to issue requests, for JSON
+ documents whose body will be parsed as a
+ speculation rule set string. However, they use the "same-origin"
+ credentials mode, the "cors" mode, and responses which do not
+ use the application/speculationrules+json MIME type essence are ignored,
+ so they are not useful in mounting attacks.
Because links in a document can be selected for speculative loading via document rule predicates, developers need to be cautious if such links
+ might contain user-generated markup. For example, if the href of a link can be entered by one user and displayed to all
+ other users, a malicious user might choose a value like "/logout", causing
+ other users' browsers to automatically log out of the site when that link is speculatively loaded.
+ Using a document rule selector predicate to exclude such potentially-dangerous links,
+ or using a document rule URL pattern predicate to allowlist known-safe links, are
+ useful techniques in this regard.
As with all uses of the script element, developers need to be cautious about
+ inserting user-provided content into <script type=speculationrules>'s
+ child text content. In particular, the insertion of an unescaped closing </script> tag could be used to break out of the script element
+ context and inject attacker-controlled markup.
The <script type=speculationrules> feature causes activity in
+ response to content found in the document, so it is worth considering the options open to an
+ attacker able to inject unescaped HTML. Such an attacker is already able to inject JavaScript or
+ iframe elements. Speculative loads are generally less dangerous than arbitrary script
+ execution. However, the use of document rule
+ predicates could be used to speculatively load links in the document, and the existence of
+ those loads could provide a vector for exfiltrating information about those links.
+ Defense-in-depth against this possibility is provided by Content Security Policy. In particular,
+ the script-src directive can be used to restrict the parsing of speculation
+ rules script elements, and the default-src directive applies
+ to navigational prefetch requests arising from such speculation rules. Additional defense is
+ provided by the requirement that speculative loads are only performed to potentially-trustworthy URLs, so an on-path attacker would only
+ have access to metadata and traffic analysis, and could not see the URLs directly.
+ CSP
It's generally not expected that user-generated content will be added as arbitrary response
+ headers: server operators are already going to encounter significant trouble if this is possible.
+ It is therefore unlikely that the `Speculation-Rules` header meaningfully expands the
+ XSS attack surface. For this reason, Content Security Policy does not apply to the loading of rule
+ sets via that header.
This standard allows developers to request that navigational prefetches are performed using IP + anonymization technology provided by the user agent. The details of this anonymization are not + specified, but some general security principles apply.
+ +To the extent IP anonymization is implemented using a proxy service, it is advisable to + minimize the information available to the service operator and other entities on the network path. + This likely involves, at a minimum, the use of TLS for the connection.
+ +Site operators need to be aware that, similar to virtual private network (VPN) technology, the + client IP address seen by the HTTP server might not exactly correspond to the user's actual + network provider or location, and traffic for multiple distinct subscribers could originate from + a single client IP address. This can affect site operators' security and abuse prevention + measures. IP anonymization measures might make an effort to use an egress IP address which has a + similar geolocation or is located in the same jurisdiction as the user, but any such behavior is + particular to the user agent and not guaranteed.
+ + +The consider speculative loads algorithm contains a crucial "may" step, which + encourages user agents to start + referrer-initiated navigational prefetches based on a combination of the speculation + rule eagerness and other features of the user's environment. Because it can be observable + to the document whether speculative loads are performed, user agents must take care to protect + privacy when making such decisions—for instance by only using information which is already + available to the origin. If these heuristics depend on any persistent state, that state must be + erased whenever the user erases other site data. If the user agent automatically clears other site + data from time to time, it must erase such persistent state at the same time.
+ +The use of origin instead of site here is intentional. + Although same-site origins are generally allowed to coordinate if they wish, the web's security + model is premised on preventing origins from accessing the data of other origins, even same-site + ones. Thus, the user agent needs to be sure not to leak such data unintentionally across origins, + not just across sites.
+ +Examples of inputs which would be already known to the document:
+ +author-supplied eagerness
order of appearance in the document
whether a link is in the viewport
whether the cursor is near a link
rendered size of a link
Examples of persistent data related to the origin (which the origin could have gathered itself) + but which must be erased according to user intent:
+ +whether the user has clicked this or similar links on this document or other documents on + the same origin
Examples of device information which might be valuable in deciding whether speculative loading + is appropriate, but which needs to be considered as part of the user agent's overall privacy + posture because it can make the user more identifiable across origins:
+ +coarse device class (CPU, memory)
coarse battery level
whether the network connection is known to be metered
any user-toggleable settings, such as a speculative loading toggle, a battery-saver + toggle, or a data-saver toggle
The start a referrer-initiated navigational prefetch algorithm is designed to + ensure that the HTTP requests that it issues behave consistently with how user agents partition + credentials according to storage keys. This + property is maintained even for cross-partition prefetches, as follows.
+ +If a future navigation using a prefetched response would load a document in the same partition, + then at prefetch time, the partitioned credentials can be sent, as they can with subresource + requests and scripted fetches. If such a future navigation would instead load a document in + another partition, it would be inconsistent with the partitioning scheme to use partitioned + credentials for the destination partition (since this would cross the boundary between partitions + without a top-level navigation) and also inconsistent to use partitioned credentials within the + originating partition (since this would result in the user seeing a document with different state + than a non-prefetched navigation). Instead, a third, initially empty, partition is used for such + requests. These requests therefore send along no credentials from either partition. However, the + resulting prefetched response body constructed using this initially-empty partition can only be + used if, at activation time, the destination partition contains no credentials.
+ +This is somewhat similar to the behavior of only sending such prefetch requests if the + destination partition is known ahead of time to not contain credentials. However, to avoid such + behavior being used as a way of probing for the presence of credentials, instead such prefetch + requests are always completed, and in the case of conflicting credentials, their results are not + used.
+ +Redirects are possible between these two types of requests. A redirect from a same- to + cross-partition URL could contain information derived from partitioned credentials in the + originating partition; however, this is equivalent to the originating document fetching the + same-partition URL itself and then issuing a request for the cross-partition URL. A redirect from + a cross- to same-origin URL could carry credentials from the isolated partition, but since this + partition has no prior state this does not enable tracking based on the user's prior browsing + activity on that site, and the document could construct the same state by issuing uncredentialed + requests itself.
+ + +Speculative loads provide a mechanism through which HTTP requests for later top-level + navigation can be made without a user gesture. It is natural to ask whether it is possible for two + coordinating sites to connect user identities.
+ +Since existing credentials for the destination site are not sent (as explained in
+ the previous section), that site is limited in its ability to identify the user before navigation
+ in a similar way to if the referrer site had simply used fetch() to make an
+ uncredentialed request. Upon navigation, this becomes similar to ordinary navigation (e.g., by
+ clicking a link that was not speculatively loaded).
To the extent that user agents attempt to mitigate identity joining for ordinary fetches and + navigations, they can apply similar mitigations to speculatively-loaded navigations.
+ +X-Frame-Options` headerThe `X-Frame-Options` HTTP response header is a way
@@ -111787,6 +113966,115 @@ dictionary PromiseRejectionEventInit : EventInitimport map.
A speculation rules parse result is a struct that is similar to a script, and also can be stored in a script element's
+ result, but is not counted as a script for other purposes. It has the following items:
To create a speculation rules parse result given a string
+ input and a Document document:
Let result be a speculation rules parse result whose speculation rule set is null and whose error to rethrow is null.
Parse a speculation rule set string given input, + document, and document's document base URL, catching any + exceptions. If this threw an exception, then set result's error to rethrow to that exception. Otherwise, set + result's speculation rule set to the return + value.
Return result.
To register speculation rules given a Window global, a
+ speculation rules parse result result, and an optional boolean
+ queueErrors (default false):
If result's error to rethrow is not + null, then:
+ +If queueErrors is true, then queue a global task on the DOM + manipulation task source given global to perform the following step:
+ +Report an exception given by result's error to rethrow for global.
Otherwise, report an exception given by result's error to rethrow for global.
Return.
Append result's speculation rule set to global's associated Document's speculation rule sets.
Consider speculative loads for global's associated Document.
To unregister speculation rules given a Window global and a
+ speculation rules parse result result:
If result's error to rethrow is not + null, then return.
Remove result's speculation rule set from global's associated Document's speculation rule sets.
Consider speculative loads for global's associated Document.
To update speculation rules given a Window global, a
+ speculation rules parse result oldResult, and a speculation rules
+ parse result newResult:
Remove oldResult's speculation rule set from global's associated Document's speculation rule sets.
Register speculation rules given global, newResult, and + true.
+ +When updating speculation rules,
+ as opposed to registering them for the first time, we ensure that any error events are queued as tasks, instead of synchronously fired.
+ Although synchronously executing error event handlers is OK
+ when inserting script elements, it's best if other modifications do not cause such
+ synchronous script execution.
application/json (namely, at the time of writing, no semantics at all).
JSON
+ application/speculationrules+jsonThis registration is for community review and will be submitted to the IESG for review, + approval, and registration with IANA.
+ + + +application/json JSONapplication/json JSONapplication/json JSONapplication/json JSONapplication/microdata+json type asserts that the
+ resource is a JSON text that follows the speculation rule set authoring requirements. Thus, the relevant specifications are
+ JSON and this specification. JSON
+ Web browsers.
+Fragments used with
+ application/speculationrules+json resources have the same semantics as when used with
+ application/json (namely, at the time of writing, no semantics at all).
+ JSON
text/event-streamtype
script
module"; a valid MIME type string that is not a JavaScript MIME type essence match
+ module"; "importmap"; "speculationrules"; a valid MIME type string that is not a JavaScript MIME type essence match
usemap
img
@@ -147790,6 +150141,9 @@ INSERT INTERFACES HERE
Special thanks to the WICG for incubating the speculative loading feature. In particular, thanks to Jeremy Roman + for his work as editor of the original speculation rules and prefetch specifications.
+For about ten years starting in 2003, this standard was almost entirely written by Ian Hickson (Google, ian@hixie.ch).
@@ -149418,6 +151785,13 @@ INSERT INTERFACES HERE href="https://www.w3.org/Consortium/Legal/2015/copyright-software-and-document">W3C Software and Document License. +Part of the revision history of the Speculative loading section can be
+ found in the WICG/nav-speculation
+ repository, which is available under the W3C Software and
+ Document License.
Copyright © WHATWG (Apple, Google, Mozilla, Microsoft). This work is licensed under a Creative Commons Attribution 4.0 International License. To the extent portions of it are incorporated into source code, such