Volume 107, Issue 1 pp. 441-509
RESEARCH ARTICLE
Open Access

Brownian half-plane excursion and critical Liouville quantum gravity

Juhan Aru

Institute of Mathematics, EPFL, EPFL SB MATH, Lausanne, Switzerland
Nina Holden

Courant Institute of Mathematical Sciences, New York University, New York, USA
Ellen Powell (Corresponding Author)

Department of Mathematics, Durham University, Durham, UK

Correspondence

Department of Mathematics, Durham University, Mathematics and Computer Science Building, Science Site, Upper Mountjoy, Durham DH1 3LE, UK.

Email: [email protected]
Xin Sun

Department of Mathematics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
First published: 14 December 2022

Abstract

In a groundbreaking work, Duplantier, Miller and Sheffield showed that subcritical Liouville quantum gravity (LQG) coupled with Schramm–Loewner evolutions (SLE) can be obtained by gluing together a pair of Brownian motions. In this paper, we study the counterpart of their result in the critical case via a limiting argument. In particular, we prove that as one sends $\kappa^{\prime}\downarrow 4$ in the subcritical setting, the space-filling $\operatorname{SLE}_{\kappa^{\prime}}$ in a disk degenerates to the $\operatorname{CLE}_4$ (conformal loop ensemble) exploration introduced by Werner and Wu, along with a collection of independent and identically distributed coin tosses indexed by the branch points of the exploration. Furthermore, in the same limit, we observe that although the pair of initial Brownian motions collapses to a single one, one can still extract two different independent Brownian motions $(A,B)$ from this pair, such that $A$ encodes the LQG distance from the CLE loops to the boundary of the disk and $B$ encodes the boundary lengths of the $\operatorname{CLE}_4$ loops. In contrast to the subcritical setting, the pair $(A,B)$ does not determine the CLE-decorated LQG surface. Our paper also contains a discussion of relationships to random planar maps, the conformally invariant $\operatorname{CLE}_4$ metric and growth fragmentations.

1 INTRODUCTION

The most classical object of random planar geometry is probably the two-dimensional Brownian motion together with its variants. Over the past 20 years, a plenitude of other interesting random geometric objects have been discovered and studied. Among those we find Liouville quantum gravity (LQG) surfaces [19] and conformal loop ensembles (CLE) [56, 61]. LQG surfaces aim to describe the fields appearing in the study of 2D LQG and can be viewed as canonical models for random surfaces. They can be mathematically defined in terms of volume forms [19, 31, 50] (used in this paper), but recently also in terms of random metrics [17, 26]. CLE is a random collection of loops that conjecturally corresponds to the interfaces of the $q$-state Potts model and the FK random cluster model in the continuum limit (see, for example, [42]).

In this paper we study a coupling of LQG measures, CLE and Brownian motions, taking a form of the kind first discovered in [18]. On the one hand, we consider a ‘uniform’ exploration of $\operatorname{CLE}_4$ drawn on top of an independent LQG surface known as the critical LQG disk. On the other hand, we take a seemingly simpler object: the Brownian half-plane excursion. In this coupling, one component of the Brownian excursion encodes the branching structure of the $\operatorname{CLE}_4$ exploration, together with a certain (LQG surface dependent) distance of $\operatorname{CLE}_4$ loops from the boundary. The other component of the Brownian excursion encodes the LQG boundary lengths of the discovered $\operatorname{CLE}_4$ loops.

Our result can be viewed as the critical ($\kappa^{\prime}=4$) analog of Duplantier–Miller–Sheffield's mating of trees theorem for $\kappa^{\prime}>4$ [18]. The original mating of trees theorem first observes that the quantum boundary length process defined by a space-filling $\operatorname{SLE}_{\kappa^{\prime}}$ (Schramm–Loewner evolution) curve drawn on a subcritical LQG surface is given by a certain correlated planar Brownian motion. Moreover, it says that one can take the two components of this planar Brownian motion, glue each one to itself (under its graph) to obtain two continuum random trees, and then mate these trees along their branches to obtain both the LQG surface and the space-filling SLE curve wiggling between the trees, in a measurable way. This theorem has had far-reaching consequences and applications, for example, to the study of random planar maps and their limits [23, 25, 30], SLE and CLE [3, 5, 20, 43], and LQG itself [4, 41]. See the survey [21] for further applications.

Obtaining a critical analog of the mating of trees theorem was one of the main aims of this paper. The problem one faces is that the above-described picture degenerates in many ways as $\kappa^{\prime}\downarrow 4$ (for example, the correlation of the Brownian motions tends to one and the LQG measure converges to the zero measure). However, it is known that the LQG measure can be renormalized in a way that gives meaningful limits [6], and the starting point of the current project was the observation that the pair of Brownian motions can also be renormalized, via an affine transformation, to give something meaningful.

Still, not all the information passes nicely to the limit, and in particular extra randomness appears. Therefore, our limiting coupling is somewhat different in nature from that of [18] (or [2] for the finite-volume case of quantum disks). Most notably, one of the key results of [2, 18] is that the CLE-decorated LQG determines the Brownian motions, and vice versa. In our case neither statement holds in the same way; see Section 5.2.1 for more details. For example, to define the Brownian excursion from the branching $\operatorname{CLE}_4$ exploration, one needs a binary variable at every branching event to decide on an ordering of the branches.

We believe that in addition to completing the critical version of Duplantier–Miller–Sheffield's mating of trees theorem, the results of this paper are intriguing in their own right. Moreover, as explained below, this paper opens the road for several interesting questions in the realm of SLE theory, about LQG-related random metrics, in the setting of random planar maps decorated with statistical physics models, and about links to growth-fragmentation processes.

1.1 Contributions

Since quite some setup is required to describe our results for $\kappa=4$ precisely, we postpone the detailed statement to Theorem 5.5. Let us state here a caricature version of the final statement. Some of the objects appearing in the statement will be precisely defined only later, yet should be relatively clear from their names.

Theorem 1.1. Let

  • $\mathfrak{lqg}$ be the field of a critical quantum disk together with the associated critical LQG measures (see Section 4.1);
  • $\mathfrak{cle}$ denote the uniform space-filling $\operatorname{SLE}_4$ in the unit disk parameterized by critical LQG mass, which is defined in terms of a uniform $\operatorname{CLE}_4$ exploration plus a collection of independent coin tosses (see Section 2.1.5);
  • and $\mathfrak{be}$ describe a Brownian (right) half-plane excursion $(A,B)$ (see Section 4.3).
Then one can couple $(\mathfrak{cle}, \mathfrak{lqg}, \mathfrak{be})$ such that $\mathfrak{cle}$ and $\mathfrak{lqg}$ are independent, $A$ encodes a certain quantum distance of $\operatorname{CLE}_4$ loops from the boundary and $B$ encodes the quantum boundary lengths of the $\operatorname{CLE}$ loops. Moreover, $(\mathfrak{cle}, \mathfrak{lqg})$ determines $\mathfrak{be}$, but the converse does not hold.

In terms of limit results, we prove, for example, the following:
  • We show that an $\operatorname{SLE}_{\kappa^{\prime}}(\kappa^{\prime}-6)$ in the disk converges to the uniform $\operatorname{CLE}_4$ exploration introduced by Werner and Wu [64], as $\kappa^{\prime}\downarrow 4$ (Proposition 2.6). Here an extra level of randomness appears in the limit, in the sense that new $\operatorname{CLE}_4$ loops in the exploration are always added at a uniformly chosen point on the boundary, in contrast to the $\kappa^{\prime}>4$ case where the loops are traced by a continuous curve.
  • Using a limiting argument, we also show in Section 3 how to make sense of a ‘uniform’ space-filling $\operatorname{SLE}_4$ exploration, albeit no longer defined by a continuous curve. Again, extra randomness appears in the limit: contrary to the $\kappa^{\prime}>4$ case, the nested uniform $\operatorname{CLE}_4$ exploration does not uniquely determine this space-filling $\operatorname{SLE}_4$.
  • Perhaps less surprisingly, but nonetheless not without obstacles, we show that the nested $\operatorname{CLE}_{\kappa^{\prime}}$ in the unit disk converges to the nested $\operatorname{CLE}_4$ with respect to the Hausdorff distance (Proposition 2.18). We also show that after dividing the associated quantum gravity measures by $(4-2\gamma)$, a $\gamma$-LQG disk converges to a critical LQG disk.
In terms of connections and open directions, let us very briefly mention a few examples and refer to Section 5.2.2 for more detail.
  • First, as stated in Theorem 1.1, $(\mathfrak{cle}, \mathfrak{lqg})$ determines $\mathfrak{be}$, but the converse does not hold. A natural question is whether there is another natural mating of trees type theorem for $\kappa=4$ where one has measurability in both directions.
  • Second, our coupling sheds light on the recent work of Aïdékon and Da Silva [1], who identify a (signed) growth fragmentation embedded naturally in the Brownian half-plane excursion. The cells in this growth fragmentation correspond to very natural observables in our exploration.
  • Third, as we have already mentioned, one of the coordinates of our Brownian excursion encodes a certain LQG distance of $\operatorname{CLE}_4$ loops from the boundary. It is reasonable to conjecture that this distance is related to the $\operatorname{CLE}_4$ distance defined in [64] via a Lamperti transform.
  • Fourth, several interesting questions can be asked in terms of convergence of discrete models. Critical FK-decorated planar maps and stable maps are two immediate candidates.

1.2 Outline

The rest of the paper is structured as follows. In Section 2, after reviewing background material on branching SLE and CLE, we prove the convergence of the $\operatorname{SLE}_{\kappa^{\prime}}(\kappa^{\prime}-6)$ exploration in the disk to the uniform $\operatorname{CLE}_4$ exploration, and also show the convergence of the nested CLE with respect to the Hausdorff distance. In Section 3, we use the limiting procedure to make sense of a notion of space-filling $\operatorname{SLE}_4$. In Section 4, we review the basics of LQG surfaces and of the mating of trees story, and prove convergence of the Brownian motion functionals appearing in [2, 18] after appropriate normalization. We also finalize a certain proof of Section 3, which is interestingly (seemingly) much easier to carry out in the mating of trees context. Finally, in Section 5 we conclude the proof of joint convergence of Brownian motions, space-filling SLE and LQG. This allows us to state and conclude the proof of our main theorem. We finish the paper with a short discussion of connections, and an outlook on several interesting open questions.

Throughout, $\gamma\in(\sqrt{2},2]$ is related to the parameters $\kappa,\kappa^{\prime},\varepsilon$ by
$$\begin{equation} \kappa =\gamma ^2,\quad \kappa ^{\prime }=16/\kappa ,\quad \varepsilon =2-\gamma . \end{equation}$$ (1.1)
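As a quick sanity check on (1.1), the following sketch (the function name is our own invention, not notation from the paper) computes $(\kappa,\kappa^{\prime},\varepsilon)$ from $\gamma$ and confirms that the critical value $\gamma=2$ corresponds to $\kappa^{\prime}=4$ and $\varepsilon=0$:

```python
# Sketch of the parameter relations (1.1); "params" is a hypothetical helper.
def params(gamma):
    """Return (kappa, kappa', eps) for gamma in (sqrt(2), 2]."""
    kappa = gamma ** 2
    kappa_prime = 16.0 / kappa
    eps = 2.0 - gamma
    return kappa, kappa_prime, eps

# The critical case gamma = 2 gives kappa = kappa' = 4 and eps = 0;
# as gamma increases to 2, kappa' decreases to 4 (the limit kappa' -> 4).
print(params(2.0))
```

Note that $\kappa^{\prime}\downarrow 4$, $\varepsilon\downarrow 0$ and $\gamma\uparrow 2$ are three equivalent ways of describing the same limit.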

2 CONVERGENCE OF BRANCHING $\operatorname{SLE}_{\kappa^{\prime}}$ AND $\operatorname{CLE}_{\kappa^{\prime}}$ AS $\kappa^{\prime}\downarrow 4$

2.1 Background on branching SLE and conformal loop ensembles

2.1.1 Spaces of domains

Let $\mathcal{D}$ be the space of $\mathrm{D}=\lbrace \mathrm{D}_t\,;\,t\geqslant 0\rbrace$ such that
  • for every $t\geqslant 0$, $0\in\mathrm{D}_t\subset\mathbb{D}$ and $\mathrm{D}_t$ is a simply connected planar domain;
  • $\mathrm{D}_t\subset\mathrm{D}_s$ for all $0\leqslant s<t<\infty$;
  • for every $t\geqslant 0$, if $f_t=f_t[\mathrm{D}]$ is the unique conformal map from $\mathbb{D}$ to $\mathrm{D}_t$ that sends 0 to 0 and has $f_t^{\prime}(0)>0$, then $f_t^{\prime}(0)=\operatorname{CR}(0;\mathrm{D}_t)=e^{-t}$.
We also write $g_t=g_t[\mathrm{D}]$ for the inverse of $f_t$.

Recall that a sequence of simply connected domains $(U^n)_{n\geqslant 0}$ containing 0 is said to converge to a simply connected domain $U$ in the Carathéodory topology (viewed from 0) if $f_{U^n}\rightarrow f_U$ uniformly on $r\mathbb{D}$ for any $r<1$, where $f_{U^n}$ (respectively, $f_U$) is the unique conformal map from $\mathbb{D}$ to $U^n$ (respectively, $U$) sending 0 to 0 and with positive real derivative at 0. Carathéodory convergence viewed from $z\ne 0$ is defined in the analogous way.

We equip $\mathcal{D}$ with the natural extension of this topology: that is, we say that a sequence $(\mathrm{D}^n)_{n\geqslant 0}$ in $\mathcal{D}$ converges to $\mathrm{D}$ in $\mathcal{D}$ if for any $r<1$ and $T\in[0,\infty)$,
$$\begin{equation} \sup _{t\in [0,T]}\sup _{z\in r\mathbb {D}}|f^{n}_t(z)-f_t(z)|\rightarrow 0\end{equation}$$ (2.1)
as $n\rightarrow\infty$, where $f_t^{n}=f_t[\mathrm{D}^{n}]$ and $f_t=f_t[\mathrm{D}]$. With this topology, $\mathcal{D}$ is a metrizable and separable space; see, for example, [37, Section 6.1].

2.1.2 Radial Loewner chains

In order to introduce radial SLE, we first need to recall the definition of a (measure-driven) radial Loewner chain. Such chains are closely related to the space $\mathcal{D}$, as we will soon see. If $\lambda$ is a measure on $[0,\infty)\times\partial\mathbb{D}$ whose marginal on $[0,\infty)$ is Lebesgue measure, we define the radial Loewner equation driven by $\lambda$ via
$$\begin{equation} g_t(z)=\int _{[0,t]\times \partial \mathbb {D}} g_s(z)\frac{u+g_s(z)}{u-g_s(z)} \, d\lambda (s,u) ; \quad \quad g_0(z)=z \end{equation}$$ (2.2)
for $z\in\mathbb{D}$ and $t\geqslant 0$. It is known (see, for example, [37, Proposition 6.1]) that for any such $\lambda$, (2.2) has a unique solution $g_t(z)$ for each $z\in\mathbb{D}$, defined until time $t_z:=\sup\lbrace t\geqslant 0: g_t(z)\in\mathbb{D}\rbrace$. Moreover, if one defines $\mathrm{D}_t:=\lbrace z\in\mathbb{D}: t_z>t\rbrace$, then $\mathrm{D}=\lbrace \mathrm{D}_t\,,\,t\geqslant 0\rbrace$ is an element of $\mathcal{D}$, and $g_t$ from (2.2) is equal to $g_t[\mathrm{D}]=(f_t[\mathrm{D}])^{-1}$ for each $t$. We call $\mathrm{D}$ the radial Loewner chain driven by $\lambda$.
Note that if one restricts to measures of the form $\lambda(A,dt)=\delta_{W(t)}(A)\,dt$ with $W:[0,\infty)\rightarrow\partial\mathbb{D}$ piecewise continuous, this recovers the more classical notion of a radial Loewner chain. In this case we can rewrite the radial Loewner equation as
$$\begin{equation} \partial _t g_t(z)= g_t(z) {\frac{W_t+g_t(z)}{W_t-g_t(z)}}; \;\; z\in \mathbb {D},\, t\leqslant t_z:=\inf \lbrace s: g_s(z)=W_s\rbrace \end{equation}$$ (2.3)
and we refer to the corresponding Loewner chain as the radial Loewner evolution with driving function $W$. In fact, this is the case that we will be interested in when defining radial $\operatorname{SLE}_{\kappa^{\prime}}(\kappa^{\prime}-6)$ for $\kappa^{\prime}>4$.
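To make (2.3) concrete, here is a minimal forward-Euler sketch of the radial Loewner flow for a given driving sequence on the unit circle. This is our own toy discretization under ad hoc step-size and tolerance choices, not a construction from the paper:

```python
def loewner_step(g, w, dt):
    # One Euler step of (2.3): dg_t(z) = g_t(z) (W_t + g_t(z)) / (W_t - g_t(z)) dt.
    return g + g * (w + g) / (w - g) * dt

def radial_flow(z, driving, dt, tol=1e-3):
    """Follow g_t(z) for a point z in the unit disk, driven by the sequence
    `driving` of points on the unit circle; stop once z is (numerically)
    swallowed, that is, g_t(z) comes within `tol` of the driving point."""
    g = z
    for w in driving:
        if abs(w - g) < tol:
            break  # the swallowing time t_z has (approximately) been reached
        g = loewner_step(g, w, dt)
    return g

# With constant driving W = 1, the origin is fixed by the flow, while other
# points move outward: |g_t(z)| is nondecreasing in t.
```

One can check on this toy example that $g_t(0)=0$ for all $t$, consistent with the normalization $f_t(0)=0$ of the inverse maps.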

Remark 2.1. Let us further remark that if $(\lambda^n)$ is a sequence of driving measures as above, such that $\lambda^n$ converges weakly (that is, with respect to the weak topology on measures) to some $\lambda$ on $[0,T]\times\partial\mathbb{D}$ for every $T$, then the corresponding Loewner chains $(\mathrm{D}^n),\mathrm{D}$ are such that $\mathrm{D}^n\rightarrow\mathrm{D}$ in $\mathcal{D}$ [37, Proposition 6.1]. In particular, one can check that if $\lambda^n(A,dt)=\delta_{W^n(t)}(A)\,dt$ and $\lambda(A,dt)=\delta_{W(t)}(A)\,dt$ for some piecewise continuous functions $W^n:[0,\infty)\rightarrow\partial\mathbb{D}$ and $W:[0,\infty)\rightarrow\partial\mathbb{D}$, then the corresponding Loewner chains converge in $\mathcal{D}$ if for any fixed $T>0$ and any bounded continuous $F:[0,T]\times\partial\mathbb{D}\rightarrow\mathbb{R}$, we have

$$\begin{equation} \lambda ^n(F)=\int _0^T\int _{\partial \mathbb {D}} F(u,t) \delta _{W^n(t)}(u)\,dt = \int _0^T F(W^n(t),t) \, dt \rightarrow \lambda (F)=\int _0^T F(W(t),t) \, dt\end{equation}$$ (2.4)
as $n\rightarrow\infty$.

Remark 2.2. In what follows we will sometimes need to consider evolving domains $\lbrace\mathrm{D}_t\,;\,t\in[0,S]\rbrace$ that satisfy the conditions to be an element of $\mathcal{D}$ only up to some finite time $S$. In this case we may extend the definition of $\mathrm{D}_t$ for $t\geqslant S$ by setting $\mathrm{D}_t=f_S(e^{-(t-S)}\mathbb{D})$, where $f_S:\mathbb{D}\rightarrow\mathrm{D}_S$ is the unique conformal map sending $0\rightarrow 0$ and with $f_S^{\prime}(0)=e^{-S}$. With this extension, $\mathrm{D}=\lbrace\mathrm{D}_t\,;\,t\geqslant 0\rbrace$ defines an element of $\mathcal{D}$.

If we have a sequence of such objects, then we say that they converge to a limiting object in D $\mathcal {D}$ if and only if these extensions converge. We will use this terminology without further comment in the rest of the paper.

2.1.3 Radial $\operatorname{SLE}_{\kappa^{\prime}}(\kappa^{\prime}-6)$

Let $\kappa^{\prime}\in(4,8)$, and recall the relationship (1.1) between $\kappa^{\prime}\in(4,8)$ and $\varepsilon\in(0,2-\sqrt{2})$. Although the use of $\varepsilon$ is somewhat redundant at this point, we introduce it now to avoid redefining certain notation later on.

Let $B$ be a standard Brownian motion, and let $\theta^{\varepsilon}_0=\lbrace(\theta^{\varepsilon}_0)_t\,;\,t>0\rbrace$ be the unique $B$-measurable process taking values in $[0,2\pi]$, with $(\theta^{\varepsilon}_0)_0=x\in[0,2\pi]$, which is instantaneously reflecting at $\lbrace 0,2\pi\rbrace$ and solves the SDE
$$\begin{equation} d(\theta ^{\varepsilon }_0)_t = \sqrt {{\kappa ^{\prime }}}\,dB_t+ \frac{{\kappa ^{\prime }}-4}{2}\cot {\left(\frac{(\theta ^{\varepsilon }_0)_t}{2}\right)} \, dt\end{equation}$$ (2.5)
on time intervals for which $(\theta^{\varepsilon}_0)_t\notin\lbrace 0,2\pi\rbrace$. The existence and pathwise uniqueness of this process is shown in [56, Propositions 3.15 and 4.2]. It follows from the strong Markov property of Brownian motion that $\theta^{\varepsilon}_0$ has the strong Markov property. We let $\tau^{\varepsilon}_0$ be the first hitting time of $2\pi$ by $\theta^{\varepsilon}_0$.
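For intuition, the following Euler–Maruyama sketch simulates the reflected process of (2.5) up to the first hit of $2\pi$. The function name, step size and the cap on the singular drift are our own ad hoc numerical choices, not part of the construction in the paper:

```python
import math
import random

def simulate_theta(kappa_prime, x=0.0, dt=1e-4, t_max=10.0, seed=1):
    """Euler-Maruyama sketch of (2.5): d(theta) = sqrt(kappa') dB
    + ((kappa' - 4)/2) cot(theta/2) dt, reflected at 0 and stopped at the
    first hit of 2*pi (the analogue of tau_0^eps), or at time t_max."""
    rng = random.Random(seed)
    theta = x
    path = [theta]
    for _ in range(int(t_max / dt)):
        drift = 0.0
        if 0.0 < theta < 2.0 * math.pi:
            drift = 0.5 * (kappa_prime - 4.0) / math.tan(theta / 2.0)
            drift = max(-1.0 / dt, min(1.0 / dt, drift))  # cap the singular drift
        theta += drift * dt + math.sqrt(kappa_prime * dt) * rng.gauss(0.0, 1.0)
        theta = abs(theta)              # reflection at 0
        if theta >= 2.0 * math.pi:      # first hit of 2*pi: stop
            path.append(2.0 * math.pi)
            break
        path.append(theta)
    return path
```

For $\kappa^{\prime}>4$ the drift coefficient $(\kappa^{\prime}-4)/2$ is positive, so the drift pushes the process away from 0, matching the repulsion from the force point.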
Associated to $\theta^{\varepsilon}_0$, we can define a process $W^{\varepsilon}_0$, taking values in $\partial\mathbb{D}$, by setting
$$\begin{equation} (W_0^{\varepsilon })_t = \exp {\left(\operatorname{i}\Big ((\theta _0^{\varepsilon })_t- \int _0^t \cot {\left((\theta _0^{\varepsilon })_s/2\right)} \, ds\Big )\right)}, \quad t\geqslant 0.\end{equation}$$ (2.6)
This indeed gives rise to a function $W_0^{\varepsilon}$ that is continuous in time (see, for example, [45, 56]), and using it as the driving function in the radial Loewner equation (2.3) defines a radial $\operatorname{SLE}_{\kappa^{\prime}}(\kappa^{\prime}-6)$ in $\mathbb{D}$ from 1 to 0, with a force point at $e^{-ix}$ (recall that $(\theta_0^{\varepsilon})_0=x$). We denote this by $\mathbf{D}^{\varepsilon}_0=\lbrace(\mathbf{D}^{\varepsilon}_0)_t\,;\,t\geqslant 0\rbrace$, which is an element of $\mathcal{D}$. In fact, there almost surely exists a continuous non-self-intersecting curve $\eta^{\varepsilon}_0:[0,\infty)\rightarrow\mathbb{D}$ such that $(\mathbf{D}^{\varepsilon}_0)_t$ is the connected component of $\mathbb{D}\setminus\eta^{\varepsilon}_0[0,t]$ containing 0 for all $t$ [38, 51].

Usually we will start with $x=0$, and then we say that the force point is at $1^-$: everything in the above discussion remains true in this case; see [56]. In this setting we refer to $\mathbf{D}^{\varepsilon}_0$ and/or $\eta^{\varepsilon}_0$ (interchangeably) as simply a radial $\operatorname{SLE}_{\kappa^{\prime}}(\kappa^{\prime}-6)$ targeted at 0.

The time $\tau^{\varepsilon}_0$ corresponds to the first time that 0 is surrounded by a counterclockwise loop; see Figure 3. To begin with, we will just consider the SLE stopped at this time. We write
$$\begin{equation*} \mathrm{D}^{\varepsilon }_0=\lbrace (\mathrm{D}^{\varepsilon }_0)_t\, ; \, t\geqslant 0\rbrace :=\lbrace (\mathbf {D}^{\varepsilon }_0)_{\tau ^{\varepsilon }_0\wedge t}\, ; \, t\geqslant 0\rbrace \end{equation*}$$
for the corresponding element of $\mathcal{D}$ (see Remark 2.2).
FIGURE 1. A simplistic sketch of the correspondence in Theorem 1.1. On the left: all the outermost $\operatorname{CLE}_4$ loops discovered by the space-filling $\operatorname{SLE}_4$ before the dashed loop surrounding $z$ is discovered, together with all of the second-level nested $\operatorname{CLE}_4$ loops discovered before the dotted loop surrounding $z$ is discovered. On the right: the corresponding half-planar Brownian excursion, with the coordinate axes switched for ease of viewing. The subexcursion marked by the dashed (respectively, dotted) line, that is, the portion of the Brownian path starting and ending at the endpoints of this line, corresponds to the exploration within the dashed (respectively, dotted) loop. The lengths of these lines are the LQG lengths of the corresponding loops, and the durations of the subexcursions are their LQG areas. The time that $z$ is visited is marked by a dot, and the time that the dotted loop is discovered is marked by a cross. When the dotted loop is discovered, a coin is tossed to determine which of the two disconnected yet-to-be-explored domains is visited first by the space-filling $\operatorname{SLE}_4$; in this example, the component containing $z$ is visited second; see also Figure 2.
FIGURE 2. An illustration of the subset of the unit disk, shaded gray, which has been explored by the space-filling $\operatorname{SLE}_4$ at two different times. On the left: at the time that the second-level $\operatorname{CLE}_4$ loop surrounding $z$ is discovered (marked by a cross on the right-hand side of Figure 1). On the right: at the time that $z$ is reached (marked by a dot on the right-hand side of Figure 1). Note that, although this is not apparent from the sketch, the explored subset of the unit disk at any given time is actually a connected set.
FIGURE 3. From left to right, the process $\theta^{\varepsilon}_0$ does the following at the illustrated time: hits 0, hits 0, hits neither 0 nor $2\pi$, hits $2\pi$. The rightmost image is, therefore, an illustration of the time $\tau_0^{\varepsilon}$.

2.1.4 An approximation to radial $\operatorname{SLE}_{\kappa^{\prime}}(\kappa^{\prime}-6)$

We will use the following approximations $(\mathrm{D}^{\varepsilon,n}_0)_{n\in\mathbb{N}}$ to $\mathrm{D}^{\varepsilon}_0$ in $\mathcal{D}$ (in order to show convergence to the $\operatorname{CLE}_4$ exploration). Fixing $\varepsilon$, and taking the processes $\theta_0^{\varepsilon}$ and $W_0^{\varepsilon}$ as above, the idea is to remove the intervals of time during which $\theta_0^{\varepsilon}$ is making tiny excursions away from 0, and then define $\mathrm{D}^{\varepsilon,n}_0$ to be the radial Loewner chain whose driving function is equal to $W_0^{\varepsilon}$, but with these times cut out.

More precisely, we set $T_0^{\varepsilon,n}:=0$ and inductively define
$$\begin{align*} R_1^{\varepsilon ,n} & =\inf \lbrace t\geqslant T_0^{\varepsilon ,n}: (\theta ^{\varepsilon }_0)_t\geqslant 2^{-n}\rbrace ; \\ S_1^{\varepsilon ,n} & =\sup \lbrace t\leqslant R_1^{\varepsilon ,n}: (\theta ^{\varepsilon }_0)_t=0\rbrace ; \\ T_1^{\varepsilon ,n} & =\inf \lbrace t\geqslant R_1^{\varepsilon ,n}: (\theta ^{\varepsilon }_0)_t=0\rbrace ; \\ R_2^{\varepsilon ,n} & =\inf \lbrace t\geqslant T_1^{\varepsilon ,n}: (\theta ^{\varepsilon }_0)_t\geqslant 2^{-n}\rbrace ; \\ S_2^{\varepsilon ,n} & =\sup \lbrace t\leqslant R_2^{\varepsilon ,n}: (\theta ^{\varepsilon }_0)_t=0\rbrace ; \\ T_2^{\varepsilon ,n} & =\inf \lbrace t\geqslant R_2^{\varepsilon ,n}: (\theta ^{\varepsilon }_0)_t=0\rbrace ; \end{align*}$$
and so on, so that the intervals $[S_i^{\varepsilon,n},T_i^{\varepsilon,n}]$ for $i\geqslant 1$ are precisely the intervals on which $\theta^{\varepsilon}_0$ makes an excursion away from 0 whose maximum height exceeds $2^{-n}$. Call the $i$th of these excursions $e_i^{\varepsilon,n}$. Also set $\Lambda^{\varepsilon,n}:=\sup\lbrace j: S_j^{\varepsilon,n}\leqslant\tau^{\varepsilon}_0\rbrace$ and
$$\begin{equation*} l_i^{\varepsilon ,n}:= T_i^{\varepsilon ,n}-S_i^{\varepsilon ,n} \;\text{ for } i<\Lambda ^{\varepsilon ,n}; \quad l_{\Lambda ^{\varepsilon ,n}}^{\varepsilon ,n}=\tau ^{\varepsilon }_0-S_{\Lambda ^{\varepsilon ,n}}^{\varepsilon ,n}; \quad L_i^{\varepsilon ,n}=\sum _{1\leqslant j \leqslant i} l_j^{\varepsilon ,n} \;\text{ for } 1\leqslant i \leqslant \Lambda ^{\varepsilon ,n}. \end{equation*}$$
Now we define
$$\begin{equation*} (W_0^{\varepsilon ,n})_t=(W_0^{\varepsilon })_{S_i^{\varepsilon ,n}+(t-L_{i-1}^{\varepsilon ,n})} \quad \text{ for } t\in [L_{i-1}^{\varepsilon ,n},L_i^{\varepsilon ,n})\text{ and } 1\leqslant i \leqslant \Lambda ^{\varepsilon ,n}, \end{equation*}$$
and set $\mathrm{D}_0^{\varepsilon,n}$ to be the radial Loewner chain with driving function $W_0^{\varepsilon,n}$. This is defined up to time $\tau_0^{\varepsilon,n}:=L_{\Lambda^{\varepsilon,n}}^{\varepsilon,n}$.

We will show in Section 2.2 that $\mathrm{D}^{\varepsilon,n}_0\rightarrow\mathrm{D}^{\varepsilon}_0$ in $\mathcal{D}$ as $n\rightarrow\infty$ (see Lemma 2.10).
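The cutting procedure above can be illustrated on a discrete path. This is a toy sketch with an invented function name; the real construction operates on the continuous excursions of $\theta_0^{\varepsilon}$:

```python
def cut_small_excursions(path, threshold):
    """Keep only the excursions of a nonnegative discrete path away from 0
    whose maximum reaches `threshold` (the analogue of 2**-n), and
    concatenate them, with a zero at each endpoint of every kept excursion
    (the analogue of keeping the intervals [S_i, T_i])."""
    kept, current = [], []
    for v in path:
        if v <= 0.0:
            if current and max(current) >= threshold:
                kept.append(0.0)       # S_i: last zero before the excursion
                kept.extend(current)
                kept.append(0.0)       # T_i: first zero after the excursion
            current = []
        else:
            current.append(v)
    if current and max(current) >= threshold:  # final, possibly unfinished excursion
        kept.append(0.0)
        kept.extend(current)
    return kept

# The excursion of height 0.1 falls below the threshold and is removed;
# the two tall excursions survive, separated by zeros.
print(cut_small_excursions([0, 1, 0, 0.1, 0, 2, 0], 0.5))
```

As the threshold tends to 0 (that is, $n\rightarrow\infty$), less and less of the path is removed, which is the heuristic behind the convergence $\mathrm{D}^{\varepsilon,n}_0\rightarrow\mathrm{D}^{\varepsilon}_0$.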

2.1.5 Uniform $\operatorname{CLE}_4$ exploration targeted at the origin

Now suppose that we replace $\kappa^{\prime}$ with 4, so that the solution $\theta_0$ of (2.5) is simply a (speed 4) Brownian motion reflected at $\lbrace 0,2\pi\rbrace$. Then the integral in (2.6) does not converge, but it is finite for any single excursion of $\theta_0$. For any $n\in\mathbb{N}$, if we define $\tau^n_0$, $\Lambda^n$ and $(S_i^n,T_i^n,l_i^n,L_i^n)_{i\geqslant 1}$ as in the sections above, we can therefore define a process $\mathrm{D}^n_0$ in $\mathcal{D}$ via the following procedure:
  • sample random variables $(X_i^n)_{i\geqslant 1}$ uniformly and independently on $\partial\mathbb{D}$;
  • define $(W_0^n)_t$ for $t\in[0,\tau_0^n)$ by setting
    $$\begin{equation} (W_0^n)_t= X_i^{n}\exp {\left(\operatorname{i}\Big ((\theta _0)_{t+S_i^{n}} - \int _{S_i^{n}}^{t+S_i^{n}} \cot ((\theta _0)_s/2) \,ds\Big )\right)} \end{equation}$$ (2.7)
    for $t\in[L_{i-1}^n,L_i^n)$ and $1\leqslant i\leqslant\Lambda^n$;
  • let $\mathrm{D}^n_0$ be the radial Loewner chain with driving function $W_0^n$.

With these definitions, we have that $\mathrm{D}^n_0\Rightarrow\mathrm{D}_0$ in $\mathcal{D}$ as $n\rightarrow\infty$, where the limit process is the uniform $\operatorname{CLE}_4$ exploration introduced in [64], run until the outermost $\operatorname{CLE}_4$ loop surrounding 0 is discovered.

More precisely, the uniform $\operatorname{CLE}_4$ exploration toward 0 in $\mathbb{D}$ can be defined as follows. One starts with a Poisson point process $\lbrace(\gamma_j,t_j)\,;\,j\in J\rbrace$ with intensity given by $M$ times Lebesgue measure, where $M$ is the $\operatorname{SLE}_4$ bubble measure rooted uniformly over the unit circle; see [60, Section 2.3.2]. In particular, for each $j$, $\gamma_j$ is a simple continuous loop rooted at some point in $\partial\mathbb{D}$. We define $\mathrm{int}(\gamma_j)$ to be the connected component of $\mathbb{D}\setminus\gamma_j$ that intersects $\partial\mathbb{D}$ only at the root, and set $\tau=\inf\lbrace t: t=t_j \text{ with } 0\in\mathrm{int}(\gamma_j)\rbrace$, so that for all $t_j<\tau$, $\mathrm{int}(\gamma_j)$ does not contain the origin. Therefore, to each such $j$ we can associate a unique conformal map $f_j$ from $\mathbb{D}$ to the connected component of $\mathbb{D}\setminus\gamma_j$ containing 0, such that $f_j(0)=0$ and $f_j^{\prime}(0)>0$. For any $t\leqslant\tau$ it is then possible to define (for example, by considering only loops with some minimum size and then letting this size tend to 0; see again [60, 64]) $f_t$ to be the composition $\circ_{t_j<t}f_{t_j}$, where the composition is taken in reverse chronological order of the times $t_j$. The process
$$\begin{equation} \lbrace \mathrm{D}^{\prime }_t \, ; \, t\leqslant \tau \rbrace :=\lbrace f_t(\mathbb {D}) \, ; \, t\leqslant \tau \rbrace \end{equation}$$ (2.8)
is then a process of simply connected subdomains of $\mathbb{D}$ containing 0, which is decreasing in the sense that $\mathrm{D}^{\prime}_t\subseteq\mathrm{D}^{\prime}_s$ for all $0\leqslant s\leqslant t\leqslant\tau$. This is the description of the uniform $\operatorname{CLE}_4$ exploration toward 0 most commonly found in the literature. Note that with this definition, time is parameterized according to the underlying Poisson point process, and entire loops are ‘discovered instantaneously’.

Since we are considering processes in D $\mathcal {D}$ , we need to reparameterize D $\mathrm{D}^{\prime }$ by log CR $-\log \operatorname{CR}$ seen from the origin. By definition, for each j J $j\in J$ , γ j $\gamma _j$ is a simple loop rooted at a point in D $\partial \mathbb {D}$ that does not surround 0. If we declare the loop to be traversed counterclockwise, we can view it as a curve c j : [ 0 , log f j ( 0 ) ] D $c_j:[0,-\log f_j^{\prime }(0)]\rightarrow \mathbb {D}$ parameterized so that CR ( 0 ; D c j ( [ 0 , t ] ) ) = e t $\operatorname{CR}(0;\mathbb {D}\setminus c_j([0,t]))=e^{-t}$ for all t $t$ (the choice of direction means that int ( γ j ) $\mathrm{int}(\gamma _j)$ is surrounded by the left-hand side of c j $c_j$ ). We then define D $\mathrm{D}$ to be the unique process in D $\mathcal {D}$ such that for each j J $j\in J$ with t j τ $t_j\leqslant \tau$ , and all t [ log f t j ( 0 ) , log f t j ( 0 ) log f j ( 0 ) ] $t\in [-\log f_{t_j}^{\prime }(0),- \log f_{t_j}^{\prime }(0)-\log f_j^{\prime }(0)]$ , D t $\mathrm{D}_t$ is the connected component of f t j ( D c j [ 0 , t + log f t j ( 0 ) ] ) $f_{t_j}(\mathbb {D}\setminus c_j[0,t+\log f_{t_j}^{\prime }(0)])$ containing 0. In other words, D $\mathrm{D}$ is a reparameterization of D $\mathrm{D}^{\prime }$ by log CR $-\log \operatorname{CR}$ seen from 0, where instead of loops being discovered instantaneously, they are traced continuously in a counterclockwise direction. The process is defined until time τ 0 : = log CR ( 0 ; f τ ( D γ τ ) ) $\tau _0:=-\log \operatorname{CR}(0;f_{\tau }(\mathbb {D}\setminus \gamma _{\tau } ))$ , at which point the origin is surrounded by a loop (the law of this loop is that of the outermost loop surrounding the origin in a nested CLE 4 $_4$ in D $\mathbb {D}$ ).
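To see why this reparameterization is consistent, note that each map $f_j$ fixes the origin, so the chain rule applied to the composition defining $f_t$ gives

$$\begin{equation*} -\log \operatorname{CR}(0; f_t(\mathbb {D})) = -\log f_t^{\prime }(0) = \sum _{t_j < t} {\left(-\log f_j^{\prime }(0)\right)}. \end{equation*}$$

In particular, tracing the loop $\gamma _j$ takes exactly $-\log f_j^{\prime }(0)$ units of $-\log \operatorname{CR}$-time, which is why the loop-tracing intervals in the definition of $\mathrm{D}$ abut one another.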

With this definition, the same argument as in [64, Section 4] shows that D 0 n D 0 ${\mathrm{D}}^{n}_0\Rightarrow {\mathrm{D}}_0$ in D $\mathcal {D}$ as n $n\rightarrow \infty$ . Moreover, this convergence in law holds jointly with the convergence τ 0 n τ 0 $\tau _0^n\Rightarrow \tau _0$ (in particular, τ 0 $\tau _0$ has the law of the first time that a reflected Brownian motion started from 0 hits π $\pi$ , as was already observed in [52]).
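Since $B_t^2-t$ is a martingale, optional stopping gives $\mathbb {E}[\tau _0]=\pi ^2$ for the first time a reflected Brownian motion started from 0 hits $\pi$. The following Monte Carlo sketch (purely illustrative and not part of the paper; the step size, path count and seed are arbitrary choices) checks this numerically.

```python
import numpy as np

# tau_0 = first hitting time of pi by a reflected Brownian motion |B|
# started from 0.  Optional stopping applied to the martingale B_t^2 - t
# gives E[tau_0] = pi^2 exactly.
rng = np.random.default_rng(1)
dt, n_paths = 0.002, 600
pos = np.zeros(n_paths)            # current value of |B| for each path
tau = np.zeros(n_paths)            # recorded hitting times
alive = np.ones(n_paths, dtype=bool)
t = 0.0
while alive.any():
    t += dt
    pos[alive] = np.abs(pos[alive] + rng.normal(0.0, np.sqrt(dt), alive.sum()))
    hit = alive & (pos >= np.pi)
    tau[hit] = t
    alive &= ~hit

print(np.mean(tau))  # close to pi^2 ~ 9.87
```

The estimate carries a small upward discretization bias, since the discretized path can cross $\pi$ between grid times without being detected.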

The CLE 4 $\operatorname{CLE}_4$ exploration can be continued after this first loop exploration time τ 0 $\tau _0$ by iteration. More precisely, given the process up to time τ 0 $\tau _0$ , one next samples an independent CLE 4 $\operatorname{CLE}_4$ exploration in the interior of the discovered loop containing 0, but now with loops traced clockwise instead of counterclockwise. When the next-level loop containing 0 is discovered, the procedure is repeated, but going back to counterclockwise tracing. Continuing in this way, we define the whole uniform CLE 4 $_4$ exploration targeted at 0: D 0 = { ( D 0 ) t ; t 0 } ${\mathbf {D}}_0=\lbrace ({\mathbf {D}}_0)_t \, ; \, t\geqslant 0\rbrace$ . Note that by definition D 0 $\mathrm{D}_0$ is then just the process D 0 $\mathbf {D}_0$ , stopped at time τ 0 $\tau _0$ .

Remark 2.3.The ‘clockwise/counterclockwise’ switching defined above is consistent with what happens in the SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ picture when κ > 4 ${\kappa ^{\prime }}&gt;4$ . Indeed, it follows from the Markov property of θ 0 ε $\theta _0^{\varepsilon}$ (in the κ > 4 ${\kappa ^{\prime }}&gt;4$ case) that after time τ 0 ε $\tau _0^{\varepsilon}$ , the evolution of θ $\theta$ until it next hits 0 is independent of the past and equal in law to ( 2 π θ 0 ε ( t ) ) t [ 0 , τ 0 ε ] $(2\pi -\theta _0^{\varepsilon} (t))_{t\in [0,\tau _0^{\varepsilon} ]}$ . This implies that the future of the curve after time τ 0 ε $\tau _0^{\varepsilon}$ has the law of an SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ in the connected component of the remaining domain containing 0, but now with force point starting infinitesimally counterclockwise from the tip, until 0 is surrounded by a clockwise loop. This procedure alternates, just as in the κ = 4 ${\kappa ^{\prime }}=4$ case.

2.1.6 Exploration of the (nested) CLE

In the previous subsections, we have seen how to construct SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ processes, denoted by D 0 ε ${\mathbf {D}}^{\varepsilon} _0$ ( ε = ε ( κ ) $\varepsilon =\varepsilon (\kappa ^{\prime })$ ) from 1 to 0 in D $\mathbb {D}$ , and that these are generated by curves η ε $\eta ^{\varepsilon}$ . We have also seen how to construct a uniform CLE 4 $\operatorname{CLE}_4$ exploration, D 0 ${\mathbf {D}}_0$ , targeted at 0 in D $\mathbb {D}$ . The 0 in the subscripts here is to indicate that 0 is a special target point. But we can also define the law of an SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ , or a CLE 4 $\operatorname{CLE}_4$ exploration process, targeted at any point z $z$ in the unit disk. To do this we simply take the law of ϕ ( D 0 ε ) $\phi (\mathbf {D}^{\varepsilon} _0)$ or ϕ ( D 0 ) $\phi (\mathbf {D}_0)$ , where ϕ : D D $\phi :\mathbb {D}\rightarrow \mathbb {D}$ is the unique conformal map sending 0 to z $z$ and 1 to 1. We will denote these processes by ( D z ε ) , D z $({\mathbf {D}}^{\varepsilon} _z),{\mathbf {D}}_z$ , where the ( D z ε ) $({\mathbf {D}}^{\varepsilon} _z)$ are also clearly generated by curves η z ε $\eta ^{\varepsilon} _z$ for ε > 0 $\varepsilon &gt;0$ . By definition, the time parameterization for D z ε $\mathbf {D}_z^{\varepsilon}$ is such that log CR ( z ; ( D z ε ) t ) = t $-\log \operatorname{CR}(z; (\mathbf {D}_z^{\varepsilon} )_t)=t$ for all t , z , ε $t, z, \varepsilon$ (similarly for D z $\mathbf {D}_z$ ).

In fact, both SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ and the uniform CLE 4 $\operatorname{CLE}_4$ exploration satisfy a special target invariance property; see, for example, [53] for SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ and [64, Lemma 8] for CLE 4 $_4$ . This means that they can be targeted at a countable dense set of points in D $\mathbb {D}$ simultaneously, in such a way that for any distinct z , w D $z,w\in \mathbb {D}$ , the processes targeted at z $z$ and w $w$ agree (modulo time reparameterization) until the first time that z $z$ and w $w$ lie in different connected components of the yet-to-be-explored domain. We will choose our dense set of points to be Q : = Q 2 D $\mathcal {Q}:=\mathbb {Q}^2\cap \mathbb {D}$ , and for ε > 0 $\varepsilon &gt;0$ refer to the coupled process ( D z ε ) z Q $({\mathbf {D}}^{\varepsilon} _z)_{z\in \mathcal {Q}}$ (or ( η z ε ) z Q $(\eta ^{\varepsilon} _z)_{z\in \mathcal {Q}}$ ) as the branching SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ in D $\mathbb {D}$ . Similarly, we refer to the coupled process ( D z ) z Q $({\mathbf {D}}_z)_{z\in \mathcal {Q}}$ as the branching CLE 4 $\operatorname{CLE}_4$ exploration in D $\mathbb {D}$ .

Note that in this setting we can associate a process θ z ε $\theta _z^{\varepsilon}$ to each z Q $z \in \mathcal {Q}$ : we consider the image of D z ε $\mathbf {D}_z^{\varepsilon}$ under the unique conformal map from D D $\mathbb {D}\rightarrow \mathbb {D}$ sending z 0 $z\mapsto 0$ and 1 1 $1\mapsto 1$ , and define θ z ε $\theta _z^{\varepsilon}$ to be the unique process such that this new radial Loewner chain is related to θ z ε $\theta _z^{\varepsilon}$ via Equations (2.6) and (2.3). Note that θ z ε $\theta _z^{\varepsilon}$ has the same law as θ 0 ε $\theta _0^{\varepsilon}$ for each fixed z $z$ (by definition), but the above procedure produces a coupling of { θ z ε ; z Q } $\lbrace \theta _z^{\varepsilon} \, ; \, z\in \mathcal {Q}\rbrace$ .

We will use the following property connecting chordal and radial SLE (that is closely related to target invariance).

Lemma 2.4. ([53, Theorem 3]) Consider the radial SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}(\kappa ^{\prime }-6)$ with force point at e i x $e^{-\operatorname{i}x}$ for x ( 0 , 2 π ) $x\in (0,2\pi )$ , stopped at the first time that e i x $\operatorname{e}^{-\operatorname{i}x}$ and 0 are separated. Then its law coincides (up to a time change) with that of a chordal SLE κ $_{\kappa ^{\prime }}$ from 1 to e i x $\operatorname{e}^{-\operatorname{i}x}$ in D $\mathbb {D}$ , stopped at the equivalent time.

We remark that from ( η z ε ) z Q $(\eta ^{\varepsilon} _z)_{z\in \mathcal {Q}}$ , we can almost surely define a curve η a ε $\eta ^{\varepsilon} _a$ for any fixed a D ¯ $a\in \overline{\mathbb {D}}$ , by taking the almost sure limit (with respect to the supremum norm on compact time intervals) of the curves η a k ε $\eta ^{\varepsilon} _{a_k}$ , where a k Q $a_k\in \mathcal {Q}$ is a sequence tending to a $a$ as k $k\rightarrow \infty$ . This curve has the law of an SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ from 1 to a $a$ in D $\mathbb {D}$ [45, Section 2.1]. Let us caution at this point that such a limiting construction does not work simultaneously for all a $a$ . Indeed, there are almost surely certain exceptional points a $a$ , forming a set of Lebesgue measure zero, for which the limit of η a k ε $\eta ^{\varepsilon} _{a_k}$ does not exist for some sequence a k a $a_k\rightarrow a$ ; see Figure 4.

Figure 4: On the left: the curve η 0 ε $\eta _0^{\varepsilon}$ (in blue) is run up to time τ 0 , 0 ε $\tau _{0,0}^{\varepsilon}$ (the last time that θ 0 ε $\theta _0^{\varepsilon}$ hits 0 before hitting 2 π $2\pi$ ). The point η 0 ε ( τ 0 , 0 ε ) $\eta _0^{\varepsilon} (\tau _{0,0}^{\varepsilon })$ is defined to be o 0 ε $o_0^{\varepsilon}$ and we have that η 0 ε ( [ 0 , τ 0 , 0 ε ] ) = η o 0 ε ε ( [ 0 , τ 0 ε ] ) $\eta _0^{\varepsilon} ([0,\tau _{0,0}^{\varepsilon }])=\eta _{o_0^{\varepsilon} }^{\varepsilon} ([0,\widetilde{\tau }_0^{\varepsilon} ])$ for some time τ 0 ε $\widetilde{\tau }_0^{\varepsilon}$ . On the right: the outermost CLE κ $_{\kappa ^{\prime }}$ loop L 0 ε $\mathcal {L}_0^{\varepsilon}$ containing 0 (marked in red) is defined to be η o 0 ε ε ( [ τ 0 ε , ] ) $\eta _{o_0^{\varepsilon} }^{\varepsilon} ([\widetilde{\tau }_0^{\varepsilon} ,\infty ])$ . Note that we have a choice about how to define η o 0 ε ε $\eta ^{\varepsilon }_{o^{\varepsilon} _0}$ : taking it to be the limit of η a k ε $\eta ^{\varepsilon} _{a_k}$ with a k o 0 ε $a_k\rightarrow o_0^{\varepsilon}$ along the dotted line gives a different curve than taking a k o 0 ε $a_k\rightarrow o_0^{\varepsilon}$ along the dashed line. We choose the definition that makes o 0 ε ${o^{\varepsilon} _0}$ a double point of η o 0 ε ε $\eta ^{\varepsilon }_{o^{\varepsilon} _0}$ .

Let us now explain how, for each κ ( 4 , 8 ) ${\kappa ^{\prime }}\in (4,8)$ , we can use the branching SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ to define a (nested) CLE κ $\operatorname{CLE}_{\kappa ^{\prime }}$ . The conformal loop ensemble CLE κ $\operatorname{CLE}_{\kappa ^{\prime }}$ in D $\mathbb {D}$ is a collection of non-crossing (nested) loops in the disk [61], whose law is invariant under Möbius transformations D D $\mathbb {D}\rightarrow \mathbb {D}$ . The ensemble can therefore be defined in any simply connected domain by conformal invariance, and the resulting family of laws is conjectured (and in some special cases proved; see, for example, [8, 16, 22, 33, 63]) to be a universal scaling limit for collections of interfaces in critical statistical physics models.

For z Q $z\in \mathcal {Q}$ , the procedure to define L z ε $\mathcal {L}^{\varepsilon} _{z}$ , the outermost CLE κ $\operatorname{CLE}_{\kappa ^{\prime }}$ loop containing z $z$ , goes as follows.
  • Let τ z ε $\tau ^{\varepsilon} _{z}$ be the first time that θ z ε $\theta ^{\varepsilon} _z$ hits 2 π $2\pi$ , and let τ 0 , z ε $\tau ^{\varepsilon} _{0,z}$ be the last time before this that θ z ε $\theta ^{\varepsilon} _z$ is equal to 0.
  • Let o z ε = η z ε ( τ 0 , z ε ) $o^{\varepsilon} _z=\eta _z^{\varepsilon} (\tau ^{\varepsilon} _{0,z})$ . In fact, the point o z ε $o^{\varepsilon} _z$ is one of the exceptional points for which the limit of η a k ε $\eta ^{\varepsilon} _{a_k}$ depends on the choice of sequence a k o z ε $a_k\rightarrow o^{\varepsilon} _z$ , so it is not immediately clear how to define η o z ε ε $\eta ^{\varepsilon} _{o_z^{\varepsilon} }$ ; see Figure 4. However, the limit is well defined if we insist that the sequence a k o z ε $a_k\rightarrow o_z^{\varepsilon}$ is such that 0 and a k $a_k$ are separated by η z ε $\eta _z^{\varepsilon}$ at time τ z ε $\tau _z^{\varepsilon}$ for each k $k$ .
  • Define η o z ε ε $\eta ^{\varepsilon} _{o_z^{\varepsilon} }$ to be the limit of the curves η a k ε $\eta ^{\varepsilon} _{a_k}$ as k $k\rightarrow \infty$ . In particular the condition on the sequence a k $a_k$ means that o z ε $o_z^{\varepsilon}$ is almost surely a double point of η o z ε ε $\eta ^{\varepsilon} _{o_z^{\varepsilon} }$ . With this definition of η o z ε ε $\eta ^{\varepsilon} _{o_z^{\varepsilon} }$ , it follows that
    η z ε ( [ 0 , τ 0 , z ε ] ) = η o z ε ε ( [ 0 , τ z ε ] ) almost surely for some τ z ε 0 . $$\begin{equation*} \eta ^{\varepsilon} _z([0,\tau ^{\varepsilon} _{0,z}])=\eta ^{\varepsilon} _{o^{\varepsilon} _z}([0,\widetilde{\tau }^{\varepsilon }_z]) \text{ almost surely\ for some } \widetilde{\tau }^{\varepsilon }_z\geqslant 0. \end{equation*}$$
  • Set L z ε : = η o z ε ε ( [ τ z ε , ) ) $\mathcal {L}^{\varepsilon} _z:=\eta ^{\varepsilon} _{o^{\varepsilon} _z}([\widetilde{\tau }^{\varepsilon }_z,\infty ))$ .

We write B z ε $\mathcal {B}^{\varepsilon} _z$ for the connected component of D L z ε $\mathbb {D}\setminus \mathcal {L}^{\varepsilon} _z$ containing z $z$ : note that this is equal to ( D z ε ) τ z ε $({\mathbf {D}}^{\varepsilon} _z)_{\tau ^{\varepsilon} _z}$ . We will call this the (outermost) CLE κ $\operatorname{CLE}_{{\kappa ^{\prime }}}$ interior bubble containing z $z$ .

We define the sequence of nested CLE κ $\operatorname{CLE}_{\kappa ^{\prime }}$ loops ( L z , i ε ) $(\mathcal {L}^{\varepsilon} _{z,i})$ for i 1 $i\geqslant 1$ by iteration (so L z ε = : L z , 1 ε $\mathcal {L}^{\varepsilon} _z=:\mathcal {L}^{\varepsilon} _{z,1}$ ), and denote the corresponding sequence of nested domains (interior bubbles) containing z $z$ by ( B z , i ε ) i 1 $(\mathcal {B}^{\varepsilon} _{z,i})_{i\geqslant 1}$ . More precisely, the i $i$ th loop is defined inside B z , i 1 ε $\mathcal {B}^{\varepsilon} _{z,i-1}$ in the same way that the first loop is defined inside D $\mathbb {D}$ , after mapping B z , i 1 ε $\mathcal {B}^{\varepsilon} _{z,i-1}$ conformally to D $\mathbb {D}$ and considering the curve η z ε ( [ τ z ε , ) ) $\eta ^{\varepsilon} _z([\tau _z^{\varepsilon} ,\infty ))$ rather than η z ε $\eta ^{\varepsilon} _z$ .

The uniform CLE 4 $\operatorname{CLE}_4$ exploration defines a nested CLE 4 $\operatorname{CLE}_4$ in a similar but less complicated manner; see [64]. For any z Q $z\in \mathcal {Q}$ , to define L z $\mathcal {L}_z$ (the outermost CLE 4 $\operatorname{CLE}_4$ loop containing z $z$ ) we consider the Loewner chain D z ${\mathrm{D}}_z$ and define the times τ z $\tau _z$ and τ 0 , z $\tau _{0,z}$ (according to θ z $\theta _z$ ) as in the κ > 4 ${\kappa ^{\prime }}&gt;4$ case. Then between times τ 0 , z $\tau _{0,z}$ and τ z $\tau _z$ the Loewner chain D z ${\mathrm{D}}_z$ is tracing a simple loop — starting and ending at a point o z $o_z$ . This loop is what we define to be L z $\mathcal {L}_z$ . We define B z $\mathcal {B}_z$ to be the interior of L z $\mathcal {L}_z$ : note that this is also equal to ( D z ) τ z $({\mathbf {D}}_z)_{\tau _z}$ . Finally, we define the nested collection of CLE 4 $\operatorname{CLE}_4$ loops containing z $z$ and their interiors by iteration, denoting these by ( B z , i , L z , i ) i 1 $(\mathcal {B}_{z,i},\mathcal {L}_{z,i})_{i\geqslant 1}$ (so B z , 1 : = B z $\mathcal {B}_{z,1}:=\mathcal {B}_z$ and L z , 1 : = L z $\mathcal {L}_{z,1}:=\mathcal {L}_z$ ).

2.1.7 Space-filling SLE

Now, for κ ( 4 , 8 ) $\kappa ^{\prime }\in (4,8)$ we can also use the branching SLE κ $_{\kappa ^{\prime }}$ , ( η z ε ) z Q $(\eta ^{\varepsilon} _z)_{z\in \mathcal {Q}}$ , to define a space-filling curve η ε $\eta ^{\varepsilon}$ known as space-filling SLE κ $_{\kappa ^{\prime }}$ . This was first introduced in [18, 39]; see also [10, Appendix A.3] for the precise definition of the space-filling loop that we will use. The presentation here closely follows [21].

In our definition, the branches of ( η z ε ) z Q $(\eta ^{\varepsilon} _z)_{z\in \mathcal {Q}}$ are all SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ processes started from point 1, and with force points initially located infinitesimally clockwise from 1. This means that the associated space-filling SLE κ $_{\kappa ^{\prime }}$ will be a so-called counterclockwise space-filling SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ loop from 1 to 1 in D $\mathbb {D}$ .

Given an instance ( η z ε ) z Q $(\eta ^{\varepsilon} _z)_{z\in \mathcal {Q}}$ of a branching SLE κ $_{\kappa ^{\prime }}$ , to define the associated space-filling SLE κ $_{\kappa ^{\prime }}$ , we start by defining an ordering on the points of Q $\mathcal {Q}$ . For this we use a coloring procedure. First, we color the boundary of D $\mathbb {D}$ blue. Then, for each z Q $z\in \mathcal {Q}$ , we can consider the branch η z ε $\eta ^{\varepsilon} _z$ of the branching SLE κ $_{\kappa ^{\prime }}$ targeted toward z $z$ . We color the left-hand side of η z ε $\eta ^{\varepsilon} _z$ red, and the right-hand side of η z ε $\eta ^{\varepsilon} _z$ blue. Whenever η z ε $\eta ^{\varepsilon} _z$ disconnects one region of D $\mathbb {D}$ from another, we can then label the resulting connected components as monocolored or bicolored, depending on whether the boundaries of these components are made up of one or two colors, respectively.

For z $z$ and w $w$ distinct elements of Q $\mathcal {Q}$ , we know (by definition of the branching SLE) that η z ε $\eta ^{\varepsilon} _z$ and η w ε $\eta ^{\varepsilon} _w$ will agree until the first time that z $z$ and w $w$ are separated. When this occurs, it is not hard to see that precisely one of z $z$ or w $w$ will be in a newly created monocolored component. If this is z $z$ we declare that z w $z\prec w$ , and otherwise that w z $w\prec z$ . In this way, we define a consistent ordering $\prec$ on Q $\mathcal {Q}$ ; see Figure 5.

Figure 5: Constructing the ordering from the space-filling SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ . When z $z$ and w 1 $w_1$ are separated, the connected component containing z $z$ has entirely blue boundary, while the connected component containing w 1 $w_1$ has red and blue on its boundary, so z $z$ comes before w 1 $w_1$ in the ordering. By contrast, when z $z$ and w 2 $w_2$ are separated, w 2 $w_2$ is in a monocolored component and z $z$ is not, which implies that z $z$ comes after w 2 $w_2$ in the ordering. So w 2 z w 1 $w_2\prec z \prec w_1$ in this example.
It was shown in [39] that there is a unique continuous space-filling curve η ε $\eta ^{\varepsilon}$ , parameterized by Lebesgue area, which visits the points of Q $\mathcal {Q}$ in this order. This is the counterclockwise space-filling SLE κ $_{\kappa ^{\prime }}$ loop (we will tend to parameterize it differently in what follows, but will discuss this later). We make the following remarks.
  • We can think of η ε $\eta ^{\varepsilon}$ as a version of ordinary SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ that iteratively fills in bubbles, or disconnected components, as it creates them. The ordering means that it will fill in monocolored components first, and come back to bicolored components only later.
  • The word counterclockwise in the definition refers to the fact that the boundary of D $\partial \mathbb {D}$ is covered up by η ε $\eta ^{\varepsilon}$ in a counterclockwise order.

2.2 Convergence of the SLE κ ( κ 6 ) $_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ branches

In this subsection and the next, we will show that for any z Q $z\in \mathcal {Q}$ , we have the following joint convergence in law as κ 4 ${\kappa ^{\prime }}\downarrow 4$ :
  • the SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ branch toward z $z$ to the CLE 4 $\operatorname{CLE}_4$ exploration branch toward z $z$ ; and
  • the nested CLE κ $\operatorname{CLE}_{\kappa ^{\prime }}$ loops surrounding z $z$ to the nested CLE 4 $\operatorname{CLE}_4$ loops surrounding z $z$ .
The present subsection is devoted to proving the first statement.

Let us assume without loss of generality that our target point z $z$ is the origin. We first consider the radial SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ branch targeting 0, D 0 ε $\mathrm{D}_0^{\varepsilon}$ , up until the first time τ 0 ε $\tau _0^{\varepsilon}$ that 0 is surrounded by a counterclockwise loop. The basic result is as follows.

Proposition 2.5. ( D 0 ε , τ 0 ε ) ( D 0 , τ 0 ) $({\mathrm{D}}_0^{\varepsilon} ,\tau _0^{\varepsilon} )\Rightarrow ({\mathrm{D}}_0, \tau _0)$ in D × R $\mathcal {D}\times \mathbb {R}$ as ε 0 $\varepsilon \downarrow 0$ .

By Remark 2.3 and the iterative definition of the CLE 4 $\operatorname{CLE}_4$ exploration toward 0, the convergence for all time follows immediately from the above.

Proposition 2.6. D 0 ε D 0 ${\mathbf {D}}^{\varepsilon} _0\Rightarrow {\mathbf {D}}_0$ in D $\mathcal {D}$ as ε 0 $\varepsilon \downarrow 0$ .

Our proof of Proposition 2.5 will go through the approximations D 0 ε , n $\mathrm{D}_0^{\varepsilon ,n}$ and D 0 n $\mathrm{D}_0^n$ . Namely, we will show that for any fixed level n $n$ of approximation, D 0 ε , n D 0 n $\mathrm{D}_0^{\varepsilon ,n}\rightarrow \mathrm{D}_0^n$ as ε 0 $\varepsilon \downarrow 0$ , equivalently κ 4 ${\kappa ^{\prime }}\downarrow 4$ . Broadly speaking, this holds since the macroscopic excursions of the underlying processes θ 0 ε $\theta _0^{\varepsilon}$ converge, and in between these macroscopic excursions we can show that the location of the tip of the curve distributes itself uniformly on the boundary of the unexplored domain. We combine this with the fact that the approximations D 0 ε , n $\mathrm{D}_0^{\varepsilon ,n}$ converge to D 0 ε $\mathrm{D}^{\varepsilon} _0$ as n $n\rightarrow \infty$ , uniformly in ε $\varepsilon$ , to obtain the result.
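In outline, these two steps combine through the triangle inequality: for any bounded continuous functional $F$ on $\mathcal {D}$,

$$\begin{equation*} {\left|\mathbb {E}[F(\mathrm{D}_0^{\varepsilon })]-\mathbb {E}[F(\mathrm{D}_0)]\right|} \leqslant {\left|\mathbb {E}[F(\mathrm{D}_0^{\varepsilon })]-\mathbb {E}[F(\mathrm{D}_0^{\varepsilon ,n})]\right|} + {\left|\mathbb {E}[F(\mathrm{D}_0^{\varepsilon ,n})]-\mathbb {E}[F(\mathrm{D}_0^{n})]\right|} + {\left|\mathbb {E}[F(\mathrm{D}_0^{n})]-\mathbb {E}[F(\mathrm{D}_0)]\right|}. \end{equation*}$$

The first term is small for large $n$ uniformly in $\varepsilon$, the middle term tends to 0 as $\varepsilon \downarrow 0$ for each fixed $n$, and the last term is small for large $n$ by the convergence $\mathrm{D}_0^n\Rightarrow \mathrm{D}_0$ recorded earlier; the stopping times $\tau _0^{\varepsilon }$ and so on are handled in the same way.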

The heuristic explanation for the mixing of the curve tip on the boundary is that the force point in the definition of an SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ causes the curve to ‘whizz’ around the boundary more and more quickly as κ 4 ${\kappa ^{\prime }}\downarrow 4$ . This means that in any fixed amount of time (for example, between macroscopic excursions), it will forget its initial position and become uniformly distributed in the limit. Making this heuristic rigorous is the main technical step of this subsection, and is achieved in Section 2.2.3.

2.2.1 Excursion measures converge as κ 4 ${\kappa ^{\prime }}\downarrow 4$

The first step toward proving Proposition 2.5 is to describe the sense in which the underlying process θ 0 ε $\theta ^{\varepsilon} _0$ for the SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}(\kappa ^{\prime }-6)$ branch converges to the process θ 0 $\theta _0$ for the CLE 4 $_4$ exploration. It is convenient to formulate this in the language of excursion theory; see Lemma 2.8.

To begin we observe, and record in the following remark, that when θ 0 ε $\theta ^{\varepsilon} _0$ is very small, it behaves much like a Bessel process of a certain dimension.

Remark 2.7.Suppose that ( θ 0 ε ) 0 = 0 $(\theta _0^{\varepsilon} )_0=0$ . By Girsanov's theorem, if the law of { ( θ 0 ε ) t ; t 0 } $\lbrace (\theta ^{\varepsilon} _0)_t \, ; \, t\geqslant 0\rbrace $ is weighted by the martingale

exp Z t ε Z ε t 2 ; Z t ε : = κ 4 κ 0 t 1 ( θ 0 ε ) s 1 2 cot ( θ 0 ε ) s 2 d B s , $$\begin{equation*} \exp {\left(Z_t^{\varepsilon} -\frac{\langle Z^{\varepsilon} \rangle _t}{2}\right)}\; ; \; Z^{\varepsilon} _t:=\frac{\kappa ^{\prime }-4}{\sqrt {\kappa ^{\prime }}} \int _0^t {\left(\frac{1}{(\theta ^{\varepsilon} _0)_s}-\frac{1}{2}\cot {\left(\frac{(\theta ^{\varepsilon} _0)_s}{2}\right)}\right)} \, dB_s , \end{equation*}$$
the resulting law of { ( θ 0 ε ) t ; t τ 0 ε } $\lbrace (\theta ^{\varepsilon} _0)_t\, ; \, {t\leqslant \tau ^{\varepsilon} _0} \rbrace$ is that of κ $\sqrt {{\kappa ^{\prime }}}$ times a Bessel process of dimension δ ( κ ) = 3 8 / κ $\delta ({\kappa ^{\prime }})=3-8/{\kappa ^{\prime }}$ . Note that for y ( 0 , 2 π ) $y\in (0,2\pi )$ , ( 1 / y ( 1 / 2 ) cot ( y / 2 ) ) $(1/y- (1/2)\cot (y/2))$ is positive and increasing, and that for y ( 0 , π ] $y\in (0,\pi ]$ , y / 12 ( 1 / y ( 1 / 2 ) cot ( y / 2 ) ) y / 6 $y/12\leqslant (1/y-(1/2)\cot (y/2)) \leqslant y/6$ , so in particular the integral in the definition of Z t ε $Z_t^{\varepsilon}$ is well defined.
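The bounds quoted in the remark follow from the expansion $1/y-(1/2)\cot (y/2)=y/12+y^3/720+\dots$ for $y\in (0,2\pi )$, whose coefficients are all positive. They can also be checked numerically; the following is a quick illustrative verification (the grid is an arbitrary choice, not part of the argument).

```python
import numpy as np

# g(y) = 1/y - (1/2) cot(y/2), the integrand factor from Remark 2.7
def g(y):
    return 1.0 / y - 0.5 / np.tan(y / 2.0)

# check y/12 <= g(y) <= y/6 on (0, pi]
y = np.linspace(0.01, np.pi, 100000)
assert np.all(g(y) >= y / 12)
assert np.all(g(y) <= y / 6)

# check that g is positive and increasing on (0, 2*pi)
y2 = np.linspace(0.01, 2 * np.pi - 0.01, 100000)
assert np.all(g(y2) > 0)
assert np.all(np.diff(g(y2)) > 0)
print("bounds verified")
```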

Now, observe that by the Markov property of θ 0 ε $\theta ^{\varepsilon} _0$ , we can define its associated (infinite) excursion measure on excursions from 0. We define m ε $m^{\varepsilon}$ to be the image of this measure under the operation of stopping excursions if and when they reach height 2 π $2\pi$ .

For n 0 $n\geqslant 0$ , we write m n ε $m^{\varepsilon} _n$ for m ε $m^{\varepsilon}$ restricted to excursions with maximum height exceeding 2 n $2^{-n}$ , and normalized to be a probability measure. It then follows from the strong Markov property that the excursions of θ 0 ε $\theta _0^{\varepsilon}$ during the intervals [ S i ε , n , T i ε , n ] $[S_i^{\varepsilon ,n},T_i^{\varepsilon ,n}]$ are independent samples from m n ε $m_n^{\varepsilon}$ , and Λ ε , n $\Lambda ^{\varepsilon ,n}$ is the index of the first of these samples that actually reaches height 2 π $2\pi$ . We also write m ε $m^{\varepsilon} _\star$ for the measure m ε $m^{\varepsilon}$ restricted to excursions that reach 2 π $2\pi$ , again normalized to be a probability measure.

Finally, we consider the excursion measure on excursions from 0 for Brownian motion. We denote the image of this measure, after stopping excursions when they hit 2 π $2\pi$ , by m $m$ . Analogously to above, we write m n $m_n$ for m $m$ conditioned on the excursion exceeding height 2 n $2^{-n}$ . We write m $m_\star$ for m $m$ conditioned on the excursion reaching height 2 π $2\pi$ .

The measures m , ( m ε ) ε $m,(m^{\varepsilon} )_\varepsilon$ are supported on the excursion space
E = { e C ( R + , [ 0 , 2 π ] ) ; e ( 0 ) = 0 , ζ ( e ) : = sup { s > 0 : e ( s ) ( 0 , 2 π ) } ( 0 , ) } $$\begin{equation*} E = \lbrace e\in C(\mathbb {R}_+,[0,2\pi ])\, ; \, e(0)=0, \zeta (e):=\sup \lbrace s&gt;0: e(s)\in (0,2\pi )\rbrace \in (0,\infty )\rbrace \end{equation*}$$
on which we define the distance
d E ( e , e ) = sup t 0 | e ( t ) e ( t ) | + | ζ ( e ) ζ ( e ) | . $$\begin{equation*} d_E(e,e^{\prime })=\sup _{t\geqslant 0} |e(t)-e^{\prime }(t)| + |\zeta (e)-\zeta (e^{\prime })|. \end{equation*}$$

Lemma 2.8.For any n 0 $n\geqslant 0$ , m n ε m n $m_n^{\varepsilon} \rightarrow m_n$ in law as ε 0 $\varepsilon \rightarrow 0$ , with respect to d E $d_E$ . The same holds with ( m ε , m ) $(m_\star ^{\varepsilon} ,m_\star )$ in place of ( m n ε , m n ) $(m_n^{\varepsilon} , m_n)$ .

Proof.For a > 0 $a&gt;0$ , set E a = { e C ( R + , [ 0 , 2 π a ] ) ; e ( 0 ) = 0 , ζ a ( e ) : = sup { s > 0 : e ( s ) ( 0 , 2 π a ) } ( 0 , ) } $E^a = \lbrace e\in C(\mathbb {R}_+,[0,2\pi -a])\, ; \, e(0)=0, \zeta ^a(e):=\sup \lbrace s&gt;0: e(s)\in (0,2\pi -a)\rbrace \in (0,\infty )\rbrace$ , and equip it with the metric d E a ( e , e ) = sup t 0 | e ( t ) e ( t ) | + | ζ a ( e ) ζ a ( e ) | $d_{E^a}(e,e^{\prime })=\sup _{t\geqslant 0} |e(t)-e^{\prime }(t)| + |\zeta ^a(e)-\zeta ^a(e^{\prime })|$ . Set δ = δ ( κ ( ε ) ) $\delta =\delta ({\kappa ^{\prime }}(\varepsilon ))$ , recalling from Remark 2.7 that δ ( κ ) = 3 8 / κ $\delta ({\kappa ^{\prime }})=3-8/{\kappa ^{\prime }}$ . We first state and prove the analogous result for Bessel processes.

Lemma 2.9.Let b ε $b^{\varepsilon}$ be a sample from the Bessel- δ $\delta$ excursion measure away from 0, conditioned on exceeding height 2 n $2^{-n}$ , and stopped on the subsequent first hitting of 0 or 2 π a $2\pi -a$ . Let b $b$ be a sample from the Brownian excursion measure with the same conditioning and stopping. Then for any a > 0 $a&gt;0$ , b ε b $b^{\varepsilon} \Rightarrow b$ as ε 0 $\varepsilon \downarrow 0$ , in the space ( E a , d E a ) $(E^a,d_{E^a})$ .

Proof of Lemma 2.9.For any ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ , b ε $b^{\varepsilon}$ can be sampled (see [18, Section 3]) by

  • first sampling X ε $X^{\varepsilon}$ from the probability measure on [ 2 n , ) $[2^{-n},\infty )$ with density proportional to x δ 3 d x $x^{\delta -3} dx$ ;
  • then running a Bessel- ( 4 δ ) $(4-\delta )$ process from 0 to X ε $X^{\varepsilon}$ ;
  • stopping this process at 2 π a $2\pi -a$ if X ε 2 π a $X^{\varepsilon} \geqslant 2\pi -a$ ; or
  • placing it back to back with the time reversal of an independent Bessel- ( 4 δ ) $(4-\delta )$ from 0 to X ε $X^{\varepsilon}$ if X ε < 2 π a $X^{\varepsilon} &lt;2\pi -a$ .
Since the time for a Bessel- ( 4 δ ) $(4-\delta )$ to leave [ 0 , a ] $[0,a^{\prime }]$ converges to 0 as a 0 $a^{\prime }\rightarrow 0$ uniformly in δ < 3 / 2 $\delta &lt;3/2$ , and for any a < 2 n $a^{\prime }&lt;2^{-n}$ , a Bessel- ( 4 δ ) $(4-\delta )$ process from a $a^{\prime }$ to y $y$ converges in law to a Bessel-3 process from a $a^{\prime }$ to y $y$ as κ 4 ${\kappa ^{\prime }}\downarrow 4$ , uniformly in y [ 2 n , 2 π ] $y\in [2^{-n},2\pi ]$ , this shows that b ε b $b^{\varepsilon} \Rightarrow b$ in ( E a , d E a ) $(E^a,d_{E^a})$ . $\Box$
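For concreteness, the first bullet point of this sampling procedure can be simulated directly. For $\delta \in (1,3/2)$ the density proportional to $x^{\delta -3}$ on $[2^{-n},\infty )$ has survival function $(x/2^{-n})^{\delta -2}$, so it can be sampled by inversion; and as $\delta \downarrow 1$ (that is, $\kappa ^{\prime }\downarrow 4$) the probability that the excursion maximum reaches $2\pi$ converges to $2^{-n}/(2\pi )$, consistent with the fact that the Brownian excursion measure assigns mass proportional to $1/h$ to excursions of maximum height at least $h$. The following sketch is illustrative only (sample sizes and seed are arbitrary choices).

```python
import numpy as np

# Sample the maximum X of a Bessel-delta excursion conditioned to exceed
# height a = 2^-n: density proportional to x^(delta - 3) on [a, infinity),
# survival function S(x) = (x / a)^(delta - 2), so inversion gives
# X = a * U^(1 / (delta - 2)) for U uniform on (0, 1).
def sample_excursion_max(delta, a, size, rng):
    u = rng.uniform(size=size)
    return a * u ** (1.0 / (delta - 2.0))

rng = np.random.default_rng(7)
a = 2.0 ** (-3)  # n = 3

for delta in [1.4, 1.2, 1.05, 1.001]:
    x = sample_excursion_max(delta, a, 400_000, rng)
    # empirical versus exact P(X >= 2*pi) = (2*pi / a)^(delta - 2)
    print(delta, np.mean(x >= 2 * np.pi), (2 * np.pi / a) ** (delta - 2.0))

print("Brownian limit a/(2*pi):", a / (2 * np.pi))
```

As $\delta \downarrow 1$ the printed probabilities approach the Brownian value $a/(2\pi )$, matching the convergence of excursion measures in Lemma 2.8.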

Now we continue the proof of Lemma 2.8. Recalling the Radon–Nikodym derivative of Remark 2.7 (note that κ 4 0 ${\kappa ^{\prime }}-4\rightarrow 0$ as ε 0 $\varepsilon \downarrow 0$ ), we conclude that if e ε $e^{\varepsilon}$ and e $e$ are sampled from m n ε $m_n^{\varepsilon}$ and m n $m_n$ , respectively, and stopped upon hitting { 0 , 2 π a } $\lbrace 0,2\pi -a\rbrace$ for the first time after hitting 2 n $2^{-n}$ , then e ε e $e^{\varepsilon} \rightarrow e$ in law as ε 0 $\varepsilon \downarrow 0$ , in the space ( E a , d E a ) $(E^a,d_{E^a})$ .

To complete the proof, it therefore suffices to show (now without stopping e ε $e^{\varepsilon}$ or e $e$ ) that
ζ ( e ε ) ζ a ( e ε ) 0 and sup t ( ζ a ( e ε ) , ζ ( e ε ) ) | e ε ( t ) 2 π | 0 $$\begin{equation*} \zeta (e^{\varepsilon} )-\zeta ^a(e^{\varepsilon} )\rightarrow 0 \;\;\; \text{ and } \;\;\; \sup _{t\in (\zeta ^a(e^{\varepsilon} ),\zeta (e^{\varepsilon} ))} |e^{\varepsilon} (t)-2\pi |\rightarrow 0 \end{equation*}$$
as a 0 $a\rightarrow 0$ , uniformly in ε $\varepsilon$ (small enough). But by symmetry, if ζ a ( e ε ) < ζ ( e ε ) $\zeta ^a(e^{\varepsilon} )&lt;\zeta (e^{\varepsilon} )$ then 2 π e ε $2\pi -e^{\varepsilon}$ from time ζ a ( e ε ) $\zeta ^a(e^{\varepsilon} )$ onward has the law of θ 0 ε $\theta ^{\varepsilon}_0$ started from a $a$ and stopped upon hitting 0 or 2 π $2\pi$ . As a 0 $a\rightarrow 0$ the probability that this process remains in [ 0 , π ] $[0,\pi ]$ tends to 1 uniformly in ε $\varepsilon$ , and then we can use the same Radon–Nikodym considerations to deduce the result. The final statement of Lemma 2.8 can be justified in exactly the same manner. $\Box$

2.2.2 Strategy for the proof of Proposition 2.5

With Lemma 2.8 in hand the strategy to prove Proposition 2.5 is to establish the following two lemmas.

Lemma 2.10.Let F $F$ be a continuous bounded function on D × [ 0 , ) $\mathcal {D}\times [0,\infty )$ . Then E [ F ( D 0 ε , n , τ 0 ε , n ) ] E [ F ( D 0 ε , τ 0 ε ) ] $\mathbb {E}[F({\mathrm{D}}^{\varepsilon ,n}_0,\tau ^{\varepsilon ,n}_0)]\rightarrow \mathbb {E}[F({\mathrm{D}}^{\varepsilon} _0,\tau ^{\varepsilon} _0)]$ as n $n\rightarrow \infty$ , uniformly in κ ( 4 , 8 ) ${\kappa ^{\prime }}\in (4,8)$ , equivalently ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ .

Proof.Fix ε $\varepsilon$ as above, and let us assume that the processes D 0 ε , n ${\mathrm{D}}^{\varepsilon ,n}_0$ as n $n$ varies and D 0 ε ${\mathrm{D}}^{\varepsilon} _0$ are coupled together in the natural way: using the same underlying θ 0 ε $\theta ^{\varepsilon} _0$ and W 0 ε $W^{\varepsilon} _0$ . By Remark 2.1, in particular (2.4), it suffices to prove that

τ 0 ε , n τ 0 ε $$\begin{equation} \tau ^{\varepsilon ,n}_0\rightarrow \tau ^{\varepsilon }_0 \end{equation}$$ (2.9)
in probability as n $n\rightarrow \infty$ , uniformly in ε $\varepsilon$ . In other words, it suffices to show that the time spent by θ 0 ε $\theta _0^{\varepsilon}$ in excursions of maximum height less than 2 n $2^{-n}$ (before first hitting 2 π $2\pi$ ) goes to 0 uniformly in ε $\varepsilon$ as n $n\rightarrow \infty$ .

To do this, let us consider the total (that is, cumulative) duration C ε , n $C^{\varepsilon ,n}$ of such excursions of θ 0 ε $\theta _0^{\varepsilon}$ , before the first time σ ε $\sigma ^{\varepsilon }$ that θ 0 ε $\theta _0^{\varepsilon}$ reaches π $\pi$ . The reason for restricting to this time interval is to use the final observation in Remark 2.7: that the integrand in the definition of Z ε $Z^{\varepsilon}$ is deterministically bounded up to time σ ε $\sigma ^{\varepsilon}$ . This will allow us to transfer the question to one about Bessel processes. And, indeed, since the number of times that θ 0 ε $\theta _0^{\varepsilon}$ will reach π $\pi$ before time τ 0 ε $\tau _0^{\varepsilon}$ is a geometric random variable with success probability uniformly bounded away from 0 (due to Lemma 2.8), it is enough to show that C ε , n $C^{\varepsilon ,n}$ tends to 0 in probability as n $n\rightarrow \infty$ , uniformly in ε $\varepsilon$ .

For this, we first note that by Remark 2.7, for any a , S > 0 $a,S&gt;0$ we can write

P ( C ε , n > a ) P ( σ ε > S ) + Q ε ( exp ( Z σ ε ε + 1 2 Z ε σ ε ) 1 { C ε , n > a } 1 { σ ε S } ) , $$\begin{equation*} \mathbb {P}(C^{\varepsilon ,n}&gt;a)\leqslant \mathbb {P}(\sigma ^{\varepsilon} &gt;S)+\mathbb {Q}^{\varepsilon} (\exp (-Z_{\sigma ^{\varepsilon} }^{\varepsilon} +\tfrac{1}{2}\langle Z^{\varepsilon} \rangle _{\sigma ^{\varepsilon} }) \mathbb {1}_{\lbrace C^{\varepsilon ,n}&gt;a\rbrace }\mathbb {1}_{\lbrace \sigma ^{\varepsilon} \leqslant S\rbrace }), \end{equation*}$$
where Z ε $Z^{\varepsilon}$ is as defined in Remark 2.7 and under Q ε $\mathbb {Q}^{\varepsilon}$ , θ 0 ε $\theta _0^{\varepsilon}$ has the law of κ $\sqrt {{\kappa ^{\prime }}}$ times a Bessel process of dimension δ ( κ ) = 3 8 / κ $\delta ({\kappa ^{\prime }})=3-8/{\kappa ^{\prime }}$ . Since P ( σ ε > S ) 0 $\mathbb {P}(\sigma ^{\varepsilon} &gt;S)\rightarrow 0$ as S $S\rightarrow \infty$ , uniformly in ε $\varepsilon$ (this is proved, for example, in [52]), it suffices to show that for any fixed S $S$ , the second term in the above equation tends to 0 uniformly in ε $\varepsilon$ as n $n\rightarrow \infty$ .

To this end, we begin by using Cauchy–Schwarz to obtain the upper bound

$$\begin{equation*} \mathbb {Q}^{\varepsilon} {\left(\exp (-Z_{\sigma ^{\varepsilon} }^{\varepsilon} +\tfrac{1}{2}\langle Z^{\varepsilon} \rangle _{\sigma ^{\varepsilon} })\, \mathbb {1}_{\lbrace C^{\varepsilon ,n}>a\rbrace }\mathbb {1}_{\lbrace \sigma ^{\varepsilon} \leqslant S\rbrace }\right)}^2\leqslant \mathbb {Q}^{\varepsilon} (\exp (-2Z_{\sigma ^{\varepsilon} }^{\varepsilon} +\langle Z^{\varepsilon} \rangle _{\sigma ^{\varepsilon} }) \mathbb {1}_{\lbrace \sigma ^{\varepsilon} \leqslant S\rbrace })\, \mathbb {Q}^{\varepsilon} (\mathbb {1}_{\lbrace C^{\varepsilon ,n}>a\rbrace }). \end{equation*}$$
Then, because we are on the event that σ ε S $\sigma ^{\varepsilon} \leqslant S$ , and the integrand in the definition of Z ε $Z^{\varepsilon}$ is deterministically bounded up to time σ ε $\sigma ^{\varepsilon}$ , we have that Q ε ( exp ( 2 Z σ ε ε + Z ε σ ε ) 1 { σ ε S } ) c $\mathbb {Q}^{\varepsilon} (\exp (-2Z_{\sigma ^{\varepsilon} }^{\varepsilon} +\langle Z^{\varepsilon} \rangle _{\sigma ^{\varepsilon} }) \mathbb {1}_{\lbrace \sigma ^{\varepsilon} \leqslant S\rbrace }) \leqslant c$ for some constant c = c ( S ) $c=c(S)$ not depending on ε $\varepsilon$ . So it remains to show that the Q ε $\mathbb {Q}^{\varepsilon}$ expectation of C ε , n $C^{\varepsilon ,n}$ goes to 0 uniformly in ε $\varepsilon$ as n $n\rightarrow \infty$ .

Recall that under Q ε $\mathbb {Q}^{\varepsilon}$ , θ 0 ε $\theta _0^{\varepsilon}$ has the law of κ $\sqrt {{\kappa ^{\prime }}}$ times a Bessel process of dimension δ ( κ ) = 3 8 / κ $\delta ({\kappa ^{\prime }})=3-8/{\kappa ^{\prime }}$ . Now, by [47, Theorem 1] we can construct a dimension δ ( κ ) $\delta ({\kappa ^{\prime }})$ Bessel process by concatenating excursions from a Poisson point process Λ $\Lambda$ with intensity 0 x δ 3 ν δ x d x $\int _0^{\infty } x^{\delta -3} \nu _\delta ^x \, dx$ times Lebesgue measure on E × R $E\times \mathbb {R}$ , where ν δ x $\nu _\delta ^x$ is a probability measure on Bessel excursions with maximum height x $x$ for each x > 0 $x&gt;0$ . Moreover, by Brownian scaling, ν δ x ( e ) = ν δ 1 ( e x ) $\nu _\delta ^x(e)=\nu _\delta ^1(e_x)$ , e x ( s ) = x 1 e ( x 2 s ) $e_x(s)=x^{-1}e(x^{2}s)$ for 0 s ζ ( e x ) = x 2 ζ ( e ) $0\leqslant s \leqslant \zeta (e_x)=x^{-2}\zeta (e)$ . (For proofs of these results, see, for example, [47].)

Now, if we let T = inf { t : ( e , t ) Λ and sup s e ( s ) π } $T=\inf \lbrace t:(e,t)\in \Lambda \text{ and } \sup _s e(s) \geqslant \pi \rbrace$ , then conditionally on T $T$ , we can write C κ , n $C^{{\kappa ^{\prime }}, n}$ as the sum of the excursion lifetimes ζ ( e ) $\zeta (e)$ over points ( e , t ) $(e,t)$ in a (conditionally independent) Poisson point process with intensity

0 2 n x δ 3 ν δ x d x × Leb ( [ 0 , T ] ) . $$\begin{equation*} \int _0^{2^{-n}} x^{\delta -3} \nu _\delta ^x \, dx \times \mathrm{Leb}([0,T]). \end{equation*}$$
Note that by definition of the Poisson point process, T $T$ is an exponential random variable with associated parameter π x δ 3 d x $\int _\pi ^\infty x^{\delta -3} \, dx$ , and so has uniformly bounded expectation in κ ${\kappa ^{\prime }}$ . Since Brownian scaling also implies that ν δ x ( ζ ( e ) ) = x 2 ν δ 1 ( ζ ( e x ) ) $\nu _\delta ^x(\zeta (e)) =x^2\nu _\delta ^{1}(\zeta (e_x))$ for excursions e $e$ , Campbell's formula yields that the expectation of C κ , n $C^{{\kappa ^{\prime }},n}$ is of order 2 n δ $2^{-n\delta }$ . This indeed converges uniformly to 0 in δ 1 $\delta \geqslant 1$ (equivalently κ , ε ${\kappa ^{\prime }},\varepsilon$ ), which completes the proof. $\Box$
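To spell out the Campbell's formula estimate above, here is a schematic computation, writing c δ $c_\delta$ for ν δ 1 ( ζ ( e ) ) $\nu _\delta ^1(\zeta (e))$ (the notation c δ $c_\delta$ , and the assumption that it is bounded uniformly for δ ( 1 , 3 / 2 ) $\delta \in (1,3/2)$ , are ours, introduced for this sketch):

```latex
% Conditional expectation of C^{\kappa',n} given T, via Campbell's formula
% and the scaling relation \nu_\delta^x(\zeta(e)) = x^2 \nu_\delta^1(\zeta(e)):
\mathbb{E}\bigl[C^{\kappa',n} \,\big|\, T\bigr]
  = T \int_0^{2^{-n}} x^{\delta-3}\,\nu_\delta^x(\zeta(e))\,dx
  = T\, c_\delta \int_0^{2^{-n}} x^{\delta-1}\,dx
  = T\, c_\delta\, \frac{2^{-n\delta}}{\delta}.
% Since T is exponential with rate \int_\pi^\infty x^{\delta-3}\,dx
% = \pi^{\delta-2}/(2-\delta), we have \mathbb{E}[T] = (2-\delta)\pi^{2-\delta},
% which is bounded for \delta \in (1,3/2); taking expectations therefore gives
% \mathbb{E}[C^{\kappa',n}] = O(2^{-n\delta}), which tends to 0 uniformly in
% \delta \geqslant 1 as n \to \infty.
```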

Lemma 2.11.For any fixed n N $n\in \mathbb {N}$ , ( D 0 ε , n , τ 0 ε , n ) $({\mathrm{D}}^{\varepsilon ,n}_0,\tau ^{\varepsilon ,n}_0)$ converges to ( D 0 n , τ 0 n ) $({\mathrm{D}}^n_0,\tau ^n_0)$ in law as ε 0 $\varepsilon \downarrow 0$ , with respect to the Carathéodory × $\times$ Euclidean topology.

Proof of Proposition 2.5.This follows by combining Lemma 2.10 and Lemma 2.11, plus the fact that ( D 0 n , τ 0 n ) ( D 0 , τ 0 ) $(\mathrm{D}_0^n,\tau _0^n)\Rightarrow (\mathrm{D}_0,\tau _0)$ as n $n\rightarrow \infty$ . $\Box$

2.2.3 Convergence at a fixed level of approximation as κ 4 ${\kappa ^{\prime }}\downarrow 4$

The remainder of this section will now be devoted to proving Lemma 2.11. This is slightly trickier, and so we will break down its proof further into Lemmas 2.12 and 2.13.

Let us first set up for the statements of these lemmas. For κ ( 4 , 8 ) ${\kappa ^{\prime }}\in (4,8)$ (equiv. ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ ) we set X i ε , n = ( W 0 ε ) S i ε , n $X_i^{\varepsilon ,n}=(W_0^{\varepsilon} )_{S_i^{\varepsilon ,n}}$ for 1 i Λ ε , n $1\leqslant i\leqslant \Lambda ^{\varepsilon ,n}$ and then write
X ε , n = ( X 1 ε , n , X 2 ε , n , , X Λ ε , n ε , n ) . $$\begin{equation*} \mathbf {X}^{\varepsilon ,n}=(X_1^{\varepsilon ,n},X_2^{\varepsilon ,n},\ldots , X_{\Lambda ^{\varepsilon ,n}}^{\varepsilon ,n}). \end{equation*}$$
For the CLE 4 $\operatorname{CLE}_4$ case, we write
X n = ( X 1 n , X 2 n , , X Λ n n ) , $$\begin{equation*} \mathbf {X}^{n}=(X_1^{n},X_2^{n},\ldots , X_{\Lambda ^{n}}^{n}), \end{equation*}$$
where the X n $X^n$ are as defined in Section 2.1.5. Also recall the definition of the excursions ( e i ε , n ) 1 i Λ ε , n $(e_i^{\varepsilon ,n})_{1\leqslant i \leqslant \Lambda ^{\varepsilon ,n}}$ of θ ε $\theta ^{\varepsilon}$ above height 2 n $2^{-n}$ . Define the corresponding excursions ( e i n ) i Λ n $(e_i^n)_{i\leqslant \Lambda ^n}$ for the uniform CLE 4 $\operatorname{CLE}_4$ exploration, and denote
e ε , n = ( e 1 ε , n , e 2 ε , n , , e Λ ε , n ε , n ) , e n = ( e 1 n , e 2 n , , e Λ n n ) . $$\begin{equation*} \mathbf {e}^{\varepsilon ,n}=(e_1^{\varepsilon ,n},e_2^{\varepsilon ,n},\ldots , e_{\Lambda ^{\varepsilon ,n}}^{\varepsilon ,n}), \quad \mathbf {e}^{n}=(e_1^{n},e_2^{n},\ldots , e_{\Lambda ^{n}}^{n}). \end{equation*}$$

Thus, X ε , n , X n $\mathbf {X}^{\varepsilon ,n}, \mathbf {X}^n$ live in the space of sequences of finite length, taking values in D $\partial \mathbb {D}$ . We equip this space with the topology such that X ( k ) X $\mathbf {X}^{(k)}\rightarrow \mathbf {X}$ as k $k\rightarrow \infty$ if and only if the vector length of X ( k ) $\mathbf {X}^{(k)}$ is equal to the vector length of X $\mathbf {X}$ for all k K 0 $k\geqslant K_0$ , for some K 0 $K_0$ large enough, and such that every component of X ( k ) $\mathbf {X}^{(k)}$ (for k K 0 $k\geqslant K_0$ ) converges to the corresponding component of X $\mathbf {X}$ with respect to the Euclidean distance. Similarly, e ε , n , e n $\mathbf {e}^{\varepsilon ,n}, \mathbf {e}^n$ live in the space of sequences of finite length, taking values in the space E $E$ of excursions away from { 0 , 2 π } $\lbrace 0,2\pi \rbrace$ .

We equip this sequence space with topology such that e ( k ) e $\mathbf {e}^{(k)}\rightarrow \mathbf {e}$ as k $k\rightarrow \infty$ if and only if the vector length of e ( k ) $\mathbf {e}^{(k)}$ is equal to the vector length of e $\mathbf {e}$ for all k $k$ large enough, together with component-wise convergence with respect to d E $d_E$ .

Lemma 2.12.For any n N $n\in \mathbb {N}$ , ( e ε , n , τ ε , n ) ( e n , τ n ) $(\mathbf {e}^{\varepsilon ,n},\tau ^{\varepsilon ,n})\Rightarrow (\mathbf {e}^n,\tau ^n)$ as ε 0 $\varepsilon \rightarrow 0$ .

Proof.This is a direct consequence of Lemma 2.8 and the definition of τ ε , n , τ n $\tau ^{\varepsilon ,n},\tau ^n$ . $\Box$

Lemma 2.13.For any n N $n\in \mathbb {N}$ , X ε , n X n $\mathbf {X}^{\varepsilon ,n}\rightarrow \mathbf {X}^n$ in law as ε 0 $\varepsilon \rightarrow 0$ .

This second lemma will take a bit more work to prove. However, we can immediately see how the two together imply Lemma 2.11.

Proof of Lemma 2.11.Lemmas 2.12 and 2.13 imply that the driving functions of D 0 ε , n $\mathrm{D}^{\varepsilon ,n}_0$ converge in law to the driving function of D 0 n $\mathrm{D}^n_0$ with respect to the Skorokhod topology. This implies the result by Remark 2.1. $\Box$

Our new goal is therefore to prove Lemma 2.13. The main ingredient is the following (recall that S 1 ε , n $S_1^{\varepsilon ,n}$ is the start time of the first excursion of θ 0 ε $\theta _0^{\varepsilon}$ away from 0 that reaches height 2 n $2^{-n}$ ).

Lemma 2.14.For any u 0 $u\ne 0$ and n N $n\in \mathbb {N}$ fixed,

$$\begin{equation} {\mathbb {E}[\, (X_1^{\varepsilon ,n})^u\, ]} = \mathbb {E}[\,\exp (\operatorname{i}u \int _0^{S_1^{\varepsilon ,n}}\cot ((\theta ^{\varepsilon} _0)_s/2) \, ds)\,]\rightarrow 0 \text{ as } \varepsilon \downarrow 0. \end{equation}$$ (2.10)

For the proof of Lemma 2.14, we are going to use Remark 2.7. That is, the fact that θ 0 ε $\theta ^{\varepsilon} _0$ behaves very much like κ $\sqrt {{\kappa ^{\prime }}}$ times a Bessel process of dimension δ = 3 8 / κ ( 1 , 2 ) $\delta =3-8/{\kappa ^{\prime }}\in (1,2)$ . The Bessel process is much more convenient to work with (in terms of exact calculations) because of its scaling properties. Indeed, for Bessel processes we have the following lemma:

Lemma 2.15.Let θ ε $\widetilde{\theta }^{\varepsilon}$ be κ = κ ( ε ) $\sqrt {{\kappa ^{\prime }}}=\sqrt {{\kappa ^{\prime }}(\varepsilon )}$ times a Bessel process of dimension 3 8 / κ $3-8/{\kappa ^{\prime }}$ (started from 0) and S ε , m $\widetilde{S}^{\varepsilon ,m}$ be the start time of the first excursion in which it exceeds 2 m $2^{-m}$ . Then for u 0 $u\ne 0$ ,

| E [ exp 2 i u 0 S ε , m ( θ s ε ) 1 d s ] | 0 $$\begin{equation*} | \mathbb {E}[\exp {\left(2\operatorname{i}u \int _0^{\widetilde{S}^{\varepsilon ,m}} (\widetilde{\theta }^{\varepsilon} _s)^{-1} \, ds\right)} ]|\rightarrow 0 \end{equation*}$$
as ε 0 $\varepsilon \downarrow 0$ for any m $m$ large enough.

(The assumption that m $m$ is sufficiently large here is made simply for convenience of proof.)

Proof.By changing the value of u $u$ appropriately, we can instead take θ ε $\widetilde{\theta }^{\varepsilon }$ to be a Bessel process of dimension δ ( κ ) = 3 8 / κ $\delta ({\kappa ^{\prime }})=3-8/{\kappa ^{\prime }}$ (that is, we forget about the multiplicative factor of κ $\sqrt {{\kappa ^{\prime }}}$ ). Note that δ ( κ ) ( 1 , 2 ) $\delta ({\kappa ^{\prime }})\in (1,2)$ for κ < 8 ${\kappa ^{\prime }}<8$ and δ ( κ ) 1 $\delta ({\kappa ^{\prime }}) \downarrow 1$ as κ 4 ${\kappa ^{\prime }}\downarrow 4$ . By standard Itô excursion theory, θ ε $\widetilde{\theta }^{\varepsilon}$ can be formed by gluing together the excursions of a Poisson point process Λ $\Lambda$ with intensity ν δ ( κ ) × Leb [ 0 , ) $\nu _{\delta ({\kappa ^{\prime }})}\times \text{Leb}_{[0,\infty )}$ , where ν δ $\nu _\delta$ is the Bessel- δ $\delta$ excursion measure. As mentioned previously, it is a classical result that we can decompose ν δ ( · ) = 0 x δ 3 ν δ x ( · ) d x $\nu _\delta (\cdot )=\int _0^\infty x^{\delta -3}\nu _\delta ^x(\cdot ) \, dx$ (there is a multiplicative constant that we can set to one without loss of generality), where ν δ x $\nu _\delta ^x$ is a probability measure on excursions with maximum height exactly x $x$ for each x > 0 $x>0$ , and moreover, by Brownian scaling, ν δ x ( e ) = ν δ 1 ( e x ) $\nu _\delta ^x(e)=\nu _\delta ^1(e_x)$ , where e x ( s ) = x 1 e ( x 2 s ) $e_x(s)=x^{-1}e(x^{2}s)$ for 0 s ζ ( e x ) = x 2 ζ ( e ) $0\leqslant s \leqslant \zeta (e_x)=x^{-2}\zeta (e)$ .

Let

T m κ = ( d ) Exp ( 2 m ) δ 2 2 δ $$\begin{equation} T^{\kappa ^{\prime }}_m \, {\overset{(d)}{=}} \, \text{Exp}{\left(\frac{(2^{-m})^{\delta -2}}{2-\delta } \right)}\end{equation}$$ (2.11)
be the smallest t $t$ such that ( e , t ) $(e,t)$ is in the Poisson process for some e $e$ with sup ( e ) > 2 m $\sup (e)&gt; 2^{-m}$ . Then conditionally on T m κ $T_m^{{\kappa ^{\prime }}}$ , the collection of points ( e , t ) $(e,t)$ in the Poisson process with t T m κ $t\leqslant T_m^{{\kappa ^{\prime }}}$ is simply a Poisson process Λ ( T m κ ) $\Lambda {(T_m^{\kappa ^{\prime }})}$ with intensity 0 2 m x δ 3 ν δ x × Leb ( [ 0 , T m κ ] ) $\int _0^{2^{-m}} x^{\delta -3}\nu _\delta ^x \times \mathrm{Leb}([0,T_m^{{\kappa ^{\prime }}}])$ . So, if for any given excursion e E $e\in E$ , we define
f ( e ) = 0 ζ ( e ) 1 e ( s ) d s $$\begin{equation*} f(e)=\int _0^{\zeta (e)}\frac{1}{e(s)} \, ds \end{equation*}$$
(setting f ( e ) = $f(e)=\infty$ if the integral diverges), we have
$$\begin{eqnarray} \mathbb {E}{\left(\operatorname{e}^{2 \operatorname{i}u \int _0^{\widetilde{S}^{\varepsilon ,m}} (\widetilde{\theta }_s^{\varepsilon} )^{-1}\, ds} \, \big| \, T_m^{\kappa ^{\prime }}\right)} = \mathbb {E}{\left(\operatorname{e}^{2\operatorname{i}u \sum _{(e,t)\in \Lambda {(T_m^{\kappa ^{\prime }})}} f(e)} \, \big| \, T_m^{\kappa ^{\prime }}\right)}=\exp {\left(-T_m^{\kappa ^{\prime }}\int _0^{2^{-m}} x^{\delta -3}\nu _\delta ^x(1-\operatorname{e}^{2\operatorname{i}u f(e)})\, dx \right)}, \nonumber\\ \end{eqnarray}$$ (2.12)
where in the final equality we have applied Campbell's formula for the Poisson point process Λ ( T m κ ) $\Lambda {(T_m^{\kappa ^{\prime }})}$ .

The real part of 1 e 2 i u f ( e ) $1-\operatorname{e}^{2 \operatorname{i}u f(e)}$ is bounded above by 2 u 2 f ( e ) 2 $2 u^2 f(e)^2$ . Then using the Brownian scaling property of ν δ x $\nu _\delta ^x$ explained before, we can bound ν δ x ( ( 1 e 2 i u f ( e ) ) ) $\nu _\delta ^x(\Re (1-\operatorname{e}^{2 \operatorname{i}u f(e)}))$ by 2 u 2 x 2 ν δ 1 ( f 2 ) $2u^2 x^2\nu _\delta ^1(f^2)$ . Using the fact that ν δ 1 ( f 2 ) < $\nu _\delta ^1(f^2) < \infty$ , which can be obtained from a direct calculation, it follows that 0 2 m x δ 3 ν δ x ( ( 1 e 2 i u f ( e ) ) ) d x < ( 2 δ ) 1 2 m ( δ 2 ) $\int _0^{2^{-m}} x^{\delta -3}\nu _\delta ^x(\Re (1-\operatorname{e}^{2 \operatorname{i}u f(e)})) \, dx< (2-\delta )^{-1} 2^{-m(\delta -2)}$ for all m M 0 = M 0 ( u ) $m\geqslant M_0 = M_0(u)$ , where M 0 < $M_0<\infty$ does not depend on δ ( 1 , 3 / 2 ) $\delta \in (1,3/2)$ . This allows us to take expectations over T m κ $T_m^{{\kappa ^{\prime }}}$ in (2.12) (recall the distribution of T m κ $T_m^{\kappa ^{\prime }}$ from (2.11)) to obtain that

$$\begin{align} {\left|\mathbb {E}(\operatorname{e}^{2 \operatorname{i}u \int _0^{\widetilde{S}^{\varepsilon ,m}} (\widetilde{\theta }_s^{\varepsilon} )^{-1}\, ds})\right|} & = {\left|1+2^{m(\delta -2)}(2-\delta ) \int _0^{2^{-m}} x^{\delta -3} \nu _\delta ^x{\left(1-\cos (2uf(e))-\operatorname{i}\sin (2uf(e))\right)} \, dx \right|}^{-1} \nonumber \\ & \leqslant {\left|2^{m(\delta -2)}(2-\delta ) \int _0^{2^{-m}} x^{\delta -3} \nu _\delta ^x(\sin (2u f(e))) \, dx \right|}^{-1} \nonumber \\ & \leqslant {\left|(2-\delta ) \int _0^{1} y^{\delta -3} \nu _\delta ^{2^{-m}y}(\sin (2u f(e))) \, dy \right|}^{-1} \end{align}$$ (2.13)
for all m M 0 $m\geqslant M_0$ and δ ( 1 , 3 / 2 ) $\delta \in (1,3/2)$ .
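The first equality in (2.13) is simply the Laplace transform of the exponential law (2.11). Here is a schematic check (the shorthand μ $\mu$ and λ $\lambda$ below is ours, and we use the sign convention of Campbell's formula, E [ e i Σ f ] = exp ( ( 1 e i f ) d μ ) $\mathbb {E}[\operatorname{e}^{\operatorname{i}\Sigma f}]=\exp (-\int (1-\operatorname{e}^{\operatorname{i}f})\,d\mu )$ ):

```latex
% Rate of (2.11): the \nu_\delta-mass of excursions exceeding 2^{-m},
\mu := \int_{2^{-m}}^{\infty} x^{\delta-3}\,dx = \frac{(2^{-m})^{\delta-2}}{2-\delta},
\qquad
\lambda := \int_0^{2^{-m}} x^{\delta-3}\,
  \nu_\delta^x\bigl(1-\operatorname{e}^{2\operatorname{i}u f(e)}\bigr)\,dx.
% Averaging over T_m^{\kappa'} \sim \mathrm{Exp}(\mu), using \Re\lambda \geqslant 0:
\mathbb{E}\bigl[\operatorname{e}^{-\lambda T_m^{\kappa'}}\bigr]
  = \int_0^{\infty} \mu\, \operatorname{e}^{-\mu t}\,\operatorname{e}^{-\lambda t}\,dt
  = \frac{\mu}{\mu+\lambda}
  = \Bigl(1 + 2^{m(\delta-2)}(2-\delta)\,\lambda\Bigr)^{-1}.
% Taking the modulus then gives a bound of the form appearing in (2.13).
```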

We now fix u 0 $u\ne 0$ and m M 0 $m\geqslant M_0$ for the rest of the proof. Our aim is to show that the final expression in (2.13) converges to 0 as δ 1 $\delta \downarrow 1$ (equivalently ε 0 $\varepsilon \downarrow 0$ ). To do this, we use the Brownian scaling property of ν δ x $\nu _\delta ^x$ again to write ν δ 2 m y ( sin ( 2 u f ( e ) ) ) = ν δ 1 ( sin ( 2 m + 1 u y f ( e ) ) ) $\nu _{\delta }^{2^{-m}y}(\sin (2uf(e)))=\nu _\delta ^1 (\sin (2^{-m+1}uyf(e)))$ for each y $y$ . We also observe that

y 1 ν δ 1 ( sin ( 2 m + 1 u y f ( e ) ) ) ν δ 1 ( 2 m + 1 u f ( e ) ) $$\begin{equation*} y^{-1}\nu _\delta ^1(\sin (2^{-m+1}uyf(e)))\rightarrow \nu _\delta ^1(2^{-m+1}uf(e)) \end{equation*}$$
as y 0 $y\downarrow 0$ , which follows by dominated convergence since sin ( z ) / z 1 $\sin (z)/z\rightarrow 1$ as z 0 $z\downarrow 0$ . Moreover (by Lemma 2.8, say) the convergence is uniform in δ $\delta$ . This means that for some Y u , m ( 0 , 1 ) $Y_{u,m}\in (0,1)$ and k u , m > 0 $ k_{u,m}>0$ depending only on u $u$ and m $m$ , we have that
$$\begin{equation*} \nu _\delta ^1(\sin (2^{-m+1}uyf(e)))\geqslant k_{u,m}\, y \; \text{ for all } y\leqslant Y_{u,m}. \end{equation*}$$
It follows that
$$\begin{align*} {\left|(2-\delta ) \int _0^{1} y^{\delta -3} \nu _\delta ^{2^{-m}y}(\sin (2u f(e))) \, dy \right|} & \geqslant (2-\delta )k_{u,m}\int _0^{Y_{u,m}} y^{\delta -2} \, dy -(2-\delta )\int _{Y_{u,m}}^1 y^{\delta -3} \, dy \\ & \geqslant \frac{(2-\delta )\,k_{u,m}Y_{u,m}^{\delta -1}}{\delta -1}-{\left(Y_{u,m}^{\delta -2}-1\right)} \end{align*}$$
for all δ ( 1 , 3 / 2 ) $\delta \in (1,3/2)$ . Since this expression converges to $\infty$ as δ 1 $\delta \downarrow 1$ , and the final term in (2.13) is bounded above by its reciprocal, the proof is complete. $\Box$

With this in hand, the proof of Lemma 2.14 follows in a straightforward manner.

Proof of Lemma 2.14.In order to do a Bessel process comparison and use Lemma 2.15, we need to replace the fixed n $n$ in (2.10) by some m $m$ which is very large (so we are only dealing with time intervals where θ 0 ε $\theta ^{\varepsilon} _0$ is tiny). However, this is not a problem, since for m n $m\geqslant n$ we can write

0 S 1 ε , n cot ( ( θ 0 ε ) s / 2 ) d s = 0 S 1 ε , m cot ( ( θ 0 ε ) s / 2 ) d s + S 1 ε , m S 1 ε , n cot ( ( θ 0 ε ) s / 2 ) d s , $$\begin{equation*} \int _0^{S_1^{\varepsilon ,n}} \cot ((\theta ^{\varepsilon} _0)_s/2) \, ds = \int _0^{S_1^{\varepsilon ,m}} \cot ((\theta ^{\varepsilon} _0)_s/2) \, ds + \int _{S_1^{\varepsilon ,m}}^{S_1^{\varepsilon ,n}} \cot ((\theta ^{\varepsilon} _0)_s/2) \, ds, \end{equation*}$$
where the two integrals are independent. This means that | E [ exp ( i u 0 S 1 ε , n cot ( ( θ 0 ε ) s / 2 ) d s ) ] | $|\mathbb {E}[\,\exp (i u \int _0^{S_1^{\varepsilon ,n}}\cot ((\theta ^{\varepsilon} _0)_s/2) \, ds)\,]|$ is actually increasing in n $n$ for any fixed ε $\varepsilon$ , so proving (2.10) for m > n $m&gt;n$ also proves it for n $n$ .
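The monotonicity claim above is just the factorization of characteristic functions over independent summands; writing A $A$ and B $B$ for the two independent integrals in the previous display:

```latex
\Bigl|\,\mathbb{E}\bigl[\operatorname{e}^{\operatorname{i}u(A+B)}\bigr]\Bigr|
 = \bigl|\mathbb{E}\,\operatorname{e}^{\operatorname{i}uA}\bigr|
   \cdot \bigl|\mathbb{E}\,\operatorname{e}^{\operatorname{i}uB}\bigr|
 \leqslant \bigl|\mathbb{E}\,\operatorname{e}^{\operatorname{i}uA}\bigr|,
% since |\mathbb{E}\, e^{\mathrm{i}uB}| \leqslant 1; here
% A = \int_0^{S_1^{\varepsilon,m}} \cot((\theta_0^\varepsilon)_s/2)\,ds and
% B = \int_{S_1^{\varepsilon,m}}^{S_1^{\varepsilon,n}} \cot((\theta_0^\varepsilon)_s/2)\,ds.
```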

So we can write, for any m n $m\geqslant n$

$$\begin{equation*} \left|\mathbb {E}\left[\exp (\operatorname{i}u \int _0^{S_1^{\varepsilon ,n}}\cot ((\theta ^{\varepsilon} _0)_s/2) \, ds)\right]\right| \leqslant \left|\mathbb {E}\left[\exp (\operatorname{i}u \int _0^{S_1^{\varepsilon ,m}}\cot ((\theta ^{\varepsilon} _0)_s/2) \, ds)\right]\right| \end{equation*}$$
which is, by the triangle inequality, at most
E exp 2 i u 0 S ε , m ( θ s ε ) 1 d s + E exp i u 0 S 1 ε , m cot ( ( θ 0 ε ) s / 2 ) d s E exp 2 i u 0 S ε , m ( θ s ε ) 1 d s . $$\begin{eqnarray*} && {\left| \mathbb {E}\left[\exp {\left(2\operatorname{i}u \int _0^{\widetilde{S}^{\varepsilon ,m}} (\widetilde{\theta }^{\varepsilon} _s)^{-1} \, ds\right)} \right]\right|}\\ &&\quad + {\left|\mathbb {E}\left[\exp {\left(\operatorname{i}u \int _0^{S_1^{\varepsilon ,m}}\cot ((\theta ^{\varepsilon} _0)_s/2) \, ds\right)}\right]-\mathbb {E}\left[\exp {\left(2\operatorname{i}u \int _0^{\widetilde{S}^{\varepsilon ,m}} (\widetilde{\theta }^{\varepsilon } _s)^{-1} \, ds\right)} \right]\right|}. \end{eqnarray*}$$
Now, using that ( 1 / y ( 1 / 2 ) cot ( y / 2 ) ) 0 $(1/y- (1/2)\cot (y/2))\downarrow 0$ as y 0 $y\downarrow 0$ , and an argument almost identical to the first half of the proof of Lemma 2.10, the second term above converges to 0 as m $m\rightarrow \infty$ , uniformly in ε $\varepsilon$ . Since Lemma 2.15 says that the first term converges to 0 as ε 0 $\varepsilon \rightarrow 0$ for any m $m$ large enough, this completes the proof. $\Box$
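The monotone limit used above can be checked from the standard Laurent expansion of the cotangent:

```latex
\frac{1}{2}\cot\Bigl(\frac{y}{2}\Bigr)
  = \frac{1}{y} - \frac{y}{12} - \frac{y^{3}}{720} - \cdots,
\qquad\text{so}\qquad
\frac{1}{y} - \frac{1}{2}\cot\Bigl(\frac{y}{2}\Bigr)
  = \frac{y}{12} + O(y^{3}) \;\downarrow\; 0
\quad \text{as } y \downarrow 0.
```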

Proof of Lemma 2.13.Equation (2.10) implies that the law of X 1 ε , n $X_1^{\varepsilon ,n}$ converges to the uniform distribution on the unit circle as κ 4 ${\kappa ^{\prime }}\downarrow 4$ . The full result then follows by the Markov property of θ 0 ε $\theta ^{\varepsilon} _0$ . $\Box$

2.2.4 Summary

So, we have now tied up all the loose ends from the proof of Proposition 2.5. Recall that this proposition asserted the convergence in law of a single SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ branch in D $\mathbb {D}$ , targeted at 0, to the corresponding uniform CLE 4 $_4$ exploration branch. Let us conclude this subsection by noting that the same result holds when we change the target point.

For z D $z\in \mathbb {D}$ not necessarily equal to 0, we define D z $\mathcal {D}_z$ to be the space of evolving domains whose image after applying the conformal map f ( w ) = ( w z ) / ( 1 z ¯ w ) $f(w)=(w-z)/(1-\bar{z}w)$ from D D $\mathbb {D}\rightarrow \mathbb {D}$ , z 0 $z\mapsto 0$ , lies in D $\mathcal {D}$ .

From the convergence in Proposition 2.6, plus the target invariance of radial SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ and the uniform CLE 4 $_4$ exploration, it is immediate that

Corollary 2.16.For any z Q $z\in \mathcal {Q}$ , ( D z ε , τ z ε ) ( D z , τ z ) $({\mathbf {D}}_z^{\varepsilon} ,\tau _z^{\varepsilon} )\Rightarrow ({\mathbf {D}}_z,\tau _z)$ in D z × R $\mathcal {D}_z\times \mathbb {R}$ as ε 0 $\varepsilon \rightarrow 0$ .

Recall that τ 0 , z ε $\tau _{0,z}^{\varepsilon}$ is the last time that θ z ε $\theta _z^{\varepsilon}$ hits 0 before first hitting 2 π $2\pi$ and [ τ 0 , z , τ z ] $[\tau _{0,z},\tau _z]$ is the time interval during which D z $\mathbf {D}_z$ traces the outermost CLE 4 $_4$ loop surrounding z $z$ . Note that τ z ε τ 0 , z ε $\tau _z^{\varepsilon} -\tau _{0,z}^{\varepsilon}$ is equal to the length of the excursion e Λ ε , n ε , n $ \mathrm{e}_{\Lambda ^{\varepsilon ,n}}^{\varepsilon ,n}$ and similarly τ z τ 0 , z $\tau _z-\tau _{0,z}$ is the length of the excursion e Λ n $\mathrm{e}_{\Lambda ^n}$ (for every n $n$ ), so that by Lemma 2.12 the following extension holds.

Corollary 2.17.For any fixed z Q $z\in \mathcal {Q}$

( D z ε , τ z ε , τ 0 , z ε ) ( D z , τ z , τ 0 , z ) $$\begin{equation*} ({\mathbf {D}}_z^{\varepsilon} , \tau _z^{\varepsilon} ,\tau _{0,z}^{\varepsilon} )\Rightarrow ({\mathbf {D}}_z,\tau _z, \tau _{0,z} ) \end{equation*}$$
as ε 0 $\varepsilon \rightarrow 0$ .

2.3 Convergence of the CLE κ $_{\kappa ^{\prime }}$ loops

Recall that for z Q $z\in \mathcal {Q}$ , L z ε $\mathcal {L}_z^{\varepsilon}$ (respectively, L z $\mathcal {L}_z$ ) denotes the outermost CLE κ $\operatorname{CLE}_{\kappa ^{\prime }}$ loop (respectively, CLE 4 $_4$ loop) containing z $z$ and B z ε $\mathcal {B}_z^{\varepsilon}$ (respectively, B z $\mathcal {B}_z$ ) denotes the connected component of the complement of L z ε $\mathcal {L}_z^{\varepsilon}$ (respectively, L z $\mathcal {L}_z$ ) containing z $z$ . By definition we have
B z ε = ( D z ε ) τ z ε and B z = ( D z ) τ z , $$\begin{equation} \mathcal {B}_z^{\varepsilon} = ({\mathbf {D}}^{\varepsilon} _z)_{\tau ^{\varepsilon} _z} \text{ and } \mathcal {B}_z=({\mathbf {D}}_z)_{\tau _z}, \end{equation}$$ (2.14)
where { ( D z ε ) t ; t 0 } $\lbrace (\mathbf {D}_z^{\varepsilon} )_t\, ; \, t\geqslant 0\rbrace$ and { ( D z ) t ; t 0 } $\lbrace (\mathbf {D}_z)_t\, ; \, t\geqslant 0\rbrace$ are processes in D z $\mathcal {D}_z$ describing radial SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ processes and a uniform CLE 4 $\operatorname{CLE}_4$ exploration, respectively, toward z $z$ . See Section 2.1.6 for more details.

In this subsection we will prove that L z ε L z $\mathcal {L}_z^{\varepsilon} \Rightarrow \mathcal {L}_z$ with respect to the Hausdorff distance. That this is not obvious is illustrated by the following difference: in the limit B z = L z $\partial \mathcal {B}_z = \mathcal {L}_z$ , whereas this is not at all the case for ε > 0 $\varepsilon > 0$ . Nevertheless, we have

Proposition 2.18.For any z Q $z\in \mathcal {Q}$

( D z ε , L z ε , B z ε ) ( D z , L z , B z ) $$\begin{equation*} ({\mathbf {D}}^{\varepsilon} _z, \mathcal {L}^{\varepsilon} _z, \mathcal {B}^{\varepsilon} _z) \Rightarrow ({\mathbf {D}}_z, \mathcal {L}_z, \mathcal {B}_z) \end{equation*}$$
as ε 0 $\varepsilon \downarrow 0$ , with respect to the product topology generated by ( D z $\mathcal {D}_z$ × $\times$ Hausdorff × $\;\times$ Carathéodory viewed from z $z$ ) convergence.

Given (2.14), and that we already know the convergence of D z ε $\mathbf {D}_z^{\varepsilon}$ as ε 0 $\varepsilon \downarrow 0$ , the proof of Proposition 2.18 boils down to the following lemma.

Lemma 2.19.Suppose that ( D 0 , L , B 0 ) $({\mathbf {D}}_0, \mathcal {L}, \mathcal {B}_0)$ is a subsequential limit in law of ( D 0 ε , L 0 ε , B 0 ε ) $({\mathbf {D}}_0^{\varepsilon} , \mathcal {L}_0^{\varepsilon} , \mathcal {B}_0^{\varepsilon} )$ as ε 0 $\varepsilon \downarrow 0$ (with the topology of Proposition 2.18). Then we have L = L 0 $\mathcal {L}=\mathcal {L}_0$ almost surely.

Proof of Proposition 2.18 given Lemma 2.19.By conformal invariance we may assume that z = 0 $z=0$ . Observe that by Corollary 2.16, we already know that ( D 0 ε , B 0 ε ) ( D 0 , B 0 ) $({\mathbf {D}}_0^{\varepsilon} , \mathcal {B}_0^{\varepsilon} )\Rightarrow ({\mathbf {D}}_0, \mathcal {B}_0)$ as ε 0 $\varepsilon \rightarrow 0$ , with respect to the product ( D $\mathcal {D}$ × $\times$ Carathéodory ) topology. Indeed, if one takes a sequence ε n $\varepsilon _n$ converging to 0, and a coupling of ( D 0 ε n , τ 0 ε n ) n N $({\mathbf {D}}_0^{\varepsilon _n},\tau _0^{\varepsilon _n})_{n\in \mathbb {N}}$ and ( D 0 , τ 0 ) $({\mathbf {D}}_0,\tau _0)$ so that ( D 0 ε n , τ 0 ε n ) ( D 0 , τ 0 ) $({\mathbf {D}}_0^{\varepsilon _n},\tau _0^{\varepsilon _n})\rightarrow ({\mathbf {D}}_0,\tau _0)$ almost surely as n $n\rightarrow \infty$ , it is clear due to (2.14) that each B 0 ε n $\mathcal {B}_0^{\varepsilon _n}$ also converges to B 0 $\mathcal {B}_0$ almost surely. Also note that ( L 0 ε ) $(\mathcal {L}_0^{\varepsilon} )$ is tight in ε $\varepsilon$ with respect to the Hausdorff topology, since all the sets in question are almost surely contained in D ¯ $\overline{\mathbb {D}}$ . Thus ( D 0 ε , B 0 ε , L 0 ε ) $({\mathrm{D}}_0^{\varepsilon} , \mathcal {B}_0^{\varepsilon} , \mathcal {L}_0^{\varepsilon} )$ is tight in the desired topology, and the limit is uniquely characterized by the above observation and Lemma 2.19. This yields the proposition. $\Box$

2.3.1 Strategy for the proof of Lemma 2.19

At this point, we know the convergence in law of ( D 0 ε , B 0 ε ) ( D 0 , B 0 ) $({\mathbf {D}}_0^{\varepsilon} , \mathcal {B}_0^{\varepsilon} )\rightarrow ({\mathbf {D}}_0, \mathcal {B}_0)$ as ε 0 $\varepsilon \downarrow 0$ , and we know that B 0 ε $\mathcal {B}_0^{\varepsilon}$ is the connected component of D L 0 ε $\mathbb {D}\setminus \mathcal {L}_0^{\varepsilon}$ containing 0 for every ε $\varepsilon$ . Given a subsequential limit ( D 0 , B 0 , L ) $({\mathbf {D}}_0, \mathcal {B}_0, \mathcal {L})$ in law of ( D 0 ε , B 0 ε , L 0 ε ) $({\mathbf {D}}_0^{\varepsilon} ,\mathcal {B}_0^{\varepsilon} , \mathcal {L}_0^{\varepsilon} )$ , the difficulty in concluding that L = L 0 $\mathcal {L}=\mathcal {L}_0$ lies in the fact that Carathéodory convergence (which is what we have for B 0 ε $\mathcal {B}_0^{\varepsilon}$ ) does not ‘see’ bottlenecks; see Figure 6.

Figure 6. The sequence of domains enclosed by the thick black curves will converge in the Carathéodory sense (viewed from 0), but not in the Hausdorff sense, to the dotted domain. This is the type of behavior that must be ruled out to deduce convergence of CLE loops (in the Hausdorff sense) from convergence of the radial SLE (in the Carathéodory sense).

To proceed with the proof, we first show that any part of the supposed limit L $\mathcal {L}$ that does not coincide with L 0 $\mathcal {L}_0$ must lie outside of B 0 $\mathcal {B}_0$ .

Lemma 2.20. With the setup of Lemma 2.19, we have $\mathcal {L}\subseteq \mathbb {C}\setminus \mathcal {B}_0$ almost surely.

Once we have this ‘one-sided’ result, it suffices to prove that the laws of L $\mathcal {L}$ and L 0 $\mathcal {L}_0$ coincide.

Lemma 2.21. Suppose that $\mathcal {L}$ is as in Lemma 2.19. Then the law of $\mathcal {L}$ is equal to the law of $\mathcal {L}_0$.

The first lemma follows almost immediately from the Carathéodory convergence of B 0 ε B 0 $\mathcal {B}_0^{\varepsilon} \rightarrow \mathcal {B}_0$ (see the next subsection). To prove the second lemma, we use the fact that CLE κ $\operatorname{CLE}_\kappa$ for κ ( 0 , 8 ) $\kappa \in (0,8)$ is inversion invariant: more correctly, a whole-plane version of CLE κ $\operatorname{CLE}_\kappa$ is invariant under the mapping z 1 / z $z\mapsto 1/z$ . Roughly speaking, this means that for whole-plane CLE, we can use inversion invariance to obtain the complementary result to Lemma 2.20, and deduce Hausdorff convergence in law of the analogous loops. We then have to do a little work, using the relation between whole-plane CLE and CLE in the disk (a Markov property), to translate this back to the disk setting and obtain Lemma 2.21.

2.3.2 Preliminaries on Carathéodory convergence

We first record the following standard lemma concerning Carathéodory convergence, which will be useful in what follows.

Lemma 2.22 (Carathéodory kernel theorem). Suppose that $(U_n)_{n\geqslant 1}$ is a sequence of simply connected domains containing 0, and for each $n$, write $V_n$ for the connected component of the interior of $\cap _{k\geqslant n} U_k$ containing 0. Define the kernel of $(U_n)_{n\geqslant 1}$ to be $\cup _n V_n$ if this is non-empty; otherwise declare it to be $\lbrace 0\rbrace$.

Suppose that ( U n ) n 1 $(U_n)_{n\geqslant 1}$ and U $U$ are simply connected domains containing 0. Then U n U $U_n\rightarrow U$ with respect to the Carathéodory topology (viewed from 0) if and only if every subsequence of the U n $U_n$ has kernel U $U$ .

One immediate consequence of this is the following.

Corollary 2.23. Suppose that $(K_n,D_n)\Rightarrow (K,D)$ as $n\rightarrow \infty$ for the product (Hausdorff $\times$ Carathéodory) topology, where for each fixed $n$, the coupling of $K_n$ and $D_n$ is such that $D_n$ is a simply connected domain with $0\in D_n$, and $K_n$ is a compact subset of $\mathbb {C}$ with $K_n\subseteq \mathbb {C}\setminus D_n$ almost surely. Then $K\subseteq \mathbb {C}\setminus D$ almost surely.

Proof. By the Skorokhod representation theorem, we may assume without loss of generality that $(K_{n},D_{n})\rightarrow (K,D)$ almost surely as $n\rightarrow \infty$.

For j N $j\in \mathbb {N}$ write V j $V_{j}$ for the connected component of int ( k j D k ) $\mathrm{int}(\cap _{k\geqslant j}D_{k})$ containing 0. By assumption, K n C D n $K_{n}\subseteq \mathbb {C}\setminus D_{n}$ for every n $n$ almost surely, which means that K n C V j $K_{n}\subseteq \mathbb {C}\setminus V_j$ for all n j $n\geqslant j$ almost surely. Since K n $K_{n}$ converges to K $K$ in the Hausdorff topology, we have K C V j $K\subseteq \mathbb {C}\setminus V_j$ for each j $j$ , which implies that K C j V j $K\subseteq \mathbb {C}\setminus \cup _j V_j$ almost surely. Finally, because D n D $D_{n}\rightarrow D$ in the Carathéodory topology, the Carathéodory kernel theorem gives that j V j = D $\cup _j V_j=D$ almost surely. Hence K C D $K\subseteq \mathbb {C}\setminus D$ almost surely, as desired. $\Box$

In particular:

Proof of Lemma 2.20. This is a direct consequence of Corollary 2.23. $\Box$

Now, if U n C $U_n\subseteq \mathbb {C}$ are such that 1 / U n : = { z : 1 / z U n } $1/U_n:=\lbrace z: 1/z\in U_n\rbrace$ is a simply connected domain containing 0 for each n $n$ , we say that U n U $U_n\rightarrow U$ with respect to the Carathéodory topology seen from $\infty$ , if and only if 1 / U n 1 / U $1/U_n\rightarrow 1/U$ with respect to the Carathéodory topology seen from 0. It is clear from this definition and the above arguments (or similar) that the following properties hold.

Lemma 2.24. Suppose that $U_n\subseteq \mathbb {C}$ are simply connected domains such that $1/U_n$ is a simply connected domain containing 0 for each $n$. Then

  • if $(U_n,K_n)\Rightarrow (U,K)$ jointly with respect to the product ((Carathéodory seen from $\infty$) $\times$ Hausdorff) topology, for some compact sets $K_n$ with $K_n\subseteq \mathbb {C}\setminus U_n$ for each $n$, then $K\subseteq \mathbb {C}\setminus U$ almost surely;
  • if $(U_n, D_n)\Rightarrow (U,D)$ jointly with respect to the product ((Carathéodory seen from $\infty$) $\times$ (Carathéodory seen from 0)) topology, for some simply connected domains ${\mathbb {D}}\supseteq D_n\ni 0$ with $D_n\subseteq \mathbb {C}\setminus {U_n}$ for each $n$, then $D\subseteq \mathbb {C}\setminus {U}$ almost surely.

Proof. The first bullet point follows from Corollary 2.23 by considering $1/U_n, 1/U$ and $1/K_n, 1/K$. For the second, let us assume by Skorokhod representation that $(U_n,D_n)\rightarrow (U,D)$ almost surely in the claimed topology. Then the compact sets $\partial D_n:=\bar{D_n}\setminus D_n\subset \bar{\mathbb {D}}$ are tight for the Hausdorff topology, and hence have some subsequential limit $\partial$. (The argument of) Corollary 2.23 implies that $\partial \subset \mathbb {C}\setminus U$ and $\partial \subset \mathbb {C}\setminus D$ almost surely. Since $U$ is an open simply connected domain containing $\infty$ and $D$ is an open simply connected domain containing 0, this implies that $D\subset \mathbb {C}\setminus U$ almost surely. $\Box$

2.3.3 Whole-plane CLE and conclusion of the proofs

As mentioned previously, we would now like to use some kind of symmetry argument to prove Lemma 2.21. However, the symmetry we wish to exploit is not present for CLE in the unit disk, and so we have to go through an argument using whole-plane CLE instead. Whole-plane CLE was first introduced in [34] and is, roughly speaking, the local limit of CLE in (any) sequence of domains with size tending to $\infty$ . The key symmetry property of whole-plane CLE κ $_{\kappa ^{\prime }}$ that we will use is its invariance under applying the inversion map z 1 / z $z\mapsto 1/z$ [27, 34]. More precisely:

Lemma 2.25. Let $\Gamma ^{\kappa ^{\prime }}$ be a whole-plane $\operatorname{CLE}_{\kappa ^{\prime }}$ with $\kappa ^{\prime }\in [4,8)$.

  • (Inversion invariance) The image of Γ κ $\Gamma ^{\kappa ^{\prime }}$ under z 1 / z $z\mapsto 1/z$ has the same law as Γ κ $\Gamma ^{\kappa ^{\prime }}$ .
  • (Markov property) Consider the collection of loops in $\Gamma ^{\kappa ^{\prime }}$ that lie entirely inside $\mathbb {D}$ and surround 0. Write $I_1^{\varepsilon }$ (with $\varepsilon =\varepsilon (\kappa ^{\prime })$ as usual) for the connected component containing 0 of the complement of the outermost loop in this collection. Write $\mathfrak {l}_2^{\varepsilon}$ for the second outermost loop in this collection. Then the image of $\mathfrak {l}_2^{\varepsilon}$ under the conformal map $I_1^{\varepsilon} \rightarrow \mathbb {D}$ sending 0 to 0 with positive derivative at 0 has the same law as the outermost loop surrounding 0 for a $\operatorname{CLE}_{\kappa ^{\prime }}$ in $\mathbb {D}$.

Proof. The inversion invariance is shown in [34, Theorem 1.1] for $\kappa ^{\prime }=4$ and [27, Theorem 1.1] for $\kappa ^{\prime }\in (4,8)$. The Markov property follows from [27, Lemma 2.9] when $\kappa ^{\prime }>4$ and [34, Theorem 1] when $\kappa ^{\prime }=4$. $\Box$

Let us now state the convergence result that we will prove for whole-plane CLE κ $_{\kappa ^{\prime }}$ as κ 4 ${\kappa ^{\prime }}\downarrow 4$ , and show how it implies Lemma 2.21.

For $\varepsilon >0$, we extend the above definitions and write $\mathfrak {l}_1^{\varepsilon} , \mathfrak {l}_2^{\varepsilon}$ for the largest and second largest whole-plane $\operatorname{CLE}_{\kappa ^{\prime }}$ loops surrounding 0 that are entirely contained in the unit disk. We let $I_i^{\varepsilon}$ be the connected component of $\mathbb {C}\setminus \mathfrak {l}_i^{\varepsilon}$ containing 0 for $i=1,2$ and let $E_i^{\varepsilon}$ be the connected component containing $\infty$. When $\varepsilon =0$ we write $\mathfrak {l}_1,\mathfrak {l}_2$ for the corresponding loops of a whole-plane $\operatorname{CLE}_4$, and $I_1,E_1, I_2,E_2$ for the corresponding domains containing 0 and $\infty$. Note that in this case we have $\overline{I_i}=\mathbb {C}\setminus E_i$ and $\overline{E_i}=\mathbb {C}\setminus I_i$ for $i=1,2$.

Lemma 2.26. $(I_1^{\varepsilon} , E_1^{\varepsilon} , I_2^{\varepsilon} , E_2^{\varepsilon} )\Rightarrow (I_1, E_1, I_2, E_2)$ as $\varepsilon \rightarrow 0$, with respect to the product Carathéodory (seen from $(0,\infty ,0,\infty )$ in the four coordinates) topology.

Proof of Lemma 2.21 given Lemma 2.26. Suppose that $(I_1^{\varepsilon} , \mathfrak {l}_1^{\varepsilon} )$ converges in law to $(I_1,\mathfrak {l})$ along some subsequence, with respect to the product (Carathéodory seen from 0 $\times$ Hausdorff) topology. By the above lemma, we can extend this convergence to the joint convergence $(I_1^{\varepsilon} , \mathfrak {l}_1^{\varepsilon} , E_2^{\varepsilon} , I_2^{\varepsilon} )\rightarrow (I_1,\mathfrak {l},E_2, I_2)$. But then Corollary 2.23 and Lemma 2.24 imply that $\mathfrak {l}\subseteq \mathbb {C}\setminus I_2=\overline{E_2}$ and $\mathfrak {l}\subseteq \mathbb {C}\setminus E_2= \overline{I_2}$ almost surely. This implies that $\mathfrak {l}\subseteq \mathfrak {l}_2=\partial (E_2)=\partial (I_2)$ almost surely. Moreover, it is not hard to see (using the definition of Hausdorff convergence) that $\mathfrak {l}_2\setminus \mathfrak {l}=\emptyset$, since otherwise $\mathfrak {l}_2^{\varepsilon}$ would not disconnect 0 from $\infty$ for small $\varepsilon$. So $\mathfrak {l}=\mathfrak {l}_2$ almost surely.

Now consider, for each $\varepsilon$, the unique conformal map $g_1^{\varepsilon} :I_1^{\varepsilon} \rightarrow \mathbb {D}$ that sends 0 to 0 and has $(g_1^{\varepsilon} )^{\prime }(0)>0$. Then the above considerations imply that if $g_1^{\varepsilon} (\mathfrak {l}_2^{\varepsilon} )$ converges in law along some subsequence, with respect to the Hausdorff topology, then the limit must have the law of $g_1(\mathfrak {l}_2)$, where $g_1:I_1\rightarrow \mathbb {D}$ is defined in the same way as $g_1^{\varepsilon}$ but with $I^{\varepsilon} _1$ replaced by $I_1$. Since $g_1^{\varepsilon} (\mathfrak {l}_2^{\varepsilon} )$ has the same law as $\mathcal {L}_0^{\varepsilon}$ for every $\varepsilon$ and $g_1(\mathfrak {l}_2)$ has the law of $\mathcal {L}_0$, this proves Lemma 2.21. $\Box$

Proof of Lemma 2.19 and Proposition 2.18. Combining Lemmas 2.20 and 2.21 yields Lemma 2.19. As explained previously, this implies Proposition 2.18. $\Box$

So, we are left only to prove Lemma 2.26, concerning whole-plane CLE $\operatorname{CLE}$ . We will build up to this with a sequence of lemmas: first proving convergence of nested CLE $\operatorname{CLE}$ loops in very large domains, then transferring this to whole-plane CLE and finally appealing to inversion invariance to obtain the result.

Lemma 2.27. Fix $R>1$. For $\kappa ^{\prime }\in (4,8)$ and a $\operatorname{CLE}_{\kappa ^{\prime }}$ in $R\mathbb {D}$, denote by $(l_i^{\varepsilon} )_{i\geqslant 1}$ the sequence of nested loops surrounding 0, starting with the second smallest loop to fully enclose the unit disk (set equal to the boundary of $R\mathbb {D}$ if only one or no loops in $R\mathbb {D}$ actually surround $\mathbb {D}$) and such that $l_i^{\varepsilon}$ surrounds $l_{i+1}^{\varepsilon}$ for all $i$. Write $(b_i^{\varepsilon} )_{i\geqslant 1}$ for the connected components containing 0 of the complements of the $(l_i^{\varepsilon} )_{i\geqslant 1}$. Then $(b_i^{\varepsilon} )_{i\geqslant 1}$ converges in law to its CLE$_4$ counterpart as $\varepsilon \rightarrow 0$, with respect to the product Carathéodory topology viewed from 0.

Proof. By Corollary 2.16 and scale invariance of CLE, together with the iterative nature of the construction of nested loops, we already know that the sequence of nested loops in $R\mathbb {D}$ containing 0, starting from the outermost one, converges as $\varepsilon \rightarrow 0$, with respect to the product Carathéodory topology viewed from 0. Taking a coupling where this convergence holds almost surely, it suffices to prove that the index of the smallest loop containing the unit disk also converges almost surely. This is a straightforward consequence of the kernel theorem — Lemma 2.22 — plus the fact that the smallest $\operatorname{CLE}_4$ loop in $R\mathbb {D}$ that contains $\mathbb {D}$ actually contains $(1+r)\mathbb {D}$ for some strictly positive $r$ almost surely. $\Box$

Lemma 2.28. The statement of the above lemma holds true if we replace the CLEs in $R\mathbb {D}$ with their whole-plane versions.

Proof. For fixed $\kappa ^{\prime }\in [4,8)$, let $\Gamma ^\mathbb {C}$, $\Gamma ^{R\mathbb {D}}$ denote whole-plane $\operatorname{CLE}_{\kappa ^{\prime }}$ and $\operatorname{CLE}_{\kappa ^{\prime }}$ on $R\mathbb {D}$, respectively. The key to this lemma is [46, Theorem 9.1], which states (in particular) that $\Gamma ^{R\mathbb {D}}$ rapidly converges to $\Gamma ^\mathbb {C}$ in the following sense. For some $C,\alpha > 0$, $\Gamma ^{R\mathbb {D}}$ and $\Gamma ^\mathbb {C}$ can be coupled so that for any $r>0$ and $R>r$, with probability at least $1-C(R/r)^{-\alpha }$, there is a conformal map $\varphi$ from some $D\supset (R/r)^{1/4}\mathbb {D}$ to $D^{\prime }\supset (R/r)^{1/4}\mathbb {D}$, which maps the nested loops of $\Gamma ^{R\mathbb {D}}$ — starting with the smallest containing $r\mathbb {D}$ — to the corresponding nested loops of $\Gamma ^\mathbb {C}$, and has low distortion in the sense that $|\varphi ^{\prime }(z)-1|\leqslant C(R/r)^{-\alpha }$ on $(R/r)^{1/4}\mathbb {D}$.

In fact, it is straightforward to see that $C$ and $\alpha$ (which in principle depend on $\kappa ^{\prime }$) may be chosen uniformly for $\kappa ^{\prime }\in [4,6]$ (say). Indeed, it follows from the proof in [46] that they depend only on the law of the log conformal radius of the outermost loop containing 0 for a $\operatorname{CLE}_{\kappa ^{\prime }}$ in $\mathbb {D}$, and this varies continuously in $\kappa ^{\prime }$ [52]. Hence, the result follows by letting $R\rightarrow \infty$ in Lemma 2.27 and noting that the second smallest loop containing $\mathbb {D}$ is contained in $r\mathbb {D}$ with arbitrarily high probability as $r\rightarrow \infty$, uniformly in $\kappa ^{\prime }$. $\Box$

Proof of Lemma 2.26. Lemmas 2.28 and 2.25 (inversion invariance) imply that $(I_1^{\varepsilon} ,I_2^{\varepsilon} )\Rightarrow (I_1,I_2)$ and $(E_1^{\varepsilon} , E_2^{\varepsilon} )\Rightarrow (E_1, E_2)$ as $\varepsilon \rightarrow 0$. This ensures that $(I_1^{\varepsilon} , E_1^{\varepsilon} , I_2^{\varepsilon} , E_2^{\varepsilon} )$ is tight in $\varepsilon$, so we need only prove that if $(I_1, \hat{E}_1, I_2, \hat{E}_2)$ is a subsequential limit of $(I_1^{\varepsilon} , E_1^{\varepsilon} , I_2^{\varepsilon} , E_2^{\varepsilon} )$, then $\hat{E}_1=E_1=\mathrm{int}(\mathbb {C}\setminus I_1)$ and $\hat{E}_2=E_2=\mathrm{int}(\mathbb {C}\setminus I_2)$ almost surely. Note that $(\hat{E}_1,\hat{E}_2)$ has the same law as $(E_1, E_2)$, and since $I_1^{\varepsilon} \subseteq \mathbb {C}\setminus E_1^{\varepsilon}$ for all $\varepsilon$, Lemma 2.24 implies that $I_1\subseteq \mathbb {C}\setminus \hat{E}_1$. In other words, $\hat{E}_1\subseteq E_1$ almost surely. Then because $\hat{E}_1$ and $E_1$ have the same law, we may deduce that they are equal almost surely. Similarly, we see that $\hat{E}_2=E_2$ almost surely. $\Box$

2.3.4 Conclusion

Recall that for z D $z\in \mathbb {D}$ , ( B z , i ε , L z , i ε ) i 1 $(\mathcal {B}_{z,i}^{\varepsilon} ,\mathcal {L}_{z,i}^{\varepsilon} )_{i\geqslant 1}$ (respectively, ( B z , i , L z , i ) i 1 $(\mathcal {B}_{z,i},\mathcal {L}_{z,i})_{i\geqslant 1}$ ) denotes the sequence of nested CLE κ $\operatorname{CLE}_{\kappa ^{\prime }}$ (respectively, CLE 4 $\operatorname{CLE}_4$ ) bubbles and loops containing z $z$ . By the Markov property and iterative nature of the construction, it is immediate from Proposition 2.18 that

Corollary 2.29. For fixed $z\in \mathcal {Q}$,

( D z ε , ( L z , i ε ) i 1 , ( B z , i ε ) i 1 ) ( D z , ( L z , i ) i 1 , ( B z , i ) i 1 ) $$\begin{equation*} ({\mathbf {D}}^{\varepsilon} _z, (\mathcal {L}^{\varepsilon} _{z,i})_{i\geqslant 1}, (\mathcal {B}^{\varepsilon} _{z,i})_{i\geqslant 1}) \Rightarrow ({\mathbf {D}}_z, (\mathcal {L}_{z,i})_{i\geqslant 1}, (\mathcal {B}_{z,i})_{i\geqslant 1}) \end{equation*}$$
as ε 0 $\varepsilon \downarrow 0$ , with respect to the product topology generated by ( D z $\mathcal {D}_z$ × $\times$ $\prod$ Hausdorff × $\times$ $\prod$ Carathéodory viewed from z $z$ ) convergence.

3 THE UNIFORM SPACE-FILLING SLE 4 $_4$

In this section we show that the ordering on points (with rational coordinates) in the disk, induced by space-filling $\operatorname{SLE}_{\kappa ^{\prime }}$ with $\kappa ^{\prime }>4$, converges to a limiting ordering as $\kappa ^{\prime }\downarrow 4$. We call this the uniform space-filling SLE$_4$. Moreover, we can describe explicitly the law of this ordering, which for any two fixed points comes down to the toss of a fair coin. Just as for $\kappa ^{\prime }>4$, there would be other ways to define a space-filling SLE$_4$ process, by considering different explorations of CLE$_4$.

Let us now recall some notation in order to properly state the result. For ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ and z , w Q $z,w\in \mathcal {Q}$ , we define O z , w ε $\mathcal {O}_{z,w}^{\varepsilon}$ to be the indicator function of the event that the space-filling SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ η ε $\eta ^{\varepsilon}$ hits z $z$ before w $w$ (see Section 2.1.7). By convention we set this equal to 1 when z = w $z=w$ .

To describe the limit as $\kappa ^{\prime }\downarrow 4$, we define $\mathcal {O}=(\mathcal {O}_{z,w})_{z,w\in \mathcal {Q}}$ to be a collection of random variables, coupled with $(\mathbf {D}_z)_{z\in \mathcal {Q}}$ such that conditionally given $(\mathbf {D}_z)_{z\in \mathcal {Q}}$:
  • O z , z = 1 $\mathcal {O}_{z,z}=1$ for all z Q $z\in \mathcal {Q}$ almost surely;
  • O z , w $\mathcal {O}_{z,w}$ is a Bernoulli( 1 2 $\frac{1}{2}$ ) random variable for all z , w Q $z,w\in \mathcal {Q}$ with z w $z\ne w$ ;
  • O z , w = 1 O w , z $\mathcal {O}_{z,w}=1-\mathcal {O}_{w,z}$ for all z , w Q $z,w\in \mathcal {Q}$ with z w $z\ne w$ almost surely;
  • for all z , w 1 , w 2 Q $z,w_1,w_2\in \mathcal {Q}$ with z w 1 , w 2 $z\ne w_1, w_2$ , if D z $\mathbf {D}_z$ separates z $z$ from w 2 $w_2$ at the same time as it separates z $z$ from w 1 $w_1$ then O z , w 1 = O z , w 2 $\mathcal {O}_{z,w_1}=\mathcal {O}_{z,w_2}$ , otherwise O z , w 1 $\mathcal {O}_{z,w_1}$ and O z , w 2 $\mathcal {O}_{z,w_2}$ are independent.

Lemma 3.1. There is a unique joint law on $((\mathbf {D}_z)_{z\in \mathcal {Q}},\mathcal {O})$ satisfying the above requirements, and such that the marginal law of $(\mathbf {D}_z)_{z\in \mathcal {Q}}$ is that of a branching uniform $\operatorname{CLE}_4$ exploration. With this law, $\mathcal {O}$ almost surely defines an order on any finite subset of $\mathcal {Q}$ by declaring that $z\preceq w$ if and only if $\mathcal {O}_{z,w}=1$.

We will prove the lemma in just a moment. The main result of this section is the following.

Proposition 3.2. ( ( D z ε ) z Q , ( O z , w ε ) z , w Q ) $(({\mathbf {D}}^{\varepsilon }_z)_{z\in \mathcal {Q}},(\mathcal {O}^{\varepsilon} _{z,w})_{z,w\in \mathcal {Q}})$ converges to ( ( D z ) z Q , ( O z , w ) z , w Q ) $(({\mathbf {D}}_z)_{z\in \mathcal {Q}},(\mathcal {O}_{z,w})_{z,w\in \mathcal {Q}})$ , in law as ε 0 $\varepsilon \downarrow 0$ , with respect to the product topology ( Q D z × Q × Q discrete ) $(\prod _{\mathcal {Q}} \mathcal {D}_z \,\times \, \prod _{\mathcal {Q}\times \mathcal {Q}} \text{discrete})$ , where ( O z , w ) z , w Q $(\mathcal {O}_{z,w})_{z,w\in \mathcal {Q}}$ is as defined in Lemma 3.1.

Proof of Lemma 3.1. The main observation is that if a joint law $((\mathbf {D}_z)_{z\in \mathcal {Q}},\mathcal {O})$ as in the lemma exists, then for all $z,w,y\in \mathcal {Q}$ we almost surely have

{ O z , w = 1 } { O w , y = 1 } { O z , y = 1 } . $$\begin{equation} \lbrace \mathcal {O}_{z,w}=1\rbrace \cap \lbrace \mathcal {O}_{w,y}=1\rbrace \Rightarrow \lbrace \mathcal {O}_{z,y}=1\rbrace .\end{equation}$$ (3.1)
To verify this, we assume that z , w , y $z,w,y$ are distinct (else the statement is trivial) with O z , w = 1 $\mathcal {O}_{z,w}=1$ and O w , y = 1 $\mathcal {O}_{w,y}=1$ . Since O w , z = 1 O z , w = 0 $\mathcal {O}_{w,z}=1-\mathcal {O}_{z,w}=0$ this implies that y $y$ and z $z$ are not separated from w $w$ by D w $\mathbf {D}_w$ at the same time. If D w $\mathbf {D}_w$ separates z $z$ from w $w$ strictly before separating y $y$ from w $w$ , then D z $\mathbf {D}_z$ separates y $y$ and w $w$ from z $z$ at the same time, so O z , y = O z , w = 1 $\mathcal {O}_{z,y}=\mathcal {O}_{z,w}=1$ . If D w $\mathbf {D}_w$ separates y $y$ from w $w$ strictly before separating z $z$ from w $w$ , then D y $\mathbf {D}_y$ separates z $z$ and w $w$ from y $y$ at the same time, so O z , y = 1 O y , z = 1 O y , w = O w , y = 1 $\mathcal {O}_{z,y}=1-\mathcal {O}_{y,z}=1-\mathcal {O}_{y,w}=\mathcal {O}_{w,y}=1$ . In either case it must be that O z , y = 1 $\mathcal {O}_{z,y}=1$ .

We now show why this implies that for any $\lbrace z_1, \ldots , z_k\rbrace$ with $z_i\in \mathcal {Q}$ distinct, there exists a unique conditional law on $(\mathcal {O}_{z_i,z_j})_{1\leqslant i,j\leqslant k}$ given $(\mathbf {D}_z)_{z\in \mathcal {Q}}$, satisfying the requirements of the lemma. We argue by induction on the number of points. Indeed, suppose it is true for all $1\leqslant k \leqslant n-1$ for some $n$ and take $\lbrace z_1,\ldots , z_n\rbrace$ in $\mathcal {Q}$ distinct. We construct the conditional law of $(\mathcal {O}_{z_i,z_j})_{1\leqslant i,j\leqslant n}$ given $(\mathbf {D}_z)_{z\in \mathcal {Q}}$ as follows.

  • To define ( O z 1 , z i ) 1 i n $(\mathcal {O}_{z_1,z_i})_{1\leqslant i \leqslant n}$ :
    • partition the indices { 2 , , n } $\lbrace 2,\ldots , n\rbrace$ into equivalence classes { C 1 , , C K } $\lbrace C_1,\dots , C_K\rbrace$ such that i j $i\sim j$ if and only if D z 1 $\mathbf {D}_{z_1}$ separates z 1 $z_1$ from z i $z_i$ and z j $z_j$ at the same time;
    • for each equivalence class sample an independent Bernoulli ( 1 / 2 ) $(1/2)$ random variable; and
    • set O z 1 , z i $\mathcal {O}_{z_1,z_i}$ to be the random variable associated with class [ i ] $[i]$ for every i $i$ .
  • Given $(\mathcal {O}_{z_1,z_i})_{1\leqslant i \leqslant n}$ and $(\mathbf {D}_z)_{z\in \mathcal {Q}}$, define $\mathcal {O}_{z_i,z_j}$ with $[i]\ne [j]$ by setting it equal to $\mathcal {O}_{z_1,z_j}$ if $z_i$ and $z_1$ are separated from $z_j$ at the same time, or to $1-\mathcal {O}_{z_1,z_i}$ if $z_j$ and $z_1$ are separated from $z_i$ at the same time.
  • For each 1 l K $1\leqslant l\leqslant K$ consider the connected component U l D $U_l\subset \mathbb {D}$ in the branching CLE 4 $\operatorname{CLE}_4$ exploration that contains points z i $z_i$ with [ i ] = C l $[i]=C_l$ when they are separated from z 1 $z_1$ . The CLE 4 $\operatorname{CLE}_4$ explorations inside these components are mutually independent, independent of the CLE 4 $\operatorname{CLE}_4$ exploration before this separation time, and each has the same law as ( D z ) z Q $(\mathbf {D}_z)_{z\in \mathcal {Q}}$ after mapping to the unit disk. Thus, since each equivalence class contains strictly less than n $n$ points, using the induction hypothesis, we can define ( O z i , z j ) i j , [ i ] = [ j ] = C l $(\mathcal {O}_{z_i,z_j})_{i\ne j, [i]=[j]=C_l}$ for 1 l K $1\leqslant l\leqslant K$ such that
    • the collections for different l $l$ are mutually independent; and
    • $(\mathcal {O}_{z_i,z_j})_{i\ne j, [i]=[j]=C_l}$ for each $l$ is independent of the CLE$_4$ exploration outside of $U_l$, and after conformally mapping everything to the unit disk, is coupled with the exploration inside $U_l$ as in the statement of Lemma 3.1.

Using the induction hypothesis, it is straightforward to see that this defines a conditional law on ( O z i , z j ) 1 i j n $(\mathcal {O}_{z_i,z_j})_{1\leqslant i\ne j\leqslant n}$ given ( D z ) z Q $(\mathbf {D}_z)_{z\in \mathcal {Q}}$ that satisfies the conditions of the Lemma. Moreover, note that the first two bullet points above, together with (3.1), define the law of ( O z 1 , z j ) 1 j n $(\mathcal {O}_{z_1,z_j})_{1\leqslant j\leqslant n}$ and ( O z i , z j ) [ i ] [ j ] $(\mathcal {O}_{z_i,z_j})_{[i]\ne [j]}$ (satisfying the requirements) uniquely. Combining with the uniqueness in the induction hypothesis, it follows easily that the conditional law of ( O z i , z j ) 1 i j n $(\mathcal {O}_{z_i,z_j})_{1\leqslant i\ne j\leqslant n}$ given ( D z ) z Q $(\mathbf {D}_z)_{z\in \mathcal {Q}}$ (satisfying the requirements) is unique.

Consequently, given ( D z ) z Q $(\mathbf {D}_z)_{z\in \mathcal {Q}}$ , there exists a unique conditional law on the product space { 0 , 1 } Q × Q $\lbrace 0,1\rbrace ^{\mathcal {Q}\times \mathcal {Q}}$ equipped with the product σ $\sigma$ -algebra, such that if O = ( O z , w ) z , w Q $\mathcal {O}=(\mathcal {O}_{z,w})_{z,w\in \mathcal {Q}}$ has this law then it satisfies the conditions above Lemma 3.1.

This concludes the existence and uniqueness statement of the lemma. The property (3.1) implies that O $\mathcal {O}$ does almost surely define an order on any finite subset of Q $\mathcal {Q}$ . $\Box$
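The recursive construction in the proof above can be mimicked by a short simulation. The sketch below is purely illustrative and uses a toy encoding that is not from the paper: a "separation tree" is a nested list $[G_1,\ldots ,G_K]$ whose groups split off one at a time (mirroring the equivalence classes $C_l$), with each separation event carrying an independent fair coin that decides whether the split-off group is visited entirely before or entirely after everything it was separated from.

```python
import random

def sample_order(tree, rng=random):
    """Sample a linear order on the leaves of a toy 'separation tree'.

    A leaf is any non-list label.  A list [G_1, ..., G_K] means: G_1 is
    separated from G_2 ∪ ... ∪ G_K first, then G_2 from the rest, and
    so on.  Each separation event carries an independent Bernoulli(1/2)
    coin deciding whether the split-off group comes before or after the
    remainder, as in the bullet points defining the law of O.
    """
    if not isinstance(tree, list):
        return [tree]
    if len(tree) == 1:
        return sample_order(tree[0], rng)
    head = sample_order(tree[0], rng)    # group that splits off first
    rest = sample_order(tree[1:], rng)   # remainder, ordered recursively
    return head + rest if rng.random() < 0.5 else rest + head

rng = random.Random(1)
# 'a' is separated from {b, c} at the same time, so 'a' is always visited
# either before both or after both, while b versus c is a fresh coin toss.
orders = [sample_order(['a', ['b', 'c']], rng) for _ in range(4000)]
assert all(o[0] == 'a' or o[-1] == 'a' for o in orders)
```

In this toy model each fixed pair is ordered by a fair coin, pairs separated at the same event are ordered consistently, and coins attached to distinct separation events are independent, matching the defining properties of $\mathcal {O}$ above.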

In the coming subsections we will prove Proposition 3.2. Since tightness of all the random variables in question is immediate (either by definition or from our previous work) it suffices to characterize any limiting law. We begin in Section 3.1 by showing this for the order of two points; see just below for an outline of the strategy. Then, we will prove that the time at which they are separated by the SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ converges (for the log CR $-\log \operatorname{CR}$ parameterization with respect to either of the points). This is important for characterizing joint limits, when there are three or more points being considered. It also turns out to be non-trivial, due to pathological behavior that cannot be ruled out when one only knows convergence of the SLE branches in the spaces D z $\mathcal {D}_z$ . We conclude the proof in a third subsection, and finally combine this with the results of Section 2 to summarize the ‘Euclidean’ part of this paper in Proposition 3.12.

3.1 Convergence of order for two points

In this section we show that for two distinct points $z,w\in \mathbb {D}$, the law of the order in which they are visited by the space-filling SLE$_{\kappa ^{\prime }}$ $\eta ^{\varepsilon}$ converges to the result of a fair coin toss as $\kappa ^{\prime }\downarrow 4$. That is, $\mathcal {O}_{z,w}^{\varepsilon}$ converges to a Bernoulli$(1/2)$ random variable as $\varepsilon \downarrow 0$. The rough outline of the proof is as follows.

Recall that $\eta ^{\varepsilon}$ is determined by an $\operatorname{SLE}_{\kappa ^{\prime }}(\kappa ^{\prime }-6)$ branching tree, in which $\eta ^{\varepsilon} _z$ denotes the SLE$_{\kappa ^{\prime }}(\kappa ^{\prime }-6)$ branch toward $z$ (parameterized according to minus log conformal radius as seen from $z$). If we consider the time $\sigma ^{\varepsilon} _{z,w}$ at which $\eta ^{\varepsilon} _z$ separates $z$ and $w$, then for every $\varepsilon >0$, $\mathcal {O}_{z,w}^{\varepsilon}$ is actually measurable with respect to $\eta ^{\varepsilon} _z([0,\sigma ^{\varepsilon} _{z,w}])$. So what we are trying to show is that this measurability turns to independence in the $\varepsilon \downarrow 0$ limit. This means that we will not get very far if we consider the conditional law of $\mathcal {O}_{z,w}^{\varepsilon}$ given $\eta ^{\varepsilon} _z([0,\sigma ^{\varepsilon} _{z,w}])$, so instead we have to look at times just before $\sigma _{z,w}^{\varepsilon}$. Namely, we will consider the times $\sigma _{z,w,\delta }^{\varepsilon}$ at which $w$ is first sent to within distance $\delta$ of the boundary by the Loewner maps associated with $\eta ^{\varepsilon} _z$. We will show that for any fixed $\delta \in (0,1)$, the conditional probability that $\mathcal {O}_{z,w}^{\varepsilon} =1$, given $\eta _z^{\varepsilon} ([0,\sigma _{z,w,\delta }^{\varepsilon} ])$, converges to $1/2$ as $\varepsilon \rightarrow 0$. Knowing this for every $\delta$ allows us to reach the desired conclusion.

To show that these conditional probabilities do tend to $1/2$ for fixed $\delta$, we apply the Markov property at time $\sigma _{z,w,\delta }^{\varepsilon}$. This tells us that after mapping $(\mathbf {D}^{\varepsilon} _z)_{\sigma _{z,w,\delta }^{\varepsilon} }$ to the unit disk, the remainder of $\eta ^{\varepsilon} _z$ evolves as a radial $\operatorname{SLE}_{\kappa ^{\prime }}(\kappa ^{\prime }-6)$ with a force point somewhere on the unit circle. And we know the law of this curve: initially it evolves as a chordal $\operatorname{SLE}_{\kappa ^{\prime }}$ targeted at the force point, and after the force point is swallowed, it evolves as a radial $\operatorname{SLE}_{\kappa ^{\prime }}(\kappa ^{\prime }-6)$ in the to-be-discovered domain with force point starting adjacent to the tip. So we need to show that for such a process, the behavior is ‘symmetric’ in an appropriate sense. In fact, we have to deal with two scenarios, according to whether the images of $z$ and $w$ are separated or not when the force point is swallowed. If they are separated, our argument becomes a symmetry argument for chordal $\operatorname{SLE}_{\kappa ^{\prime }}$. If they are not, it becomes a symmetry argument for space-filling $\operatorname{SLE}_{\kappa ^{\prime }}$. For a more detailed outline of the strategy, and the bulk of the proof, see Lemma 3.8.

At this point, let us just record the required symmetry property of space-filling SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ in the following lemma.

Lemma 3.3.Let η ε $\eta ^{\varepsilon}$ be a space-filling SLE κ ( ε ) $\operatorname{SLE}_{{\kappa ^{\prime }}(\varepsilon )}$ in D $\mathbb {D}$ , as above. Then for any x D $x\in \mathbb {D}$ :

P ( η ε hits 0 before x ) 1 2 as ε 0 . $$\begin{equation*} \mathbb {P}(\eta ^{\varepsilon} \text{ hits } 0 \text{ before } x) \rightarrow \frac{1}{2} \text{ as } \varepsilon \rightarrow 0. \end{equation*}$$

Proof.For this we use a conformal invariance argument. Namely, we note that by conformal invariance of η ε $\eta ^{\varepsilon}$ , applying the map

z 1 x ¯ 1 x z x 1 x ¯ z $$\begin{equation*} z\mapsto \frac{1-\bar{x}}{1-x}\frac{z-x}{1-\bar{x}z} \end{equation*}$$
from D $\mathbb {D}$ to D $\mathbb {D}$ that sends 1 to 1 and x $x$ to 0, we have
P [ η ε hits 0 before x ] = P [ η ε hits x ̂ before 0 ] = 1 P [ η ε hits 0 before x ̂ ] , $$\begin{equation*} \mathbb {P}[\eta ^{\varepsilon} \text{ hits } 0 \text{ before } x] = \mathbb {P}[\eta ^{\varepsilon} \text{ hits } \hat{x} \text{ before } 0]= 1- \mathbb {P}[\eta ^{\varepsilon} \text{ hits } 0\text{ before } \hat{x} ], \end{equation*}$$
where x ̂ = x ( 1 x ¯ ) ( 1 x ) 1 $\hat{x}=-x(1-\bar{x})(1-x)^{-1}$ is the image of 0 under the conformal map, and | x ̂ | = | x | $|\hat{x}|=|x|$ . Hence it suffices to show that
P [ η ε hits 0 before x ] P [ η ε hits 0 before x ̂ ] 0 $$\begin{equation*} \mathbb {P}[\eta ^{\varepsilon} \text{ hits } 0 \text{ before } x] - \mathbb {P}[\eta ^{\varepsilon} \text{ hits } 0 \text{ before } \hat{x}]\rightarrow 0 \end{equation*}$$
as ε 0 $\varepsilon \rightarrow 0$ . By rotational invariance, if we write η θ ε $\eta ^{\varepsilon} _{\theta }$ for a space-filling SLE κ $_{\kappa ^{\prime }}$ starting at e i θ $\operatorname{e}^{i\theta }$ , then it is enough to show that
P [ η θ ε hits 0 before | x | ] P [ η 0 ε hits 0 before | x | ] 0 $$\begin{equation*} \mathbb {P}[\eta ^{\varepsilon} _{\theta } \text{ hits } 0 \text{ before } |x|]-\mathbb {P}[\eta ^{\varepsilon} _{0} \text{ hits } 0 \text{ before } |x|]\rightarrow 0 \end{equation*}$$
as ε 0 $\varepsilon \rightarrow 0$ , for any θ [ 0 , 2 π ] $\theta \in [0,2\pi ]$ .

However, this is easily justified, because we can couple an SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ from 1 to 0 and another from e i θ $\operatorname{e}^{i\theta }$ to 0, so that they successfully couple (that is, coincide for all later times) before 0 is separated from | x | $|x|$ with arbitrarily high probability (uniformly in θ $\theta$ ) as κ 4 ${\kappa ^{\prime }}\downarrow 4$ . This follows from Lemma 2.14, target invariance of the SLE κ ( κ 6 ) $_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ and (2.9); that is, because in an arbitrarily small amount of time as κ 4 ${\kappa ^{\prime }}\downarrow 4$ , the SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ will have swallowed every point on D $\partial \mathbb {D}$ . $\Box$
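The elementary properties of the Möbius map used in the proof of Lemma 3.3 (it fixes 1, sends x to 0, preserves the unit circle, and sends 0 to a point of the same modulus as x) can be sanity-checked numerically. A minimal sketch, where the point x below is an arbitrary choice in the unit disk:

```python
import cmath

def mobius(z, x):
    # the disk automorphism from the proof of Lemma 3.3:
    # z -> ((1 - conj(x)) / (1 - x)) * (z - x) / (1 - conj(x) * z)
    return (1 - x.conjugate()) / (1 - x) * (z - x) / (1 - x.conjugate() * z)

x = 0.3 + 0.4j                # an arbitrary point of the unit disk
x_hat = mobius(0, x)          # image of 0 under the map

assert abs(mobius(1 + 0j, x) - 1) < 1e-12    # 1 is fixed
assert abs(mobius(x, x)) < 1e-12             # x is sent to 0
# closed form x_hat = -x(1 - conj(x))/(1 - x), and |x_hat| = |x|
assert abs(x_hat - (-x * (1 - x.conjugate()) / (1 - x))) < 1e-12
assert abs(abs(x_hat) - abs(x)) < 1e-12
# the unit circle is mapped to the unit circle
assert abs(abs(mobius(cmath.exp(0.7j), x)) - 1) < 1e-12
```

The last identity, |x̂| = |x|, is the one the proof relies on, and holds because |1 - conj(x)| = |1 - x|.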

Now we proceed with the setup for the main result of this section (Proposition 3.4). Recall that D z D ${\mathbf {D}}_z\in {\mathcal {D}}$ is the sequence of domains formed by the branch of the uniform CLE 4 $_4$ exploration toward z $z$ in D $\mathbb {D}$ . For w z $w\ne z$ , we write σ z , w $\sigma _{z,w}$ for the first time that D z ${\mathbf {D}}_z$ separates z $z$ from w $w$ and let O z , w $\mathcal {O}_{z,w}$ be a Bernoulli random variable (taking values { 0 , 1 } $\lbrace 0,1\rbrace$ each with probability 1 / 2 $1/2$ ) that is independent of { ( D z ) t ; t [ 0 , σ z , w ] } $\lbrace ({\mathbf {D}}_z)_t \, ; \, t\in [0,\sigma _{z,w}]\rbrace$ .

We define elements
D z , w ε = { ( D z ε ) t σ z , w ε ; t 0 } and D z , w = { ( D z ) t σ z , w ; t 0 } $$\begin{equation*} {\mathrm{D}}^{\varepsilon} _{z,w}=\lbrace ({\mathbf {D}}^{\varepsilon} _{z})_{t\wedge \sigma _{z,w}^{\varepsilon} }\, ; \, t\geqslant 0\rbrace \text{ and } {\mathrm{D}}_{z,w}=\lbrace ({\mathbf {D}}_{z})_{t\wedge \sigma _{z,w}}\, ; \, t\geqslant 0\rbrace \end{equation*}$$
of D $\mathcal {D}$ . These are, respectively, the domain sequences formed by the SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ and the uniform CLE 4 $\operatorname{CLE}_4$ exploration branches toward z $z$ , stopped when z $z$ and w $w$ become separated. By definition, they are parameterized such that log CR ( 0 ; ( D z , w ε ) t ) = t σ z , w ε $-\log \operatorname{CR}(0;({\mathrm{D}}_{z,w}^{\varepsilon} )_{t})=t\wedge \sigma _{z,w}^{\varepsilon}$ for all t $t$ .

Proposition 3.4.Fix z w Q $z\ne w\in \mathcal {Q}$ . Then if ( D , O ) $({\mathbf {D}},\mathcal {O})$ is a subsequential limit in law of ( D z ε , O z , w ε ) $({\mathbf {D}}^{\varepsilon} _z,\mathcal {O}^{\varepsilon} _{z,w})$ (with respect to the product D z $\mathcal {D}_z$ × $ \times$ discrete topology), ( D , O ) $({\mathbf {D}},\mathcal {O})$ must satisfy the following property. If D ${{\mathrm{D}}}$ is equal to D ${\mathbf {D}}$ stopped at the first time that w $w$ is separated from z $z$ , then

( D , O ) = ( l a w ) ( D z , w , O z , w ) . $$\begin{equation*} ({{\mathrm{D}}},\mathcal {O}) \overset{(law)}{=}({\mathrm{D}}_{z,w},\mathcal {O}_{z,w}). \end{equation*}$$

Note that this does not yet imply that the times at which z $z$ and w $w$ are separated converge.

To set up for the proof of this proposition, we define for ε , δ > 0 $\varepsilon ,\delta >0$ , σ z , w , δ ε ${\sigma }^{\varepsilon} _{z,w,\delta }$ to be the first time t $t$ that, under the conformal map g t [ D z ε ] $g_t[{\mathrm{D}}_z^{\varepsilon} ]$ , the image of w $w$ is at distance δ $\delta$ from D $\partial \mathbb {D}$ ; see Figure 7 for an illustration. Define σ z , w , δ $\sigma _{z,w,\delta }$ in the same way for ε = 0 $\varepsilon =0$ . Write D z , w , δ ε ${\mathrm{D}}_{z,w,\delta }^{\varepsilon}$ and D z , w , δ ${\mathrm{D}}_{z,w,\delta }$ for the same things as D z , w ε ${\mathrm{D}}_{z,w}^{\varepsilon}$ and D z , w ${\mathrm{D}}_{z,w}$ , but with the time now cut off at σ z , w , δ ε $\sigma _{z,w,\delta }^{\varepsilon}$ and σ z , w , δ $\sigma _{z,w,\delta }$ , respectively.

Lemma 3.5.

  • (a) ( D z , w , δ ε , σ z , w , δ ε ) ( D z , w , δ , σ z , w , δ ) $({\mathrm{D}}_{z,w,\delta }^{\varepsilon} ,\sigma _{z,w,\delta }^{\varepsilon} ) \Rightarrow ({\mathrm{D}}_{z,w,\delta },\sigma _{z,w,\delta })$ as ε 0 $\varepsilon \rightarrow 0$ for every fixed δ > 0 $\delta >0$ .
  • (b) ( D z , w , δ , σ z , w , δ ) ( D z , w , σ z , w ) $({\mathrm{D}}_{z,w,\delta },\sigma _{z,w,\delta })\Rightarrow ({\mathrm{D}}_{z,w},\sigma _{z,w})$ as δ 0 $\delta \rightarrow 0$ .

FIGURE 7
The SLE κ ( κ 6 ) $_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ branch η z ε $\eta _z^{\varepsilon}$ , run up to time σ z , w , δ ε $\sigma _{z,w,\delta }^{\varepsilon}$ . This is the first time that under the Loewner map, w $w$ is sent within distance δ $\delta$ of the boundary. The future of the curve has image η ε $\widetilde{\eta }^{\varepsilon}$ under this map, and is an SLE κ ( κ 6 ) $_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ starting from x 1 = η z ε ( σ z , w , δ ε ) $x_1=\eta _z^{\varepsilon} (\sigma _{z,w,\delta }^{\varepsilon} )$ with a force point at x 2 D $x_2\in \partial \mathbb {D}$ . z $z$ is visited before w $w$ by the original space-filling SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ if and only if when η ε $\widetilde{\eta }^{\varepsilon}$ separates 0 and w $w^{\prime }$ (the image of w $w$ ), the component containing 0 is ‘monocolored’.

Proof.For (a) we use that D z ε D z ${\mathbf {D}}^{\varepsilon} _{z}\Rightarrow {\mathbf {D}}_z$ in D z $\mathcal {D}_z$ . Taking a coupling ( D z , ( D z ε ) ε > 0 ) $({\mathbf {D}}_z,({\mathbf {D}}^{\varepsilon }_z)_{\varepsilon >0})$ such that this convergence is almost sure, it is clear from the definition of convergence in D z $\mathcal {D}_z$ that, under this coupling, ( D z , w , δ ε , σ z , w , δ ε ) ( D z , w , δ , σ z , w , δ ) $({\mathrm{D}}_{z,w,\delta }^{\varepsilon} ,\sigma _{z,w,\delta }^{\varepsilon} )\rightarrow ({\mathrm{D}}_{z,w,\delta },\sigma _{z,w,\delta })$ almost surely for every δ > 0 $\delta >0$ . Statement (b) holds because σ z , w , δ σ z , w $\sigma _{z,w,\delta }\rightarrow \sigma _{z,w}$ almost surely as δ 0 $\delta \rightarrow 0$ . Indeed, σ z , w , δ $\sigma _{z,w,\delta }$ is almost surely increasing as δ 0 $\delta \downarrow 0$ and bounded above by σ z , w $\sigma _{z,w}$ , so must have a limit σ σ z , w $\sigma ^*\leqslant \sigma _{z,w}$ as δ 0 $\delta \rightarrow 0$ . On the other hand, w $w$ cannot be mapped anywhere at positive distance from the boundary under g σ [ D z ] $g_{\sigma ^*}[{\mathbf {D}}_z]$ , so it must be that σ σ z , w $\sigma ^*\geqslant \sigma _{z,w}$ . $\Box$

Thus, we can reduce the proof of Proposition 3.4 to the following lemma.

Lemma 3.6.For any bounded function F $F$ on D z $\mathcal {D}_z$ that is continuous with respect to the topology of D z $\mathcal {D}_z$ , and any fixed δ > 0 $\delta >0$ , we have that

E [ O z , w ε F ( D z , w , δ ε ) ] 1 2 E [ F ( D z , w , δ ) ] $$\begin{equation*} \mathbb {E}[\mathcal {O}^{\varepsilon} _{z,w} F({\mathrm{D}}^{\varepsilon} _{z,w,\delta })] \rightarrow \frac{1}{2} \mathbb {E}[F({\mathrm{D}}_{z,w,\delta })] \end{equation*}$$
as ε 0 $\varepsilon \rightarrow 0$ .

Proof of Proposition 3.4 given Lemma 3.6.Consider a subsequential limit as in Proposition 3.4. Write D δ $\widetilde{{\mathrm{D}}}_\delta$ for D ${\mathbf {D}}$ stopped at the first time that w $w$ is sent within distance δ $\delta$ of D $\partial \mathbb {D}$ under the Loewner flow. Then it is clear (by taking a coupling where the convergence holds almost surely) that ( D δ , O ) $(\widetilde{{\mathrm{D}}}_\delta , \mathcal {O})$ is equal to the limit in law of ( D z , w , δ ε , O z , w ε ) $({\mathrm{D}}^{\varepsilon} _{z,w,\delta }, \mathcal {O}_{z,w}^{\varepsilon} )$ as ε 0 $\varepsilon \rightarrow 0$ along the subsequence.

On the other hand, Lemma 3.6 implies that the law of such a limit is that of D z , w , δ ${\mathrm{D}}_{z,w,\delta }$ together with an independent Bernoulli random variable. Indeed, any continuous bounded function with respect to the product topology on D z × { 0 , 1 } $\mathcal {D}_z \times \lbrace 0,1\rbrace$ is of the form ( D , x ) 1 { x = 1 } F ( D ) + 1 { x = 0 } G ( D ) $({\mathrm{D}},x)\rightarrow \mathbb {1}_{\lbrace x=1\rbrace } F({\mathrm{D}})+\mathbb {1}_{\lbrace x=0\rbrace } G({\mathrm{D}})$ for F , G $F,G$ bounded and continuous with respect to D z $\mathcal {D}_z$ . Moreover, 1 { x = 0 } G = G 1 { x = 1 } G $\mathbb {1}_{\lbrace x=0\rbrace }G=G-\mathbb {1}_{\lbrace x=1\rbrace }G$ and we already know that E [ G ( D z , w , δ ε ) ] E [ G ( D z , w , δ ) ] $\mathbb {E}[G({\mathrm{D}}_{z,w,\delta }^{\varepsilon} )] \rightarrow \mathbb {E}[G({\mathrm{D}}_{z,w,\delta })]$ as ε 0 $\varepsilon \rightarrow 0$ .

So ( D δ , O ) $(\tilde{\mathrm{D}}_\delta ,\mathcal {O})$ has the law of D z , w , δ ${\mathrm{D}}_{z,w,\delta }$ together with an independent Bernoulli random variable, for each δ > 0 $\delta >0$ . Combining this with part (b) of Lemma 3.5 yields the proposition. $\Box$
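The criterion at work here — testing E [ O F ( D ) ] $\mathbb {E}[\mathcal {O}F({\mathrm{D}})]$ against ( 1 / 2 ) E [ F ( D ) ] $(1/2)\mathbb {E}[F({\mathrm{D}})]$ for bounded continuous test functions F $F$ — is exactly how independence from a fair coin is detected. A toy Monte Carlo illustration, with stand-in distributions chosen only for concreteness (a uniform variable plays the role of the stopped exploration):

```python
import random

random.seed(7)
n = 200_000

# Toy stand-ins: D uniform on [0,1] plays the role of the stopped
# exploration, O an independent Bernoulli(1/2) coin.
samples = [(random.random(), random.getrandbits(1)) for _ in range(n)]

F = lambda d: d * d  # a bounded continuous test function

lhs = sum(o * F(d) for d, o in samples) / n    # E[O F(D)]
rhs = 0.5 * sum(F(d) for d, _ in samples) / n  # (1/2) E[F(D)]
assert abs(lhs - rhs) < 0.01  # equal up to Monte Carlo error, by independence
```

If O were instead correlated with D, the two averages would differ for some test function F; Lemma 3.6 rules this out in the limit.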

The proof of Lemma 3.6 will take up the remainder of this subsection. An important ingredient is the following result of [32], about the convergence of SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ to SLE 4 $\operatorname{SLE}_4$ as κ 4 ${\kappa ^{\prime }}\downarrow 4$ .

Theorem 3.7. ([[32], Theorem 1.10])Chordal SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ between two boundary points in the disk converges in law to chordal SLE 4 $\operatorname{SLE}_4$ as κ 4 ${\kappa ^{\prime }}\downarrow 4$ . This is with respect to supremum norm on curves viewed up to time reparameterization.

Proof of Lemma 3.6.Since F $F$ is bounded, subsequential limits of E [ O z , w ε F ( D z , w , δ ε ) ] $\mathbb {E}[\mathcal {O}^{\varepsilon} _{z,w} F({\mathrm{D}}_{z,w,\delta }^{\varepsilon} )]$ always exist. Therefore, we need only to show that such a limit must be equal to ( 1 / 2 ) E [ F ( D z , w , δ ) ] $(1/2) \mathbb {E}[F({\mathrm{D}}_{z,w,\delta })]$ . For this, we apply the map g σ z , w , δ ε [ D z ε ] $g_{\sigma ^{\varepsilon} _{z,w,\delta }}[{\mathbf {D}}_z^{\varepsilon} ]$ : recall that this is the unique conformal map from ( D z ε ) σ z , w , δ ε $({\mathbf {D}}_z^{\varepsilon} )_{\sigma _{z,w,\delta }^{\varepsilon} }$ to D $\mathbb {D}$ that sends z $z$ to 0 and has positive real derivative at z $z$ ; see Figure 7. We then use the Markov property of SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ . This tells us that conditionally on D z , w , δ ε ${\mathrm{D}}^{\varepsilon} _{z,w,\delta }$ , the image of η z ε $\eta ^{\varepsilon} _z$ under this map is that of an SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ started at some x 1 D $x_1\in \partial \mathbb {D}$ with a force point at x 2 D $x_2\in \partial \mathbb {D}$ (where x 1 , x 2 $x_1,x_2$ are measurable with respect to D z , w , δ ε ${\mathrm{D}}^{\varepsilon} _{z,w,\delta }$ ). Let us call this curve η ε $\widetilde{\eta }^{\varepsilon}$ . Let w $w^{\prime }$ be the image of w $w$ under g σ z , w , δ ε [ D z ε ] $g_{\sigma ^{\varepsilon} _{z,w,\delta }}[{\mathbf {D}}_z^{\varepsilon} ]$ , which is also measurable with respect to D z , w , δ ε $D^{\varepsilon} _{z,w,\delta }$ and has | w | = 1 δ $|w^{\prime }|=1-\delta$ almost surely. Then the conditional expectation of O z , w ε $\mathcal {O}^{\varepsilon} _{z,w}$ given D z , w , δ ε ${\mathrm{D}}_{z,w,\delta }^{\varepsilon}$ can be written as a probability for η ε $\widetilde{\eta }^{\varepsilon}$ . 
Namely, it is just the probability that when η ε $\widetilde{\eta }^{\varepsilon}$ first separates w $w^{\prime }$ and 0, the component containing 0 either has boundary made up entirely of the left-hand side of η ε $\widetilde{\eta }^{\varepsilon}$ and the clockwise arc from x 1 $x_1$ to x 2 $x_2$ , or of the right-hand side of η ε $\widetilde{\eta }^{\varepsilon}$ and the complementary counterclockwise arc. We denote this event for η ε $\widetilde{\eta }^{\varepsilon}$ by A ε $\mathcal {A}^{\varepsilon}$ .

Therefore, by dominated convergence, Lemma 3.6 follows from Lemma 3.8 stated and proved below. $\Box$

Lemma 3.8.Let η ε $\widetilde{\eta }^{\varepsilon}$ be an SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ started at some x 1 D $x_1\in \partial \mathbb {D}$ with a force point at x 2 D $x_2\in \partial \mathbb {D}$ . Fix w D $w^{\prime }\in \mathbb {D}$ . Let A ε $\mathcal {A}^{\varepsilon}$ be the event that when η ε $\widetilde{\eta }^{\varepsilon}$ first separates w $w^{\prime }$ and 0, the component containing 0 either has boundary made up entirely of the left-hand side of η ε $\widetilde{\eta }^{\varepsilon}$ and the clockwise arc from x 1 $x_1$ to x 2 $x_2$ , or of the right-hand side of η ε $\widetilde{\eta }^{\varepsilon}$ and the complementary counterclockwise arc. Then

P ( A ε ) 1 2 as ε 0 ( equivalentlyas κ 4 ) . $$\begin{equation} \mathbb {P}(\mathcal {A}^{\varepsilon} ) \rightarrow \frac{1}{2} \text{ as } \varepsilon \rightarrow 0 \; \textrm {(equivalently as } {\kappa ^{\prime }}\downarrow \textrm {4)}. \end{equation}$$ (3.2)

Another way to describe the event A ε $\mathcal {A}^{\varepsilon}$ is the following. If the clockwise boundary arc from x 1 $x_1$ to x 2 $x_2$ together with the left-hand side of η ε $\widetilde{\eta }^{\varepsilon}$ is colored red, and the counterclockwise boundary arc together with the right-hand side of η ε $\widetilde{\eta }^{\varepsilon}$ is colored blue (as in Figures 7 and 8) then A ε $\mathcal {A}^{\varepsilon}$ is the event that when 0 and w $w^{\prime }$ are separated, the component containing 0 is ‘monocolored’.

FIGURE 8
Illustration of Lemma 3.8. The two scenarios that can occur when the force point x 2 $x_2$ is swallowed by η ε $\widetilde{\eta }^{\varepsilon}$ . On the left, 0 and w $w^{\prime }$ are on opposite sides of the curve (there is also an analogous scenario when 0 is on the ‘blue side’ and w $w^{\prime }$ is on the ‘red side’). If this happens, we are interested in whether η ε $\widetilde{\eta }^{\varepsilon}$ hits the blue or the red part of D $\partial \mathbb {D}$ first. On the right, they are on the same side of the curve and we are interested in what happens after x 2 $x_2$ is swallowed.

Outline for the proof of Lemma 3.8. Note that until the first time that 0 is separated from x 2 $x_2$ , η ε $\widetilde{\eta }^{\varepsilon}$ has the law (up to time reparameterization) of a chordal SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ from x 1 $x_1$ to x 2 $x_2$ in D $\mathbb {D}$ ; see Lemma 2.4. Importantly, we know by Theorem 3.7 that this converges to chordal SLE 4 $\operatorname{SLE}_4$ as κ 4 ${\kappa ^{\prime }}\downarrow 4$ .

This is the main ingredient going into the proof, for which the heuristic is as follows. If η ε $\widetilde{\eta }^{\varepsilon}$ is very close to a chordal SLE 4 $_4$ , then after some small initial time it should not hit the boundary of D $\mathbb {D}$ again until getting very close to x 2 $x_2$ . At this point either w $w^{\prime }$ and 0 will be on the ‘same side of the curve’ (scenario on the right-hand side of Figure 8) or they will be on ‘different sides’ (scenario on the left-hand side of Figure 8).
  • In the latter case (left-hand side of Figure 8), note that η $\widetilde{\eta }$ is very unlikely to return anywhere near 0 or w $w^{\prime }$ before swallowing the force point at x 2 $x_2$ . Hence, whether or not A ε $\mathcal {A}^{\varepsilon}$ occurs depends only on whether the curve goes on to hit the boundary ‘just to the left’ of x 2 $x_2$ , or ‘just to the right’. Indeed, hitting on one side will correspond to 0 being in a monocolored red bubble when it is separated from w $w^{\prime }$ , meaning that A ε $\mathcal {A}^{\varepsilon}$ will occur, while hitting on the other side will correspond to w $w^{\prime }$ being in a monocolored blue bubble, and it will not. By the Markov property and symmetry, we will argue that each of these happens with (conditional) probability close to 1 / 2 $1/2$ .
  • In the former case (right-hand side of Figure 8), η $\widetilde{\eta }$ will go on to swallow the force point x 2 $x_2$ before separating 0 and w $w^{\prime }$ , with high probability as κ 4 ${\kappa ^{\prime }}\downarrow 4$ . Once this has occurred, η ε $\widetilde{\eta }^{\varepsilon}$ will continue to evolve in the cut-off component containing 0 and w $w^{\prime }$ , as an SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ with force point initially adjacent to the tip. But then by mapping to the unit disk again, the conditional probability of A ε $\mathcal {A}^{\varepsilon}$ becomes the probability that a space-filling SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ visits one particular point before another. This converges to 1 / 2 $1/2$ as κ 4 ${\kappa ^{\prime }}\downarrow 4$ by Lemma 3.3.

Proof of Lemma 3.8.Let us now proceed with the details. For u > 0 $u>0$ small, let η u ε $\widetilde{\eta }^{\varepsilon} _u$ be η ε $\widetilde{\eta }^{\varepsilon}$ run until the first entry time T u ε $T^{\varepsilon} _u$ of D B x 2 ( u ) $\mathbb {D}\cap B_{x_2}(u)$ . By Theorem 3.7, the probability that η ε $\widetilde{\eta }^{\varepsilon}$ separates 0 or w $w^{\prime }$ from x 2 $x_2$ before time T u ε $T^{\varepsilon} _u$ tends to 0 as ε 0 $\varepsilon \rightarrow 0$ for any fixed u < | x 2 x 1 | $u<|x_2-x_1|$ . We write E u , b ε $E_{u,\text{b}}^{\varepsilon}$ for this event.

We also fix a u > 0 $u^{\prime }>0$ , chosen such that x 1 , 0 $x_1,0$ and w $w^{\prime }$ are contained in the closure of D B x 2 ( u ) $\mathbb {D}\setminus B_{x_2}(u^{\prime })$ . Again from the convergence to SLE 4 $_4$ we can deduce that

P η ε revisits D B x 2 ( u ) after time T u ε 0 as u 0 , uniformly in ε . $$\begin{equation} \mathbb {P}{\left(\widetilde{\eta }^{\varepsilon} \text{ revisits } \mathbb {D}\setminus B_{x_2}(u^{\prime }) \text{ after time }T^{\varepsilon} _u \right)}\rightarrow 0 \text{ as } u\rightarrow 0, \text{ uniformly in } \varepsilon . \end{equation}$$ (3.3)
The point of this is that η ε $\widetilde{\eta }^{\varepsilon}$ cannot ‘change between the configurations in Figure 8’ without going back into D B x 2 ( u ) $\mathbb {D}\setminus B_{x_2}(u^{\prime })$ . Write:
  • E u , l ε $E_{u,\text{l}}^{\varepsilon}$ for the intersection of ( E u , b ε ) c $(E_{u,\text{b}}^{\varepsilon} )^c$ and the event that η u ε B x 2 ( u ) ¯ $\widetilde{\eta }^{\varepsilon} _u\cup \overline{B_{x_2}(u)}$ separates 0 and w $w^{\prime }$ in D $\mathbb {D}$ , with 0 on the left of η u ε $\widetilde{\eta }^{\varepsilon} _u$ ;
  • E u , r ε $E_{u,\text{r}}^{\varepsilon}$ for the same thing but with left replaced by right; and
  • E u , s ε $E_{u,\text{s}}^{\varepsilon}$ for the intersection of ( E u , b ε ) c $(E_{u,\text{b}}^{\varepsilon} )^c$ and the event that η u ε B x 2 ( u ) ¯ $\widetilde{\eta }_u^{\varepsilon} \cup \overline{B_{x_2}(u)}$ does not separate 0 and w $w^{\prime }$ in D $\mathbb {D}$ .
Then we can decompose
P ( A ε ) = E [ P ( A ε | E u , b ε ) 1 E u , b ε + P ( A ε | E u , l ε ) 1 E u , l ε + P ( A ε | E u , r ε ) 1 E u , r ε + P ( A ε | E u , s ε ) 1 E u , s ε ] = E [ A ε 1 E u , b ε ] + E [ P ( A ε | E u , l ε ) 1 E u , l ε ] + E [ P ( A ε | E u , r ε ) 1 E u , r ε ] + E [ P ( A ε | E u , s ε ) 1 E u , s ε ] . $$\begin{eqnarray*} \mathbb {P}(\mathcal {A}^{\varepsilon} ) & = & \mathbb {E}[\mathbb {P}(\mathcal {A}^{\varepsilon} \, | \, E_{u,\text{b}}^{\varepsilon} )\mathbb {1}_{E_{u,\text{b}}^{\varepsilon} }+\mathbb {P}(\mathcal {A}^{\varepsilon} \, | \, E_{u,\text{l}}^{\varepsilon} )\mathbb {1}_{E_{u,\text{l}}^{\varepsilon} }+\mathbb {P}(\mathcal {A}^{\varepsilon} \, | \,E_{u,\text{r}}^{\varepsilon} )\mathbb {1}_{E_{u,\text{r}}^{\varepsilon} }+\mathbb {P}(\mathcal {A}^{\varepsilon} \, | \, E_{u,\text{s}}^{\varepsilon} )\mathbb {1}_{E_{u,\text{s}}^{\varepsilon} }] \nonumber \\ & = & \underset{\hbox{\textcircled {1}}}{\mathbb {E}[{\mathcal {A}}^{\varepsilon} \mathbb {1}_{E_{u,\text{b}}^{\varepsilon} }]} + \underset{\hbox{\textcircled {2}}}{\mathbb {E}[\mathbb {P}(\mathcal {A}^{\varepsilon} \, | \, E_{u,\text{l}}^{\varepsilon} )\mathbb {1}_{E_{u,\text{l}}^{\varepsilon} }]} + \underset{\hbox{\textcircled {3}}}{\mathbb {E}[\mathbb {P}(\mathcal {A}^{\varepsilon} \, | \,E_{u,\text{r}}^{\varepsilon} )\mathbb {1}_{E_{u,\text{r}}^{\varepsilon} }]} +\underset{\hbox{\textcircled {4}}}{\mathbb {E}[\mathbb {P}(\mathcal {A}^{\varepsilon} \, | \, E_{u,\text{s}}^{\varepsilon} )\mathbb {1}_{E_{u,\text{s}}^{\varepsilon} }].} \end{eqnarray*}$$
By the observations of the previous paragraph, P ( E u , b ε ) 0 $\mathbb {P}(E_{u,\text{b}}^{\varepsilon} )\rightarrow 0$ as ε 0 $\varepsilon \rightarrow 0$ for any fixed u $u$ , and therefore also
0 as ε 0 for any fixed u . $$\begin{equation} \hbox{\textcircled {1}} \rightarrow 0 \text{ as } \varepsilon \rightarrow 0 \text{ for any fixed } u. \end{equation}$$ (3.4)

Let us now describe what is going on with the terms , $\hbox{\textcircled {2}},\hbox{\textcircled {3}}$ and $\hbox{\textcircled {4}}$ . The term $\hbox{\textcircled {2}}$ corresponds to the left-hand side scenario of Figure 8, and the term $\hbox{\textcircled {3}}$ corresponds to the same scenario, but when 0 and w $w^{\prime }$ lie on opposite sides of the curve to those illustrated in the figure. We will show that

lim u 0 lim ε 0 ( + ) = 1 2 P ( SLE 4 from x 1 to x 2 in D separates w and 0 ) = : p 2 . $$\begin{equation} \lim _{u\rightarrow 0} \lim _{\varepsilon \rightarrow 0} \, (\hbox{\textcircled {2}} + \hbox{\textcircled {3}}) = \frac{1}{2} \mathbb {P}(\operatorname{SLE}_4 \text{ from } x_1 \text{ to } x_2 \text{ in } \mathbb {D}\text{ separates } w^{\prime } \text{ and } 0)=: \frac{p}{2}. \end{equation}$$ (3.5)
The term $\hbox{\textcircled {4}}$ corresponds to the scenario on the right-hand side of Figure 8. We will show that
lim u 0 lim ε 0 = 1 2 ( 1 p ) = 1 2 P ( SLE 4 from x 1 to x 2 in D does not separate w and 0 ) . $$\begin{equation} \lim _{u\rightarrow 0}\lim _{\varepsilon \rightarrow 0} \, \hbox{\textcircled {4}} = \frac{1}{2}(1-p)=\frac{1}{2}\mathbb {P}(\operatorname{SLE}_4 \text{ from } x_1 \text{ to } x_2 \text{ in } \mathbb {D}\text{ does not separate } w^{\prime } \text{ and } 0).\end{equation}$$ (3.6)
Combining (3.5), (3.6), (3.4) and the decomposition P ( A ε ) = + + + $\mathbb {P}(\mathcal {A}^{\varepsilon} )=\hbox{\textcircled {1}}+\hbox{\textcircled {2}}+\hbox{\textcircled {3}}+\hbox{\textcircled {4}}$ gives (3.2), and thus completes the proof. So all that remains is to show (3.5) and (3.6).

Proof of (3.5). First, by (3.3), we can pick u $u$ small enough such that the differences

E [ P ( η ε | [ T u ε , ) hits the clockwise arc between x 1 and x 2 first | E u , l ε ) 1 E u , l ε ] and E [ P ( η ε | [ T u ε , ) hits the counterclockwise arc between x 1 and x 2 first | E u , r ε ) 1 E u , r ε ] $$\begin{align*} & {\left(\hbox{\textcircled {2}} - \mathbb {E}[\mathbb {P}(\widetilde{\eta }^{\varepsilon} {|_{[T_u^{\varepsilon} ,\infty )}} \text{ hits the clockwise arc between } x_1 \text{ and } x_2 \text{ first }\, | \, E_{u,\text{l}}^{\varepsilon} )\, \mathbb {1}_{E_{u,\text{l}}^{\varepsilon} }] \right)} \text{ and } \\ & {\left(\hbox{\textcircled {3}} - \mathbb {E}[\mathbb {P}(\widetilde{\eta }^{\varepsilon} {|_{[T_u^{\varepsilon} ,\infty )}} \text{ hits the counterclockwise arc between } x_1 \text{ and } x_2 \text{ first } \, | \, E_{u,\text{r}}^{\varepsilon} )\,\mathbb {1}_{E_{u,\text{r}}^{\varepsilon} }]\right)} \end{align*}$$
are arbitrarily small, uniformly in ε $\varepsilon$ . All we are doing here is using the fact that if u $u$ is small enough, η ε $\widetilde{\eta }^{\varepsilon}$ will not return anywhere close to 0 or w $w^{\prime }$ after time T u ε $T_u^{\varepsilon}$ . This allows us to reduce the problem to estimating conditional probabilities for chordal SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ . To estimate these probabilities (the conditional probabilities in the displayed equations above) we can use Theorem 3.7, plus symmetry. In particular, Theorem 3.7 implies that for a chordal SLE κ $_{\kappa ^{\prime }}$ curve on H $\mathbb {H}$ from 0 to $\infty$ , the probability that it hits [ R , ) $[R,\infty )$ before ( , L ] $(-\infty ,-L]$ for any fixed L , R ( 0 , ) $L,R\in (0,\infty )$ can be made arbitrarily close to the probability that it hits [ max ( L , R ) , ) $[\max (L,R),\infty )$ before ( , max ( L , R ) ] $(-\infty ,-\max (L,R)]$ as κ 4 ${\kappa ^{\prime }}\downarrow 4$ . This is because SLE 4 $_4$ does not hit the boundary apart from at the endpoints, and the convergence is in the uniform topology. Since the probability that chordal SLE κ $_{\kappa ^{\prime }}$ in H $\mathbb {H}$ from 0 to $\infty$ hits [ max ( L , R ) , ) $[\max (L,R),\infty )$ before ( , max ( L , R ) ] $(-\infty ,-\max (L,R)]$ is 1 / 2 $1/2$ for every κ ${\kappa ^{\prime }}$ by symmetry, we see that the probability of hitting [ R , ) $[R,\infty )$ before ( , L ] $(-\infty ,-L]$ converges to 1 / 2 $1/2$ as κ 4 ${\kappa ^{\prime }}\downarrow 4$ .

We use this to observe, by conformally mapping to H $\mathbb {H}$ , that

P η ε | [ T u ε , ) hits the clockwise arc between x 1 and x 2 first | η ε ( [ 0 , T u ε ] ) 1 2 $$\begin{equation*} \mathbb {P}{\left(\widetilde{\eta }^{\varepsilon} {|_{[T_u^{\varepsilon} ,\infty )}} \text{ hits the clockwise arc between } x_1 \text{ and } x_2 \text{ first } \, | \, \widetilde{\eta }^{\varepsilon} ([0,T_u^{\varepsilon} ])\right)}\rightarrow \frac{1}{2} \end{equation*}$$
almost surely as ε 0 $ \varepsilon \rightarrow 0$ . Using this along with dominated convergence, we obtain (3.5).
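The symmetry input invoked in this step — that a process whose law is invariant under the reflection x ↦ − x $x\mapsto -x$ hits a right barrier before the symmetric left barrier with probability exactly 1 / 2 $1/2$ — is a general fact, illustrated here with a toy symmetric random walk standing in for chordal SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ :

```python
import random

random.seed(1)

def hits_right_first(a=10):
    """Symmetric simple random walk from 0: does it reach +a before -a?"""
    pos = 0
    while abs(pos) < a:
        pos += random.choice((-1, 1))
    return pos > 0

n = 20_000
freq = sum(hits_right_first() for _ in range(n)) / n
# reflection symmetry forces the probability to be exactly 1/2
assert abs(freq - 0.5) < 0.02
```

The same reflection argument gives probability 1 / 2 $1/2$ for hitting [ max ( L , R ) , ) $[\max (L,R),\infty )$ before ( , max ( L , R ) ] $(-\infty ,-\max (L,R)]$ , since those two barriers are symmetric.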

Proof of (3.6). Write E ε $E^{\varepsilon}$ for the event that η ε $\widetilde{\eta }^{\varepsilon}$ swallows the force point x 2 $x_2$ before separating 0 and w $w^{\prime }$ . Then we can rewrite $\hbox{\textcircled {4}}$ as

E [ A ε ( 1 E u , s ε 1 E ε ) ] + E [ A ε 1 E ε ] . $$\begin{equation} \mathbb {E}[\mathcal {A}^{\varepsilon} (\mathbb {1}_{E^{\varepsilon} _{u,s}}-\mathbb {1}_{E^{\varepsilon} })]+\mathbb {E}[\mathcal {A}^{\varepsilon} \mathbb {1}_{E^{\varepsilon} }]. \end{equation}$$ (3.7)
Applying (3.3) shows that the first term tends to 0 as u 0 $u\rightarrow 0$ , uniformly in ε $\varepsilon$ . Let us now show that the second tends to ( 1 / 2 ) ( 1 p ) $(1/2)(1-p)$ as ε 0 $\varepsilon \rightarrow 0$ .

To do this, we condition on η ε $\widetilde{\eta }^{\varepsilon}$ run up to the time T 0 ε $T^{\varepsilon} _0$ that the force point x 2 $x_2$ is swallowed. Conditioned on this initial segment we can use the Markov property of SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ to describe the future evolution of η ε $\widetilde{\eta }^{\varepsilon}$ . Indeed, it is simply that of a radial SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ started from η ε ( T 0 ε ) D $\widetilde{\eta }^{\varepsilon} (T_0^{\varepsilon} )\in \partial \mathbb {D}$ and targeted toward 0, with force point located infinitesimally close to the starting point. Viewing the evolution of η ε $\widetilde{\eta }^{\varepsilon}$ after time T 0 ε $T_0^{\varepsilon}$ as one branch of a space-filling SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ we then have

E [ A ε 1 E ε ] = E [ P ( space-filling SLE κ started from η ε ( T 0 ε ) hits 0 before w ) 1 E ε ] $$\begin{equation*} \mathbb {E}[\mathcal {A}^{\varepsilon} \mathbb {1}_{E^{\varepsilon} }] = \mathbb {E}[\mathbb {P}(\text{space-filling SLE}_{\kappa ^{\prime }}\text{ started from } \widetilde{\eta }^{\varepsilon} (T_0^{\varepsilon} ) \text{ hits } 0 \text{ before } w^{\prime }) \mathbb {1}_{E^{\varepsilon} }] \end{equation*}$$
which we further decompose as
1 2 P ( E ε ) + E P ( space-filling SLE κ started from η ε ( T 0 ε ) hits 0 before w ) 1 / 2 1 E ε . $$\begin{equation*} \frac{1}{2} \mathbb {P}(E^{\varepsilon} ) + \mathbb {E}{\left[{\left(\mathbb {P}(\text{space-filling SLE}_{\kappa ^{\prime }}\text{ started from } \widetilde{\eta }^{\varepsilon} (T_0^{\varepsilon} ) \text{ hits } 0 \text{ before } w^{\prime })-1/2\right)} \mathbb {1}_{E_\varepsilon }\right]}. \end{equation*}$$
Since the first term above tends to ( 1 / 2 ) ( 1 p ) $(1/2)(1-p)$ as ε 0 $\varepsilon \rightarrow 0$ , it again suffices by dominated convergence (and by applying a rotation) to show that for any x D $x\in \mathbb {D}$ :
P ( η ε hits 0 before x ) 1 2 as ε 0 . $$\begin{equation*} \mathbb {P}(\eta ^{\varepsilon} \text{ hits } 0 \text{ before } x) \rightarrow \frac{1}{2} \text{ as } \varepsilon \rightarrow 0. \end{equation*}$$

This is precisely the statement of Lemma 3.3. Thus we conclude the proof of (3.6), and therefore Lemma 3.8. $\Box$

3.2 Convergence of separation times

We now want to prove that for z w $z\ne w$ the actual separation times σ z , w ε $\sigma _{z,w}^{\varepsilon}$ converge to the separation time σ z , w $\sigma _{z,w}$ in law (jointly with the exploration) as ε 0 $\varepsilon \rightarrow 0$ . The difficulty is as follows. Suppose we are on a probability space where η z ε $\eta ^{\varepsilon} _z$ converges almost surely to η z $\eta _z$ . Then we can deduce (by Lemma 3.5) that any limit of σ z , w ε $\sigma _{z,w}^{\varepsilon}$ must be greater than or equal to σ z , w $\sigma _{z,w}$ . But it could still be the case that z $z$ and w $w$ are ‘almost separated’ at some sequence of times that converge to σ z , w $\sigma _{z,w}$ as ε 0 $\varepsilon \downarrow 0$ , but that the η z ε $\eta _z^{\varepsilon}$ then go on to do something else for a macroscopic amount of time before coming back to finally separate z $z$ and w $w$ . (Note that in this situation the η z ε $\eta _z^{\varepsilon}$ would be creating ‘bottlenecks’ at the almost separation times, so it would not contradict Proposition 3.4.)

The main result of this subsection is the following.

Proposition 3.9. For any z w Q $z\ne w \in \mathcal {Q}$

( D z ε , σ z , w ε ) ( D z , σ z , w ) $$\begin{equation} (\mathbf {D}_z^{\varepsilon} ,\sigma _{z,w}^{\varepsilon} )\Rightarrow (\mathbf {D}_z,\sigma _{z,w})\end{equation}$$ (3.8)
as ε 0 $\varepsilon \rightarrow 0$ , with respect to Carathéodory convergence in D $\mathcal {D}$ in the first coordinate, and convergence in R $\mathbb {R}$ in the second.

Remark 3.10. It is easy to see that σ z , w ε $\sigma _{z,w}^{\varepsilon}$ is tight in ε $\varepsilon$ for any fixed z w D $z\ne w\in \mathbb {D}$ . For example, this follows from Corollary 2.29, which implies that minus the log conformal radius, seen from z $z$ , of the first CLE κ $\operatorname{CLE}_{{\kappa ^{\prime }}}$ loop containing z $z$ and not w $w$ , is tight. Since σ z , w ε $\sigma _{z,w}^{\varepsilon}$ is bounded above by this minus log conformal radius, tightness of σ z , w ε $\sigma _{z,w}^{\varepsilon}$  follows.

There is one situation where convergence of the separation times is already easy to see from our work so far. Namely, when z $z$ and w $w$ are separated (in the limit) at a time when a CLE 4 $_4$ loop has just been drawn. More precisely:

Lemma 3.11. Suppose that ε n 0 $\varepsilon _n\downarrow 0$ is such that

( D z ε n , D w ε n , σ z , w ε n , σ w , z ε n , O z , w ε n ) ( D z , D w , σ z , w , σ w , z , O ) as n $$\begin{equation*} (\mathbf {D}_{z}^{\varepsilon _n},\mathbf {D}_w^{\varepsilon _n}, \sigma _{z,w}^{\varepsilon _n}, \sigma _{w,z}^{\varepsilon _n},\mathcal {O}_{z,w}^{\varepsilon _n})\Rightarrow (\mathbf {D}_z, \mathbf {D}_w^*, \sigma _{z,w}^*,\sigma _{w,z}^*,\mathcal {O}^*) \text{ as } n\rightarrow \infty \end{equation*}$$
(where at this point we know that D z , D w $\mathbf {D}_z,\mathbf {D}_w^*$ have the same marginal laws as D z , D w $\mathbf {D}_z,\mathbf {D}_w$ , but not necessarily the same joint law). Then on the event that D z $\mathbf {D}_z$ separates w $w$ from z $z$ at a time σ z , w $\sigma _{z,w}$ when a CLE 4 $\operatorname{CLE}_4$ loop L $\mathcal {L}$ is completed, we have that almost surely:
  • σ z , w = σ z , w $\sigma _{z,w}^*=\sigma _{z,w}$ ;
  • D w $\mathbf {D}_w^*$ is equal to D z $\mathbf {D}_z$ (modulo time reparameterization), up to the time σ w , z $\sigma _{w,z}$ that z $z$ is separated from w $w$ ;
  • σ w , z = σ w , z $\sigma _{w,z}^*=\sigma _{w,z}$ ; and
  • conditionally on the above event occurring, O $\mathcal {O}^*$ is independent of D z , D w $\mathbf {D}_z,\mathbf {D}_w^*$ and has the law of a Bernoulli ( 1 2 ) $(\frac{1}{2})$ random variable.

Proof. Without loss of generality, by switching the roles of z $z$ and w $w$ if necessary and by the Markov property of the explorations, it suffices to consider the case that L = L z $\mathcal {L}=\mathcal {L}_z$ is the outermost CLE 4 $\operatorname{CLE}_4$ loop (generated by D z $\mathbf {D}_z$ ) containing z $z$ .

By Skorokhod embedding together with Corollary 2.17 and Proposition 2.18, we may assume that we are working on a probability space where the convergence assumed in the lemma holds almost surely, jointly with the convergence L z ε n L z $\mathcal {L}_z^{\varepsilon _n} \rightarrow \mathcal {L}_z$ (in the Hausdorff sense), B z ε n = ( D z ε n ) τ z ε n B z = ( D z ) τ z = int ( L z ) $\mathcal {B}_z^{\varepsilon _n}=(\mathbf {D}_{z}^{\varepsilon _n})_{\tau _z^{\varepsilon _n}}\rightarrow \mathcal {B}_z=(\mathbf {D}_z)_{\tau _z}=\mathrm{int}(\mathcal {L}_z)$ (in the Carathéodory sense) and ( τ 0 , z ε n , τ z ε n ) ( τ 0 , z , τ z ) $(\tau _{0,z}^{\varepsilon _n},\tau _z^{\varepsilon _n}) \rightarrow (\tau _{0,z},\tau _z)$ . (Recall the definitions of these times from Section 2.1.6). We may also assume that the convergence σ z , w , δ ε n σ z , w , δ $\sigma _{z,w,\delta }^{\varepsilon _n}\rightarrow \sigma _{z,w,\delta }$ holds almost surely as n $n\rightarrow \infty$ for all rational δ > 0 $\delta &gt;0$ .

Now we restrict to the event E $E$ that D z $\mathbf {D}_z$ separates z $z$ from w $w$ at time τ z $\tau _z$ , so that in particular w $w$ is at positive distance from L z ( D z ) τ z = ( D z ) τ z ¯ $\mathcal {L}_z\cup (\mathbf {D}_z)_{\tau _z}=\overline{(\mathbf {D}_z)_{\tau _z}}$ . The Hausdorff convergence L z ε n L z $\mathcal {L}_z^{\varepsilon _n} \rightarrow \mathcal {L}_z$ thus implies that w D B z ε n $w\in \mathbb {D}\setminus \mathcal {B}_z^{\varepsilon _n}$ for all n $n$ large enough (that is, w $w$ is outside of the first CLE κ ( ε n ) $\operatorname{CLE}_{{\kappa ^{\prime }}(\varepsilon _n)}$ loop containing z $z$ ), and therefore that σ z , w ε n τ z ε n $\sigma _{z,w}^{\varepsilon _n}\leqslant \tau ^{\varepsilon _n}_z$ for all n $n$ large enough (that is, separation occurs no later than this loop closure time). Since σ z , w $\sigma _{z,w}^*$ is defined to be the almost sure limit of σ z , w ε n $\sigma _{z,w}^{\varepsilon _n}$ as n $n\rightarrow \infty$ , and we have assumed that τ z ε n τ z $\tau _z^{\varepsilon _n}\rightarrow \tau _z$ almost surely, this implies that σ z , w τ z $\sigma _{z,w}^*\leqslant \tau _z$ almost surely on the event E $E$ . On the other hand, we know that σ z , w ε n σ z , w , δ ε n $\sigma _{z,w}^{\varepsilon _n}\geqslant \sigma _{z,w,\delta }^{\varepsilon _n}$ and σ z , w , δ ε n σ z , w , δ $\sigma _{z,w,\delta }^{\varepsilon _n}\rightarrow \sigma _{z,w,\delta }$ as n $n\rightarrow \infty$ for all rational positive δ $\delta$ , so that σ z , w σ z , w , δ $\sigma _{z,w}^*\geqslant \sigma _{z,w,\delta }$ for all δ $\delta$ and therefore σ z , w lim δ 0 σ z , w , δ = σ z , w = τ z $\sigma _{z,w}^*\geqslant \lim _{\delta \rightarrow 0} \sigma _{z,w,\delta }=\sigma _{z,w}=\tau _z$ almost surely. Together this implies that σ z , w = τ z = σ z , w $\sigma _{z,w}=\tau _z=\sigma _{z,w}^*$ on the event E $E$ .

Next, observe that by the same argument as in the penultimate sentence above, we have σ w , z σ w , z $\sigma _{w,z}^*\geqslant \sigma _{w,z}$ with probability 1. Moreover, we saw that on the event E $E$ , w D B z ε n $w\in \mathbb {D}\setminus \mathcal {B}_z^{\varepsilon _n}$ for all n $n$ large enough. But we also have that σ z , w ε n τ z $\sigma _{z,w}^{\varepsilon _n}\rightarrow \tau _z$ , so that σ z , w ε n > τ 0 , z ε n $\sigma _{z,w}^{\varepsilon _n}&gt;\tau _{0,z}^{\varepsilon _n}$ and therefore w ( D z , w ε n ) τ 0 , z ε n B z ε n $w\in (\mathbf {D}^{\varepsilon _n}_{z,w})_{\tau _{0,z}^{\varepsilon _n}}\setminus \mathcal {B}_z^{\varepsilon _n}$ for all n $n$ large enough. Hence,

σ w , z = lim n σ w , z ε n lim n log CR ( w , ( D z , w ε n ) τ 0 , z ε n B z ε n ) = log CR ( w , ( D z ) τ 0 , z B z ) = σ w , z . $$\begin{equation*} \sigma _{w,z}^*=\lim _n \sigma _{w,z}^{\varepsilon _n} \leqslant \lim _n -\log \operatorname{CR}(w,(\mathbf {D}^{\varepsilon _n}_{z,w})_{\tau _{0,z}^{\varepsilon _n}}\setminus \mathcal {B}_z^{\varepsilon _n})=-\log \operatorname{CR}(w,(\mathbf {D}_{z})_{{\tau _{0,z}}}\setminus \mathcal {B}_z)=\sigma _{w,z}. \end{equation*}$$
Combining the two inequalities above gives the third bullet point of the lemma, and since D w , z ε n $\mathbf {D}_{w,z}^{\varepsilon _n}$ and D z , w ε n $\mathbf {D}_{z,w}^{\varepsilon _n}$ agree up to time parameterization until z $z$ and w $w$ are separated for every n $n$ , we also obtain the second bullet point.

For the final bullet point, if we write D z , w $\mathbf {D}_{z,w}$ for D z $\mathbf {D}_z$ stopped at time σ z , w $\sigma _{z,w}$ , we already know from the previous subsection that the law of O $\mathcal {O}^*$ given D z , w $\mathbf {D}_{z,w}$ is fair Bernoulli. Moreover, since O z , w ε n $\mathcal {O}^{\varepsilon _n}_{z,w}$ and ( g σ z , w ε n [ D z ε n ] ( ( D z ε n ) s + σ z , w ε n ) ; s 0 ) $(g_{\sigma _{z,w}^{\varepsilon _n}}[\mathbf {D}_{z}^{\varepsilon _n}]((\mathbf {D}_z^{\varepsilon _n})_{s+\sigma _{z,w}^{\varepsilon _n}}) \, ; \, s\geqslant 0)$ are independent for every n $n$ , it follows that O $\mathcal {O}^*$ is independent of ( g σ z , w [ D z ] ( ( D z ) s + σ z , w ) ; s 0 ) $(g_{\sigma _{z,w}^*}[\mathbf {D}_{z}]((\mathbf {D}_z)_{s+\sigma _{z,w}^*})\, ; \,s\geqslant 0)$ . So in general (that is, without restricting to the event E $E$ ) we can say that, given ( g σ z , w [ D z ] ( ( D z ) s + σ z , w ) ; s 0 ) $(g_{\sigma _{z,w}^*}[\mathbf {D}_{z}]((\mathbf {D}_z)_{s+\sigma _{z,w}^*}) \, ; \, s\geqslant 0)$ and ( ( D z ) t ; t σ z , w ) $((\mathbf {D}_z)_t \, ; \, t\leqslant \sigma _{z,w})$ , O $\mathcal {O}^*$ has the conditional law of a Bernoulli ( 1 / 2 ) $(1/2)$ random variable. Since the event E $E$ (that σ z , w = τ z $\sigma _{z,w}=\tau _z$ ) is measurable with respect to ( ( D z ) t ; t σ z , w ) $((\mathbf {D}_z)_t \, ; \, t\leqslant \sigma _{z,w})$ , and we have already seen that σ z , w = σ z , w $\sigma _{z,w}=\sigma _{z,w}^*$ on this event, we deduce the final statement of the lemma. $\Box$

Proof of Proposition 3.9. By tightness (Remark 3.10), and since we already know the convergence in law of ( D z ε , ( σ z , w , δ ε ) δ > 0 ) $(\mathbf {D}_z^{\varepsilon }, (\sigma ^{\varepsilon} _{z,w,\delta })_{\delta &gt;0})$ to ( D z , ( σ z , w , δ ) δ > 0 ) $(\mathbf {D}_z,(\sigma _{z,w,\delta })_{\delta &gt;0})$ , it suffices to prove that any joint subsequential limit in law ( D z , ( σ z , w , δ ) δ > 0 , σ z , w ) $(\mathbf {D}_z,(\sigma _{z,w,\delta })_{\delta &gt;0},\sigma ^*_{z,w})$ of ( D z ε , ( σ z , w , δ ε ) δ > 0 , σ z , w ε ) $(\mathbf {D}_z^{\varepsilon }, (\sigma ^{\varepsilon} _{z,w,\delta })_{\delta &gt;0}, \sigma _{z,w}^{\varepsilon} )$ has σ z , w = σ z , w $\sigma ^*_{z,w}=\sigma _{z,w}$ almost surely. So let us assume that we have such a subsequential limit (along some sequence ε n 0 $\varepsilon _n\downarrow 0$ ) and that we are working on a probability space where the convergence holds almost surely. As remarked previously, since σ z , w ε n σ z , w , δ ε n $\sigma _{z,w}^{\varepsilon _n}\geqslant \sigma _{z,w,\delta }^{\varepsilon _n}$ for each δ > 0 $\delta &gt;0$ and lim δ lim n σ z , w , δ ε n = lim δ σ z , w , δ = σ z , w $\lim _\delta \lim _n\sigma _{z,w,\delta }^{\varepsilon _n}=\lim _\delta \sigma _{z,w,\delta }=\sigma _{z,w}$ , we already know that σ z , w σ z , w $\sigma _{z,w}^*\geqslant \sigma _{z,w}$ almost surely. So we just need to prove that, for any fixed s > 0 $s&gt;0$ , P ( σ z , w + s σ z , w ) = 0 $\mathbb {P}(\sigma _{z,w}+s\leqslant \sigma _{z,w}^*)=0$ , or alternatively, that lim δ 0 P ( σ z , w , δ + s σ z , w ) = 0 $\lim _{\delta \rightarrow 0} \mathbb {P}(\sigma _{z,w,\delta }+s\leqslant \sigma _{z,w}^*)=0$ . Since σ z , w , δ $\sigma _{z,w,\delta }$ and σ z , w $\sigma _{z,w}^*$ are the almost sure limits of σ z , w , δ ε n $\sigma ^{\varepsilon _n}_{z,w,\delta }$ and σ z , w ε n $\sigma _{z,w}^{\varepsilon _n}$ as n $n\rightarrow \infty$ , it is sufficient to prove that for each s > 0 $s&gt;0$

lim sup δ 0 lim sup ε 0 P ( σ z , w , δ ε + s σ z , w ε ) = 0 . $$\begin{equation*} \limsup _{\delta \rightarrow 0}\limsup _{\varepsilon \rightarrow 0} \mathbb {P}(\sigma _{z,w,\delta }^{\varepsilon }+s\leqslant \sigma _{z,w}^{\varepsilon })=0. \end{equation*}$$
The strategy of the proof is to use Lemma 3.11 to say that (when δ $\delta$ and ε $\varepsilon$ are small), η z ε $\eta _z^{\varepsilon}$ will separate lots of CLE κ $\operatorname{CLE}_{\kappa ^{\prime }}$ loops from z $z$ during the time interval [ σ z , w , δ ε , σ z , w , δ ε + s ] $[\sigma _{z,w,\delta }^{\varepsilon} ,\sigma _{z,w,\delta }^{\varepsilon} +s]$ . Then we will argue that this is very unlikely to happen during the time interval [ σ z , w , δ ε , σ z , w ε ] $[\sigma _{z,w,\delta }^{\varepsilon} ,\sigma _{z,w}^{\varepsilon} ]$ , which means that σ z , w ε < σ z , w , δ ε + s $\sigma _{z,w}^{\varepsilon} &lt;\sigma _{z,w,\delta }^{\varepsilon} +s$ with high probability.

More precisely, let us assume from now on that s > 0 $s&gt;0$ is fixed, and write S r $\mathcal {S}_r$ for the collection of faces (squares) of r Z 2 $r\mathbb {Z}^2$ that intersect D $\mathbb {D}$ . We write S δ , r ε $\widetilde{S}_{\delta ,r}^{\varepsilon}$ for the event that there exists S S r $S\in \mathcal {S}_r$ that is separated by η z ε $\eta _z^{\varepsilon}$ from z $z$ during the interval [ σ z , w , δ ε , σ z , w , δ ε + s ] $[\sigma _{z,w,\delta }^{\varepsilon} ,\sigma _{z,w,\delta }^{\varepsilon} +s]$ and such that z $z$ is visited by the space-filling SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ before S $S$ . We write S δ , r ε $S_{\delta ,r}^{\varepsilon}$ for the same event but with the interval [ σ z , w , δ ε , σ z , w ε ] $[\sigma _{z,w,\delta }^{\varepsilon} , \sigma _{z,w}^{\varepsilon} ]$ instead. So if the event { σ z , w , δ ε + s σ z , w ε } $\lbrace \sigma _{z,w,\delta }^{\varepsilon} +s\leqslant \sigma _{z,w}^{\varepsilon} \rbrace$ occurs, then either S δ , r ε $S_{\delta ,r}^{\varepsilon}$ occurs or S δ , r ε $\widetilde{S}_{\delta ,r}^{\varepsilon}$ does not. Hence, for any r > 0 $r&gt;0$ :

lim sup δ 0 lim sup ε 0 P ( σ z , w , δ ε + s σ z , w ε ) lim sup δ 0 lim sup ε 0 P ( S δ , r ε ) + lim sup δ 0 lim sup ε 0 P ( ( S δ , r ε ) c ) . $$\begin{equation*} \limsup _{\delta \rightarrow 0}\limsup _{\varepsilon \rightarrow 0} \mathbb {P}(\sigma _{z,w,\delta }^{\varepsilon }+s\leqslant \sigma _{z,w}^{\varepsilon })\leqslant \limsup _{\delta \rightarrow 0} \limsup _{\varepsilon \rightarrow 0} \mathbb {P}(S_{\delta ,r}^{\varepsilon} )+ \limsup _{\delta \rightarrow 0}\limsup _{\varepsilon \rightarrow 0}\mathbb {P}((\widetilde{S}_{\delta ,r}^{\varepsilon} )^c). \end{equation*}$$
We will show that
lim inf δ 0 lim inf ε 0 P ( S δ , r ε ) 1 as r 0 , $$\begin{equation} \liminf _{\delta \downarrow 0}\liminf _{\varepsilon \downarrow 0}\mathbb {P}(\widetilde{S}_{\delta ,r}^{\varepsilon} )\rightarrow 1 \text{ as } r\rightarrow 0, \end{equation}$$ (3.9)
and that for any r > 0 $r&gt;0$ ,
lim δ 0 lim ε 0 P ( S δ , r ε ) = 0 . $$\begin{equation} \lim _{\delta \downarrow 0} \lim _{\varepsilon \downarrow 0} \mathbb {P}(S_{\delta ,r}^{\varepsilon} ) =0. \end{equation}$$ (3.10)

Let us start with (3.9). First, Lemma 3.11 tells us that since many S S r $S\in \mathcal {S}_r$ will be separated from z $z$ by the CLE 4 $_4$ exploration during the time interval [ σ z , w , σ z , w + s ] $[\sigma _{z,w},\sigma _{z,w}+s]$ when r $r$ is small, the same will be true for the space-filling SLE κ $_{\kappa ^{\prime }}$ on the time interval [ σ z , w , δ ε , σ z , w , δ ε + s ] $[\sigma _{z,w,\delta }^{\varepsilon} , \sigma _{z,w,\delta }^{\varepsilon} +s]$ when ε , δ $\varepsilon , \delta$ are small. More precisely, for any fixed k N $k\in \mathbb {N}$ , δ > 0 $\delta &gt;0$ , the lemma implies that

lim inf ε 0 P ( η z ε ( [ σ z , w , δ ε , σ z , w , δ ε + s ] ) separates k squares in S r from z ) p δ , k , r , $$\begin{equation*} \liminf _{\varepsilon \downarrow 0} \mathbb {P}(\eta _z^{\varepsilon} ([\sigma _{z,w,\delta }^{\varepsilon} ,\sigma _{z,w,\delta }^{\varepsilon} +s]) \text{ separates } k \text{ squares in } \mathcal {S}_r \text{ from } z) \geqslant p_{\delta ,k,r}, \end{equation*}$$
where p δ , k , r $p_{\delta ,k,r}$ is the probability that D z $\mathbf {D}_z$ disconnects at least k $k$ squares in S r $\mathcal {S}_r$ from z $z$ by distinct CLE 4 $\operatorname{CLE}_4$ loops during the time interval [ σ z , w , δ , σ z , w , δ + s ] $[\sigma _{z,w,\delta },\sigma _{z,w,\delta }+s]$ . Moreover, since σ z , w , δ σ z , w $\sigma _{z,w,\delta }\rightarrow \sigma _{z,w}$ as δ 0 $\delta \rightarrow 0$ almost surely, lim inf δ 0 p δ , k , r $\liminf _{\delta \downarrow 0} p_{\delta ,k,r}$ is equal to the probability p k , r $p_{k,r}$ that D z $\mathbf {D}_z$ disconnects at least k $k$ squares in S r $\mathcal {S}_r$ from z $z$ by distinct CLE 4 $\operatorname{CLE}_4$ loops during the time interval [ σ z , w , σ z , w + s ] $[\sigma _{z,w},\sigma _{z,w}+s]$ . Note that since s $s$ is positive (and fixed), p k , r 1 $p_{k,r}\rightarrow 1$ as r 0 $r\rightarrow 0$ for any fixed k $k$ .

This is almost exactly what we need. However, recall that although S δ , r ε $\widetilde{S}_{\delta ,r}^{\varepsilon}$ only requires one S S r $S\in \mathcal {S}_r$ to be disconnected from z $z$ by η z ε ( [ σ z , w , δ ε , σ z , w , δ ε + s ] ) $\eta _z^{\varepsilon} ([\sigma _{z,w,\delta }^{\varepsilon} ,\sigma _{z,w,\delta }^{\varepsilon} +s])$ , it also requires that z $z$ is visited by the space-filling SLE κ $_{\kappa ^{\prime }}$ before S $S$ . This is why we ask for k $k$ squares to be separated: by Lemma 3.11, whether each of them is visited before or after z $z$ converges to a sequence of independent coin tosses. Namely, for any k N $k\in \mathbb {N}$ ,

lim inf δ 0 lim inf ε 0 P ( S δ , r ε ) ( 1 2 k ) lim inf δ 0 lim inf ε 0 P ( η z ε ( [ σ z , w , δ ε , σ z , w , δ ε + s ] ) separates k squares in S r from z ) ( 1 2 k ) lim inf δ 0 p δ , k , r ( 1 2 k ) p k , r . $$\begin{align*} \liminf _{\delta \downarrow 0}\liminf _{\varepsilon \downarrow 0}\mathbb {P}(\widetilde{S}_{\delta ,r}^{\varepsilon} ) & \geqslant (1-2^{-k}) \liminf _{\delta \downarrow 0}\liminf _{\varepsilon \downarrow 0} \mathbb {P}(\eta _z^{\varepsilon} ([\sigma _{z,w,\delta }^{\varepsilon} ,\sigma _{z,w,\delta }^{\varepsilon} +s])\\ &\quad \text{ separates } k \text{ squares in } \mathcal {S}_r \text{ from } z) \\ & \geqslant (1-2^{-k})\liminf _{\delta \downarrow 0} p_{\delta ,k,r} \\ & \geqslant (1-2^{-k}) p_{k,r}. \end{align*}$$
The lim inf $\liminf$ as r 0 $r\rightarrow 0$ of the left-hand side above is therefore greater than or equal to ( 1 2 k ) lim r 0 p k , r = ( 1 2 k ) $(1-2^{-k})\lim _{r\rightarrow 0}p_{k,r}=(1-2^{-k})$ for every k $k$ . Since k $k$ was arbitrary this concludes the proof of (3.9).
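The factor $(1-2^{-k})$ used above is just the elementary fact that at least one of $k$ independent fair coins shows the required outcome with probability $1-2^{-k}$. As a quick numerical sanity check of this step (purely illustrative, not part of the proof), one can enumerate the $2^k$ equally likely outcomes:

```python
from itertools import product

# P(at least one of k i.i.d. Bernoulli(1/2) order variables takes the
# required value): enumerate all 2^k equally likely outcomes and compare
# with the closed form 1 - 2^{-k}.
def at_least_one(k):
    outcomes = list(product([0, 1], repeat=k))
    favourable = sum(1 for omega in outcomes if any(omega))
    return favourable / len(outcomes)

for k in range(1, 11):
    assert abs(at_least_one(k) - (1 - 2.0 ** (-k))) < 1e-12
```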

Hence, to conclude the proof of the proposition, it suffices to justify (3.10). Although this is a statement purely about SLE, it turns out to be somewhat easier to prove using the connection with LQG in [18]. Thus we postpone the proof of (3.10) to Section 4.4, at which point we will have introduced the necessary objects and stated the relevant theorem of [18]. Let us emphasize that this proof will rely only on [18] and basic properties of LQG (and could be read immediately by someone already familiar with the theory) so it is safe from now on to treat Proposition 3.9 as being proved. $\Box$

3.3 Convergence of the partial order: Proof of Proposition 3.2

Recall that Proposition 3.2, stated at the very beginning of Section 3, asserts the joint convergence of the branching SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ and the collection of order variables to the limit
( ( D z ) z Q , ( O z , w ) z , w Q ) $$\begin{equation*} ((\mathbf {D}_z)_{z\in \mathcal {Q}}, (\mathcal {O}_{z,w})_{z,w\in \mathcal {Q}}) \end{equation*}$$
defined in Lemma 3.1. Completing the proof is now simply a case of putting together our previous results.

Proof of Proposition 3.2. The following three claims are the main ingredients.

Claim 1. ( D z ε ) z Q ( D z ) z Q $({\mathbf {D}}_z^{\varepsilon} )_{z\in \mathcal {Q}}\Rightarrow ({\mathbf {D}}_z)_{z\in \mathcal {Q}}$ .

Proof. This follows from Corollary 2.16, Proposition 3.9 and the fact that for every ε $\varepsilon$ and z , w Q $z,w\in \mathcal {Q}$ , D z ε ${\mathbf {D}}_z^{\varepsilon}$ and D w ε ${\mathbf {D}}_w^{\varepsilon}$ agree (up to time change) until z $z$ and w $w$ are separated, and then evolve independently. $\Box$

Claim 2. For any z , w Q $z, w\in \mathcal {Q}$ , ( D z ε , D w ε , O z , w ε ) ( D z , D w , O z , w ) $({\mathbf {D}}_z^{\varepsilon} , {\mathbf {D}}_w^{\varepsilon} , \mathcal {O}_{z,w}^{\varepsilon} )\Rightarrow ({\mathbf {D}}_z, {\mathbf {D}}_w, \mathcal {O}_{z,w})$ .

Proof. As usual, due to tightness, it is enough to show that any subsequential limit ( D z , D w , O ) $({\mathbf {D}}_z^*, {\mathbf {D}}_w^*, \mathcal {O}^*)$ of ( D z ε , D w ε , O z , w ε ) $({\mathbf {D}}_z^{\varepsilon} , {\mathbf {D}}_w^{\varepsilon} , \mathcal {O}_{z,w}^{\varepsilon} )$ , along a sequence ε n 0 $\varepsilon _n\downarrow 0$ , has the correct joint distribution. In fact, we may assume that

( D z ε n , D w ε n , σ z , w ε n , σ w , z ε n , O z , w ε n ) ( D z , D w , σ z , w , σ w , z , O ) $$\begin{equation*} ({\mathbf {D}}_z^{\varepsilon _n}, {\mathbf {D}}_w^{\varepsilon _n}, \sigma _{z,w}^{\varepsilon _n},\sigma _{w,z}^{\varepsilon _n},\mathcal {O}_{z,w}^{\varepsilon _n})\Rightarrow ({\mathbf {D}}_z^*, {\mathbf {D}}_w^*,\sigma _{z,w}^*,\sigma _{w,z}^*, \mathcal {O}^*) \end{equation*}$$
and verify the same statement, where by Proposition 3.9 and Claim 1, we already know that
( D z , D w , σ z , w , σ w , z ) = ( d ) ( D z , D w , σ z , w , σ w , z ) $$\begin{equation*} (\mathbf {D}_z^*,\mathbf {D}_w^*,\sigma _{z,w}^*,\sigma _{w,z}^*)\overset{(d)}{=} (\mathbf {D}_z,\mathbf {D}_w,\sigma _{z,w},\sigma _{w,z}) \end{equation*}$$
(in particular, D z $\mathbf {D}_z^*$ and D w $\mathbf {D}_w^*$ agree up to time reparameterization until z $z$ and w $w$ are separated at times σ z , w $\sigma _{z,w}^*$ , σ w , z $\sigma _{w,z}^*$ ).

Now, Proposition 3.4 implies that, given D z ${\mathbf {D}}_z^*$ and D w ${\mathbf {D}}_w^*$ stopped at times σ z , w , σ w , z $\sigma _{z,w}^*,\sigma _{w,z}^*$ , respectively, the conditional law of O $\mathcal {O}^*$ is fair Bernoulli. On the other hand, since

O z , w ε n , ( g σ z , w ε n [ D z ε n ] ( ( D z ε n ) s + σ z , w ε n ) ; s 0 ) and ( g σ w , z ε n [ D w ε n ] ( ( D w ε n ) s + σ w , z ε n ) ; s 0 ) $$\begin{equation*} \mathcal {O}^{\varepsilon _n}_{z,w}\, , \, (g_{\sigma _{z,w}^{\varepsilon _n}}[\mathbf {D}_{z}^{\varepsilon _n}]((\mathbf {D}_z^{\varepsilon _n})_{s+\sigma _{z,w}^{\varepsilon _n}}) \, ; \, s\geqslant 0) \text{ and } (g_{\sigma _{w,z}^{\varepsilon _n}}[\mathbf {D}_{w}^{\varepsilon _n}]((\mathbf {D}_w^{\varepsilon _n})_{s+\sigma _{w,z}^{\varepsilon _n}}) \, ; \, s\geqslant 0) \end{equation*}$$
are mutually independent for every n $n$ , it follows that O $\mathcal {O}^*$ is independent of
( g σ z , w [ D z ] ( ( D z ) s + σ z , w ) ; s 0 ) , ( g σ w , z [ D w ] ( ( D w ) s + σ w , z ) ; s 0 ) . $$\begin{equation*} (g_{\sigma _{z,w}^*}[\mathbf {D}^*_{z}]((\mathbf {D}_z)_{s+\sigma _{z,w}^*})\, ; \,s\geqslant 0)\, , \, (g_{\sigma _{w,z}^*}[\mathbf {D}^*_{w}]((\mathbf {D}_w)_{s+\sigma _{w,z}^*})\, ; \,s\geqslant 0). \end{equation*}$$
This provides the claim. $\Box$

Claim 3. For any z , w Q $z,w\in \mathcal {Q}$ , ( ( D y ε ) y Q , O z , w ε ) ( ( D y ) y Q , O z , w ) $(({\mathbf {D}}_y^{\varepsilon} )_{y\in \mathcal {Q}}, \mathcal {O}^{\varepsilon} _{z,w})\Rightarrow (({\mathbf {D}}_y)_{y\in \mathcal {Q}}, \mathcal {O}_{z,w})$ .

Proof. The same argument as for Claim 2 extends directly to this slightly more general setting (we omit the details). $\Box$

With Claim 1 in hand (and the argument proving Lemma 3.1) all we need to show is that for any subsequential limit in law ( ( D z ) z Q , ( O z , w ) z , w Q ) $(({\mathbf {D}}_z)_{z\in \mathcal {Q}}, (\mathcal {O}^*_{z,w})_{z,w\in \mathcal {Q}})$ of ( ( D z ε ) z Q , ( O z , w ε ) z , w Q ) $(({\mathbf {D}}_z^{\varepsilon} )_{z\in \mathcal {Q}}, (\mathcal {O}^{\varepsilon} _{z,w})_{z,w\in \mathcal {Q}})$ as ε 0 $\varepsilon \rightarrow 0$ , the conditional law of ( O z , w ) z , w Q $(\mathcal {O}^*_{z,w})_{z,w\in \mathcal {Q}}$ given ( D z ) z Q $({\mathbf {D}}_z)_{z\in \mathcal {Q}}$ satisfies the bullet points above Lemma 3.1. That is, (a) O z , z = 1 $\mathcal {O}^*_{z,z}=1$ for all z Q $z\in \mathcal {Q}$ ; (b) O z , w = 1 O w , z $\mathcal {O}^*_{z,w}=1-\mathcal {O}^*_{w,z}$ for all z , w Q $z,w\in \mathcal {Q}$ distinct; (c) O z , w $\mathcal {O}^*_{z,w}$ is (conditionally) Bernoulli ( 1 / 2 ) $(1/2)$ for any such z , w $z,w$ ; and (d) for all z , w 1 , w 2 Q $z,w_1,w_2\in \mathcal {Q}$ with z w 1 , w 2 $z\ne w_1, w_2$ , if D z ${\mathbf {D}}_z$ separates z $z$ from w 1 $w_1$ at the same time as it separates z $z$ from w 2 $w_2$ then O z , w 1 = O z , w 2 $\mathcal {O}^*_{z,w_1}=\mathcal {O}^*_{z,w_2}$ ; otherwise O z , w 1 $\mathcal {O}^*_{z,w_1}$ and O z , w 2 $\mathcal {O}^*_{z,w_2}$ are (conditionally) independent.

Observe that (a) and (b) follow by definition of the O z , w ε $\mathcal {O}_{z,w}^{\varepsilon}$ , and (c) follows from Claim 3. The first case of (d) also follows by definition, and the second follows from the definition of O z , w 1 ε , O z , w 2 ε $\mathcal {O}_{z,w_1}^{\varepsilon} , \mathcal {O}_{z,w_2}^{\varepsilon}$ together with the branching property of ( D z ε ) z Q $({\mathbf {D}}_z^{\varepsilon} )_{z\in \mathcal {Q}}$ and the convergence of the separation times. $\Box$
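Properties (a)-(d) describe independent fair coins indexed by the branch points of the exploration, read off consistently. A toy sketch of this combinatorial structure (the finite tree, its labels and the helper functions below are our own illustrative assumptions, not the paper's construction):

```python
import random

# Toy separation tree: internal nodes are (left, right) pairs, leaves are
# point labels.  Each internal node (a 'branch point') carries an
# independent fair coin deciding which of its two sides is visited first.
tree = ("z", ("w1", "w2"))  # the root separates z from {w1, w2}

def leaves(t):
    return {t} if isinstance(t, str) else leaves(t[0]) | leaves(t[1])

def order(t, coins, a, b, path=()):
    """Return 1 if a is visited before b, reading the coin at the
    branch point separating them (coin value 0 = left side first)."""
    left, right = t
    la, lb = a in leaves(left), b in leaves(left)
    if la and lb:
        return order(left, coins, a, b, path + (0,))
    if not la and not lb:
        return order(right, coins, a, b, path + (1,))
    return int(la == (coins[path] == 0))

rng = random.Random(0)
coins = {(): rng.randint(0, 1), (1,): rng.randint(0, 1)}

# (b) antisymmetry; first case of (d): the same branch point separates z
# from w1 and from w2, so the two order variables coincide.
assert order(tree, coins, "z", "w1") == 1 - order(tree, coins, "w1", "z")
assert order(tree, coins, "z", "w1") == order(tree, coins, "z", "w2")
```

Here the order of $w_1$ and $w_2$ is read from a different branch point, hence from an independent coin, matching the second case of (d).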

3.4 Joint convergence of SLE, CLE and the order variables

The results of Sections 2 and 3 give the final combined result:

Proposition 3.12.

( ( D z ε ) z Q , ( L z , i ε ) z Q , i 1 , ( B z , i ε ) z Q , i 1 , ( O z , w ε ) z , w Q ) ( ( D z ) z Q , ( L z , i ) z Q , i 1 , ( B z , i ) z Q , i 1 , ( O z , w ) z , w Q ) $$\begin{eqnarray*} & (({\mathbf {D}}^{\varepsilon} _z)_{z\in \mathcal {Q}},(\mathcal {L}^{\varepsilon} _{z,i})_{z\in \mathcal {Q}, i\geqslant 1}, (\mathcal {B}^{\varepsilon} _{z,i})_{z\in \mathcal {Q}, i\geqslant 1},(\mathcal {O}^{\varepsilon} _{z,w})_{z,w\in \mathcal {Q}} ) & \\ & \Rightarrow & \\ & (({\mathbf {D}}_z)_{z\in \mathcal {Q}},(\mathcal {L}_{z,i})_{z\in \mathcal {Q}, i\geqslant 1}, (\mathcal {B}_{z,i})_{z\in \mathcal {Q}, i\geqslant 1},(\mathcal {O}_{z,w})_{z,w\in \mathcal {Q}} )&{} \end{eqnarray*}$$
as ε 0 $\varepsilon \downarrow 0$ , with respect to the product topology
Q D z × Q × N Hausdorff × Q × N Carathéodory viewed from z × Q × Q discrete . $$\begin{equation*} \prod _\mathcal {Q}\mathcal {D}_z \times \prod _{\mathcal {Q}\times \mathbb {N}} \text{Hausdorff} \times \prod _{\mathcal {Q}\times \mathbb {N}} \text{Carath\'{e}odory viewed from } z \times \prod _{\mathcal {Q}\times \mathcal {Q}} \text{discrete}. \end{equation*}$$

Proof. Since we know that all the individual elements in the above tuples converge, the laws are tight in ε $\varepsilon$ . Combining Proposition 3.2 and Corollary 2.29 (in particular, using that ( L z , i ) z Q , i 1 , ( B z , i ) z Q , i 1 $(\mathcal {L}_{z,i})_{z\in \mathcal {Q}, i\geqslant 1}, (\mathcal {B}_{z,i})_{z\in \mathcal {Q}, i\geqslant 1}$ are deterministic functions of ( D z ) z Q $({\mathbf {D}}_z)_{z\in \mathcal {Q}}$ ) ensures that any subsequential limit has the correct law. $\Box$

4 LIOUVILLE QUANTUM GRAVITY AND MATING OF TREES

4.1 Liouville quantum gravity

Let D C $D\subset \mathbb {C}$ be a simply connected domain with harmonically non-trivial boundary. For f , g C ( D ) $f,g\in C^\infty (D)$ define the Dirichlet inner product by
( f , g ) = 1 2 π D f ( z ) · g ( z ) d 2 z . $$\begin{equation*} (f,g)_\nabla = \frac{1}{2\pi } \int _D \nabla f(z) \cdot \nabla g(z)\, d^2\hspace{-1.42271pt}z. \end{equation*}$$
Let H ( D ) $H(D)$ be the Hilbert space closure of the subspace of functions f C ( D ) $f\in C^\infty (D)$ for which ( f , f ) < $(f,f)_\nabla &lt;\infty$ , where we identify two functions that differ by a constant. Letting ( f n ) $(f_n)$ be an orthonormal basis for H ( D ) $H(D)$ , the free boundary Gaussian free field (GFF) h $h$ on D $D$ is defined by
h = n = 1 α n f n , $$\begin{equation*} h = \sum _{n=1}^{\infty } \alpha _n f_n, \end{equation*}$$
where ( α n ) $(\alpha _n)$ is a sequence of independent and identically distributed standard normal random variables and the convergence is almost sure in the space of generalized functions modulo constants. The free boundary GFF is only defined modulo additive constant here, but we remark that there are several natural ways to fix the additive constant, for example, by requiring that testing the field against a fixed test function gives zero. If this is done in an arbitrary way (that is, picking some arbitrary test function in the previous sentence) the resulting field almost surely lives in the space H loc 1 ( D ) $H^{-1}_{\text{loc}}(D)$ : this is the space of generalized functions whose restriction to any bounded domain U D $U\subset D$ is an element of the Sobolev space H 1 ( U ) $H^{-1}(U)$ ; see [11, 55] for more details.
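To illustrate the series construction in the simplest possible setting, here is a one-dimensional analogue (the cosine basis, the interval $(0,\pi)$ and the truncation level are our own illustrative choices, not from the paper): with the normalization $(f,g)_\nabla = \frac{1}{2\pi}\int f'g'$, the functions $f_n(x)=(2/n)\cos(nx)$ are orthonormal, and $h=\sum_n \alpha_n f_n$ gives a truncated sample of the corresponding free boundary field, defined modulo constants.

```python
import numpy as np

x = np.linspace(0.0, np.pi, 20001)
dx = x[1] - x[0]

def integrate(vals):
    # composite trapezoid rule on the grid x
    return float(np.sum(vals[:-1] + vals[1:]) * dx / 2.0)

def f_prime(n):
    # derivative of f_n(x) = (2/n) cos(n x)
    return -2.0 * np.sin(n * x)

def dirichlet(m, n):
    # (f_m, f_n) = (1/2pi) * integral of f_m' f_n'
    return integrate(f_prime(m) * f_prime(n)) / (2.0 * np.pi)

# the basis is orthonormal for the Dirichlet inner product
gram = np.array([[dirichlet(m, n) for n in range(1, 6)] for m in range(1, 6)])
assert np.allclose(gram, np.eye(5), atol=1e-5)

# truncated field h = sum alpha_n f_n with i.i.d. standard normals
rng = np.random.default_rng(0)
alpha = rng.standard_normal(100)
h = sum(a * (2.0 / n) * np.cos(n * x) for n, a in enumerate(alpha, start=1))
```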

Let S = R × ( 0 , π ) $\mathcal {S}=\mathbb {R}\times (0,\pi )$ denote the infinite strip. By, for example, [18, Lemma 4.3], H ( S ) $H(\mathcal {S})$ has an orthogonal decomposition H ( S ) = H 1 ( S ) H 2 ( S ) $H(\mathcal {S})=H_1(\mathcal {S})\oplus H_2(\mathcal {S})$ , where H 1 ( S ) $H_1(\mathcal {S})$ is the subspace of H ( S ) $H(\mathcal {S})$ consisting of functions (modulo constants) which are constant on vertical lines of the form u + [ 0 , i π ] $u+[0,\operatorname{i}\pi ]$ and H 2 ( S ) $H_2(\mathcal {S})$ is the subspace of H ( S ) $H(\mathcal {S})$ consisting of functions which have mean zero on all such vertical lines. This leads to a decomposition h = h 1 + h 2 $h=h_1+h_2$ of the free boundary GFF h $h$ on S $\mathcal {S}$ , where h 1 $h_1$ (respectively, h 2 $h_2$ ) is the projection of h $h$ onto H 1 ( S ) $H_1(\mathcal {S})$ (respectively, H 2 ( S ) $H_2(\mathcal {S})$ ). We call h 2 $h_2$ the lateral component of h $h$ .

Now let D C $D\subset \mathbb {C}$ be as before, and let h $\mathfrak {h}$ be an instance of the free-boundary GFF on D $D$ with the additive constant fixed in an arbitrary way. Set h = h + f $h=\mathfrak {h}+f$ , where f $f$ is a (possibly random) continuous function on D $D$ . For δ > 0 $\delta &gt;0$ and z D $z\in D$ let h δ ( z ) $h_\delta (z)$ denote the average of h $h$ on the circle B δ ( z ) $\partial B_\delta (z)$ if B δ ( z ) D $B_\delta (z)\subset D$ ; otherwise set h δ ( z ) = 0 $h_\delta (z)=0$ . For γ ( 2 , 2 ) $\gamma \in (\sqrt {2},2)$ and ε = 2 γ $\varepsilon =2-\gamma$ the field h $h$ induces an area measure μ h ε $\mu _h^{\varepsilon}$ on D $D$ , which is defined by the following limit in probability for any bounded open set A D $A\subseteq D$ :
μ h ε ( A ) = lim δ 0 ( 2 ε ) 1 A exp γ h δ ( z ) δ γ 2 / 2 d 2 z . $$\begin{equation*} \mu _h^{\varepsilon} (A) = \lim _{\delta \rightarrow 0} (2\varepsilon )^{-1}\int _A \exp {\left(\gamma h_\delta (z)\right)}\delta ^{\gamma ^2/2} \, d^2\hspace{-1.42271pt}z. \end{equation*}$$
Note that the definitions for ε > 0 $\varepsilon &gt;0$ differ by a factor of ( 2 ε ) 1 $(2\varepsilon )^{-1}$ from the definitions normally found in the literature. This is natural in the context of this paper, where we will be concerned with taking ε 0 $\varepsilon \downarrow 0$ (see below). Indeed, for γ = 2 $\gamma =2$ (which will correspond to the limit as ε 0 $\varepsilon \downarrow 0$ ) we define:
μ h ( A ) = lim δ 0 A h δ + log ( 1 / δ ) exp ( 2 h δ ( z ) ) δ 2 d 2 z . $$\begin{equation*} \mu _{h}(A) = \lim _{\delta \rightarrow 0} \int _A {\left(-h_\delta +\log (1/\delta )\right)}\exp (2h_\delta (z))\delta ^2 \, d^2\hspace{-1.42271pt}z. \end{equation*}$$
If f $f$ extends continuously to D $\partial D$ , boundary measures ν h ε $\nu ^{\varepsilon} _h$ and ν h $\nu _h$ can be defined similarly by
ν h ε ( A ) = lim δ 0 ( 2 ε ) 1 A exp γ 2 h δ ( z ) δ γ 2 / 4 d z , ν h ( A ) = lim δ 0 A h δ 2 + log ( 1 / δ ) δ exp ( h δ ( z ) ) d z . $$\begin{align*} \nu _{h}^{\varepsilon }(A) & = \lim _{\delta \rightarrow 0}\, (2\varepsilon )^{-1}\int _A \exp {\left(\frac{\gamma }{2}h_\delta (z)\right)}\delta ^{\gamma ^2/4} \, dz,\\ \nu _{h}(A) & = \lim _{\delta \rightarrow 0}\, \int _A {\left(-\frac{h_\delta }{2} +\log (1/\delta )\right)}\, \delta \, \exp (h_\delta (z)) \, dz. \end{align*}$$
See [9, 19, 48] for proofs of these facts.

A pair ( D , h ) $(D,h)$ defines a so-called γ $\gamma$ -LQG surface. More precisely, a γ $\gamma$ -LQG surface is an equivalence class of pairs ( D , h ) $(D,h)$ where D $D$ is as above and h $h$ is a distribution, and we define two pairs ( D 1 , h 1 ) $(D_1,h_1)$ and ( D 2 , h 2 ) $(D_2,h_2)$ to be equivalent if there is a conformal map ϕ : D 1 D 2 $\phi :D_1\rightarrow D_2$ such that
h 1 = h 2 ϕ + Q γ log | ϕ | , Q γ : = 2 / γ + γ / 2 . $$\begin{equation} h_1 = h_2\circ \phi +Q_\gamma \log |\phi ^{\prime }|,\qquad Q_\gamma :=2/\gamma +\gamma /2. \end{equation}$$ (4.1)
With this definition, if h 1 , h 2 $h_1,h_2$ are absolutely continuous with respect to a GFF plus a continuous function we have μ h 2 ε = ϕ ( μ h 1 ε ) $\mu _{h_2}^{\varepsilon} =\phi _*(\mu _{h_1}^{\varepsilon} )$ and ν h 2 ε = ϕ ( ν h 1 ε ) $\nu _{h_2}^{\varepsilon} =\phi _*(\nu _{h_1}^{\varepsilon} )$ for ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ . The analogous identities also hold for ε = 0 $\varepsilon =0$ .
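As a sanity check on the coordinate-change constant (our own illustration, not part of the argument): Q γ $Q_\gamma$ = 2/γ + γ/2 is minimized over γ > 0 at the critical value γ = 2, where Q 2 $Q_2$ = 2; this is consistent with the factor 2 log|ϕ′| used for the critical (γ = 2) disk in Definition 4.8 below.

```python
# Illustration only: the coordinate-change exponent Q_gamma = 2/gamma + gamma/2
# from (4.1) attains its minimum value 2 at the critical point gamma = 2.
def Q(gamma):
    return 2.0 / gamma + gamma / 2.0

assert abs(Q(2.0) - 2.0) < 1e-12          # Q_2 = 2 at criticality
for eps in (0.5, 0.1, 0.01):
    # subcritical exponents exceed 2 and decrease toward 2 as gamma -> 2
    assert Q(2.0) < Q(2.0 - eps) < Q(2.0 - 2.0 * eps)
```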

The LQG disk is an LQG surface of special interest, since it arises in scaling limit results concerning random planar maps, for example, [13, 24]. The following is our definition of the unit boundary length γ $\gamma$ -LQG disk in the subcritical case. Our field is equal to 2 γ 1 log ( 2 ε ) $-2\gamma ^{-1}\log (2\varepsilon )$ plus the field defined in, for example, [18]: this is because we want it to have boundary length 1 for our definition of ν h ε $\nu _h^{\varepsilon}$ (which is ( 2 ε ) 1 $(2\varepsilon )^{-1}$ times the usual one).

Definition 4.1. (Unit boundary length γ $\gamma$ -LQG disk for γ ( 2 , 2 ) $\gamma \in (\sqrt {2},2)$ )Let h 2 $h_2$ be a field on the strip S = R × ( 0 , i π ) $\mathcal {S}=\mathbb {R}\times (0,\operatorname{i}\pi )$ with the law of the lateral component of a free boundary GFF on S $\mathcal {S}$ . Let h 1 ε $h_1^{\varepsilon}$ be a function on S $\mathcal {S}$ such that h 1 ε ( s + i y ) = B s ε $h^{\varepsilon} _1(s+\operatorname{i}y)=\mathcal {B}^{\varepsilon} _s$ , where

  • (i) ( B s ε ) s 0 $(\mathcal {B}^{\varepsilon} _s)_{s\geqslant 0}$ has the law of B 2 s ( 2 / γ γ / 2 ) s $B_{2s}-(2/\gamma -\gamma /2)s$ conditioned to be negative for all time, for B $B$ a standard Brownian motion started from 0; and
  • (ii) ( B s ε ) s 0 $(\mathcal {B}^{\varepsilon} _{-s})_{s\geqslant 0}$ is independent of ( B s ε ) s 0 $(\mathcal {B}^{\varepsilon} _s)_{s\geqslant 0}$ and satisfies ( B s ε ) s 0 = d ( B s ε ) s 0 $(\mathcal {B}^{\varepsilon} _{-s})_{s\geqslant 0}\overset{d}{=}(\mathcal {B}^{\varepsilon} _s)_{s\geqslant 0}$ .
Set h s ε = h 1 ε + h 2 $h_{\operatorname{s}}^{\varepsilon} =h_1^{\varepsilon} +h_2$ and let h ̂ ε ${\widehat{h}^{\varepsilon} }$ be the distribution on S $\mathcal {S}$ whose law is given by
h s ε 2 γ 1 log ν h s ε ε ( S ) reweighted by ν h s ε ε ( S ) 4 / γ 2 1 . $$\begin{equation} h_{\operatorname{s}}^{\varepsilon} -2\gamma ^{-1}\log \nu ^{\varepsilon} _{h_{\operatorname{s}}^{\varepsilon} }(\partial \mathcal {S}) \qquad \text{reweighted\,\,by\,\,} \nu ^{\varepsilon} _{h^{\varepsilon} _{\operatorname{s}}}(\partial \mathcal {S})^{4/\gamma ^2-1}. \end{equation}$$ (4.2)
Then the surface defined by ( S , h ̂ ε ) $(\mathcal {S},{\widehat{h}^{\varepsilon} })$ has the law of a unit boundary length γ $\gamma$ -LQG disk.
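The conditioning in (i) is on a limit of small-probability events, but for intuition one can approximate B ε $\mathcal {B}^{\varepsilon}$ on a finite horizon by rejection sampling: simulate the drifted Brownian motion and keep the first discretized path that stays negative. The sketch below is our own illustration (the function names, horizon and step size are ours); the finite horizon is only a crude stand-in for conditioning to be negative for all time.

```python
import random

def drifted_walk(gamma, n, dt=1e-3):
    """One discretized path of s -> B_{2s} - (2/gamma - gamma/2)s."""
    drift = 2.0 / gamma - gamma / 2.0
    b, path = 0.0, [0.0]
    for _ in range(n):
        b += random.gauss(0.0, (2.0 * dt) ** 0.5) - drift * dt
        path.append(b)
    return path

def sample_conditioned(gamma, n=400, dt=1e-3, max_tries=100000):
    """Crude rejection sampler: return the first path staying negative
    after time 0, approximating the conditioning in (i) on [0, n*dt]."""
    for _ in range(max_tries):
        path = drifted_walk(gamma, n, dt)
        if all(b < 0.0 for b in path[1:]):
            return path
    raise RuntimeError("no accepted path; increase max_tries")

random.seed(3)
path = sample_conditioned(1.9)          # gamma close to the critical value 2
assert all(b < 0.0 for b in path[1:])
```

As ε ↓ 0 (γ ↑ 2) the drift vanishes, consistent with the appearance of ( 2 ) $(-\sqrt {2})$ times a three-dimensional Bessel process in Lemma 4.2 below.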

See [30, Definition 2.4 and Remark 2.5] for a proof that the above does correspond to 2 γ 1 log ( 2 ε ) $-2\gamma ^{-1}\log (2\varepsilon )$ plus the unit boundary length disk of [18]. Note that (see, for example, [18, Lemma 4.20]) ν h s ε ε ( S ) $\nu ^{\varepsilon}_{h_{\operatorname{s}}^{\varepsilon}} (\partial \mathcal {S})$ is finite for each fixed ε > 0 $\varepsilon >0$ , so that the above definition makes sense. In fact, we can say something stronger, namely Lemma 4.2. We remark that the power 1 / 17 $1/17$ in the lemma has not been optimized.

Lemma 4.2.There exists C ( 0 , ) $C\in (0,\infty )$ not depending on ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ such that

P [ ν h s ε ε ( S ) > x ] C x 1 / 17 for all x 1 . $$\begin{equation*} \mathbb {P}[ \nu _{h_{\operatorname{s}}^{\varepsilon} }^{\varepsilon} (\partial \mathcal {S})>x]\leqslant Cx^{-1/17} \text{ for all } x\geqslant 1. \end{equation*}$$
Moreover, for any fixed x $x$ , P [ ν h s ε ε ( ( ( , K ) ( K , ) ) × i { 0 , π } ) > x ] 0 $\mathbb {P}[ \nu _{h_{\operatorname{s}}^{\varepsilon} }^{\varepsilon} (((-\infty ,-K)\cup (K,\infty ))\times \operatorname{i}\lbrace 0,\pi \rbrace )>x]\rightarrow 0$ as K $K\rightarrow \infty$ , uniformly in ε $\varepsilon$ .

Finally, if h s $h_{\operatorname{s}}$ is defined in the same way as h s ε $h_{\operatorname{s}}^{\varepsilon}$ above but instead letting ( B s ) s 0 $(\mathcal {B}_s)_{s\geqslant 0}$ have the law of ( 2 ) $(-\sqrt {2})$ times a three-dimensional Bessel process, then we also have that

P [ ν h s ( S ) > x ] C x 1 / 17 for all x 1 . $$\begin{equation*} \mathbb {P}[ \nu _{h_{\operatorname{s}}}(\partial \mathcal {S})>x]\leqslant Cx^{-1/17} \text{ for all } x\geqslant 1. \end{equation*}$$

Proof.Let us first deal with the subcritical measures. In this case, we write

b k ε = ν h 2 ε ( [ k , k + 1 ] × { 0 , i π } ) $$\begin{equation*} b_k^{\varepsilon} =\nu _{h_2}^{\varepsilon} ([k,k+1]\times \lbrace 0,\operatorname{i}\pi \rbrace ) \end{equation*}$$
for k Z $k\in \mathbb {Z}$ . Then the law of b k ε $b_k^{\varepsilon}$ does not depend on k $k$ since the law of h 2 $h_2$ is translation invariant; see, for example, [11, Remark 5.48]. Furthermore, by [49, Theorem 1.1], E ( ( b 0 ε ) q ) $\mathbb {E}((b_0^{\varepsilon} )^q)$ is uniformly bounded in ε $\varepsilon$ for any q < 1 $q<1$ . (The result of [49] shows uniform boundedness of the moment for a field that differs from h 2 $h_2$ in [ 0 , 1 ] × { 0 } $[0,1]\times \lbrace 0\rbrace$ or [ 0 , 1 ] × { i π } $[0,1]\times \lbrace \operatorname{i}\pi \rbrace$ by a centered Gaussian function with uniformly bounded variance.) Letting a k ε = sup s [ k , k + 1 ] e ( γ / 2 ) B s ε $a_k^{\varepsilon} =\sup _{s\in [k,k+1]} e^{(\gamma /2) \mathcal {B}^{\varepsilon} _s}$ we then have that
ν h s ε ε ( S ) k Z a k ε b k ε . $$\begin{equation*} \nu _{h_{\operatorname{s}}^{\varepsilon} }^{\varepsilon} (\partial \mathcal {S})\leqslant \sum _{k\in \mathbb {Z}} a_k^{\varepsilon} b_k^{\varepsilon} . \end{equation*}$$
Thus, since k Z ( | k | 1 ) 2 < 10 $\sum _{k\in \mathbb {Z}} (|k|\vee 1)^{-2}&lt;10$ , a union bound gives
P [ ν h s ε ε ( S ) > x ] k Z P [ a k ε > x 1 / 2 ( | k | 1 ) 4 ] + P [ b k ε > 0.1 x 1 / 2 ( | k | 1 ) 2 ] . $$\begin{equation} \begin{split} \mathbb {P}[ \nu _{h_{\operatorname{s}}^{\varepsilon} }^{\varepsilon} (\partial \mathcal {S})>x] \leqslant \sum _{k\in \mathbb {Z}} {\left(\mathbb {P}[ a_k^{\varepsilon} > x^{1/2}(|k|\vee 1)^{-4} ]+ \mathbb {P}[ b_k^{\varepsilon} > 0.1x^{1/2}(|k|\vee 1)^{2} ]\right)}. \end{split} \end{equation}$$ (4.3)
Taking q = 3 / 4 $q=3/4$ (for example), using the uniform bound on E ( ( b k ε ) q ) $\mathbb {E}((b_k^{\varepsilon} )^q)$ and applying Chebyshev's inequality gives that k Z P [ b k ε > 0.1 x 1 / 2 ( | k | 1 ) 2 ] c 0 x 3 / 8 $ \sum _{k\in \mathbb {Z}} \mathbb {P}[ b_k^{\varepsilon} > 0.1x^{1/2}(|k|\vee 1)^{2} ] \leqslant c_0 x^{-3/8}$ for some universal constant c 0 $c_0$ . Furthermore, since B ε $\mathcal {B}^{\varepsilon}$ is stochastically dominated by ( 2 ) $(-\sqrt {2})$ times a three-dimensional Bessel process (see [35, Lemma 12.4]), we have that for ( Z ( t ) ) t 0 $(Z(t))_{t\geqslant 0}$ such a process and ( W ( t ) ) t 0 $(W(t))_{t\geqslant 0}$ a standard linear Brownian motion:
P [ a k ε > x 1 / 2 ( | k | 1 ) 4 ] P inf s [ k , k + 1 ] Z ( s ) < γ 1 log x 1 / 2 ( | k | 1 ) 4 P inf s [ k , k + 1 ] | W ( s ) | < γ 1 log x 1 / 2 ( | k | 1 ) 4 3 $$\begin{equation*} \begin{split} \mathbb {P}[ a_k^{\varepsilon} > x^{1/2}(|k|\vee 1)^{-4} ] &\leqslant \mathbb {P}{\left[ \inf _{s\in [k,k+1]} Z(s) < \gamma ^{-1} \log {\left(x^{-1/2}(|k|\vee 1)^{4}\right)} \right]}\\ &\leqslant \mathbb {P}{\left[ \inf _{s\in [k,k+1]} |W(s)| < \gamma ^{-1} \log {\left(x^{-1/2}(|k|\vee 1)^{4}\right)} \right]}^3 \end{split} \end{equation*}$$
for all x $x$ and k $k$ , where, to obtain the second inequality, we used that Z = d | ( W 1 , W 2 , W 3 ) | $Z\overset{d}{=}|(W_1,W_2,W_3)|$ for W 1 , W 2 , W 3 $W_1,W_2,W_3$ independent copies of W $W$ . The probability on the right side is 0 if | k | x 1 / 8 $|k|\leqslant x^{1/8}$ and otherwise it is bounded above by c 1 | k | 1 / 2 γ 1 log ( x 1 / 2 ( | k | 1 ) 4 ) $c_1|k|^{-1/2} \gamma ^{-1} \log (x^{-1/2}(|k|\vee 1)^{4})$ where c 1 $c_1$ is another universal constant. Therefore, for a final universal constant c 2 > 0 $c_2>0$ ,
k Z P [ a k ε > x 1 / 2 ( | k | 1 ) 4 ] 2 k Z : | k | > x 1 / 8 c 1 | k | 1 / 2 γ 1 log x 1 / 2 ( | k | 1 ) 4 3 c 2 x 1 / 17 . $$\begin{equation*} \sum _{k\in \mathbb {Z}}\mathbb {P}[ a_k^{\varepsilon} > x^{1/2}(|k|\vee 1)^{-4} ] \leqslant 2\sum _{k\in \mathbb {Z}\,:\,|k|>x^{1/8}} {\left(c_1|k|^{-1/2} \gamma ^{-1} \log {\left(x^{-1/2}(|k|\vee 1)^{4}\right)}\right)}^3 \leqslant c_2 x^{-1/17}. \end{equation*}$$
The same bounds yield the second statement of the lemma.

Finally, exactly the same proof works in the case of the critical measure, using [49, Section 1.1.1] to see that b k = ν h 2 ( [ k , k + 1 ] ) $b_k=\nu _{h_2}([k,k+1])$ has a finite q $q$ th moment, which does not depend on k $k$ by translation invariance. $\Box$
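Two elementary numerical inputs to this proof can be checked directly (our own illustration): the bound k Z ( | k | 1 ) 2 < 10 $\sum _{k\in \mathbb {Z}} (|k|\vee 1)^{-2}<10$ (the sum equals 1 + π²/3 ≈ 4.29), and the finiteness of the integral behind the last display, which after substituting k = x^{1/8}u and then u = e^t becomes a multiple of ∫₀^∞ t³e^{−t/2} dt = Γ(4)·2⁴ = 96, giving decay of order x^{−1/16} up to constants, stronger than the stated x^{−1/17}.

```python
import math

# (1) The constant in the union bound: sum over k in Z of (|k| v 1)^{-2}
#     equals 1 + 2 * zeta(2) = 1 + pi^2 / 3 ~ 4.29 < 10.
s = 1.0 + 2.0 * sum(k ** -2 for k in range(1, 100000))
assert abs(s - (1.0 + math.pi ** 2 / 3.0)) < 1e-3
assert s < 10.0

# (2) The integral behind the final display: substituting k = x^(1/8) u and
#     u = e^t in sum_{|k| > x^(1/8)} k^(-3/2) log(x^(-1/2) k^4)^3 produces
#     x^(-1/16) times a multiple of int_0^inf t^3 e^(-t/2) dt = Gamma(4)*2^4.
dt = 1e-3
integral = sum((n * dt) ** 3 * math.exp(-n * dt / 2.0) * dt
               for n in range(1, int(60.0 / dt)))
assert abs(integral - 96.0) < 0.1
# Since x^(-1/16) <= x^(-1/17) for x >= 1, the stated bound follows a fortiori.
```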

We may now define the critical unit boundary length LQG disk as follows.

Definition 4.3. (Unit boundary length 2-LQG disk)Letting h s $h_{\operatorname{s}}$ be as in Lemma 4.2 we define the unit boundary length 2-LQG disk to be the surface ( S , h ̂ ) $(\mathcal {S}, \widehat{h})$ , where

h ̂ : = h s log ν h s ( S ) . $$\begin{equation*} \widehat{h}:=h_{\operatorname{s}}-\log \nu _{h_{\operatorname{s}}}(\partial \mathcal {S}). \end{equation*}$$

Note that ν h s ( S ) $\nu _{h_{\operatorname{s}}}(\partial \mathcal {S})$ is finite by Lemma 4.2.

Remark 4.4.Readers may have previously encountered the above as the definition of a quantum disk with two marked boundary points. A quantum surface with k $k$ marked points is an equivalence class of ( D , h , x 1 , , x k ) $(D,h,x_1,\dots , x_k)$ with x 1 , , x k D ¯ $x_1,\dots , x_k\in \overline{D}$ , using the equivalence relation described by (4.1), but with the additional requirement that ϕ $\phi$ maps marked points to marked points. In this paper we will use Definitions 4.1 and 4.3 to define specific equivalence class representatives of quantum disks, but we will always consider them as quantum surfaces without any marked points. That is, we will consider their equivalence classes under the simple relation (4.1).

The following lemma says that the subcritical disk converges to the critical disk as ε 0 $\varepsilon \downarrow 0$ (equivalently, γ 2 $\gamma \uparrow 2$ ). We say that a sequence of measures ( μ ¯ n ) n N $(\bar{\mu }_n)_{n\in \mathbb {N}}$ on a metric space E $E$ (equipped with the Borel σ $\sigma$ -algebra) converges weakly to a measure μ ¯ $\bar{\mu }$ if for all A E $A\subseteq E$ such that μ ¯ ( A ) = 0 $\bar{\mu }(\partial A)=0$ we have μ ¯ n ( A ) μ ¯ ( A ) $\bar{\mu }_n(A)\rightarrow \bar{\mu }(A)$ .

Lemma 4.5.For ε > 0 $\varepsilon >0$ let h ̂ ε $\widehat{h}^{\varepsilon}$ be the field of Definition 4.1 and h ̂ $\widehat{h}$ be the field of Definition 4.3. Then ( h ̂ ε , μ h ̂ ε ε , ν h ̂ ε ε ) ( h ̂ , μ h ̂ , ν h ̂ ) $(\widehat{h}^{\varepsilon} ,\mu ^{\varepsilon} _{\widehat{h}^{\varepsilon} },\nu ^{\varepsilon} _{\widehat{h}^{\varepsilon} })\Rightarrow (\widehat{h},\mu _{\widehat{h}},\nu _{\widehat{h}})$ , where the first coordinate is equipped with the H loc 1 ( S ) $H^{-1}_{\mathrm{loc}}(\mathcal {S})$ topology and the second and third coordinates are equipped with the weak topology of measures on S $\mathcal {S}$ and S $\partial \mathcal {S}$ , respectively.

Proof.To conclude it is sufficient to prove the following, for an arbitrary sequence ε n 0 $\varepsilon _n\downarrow 0$ :

  • (i) we have convergence in law along the sequence ε n ${\varepsilon _n}$ if we replace h ̂ $\widehat{h}$ by h s $h_{\operatorname{s}}$ , and h ̂ ε n $\widehat{h}^{\varepsilon _n}$ by h s ε n $h_{\operatorname{s}}^{\varepsilon _n}$ for every n $n$ ; and
  • (ii) there exists a coupling of the ( ν h s ε n ) $(\nu _{h^{\varepsilon _n}_{\operatorname{s}}})$ such that ν h s ε n ε n ( S ) 4 / γ 2 1 1 $\nu _{h^{\varepsilon _n}_{\operatorname{s}}}^{\varepsilon _n}(\partial \mathcal {S})^{4/\gamma ^2-1}\rightarrow 1$ in L 1 $L^1$ as n $n\rightarrow \infty$ .
To see (i), first observe that the processes B ε $\mathcal {B}^{\varepsilon}$ converge to B $\mathcal {B}$ in law as ε 0 $\varepsilon \rightarrow 0$ , with respect to the topology of uniform convergence on compacts of time. Indeed for any fixed δ > 0 $\delta >0$ , if T δ ε $T_\delta ^{\varepsilon}$ (respectively, T δ $T_\delta$ ) is the first time that B ε $\mathcal {B}^{\varepsilon}$ (respectively, B $\mathcal {B}$ ) hits δ $-\delta$ , it is easy to see that B ε ( · + T δ ε ) $\mathcal {B}^{\varepsilon} (\cdot +T_\delta ^{\varepsilon} )$ converges to B ( · + T δ ) $\mathcal {B}(\cdot +T_\delta )$ in law in the specified topology as ε 0 $\varepsilon \rightarrow 0$ : a consequence of the fact that the drift coefficient in B ε $\mathcal {B}^{\varepsilon}$ goes to 0, and by applying the Markov property at time T δ ε , T δ $T_\delta ^{\varepsilon} , T_\delta$ . Moreover, T δ , T δ ε $T_\delta , T_\delta ^{\varepsilon}$ converge to 0 in probability as δ 0 $\delta \rightarrow 0$ , uniformly in ε $\varepsilon$ : this is true since T δ , T δ ε $T_\delta ,T_\delta ^{\varepsilon}$ are stochastically dominated by their counterparts for non-conditioned (drifted) Brownian motion, and the result plainly holds for the non-conditioned versions. Combining these observations yields the assertion.

We may therefore couple h s ε n ${h_{\operatorname{s}}^{\varepsilon _n}}$ and h s $h_{\operatorname{s}}$ so that their lateral components are identical, and the components that are constant on vertical lines converge almost surely on compacts as n $n\rightarrow \infty$ . For this coupling, the result of [6] implies that

ν h s ε n ε n ( A ) ν h s ( A ) and μ h s ε n ε n ( U ) μ h s ( U ) $$\begin{equation} \nu ^{\varepsilon _n}_{h_{\operatorname{s}}^{\varepsilon _n}}(A)\rightarrow \nu _{h_{\operatorname{s}}}(A) \text{ and } \mu ^{\varepsilon _n}_{h_{\operatorname{s}}^{\varepsilon _n}}(U)\rightarrow \mu _{h_{\operatorname{s}}}(U)\end{equation}$$ (4.4)
 in probability as n $n\rightarrow \infty$ , for any bounded subsets A S $A\subset \partial \mathcal {S}$ and U S $U\subset \mathcal {S}$ . More precisely, [6, Sections 4.1.1 and 4.1.2] proves that ν h ε n ( A ) ν h ( A ) $\nu _{h}^{\varepsilon _n}(A)\rightarrow \nu _h(A)$ , when h $h$ is a specific field on S $\mathcal {S}$ that differs from h s $h_{\operatorname{s}}$ by a bounded continuous function on A $A$ (similarly for μ $\mu$ ). Since adding a continuous function f $f$ to h $h$ modifies the boundary measure locally by exp ( ( γ / 2 ) f ) $\exp ((\gamma /2)f)$ and the bulk measure by exp ( γ f ) $\exp (\gamma f)$ we deduce (4.4). To conclude that
( h s ε n , ν h s ε n ε n , μ h s ε n ε n ) ( h s , ν h s , μ h s ) $$\begin{equation*} (h_{\operatorname{s}}^{\varepsilon _n},\nu ^{\varepsilon _n}_{h_{\operatorname{s}}^{\varepsilon _n}}, \mu ^{\varepsilon _n}_{h_{\operatorname{s}}^{\varepsilon _n}})\rightarrow (h_{\operatorname{s}},\nu _{h_{\operatorname{s}}},\mu _{h_{\operatorname{s}}}) \end{equation*}$$
in probability for this coupling (with the correct topology), and thus complete the proof of (i), it remains to show that ν h s ε n ε n ( S ) ν h s ( S ) $\nu _{h_{\operatorname{s}}^{\varepsilon _n}}^{\varepsilon _n}(\partial \mathcal {S})\rightarrow \nu _{h_{\operatorname{s}}}(\partial \mathcal {S})$ and μ h s ε n ε n ( S ) μ h s ( S ) $\mu _{h_{\operatorname{s}}^{\varepsilon _n}}^{\varepsilon _n}(\mathcal {S})\rightarrow \mu _{h_{\operatorname{s}}}(\mathcal {S})$ in probability as n $n\rightarrow \infty$ . For this, we use the second assertion of Lemma 4.2 together with the fact that ν h s ( S ) = lim K ν h s ( ( K , K ) × i { 0 , π } ) $\nu _{h_{\operatorname{s}}}(\partial \mathcal {S})=\lim _{K\rightarrow \infty } \nu _{h_{\operatorname{s}}}((-K,K)\times \operatorname{i}\lbrace 0,\pi \rbrace )$ by definition. Combining with (4.4) yields the desired conclusion for the boundary measures. A similar argument can be applied for the bulk measures, where we may use, for example, [2, Theorem 1.2; 4, Theorem 1.2] to get the uniform q $q$ th moment bound for q < 1 $q<1$ as in the proof of Lemma 4.2.

For (ii), first observe that

ν h s ε n ε n ( S ) 4 / γ 2 1 1 $$\begin{equation*} \nu _{h^{\varepsilon _n}_{\operatorname{s}}}^{\varepsilon _n}(\partial \mathcal {S})^{4/\gamma ^2-1}\Rightarrow 1 \end{equation*}$$
in law since
4 / γ 2 1 0 and ν h s ε n ε n ( S ) ν h s ( S ) . $$\begin{equation*} 4/\gamma ^2-1\rightarrow 0 \text{ and } \nu _{h^{\varepsilon _n}_{\operatorname{s}}}^{\varepsilon _n}(\partial \mathcal {S})\rightarrow \nu _{h_{\operatorname{s}}}(\partial \mathcal {S}). \end{equation*}$$
Furthermore, Lemma 4.2 gives the uniform integrability of ν h s ε ε ( S ) 4 / γ 2 1 $\nu _{h^{\varepsilon} _{\operatorname{s}}}^{\varepsilon} (\partial \mathcal {S})^{4/\gamma ^2-1}$ in ε $\varepsilon$ . Combining these two results we get (ii). $\Box$

Remark 4.6.We reiterate that μ h ̂ ( S ) < $\mu _{\widehat{h}}(\mathcal {S})<\infty$ and ν h ̂ ( S ) = 1 $\nu _{\widehat{h}}(\partial \mathcal {S})=1$ almost surely. Moreover, we have the convergence μ h ̂ ε ε ( S ) μ h ̂ ( S ) < $\mu _{\widehat{h}^{\varepsilon} }^{\varepsilon} (\mathcal {S})\Rightarrow \mu _{\widehat{h}}(\mathcal {S})<\infty$ as ε 0 $\varepsilon \rightarrow 0$ .

Remark 4.7.For b > 0 $b>0$ we define the b $b$ -boundary length disk to be a surface with the law of ( S , h b ) $(\mathcal {S},h^b)$ , where h b = h + 2 γ 1 log ( b ) $h^b=h+2\gamma ^{-1}\log (b)$ for h $h$ as in Definition 4.1 or 4.3. Lemma 4.5 also holds if we assume all the disks are b $b$ -boundary length disks.

The fields that appear in the statement of our main theorem are defined as follows.

Definition 4.8.We define fields h ε $h^{\varepsilon}$ (respectively, h $h$ ) to be parameterizations of unit boundary length γ $\gamma$ -LQG disks (respectively, the 2-LQG disk) by D $\mathbb {D}$ instead of S $\mathcal {S}$ . More specifically, we take ϕ : D S $\phi :\mathbb {D}\rightarrow \mathcal {S}$ to be the conformal map from D $\mathbb {D}$ to S $\mathcal {S}$ that sends 1 , 1 , i $1,-1,\operatorname{i}$ to + , , i π $+\infty ,-\infty ,\operatorname{i}\pi$ , respectively. Then we set

h ε = h ̂ ε ϕ + Q γ log | ϕ | and h = h ̂ ϕ + 2 log | ϕ | , $$\begin{equation*} h^{\varepsilon} =\widehat{h}^{\varepsilon} \circ \phi +Q_\gamma \log |\phi ^{\prime }| \text{ and } h=\widehat{h} \circ \phi +2\log |\phi ^{\prime }|, \end{equation*}$$
where h ̂ ε $\widehat{h}^{\varepsilon}$ (respectively, h ̂ $\widehat{h}$ ) is the field in the strip S $ \mathcal {S}$ corresponding to Definition 4.1 (respectively, Definition 4.3).

Remark 4.9.Lemma 4.5 clearly also implies the convergence

( h ε , μ h ε ε , ν h ε ε ) ( h , μ h , ν h ) $$\begin{equation*} (h^{\varepsilon} ,\mu ^{\varepsilon} _{h^{\varepsilon} }, \nu _{h^{\varepsilon} }^{\varepsilon} )\Rightarrow (h,\mu _h,\nu _h) \end{equation*}$$
as ε 0 $\varepsilon \rightarrow 0$ (with respect to H loc 1 ( D ) ${H^{-1}_{\mathrm{loc}}(\mathbb {D})}$ convergence in the first coordinate, and weak convergence of measures on D , D $\mathbb {D},\partial \mathbb {D}$ in the final coordinates).

In fact, it implies the convergence of various embeddings of quantum disks. Of particular use to us will be the following:

Lemma 4.10.Suppose that for each ε $\varepsilon$ , h ̂ ε $\widehat{h}^{\varepsilon}$ is as in Remark 4.7 for some b > 0 $b>0$ and that h ε $\widetilde{h}^{\varepsilon}$ is defined by choosing a point z ε $z^{\varepsilon}$ from μ h ̂ ε ε $\mu ^{\varepsilon} _{\widehat{h}^{\varepsilon} }$ in S $\mathcal {S}$ , defining ψ ε : S D $\psi ^{\varepsilon} :\mathcal {S}\rightarrow \mathbb {D}$ conformal such that ψ ε ( z ε ) = 0 $\psi ^{\varepsilon} (z^{\varepsilon} )=0$ and ( ψ ε ) ( z ε ) > 0 $(\psi ^{\varepsilon} )^{\prime }(z^{\varepsilon} )>0$ , and setting

h ε : = h ̂ ε ( ψ ε ) 1 + Q γ log | ( ( ψ ε ) 1 ) | . $$\begin{equation*} \widetilde{h}^{\varepsilon} := \widehat{h}^{\varepsilon} \circ (\psi ^{\varepsilon} )^{-1}{+Q_\gamma \log |((\psi ^{\varepsilon} )^{-1})^{\prime }|.} \end{equation*}$$

Suppose similarly that ( h , μ ) $(\widetilde{h}, \widetilde{\mu })$ is defined by taking the field h ̂ $\widehat{h}$ in Remark 4.7 with the same b > 0 $b>0$ , picking a point z $z$ from μ h ̂ $\mu _{\widehat{h}}$ ; taking ψ : S D $\psi :\mathcal {S}\rightarrow \mathbb {D}$ conformal with ψ ( z ) > 0 $\psi ^{\prime }(z)>0$ and ψ ( z ) = 0 $\psi (z)=0$ ; and setting

h = h ̂ ψ 1 + 2 log | ( ψ 1 ) | , μ = μ h . $$\begin{equation*} \widetilde{h}=\widehat{h}\circ \psi ^{-1} {+2\log |(\psi ^{-1})^{\prime }|}\, , \, \widetilde{\mu }=\mu _{\widetilde{h}}. \end{equation*}$$

Then as ε 0 $\varepsilon \rightarrow 0$ , we have that

( h ε , μ h ε ε ) ( h , μ ) . $$\begin{equation*} (\widetilde{h}^{\varepsilon} ,\mu ^{\varepsilon} _{\widetilde{h}^{\varepsilon} })\Rightarrow (\widetilde{h}, \widetilde{\mu }). \end{equation*}$$
Moreover, for any m > 0 $m>0$
P ( μ h ε ε ( D ( 1 δ ) D ) > m ) 0 as δ 0 $$\begin{equation} \mathbb {P}(\mu ^{\varepsilon} _{\widetilde{h}^{\varepsilon} }(\mathbb {D}\setminus (1-\delta ) \mathbb {D})>m) \rightarrow 0 \text{ as } \delta \rightarrow 0 \end{equation}$$ (4.5)
uniformly in ε $\varepsilon$ . This convergence is also uniform over b [ 0 , C ] $b\in [0,C]$ for any 0 < C < $0<C<\infty$ .

Proof.We assume that b = 1 $b=1$ ; the result for other b $b$ and the uniform convergence in (4.5) follow immediately from the definition in Remark 4.7.

The proof then follows from Lemma 4.5. We take a coupling where the convergence is almost sure: in particular, the fields h ̂ ε $\widehat{h}^{\varepsilon}$ converge almost surely to h ̂ $\widehat{h}$ in H loc 1 ( S ) $H^{-1}_{{\mathrm{loc}}}(\mathcal {S})$ and the measures μ h ̂ ε ε $\mu _{\widehat{h}^{\varepsilon} }^{\varepsilon}$ converge weakly almost surely to μ h ̂ $\mu _{\widehat{h}}$ in S $\mathcal {S}$ . This means that we can sample a sequence of z ε $z^{\varepsilon}$ from the μ h ̂ ε ε $\mu _{\widehat{h}^{\varepsilon} }^{\varepsilon}$ and z $z$ from μ h ̂ $\mu _{\widehat{h}}$ , such that z ε z S $z^{\varepsilon} \rightarrow z\in \mathcal {S}$ almost surely. Since z S $z\in \mathcal {S}$ is at positive distance from S $\partial \mathcal {S}$ , this implies that the conformal maps ψ ε $\psi ^{\varepsilon}$ converge to ψ $\psi$ almost surely on compacts of S $\mathcal {S}$ and therefore that h ε h $\widetilde{h}^{\varepsilon} \rightarrow \widetilde{h}$ in H loc 1 ( D ) $H^{-1}_{{\mathrm{loc}}}(\mathbb {D})$ and μ h ε ε $\mu _{\widetilde{h}^{\varepsilon} }^{\varepsilon}$ converges weakly to μ $\widetilde{\mu }$ . Finally, (4.5) follows from the convergence proved above, and the fact that it holds for the limit measure μ h $\mu _{\widetilde{h}}$ . $\Box$

Later, we will also need to consider fields obtained from the field h ε $\widetilde{h}^{\varepsilon}$ of Lemma 4.10 via a random rotation. For this purpose, we record the following remark.

Remark 4.11.Suppose that h n $h_n$ are a sequence of fields coupled with some rotations θ n $\theta _n$ such that h ¯ n = h n θ n 2 γ n 1 log ν h n ( D ) $\bar{h}_n=h_n\circ \theta _n-2\gamma _n^{-1}\log \nu _{h_n}(\partial \mathbb {D})$ has the law of h ε n $\widetilde{h}^{\varepsilon _n}$ from Lemma 4.10 with b = 1 $b=1$ , for some ε n 0 ${\varepsilon _n}\downarrow 0$ , γ n = γ ( ε n ) $\gamma _n=\gamma (\varepsilon _n)$ . Suppose further that ( h n , ν h n ( D ) , μ h n ( D ) ) ( h , ν , μ ) $(h_n, \nu _{h_n}(\partial \mathbb {D}),\mu _{h_n}(\mathbb {D}))\Rightarrow (h,\nu ^*, \mu ^*)$ in H loc 1 ( D ) × R × R $H^{-1}_{{\mathrm{loc}}}(\mathbb {D})\times \mathbb {R}\times \mathbb {R}$ as n $n\rightarrow \infty$ . Then ν = ν h ( D ) $\nu ^*=\nu _h(\partial \mathbb {D})$ and μ = μ h ( D ) $\mu ^*=\mu _h(\mathbb {D})$ almost surely. Indeed, ( h n , ν h n ( D ) , μ h n ( D ) , θ n , h ¯ n ) $(h_n, \nu _{h_n}(\partial \mathbb {D}),\mu _{h_n}(\mathbb {D}),\theta _n,\bar{h}_n)$ is tight in n $n$ , and any subsequential limit ( h , ν , μ , θ , h ¯ ) $(h,\nu ^*,\mu ^*,\theta ,\bar{h})$ has ( h , ν , μ ) $(h,\nu ^*,\mu ^*)$ coupled as above. Since μ h n ( A ) = ( ν h n ( D ) ) 2 μ h ¯ n ( θ n 1 ( A ) ) $\mu _{h_n}(A)=(\nu _{h_n}(\partial \mathbb {D}))^2\mu _{\bar{h}_n}(\theta _n^{-1}(A))$ for every n $n$ and A D $A\subset \mathbb {D}$ it follows from Lemma 4.10 that μ = ( ν ) 2 μ h ¯ ( D ) $\mu ^*=(\nu ^*)^2\mu _{\bar{h}}(\mathbb {D})$ and ν h ¯ ( D ) = 1 $\nu _{\bar{h}}(\partial \mathbb {D})=1$ almost surely. On the other hand, it is not hard to see that h ¯ $\bar{h}$ must be equal to h θ log ν $h\circ \theta -\log \nu ^*$ almost surely, which implies the result.

4.2 Mating of trees

The mating of trees theory [18] provides a powerful encoding of LQG and SLE in terms of Brownian motion. We will state the version in the unit disk D $\mathbb {D}$ below.

Let α ( 1 , 1 ) $\alpha \in (-1,1)$ and let Z ( c ) $Z^{(c)}$ be c $c$ times a standard planar Brownian motion with correlation α $\alpha$ , started from (1,0) or (0,1). Condition on the event that Z ( c ) $Z^{(c)}$ first leaves the first quadrant at the origin (0,0); this is a zero probability event but can be made sense of via a limiting procedure; see, for example, [2, Proposition 4.2]. We call the resulting conditioned process (restricted until the time at which the process first leaves the first quadrant) a Brownian cone excursion with correlation α $\alpha$ . Note that we use the same terminology for the resulting process for any c $c$ and either choice of (1,0) or (0,1) for the starting point.
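For intuition, the limiting procedure can be mimicked by rejection sampling: run a discretized correlated planar Brownian motion from (0,1) and accept the path only if its first exit from the first quadrant falls within distance δ of the origin. The sketch below is our own illustration (the step size, tolerance δ and cap on the path length are ours); it is an approximation, not the construction of [2].

```python
import math
import random

def cone_excursion_approx(alpha, c=1.0, dt=0.01, delta=0.3, max_tries=200000):
    """Rejection sampler for an approximate Brownian cone excursion with
    correlation alpha, started from (0, 1): accept a path only if its first
    exit from the first quadrant lies within distance delta of the origin."""
    step = c * math.sqrt(dt)
    for _ in range(max_tries):
        x, y = 0.0, 1.0
        path = [(x, y)]
        while x >= 0.0 and y >= 0.0 and len(path) < 4000:
            g1 = random.gauss(0.0, 1.0)
            g2 = random.gauss(0.0, 1.0)
            x += step * g1
            y += step * (alpha * g1 + math.sqrt(1.0 - alpha ** 2) * g2)
            path.append((x, y))
        if (x < 0.0 or y < 0.0) and x * x + y * y < delta ** 2:
            return path
    raise RuntimeError("no excursion accepted; loosen delta or add tries")

random.seed(2)
path = cone_excursion_approx(0.5)
assert all(px >= 0.0 and py >= 0.0 for px, py in path[:-1])
assert path[-1][0] ** 2 + path[-1][1] ** 2 < 0.3 ** 2
```

Shrinking dt and delta (at an increasing cost in rejected paths) approximates the conditioned excursion more closely.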

To state the mating of trees theorem (disk version) we first introduce some notation. Let ( D , h ε ) $(\mathbb {D},h^{\varepsilon} )$ denote a unit boundary length γ $\gamma$ -LQG disk for γ ( 2 , 2 ) $\gamma \in (\sqrt {2},2)$ , embedded as described in Definition 4.8. Let η ε $\eta ^{\varepsilon}$ denote a space-filling SLE κ $_{\kappa ^{\prime }}$ in D $\mathbb {D}$ , starting and ending at 1, which is independent of h ε $h^{\varepsilon}$ . Recall that this is defined from a branching SLE κ $_{\kappa ^{\prime }}$ as described in Section 2.1.7, where the branch targeted toward z D $z\in \mathbb {D}$ is denoted by η z ε $\eta _z^{\varepsilon}$ (one can obtain η z ε $\eta _z^{\varepsilon}$ from η ε $\eta ^{\varepsilon}$ by deleting time intervals on which η ε $\eta ^{\varepsilon}$ is exploring regions of D $\mathbb {D}$ that have been disconnected from z $z$ ). Parameterize η ε $\eta ^{\varepsilon}$ by the area measure μ h ε ε $\mu ^{\varepsilon} _{h^{\varepsilon} }$ induced by h ε $h^{\varepsilon}$ . Let Z ε = ( L ε , R ε ) $Z^{\varepsilon} =(L^{\varepsilon} ,R^{\varepsilon} )$ denote the process started at (0,1) and ending at (0,0) which encodes the evolution of the left-hand side and right-hand side boundary lengths of η ε $\eta ^{\varepsilon}$ ; see Figure 9.

Figure 9. The left-hand side figure is an illustration of the branch of a space-filling SLE κ $_{\kappa ^{\prime }}$ ( κ > 4 ${\kappa ^{\prime }}>4$ ) toward some point z D $z\in \mathbb {D}$ , stopped at some time before it reaches z $z$ . The space-filling SLE itself will fill in the monocolored components that are separated from z $z$ as it creates them, so if t $t$ is equal to the total γ $\gamma$ -LQG area of the gray-shaded region on the right-hand side figure, then the space-filling SLE has visited precisely this gray region at time t $t$ . We then define the left (respectively, right) boundary length of the space-filling SLE at time t $t$ to be the γ $\gamma$ -LQG boundary length of the red (respectively, blue) curve shown on the right-hand side figure.

The following theorem follows essentially from [18]. For precise statements, see [40, Theorem 2.1] for the law of Z ε $Z^{\varepsilon}$ and see [40, Theorem 7.3] for the law of the monocolored components.

Theorem 4.12. ([[18, 40]])In the setting above, Z ε $Z^{\varepsilon}$ has the law of a Brownian cone excursion with correlation cos ( π γ 2 / 4 ) $-\cos (\pi \gamma ^2/4)$ . The pair ( h ε , η ε ) $(h^{\varepsilon} ,\eta ^{\varepsilon} )$ is measurable with respect to the σ $\sigma$ -algebra generated by Z ε $Z^{\varepsilon}$ . Furthermore, if z $z$ is sampled from μ h ε ε $\mu _{h^{\varepsilon} }^{\varepsilon}$ renormalized to be a probability measure, then the monocolored complementary components of η z ε $\eta ^{\varepsilon} _z$ define independent γ $\gamma$ -LQG disks conditioned on their γ $\gamma$ -LQG boundary lengths and areas, that is, if we condition on the ordered sequence of boundary lengths and areas of the monocolored domains U $U$ disconnected from z $z$ by η z ε $\eta ^{\varepsilon} _z$ then the corresponding LQG surfaces ( U , h | U ) $(U,h|_U)$ are independent γ $\gamma$ -LQG disks with the given boundary lengths and areas.

Remark 4.13.In fact, we now know from [4] that the variance c 2 $c^2$ of the Brownian motion from which the law of Z ε $Z^{\varepsilon}$ can be constructed is equal to 1 / ( ε sin ( π γ 2 / 4 ) ) $1/(\varepsilon \sin (\pi \gamma ^2/4))$ , where γ = γ ( ε ) = 2 ε $\gamma =\gamma (\varepsilon )=2-\varepsilon$ . In particular, the variance is of order ε 2 $\varepsilon ^{-2}$ .

For each fixed z D $z\in \mathbb {D}$ there is a natural parameterization of η z ε $\eta ^{\varepsilon} _z$ called its quantum natural parameterization which is defined in terms of Z ε $Z^{\varepsilon}$ as follows. First define t = inf { t 0 : η ε ( t ) = z } $\mathfrak {t}=\inf \lbrace t\geqslant 0\,:\,\eta ^{\varepsilon} (t)=z \rbrace$ to be the time at which η ε $\eta ^{\varepsilon}$ first hits z $z$ . Then let I ε , t $\mathcal {I}^{\varepsilon ,\mathfrak {t}}$ denote the set of s [ 0 , t ] $s\in [0,\mathfrak {t}]$ for which we cannot find a cone excursion J [ 0 , t ] $J\subset [0,\mathfrak {t}]$ (that is, J = [ t 1 , t 2 ] [ 0 , t ] $J=[t_1,t_2]\subset [0,\mathfrak {t}]$ such that ( L s ε , R s ε ) ( L t 2 ε , R t 2 ε ) $(L^{\varepsilon} _s,R^{\varepsilon} _s)\geqslant (L^{\varepsilon} _{t_2},R^{\varepsilon} _{t_2})$ on J $J$ , and either L t 1 ε = L t 2 ε $L^{\varepsilon} _{t_1}=L^{\varepsilon} _{t_2}$ or R t 1 ε = R t 2 ε $R^{\varepsilon} _{t_1}=R^{\varepsilon} _{t_2}$ ) such that s J $s\in J$ . We call the times in I ε , t $\mathcal {I}^{\varepsilon ,\mathfrak {t}}$ ancestor-free times relative to time t $\mathfrak {t}$ . It is possible to show (see [18, Section 1.4.2]) that the local time of I ε , t $\mathcal {I}^{\varepsilon ,\mathfrak {t}}$ is well defined. Let ( t ε , t ) t 0 $(\ell ^{\varepsilon ,\mathfrak {t}}_t)_{t\geqslant 0}$ denote the increasing function describing the local time of I ε , t $\mathcal {I}^{\varepsilon ,\mathfrak {t}}$ such that 0 ε , t = 0 $\ell ^{\varepsilon ,\mathfrak {t}}_0=0$ and t ε , t = t ε , t $\ell ^{\varepsilon ,\mathfrak {t}}_t= \ell ^{\varepsilon ,\mathfrak {t}}_{\mathfrak {t}}$ for t t $t\geqslant \mathfrak {t}$ . Then let T t ε , t $T^{\varepsilon ,\mathfrak {t}}_t$ for t [ 0 , t ε , t ] $t\in [0,\ell _{\mathfrak {t}}^{\varepsilon ,\mathfrak {t}}]$ denote the right-continuous inverse of ε , t $\ell ^{\varepsilon ,\mathfrak {t}}$ .
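In a discrete caricature (our own illustration), writing (L, R) for the two coordinates of Z ε $Z^{\varepsilon}$ , the cone-excursion condition can be checked by brute force, which makes the definition of the ancestor-free times concrete. Integer-valued walks are used so that the endpoint equalities at t 1 $t_1$ can actually occur:

```python
def ancestor_free_times(L, R, t):
    """Brute-force discrete analogue of the ancestor-free times relative to t:
    indices s <= t not contained in any interval [t1, t2] on which (L, R)
    stays coordinatewise >= (L[t2], R[t2]) and L[t1] == L[t2] or
    R[t1] == R[t2] holds (a discrete 'cone excursion')."""
    covered = [False] * (t + 1)
    for t2 in range(1, t + 1):
        for t1 in range(t2):
            if (min(L[t1:t2 + 1]) >= L[t2]
                    and min(R[t1:t2 + 1]) >= R[t2]
                    and (L[t1] == L[t2] or R[t1] == R[t2])):
                for s in range(t1, t2 + 1):
                    covered[s] = True
    return [s for s in range(t + 1) if not covered[s]]

# No cone excursion exists for this path, so every time is ancestor-free.
assert ancestor_free_times([0, 1, 2, 1], [0, 1, 2, 3], 3) == [0, 1, 2, 3]
# Here [0, 2] is a cone excursion (L returns to 0 while R stays above R[2]),
# so no time in [0, 2] is ancestor-free.
assert ancestor_free_times([0, 1, 0], [0, -1, -2], 2) == []
```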

Definition 4.14. (Quantum natural parameterization)With the above definitions

( η z ε ( T t ε , t ) ) t [ 0 , t ε , t ] $$\begin{equation*} (\eta ^{\varepsilon} _{z}(T^{\varepsilon ,\mathfrak {t}}_t))_{t\in [0,\ell _{\mathfrak {t}}^{\varepsilon ,\mathfrak {t}}]} \end{equation*}$$
defines a parameterization of η z ε $\eta ^{\varepsilon} _z$ which is called its quantum natural parameterization.

4.3 Convergence of the mating of trees Brownian functionals

Let Z ε $Z^{\varepsilon}$ be the process from Theorem 4.12 and let X ε = ( A ε , B ε ) $X^{\varepsilon} =(A^{\varepsilon} ,B^{\varepsilon} )$ , where
A t ε = a ε ( L t ε + R t ε ) , B t ε = R t ε L t ε , a ε 2 = 1 + cos ( π γ 2 / 4 ) 1 cos ( π γ 2 / 4 ) , t 0 . $$\begin{equation*} A^{\varepsilon} _t=a_\varepsilon (L^{\varepsilon} _t+R^{\varepsilon} _t),\qquad B^{\varepsilon} _t=R^{\varepsilon} _t-L^{\varepsilon} _t,\qquad a_\varepsilon ^2=\frac{1+\cos (\pi \gamma ^2/4)}{1-\cos (\pi \gamma ^2/4)}, t\geqslant 0. \end{equation*}$$
Note that a ε = ε π / 2 + o ( ε ) $a_\varepsilon =\varepsilon \pi /2+o(\varepsilon )$ and that X ε $X^{\varepsilon}$ is an uncorrelated Brownian excursion with variance 2 ( 1 + cos ( π γ 2 / 4 ) ) ( ε sin ( π γ 2 / 4 ) ) 1 = π + o ( 1 ) $2(1+\cos (\pi \gamma ^2/4))(\varepsilon \sin (\pi \gamma ^2/4))^{-1}=\pi +o(1)$ in the cone { z C : arg ( z ) [ π / 2 + tan 1 ( a ε ) , π / 2 tan 1 ( a ε ) ) } $\lbrace z\in \mathbb {C}: \arg (z)\in [-\pi /2+\tan ^{-1}(a_\varepsilon ),\pi /2-\tan ^{-1}(a_\varepsilon ))\rbrace$ , starting from ( a ε , 1 ) $(a_\varepsilon ,1)$ and ending at the origin (see Figure 10). Also define the processes X ̂ ε , t = ( A ̂ ε , t , B ̂ ε , t ) $\widehat{X}^{\varepsilon ,\mathfrak {t}}=(\widehat{A}^{\varepsilon ,\mathfrak {t}},\widehat{B}^{\varepsilon ,\mathfrak {t}})$ for each t < μ h ε ε ( D ) $\mathfrak {t}<\mu ^{\varepsilon}_{h^{\varepsilon}} (\mathbb {D})$ , by setting
X ̂ t ε , t = X T t ε , t ε ; t [ 0 , t ε , t ] . $$\begin{equation*} \widehat{X}^{\varepsilon ,\mathfrak {t}}_t= X^{\varepsilon }_{T^{\varepsilon ,\mathfrak {t}}_t}\, ; \quad t\in [0,\ell _{\mathfrak {t}}^{\varepsilon ,\mathfrak {t}}]. \end{equation*}$$
Figure 10. The transformation from Z ε $Z^{\varepsilon}$ to X ε $X^{\varepsilon}$ .
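The asymptotics $a_\varepsilon =\varepsilon \pi /2+o(\varepsilon )$ and the variance $\pi +o(\varepsilon )$ stated above are easy to check numerically. The following sketch (an illustration only, not part of the argument; the function names are ours) evaluates both ratios for small $\varepsilon$, with $\gamma =2-\varepsilon$ as in the paper.

```python
import math

def a_eps(eps):
    """a_eps^2 = (1 + cos(pi*gamma^2/4)) / (1 - cos(pi*gamma^2/4)), gamma = 2 - eps."""
    c = math.cos(math.pi * (2.0 - eps) ** 2 / 4.0)
    return math.sqrt((1.0 + c) / (1.0 - c))

def excursion_variance(eps):
    """2(1 + cos(pi*gamma^2/4)) / (eps * sin(pi*gamma^2/4)); tends to pi as eps -> 0."""
    x = math.pi * (2.0 - eps) ** 2 / 4.0
    return 2.0 * (1.0 + math.cos(x)) / (eps * math.sin(x))

for eps in (0.1, 0.01, 0.001):
    # both ratios tend to 1 as eps -> 0
    print(eps, a_eps(eps) / (eps * math.pi / 2.0), excursion_variance(eps) / math.pi)
```

Both printed ratios approach 1 linearly in $\varepsilon$, consistent with the $o(\varepsilon )$ error terms.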

We will prove in this subsection that all the quantities defined above have a joint limit in law as ε 0 $\varepsilon \downarrow 0$ . Namely, let us consider an uncorrelated Brownian excursion X = ( A , B ) $X=(A,B)$ in the right half-plane from (0,1) to (0,0); the process can, for example, be constructed via a limiting procedure where we condition a standard planar Brownian motion from (0,1) to (0,0) on first leaving { z : Re ( z ) > δ } $\lbrace z\,:\,\operatorname{Re}(z)>-\delta \rbrace$ at a point z ̂ $\widehat{z}$ where | Im ( z ̂ ) | < δ $|\operatorname{Im}(\widehat{z})|<\delta$ , and then let δ 0 $\delta \downarrow 0$ . For t $\mathfrak {t}$ less than the total duration of X $X$ , let I t [ 0 , t ] $\mathcal {I}^{\mathfrak {t}}\subset [0,\mathfrak {t}]$ denote the set of times at which A $A$ has a backward running infimum relative to time t $\mathfrak {t}$ , that is, s I t $s\in \mathcal {I}^{\mathfrak {t}}$ if A u > A s $A_u>A_s$ for all u ( s , t ] $u\in (s,\mathfrak {t}]$ . Let ( t t ) t 0 $(\ell ^{\mathfrak {t}}_t)_{t\geqslant 0}$ denote the increasing function describing the local time of I t $\mathcal {I}^{\mathfrak {t}}$ such that 0 t = 0 $\ell ^{\mathfrak {t}}_0=0$ and t t = t t $\ell ^{\mathfrak {t}}_t= \ell ^{\mathfrak {t}}_{\mathfrak {t}}$ for t t $t\geqslant \mathfrak {t}$ . Then let T t $T^{\mathfrak {t}}$ denote the right-continuous inverse of t $\ell ^{\mathfrak {t}}$ , and define X ̂ t = ( A ̂ t , B ̂ t ) $\widehat{X}^{\mathfrak {t}}=(\widehat{A}^{\mathfrak {t}},\widehat{B}^{\mathfrak {t}})$ by X ̂ t t = X T t t $\widehat{X}^{\mathfrak {t}}_t= X_{T^{\mathfrak {t}}_t}$ .
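As an illustration of the definition of $\mathcal {I}^{\mathfrak {t}}$ (not part of the argument), the backward running infima of a discrete path can be extracted by a single backward scan; here a simple random walk stands in for the coordinate $A$.

```python
import random

def backward_infima(A, t):
    """Discrete analogue of I^t: indices s <= t such that A[u] > A[s]
    for every u in (s, t] (backward running infima relative to time t)."""
    out, running_min = [], float("inf")
    for s in range(t, -1, -1):       # scan backwards from time t
        if A[s] < running_min:
            out.append(s)
            running_min = A[s]
    return sorted(out)

# a simple random walk stands in for the coordinate A of the excursion
random.seed(1)
A = [0]
for _ in range(50):
    A.append(A[-1] + random.choice([-1, 1]))

I_t = backward_infima(A, 40)
# every reported time really is a strict infimum over the later times up to 40
assert all(all(A[u] > A[s] for u in range(s + 1, 41)) for s in I_t)
```

Note that the time $t$ itself always belongs to the set, since the condition over the empty interval $(t,t]$ holds vacuously, matching the continuum definition.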

We set
be ε = ( X ε , ( I ε , t ) t , ( ε , t ) t , ( T ε , t ) t , ( X ̂ ε , t ) t ) $$\begin{equation*} \mathfrak {be}^{\varepsilon} =(X^{\varepsilon} ,(\mathcal {I}^{\varepsilon ,\mathfrak {t}})_{\mathfrak {t}}, (\ell ^{\varepsilon ,\mathfrak {t}})_{\mathfrak {t}}, (T^{\varepsilon ,\mathfrak {t}})_{\mathfrak {t}}, (\widehat{X}^{\varepsilon ,\mathfrak {t}})_{\mathfrak {t}} ) \end{equation*}$$
and
be = ( X , ( I t ) t , ( t ) t , ( T t ) t , ( X ̂ t ) t ) $$\begin{equation*} \mathfrak {be}=(X,(\mathcal {I}^{\mathfrak {t}})_{\mathfrak {t}}, (\ell ^{\mathfrak {t}})_{\mathfrak {t}}, (T^{\mathfrak {t}})_{\mathfrak {t}}, (\widehat{X}^{\mathfrak {t}})_{\mathfrak {t}} ) \end{equation*}$$
where the indexing is over t R + Q $\mathfrak {t}\in \mathbb {R}_+\cap \mathbb {Q}$ .

Then we have the following convergence.

Lemma 4.15. be ε be $\mathfrak {be}^{\varepsilon} \Rightarrow \mathfrak {be}$ as ε 0 $\varepsilon \downarrow 0$ , where we use the Hausdorff topology on the second coordinate and the Skorokhod topology on the remaining coordinates.

Proof.First we consider the infinite volume case where X ε $X^{\varepsilon}$ is a two-sided planar Brownian motion started from 0, with the same variance and covariance as before, namely variance 2 ( 1 + cos ( π γ 2 / 4 ) ) ( ε sin ( π γ 2 / 4 ) ) 1 = π + o ( ε ) $2(1+\cos (\pi \gamma ^2/4))(\varepsilon \sin (\pi \gamma ^2/4))^{-1}=\pi +o(\varepsilon )$ and covariance 0. In this infinite volume setting we define ( I ε , t ) t , ( ε , t ) t , ( T ε , t ) t , ( X ̂ ε , t ) t $(\mathcal {I}^{\varepsilon ,{\mathfrak {t}}})_{\mathfrak {t}}, (\ell ^{\varepsilon ,{\mathfrak {t}}})_{\mathfrak {t}}, (T^{\varepsilon ,{\mathfrak {t}}})_{\mathfrak {t}}, (\widehat{X}^{\varepsilon ,{\mathfrak {t}}})_{\mathfrak {t}}$ similarly to before, such that for ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ , I ε , t ( , t ) $\mathcal {I}^{\varepsilon ,{\mathfrak {t}}}\subset (-\infty ,{\mathfrak {t}})$ is the set of ancestor-free times relative to time t ${\mathfrak {t}}$ , ε , t : R ( , 0 ] $\ell ^{\varepsilon ,{\mathfrak {t}}}:\mathbb {R}\rightarrow (-\infty ,0]$ is an increasing process given by the local time of I ε , t $\mathcal {I}^{\varepsilon ,{\mathfrak {t}}}$ satisfying s ε , t 0 $\ell ^{\varepsilon ,{\mathfrak {t}}}_s\equiv 0$ for s t $s\geqslant {\mathfrak {t}}$ , T ε , t : ( , 0 ) ( , 0 ) $T^{\varepsilon ,{\mathfrak {t}}}:(-\infty ,0)\rightarrow (-\infty ,0)$ is the right-continuous inverse of ε , t $\ell ^{\varepsilon ,{\mathfrak {t}}}$ and X ̂ s ε , t = X T s ε , t ε $\widehat{X}^{\varepsilon ,{\mathfrak {t}}}_s=X^{\varepsilon }_{T^{\varepsilon ,{\mathfrak {t}}}_s}$ . We make a similar adaptation of the definition to the infinite volume setting for ε = 0 $\varepsilon =0$ ; in particular, X $X$ is ( π $\sqrt {\pi }$ times) a standard uncorrelated two-sided planar Brownian motion.
By translation invariance in law of X ε $X^{\varepsilon}$ and X $X$ , and since X ε $X^{\varepsilon}$ and X $X$ determine the rest of the objects in question, it is sufficient to show convergence for t = 0 ${\mathfrak {t}}=0$ .

First we claim that for all ε [ 0 , 2 2 ) $\varepsilon \in [0,2-\sqrt {2})$ we can sample I ε , 0 $\mathcal {I}^{\varepsilon ,0}$ by considering a Poisson point process (PPP) in the second quadrant with intensity d x × y α ( ε ) d y $dx\times y^{-\alpha (\varepsilon )}dy$ for α ( ε ) = 1 + 2 / ( 2 ε ) 2 = 1 + 2 / γ 2 $\alpha (\varepsilon )=1+2/(2-\varepsilon )^2=1+2/\gamma ^2$ , such that points ( x , y ) $(x,y)$ of this PPP are in bijection with the complementary components of I ε , 0 $\mathcal {I}^{\varepsilon ,0}$ , with y $y$ representing the length of the component and x $x$ representing the relative ordering of the components. (In the case ε = 0 $\varepsilon =0$ , I 0 , 0 $\mathcal {I}^{0,0}$ refers to I 0 $\mathcal {I}^0$ .) For ε = 0 $\varepsilon =0$ the claim follows since A $A$ restricted to the complementary components of I 0 $\mathcal {I}^0$ has law given by the Brownian excursion measure. For ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ the claim follows from [18]: it is explained in [18, Section 1.4.2] that I ε , 0 $\mathcal {I}^{\varepsilon ,0}$ has the law of the zero set of a certain Bessel process, which verifies the claim modulo the formula for α ( ε ) $\alpha (\varepsilon )$ . The dimension of I ε , 0 $\mathcal {I}^{\varepsilon ,0}$ is 2 / γ 2 $2/\gamma ^2$ [20, Table 1 and Example 2.3], and we get the formula for α ( ε ) $\alpha (\varepsilon )$ by adding 1 to this number.

Next we argue that the marginal law of I ε , 0 $\mathcal {I}^{\varepsilon ,0}$ converges to the marginal law of I 0 $\mathcal {I}^{0}$ . Consider the definition of these sets via PPP as described in the previous paragraph. Since lim ε 0 α ( ε ) = α ( 0 ) = 3 / 2 $\lim _{\varepsilon \rightarrow 0}\alpha (\varepsilon )=\alpha (0)=3/2$ , the PPP for ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ converge in law to the PPP for ε = 0 $\varepsilon =0$ on all sets bounded away from y = 0 $y=0$ . This implies that for any compact interval I $I$ we have convergence in law of I ε , 0 I $\mathcal {I}^{\varepsilon ,0}\cap I$ to I 0 I $\mathcal {I}^{0}\cap I$ for the Hausdorff distance.
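The PPP description above can be sketched in code. This is an illustration only: the cutoff $y_0$ is our addition (the intensity has infinite mass near $y=0$, so one can only sample the components of length at least $y_0$), and the helper names are ours.

```python
import math, random

def poisson(lam):
    """Knuth's method for a Poisson(lam) sample (fine for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def sample_ppp(alpha, y0, x_len=1.0):
    """Points (x, y) of a PPP with intensity dx * y^(-alpha) dy on
    [0, x_len] x [y0, infinity); y plays the role of a gap length."""
    mean = x_len * y0 ** (1.0 - alpha) / (alpha - 1.0)      # total intensity mass
    pts = []
    for _ in range(poisson(mean)):
        x = random.uniform(0.0, x_len)
        y = y0 * random.random() ** (-1.0 / (alpha - 1.0))  # inverse CDF of y^(-alpha)
        pts.append((x, y))
    return pts

random.seed(3)
gaps = sample_ppp(alpha=1.5, y0=0.1)   # alpha(0) = 3/2, the Brownian case
assert all(y >= 0.1 for _, y in gaps)
```

Since $\alpha (\varepsilon )\rightarrow 3/2$, sampling with `alpha` slightly above 3/2 and letting it decrease mimics the convergence used in this paragraph.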

Now we will argue that if I ε , 0 ( , 0 ) $\widetilde{\mathcal {I}}^{\varepsilon ,0}\subset (-\infty ,0)$ denotes the backward running infima of A ε $A^{\varepsilon}$ relative to time 0, then

( X ε , I ε , 0 , I ε , 0 ) ( X , I 0 , I 0 ) . $$\begin{equation*} (X^{\varepsilon} ,\mathcal {I}^{\varepsilon ,0},\widetilde{\mathcal {I}}^{\varepsilon ,0})\Rightarrow (X,\mathcal {I}^{0},\mathcal {I}^{0}). \end{equation*}$$
Since ( X ε , I ε , 0 ) ( X , I 0 ) $(X^{\varepsilon} ,{\widetilde{\mathcal {I}}^{\varepsilon ,0}})\Rightarrow (X,\mathcal {I}^{0})$ and I ε , 0 I 0 ${\mathcal {I}^{\varepsilon ,0}}\Rightarrow \mathcal {I}^{0}$ , it suffices to prove that for any subsequential limit ( X , I 0 , I 0 ) $(X,\mathcal {I}^{0},\widetilde{\mathcal {I}}^{0})$ we have I 0 = I 0 $\mathcal {I}^0=\widetilde{\mathcal {I}}^0$ almost surely. Observe that I ε , 0 I ε , 0 $\widetilde{\mathcal {I}}^{\varepsilon ,0}\subset \mathcal {I}^{\varepsilon ,0}$ : indeed, I ε , 0 $\widetilde{\mathcal {I}}^{\varepsilon ,0}$ denotes the backward running infima of A ε $A^{\varepsilon}$ relative to time 0, I ε , 0 $\mathcal {I}^{\varepsilon ,0}$ denotes the set of ancestor-free times of A ε $A^{\varepsilon}$ relative to time 0, and a time which is a backward running infimum of A ε $A^{\varepsilon}$ relative to time 0 cannot be inside a cone excursion, hence is ancestor-free. The inclusion I ε , 0 I ε , 0 $\widetilde{\mathcal {I}}^{\varepsilon ,0}\subset \mathcal {I}^{\varepsilon ,0}$ implies that I 0 I 0 $\widetilde{\mathcal {I}}^{0}\subset \mathcal {I}^{0}$ almost surely in any subsequential limit ( X , I 0 , I 0 ) $(X,\mathcal {I}^{0},\widetilde{\mathcal {I}}^{0})$ . Since I 0 = d I 0 $\widetilde{\mathcal {I}}^{0}\overset{d}{=}\mathcal {I}^{0}$ , this implies that I 0 = I 0 $\mathcal {I}^0=\widetilde{\mathcal {I}}^0$ almost surely.

Next we will argue that ( I ε , 0 , ε , 0 , T ε , 0 ) ( I 0 , 0 , T 0 ) $(\mathcal {I}^{\varepsilon ,0},\ell ^{\varepsilon ,0},T^{\varepsilon ,0}) \Rightarrow (\mathcal {I}^{0},\ell ^{0},T^{0})$ , assuming we choose the multiplicative constant consistently when defining ε , 0 $\ell ^{\varepsilon ,0}$ and 0 $\ell ^{0}$ . The convergence result follows again from the construction of I ε , 0 $\mathcal {I}^{\varepsilon ,0}$ and I 0 $\mathcal {I}^0$ via a PPP, since the x $x$ coordinate of the PPP defines the local time (modulo multiplication by a deterministic constant).
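In a discrete setting, the right-continuous inverse used throughout this subsection is simply $T_t=\inf \lbrace s\,:\,\ell _s>t\rbrace$. A minimal sketch (the helper is ours, for illustration only), with a non-decreasing function sampled on an increasing grid:

```python
def right_cont_inverse(grid, ell, t):
    """T_t = inf{s in grid : ell(s) > t} for a non-decreasing ell sampled
    on an increasing grid; returns the last grid point if no such s exists."""
    for s, v in zip(grid, ell):
        if v > t:
            return s
    return grid[-1]

grid = [0.0, 1.0, 2.0, 3.0, 4.0]
ell  = [0.0, 0.0, 1.0, 1.0, 2.0]
assert right_cont_inverse(grid, ell, 0.0) == 2.0   # first s with ell(s) > 0
assert right_cont_inverse(grid, ell, 1.5) == 4.0
```

The strict inequality `v > t` is what makes the inverse right-continuous, matching the convention used for $T^{\varepsilon ,\mathfrak {t}}$ and $T^{\mathfrak {t}}$.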

Using that ( I ε , 0 , ε , 0 , T ε , 0 ) ( I 0 , 0 , T 0 ) $(\mathcal {I}^{\varepsilon ,0},\ell ^{\varepsilon ,0},T^{\varepsilon ,0}) \Rightarrow (\mathcal {I}^{0},\ell ^{0},T^{0})$ , that I ε , 0 $\mathcal {I}^{\varepsilon ,0}$ and I 0 $\mathcal {I}^{0}$ determine the other two elements in this tuple and that ( X ε , I ε , 0 ) ( X , I 0 ) $(X^{\varepsilon} ,\mathcal {I}^{\varepsilon ,0})\Rightarrow (X,\mathcal {I}^{0})$ , we get

( X ε , I ε , 0 , ε , 0 , T ε , 0 ) ( X , I 0 , 0 , T 0 ) . $$\begin{equation*} (X^{\varepsilon} ,\mathcal {I}^{\varepsilon ,0},\ell ^{\varepsilon ,0},T^{\varepsilon ,0}) \Rightarrow (X,\mathcal {I}^{0},\ell ^{0},T^{0}). \end{equation*}$$
We conclude that the lemma holds in the infinite volume setting by using that
X ̂ s ε , 0 = X T s ε , 0 ε and X ̂ s 0 = X T s 0 . $$\begin{equation*} \widehat{X}^{\varepsilon ,0}_s=X^{\varepsilon }_{T^{\varepsilon ,0}_s} \text{ and } \widehat{X}^{0}_s=X_{T^{0}_s}. \end{equation*}$$

To conclude the proof we will transfer from the infinite volume setting to the finite volume setting. Let us start by recalling that there is a natural infinite measure θ ε $\theta _\varepsilon$ on Brownian excursions in the cone C ε : = { z C : arg ( z ) ( π / 2 + tan 1 ( a ε ) , π / 2 tan 1 ( a ε ) ) } $\mathcal {C}_\varepsilon :=\lbrace z\in \mathbb {C}: \arg (z)\in (-\pi /2+\tan ^{-1}(a_\varepsilon ),\pi /2-\tan ^{-1}(a_\varepsilon ))\rbrace$ which is uniquely characterized (modulo multiplication by a constant) by the following property. Let X ε $X^{\varepsilon}$ be as in the previous paragraph, let δ > 0 $\delta >0$ and let J ε = [ t 1 , t 2 ] R $J_\varepsilon =[t_1,t_2]\subset \mathbb {R}_-$ be the interval with largest left endpoint t 1 $t_1$ , of length at least δ $\delta$ , during which X ε $X^{\varepsilon}$ makes an excursion in the cone C ε $\mathcal {C}_\varepsilon$ . Here a cone excursion in C ε $\mathcal {C}_\varepsilon$ is a path starting at ( b a ε , b ) + z 0 $(ba_\varepsilon ,b)+z_0$ for some b > 0 $b>0$ and z 0 C $z_0\in \mathbb {C}$ , ending at z 0 $z_0$ , and otherwise staying inside z 0 + C ε $z_0+\mathcal {C}_\varepsilon$ . Define

Y t ε = ( X t + t 1 ε X t 2 ε ) $$\begin{equation} Y^{\varepsilon} _t=(X^{\varepsilon} _{t+t_1}-X^{\varepsilon} _{t_2}) \end{equation}$$ (4.6)
for t [ 0 , t 2 t 1 ] $t\in [0,t_2-t_1]$ so that Y ε $Y^{\varepsilon}$ is a path that starts at ( b a ε , b ) $(ba_\varepsilon ,b)$ for some b > 0 $b>0$ , ends at the origin and otherwise stays inside C ε $\mathcal {C}_\varepsilon$ . Then Y ε $Y^{\varepsilon}$ has law θ ε $\theta _\varepsilon$ restricted to excursions of length at least δ $\delta$ ; see [62]. (Here and in the rest of the proof, when we work with a non-probability measure of finite mass, we will often assume that it has been renormalized to be a probability measure.)

The measure θ ε $\theta _\varepsilon$ allows a disintegration θ ε = 0 θ ε b d b $\theta _\varepsilon =\int _0^\infty \theta _\varepsilon ^b\,db$ , where a path sampled from θ ε b $\theta _\varepsilon ^b$ almost surely starts at ( b a ε , b ) $(ba_\varepsilon ,b)$ . Furthermore, for b , b > 0 $b,b^{\prime }&gt;0$ , a path sampled from θ ε b $\theta _\varepsilon ^b$ and rescaled by b / b $b^{\prime }/b$ so it ends at ( b a ε , b ) $(b^{\prime }a_\varepsilon ,b^{\prime })$ (and with Brownian scaling of time), has law θ ε b $\theta _\varepsilon ^{b^{\prime }}$ . Finally, an excursion sampled from θ ε 1 $\theta _\varepsilon ^1$ is equal in law to the excursion in the statement of the lemma; see [2].

Let us now use these facts to complete the proof. We define a function f ε $f^{\varepsilon}$ such that for X ε $X^{\varepsilon}$ a two-sided planar Brownian motion as above we have f ε ( X ε ) = ( ( I ε , t ) t , ( ε , t ) t , ( T ε , t ) t , ( X ̂ ε , t ) t ) $f^{\varepsilon} (X^{\varepsilon} )=((\mathcal {I}^{\varepsilon ,\mathfrak {t}})_{\mathfrak {t}}, (\ell ^{\varepsilon ,\mathfrak {t}})_{\mathfrak {t}}, (T^{\varepsilon ,\mathfrak {t}})_{\mathfrak {t}}, (\widehat{X}^{\varepsilon ,\mathfrak {t}})_{\mathfrak {t}} )$ almost surely. For Y ε $Y^{\varepsilon}$ a Brownian cone excursion in C ε $\mathcal {C}_\varepsilon$ starting at ( a ε , 1 ) $(a_\varepsilon ,1)$ we define f ε ( Y ε ) $f^{\varepsilon} (Y^{\varepsilon} )$ such that ( Y ε , f ε ( Y ε ) ) $(Y^{\varepsilon} ,f^{\varepsilon} (Y^{\varepsilon} ))$ is equal in law to the tuple be ε $\mathfrak {be}^{\varepsilon}$ in the statement of Lemma 4.15. We also extend the definition of f ε $f^{\varepsilon}$ to the case of Brownian excursions Y ε $Y^{\varepsilon}$ in C ε $\mathcal {C}_\varepsilon$ starting at ( b a ε , b ) $(ba_\varepsilon ,b)$ for general b > 0 $b>0$ in the natural way.

Now let Y ε $Y^{\varepsilon}$ be coupled with X ε $X^{\varepsilon}$ as in (4.6) for some fixed δ > 0 $\delta &gt;0$ , and let E ε $E^{\varepsilon}$ be the event that Y ε $Y^{\varepsilon}$ starts at ( b a ε , b ) $(ba_\varepsilon ,b)$ for b [ 1 , 2 ] $b\in [1,2]$ . Define f , E $f,E$ similarly for ε = 0 $\varepsilon =0$ . We claim that

( X ε , f ε ( X ε ) , Y ε , f ε ( Y ε ) , E ε ) ( X , f ( X ) , Y , f ( Y ) , E ) $$\begin{equation} (X^{\varepsilon} ,f^{\varepsilon} (X^{\varepsilon} ),Y^{\varepsilon} ,f^{\varepsilon} (Y^{\varepsilon} ),E^{\varepsilon} ) \Rightarrow (X,f(X),Y,f(Y),E) \end{equation}$$ (4.7)
as ε 0 $\varepsilon \rightarrow 0$ . In fact, this claim is immediate since if ( X ε , f ε ( X ε ) ) $(X^{\varepsilon} ,f^{\varepsilon} (X^{\varepsilon} ))$ converges to ( X , f ( X ) ) $(X,f(X))$ then (by convergence of I ε , 0 $\mathcal {I}^{\varepsilon ,0}$ ) we also have convergence of the interval J ε $J_\varepsilon$ , which further gives convergence of ( Y ε , f ε ( Y ε ) , E ε ) $(Y^{\varepsilon} ,f^{\varepsilon} (Y^{\varepsilon} ),E^{\varepsilon} )$ to ( Y , f ( Y ) , E ) $(Y,f(Y),E)$ .

With Y ε $Y^{\varepsilon}$ as in the previous paragraph let Y ε $\widetilde{Y}^{\varepsilon}$ denote a random variable which is obtained by conditioning on E ε $E^{\varepsilon}$ and then applying a Brownian rescaling of Y ε $Y^{\varepsilon}$ so that Y ε $\widetilde{Y}^{\varepsilon}$ starts at ( a ε , 1 ) $(a_\varepsilon ,1)$ . We get from (4.7) that ( Y ε , f ε ( Y ε ) ) ( Y , f ( Y ) ) $(\widetilde{Y}^{\varepsilon} , f^{\varepsilon} (\widetilde{Y}^{\varepsilon} )) \Rightarrow (\widetilde{Y},f(\widetilde{Y}))$ . Note that if we condition the excursions in the statement of the lemma to have duration at least δ $\delta$ , then these have exactly the same laws as ( Y ε , f ε ( Y ε ) , Y , f ( Y ) ) $(\widetilde{Y}^{\varepsilon} , f^{\varepsilon} (\widetilde{Y}^{\varepsilon} ),\widetilde{Y},f(\widetilde{Y}))$ conditioned to have duration at least δ $\delta$ . Thus the lemma follows upon taking δ 0 $\delta \rightarrow 0$ , since the probability that the considered excursions have duration at least δ $\delta$ tends to 1, uniformly in ε $\varepsilon$ . $\Box$

4.4 Proof of (3.10)

Let us first recall the statement of (3.10). We have fixed z , w D $z,w\in \mathbb {D}$ , and as usual, η ε $\eta ^{\varepsilon}$ denotes a space-filling SLE κ $_{\kappa ^{\prime }}$ in D $\mathbb {D}$ , while η z ε $\eta ^{\varepsilon} _z$ denotes the branch in the associated branching SLE κ $_{\kappa ^{\prime }}$ toward z $z$ , parameterized by log $-\log$ conformal radius seen from z $z$ . For δ > 0 $\delta >0$ , we have defined σ z , w , δ ε $\sigma _{z,w,\delta }^{\varepsilon}$ to be the first time that w $w$ is sent to within distance δ $\delta$ of D $\partial \mathbb {D}$ by the Loewner maps associated with η z ε $\eta ^{\varepsilon} _z$ , and σ z , w ε = σ z , w , 0 ε $\sigma _{z,w}^{\varepsilon} =\sigma _{z,w,0}^{\varepsilon}$ to be the first time that z $z$ and w $w$ are separated by η z ε $\eta ^{\varepsilon} _z$ . For r > 0 $r>0$ , we denote the collection of faces (squares) of r Z 2 $r\mathbb {Z}^2$ that intersect D $\mathbb {D}$ by S r $\mathcal {S}_r$ . Finally, we write S δ , r ε $S_{\delta ,r}^{\varepsilon}$ for the event that there exists S S r $S\in \mathcal {S}_r$ that is separated by η z ε $\eta _z^{\varepsilon}$ from z $z$ during the interval [ σ z , w , δ ε , σ z , w ε ] $[\sigma _{z,w,\delta }^{\varepsilon} , \sigma _{z,w}^{\varepsilon} ]$ and such that z $z$ is visited by the space-filling SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ η ε $\eta ^{\varepsilon}$ before S $S$ . The statement of (3.10) is then that
lim δ 0 lim ε 0 P ( S δ , r ε ) = 0 . $$\begin{equation*} \lim _{\delta \downarrow 0} \lim _{\varepsilon \downarrow 0} \mathbb {P}(S_{\delta ,r}^{\varepsilon} ) =0. \end{equation*}$$

The mating of trees theorem (Theorem 4.12) together with the convergence proved in the previous subsection now make the proof of this statement reasonably straightforward. Indeed, in plain language, it says that the probability of an SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ branch almost separating two points z $z$ and w $w$ (where ‘almost’ is encoded by a small parameter δ $\delta$ ) but then going on to separate a bicolored component of macroscopic size from z $z$ at some time t $t$ strictly before separating z $z$ from w $w$ , goes to 0 as δ 0 $\delta \rightarrow 0$ , uniformly in κ ${\kappa ^{\prime }}$ . The idea is to couple this SLE with an independent γ $\gamma$ -LQG disk and note that if the event mentioned above were to occur, then the component U $U$ containing z $z$ and w $w$ at time t $t$ would have a small ‘bottleneck’ and hence define a very strange distribution of γ $\gamma$ -LQG mass when viewed as a γ $\gamma$ -LQG surface. On the other hand, if we sample several points from the γ $\gamma$ -LQG area measure on the disk, then one of these is likely to be in the bicolored component separated from z $z$ and w $w$ at time t $t$ . So the mating of trees theorem says that U $U$ should really look like a quantum disk, and in particular, have a rather well behaved distribution of γ $\gamma$ -LQG mass without bottlenecks. This contradiction will lead us to the proof of (3.10).

Let us now get on with the details. For ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ we consider a CLE κ $_{{\kappa ^{\prime }}}$ exploration alongside an independent unit boundary length quantum disk h ε $h^{\varepsilon}$ as in Definition 4.8. We write μ ε $\mu ^{\varepsilon}$ for its associated LQG area measure and let y ε $y^{\varepsilon}$ be a point in D $\mathbb {D}$ sampled from μ ε $\mu ^{\varepsilon}$ normalized to be a probability measure. We let z Q $z\in \mathcal {Q}$ be fixed.

Corollary 4.16.Consider the event A δ , m , v ε $A^{\varepsilon} _{\delta ,m,v}$ that

  • O z , y ε ε = 1 $\mathcal {O}_{z,y^{\varepsilon} }^{\varepsilon} =1$ (that is, the component containing z $z$ when y ε $y^{\varepsilon}$ and z $z$ are separated is monocolored);
  • when D z , y ε ε ${\mathrm{D}}_{z,y^{\varepsilon} }^{\varepsilon}$ (this monocolored component) is mapped to D $\mathbb {D}$ , with a point in the interior chosen proportionally to μ ε | D z , y ε ε $\mu ^{\varepsilon} |_{{\mathrm{D}}_{z,y^{\varepsilon} }^{\varepsilon} }$ sent to 0, the resulting quantum mass of D ( 1 10 δ ) D $\mathbb {D}\setminus (1-10\delta )\mathbb {D}$ is greater than m $m$ .
Then for every m $m$ we have that
lim δ 0 lim sup ε 0 P ( A δ , m , v ε ) = 0 . $$\begin{equation*} \lim _{\delta \rightarrow 0}\limsup _{\varepsilon \rightarrow 0}\mathbb {P}(A_{\delta ,m,v}^{\varepsilon} )=0. \end{equation*}$$

Proof.Theorem 4.12 says that the monocolored components separated from y ε $y^{\varepsilon}$ by η y ε ε $\eta ^{\varepsilon} _{y^{\varepsilon} }$ are quantum disks conditionally on their boundary lengths and areas. Moreover, we know that the total mass of the original disk h ε $h^{\varepsilon}$ converges in law to something almost surely finite as ε 0 $\varepsilon \rightarrow 0$ , by Lemma 4.5 and Remark 4.6. Recalling the definition of B ̂ $\widehat{B}$ from Section 4.3, we also know that the largest quantum boundary length among all monocolored components separated from y ε $y^{\varepsilon}$ has law given by the largest jump of B ̂ t $\widehat{B}^{\mathfrak {t}}$ , for t $\mathfrak {t}$ chosen uniformly in ( 0 , μ ε ( D ) ) $(0,\mu ^{\varepsilon} (\mathbb {D}))$ . Indeed, if t $\mathfrak {t}$ corresponds to y ε $y^{\varepsilon}$ as in the paragraph above Definition 4.14, then t $\mathfrak {t}$ is a uniform time in ( 0 , μ ε ( D ) ) $(0,\mu ^{\varepsilon} (\mathbb {D}))$ and the jumps of B ̂ t $\widehat{B}^{\mathfrak {t}}$ are precisely the quantum boundary lengths of the monocolored components disconnected from y ε $y^{\varepsilon}$ . By Lemma 4.15 we may deduce that the law of this largest jump converges to something almost surely finite as ε 0 $\varepsilon \rightarrow 0$ . Thus, by choosing N , L $N,L$ large enough, we may work on an event with arbitrarily high probability (uniformly in ε $\varepsilon$ ) where there are fewer than N $N$ monocolored components separated from y ε $y^{\varepsilon}$ with mass at least m $m$ , and where they all have ν ε $\nu ^{\varepsilon}$ boundary length less than L $L$ . Lemma 4.10 then provides the result. $\Box$

We also need one more elementary property of radial Loewner chains to assist with the proof of (3.10).

Lemma 4.17.Consider the image ( g t ( z ) ) t 0 $(g_t(z))_{t\geqslant 0}$ of a point z D $z\in \mathbb {D}$ under the radial Loewner flow ( g t ) t 0 = ( g t [ D ] ) t 0 $(g_t)_{t\geqslant 0}=(g_t[\mathbf {D}])_{t\geqslant 0}$ corresponding to D D $\mathbf {D}\in \mathcal {D}$ . Then with probability one, | g t ( z ) | $|g_t(z)|$ is a non-decreasing function of time (until point z $z$ is swallowed).

Proof.From the radial Loewner equation one can compute directly that, until point z $z$ is swallowed,

 t ( | g t ( z ) | 2 ) = 2 | g t ( z ) | 2 W t + g t ( z ) W t g t ( z ) . $$\begin{equation*} \partial _t (|g_t(z)|^2) = 2 |g_t(z)|^2 \, \Re {\left(\frac{W_t+g_t(z)}{W_t-g_t(z)}\right)}. \end{equation*}$$
Since ( ( 1 + x ) / ( 1 x ) ) > 0 $\Re ((1+x)/(1-x))>0$ for any x D $x\in \mathbb {D}$ , applying this with x = g t ( z ) / W t $x=g_t(z)/W_t$ (note that | W t | = 1 $|W_t|=1$ , so ( W t + g t ( z ) ) / ( W t g t ( z ) ) = ( 1 + x ) / ( 1 x ) $(W_t+g_t(z))/(W_t-g_t(z))=(1+x)/(1-x)$ ) shows that the right-hand side above is positive. $\Box$
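The monotonicity in Lemma 4.17 is easy to observe numerically. The sketch below is illustrative only (the driving function, step size and starting point are arbitrary choices of ours, not taken from the paper): it Euler-integrates the standard radial Loewner ODE $\partial _t g_t(z)=g_t(z)(W_t+g_t(z))/(W_t-g_t(z))$ with $|W_t|=1$ and records $|g_t(z)|$.

```python
import cmath, math

def radial_loewner_radii(z0, xi, dt=1e-4, n_steps=3000, eps=1e-3):
    """Euler scheme for the (standard) radial Loewner ODE
        dg/dt = g * (W + g) / (W - g),   W_t = exp(i * xi(t)),
    started from g_0 = z0; records |g_t(z0)| and stops if z0 is about to be
    swallowed (g within eps of the driving point W)."""
    g = z0
    radii = [abs(g)]
    for k in range(n_steps):
        W = cmath.exp(1j * xi(k * dt))
        if abs(W - g) < eps:
            break                      # point (numerically) swallowed
        g = g + dt * g * (W + g) / (W - g)
        radii.append(abs(g))
    return radii

# arbitrary wiggly driving function; any continuous xi exhibits the same behavior
radii = radial_loewner_radii(0.2 + 0.1j, lambda t: 2.0 * math.sin(7.0 * t))
# |1 + dt*q|^2 = 1 + 2*dt*Re(q) + dt^2*|q|^2 > 1 whenever Re(q) > 0,
# so each Euler step increases |g| exactly as the lemma predicts
assert all(r2 >= r1 for r1, r2 in zip(radii, radii[1:]))
```

The Euler scheme even preserves the monotonicity exactly, since each step multiplies $g$ by $1+dt\,q$ with $\Re (q)>0$.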

Proof of (3.10).Fix r > 0 $r>0$ and suppose that P ( S δ , r ε ) a $\mathbb {P}(S_{\delta ,r}^{\varepsilon} )\geqslant a$ for some a > 0 $a>0$ . Recall that S δ , r ε $S_{\delta ,r}^{\varepsilon}$ is the event that there exists S S r $S\in \mathcal {S}_r$ that is separated by η z ε $\eta _z^{\varepsilon}$ from z $z$ during the interval [ σ z , w , δ ε , σ z , w ε ] $[\sigma _{z,w,\delta }^{\varepsilon} , \sigma _{z,w}^{\varepsilon} ]$ and such that the disconnected component containing z $z$ is monocolored. Let h ε , μ ε , y ε $h^{\varepsilon} , \mu ^{\varepsilon} ,y^{\varepsilon}$ be as above Corollary 4.16, and let a = inf ε > 0 min S S r P ( y ε S ) $a^{\prime }=\inf _{\varepsilon > 0}\min _{S\in \mathcal {S}_r}\mathbb {P}(y^{\varepsilon} \in S)$ . Then a $a^{\prime }$ is strictly positive, due to the convergence result Lemma 4.8, plus the fact that min S S r P ( y S ) > 0 $\min _{S\in \mathcal {S}_r}\mathbb {P}(y\in S)>0$ when y $y$ is picked from the critical LQG area measure for a critical unit boundary length disk. By independence, we then have P ( E δ ε ) a a $\mathbb {P}(E_\delta ^{\varepsilon} )\geqslant aa^{\prime }$ , where E δ ε $E_{\delta }^{\varepsilon}$ is the event that σ z , y ε ε [ σ z , w , δ ε , σ z , w ε ] $\sigma ^{\varepsilon }_{z,y^{\varepsilon} }\in [\sigma _{z,w,\delta }^{\varepsilon} ,\sigma _{z,w}^{\varepsilon} ]$ and O z , y ε ε = 1 $\mathcal {O}_{z,y^{\varepsilon} }^{\varepsilon} =1$ .

We can also choose v , m $v,m$ small enough and K $K$ large enough that on an event F m , v , K ε $F_{m,v,K}^{\varepsilon}$ with probability 1 a a / 2 $\geqslant 1-aa^{\prime }/2$ , uniformly in ε $\varepsilon$ :

  • B z ( v ) l z ε $B_z(v)\subset l_z^{\varepsilon}$ (respectively, B w ( v ) l w ε $B_w(v)\subset l_w^{\varepsilon}$ ) where l z ε $l_z^{\varepsilon}$ (respectively, l w ε $l_w^{\varepsilon}$ ) is the first nested CLE κ $\operatorname{CLE}_{{\kappa ^{\prime }}}$ bubble containing z $z$ (respectively, w $w$ ) that is entirely contained in B z ( | z w | / 3 ) $B_z(|z-w|/3)$ (respectively, B w ( | z w | / 3 ) $B_w(|z-w|/3)$ );
  • B z ( v ) $B_z(v)$ and B w ( v ) $B_w(v)$ have μ $\mu$ -mass greater than or equal to m $m$ ;
  • if we map l z ε $l_z^{\varepsilon}$ (respectively, l w ε $l_w^{\varepsilon}$ ) to D $\mathbb {D}$ with z $z$ (respectively, w $w$ ) sent to 0, then the images of B z ( v ) $B_z(v)$ and B w ( v ) $B_w(v)$ are contained in ( 1 / 2 ) D $(1/2)\mathbb {D}$ ; and
  • μ ε ( D ) K $\mu ^{\varepsilon} (\mathbb {D})\leqslant K$ .
Again this is possible because such v , m , K $v,m,K$ can be chosen when ε = 0 , κ = 4 $\varepsilon =0,{\kappa ^{\prime }}=4$ , and we can appeal to the convergence results Proposition 2.18 and Lemma 4.8. Note that on the event F m , v , K ε $F_{m,v,K}^{\varepsilon}$ :
  • (i) B w ( v ) $B_w(v)$ and B z ( v ) $B_z(v)$ are contained in ( D z ε ) t $({\mathbf {D}}_{z}^{\varepsilon} )_t$ for all t ( σ z , w , δ ε , σ z , w ε ) $t\in (\sigma ^{\varepsilon} _{z,w,\delta },\sigma ^{\varepsilon} _{z,w})$ ;
  • (ii) for any t ( σ z , w , δ ε , σ z , w ε ) $t\in (\sigma ^{\varepsilon} _{z,w,\delta },\sigma ^{\varepsilon} _{z,w})$ and conformal map sending ( D z ε ) t $({\mathbf {D}}_{z}^{\varepsilon} )_t$ to D $\mathbb {D}$ with z B z ( v ) $z^{\prime }\in B_z(v)$ sent to 0, the image of B w ( v ) $B_w(v)$ is contained in a 10 δ $10\delta$ neighborhood of D $\partial \mathbb {D}$ .
Point (ii) follows because any such conformal map can be written as the composition of a conformal map from ( D z ε ) t $({\mathbf {D}}_{z}^{\varepsilon} )_t$ to D $\mathbb {D}$ sending z $z$ to 0, and then a conformal map from D D $\mathbb {D}\rightarrow \mathbb {D}$ sending the image of z $z^{\prime }$ , which lies in ( 1 / 2 ) D $(1/2)\mathbb {D}$ , to 0. By Lemma 4.17, w $w$ is sent to distance at most δ $\delta$ from the boundary by the first of these two maps. The third bullet point in the definition of F m , v , K $F_{m,v,K}$ then implies that the whole of B w ( v ) $B_w(v)$ is actually sent within distance 4 δ $4\delta$ of D $\partial \mathbb {D}$ . Distortion estimates near the boundary for the second conformal map allow one to deduce (ii).

To finish the proof, we consider the event E δ ε F m , v , K ε $E_\delta ^{\varepsilon} \cap F_{m,v,K}^{\varepsilon}$ which has probability a a / 2 $\geqslant aa^{\prime }/2$ by construction. Conditionally on this event, if we sample a point from D z , y ε ε $\mathbf {D}_{z,y^{\varepsilon} }^{\varepsilon}$ according to the measure μ ε $\mu ^{\varepsilon}$ , then this point will lie in B z ( v ) $B_z(v)$ with conditional probability m / K $\geqslant m/K$ . If this happens, then upon mapping to the unit disk with this point sent to the origin, a set of μ ε $\mu ^{\varepsilon}$ mass m $\geqslant m$ (namely B w ( v ) $B_w(v)$ ) will necessarily be sent to D ( 1 10 δ ) D $\mathbb {D}\setminus (1-10\delta )\mathbb {D}$ (see point (ii) above). Note that m / K $m/K$ is a function c ( a ) $c(a)$ of a $a$ only (and in particular does not depend on ε , δ $\varepsilon ,\delta$ ).

So in summary, if P ( S δ , r ε ) a $\mathbb {P}(S_{\delta ,r}^{\varepsilon} )\geqslant a$ , then P ( A δ , m , v ε ) > a a c ( a ) $\mathbb {P}(A_{\delta , m,v}^{\varepsilon} )&gt;aa^{\prime }c(a)$ for some m ( a ) , v ( a ) , c ( a ) $m(a),v(a),c(a)$ depending only on a $a$ , where A δ , m , v ε $A_{\delta ,m,v}^{\varepsilon}$ is as in Corollary 4.16. This means that if (3.10) does not hold, then lim δ 0 lim sup ε 0 P ( A δ , m , v ε ) > 0 $\lim _{\delta \rightarrow 0} \limsup _{\varepsilon \rightarrow 0} \mathbb {P}(A^{\varepsilon} _{\delta , m,v})&gt;0$ for some m , v $m,v$ . This contradicts Corollary 4.16, and hence (3.10) is proved. $\Box$

5 MATING OF TREES FOR κ = 4 $\kappa =4$ AND JOINT CONVERGENCE OF CLE, LQG AND BROWNIAN MOTIONS AS κ 4 $\kappa ^{\prime }\downarrow 4$

Before stating the main theorems, let us briefly take stock of the progress so far. Recall that to each ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ we associate κ = κ ( ε ) = 16 / ( 2 ε ) 2 ${\kappa ^{\prime }}={\kappa ^{\prime }}(\varepsilon )=16/(2-\varepsilon )^2$ , and write ( D z ε ) z Q $(\mathbf {D}_z^{\varepsilon} )_{z\in \mathcal {Q}}$ for the SLE κ ( κ 6 ) $_{\kappa ^{\prime }}(\kappa ^{\prime }-6)$ branches from 1 to z $z$ in a branching SLE κ $_{\kappa ^{\prime }}$ in D $\mathbb {D}$ . These are generated by curves ( η z ε ) z Q $(\eta ^{\varepsilon} _z)_{z\in \mathcal {Q}}$ , so that ( D z ε ) t $(\mathbf {D}_z^{\varepsilon} )_t$ is the connected component of D η z ε $\mathbb {D}\setminus \eta _z^{\varepsilon}$ containing z $z$ for every z $z$ and t $t$ . Recall that this branching SLE defines a nested CLE κ $_{\kappa ^{\prime }}$ which we denote by Γ ε $\Gamma ^{\varepsilon }$ , and a space-filling SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ which we denote by η ε $\eta ^{\varepsilon}$ . The space-filling SLE κ $_{\kappa ^{\prime }}$ η ε $\eta ^{\varepsilon}$ then determines an order on the points in Q $\mathcal {Q}$ : for z , w Q $z,w\in \mathcal {Q}$ we denote by O z , w ε $\mathcal {O}_{z,w}^{\varepsilon}$ the random variable that is 1 if z $z$ is visited before w $w$ by η ε $\eta ^{\varepsilon}$ (or z = w $z=w$ ) and 0 otherwise. We combine these and set
cle ε = ( ( D z ε ) z , Γ ε , ( O z , w ε ) z , w ) $$\begin{equation*} \mathfrak {cle}^{\varepsilon} =((\mathbf {D}^{\varepsilon} _z)_z,\Gamma ^{\varepsilon} ,(\mathcal {O}_{z,w}^{\varepsilon })_{z,w}) \end{equation*}$$
for each ε $\varepsilon$ , where z , w $z,w$ are indexed by Q $\mathcal {Q}$ .
When κ = 4 ${\kappa ^{\prime }}=4$ we have analogous objects. We write Γ $\Gamma$ for a nested CLE 4 $_4$ in D $\mathbb {D}$ , and we assume that Γ $\Gamma$ is coupled with a branching uniform CLE 4 $\operatorname{CLE}_4$ exploration that explores its loops. We write D z $\mathbf {D}_z$ for the branch toward each z Q $z\in \mathcal {Q}$ in this exploration. Finally, we define a collection of independent coin tosses ( O z , w ) z , w Q $(\mathcal {O}_{z,w})_{z,w\in \mathcal {Q}}$ as described at the start of Section 3. Combining these, we set
cle = ( ( D z ) z , Γ , ( O z , w ) z , w ) . $$\begin{equation*} \mathfrak {cle}=((\mathbf {D}_z)_z,\Gamma ,(\mathcal {O}_{z,w})_{z,w}). \end{equation*}$$

The processes D z ε , D z $\mathbf {D}^{\varepsilon} _z,\mathbf {D}_z$ are each parameterized by log $-\log$ conformal radius seen from z $z$ , and equipped with the topology of D z $\mathcal {D}_z$ for every z Q $z\in \mathcal {Q}$ . The loop ensembles Γ ε , Γ $\Gamma ^{\varepsilon} ,\Gamma$ are equipped with the topology of Hausdorff convergence for the countable collection of loops surrounding each z Q $z\in \mathcal {Q}$ .

We also consider, for each ε $\varepsilon$ , a unit boundary length LQG disk as in Definition 4.8, independent of cle ε $\mathfrak {cle}^{\varepsilon}$ , and write
lqg ε = ( μ h ε ε , ν h ε ε , h ε ) $$\begin{equation*} \mathfrak {lqg}^{\varepsilon} =(\mu ^{\varepsilon} _{h^{\varepsilon} },\nu ^{\varepsilon} _{h^{\varepsilon} },h^{\varepsilon} ) \end{equation*}$$
for the associated area measure, boundary length measure and field. We denote by
lqg = ( μ h , ν h , h ) $$\begin{equation*} \mathfrak {lqg}=(\mu _h,\nu _h,h) \end{equation*}$$
its critical counterpart, which we also sample independently of cle $\mathfrak {cle}$ . We equip the fields with the H 1 ( D ) $H^{-1}(\mathbb {D})$ topology, and the measures with the weak topology for measures on D $\mathbb {D}$ and D $\partial \mathbb {D}$ , respectively.

Then by Remark 4.9, Proposition 3.12 and the independence of cle ε $\mathfrak {cle}^{\varepsilon}$ and lqg ε $\mathfrak {lqg}^{\varepsilon}$ (respectively, cle $\mathfrak {cle}$ and lqg $\mathfrak {lqg}$ ), we have that

Proposition 5.1. ( cle ε , lqg ε ) ( cle , lqg ) $(\mathfrak {cle}^{\varepsilon} ,\mathfrak {lqg}^{\varepsilon} )\Rightarrow (\mathfrak {cle},\mathfrak {lqg})$ as ε 0 $\varepsilon \rightarrow 0$ .

Additionally, for every ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ by the mating of trees theorem, Theorem 4.12, ( cle ε , lqg ε ) $(\mathfrak {cle}^{\varepsilon} ,\mathfrak {lqg}^{\varepsilon} )$ determines a collection of Brownian observables
be ε = ( X ε , ( I ε , t ) t , ( ε , t ) t , ( T ε , t ) t , ( X ̂ ε , t ) t ) $$\begin{equation*} \mathfrak {be}^{\varepsilon} =(X^{\varepsilon} ,(\mathcal {I}^{\varepsilon ,\mathfrak {t}})_{\mathfrak {t}}, (\ell ^{\varepsilon ,\mathfrak {t}})_{\mathfrak {t}}, (T^{\varepsilon ,\mathfrak {t}})_{\mathfrak {t}}, (\widehat{X}^{\varepsilon ,\mathfrak {t}})_{\mathfrak {t}} ) \end{equation*}$$
as explained in Section 4.3. Recall that X ε $X^{\varepsilon}$ is π $\sqrt {\pi }$ times an uncorrelated Brownian excursion in the cone { z C : arg ( z ) [ π / 2 + tan 1 ( a ε ) , π / 2 tan 1 ( a ε ) ) } $\lbrace z\in \mathbb {C}: \arg (z)\in [-\pi /2+\tan ^{-1}(a_\varepsilon ),\pi /2-\tan ^{-1}(a_\varepsilon ))\rbrace$ , starting from ( a ε , 1 ) $(a_\varepsilon ,1)$ and ending at the origin, where a ε = ( 1 + cos ( π γ 2 / 4 ) ) / ( 1 cos ( π γ 2 / 4 ) ) = π ε / 2 + o ( ε ) $a_\varepsilon =\sqrt {(1+\cos (\pi \gamma ^2/4))/(1-\cos (\pi \gamma ^2/4))}=\pi \varepsilon /2+o(\varepsilon )$ . The indexing of the above processes is over t R + Q $\mathfrak {t}\in \mathbb {R}_+\cap \mathbb {Q}$ . If we also write
be = ( X , ( I t ) t , ( t ) t , ( T t ) t , ( X ̂ t ) t ) , $$\begin{equation*} \mathfrak {be}=(X,(\mathcal {I}^{\mathfrak {t}})_{\mathfrak {t}}, (\ell ^{\mathfrak {t}})_{\mathfrak {t}}, (T^{\mathfrak {t}})_{\mathfrak {t}}, (\widehat{X}^{\mathfrak {t}})_{\mathfrak {t}} ), \end{equation*}$$
for a tuple with law as described in Section 4.3, then by Lemma 4.15 we have that

Proposition 5.2. be ε be $\mathfrak {be}^{\varepsilon} \Rightarrow \mathfrak {be}$ as ε 0 $\varepsilon \rightarrow 0$ .

Here, I ε , t , I t $\mathcal {I}^{\varepsilon ,\mathfrak {t}},\mathcal {I}^{\mathfrak {t}}$ are equipped with the Hausdorff topology, and the stochastic processes in the definition of be ε , be $\mathfrak {be}^{\varepsilon} , \mathfrak {be}$ are equipped with the Skorokhod topology.
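As a quick numerical sanity check (an aside, assuming the identification γ = 2 − ε that goes with κ′(ε) = 16/(2 − ε)² in the mating-of-trees normalization), one can verify the asymptotics $a_\varepsilon = \pi\varepsilon/2 + o(\varepsilon)$ stated above:

```python
import math

def a_eps(eps: float) -> float:
    # gamma = 4 / sqrt(kappa') with kappa' = 16/(2 - eps)^2, i.e. gamma = 2 - eps;
    # this identification is an assumption of the sketch.
    gamma = 2.0 - eps
    c = math.cos(math.pi * gamma ** 2 / 4)
    return math.sqrt((1 + c) / (1 - c))

# The ratio a_eps(eps) / (pi * eps / 2) should tend to 1 as eps -> 0.
for eps in (0.1, 0.01, 0.001):
    print(eps, a_eps(eps) / (math.pi * eps / 2))
```

The printed ratios approach 1, consistent with the cone opening angle collapsing linearly in ε.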

We now wish to describe the joint limit of ( cle ε , lqg ε , be ε ) $(\mathfrak {cle}^{\varepsilon} , \mathfrak {lqg}^{\varepsilon} , \mathfrak {be}^{\varepsilon} )$ as ε 0 $\varepsilon \rightarrow 0$ . For this, we first need to introduce a little notation.

For z , w Q $z, w\in \mathcal {Q}$ , z w $z\ne w$ , we can consider the first time σ z , w ε $\sigma _{z,w}^{\varepsilon}$ (defined by cle ε $\mathfrak {cle}^{\varepsilon}$ ) at which z $z$ and w $w$ are in different complementary components of D η z ε $\mathbb {D}\setminus \eta _z^{\varepsilon}$ . We let U ε = U ε ( z , w ) D $U^{\varepsilon} =U^{\varepsilon} (z,w)\subset \mathbb {D}$ denote the component which is visited first by the space-filling SLE κ $_{\kappa ^{\prime }}$ η ε $\eta ^{\varepsilon}$ . We say that U ε = U ε ( z , w ) $U^{\varepsilon} =U^{\varepsilon} (z,w)$ is the monocolored component when z $z$ and w $w$ are separated. Let us define
U z ε : = { U D : U = U ε ( z , w ) for some w z with O z , w ε = 0 } $$\begin{equation*} \mathfrak {U}^{\varepsilon} _z:=\lbrace U\subset \mathbb {D}: U=U^{\varepsilon} (z,w) \text{ for some } w\ne z \text{ with } \mathcal {O}^{\varepsilon} _{z,w}=0\rbrace \end{equation*}$$
to be the set of monocolored components separated from z $z$ by η z ε $\eta _z^{\varepsilon}$ . Note that these are naturally ordered, according to the order that they are visited by η ε $\eta ^{\varepsilon}$ . In fact, we may also associate orientations to the elements of U z ε $\mathfrak {U}_z^{\varepsilon}$ : we say that U U z ε $U\in \mathfrak {U}_z^{\varepsilon}$ is ordered clockwise (respectively, counterclockwise) if the boundary of U $U$ is visited by η z ε $\eta _z^{\varepsilon}$ in a clockwise (respectively, counterclockwise) order, and in this case we write sgn ( U ) = 1 $\mathrm{sgn}(U)=-1$ (respectively, + 1 $+1$ ).

Remark 5.3. For ε ( 0 , 2 2 ) $\varepsilon \in (0,2-\sqrt {2})$ , by Theorem 4.12 and the definitions above, we have that

  • the duration of Z ε $Z^{\varepsilon}$ is equal to μ h ε ε ( D ) $\mu _{h^{\varepsilon} }^{\varepsilon} (\mathbb {D})$ , hence X t ε = 0 $X^{\varepsilon}_t =0$ for all t μ h ε ε ( D ) $t\geqslant \mu ^{\varepsilon} _{h^{\varepsilon} }(\mathbb {D})$ almost surely;
  • for z Q $z\in \mathcal {Q}$ , the time t z ε $t_z^{\varepsilon}$ at which η ε $\eta ^{\varepsilon}$ visits z $z$ is almost surely given by μ h ε ε ( U U z ε U ) = U z ε μ h ε ε ( U ) $\mu _{h^{\varepsilon} }^{\varepsilon} (\cup _{U\in \mathfrak {U}_{z}^{\varepsilon} } U)=\sum _{\mathfrak {U}_{z}^{\varepsilon} } \mu _{h^{\varepsilon} }^{\varepsilon} (U)$ ;
  • the ordered ν h ε ε $\nu _{h^{\varepsilon} }^{\varepsilon}$ boundary lengths of the components of U z ε $\mathfrak {U}_z^{\varepsilon}$ are almost surely equal to the ordered jumps of ( B ̂ ε , t z ε ) $(\widehat{B}^{\varepsilon ,t_{z}^{\varepsilon} })$ , and the sign of each jump is equal to the sign of the corresponding element of U z ε $\mathfrak {U}_z^{\varepsilon}$ ; and
  • the ordered μ h ε ε $\mu ^{\varepsilon} _{h^{\varepsilon} }$ masses of the components of U z ε $\mathfrak {U}_z^{\varepsilon}$ are almost surely equal to the ordered jumps of T ε , t z ε $T^{\varepsilon ,t_z^{\varepsilon} }$ .

We can also define analogous objects associated with the CLE 4 $_4$ exploration: if z $z$ and w $w$ are separated at time σ z , w $\sigma _{z,w}$ by the CLE 4 $\operatorname{CLE}_4$ exploration branch toward z $z$ , and O z , w = 1 $\mathcal {O}_{z,w}=1$ we set U ( z , w ) = ( D z ) σ z , w $U(z,w)=(\mathbf {D}_z)_{\sigma _{z,w}}$ ; if O z , w = 0 $\mathcal {O}_{z,w}={0}$ we set U ( z , w ) = ( D w ) σ w , z $U(z,w)=(\mathbf {D}_w)_{\sigma _{w,z}}$ . The set U z $\mathfrak {U}_z$ is then defined in exactly the same way. Note that in this case the elements of U z $\mathfrak {U}_z$ are ordered by declaring that U $U$ comes before U $U^{\prime }$ if and only if U = U ( z , w ) $U=U(z,w)$ and U = U ( z , w ) $U^{\prime }=U(z,w^{\prime })$ for w w $w\ne w^{\prime }$ such that O w , w = 0 $\mathcal {O}_{w^{\prime },w}=0$ . We now say that U U z $U\in \mathfrak {U}_z$ is ordered clockwise (respectively, counterclockwise) if there is an even (respectively, odd) number of loops which enclose U $U$ , and write sgn ( U ) = 1 $\mathrm{sgn}(U)=-1$ (respectively, + 1 $+1$ ).

The main ingredient that will allow us to describe the joint limit of ( cle ε , lqg ε , be ε ) $(\mathfrak {cle}^{\varepsilon} , \mathfrak {lqg}^{\varepsilon} , \mathfrak {be}^{\varepsilon} )$ is the following:

Proposition 5.4. Given ( cle ε , lqg ε ) $(\mathfrak {cle}^{\varepsilon} , \mathfrak {lqg}^{\varepsilon} )$ , denote by z ε $ z^{\varepsilon}$ a point sampled from μ h ε ε $\mu ^{\varepsilon} _{h^{\varepsilon} }$ in D $\mathbb {D}$ (normalized to be a probability measure) and, given ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ , denote by z $z$ a point sampled in the same way from μ h $\mu _{h}$ . For a given δ > 0 $\delta >0$ , write ( U 1 ε , , U N ε ε ) $(U_1^{\varepsilon },\dots , U_{N^{\varepsilon} }^{\varepsilon} )$ for the ordered components of U z ε ε $\mathfrak {U}_{z^{\varepsilon} }^{\varepsilon }$ with μ h ε ε $\mu ^{\varepsilon} _{h^{\varepsilon} }$ area δ $\geqslant \delta$ , and define ( U 1 , , U N ) $(U_1,\dots , U_N)$ similarly for the ordered components of U z $\mathfrak {U}_{z}$ with μ h $\mu _h$ area δ $\geqslant \delta$ . Suppose that w i ε $w_i^{\varepsilon}$ for 1 i N ε $1\leqslant i \leqslant N^{\varepsilon}$ (respectively, w i $w_i$ for 1 i N $1\leqslant i \leqslant N$ ) are sampled from μ h ε ε | U i ε $\mu ^{\varepsilon}_{h^{\varepsilon}} |_{U_i^{\varepsilon} }$ (respectively, μ h | U i $\mu _h|_{U_i}$ ) normalized to be probability measures, and g i ε : U i ε D $g_i^{\varepsilon} :U_i^{\varepsilon} \rightarrow \mathbb {D}$ (respectively, g i : U i D $g_i:U_i\rightarrow \mathbb {D}$ ) are the conformal maps that send w i ε $w_i^{\varepsilon}$ to 0 (respectively, w i $w_i$ to 0) with positive real derivative at w i ε $w_i^{\varepsilon}$ (respectively, w i $w_i$ ). Set sgn ( U i ε ) = w i ε = 0 $\mathrm{sgn}(U_i^{\varepsilon} )=w_i^{\varepsilon} =0$ (respectively, sgn ( U i ) = w i = 0 $\mathrm{sgn}(U_i)=w_i=0$ ) and g i ε ( h ε ) $g_i^{\varepsilon} (h^{\varepsilon} )$ (respectively, g i ( h ) $g_i(h)$ ) to be the 0 function for i > N ε $i>N^{\varepsilon}$ (respectively, i > N $i>N$ ). Then

( cle ε , lqg ε , z ε , ( sgn ( U i ε ) ) i 1 , ( w i ε ) i 1 , ( g i ε ( h ε ) ) i 1 ) ( cle , lqg , z , ( sgn ( U i ) ) i 1 , ( w i ) i 1 , ( g i ( h ) ) i 1 ) $$\begin{equation*} (\mathfrak {cle}^{\varepsilon} , \mathfrak {lqg}^{\varepsilon} , z^{\varepsilon} ,(\mathrm{sgn}(U_i^{\varepsilon} ))_{i\geqslant 1}, (w_i^{\varepsilon} )_{i\geqslant 1}, (g_i^{\varepsilon} (h^{\varepsilon} ))_{i\geqslant 1}) \Rightarrow (\mathfrak {cle},\mathfrak {lqg},z,(\mathrm{sgn}(U_i))_{i\geqslant 1}, (w_i)_{i\geqslant 1}, (g_i(h))_{i\geqslant 1}) \end{equation*}$$
as ε 0 $\varepsilon \rightarrow 0$ . The fields g i ε ( h ε ) $g_i^{\varepsilon} (h^{\varepsilon} )$ and g i ( h ) $g_i(h)$ above are defined using the change of coordinates formula (4.1).

In other words, on a probability space where the convergence in Proposition 5.4 holds almost surely, the ordered and signed sequence of monocolored quantum surfaces separated from z ε n $z^{\varepsilon _n}$ converges almost surely, as a sequence of quantum surfaces (see above (4.1)), to the ordered and signed sequence of monocolored quantum surfaces separated from z $z$ as n $n\rightarrow \infty$ .

From this, we can deduce our main theorem.

Theorem 5.5. ( cle ε , lqg ε , be ε ) $(\mathfrak {cle}^{\varepsilon} , \mathfrak {lqg}^{\varepsilon} , \mathfrak {be}^{\varepsilon} )$ converges jointly in law to a tuple ( cle , lqg , be ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be})$ as ε 0 $\varepsilon \downarrow 0$ . In the limiting tuple, cle , lqg , be $\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be}$ have marginal laws as above, cle $\mathfrak {cle}$ and lqg $\mathfrak {lqg}$ are independent, and ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ determines be $\mathfrak {be}$ .

Furthermore, we have the following explicit description of the correspondence between ( cle , lqg ) $(\mathfrak {cle},\mathfrak {lqg})$ and be $\mathfrak {be}$ in the limit. Suppose that z D $z\in \mathbb {D}$ is sampled from the critical Liouville measure μ h $\mu _h$ , normalized to be a probability measure. Then

  • X t = 0 $X_t=0$ for all t μ h ( D ) $ t\geqslant \mu _h(\mathbb {D})$ almost surely, and the conditional law of
    t z : = μ h U U z U $$\begin{equation} t_z:= \mu _h{\left(\cup _{U\in \mathfrak {U}_z} U \right)}\end{equation}$$ (5.1)
    given ( cle , lqg , be ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be})$ is uniform on ( 0 , μ h ( D ) ) $(0,\mu _h(\mathbb {D}))$ ,
  • X t z = ( A t z , B t z ) $X_{t_z}=(A_{t_z},B_{t_z})$ satisfies the following for a deterministic constant c > 0 $c>0$ :
    A t z = c lim inf δ 0 δ N δ and B t z = 1 + U U z sgn ( U ) ν h ( U ) $$\begin{equation} A_{t_z}={c}\liminf _{\delta \rightarrow 0} \delta N_\delta \text{ and } B_{t_z}=1+\sum _{U\in \mathfrak {U}_z} \mathrm{sgn}(U) \nu _h(\partial U)\end{equation}$$ (5.2)
    almost surely, where for δ > 0 $\delta >0$ , N δ $N_\delta$ is the number of domains U U z $U\in \mathfrak {U}_z$ such that ν h ( U ) ( δ / 2 , δ ) $\nu _h(\partial U)\in (\delta /2,\delta )$ ,
  • the ordered collection ( μ h ( U ) , sgn ( U ) ν h ( U ) ) U U z $(\mu _h(U),\mathrm{sgn}(U)\nu _h(\partial U))_{U\in \mathfrak {U}_z}$ is almost surely equal to the ordered collection of jumps of ( T t z , B ̂ t z ) $(T^{t_z},\widehat{B}^{t_z})$ (where ( T t z , B ̂ t z ) $(T^{t_z},\widehat{B}^{t_z})$ are defined from be $\mathfrak {be}$ as in Section 4.3).
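To see why a limit like $\delta N_\delta$ in (5.2) can be nondegenerate, consider a toy computation (our own illustration, not the paper's argument). Suppose the relevant boundary lengths formed a Poisson point process with a Cauchy-type intensity $\ell\, x^{-2}\,dx$, an assumption chosen here purely because it makes $\delta N_\delta$ converge. Then $N_\delta$ is Poisson with mean $\ell \int_{\delta /2}^{\delta} x^{-2}\,dx = \ell /\delta$, so $\delta N_\delta$ concentrates around $\ell$:

```python
import math
import random

random.seed(42)

def poisson(lam: float) -> int:
    """Knuth's Poisson sampler; fine for moderate lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Toy model (illustration only): jump sizes form a Poisson point process
# with intensity ell * x^{-2} dx, so the number of jumps with size in
# (delta/2, delta) is Poisson with mean ell/delta.
ell, delta, trials = 2.0, 0.1, 5000
avg = sum(delta * poisson(ell / delta) for _ in range(trials)) / trials
print(avg)  # concentrates near ell = 2.0
```

This is only a caricature of the actual mechanism, but it illustrates how counting small jumps at scale δ, rescaled by δ, can recover a local-time-type quantity.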

Note that
A t z = A ̂ t z t z = t z t z $$\begin{equation} A_{t_z}=\widehat{A}_{\ell ^{t_z}_{t_z}}=\ell ^{t_z}_{t_z}\end{equation}$$ (5.3)
is the limit as ε 0 $\varepsilon \rightarrow 0$ of the total length of the SLE κ ( κ 6 ) $\operatorname{SLE}_{\kappa ^{\prime }}({\kappa ^{\prime }}-6)$ branch toward z $z$ in the quantum natural parameterization. We can therefore view A t z $A_{t_z}$ as a limiting ‘quantum natural distance’ of z $z$ from the boundary of the disk. In a similar vein, we record in Table 1 some of the correspondences between the CLE 4 $\operatorname{CLE}_4$ decorated critical LQG disk with order variables ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ and the Brownian excursion be $\mathfrak {be}$ , where z , w $z,w$ are points sampled from the critical LQG measure μ h $\mu _h$ in the bulk.
TABLE 1. Correspondences between the Brownian excursion be $\mathfrak {be}$ and the CLE 4 $\operatorname{CLE}_4$ -decorated critical LQG disk ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ .
$\mathfrak {be}$  |  $(\mathfrak {cle}, \mathfrak {lqg})$
Duration of $X$  |  $\mu _h(\mathbb {D})$
$\lbrace t_w<t_z\rbrace$  |  $\lbrace \mathcal {O}_{w,z}=1\rbrace =$ ‘ $w$ ordered before $z$ '
$t_z$  |  $\mu _h(\overline{\lbrace w\in \mathcal {Q}: \mathcal {O}_{w,z}=1\rbrace })=$ ‘quantum area of points ordered before $z$ '
$A_{t_z}$  |  Quantum natural distance of $z$ from $\partial \mathbb {D}$
Jumps of $\widehat{B}^{t_z}$  |  LQG boundary lengths of ‘components ordered before $z$ '
Sign of jump  |  Parity of $\#\ \lbrace \operatorname{CLE}_4 \text{ loops surrounding component}\rbrace$
Jumps of $T^{t_z}$  |  LQG areas of ‘components ordered before $z$ '
CRT encoded by $A$  |  $\operatorname{CLE}_4$ exploration branches parameterized by quantum natural distance

Proof of Theorem 5.5 given Proposition 5.4. Since we know the marginal convergence of each component of ( cle ε , lqg ε , be ε ) $(\mathfrak {cle}^{\varepsilon} , \mathfrak {lqg}^{\varepsilon} , \mathfrak {be}^{\varepsilon} )$ , we know that the triple is tight in ε $\varepsilon$ . Thus our task is to characterize any subsequential limit ( cle , lqg , be ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be})$ of ( cle ε , lqg ε , be ε ) $(\mathfrak {cle}^{\varepsilon} , \mathfrak {lqg}^{\varepsilon} ,\mathfrak {be}^{\varepsilon} )$ . Note that Proposition 5.1 already tells us that ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ are independent, and Proposition 5.2 tells us that the marginal law of be $\mathfrak {be}$ is that of a Brownian half-plane excursion plus associated observables.

To characterize the law of ( cle , lqg , be ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be})$ , we will prove that if z D $z\in \mathbb {D}$ is sampled according to μ h $\mu _h$ in D $\mathbb {D}$ , conditionally independently of the rest of ( cle , lqg , be ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be})$ , then

  • (i) the duration of X $X$ is equal to μ h ( D ) $\mu _h(\mathbb {D})$ almost surely;
  • (ii) t z $t_z$ defined by (5.1) is conditionally uniform on ( 0 , μ h ( D ) ) $(0,\mu _h(\mathbb {D}))$ given ( cle , lqg , be ) $(\mathfrak {cle}, \mathfrak {lqg},\mathfrak {be})$ ;
  • (iii) the ordered collection ( μ h ( U ) , sgn ( U ) ν h ( U ) ) U U z $(\mu _h(U),\mathrm{sgn}(U)\nu _h(\partial U))_{U\in \mathfrak {U}_z}$ is almost surely equal to the ordered collection of jumps of ( T t z , B ̂ t z ) $(T^{t_z},\widehat{B}^{t_z})$ (defined from be $\mathfrak {be}$ as in Section 4.3); and
  • (iv) A t z , B t z $A_{t_z}, B_{t_z}$ satisfy (5.2) almost surely.
Let us remark already that the above claim is enough to complete the proof of the theorem. Indeed, suppose that ( cle , lqg , be ) $(\mathfrak {cle}, \mathfrak {lqg},\mathfrak {be})$ is a subsequential limit in law of ( cle ε , lqg ε , be ε ) $(\mathfrak {cle}^{\varepsilon} , \mathfrak {lqg}^{\varepsilon} , \mathfrak {be}^{\varepsilon} )$ as ε 0 $\varepsilon \rightarrow 0$ and let ( cle , lqg , be , be ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be}, \mathfrak {be}^{\prime })$ be coupled so that ( cle , lqg , be ) $(\mathfrak {cle},\mathfrak {lqg},\mathfrak {be})$ is equal in law to ( cle , lqg , be ) $(\mathfrak {cle},\mathfrak {lqg}, \mathfrak {be}^{\prime })$ , while be , be $\mathfrak {be},\mathfrak {be}^{\prime }$ are conditionally independent given cle , lqg $\mathfrak {cle}, \mathfrak {lqg}$ . Further sample z $z$ from μ h $\mu _h$ in D $\mathbb {D}$ , conditionally independently of the rest of ( cle , lqg , be , be ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be},\mathfrak {be}^{\prime })$ , so that (i)–(iv) hold for ( cle , lqg , be , z ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be}, z)$ and for ( cle , lqg , be , z ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be}{^{\prime }}, z)$ (with X , A , B $X,A,B$ replaced by their counterparts X , A , B $X^{\prime },A^{\prime },B^{\prime }$ for be $\mathfrak {be}^{\prime }$ .) Then by (i) and (ii), and since X ( be ) $X(\mathfrak {be})$ , X ( be ) $X(\mathfrak {be}^{\prime })$ are almost surely continuous, if P ( be be ) $\mathbb {P}(\mathfrak {be}\ne \mathfrak {be}^{\prime })$ were strictly positive then P ( X ( be ) t z X ( be ) t z ) $\mathbb {P}(X(\mathfrak {be})_{t_z}\ne X(\mathfrak {be}^{\prime })_{t_z})$ would be strictly positive as well. This would contradict (iii) and (iv), so we conclude that be = be $\mathfrak {be}=\mathfrak {be}^{\prime }$ almost surely. 
This means that be $\mathfrak {be}$ is determined by ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ , and the explicit description in the statement of the theorem also follows immediately.

The same argument implies that the law of any subsequential limit is unique. More concretely, suppose that ε n $\varepsilon _n$ , ε n ${\varepsilon }^{\prime }_n$ are two sequences tending to 0 as n $n\rightarrow \infty$ , such that ( cle ε n , lqg ε n , be ε n ) ( cle , lqg , be ) $(\mathfrak {cle}^{\varepsilon _n},\mathfrak {lqg}^{\varepsilon _n},\mathfrak {be}^{\varepsilon _n})\Rightarrow (\mathfrak {cle}, \mathfrak {lqg},\mathfrak {be})$ and ( cle ε n , lqg ε n , be ε n ) ( cle , lqg , be ) $(\mathfrak {cle}^{\varepsilon ^{\prime }_n},\mathfrak {lqg}^{\varepsilon ^{\prime }_n},\mathfrak {be}^{\varepsilon ^{\prime }_n})\Rightarrow (\mathfrak {cle}^{\prime },\mathfrak {lqg}^{\prime },\mathfrak {be}^{\prime })$ as n $n\rightarrow \infty$ . Then we can also take a joint subsequential limit of ( cle ε n , lqg ε n , be ε n , cle ε n , lqg ε n , be ε n ) $(\mathfrak {cle}^{\varepsilon _n},\mathfrak {lqg}^{\varepsilon _n},\mathfrak {be}^{\varepsilon _n},\mathfrak {cle}^{\varepsilon ^{\prime }_n},\mathfrak {lqg}^{\varepsilon ^{\prime }_n},\mathfrak {be}^{\varepsilon ^{\prime }_n})$ ; call it ( cle , lqg , be , cle , lqg , be ) $(\mathfrak {cle},\mathfrak {lqg},\mathfrak {be},\mathfrak {cle}^{\prime },\mathfrak {lqg}^{\prime },\mathfrak {be}^{\prime })$ where necessarily cle = cle $\mathfrak {cle}=\mathfrak {cle}^{\prime }$ and lqg = lqg $\mathfrak {lqg}=\mathfrak {lqg}^{\prime }$ , since we already know the convergence ( cle ε , lqg ε ) ( cle , lqg ) $(\mathfrak {cle}^{\varepsilon} ,\mathfrak {lqg}^{\varepsilon} )\Rightarrow (\mathfrak {cle},\mathfrak {lqg})$ . Repeating the argument of the previous paragraph gives that be = be $\mathfrak {be}=\mathfrak {be}^{\prime }$ almost surely. In particular, the marginal law of ( cle , lqg , be ) $(\mathfrak {cle}^{\prime },\mathfrak {lqg}^{\prime },\mathfrak {be}^{\prime })$ is the same as that of ( cle , lqg , be ) $(\mathfrak {cle},\mathfrak {lqg},\mathfrak {be})$ .

So we are left to justify the above claim. To this end, let

( cle , lqg , be ) $$\begin{equation} (\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be}) \end{equation}$$ (5.4)
be a subsequential limit along some subsequence of ε $\varepsilon$ . By Proposition 5.4 and passing to a further subsequence if necessary, we may extend this to the convergence
( cle ε n , lqg ε n , z ε n , be ε n , ( sgn ( U i ε n , δ ) ) i 1 , ( g i ε n , δ ( h ε n ) ) i 1 δ Q ( 0 , 1 ) ) ( cle , lqg , z , be , ( sgn ( U i δ ) ) i 1 , ( g i δ ( h ) ) i 1 δ Q ( 0 , 1 ) ) $$\begin{gather} (\mathfrak {cle}^{{\varepsilon _n}},\mathfrak {lqg}^{{\varepsilon _n}}, z^{{\varepsilon _n}}, \mathfrak {be}^{{\varepsilon _n}},{\left((\mathrm{sgn}(U_i^{{{\varepsilon _n},\delta }}))_{i\geqslant 1}, (g_i^{{{\varepsilon _n},\delta }}(h^{\varepsilon _n}))_{i\geqslant 1}\right)}_{\delta \in \mathbb {Q}\cap (0,1)} )\nonumber \\ \Rightarrow \nonumber \\ (\mathfrak {cle}, \mathfrak {lqg},z,\mathfrak {be}, {\left((\mathrm{sgn}(U_i^{{\delta }}))_{i\geqslant 1}, (g_i^{{\delta }}(h))_{i\geqslant 1}\right)}_{\delta \in \mathbb {Q} \cap (0,1)})\end{gather}$$ (5.5)
along some ε n 0 ${\varepsilon _n}\downarrow 0$ , where for every δ Q ( 0 , 1 ) $\delta {\in \mathbb {Q} \cap (0,1)}$ the joint law of
cle ε n , lqg ε n , z ε n , be ε n , ( ( sgn ( U i ε n , δ ) ) i 1 , ( g i ε n , δ ( h ε n ) ) i 1 ) δ Q ( 0 , 1 ) and cle , lqg , z , ( ( sgn ( U i δ ) ) i 1 , ( g i δ ( h ) ) i 1 ) δ Q ( 0 , 1 ) $$\begin{eqnarray*} &&\left(\mathfrak {cle}^{{\varepsilon _n}},\mathfrak {lqg}^{{\varepsilon _n}}, z^{{\varepsilon _n}}, \mathfrak {be}^{{\varepsilon _n}},{\left((\mathrm{sgn}(U_i^{{{\varepsilon _n},\delta }}))_{i\geqslant 1}, (g_i^{{{\varepsilon _n},\delta }}(h^{\varepsilon _n}))_{i\geqslant 1}\right)}_{\delta \in \mathbb {Q}\cap (0,1)} \right)\\ &&\quad \text{ and }\left(\mathfrak {cle}, \mathfrak {lqg}, z, {\left((\mathrm{sgn}(U_i^{{\delta }}))_{i\geqslant 1}, (g_i^{{\delta }}(h))_{i\geqslant 1}\right)}_{\delta \in \mathbb {Q}\cap (0,1)}\right) \end{eqnarray*}$$
are as in Proposition 5.4 (now with the dependence on δ $\delta$ indicated for clarity) and the joint law of ( cle , lqg , be ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be})$ is the one assumed in (5.4). Note that the conditional law of z $z$ given ( cle , lqg , be ) $(\mathfrak {cle},\mathfrak {lqg},\mathfrak {be})$ is that of a sample from μ h $\mu _h$ , since the same is true at every approximate level and since μ h ε n ε n $\mu ^{{\varepsilon _n}}_{h^{\varepsilon _n}}$ converges as part of lqg ε n $\mathfrak {lqg}^{{\varepsilon _n}}$ .

We next argue that the convergence (5.5) necessarily implies the joint convergence

cle ε n , lqg ε n , z ε n , be ε n , sgn U i ε n , δ i 1 , g i ε n , δ h ε n i 1 , μ h ε n ε n U i ε n , δ i 1 , ν h ε n ε n U i ε n , δ i 1 δ Q ( 0 , 1 ) ( cle , lqg , z , be , ( sgn ( U i δ ) ) i 1 , ( g i δ ( h ) ) i 1 , ( μ h ( U i δ ) ) i 1 , ( ν h ( U i δ ) ) i 1 δ Q ( 0 , 1 ) ) $$\begin{gather} \left(\mathfrak {cle}^{{\varepsilon _n}},\mathfrak {lqg}^{{\varepsilon _n}}, z^{{\varepsilon _n}}, \mathfrak {be}^{{\varepsilon _n}},\left(\left(\mathrm{sgn}\left(U_i^{{{\varepsilon _n},\delta }}\right)\right)_{i\geqslant 1},\left(g_i^{{{\varepsilon _n},\delta }}\left(h^{\varepsilon _n}\right)\right)_{i\geqslant 1}, \left(\mu ^{\varepsilon _n}_{h^{\varepsilon _n}}\left(U_i^{{{\varepsilon _n},\delta }}\right)\right)_{i\geqslant 1},\right.\right.\nonumber\\ \left.\left. \left(\nu _{h^{\varepsilon _n}}^{\varepsilon _n}\left(\partial U_i^{{{\varepsilon _n},\delta }}\right)\right)_{i\geqslant 1 }\right)_{\delta \in \mathbb {Q}\cap (0,1)}\right) \nonumber \\ \Rightarrow \nonumber \\ (\mathfrak {cle}, \mathfrak {lqg},z, \mathfrak {be}, {\left((\mathrm{sgn}(U_i^{{\delta }}))_{i\geqslant 1}, (g_i^{{\delta }}(h))_{i\geqslant 1}, (\mu _h(U_i^{{\delta }}))_{i\geqslant 1 }, (\nu _{h}(\partial U^{{\delta }}_i))_{i\geqslant 1 }\right)}_{\delta \in \mathbb {Q}\cap (0,1)}) \end{gather}$$ (5.6)
as n $n\rightarrow \infty$ , where the initial components are exactly as in (5.5). Indeed, we know that the tuple on the left is tight in n $n$ , because the first six terms are tight by the above, and both ( μ h ε n ε n ( U i ε n , δ ) ) i 1 $(\mu ^{\varepsilon _n}_{h^{\varepsilon _n}}(U_i^{{{\varepsilon _n},\delta }}))_{i\geqslant 1}$ and ( ν h ε n ε n ( U i ε n , δ ) ) i 1 $(\nu _{h^{\varepsilon _n}}^{\varepsilon _n}(\partial U_i^{{{\varepsilon _n},\delta }}))_{i\geqslant 1 }$ are sequences with only a tight number of non-zero terms, and with all non-zero terms bounded by convergent quantities in ( lqg ε n , be ε n ) $(\mathfrak {lqg}^{\varepsilon _n},\mathfrak {be}^{\varepsilon _n})$ . On the other hand, for any fixed δ $\delta$ , i $i$ and n $n$ ,
μ h ε n ε n ( U i ε n , δ ) = μ g i ε n , δ ( h ε n ) ε n ( D ) and ν h ε n ε n ( U i ε n , δ ) = ν g i ε n , δ ( h ε n ) ε n ( D ) , $$\begin{equation*} \mu _{h^{\varepsilon _n}}^{\varepsilon _n}(U_i^{{{\varepsilon _n},\delta }})=\mu _{g_i^{{{\varepsilon _n},\delta }}(h^{\varepsilon _n})}^{\varepsilon _n}(\mathbb {D}) \text{ and } \nu _{h^{\varepsilon _n}}^{\varepsilon _n}(\partial U_i^{{{\varepsilon _n},\delta }})=\nu _{g_i^{{{\varepsilon _n},\delta }}(h^{\varepsilon _n})}^{\varepsilon _n}(\partial \mathbb {D}), \end{equation*}$$
so by Theorem 4.12, ( g i ε n , δ ( h ε n ) , μ h ε n ε n ( U i ε n , δ ) , ν h ε n ε n ( U i ε n , δ ) ) $(g_i^{{{\varepsilon _n},\delta }}(h^{\varepsilon _n}), \mu _{h^{\varepsilon _n}}^{\varepsilon _n}(U_i^{{{\varepsilon _n},\delta }}), \nu _{h^{\varepsilon _n}}^{\varepsilon _n}(\partial U_i^{{{\varepsilon _n},\delta }}))$ is a sequence of γ ( ε n ) $\gamma ({\varepsilon _n})$ -quantum disks together with their quantum boundary lengths and areas. We can therefore apply Remark 4.11 to deduce that any subsequential limit in law ( g i ( h ) , μ , ν ) $(g_i(h),\mu ^*,\nu ^*)$ of ( g i ε n , δ ( h ε n ) , μ h ε n ε n ( U i ε n , δ ) , ν h ε n ε n ( U i ε n , δ ) ) $(g_i^{{{\varepsilon _n},\delta }}(h^{\varepsilon _n}), \mu _{h^{\varepsilon _n}}^{\varepsilon _n}(U_i^{{{\varepsilon _n},\delta }}), \nu _{h^{\varepsilon _n}}^{\varepsilon _n}(\partial U_i^{{{\varepsilon _n},\delta }}))$ must be equal to
( g i δ ( h ) , μ g i δ ( h ) ( D ) , ν g i δ ( h ) ( D ) ) = ( g i δ ( h ) , μ h ( U i δ ) , ν h ( U i δ ) ) . $$\begin{equation*} (g_i^{{\delta }}(h),\mu _{g_i^{{\delta }}(h)}(\mathbb {D}),\nu _{g_i^{{\delta }}(h)}(\partial \mathbb {D}))=(g_i^{{\delta }}(h),\mu _h(U_i^{{\delta }}),\nu _h(\partial U_i^{{\delta }})). \end{equation*}$$
This concludes the proof of (5.6).

So to summarize, if we have any subsequential limit ( cle , lqg , be ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be})$ of ( cle ε , lqg ε , be ε ) $(\mathfrak {cle}^{\varepsilon} ,\mathfrak {lqg}^{\varepsilon} ,\mathfrak {be}^{\varepsilon} )$ we can couple it with z $z$ (whose conditional law given ( cle , lqg , be ) $(\mathfrak {cle},\mathfrak {lqg},\mathfrak {be})$ is that of a sample from μ h $\mu _h$ ) and with ( U i , g i ) i 1 $(U_i,g_i)_{i\geqslant 1}$ for every positive δ Q $\delta \in \mathbb {Q}$ , such that the joint convergence (5.6) holds along some subsequence ε n 0 ${\varepsilon _n}\downarrow 0$ . By the Skorokhod representation theorem, we may assume that this convergence is almost sure, and so we just need to prove that (i)–(iv) hold for the limit. This essentially follows from Remark 5.3 and the convergence of the final coordinates in (5.6); we give the details for each point below.

  • (i) This holds since X t ε n = 0 $X^{{\varepsilon _n}}_t=0$ for all t μ h ε n ε n ( D ) $t\geqslant \mu ^{{\varepsilon _n}}_{h^{\varepsilon _n}}(\mathbb {D})$ almost surely for every n $n$ , and ( μ h ε n ε n ( D ) , X ε n ) ( μ h ( D ) , X ) $(\mu ^{{\varepsilon _n}}_{h^{\varepsilon _n}}(\mathbb {D}),X^{{\varepsilon _n}}) \rightarrow (\mu _h(\mathbb {D}),X)$ almost surely.
  • (ii) The convergence of the areas in (5.6) implies that
    t z ε n ε n = U z ε n ε n μ h ε n ε n ( U ) $$\begin{equation*} t_{z^{\varepsilon _n}}^{\varepsilon _n}=\sum _{\mathfrak {U}_{z^{\varepsilon _n}}^{\varepsilon _n}} \mu ^{\varepsilon _n}_{h^{\varepsilon _n}}(U) \end{equation*}$$
    converges almost surely to t z $t_z$ defined in (5.1) along the subsequence ε n 0 ${\varepsilon _n}\downarrow 0$ . On the other hand, t z ε n $t_z^{\varepsilon _n}$ is conditionally uniform on ( 0 , μ h ε n ε n ( D ) ) $(0,\mu ^{\varepsilon _n}_{h^{\varepsilon _n}}(\mathbb {D}))$ given ( cle ε n , lqg ε n , be ε n ) $(\mathfrak {cle}^{\varepsilon _n}, \mathfrak {lqg}^{\varepsilon _n}, \mathfrak {be}^{\varepsilon _n})$ for every n $n$ .
  • (iii) The ordered collection of jumps of ( T ε n , t z ε n ε n , B ̂ ε n , t z ε n ε n ) $(T^{{{\varepsilon _n}},t^{{\varepsilon _n}}_{z^{{\varepsilon _n}}}},\widehat{B}^{{{\varepsilon _n}},t_{z^{{\varepsilon _n}}}^{{\varepsilon _n}}})$ converge almost surely to the ordered collection of jumps of ( T t z , B ̂ t z ) $(T^{t_z},\widehat{B}^{t_z})$ on the one hand, by definition of the convergence ( be ε n , z ε n ) ( be , z ) $(\mathfrak {be}^{{\varepsilon _n}},z^{{\varepsilon _n}})\rightarrow (\mathfrak {be},z)$ (and by considering a sequence z n Q $z^n\in \mathcal {Q}$ converging to z $z$ ). On the other hand, they are equal to the ordered collection ( μ h ε n ε n ( U ) , sgn ( U ) ν h ε n ε n ( U ) ) U U z ε n $(\mu ^{{\varepsilon _n}}_{h^{\varepsilon _n}}(U),\mathrm{sgn}(U)\nu _{h^{{\varepsilon _n}}}^{{\varepsilon _n}}(\partial U))_{U\in \mathfrak {U}_z^{{\varepsilon _n}}}$ for every n $n$ . Since this latter collection converges almost surely to the ordered collection ( μ h ( U ) , sgn ( U ) ν h ( U ) ) U U z $(\mu _h(U),\mathrm{sgn}(U)\nu _h(\partial U))_{U\in \mathfrak {U}_z}$ , we obtain (iii).
  • (iv) This follows from (iii) and the fact that the marginal law of X = ( A , B ) $X=(A,B)$ is that of a Brownian excursion in the right half-plane. Specifically, the first coordinate of X $X$ at a given time t $t$ can almost surely be recovered from the jumps of its inverse local time at backward running infima with respect to time t $t$ , see (5.3), and the second coordinate can also be recovered from the collection of its signed jumps when reparameterized by this inverse local time. When t = t z $t=t_z$ , the values are recovered exactly using the formula (5.2) after using (iii) to translate between ( μ h ( U ) , sgn ( U ) ν h ( U ) ) U U z $(\mu _h(U),\mathrm{sgn}(U)\nu _h(\partial U))_{U\in \mathfrak {U}_z}$ and ( T t z , B ̂ t z ) $(T^{t_z},\widehat{B}^{t_z})$ . $\Box$

5.1 Proof of Proposition 5.4

In this subsection, δ $\delta$ is fixed, so we omit it from the notation (just as in the statement of Proposition 5.4). Since the convergence of μ h ε ε $\mu _{h^{\varepsilon} }^{\varepsilon}$ to μ h $\mu _h$ is included in the convergence of ( cle ε , lqg ε ) $(\mathfrak {cle}^{\varepsilon} , \mathfrak {lqg}^{\varepsilon} )$ to ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ it is clear (for example, by working on a probability space where the convergence holds almost surely) that ( cle ε , lqg ε , z ε ) ( cle , lqg , z ) $(\mathfrak {cle}^{\varepsilon} , \mathfrak {lqg}^{\varepsilon} , z^{\varepsilon} )\Rightarrow (\mathfrak {cle},\mathfrak {lqg}, z)$ as ε 0 $\varepsilon \rightarrow 0$ . From here, the proof proceeds via the following steps.
  • (1) The tuples on the left-hand side in Proposition 5.4 are tight in ε $\varepsilon$ , so we may take a subsequential limit ( cle , lqg , z , ( s i ) i 1 , ( w i ) i 1 , ( h i ) i 1 ) $(\mathfrak {cle}, \mathfrak {lqg}, z, (s_i)_{i\geqslant 1}, (w_i)_{i\geqslant 1}, (h_i)_{i\geqslant 1})$ (that we will work with for the remainder of the proof).
  • (2) w i D Γ $w_i\in \mathbb {D}\setminus \Gamma$ (that is, w i $w_i$ is not on any nested CLE 4 $_4$ loop) for all i $i$ almost surely.
  • (3) If g i : U ( z , w i ) D $\widetilde{g}_i:U(z,w_i) {\rightarrow } \mathbb {D}$ are conformal with g i ( w i ) = 0 $\widetilde{g}_i(w_i)=0$ and g i ( w i ) > 0 $\widetilde{g}_i^{\prime }(w_i)>0$ , then h i = g i ( h ) $h_i=\widetilde{g}_i(h)$ for each i $i$ almost surely.
  • (4) Given ( cle , lqg , z ) $(\mathfrak {cle},\mathfrak {lqg},z)$ , the w i $w_i$ are conditionally independent and distributed according to μ h $\mu _h$ in each U ( z , w i ) $U(z,w_i)$ .
  • (5) { U U z : μ h ( U ) δ } = { U ( z , w i ) } i 1 $\lbrace U\in \mathfrak {U}_z: \mu _h(U)\geqslant \delta \rbrace =\lbrace U(z,w_i)\rbrace _{i\geqslant 1}$ almost surely, where the set on the left is ordered as usual.
  • (6) s i = sgn ( U ( z , w i ) ) $s_i=\mathrm{sgn}(U(z,w_i))$ for each i $i$ almost surely.

These clearly suffice for the proposition.

Proof of (1). Tightness of the first five components follows from the fact that ( cle ε , lqg ε , z ε ) ( cle , lqg , z ) $(\mathfrak {cle}^{\varepsilon} , \mathfrak {lqg}^{\varepsilon} , z^{\varepsilon} )\Rightarrow (\mathfrak {cle},\mathfrak {lqg}, z)$ as ε 0 $\varepsilon \rightarrow 0$ , plus the tightness of the quantum boundary lengths in U z ε $\mathfrak {U}_z^{\varepsilon}$ (recall that these converge when be ε $\mathfrak {be}^{\varepsilon}$ converges). To see the tightness of ( g i ε ( h ε ) ) i 1 $(g_i^{\varepsilon} (h^{\varepsilon} ))_{i\geqslant 1}$ we note that there are at most μ h ε ε ( D ) / δ $\mu ^{\varepsilon} _{h^{\varepsilon} }(\mathbb {D})/\delta$ non-zero terms, where μ h ε ε ( D ) $\mu ^{\varepsilon} _{h^{\varepsilon} }(\mathbb {D})$ is tight in ε $\varepsilon$ . Moreover, each non-zero g i ε ( h ε ) $g_i^{\varepsilon} (h^{\varepsilon} )$ has the law of h ε θ ε + a ε $\widetilde{h}^{\varepsilon} \circ \theta ^{\varepsilon} +a^{\varepsilon}$ , where h ε $\widetilde{h}^{\varepsilon}$ is as in Lemma 4.10, θ ε $\theta ^{\varepsilon}$ are random rotations (which automatically form a tight sequence in ε $\varepsilon$ ) and the a ε $a^{\varepsilon}$ form a tight sequence of real numbers. This implies the result by Lemma 4.10. $\Box$

Proof of (2). Suppose that ( y j ε ) j 1 $(y_j^{\varepsilon} )_{j\geqslant 1}$ are sampled conditionally independently according to μ h ε ε $\mu ^{\varepsilon} _{h^{\varepsilon} }$ in D $\mathbb {D}$ , normalized to be a probability measure. Then ( cle ε , lqg ε , ( y j ε ) j 1 ) ( cle , lqg , ( y j ) j 1 ) $(\mathfrak {cle}^{\varepsilon} , \mathfrak {lqg}^{\varepsilon} , (y_j^{\varepsilon} )_{j\geqslant 1})\Rightarrow (\mathfrak {cle}, \mathfrak {lqg}, (y_j)_{j\geqslant 1})$ where the ( y j ) j 1 $(y_j)_{j\geqslant 1}$ are sampled conditionally independently from μ h $\mu _h$ and almost surely all lie in D Γ $\mathbb {D}\setminus \Gamma$ . On the other hand, since cle ε $\mathfrak {cle}^{\varepsilon}$ and lqg ε $\mathfrak {lqg}^{\varepsilon}$ are independent, one can sample ( w i ε ) i 1 $(w_i^{\varepsilon} )_{i\geqslant 1}$ by taking ( cle ε , lqg ε , ( y j ε ) j 1 ) $(\mathfrak {cle}^{\varepsilon} ,\mathfrak {lqg}^{\varepsilon} ,(y_j^{\varepsilon} )_{j\geqslant 1})$ and then setting w i ε = y j ε $w_i^{\varepsilon} =y_j^{\varepsilon}$ for each i $i$ , with j = min { k : y k U i ε } $j=\min \lbrace k:y_k\in U_i^{\varepsilon} \rbrace$ . In particular, each w i $w_i$ in the limit is almost surely equal to one of the y j $y_j$ , and hence lies in D Γ $\mathbb {D}\setminus \Gamma$ . $\Box$

Proof of (3). By Skorokhod's theorem, we may work on a probability space where we have the almost sure convergence

( cle ε n , lqg ε n , z ε n , ( sgn ( U i ε n ) ) i , ( w i ε n ) i , ( g i ε n ( h ε n ) ) i ) ( cle , lqg , z , ( s i ) i , ( w i ) i , ( h i ) i ) $$\begin{equation} (\mathfrak {cle}^{\varepsilon _n}, \mathfrak {lqg}^{\varepsilon _n}, z^{\varepsilon _n},(\mathrm{sgn}(U_i^{\varepsilon _n}))_{i}, (w_i^{\varepsilon _n})_{i}, (g_i^{\varepsilon _n}(h^{\varepsilon _n}))_{i})\rightarrow (\mathfrak {cle}, \mathfrak {lqg}, z, (s_i)_{i}, (w_i)_{i}, (h_i)_{i})\end{equation}$$ (5.7)
along a sequence ε n 0 ${\varepsilon _n}\downarrow 0$ . It is then natural to expect, since the w i ε n $w_i^{\varepsilon _n}$ converge almost surely to the w i $w_i$ and cle ε n $\mathfrak {cle}^{\varepsilon _n}$ converges almost surely to cle $\mathfrak {cle}$ , that the maps g i ε n $g_i^{\varepsilon _n}$ will converge to the maps g i $\widetilde{g}_i$ described in (3). Since h ε n $h^{\varepsilon _n}$ also converges almost surely to h $h$ (as part of the convergence lqg ε n lqg $\mathfrak {lqg}^{\varepsilon _n}\rightarrow \mathfrak {lqg}$ ) it therefore follows that h i $h_i$ will almost surely be equal to g i ( h ) $\widetilde{g}_i(h)$ for each i $i$ . This is the essence of the proof. However, one needs to take a little care with the statement concerning the convergence g i ε n g i $g_i^{{\varepsilon _n}} \rightarrow \widetilde{g}_i$ , since the domains U i ε n $U_i^{\varepsilon _n}$ and U ( z , w i ) $U(z,w_i)$ are defined in terms of points that are not necessarily in Q $\mathcal {Q}$ , while the convergence cle ε cle $\mathfrak {cle}^{\varepsilon} \rightarrow \mathfrak {cle}$ is stated in terms of pairs of points in Q $\mathcal {Q}$ .

To carry out the careful argument, let us fix i 1 $i\geqslant 1$ . Since w i D Γ $w_i\in \mathbb {D}\setminus \Gamma$ almost surely by (2), there exist r > 0 $r>0$ and y Q $y\in \mathcal {Q}$ such that B ( y , r ) B ( w i , 2 r ) U ( z , w i ) = ( D w i ) σ w i , z $B(y,r)\subset B(w_i,2r)\subset U(z,w_i)=(\mathbf {D}_{w_i})_{\sigma _{w_i,z}}$ . By taking r $r$ smaller if necessary, we can also find x Q $x\in \mathcal {Q}$ with B ( x , r ) B ( z , 2 r ) ( D z ) σ z , w i $B(x,r)\subset B(z,2r)\subset (\mathbf {D}_z)_{\sigma _{z,w_i}}$ . Note that O z , w i = O x , y = 0 $\mathcal {O}_{z,w_i}=\mathcal {O}_{x,y}=0$ by definition. Due to the almost sure convergence z ε n z $z^{\varepsilon _n}\rightarrow z$ , w i ε n w i $w^{\varepsilon _n}_i \rightarrow w_i$ , and cle ε n cle $\mathfrak {cle}^{\varepsilon _n}\rightarrow \mathfrak {cle}$ it then follows that U ε n ( z ε n , w i ε n ) = U ε n ( x , y ) = ( D y ε n ) σ y , x ε n $U^{\varepsilon _n}(z^{\varepsilon _n},w_i^{\varepsilon _n})=U^{\varepsilon _n}(x, y)=(\mathbf {D}_y^{\varepsilon _n})_{\sigma ^{\varepsilon _n}_{y,x}}$ , and O x , y ε n = O z ε n , w i ε n ε n = 0 $\mathcal {O}^{\varepsilon _n}_{x,y}=\mathcal {O}^{\varepsilon _n}_{z^{\varepsilon _n},w_i^{\varepsilon _n}}=0$ for all n $n$ large enough. Moreover, we know that the maps f ε n : D U ε n ( z ε n , w i ε n ) = ( D y ε n ) σ y , x ε n $f^{\varepsilon _n}:\mathbb {D}\rightarrow U^{\varepsilon _n}(z^{\varepsilon _n}, w_i^{\varepsilon _n})=(\mathbf {D}^{\varepsilon _n}_{y})_{\sigma ^{\varepsilon _n}_{y,x}}$ with f ε n ( 0 ) = y $f^{\varepsilon _n}(0)=y$ , ( f ε n ) ( 0 ) > 0 $(f^{\varepsilon _n})^{\prime }(0)>0$ converge on compacts of D $\mathbb {D}$ to f : D U ( x , y ) = ( D y ) σ y , x $f:\mathbb {D}\rightarrow U(x,y)=(\mathbf {D}_y)_{\sigma _{y,x}}$ sending 0 to y $y$ and with f ( 0 ) > 0 $f^{\prime }(0)>0$ .

On the other hand, ( g i ) 1 = f ϕ $(\widetilde{g}_i)^{-1}=f\circ \phi$ where ϕ : D D $\phi :\mathbb {D}\rightarrow \mathbb {D}$ sends 0 f 1 ( w i ) $0\mapsto f^{-1}(w_i)$ and has ϕ ( 0 ) > 0 $\phi ^{\prime }(0)>0$ , and ( g i ε n ) 1 = f ε n ϕ ε n $(g_i^{\varepsilon _n})^{-1}=f^{\varepsilon _n}\circ \phi ^{\varepsilon _n}$ for each ε n ${\varepsilon _n}$ , where ϕ ε n : D D $\phi ^{\varepsilon _n}:\mathbb {D}\rightarrow \mathbb {D}$ has ϕ ε n ( 0 ) = ( f ε n ) 1 ( w i ε n ) $\phi ^{\varepsilon _n}(0)=(f^{\varepsilon _n})^{-1}(w_i^{\varepsilon _n})$ and ( ϕ ε n ) ( 0 ) > 0 $(\phi ^{\varepsilon _n})^{\prime }(0)>0$ . Since w i ε n w i $w_i^{\varepsilon _n}\rightarrow w_i$ almost surely, and the w i ε n $w_i^{\varepsilon _n}$ are uniformly close to y $y$ and bounded away from the boundary of U ε n ( x , y ) $U^{\varepsilon _n}(x,y)$ , this implies that ( g i ε n ) 1 $(g_i^{{\varepsilon _n}})^{-1}$ converges to g i 1 $\widetilde{g}_i^{-1}$ uniformly on compacts of D $\mathbb {D}$ . In turn, this implies that h i $h_i$ restricted to any compact of D $\mathbb {D}$ is equal to g i ( h ) $\widetilde{g}_i(h)$ , which verifies that h i = g i ( h ) $h_i=\widetilde{g}_i(h)$ almost surely. $\Box$

Proof of (4). For this it suffices to prove that for each i $i$ ,

( cle ε n , lqg ε n , z ε n , w i ε n , g i ε n ( h ε n ) , μ g i ε n ( h ε n ) ε n ) ( cle , lqg , z , w i , h i , μ h i ) $$\begin{equation*} (\mathfrak {cle}^{\varepsilon _n}, \mathfrak {lqg}^{\varepsilon _n}, z^{\varepsilon _n}, w_i^{\varepsilon _n}, g_i^{\varepsilon _n}(h^{\varepsilon _n}),\mu ^{\varepsilon _n}_{g_i^{\varepsilon _n}(h^{\varepsilon _n})})\Rightarrow (\mathfrak {cle}, \mathfrak {lqg}, z, w_i, h_i,\mu _{h_i}) \end{equation*}$$
as n $n\rightarrow \infty$ , where the convergence of the final components is in the sense of weak convergence for measures on D $\mathbb {D}$ . Note that if we work on a space where all but the last components converge almost surely, as in (3), then the proof of (3) shows that h i = g i ( h ) $h_i=\widetilde{g}_i(h)$ and that ( g i ε n ) 1 ( g i ) 1 $(g_i^{\varepsilon _n})^{-1}\rightarrow (\widetilde{g}_i)^{-1}$ almost surely when restricted to compact subsets of D $\mathbb {D}$ . This implies the almost sure convergence of the measures μ g i ε n ( h ε n ) ε n $\mu ^{\varepsilon _n}_{g_i^{\varepsilon _n}(h^{\varepsilon _n})}$ to μ h i $\mu _{h_i}$ when restricted to compact subsets of D $\mathbb {D}$ . On the other hand, μ g i ε n ( h ε n ) ( D ) $\mu _{g_i^{\varepsilon _n}(h^{\varepsilon _n})}(\mathbb {D})$ is a tight sequence in n $n$ , and by Remark 4.11, any subsequential limit ( cle , lqg , z , w i , h i , m ) $(\mathfrak {cle}, \mathfrak {lqg}, z, w_i, h_i, m)$ of ( cle ε n , lqg ε n , z ε n , w i ε n , g i ε n ( h ε n ) , μ g i ε n ( h ε n ) ε n ( D ) ) $(\mathfrak {cle}^{\varepsilon _n}, \mathfrak {lqg}^{\varepsilon _n}, z^{\varepsilon _n}, w_i^{\varepsilon _n}, g_i^{\varepsilon _n}(h^{\varepsilon _n}),\mu ^{\varepsilon _n}_{g_i^{\varepsilon _n}(h^{\varepsilon _n})}(\mathbb {D}))$ has m = μ h i ( D ) $m=\mu _{h_i}(\mathbb {D})$ almost surely. Combining these observations yields the result. $\Box$

Proof of (5). As in (3) we assume that we are working on a probability space where we have almost sure convergence along a sequence ε n 0 ${\varepsilon _n}\downarrow 0$ , so we need to show that the limiting domains U ( z , w i ) $U(z,w_i)$ are precisely the elements of U z $\mathfrak {U}_z$ that have μ h $\mu _h$ area greater than or equal to δ $\delta$ . The same argument as for (4) gives that each U ( z , w i ) $U(z,w_i)$ is a component of U z $\mathfrak {U}_z$ with μ h $\mu _h$ area greater than or equal to δ $\delta$ . So it remains to show that they are the only such elements of U z $\mathfrak {U}_z$ .

For this, suppose that U U z $U\in \mathfrak {U}_z$ has μ h ( U ) δ $\mu _h(U)\geqslant \delta$ . Then μ h ( U ) = δ + r $\mu _h(U)=\delta +r$ for some r > 0 $r>0$ with probability 1. Choosing w Q $w\in \mathcal {Q}$ , a > 0 $a>0$ such that U = U ( z , w ) B ( w , a ) $U=U(z,w)\supset B(w,a)$ , it is easy to see that U ( z , w ) $U(z,w)$ is the almost sure Carathéodory limit seen from w $w$ of U ε n ( z ε n , w ) $U^{\varepsilon _n}(z^{\varepsilon _n}, w)$ as ε n 0 ${\varepsilon _n}\rightarrow 0$ . Using the convergence of μ h ε n ε n $\mu ^{\varepsilon _n}_{h^{\varepsilon _n}}$ to μ h $\mu _h$ and Corollary 2.23, we therefore see that lim n μ h ε n ε n ( U ε n ( z ε n , w ) ) μ h ( U ( z , w ) ) = δ + r $\lim _n \mu _{h^{\varepsilon _n}}^{\varepsilon _n}(U^{\varepsilon _n}(z^{\varepsilon _n},w))\geqslant \mu _h(U(z,w))=\delta +r$ and so U ε n ( z ε n , w ) = U i ε n = U ε n ( z ε n , w i ε n ) $U^{\varepsilon _n}(z^{\varepsilon _n}, w)=U_i^{\varepsilon _n}=U^{\varepsilon _n}(z^{\varepsilon _n},w_i^{\varepsilon _n})$ for some i $i$ and all n $n$ large enough. From here we may argue as in the proof of (3) to deduce that the Carathéodory limit of U ε n ( z ε n , w i ε n ) $U^{\varepsilon _n}(z^{\varepsilon _n},w_i^{\varepsilon _n})$ is equal to U ( z , w i ) $U(z,w_i)$ . Thus, since U = U ( z , w ) $U=U(z,w)$ is the Carathéodory limit of U ε n ( z ε n , w ) $U^{\varepsilon _n}(z^{\varepsilon _n},w)$ which is equal to U ε n ( z ε n , w i ε n ) $U^{\varepsilon _n}(z^{\varepsilon _n},w_i^{\varepsilon _n})$ for all n $n$ large enough, we conclude that U = U ( z , w i ) $U=U(z,w_i)$ .

The fact that the orders of the collections in (5) coincide follows from the convergence of the order variables as part of cle ε cle $\mathfrak {cle}^{\varepsilon} \rightarrow \mathfrak {cle}$ (together with the argument, used several times above, that allows one to transfer from z ε , w i ε $z^{\varepsilon} , w_i^{\varepsilon}$ to points in Q $\mathcal {Q}$ ; we omit the details). $\Box$

Proof of (6). Let us work under almost sure convergence as in the proof of (3), fix i 1 $i\geqslant 1$ and define x , y , r $x,y,r$ as in the proof of (3). By Proposition 3.2, we know that σ y , x ε n σ y , x $\sigma ^{\varepsilon _n}_{y,x}\rightarrow \sigma _{y,x}$ almost surely as n $n\rightarrow \infty$ , and that sgn ( U i ε n ) $\mathrm{sgn}(U_i^{\varepsilon _n})$ is determined by the number of loops nested around y $y$ which D y ε n $\mathbf {D}^{\varepsilon _n}_y$ discovers before or at time σ y , x ε n $\sigma ^{\varepsilon _n}_{y,x}$ (see the definition of CLE loops from the space-filling/branching SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ in Section 2.1.6). If σ y , x $\sigma _{y,x}$ occurs between two such times for D y $\mathbf {D}_y$ , it is clear from the almost sure convergence of σ y , x ε n $\sigma ^{\varepsilon _n}_{y,x}$ and D y ε n $\mathbf {D}_{y}^{\varepsilon _n}$ that the number of loop closure times for D y ε n $\mathbf {D}^{{\varepsilon _n}}_y$ occurring before or at σ y , x ε n $\sigma ^{\varepsilon _n}_{y,x}$ converges to the number of loop closure times for D y $\mathbf {D}_{y}$ occurring before or at time σ y , x $\sigma _{y,x}$ . If σ y , x $\sigma _{y,x}$ is a loop closure time for D y $\mathbf {D}_y$ , the result follows from Lemma 3.11. $\Box$

5.2 Discussion and outlook

The results obtained above open the road to several very natural questions related to the critical mating of trees picture. We describe some of these below. Roughly, they can be stated as follows:
  • 1. Can one obtain a version of critical mating of trees where there is bi-measurability between the decorated LQG surface and the pair of Brownian motions (with possibly additional information included)?
  • 2. There is an interesting relation to the growth-fragmentation processes studied in [1]. Can one combine these two points of view in a fruitful way?
  • 3. The Brownian motion A $A$ encodes a distance of each point to the boundary, and in particular between any CLE 4 $_4$ loop and the boundary. What is its relation to the CLE 4 $_4$ metric introduced in [59]?
  • 4. Can one prove convergence of observables in critical FK-decorated random planar maps toward the observables in the critical mating of trees picture?

Let us finally mention that there are other interesting questions in the realm of critical LQG, for example, the behavior of height functions on top of critical planar maps, which are certainly worth exploring.

5.2.1 Measurability

In the subcritical mating of trees, that is, when κ > 4 $\kappa ^{\prime } > 4$ , γ < 2 $\gamma < 2$ and we consider the coupling ( cle , lqg , be ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be})$ described in the introduction or in Section 5 (for simplicity without subscripts), [18] proves that in the infinite-volume setting the pair ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ determines be $\mathfrak {be}$ and vice versa. In particular, ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ can be obtained from be $\mathfrak {be}$ via a measurable map. This result is extended to the finite volume case of LQG disks in [2].

By contrast, some of this measurability is lost in our critical setting. The easier direction to consider is whether ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ determines be $\mathfrak {be}$ . In the subcritical case this follows essentially from the construction, and it does not matter what we really mean by cle $\mathfrak {cle}$ : the nested CLE κ $_{\kappa ^{\prime }}$ , the space-filling SLE κ $_{\kappa ^{\prime }}$ and the radial exploration tree of CLE κ $_{\kappa ^{\prime }}$ are all measurable with respect to one another. This, however, gets more complicated in the critical case. First, the question of whether the nested CLE 4 $_4$ determines the uniform exploration tree of CLE 4 $_4$ is already not straightforward; this is a theorem of the unpublished work [59]. Moreover, the nested CLE 4 $_4$ no longer determines the space-filling exploration from Section 3: indeed, we saw that to go from the uniform exploration tree to the ordering on points, some additional order variables are needed. These order variables are, however, the only missing information when going from ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ to be $\mathfrak {be}$ : the conclusion of Theorem 5.5 is that when we include the order variables in cle $\mathfrak {cle}$ (in other words, consider the space-filling exploration), then be $\mathfrak {be}$ is indeed measurable with respect to ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ .

In the converse direction, things are trickier. In the coupling considered in this paper, be $\mathfrak {be}$ does not determine the pair ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ ; however, we conjecture that ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ is determined modulo a countable number of ‘rotations’. Informally, one can think of these rotations as follows: a rotation is an operation where we stop the CLE 4 $_4$ exploration at a time when the domain of exploration is split into two domains D $D$ and D $D^{\prime }$ , we consider the LQG surfaces ( D , h ) $(D,h)$ and ( D D , h ) $(\mathbb {D}\setminus D,h)$ , and we conformally weld these two surfaces together differently. The field and loop ensemble ( cle ̂ , lqg ̂ ) $(\widehat{\mathfrak {cle}}, \widehat{\mathfrak {lqg}})$ of the new surface will be different from the pair ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ of the original surface, but their law is unchanged if we choose the new welding appropriately (for example, if we rotate by a fixed amount of LQG length), and be $\mathfrak {be}$ is pathwise unchanged. Therefore performing such a rotation gives us two different pairs ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ and ( cle ̂ , lqg ̂ ) $(\widehat{\mathfrak {cle}}, \widehat{\mathfrak {lqg}})$ with the same law, and which are associated with the same be $\mathfrak {be}$ . We believe that these rotations are the only missing information needed to obtain measurability in this coupling. In fact, by considering a different CLE 4 $_4$ exploration, where loops are pinned in a predetermined way (for example, where all loops are pinned to some trunk, as in, for example, [36]), one could imagine obtaining a different coupling of ( cle , lqg , be ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be})$ , where be $\mathfrak {be}$ does determine ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ .

5.2.2 Growth fragmentation

Below the statement of Theorem 5.5, we saw how certain observables of the Brownian excursion be $\mathfrak {be}$ map to observables (for example, quantum boundary lengths and areas of discovered CLE $\operatorname{CLE}$ loops) in ( cle , lqg ) $(\mathfrak {cle}, \mathfrak {lqg})$ , when we restrict to a single uniform CLE 4 $_4$ exploration branch. Given the definition of the branching CLE 4 $\operatorname{CLE}_4$ exploration (recall that the explorations toward any two points coincide exactly until they are separated by the discovered loops and then evolve independently), this gives one way to define an entire branching process from the Brownian excursion.

In fact, this embedded branching process was already described completely, and independently, in an earlier work of Aïdekon and Da Silva [1]. Namely, given X = ( A , B ) $X=(A,B)$ with law as in Theorem 5.5, one can consider for any a 0 $a\geqslant 0$ the countable collection of excursions of X $X$ to the right side of the vertical line with horizontal component a $a$ . Associated with each such excursion is a total displacement (the difference between the vertical coordinate of the start and end points) and a sign (depending on which of these coordinates is larger). In [1], the authors prove that if one considers the evolution of these signed displacements as a $a$ increases, then one obtains a signed growth fragmentation process with completely explicit law. The fact that this process is a growth fragmentation means, roughly speaking, that it can be described by the evolving ‘mass’ of a family of cells: the mass of the initial cell evolves according to a positive self-similar Markov process, and every time this mass has a jump, a new cell with exactly this mass is introduced into the system. Each such new cell initiates an independent cell system with the same law. In the setting of signed growth fragmentations, masses may be both positive and negative.
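The excursion decomposition just described can be illustrated with a toy simulation. In the sketch below (our own illustration with hypothetical function names, not a construction from the paper or from [1]), a simple random walk on Z² stands in for the half-plane Brownian excursion, and a lattice level stands in for the vertical line at horizontal coordinate a $a$ ; we extract the excursions of the first coordinate strictly above the level, each with its signed displacement in the second coordinate:

```python
import random

def excursions_above(path, a):
    """Excursions of the first coordinate of a lattice path strictly above
    level a.  Each excursion is reported as (start index, end index,
    signed displacement), where the displacement is the change in the
    second coordinate between the start and the end of the excursion.
    An excursion still in progress at the end of the path is dropped."""
    out, start = [], None
    for i in range(1, len(path)):
        if start is None and path[i - 1][0] == a and path[i][0] > a:
            start = i - 1          # the walk leaves level a upwards
        elif start is not None and path[i][0] == a:
            out.append((start, i, path[i][1] - path[start][1]))
            start = None           # excursion closed on return to level a
    return out

def random_walk_with_excursions(steps, a, seed=0):
    """Run a simple random walk (A, B) on Z^2 (one coordinate moves by
    +/-1 per step) and decompose A into its excursions above level a."""
    rng = random.Random(seed)
    A = B = 0
    path = [(0, 0)]
    for _ in range(steps):
        if rng.random() < 0.5:
            A += rng.choice((-1, 1))
        else:
            B += rng.choice((-1, 1))
        path.append((A, B))
    return path, excursions_above(path, a)
```

Collecting these signed displacements as the level a $a$ increases mimics, in caricature, the evolving family of signed fragment masses; the genuine construction of [1] of course works with the continuum excursion and its jump measure, not with a lattice walk.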

In the coupling ( cle , lqg , be ) $(\mathfrak {cle}, \mathfrak {lqg}, \mathfrak {be})$ , such a growth fragmentation is therefore naturally embedded in be $\mathfrak {be}$ . It corresponds to a parameterization of the branching uniform CLE 4 $\operatorname{CLE}_4$ exploration by quantum natural distance from the boundary (that is, by the value of the A $A$ component), and branching occurs whenever components of the disk become disconnected in the exploration. At any given time, the absolute mass of a fragment is equal to the quantum boundary length of the corresponding component, and the sign of the fragment is determined by the number of CLE 4 $\operatorname{CLE}_4$ loops that surround this component.

Let us also mention that growth fragmentations in the setting of CLE on LQG were also studied in [43, 44], and coincide with the growth fragmentations obtained as scaling limits from random planar map explorations in [12]. Taking κ 4 $\kappa \rightarrow 4$ in these settings (either κ 4 $\kappa \uparrow 4$ in [43] or κ 4 $\kappa \downarrow 4$ in [44]) is also very natural and would give other insights about κ = 4 $\kappa =4$ than those obtained in this paper. Lehmkuehler takes this approach in [36].

5.2.3 Link with the conformally invariant metric on CLE 4 $\operatorname{CLE}_4$

Recall the uniform CLE 4 $_4$ exploration from Section 2.1.5, which was introduced by Werner and Wu [64]. Werner and Wu interpret the time t $t$ at which a loop L $\mathcal {L}$ of the CLE 4 $_4$ configuration Γ $\Gamma$ is added, with the time parameterization (2.8), as the distance of L $\mathcal {L}$ to the boundary D $\partial \mathbb {D}$ ; we refer to it here as the CLE 4 $_4$ exploration distance of L $\mathcal {L}$ to D $\partial \mathbb {D}$ . In an unpublished work, Sheffield, Watson and Wu [59] prove that this distance is the distance as measured by a conformally invariant metric on Γ { D } $\Gamma \cup \lbrace \partial \mathbb {D}\rbrace$ . This metric is conjectured to be the limit of the adjacency metric on CLE κ $_{\kappa ^{\prime }}$ loops as κ 4 ${\kappa ^{\prime }}\downarrow 4$ . It is also argued in [59] that the uniform exploration of Γ $\Gamma$ is determined by Γ $\Gamma$ .

Our process A $A$ also provides a way to measure the distance of a CLE 4 $_4$ loop L $\mathcal {L}$ to D $\partial \mathbb {D}$ , as we previously discussed below (5.3) in the case of a point. Namely, for an arbitrary point z $z$ enclosed by L $\mathcal {L}$ define
t ( L ) : = μ h U U z U int ( L ) , $$\begin{equation} t(\mathcal {L}):= \mu _h{\left(\cup _{U\in \mathfrak {U}_z} U \setminus \operatorname{int}(\mathcal {L}) \right)}, \end{equation}$$ (5.8)
where int ( L ) D $\operatorname{int}(\mathcal {L})\subset \mathbb {D}$ is the domain enclosed by L $\mathcal {L}$ . It is not hard to see that t ( L ) $t(\mathcal {L})$ does not depend on the choice of z $z$ . We call A t ( L ) $A_{t(\mathcal {L})}$ the quantum natural distance of L $\mathcal {L}$ to D $\partial \mathbb {D}$ . Note that A t ( L ) $A_{t(\mathcal {L})}$ can also be defined similarly as in (5.2) by counting the number of CLE 4 $_4$ loops of length in ( δ / 2 , δ ) $(\delta /2,\delta )$ that are encountered before L $\mathcal {L}$ in the CLE 4 $_4$ exploration and then sending δ 0 $\delta \rightarrow 0$ while renormalizing appropriately. We remark that, in contrast to the CLE 4 $_4$ exploration distances, we do not expect that the quantum natural distances to the boundary defined here correspond to a conformally invariant metric on Γ $\Gamma$ .
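The renormalized loop count mentioned above can be written schematically as follows; here c ( δ ) $c(\delta )$ is an unspecified deterministic normalization (the precise choice is the one implicit in (5.2)), so this display is only a sketch of the definition, not a new statement:

```latex
A_{t(\mathcal{L})}
  \;=\; \lim_{\delta \downarrow 0}\, c(\delta)\,
  \#\Bigl\{ \text{CLE}_4 \text{ loops } \mathcal{L}' :\;
      \nu_h(\mathcal{L}') \in (\delta/2,\,\delta)
      \text{ and } \mathcal{L}' \text{ is encountered before } \mathcal{L}
      \text{ in the CLE}_4 \text{ exploration} \Bigr\}.
```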
It is natural to conjecture that the CLE 4 $_4$ exploration distance and the quantum natural distance are related via a Lamperti type transform
A t ( L ) = c 0 0 T ν h ( D t ) d t $$\begin{equation} A_{t(\mathcal {L})}=c_0\int _0^T \nu _h(\partial D_t)\,dt \end{equation}$$ (5.9)
for some deterministic constant c 0 > 0 $c_0>0$ , where T $T$ is the CLE 4 $_4$ exploration distance of the loop L $\mathcal {L}$ from D $\partial \mathbb {D}$ and for t [ 0 , T ) $t\in [0,T)$ , D t $D_t$ is the connected component containing L $\mathcal {L}$ of D $\mathbb {D}$ minus the loops at CLE 4 $_4$ exploration distance less than t $t$ from D $\partial \mathbb {D}$ . This is natural since the distances are invariant under the application of a conformal map (where the field h $h$ is modified as in (4.1)), since the CLE 4 $_4$ exploration is uniform for both distances (so if two loops L , L $\mathcal {L},\mathcal {L}^{\prime }$ have CLE 4 $_4$ exploration distance t , t $t,t^{\prime }$ , respectively, to D $\partial \mathbb {D}$ then t < t $t<t^{\prime }$ if and only if A t ( L ) < A t ( L ) $A_{t(\mathcal {L})}<A_{t(\mathcal {L}^{\prime })}$ ), and since the left and right sides of (5.9) transform similarly upon adding a constant c $c$ to the field h $h$ (namely, both sides are multiplied by e c $e^{c}$ ). Proving or disproving (5.9) is left as an open problem. We remark that several earlier papers [7, 26, 30, 54, 57] have proved uniqueness of lengths or distances in LQG via an axiomatic approach, with axioms of a rather similar flavor to the above, but these proofs do not immediately apply to our setting.

5.2.4 Discrete models

The mating of trees approach to LQG coupled with CLE is inspired by certain random walk encodings of random planar maps decorated by statistical physics models. The first such encoding is the hamburger/cheeseburger bijection of Sheffield [58] for random planar maps decorated by the critical Fortuin–Kasteleyn random cluster model (FK-decorated planar map).

In the FK-decorated planar map each configuration is a planar map with an edge subset, whose weight is assigned according to the critical FK model with parameter q > 0 $q>0$ . Sheffield encodes this model by words in a five-letter alphabet whose symbols are hamburger, cheeseburger, hamburger order, cheeseburger order and fresh order. The fraction p $p$ of fresh orders among all orders is given by q = 2 p 1 p $\sqrt q=\frac{2p}{1-p}$ . As we read the word, a hamburger (respectively, cheeseburger) will be consumed by either a hamburger (respectively, cheeseburger) order or a fresh order, in a last-come-first-served manner. In this setting, the discrete analog of our Brownian motion ( A , B ) $(A,B)$ is the net change in the burger count and the burger discrepancy since time zero, which we denote by ( C n , D n ) $(\mathcal {C}_n,\mathcal {D}_n)$ .
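To make the last-come-first-served stack dynamics concrete, here is a minimal Python sketch. It is our own illustration, not Sheffield's construction: it works with a finite word rather than the bi-infinite words of [58], and the treatment of a fresh order arriving at an empty stack is an arbitrary simplification, flagged in the code.

```python
def burger_counts(word):
    """Sketch of the hamburger/cheeseburger stack dynamics.
    Symbols: 'h'/'c' produce a hamburger/cheeseburger; 'H'/'C' consume the
    most recent unconsumed burger of that type; 'F' (fresh order) consumes
    the most recent unconsumed burger of either type.  Returns the
    trajectories of C_n (net burger count) and D_n (net hamburger count
    minus net cheeseburger count).  An order with no matching burger on
    the stack is treated as consuming a burger from before time zero; for
    'F' on an empty stack we arbitrarily charge a hamburger (in the full
    model the type is determined by the infinite past)."""
    stack, ham, cheese = [], 0, 0
    C, D = [], []
    for s in word:
        if s in ('h', 'c'):
            stack.append(s)
            if s == 'h': ham += 1
            else: cheese += 1
        elif s in ('H', 'C'):
            target = 'h' if s == 'H' else 'c'
            if target in stack:
                # remove the most recent burger of the ordered type
                stack.pop(len(stack) - 1 - stack[::-1].index(target))
            if s == 'H': ham -= 1
            else: cheese -= 1
        elif s == 'F':
            t = stack.pop() if stack else 'h'  # simplification when empty
            if t == 'h': ham -= 1
            else: cheese -= 1
        C.append(ham + cheese)
        D.append(ham - cheese)
    return C, D
```

For example, for the word hcHF the hamburger order consumes the hamburger sitting below the cheeseburger on the stack, and the fresh order then consumes the topmost remaining burger (the cheeseburger), giving the trajectories C = (1, 2, 1, 0) and D = (1, 0, −1, 0).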

It was proved in [58] that ε ( C t / ε 2 , D t / ε 2 ) $\varepsilon (\mathcal {C}_{t/\varepsilon ^2},\mathcal {D}_{t/\varepsilon ^2})$ converges in law to ( B t 1 , B α t 2 ) $(B^1_{t}, B^2_{\alpha t})$ , where B 1 , B 2 $B^1,B^2$ are independent standard one-dimensional Brownian motions and α = max { 1 2 p , 0 } $\alpha =\max \lbrace 1-2p, 0\rbrace$ . When p ( 0 , 1 2 ) $p\in (0,\frac{1}{2})$ , the correlation of ( B t 1 + B α t 2 , B t 1 B α t 2 ) $(B^1_{t}+B^2_{\alpha t},B^1_{t}- B^2_{\alpha t})$ is the same as for the left and right boundary length processes of space-filling SLE κ $\operatorname{SLE}_{\kappa ^{\prime }}$ decorated γ $\gamma$ -LQG (cf. Theorem 4.12) where q = 2 + 2 cos ( 8 π / κ ) $q=2+ 2\cos (8\pi /\kappa ^{\prime })$ and γ 2 = 16 / κ $\gamma ^2=16/\kappa ^{\prime }$ . This is consistent with the conjecture that under these parameter relations, LQG coupled with CLE $\operatorname{CLE}$ (equivalently, space-filling SLE $\operatorname{SLE}$ ) is the scaling limit of the FK-decorated planar map for q ( 0 , 4 ) $q\in (0,4)$ . Indeed, based on the Brownian motion convergence in [58], it was shown in [22, 28, 29] that geometric quantities such as loop lengths and areas converge as desired.

When q = 4 $q=4$ and p = 1 2 $p=\frac{1}{2}$ , we have B α t 2 = 0 $B^2_{\alpha t}=0$ , just as in the κ 4 $\kappa ^{\prime }\downarrow 4$ limit of LQG coupled with CLE $\operatorname{CLE}$ , where the correlation of the left and right boundary length processes tends to 1. We believe that the process ( ε C t / ε 2 , Var [ D ε 2 ] 1 D t / ε 2 ) $(\varepsilon \mathcal {C}_{t/\varepsilon ^2}, \mathrm{Var}[\mathcal {D}_{\varepsilon ^{-2}}]^{-1} \mathcal {D}_{t/\varepsilon ^2})$ converges in law to ( B t 1 , B t 2 ) $(B^1_{t}, B^2_{t})$ ; moreover, based on this convergence and results in our paper, it should be possible to extract the convergence of the loop lengths and areas for the FK-decorated planar map to the corresponding observables in critical LQG coupled with CLE 4 $\operatorname{CLE}_4$ . We leave this as an open question. It would also be very interesting to identify the order of the normalization Var [ D ε 2 ] 1 $\mathrm{Var}[\mathcal {D}_{\varepsilon ^{-2}}]^{-1}$ , which is related to the asymptotics of the partition function of the FK-decorated planar map with q = 4 $q=4$ .

Another model of decorated random planar maps that is believed to converge (after uniformization) to CLE decorated LQG is the O( n $n$ ) loop model, where the critical case κ = 4 $\kappa =4$ corresponds to n = 2 $n=2$ . It is therefore also interesting to ask whether our Brownian half-plane excursion be $\mathfrak {be}$ can be obtained as a scaling limit of a suitable boundary length exploration process in this discrete setting. In fact, a very closely related question was considered in [15], where the authors identify the scaling limit of the perimeter process in peeling explorations of infinite volume critical Boltzmann random planar maps (see [14] for the relationship between these maps and the O(2) model). Modulo finite/infinite volume differences, this scaling limit, which is a Cauchy process, corresponds to a single ‘branch’ in our Brownian motion (see Section 5.2.2).

ACKNOWLEDGEMENTS

J. Aru was supported by Eccellenza grant 194648 of the Swiss National Science Foundation. N. Holden was supported by grant 175505 of the Swiss National Science Foundation, along with Dr. Max Rössler, the Walter Haefner Foundation and the ETH Zürich Foundation. E. Powell was supported by grant 175505 of the Swiss National Science Foundation. X. Sun was supported by the NSF grant DMS-2027986 and the NSF Career grant DMS-2046514. J. Aru and N. Holden were both part of SwissMAP. We all thank Wendelin Werner and ETH for their hospitality. We also thank Elie Aïdékon, Nicolas Curien, William Da Silva, Ewain Gwynne, Laurent Ménard, Avelio Sepúlveda and Samuel Watson for useful discussions. Finally, we thank the anonymous referee for their careful reading of this paper, and for helping to improve the exposition in numerous places.

    • We thank N. Curien for explaining this relation to us.
    • That is, if λ $\lambda$ is the Brownian excursion measure then the integral is finite for λ $\lambda$ -almost all excursions; see [64, Section 2].
    • Variants of this process, for example, chordal/whole-plane versions, a clockwise version, and a version with another starting point, can be defined by modifying the definition of the branching SLE; see, for example, [2, 21].
    • Of course this depends on a $a$ , but we drop this from the notation for simplicity.
    • This name is partly inspired by the fact that the process is constructed via a uniform CLE 4 $_4$ exploration, and partly by the fact that, every time the domain of exploration is split into two components, the components are ordered uniformly at random.
    • This local time (and the corresponding local time for ε = 0 $\varepsilon =0$ defined below) is defined only up to a deterministic multiplicative constant. We fix this constant in the proof of Lemma 4.15.
    • With respect to the Euclidean topology in the third coordinate, and the topology in the final coordinates defined such that ( ( s i n ) i 1 , ( w i n ) i 1 , ( h i n ) i 1 ) ( ( s i ) i 1 , ( w i ) i 1 , ( h i ) i 1 ) $((s_i^n)_{i\geqslant 1}, (w_i^n)_{i\geqslant 1}, (h^n_i)_{i\geqslant 1})\rightarrow ((s_i)_{i\geqslant 1}, (w_i)_{i\geqslant 1}, (h_i)_{i\geqslant 1})$ as n $n\rightarrow \infty$ if and only if the number of non-zero components on the left-hand side is equal to the number N n $N_n$ of non-zero components on the right-hand side for all n $n$ large enough, and the first N $N$ components converge in the product discrete × $\times$ Euclidean × $\times$ H 1 ( D ) $H^{-1}(\mathbb {D})$  topology.
    • Once we have point (5), it follows that these are equal to the ( g i ) i = 1 N $(g_i)_{i=1}^N$ .