Finer estimates on the 2-dimensional matching problem

Abstract. — We study the asymptotic behaviour of the expected cost of the random matching problem on a 2-dimensional compact manifold, improving in several aspects the results of [AST18]. In particular, we simplify the original proof (by treating upper and lower bounds at the same time) and we obtain the coefficient of the leading term of the asymptotic expansion of the expected cost for the random bipartite matching on a general 2-dimensional closed manifold. We also sharpen the estimate of the error term given in [Led18] for the semi-discrete matching. As a technical tool, we develop a refined contractivity estimate for the heat flow on random data that might be of independent interest.


Introduction
The bipartite matching problem is a very classical problem in computer science. It asks to find, among all possible matchings in a bipartite weighted graph, the one that minimizes the sum of the costs of the chosen edges. The most typical instance of the matching problem arises when trying to match two families (X_i)_{1≤i≤n} and (Y_i)_{1≤i≤n} of n points in a metric space (M, d), where the cost of matching two points X_i and Y_j depends only on the distance d(X_i, Y_j).
We will investigate a random version of the problem, that is, in its most general form, the following one.
Problem (Random bipartite matching). — Let (M, d, m) be a metric measure space such that m is a probability measure and let p ≥ 1 be a fixed exponent. Given two families (X_i)_{1≤i≤n} and (Y_i)_{1≤i≤n} of independent random points m-uniformly distributed on M, study the value of the expected matching cost

E[ min_{π ∈ S_n} (1/n) Σ_{i=1}^n d(X_i, Y_{π(i)})^p ],

where S_n denotes the set of permutations of {1, ..., n}. For brevity we will denote by E_n the mentioned expected value, dropping the dependence on (M, d, m) and p.
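To make the quantity concrete, here is an illustrative numerical sketch (not part of the original analysis): we estimate E_n on the unit square with p = 2 by solving the assignment problem exactly with SciPy's Hungarian-algorithm solver; the function names and all parameter choices below are ours.

```python
# Monte Carlo sketch of the random bipartite matching cost E_n on [0,1]^2
# with p = 2; the assignment problem is solved exactly for each sample.
import numpy as np
from scipy.optimize import linear_sum_assignment


def matching_cost(n, rng, p=2):
    X = rng.random((n, 2))
    Y = rng.random((n, 2))
    # cost matrix C[i, j] = d(X_i, Y_j)^p
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** p
    rows, cols = linear_sum_assignment(C)  # optimal permutation
    return C[rows, cols].sum() / n         # note the 1/n normalization


rng = np.random.default_rng(0)
n = 200
est = float(np.mean([matching_cost(n, rng) for _ in range(20)]))
# In dimension 2 the cost decays like log(n)/n (see [AKT84]),
# so the renormalized quantity n * est / log(n) should be of order 1.
print(est, n * est / np.log(n))
```

For p = 2 this is exactly the squared Wasserstein distance between the two empirical measures, which is how the problem is rephrased later in the introduction.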
The coefficient 1/n before the summation is inserted both for historical reasons and because it simplifies matters when we move to the context of probability measures.
Let us remark that, without any further assumption on M and m, the statement of the problem is too general to be interesting.
In the special case M = [0, 1]^d, m = L^d|_M, the problem has been studied deeply in the literature. As a general reference, we suggest the book [Tal14], which devotes multiple chapters to the random matching problem. Before summarizing the main known results in this setting, let us remark that the weighted setting (M = R^d, m a generic measure with adequate moment estimates) has also attracted a lot of attention, since it is the most useful in applications (see [DSS13, BLG14, FG15, WB17]).
Dimension 1. — When d = 1, the problem is much easier than in other dimensions. Indeed, on the interval a monotone matching is always optimal. Thus the study of E_n reduces to the study of the probability distribution of the k-th point in increasing order (that is, X_k if the sequence (X_i) is assumed to be increasing). In particular, it is not hard to show that E_n ≈_p n^{−p/2}. In the special case d = 1 and p = 2 we can even compute E_n explicitly.
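The optimality of the monotone matching can be checked numerically; the following sketch (our own illustration, for p = 2) compares the sorted matching with the exact optimum returned by an assignment solver.

```python
# On the interval the monotone (sorted) matching attains the optimal cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n = 50
X, Y = rng.random(n), rng.random(n)

# monotone matching: k-th smallest X_i with k-th smallest Y_i
monotone = np.mean((np.sort(X) - np.sort(Y)) ** 2)

# exact optimum over all permutations, via the Hungarian algorithm
C = (X[:, None] - Y[None, :]) ** 2
rows, cols = linear_sum_assignment(C)
optimal = C[rows, cols].mean()

print(monotone, optimal)
```

The two printed values coincide up to floating-point error, reflecting the optimality of the monotone coupling for convex costs on the line.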
A monograph on the 1-dimensional case where the mentioned results, and much more, can be found is [BL14].
Dimension 3. — When d ≥ 3, for any 1 ≤ p < ∞ it holds E_n ≈_p n^{−p/d}. For p = 1 the result is proved in [DY95, Tal92], whereas the paper [Led17] addresses all the cases 1 ≤ p < ∞ with methods, inspired by [AST18], similar to the ones we are going to use.
In [BB13, Th. 2] the authors manage to prove the existence of the limit of the renormalized cost under the constraint 1 ≤ p < d/2, but the value of the limit is not determined.
Dimension 2. — When d = 2, the study of E_n becomes suddenly more delicate. As shown in the fundamental paper [AKT84], for any 1 ≤ p < ∞ the growth is E_n ≈_p (log n / n)^{p/2}. Their proof is essentially combinatorial, and following such a strategy there is little hope of computing the limit of the renormalized quantity E_n (n/log n)^{p/2}. Much more recently, in 2014, the authors of [CLPS14] claimed that, if p = 2, the limit value is 1/(2π), with an ansatz supporting their claim (see also [CS14] for a deeper analysis of the 1-dimensional case). Then in [AST18] it was finally proved that the claim is indeed true. The techniques used in this latter work are completely different from the combinatorial approaches of previous works on the matching problem; indeed, the tools come mainly from the theory of partial differential equations and optimal transport.

Semi-discrete matching problem, large scale behaviour. — In the semi-discrete matching problem, a single family of independent and identically distributed points (X_i)_{1≤i≤n} has to be matched to the reference measure m. By rescaling, and possibly replacing the empirical measures with a Poisson point process, the semi-discrete matching problem can be connected to the Lebesgue-to-Poisson transport problem of [HS13]; see [GHO18], where the large-scale behaviour of the optimal maps is deeply analyzed.
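For the reader's convenience, the three regimes recalled above can be collected in a single display (the constants implicit in ≈_p depend only on p):

```latex
\mathbb{E}_n \approx_p
\begin{cases}
n^{-p/2} & \text{if } d = 1,\\[2pt]
\bigl(\log n / n\bigr)^{p/2} & \text{if } d = 2,\\[2pt]
n^{-p/d} & \text{if } d \geq 3.
\end{cases}
```

The extra logarithm in dimension 2 is precisely what makes the computation of the leading constant delicate.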
As in [AST18], we will focus on the case where (M, d) is a 2-dimensional compact Riemannian manifold, m is the volume measure, and the cost is given by the square of the distance. From now on we switch from the language of combinatorics and computer science to the language of probability and optimal transport. Thus, instead of matching two families of points, we will minimize the Wasserstein distance between the corresponding empirical measures.
We will prove the following generalization of [AST18, Eq. (1.2)] to manifolds different from the torus and the square.
Theorem 1.1 (Main theorem for bipartite matching). — Let (M, g) be a 2-dimensional compact closed manifold (or the square [0, 1]²) whose volume measure m is a probability. Let (X_i)_{i∈N} and (Y_i)_{i∈N} be two families of independent random points m-uniformly distributed on M. Then

lim_{n→∞} (n / log n) E[W_2²(μ_0^n, μ_1^n)] = 1/(2π),

where μ_0^n and μ_1^n are the empirical measures associated to the first n points of each family (see Section 2.2.1).

In the context of the semi-discrete matching problem we simplify the proof contained in [AST18] and strengthen the estimate of the error term provided in [Led18].
Theorem 1.2 (Main theorem for semi-discrete matching). — Let (M, g) be a 2-dimensional compact closed manifold (or the square [0, 1]²) whose volume measure m is a probability. Let (X_i)_{i∈N} be a family of independent random points m-uniformly distributed on M. There exists a constant C = C(M) such that

| E[W_2²(μ_n, m)] − log(n)/(4πn) | ≤ C √(log(n) log log(n)) / n.

In order to describe our approach, let us focus on the semi-discrete matching problem. Very roughly, we compute a first-order approximation of the optimal transport from m to (1/n) Σ δ_{X_i} and we show, overcoming multiple technical difficulties, that this transport is very often almost optimal. In some sense, this strategy is even closer to the heuristics behind the ansatz proposed in [CLPS14] than the strategy pursued in [AST18] (even though many technical points are in common).
More in detail, here is a schematic description of the proof.

(1) Let us denote by μ_n := (1/n) Σ_{i=1}^n δ_{X_i} the empirical measure. We construct a regularized version of μ_n, called μ_{n,t}, that is extremely near to μ_n in the Wasserstein distance.
(2) We consider a probabilistic event A^{n,t}_ξ, similar to {‖μ_{n,t} − m‖_∞ < ξ}, and show that such an event is extremely likely. The event considered is rather technical, but should be understood as the intuitive event: "the X_i are well-spread on M".
(3) With a known trick in optimal transport (the Dacorogna-Moser coupling), we construct a transport map T_{n,t} from m to μ_{n,t}.

(4) In the event A^{n,t}_ξ, we derive from T_{n,t} an optimal map T̂_{n,t} from m onto a measure μ̂_{n,t} that is extremely near to μ_{n,t} in the Wasserstein distance.

(5) We conclude by computing the average cost of T̂_{n,t}.
The regularized measure μ_{n,t} is obtained from μ_n through the heat flow, namely μ_{n,t} = P*_t(μ_n), where t > 0 is a suitably chosen time (see Section 2.2.3 for the definition of P*_t). In Section 5 we develop an improved contractivity estimate for the heat flow on random data, and we use it to show that μ_n and μ_{n,t} are sufficiently near in the Wasserstein distance. The probabilistic event A^{n,t}_ξ is defined and studied in Section 3; let us remark that this is the only section where probability theory plays a central role. The map T_{n,t} is constructed in Section 4 as the flow of a vector field. The map T̂_{n,t} is simply the exponential map applied to (a slightly modified version of) the same vector field. Optimality of T̂_{n,t} follows from the fact that it has small C²-norm (see [Gla19, Th. 1.1] or [Vil09, Th. 13.5]). We will devote Appendix A to showing that μ_{n,t} and μ̂_{n,t} are sufficiently near in the Wasserstein distance.
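As a concrete illustration of the regularization step (our own sketch, on the flat torus [0,1)², where the heat kernel has an explicit Fourier expansion), one can compute the density u_{n,t} of μ_{n,t} and watch it flatten toward 1 as t grows; the truncation order K and all numerical parameters are arbitrary choices.

```python
# Heat-flow regularization of an empirical measure on the flat torus T^2.
# p_t(x, y) = sum_k exp(-4 pi^2 |k|^2 t) cos(2 pi k.(x - y)), truncated at |k_i| <= K.
import numpy as np


def heat_density(points, grid, t, K=8):
    ks = np.array([(a, b) for a in range(-K, K + 1) for b in range(-K, K + 1)])
    weights = np.exp(-4 * np.pi**2 * (ks**2).sum(1) * t)
    diff = grid[:, None, :] - points[None, :, :]        # shape (G, n, 2)
    phases = np.tensordot(diff, ks.T, axes=1)           # k.(y - X_i)
    p = (weights * np.cos(2 * np.pi * phases)).sum(-1)  # truncated heat kernel
    return p.mean(axis=1)                               # u_{n,t} = (1/n) sum_i p_t(X_i, .)


rng = np.random.default_rng(2)
points = rng.random((50, 2))                            # 50 uniform random points
g = np.linspace(0, 1, 16, endpoint=False)
grid = np.array([(a, b) for a in g for b in g])

u_small = heat_density(points, grid, t=0.001)           # still very rough
u_large = heat_density(points, grid, t=0.1)             # almost flat
print(u_small.std(), u_large.std())                     # fluctuations shrink with t
```

The total mass is conserved along the flow, while the pointwise fluctuations of the density around 1 decrease as t increases, which is the behaviour exploited by the event A^{n,t}_ξ.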
In Sections 6 and 7 we prove our main theorems for the semi-discrete matching problem and the bipartite matching problem. The two proofs are almost identical.
Differently from the proof contained in [AST18], we do not need the duality theory of optimal transport, as it is completely encoded in the mentioned theorem stating that a small map is optimal. In this way, we do not need to handle the upper bound and the lower bound of the expected cost separately. Last but not least, as will be clear from the comments scattered through the paper, our proof is just one little step away from generalizing the results also to weighted manifolds and manifolds with boundary.
Acknowledgements. — We thank F. Stra and D. Trevisan for useful comments during the development of the paper, and the paper's reviewers for their constructive and detailed observations.

General setting and notation
The setting and definitions described in this section will be used throughout the paper. Only in the section devoted to the bipartite matching will we slightly change some definitions.
Whenever we write A ≲ B we mean that there exists a positive constant C = C(M), depending only on the manifold M, such that A ≤ CB. If we add a subscript, as in ≲_p, the implicit constant is allowed to depend also on p. Even though this notation might be a little confusing initially, it is extremely handy to avoid introducing a huge number of meaningless constant factors.
With P(M ) we will denote the set of Borel probability measures on the manifold M .
2.1. Ambient manifold. — Let M be a closed compact 2-dimensional Riemannian manifold and let m be its volume measure. We will always work, with no real loss of generality, under the assumption that m is a probability measure. Unless stated otherwise, this will be the ambient space for all our results. It is very tempting to work in the more general setting of weighted manifolds or manifolds with boundary. Indeed it might seem that most of what we obtain could easily be achieved in that generality as well. Nonetheless, there is an issue that we could not solve: the estimate on the derivatives of the heat kernel we use, specifically Theorem 3.9, seems to be known in the literature only for closed, nonweighted manifolds. Apart from that single result (which most likely holds also in the weighted setting if the weight is sufficiently smooth), everything else can easily be adapted to the weighted and nonclosed setting. In an appendix to this work we manage to extend the mentioned estimate to the case of the square (and thus all our results apply also when M = [0, 1]²).
By a weighted manifold we mean a Riemannian manifold where the measure is perturbed as m_V = e^{−V} m, where V : M → R is a smooth function (the weight). The matching problem is very sensitive to changes of the reference measure (as the case of Gaussian measures, recently considered in [Led17, Led18, Tal18], illustrates), and therefore the possibility of adding a weight would broaden the scope of our results.
Even though we are not able to generalize Theorem 3.9, throughout the paper we will outline the changes necessary to make everything else work in the weighted or with-boundary case.
Let us state right away the fundamental observation needed to handle the weighted or with-boundary case: one has to adopt the right definition of the Laplacian.
In the weighted (or even with-boundary) setting, the standard Laplacian must be replaced by the so-called drift Laplacian (still denoted by Δ for consistency), also named Witten Laplacian, characterized by the identity

(2.1) ∫_M Δf · g dm_V = − ∫_M ∇f · ∇g dm_V for all g ∈ C^∞(M).

This operator is related to the standard Laplace-Beltrami operator Δ_g by Δ = Δ_g − ∇V · ∇ (see [Gri06]) and, in the case of manifolds with boundary, (2.1) encodes the null Neumann boundary condition. Using this definition everywhere, almost all the statements and proofs that we provide in the nonweighted closed setting can be adapted straightforwardly to the weighted or with-boundary setting.

Random matching problem notation
2.2.1. Empirical measures. — Let (X_i)_{1≤i≤n} be a family of independent random points m-uniformly distributed on M. Let us define the empirical measure associated to the family of random points as

μ_n := (1/n) Σ_{i=1}^n δ_{X_i}.

When two independent families (X_i) and (Y_i) of random points m-uniformly distributed on M are considered, we will denote by μ_0^n and μ_1^n the empirical measures associated respectively to (X_i) and (Y_i).
The main topic of this paper is the study of the two quantities E[W_2²(μ_n, m)] and E[W_2²(μ_0^n, μ_1^n)].

Wasserstein distance
The quadratic Wasserstein distance, denoted by W_2(·, ·), is the distance induced on probability measures by the quadratic optimal transport cost

W_2²(μ, ν) = min_{γ ∈ Γ(μ,ν)} ∫_{M×M} d²(x, y) dγ(x, y),

where Γ(μ, ν) is the set of Borel probability measures on the product M × M whose first and second marginals are μ and ν respectively. See the monographs [Vil09] or [San15] for further details. Let us recall that when both measures are sums of Dirac masses, the Wasserstein distance reduces to the more elementary bipartite matching cost

W_2²( (1/n) Σ_{i=1}^n δ_{X_i}, (1/n) Σ_{i=1}^n δ_{Y_i} ) = min_{π ∈ S_n} (1/n) Σ_{i=1}^n d²(X_i, Y_{π(i)}).

Heat flow regularization.
— For any positive time t > 0, let μ_{n,t} be the evolution through the heat flow of μ_n, that is,

μ_{n,t} := P*_t(μ_n) = u_{n,t} m, with u_{n,t}(y) = (1/n) Σ_{i=1}^n p_t(X_i, y),

where u_{n,t} is the density of μ_{n,t} with respect to m. Let us recall that P*_t denotes the heat semigroup acting on measures and p_t(·, ·) is the heat kernel at time t. For some background on the heat flow on a compact Riemannian manifold, see for instance [Cha84, Chap. 6].
Why are we regularizing μ_n through the heat flow? First of all, let us address a simpler question: why regularize at all? The intuition is that regularization allows us to completely ignore the small-scale bad behaviour naturally associated with the empirical measure. For example, the regularization is necessary to obtain the uniform estimate we will show in Theorem 3.3.
But why the heat flow? A priori, any sufficiently good convolution kernel depending on a parameter t > 0 would fit our needs. Once again the intuition is quite clear: the heat flow is, in a precise sense, the best way to go from the empirical measure to the reference measure. Indeed it is well known from [JKO98] that the heat flow can be seen as the gradient flow in the Wasserstein space induced by the relative entropy functional (see [Erb10] for the extension of [JKO98] to Riemannian manifolds). More practically, the semigroup property of the heat kernel provides many identities and estimates and plays a crucial role also in the proof of the refined contractivity property of Section 5.

2.2.4. The potential f_{n,t}. — It is now time to give the most important definition. Let f_{n,t} : M → R be the unique function with null mean such that

(2.2) −Δ f_{n,t} = u_{n,t} − 1.

The hidden idea behind this definition, underlying the ansatz of [CLPS14], is a linearization of the Monge-Ampère equation under the assumption that u_{n,t} is already extremely near to 1. We suggest reading the introduction of [AST18] for a deeper explanation. The mentioned linearization hints that ∇f_{n,t} should be an approximately optimal map from the measure m to the measure μ_{n,t}. We will see in Proposition 4.3 and later in Theorem 1.2 that this is indeed true. Let us remark that, as much as possible, we try to be consistent with the notation used in [AST18].
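To make the linearization explicit, here is a sketch of the heuristic (our rendering; see the introduction of [AST18] for the rigorous discussion):

```latex
% Heuristic behind (2.2): if T = exp(grad f) pushes m onto u_{n,t} m,
% the Monge--Ampere (change of variables) equation is
%     u_{n,t}(T(x)) det(DT(x)) = 1.
% With T close to id + grad f and u_{n,t} close to 1, to first order
%     det(DT) = 1 + Laplacian(f) + h.o.t.   and   u_{n,t}(T(x)) = u_{n,t}(x) + h.o.t.,
% so the equation linearizes to (2.2) and the cost to a Dirichlet energy:
\begin{align*}
  -\Delta f_{n,t} &= u_{n,t} - 1, &
  W_2^2(m, \mu_{n,t}) &\approx \int_M |\nabla f_{n,t}|^2 \, \mathrm{d}m.
\end{align*}
```

The second approximate identity is exactly what Proposition 4.3 and Theorem 1.2 make quantitative.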

Flatness of the regularized density
Definition 3.1 (Norm of a tensor). — Given a 2-tensor field τ ∈ T⁰₂(M), let the operator norm at a point x ∈ M be defined as

|τ|(x) := sup{ τ(v, w) : v, w ∈ T_x M, |v|, |w| ≤ 1 }.

The infinity norm of the tensor τ is then defined as the supremum of the pointwise norm, ‖τ‖_∞ := sup_{x∈M} |τ|(x).

For a fixed ξ > 0, we want to investigate how unlikely the event

A^{n,t}_ξ := { ‖∇² f_{n,t}‖_∞ < ξ }

is. Thus defined, the event A^{n,t}_ξ is very similar to (and is contained in) the event considered in [AST18, Prop. 3.10]. Let us remark that taking a larger t > 0 will of course ensure that A^{n,t}_ξ is extremely likely, but, as we will see, taking a t > 0 that is too large is unfeasible.
Our goal is showing the following estimate on the probability of A n,t ξ .
Theorem 3.3. — There exists a constant a = a(M) > 1 such that, for any n ∈ N, 0 < ξ < 1 and 0 < t < 1, it holds

P( (A^{n,t}_ξ)^c ) ≲ n⁵ a^{−ntξ²}.

The proof of Theorem 3.3 will follow rather easily, via a standard concentration inequality, once we have established some nontrivial inequalities concerning the heat kernel.
In fact, a vast part of this section will be devoted to a fine study of q_t, a time-averaged heat kernel.

Definition 3.4. — Let us denote by q_t : M × M → R the unique function with null mean value such that −Δ q_t(x, y) = p_t(x, y) − 1, where the Laplacian is computed with respect to the second variable.
Remark 3.5.-All the derivatives of q t will be performed on the second variable.In the weighted and with boundary setting the definition of q t stays the same, whereas the Laplace operator changes meaning (as explained in Section 2.1).
Remark 3.6. — The kernel q_t arises naturally in our investigation because, as we will see later, it holds f_{n,t}(y) = (1/n) Σ_{i=1}^n q_t(X_i, y).

Let us show some properties of the kernel q_t. As a consequence of the decay of the heat kernel as t goes to infinity, for any x, y ∈ M and t > 0 it holds −Δ_y ∫_t^∞ (p_s(x, y) − 1) ds = p_t(x, y) − 1; therefore we have the fundamental identity

(3.1) q_t(x, y) = ∫_t^∞ (p_s(x, y) − 1) ds.

Let us also remark that q_t is symmetric: q_t(x, y) = q_t(y, x). Furthermore, for all y ∈ M the average value of ∇_y q_t(·, y) is null; indeed it holds ∫_M ∇_y q_t(x, y) dm(x) = ∇_y ∫_M q_t(x, y) dm(x) = 0. Similarly we can prove that the average value is null also for higher derivatives. Now we want to deduce estimates for the time-averaged kernel q_t from the related estimates for the standard heat kernel. Let us therefore start by stating some well-known heat kernel estimates. The interested reader can find more about heat kernel estimates in the monographs [SC10, Gri99].
Theorem 3.7 (Heat trace estimate). — For any 0 < t < 1 it holds

| ∫_M p_t(x, x) dm(x) − 1/(4πt) | ≲ 1.

Proof. — It is proved in [MS67] for smooth manifolds, possibly with smooth boundary. To handle the case of the square, we need the formula for Lipschitz domains, which is proved in [Bro93]. For weighted manifolds it is proved in [CR17, Th. 1.5].
Theorem 3.8 (Heat kernel estimate). — There exists a suitable constant a = a(M) > 1 such that, for any 0 < t < 1 and x, y ∈ M, it holds

(1/(at)) e^{−a d²(x,y)/t} ≤ p_t(x, y) ≤ (a/t) e^{−d²(x,y)/(at)}.

Theorem 3.9 (Heat kernel derivatives estimate). — For any N ≥ 1, 0 < t < 1 and x, y ∈ M, it holds

|∇^N_y p_t(x, y)| ≲_N t^{−(2+N)/2} e^{−d²(x,y)/(at)}.

Proof. — For closed compact manifolds this can be found in [ST98] or in [Hsu99, Cor. 1.2]. We prove the special case of the square in the appendix.

Remark 3.10. — The estimate on the derivatives of the heat kernel provided by the previous theorem is fundamental for the approach presented here. Furthermore, as anticipated in Section 2.1, the need for such an estimate is exactly the obstruction to generalizing our result to the weighted or with-boundary setting.
Let us also remark that we will use the estimate only for N ≤ 3.
We are now ready to state and prove some estimates on the kernel q t .The first one has an algebraic flavor, whereas Proposition 3.12 and Corollary 3.13 are hard-analysis estimates deduced from Theorems 3.8 and 3.9.
Proposition 3.11. — For any 0 < t < 1 it holds

| ∫_M ∫_M |∇_y q_t(x, y)|² dm(y) dm(x) − log(1/t)/(4π) | ≲ 1.

Proof. — Let us ignore for the moment the integral in dm(x). Recalling the formula stated in (3.1) and the definition of q_t, integrating by parts we obtain ∫_M |∇_y q_t(x, y)|² dm(y) = ∫_M q_t(x, y)(p_t(x, y) − 1) dm(y); thence, applying Fubini's theorem and the semigroup property of p_t, we can continue this chain of identities and arrive at ∫_{2t}^∞ (p_s(x, x) − 1) ds. After integration with respect to x, the statement follows thanks to Theorem 3.7.
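For the reader's convenience, the chain of identities in the proof can be sketched as follows (using the definition of q_t, identity (3.1), Fubini's theorem, and the semigroup property ∫_M p_s(x,y) p_t(x,y) dm(y) = p_{s+t}(x,x)):

```latex
\begin{align*}
\int_M |\nabla_y q_t(x,y)|^2 \,\mathrm{d}m(y)
  &= \int_M q_t(x,y)\,\bigl(-\Delta_y q_t(x,y)\bigr)\,\mathrm{d}m(y)
   = \int_M q_t(x,y)\,\bigl(p_t(x,y)-1\bigr)\,\mathrm{d}m(y) \\
  &= \int_t^\infty \int_M \bigl(p_s(x,y)-1\bigr)\bigl(p_t(x,y)-1\bigr)\,\mathrm{d}m(y)\,\mathrm{d}s \\
  &= \int_t^\infty \bigl(p_{s+t}(x,x) - 1\bigr)\,\mathrm{d}s
   = \int_{2t}^\infty \bigl(p_s(x,x) - 1\bigr)\,\mathrm{d}s .
\end{align*}
```

The second line uses that p_s and p_t each integrate to 1, so the cross terms contribute exactly −1.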
The following proposition plays a central role in all forthcoming results.Indeed, the kind of estimate we obtain on the derivatives of q t is exactly the one we need to deduce strong integral inequalities.
Proposition 3.12. — For any N ≥ 1, 0 < t < 1 and x, y ∈ M it holds

|∇^N_y q_t(x, y)| ≲_N (d²(x, y) + t)^{−N/2}.

Proof. — For the sake of brevity, let us denote d = d(x, y).
Applying (3.1) together with Theorems 3.8 and 3.9, through some careful estimates of the involved quantities we obtain

|∇^N_y q_t(x, y)| ≲_N ∫_t^∞ s^{−1−N/2} e^{−d²/(as)} ds.

The desired statement now follows from the elementary inequality ∫_t^∞ s^{−1−N/2} e^{−d²/(as)} ds ≲_N (d² + t)^{−N/2}, which is a consequence of the two estimates ∫_t^∞ s^{−1−N/2} ds ≲_N t^{−N/2} and ∫_0^∞ s^{−1−N/2} e^{−d²/(as)} ds ≲_N d^{−N} (the latter via the change of variables w = d²/s).

Corollary 3.13. — Let us fix a natural number N ≥ 1 and a real number p > 2/N. For any 0 < t < 1 and x, y ∈ M the following two inequalities hold: (1)

(1) ∫_M |∇^N_y q_t(x, y)|^p dm(y) ≲_{N,p} t^{1−Np/2};

(2) ∫_M |∇^N_y q_t(x, y)|^p dm(x) ≲_{N,p} t^{1−Np/2}.

Proof. — We are going to prove only the inequality where the integral is in dm(y), the proof of the other case being very similar.
To prove the desired result, we just insert the inequality stated in Proposition 3.12 into the coarea formula (in the very last inequality below we use the change of variables r² = st):

∫_M |∇^N_y q_t(x, y)|^p dm(y) ≲_{N,p} ∫_0^∞ H¹({d(x, ·) = r}) (r² + t)^{−Np/2} dr ≲ ∫_0^∞ r (r² + t)^{−Np/2} dr ≲_{N,p} t^{1−Np/2}.

Let us recall that H¹ is the Hausdorff measure induced by the Riemannian distance. In our inequalities we have implicitly applied the known estimate H¹({d(x, ·) = r}) ≲ r (see [Pet98]) on the measure of spheres in a smooth Riemannian manifold. It would be possible to obtain the same result applying only the estimate on the measure of balls (a somewhat more elementary inequality) and Cavalieri's principle instead of the coarea formula.

(1) Let us remark that the two inequalities are not equivalent. Indeed, in the first one we integrate with respect to the variable y, which is the differentiation variable, whereas in the second one we integrate with respect to the variable x, which is not the differentiation variable.
We are now going to transform the results we have proved about q_t into inequalities concerning f_{n,t}. As anticipated, it holds

(3.2) f_{n,t}(y) = (1/n) Σ_{i=1}^n q_t(X_i, y).

Such an identity can be proved by showing that both sides have null mean and the same Laplacian.

Lemma 3.14. — The following approximation holds:

| E[ ∫_M |∇f_{n,t}|² dm ] − log(1/t)/(4πn) | ≲ 1/n.

Proof. — Using the linearity of the expected value and the independence of the variables X_i (together with the fact that ∇_y q_t(·, y) has null average), we obtain

E[ ∫_M |∇f_{n,t}|² dm ] = (1/n) ∫_M ∫_M |∇_y q_t(x, y)|² dm(y) dm(x),

and therefore, applying Proposition 3.11, we have proved the desired approximation.
Our next goal is proving Theorem 3.3. In order to do so we will need the Bernstein inequality for sums of independent and identically distributed random variables. We state it here exactly in the form we will use. One of the first references containing the inequality is [Ber46], whereas one of the first in English is [Hoe63]. A more recent exposition can be found in the survey [CL06, Th. 3.6, 3.7] (alternatively, see the monograph [BLM03]).
Theorem 3.16 (Bernstein inequality). — Let X be a centered random variable such that, for a certain constant L > 0, it holds |X| ≤ L almost surely. If (X_i)_{1≤i≤n} are independent random variables with the same distribution as X, then for any ξ > 0 it holds

P( | (1/n) Σ_{i=1}^n X_i | ≥ ξ ) ≤ 2 exp( − nξ² / (2 E[X²] + (2/3) L ξ) ).

Proof of Theorem 3.3. — Our strategy is to obtain, for any y ∈ M, a pointwise bound on the probability P(|∇² f_{n,t}(y)| > ξ) through the aforementioned concentration inequality, and then to achieve the full result using a sufficiently fine net of points on M. Let us fix y ∈ M. Recalling (3.2), we can apply Theorem 3.16 in conjunction with Proposition 3.12 and Corollary 3.13 and obtain the desired pointwise bound. (2)

Let us now fix ε > 0 such that ε L = ξ/2, where L is the Lipschitz constant of |∇² f_{n,t}|. Let (z_i)_{1≤i≤N(ε)} ⊆ M be an ε-net on the manifold. Through a fairly standard approach, we can find such a net with N(ε) ≲ ε^{−2}. Furthermore, it is easy to see, considering the bound imposed on ε, that if |∇² f_{n,t}|(z_i) < ξ/2 for every 1 ≤ i ≤ N(ε), then ‖∇² f_{n,t}‖_∞ < ξ. Therefore we can conclude with a union bound over the points of the net. The statement follows by noticing that, thanks to (3.2), L can be bounded from above by ‖∇³ f_{n,t}‖_∞, (3) which we control through Proposition 3.12.
(2) Note that we are applying Theorem 3.16 to matrix-valued random variables.In order to prove this generalization it is sufficient to apply the theorem to each entry of the matrix.
(3) The definition of the infinity norm for a 3-tensor is analogous to the definition given for 2-tensors in Definition 3.1.
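Bernstein's inequality in the stated form can be sanity-checked numerically; the sketch below (our own, with the standard variance-dependent bound) compares the empirical tail of the sample mean of bounded centered variables with the theoretical bound.

```python
# Empirical check of Bernstein's inequality for |X| <= L, E[X] = 0:
#   P(|mean| >= xi) <= 2 exp(-n xi^2 / (2 E[X^2] + (2/3) L xi))
import numpy as np

rng = np.random.default_rng(3)
n, trials, xi, L = 200, 2000, 0.1, 1.0

# X uniform on [-1, 1]: centered, bounded by L = 1, with E[X^2] = 1/3
means = rng.uniform(-L, L, size=(trials, n)).mean(axis=1)
empirical = float(np.mean(np.abs(means) >= xi))

sigma2 = 1.0 / 3.0
bound = float(2 * np.exp(-n * xi**2 / (2 * sigma2 + 2 * L * xi / 3)))
print(empirical, bound)  # the empirical tail sits below the bound
```

The bound is far from tight here, which is consistent with its role in the proof: only the exponential dependence on n and ξ² matters.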
Remark 3.17. — Let us stress that if t ≳ log(n)/n, Theorem 3.3 shows that quite often the Hessian of f_{n,t} is very small.

More precisely, if ξ_n ≥ 1/n and t_n = log(a)^{−1} κ_n log(n)/n with κ_n ≥ 1, then

P( (A^{n,t_n}_{ξ_n})^c ) ≲ n^{5 − κ_n ξ_n²}.

This observation will play a major role in the last sections, as it will allow us to completely ignore the complementary event (A^{n,t}_ξ)^c whenever it is sufficiently small.

Transport cost inequality
Given two density functions u_0, u_1 : M → R, the Dacorogna-Moser coupling (see [Vil09, 16-17]) gives us a "first-order" approximation of the optimal matching between them. With this technique in mind, following the ideas developed in [AST18, Led17], we present a proposition that generalizes some of the results of those two papers. Its proof is simpler than those presented in [AST18, Led17], but relies on the Benamou-Brenier formula (see [BB00]). Let us remark that our application of the Benamou-Brenier formula is somewhat of an overkill; nonetheless, we believe that using it makes the result much more natural.
For the ease of the reader, let us recall the said formula before stating the proposition and its proof.

Definition 4.1 (Flow plan). — Given two measures μ_0, μ_1 ∈ P(M), a flow plan is the joint information of a weakly continuous curve of measures (μ_t) ∈ C⁰([0, 1], P(M)) and a time-dependent Borel vector field (v_t)_{t∈(0,1)} on M such that, in (0, 1) × M, the continuity equation

∂_t μ_t + ∇·(v_t μ_t) = 0

holds in the distributional sense.

Theorem 4.2 (Benamou-Brenier formula). — Given two measures μ_0, μ_1 ∈ P(M), it holds

W_2²(μ_0, μ_1) = min { ∫_0^1 ∫_M |v_t|² dμ_t dt : (μ_t, v_t) is a flow plan between μ_0 and μ_1 }.
In addition, if (μ_t, v_t) is a flow plan with ‖v_t‖_∞ + ‖∇v_t‖_∞ ∈ L^∞(0, 1), then the flow at time 1 induced by the vector field (v_t) is a transport map between μ_0 and μ_1 and its cost can be estimated by ∫_0^1 ∫_M |v_t|² dμ_t dt.

Proposition 4.3. — Given two positive, smooth density functions u_0, u_1 on M, let f ∈ C^∞(M) be the unique solution with null mean of −Δf = u_1 − u_0. For any increasing function θ ∈ C¹([0, 1]) such that θ(0) = 0 and θ(1) = 1, it holds

W_2²(u_0 m, u_1 m) ≤ ∫_0^1 θ'(t)² ∫_M |∇f|² / ((1−θ(t)) u_0 + θ(t) u_1) dm dt.

Furthermore, a map that realizes the cost at the right-hand side is the flow at time 1 induced by the time-dependent vector field v_t = θ'(t) ∇f / ((1−θ(t)) u_0 + θ(t) u_1).
Proof. — Let us define the convex combination ρ_t := (1−θ(t)) u_0 + θ(t) u_1 and the vector field v_t := θ'(t) ∇f / ρ_t. Then (ρ_t m, v_t) is a flow plan between u_0 m and u_1 m; hence, thanks to the Benamou-Brenier formula, we have the desired bound.

Corollary 4.4. — Given two smooth, positive density functions u_0, u_1, it holds

(4.1) W_2²(u_0 m, u_1 m) ≤ 4 ∫_M |∇f|² / u_0 dm,

and also

(4.2) W_2²(u_0 m, u_1 m) ≤ ∫_M |∇f|² (log(u_1) − log(u_0)) / (u_1 − u_0) dm,

where the ratio is understood to be equal to 1/u_0 wherever u_0 = u_1.

Proof. — The inequality (4.1) would follow from Proposition 4.3 if we were able to find a function θ such that for any x ∈ M it holds

(4.3) ∫_0^1 θ'(t)² / ((1−θ(t)) u_0(x) + θ(t) u_1(x)) dt ≤ 4 / u_0(x).
Let us start with the observation (1−θ(t)) u_0 + θ(t) u_1 ≥ (1−θ(t)) u_0; therefore, in order to get (4.3), it suffices to have θ'(t)² ≤ 4 (1−θ(t)), which is satisfied with equality by θ(t) = 1 − (1−t)². The inequality (4.2) follows from Proposition 4.3 by choosing θ(t) = t and computing the definite integral ∫_0^1 dt / ((1−t) u_0 + t u_1) = (log(u_1) − log(u_0)) / (u_1 − u_0).
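The key computation in the proof of Proposition 4.3, namely that the pair (ρ_t m, v_t) solves the continuity equation, can be sketched as follows, writing ρ_t := (1−θ(t)) u_0 + θ(t) u_1:

```latex
\begin{align*}
\partial_t \rho_t
  &= \theta'(t)\,(u_1 - u_0)
   = \theta'(t)\,(-\Delta f)
   = -\operatorname{div}\bigl(\theta'(t)\,\nabla f\bigr) \\
  &= -\operatorname{div}\Bigl(\rho_t \cdot \tfrac{\theta'(t)\,\nabla f}{\rho_t}\Bigr)
   = -\operatorname{div}(\rho_t\, v_t).
\end{align*}
```

Evaluating the Benamou-Brenier action on this flow plan gives exactly the right-hand side of Proposition 4.3.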
Remark 4.5. — The inequality (4.1) will be used when we can control only one of the two densities involved, whereas the sharper (4.2) will be used when we can show that both densities are already very close to 1.
Remark 4.6. — On a compact Riemannian manifold, it is well known that the Wasserstein distance W_2²(u_0 m, u_1 m) is controlled by the relative entropy and by the Fisher information (see [OV00]). In our setting this kind of inequality is not sufficient, as we need estimates that are almost sharp when u_0, u_1 ≈ 1. Moreover, both the relative entropy and the Fisher information depend on the pointwise value of the relative density u_1/u_0, but we need estimates that depend nonlocally on the density (as we must take into account the macroscopic differences between u_0 and u_1). On the other hand, (4.1) and (4.2) control the Wasserstein distance with the negative Sobolev norm ‖(u_1/u_0) − 1‖_{H^{−1}}, which is nonlocal, and (4.2) is almost sharp when u_0, u_1 ≈ 1.
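To connect with the negative Sobolev norm mentioned in the remark: when u_0 ≡ 1, an integration by parts identifies the Dirichlet energy of f with a squared homogeneous Ḣ^{−1} norm of the density discrepancy:

```latex
-\Delta f = u_1 - u_0
\quad\Longrightarrow\quad
\int_M |\nabla f|^2 \,\mathrm{d}m
 = \int_M f\,(u_1 - u_0)\,\mathrm{d}m
 = \|u_1 - u_0\|_{\dot H^{-1}(m)}^2 .
```

This is the nonlocal quantity that appears on the right-hand side of (4.1) and (4.2).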

Refined contractivity of the heat flow
The following proposition (see for example [EKS15, Th. 3]) expresses the well-known Hölder continuity of the heat flow with respect to the Wasserstein distance.

Proposition 5.1. — Given a measure μ ∈ P(M) and a positive time t > 0, it holds

W_2²(μ, P*_t μ) ≲ t.

In this section we are going to prove a more refined (asymptotic) contractivity result when μ is an empirical measure built from an i.i.d. family with law m.
The following theorem proves that the estimate described in Proposition 5.1 is far from sharp when the measure μ is the empirical measure generated by n random points. Indeed it shows that the average growth of the squared Wasserstein distance is no longer linear after the threshold t = log log(n)/n.
Such a trend would be expected if the matching cost had magnitude O(log log(n)/n), but its magnitude is O(log(n)/n). This quirk should be seen as a manifestation of the fact that, in dimension 2, the obstructions to the matching are both the global and the local discrepancies in distribution between the empirical measure and the reference measure (see [Tal14, §4.2] for further details on this intuition). Regularizing with the heat kernel takes into account only the short-scale discrepancies, and thus we stop observing linear growth well before the real matching cost is achieved.
Given that in higher dimensions the main obstruction to the matching is concentrated at the microscopic scale, we do not think that a similar statement can hold in dimension d > 2 (by a "similar statement" we mean an improvement of Proposition 5.1 by an unbounded factor in the relevant range of times).

Theorem 5.2. — Given a positive integer n ∈ N, let (X_i)_{1≤i≤n} be n independent random points m-uniformly distributed on M and let μ_n = (1/n) Σ_{i=1}^n δ_{X_i} be the empirical measure associated to the points (X_i)_{1≤i≤n}. There exists a constant C = C(M) > 0 such that, for any time t = α/n with α ≥ C log(n), denoting μ_{n,t} = P*_t(μ_n), it holds

E[ W_2²(μ_n, μ_{n,t}) ] ≲ log(α)/n.

J.É.P. — M., 2019, tome 6

Proof. — Our approach consists of using Proposition 5.1 to estimate the distance between μ_n and μ_{n,1/n}, and then adopting Proposition 4.3 to estimate the distance from μ_{n,1/n} to μ_{n,t}. Let u_{n,s} = (1/n) Σ_{i=1}^n p_s(X_i, ·) be the density of P*_s(μ_n) and let us fix the time t_0 = 1/n. Recalling Proposition 5.1, it holds

E[ W_2²(μ_n, μ_{n,t_0}) ] ≲ t_0 = 1/n.

In order to bound E[W_2²(μ_{n,t_0}, μ_{n,t})], let us restrict our study to the event A^{n,t}_{1/2}. As stated in Theorem 3.3, such an event is so likely (as a consequence of the assumption α ≥ C log(n)) that its complement can be completely ignored, because all the quantities we are estimating have polynomial growth in n (recall Remark 3.17).
Let us denote by f : M → R the null-mean solution to the Poisson equation −Δf = u_{n,t_0} − u_{n,t}, representable as f = ∫_{t_0}^t (u_{n,s} − 1) ds. Recalling that we are in the event A^{n,t}_{1/2}, we can apply (4.1) and obtain

W_2²(μ_{n,t_0}, μ_{n,t}) ≲ ∫_M |∇f|² dm.

Using the independence of p_a(X_i, y) and p_b(X_j, y) for a, b > 0, y ∈ M and i ≠ j, we are now able to compute the expected value of the right-hand side, which gives exactly the desired result. Let us remark that in one of the inequalities we have exploited the bound p_r(x, x) ≥ 1 with r = t + s, a simple consequence of the Chapman-Kolmogorov property.
Remark 5.3. — Before going on, let us take a minute to isolate and describe the approach we have employed to restrict our study to the event A^{n,t}_{1/2}. Let X, Y be random variables such that X ≡ Y on an event A. It holds

| E[X] − E[Y] | ≤ E[|X| 1_{A^c}] + E[|Y| 1_{A^c}] ≤ ( E[X²]^{1/2} + E[Y²]^{1/2} ) P(A^c)^{1/2}.

Therefore, exactly as we did in the previous proof, if the probability of A^c is much smaller than the inverse of the magnitude of X and Y, we can safely exchange X and Y when computing expected values.
Remark 5.4.-In order to exploit Remark 5.3 in the proofs of Theorems 1.1, 1.2 and 5.2, it is necessary to check that all involved quantities have at most polynomial growth in n (see Remark 3.17).
Let us prove, for example, that ∫_M |∇f_{n,t}|² dm has polynomial growth in n whenever t = t(n) ≳ 1/n. Thanks to standard elliptic estimates, it holds

∫_M |∇f_{n,t}|² dm ≲ ‖u_{n,t} − 1‖²_{L²(m)} ≲ 1 + ‖u_{n,t}‖²_∞ ≲ t^{−2},

where in the last inequality we have applied Theorem 3.8. Thence, the desired control over ∫_M |∇f_{n,t}|² dm follows from the condition on t = t(n).
For the proof of Theorem 1.2 this turns out to be sufficient, whereas for the proofs of Theorems 1.1 and 5.2 some similar (but not identical) quantities should be controlled.We do not write explicitly how to control them as the exact same reasoning works with minor changes.

Semi-discrete matching
This section is devoted to the computation, with an asymptotic estimate of the error term, of the average matching cost between the empirical measure generated by n random points and the reference measure.
An estimate of the error term was recently provided by Ledoux in [Led18, Eqs. (16) & (17)]. Our estimate is slightly better than the one proposed by Ledoux: he estimates the error as O(log log(n) log(n)^{3/4}/n), whereas our estimate is O(√(log(n) log log(n))/n). Let us briefly sketch the strategy of the proof.
Step 1. — The inequality developed in the previous section allows us to choose t of magnitude O(log³(n)/n) while keeping W₂²(µ_n, µ_{n,t}) under strict control. With such a choice of the regularization time, we can apply Theorem 3.3 and get that A^{n,t}_ξ is a very likely event. Without Theorem 5.2 we would only have been able to choose t = o(log(n)/n), and that would have invalidated the proof.
Step 2. — Using (4.2) we estimate the matching cost between µ_{n,t} and m. It turns out that, in the event A^{n,t}_ξ, this matching cost is almost equal to an explicit quantity, which we are able to evaluate thanks to Lemma 3.14. The statement we are giving here is slightly stronger than the one given in the introduction (as we can now use the function f_{n,t}).
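The regularization of Step 1 is easy to visualise numerically. The following sketch (our own illustration, not from the paper; the function names and the truncation of the image sum are arbitrary choices) smooths the empirical measure of n uniform points on the flat torus [0, 1)² by the heat flow at time t ≈ log³(n)/n, and checks that the resulting density u_{n,t} still integrates to approximately 1:

```python
import numpy as np

def torus_heat_kernel_1d(dx, t, n_images=4):
    # 1-d factor of the periodic heat kernel, via the method of images
    s = np.arange(-n_images, n_images + 1)
    return np.exp(-(dx[..., None] + s) ** 2 / (4 * t)).sum(-1) / np.sqrt(4 * np.pi * t)

def regularized_density(points, xs, t):
    # u_{n,t}(x) = (1/n) * sum_i p_t(x, X_i) on the flat torus [0,1)^2
    dx = xs[:, None, 0] - points[None, :, 0]
    dy = xs[:, None, 1] - points[None, :, 1]
    k = torus_heat_kernel_1d(dx, t) * torus_heat_kernel_1d(dy, t)
    return k.mean(axis=1)

rng = np.random.default_rng(0)
n = 100
t = np.log(n) ** 3 / n          # regularization time of Step 1
pts = rng.random((n, 2))
g = np.linspace(0, 1, 25, endpoint=False)
xs = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
u = regularized_density(pts, xs, t)
mass = u.mean()                  # Riemann sum: u_{n,t} should integrate to ~1
print(float(mass))
```

At such a large regularization time the density u_{n,t} is already very close to the constant 1, which is exactly why the event A^{n,t}_ξ becomes very likely.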
Theorem 1.2. — Let (M, g) be a 2-dimensional compact closed manifold (or the square [0, 1]²) whose volume measure m is a probability. Let (X_i)_{i∈N} be a family of independent random points m-uniformly distributed on M. For a suitable choice of the regularization time, the stated expansion holds; furthermore it also holds an estimate from which the error bound follows. Proof. — Let us fix the parameter ξ = 1/log n and the time variable so that Remark 3.17 gives the required decay. Hence, by choosing γ > 0 sufficiently large, we can obtain any power-like decay we need.
Thanks to Proposition 4.3 we know that µ_{n,t} is the push-forward of m through the flow at time 1 of a suitable time-dependent vector field.
Thus, if we assume to be in the event A^{n,t}_ξ with n sufficiently large, we can apply Proposition A.1 with X = ∇f_{n,t} and Y_s = ∇f_{n,t}/(1 + s(u_{n,t} − 1)) to obtain (6.2). Still working in the event A^{n,t}_ξ, thanks to [Gla19, Th. 1.1], we can say (for sufficiently large n) that (6.3) holds. Once again, as we have done in the proof of Theorem 5.2, let us notice that the restriction of our analysis to the event A^{n,t}_ξ is not an issue. Indeed its complement is so small that, because all the quantities involved have at most polynomial growth in n, using the approach described in Remark 5.3 we can restrict our study to the event A^{n,t}_ξ thanks to Theorem 3.3. Hence, joining (6.1), (6.2) and (6.3) with the triangle inequality, we get the desired estimate. Thus the first part of the statement follows from the choice ξ = 1/log n, which gives, recalling Lemma 3.14, that the leading term on the right-hand side is the first summand.
The second part of the statement follows once again from (6.1), (6.2) and (6.3). But instead of the triangle inequality we use an elementary inequality that holds for any choice of square-integrable random variables C, D. In more detail, we apply the said inequality with D = A + B. To estimate E[D²] we proceed by bounding it through a sum of terms including E[W₂²(µ_{n,t}, exp(∇f_{n,t})_# m)], where we have applied (6.3). Then (6.1) and (6.2) provide the inequalities necessary to conclude.

Remark 6.1. — In the work [CLPS14], the authors claim that the higher-order error term should be O(1/n). Unfortunately, with our approach it is impossible to improve the estimate on the error term from log(n) log log(n)/n to 1/n. Indeed, even ignoring all the complex approximations and estimates, our expansion involves the term |log t|/(4πn). Thence we would be obliged to set t = O(1/n). The issue is that this choice of t does not allow us to exploit Theorem 3.3. Indeed, if t = O(1/n), we are no longer able to prove that A^{n,t}_ξ is a very likely event (even when ξ is fixed) and our strategy fails.

Bipartite matching
Exactly as we have computed the expected cost for the semi-discrete matching problem, we are going to do the same for the bipartite (or purely discrete) matching problem, i.e., we have to match two families of n random points, trying to minimize the sum of the squared distances.
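For intuition, the bipartite cost on the flat torus can be computed exactly for moderate n with the Hungarian algorithm; the leading-order prediction n · E[cost] ≈ log(n)/(2π) is the known asymptotic for the torus. The following Monte Carlo sketch is our own numerical illustration (not part of the proof; function names are ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def torus_cost_matrix(X, Y):
    # squared torus distance: take the shorter way around in each coordinate
    d = np.abs(X[:, None, :] - Y[None, :, :])
    d = np.minimum(d, 1.0 - d)
    return (d ** 2).sum(-1)

def matching_cost(X, Y):
    # optimal bipartite cost (1/n) * min over permutations of the squared distances
    C = torus_cost_matrix(X, Y)
    r, c = linear_sum_assignment(C)
    return C[r, c].sum() / len(X)

rng = np.random.default_rng(1)
n, trials = 300, 20
costs = [matching_cost(rng.random((n, 2)), rng.random((n, 2)))
         for _ in range(trials)]
avg = float(np.mean(costs))
leading = np.log(n) / (2 * np.pi)   # predicted leading term of n * E[cost]
print(n * avg, leading)
```

At n = 300 the lower-order corrections are still visible, so n · avg only agrees with log(n)/(2π) up to a moderate factor; the ratio slowly approaches 1 as n grows.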
The approach is almost identical to the one described at the beginning of Section 6.Let us remark that this result is new for a general 2-dimensional closed manifold M .Indeed in the work [AST18] the authors manage to handle the bipartite matching only when M is the 2-dimensional torus (or the square).Their approach is custom-tailored for the torus and thence very hard to generalize to other manifolds.
Theorem 1.1 (Main theorem for bipartite matching). — Let (M, g) be a 2-dimensional compact closed manifold (or the square [0, 1]²) whose volume measure m is a probability. Let (X_i)_{i∈N} and (Y_i)_{i∈N} be two families of independent random points m-uniformly distributed on M. Then the stated asymptotic behaviour holds. Proof. — Let us warn the reader that in this proof the definitions of f_{n,t} and of A^{n,t} change. The change is a natural consequence of the presence of two families of points. We decided to keep the same notation, as the role and meaning of the objects do not change at all. We will skip the parts of the proof identical to the proof of Theorem 1.2. Let us fix the parameter ξ = 1/log n and the time variable t = γ log³(n)/n, where γ > 0 is a sufficiently large constant.
Analogously to what we have done in the semi-discrete case, let us define the empirical measures µ_n^0 and µ_n^1 generated by the two families of points (so that µ_n^1 is built from the Dirac masses δ_{Y_i}), and the associated regularized measures and densities µ_{n,t}^0 := P*_t µ_n^0 = u_{n,t}^0 m, µ_{n,t}^1 := P*_t µ_n^1 = u_{n,t}^1 m. Let us denote by f_{n,t} : M → R the unique function with null mean value such that −∆f_{n,t} = u_{n,t}^1 − u_{n,t}^0. Of course it holds f_{n,t} = f_{n,t}^1 − f_{n,t}^0, where the functions f_{n,t}^0 and f_{n,t}^1 are defined exactly as in (2.2) but using µ_{n,t}^0 and µ_{n,t}^1 in place of µ. Hence, we can apply Theorem 3.3 to f_{n,t}^0 and f_{n,t}^1 to obtain the desired estimate. Here A^{n,t}_ξ is defined as the intersection of the events A^{n,t}_{ξ,ι} for ι = 0, 1, where A^{n,t}_{ξ,ι} is the analogue of the semi-discrete event. From now on the proof goes along the exact same lines as the proof of Theorem 1.2 (just replacing µ_n with µ_n^1 and m with µ_n^0), apart from the computation of one expected value. Indeed, in the semi-discrete case we could blindly apply Lemma 3.14, whereas now we have to compute it.
Thanks to Theorem 3.3 and Remark 5.3, we can assume to be in the event {‖u_{n,t} − 1‖_∞ < ξ}, as it is so likely that its complement can be safely ignored. Thus, if we replace µ_{n,t}^0 with m, we obtain the desired reduction. As already done in the proof of Lemma 3.14, using the linearity of the expected value and the independence of the random points, we can easily compute this quantity; therefore, applying Proposition 3.11, we have shown the estimate that, together with (7.1), is exactly the result we needed to complete the proof.
Remark 7.1 (Interpolation between semi-discrete and bipartite). — The proof given for the bipartite case is flexible enough to handle also families of random points with different cardinalities. Given a positive rational number q ∈ Q and a natural number n ∈ N such that qn ∈ N, let (X_i)_{1≤i≤n} and (Y_i)_{1≤i≤qn} be two families of independent random points m-uniformly distributed on M. Then, exactly as we have proved Theorem 1.1, we can show an analogous expansion. Let us remark that when q ≫ 1 we recover the result of the semi-discrete case.

Appendix A. Stability of vector fields flows
In this appendix we are going to obtain a stability result for flows of vector fields on a compact Riemannian manifold. This kind of result is well known, but we could not find a statement in the literature that fits exactly our needs. The proof has a very classical flavor, borrowing the majority of the ideas from the uniqueness theory for ordinary differential equations. Nonetheless it might seem a little technical, as we are working on a Riemannian manifold and we are using both flows of vector fields and the exponential map.
Given a compact closed Riemannian manifold M, we assume in this section that X ∈ χ(M) is a vector field such that ‖∇X‖_∞ < 1/2 (see Definition 3.1) and such that ‖X‖_∞ is sufficiently small with respect to inj(M).
Proposition A.1. — Under the previous assumptions on X, if (Y_t)_{0≤t≤1} is a time-dependent vector field such that, for a suitable 0 < ξ < 1, the stated pointwise bound holds, then the conclusion follows. The statement follows easily by integrating this last inequality.
As anticipated, the main issue with the previous lemma is that it uses the flow of X instead of the exponential map. We will use a trick that involves applying the lemma again to replace the flow with the exponential map. In order for our trick to work we need the following simple estimate.

Let us define the associated kernel p_t : R² × R² → (0, ∞) as p_t(x, y) := Σ_{x'∈Gx} p^{R²}_t(x', y), where p^{R²}_t denotes the Euclidean heat kernel on the plane.
Our goal is to show that p_t is exactly the (Neumann) heat kernel for the domain [0, 1]². All the needed verifications are readily done, apart from the fact that p_t satisfies the Neumann boundary conditions. This property follows from the following simple symmetries of p_t: for all x, y ∈ R² and all g ∈ G, p_t(x, y) = p_t(g(x), y) = p_t(x, g(y)).
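As a numerical sanity check (our own sketch; the truncation level of the image sum and the function names are arbitrary choices), one can build the symmetrized kernel from the planar Gaussian kernel, taking G to be the group generated by the reflections across the lines x_i = 0 and x_i = 1, and verify both the reflection symmetry and the vanishing of the normal derivative at the boundary:

```python
import numpy as np

def plane_heat_kernel(x, y, t):
    # Euclidean heat kernel on R^2
    return np.exp(-((x - y) ** 2).sum() / (4 * t)) / (4 * np.pi * t)

def images(x, n_copies=3):
    # (truncated) orbit Gx of x under the group G generated by the
    # reflections across the lines x_i = 0 and x_i = 1
    ks = range(-n_copies, n_copies + 1)
    return np.array([(s1 * x[0] + 2 * k1, s2 * x[1] + 2 * k2)
                     for s1 in (1, -1) for s2 in (1, -1)
                     for k1 in ks for k2 in ks])

def neumann_kernel(x, y, t):
    # symmetrized kernel: sum of the plane kernel over the orbit of x
    return sum(plane_heat_kernel(xp, np.asarray(y), t)
               for xp in images(np.asarray(x)))

t = 0.05
x, y = np.array([0.3, 0.7]), np.array([0.6, 0.2])
# symmetry under g in G, here g = reflection across {x_1 = 0}
lhs = neumann_kernel(x, y, t)
rhs = neumann_kernel((-x[0], x[1]), y, t)
# normal derivative at the boundary point (0, 0.5) vanishes
h = 1e-5
dp = (neumann_kernel((h, 0.5), y, t) - neumann_kernel((-h, 0.5), y, t)) / (2 * h)
print(abs(lhs - rhs), abs(dp))
```

Since the orbit of g(x) coincides with the orbit of x, the symmetry holds exactly (up to floating point), and the evenness across the boundary lines is precisely what forces the Neumann condition.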
It is now time to prove Theorem 3.9 for p t .
Proof of Theorem 3.9 for M = [0, 1]². — First, we are going to prove explicitly Theorem 3.9 for the heat kernel on the plane, for some suitable coefficients c^{m₁,m₂}_{n₁,n₂}. From this formula, it follows