Front propagation directed by a line of fast diffusion: large diffusion and large time asymptotics

The system under study is a reaction-diffusion equation in a horizontal strip, coupled to a diffusion equation on its upper boundary via an exchange condition of the Robin type. This class of models was introduced by H. Berestycki, L. Rossi and the second author in order to model biological invasions directed by lines of fast diffusion. They proved, in particular, that the speed of invasion was enhanced by a fast diffusion on the line, the spreading velocity being asymptotically proportional to the square root of the fast diffusion coefficient. These results could be reduced, in the logistic case, to explicit algebraic computations. The goal of this paper is to prove that the same phenomenon holds, with a different type of nonlinearity, which precludes explicit computations. We discover a new transition phenomenon, that we explain in detail.


Introduction, statement of the problem

Model and question
Let Ω_L be the strip {(x, y) ∈ R × (−L, 0)}. The goal of this paper is to study the large time asymptotics of the following system: the unknowns are the functions (u(t, x), v(t, x, y)), respectively defined on R_+ × R and R_+ × Ω_L. The positive numbers µ, d, D are given. The function f(v) is smooth, and there is θ > 0 such that f ≡ 0 on [0, θ] and f(1) = 0; moreover, f > 0 on (θ, 1) and f′(1) < 0. Such a nonlinear term will sometimes be referred to as an ignition type nonlinearity, in reference to the mathematical literature on flame propagation models. Of particular interest to us will be the large time asymptotics of (1), combined with the limit D → +∞.
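For the reader's convenience, here is a sketch of how system (1) reads under the verbal description above. This is a reconstruction following the formulation of [5, 11] (Robin-type exchange on the road {y = 0}; a Neumann condition on the lower boundary is assumed here), so the signs and constants should be checked against the original display:

```latex
\begin{cases}
\partial_t u - D\,\partial_{xx} u = v(t,x,0) - \mu u, & t>0,\ x\in\mathbb{R},\\[2pt]
\partial_t v - d\,\Delta v = f(v), & t>0,\ (x,y)\in\Omega_L,\\[2pt]
d\,\partial_y v(t,x,0) = \mu u(t,x) - v(t,x,0), & t>0,\ x\in\mathbb{R},\\[2pt]
\partial_y v(t,x,-L) = 0, & t>0,\ x\in\mathbb{R}.
\end{cases}
```

The exchange condition on {y = 0} conserves mass: what the road loses, µu − v, is exactly the flux gained by the field.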

Motivation
System (1) was proposed for the first time by Berestycki, Rossi and the second author in [5], as a model for biological invasions in oriented habitats. It was indeed observed in several instances that transportation networks tend to enhance the speed of invasion. Let us mention two biological instances: the pine processionary moth moves northwards faster than anticipated, and it is believed that the road network plays a role in the phenomenon, see for example [21]. The yellow-legged hornet has invaded the whole South West of France, as is reported in the maps provided in [13]: it first followed the main rivers, and from there colonised the inland areas.
In [5], Ω_L is replaced by the whole upper half-plane, and the nonlinearity f is of Fisher-KPP type (f(0) = f(1) = 0, f > 0 and concave between 0 and 1). The line {y = 0} is named 'the road', and the upper half-plane is named 'the field'. This terminology will be freely used here. We showed the dramatic effect of the road on the overall propagation: there is c*(D) > 0 such that spreading occurs at every speed c < c*(D). Moreover, there is c_∞ > 0 such that c*(D) ∼ c_∞√D as D → +∞. This is in sharp contrast with the classical propagation results for reaction-diffusion equations, such as Aronson-Weinberger [1]. One could question whether this is an effect of the Fisher-KPP nonlinearity, or if it holds for more general terms f. In [11], the first author gives a first hint of the robustness of this phenomenon, by constructing travelling waves (φ(x + ct), ψ(x + ct, y)) to (1) whose speed c indeed satisfies c(D) ∼ c_∞√D, where c_∞ > 0 is characterised in terms of a limiting problem obtained by rescaling x by √D and sending D to infinity. In order to confirm the phenomenon for (1) with the ignition type nonlinearity, one should understand whether, and how, those travelling waves attract the solutions of (1). Instead of presenting the results now, we will show some numerical simulations, which reveal a phenomenon that we had not expected.

Some numerical simulations
These simulations were produced using FreeFem++. We used P2 finite elements on a mesh of 400 × 50 points. The time scheme is a two-step (to handle the coupling) explicit Euler scheme, which proved quite sufficient in terms of accuracy and speed for our context. Neumann boundary conditions are imposed on the sides of a domain of size A × L with A ≫ L. Finally, we represented u as a function over the whole domain so that it is visible. The scenario that we would expect is thus the following: due to the large diffusivity D, u is quickly spread over all of R and decays rapidly. Meanwhile, v grows slowly and transmits mass to u. At some point, u has recovered enough mass and starts to lead the propagation. The acceleration of the propagation is then transmitted downwards from the road to the bottom of the field, reaching the regime dictated by the travelling wave. The remainder of this paper is devoted to proving that this is indeed what happens.
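For readers who wish to reproduce the qualitative picture without FreeFem++, here is a minimal explicit finite-difference sketch of the coupled road-field system. All parameter values, the ignition profile and the discrete form of the exchange condition are illustrative assumptions, not the ones used in the paper:

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's values)
D, d, mu = 10.0, 1.0, 1.0       # road diffusivity, field diffusivity, exchange rate
A, L = 20.0, 2.0                # horizontal extent, strip depth
theta = 0.3                     # ignition threshold
nx, ny = 200, 40
dx, dy = A / nx, L / ny
dt = 0.2 * min(dx**2 / (2.0 * D), dy**2 / (4.0 * d))  # explicit-scheme stability margin

def f(v):
    """Ignition-type nonlinearity: f = 0 on [0, theta], f > 0 on (theta, 1), f(1) = 0."""
    return np.where(v > theta, (v - theta) * (1.0 - v), 0.0)

x = np.linspace(-A / 2, A / 2, nx)
u = np.zeros(nx)                 # road density u(t, x)
v = np.zeros((nx, ny))           # field density; column j corresponds to y = -j*dy
v[np.abs(x) < 1.0, :] = 1.0      # compactly supported initial datum in the field

for _ in range(2000):
    # Road: u_t = D u_xx + v(x, 0) - mu*u, homogeneous Neumann at the lateral ends
    up = np.pad(u, 1, mode="edge")
    u_xx = (up[2:] - 2.0 * u + up[:-2]) / dx**2
    u_new = u + dt * (D * u_xx + v[:, 0] - mu * u)

    # Field: v_t = d*(v_xx + v_yy) + f(v); edge padding gives Neumann ghost cells
    vx = np.pad(v, ((1, 1), (0, 0)), mode="edge")
    vy = np.pad(v, ((0, 0), (1, 1)), mode="edge")
    lap = (vx[2:, :] - 2.0 * v + vx[:-2, :]) / dx**2 \
        + (vy[:, 2:] - 2.0 * v + vy[:, :-2]) / dy**2
    v_new = v + dt * (d * lap + f(v))

    # Robin exchange on the road (y = 0): d * dv/dy = mu*u - v (sign convention assumed)
    v_new[:, 0] = v_new[:, 1] + (dy / d) * (mu * u - v[:, 0])

    u, v = u_new, v_new
```

Plotting u and v over time should reproduce the qualitative scenario described above: a fast, thin spread of u on the road followed by a slower build-up of v in the field.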

Main results, discussion
Let us reformulate System (1) in the following way; we hope that it will help the reader visualise the problem.
We also want to study the behaviour for large D, so the renormalisation x ← x√D will often be used: some results will be stated for equation (2), some for (3), and the proofs will juggle between the two. We briefly mention existence and uniqueness of a solution and refer to [5] for the proof (there the strip is replaced by a half-plane, but the argument still holds). We wish to emphasise that uniqueness, as well as many properties of this system, is a consequence of the monotone structure of (2) inherited from the maximum principle investigated in [5, 10, 11]. The purpose of this paper is to investigate the large-time and large-diffusion asymptotics of this solution.
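In concrete terms, and consistently with the notation ε := D^{−1/2} used in Section 6, the rescaling of x by √D turns (2) into a system of the following shape (a sketch; the exact display (3) is in the original):

```latex
\begin{cases}
\partial_t u - \partial_{xx} u = v(t,x,0) - \mu u, & x\in\mathbb{R},\\[2pt]
\partial_t v - d\varepsilon^2\,\partial_{xx} v - d\,\partial_{yy} v = f(v), & (x,y)\in\Omega_L,
\end{cases}
\qquad \varepsilon := D^{-1/2},
```

with the exchange and bottom boundary conditions unchanged. The explicit D disappears from the road equation, while the horizontal diffusion in the field becomes small, which is the source of the degenerate estimates needed later.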
There exists a global solution in the classical sense to (2) with initial data (u_0, v_0). This solution is unique in the class of bounded classical solutions and satisfies 0 ≤ µu(t, x), v(t, x, y) ≤ 1 for all t ≥ 0, x ∈ R and (x, y) ∈ Ω_L.

First results
The following theorems are natural consequences of the stability of front-like initial data, obtained through an argument initiated by [15], and are not so unexpected. They will, nevertheless, be useful for later purposes. A specificity of the present computations is that they should be uniform in the large parameter D, which is why they are detailed.

Theorem 2.2.
[Stated for equation (2)] Let (u_0, v_0) be a front-like initial datum for equation (3), that is, (u_0, v_0) ∈ P_{α_0} as defined in the next section. There exists an exponent ω > 0, depending on the initial data only through α_0, such that for all ε > 0 small enough there exist two shifts between which the solution remains trapped, up to an error controlled by Ce^{−ωt}, where C is a constant that depends only on f, d and L. Moreover, ω does not depend on D > d.
Remark 2.1. The previous theorem does not give convergence towards travelling waves, but it gives a precise spreading velocity. In [19], this is the starting point of an iterative argument showing a geometric decrease of the distance separating the two shifts with respect to a fixed time step. One could think of adapting the argument of [19] to (3), but this would not be uniform in D > d. We prefer to focus on the really new features of the model.
We now turn to what happens for compactly supported initial data (see Theorem 4.1 for a more precise statement):

Theorem 2.3. [Stated for equation (2)] Let (u_0, v_0) be non-negative, smooth, compactly supported data. There exist δ > 0 and a size threshold such that, if the initial data are large enough in this sense, then (µu, v) stays trapped (up to an exponentially decaying error) between two shifts of a pair of travelling waves evolving in opposite directions.

Data with O(1) support and additional effects
In this section we state the results that account for the above numerical simulations.

Theorem 2.4. [Stated for equation (2)] Let L be large enough (independently of D). There exist M, δ > 0, independent of D, such that if the initial data are large enough on an interval of size M, then the following holds: let h(D) be any function increasing to infinity as D → ∞; then after a time t_D = D^{1/2} h(D) + O(1), the functions µu and v satisfy the assumptions of Theorem 2.3. As a consequence, starting from the time t = t_D, propagation occurs as described in Theorem 2.3.
One could argue that this happens in a much smaller time. The next theorem shows that, even if the solution may not take all the time t_D to fall into the assumptions of Theorem 2.3, it still needs a lot of time. To what extent the upper bound in the preceding theorem and the lower bound in the next theorem can be reconciled is a very interesting question that we leave for future work.

Theorem 2.5. [Stated for equation (2)]
Let M, δ > 0 be as in Theorem 2.4. For every κ > 0, there exists a time range (4) such that, for t satisfying (4), (µu(t, ·), v(t, ·, ·)) does not satisfy the assumptions of Theorem 2.3. More precisely, we have, uniformly in t satisfying (4), the estimate displayed below.

Finally, we investigate the situation of an initial datum supported only on the road. The behaviour that we find does not at all look like what we have just discovered for initial data supported in the field. If µu_0 ≤ 1 has support of size ≤ C√D, there will be extinction. On the other hand, we also provide conditions on µ for invasion to happen.

Theorem 2.6. [Stated for equation (3)] Let v_0 ≡ 0 and µu_0 = 1_{(−a,a)} be initial data for (3), and u, v the associated solutions. We have the following:
• There exists a_0 > 0, independent of D, such that if a < a_0, µu and v decay to 0 uniformly as t → +∞.
• More generally, provided µ < µ_−, there exists a_1 > 0, independent of D, such that if a > a_1, invasion occurs.
Remark 2.2. It is quite natural that µ too large leads to extinction: indeed, we normalised u so that u ≤ 1/µ, and moreover µ acts as a death rate in the equation for u. Meanwhile, v sees the same initial Robin boundary condition ≡ 1 independently of µ.

Bibliographical study and discussion
The general issue of our work is that of speed-up versus quenching. The first contribution concerning the behaviour of compactly supported initial data in reaction-diffusion equations of ignition (or bistable) type can be found in Kanel' [17]. For the one-dimensional equation, the author shows the existence of two thresholds L_0 ≤ L_1: if the size l of the support of the initial datum satisfies l < L_0, v ends up below θ in finite time (and, as a consequence, decays to 0 uniformly): we call this situation quenching. On the other hand, if l > L_1, it is shown that v → 1 uniformly on compact sets as t → +∞. Zlatoš [23] showed L_0 = L_1, and more generally Du and Matano [12] showed the existence of such thresholds for general one-parameter families of initial data.
For equations in cylinders and in the presence of a parallel shear flow, an important issue is to understand how a large amplitude flow (i.e. A ≫ 1) will enhance spreading. This has been studied in various papers starting from [3], where a linear (in A) speed-up is shown in the case of a Fisher-KPP nonlinearity. In a more general setting, let us quote Constantin-Kiselev-Oberman-Ryzhik [6], who introduce the notion of bulk burning rate. For ignition type nonlinearities, the same result holds, as proved by Hamel and Zlatoš [16] (see [11] for a comparison of their result with our situation). As for whether propagation or quenching holds, Constantin-Kiselev-Ryzhik [7] and Kiselev-Zlatoš [18] show that the price to pay for propagation (hence, speed-up) also has a linear scaling in A, provided that the flow is not constant on too large intervals. In other words, one trades a linear speed-up of propagation for a linear growth in the critical size of initial data that leads to quenching.
In the case of cellular flows, the same phenomenon happens but with a scaling in A^{1/4} (up to a logarithmic factor): the speed-up property was proved by Novikov and Ryzhik [20] for the KPP case, and more recently by Zlatoš [24] for combustion type nonlinearities. On the other hand, Fannjiang-Kiselev-Ryzhik [14] proved (for flows with small enough cells) that if L^4 ln(L) < kA, where L represents the size of the square supporting the initial datum, quenching happens. See also the numerical simulations of [22].
A different type of mechanism is studied in Constantin-Roquejoffre-Ryzhik-Vladimirova [8], where the authors investigate a system coupling a reaction-diffusion equation and a Burgers equation. They show different quenching results with respect to a gravity parameter, one of them being that quenching happens independently of l when the gravity is large enough.
In the light of this section, Theorem 2.4 may come as a surprise, since it shows a speed-up of the propagation (c = c_∞√D) for free: D does not appear in the threshold size of the initial data v_0. The trade-off is the presence of a "two-speed" mechanism: propagation first happens at a small speed that does not depend on D, and then accelerates towards the full speed c(D). On the other hand, if one tries to initiate the invasion only through µu_0 = 1_{(−l,l)}, Theorem 2.6 shows that quenching happens if l < a_0 D^{1/2} (from the point of view of (2)).

Organisation of the paper
Section 3 is devoted to proving Theorem 2.2 in the more precise form of Theorem 3.1; Section 4 provides the details of the proof of Theorem 2.3 by proving the detailed Theorem 4.1. In Section 5 we prove Theorem 2.4, and we prove Theorem 2.5 in Section 6. These two sections describe more precisely the mechanism at work. Finally, the last section investigates the case of initial data supported on the road only.

Front-like initial data
Let us first trap the initial data between functions that will evolve into sub- and supersolutions travelling at the right speed.

Trapping the initial data
We deal with bounded, uniformly continuous perturbations (ρ_1, ρ_2) such that there exist C and α_0 > 0 with ρ_i(x) ≤ Ce^{α_0 x}. We then assume that the initial data take the displayed form, for some (ρ_1, ρ_2) as above and some translation ξ ∈ R. Such initial data are said to be in the class P_{α_0}. In this subsection, we prove that such initial data can be trapped between two translates of the travelling front, which is conceptually simple but necessary. Due to the degeneracy of f(v) for v ≤ θ, we will have to use the following weight function. Let L_0 > 3 and 0 < α < min(α_0, c).
Define Γ(x) to be a smooth non-decreasing function with the properties displayed above. We also recall the exponential convergence towards 0 or 1 as x → ±∞ proved in [10], [11]: there exist λ, λ̄ > 0 (bounded from below uniformly in D > d), and one can enlarge L_0 > 0 accordingly. That way, ahead of the front the system becomes linear, and behind the front one controls the monotonicity of f. We now assert the following.

Proposition 3.1. Assume (5), (6). Then for any ε > 0, there exist ξ_0^− < 0 and ξ_0^+ > 0 such that (9) and (10) hold.

Proof. We only prove (9); (10) is obtained simultaneously, with the same arguments (y-uniform limits, y-uniform exponential decay), by taking |ξ_0^±| large enough. We start with the right inequality. Let ε > 0. Thanks to the uniform limit of φ as x → +∞, a suitable shift exists. On the other hand, when x ≤ −ξ_0^+ − L_0 − 1, µu_0(x) ≤ Ce^{α_0 x}, so here, for the inequality to be true, one needs to compare with εe^{α(x+ξ_0^+ + L_0)}. Observe that on this interval µu_0(x) goes uniformly to 0 as ξ_0^+ → ∞, whereas the right-hand side in (9) has a fixed positive infimum, so that the desired order is obtained by enlarging ξ_0^+.

For the existence of ξ_0^−: observe that the required inequality holds provided ξ_0^− is negative enough, thanks to the uniform limit of u_0 as x → +∞. For the rest of the proof, we need the inequality on the remaining region; because the exponential decay rate λ of φ and ψ satisfies λ > c ≥ α (see [11]), it holds there. Again, we cover the compact region left around the interface by enlarging −ξ_0^−.

Wave-like sub and supersolution
We adapt the original result of Fife-McLeod [15], using the simplified notations and the generalisation of Mellet-Nolen-Ryzhik-Roquejoffre [19]. The adaptation is computationally non-trivial, so let us first explain the changes that we expect. Our objective is to build a supersolution (ū, v̄) to (3) that is close to the front (φ, ψ) (in the frame moving at speed c). In the homogeneous case, and for generalised transition fronts, the authors of [19] proposed an ansatz of the form ψ + q(t)Γ, where ψ is the front, ξ(t) is an increasing shift starting from the initial one and converging to some finite limit, q(t) = εe^{−ωt}, and Γ is defined above: this is a necessary correction to take into account the initial perturbation and the degeneracy of f on v ≤ θ. In our case, we will look for a similar ansatz, with ξ starting from ξ_0^±. We will also make use of the decay estimates recalled above. Now we reduce α a bit more and set α = min(α_0, c/5), just so that the relevant quantities cannot vanish. These quantities will play an important role in the following computations.
Observe that this condition on α means that the decay correction obtained through Γ is limited: solutions starting from large perturbations (i.e. small α_0) will be stabilised thanks to a correction with decay α_0 as well, but solutions starting from very small perturbations (i.e. very large α_0) will still need a c/5 correction in the decay at −∞ to be stabilised. Since we still want an exponential decay of q_v(t), we look for a solution with separated variables. The boundary conditions yield h′(−L) = 0 and h′(0) + h(0) = C, so that we have a large choice for h. Nonetheless, it will become clear in the following computations that a good candidate is the one chosen above, with the stated conditions. The role of these conditions will become clear in the computations.
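To fix ideas, the supersolution ansatz described in this subsection can be sketched as follows. This is a reconstruction from the surrounding text and from the notations q_u = Cεe^{−ωt}, q_v = εh(y)e^{−ωt} recalled in Theorem 4.1; whether the shift enters as ξ(t) or cξ(t) depends on the normalisation used in the original displays (11), (13), (14):

```latex
\begin{aligned}
\bar u(t,x) &= \phi\big(x+ct+\xi(t)\big) + q_u(t)\,\Gamma\big(x+ct+\xi(t)\big),\\
\bar v(t,x,y) &= \psi\big(x+ct+\xi(t),y\big) + q_v(t,y)\,\Gamma\big(x+ct+\xi(t)\big),\\
q_u(t) &= C\varepsilon e^{-\omega t},\qquad q_v(t,y)=\varepsilon h(y)e^{-\omega t},\qquad \xi'(t)=B\varepsilon e^{-\omega t}.
\end{aligned}
```

The correction q·Γ absorbs the initial perturbation where f is degenerate, while the slowly saturating shift ξ(t) pays for the deformation of the profile.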
Observe that, since G′(0) > 0, the decay exponent ω is linearly small as β, α_0 or µ is small; it should, however, be noticed that it does not depend on D ≥ d, and that it depends on the initial data only through α_0. We can now state the following.

Theorem 3.1. Assume (5), (6) and let u, v denote the associated solutions of (3). Let ε_0 = min(θ/4, (1 − θ_1)/4, γ_0), where γ_0 is defined above. There exists a constant K_0, depending on the initial data only through α_0, such that if ε ∈ (0, ε_0), there exist ξ_1^± such that (16), (17) hold for all t ≥ 0.

Proof. Inequalities (16), (17) are set in the moving frame, with variables (t, x + ct). As a consequence, in the computations one has to replace ∂_t by ∂_t + c∂_x. We now want to show that ū, v̄ as defined in (11) indeed yield a supersolution, and that u, v as defined in (12) yield a subsolution. Then (16), (17) will follow from an application of the comparison principle, Proposition 3.1, and the monotonicity of ξ. Indeed, this will show that, in the original frame, (u, v) stays trapped between the fronts shifted initially by ξ_0^± and moving at speed c(1 ± ξ′) (that is, at speed ±cξ′ relative to the moving frame). This deformation becomes of course exponentially small over time, due to the e^{−ωt} factor. Observe also that ξ′ ≤ 1/4 and is exponentially decaying in time, so (u, v) will propagate at least and at most with speed c + o(1).
We divide this computation into three zones according to the value of x + ξ(t). In the following, φ and ψ will always mean φ(x + cξ(t)) and ψ(x + cξ(t), y), q_u will always mean q_u(t), q_v will mean either q_v(t, 0) or q_v(t, y), and Γ will always mean Γ(x + cξ(t)), all of these functions being defined as above in (7) and (13)-(14).

Behind the front
The first inequality holds because we look for ξ ≥ 0, and the last because ω ≤ G(√κ L).
The last inequality holds because w ≤ β/2, and the next-to-last because κ ≤ β/2.
The last inequality holds because of the condition on ω, and the next-to-last because of the condition on κ and because ξ ≥ 0.

The middle region
We obtain conditions (18), (19) by remarking that κ ≤ β < Lip(f), ω < Lip(f) and d/D < 1; we then take ξ′(t) = Bεe^{−ωt}, which answers our queries. One should observe that the condition Bε ≤ 1/4 has not been used yet, nor has ω ≤ cα/4 − α² rather than just (cα − α²)/2. The computations concerning the subsolution (12), this time with the corresponding ω, are exactly symmetric, except for a cα(1 − ξ′) term (instead of cα(1 + ξ′)) appearing ahead of the front and in the middle region, which is treated thanks to the above still-unused assumptions. This ends the construction.

Compactly supported initial data
In this section, we go back to the fixed original frame. Seeing the problem in the light of [15], it is natural to test a pair of waves evolving in opposite directions as a subsolution to (3). Of course, in light of the previous section, for this to be a subsolution one needs a well-chosen correction in time and in space (in the degeneracy regime of f). Let us define the symmetrised fronts as above. In the sequel we will always use the following notations: φ denotes the front and φ̃ its symmetrised counterpart, and the same will hold for ψ, ψ̃, Γ, Γ̃. Here ξ_0 will be a large initial shift and ξ(t) a time-increasing shift with ξ(0) = 0 and cξ(+∞) ≤ 1, which will be realised as a smallness condition on ε_0. In this section we set α as indicated, where λ and λ̄ are already defined in (8), so that α satisfies the same inequalities as above and moreover α < λ, λ̄. Γ is defined as above, only with a little more margin.

The proof will consist in adapting the previous computations. We shall see that (u, v) yields a subsolution provided only a size condition on the initial shift ξ_0 (independently of D > d). This condition is important, because then, for the initial data to lie above (u(0), v(0)), they have to be large enough on a large enough interval. Moreover, we wish to insist on the fact that to retrieve the original model (2) one has to change the variable x ← x/√D. As a consequence, when stated for (2), our result assumes that u_0, v_0 are large enough on an interval with length of order √D. Theorem 2.3 will be proved as soon as we have proved the following.

Theorem 4.1. 1. There exist ε_0 > 0 small enough and two constants B, ξ_0 > 0 large enough such that for all 0 < ε < ε_0, there exists a small δ > 0 such that the pair (u, v), where q_u = εCe^{−ωt} and q_v = εh(y)e^{−ωt} are defined as above and this time ξ(t) = Bε(1 − e^{−ωt})/ω, defines a subsolution to (3) with initial data u_0, v_0 for all times. By the comparison principle, the solution then stays above this subsolution at all times.
As a consequence, (3) propagates the initial data u_0, v_0 along the x-axis with speed at least c + o_{t→+∞}(1) in both directions.

2. Using the notations of Section 2, we have the following: let ũ, ṽ denote the same functions as in (11) with φ, ψ and Γ replaced by φ̃, ψ̃, Γ̃. As a consequence, (ũ, ṽ) will be a supersolution for decreasing front-like initial data. Up to enlarging the initial shifts, we assert that (min(ū, ũ), min(v̄, ṽ)) is a supersolution to (3) with initial data u_0, v_0 for all times. Again, this implies that u ≤ min(ū, ũ) and v ≤ min(v̄, ṽ), and so the level lines of u, v propagate at most at speed c + o(1) in both directions along the x-axis.

Remark 4.1. (i) As noticed above, observe that one needs to replace M ← M√D when Theorem 4.1 is stated for the original system (2). (ii) The size condition on u_0, v_0 is far from optimal and ensures only that u_0 ≥ u(0), v_0 ≥ v(0). It could be sharpened by replacing 1 − δ with θ and by waiting long enough for the reaction to put u, v above 1 − δ.

Proof. The second part of Theorem 4.1 is easy, because the minimum of two supersolutions is a supersolution and any front-like initial datum can be translated above any compactly supported initial datum. The first part is more intricate. Observe that u(0), v(0) are zero except on a set (−M′, M′) (with M′ proportional to ξ_0), and that on (−M′, M′) they are less than some 1 − δ: this directly gives the largeness condition required so that u_0 ≥ u(0), v_0 ≥ v(0). We now detail the computation of N(u, v) in the following subsections, splitting the computations into three zones according to x + ct + ξ_0.

x
In this zone, one necessarily has x + ct + ξ_0 − ξ(t) < −L_0 and also x − ct − ξ_0 + ξ(t) < −L_0 (by asking 2ξ_0 ≥ 1). As a consequence, in this zone we have µφ, ψ, v ≤ θ/2 and µφ̃, ψ̃ ≥ (1 + θ_1)/2. Also, min(Γ, Γ̃) ≡ Γ ≡ e^{α(x+ct+ξ_0−ξ(t)+L_0)} will be denoted e^{α(···)} from now on. Both terms of the resulting expression are already negative thanks to the conditions stated in Section 2. Then a computation similar to that of the preceding section (thus not detailed here) leads to a quantity that can be made negative provided ω ≤ 2αc (which is already the case): indeed, using the exponential decay of f′(ψ̃) in this zone, the above expression can be factorised, requiring only that ξ_0 be large enough (but depending on the initial data only through α_0).

x
First, we ensure cξ ≤ 1 by asking that cBε/ω ≤ 1, that is, by taking ε_0 ≤ ω/(cB). As a consequence, and since ω < 2αc, the computations of Section 3.2.3 still hold upon enlarging the constant B.
Observe that the computations of Section 2 still hold by splitting this zone into two subzones, x < 0 and x > 0. In the first one, one bounds the relevant quantity, with (•) being positive provided ξ_0 is large enough. This proves Theorem 4.1.

Initial data with O(1) compact support
We now go back to the original equation (2) and state the following.

Theorem 5.1. Let L be large enough (independently of D).
There exist M, δ > 0, independent of D > d, such that if the initial data of (2) satisfy the largeness condition below, then after a finite time t_D (of order D^{1/2}h(D), cf. Theorem 2.4) the functions µu and v satisfy the assumptions of Theorem 2.3. We will divide the proof into several steps.

Step 1. Since D > d is assumed to be large, u should be very small for small times. Thus we first investigate the equation for v in (2) where u is replaced by 0; we not only expect to use its solution as a subsolution, but we really expect that it will reflect the dynamics of the full solution for some time.

Step 2. Let p(y) be the largest y-dependent steady solution to (22). The travelling wave for (22) connecting 0 and p(y) will serve to build a subsolution for (22) propagating just as in Theorem 4.1, but here at speed c_p = O(1). This will give a lower bound on the boundary data: v(x, −L) ≥ v(x, −L) ≥ ···. This will be the purpose of Lemma 5.2.
Step 3. Using this lower bound, we then go back to (3): we show that, even without the reaction term, this lower bound suffices to have µu, v ≥ 1 − δ on (−M, M) within a finite time t_D. As a consequence, this is also the case for the nonlinear problem. This will be proved in a final step. Observe that here we use f ≥ 0; if we were looking, for instance, at a bistable nonlinearity, this would still be true, but we would need to add a positive zero-order term in these computations. We recall the following elementary fact, which we will freely use in the sequel.
Let us now prove the following.

Lemma 5.2. Let v be a solution of (22). There exist δ′, M′ > 0, independent of D, such that if v_0 > 1 − δ′ for x ∈ (−M′, M′), then the displayed lower bound holds, where C, b > 0 do not depend on D and (ϕ_t) is bounded in C³, with ϕ_t(x) = 1 for |x| ≤ c_p t/2 and ϕ_t(x) = 0 for |x| ≥ c_p t, for some speed c_p > 0 independent of D.

Proof. First, the existence of a travelling wave solution of (22) connecting 0 and p(y), with speed c_p > 0 independent of D, has to be established: for this we refer to Berestycki-Nirenberg [4], which gives the existence of an increasing (in x) travelling front ψ(x, y) with exponential convergence towards 0 and p(y) as x → ±∞. Now we notice that the subsolution argument of Theorem 4.1 can be used, but in a simpler fashion, for the homogeneous Robin boundary value problem (22): on the one hand, the structure of the problem is simpler than the one studied in Theorem 4.1, since here we deal with a single equation, so the original construction of [15] with q_v = εe^{−ωt} will suffice. On the other hand, 1 is not a steady state of the problem, so one has to replace 1 by p(y) in the computations. Nonetheless, one can check that the above computations still hold with the adequate subsolution ψ + ψ̃ − p − q_v min(Γ, Γ̃). As a consequence, just as in Theorem 4.1, provided v_0 lies above an initial shift of a pair of waves (hence the existence of δ′ and M′), its level lines will be pushed from below by the pair of waves travelling as ±c_p t ∓ O(1). This implies the desired bound.
End of the proof of Theorem 5.1. Let (u_D, v_D) be the solution of (3) starting from compactly supported 0 ≤ µu_0, v_0 ≤ 1, and let v_0 satisfy the rescaled assumptions of Theorem 5.1. First, let h(D) be any positive function such that h(D) grows to infinity as D → ∞, and set T_D = D^{1/2}h(D). We now show the lower bound (24), where Ω_{L,M} = (−M, M) × (−L, 0). First, it is an easy but tedious exercise to see that the left-hand side of (24) can be characterised as the limit, as n → +∞, of the relevant quantity along a sequence (t_n, D_n, x_n, y_n). We then extract from (t_n, D_n, x_n, y_n) a subsequence so that x_n → x_∞ and y_n → y_∞. Our objective is to extract from (u, v) a subsequence converging to some limit (u_∞, v_∞) to which the maximum principle will apply and force the above limit to be ≥ 1 − δ′. The difficulty comes from the fact that (D_n) might be unbounded, so that standard parabolic estimates and the usual maximum principle might fail in the limit. Two cases can appear.

Since f ≥ 0 and by Lemma 5.2 above, the comparison principle gives the displayed lower bound. Since d/D_n → 0, the standard parabolic estimates applied to v_n will fail concerning the x-derivatives. We overcome this difficulty since equation (25) is linear and the boundary data v_n(t, x, −L) are bounded in C³: the maximum principle applied to the x-derivatives of (u_n, v_n) up to order 3 gives that they are all bounded independently of n. Now, concerning the y-derivatives, even though d/D_n → 0, the standard estimates hold: indeed, since v_n ≤ 1, standard L^p parabolic estimates with p large enough applied to u_n give that u_n is bounded in C^{α,1+α} by some C_2. Now rescale by x ← x√D_n, so that the corresponding estimate holds. The bound on ∂²_{xy}v_n also follows by combining the two arguments above, and finally, by plugging the estimate on v into the equation for u, standard Schauder estimates yield that u_n is bounded in C^{1+α/2,2+α}. In the end, one can extract from (u_n, v_n) a subsequence converging in a suitable space. In both cases, the lim inf above is at least 1 − δ′, and (24) holds. Theorem 5.1 follows easily:
indeed, there exists t 1 independent of D such that after

Lower bound on the waiting time
In this subsection, (u, v) denotes the solution of (3). Let ε := D^{−1/2} and let v_0 solve the corresponding problem, sharing the same boundary data as v, with initial value v_0(0) = v_0. Observe that v_0 is the rescaling of the subsolution v already introduced in equation (22). The aim of this subsection is to give an estimate of the time during which v is close to v_0. More precisely, we will show the following.

Theorem 6.1. Let α ∈ (0, 1) and define T_{ε,α} accordingly. Then, for all 0 < δ < min(α, 2/7, (2/5)(1 − α)), the displayed estimate holds. The limiting case is δ = α = 2/7. This theorem implies Theorem 2.5.
Let us define w = v − v_0. Observe that (u, w) solves the system (28), where the coupling term is given by Taylor's formula. The idea of the proof is to decouple equation (28) by decomposing w into two parts. Let us set w = w_1 + w̄, where w_1 and w̄ solve the displayed problems. Observe that T_{ε,α} exists by continuity. We now work by contradiction to show that, if T_{ε,α} = (1/ε)^δ, then |w| stays of order less than ε^{α′} with α′ > α. During the rest of the proof, this will be abbreviated as "≲ ε^α". The scheme is as follows. First, we derive an L¹ estimate on w_1 by Duhamel's formula. This, inserted into estimate (50), yields the desired estimate on u, and then on w̄. We then go back to w_1 to obtain the desired estimate, by a more intricate supersolution argument.
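Schematically, and with the caveat that the exact displays (28)-(30) are in the original, the decomposition can be sketched as follows; g denotes the Taylor coefficient mentioned above, and the attribution of the source term to w_1 and of the exchange with u to w̄ matches the way they are estimated in the next subsections:

```latex
\begin{aligned}
&\partial_t w - d\varepsilon^2\partial_{xx} w - d\,\partial_{yy} w
 = f(v_0+w)-f(v_0) = g\,w,
\qquad g := \int_0^1 f'(v_0+\tau w)\,d\tau,\\
&w = w_1 + \bar w,\qquad
\begin{cases}
w_1 \text{ solves the problem with source } g\,w \text{ and homogeneous boundary data},\\
\bar w \text{ solves the homogeneous equation and carries the exchange with } \mu u \text{ on } \{y=0\}.
\end{cases}
\end{aligned}
```

Splitting the reaction (carried by w_1) from the boundary exchange (carried by w̄) is what allows the two terms to be estimated by separate, linear arguments.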

L¹ bound on w_1
By definition, up to time T_{ε,α} one has |w| ≤ ε^α. By Duhamel's formula and the maximum principle for equation (29), this yields the bound (31), where ∂^{NR}_{yy} denotes ∂_{yy} endowed with the Neumann-Robin boundary condition of (29). Since • ≤ v_0 + ε^α, using the above results and rescaling them, one knows that v_0 ≤ θ for x ≤ (a + c_p t)ε, for some constant a > 0, and the source term is controlled accordingly. Also, by the maximum principle, there exist C(d) > 0 and λ_1(d) > 0 for which the corresponding semigroup estimate holds. Using both estimates in (31), the maximum principle yields the conclusion; since e^{dε²(t−s)∂_xx} preserves the L¹ norm, this gives the desired estimate, for some constants C_1, C_2 that do not depend on ε, since δ < 1.

Estimate on u and w
Using the appendix estimate (50) and Duhamel's formula, one gets an expression for u. First observe that, due to the above results and the rescaling, one has the required bound. Using this and estimate (33), we deal with the second term, and then with the first term. As a consequence, |u| ≲ ε^α. Now, seeing equation (30) as a boundary value problem for w̄, we see that the above estimate on µu provides an easy supersolution that stays above w̄.

Back to w_1
Using w = w_1 + w̄, we rewrite equation (29) as a linear non-homogeneous problem. Since w_1(0) = 0, Duhamel's formula gives a representation in which w_h^s solves the homogeneous problem with initial condition w_h^s(0) = (f′(•)w)(s). Inspired by the linearisation, and by a rescaling of the supersolution to the non-linear equation of the previous section (whose notations we keep), we look for a supersolution of the displayed form, for some ξ(t) increasing in time and an initial shift x_0^s. First of all, we need to ensure the ordering of the initial data, that is, a bound with some ν < 1 − (3/2)δ − α (since w̄ ≤ Kε^{1−(3/2)δ} thanks to the computations above). We achieve this by asking the displayed condition. We will also see below that we need δ < ν. Combining these two conditions imposes δ < 1 − (3/2)δ − α, i.e. the assumption δ < (2/5)(1 − α) of Theorem 6.1.

Now, straightforward computations give the following expression, whose sign we must control.
As in Section 3.2, we analyse the sign of this quantity in three separate zones. Observe that, due to the rescaling between (2) and (3), the decay exponents Θ, Θ_0 (resp. the α and α_0 from Section 3.2) and λ, λ' scale here as 1/ε, as do the lower bounds δ_L on the derivatives. Remember also that we are looking only at times t ≤ ε^{−δ}, so we only need to find a supersolution up to this time. We also reinitialise the constants C_i and K_i, which will be positive constants independent of ε.
As before, in this zone the profile decays exponentially. As a consequence, and since Θ < λ, one obtains a lower bound in which the first term inside the brackets is positive provided ω < Θ², which is not a constraint since Θ grows as 1/ε. We can then make the whole bracket positive provided (40) holds (observe that the right-hand side in (40) is bounded from above and from below by positive constants that do not depend on ε), by taking x^s_0 as in (41). This will be a constraint on our future choice of ξ(t), to be kept in mind.

|x + c_p εt + x^s_0| ≤ L_0
We make this positive by counterbalancing the negative terms thanks to the term δ_{2L_0} ξ'(t), by asking (42). Moreover, by using Θ < λ, ω < 2Θc_p ε and the previous expression of e^{−2Θx^s_0}, one obtains a further bound, so that we also ask for ξ' ≥ K_2 ε^{α+ν+1} e^{−ωt}, which is implied by (42) by taking K_1 large enough. The last term to counterbalance is estimated by the triangle inequality and the definitions. But we also know, by Section 5, that it decays in time, for some C_5, ω_0 > 0 independent of ε. Now, just as above, one can use the exponential decay of 1 − ψ in the current zone to prove that there exists C_6 > 0 such that |1 − ψ| ≤ C_6 e^{−ωt}. In the end, we can reduce ω and change the constants so that this last term is controlled, and we counterbalance it by asking (43) (remember the additional power of ε due to the scaling). We now have to find a suitable increasing function t ↦ ξ(t) satisfying (42) and (43), which should not increase too much, so that w_1(ε^{−δ}) ≲ ε^{α'}. Since the order between the right-hand sides in (43) changes at some point in time, we define ξ in two parts, as a continuous but only piecewise C¹ function. This is not a problem, since one can apply the maximum principle a second time starting from the junction. We propose a two-piece definition, in which a constant, uniformly bounded from above and from below in ε, is chosen so that ξ is continuous. Observe that (43) is automatically satisfied, since we just integrated the stronger of the two differential inequalities on the associated time intervals. Observe also that (42) is indeed satisfied provided K_2, K_3 > K_1 and ε is small. Now, with this choice of ξ (since ξ ≲ ε^{α+ν} up to time ε^{−δ}), the remaining condition (41) on x^s_0 is void: since x^s_0 > 0, it reduces to the initial one (39). Finally, the only condition on ω is (40).

x + c_p εt + x^s_0 ≥ L_0
As before, we deal with this last zone by symmetry, repeating the arguments above: no stronger condition appears, and the computations above hold (possibly after changing the constants).

End of the proof of Theorem 6.1
We now estimate w_1 thanks to the supersolution. Coming back to (37) with t = ε^{−δ}, one gets a bound by two terms. The second term is bounded by K_7 ε^{α+ν}/ω ≲ ε^{α'}. For the first term, we divide the integral into two parts. Call t_j = (1/ω) ln((K_3/K_2) ε^{−α}) the junction time.
by using a crude upper bound for the first part in the definition of ξ(t). Now, since δ < α + 1, this contribution is also ≲ ε^{α'}. In the end, both w̄ and w_1 are ≲ ε^{α'} up to time t = ε^{−δ}, hence so is w, and we have reached a contradiction.

Initial data supported on the road only
In this section we investigate the behaviour of solutions starting from (u_0, v_0) = (1_{(−a,a)}, 0). We still denote ε := 1/√D.
The proof relies on a suitable reformulation of equation (2) and a crude bound on f. Observe that, if we replace v by its even extension on R × [−L, L], we obtain an equation with a measure-valued exchange term, where dλ_{y=0} denotes the Lebesgue measure on the line {y = 0}.
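In distributional form, the reformulation should read as follows (a sketch; the precise coefficient in front of the measure term is the one dictated by system (2)):

```latex
% Sketch of the even-extension reformulation of (2); the weight of the
% exchange term on {y=0} is indicative.
\partial_t v - d\,\Delta v \;=\; f(v) \;+\; (\mu u - v)\, d\lambda_{y=0}
\qquad \text{in } \mathcal{D}'\big(\mathbb{R}\times(-L,L)\big).
```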
where C is a constant that depends only on d and L.
Proof. This is basically an Aronson-type inequality (see [2]); we give a quick computation here. By Duhamel's formula, v is represented through the heat kernel, which of course depends only on y and is even in y (the φ_k being even or odd). Observe that this is nothing more than the fundamental solution of the diffusion equation in y on (−L, L). Because the φ_k are uniformly bounded by a constant C depending only on d and L, one gets the bound for another constant C'. The last inequality comes from the growth of λ_k as Ck². Going back to (45), and using f(v) ≤ (Lip f) v as well as µu − v ≤ 1 and the positivity of the integral, one gets an estimate which implies the lemma.
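The short-time bound behind this Aronson-type computation can be sketched as follows, with λ_k = d(kπ/(2L))² and φ_k the Neumann eigenvalues and eigenfunctions in y recalled at the end of this section (the constants c, C, C' are indicative):

```latex
% Neumann heat kernel in y on (-L,L) and its short-time bound (sketch).
\begin{aligned}
 K_2(t,y,y') &= \sum_{k\ge 0} e^{-\lambda_k t}\,\varphi_k(y)\,\varphi_k(y'),
 \qquad \lambda_k = d\Big(\frac{k\pi}{2L}\Big)^{2},\\[2pt]
 |K_2(t,y,y')| &\le C \sum_{k\ge 0} e^{-c\,k^2 t} \;\le\; \frac{C'}{\sqrt{t}}
 \qquad (0 < t \le 1),
\end{aligned}
```

the last inequality coming from a comparison of the sum with a Gaussian integral.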
Lemma 7.2. We have the following bound. Proof. We insert the previous estimate on v(t, x, 0) into the equation satisfied by u, and solve it using Duhamel's formula. By the maximum principle, this gives an upper bound on u, which yields the desired result.
Proof of Theorem 7.1.
if a < a_0 for some a_0. Then the maximum principle yields that, from this time on, µu and v will always stay below the constant solution 2θ/3 of (2). And so µu and v will tend to 0.

Best case scenario: a = +∞
In this subsection we take µu_0 ≡ 1. Since both the initial data and equation (3) enjoy a translation invariance in the x direction, u and v do not depend on x. We prove the following: provided µ is large enough, so that e^{−µt_µ} ≤ θ/2 and v ≤ θ at time t_µ, one has µu, v ≤ θ. By the comparison principle, this holds for all t > t_µ, and v never gets above θ anywhere. As a consequence, µu(t) and v(t, y) converge to a common limit l ≤ θ satisfying (L + 1/µ) l = 1/µ (conservation of mass).
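The conservation of mass can be made explicit for x-independent solutions: writing the exchange condition (up to the sign conventions of (3)) as d∂_y v(t, 0) = µu − v(t, 0), with a Neumann condition at y = −L, and using that f(v) ≡ 0 as long as v ≤ θ, one computes:

```latex
% Mass balance for x-independent solutions; f(v)=0 since v <= theta.
\frac{d}{dt}\Big( u(t) + \int_{-L}^{0} v(t,y)\,dy \Big)
 = \big(v(t,0) - \mu u(t)\big) + \big(\mu u(t) - v(t,0)\big) = 0,
```

so that u(t) + ∫_{−L}^{0} v(t, y) dy ≡ u_0 = 1/µ; at the common limit µu = v = l, this reads l/µ + Ll = 1/µ, i.e. (L + 1/µ) l = 1/µ.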

Proof of point b).
The idea of the proof is simple: we investigate whether diffusion alone is able to transfer enough mass from u to v so that, in finite time, v lies above θ on a large enough interval (−L_0, 0). The quantity L_0 is linked to the results of Kanel' and of Aronson-Weinberger [17, 1]. Using v ≥ 0 and the strong parabolic maximum principle, one gets µu ≥ e^{−µt}. Thus, setting θ' = (1 + θ)/2 and t_M = (1/µ) ln(1/θ'), one has µu ≥ θ' while t ≤ t_M, so that, by the maximum principle, Hopf's lemma and the positivity of f, up to time t_M we have v ≥ v̲, the solution of the corresponding auxiliary problem starting from v̲(0) = 0. Observe that v̲ is independent of x, so we will write it v̲(t, y) from now on. The function w = θ' − v̲ is easily decomposed in the associated eigenbasis. Choose L_0 large enough in the beginning, so that an initial condition µu(t_M), v(t_M, y) ≥ (1 + 3θ)/4 for all y ∈ (−L_0, 0) leads to invasion: µu, v → 1 as t → ∞. The existence of such an L_0 follows from Kanel' and Aronson-Weinberger [17, 1] on R. In our context it is in fact simpler, since the total mass is confined to (−L, 0) and a single point, whereas in [17, 1] it can be spread over all of R.

Large a < +∞
We use the best-case scenario described above to prove the existence of a large but finite a that leads to invasion. Our proof relies on the fact that 1_{(−a,a)} and 1_{(−∞,∞)} are close in L^∞ weighted by some ρ(x) with tails e^{−|x|}, and that such a weight preserves the semilinear parabolic and monotone structure of the system (3). In particular, the "weighted equation" has a locally (in time) Lipschitz continuous flow. Going back to the original solutions, this Lipschitz continuity becomes a uniform continuity on every compact subset, for every (u, v) and (ũ, ṽ) solutions of (3) starting respectively from (u_0, v_0) and (ũ_0, ṽ_0).
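Schematically, the weighted continuity estimate invoked here should take the following form, with ρ(x) a weight with tails e^{−|x|} and C(T) locally bounded in T (the norms are indicative; the precise ones are those of the paper):

```latex
% Schematic weighted-L^infty continuity of the flow of (3).
\sup_{0\le t\le T}\Big( \|\rho\,(u-\tilde u)(t)\|_{L^\infty(\mathbb{R})}
 + \|\rho\,(v-\tilde v)(t)\|_{L^\infty(\Omega_L)} \Big)
 \le C(T)\Big( \|\rho\,(u_0-\tilde u_0)\|_{L^\infty}
 + \|\rho\,(v_0-\tilde v_0)\|_{L^\infty} \Big).
```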

Figure 5: Trapping of the front-like data

Figure 6: Trapping of the compactly supported data
As a consequence, starting from time t = t_D, propagation occurs as described in Theorem 4.1.
(the semi-norms of the derivatives even go to zero, since 1/D_n → 0). Moreover, under this rescaling, −(d/D_n)∂²_{xx} − d∂²_{yy} becomes −d∆, so that standard parabolic estimates up to the Robin boundary apply and give ‖v_n‖_{C^{1+α/2,2+α}} ≤ C_4. Since this rescaling does not impact ∂_y or ∂_t, this gives the conclusion, by Lemma 5.2 above and by use of T_{D_n}. Since (u_∞, v_∞) is global in time, there is no initial data any more, and the maximum principle applies to give µu_∞, v_∞ ≡ 1 − δ'. Indeed, no value different from 1 − δ' can be reached: otherwise, (u, v) would have an infimum smaller, or a supremum larger, than 1 − δ'. By translating in time (which is possible since the solution is global), this infimum or supremum would become a minimum or a maximum, which cannot be attained by u, because of the strong parabolic maximum principle, nor by v, by the strong parabolic maximum principle and Hopf's lemma applied on the suitable y-slice.

Case 2: (D_n) is bounded. Then one extracts a subsequence so that D_n → D_∞ > d, and the above proof is much simpler, since standard regularity results and the maximum principle apply. Moreover, T_D is not necessary.
The relevant operator here is dε²∂²_{xx} + d∂²_{yy}, endowed with Neumann boundary conditions on y = ±L. Since R × (−L, L) is a product domain, and since ∂²_{xx} and ∂²_{yy} commute, we can compute this heat kernel as follows. Denote by λ_k = d(kπ/(2L))² the eigenvalues of −d∂²_{yy} on (−L, L) with Neumann conditions, and by φ_k the associated eigenfunctions. Then the heat kernel in the y variable is K_2(t, y, y') = Σ_{k≥0} e^{−λ_k t} φ_k(y) φ_k(y').

Proof of point a). Using Lemma 7.1, one gets, for t ≤ 1: v(t, x, y) ≤ C'√t (for some constant C' different from the C in the aforementioned lemma). Using this in the equation for u gives the conclusion.

Theorem 7.2. There exist µ_± > 0 such that:
a) if µ > µ_+, then µu and v converge uniformly to 1/(µ(L + 1/µ)) as t → +∞;
b) if µ < µ_−, then µu and v converge uniformly to 1 as t → +∞.