Prime Number Theorem And ...

3 Second approach

3.1 Residue calculus on rectangles

This file gathers definitions and basic properties of rectangles.

The border of a rectangle is the union of its four sides.

Definition 3.1.1 RectangleBorder
#

A Rectangle’s border, given corners \(z\) and \(w\), is the union of the four sides.

Definition 3.1.2 RectangleIntegral
#

A RectangleIntegral of a function \(f\) is one over a rectangle determined by \(z\) and \(w\) in \(\mathbb {C}\). We will sometimes denote it by \(\int _{z}^{w} f\). (There is also a primed version, which is \(1/(2\pi i)\) times the original.)

Definition 3.1.3 UpperUIntegral
#

An UpperUIntegral of a function \(f\) comes from \(\sigma +i\infty \) down to \(\sigma +iT\), over to \(\sigma '+iT\), and back up to \(\sigma '+i\infty \).

Definition 3.1.4 LowerUIntegral
#

A LowerUIntegral of a function \(f\) comes from \(\sigma -i\infty \) up to \(\sigma -iT\), over to \(\sigma '-iT\), and back down to \(\sigma '-i\infty \).

It is very convenient to define integrals along vertical lines in the complex plane, as follows.

Definition 3.1.5 VerticalIntegral
#

Let \(f\) be a function from \(\mathbb {C}\) to \(\mathbb {C}\), and let \(\sigma \) be a real number. Then we define

\[ \int _{(\sigma )}f(s)ds = \int _{\sigma -i\infty }^{\sigma +i\infty }f(s)ds. \]

We also have a version with a factor of \(1/(2\pi i)\).

Lemma 3.1.1 DiffVertRect-eq-UpperLowerUs

The difference of two vertical integrals and a rectangle integral is the difference of an upper and a lower U integral.

Proof

Follows directly from the definitions.

Theorem 3.1.1 existsDifferentiableOn-of-bddAbove
#

If \(f\) is differentiable on a set \(s\) except at \(c\in s\), and \(f\) is bounded above on \(s\setminus \{ c\} \), then there exists a differentiable function \(g\) on \(s\) such that \(f\) and \(g\) agree on \(s\setminus \{ c\} \).

Proof

This is the Riemann Removable Singularity Theorem, slightly rephrased from what’s in Mathlib. (We don’t care what the function \(g\) is, just that it’s holomorphic.)

Theorem 3.1.2 HolomorphicOn.vanishesOnRectangle
#

If \(f\) is holomorphic on a rectangle with corners \(z\) and \(w\), then the integral of \(f\) over the rectangle with corners \(z\) and \(w\) is \(0\).

Proof

This is in a Mathlib PR.

The next lemma allows us to zoom a big rectangle down to a small square centered at a pole.

Lemma 3.1.2 RectanglePullToNhdOfPole

If \(f\) is holomorphic on a rectangle with corners \(z\) and \(w\) except at a point \(p\), then the integral of \(f\) over the rectangle with corners \(z\) and \(w\) is the same as the integral of \(f\) over a small square centered at \(p\).

Proof

Chop the big rectangle with two vertical cuts and two horizontal cuts into smaller rectangles, the middle one being the desired square. The integral over each of the outer rectangles vanishes, since \(f\) is holomorphic there. (The constant \(c\) being “small enough” here just means that the inner square is strictly contained in the big rectangle.)

Lemma 3.1.3 ResidueTheoremAtOrigin
#

The rectangle (square) integral of \(f(s) = 1/s\) with corners \(-1-i\) and \(1+i\) is equal to \(2\pi i\).

Proof

This is a special case of the more general result above.

Lemma 3.1.4 ResidueTheoremOnRectangleWithSimplePole

Suppose that \(f\) is a holomorphic function on a rectangle, except for a simple pole at \(p\). By the latter, we mean that there is a function \(g\) holomorphic on the rectangle such that, \(f = g + A/(s-p)\) for some \(A\in \mathbb {C}\). Then the integral of \(f\) over the rectangle is \(A\).

Proof

Replace \(f\) with \(g + A/(s-p)\) in the integral. The integral of \(g\) vanishes by Lemma 3.1.2. To evaluate the integral of \(1/(s-p)\), pull everything to a square about the origin using Lemma 3.1.2, and rescale by \(c\); what remains is handled by Lemma 3.1.3.

3.2 Perron Formula

In this section, we prove the Perron formula, which plays a key role in our proof of Mellin inversion.

The following is preparatory material used in the proof of the Perron formula, see Lemma 3.2.16.

Lemma 3.2.1 zeroTendstoDiff
#

If the limit of \(0\) is \(L_1 - L_2\), then \(L_1 = L_2\).

Proof

Obvious.

Lemma 3.2.2 RectangleIntegral-tendsTo-VerticalIntegral

Let \(\sigma ,\sigma ' \in \mathbb {R}\), and \(f : \mathbb {C} \to \mathbb {C}\) such that the vertical integrals \(\int _{(\sigma )}f(s)ds\) and \(\int _{(\sigma ')}f(s)ds\) exist and the horizontal integral \(\int _{\sigma }^{\sigma '}f(x + yi)dx\) vanishes as \(y \to \pm \infty \). Then the limit of rectangle integrals

\[ \lim _{T\to \infty }\int _{\sigma -iT}^{\sigma '+iT}f(s)ds = \int _{(\sigma ')}f(s)ds - \int _{(\sigma )}f(s)ds. \]
Proof

Almost by definition.

Lemma 3.2.3 RectangleIntegral-tendsTo-UpperU

Let \(\sigma ,\sigma ' \in \mathbb {R}\), and \(f : \mathbb {C} \to \mathbb {C}\) such that the vertical integrals \(\int _{(\sigma )}f(s)ds\) and \(\int _{(\sigma ')}f(s)ds\) exist and the horizontal integral \(\int _{\sigma }^{\sigma '}f(x + yi)dx\) vanishes as \(y \to \pm \infty \). Then the limit of rectangle integrals

\[ \int _{\sigma +iT}^{\sigma '+iU}f(s)ds \]

as \(U\to \infty \) is the “UpperUIntegral” of \(f\).

Proof

Almost by definition.

Lemma 3.2.4 RectangleIntegral-tendsTo-LowerU

Let \(\sigma ,\sigma ' \in \mathbb {R}\), and \(f : \mathbb {C} \to \mathbb {C}\) such that the vertical integrals \(\int _{(\sigma )}f(s)ds\) and \(\int _{(\sigma ')}f(s)ds\) exist and the horizontal integral \(\int _{\sigma }^{\sigma '}f(x + yi)dx\) vanishes as \(y \to -\infty \). Then the limit of rectangle integrals

\[ \int _{\sigma -iU}^{\sigma '-iT}f(s)ds \]

as \(U\to \infty \) is the “LowerUIntegral” of \(f\).

Proof

Almost by definition.

TODO : Move to general section

Lemma 3.2.5 limitOfConstant
#

Let \(a:\mathbb {R}\to \mathbb {C}\) be a function, and let \(\sigma {\gt}0\) be a real number. Suppose that, for all \(\sigma , \sigma '{\gt}0\), we have \(a(\sigma ')=a(\sigma )\), and that \(\lim _{\sigma \to \infty }a(\sigma )=0\). Then \(a(\sigma )=0\).

Proof
\begin{align*} \lim _{\sigma '\to \infty }a(\sigma ) & = \lim _{\sigma '\to \infty }a(\sigma ') \\ & = 0 \end{align*}
Lemma 3.2.6 limitOfConstantLeft
#

Let \(a:\mathbb {R}\to \mathbb {C}\) be a function, and let \(\sigma {\lt}-3/2\) be a real number. Suppose that, for all \(\sigma , \sigma '{\lt}-3/2\), we have \(a(\sigma ')=a(\sigma )\), and that \(\lim _{\sigma \to -\infty }a(\sigma )=0\). Then \(a(\sigma )=0\).

Proof
\begin{align*} \lim _{\sigma '\to -\infty }a(\sigma ) & = \lim _{\sigma '\to -\infty }a(\sigma ') \\ & = 0 \end{align*}
Lemma 3.2.7 tendsto-rpow-atTop-nhds-zero-of-norm-lt-one

Let \(x{\gt}0\) and \(x{\lt}1\). Then

\[ \lim _{\sigma \to \infty }x^\sigma =0. \]
Proof

Standard.

Lemma 3.2.8 tendsto-rpow-atTop-nhds-zero-of-norm-gt-one

Let \(x{\gt}1\). Then

\[ \lim _{\sigma \to -\infty }x^\sigma =0. \]
Proof

Standard.

Lemma 3.2.9 isHolomorphicOn
#

Let \(x{\gt}0\). Then the function \(f(s) = x^s/(s(s+1))\) is holomorphic on the half-plane \(\{ s\in \mathbb {C}:\Re (s){\gt}0\} \).

Proof

Composition of differentiabilities.

Lemma 3.2.10 integralPosAux
#

The integral

\[ \int _\mathbb {R}\frac{1}{|(1+t^2)(2+t^2)|^{1/2}}dt \]

is positive (and hence convergent - since a divergent integral is zero in Lean, by definition).

Proof

This integral is between \(\frac{1}{2}\) and \(1\) times the integral of \(\frac{1}{1+t^2}\), which is \(\pi \).
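Concretely, one can use the pointwise sandwich (valid for all \(t\in \mathbb {R}\), since \(1+t^2 \le 2+t^2 \le 2(1+t^2)\))

\[ \frac{1}{2}\cdot \frac{1}{1+t^2} \le \frac{1}{|(1+t^2)(2+t^2)|^{1/2}} \le \frac{1}{1+t^2}, \]

so the integral lies between \(\pi /2\) and \(\pi \); in particular it is positive and finite.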

Lemma 3.2.11 vertIntBound
#

Let \(x{\gt}0\) and \(\sigma {\gt}1\). Then

\[ \left| \int _{(\sigma )}\frac{x^s}{s(s+1)}ds\right| \leq x^\sigma \int _\mathbb {R}\frac{1}{|(1+t ^2)(2+t ^2)|^{1/2}}dt. \]
Proof

Triangle inequality and pointwise estimate.
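Concretely, for \(s=\sigma +it\) with \(\sigma {\gt}1\) we have \(|x^s| = x^\sigma \), \(|s| = \sqrt{\sigma ^2+t^2}\ge \sqrt{1+t^2}\), and \(|s+1| = \sqrt{(\sigma +1)^2+t^2}\ge \sqrt{2+t^2}\), so the integrand satisfies

\[ \left|\frac{x^s}{s(s+1)}\right| \le \frac{x^\sigma }{|(1+t^2)(2+t^2)|^{1/2}}, \]

and integrating in \(t\) gives the claim.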

Lemma 3.2.12 vertIntBoundLeft
#

Let \(x{\gt}1\) and \(\sigma {\lt}-3/2\). Then

\[ \left| \int _{(\sigma )}\frac{x^s}{s(s+1)}ds\right| \leq x^\sigma \int _\mathbb {R}\frac{1}{|(1/4+t ^2)(2+t ^2)|^{1/2}}dt. \]
Proof

Triangle inequality and pointwise estimate.

Lemma 3.2.13 isIntegrable
#

Let \(x{\gt}0\) and \(\sigma \in \mathbb {R}\). Then

\[ \int _{\mathbb {R}}\frac{x^{\sigma +it}}{(\sigma +it)(1+\sigma + it)}dt \]

is integrable.

Proof

By Lemma 3.2.9, the integrand \(f(t) := \frac{x^{\sigma +it}}{(\sigma +it)(1+\sigma +it)}\) is continuous, so it is integrable on any bounded interval.

Also, \(|f(t)| = \Theta (t^{-2})\) as \(t\to \infty \),

and \(|f(-t)| = \Theta (t^{-2})\) as \(t\to \infty \).

Since \(g(t) = t^{-2}\) is integrable on \([a,\infty )\) for any \(a{\gt}0\), we conclude.

Lemma 3.2.14 tendsto-zero-Lower
#

Let \(x{\gt}0\) and \(\sigma ',\sigma ''\in \mathbb {R}\). Then

\[ \int _{\sigma '}^{\sigma ''}\frac{x^{\sigma +it}}{(\sigma +it)(1+\sigma + it)}d\sigma \]

goes to \(0\) as \(t\to -\infty \).

Proof

The numerator is bounded and the denominator tends to infinity.
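For instance, for \(|t|\ge 1\) the integrand is bounded by

\[ \left|\frac{x^{\sigma +it}}{(\sigma +it)(1+\sigma +it)}\right| \le \frac{\max (x^{\sigma '},x^{\sigma ''})}{t^2}, \]

so the integral is at most \(|\sigma ''-\sigma '|\max (x^{\sigma '},x^{\sigma ''})/t^2\), which tends to \(0\); the same estimate handles Lemma 3.2.15.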

Lemma 3.2.15 tendsto-zero-Upper
#

Let \(x{\gt}0\) and \(\sigma ',\sigma ''\in \mathbb {R}\). Then

\[ \int _{\sigma '}^{\sigma ''}\frac{x^{\sigma +it}}{(\sigma +it)(1+\sigma + it)}d\sigma \]

goes to \(0\) as \(t\to \infty \).

Proof

The numerator is bounded and the denominator tends to infinity.

We are ready for the first case of the Perron formula, namely when \(x{\lt}1\).

Lemma 3.2.16

For \(x{\gt}0\), \(\sigma {\gt}0\), and \(x{\lt}1\), we have

\[ \frac1{2\pi i} \int _{(\sigma )}\frac{x^s}{s(s+1)}ds =0. \]
Proof

Let \(f(s) = x^s/(s(s+1))\). Then \(f\) is holomorphic on the half-plane \(\{ s\in \mathbb {C}:\Re (s){\gt}0\} \). The rectangle integral of \(f\) with corners \(\sigma -iT\) and \(\sigma '+iT\) is zero. The limit of this rectangle integral as \(T\to \infty \) is \(\int _{(\sigma ')}-\int _{(\sigma )}\). Therefore, \(\int _{(\sigma ')}=\int _{(\sigma )}\).

But we also have the bound \(\left|\int _{(\sigma ')}\right| \leq x^{\sigma '} \cdot C\) from Lemma 3.2.11, where \(C=\int _\mathbb {R}\frac{1}{|(1+t ^2)(2+t ^2)|^{1/2}}dt\).

Therefore \(\int _{(\sigma ')}\to 0\) as \(\sigma '\to \infty \).

So pulling contours gives \(\int _{(\sigma )}=0\).

The second case is when \(x{\gt}1\). Here are some auxiliary lemmata for the second case. TODO: Move to more general section

Lemma 3.2.17 keyIdentity
#

Let \(x, \sigma \in \mathbb {R}\) and \(s \ne 0, -1\). Then

\[ \frac{x^\sigma }{s(1+s)} = \frac{x^\sigma }{s} - \frac{x^\sigma }{1+s} \]
Proof

By ring.
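Indeed,

\[ \frac{1}{s} - \frac{1}{1+s} = \frac{(1+s)-s}{s(1+s)} = \frac{1}{s(1+s)}, \]

and multiplying through by \(x^\sigma \) gives the identity.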

Lemma 3.2.18 diffBddAtZero
#

Let \(x{\gt}0\). Then for \(0 {\lt} c {\lt} 1 /2\), we have that the function

\[ s ↦ \frac{x^s}{s(s+1)} - \frac1s \]

is bounded above on the rectangle with corners at \(-c-ic\) and \(c+ic\) (except at \(s=0\)).

Proof

Applying Lemma 3.2.17, the function \(s \mapsto x^s/(s(s+1)) - 1/s\) equals \(x^s/s - x^0/s - x^s/(1+s)\). The last term is bounded for \(s\) away from \(-1\). The first two terms form the difference quotient of the function \(s \mapsto x^s\) at \(0\); since that function is differentiable, the difference quotient remains bounded as \(s\to 0\).

Lemma 3.2.19 diffBddAtNegOne
#

Let \(x{\gt}0\). Then for \(0 {\lt} c {\lt} 1 /2\), we have that the function

\[ s ↦ \frac{x^s}{s(s+1)} - \frac{-x^{-1}}{s+1} \]

is bounded above on the rectangle with corners at \(-1-c-ic\) and \(-1+c+ic\) (except at \(s=-1\)).

Proof

Applying Lemma 3.2.17, the function \(s \mapsto x^s/(s(s+1)) - \frac{-x^{-1}}{s+1}\) equals \(x^s/s - x^s/(s+1) + x^{-1}/(s+1)\). The first term is bounded for \(s\) away from \(0\). The last two terms combine to \(-\frac{x^s - x^{-1}}{s-(-1)}\), which is (minus) the difference quotient of the function \(s \mapsto x^s\) at \(-1\); since that function is differentiable, it remains bounded as \(s\to -1\).

Lemma 3.2.20 residueAtZero

Let \(x{\gt}0\). Then for all sufficiently small \(c{\gt}0\), we have that

\[ \frac1{2\pi i} \int _{-c-ic}^{c+ic}\frac{x^s}{s(s+1)}ds = 1. \]
Proof

For \(c{\gt}0\) sufficiently small,

\(x^s/(s(s+1))\) is equal to \(1/s\) plus a function, \(g\), say, holomorphic in the whole rectangle (by Lemma 3.2.18 and Theorem 3.1.1).

Now apply Lemma 3.1.4.

Lemma 3.2.21 residueAtNegOne

Let \(x{\gt}0\). Then for all sufficiently small \(c{\gt}0\), we have that

\[ \frac1{2\pi i} \int _{-c-ic-1}^{c+ic-1}\frac{x^s}{s(s+1)}ds = -\frac1x. \]
Proof

Mimic the proof of Lemma 3.2.20: by Lemma 3.2.19 (and Theorem 3.1.1), \(x^s/(s(s+1))\) equals \(\frac{-x^{-1}}{s+1}\) plus a function holomorphic on the whole square, and Lemma 3.1.4 evaluates the square integral as \(-1/x\).

For \(x{\gt}1\) (of course \(x{\gt}0\) would suffice) and \(\sigma {\gt}0\), we have

\[ \frac1{2\pi i} \int _{(\sigma )}\frac{x^s}{s(s+1)}ds =1 + \frac1{2\pi i} \int _{(-1/2)}\frac{x^s}{s(s+1)}ds. \]
Proof

The difference between the contours \((\sigma )\) and \((-1/2)\) is a rectangle integral, which we pull to a square with corners at \(-c-ic\) and \(c+ic\) for \(c{\gt}0\) sufficiently small (Lemma 3.1.2). By Lemma 3.2.20, the integral over this square is equal to \(1\).

For \(x{\gt}1\), we have

\[ \frac1{2\pi i} \int _{(-1/2)}\frac{x^s}{s(s+1)}ds = -1/x + \frac1{2\pi i} \int _{(-3/2)}\frac{x^s}{s(s+1)}ds. \]
Proof

Pull contour from \((-1/2)\) to \((-3/2)\).

For \(x{\gt}1\) and \(\sigma {\lt}-3/2\), we have

\[ \frac1{2\pi i} \int _{(-3/2)}\frac{x^s}{s(s+1)}ds = \frac1{2\pi i} \int _{(\sigma )}\frac{x^s}{s(s+1)}ds. \]
Proof

Pull contour from \((-3/2)\) to \((\sigma )\).

For \(x{\gt}1\) and \(\sigma {\gt}0\), we have

\[ \frac1{2\pi i} \int _{(\sigma )}\frac{x^s}{s(s+1)}ds =1-1/x. \]
Proof

Let \(f(s) = x^s/(s(s+1))\). Then \(f\) is holomorphic on \(\mathbb {C}\setminus \{ 0,-1\} \).

First pull the contour from \((\sigma )\) to \((-1/2)\), picking up a residue \(1\) at \(s=0\).

Next pull the contour from \((-1/2)\) to \((-3/2)\), picking up a residue \(-1/x\) at \(s=-1\).

Then pull the contour all the way to \((\sigma ')\) with \(\sigma '{\lt}-3/2\).

For \(\sigma ' {\lt} -3/2\), the integral is bounded by \(x^{\sigma '}\int _\mathbb {R}\frac{1}{|(1+t ^2)(2+t ^2)|^{1/2}}dt\).

Therefore \(\int _{(\sigma ')}\to 0\) as \(\sigma '\to -\infty \).

So pulling contours gives \(\int _{(-3/2)}=0\).

The two together give the Perron formula. (Which doesn’t need to be a separate lemma.)

For \(x{\gt}0\) and \(\sigma {\gt}0\), we have

\[ \frac1{2\pi i} \int _{(\sigma )}\frac{x^s}{s(s+1)}ds = \begin{cases} 1-\frac1x & \text{ if }x{\gt}1\\ 0 & \text{ if } x{\lt}1 \end{cases}. \]

3.3 Mellin transforms

Lemma 3.3.1 PartialIntegration
#

Let \(f, g\) be once differentiable functions from \(\mathbb {R}_{{\gt}0}\) to \(\mathbb {C}\) so that \(fg'\) and \(f'g\) are both integrable, and \(f\cdot g (x)\to 0\) as \(x\to 0^+\) and as \(x\to \infty \). Then

\[ \int _0^\infty f(x)g'(x) dx = -\int _0^\infty f'(x)g(x)dx. \]
Proof

Partial integration.
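In more detail (a sketch): on any compact interval \([a,b]\subset (0,\infty )\),

\[ \int _a^b f(x)g'(x)\, dx = f(b)g(b) - f(a)g(a) - \int _a^b f'(x)g(x)\, dx, \]

and letting \(a\to 0^+\) and \(b\to \infty \), the boundary terms vanish by the hypothesis \(f\cdot g\to 0\), while the integrability of \(fg'\) and \(f'g\) justifies passing to the limit in the two integrals.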

In this section, we define the Mellin transform (already in Mathlib, thanks to David Loeffler), prove its inversion formula, and derive a number of important properties of some special functions and bumpfunctions.

Def: (Already in Mathlib) Let \(f\) be a function from \(\mathbb {R}_{{\gt}0}\) to \(\mathbb {C}\). We define the Mellin transform of \(f\) to be the function \(\mathcal{M}(f)\) from \(\mathbb {C}\) to \(\mathbb {C}\) defined by

\[ \mathcal{M}(f)(s) = \int _0^\infty f(x)x^{s-1}dx. \]

[Note: My preferred way to think about this is that we are integrating over the multiplicative group \(\mathbb {R}_{{\gt}0}\), multiplying by a (not necessarily unitary!) character \(|\cdot |^s\), and integrating with respect to the invariant Haar measure \(dx/x\). This is very useful in the kinds of calculations carried out below. But may be more difficult to formalize as things now stand. So we might have clunkier calculations, which “magically” turn out just right - of course they’re explained by the aforementioned structure...]

Finally, we need Mellin Convolutions and properties thereof.

Definition 3.3.1 MellinConvolution
#

Let \(f\) and \(g\) be functions from \(\mathbb {R}_{{\gt}0}\) to \(\mathbb {C}\). Then we define the Mellin convolution of \(f\) and \(g\) to be the function \(f\ast g\) from \(\mathbb {R}_{{\gt}0}\) to \(\mathbb {C}\) defined by

\[ (f\ast g)(x) = \int _0^\infty f(y)g(x/y)\frac{dy}{y}. \]

Let us start with a simple property of the Mellin convolution.

Lemma 3.3.2 MellinConvolutionSymmetric
#

Let \(f\) and \(g\) be functions from \(\mathbb {R}_{{\gt}0}\) to \(\mathbb {R}\) or \(\mathbb {C}\). Then, for \(x\neq 0\),

\[ (f\ast g)(x)=(g\ast f)(x) . \]
Proof

By Definition 3.3.1,

\[ (f\ast g)(x) = \int _0^\infty f(y)g(x/y)\frac{dy}{y} \]

in which we change variables to \(z=x/y\):

\[ (f\ast g)(x) = \int _0^\infty f(x/z)g(z)\frac{dz}{z} =(g\ast f)(x) . \]

The Mellin transform of a convolution is the product of the Mellin transforms.

Theorem 3.3.1 MellinConvolutionTransform
#

Let \(f\) and \(g\) be functions from \(\mathbb {R}_{{\gt}0}\) to \(\mathbb {C}\) such that

\begin{equation} (x,y)\mapsto f(y)\frac{g(x/y)}{y}x^{s-1} \label{eq:assm_integrable_Mconv} \end{equation}

is absolutely integrable on \([0,\infty )^2\). Then

\[ \mathcal{M}(f\ast g)(s) = \mathcal{M}(f)(s)\mathcal{M}(g)(s). \]
Proof

By the definition of the Mellin transform and Definition 3.3.1,

\[ \mathcal M(f\ast g)(s)= \int _0^\infty \int _0^\infty f(y)g(x/y)x^{s-1}\frac{dy}ydx \]

By the integrability assumption above and Fubini’s theorem,

\[ \mathcal M(f\ast g)(s)= \int _0^\infty \int _0^\infty f(y)g(x/y)x^{s-1}dx\frac{dy}y \]

in which we change variables from \(x\) to \(z=x/y\):

\[ \mathcal M(f\ast g)(s)= \int _0^\infty \int _0^\infty f(y)g(z)y^{s-1}z^{s-1}dzdy \]

which, by the definition of the Mellin transform, is

\[ \mathcal M(f\ast g)(s)= \mathcal M(f)(s)\mathcal M(g)(s) . \]
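For the record, the substitution in the middle step is \(x=yz\) for fixed \(y{\gt}0\): then \(dx = y\, dz\) and \(x^{s-1} = y^{s-1}z^{s-1}\), so

\[ \int _0^\infty g(x/y)x^{s-1}dx = y^{s}\int _0^\infty g(z)z^{s-1}dz, \]

and multiplying by \(f(y)/y\) and integrating in \(y\) yields the product of the two Mellin transforms.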

The bumpfunction \(\nu \) (see Theorem 3.3.5 below) has Mellin transform \(\mathcal{M}(\nu )(s)\), which is entire and decays (at least) like \(1/|s|\).

[Of course it decays faster than any power of \(|s|\), but it turns out that we will just need one power.]

Theorem 3.3.2 MellinOfPsi
#

The Mellin transform of \(\nu \) is

\[ \mathcal{M}(\nu )(s) = O\left(\frac{1}{|s|}\right), \]

as \(|s|\to \infty \) in a fixed vertical strip \(\sigma _1 \le \Re (s) \le 2\).

Proof

Integrate by parts:

\[ \left|\int _0^\infty \nu (x)x^s\frac{dx}{x}\right| = \left|-\int _0^\infty \nu '(x)\frac{x^{s}}{s}dx\right| \]
\[ \le \frac{1}{|s|} \int _{1/2}^2|\nu '(x)|x^{\Re (s)}dx. \]

Since \(\Re (s)\) is bounded, the right-hand side is bounded by a constant times \(1/|s|\).

We can make a delta spike out of this bumpfunction, as follows.

Definition 3.3.2 DeltaSpike
#

Let \(\nu \) be a bumpfunction supported in \([1/2,2]\). Then for any \(\epsilon {\gt}0\), we define the delta spike \(\nu _\epsilon \) to be the function from \(\mathbb {R}_{{\gt}0}\) to \(\mathbb {C}\) defined by

\[ \nu _\epsilon (x) = \frac{1}{\epsilon }\nu \left(x^{\frac{1}{\epsilon }}\right). \]

This spike still has mass one:

Lemma 3.3.3 DeltaSpikeMass
#

For any \(\epsilon {\gt}0\), we have

\[ \int _0^\infty \nu _\epsilon (x)\frac{dx}{x} = 1. \]
Proof

Substitute \(y=x^{1/\epsilon }\), and use the fact that \(\nu \) has mass one, and that \(dx/x\) is Haar measure.

The Mellin transform of the delta spike is easy to compute.

Theorem 3.3.3 MellinOfDeltaSpike
#

For any \(\epsilon {\gt}0\), the Mellin transform of \(\nu _\epsilon \) is

\[ \mathcal{M}(\nu _\epsilon )(s) = \mathcal{M}(\nu )\left(\epsilon s\right). \]
Proof

Substitute \(y=x^{1/\epsilon }\), use Haar measure; direct calculation.
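In more detail: substituting \(y = x^{1/\epsilon }\), so that \(x = y^\epsilon \) and \(\frac{dx}{x} = \epsilon \frac{dy}{y}\),

\[ \mathcal{M}(\nu _\epsilon )(s) = \int _0^\infty \frac{1}{\epsilon }\nu \left(x^{1/\epsilon }\right)x^{s}\frac{dx}{x} = \int _0^\infty \nu (y)y^{\epsilon s}\frac{dy}{y} = \mathcal{M}(\nu )(\epsilon s). \]

Taking \(s=0\) in the same computation recovers Lemma 3.3.3.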

In particular, for \(s=1\), we have that the Mellin transform of \(\nu _\epsilon \) is \(1+O(\epsilon )\).

Corollary 3.3.1 MellinOfDeltaSpikeAt1
#

For any \(\epsilon {\gt}0\), we have

\[ \mathcal{M}(\nu _\epsilon )(1) = \mathcal{M}(\nu )(\epsilon ). \]
Proof

This is immediate from the above theorem.

Lemma 3.3.4 MellinOfDeltaSpikeAt1-asymp
#

As \(\epsilon \to 0\), we have

\[ \mathcal{M}(\nu _\epsilon )(1) = 1+O(\epsilon ). \]
Proof

By Corollary 3.3.1,

\[ \mathcal M(\nu _\epsilon )(1)=\mathcal M(\nu )(\epsilon ) \]

which by the definition of the Mellin transform is

\[ \mathcal M(\nu )(\epsilon )=\int _0^\infty \nu (x)x^{\epsilon -1}dx . \]

Since \(\nu (x) x^{\epsilon -1}\) is integrable (because \(\nu \) is continuous and compactly supported),

\[ \mathcal M(\nu )(\epsilon )-\int _0^\infty \nu (x)\frac{dx}x =\int _0^\infty \nu (x)(x^{\epsilon -1}-x^{-1})dx . \]

By Taylor’s theorem (uniformly for \(x\) in the support \([1/2,2]\) of \(\nu \)),

\[ x^{\epsilon -1}-x^{-1}=x^{-1}\left(e^{\epsilon \log x}-1\right)=O(\epsilon ) \]

so, since \(\nu \) is absolutely integrable,

\[ \mathcal M(\nu )(\epsilon )-\int _0^\infty \nu (x)\frac{dx}x=O(\epsilon ) . \]

We conclude the proof using Theorem 3.3.5.

Let \(1_{(0,1]}\) be the function from \(\mathbb {R}_{{\gt}0}\) to \(\mathbb {C}\) defined by

\[ 1_{(0,1]}(x) = \begin{cases} 1 & \text{ if }x\leq 1\\ 0 & \text{ if }x{\gt}1 \end{cases}. \]

This has Mellin transform: [Note: this already exists in mathlib]

Theorem 3.3.4 MellinOf1
#

The Mellin transform of \(1_{(0,1]}\) is

\[ \mathcal{M}(1_{(0,1]})(s) = \frac{1}{s}. \]
Proof

This is a straightforward calculation.
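Namely, for \(\Re (s){\gt}0\),

\[ \mathcal{M}(1_{(0,1]})(s) = \int _0^1 x^{s-1}dx = \left[\frac{x^s}{s}\right]_0^1 = \frac{1}{s}. \]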

What will be essential for us is properties of the smooth version of \(1_{(0,1]}\), obtained as the Mellin convolution of \(1_{(0,1]}\) with \(\nu _\epsilon \).

Definition 3.3.3 Smooth1
#

Let \(\epsilon {\gt}0\). Then we define the smooth function \(\widetilde{1_{\epsilon }}\) from \(\mathbb {R}_{{\gt}0}\) to \(\mathbb {C}\) by

\[ \widetilde{1_{\epsilon }} = 1_{(0,1]}\ast \nu _\epsilon . \]
The following auxiliary estimate will be used in Lemmas 3.3.5 and 3.3.6 below: for \(\epsilon {\gt}0\), we have \((1-2^{-\epsilon })/\epsilon {\lt} \log 2\).

Proof

Let \(c:=2^\epsilon {\gt} 1\), in terms of which we wish to prove

\[ -1 {\lt} c \log c - c . \]

Letting \(f(x):=x\log x - x\), we can rewrite this as \(f(1) {\lt} f(c)\). Since

\[ \frac{d}{dx}f(x) = \log x {\gt} 0 , \]

\(f\) is monotone increasing on \([1,\infty )\), and we are done.

In particular, we have the following two properties.

Lemma 3.3.5 Smooth1Properties-below

Fix \(\epsilon {\gt}0\). There is an absolute constant \(c{\gt}0\) so that: If \(0 {\lt} x \leq (1-c\epsilon )\), then

\[ \widetilde{1_{\epsilon }}(x) = 1. \]
Proof

Opening the definition, we have that the Mellin convolution of \(1_{(0,1]}\) with \(\nu _\epsilon \) is

\[ \int _0^\infty 1_{(0,1]}(y)\nu _\epsilon (x/y)\frac{dy}{y} = \int _0^1 \nu _\epsilon (x/y)\frac{dy}{y}. \]

The support of \(\nu _\epsilon \) is contained in \([2^{-\epsilon },2^\epsilon ]\), so it suffices to consider \(y \in [2^{-\epsilon }x,2^{\epsilon }x]\) for nonzero contributions. If \(x {\lt} 2^{-\epsilon }\), then the integral is the same as that over \((0,\infty )\):

\[ \int _0^1 \nu _\epsilon (x/y)\frac{dy}{y} = \int _0^\infty \nu _\epsilon (x/y)\frac{dy}{y}, \]

in which we change variables to \(z=x/y\) (using \(x{\gt}0\)):

\[ \int _0^\infty \nu _\epsilon (x/y)\frac{dy}{y} = \int _0^\infty \nu _\epsilon (z)\frac{dz}{z}, \]

which is equal to one by Lemma 3.3.3. We then choose

\[ c:=\log 2, \]

which satisfies

\[ c {\gt} \frac{1-2^{-\epsilon }}\epsilon \]

by the auxiliary estimate above, so

\[ 1-c\epsilon {\lt} 2^{-\epsilon }. \]
Lemma 3.3.6 Smooth1Properties-above

Fix \(0{\lt}\epsilon {\lt}1\). There is an absolute constant \(c{\gt}0\) so that: if \(x\geq (1+c\epsilon )\), then

\[ \widetilde{1_{\epsilon }}(x) = 0. \]
Proof

Again the Mellin convolution is

\[ \int _0^1 \nu _\epsilon (x/y)\frac{dy}{y}, \]

but now if \(x {\gt} 2^\epsilon \), then the support of \(\nu _\epsilon \) is disjoint from the region of integration, and hence the integral is zero. We choose

\[ c:=2\log 2 . \]

By the auxiliary estimate above,

\[ c {\gt} 2\frac{1-2^{-\epsilon }}\epsilon {\gt} 2^\epsilon \frac{1-2^{-\epsilon }}\epsilon = \frac{2^\epsilon -1}\epsilon , \]

so

\[ 1+c\epsilon {\gt} 2^\epsilon . \]
Lemma 3.3.7 Smooth1Nonneg

If \(\nu \) is nonnegative, then \(\widetilde{1_{\epsilon }}(x)\) is nonnegative.

Proof

By Definitions 3.3.3, 3.3.1 and 3.3.2

\[ \widetilde{1_\epsilon }(x) =\int _0^\infty 1_{(0,1]}(y)\frac1\epsilon \nu ((x/y)^{\frac1\epsilon }) \frac{dy}y \]

and all the factors in the integrand are nonnegative.

Lemma 3.3.8 Smooth1LeOne

If \(\nu \) is nonnegative and has mass one, then \(\widetilde{1_{\epsilon }}(x)\le 1\), \(\forall x{\gt}0\).

Proof

By Definitions 3.3.3, 3.3.1 and 3.3.2

\[ \widetilde{1_\epsilon }(x) =\int _0^\infty 1_{(0,1]}(y)\frac1\epsilon \nu ((x/y)^{\frac1\epsilon }) \frac{dy}y \]

and since \(1_{(0,1]}(y)\le 1\), and all the factors in the integrand are nonnegative,

\[ \widetilde{1_\epsilon }(x)\le \int _0^\infty \frac1\epsilon \nu ((x/y)^{\frac1\epsilon }) \frac{dy}y \]

(because in mathlib the integral of a non-integrable function is \(0\), for the inequality above to be true, we must prove that \(\nu ((x/y)^{\frac1\epsilon })/y\) is integrable; this follows from the computation below). We then change variables to \(z=(x/y)^{\frac1\epsilon }\):

\[ \widetilde{1_\epsilon }(x)\le \int _0^\infty \nu (z) \frac{dz}z \]

which by Theorem 3.3.5 is 1.

Combining the above, we have the following three Main Lemmata of this section on the Mellin transform of \(\widetilde{1_{\epsilon }}\).

Lemma 3.3.9 MellinOfSmooth1a

Fix \(\epsilon {\gt}0\). Then the Mellin transform of \(\widetilde{1_{\epsilon }}\) is

\[ \mathcal{M}(\widetilde{1_{\epsilon }})(s) = \frac{1}{s}\left(\mathcal{M}(\nu )\left(\epsilon s\right)\right). \]
Proof

By Definition 3.3.3,

\[ \mathcal M(\widetilde{1_\epsilon })(s) =\mathcal M(1_{(0,1]}\ast \nu _\epsilon )(s) . \]

We wish to apply Theorem 3.3.1. To do so, we must prove that

\[ (x,y)\mapsto 1_{(0,1]}(y)\nu _\epsilon (x/y)/y \]

is integrable on \([0,\infty )^2\). It is actually easier to do this for the convolution: \(\nu _\epsilon \ast 1_{(0,1]}\), so we use Lemma 3.3.2: for \(x\neq 0\),

\[ 1_{(0,1]}\ast \nu _\epsilon (x)=\nu _\epsilon \ast 1_{(0,1]}(x) . \]

Now, for \(x=0\), both sides of the equation are 0, so the equation also holds for \(x=0\). Therefore,

\[ \mathcal M(\widetilde{1_\epsilon })(s) =\mathcal M(\nu _\epsilon \ast 1_{(0,1]})(s) . \]

Now,

\[ (x,y)\mapsto \nu _\epsilon (y)1_{(0,1]}(x/y)\frac{x^{s-1}}y \]

has compact support that is bounded away from \(y=0\) (specifically \(y\in [2^{-\epsilon },2^\epsilon ]\) and \(x\in (0,y]\)), so it is integrable. We can thus apply Theorem 3.3.1 and find

\[ \mathcal M(\widetilde{1_\epsilon })(s) =\mathcal M(\nu _\epsilon )(s)\mathcal M(1_{(0,1]})(s) . \]

By Theorems 3.3.4 and 3.3.3,

\[ \mathcal M(\widetilde{1_\epsilon })(s) =\frac1s\mathcal M(\nu )(\epsilon s) . \]
Lemma 3.3.10 MellinOfSmooth1b

Given \(0{\lt}\sigma _1\le \sigma _2\), for any \(s\) such that \(\sigma _1\le \Re (s)\le \sigma _2\), we have

\[ \mathcal{M}(\widetilde{1_{\epsilon }})(s) = O\left(\frac{1}{\epsilon |s|^2}\right). \]
Proof

Use Lemma 3.3.9 and the bound in Theorem 3.3.2.
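Concretely, Lemma 3.3.9 gives \(\mathcal{M}(\widetilde{1_{\epsilon }})(s) = \frac{1}{s}\mathcal{M}(\nu )(\epsilon s)\), and a bound of the shape of Theorem 3.3.2, \(\mathcal{M}(\nu )(\epsilon s) = O(1/|\epsilon s|)\), then yields

\[ |\mathcal{M}(\widetilde{1_{\epsilon }})(s)| \ll \frac{1}{|s|}\cdot \frac{1}{\epsilon |s|} = \frac{1}{\epsilon |s|^2}. \]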

Lemma 3.3.11 MellinOfSmooth1c

At \(s=1\), we have

\[ \mathcal{M}(\widetilde{1_{\epsilon }})(1) = 1+O(\epsilon ). \]
Proof

Follows from Lemma 3.3.9, Corollary 3.3.1 and Lemma 3.3.4.

Lemma 3.3.12 Smooth1ContinuousAt

Fix a nonnegative, continuously differentiable function \(F\) on \(\mathbb {R}\) with support in \([1/2,2]\). Then for any \(\epsilon {\gt}0\), the function \(x \mapsto \int _{(0,\infty )} x^{1+it} \widetilde{1_{\epsilon }}(x) dx\) is continuous at any \(y{\gt}0\).

Proof

Use Lemma ?? to write \(\widetilde{1_{\epsilon }}(x)\) as an integral over an interval near \(1\), in particular avoiding the singularity at \(0\). The integrand may be bounded by \(2^{\epsilon }\nu _\epsilon (t)\), which is independent of \(x\), and we can use dominated convergence to prove continuity.

Let \(\nu \) be a bumpfunction.

Theorem 3.3.5 SmoothExistence
#

There exists a smooth (once differentiable would be enough), nonnegative “bumpfunction” \(\nu \), supported in \([1/2,2]\) with total mass one:

\[ \int _0^\infty \nu (x)\frac{dx}{x} = 1. \]
Proof

Same idea as a Urysohn-type argument.

3.4 Zeta Bounds

Already in Mathlib (with a shortened proof):

Theorem 3.4.1 hasDerivAt-conj-conj
#

Let \(f : \mathbb {C} \to \mathbb {C}\) be a complex differentiable function at \(p \in \mathbb {C}\) with derivative \(a\). Then the function \(g(z) = \overline{f(\overline{z})}\) is complex differentiable at \(\overline{p}\) with derivative \(\overline{a}\).

Proof

We expand the definition of the derivative and compute.

Submitted to Mathlib:

Theorem 3.4.2 deriv-conj-conj
#

Let \(f : \mathbb {C} \to \mathbb {C}\) be a function whose derivative at \(p \in \mathbb {C}\) is \(a\). Then the derivative of the function \(g(z) = \overline{f(\overline{z})}\) at \(\overline{p}\) is \(\overline{a}\).

Proof

We proceed by case analysis on whether \(f\) is differentiable at \(p\). If \(f\) is differentiable at \(p\), then we can apply the previous theorem. If \(f\) is not differentiable at \(p\), then neither is \(g\), and both derivatives have the default value of zero.

Theorem 3.4.3 conj-riemannZeta-conj-aux1
#

Conjugation symmetry of the Riemann zeta function in the half-plane of convergence. Let \(s \in \mathbb {C}\) with \(\Re (s) {\gt} 1\). Then \(\overline{\zeta (\overline{s})} = \zeta (s)\).

Proof

We expand the definition of the Riemann zeta function as a series and find that the two sides are equal term by term.
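Namely, for \(\Re (s){\gt}1\),

\[ \overline{\zeta (\overline{s})} = \overline{\sum _{n=1}^\infty n^{-\overline{s}}} = \sum _{n=1}^\infty \overline{n^{-\overline{s}}} = \sum _{n=1}^\infty n^{-s} = \zeta (s), \]

using that \(\overline{n^{-\overline{s}}} = n^{-s}\) for each positive integer \(n\), since \(n^{-w} = e^{-w\log n}\) with \(\log n\) real.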

Theorem 3.4.4 conj-riemannZeta-conj
#

Conjugation symmetry of the Riemann zeta function. Let \(s \in \mathbb {C}\). Then

\[ \overline{\zeta (\overline{s})} = \zeta (s). \]
Proof

By the previous lemma, the two sides are equal on the half-plane \(\{ s \in \mathbb {C} : \Re (s) {\gt} 1\} \). Then, by analytic continuation, they are equal on the whole complex plane.

Theorem 3.4.5 riemannZeta-conj
#

Conjugation symmetry of the Riemann zeta function. Let \(s \in \mathbb {C}\). Then

\[ \zeta (\overline{s}) = \overline{\zeta (s)}. \]
Proof

This follows as an immediate corollary of Theorem 3.4.4.

Theorem 3.4.6 deriv-riemannZeta-conj
#

Conjugation symmetry of the derivative of the Riemann zeta function. Let \(s \in \mathbb {C}\). Then

\[ \zeta '(\overline{s}) = \overline{\zeta '(s)}. \]
Proof

We apply the derivative conjugation symmetry to the Riemann zeta function and use the conjugation symmetry of the Riemann zeta function itself.

Theorem 3.4.7 intervalIntegral-conj
#

The conjugation symmetry of the interval integral. Let \(f : \mathbb {R} \to \mathbb {C}\) be a measurable function, and let \(a, b \in \mathbb {R}\). Then

\[ \int _{a}^{b} \overline{f(x)} \, dx = \overline{\int _{a}^{b} f(x) \, dx}. \]
Proof

We unfold the interval integral into an integral over a uIoc and use the conjugation property of integrals.

We record here some preliminaries about the zeta function and general holomorphic functions.

Theorem 3.4.8 ResidueOfTendsTo
#

If a function \(f\) is holomorphic in a neighborhood of \(p\) and \(\lim _{s\to p} (s-p)f(s) = A\), then \(f(s) = \frac{A}{s-p} + O(1)\) near \(p\).

Proof

The function \((s - p)\cdot f(s)\) is bounded near \(p\), so by Theorem 3.1.1 there is a holomorphic function, \(g\), say, so that \((s-p)f(s) = g(s)\) in a neighborhood of \(s=p\), and \(g(p)=A\). Because \(g\) is holomorphic, near \(s=p\) we have \(g(s)=A+O(s-p)\). Dividing by \(s-p\) gives \(f(s) = A/(s-p) + O(1)\).

Theorem 3.4.9 riemannZetaResidue
#

The Riemann zeta function \(\zeta (s)\) has a simple pole at \(s=1\) with residue \(1\). In particular, the function \(\zeta (s) - \frac{1}{s-1}\) is bounded in a neighborhood of \(s=1\).

Proof

From riemannZeta_residue_one (in Mathlib), we know that \((s-1)\zeta (s)\) goes to \(1\) as \(s\to 1\). Now apply Theorem 3.4.8. (This can also be done using \(\zeta _0\) below, which is expressed as \(1/(s-1)\) plus things that are holomorphic for \(\Re (s){\gt}0\)...)

Theorem 3.4.10 nonZeroOfBddAbove
#

If a function \(f\) has a simple pole at a point \(p\) with residue \(A \neq 0\), then \(f\) is nonzero in a punctured neighborhood of \(p\).

Proof

We know that \(f(s) = \frac{A}{s-p} + O(1)\) near \(p\), so we can write

\[ f(s) = \left(f(s) - \frac{A}{s-p}\right) + \frac{A}{s-p}. \]

The first term is bounded, say by \(M\), and the second term goes to \(\infty \) as \(s \to p\). Therefore, there exists a neighborhood \(V\) of \(p\) such that for all \(s \in V \setminus \{ p\} \), we have \(f(s) \neq 0\).

Theorem 3.4.11 logDerivResidue
#

If \(f\) is holomorphic in a punctured neighborhood of \(p\) and has a simple pole at \(p\) (with residue \(A\ne 0\)), then \(f'/ f\) has a simple pole at \(p\) with residue \(-1\):

\[ \frac{f'(s)}{f(s)} = \frac{-1}{s - p} + O(1). \]
Proof

Using Theorem 3.1.1, there is a function \(g\) holomorphic near \(p\), for which \(f(s) = A/(s-p) + g(s) = h(s)/ (s-p)\). Here \(h(s):= A + g(s)(s-p)\) which is nonzero in a neighborhood of \(p\) (since \(h\) goes to \(A\) which is nonzero). Then \(f'(s) = (h'(s)(s-p) - h(s))/(s-p)^2\), and we can compute the quotient:

\[ \frac{f'(s)}{f(s)}+1/(s-p) = \frac{h'(s)(s-p) - h(s)}{h(s)} \cdot \frac{1}{(s-p)}+1/(s-p) = \frac{h'(s)}{h(s)}. \]

Since \(h\) is nonvanishing near \(p\), this remains bounded in a neighborhood of \(p\).

Theorem 3.4.12 BddAbove-to-IsBigO
#

If \(f\) is bounded above in a punctured neighborhood of \(p\), then \(f\) is \(O(1)\) in that neighborhood.

Proof

Elementary.

Let’s also record that if a function \(f\) has a simple pole at \(p\) with residue \(A\), and \(g\) is holomorphic near \(p\), then the residue of \(f \cdot g\) is \(A \cdot g(p)\).

Theorem 3.4.13 ResidueMult
#

If \(f\) has a simple pole at \(p\) with residue \(A\), and \(g\) is holomorphic near \(p\), then the residue of \(f \cdot g\) at \(p\) is \(A \cdot g(p)\). That is, we assume that

\[ f(s) = \frac{A}{s - p} + O(1) \]

near \(p\), and that \(g\) is holomorphic near \(p\). Then

\[ f(s) \cdot g(s) = \frac{A \cdot g(p)}{s - p} + O(1). \]
Proof

Elementary calculation.

\[ f(s) \cdot g(s) - \frac{A \cdot g(p)}{s - p} = \left(f(s) \cdot g(s) - \frac{A \cdot g(s)}{s - p}\right) + \left(\frac{A \cdot g(s) - A \cdot g(p)}{s - p}\right). \]

The first term is \(g(s)\left(f(s) - \frac{A}{s - p}\right)\), which is bounded near \(p\) by the assumption on \(f\) and the fact that \(g\) is holomorphic near \(p\). The second term is \(A\) times the difference quotient of \(g\) at \(p\), which is bounded because \(g\) is differentiable at \(p\).

As a corollary, the log derivative of the Riemann zeta function has a simple pole at \(s=1\):

Theorem 3.4.14 riemannZetaLogDerivResidue
#

The log derivative of the Riemann zeta function \(\zeta (s)\) has a simple pole at \(s=1\) with residue \(-1\): \(-\frac{\zeta '(s)}{\zeta (s)} - \frac{1}{s-1} = O(1)\).

Proof

This follows from Theorem 3.4.11 and Theorem 3.4.9.

Definition 3.4.1 riemannZeta0
#

For any natural \(N\ge 1\), we define

\[ \zeta _0(N,s) := \sum _{1\le n \le N} \frac1{n^s} + \frac{- N^{1-s}}{1-s} + \frac{-N^{-s}}{2} + s \int _N^\infty \frac{\lfloor x\rfloor + 1/2 - x}{x^{s+1}} \, dx \]
Lemma 3.4.1 sum-eq-int-deriv
#

Let \(a {\lt} b\), and let \(\phi \) be continuously differentiable on \([a, b]\). Then

\[ \sum _{a {\lt} n \le b} \phi (n) = \int _a^b \phi (x) \, dx + \left(\lfloor b \rfloor + \frac{1}{2} - b\right) \phi (b) - \left(\lfloor a \rfloor + \frac{1}{2} - a\right) \phi (a) - \int _a^b \left(\lfloor x \rfloor + \frac{1}{2} - x\right) \phi '(x) \, dx. \]
Proof

Specialize Abel summation from Mathlib to the trivial arithmetic function and then manipulate integrals.

Lemma 3.4.2 ZetaSum-aux1
#

Let \(0 {\lt} a {\lt} b\) be natural numbers and \(s\in \mathbb {C}\) with \(s \ne 1\) and \(s \ne 0\). Then

\[ \sum _{a {\lt} n \le b} \frac{1}{n^s} = \frac{b^{1-s} - a^{1-s}}{1-s} + \frac{b^{-s}-a^{-s}}{2} + s \int _a^b \frac{\lfloor x\rfloor + 1/2 - x}{x^{s+1}} \, dx. \]
Proof

Apply Lemma 3.4.1 to the function \(x \mapsto x^{-s}\).
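In more detail: with \(\phi (x) = x^{-s}\) we have \(\phi '(x) = -s\, x^{-s-1}\) and (using \(s\ne 1\))

\[ \int _a^b x^{-s}dx = \frac{b^{1-s}-a^{1-s}}{1-s}; \]

since \(a\) and \(b\) are integers, the boundary terms of Lemma 3.4.1 reduce to \(\frac{1}{2}\phi (b) - \frac{1}{2}\phi (a) = \frac{b^{-s}-a^{-s}}{2}\), and the last integral of Lemma 3.4.1 becomes \(+s\int _a^b \frac{\lfloor x\rfloor + 1/2 - x}{x^{s+1}}dx\).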

Lemma 3.4.3 ZetaBnd-aux1a
#

For any \(0 {\lt} a {\lt} b\) and \(s \in \mathbb {C}\) with \(\sigma =\Re (s){\gt}0\),

\[ \int _a^b \left|\frac{\lfloor x\rfloor + 1/2 - x}{x^{s+1}}\right| \, dx \le \frac{a^{-\sigma }-b^{-\sigma }}{\sigma }. \]
Proof

Apply the triangle inequality

\[ \left|\int _a^b \frac{\lfloor x\rfloor + 1/2 - x}{x^{s+1}} \, dx\right| \le \int _a^b \frac{1}{x^{\sigma +1}} \, dx, \]

and evaluate the integral.
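The evaluation in question is

\[ \int _a^b x^{-\sigma -1}dx = \frac{a^{-\sigma }-b^{-\sigma }}{\sigma }. \]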

Lemma 3.4.4 ZetaSum-aux2
#

Let \(N\) be a natural number and \(s\in \mathbb {C}\), \(\Re (s){\gt}1\). Then

\[ \sum _{N {\lt} n} \frac{1}{n^s} = \frac{- N^{1-s}}{1-s} + \frac{-N^{-s}}{2} + s \int _N^\infty \frac{\lfloor x\rfloor + 1/2 - x}{x^{s+1}} \, dx. \]
Proof

Apply Lemma 3.4.2 with \(a=N\) and \(b\to \infty \).

Lemma 3.4.5 ZetaBnd-aux1b
#

For any \(N\ge 1\) and \(s = \sigma + tI \in \mathbb {C}\), \(\sigma {\gt} 0\),

\[ \left| \int _N^\infty \frac{\lfloor x\rfloor + 1/2 - x}{x^{s+1}} \, dx \right| \le \frac{N^{-\sigma }}{\sigma }. \]
Proof

Apply Lemma 3.4.3 with \(a=N\) and \(b\to \infty \).

Lemma 3.4.6 ZetaBnd-aux1
#

For any \(N\ge 1\) and \(s = \sigma + tI \in \mathbb {C}\), \(\sigma \in (0,2]\), \(2 {\lt} |t|\),

\[ \left| s\int _N^\infty \frac{\lfloor x\rfloor + 1/2 - x}{x^{s+1}} \, dx \right| \le 2 |t| \frac{N^{-\sigma }}{\sigma }. \]
Proof

Apply Lemma 3.4.5 and estimate \(|s|\ll |t|\).

Big-Oh version of Lemma 3.4.6.

Lemma 3.4.7 ZetaBnd-aux1p
#

For any \(N\ge 1\) and \(s = \sigma + tI \in \mathbb {C}\), \(\sigma \in (0,2]\), \(2 {\lt} |t|\),

\[ \left| s\int _N^\infty \frac{\lfloor x\rfloor + 1/2 - x}{x^{s+1}} \, dx \right| \ll |t| \frac{N^{-\sigma }}{\sigma }. \]
Proof

Apply Lemma 3.4.5 and estimate \(|s|\ll |t|\).

Theorem 3.4.15 HolomorphicOn-riemannZeta0
#

For any \(N\ge 1\), the function \(\zeta _0(N,s)\) is holomorphic on \(\{ s\in \mathbb {C}\mid \Re (s){\gt}0 ∧ s \ne 1\} \).

Proof

The function \(\zeta _0(N,s)\) is a finite sum of entire functions, plus an integral that’s absolutely convergent on \(\{ s\in \mathbb {C}\mid \Re (s){\gt}0 ∧ s \ne 1\} \) by Lemma 3.4.5.

Lemma 3.4.8 isPathConnected-aux
#

The set \(\{ s\in \mathbb {C}\mid \Re (s){\gt}0 ∧ s \ne 1\} \) is path-connected.

Proof

Construct explicit paths from \(2\) to any point, either a line segment or two joined ones.

Lemma 3.4.9 Zeta0EqZeta

For \(\Re (s){\gt}0\), \(s\ne 1\), and for any \(N\ge 1\),

\[ \zeta _0(N,s) = \zeta (s). \]
Proof

Use Lemma 3.4.4 and the Definition 3.4.1.

Lemma 3.4.10 ZetaBnd-aux2
#

Given \(n ≤ t\) and \(\sigma \) with \(1-A/\log t \le \sigma \), we have that

\[ |n^{-s}| \le n^{-1} e^A. \]
Proof

Use \(|n^{-s}| = n^{-\sigma } = e^{-\sigma \log n} \le \exp (-\left(1-\frac{A}{\log t}\right)\log n) \le n^{-1} e^A\), since \(n\le t\).

Lemma 3.4.11 ZetaUpperBnd
#

For any \(s = \sigma + tI \in \mathbb {C}\), \(1/2 \le \sigma \le 2, 3 {\lt} |t|\) and any \(0 {\lt} A {\lt} 1\) sufficiently small, and \(1-A/\log |t| \le \sigma \), we have

\[ |\zeta (s)| \ll \log |t|. \]
Proof

First replace \(\zeta (s)\) by \(\zeta _0(N,s)\) for \(N = \lfloor |t| \rfloor \). We estimate:

\[ |\zeta _0(N,s)| \ll \sum _{1\le n \le |t|} |n^{-s}| + \frac{|t|^{1-\sigma }}{|1-s|} + \frac{|t|^{-\sigma }}{2} + |t| \cdot \frac{|t|^{-\sigma }}{\sigma } \]
\[ \ll e^A \sum _{1\le n {\lt} |t|} n^{-1} +|t|^{1-\sigma }, \]

where we used Lemma 3.4.10 and Lemma 3.4.6. The first term is \(\ll \log |t|\) (e.g. \(\sum _{1\le n \le |t|} n^{-1} \le 1 + \log |t|\)). For the second term, estimate

\[ |t|^{1-\sigma } \le |t|^{1-(1-A/\log |t|)} = |t|^{A/\log |t|} \ll 1. \]
Lemma 3.4.12 DerivUpperBnd-aux7
#

For any \(s = \sigma + tI \in \mathbb {C}\), \(1/2 \le \sigma \le 2, 3 {\lt} |t|\), and any \(0 {\lt} A {\lt} 1\) sufficiently small, and \(1-A/\log |t| \le \sigma \), we have

\[ \left\| s \cdot \int _{N}^{\infty } \left(\left\lfloor x \right\rfloor + \frac{1}{2} - x\right) \cdot x^{-s-1} \cdot (-\log x) \, dx \right\| \le 2 \cdot |t| \cdot N^{-\sigma } / \sigma \cdot \log |t|. \]
Proof

Estimate \(|s|= |\sigma + tI|\) by \(|s|\le 2 +|t| \le 2|t|\) (since \(|t|{\gt}3\)). Estimating \(|\left\lfloor x \right\rfloor +1/2-x|\) by \(1\), and using \(|x^{-s-1}| = x^{-\sigma -1}\), we have

\[ \left\| s \cdot \int _{N}^{\infty } \left(\left\lfloor x \right\rfloor + \frac{1}{2} - x\right) \cdot x^{-s-1} \cdot (-\log x) \, dx \right\| \le 2 \cdot |t| \int _{N}^{\infty } x^{-\sigma -1} \cdot (\log x) \, dx. \]

For the last integral, integrate by parts, getting:

\[ \int _{N}^{\infty } x^{-\sigma -1} \cdot (\log x) \, dx = \frac{1}{\sigma }N^{-\sigma } \cdot \log N + \frac1{\sigma ^2} \cdot N^{-\sigma }. \]

Now use \(\log N \le \log |t|\) to get the result.

Lemma 3.4.13 ZetaDerivUpperBnd
#

For any \(s = \sigma + tI \in \mathbb {C}\), \(1/2 \le \sigma \le 2, 3 {\lt} |t|\), there is an \(A{\gt}0\) so that for \(1-A/\log |t| \le \sigma \), we have

\[ |\zeta '(s)| \ll \log ^2 |t|. \]
Proof

First replace \(\zeta (s)\) by \(\zeta _0(N,s)\) for \(N = \lfloor |t| \rfloor \). Differentiating term by term, we get:

\[ \zeta '(s) = -\sum _{1\le n \le N} n^{-s} \log n - \frac{N^{1 - s}}{(1 - s)^2} + \frac{N^{1 - s} \log N}{1 - s} + \frac{N^{-s}\log N}{2} + \int _N^\infty \frac{\lfloor x\rfloor + 1/2 - x}{x^{s+1}} \, dx -s \int _N^\infty \log x \frac{\lfloor x\rfloor + 1/2 - x}{x^{s+1}} \, dx . \]

Estimate as before, with an extra factor of \(\log |t|\).

Lemma 3.4.14 ZetaNear1BndFilter
#

As \(\sigma \to 1^+\),

\[ |\zeta (\sigma )| \ll 1/(\sigma -1). \]
Proof

Zeta has a simple pole at \(s=1\). Equivalently, \(\zeta (s)(s-1)\) remains bounded near \(1\). Lots of ways to prove this. Probably the easiest one: use the expression for \(\zeta _0 (N,s)\) with \(N=1\) (the term \(N^{1-s}/(1-s)\) being the only unbounded one).

Lemma 3.4.15 ZetaNear1BndExact
#

There exists a \(c{\gt}0\) such that for all \(1 {\lt} \sigma ≤ 2\),

\[ |\zeta (\sigma )| ≤ c/(\sigma -1). \]
Proof

Split into two cases, use Lemma 3.4.14 for \(\sigma \) sufficiently small and continuity on a compact interval otherwise.

Lemma 3.4.16 ZetaLowerBound3
#

There exists a \(c{\gt}0\) such that for all \(1 {\lt} \sigma \le 2\) and \(3 {\lt} |t|\),

\[ c \frac{(\sigma -1)^{3/4}}{(\log |t|)^{1/4}} \le |\zeta (\sigma + tI)|. \]
Proof

Combine Lemma 3.4.17 with upper bounds for \(|\zeta (\sigma )|\) (from Lemma 3.4.15) and \(|\zeta (\sigma +2it)|\) (from Lemma 3.4.11).

Lemma 3.4.17 ZetaInvBound1
#

For all \(\sigma {\gt}1\),

\[ 1/|\zeta (\sigma +it)| \le |\zeta (\sigma )|^{3/4}|\zeta (\sigma +2it)|^{1/4} \]
Proof

The inequality

\[ 1 \le |\zeta (\sigma )|^3 |\zeta (\sigma +it)|^4 |\zeta (\sigma +2it)| \]

for \(\sigma {\gt}1\) is already proved by Michael Stoll in the EulerProducts PNT file.
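Granting that inequality, divide by \(|\zeta (\sigma +it)|^4\) (nonzero for \(\sigma {\gt}1\), by the Euler product) and take fourth roots:

\[ \frac{1}{|\zeta (\sigma +it)|^4} \le |\zeta (\sigma )|^3|\zeta (\sigma +2it)| \quad \Longrightarrow \quad \frac{1}{|\zeta (\sigma +it)|} \le |\zeta (\sigma )|^{3/4}|\zeta (\sigma +2it)|^{1/4}. \]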

Lemma 3.4.18 ZetaInvBound2
#

For \(\sigma {\gt}1\) (and \(\sigma \le 2\)),

\[ 1/|\zeta (\sigma +it)| \ll (\sigma -1)^{-3/4}(\log |t|)^{1/4}, \]

as \(|t|\to \infty \).

Proof

Combine Lemma 3.4.17 with the bounds in Lemmata 3.4.15 and 3.4.11.

Lemma 3.4.19 Zeta-eq-int-derivZeta
#

For any \(t\ne 0\) (so we don’t pass through the pole), and \(\sigma _1 {\lt} \sigma _2\),

\[ \int _{\sigma _1}^{\sigma _2}\zeta '(\sigma + it) d\sigma = \zeta (\sigma _2+it) - \zeta (\sigma _1+it). \]
Proof

This is the fundamental theorem of calculus.

Lemma 3.4.20 Zeta-diff-Bnd
#

For any \(A{\gt}0\) sufficiently small, there is a constant \(C{\gt}0\) so that whenever \(1- A / \log |t| \le \sigma _1 {\lt} \sigma _2\le 2\) and \(3 {\lt} |t|\), we have that:

\[ |\zeta (\sigma _2 + it) - \zeta (\sigma _1 + it)| \le C (\log |t|)^2 (\sigma _2 - \sigma _1). \]
Proof

Use Lemma 3.4.19 and estimate trivially using Lemma 3.4.13.

Lemma 3.4.21 ZetaInvBnd
#

For any \(A{\gt}0\) sufficiently small, there is a constant \(C{\gt}0\) so that whenever \(1- A / \log ^9 |t| \le \sigma {\lt} 1+A/\log ^9 |t|\) and \(3 {\lt} |t|\), we have that:

\[ 1/|\zeta (\sigma +it)| \le C \log ^7 |t|. \]
Proof

Let \(\sigma \) be given in the prescribed range, and set \(\sigma ' := 1+ A / \log ^9 |t|\). Then

\[ |\zeta (\sigma +it)| \ge |\zeta (\sigma '+it)| - |\zeta (\sigma +it) - \zeta (\sigma '+it)| \ge C (\sigma '-1)^{3/4}(\log |t|)^{-1/4} - C (\log |t|)^2 (\sigma '-\sigma ) \]
\[ \ge C A^{3/4} (\log |t|)^{-7} - C (\log |t|)^2 \cdot \frac{2 A}{\log ^9 |t|}, \]

where we used Lemma 3.4.18 and Lemma 3.4.20. Now by making \(A\) sufficiently small (in particular, something like \(A = 1/16\) should work), we can guarantee that

\[ |\zeta (\sigma +it)| \ge \frac C2 (\log |t|)^{-7}, \]

as desired.

Annoyingly, it is not immediate from this that \(\zeta \) doesn’t vanish there! That’s because \(1/0 = 0\) in Lean. So we give a second proof of the same fact (refactor this later), with a lower bound on \(\zeta \) instead of upper bound on \(1 / \zeta \).

Lemma 3.4.22 ZetaLowerBnd
#

For any \(A{\gt}0\) sufficiently small, there is a constant \(C{\gt}0\) so that whenever \(1- A / \log ^9 |t| \le \sigma {\lt} 1\) and \(3 {\lt} |t|\), we have that:

\[ |\zeta (\sigma +it)| \ge \frac{C}{\log ^7 |t|}. \]
Proof

Follow same argument.

Now we get a zero free region.

Lemma 3.4.23 ZetaZeroFree
#

There is an \(A{\gt}0\) so that for \(1-A/\log ^9 |t| \le \sigma {\lt} 1\) and \(3 {\lt} |t|\),

\[ \zeta (\sigma +it) \ne 0. \]
Proof

Apply Lemma 3.4.22.

Lemma 3.4.24 LogDerivZetaBnd
#

There is an \(A{\gt}0\) so that for \(1-A/\log ^9 |t| \le \sigma {\lt} 1+A/\log ^9 |t|\) and \(3 {\lt} |t|\),

\[ |\frac{\zeta '}{\zeta } (\sigma +it)| \ll \log ^9 |t|. \]
Proof

Combine the bound on \(|\zeta '|\) from Lemma 3.4.13 with the bound on \(1/|\zeta |\) from Lemma 3.4.21.

Theorem 3.4.16 ZetaNoZerosOn1Line
#

The zeta function does not vanish on the 1-line.

Proof

This fact is already proved in Stoll’s work.

Then, since \(\zeta \) doesn’t vanish on the 1-line, there is a \(\sigma {\lt}1\) (depending on \(T\)), so that the box \([\sigma ,1] \times _{ℂ} [-T,T]\) is free of zeros of \(\zeta \).

Lemma 3.4.25 ZetaNoZerosInBox
#

For any \(T{\gt}0\), there is a constant \(\sigma {\lt}1\) so that

\[ \zeta (\sigma '+it) \ne 0 \]

for all \(|t| \leq T\) and \(\sigma ' \ge \sigma \).

Proof

Assume not. Then there is a sequence \(|t_n| \le T\) and \(\sigma _n \to 1\) so that \(\zeta (\sigma _n + it_n) = 0\). By compactness, there is a subsequence \(t_{n_k} \to t_0\) along which \(\zeta (\sigma _{n_k} + it_{n_k}) = 0\). If \(t_0\ne 0\), use the continuity of \(\zeta \) to get that \(\zeta (1 + it_0) = 0\); this is a contradiction. If \(t_0=0\), \(\zeta \) blows up near \(1\), so can’t be zero nearby.

We now prove that there’s an absolute constant \(\sigma _2{\lt}1\) so that \(\zeta '/\zeta \) is holomorphic on the rectangle \([\sigma _2,2] \times _{ℂ} [-3,3] \setminus \{ 1\} \).

Lemma 3.4.26 LogDerivZetaHolcSmallT
#

There is a \(\sigma _2 {\lt} 1\) so that the function

\[ \frac{\zeta '}{\zeta }(s) \]

is holomorphic on \(\{ \sigma _2 \le \Re s \le 2, |\Im s| \le 3 \} \setminus \{ 1\} \).

Proof

The derivative of \(\zeta \) is holomorphic away from \(s=1\); the denominator \(\zeta (s)\) is nonzero in this range by Lemma 3.4.25.

Lemma 3.4.27 LogDerivZetaHolcLargeT
#

There is an \(A{\gt}0\) so that for all \(T{\gt}3\), the function \( \frac{\zeta '}{\zeta }(s) \) is holomorphic on \(\{ 1-A/\log ^9 T \le \Re s \le 2, |\Im s|\le T \} \setminus \{ 1\} \).

Proof

The derivative of \(\zeta \) is holomorphic away from \(s=1\); the denominator \(\zeta (s)\) is nonzero in this range by Lemma 3.4.23.

Lemma 3.4.28 LogDerivZetaBndUnif
#

There exist \(A, C {\gt} 0\) such that

\[ \left|\frac{\zeta '}{\zeta }(\sigma + it)\right|\leq C \log ^9 |t| \]

whenever \(|t|{\gt}3\) and \(\sigma {\gt} 1 - A/\log ^9 |t|\).

Proof

For \(\sigma \) close to \(1\) use Lemma 3.4.24, otherwise estimate trivially.

3.5 Proof of Medium PNT

The approach here is completely standard. We follow the use of \(\mathcal{M}(\widetilde{1_{\epsilon }})\) as in [Kontorovich 2015].

Definition 3.5.1 ChebyshevPsi
#

The (second) Chebyshev Psi function is defined as

\[ \psi (x) := \sum _{n \le x} \Lambda (n), \]

where \(\Lambda (n)\) is the von Mangoldt function.

It has already been established that zeta doesn’t vanish on the 1 line, and has a pole at \(s=1\) of order 1. We also have the following.

Theorem 3.5.1 LogDerivativeDirichlet
#

We have that, for \(\Re (s){\gt}1\),

\[ -\frac{\zeta '(s)}{\zeta (s)} = \sum _{n=1}^\infty \frac{\Lambda (n)}{n^s}. \]
Proof

Already in Mathlib.

The main object of study is the following inverse Mellin-type transform, which will turn out to be a smoothed Chebyshev function.

Definition 3.5.2 SmoothedChebyshev
#

Fix \(\epsilon {\gt}0\), and a bumpfunction supported in \([1/2,2]\). Then we define the smoothed Chebyshev function \(\psi _{\epsilon }\) from \(\mathbb {R}_{{\gt}0}\) to \(\mathbb {C}\) by

\[ \psi _{\epsilon }(X) = \frac{1}{2\pi i}\int _{(\sigma )}\frac{-\zeta '(s)}{\zeta (s)} \mathcal{M}(\widetilde{1_{\epsilon }})(s) X^{s}ds, \]

where we’ll take \(\sigma = 1 + 1 / \log X\).

Lemma 3.5.1 SmoothedChebyshevDirichlet-aux-integrable

Fix a nonnegative, continuously differentiable function \(F\) on \(\mathbb {R}\) with support in \([1/2,2]\), and total mass one, \(\int _{(0,\infty )} F(x)/x dx = 1\). Then for any \(\epsilon {\gt}0\), and \(\sigma \in (1, 2]\), the function

\[ x \mapsto \mathcal{M}(\widetilde{1_{\epsilon }})(\sigma + ix) \]

is integrable on \(\mathbb {R}\).

Proof

By Lemma 3.3.10 the integrand is \(O(1/t^2)\) as \(t\rightarrow \infty \) and hence the function is integrable.

Lemma 3.5.2 SmoothedChebyshevDirichlet-aux-tsum-integral

Fix a nonnegative, continuously differentiable function \(F\) on \(\mathbb {R}\) with support in \([1/2,2]\), and total mass one, \(\int _{(0,\infty )} F(x)/x dx = 1\). Then for any \(\epsilon {\gt}0\) and \(\sigma \in (1,2]\), the sum and the integral can be interchanged: the integral \(\int _{(0,\infty )}\) of the sum \(\sum _{n=1}^\infty \frac{\Lambda (n)}{n^{\sigma +it}} \mathcal{M}(\widetilde{1_{\epsilon }})(\sigma +it) x^{\sigma +it}\) is equal to the sum of the integrals \(\sum _{n=1}^\infty \int _{(0,\infty )} \frac{\Lambda (n)}{n^{\sigma +it}} \mathcal{M}(\widetilde{1_{\epsilon }})(\sigma +it) x^{\sigma +it}\).

Proof

Interchange of summation and integration.

We have that

\[ \psi _{\epsilon }(X) = \sum _{n=1}^\infty \Lambda (n)\widetilde{1_{\epsilon }}(n/X). \]
Proof

We have that

\[ \psi _{\epsilon }(X) = \frac{1}{2\pi i}\int _{(2)}\sum _{n=1}^\infty \frac{\Lambda (n)}{n^s} \mathcal{M}(\widetilde{1_{\epsilon }})(s) X^{s}ds. \]

We have enough decay (thanks to quadratic decay of \(\mathcal{M}(\widetilde{1_{\epsilon }})\)) to justify the interchange of summation and integration. We then get

\[ \psi _{\epsilon }(X) = \sum _{n=1}^\infty \Lambda (n)\frac{1}{2\pi i}\int _{(2)} \mathcal{M}(\widetilde{1_{\epsilon }})(s) (n/X)^{-s} ds \]

and apply the Mellin inversion formula.

The smoothed Chebyshev function is close to the actual Chebyshev function.

Theorem 3.5.3 SmoothedChebyshevClose

We have that

\[ \psi _{\epsilon }(X) = \psi (X) + O(\epsilon X \log X). \]
Proof

Take the difference. By Lemma 3.3.6 and 3.3.5, the sums agree except when \(1-c \epsilon \leq n/X \leq 1+c \epsilon \). This is an interval of length \(\ll \epsilon X\), and the summands are bounded by \(\Lambda (n) \ll \log X\).
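Concretely (a sketch, say for \(\epsilon \ge 1/X\)): there are \(O(\epsilon X)\) integers \(n\) with \((1-c\epsilon )X \le n \le (1+c\epsilon )X\), and each such term contributes at most

\[ \Lambda (n)\left|\widetilde{1_{\epsilon }}(n/X) - 1_{(0,1]}(n/X)\right| \le \Lambda (n) \ll \log X, \]

using \(0\le \widetilde{1_{\epsilon }}\le 1\) (Lemmas 3.3.7 and 3.3.8), so the total discrepancy is \(O(\epsilon X\log X)\).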

Returning to the definition of \(\psi _{\epsilon }\), fix a large \(T\) to be chosen later, and set \(\sigma _0 = 1 + 1 / \log X\), \(\sigma _1 = 1- A/ \log ^9 T\), and \(\sigma _2{\lt}\sigma _1\) a constant. Pull contours (via rectangles!) to go from \(\sigma _0-i\infty \) up to \(\sigma _0-iT\), then over to \(\sigma _1-iT\), up to \(\sigma _1-3i\), over to \(\sigma _2-3i\), up to \(\sigma _2+3i\), back over to \(\sigma _1+3i\), up to \(\sigma _1+iT\), over to \(\sigma _0+iT\), and finally up to \(\sigma _0+i\infty \).

In the process, we will pick up the residue at \(s=1\). We will do this in several stages. Here the interval integrals are defined as follows:

Definition 3.5.3 I₁
#
\[ I_1(\nu , \epsilon , X, T) := \frac{1}{2\pi i} \int _{-\infty }^{-T} \left( \frac{-\zeta '}\zeta (\sigma _0 + t i) \right) \mathcal M(\widetilde1_\epsilon )(\sigma _0 + t i) X^{\sigma _0 + t i} \ i \ dt \]
Definition 3.5.4 I₂
#
\[ I_2(\nu , \epsilon , X, T, \sigma _1) := \frac{1}{2\pi i} \int _{\sigma _1}^{\sigma _0} \left( \frac{-\zeta '}\zeta (\sigma - i T) \right) \mathcal M(\widetilde1_\epsilon )(\sigma - i T) X^{\sigma - i T} \ d\sigma \]
Definition 3.5.5 I₃₇
#
\[ I_{37}(\nu , \epsilon , X, T, \sigma _1) := \frac{1}{2\pi i} \int _{-T}^{T} \left( \frac{-\zeta '}\zeta (\sigma _1 + t i) \right) \mathcal M(\widetilde1_\epsilon )(\sigma _1 + t i) X^{\sigma _1 + t i} \ i \ dt \]
Definition 3.5.6 I₈
#
\[ I_8(\nu , \epsilon , X, T, \sigma _1) := \frac{1}{2\pi i} \int _{\sigma _1}^{\sigma _0} \left( \frac{-\zeta '}\zeta (\sigma + T i) \right) \mathcal M(\widetilde1_\epsilon )(\sigma + T i) X^{\sigma + T i} \ d\sigma \]
Definition 3.5.7 I₉
#
\[ I_9(\nu , \epsilon , X, T) := \frac{1}{2\pi i} \int _{T}^{\infty } \left( \frac{-\zeta '}\zeta (\sigma _0 + t i) \right) \mathcal M(\widetilde1_\epsilon )(\sigma _0 + t i) X^{\sigma _0 + t i} \ i \ dt \]
Definition 3.5.8 I₃
#
\[ I_3(\nu , \epsilon , X, T, \sigma _1) := \frac{1}{2\pi i} \int _{-T}^{-3} \left( \frac{-\zeta '}\zeta (\sigma _1 + t i) \right) \mathcal M(\widetilde1_\epsilon )(\sigma _1 + t i) X^{\sigma _1 + t i} \ i \ dt \]
Definition 3.5.9 I₇
#
\[ I_7(\nu , \epsilon , X, T, \sigma _1) := \frac{1}{2\pi i} \int _{3}^{T} \left( \frac{-\zeta '}\zeta (\sigma _1 + t i) \right) \mathcal M(\widetilde1_\epsilon )(\sigma _1 + t i) X^{\sigma _1 + t i} \ i \ dt \]
Definition 3.5.10 I₄
#
\[ I_4(\nu , \epsilon , X, \sigma _1, \sigma _2) := \frac{1}{2\pi i} \int _{\sigma _2}^{\sigma _1} \left( \frac{-\zeta '}\zeta (\sigma - 3 i) \right) \mathcal M(\widetilde1_\epsilon )(\sigma - 3 i) X^{\sigma - 3 i} \ d\sigma \]
Definition 3.5.11 I₆
#
\[ I_6(\nu , \epsilon , X, \sigma _1, \sigma _2) := \frac{1}{2\pi i} \int _{\sigma _2}^{\sigma _1} \left( \frac{-\zeta '}\zeta (\sigma + 3 i) \right) \mathcal M(\widetilde1_\epsilon )(\sigma + 3 i) X^{\sigma + 3 i} \ d\sigma \]
Definition 3.5.12 I₅
#
\[ I_5(\nu , \epsilon , X, \sigma _2) := \frac{1}{2\pi i} \int _{-3}^{3} \left( \frac{-\zeta '}\zeta (\sigma _2 + t i) \right) \mathcal M(\widetilde1_\epsilon )(\sigma _2 + t i) X^{\sigma _2 + t i} \ i \ dt \]
Lemma 3.5.3 dlog-riemannZeta-bdd-on-vertical-lines
#

For \(\sigma _0 {\gt} 1\), there exists a constant \(C {\gt} 0\) such that

\[ \forall t \in \mathbb {R}, \quad \left\| \frac{\zeta '(\sigma _0 + t i)}{\zeta (\sigma _0 + t i)} \right\| \leq C. \]
Proof

Write as Dirichlet series and estimate trivially using Theorem 3.5.1.
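Concretely, for \(\sigma _0{\gt}1\) the Dirichlet series of Theorem 3.5.1 gives, uniformly in \(t\),

\[ \left\| \frac{\zeta '(\sigma _0 + t i)}{\zeta (\sigma _0 + t i)} \right\| \le \sum _{n=1}^\infty \frac{\Lambda (n)}{n^{\sigma _0}} = -\frac{\zeta '(\sigma _0)}{\zeta (\sigma _0)} =: C. \]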

Lemma 3.5.4 SmoothedChebyshevPull1-aux-integrable

The integrand

\[ \zeta '(s)/\zeta (s)\mathcal{M}(\widetilde{1_{\epsilon }})(s)X^{s} \]

is integrable on the contour \(\sigma _0 + t i\) for \(t \in \mathbb {R}\) and \(\sigma _0 {\gt} 1\).

Proof

The \(\zeta '(s)/\zeta (s)\) term is bounded, as is \(X^s\), and the smoothing function \(\mathcal{M}(\widetilde{1_{\epsilon }})(s)\) decays like \(1/|s|^2\) by Lemma 3.3.10. Actually, we already know that \(\mathcal{M}(\widetilde{1_{\epsilon }})(s)\) is integrable from Lemma 3.5.1, so we should just need to bound the rest.

Lemma 3.5.5 BddAboveOnRect
#

Let \(g : \mathbb {C}\to \mathbb {C}\) be a holomorphic function on a rectangle, then \(g\) is bounded above on the rectangle.

Proof

Use the compactness of the rectangle and the fact that holomorphic functions are continuous.

We have that

\[ \psi _{\epsilon }(X) = \mathcal{M}(\widetilde{1_{\epsilon }})(1) X^{1} + I_1 - I_2 +I_{37} + I_8 + I_9 . \]
Proof

Pull rectangle contours and pick up the residue at the pole \(s=1\).

Next pull contours to another box.

We have that

\[ I_{37} = I_3 - I_4 + I_5 + I_6 + I_7 . \]
Proof

Mimic the proof of Lemma 3.5.4.

We insert this information in \(\psi _{\epsilon }\). We add and subtract the integral over the box \([1-\delta ,2] \times _{ℂ} [-T,T]\), which we evaluate as follows

Theorem 3.5.5 ZetaBoxEval
#

For all \(\epsilon {\gt} 0\) sufficiently close to \(0\), the rectangle integral over \([1-\delta ,2] \times _{ℂ} [-T,T]\) of the integrand in \(\psi _{\epsilon }\) is

\[ \frac{X^{1}}{1}\mathcal{M}(\widetilde{1_{\epsilon }})(1) = X(1+O(\epsilon )) , \]

where the implicit constant is independent of \(X\).

Proof

Unfold the definitions and apply Lemma 3.3.11.

It remains to estimate all of the integrals.

This auxiliary lemma is useful for what follows.

Lemma 3.5.7 IBound-aux1
#

Given a natural number \(k\) and a real number \(X_0 {\gt} 0\), there exists \(C \geq 1\) so that for all \(X \geq X_0\),

\[ \log ^k X \le C \cdot X. \]
Proof

We use the fact that \(\log ^k X / X\) goes to \(0\) as \(X \to \infty \). Then we use the extreme value theorem to find a constant \(C\) that works for all \(X \geq X_0\).

Lemma 3.5.8 I1Bound

We have that

\[ \left|I_{1}(\nu , \epsilon , X, T) \right| \ll \frac{X}{\epsilon T} . \]

Same with \(I_9\).

Proof

Unfold the definitions and apply the triangle inequality.

\[ \left|I_{1}(\nu , \epsilon , X, T)\right| = \left| \frac{1}{2\pi i} \int _{-\infty }^{-T} \left( \frac{-\zeta '}\zeta (\sigma _0 + t i) \right) \mathcal M(\widetilde1_\epsilon )(\sigma _0 + t i) X^{\sigma _0 + t i} \ i \ dt \right| \]

By Lemma 3.5.3 (once fixed!!), \(\zeta '/\zeta (\sigma _0 + t i)\) is bounded by \(\zeta '/\zeta (\sigma _0)\), and Theorem 3.4.14 gives \(\ll 1/(\sigma _0-1)\) for the latter. This gives:

\[ \leq \frac{1}{2\pi } \left| \int _{-\infty }^{-T} C \log X\cdot \frac{C'}{\epsilon |\sigma _0 + t i|^2} X^{\sigma _0} \ dt \right| , \]

where we used Theorem 3.3.10. Continuing the calculation, we have

\[ \leq \log X \cdot C'' \frac{X^{\sigma _0}}{\epsilon } \int _{-\infty }^{-T} \frac{1}{t^2} \ dt \ \leq C''' \frac{X\log X}{\epsilon T} , \]

where we used that \(\sigma _0=1+1/\log X\), and \(X^{\sigma _0} = X\cdot X^{1/\log X}=e \cdot X\).

Lemma 3.5.9 I2Bound

Assuming a bound of the form of Lemma 3.4.28 we have that

\[ \left|I_{2}(\nu , \epsilon , X, T)\right| \ll \frac{X}{\epsilon T} . \]
Proof

Unfold the definitions and apply the triangle inequality.

\[ \left|I_{2}(\nu , \epsilon , X, T, \sigma _1)\right| = \left|\frac{1}{2\pi i} \int _{\sigma _1}^{\sigma _0} \left(\frac{-\zeta '}\zeta (\sigma - T i) \right) \cdot \mathcal M(\widetilde1_\epsilon )(\sigma - T i) \cdot X^{\sigma - T i} \ d\sigma \right| \]
\[ \leq \frac{1}{2\pi } \int _{\sigma _1}^{\sigma _0} C \cdot \log ^9 T \cdot \frac{C'}{\epsilon |\sigma - T i|^2} X^{\sigma _0} \ d\sigma \leq C'' \cdot \frac{X\log ^9 T}{\epsilon T^2} , \]

where we used Lemma 3.3.10, the hypothesised bound on zeta, and the fact that \(X^\sigma \le X^{\sigma _0} = X\cdot X^{1/\log X}=e \cdot X\). Since \(T{\gt}3\), we have \(\log ^9 T \leq C''' T\).

Lemma 3.5.10 I8I2

Symmetry between \(I_2\) and \(I_8\):

\[ I_8(\nu , \epsilon , X, T) = -\overline{I_2(\nu , \epsilon , X, T)} . \]
Proof

This is a direct consequence of the definitions of \(I_2\) and \(I_8\).

Lemma 3.5.11 I8Bound

We have that

\[ \left|I_{8}(\nu , \epsilon , X, T)\right| \ll \frac{X}{\epsilon T} . \]
Proof

We deduce this from the corresponding bound for \(I_2\), using the symmetry between \(I_2\) and \(I_8\).

Lemma 3.5.12 log-pow-over-xsq-integral-bounded
#

For every \(n\) there is some absolute constant \(C{\gt}0\) such that, for all \(T \ge 3\),

\[ \int _3^T \frac{(\log x)^n}{x^2}dx {\lt} C \]
Proof

Induct on \(n\) and integrate by parts.

Lemma 3.5.13 I3Bound

Assuming a bound of the form of Lemma 3.4.28 we have that

\[ \left|I_{3}(\nu , \epsilon , X, T)\right| \ll \frac{X}{\epsilon }\, X^{-\frac{A}{(\log T)^9}} . \]

Same with \(I_7\).

Proof

Unfold the definitions and apply the triangle inequality.

\[ \left|I_{3}(\nu , \epsilon , X, T, \sigma _1)\right| = \left|\frac{1}{2\pi i} \int _{-T}^{-3} \left(\frac{-\zeta '}\zeta (\sigma _1 + t i) \right) \mathcal M(\widetilde1_\epsilon )(\sigma _1 + t i) X^{\sigma _1 + t i} \ i \ dt \right| \]
\[ \leq \frac{1}{2\pi } \int _{-T}^{-3} C \cdot \log ^9 |t| \cdot \frac{C'}{\epsilon |\sigma _1 + t i|^2} X^{\sigma _1} \ dt , \]

where we used Lemma 3.3.10 and the hypothesised bound on zeta. Now we estimate \(X^{\sigma _1} = X \cdot X^{-A/ \log ^9 T}\), and the remaining integral is absolutely bounded (by Lemma 3.5.12).

Lemma 3.5.14 I4Bound
#

We have that

\[ \left|I_{4}(\nu , \epsilon , X, \sigma _1, \sigma _2)\right| \ll \frac{X}{\epsilon }\, X^{-\frac{A}{(\log T)^9}} . \]

Same with \(I_6\).

Proof

The analysis of \(I_4\) is similar to that of \(I_2\) (in Lemma 3.5.9), but even easier. Let \(C\) be the sup of \(|\zeta '/\zeta |\) on the segment from \(\sigma _2 + 3 i\) to \(1+ 3i\) (this segment is compact, and away from the pole at \(s=1\)). Apply Lemma 3.3.10 to get the bound \(1/(\epsilon |s|^2)\), which is bounded by \(C'/\epsilon \). And \(X^s\) is bounded by \(X^{\sigma _1} = X \cdot X^{-A/ \log ^9 T}\). Putting these together gives the result.

Lemma 3.5.15 I5Bound
#

We have that

\[ \left|I_{5}(\nu , \epsilon , X, \sigma _2)\right| \ll \frac{X^{\sigma _2}}{\epsilon }. \]
Proof

Here \(\zeta '/\zeta \) is absolutely bounded on the compact interval \(\sigma _2 + i [-3,3]\), and \(X^s\) is bounded by \(X^{\sigma _2}\). Using Lemma 3.3.10 gives the bound \(1/(\epsilon |s|^2)\), which is bounded by \(C'/\epsilon \). Putting these together gives the result.

3.6 MediumPNT

Theorem 3.6.1 MediumPNT
#

We have

\[ \sum _{n \leq x} \Lambda (n) = x + O(x \exp (-c(\log x)^{1/10})). \]
Proof

Evaluate the integrals.
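For instance (a sketch of one admissible choice of parameters): combining Theorem 3.5.3, Lemma 3.3.11 and the bounds on \(I_1,\dots ,I_9\) above gives, roughly,

\[ \psi (X) = X\left(1+O(\epsilon )\right) + O(\epsilon X\log X) + O\left(\frac{X\log X}{\epsilon T}\right) + O\left(\frac{X}{\epsilon }X^{-A/\log ^9 T}\right) + O\left(\frac{X^{\sigma _2}}{\epsilon }\right). \]

Taking, say, \(\log T = (\log X)^{1/10}\) and \(\epsilon = \exp \left(-\frac{A}{2}(\log X)^{1/10}\right)\) makes every error term \(O\left(X\exp (-c(\log X)^{1/10})\right)\) for a suitable constant \(c{\gt}0\).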