Riemann, a Great Mathematical Thinker

I recall that, more than fifty years ago, old colleagues in the five-discipline group of the Institute of Mathematics (geometry, number theory, topology, and so on, all sharing one office) told Yuan Meng that every day they had to "review" dozens of letters from the public, most of them manuscripts claiming to prove the Goldbach conjecture and asking for evaluation.
Of course, without exception, those submissions relied on the methods of elementary number theory, and not one of them succeeded.
To this day, Wang Xiaoming and his associates energetically promote the claim that the Goldbach conjecture is a problem of elementary number theory and that they have already solved it.
In reality, however, the Goldbach problem is connected with the Riemann hypothesis. Just how great Riemann's mathematical thinking was will be explained in a later installment.
Readers are invited to first study the attachment to this article on the Riemann zeta function.
Yuan Meng, Chen Qiqing, February 16
Attachment:
The Riemann Zeta Function
David Jekel
June 6, 2013
In 1859, Bernhard Riemann published an eight-page paper, in which he estimated "the number of prime numbers less than a given magnitude" using a certain meromorphic function on C. But Riemann did not fully explain his proofs; it took decades for mathematicians to verify his results, and to this day we have not proved some of his estimates on the roots of ξ. Even Riemann did not prove that all the zeros of ξ lie on the line Re(z) = 1/2. This conjecture is called the Riemann hypothesis and is considered by many the greatest unsolved problem in mathematics. H. M. Edwards' book Riemann's Zeta Function [1] explains the historical context of Riemann's paper, Riemann's methods and results, and the subsequent work that has been done to verify and extend Riemann's theory. The first chapter gives historical background and explains each section of Riemann's paper. The rest of the book traces later historical developments and justifies Riemann's statements. This paper will summarize the first three chapters of Edwards. My paper can serve as an introduction to Riemann's zeta function, with proofs of some of the main formulae, for advanced undergraduates familiar with the rudiments of complex analysis. I use the term "summarize" loosely; in some sections my discussion will actually include more explanation and justification, while in others I will only give the main points. The paper will focus on Riemann's definition of ζ, the functional equation, and the relationship between ζ and primes, culminating in a thorough discussion of von Mangoldt's formula.
Contents

1 Preliminaries
2 Definition of the Zeta Function
  2.1 Motivation: The Dirichlet Series
  2.2 Integral Formula
  2.3 Definition of ζ
3 The Functional Equation
  3.1 First Proof
  3.2 Second Proof
4 ξ and its Product Expansion
  4.1 The Product Expansion
  4.2 Proof by Hadamard's Theorem
5 Zeta and Primes: Euler's Product Formula
6 Riemann's Main Formula: Summary
7 Von Mangoldt's Formula
  7.1 First Evaluation of the Integral
  7.2 Second Evaluation of the Integral
  7.3 Termwise Evaluation over ρ
  7.4 Von Mangoldt's and Riemann's Formulae
1 Preliminaries
Before we get to the zeta function itself, I will state, without proof, some results which are important for the later discussion. I collect them here so subsequent proofs will have less clutter. The impatient reader may skip this section and refer back to it as needed. In order to define the zeta function, we need the gamma function, which extends the factorial function to a meromorphic function on C. Like Edwards and Riemann, I will not use the now-standard notation Γ(s), where Γ(n) = (n − 1)!; instead I will call the function Π(s), so that Π(n) = n!. I give a definition and some identities of Π, as listed in Edwards section 1.3.

Definition 1. For all s ∈ C except negative integers,
$$\Pi(s) = \lim_{N\to\infty}(N + 1)^s\prod_{n=1}^{N}\frac{n}{s + n}.$$
Theorem 2. For Re(s) > −1, $\Pi(s) = \int_0^{\infty} e^{-x}x^s\,dx$.

Theorem 3. Π satisfies
1. $\Pi(s) = \prod_{n=1}^{\infty}(1 + s/n)^{-1}(1 + 1/n)^{s}$
2. $\Pi(s) = s\,\Pi(s - 1)$
3. $\Pi(s)\,\Pi(-s)\,\sin\pi s = \pi s$
4. $\Pi(s) = 2^s\pi^{-1/2}\,\Pi(s/2)\,\Pi(s/2 - 1/2)$
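As a quick numerical sanity check (my addition, not part of Edwards' or Riemann's development), the limit in Definition 1 and the integral in Theorem 2 can be compared against the factorial via Π(s) = Γ(s + 1). The sketch below assumes only the Python standard library; the truncation parameters are arbitrary.

```python
import math

def pi_limit(s, N=100000):
    # Definition 1: Pi(s) = lim_{N->oo} (N+1)^s * prod_{n=1}^N n/(s+n),
    # accumulated in log form to avoid overflow
    log_val = s * math.log(N + 1)
    for n in range(1, N + 1):
        log_val += math.log(n) - math.log(s + n)
    return math.exp(log_val)

def pi_integral(s, upper=50.0, steps=100000):
    # Theorem 2: Pi(s) = integral_0^oo e^{-x} x^s dx, by a crude midpoint rule
    h = upper / steps
    return sum(math.exp(-x) * x**s * h for x in (h * (i + 0.5) for i in range(steps)))

for s in (0.5, 1.0, 2.5):
    print(s, pi_limit(s), pi_integral(s), math.gamma(s + 1))  # Pi(s) = Gamma(s+1) = s!
```

Both approximations agree with math.gamma(s + 1) to several digits; the limit converges only at rate O(1/N), which is why N is taken large.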
Since we will often be interchanging summation and integration, the following theorem on absolute convergence is useful. It is a consequence of the dominated convergence theorem.

Theorem 4. Suppose $f_n$ is nonnegative and integrable on compact subsets of $[0,\infty)$. If $\sum f_n$ converges and $\int_0^{\infty}\sum f_n$ converges, then $\sum\int_0^{\infty} f_n = \int_0^{\infty}\sum f_n$.

2 Definition of the Zeta Function
Here I summarize Edwards section 1.4 and give additional explanation and justification.
2.1 Motivation: The Dirichlet Series

Dirichlet defined $\zeta(s) = \sum_{n=1}^{\infty} n^{-s}$ for Re(s) > 1. Riemann wanted a definition valid for all s ∈ C which would be equivalent to Dirichlet's for Re(s) > 1. He found a new formula for the Dirichlet series as follows. For Re(s) > 1, by Euler's integral formula for Π (Theorem 2),
$$\int_0^{\infty} e^{-nx}x^{s-1}\,dx = \frac{1}{n^s}\int_0^{\infty} e^{-x}x^{s-1}\,dx = \frac{\Pi(s-1)}{n^s}.$$
Summing over n and applying the formula for a geometric series gives
$$\Pi(s-1)\sum_{n=1}^{\infty}\frac{1}{n^s} = \sum_{n=1}^{\infty}\int_0^{\infty} e^{-nx}x^{s-1}\,dx = \int_0^{\infty}\frac{e^{-x}x^{s-1}}{1 - e^{-x}}\,dx = \int_0^{\infty}\frac{x^{s-1}}{e^x - 1}\,dx.$$
We are justified in interchanging summation and integration by Theorem 4 because the integral on the right converges at both endpoints. As x → 0+, $e^x - 1$ behaves like x, so that the integral behaves like $\int_0^a x^{s-2}\,dx$, which converges for Re(s) > 1. The integral converges at the right endpoint because $e^x$ grows faster than any power of x.
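Before moving on, the identity just derived can be checked numerically. This is a sketch of mine, not part of the source; it assumes the mpmath library is available for the quadrature and for reference values of ζ.

```python
import mpmath as mp  # assumption: mpmath is installed

mp.mp.dps = 25

for s in (2, 3, 4.5):
    integral = mp.quad(lambda x: x**(s - 1) / mp.expm1(x), [0, mp.inf])
    # Pi(s-1) * zeta(s) = Gamma(s) * zeta(s); for s = 2 both sides are pi^2/6
    print(s, integral, mp.gamma(s) * mp.zeta(s))
```

For s = 2 both columns print approximately 1.6449, i.e. Γ(2)ζ(2) = π²/6.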
2.2 Integral Formula

To extend this formula to C, Riemann integrates $(-z)^s/(e^z - 1)$ over the path of integration C which "starts" at +∞, moves to the origin along the "top" of the positive real axis, circles the origin counterclockwise in a small circle, then returns to +∞ along the "bottom" of the positive real axis. That is, for small positive δ,
$$\int_C\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z} = \left(\int_{+\infty}^{\delta} + \int_{|z| = \delta} + \int_{\delta}^{+\infty}\right)\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z}.$$
Notice that the definition of $(-z)^s$ implicitly depends on the definition of $\log(-z) = \log|z| + i\arg(-z)$. In the first integral, when −z lies on the negative real axis, we take $i\arg(-z) = -\pi i$. In the second integral, the path of integration starts at z = δ, or −z = −δ, and as −z proceeds counterclockwise around the circle, $i\arg(-z)$ increases from $-\pi i$ to $\pi i$. (You can think of the imaginary part of the log function as a spiral staircase, and going counterclockwise around the origin as having brought us up one level.) In the last integral, $i\arg(-z) = \pi i$.
(Thus, the first and last integrals do not cancel as we would expect!) To state the definition quite precisely, the integral over C is
$$\int_{+\infty}^{\delta}\frac{e^{s(\log z - \pi i)}}{e^z - 1}\,\frac{dz}{z} + \int_0^{2\pi}\frac{e^{s(\log\delta - i\pi + i\theta)}}{e^{\delta e^{i\theta}} - 1}\,i\,d\theta + \int_{\delta}^{+\infty}\frac{e^{s(\log z + \pi i)}}{e^z - 1}\,\frac{dz}{z}.$$
The integrals converge by the same argument given above regarding $\int_0^{\infty} x^{s-1}/(e^x - 1)\,dx$; in fact, they converge uniformly on compact subsets of C. This definition appears to depend on δ, but actually δ does not matter (so long as we avoid multiples of 2πi). Edwards does not mention this point, but I will give a short proof for δ ∈ (0,1). Let $A_\delta$ be a curve which moves in a line from 1 to δ, then in a semicircle above the real axis from δ to −δ, then in a line from −δ to −1. Let $B_\delta$ move in a line from −1 to −δ, then in a semicircle below the real axis from −δ to δ, then in a line from δ to 1. Then, because the integrals from −δ to −1 and −1 to −δ cancel, the integral over C equals
$$\left(\int_{+\infty}^{1} + \int_{A_\delta} + \int_{B_\delta} + \int_{1}^{+\infty}\right)\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z}.$$
The first and last integrals obviously do not depend on δ. In the integral over $A_\delta$, $(-z)^s$ is defined by the branch of the log function whose slit is on the nonnegative imaginary axis, and in the integral over $B_\delta$, $(-z)^s$ is defined by the branch of the log function whose slit is on the nonpositive imaginary axis. Since each branch of the log function is analytic on its domain, the value of an integral depends only on the endpoints. Hence, the integrals over $A_\delta$ and $B_\delta$ do not depend on δ.
2.3 Definition of ζ

For Re(s) > 1, we can relate the formula for $\int_C (-z)^s/(z(e^z - 1))\,dz$ to our previous formula for the Dirichlet series by taking δ → 0+. Now
$$\lim_{\delta\to 0^+}\int_0^{2\pi}\frac{e^{s(\log\delta - i\pi + i\theta)}}{e^{\delta e^{i\theta}} - 1}\,i\,d\theta = 0$$
because
$$\left|\frac{e^{s(\log\delta - i\pi + i\theta)}}{e^{\delta e^{i\theta}} - 1}\right| = \frac{\delta^{\mathrm{Re}(s)}\,e^{\mathrm{Im}(s)(\pi - \theta)}}{|e^{\delta e^{i\theta}} - 1|} \le \frac{\delta}{e^{\delta} - 1}\,\delta^{\mathrm{Re}(s)-1}\,e^{\pi\,\mathrm{Im}(s)},$$
but $\delta/(e^{\delta} - 1) \to 1$ and $\delta^{\mathrm{Re}(s)-1} \to 0$. Hence,
$$\int_C\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z} = \lim_{\delta\to 0^+}\left(\int_{+\infty}^{\delta}\frac{e^{s(\log z - \pi i)}}{e^z - 1}\,\frac{dz}{z} + \int_{\delta}^{+\infty}\frac{e^{s(\log z + \pi i)}}{e^z - 1}\,\frac{dz}{z}\right) = \left(e^{i\pi s} - e^{-i\pi s}\right)\int_0^{\infty}\frac{z^{s-1}}{e^z - 1}\,dz.$$
But this integral is exactly $\Pi(s-1)\sum_{n=1}^{\infty} n^{-s}$. Therefore, for Re(s) > 1,
$$\sum_{n=1}^{\infty}\frac{1}{n^s} = \frac{1}{2i\sin(\pi s)\,\Pi(s-1)}\int_C\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z}.$$
By Theorem 3, $1/(2i\sin(\pi s)\,\Pi(s-1)) = \Pi(-s)/2\pi i$. Therefore, Riemann gives the following definition of the zeta function, which coincides with the Dirichlet series for Re(s) > 1:
Definition 5. $\displaystyle \zeta(s) = \frac{\Pi(-s)}{2\pi i}\int_C\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z}.$
Theorem 6. For Re(s) > 1, $\displaystyle \zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^s}.$
The integral over C converges uniformly on compact subsets of C because $e^z$ grows faster than any power of z. It follows that the integral defines an analytic function. Thus, ζ is analytic except possibly at the positive integers, where Π(−s) has poles. Because the Dirichlet series converges uniformly on Re(s) > 1 + δ for any positive δ, we see ζ is actually analytic at s = 2, 3, 4, ..., and it is not too hard to check that the integral has a zero there which cancels the pole of Π. Thus, ζ is analytic everywhere except at s = 1, where it has a simple pole.
3 The Functional Equation
Riemann derives a functional equation for ζ, which states that a certain expression is unchanged by the substitution of 1 − s for s.

Theorem 7. $\displaystyle \Pi\!\left(\frac{s}{2} - 1\right)\pi^{-s/2}\,\zeta(s) = \Pi\!\left(\frac{1-s}{2} - 1\right)\pi^{-1/2 + s/2}\,\zeta(1-s).$

He gives two proofs of this functional equation, both of which Edwards discusses (sections 1.6 and 1.7). I will spend more time on the first.
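Theorem 7 can be illustrated numerically before turning to the proofs. The sketch below is my own check, not part of the paper; it assumes mpmath (whose zeta handles complex arguments) and uses Π(s/2 − 1) = Γ(s/2).

```python
import mpmath as mp  # assumption: mpmath is installed

mp.mp.dps = 30

def side(s):
    # Pi(s/2 - 1) * pi^{-s/2} * zeta(s) = Gamma(s/2) * pi^{-s/2} * zeta(s)
    return mp.gamma(s / 2) * mp.pi**(-s / 2) * mp.zeta(s)

for s in (mp.mpf("2.5"), mp.mpf("0.3") + 14j):
    print(s, side(s), side(1 - s))   # Theorem 7: the last two columns agree
```

The test points are chosen away from the poles of Γ(s/2) and Γ((1 − s)/2).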
3.1 First Proof

Fix s < 0. Let $D_N = \{z : \delta < |z| < (2N+1)\pi\}\setminus(\delta, (2N+1)\pi)$. We consider $D_N$ split along the positive real axis. Let $C_N$ be the same contour as C above except that the starting and ending point is $(2N+1)\pi$ rather than +∞. Then
$$\int_{\partial D_N}\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z} = \int_{|z| = (2N+1)\pi}\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z} - \int_{C_N}\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z}.$$
We make similar stipulations as before regarding the definition of log(−z); that is, $i\arg(-z)$ is taken to be $-\pi i$ on the "top" of the positive real axis and $\pi i$ on the "bottom." Note that the integrand is analytic except at poles at $2\pi in$. If log z were analytic everywhere, the residue theorem would give
$$\int_{\partial D_N}\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z} = 2\pi i\sum_{z\in D_N}\operatorname{Res}\!\left(\frac{(-z)^s}{z(e^z - 1)},\, z\right).$$
In fact, we can argue that this is still the case by splitting $D_N$ into two pieces and considering an analytic branch of the log function on each piece. This argument is similar to the one made in the last section and is left to the reader. The residue at each point $2\pi in$ for integer n ≠ 0 is computed by integrating over a circle of radius r < 2π:
$$\frac{1}{2\pi i}\int_{|z - 2\pi in| = r}\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z} = -\frac{1}{2\pi i}\int_{|z - 2\pi in| = r}(-z)^{s-1}\,\frac{z - 2\pi in}{e^z - 1}\,\frac{dz}{z - 2\pi in}.$$
Since $(-z)^{s-1}$ is analytic at $z = 2\pi in$ and $(z - 2\pi in)/(e^z - 1)$ has a removable singularity (its limit is 1), we can apply Cauchy's integral formula to conclude that the residue is $-(-2\pi in)^{s-1}$. Hence,
$$\int_{C_N}\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z} = \int_{|z| = (2N+1)\pi}\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z} + 2\pi i\sum_{n=1}^{N}\left[(-2\pi in)^{s-1} + (2\pi in)^{s-1}\right].$$
As we take N → ∞, the left side approaches the integral over C. I claim the integral on the right approaches zero. Notice $1/(1 - e^z)$ is bounded on $|z| = (2N+1)\pi$. For if |Re(z)| > π/2, then $|e^z - 1| \ge 1 - e^{-\pi/2}$ by the reverse triangle inequality. If $|\mathrm{Im}(z) - 2\pi n| > \pi/2$ for all integers n, then $|e^z - 1| > 1$ because $\mathrm{Re}(e^z) < 0$. For $|z| = (2N+1)\pi$, at least one of these two conditions holds. The length of the path of integration is $2\pi(2N+1)\pi$,
and $|(-z)^s/z| \le (2N+1)^{s-1}$, so the whole integral is less than a constant times $(2N+1)^s$, which goes to zero as N → ∞ because we assumed s < 0. Therefore, taking N → ∞ gives
$$\int_C\frac{(-z)^s}{e^z - 1}\,\frac{dz}{z} = 2\pi i(2\pi)^{s-1}\left(i^{s-1} + (-i)^{s-1}\right)\sum_{n=1}^{\infty} n^{s-1}.$$
From our definition of the log function,
$$i^{s-1} + (-i)^{s-1} = \frac{1}{i}\left(i^s - (-i)^s\right) = \frac{1}{i}\left(e^{\pi is/2} - e^{-\pi is/2}\right) = 2\sin\frac{\pi s}{2}.$$
Substituting this into the previous equation and multiplying by $\Pi(-s)/2\pi i$ gives
$$\zeta(s) = \Pi(-s)(2\pi)^{s-1}\,2\sin\frac{\pi s}{2}\,\zeta(1-s),\qquad s < 0.$$
Applying various identities of Π from Theorem 3 will put this equation into the form given in the statement of the theorem. Since the equation holds for s < 0 and the functions involved are all analytic on C except for some poles at certain integers, the equation holds on all of $\mathbb{C}\setminus\mathbb{Z}$.
3.2 Second Proof
The second proof I will merely summarize. Riemann applies a change of variables in the integral for Π to obtain
$$\frac{1}{n^s}\,\pi^{-s/2}\,\Pi\!\left(\frac{s}{2} - 1\right) = \int_0^{\infty} e^{-n^2\pi x}\,x^{s/2 - 1}\,dx,\qquad \mathrm{Re}(s) > 1.$$
Summing this over n gives
$$\pi^{-s/2}\,\Pi\!\left(\frac{s}{2} - 1\right)\zeta(s) = \int_0^{\infty}\phi(x)\,x^{s/2 - 1}\,dx,$$
where $\phi(x) = \sum_{n=1}^{\infty} e^{-n^2\pi x}$. (We have interchanged summation and integration here; this is justified by applying Theorem 4 to the absolute value of the integrand and showing that $\int_0^{\infty}\phi(x)\,x^p\,dx$ converges for all real p.) It turns out φ satisfies
$$\frac{1 + 2\phi(x)}{1 + 2\phi(1/x)} = \frac{1}{\sqrt{x}},$$
as Taylor section 9.2 or Edwards section 10.6 show. Using this identity, changing variables, and integrating by parts, Riemann obtains
$$\Pi\!\left(\frac{s}{2} - 1\right)\pi^{-s/2}\,\zeta(s) = \int_1^{\infty}\phi(x)\left(x^{s/2} + x^{1/2 - s/2}\right)\frac{dx}{x} + \frac{1}{s(s-1)}.$$
Both sides can be shown to be analytic, so this relation holds not only for Re(s) > 1, but for all s. The right side is unchanged by the substitution of 1−s for s, so the theorem is proved again.
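The identity for φ quoted above can also be verified numerically; this illustration is mine (it assumes mpmath, although a plain float computation would also do, since the series converges extremely fast).

```python
import mpmath as mp  # assumption: mpmath is installed

mp.mp.dps = 30

def phi(x):
    # phi(x) = sum_{n>=1} exp(-n^2 * pi * x); 50 terms are far more than enough
    return sum(mp.exp(-mp.mpf(n)**2 * mp.pi * x) for n in range(1, 51))

for x in (mp.mpf("0.3"), mp.mpf(2)):
    print((1 + 2 * phi(x)) / (1 + 2 * phi(1 / x)), 1 / mp.sqrt(x))  # the two values agree
```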
4 ξ and its Product Expansion
As Edwards explains in sections 1.8 and 1.9, Riemann defined the function ξ by

Definition 8. $\xi(s) = \Pi\!\left(\frac{s}{2}\right)(s - 1)\,\pi^{-s/2}\,\zeta(s)$.

The functional equation derived in the last section says exactly that ξ(s) = ξ(1 − s). We know that ξ is analytic except perhaps at certain integers. But by examining both sides of the functional equation at potential singular points, we can deduce that, in fact, ξ is analytic everywhere. Euler's product formula, which we will prove later, shows that ζ has no zeroes for Re(s) > 1. This implies ξ has no zeroes there either, and ξ(s) = ξ(1 − s) implies all the roots of ξ lie in {s : 0 ≤ Re(s) ≤ 1}. We can also prove that ξ has no zeroes on the real line.
4.1 The Product Expansion
Riemann wanted to write ξ in the form
$$\xi(s) = \xi(0)\prod_{\rho}\left(1 - \frac{s}{\rho}\right),$$
where ρ ranges over the roots of ξ. This type of product expansion is guaranteed for polynomials by the fundamental theorem of algebra, but not for functions with possibly infinitely many roots. The convergence of the product depends on the ordering of the terms and the frequency of the roots. By the argument principle, the number of roots of ξ in {s : 0 ≤ Re(s) ≤ 1, 0 ≤ Im(s) ≤ T} is equal to the integral of $\xi'/2\pi i\,\xi$ around the boundary of the region (assuming there are no roots on the boundary). Riemann estimates this integral as
$$\frac{T}{2\pi}\left(\log\frac{T}{2\pi} - 1\right)$$
with an error on the order of $T^{-1}$. This estimate on the frequency of the roots of ξ would guarantee that the product expansion of ξ converges, provided each root ρ is paired with its symmetric "twin" 1 − ρ. But no one else could prove this estimate until 1905.
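Riemann's estimate for the number of roots up to height T can be compared with an actual count. The sketch below is mine, not Edwards'; it assumes mpmath, whose zetazero(n) returns the n-th nontrivial zero of ζ on the critical line.

```python
import mpmath as mp  # assumption: mpmath is installed; mp.zetazero(n) gives the n-th zero

mp.mp.dps = 15

T = 100
count, n = 0, 1
while mp.im(mp.zetazero(n)) <= T:   # count roots with 0 < Im(rho) <= T
    count += 1
    n += 1

estimate = T / (2 * mp.pi) * (mp.log(T / (2 * mp.pi)) - 1)
print(count, estimate)   # 29 computed zeros up to height 100, versus an estimate of about 28.1
```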
4.2 Proof by Hadamard’s Theorem
Hadamard proved that the product representation of ξ was valid in 1893. His methods gave rise to a more general result, the Hadamard factorization theorem. Edwards devotes chapter 2 to Hadamard’s proof of the product formula for ξ. But for the sake of space I will simply quote Hadamard’s theorem to justify that formula. The reader may consult Taylor chapter 8 for a succinct development of the general theorem, which I merely state here, as given in [2]:
Theorem 9 (Hadamard's Factorization Theorem). Suppose f is entire. Let p be an integer and suppose there exists a positive t < p + 1 such that $|f(z)| \le \exp(|z|^t)$ for z sufficiently large. Then f can be factored as
$$f(z) = z^m e^{h(z)}\prod_{k=1}^{\infty} E_p\!\left(\frac{z}{z_k}\right),$$
where $\{z_k\}$ is a list of the roots of f counting multiplicity, m is the order of the zero of f at 0, h(z) is a polynomial of degree at most p, and $E_p(w) = (1-w)\exp\!\left(\sum_{n=1}^{p} w^n/n\right)$.

To apply the theorem, we must estimate ξ. To do this we need Riemann's alternate representation of ξ based on the result of the previous section,
$$\xi(s) = \frac{1}{2} - \frac{s(1-s)}{2}\int_1^{\infty}\phi(x)\left(x^{s/2} + x^{1/2 - s/2}\right)\frac{dx}{x},$$
which Edwards derives in section 1.8. Using more integration by parts and changes of variables, we can show that
$$\xi(s) = 4\int_1^{\infty}\frac{d}{dx}\!\left[x^{3/2}\phi'(x)\right]x^{-1/4}\cosh\!\left(\tfrac{1}{2}\left(s - \tfrac{1}{2}\right)\log x\right)dx.$$
Substituting the power series $\cosh w = \sum_{n=0}^{\infty} w^{2n}/(2n)!$ will show that the power series for ξ(s) about s = 1/2 has coefficients
$$a_{2n} = \frac{4}{(2n)!}\int_1^{\infty}\frac{d}{dx}\!\left[x^{3/2}\phi'(x)\right]x^{-1/4}\left(\tfrac{1}{2}\log x\right)^{2n}dx.$$
Direct evaluation of $(d/dx)\!\left[x^{3/2}\phi'(x)\right]$ will show that the integrand is positive. Hence, ξ has a power series about 1/2 with real, positive coefficients. This is all we need to derive a "simple estimate" of ξ, as Edwards does in section 2.3:
Theorem 10. For sufficiently large R, $|\xi(\tfrac{1}{2} + w)| \le R^R$ for |w| < R.

Proof. Since the power series of ξ around 1/2 has positive coefficients, $|\xi(\tfrac{1}{2} + w)|$ achieves its maximum value for |w| ≤ R at w = R itself. Choose N such that $\tfrac{1}{2} + R \le 2N \le \tfrac{1}{2} + R + 2$. Since ξ is entire, the power series expansion is valid for all s, and the fact that it has positive coefficients shows ξ is increasing on $[\tfrac{1}{2},\infty)$. Thus,
$$\xi\!\left(\tfrac{1}{2} + R\right) \le \xi(2N) = N!\,\pi^{-N}(2N - 1)\,\zeta(2N).$$
Because $\zeta(s) = \sum_{n=1}^{\infty} n^{-s}$ for Re(s) > 1, we know ζ is decreasing on [2,∞). Hence, ζ(2N) is bounded by a constant, and so is $\pi^{-N}$. Therefore,
$$\xi(2N) \le kN!(2N - 1) \le 2kN^{N+1} \le 2k\left(\tfrac{1}{2}R + \tfrac{5}{4}\right)^{R/2 + 7/4} \le R^R$$
for sufficiently large R.

This implies as a corollary that $|\xi(\tfrac{1}{2} + w)| \le \exp(|w|^{3/2})$ for sufficiently large |w|. This is because $R^R = e^{R\log R} \le e^{R^{3/2}}$ for R sufficiently large. I will now follow Taylor's proof [2]. We have shown that $\xi(\tfrac{1}{2} + w)$ satisfies the hypotheses of Hadamard's theorem with p = 1, and therefore it has a product expansion of the form
$$\xi\!\left(\tfrac{1}{2} + w\right) = e^{h(w)}\prod_{\sigma} E_1\!\left(\frac{w}{\sigma}\right) = e^{h(w)}\prod_{\sigma}\left(1 - \frac{w}{\sigma}\right)e^{w/\sigma},$$
where σ ranges over the roots of ξ(1/2 + w) and where h is a polynomial of degree at most 1. (Since ξ(1/2) ≠ 0, we do not have any $w^m$ term.) If we group the roots σ and −σ together, the exponential factors cancel, so we have
$$\xi\!\left(\tfrac{1}{2} + w\right) = e^{h(w)}\prod_{\sigma}\left(1 - \frac{w}{\sigma}\right).$$
Since the left-hand side and the product are both even functions, $e^{h(w)}$ must be even, which implies it is constant. We undo the change of variables by letting 1/2 + w = s and 1/2 + σ = ρ. Thus, we have
$$\xi(s) = c\prod_{\rho}\left(1 - \frac{s - 1/2}{\rho - 1/2}\right).$$
Divide by ξ(0) (as in Edwards section 2.8):
$$\frac{\xi(s)}{\xi(0)} = \prod_{\rho}\left(1 - \frac{s - 1/2}{\rho - 1/2}\right)\left(1 + \frac{1/2}{\rho - 1/2}\right)^{-1}.$$
Each term in the product is a linear function of s which is 0 when s = ρ and 1 when s = 0. Hence, each term is 1 − s/ρ and we have

Theorem 11. ξ has a product expansion
$$\xi(s) = \xi(0)\prod_{\rho}\left(1 - \frac{s}{\rho}\right),$$
where ρ ranges over the roots of ξ, and ρ and 1 − ρ are paired.
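Theorem 11 can be illustrated numerically, though only roughly, since the product converges slowly. The following sketch is my own (it assumes mpmath and uses the first few hundred zeros from zetazero; the value ξ(0) = 1/2 follows from Definition 8 with ζ(0) = −1/2).

```python
import mpmath as mp  # assumption: mpmath is installed

mp.mp.dps = 20

def xi(s):
    # Definition 8: xi(s) = Pi(s/2)(s - 1) pi^{-s/2} zeta(s), and Pi(s/2) = Gamma(s/2 + 1)
    return mp.gamma(s / 2 + 1) * (s - 1) * mp.pi**(-s / 2) * mp.zeta(s)

s = mp.mpf(2)
partial = xi(0)   # xi(0) = 1/2
for n in range(1, 201):
    rho = mp.zetazero(n)
    # pair each root rho with its reflection 1 - rho, as in Theorem 11
    partial *= (1 - s / rho) * (1 - s / (1 - rho))

print(xi(s), partial)   # xi(2) = pi/6; the truncated product approaches it slowly
```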
5 Zeta and Primes: Euler’s Product Formula
The zeta function is connected to prime numbers by

Theorem 12. For Re(s) > 1,
$$\zeta(s) = \prod_{p}\left(1 - p^{-s}\right)^{-1},$$
where p ranges over the prime numbers.

This identity is derived from the Dirichlet series in the following way. Edwards does not give the proof, but for the sake of completeness I include a proof adapted from Taylor [2].
Proof. Let $p_n$ be the nth prime number. Define sets $S_N$ and $T_N$ inductively as follows. Let $S_0$ be the positive integers. For N ≥ 0, let $T_{N+1}$ be the set of numbers in $S_N$ divisible by $p_{N+1}$, and let $S_{N+1} = S_N\setminus T_{N+1}$. Notice that $S_N$ is the set of positive integers not divisible by the first N primes, and $T_{N+1}$ is exactly $\{p_{N+1}m : m\in S_N\}$. I will show by induction that for Re(s) > 1,
$$\zeta(s)\prod_{n=1}^{N}\left(1 - \frac{1}{p_n^{s}}\right) = \sum_{n\in S_N}\frac{1}{n^s}.$$
Since $S_0 = \mathbb{Z}^+$, we have $\zeta(s) = \sum_{n\in S_0} n^{-s}$. Now suppose the statement holds for N. Multiply by $(1 - p_{N+1}^{-s})$:
$$\zeta(s)\prod_{n=1}^{N+1}\left(1 - \frac{1}{p_n^{s}}\right) = \left(1 - \frac{1}{p_{N+1}^{s}}\right)\sum_{n\in S_N}\frac{1}{n^s} = \sum_{n\in S_N}\frac{1}{n^s} - \sum_{n\in S_N}\frac{1}{(p_{N+1}n)^s} = \sum_{n\in S_N}\frac{1}{n^s} - \sum_{n\in T_{N+1}}\frac{1}{n^s} = \sum_{n\in S_{N+1}}\frac{1}{n^s},$$
where the last equality holds because $S_N\setminus T_{N+1} = S_{N+1}$ and $T_{N+1}\subset S_N$. This completes the induction. The proof will be complete if we show that $\sum_{n\in S_N} n^{-s}$ approaches 1 as N → ∞. Notice that if $1 < n \le N < p_N$, then $n\notin S_N$. This is because any prime factor of n has to be less than or equal to $p_N$, but all multiples of all primes $p_n \le p_N$ have been removed from $S_N$. Hence,
$$\left|\sum_{n\in S_N}\frac{1}{n^s} - 1\right| = \left|\sum_{\substack{n\in S_N \\ n\ge N}}\frac{1}{n^s}\right| \le \sum_{\substack{n\in S_N \\ n\ge N}}\left|\frac{1}{n^s}\right| \le \sum_{n=N}^{\infty}\left|\frac{1}{n^s}\right|.$$
We are justified in applying the triangle inequality here because the rightmost sum converges absolutely. Indeed, the sum on the right is smaller than any positive ε for N sufficiently large. Hence, the limit of $\sum_{n\in S_N} n^{-s}$ is 1.

Taking the log of this formula gives $\log\zeta(s) = -\sum_p\log(1 - p^{-s})$ for Re(s) > 1. Since $\log(1 - x) = -\sum_{n=1}^{\infty} x^n/n$, we have
$$\log\zeta(s) = \sum_p\sum_{n=1}^{\infty}\frac{1}{n}\,p^{-sn}.$$
There is a problem in that log z is only defined up to multiples of 2πi. However, it is easy to make the statement rigorous. We simply say that the right-hand side provides one logarithm for ζ. We can also argue that it is analytic.
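Theorem 12 lends itself to a direct numerical check; the sketch below is mine (the prime list is built by trial division, and mpmath is assumed only for a reference value of ζ).

```python
import mpmath as mp  # assumption: mpmath is installed

mp.mp.dps = 25
s = 2

primes = []
for n in range(2, 10000):
    if all(n % p for p in primes if p * p <= n):   # simple trial division
        primes.append(n)

product = mp.mpf(1)
for p in primes:
    product /= (1 - mp.mpf(p)**(-s))   # Euler factor (1 - p^{-s})^{-1}

print(product, mp.zeta(s))   # the finite product over p < 10000 matches zeta(2) to several digits
```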
6 Riemann’s Main Formula: Summary
Riemann uses the formula for log ζ(s) to evaluate J(x), a function which measures primes. Since I am going to prove von Mangoldt's similar formula later, I will only sketch Riemann's methods here. J(x) is a step function which starts at J(0) = 0 and jumps up at positive integers. The jump is 1 at primes, 1/2 at squares of primes, 1/3 at cubes of primes, etc. J can be written as
$$J(x) = \sum_p\sum_{n=1}^{\infty}\frac{1}{n}\,u(x - p^n),$$
where u(x) = 0 for x < 0, u(0) = 1/2, and u(x) = 1 for x > 0. From J(x) we can obtain a formula for π(x), the number of primes less than x.
Since J(x) ≤ x and J(x) = 0 for x < 2, we know that for Re(s) > 1, $\int_0^{\infty} J(x)\,x^{-s-1}\,dx$ converges. Hence, by Theorem 4, we can integrate term by term and
$$\int_0^{\infty} J(x)\,x^{-s-1}\,dx = \sum_p\sum_{n=1}^{\infty}\frac{1}{n}\int_{p^n}^{\infty} x^{-s-1}\,dx = \frac{1}{s}\sum_p\sum_{n=1}^{\infty}\frac{1}{n}\,p^{-sn},$$
which is $s^{-1}\log\zeta(s)$. Through a change of variables and Fourier inversion, Riemann recovers J(x) from the integral formula:
$$J(x) = \frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}(\log\zeta(s))\,x^s\,\frac{ds}{s},$$
where a > 1 and the integral is defined as the Cauchy principal value
$$\lim_{T\to+\infty}\int_{a - iT}^{a + iT}(\log\zeta(s))\,x^s\,\frac{ds}{s}.$$
He then integrates by parts and shows the first term goes to zero to obtain:
$$J(x) = -\frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}\frac{d}{ds}\!\left[\frac{\log\zeta(s)}{s}\right]x^s\,ds.$$
Applying $\xi(s) = \Pi(s/2)(s-1)\pi^{-s/2}\zeta(s)$ together with the product formula for ξ gives:
$$\log\zeta(s) = \log\xi(s) + \frac{s}{2}\log\pi - \log(s-1) - \log\Pi\!\left(\frac{s}{2}\right) = \log\xi(0) + \sum_{\rho}\log\!\left(1 - \frac{s}{\rho}\right) + \frac{s}{2}\log\pi - \log(s-1) - \log\Pi\!\left(\frac{s}{2}\right).$$
He substitutes this into the integral for J(x), then evaluates the integral term by term as
$$J(x) = \mathrm{Li}(x) - \sum_{\mathrm{Im}\,\rho > 0}\left[\mathrm{Li}(x^{\rho}) + \mathrm{Li}(x^{1-\rho})\right] + \int_x^{\infty}\frac{dt}{t(t^2 - 1)\log t} + \log\xi(0),$$
where Li(x) is the Cauchy principal value of $\int_0^x \frac{dt}{\log t}$. The evaluation requires too much work to explain completely here (Edwards sections 1.13-1.16), but the term-by-term integration itself is even more difficult to justify, and Riemann's main formula was not proved until von Mangoldt's work in 1905.
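To get a concrete feel for Riemann's formula without reproducing its full evaluation, one can compare J(x), computed directly from the primes, with its dominant term Li(x). This comparison is my own illustration (assuming mpmath, whose li gives the principal-value logarithmic integral); the remaining terms of Riemann's formula account for the small discrepancy.

```python
import mpmath as mp  # assumption: mpmath is installed; mp.li(x) is the logarithmic integral

def primes_up_to(n):
    flags = [True] * (n + 1)
    flags[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if flags[p]:
            flags[p * p::p] = [False] * len(flags[p * p::p])
    return [i for i, f in enumerate(flags) if f]

def J(x):
    # J(x) = sum_p sum_n (1/n) u(x - p^n): add 1/n for every prime power p^n <= x
    total = mp.mpf(0)
    for p in primes_up_to(int(x)):
        pn, n = p, 1
        while pn <= x:
            total += mp.mpf(1) / n
            pn, n = pn * p, n + 1
    return total

x = 1000
print(J(x), mp.li(x))   # J(1000) is about 176.7, Li(1000) about 177.6
```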
7 Von Mangoldt’s Formula
Von Mangoldt’s formula is essentially the same as Riemann’s, except that instead of considering logζ(s), he considers its derivative ζ0(s)/ζ(s). By differentiating the formula for logζ(s) as a sum over primes, we have
ζ0(s) ζ(s)
= −X p
∞ X n=1
p−ns logp, Re(s) > 1.
Instead of J(x), we use a different step function ψ(x) defined by ψ(x) =X p ∞ X n=1 (logp)u(x−pn). Van Mangoldt shows that ψ(x) = x−X ρ xρ ρ + ∞ X n=1 x−2n 2n − ζ0(0) ζ(0) , by showing that both sides are equal to

1 2πiZa+i∞ a−i∞
ζ0(s) ζ(s)
xsds s
.
(In the sum over ρ, we pair ρ with 1−ρ and sum in order of increasing Im(ρ). In the integral from a−i∞ to a + i∞ we take the Cauchy principal value.) His proof, like Riemann’s, depends on several tricky termwise integrations.
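Before the proof, the formula can be illustrated numerically: compute ψ(x) directly from the prime powers and compare it with the right-hand side, truncating the sum over ρ at the first hundred zeros. This is my own sketch (it assumes mpmath for the zeros, and it uses ζ′(0)/ζ(0) = log 2π, which is quoted in section 7.4 below, together with the closed form of the sum over n noted in section 7.2).

```python
import mpmath as mp  # assumption: mpmath is installed

mp.mp.dps = 20

def psi(x):
    # psi(x) = sum of log p over all prime powers p^n <= x
    total = mp.mpf(0)
    for p in range(2, int(x) + 1):
        if all(p % q for q in range(2, int(p**0.5) + 1)):   # p is prime
            pn = p
            while pn <= x:
                total += mp.log(p)
                pn *= p
    return total

def psi_explicit(x, num_zeros=100):
    x = mp.mpf(x)
    total = x - mp.log(2 * mp.pi)        # -zeta'(0)/zeta(0) = -log(2 pi)
    total -= mp.log(1 - x**-2) / 2       # sum_n x^{-2n}/(2n) = -(1/2) log(1 - x^{-2})
    for n in range(1, num_zeros + 1):
        rho = mp.zetazero(n)
        total -= 2 * mp.re(x**rho / rho)  # rho and its reflection 1 - rho give conjugate terms
    return total

x = 100.5   # chosen away from the prime powers, where psi jumps
print(psi(x), psi_explicit(x))   # agreement is limited by truncating the sum over the roots
```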
7.1 First Evaluation of the Integral
First, we show that the integral is equal to ψ(x). We let Λ(m) be the size of the jump of ψ at m; it is log p if m = p^n and zero otherwise. We then rewrite the series for ζ′/ζ and ψ as
$$\frac{\zeta'(s)}{\zeta(s)} = -\sum_{m=1}^{\infty}\Lambda(m)\,m^{-s},\qquad \psi(x) = \sum_{m=1}^{\infty}\Lambda(m)\,u(x - m).$$
The rearrangement is justified because the series converge absolutely. We then have
$$-\frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}\frac{\zeta'(s)}{\zeta(s)}\,x^s\,\frac{ds}{s} = \frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}\sum_{m=1}^{\infty}\Lambda(m)\,\frac{x^s}{m^s}\,\frac{ds}{s}.$$
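As a check on the Dirichlet series for ζ′/ζ used here (my own sketch, assuming mpmath; the von Mangoldt function Λ is computed by trial division):

```python
import mpmath as mp  # assumption: mpmath is installed

mp.mp.dps = 25
s = 2

def Lambda(m):
    # von Mangoldt's function: log p if m = p^n for a prime p and some n >= 1, else 0
    for p in range(2, m + 1):
        if m % p == 0:            # p is the smallest, hence prime, divisor of m
            while m % p == 0:
                m //= p
            return mp.log(p) if m == 1 else mp.mpf(0)
    return mp.mpf(0)

series = sum(Lambda(m) / mp.mpf(m)**s for m in range(2, 5000))
exact = -mp.diff(mp.zeta, s) / mp.zeta(s)   # -zeta'(s)/zeta(s)
print(series, exact)   # the truncated series agrees with the exact value to a few digits
```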
We want to integrate term by term. This is justifiable on a finite interval because the series converges uniformly for Re(s) ≥ a > 1 (since Λ(m) ≤ logm and x is constant). Thus, we have
$$\frac{1}{2\pi i}\int_{a - ih}^{a + ih}\sum_{m=1}^{\infty}\Lambda(m)\,\frac{x^s}{m^s}\,\frac{ds}{s} = \frac{1}{2\pi i}\sum_{m=1}^{\infty}\Lambda(m)\int_{a - ih}^{a + ih}\frac{x^s}{m^s}\,\frac{ds}{s}.$$
To take the limit as h → ∞ and evaluate the integral, we will need the following lemma, which can be proved by straightforward estimates and integration by parts.
Lemma 13. For t > 0,
$$\lim_{h\to+\infty}\frac{1}{2\pi i}\int_{a - ih}^{a + ih} t^s\,\frac{ds}{s} = u(t-1) = \begin{cases}0, & t < 1,\\ \tfrac{1}{2}, & t = 1,\\ 1, & t > 1.\end{cases}$$
For 0 < t < 1 and for 1 < t, the error satisfies
$$\left|\frac{1}{2\pi i}\int_{a - ih}^{a + ih} t^s\,\frac{ds}{s} - u(t-1)\right| \le \frac{t^a}{\pi h\,|\log t|}.$$
Letting x/m be the t in the lemma and noticing that u(x/m − 1) = u(x − m), we have
$$\Lambda(m)\left|\frac{1}{2\pi i}\int_{a - ih}^{a + ih}\frac{x^s}{m^s}\,\frac{ds}{s} - u(x - m)\right| \le \frac{(x/m)^a\,\log m}{\pi h\,|\log x - \log m|}.$$
The right-hand side is summable over m because $(\log m)/|\log x - \log m|$ approaches 1 as m → ∞ and $(x/m)^a$ is summable. Thus, for fixed x,
$$\sum_{m=1}^{\infty}\Lambda(m)\left|\frac{1}{2\pi i}\int_{a - ih}^{a + ih}\frac{x^s}{m^s}\,\frac{ds}{s} - u(x - m)\right| \le \frac{K}{h}$$
for some constant K. Hence, the limit of the sum as h → ∞ is zero. This argument assumes x ≠ m; but if x is an integer, we can simply throw away the terms where m ≤ x, since finitely many terms do not affect convergence. This implies
$$\lim_{h\to\infty}\frac{1}{2\pi i}\sum_{m=1}^{\infty}\Lambda(m)\int_{a - ih}^{a + ih}\frac{x^s}{m^s}\,\frac{ds}{s} = \sum_{m=1}^{\infty}\Lambda(m)\,u(x - m),$$
or
$$-\frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}\frac{\zeta'(s)}{\zeta(s)}\,x^s\,\frac{ds}{s} = \psi(x).$$
This completes the first evaluation of the integral.
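Lemma 13 is easy to probe numerically for a finite h; the sketch below is mine (assuming mpmath for the quadrature) and simply parametrizes the vertical segment.

```python
import mpmath as mp  # assumption: mpmath is installed

mp.mp.dps = 20
a, h = mp.mpf(2), mp.mpf(50)

def truncated_integral(t):
    # (1/(2 pi i)) * int_{a-ih}^{a+ih} t^s/s ds with s = a + iy, so ds = i dy and the
    # factors of i cancel, leaving (1/(2 pi)) * int_{-h}^{h} t^(a+iy)/(a+iy) dy
    f = lambda y: t**(a + mp.mpc(0, 1) * y) / (a + mp.mpc(0, 1) * y)
    return mp.quad(f, mp.linspace(-h, h, 81)) / (2 * mp.pi)   # subdivide to tame the oscillation

for t in (mp.mpf("0.5"), mp.mpf(2)):
    # Lemma 13: as h -> oo the value tends to u(t - 1), with error at most t^a/(pi*h*|log t|)
    print(t, truncated_integral(t))
```

With h = 50 the printed values are already close to 0 and 1 respectively, within the error bound of the lemma.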
7.2 Second Evaluation of the Integral

For the second evaluation, we use a different formula for ζ′/ζ. Differentiate
$$\log\zeta(s) = \log\xi(0) + \sum_{\rho}\log\!\left(1 - \frac{s}{\rho}\right) + \frac{s}{2}\log\pi - \log(s-1) - \log\Pi\!\left(\frac{s}{2}\right)$$
to obtain
$$-\frac{\zeta'(s)}{\zeta(s)} = \sum_{\rho}\frac{1}{\rho - s} - \frac{1}{2}\log\pi + \frac{1}{s - 1} + \frac{d}{ds}\log\Pi\!\left(\frac{s}{2}\right).$$
In order to justify termwise differentiation, we use the fact proved by Hadamard (and which we will not prove here!) that $\sum_{\rho} 1/\rho$, with the roots ρ and 1 − ρ paired, converges absolutely. We order the terms by increasing |Im(ρ)|. Since $\lim_{|\mathrm{Im}(\rho)|\to\infty}\rho/(\rho - s) = 1$, we have
$$\left|\frac{1}{\rho - s}\right| < \left|\frac{k}{\rho}\right|$$
for any fixed s, for Im(ρ) sufficiently large. In fact, on the set {s : Re(s) > 1 + δ, −a < Im(s) < a}, we can bound |1/(ρ − s)| uniformly by $|1/(\rho - s_0)|$, where $s_0 = 1 + \delta + ai$. This shows that the series converges uniformly on compact sets, and term-by-term differentiation is justified. In the preceding formula, substitute the product formula $\Pi(s) = \prod_{n=1}^{\infty}(1 + s/n)^{-1}(1 + 1/n)^{s}$ from Theorem 3:
$$\frac{d}{ds}\log\Pi\!\left(\frac{s}{2}\right) = \frac{d}{ds}\sum_{n=1}^{\infty}\left[\frac{s}{2}\log\!\left(1 + \frac{1}{n}\right) - \log\!\left(1 + \frac{s}{2n}\right)\right] = \sum_{n=1}^{\infty}\left[\frac{1}{2}\log\!\left(1 + \frac{1}{n}\right) - \frac{1}{2n + s}\right].$$
This termwise differentiation is justifiable because the differentiated series converges uniformly on compact subsets of C with Re(s) > 1. The reader may verify this: use the power series for log(1 + t) and show that each term of the differentiated series is less than a constant times $1/n^2$. Using these formulae, we have
$$-\frac{\zeta'(s)}{\zeta(s)} + \frac{\zeta'(0)}{\zeta(0)} = \sum_{\rho}\frac{1}{\rho - s} - \sum_{\rho}\frac{1}{\rho} + \frac{1}{s - 1} + 1 + \sum_{n=1}^{\infty}\left(\frac{1}{2n} - \frac{1}{2n + s}\right),$$
$$-\frac{\zeta'(s)}{\zeta(s)} = \frac{s}{s - 1} - \sum_{\rho}\frac{s}{\rho(s - \rho)} + \sum_{n=1}^{\infty}\frac{s}{2n(s + 2n)} - \frac{\zeta'(0)}{\zeta(0)}.$$
This is the formula for ζ′/ζ we intend to integrate term by term. To do this, we need another lemma. The reader may try deducing this lemma from the previous one using a change of variables and Cauchy's theorem.

Lemma 14. For a > 1 and x > 1,
$$\frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}\frac{x^s}{s - \beta}\,ds = x^{\beta}.$$
The lemma allows us immediately to integrate the first and last terms:
$$\frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}\frac{s}{s - 1}\,x^s\,\frac{ds}{s} = x,\qquad -\frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}\frac{\zeta'(0)}{\zeta(0)}\,x^s\,\frac{ds}{s} = -\frac{\zeta'(0)}{\zeta(0)}.$$
The lemma also tells us what the value of other integrals will be, if we can justify term-by-term integration. First, consider
$$\frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}\left(\sum_{n=1}^{\infty}\frac{s}{2n(s + 2n)}\right)x^s\,\frac{ds}{s}.$$
Because the series converges uniformly on compact subsets of {Re(s) > 1}, we can integrate term by term over a finite interval. We only have to worry about taking the limit
$$\lim_{h\to\infty}\sum_{n=1}^{\infty}\frac{1}{2\pi i}\,\frac{1}{2n}\int_{a - ih}^{a + ih}\frac{x^s}{s + 2n}\,ds.$$
To justify the termwise limit, we begin by showing that the sum of the limits converges. By Lemma 14,
$$\frac{1}{2\pi i}\,\frac{1}{2n}\int_{a - i\infty}^{a + i\infty}\frac{x^s}{s + 2n}\,ds = \frac{x^{-2n}}{2n}.$$
This is clearly summable over n (in fact, it sums to $-\tfrac{1}{2}\log(1 - x^{-2})$). We now show that the sum before we take the limit (that is, the sum of the partial integrals) converges, using the following estimate.

Lemma 15. For t > 1, a > 0, and d > c ≥ 0,
$$\left|\frac{1}{2\pi i}\int_{a + ci}^{a + di}\frac{t^s}{s}\,ds\right| \le \frac{Kt^a}{(a + c)\log t}.$$
By change of variables,
$$\int_{a - ih}^{a + ih}\frac{x^s}{s + 2n}\,ds = x^{-2n}\int_{a + 2n - ih}^{a + 2n + ih}\frac{x^s}{s}\,ds = 2x^{-2n}\int_{a + 2n}^{a + 2n - ih}\frac{x^s}{s}\,ds.$$
And by Lemma 15,
$$\left|2x^{-2n}\,\frac{1}{2\pi i}\int_{a + 2n}^{a + 2n - ih}\frac{x^s}{s}\,ds\right| \le \frac{2Kx^{-2n}}{(a + 2n)\log x} \le \frac{K'}{n^2}.$$
Hence, the sum of the partial integrals is summable. This implies that evaluating the limit termwise is valid. This is because for any ε > 0, we can choose N large enough that the sum of the first N infinite integrals is within ε/3 of the complete sum. By choosing N larger if necessary, we can make the sum of the first N finite integrals within ε/3 of its limit, uniformly in h. Then by choosing h large enough, we can make the first N integrals be within ε/3N of their limits. Thus, the two partial sums are within ε/3 of each other, and so the infinite sum of the infinite integrals is within ε of the infinite sum of the finite integrals. All that remains is to evaluate the integral of $\sum_{\rho} s/(\rho(s - \rho))$.

7.3 Termwise Evaluation over ρ
We will need the following lemma on the density of the roots of ξ, given in Edwards section 3.4. The proof of this lemma is highly nontrivial; it is ultimately based on Hadamard's results.

Lemma 16. For T sufficiently large, ξ has fewer than 2 log T roots in T ≤ Im(ρ) ≤ T + 1.

We want to evaluate
$$\frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}\left(\sum_{\rho}\frac{s}{\rho(s - \rho)}\right)x^s\,\frac{ds}{s}.$$
We first note that we can exchange summation and integration for a finite integral:
$$\lim_{h\to\infty}\sum_{\rho}\frac{1}{2\pi i}\int_{a - ih}^{a + ih}\frac{x^s}{\rho(s - \rho)}\,ds.$$
Uniform convergence of the series is obtained from Hadamard's result that $\sum_{\rho}|\rho - \tfrac{1}{2}|^{-2}$ converges. After pairing the terms for ρ and 1 − ρ, we can show that each term of the series is less than a constant times $|\rho - \tfrac{1}{2}|^{-2}$. The proof is merely a manipulation of fractions and is left to the reader. We already know the limit exists because we have shown that all the other terms in von Mangoldt's formula have limits as h → ∞. Von Mangoldt considers the diagonal limit
$$\lim_{h\to\infty}\sum_{|\mathrm{Im}(\rho)|\le h}\frac{x^{\rho}}{\rho}\cdot\frac{1}{2\pi i}\int_{a - ih}^{a + ih}\frac{x^{s-\rho}}{s - \rho}\,ds.$$
He shows that
$$\sum_{\rho}\frac{x^{\rho}}{\rho}\cdot\frac{1}{2\pi i}\int_{a - ih}^{a + ih}\frac{x^{s-\rho}}{s - \rho}\,ds \;-\; \sum_{|\mathrm{Im}(\rho)|\le h}\frac{x^{\rho}}{\rho}\cdot\frac{1}{2\pi i}\int_{a - ih}^{a + ih}\frac{x^{s-\rho}}{s - \rho}\,ds$$
approaches zero, so that the diagonal limit exists and is equal to the original limit. If ρ is written as β + iγ, then we can estimate the sum over the roots with positive γ as
$$\sum_{\gamma > h}\left|\frac{x^{\rho}}{\rho}\right|\left|\frac{1}{2\pi i}\int_{a - ih}^{a + ih}\frac{x^{s-\rho}}{s - \rho}\,ds\right| \le \sum_{\gamma > h}\frac{x^{\beta}}{\gamma}\left|\frac{1}{2\pi i}\int_{a - \beta + i(\gamma - h)}^{a - \beta + i(\gamma + h)}\frac{x^t}{t}\,dt\right|.$$
This, in turn, is less than or equal to
$$\sum_{\gamma > h}\frac{x^{\beta}}{\gamma}\,\frac{Kx^{a - \beta}}{(a - \beta + \gamma - h)\log x}$$
by Lemma 15, which is less than or equal to
$$\frac{Kx^{a}}{\log x}\sum_{\gamma > h}\frac{1}{\gamma(\gamma - h + c)},$$
where c = a − 1, so that 0 < c ≤ a − β. By writing the roots with γ < 0 as 1 − ρ = 1 − β − iγ and making a change of variables, the reader may verify that the same chain of inequalities applies with β replaced by 1 − β, but the c we chose works for that case as well. Hence, the sum over all roots can be estimated by twice the above estimate. Now take h sufficiently large that Lemma 16 applies. Arrange the roots into groups where h + j ≤ γ < h + j + 1 for each integer j. There are at most 2 log(h + j) roots in each group, so summing over j makes the above quantities smaller than a constant multiple of
$$\sum_{j=0}^{\infty}\frac{\log(h + j)}{(h + j)(j + c)}.$$
This sum converges and, as h → ∞, it approaches zero; the proof of this is left to the reader. We now only have to evaluate the diagonal limit, and we know that it exists. We will show that
$$\sum_{|\mathrm{Im}(\rho)|\le h}\frac{x^{\rho}}{\rho}\cdot\frac{1}{2\pi i}\int_{a - ih}^{a + ih}\frac{x^{s-\rho}}{s - \rho}\,ds \;-\; \sum_{|\mathrm{Im}(\rho)|\le h}\frac{x^{\rho}}{\rho}$$
approaches zero as h → ∞. This will prove that $\sum_{\rho} x^{\rho}/\rho$ converges and is equal to the diagonal limit. By similar reasoning as before, the above difference is smaller than twice
$$\sum_{0 < \gamma \le h}\left|\frac{x^{\rho}}{\rho}\right|\left|\frac{1}{2\pi i}\int_{a - \beta - i(\gamma + h)}^{a - \beta + i(h - \gamma)}\frac{x^t}{t}\,dt - 1\right|,$$
which is no larger than
$$\sum_{0 < \gamma \le h}\left|\frac{x^{\rho}}{\rho}\right|\left|\frac{1}{2\pi i}\int_{a - \beta - i(h + \gamma)}^{a - \beta + i(h + \gamma)}\frac{x^t}{t}\,dt - 1\right| + \sum_{0 < \gamma \le h}\left|\frac{x^{\rho}}{\rho}\right|\left|\frac{1}{2\pi i}\int_{a - \beta + i(h - \gamma)}^{a - \beta + i(h + \gamma)}\frac{x^t}{t}\,dt\right|$$
by path additivity of integrals and the triangle inequality. By Lemmas 13 and 15 and similar manipulation as before, this is less than or equal to
$$\sum_{0 < \gamma \le h}\frac{x^{\beta}}{\gamma}\,\frac{x^{a - \beta}}{\pi(h + \gamma)\log x} + \sum_{0 < \gamma \le h}\frac{x^{\beta}}{\gamma}\,\frac{Kx^{a - \beta}}{(a - \beta + h - \gamma)\log x} \le \frac{x^a}{\pi\log x}\sum_{0 < \gamma \le h}\frac{1}{\gamma(h + \gamma)} + \frac{Kx^a}{\log x}\sum_{0 < \gamma \le h}\frac{1}{\gamma(c + h - \gamma)},$$
where c = a − 1. Our object is to show that these series approach zero as h → ∞. We fix an H such that the estimate of Lemma 16 holds for h ≥ H. There are only finitely many terms where γ ≤ H, and it is easy to see that each term approaches zero as h → ∞. The remaining terms we put into groups where h + j ≤ γ < h + j + 1 as before and apply the estimate of Lemma 16. We estimate the quantity by convergent series which approach zero as h → ∞; since the argument is similar to the one we did before, I will skip the details. This completes the proof that
$$\frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}\left(\sum_{\rho}\frac{s}{\rho(s - \rho)}\right)x^s\,\frac{ds}{s} = \lim_{h\to\infty}\sum_{|\mathrm{Im}(\rho)|\le h}\frac{x^{\rho}}{\rho}\cdot\frac{1}{2\pi i}\int_{a - ih}^{a + ih}\frac{x^{s-\rho}}{s - \rho}\,ds = \sum_{\rho}\frac{x^{\rho}}{\rho}.$$
7.4 Von Mangoldt’s and Riemann’s Formulae
We have now evaluated term by term
$$\frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}\left(\frac{s}{s - 1} - \sum_{\rho}\frac{s}{\rho(s - \rho)} + \sum_{n=1}^{\infty}\frac{s}{2n(s + 2n)} - \frac{\zeta'(0)}{\zeta(0)}\right)x^s\,\frac{ds}{s},$$
showing that
$$x - \sum_{\rho}\frac{x^{\rho}}{\rho} + \sum_{n=1}^{\infty}\frac{x^{-2n}}{2n} - \frac{\zeta'(0)}{\zeta(0)} = -\frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty}\frac{\zeta'(s)}{\zeta(s)}\,x^s\,\frac{ds}{s} = \psi(x).$$
The one thing missing from von Mangoldt's formula is the evaluation of ζ′(0)/ζ(0), which Edwards performs in section 3.8; it is log 2π. Von Mangoldt used his formula to prove Riemann's main formula; Edwards explains this proof in section 3.7. Von Mangoldt used a different method from Riemann's and did not directly justify Riemann's term-by-term integration, although three years later, in 1908, Landau did just that. Von Mangoldt's formula can also be used to prove the prime number theorem, which says essentially that ψ(x)/x approaches 1 as x → ∞.
References
[1] Harold M. Edwards. Riemann's Zeta Function. Dover Publications, 1974.
[2] Joseph L. Taylor. Complex Variables. American Mathematical Society, 2011.
 


Reposted from blog.csdn.net/yuanmeng001/article/details/87480161