A solution of the Riemann hypothesis based on the analytic continuation principle

Zeraoulia Elhadj

Department of Mathematics, University of Tébessa, (12002), Algeria

January 19, 2014

Abstract

In this paper, a solution of the famous Riemann hypothesis is given by using the analytic continuation principle for a well-defined analytic function on the critical strip.

1 Introduction

This is just a draft. Nothing is correct.

The Riemann zeta function is the function of the complex variable s = α + iβ defined in the half-plane α > 1 by the absolutely convergent series

ζ(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}    (1)

and in the whole complex plane C by analytic continuation. As shown by Riemann, ζ(s) extends to C as a meromorphic function whose only singularity is a simple pole at s = 1 with residue 1. In [1], Riemann obtained an analytic formula for the number of primes up to a preassigned limit in terms of the zeros of the zeta function (1). This principal result implies that the primes are distributed as regularly as possible if the Riemann hypothesis is true.

Riemann Hypothesis. The nontrivial zeros of ζ(s) have real part equal to α = 1/2.

The Riemann hypothesis is probably the most important open problem in pure mathematics today [2]. The unsolved Riemann hypothesis is part of Hilbert's eighth problem, along with the Goldbach conjecture, and it is one of the Clay Mathematics Institute Millennium Prize Problems. The hypothesis has been verified numerically for the first 1,500,000,000 nontrivial zeros, but no mathematical proof has been found since its formulation in 1859.

It is well known that the Riemann hypothesis is equivalent to the statement that all the zeros of the Dirichlet eta function (the alternating zeta function)

η(s) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^{s}}    (2)

that fall in the critical strip D = {s = α + iβ ∈ C : 0 < α < 1} lie on the critical line α = 1/2 [3, 4, 8, p. 49]. We have η(s) = (1 − 2^{1−s}) ζ(s), and the series (2) converges for every s = α + iβ ∈ C with α > 0; the function η(s) is analytic for all such s and, in particular, has no pole at s = 1, where η(1) = ln 2.

The first main result for the proof of the Riemann hypothesis is given as follows:

Theorem 1. For every s ∈ D, we have

η(1 − s) = η(s̄) = \overline{η(s)}.    (3)

The proof is divided into two steps:

(1) Definition of an analytic function h(s) on the critical strip D designed to express the required property.

(2) Use of the analytic continuation principle to prove that h(s) is identically zero for all s ∈ D.

(1) First of all, the function h(s) is defined by

h(s) = η(1 − s) − η(s̄) = \sum_{n=1}^{\infty} (−1)^{n−1} f_{n}(α)\, n^{iβ},   f_{n}(α) = \frac{n^{2α−1} − 1}{n^{α}},    (4)

where s̄ = α − iβ is the complex conjugate of s. The function h(s) is introduced here to translate the Riemann hypothesis into an explicit expression in the variable α, to which the analytic continuation principle can then be applied. The function h(s) is well defined only for s ∈ D, since it is the difference of the two series η(1 − s) and η(s̄), which converge simultaneously only for s ∈ D; see [7, pp. 64] for more details about Dirichlet series. Indeed, let s ∈ D; then Re(1 − s) = 1 − α > 0 and Re(s̄) = α > 0. For α = 1 we have h(s) = \sum_{n=1}^{\infty} (−1)^{n−1} \frac{n−1}{n} n^{iβ}, and this series diverges by the necessary condition for convergence: if the terms of a series do not tend to zero, the series diverges. For α > 1, the series η(1 − s) does not converge, since Re(1 − s) = 1 − α < 0, and hence the function h is not defined there.
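Remark (numerical illustration). The following short Python sketch, which is not part of the proof, evaluates h(s) = η(1 − s) − η(s̄) of (4) on the critical line using the mpmath library (its function altzeta computes the Dirichlet eta function); the helper name h and the sample rational sequence β_k = k/(k + 1) are illustrative choices, not notation taken from the text.

    # Minimal numerical sketch (illustrative only): evaluate h(s) = eta(1 - s) - eta(conj(s))
    # with mpmath's Dirichlet eta function altzeta. The helper name `h` and the rational
    # sample sequence beta_k = k/(k + 1) are assumptions made for this illustration.
    from mpmath import mp, mpc, altzeta, fabs

    mp.dps = 30  # working precision in decimal digits

    def h(s):
        """h(s) = eta(1 - s) - eta(conjugate(s)) for s in the critical strip."""
        return altzeta(1 - s) - altzeta(mpc(s.real, -s.imag))

    # On the critical line alpha = 1/2 we have 1 - s = conj(s), so h vanishes there.
    for k in range(1, 6):
        beta_k = mp.mpf(k) / (k + 1)      # rational sequence converging to beta = 1
        r_k = mpc(mp.mpf(1) / 2, beta_k)  # r_k = 1/2 + i*beta_k, as in the text
        print(k, fabs(h(r_k)))            # |h(r_k)| is zero up to rounding

Evaluating h along such a sequence (r_k) is exactly the situation in which the accumulation-point argument of step (2) below is invoked.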
Finally, the function h is defined only on the critical strip D.

(2) Secondly, on the one hand, it is clear that the function h(s) is analytic for all s ∈ D, being the sum of the two analytic functions η(1 − s) and η(s̄) on D: the function η(1 − s) is the composition of the two analytic functions s ↦ 1 − s and s ↦ η(s) for s ∈ D, and the function η(s̄) is also analytic since conjugation does not change the real part of s ∈ D; moreover, if s ∈ D then 1 − s, s̄ ∈ D, so both series η(1 − s) and η(s̄) converge. See [7, pp. 66].

On the other hand, recall that any bounded infinite set in C has at least one accumulation point. Recall also the analytic continuation principle from the theory of analytic functions: if two analytic functions agree on a set having an accumulation point in the domain, then they agree everywhere on the domain of definition. In other words, if (r_k) is a sequence of distinct numbers such that h(r_k) = 0 for all k and this sequence converges to a point r in the domain D, then h is identically zero on the connected component of D containing r.

We remark that h(α + iβ) = η(1 − α − iβ) − η(α − iβ), so we look for a convergent sequence on which h(r_k) = 0 for all k. It is easy to verify that h(1/2 + iβ) = η(1/2 − iβ) − η(1/2 − iβ) = 0 for all β ∈ R. Thus, for the function h, the domain is the critical strip D defined above, and the bounded sequence (r_k)_k is defined by r_k = 1/2 + iβ_k, where (β_k)_k is any bounded rational sequence converging to a real number β. Such a sequence exists by the density of the rationals in the real numbers. This does not contradict the numerical evidence suggesting that all values of β corresponding to nontrivial zeros of ζ(s) are irrational [5, 6], since all irrational numbers are real.

The main fact here is as follows: if a sequence converges to a limit, then all of its subsequences converge to the same limit, so a convergent sequence has exactly one accumulation point. Thus lim_{k→∞} r_k = 1/2 + iβ = r ∈ D is an accumulation point of the bounded infinite set Ω = {1/2 + iβ_k : k ∈ N} ⊂ D. Hence, by the analytic continuation principle, we conclude that the function h(s) is identically zero on the connected set D = ]0, 1[ × R; indeed, D is connected as the cartesian product of two connected spaces, and a space is connected if and only if it has a single connected component, namely the whole space itself.

2 Evaluating the possible values of α

The famous functional equation for η(s), restricted to the domain D, is given by

η(s) = ϕ(s) η(1 − s),   ϕ(s) = 2 \frac{1 − 2^{s−1}}{2^{s} − 1} π^{s−1} \sin\left(\frac{πs}{2}\right) Γ(1 − s).    (5)

Hence, by using (3), we have

η(s) = ϕ(s) \overline{η(s)}.    (6)

Let η(s) = x(s) + i y(s) and ϕ(s) = ϕ_1(s) + i ϕ_2(s). The function ϕ(s) is written uniquely as ϕ(s) = ρ(s) exp(iθ(s)), where ρ(s) = \sqrt{ϕ_1^2(s) + ϕ_2^2(s)} ≠ 0 and θ(s) = Arg ϕ(s) ∈ R. Here ϕ_1(s) = ρ(s) cos θ(s) and ϕ_2(s) = ρ(s) sin θ(s), since a nonzero complex number is entirely determined by its modulus and argument. In this case, equation (6) becomes

\begin{pmatrix} 1 − ϕ_1(s) & −ϕ_2(s) \\ −ϕ_2(s) & 1 + ϕ_1(s) \end{pmatrix} \begin{pmatrix} x(s) \\ y(s) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.    (7)

Thus, we have the following result:

Proposition 2. If s ∈ D is not a solution of η(s) = 0, then ρ(s) = 1.

Proof. Let s ∈ D and assume that ρ(s) ≠ 1. Then the determinant of the matrix in (7) is 1 − ρ^2(s) ≠ 0, so the only solution of (7) is x(s) = 0, y(s) = 0. This means that equation (7) holds only at the roots of η(s) = 0, which contradicts the fact that s is not a solution of η(s) = 0. Thus ρ(s) = 1.
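Remark (numerical illustration). As a quick sanity check of the functional equation (5) as written above, the following sketch (again not part of the proof) compares η(s) with ϕ(s) η(1 − s) at an arbitrary point of the strip and evaluates ρ(s) = |ϕ(s)| at a point of the critical line using mpmath; the helper name phi and the sample points are illustrative assumptions.

    # Hedged numerical sketch (illustrative only): check eta(s) = phi(s) * eta(1 - s)
    # with phi as in (5), and evaluate rho(s) = |phi(s)| on the critical line.
    # The helper name `phi` and the sample points are assumptions for this illustration.
    from mpmath import mp, mpc, altzeta, gamma, sin, pi, fabs

    mp.dps = 30

    def phi(s):
        """phi(s) = 2 (1 - 2^(s-1)) / (2^s - 1) * pi^(s-1) * sin(pi s / 2) * Gamma(1 - s)."""
        return 2 * (1 - 2**(s - 1)) / (2**s - 1) * pi**(s - 1) * sin(pi * s / 2) * gamma(1 - s)

    s = mpc('0.3', '7.1')                              # an arbitrary point of the strip
    print(fabs(altzeta(s) - phi(s) * altzeta(1 - s)))  # ~ 0: the functional equation holds

    s_line = mpc('0.5', '7.1')                         # a point on the critical line
    print(fabs(phi(s_line)))                           # ~ 1: rho(s) = |phi(s)| = 1 there

The second printed value illustrates Proposition 2 at a point of the critical line that is not a zero of η.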
We therefore obtain the second important result for our proof:

Theorem 3. If ρ(s) ≠ 1, then s ∈ D is a solution of η(s) = 0.

Now, let η(1 − s) = u(s) + i v(s). Thus, from equation (2), we have

x(s) = \sum_{n=1}^{\infty} \frac{(−1)^{n−1}}{n^{α}} \cos(β \ln n),   y(s) = −\sum_{n=1}^{\infty} \frac{(−1)^{n−1}}{n^{α}} \sin(β \ln n),
u(s) = \sum_{n=1}^{\infty} \frac{(−1)^{n−1}}{n^{1−α}} \cos(β \ln n),   v(s) = \sum_{n=1}^{\infty} \frac{(−1)^{n−1}}{n^{1−α}} \sin(β \ln n).    (8)

Substituting the algebraic forms of η(s) and η(1 − s) into equation (5), we obtain

\begin{pmatrix} x(s) \\ y(s) \end{pmatrix} = A(s) \begin{pmatrix} u(s) \\ v(s) \end{pmatrix},   A(s) = \begin{pmatrix} ϕ_1(s) & −ϕ_2(s) \\ ϕ_2(s) & ϕ_1(s) \end{pmatrix},    (9)

that is,

x(s) = u(s) ϕ_1(s) − v(s) ϕ_2(s),   y(s) = u(s) ϕ_2(s) + v(s) ϕ_1(s),    (10)

and the inverse transformation is given by

u(s) = \frac{x(s)}{ρ^2(s)} ϕ_1(s) + \frac{y(s)}{ρ^2(s)} ϕ_2(s),   v(s) = \frac{y(s)}{ρ^2(s)} ϕ_1(s) − \frac{x(s)}{ρ^2(s)} ϕ_2(s).    (11)

The matrix A(s) in (9) is invertible for all s ∈ D since its determinant is ρ^2(s) ≠ 0 for all s ∈ D. The only fixed points of the transformation defined by this matrix are located on the lines x(s) = 0 and y(s) = 0. Indeed,

\begin{pmatrix} x(s) \\ y(s) \end{pmatrix} = A(s) \begin{pmatrix} x(s) \\ y(s) \end{pmatrix}

implies (I_2 − A(s)) \begin{pmatrix} x(s) \\ y(s) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, which has the unique solution (0, 0), since the determinant of the matrix I_2 − A(s) is (ϕ_1(s) − 1)^2 + ϕ_2^2(s) ≠ 0, because ϕ(s) ≠ 0 for all s ∈ D. Here I_2 is the 2 × 2 identity matrix. Clearly, all the images of the roots of η(s) = 0 under the transformation (10) remain unchanged, and the same holds for the inverse transformation (11).

From (3) we have x(s) = u(s) and y(s) = −v(s); thus, from (10) and (11), we obtain

−ϕ_1(s) x(s) − ϕ_2(s) y(s) + ρ^2(s) (ϕ_1(s) u(s) − ϕ_2(s) v(s)) = 0,
−ϕ_2(s) x(s) + ϕ_1(s) y(s) + ρ^2(s) (ϕ_2(s) u(s) + ϕ_1(s) v(s)) = 0.    (12)

Replacing ϕ_1(s) = ρ(s) cos θ(s) and ϕ_2(s) = ρ(s) sin θ(s), together with the values of x(s), y(s), u(s) and v(s) from (8), we obtain the equation

\sum_{n=1}^{\infty} \frac{(−1)^{n−1}}{n^{α}} \left(1 − n^{2α−1} ρ^2(s)\right) \exp(i(θ(s) + β \ln n)) = 0,    (13)

which holds for all s = α + iβ ∈ D. We remark that if

\left(1 − n^{2α−1} ρ^2(s)\right) \exp(i(θ(s) + β \ln n)) = r(s) \exp(−iδ \ln n),   r(s) = (1 − ρ^2(s)) \exp(iθ(s)),    (14)

for all n ≥ 2 and for some values of ρ(s), α, θ(s), δ and β, then s = α + iδ is a root of η(s) = 0; indeed, under (14) equation (13) reduces to r(s) η(α + iδ) = 0, so η(α + iδ) = 0 whenever r(s) ≠ 0. We now prove the following important result:

Lemma 4. The only nontrivial solution of equation (14) with respect to ρ(s), α, θ(s), δ and β is ρ(s) ≠ 1, α = 1/2 and δ = −β for all n ≥ 2.

We remark that r(s) = 0 if ρ(s) = 1. Thus, assume that ρ(s) ≠ 1; hence, by Theorem 3, the complex number s is a root of η(s) = 0. From (14) we have

\left(1 − n^{2α−1} ρ^2\right) \exp(i(θ + β \ln n)) = (1 − ρ^2) \exp(i(θ − δ \ln n)).

Taking the modulus of both sides gives |1 − n^{2α−1} ρ^2| = |1 − ρ^2| for all n ≥ 2. Clearly, the only solution is

α = 1/2.    (15)

Hence we obtain exp(i(θ + β \ln n)) = exp(i(θ − δ \ln n)), and thus

\begin{pmatrix} \cos(β \ln n) − \cos(δ \ln n) & −(\sin(β \ln n) + \sin(δ \ln n)) \\ \sin(β \ln n) + \sin(δ \ln n) & \cos(β \ln n) − \cos(δ \ln n) \end{pmatrix} \begin{pmatrix} \cos θ(s) \\ \sin θ(s) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.    (16)

The determinant of the matrix in (16) is −2(\cos(β \ln n + δ \ln n) − 1). If this determinant is nonzero, then the only solution of (16) is cos θ(s) = sin θ(s) = 0, which is impossible by the trigonometric identity cos^2 θ(s) + sin^2 θ(s) = 1. Hence we must have cos((β + δ) \ln n) − 1 = 0, i.e., (β + δ) \ln n ∈ 2πZ, for every n ≥ 2; since \ln 3 / \ln 2 is irrational, this is possible only when β + δ = 0, that is,

δ = −β.    (17)

Now, equation (13) becomes

(1 − ρ^2(s)) \exp(iθ(s)) η(1/2 − iβ) = 0.    (18)

Then, from (18), we conclude that all the roots of η(s) = 0 have the form s = 1/2 − iβ, since ρ(s) ≠ 1 according to Theorem 3.
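Remark (numerical illustration). The passage from the modulus condition |1 − n^{2α−1} ρ^2| = |1 − ρ^2| (for all n ≥ 2) to (15) can be explored numerically; in the sketch below, the value ρ = 0.8 and the tested values of α are arbitrary illustrative choices. It tabulates the condition for a few α and shows that it holds for every n only at α = 1/2.

    # Small illustrative check (not part of the proof): for a fixed rho != 1, the condition
    # |1 - n**(2*alpha - 1) * rho**2| = |1 - rho**2| for every n >= 2 singles out alpha = 1/2,
    # since n**(2*alpha - 1) = 1 for every n exactly when alpha = 1/2.
    # The sample value rho = 0.8 and the tested alphas are assumptions for this illustration.
    rho = 0.8
    target = abs(1 - rho**2)

    for alpha in (0.3, 0.5, 0.7):
        ok = all(
            abs(abs(1 - n**(2 * alpha - 1) * rho**2) - target) < 1e-12
            for n in range(2, 50)
        )
        print(f"alpha = {alpha}: holds for all n in 2..49 -> {ok}")
    # Expected output: False for alpha = 0.3 and 0.7, True only for alpha = 0.5.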
Replacing −δ by δ in (14), we obtain that η(1/2 + iβ) = 0; hence s = 1/2 + iβ is also a root of η(s) = 0. Finally, we can conclude:

Theorem 5. The Riemann hypothesis is true, i.e., all the nontrivial zeros of ζ(s) have real part equal to α = 1/2.

The fact that the Dirichlet eta function (2) has the same zeros as the zeta function (1) in the critical strip D implies that all nontrivial zeros of ζ(s) have real part equal to α = 1/2. With this, the hope that the primes are distributed as regularly as possible becomes a certainty through this solution of the Riemann hypothesis. Also, all propositions known to be equivalent to, or true under, the Riemann hypothesis are thereby established; examples include the growth of arithmetic functions, the Lindelöf hypothesis and the growth of the zeta function, and the large prime gap conjecture.

References

[1] G. F. B. Riemann, Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse, Monatsber. Königl. Preuss. Akad. Wiss. Berlin, pp. 671-680, Nov. 1859.

[2] M. du Sautoy, The Music of the Primes: Searching to Solve the Greatest Mystery in Mathematics, HarperCollins, New York, 2004.

[3] H. M. Srivastava and J. Choi, Series Associated with the Zeta and Related Functions, Kluwer Academic Publishers, Dordrecht, Boston, and London, 2001.

[4] J. Sondow, Zeros of the alternating zeta function on the line Re(s) = 1, Amer. Math. Monthly, vol. 110, pp. 435-437, 2003.

[5] J. Havil, "The zeros of zeta," §16.6 in Gamma: Exploring Euler's Constant, Princeton University Press, Princeton, NJ, pp. 193-196, 2003.

[6] J. Derbyshire, Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics, Penguin, New York, 2004, p. 384.

[7] J.-P. Serre, A Course in Arithmetic, Chapter VI, Springer-Verlag, 1973.

[8] P. Borwein, S. Choi, B. Rooney and A. Weirathmueller, The Riemann Hypothesis: A Resource for the Afficionado and Virtuoso Alike, Springer-Verlag, 2007.