We denote by \(D_{R}=\{z\in\mathbb C: |z|<R\}\) and \(C_{R}=\{z\in\mathbb C: |z|=R\}\) the open disc and circle of radius \(R\in\mathbb R_+\) centered at the origin.
Theorem (Jensen's formula). Let \(\Omega\) be an open set containing the closure of the disc \(D_{R}\), and suppose that \(f\) is holomorphic in \(\Omega\), \(f(0) \neq 0\), and \(f\) vanishes nowhere on the circle \(C_{R}\). If \(z_{1}, \ldots, z_{N}\) denote the zeros of \(f\) inside the disc (counted with multiplicities, i.e. each zero appears in the sequence as many times as its order), then \[\begin{align*} \log |\: f(0)|=\sum_{k=1}^{N} \log \left(\frac{\left|z_{k}\right|}{R}\right) +\frac{1}{2 \pi} \int_{0}^{2 \pi} \log |\:f(R e^{i \theta})| d \theta. \tag{*} \end{align*}\]
Proof. The proof of the theorem consists of several steps.
Step 1. First, we observe that if \(f_{1}\) and \(f_{2}\) are two functions satisfying the hypotheses and the conclusion of the theorem, then the product \(f_{1}\: f_{2}\) also satisfies the hypotheses of the theorem and formula (*). This is a simple consequence of the fact that \(\log x y=\log x+\log y\) whenever \(x, y\in\mathbb R_+\), and that the set of zeros of \(f_{1}\: f_{2}\) is the union of the sets of zeros of \(f_{1}\) and \(f_{2}\) (counted with multiplicities).
Step 2. The function \[g(z)=\frac{f(z)}{\left(z-z_{1}\right) \cdots\left(z-z_{N}\right)}\] initially defined on \(\Omega\setminus\left\{z_{1}, \ldots, z_{N}\right\}\), is bounded near each \(z_{j}\). Therefore each \(z_{j}\) is a removable singularity, and hence we can write \[f(z)=\left(z-z_{1}\right) \cdots\left(z-z_{N}\right) g(z),\] where \(g\) is holomorphic in \(\Omega\) and nowhere vanishing in the closure of \(D_{R}\). By Step 1, it suffices to prove Jensen’s formula for functions like \(g\) that vanish nowhere, and for functions of the form \(z-z_{j}\).
Step 3. We first prove (*) for a function \(g\) that vanishes nowhere in the closure of \(D_{R}\). More precisely, we must establish the following identity:
\[\begin{align*} \log |g(0)|=\frac{1}{2 \pi} \int_{0}^{2 \pi} \log |g(R e^{i \theta})| d \theta. \tag{**} \end{align*}\] In a slightly larger disc, we can write \(g(z)=e^{h(z)}\) where \(h\) is holomorphic in that disc. This is possible since discs are simply connected, and we can define \(h=\log g\). Now \(|g(z)|=|e^{h(z)}|=|e^{\operatorname{Re}(h(z))+i \operatorname{Im}(h(z))}|=e^{\operatorname{Re}(h(z))}\), so that \(\log |g(z)|=\operatorname{Re}(h(z))\). Then the mean value property for holomorphic functions (in our case with \(h=\log g\)) immediately implies the desired formula for its real part, which is precisely (**).
Step 4. The last step is to prove the formula for functions of the form \(f(z)=z-w\), where \(w \in D_{R}\). That is, we must show that \[\log |w|=\log \left(\frac{|w|}{R}\right)+\frac{1}{2 \pi} \int_{0}^{2 \pi} \log |R e^{i \theta}-w| d \theta.\]
Since \(\log (|w| / R)=\log |w|-\log R\) and \(\log|R e^{i \theta}-w|=\log R+\log |e^{i \theta}-w / R|\), it suffices to prove that \[\int_{0}^{2 \pi} \log |e^{i \theta}-a| d \theta=0, \quad \text { whenever } \quad |a|<1.\]
This in turn is equivalent (after the change of variables \(\theta \mapsto-\theta\) ) to \[\int_{0}^{2 \pi} \log |1-a e^{i \theta}| d \theta=0, \quad \text { whenever } \quad |a|<1.\] To prove this, we use the function \(F(z)=1-a z\), which vanishes nowhere in the closure of the unit disc. As a consequence, there exists a holomorphic function \(G\) in a disc of radius greater than \(1\) such that \(F(z)=e^{G(z)}\). Then \(|F|=e^{\operatorname{Re}(G)}\), and therefore \(\log |F|=\operatorname{Re}(G)\). Since \(F(0)=1\) we have \(\log |F(0)|=0\), and an application of the mean value property to \(\operatorname{Re}(G)=\log |F|\), the real part of the holomorphic function \(G\), concludes the proof of the theorem.$$\tag*{$\blacksquare$}$$
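Before moving on, here is a minimal numerical sanity check of Jensen's formula (*) in Python for an explicitly chosen polynomial; the polynomial, the radius \(R\), and the quadrature step are illustrative choices, not part of the theorem.

```python
import numpy as np

# Check of Jensen's formula (*) for f(z) = (z - 0.3)(z - (0.5 + 0.5j))(z - 2), R = 1:
# f(0) != 0, no zeros on C_R, and exactly two zeros lie inside D_R.
zeros_of_f = [0.3, 0.5 + 0.5j, 2.0]
R = 1.0

def f(z):
    return np.prod([z - a for a in zeros_of_f])

lhs = np.log(abs(f(0)))

theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
circle_term = np.mean([np.log(abs(f(R * np.exp(1j * t)))) for t in theta])
rhs = sum(np.log(abs(a) / R) for a in zeros_of_f if abs(a) < R) + circle_term

print(lhs, rhs)   # the two values should agree to several decimal places
```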
From Jensen’s formula we can derive an identity linking the growth of a holomorphic function with its number of zeros inside a disc.
If \(f\) is a holomorphic function on the closure of a disc \(D_{R}\), we denote by \(\mathfrak{n}_{f}(r)\) the number of zeros of \(f\) (counted with their multiplicities) inside the disc \(D_{r}\), with \(0<r<R\).
A simple but useful observation is that \(\mathfrak{n}_{f}(r)\) is a non-decreasing function of \(r\).
Lemma. Under the assumptions of the previous theorem, we have \[\int_{0}^{R} \mathfrak{n}_f(r) \frac{d r}{r}=\sum_{k=1}^{N} \log \bigg|\frac{R}{z_{k}}\bigg|.\]
Proof.
First we have \[\sum_{k=1}^{N} \log \left|\frac{R}{z_{k}}\right|=\sum_{k=1}^{N} \int_{\left|z_{k}\right|}^{R} \frac{d r}{r}.\]
If we define the characteristic function \[\eta_{k}(r)= \begin{cases}1 & \text { if } r>\left|z_{k}\right|, \\ 0 & \text { if } r \leq\left|z_{k}\right|, \end{cases}\] then
\[\sum_{k=1}^{N} \eta_{k}(r)=\mathfrak{n}_f(r).\]
The lemma is proved using \[\sum_{k=1}^{N} \int_{\left|z_{k}\right|}^{R} \frac{d r}{r}=\sum_{k=1}^{N} \int_{0}^{R} \eta_{k}(r) \frac{d r}{r}=\int_{0}^{R}\left(\sum_{k=1}^{N} \eta_{k}(r)\right) \frac{d r}{r}=\int_{0}^{R} \mathfrak{n}_f(r) \frac{d r}{r}.\]
This completes the proof of the lemma. $$\tag*{$\blacksquare$}$$
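As a quick illustration, the following sketch checks the lemma numerically for a finite list of moduli \(|z_k|<R\); the moduli, the radius, and the grid size are illustrative choices.

```python
import numpy as np

# Numerical check of the lemma: int_0^R n_f(r) dr/r = sum_k log(R/|z_k|)
# for a finite list of moduli |z_k| < R (with multiplicity).
moduli = [0.2, 0.5, 0.5, 0.9]   # |z_1|, ..., |z_N|; illustrative values
R = 1.0

r = np.linspace(1e-6, R, 200_001)
n_f = np.sum(np.array(moduli)[:, None] < r[None, :], axis=0)   # step function n_f(r)

# trapezoidal rule for int n_f(r)/r dr; n_f vanishes near 0, so the integrand is harmless there
integrand = n_f / r
integral = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(r))

print(integral, sum(np.log(R / m) for m in moduli))   # should agree to about 4-5 decimals
```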
As a corollary of Jensen’s formula and the previous lemma, we obtain \[\int_{0}^{R} \mathfrak{n}_f(r) \frac{d r}{r}=\frac{1}{2 \pi} \int_{0}^{2 \pi} \log |\: f(R e^{i \theta})| d \theta-\log |\: f(0)|.\]
Let \(f\) be an entire function. If there exist \(\rho\in\mathbb R_+\) and constants \(A, B\in\mathbb R_+\) such that \[|\:f(z)| \leq A e^{B|z|^{\rho}} \quad \text { for all } \quad z \in \mathbb{C},\] then we say that \(f\) has an order of growth \(\leq \rho\).
We define the order of growth of \(f\) as \[\rho_{f}=\inf \rho,\] where the infimum is taken over all \(\rho>0\) such that \(f\) has an order of growth \(\leq \rho\).
For example, the order of growth of the function \(e^{z^{2}}\) is \(2\).
If \(f\) is an entire function that has an order of growth \(\leq \rho\), then:
(i) \(\mathfrak{n}_f(r) \leq C r^{\rho}\) for some \(C>0\) and all sufficiently large \(r\);
(ii) if \(z_{1}, z_{2}, \ldots\) denote the zeros of \(f\), with \(z_{k} \neq 0\), then for all \(s>\rho\) we have
\[\sum_{k=1}^{\infty} \frac{1}{\left|z_{k}\right|^{s}}<\infty.\]
It suffices to prove the estimate for \(\mathfrak{n}_f(r)\) when \(f(0) \neq 0\). Indeed, consider \(F(z)=f(z) / z^{l}\), where \(l\) is the order of the zero of \(f\) at the origin. Then \(\mathfrak{n}_{f}(r)\) and \(\mathfrak{n}_{F}(r)\) differ only by a constant, and \(F\) also has an order of growth \(\leq \rho\).
If \(f(0) \neq 0\) we may use the formula from the previous corollary, namely \[\int_{0}^{R} \mathfrak{n}_f(x) \frac{d x}{x}=\frac{1}{2 \pi} \int_{0}^{2 \pi} \log |\: f(R e^{i \theta})| d \theta-\log|\:f(0)|.\]
Choosing \(R=2 r\) and using that \(\mathfrak{n}_f \geq 0\), this formula implies \[\int_{r}^{2 r} \mathfrak{n}_f(x) \frac{dx}{x} \leq \frac{1}{2 \pi} \int_{0}^{2 \pi} \log |\: f(R e^{i \theta})| d \theta-\log |\: f(0)|.\]
On the one hand, since \(\mathfrak{n}_f(r)\) is non-decreasing, we have \[\int_{r}^{2 r} \mathfrak{n}_f(x) \frac{d x}{x} \geq \mathfrak{n}_f(r) \int_{r}^{2 r} \frac{d x}{x}=\mathfrak{n}_f(r)[\log 2 r-\log r]=\mathfrak{n}_f(r) \log 2.\]
On the other hand, the growth condition on \(f\) (for all large \(r\)) gives \[\int_{0}^{2 \pi} \log |f(R e^{i \theta})| d \theta \leq \int_{0}^{2 \pi} \log |A e^{B R^{\rho}}| d \theta \leq C^{\prime} r^{\rho}.\]
Consequently, \(\mathfrak{n}_f(r) \leq C r^{\rho}\) for an appropriate \(C>0\) and all sufficiently large \(r\).
The following estimates prove the second part of the theorem. Since \(f\) has only finitely many zeros in the closed unit disc, and all of them are non-zero, it suffices to show that the sum over the zeros with \(\left|z_{k}\right| \geq 1\) is finite: \[\begin{aligned} \sum_{\left|z_{k}\right| \geq 1}\left|z_{k}\right|^{-s} & =\sum_{j=0}^{\infty}\left(\sum_{2^{j} \leq\left|z_{k}\right|<2^{j+1}}\left|z_{k}\right|^{-s}\right) \\ & \leq \sum_{j=0}^{\infty} 2^{-j s} \mathfrak{n}_f\left(2^{j+1}\right) \\ & \leq c \sum_{j=0}^{\infty} 2^{-j s} 2^{(j+1) \rho} \\ & \leq c^{\prime} \sum_{j=0}^{\infty}\left(2^{\rho-s}\right)^{j} <\infty. \end{aligned}\]
The last series converges because \(s>\rho\).
This completes the proof of the theorem.$$\tag*{$\blacksquare$}$$
Given a sequence \((a_{n})_{n\in\mathbb Z_+}\subseteq \mathbb C\), we say that the product \[\prod_{n=1}^{\infty}\left(1+a_{n}\right)\] converges if the limit \[\lim _{N \rightarrow \infty} \prod_{n=1}^{N}\left(1+a_{n}\right)\] of the partial products exists.
A useful sufficient condition that guarantees the convergence of a product is contained in the following proposition.
Proposition. If \(\sum_{n\in\mathbb Z_+}\left|a_{n}\right|<\infty\), then the product \[\prod_{n=1}^{\infty}\left(1+a_{n}\right)\] converges. Moreover, the product converges to \(0\) if and only if one of its factors is \(0\).
If \(\sum_{n\in\mathbb Z_+}\left|a_{n}\right|\) converges, then for all large \(n\) we must have \(\left|a_{n}\right|<1 / 2\). We may assume, without loss of generality, that this inequality holds for all \(n\in\mathbb Z_+\).
Hence, we can define \(\log \left(1+a_{n}\right)\) by the usual power series, and this logarithm satisfies the property that \(1+z=e^{\log (1+z)}\) whenever \(|z|<1\).
Hence we may write the partial products as follows: \[\prod_{n=1}^{N}\left(1+a_{n}\right)=\prod_{n=1}^{N} e^{\log \left(1+a_{n}\right)}=e^{B_{N}}\] where \(B_{N}=\sum_{n=1}^{N} b_{n}\) with \(b_{n}=\log \left(1+a_{n}\right)\).
By the power series expansion we see that \(|\log (1+z)| \leq 2|z|\), if \(|z|<1 / 2\). Hence \(\left|b_{n}\right| \leq 2\left|a_{n}\right|\), so \(B_{N}\) converges as \(N \rightarrow \infty\) to a complex number, say \(B\).
Since the exponential function is continuous, we conclude that \(e^{B_{N}}\) converges to \(e^{B}\) as \(N \rightarrow \infty\), and the first part follows.
Observe also that if \(1+a_{n} \neq 0\) for all \(n\in\mathbb Z_+\), then the product converges to a non-zero limit since it is expressed as \(e^{B}\). $$\tag*{$\blacksquare$}$$
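As a concrete illustration of the proposition, the sketch below computes partial products of \(\prod_{n\geq 1}(1+1/n^{2})\), a product with summable terms; the truncation level is an illustrative choice, and the closed form \(\sinh(\pi)/\pi\) (which comes from the product formula for \(\sin \pi s\) proved later in these notes, evaluated at \(s=i\)) is printed only for comparison.

```python
import math

# Partial products of prod_{n>=1} (1 + a_n) with a_n = 1/n^2 (a summable sequence).
# They converge, as the proposition asserts; sinh(pi)/pi is the known limit.
N = 100_000
partial = 1.0
for n in range(1, N + 1):
    partial *= 1.0 + 1.0 / (n * n)

print(partial, math.sinh(math.pi) / math.pi)
```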
Proposition 2. Suppose \((F_{n})_{n\in\mathbb Z_+}\) is a sequence of holomorphic functions on the open set \(\Omega\). If there exist constants \(c_{n}>0\) such that \[\sum_{n\in\mathbb Z_+} c_{n}<\infty \quad \text { and } \quad\left|F_{n}(z)-1\right| \leq c_{n} \quad \text { for all } \quad z \in \Omega,\] then:
(i) The product \(\prod_{n=1}^{\infty} F_{n}(z)\) converges uniformly in \(\Omega\) to a holomorphic function \(F(z)\).
(ii) If \(F_{n}(z)\) does not vanish for any \(n\), then \[\frac{F^{\prime}(z)}{F(z)}=\sum_{n=1}^{\infty} \frac{F_{n}^{\prime}(z)}{F_{n}(z)}.\]
To prove the first statement, note that for each \(z\) we may argue as in the previous proposition if we write \(F_{n}(z)=1+a_{n}(z)\), with \(\left|a_{n}(z)\right| \leq c_{n}\).
Then, we observe that the estimates are actually uniform in \(z\) because the \(c_{n}\) ’s are constants. It follows that the product converges uniformly to a holomorphic function, which we denote by \(F(z)\).
To establish the second part of the proposition, suppose that \(K\) is a compact subset of \(\Omega\), and let \[G_{N}(z)=\prod_{n=1}^{N} F_{n}(z).\]
We have just proved that \(\lim_{N\to\infty}G_{N} = F\) uniformly in \(\Omega\). Hence, by the Cauchy integral formula for the derivative, the sequence \((G_{N}^{\prime})_{N\in\mathbb Z_+}\) converges uniformly to \(F^{\prime}\) on \(K\).
Since each \(F_{n}\) is nowhere vanishing, the previous proposition shows that \(F\) does not vanish either; hence the \(|G_{N}|\) are uniformly bounded from below on \(K\) for all large \(N\), and we conclude that \(\lim_{N\to\infty}G_{N}^{\prime} / G_{N} =F^{\prime} / F\) uniformly on \(K\). Because \(K\) is an arbitrary compact subset of \(\Omega\), the limit holds at every point of \(\Omega\).
Moreover, a simple calculation yields \[\frac{G_{N}^{\prime}}{G_{N}}=\sum_{n=1}^{N} \frac{F_{n}^{\prime}}{F_{n}},\] so part (ii) of the proposition is also proved. $$\tag*{$\blacksquare$}$$
For each integer \(k \geq 0\) we define canonical factors by \[E_{0}(z)=1-z \quad \text { and } \quad E_{k}(z)=(1-z) e^{z+z^{2} / 2+\cdots+z^{k} / k}, \quad \text { for } k \geq 1 .\]
The integer \(k\) is called the degree of the canonical factor.
Lemma. If \(|z| \leq 1 / 2\), then \(|E_{k}(z)-1| \leq 2e|z|^{k+1}\).
Proof.
If \(|z| \leq 1 / 2\), then with the logarithm defined in terms of the power series, we have \(1-z=e^{\log (1-z)}\), and therefore \[E_{k}(z)=e^{\log (1-z)+z+z^{2} / 2+\cdots+z^{k} / k}=e^{w}\] where \(w=-\sum_{n=k+1}^{\infty} z^{n} / n\). Observe that since \(|z| \leq 1 / 2\) we have \[|w| \leq|z|^{k+1} \sum_{n=k+1}^{\infty}|z|^{n-k-1} / n \leq|z|^{k+1} \sum_{j=0}^{\infty} 2^{-j} \leq 2|z|^{k+1}.\]
In particular, we have \(|w| \leq 1\) and this implies that \[\left|1-E_{k}(z)\right|=\left|1-e^{w}\right| \leq e|w| \leq 2e|z|^{k+1}. \qquad \tag*{$\blacksquare$}\]
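A small empirical check of this bound is sketched below; the sample size, the random seed, and the range of degrees \(k\) are illustrative choices.

```python
import cmath
import random

# Empirical check of |E_k(z) - 1| <= 2e |z|^(k+1) for |z| <= 1/2 and several k.
def E(k, z):
    return (1 - z) * cmath.exp(sum(z**n / n for n in range(1, k + 1)))

random.seed(0)
worst_ratio = 0.0
for _ in range(10_000):
    r = random.uniform(1e-3, 0.5)
    z = r * cmath.exp(2j * cmath.pi * random.random())
    for k in range(0, 6):
        ratio = abs(E(k, z) - 1) / (2 * cmath.e * abs(z) ** (k + 1))
        worst_ratio = max(worst_ratio, ratio)

print(worst_ratio)   # stays below 1, consistent with the lemma
```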
Theorem (Weierstrass). Given any sequence \((a_{n})_{n\in\mathbb Z_+}\subseteq \mathbb C\) with \(\lim_{n\to \infty}|a_{n}|=\infty\), there exists an entire function \(f\) that vanishes at all \(z=a_{n}\) and nowhere else. Any other such entire function is of the form \(f(z)e^{g(z)}\), where \(g\) is entire.
Proof.
Recall that if a holomorphic function \(f\) vanishes at \(z=a\), then the multiplicity of the zero \(a\) is the integer \(m\) so that \[f(z)=(z-a)^{m} g(z),\] where \(g\) is holomorphic and nowhere vanishing in a neighborhood of \(a\).
To begin the proof, note first that if \(f_{1}\) and \(f_{2}\) are two entire functions that vanish at all \(z=a_{n}\) and nowhere else, then \(f_{1} / f_{2}\) has removable singularities at all the points \(a_{n}\). Hence \(f_{1} / f_{2}\) is entire and vanishes nowhere, so that there exists an entire function \(g\) with \[f_{1}(z) / f_{2}(z)=e^{g(z)}.\]
Therefore \(f_{1}(z)=f_{2}(z) e^{g(z)}\), which proves the uniqueness assertion of the theorem.
It remains to construct an entire function that vanishes at all the points of the sequence \((a_{n})_{n\in\mathbb Z_+}\) and nowhere else.
Suppose that we are given a zero of order \(m\) at the origin, and that \(a_{1}, a_{2}, \ldots\) are all non-zero. Then we define the Weierstrass product by \[f(z)=z^{m} \prod_{n=1}^{\infty} E_{n}\left(z / a_{n}\right).\]
We claim that this function has the required properties; that is,
\(f\) is entire with a zero of order \(m\) at the origin;
\(f\) has zeros at each point of the sequence \((a_{n})_{n\in\mathbb Z_+}\);
\(f\) vanishes nowhere else.
Fix \(R>0\), and suppose that \(z\) belongs to the disc \(|z|<R\). We shall prove that \(f\) has all the desired properties in this disc, and since \(R\) is arbitrary, this will prove the theorem.
We can consider two types of factors in the formula defining \(f\), with the choice depending on whether \(\left|a_{n}\right| \leq 2 R\) or \(\left|a_{n}\right|>2 R\).
There are only finitely many terms of the first kind (since \(\lim_{n\to\infty}\left|a_{n}\right| =\infty\)), and we see that the finite product vanishes at all \(z=a_{n}\) with \(\left|a_{n}\right|<R\).
If \(\left|a_{n}\right| \geq 2 R\), we have \(\left|z / a_{n}\right| \leq 1 / 2\), hence the previous lemma implies \[\left|E_{n}\left(z / a_{n}\right)-1\right| \leq 2e\left|\frac{z}{a_{n}}\right|^{n+1} \leq \frac{e}{2^{n}}.\]
Therefore, the product \[\prod_{\left|a_{n}\right| \geq 2 R} E_{n}\left(z / a_{n}\right)\] defines a holomorphic function when \(|z|<R\), and does not vanish in that disc by the previous propositions.
This shows that the function \(f\) has the desired properties, and the proof of Weierstrass’s theorem is complete. $$\tag*{$\blacksquare$}$$
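To make the construction concrete, the sketch below evaluates truncations of the Weierstrass product for the zero set \(a_{n}=n\) (with \(m=0\)); the evaluation points and the truncation levels are illustrative choices.

```python
import cmath

# Truncated Weierstrass products f_N(z) = prod_{n<=N} E_n(z/n) for the zeros a_n = n.
def E(k, z):
    return (1 - z) * cmath.exp(sum(z**j / j for j in range(1, k + 1)))

def f_truncated(z, N):
    p = 1.0 + 0.0j
    for n in range(1, N + 1):
        p *= E(n, z / n)
    return p

z = 2.5 + 1.0j
for N in (10, 20, 40, 80):
    print(N, f_truncated(z, N))          # the values stabilise quickly as N grows

print(abs(f_truncated(3.0 + 0.0j, 80)))  # 0.0: z = 3 is one of the prescribed zeros
```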
Theorem (Hadamard). Suppose \(f\) is entire and has growth order \(\rho_{0}\). Let \(k\in\mathbb Z_{\geq 0}\) be such that \(k \leq \rho_{0}<k+1\). If \(a_{1}, a_{2}, \ldots\) denote the (non-zero) zeros of \(f\), then \[f(z)=e^{P(z)} z^{m} \prod_{n=1}^{\infty} E_{k}\left(z / a_{n}\right),\] where \(P\) is a polynomial of degree \(\leq k\), \(m\) is the order of the zero of \(f\) at \(z=0\), and \(E_{k}\) is the canonical factor of degree \(k\).
We gather a few lemmas needed in the proof of Hadamard’s theorem.
Lemma. The canonical factors satisfy \[\left|E_{k}(z)\right| \geq e^{-c|z|^{k+1}} \quad \text { if }\quad |z| \leq 1 / 2,\] and \[\left|E_{k}(z)\right| \geq|1-z| e^{-c|z|^{k}} \quad \text { if }\quad |z| \geq 1 / 2.\] Here, we allow the implied constant \(c=c_k\) to depend on \(k\in\mathbb N\).
If \(|z| \leq 1 / 2\) we can use the power series to define the logarithm of \(1-z\), so that \[E_{k}(z)=e^{\log (1-z)+\sum_{n=1}^{k} z^{n} / n}=e^{-\sum_{n=k+1}^{\infty} z^{n} / n}=e^{w}.\]
Since \(\left|e^{w}\right| \geq e^{-|w|}\) and \(|w| \leq c|z|^{k+1}\), the first part follows.
For the second part, simply observe that if \(|z| \geq 1 / 2\), then \[\left|E_{k}(z)\right|=|1-z|\left|e^{z+z^{2} / 2+\cdots+z^{k} / k}\right|,\] and that there exists \(c^{\prime}>0\) such that \[\left|e^{z+z^{2} / 2+\cdots+z^{k} / k}\right| \geq e^{-\left|z+z^{2} / 2+\cdots+z^{k} / k\right|} \geq e^{-c^{\prime}|z|^{k}}. \qquad \tag*{$\blacksquare$}\]
Lemma. For any \(s\in\mathbb R_+\) with \(\rho_{0}<s<k+1\), we have \[\left|\prod_{n=1}^{\infty} E_{k}\left(z / a_{n}\right)\right| \geq e^{-c|z|^{s}},\] except possibly when \(z\) belongs to the union of the discs centered at \(a_{n}\) of radius \(\left|a_{n}\right|^{-k-1}\), for \(n\in\mathbb Z_+\).
First, we write \[\prod_{n=1}^{\infty} E_{k}\left(z / a_{n}\right)=\prod_{\left|a_{n}\right| \leq 2|z|} E_{k}\left(z / a_{n}\right) \prod_{\left|a_{n}\right|>2|z|} E_{k}\left(z / a_{n}\right)\]
For the second product the estimate asserted above holds for all \(z\in\mathbb C\). Indeed, by the previous lemma \[\begin{aligned} \bigg|\prod_{\left|a_{n}\right|>2|z|} E_{k}\left(z / a_{n}\right)\bigg| & =\prod_{\left|a_{n}\right|>2|z|}\left|E_{k}\left(z / a_{n}\right)\right| \\ & \geq \prod_{\left|a_{n}\right|>2|z|} e^{-c | z / a_{n}|^{k+1}} \geq e^{-c|z|^{k+1} \sum_{\left|a_{n}\right|>2|z|}\left|a_{n}\right|^{-k-1}} . \end{aligned}\]
But \(\left|a_{n}\right|>2|z|\) and \(s<k+1\), so we must have \[\left|a_{n}\right|^{-k-1}=\left|a_{n}\right|^{-s}\left|a_{n}\right|^{s-k-1} \leq C\left|a_{n}\right|^{-s}|z|^{s-k-1} .\]
Therefore, the fact that \(\sum_{n\in\mathbb Z_+}\left|a_{n}\right|^{-s}\) converges implies that \[\bigg|\prod_{\left|a_{n}\right|>2|z|} E_{k}\left(z / a_{n}\right)\bigg| \geq e^{-c|z|^{s}}\] for some constant \(c>0\), which may depend on \(k\) and \(s\).
To estimate the first product, we use the second part of the previous lemma, and write \[\begin{align*} \bigg|\prod_{\left|a_{n}\right| \leq 2|z|} E_{k}\left(z / a_{n}\right)\bigg| \geq \prod_{\left|a_{n}\right| \leq 2|z|}\bigg|1-\frac{z}{a_{n}}\bigg| \prod_{\left|a_{n}\right| \leq 2|z|} e^{-c\left|z / a_{n}\right|^{k}}. \tag{*} \end{align*}\]
We now note that \[\prod_{\left|a_{n}\right| \leq 2|z|} e^{-c\left|z / a_{n}\right|^{k}} =e^{-c|z|^{k} \sum_{\left|a_{n}\right| \leq 2|z|}\left|a_{n}\right|^{-k}},\] and again, we have \(\left|a_{n}\right|^{-k}=\left|a_{n}\right|^{-s}\left|a_{n}\right|^{s-k} \leq C\left|a_{n}\right|^{-s}|z|^{s-k}\), thereby proving that \[\prod_{\left|a_{n}\right| \leq 2|z|} e^{-c\left|z / a_{n}\right|^{k}} \geq e^{-c^{\prime}|z|^{s}}.\]
The estimate on the first product on the right-hand side of (*) will require the restriction on \(z\) imposed in the statement of the lemma.
Indeed, whenever \(z\) does not belong to a disc of radius \(\left|a_{n}\right|^{-k-1}\) centered at \(a_{n}\), we must have \(\left|a_{n}-z\right| \geq\left|a_{n}\right|^{-k-1}\).
Therefore \[\begin{aligned} \prod_{\left|a_{n}\right| \leq 2|z|}\left|1-\frac{z}{a_{n}}\right| & =\prod_{\left|a_{n}\right| \leq 2|z|}\left|\frac{a_{n}-z}{a_{n}}\right| \\ & \geq \prod_{\left|a_{n}\right| \leq 2|z|}\left|a_{n}\right|^{-k-1}\left|a_{n}\right|^{-1} \\ & =\prod_{\left|a_{n}\right| \leq 2|z|}\left|a_{n}\right|^{-k-2} . \end{aligned}\]
Finally, the estimate for the first product follows from the fact that, for all sufficiently large \(|z|\), \[\begin{aligned} (k+2) \sum_{\left|a_{n}\right| \leq 2|z|} \log \left|a_{n}\right| & \leq(k+2) \mathfrak{n}_f(2|z|) \log 2|z| \\ & \leq c|z|^{s} \log 2|z| \\ & \leq c^{\prime}|z|^{s^{\prime}} \end{aligned}\] for any \(s^{\prime}>s\), where the second inequality follows from \(\mathfrak{n}_f(2|z|) \leq c|z|^{s}\).
Since the only restriction on \(s\) is \(\rho_{0}<s<k+1\), we may begin with \(s\) sufficiently close to \(\rho_{0}\) so that \(s^{\prime}\) still satisfies \(\rho_{0}<s^{\prime}<k+1\); the assertion of the lemma is then established with \(s\) replaced by \(s^{\prime}\). $$\tag*{$\blacksquare$}$$
Corollary. There exists a sequence of radii, \(r_{1}, r_{2}, \ldots\), with \(r_{m} \rightarrow \infty\), such that \[\bigg|\prod_{n=1}^{\infty} E_{k}\left(z / a_{n}\right)\bigg| \geq e^{-c|z|^{s}} \quad \text { for } \quad |z|=r_{m}.\]
Proof.
Since \(\sum_{n\in\mathbb Z_+}\left|a_{n}\right|^{-k-1}<\infty\), there exists \(N\in\mathbb Z_+\) so that \[\sum_{n=N}^{\infty}\left|a_{n}\right|^{-k-1}<1 / 10.\]
Therefore, given any two consecutive large integers \(L\) and \(L+1\), we can find \(r\in\mathbb R_+\) with \(L \leq r \leq L+1\), such that the circle of radius \(r\) centered at the origin does not intersect the forbidden discs from the previous lemma.
For otherwise, the union of the intervals \[I_{n}=\left[\left|a_{n}\right|-\frac{1}{\left|a_{n}\right|^{k+1}},\left|a_{n}\right|+\frac{1}{\left|a_{n}\right|^{k+1}}\right], \quad n \geq N\] (which are of length \(2\left|a_{n}\right|^{-k-1}\)) would cover the whole interval \([L, L+1]\).
This would imply \(2 \sum_{n=N}^{\infty}\left|a_{n}\right|^{-k-1} \geq 1\), which is a contradiction. We can then apply the previous lemma with \(|z|=r\) to conclude the proof. $$\tag*{$\blacksquare$}$$
We now turn to the proof of Hadamard's theorem. Let \[E(z)=z^{m} \prod_{n=1}^{\infty} E_{k}\left(z / a_{n}\right).\]
To prove that \(E\) is entire, we repeat the argument from the proof of Weierstrass's theorem. Namely, we have \[\left|E_{k}\left(z / a_{n}\right)-1 \right| \leq 2e\left|\frac{z}{a_{n}}\right|^{k+1}, \quad \text { for all large } \quad n\in\mathbb Z_+,\] and the series \(\sum_{n\in \mathbb Z_+}\left|a_{n}\right|^{-k-1}\) converges, because \(k+1>\rho_{0}\).
Moreover, \(E\) has exactly the zeros of \(f\) (with the same multiplicities), therefore \(f / E\) is holomorphic and nowhere vanishing. Hence \[\frac{f(z)}{E(z)}=e^{g(z)}\] for some entire function \(g\).
By the fact that \(f\) has growth order \(\rho_{0}\), and because of the estimate from below for \(E\) obtained in the previous corollary, we have \[e^{\operatorname{Re}(g(z))}=\left|\frac{f(z)}{E(z)}\right| \leq c^{\prime} e^{c|z|^{s}}, \quad \text{ whenever } \quad |z|=r_{m}.\]
This proves that \[\operatorname{Re}(g(z)) \leq C|z|^{s}, \quad \text { for } \quad |z|=r_{m},\] where \((r_m)_{m\in\mathbb Z_+}\subseteq\mathbb R_+\) is a sequence such that \(\lim_{m\to \infty}r_m=\infty\).
We have to prove that \(g\) is a polynomial of degree \(\leq k\); since \(s<k+1\), it suffices to show that the power series coefficients of \(g\) vanish for all indices \(n>s\).
We can expand \(g\) in a power series centered at the origin, \[g(z)=\sum_{n=0}^{\infty} b_{n} z^{n},\] where we write \(b_{n}\) for the coefficients since \(a_{n}\) already denotes the zeros of \(f\).
As a simple application of Cauchy’s integral formulas, we may write \[\frac{1}{2 \pi} \int_{0}^{2 \pi} g\left(r e^{i \theta}\right) e^{-i n \theta} d \theta = \begin{cases} b_{n} r^{n} & \text { if } n \geq 0, \\ 0 & \text { if } n<0. \end{cases}\]
By taking complex conjugates we find that \[\frac{1}{2 \pi} \int_{0}^{2 \pi} \overline{g\left(r e^{i \theta}\right)} e^{-i n \theta} d \theta=0\] whenever \(n>0\).
Writing \(u=\operatorname{Re}(g)\), so that \(2 u=g+\overline{g}\), we add the above two equations and obtain \[b_{n} r^{n}=\frac{1}{\pi} \int_{0}^{2 \pi} u\left(r e^{i \theta}\right) e^{-i n \theta} d \theta, \quad \text { whenever } \quad n>0.\]
For \(n=0\) we find that \[2 \operatorname{Re}\left(b_{0}\right)=\frac{1}{\pi} \int_{0}^{2 \pi} u\left(r e^{i \theta}\right) d \theta.\]
Now we recall the simple fact that whenever \(n \neq 0\), the integral of \(e^{-i n \theta}\) over \([0,2\pi]\) vanishes. Therefore \[b_{n}=\frac{1}{\pi r^{n}} \int_{0}^{2 \pi}\left[u\left(r e^{i \theta}\right)-C r^{s}\right] e^{-i n \theta} d \theta \quad \text { when } \quad n>0.\]
Taking \(r=r_m\) and using \(u(r_m e^{i\theta})\leq C r_m^{s}\), we consequently obtain \[\left|b_{n}\right| \leq \frac{1}{\pi r_m^{n}} \int_{0}^{2 \pi}\left[C r_m^{s}-u\left(r_m e^{i \theta}\right)\right] d \theta = 2 C r_m^{s-n}-2 \operatorname{Re}\left(b_{0}\right) r_m^{-n}.\]
Letting \(m\to \infty\) we deduce that \(b_{n}=0\) for every \(n>s\); since \(s<k+1\), this shows that \(g\) is a polynomial of degree \(\leq k\), completing the proof of Hadamard’s theorem. $$\tag*{$\blacksquare$}$$
The function \(\sin \pi s\) is entire and of order one, and its zeros are at \(s=0, \pm 1, \pm 2, \ldots\), and so, by Hadamard’s theorem (grouping the canonical factors of the zeros \(n\) and \(-n\), \(E_{1}(s/n)E_{1}(-s/n)=1-s^{2}/n^{2}\)) we can write \[\sin \pi s=s e^{H(s)} \prod_{n=1}^{\infty}\left(1-\frac{s^{2}}{n^{2}}\right),\] where \(H(s)=a s+b\).
Taking the logarithmic derivative of this equation, we find that \[\pi \frac{\cos \pi s}{\sin \pi s}=\frac{1}{s}+H^{\prime}(s)-\sum_{n=1}^{\infty} \frac{2 s}{n^{2}-s^{2}} .\]
Passage to the limit as \(s \rightarrow 0\) gives \(a=0\), and so \(H(s)=b\). Thus, with \(c=e^{b}\), \[\frac{\sin \pi s}{s}=c \prod_{n=1}^{\infty}\left(1-\frac{s^{2}}{n^{2}}\right) .\]
Passing again to the limit as \(s \rightarrow 0\) gives \(c=\pi\), i.e. \[\sin \pi s=\pi s \prod_{n=1}^{\infty}\left(1-\frac{s^{2}}{n^{2}}\right).\]
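A quick numerical check of the product formula at a sample real point; the point \(s=0.3\) and the truncation level are illustrative choices.

```python
import math

# Check of sin(pi s) = pi s * prod_{n>=1} (1 - s^2/n^2), truncated at N factors.
s = 0.3
N = 200_000
product = math.pi * s
for n in range(1, N + 1):
    product *= 1.0 - (s * s) / (n * n)

print(product, math.sin(math.pi * s))   # agree to roughly six decimal places
```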
The Euler gamma function \(\Gamma(s)\) is defined by the equation \[\frac{1}{\Gamma(s)}=s e^{\gamma s} \prod_{n=1}^{\infty}\left(1+\frac{s}{n}\right) e^{-s / n}\] where \(\gamma\) is Euler’s constant.
It follows from the definition that \(1/\Gamma(s)\) is an entire function of order at most one.
Moreover, \(\Gamma(s)\) is an analytic function in the entire \(s\)-plane except for the points \(s=0,-1,-2, \ldots\), where it has simple poles.
For every \(s\in\mathbb C\setminus\{-n: n\in\mathbb N\}\), we have \[\Gamma(s)=\frac{1}{s} \prod_{n=1}^{\infty}\left(1+\frac{1}{n}\right)^{s}\left(1+\frac{s}{n}\right)^{-1}.\] In other words, \(\Gamma(s)\) is a meromorphic function on \(\mathbb C\) with simple poles at \(0\) and at the negative integers and with no zeros.
From the definition of an infinite product and from the definition of the function \(\Gamma(s)\), we obtain \[\begin{aligned} \frac{1}{\Gamma(s)} & =s \lim _{m \rightarrow \infty} e^{s\left(1+\frac{1}{2}+\ldots+\frac{1}{m}-\log m\right)} \cdot \lim _{m \rightarrow \infty} \prod_{n=1}^{m}\left(1+\frac{s}{n}\right) e^{-\frac{s}{n}} \\ & =s \lim _{m \rightarrow \infty} m^{-s} \prod_{n=1}^{m}\left(1+\frac{s}{n}\right)\\ &=s \lim _{m \rightarrow \infty} \prod_{n=1}^{m-1}\left(1+\frac{1}{n}\right)^{-s} \prod_{n=1}^{m}\left(1+\frac{s}{n}\right) \\ & =s \lim _{m \rightarrow \infty} \prod_{n=1}^{m}\left(1+\frac{1}{n}\right)^{-s}\left(1+\frac{s}{n}\right)\left(1+\frac{1}{m}\right)^{s} \\ & =s \prod_{n=1}^{\infty}\left(1+\frac{1}{n}\right)^{-s}\left(1+\frac{s}{n}\right), \end{aligned}\] which is what we had to prove.$$\tag*{$\blacksquare$}$$
For every \(s\in\mathbb C\setminus\{-n: n\in\mathbb N\}\), we have \[\Gamma(s)=\lim _{n \rightarrow \infty} \frac{(n-1)! \cdot n^{s}}{s(s+1) \cdot \ldots \cdot (s+n-1)}.\]
Proof.
From the previous theorem we have \[\begin{aligned} \Gamma(s)&=\lim_{n\to \infty}s^{-1} \prod_{m=1}^{n-1}\left(1+\frac{1}{m}\right)^{s}\left(1+\frac{s}{m}\right)^{-1}\\ &=\lim_{n\to \infty}\frac{2^s\cdot\frac{3^s}{2^s}\cdot\ldots\cdot\frac{n^s}{(n-1)^s}}{s\cdot \frac{(s+1)}{1}\cdot\ldots\cdot \frac{(s+n-1)}{n-1}}\\ &=\lim_{n\to \infty}\frac{1\cdot 2\cdot\ldots\cdot (n-1)n^s}{s\cdot (s+1)\cdot\ldots\cdot (s+n-1)} \end{aligned}\]
as desired.$$\tag*{$\blacksquare$}$$
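The sketch below checks this limit numerically at a sample real point against Python's `math.gamma`; the point \(s=0.7\), the truncation levels, and the factor-by-factor evaluation (used to avoid forming a huge factorial) are illustrative choices.

```python
import math

# Gauss's limit: Gamma(s) = lim_n (n-1)! n^s / (s(s+1)...(s+n-1)),
# evaluated factor by factor so that no large factorial is ever formed.
def gauss_approx(s, n):
    value = n**s / s
    for j in range(1, n):
        value *= j / (s + j)
    return value

s = 0.7
for n in (10, 100, 1000, 10_000):
    print(n, gauss_approx(s, n), math.gamma(s))   # the approximations approach Gamma(s)
```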
We also have \(\Gamma(1)=\Gamma(2)=1\).
We have \(\Gamma(s+1)=s \Gamma(s)\) for all \(s\in\mathbb C\setminus\{-n: n\in\mathbb N\}\).
In particular, \(\Gamma(n+1)=n!\) for all \(n\in\mathbb N\), and \({\rm res}_{s=-m}\Gamma(s)=\frac{(-1)^m}{m!}\).
Proof.
We have \[\begin{aligned} \frac{\Gamma(s+1)}{\Gamma(s)}&=\frac{s}{s+1} \lim _{m \rightarrow \infty} \prod_{n=1}^{m} \frac{\left(1+\frac{1}{n}\right)^{s+1}\left(1+\frac{s+1}{n}\right)^{-1}}{\left(1+\frac{1}{n}\right)^{s}\left(1+\frac{s}{n}\right)^{-1}} \\ & =\frac{s}{s+1} \lim _{m \rightarrow \infty} \prod_{n=1}^{m} \frac{n+1}{n} \cdot \frac{n+s}{n+s+1}\\ &=\frac{s}{s+1} \lim _{m \rightarrow \infty} \frac{(m+1)(s+1)}{m+1+s}=s . \end{aligned}\]
This completes the proof.$$\tag*{$\blacksquare$}$$
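A short numerical illustration of \(\Gamma(n+1)=n!\) and of the residue formula; the offset \(\varepsilon\) and the sample values are illustrative choices.

```python
import math

# Gamma(n+1) = n! for small n, and res_{s=-m} Gamma(s) = (-1)^m / m! estimated
# by evaluating (s + m) * Gamma(s) just to the right of the pole at s = -m.
for n in range(6):
    print(n, math.gamma(n + 1), math.factorial(n))

m, eps = 3, 1e-7
s = -m + eps
print((s + m) * math.gamma(s), (-1) ** m / math.factorial(m))
```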
Duplication formula. \[\Gamma(2s)\Gamma\left(1/2\right)= 2^{2s-1} \Gamma\left(s\right) \Gamma\left(s+1/2\right) \quad \text{ for all }\quad s\in\mathbb C \text{ with } 2s\notin-\mathbb N.\]
Reflection formula. \[\frac{\sin \pi s}{\pi}=\frac{1}{\Gamma(s) \Gamma(1-s)} \quad \text{ for all }\quad s\in\mathbb C,\] with the convention that \(1/\Gamma\) is entire (equal to \(0\) at the poles of \(\Gamma\)).
Proof.
We know that \[\frac{\sin \pi s}{\pi s}= \prod_{n=1}^{\infty}\left(1-\frac{s^{2}}{n^{2}}\right).\]
On the other hand, multiplying the product defining \(1/\Gamma(s)\) by the one defining \(1/\Gamma(-s)\), we have \[\frac{1}{\Gamma(s)\Gamma(-s)}=-s^2\prod_{n=1}^{\infty}\left(1-\frac{s^{2}}{n^{2}}\right).\]
But we also know that \(\Gamma(1-s)=-s\Gamma(-s)\), and the result follows.
$$\tag*{$\blacksquare$}$$
As a corollary we obtain that \(\Gamma(1/2)=\sqrt{\pi}\).
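The following checks evaluate the duplication formula, the reflection formula, and the corollary \(\Gamma(1/2)=\sqrt{\pi}\) at a sample real point; the point \(s=0.3\) is an illustrative choice.

```python
import math

s = 0.3
# duplication formula
print(math.gamma(2 * s) * math.gamma(0.5), 2 ** (2 * s - 1) * math.gamma(s) * math.gamma(s + 0.5))
# reflection formula
print(math.sin(math.pi * s) / math.pi, 1.0 / (math.gamma(s) * math.gamma(1.0 - s)))
# Gamma(1/2) = sqrt(pi)
print(math.gamma(0.5), math.sqrt(math.pi))
```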
Suppose that \(\operatorname{Re}(s)>0\). Then \[\begin{aligned} \Gamma(s)=\int_{0}^{\infty} e^{-t} t^{s-1} d t. \end{aligned}\]
Proof.
We know that \[\Gamma(s)=\lim _{n \rightarrow \infty} \frac{n!\cdot n^{s}}{s(s+1)(s+2) \cdots(s+n)},\] which follows from the previous corollary because \(n /(s+n) \rightarrow 1\) as \(n\to\infty\).
We have to establish two things. Firstly, we will show that \[\int_{0}^{n}\left(1-\frac{t}{n}\right)^{n} t^{s-1} d t=\frac{n!\cdot n^{s}}{s(s+1)(s+2)\cdot \ldots \cdot (s+n)} \quad \text{ for all }\quad n\in\mathbb Z_+.\]
Secondly, we will show that
\[\lim _{n \rightarrow \infty} \int_{0}^{n}\left(1-\frac{t}{n}\right)^{n} t^{s-1} d t=\int_{0}^{\infty} e^{-t} t^{s-1} d t,\] which will complete the proof.
Indeed, when \(\operatorname{Re}(s)>0\) the above integral converges and we have \[\begin{aligned} \int_{0}^{n}\left(1-\frac{t}{n}\right)^{n} t^{s-1} d t & =n^{s} \int_{0}^{1}(1-u)^{n} u^{s-1} d u \\ & =n^{s} \frac{n}{s} \int_{0}^{1}(1-u)^{n-1} u^{s} d u \\ & =n^{s} \frac{n(n-1)}{s(s+1)} \int_{0}^{1}(1-u)^{n-2} u^{s+1} d u \\ & \hspace{2cm} \vdots \\ & =n^{s} \frac{n(n-1) \cdot \ldots\cdot 1}{s(s+1) \cdot \ldots\cdot (s+n-1)} \int_{0}^{1} u^{s+n-1} d u \\ & =\frac{n!\cdot n^{s}}{s(s+1)(s+2) \cdot \ldots\cdot (s+n)} . \end{aligned}\]
Thus, it suffices to prove that \[\lim _{n \rightarrow \infty} \int_{0}^{n}\left(1-\frac{t}{n}\right)^{n} t^{s-1} d t=\int_{0}^{\infty} e^{-t} t^{s-1} d t.\]
To this end, we consider the functions \[f_{n}(t)= \begin{cases}(1-t / n)^{n} t^{s-1} & \text { if } 0 \leq t \leq n, \\ 0 & \text { if } t>n .\end{cases}\]
Each of these functions is in \(L^{1}([0, \infty))\) and satisfies the inequality \[\left|\: f_{n}(t)\right| \leq e^{-t} t^{\sigma-1}, \quad \text{ where } \quad \sigma=\operatorname{Re}(s).\]
The last inequality is easily verified by taking logarithms and noting \[n \log \left(1-\frac{t}{n}\right)=-t-\frac{t^{2}}{2 n}-\frac{t^{3}}{3 n^{2}}-\cdots<-t .\]
Furthermore, \[\lim _{n \rightarrow \infty} f_{n}(t)=t^{s-1} \lim _{n \rightarrow \infty}\left(1-\frac{t}{n}\right)^{n}=e^{-t} t^{s-1} .\]
Since the function \(e^{-t} t^{\sigma-1}\) is in \(L^{1}([0, \infty))\), the dominated convergence theorem yields \[\lim _{n \rightarrow \infty} \int_{0}^{\infty} f_{n}(t) d t =\int_{0}^{\infty} \lim _{n \rightarrow \infty} f_{n}(t) d t =\int_{0}^{\infty} e^{-t} t^{s-1} d t,\] which completes the proof.$$\tag*{$\blacksquare$}$$
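A numerical sketch of the integral representation and of the approximating integrals over \([0,n]\), using a hand-rolled composite Simpson rule; the point \(s=2.5\), the cutoffs, and the number of subintervals are illustrative choices (and \(s>1\) is chosen so the integrand is bounded near \(0\)).

```python
import math

def simpson(f, a, b, m=20_000):
    # composite Simpson rule with m (even) subintervals
    h = (b - a) / m
    total = f(a) + f(b)
    for i in range(1, m):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

s = 2.5

# Gamma(s) as the integral of e^{-t} t^{s-1}; the tail beyond t = 50 is negligible here.
gamma_integral = simpson(lambda t: math.exp(-t) * t ** (s - 1), 0.0, 50.0)
print(gamma_integral, math.gamma(s))

# The integrals over [0, n] of (1 - t/n)^n t^{s-1} increase towards Gamma(s).
for n in (10, 100, 1000):
    print(n, simpson(lambda t, n=n: (1 - t / n) ** n * t ** (s - 1), 0.0, float(n)))
```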
Suppose that \(s\in\mathbb C\) is such that \(|\arg s|<\pi\). Then \[\log \Gamma(s)=(s-1 / 2) \log s-s+\log \sqrt{2 \pi}+\int_{0}^{\infty} \frac{\psi(u)}{u+s} d u.\] Here \(\log s\) denotes the principal branch of the logarithm and \(\psi(u)=\{u\}-1 / 2\).
Suppose that \(0<\delta<\pi\) and \(|\arg s|<\pi-\delta\). Then \[\log \Gamma(s)=(s-1 / 2) \log s-s+\log \sqrt{2 \pi}+O\left(|s|^{-1}\right)\] uniformly as \(|s|\to \infty\), and \(\frac{\Gamma^{\prime}(s)}{\Gamma(s)}=\log s+O\left(|s|^{-1}\right),\) where the implied constants depend at most on \(\delta\).
Suppose that \(\alpha \leq \sigma \leq \beta\) and \(|t| \geq 1\). Then \[|\Gamma(\sigma+i t)|=\sqrt{2 \pi}|t|^{\sigma-1 / 2} \exp (-\pi|t| / 2)\big(1+O(|t|^{-1})\big),\] where the implied constant depends at most on \(\alpha\) and \(\beta\).
Definition (Riemann zeta-function). The Riemann zeta-function \(\zeta(s)\) is defined for all complex numbers \(s=\sigma+i t\) such that \(\sigma>1\) by \[\zeta(s)=\sum_{n=1}^{\infty} \frac{1}{n^{s}}.\]
By absolute convergence, for all complex numbers \(s=\sigma+i t\) such that \(\sigma>1\) we also have the Euler product formula \[\zeta(s)=\prod_{p\in\mathbb P}\left(1-\frac{1}{p^{s}}\right)^{-1},\] where \(\mathbb P\) denotes the set of prime numbers.
The Euler product formula enables us to see that \(\zeta(s) \neq 0\) in the half-plane \(\sigma>1\). Indeed, for \(\sigma>1\) we have \[\begin{aligned} \frac{1}{|\zeta(s)|}=\prod_{p\in\mathbb P}\left|1-\frac{1}{p^{s}}\right| \le \prod_{p\in\mathbb P}\left(1+\frac{1}{p^{\sigma}}\right)\le \sum_{n=1}^\infty\frac{1}{n^\sigma}\le 1+\int_1^\infty\frac{dt}{t^\sigma}=\frac{\sigma}{\sigma-1}. \end{aligned}\] Thus \(|\zeta(s)|\ge\frac{\sigma-1}{\sigma}>0\).
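A numerical comparison of the Dirichlet series and the Euler product at \(s=2\), together with the classical value \(\zeta(2)=\pi^{2}/6\) for reference; the truncation levels and the simple sieve are illustrative choices.

```python
import math

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

s = 2.0
series = sum(1.0 / n**s for n in range(1, 200_001))
euler_product = 1.0
for p in primes_up_to(100_000):
    euler_product *= 1.0 / (1.0 - p ** (-s))

print(series, euler_product, math.pi**2 / 6)
```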
Euler’s summation formula. If \(f \in C^{1}([a, b])\), and \(\psi(x)=\{x\}-1/2\) for \(x\in\mathbb R\), then by summation by parts we obtain the following identity \[\sum_{a<n \leqslant b} f(n)=\int_{a}^{b} f(x) dx+f(a) \psi(a)-f(b) \psi(b) +\int_{a}^{b} f^{\prime}(x) \psi(x) dx.\]
From Euler's summation formula we can derive an analytic continuation of \(\zeta\) to the half-plane \(\sigma>0\).
Let \(x \geqslant 1\) be a real number and \(s=\sigma+i t\) with \(\sigma>1\). By the Euler summation formula with \(a=1\), \(b=x\) and \(f(u)=u^{-s}\), we can write \[\sum_{n \leqslant x} \frac{1}{n^{s}}=\frac{1}{2}+\frac{1-x^{1-s}}{s-1}-\frac{\psi(x)}{x^{s}}-s \int_{1}^{x} \frac{\psi(u)}{u^{s+1}} d u.\]
Taking \(x \to \infty\) we obtain \[\zeta(s)=\frac{1}{2}+\frac{1}{s-1}-s \int_{1}^{\infty} \frac{\psi(u)}{u^{s+1}} du.\tag{*}\]
Since \(|\psi(x)| \leqslant \frac{1}{2}\), the integral converges for \(\sigma>0\) and is uniformly convergent in every half-plane \(\sigma \geqslant \delta>0\).
This implies that it defines an analytic function in the half-plane \(\sigma>0\), and therefore (*) extends \(\zeta\) to a meromorphic function in this half-plane, which is analytic except for a simple pole at \(s=1\) with residue \(1\).
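The sketch below evaluates the right-hand side of (*) numerically: on each interval \([n,n+1]\) the integrand \((u-n-\tfrac12)u^{-s-1}\) is integrated in closed form and the results are summed. The cutoff and the sample points are illustrative choices; at \(s=2\) the result is compared with \(\zeta(2)=\pi^{2}/6\), and at \(s=1/2\) it returns approximately \(-1.4603\), the value of \(\zeta(1/2)\).

```python
import math

# zeta(s) = 1/2 + 1/(s-1) - s * int_1^inf psi(u) u^{-s-1} du, psi(u) = {u} - 1/2,
# valid for sigma > 0.  On [n, n+1] the integrand equals (u - n - 1/2) u^{-s-1}.
def zeta_continued(s, N=100_000):
    def F(u, c):
        # antiderivative of (u - c) * u^(-s-1)
        return u ** (1 - s) / (1 - s) + c * u ** (-s) / s
    I = 0.0
    for n in range(1, N + 1):
        c = n + 0.5
        I += F(n + 1.0, c) - F(float(n), c)
    return 0.5 + 1.0 / (s - 1.0) - s * I

print(zeta_continued(2.0), math.pi ** 2 / 6)   # matches zeta(2)
print(zeta_continued(0.5))                     # about -1.4603, i.e. zeta(1/2)
```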
Replacing \(t\) by \(\pi n^{2} x\) in the integral \(\Gamma(s/2)=\int_0^\infty e^{-t}t^{s/2-1}dt\) gives \[\pi^{-s / 2} \Gamma\left(\frac{s}{2}\right) n^{-s}=\int_{0}^{\infty} x^{s / 2-1} e^{-\pi n^{2} x} d x \quad \text{ for all } \quad \sigma>0.\]
Our goal is to sum both sides of this identity over \(n\). To this end, we define the following two theta functions. For all \(x>0\), we set \[\omega(x)=\sum_{n=1}^{\infty} e^{-\pi n^{2} x} \quad \text { and } \quad \theta(x)=2 \omega(x)+1=\sum_{n \in \mathbb{Z}} e^{-\pi n^{2}x}.\]
Then \(g(t) =e^{-\pi t^{2}}\) satisfies \(\int_{\mathbb{R}} g(t) dt=1\), and its Fourier transform is \[\widehat{g}(u)=e^{-\pi u^{2}}.\]
For a Schwartz function \(f\), the Poisson summation formula gives \(\sum_{n\in \mathbb Z}\widehat{f}(n)=\sum_{n\in \mathbb Z}f(n)\). Applying it to \(f(t)=g(\sqrt{x}\, t)\), whose Fourier transform is \(\widehat{f}(u)=x^{-1/2}\,\widehat g(u/\sqrt{x})=x^{-1/2}e^{-\pi u^{2}/x}\), we obtain \[\theta(x)=\sum_{n\in\mathbb Z}g(\sqrt{x}\,n)=x^{-1/2}\theta(x^{-1}) \quad \text{ for all } \quad x>0.\]
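A quick numerical check of the theta functional equation; the truncation level and the sample points are illustrative choices.

```python
import math

# theta(x) = x^{-1/2} theta(1/x), with the series truncated at N terms.
def theta(x, N=100):
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * x) for n in range(1, N + 1))

for x in (0.1, 0.5, 1.0, 3.0):
    print(x, theta(x), theta(1.0 / x) / math.sqrt(x))
```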
Summing the identity \(\pi^{-s / 2} \Gamma(s/2) n^{-s}=\int_{0}^{\infty} x^{s / 2-1} e^{-\pi n^{2} x} d x\) over \(n\in \mathbb Z_+\) and interchanging the sum and integral, we obtain for all \(\sigma>1\) that \[\pi^{-s / 2} \Gamma\left(\frac{s}{2}\right) \zeta(s)=\int_{0}^{\infty} x^{s / 2-1} \omega(x) d x,\] since the sum and integral converge absolutely in the half-plane \(\sigma>1\).
Splitting the integral \(\int_{0}^\infty=\int_0^1+\int_1^\infty\) and changing the variables \(x\mapsto 1 / x\) in the first integral yields \[\pi^{-s / 2} \Gamma\left(\frac{s}{2}\right) \zeta(s)=\int_{1}^{\infty} x^{s / 2-1} \omega(x) d x+\int_{1}^{\infty} x^{-s / 2-1} \omega\left(\frac{1}{x}\right) dx.\]
Using \(\theta(x^{-1})=x^{1/2}\theta(x)\) we may write \[\omega\left(\frac{1}{x}\right)=x^{1 / 2} \omega(x)+\frac{x^{1 / 2}-1}{2},\] and consequently we obtain \[\pi^{-s / 2} \Gamma\left(\frac{s}{2}\right) \zeta(s) =-\frac{1}{s}+\frac{1}{s-1}+\int_{1}^{\infty} \omega(x)\left(x^{s / 2}+x^{(1-s) / 2}\right) \frac{dx}{x},\] whenever \(\sigma>1\).
Let \[\begin{aligned} \Xi(s)&=\pi^{-s / 2} \Gamma(s / 2) \zeta(s)\\ &=-\frac{1}{s}+\frac{1}{s-1}+\frac{1}{2}\int_{1}^{\infty} (\theta(x)-1)\left(x^{s / 2}+x^{(1-s) / 2}\right) \frac{dx}{x}, \end{aligned}\] where \(\theta\) is the Theta function \[\theta(x)=\sum_{n \in \mathbb{Z}} e^{-\pi n^{2}x}.\]
Then the function \(\Xi(s)\) extends to a meromorphic function on the whole complex plane, with simple poles at \(s=0\) and \(s=1\), and satisfies the functional equation \(\Xi(s)=\Xi(1-s)\).
Thus the Riemann zeta-function extends to a meromorphic function on the whole complex plane, having a simple pole at \(s=1\) with residue \(1\) and no other poles. Furthermore, for all \(s \in \mathbb{C} \backslash\{1\}\), we have \[\zeta(s)=2^{s} \pi^{s-1} \sin \left(\frac{\pi s}{2}\right) \Gamma(1-s) \zeta(1-s).\]
For \(\sigma>1\) we have \[\begin{align*} \Xi(s)=-\frac{1}{s}+\frac{1}{s-1}+\int_{1}^{\infty} \omega(x)\left(x^{s / 2}+x^{(1-s) / 2}\right) \frac{dx}{x}. \tag{*} \end{align*}\]
Since \(\omega(x) =O (e^{-\pi x})\) as \(x \to \infty\), we infer that the integral is absolutely convergent for all \(s \in \mathbb{C}\), whereas the left-hand side is a meromorphic function on \(\sigma>0\). This implies the following:
The identity (*) is valid for all \(\sigma>0\).
The function \(\Xi(s)\) can be defined by this identity as a meromorphic function on \(\mathbb{C}\) with simple poles at \(s=0\) and \(s=1\).
Since the right-hand side of (*) is invariant under the substitution \(s \mapsto 1-s\), we obtain \(\Xi(s)=\Xi(1-s)\).
The function \(s \mapsto \xi(s):=s(s-1) \Xi(s)\) is entire on \(\mathbb{C}\). Indeed, for \(\sigma>0\) the factor \(s-1\) cancels the pole at \(s=1\); since \(\xi(s)=\xi(1-s)\), the same holds for \(\sigma<1\), and the two half-planes cover all of \(\mathbb{C}\).
It remains to show that the functional equation can be written as \[\zeta(s)=2^{s} \pi^{s-1} \sin \left(\frac{\pi s}{2}\right) \Gamma(1-s) \zeta(1-s).\]
Since \(\Xi(s)=\Xi(1-s)\), we have \[\Gamma(s / 2) \zeta(s)=\pi^{s/2}\Xi(s)=\pi^{s/2}\Xi(1-s)=\pi^{s-1 / 2} \Gamma\left(\frac{1-s}{2}\right) \zeta(1-s).\]
Multiplying both sides by \(\pi^{-1 / 2} 2^{s-1} \Gamma\left(\frac{1+s}{2}\right)\) and using the duplication formula, which asserts that \(\Gamma(s)= \pi^{-1 / 2}2^{s-1} \Gamma\left(s/2\right) \Gamma\left((s+1)/2\right)\), we see that \[\Gamma(s) \zeta(s)=(2 \pi)^{s-1} \Gamma\left(\frac{1-s}{2}\right) \Gamma\left(\frac{1+s}{2}\right) \zeta(1-s).\]
Now the reflection formula \(\frac{\sin \pi s}{\pi}=\frac{1}{\Gamma(s) \Gamma(1-s)}\), implies that \[\zeta(s)=(2 \pi)^{s-1}\left(\frac{\sin \pi s}{\sin (\pi(1+s) / 2)}\right) \Gamma(1-s) \zeta(1-s)\] and the result follows from the identity \[\sin \pi s=2 \sin \left(\frac{\pi s}{2}\right) \sin \left(\frac{\pi}{2}(1+s)\right).\]
The proof is complete. $$\tag*{$\blacksquare$}$$
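As a final numerical check, the sketch below verifies the functional equation at a real point \(0<s<1\), computing both \(\zeta(s)\) and \(\zeta(1-s)\) with the continuation (*) from above; the point \(s=0.3\) and the cutoff are illustrative choices.

```python
import math

# zeta on sigma > 0 via (*): zeta(s) = 1/2 + 1/(s-1) - s * int_1^inf psi(u) u^{-s-1} du.
def zeta(s, N=100_000):
    def F(u, c):
        # antiderivative of (u - c) * u^(-s-1) on a unit interval [n, n+1]
        return u ** (1 - s) / (1 - s) + c * u ** (-s) / s
    I = sum(F(n + 1.0, n + 0.5) - F(float(n), n + 0.5) for n in range(1, N + 1))
    return 0.5 + 1.0 / (s - 1.0) - s * I

s = 0.3
lhs = zeta(s)
rhs = 2**s * math.pi ** (s - 1) * math.sin(math.pi * s / 2) * math.gamma(1 - s) * zeta(1 - s)
print(lhs, rhs)   # the two sides of the functional equation agree
```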
\(\zeta(s)\) has simple zeros at \(s=-2,-4,-6,-8, \ldots\). Indeed, since the integral in (*) is absolutely convergent for all \(s \in \mathbb{C}\) and since \(\omega(x)>0\) for all \(x>0\), we have \[\Xi(-2 n)=\frac{1}{2 n}-\frac{1}{2 n+1}+\int_{1}^{\infty} \omega(x)\left(x^{-n}+x^{n+1 / 2}\right) \frac{dx}{x}>0\] for all \(n\in\mathbb Z_+\). The result follows from the fact that \(\Gamma(s / 2)\) has simple poles at \(s=-2 n\), so \(\zeta(s)=\pi^{s/2}\Xi(s)/\Gamma(s/2)\) must have a simple zero there.
These zeros are the only ones lying in the region \(\sigma<0\). Indeed, if \(\sigma<0\) then \(\operatorname{Re}(1-s)>1\), so \(\Gamma(1-s)\zeta(1-s)\neq 0\), and the functional equation shows that \(\zeta(s)=0\) precisely when \(\sin(\pi s/2)=0\), i.e. at \(s=-2,-4,\ldots\). They are called the trivial zeros of the Riemann zeta-function.
For all \(0<\sigma<1\), we have \(\zeta(\sigma) \neq 0\). Indeed, from the representation \[\zeta(s)=\frac{s}{s-1}-s \int_{1}^{\infty} \frac{\{x\}}{x^{s+1}} dx,\] valid for all \(\sigma>0\) (it follows from (*) and \(\psi(x)=\{x\}-\tfrac12\)), we infer that, for all \(0<\sigma<1\), \[\left|\zeta(\sigma)-\frac{\sigma}{\sigma-1}\right|<\sigma \int_{1}^{\infty} \frac{dx}{x^{\sigma+1}}=1,\] which implies that \(\zeta(\sigma)<1+\sigma /(\sigma-1)\) for all \(0<\sigma<1\).
Since \(1+\sigma/(\sigma-1)=(2\sigma-1)/(\sigma-1)\leq 0\) for \(\frac{1}{2} \leqslant \sigma<1\), we get \(\zeta(\sigma)<0\) on this range; for \(0<\sigma<\frac{1}{2}\), the functional equation \(\zeta(\sigma)=2^{\sigma}\pi^{\sigma-1}\sin(\pi\sigma/2)\Gamma(1-\sigma)\zeta(1-\sigma)\) shows that \(\zeta(\sigma)\neq 0\) as well, since none of the factors on the right vanishes.