Definition. A function \(f:I\to \mathbb R\) is unimodal on an interval \(I=[a, b]\) if there exists a number \(t_{0} \in I\) such that \(f(t)\) is increasing for \(t \leq t_{0}\) and decreasing for \(t \geq t_{0}\). For example, \(f(t)=\log ^{k} t / t\) is unimodal on the interval \([1, \infty)\) with \(t_{0}=e^{k}\).
Let \(a,b\in\mathbb Z\) with \(a<b\), and let \(f:[a, b]\to \mathbb R\) be monotonic. Then \[\min \{f(a), f(b)\} \leq \sum_{n=a}^{b} f(n)-\int_{a}^{b} f(t) d t \leq \max \{f(a), f(b)\}.\]
Let \(x, y\in \mathbb R\) with \(y<\lfloor x\rfloor\), and let \(f: [y, x]\to \mathbb R_+\) be monotonic. Then \[\bigg|\sum_{y<n \leq x} f(n)-\int_{y}^{x} f(t) d t\bigg| \leq \max \{f(y), f(x)\}.\]
Let \(f:[1, \infty)\to \mathbb R_+\) be unimodal. Then \[F(x)=\sum_{n \leq x} f(n)=\int_{1}^{x} f(t) d t+O(1).\]
If \(f: [a, b]\to \mathbb R\) is increasing, then \[\int_{a}^{b} f(t) d t=\sum_{k=a}^{b-1}\int_k^{k+1}f(t)dt\le \sum_{k=a+1}^bf(k),\] and also \[\int_{a}^{b} f(t) d t=\sum_{k=a}^{b-1}\int_k^{k+1}f(t)dt\ge \sum_{k=a}^{b-1}f(k),\]
Hence, we conclude that \[f(a)+\int_{a}^{b} f(t) d t \leq \sum_{k=a}^{b} f(k) \leq f(b)+\int_{a}^{b} f(t) d t.\]
If \(f: [a, b]\to \mathbb R\) is decreasing, then \[f(b)+\int_{a}^{b} f(t) d t \leq \sum_{k=a}^{b} f(k) \leq f(a)+\int_{a}^{b} f(t) d t.\]
Combining those two inequalities we obtain the desired conclusion.
Let \(f: [y, x]\to \mathbb R_+\) be increasing. Let \(a=\lfloor y\rfloor+1\) and \(b=\lfloor x\rfloor\). We have \(y<a \leq b \leq x\) and \[\sum_{y<n \leq x} f(n) =\sum_{a \leq n \leq b} f(n) \leq \int_{a}^{b} f(t) d t+f(b) \leq \int_{y}^{x} f(t) d t+f(x)\]
Since \(f(a) \geq \int_{y}^{a} f(t) d t\) and \(f(x) \geq \int_{b}^{x} f(t) d t,\) it follows that \[\begin{aligned} \sum_{y<n \leq x} f(n) & \geq \int_{a}^{b} f(t) d t+f(a) \\ & \geq \int_{y}^{x} f(t) d t-\int_{b}^{x} f(t) d t+f(a)-\int_{y}^{a} f(t) d t \\ & \geq \int_{y}^{x} f(t) d t-f(x) \end{aligned}\]
Therefore, \[\bigg|\sum_{y<n \leq x} f(n)-\int_{y}^{x} f(t) d t\bigg| \leq f(x).\]
If \(f: [y, x]\to \mathbb R_+\) is decreasing, we obtain similarly \[\bigg|\sum_{y<n \leq x} f(n)-\int_{y}^{x} f(t) d t\bigg| \leq f(y),\] which proves the second inequality.
Finally, if the function \(f:[1, \infty)\to \mathbb R_+\) is unimodal with maximum at \(t_{0}\), we split the sum and the integral at \(t_{0}\) and apply the second inequality to each of the two monotone pieces; since all the boundary values are bounded by \(f(t_{0})\), the total error is \(O(1)\), as claimed. $$\tag*{$\blacksquare$}$$
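The estimate \(F(x)=\int_1^x f(t)\,dt+O(1)\) is easy to observe numerically. The following sketch (Python; the function \(f(t)=\log t/t\) from the definition above and the sample points are illustrative choices) uses the exact antiderivative \(\int_1^x \log t/t\,dt=(\log x)^2/2\):

```python
import math

def f(t):
    # f(t) = log(t)/t is unimodal on [1, oo) with maximum at t0 = e
    return math.log(t) / t

def F(x):
    # partial sum F(x) = sum_{n <= x} f(n)
    return sum(f(n) for n in range(1, int(x) + 1))

def integral(x):
    # exact value: int_1^x log(t)/t dt = (log x)^2 / 2
    return math.log(x) ** 2 / 2

for x in (10, 100, 1000, 10000):
    print(f"x = {x:6d}   F(x) - integral = {F(x) - integral(x):+.5f}")
```

The difference stays bounded as \(x\) grows, in line with the \(O(1)\) error term.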
Example: a simple form of Stirling’s formula. For \(x \geq 2\), we have \[\sum_{n \leq x} \log n=x \log x-x+1+O(\log x).\]
Indeed, the function \(f(t)=\log t\) is increasing on \([1, x]\). By the previous theorem, we have \[\int_{1}^{x} \log t d t \leq \sum_{n \leq x} \log n \leq \int_{1}^{x} \log t d t+\log x,\] which gives the desired claim, since \(\int_{1}^{x} \log t d t=x\log x -x+1\).
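A quick numerical check (Python; the sample values of \(x\) are our choice) of the two-sided bound \(\int_1^x\log t\,dt \le \sum_{n\le x}\log n\le \int_1^x\log t\,dt+\log x\) at integer \(x\):

```python
import math

def log_sum(x):
    # S(x) = sum_{n <= x} log n = log(x!)
    return sum(math.log(n) for n in range(1, x + 1))

for x in (10, 100, 1000):
    main = x * math.log(x) - x + 1          # int_1^x log t dt
    err = log_sum(x) - main
    print(f"x = {x:4d}   error = {err:.4f}   log x = {math.log(x):.4f}")
```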
For \(n\in \mathbb N\), we have \[\begin{aligned} n! = \sqrt{2\pi} n^{n+1/2} e^{-n} e^{r_n}, \end{aligned}\] where \(r_n\) satisfies the double inequality \[\begin{aligned} \frac{1}{12n + 1} < r_n < \frac{1}{12n}. \end{aligned}\] In other words we have \[\sqrt{2\pi} n^{n+1/2} e^{-n}e^{\frac{1}{12n + 1}}<n!<\sqrt{2\pi} n^{n+1/2} e^{-n}e^{\frac{1}{12n}}.\]
Let \[S_n = \log(n!) = \sum_{p=1}^{n-1} \log (p+1)\] and write \[\begin{aligned} \log (p+1) = A_p + b_p - \varepsilon_p, \end{aligned}\] where \[\begin{aligned} A_p &= \int_p^{p+1} \log x \, dx, \\ b_p &= [ \log (p+1) - \log p ]/2, \\ \varepsilon_p &= \int_p^{p+1} \log x \, dx - [ \log (p+1) + \log p ]/2. \end{aligned}\]
In other words, \(\log (p+1)\) is regarded as the area of a rectangle with base \((p, p+1)\) and height \(\log (p+1)\) partitioned into a curvilinear area \(A_p\), a triangle \(b_p\), and a small sliver \(\varepsilon_p\) suggested by the geometry of the curve \(y = \log x\).
Then \[S_n = \sum_{p=1}^{n-1} (A_p + b_p - \varepsilon_p) = \int_1^n \log x \, dx + \frac{1}{2} \log n - \sum_{p=1}^{n-1} \varepsilon_p.\]
Since \(\int \log x \, dx = x \log x - x\) we can write \[\begin{aligned} S_n = (n + 1/2) \log n - n + 1 - \sum_{p=1}^{n-1} \varepsilon_p, \end{aligned}\] where \[\begin{aligned} \varepsilon_p &=\int_p^{p+1} \log x \, dx - [ \log (p+1) + \log p ]/2\\ &=(p+1)\log(p+1)-p\log p-1- [ \log (p+1) + \log p ]/2\\ &= \frac{2p + 1}{2} \log \Big( \frac{p+1}{p} \Big) - 1. \end{aligned}\]
Using the well-known series expansion \[\log \Big( \frac{1+x}{1 - x} \Big) = 2 \sum_{k=0}^\infty \frac{x^{2k+1}}{2k+1},\] valid for \(|x| < 1\), and setting \(x = (2p+1)^{-1}\), so that \((1+x)/(1-x) = (p+1)/p\), we find that \[\begin{aligned} \varepsilon_p = \frac{2p + 1}{2} \log \Big( \frac{p+1}{p} \Big) - 1= \sum_{k=0}^\infty \frac{1}{(2k+3)(2p+1)^{2k+2}}. \end{aligned}\]
We can therefore bound \(\varepsilon_p\) above: \[\begin{aligned} \varepsilon_p < \frac{1}{3 (2p+1)^{2}} \sum_{k=0}^\infty \frac{1}{(2p+1)^{2k}} = \frac{1}{12} \Big( \frac{1}{p} - \frac{1}{p+1} \Big). \end{aligned}\]
Similarly, we bound \(\varepsilon_p\) below: \[\begin{aligned} \varepsilon_p > \frac{1}{3 (2p+1)^{2}} \sum_{k=0}^\infty \frac{1}{ [3 (2p+1)^{2} ]^k } &= \frac{1}{3 (2p+1)^{2}} \frac{1}{ 1 - \frac{1}{3 (2p+1)^{2} } }\\ &> \frac{1}{12} \Big( \frac{1}{p + 1/12} - \frac{1}{p+1+ 1/12} \Big). \end{aligned}\]
Now define \[\begin{aligned} B = \sum_{p=1}^\infty \varepsilon_p, \qquad r_n = \sum_{p=n}^\infty \varepsilon_p, \end{aligned}\] where from the lower and upper bound for \(\varepsilon_p\) we have \[\begin{aligned} 1/13 < B < 1/12. \end{aligned}\]
Then we can write \[\begin{aligned} S_n &= (n + 1/2) \log n - n + 1 - \sum_{p=1}^{n-1} \varepsilon_p\\ &= (n+1/2) \log n - n + 1 - B + r_n, \end{aligned}\] or, setting \(C = e^{1-B}\), as \[n! = C n^{n+1/2} e^{-n} e^{r_n},\] where \(r_n\) satisfies \[1/(12n + 1) < r_n < 1/(12n).\]
The constant \(C\), which lies between \(e^{11/12}\) and \(e^{12/13}\), may be shown to have the value \(\sqrt{2\pi}\). Indeed, by the Wallis formula we have \[\sqrt{\frac{\pi}{2}}=\lim_{n\to \infty}\frac{(2^nn!)^2}{(2n)!\sqrt{2n}}=\lim_{n\to \infty}\frac{C^2 2^{2n}n^{2n+1} e^{-2n}}{C (2n)^{2n+1/2} e^{-2n}\sqrt{2n}}=\frac{C}{2}.\]
Thus \(C=\sqrt{2\pi}\) and this completes the proof. $$\tag*{$\blacksquare$}$$
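The double inequality for \(n!\) is readily checked numerically; the following sketch (Python, with illustrative values of \(n\)) compares \(n!\) with the two Stirling bounds:

```python
import math

def stirling_bounds(n):
    # sqrt(2 pi) n^{n+1/2} e^{-n} e^{r} with r = 1/(12n+1) (lower)
    # and r = 1/(12n) (upper)
    base = math.sqrt(2 * math.pi) * n ** (n + 0.5) * math.exp(-n)
    return base * math.exp(1 / (12 * n + 1)), base * math.exp(1 / (12 * n))

for n in (1, 2, 5, 10, 20):
    lo, hi = stirling_bounds(n)
    print(f"n = {n:2d}   {lo:.8e} < {math.factorial(n)} < {hi:.8e}")
```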
Let \(f, g:\mathbb Z_+\to \mathbb C\) be arithmetic functions. Let \(F(x):=\sum_{1\le n \leq x} f(n)\). Then for any \(a, b\in\mathbb N\) with \(a<b\), we have \[\begin{aligned} \sum_{n=a+1}^{b} f(n) g(n)= F(b) g(b)-F(a) g(a+1) -\sum_{n=a+1}^{b-1} F(n)(g(n+1)-g(n)). \end{aligned}\]
Let \(x, y\in \mathbb R_+\) with \(\lfloor y\rfloor<\lfloor x\rfloor\), and let \(g\in C^1([y, x])\). Then \[\sum_{y<n \leq x} f(n) g(n)=F(x) g(x)-F(y) g(y)-\int_{y}^{x} F(t) g^{\prime}(t) d t\]
In particular, if \(x \geq 2\) and \(g\in C^1([1, x])\), then \[\sum_{n \leq x} f(n) g(n)=F(x) g(x)-\int_{1}^{x} F(t) g^{\prime}(t) d t\]
Since \(f(n)=F(n)-F(n-1)\), we have \[\begin{aligned} & \sum_{n=a+1}^{b} f(n) g(n) =\sum_{n=a+1}^{b}(F(n)-F(n-1)) g(n) \\ & \quad=\sum_{n=a+1}^{b} F(n) g(n)-\sum_{n=a}^{b-1} F(n) g(n+1) \\ & \quad=F(b) g(b)-F(a) g(a+1)-\sum_{n=a+1}^{b-1} F(n)(g(n+1)-g(n)). \end{aligned}\]
If \(g\in C^1([y, x])\), then \[g(n+1)-g(n)=\int_{n}^{n+1} g^{\prime}(t) d t,\] and since \(F(t)=F(n)\) for \(n \leq t<n+1\), it follows that \[F(n)(g(n+1)-g(n))=\int_{n}^{n+1} F(t) g^{\prime}(t) d t.\]
Let \(a=\lfloor y\rfloor\) and \(b=\lfloor x\rfloor\). Since \(a \leq y<a+1 \leq b \leq x<b+1\), we have
\[\begin{aligned} & \sum_{y<n \leq x} f(n) g(n) =\sum_{n=a+1}^{b} f(n) g(n) \\ & =F(b) g(b)-F(a) g(a+1)-\sum_{n=a+1}^{b-1} F(n)(g(n+1)-g(n)) \\ & =F(x) g(b)-F(y) g(a+1)-\sum_{n=a+1}^{b-1} \int_{n}^{n+1} F(t) g^{\prime}(t) d t \\ & = F(x) g(x)-F(y) g(y)-F(x)(g(x)-g(b))-F(y)(g(a+1)-g(y)) \\ &-\int_{a+1}^{b} F(t) g^{\prime}(t) d t =F(x) g(x)-F(y) g(y)-\int_{y}^{x} F(t) g^{\prime}(t) d t . \end{aligned}\]
If \(x \geq 2\) and \(g\in C^1([1, x])\), then \[\begin{aligned} \sum_{n \leq x} f(n) g(n) & =f(1) g(1)+\sum_{1<n \leq x} f(n) g(n) \\ & =f(1) g(1)+F(x) g(x)-F(1) g(1)-\int_{1}^{x} F(t) g^{\prime}(t) d t \\ & =F(x) g(x)-\int_{1}^{x} F(t) g^{\prime}(t) d t. \qquad\end{aligned}\qquad\blacksquare\]
Let \((a_{k})_{k\in\mathbb Z},(b_{k})_{k\in\mathbb Z}\subseteq \mathbb C\) and \(m,n\in\mathbb Z\) with \(m<n\). If \(s_{k}:=\sum_{l=m}^{k} a_{l}\), then \[\sum_{k=m+1}^{n} a_{k} b_{k}=s_nb_{n} - a_{m}b_{m+1} -\sum_{k=m+1}^{n-1}\left(b_{k+1}-b_{k}\right) s_k.\]
Observe that \(a_{k}=s_{k}-s_{k-1}\), hence \[\sum_{k=m+1}^{n} a_{k} b_{k}=\sum_{k=m+1}^{n} b_{k}\left(s_{k}-s_{k-1}\right)=\sum_{k=m+1}^{n} b_{k} s_{k}-\sum_{k=m}^{n-1} b_{k+1} s_{k},\] which implies the asserted result, since \(s_m=a_m\).
One can deduce from this equality the following useful bound \[\Big|\sum_{k=m+1}^{n} a_{k} b_{k}\Big| \le(2 \max \{|b_{m+1}|,|b_{n}|\}+V_{[m, n]}) \max _{m \leqslant k \leqslant n}|s_k|,\] where \(V_{[m, n]}:=\sum_{k=m+1}^{n-1}\left|b_{k+1}-b_{k}\right|\).
If \((b_{k})_{k\in\mathbb Z}\subseteq \mathbb R_+\) is monotone, then \[\Big|\sum_{k=m+1}^{n} a_{k} b_{k}\Big| \le 2 \max \{b_{m+1},b_{n}\} \max _{m \leqslant k \leqslant n}|s_k|.\]
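A direct numerical sanity check of the summation-by-parts identity (Python; the sequences \(a_k=(-1)^k k\) and \(b_k=1/k\) are arbitrary illustrative choices):

```python
m, n = 2, 12
a = {k: (-1) ** k * k for k in range(m, n + 1)}   # arbitrary test sequence
b = {k: 1.0 / k for k in range(m, n + 1)}         # monotone positive weights

# s_k = a_m + a_{m+1} + ... + a_k
s, running = {}, 0.0
for k in range(m, n + 1):
    running += a[k]
    s[k] = running

lhs = sum(a[k] * b[k] for k in range(m + 1, n + 1))
rhs = s[n] * b[n] - a[m] * b[m + 1] \
      - sum((b[k + 1] - b[k]) * s[k] for k in range(m + 1, n))
print(lhs, rhs)
```

Both sides agree to machine precision, as the identity is exact.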
Since \[\sum_{k=1}^{n} \frac{1}{k} \geqslant \int_{1}^{n} \frac{dt}{t}=\log n,\] the sequence \[u_{n}:=\sum_{k=1}^{n} \frac{1}{k}-\log n\] is non-negative. It is also decreasing, since \[u_{n+1}-u_{n}=\frac{1}{n+1}-\log \left(1+\frac{1}{n}\right) \leqslant 0,\] because \(\log(1+x)\ge x/(1+x)\) for \(x>-1\); thus \((u_n)_{n\in\mathbb Z_+}\) converges.
Definition. We call the Euler-Mascheroni constant, or more simply the Euler constant, the real number \(\gamma\) defined by \[\gamma:=\lim _{n \rightarrow \infty}\left(\sum_{k=1}^{n} \frac{1}{k}-\log n\right) \approx 0.577215664901532 \ldots.\] The problem of the irrationality of \(\gamma\) still remains open!
Let \(n\in\mathbb Z_+\), then \[\sum_{k=1}^{n} \frac{1}{k}=\log n+\gamma+R(n)\] with \(0 \leqslant R(n)<\frac{1}{n}\).
Proof.
We use partial summation with \(f(t)=1\) and \(g(t)=\frac{1}{t}\) which gives \[\begin{aligned} \sum_{k=1}^{n} \frac{1}{k} & =\frac{1}{n} \sum_{k=1}^{n} 1+\int_{1}^{n} \frac{1}{t^{2}}\Big(\sum_{1\le k \leqslant t} 1\Big) d t \\ & =1+\int_{1}^{n} \frac{t-\{t\}}{t^{2}} d t =\log n+\bigg(1-\int_{1}^{\infty} \frac{\{t\}}{t^{2}} d t\bigg)+\int_{n}^{\infty} \frac{\{t\}}{t^{2}} d t \end{aligned}\]
Thus \(\gamma=\lim _{n \rightarrow \infty}\big(\sum_{k=1}^{n} \frac{1}{k}-\log n\big)=1-\int_{1}^{\infty} \frac{\{t\}}{t^{2}} d t\), and \[R(n):=\int_{n}^{\infty} \frac{\{t\}}{t^{2}} d t<\int_{n}^{\infty} \frac{1}{t^{2}} d t=\frac{1}{n}. \qquad \tag*{$\blacksquare$}\]
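Numerically, the remainder \(R(n)\) indeed lies in \([0, 1/n)\); a small check (Python, with the known decimal value of \(\gamma\) hard-coded):

```python
import math

GAMMA = 0.5772156649015329   # Euler-Mascheroni constant (known value)

def remainder(n):
    # R(n) = sum_{k <= n} 1/k - log n - gamma
    H = sum(1.0 / k for k in range(1, n + 1))
    return H - math.log(n) - GAMMA

for n in (10, 100, 1000):
    print(f"n = {n:5d}   R(n) = {remainder(n):.8f}   1/n = {1.0/n:.8f}")
```

In fact \(R(n)\) is roughly \(1/(2n)\), consistent with the bound \(0\le R(n)<1/n\).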
Proposition. For each Dirichlet series \(D(s, f)\), there exists \(\sigma_{a} \in \mathbb{R} \cup\{ \pm \infty\}\), called the abscissa of absolute convergence, such that
\(D(s, f)\) converges absolutely in the half-plane \(\sigma>\sigma_{a}\);
\(D(s, f)\) does not converge absolutely in the half-plane \(\sigma<\sigma_{a}\).
Remarks.
In particular, the series \(D(s, f)\) defines an analytic function in the half-plane \(\sigma>\sigma_{a}\). By abuse of notation, this function will still be denoted by \(D(s, f)\).
If \(|\: f(n)| \leqslant \log n\), then the series \(D(s, f)\) is absolutely convergent in the half-plane \(\sigma>1\), and hence \(\sigma_{a} \leqslant 1\).
At \(\sigma=\sigma_{a}\), the series may or may not converge absolutely. For instance, \(\zeta(s)\) converges absolutely in the half-plane \(\sigma>\sigma_{a}=1\), but does not converge on the line \(\sigma=1\).
On the other hand, the Dirichlet series associated to the function \(f(n)=1 /(\log (en))^{2}\) also has abscissa of absolute convergence \(\sigma_{a}=1\), but it converges absolutely at \(\sigma=1\).
Let \(S:=\{s\in \mathbb C: D(s, f) \ \text{converges absolutely}\}\).
If \(S=\varnothing\), then put \(\sigma_{a}=+\infty\). Otherwise define \[\sigma_{a}:=\inf \{\sigma: s=\sigma+i t \in S\}\]
\(D(s, f)\) does not converge absolutely if \(\sigma<\sigma_{a}\) by the definition of \(\sigma_{a}\).
On the other hand, suppose that \(D(s, f)\) is absolutely convergent for some \(s_{0}=\sigma_{0}+i t_{0} \in \mathbb{C}\) and let \(s=\sigma+i t\) be such that \(\sigma \geqslant \sigma_{0}\). Since \[\left|\frac{f(n)}{n^{s}}\right|=\left|\frac{f(n)}{n^{s_{0}}}\right| \times \frac{1}{n^{\sigma-\sigma_{0}}} \leqslant\left|\frac{f(n)}{n^{s_{0}}}\right|\] we infer that \(D(s, f)\) converges absolutely at any point \(s\) with \(\sigma \geqslant \sigma_{0}\).
Now by the definition of \(\sigma_{a}\), there exist points arbitrarily close to \(\sigma_{a}\) at which \(D(s, f)\) converges absolutely, and therefore by above \(D(s, f)\) converges absolutely at each point \(s\) such that \(\sigma>\sigma_{a}\). $$\tag*{$\blacksquare$}$$
The partial sums \(\sum_{x<n \leqslant y} f(n)\) and the Dirichlet series \(D(s, f)\) are strongly related to each other. The next result shows that if we are able to estimate the order of magnitude of \(\sum_{x<n \leqslant y} f(n)\), then a region of absolute convergence of \(D(s, f)\) is known.
Proposition 2. Let \(D(s, f)=\sum_{n=1}^{\infty} f(n) n^{-s}\) be a Dirichlet series. Assume that \[|\:f(n)| \leqslant M n^{\alpha} \quad \text{ for all } \quad n\in\mathbb Z_+,\] for some \(\alpha \geqslant 0\) and \(M>0\) independent of \(n\).
Then \(D(s, f)\) converges absolutely in the half-plane \(\sigma>\alpha+1\).
Proof. Indeed, observe that \[\begin{aligned} |D(s, f)|\le\sum_{n=1}^{\infty}|\: f(n) n^{-s}|\le M\sum_{n=1}^{\infty}n^{-(\sigma-\alpha)}<\infty, \end{aligned}\] whenever \(\sigma>\alpha+1\), as desired.$$\tag*{$\blacksquare$}$$
Proposition 3. Let \(f, g\) be two arithmetic functions. If the associated Dirichlet series \(D(s, f)\) and \(D(s, g)\) are absolutely convergent at a point \(s_{0}\), then \(D(s, f\star g)\) converges absolutely at \(s_{0}\) and we have \(D(s_0, f\star g)=D(s_0, f)D(s_0, g)\).
Proof.
We have \[D(s_0, f)D(s_0, g)=\sum_{n=1}^{\infty} \frac{f\star g(n)}{n^{s_{0}}}=D(s_0, f\star g),\] where the rearrangement of the terms in the double sums is justified by the absolute convergence of the two series \(D(s, f)\) and \(D(s, g)\) at \(s=s_{0}\).
Furthermore, we have \[\sum_{n=1}^{\infty}\left|\frac{f\star g(n)}{n^{s_{0}}}\right| \leqslant\left(\sum_{n=1}^{\infty}\left|\frac{f(n)}{n^{s_{0}}}\right|\right)\left(\sum_{n=1}^{\infty}\left|\frac{g(n)}{n^{s_{0}}}\right|\right)\] proving the absolute convergence of \(D(s_0, f\star g)\).
This completes the proof.$$\tag*{$\blacksquare$}$$
Let \(f\) be an arithmetic function such that \(f(1) \neq 0\). Let \(f^{-1}\) be the convolution inverse of the function \(f\), i.e. \(f \star f^{-1}=\delta\). Then \[D(s, f^{-1})=\frac{1}{D(s, f)}\] at every point \(s\) where \(D(s, f)\) and \(D(s, f^{-1})\) converge absolutely.
Example.
The Möbius function satisfies \(\mu^{-1}=\mathbf{1}\), hence for \(\sigma>1\), we have \[\sum_{n=1}^{\infty} \frac{\mu(n)}{n^{s}}=\frac{1}{\zeta(s)}.\]
In particular, we have \[\sum_{n=1}^{\infty} \frac{\mu(n)}{n^{2}}=\frac{6}{\pi^2}, \quad \text{ since }\quad \zeta(2)=\sum_{n=1}^{\infty} \frac{1}{n^{2}}=\frac{\pi^2}{6}.\]
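The identity \(\sum_{n\ge1} \mu(n) n^{-2} = 6/\pi^2\) can be tested numerically with a simple Möbius sieve (Python sketch; the cutoff \(N\) is an arbitrary choice, and the truncation error is at most \(\sum_{n>N} n^{-2} \approx 1/N\)):

```python
import math

def mobius_upto(N):
    # compute mu(1..N): multiply by -1 for each prime divisor,
    # then zero out integers divisible by the square of a prime
    mu = [1] * (N + 1)
    is_comp = [False] * (N + 1)
    for p in range(2, N + 1):
        if not is_comp[p]:                 # p is prime
            for q in range(p, N + 1, p):
                if q > p:
                    is_comp[q] = True
                mu[q] *= -1
            for q in range(p * p, N + 1, p * p):
                mu[q] = 0
    return mu

N = 100000
mu = mobius_upto(N)
partial = sum(mu[n] / n ** 2 for n in range(1, N + 1))
print(partial, 6 / math.pi ** 2)
```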
Proposition 4. Let \(f\) be an arithmetic function.
If \(|\: f(n)| \leqslant M n^{\alpha}\) for some \(M\in\mathbb R_+\) and \(\alpha \geqslant 0\), then \(\sigma_{a} \leqslant \alpha+1\).
We have \[\sigma_{a} \le L:=\limsup _{n \rightarrow \infty}\left(1+\frac{\log |\: f(n)|}{\log n}\right).\]
Proof.
The first part follows from our criterion for absolute convergence.
For the second part we may suppose that \(L<\infty\) and let \(\sigma>L\). Fix \(\varepsilon>0\) so that \(\sigma-\varepsilon> L+\varepsilon\). Then there exists a large \(N_{\varepsilon}\in\mathbb Z_+\) such that, for all \(n \geqslant N_{\varepsilon}\), we have \[1+\frac{\log |\: f(n)|}{\log n}< L+\varepsilon < \sigma-\varepsilon\] and hence, for all \(n \geqslant N_{\varepsilon}\), we obtain \[\left|\frac{f(n)}{n^{s}}\right|<\frac{n^{\sigma-1-\varepsilon}}{n^{\sigma}}=\frac{1}{n^{1+\varepsilon}}.\]
Hence \(D(s, f)\) is absolutely convergent in the half-plane \(\sigma>L\).$$\tag*{$\blacksquare$}$$
Proposition 5. Let \(D(s, f)=\sum_{n=1}^{\infty} f(n) n^{-s}\) be a Dirichlet series. Assume that \[\Big|\sum_{x<n \leqslant y} f(n)\Big| \leqslant M y^{\alpha} \quad \text{ for all } \quad 0<x<y,\] for some \(\alpha \geqslant 0\) and \(M>0\) independent of \(x\) and \(y\).
Then \(D(s, f)\) converges in the half-plane \(\sigma>\alpha\).
Furthermore, we have in this half-plane \[|D(s, f)| \leqslant \frac{M|s|}{\sigma-\alpha}, \quad \text { and } \quad \bigg|\sum_{x<n \leqslant y} \frac{f(n)}{n^{s}}\bigg| \leqslant \frac{M}{x^{\sigma-\alpha}}\left(\frac{|s|}{\sigma-\alpha}+1\right).\]
Remark.
The latter statement ensures that \(D(s, f)\) converges uniformly on any compact subset of the half-plane \(\sigma>\alpha\).
Set \(A(x)=\sum_{1\le n \leqslant x} f(n)\) and \(S(x, y)=A(y)-A(x)\). By partial summation we have \[\sum_{x<n \leqslant y} \frac{f(n)}{n^{s}}=\frac{S(x, y)}{y^{s}}+s \int_{x}^{y} \frac{S(x, u)}{u^{s+1}} d u.\]
By hypothesis we have \(\left|S(x, y) / y^{s}\right| \leqslant M y^{\alpha-\sigma}\), so that \(S(x, y) / y^{s}\) tends to 0 as \(y \to \infty\) in the half-plane \(\sigma>\alpha\).
Therefore if one of \[D(s, f)=\sum_{n=1}^{\infty} \frac{f(n)}{n^{s}} \quad \text { or } \quad s \int_{1}^{\infty} \frac{A(u)}{u^{s+1}} d u\] converges, then so does the other, and the two have the same value.
But since \[\left|\frac{A(u)}{u^{s+1}}\right| \leqslant \frac{M}{u^{\sigma-\alpha+1}},\] we infer that the integral converges absolutely for \(\sigma>\alpha\), and hence \(D(s, f)\) is convergent in this half-plane.
Therefore for all \(\sigma>\alpha\), we obtain \[\sum_{n=1}^{\infty} \frac{f(n)}{n^{s}}=s \int_{1}^{\infty} \frac{A(u)}{u^{s+1}} d u,\] and hence \[|D(s, f)| \leqslant M|s| \int_{1}^{\infty} \frac{du}{u^{\sigma-\alpha+1}}=\frac{M|s|}{\sigma-\alpha}.\]
Similarly \[\Big|\sum_{x<n \leqslant y} \frac{f(n)}{n^{s}}\Big| \leqslant \frac{M}{y^{\sigma-\alpha}} +M|s| \int_{x}^{\infty} \frac{d u}{u^{\sigma-\alpha+1}} \leqslant \frac{M}{x^{\sigma-\alpha}}\left(\frac{|s|}{\sigma-\alpha}+1\right)\] as required.$$\tag*{$\blacksquare$}$$
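As an illustration, take \(f(n)=(-1)^{n-1}\): the partial sums are bounded by \(M=1\) with \(\alpha=0\), so the proposition gives convergence for \(\sigma>0\) together with \(|D(s,f)|\le |s|/\sigma\). The resulting series is the Dirichlet eta function; a short Python sketch (the truncation point \(N\) and the value \(\sigma=1/2\) are our choices) evaluates it at a real point:

```python
sigma = 0.5
N = 100000
# eta(sigma) = sum (-1)^{n-1} / n^sigma; the series is alternating with
# decreasing terms, so the truncation error is at most 1/(N+1)^sigma
eta = sum((-1) ** (n - 1) / n ** sigma for n in range(1, N + 1))
bound = sigma / sigma          # |s| / (sigma - alpha) with s = sigma, alpha = 0
print(eta, bound)
```

The computed value stays well within the bound \(|s|/(\sigma-\alpha)=1\), even though \(\sigma=1/2\) lies far left of the abscissa of absolute convergence.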
Proposition 6. For each Dirichlet series \(D(s, f)\), there exists \(\sigma_{c} \in \mathbb{R} \cup\{ \pm \infty\}\), called the abscissa of convergence, such that \(D(s, f)\) converges in the half-plane \(\sigma>\sigma_{c}\) and does not converge in the half-plane \(\sigma<\sigma_{c}\). Furthermore, \[\sigma_{c} \leqslant \sigma_{a} \leqslant \sigma_{c}+1.\]
Proof.
Suppose first that \(D(s, f)\) converges at a point \(s_{0}=\sigma_{0}+i t_{0}\) and fix a small real number \(\varepsilon>0\). By the Cauchy criterion, there exists \(x_{\varepsilon} \geqslant 1\) such that, for all \(y>x \geqslant x_{\varepsilon}\), we have \[\bigg|\sum_{x<n \leqslant y} \frac{f(n)}{n^{s_{0}}}\bigg| \leqslant \varepsilon.\]
Let \(s=\sigma+i t \in \mathbb{C}\) be such that \(\sigma>\sigma_{0}\). Using the previous proposition with \(s\) replaced by \(s-s_{0}\) and \(\alpha=0\), we obtain \[\bigg|\sum_{x<n \leqslant y} \frac{f(n)}{n^{s}}\bigg| \leqslant \varepsilon\left(\frac{\left|s-s_{0}\right|}{\sigma-\sigma_{0}}+1\right),\] so that \(D(s, f)\) converges by the Cauchy criterion.
Now we may proceed as before. Let \(S:=\{s\in \mathbb C: D(s, f) \ \text{ converges}\}\).
If \(S=\varnothing\), then we put \(\sigma_{c}=+\infty\). Otherwise define \[\sigma_{c}:=\inf \{\sigma: s=\sigma+i t \in S\}.\]
\(D(s, f)\) does not converge if \(\sigma<\sigma_{c}\), by the definition of \(\sigma_{c}\).
On the other hand, there exist points \(s_{0}\) with \(\sigma_{0}\) arbitrarily close to \(\sigma_{c}\) at which \(D(s, f)\) converges.
By above, \(D(s, f)\) converges at any point \(s\) such that \(\sigma>\sigma_{0}\). Since \(\sigma_{0}\) may be chosen as close to \(\sigma_{c}\) as we want, it follows that \(D(s, f)\) converges at any point \(s\) such that \(\sigma>\sigma_{c}\).
The inequality \(\sigma_{c} \leqslant \sigma_{a} \leqslant \sigma_{c}+1\) remains to be shown.
The lower bound is obvious. For the upper bound, it suffices to show that if \(D(s_{0}, f)\) converges for some \(s_{0}\), then it converges absolutely for all \(s\) such that \(\sigma>\sigma_{0}+1\). Now if \(D(s, f)\) converges at some point \(s_{0}\), then \(\lim_{n\to \infty}f(n)n^{-s_{0}}=0\). Thus there exists a positive integer \(n_{0}\) such that, for all \(n \geqslant n_{0}\), we have \(\left|\: f(n)\right| \le n^{\sigma_{0}}\), hence \(D(s, f)\) is absolutely convergent in the half-plane \(\sigma>\sigma_{0}+1\) as required. $$\tag*{$\blacksquare$}$$
A Dirichlet series \(D(s, f)=\sum_{n=1}^{\infty} f(n) n^{-s}\) defines a holomorphic function of the variable \(s\) in the half-plane \(\sigma>\sigma_{c}\), in which \(D(s, f)\) can be differentiated term by term so that, for all \(s=\sigma+it\) such that \(\sigma>\sigma_{c}\), we have \[\partial^k_sD(s, f)=\sum_{n=1}^{\infty} \frac{(-1)^{k}(\log n)^{k} f(n)}{n^{s}}, \quad \text{ for } \quad k\in\mathbb Z_+.\]
Proof.
We know that the partial sums of a Dirichlet series are holomorphic functions of the variable \(s\) and that they converge uniformly on any compact subset of the half-plane \(\sigma>\sigma_{c}\).
Therefore, the limit \(D(s, f)\) defines a holomorphic function of the variable \(s\) in this half-plane.
Consequently, term-by-term differentiation is allowed, and since \(n^{-s}=e^{-s\log n}\), then \[\partial^k_sn^{-s}=(-1)^{k}(\log n)^{k}n^{-s},\] and the desired formula follows. $$\tag*{$\blacksquare$}$$
Proposition 7. Let \(D(s, f)=\sum_{n=1}^{\infty} f(n) n^{-s}\) be a Dirichlet series with abscissa of convergence \(\sigma_{c}\). If \(D(s, f)=0\) for all \(s\) such that \(\sigma>\sigma_{c}\), then \(f(n)=0\) for all \(n \in \mathbb{Z}_+\). In particular, if \(D(s, f)=D(s, g)\) for all \(s\) in a common half-plane of convergence, then \(f(n)=g(n)\) for all \(n \in \mathbb{Z}_+\).
Proof.
Suppose the contrary and let \(k\in \mathbb Z_+\) be the smallest integer such that \(f(k) \neq 0\), so that \(D(s, f)=\sum_{n=k}^{\infty} f(n) n^{-s}=0\) for all \(s\) such that \(\sigma>\sigma_{c}\). Multiplying by \(k^{s}\), we set \[G(s)=k^{s}D(s, f)=k^{s} \sum_{n=k}^{\infty} \frac{f(n)}{n^{s}}=0.\]
Therefore, for all \(s\) such that \(\sigma>\sigma_{c}\) we have \[G(s)=f(k)+\sum_{n=k+1}^{\infty} f(n)\left(\frac{k}{n}\right)^{s}=0.\]
Since each term with \(n>k\) satisfies \((k / n)^{\sigma} \rightarrow 0\) as \(\sigma \rightarrow \infty\), and the tail of the series is dominated uniformly for large \(\sigma\), we obtain \[0=\lim _{\sigma \rightarrow \infty} G(\sigma)=f(k),\] contradicting \(f(k)\neq 0\).$$\tag*{$\blacksquare$}$$
If \(f(n) \geqslant 0\) for all \(n\in\mathbb Z_+\), and \(D(s,f)=\sum_{n=1}^{\infty} f(n) n^{-s}\) is a Dirichlet series with abscissa of convergence \(\sigma_{c} \in \mathbb{R}\), then \(D(s, f)\) has a singularity at \(s=\sigma_{c}\).
Proof.
Without loss of generality, we may assume that \(\sigma_{c}=0\). Suppose, for contradiction, that \(D(s, f)\) is analytic at \(s=0\). Then, for a suitable point \(a>0\), the Taylor expansion \[D(s, f)=\sum_{k=0}^{\infty} \frac{(s-a)^{k}}{k!} \partial^k_sD(a, f)=\sum_{k=0}^{\infty} \frac{(s-a)^{k}}{k!} \sum_{n=1}^{\infty} \frac{(-1)^{k}(\log n)^{k} f(n)}{n^{a}}\] has radius of convergence exceeding \(a\), and hence converges at some \(s=b<0\). Since \((b-a)^{k}(-1)^{k}=(a-b)^{k}\), this gives \[0\le \sum_{k=0}^{\infty} \sum_{n=1}^{\infty} \frac{((a-b) \log n)^{k} f(n)}{n^{a} k!}<\infty.\]
Each term is nonnegative, so the order of summation may be changed \[\sum_{n=1}^{\infty} \frac{f(n)}{n^{a}} \sum_{k=0}^{\infty} \frac{((a-b) \log n)^{k}}{k!}=\sum_{n=1}^{\infty} \frac{f(n)}{n^{b}},\] which does not converge, since \(b<0=\sigma_{c}\), giving a contradiction. $$\tag*{$\blacksquare$}$$
Let \(f\) be a multiplicative function satisfying \[\sum_{p\in \mathbb P} \sum_{k=1}^{\infty}|\: f(p^{k})|<\infty.\] Then the series \(\sum_{n \geqslant 1} f(n)\) is absolutely convergent and we have \[\sum_{n=1}^{\infty} f(n)=\prod_{p\in \mathbb P}\left(1+\sum_{k=1}^{\infty} f\left(p^{k}\right)\right).\]
Proof.
Let us first notice that the inequality \(\sum_{p\in \mathbb P} \sum_{k=1}^{\infty}|\: f(p^{k})|<\infty\) implies the convergence of the product \[\prod_{p\in\mathbb P}\left(1+\sum_{k=1}^{\infty}\left|\: f\left(p^{k}\right)\right|\right)\le \exp\bigg(\sum_{p\in \mathbb P} \sum_{k=1}^{\infty}|\: f(p^{k})|\bigg)<\infty.\]
Now let \(x \geqslant 2\) be a real number and set \[P(x)=\prod_{p \in \mathbb P_{\le x}}\Big(1+\sum_{k=1}^{\infty}\big|\: f(p^{k})\big|\Big).\]
The convergence of the series \(\sum_{k=1}^{\infty}\left|\: f\left(p^{k}\right)\right|\) enables us to rearrange the terms when we expand \(P(x)\), hence \[P(x)=\sum_{{\rm gpf}(n) \le x}|\: f(n)|,\] where \({\rm gpf}(1)=1\) and \({\rm gpf}(n)\) is the greatest prime factor of \(n\ge2\).
Since each integer \(n \leqslant x\) satisfies the condition \({\rm gpf}(n) \leqslant x\), we have \[\sum_{1\le n \leqslant x}|\: f(n)| \leqslant P(x)\]
Since \(P(x)\) has a finite limit as \(x \to \infty\), the above inequality implies that \(\sum_{n \geqslant 1} |\: f(n)|<\infty\). The second part of the theorem follows from the inequality \[\bigg|\sum_{n=1}^{\infty} f(n)-\prod_{p \in\mathbb P_{\le x}}\Big(1+\sum_{k=1}^{\infty} f(p^{k})\Big)\bigg| \leqslant \sum_{n>x}|\: f(n)|,\] and the fact that the right-hand side tends to \(0\) as \(x \to \infty\). $$\tag*{$\blacksquare$}$$
Let \(f\) be a multiplicative function and let \(\sigma_{0}\in \mathbb R\). Then the following three assertions are equivalent.
One has \[\sum_{p\in\mathbb P} \sum_{k=1}^{\infty} \frac{|\: f(p^{k})|}{p^{\sigma_{0} k}}<\infty.\]
The series \(D(s, f)\) is absolutely convergent in the half-plane \(\sigma>\sigma_{0}\).
The product \[\prod_{p\in \mathbb P}\left(1+\sum_{k=1}^{\infty} \frac{f(p^{k})}{p^{s k}}\right)\] is absolutely convergent in the half-plane \(\sigma>\sigma_{0}\). If one of these conditions holds, then we have for all \(\sigma>\sigma_{0}\) that
\[D(s, f)=\prod_{p\in\mathbb P}\left(1+\sum_{k=1}^{\infty} \frac{f(p^{k})}{p^{s k}}\right).\]
In particular, if \(\sigma_{a}\) is the abscissa of absolute convergence of \(D(s, f)\), then the last identity holds for all \(\sigma>\sigma_{a}\).
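For \(f=\mathbf 1\) this is the classical Euler product \(\zeta(s)=\prod_p (1-p^{-s})^{-1}\), valid for \(\sigma>1\). A numerical sketch at \(s=2\) (Python; the sieve limit is an arbitrary truncation, with error roughly \(\sum_{p>X}p^{-2}\)):

```python
import math

def primes_upto(N):
    # sieve of Eratosthenes
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, N + 1, p):
                sieve[q] = False
    return [p for p in range(2, N + 1) if sieve[p]]

s = 2.0
product = 1.0
for p in primes_upto(10000):
    product *= 1.0 / (1.0 - p ** (-s))   # Euler factor 1 + p^{-s} + p^{-2s} + ...
print(product, math.pi ** 2 / 6)
```

The truncated product over primes up to \(10^4\) already agrees with \(\zeta(2)=\pi^2/6\) to several decimal places.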