Subsection 2.1: $L$-function background
Before getting to $L$-functions, we recall
two bits of terminology that will be used in the following discussion.
An entire function $f:\C\to\C$ is said to have order at most $\alpha$ if for all $\epsilon > 0$:
\begin{equation}
f(s)=\mathcal{O}(\exp(|s|^{\alpha + \epsilon})).
\end{equation}
Moreover, we say $f$ has order
equal to $\alpha$ if $f$ has order at most $\alpha$, and $f$ does not
have order at most $\gamma$ for any $\gamma < \alpha$. The notion of
order is relevant because functions of finite order admit a factorization
as described by the Hadamard Factorization Theorem, and the $\Gamma$-function and $L$-functions are all of order 1.
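For reference, Hadamard's theorem gives the following concrete shape: if $f$ is entire of order $1$ with zeros $\rho\neq 0$ (listed with multiplicity) and a zero of order $m\ge 0$ at the origin, then
\begin{equation}
f(s) = s^{m} e^{a+bs} \prod_{\rho\neq 0}\Bigl(1-\frac{s}{\rho}\Bigr)e^{s/\rho}
\end{equation}
for some constants $a,b\in\C$.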
In order to ease notation, we use the normalized $\Gamma$-functions
defined by:
\begin{equation}
\Gamma_\R(s):=\pi^{-s/2}\,\Gamma(s/2)\ \ \ \ \text{ and }\ \ \ \ \Gamma_\C(s):=2(2\pi)^{-s}\,\Gamma(s).
\end{equation}
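These normalizations are related by Legendre's duplication formula: $\Gamma_\C(s)=\Gamma_\R(s)\,\Gamma_\R(s+1)$. As a quick numerical sanity check (a Python sketch, purely illustrative):

```python
import math

def gamma_R(s):
    # Gamma_R(s) = pi^(-s/2) * Gamma(s/2)
    return math.pi ** (-s / 2) * math.gamma(s / 2)

def gamma_C(s):
    # Gamma_C(s) = 2 * (2 pi)^(-s) * Gamma(s)
    return 2 * (2 * math.pi) ** (-s) * math.gamma(s)

# Legendre duplication: Gamma_C(s) = Gamma_R(s) * Gamma_R(s + 1)
for s in (0.5, 1.7, 3.2):
    lhs, rhs = gamma_C(s), gamma_R(s) * gamma_R(s + 1)
    assert abs(lhs - rhs) < 1e-12 * abs(lhs)
```

This identity is the source of the potential ambiguity in the $\Gamma$-factors addressed by Proposition 2.1.1 below.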
An $L$-function is a Dirichlet series
\begin{equation}L(s)=\sum_{n=1}^\infty \frac{a(n)}{n^s},\tag{2.1.1}
\end{equation}
where $s=\sigma+i t$ is a complex variable. We assume that
$L(s)$ converges absolutely in the half-plane $\sigma>1$ and
has a meromorphic continuation to all of $\C$. The resulting function is of order $1$, admitting at most finitely many poles, all of which are located on the line $\sigma = 1$.
Finally, $L(s)$ must have an Euler product and satisfy a functional equation as described below.
The functional equation involves the following parameters:
a positive integer $N$, complex numbers $\mu_1, \ldots, \mu_J$ and $\nu_1, \ldots, \nu_K$,
and a complex number $\varepsilon$. The completed $L$-function
\begin{align}
\Lambda(s) :=\mathstrut & N^{s/2}
\prod_{j=1}^{J} \Gamma_\R(s+ \mu_j)
\prod_{k=1}^{K} \Gamma_\C(s+ \nu_k)
\cdot L(s)\tag{2.1.2}
\end{align}
is a meromorphic function of finite order,
having the same poles as $L(s)$ in $\sigma>0$,
and satisfying the functional equation
\begin{align}
\Lambda(s)=\mathstrut& \varepsilon \overline{\Lambda}(1-s).\tag{2.1.3}
\end{align}
The number
$d=J+2K$ is called the degree of the $L$-function.
We require some conditions on the parameters $\mu_j$ and $\nu_j$.
The temperedness condition is the assertion that
$\Re(\mu_j)\in\{0,1\}$ and that each $\Re(\nu_j)$ is a positive integer or half-integer.
With those restrictions, there is only one way to write the parameters in
the functional equation, as proved in Proposition 2.1.1. This restriction is not known to be a theorem
for most automorphic $L$-functions. In order to state theorems which
apply in those cases, we will make use of a ``partial Selberg bound,''
which is the assertion that $\Re(\mu_j), \Re(\nu_j) > -\frac12$.
The Euler product is a factorization of the $L$-function into a product over the primes:
\begin{equation}
L(s)= \prod_p F_p(p^{-s})^{-1},\tag{2.1.4}
\end{equation}
where $F_p$ is a polynomial of degree at most $d$:
\begin{equation}
F_p(z) = (1-\alpha_{1,p} z)\cdots (1-\alpha_{d,p} z).\tag{2.1.5}
\end{equation}
If $p|N$ then $p$ is a bad prime and the degree of $F_p$ is strictly less than $d$, in other words, $\alpha_{j,p}=0$ for at least one $j$. Otherwise, $p$ is a good prime, in which case the $\alpha_{j,p}$ are called the Satake parameters at $p$. The Ramanujan bound is the assertion that at a good prime $|\alpha_{j,p}|=1$, and at a bad prime $|\alpha_{j,p}| \le 1$.
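To make (2.1.4)-(2.1.5) concrete: expanding $F_p(p^{-s})^{-1}$ in powers of $p^{-s}$ recovers the coefficients $a(p^j)$, which are the complete homogeneous symmetric functions of the Satake parameters. A small sketch (the degree-2 parameters here are invented purely for illustration):

```python
import cmath, math

def inv_local_factor(alphas, nterms):
    """Coefficients of z^0, ..., z^(nterms-1) in 1/F_p(z) = prod 1/(1 - alpha*z)."""
    coeffs = [1.0 + 0j] + [0j] * (nterms - 1)
    for a in alphas:
        # multiply the current series by the geometric series sum_k a^k z^k
        coeffs = [sum(coeffs[i] * a ** (k - i) for i in range(k + 1))
                  for k in range(nterms)]
    return coeffs

# A hypothetical good prime of a degree-2 L-function satisfying the
# Ramanujan bound: unitary Satake parameters e^(it), e^(-it).
t = 1.1
alpha = cmath.exp(1j * t)
c = inv_local_factor([alpha, alpha.conjugate()], 4)
assert abs(c[0] - 1) < 1e-12
assert abs(c[1] - 2 * math.cos(t)) < 1e-12            # a(p) = alpha + conj(alpha)
assert abs(c[2] - (2 * math.cos(2 * t) + 1)) < 1e-12  # a(p^2) = alpha^2 + 1 + conj(alpha)^2
```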
The Ramanujan bound has been proved in very few cases, the most
prominent of which are holomorphic forms on $\GL(2)$ and
$\GSp(4)$. See [Sar] for a survey of progress toward
the Ramanujan bound; see also [BB].
We write $|\alpha_{j,p}|\le p^\theta$, for some $\theta < \frac12$, to indicate progress toward the Ramanujan bound, referring to this as a ``partial Ramanujan bound.''
We will need the symmetric and exterior power $L$-functions associated to an $L$-function $L(s)$. Let $S$ be the finite set of bad primes of $L(s)$. The partial symmetric and exterior power $L$-functions are defined as follows.
\begin{equation}
L^S(s,\sym^n) =
\prod_{p \not\in S}\:
\prod_{i_1+\ldots+i_d=n} (1-\alpha_{1,p}^{i_1} \ldots \alpha_{d,p}^{i_d} p^{-s})^{-1}\tag{2.1.6}
\end{equation}
\begin{equation}
L^S(s,\ext^n) =
\prod_{p \not\in S}\;
\prod_{1\leq i_1 < \ldots < i_n\leq d} (1-\alpha_{i_1,p} \ldots \alpha_{i_n,p} p^{-s})^{-1}.\tag{2.1.7}
\end{equation}
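At a good prime, the inner products in (2.1.6) and (2.1.7) simply enumerate monomials in the Satake parameters: multisets of size $n$ for $\sym^n$ and strictly increasing $n$-tuples for $\ext^n$. A sketch of the enumeration (with made-up numerical stand-ins for the parameters):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def sym_params(alphas, n):
    # Parameters alpha_1^{i_1} ... alpha_d^{i_d} with i_1 + ... + i_d = n,
    # i.e. all degree-n monomials: multisets of size n.
    return [prod(c) for c in combinations_with_replacement(alphas, n)]

def ext_params(alphas, n):
    # Parameters alpha_{i_1} ... alpha_{i_n} with i_1 < ... < i_n.
    return [prod(c) for c in combinations(alphas, n)]

a1, a2 = 2.0, 3.0  # stand-ins for alpha_{1,p}, alpha_{2,p} (d = 2)
assert sorted(sym_params([a1, a2], 2)) == [4.0, 6.0, 9.0]  # alpha1^2, alpha1*alpha2, alpha2^2
assert ext_params([a1, a2], 2) == [6.0]                    # alpha1*alpha2
```

In particular, for $d=2$ the symmetric square has three local parameters while the exterior square has the single parameter $\alpha_{1,p}\alpha_{2,p}$.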
We do not define the local Euler factors at the bad primes since there is no universal recipe for these. It is conjectured that the symmetric and exterior power $L$-functions are in fact $L$-functions in the sense described above. In that case, Proposition 2.1.1 tells us that the bad Euler factors are uniquely determined. For applications that we present in this paper, the partial $L$-functions suffice.
In most cases it is not necessary to specify the local factors at the bad primes because,
by almost any version of the strong multiplicity one theorem,
an $L$-function is determined by its Euler factors at the good
primes. For completeness we state
a simple version of the result.
In the following proposition we use the term ``$L$-function'' in a precise sense,
referring to a Dirichlet series which satisfies a functional equation of
the form (2.1.2)-(2.1.3)
with the restrictions
$\Re(\mu_j)\in\{0,1\}$ and $\Re(\nu_j)$ a positive integer or half-integer,
and having an Euler product satisfying (2.1.4)-(2.1.5).
We refer to the quadruple
$(d,N,(\mu_1,\ldots,\mu_J:\nu_1,\ldots,\nu_K),\varepsilon)$
as the functional equation data of the $L$-function.
Proposition 2.1.1
Suppose that $L_j(s)=\prod_p F_{p,j}(p^{-s})^{-1}$, for $j=1,2$, are $L$-functions which satisfy a partial Ramanujan bound for
some $\theta < \frac12$. If $F_{p,1}=F_{p,2}$ for all but finitely
many $p$, then $F_{p,1}=F_{p,2}$ for all $p$, and $L_1$ and $L_2$
have the same functional equation data.
In particular, the proposition shows that the functional equation data of an
$L$-function is well defined. There are no ambiguities arising, say,
from the duplication formula of the $\Gamma$-function. Also, we remark that the partial Ramanujan bound is essential. One can easily construct counterexamples to the above proposition using Saito-Kurokawa lifts, which do not satisfy the partial Ramanujan bound.
Proof
Let $\Lambda_j(s)$ be the completed $L$-function of $L_j(s)$ and
consider
\begin{align}
\lambda(s)=\mathstrut&\frac{\Lambda_1(s)}{\Lambda_2(s)}\cr
=\mathstrut& \Bigl(\frac{N_1}{N_2}\Bigr)^{s/2}
\frac{\prod_{j} \Gamma_\R(s+ \mu_{j,1})
\prod_{k} \Gamma_\C(s+ \nu_{k,1})}
{\prod_{j} \Gamma_\R(s+ \mu_{j,2})
\prod_{k} \Gamma_\C(s+ \nu_{k,2})}
\prod_p \frac{F_{p,1}(p^{-s})^{-1}}{F_{p,2}(p^{-s})^{-1}}.\tag{2.1.8}
\end{align}
By the assumption on $F_{p,j}$, the
product over $p$ is really a finite product.
Thus, (2.1.8) is a valid expression for $\lambda(s)$ for
all $s$.
By the partial Ramanujan bound and the
assumptions on $\mu_j$ and $\nu_j$, we see that $\lambda(s)$ has
no zeros or poles in the half-plane $\Re(s)>\theta$. But by the
functional equations for $L_1$ and $L_2$ we have
$\lambda(s) = (\varepsilon_1/\varepsilon_2)\overline{\lambda}(1-s)$.
Thus, $\lambda(s)$ also has no zeros or poles in the half-plane
$\Re(s) < 1-\theta$. Since $\theta < \frac12$, we conclude that
$\lambda(s)$ has no zeros or poles in the entire complex plane.
If the product over $p$ in (2.1.8) were not empty,
then the fact that $\{\log(p)\}$ is linearly independent over the
rationals implies that $\lambda(s)$ has infinitely many zeros
or poles on some vertical line.
Thus, $F_{p,1}=F_{p,2}$ for all $p$.
The $\Gamma$-factors must also cancel identically, because
the right-most pole of $\Gamma_\R(s+\mu)$ is at $-\mu$, and the right-most pole of $\Gamma_\C(s+\nu)$ is at $-\nu$.
This leaves possible remaining factors of the form
$\Gamma_\C(s+1)/\Gamma_\R(s+1)$, but such a ratio still has poles,
because the $\Gamma_\R$ factor cancels the first pole
of the $\Gamma_\C$ factor but not the second.
Note that the restriction $\Re(\mu)\in\{0,1\}$ is a critical
ingredient in this argument.
This leaves the possibility that $\lambda(s)=(N_1/N_2)^{s/2}$,
but such a function cannot satisfy the functional
equation $\lambda(s) = (\varepsilon_1/\varepsilon_2)\overline{\lambda}(1-s)$
unless $N_1=N_2$ and $\varepsilon_1=\varepsilon_2$.
The strong multiplicity one theorem for $L$-functions
In this section we state a version of strong multiplicity one
for $L$-functions which is stronger than Proposition 2.1.1
because it only requires the Dirichlet coefficients $a(p)$ and $a(p^2)$
to be reasonably close. This is a significantly weaker condition than
equality of the local factor.
Although the main ideas behind the proof appear in Kaczorowski-Perelli [KP]
and Soundararajan [S],
we give a slightly stronger
version
which assumes a partial Ramanujan bound $\theta < \frac16$, plus an additional
condition, instead of the full Ramanujan conjecture.
We provide a self-contained account because we
also wish to bring awareness of these techniques to people with a
more representation-theoretic approach to $L$-functions.
Theorem 2.1.2
Suppose $L_1(s)$, $L_2(s)$ are Dirichlet series with Dirichlet
coefficients $a_1(n)$, $a_2(n)$, respectively, which continue to
meromorphic functions of order 1 satisfying functional equations
of the form (2.1.2)-(2.1.3) with a partial Selberg bound $\Re(\mu_j), \Re(\nu_j)>-\frac12$ for both functions,
and having Euler products satisfying
(2.1.4)-(2.1.5). Assume a partial Ramanujan
bound for some $\theta < \frac16$ holds for both functions, and that
the Dirichlet coefficients at the primes are close to each other
in the sense that
\begin{equation}
\sum_{p\le X} p\,\log(p) |a_1(p)-a_2(p)|^2\ll X .\tag{2.1.9}
\end{equation}
We have $L_1(s)=L_2(s)$ if either of the following two conditions is satisfied:
1. $\displaystyle \sum_{p\le X} |a_1(p^2)-a_2(p^2)|^2 \log p \ll X$.
2. For each of $L_1(s)$ and $L_2(s)$, separately, any one of the following holds:
   (a) The Ramanujan bound $\theta=0$.
   (b) The partial symmetric square (2.1.6) of the function has a meromorphic continuation past the line $\sigma=1$, with only finitely many zeros or poles in $\sigma\ge 1$.
   (c) The partial exterior square (2.1.7) of the function has a meromorphic continuation past the line $\sigma=1$, with only finitely many zeros or poles in $\sigma\ge 1$.
Note that condition (2.1.9) is satisfied if $|a_1(p)-a_2(p)|\ll 1/\sqrt{\mathstrut p}$, in particular, if $a_1(p)=a_2(p)$ for all
but finitely many $p$,
or more generally if
$a_1(p)=a_2(p)$ for all but a sufficiently thin set of primes.
In particular, $a_1(p)$ and $a_2(p)$ can differ at infinitely many primes.
Also, by the prime number theorem [Ap, Theorem 4.4] in the form
\begin{equation}
\sum\limits_{p < X} \log(p) \sim X,\tag{2.1.10}
\end{equation}
condition 2(a) for both $L$-functions implies condition 1.
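A quick numerical illustration of (2.1.10), summing $\log p$ over the primes up to $10^5$ with a simple sieve (a sanity check, of course, not a proof):

```python
import math

def theta_chebyshev(X):
    # Chebyshev's theta(X) = sum_{p <= X} log p, via a sieve of Eratosthenes.
    is_prime = [True] * (X + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(X ** 0.5) + 1):
        if is_prime[n]:
            for m in range(n * n, X + 1, n):
                is_prime[m] = False
    return sum(math.log(n) for n in range(2, X + 1) if is_prime[n])

X = 10 ** 5
ratio = theta_chebyshev(X) / X
# The prime number theorem gives theta(X)/X -> 1.
assert 0.95 < ratio < 1.05
```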
The condition $\theta < \frac16$ arises from the $p^{-3s}$ terms in the
proof of Lemma 2.2.2. Those terms do not seem to give rise to
a naturally occurring $L$-function at $3s$, so it may be difficult
to replace the $\theta < \frac16$ condition by a statement about the average
of certain Dirichlet coefficients.
Subsection 2.3: Proof of Theorem 2.1.2
Now we have the ingredients to prove Theorem 2.1.2. The proof begins the same as that of Proposition 2.1.1, by considering the ratio of completed $L$-functions:
\begin{equation}
\lambda(s) := \frac{\Lambda_1(s)}{\Lambda_2(s)},\tag{2.3.1}
\end{equation}
which is a meromorphic
function of order 1 and satisfies the functional equation $\lambda(s)=\varepsilon \overline{\lambda}(1-s)$,
where $\varepsilon = \varepsilon_1/\varepsilon_2$.
Lemma 2.3.1
$\lambda(s)$ has only finitely many zeros or poles in the
half-plane $\sigma\ge\frac12$.
Assuming the lemma, we complete the proof of Theorem 2.1.2
as follows. By the functional equation, $\lambda(s)$ has only
finitely many zeros or poles, so by the Hadamard factorization
theorem
\begin{equation}
\lambda(s) = e^{A s} r(s)\tag{2.3.2}
\end{equation}
where $r(s)$ is a rational function.
By (2.3.2), as $\sigma\to\infty$,
\begin{equation}\lambda(\sigma) = C_0 \sigma^{m_0} e^{A \sigma} \bigl(1 + C_1 \sigma^{-1} + O(\sigma^{-2})\bigr),\tag{2.3.3}
\end{equation}
for some $C_0\not=0$ and $m_0\in \Z$.
On the other hand, if $b(n_0)$ is the first non-zero Dirichlet coefficient (with $n_0>1$) of $L_1(s)/L_2(s)$,
then by (2.3.1) and Stirling's formula, as $\sigma\to\infty$,
\begin{equation}\lambda(\sigma) = \bigl(B_0 \sigma^{B_1} e^{B_2 \sigma\log \sigma + B_3 \sigma }(1 +o(1))\bigr)\bigl(1 + b(n_0) n_0^{-\sigma}
+ O((n_0+1)^{-\sigma})\bigr).\tag{2.3.4}
\end{equation}
Comparing these two asymptotic formulas, the leading terms must agree, so
$B_0=C_0$, $B_1=m_0$, $B_2=0$, and $B_3=A$. Comparing the second terms, we would have
a polynomially decaying term equal to an exponentially decaying term, which is impossible
unless $b(n_0)=0$ and $C_1=0$. But $b(n_0)$ was the first nonzero coefficient of
$L_1(s)/L_2(s)$, so we conclude that $L_1(s)=L_2(s)$, as claimed.\qed
\vspace{3ex}
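The shape of (2.3.4) comes from Stirling's formula
\begin{equation}
\log\Gamma(\sigma) = \sigma\log\sigma - \sigma + \tfrac12\log\frac{2\pi}{\sigma} + O(\sigma^{-1}), \qquad \sigma\to\infty,
\end{equation}
applied to each $\Gamma$-factor in the quotient $\Lambda_1(s)/\Lambda_2(s)$: since $\log\Gamma_\R(\sigma+\mu)\sim\frac{\sigma}{2}\log\sigma$ and $\log\Gamma_\C(\sigma+\nu)\sim\sigma\log\sigma$ up to $O(\sigma)$ terms, the coefficient of $\sigma\log\sigma$ is $B_2=\frac12(d_1-d_2)$. In particular, $B_2=0$ already forces $L_1$ and $L_2$ to have the same degree.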
The rest of this section is devoted to the proof of
Lemma 2.3.1. By (2.1.8) and the
partial Selberg bound assumed
on $\mu$ and $\nu$, only the product
\begin{equation}
P(s)=\prod_p\frac{ F_{p,1}(p^{-s})^{-1} }{ F_{p,2}(p^{-s})^{-1} }
= \prod_p\frac{ 1+a_1(p)p^{-s}+a_1(p^{2})p^{-2s}+\cdots}
{1+a_2(p)p^{-s}+a_2(p^{2})p^{-2s}+\cdots}
\end{equation}
could contribute any zeros or poles to $\lambda(s)$
in the half-plane $\sigma\ge\frac12$.
By the first line in equation (2.2.6) of
Lemma 2.2.2 we have
\begin{align}
P(s) =\mathstrut & \prod_p \frac{1+a_1(p)p^{-s}}{1+a_2(p)p^{-s}}
\cdot \prod_p
\frac{1+a_1(p^2)p^{-2s}}{1+a_2(p^2)p^{-2s}}
\cdot H_1(s)\cr
=\mathstrut & A_1(s) H_1(s),\tag{2.3.5}
\end{align}
say,
where $H_1(s)$ is regular and nonvanishing for $\sigma>\frac13+\theta$.
Lemma 2.3.2
Assuming $\theta < \frac16$, bound (2.1.9),
and condition 1 of Theorem 2.1.2,
with $A_1(s)$ as defined in (2.3.5) we have
\begin{align}
A_1(s)=\mathstrut & \prod_p (1+(a_1(p)-a_2(p))p^{-s})
\cdot
\prod_p (1+(a_1(p^2)-a_2(p^2))p^{-2s})
\cdot H_2(s),\tag{2.3.6}
\end{align}
where $H_2(s)$ is regular and nonvanishing for $\sigma > \frac{5}{12}$.
We finish the proof of Lemma 2.3.1 and then conclude
with the proof of Lemma 2.3.2.
Using the notation of Lemma 2.3.2,
write (2.3.6) as $A_1(s)=A_2(s)H_2(s)$. Since
$A_1(s)$ and $H_2(s)$ are meromorphic in a neighborhood of
$\sigma\ge\frac12$, so is $A_2(s)$. Changing variables $s\mapsto
s+\frac12$, which multiplies the $n$th Dirichlet coefficient by
$1/\sqrt{n}$, we can apply Lemma 2.2.3, using
the estimate (2.1.9) and condition 1
to conclude that $A_2(s)$ has only finitely many zeros or poles
in $\sigma\ge\frac12$. Since the same is true of $H_1(s)$ and
$H_2(s)$, we have shown that $P(s)$ has only finitely many zeros
or poles in $\sigma\ge\frac12$. This completes the proof under
conditions 1 and 2(a).
In the other cases, the proof is almost the same, using
Lemma 2.2.2 to rewrite equation (2.3.5)
in terms of $L_j^S(s,\sym^2)$ or $L_j^S(s,\ext^2)$, and using Lemma 2.2.3
for the factors that remain. This concludes the proof of Lemma 2.3.1.\qed
\vspace{3ex}
Proof of Lemma 2.3.2
Using the identities
\begin{equation}\frac{1+a x}{1+b x} = 1+(a-b)x - \frac{b(a-b)x^2}{1+b x}\tag{2.3.7}
\end{equation}
and
\begin{equation}1+ax+bx^2 = (1+ax)\left(1+ \frac{b x^2}{1+ax} \right)\tag{2.3.8}
\end{equation}
we have
\begin{equation}\frac{1+a x}{1+b x} = (1+(a-b)x)\left(1-\frac{b(a-b)x^2}{(1+(a-b)x)(1+bx)}\right).\tag{2.3.9}
\end{equation}
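Identity (2.3.9) is elementary but easy to get wrong; it can be checked with exact rational arithmetic:

```python
from fractions import Fraction as F

def lhs(a, b, x):
    # left-hand side of (2.3.9)
    return (1 + a * x) / (1 + b * x)

def rhs(a, b, x):
    # right-hand side of (2.3.9)
    return (1 + (a - b) * x) * (
        1 - b * (a - b) * x ** 2 / ((1 + (a - b) * x) * (1 + b * x))
    )

# spot-check at a few rational points where no denominator vanishes
for a, b, x in [(F(2), F(3), F(1, 5)), (F(-1, 2), F(7), F(1, 3))]:
    assert lhs(a, b, x) == rhs(a, b, x)
```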
Thus
\begin{align}\prod_p \frac{1+a_1(p)p^{-s}}{1+a_2(p)p^{-s}}
=\mathstrut & \prod_p \bigl(1+(a_1(p)-a_2(p))p^{-s} \bigr) \cr
&\phantom{xx}\times \prod_p \biggl( 1-
\frac{a_2(p)(a_1(p)-a_2(p)) p^{-2s}}{
(1+(a_1(p)-a_2(p))p^{-s})(1+a_2(p)p^{-s})}\biggr) \cr
=\mathstrut & \prod_p \bigl(1+(a_1(p)-a_2(p))p^{-s} \bigr) \cdot h(s)\tag{2.3.10}
\end{align}
say.
We wish to apply Lemma 2.2.3 to show that $h(s)$ is regular and nonvanishing
for $\sigma>\sigma_0$ for some $\sigma_0 < \frac12$.
Since $\theta < \frac16$,
if
$\sigma\ge \frac16$
and
$p > P_0$ where $P_0$ depends only on $\theta$,
then
$|1+a_2(p)p^{-\sigma}| \geq \frac12$ and
$|1+(a_1(p)-a_2(p))p^{-\sigma}| \geq \frac12$.
Using those inequalities and $|a_2(p)|\ll p^\theta$ we have
\begin{align}\sum_{P_0\le p\le X}& \left|\frac{a_2(p)(a_1(p)-a_2(p)) }{
(1+(a_1(p)-a_2(p))p^{-\sigma})(1+a_2(p)p^{-\sigma})}\right|^2 \log p\cr
&\phantom{xxxxxxxxxxxxxxxxxxx}\le 16 \mathstrut \sum_{P_0\le p\le X} \left|{a_2(p)(a_1(p)-a_2(p)) }\right|^2 \log p\cr
&\phantom{xxxxxxxxxxxxxxxxxxx}\ll\mathstrut X^{2\theta} \sum_{P_0\le p\le X} \left|(a_1(p)-a_2(p)) \right|^2 \log p \cr
&\phantom{xxxxxxxxxxxxxxxxxxx}\ll\mathstrut X^{\frac12+2\theta}.\tag{2.3.11}
\end{align}
Changing variables $s\to \frac{s}{2}-\frac{1}{12}$ and applying
Lemma 2.2.3, we see that $h(s)$ is regular and nonvanishing for $\sigma>\frac{5}{12}$.
Applying the same reasoning to the second factor in (2.3.6) completes the proof.\qed
Subsection 2.4: Proof of Lemma 2.2.3
Two basic results which are used in this section are:
Lemma 2.4.1
If $\sum_{n \le X} |a(n)| \ll X^{1+\epsilon}$ for every $\epsilon>0$, then
$\displaystyle
\sum_{n=1}^\infty \frac{a(n)}{n^s}
$
converges absolutely for all $\sigma>1$.
Lemma 2.4.2
If $\sum_{n \le X} |a(n)| \le (1+o(1))\,C X$ as $X\to\infty$, then
\begin{equation}
\sum_{n=1}^\infty \frac{|a(n)|}{n^\sigma} \le \frac{C}{\sigma-1} + O(1)
\end{equation}
as $\sigma \to 1^+$.
Both of those results follow by partial summation.
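For example, taking $a(n)\equiv 1$ (so $C=1$) in Lemma 2.4.2 recovers the familiar blow-up $\sum_n n^{-\sigma} \sim 1/(\sigma-1)$ as $\sigma\to1^+$; a rough numerical illustration:

```python
def dirichlet_partial(sigma, N=10 ** 5):
    # Partial sum of sum_{n <= N} n^(-sigma); the tail beyond N is
    # about N^(1-sigma)/(sigma-1), small for sigma bounded away from 1.
    return sum(n ** -sigma for n in range(1, N + 1))

sigma = 1.1
# Lemma 2.4.2 with C = 1 predicts 1/(sigma-1) + O(1) = 10 + O(1).
assert abs(dirichlet_partial(sigma) - 1 / (sigma - 1)) < 5
```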
We first state and prove a simplified version of Lemma 2.2.3.
Lemma 2.4.3
Let
\begin{equation}L(s)=\prod_p \sum_{j=0}^\infty a({p^j}) p^{-j s}\tag{2.4.1}
\end{equation}
and suppose there exist $M\ge 0$ and $\theta < \frac 12$ so that
$|a({p^j})|\ll p^{j \theta}$
and
\begin{equation}
\sum_{p\le X} |a(p)|^2 \log p \le (1+o(1)) M^2 X.\tag{2.4.2}
\end{equation}
Then $L(s)$ is a nonvanishing analytic function in the half-plane $\sigma>1$.
Furthermore, if $L(s)$ has a meromorphic continuation to a neighborhood of $\sigma\ge1$,
then $L(s)$ has at most $M^2$ zeros or poles on the
$\sigma=1$ line.
Note that, by the prime number theorem (2.1.10), the condition on $a(p)$ is satisfied if $|a(p)|\le M$.
Proof
We have
\begin{align}
L(s)=\mathstrut &\prod_p \sum_{j=0}^\infty a({p^j}) p^{-j s} \cr
=\mathstrut & \prod_p \biggl(1+a(p) p^{-s} + \sum_{j\ge 2} a({p^j}) p^{-j s}\biggr) \cr
=\mathstrut & \prod_p \left(1+a(p) p^{-s} \right) \cr
&\times \prod_p \bigl(1+a(p^2)p^{-2s} +(a(p^3)-a(p) a(p^2))p^{-3s}\cr
&\phantom{xxxxxx} +
(a(p^4)-a(p)a(p^3)+a(p)^2a(p^2))p^{-4s}+\cdots\bigr)\cr
=\mathstrut & \prod_p \left(1+a(p) p^{-s} \right) \prod_p \biggl(1+\sum_{j=2}^\infty b({p^j}) p^{-j s}\biggr),\tag{2.4.3}
\end{align}
say, where $b(p^j)\ll j M^j p^{j \theta} \ll p^{j(\theta+\epsilon)}$
for any $\epsilon>0$.
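The coefficients $b(p^j)$ in (2.4.3) come from dividing the local power series by $1+a(p)x$, which gives the recursion $b(p^j)=a(p^j)-a(p)\,b(p^{j-1})$. A sketch checking the two coefficients displayed above (with made-up values for the $a(p^j)$):

```python
def divide_out_linear(a, nterms):
    # Solve (1 + a[1]*x) * (1 + sum_{j>=2} b[j] x^j) = sum_j a[j] x^j
    # for the b[j]; matching the x^j coefficients gives
    # b[j] = a[j] - a[1] * b[j-1], with b[1] = 0.
    b = [1.0, 0.0] + [0.0] * (nterms - 2)
    for j in range(2, nterms):
        b[j] = a[j] - a[1] * b[j - 1]
    return b

a = [1.0, 0.7, -0.3, 0.2, 0.9]   # hypothetical a(p^j), j = 0..4
b = divide_out_linear(a, 5)
assert abs(b[2] - a[2]) < 1e-12                                   # b(p^2) = a(p^2)
assert abs(b[3] - (a[3] - a[1] * a[2])) < 1e-12                   # a(p^3) - a(p) a(p^2)
assert abs(b[4] - (a[4] - a[1] * a[3] + a[1] ** 2 * a[2])) < 1e-12
```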
Writing (2.4.3) as $L(s)=f(s)g(s)$ we have
\begin{equation}
\log g(s) = \sum_p \log(1+Y) = \sum_p \left( Y + O(Y^2) \right),\tag{2.4.4}
\end{equation}
where $\displaystyle Y=\sum_{j=2}^\infty b({p^j}) p^{-j s}$. Now,
\begin{align}|Y| \le \mathstrut& \sum_{j=2}^\infty |b({p^j})| p^{-j \sigma} \cr
\ll \mathstrut& \sum_{j=2}^\infty p^{j(\theta-\sigma+\epsilon)} \cr
= \mathstrut& \frac{p^{2(\theta-\sigma+\epsilon)}}{1-p^{\theta-\sigma+\epsilon}}.\tag{2.4.5}
\end{align}
If $\sigma > \frac12 + \theta$ we have $|Y|\ll 1/p^{1+\delta}$ for some $\delta>0$.
Therefore the series (2.4.4) for $\log(g(s))$ converges absolutely for
$\sigma > \frac12 + \theta$,
so $g(s)$ is a nonvanishing analytic function in that region. By (2.4.2),
Cauchy's inequality,
and Lemma 2.4.1,
$f(s)$ is a nonvanishing analytic function for $\sigma>1$,
so the same is true for $L(s)$.
This establishes the first assertion in the lemma.
Now we consider the zeros of $L(s)$ on $\sigma=1$.
Since $\theta < \frac12$, the zeros or poles of $L(s)$ on the $\sigma=1$ line are the zeros
or poles
of $f(s)$. Furthermore, by (2.4.3) and the properties of $g(s)$,
for $\sigma>1$ we have
\begin{equation}
\frac{L'}{L}(s) = \sum_p \frac{-a(p)\log(p)}{p^s} + h(s),\tag{2.4.6}
\end{equation}
where $h(s)$ is bounded in $\sigma > \frac12+\theta+\epsilon$ for any $\epsilon>0$.
Suppose $s_1,\ldots,s_J$ are zeros or poles of $L(s)$,
with $s_j = 1+i t_j$ having multiplicity $m_j$.
We have
\begin{equation}\frac{L'}{L}(\sigma+i t_j) \sim \frac{m_j}{\sigma-1},
\ \ \ \ \ \ \ \
\mathrm{as}
\ \
\sigma \to 1^+,\tag{2.4.7}
\end{equation}
therefore
\begin{equation}
\sum_p \frac{-a(p)\log(p)}{p^{\sigma + it_j}} \sim \frac{m_j}{\sigma-1},
\ \ \ \ \ \ \ \
\mathrm{as}
\ \
\sigma \to 1^+.\tag{2.4.8}
\end{equation}
Now write
\begin{equation}k(s) = \sum_{j=1}^J m_j \sum_p \frac{-a(p)\log(p)}{p^{s+it_j}}.\tag{2.4.9}
\end{equation}
By (2.4.8) we have
\begin{equation}
k(\sigma) \sim \frac{\sum_{j=1}^J m_j^2}{\sigma-1},
\ \ \ \ \ \ \ \
\mathrm{as}
\ \
\sigma \to 1^+.\tag{2.4.10}
\end{equation}
On the other hand, for $\sigma>1$ we have
\begin{align}
|k(\sigma)| = \mathstrut &
\left|
\sum_p \frac{a(p)\log(p)}{p^{\sigma}} \sum_{j=1}^J m_j p^{-it_j}
\right| \cr
\le \mathstrut &
\left(\sum_p \frac{|a(p)|^2 \log(p)}{p^{\sigma}} \right)^{\frac12}
\left( \sum_p \frac{\log p}{p^\sigma}
\left| \sum_{j=1}^J m_j p^{-it_j} \right|^2
\right)^{\frac12} \cr
\le \mathstrut &
(1+o(1)) \left(\frac{M^2}{\sigma-1} \right)^{\frac12}
\left(
\sum_{j=1}^J \sum_{\ell=1}^J m_j m_\ell
\sum_p \frac{\log p}{p^{\sigma + i(t_j - t_\ell)} }
\right)^{\frac12} \cr
\sim \mathstrut & \left(\frac{M^2}{\sigma-1} \right)^{\frac12}
\left(
\sum_{j=1}^J \frac{m_j^2}{\sigma-1}
\right)^{\frac12}
\ \ \ \ \ \ \ \
\mathrm{as}
\ \
\sigma \to 1^+.\tag{2.4.11}
\end{align}
On the first line we used the Cauchy-Schwarz inequality,
on the next-to-last line we wrote the sum over $a(p)$ as a
Stieltjes integral and used (2.4.2) and Lemma 2.4.2, and on the
last line we used the fact that the Riemann zeta function has a
simple pole at $s=1$ and no other zeros or poles on the $\sigma=1$ line.
Combining (2.4.8) and (2.4.11) we have
$\sum_{j=1}^J m_j^2 \le M^2$. Since $m_j^2\ge 1$, we see that
$J\le M^2$, as claimed.
The proof of Lemma 2.2.3 is similar to that of Lemma 2.4.3.
Proof of Lemma 2.2.3
We have
\begin{align}
L(s)=\mathstrut &\prod_p \sum_{j=0}^\infty a({p^j}) p^{-j s} \cr
=\mathstrut & \prod_p \biggl(1+a(p) p^{-s} + a(p^2)p^{-2s} + \sum_{j\ge 3} a({p^j}) p^{-j s}\biggr) \cr
=\mathstrut & \prod_p \left(1+a(p) p^{-s} \right) \left(1+a(p^2) p^{-2s} \right) \cr
&\phantom{xxx}\times \bigl(1+(a(p^3)-a(p) a(p^2))p^{-3s}\cr
&\phantom{xxxxxx} +
(a(p^4)-a(p)a(p^3)+a(p)^2a(p^2))p^{-4s}+\cdots\bigr)\cr
= \mathstrut &
\prod_p \left(1+a(p) p^{-s} \right)\left(1+a(p^2) p^{-2s} \right) \prod_p \biggl(1+\sum_{j=3}^\infty c({p^j}) p^{-j s}\biggr)\cr
=\mathstrut & f(s)g(s),\tag{2.4.12}
\end{align}
say.
We have $c({p^j}) \ll j M^j p^{j\theta} \ll p^{j(\theta+\epsilon)}$ for any $\epsilon>0$.
We use this to show that $g(s)$ is a nonvanishing analytic function in $\sigma>\frac13+\theta$.
Writing $g(s) = \prod_p(1+Y)$ we have
\begin{equation}
\log g(s) = \sum_p \log(1+Y) = \sum_p \left( Y + O(Y^2) \right),\tag{2.4.13}
\end{equation}
where $\displaystyle Y=\sum_{j=3}^\infty c({p^j}) p^{-j s}$. Now,
\begin{align}|Y| \le \mathstrut& \sum_{j=3}^\infty |c({p^j})| p^{-j \sigma} \cr
\ll \mathstrut& \sum_{j=3}^\infty p^{j(\theta-\sigma+\epsilon)} \cr
= \mathstrut& \frac{p^{3(\theta-\sigma+\epsilon)}}{1-p^{\theta-\sigma+\epsilon}}.\tag{2.4.14}
\end{align}
If $\sigma > \frac13 + \theta$ we have $|Y|\ll 1/p^{1+\delta}$ for some $\delta>0$.
Therefore by Lemma 2.4.1
the series (2.4.13) for $\log(g(s))$ converges absolutely for
$\sigma > \frac13 + \theta$,
so $g(s)$ is a nonvanishing analytic function in that region. By
the same argument, using
(2.2.12) and the corresponding bound for $a(p^2)$,
$f(s)$ is a nonvanishing analytic function for $\sigma>1$,
so the same is true for $L(s)$.
This establishes the first assertion in the lemma.
Now we consider the zeros of $L(s)$ on $\sigma=1$. Since $\theta < \frac23$,
the zeros or poles
of $L(s)$ on the $\sigma = 1$ line are the zeros or poles of $f(s)$.
Taking the logarithmic derivative of (2.4.12) and using the
same argument as above for the lower order terms, we have
\begin{align}
\frac{L'}{L}(s) =\mathstrut & \sum_p \frac{-a(p)\log(p)}{p^s} + \frac{a(p)^2\log(p)}{p^{2s}} -2\, \frac{a(p^2)\log(p)}{p^{2s}} + h_1(s)\cr
=\mathstrut & \sum_p \frac{-a(p)\log(p)}{p^s} -2 \,\frac{a(p^2)\log(p)}{p^{2s}} + h_2(s),\tag{2.4.15}
\end{align}
where $h_j(s)$ is bounded in $\sigma > \frac13+\theta+\epsilon$ for any $\epsilon>0$.
By (2.2.12) and Lemma 2.4.1, the middle term in the sum over primes
in (2.4.15)
converges absolutely for $\sigma>\frac12$, so it was incorporated into $h_1(s)$.
Suppose $s_1,\ldots,s_J$ are zeros or poles of $L(s)$,
with $s_j = 1+i t_j$ having multiplicity $m_j$.
We have
\begin{equation}\frac{L'}{L}(\sigma+i t_j) \sim \frac{m_j}{\sigma-1},
\ \ \ \ \ \ \ \
\mathrm{as}
\ \
\sigma \to 1^+,\tag{2.4.16}
\end{equation}
therefore
\begin{equation}
\sum_p
\left(
\frac{-a(p)\log(p)}{p^{\sigma + it_j}}
- 2\, \frac{a(p^2)\log(p)}{p^{2(\sigma + it_j)}}
\right)
\sim \frac{m_j}{\sigma-1},
\ \ \ \ \ \ \ \
\mathrm{as}
\ \
\sigma \to 1^+.\tag{2.4.17}
\end{equation}
Now write
\begin{equation}
k(s) = \sum_{j=1}^J m_j \sum_p
\left(
\frac{-a(p)\log(p)}{p^{s+i t_j}}
-2\, \frac{a(p^2)\log(p)}{p^{2(s+i t_j)}}
\right).\tag{2.4.18}
\end{equation}
By (2.4.17) we have
\begin{equation}
k(\sigma) \sim \frac{\sum_{j=1}^J m_j^2}{\sigma-1},
\ \ \ \ \ \ \ \
\mathrm{as}
\ \
\sigma \to 1^+.\tag{2.4.19}
\end{equation}
We will manipulate (2.4.18) so that we can use (2.2.12)
and the corresponding bound for $a(p^2)$
to give a bound on $\sum m_j^2$ in terms of $M_1$ and $M_2$.
By Cauchy's inequality and Lemma 2.4.2
we have
\begin{align}
|k(\sigma)| \le \mathstrut &
\left|
\sum_p \frac{a(p)\log(p)}{p^{\sigma}}
\sum_{j=1}^J \frac{m_j}{p^{it_j}}\right|
+ 2 \left|\sum_p \frac{p^{-\sigma} a(p^2)\log(p)}{p^{\sigma}}
\sum_{j=1}^J \frac{m_j}{p^{2it_j}}
\right| \cr
\le \mathstrut
&\left(\sum_p \frac{|a(p)|^2 \log(p)}{p^{\sigma}}\right)^{\frac12}
\left( \sum_p \frac{\log p}{p^\sigma} \biggl|\sum_{j=1}^J m_j p^{-it_j} \biggr|^2\right)^{\frac12}\nonumber\\
&\ \ \ \ \ \ +2 \left(\sum_p \frac{p^{-2\sigma}|a(p^2)|^2 \log(p)}{p^{\sigma}} \right)^{\frac12}
\left( \sum_p \frac{\log p}{p^\sigma}\biggl| \sum_{j=1}^J m_j p^{-2it_j} \biggr|^2\right)^{\frac12}\nonumber\\
\le & (1+o(1))\Biggl(
\left(\frac{M_1^2}{\sigma-1} \right)^{\frac12}
\biggl(
\sum_{j=1}^J \sum_{\ell=1}^J m_j m_\ell
\sum_p \frac{\log p}{p^{\sigma + i(t_j - t_\ell)} }
\biggr)^{\frac12} \nonumber\\
&\ \ \ \ \ +2 \left(\frac{M_2^2}{\sigma-1} \right)^{\frac12}
\biggl(
\sum_{j=1}^J \sum_{\ell=1}^J m_j m_\ell
\sum_p \frac{\log p}{p^{\sigma + 2i(t_j - t_\ell)} }
\biggr)^{\frac12}
\Biggr)\nonumber\\
\sim & \frac{M_1 + 2 M_2}{(\sigma-1)^\frac12 }
\left(
\sum_{j=1}^J \frac{m_j^2}{\sigma-1}
\right)^{\frac12}
\ \ \ \ \ \ \ \
\mathrm{as}
\ \
\sigma \to 1^+.\tag{2.4.20}
\end{align}
In the last step we used the fact that the Riemann zeta function has a simple
pole at 1 and no other zeros or poles on the $1$-line.
Combining (2.4.19) and (2.4.20) we have
$\displaystyle
\sum_{j=1}^J m_j^2 \le (M_1 +2 M_2)^2.
$
Since $m_j^2\ge 1$, the proof is complete.