Series ODE#
We have shown how to solve second-order linear ODEs with constant coefficients. Now consider ODEs whose coefficients are functions of the independent variable, which we denote here by \(x\).
It is sufficient to consider the homogeneous equation

\[
P(x)\, y'' + Q(x)\, y' + R(x)\, y = 0,
\]

since the procedure for the corresponding inhomogeneous equation is similar.
For now we will work with \(P\), \(Q\) and \(R\) that are polynomials with no common factors, although the method can be extended to general analytic functions.
Suppose also that we wish to solve this equation in the neighbourhood of a point \(x_0\). The solution in an interval containing \(x_0\) is closely associated with the behaviour of \(P\) in that interval.
A point \(x_0\) such that \(P(x_0) \neq 0\) is called an ordinary point. Since \(P\) is continuous, there is an interval about \(x_0\) in which \(P(x)\) is never zero. In that interval we can divide the equation by \(P(x)\) to obtain

\[
y'' + p(x)\, y' + q(x)\, y = 0, \qquad p(x) = \frac{Q(x)}{P(x)}, \quad q(x) = \frac{R(x)}{P(x)},
\]
where \(p(x)\) and \(q(x)\) are continuous functions. Hence, by the existence and uniqueness theorem, there exists in that interval a unique solution that also satisfies the initial conditions \(y(x_0) = y_0\), \(y'(x_0) = y_0'\) for arbitrary values of \(y_0\) and \(y_0'\). In what follows we discuss the solution in the neighbourhood of an ordinary point.
We look for a solution of (294) in the form of a power series

\[
y = \sum_{m=0}^{\infty} a_m (x - x_0)^m,
\]
and we assume that the series converges in the interval \( |x - x_0| < \rho, \ \rho > 0 \).
Series solutions of ODEs#
Consider the second-order linear ODE in standard form

\[
y'' + p(x)\, y' + q(x)\, y = 0. \qquad (310)
\]

A point \(x = x_0\) is an ordinary point if \(p(x)\) and \(q(x)\) are analytic on some interval about \(x_0\); otherwise it is a singular point. If \(x_0\) is a singularity of \(p(x)\) or \(q(x)\), but \((x-x_0)p(x)\) and \((x-x_0)^2 q(x)\) are analytic at \(x_0\), then \(x_0\) is a regular singular point.
Note that it is possible to translate the point \(x_0\) to the origin. We can also analyse the behaviour at infinity with the transformation \(x = 1/t\).
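For instance (a brief sketch of the latter, writing \(Y(t) \equiv y(1/t)\) purely as a relabelling), the chain rule gives

\[
\frac{dy}{dx} = -t^2 \frac{dY}{dt}, \qquad
\frac{d^2 y}{dx^2} = t^4 \frac{d^2 Y}{dt^2} + 2 t^3 \frac{dY}{dt},
\]

so the behaviour of solutions as \(x \to \infty\) can be read off from the transformed equation near \(t = 0\).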
Power series method#
A power series in powers of \(x-x_0\) is an infinite series

\[
\sum_{m=0}^{\infty} a_m (x - x_0)^m = a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + \dots,
\]

where \(x\) is a variable, \(x_0\) is a constant called the centre of the series and \(a_m\) are the coefficients of the series. It is possible to translate the point \(x_0\) to the origin, so for convenience we will often assume \(x_0 = 0\). Then

\[
\sum_{m=0}^{\infty} a_m x^m = a_0 + a_1 x + a_2 x^2 + \dots
\]
We assume the solution of (310) to be of the form of the power series above, so that

\[
y = \sum_{m=0}^{\infty} a_m x^m, \qquad
y' = \sum_{m=1}^{\infty} m\, a_m x^{m-1}, \qquad
y'' = \sum_{m=2}^{\infty} m(m-1)\, a_m x^{m-2}.
\]
The idea to solve (310) is:
Represent \(p(x)\) and \(q(x)\) by power series
Substitute \(y\) and its derivatives in (310)
Equate coefficients of like powers of \(x\) and determine them successively
To demonstrate this, let's look at a simple example, the simple harmonic oscillator \(y'' + y = 0\). Inserting the above power series into the ODE (assuming \(x_0 = 0\)) yields

\[
\sum_{m=2}^{\infty} m(m-1)\, a_m x^{m-2} + \sum_{m=0}^{\infty} a_m x^m = 0.
\]
We cannot solve this yet, as the summands involve different powers of \(x\) and the lower limits are different. To circumvent this, we can use a shifted index \(n = m-2\) for the \(y''\) sum and then relabel \(n \rightarrow m\):

\[
\sum_{n=0}^{\infty} (n+2)(n+1)\, a_{n+2}\, x^{n} + \sum_{m=0}^{\infty} a_m x^m = 0
\quad \Longrightarrow \quad
\sum_{m=0}^{\infty} (m+2)(m+1)\, a_{m+2}\, x^{m} + \sum_{m=0}^{\infty} a_m x^m = 0.
\]
Now we can proceed and combine the two sums:

\[
\sum_{m=0}^{\infty} \left[ (m+2)(m+1)\, a_{m+2} + a_m \right] x^m = 0.
\]
We require the overall coefficient of each power of \(x\) to vanish; this is the only way to guarantee that the LHS equals zero for any \(x\). Thus, we can write

\[
a_{m+2} = -\frac{a_m}{(m+2)(m+1)}, \qquad m = 0, 1, 2, \dots,
\]
which is known as a recurrence relation. It separately links the \(a_m\) together for even \(m\) and odd \(m\). Using the recurrence relation, we can determine all the coefficients of the power series and thus determine the solution of the ODE. Thus, for the simple harmonic oscillator, we find for even \(m = 2k\):

\[
a_2 = -\frac{a_0}{2!}, \qquad a_4 = -\frac{a_2}{4 \cdot 3} = \frac{a_0}{4!}, \qquad a_6 = -\frac{a_4}{6 \cdot 5} = -\frac{a_0}{6!}, \qquad \dots
\]
and by inspection we see that

\[
a_{2k} = \frac{(-1)^k}{(2k)!}\, a_0.
\]
Using the same procedure for odd \(m = 2k+1\), we find by inspection that

\[
a_{2k+1} = \frac{(-1)^k}{(2k+1)!}\, a_1.
\]
Thus, relabelling \(k \rightarrow m\), we can write

\[
y(x) = a_0 \sum_{m=0}^{\infty} \frac{(-1)^m}{(2m)!}\, x^{2m} + a_1 \sum_{m=0}^{\infty} \frac{(-1)^m}{(2m+1)!}\, x^{2m+1}.
\]
We can recognise the two power series as the cosine and sine functions respectively. Thus, the solution of the ODE can be written as

\[
y(x) = a_0 \cos x + a_1 \sin x,
\]
which, as expected, is the solution to the simple harmonic oscillator. Note that this is the general solution with the undetermined coefficients \(a_0\) and \(a_1\) acting as the two required arbitrary constants.
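As a quick numerical cross-check of the recurrence relation (a sketch only; the function name `sho_series_coeffs` and the comparison against the cosine coefficients are illustrative choices, not part of the derivation):

```python
from math import factorial

def sho_series_coeffs(a0, a1, nmax):
    """Coefficients a_m of y = sum_m a_m x^m for y'' + y = 0,
    generated from the recurrence a_{m+2} = -a_m / ((m+2)(m+1))."""
    a = [0.0] * (nmax + 1)
    a[0], a[1] = a0, a1
    for m in range(nmax - 1):
        a[m + 2] = -a[m] / ((m + 2) * (m + 1))
    return a

# With a0 = 1, a1 = 0 the coefficients should reproduce the cosine series,
# a_{2k} = (-1)^k / (2k)! and a_{2k+1} = 0.
print(sho_series_coeffs(1.0, 0.0, 8))
print([(-1) ** (m // 2) / factorial(m) if m % 2 == 0 else 0.0 for m in range(9)])
```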
Frobenius method#
Consider a second-order linear ODE

\[
y'' + p(x)\, y' + q(x)\, y = 0.
\]

Theorem (Fuchs) If \(x = x_0\) is a regular singular point, then the solutions of this differential equation either are analytic on some neighbourhood of \(x_0\), or they have a pole or a logarithmic term at \(x_0\).
The solution to the ODE can be expressed using a generalised Frobenius series, meaning that any solution can be written as

\[
y(x) = (x - x_0)^r \sum_{m=0}^{\infty} a_m (x - x_0)^m = \sum_{m=0}^{\infty} a_m (x - x_0)^{m+r},
\]
where \(a_0 \neq 0\), since if it were zero we could absorb a factor of \((x - x_0)\) into \((x - x_0)^r\). This condition leads to the indicial equation for \(r\) (i.e. the equation for the index \(r\)), which is a quadratic. Usually there are two roots and hence two series; however, if the roots for \(r\) differ by an integer, we have to be careful, for reasons that will be explained below. The best way to demonstrate the Frobenius method is through an example. We will solve Bessel's equation, which in standard form is

\[
y'' + \frac{1}{x}\, y' + \left( 1 - \frac{s^2}{x^2} \right) y = 0,
\]
where \(s \geq 0\), \(p(x) = 1/x\) and \(q(x) = 1 - \frac{s^2}{x^2}\). So \(p\) and \(q\) are not analytic at \(x = 0\), and \(x = 0\) is a singular point. However, \((x - x_0)p(x)\) and \((x - x_0)^2 q(x)\) are analytic at \(x_0 = 0\), thus \(x_0 = 0\) is a regular singular point and so we can use the Frobenius method to solve the ODE. Therefore we substitute

\[
y = \sum_{m=0}^{\infty} a_m x^{m+r}, \qquad
y' = \sum_{m=0}^{\infty} (m+r)\, a_m x^{m+r-1}, \qquad
y'' = \sum_{m=0}^{\infty} (m+r)(m+r-1)\, a_m x^{m+r-2}
\]
into Bessel's equation, noting that the lower limits all remain at \(m = 0\), since the \(m = 0\) and \(m = 1\) terms do not necessarily differentiate to zero: if \(r\) is not an integer, the leading term does not vanish upon differentiating. Thus, after multiplying Bessel's equation through by \(x^2\), we get

\[
x^2 \sum_{m=0}^{\infty} (m+r)(m+r-1)\, a_m x^{m+r-2}
+ x \sum_{m=0}^{\infty} (m+r)\, a_m x^{m+r-1}
+ \left( x^2 - s^2 \right) \sum_{m=0}^{\infty} a_m x^{m+r} = 0.
\]
Now absorb the \(x^1, x^2\) pre-factors into the sums:

\[
\sum_{m=0}^{\infty} (m+r)(m+r-1)\, a_m x^{m+r}
+ \sum_{m=0}^{\infty} (m+r)\, a_m x^{m+r}
- s^2 \sum_{m=0}^{\infty} a_m x^{m+r}
+ \sum_{m=0}^{\infty} a_m x^{m+r+2} = 0.
\]
Letting \(n = m + 2\) in the final sum:

\[
\sum_{m=0}^{\infty} (m+r)(m+r-1)\, a_m x^{m+r}
+ \sum_{m=0}^{\infty} (m+r)\, a_m x^{m+r}
- s^2 \sum_{m=0}^{\infty} a_m x^{m+r}
+ \sum_{n=2}^{\infty} a_{n-2}\, x^{n+r} = 0.
\]
Finally, relabel \(n \rightarrow m\), collect in powers of \(x\) and split off the \(m = 0\) and \(m = 1\) terms of the first three sums:

\[
\left[ r(r-1) + r - s^2 \right] a_0\, x^{r}
+ \left[ (1+r)r + (1+r) - s^2 \right] a_1\, x^{1+r}
+ \sum_{m=2}^{\infty} \Big( \left[ (m+r)(m+r-1) + (m+r) - s^2 \right] a_m + a_{m-2} \Big)\, x^{m+r} = 0.
\]
Similarly to the power series method, we require all coefficients in front of each power of \(x\) to vanish. The first two terms yield the indicial equation for determining \(r\). The general term inside the summation gives the recurrence relation that generates the coefficients of the power series solution.
For \(m = 0\), after simplification the indicial equation becomes:

\[
r^2 - s^2 = 0 \quad \Rightarrow \quad r = \pm s,
\]
where \(s\) is positive. Remember that \(a_0\) cannot be zero.
For \(m = 1\):

\[
\left[ (r+1)^2 - s^2 \right] a_1 = 0,
\]
thus either

\[
a_1 = 0 \qquad \text{or} \qquad (r+1)^2 - s^2 = 0.
\]
For now we will only consider the \(a_1 = 0\) possibility.
For \(m \geq 2\):

\[
\left[ (m+r)^2 - s^2 \right] a_m + a_{m-2} = 0
\quad \Rightarrow \quad
a_m = -\frac{a_{m-2}}{(m+r)^2 - s^2} = -\frac{a_{m-2}}{m(m+2r)},
\]
where the last equality comes from the indicial equation, \(r^2 = s^2\). Using the recursion relation to evaluate the \(a_m\) coefficients and inserting them into the generalised power series yields the solution of the ODE

\[
y(x) = a_0\, x^{r} \left( 1 - \frac{x^2}{2(2r+2)} + \frac{x^4}{2 \cdot 4\,(2r+2)(2r+4)} - \dots \right), \qquad r = \pm s.
\]
Remembering that \(a_1 = 0\), the recurrence relation implies that all \(a_{odd} = 0\) and thus there is no second series. However, the two roots \(r = \pm s\) will usually yield the two independent solutions.
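As an aside (the normalisation below is a convention, not something forced by the ODE): choosing \(a_0 = 1/\left(2^s\, \Gamma(s+1)\right)\) in the \(r = +s\) series gives the Bessel function of the first kind,

\[
J_s(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\, \Gamma(k+s+1)} \left( \frac{x}{2} \right)^{2k+s}.
\]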
Now let's go back to the second indicial equation (from \(m = 1\)) and consider \(\left[ (r+1)^2 - s^2 \right] = 0\). Since \(r^2 = s^2\) (from the first indicial equation), this requires \(2r + 1 = 0\), i.e. \(r = -1/2\), and is thus the special case \(s = 1/2\). Therefore we will look for solutions with \(s = 1/2\).
For the \(r = 1/2\) solution, the recurrence relation becomes

\[
a_m = -\frac{a_{m-2}}{m(m+1)},
\]
and thus

\[
a_2 = -\frac{a_0}{3!}, \quad a_4 = \frac{a_0}{5!}, \quad \dots
\quad \Longrightarrow \quad
y(x) = a_0\, x^{1/2} \left( 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \dots \right),
\]
which can be written as

\[
y(x) = \frac{a_0}{\sqrt{x}} \left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots \right) = a_0\, \frac{\sin x}{\sqrt{x}}.
\]
There is no second series, since \(a_1 = 0\) for \(r = 1/2\).
Now let's consider the \(r = -1/2\) solution. The recurrence relation becomes

\[
a_m = -\frac{a_{m-2}}{m(m-1)}.
\]
In this case, \(a_1\) is undetermined (the \(m = 1\) condition \(\left[ (r+1)^2 - s^2 \right] a_1 = 0\) is automatically satisfied for \(r = -1/2\), \(s = 1/2\)), and thus the solution is given by two series:

\[
y(x) = a_0\, x^{-1/2} \left( 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dots \right)
+ a_1\, x^{-1/2} \left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots \right),
\]
which can be written as

\[
y(x) = a_0\, \frac{\cos x}{\sqrt{x}} + a_1\, \frac{\sin x}{\sqrt{x}} = y_{GS}(x).
\]
The second term duplicates the solution we found for \(r = 1/2\), so that solution is already contained here; this is why we identify this solution as the general solution, \(y_{GS}(x)\). This behaviour occurs when the roots differ by an integer, as they do here (\(r = -1/2, +1/2\)), and it is why we must be careful whenever the roots of the indicial equation differ by an integer.
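As a final sanity check (a sketch assuming SymPy is available; the helper name `bessel_lhs` is just an illustrative choice), we can verify symbolically that both \(\cos x/\sqrt{x}\) and \(\sin x/\sqrt{x}\) satisfy Bessel's equation with \(s = 1/2\):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
s = sp.Rational(1, 2)

def bessel_lhs(y):
    """Left-hand side of Bessel's equation in standard form,
    y'' + y'/x + (1 - s^2/x^2) y, evaluated for a given expression y(x)."""
    return sp.diff(y, x, 2) + sp.diff(y, x) / x + (1 - s**2 / x**2) * y

y1 = sp.cos(x) / sp.sqrt(x)   # series associated with a0 (r = -1/2)
y2 = sp.sin(x) / sp.sqrt(x)   # series associated with a1 (matches the r = +1/2 solution)

print(sp.simplify(bessel_lhs(y1)))  # expected: 0
print(sp.simplify(bessel_lhs(y2)))  # expected: 0
```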