Differential Equations Notes
Higher Order Differential Equations
General Form:
a_n(x) \frac{d^n y}{dx^n} + a_{n-1}(x) \frac{d^{n-1} y}{dx^{n-1}} + \cdots + a_1(x) \frac{dy}{dx} + a_0(x)y = g(x)
To solve a differential equation of order n, n initial conditions must be known:
\begin{aligned} y(x_0) &= y_0 \\ y'(x_0) &= y_1 \\ y''(x_0) &= y_2 \\ &\vdots \\ y^{(n-1)}(x_0) &= y_{n-1} \end{aligned}
Types of Solutions
- Initial Value Problem (IVP) – all conditions given at a single point x_0 (often with respect to time)
- Boundary Value Problem (BVP) – conditions given at two or more different points (often with respect to space)
Existence of Unique Solution
A unique solution exists if:
- All functions a_n(x), a_{n-1}(x), \ldots, a_0(x), g(x) are continuous on an interval I.
- a_n(x) \ne 0 for all x in the interval I.
Differential Operator
Define the differential operator:
D = \frac{d}{dx}, D^2 = \frac{d^2}{dx^2}, etc.
Then the general homogeneous differential equation becomes:
a_n(x) D^n(y) + a_{n-1}(x) D^{n-1}(y) + \cdots + a_1(x) D(y) + a_0(x) y = 0
This can be written as:
\left[ a_n(x) D^n + a_{n-1}(x) D^{n-1} + \cdots + a_1(x) D + a_0(x) \right] y = 0
Define:
L = a_n(x) D^n + a_{n-1}(x) D^{n-1} + \cdots + a_0(x)
So the equation becomes:
Ly = g(x)
Here, L is called the n^\text{th}-order differential operator or polynomial operator.
Second Order Differential Equations
To solve a second-order equation, two conditions are needed:
Initial Value Problem (IVP): y(x_0) = y_0, y'(x_0) = y_1
Boundary Value Problem (BVP): Examples of conditions (choose any two as required):
\begin{aligned} y(a) &= y_0, \quad y(b) = y_1 \\ y'(a) &= y_0, \quad y'(b) = y_1 \\ y(a) &= y_0, \quad y'(b) = y_1 \end{aligned}
Types of Solutions (for BVPs)
- Unique solution
- No solution
- Infinitely many solutions
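As a minimal numerical sketch of the "infinitely many solutions" case, consider the assumed example y'' + y = 0 with boundary conditions y(0) = 0 and y(\pi) = 0: the first condition forces c_1 = 0, and the second is then satisfied for every c_2.

```python
import math

# Assumed illustrative BVP: y'' + y = 0 with y(0) = 0, y(pi) = 0.
# General solution: y = c1*cos(x) + c2*sin(x). y(0) = 0 forces c1 = 0,
# and then y(pi) = c2*sin(pi) = 0 holds for EVERY c2 -> infinitely many solutions.
def y(c2, x):
    return c2 * math.sin(x)

for c2 in [0.0, 1.0, -7.3]:
    assert abs(y(c2, 0.0)) < 1e-12        # left boundary condition
    assert abs(y(c2, math.pi)) < 1e-12    # right boundary condition
```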
Homogeneous Equations
A linear differential equation is homogeneous if g(x) = 0, i.e., it can be written as Ly = 0.
If y = f(x) is a solution, then L[f(x)] = 0.
Also, for any scalar \alpha: L[\alpha f(x)] = \alpha L[f(x)] = \alpha \cdot 0 = 0
So any scalar multiple of a solution is also a solution.
Superposition Principle
The superposition principle states that if y_1, y_2, \ldots, y_k are solutions to a homogeneous linear differential equation, then any linear combination of these solutions, y = c_1 y_1 + c_2 y_2 + \cdots + c_k y_k, is also a solution, where c_1, c_2, \ldots, c_k are arbitrary constants.
L(c_1 y_1 + c_2 y_2 + \cdots + c_k y_k) = 0
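A quick numerical check of the superposition principle, using the assumed example y'' + 3y' + 2y = 0 with fundamental solutions y_1 = e^{-x} and y_2 = e^{-2x}:

```python
import math

# Assumed example: y'' + 3y' + 2y = 0 (auxiliary roots m = -1, -2),
# so y1 = e^{-x} and y2 = e^{-2x} are solutions. Any linear combination
# c1*y1 + c2*y2 should also satisfy the equation.
def residual(c1, c2, x):
    y   =  c1 * math.exp(-x) +     c2 * math.exp(-2 * x)
    yp  = -c1 * math.exp(-x) - 2 * c2 * math.exp(-2 * x)   # y'
    ypp =  c1 * math.exp(-x) + 4 * c2 * math.exp(-2 * x)   # y''
    return ypp + 3 * yp + 2 * y   # zero for every c1, c2, x

for c1, c2 in [(1.0, 0.0), (0.0, 1.0), (3.5, -2.25)]:
    for x in [-1.0, 0.0, 0.7, 2.0]:
        assert abs(residual(c1, c2, x)) < 1e-9
```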
Linear Dependence / Independence
- Linearly dependent solutions are not valid for forming the general solution.
- Only linearly independent solutions form a fundamental set.
To check dependence for two functions f_1(x) and f_2(x):
The functions are linearly dependent if there exist constants c_1, c_2, not both zero, such that c_1 f_1(x) + c_2 f_2(x) = 0 for all x in the interval. Equivalently, if the ratio \frac{f_1(x)}{f_2(x)} is constant on the interval, the functions are linearly dependent.
Wronskian Method
Given functions f_1, f_2, \ldots, f_n that are at least n-1 times differentiable, the Wronskian is:
W(f_1, f_2, \ldots, f_n) = \begin{vmatrix} f_1 & f_2 & \cdots & f_n \\ f_1' & f_2' & \cdots & f_n' \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)} & f_2^{(n-1)} & \cdots & f_n^{(n-1)} \end{vmatrix}
- If W(f_1, f_2, \ldots, f_n) \ne 0 for at least one point in the interval, the functions are linearly independent on that interval.
- If f_1, \ldots, f_n are solutions of a linear homogeneous differential equation (with continuous coefficients and nonzero leading coefficient) and W(f_1, f_2, \ldots, f_n) = 0 for all x in the interval, the solutions are linearly dependent. (For arbitrary functions, a vanishing Wronskian alone does not guarantee dependence.)
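As a sketch, the Wronskian of the (illustrative) pair e^{m_1 x}, e^{m_2 x} can be evaluated directly; it equals (m_2 - m_1)e^{(m_1+m_2)x}, which is nonzero whenever m_1 \ne m_2:

```python
import math

# Wronskian of f1 = e^{m1 x}, f2 = e^{m2 x} (illustrative choice):
#   W = | f1  f2 ; f1'  f2' | = f1*f2' - f2*f1' = (m2 - m1) * e^{(m1+m2) x}
def wronskian_exp(m1, m2, x):
    f1, f2 = math.exp(m1 * x), math.exp(m2 * x)
    return f1 * (m2 * f2) - f2 * (m1 * f1)

# Nonzero for m1 != m2 -> the exponentials are linearly independent.
x = 0.3
w = wronskian_exp(-1.0, -2.0, x)
assert abs(w - (-2.0 - (-1.0)) * math.exp(-3.0 * x)) < 1e-12
```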
Non-Homogeneous Equations
General form: Ly = g(x)
Associated homogeneous equation: Ly = 0
If y = f(x) is a particular solution, then L[f(x)] = g(x).
Note that L[\alpha f(x)] = \alpha g(x) \ne g(x) (unless \alpha=1) ⇒ Non-homogeneous equations are not scalable by arbitrary constants in the same way homogeneous equations are.
- Solution to the non-homogeneous equation: Particular solution (y_p)
- Solution to the homogeneous equation: Complementary solution (y_c)
So, the general solution of a non-homogeneous linear differential equation is the sum of its complementary solution and any particular solution:
y = y_c + y_p
Where y_c is the general solution to the associated homogeneous equation Ly=0, and y_p is any specific solution to Ly=g(x).
Superposition Principle in Non-Homogeneous Case
If g(x) can be written as a sum of functions, say g(x) = g_1(x) + g_2(x) + \dots + g_k(x), and y_{p_i} is a particular solution to Ly = g_i(x) for each i=1, \dots, k, then a particular solution to Ly = g(x) is the sum of these particular solutions:
y_p = y_{p_1} + y_{p_2} + \dots + y_{p_k}.
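A small numerical check of this superposition, under the assumed example Ly = y'' + y with g_1(x) = x and g_2(x) = e^x:

```python
import math

# Assumed example: y'' + y = x + e^x.
#   y_p1 = x       solves y'' + y = x     (0 + x = x)
#   y_p2 = e^x / 2 solves y'' + y = e^x   (e^x/2 + e^x/2 = e^x)
# so y_p = y_p1 + y_p2 should solve the combined equation.
def residual(x):
    y   = x + math.exp(x) / 2
    ypp = math.exp(x) / 2          # second derivative of y_p
    return ypp + y - (x + math.exp(x))

for x in [-1.0, 0.0, 2.0]:
    assert abs(residual(x)) < 1e-12
```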
Homogeneous Linear Equations with Constant Coefficients
General form: ay'' + by' + cy = 0
Method of Solution: Auxiliary Equation
Assume a solution of the form y = e^{mx}. Then y' = me^{mx} and y'' = m^2 e^{mx}.
Substituting these into the differential equation:
am^2 e^{mx} + bm e^{mx} + c e^{mx} = 0
Factor out e^{mx}:
e^{mx}(am^2 + bm + c) = 0
Since e^{mx} \ne 0, we must have:
am^2 + bm + c = 0
This is called the auxiliary equation (or characteristic equation). Solving this quadratic equation for m yields two roots, m_1 and m_2.
Types of Roots and Corresponding Solutions:
Real and Distinct Roots (m_1 \ne m_2): Discriminant \Delta = b^2 - 4ac > 0. The general solution is y = c_1 e^{m_1 x} + c_2 e^{m_2 x}.
Real and Equal Roots (m_1 = m_2 = m): Discriminant \Delta = b^2 - 4ac = 0. The general solution is y = c_1 e^{mx} + c_2 x e^{mx}.
Complex Conjugate Roots (m_1 = \alpha + i\beta, m_2 = \alpha - i\beta): Discriminant \Delta = b^2 - 4ac < 0. Using Euler’s Formula (e^{i\theta} = \cos\theta + i\sin\theta): The general solution is y = e^{\alpha x}(c_1 \cos(\beta x) + c_2 \sin(\beta x)).
Check/verify solutions after finding them.
Behavior of Solutions based on Roots:
- y = c_1 e^{m_1 x} + c_2 e^{m_2 x}: crosses zero at most once (setting c_1 e^{m_1 x} = -c_2 e^{m_2 x} requires the monotonic ratio e^{(m_1 - m_2)x} to equal a constant).
- y = c_1 e^{mx} + c_2 x e^{mx} = e^{mx}(c_1 + c_2 x): crosses zero at most once (at x = -c_1/c_2 when c_2 \ne 0).
- y = e^{\alpha x}(c_1 \cos(\beta x) + c_2 \sin(\beta x)): This solution oscillates.
- If \alpha < 0, it is a damped oscillation.
- If \alpha = 0, it is an undamped oscillation (occurs when the y' term is not present in the original equation, i.e., b=0).
For Equal Roots of Multiplicity R:
If an auxiliary equation has a root m with multiplicity R, then the corresponding linearly independent solutions are: e^{mx}, x e^{mx}, x^2 e^{mx}, \ldots, x^{R-1} e^{mx}.
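The root classification above can be sketched as a small helper (the function name and label strings are illustrative, not a standard API):

```python
import cmath

def auxiliary_roots(a, b, c):
    """Roots of the auxiliary equation a m^2 + b m + c = 0,
    plus the kind of general solution they produce."""
    disc = b * b - 4 * a * c
    m1 = (-b + cmath.sqrt(disc)) / (2 * a)
    m2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:
        kind = "real distinct"   # y = c1 e^{m1 x} + c2 e^{m2 x}
    elif disc == 0:
        kind = "real repeated"   # y = (c1 + c2 x) e^{m x}
    else:
        kind = "complex"         # y = e^{alpha x}(c1 cos(beta x) + c2 sin(beta x))
    return m1, m2, kind

# y'' - 3y' + 2y = 0  ->  m^2 - 3m + 2 = 0  ->  m = 2, 1 (real, distinct)
m1, m2, kind = auxiliary_roots(1, -3, 2)
```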
Undetermined Coefficients (Method)
This method is used to find a particular solution y_p for non-homogeneous linear differential equations with constant coefficients, Ly = g(x), where g(x) is a sum of terms of the following types: polynomials, exponentials, sines, and cosines.
Annihilator Approach:
A differential operator A is an annihilator for a function g(x) if A[g(x)] = 0. Constant-coefficient differential operators can be factored, and the order of the factors does not matter (they commute).
| Function g(x) | Annihilator A |
|---|---|
| c_0, c_1 x, \ldots, c_n x^n | D^{n+1} |
| c e^{\alpha x}, c x e^{\alpha x}, \ldots, c x^n e^{\alpha x} | (D - \alpha)^{n+1} |
| c x^k e^{\alpha x} \cos(\beta x), c x^k e^{\alpha x} \sin(\beta x) \ (k \le n) | (D^2 - 2\alpha D + \alpha^2 + \beta^2)^{n+1} |

(For n = 0, the last row covers e^{\alpha x}\cos(\beta x) and e^{\alpha x}\sin(\beta x) without a polynomial factor.)
If g(x) is a sum of functions, say g(x) = g_1(x) + g_2(x), and A_1 is the annihilator for g_1(x) and A_2 is the annihilator for g_2(x), then A_1 A_2 (or A_2 A_1) is an annihilator for g_1(x) + g_2(x).
This means that the product of annihilators annihilates a linear combination of functions. If L_1 annihilates y_1 and L_2 annihilates y_2, then since the operators commute:
L_1 L_2 (y_1 + y_2) = L_1 L_2 y_1 + L_1 L_2 y_2
L_1 L_2 y_1 = L_2 (L_1 y_1) = L_2 (0) = 0
L_1 L_2 y_2 = L_1 (L_2 y_2) = L_1 (0) = 0
Therefore, L_1 L_2 (y_1 + y_2) = 0.
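A numerical sanity check of two annihilators from the table, using finite-difference derivatives (the step size and test points are arbitrary assumptions):

```python
import math

# Approximate the operator D by a central difference (illustrative check only).
def ddx(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

# (D - a) annihilates e^{a x}:  f' - a f = 0
a = 1.5
f = lambda x: math.exp(a * x)
assert abs(ddx(f, 0.4) - a * f(0.4)) < 1e-6

# D^2 annihilates a degree-1 polynomial c0 + c1 x
g  = lambda x: 3.0 + 2.0 * x
gp = lambda x: ddx(g, x)            # D g (numerically)
assert abs(ddx(gp, 0.7)) < 1e-3     # D^2 g ~ 0
```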
Superposition Approach (Trial Solution Method):
Assume a particular solution y_p based on the form of g(x).
- For Polynomials: If g(x) is a polynomial of degree n, assume y_p is a general polynomial of degree n: A_n x^n + \dots + A_1 x + A_0.
- For Trigonometry: If g(x) contains \sin(kx) or \cos(kx), assume y_p = A \cos(kx) + B \sin(kx).
- For Exponentials: If g(x) contains e^{ax}, assume y_p = A e^{ax}.
- For Sums/Products:
- If g(x) is a sum of two or more terms (e.g., g_1(x) + g_2(x)), find y_{p_1} for g_1(x) and y_{p_2} for g_2(x) separately, then y_p = y_{p_1} + y_{p_2}.
- If g(x) is a product (e.g., x^k e^{ax} \cos(\beta x)), assume y_p is a product of the assumed forms for each factor.
Important Rule (the multiplication rule):
- If a term in the assumed y_p is already part of the complementary solution y_c, multiply the assumed y_p term by the lowest power of x (usually x or x^2) that eliminates the duplication.
- For example, if g(x) = C e^{ax} and e^{ax} is already in y_c, assume y_p = Ax e^{ax}. If x e^{ax} is also in y_c, assume y_p = Ax^2 e^{ax}.
Table of Suggested Particular Solutions y_p:
| g(x) (Term) | Form of y_p |
|---|---|
| c (constant) | A |
| c x^n | A_n x^n + \dots + A_1 x + A_0 |
| c e^{ax} | A e^{ax} |
| c \sin(kx) or c \cos(kx) | A \cos(kx) + B \sin(kx) |
| c x^n e^{ax} | (A_n x^n + \dots + A_0) e^{ax} |
| c x^n \cos(kx) or c x^n \sin(kx) | (A_n x^n + \dots + A_0) \cos(kx) + (B_n x^n + \dots + B_0) \sin(kx) |
| c e^{ax} \cos(kx) or c e^{ax} \sin(kx) | e^{ax}(A \cos(kx) + B \sin(kx)) |
| c x^n e^{ax} \cos(kx) or c x^n e^{ax} \sin(kx) | e^{ax}((A_n x^n + \dots + A_0) \cos(kx) + (B_n x^n + \dots + B_0) \sin(kx)) |
Steps to Solve Non-Homogeneous Equation using Undetermined Coefficients:
- Solve the associated homogeneous equation Ly = 0 to find the complementary solution y_c.
- Determine the form of the particular solution y_p based on g(x) and applying the multiplication rule if necessary (i.e., if terms in y_p duplicate terms in y_c).
- Calculate the derivatives of y_p (up to the order of the differential equation).
- Substitute y_p and its derivatives into the original non-homogeneous equation Ly = g(x).
- Equate the coefficients of like terms on both sides of the equation to form a system of linear equations for the undetermined coefficients.
- Solve the system of equations to find the values of the coefficients.
- Write the general solution as y = y_c + y_p.
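The steps above can be sketched on an assumed example, y'' - 3y' + 2y = 4x (so y_c = c_1 e^{x} + c_2 e^{2x}, and the trial solution is y_p = Ax + B):

```python
# Assumed worked example: y'' - 3y' + 2y = 4x, trial y_p = A*x + B.
# Substituting: 0 - 3A + 2(A*x + B) = 4x
#   -> x-terms:    2A = 4        => A = 2
#   -> constants: -3A + 2B = 0   => B = 3
A, B = 2.0, 3.0

def residual(x):
    y, yp, ypp = A * x + B, A, 0.0
    return ypp - 3 * yp + 2 * y - 4 * x   # should vanish identically

for x in [-2.0, 0.0, 1.5]:
    assert abs(residual(x)) < 1e-12
```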
Variation of Parameters
This is a general method to find a particular solution y_p for any non-homogeneous linear differential equation, even if the coefficients are not constant or g(x) is not one of the types suitable for Undetermined Coefficients.
Given the non-homogeneous second-order differential equation: a_2(x)y'' + a_1(x)y' + a_0(x)y = g(x)
- Convert to Standard Form: Divide by a_2(x) to get the equation in standard form: y'' + P(x)y' + Q(x)y = f(x), where f(x) = g(x)/a_2(x).
- Find the Complementary Solution: Solve the associated homogeneous equation y'' + P(x)y' + Q(x)y = 0 to find the complementary solution y_c = c_1 y_1(x) + c_2 y_2(x).
- Calculate the Wronskian W(y_1, y_2): W(y_1, y_2) = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix} = y_1 y_2' - y_2 y_1' Note: W must be non-zero for y_1, y_2 to be a fundamental set of solutions.
- Calculate W_1 and W_2:
W_1 = \begin{vmatrix} 0 & y_2 \\ f(x) & y_2' \end{vmatrix} = -y_2 f(x), \qquad W_2 = \begin{vmatrix} y_1 & 0 \\ y_1' & f(x) \end{vmatrix} = y_1 f(x)
- Find u_1' and u_2': Assume the particular solution has the form y_p = u_1(x)y_1(x) + u_2(x)y_2(x). Then:
u_1' = \frac{W_1}{W} = \frac{-y_2 f(x)}{W(y_1, y_2)}, \qquad u_2' = \frac{W_2}{W} = \frac{y_1 f(x)}{W(y_1, y_2)}
- Integrate to find u_1 and u_2:
u_1(x) = \int u_1'(x)\,dx, \qquad u_2(x) = \int u_2'(x)\,dx
Important: Do NOT introduce constants of integration here. These constants would simply replicate terms already present in y_c.
- Form the Particular Solution: y_p = u_1(x)y_1(x) + u_2(x)y_2(x)
- Write the General Solution: y = y_c + y_p
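A sketch of the method on a classic case it handles (and undetermined coefficients cannot): y'' + y = \tan x, with y_1 = \cos x, y_2 = \sin x, W = 1. Integrating u_1' = -\sin x \tan x and u_2' = \sin x gives y_p = -\cos x \ln|\sec x + \tan x| (the remaining terms cancel). The check below uses a finite-difference second derivative:

```python
import math

# Variation of parameters for y'' + y = tan x (classic textbook example):
#   y1 = cos x, y2 = sin x, W = 1
#   u1' = -y2*f/W = -sin(x)*tan(x),   u2' = y1*f/W = sin(x)
#   => y_p = -cos(x) * ln|sec(x) + tan(x)|
def y_p(x):
    sec = 1.0 / math.cos(x)
    return -math.cos(x) * math.log(abs(sec + math.tan(x)))

def residual(x, h=1e-5):
    # central-difference approximation of y_p''
    ypp = (y_p(x + h) - 2.0 * y_p(x) + y_p(x - h)) / (h * h)
    return ypp + y_p(x) - math.tan(x)

for x in [0.1, 0.5, 1.0]:
    assert abs(residual(x)) < 1e-4
```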
For n^\text{th}-order differential equations: the particular solution is y_p = u_1 y_1 + u_2 y_2 + \dots + u_n y_n. The Wronskian W is the n \times n determinant of the fundamental solutions and their derivatives. To find u_k', form W_k by replacing the k-th column of the Wronskian matrix with (0, 0, \ldots, 0, f(x))^T, where f(x) is the right-hand side in standard form; then u_k' = W_k / W.
Why we don’t introduce constants of integration with u_1, u_2:
If we included constants of integration, say u_1 = \int u_1'\,dx + C_1 and u_2 = \int u_2'\,dx + C_2, then:
y_p = \left(\int u_1'\,dx + C_1\right)y_1 + \left(\int u_2'\,dx + C_2\right)y_2 = \left(\int u_1'\,dx\right)y_1 + \left(\int u_2'\,dx\right)y_2 + C_1 y_1 + C_2 y_2
The terms C_1 y_1 + C_2 y_2 are already part of the complementary solution y_c. Since the general solution is y = y_c + y_p, these extra terms are redundant and would simply be absorbed into the arbitrary constants of y_c.
Cauchy-Euler Equation
General form: a_n x^n \frac{d^n y}{dx^n} + a_{n-1} x^{n-1} \frac{d^{n-1} y}{dx^{n-1}} + \cdots + a_1 x \frac{dy}{dx} + a_0 y = g(x)
This is a linear differential equation with variable coefficients; specifically, the power of x in each coefficient matches the order of the derivative it multiplies.
Method of Solution (Homogeneous Case):
For the homogeneous second-order Cauchy-Euler equation: ax^2 \frac{d^2 y}{dx^2} + bx \frac{dy}{dx} + cy = 0
- Assume a solution of the form y = x^m.
- Find the derivatives: y' = m x^{m-1}, \quad y'' = m(m-1) x^{m-2}
- Substitute into the differential equation:
a x^2 [m(m-1) x^{m-2}] + b x [m x^{m-1}] + c x^m = 0
a m(m-1) x^m + b m x^m + c x^m = 0
- Factor out x^m: x^m [a m(m-1) + b m + c] = 0
- Form the Auxiliary (Indicial) Equation: Since x^m \ne 0 (for the relevant interval x > 0), we must have:
a m(m-1) + b m + c = 0
am^2 - am + bm + c = 0
am^2 + (b-a)m + c = 0
This is a quadratic equation in m. Solve for m to find the roots m_1 and m_2.
Types of Roots and Corresponding Solutions (for Second Order):
Real and Distinct Roots (m_1 \ne m_2): The general solution is y = c_1 x^{m_1} + c_2 x^{m_2}.
Real and Equal Roots (m_1 = m_2 = m): The general solution is y = c_1 x^m + c_2 x^m \ln|x|.
Derivation of the second solution using Reduction of Order: Given y_1 = x^m, write the equation in standard form y'' + P(x)y' + Q(x)y = 0, so P(x) = \frac{bx}{ax^2} = \frac{b}{ax}. We use the shortcut formula:
y_2 = y_1(x) \int \frac{e^{-\int P(x)\,dx}}{[y_1(x)]^2}\,dx
First, -\int P(x)\,dx = -\int \frac{b}{ax}\,dx = -\frac{b}{a} \ln|x| = \ln\left(|x|^{-b/a}\right), so e^{-\int P(x)\,dx} = |x|^{-b/a}.
Then (taking x > 0):
y_2 = x^m \int \frac{x^{-b/a}}{(x^m)^2}\,dx = x^m \int x^{-b/a - 2m}\,dx
Since m = -\frac{b-a}{2a} for repeated roots, we have 2m = \frac{a-b}{a} = 1 - \frac{b}{a}, so -b/a - 2m = -b/a - 1 + b/a = -1. Therefore:
y_2 = x^m \int \frac{1}{x}\,dx = x^m \ln|x|
Thus, the general solution for repeated roots is y = c_1 x^m + c_2 x^m \ln|x|.
Complex Conjugate Roots (m_1 = \alpha + i\beta, m_2 = \alpha - i\beta): Write
x^m = x^{\alpha + i\beta} = x^\alpha x^{i\beta} = x^\alpha e^{i\beta \ln|x|} \quad (\text{since } x^A = e^{A \ln x} \text{ for } x > 0)
Applying Euler's Formula, e^{i\theta} = \cos\theta + i\sin\theta:
x^{i\beta} = \cos(\beta \ln|x|) + i\sin(\beta \ln|x|)
So x^m = x^\alpha [\cos(\beta \ln|x|) + i\sin(\beta \ln|x|)]. The two linearly independent real solutions are obtained from the real and imaginary parts:
y_1 = x^\alpha \cos(\beta \ln|x|), \qquad y_2 = x^\alpha \sin(\beta \ln|x|)
The general solution is y = x^\alpha [c_1 \cos(\beta \ln|x|) + c_2 \sin(\beta \ln|x|)].
For Repeated Roots of Multiplicity K:
If an auxiliary equation has a root m with multiplicity K, then the corresponding linearly independent solutions are: x^m, x^m \ln|x|, x^m (\ln|x|)^2, \ldots, x^m (\ln|x|)^{K-1}.
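A numerical sketch of the repeated-root case, using the assumed equation x^2 y'' - 3x y' + 4y = 0 (indicial equation m^2 - 4m + 4 = 0, repeated root m = 2):

```python
import math

# Assumed Cauchy-Euler example: x^2 y'' - 3x y' + 4y = 0.
# Indicial equation: m(m-1) - 3m + 4 = m^2 - 4m + 4 = 0, repeated root m = 2,
# so y = c1*x^2 + c2*x^2*ln(x)  (for x > 0).
def residual(c1, c2, x):
    lnx = math.log(x)
    y   = c1 * x**2 + c2 * x**2 * lnx
    yp  = 2 * c1 * x + c2 * (2 * x * lnx + x)      # y'
    ypp = 2 * c1 + c2 * (2 * lnx + 3)              # y''
    return x**2 * ypp - 3 * x * yp + 4 * y

for x in [0.5, 1.0, 4.0]:
    assert abs(residual(2.0, -1.5, x)) < 1e-9
```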
Reduction to Constant Coefficients:
For a Cauchy-Euler equation, the substitution x = e^t (so t = \ln x, for x > 0) transforms it into a linear differential equation with constant coefficients. Solve the transformed equation for y(t), then substitute t = \ln x back to obtain y(x).
Different Form of Cauchy-Euler Equation:
A generalized form of the Cauchy-Euler equation is: a(x-x_0)^2 \frac{d^2 y}{dx^2} + b(x-x_0) \frac{dy}{dx} + c y = 0
Method of Solution:
- Substitution: Let t = x - x_0, so dt = dx. The derivatives transform as:
\frac{dy}{dx} = \frac{dy}{dt}\frac{dt}{dx} = \frac{dy}{dt}, \qquad \frac{d^2 y}{dx^2} = \frac{d}{dt}\left(\frac{dy}{dt}\right)\frac{dt}{dx} = \frac{d^2 y}{dt^2}
The equation becomes a t^2 \frac{d^2 y}{dt^2} + b t \frac{dy}{dt} + c y = 0, which is a standard Cauchy-Euler equation in terms of t.
- Directly solve: Assume a solution of the form y = (x-x_0)^m. Then:
y' = m(x-x_0)^{m-1}, \qquad y'' = m(m-1)(x-x_0)^{m-2}
Substituting these into the equation gives the auxiliary equation am(m-1) + bm + c = 0, just like the standard form. The solutions will then be in terms of (x-x_0).
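A minimal check of the shifted form, with the assumed equation (x-1)^2 y'' - (x-1) y' = 0 (so a = 1, b = -1, c = 0; indicial roots m = 0, 2, giving y = c_1 + c_2 (x-1)^2):

```python
# Assumed shifted Cauchy-Euler example: (x-1)^2 y'' - (x-1) y' + 0*y = 0.
# With y = (x-1)^m: m(m-1) - m = m^2 - 2m = 0 -> m = 0, 2,
# so y = c1 + c2*(x-1)^2.
def residual(c1, c2, x):
    t   = x - 1.0
    y   = c1 + c2 * t**2        # candidate solution
    yp  = 2.0 * c2 * t          # y'
    ypp = 2.0 * c2              # y''
    return t**2 * ypp - t * yp  # c = 0, so no y term appears

for x in [0.0, 1.5, 3.0]:
    assert abs(residual(4.0, -2.0, x)) < 1e-12
```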