Ordinary Differential Equations

General form of an ODE

$$F(t, y(t), y^{(1)}(t), y^{(2)}(t), \ldots, y^{(n)}(t)) = 0$$

where $y^{(i)}(t)$ is the $i$th derivative of $y(t)$

The highest order of the derivatives is called the order of the ordinary differential equation.

A closed-form is often unavailable in practice!

Instead we may know a relationship between the variable y and its derivative y′, given by a known function f:

$$y'(t) = f(t, y(t))$$

For example, the speed v(t) of a car changes based on the forces applied to it:

$$v'(t) = F(t)/m$$

Initial Value Problem

The general form is a differential equation

$$y'(t) = f(t, y(t))$$

where f is specified, and the initial values are

y(t0)=y0
There are two ways that our ODE problems will become more complicated:

  1. By involving more than one unknown variable (not just y): Systems of differential equations
  2. By involving second, third, or higher derivatives (not just y′): Higher order differential equations

Systems of Differential Equations

Consider a model with multiple variables that interact:

Higher Order Differential Equations

To start, we consider 1st order ODEs, involving only the first derivative:

$$y'(t) = f(t, y(t))$$

Higher derivatives occur often in real problems. The order is the highest derivative in the equation:

$$y^{(n)}(t) = f(t, y(t), y'(t), y''(t), \ldots, y^{(n-1)}(t))$$

Numerical Schemes for ODEs

Numerical solution is a discrete set of time/value pairs, (ti,yi)

What will an approximate solution look like?

  • yi should approximate the true value, y(ti)

Time-Stepping

Given initial conditions, we repeatedly step sequentially forward to the next time instant, using the derivative info, $y'$, and a timestep, h.
Set n=0,t=t0,y=y0:

  1. Compute yn+1
  2. Increment time, tn+1=tn+h
  3. Advance, n=n+1
  4. Repeat
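The loop above can be sketched as a generic driver, where the per-step update is a plug-in; names here are illustrative, not from the notes:

```python
# Generic time-stepping driver. 'step' computes y_{n+1} from (t_n, y_n, h),
# so the same loop works for any single-step scheme.
def time_step(f, step, t0, y0, h, n_steps):
    t, y = t0, y0
    history = [(t, y)]
    for _ in range(n_steps):
        y = step(f, t, y, h)    # 1. compute y_{n+1}
        t = t + h               # 2. increment time
        history.append((t, y))  # 3./4. advance and repeat
    return history
```

For example, passing `step = lambda f, t, y, h: y + h * f(t, y)` reproduces the forward Euler scheme discussed below.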

Categorizing time-stepping methods

Single-step vs Multistep:

Trade-offs

Explicit:

Time-Stepping as a Recurrence

Time-stepping applies a recurrence relation to approximate the function values at later and later times

Time-Stepping as a “Time Integration”

Time-stepping is also known as “time-integration”:

Error of Time-Stepping

Absolute / global error at step n is $|y_n - y(t_n)|$

Forward Euler

Explicit, single-step scheme

  1. Compute the current slope: $y_n' = f(t_n, y_n)$
  2. Step in a straight line with that slope: $y_{n+1} = y_n + h y_n'$

$$y_{n+1} = y_n + h f(t_n, y_n)$$
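A minimal sketch of forward Euler in Python; the test problem y′ = y is my own illustration:

```python
# Forward Euler: y_{n+1} = y_n + h * f(t_n, y_n)
def forward_euler(f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)  # step in a straight line along the current slope
        t = t + h
    return y

# y' = y, y(0) = 1 has exact solution e^t, so y(1) ≈ 2.71828
approx = forward_euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
```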

System of Equations

For systems of 1st order ODEs, we apply Forward Euler to each row in exactly the same way.
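As a sketch, the componentwise update for a system, using a hypothetical harmonic-oscillator example (y1′ = y2, y2′ = −y1, exact solution y1 = cos t):

```python
# Forward Euler for a system y' = f(t, y): apply the same update to each row.
def forward_euler_system(f, t0, y0, h, n_steps):
    t, y = t0, list(y0)
    for _ in range(n_steps):
        dydt = f(t, y)
        y = [yi + h * di for yi, di in zip(y, dydt)]  # one FE step per component
        t = t + h
    return y

# y1' = y2, y2' = -y1 with y(0) = [1, 0]; exact y1(t) = cos(t)
y_end = forward_euler_system(lambda t, y: [y[1], -y[0]], 0.0, [1.0, 0.0], 0.001, 1000)
```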

Error of Forward Euler Method

F.E. is $$y_{n+1} = y_n + hf(t_n, y_n)$$
Taylor series is:

$$y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{h^2}{2!} y''(t_n) + \frac{h^3}{3!} y'''(t_n) + \cdots$$

The error for one step is the difference between these, assuming exact data at time tn

$$y_{n+1} = y(t_n) + h y'(t_n)$$

The difference is:

$$y_{n+1} - y(t_{n+1}) = -\frac{h^2}{2!} y''(t_n) + O(h^3) = O(h^2)$$

This is the Local Truncation Error (LTE) of Forward Euler!

More accurate time-stepping

Keeping more terms in the series will help derive higher order schemes!

We don’t know $y''$!

Use a finite difference to approximate $y''$:

$$y''(t_n) = \frac{y'(t_{n+1}) - y'(t_n)}{h} + O(h)$$

Trapezoidal Rule

Trapezoidal Scheme is:

$$y_{n+1} = y_n + \frac{h}{2}\left[ f(t_{n+1}, y_{n+1}) + f(t_n, y_n) \right]$$

The Local Truncation Error for the trapezoidal rule is $O(h^3)$

This is an implicit scheme

Can we make it explicit?

Modified / Improved Euler

  1. Take a forward Euler step to estimate the end point
  2. Evaluate the slope there
  3. Use this approximate end-of-step slope in the trapezoidal formula
    This looks like:
$$\tilde{y}_{n+1} = y_n + h f(t_n, y_n)$$
$$y_{n+1} = y_n + \frac{h}{2}\left[ f(t_n, y_n) + f(t_{n+1}, \tilde{y}_{n+1}) \right]$$
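A sketch of the predictor-corrector steps above; the test problem is my own:

```python
# Improved Euler: forward Euler predictor, then trapezoidal corrector.
def improved_euler(f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        s1 = f(t, y)                   # slope at the start of the step
        y_pred = y + h * s1            # 1. forward Euler estimate of the end point
        s2 = f(t + h, y_pred)          # 2. slope at the estimated end point
        y = y + (h / 2.0) * (s1 + s2)  # 3. trapezoidal average of the two slopes
        t = t + h
    return y

approx = improved_euler(lambda t, y: y, 0.0, 1.0, 0.01, 100)  # exact: e ≈ 2.71828
```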

Backwards Euler Method

Implicit Scheme

$$y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})$$

Its LTE is O(h2), like forward Euler
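Backward Euler needs a solve for $y_{n+1}$ at each step. A minimal sketch using fixed-point iteration (a real implementation would typically use Newton's method; the test problem is my own):

```python
# Backward Euler: y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}), solved by fixed-point
# iteration, which converges here because |h * df/dy| < 1 for this problem.
def backward_euler(f, t0, y0, h, n_steps, iters=50):
    t, y = t0, y0
    for _ in range(n_steps):
        t_next = t + h
        y_next = y                              # initial guess: previous value
        for _ in range(iters):
            y_next = y + h * f(t_next, y_next)  # fixed-point update
        t, y = t_next, y_next
    return y

# y' = -5y, y(0) = 1: each implicit step gives y_{n+1} = y_n / (1 + 5h)
approx = backward_euler(lambda t, y: -5.0 * y, 0.0, 1.0, 0.1, 10)
```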

Explicit “Runge Kutta” schemes

A family of explicit schemes (including improved Euler) written as:
$$k_1 = h f(t_n, y_n)$$
$$k_2 = h f(t_n + h, y_n + k_1)$$
$$y_{n+1} = y_n + \frac{k_1}{2} + \frac{k_2}{2}$$

4th Order Runge Kutta

LTE of $O(h^5)$
Similar schemes exist for various orders, $O(h^\alpha)$ for $\alpha = 3, 4, 5, 6, \ldots$
$$k_1 = h f(t_n, y_n)$$
$$k_2 = h f(t_n + \tfrac{h}{2}, y_n + \tfrac{k_1}{2})$$
$$k_3 = h f(t_n + \tfrac{h}{2}, y_n + \tfrac{k_2}{2})$$
$$k_4 = h f(t_n + h, y_n + k_3)$$
$$y_{n+1} = y_n + \tfrac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$$
Again, evaluate $y'(t) = f(t, y)$ at various intermediate positions, and take a specific combination to find $y_{n+1}$
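The four stages above, as a sketch in Python; the test problem is my own:

```python
# Classical 4th-order Runge-Kutta: four slope evaluations per step,
# combined as (k1 + 2*k2 + 2*k3 + k4) / 6.
def rk4(f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2.0, y + k1 / 2.0)
        k3 = h * f(t + h / 2.0, y + k2 / 2.0)
        k4 = h * f(t + h, y + k3)
        y = y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t = t + h
    return y

approx = rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10)  # y' = y: exact answer is e
```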

Midpoint Method

Another explicit Runge Kutta scheme with LTE O(h3)
$$k_1 = h f(t_n, y_n)$$
$$k_2 = h f(t_n + \tfrac{h}{2}, y_n + \tfrac{k_1}{2})$$
$$y_{n+1} = y_n + k_2$$
Equivalent Expression
$$y_{n+1/2} = y_n + \tfrac{h}{2} f(t_n, y_n)$$
$$y_{n+1} = y_n + h f(t_n + \tfrac{h}{2}, y_{n+1/2})$$

Local vs Global Error

The Global Error is the total error accumulated at the final time tfinal

$$\#\text{steps} = \frac{t_{final} - t_0}{h} = O(h^{-1})$$

Then, for a given method we have:

$$(\text{Global Error}) \approx (\text{Local Error}) \times O(h^{-1})$$

i.e. one degree lower than LTE

Multi-step Schemes

One way to derive them is by fitting curves to current and earlier data points, for better slope estimates.
E.g. fit a quadratic to previous, current, and next step

Backward Differentiation Formulas Methods (BDF)

BDF1, BDF2, BDF3, etc. extend backward Euler,

$$y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})$$

to higher order using earlier step info.

BDF2 uses current and previous step data:
$$y_{n+1} = \frac{4}{3} y_n - \frac{1}{3} y_{n-1} + \frac{2}{3} h f(t_{n+1}, y_{n+1})$$

BDF2 has LTE $O(h^3)$ and is a multi-step scheme

General Form

The same approach extends to higher orders, yielding implicit schemes with the general form

$$\sum_{k=-s+1}^{1} a_k y_{n+k} = h f(t_{n+1}, y_{n+1})$$

with $a_k$ being the coefficients.

Explicit Multistep Scheme

Adams-Bashforth

e.g. 2nd order:

$$y_{n+1} = y_n + \frac{3}{2} h f(t_n, y_n) - \frac{1}{2} h f(t_{n-1}, y_{n-1})$$

LTE is O(h3)
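A sketch of AB2; the first step has no history, so it is bootstrapped with forward Euler here (one common choice):

```python
# 2nd-order Adams-Bashforth: uses slopes from the current and previous steps.
def adams_bashforth2(f, t0, y0, h, n_steps):
    t_prev, y_prev = t0, y0
    t, y = t0 + h, y0 + h * f(t0, y0)  # bootstrap: one forward Euler step
    for _ in range(n_steps - 1):
        y_new = y + h * (1.5 * f(t, y) - 0.5 * f(t_prev, y_prev))
        t_prev, y_prev = t, y
        t, y = t + h, y_new
    return y

approx = adams_bashforth2(lambda t, y: y, 0.0, 1.0, 0.01, 100)  # exact: e
```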

Higher Order ODEs

The order of the ODE is the highest derivative that appears

Converting to First Order

The general form for such a higher ODE looks like

$$y^{(n)} = f(t, y(t), y^{(1)}(t), y^{(2)}(t), \ldots, y^{(n-1)}(t))$$

We can convert them to systems of first order ODEs

$$y_i = y^{(i-1)}$$

for i=1 to n, so each derivative has a corresponding new variable
Substituting the new variables into the original ODE leads to:

  1. One first order equation for each original equation
  2. One or more additional equations relating the new variables
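As a sketch, converting the hypothetical 2nd-order ODE y″ = −y (with y(0) = 1, y′(0) = 0, exact solution cos t) using y1 = y, y2 = y′:

```python
# y'' = -y becomes the first-order system  y1' = y2,  y2' = -y1.
def as_first_order_system(t, y):
    y1, y2 = y
    return [y2, -y1]  # [y1', y2']

# integrate with forward Euler to check y(t) tracks cos(t)
y = [1.0, 0.0]  # [y(0), y'(0)]
t, h = 0.0, 0.001
for _ in range(1000):
    dy = as_first_order_system(t, y)
    y = [y[0] + h * dy[0], y[1] + h * dy[1]]
    t += h
```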

Stability of Time-Stepping Schemes

Since errors are generally O(hp) for some p, our schemes are less accurate for large h.

But, error/perturbations in initial conditions may lead to vastly different or incorrect answers.

$$y'(t) = f(t, y(t)), \quad y(0) = y_0 + \epsilon_0$$

If some initial error ϵ0 grows exponentially for many steps, our time-stepping scheme is unstable.

Test Equation

We’ll consider a simple linear ODE, our “test equation”,

$$y'(t) = -\lambda y(t), \quad y(0) = y_0$$

for constant λ>0.
The exact solution is $y(t) = y_0 e^{-\lambda t}$

Our Approach

  1. Apply a given time stepping scheme to our test equation
  2. Find the closed form of its numerical solution and error behaviour
  3. Find the conditions on the timestep h that ensure stability (error approaching zero)

Stability of Forward Euler

It is conditionally stable when $0 < h < \frac{2}{\lambda}$
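This can be checked numerically: on the test equation each forward Euler step multiplies y by $(1 - h\lambda)$, so the iterates decay only when $|1 - h\lambda| < 1$. A quick sketch (λ = 10 is my own choice):

```python
# Forward Euler on y' = -lam * y: each step is y *= (1 - h * lam).
def fe_magnitude(lam, h, n_steps):
    y = 1.0
    for _ in range(n_steps):
        y = y + h * (-lam * y)  # one forward Euler step
    return abs(y)

stable   = fe_magnitude(10.0, 0.15, 100)  # h < 2/lam = 0.2: decays
unstable = fe_magnitude(10.0, 0.25, 100)  # h > 2/lam: grows without bound
```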

Stability of Backward/Implicit Euler

It is unconditionally stable

Stability does not imply accuracy!

Large time steps still usually induce Large truncation error

Stability of Improved Euler

It is conditionally stable when $h < \frac{2}{\lambda}$

Stability in general

For our linear test equation $y' = -\lambda y$, forward Euler gave a bound related to

$$\lambda = \left| \frac{\partial}{\partial y}(-\lambda y) \right| = \left| \frac{\partial f}{\partial y} \right|$$

For nonlinear problems, stability depends on $\frac{\partial f}{\partial y}$ (for dynamics function f) evaluated at a given point in time/space
For systems of ODEs, stability relates to the eigenvalues of the “Jacobian” matrix

$$\frac{\partial(f_1, f_2, f_3, \ldots, f_n)}{\partial(y_1, y_2, y_3, \ldots, y_n)}$$

Truncation Error vs Stability

Stability tells us what our numerical algorithm itself does to small errors
Truncation error tells us how the accuracy of our numerical solution scales with time step h

Both factors should be considered to get a useful solution

Determining LTE

  1. Replace approximations on RHS with exact versions
  2. Taylor expand all RHS quantities about time tn
  3. Taylor expand the exact solution y(tn+1) to compare against
  4. Compute difference y(tn+1)yn+1. Lowest degree non-cancelling power of h gives the LTE

Time Step Control

Trade-offs

Smaller h: many steps, computationally expensive, more accurate
Larger h: fewer steps, so cheaper, but also less accurate
Minimize effort/cost by choosing largest h with error still less than a desired tolerance:

error < tolerance

Adaptive Time-Stepping

Rate of change of the solution often varies over time!

Problem

If we knew the error for a given h, we could choose a good h to always satisfy error < tolerance

… but if we already knew the exact error, we’d also know the solution

Approach

Use two different time-stepping schemes together. Compare their results to estimate the error, and adjust h accordingly

Adaptive Time-Stepping using Order p and Order p+1 methods

Idea:

  1. Run 2 methods simultaneously with different truncation error orders
  2. Approximate the error as $err = |y_{n+1}^A - y_{n+1}^B|$, where $y_{n+1}^A$ has LTE $O(h^p)$ and $y_{n+1}^B$ has LTE $O(h^{p+1})$
  3. If err > threshold, reduce h (halve it) and recompute the step again, until the threshold is satisfied
  4. Estimate the error coefficient c as

$$c \approx \frac{|y_{n+1}^A - y_{n+1}^B|}{h_{old}^p}$$

where hold is the most recent time step size. If c changes slowly in time, we can estimate the next step error as:

$$err_{next} = |y_{n+2}^A - y(t_{n+2})| \approx c\,(h_{new})^p$$

plug in c:

$$err_{next} \approx \frac{|y_{n+1}^A - y_{n+1}^B|}{h_{old}^p}\,(h_{new})^p = \left(\frac{h_{new}}{h_{old}}\right)^p |y_{n+1}^A - y_{n+1}^B|$$

Given a desired error tolerance tol, set errnext=tol and solve for hnew:

$$h_{new} = h_{old}\left(\frac{tol}{|y_{n+1}^A - y_{n+1}^B|}\right)^{1/p}$$

To (roughly) compensate for our approximations, we may be conservative by scaling tol down by some factor α, e.g. α=1/2, or α=3/4. This will allow step size to grow larger again when it is likely safe to do so.
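The whole procedure can be sketched with a forward Euler / improved Euler pair (so p = 2, the LTE order of the lower method); the function names and test problem are my own:

```python
# One adaptive step: estimate the error from the difference between a
# forward Euler step (LTE O(h^2)) and an improved Euler step (LTE O(h^3)),
# halving h until the estimate is below tol, then propose h_new.
def adaptive_step(f, t, y, h, tol, alpha=0.5, p=2):
    while True:
        s1 = f(t, y)
        y_a = y + h * s1                            # forward Euler (method A)
        y_b = y + (h / 2.0) * (s1 + f(t + h, y_a))  # improved Euler (method B)
        err = abs(y_a - y_b)
        if err <= tol:
            # h_new = h_old * (alpha * tol / err)^(1/p), with tol scaled by alpha
            h_new = h * (alpha * tol / max(err, 1e-300)) ** (1.0 / p)
            return t + h, y_b, h_new
        h = h / 2.0  # error too large: halve the step and retry

# integrate y' = -y from t = 0 to 1 (exact: e^{-1} ≈ 0.3679)
t, y, h = 0.0, 1.0, 0.5
while t < 1.0:
    h = min(h, 1.0 - t)  # don't overshoot the final time
    t, y, h = adaptive_step(lambda tt, yy: -yy, t, y, h, tol=1e-4)
```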