In this blog we will explore the basic building block of such simulations: the numerical integrator. First we go through the common Runge-Kutta and Adams-Moulton integrators and then use them to simulate a 3-phase LC filter in VHDL. To verify the model, we simulate a response and compare it against a QSPICE simulation of the same circuit. You can find the latest version of these models and integrators in the hVHDL_ode repository.
When we design systems that interact with the real world, we very commonly need to simulate some external phenomenon. To change the speed, position, angle, temperature or voltage of some energy-storing element, we need to account for the time it takes for the state of the system to change. The state refers to the instantaneous values of the dynamical variables: if we are simulating an electrical circuit, the states are commonly chosen as the voltages of the capacitors and the currents of the inductors. For mechanical systems, they might be the angular speed and angle, or the position and velocity of masses. Regardless of the type of the states, this kind of interaction is modeled with differential equations and solved using numerical solvers for the differential equations.
Numerical solvers in dynamic simulations
Coding models and using an integrator to simulate them is a straightforward way to develop and test feedback and signal processing systems. If we are developing filters and control algorithms in VHDL, we greatly simplify code development by writing the simulation model in VHDL as well. This way our test routine can run in a single language, be launched from a single test script and run as part of a single test suite.
Additionally, understanding how the solvers work and how they should be used is really useful when developing simulations. Countless hours are wasted on simulations that could be sped up by orders of magnitude by customizing and tuning the model and its solver to fit the work. It is also really fun to create simulations and verify them against measurements or reference models!
The VHDL simulations are run with the NVC simulator and plotted using Python. The repository contains two helper Python scripts, from_vunit_export.py and plot_data.py, which create the vhdl_ls.toml file for the language server protocol and plot the data produced by the VHDL simulations. The simulation builds and runs are managed with VUnit.
Numerical Integration
The simplest numerical integration method, which is also the base for all other common algorithms, is the first order or Euler integration method (1). The basic concept of an integrator is that we accumulate the value of the differential equation over a step h. So the simplest first order integration method is
Using typical symbols, we refer to the right side of the differential equation as f(tn,yn), h refers to the step size, i.e. the time increment over which the integration is done, yn is the resulting integrated state vector and k1 refers to the calculated steps taken by the integration algorithm. Using this notation the first order Runge-Kutta from (1) is rewritten as (2) – (3). This is how we present all of the remaining solvers as well.
Assuming that f(tn,yn)=1, i.e. just a plain constant, and our step size is h=1/3, then with each iteration we increment the integrator by 1/3 and on the 3rd step we have reached 1.
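As a cross-check of this arithmetic, the first order update can be sketched in a few lines of Python (the article's solvers are in VHDL; this is only an illustration):

```python
# First order (Euler) update: y_next = y + h * f(t, y)
def euler_step(f, t, y, h):
    return y + h * f(t, y)

f = lambda t, y: 1.0           # plain constant derivative
y, t, h = 0.0, 0.0, 1.0 / 3.0  # step size of one third

for _ in range(3):             # three iterations of the integrator
    y = euler_step(f, t, y, h)
    t += h
# after three increments of 1/3 the state has reached (approximately) 1.0
```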
The problem with the simplest method is that it lacks both stability and accuracy for anything but the simplest of differential equations. What we can do is increase the order of the integration by using the results of multiple evaluations of the differential equation which we are simulating. The first order integration method is thus the basis for all of the rest of the methods.
Higher order integration
There are two main categories of basic higher order numerical integrators, the Runge-Kutta and Adams-Moulton families of integration methods. The idea behind both of these is that we increase the number of points at which we evaluate the differential equation for each integration step, and the individual points are then combined for the final state update.
Runge-Kutta integration
Runge-Kutta methods achieve the order increase by making small sub-steps within each integration step h. Taking sub-steps means that in each integration step we actually evaluate the differential equation multiple times and make intermediate integrations between the evaluations. The intermediate small steps are only used for calculating the final step as a linear combination of the taken sub-steps, and are then discarded.
Orders up to 8 are in common use. The most common are probably the 2nd and 4th order Runge-Kutta algorithms, which are shown below.
Although the notation looks a bit complicated, what actually happens is that the first order integrator is called multiple times. So we first calculate k1 from yn, then update the values and integrate to k2 and so forth. When the last step is calculated, we use a weighted sum of the intermediate steps to get the next state vector.
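The same mechanics in a hypothetical Python sketch of the 2nd order (Heun) variant, using the same f(t, y) interface as above:

```python
def rk2_step(f, t, y, h):
    k1 = f(t, y)                    # first sub-step: slope at the current point
    k2 = f(t + h, y + h * k1)       # second sub-step: slope at the Euler end point
    return y + h * (k1 + k2) / 2.0  # weighted sum of the sub-steps

# One step of dy/dt = -y from y = 1 with h = 0.1:
# k1 = -1.0, k2 = -0.9, so y1 = 1 + 0.1 * (-1.9) / 2 = 0.905 (exact: e^-0.1 = 0.9048...)
y1 = rk2_step(lambda t, y: -y, 0.0, 1.0, 0.1)
```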
An interesting property of Runge-Kutta methods is that any order higher than 4 requires more evaluations of the differential equation than the order of the integrator. For example, 5th order requires at least 6 evaluations and 8th order at least 11. The 4th order Runge-Kutta is the reliable workhorse of integration methods, as it is the highest order for which the number of function evaluations still equals the order.
It might seem wasteful to evaluate the equation twice in the 2nd order method, or four times in the 4th order, since we could also just run the first order method at half or quarter step size. However, the magic lies in the order of the error: as the order increases, the error decreases much, much faster than by just adding calculation points to 1st order integration. It is very often the case that the step size can be increased by an order of magnitude for equal accuracy when moving up in integration order, hence it is much faster to increase both the order and the step size than it is to just decrease the step size.
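This claim is easy to check numerically. A small Python experiment, assuming the test equation dy/dt = -y over [0, 1] and giving both methods the same budget of 400 function evaluations:

```python
import math

def euler(f, y, h, n):
    t = 0.0
    for _ in range(n):           # one evaluation of f per step
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y, h, n):
    t = 0.0
    for _ in range(n):           # four evaluations of f per step
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

f = lambda t, y: -y
exact = math.exp(-1.0)
# 400 evaluations each: Euler takes 400 steps, RK4 takes 100 steps of 4 evaluations.
euler_err = abs(euler(f, 1.0, 1.0 / 400, 400) - exact)
rk4_err = abs(rk4(f, 1.0, 1.0 / 100, 100) - exact)
# RK4 is many orders of magnitude more accurate for the same amount of work.
```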
Adams-Moulton multi-step Integration
Where Runge-Kutta methods take small steps that are discarded upon arriving at the next integration point, multi-step methods preserve the previous steps between evaluations. The most significant difference is that the derivative function f(tn,yn) is evaluated only once regardless of the order of integration. This can be seen in the implementations, of which orders 2 and 4 are shown below.
With the function evaluation only being done once, the multi-step methods allow a high order of integration with minimal increase in computational effort. Especially the 2nd order A-M integration method is really interesting, since it requires practically no extra effort compared to the first order but achieves greatly improved performance.
Due to the memory of the integration, when there is a change in the input variables of the differential equation, it takes a few steps for the input to propagate through the integrator states. Hence we pay for the reduced computation cost with a slightly delayed transient behavior of our integration routine.
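The memory can be made concrete with a Python sketch of a 4th order multi-step update using the same coefficients (55, -59, 37, -9 over 24) as the VHDL procedure in this article; seeding the history with the initial derivative is an assumption of this sketch:

```python
def multistep4_step(f, t, y, h, k):
    # Shift the stored derivative history; f is evaluated only once per step.
    k[3], k[2], k[1] = k[2], k[1], k[0]
    k[0] = f(t, y)
    return y + h * (55 * k[0] - 59 * k[1] + 37 * k[2] - 9 * k[3]) / 24

# On a constant derivative f = 1 every history entry is equal, the
# coefficients sum to 24/24 = 1, and each step advances y by exactly h.
f = lambda t, y: 1.0
k = [f(0.0, 0.0)] * 4   # seed the derivative history (assumption of this sketch)
y, t, h = 0.0, 0.0, 0.25
for _ in range(4):
    y = multistep4_step(f, t, y, h, k)
    t += h
# y is approximately 1.0 after four steps of 0.25
```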
High order integrators in VHDL with generic procedures
Creating any of these basic integrators in VHDL is very simple. We just write down the differential equation for what we want to integrate and use it to update the state vector. We will design the integrator routines to be usable with any ordinary differential equation that we would like to try them out with, hence we use VHDL-2008 generic procedures. You can find the latest version of these in the hVHDL_ode repository.
The state of the system is stored as a real_vector. The package real_vector_pkg provides the “+”, “-”, “*” and “/” operators, so that a real_vector can be used as the type which holds the states of the differential equation in the impure function deriv.
The first order Runge-Kutta is shown below. The differential equation is given as a function generic which takes in the current time and the state vector and returns the change of the state. Using generics we can give any differential equation to the solver, and it can be of any order since the real_vector can be as long as needed.
procedure generic_rk1
    generic(impure function deriv (t : real; input : real_vector) return real_vector is <>)
    (
        t        : real;
        state    : inout real_vector;
        stepsize : real
    ) is
begin
    state := state + deriv(t, state)*stepsize;
end generic_rk1;
Using this method, we can pretty much directly write the equations in VHDL as given above. Here the Runge-Kutta and Adams-Moulton 4th order solvers are shown.
------------------------------------------
procedure generic_rk4
    generic(impure function deriv (t : real; input : real_vector) return real_vector is <>)
    (
        t        : real;
        state    : inout real_vector;
        stepsize : real
    ) is
    type state_array is array(1 to 4) of real_vector(state'range);
    variable k : state_array;
begin
    k(1) := deriv(t, state);
    k(2) := deriv(t + stepsize/2.0, state + k(1) * stepsize/2.0);
    k(3) := deriv(t + stepsize/2.0, state + k(2) * stepsize/2.0);
    k(4) := deriv(t + stepsize,     state + k(3) * stepsize);

    state := state + (k(1) + k(2)*2.0 + k(3)*2.0 + k(4)) * stepsize/6.0;
end generic_rk4;
------------------------------------------
------------------------------------------
procedure am4_generic
    generic(impure function deriv (t : real; input : real_vector) return real_vector is <>)
    (
        t : real;
        variable adams_steps : inout am_state_array;
        variable state       : inout real_vector;
        stepsize : real
    ) is
    alias k is adams_steps;
begin
    k(4) := k(3);
    k(3) := k(2);
    k(2) := k(1);
    k(1) := deriv(t, state);

    state := state + (k(1)*55.0 - k(2)*59.0 + k(3)*37.0 - k(4)*9.0) * stepsize/24.0;
end am4_generic;
Next we will use all of the presented methods to simulate a 6th order state equation.
Integrating a 3-phase LC filter with our integrators
We have done some modeling of LCR filters before, but we’ll quickly go over the basic procedure here. The inductor current and capacitor voltage differential equations are given below.
To model a simple LC filter with these equations, we simply collect the voltages on the left and right side of the inductor and subtract them to get the voltage over the inductor, as indicated in the schematic below.
Collecting these into a set of equations, also known as state-space equations, we get the dynamic model of an LC filter. The simplicity here is that we do not need to calculate anything; just collecting the information directly from the schematic is enough.
Following the markings in the schematic above, the left side voltage is uin and the right side voltage is the sum of the resistor and capacitor voltage drops.
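Read off the schematic this way, the single-phase derivative function is only a few lines. A Python sketch with hypothetical component values (the article's actual model is the 3-phase VHDL function shown later):

```python
def deriv_lc(il, uc, uin, i_load, l, c, r):
    # Inductor voltage: input minus capacitor voltage minus resistive drop.
    ul = uin - uc - il * r
    # State derivatives: di/dt = u_L / L and du/dt = i_C / C.
    dil = ul / l
    duc = (il - i_load) / c
    return dil, duc

# Hypothetical operating point: 1 mH, 1 uF, 0.1 ohm, 1 A load current.
dil, duc = deriv_lc(il=2.0, uc=0.5, uin=1.0, i_load=1.0, l=1e-3, c=1e-6, r=0.1)
# ul = 1 - 0.5 - 0.2 = 0.3 V, so dil = 300 A/s and duc = 1e6 V/s
```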
We can extend this to 3 phases quite easily, since a 3-phase LC filter is just three single-phase LC filters coupled by the common neutral voltage.
3-phase LC filter
Aside from the coupling through the neutral voltage, the 3-phase model is just three separate single-phase LC filters. With the neutral voltage included, the 3-phase LC filter has the following form. A more in-depth explanation can be found in the tutorial section for modeling single and 3-phase LC filters.
Note that we don’t assume balanced inductances, capacitances or voltages here, hence we use all 6 state variables in our calculations.
The model (24)-(33) is written in VHDL using real_vectors for the inductances, capacitances, voltages and resistances, as seen here. The states : real_vector parameter holds the 6 state variables, and the function returns the state derivatives and the neutral voltage.
function deriv_lcr (
    states   : real_vector
    ; i_load : real_vector
    ; uin    : real_vector
    ; l      : real_vector
    ; c      : real_vector
    ; r      : real_vector)
return lcr_model_3ph_record
is
    variable retval : lcr_model_3ph_record;
    variable ul     : real_vector(1 to 3) := (0.0, 0.0, 0.0);
    alias il : real_vector(1 to 3) is states(0 to 2);
    alias uc : real_vector(1 to 3) is states(3 to 5);
    variable un  : real := 0.0;
    variable dil : real_vector(1 to 3);
    variable duc : real_vector(1 to 3);
begin
    ul(1) := uin(1) - uc(1) - il(1) * r(1);
    ul(2) := uin(2) - uc(2) - il(2) * r(2);
    ul(3) := uin(3) - uc(3) - il(3) * r(3);

    un := get_neutral_voltage(ul, l);

    dil(1) := (ul(1) - un)/l(1);
    dil(2) := (ul(2) - un)/l(2);
    dil(3) := (ul(3) - un)/l(3);

    duc(1) := (il(1) - i_load(0)) / c(1);
    duc(2) := (il(2) - i_load(1)) / c(2);
    duc(3) := (il(3) - (i_load(1) - i_load(0))) / c(3);

    retval := ((dil(1), dil(2), dil(3), duc(1), duc(2), duc(3))
              , un);

    return retval;
end deriv_lcr;
This 3-phase model is then given to the solver through the impure function interface in the testbench. Because the function is marked impure, it can have side effects, that is, it can use variables and signals that are not given in its argument list. Here we use the time to calculate a 1 kHz sine wave which is the input to the 3-phase LC filter. The full testbench can be found here.
impure function deriv(t : real; states : real_vector) return real_vector is
    variable input_voltage : real_vector(1 to 3);
begin
    input_voltage := (
        sin(t*1000.0*math_pi*2.0)
        ,sin((t*1000.0 + 1.0/3.0)*math_pi*2.0)
        ,sin((t*1000.0 + 2.0/3.0)*math_pi*2.0));

    return deriv_lcr(states, i_load, input_voltage, l, c, r);
end deriv;
procedure rk1 is new generic_rk1 generic map(deriv);
procedure rk2 is new generic_rk2 generic map(deriv);
procedure rk4 is new generic_rk4 generic map(deriv);
LCR filter simulation
The results from the different integration methods in the VHDL testbench are plotted against a QSPICE simulation of the same circuit. To highlight the significance of the order of integration, we use a 1us time step for the 1st order integration as a comparison, which shows a quite good match with the QSPICE reference.
In Figure 2 the step size is increased to 10us and we can see that the first order integration is unstable and the response diverges.
In Figure 3 we match the response with 2nd order Runge-Kutta and a step size of 10us. In Figure 4 we use the 4th order RK algorithm and can increase the step size by another factor of 5, to 50us. So the difference in step size is a factor of 50 between 4th order Runge-Kutta and first order integration. Hence even though we are calculating the differential equation 4 times each time step, we still need an order of magnitude fewer evaluations of the differential equation with the 4th order solver.
The Adams-Moulton simulations are shown in Figures 6 and 7. The major drawback of the AM integrators is most easily seen in Figure 7 with the AM4 solver: the initial startup has a significant transient due to the memory associated with Adams-Moulton methods. However, since there are no extra function evaluations, with a 10us step size the Adams-Moulton method needs around the same number of function evaluations as RK4 at a 50us step size, and with 20us it needs fewer than RK4. In cases where the function evaluation has the highest cost, this is a major benefit. Note that even with the issues of multi-step methods, we can still use 10x longer step sizes compared to the 1st order integration!
With multiple options available, what should we choose for integration? As a first guess for your simulations, I would recommend either RK2 or RK4. They take the step size as an input, hence if we know when to use smaller or larger step sizes, we can easily do that. The AM family of methods is quite useful in HIL simulations where the equations are evaluated in embedded hardware in real time. AM methods do not require any extra evaluations of the differential equation, and this is likely the deciding benefit in embedded hardware, especially with electrical circuits that in general require really small step sizes.
Next in part II
Should you use these methods for general purpose simulations? The short answer is possibly, but you might be able to do a lot better with adaptive step size solvers. Adaptive step size solvers use an estimate of the error to find a step size which keeps the error within some acceptable tolerance. Changing the step size allows the integrator to take long leaps when nothing is changing and scale back the step size when it is actually required, which generally greatly speeds up the simulations. You can find the testbenches for the adaptive methods already in the repository, and they will be introduced in the next blog.