LabVIEW


Control & Simulation loop problem

Hi guys,

 

I'm trying to build a simple controller for a process, and something unexpected showed up: when I use a fixed-step-size ODE solver (Runge-Kutta 1 through 4), the graph shows the output becoming very unstable and, after a while, running off the chart entirely. When I use a variable-step-size solver (for example Runge-Kutta 23), the result looks quite satisfying. What causes this, and does it mean I can't run the simulation in real time? Any suggestion is greatly appreciated!

 

Best Regards

Message 1 of 3

The instability you see with fixed-step-size ODE solvers happens because your time step is too big for the system you are solving. This is a common problem with fixed-step solvers. To fix it, double-click the "dog ear" on the left side of the Control & Simulation Loop and reduce the step size until you find a value that works for your model. You will also notice that the usable step size gets larger as you move to a higher-order solver.

 

image.png
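To see the same effect outside LabVIEW, here is a minimal Python sketch (the test system dy/dt = -50*y and the step sizes are illustrative, not taken from the original model): a fixed-step RK1/Euler integration blows up once the step exceeds its stability limit and behaves fine just below it.

```python
import numpy as np

def euler_fixed_step(f, y0, t_end, h):
    """Integrate dy/dt = f(t, y) with fixed-step forward Euler (RK1)."""
    t, y = 0.0, y0
    ys = [y]
    while t < t_end:
        y = y + h * f(t, y)
        t += h
        ys.append(y)
    return np.array(ys)

# Illustrative stiff-ish test system: dy/dt = -50*y, exact solution decays to 0.
lam = -50.0
f = lambda t, y: lam * y

# Forward Euler is stable only if |1 + h*lam| < 1, i.e. h < 2/|lam| = 0.04.
for h in (0.05, 0.01):
    y = euler_fixed_step(f, y0=1.0, t_end=1.0, h=h)
    print(f"h = {h}: final value = {y[-1]:.3e}")
# h = 0.05 grows without bound; h = 0.01 decays to ~0 as expected.
```

The fix is the same in both cases: push the fixed step below the limit set by the fastest dynamics in the model.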

Barp - Control, Simulation, RTT and HIL - National Instruments
Message 2 of 3

I was actually able to open your original code, and I noticed that your model is very "stiff", which means that when you simulate it with the Control and Simulation Loop you need to be careful with the solver setup.

To really understand the model, I created a simulation using the Control Design functions, which gives me the expected behavior from the linear algebra alone, without the nonlinearities. Here is the response to a step of 100:

image.png

This gives me the combined model:

image.png
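As a rough stand-in (the actual model is only shown in the attached images), the same kind of linear step-response check can be sketched in Python with scipy.signal; the second-order transfer function below is hypothetical, and only the step amplitude of 100 is taken from the post.

```python
import numpy as np
from scipy import signal

# Hypothetical stand-in for the combined linear model:
# a lightly damped second-order plant, G(s) = 1 / (s^2 + 0.4 s + 1).
plant = signal.TransferFunction([1.0], [1.0, 0.4, 1.0])

# Response to a step of amplitude 100 (scale the unit step response).
t = np.linspace(0.0, 40.0, 2000)
t, y = signal.step(plant, T=t)
y *= 100.0

print(f"peak = {y.max():.1f}, final value ~ {y[-1]:.1f}")
```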

To simulate this model in the Control and Simulation Loop, I had to use RK4 with a time step of 1E-5; anything larger and the solution will not converge. RK3 worked with the same time step, but for RK1 I had to go down to 1E-6.
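The fixed-step versus variable-step behavior described in the original question can be reproduced on a generic stiff test system (again a stand-in, not the poster's model): SciPy's variable-step RK23 shrinks its own step where needed, while a hand-rolled fixed-step RK4 only converges once its step is below the stability limit.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in stiff system (not the original model): one fast mode, one slow mode.
A = np.array([[-1000.0,  0.0],
              [    1.0, -1.0]])
f = lambda t, y: A @ y
y0 = np.array([1.0, 0.0])

def rk4_fixed(f, y0, t_end, h):
    """Classic fixed-step 4th-order Runge-Kutta."""
    t, y = 0.0, np.asarray(y0, dtype=float)
    while t < t_end:
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Variable-step RK23 adapts its step to the fast mode automatically.
sol = solve_ivp(f, (0.0, 1.0), y0, method="RK23")
print("adaptive RK23 final state:", sol.y[:, -1])

# Fixed-step RK4: fine at h = 1e-3, diverges at h = 5e-3 for this system.
for h in (1e-3, 5e-3):
    print(f"fixed RK4, h = {h}: final state =", rk4_fixed(f, y0, 1.0, h))
```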

 

Barp - Control, Simulation, RTT and HIL - National Instruments
Message 3 of 3