You cannot change the integration time step during a simulation run, so the time step is fixed for the whole run. It is also available in the model as DELT (delta T). With the time step known you can calculate the number of VARs required, which you should do during initialisation. You may run into a problem, though, if you want to model large delays: with a delta T of 1 millisecond, for example, modelling a time delay of 1 second would require 1000 VARs.
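If it helps, here is a rough sketch of that buffer idea in plain Python (this is not code for the simulation tool itself; the names dt, delay_time and the helper delayed() are only illustrative):

```python
from collections import deque

dt = 0.001          # fixed integration step (the DELT of the run), e.g. 1 ms
delay_time = 1.0    # transport delay to model, e.g. 1 s

n_samples = round(delay_time / dt)                # number of stored values needed (1000 here)
buf = deque([0.0] * n_samples, maxlen=n_samples)  # fill once, during initialisation

def delayed(u):
    """Store the current input and return the value from delay_time ago."""
    y = buf[0]       # oldest sample, i.e. u(t - delay_time)
    buf.append(u)    # newest sample in, oldest sample out
    return y

# each call corresponds to one integration step of size dt
for k in range(3):
    print(k * dt, delayed(1.0))
```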

There are alternative methods, which are approximations. The best-known is the Padé approximation; just search for it on the internet. One helpful resource is: https://ris.utwente.nl/ws/portalfiles/portal/134422804/Someremarkson_Pade-approximations.pdf

A less well-known approximation follows from the limit as n approaches infinity: [1/((T/n)s + 1)]^n -> e^(-sT). You could use this with, say, 10 or 20 first-order lag blocks in series to represent a delay reasonably well. If you have Simulink you could quickly try it out to check whether the approximation is good enough for you.
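As a quick check without Simulink, here is a small Python sketch of that lag-chain idea: n identical first-order lags with time constant T/n in series, integrated with forward Euler at a fixed step. The values of T, n, dt and the step input are assumptions made only for this example.

```python
import numpy as np

T = 1.0        # delay to approximate (s)
n = 20         # number of first-order lag blocks in the chain
dt = 0.001     # fixed integration step (the DELT of the run)
tau = T / n    # time constant of each lag

x = np.zeros(n)                      # one state per lag block
times = np.arange(0.0, 3.0, dt)
y = np.zeros_like(times)             # output of the chain over time

for k, t in enumerate(times):
    u = 1.0 if t >= 0.5 else 0.0     # unit step applied at t = 0.5 s
    prev = u
    for i in range(n):               # each lag: dx_i/dt = (x_{i-1} - x_i) / tau
        x[i] += dt * (prev - x[i]) / tau
        prev = x[i]
    y[k] = x[-1]

# An exact delay would switch from 0 to 1 at t = 1.5 s; the chain output rises
# smoothly around that time.
for t_check in (1.2, 1.5, 1.8):
    print(f"y({t_check}) = {y[round(t_check / dt)]:.3f}")
```

Increasing n sharpens the response toward a true delay, at the cost of more states (lag blocks) in the model.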