05-12-2023 11:44 AM
Hi,
I am using an NI-9402 with a cRIO-9022 to switch the MOSFETs. I am performing a power cycling test, where the junction temperature of the MOSFET is cycled between given temperature limits. During the test, I store data in a TDMS file. I have implemented a sampling period of 30 ms using a timed loop. In order to capture the maximum temperature, I take an additional sample when the maximum-temperature condition is met. Therefore, there is one additional sample per cycle in between two regular 30 ms samples.
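Since LabVIEW is graphical, here is a rough text-language sketch (Python, with made-up names, purely as an illustration of the scheme just described): one sample is logged per regular 30 ms tick, plus one extra sample per cycle when the maximum-temperature condition is met.

```python
def select_logged_samples(readings, period_ms=30, t_max_threshold=None):
    """Decide which readings get written to the TDMS file.

    readings: list of (t_ms, temp) pairs from the acquisition loop.
    One sample is logged per regular 30 ms tick, plus one extra sample
    per cycle the first time temp reaches t_max_threshold (the
    maximum-temperature condition), even between two regular ticks.
    """
    logged = []
    next_tick = 0
    extra_done = False
    for t_ms, temp in readings:
        if t_ms >= next_tick:
            logged.append((t_ms, temp, "regular"))
            while next_tick <= t_ms:          # schedule the next 30 ms tick
                next_tick += period_ms
        if (t_max_threshold is not None and not extra_done
                and temp >= t_max_threshold):
            logged.append((t_ms, temp, "extra"))
            extra_done = True                 # only one extra sample per cycle
    return logged
```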
However, I have noticed a large delay (around 7 ms) between the moment LabVIEW commands the MOSFET to turn OFF and the moment the NI-9402 output actually goes low.
In the below figure, on the extreme left, I am writing data at the regular 30 ms sampling using the timed loop. When the turn-off condition is met, we switch the MOSFETs and then write the data again (extreme right) at that operating state (the MOSFET's highest junction temperature). This data is stored when the turn-OFF condition is met.
In the below figure, the data in green shows that the MOSFET is commanded to switch from ON to OFF. The data in red shows the output voltage from the NI-9402. This is without any delay between the time the command is given and the time the data is read. Column 1 shows the sampling time in milliseconds.
The below data is with a delay of 6 ms introduced, and it still did not work: the data in red still does not change to zero (NI-9402 low state).
The below data is with a delay of 7 ms introduced, and it worked.
There are no gate drivers involved. We are directly reading the output of NI-9402 using NI-9229.
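As a side note, the delay shown in the data above can be extracted automatically from the logged NI-9229 samples. A minimal sketch (Python, hypothetical names; the 0.5 V "low" threshold is my assumption, not from the original setup):

```python
def turn_off_delay_ms(cmd_t_ms, samples, low_threshold=0.5):
    """Return the time from the software OFF command (cmd_t_ms) until
    the NI-9402 readback voltage first drops below low_threshold.

    samples: list of (t_ms, volts) read back via the NI-9229.
    Returns None if the output never went low in the captured window.
    """
    for t_ms, volts in samples:
        if t_ms >= cmd_t_ms and volts < low_threshold:
            return t_ms - cmd_t_ms      # first sample seen in the low state
    return None
```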
I was wondering if there is any way by which I can avoid this delay.
Kind Regards,
Bhanu
05-12-2023 01:01 PM - edited 05-12-2023 01:20 PM
Hi Bhanu,
@okidoki21 wrote:
I am using an NI-9402 with a cRIO-9022 to switch the MOSFETs. I am performing a power cycling test, where the junction temperature of the MOSFET is cycled between given temperature limits. During the test, I store data in a TDMS file. I have implemented a sampling period of 30 ms using a timed loop. In order to capture the maximum temperature, I take an additional sample when the maximum-temperature condition is met. Therefore, there is one additional sample per cycle in between two regular 30 ms samples.
However, I have noticed a large delay (around 7 ms) between the moment LabVIEW commands the MOSFET to turn OFF and the moment the NI-9402 output actually goes low.
In the below figure, on the extreme left, I am writing data at the regular 30 ms sampling using the timed loop. When the turn-off condition is met, we switch the MOSFETs and then write the data again (extreme right) at that operating state (the MOSFET's highest junction temperature). This data is stored when the turn-OFF condition is met.
IMHO your VI lacks any proper design/algorithm…
When you want to use a Realtime target for realtime applications then you need to write your program in a realtime-compatible fashion!
My recommendation:
NI provides training and manuals for both recommendations…
05-15-2023 08:02 AM
Hi,
Thanks for the reply @GerdW.
We followed the tutorials mentioned here (https://knowledge.ni.com/KnowledgeArticleDetails?id=kA03q000000x0UdCAI&l=en-US) before starting to write our LabVIEW code. We have also watched a few LabVIEW and cRIO tutorials.
I understood your point about replacing the shared variables and local variables with wires. However, I still do not understand the disadvantages of using shared variables and local variables.
We basically have a simple state machine where we heat up the device using its own power losses and then cool it down, depending on the manually entered turnON and turnOFF times. This process should repeat for many cycles. Throughout the heating and cooling phases we want to log the data at a constant sampling frequency. Additionally, we also want to log the maximum and minimum temperature (which can occur between two sampling points). I understand that having everything in the same timed loop can put a lot of burden on it. Should I use two synchronized timed loops, one for scanning the data from the IO Scan Engine and saving it in a variable, and the second for controlling the transitions among the states in the state machine?
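In text form, the HEAT/COOL cycle logic I have in mind looks roughly like this (Python used only as an illustration of the state logic; names and values are made up):

```python
def power_cycle_states(turn_on_ms, turn_off_ms, n_cycles, tick_ms=30):
    """Generate (t_ms, state) pairs for n_cycles of a two-state
    HEAT/COOL machine, sampled every tick_ms.

    turn_on_ms is the heating duration and turn_off_ms the cooling
    duration, both manually entered, as described above.
    """
    t = 0
    out = []
    for _ in range(n_cycles):
        for state, dur_ms in (("HEAT", turn_on_ms), ("COOL", turn_off_ms)):
            end = t + dur_ms
            while t < end:                 # one sample per tick in this phase
                out.append((t, state))
                t += tick_ms
    return out
```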
Kind Regards,
Bhanu
05-17-2023 12:21 AM - edited 05-17-2023 12:23 AM
Hi okidoki,
@okidoki21 wrote:
However, I still cannot understand about the disadvantages of using the shared variable and local variables.
SharedVariables, especially the network-based ones, can introduce lag. Reading an NSV immediately after writing to it may return a stale (wrong) value…
Local variables introduce race conditions and data copies…
THINK DATAFLOW: the wire (and shift registers) are "variables" in LabVIEW!
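LabVIEW is graphical, but the race condition meant here has a direct analogue in any language. A deterministic sketch (Python, purely illustrative): two parallel loops each increment a shared "local variable" via separate read and write steps, and an unlucky interleaving silently loses an update.

```python
def interleaved_updates(schedule):
    """Deterministically simulate two 'loops' that each do
    read -> +1 -> write on a shared 'local variable'.

    schedule: list of (loop_id, op) steps, op in {"read", "write"}.
    Shows how an unlucky interleaving loses an update (a race condition).
    """
    shared = 0
    regs = {}                     # each loop's private copy after its read
    for loop, op in schedule:
        if op == "read":
            regs[loop] = shared
        else:                     # "write": store private copy + 1
            shared = regs[loop] + 1
    return shared

# Safe order: A reads and writes, then B does -> both increments land.
assert interleaved_updates([("A", "read"), ("A", "write"),
                            ("B", "read"), ("B", "write")]) == 2
# Race: both read before either writes -> one increment is lost.
assert interleaved_updates([("A", "read"), ("B", "read"),
                            ("A", "write"), ("B", "write")]) == 1
```

A wire (or shift register) avoids this by construction: the updated value flows to exactly one next consumer, so there is no window between read and write.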
@okidoki21 wrote:
Additionally, we also want to log the maximum and minimum temperature (which can occur between two sampling points). I understand that having everything in the same timed loop can put a lot of burden on it. Should I use two synchronized timed loops, one for scanning the data from the IO Scan Engine and saving it in a variable, and the second for controlling the transitions among the states in the state machine?
05-19-2023 06:37 PM
Hi GerdW,
Below are my replies:
I am planning to implement a state machine with 3 loops: one for sampling all the data from the IO nodes, and the second and third for controlling state transitions and writing to the TDMS files.
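Roughly, that 3-loop plan in text form (Python only as an illustration; LabVIEW queues/channels would play the role of `queue.Queue` here, and the 100 °C threshold and function names are made up):

```python
import queue
import threading

def three_loop_skeleton(samples):
    """Skeleton of the 3-loop design: loop 1 samples the IO nodes
    (simulated by iterating over `samples`), loops 2 and 3 consume the
    readings via queues (state-machine control and TDMS-style logging).
    """
    ctrl_q, log_q = queue.Queue(), queue.Queue()
    logged, states = [], []

    def sampler():
        for s in samples:                  # loop 1: acquire and fan out
            ctrl_q.put(s)
            log_q.put(s)
        ctrl_q.put(None)                   # sentinel: shut consumers down
        log_q.put(None)

    def controller():
        while (s := ctrl_q.get()) is not None:   # loop 2: state transitions
            states.append("HEAT" if s < 100.0 else "COOL")

    def logger():
        while (s := log_q.get()) is not None:    # loop 3: write to file
            logged.append(s)

    threads = [threading.Thread(target=f) for f in (sampler, controller, logger)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return states, logged
```

The point of the queues is that the sampling loop never waits on file I/O: TDMS writes happen in their own loop at their own pace.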
Kind Regards,
Bhanu
05-20-2023 01:12 PM
Hi okidoki,
@okidoki21 wrote:
- The ScanEngine will limit the timing anyway: which sampling period did you configure? Don't forget: the legacy cRIO-9022 is rather limited… REPLY: The ScanEngine is configured at 1 ms.
Even when you configure the ScanEngine to run at 1 kHz: does it actually run that fast? Does it run that fast when you handle more than 1…4 IO nodes? What's the CPU usage when trying to run the ScanEngine at that rate?