Use Rising and Falling Edge of an Encoder's A & B pulse as the Sample Clock

Solved!

Kevin,

 

I was able to sync my encoder task using an example I found:

https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000fyrqSAA&l=en-US&fireglass_rsn=true

 

and ran a test where I recorded the z-pulse in parallel with my encoder angle measurement. I was able to verify that the angle resets to 0 in the same row where the z-pulse goes high, so the logic behaves the way it should.
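That row-by-row check can be sketched in Python (a hypothetical offline analysis of the logged data, assuming the angle and z-pulse columns were recorded in parallel, one row per sample clock tick):

```python
import numpy as np

def check_z_reset(angle, z_pulse, tol=0.5):
    """Verify the encoder angle is ~0 in the same sample (row)
    where the z-pulse has a rising edge. Both arrays are assumed
    to be logged in parallel off the same sample clock."""
    z = np.asarray(z_pulse, dtype=bool)
    rising = np.flatnonzero(~z[:-1] & z[1:]) + 1  # rows where z goes high
    return all(abs(angle[i]) < tol for i in rising)

# Simulated sweep: angle ramps 0..359 degrees, z-pulse high at each wrap
angle = np.tile(np.arange(360.0), 3)
z = np.zeros_like(angle, dtype=bool)
z[[0, 360, 720]] = True
print(check_z_reset(angle, z))  # True: angle is 0 wherever z goes high
```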

 

I moved the setting of waveform properties inside the case structure and created a separate error dialog box for my consumer loop. That seems to have taken care of the buffer issue: my code has now been running continuously for 20 hours without giving me any errors.

 

Honestly, I would love to make it more efficient and apply what you suggested:

 

"make my Queue datatype to be a typedef'ed cluster with an array of waveforms (for AI) and an array of DBLs (for the Encoder).  Then you'd just wire straight from the DAQmx Read functions into a "Bundle by Name", and from there direct to Enqueue"

 

but to be honest, I don't know how. I tried searching examples and forums, but I was not able to create a typedef'ed cluster the way you suggested. If you could please show me, I would really appreciate it. The reason I run FFTs in real time is that some of the components are being tested for the first time, and I wanted a decent idea of their frequency response during the engine load and speed sweeps.
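Since LabVIEW is graphical, here is a rough text-language analogue (Python, with hypothetical names) of what that cluster-in-a-queue arrangement amounts to: each queue element bundles the AI waveforms and the encoder DBLs from one read, so nothing is appended or reshaped between the DAQmx Reads and the enqueue:

```python
from dataclasses import dataclass
from queue import Queue
import numpy as np

# Hypothetical analogue of the typedef'ed cluster: one queue element
# bundles the AI waveforms and the encoder samples from a single read.
@dataclass
class AiEncoderBundle:
    ai_waveforms: np.ndarray    # shape (n_channels, n_samples), from AI read
    encoder_angles: np.ndarray  # 1-D array of DBLs, from the encoder read

q: Queue = Queue()

def producer_iteration(ai_read, enc_read):
    # "Bundle by Name" then enqueue: no array-growing every iteration
    q.put(AiEncoderBundle(ai_waveforms=ai_read, encoder_angles=enc_read))

def consumer_iteration():
    bundle = q.get()  # blocks until the producer enqueues an element
    return bundle.ai_waveforms, bundle.encoder_angles

producer_iteration(np.zeros((3, 1000)), np.zeros(1000))
ai, enc = consumer_iteration()
print(ai.shape, enc.shape)  # (3, 1000) (1000,)
```

In LabVIEW terms, the dataclass plays the role of the typedef'ed cluster and `q.put` the Enqueue Element node; the point in both is that the producer loop hands off each read as-is rather than growing an array.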

Message 11 of 17

Could you use File->Save for Previous Version... to save back to LV2016?   I don't presently have access to LV2019 or LV2020.   Then I can make some mods in-place and repost.

 

My suggestion to use the typedef'ed cluster isn't certain to be a big improvement, just something that "feels" more efficient than expanding an array of waveforms every iteration.  It's quite possible that the LabVIEW compiler is up to the challenge though, and that the change won't improve memory or CPU efficiency.  At least it shouldn't hurt though, so I'll try it and you can see.

 

As to the FFT stuff, I'd venture that 3 graphs updating twice a second is more than enough for a human operator to monitor and notice things. If your sampling rate is many times higher than your frequencies of interest, you can also realize some gains by decimating the data before doing the FFTs.
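A rough Python sketch of the decimation payoff (the sample rate and signal frequency here are made up for illustration):

```python
import numpy as np

fs = 51200.0                        # assumed sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 120.0 * t)   # 120 Hz component of interest

# Crude decimation by 8: low-pass (moving average) then keep every 8th
# sample. In practice use a proper anti-alias filter; this sketch only
# shows the FFT-size payoff when fs is far above the band of interest.
q = 8
kernel = np.ones(q) / q
y = np.convolve(x, kernel, mode="same")[::q]

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), q / fs)
print(freqs[np.argmax(spectrum)])   # 120.0 (same peak, 1/8 the FFT work)
```

The decimated FFT is an eighth the length for the same frequency resolution over the band that matters, which is where the real-time CPU savings come from.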

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 12 of 17

Kevin,

 

Attached is the LabVIEW version 16.0 copy. Let me know if it doesn't work for you or anything.

Message 13 of 17

Here are a few small mods along with a few small comments.  Let me know how it goes.

 

 

-Kevin P

Message 14 of 17

Kevin,

 

Thanks for the comments and clean-up on the code. I haven't run it yet since I couldn't find "AI and Encoder Cluster.ctl" to load the VI fully.

 

[Screenshot: VI load dialog reporting the missing "AI and Encoder Cluster.ctl"]

 

I am not sure if the code will run as intended with that one missing item.

Message 15 of 17

Oops, sorry about that.

 

 

-Kevin P

Message 16 of 17

Perfect, and thank you so much! I have moved most of the hardware to a test cell, but I will plug it in at some point today and test it out. I'm hoping to start testing this week.

Message 17 of 17