LabVIEW

FPGA analog input rate

Thank you, I did merge the FPGA VIs into one, and I can record the output data simultaneously. What I get is still one data point per 100 ms, even though I change the "Wait Until" time. If you look into my FPGA VI, you can see the loop rate is set to zero µs. That situation really confuses me. My project document is attached.

Message 11 of 51

I don't see where you're setting the loop rate to 0 µs, but it doesn't really matter. The FPGA runs much faster than your host VI, and since your host only receives one data point at a time from the FPGA, some data is lost without ever being transferred to the host. Given this speed difference, I don't understand why you are using the interrupt.

 

How do you know your rate is 100ms?  Have you tried timing the host loop?  If it was 50 rather than 100, how would you know?

 

In any case, the right solution, as I mentioned before, is for you to use a DMA FIFO to transfer multiple data points from the FPGA to the host.  That way the FPGA can acquire data much faster and the host can read that data as an array containing many data points.
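Not LabVIEW code, but a rough Python sketch of why the block transfer matters; the container and function names below are invented for illustration and are not part of the FPGA Interface API. Point-by-point reads through an indicator only ever see the latest value, while a buffered FIFO read hands the host everything produced since the last read.

```python
from collections import deque

# Toy model of the two transfer styles; 'fifo' and 'fpga_produce' are
# made-up names for illustration only.
fifo = deque()

def fpga_produce(n):
    """Pretend the FPGA loop pushed n new samples since the host last looked."""
    start = len(fifo)
    for i in range(n):
        fifo.append(start + i)

# Front-panel-indicator style: the host polls occasionally and only ever
# sees the most recent value, so everything in between is lost.
fpga_produce(1000)
latest_only = fifo[-1]          # 1 value kept, 999 effectively dropped

# DMA-FIFO style: the host reads the whole backlog as an array, so slower
# host polling no longer drops samples.
block = [fifo.popleft() for _ in range(len(fifo))]
print(latest_only, len(block))  # -> 999 1000
```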

Message 12 of 51

The numeric control indicates the Loop Rate (µs). I combined two LabVIEW examples into one, so I am not quite sure what you mean by "interrupt". How can I avoid using the interrupt?

Message 13 of 51

Hello shuishen,

You have two conflicting items in your host VI. The Wait on IRQ method runs your host loop at whatever rate your FPGA is running (as long as the host can keep up); however, the Wait Until Next ms Multiple (with the 10 ms input) is slowing your loop down. The Wait Until is guaranteeing that the loop, and therefore the front panel indicator, only reads at 100 Hz.
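To make the interaction concrete, here is a small Python sketch (not LabVIEW) of a loop containing both a short IRQ-style wait and a wait-until-next-multiple; the timings and function are illustrative assumptions only. The loop period ends up pinned to the 10 ms multiple, which is the effect described above.

```python
import time

def run_loop(irq_wait_s=0.001, multiple_s=0.010, iterations=5):
    """Both waits run every iteration, so the loop period is set by the
    longer one -- here the 10 ms 'Wait Until Next ms Multiple' stand-in."""
    last = time.monotonic()
    for _ in range(iterations):
        time.sleep(irq_wait_s)             # stand-in for Wait on IRQ (~1 ms)
        now = time.monotonic()
        remainder = now % multiple_s       # sleep up to the next 10 ms boundary
        if remainder:
            time.sleep(multiple_s - remainder)
        now = time.monotonic()
        print(f"loop period ~ {(now - last) * 1000:.1f} ms")
        last = now

run_loop()   # prints periods of roughly 10 ms despite the 1 ms IRQ-style wait
```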

 

So what you have is an FPGA that is running faster, but a host that is not looking at it often enough. Why not take the Wait Until Next ms Multiple out completely? Or, if 1 kHz is OK for you, just put in a 1 ms Wait and remove the IRQ (you won't get sub-millisecond timing without the IRQ, though).

 

From what I see, you are accurately controlling the FPGA loop rate (also borne out by what you see on the Scope).

 

Also related is that charts take a finite amount of time to update, so at the very fast speeds you are aiming for, that may interfere as well.

 

-Mello

 

**EDIT** Sorry Nathand, I leapfrogged your reply 🙂


Data Science Automation

CTA, CLA, CLED
SHAZAM!
Message 15 of 51

There is one thing I am not clear about. When I display the measurement on the scope, is the x axis time or the number of intervals? When I use a 100 ms wait time and an x scale of 1, the x axis runs much faster than real time, so I guessed the x axis should be scaled by the 100 ms interval, giving a scale factor of 0.1, and that seemed right to me. But when I change the wait time to 1 ms and the x scale to 0.001, the scaled x axis runs slower than the clock time. What should I do with the x axis then?

Message 16 of 51

Quick question. When you mention the "scope," are you talking about the Front Panel Chart indicators for motor speed and PSI, or an external hardware scope that you are connecting to your FPGA card?


Data Science Automation

CTA, CLA, CLED
SHAZAM!
Message 17 of 51

The scope is the one in the VI. Thank you for clarifying that point.

Message 18 of 51

When you say the chart updates "slower" or "faster", are you changing the limits of the graph when you change the scale? If your graph runs from 0 to 10 and you change the X scale from 1 to 0.1, it will look like the graph is updating at 1/10th the speed, because it now takes 10 points to cover the distance that a single point covered before.

 

Also, it's possible you can't achieve a 1ms loop rate on your host code, and certainly LabVIEW won't update your screen that fast (although it might update the graph with several points at once).  Try timing it by storing the millisecond timer in a shift register and comparing it to the current timer value on each loop cycle to see how fast it is actually running.
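In text form (Python rather than LabVIEW), that shift-register timing check amounts to carrying the previous timestamp into the next iteration and printing the difference; the 1 ms sleep below is just a placeholder for whatever the host loop actually does.

```python
import time

prev = time.monotonic()            # plays the role of the shift register
for i in range(20):
    time.sleep(0.001)              # placeholder for the loop's real work/wait
    now = time.monotonic()
    print(f"iteration {i}: {(now - prev) * 1000:.2f} ms")
    prev = now                     # feed this timestamp to the next iteration
```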

Message 19 of 51

From this related post:


@shuishen1983 wrote:

I am using the 7831R for data acquisition. The problem is that when I use a scope in the VI to observe the data, the numbers on the x axis seem to change with the "Wait Until Next ms". I can't get the right time on the x axis. Does anybody know how to get the real time on the x axis?


You cannot get LabVIEW to plot the data with real time on the X axis automatically. You need to set the scale factor to match the rate at which you are acquiring data. LabVIEW simply adds one new point to the plot each time it gets new data, regardless of the actual loop rate. That also means that if you have a gap in the data acquisition - say, Windows switches to doing something else for a second - you won't see it in your graph. In order to get consistent timing you need to use some sort of hardware clock. In your case the FPGA can do it if you use a DMA FIFO to transfer the data, but not if you use a front panel indicator as you are currently doing.
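As a rough illustration (Python, with assumed numbers): if the hardware clock guarantees one sample every 1 ms and the data arrives as a block from the DMA FIFO, the time axis is just the sample index times that known dt, which is what the chart's X-scale multiplier should be set to.

```python
# Assumed acquisition period; in practice this comes from the FPGA loop timing.
dt_s = 0.001                  # 1 ms per sample, paced by the hardware clock

samples = list(range(1000))   # stand-in for one block read from the DMA FIFO
t = [i * dt_s for i in range(len(samples))]   # real time for each sample

print(t[0], t[-1])            # first and last timestamps: ~0.0 s and ~0.999 s
```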

Message 20 of 51