
Need to improve speed when graphing large arrays

Dear altenbach,

Thank you for your suggestions.

The original code did not include the 2D array & was implemented as you suggest. The addition of the 2D array was a "workaround", as happens quite often in programming.
Attached is an earlier version of this same project.

The 2D array was used as a buffer & helps, but it does have overhead costs: the CPU time goes up from 28% to about 36%.
The most frustrating thing about DAQmx is that the triggering is inconsistent.

With one set of code I would think that everything is OK, as I have added timing to slow the consumer loop down, only to have it miss lines the next day without my changing a thing.

I have also converted the double-precision data to U32, as the computer should handle U32 more efficiently, but for some reason this really gives me problems.
I have also tried to use the global variable as you suggest & still miss more lines.

The number of missing lines is also directly proportional to CPU usage, although we are not even near 50% CPU time. I can take the "Windows Performance" window & move it around to increase CPU load, & I can see that more lines will be missing.
The same thing happens when zooming while acquiring.

I was originally responding to another user's comment about arrays, & the entire point was, unfortunately, missed.
I didn't claim to be an expert, as I took my first beginner's class in January 2006. However, the point about LV handling large arrays had nothing to do with inexperience, as I did ask NI & other, more experienced programmers.

Again, many thanks for your suggestions. I will try to incorporate them into future code.

Earl

Message 31 of 66

LabVIEW provides a Decimate 1D Array function in the Array palette.  It is a growable function, so you can choose to decimate by every 2, 3, 4 ... n, any integer.  Decimation will, however, remove the fine detail from your graph (high-frequency components).  If decimating is a problem, there are better data-compaction routines which are less lossy.
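
For readers without LabVIEW in front of them, here is a rough text-language sketch of what "decimate by n" amounts to: keep every n-th sample for the graph. This is plain C with illustrative names (decimate_every_n is not an NI function), not the palette function itself.

#include <stddef.h>

/* Keep every n-th element of src for display.  dst must hold at least
   (len + n - 1) / n elements; returns the number of elements written. */
size_t decimate_every_n(const double *src, size_t len, size_t n, double *dst)
{
    size_t out = 0;
    if (n == 0)
        return 0;                  /* guard against a zero decimation factor */
    for (size_t i = 0; i < len; i += n)
        dst[out++] = src[i];       /* copy only every n-th sample */
    return out;
}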

 

Paul

Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
Labview 4.0- 2013, RT, Vision, FPGA
Message 32 of 66
Shane,

I have finally read your post, as I was responding to altenbach's post.

I was trying to be humble in apologizing & emphasizing my apology in bold letters. It is certainly more effective than making the letters the same size or smaller.

In order to really find out if LV can handle large (a relative term) arrays, one only needs to call NI support & ask. With regard to Mark's comments, I have spoken with him & shown him my code. Unless you were there, you really can't say. I suspect that, as a professional, Mark is staying out of the fray & wants to be helpful. One can also look at the prior 30+ pages of postings, as I have suggested in the past, but to no avail.

As far as "not holding much water here", it appears that you have made some assumptions:

1. That I care. I really don't, as I normally look at NI example code & can call NI support. I have occasionally perused the discussion forum, but so much data needs to be filtered to be of any use.

2. That I need the approval of the other NI "experts" for their code. It appears that for some their code comes at a high cost of cowering.

One of the true professionals here is altenbach; he is just curious enough to analyze the code & make suggestions.

Earl

over & out.
Message 33 of 66
Dear Paul,

Thank you for the suggestion.

I have considered this option & others.

Decimating the array causes a loss of data & is not desirable for our customer.

I have also considered an interpolation, which removes noise but also reduces the array size.

Unfortunately, the customer wants the larger arrays so they can process them offline themselves.

Best Regards,

Earl
Message 34 of 66

I was thinking more of filtering/decimating the data prior to displaying it on a graph, but keeping all the data for storage.  There is often no need to show all the data when a subset will convey the same information (the CliffsNotes approach), just as 12-megapixel images look no better than 4-megapixel images when viewed as a 5x7 print, but the additional data is needed if we want to zoom in on a particular ROI.
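
As a hedged illustration of a "less lossy" display-compaction routine (this is not code from the project, and the names are made up): keeping the minimum and maximum of each bin preserves spikes on the graph while the full record is still written to disk. In C it might look like this:

#include <stddef.h>

/* Compact src (len samples) for display: each bin of n samples contributes
   its min and max.  dst must hold at least 2 * ((len + n - 1) / n) elements;
   returns the number of elements written. */
size_t minmax_compact(const double *src, size_t len, size_t n, double *dst)
{
    size_t out = 0;
    if (n == 0)
        return 0;                              /* guard against a zero bin size */
    for (size_t start = 0; start < len; start += n) {
        size_t end = (start + n < len) ? start + n : len;
        double lo = src[start], hi = src[start];
        for (size_t i = start + 1; i < end; i++) {
            if (src[i] < lo) lo = src[i];      /* track the bin minimum */
            if (src[i] > hi) hi = src[i];      /* track the bin maximum */
        }
        dst[out++] = lo;                       /* emit min, then max, per bin */
        dst[out++] = hi;
    }
    return out;
}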

Paul

Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
Labview 4.0- 2013, RT, Vision, FPGA
Message 35 of 66
Earl,

What 30+ pages of postings?  You're starting to sound like SCO.

I also thought Mark had actually posted in this thread (in contradiction to your claims), but maybe I'm mistaken.

As to the rest, I simply couldn't be bothered answering this kind of stuff any more.  Some things simply aren't worth the hassle (even to me).

Comments such as "In order to really find out if LV can handle large (a relative term) arrays one only needs to call NI support & ask" are beyond comprehension to me.

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 36 of 66

Paul,

 

Good suggestion.

 

Although the original posting was about graphing, I did not post the original question.

My particular application constructs an image of 2000 lines. A series of 1D arrays is assembled to form the image.

As this is done with a DAQ card and not IMAQ, the difficulty is getting the data into the array reliably in the first place.

The technique you describe is similar to a "thumbnail" approach for pictures.

Thanks for the help.

 

Earl

Message 37 of 66

Earl,

(part 1 of 2 due to 5000-char limit.  Part 1 is Editorial, Part 2 is Tech Content.)

I don't know if you're aware of this, but when you call NI for tech support you may very well be talking to someone who's not a highly experienced LabVIEW user.  A former co-worker of mine joined NI to be a sales engineer, and a significant part of his training was spent on the phones doing tech support.  He would rotate through the various stations at a nearly-overwhelming pace.  2 weeks on Fieldpoint, 2 weeks on Motion, 2 weeks on Real Time, 2 weeks on Vision, 2 weeks on...

So during one 2-week assignment, he'd have to spend his "spare time" trying to learn some stuff about the topic for his next 2 weeks.  Generally speaking, for his first day or two he'd have to defer a bigger % of questions to a more experienced NI support person, but as the days went by he'd be able to handle more on his own.  Just as he'd start to find his groove, he'd be switching to another focus area and starting over again.

The folks on the phones are very helpful and courteous, and generally do a good job of knowing when to defer to someone more experienced.  But I know of several times I've called in and been given an answer that was wrong, or at least not applicable in my situation.  Since I'd know to keep on probing & prodding, I would eventually get a more accurate answer -- usually from a more experienced person on a callback.

Everyone's mileage will vary on stuff like this.  I would just say that the answers from phone support may need to be filtered a bit too, albeit very likely much less than you'd filter advice from unknown folks in this forum.  On the other hand, having been around the forums for a few years, I've found certain contributors here who have been just about always spot-on.  They aren't unknown (to me) any more, and I tend to trust what they say.  Some are from NI, but more of them aren't.

(continued in Part 2 - Tech Content)

-Kevin P.

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 38 of 66

Earl,

(part 2 of 2 due to 5000-char limit.  Part 1 is editorial, Part 2 is content.)

In your current case, I expect the advice you got from NI is exactly right, even though the reason isn't entirely clear.  I've got some thoughts, though.

Some kinds of array handling can no doubt be implemented more efficiently in C than in LabVIEW.  There are usually ways to get "pretty close" with LabVIEW, but that's not always good enough.  I suspect I have an inkling why your app may be one of the exceptions where best-case C can stomp all over best-case LV.

In "C", you could implement a 2d array in a different manner than LabVIEW allows.  Specifically, you'd have a pointer-to-a-pointer.  For simplicity of the analogy, let's think of it as a 1d array of pointers.  Each element of the array is simply a pointer to a memory address that marks the starting point for 1 horizontal "line" of the image data. 

Non-C people might say, "so what?"  Well, let's tackle your problem where we need to replace an old image line with a new image line.  In LabVIEW, you must copy your 2400 data elements into the pre-allocated memory space using "Replace Array Subset."  In C, you could simply replace the pointer to the old data with a pointer to the new data.  So you copy exactly 1 integer.  (Of course, somewhere you've got to handle the deallocation of the memory that the old pointer was pointing to, but it doesn't necessarily need to be done on the spot in real time.  You could for example copy the old pointer to a linked list of pointers waiting for deallocation after all your time-critical processing is done.  The bottom line is that you can easily imagine the C version being ~1000x as fast.)

(Note: this pointer-to-a-pointer in C also more easily allows "arrays" where each row can have a different # of columns.)
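
A minimal C sketch of this pointer-to-a-pointer idea (the struct and names below are illustrative only, not anyone's actual code):

#include <stdint.h>

typedef struct {
    uint32_t **rows;   /* rows[i] points to the first pixel of line i */
    int n_rows;
    int n_cols;        /* rows could even have differing lengths */
} Image;

/* Replace one image line by swapping a single pointer.  The old line is
   returned so the caller can free it later, outside the time-critical path
   (e.g. push it onto a "to be freed" list). */
uint32_t *replace_line(Image *img, int row, uint32_t *new_line)
{
    uint32_t *old_line = img->rows[row];
    img->rows[row] = new_line;   /* one pointer copy instead of 2400 element copies */
    return old_line;
}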

I can't look at your code b/c I'm still at LV 7.1.  I don't use IMAQ, but based on demos it would be my *guess* that its underlying image processing takes advantage of the efficiency to be gained by the C representation of arrays.  That would explain why you pass IMAQ refnums around rather than actual arrays of image data.  So the more you can do your manipulations in the IMAQ world, the more likely you are to be efficient.  Can you treat your 1-line acquisition purely as an IMAQ image without ever turning it into a LV array?  Does IMAQ allow you to merge a 1x2400 image into a specific "line" of a 1600x2400 image, all based on IMAQ refnums rather than data arrays?  Does IMAQ provide any front panel controls of its own that accept IMAQ refnums as inputs?  If so, I would suspect it's more efficient to use one of these rather than a LabVIEW X-Y graph or similar.  Or maybe you're already doing this...

-Kevin P.

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 39 of 66

Hi Kevin,

Yours is the best response yet as it addresses all the issues:

1. NI information concerning the handling of large arrays,

2. a possible LV solution,

3. a possible "C" solution.

This has been an ongoing project for about 3 years (on & off). The original LV programmer has about 15 years' experience & has run into the same issues that I am running into concerning the handling of large arrays & triggering. The original programmer has sufficient experience & expertise that NI uses him in their LV user group meetings. This project was highlighted at several meetings for the speed & versatility of IMAQ, but also for the limitations of handling arrays.

I have taken over this project as I have the time, & although I am very much a novice, I have asked the same questions of NI support as well as the NI instructors.

One of the problems we have is triggering. As we are forming a rather large image, any missing data as a result of mis-triggering manifests itself as a black line in the image. It doesn't happen much, but even 1% is more than enough to ruin an image: out of the 2000 lines, 1% translates into 20 random black lines. The mis-triggering is also proportional to CPU usage; the higher the usage, the more lines are missing. We have tried setting different execution priorities, purchased a dual-core computer, etc., & the issues are better. Attached are two pictures exhibiting the problem.

The idea for using "C" code came from NI, as it was their personal opinion that "C" does handle arrays better. Hence my posting.

Your analysis of the "C" code operation for handling arrays is exactly what the NI people have said: "C" writes directly into specific memory locations, hence the speed.

I was hoping for someone who was familiar with both LV & "C" code to have a potential solution perhaps using a "CIN". But "hybrids" (LV & C) may be few & far between.
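
(For reference, one common way such a hybrid is put together is to compile a small C routine into a DLL/shared library and call it from LabVIEW, e.g. through a Call Library Function Node or a CIN. The sketch below is only an illustration under that assumption: the function name and parameters are made up, and since it still copies the line rather than swapping pointers, it mainly avoids extra LabVIEW-side copies rather than delivering the full pointer-swap speedup described above.)

#include <string.h>
#include <stdint.h>

/* Copy one acquired line (new_line, n_cols elements) into row `row` of a
   preallocated n_rows x n_cols image stored in row-major order. */
void replace_image_line(uint32_t *image, int32_t n_rows, int32_t n_cols,
                        int32_t row, const uint32_t *new_line)
{
    if (image == NULL || new_line == NULL || row < 0 || row >= n_rows)
        return;                                   /* ignore invalid calls */
    memcpy(image + (size_t)row * (size_t)n_cols, new_line,
           (size_t)n_cols * sizeof *new_line);
}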

IMAQ does allow you to address each 1D array. It also allows you to address one pixel at a time, but neither is without overhead.

I gather several 1D arrays to form a 2D array, then input it into IMAQ memory. IMAQ & arrays appear to be more efficient at replacing array portions than one line at a time, which makes sense.

 

Thanks for your input.

 

Earl

Message 40 of 66