Need to improve speed when graphing large arrays

Mark wrote:
 
"
... it does not mean that you cannot handle large arrays well in LabVIEW.  By getting rid of memory reallocations, you can optimize the handling of large arrays, as seen in the Replace Array Subset example.
 
"
 
Thank you very much for clarifying!
 
Ben
Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 21 of 66

As I have said again and again, NI has looked at this code, and Mark & other professionals have blessed it.

I have said enough about this. Unfortunately, this thread has degenerated into a misunderstanding & has more to do with egos than code. This reminds me of several dogs & only one tree.

OK, the King has no clothes & please disregard the 30+ pages in the discussion forum about LabVIEW slowing the computer's operation.

Your turn at the tree.

 

Earl

 

Message 22 of 66
altenbach shrugs and goes on to more rewarding issues...............
Message 23 of 66
Hi Earl

not wishing to prolong the flaming war, but in your correspondence you claim to have had a satisfactory solution prior to DF Gray's suggestion to check out a tutorial on data management.

Did you actually attempt to implement a solution utilising the principles from that tutorial, or just accept that the solution worked?

Thought without learning is perilous: Confucius


xseadog
Message 24 of 66

Yes, I have read the tutorial & implemented the suggestions from the NI staff.

I have also had some help from an experienced LabVIEW programmer prior to responding.

The biggest error most make is to use the "Append Array" VI instead of the "Replace Array Subset" VI.

The biggest gain was made by accumulating a series of 1D arrays and then sending the data to the queue as a single 2D array, as opposed to sending one 1D array at a time.
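A minimal sketch of that batching idea, assuming a Python producer and a standard queue standing in for the LabVIEW queue (illustrative only, not Earl's actual code; the batch size is made up):

    import queue
    import numpy as np

    LINE_LEN, BATCH = 1_024, 50
    q = queue.Queue()

    def producer_per_line(lines):
        # Slower pattern: one queue element per 1D line.
        for line in lines:
            q.put(line)

    def producer_batched(lines):
        # Faster pattern: accumulate BATCH lines, enqueue them as one 2D array.
        batch = []
        for line in lines:
            batch.append(line)
            if len(batch) == BATCH:
                q.put(np.vstack(batch))
                batch = []
        if batch:
            q.put(np.vstack(batch))   # flush the remainder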

This was not my original thread; I was just trying to make a point about array handling, & it appears that some are more intent on blaming the code than on reading the entire thread...

I really do appreciate and welcome your constructive comments.

Earl

 

Message 25 of 66
Earl,

I'm glad you got an experienced LV programmer to help you earlier.

You have also received answers here from some very experienced and highly skilled LV programmers.  In fact, I'd wager it would be hard to find more experienced LV programmers than those who have offered their free time to answer your questions, whether from within NI or not.  This was all right before your "tree" post, which I find personally offensive.

Many of us here understand very well what it takes to make LV run smoothly and fast, even with large data sets.  That LV behaves differently to C in a given situation is not new, and I doubt anyone here will deny that.  What we're trying to tell you here is how to get LV to work fast.

The point is that your comment about "trying to make a point" doesn't occur in a vacuum.  That is to say, there are some very experienced programmers trying to make you aware that your observation (that LV doesn't handle large arrays well in certain circumstances) should not become a limiting factor in code execution.

But if you choose not to listen and reply with offensive "tree" comments, then I fear that many of the experts resident here (and they DO deserve to be called experts) will avoid offering up their free time to help you in future.

You are free to agree or disagree, but please remain polite and civil.

Shane.

Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 26 of 66

Dear Shane,

 

I can appreciate your comment & those that know me personally (including some LV experts) know that I am generally polite & civil.

What is frustrating is that I have repeatedly said that NI has looked at this code & although there are some things that may be corrected, it is generally acknowledged that the problem with large arrays slowing the application is real.

It appears that if anyone takes exception to an aspect of LabVIEW, then it is blasphemy. My consistent response is that NI agrees with me, but to no avail.

Unfortunately, there are those that choose to take some of the comments out of context & don't read the entire thread.

If you notice, I was "polite & civil" until I was met with an impolite response containing comments that were taken "out of context".

While I appreciate those experts that take the time to read & provide assistance, I cannot agree with those that would repeatedly filter the facts & become condescending merely because they are experts.

My apologies to those who have taken offense at my comments; they were generally not directed at them, but I wanted to make a point.

I think the best response from this was from altenbach who responded, "altenbach shrugs and goes on to more rewarding issues..............."

Well put & I have thought about this long ago.

 

Best Regards,

 

Earl

 

Message 27 of 66
Earl,

I really regret drawing this thing out any longer than it should be but I just want to finish on the following observation.

As I have stated in my last post, NI are not in possession of all the LV experts in the world.  Indeed, it's happened often that NI's own employees have been bested by other members of this community when it comes to code execution speed (See coding challenges for examples).

Taking this into account, your claimed "blessings" from NI don't carry much weight in this forum, especially since MarkW seems to be of a different opinion than the one you have attributed to him in your posts.

Additionally, anyone can criticize LV on any point they wish.  Indeed, we have threads for suggesting improvements to LV as well as a bug list, so we're actually quite critical of several aspects of LV.  There are, however, certain topics which are raised again and again by users who seem to be just coming to grips with the LV paradigm.  Having been in these forums for several years, I can understand the frustration each time the same question comes up, especially since it has already been answered multiple times.

If you have a specific application where performance is a problem, post example code showing the problem, and if it truly is below what it should be (according to proper LV programming), then I can tell you it won't be long before the "experts" you seem to disagree with here will be banging on NI's door asking for a) an explanation and b) a fix.

I apologise that I can't personally review the code you posted as I only have LV 6.1.

Shane.

PS: A minor point: if you want to make an apology, writing it in large letters makes it seem like you're shouting.  I don't know about rules of etiquette where you come from, but my understanding is that a shouted apology usually comes across badly.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 28 of 66
Earl shrugs and goes on to more rewarding issues...............
Message 29 of 66
Ignoring all the other garbage, I am still somewhat intrigued by the original problem.
 
Let's take two steps back and look at the issue. The problem at hand is very simple:
  1. We get updated parts of an image, one line at a time.
  2. The image should be updated as new lines arrive.
 
Looking at the task, are comments such as:
 
There are techniques to improve the efficiency as you describe by using the "Replace Array Subset" VI instead of the "append array" VI.
 
even related to the problem?
 
NO!
 
Because we don't even need a 2D array to hold the image in addition to the image contained in the FP indicator. 🙂 Why would we need another copy of the image in an array, then convert it to an image and redraw all lines with each display update???
 
The only thing we need is to update the new line using IMAQsetRowCol! Right? This is a 1000x smaller operation!
 
I also questioned the use of a queue in this case. The queue holds yet another copy of the lines in a variable-size array that gets resized on demand. Sometimes it only contains one line, sometimes maybe five. Inefficient! If we still need to use a queue as a buffer, it might be worth implementing it as a customized functional global with a fixed-size 2D array for a handful of lines only. The write operation would replace one line at a time, while the read operation would get all new lines at once. A few shift registers would keep track of the insert points, etc. It won't have all the bells and whistles of the Q, but it also doesn't have all the baggage. The lower loop would need to run with timing, e.g. at 100 ms intervals, and contain a big case structure so everything is skipped if there is no new data.
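A minimal sketch of that fixed-size buffer, written here as a Python class standing in for the functional global (the slot count and line length are made-up values): the writer replaces one row of a preallocated 2D array, and the reader takes every unread row in one call, or nothing if there is no new data.

    import numpy as np

    class LineBuffer:
        # Fixed-size 2D buffer: write one line at a time, read all new lines at once.
        def __init__(self, n_slots=8, line_len=1_024):
            self.buf = np.zeros((n_slots, line_len))
            self.n_slots = n_slots
            self.write_idx = 0   # next slot to overwrite
            self.unread = 0      # lines the reader has not seen yet

        def write_line(self, line):
            self.buf[self.write_idx, :] = line                 # replace, never resize
            self.write_idx = (self.write_idx + 1) % self.n_slots
            self.unread = min(self.unread + 1, self.n_slots)   # oldest line dropped if full

        def read_new_lines(self):
            if self.unread == 0:
                return None                                    # no new data: caller skips the update
            start = (self.write_idx - self.unread) % self.n_slots
            rows = [(start + i) % self.n_slots for i in range(self.unread)]
            self.unread = 0
            return self.buf[rows].copy()                       # all pending lines as one 2D array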
 
Maybe with SetRowCol the VI can be run synchronously without losing lines?
 
Woof!
Message 30 of 66