
slow f labview 2011

Our top-level VI is a small stub; most of our code is dynamically loaded, with the data loading happening several layers down.
Neither the VI that holds the data nor the VI that does the filling through a reference has its front panel open when this happens.
What I meant to say is that if the control that will hold the data is hidden on whatever panel it lives on, its Text property can't be filled through a property node: the write completes with no errors, but the string never gets there.
0 Kudos
Message 21 of 57
(1,624 Views)

A small update on this thread.

I reran the code on a Core 2 Duo machine, Windows XP Pro, 32-bit, LabVIEW 7.1.  The step that's giving me trouble takes about 20 seconds to execute, and it is executed about three times.  In each of those steps the double loop where I think the code is stuck under LV2011 is executed about 7 times with different loop counts that can be in the hundreds.  This machine has Ardence's RTX installed, so it may interfere a bit with the Windows scheduler, but 20 seconds is reasonable.

 

Same exact code on a Core 2 Quad machine, Windows XP Pro, 32-bit, LabVIEW 7.1, also with Ardence's RTX installed (I suspect their code does not work well on quad-core machines, but have no concrete evidence): that step takes about 35 seconds, so I already have almost a 2x hit on this machine, due either to RTX or to LabVIEW threading not working well on quad-core machines (no data to substantiate this, just a suspicion).  One of our standard tests, which takes about 2.5 hours on the Core 2 Duo machine, takes about an hour extra on this machine (same code ...), so something fishy is already going on.

 

I can't run the same exact code on LV2011.  I had to recompile a couple of DLLs to be 64-bit and tweak a few minor things here and there, but when I run on a Core i7 quad machine, Windows 7 Enterprise 64-bit, LabVIEW 2011 64-bit, NO Ardence RTX installed, that step takes .... over 15 minutes. Yes, minutes, or about 35x slower than my Core 2 Duo/XP/32-bit/LV 7.1 setup.  When I fire up Process Explorer and look at the process, there is one thread with a start address of mgcore_11_0.dll!ThMgrChangeSystemPriorityMap+0x40 that eats about 25% of CPU time.  Not sure this has anything to do with it, but all the other threads are at the 0.0x% CPU level.
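(For anyone who wants to cross-check that per-thread observation without Process Explorer, here is a rough sketch using Python's psutil package; the package being installed and the PID below are both assumptions on my part, the PID would come from Task Manager.)

```python
# Rough cross-check of the per-thread CPU observation (psutil assumed
# installed; the PID below is a placeholder for LabVIEW.exe's actual PID).
import psutil

pid = 1234                       # hypothetical PID, read off Task Manager
proc = psutil.Process(pid)
for t in proc.threads():
    # One thread dominating user_time here would match what Process
    # Explorer shows for mgcore_11_0.dll!ThMgrChangeSystemPriorityMap+0x40.
    print(t.id, t.user_time, t.system_time)
```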

 

I then tried to profile a couple of VIs: both the top-level VI that fires up the whole process and the dynamically loaded VI that eventually calls (directly) the VI containing the step that causes the problem.  Unfortunately I think a bug crept into the profiler when they went from 32 to 64 bits, as some of my VIs report times like 1844674407363529.5 milliseconds, which, rounded a little, comes to about 58,494 years of execution time. Probably an integer wraparound problem of some sort. Oh well.
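For what it's worth, 1844674407363529.5 ms times 10^4 lands just below 2^64, so it smells like a slightly negative 64-bit tick difference being read as unsigned and then divided by a 10 MHz tick rate. Both the subtraction order and the tick rate are guesses on my part; a minimal sketch in Python:

```python
# Minimal sketch of the suspected wraparound (assumptions: 64-bit unsigned
# tick counts and a 10 MHz tick source; neither is confirmed).
TICKS_PER_MS = 10_000                 # 10 MHz expressed as ticks per ms
start, end = 100, 42                  # end < start, e.g. reversed operands
wrapped = (end - start) % 2**64       # small negative diff seen as unsigned
print(wrapped / TICKS_PER_MS)         # ~1.8446744e15 ms, tens of millennia
```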

 

I'm running this under the Desktop Execution Trace Toolkit just now, but my steps seem to take about 39 minutes under those circumstances, so I have a while to go before I get a report.

 

Now I need to condense this into a small, self-contained failing example, or NI support won't push the code to the developers.

And I have a lot more work to do ...

 

 

0 Kudos
Message 22 of 57
(1,574 Views)

And either I don't understand how to run the diagnostic trace tool, as I got a blank report and a LabVIEW crash dump, or something is wrong with that tool too...

 

0 Kudos
Message 23 of 57
(1,569 Views)

I may have created a small VI that replicates the problem.  However, to be able to run it I need to save a two-dimensional array of OLE Variant data (as returned from an ADO call to Oracle) in a control, in such a way that I can send the file to NI and/or post it here, since the data comes from our Oracle server that only we can get to (hopefully :-)).  However, if I copy the indicator I read the data into over to another VI, make it a control, and make the current value the default value, that doesn't seem to be saved: closing and reopening the VI and/or LabVIEW returns an empty array.  Flattening to XML and/or to a flat string doesn't seem to work with ActiveX variant data. Is there a simple way of doing this?

0 Kudos
Message 24 of 57
(1,552 Views)

@MoReese wrote:

@Jim K wrote:

Here is a code sample that I put together. The code I put in to measure elapsed time doesn't seem to work: it takes longer for this code to run than what the elapsed time reports. In any event, I must be misunderstanding the amount of data you have, since my sample executes quickly.

 

I don't understand why your data has to be visible on the front panel. My sample code seems to execute much more quickly when the indicator is not visible.


Of course it does.  Your screen doesn't have to refresh all the data.  You can place 2D arrays or clusters on the front panel and it can work just fine, but if you have too much data it can bog down your system.  I have used 2D arrays on the front panel before, but more as a troubleshooting aid, and only on a subVI, never the top-level VI.


And if you must put them on the front panel you have to defer updates (a panel property). However, what I found was that if you turn off updates, write to a large array indicator, then turn updates back on, the indicator may still be updating. You have no way of knowing when the indicator is done updating unless you write to it with a Value property node, which, unlike a terminal, updates synchronously. Wire the error out of the first Defer Panel Updates property node to the error in of the Value property node, then wire the error out of the Value node to the error in of the second Defer Panel Updates node. Don't forget to clear any errors going into the Defer Panel Updates node that has TRUE wired to it: an error there means updates never get turned back on.
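The same defer-then-flush idea exists in most UI toolkits. Here is a rough text-language analog in Python/Tkinter (an analogy only, not LabVIEW's actual mechanism; the pattern is: batch the writes, then force a synchronous redraw before you stop the clock or move on):

```python
# Sketch of the batch-then-flush pattern in Tkinter (analogy only;
# LabVIEW's Defer Panel Updates / Value property sequencing is the real fix).
import time
import tkinter as tk

root = tk.Tk()
box = tk.Text(root)
box.pack()

t0 = time.perf_counter()
rows = "".join(f"row {i}\n" for i in range(50_000))
box.insert("end", rows)       # one bulk write instead of 50,000 small ones
root.update_idletasks()       # block until pending redraw work is processed
print(f"filled and drawn in {time.perf_counter() - t0:.2f} s")
```

Note that stopping the timer only after the flush also avoids the mismatch Jim K saw, where the code visibly ran longer than the elapsed time it reported.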

=====================
LabVIEW 2012


0 Kudos
Message 25 of 57
(1,544 Views)

@instrumento wrote:


And the recommendation is to redesign my application?

As Mrs. Palin said, "You betcha I will."
But I tell you one thing, it won't be in LabVIEW.



I think you will find that LabVIEW is one of the fastest languages, depending on what you want to do. If you are doing signal processing or manipulating large amounts of data it is hard to beat, especially given that multithreading is so easy you don't have to think about it.

 

LabVIEW has gone through quite a few changes under the hood since 7.1. In fact the entire compiler was changed in 2010; for starters, it now uses LLVM, an open-source low-level virtual machine. These changes make fast code even faster, but the compiler has to work harder. That gives the impression that LabVIEW is slower, since compiles take longer, but it is work being done ahead of (run)time.

 

I know that is probably not related to the issues you are having; it is just to illustrate that the language is really evolving. There may be optimizations that cause badly written code to suffer even more.

 

If you switch to another language it is still possible to write bad code there too. Unless, of course, you are just more comfortable with a non-dataflow language; if you find it easier to write code in something other than LabVIEW, then you shouldn't be using LabVIEW. Some people just never get it, and that is fine; different people's minds work in different ways. But give it a chance, like I did.

 

Since you have very large arrays of clusters on your front panel you should definitely turn off front panel updates. The speed difference will be phenomenal. If you don't, a gaming-quality video card may help. But even if you go with something other than LabVIEW you will still have to defer display updates; that is not specific to any language, it has more to do with the OS and hardware. If you have ever done any COM-type building of Excel spreadsheets you will have found that it is extremely slow unless you defer updates until the spreadsheet is ready.
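To make the Excel comparison concrete, here is a rough sketch of the same defer-updates idea driven over COM from Python (it assumes the pywin32 package and an installed Excel; ScreenUpdating is Excel's rough equivalent of Defer Panel Updates):

```python
# Defer-updates pattern against Excel's COM interface (pywin32 assumed).
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = True
sheet = excel.Workbooks.Add().Worksheets(1)

excel.ScreenUpdating = False          # defer redraws while filling cells
try:
    for row in range(1, 1001):
        sheet.Cells(row, 1).Value = row
finally:
    excel.ScreenUpdating = True       # mirror the "always turn updates
                                      # back on" advice above
```

The try/finally plays the same role as clearing errors before the TRUE-wired Defer Panel Updates node: whatever happens, the display gets turned back on.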

 

Another thing you may want to look at is how you build arrays. Hint: don't "build" your arrays; initialize them once and fill them in place. There are a lot of opportunities for optimization when it comes to arrays. Altenbach and Ben, among others, have written a lot on performance, and this is a remarkable thread to give you some ideas for performance when it comes to arrays.
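A quick sketch of the initialize-and-fill advice, in Python for brevity (the grow-by-concatenation loop mimics Build Array's repeated reallocation; Python list semantics differ from LabVIEW arrays, so treat the numbers as illustrative only):

```python
# Grow-in-a-loop versus preallocate-and-fill (illustrative only).
import time

N = 20_000

t0 = time.perf_counter()
grown = []
for i in range(N):
    grown = grown + [i]        # copies the whole list every pass,
                               # like Build Array inside a loop
t_grow = time.perf_counter() - t0

t0 = time.perf_counter()
filled = [0] * N               # Initialize Array once up front...
for i in range(N):
    filled[i] = i              # ...then Replace Array Subset in place
t_fill = time.perf_counter() - t0

print(f"grow: {t_grow:.3f} s   fill: {t_fill:.3f} s")
```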

=====================
LabVIEW 2012


Message 26 of 57
(1,513 Views)

Is it at all possible to compare the 2011 version of your code running on a Windows XP machine? I have seen some strange things with Windows 7; they did some REALLY strange things to their network stack, and I have seen that cause issues. You have changed a lot of variables in your system (PC, OS, LabVIEW version, etc.) and insist on blaming it all on LabVIEW. As others say, the code itself could be problematic, and given the amount of change that has occurred in the LabVIEW compiler you may be getting some strange results with some poorly designed code. Other issues may be causing some of your delays too. Try to isolate the variables and narrow down where the problem is. Definitely try to run the profiler on the code; that can help narrow down where the bottleneck is occurring. It would also help if you could post the code that is giving you the problem. It is near impossible to suggest any changes if we can't see the code.



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 27 of 57
(1,509 Views)

To be able to post a simplified example that still exhibits this problem, I need to understand how to save a two-dimensional array of OLE Variants so I can feed my example.

On the front panel the beginning of the array looks like the attached image, showing type and data for the variant.

0 Kudos
Message 28 of 57
(1,503 Views)

Attaching a zip file.  In this zip file there are two directories, lv2011 and lv2010.

 

The original VI that does all the work, data-loader, was originally written (slightly differently, and under another name) in 7.1, long long ago.

I started with the version of that file opened/saved by LV2011, then hacked at it to simplify things a bit to create this example.

The data-panel and top-level VIs were created in LV2011 just for this example.

There are a few other subVIs in there; they were originally 7.1 files, ported to LV2011.

Once I had the example finished I saved the VIs for a previous version, LV 2010.

And thus I arrived at the two archives/examples above, which should be self-contained.

Unfortunately you guys can't run it under my conditions; more on this below.

I'm just showing the two files so you can tell me what's wrong.

 

If I run this in LV2010 32-bit, Windows XP Pro SP3 32-bit, fully patched, with nothing else running besides the usual MS and NI daemons, the code runs in 2.3 seconds. (This is a subset of the operation I claimed took 20 seconds on this same machine.)

If I run this in LV2011 64-bit, Windows 7 Enterprise 64-bit, fully patched, the code either bombs (i.e. brings down LabVIEW hard) or takes about 12 minutes to run.

 

Now, the data I feed the routine is generated from another set of VIs there's no use including, as they can only be run by us.  It is basically an Oracle query through ActiveX/ADO/ODBC that generates a two-dimensional array of OLE Variants, a partial image of which you can see in my previous post. The structure of each variant matches the structure of the cluster that lives in the array in data-panel.vi. In fact, this other set of routines uses the controls in the cluster to generate the SQL query, so we can use this code to fill data that lives in separate Oracle tables in a generic way.  In a given row some variants are integers, some strings, some doubles. Note that some of the variants can be NULL values, if the corresponding database cell is null; we trap that via the error output of Variant To Data.
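(For anyone trying to fake up similar input data, the same ActiveX/ADO path can be driven from Python via pywin32; a sketch, where the DSN and query are placeholders and GetRows comes back column-major with NULL cells as None:)

```python
# Sketch of the ADO call that produces the 2-D variant array (pywin32
# assumed; the DSN and SQL below are hypothetical placeholders).
import win32com.client

conn = win32com.client.Dispatch("ADODB.Connection")
conn.Open("DSN=my_oracle_dsn")                    # placeholder DSN
rs = win32com.client.Dispatch("ADODB.Recordset")
rs.Open("SELECT * FROM some_table", conn)         # placeholder query
rows = rs.GetRows()   # column-major tuple of tuples; NULL cells -> None
rs.Close()
conn.Close()
```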

 

To generate the numbers above I run my data-gathering routine, then copy the indicator on the panel that receives all the data into the top-level.vi of the examples above, change it to a control, replace the resulting two-dimensional variant array in there, and press run.  I have yet to find a way to dump that data out in a form that I could later read back, so that you and/or NI could run the code.  That data doesn't seem to flatten to string or XML (as the help page explains), so I'm at a loss here.

 

So the problem is with either Windows 7, the quad core, LabVIEW 2011, or the fact that it's a 64-bit version. I may invest the couple of hours it takes to uninstall all the NI 64-bit stuff and load LV2011 32-bit, but there goes a whole morning, at least.

 

 

 

0 Kudos
Message 29 of 57
(1,481 Views)

One more detail I forgot to mention.  I tried replacing the two-dimensional array of OLE Variants with a two-dimensional array of plain variants (all doubles or all strings), modifying the cluster accordingly, and I can't make it run slow, i.e. it runs just as fast as in 2010.  If I plug in the OLE variants, the code goes catatonic under LV2011.

 

One interesting thing: because of my inability to save/restore the variant array, even though I close the data-gathering set of VIs, LabVIEW is not in a pristine state once I've run through the Oracle call.  I've seen cases where, after running this and closing all those VIs, LabVIEW sits there spinning, using about 25% of the CPU.  I've seen this in LV7, 2010, and 2011, but only in 2011 do I get the slow-running code, even when there's a baseline busy something.

 

0 Kudos
Message 30 of 57
(1,480 Views)