Trim Whitespace.vi is not re-entrant. Why?


@sth wrote:


Yes, but implementing a good testing program for each small issue is a time-consuming project and takes away from the idea of getting things done.  Actually, more information about the LabVIEW internals makes things more transparent, and testing can be just a time-consuming reverse-engineering exercise.


You’re not testing the VI; you are testing your own understanding.  If things don’t test the way you understand them, that will prompt you to identify any misunderstandings you may have, like about “substrings”, for example.  THEN you can apply your improved understanding to reduce the need for testing later on.  It’s a long iterative process to develop an understanding of how the LabVIEW compiler operates.

 

0 Kudos
Message 51 of 95
(1,352 Views)

So far the best performance I get is when I inline the trim white space primitives directly into the VI, like so:

 

[Attachment: Snip4.png]
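For readers without LabVIEW open, the behavior being inlined here can be sketched textually. This is a hypothetical Python analogue of Trim Whitespace.vi; the character set and the both/start/end location input mirror the shipping VI, but the function name is mine:

```python
# Hypothetical Python analogue of Trim Whitespace.vi: remove space, tab,
# carriage return, and line feed from one or both ends of a string.
WHITESPACE = " \t\r\n"

def trim_whitespace(s: str, location: str = "both") -> str:
    """location mirrors the VI's enum input: 'both', 'start', or 'end'."""
    if location in ("both", "start"):
        s = s.lstrip(WHITESPACE)
    if location in ("both", "end"):
        s = s.rstrip(WHITESPACE)
    return s

print(trim_whitespace("  \thello \r\n"))  # -> hello
```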

 

0 Kudos
Message 52 of 95
(1,347 Views)

mcduff wrote: the

So far the best performance I get is when I inline the trim white space primitives directly into the VI


Again, the purported reason not to make it reentrant is a 9-year-old comment about MEMORY performance, not CPU.  I haven't seen anyone benchmark MEMORY.  I know the buffer-copy dots are only "90% reliable", but even at 90% it is a pretty strong reason not to use either the OpenG or the built-in versions.

 

Running the profiler with memory statistics enabled and looking at the maximum memory usage might give the data needed.
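In a textual language the same question ("how much does a single trim call allocate?") can be asked directly. A minimal Python sketch using the standard tracemalloc module, offered only as an analogy to per-call memory profiling in LabVIEW; str.strip stands in for the trim VI:

```python
import tracemalloc

def trim(s: str) -> str:
    # str.strip stands in for the trim VI under test.
    return s.strip(" \t\r\n")

sample = " " * 1_000_000 + "payload" + " " * 1_000_000

tracemalloc.start()
result = trim(sample)  # strip allocates a new string holding only the core
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"peak allocation during trim: {peak} bytes")
```

Note that the trimmed copy is tiny even though the input is ~2 MB: the cost is in scanning the ends, not in the result buffer.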

LabVIEW Champion | LabVIEW Channel Wires

0 Kudos
Message 53 of 95
(1,341 Views)

Memory considerations will be small for the strings we are referring to.

 

My understanding of AQ's original post is that the Trim Whitespace function is ubiquitous across a lot of internal functions, and that the memory cost of each of those individual callers having its own data space is a worse trade-off than having the function called serially.

 

Is this still true for typical 2017 computers? Is it true for more limited systems like cRIO, or embedded Windows? I do not know. I do not think the profiler will give you the answer concerning memory, though.

 

 

0 Kudos
Message 54 of 95
(1,333 Views)
  • Launch the Task Manager.
  • Launch LabVIEW; record memory and elapsed time.
  • Inline Trim Whitespace.vi and repeat.
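The steps above amount to a before/after comparison of process memory. A rough Python analogue of the same procedure, using the standard resource module (Unix-only); the 4 kB-per-clone figure is a stand-in from later in the thread, and on many runs the measured delta may well be zero, which previews the "within the noise" objection raised below:

```python
import resource

def rss() -> int:
    # Peak resident set size so far; kilobytes on Linux, bytes on macOS.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = rss()
# Stand-in for 150 preallocated clones at an assumed 4 kB of data space each.
clones = ["x" * 4096 for _ in range(150)]
after = rss()
# ~600 kB of new data may not move the peak-RSS needle at all.
print(f"delta: {after - before}")
```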

"Should be" isn't "Is" -Jay
0 Kudos
Message 55 of 95
(1,323 Views)

@JÞB wrote:
  • Launch the Task Manager.
  • Launch LabVIEW; record memory and elapsed time.
  • Inline Trim Whitespace.vi and repeat.

Assume I do that (which is not on the table, for the obvious reasons posted earlier): what does the result prove?  What are the criteria for a function being called re-entrantly vs. single-threaded?  That the memory footprint is low?  How low?

 

Using Task Manager (or Activity Monitor), will it even show a few-kB invocation of Trim Whitespace against a background of a couple of hundred MB for the application itself?

 

Back to the elapsed time.  It is irrelevant!  Whatever the time/iteration for the function, it will take better advantage of multiple CPU cores as a re-entrant VI than as a single-threaded one.  One of the reasons LabVIEW is an important language is the ability to program in 2D and take advantage of parallelism.  Has NI crippled that by making such VIs single-threaded?
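The reentrancy argument here can be modeled in any threaded language: a non-reentrant callee behaves like a function guarded by one global lock, so concurrent callers serialize. A Python sketch of that contrast (the per-call work is simulated with sleep, since the point is scheduling, not raw speed; the names are mine):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

SERIAL_LOCK = threading.Lock()   # models the single shared data space

def work_reentrant() -> None:
    time.sleep(0.05)             # simulated per-call work; callers overlap

def work_non_reentrant() -> None:
    with SERIAL_LOCK:            # callers queue up, one at a time
        time.sleep(0.05)

def elapsed(fn, calls: int = 8) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=calls) as pool:
        for future in [pool.submit(fn) for _ in range(calls)]:
            future.result()
    return time.perf_counter() - start

t_reentrant = elapsed(work_reentrant)      # roughly one 0.05 s quantum
t_serial = elapsed(work_non_reentrant)     # roughly 8 x 0.05 s, serialized
print(f"reentrant: {t_reentrant:.2f} s, serialized: {t_serial:.2f} s")
```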

 

So even in AQ's nightmare scenario, will the 150+ simultaneous invocations bring LV 2016 to its knees?  Is the vi.lib version just an archaic artifact from LV 8.5?

 

Actually, I don't believe this is the true reason for the performance problems I am seeing, which might be more related to UI thread contention.

LabVIEW Champion | LabVIEW Channel Wires

0 Kudos
Message 56 of 95
(1,306 Views)

BTW, the OGTK version (and the one I edited from it) are about two orders of magnitude faster than the built-in version!  (This was on 1000 random strings of up to 2 Mchar, with up to 1 Mchar of whitespace on each end.)
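For anyone wanting to duplicate the shape of this benchmark in a textual language, here is a scaled-down Python sketch. The original test used 1000 strings of up to 2 Mchar; the sizes here are reduced so it runs quickly, and str.strip stands in for whichever trim implementation is under test:

```python
import random
import string
import timeit

def make_padded(core_max: int = 2_000, pad_max: int = 1_000) -> str:
    # Scaled-down analogue of the test data: a random core with random
    # whitespace padding on each end (the original used up to 2 Mchar).
    core = "".join(random.choices(string.ascii_letters, k=random.randint(1, core_max)))
    return " " * random.randint(0, pad_max) + core + " " * random.randint(0, pad_max)

DATA = [make_padded() for _ in range(100)]

def trim_all() -> list:
    # str.strip stands in for whichever trim VI is under test.
    return [s.strip(" \t\r\n") for s in DATA]

t = timeit.timeit(trim_all, number=10)
print(f"10 passes over {len(DATA)} strings: {t:.4f} s")
```

Swapping a different trim function into `trim_all` and re-running on the same `DATA` is the textual equivalent of comparing the OGTK and vi.lib versions on identical inputs.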

LabVIEW Champion | LabVIEW Channel Wires

Message 57 of 95
(1,304 Views)

What settings are you using for your VI? I assume it is the one you posted earlier. Is it inlined, subroutine priority, etc.?

 

When I place the Match Pattern primitive directly in a For Loop, it still appears faster than the other solutions posted here; that is not true for the native Trim Whitespace.
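Match Pattern is LabVIEW's regular-expression-style string primitive; the equivalent idea in Python is a single compiled regex that captures the core between optional whitespace runs at each end. A hypothetical sketch (this pattern is mine, not necessarily the one used in the posted VI):

```python
import re

# Hypothetical analogue of a Match Pattern based trim: one compiled regex
# captures the core between optional runs of whitespace at each end.
TRIM_RE = re.compile(r"^[ \t\r\n]*(.*?)[ \t\r\n]*$", re.DOTALL)

def trim_via_match(s: str) -> str:
    return TRIM_RE.match(s).group(1)

print(trim_via_match("  \thello world \n"))  # -> hello world
```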

 

I believe AQ said there were 150+ copies on the Getting Started window alone; that does not include all of LabVIEW.

 

Assume the native Trim Whitespace takes 4 kB of memory: when you make it re-entrant, you can tell how many copies are made by looking at the Task Manager, a before-and-after comparison. I think that was the gist of Jeff's suggestion.
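The arithmetic behind that suggestion is worth making explicit. Using the thread's own numbers (4 kB per clone is the assumption above, 150 clones comes from AQ's count; the 300 MB process size is my rough stand-in for "a couple of hundred MB"):

```python
clone_bytes = 4 * 1024   # assumed data space per clone (from this post)
clone_count = 150        # AQ's count for the Getting Started window
total = clone_bytes * clone_count
print(f"{total} bytes = {total / 1024:.0f} kB")          # -> 614400 bytes = 600 kB

app_bytes = 300 * 2**20  # assumed ~300 MB LabVIEW process footprint
print(f"{100 * total / app_bytes:.2f}% of the process")  # -> 0.20% of the process
```

A fraction of a percent of the process footprint is exactly the kind of delta the Task Manager approach would struggle to resolve.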

0 Kudos
Message 58 of 95
(1,298 Views)

They are all subroutine priority and preallocated clones.

For memory considerations I do not usually inline, since inlining adds to the instruction space of the caller.

Match Pattern should be fast; I hope it is a highly optimized primitive.

100 or 150 is the same number (physicist counting...) 🙂. And in terms of time, I am looking at orders of magnitude, not factors of 2.

Maybe the Task Manager and LabVIEW operate differently under Windows than under macOS. But the memory footprint of LabVIEW changes even when the system is idle, with running threads checking menus, etc. I was saying that looking for a 297654321-byte memory footprint changing to 297658321 bytes is within the noise of LabVIEW's own operation.

Lastly, spawning 150 copies of the VI shouldn't be a huge issue if they are really executing simultaneously. If they are preallocated clones, then the memory is already reserved in the caller. If they are shared clones, then at some point rational garbage collection should run on the clone pool manager.

LabVIEW Champion | LabVIEW Channel Wires

Message 59 of 95
(1,286 Views)

Scot

 

I'd like to see the VI.  I don't doubt your statement.  But science requires duplication of results.


"Should be" isn't "Is" -Jay
0 Kudos
Message 60 of 95
(1,271 Views)