High Performance library crash in OMP?

Yes, I use parallel FOR loops all the time without any problems. However, I typically don't use parallel FOR loops and VIs from the High Performance library in the same VI.

0 Kudos
Message 11 of 18
(1,855 Views)

Hi altenbach,

I found some good starting documentation here: http://www.ni.com/white-paper/14113/en/

Two interesting things I've found:
1. "It is not recommended to execute functions from the Multicore Analysis and Sparse Matrix library in parallel with each other." (Taken from the white paper I linked)
2. "By default, the Multicore Analysis and Sparse Matrix VIs use the number of physical cores as the maximum number of threads unless you specify a smaller number." (From the help for the "Set Number of Threads.vi" in the toolkit)

It might be interesting to see whether the problem still happens after setting the number of threads to a smaller value with that subVI. The white paper includes a basic example showing how to set the thread count with that function.
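
For anyone more used to OpenMP in a text-based language, here is a minimal C sketch of the idea behind "Set Number of Threads.vi": cap the thread pool the runtime is allowed to use before the heavy calls run. This is only an analogy under the assumption that the MASM VIs sit on top of an OpenMP/MKL thread pool; it is not the toolkit's actual code.

/* Minimal OpenMP sketch: cap the worker-thread pool before doing work,
 * roughly what "Set Number of Threads.vi" does for the MASM library
 * (assumption: MASM uses an OpenMP/MKL thread pool internally). */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* Default: the runtime may use up to one thread per core. */
    printf("max threads before cap: %d\n", omp_get_max_threads());

    omp_set_num_threads(4);          /* cap the pool at 4 threads */

    #pragma omp parallel
    {
        #pragma omp single
        printf("threads actually used: %d\n", omp_get_num_threads());
    }
    return 0;
}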

Charlie J.
National Instruments
0 Kudos
Message 12 of 18
(1,845 Views)

These are reasonable guidelines and I always follow them. Similarly, I only ever parallelize the outermost FOR loop in a stack, etc.

 

Here we never run two parallel things in parallel; the sequence structure isolates them just fine.

 

Yes, I could limit the number of parallel threads, but I would expect LabVIEW to never allocate more than it can swallow, even using the default settings, so there is probably a bug somewhere. Do you think you could generate a CAR?

 

I'll do some testing later.

0 Kudos
Message 13 of 18
(1,837 Views)

OK, a few more datapoints:

 

  • If I surround the entire code with a while loop and run normally, it still crashes.
  • If I read the number of threads for the high performance toolkit it says 16.
  • It crashes if I set the number of threads to 10 or higher. (It seems stable with 9 threads, but of course I did not test forever.)
  • All the above was with 32 parallel instances configured for the FOR loop. If I change the FOR loop to 16 parallel instances, the code also no longer crashes, even with the highperf at 16 threads (see the sketch after this list).
  • If I disable parallelism for the FOR loop, it no longer crashes, even with 16 threads.
  • If I set the highperf threads above 16, I still only get 16.
  • If I set the parallel FOR loop at 64 and the highperf threads at 9, I get an out-of-memory error (once or twice, as in the attached image), followed by a more detailed LabVIEW memory-full message. If I click OK, it highlights the outer FOR loop that generates the data.
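
Those numbers look like plain thread oversubscription: 32 parallel loop instances, each potentially starting up to 16 MASM/OpenMP threads, is on the order of 512 threads on a 16-logical-core machine. The C sketch below illustrates the general OpenMP behavior with nested parallel regions; it is not the LabVIEW code from this thread, and the 32/16 counts are just the ones reported above.

/* Sketch: nested OpenMP parallel regions multiply the thread count.
 * Outer region ~ the parallel FOR loop (32 instances); inner region ~ a
 * MASM call that spawns its own 16 threads. Worst case: 32 * 16 = 512. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    omp_set_max_active_levels(2);            /* allow nested parallelism */

    #pragma omp parallel num_threads(32)     /* outer "parallel FOR loop" */
    {
        #pragma omp parallel num_threads(16) /* inner "MASM" region */
        {
            #pragma omp single
            printf("outer thread %d started %d inner threads\n",
                   omp_get_ancestor_thread_num(1), omp_get_num_threads());
        }
    }
    return 0;
}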

 

0 Kudos
Message 14 of 18
(1,813 Views)

Hi altenbach,

I'm working on filing a CAR. Could you tell me what version of LabVIEW and Windows you are using?

Charlie J.
National Instruments
0 Kudos
Message 15 of 18
(1,782 Views)

Thanks. This is LabVIEW 2015 (with all patches) and Windows 10 pro (with all patches).

0 Kudos
Message 16 of 18
(1,773 Views)

Hi altenbach,

I've submitted the CAR; the request number is #561161.

Charlie J.
National Instruments
Message 17 of 18
(1,750 Views)

Did this ever get resolved, Christian?

I'm getting an OMP error when I run my code (which otherwise works fine) on a system with more than 64 logical processors, but when the 80-logical-processor system is limited (at boot) to only 64, I don't have an issue.

[Attached image James_W_0-1663318177233.png: screenshot of the OMP error]

 

I've got big data, matrix functions, lots of parallelisation and MASM functions (running in parallel), but I always limit MASM to 1 thread for Linear Algebra and 1 thread for Transformation.
(Using MASM AxB.vi too.)
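
The 64-logical-processor boundary is where Windows processor groups start to matter: a process, and many OpenMP runtimes, only sees one group of at most 64 logical processors unless it is explicitly group-aware, which could explain why booting the 80-processor machine with only 64 makes the error disappear. As a hedged illustration (standard Win32 calls, nothing specific to MASM), this C sketch shows what the process can actually see:

/* Sketch: inspect processor groups on a Windows machine with more than
 * 64 logical processors (for example the 80-processor system above).
 * GetActiveProcessorGroupCount/GetActiveProcessorCount are Win32 calls
 * available since Windows 7. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    WORD  groups = GetActiveProcessorGroupCount();
    DWORD total  = GetActiveProcessorCount(ALL_PROCESSOR_GROUPS);

    printf("processor groups: %u\n", (unsigned)groups);
    printf("logical processors (all groups): %lu\n", (unsigned long)total);

    /* Each group holds at most 64 logical processors; a non-group-aware
     * process or runtime is confined to a single group. */
    for (WORD g = 0; g < groups; ++g)
        printf("  group %u: %lu logical processors\n",
               (unsigned)g, (unsigned long)GetActiveProcessorCount(g));
    return 0;
}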

I'm thinking my issue in LV 2018 might be related.

Cheers
James

CLD; LabVIEW since 8.0; currently have LabVIEW 2015 SP1, 2018 SP1 & 2020 installed
0 Kudos
Message 18 of 18
(971 Views)