Distributed computing

I reduced the number from 50000 to 8000, which is as low as the program allowed it to be. However, I still was not able to retrieve data in real time. It still takes 5 seconds for the data to appear on the waveform chart, and the problem is not in the coding. That's the reason I would like to use distributed computing to speed up my program. Does any of you folks know how to do that??
Message 11 of 24

LabVIEW allows you to choose which cores a task runs on, so you could split your data-intensive tasks up, process them in different parallel loops, and then recombine the results. I don't think it has any inherent facilities to handle a cluster of computers, though. You can always get some Gigabit NICs and write your own code to distribute the tasks however you want to.
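In text form, the split/process/recombine pattern looks roughly like this (Python as a stand-in here, since a LabVIEW diagram can't be pasted inline; `process_chunk`, the doubling operation, and the worker count are all arbitrary placeholders, not anything from this thread):

```python
# Split a large dataset into chunks, process the chunks in parallel
# "loops" (worker threads here), then recombine the results in order.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # placeholder for the data-intensive work done in each parallel loop
    return [x * 2 for x in chunk]

def parallel_process(data, n_workers=4):
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # pool.map preserves chunk order, so recombining is a flatten
        results = list(pool.map(process_chunk, chunks))
    return [x for chunk in results for x in chunk]
```

The key point is the same as with parallel loops on the block diagram: the split and recombine steps add overhead, so this only pays off when the per-chunk work dominates.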

 

I'm using LV 8.5 so I can't view your code, but I would suggest you take the advice from Altenbach (et al.) and look into optimizing the code you have running on the hardware you have. From your problem description, I suspect your bottleneck is not due to a lack of computer processing power.

 

EDIT: BTW, are you using a standard GPIB (1 MB/s) or High-Speed (8 MB/s) card?

Message Edited by NIquist on 05-26-2010 02:34 PM
LabVIEW Pro Dev & Measurement Studio Pro (VS Pro) 2019
Message 12 of 24

First, from the Block Diagram go to Edit >> Clean Up Diagram.  It hurts my eyes to look at your diagram.

 

Then run this modification of your code and post back with the value you get for Read Time (ms). We cannot test this because we do not have your instrument. The result will be the amount of time it takes the instrument to send 10000 bytes. Try it several times. Change the 10000 figure to a number large enough to get all the data from one sweep. Let us know what the numbers are and we may be able to help you speed things up, if it is possible to do so.
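The measurement that VI performs can be sketched in text as: timestamp, read N bytes, timestamp again, report elapsed time and effective throughput. A rough Python equivalent (the instrument read is faked with a stand-in function, since, as noted, we don't have the OSA; real code would call the GPIB driver there):

```python
# Time a single read from an instrument and report elapsed ms plus
# effective throughput. bytes/ms is numerically the same as kB/s.
import time

def timed_read(read_fn, n_bytes):
    t0 = time.perf_counter()
    data = read_fn(n_bytes)
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    rate_kb_per_s = len(data) / max(elapsed_ms, 1e-9)
    return data, elapsed_ms, rate_kb_per_s

def fake_instrument_read(n_bytes):
    # stand-in for the GPIB read call; returns n_bytes of dummy data
    return b"\x00" * n_bytes
```

Comparing the measured rate against the bus's rated speed (1 MB/s standard GPIB, 8 MB/s high-speed) tells you whether the instrument or the bus is the limit.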

 

Lynn 

Message 13 of 24

Don't forget about the VI profiler in LabVIEW if you determine that the bottleneck is actually in the code itself. It will tell you which VI is taking the most time to execute and/or hogging memory.

 

Tools > Profile > Performance and Memory. 

LabVIEW Pro Dev & Measurement Studio Pro (VS Pro) 2019
Message 14 of 24

Hi Folks,

Thanks for the advice. I used the modified code johnsold gave, but the program still takes 5 seconds to run. Too bad. So I don't really know what to do at this point.

Lee

Message 15 of 24

Lee,

 

When you run my VI, what value does it show in the Read Time (ms) indicator?

 

Lynn 

Message 16 of 24

Your VI was not functioning the same as mine. Yours was set to trigger the OSA over and over but failed to retrieve data from the OSA.

Lee

Message 17 of 24

What mode is the OSA in? How many frequencies are you covering? What's the sweep time / dwell time, etc.? Sounds like 90% of that time is the acquisition in the analyser itself, not the LabVIEW programming side of things. I used to work with some old network analysers for characterisation which could take all day. 90% of that time was the analyser sweeping through frequencies across power and temps (particularly as it was an analogue sweep generator within the network analyser).

 

Craig

LabVIEW 2012
Message 18 of 24

Lee,

 

My code does not do anything "over and over." It has no loops. Its intention is to trigger the OSA once and to read some data (up to 10000 bytes as posted) once. This will tell us how long it takes the instrument to transmit 10000 bytes.

 

Lynn 

Message 19 of 24

Lynn,

It's about 400 ms. The reason it was repeating is the input command "RPT"; I changed it to "SGL", which is designed to trigger only once. Sorry for the confusion.
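As a back-of-envelope check on that figure (the helper below is hypothetical, just arithmetic, not code from the thread): 10000 bytes in 400 ms is only about 25 kB/s, which is far below even standard GPIB's roughly 1 MB/s, so the transfer rate here is set by the instrument, not the bus or the LabVIEW code.

```python
# Effective transfer rate from a byte count and elapsed time.
# bytes/ms is numerically the same as kB/s.
def throughput_kb_per_s(n_bytes, elapsed_ms):
    return n_bytes / elapsed_ms

rate = throughput_kb_per_s(10000, 400)  # 25.0 kB/s
```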

Lee

Message 20 of 24