LabVIEW


sudden slow down in data transfer

I'm reading data from the hard disk on a Target PXI system (8106 RT) and sending it via Ethernet to a Host PC, which writes it to its hard disk. I read a chunk of data on the Target and send it using the TCP Write function. The Host uses the TCP Read function to get the data and writes it to its hard disk. Data is transferred over Gigabit Ethernet. A total of 9 files ranging in size from 700 MB to 2 GB are being transferred. The process moves along at an adequate speed until it suddenly slows to slower than a snail's pace in the middle of a file. By using probes (see enclosures) I've determined it takes approximately 17 seconds between when the data is read on the PXI and when it is written on the Host. However, an iteration of the loop on the Target has increased to over 3 minutes. CPU usage on both the Target and the Host is minimal, less than 1%. Memory usage on the Host is also small. Any suggestions on how to diagnose what has caused this sudden slowdown?
Message 1 of 14
I've never messed with a PXI system, but could you use FTP commands to get the file onto the other system? Is the receiving side doing any processing of the data before saving it to a file?
Message 2 of 14

Yes, I could FTP it. However, the reason I am doing it this way is that I potentially have multiple files on the PXI side that are being combined into a single file on the PC side. The PXI has a file size limitation of 1 GByte and I could easily be collecting 2-4 GBytes.

Message 3 of 14

Hello faustina,

 

After reading your post, I have a few more questions to ask you about this to help you solve this issue. Which version of LabVIEW Real-Time are you using?

 

Also, from the pictures I have seen, it looks like you are sending the entire file over in one chunk of 700 MB-2 GB. This isn't the recommended way to send that amount of data across TCP/IP, and it can cause the problem you are seeing. To fix this issue, it is best to divide the file into smaller chunks to send to the host. This will decrease the amount of memory and CPU usage dedicated to this task.
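To illustrate the idea in text (Python rather than LabVIEW, since G code can't be pasted into a post): the sender streams the file in fixed-size chunks instead of loading it whole. The function name and chunk size here are illustrative assumptions, not values from this thread.

```python
def send_file_in_chunks(sock, path, chunk_size=64 * 1024):
    """Stream a file over an open TCP connection in fixed-size chunks.

    Sending chunk-by-chunk keeps memory use flat, instead of holding
    a 700 MB - 2 GB file in one buffer. chunk_size is a guess; tune it.
    """
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            sock.sendall(chunk)  # blocks until this chunk is handed to the OS
            total += len(chunk)
    return total
```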

 

The other concern I have with the TCP/IP connection is its location in your code. Is it in the time-critical loop on the target? If it is, it will need to be removed from that loop and placed in a separate loop by itself. The reason is that a TCP/IP connection is nondeterministic, because it has to communicate over the network and with a Windows system.
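The "separate loop" advice is essentially a producer/consumer pattern: the time-critical loop hands data to a queue, and a normal-priority loop drains the queue and does the TCP writes, so network jitter never stalls the acquisition side. A rough Python sketch of the same structure (names and chunk size are my own, not from the thread):

```python
def read_loop(path, q, chunk_size=65536):
    """Producer: read from disk and hand chunks to a bounded queue."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            q.put(chunk)          # blocks if the consumer falls behind
    q.put(None)                   # sentinel: no more data

def send_loop(q, send):
    """Consumer: drain the queue and push chunks out (e.g. sock.sendall)."""
    sent = 0
    while (chunk := q.get()) is not None:
        send(chunk)
        sent += len(chunk)
    return sent
```

In LabVIEW terms, the queue plays the role of an RT FIFO or queue between the two loops.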

 

I hope this information helps. If you are still running into problems with execution time, I would suggest the Real-Time Execution Trace Toolkit. It will let you monitor the execution time, CPU usage, memory use, and much more for a section of code.


Jim St
National Instruments
RF Product Support Engineer
Message 4 of 14

Two thoughts come to mind.

 

1) What is the hard drive status? A fragmented, almost-full drive can slow down file writing after it fills the big gaps and has to start writing to scattered sectors on the disk.

 

2) What is the hardware connection between the machines? I have read of switches that throttle I/O after you go over some limit.

 

That's all I have to offer.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 5 of 14

I am running LabVIEW RT 9.0. The Target code is not in a time-critical loop. Data is being transferred in chunks of 524288 bytes (see probe location 1). The hard disk on the PC was fragmented, so I ran a defragmentation application to clean it up. The only hardware connection between the PXI and the PC is a 1 Gbit Ethernet line.

 

I have since tried to copy all the files from the PXI to the PC via FTP in MAX. I was hoping to combine the appropriate files into a single file on the PC. However, when I FTP, after every file download I get the message "connection time out. closing FTP client". My file sizes are 1073741824 and 1023410176 bytes. Is this because it is taking so long to download one file?

Message 6 of 14

Hello Faustina,

 

After looking at this code a little further, I have a few more questions. In the code you are reading the data from a file. How are you saving the data? Is that done in the time-critical loop? If it is, then since the file resource is shared by both the time-critical loop and the lower-priority loop, it will introduce jitter into your system and cause these delays.

 

As for losing the connection during the FTP transfer, this could be caused by a slow network connection, or the RT target may be closing the connection to conserve CPU usage. It is hard to diagnose these problems because each person's network is slightly different.
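For what it's worth, a scripted FTP pull can fetch each file in small blocks, which makes steady progress and makes a stalled transfer easier to spot. A Python sketch using the standard `ftplib` (host, file names, and block size are placeholders, not values from this thread):

```python
import ftplib

class ChunkWriter:
    """retrbinary callback: write each block to disk and count bytes."""
    def __init__(self, fileobj):
        self.fileobj = fileobj
        self.bytes_written = 0

    def __call__(self, block):
        self.fileobj.write(block)
        self.bytes_written += len(block)

def ftp_download(host, remote_name, local_path, blocksize=32768):
    """Download one file in blocksize chunks over FTP."""
    with ftplib.FTP(host) as ftp, open(local_path, "wb") as out:
        ftp.login()  # anonymous by default; adjust for the target's credentials
        writer = ChunkWriter(out)
        ftp.retrbinary(f"RETR {remote_name}", writer, blocksize=blocksize)
        return writer.bytes_written
```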

 


Jim St
National Instruments
RF Product Support Engineer
Message 7 of 14

Jim_S wrote:

...

 

As for losing the connection during the FTP transfer, this could be caused by a slow network connection, or the RT target may be closing the connection to conserve CPU usage. It is hard to diagnose these problems because each person's network is slightly different.

 


The Real-Time Execution Trace Toolkit?

 

Ben

Message 8 of 14
I would recommend using smaller chunk sizes for your data transfers, something in the 10K to 50K range. I have done quite a bit of profiling for sending large amounts of data, and I have found that it is better to send lots of smaller chunks rather than fewer large chunks. Also make sure that you are not automatically building up large buffers in your application if you are using shift registers.

If you are reading from a file, read the file in chunks and send the chunks; don't read the whole file in one read. In one experiment I ran, a single read of a large file (2 MB or so) took something like 10 minutes, while reading the whole thing in 10K chunks took only a couple hundred milliseconds.

The same applies to writing very large chunks using TCP Write. You can easily get a timeout if you write 1 MB as a single write, because the receiver has not consumed all of the data within the timeout period. However, multiple small writes will reset the timeout for each chunk written.
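The receiving side has a matching rule: a TCP read may return fewer bytes than requested, so loop until the expected count arrives rather than asking for everything at once. A Python sketch of that receive loop (assuming the receiver knows how many bytes to expect, e.g. from a length header; the names are mine):

```python
def recv_exact(sock, nbytes, chunk_size=65536):
    """Receive exactly nbytes from a TCP socket, one chunk at a time."""
    pieces = []
    remaining = nbytes
    while remaining:
        piece = sock.recv(min(chunk_size, remaining))
        if not piece:
            raise ConnectionError("peer closed before all data arrived")
        pieces.append(piece)
        remaining -= len(piece)
    return b"".join(pieces)
```

In practice each `piece` could be written straight to disk instead of accumulated, keeping the receiver's memory footprint small.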


Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 9 of 14

Changing the chunk size does not resolve the problem. I've reduced it to 4096 bytes and can still have the 'door' slammed shut. In the enclosed files I can increase my network utilization to 20-23% (especially if I don't write to disk). Even when I include writing to disk, network utilization fluctuates rapidly from 6-18%. But I can't figure out what suddenly reduces the network utilization to below 1%, effectively slamming the 'door.' There are no errors from either of the TCP Read/Write function calls. This occurs even if I disconnect from the Ethernet and connect directly to the PXI.

Message 10 of 14