−314320 LabVIEW could not create the remote endpoint because the computer that hosts the endpoint ran out of memory.

Solved!

I am streaming data continuously from FPGA to RT to Windows.

 

1) FPGA Target-to-Host FIFO size: 32,000 elements.

2) Create Network Stream Writer Endpoint on the RT target: writer buffer size = 700,000; timeout = -1 ms (wait indefinitely to establish a connection with the host).

3) Write Multiple Elements to Stream: timeout = 100 ms.

4) Create Network Stream Reader Endpoint: reader buffer size = 1,000,000; timeout = 10,000 ms (time out if a connection is not established within 10 s).
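
Since LabVIEW code is graphical, here is a rough text sketch of the same pipeline (Python, with invented names; this is not a real NI API) just to show where each buffer and timeout sits:

```python
import queue

# Illustrative stand-ins for the three buffers in steps 1-4 above.
# All names are hypothetical; nothing here is an actual NI API.
dma_fifo      = queue.Queue(maxsize=32_000)     # 1) FPGA Target-to-Host FIFO
writer_buffer = queue.Queue(maxsize=700_000)    # 2) writer endpoint on the RT target
reader_buffer = queue.Queue(maxsize=1_000_000)  # 4) reader endpoint on the Windows host

def rt_loop_iteration():
    """One iteration of the RT loop: drain the DMA FIFO into the writer."""
    sample = dma_fifo.get()                 # read from the FPGA FIFO
    writer_buffer.put(sample, timeout=0.1)  # 3) write to stream, 100 ms timeout

def network_transfer():
    """Stand-in for the TCP transfer the network stream performs internally:
    elements move from the RT writer buffer to the Windows reader buffer."""
    reader_buffer.put(writer_buffer.get())
```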

 

I received the error given in the subject of this message. When I changed the buffer size inputs in steps 2 and 4 above to 100 elements, I started seeing data in the Windows VI. The original values in steps 2 and 4 had worked in the past, but now I am trying to deploy the same LabVIEW application on a new desktop PC with specifications similar to those of the PC already running this application. Any ideas why I am getting this error with the original values?

 

Kind Regards

Austin

 

 

 

Message 1 of 5
Solution
Accepted by topic author K.Waris

Hi Austin,

 

Creating a buffer larger than 9 MB can sometimes cause problems; that could have been the cause of the original issue. When moving to your new PC, which values did you keep for the writer buffer size and the reader endpoint buffer size?

Which version of LabVIEW are you using? I will check whether there are any known issues related to the error you are getting.

 

Regards,

Ben B.

Applications Engineer
National Instruments UK & Ireland

"I've looked into the reset button, the science is impossible!"
Message 2 of 5

Hello Benjamin.

I am now using 200,000 instead of 700,000 for the buffer size on the RT target, and 1,000,000 in the Windows VI, and this is working fine.

I really don't see why setting a larger buffer size on the RT target should cause problems. Does it mean LabVIEW then tries to allocate that amount of memory in Windows? Can we get some accurate detail on what exactly happens in the background?

I experimented with different values for the buffer size on the RT target; my goal was to acquire at 10 kHz from the FPGA target. I found the following:

If the FPGA loop period is 1000 us (i.e. 1 kHz), I need a buffer size greater than 2000 on the RT target. The period of the RT loop does not affect the acquisition rate at all: using the 1 MHz clock, I set the period to 1000 us, then 500 us, then 100 us, and the acquisition rate in the Windows VI did not change. As I mentioned in my earlier message, I am using a network stream to stream from RT to Windows. I would expect the RT while loop to need to run at twice the FPGA rate to acquire data with no loss, by analogy with the Nyquist criterion. Perhaps the DMA FIFO provides tight synchronization in this case, but then why bother with the while loops that are usually used on RT?
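
A back-of-the-envelope way to reason about the buffer-size observation above (my sketch, not NI's documented behaviour): the FPGA and the RT loop form a producer-consumer pair, so a FIFO only overflows when the writer outpaces the reader for longer than the buffer can absorb:

```python
def time_to_overflow_s(depth, produce_hz, consume_hz):
    """Seconds until a FIFO of `depth` elements overflows when the
    producer outpaces the consumer. Plain queueing arithmetic, not
    a model of LabVIEW internals."""
    surplus = produce_hz - consume_hz
    if surplus <= 0:
        return float("inf")  # the consumer keeps up; no overflow
    return depth / surplus

# A 32,000-element FIFO filled at 1 kHz and never read lasts 32 s;
# read at even 900 Hz it lasts 320 s. Reading multiple elements per
# iteration is one plausible reason the RT loop period mattered less
# than expected in the experiment above.
print(time_to_overflow_s(32_000, 1_000, 0))    # 32.0
print(time_to_overflow_s(32_000, 1_000, 900))  # 320.0
```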

I really need to get this right in my head to have full confidence that I can acquire and log at a user-specified rate, so here is what I am really looking for:

1) If I set an acquisition rate of, say, 10 kHz from the host, what period should I set the timed loop on the RT target to?

2) If, say, the RT period corresponds to 1 kHz and the DMA FIFO size is 32,000, does that mean the DMA FIFO will overflow after 4 seconds?

3) The FPGA onboard clock is usually 40 MHz. If I am sure the code won't take longer than 4 ticks and I set the acquisition rate to 10 MHz, what rate should I run the timed loop in the RT VI at to avoid any data loss, when the maximum selectable clock is only 1 MHz? And why does the datasheet say the cRIO-9012 processor runs at 400 MHz when I can only select a 1 MHz clock for the timed loop?

4) What are the implications of choosing buffer sizes on the Network Stream functions?

I really appreciate your time and support.

 

Kind Regards

Austin

Message 3 of 5

Hi Austin,

 

Apologies for the late reply. I'm glad you now have it working. Just to confirm, what is your real-time target? Is it a cRIO? The RT writer endpoint will pre-allocate memory, but as far as I know it allocates that memory on the RT target, not in Windows, as the data is streamed to a separate buffer on the Windows side. Because the real-time target runs its timed loops deterministically, while a Windows VI that is writing data to a file may fall behind, you may need a larger buffer on your RT target to let the Windows VI catch up. I can't think of a reason why a larger buffer on the RT side would itself be a problem.

 

In answer to your questions: I believe that if you set the clock to 1 MHz and the period to 100 us, you get an acquisition rate of 10 kHz within your RT timed loop.
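
Spelling out that conversion (simple arithmetic, nothing LabVIEW-specific):

```python
# Timed-loop period to loop rate: with the 1 MHz clock, the period is
# specified in microseconds, so a 100 us period gives a 10 kHz loop.
period_us = 100
rate_hz = 1.0 / (period_us * 1e-6)
print(rate_hz)  # 10000.0 -> 10 kHz
```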

If you are acquiring data in a timed loop at 1 kHz, then depending on the rate at which the buffer is emptied, it could time out. The best approach is to overestimate the buffer size you will need first, then reduce it if necessary.
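
One way to put numbers on "overestimate first" (a rule-of-thumb sketch of mine, not an NI guideline): size the buffer to ride out the longest stall you expect on the reading side, with some headroom:

```python
def min_buffer_elements(stream_rate_hz, worst_stall_s, headroom=2.0):
    """Rule-of-thumb endpoint buffer size: enough elements to absorb
    the worst consumer stall, times a safety factor. Illustrative only;
    both the stall estimate and the headroom are assumptions."""
    return int(stream_rate_hz * worst_stall_s * headroom)

# A 10 kHz stream whose Windows-side reader can stall for ~1 s while
# writing to disk would want a buffer of at least 20,000 elements:
print(min_buffer_elements(10_000, 1.0))  # 20000
```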

The cRIO has an internal 1 MHz clock that is used for timing and synchronisation. I have found a KnowledgeBase article that gives more information on network streams, including endpoint buffer sizes.

 

Regards,

Ben B.

Applications Engineer
National Instruments UK & Ireland

"I've looked into the reset button, the science is impossible!"
Message 4 of 5

Ben,

 

Could you comment on why specifying a buffer size larger than 9 MB could cause problems? I'm running into an issue where, seemingly at random, the read buffer on my PC side starts to grow until it fills the entire buffer I specified (30 MB).

Message 5 of 5