
Memory leak with Actor Framework TCP server

Hello,

I'm writing a large project with a TCP server as the command entry point. During testing I discovered a memory leak, and after a week of tearing my hair out trying to find what's wrong, I decided to prepare a reproducible example and ask here.

 

Description:

I have an actor that plays the TCP server role (TCPServerService). It has a Listen method where it waits for an incoming connection (Wait on Listener); after a client connects successfully, it prepares a new actor (TCPRequestHandler) with the Connection ID in its private data and launches it.
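For readers who don't use LabVIEW, the same structure can be sketched in Python (a rough, hypothetical stand-in; the function names are mine, not from the project):

```python
import socket
import threading

def handle_connection(conn: socket.socket) -> None:
    """Stand-in for the per-connection TCPRequestHandler actor."""
    with conn:
        while True:
            data = conn.recv(4096)      # TCP Read
            if not data:
                break                   # client closed the connection
            conn.sendall(data)          # TCP Write (reply to the request)

def listen(port: int) -> None:
    """Stand-in for the Listen method: wait for clients, launch a handler each."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("127.0.0.1", port))
        listener.listen()
        while True:
            conn, _addr = listener.accept()          # Wait on Listener
            # Launch a new "actor" that owns this connection.
            threading.Thread(target=handle_connection,
                             args=(conn,), daemon=True).start()
```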

 

This TCPRequestHandler has two methods: HandleRequest and HandleResponse.

 

HandleRequest waits for data with TCP Read and then prepares and sends a response - the way the data are sent is crucial for the memory leak.

HandleResponse just sends everything it received via TCP Write and then calls HandleRequest.

 

In the first variant, the data are sent directly, in the same place, via TCP Write. Then HandleRequest is called again and the process repeats.

This variant has no leak - tested for 12 hours with a maximum memory difference of 200 kB - it's OK.

 

In the second variant, the response is prepared and sent as a message to the HandleResponse method, which sends the data via TCP Write and then calls HandleRequest. This forms a never-ending cycle.

This variant has a big memory leak - more than 10 MB in 12 hours.
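To make the two variants concrete, here is a rough, LabVIEW-free sketch in Python (all names hypothetical): variant A replies in place inside the request loop, while variant B routes each response through a message queue back to the same handler, mimicking the HandleRequest -> HandleResponse -> HandleRequest cycle.

```python
import queue
import socket

def serve_in_place(conn: socket.socket) -> None:
    """Variant A: read the request and reply immediately, in the same place."""
    while True:
        data = conn.recv(4096)          # HandleRequest: TCP Read
        if not data:
            break
        conn.sendall(data)              # reply right here via TCP Write

def serve_via_messages(conn: socket.socket) -> None:
    """Variant B: HandleRequest enqueues a response message; a separate
    HandleResponse step dequeues it, writes it, and re-arms HandleRequest."""
    mailbox: "queue.Queue[bytes]" = queue.Queue()   # the actor's message queue
    while True:
        data = conn.recv(4096)          # HandleRequest: TCP Read
        if not data:
            break
        mailbox.put(data)               # send the response to HandleResponse
        response = mailbox.get()        # HandleResponse: receive the message
        conn.sendall(response)          # ...and send it via TCP Write
```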

 

I prepared a reproducible example with both variants of how the data are sent, wrapped in a Diagram Disable structure (to reproduce it, enable one of them). (LabVIEW 2021)

I also prepared a TCP client written in C# - the code and an EXE are in the Bin folder - just run it after running Launcher.vi.

 

I used perfmon to watch LabVIEW's memory usage (Win+R -> perfmon).

 

Please, could someone review the code and tell me what's wrong?

 

Thank you very much,

Adam

Message 1 of 4

I would not call 10 MB over 12 hours a "big memory leak". Also, AFAIK perfmon does not show the real memory usage of a LabVIEW app. LabVIEW does not release unused memory immediately; it sometimes keeps it until the application is closed. You might see better what's happening if you trace your app with the Desktop Execution Trace Toolkit (DETT) or Get Memory Status.vi from the "Memory Control" palette.

I could not find a reason for any kind of memory leak in your program, except maybe for HandleResponse.vi, which, in my opinion, should not have a message defined for it. You don't need to send a message from an actor to itself to perform one of its own actions; you can simply call the method directly whenever you need it. Sending a message with variable-size data can instead cause reallocation of the memory backing the actor's queue, which might never be deallocated and may look like a leak. Sending a message is also much slower than calling the method directly, and if there is too much data to process, the actor may fall behind, causing its queue to grow in size and use more memory.
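The queue-growth effect can be illustrated outside LabVIEW. In this hypothetical Python sketch, a producer enqueues messages faster than a slow consumer processes them, so the unbounded queue holds more and more memory even though nothing is strictly leaked:

```python
import queue
import threading
import time

mailbox: "queue.Queue[bytes]" = queue.Queue()   # unbounded, like an actor queue

def slow_consumer() -> None:
    """Processes roughly one message per millisecond."""
    while True:
        mailbox.get()
        time.sleep(0.001)

threading.Thread(target=slow_consumer, daemon=True).start()

# A fast producer: enqueue messages much faster than they are consumed.
for _ in range(10_000):
    mailbox.put(b"x" * 1024)            # 1 kB payload per message

# The backlog is memory held alive by pending messages.
print(mailbox.qsize())
```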

As a side note, sending the Listen message from the nested actor to the calling actor creates a dependency between the two, which might not be desired. You could send an abstract message instead, or use an interface class if you are using LabVIEW 2020 or later. In this case, I think you can avoid sending that message entirely, since the caller listens continuously anyhow.

Lucian
CLA
Message 2 of 4

Hello Lucian,

thank you for the reply.

 

  1. The HandleResponse message is necessary, because this is just an example. In the real scenario, the data are sent to a Core actor, which produces the real answer (not simulated as here), and that answer is then sent to the HandleResponse of the particular connection actor, which sends the reply.
  2. Could the solution to this "leak" be to change the model from "one actor that reads and replies" to two actors - one for reading and one for replying?
  3. The reason I am hunting these leaks is that the application will never stop in real production, and I need to avoid any possible issues with such a long run. That is also why I call 10 MB in 12 hours a big leak - after many hours of running it could be much more.
  4. Actually, I am using interfaces - HandleRequest and HandleResponse are interface methods, but to create the example in LV19 I removed all the newer constructs. Yes, I know Listen should be renamed to HandleRequest - it's on my to-do list 😄
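The split proposed in point 2 could look something like this hypothetical Python sketch (names are mine): a reader that only receives requests and forwards them to a core, and a replier that only drains a response queue and writes to the shared connection.

```python
import queue
import socket
import threading

def reader(conn: socket.socket, to_core: "queue.Queue[bytes]") -> None:
    """Reading actor: only receives requests and forwards them to the core."""
    while True:
        data = conn.recv(4096)
        if not data:
            to_core.put(b"")            # propagate shutdown downstream
            break
        to_core.put(data)

def core(to_core: "queue.Queue[bytes]",
         responses: "queue.Queue[bytes]") -> None:
    """Stand-in for the Core actor: here it just echoes the request back."""
    while True:
        req = to_core.get()
        responses.put(req)              # forwards the shutdown signal too
        if not req:
            break

def replier(conn: socket.socket, responses: "queue.Queue[bytes]") -> None:
    """Replying actor: only drains the response queue and writes the socket."""
    while True:
        resp = responses.get()
        if not resp:
            break
        conn.sendall(resp)
```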
Message 3 of 4

Sorry, maybe I haven't fully understood the sentence: "Sending a message instead using variable data can cause reallocation of the memory of the actor's queue which might never be deallocated and maybe seen as a leak."

 

Does it mean that every message sent to any actor causes a memory reallocation that could possibly never be deallocated? Isn't exactly that called a memory leak?

Message 4 of 4