10-22-2022 03:13 AM
Hello,
I'm writing a big project with a TCP server as a command entry point. In tests I discovered a memory leak, and after a week of tearing my hair out trying to find what's wrong, I decided to prepare a reproducible example and ask here.
Description:
I have an actor that plays the TCP server role (TCPServerService). It has a Listen method that waits for an incoming connection (Wait on Listener); after a client successfully connects, it prepares a new actor (TCPRequestHandler) with the Connection ID in its private data and starts it.
This TCPRequestHandler has two methods: HandleRequest and HandleResponse.
HandleRequest waits for data with TCP Read, then prepares a response and sends it off - the way the data is sent is crucial for the memory leak.
HandleResponse just sends everything it received via TCP Write and then calls HandleRequest.
The first way is to send the data directly, in the same place, via TCP Write, then call HandleRequest again and repeat the cycle.
This way has no leak - tested for 12 hours with a maximum memory difference of 200 kB - it's OK.
The second way is to prepare the response and send it to the HandleResponse method, which sends the data via TCP Write and then calls HandleRequest, in a never-ending cycle.
This way has a big memory leak - more than 10 MB in 12 hours.
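To make the two variants concrete, here is a rough sketch of the pattern in Python (the class, the queue, and the method names are my stand-ins for the Actor Framework pieces, not actual LabVIEW code):

```python
import queue
import socket

class TCPRequestHandler:
    # Stand-in for the LabVIEW actor; messages to self go through a queue,
    # like the actor's message queue in the Actor Framework.

    def __init__(self, conn: socket.socket):
        self.conn = conn               # stands in for the Connection ID
        self.inbox = queue.Queue()     # stands in for the actor's message queue

    def run(self):
        while True:
            self.inbox.get()()         # dequeue the next self-message, execute it

    # Way 1 (no leak observed): write the response in place, then loop back
    # to HandleRequest. Only a tiny "call me again" message enters the queue.
    def handle_request_way1(self):
        data = self.conn.recv(4096)            # TCP Read
        response = data                        # prepare the response
        self.conn.sendall(response)            # TCP Write, in the same place
        self.inbox.put(self.handle_request_way1)

    # Way 2 (leaks): the whole response payload rides inside a message to
    # HandleResponse, which writes it out and re-enqueues HandleRequest.
    def handle_request_way2(self):
        data = self.conn.recv(4096)            # TCP Read
        response = data                        # prepare the response
        self.inbox.put(lambda: self.handle_response(response))

    def handle_response(self, response):
        self.conn.sendall(response)            # TCP Write
        self.inbox.put(self.handle_request_way2)
```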
I prepared a reproducible example with both versions of how the data is sent, switched with a Disable structure (to reproduce, enable one of them). (LabVIEW 2021)
I also prepared a TCP client written in C# - the code and an EXE are in the Bin folder - just run it after running Launcher.vi.
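For anyone who doesn't want to run the attached EXE, the client essentially just hammers the server in a loop; a minimal Python stand-in could look like this (host, port, and payload are my assumptions - the real values are in the attached C# code):

```python
import socket
import time

HOST, PORT = "127.0.0.1", 6340   # assumed; use the port the LabVIEW server listens on

with socket.create_connection((HOST, PORT)) as sock:
    while True:
        sock.sendall(b"ping\n")       # send a request
        reply = sock.recv(4096)       # wait for the server's response
        time.sleep(0.01)              # keep a steady load on the server
```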
I used perfmon to watch LabVIEW's memory usage (Win+R -> perfmon).
Please, could someone review the code and tell me what's wrong?
Thank you very much,
Adam
10-24-2022 06:31 AM
I would not call 10 MB over 12 hours a "big memory leak". Also, AFAIK perfmon does not show the real memory usage of a LabVIEW app. LabVIEW will not release unused memory immediately and sometimes keeps it until the application is closed. You may see better what's happening if you trace your app using the Desktop Execution Trace Toolkit (DETT) or Get Memory Status.vi from the "Memory Control" palette.
I could not find a reason for any kind of memory leak in your program, except maybe for HandleResponse.vi, which, in my opinion, should not have a message defined for it. You don't need to send a message from an actor to itself to perform one of its own actions; you can simply call the method directly whenever you need it. Sending a message with variable-size data instead can cause reallocation of the memory of the actor's queue, which might never be deallocated and may be seen as a leak. Sending the message is also much slower than calling the method directly, and if there is too much data to be processed, the actor might fall behind, causing its queue to grow and use even more memory.
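To illustrate the difference (a Python-flavoured sketch with hypothetical names, not the actual VIs):

```python
import queue
import socket

class Handler:
    def __init__(self, conn: socket.socket):
        self.conn = conn
        self.inbox = queue.Queue()    # the actor's message queue

    # Message-to-self shape: the response payload is copied into the queue,
    # so the queue's backing memory may grow to fit it and stay grown.
    def handle_request_messaged(self):
        data = self.conn.recv(4096)
        self.inbox.put(lambda: self.handle_response(data))

    # Direct-call shape: same actor, same action, but the payload never
    # touches the queue, so there is nothing to reallocate.
    def handle_request_direct(self):
        data = self.conn.recv(4096)
        self.handle_response(data)

    def handle_response(self, response):
        self.conn.sendall(response)   # TCP Write
```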
As a side note, sending the Listen message from the nested actor to the calling actor creates a dependency between the two, which might not be desired. You can send an abstract message instead, or use an interface class if you are using LabVIEW 2020 or later. In this case I think you can avoid sending that message completely, since the caller will listen continuously anyhow.
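A sketch of the abstract-message idea, again in Python with made-up names (in LabVIEW this would be an abstract message class or an interface, as mentioned above):

```python
from abc import ABC, abstractmethod

class ListenAgainMsg(ABC):
    # The nested actor depends only on this abstraction.
    @abstractmethod
    def do(self, caller): ...

class TCPServerService:
    def listen(self):
        print("waiting for the next connection")

class ServerListenAgainMsg(ListenAgainMsg):
    # Concrete message owned by the caller, so the dependency points upward:
    # the nested actor never has to name TCPServerService directly.
    def do(self, caller):
        caller.listen()
```

The nested actor would be handed a ListenAgainMsg at launch and just send it back; it never references the caller's concrete class.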
10-24-2022 06:56 AM
Hello Lucian,
thank you for the reply.
10-24-2022 09:49 AM
Sorry, maybe I haven't fully understood this sentence: "Sending a message with variable-size data instead can cause reallocation of the memory of the actor's queue, which might never be deallocated and may be seen as a leak."
Does that mean that every message sent to any actor causes a memory reallocation which could possibly never be deallocated? Isn't exactly that called a memory leak?