12-07-2009 06:03 AM
Hi,
LV 2009
I have a simple application with some static arithmetic inside a timed loop. The application cycle is 1000 ms. I constantly measure how much time is spent executing one cycle; usually this value is about 300-320 ms. The application runs on one workstation (2 CPUs). I noticed that when I open a Windows Remote Desktop connection to the workstation, the application cycle is disturbed: sometimes it takes even more than 1000 ms to execute one cycle. The Remote Desktop connection is quite slow because it goes through a cellular phone. If my Remote Desktop connection is slow, why is the cycle disturbed? Is the front panel waiting until all data has been updated on my cellular phone's display? Has anyone else noticed this behaviour?
BR, Jim
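The original LabVIEW diagram isn't posted, but the measurement Jim describes (a fixed 1000 ms loop whose per-cycle work time is recorded, so overruns show up as cycles longer than the period) can be sketched in a text language. This is a hypothetical Python analogue, not the actual application; `do_work` is a stand-in for the static arithmetic:

```python
import time

PERIOD = 1.0  # loop period in seconds, matching the 1000 ms cycle described


def do_work():
    # Stand-in for the static arithmetic inside the timed loop.
    return sum(i * i for i in range(100_000))


def run(cycles):
    """Run the fixed-period loop and record how long each cycle's work takes."""
    durations = []
    next_deadline = time.monotonic() + PERIOD
    for _ in range(cycles):
        start = time.monotonic()
        do_work()
        durations.append(time.monotonic() - start)
        # Sleep until the next deadline; if `remaining` is negative,
        # the cycle overran its 1 s period (the symptom Jim reports).
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
        next_deadline += PERIOD
    return durations
```

Comparing the recorded durations with and without a Remote Desktop session open would show the same disturbance Jim measured in LabVIEW.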
04-24-2012 10:30 AM
I have a similar situation. I have a PC running a LabVIEW executable carrying out DAQ tasks on a test facility, and Remote Desktop is used to view and control the exe during a trial. It appears that if the network connection is lost, the running of the executable on the remote PC is interrupted. I'm monitoring the DAQ device buffer during continuous operation, and it fills up when the connection is lost, indicating the executable is not looping at the correct rate. It even struggles when the Remote Desktop window is scrolled!! It makes no sense, as the remote PC should run quite happily with or without a Remote Desktop connection?!
04-24-2012 11:00 AM
What's the CPU Usage with and without the remote desktop connection?
Regards,
Marco
04-25-2012 02:14 AM
I can't remember the exact values, but the CPU usage is low in both cases.
04-25-2012 05:22 AM
It would be nice if you could share some code.
That might help us work out what is going on.
04-25-2012 08:16 AM
I suspect part of the answer would be shown in the Windows Task Manager >>> Performance >>> Show Kernel Times.
The OS (operating system) exposes an environment in which our code runs. That environment OPERATES the hardware as required to provide that environment. One of the operations it performs for us is controlling memory allocation and mapping in order to provide the virtual memory our code runs in. Virtual memory is implemented by hardware that translates memory address fetches into physical memory fetches. VM allows you to run multiple applications on the same machine at the same time, where each process has access to up to 4 GB of memory even though the machine has far less than the total of what all of the processes THINK they have access to.
When the OS discovers a need to set up a new environment for yet another process, or to expand the mapped memory space of an existing thread, the OS drops into kernel mode, where it can access the memory-mapping hardware and twiddle the bits to set up the new memory.
While in kernel mode, normal processing is stopped until the mapping is complete.
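The split between user-mode and kernel-mode time that Task Manager's "Show Kernel Times" displays can also be sampled from inside a process. A minimal sketch (Python used here for illustration, since the original application is LabVIEW): `os.times()` reports the CPU time this process has spent in user mode versus system (kernel) mode, and a memory-heavy workload tends to push some of its cost into kernel time through page mapping, as described above:

```python
import os

# Snapshot user/system (kernel) CPU time consumed by this process so far.
before = os.times()

# Workload: allocate ~64 MB of fresh zeroed pages, forcing the OS to map
# memory for us, then touch each buffer so the pages are actually used.
data = [bytearray(1024 * 1024) for _ in range(64)]
checksum = sum(buf[0] for buf in data)

after = os.times()
user_s = after.user - before.user
kernel_s = after.system - before.system
print(f"user time: {user_s:.3f} s, kernel time: {kernel_s:.3f} s")
```

Comparing these numbers with and without a Remote Desktop session active would give a rough, in-process view of the effect Ben is describing.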
So...
You have found out why running critical processes under Windows is a "hit and miss" game.
But then again, I am just guessing.
Ben