02-10-2006 04:53 PM
Since you are already using a LabVIEW manager function (NumericArrayResize), why not use another one as well, namely MoveBlock()? Maybe that function works differently.
@anotherStefan wrote:
Hi,
memcpy( (*ImageIn)->arg1, image->pData, image->lPitch*image->lHeight );   /* version 1: second copy moves the whole image */
memcpy( (*ImageIn)->arg1, image->pData, 1 );                              /* version 2: second copy moves just one byte   */
Now the second call to memcpy costs me around 15 ms while the first one is fast (I added timestamped OutputDebugString calls between the lines). This is even true for the second version, where the second call to memcpy moves just 1 byte?! Very, very strange. Could this be an issue related to the way LabVIEW executes CIN code? I thought that during a call to a CIN LabVIEW doesn't execute any other code, so I see no reason for a task switch or whatever is happening here.
Regards,
Stefan
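If it helps to narrow this down, here is a minimal sketch of how both copies could be timed around MoveBlock instead of memcpy. The image descriptor (pData, lPitch, lHeight) and the array handle field names (dimSize, arg1) are taken from the snippet above; the struct definitions here are assumptions, not the real framegrabber or CIN headers:

#include <windows.h>
#include <stdio.h>
#include "extcode.h"   /* LabVIEW cintools: MoveBlock, int32, uInt8, ... */

typedef struct {              /* assumed layout of the camera image descriptor */
    unsigned char *pData;     /* start of the pixel buffer */
    long           lPitch;    /* bytes per line            */
    long           lHeight;   /* number of lines           */
} CameraImage;

typedef struct {              /* assumed CIN-generated handle for a 1-D u8 array */
    int32 dimSize;
    uInt8 arg1[1];
} TD1, **TD1Hdl;

static void copy_and_time(TD1Hdl ImageIn, const CameraImage *image)
{
    LARGE_INTEGER f, t0, t1, t2;
    char msg[128];
    size_t bytes = (size_t)image->lPitch * (size_t)image->lHeight;

    QueryPerformanceFrequency(&f);

    QueryPerformanceCounter(&t0);
    MoveBlock(image->pData, (*ImageIn)->arg1, bytes);   /* full copy  */
    QueryPerformanceCounter(&t1);

    MoveBlock(image->pData, (*ImageIn)->arg1, 1);       /* 1-byte copy */
    QueryPerformanceCounter(&t2);

    sprintf(msg, "full copy: %.3f ms, 1-byte copy: %.3f ms\n",
            1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)f.QuadPart,
            1000.0 * (double)(t2.QuadPart - t1.QuadPart) / (double)f.QuadPart);
    OutputDebugStringA(msg);
}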
02-13-2006 07:00 AM
void imagecopy(unsigned char *ImageIn, unsigned long *WidthIn, unsigned long *HeightIn,
               unsigned char *Ptr_to_image_data_IN, LVBoolean *Boolean)
{
    /* copy WidthIn * HeightIn bytes of pixel data into the LabVIEW array */
    MoveBlock(Ptr_to_image_data_IN, ImageIn, (*WidthIn) * (*HeightIn));
}
With a 1600x1200 JPEG image I get a maximum of 5 ms execution time on a 1.7 GHz Pentium processor.
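For what it's worth, MoveBlock is declared in extcode.h as MoveBlock(source, destination, size) and copies raw bytes, so the call above moves exactly (*WidthIn)*(*HeightIn) bytes, which is only right for 8-bit pixels. For 16- or 32-bit data the size would have to be scaled by the element size, e.g. for an (assumed) unsigned short buffer:

MoveBlock(Ptr_to_image_data_IN, ImageIn, (*WidthIn) * (*HeightIn) * sizeof(unsigned short));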
02-13-2006 10:41 AM
Dear Cosmin,
thanks for that suggestion. Even though the 'real' source of the problem seems to be located somewhere else, I made some progress today. First of all I replaced the calls to memcpy with MoveBlock, which, as I expected, didn't seem to change anything. Then I modified the code so that 'NumericArrayResize' is only called when the array passed to the CIN differs in size from the buffer I need to copy (roughly along the lines of the sketch below). Again, no change in performance. Then I went back into LabVIEW and called the VI containing the CIN directly. I didn't mention that before because I couldn't imagine that it had any impact on the whole issue, and I still don't really understand it.
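The resize-only-when-needed part looks roughly like this (a sketch with assumed field names for the CIN-generated u8 array handle, not the complete CIN):

#include "extcode.h"

typedef struct { int32 dimSize; uInt8 arg1[1]; } TD1, **TD1Hdl;

/* Resize the LabVIEW array only when its current size differs from the
   source buffer, then copy the pixel data in one block. */
static MgErr copy_u8_image(TD1Hdl ImageIn, const unsigned char *src, size_t needed)
{
    MgErr err = mgNoErr;

    if ((size_t)(*ImageIn)->dimSize != needed) {
        err = NumericArrayResize(uB, 1, (UHandle *)&ImageIn, needed);
        if (err == mgNoErr)
            (*ImageIn)->dimSize = (int32)needed;
    }
    if (err == mgNoErr)
        MoveBlock(src, (*ImageIn)->arg1, needed);
    return err;
}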
My old version under LabVIEW looked like this:
One top-level VI which, in a Case structure depending on the data type, calls one of 3 sub-VIs: one for 8-, one for 16- and one for 32-bit data. Each calls into a CIN with the copy code for the correct data type, which might be very silly, but that day I couldn't think of any other solution. The top-level VI, however, always returns a 32-bit array, while the low-level VIs return either an 8-, 16- or 32-bit array. I now suspect that the 8-bit array returned by the CIN gets silently converted into a 32-bit array and that this is the real bottleneck. However, if I call the 8-bit version directly, the performance looks much better. The points I don't understand are:
- Is this conversion from an 8-bit to a 32-bit array actually done? (A rough sketch of what such a coercion would cost follows below.)
- Why did measuring time inside the CIN produce the results I wrote about earlier, when I thought that a CIN is never interrupted?
- Have I earned the jerk of the week award for what I did? 😉
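If that conversion really is done, then behind the scenes something roughly like the following has to happen (just a sketch to illustrate the cost, not LabVIEW's actual code): the data gets a new buffer four times as large and every pixel is widened individually instead of one block move. For a 1600x1200 image that is an extra buffer of about 7.3 MB and a pass over roughly 1.9 million pixels:

#include <stdlib.h>

/* What a U8 -> U32 coercion amounts to: allocate 4x the memory and
   widen every element one by one. */
static unsigned long *widen_u8_to_u32(const unsigned char *src, size_t n)
{
    unsigned long *dst = malloc(n * sizeof *dst);
    if (dst) {
        for (size_t i = 0; i < n; ++i)
            dst[i] = src[i];
    }
    return dst;
}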
Anyway: I would like to thank everybody who answered my pleas! Thanks a lot!!!
Stefan