08-14-2025 02:08 AM - edited 08-14-2025 02:15 AM
Hi dkfire,
the marked function is InsertIntoArray, so the header data (256 bytes) is inserted at the beginning of the data array.
No replacement, no destroying.
But I agree: it should have been BuildArray in concat mode…
This would be my implementation of that byte array handling:
That larger array coming from niScope is of type I8, so converting the header data to an array of I8 makes sense. I would prefer U8 for byte arrays…
08-14-2025 02:59 AM
Dear all,
thanks for your reply. I will post a cleaned-up version of my reading VI during the day. First, a quick explanation of the logic behind it.
The niScope provides I8 data, in my case about 62500 samples per trigger. I want to attach some side information to each single trigger event, so I generate a 256-byte header and fill it with that information. As my niScope data is I8, the header data should also be I8. With that, I know that the first 256 bytes are my header and the rest is the actual data.
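Since LabVIEW diagrams can't be pasted as text, here is a rough Python sketch of the record layout described above. The header fields (`trigger_count`, sample count) and function names are purely illustrative assumptions, not taken from the actual VI; the real header contents are up to the application:

```python
import struct

HEADER_SIZE = 256  # fixed header length in bytes, as described above

def build_record(samples, trigger_count):
    """Prepend a 256-byte header to one trigger's worth of I8 samples.

    `samples` is a bytes-like object of signed 8-bit data (as delivered
    by niScope). Packing example side information, then padding to a
    fixed 256 bytes, means the data always starts at a known offset.
    """
    header = struct.pack("<II", trigger_count, len(samples))
    header = header.ljust(HEADER_SIZE, b"\x00")  # pad to exactly 256 bytes
    return header + samples  # same effect as Build Array in concatenate mode

def split_record(record):
    """Recover the parts: first 256 bytes are header, the rest is data."""
    return record[:HEADER_SIZE], record[HEADER_SIZE:]
```

Splitting a record back apart is then just a fixed-offset subset, mirroring the "first 256 bytes are my header" convention.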
I am planning to build a stripped-down producer/consumer loop that just stores the iteration count of the while loop, with everything else removed. That should make it easy for you to reproduce my problem, in case it remains.
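For readers less familiar with the pattern, a minimal text-based sketch of such a producer/consumer setup (a Python analogy to the two LabVIEW loops connected by a queue; the function and variable names are made up for illustration):

```python
import queue
import threading

def producer_consumer_demo(n_records, record):
    """Minimal producer/consumer sketch: the producer enqueues one
    record per 'trigger', the consumer dequeues each record and
    appends it to an in-memory buffer (standing in for the file write).
    A sentinel value (None) tells the consumer to stop."""
    q = queue.Queue()
    written = bytearray()

    def producer():
        for _ in range(n_records):
            q.put(record)   # acquisition loop: enqueue each record
        q.put(None)         # sentinel: no more data

    def consumer():
        while True:
            item = q.get()
            if item is None:          # sentinel received, stop
                break
            written.extend(item)      # in a real VI: Write to Binary File

    t_p = threading.Thread(target=producer)
    t_c = threading.Thread(target=consumer)
    t_p.start(); t_c.start()
    t_p.join(); t_c.join()
    return bytes(written)
```

The key property, as in LabVIEW, is that acquisition never blocks on disk writes: the queue decouples the two loops.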
Anyway, it's great that people are thinking along with me. I will get back to you during the day (German time)...
Thanks a lot and best regards, Benjamin
08-14-2025 03:01 AM
@GerdW wrote:
Hi dkfire,
the marked function is InsertIntoArray, so the header data (256 bytes) is inserted at the beginning of the data array.
No replacement, no destroying.
But I agree: it should have been BuildArray in concat mode…
This would be my implementation of that byte array handling:
That larger array coming from niScope is of type I8, so converting the header data to an array of I8 makes sense. I would prefer U8 for byte arrays…
My bad, I never use that, so I should have tested it before.
But why take 4000 from the first channel, and the rest from channel 2, with a sample length of 62500?
And I think the problem might still be in the reading of the file.
And I would use a Flush File before closing the file.
08-14-2025 03:18 AM - edited 08-14-2025 03:18 AM
Hi Benjamin,
@Witschas82 wrote:
The niScope provides I8 data. In my case about 62500 samples per trigger. …
Hence, I generate an 256 Byte header and fill it with some header information. …
With that, I know that the first 256 Byte are my header and the rest is the actual data.
Are there "exactly" or "about" 62500 samples?
Where do you store the information about how many samples come with each data block?
Ah, I see: that's the "Actual Record Length" read from niScope.
Maybe there is a mismatch between "Actual Record Length" as given by that property and the array size after your mangling of the 2D array data!? Can you verify that?
08-14-2025 04:04 AM
Regarding the 4000 samples: I have two input signals, where the first 4000 samples refer to the reference signal and the rest is the actual signal. On the NI 5162, both channels have to be sampled with the same record length. As I do not need two times 62500 samples and I want to save storage space, I take the first 4000 samples from channel 1 (reference) and the rest from channel 2 before saving. The 62500 samples are defined and verified; this is completely okay.
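The splice described above can be sketched in Python (a hedged analogy of the array subset/concatenation on the block diagram; the constant names are my own, not from the VI):

```python
REF_SAMPLES = 4000     # samples kept from channel 1 (reference)
RECORD_LENGTH = 62500  # record length per channel on the NI 5162

def combine_channels(ch1, ch2):
    """Keep the first 4000 samples of channel 1 and the remaining
    58500 samples of channel 2. Both inputs are full 62500-sample
    records because the digitizer samples both channels with the
    same record length."""
    assert len(ch1) == RECORD_LENGTH and len(ch2) == RECORD_LENGTH
    return ch1[:REF_SAMPLES] + ch2[REF_SAMPLES:]
```

The combined record stays exactly 62500 samples long, which is why a fixed record length still works for the file format.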
All the best,
Benjamin
08-14-2025 04:58 AM
@Witschas82 wrote:
Regarding the 4000 samples: I have two input signals, where the first 4000 samples refer to the reference signal and the rest is the actual signal. On the NI 5162, both channels have to be sampled with the same record length. As I do not need two times 62500 samples and I want to save storage space, I take the first 4000 samples from channel 1 (reference) and the rest from channel 2 before saving. The 62500 samples are defined and verified; this is completely okay.
All the best,
Benjamin
I still don't understand that math.
I would like to see how you have plotted the 2 graphs that you show.
08-14-2025 11:29 AM
From the offline help:
To preallocate memory for a queue, enqueue that number of elements and then flush the queue. The space remains allocated for further use of the queue.
08-15-2025 12:51 AM
Dear all, dear Dkfire,
sorry for the long silence. I tried to create a cleaned-up VI that reproduces my error (attached). While doing so, I realized that everything is working fine. Going back to my read-data routine, it turned out that I had calculated the amount of data to read incorrectly, leading to the missing data points. So it was just an extremely stupid error on my side. I am very sorry for bothering you. All the different versions discussed here store the data correctly in the producer/consumer loop!
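For anyone hitting a similar read-back bug: with the fixed-size record layout discussed in this thread, reading back exactly header-plus-data per record sidesteps any manual byte-count arithmetic. A hedged Python sketch (function name and error handling are illustrative, not from the actual VI):

```python
HEADER_SIZE = 256   # header bytes per record, as in this thread
DATA_SIZE = 62500   # samples per record, one byte each (I8)

def read_records(path):
    """Read fixed-size records back from a binary file.

    Each record is exactly 256 + 62500 bytes, so reading that exact
    count per iteration avoids miscounting the data to read. Returns
    a list of (header, data) tuples."""
    record_size = HEADER_SIZE + DATA_SIZE
    records = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(record_size)
            if not chunk:          # clean end of file
                break
            if len(chunk) != record_size:
                raise IOError("truncated record at end of file")
            records.append((chunk[:HEADER_SIZE], chunk[HEADER_SIZE:]))
    return records
```

In LabVIEW terms, this corresponds to wiring a constant byte count of 62756 into each Read from Binary File call rather than recomputing it per iteration.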
Thank you very much for your help. I am very happy that I can proceed with the DAQ now.
Have a nice weekend and best regards, Benjamin
08-15-2025 04:34 AM
We've all been there. Kudos for being open and honest about it.
On a side-note: Quack!