08-13-2025 11:30 AM - edited 08-13-2025 11:32 AM
@Witschas82 wrote:
Thanks a lot for your comment, which I will test of course. When should I do that? For every iteration?
I assume you always want to write all data accumulated in the queue so far. Yes, it will work equally well if the queue has zero or only one element.
(Note that the shift register is important for the file reference if there is a chance that the FOR loop iterates zero times; otherwise the reference would become invalid. Same for the error, which would get cleared.)
08-13-2025 02:23 PM
@altenbach wrote:
Instead of writing only the oldest element, why not flush the queue and write everything that's in the queue?
I was once told the Flush Queue also clears out the memory of the queue. For this reason, it may be better to do the dequeue in a small FOR loop (dequeue X times or until a timeout occurs). You can then process the array of data as a whole instead of one element at a time.
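The suggested pattern (dequeue in a bounded loop, stopping on timeout, then handling everything in one batch) can be sketched in Python as an analogy, since LabVIEW code is graphical. The function name `drain` and the limits are illustrative, not from the original code:

```python
import queue

def drain(q, max_items=1000, timeout_s=0.01):
    """Dequeue up to max_items, stopping early when the queue is
    empty (the timeout case), and return everything as one batch
    so it can be written to file in a single operation."""
    items = []
    for _ in range(max_items):
        try:
            items.append(q.get(timeout=timeout_s))
        except queue.Empty:
            break  # nothing arrived in time; stop like a LabVIEW dequeue timeout
    return items
```

Processing the returned batch as a whole mirrors writing the whole array to file at once instead of per element.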
08-13-2025 02:34 PM
@crossrulz wrote:
@altenbach wrote:
Instead of writing only the oldest element, why not flush the queue and write everything that's in the queue?
I was once told the Flush Queue also clears out the memory of the queue.
While I don't really know what that actually means. Are you talking about deallocation? Since a queue can grow and shrink, memory is probably somewhat dynamic anyway.
In any case, the help is silent about that and I have never used it. No expert here. 😄
08-13-2025 02:48 PM
I just have one question: How do you read the files?
08-13-2025 02:59 PM - edited 08-13-2025 03:00 PM
Also, have you tried to flush the file before closing it?
08-13-2025 02:59 PM
@dkfire wrote:
I just have one question: How do you read the files?
Well, it is just flat binary and we know the datatype and byte order, so we can read it back as one huge array (or read it in sections).
08-13-2025 03:01 PM
@altenbach wrote:
@dkfire wrote:
I just have one question: How do you read the files?
Well, it is just flat binary and we know the datatype and byte order, so we can read it back as one huge array (or read it in sections).
I know, but the OP hasn't shown that part of the code.
Just to make sure the bug is not in the reading.
08-13-2025 04:26 PM
First, I want to thank the group here for the valuable comments and discussion. This is great help.
Unfortunately, flushing the queue did not solve my problem.
I guess, I read the data correctly, but it is a very good idea to double-check.
As my DAQ data is I8, I also convert my header data to I8 values before saving; e.g., an I32 value results in an I8 array of length 4. Hence, reading my binary file looks as follows:
All values are correct, from both the header and the data file. It is just that data is missing whenever a new file is opened. When I write larger files, e.g. 8 GByte, I can read continuous data for 8 GByte before data is missing again. Could this be an issue with how I read the data?
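The described header conversion (one I32 becoming four I8 values) can be sketched in Python; the function name `i32_to_i8` is illustrative, and the default of big-endian is an assumption based on LabVIEW's default byte order:

```python
import struct

def i32_to_i8(value, big_endian=True):
    """Split one I32 into four I8 values, as done for the header.
    LabVIEW flattens to big-endian by default; flip the flag if
    the file was written with a different byte order."""
    fmt = '>i' if big_endian else '<i'
    return list(struct.unpack('4b', struct.pack(fmt, value)))
```

Reading the header back is the inverse: regroup four I8 values and unpack them as one I32 with the same byte order.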
Thanks again and all the best, Benjamin
08-13-2025 06:19 PM
Arrays are limited to 2147483647 elements (the index is I32), while files have no such limit. Here's your problem.
Your truncated code snippet makes very little sense. I cannot comment further until I see the entire code.
08-14-2025 01:35 AM - edited 08-14-2025 01:45 AM
I agree, a small picture does not help.
But I found one problem in your producer loop:
That Insert Into Array will destroy/remove the first 256 samples of your data.
That should have been a Build Array.
And your math going from the 2D array to the 1D array does not make much sense to me.
Do you know how much data you get in each batch of reading?
Is it a fixed size?
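The difference being pointed out can be sketched in Python (an analogy only; the function names are made up): overwriting the start of a buffer in place loses the samples that were there, while appending, as Build Array does, keeps everything.

```python
def overwrite_start(buf, new):
    """Overwrite the start of buf in place: the first len(new)
    samples are lost, which is the bug described above."""
    out = list(buf)
    out[:len(new)] = new
    return out

def append_batch(buf, new):
    """Append the new batch, analogous to Build Array: no samples lost."""
    return list(buf) + list(new)
```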
The producer loop for reference: