Multifunction DAQ


Error when making file write size the same as sample rate

Hello everyone,

 

I'm using python-nidaqmx and LabVIEW for generating code to work with National Instruments hardware. I ran into the following error:

 

nidaqmx.errors.DaqError: 
The samples per file specified is not evenly divisible by the file write size. Either change the samples per file or modify the file write size. If not explicitly set, the file write size can be inferred from the buffer size, which is based on the sample rate.

Property: DAQmx_Logging_SampsPerFile
Requested Number of Samples: 2000
Suggested Value: 2048

Task Name: reader_task

Status Code: -201402

 

I managed to fix this by applying the suggestion in the error: now all my values are divisible by the file write size, which is also divisible by my sector size. My problem is that I don't actually want to do what the error suggests. I want my sample rate to determine the amount of data written into my TDMS file.

 

For example:

sample rate = 51.2 kS/s

Then the TDMS file should have 51,200 data points rather than 52,224.

 

Is there a DAQmx way of doing this, or do I need to remove the last FILE_WRITE_SIZE - SAMPLE_RATE values from my file each time a TDMS file is created?
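(A note on the arithmetic for the trimming workaround: in general the padding DAQmx adds is not FILE_WRITE_SIZE - SAMPLE_RATE but the gap up to the next multiple of the file write size. A minimal sketch of that calculation, using the numbers from the error above; the helper names are mine, not part of the DAQmx API:)

```python
def round_up_to_multiple(n: int, multiple: int) -> int:
    """Smallest multiple of `multiple` that is >= n."""
    return -(-n // multiple) * multiple


def trailing_excess(desired_samples: int, file_write_size: int) -> int:
    """How many padded samples would sit beyond `desired_samples`
    once samples-per-file is rounded up to the file write size."""
    return round_up_to_multiple(desired_samples, file_write_size) - desired_samples


# Numbers from the error: 2000 samples requested, suggested value 2048.
print(round_up_to_multiple(2000, 2048))  # -> 2048
print(trailing_excess(2000, 2048))       # -> 48 samples of padding to trim
```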

 

 

Thank you in advance everyone.

Message 1 of 6

I can't tell from your code, but it looks like you are using the DAQmx "Log Only" property. For this property the file size needs to be a multiple of the sector size, as you found out.

 

If you use the "Read and Log" option, this is no longer true, but you will also need to read the data. Note that this is less efficient than writing a multiple of the sector size.
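(In python-nidaqmx, the "Read and Log" option corresponds to `LoggingMode.LOG_AND_READ` on the task's input stream. A minimal sketch, untested here and assuming a placeholder device name "Dev1" and channel that are not from the thread:)

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, LoggingMode, LoggingOperation

SAMPLE_RATE = 51200  # 51.2 kS/s, as in the original question

with nidaqmx.Task("reader_task") as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(
        SAMPLE_RATE, sample_mode=AcquisitionType.CONTINUOUS)

    # LOG_AND_READ streams data to the TDMS file while still letting
    # you pull it through task.read().
    task.in_stream.configure_logging(
        "data.tdms",
        logging_mode=LoggingMode.LOG_AND_READ,
        operation=LoggingOperation.CREATE_OR_REPLACE)

    task.start()
    for _ in range(10):  # e.g. ten one-second reads
        data = task.read(number_of_samples_per_channel=SAMPLE_RATE)
```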

 

mcduff

Message 2 of 6

Hi mcduff and thank you for the reply.

 

You say that Read and Log is an efficient way of logging into TDMS. But suppose I have a case in which both choices do the job (just logging into TDMS): why should I use Log over Read and Log? Could you give an example of when one is better than the other, if it's not too much trouble?

 

In my case, I just write data into the TDMS file, and it doesn't matter to me as long as the data is continuous. But I did ask myself when to use one over the other, besides needing a specific number of samples in a TDMS file or plotting the data.

Message 3 of 6

Read and Log is useful if you want to view and save your data at the same time. That being said, I have a USB-6366 that records 8 channels at 2 MS/s per channel. When all 8 channels are running at the highest rate, I always keep the number of samples a multiple of the disk sector size, even in Read and Log mode, as these devices sometimes need to record for a few days straight. Writing is most efficient when it is a multiple of the sector size.

 

Log is useful if you don't need to see the data as it comes out.

 

The DAQmx API is written such that when you use the logging features, it can write directly to disk, bypassing the CPU and memory. Look at the activity monitor when you are in Log Only mode and you will see barely any CPU usage. This is useful when running from battery, on an old computer, etc. So if you need highly efficient data saving and don't need to see your data, Log Only is the way to go.
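(The Log Only configuration described above looks like this in python-nidaqmx, a hedged sketch with the same placeholder "Dev1" device; this is also where error -201402 from the original post appears if `logging_samps_per_file` is not a multiple of the file write size:)

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, LoggingMode, LoggingOperation

with nidaqmx.Task("reader_task") as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(
        51200, sample_mode=AcquisitionType.CONTINUOUS)

    # LOG mode: DAQmx streams straight to disk; task.read() is not
    # allowed, and the CPU barely gets involved.
    task.in_stream.configure_logging(
        "data.tdms",
        logging_mode=LoggingMode.LOG,
        operation=LoggingOperation.CREATE_OR_REPLACE)

    # Must be evenly divisible by the file write size, or DAQmx raises
    # error -201402 as in the original post.
    task.in_stream.logging_samps_per_file = 51200

    task.start()
    task.wait_until_done(timeout=nidaqmx.constants.WAIT_INFINITELY)
```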

 

mcduff

Message 4 of 6

Thank you for all the information, mcduff. One last question: I notice that sometimes when I'm using Read and Log, a random TDMS file gets double the samples. For example, at a rate of 32 kS/s for 10 s I get 640k samples instead of 320k. Could this be happening because I set the timeout to WAIT_INFINITELY?

Message 5 of 6

@jondoeagain wrote:

Thank you for all the information, mcduff. One last question: I notice that sometimes when I'm using Read and Log, a random TDMS file gets double the samples. For example, at a rate of 32 kS/s for 10 s I get 640k samples instead of 320k. Could this be happening because I set the timeout to WAIT_INFINITELY?


I'm not sure what is going on; that has never happened to me. I use LabVIEW instead of Python, but it is probably not a language issue. Maybe a read was missed earlier, and now you effectively get two reads' worth at once.

 

This is what I do in LabVIEW; there is probably a way to do it in Python. In the DAQmx API I use the "Every N Samples Acquired into Buffer" event to tell me when to read the DAQ device; I typically read every 100 ms. With this method there is no polling and no waiting: an event is triggered every time the DAQ has 100 ms worth of samples in its buffer. Once the event fires, I read the DAQ.
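(python-nidaqmx does expose this event as `register_every_n_samples_acquired_into_buffer_event`. A hedged sketch of the 100 ms pattern described above, again with a placeholder "Dev1" device:)

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

SAMPLE_RATE = 32000
READ_SIZE = SAMPLE_RATE // 10  # ~100 ms of data per event

task = nidaqmx.Task()
task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
task.timing.cfg_samp_clk_timing(
    SAMPLE_RATE, sample_mode=AcquisitionType.CONTINUOUS)


def on_n_samples(task_handle, event_type, num_samples, callback_data):
    # Fired by the driver each time READ_SIZE samples are in the buffer;
    # read exactly that many so nothing piles up (no polling, no waiting).
    data = task.read(number_of_samples_per_channel=READ_SIZE)
    return 0  # the DAQmx callback contract requires returning 0


task.register_every_n_samples_acquired_into_buffer_event(
    READ_SIZE, on_n_samples)
task.start()
```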

 

Sorry I cannot help more.

 

Cheers,

mcduff

Message 6 of 6