11-22-2018 10:48 PM - edited 11-22-2018 11:02 PM
Hello James_McN,
Like you said, it is a cool product, but I think I am not using it correctly. I am attaching a screenshot of the code. In the screenshot you can see that I am acquiring the data from the FIFO and writing it directly into a binary file. When I write to a single file from a single FIFO, it works perfectly, but when I try to write into two or more files at once from different FIFOs, I do not get all the data into the files. I thought that placing a "Wait Until Next ms Multiple" set to 1 ms might reduce the load on the processor, but the wait function made acquisition and recording take longer, so I removed it. I am also attaching my code; in it I am simply acquiring the data and writing it into the file.
11-23-2018 01:26 AM
Hi kiranteja,
why don't you just use ONE FIFO for ALL channels? Why don't you write ALL data into ONE file?
Btw.: why is the stop condition of the lower loop named "Start"? Why do you need to negate the "stop" value? Why do you negate the resulting "start" value again - in each loop??? Silly Rube Goldberg logic…
When I write to a single file from a single FIFO, it works perfectly, but when I try to write into two or more files at once from different FIFOs, I do not get all the data into the files.
Using just one data stream to your hard drive creates much less work than starting two (or more) simultaneous data streams! Now the file system needs to "jump" across the hard drive for each access…
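The point above can be sketched outside LabVIEW: interleaving all channels into one binary file gives the disk a single sequential stream instead of several competing ones. This is a Python stand-in (not the poster's code); the channel data and file name are fabricated for illustration.

```python
import os
import struct
import tempfile

def write_interleaved(path, channels):
    """Write equal-length channels sample-by-sample into ONE binary file.

    File layout: ch0[0], ch1[0], ..., ch0[1], ch1[1], ... (little-endian
    float64), so the disk only ever sees one sequential write stream.
    """
    n = len(channels[0])
    with open(path, "wb") as f:
        for i in range(n):
            for ch in channels:
                f.write(struct.pack("<d", ch[i]))

path = os.path.join(tempfile.gettempdir(), "interleaved.bin")
write_interleaved(path, [[1.0, 2.0], [10.0, 20.0]])
```

On read-back the channels are recovered by de-interleaving with the known channel count, so nothing is lost by merging the files.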
11-23-2018 02:34 AM
Sorry for the wrong screenshot, my bad. I have since modified the code and am uploading it now.
why don't you just use ONE FIFO for ALL channels? Why don't you write ALL data into ONE file?
I tried that as well, but I am getting the same result. I appended all elements to one FIFO, and as before there is no problem with the data while acquiring; I am getting each and every sample, matching my calculations. But while recording, it fluctuates between 4 s and 8 s, and sometimes even more. I am out of options, which is why I posted in the discussion forums.
11-23-2018 02:51 AM
As the DAQ Read and File Write are synchronized, the loop speed isn't as high as it could be. Usually, you put the DAQ data on a queue and read the queue elements in a second loop to write to file. Not sure if that is what's happening here, but it would be the first thing on my list. As an alternative, you could put the data in a shift register, so the DAQ Read and File Write execute in parallel. The queue method has the added benefit that the queue can grow when/if the write loop stalls.
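The producer/consumer pattern described above can be sketched in Python as a stand-in for the LabVIEW queue plus second loop (the "DAQ read" here is simulated, and the file write is replaced by a list append):

```python
import queue
import threading

q = queue.Queue()          # unbounded, like a default LabVIEW queue
written = []

def producer():
    for block in range(5):         # pretend each block came from DAQ Read
        q.put([block] * 4)         # enqueue the data; don't write here
    q.put(None)                    # sentinel: acquisition finished

def consumer():
    while True:
        block = q.get()            # blocks until data is available
        if block is None:
            break
        written.append(block)      # stand-in for the binary File Write

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

Because the two loops run in parallel, a slow write no longer throttles acquisition; the queue simply buffers the backlog.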
11-23-2018 03:42 AM - edited 11-23-2018 03:42 AM
Hi wiebe@CARYA,
I tried queues, but I did not try them along with shift registers… I hope that may work.
11-23-2018 09:12 AM
@kiranteja93 wrote:
Hi wiebe@CARYA,
I tried queues, but I did not try them along with shift registers… I hope that may work.
Queues should be able to perform better than a shift register. If queues did not help, then either they weren't the bottleneck, or you didn't implement them correctly.
If you want to try a shift register, you might consider a feedback node instead; it can make the wiring a bit clearer. In both cases, make sure to initialize properly with an empty array.
11-23-2018 10:04 PM
Queues should be able to perform better than a shift register. If queues did not help, then it wasn't the bottleneck,
Actually, the queues are working perfectly while the data is being acquired; I am getting each and every sample in the queue. But when I try to write the data from the queues into a binary file, I get a "not enough memory to complete this operation" error. My problem is not with the data, but with writing it into a file…
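One plausible cause of that error (an assumption, not confirmed by the thread): an unbounded queue keeps growing while the file writer lags, until memory runs out. Two common mitigations are sketched below in Python as a stand-in for the LabVIEW equivalents, with illustrative sizes: cap the queue so the producer gets back-pressure, and batch many small blocks into one large write to cut per-write overhead.

```python
import io
import queue

q = queue.Queue(maxsize=64)          # bounded: producer blocks when full

for i in range(10):
    q.put(bytes([i]) * 512)          # simulated 512-byte acquisition blocks

sink = io.BytesIO()                  # stand-in for the binary file
batch = bytearray()
while not q.empty():
    batch += q.get()
    if len(batch) >= 4096:           # flush in big chunks, not per block
        sink.write(batch)
        batch.clear()
sink.write(batch)                    # flush the remainder
```

A bounded queue trades memory safety for possible acquisition stalls, so the cap should be sized to cover the longest expected disk hiccup.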
11-24-2018 05:22 AM
It is really worth running a benchmark to make sure the disk you are writing to is set up properly. I normally use CrystalDiskMark from https://crystalmark.info/en/software/crystaldiskmark/
If that comes back nice and high, then we know we have a software issue; but if it is too low, then we know we need to address the disk setup.
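If installing a benchmark tool isn't convenient, a quick-and-dirty sequential-write check can be scripted. This Python sketch is only a rough stand-in for a tool like CrystalDiskMark (the numbers are indicative, not rigorous): it writes 64 MiB in 1 MiB chunks, forces the data to disk, and reports MB/s.

```python
import os
import tempfile
import time

def seq_write_mb_per_s(path, total_mb=64, chunk_mb=1):
    """Write total_mb of zeros in chunk_mb pieces and return MB/s."""
    chunk = b"\x00" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())     # make sure data actually hit the disk
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

rate = seq_write_mb_per_s(os.path.join(tempfile.gettempdir(), "bench.bin"))
print(f"sequential write: {rate:.0f} MB/s")
```

If the reported rate comfortably exceeds the acquisition data rate, the disk itself is unlikely to be the bottleneck.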
11-24-2018 10:53 AM
Some Questions:
mcduff
11-25-2018 02:22 PM - last edited on 12-18-2024 10:56 AM by Content Cleaner
You haven't mentioned whether you are running the controller under Windows or RT. If Windows, you might want to look at this example, which provides non-buffered file access (no caching) by calling a different file-open routine in Windows. I can write at over 800 MB/s using this, so it should easily handle your data rate if the disk is up to it, which it sounds like it is. The only limitation is that writes must be in multiples of the sector size, usually 512 bytes.
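The sector-multiple restriction above usually means padding the last partial block before writing. A minimal sketch of that padding, in Python as a stand-in for the LabVIEW/WinAPI code (the 512-byte sector size is an assumption; query the actual drive in real code, and store the true record length somewhere so the padding can be stripped on read-back):

```python
SECTOR = 512  # assumed sector size; real drives may differ (e.g. 4096)

def pad_to_sector(data: bytes, sector: int = SECTOR) -> bytes:
    """Zero-pad data so its length is a multiple of the sector size,
    as required by non-buffered (non-cached) file writes."""
    remainder = len(data) % sector
    if remainder == 0:
        return data
    return data + b"\x00" * (sector - remainder)

assert len(pad_to_sector(b"x" * 700)) == 1024   # 700 -> next multiple of 512
assert len(pad_to_sector(b"x" * 512)) == 512    # already aligned: untouched
```

With Windows non-buffered I/O the buffer address typically has to be sector-aligned as well, which is handled by the file-open routine in the example the post refers to.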