Hello.
I have some binary data (it is a jpg) that is corrupted, but I know what I need to do to fix it. However, I am having trouble getting LabVIEW to cooperate.
If I read the data as text, I am able to process it and fix it the way I need to, but when I write it back to a file, LabVIEW adds an LF for every byte it thinks is a CR and a CR for every byte it thinks is an LF. In other words, if a byte is 0x0d, it writes 0x0d 0x0a, and if a byte is 0x0a, it also writes 0x0d 0x0a.
This, of course, corrupts the jpeg data and makes it unusable.
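To illustrate, this is roughly the byte-level transformation I am seeing on the write (a Python sketch of the behavior as I understand it, not actual LabVIEW code):

# Sketch of the end-of-line conversion I am seeing: every CR (0x0d) and
# every LF (0x0a) comes back out of the write as the CR LF pair.
def simulate_eol_conversion(data: bytes) -> bytes:
    out = bytearray()
    for b in data:
        if b in (0x0D, 0x0A):    # a lone CR or a lone LF...
            out += b"\x0d\x0a"   # ...is written back as CR LF
        else:
            out.append(b)
    return bytes(out)

print(simulate_eol_conversion(bytes([0xFF, 0x0D, 0x42, 0x0A])).hex(" "))
# -> ff 0d 0a 42 0d 0a  (both bytes were expanded, corrupting the jpeg)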
So I thought I could just read the entire file and treat the data as U8. But when I try to use the "Read From Binary File" function, it apparently expects some sort of header with the size of the array, or something like that, and since the first few bytes of my file are 0x00, it returns an empty array.
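If my reading of it is right, the function behaves roughly like this (again a Python sketch; the 4-byte big-endian count is my assumption about what the size header looks like):

import struct

# My understanding of "Read From Binary File" when wired to an array type:
# it first reads a size header, then that many elements.
# (The big-endian I32 count is my assumption about the header layout.)
def read_u8_array_with_size_header(data: bytes) -> bytes:
    (count,) = struct.unpack(">i", data[:4])  # 4-byte big-endian count
    return data[4:4 + count]

# My corrupted file starts with zero bytes, so the count is read as 0
# and an empty array comes back:
file_bytes = bytes([0x00, 0x00, 0x00, 0x00, 0xFF, 0xD8, 0xFF])
print(read_u8_array_with_size_header(file_bytes))  # b'' -- empty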
Does anybody know of a way to make LabVIEW simply read a file and interpret the data as an array of U8, where the length of the array is just the length of the file in bytes?
Thanks,
Alejandro