02-01-2015 01:03 PM - edited 02-01-2015 01:08 PM
OK, I've figured out how to make it work.
But if I want to run it with a 16-bit stereo WAV, what changes do I need to make?
Please advise.
Thanks
02-01-2015 02:03 PM
I think you should answer Gerd's last question.
Do you care to know how this VI works? If not, hire someone to do your project for you. If you do, give us SOMETHING that shows you've taken some time to understand the VI. You've been told it works for 8-bit inputs. Why is this? What is it that determines the input it can handle? If you can point to that, we can start to discuss how to modify that to allow it to work with 16-bit. If you're not going to put ANY effort into modifying this to fit your need, you really shouldn't expect anyone else to do your work for you.
02-01-2015 03:02 PM
If the wave file is 8-bit mono, only the 8-bit output will work; if it is 16-bit mono, only the 16-bit output will work. Similarly for stereo. I understand that now.
Now please guide me: what modifications do I need to make for 16-bit stereo?
Thanks
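(Editor's note: as background for the question above, here is a minimal Python sketch, outside LabVIEW, of what actually differs between 8-bit and 16-bit stereo PCM WAV data: 8-bit samples are stored as unsigned bytes, 16-bit samples as signed little-endian integers, and stereo channels are interleaved. The file name and sample values are hypothetical; this is not the VI under discussion.)

```python
import struct
import wave

# Write a tiny 16-bit stereo test file (hypothetical data).
with wave.open("example.wav", "wb") as wf:
    wf.setnchannels(2)   # stereo
    wf.setsampwidth(2)   # 2 bytes per sample = 16-bit
    wf.setframerate(8000)
    # Two frames: (L=1000, R=-1000), (L=2000, R=-2000)
    wf.writeframes(struct.pack("<4h", 1000, -1000, 2000, -2000))

# Read it back, letting the file's format drive the decoding.
with wave.open("example.wav", "rb") as wf:
    n_channels = wf.getnchannels()    # 1 = mono, 2 = stereo
    sample_width = wf.getsampwidth()  # bytes per sample: 1 = 8-bit, 2 = 16-bit
    frames = wf.readframes(wf.getnframes())

if sample_width == 1:
    # 8-bit PCM: unsigned bytes (0..255)
    samples = struct.unpack(f"{len(frames)}B", frames)
else:
    # 16-bit PCM: signed little-endian 16-bit integers
    samples = struct.unpack(f"<{len(frames) // 2}h", frames)

# Stereo samples are interleaved L, R, L, R, ...
left = samples[0::n_channels]
right = samples[1::n_channels] if n_channels == 2 else left
print(left, right)
```

The key point for the thread: nothing here is hardcoded to 8-bit; the channel count and sample width come from the file header, which is exactly the role the VI's "sound format" output plays.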
02-01-2015 03:28 PM
That wasn't the question. I asked what determines that you can only accept the one type of input. You didn't begin to address that.
The code is commented. It should be easier than most to look at and determine what code makes it only work with 8-bit inputs.
Again, put SOME effort into this. You've put none in so far. If this is how you intend to continue: http://www.ni.com/alliance/ Go pay someone to do it for you.
Otherwise, look at the code. Come back when you have an IDEA of what might be limiting the input to an 8-bit stream. What in the code makes this the case? We're all fully aware it is the case. Stating that won't show us you've put any time into understanding the code. So far, you're asking people to do it for you while making it abundantly clear you aren't willing to do any of the work yourself. That's beyond disrespectful.
02-01-2015 04:40 PM - edited 02-01-2015 04:45 PM
The sound format describes it. In the first for loop (left-hand side) it converts the 8-bit decimal number to binary, and then in the other for loop it converts back to an 8-bit decimal number.
I hope that answers your question.
Also, I did not ask for a solution; I only asked for guidance.
02-02-2015 02:13 AM - edited 02-02-2015 02:13 AM
Hi Joseph,
in the first for loop (left-hand side) it converts the 8-bit decimal number to binary
Wrong. It converts from generic decimal to binary. In the loop there is NO limitation to U8 inputs…
The limitation is done elsewhere!
in the other for loop it converts back to an 8-bit decimal number
Yes. And 10 years ago I hardcoded that part, because back then I was expecting U8 values…
You need to spot the limiting parts and change them to support 16-bit waveforms too. You might even make them respect the "sound format" output…