08-30-2008 12:42 PM
There is no need to convert to binary to join U8 values into U16 values. You are still doing it the hard way.
Please see the attached example that shows what I am talking about. It simulates the sending of the data and shows two methods to convert it to a U16 array. I thought of an even easier way to convert the string - just use Type Cast.
Hopefully this will solve your problem. I don't know a better way to explain how to do it.
Bruce
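The attachment itself isn't available, but the two conversion methods Bruce describes can be sketched in Python. This is an illustrative sketch, not the actual LabVIEW diagram: the "Join Numbers" style combines each high/low byte pair explicitly, while the "Type Cast" style reinterprets the raw buffer as U16s in one step (LabVIEW's Type Cast treats flattened data as big-endian, which is what the `>` format assumes here). The sample bytes are made up for the example.

```python
import struct

# Simulated received data: four U8 values forming two U16 values.
u8_bytes = bytes([0x12, 0x34, 0xAB, 0xCD])

# Method 1: "Join Numbers" style - shift each high byte and OR in the low byte.
joined = [(u8_bytes[i] << 8) | u8_bytes[i + 1]
          for i in range(0, len(u8_bytes), 2)]

# Method 2: "Type Cast" style - reinterpret the whole buffer as
# big-endian U16 values in a single operation.
typecast = list(struct.unpack(">%dH" % (len(u8_bytes) // 2), u8_bytes))

print([hex(v) for v in joined])    # ['0x1234', '0xabcd']
print(typecast == joined)          # True - both methods agree
```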
09-02-2008 03:23 AM
Thanks, Bruce. I discovered the magic of Decimate Array. The problem is I am coding for FPGA too, so I am manipulating everything in binary in my mind. Hmmm, LabVIEW makes life simple.
I now realise that I was complicating things. Good learning, though.
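The Decimate Array approach muks mentions splits an interleaved byte array into separate high-byte and low-byte streams before joining them. A rough Python equivalent of that wiring, with made-up sample data, assuming the high byte arrives first:

```python
# Interleaved byte stream: high, low, high, low, ...
data = [0x12, 0x34, 0xAB, 0xCD, 0x00, 0xFF]

# Decimate Array with two outputs: every other element into each stream.
high = data[0::2]   # elements 0, 2, 4, ...
low  = data[1::2]   # elements 1, 3, 5, ...

# Join Numbers on each high/low pair to rebuild the U16 values.
u16 = [(h << 8) | l for h, l in zip(high, low)]
print([hex(v) for v in u16])   # ['0x1234', '0xabcd', '0xff']
```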
09-02-2008 04:08 AM
Very well, muks. But tell us about your FPGA app. Do you use an NI kit or another vendor's kit?
And are you going to implement such an image-processing algorithm on the FPGA? It would be nice to give us an outline (without exposing your secret 🙂)
09-02-2008 05:34 AM
Hi Waleed,
I have 4 large blocks. One is the sensor setup, where I have 8 signals (a combination of I/Os and clocks). The second is the CCD signal processor, which outputs 12-bit values. I use 2 SRAMs for memory and buffering. Then I have a USB controller. All of this is controlled by my FPGA. I am through with generating the required clocks for the sensor and I am getting a nice image, but it is low in resolution, since until now I was omitting the 4 LSBs from the 12-bit input. Ideally I wish there were a way to handle a 12-bit image, but as far as I have seen, LabVIEW supports 8-, 16- and 32-bit images only.
On top of this, I cannot do a BCG lookup on a 16-bit image. I will post a lot more details when I am done with the project.
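The bit arithmetic behind the resolution loss described above can be sketched as follows. This is an illustrative sketch with a made-up pixel value, not code from the project: dropping the 4 LSBs squeezes a 12-bit sample into an 8-bit image, while shifting left by 4 instead keeps all 12 bits of dynamic range inside a 16-bit image.

```python
# A hypothetical 12-bit sample from the CCD signal processor (range 0..4095).
pixel12 = 0xABC

# What the post describes: drop the 4 LSBs so the value fits an 8-bit
# image. Fine detail in the low bits is lost.
pixel8 = pixel12 >> 4          # 0xAB

# Alternative: shift left by 4 so the 12-bit range spans the top of a
# 16-bit image, preserving all 12 bits of information.
pixel16 = pixel12 << 4         # 0xABC0

print(hex(pixel8), hex(pixel16))
```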