04-26-2012 07:40 AM
Hey,
Thanks!
I think I know what might be going wrong, but I would like to do some tests to fully reproduce the issue.
It has to do with how you "use" the borders in your original bmp.
Can you upload an example bmp (together with the used settings/steps you followed)?
Not that I directly need it, but I don't see any pdf in the zip-files.
04-26-2012 12:54 PM
Hmm, what exactly do you need besides the PDF? I didn't understand :S
I can't attach the PDF here, it's too heavy, but I found it at this link:
04-27-2012 04:14 AM
Hello VGans,
Can you provide me with a sample image (bmp) that you used for the tests?
04-27-2012 10:20 AM
There you go! 🙂
04-30-2012 03:07 AM
Hello VGans,
So as expected it had something to do with the borders.
In your code it seems like you perform the same operation regardless of which position in the image you are at.
You also keep your 24-bit pixmap 2D array the same size.
This is where it goes wrong when you're on the border.
As far as I can see, it is possible to refer to elements of an array that do not exist (or are not defined) inside that array.
Imagine you have an 800x600 image and, for simplicity, an 800x600 pixmap array.
Take for example the Erosie.vi:
Imagine that you're at the rightmost edge, at index (799, x).
If you look at your code, your Index Array function can then try to read (799+1, x).
Because the Index Array function will not produce an error, it will simply assign "something" to that read-out element.
This "something" might not be what you expect it to be.
These "unwanted/unknown elements" can cause your borders.
In general, when you're doing image processing in courses (as far as I remember from my studies), your professor should have mentioned something about boundary conditions.
The two simplest approaches to take are as follows:
- Implement some kind of case structure that performs a different comparison when you're at a border.
- Extend your array (e.g. add an extra border around it that surrounds the original image).
Then perform your vision algorithm on the extended image.
Then output only the selection that corresponds to the original image (ignoring the results your vision algorithm produced in the extra border positions). See the sketch further down.
Normally the latter approach will be more efficient because you always perform the same algorithm (although it might consume more memory).
The first approach has to check some kind of case selector value during each iteration, so it will most likely be slower.
PS: Quite a lot of algorithms exist for creating this extra border.
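To make the second approach concrete, here is a minimal sketch in Python/NumPy (again only an illustration under the same 3x3-minimum assumption; in LabVIEW you would build the padded array with the array functions instead):

```python
import numpy as np

def erode_padded(img, pad_mode="edge"):
    """Pad the image by one pixel, run the 3x3 minimum on the padded copy,
    and return only the region that corresponds to the original image."""
    padded = np.pad(img, 1, mode=pad_mode)
    out = np.empty_like(img)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            # (r, c) in the original is (r + 1, c + 1) in the padded array,
            # so every 3x3 neighbourhood read stays inside the bounds.
            out[r, c] = padded[r:r + 3, c:c + 3].min()
    return out

white = np.full((4, 5), 255, dtype=np.uint8)
print(erode_padded(white))  # stays all 255: no dark frame anymore
```

The pad_mode argument is where those border-creation algorithms come in: pad with a constant value, replicate the edge pixels, mirror them, and so on.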
Is this explanation a bit clear to you?
05-03-2012 11:36 AM
Thanks a lot for your help!! It is clear enough!
Even though I already took my exam, I'm glad I could understand what went wrong, and maybe it will be helpful for other people!
So thanks again 🙂
05-09-2012 09:05 AM
Hi, can you tell me what kind of algorithm you use in the actual process to scan the whole picture pixel by pixel? It looks complicated to me when it jumps between rows, up and down. Is this a zig-zag scan?
Thanks in advance
05-11-2012 02:50 AM
Well, I don't think it's the most efficient, but I just used two nested for-loops: the outer one goes over each row, and the inner one goes over each column in that row, so it simply visits every element of that row.
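Written out as Python-style pseudocode it would look roughly like this (just the loop order, not my actual LabVIEW code):

```python
def scan_order(img):
    visited = []
    for r in range(len(img)):         # first for-loop: each row, top to bottom
        for c in range(len(img[r])):  # second for-loop: each element of that row
            visited.append((r, c))    # the per-pixel work happens here
    return visited

print(scan_order([[0, 0, 0], [0, 0, 0]]))
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```

So in this sketch every row is walked in the same left-to-right direction (a plain raster scan) rather than alternating directions like a zig-zag.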
Is that what you asked? 🙂