07-31-2006 02:58 PM
Hello kupikupi,
Thank you for contacting National Instruments. The easiest way to detect motion is to subtract the two images: the result contains the portion of the second image that differs from the first, and from there you can determine what type of motion has occurred. If you plan to write this code using Vision Builder for Automated Inspection (VBAI), you can refer to the following example on subtracting two images in VBAI:
Performing an Image Subtraction in Vision Builder for Automated Inspection
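In case it helps to see the same idea outside of VBAI, here is a minimal sketch in Python with OpenCV; the file names and the threshold value are placeholders you would tune for your own camera and scene:

```python
import cv2

# Load two consecutive frames as grayscale images.
# (File names are hypothetical placeholders.)
frame_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# The absolute difference highlights pixels that changed between frames.
diff = cv2.absdiff(frame_a, frame_b)

# Threshold the difference so small lighting variations are ignored;
# the value 25 is an assumption to tune for your setup.
_, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

# Any nonzero pixels left in the mask indicate where motion occurred.
if cv2.countNonZero(motion_mask) > 0:
    print("Motion detected")
```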
Regarding text recognition, VBAI has a very handy feature for this called "Read Text", part of the Identify Parts set of inspection steps. It lets you "read" text found within a search region. You first train the step with a known set of characters that you expect it to detect; once it is properly trained, it compares the characters it sees in each successive image against the character set you supplied. For more information on this feature, please refer to the VBAI Online Help.
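VBAI's Read Text step is a built-in, trained tool, but if you ever need the same idea outside VBAI, here is a rough analogue sketched in Python with Tesseract (via pytesseract). Note this is only a loose approximation: the character whitelist restricts which characters Tesseract may report rather than training a character set the way Read Text does, and the crop coordinates are hypothetical:

```python
from PIL import Image
import pytesseract

# Crop the image down to the search region first
# (the coordinates here are made up for illustration).
image = Image.open("label.png").crop((100, 50, 400, 100))

# Restrict recognition to the characters we expect to see,
# roughly analogous to supplying a known character set.
expected_chars = "0123456789ABCDEF"
text = pytesseract.image_to_string(
    image,
    config=f"--psm 7 -c tessedit_char_whitelist={expected_chars}",
)
print(text.strip())
```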
Regards,
Mike T
National Instruments
08-03-2006 08:44 AM
Hey kupikupi,
I can now see where you are trying to go with this, and much of the work will depend on how much flexibility you give the glove user. Ideally, the user would sign in such a fashion that the points of interest (the fingers) always follow defined paths, making it easy to compare each profile against a database of defined moves.
To develop a strategy for programming this translator, first define what constraints you can place on the glove user. For example, let's say one of the words requires the index and middle fingers of the gloved hand to touch the palm of the other hand. Several things will vary in this move: the rotation of the wrist, the starting point of the move, the ending point of the move (where the other hand is positioned), the angle along which the fingers travel, and where all of this motion takes place relative to the camera. Allowing more flexibility in these variables greatly increases the complexity of your algorithm and reduces how finely you can distinguish the defined moves in your database. Which of these variables can you constrain to help simplify the algorithm? Constraining them will not only reduce code complexity, it will also reduce the chance of two similar move profiles overlapping in the database and producing improperly translated words.
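To make the matching idea concrete, here is a minimal sketch of comparing a tracked fingertip path against a database of defined moves, assuming you already have (x, y) coordinates from your tracking step. The resampling length, the normalization, and the nearest-neighbor distance are all assumptions for illustration, not a VBAI feature:

```python
import numpy as np

def resample(path, n=32):
    """Resample a 2-D path to n evenly spaced points so that paths
    of different lengths can be compared point-for-point."""
    path = np.asarray(path, dtype=float)
    t = np.linspace(0, 1, len(path))
    t_new = np.linspace(0, 1, n)
    return np.column_stack([np.interp(t_new, t, path[:, i]) for i in range(2)])

def normalize(path):
    """Remove translation and scale so the comparison tolerates
    different starting points and distances from the camera."""
    path = path - path.mean(axis=0)
    scale = np.linalg.norm(path) or 1.0
    return path / scale

def classify(observed, database):
    """Return the database word whose template path is closest to
    the observed path (nearest neighbor, Euclidean distance)."""
    obs = normalize(resample(observed))
    best_word, best_dist = None, float("inf")
    for word, template in database.items():
        d = np.linalg.norm(obs - normalize(resample(template)))
        if d < best_dist:
            best_word, best_dist = word, d
    return best_word, best_dist
```

Notice that this sketch normalizes away the starting point and the scale, but not wrist rotation; that is exactly the kind of variable worth constraining, since trying to remove rotation in software would make similar move profiles even harder to tell apart.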
Regards,
Mike T
National Instruments