08-20-2010 09:37 AM
Hello. I have a JAI BM-141GE camera, and for some reason when I change to the 10- and 12-bit packed and unpacked pixel formats using Measurement and Automation Explorer and the IMAQdx driver, it gives slanted and overall "weird" images. I talked with someone from JAI and they said it looks like a National Instruments interpretation problem, since there is no problem when I use the JAI Camera Control Tool. I am attaching some images that illustrate what I mean; the 8-bit image seems to be just fine. I would like to eventually use LabVIEW and the IMAQdx driver, so I would like to figure out what is going on. I am also afraid there could be other problems that I am currently unaware of...
Also, as you can see, sometimes there are black lines in the images that appear to be caused by missing packets. I had someone look with me, and my Ethernet card settings appear to be correct (jumbo frames, etc.). It also seems to occur with the JAI Camera Control Tool software (the "Missing packets in last image" parameter was as high as 104 when I looked at it in their software while continuously acquiring images). The only solution JAI suggested was replacing my cable. I was wondering if anyone has seen and fixed this problem. Thanks in advance for any help.
Kevin
08-23-2010 11:27 AM
Hi Kevin,
I've asked someone at JAI to look into this, as I don't have the same JAI camera model to try it with. I'd be a little skeptical that the problem is NI's driver misinterpreting the data: the GigE Vision standard defines the image formats fairly clearly, and we test with dozens of other camera vendors and correctly interpret the same pixel formats your JAI camera is sending.
For the missing-packets issue, what Ethernet card are you using?
Eric
08-24-2010 08:39 AM
Hello Eric. I was just wondering if you had heard back from anyone at JAI on why the 10- and 12-bit images look so different from the 8-bit images.
Also, my laptop is using a "Realtek RTL8168D/8111D Family PCI-E Gigabit Ethernet NIC (NDIS 6.20)", if that means anything. I am also wondering if I inadvertently messed something up recently by setting up a local area network (UDP) so I could play a game with others through the Ethernet port. In particular, I think I needed to enable Internet Protocol Version 6 (TCP/IPv6) and Internet Protocol Version 4 (TCP/IPv4), but the JAI GigE Vision Filter Driver is also installed, so shouldn't it just use that driver? I am attaching some screenshots that might be helpful.
Thanks for anyone's help. I really like this camera (good sensitivity in NIR, a lot of features, etc.), but if I can't get these problems figured out I don't see how I can use it reliably!
Kevin
08-24-2010 03:22 PM
Here are some more "weird" images that were obtained in MAX. It seems that just by changing the packet size I can get "slanted" images even when I am using the 8-bit Monochrome pixel format. The images that I originally posted had a packet size of 7996. Even more weird, the degree of image "slanting" seems to be periodic. For example, the attached image with a packet size of 8996 appears unslanted, but decreasing the value by just 1 to 8995 gives a much different result... If you keep decreasing the value by one, it gets better and better until, at 8988, it is back to the same as it was at 8996 (it repeats every 8). This isn't just for these values; this behavior seems to occur at all packet sizes.
I noticed that this cyclic behavior is also present in the 10- and 12-bit packed pixel formats (repeating every 12), and the unpacked 10- and 12-bit pixel formats cycle as well but never produce a "good" image. Where the cycles overlap you can get a "good" image for the 8-bit, 10-bit packed, and 12-bit packed formats at a single value (8940), but the unpacked pixel formats are never "good".
I also periodically get the thick black lines that I think are caused by missing packets.
Does anyone have any idea on what is going on? Thanks in advance for any suggestions.
Kevin
08-24-2010 03:30 PM
Hi Kevin,
I'm still waiting to hear back from JAI. It sounds like the camera might have internal restrictions on the packet size alignment and perhaps does not propagate them out properly, so that IMAQdx thinks the camera is using a different packet size than it actually is. If you use JAI's software, can you find out what packet size it is using when it produces the good image? If you use that same value in IMAQdx, does it work properly?
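For what it's worth, here is a rough way to picture that failure mode: if IMAQdx reassembles the image assuming a different payload size per packet than the camera actually sent, every packet's data lands at a slightly wrong offset and the error accumulates down the frame, which would show up as a slanted image. A minimal Python sketch of that arithmetic (the payload values are made up for illustration, not read from your camera):

```python
# Illustration only: how a mismatch between the packet payload size the
# camera uses and the size the driver assumes shears an image. The payload
# values below are made-up example numbers, not read from the camera.

WIDTH = 1392            # pixels per line (BM-141GE, 8-bit mono)
CAM_PAYLOAD = 8940      # bytes per packet the camera actually sends (example)
DRV_PAYLOAD = 8939      # bytes per packet the driver assumes (example)

def row_shift(packet_index):
    """Horizontal pixel offset of the data placed by packet `packet_index`."""
    byte_error = packet_index * (CAM_PAYLOAD - DRV_PAYLOAD)
    return byte_error % WIDTH

for p in range(0, 200, 50):
    print(f"packet {p:3d}: image data shifted by {row_shift(p)} pixels")
```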
Eric
08-24-2010 04:12 PM
It seems that their software uses a packet size of 8940, but even when you change the value in their software it doesn't actually change. When I change this value in MAX, it looks like 8940 works for the 8-bit and the 10- and 12-bit packed formats, but not the unpacked ones.
I am also wondering if I can change some of the packet parameters in MAX to hopefully reduce the number of lost packets. Can someone suggest how I should change the parameters to make some improvements? Please see the attached image. Thanks.
Kevin
08-26-2010 05:01 PM
Hello Eric. I think I discovered one of the problems with the help of Gordon. Evidently, when I increment the packet size in MAX it should be incrementing by 8 for the 8-bit pixel format and by 12 for the others. However, my camera appears to have older firmware (its XML file is wrong), so it was only incrementing by 1. That is why Gordon was only able to reproduce the slanted-image problem the first time and not afterwards: afterwards the packet size happened to be a correct multiple of 12.
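For reference, the rounding that seems to be needed, written as plain Python arithmetic rather than LabVIEW (the increments of 8 and 12 are just the values observed above, so treat them as assumptions for this camera):

```python
def legal_packet_size(requested, increment):
    """Round a requested packet size down to the camera's increment.
    Increments of 8 (8-bit mono) and 12 (packed formats) are the values
    observed above; treat them as assumptions for this particular camera."""
    return (requested // increment) * increment

print(legal_packet_size(8995, 8))    # -> 8992 (8-bit mono)
print(legal_packet_size(8995, 12))   # -> 8988 (10/12-bit packed)
```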
However, I am still having a problem with the missing packets... One interesting thing I noticed is that the frequency of missing packets seems to vary as I change the amount of time between pulses. It seems to be worst for delay times of 33 ms to 70 ms and drops off after that, but there is always some degree of missing packets. I have a simple LabVIEW program that generates a single pulse and waits a specified amount of time before generating another pulse. Why would the frequency of missing packets vary with the time between pulses? Gordon suggested using a different computer to see if my network card is the problem. I don't think it is, though, because I have used the AVT Prosilica EC660 GigE camera and have never had a problem with missing packets. Then again, the AVT camera has a 659 x 459 resolution and this JAI camera is 1392 x 1040, so maybe that is why? I need to use a laptop for my application, so I don't know if I can get a better network card even if that is the problem. Are there any MAX settings I might be able to change to see if that will reduce or eliminate the missing packets?
Also, I am having trouble understanding why the image readout time for this camera is so large. The camera specifies a maximum frame rate of 30 frames per second, so I would expect the readout time to be 1/30 s ≈ 33.3 ms. This seems pretty close when I measure it in MAX: for example, I set MAX up to accept the pulse-width-controlled trigger, generate a series of pulses of very short duration (1 ms), and wait 36 ms before generating the next pulse. When I do this, the frames-per-second readout in the lower right of MAX says 28-29 fps. However, when I use LabVIEW and measure how long the "Get Image" VI takes, it is much longer than the expected 33.3 ms. The average over many runs is ~72 ms, and it can reach as much as ~150 ms! That gives a frame rate of ~14 fps, about half of what it should be. I am using the same LabVIEW code to generate the pulses in both cases; the only difference is that I remove the camera acquisition code when I make the 28-29 fps measurement in MAX. I am attaching a screenshot of the LabVIEW program, centered on where I call the Get Image VI and record the readout time.
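For reference, here are the numbers I am comparing, written out as a quick sanity check (plain Python arithmetic, nothing camera-specific):

```python
# Quick sanity check of the numbers above: the expected frame period at the
# camera's rated 30 fps versus the effective rate implied by the measured
# Get Image times.

max_fps = 30.0
print(f"expected period: {1000.0 / max_fps:.1f} ms")      # ~33.3 ms

for measured_ms in (33.3, 72.0, 150.0):                    # values seen in LabVIEW
    print(f"{measured_ms:5.1f} ms per image -> {1000.0 / measured_ms:4.1f} fps")
```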
Thanks again for your help with these 2 remaining problems I have with this camera.
Kevin
08-26-2010 06:17 PM
@kbaker wrote:
Hello Eric. I think I discovered one of the problems with the help of Gordon. Evidently, when I increment the packet size in MAX it should be incrementing by 8 for the 8-bit pixel format and by 12 for the others. However, my camera appears to have older firmware (its XML file is wrong), so it was only incrementing by 1. That is why Gordon was only able to reproduce the slanted-image problem the first time and not afterwards: afterwards the packet size happened to be a correct multiple of 12.
Glad you were able to get to the bottom of this. Will JAI send you a new firmware that fixes this?
kbaker wrote: However, I am still having a problem with the missing packets... One interesting thing I noticed is that the frequency of missing packets seems to vary as I change the amount of time between pulses. It seems to be worst for delay times of 33 ms to 70 ms and drops off after that, but there is always some degree of missing packets. I have a simple LabVIEW program that generates a single pulse and waits a specified amount of time before generating another pulse. Why would the frequency of missing packets vary with the time between pulses? Gordon suggested using a different computer to see if my network card is the problem. I don't think it is, though, because I have used the AVT Prosilica EC660 GigE camera and have never had a problem with missing packets. Then again, the AVT camera has a 659 x 459 resolution and this JAI camera is 1392 x 1040, so maybe that is why? I need to use a laptop for my application, so I don't know if I can get a better network card even if that is the problem. Are there any MAX settings I might be able to change to see if that will reduce or eliminate the missing packets?
GigE Vision can be very demanding of your network interface. To ensure as low latency as possible, typically the camera will default to sending the image data as fast as possible. So even though the aggregate data rate for a VGA camera at 30 frames/sec might be <100Mbit, the instantaneous burst of the image transmission might consume the entire 1000Mbit available. However, not all cameras have the same read-out speed and some may be limited internally such that they don't actually ever consume the whole wire speed.
Many typical network cards cannot handle wire-rate gigabit ethernet packets very well. This might be the case you are experiencing. One option is to try to reduce the rate of image transmission. As I mentioned above, the frame rate of the camera usually has little to do with the peak transmission rate. However, GigE Vision does define a mechanism for throttling data transfers. We expose this in IMAQdx as a "Max Bandwidth Desired" feature. You could try setting this to a lower value that still allows you to hit the same frame rate you are currently using but with a little extra latency. However, if you are right now losing packets and having some resent, you might already be getting latency from that.
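As a rough way to size a throttle value, here is the arithmetic sketched in Python (it ignores GVSP/UDP/IP header overhead and assumes 8-bit mono at your camera's full resolution, so treat the numbers as ballpark only):

```python
# Rough sizing for a bandwidth throttle: how much sustained throughput the
# stream actually needs, versus the full 1000 Mbit/s burst an unthrottled
# camera may use. Header overhead is ignored.

width, height = 1392, 1040        # BM-141GE resolution
bytes_per_pixel = 1               # 8-bit mono; packed 10/12-bit would be 1.5
fps = 30.0

frame_bits = width * height * bytes_per_pixel * 8
average_mbps = frame_bits * fps / 1e6
print(f"average stream rate: {average_mbps:.0f} Mbit/s")

# A throttle somewhere between the average rate and wire speed trades a
# little per-frame latency for gentler bursts on the NIC.
for limit_mbps in (1000, 700, 500, 400):
    frame_time_ms = frame_bits / (limit_mbps * 1e6) * 1000
    ok = "ok" if frame_time_ms < 1000.0 / fps else "too slow for 30 fps"
    print(f"limit {limit_mbps:4d} Mbit/s -> {frame_time_ms:5.1f} ms/frame ({ok})")
```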
kbaker wrote: Also, I am having trouble understanding why the image readout time for this camera is so large. The camera specifies a maximum frame rate of 30 frames per second, so I would expect the readout time to be 1/30 s ≈ 33.3 ms. This seems pretty close when I measure it in MAX: for example, I set MAX up to accept the pulse-width-controlled trigger, generate a series of pulses of very short duration (1 ms), and wait 36 ms before generating the next pulse. When I do this, the frames-per-second readout in the lower right of MAX says 28-29 fps. However, when I use LabVIEW and measure how long the "Get Image" VI takes, it is much longer than the expected 33.3 ms. The average over many runs is ~72 ms, and it can reach as much as ~150 ms! That gives a frame rate of ~14 fps, about half of what it should be. I am using the same LabVIEW code to generate the pulses in both cases; the only difference is that I remove the camera acquisition code when I make the 28-29 fps measurement in MAX. I am attaching a screenshot of the LabVIEW program, centered on where I call the Get Image VI and record the readout time.
IMAQdx itself should add very little latency to the image readout time. I suspect there are a couple of possible reasons why you might be seeing a large, non-deterministic latency.
One possibility is that you are losing packets and having resends issued. You want to get your setup to the point where no packets ever need to be resent.
The other possibility, which I can't tell without seeing your whole program, is that you may not be getting the buffers you are asking for. If you are continuously triggering your camera and your acquisition loop falls behind (or your trigger causes more than one frame to be acquired) while your requested buffer index only increases by one each iteration, you will eventually request buffers that are no longer in memory. When this occurs, IMAQdx has a configurable behavior; the default is to ask for the "Next" buffer instead, i.e. to wait for a new image to arrive after the latest one it already has in memory. Depending on how long ago the last image was triggered, you then have a variable wait until the next one is triggered. To check for this, I would either inspect the buffer number coming out of IMAQdx or set the overwrite mode attribute to "Fail", which causes it to generate an error if the buffer you request is not available. If you generate a single trigger each iteration, acquire the resulting image, and make sure you are done before the next image is triggered, you should never hit this condition.
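If it helps, here is that buffer-number check sketched in Python rather than LabVIEW (grab_buffer is a hypothetical stand-in for your Get Image call: it takes the buffer number you request and returns the buffer number you actually got):

```python
# Sketch of the buffer-number check described above. `grab_buffer` is a
# hypothetical stand-in for the IMAQdx Get Image call.

def acquire_loop(grab_buffer, num_images):
    requested = 0
    for _ in range(num_images):
        image, actual = grab_buffer(requested)
        if actual != requested:
            # The driver skipped ahead: the loop fell behind (or the trigger
            # produced extra frames) and the requested buffer was overwritten.
            raise RuntimeError(
                f"requested buffer {requested} but got buffer {actual}; "
                "the acquisition loop is falling behind")
        # ... process `image` here ...
        requested += 1
```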
Hope this helps,
Eric
08-31-2010 12:08 PM
Thanks again Eric. I have found out some more about these problems, but there are still some items that I don't understand:
Missing packets: I tried the JAI camera on a co-worker's computer and everything seemed fine! There were no resend packets requested or received, and it never lost a packet in NI Measurement and Automation Explorer. Then I noticed that his computer did not have the JAI filter driver, or any other GigE driver, installed on its network card. So I uninstalled the JAI filter driver and any other GigE drivers on my computer. This drastically reduced my missing packets, but I still get them every once in a while... Before, with the JAI filter driver installed, the resend-packets-requested count just kept climbing, and every once in a while a packet was lost. Now, with those drivers uninstalled, the resend count stays the same for a long time and only occasionally increases; it is not nearly as often as before, and it seems to happen in spurts. It very rarely loses packets now, but every once in a while it still happens... As you said before, I am trying to set things up so that no resend packets are ever requested. I tried lowering the expected bandwidth usage, but it doesn't seem to solve the problem. Any other ideas why I am still getting resend requests and lost packets?
Frame rate: As I mentioned before, I am calculating these frame rates using an external trigger whose pulse width controls the exposure. The readout time is now much better than before. The readout times used to be very spread out; now they are much closer together, but for some reason they settle at different values at different times and under different settings (e.g. pulse duration and time between pulses).
Currently, another pulse is not generated until after the Get Image VI returns the current buffer number. As you suggested, I also checked that the buffer number in matches the buffer number out, and it does (I compare the buffer numbers and the program stops if they are not the same). I am not sure how to set the overwrite mode attribute to "Fail" as you mentioned, though.
For some reason, when I use a pulse width of 10 ms and an additional wait of 0 ms, I get an average readout time of ~33 ms, which is close to what it should be at the camera's maximum frame rate (30 fps => 33.3 ms). But every now and then it gives a much higher instantaneous value (e.g. 150 ms). I also just closed the program and reopened it, and instead of going back to an average readout time of ~33 ms it went to ~45 ms.
Also, when I increase the time between pulses (e.g. 20 ms, 50 ms, 100 ms), I get an average readout time of ~45 ms. This wait is just a Wait (ms) VI placed after my take-image VI inside the while loop.
Even stranger, when I change the pulse time to 1 ms and keep the wait at 0 ms, I also get an average readout time of ~45 ms. This seems to happen for any pulse time of about 7 ms or less. Even when I switch the pulse width inside the while loop from 1 ms to 10 ms, the readout times at 1 ms are around 45 ms, but when it changes to 10 ms they go back down to ~33 ms where they should be.
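To be concrete, this is roughly the measurement I am making, sketched in Python rather than LabVIEW (trigger_pulse and get_image are hypothetical stand-ins for my pulse-generation code and the Get Image VI):

```python
import time

# Rough sketch of the timing measurement: fire one exposure pulse, block on
# the image, record how long the grab took, then wait before the next pulse.
# `trigger_pulse` and `get_image` are hypothetical stand-ins for the LabVIEW
# pulse-generation code and the Get Image VI.

def measure_readout(trigger_pulse, get_image, pulse_ms, wait_ms, n=100):
    times_ms = []
    for _ in range(n):
        trigger_pulse(pulse_ms)                 # fire one exposure
        start = time.perf_counter()
        get_image()                             # blocks until the frame arrives
        times_ms.append((time.perf_counter() - start) * 1000.0)
        time.sleep(wait_ms / 1000.0)            # extra delay between pulses
    times_ms.sort()
    print(f"pulse {pulse_ms} ms, wait {wait_ms} ms: "
          f"median {times_ms[len(times_ms) // 2]:.1f} ms, "
          f"max {times_ms[-1]:.1f} ms")
    return times_ms
```

Looking at the median and maximum separately (rather than just the average) helps me tell the occasional 150 ms spikes apart from the cases where the whole distribution shifts to ~45 ms.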
Any idea what could be causing these unexpected behaviors? In my application I need to acquire a lot of images in a short amount of time (e.g. pulse times between 1-5 ms, with the shortest possible readout times), so I would like to understand what is going on. It almost seems that the readout time is ~45 ms most of the time under various conditions. Thanks again for your suggestions/comments; they are greatly appreciated.
Kevin