
LabVIEW gets Fatal Internal Error: "hObjList.cpp", line 91

LabVIEW 8.6.1. The application runs on various dual-core Intel systems. It makes heavy use of multiple threads, LabVIEW objects, queues, notifiers, etc. The problem is reproducible about 20% of the time by heavily loading the application, but it takes a while to build up the load. This happens in both source and built EXEs. The initial problem is announced by a popup saying "LabVIEW Internal Error: "hObjList.cpp", line 91<next line>LabVIEW 8.6.1"

 

[Attached screenshot: hObjList error.PNG]

When this is clicked, it is usually, but not always, followed by a popup (I suppose from the C++ runtime) complaining about a pure virtual function call. This popup vanishes on its own so quickly that it is difficult to get a screen shot of it.

 

This is a severe problem for us.  We can't release software with this known problem.

 

Note: We have tried LV2009. It fails even to read the project file (we've filed a separate ticket on this), so we do not have the option of moving to LV2009 at this time.


Hello jodyle,

 

Could you post the crash file so we can investigate this further for you?

 

Thanks

Andy Chang
National Instruments

Hi Andy. Here is the file. Thanks for looking into this; it's causing us real pain.

 

John Doyle


Hello,

 

I have filed a Corrective Action Request on this specific issue (CAR#: 183814). Is there any possibility of narrowing this issue down to a specific item in your code that causes this behavior? The more reproducible it is, the easier it will be to track down the issue.

 

-Zach


Zach, thanks for looking at this.

 

Keep in mind that this application comprises more than 2,500 VIs and has many loops running in parallel. I will try to explain the pertinent parts without going into unnecessary detail.

 

Requests for a certain operation are placed into a queue, say Q1, for processing. The reader of Q1 checks these requests for validity and places them into Q2. The requests in Q2 must be sorted from time to time. To manipulate entries in Q2 without interference from multiple threads, there is only one reference to Q2, and it is kept in a so-called "singleton queue", Q3, which acts as a kind of mutex. Any thread wishing to manipulate Q2 must remove the reference from Q3, perform the manipulation, and then place the reference back into Q3. Any other thread that wishes to manipulate Q2 blocks on the Dequeue Element while someone else has the reference checked out.
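
For anyone following along without the block diagrams, here is a minimal sketch of that "singleton queue" pattern in Python (the names q2, q3, and with_q2 are mine, not from our code; Python's queue.Queue stands in for a LabVIEW queue refnum):

import queue

q2 = queue.Queue()           # the request queue whose contents must be manipulated atomically
q3 = queue.Queue(maxsize=1)  # the "singleton queue" acting as a kind of mutex
q3.put(q2)                   # the single Q2 reference lives in Q3 whenever nobody holds it

def with_q2(manipulate):
    # Check the Q2 reference out of Q3, run the manipulation, and always check it back in.
    ref = q3.get()           # blocks while another thread has Q2 checked out
    try:
        manipulate(ref)
    finally:
        q3.put(ref)          # return the reference so blocked threads can proceed

In LabVIEW terms, the blocking q3.get() is the Dequeue Element that everyone waits on while the reference is checked out.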

 

There is a front-panel button that requests the cancellation of all outstanding requests. When this Cancel button is pressed, a message is sent from the UI event loop to a command loop (another queue) to perform the cancel. This consists of the following steps (roughly sketched in the code after the list):

  • Remove all requests from Q1
  • Remove the Q2 reference from Q3
  • Remove all requests from Q2 that satisfy certain criteria
  • Put the Q2 reference back into Q3
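
A rough sketch of those four steps, again in Python with made-up names (should_cancel stands for whatever the "certain criteria" are):

import queue

def cancel_all(q1, q3, should_cancel):
    removed = []
    while True:                              # 1. remove all requests from Q1
        try:
            removed.append(q1.get_nowait())
        except queue.Empty:
            break
    q2 = q3.get()                            # 2. remove the Q2 reference from Q3
    kept = []
    while True:                              # 3. remove requests from Q2 that satisfy the criteria
        try:
            r = q2.get_nowait()
        except queue.Empty:
            break
        (removed if should_cancel(r) else kept).append(r)
    for r in kept:
        q2.put(r)
    q3.put(q2)                               # 4. put the Q2 reference back into Q3
    return removed                           # the array processed by the loop described next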

 

The removed requests are placed in an array that is processed in a for loop that

  • Logs a message that the request has been cancelled ("Canceling request N")
  • Places a completion code ("Cancelled") in a queue associated with the request
  • Kills a notifier associated with the request. This signals that the request is "complete" 

 

A separate VI is waiting on the notifier.  When the notifier is killed, this VI looks at the completion code and logs a message that the request was not performed ("Request N complete (cancelled)").  I should point out that the VI waiting on the notifiers is a reentrant VI with shared clones and that there will be one of these for each group of 4 or 5 requests.
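
Sticking with the Python stand-in, the completion handshake for one cancelled request looks roughly like this. A threading.Event approximates the LabVIEW notifier, "killing" the notifier is approximated by setting the event, and Request, cancel_one, and waiter are my names:

import threading, queue

class Request:
    def __init__(self, n):
        self.n = n
        self.completion = queue.Queue()    # per-request queue for the completion code
        self.notifier = threading.Event()  # stands in for the request's notifier

def cancel_one(req, log):
    log("Canceling request %d" % req.n)    # 1. log that the request is being cancelled
    req.completion.put("Cancelled")        # 2. place the completion code in the request's queue
    req.notifier.set()                     # 3. "kill" the notifier to signal the request is complete

def waiter(req, log):
    # Stand-in for the reentrant VI (shared clones) waiting on the notifier.
    req.notifier.wait()
    code = req.completion.get()            # look at the completion code
    log("Request %d complete (%s)" % (req.n, code.lower()))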

 

When the total number of requests cancelled is small, there is no problem.  When the number is large (> 1500 or so), the crash happens once every 3 or 4 tries.

 

Now, the VI that "logs a message" doesn't log it directly.  The message is formatted and placed in yet another queue, Q4, for processing.  This prevents the loop that generates the log message from having to wait for I/O and so forth.  Q4 is drained by an independent loop that handles log file details like changing files at midnight etc.
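
The logging arrangement itself is simple enough to sketch the same way (file rollover at midnight omitted; log_q, log, and log_writer are my names):

import queue, threading

log_q = queue.Queue()            # "Q4": producers only format and enqueue

def log(msg):
    log_q.put(msg)               # the calling loop never waits on disk I/O

def log_writer(path):
    # Independent loop that drains Q4 and handles the file details.
    with open(path, "a") as f:
        while True:
            msg = log_q.get()
            if msg is None:      # sentinel to stop this sketch cleanly
                return
            f.write(msg + "\n")
            f.flush()

threading.Thread(target=log_writer, args=("app.log",), daemon=True).start()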

 

What I found was that when the hObjList error occurred, the log file looked like this:

Canceling request 1
Canceling request 2
...
Canceling request N
Request 1 complete (cancelled)
<end of file>

 

Keep in mind these messages are being pulled out of a queue by a loop running independently from the loops (also independent) that placed them in the queue.  This same queue is being fed from many independent loops in the application.  Doing this does not normally result in a crash.  But in this particular case, I can get a crash fairly frequently, and the end of the log file always looks like the one above.

 

In working with this problem, I have proceeded on the assumption that our code is not doing anything wrong, apart from making LabVIEW work hard.  This happens in both source and a built EXE.  There is plenty of memory available (we're typically using 1.2 GB when this happens; the runtime is normally happy until we get close to 2 GB).  I therefore tried to make changes that would perturb the system to see if I could change the frequency of the problem.  What I finally hit on was to eliminate the "Canceling request N" message.  Doing this seems to make the problem happen less frequently.

 

I should mention that there are other ways we have seen this problem apart from pushing the Cancel button. Sometimes it happens apparently spontaneously, under varying load conditions; however, this is less frequent. Also, sometimes when the Cancel button is pressed, the application simply vanishes without a trace. No popup, no crash message on the next startup, no JIT debugger. Nothing. We have also seen this symptom happen spontaneously.

 

These problems seem to be more frequent in LabVIEW 8.6.1, and I don't recall seeing them at all in 8.5, or at least they weren't bad enough to delay releasing our application.

 

Thanks again for your help.

 

John Doyle
