LabVIEW Idea Exchange

Mads

Calendar control that does not block the GUI

Status: New

If you click on the calendar button of a date & time control, all updates of the user interface, and all functions that happen to run in the user interface thread, are halted/blocked.

This is, as far as I know, because the calendar is really an ActiveX control... We need a native control that does not show such behaviour (especially considering that things like the run method run in the user interface thread...).

 

It is quite ugly that such a fundamental control as the date & time control depends on code that will block the GUI. 

32 Comments
AristosQueue (NI)
NI Employee (retired)

> Unless it is a process that can wait indefinitely, or the software

> is run as a service with no GUI, it's not really possible in a reliable way.

 

Generally, correct. If your service is one that can be served by multiple copies of a single VI hierarchy, you can solve this with a pool of already-reserved VIs ready to run when needed, but if you're trying to load new VIs into memory, as for some sort of plug-in system, then, yes, you are correct. LabVIEW doesn't serve that use case well.
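Since LabVIEW diagrams cannot be shown here, a rough Python sketch of the pattern described above may help: a fixed pool of workers reserved up front, so that dispatching a request never loads anything new at request time (the step a busy root loop would block). The `WorkerPool` class and its names are purely illustrative, not any actual NI API.

```python
import queue
import threading

class WorkerPool:
    """A fixed pool of workers started up front, standing in for
    'already-reserved VIs': dispatching a job never creates or loads
    anything new, so it cannot be stalled by a busy loader."""

    def __init__(self, size, handler):
        self.tasks = queue.Queue()
        self.handler = handler
        for _ in range(size):
            t = threading.Thread(target=self._run, daemon=True)
            t.start()  # reserve the worker before any request arrives

    def _run(self):
        while True:
            task = self.tasks.get()
            try:
                self.handler(task)
            finally:
                self.tasks.task_done()

    def submit(self, task):
        # Enqueueing only: no thread creation happens on this path.
        self.tasks.put(task)

results = []
pool = WorkerPool(2, results.append)
pool.submit("job-1")
pool.submit("job-2")
pool.tasks.join()  # wait until both jobs have been handled
```

The key design point is that all the expensive setup happens in `__init__`; `submit` merely enqueues, which mirrors calling an already-reserved VI.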

 

If a program kicks off a new process in response to user actions, LabVIEW can handle it because the UI thread is already servicing that request. If it is something like an error handler, you can have the VIs already available to be called, even if the UI thread is busy with a context menu. But for a truly dynamic launcher, you'll need to separate your GUI from your service as two separate EXEs, and communicate between them using some interprocess communication. That's just the way LV is architected.

 

> What do you think we should name the "new" idea to really get people's attention?

 

You say that as if this behavior is new information that people need to be aware of. This was a known behavior of LabVIEW 10+ years ago when I first joined the team. It's documented in a few places. It doesn't appear to hamper development most of the time, and a whole lot of other features that users generally like about LabVIEW work because of this behavior.

JackDunaway
Trusted Enthusiast

>> and a whole lot of other features that users generally like about LabVIEW work because of this behavior.

 

Can you expand upon this? What popular benefits would be sacrificed? (Just a brief enumeration is enough)

Mads
Active Participant

Oh it's definitely new information that people need to be aware of.

 

Sure, a lot of users will never run into this as a problem - heck, the bulk of users may never use dynamic calls at all, I guess - but that does not always work as an excuse. There are lots of things that do not hamper day-to-day development, but which may kill a piece of software on the market once it is out of the box.

 

I've developed in LabVIEW myself continuously since 1997, and have not noticed it in all this time, but then again most of our programs that do dynamic instantiation of background processes and have a GUI are more often than not used via remote clients (served by that same dynamic instantiation...(!)) instead, so the odds of a freeze are so low that it can take a long time before the first occurrence. It is an intolerable risk though, and we will have to migrate away from it somehow.

 

You say that a whole lot of features about LabVIEW work because of this behaviour - as if there is absolutely no way you could support the possibility of creating new asynchronous processes on demand without losing all the other features. That's not true now, is it? It's a question of prioritization, not possibility.

 

The thing is - when people start using LabVIEW they kind of need to trust that NI does not just think about programming up to a certain level. You have to feel somewhat secure that even if you need some less-used feature, or have a higher-than-average need for reliability, NI will still have thought of it for you - that you are not abandoned. So yes, most users might never get into trouble the way this works now, but the risk itself is a bigger problem.

 

(It's like democracy - people typically say it's rule by majority, but forget the very important detail that a proper democracy has rules that protect the interest of the minority as well. It's in the majority's interest too because nobody knows when they will be in minority themselves...)

AristosQueue (NI)
NI Employee (retired)

> as if there is absolutely no way you could support the possibility of creating

> new asynchronous processes on demand without losing all the other features.

 

I had about 4 paragraphs written, then I deleted them. Why? Because a lot of it is just idle speculation. There's no way to know just how deep a change like that runs, and there are several features that I can point to that depend upon this to work, and I'm not going to sit here this evening and speculate about whether or not they could be architected completely differently. Maybe it's possible. Maybe it's not. To answer your question directly...

 

> That's not true now, is it? It's a question of prioritization, not possibility.

 

I think it is not hyperbole to say that some features might be impossible without the root loop concept, at least not at the same level of usability that LV currently has.

 

> but the risk itself is a bigger problem

...

> because nobody knows when they will be in minority themselves...)

 

Maybe. I do not criticize a Phillips-head screwdriver for being poorly designed the first time I meet a flathead screw. It's the wrong tool for that job. I would criticize a screwdriver with interchangeable bits if all the bits were Phillips and none were flat. That's poor design. Where on that spectrum does LabVIEW, as a programming tool, lie? It's a balance. I can't say where this feature in particular stands on that balance.

Mads
Active Participant

I see Jim Kring once proposed this idea, and a big point in that idea is that the call should not be blocking (thanks to Marc Blumentritt for that link - and for his own suggestion). I see Jack Dunaway agreeing this is a big issue. Both are among the top 5 kudoed authors here; I'm on that list somewhere as well. Other users, like MIG, have run into it. I just do not think it's correct to respond to the suggestion that this is not worthy of an idea (or rather - a solution!) by saying that this is well known, and a sacrifice we have to live with to get other features. It is such a real and fundamental limitation that it should be worked on. Sure, we can all use other development tools... that's kind of what you are saying, but is that the right attitude for NI to have? Is this kind of feature outside NI's ambitions for LabVIEW?

JackDunaway
Trusted Enthusiast

@Mads wrote:

I see Jack Dunaway agreeing this is a big issue



@JackDunaway wrote:

 

>> and a whole lot of other features that users generally like about LabVIEW work because of this behavior.
 
Can you expand upon this? What popular benefits would be sacrificed? (Just a brief enumeration is enough)

 


Yes, it's a big issue, but if the sacrifices are too significant I'm going to back off a bit. Perhaps we could branch off into a "best practices" discussion on how to avoid the following issue:

 

Complete Hacker's Guide to DoS Attack on VI Server:

  1. Approach server terminal
  2. Right click, anywhere.

 

 

Mads
Active Participant

Good one, Jack. (I'm sorry if you feel I dragged you into something. It's always a bit risky to refer to someone the way I did, but I think it's still clear that I'm not speaking for you.)

 

It could be that some fundamental philosophy has made this difficult to combine with existing functionality - although Aristos has not yet described exactly what those things are - but sometimes a major rewrite is inevitable anyway (and once it is done I do not think you'll feel a need to look back).

 

We would all slow down to a halt if we never touched the fundamentals.

SteveChandler
Trusted Enthusiast

 

Mads wrote:

I just do not think it's correct to respond to the suggestion that this is not worthy of an idea (or rather - a solution!) by saying that this is well known, and a sacrifice we have to live with to get other features.


I re-read the whole thread. Nowhere do I see Aristos saying it is not worthy of an idea. I only see him explaining why things are the way they are. It is just how LabVIEW is architected. I think it is unlikely that NI will completely rearchitect the whole thing just for this. Only a small percentage of their target market will ever see it, and those are pretty advanced users.

 

I also see Aristos explaining how to completely avoid this issue. Those advanced users should have no trouble following his advice.

 

Maybe the solutions are a bit cumbersome. In that case maybe it is possible that a development tool could be written that would help in creating these plug-in architectures.

 

Hey OpenG guys?

=====================
LabVIEW 2012


AristosQueue (NI)
NI Employee (retired)

I'm not saying LV wouldn't be better if this changed. I'm saying changing it is a big deal. None of my comments above should be taken as "we can never do this" or "we would never do this". My goal is to try to give you guys insight into How LabVIEW Works, so that you are more aware of what you're asking for. That helps you guesstimate timelines in the event that LV does pick up an idea, and it may help you propose other solutions that would help you just as much but have a shorter implementation time.

 

I think a good analogy to this root loop issue is the recursion issue. For its first two decades, LabVIEW didn't have real recursion (there was a workaround involving VI Server). Massive parts of LabVIEW were written on the assumption that "the VI hierarchy can be traversed as a tree." When we started refactoring to allow for cycles, that tree became a graph. As a result of that change, I spent 18 months working on File >> Close, trying to get an orderly unload of VIs in memory. Other developers were allocated to Save, Reserve For Run, Compile, Download to Target, etc. A whole lot of resources went into that small "tree becomes a graph" change. The result was a refactored version of LabVIEW -- that still didn't actually have recursion. That took a couple versions more. In the end, a minority uses recursion, and it crossed off one item on the list of Reasons Why LV Isn't A *Real* Programming Language. Was it worth it? I think the same question applies equally well on this idea.

 

Mads
Active Participant

Steve - the "not worthy of an idea" comment referred to this section in an earlier message:

 

"

> What do you think we should name the "new" idea to really get people's attention?

 

You say that as if this behavior is new information that people need to be aware of. This was a known behavior of LabVIEW 10+ years ago when I first joined the team. It's documented in a few places. It doesn't appear to hamper development most of the time, and a whole lot of other features that users generally like about LabVIEW work because of this behavior.

 

"

 

That's pretty dismissive.

 

A. It is new information to a lot of people (and not just the novices - quite the contrary). B. It's not exactly well documented. C. As Aristos himself is pretty open about, it is a clear limitation. And finally D. The last part is not necessarily a valid argument (as I've elaborated on)...

 

As for the workarounds, those were suggested by yours truly very early on, and no, they are not that hard to do - but they are not fully satisfactory either... and from where I stand, this does not look like something that should have been necessary in the first place.

 

I'm not here to talk LabVIEW down - I'm here because I've made a living on it and I want it to be all that it can be(...!). When I run into an issue like this I hate it; that's why I keep talking about it, even though it has already been stated that it is difficult to fix and will probably not get fixed in the near future.