LabVIEW


Programmatically creating clusters or type defs

OK, I am working on a project where we are using a SQL-based database. This database has many tables that are related to each other in various ways. We are using clusters to handle the records of these tables, because the values in any given record may be of different types. We have a variety of VIs to read, update, insert, and delete records. In fact, because we use clusters as inputs, we have one VI per operation for every table. (Each table has a matching cluster type def.)

Since we are still early in development, we have frequent changes. So when we add a new feature to the Insert VI, for example, we have to touch 20+ VIs to make sure the feature is everywhere. There is also the risk that in touching all those VIs, bugs will be introduced, further slowing development.

The only solution I have been able to think of so far is to write one VI for each of the four functions (read, insert, update, and delete) that uses a reference to the cluster in question and programmatically gets the data out and puts it in a form acceptable to the LabVIEW DB Toolkit VIs. The problem with this is that there is no one-to-one mapping between the table column names and the control names defined in the cluster type defs. (They have been typed in by hand, so some variations have crept in.) And of course I do not want to trust the index order of the cluster controls or the DB columns.

So I am wondering: is there a way to programmatically make type defs? Perhaps it is possible to write a LabVIEW program to pull all the column names and data types out of the DB and make matching cluster type defs? If a table ever changed, I could re-run the VI and reload LabVIEW. I could then write my four functions to match the column names they get as inputs to the labels of the controls in the cluster.
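In text form, the name-matching half of that idea might look like the sketch below: plain Python with sqlite3 standing in for the DB connection, pulling live column names and types from the schema and matching them to control labels despite hand-typed drift. The table and labels are invented for illustration; this is not LabVIEW code, just the logic.

```python
# Hypothetical sketch: read column names/types from the live schema and
# match them to cluster control labels by normalizing both sides.
import re
import sqlite3

def normalize(name):
    """Lower-case and strip non-alphanumerics so 'Serial No.' matches 'SerialNo'."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

def column_info(conn, table):
    """Return [(column_name, declared_type), ...] from the database schema."""
    return [(row[1], row[2]) for row in conn.execute(f"PRAGMA table_info({table})")]

def match_labels(columns, labels):
    """Map each DB column to the control label that normalizes to the same key."""
    by_key = {normalize(lbl): lbl for lbl in labels}
    return {col: by_key.get(normalize(col)) for col, _ in columns}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (part_id INTEGER, SerialNo TEXT, unit_cost REAL)")
mapping = match_labels(column_info(conn, "parts"),
                       ["Part ID", "Serial No.", "Unit Cost"])
```

A normalization pass like this catches simple drift ("Serial No." vs. "SerialNo"), though names that have diverged further would still need a hand-maintained table.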

Can this be done?

Is there an alternative brilliant solution I have overlooked?
Message 1 of 6
It's late here and your question is long, so I hope I haven't misunderstood you.
The answer to your question (at least my answer) is fourfold.
First, you can work with the variant datatype, which will allow you to have an array holding different types of data.
Second, since you mentioned the LV DB Toolkit, why not use that? It can connect to any ODBC-compliant DB (and other standards, I believe) directly and pull the data out of the DB. Why write your own VIs to do this?
Third, if you don't want to pay for the toolkit, you can find a program called LabSQL, which should help you, but I'm not sure you can use it commercially or how it works.
Fourth, if you still want to do this on your own, go to OpenG. They have great VIs for handling variants, and it's a good idea in general to have their stuff.

Message Edited by tst on 03-29-2005 01:03 AM


___________________
Try to take over the world!
Message 2 of 6
Thanks tst for your points.

I was not entirely clear, so I will add some detail.

Firstly, we are using the NI DB Toolkit. The problem is that the canned Insert and Update VIs like to take clusters. Now, when you write a whole record, how can you guarantee that the fields of the cluster are in the same order as the columns of the table you are writing to?

So what we are doing now, to make sure the right data always goes into the right pigeonhole, is to unbundle the input cluster and repackage it. In the case of insert and update, we make it an array of a special type of cluster that one of the lower-level DB Toolkit VIs likes.

This is further complicated by the possibility of not inserting or updating the entire record. So only columns 2, 5, and 7 get updated, for example. So you only pull some of the data out of the cluster to put into the array of clusters that the DB VIs like.

The kicker is that since every table has a different cluster, every insert and update needs a custom VI that takes the cluster and builds this nice array of clusters.
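The partial-update case above can be sketched generically: given only the subset of fields being changed, build one parameterized UPDATE statement. This is plain Python with sqlite3 rather than the DB Toolkit, and the table and column names are made up for illustration.

```python
# Hedged sketch: build a parameterized UPDATE that touches only the given
# subset of columns (e.g. columns 2, 5, and 7), leaving the rest untouched.
import sqlite3

def build_update(table, fields, key_column, key_value):
    """fields: {column: new_value}. Returns (sql, params) with ? placeholders."""
    cols = list(fields)
    sql = ("UPDATE " + table + " SET "
           + ", ".join(f"{c} = ?" for c in cols)
           + f" WHERE {key_column} = ?")
    return sql, [fields[c] for c in cols] + [key_value]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (part_id INTEGER, status TEXT, unit_cost REAL)")
conn.execute("INSERT INTO parts VALUES (1, 'new', 0.0)")
sql, params = build_update("parts", {"status": "shipped", "unit_cost": 9.5},
                           "part_id", 1)
conn.execute(sql, params)
```

Because the statement is generated from whatever field subset is passed in, one generic routine replaces the per-table custom VIs, which is the direction the thread is circling toward.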

Now your idea of using arrays of variant sounds interesting. I will have to take this up with the software team.

Thank you for taking the time to respond.
Message 3 of 6
Actually, the DB VIs don't "like clusters"; they just don't care. If you open them (by double-clicking) you will see that they use variants internally, so it doesn't matter what you wire into them: it's automatically converted to a variant and then parsed to have its data type determined. So you can convert the clusters to variants yourself in a case structure and wire the variant into the DB VI.

___________________
Try to take over the world!
Message 4 of 6
What I mean by liking clusters (and again, I apologize for not being clear) is that clusters are the only type that allows you to insert several columns at once. The Insert VI and its subordinate VIs convert the cluster control values to an array of clusters, which other sub-VIs use to write to the columns specified in the 'columns' array input to the Insert VI.

If you pass in an array, the Insert function tries to write the entire array to the first column given in the 'columns' array-of-strings input.

So you can either insert clusters, or insert the constituent values one at a time. We are under the impression that SQL queries are one of the slower things we will be doing in this project, so making multiple queries to enter all the columns of a record is going to cost us in terms of performance.

Alternatively, you can bypass the top-level Insert VI, convert whatever your data type is to the array of clusters used by 'DB Tools Create Parameterized Query.vi', and write the whole record at once. Or part of the record at once. Or whatever.
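The "whole record in one statement" point can be shown generically: build a single parameterized INSERT from name/value pairs so every column is written in one round trip, rather than one query per column. Again this is plain Python with sqlite3 standing in for the parameterized-query layer, with invented names.

```python
# Hedged sketch: one parameterized INSERT writes all columns of a record at
# once, avoiding the per-column query cost discussed above.
import sqlite3

def build_insert(table, record):
    """record: {column: value}. Returns (sql, params) for a single-statement insert."""
    cols = list(record)
    placeholders = ", ".join("?" for _ in cols)
    sql = f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"
    return sql, [record[c] for c in cols]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (part_id INTEGER, status TEXT, unit_cost REAL)")
sql, params = build_insert("parts", {"part_id": 2, "status": "new", "unit_cost": 4.25})
conn.execute(sql, params)
```

Since the column list is generated from the record itself, the same routine also handles inserting only part of a record, mirroring the "or part of the record at once" point.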

Now, what we really ought to be doing is using this array of clusters that the Parameterized Query VI uses everywhere we need DB values.
Message 5 of 6
The fact that the cluster names and the DB field names do not match will complicate life.

To get around this you will probably have to use a translation table.

What I have done in the past was to go through the cluster element by element and get each name, value, and data type.

A traditional SQL command was then built in which I explicitly specified each field and its new value.

This way as the cluster typedef was updated, the resulting SQL command would adapt.

By cloning this VI and replacing the type def in each clone with the appropriate type def, the only time I had problems was when the fields in the tables changed. In that case the type def was edited to match the new table definition.
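The translation-table idea above can be sketched as follows: an explicit map from cluster control labels to DB column names absorbs the hand-typed drift, and walking the cluster (represented here as a plain dict of label-to-value) yields the field/value pairs for the generated SQL command. All names are hypothetical, and this is Python rather than LabVIEW.

```python
# Sketch of the translation-table approach: explicit label -> column map,
# then build the "SET field = ?" list from whatever fields are present.
TRANSLATION = {              # cluster control label -> DB column name
    "Serial Num": "SerialNo",
    "Cost ($)":   "unit_cost",
}

def record_from_cluster(cluster_values):
    """cluster_values: {label: value}. Returns {db_column: value} via the map."""
    return {TRANSLATION[label]: value
            for label, value in cluster_values.items()
            if label in TRANSLATION}

def update_sql(table, record, key_column):
    """Build an UPDATE that explicitly names each field, as described above."""
    sets = ", ".join(f"{col} = ?" for col in record)
    return f"UPDATE {table} SET {sets} WHERE {key_column} = ?"
```

Because the SQL is regenerated from whatever fields the cluster currently holds, editing the type def (plus one line in the translation table) is the only maintenance needed when a table changes, which is the adaptability Ben describes.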

Ben
Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 6 of 6