12-08-2023 08:45 PM - edited 12-08-2023 08:47 PM
I would like to run the UMAP dimension reduction algorithm (https://umap-learn.readthedocs.io/en/latest/) in LabVIEW using the Python Node. I succeeded in running the UMAP code, but when I input a large data set, it fails with error code 1672 (see attached). On my computer there is no error with 100000 x 100 float data, but the error appears with data larger than about 200000 x 1000 floats. I would appreciate any advice. Attached is the LabVIEW program, and the following is the Python code (named functions3.py) that communicates with the UMAP code.
There is no error when running the UMAP code directly in Python, so I think the problem is in the LabVIEW code, and I suspect there may be a size limit in the Python Node. Also, line 4, "from sklearn.datasets import load_digits, load_sample_image, load_sample_images", seems unnecessary, but deleting it produces the same error (1672) with the same message, even with 100000 x 100 float data.
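For reference, a minimal wrapper for calling UMAP from the Python Node might look like the sketch below. The function and parameter names here are only illustrative, not the contents of the attached functions3.py:

```python
# Hypothetical sketch of a UMAP wrapper callable from a LabVIEW Python Node.
# Names and defaults are assumptions, not the poster's actual functions3.py.
import numpy as np
import umap


def run_umap(data, n_components=2, n_neighbors=15):
    """Reduce a 2D array (rows = samples, columns = features) with UMAP."""
    x = np.asarray(data, dtype=np.float32)        # LabVIEW passes a nested array/list
    reducer = umap.UMAP(n_components=n_components,
                        n_neighbors=n_neighbors)
    embedding = reducer.fit_transform(x)          # shape: (n_samples, n_components)
    return embedding.tolist()                     # return a plain list back to LabVIEW
```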
12-08-2023 08:58 PM
The data transfer between LabVIEW and the Python Node is not meant to handle massive arrays. The best way to handle this is to write the data to a file in LabVIEW, pass the file path to Python, read the file in Python, store the processed data in another file, and return that path to LabVIEW.
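In case it helps, the Python side of that file-based handshake could look roughly like this. It is only a sketch under the assumption that LabVIEW writes the array as flat binary doubles; the file format, function name, and parameters are not from the original attachment:

```python
# Hypothetical sketch of the file-based approach (file format and names are assumptions).
import numpy as np
import umap


def reduce_from_file(in_path, out_path, n_cols, n_components=2):
    """Read a flat binary array from in_path, run UMAP, write the embedding to out_path."""
    data = np.fromfile(in_path, dtype=np.float64)   # raw doubles written by LabVIEW
    data = data.reshape(-1, n_cols)                 # n_cols must match the LabVIEW array width
    embedding = umap.UMAP(n_components=n_components).fit_transform(data)
    embedding.astype(np.float64).tofile(out_path)   # LabVIEW reads this back as binary doubles
    return out_path                                 # return only the path, not the big array
```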
12-08-2023 11:31 PM
Thanks so much for the answer! I solved the issue by writing/reading the files as you suggested.