I am facing problems with voting... whenever I click on Vote Now, I get the same screen again... Can you please explain how a user should vote?
Also, I see Ravens Fan competing against my name, but in the entire list of submissions I couldn't find an entry from Ravens Fan.
I think this was Ravens Fan's entry:
http://decibel.ni.com/content/docs/DOC-15360
I am not able to see any voting option either; I get redirected to the same page.
As the guidelines and regulations document says:
"First Round Top16 Challenge Dev Time: March 8 - 11:59 pm March 13
First Round Top 16 Voting: March 14 from 9am - March 15 9am CST (GMT -6:00)
Quarterfinals Top 8 Challenge Dev Time: March 15 - 11:59pm March 20
Quartefinals Top 8 Voting: March 21 from 9am - March 22 9am CST (GMT -6:00)
Semifinals Top 4 Challenge Dev Time: March 22 - 11:59pm March 27
Semifinals Top 4 Voting: March 28 from 9am - March 29 9am CST (GMT -6:00)
Coding Championship Challenge Dev Time: March 29 - 11:59pm April 3
Coding Championship Voting: April 4 from 9am - April 5am CST (GMT -6:00)"
Voting for the first round starts on March 14 at 9am. So I think voting is not working because it is not supposed to work yet.
It would be quite interesting to have to vote on someone's document before you could see his/her new entry.
Peter
Thanks for chiming in, Peter. As he said, the polls will go live Monday morning at 9am (US Central Standard Time), and all users will have 24 hours to cast their vote in each head-to-head match. Tournament players, please remember to use the Tournament Template for your submission. We will be building upon it in every subsequent round.
Happy coding
Vu
I am seeing the author as Bill Meier for this post, and I didn't know that his nickname is Ravens Fan.
@peter_smith: I am seeing some numbers in front of the entries, and I don't know what they are (if they aren't votes). Are they the votes received for qualifying for the Sweet 16?
Hi Tushar!
You did not see the name because you were logged in to the community. Ravens Fan is Bill's nickname, and he probably set his profile to display his real name to community members. If you log out from this site and look at the documents, you will see his nickname.
I can't say anything for sure about the numbers, as I'm not an NI employee, but I think they indicate the order in which we entered the challenge.
@NI: By the way, is "Bye" a participant? There is no number after the name, and there were 16 documents submitted, but from only 15 people. Or does "Bye" mean there is no one in that slot?
Peter
Hi Tushar and Peter,
Yes. My discussion forum nickname is Ravens Fan, but my real name is Bill Meier. Apparently there must be some setting that displays them differently between the two boards. It seems like we are matched up, but I just saw some recent discussion saying that the brackets are just for March Madness basketball tournament flair and aren't actual head-to-head matchups in this competition; the best 8 submissions from this round will move on.
That seems fairer than head-to-head matchups. You would hate to get bumped out with what might have been the second-best submission just because you were matched up against the person with the best submission, and then see someone else with a poor submission move on because they were matched up against someone who wasn't able to put up an entry that week.
@Tushar
The numbers shown in front of the entries are just rankings, from 1 to 15, based on the number of likes, votes, downloads, and comments.
Hmmm, I grabbed the tournament template under the Documents tab, but I think I should have created a new one instead of editing it. I apologize for any inconvenience. I've placed it in my documents; should I submit it somewhere else?
Henry
Peter, FraggerFox's bye is because even though we had 16 submissions, we have only 15 unique users. So we gave the first-seeded spot a bye for the first round.
In regards to the polling process, as stated in the Guidelines and Regulations:
The user with the most user votes in each face-off will move to the next round which starts every Tuesday.
We will look at other formats for voting/collecting points in the tournament in future renditions of this type of challenge, but for this year we went with the out-of-the-box polling functionality (every user gets only one vote per poll). Creating 8 different polls allows our users to vote for more than just one submission per round. There may be two really great code submissions that go head to head in an early round, which is why getting quality code in for the qualifier round and submitting early was important for getting a good seed.
Hi Henry,
I would suggest creating a new submission with the template I sent to you in a private message. Here is the link again for your convenience. Since the document you edited has the unique doc ID of the original template, some users may get confused if they open the link from the first email I sent out.
Vu
Hi Vu!
Now that the submission window for the 1st round has ended, it seems that some of the participants used only simple web service APIs for data collection.
I think it would have been better to describe the challenge details for this round more precisely.
It currently states "Programmatically harvest data from a public website using LabVIEW...".
APIs are not public websites; as the name says, they are simply interfaces to web services.
The two things are hardly comparable: APIs are strictly structured, well defined, and usually come with extensive documentation, while public websites nowadays are almost always dynamic, have no strict structure, and contain no documentation at all.
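Just to illustrate the difference in a text language, here is a minimal Python sketch (the URLs and the markup pattern are made up; our actual entries are of course LabVIEW):

```python
import json
import re
import urllib.request

# Against an API: one request returns a documented, strictly structured reply.
# (example.com/api/users is an invented endpoint, purely for illustration.)
reply = urllib.request.urlopen("http://example.com/api/users").read()
users = json.loads(reply)        # the structure is guaranteed by the API docs
print(users[0]["name"])

# Against a public website: the same data is buried in undocumented markup.
html = urllib.request.urlopen("http://example.com/users").read().decode()
# We have to guess a pattern, and it breaks whenever the page layout changes.
names = re.findall(r'<span class="user-name">(.*?)</span>', html)
print(names[0] if names else "pattern not found")
```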
Peter
Hi Peter!
I have to admit that the use of APIs is pretty straightforward compared to real data mining from dynamic sites.
Judit
Peter,
I understand your frustration, and I take the blame for that one; I didn't word it properly. I actually intended for APIs to be allowed if the user wanted to use them, but I see how the wording could point to just a website. I will try to word future tasks better, but please post any questions you have about the challenge while it is ongoing so I can address them for the whole group. Thanks for the feedback!
Grant
I don't understand round 2 at all. Are we supposed to split our program into a server and client sort of arrangement? Some sort of "remote" viewing VI? Talk about making a VI way more complicated than it should be...
Bruce
Bruce,
Task 2 is all about data communication. LabVIEW has a couple of different methods to pass information between VIs, and people struggle to know when to use one method over another. I'm hoping the documentation you add about why and how you implemented Task 2 can help people understand some of the options for data communication. Task 2 can be seen as a server/client architecture if you wish. This VI will continue to grow over the next two rounds, so it is definitely more complicated than it needs to be, but if you knew all the tasks, it would be too easy!
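If it helps to see the shape of the idea outside LabVIEW, here is a minimal sender/receiver sketch in Python. TCP is only one of the options (LabVIEW also offers queues, notifiers, shared variables, etc.), and the port number is arbitrary:

```python
import socket

PORT = 50007  # arbitrary port chosen for this sketch

def sender(data: str) -> None:
    """Plays the role of the Task 1 side: pushes the harvested data out."""
    with socket.create_connection(("localhost", PORT)) as conn:
        conn.sendall(data.encode())

def receiver() -> str:
    """Plays the role of the second VI: accepts whatever the sender pushes."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("localhost", PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            return conn.recv(4096).decode()
```

The receiver would normally be started first, in its own process, mirroring how the two VIs run independently.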
Grant
Hi Grant!
I also have some questions about our 2nd task:
Thank you in advance!
Peter
1. No requirement on the data itself. You can choose whichever method you want to use.
2. It doesn't have to be network capable but it does have to pass from one VI to another.
3. This point just meant that there should be two attachments to your document: one that I can open and run and see that it satisfies Task 1, and another that can be opened and run by itself as Task 2. Does that make sense?
4. No requirement for doing anything with the data once it's transferred.
5. This is up to you. You can code it to programmatically start both VIs but you don't have to. Just have instructions on how to run your VI.
6. Zipped folders are great. This is what I had in mind in my answer 3: one attachment that is Task 1 in its entirety, which I can open and which will have only the functionality from Task 1. There should be another attachment that is Task 2 and has everything needed to run and show that functionality. This will continue for Tasks 3 and 4 as well. You can name them whatever you want; I'm sure you will reference those names in the Task 1 and Task 2 sections of the document, respectively, so people will be able to determine which one is which.
Grant
Thanks for the info, Grant!
If I understand correctly, Task 1 is the application we already submitted to the contest, but it has to be extended with the functionality of transferring data to another VI, which is not part of this application.
Task 2 should be an entirely new application. It must not be part of Task 1's project - if we compiled Task 1 and 2 to .exes, we would end up with two executables.
Task 1 may be run alone, and it will do the same as in round 1.
Task 2 may be run alone, but in fact it will be useless, as it will not get any data until Task 1 is running.
Task 1 and 2 may be run together (manually, or e.g. Task 1 may start Task 2), and then some new functionality appears that was not seen before (how the functionality is presented probably doesn't matter - I mean it may be a new display area in Task 1).
Peter
When you are done, you should have two separate LabVIEW projects (for instance) that can be run completely separately. The first LabVIEW project (you don't have to use an actual project, but I am using this to help make the point) should be able to open the main VI and run Task 1. The second LabVIEW project should be able to be opened, with its main VI running Task 2. The Task 2 project should have VIs in it that can get the data from a website and transfer it to another VI. The application is building: Task 1 had us just getting data. Task 2 is getting that data and transferring it to another VI. Task 3 is getting the data, transferring it to another VI, and... you'll have to get to the next round to know that.
Grant
Thanks Grant!
Now I'm either totally confused or have finally understood everything!
In this round we only have to do one thing: extend/modify our previous round's VIs/project - which is already attached to our documents - to be able to send data to a separate VI.
Task 1 is in fact already done. It is attached to the document we submitted.
Currently we only have to code the sender, not the receiver of the data, as that will be part of the next round (Task 3).
Attachments Task 1 and Task 2 do not have to run together at any time, as Task 2 contains Task 1 and just extends its functionality.
In fact, the attachments (Task 1, 2, 3, and 4) will be different versions of the same project, showing how our app progresses/builds up during each round. It's as if we were using SVN and submitting a different revision of the same project for each task.
Peter
Yes to everything but one little point: you should have the receiver as part of Task 2. You should have at least two VIs for Task 2, one that gets the data from a website and sends it, and another VI that receives it. The receiver doesn't have to do anything with that data (yet) once it gets there. You are correct about everything else, though; someone should be able to open each zipped folder and see the progression as we keep adding more functionality.
Thanks again!
After sending my previous post, it was already clear to me that there would be no sense in writing only the sender...
One question, though: in the next round, will we be allowed to change the communication method we introduce now? Or, to be sure we can accomplish our next task, should we choose a very general-purpose method?
Peter
You will be able to change it next round. It would be very mean to make you stick with architecture choices that were made without knowing the entire scope of the project. The same goes for Task 1 functionality: if someone made their Task 1 VI without any thought about transferring that information, they are allowed to change it for Task 2 so that it makes more sense and is easier.
OK, everything is clear now.
Thanks for being so patient.
I think "public website" refers to both APIs and web pages, if we are not using browsers.
Also, just for information:
If you are using data from any website for parsing, it is best to use an API. Scraping data from HTML can be illegal (http://en.wikipedia.org/wiki/Web_scraping#Legal_issues). If you are scraping data from a website, you should ensure that you are in line with the website's "Terms of Service".
E.g., NI's terms of use say (http://www.ni.com/legal/termsofuse/unitedstates/us/):
...(b) you may not modify the materials at this Site in any way or reproduce or publicly display, perform, or distribute or otherwise use them for any public or commercial purpose; and (c) you may not use any of these materials on any other Web site or networked computer environment for any purpose...
XML APIs are designed for non-browser use from the start; hence they generally don't raise legal issues in non-standard use.
Hi Tushar!
The only reason I mentioned the use of APIs was that they are a lot easier to use than harvesting data from real sites, which is what some of us did. I already explained why that task is much harder.
I don't mind that other participants used APIs; I just tried to say that the task description did not say anything about APIs.
"If you are using data from any website for parsing it is best to use API." That's correct, supposed there is an API available.
I see you like wikipedia, so: "A website (also written Web site[1] or simply site[2]) is a collection of related web pages containing images, videos or other digital assets. A website is hosted on at least one web server, accessible via a network such as the Internet or a private local area network through an Internet address also called URL."
APIs are absolutely different things from websites. They are only interfaces, whereas websites are in fact data. In our context, APIs usually need a web service behind them; unlike a website, they don't even need a web server.
If all kinds of scraping were illegal, I'm sure Google and Bing, along with many others, would be in really big trouble, as indexing is also related to scraping. Of course, I'm not a lawyer and I'm Hungarian, so I don't know anything about US laws.
One of the reasons I chose the NI website is that they are hosting this competition. It would be very mean not to allow the participants to use their own website for "scraping" when the challenge's task was to harvest data from a public site. Even so, I stated in my submission's document that it uses some parts of the NI community site, which I hope does not break the law or violate copyright rules. If it does, of course I will remove my document or change it as needed.
Either way, I think this topic has already been discussed; Grant answered it and admitted that the task was not worded correctly.
Just one more thing about "Terms of use":
They are "common habits" to place them on sites. We also have copyright info and terms of use on ours.
Their primary purpose is not to allow the owner of the site to find everybody "breaking" it and hunt them down. They are there to allow the owner to protect himself from people causing damage to him.
Scrapers may be a great way for any website to advertise the site, spread available information. This way the owner of the site can have some revenue from scrapers.
For example, look at my submission, which tries to make this community site more popular by providing an easier way to use it; it also allows you to navigate to the site from the application.
Peter
Peter, first of all, sorry if I have hurt you. Sincerely, the reason was just to give you information (possible problems with HTML) and the reason why I selected a web service instead of HTML pages. That's why I didn't reply immediately on 13th March (it might have reflected negatively on your voting).
Also, I don't want you to remove or change your document.
peter_smith wrote:
If all kinds of scraping were illegal...
I never said that. I said it "...can be illegal".
"A website (also written Web site[1] or simply site[2]) is a collection of related web pages containing images, videos or other digital assets. A website is hosted on at least one web server, accessible via a network such as the Internet or a private local area network through an Internet address also called URL."
And from the "web pages" link in the above statement:
"A web page or webpage is a document or information resource that is suitable for the World Wide Web and can be accessed through a web browser and displayed on a monitor or mobile device."
Web services generally return an XML document when you ask for a particular URL (over the HTTP protocol, and often the URL even has www. in it). So I feel a web service should be treated as a website too.
Also, the XML and HTML file formats are very similar, and the parsing techniques are very similar as well. If you build up a DOM parser and then filter the elements, it will be a breeze (but writing a DOM parser is a pain in the neck).
Now, I agree that I (and some others) are using ready-made parsers, and somebody else has taken the "pain in the neck" for us, but that is completely acceptable for this competition (we can use open-source code). You could likewise have utilized open-source DOM parsers (plenty of them are available). Hence I don't agree that "they [APIs] are a lot easier to use than harvesting data from real sites, which is what some of us did". You have chosen the hard path, but that doesn't mean it is the only path available. The DOM path would have been easier, and it would have resulted in a much more robust application.
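For example, with a ready-made DOM parser, the filtering step really is short. A minimal Python sketch (the document snippet is invented, and real HTML would usually need an HTML-aware parser rather than strict XML; our actual entries are LabVIEW):

```python
from xml.etree import ElementTree

# An invented, well-formed snippet standing in for an API reply or a page.
doc = ElementTree.fromstring(
    "<users><user><name>Peter</name></user>"
    "<user><name>Tushar</name></user></users>"
)

# With a DOM in hand, filtering elements is a one-liner.
names = [user.findtext("name") for user in doc.iter("user")]
print(names)  # ['Peter', 'Tushar']
```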
Hi!
You did not hurt me, and the information on the copyright issues was really useful, as I did not know about them in such detail, even though I already mentioned the possible issues in my document when submitting it. However, I don't think this discussion has had any negative impact on the voting. The goal of these discussions is exactly what we are doing: discussing some things, arguing about others, trying to convince somebody, and providing useful information for others.
You mentioned DOM parsers. Yes, they are really easy to use, and I would have been able to use them too (I have used them many times before), but in this case I would not have gained any advantage from them.
For me, the hardest part was identifying where the data comes from, as much of it is put into the HTML page on the client side via JavaScript, plus which cookies you have to obtain and which headers you need. So sometimes the really hard part is not parsing the page but inspecting it, trying to discover how the developer was thinking. It's a kind of reverse engineering. I spent most of the development time using Wireshark and FireBug. That's why I said that web services are much easier to use, as they have documentation, a strict structure...
DOM parsers are most useful when HTML elements have IDs or fixed positions. That's not the situation in my case: the elements have no IDs, and even the number of elements on the page changes. I used regexes (and even if I had used a parser, I might have needed them anyway), which are really easy to write once you have identified the proper parts of the HTML and found patterns before and/or after the relevant part.
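Roughly what I mean, sketched in Python (the markup and the pattern are invented; my real code is of course LabVIEW):

```python
import re

# Invented markup: no IDs, and the number of rows varies per request.
html = """
<div class="row"><b>Downloads:</b> 42</div>
<div class="row"><b>Downloads:</b> 7</div>
"""

# Anchor on the stable text before and after the value we want.
downloads = [int(n) for n in re.findall(r"<b>Downloads:</b>\s*(\d+)", html)]
print(downloads)  # [42, 7]
```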
TusharJambhekar wrote:
Hence I don't agree that "they [APIs] are a lot easier to use than harvesting data from real sites, which is what some of us did". You have chosen the hard path, but that doesn't mean it is the only path available. The DOM path would have been easier, and it would have resulted in a much more robust application.
I did not try to say it is a lot easier to parse data from XML than from HTML (although considering only the built-in functions of LabVIEW, it is - but since we are using third-party "extensions", the two tasks may be similarly easy). I said it's easier because with HTML you don't have a strict structure; the data may change (element positions, number of elements) from request to request. APIs rarely change, as it would be a pain for every developer to change their program every time. With web services you don't need any kind of "reversing".
And most importantly, the easy-versus-hard question was not the reason I wrote all of this. As I already said in my previous posts, there was no word about the possible use of web services or APIs in the task description. In my opinion, APIs, web services, and websites are totally different things.
And it seems to me that this is something we will never be able to convince each other about.
Peter
Ohhhh...
Somehow the JavaScript stuff never came to my mind. I admit JavaScript and other things can make this much harder.
I have worked on web technologies and LabVIEW, but I have never worked on serious HTML parsing before... and be sure that if I get stuck in similar work, I will contact you for help. I hope you will help me!
To be honest, when I started this project I did not think about JS either. But as I started it a bit late, and some parts were already done, I had to deal with it when it came to light.
JS likes to communicate in JSON, which I also use in my project by converting it to XML for easier parsing. It may be worth looking at if you ever have to get data this way.
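A tiny Python sketch of that conversion idea (the JSON sample and the naive converter are my own invention for illustration; the project itself does this in LabVIEW):

```python
import json

def json_to_xml(obj, tag: str) -> str:
    """Naively wrap parsed JSON in XML tags.

    Sketch only: no escaping, attributes, or namespaces are handled.
    """
    if isinstance(obj, dict):
        inner = "".join(json_to_xml(v, k) for k, v in obj.items())
    elif isinstance(obj, list):
        inner = "".join(json_to_xml(v, "item") for v in obj)
    else:
        inner = str(obj)
    return f"<{tag}>{inner}</{tag}>"

data = json.loads('{"user": {"name": "Peter", "posts": [1, 2]}}')
print(json_to_xml(data, "root"))
# <root><user><name>Peter</name><posts><item>1</item><item>2</item></posts></user></root>
```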
Well, of course I will help you if you ever need it.
Yet another crazy twist.
What in the world can I do to process the weather data I am retrieving? Data processing only works if you have a reasonable amount of data. If I were able to retrieve historical weather data, I could find a way to manipulate it. The current weather and a seven-day forecast just don't give you anything to process, unless you want a plot of the predicted highs and lows for the next week. That's pretty boring processing, though.
Bruce
Hey, it seems this task is going to be really interesting for all of us.
But I have to admit I envy you a bit right now. You have at least a few data points for the same variable/signal; I have only one data point of a specific value for each of about 100 groups.
It would be very interesting to make, e.g., a graph with 100 bars indicating the number of users and put the legend underneath.
I think we will spend 99% of our available time figuring out what to process and present...
Peter
For round 3, can we use the Report Generation Toolkit, or do we have to code everything from scratch?
Also, what is considered post-processing? Is plotting data enough? Do we have to do something else to the data?
Thanks,
Bruce
Hi Bruce!
I think no limitation was given in the task description on the way you transfer data to the other application, so I assume it is possible. If someone wants to use, e.g., Excel, I think there is no more straightforward and simple way to transfer your data than the Report Generation Toolkit.
Peter
You can definitely use the Report Generation Toolkit for Microsoft Office. I consider post-processing to be any type of data manipulation (e.g., data type conversion, adding 2 to a number, etc.). I agree that those types of processing don't reflect real-world applications, but just taking the data we transferred from another VI and massaging it in some way in these other software packages is what we're looking for. Does that help, Bruce?
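To make that concrete outside LabVIEW, even something as small as this Python sketch would count (the forecast numbers are invented):

```python
# Invented seven-day forecast highs, as if just received from the other VI.
highs_f = [61, 64, 58, 70, 72, 66, 59]

# Minimal post-processing: a unit conversion plus a derived statistic.
highs_c = [round((f - 32) * 5 / 9, 1) for f in highs_f]  # Fahrenheit -> Celsius
print("Celsius highs:", highs_c)
print("Weekly average:", round(sum(highs_f) / len(highs_f), 1), "F")
```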