Each write operation stores both the digital value and a timestamp. The uncompressed LabVIEW timestamp is 16 bytes, so 2 bytes per point is a reasonably good compression ratio. I can see why you are concerned, however: even if we only required 1 bit per value, your application would generate more than 1 GB of data per day.
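To put rough numbers on that, here is a back-of-the-envelope sketch in Python (LabVIEW being graphical, a text sketch will have to do). The 100,000 points/s aggregate write rate is a made-up figure for illustration; plug in your own:

```python
# Rough daily log size as a function of bytes stored per point.
# The 100,000 points/s aggregate write rate is hypothetical;
# substitute your application's actual rate.

POINTS_PER_SECOND = 100_000       # hypothetical aggregate write rate
SECONDS_PER_DAY = 24 * 60 * 60

# 17 B = raw 16-byte timestamp + 1-byte value; 2 B = compressed point;
# 1/8 B = the "1 bit per value" lower bound mentioned above.
for bytes_per_point in (17, 2, 1 / 8):
    total_bytes = POINTS_PER_SECOND * SECONDS_PER_DAY * bytes_per_point
    print(f"{bytes_per_point:>6g} B/point -> {total_bytes / 2**30:6.2f} GiB/day")
```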
You can set the precision of the stored timestamp through the "trace attributes" input of the Open Trace VI. The default precision is 1 ms. If you don't need that resolution, setting a coarser precision will save you a byte here or there. Running your example with a precision of 1 s saved approximately 4 kB over the length of the run.
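For a sense of why coarser precision saves anything at all: if consecutive timestamps are stored as deltas in a variable-length encoding (an assumption on my part; I don't know the trace file's actual layout), a delta expressed in seconds needs fewer bits than the same delta expressed in milliseconds. A minimal sketch:

```python
# How many bytes a timestamp delta needs at different precisions,
# assuming a varint-style encoding (7 payload bits per byte). This
# encoding is an assumption for illustration, not the actual format.

def varint_bytes(n: int) -> int:
    """Bytes needed to store non-negative n with 7 payload bits per byte."""
    return max(1, (max(n, 1).bit_length() + 6) // 7)

DELTA_SECONDS = 3.2               # hypothetical gap between consecutive writes

for unit, label in ((0.001, "1 ms"), (1.0, "1 s")):
    ticks = round(DELTA_SECONDS / unit)
    print(f"precision {label}: delta = {ticks:>4} ticks -> {varint_bytes(ticks)} byte(s)")
```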
I don't know how the data in your application is formatted, but you might also consider using a writer of type Variant and logging your values as arrays. Rerunning your example, but logging 500 arrays of 20,000 values each, brought the cost down to approximately 1 byte per value. This method will not give you an individual timestamp for each value (you get one timestamp per array), and it will require you to write your own code to retrieve and display the data, but it is considerably faster and has a smaller disk footprint.
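The saving comes from amortization: one 16-byte timestamp is shared by the whole array instead of accompanying every point. A quick sketch (the 1 byte of storage per array element is an assumption matching the figure above):

```python
# Effective storage cost per value when one timestamp covers an
# entire array. VALUE_BYTES = 1 matches the ~1 byte/value figure
# above; adjust for your actual data type.

TIMESTAMP_BYTES = 16
VALUE_BYTES = 1

def bytes_per_value(array_len: int) -> float:
    """Amortized cost per value: element storage plus shared timestamp."""
    return VALUE_BYTES + TIMESTAMP_BYTES / array_len

for n in (1, 100, 20_000):
    print(f"array of {n:>6} values -> {bytes_per_value(n):.4f} bytes/value")
```

At 500 writes of 20,000 values each, that works out to roughly 10 MB for the run.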