I wrote a routine to read and convert SYLK (Multiplan) data files,
but I was unsatisfied with the conversion speed, so I have been
experimenting with different buffer sizes.
I open the file as binary and specify a buffer length with the GET$
command. On mainframes, it is usually beneficial to make the buffer
size a multiple of the track or cylinder size. I expected to find that
in this case the buffer size should be a multiple of the block size, but
then I started to wonder whether block size is even meaningful on a
FAT32 system.
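For reference, the read loop is equivalent in spirit to the following Python sketch (the actual program is BASIC using GET$ on a binary file; the function name and parameters here are just placeholders for illustration):

```python
import time

def timed_read(path, buf_size):
    """Read a file in binary mode in chunks of buf_size bytes,
    returning (total_bytes_read, elapsed_seconds)."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:          # open the file as binary
        while True:
            chunk = f.read(buf_size)     # analogous to GET$ with a length
            if not chunk:                # end of file
                break
            total += len(chunk)
            # ...SYLK decoding of the chunk would happen here...
    return total, time.perf_counter() - start
```

The timing numbers below include the SYLK decoding work, so a loop like this measures only the raw read portion of what I tested.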
My results show that reads are fastest with a one-megabyte buffer. I was
a bit surprised to find a maximum workable buffer size, and even more
surprised that the largest buffer I could read with was not a power of
two, but an in-between value.
This data is purely anecdotal: I made no attempt to stop background
processes or clear disk caches. Also, as I said, the program is
decoding the SYLK format, not purely reading data off the disk.
The file size is 1,616 KB, and the test system is a Pentium 266 laptop.
Buffer size   Time in seconds
    32767     12.8
   131072     12.7
   262144     12.2
   524288     12.3
   786432     12.5
  1048576      8.6
  1310720     10.4
  1572864     12.4
  2097152     failed to run correctly (returned in 0.3 sec)
I have run these several times to make sure that the sub-9-second result
is correct, and it is. Can anyone explain why this one buffer size would
make such a dramatic difference? Does it relate to a block size? Can
I expect it to change from machine to machine?
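For what it's worth, here is the number of reads each buffer size implies for the file. This assumes 1,616 KB means 1,616 × 1,024 = 1,654,784 bytes, and it is just ceiling division, so it says nothing about caching or memory effects:

```python
import math

FILE_SIZE = 1616 * 1024  # assumed: 1,616 KB = 1,654,784 bytes

# the buffer sizes from the table above
for buf in (32767, 131072, 262144, 524288, 786432,
            1048576, 1310720, 1572864):
    reads = math.ceil(FILE_SIZE / buf)  # GET$ calls needed to cover the file
    print(f"{buf:>8} bytes -> {reads} read(s)")
```

Note that the 1,048,576-byte buffer needs the same number of reads (two) as the two larger buffers that were slower, so the read count alone doesn't explain the difference.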
John Kovacich
------------------