For the first time I need to use record sizes for a PB 3.5 BINARY
file which are larger than the 16384-byte UDT maximum size. For
actual use of what is in the file, I only need just under 4096
bytes of information per record. But populating what is needed in
those 4096 bytes will require close to 1000 different variables in
about 25K more of junk in each record. The 4096 bytes in the
critical area, plus two more UDTs each less than the 16384-byte
maximum size for a UDT, mean three UDTs are needed for this task.
To minimize the load on most of the programs which use the file, I
contemplate addressing only the UDT for the 4096-byte block. I
always try to choose block boundaries like this so that network
operations are optimized as much as possible, in line with the way
Novell, IBM's OS/2 caching and so on set up their buffers. Faster
throughput and less thrashing that way. Defining only that
4096-byte UDT for use keeps close to 950 unneeded variables out of
the code.
Now .. if I open the file for BINARY use, I will know exactly where
the boundaries are for the entire record, as well as for the three
UDTs. Can't I just do a SEEK to place the pointer for each file
read of that needed 4096 bytes and just go get it?
Come write time, from the master utility which uses the bigger UDTs
as the dedicated program to manipulate the whole file, can't I just
do a similar SEEK to a known position and write that 4096 bytes?
And the same to read and write the larger UDTs?
Alternatively, can I SEEK to the starting point of a given record,
then simply PUT the 4096-byte UDT and then, in sequence, each of
the other two larger UDTs? Will PB 3.5 know to keep moving forward
in the file, even though no position was explicitly given for them
in the gross write of the whole record?
What will the general effects of this be if I move toward PBCC at
a later date .. or, far more likely, to PB for Linux when it
arrives? The technique looks appealing, as I think I can fairly
easily match the UDT with a C++ STRUCT if all else fails.
Obviously with appropriate thought about storing the internal
members in some form compatible with the common file.
Inquiring mind wants to know .. Thanks!
------------------
Mike Luther
[email protected]