Actually, yesterday I played around with this some more (it's just such a perfect way to handle this application that I had to give it another shot), and what I was able to figure out is that you CAN do this, but there is a bug in the RESET statement which was screwing me up. (RESET array$() should null all the strings, period; what it actually does is clear the "array is dimensioned" flag [ARRAYATTR(array$(), 0)] and does not reset the strings.)
Fortunately I found a workaround.
I need to clean up that demo and send to support.
Note: This occurs with PB/Win 7.02, so anyone on another version of the compiler should not take this to heart.
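For anyone hitting the same RESET bug, here is a minimal sketch of one possible workaround (not necessarily the one I used; the array name and bounds are illustrative only): simply null each element explicitly instead of relying on RESET.

```basic
' Sketch only: clear each dynamic string by hand instead of using
' RESET, which (in PB/Win 7.02) clears the dimensioned flag but
' leaves the string data intact.
DIM a(1 TO 10) AS STRING
LOCAL i AS LONG
FOR i = LBOUND(a()) TO UBOUND(a())
  a(i) = ""   ' assigning a null string releases the element's data
NEXT i
```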
A Technique For Using Dynamic Arrays In UDTs
So you can forget about using absolute arrays 'AS STRING' in a supported manner if you want to handle your own memory allocation/deallocation.
Oh, well, I guess I'll just have to sacrifice the factory support. At least I know where I stand.
------------------
If you try to make something idiot-proof, someone will invent a better idiot.
-
My request in to PB support 'as we speak' to confirm how to handle dynamic string arrays (a topic not covered in the help file) drew this reply:
[quote]Hi Michael,
I am sorry but we cannot document nor support allocation and freeing dynamic string data on your own.
PBCC 4 and PBWin 8 both use the same methods for allocating and freeing dynamic strings as were used in versions 3 and 7 respectively.
Sincerely,
Jeff Daniels
PowerBASIC Staff[/quote]
-
FWIW, you can always allocate your own memory and create an absolute array using "DIM AT..."
e.g.
Code:
TYPE Foo
  nElement AS LONG
  hArray   AS LONG
END TYPE

' yadda yadda
Foo.nElement = whatever
Foo.hArray   = GlobalAlloc(flags and stuff, Foo.nElement * 4)  ' note: flags first, then size in bytes

' yadda, yadda, yadda

' create a local PB array for the data, because the ARRAY functions are
' just SO handy....
pArray = GlobalLock(Foo.hArray)
REDIM localarray(Foo.nElement - 1) [AS whatever] AT pArray
I have a request in to PB support 'as we speak' to confirm how to handle dynamic string arrays (a topic not covered in the help file), but I do this kind of thing all the time for arrays of scalar variables when I need either re-entrancy or thread safety.
MCM
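The flip side of the sketch above is tearing the mapping down. A hedged sketch of the cleanup, assuming the same Foo layout, the usual WIN32API.INC declares, and that the local array is no longer in use:

```basic
' Sketch: release the absolute array's backing memory. The local
' array only points AT this memory, so free it exactly once, after
' the last use of localarray().
GlobalUnlock Foo.hArray   ' balance the earlier GlobalLock
GlobalFree Foo.hArray     ' return the block to the system
Foo.hArray = 0            ' mark the handle as gone
```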
-
i messed around with this and posted a little include; there is a discussion at: http://www.powerbasic.com/support/pb...ad.php?t=24735
------------------
-
there's no difference between passing a pointer to an array descriptor by value and passing an entire array by reference. after all, it's documented (i think) that passing an array by reference passes (ta-dah!) a pointer to the descriptor!
the only potential advantage of the former is that it allows you to circumvent the compiler's automatic parameter type checking (should you have the rare application in which that would be considered an advantage).
see also "single function to process any array type (cc3/win7+)", 12-17-03
also: don't try to use either technique to redim an array created by pb version 5x/6x with 7x/8x or vice versa, as the descriptor changed in 7x (afaik, only the 'data type' value changed, and only for 'udt' arrays, but i did not do a complete structured test of all possible data types).
-
I had to contend with the fact that I had a database whose
contents was being drawn from mainframe datasets set up and
managed by different groups, and that over time they modified
the structure of the individual datasets by adding or removing
fields, renaming them, rearranging them, and resizing them.
Tricky stuff. I had to devise ways of coping with the various
formats that were introduced over time. Since the dataset
extractions included the header data, I could check to see if
the headers matched with any of the previous profiles, and
proceed if they did. If the fields appeared in any order from
left to right, and I could still recognize the names used, I
still had to verify that the field contents were as expected
and the length was as before - a complicated process, since the
fields were fixed size without any separators, such as commas.
If there was any problem, I had to recognize this and force the
program to terminate with enough information to guide me in the
effort to adapt it to a new format where necessary.
Then I hit an issue where one dataset had a new performance
indicator field that I felt was important, so my program had to
be modified to include this field when it was available, but
still handle my older data which did not contain it.
I also wanted to minimize the size of my own database, meaning
replacing fixed sized string fields with variable length ones,
since the mainframe datasets were largely composed of blanks or
spaces. With daily downloads of over 100 million characters,
some space conservation was clearly indicated.
To manage my own data files, I created a header form that
named each field and typed it (and, if a string, gave its length).
Then I wrote into each data file the extract from the datasets
in accordance with the header description. When I needed to process
a range of data files in conjunction with others, say a month's
or quarterly evaluation, my program would reference each dated
file in turn, examine the header, and recover the contents based
on the header description.
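A hedged sketch of what such a per-field header record might look like (all names, types, and sizes here are my own invention for illustration, not the actual layout described above):

```basic
' Illustrative only: one descriptor per field, written at the front
' of each data file; the reader walks these records to recover the
' contents that follow.
TYPE FieldDesc
  fName AS STRING * 30   ' field name, blank-padded
  fType AS LONG          ' e.g. 0 = LONG, 1 = STRING
  fLen  AS LONG          ' declared length, for string fields
END TYPE
```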
Unfortunately, the preferred way of handling complex info by
using TYPE structures is not very compatible with this approach.
I could not figure out the best way to approach this using any
TYPE structures, so I ended up using variable arrays for each
field, tracking the sequence in which the arrays were dimensioned,
retaining an array that tracked each field array's name and type
(and for strings, their size), and then for each entry, I
assembled a string in the field order that I wanted to use when
processing each entry.
Now from what you are saying, I gather you are venturing into a
similar area. My programs worked pretty well, since I first
tackled it as an issue of how did I want to save the data and
make it recognizable for use later, and then dealt with how to
process the data as a separate issue.
Initially, the variable length strings were a real challenge.
However, I created a text file that only contained the text
associated with that field. I replaced the string length with
an offset into the text file where that string began, and the
next entry's starting point for the same field marked where the
current entry's string must end. The last valid entry just ran
to the end of the text file.
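That offset scheme can be sketched roughly like this, assuming 1-based offsets held in an offs() array and the field's text file already open FOR BINARY; GetFieldText and every other name here is hypothetical:

```basic
' Sketch: recover entry i's string from the field's text file.
' The next entry's offset marks where this entry's text ends;
' the last entry runs to the end of the file.
FUNCTION GetFieldText(BYVAL hText AS LONG, offs() AS LONG, BYVAL i AS LONG) AS STRING
  LOCAL nBytes AS LONG, s AS STRING
  IF i < UBOUND(offs()) THEN
    nBytes = offs(i + 1) - offs(i)
  ELSE
    nBytes = LOF(hText) - offs(i) + 1   ' last entry: read to end of file
  END IF
  SEEK #hText, offs(i)
  GET$ #hText, nBytes, s
  FUNCTION = s
END FUNCTION
```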
Of course I could have placed all the string field contents in
the same text file, but then I would have had to keep walking the
header to identify each string field and determine where the next
field started and this one ended. I just decided that each string
field deserved its own text file. This meant that each data file
actually became one reference file with a header, and a number of
additional text files. The text files were named after the
field name for easy identification.
My final step was to finally decide that the best way to group
my data files and text files together was to simply resort to a
subfolder method where the subfolders were all named for the
given date using the YYYYMMDD naming convention. That made it
easy to determine if I had the data for every day or not.
My program became an essential reporting process, and I was
pleased to have worked out the kinks myself. I found it very
cumbersome to have to examine and implement arrays based on a
header portion in the file, but it gave me the flexibility to
deal with data sources that were subject to change. As I
understand it, this approach is sort of like a rudimentary
effort at a fundamental OOP concept.
I am not reciting all this to impress anyone. What I hope to do
is give you some ideas of moving away from fixed file formats
and make your data structures more adaptive to new requirements.
I feel that I understand what is being expressed in the first
message in this thread, but in case anyone else feels somewhat
lost by the lack of some concrete examples, then maybe my
experience can open your eyes somewhat to the possibilities.
------------------
Old Navy Chief, Systems Engineer, Systems Analyst, now semi-retired
-
Thanks for posting that Eric.
When I get some time, I want to play around with it.
I’m very interested.
If you do anything more with it or have other examples, please post.
------------------
-
(continuation of thread at http://www.powerbasic.com/support/pb...ad.php?t=24734 )
[quote]originally posted by greg turgeon:
before anyone makes the mistake of relying on a tactic like this...[/quote]
if you're simply considering this technique a mistake, then you may as well call global variables a mistake, as well, in that this technique is merely another tool that requires a bit of discipline in its use.
yes, there are considerations to keep in mind when using this technique, those mainly being that using external code and/or data is an uncertain practice when you access them like this, due to version discrepancies; the disciplined approach? you only use this technique when dealing with code that you can verify is compatible, and when you're not sure of that, you make provisions to bridge the gap.
really, if you're only dealing with your own code, both of those concerns are completely irrelevant. if you're worried about future compatibility, then when dealing with external code, yes, by all means use methods that you know will remain compatible. there's no reason you shouldn't use this technique internally in your own code, so long as you don't rely on code from external modules using the same tricks.
the more tools in your arsenal, the more flexible you are in the approaches you can take in doing things. as with most responsibilities, the responsibility ultimately falls on the programmer to pick the right tool for the job. no, this approach isn't the end-all, perfect approach for all situations, but there's been a lot of call for having dynamic arrays inside of udts, and it's a matter of what trade-offs you can afford to make.
me, my need for this arose out of working on a personal project where all of the procedure calls have at least 3 parameters, and some have as many as 7. (and potentially more, in the future.) yes, global variables are an option, but i chose to avoid that route. it just makes for a cleaner calling syntax to be able to pass a single udt to the subs/functions and have all of your data - arrays included - right there by means of a single passed parameter.
i don't need to interface with external code, and as it's my own project, currently all i'd need to do is recompile it if a new version of pb were to come out tomorrow. before i finish with my code, yes, i'll more than likely include provisions for version-independent access to my data, but i'm focussed on the innards of my beast, right now.
not every tool is right for every job, but when you need it, it's great to have it.
------------------
software: win xp pro sp 2, pb/win 7.04, pb/cc 4.02, win32api.zip v.02-01-05
hardware: amd athlon 64 3200+ (clock speed 2.00 ghz) on a gigabyte k8n pro nforce3-150 mobo, 1 gb pc3200/ddr400 ram, geforce 5 le 128 mb ddr video
[this message has been edited by eric cochran (edited november 04, 2005).]