Best Practice for Dynamic Array Allocation?


  • Best Practice for Dynamic Array Allocation?

    I have to use an array, but I don’t know in advance how many elements I will need, so I have two options.

    First option: create an array that is larger than the maximum number of elements I will ever have in my program:

    Code:
    Local lMyArray() As tMyArray
    Dim lMyArray(1 To 2000)
    Second option: add one element to the array each time I need to extend it:

    Code:
    Local lMyArray() As tMyArray
    
    ' at first, allocate just 1 element
    Dim lMyArray(1 To 1)
    ...
    ' and then, if I need more elements,
    ' I can reallocate my array using "REDIM PRESERVE"
    ReDim Preserve lMyArray(1 To UBound(lMyArray) + 1)
    1. I know it is probably a beginner’s question; can you tell me the pros and cons of each option?
    2. Are there any other options?
    3. Is it better to allocate several elements at a time? If so, I cannot use the UBound() function to know how many elements are actually used in the array. Is there an alternative in this case?


    Thank you for your help.
    Jean-Pierre LEROY

  • #2
    You can also do this in "chunks": allocate a block of elements at a time, then resize down to the actual count at the end.

    See the demo at Generic 'ADO' Connection and Query Tester (CC 5+/Win 9+) 11-02-08

    I use this technique to cut down the number of REDIM PRESERVE operations, which I've found to be the real time-consumers.
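    A minimal sketch of the chunked approach, assuming a chunk size of 100 and the tMyArray type from the question; lNewElement is a hypothetical incoming value:

    Code:
    Local lCount As Long                 ' number of elements actually used
    Local lMyArray() As tMyArray
    Dim lMyArray(1 To 100)               ' first chunk
    ...
    ' each time a new element arrives:
    Incr lCount
    If lCount > UBound(lMyArray) Then
        ' extend by a whole chunk, not by one element
        ReDim Preserve lMyArray(1 To UBound(lMyArray) + 100)
    End If
    lMyArray(lCount) = lNewElement       ' lNewElement is hypothetical
    ...
    ' when loading is finished, trim to the exact size:
    ReDim Preserve lMyArray(1 To lCount)
    This keeps the number of REDIM PRESERVE operations at roughly one per hundred elements instead of one per element.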

    MCM
    Michael Mattias
    Tal Systems (retired)
    Port Washington WI USA
    [email protected]
    http://www.talsystems.com



    • #3
      Also FWIW... I'd suggest you change to using zero-based arrays, even if you don't use element zero.

      The memory required for the one unused element is a very cheap price for the improved performance.
      Michael Mattias



      • #4
        JP, you can do it either way: REDIM PRESERVE (UBound + 1) as each new element is added, or REDIM (100), then as 100 elements are reached REDIM PRESERVE (200), and so on.

        Either way is fine; it is the programmer's choice. The first method is cleaner and easier (but slower). The second is faster (as Michael pointed out) but requires more attention to detail.

        Neither way is "right" or "wrong"; it just depends on the application and how the programmer approaches it.

        ==========================================================
        "Life's a pudding full of plums;
        Care's a canker that benumbs,
        Wherefore waste our elocution
        On impossible solution?
        Life's a pleasant institution,
        Let us take it as it comes."
        W.S. (William Schwenk) Gilbert (1836)
        ==========================================================
        It's a pretty day. I hope you enjoy it.

        Gösta

        JWAM: (Quit Smoking): http://www.SwedesDock.com/smoking
        LDN - A Miracle Drug: http://www.SwedesDock.com/LDN/



        • #5
          Another agreement with Michael: ideally, make the array larger than you will ever need and keep a count of the number of elements used.
          When finished loading the array, do a REDIM PRESERVE down to the size actually used. That way, in later array operations, you can safely use UBOUND rather than some stored variable; much safer.
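          As a sketch of that approach (the 2000-element ceiling is taken from the question's first option; the counter and element names are assumptions):

          Code:
          Local lUsed As Long
          Local lMyArray() As tMyArray
          Dim lMyArray(1 To 2000)        ' generous upper bound, never reallocated while loading
          ...
          ' for each element loaded:
          Incr lUsed
          lMyArray(lUsed) = lNewElement  ' lNewElement is hypothetical
          ...
          ' once loading is complete, shrink to the size actually used:
          ReDim Preserve lMyArray(1 To lUsed)
          ' from here on, UBound(lMyArray) is the true element count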



          • #6
            Jean-Pierre,
            Don't add one element each time you need it. Do it in big blocks; that's much more efficient. For example, start with 1000 elements and, when you reach that limit, add another thousand.


            If your elements are small, then just DIM the maximum size immediately. These days a "small" array is many megabytes, so if the total size of the biggest array you can foresee using is only a few megabytes, just DIM it all at the start.


            Paul.



            • #7
              >If your elements are small then just DIM the maximum size immediately
              Code:
              DO 
                 Real Life Size = What user last told you plus ten (10) percent. 
              LOOP
              Michael Mattias



              • #8
                REDIM PRESERVE, where you increase by an appropriate number of elements, is the better choice, as some have already affirmed. The size to increase by is usually driven by the programmer's knowledge of the element size. Sometimes the available RAM for array storage might also need to be considered, if you want to keep the array in memory.

                Your example is a one-dimensional dynamic array, but please read the help on REDIM, since for multi-dimensional arrays one can only extend the outermost dimension.

                As far as zero-basing arrays goes, actually "it depends". Other factors can influence the decision, but considering that PB has optimized non-zero basing it could be better to use the basing you need. For example, if you are going to be dealing with information from other sources that specify a non-zero-based index into the dynamic string array, you would save nothing by having to adjust that value yourself before each array access. In other cases, your program may not be accessing the array a gazillion times, so any speed advantage from zero basing would never, ever be noticed by users.
                Rick Angell



                • #9
                  Shame it has to be an array. Sounds like a job for a linked list!



                  • #10
                    Why would you suggest a linked list without first knowing the rest of the application's needs? Linked lists have specific advantages for some types of applications, but they also have many disadvantages. Many systems now have 1 or 2 GB or even more RAM, so some of the purported benefits with regard to memory utilization are never a factor. The topic is discussed here: http://en.wikipedia.org/wiki/Linked_...sts_vs._arrays

                    It's a fair discussion, and it has a section entitled "Linked lists vs. arrays". For the most part, the advantages of arrays coupled with the memory availability of modern systems are why we use them, and perhaps as a passive testament to this, linked lists have not been built into many languages as native functions. If enough folks wanted them, though, it might be fodder for a successful NFS to have a more native approach to setting up the lists, with options such as singly or doubly linked, rather than rolling your own or copying some of the fine examples posted in the forums.
                    Last edited by Richard Angell; 14 Nov 2008, 01:29 PM.
                    Rick Angell



                    • #11
                      but considering that PB has optimized non-zero basing it could be better to use the basing you need.
                      Huh?

                      Help, DIM:
                      While PowerBASIC supports LBOUND values that are non-zero, PowerBASIC generates the most efficient code if the LBOUND parameter is omitted (i.e., the array uses the default LBOUND of zero). You should also avoid specifying an explicit LBOUND of zero, since this imposes a small efficiency penalty with no meaningful benefits
                      MCM
                      Michael Mattias



                      • #12
                        Code:
                        'This might be useful to somebody. With only a few thousand elements
                        'hardly any difference. When you get into tens of thousands the
                        'performance can double.  Like Richard mentioned, the outer dimension
                        'must be used with REDIM PRESERVE so use the highest dimension.
                        'Example REDIM PRESERVE(Cols, unknown) if rows is the unknown.
                         
                        #COMPILE EXE
                        #DIM ALL
                        'Note: REDIM PRESERVE must be performed on the highest dimension
                        FUNCTION PBMAIN () AS LONG
                         
                          LOCAL REDIM_COUNT AS LONG
                          LOCAL Row AS LONG
                          LOCAL Cols AS LONG
                          LOCAL Rows AS LONG
                          LOCAL Unknown_End_Of_File AS LONG
                          REDIM_COUNT = 10000 'increase for speed
                          Cols = 3
                          Unknown_End_Of_File = 1000 'have to test with something
                         
                          Rows = 0  'the great unknown
                          DIM gsArray() AS STRING
                          REDIM gsArray(Cols, REDIM_COUNT)
                          FOR Row = 1 TO Unknown_End_Of_File
                            INCR Rows  'or use Rows = Row - 1 (after coming out of the loop)
                            IF Row MOD REDIM_COUNT = 0 THEN
                              REDIM PRESERVE gsArray(Cols, Row + REDIM_COUNT)
                            END IF
                          NEXT
                          REDIM PRESERVE gsArray(Cols, Rows)
                          ? STR$(UBOUND(gsArray, 2)) 'check outer dimension
                          SLEEP 5000
                        END FUNCTION
                        Last edited by Mike Doty; 14 Nov 2008, 03:36 PM. Reason: Didn't honor tabs



                        • #13
                          Michael,

                          Certainly zero-basing is a bit faster per call; I'm not referring to that. I'm referring to the user needing to convert indexes into their array that are cross-referenced from other sources which are not zero-based. Every time the programmer must add their own adjusting code for such situations, it adds up. If you start with a non-zero low bound, the compiler gets the onus, not you. Each situation can be judged on its own merits; then decide.

                          Speed, or rather the perception of any consequential difference, is relative not so much to array size as to the frequency of access. One would be really hard pressed, in human terms, to notice any significant time difference when access frequencies are relatively low or accesses are infrequent. So it really comes down to the external requirements a program may need to interact with. If the external world is, let's say, 1-based, it could definitely be better to use one-basing internally to keep the 1:1 correspondence.
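                          For illustration, a minimal sketch of keeping that 1:1 correspondence with a 1-based external source (the spreadsheet-row scenario, the row count, and the variable names are all assumptions):

                          Code:
                          ' External data (e.g. spreadsheet rows) is numbered 1..nRows,
                          ' so a 1-based array keeps indexes identical on both sides.
                          LOCAL nRows AS LONG
                          nRows = 500                          ' assumed row count
                          DIM sRowData(1 TO nRows) AS STRING
                          
                          ' Row 37 in the external source is simply element 37 here;
                          ' no "- 1" adjustment is needed at each access.
                          sRowData(37) = "contents of external row 37"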
                          Rick Angell



                          • #14
                            Zero-based, one-based... it's all relative and it all depends on the application.

                            Eg if you send a string parameter to a function.....
                            Code:
                            FUNCTION foo  (S AS STRING)
                            .. and want to access each character of the string as a byte value via an array ...
                            Code:
                              LOCAL charval () AS BYTE
                            .. it is totally immaterial if you use 1-based indexing...
                            Code:
                               REDIM charval (1 TO LEN(S))
                            .. or zero-based indexing ....
                            Code:
                              REDIM charval (LEN(s) -1)
                            .. if you access those bytes using the array limits..
                            Code:
                                FOR Z = LBOUND (Charval,1) TO UBOUND (charval,1)
                            ... unless you care about things such as best performance, in which case you will use zero-based indexing to take advantage of the compiler's optimizations.
                            Code:
                            END FUNCTION
                            Michael Mattias

