Data size vs execution speed


  • John Gleason
    replied
    Here is code that shows a nice 2x speed improvement using a big BYTE array rather than a big LONG array, as I mentioned in post #3 above.
    Code:
    'This was code to solve the "Josephus" puzzle in another thread.
    'It's an example of a BYTE array being MUCH faster than a LONG array.
    'Try both [REDIM circ(n) AS LONG] and [REDIM circ(n) AS BYTE] below.
    'AS BYTE is over 2x faster on my celeron laptop.
    'PBCC
    #COMPILE EXE
    #DIM ALL
    
    FUNCTION PBMAIN () AS LONG
        LOCAL a, n, ii, lastWarr AS LONG
        REGISTER warrPos AS LONG, cMod AS LONG
    
    '    INPUT "How many warriors? ",n
    '    INPUT "Counting modus? ",a
    
    FOR n = 28048000 TO 28048003     'loop will try several warrior/modus combinations
       FOR a = 3 TO 5
        REDIM circ(n) AS BYTE 'LONG  'Try both to verify speed increase using BYTE
        FOR ii = 1 TO n
           circ(ii) = 49
        NEXT
    
        DO
           INCR warrPos
           IF warrPos > n THEN warrPos = 1
           IF circ(warrPos) = 49 THEN
              INCR cMod
              IF cMod = a THEN
                 circ(warrPos) = 48
                 cMod = 0
                 INCR lastWarr
                 IF n - lastWarr = 1 THEN
                    FOR ii = 1 TO n
                       IF circ(ii) = 49 THEN
                          PRINT "The warrior with position"+ STR$(ii)+ " in the circle is remaining "
                          EXIT DO
                       END IF
                    NEXT
                 END IF
              END IF
           END IF
        LOOP
        RESET warrPos, cMod, lastWarr
    NEXT
    NEXT
    WAITKEY$
    
    END FUNCTION



  • Michael Mattias
    replied
    The biggest thing with math is not which data type you select for operands and/or results... it's using the same types within each expression or operation. (Excluding the fairly obvious "don't use floating-point types when integer types will do".)

    The compiler is very forgiving in that it will do all the conversions for you, but that thing about the non-freeness of lunch still applies.

    I still say the primary factor in application performance is the design.
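
    For illustration, here is a minimal sketch of the kind of check Michael describes (not from his post; the names and loop counts are arbitrary). It times a same-type LONG expression against a mixed LONG/SINGLE expression so the conversion cost is visible on your own machine.
    Code:
    'Illustrative sketch only: compare a same-type expression with a mixed-type
    'expression. The mixed LONG/SINGLE expression forces conversions every pass.
    #COMPILE EXE
    #DIM ALL

    FUNCTION PBMAIN () AS LONG
       LOCAL ii, sameL, mixedL AS LONG
       LOCAL s AS SINGLE
       LOCAL t1, t2 AS QUAD

       s = 3

       TIX t1                          'same types: LONG * LONG + LONG
       FOR ii = 1 TO 10000000
          sameL = ii * 3 + 7
       NEXT
       TIX END t1

       TIX t2                          'mixed types: LONG * SINGLE needs conversions
       FOR ii = 1 TO 10000000
          mixedL = ii * s + 7
       NEXT
       TIX END t2

       PRINT USING$("same types: ##########   mixed types: ##########", t1, t2)
       WAITKEY$
    END FUNCTION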



  • John Gleason
    replied
    It's always a good idea to test your optimizations. The code below simply compares SINGLE to DOUBLE using addition, and SINGLE tests several times faster. Try changing DOUBLE to EXT and you should see EXT become faster than DOUBLE, and even faster than SINGLE if it is allowed to be a REGISTER variable, i.e. omit #REGISTER NONE.
    Code:
    #COMPILE EXE
    #DIM ALL
    '#REGISTER NONE                   'try with and without #REGISTER NONE with EXT variables
    
    FUNCTION PBMAIN () AS LONG
       
      LOCAL singl AS SINGLE, t, t2 AS QUAD, ii, ii2 AS LONG
      LOCAL doubl AS DOUBLE 'EXT      'try changing to EXT, with and without #REGISTER NONE
      LOCAL str AS STRING
    
     FOR ii2 = 1 TO 15
       TIX t
        FOR ii = 1 TO 1000000
           singl = 23.445
           singl += .0001
        NEXT
         TIX END t
         str &= USING$("*0#######", t) & "  "
       TIX t2
        FOR ii = 1 TO 1000000
           doubl = 23.445
           doubl += .0001
        NEXT
       TIX END t2
         str &= USING$("*0#######", t2) & STR$(t2/ t, 2) & "x faster" & $CRLF
       RESET singl,doubl
     NEXT
     ? str
    
    END FUNCTION
    Originally posted by Michael Mattias View Post
    No, the LONG will be faster. Once loaded into memory the "total storage size" of an array will have exactly zero effect on performance.
    I do have an old geezer laptop, but on it, huge LONG arrays have sometimes had slower times than their compressed BYTE array counterparts. It is likely the exception to the rule, but that's the advantage of quick speed checks.



  • Michael Mattias
    replied
    Originally posted by John Gleason View Post
    Only if your data is huge, eg. say 40MB byte vs 160MB long arrays will it possibly slow if converted...
    No, the LONG will be faster. Once loaded into memory the "total storage size" of an array will have exactly zero effect on performance.

    >SINGLE is usually fastest, then DOUBLE

    Well, that can depend on what you are doing. If you are using either an intrinsic numeric function (e.g. EXP, SIN, etc.) or an expression with untyped arguments, the compiler uses EXT as its native type. If you are assigning a value from an express or implied EXT to a SINGLE, an additional conversion step is required.

    The EXT costs you memory over a SINGLE, but gains some speed because conversion is not required. Since there is no such thing as a free lunch, you have to decide if the extra memory is a cost you are willing to pay for the additional performance.

    But as Mr. Gleason suggests, you can test before deciding.

    In this case it's pretty obvious you should not use explicit type specifiers for your variables.... it's a lot easier to change one "LOCAL" statement than it is to find and change six or eight references from "X!" to "X##" .... plus you WILL miss one. (Well, I would miss one, for sure.)

    MCM
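
    To illustrate that last point with a made-up fragment (not from the thread; the names are placeholders): when the variable is declared with LOCAL, switching it from SINGLE to EXT is a one-line edit instead of hunting down every suffixed reference.
    Code:
    'Illustrative sketch only: the type lives in one LOCAL statement, so a later
    'change from SINGLE to EXT means editing one line rather than every x! reference.
    #COMPILE EXE
    #DIM ALL

    FUNCTION PBMAIN () AS LONG
       LOCAL x AS SINGLE               'later: change this one line to  LOCAL x AS EXT
       LOCAL ii AS LONG

       FOR ii = 1 TO 1000
          x = x + SIN(ii) * 0.001      'SIN is an intrinsic, so the expression is EXT internally
       NEXT

       PRINT x
       WAITKEY$
    END FUNCTION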



  • John Gleason
    replied
    One other suggestion: you should consider #REGISTER/REGISTER too for certain of your variables, both integer and EXT floating point. This can result in a nice speed-up.
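
    A minimal sketch of what that can look like (names and loop count are arbitrary, not from John's post): REGISTER asks the compiler to keep the LONG counter in a CPU register and the EXT accumulator in an FPU register; whether it actually helps is something to time, as elsewhere in this thread.
    Code:
    'Illustrative sketch only: REGISTER declarations for a LONG and an EXT variable.
    #COMPILE EXE
    #DIM ALL

    FUNCTION PBMAIN () AS LONG
       REGISTER ii AS LONG, e AS EXT
       LOCAL t AS QUAD

       TIX t
       FOR ii = 1 TO 1000000
          e = e + 0.0001
       NEXT
       TIX END t

       PRINT USING$("tix: ##########", t) & STR$(e)
       WAITKEY$
    END FUNCTION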



  • Tom Hanlin
    replied
    The quote is unambiguous. You get the best performance from LONG values because that's the most efficient data type for modern CPUs.



  • John Gleason
    replied
    Originally posted by Mottel Gutnick View Post
    I am wondering:
    (a) whether that advice applies equally to variables used in general arithmetic operations as it does to loop-counter variables.

    (b) and, if so, at what point (if any) does the extra memory requirement of a large program in which all integer variables are converted to longs negate the increase in execution speed intended to be gained from this kind of optimization.

    (c) would the advice to go long even extend to integer variables used as pseudo-booleans -- i.e. with values of only zero, representing false and NOT zero (-1), representing true. The difference being that these are not generally used in arithmetic but only tested for equality to zero or non-zero.

    (d) And what about floating-point arithmetic? Is it also faster using double precision variables (#) as opposed to single precision (!)? The PB documentation is silent on this.
    a) Not sure if it's "equally", but yes, make them LONG if possible

    b) Only if your data is huge, eg. say 40MB byte vs 160MB long arrays will it possibly slow if converted, but it is still worth testing to see if it actually is slower or faster.

    c) yes, make LONG

    d) SINGLE is usually fastest, then DOUBLE.



  • Michael Mattias
    replied
    Originally posted by Mottel Gutnick View Post
    I'm trying to get an idea as to whether it is worth the effort to convert all my integers to longs.
    That's an easy one.

    Yes.



  • Mottel Gutnick
    started a topic Data size vs execution speed


    Long integers are the most efficient numeric data type in PowerBASIC and should be used in all cases where speed is important and a greater numeric range is not required. (Using Byte and Integer variables in FOR/NEXT loops is actually slower than using a Long integer.)
    The above advice is from the PBCC 5 documentation, and I have also observed, in most of the code samples offered here, a preference for longs over integers even where the latter would be adequate for the expected value range.

    I am wondering:
    (a) whether that advice applies equally to variables used in general arithmetic operations as it does to loop-counter variables.

    (b) and, if so, at what point (if any) does the extra memory requirement of a large program in which all integer variables are converted to longs negate the increase in execution speed intended to be gained from this kind of optimization.

    (c) would the advice to go long even extend to integer variables used as pseudo-booleans -- i.e. with values of only zero, representing false and NOT zero (-1), representing true. The difference being that these are not generally used in arithmetic but only tested for equality to zero or non-zero.

    (d) And what about floating-point arithmetic? Is it also faster using double precision variables (#) as opposed to single precision (!)? The PB documentation is silent on this.

    I am writing a program with a lot of arithmetic-calculation routines executed multiple times. Mostly integer arithmetic in which about 80% of the variables used are integers. I'm trying to get an idea as to whether it is worth the effort to convert all my integers to longs. (I use type-identifier suffixes on all my variables.)
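
    One way to check the documentation's claim directly is a quick timing sketch like the one below (loop counts and names are arbitrary; note the INTEGER counter cannot exceed 32767, which limits the comparison range):
    Code:
    'Illustrative sketch only: time a FOR/NEXT loop with an INTEGER counter
    'against the same loop with a LONG counter.
    #COMPILE EXE
    #DIM ALL

    FUNCTION PBMAIN () AS LONG
       LOCAL iInt AS INTEGER
       LOCAL iLng, dummy AS LONG
       LOCAL t1, t2 AS QUAD

       TIX t1
       FOR iInt = 1 TO 30000           'INTEGER counter (16-bit, range-limited)
          dummy = dummy + 1
       NEXT
       TIX END t1

       TIX t2
       FOR iLng = 1 TO 30000           'LONG counter (natural 32-bit size)
          dummy = dummy + 1
       NEXT
       TIX END t2

       PRINT USING$("INTEGER: ##########   LONG: ##########", t1, t2)
       WAITKEY$
    END FUNCTION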
