Execution Speed Curiosity


    I have an Excel spreadsheet which uses VBA to do a lot of crunching. So I wrote a PB8 DLL to replace one of the VBA subroutines and it speeds things up nicely. (thanks to several people here who helped me over the bumps.)

    So once I was done with that, just for grins I ran it on two other machines to compare the all VBA -vs- PB/DLL versions of the program. I find the result curious, and just wondered if this is common.

    Machine "A" is a dual core AMD desktop running XP, 3GB.
    Machine "B" is a dual core Intel laptop running Vista, 2GB.
    Machine "C" is an old Celeron laptop running XP, 1GB.
    (Same Excel version, and all service packs and updates applied to the OS. No other apps running.)

    Here are the approximate execution times (minutes) for the various machines:

    Code:
         VBA   PB/DLL  SpeedUp
    "A"   42     3.25    13x
    "B"   50     8.0     6.3x
    "C"   85.5  13.0     6.6x


    I'm puzzled by the anomaly. Why does "A" get twice the speed boost from the DLL as "B" or "C"? Is this a common experience? It seemed so wrong that I went back and reran the numbers on the "A" machine. The obvious conclusion is that the AMD chip handles the code faster than Intel, but I'm reluctant to believe that. Mostly it's a bunch of deeply nested sorting, not trig functions or something obscure that could be impacted by math coprocessor design.

    I'm prepared to just give this the "mechanic's shrug" and move on, but I thought someone here may have already encountered and understood the phenomenon.

    Thanks.

    Bill

    #2
    Originally posted by William Martin:
    Machine "B" is a dual core Intel laptop running Vista, 2GB.
    Think you answered your own question!!

    Machine C is a celery machine, er, CELER-ON: 1GB of RAM but only a quarter of the CPU cache. It's a fair machine, but not a number-crunching machine... it never was intended to be, but those are still respectable numbers.

    Now go put XP on Machine B and I bet it'll be RIGHT there with Machine A or very close.
    Scott Turchin
    MCSE, MCP+I
    http://www.tngbbs.com
    ----------------------
    True Karate-do is this: that in daily life, one's mind and body be trained and developed in a spirit of humility; and that in critical times, one be devoted utterly to the cause of justice. -Gichin Funakoshi



      #3
      Originally posted by Scott Turchin:
      Now go put XP on Machine B and I bet it'll be RIGHT there with Machine A or very close.
      The good news is that I already have XP on "B" - it's set up as a dual boot with Vista & XP. The bad news is that I don't have Excel on the XP partition.

      I've been trying to figure out whether Microsoft counts one dual boot machine as one or two machines in their licensing count. If I can get them to let me put Excel on it without buying a new copy of Office I'll post back the result.

      Bill



        #4
        Bill,
        it's probably Intel vs. AMD.

        I'm not up with the latest and greatest CPUs, but 3-4 years ago it was definitely the case that AMD would knock spots off Intel for FPU maths, while Intel always handled memory better.
        If you had lots of data to move around, Intel was the better choice. If you had lots of floating point calculations to do then AMD was the better choice.


        Another possibility you should check is that both cores of the dual-core CPUs are being used. Press Ctrl+Alt+Del for the Task Manager, look at the Performance tab during the calculations, and see if both CPUs are fully utilised.

        Paul.



          #5
          Originally posted by Paul Dixon:
          Another possibility you should check is that both cores of the dual-core CPUs are being used. Press Ctrl+Alt+Del for the Task Manager, look at the Performance tab during the calculations, and see if both CPUs are fully utilised.
          No, Excel is single threaded and will only run on one core of any machine. So with dual core you'll typically see 50+% CPU usage. (That's not 100% true, but very nearly so.)

          Sometimes I'll run two copies of Excel at the same time to take advantage of both cores if I have a problem which I can manually partition.

          Bill



            #6
            Bill,
            Excel may restrict itself to one core, but you don't need to restrict PB to one core, so you could make further gains in speed by writing your DLL code with multiple processors in mind.

            Paul.



              #7
              Originally posted by William Martin:
              The good news is that I already have XP on "B" - it's set up as a dual boot with Vista & XP. The bad news is that I don't have Excel on the XP partition.

              I've been trying to figure out whether Microsoft counts one dual boot machine as one or two machines in their licensing count. If I can get them to let me put Excel on it without buying a new copy of Office I'll post back the result.

              Bill
              (Link to Microsoft Support.)
              Scott Turchin
              MCSE, MCP+I
              http://www.tngbbs.com
              ----------------------
              True Karate-do is this: that in daily life, one's mind and body be trained and developed in a spirit of humility; and that in critical times, one be devoted utterly to the cause of justice. -Gichin Funakoshi



                #8
                OK, I'll bite.
                Code:
                     VBA   PB/DLL  SpeedUp  Cores  Memory
                "A"   42     3.25    13x      2     3GB
                "B"   50     8.0     6.3x     2     2GB
                "C"   85.5  13.0     6.6x     1     1GB
                Now, forgetting the cores for a moment, notice that each machine has more memory than the one below it. Thus "FASTER".

                Add the cores back in, and remember it's not just your application that is running (even if it only uses one core); the other processes matter too, along with how well they handle speed, threads, and the like.

                Of course you may hit the "Point of Diminishing Returns", but I would think in this case that's WELLLLLLL above anything being tested for (or that you even have control over).

                For lack of a better word, I think you just proved that PB makes things MUCH faster.

                Unless you really need to shave milliseconds off the times I would not worry, but from the sounds of it this is purely a "Learning Experience" about why something happens, rather than just "Accepting" that it happened.

                I applaud you for that concept...all too often we "Accept" rather than learn why it does what it does and how it does it.
                Engineer's Motto: If it aint broke take it apart and fix it

                "If at 1st you don't succeed... call it version 1.0"

                "Half of Programming is coding"....."The other 90% is DEBUGGING"

                "Document my code????" .... "WHYYY??? do you think they call it CODE? "



                  #9
                  Sorry about the format of the chart in my quote, but it also goes to show that, in this case, readability is limited by what the forum lets me display.

                  (I could not get it to match my own test screen, but I think it shows the point.)
                  Engineer's Motto: If it aint broke take it apart and fix it

                  "If at 1st you don't succeed... call it version 1.0"

                  "Half of Programming is coding"....."The other 90% is DEBUGGING"

                  "Document my code????" .... "WHYYY??? do you think they call it CODE? "



                    #10
                    Originally posted by Paul Dixon:
                    Excel may restrict itself to one core, but you don't need to restrict PB to one core, so you could make further gains in speed by writing your DLL code with multiple processors in mind.
                    Oooh! I like it. I have absolutely no idea how to go about doing that, but then that's about how much I knew about DLLs in general a week ago.

                    Conceptually I should think the problem is partitionable. I'll have to do some research on implementation.

                    Bill



                      #11
                      Originally posted by Cliff Nichols:
                      Unless you really need to shave milliseconds off the times I would not worry, but from the sounds of it this is purely a "Learning Experience" about why something happens, rather than just "Accepting" that it happened.
                      You're right. It's probably not worth pursuing further from a purely rational perspective, but...

                      I'm rather taken by Paul's suggestion that I could nearly double the speed again by writing the DLL to run both cores simultaneously. I didn't realize it was possible. It's too good a challenge to pass up.

                      Bill



                        #12
                        Video?

                        Originally posted by William Martin:
                        Machine "A" is a dual core AMD desktop running XP, 3GB.
                        Machine "B" is a dual core Intel laptop running Vista, 2GB.
                        Machine "C" is an old Celeron laptop running XP, 1GB.

                        Code:
                             VBA   PB/DLL  SpeedUp
                        "A"   42     3.25    13x
                        "B"   50     8.0     6.3x
                        "C"   85.5  13.0     6.6x


                        I'm puzzled by the anomaly. Why does "A" get twice the speed boost from the DLL as "B" or "C"? Is this a common experience? It seemed so wrong that I went back and reran the numbers on the "A" machine. The obvious conclusion is that the AMD chip handles the code faster than Intel, but I'm reluctant to believe that. Mostly it's a bunch of deeply nested sorting, not trig functions or something obscure that could be impacted by math coprocessor design.

                        I'm prepared to just give this the "mechanic's shrug" and move on, but I thought someone here may have already encountered and understood the phenomenon.

                        Thanks.

                        Bill
                        Are you making ExcelApp Visible = False? If Excel is being displayed, that could make a big difference, since "B" is a laptop and most likely has a much slower video card than the desktop. Machine C probably has a slower card as well.

                        Even if you are setting Visible to False, I wonder if video card performance still slows things down.

                        I'm not sure how much back and forth you are doing between Excel and PB.

                        It's easy to test: if you want, use CSV files and load the data without using Excel at all. See what the raw PB speed is without the Excel communication.
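
                        In case it helps, here's the sort of throwaway test harness I mean. It's only a sketch: "data.csv" and the commented-out CrunchRows call are placeholders for your real file and routine, and the ARRAY SORT line is just stand-in work so the example runs on its own.

                        Code:
                        #COMPILE EXE
                        #DIM ALL

                        FUNCTION PBMAIN() AS LONG
                          LOCAL nRows AS LONG
                          LOCAL t     AS DOUBLE
                          DIM csvRow(1 TO 200000) AS STRING          ' plenty of room; adjust to suit

                          ' Load the CSV without involving Excel at all
                          OPEN "data.csv" FOR INPUT AS #1            ' <-- placeholder file name
                          DO UNTIL EOF(1) OR nRows >= 200000
                            INCR nRows
                            LINE INPUT #1, csvRow(nRows)
                          LOOP
                          CLOSE #1

                          ' Time just the number crunching, no Excel in the loop
                          t = TIMER
                          ' CALL CrunchRows(csvRow(), nRows)         ' <-- your real sort/crunch routine would go here
                          ARRAY SORT csvRow(1) FOR nRows             ' stand-in work so the sketch runs as-is
                          MSGBOX "Rows: " + FORMAT$(nRows) + ", seconds: " + FORMAT$(TIMER - t, "0.00")
                        END FUNCTION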



                          #13
                          No, when the DLL is running, Excel is totally in a coma for the 3 minutes or whatever. The only video activity is that I have the DLL update a simple console text screen once a second as a progress indicator so I know it's still alive.

                          Even in the VBA-only version I disable the screen updates, automatic calculation, etc., to speed it up.

                          Bill



                            #14
                            Originally posted by William Martin:
                            I'm rather taken by Paul's suggestion that I could nearly double the speed again by writing the DLL to run both cores simultaneously. I didn't realize it was possible. It's too good a challenge to pass up.
                            I agree. When it's for learning something new, and for your own curiosity, it's always worth going the extra steps to learn what you do not know.
                            Engineer's Motto: If it aint broke take it apart and fix it

                            "If at 1st you don't succeed... call it version 1.0"

                            "Half of Programming is coding"....."The other 90% is DEBUGGING"

                            "Document my code????" .... "WHYYY??? do you think they call it CODE? "



                              #15
                              William
                              To use both CPUs you will somehow need to divide the work up into two segments and run each in its own thread. Without seeing the data and how you pass it I can only guess, but let's say it's one large array: picking the main dimension, one thread could process from the start forward and the other from the end backward. Each thread would update a global with its progress and stop at the point of overlap. Obviously the global updates and compares need to be protected, possibly with a Mutex and WaitForSingleObject. The OS should then make use of both CPUs.
                              VB and VBA both use a single-threaded apartment model (or some term like that), but I think they allow a called DLL to create its own threads, as I think Excel itself does.
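
                              Just to sketch the idea, a simpler fixed-half split (rather than the meet-in-the-middle scheme above) might look something like this in PB8 with THREAD CREATE. The array name, the count, and the crunch loops are all made up for illustration; since each thread only touches its own half, the only shared items are the two "done" flags, and real overlapping updates would still need the Mutex/WaitForSingleObject guard.

                              Code:
                              ' Goes inside the existing #COMPILE DLL source.
                              GLOBAL gData()  AS DOUBLE     ' hypothetical shared data the DLL already owns
                              GLOBAL gCount   AS LONG       ' number of elements in gData()
                              GLOBAL gDone1   AS LONG       ' "finished" flags, one per thread
                              GLOBAL gDone2   AS LONG

                              FUNCTION CrunchHalf(BYVAL bFirstHalf AS LONG) AS LONG
                                LOCAL i AS LONG, iFirst AS LONG, iLast AS LONG
                                IF bFirstHalf THEN
                                  iFirst = 1 : iLast = gCount \ 2
                                ELSE
                                  iFirst = gCount \ 2 + 1 : iLast = gCount
                                END IF
                                FOR i = iFirst TO iLast
                                  ' ... sort/crunch this half of the data here ...
                                NEXT i
                                IF bFirstHalf THEN gDone1 = 1 ELSE gDone2 = 1
                              END FUNCTION

                              FUNCTION CrunchBothCores ALIAS "CrunchBothCores" () EXPORT AS LONG
                                LOCAL hThread1 AS DWORD, hThread2 AS DWORD, lResult AS LONG
                                gDone1 = 0 : gDone2 = 0
                                THREAD CREATE CrunchHalf(1) TO hThread1   ' first half on one core
                                THREAD CREATE CrunchHalf(0) TO hThread2   ' second half on the other
                                DO                                        ' wait for both halves before returning to Excel
                                  SLEEP 50
                                LOOP UNTIL gDone1 AND gDone2
                                THREAD CLOSE hThread1 TO lResult
                                THREAD CLOSE hThread2 TO lResult
                              END FUNCTION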
                              John

