I have an Excel spreadsheet that uses VBA to do a lot of number crunching, so I wrote a PB8 DLL to replace one of the VBA subroutines, and it speeds things up nicely. (Thanks to several people here who helped me over the bumps.)
Once I was done with that, just for grins I ran it on two other machines to compare the all-VBA vs. PB/DLL versions of the program. I find the results curious and wondered whether this is common.
Machine "A" is a dual core AMD desktop running XP, 3GB.
Machine "B" is a dual core Intel laptop running Vista, 2GB.
Machine "C" is an old Celeron laptop running XP, 1GB.
(Same Excel version, and all service packs and updates applied to the OS. No other apps running.)
Here are the approximate execution times (minutes) for the various machines:
Code:
         VBA    PB/DLL   SpeedUp
"A"      42     3.25     13x
"B"      50     8.0      6.3x
"C"      85.5   13.0     6.6x
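The SpeedUp column is just the ratio of the all-VBA time to the PB/DLL time. A quick sanity check of the arithmetic (a throwaway Python sketch, not part of the workbook):

```python
# Measured run times in minutes (all-VBA, PB/DLL) from the table above.
times = {
    "A": (42.0, 3.25),
    "B": (50.0, 8.0),
    "C": (85.5, 13.0),
}

for machine, (vba, dll) in times.items():
    speedup = vba / dll  # how many times faster the DLL version ran
    print(f'"{machine}": {speedup:.1f}x')
```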
I'm puzzled by the anomaly. Why does "A" get twice the speed boost from the DLL that "B" or "C" get? Is this a common experience? It seemed so wrong that I went back and reran the numbers on the "A" machine. The obvious conclusion is that the AMD chip handles this code faster than the Intel ones, but I'm reluctant to believe that. It's mostly a bunch of deeply nested sorting, not trig functions or anything obscure that could be affected by math coprocessor design.
I'm prepared to just give this the "mechanic's shrug" and move on, but I thought someone here may have already encountered and understood the phenomenon.
Thanks.
Bill