How do I get full extended precision value?

  • Michael Mattias
    replied
    >..The Robert of Venice

    And here I thought your sense of humor had left the building in 1994.

  • Dale Yarker
    replied
    >Microsoft does NOT warn that “the only significant digits are the first 16”! Man, Microsoft is liable now! NOT! They would have been gone under long ago…at least since the release of VB6 and Visual Studio 6! What’s the purpose of showing 324 digits of negative numbers and 308 digits of positive numbers, IF they are zero after position 16? Who needs zeros?
    They should have warned. A Microsoft "double" is exactly the same thing as a PB "double". There are not 300 plus digits. There is nothing after position 16, not even zeros. You are confusing floating point with fixed point. When converting floating point to fixed point, zeros are needed to fill the space between the last significant digit and the decimal point.

    A "double" is 64 bits. If you can describe a 300 digit signed number within 64 bits, you should be designing FPUs for Intel. Until you can, please go study math and stop "flaming" the forum with nonsense.
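
    The point that there is nothing, not even zeros, beyond the significant digits can be demonstrated directly: printing a double to more decimal places just expands the nearest representable binary value, and the tail is a base-conversion artifact. A minimal Python sketch (Python's float is the same 64-bit IEEE double):

```python
# A double holds 53 significant bits, about 16 decimal digits. Asking for
# more digits does not reveal stored zeros; it expands the nearest
# representable binary value, and the tail is a conversion artifact.
x = 1e30
print(f"{x:.0f}")   # 1000000000000000019884624838656
```

    The digits after `...00000` are not part of the number you typed; they are what the closest 53-bit binary value happens to be when written out in base 10.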

  • Bob Zale
    replied
    Originally posted by Ion Saliu View Post
    It ain’t validated by a trustworthy third party entity, is it?
    Yes. It is certified by The Robert of Venice.

    Best regards,

    Bob Zale
    PowerBASIC Inc.

  • Ion Saliu
    replied
    0000000
    Last edited by Ion Saliu; 19 May 2008, 09:55 AM.

  • Michael Mattias
    replied
    >Some people are so confused that they would do anything, like blindly using so-called huge math libraries....
    ...and some are so confused they can't tell the difference between range and precision.

  • Bob Zale
    replied
    Unfortunately, you are confusing the range of values with the number of digits of accuracy. Actually, the range of a double is over 600 digits, but the only significant digits are the first 16. That's because every floating point number is made up of a mantissa and an exponent, and is, by definition, an approximation.

    Best regards,
    Bob Zale
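
    The range-versus-precision distinction is easy to verify in any language with IEEE doubles; a minimal Python sketch (Python's float is a 64-bit double):

```python
import sys

# An IEEE-754 double has an enormous *range* but limited *precision*:
# the exponent spans roughly 10^-308 to 10^308, yet the 53-bit mantissa
# only guarantees about 15-16 significant decimal digits.
print(sys.float_info.max)   # largest double, about 1.8e308
print(sys.float_info.dig)   # decimal digits guaranteed to round-trip: 15

# Adding a value below the precision limit is simply lost:
x = 1.0 + 1e-17
print(x == 1.0)             # True -- the 1e-17 never registered
```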

  • Ion Saliu
    replied
    000000
    Last edited by Ion Saliu; 19 May 2008, 09:54 AM.

  • Michael Mattias
    replied
    >>You cannot get a quart from a pint pot
    >Unless the quart pot has been mis-labelled as a pint?

    C'mon, Paul. By now you should know my view of using undocumented features.

  • Marco Pontello
    replied
    Originally posted by Ion Saliu View Post
    Visual Basic has a data type named CURRENCY. It is 300 digits wide
    Never encountered it; I'm unable to find info on a VB6 datatype with this range.
    Can you provide a link?

    Thanks,
    Bye!

  • Bob Zale
    replied
    hmmm...

    Currency data type offers 18 and a fraction digits. Not 300. Sorry...

    Bob Zale
    PowerBASIC Inc.
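
    A quick sketch of where "18 and a fraction" comes from: CURRENCY is a scaled 64-bit signed integer with four fixed decimal places. Python is used here only for the integer arithmetic:

```python
# VB's CURRENCY type stores the value times 10,000 in a signed 64-bit
# integer, giving 4 fixed decimal places.
max_scaled = 2**63 - 1
print(max_scaled)                               # 9223372036854775807

# Unscaled: 922337203685477.5807 -> 15 integer digits + 4 fractional.
# log10(2^63) is about 18.96, hence "18 and a fraction" digits.
print(max_scaled // 10**4, max_scaled % 10**4)  # 922337203685477 5807
```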

  • Ion Saliu
    replied
    00000
    Last edited by Ion Saliu; 19 May 2008, 09:54 AM.

  • Donald Darden
    replied
    Floating point allows us to scale and approximate extremely large or extremely small numbers. By approximation, I mean that you cannot exactly represent a value like 1/3rd or 1/10th in binary form. Heck, you can't even represent 1/3rd exactly in base 10, as it would be written as 0.3333333333333333333333333333333333333333 ... to an infinite number of places.

    Eighteen or nineteen decimal places of accuracy is generally pretty good, and it keeps our representation down to just ten bytes. But if you need perfect accuracy, you probably need to consider big integer or huge integer math packages. These essentially use strings to represent arbitrary length integers, which can then be processed against each other. Fractions are like integers, only with the decimal point shifted far to the right. So technically, these packages can be used to process just about anything where the values and results come in at under 2 Gigabytes. But this type of processing is labor intensive on the part of the PC, so depending on what you are trying to do, the processing can take a while.
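
    The approximation problem and the big-integer approach can both be sketched with Python's built-in arbitrary-precision integers and its Fraction type (shown only as an illustration of the technique, not as a substitute for a PB library):

```python
from fractions import Fraction

# 0.1 is not exactly representable in binary, so float adds drift:
print(0.1 + 0.1 + 0.1 == 0.3)        # False

# A rational type keeps exact integer numerator/denominator pairs,
# so 1/3 really is one third:
third = Fraction(1, 3)
print(third + third + third == 1)    # True

# Python integers are already arbitrary precision -- no rounding:
print(2**200)                        # exact 61-digit result
```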

  • Ion Saliu
    replied
    0000
    Last edited by Ion Saliu; 19 May 2008, 09:53 AM.

  • Ion Saliu
    replied
    000
    Last edited by Ion Saliu; 19 May 2008, 09:53 AM.

  • Michael Mattias
    replied
    >Yet another member of this community wrote a factorial function to handle gigantically huge numerical data types....
    Would that be the demo I wrote, available from my website at...
    http://www.talsystems.com/tsihome_ht...ads/Factor.zip
    .. to do up to 100 factorial, easily expandable?
    >There is no guarantee that any digit beyond position 18 is accurate
    Oh, I guess not, because I *do* guarantee the accuracy of that code.

    MCM
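
    For comparison, in a language with built-in big integers the same 100-factorial computation is a one-liner; a Python sketch:

```python
from math import factorial

# 100! overflows every fixed-size numeric type, but arbitrary-precision
# integer math computes it exactly, all 158 digits of it.
f = factorial(100)
print(len(str(f)))   # 158
print(str(f)[:10])   # first ten digits: 9332621544
```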

  • Eddy Van Esch
    replied
    Originally posted by Tom Ulrich View Post
    .... a multi-precision program? .....Higher precision would help.
    Patience, my friends... Arbitrary precision math is on the way...
    At this moment, I have +, -, *, / but there is more to come ...

    Kind regards
    Last edited by Eddy Van Esch; 16 May 2008, 06:33 PM.

  • Ion Saliu
    replied
    00
    Last edited by Bob Zale; 19 May 2008, 10:16 AM.

  • Tom Ulrich
    replied
    Why not use a multi-precision program?

    I have U-BASIC but it is for DOS.

    There are a number of programs. Most are written in C++ and some in FORTRAN 90. I remember a SUN computer having quadruple precision (128 bits, with 112 for the fraction and 15 for the exponent).

    I tried to implement routines published for DOUBLE-DOUBLE but I got strange results. I tried the Kahan sum routine and found the same trouble. DOUBLE-DOUBLE runs faster than multiprecision methods. I also looked into QUAD-DOUBLE but gave up because of the above problems.

    There are packages that use DOUBLE-DOUBLE internally to do linear algebra and get better results. I was trying to do a least-squares fit for a 20th-degree polynomial. The problem is known to have an ill-conditioned matrix. Higher precision would help.

    I resorted to storing the extended result to a string and picking out the 15 digits for a double, then taking two 15-digit numbers, splitting each of them and multiplying to get a 30-digit result. I also resorted to storing a string with 30 digits and splitting it up into integers and doing math with those. They require housekeeping to work. These routines are more proof of concept.

    I used a very crude way of getting the various parts of a floating-point number. That is why I asked the question.
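
    The Kahan (compensated) summation mentioned above is short enough to sketch; this is a generic textbook version in Python, not the code from any particular package:

```python
def kahan_sum(values):
    """Compensated summation: carry the rounding error of each add forward."""
    total = 0.0
    c = 0.0                    # running compensation (lost low-order bits)
    for v in values:
        y = v - c              # apply the correction recovered last round
        t = total + y
        c = (t - total) - y    # algebraically zero; in floats, the new error
        total = t
    return total

vals = [0.1] * 1000
naive = sum(vals)
compensated = kahan_sum(vals)
# The compensated result is at least as close to 100 as the naive one:
print(abs(compensated - 100.0) <= abs(naive - 100.0))   # True
```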

  • Tom Ulrich
    replied
    John Gleason posted some code so I used it (thank you). I also used a symbolic algebra program to obtain all digits to compare.

    The program has the following line to generate an extended value.
    extVar = -38847.32737 * 227738489.33221

    The binary result:
    "11000000001010101000000010111101110000101101110001100010110110011101010100001001"
    Exact decimal from binary = -8847031649837.61451053619384765625
    For ref. the exact multiply = -8847031649837.6145555877
    The result of PB multiply = -8847031649837.61
    The result of PB multiply = -8847031649837.61451 (18 digits displayed)

    Since PB converts the constants to lowest precision needed ... it used double precision. I forced it to use extended precision constants.
    extVar = -38847.32737## * 227738489.33221##

    The only change in results is in the last 6 bits.
    The binary result:
    "11000000001010101000000010111101110000101101110001100010110110011101010100111000"
    Exact decimal from binary = -8847031649837.61455535888671875
    For ref. the exact multiply = -8847031649837.6145555877
    The result of PB multiply = -8847031649837.61
    The result of PB multiply = -8847031649837.61456 (18 digits displayed)

    The reason for the numerical differences is that the computer uses base 2 while the input and display are in base 10, and the conversion between the two is not exact. The mantissa (significand) is not a binary integer but a binary fraction, and that binary fraction corresponds to a long run of decimal digits once aligned to base-2 fractional positions.

    Some explanation of binary fraction follows:
    0.10110... = 1/2^1+1/2^3+1/2^4 = 1/2 + 1/8 + 1/16 +...
    The 63rd bit is 1/2^63 = 1/9223372036854775808 which is:
    =0.000000000000000000108420217248550443400745280086994171142578125 = 1.08420...e-19
    Each bit fraction is added up to get the mantissa. Next, 1 is added to the mantissa fraction, and the result is multiplied by 2^exponent. (The exponent is 43 for the example run above for the variable extVar.)

    result = sign*mantissa*2^exponent

    PB has to convert the decimal base-10 number to a binary floating-point number that represents it as closely as it can. Integer values translate exactly if within range. The numbers that can be represented more precisely have a small-magnitude exponent; with a huge exponent of, say, 16380, the factor 2^16380 multiplies the error. The range of the exponent is about +16383 to -16382, which, combined with the mantissa, gives decimal magnitudes from about 10^-4932 to 10^4932.

    There are, I believe, 5 ways to round, and there will be some loss going from one base to the other. If a base-10 number is represented exactly as a base-2 fraction, then the floating-point math results will be excellent.
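
    The sign/mantissa/exponent decomposition described above can be done in a few lines. Python's float is a 64-bit double rather than PB's 80-bit extended (bias 1023 and 52 stored fraction bits instead of 16383 and 63), but the same structure applies, and the exponent for this example comes out to the 43 quoted above:

```python
import struct

# Pull apart the IEEE-754 double produced by the multiply in the post.
x = -38847.32737 * 227738489.33221
bits = struct.unpack('<Q', struct.pack('<d', x))[0]

sign     = bits >> 63
exponent = ((bits >> 52) & 0x7FF) - 1023     # remove the exponent bias
mantissa = bits & ((1 << 52) - 1)            # 52 stored fraction bits

print(sign, exponent)                        # 1 43  (negative, 2^43 scale)

# Reassemble: sign * (1 + fraction) * 2^exponent reproduces x exactly.
rebuilt = (-1)**sign * (1 + mantissa / 2**52) * 2**exponent
print(rebuilt == x)                          # True
```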

  • Paul Dixon
    replied
    >You cannot get a quart from a pint pot
    Unless the quart pot has been mis-labelled as a pint?

    Paul.
