I am not asking this on StackOverflow because it is really an IEEE 754 question that needs someone with the relevant numerical background, so it should be answered by an IEEE tutor.
I had to implement my own string-to-float and float-to-string routines (Grisu-style, Unicode-aware) as C++ templates. I used the following algorithm, found in the Wikipedia article on the single-precision floating-point format:
- consider a real number with an integer and a fraction part such as 12.375
- convert and normalize the integer part into binary
- convert the fraction part by repeated doubling, as shown in the article
- add the two results and adjust them to produce a proper final conversion
I am unsure what to do when the string being converted has more digits of precision than a 32-bit (for example) float can store, yet is still finite (does not overflow to +/- infinity), and what the proper way is to round the excess digits. A 32-bit float holds only about 7 significant decimal digits (6 are guaranteed to round-trip; 9 uniquely identify any value). A 64-bit double holds about 15 to 17. Any digits beyond that must be rounded using the IEEE 754 rounding rules.
Options I am considering
Base 2 Option
We convert the integer portion first, as in the wiki entry: scan to the least significant digit, then parse back toward the most significant digit, multiplying each digit by a power of 10 (i.e. the standard string-to-integer algorithm), using unsigned multiplication wrap-around to detect overflow, and finally round in base 2. This doesn't sound right to me, and I suspect the Wikipedia article does not describe the best algorithm.
Base 10 Option
We convert every number into exponent notation with a single most significant digit, e.g. 123456780123456789.012 becomes 1.23456780123456789012e17, which rounds to 1.234568e17 under the IEEE 754 rounding rules (round-to-nearest, ties-to-even by default). This sounds like the best choice: we parse the text only once, scanning for the exponent, and we avoid the string-to-integer algorithm entirely. Rounding in base 10 also seems the sanest, because the number is still in base 10 and each digit is read only once, with no multiplications by a power of 10. If this is the preferred method, then the wiki entry should be updated.