
This is not a Stack Overflow question: the software engineers I have asked do not have the math background for an IEEE 754 question, so it needs to be answered by someone well versed in the standard.

I had to implement my own string-to-float and float-to-string algorithms (Grisu, with Unicode support) using C++ templates. For the string-to-float direction I used the following algorithm from the Wikipedia article on the single-precision floating-point format:

  • consider a real number with an integer and a fraction part such as 12.375
  • convert and normalize the integer part into binary
  • convert the fraction part to binary by repeatedly multiplying it by 2 (the technique shown in the article; see the sketch after this list)
  • add the two results and adjust them to produce a proper final conversion
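
To make those steps concrete, here is a minimal sketch of the conversion for 12.375 (the variable names and structure are mine, not from the article):

```cpp
#include <cstdint>
#include <iostream>
#include <string>

// Sketch of the wiki-style conversion for 12.375.
// Integer part: 12 -> 1100b.  Fraction part: 0.375 -> .011b by repeated doubling.
// Combined: 1100.011b = 1.100011b * 2^3, so the biased exponent is 3 + 127 = 130
// and the stored fraction bits begin 10001100...
int main() {
    double value = 12.375;

    uint32_t int_part  = static_cast<uint32_t>(value);    // 12
    double   frac_part = value - int_part;                 // 0.375

    // Fraction to binary by repeated multiplication by 2 (the wiki technique).
    std::string frac_bits;
    for (int i = 0; i < 23 && frac_part != 0.0; ++i) {
        frac_part *= 2.0;
        int bit = static_cast<int>(frac_part);
        frac_bits += static_cast<char>('0' + bit);
        frac_part -= bit;
    }

    // Integer part to binary, most significant bit first.
    std::string int_bits;
    for (uint32_t v = int_part; v != 0; v >>= 1)
        int_bits.insert(int_bits.begin(), static_cast<char>('0' + (v & 1)));

    std::cout << int_bits << "." << frac_bits << "\n";     // prints 1100.011
}
```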

I am unsure what to do when the string being converted has more digits of precision than I can store in (for example) a 32-bit float, yet does not overflow to +/- infinity, and what the proper way is to round the excess digits/decimals. A 32-bit float carries only about 7 significant decimal digits; a 64-bit double carries about 15–16. Anything beyond that must be rounded according to the IEEE 754 rounding rules.
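
For reference, the standard library spells out how many decimal digits actually survive, so the 7/16 figures above are approximations (this is plain `<limits>` usage, not anything from the wiki article):

```cpp
#include <iostream>
#include <limits>

int main() {
    // digits10:     decimal digits guaranteed to survive a text -> float -> text round trip.
    // max_digits10: decimal digits needed so that float -> text -> float is exact.
    std::cout << "float:  digits10 = " << std::numeric_limits<float>::digits10        // 6
              << ", max_digits10 = "   << std::numeric_limits<float>::max_digits10    // 9
              << "\n";
    std::cout << "double: digits10 = " << std::numeric_limits<double>::digits10       // 15
              << ", max_digits10 = "   << std::numeric_limits<double>::max_digits10   // 17
              << "\n";
}
```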

Example

Base 2 Option

We convert the integer portion first, as in the wiki entry: scan to the least significant digit, then parse back toward the most significant digit, multiplying each digit by a power of 10 (i.e. the standard string-to-integer algorithm), use unsigned wrap-around to detect overflow, and round in base 2. This doesn't sound right to me, and I don't think the Wikipedia article describes the best algorithm.
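
Here is a sketch of the string-to-integer step I mean, with my own names; note that checking before the multiply is more reliable than looking for wrap-around afterwards, since a 64-bit multiply can overflow and still produce a value larger than the old accumulator:

```cpp
#include <cstdint>
#include <limits>
#include <optional>
#include <string_view>

// Accumulate decimal digits into a 64-bit unsigned integer.
// Returns std::nullopt if the value would not fit (i.e. the multiply/add would overflow).
// Assumes `digits` contains only '0'..'9'.
std::optional<uint64_t> parse_uint(std::string_view digits) {
    constexpr uint64_t kMax = std::numeric_limits<uint64_t>::max();
    uint64_t acc = 0;
    for (char c : digits) {
        uint64_t d = static_cast<uint64_t>(c - '0');
        if (acc > (kMax - d) / 10)          // would acc * 10 + d overflow?
            return std::nullopt;
        acc = acc * 10 + d;
    }
    return acc;
}
```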

Base 10 Option

We convert every number into exponent notation with a single most significant digit, e.g. converting 123456780123456789.012 to about 1.2345678e17 using the IEEE 754 rounding rules. This sounds like the best choice: we only have to parse the text once, we pick up the exponent as we scan, and we can avoid the string-to-integer algorithm entirely. Rounding in base 10 also seems the sanest, since the number is still in base 10 and you only need to read the digits and decimals once, without multiplying by a power of 10. If this is the preferred method, then the wiki entry should be updated.
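
A single-pass sketch of this option, with names I made up: keep the first 9 significant digits in an integer, fold everything else into a power-of-ten exponent, and round in base 10 (round-half-up here for brevity rather than full IEEE ties-to-even):

```cpp
#include <cctype>
#include <cstdint>
#include <string_view>

// Result of the base-10 scan: value is approximately digits * 10^exponent.
struct Decimal {
    uint64_t digits;    // first significant digits, e.g. 123456780
    int      exponent;  // power of ten
};

Decimal parse_decimal(std::string_view s, int max_digits = 9) {
    Decimal out{0, 0};
    int  kept = 0;
    bool seen_point = false, seen_nonzero = false;
    char round_digit = '0';
    for (char c : s) {
        if (c == '.') { seen_point = true; continue; }
        if (!std::isdigit(static_cast<unsigned char>(c))) break;
        if (c != '0') seen_nonzero = true;
        if (seen_nonzero && kept < max_digits) {
            out.digits = out.digits * 10 + (c - '0');
            ++kept;
            if (seen_point) --out.exponent;   // kept digit after the point lowers the exponent
        } else if (seen_nonzero) {
            if (kept == max_digits) round_digit = c;   // first dropped digit decides rounding
            ++kept;
            if (!seen_point) ++out.exponent;  // dropped integer digit raises the exponent
        } else if (seen_point) {
            --out.exponent;                   // leading zeros after the point
        }
    }
    if (round_digit >= '5') ++out.digits;     // round half up; ties-to-even needs more care
    return out;
}

// Example: "123456780123456789.012" -> digits = 123456780, exponent = 9,
// i.e. about 1.2345678e17, ready for the final scaling into base 2.
```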

  • This question has nothing to do with engineering, and is far more appropriate for [SO] or [Programmers.SE]. – Wasabi Jul 13 '18 at 01:05
  • It's an IEEE 754 question, so it's not a software engineering question, and the software engineers I asked don't know. This is about a technicality of invalid bit patterns, and thus is a computer-engineering question, as it applies to hardware, firmware, and software alike. –  Jul 13 '18 at 05:37
  • If it was meant as an IEEE 754 question you should have said that in the question. Not every computer uses IEEE for floating point! – alephzero Jul 13 '18 at 07:56
  • Aside from being off-topic, from the second paragraph it seems the OP *doesn't actually know what the algorithm is supposed to do* - and it's not our job to guess what the specification should be! The comments later in the post might suggest the OP doesn't realize that even a string like "0.1" doesn't have an *exact* representation in binary floating point for *any* length of data. – alephzero Jul 13 '18 at 11:57
  • @Cale - I am willing to answer this over at [SE.SE](https://softwareengineering.stackexchange.com/), or anywhere else where it's on topic and we both have accounts. – Rob Jul 13 '18 at 15:38
  • I have updated the question to clarify things and to make explicit that it is about the IEEE 754 specification. I could really use some help. Maybe you could private message me? My startup company is suffering right now without my scanner working. –  Jul 13 '18 at 18:48
  • You already have 2 reopen votes; your changes seem positive. – peterh Jul 14 '18 at 00:37
  • The StackOverflow people have no clue what I'm talking about because they aren't computer engineers. I need a computer engineer to answer the IEEE 754 question. I believe the Wikipedia algorithm needs to be updated to incorporate IEEE 754 rounding. –  Jul 20 '18 at 14:24

0 Answers