
Why did some early computer designers eschew integers?


Several early computer designs regarded a 'word' not as representing an integer, with the bits having values 2^0, 2^1, 2^2, ..., but as representing a fixed-point fraction, with the bits having values 2^-1, 2^-2, 2^-3, ...



(For the sake of simplicity in this question I'm ignoring the existence of the sign bit and talking only in terms of positive numbers.)



Some examples of this convention are EDVAC, EDSAC, and the IAS machine.



Why was this? To me, having dealt since the 1970s with machines that have "integers" at base, this seems a strange way to look at it.



Does it affect the machine operation in any way? Addition and subtraction are the same regardless of what you think the bits mean, but I suppose that for multiplication of two N-bit words giving an N-bit result, the choice of which N bits to keep depends on your interpretation. (Integer: you want the "right-hand word"; fixed-point fraction: you want the "left-hand word".)
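A minimal sketch of that choice, assuming 8-bit words; the bit patterns and the Python arithmetic are purely illustrative:

    # Multiply two 8-bit words and keep only 8 bits of the 16-bit product,
    # under the two interpretations of the same bit patterns.
    N = 8
    a, b = 0b11000000, 0b01000000        # the same two words in both views

    full = a * b                         # full 16-bit product of the raw patterns

    int_result  = full & 0xFF            # integer view: keep the right-hand (low) half
    frac_result = (full >> N) & 0xFF     # fraction view: keep the left-hand (high) half

    # As integers: 192 * 64 = 12288, whose low byte is 0 (the product overflowed).
    # As fractions: 0.75 * 0.25 = 0.1875, and frac_result / 2**N == 0.1875.
    print(int_result, frac_result, frac_result / 2**N)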










asked Apr 1 at 0:35 by another-dave, edited 2 days ago by Dr Sheldon, tagged 'history', 28 votes



















  • 18





    Very early on, it was likely that computers were not considered to be general purpose machines. So if the main task for which a computer was designed involved doing calculations with fractional numbers, prioritizing them over integers would make sense. It seems likely that computers designed for business programs would be more tuned to integers, because money (in the USA) can be treated as pennies, and very little would need to be fractional.

    – RichF
    Apr 1 at 1:20







  • 2





    Not only can be, but must be, to avoid rounding errors that could lose (or create) money. (This also applies to mils or any other smaller fraction of a dollar that might be necessary.)

    – chepner
    Apr 1 at 20:21











  • Also, remember that one of the primary first functions of computers was to calculate ballistic trajectories, especially for military applications.

    – Ron Maupin
    Apr 2 at 2:31






  • 2





    Note that this is not universal; some early computers used integers, others (Zuse's Z3) used floating point numbers.

    – fuz
    Apr 2 at 11:11











  • Awesome question. Not something I even realized. However it appears that I am a few years younger than you (b 1969).

    – Andrew Steitz
    Apr 2 at 20:09















7 Answers
























30 votes, answered Apr 1 at 1:41 by Matthew Barber














I'd think that it was mostly down to the preferences of John von Neumann at the time. He was a strong advocate of fixed point representations, and early computers were designed with long words to accommodate a large range of numbers that way. You certainly don't need 30-40 bits to cover the most useful integers, but that many were needed if you wanted plenty of digits before and after the decimal point.



By the 1970s, though, the costs of integration were such that much smaller word sizes made sense. Minicomputers were commonly 16-bit architectures, and micros 8 bits or sometimes even 4. At that point you needed all the integers you could get, plus floating point had largely replaced fixed point for when you needed decimals.



Nowadays we'd think nothing of using 64 bit integers, of course, but it's a heck of a lot easier to integrate the number of logic gates required for that than it would have been back when they all had to be made out of fragile and expensive vacuum tubes.
























  • 1





    I'm persuaded by the "preferences of von Neumann" part, since the 3 machines I mentioned had common conceptual roots, but less so by the rest. I agree with the word-size rationale. But given the machine is "fixed point" only, the choice seemed to be between integers, and fractions of magnitude between 0 and 1. Neither seems to me to be better suited to "plenty of digits before and after the decimal point". Either way, the position of the point is purely notional and the programmer needs to keep track of it.

    – another-dave
    Apr 1 at 12:13






  • 3





    Fixed-point doesn't have to be fractions between 0 and 1; it just means the value is an integer scaled by some fixed constant.

    – chepner
    Apr 1 at 20:25











  • I could perhaps have been clearer on that. Von Neumann points out in First Draft of a Report on the EDVAC that you can do all the calculations with numbers between 0 and 1 and scale the result accordingly, even. You still need those very long words to avoid a loss of precision with the intermediate results though.

    – Matthew Barber
    Apr 1 at 21:33












  • @chepner I'm aware of 'manual' scaling. But if you read, for example, the EDSAC description in Wilkes, Wheeler, and Gill, then they regard the store itself as holding numbers in the range -1 to +1 (binary point at left), rather than -2^35 to +2^35 (binary point at right) or anything else. That's the point of my question.

    – another-dave
    Apr 1 at 22:34











  • The First Draft is the clincher for this argument, I think. Thanks for the reference.

    – another-dave
    Apr 1 at 22:53


















10 votes, answered Apr 1 at 12:11 by alephzero














This is not really a hardware issue at all, but just a different way of interpreting the bit patterns. A "fixed decimal point" representation for numbers is still used in some situations, where the full power and flexibility of floating point is unnecessary.



The IBM S/360 and S/370 hardware had decimal arithmetic as well as binary, and IBM's PL/I programming language had both "fixed decimal" and "fixed binary" data types, with an implied decimal point anywhere in the number, though fixed binary was mainly used for the special case of "integers". Fixed decimal was an obvious choice for handling financial calculations involving decimal currency such as dollars and cents, for example, because of the simple conversion to and from human-readable numbers.



Fixed point binary is still used in numerical applications like signal processing, where lightweight hardware (integer-only) and speed are critical and the generality of floating point is unnecessary.



In terms of the computer's instruction set, all that is needed is integer add, subtract, multiply, divide, and shift instructions. Keeping track of the position of the implied decimal point could be done by the compiler (as in PL/I) or left to the programmer to do manually. Used carefully, doing it manually could both minimize the number of shift operations and maximize the numerical precision of the calculations, compared with compiler-generated code.
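As a minimal sketch of that bookkeeping, assuming a Q8.8 layout (8 integer bits, 8 fraction bits; the format and the values are only illustrative):

    # Q8.8 fixed point done entirely with ordinary integer operations.
    # The binary point is implied 8 bits from the right; shifts keep track of it.
    FRAC_BITS = 8
    ONE = 1 << FRAC_BITS                  # 1.0 in Q8.8

    def to_fixed(x: float) -> int:
        return int(round(x * ONE))

    def fixed_mul(a: int, b: int) -> int:
        # an integer multiply yields 16 fraction bits; shift back down to 8
        return (a * b) >> FRAC_BITS

    def to_float(x: int) -> float:
        return x / ONE

    price, qty = to_fixed(3.25), to_fixed(4.0)
    print(to_float(fixed_mul(price, qty)))   # 13.0

Addition and subtraction work on the raw words with no shift at all; only multiplication and division need the correction.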



There is a lot of similarity between this type of numerical processing and "multi-word integers" used for very high precision (up to billions of significant figures) in modern computing.
























  • 2





    Not just because of the simple conversion to and from human-readable numbers. Fixed point decimal, like BCD on later systems, has the big advantage of avoiding things like 0.30 turning into 0.29999999999999999999 etc.

    – manassehkatz
    Apr 1 at 14:03






  • 2





    Fixed-point binary is in widespread use in FPGA work as well. It used to be in heavy use in industry for embedded software generally, before 32-bit microcontrollers with floating-point became available at a reasonable price. I heaved a sigh of relief around 2001 when I could leave that all behind. Then about 4 years back I started working on FPGAs, and I was right back there again!

    – Graham
    Apr 1 at 16:21






  • 2





    Speaking as someone who has actually written a fixed point library in C++ I concur with all points. Integers and fixed points aren't all that different from each other in terms of binary representation. In fact, it could be argued that an integer is just a fixed point type with a fractional size of 0.

    – Pharap
    Apr 2 at 14:14


















5 votes, answered Apr 1 at 13:40 by Chris Brown (new contributor)














Some early computers did use integers.



Manchester University's ‘Baby’ computer calculated the highest factor of an integer on 21 June 1948. https://www.open.edu/openlearn/science-maths-technology/introduction-software-development/content-section-3.2



EDSAC 2, which entered service in 1958, could do integer addition. https://www.tcm.phy.cam.ac.uk/~mjr/courses/Hardware18/hardware.pdf





























  • 1





    Well, as I understand it, EDSAC could do integer addition too. Where "the book" says you're adding 2^-35 + 2^-35 to get 2^-34, the programmer is perfectly at liberty to believe he's computing 1 + 1 = 2. It's just in the interpretation of the bits. I would describe EDSAC 2 as replacing fixed point arithmetic for both reals and integers, with floating point arithmetic for the reals, leaving fixed point for the integers.

    – another-dave
    Apr 1 at 22:43


















3 votes














Computers came from calculators, and calculators are designed to perform numerical computations, which generally require a decimal point.



Babbage's difference engine (designed in the 1820s) held ten 10-digit decimal numbers. Its job was computing numerical tables, and printing them, since the copying of the day (by humans) introduced more mistakes than the mathematicians who made the original calculations.



EDVAC was built for the US Army's Ordnance Department; its job was ballistics computation.































  • But did Babbage say his numbers were between 0 and 10 thousand million (being British, he would not have used "billion" for 10^9), or between 0 and 1 (approximately speaking)?

    – another-dave
    Apr 1 at 23:01


















3 votes














The problems early computers were meant to solve used real numbers. Often these numbers were very large or very small, so you need a scale factor for the computer to handle them.



If the computer natively thinks in numbers from 0.0 to 1.0 (or -1.0 to 1.0), then the scale factor is simply the maximum value of the variable. This is easy for human minds to handle and not very error prone.



If the computer natively thinks in numbers from 0 to 16777215 (or whatever), the scale factor becomes something completely different. Getting the scale factor right becomes much harder and a source of errors.



Let's go with fewer errors, shall we?



As others have pointed out, the actual hardware is the same for the two schemes; it is just a matter of how humans interact with the machine. 0.0 to 1.0 is more human-friendly.
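A minimal sketch of that convention, assuming a 16-bit word holding an unsigned fraction 0 <= f < 1 and a quantity known to stay below 1000 (both made up for illustration):

    # The variable's maximum value *is* the scale factor: store v / 1000 as a fraction.
    WORD_BITS = 16
    SCALE = 1000.0

    def encode(v: float) -> int:
        return int((v / SCALE) * (1 << WORD_BITS))   # fraction in [0, 1) as a raw word

    def decode(word: int) -> float:
        return (word / (1 << WORD_BITS)) * SCALE

    w = encode(823.5)
    print(w, decode(w))   # round-trips to roughly 823.5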




































1 vote














At the time, using a fixed-point fraction representation seemed like the best solution for handling non-integer numbers, because it avoids some of the issues present in other formats, such as values that cannot be represented exactly and cumulative errors when relatively common operations are performed repeatedly.



In time, better ways to represent non-integer numbers in binary were devised. General-purpose hardware was able to process such formats and gave the programmer a free choice of which one to use.




































0 votes, answered by Stanislav Orlov (new contributor)














I am not sure I understand your question correctly, but if I do: "word" is commonly used not as a data type (like "integer") but as the width of a whole hardware register, so an 8-bit CPU has an 8-bit machine word.



In fixed-point operations, a common practice is to store the "before the point" value in one register and the "after the point" value in another register, so from the CPU's point of view both are simply words.



What is the main difference between a word and an integer? A word is only a set of bits; it does not represent any particular numeric value, because one and the same word can be read as different values. For example, the word 11111111 can be -1 as a signed (two's-complement) integer or 255 as an unsigned value, depending on how you interpret the top bit. Two words can form a fixed-point value (an integer part before the point and a fractional part after it), a longer integer (a 16-bit int on an 8-bit CPU), and so on, depending on how you use their values in your calculations.



      I suppose that "word", in your case, was used to describe the way to access a part of data in memory, meaning that it can not or must not be a whole number, whose other part can be located in different word.






      share|improve this answer








      New contributor




      Stanislav Orlov is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
      Check out our Code of Conduct.




















        Your Answer








        StackExchange.ready(function()
        var channelOptions =
        tags: "".split(" "),
        id: "648"
        ;
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function()
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled)
        StackExchange.using("snippets", function()
        createEditor();
        );

        else
        createEditor();

        );

        function createEditor()
        StackExchange.prepareEditor(
        heartbeatType: 'answer',
        autoActivateHeartbeat: false,
        convertImagesToLinks: false,
        noModals: true,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: null,
        bindNavPrevention: true,
        postfix: "",
        imageUploader:
        brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
        contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
        allowUrls: true
        ,
        noCode: true, onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        );



        );













        draft saved

        draft discarded


















        StackExchange.ready(
        function ()
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f9500%2fwhy-did-some-early-computer-designers-eschew-integers%23new-answer', 'question_page');

        );

        Post as a guest















        Required, but never shown

























        7 Answers
        7






        active

        oldest

        votes








        7 Answers
        7






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes









        30














        I'd think that it was mostly down to the preferences of John von Neumann at the time. He was a strong advocate of fixed point representations, and early computers were designed with long words to accommodate a large range of numbers that way. You certainly don't need 30-40 bits to cover the most useful integers, but that many were needed if you wanted plenty of digits before and after the decimal point.



        By the 1970s though, the costs of integration were such that much smaller word sizes made sense. Minicomputers were commonly 16 bit architectures, and micros 8 bits or sometimes even 4. At that point you needed all the integers you can get, plus floating point had largely replaced fixed point for when you needed decimals.



        Nowadays we'd think nothing of using 64 bit integers, of course, but it's a heck of a lot easier to integrate the number of logic gates required for that than it would have been back when they all had to be made out of fragile and expensive vacuum tubes.






        share|improve this answer


















        • 1





          I'm persuaded by the "preferences of von Neumann" part, since the 3 machines I mentioned had common conceptual roots, but less so by the rest. I agree with the word-size rationale. But given the machine is "fixed point" only, the choice seemed to be between integers, and fractions of magnitude between 0 and 1. Neither seems to me to be better suited to "plenty of digits before and after the decimal point". Either way, the position of the point is purely notional and the programmer needs to keep track of it.

          – another-dave
          Apr 1 at 12:13






        • 3





          Fixed-point doesn't have to be fractions between 0 and 1; it just means the value is an integer scaled by some fixed constant.

          – chepner
          Apr 1 at 20:25











        • I could perhaps have been clearer on that. Von Neumann points out in First Draft of a Report on the EDVAC that you can do all the calculations with numbers between 0 and 1 and scale the result accordingly, even. You still need those very long words to avoid a loss of precision with the intermediate results though.

          – Matthew Barber
          Apr 1 at 21:33












        • @chepner I'm aware of 'manual' scaling. But if you read, for example, the EDSAC description in Wilkes, Wheeler, and Gill, then they regard the store itself as holding numbers in the range -1 to +1 (binary point at left), rather than -2^35 to +2^35 (binary point at right) or anything else. That's the point of my question.

          – another-dave
          Apr 1 at 22:34











        • The First Draft is the clincher for this argument, I think. Thanks for the reference.

          – another-dave
          Apr 1 at 22:53















        30














        I'd think that it was mostly down to the preferences of John von Neumann at the time. He was a strong advocate of fixed point representations, and early computers were designed with long words to accommodate a large range of numbers that way. You certainly don't need 30-40 bits to cover the most useful integers, but that many were needed if you wanted plenty of digits before and after the decimal point.



        By the 1970s though, the costs of integration were such that much smaller word sizes made sense. Minicomputers were commonly 16 bit architectures, and micros 8 bits or sometimes even 4. At that point you needed all the integers you can get, plus floating point had largely replaced fixed point for when you needed decimals.



        Nowadays we'd think nothing of using 64 bit integers, of course, but it's a heck of a lot easier to integrate the number of logic gates required for that than it would have been back when they all had to be made out of fragile and expensive vacuum tubes.






        share|improve this answer


















        • 1





          I'm persuaded by the "preferences of von Neumann" part, since the 3 machines I mentioned had common conceptual roots, but less so by the rest. I agree with the word-size rationale. But given the machine is "fixed point" only, the choice seemed to be between integers, and fractions of magnitude between 0 and 1. Neither seems to me to be better suited to "plenty of digits before and after the decimal point". Either way, the position of the point is purely notional and the programmer needs to keep track of it.

          – another-dave
          Apr 1 at 12:13






        • 3





          Fixed-point doesn't have to be fractions between 0 and 1; it just means the value is an integer scaled by some fixed constant.

          – chepner
          Apr 1 at 20:25











        • I could perhaps have been clearer on that. Von Neumann points out in First Draft of a Report on the EDVAC that you can do all the calculations with numbers between 0 and 1 and scale the result accordingly, even. You still need those very long words to avoid a loss of precision with the intermediate results though.

          – Matthew Barber
          Apr 1 at 21:33












        • @chepner I'm aware of 'manual' scaling. But if you read, for example, the EDSAC description in Wilkes, Wheeler, and Gill, then they regard the store itself as holding numbers in the range -1 to +1 (binary point at left), rather than -2^35 to +2^35 (binary point at right) or anything else. That's the point of my question.

          – another-dave
          Apr 1 at 22:34











        • The First Draft is the clincher for this argument, I think. Thanks for the reference.

          – another-dave
          Apr 1 at 22:53













        30












        30








        30







        I'd think that it was mostly down to the preferences of John von Neumann at the time. He was a strong advocate of fixed point representations, and early computers were designed with long words to accommodate a large range of numbers that way. You certainly don't need 30-40 bits to cover the most useful integers, but that many were needed if you wanted plenty of digits before and after the decimal point.



        By the 1970s though, the costs of integration were such that much smaller word sizes made sense. Minicomputers were commonly 16 bit architectures, and micros 8 bits or sometimes even 4. At that point you needed all the integers you can get, plus floating point had largely replaced fixed point for when you needed decimals.



        Nowadays we'd think nothing of using 64 bit integers, of course, but it's a heck of a lot easier to integrate the number of logic gates required for that than it would have been back when they all had to be made out of fragile and expensive vacuum tubes.






        share|improve this answer













        I'd think that it was mostly down to the preferences of John von Neumann at the time. He was a strong advocate of fixed point representations, and early computers were designed with long words to accommodate a large range of numbers that way. You certainly don't need 30-40 bits to cover the most useful integers, but that many were needed if you wanted plenty of digits before and after the decimal point.



        By the 1970s though, the costs of integration were such that much smaller word sizes made sense. Minicomputers were commonly 16 bit architectures, and micros 8 bits or sometimes even 4. At that point you needed all the integers you can get, plus floating point had largely replaced fixed point for when you needed decimals.



        Nowadays we'd think nothing of using 64 bit integers, of course, but it's a heck of a lot easier to integrate the number of logic gates required for that than it would have been back when they all had to be made out of fragile and expensive vacuum tubes.







        share|improve this answer












        share|improve this answer



        share|improve this answer










        answered Apr 1 at 1:41









        Matthew BarberMatthew Barber

        41623




        41623







        • 1





          I'm persuaded by the "preferences of von Neumann" part, since the 3 machines I mentioned had common conceptual roots, but less so by the rest. I agree with the word-size rationale. But given the machine is "fixed point" only, the choice seemed to be between integers, and fractions of magnitude between 0 and 1. Neither seems to me to be better suited to "plenty of digits before and after the decimal point". Either way, the position of the point is purely notional and the programmer needs to keep track of it.

          – another-dave
          Apr 1 at 12:13






        • 3





          Fixed-point doesn't have to be fractions between 0 and 1; it just means the value is an integer scaled by some fixed constant.

          – chepner
          Apr 1 at 20:25











        • I could perhaps have been clearer on that. Von Neumann points out in First Draft of a Report on the EDVAC that you can do all the calculations with numbers between 0 and 1 and scale the result accordingly, even. You still need those very long words to avoid a loss of precision with the intermediate results though.

          – Matthew Barber
          Apr 1 at 21:33












        • @chepner I'm aware of 'manual' scaling. But if you read, for example, the EDSAC description in Wilkes, Wheeler, and Gill, then they regard the store itself as holding numbers in the range -1 to +1 (binary point at left), rather than -2^35 to +2^35 (binary point at right) or anything else. That's the point of my question.

          – another-dave
          Apr 1 at 22:34











        • The First Draft is the clincher for this argument, I think. Thanks for the reference.

          – another-dave
          Apr 1 at 22:53












        • 1





          I'm persuaded by the "preferences of von Neumann" part, since the 3 machines I mentioned had common conceptual roots, but less so by the rest. I agree with the word-size rationale. But given the machine is "fixed point" only, the choice seemed to be between integers, and fractions of magnitude between 0 and 1. Neither seems to me to be better suited to "plenty of digits before and after the decimal point". Either way, the position of the point is purely notional and the programmer needs to keep track of it.

          – another-dave
          Apr 1 at 12:13






        • 3





          Fixed-point doesn't have to be fractions between 0 and 1; it just means the value is an integer scaled by some fixed constant.

          – chepner
          Apr 1 at 20:25











        • I could perhaps have been clearer on that. Von Neumann points out in First Draft of a Report on the EDVAC that you can do all the calculations with numbers between 0 and 1 and scale the result accordingly, even. You still need those very long words to avoid a loss of precision with the intermediate results though.

          – Matthew Barber
          Apr 1 at 21:33












        • @chepner I'm aware of 'manual' scaling. But if you read, for example, the EDSAC description in Wilkes, Wheeler, and Gill, then they regard the store itself as holding numbers in the range -1 to +1 (binary point at left), rather than -2^35 to +2^35 (binary point at right) or anything else. That's the point of my question.

          – another-dave
          Apr 1 at 22:34











        • The First Draft is the clincher for this argument, I think. Thanks for the reference.

          – another-dave
          Apr 1 at 22:53







        1




        1





        I'm persuaded by the "preferences of von Neumann" part, since the 3 machines I mentioned had common conceptual roots, but less so by the rest. I agree with the word-size rationale. But given the machine is "fixed point" only, the choice seemed to be between integers, and fractions of magnitude between 0 and 1. Neither seems to me to be better suited to "plenty of digits before and after the decimal point". Either way, the position of the point is purely notional and the programmer needs to keep track of it.

        – another-dave
        Apr 1 at 12:13





        I'm persuaded by the "preferences of von Neumann" part, since the 3 machines I mentioned had common conceptual roots, but less so by the rest. I agree with the word-size rationale. But given the machine is "fixed point" only, the choice seemed to be between integers, and fractions of magnitude between 0 and 1. Neither seems to me to be better suited to "plenty of digits before and after the decimal point". Either way, the position of the point is purely notional and the programmer needs to keep track of it.

        – another-dave
        Apr 1 at 12:13




        3




        3





        Fixed-point doesn't have to be fractions between 0 and 1; it just means the value is an integer scaled by some fixed constant.

        – chepner
        Apr 1 at 20:25





        Fixed-point doesn't have to be fractions between 0 and 1; it just means the value is an integer scaled by some fixed constant.

        – chepner
        Apr 1 at 20:25













        I could perhaps have been clearer on that. Von Neumann points out in First Draft of a Report on the EDVAC that you can do all the calculations with numbers between 0 and 1 and scale the result accordingly, even. You still need those very long words to avoid a loss of precision with the intermediate results though.

        – Matthew Barber
        Apr 1 at 21:33






        I could perhaps have been clearer on that. Von Neumann points out in First Draft of a Report on the EDVAC that you can do all the calculations with numbers between 0 and 1 and scale the result accordingly, even. You still need those very long words to avoid a loss of precision with the intermediate results though.

        – Matthew Barber
        Apr 1 at 21:33














        @chepner I'm aware of 'manual' scaling. But if you read, for example, the EDSAC description in Wilkes, Wheeler, and Gill, then they regard the store itself as holding numbers in the range -1 to +1 (binary point at left), rather than -2^35 to +2^35 (binary point at right) or anything else. That's the point of my question.

        – another-dave
        Apr 1 at 22:34





        @chepner I'm aware of 'manual' scaling. But if you read, for example, the EDSAC description in Wilkes, Wheeler, and Gill, then they regard the store itself as holding numbers in the range -1 to +1 (binary point at left), rather than -2^35 to +2^35 (binary point at right) or anything else. That's the point of my question.

        – another-dave
        Apr 1 at 22:34













        The First Draft is the clincher for this argument, I think. Thanks for the reference.

        – another-dave
        Apr 1 at 22:53





        The First Draft is the clincher for this argument, I think. Thanks for the reference.

        – another-dave
        Apr 1 at 22:53











        10














        This is not really a hardware issue at all, but just a different way of interpreting the bit patterns. A "fixed decimal point" representation for numbers is still used in some situations, where the full power and flexibility of floating point is unnecessary.



        The IBM S/360 and S/370 hardware had decimal arithmetic as well as binary, and IBM's PL/I programming language had both "fixed decimal" and "fixed binary" data types, with an implied decimal point anywhere in the number, though fixed binary was mainly used for the special case of "integers". Fixed decimal was an obvious choice for handling financial calculations involving decimal currency such as dollars and cents, for example, because of the simple conversion to and from human-readable numbers.



        Fixed point binary is still used in numerical applications like signal processing, where lightweight hardware (integer-only) and speed are critical and the generality of floating point is unnecessary.



        In terms of the computer's instruction set, all that is needed is integer add, subtract, multiply, divide, and shift instructions. Keeping track of the position of the implied decimal point could be done by the compiler (as in PL/1) or left to the programmer to do manually. Used carefully, doing it manually could both minimize the number of shift operations and maximize the numerical precision of the calculations, compared with compiler-generated code.



        There is a lot of similarity between this type of numerical processing and "multi-word integers" used for very high precision (up to billions of significant figures) in modern computing.






        share|improve this answer


















        • 2





          Not just because of the simple conversion to and from human-readable numbers. Fixed point decimal, like BCD on later systems, has the big advantage of avoiding things like 0.30 turning into 0.29999999999999999999 etc.

          – manassehkatz
          Apr 1 at 14:03






        • 2





          Fixed-point binary is in widespread use in FPGA work as well. It used to be in heavy use in industry for embedded software generally, before 32-bit microcontrollers with floating-point became available at a reasonable price. I heaved a sigh of relief around 2001 when I could leave that all behind. Then about 4 years back I started working on FPGAs, and I was right back there again!

          – Graham
          Apr 1 at 16:21






        • 2





          Speaking as someone who has actually written a fixed point library in C++ I concur with all points. Integers and fixed points aren't all that different from each other in terms of binary representation. In fact, it could be argued that an integer is just a fixed point type with a fractional size of 0.

          – Pharap
          Apr 2 at 14:14















        10














        This is not really a hardware issue at all, but just a different way of interpreting the bit patterns. A "fixed decimal point" representation for numbers is still used in some situations, where the full power and flexibility of floating point is unnecessary.



        The IBM S/360 and S/370 hardware had decimal arithmetic as well as binary, and IBM's PL/I programming language had both "fixed decimal" and "fixed binary" data types, with an implied decimal point anywhere in the number, though fixed binary was mainly used for the special case of "integers". Fixed decimal was an obvious choice for handling financial calculations involving decimal currency such as dollars and cents, for example, because of the simple conversion to and from human-readable numbers.



        Fixed point binary is still used in numerical applications like signal processing, where lightweight hardware (integer-only) and speed are critical and the generality of floating point is unnecessary.



        In terms of the computer's instruction set, all that is needed is integer add, subtract, multiply, divide, and shift instructions. Keeping track of the position of the implied decimal point could be done by the compiler (as in PL/1) or left to the programmer to do manually. Used carefully, doing it manually could both minimize the number of shift operations and maximize the numerical precision of the calculations, compared with compiler-generated code.



        There is a lot of similarity between this type of numerical processing and "multi-word integers" used for very high precision (up to billions of significant figures) in modern computing.






        share|improve this answer


















        • 2





          Not just because of the simple conversion to and from human-readable numbers. Fixed point decimal, like BCD on later systems, has the big advantage of avoiding things like 0.30 turning into 0.29999999999999999999 etc.

          – manassehkatz
          Apr 1 at 14:03






        • 2





          Fixed-point binary is in widespread use in FPGA work as well. It used to be in heavy use in industry for embedded software generally, before 32-bit microcontrollers with floating-point became available at a reasonable price. I heaved a sigh of relief around 2001 when I could leave that all behind. Then about 4 years back I started working on FPGAs, and I was right back there again!

          – Graham
          Apr 1 at 16:21






        • 2





          Speaking as someone who has actually written a fixed point library in C++ I concur with all points. Integers and fixed points aren't all that different from each other in terms of binary representation. In fact, it could be argued that an integer is just a fixed point type with a fractional size of 0.

          – Pharap
          Apr 2 at 14:14













        10












        10








        10







        This is not really a hardware issue at all, but just a different way of interpreting the bit patterns. A "fixed decimal point" representation for numbers is still used in some situations, where the full power and flexibility of floating point is unnecessary.



        The IBM S/360 and S/370 hardware had decimal arithmetic as well as binary, and IBM's PL/I programming language had both "fixed decimal" and "fixed binary" data types, with an implied decimal point anywhere in the number, though fixed binary was mainly used for the special case of "integers". Fixed decimal was an obvious choice for handling financial calculations involving decimal currency such as dollars and cents, for example, because of the simple conversion to and from human-readable numbers.



        Fixed point binary is still used in numerical applications like signal processing, where lightweight hardware (integer-only) and speed are critical and the generality of floating point is unnecessary.



        In terms of the computer's instruction set, all that is needed is integer add, subtract, multiply, divide, and shift instructions. Keeping track of the position of the implied decimal point could be done by the compiler (as in PL/1) or left to the programmer to do manually. Used carefully, doing it manually could both minimize the number of shift operations and maximize the numerical precision of the calculations, compared with compiler-generated code.



        There is a lot of similarity between this type of numerical processing and "multi-word integers" used for very high precision (up to billions of significant figures) in modern computing.






        share|improve this answer













        This is not really a hardware issue at all, but just a different way of interpreting the bit patterns. A "fixed decimal point" representation for numbers is still used in some situations, where the full power and flexibility of floating point is unnecessary.



        The IBM S/360 and S/370 hardware had decimal arithmetic as well as binary, and IBM's PL/I programming language had both "fixed decimal" and "fixed binary" data types, with an implied decimal point anywhere in the number, though fixed binary was mainly used for the special case of "integers". Fixed decimal was an obvious choice for handling financial calculations involving decimal currency such as dollars and cents, for example, because of the simple conversion to and from human-readable numbers.



        Fixed point binary is still used in numerical applications like signal processing, where lightweight hardware (integer-only) and speed are critical and the generality of floating point is unnecessary.



        In terms of the computer's instruction set, all that is needed is integer add, subtract, multiply, divide, and shift instructions. Keeping track of the position of the implied decimal point could be done by the compiler (as in PL/1) or left to the programmer to do manually. Used carefully, doing it manually could both minimize the number of shift operations and maximize the numerical precision of the calculations, compared with compiler-generated code.



        There is a lot of similarity between this type of numerical processing and "multi-word integers" used for very high precision (up to billions of significant figures) in modern computing.







        share|improve this answer












        share|improve this answer



        share|improve this answer










        answered Apr 1 at 12:11









        alephzeroalephzero

        2,3881816




        2,3881816







        • 2





          Not just because of the simple conversion to and from human-readable numbers. Fixed point decimal, like BCD on later systems, has the big advantage of avoiding things like 0.30 turning into 0.29999999999999999999 etc.

          – manassehkatz
          Apr 1 at 14:03






        • 2





          Fixed-point binary is in widespread use in FPGA work as well. It used to be in heavy use in industry for embedded software generally, before 32-bit microcontrollers with floating-point became available at a reasonable price. I heaved a sigh of relief around 2001 when I could leave that all behind. Then about 4 years back I started working on FPGAs, and I was right back there again!

          – Graham
          Apr 1 at 16:21






        • 2





          Speaking as someone who has actually written a fixed point library in C++ I concur with all points. Integers and fixed points aren't all that different from each other in terms of binary representation. In fact, it could be argued that an integer is just a fixed point type with a fractional size of 0.

          – Pharap
          Apr 2 at 14:14












        • 2





          Not just because of the simple conversion to and from human-readable numbers. Fixed point decimal, like BCD on later systems, has the big advantage of avoiding things like 0.30 turning into 0.29999999999999999999 etc.

          – manassehkatz
          Apr 1 at 14:03






        • 2





          Fixed-point binary is in widespread use in FPGA work as well. It used to be in heavy use in industry for embedded software generally, before 32-bit microcontrollers with floating-point became available at a reasonable price. I heaved a sigh of relief around 2001 when I could leave that all behind. Then about 4 years back I started working on FPGAs, and I was right back there again!

          – Graham
          Apr 1 at 16:21






        • 2





          Speaking as someone who has actually written a fixed point library in C++ I concur with all points. Integers and fixed points aren't all that different from each other in terms of binary representation. In fact, it could be argued that an integer is just a fixed point type with a fractional size of 0.

          – Pharap
          Apr 2 at 14:14







        2




        2





        Not just because of the simple conversion to and from human-readable numbers. Fixed point decimal, like BCD on later systems, has the big advantage of avoiding things like 0.30 turning into 0.29999999999999999999 etc.

        – manassehkatz
        Apr 1 at 14:03





        Not just because of the simple conversion to and from human-readable numbers. Fixed point decimal, like BCD on later systems, has the big advantage of avoiding things like 0.30 turning into 0.29999999999999999999 etc.

        – manassehkatz
        Apr 1 at 14:03




        2




        2





        Fixed-point binary is in widespread use in FPGA work as well. It used to be in heavy use in industry for embedded software generally, before 32-bit microcontrollers with floating-point became available at a reasonable price. I heaved a sigh of relief around 2001 when I could leave that all behind. Then about 4 years back I started working on FPGAs, and I was right back there again!

        – Graham
        Apr 1 at 16:21





        Fixed-point binary is in widespread use in FPGA work as well. It used to be in heavy use in industry for embedded software generally, before 32-bit microcontrollers with floating-point became available at a reasonable price. I heaved a sigh of relief around 2001 when I could leave that all behind. Then about 4 years back I started working on FPGAs, and I was right back there again!

        – Graham
        Apr 1 at 16:21




        2




        2





        Speaking as someone who has actually written a fixed point library in C++ I concur with all points. Integers and fixed points aren't all that different from each other in terms of binary representation. In fact, it could be argued that an integer is just a fixed point type with a fractional size of 0.

        – Pharap
        Apr 2 at 14:14





        Speaking as someone who has actually written a fixed point library in C++ I concur with all points. Integers and fixed points aren't all that different from each other in terms of binary representation. In fact, it could be argued that an integer is just a fixed point type with a fractional size of 0.

        – Pharap
        Apr 2 at 14:14











        5

        Some early computers did use integers.

        Manchester University's ‘Baby’ computer calculated the highest factor of an integer on 21 June 1948. https://www.open.edu/openlearn/science-maths-technology/introduction-software-development/content-section-3.2

        EDSAC 2, which entered service in 1958, could do integer addition. https://www.tcm.phy.cam.ac.uk/~mjr/courses/Hardware18/hardware.pdf

        answered Apr 1 at 13:40
        Chris Brown
        511

          Well, as I understand it, EDSAC could do integer addition too. Where "the book" says you're adding 2^-35 + 2^-35 to get 2^-34, the programmer is perfectly at liberty to believe he's computing 1 + 1 = 2. It's just in the interpretation of the bits. I would describe EDSAC 2 as replacing fixed-point arithmetic for both reals and integers with floating-point arithmetic for the reals, leaving fixed point for the integers.

          – another-dave
          Apr 1 at 22:43
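
        A short C++ sketch of the comment's point (added for illustration only; the 2^-35 weight follows the EDSAC convention quoted above). The adder produces the same result bits either way; only the assumed weight of the least significant bit differs:

            #include <cstdint>
            #include <cmath>
            #include <iostream>

            int main() {
                // One machine word holding the bit pattern ...0001 (sign bit ignored).
                int64_t a = 1, b = 1;

                // The adder just adds the raw words.
                int64_t sum = a + b;

                // Integer view: the word is worth its raw value.
                std::cout << "integer view:  1 + 1 = " << sum << '\n';            // 2

                // EDSAC-style fraction view: the word is worth raw * 2^-35.
                double fa   = std::ldexp(static_cast<double>(a), -35);            // 2^-35
                double fsum = std::ldexp(static_cast<double>(sum), -35);          // 2^-34
                std::cout << "fraction view: " << fa << " + " << fa
                          << " = " << fsum << " (i.e. 2^-34)\n";
            }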











        3

        Computers came from calculators, and calculators are designed to perform numerical computations, hence the need for a decimal point.

        Babbage's difference engine (~1822) held ten 10-digit decimal numbers.  Its job was computing numerical tables — and printing them, since the copying of the day (by humans) made more mistakes than the mathematicians who made the original calculations.

        EDVAC belonged to the US Army's Ordnance Department; its job was ballistics computation.

        edited Apr 1 at 22:47
        answered Apr 1 at 22:41
        Erik Eidt
        1,127412

        • But did Babbage say his numbers were between 0 and 10 thousand million (being British, he would not have used "billion" for 10^9), or between 0 and 1 (approximately speaking)?

          – another-dave
          Apr 1 at 23:01











        3

        The problems early computers were meant to solve used real numbers. Often these numbers were very large or very small, so you need a scale factor for the computer to handle them.

        If the computer natively thinks in numbers from 0.0 to 1.0 (or -1.0 to 1.0), then the scale factor is simply the maximum value of the variable. This is easy for human minds to handle and not very error-prone.

        If the computer natively thinks in numbers from 0 to 16777215 (or whatever), the scale factor becomes something completely different. Getting the scale factor right becomes much harder and a source of errors.

        Let's go with fewer errors, shall we?

        As others have pointed out, the actual hardware is the same for the two schemes; it is just a matter of how humans interact with the machine. 0.0 to 1.0 is more human-friendly.

        answered Apr 2 at 7:54
        Stig Hemmer
        12913
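
        As a sketch of the scale-factor idea in this answer (the altitude example, the 20000 m maximum and the 24-bit word are arbitrary choices made here, not from the original answer): the programmer picks the largest value the variable can take and stores the variable as a fraction of that maximum.

            #include <cstdint>
            #include <iostream>

            int main() {
                // Machine-style view: a 24-bit word interpreted as a fraction in [0, 1).
                const double MAX_ALTITUDE_M = 20000.0;       // scale factor = largest expected value
                const int    WORD_BITS      = 24;
                const double LSB            = 1.0 / (1 << WORD_BITS);   // weight of the lowest bit, 2^-24

                double altitude_m = 12345.0;                 // the real-world quantity

                // Store: the altitude as a fraction of its maximum, rounded to a 24-bit word.
                uint32_t word = static_cast<uint32_t>(altitude_m / MAX_ALTITUDE_M / LSB + 0.5);

                // Recover: the fraction times the scale factor.
                double recovered = word * LSB * MAX_ALTITUDE_M;

                std::cout << "word = " << word << ", recovered altitude = " << recovered << " m\n";
            }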





















        1

        At the time, a fixed-point fraction representation seemed like the best solution for handling non-integer numbers, because it avoids many of the issues present in other formats, such as values that cannot be represented precisely and cumulative errors when performing relatively common operations repeatedly.

        In time, better ways to represent floating-point numbers in binary were devised. General-purpose hardware became able to process such formats, giving the programmer a free choice of which representation to use.

        answered Apr 2 at 8:35
        user
        4,025818
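
        A small illustration of the "cumulative errors" point (not from the original answer; a decimal scale of thousandths is used here, in the spirit of the earlier BCD comment, since a pure power-of-two scale cannot hold 0.1 exactly either). Repeatedly adding one tenth drifts in binary floating point but stays exact as a scaled integer:

            #include <cstdint>
            #include <cstdio>

            int main() {
                // Add one tenth of a unit 1000 times, once in binary floating point
                // and once as a scaled integer (fixed point with a decimal scale of 1/1000).
                double  f  = 0.0;
                int64_t fx = 0;                    // value stored in thousandths
                for (int i = 0; i < 1000; ++i) {
                    f  += 0.1;
                    fx += 100;                     // 0.1 unit = 100 thousandths, exact
                }
                std::printf("floating point: %.15f\n", f);            // slightly off 100 due to rounding
                std::printf("fixed point:    %.15f\n", fx / 1000.0);  // exactly 100
            }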





















        0

        I am not sure I understand your question correctly, but if I did, then "word" is commonly used not as a data type (like "integer") but as a whole hardware register, so an 8-bit CPU has an 8-bit machine word.

        In fixed-point operations, common practice is to store the "before the point" value in one register and the "after the point" value in another register. So, from the CPU's point of view, both are words.

        What is the main difference between a word and an integer? A word is only a set of bits; it does not represent any particular value by itself, because one and the same word can be read as different values - for example, the word 11111111 can be -1 as a signed (two's complement) integer or 255 as an unsigned integer, depending on whether you treat the first bit as a sign bit. Two words can make a double-width fixed-point value (an integer part before the point and a fractional part after it), a longer integer (for an 8-bit CPU, a 16-bit int), and so on, depending on how you use their values in your calculations.

        I suppose that "word", in your case, describes the unit in which data is accessed in memory, which need not be a whole number on its own; the other part of a number can be located in a different word.

        answered Apr 3 at 17:02
        Stanislav Orlov
        935
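
        A small C++ sketch of the "same bits, different interpretations" point in this answer (illustrative only; the 0xFF example value is chosen here): one 8-bit word read as an unsigned integer, as a two's-complement integer, and as a binary fraction weighted by 2^-8.

            #include <cstdint>
            #include <cstring>
            #include <iostream>

            int main() {
                uint8_t word = 0xFF;                 // one 8-bit machine word, all bits set

                int8_t as_signed;
                std::memcpy(&as_signed, &word, 1);   // reinterpret the same bits as a signed byte

                std::cout << "unsigned integer:    " << static_cast<int>(word) << '\n';        // 255
                std::cout << "two's complement:    " << static_cast<int>(as_signed) << '\n';   // -1
                std::cout << "fraction (x * 2^-8): " << word / 256.0 << '\n';                  // 0.996094
            }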


























