This is one reason I stick to statically typed languages. I do a lot of work with cash values and without an explicit decimal type, the shit hits the fan when you hit a float issue.
I get a lot of flak on here for that opinion which is odd.
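To make the cash-values point concrete, here's a minimal JavaScript sketch (JavaScript being the language under discussion in this thread) of how binary floats mishandle decimal money, and the usual integer-cents workaround:

```javascript
// Binary floats cannot represent most decimal fractions exactly.
const price = 0.10;
const tax = 0.20;
console.log(price + tax);          // 0.30000000000000004, not 0.3
console.log(price + tax === 0.30); // false

// A common workaround: keep cash values in integer cents.
const priceCents = 10;
const taxCents = 20;
console.log((priceCents + taxCents) / 100); // 0.3
```

An explicit Decimal type forces this choice up front; with a bare float you only find out when the totals stop adding up.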
Dynamically typed languages are not the same as weakly typed languages. Dynamically typed languages, like Ruby, still have data types like String, Float, etc. I've seen people use floats in statically typed languages before and end up with rounding errors simply because they chose Float instead of Decimal.
I am fully aware of the differences between weak typing and dynamic languages.
The issue I have with dynamically typed languages is that the type is usually inferred at runtime, based on the operation. It's very easy to make mistakes, and for those mistakes not to become apparent until the shit hits the fan.
Fully statically typed languages put these concerns right in your face and make you think about them.
> Fully statically typed languages put these concerns right in your face and make you think about them.
That depends slightly - if your compiler uses Damas-Milner type inference, it's possible that the compiler will assign the most general type signature possible to your function, which will not prevent this type of error, as operations such as addition work on either integral or floating point numbers.
What you're really after is a strongly typed language that also has explicit type declarations (or a programmer that's reliable enough always to include them even when they are optional).
One counterexample is the contrast between Java and Python. In Java this is allowed, and predicting the result requires knowledge of detailed evaluation rules:
int i = 10;
int j = 20;
String s = "Test " + i + j; // s == "Test 1020", not "Test 30"
In Python the equivalent ("Test " + i + j with ints) is a runtime error.
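For what it's worth, JavaScript follows the Java-style rule here too. A short sketch of the left-to-right evaluation:

```javascript
// '+' evaluates left to right; once one operand is a string,
// the rest of the chain becomes string concatenation.
console.log("Test " + 10 + 20);   // "Test 1020"
console.log("Test " + (10 + 20)); // "Test 30"
console.log(10 + 20 + " Test");   // "30 Test" — addition happens first here
```

So the same expression silently means two different things depending on where the string appears in the chain.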
The 'in your face' difference between a statically typed language and a dynamic one is the time gap between a compilation error and an execution error. One tool for working effectively with dynamically typed languages is to keep that gap short, for example with unit tests.
It's not really about static typing or not, it's really about JS having some really strange and counter-intuitive behaviours in a lot of cases (numbers, default method scope, truthy values, ...).
Part of it is probably due to the fact that the syntax looks so much like C and Java that you'd expect it to behave the same. Part of it seems to be that it's hard to fix things without breaking existing codebases.
Every language has its own quirks, some even weirder than JS. The floating point "errors" are common to a hundred programming languages, which is why BigNum classes/extensions exist.
The problem in this case is not floating point approximation. It is that some features (parseFloat vs parseInt, ++, 45.0 being printed as 45 in most browsers) make the user believe that JS has an integer type.
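A quick sketch of the illusion being described:

```javascript
// parseInt and parseFloat suggest separate integer and float types...
console.log(parseInt("45.9"));   // 45
console.log(parseFloat("45.9")); // 45.9

// ...but both return the same number type, and 45.0 prints as 45:
console.log(45.0);                                                  // 45
console.log(typeof parseInt("45.9") === typeof parseFloat("45.9")); // true
```

Everything is the one number type underneath; the API names and the display format just hide it.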
Square peg in a round hole; you wouldn't try to understand Haskell with a Ruby background either.
Since it has no separate numeric types per se, it doesn't make much sense to think of an 'integer type'. JavaScript only has a number primitive, which is an IEEE 754 double-precision float.
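That single number type means integers are only exact up to 2^53; a sketch of the consequences:

```javascript
// All JS numbers are 64-bit IEEE 754 doubles, so 45.0 and 45 are the same value:
console.log(Number.isInteger(45.0)); // true
console.log(45.0 === 45);            // true

// And integer arithmetic silently loses precision past 2^53:
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(9007199254740992 + 1);    // 9007199254740992 — the +1 is lost
```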
Dynamically typed vs Weakly typed languages aside, you still only have one choice in the browser. Sure, you can write in something which has all the greatest features you love, but it will still be cross compiled to JavaScript, with all of its pitfalls right?
> Sure, you can write in something which has all the greatest features you love, but it will still be cross compiled to JavaScript, with all of its pitfalls right?
By this logic, no bignum library could exist because machine code doesn't have bignums, and everything gets compiled to machine code and/or run by an interpreter that's been compiled to machine code.
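Modern JavaScript's BigInt (added well after this kind of complaint became common) is exactly that layering: arbitrary-precision integers built on top of fixed-width machine arithmetic, in the same runtime whose plain numbers lose precision:

```javascript
// BigInt: arbitrary precision on top of fixed-width machine words.
const big = 2n ** 100n;
console.log(big);                   // 1267650600228229401496703205376n
console.log(big + 1n - 1n === big); // true — no precision loss

// The plain number type can't distinguish values at this magnitude:
console.log(2 ** 100 === 2 ** 100 + 1); // true — doubles round them together
```

The pitfalls of the compilation target constrain performance, not what abstractions you can build above it.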