
48 might have been a much saner answer, or the language could just disallow operations that don't really make any sense, instead of implicitly casting in non-obvious ways.


48? Why's that? How did you reach this conclusion?


When you see a character, it usually has an underlying representation as an unsigned integer (or a byte, an array of bytes, or an int if that's your thing, with size depending on ASCII, Unicode, etc.). In both UTF-8 and ASCII the character "1" has a decimal value of 49. If the language actually allows you to subtract a number from a character (or a length-one string, depending on language semantics), which is dubious to allow anyway, the expected behavior should be to return 48, but it's really a code smell to even attempt that operation.
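The claim about the underlying value can be checked directly in JavaScript itself; a minimal sketch of the arithmetic the commenter expects:

```javascript
// The code unit for the character "1" is 49 in ASCII, UTF-8, and UTF-16.
const code = "1".charCodeAt(0);
console.log(code);     // 49
console.log(code - 1); // 48 (the result the commenter argues would be saner)
```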

EDIT: Clarified last statement.


Because JavaScript was first and foremost a language for validating user input, and user input will always (well, originally) be a string. In this light the type coercion choices JS makes are usually pretty sane. Given that the browser is an environment where a user expects things to mostly still work in the case of an error (in formatting, markup, etc.), having that carry through to the language is a pretty logical choice.

Given those two things, these edge cases are entirely sane, and the flexibility of the language makes it a very good choice for a lot of tasks. No, it does not suit a purist mindset when it comes to languages, but in the context of its design goals it makes perfect sense, and it is easy enough to avoid these scenarios when writing code that doesn't interact with user data.

For the record, I've seen C# developers pass around Guids (UUIDs) as strings, even in code that never crosses a wire and where the DBMS uses the UUID format. I've seen Java developers do some pretty asinine things too. The language can't always protect you from shooting yourself; in the case of JS, for the most part at least a small bug doesn't take the building down with it.


String.prototype.charCodeAt and String.fromCharCode are never called in implicit type conversion. Type conversion is always done by calling primitive constructor functions. In the case of subtraction ("1" - 1), Number is called on any non-numeric operand. Also see:

> false - 1

> true - 1
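Those two expressions behave the same way as the string case: each non-numeric operand is coerced with Number() before the subtraction. A quick sketch:

```javascript
// Subtraction coerces each non-numeric operand via Number(), never via char codes.
console.log("1" - 1);   // 0   (Number("1") is 1)
console.log(false - 1); // -1  (Number(false) is 0)
console.log(true - 1);  // 0   (Number(true) is 1)
```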


If you have any experience in most other programming languages, this would seem to violate the principle of least astonishment (see "wat").


Why do you think that '1'.charCodeAt() and then subtracting 1 is more appropriate and logical than Number('1') and then subtracting 1?

Please note that the subtraction operator is not defined for strings in JS, and the language is loosely typed.


Because 49 isn't just the "character code" for "1"; it is _the actual value_.


You're talking low level representation now and not high level.


If you're going to insist that a character (or string) is its own magic datatype _and is not an unsigned (or similar) underneath it all_, and you're going to talk about high-level niceties, then your interpreter really, really should not violate the principle of least astonishment. There is no sane way to frame "1" - 1 _unless you explicitly typecast the string to a number_, because you now have to reconcile it with what should be identical behavior for numeric types, like "1" + (-1), which, guess what, yields "1-1" in JavaScript, which is the definition of insane. You also have to deal with other, less obvious cases, like when the string is in a var and is _not always guaranteed to be a nice number_, which really makes ever attempting anything like that a code smell. It's far easier for both the programmer and the interpreter _not_ to play the guessing game and implement inconsistent numeric behavior, and instead to just say "well, I'm not going to do this unless you really insist (via an explicit cast) that you want this".
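The asymmetry being complained about is easy to reproduce; a minimal sketch:

```javascript
// The same logical operation, spelled two ways, gives two different answers:
console.log("1" - 1);    // 0     (- always coerces both operands to Number)
console.log("1" + (-1)); // "1-1" (+ with any string operand concatenates)

// And the "not guaranteed to be a nice number" case fails silently:
console.log("abc" - 1);  // NaN   (no error, just propagating garbage)
```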


Why are you insisting on ignoring the fact that JS is a loosely typed language and that automatic type casting is in its DNA?


Having automatic type casting of the form we've seen above present in the language is like having a gun without a safety: it's that 1% of the time when the pin inadvertently strikes the shell that you will really, really wish you had a language that would have faulted rather than silently proceeding with broken logic and now potentially disastrously bad data. I can't believe that anyone would pick a language that would allow you (especially silently) to be this sloppy.
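The "fault rather than proceed" discipline the commenter wants can be approximated in userland. The helper below, `toNumberStrict`, is a hypothetical name introduced here for illustration, not anything from the thread or a library:

```javascript
// Hypothetical strict-cast helper: throw loudly instead of coercing silently.
function toNumberStrict(value) {
  const n = Number(value);
  // Reject non-strings, empty strings (Number("") is 0!), and NaN results.
  if (typeof value !== "string" || value.trim() === "" || Number.isNaN(n)) {
    throw new TypeError(`Not a numeric string: ${JSON.stringify(value)}`);
  }
  return n;
}

console.log(toNumberStrict("1") - 1); // 0
// toNumberStrict("abc") throws a TypeError instead of yielding NaN
```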


You can't reason with dogmatic traditionalists like you people.


"1" is 49 in ASCII.



