
Whoever thought it was a good idea to have case-sensitive function names, where two names are allowed to be identical except for case yet completely different in function?

Major fuck-up there.

That should have never passed the concept stage, nor the review stage.

Function names should describe what a function does.



Case sensitivity in any programming language is crazy. I can't for the life of me see a valid engineering principle that accepts IsTheOne() and istheone() being different bits of code. Oh sure, at a technical level the computer has no problem with it... the problem is restricted to those oh-so-error-prone humans.
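To make the complaint concrete, here is a minimal sketch in Python (a case-sensitive language; the function names and bodies are invented for illustration) showing that the two spellings are completely independent definitions:

```python
# In a case-sensitive language these are two unrelated functions
# that happen to differ only in capitalization.
def IsTheOne(x):
    return x == 1

def istheone(x):
    return x == 0  # silently different behaviour

# A caller who mistypes the capitalization gets the *other* function,
# with no complaint from the language.
print(IsTheOne(1), istheone(1))
```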

Can anyone here honestly say that if they were doing a code review they'd agree that solely a difference in case would get their approval?

And don't even get me started on filenames that only differ by case. If that's a good idea then I think trailing whitespace should be significant too!!!


There are languages that use lowercased names for constants (Erlang's atoms) and uppercased names for variables, which are immutable and quasi-constant anyway. This is uncommon, but many other languages use uppercase for classes (Person) and lowercase for variables, as in Person person = new Person(). That last example is one reason for having case-sensitive languages. I can't think of any language I used in the last 30 years that was case insensitive. Maybe some BASIC interpreters on home computers in the 80s? But not all of them had upper and lower case characters to start with.
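That class-vs-instance convention depends on case sensitivity. A sketch of the same idiom in Python (the Person class is just an example):

```python
class Person:
    def __init__(self, name):
        self.name = name

# Case sensitivity lets the instance reuse the class's name in lowercase,
# mirroring the Java-style `Person person = new Person()` above.
person = Person("Ada")
print(person.name)
```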

Linux has a notoriously case-sensitive file system, where trailing spaces are significant (they are not special characters). Mac and Windows can also have case-sensitive file systems, but that feature is turned off by default.


I don't know why you've been downvoted, but anyone who disagrees with you needs to explain in what situation it would make sense to have two different functions or files with names differing only in case.

I upvoted you.


I have used all-uppercase to make a distinction like class vs. instance in variables (in case-sensitive languages in which the class might be an ordinary held-in-a-variable value, like JavaScript), and I might do it again. But it's very unusual, and it's also the kind of practice that is more suitable for a 1KLOC project that will receive 100 hours of effort from a single maintainer over its lifetime than for a bigger project with many maintainers and a highly motivated community of attackers.

I don't think there was ever a case when I was tempted to name two functions with different cases, but if I ever had to write a modest-sized 1-maintainer system in which many functions came in exactly two different flavors, I might be tempted. (Perhaps threadsafe locked vs. raw? or some C++-like distinction between raw functions and closure-like class instances which can be used in a function-call context? or raw functions vs. wrappers with the extra plumbing required to let them be invoked from a scripting language?)

afterthought: And now that I think of it, in old C code I think I vaguely remember working with macro vs. function implementations of the same operation distinguished by capitalizing the name, and I don't think the name convention was an urgent problem. C macros can breed various errors, but I think bitbang_macro vs. bitbang would breed pretty much the same errors as BITBANG vs. bitbang.


In those situations, it would have been much more readable to have classFoo vs foo, foo() vs nonThreadSafeFoo(), etc.

The bigger point is that while you can come up with creative ways to take advantage of case sensitivity, it's not that you would have missed it if the language were case-insensitive. From that point of view, case sensitivity has no benefit, only a cost: it leads to irritating errors from the compiler, or runtime errors in dynamically typed languages.

If something has no benefit, and only a cost, we should get rid of it.


It's not that it would make sense. It's more that it's considered the programmer's fault.

Next week's stupid but expensive typo will be received() instead of receive(). What will you demand then? That the compiler refuse names based on Levenshtein distance?

It's the job of a style checking tool, not the job of the compiler.
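Such a style check is easy to sketch. A hedged Python example (the threshold and the identifier list are made up; difflib's similarity ratio stands in for a true Levenshtein distance) that flags confusable name pairs:

```python
from difflib import SequenceMatcher

def confusable(a, b, threshold=0.8):
    """Flag identifier pairs that are suspiciously similar:
    identical up to case, or close in edit similarity."""
    if a == b:
        return False
    if a.lower() == b.lower():
        return True  # differ only in case
    return SequenceMatcher(None, a, b).ratio() >= threshold

names = ["receive", "received", "IsTheOne", "istheone", "transfer"]
pairs = [(a, b) for i, a in enumerate(names)
         for b in names[i + 1:] if confusable(a, b)]
print(pairs)
```

A linter can warn on such pairs without the language itself having to refuse them.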


That's a false analogy.

Saying it's the programmer's fault is another circular argument. It is, only in languages that make it the programmer's fault, but I would say that it's actually the language designers' fault.


IsTheOne() and istheone() are different bits of code.


That's a circular argument. They are different only because the language says they are different, while the post you've responded to says (correctly) that that's bad language design.


The bit representation is different.


Which is irrelevant, since languages are a medium of human expression, not a way to store bits on the disk.


Irrelevant to what? The two words literally are represented with different bits (code).


Assuming you are not trolling: yes, they are indeed represented with different bits, which doesn't matter, since these bits are not the bits executed by the CPU in the end. The "words", not the bits, are interpreted or compiled into machine code, which is where bits matter.


Clearly case sensitivity is a decision by the language designers. Many languages are case-insensitive.

from https://en.wikipedia.org/wiki/Case_sensitivity

Some computer languages are case-sensitive for their identifiers (C, C++, Java, C#, Verilog, Ruby and XML). Others are case-insensitive (i.e., not case-sensitive), such as Ada, most BASICs (an exception being BBC BASIC), Fortran, SQL and Pascal.


I couldn't agree more with this sentiment. I haven't spent too much time on Ethereum contracts but this is such a ridiculous language/platform design issue.


Funny thing is, it reminds me of the implicit typing of int/float based on the first letter of the variable name that ye olde FORTRAN77 had as a major footgun.


Upvoted for introducing me to the word footgun


The problem is deeper: this idea of contracts enforced by computer code written by humans (= full of bugs) is a dumb one.


Just because some smart contracts fail doesn't mean they all have to.


The problem is one of trust. Contracts that can either be formally verified, or that by the nature of the language they are written in are mathematically guaranteed to be free of certain bugs and behaviors, can be 'TRUSTED'.

It's a very bad thing to not be able to trust that your money can't be trivially stolen.


Yeah. But which are which, and are you willing to put your money on that?


Are we sure that case sensitivity is the culprit? As others have noted, the function being called takes (and is given) 3 arguments but the function that is supposed to be called takes only 2.


IDE and autocompletion could be the direct cause, but in the end if the function had been properly named it wouldn't have happened (root cause).


For any language that allows non-ASCII identifiers, trying to define "case insensitive" is a minefield best avoided. For example, with general Unicode characters, "case-insensitive" comparison doesn't give you an equivalence relation on identifiers because transitivity fails.

Of course you can define things to be case-insensitive only in the ASCII range, and treat your non-ASCII stuff as second-class citizens...
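Python's two string-lowering operations illustrate the minefield: naive .lower() and the Unicode-correct .casefold() already disagree about which identifiers would collide (German 'ß' case-folds to the two-letter "ss"):

```python
# Even deciding what "same letters, different case" means is hard in Unicode.
a, b = "straße", "STRASSE"
print(a.lower() == b.lower())        # naive lowering: "straße" != "strasse"
print(a.casefold() == b.casefold())  # full case folding: both become "strasse"
```

Any case-insensitive language with non-ASCII identifiers has to pick one of these behaviors and live with the surprises.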


Especially in code that supposedly describes contracts.


What a shitshow. These people don't need "advanced static analysis tools" [1], they need a big old bag of clue.

1: Developers, cryptographers and computer scientists should note that any high-level tools (including IDEs, formal verification, debuggers, symbolic execution) that make it easy to write safe smart contracts on Ethereum are prime candidates for DevGrants, Blockchain Labs grants and String’s autonomous finance grants.


OpenSSL had some code cruft too, and god knows that didn't stop it from being deployed in production. People use code because it works, not code that's perfect.


That's not good enough for software that's meant to be money. Besides, OpenSSL has the excuse of being old. Why would you design a new system to be crufty from day one?


I wouldn't design any system to be crufty or have bugs. I was just pointing out that it happens. And honestly probably gives us a lot cooler stuff than were software production restricted to developers who were certified to produce best practices code.

I feel like without weird blood we'd only ever get enterprise-approved spreadsheet software.

PS: Caveat. Personally, of course I'd never use stuff like that for mission critical deployments until it's been battle tested.



