Lol name one outside of its well-known equality rules that linters check for.
Also, name the language you think is better.
Because for those of us who have coded in languages that are actually bad, hearing people complain about triple equals signs for the millionth time seems pretty lame.
Recently I encountered an issue with “casting”. I had a class “foo” and a class “bar” that extended class foo. I made a list of class “foo” and added “bar” objects to the list. But when I tried to use objects from the “foo” list, cast them to “bar”, and call a “bar” member function, I got a runtime error saying it didn’t exist. Maybe this was user error, but it doesn’t align with what I’ve come to expect from languages.
I just feel like instead of slapping some silly abstraction on a language we should actually work on integrating a proper type safe language in its stead.
I think that might be user error as I can’t recreate that:
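(Roughly this, as a minimal sketch; the speak() method and the exact setup are invented to match the description above:)

    class Foo {}

    class Bar extends Foo {
      // hypothetical member that only exists on Bar
      speak() {
        return "I am a bar";
      }
    }

    // a list declared as holding Foos, with Bar objects added to it
    const list: Foo[] = [new Bar(), new Bar(), new Foo()];

    for (const item of list) {
      const maybeBar = item as Bar;          // "cast" it to Bar
      // check that the member actually exists before calling it
      if (typeof maybeBar.speak === "function") {
        console.log(maybeBar.speak());       // runs without a runtime error
      }
    }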
Yeah, you would get a runtime error calling that member without checking that it exists.
Because that object is of a type where that member may or may not exist. That is literally the exact same behaviour as Java or C#.
If I cast or type check it to make sure it’s of type Bar rather than checking for the member explicitly it still works:
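(For example, continuing with the same made-up classes and list as the sketch above:)

    for (const item of list) {
      if (item instanceof Bar) {
        // narrowed to Bar by the instanceof check
        console.log(item.speak());   // compiles and runs fine
      }
    }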
And when I cast it to Foo it throws a compile time error, not a runtime error:
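(Again with the same made-up classes; the comment shows the error TypeScript reports:)

    const first = list[0] as Foo;
    first.speak();   // compile-time error: Property 'speak' does not exist on type 'Foo'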
I think your issues may just lie in the semantics of how type checking works in JavaScript / Typescript.
@masterspace “Undeclared variable” is a runtime error.
Perl.
A) yes, that’s how interpreted languages work.
B) the very simple, long-established way to avoid it is to configure your linter:
https://eslint.org/docs/latest/rules/no-undef
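For example, a minimal legacy-style .eslintrc.js (just a sketch; real setups usually extend "eslint:recommended", which already enables this rule):

    // .eslintrc.js
    module.exports = {
      rules: {
        "no-undef": "error",   // flag references to undeclared variables
      },
    };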
I haven’t used Perl though, what do you like better about it?
@masterspace
“Undeclared variable” is a compile-time error.
K, well configure your linter the way a professional Typescript environment should have it configured, and it will be there too. Not to be rude but not having a linter configured and running is a pretty basic issue. If you configured your project with Vite or any other framework it would have this configured OOTB.
@masterspace
Yeah, if you’re a C programmer in the 1980s, maybe. But it’s 2006 now and compilers are able to do basic sanity checks all on their own.
Interpreted languages don’t have compilers, and one of the steps that compilers do is LINTING. You’re literally complaining about not configuring your compiler properly and blaming it on the language.
Not to play the devil’s advocate, but with compiled languages you can just install the language and “run” your script, and it’ll work; if it doesn’t, the language will catch undeclared variables for you, and more. With interpreted languages you need to install not only the language but also third-party tools for these fairly basic things.
To play devil’s advocate to that, why is it better for a language to be monolithic vs. having its various components be independent, letting different frameworks mix and match different parts?
@masterspace Love the confidence, but your facts could do with some work.
“Interpreted language” is not a thing. Interpreted vs compiled is a property of a particular implementation, not the language.
(I wonder how you would classify lazy functional languages like Haskell. The first implementations were all interpreters because it was not clear that the well-known graph reduction technique for lazy evaluation could be compiled to native code at all. Today, hugs (a Haskell interpreter written in C) isn’t maintained anymore, but GHC still comes with both a native compiler (ghc) and an interpreter (runghc, ghci).)
Most implementations that employ interpretation are compiler/interpreter hybrids. It is rare to find an interpreter that parses and directly executes each line of code before proceeding to the next (the major exception being shells/shell scripts). Instead they first compile the source code to an internal representation, which is then interpreted (or, in some cases, stored elsewhere, like Python’s .pyc files or PHP’s “opcache”).

You can tell something is a compile-time error if its occurrence anywhere in the file prevents the whole file from running. For example, if you take valid code and add a random { at the end, none of the code will be executed (in Javascript, Python, Perl, PHP, C, Haskell, but not in shell scripts).

The original “lint” tool was developed because old C compilers did little to no error checking. (For example, they couldn’t verify the number and type of arguments in function calls against the corresponding function definitions.) “Linting” specifically refers to doing extra checks (some of which may be more stylistic in nature, like a /*FALLTHRU*/ comment in switch statements) beyond what the compiler does by itself.

If I refer to an undeclared variable in a Perl function, none of the code runs, even if the function is defined at the end of the file and never called. It’s a compile-time error (and I don’t have to install and configure a linting tool first). The same is true of Haskell.
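For contrast, here is what that distinction looks like in Javascript (a small sketch; the file name is made up):

    // runtime-error.js
    // An undeclared variable is only caught when the line actually executes:
    console.log("this line prints fine");
    console.log(notDeclaredAnywhere);   // ReferenceError thrown here, at runtime

    // By contrast, appending a stray "{" to this file would be a parse
    // (i.e. compile-time) error: nothing above would run at all.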
What’s funny to me is that Javascript copied use strict directly from Perl (albeit as a string, because Javascript doesn’t have use declarations), but turned it into a runtime check, which makes it a lot less useful.

None of that nitpicking changes the core point that they’re misconfiguring a dev environment and blaming the language.
JavaScript only has a single number type, so 0.0 is the same as 0. Thus when you are sending a JS object as JSON, in certain situations it will literally change 0.0 to 0 for you and send that instead (same with any number that has a zero decimal). This will cause casting errors in other languages when they attempt to deserialize ints into doubles or floats.
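For example (field names made up), in any Javascript runtime:

    // 0.0 and 0 are the same value in Javascript's single number type,
    // so the serialized JSON has no way to keep the ".0"
    JSON.stringify({ price: 0.0, qty: 2.0 });   // '{"price":0,"qty":2}'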
It will cause casting errors in other languages that have badly written, non-standards-compliant JSON parsers, as 1 and 1.0 being the same is part of the official JSON ISO standard and has been for a long time: https://json-schema.org/understanding-json-schema/reference/numeric
JSON schema is not a standard lol. 😂 it especially isn’t a standard across languages. And it most definitely isn’t an ISO standard 🤣. JSON Data Interchange Format is a standard, but it wasn’t published until 2017, and it doesn’t say anything about 1.0 needs to auto cast to 1 (because that would be fucking idiotic). https://datatracker.ietf.org/doc/html/rfc8259
JSON Schema does have a draft in the IETF right now, but JSON Schema isn’t a specification of the language, it’s for defining a schema for your code. https://datatracker.ietf.org/doc/draft-handrews-json-schema/
Edit: and to add to that, JavaScript has a habit of declaring their dumb bugs as “it’s in the spec” years after the fact. Just because it’s in the spec doesn’t mean it’s not a bug and just because it’s in the spec doesn’t mean everywhere else is incorrect.
Yes, it most literally and inarguably is:
https://www.iso.org/standard/71616.html
Page 3 of INTERNATIONAL STANDARD ISO/IEC 21778, Information technology — The JSON data interchange syntax:

“8 Numbers
A number is a sequence of decimal digits with no superfluous leading zero. It may have a preceding minus sign (U+002D). It may have a fractional part prefixed by a decimal point (U+002E). It may have an exponent, prefixed by e (U+0065) or E (U+0045) and optionally + (U+002B) or – (U+002D). The digits are the code points U+0030 through U+0039.”
You clearly do not understand the difference between a specification and a schema. On top of that, you clearly don’t understand that what you wrote down doesn’t even agree with you. Nowhere in that spec does it say anything about auto-casting a double or float to an int.
That confirms exactly what tyler said. I’m not sure if you’re misreading replies to your posts or misreading your own posts, but I think you’re really missing the point.
Let’s go through it point by point.
tyler said “JSON Schema is not an ISO standard”. As far as I can tell, this is true, and you have not presented any evidence to the contrary.
tyler said “JSON Data Interchange Format is a standard, but it wasn’t published until 2017, and it doesn’t say anything about 1.0 needs to auto cast to 1”. This is true and confirmed by your own link, which is a standard from 2017 that declares compatibility with RFC 8259 (tyler’s link) and doesn’t say anything about autocasting 1.0 to 1 (because that’s semantics, and your ISO standard only describes syntax).
tyler said “JSON Schema isn’t a specification of the language, it’s for defining a schema for your code”, which is true (and you haven’t disputed it).
Your response starts with “yes it is”, but it’s unclear what part you’re disagreeing with, because your own link agrees with pretty much everything tyler said.
Even the part of the standard you’re explicitly quoting does not say anything about 1.0 and 1 being the same number.
Why did you bring up JSON Schema (by linking to their website) in the first place? Were you just confused about the difference between JSON in general and JSON Schema?