I started writing a comment wanting to defend it, because the expressions themselves are not that unreadable. `<=>` is a comparator (used in many languages), `clamp` constrains a value between a min and a max (which can be guessed from the English meaning of the word), and `0..` is a range from 0 to infinity (admittedly not that common an expression, but a completely logical and intuitive way to express a range once you've understood that it's about ranges).
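For anyone who hasn't met these, a few standalone snippets (illustrative only, not the article's code) showing the three Ruby expressions in isolation:

```ruby
# <=> returns -1, 0, or 1 depending on how the operands compare
p(1 <=> 2)            # => -1 (left is smaller)
p(2 <=> 2)            # => 0  (equal)
p(3 <=> 2)            # => 1  (left is bigger)

# clamp constrains a value to the given bounds
p(15.clamp(0, 10))    # => 10
p((-3).clamp(0, 10))  # => 0

# 0.. is an endless range starting at 0 (Ruby 2.6+)
p((0..).cover?(42))   # => true
```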
But then I realized that's nonsensical, and not what the code is supposed to do given the usage in the template. I assume something got mangled there when changing the code for the blog post.
Or I'm just understanding the Ruby code wrong despite checking just now in irb; in that case, that's a point for the intention of your comment, and a vote for "worse".
I think it probably made sense to the author because they'd used all three values (-1, 0, 1) in other examples, and it would've been fine until it got separated out into a method that's reused to show the actual number.
I think they tried to be a bit too clever, basically.
Another thing is that almost every complaint I see about React (except bundle size maybe, but who cares?) exists in the APP context.
If your use case is a simple website, React is just a nice templating lib and you won't need to use any of the things people generally dislike about it. That AND your experience when you inevitably have to add some interactivity is going to be 100x better than vanilla JS.
As for the build step, there are many turnkey solutions nowadays that "just work". And isn't a small build step a plus, compared to being at the mercy of a typo breaking everything? To me that peace of mind is worth a lot, compared to whatever manual testing you'd have to do if you work with "text" files.
Are these templates only used on the server-side to generate the HTML upfront? Or is it being generated on the client?
> experience when you inevitably have to add some interactivity is going to be 100x better than vanilla JS
I don't believe this can be quantified. How are you measuring DX improvements? Are you also able to continue to measure these improvements as your application/codebase scales?
It's certainly possible to generate the HTML up-front. Tooling like Next.js even sets things up so it's easier to render the HTML for the first page load on the server than to push it to the client.
I have a website. It's not great, it doesn't get much traffic, but it's mine :). If you disable JS, it works: links are links, HTML is loaded for each page view. If you enable JS, it still works: links will trigger a re-render, the address bar updates, all the nice "website" stuff.
If I write enough (and people read enough) then enabling JS also brings performance benefits: yes, you have to load ~100kB of JS. But each page load is 4kB smaller because it doesn't contain any boilerplate.
Obviously I could generate the HTML any way I choose, but doing it in React is nice and composable.
If you really want to, you can have a React app that is just static templates with no interactivity, plus a simple Node server that just calls renderToString, and all of a sudden React is just a backend templating framework. If you want to get really fancy, you can then re-render specific components on the client side without re-rendering the entire page. You don't need Next.js to do this either; it's very simple and straightforward, and it lets you use an entirely frontend toolchain for everything.
> I am lacking empathy for those who are apparently so hooked up to the here-and-now
A large number of those people are very young, at an age where you don't really pick your options solely based on their super-long-term consequences.
Most people are going to be "stupid" in their early adulthood, failing and adjusting is a big part of it. Unfortunately, some of those decisions will stick more than others and sex work is very sticky (zing).
>A large amount of those people are very young, at an age where you don't really pick your options solely on their super long term consequences.
And they will continue to be if there are never any consequences.
Stop bailing people out of problems they make for themselves and people will start learning to not make those problems.
Human beings are not stupid machines who watch others put their hands in the fire and get burned, then put their own hands in the fire and get burned, and then keep doing it over and over again.
Most will stop when they see others get burned, others still will stop when they get burned, and a small minority will stop once there is no hand left to burn.
There is a reason why many parts of the world will ticket you for not wearing your seatbelt. There is a reason you cannot (could not? crypto changed a lot) do advanced stock trading without a license. Why gambling is regulated, etc.
We don't want people to hurt themselves, because we have humanity and because they become a drain on society.
I find it hard to be that black and white with phenomena like OF, which emerge from a mix of societal and technological advancement.
There are grey zones and not everyone is fortunate enough to be taught to be responsible. Not everyone can go through life without feeling desperate and resort to doing things they would not be proud of.
We should try to educate and protect people instead of pointing internet fingers at them.
> Most will stop when they see others get burned, others still will stop when they get burned, and a small minority will stop once there is no hand left to burn.
And this explains how drug problems solved themselves hundreds of years ago. Good thing we've all decided to stop doing debilitating drugs after seeing the consequences of addiction in the past!
So, if young people are unable to take responsibility for their actions, we will need to raise the age of majority... I am sorry, adults are adults are adults. Either you make your own decisions or you don't.
Unironically the former. It's weird that we have at the same time reduced the legal age of adulthood while simultaneously extending the actual period of adolescence and dependence for the average young person. A century ago, you started working for a wage at 14 and didn't get legal independence until 21. Now you get legal independence at 18 but might be in full-time education until you are 25 (with a master's).
Yah, my mum was helping out with the family business around age 5. It's kind of crazy to think how quickly it's swung from having that kind of responsibility thrust on you so young, to now, where people in their mid-20s may still be in their "incubatory" period.
Strongly typed languages have a higher barrier to entry and require an engineering mindset. That's anecdotal, but if I think of the exceptionally competent people I've worked with on JS projects, all of them have spent time building, and advocating for, properly typed codebases.
The other camp "just hates it" because it "slows them down", as it seems they spend most of their time fighting the types but never get to the point where you get that huge return on investment.
I don't know, the ergonomics of the type system are not the same in all languages. A toolchain that reports early, useful feedback in non-cryptic language will certainly gain adoption quickly.
Unfortunately, most of the time the result is at best needlessly cryptic, with zero thought given to simplicity of use. Ruby has a reputation for taking care of this.
I've been working with types and malloc for years in C, then enter Java. No need to malloc anymore and everything worked. Great, goodbye C. Then enter Ruby, no need to write types anymore and everything worked. Great, goodbye Java.
That's the big picture. Looking into the details, I've also worked with Perl, JavaScript, Python, plus many other less common languages. I always had a preference for languages that hide complexity away from me.
Code completion really doesn't matter much to me. I've been working for maybe ten years with an emacs package that remembered words and attempted to autocomplete with the most used suffix. It worked surprisingly well.
In my professional experience, types are a godsend for large, and/or long-running projects that have been worked on by many people over the years. They reduce complexity by informing you up-front of the shape of the data and/or objects that a function demands and produces.
If the type-checking system is decent, they also automatically catch a whole class of problems that will only show up at runtime, and may take years to surface. (Ask me about the error-handling code that detonated because it contained a typo in a Ruby class name, which was only discovered three years after it was introduced... when that code was actually exercised during an... exciting outage of a system it depended on.)
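A minimal sketch of how a typo like that hides (all names here are invented, not the actual incident): Ruby only resolves the constant when the rescue branch runs, so nothing fails until the error path is actually exercised.

```ruby
class FallbackHandler
  def run
    "recovered"
  end
end

def fetch_with_fallback
  raise "upstream down"       # simulate the outage
rescue StandardError
  FallbackHanlder.new.run     # typo! NameError raised only when this line runs
end

begin
  fetch_with_fallback
rescue NameError => e
  puts e.message              # mentions the uninitialized constant FallbackHanlder
end
```

A static type checker (or even a mere compile step) would flag the misspelled constant immediately instead of three years later.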
> types are a godsend for large, and/or long-running projects
Agreed. But that doesn't mean that every language needs to be statically-typed, which seems to be where we're heading nowadays.
IMO large and/or long-running projects should be written in languages with sound static type systems, not scripting languages with types tacked on. Conversely, I often work on projects which are neither large nor long-running - for those, a dynamically-typed scripting language works perfectly well.
> a typo in a Ruby class name, which was only discovered three years after it was introduced
So the code containing this typo was never tested? That's asking for trouble even if you have static typing.
> So the code containing this typo was never tested?
The code absolutely was tested. However, (obviously) not every possible path through the code was tested.
Given a long-enough timeline, you will NEVER remember to test every little thing. Given sufficient code complexity, it can be next to impossible to actually exercise every code path with hand-written tests.
That's one of the troubles with large projects written in scripting languages like Ruby... you have to write an assload of tests to replace what you get for free in languages (even "loosely"-typed languages like Erlang) that have pre-runtime type-checking (whether or not it's provided by a compiler).
> Conversely, I often work on projects which are neither large nor long-running - for those, a dynamically-typed scripting language works perfectly well.
Oh yeah, for small things such languages are A-OK. I 1000% agree with that. The big problem (that you may never encounter because I bet that you're smarter than the average unending parade of Project Managers) is how often small projects are forced into becoming big projects, and then huge projects as time goes on.
I’d add to this that there’s a good reason the testing culture is so strong in ruby: you absolutely need to write tests for every last little 2-line method to make sure you did it right. With no compilation or type checking step, there’s no other way to know if your code is even valid.
Which means that IME a huge number of tests in my ruby code were doing things that a type checker does automatically: make sure that passing nil makes it return nil, making sure that passing an array does the right thing vs a string, etc etc etc.
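A hedged sketch of what those "type-shaped" tests look like (the method and its contract are invented for illustration): nil in means nil out, and arrays vs. strings each do the right thing — checks a type system would make redundant.

```ruby
def word_lengths(input)
  return nil if input.nil?
  Array(input).map { |w| w.to_s.length }
end

# the hand-written checks a type checker would do automatically
raise unless word_lengths(nil).nil?
raise unless word_lengths(%w[a bb]) == [1, 2]
raise unless word_lengths("abc") == [3]  # a lone String is wrapped, not split
```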
I have very vivid memories of working in a large rails code base and having no idea whether my change would work until I actually exercised the code and realized I was completely messing up the contract that some method expected, so I had to slow down and write tests to validate my assumptions.
Fast forward to now, I work in a large Rust code base, and it seems like 99% of the time, if my code compiles, it works flawlessly the first time. So I concentrate on writing integration/functional tests, which give me a lot more bang for my buck than having to write hundreds of little unit tests. (I still write some unit tests, but the need for it is much less… the vast majority of things you’d write tests for in Ruby are things the type and borrow checkers handle for you in Rust.)
> ...there’s a good reason the testing culture is so strong in ruby:
"Strong" as in "You easily burn 10x more time writing tests as you do code... and not because it's difficult to think through how to write good tests"? If so, yes.
"Good"? Hell, no! That's a bad reason.
> ...a huge number of tests in my ruby code were doing things that a type checker does automatically...
The folks I work with demand that we don't write these tests, so they don't get written. Guess how often code detonates in production because of things a typechecker would have caught... despite the enormous volume of test code.
To be crystal clear, I totally agree with your statements in this comment. I started my "career" with C++ and I'm so damn glad I did. Had I started with Ruby and Rails, I would have come to the conclusion I was far too damn stupid for this field and left to become a lawyer.
“Good” in this context didn’t mean “this is a good situation”, but rather “if you’re using ruby, it would be very bad if you didn’t write tests”, and “bad if you don’t” can be roughly reworded as “good if you do”, at least if we’re presupposing that you have to be writing Ruby.
I have a small software company, under 10 employees. We make B2B applications and focus 100% of our efforts on understanding our customers' needs and the quality of our product. No marketing, no "incentives", nothing. Just honest work and creativity.
I've done consulting here and there, and going full time I could easily clear half a mil a year. I make a LOT less than that.
Our customers are happy. They are warm and praise us. They are forgiving when we make mistakes because the trust is high. They tell us about all the trickery they face from most other companies all the time.
It is a true win-win. I do not sleep on a pile of cash, but I sleep very well nonetheless.