Generally continental Europe -- except the Scandinavian countries -- makes it relatively easy to get long-term residency and even a passport. The UK is considerably more difficult, but very easy to work in for an extended period of time (intra-company transfer visas etc).
Scandinavia had Sweden as the exception until yesterday's vote in the Riksdag. I moved here 10+ years ago, got my permanent residence after 4 years and citizenship after 5.
The rules have now changed: citizenship after 8 years becomes law on June 6th, along with language and cultural tests that weren't required before.
Continental Europe used to vary: Germany was stricter, with 8 years to citizenship, while the path to permanent residence depended on work and language skills.
Can you give some specific examples? I would say that, unless you have some additional qualifications (European ancestors, EU spouse and similar), the majority of EU countries actually don't make it that easy. Of course, it depends on your definition of "relatively easy".
Yes, it's never a trivial process, so a lot of work is certainly being performed by that "relatively": I have extensive personal experience with the UK immigration process and know of the US equivalent through the experiences of former colleagues back home. France, for example, is five years to a passport/naturalization. Germany is three years of skilled work to indefinite leave to remain. The Netherlands is five years to indefinite leave to remain. None of those examples require European ancestors, an EU spouse etc, but it's generally easier if you have a university degree and work in the various fields most readers of Hacker News do.
Well, naturalisation in most EU countries involves some other requirements: language knowledge (you'd have to pass an exam), a civic/constitutional exam or integration test, and naturally no criminal record, etc. Some countries are also quite restrictive on dual citizenship (i.e. they don't allow it for foreigners, meaning you would need to renounce your original citizenship).
Visas and residence permits are, of course, easier.
If you're living and working in California or New York, as I suspect a large number of Hacker News readers are, EU taxes on income are generally not prohibitively more expensive, especially relative to the increase in quality of life. 'Native' salaries are considerably lower, however, and the tax treatment of equity-based compensation is very much not in favor of employees...
In my niece's (relatively rural) US high school class, several students decided to attend university in Europe with no family ties to the countries in question. It was pretty common in my generation to see, as you note, kids moving to Berlin etc after their studies, but going for university strikes me as relatively new. Anecdata, but it seems supported by some of the public numbers [0].
Yeah, it's been a relatively rational decision for a while now, but one I personally hadn't really seen taken until recently. Again, all anecdata, but I am curious to track how much of a trend it really is.
I agree with your fundamental point. However, I don't think steady erosion of mastery is the only way these next years can go, even if it looks the most likely at present. Supposing LLMs or whatever future architecture surpasses even the greatest human minds in intelligence, why is that situation fundamentally different from living in a world with Einstein, i.e. a level of mastery I'll never reach before the end of my life? As one interested in the depths, I prefer to live in a world with peaks ever greater than myself: it doesn't prevent me from going as deep as I can, inspired by where they've reached, and doing the things that matter to me.
Turing's view, in fact, is similar: "There would be great opposition [to AI] from the intellectuals [read programmers in the context of this thread] who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do, i.e. in trying to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits."
[0] Thomas Bernhard's The Loser is a fantastic account of the opposite standpoint: that of the second-best piano student, who cannot stand existing in a world with Glenn Gould.
Amazon's free cash flow rises year over year (apart from the post-COVID period) [0], while Walmart's doesn't [1], and price multiples are largely determined by expected FCF over time, not directly by revenue/EBITDA. FCF rain or shine maps roughly onto the ability to pay out dividends/buybacks, which is what determines the value of an equity in capital markets ("the discounted present value of a company's future cash flows").
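To make the valuation mechanics concrete, here's a toy sketch of that discounting in Scala (the cash flow figures and the 10% rate are invented purely for illustration):

    // Toy discounted-cash-flow sketch: the present value of an equity as the
    // sum of projected free cash flows, each discounted back to today.
    object DcfSketch extends App {
      val discountRate = 0.10                                   // assumed 10%
      val projectedFcf = Seq(100.0, 112.0, 125.0, 140.0, 157.0) // $bn, years 1..5

      val presentValue = projectedFcf.zipWithIndex.map {
        case (fcf, year) => fcf / math.pow(1 + discountRate, year + 1)
      }.sum

      println(f"PV of five years of FCF: $$${presentValue}%.1fbn")
      // A profile that grows every year (Amazon-like) compounds into a much
      // larger PV than a flat one, which is why multiples track expected FCF.
    }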
Honestly, I don't think it's irrational: the car industry is just horrible from a business perspective, which is why Tesla had to be financed for so long by crypto scams and why most investors wouldn't touch it. Historically (to put it briefly and crudely), the business was always a debt-backed gamble on overproduction, hoping you could expand forever globally without competition (Ford) or into new market segments through financing (GM).
It's paywalled unfortunately, but [1] is an illustrative Financial Times article discussing car manufacturer behavior in relation to Covid shutdowns and strikes. Many firms found the manufacturing shutdowns to be a boon: the winning strategy was to accept the shutdown as a cost cut and just raise prices on existing inventory, yielding above-average financial performance.
My sense is that Tesla is now just taking that a step further by getting rid of their Fordist aspirations and applying the unarguably successful Apple model to the automotive industry. They don't want to mass produce cars and hope for an X% conversion rate to software and services over time: they literally don't want customers who aren't able or willing to pay for recurring software services. Software is where free cash flow comes from, and free cash flow is where dividends/buybacks come from, which determines the value of an equity. That, of course, is also why we get paid well.
I end with the disclaimer that obviously I don't believe the world should be meticulously and exclusively organized for the production of free cash flow, but I do think it's important to understand the logic.
I don't know what it looks like on the ground now, but Scala was the de facto language of data infrastructure across the post-Twitter world of SV late-stage/growth startups. In large part this was because those companies were populated by former members of the Twitter data team, so the language was familiar, but also because there was so much open source tooling by that point. ML teams largely wrote/write Python, and product teams write JS or whatever the backend language is, but data teams -- outside of Google and the pre-Twitter firms -- usually wrote Scala for Spark, Scalding etc in the 2012-2022ish era.
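For flavor, a minimal sketch of the kind of batch job those teams wrote by the hundreds (the paths, fields and Event schema here are invented, but the shape -- typed Dataset API over Parquet -- is the idiom):

    import org.apache.spark.sql.SparkSession

    // Hypothetical schema, for illustration only.
    case class Event(userId: String, eventType: String, durationMs: Long)

    object DailyEngagement {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("daily-engagement").getOrCreate()
        import spark.implicits._

        val events = spark.read.parquet("s3://logs/events/dt=2016-01-01").as[Event]

        events
          .filter(_.eventType == "session")
          .groupByKey(_.userId)
          .mapValues(_.durationMs)
          .reduceGroups(_ + _) // total session time per user
          .toDF("user_id", "total_duration_ms")
          .write.parquet("s3://warehouse/daily_engagement/dt=2016-01-01")

        spark.stop()
      }
    }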
I worked in Scala for most of my career and it was never hard to get a job on a growth-stage data team or, indeed, in finance and other data-intensive industries. I'm still shocked at how the language/community managed to throw away the position they had achieved. At first I was equally shocked to see the Saudi sovereign wealth fund investing in the language, but then I realized it was just 300k from the EU and everything made sense.
It's still my favorite language for getting things done, so I wouldn't be upset by a comeback, but I certainly don't expect one at this point.
Mostly the latter. Scala 3 is almost completely irrelevant to the big data space so far. Databricks took six years to upgrade their proprietary Spark runtime to Scala 2.13. Flink dropped the Scala API before even moving to 2.13. I don't know if Scio will seriously attempt the move to Scala 3. All of them suffer from the Twitter libraries being abandoned, which isn't insurmountable, but it's still an annoyance.
And I don't think it matters anymore. I predict that the JVM will eventually be out of the equation. We're already seeing query engines being replaced by proprietary or open source equivalents in C++ or Rust. Large scale distribution is less of a selling point with modern cloud computing. Do you really need 100 executors when you can get a bare metal instance with 192, 256 or 384 cores?
People want a dataframe API in Python because that's what the ML/DS/AI crowd knows. Queries and processing will be done in C++ or Rust, with little or even zero need for a distributed runtime. The JVM and Scala solve a problem that simply won't exist anymore.
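To put the single-big-box point in code terms, a minimal sketch (fake data; assumes the scala-parallel-collections module, which was split out of the stdlib in 2.13):

    import scala.collection.parallel.CollectionConverters._

    object SingleNodeSketch extends App {
      val cores = Runtime.getRuntime.availableProcessors
      // 10M rows of fake session durations; .par fans the aggregation
      // out across every core on the box -- no cluster, no executors.
      val durations = Array.tabulate(10000000)(i => (i % 3600).toDouble)
      val totalHours = durations.par.map(_ / 3600.0).sum
      println(f"aggregated ${durations.length} rows on $cores cores: $totalHours%.0f hours")
    }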
Yeah, this is certainly the correct take. There's an alternate timeline where the Scala community spent the peak years making it a better language for numeric computing/ML rather than building the Nth category-theoretic framework, but here we are. At a job almost a decade ago, we made some progress on an open source dataframe library (and, unfortunately, a proprietary data visualization one) for Scala, but we didn't get far enough before the company closed and the project died [1].
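To give a feel for what that alternate timeline might have looked like, here's a purely hypothetical sketch (not the library from [1]) of the column-oriented core such a dataframe library sits on -- dense primitive arrays underneath, a typed API on top:

    // Hypothetical names throughout; a sketch, not a real library.
    final case class NumCol(name: String, values: Array[Double]) {
      def +(other: NumCol): NumCol =
        NumCol(s"$name+${other.name}",
          values.zip(other.values).map { case (a, b) => a + b })
      def mean: Double = values.sum / values.length
    }

    final case class Frame(cols: Map[String, NumCol]) {
      def apply(name: String): NumCol = cols(name)
      def withCol(c: NumCol): Frame = Frame(cols + (c.name -> c))
    }

    object FrameDemo extends App {
      val df = Frame(Map(
        "price" -> NumCol("price", Array(1.0, 2.0, 3.0)),
        "tax"   -> NumCol("tax",   Array(0.1, 0.2, 0.3))))
      val gross = df("price") + df("tax")
      println(f"mean gross price: ${gross.mean}%.2f") // 2.20
    }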
Still my favorite of the languages I've had the privilege to work with professionally over more than a decade. However, in this post-JVM world, I'm actually excited to see a lot more OCaml discussion on here lately. The Jane Street work on OxCaml is terrific after a long period of stagnation in the language, and I'm using it for most of my projects these days.
Yeah, I've lived in London, New York and San Francisco for work and the first had the lowest cost of living in absolute terms. Local developer salaries are, however, shockingly low outside of finance and consulting to US eyes.
The public transportation point is definitely key: London is so unspeakably large spatially, and it's all more or less well connected, that there isn't the same scarcity of commutable apartments as in NYC/SF. It wasn't uncommon for older colleagues to commute in from Kent or elsewhere in the English countryside -- and often their morning train wasn't much longer than my own.
It all seems to be debt financed, i.e. just a private equity model slightly specialized for tech. The "innovation" is that Bending Spoons has an in-house engineering team that they seem to try to keep constant in size while scaling it out across all the acquisitions. I hadn't looked into them much before, but https://www.colinkeeley.com/blog/bending-spoons-operating-ma... is an interesting report -- though not focused on the finance side.
Really fantastic work! Can't wait to play around with your library. I did a lot of work on this at a past job long ago, and the state of JS tooling was so inadequate at the time that we ended up building an in-house Scala visualization library to pre-render charts...
More directly relevant: I haven't looked at the D3 internals in a decade, but I wonder if it might be tractable to use your library as a GPU rendering engine for it. I guess the big question for the future of your project is whether you want to focus on the performance side of certain primitives or expand the library to encompass all the various chart types and customizations users might want. Probably that would be a different project entirely/a nightmare, but if feasible even for a subset of D3 you would get infinitely customizable charts "for free." https://github.com/d3/d3-shape might be a place to look.
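To illustrate what that pre-rendering amounted to (and what d3-shape's line generator does client-side), here's a minimal standalone sketch -- names hypothetical, not our actual library -- that turns a series into an SVG path string on the server:

    object PathPrerender extends App {
      // Scale a series into pixel space and emit an SVG path string,
      // the moral equivalent of d3.line() run ahead of time.
      def linePath(points: Seq[(Double, Double)], width: Double, height: Double): String = {
        val (xs, ys) = points.unzip
        val (xMin, xMax) = (xs.min, xs.max)
        val (yMin, yMax) = (ys.min, ys.max)
        def px(x: Double) = (x - xMin) / (xMax - xMin) * width
        def py(y: Double) = height - (y - yMin) / (yMax - yMin) * height // SVG y is flipped
        points.map { case (x, y) => f"${px(x)}%.1f,${py(y)}%.1f" }
          .mkString("M", " L", "")
      }

      val series = Seq((0.0, 1.0), (1.0, 3.0), (2.0, 2.0), (3.0, 5.0))
      println(s"""<path d="${linePath(series, 640, 480)}" fill="none" stroke="black"/>""")
    }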
In my past life, the most tedious aspect of building such a tool was how much charting standards and expectations differ across communities (data science, finance, economics, the natural sciences, etc). Don't get me started on finance's love of double y-axis charts... You're probably familiar with it, but https://www.amazon.com/Grammar-Graphics-Statistics-Computing... is fantastic if you continue down your own path chart-wise and are looking for inspiration.
Thanks - and great question about direction. My current thinking: Focus on performance-first primitives for the core library. The goal is "make fast charts easy" not "make every chart possible." There are already great libraries for infinite customization (D3, Observable Plot) - but they struggle at scale.
That said, the ECharts-style declarative API is intentionally designed to be "batteries included" for common cases. So it's a balance: the primitives are fast, but you get sensible defaults for the 80% use case without configuring everything. Double y-axis is a great example - that's on the roadmap because it's so common in finance and IoT dashboards. Same with annotations, reference lines, etc. Haven't read the Grammar of Graphics book but it's been on my list - I'll bump it up. And d3-shape is a great reference for the path generation patterns. Thanks for the pointers!
Question: What chart types or customization would be most valuable for your use cases?
Most of my use cases these days are for hobby projects, which I would bucket into the "data science"/"data journalism" category. I think this is the easiest audience to develop for, since people usually don't have any strict disciplinary norms apart from clean and sensible design. I mention double y-axes because in my own past library I stupidly assumed no sensible person would want such a chart -- only to have to rearchitect my rendering engine once I learned it was one of the most popular charts in finance.
That is, you're definitely developing the tool in a direction that I -- and, I think, most Hacker News readers -- will appreciate, and it sounds like you're already thinking about some of the most common "extravagances" (annotations, reference lines, double y-axes etc). As OP mentioned, I think there's a big need for more performant client-side graph visualization libraries, but that's really a different project. Last I looked, you're still essentially stuck with graphviz pre-rendering for large enough graphs...
Ha - the double y-axis story is exactly why I want to get it right. Better to build it in properly than bolt it on later.
"Data science/data journalism" is a great way to frame the target audience. Clean defaults, sensible design, fast enough that the tool disappears and you just see the data.
And yeah, graphviz keeps coming up in this thread - clearly a gap in the ecosystem. Might be a future project, but want to nail the 2D charting story first and foremost.
Thanks for the thoughtful feedback - this is exactly the kind of input that shapes the roadmap.