100% agree, and I definitely see this in the tech industry. It all begins and ends with psychological safety. Right now there’s job pressure in tech, which creates this toxic sense coming from management that they can fire anyone at any time because they don’t like you. It essentially fosters a culture of not rocking the boat or “pissing off the wrong person.” The result is that you keep your mouth shut or significantly risk being penalized on your annual performance review. Add inflation and the ever-rising cost of living, and for an individual contributor or even front-line management, the choice is very clear. This is obviously a recipe for catastrophe when you’re dealing with human lives.
When you’re a rocket scientist at NASA, you also have relatively few alternatives other than SpaceX or Boeing.
If you want to build a system of monolithic services and be locked into a 30-year-old waterfall development model, then Oracle is for you.
I’ve had this argument with several DBAs. They always claim “Oracle is the most performant.” While that may be technically true, they also tend to run a single massive instance that inevitably leads to a complete failure of the site under heavy load. Oracle is often deployed as the single point of failure, and I believe that is by design. The same problems can be solved with modern event-driven architectures, better caching, horizontal dynamic scaling, etc.
I had to explain this to some slightly younger colleagues recently. It's hard to believe now, but in ye olde days hardware was not as cheap and abundant as it is today. So you invested heavily in your database servers and, to justify the hardware and software cost, ran as many workloads as possible on them to spread the pain.
The same incentives are why many classic architectures from the 80s and 90s relied heavily on stored procedures. The database was the only place where certain data could be crunched in a performant way: middleware servers lacked the CPU and memory to chew through large datasets, and the network was more of a performance bottleneck.
> and be locked into a 30 year old waterfall development model,
Oracle switched from a waterfall development model to sprints years ago. They also switched from yearly to quarterly releases (for their Apps), which means they deliver a lot of features in a year.
> They also switched from yearly to quarterly releases (for their Apps) which means they deliver a lot of features in a year.
Without commenting on whether this is true of Oracle, that conclusion doesn't inherently follow from the premise. If I'm driving 60 miles per hour and recalculate that in miles per minute, it doesn't mean I'm going faster. Oracle could easily be delivering 1/8 of a year's worth of features every 1/4 of a year due to release-process overhead, for all I'd know.
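To make the arithmetic concrete (the feature counts below are made up purely for illustration):

```ruby
# Slicing the same annual output into quarterly releases changes nothing:
yearly    = 80.0                      # hypothetical features per year, one release
quarterly = Array.new(4, yearly / 4)  # four releases of 20 each
p quarterly.sum                       # => 80.0, identical annual throughput

# The failure mode described above: per-release overhead cuts each
# quarterly release down to 1/8 of a year's worth of features.
with_overhead = Array.new(4, yearly / 8)
p with_overhead.sum                   # => 40.0, slower despite more releases
```

More releases per year only means more features per year if the per-release output stays constant, which is exactly the unstated assumption.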
I was searching for jobs using it a while ago and it consumed 80 percent of my iPhone’s battery in under 40 minutes. It’s quite impressive. Not even the highest-end mobile games can do that.
Five Eyes makes it a bit tricky... but yeah, anything that isn't a direct data pipeline to the US government and three-letter agencies is a massive long-term win, in both security and the economy.
I don't think it is. I liked the simpler world we lived in, where you didn't have to worry about or even look at where a company was from.
But since this administration has started to threaten allies and keeps up this nonsensical trade-balance and tariffs argument (which never accounts for the bulk of what the US really exports: IT and financial services, neither of which is included in the trade-balance figures), you need to answer in some way.
And with tensions rising staying on US services is becoming a strategic risk.
> which never accounts for the very bulk of what US really exports: IT and financial services
Given the growing demand to move away from US services and towards European alternatives, I wonder what the US will look like in 10 years if this move gains significant momentum.
Largely agree, though some things are notably difficult in some languages. True concurrency, for example, didn’t come as naturally in Ruby because of the global interpreter lock. Of course there are third-party libs and workarounds. Newer versions of Ruby support it more natively, and as we’ve seen, Homebrew adopted it experimentally for a while and has made it the default relatively recently.
I can’t say that’s the only reason it’s slow of course. I’m on the “I don’t use it often enough for it to be a problem at all” side of the fence.
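For reference, the "more native" route mentioned above is Ractors; a minimal sketch, assuming Ruby 3.0+ where `Ractor` and `Ractor#take` exist (the summing workload is just a stand-in for CPU-bound work):

```ruby
# Each Ractor has its own lock, so CPU-bound work can run in parallel
# across cores instead of serializing on the global interpreter lock.
ractors = 4.times.map do |i|
  Ractor.new(i) do |n|        # state is passed in explicitly; a Ractor
    (1..100_000).sum + n      # cannot capture outer local variables
  end
end

results = ractors.map(&:take) # block until each Ractor finishes
p results.sort                # four independent sums, computed in parallel
```

Ractors are still flagged experimental and the API has been shifting between releases, so treat this as a sketch of the model rather than a stable recipe.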
* it’s purpose-built for mega-sized monorepo models like Google’s (the company that created it)
* it’s not at all beginner friendly; it’s a complex mishmash of three separate constructs, each substantial in its own right (build files, workspace setup, Starlark), which makes it slow to ramp up new engineers.
* even simple projects require a ton of setup
* requires a dedicated remote cache to be performant, which is also not trivial to configure
* requires deep Bazel knowledge to troubleshoot its verbose, unclear error logs.
Because of all that, it’s extremely painful to use for anything small/medium in scale.
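Even a toy project touches all three of those constructs at once. A rough sketch of the shape (the file names follow Bazel's conventions; the macro and target contents are hypothetical):

```starlark
# MODULE.bazel — the workspace setup
module(name = "toy_app", version = "0.1.0")

# defs.bzl — a Starlark macro (hypothetical helper)
def toy_binary(name, srcs):
    native.cc_binary(name = name, srcs = srcs)

# BUILD — the build file
load(":defs.bzl", "toy_binary")

toy_binary(
    name = "app",
    srcs = ["app.cc"],
)
```

So before writing a line of application code, a newcomer is already reading three different file types with three different sets of rules.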