It sounds like there were pretty broad layoffs that impacted a lot more than just a focus on enterprise contracts. It wasn't "just" a few enterprise salespeople. Engineering may indeed have been the least impacted, but this sounds like the biggest round of layoffs to hit Heroku since its inception, not just some right-sizing after overhiring.
pg_query was created by the pganalyze team for their own purposes, I believe initially for features like index recommendation tooling, but it was planned as open source from the start. It is indeed a very high-quality project, with wrappers around the underlying C implementation available for a number of languages[1].
Citus works really well *if* you have your schema well defined and slightly denormalized (meaning you have the shard key materialized on every table), and you ensure you're always joining on that key as part of querying. For a lot of existing applications that were not designed with this in mind, it can take several months of database and application code changes to get things into shape to work with Citus.
If you're designing from scratch and make it work with Citus from the start (specifically for a multi-tenant/SaaS sharded app), it can make scaling seem a bit magical.
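For a concrete sense of what that denormalization looks like, here's a minimal sketch (the table and column names are hypothetical; `create_distributed_table` is the actual Citus function):

```sql
-- Hypothetical multi-tenant schema: the shard key (tenant_id) is
-- materialized on every table, even where it looks redundant.
CREATE TABLE orders (
    tenant_id bigint NOT NULL,
    order_id  bigint NOT NULL,
    total     numeric,
    PRIMARY KEY (tenant_id, order_id)
);

CREATE TABLE order_items (
    tenant_id bigint NOT NULL,
    order_id  bigint NOT NULL,
    item_id   bigint NOT NULL,
    PRIMARY KEY (tenant_id, order_id, item_id)
);

-- Distribute both tables on the same key so related rows are co-located.
SELECT create_distributed_table('orders', 'tenant_id');
SELECT create_distributed_table('order_items', 'tenant_id');

-- Queries always filter and join on the shard key, so each query can be
-- routed to a single shard instead of fanning out across the cluster.
SELECT o.order_id, count(*)
FROM orders o
JOIN order_items i
  ON i.tenant_id = o.tenant_id
 AND i.order_id  = o.order_id
WHERE o.tenant_id = 42
GROUP BY o.order_id;
```

The retrofit pain mentioned above is mostly the work of adding that `tenant_id` column everywhere and threading it through every join and foreign key.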
Probably just as useful is the overview of what PgDog is, from their docs[1]: "PgDog is a sharder, connection pooler and load balancer for PostgreSQL. Written in Rust, PgDog is fast, reliable and scales databases horizontally without requiring changes to application code."
I appreciate the idea of this a lot but am a bit skeptical.
The good thing is this would at least shine a better light on all those that "claim" to be Postgres but really have little to nothing to do with it. Overwhelmingly, people support the wire protocol despite being a completely separate database, because Postgres is already so universal that it'd be a huge investment to recreate that ecosystem of language drivers and everything else around it.
The reality is that even with "wire protocol" support there are varying levels of compatibility, depending on what you're trying to do.
Then when you get down to functionality, it could be "we support the Postgres data types"... well, except this one or that one. That's all fine and good until a user is surprised two years into building an application.
Even the notion of "we support all Postgres extensions" falls apart: not all Postgres extensions work together; some take hooks and change queries that other extensions want to modify for themselves.
Having worked with Postgres and managed Postgres for a very long time, my view is: Postgres is Postgres. There are extensions that modify Postgres, there are forked versions of Postgres, and things that are "Postgres compatible" simply aren't Postgres.
It feels very disingenuous to say "Postgres compatible" and have this as a missing feature set. I'm sure they'd quickly argue it's wire compatibility, but even then it's a slippery slope and wire compatible is left open to however the person wants to interpret it.
There is no 'standard' or 'spec' for what makes something Postgres wire compatible.
This feels like a strong overreach on the marketing front, leveraging the love people have for Postgres to help boost what they've built. That is not to say there isn't hard, quality engineering in here, but slapping "Postgres compatible" on it feels lazy at best.
> I'm sure they'd quickly argue it's wire compatibility, but even then it's a slippery slope and wire compatible is left open to however the person wants to interpret it.
I actually think that they'd argue they intend to close the feature gap for full Postgres semantics over time. Indeed their marketing was a bit wishful, but on Bluesky, Marc Brooker (one of the developers on the project) said they reused the parser, planner, and optimizer from Postgres: https://bsky.app/profile/marcbrooker.bsky.social/post/3lcghj...
That means they actually have a very good shot at approaching reasonably full Postgres compatibility (at a SQL semantics level, not just at the wire protocol level) over time.
This alone wouldn't be a full replacement. We do have a full product that does that, with customers seeing great performance in production. Crunchy Bridge for Analytics does something similar by embedding DuckDB inside Postgres, though for users that's largely an implementation detail. We support Iceberg as well and have a lot more coming, basically to allow for seamless analytics on Postgres: building on what Postgres is good at, Iceberg for storage, and DuckDB for vectorized execution.
That isn't fully open source at this time, but it has been production grade for some time. This was one piece that makes getting there easier for folks, and it felt like a good standalone bit to open source and share with the broader community. We can also see where this by itself makes sense for certain use cases. As you sort of point out, if you had time-series partitioned data and leveraged partman for new partitions along with pg_cron (which this same set of people authored), you could automatically archive old partitions to Parquet but still have them around for analysis if needed.
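As a rough sketch of that archiving workflow (the job name, partition table, and S3 path here are hypothetical; `cron.schedule` and `run_maintenance` are the real pg_cron and pg_partman entry points, and the `COPY ... WITH (format 'parquet')` form assumes the pg_parquet extension is installed):

```sql
-- pg_partman creates and detaches time-based partitions on a schedule,
-- driven by a nightly pg_cron job.
SELECT cron.schedule(
    'partition-maintenance',
    '0 3 * * *',                        -- daily at 03:00
    $$SELECT partman.run_maintenance()$$
);

-- A detached old partition can then be archived to Parquet on object
-- storage with pg_parquet, keeping it queryable for later analysis.
COPY metrics_2024_01
  TO 's3://my-archive-bucket/metrics_2024_01.parquet'
  WITH (format 'parquet');
```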