28 billion data points in 50 gigabytes is not an impressive figure for a time series workload: that's nearly 2 bytes per data point, and many time series databases achieve 1 byte or less per point.
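A quick back-of-the-envelope check of the bytes-per-point figure (assuming 50 GiB; the exact ratio shifts slightly if the original poster meant decimal gigabytes):

```python
# Rough check: 28 billion data points stored in 50 GiB.
data_points = 28_000_000_000       # 28 billion
size_bytes = 50 * 1024**3          # 50 GiB

bytes_per_point = size_bytes / data_points
print(f"{bytes_per_point:.2f} bytes per data point")  # ~1.92
```

Either way it lands just under 2 bytes per point, which is what the comparison above is about.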
It's not a point; it's an entire row of data with dozens of columns and JSON.
Even if the data took twice as much space, it would still be worth it to have everything in a single data warehouse with easy joins and the full expressiveness of SQL.
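A minimal sketch of the "easy joins" point, using SQLite as a stand-in for a warehouse (table and column names are made up for illustration): once metrics and metadata live in one SQL store, a cross-system lookup becomes a plain join.

```python
import sqlite3

# Hypothetical schema: raw metrics plus a host-ownership table in one store.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE metrics (host TEXT, ts INTEGER, cpu REAL);
    CREATE TABLE hosts   (host TEXT, team TEXT);
    INSERT INTO metrics VALUES ('web-1', 1, 0.9), ('db-1', 1, 0.3);
    INSERT INTO hosts   VALUES ('web-1', 'frontend'), ('db-1', 'storage');
""")

# Average CPU per owning team: one join, no second system involved.
rows = con.execute("""
    SELECT h.team, AVG(m.cpu)
    FROM metrics m JOIN hosts h ON m.host = h.host
    GROUP BY h.team
    ORDER BY h.team
""").fetchall()
print(rows)  # [('frontend', 0.9), ('storage', 0.3)]
```

In a dedicated time series store this kind of enrichment typically means exporting data or maintaining a side lookup.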
Because in most implementations labels (or dimensions) are not stored as values but as part of the row identifier. That forces a scan of the entire key space, checking every row key to see whether it matches the lookup.
Storing labels in a row-based system (like SQL) lets you query by value rather than by column or key name, which takes advantage of all the usual optimizations and indexes and is a lot faster.
That said, there is nothing forbidding someone from doing both: DalmatinerDB, for example, uses a column-based format for metric values but a row-based store (PostgreSQL) for dimensions.
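A minimal sketch of the value-indexed-labels idea, again using SQLite as a stand-in (the schema is invented for illustration, not DalmatinerDB's actual one): labels stored as plain values can be covered by an ordinary index, so a lookup filters by value instead of scanning every row key.

```python
import sqlite3

# Hypothetical dimension table: one (label, value) pair per metric per label.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE dimensions (
        metric_id INTEGER,
        label     TEXT,
        value     TEXT
    )
""")
# The index is what turns "find metrics with region=eu" into a seek,
# not a scan over every row identifier.
con.execute("CREATE INDEX idx_label_value ON dimensions (label, value)")
con.executemany(
    "INSERT INTO dimensions VALUES (?, ?, ?)",
    [(1, "host", "web-1"), (1, "region", "eu"),
     (2, "host", "web-2"), (2, "region", "us")],
)

rows = con.execute(
    "SELECT metric_id FROM dimensions WHERE label = ? AND value = ?",
    ("region", "eu"),
).fetchall()
print(rows)  # [(1,)]
```

The matching metric IDs can then be used to fetch the actual values from a separate column-oriented store, which is the hybrid split described above.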
I helped create that spreadsheet. We tried to be as fair as possible and, whenever possible, to link reproducible, verifiable benchmarks (but then again, all benchmarks are lies ;).
That said, Lasp PG is being used as the underlying infrastructure for erleans, the Erlang port of Microsoft Orleans.
We have several companies using partisan, our scalable replacement for Distributed Erlang.
Additionally, our previous work has been inspirational for gen_stage in Elixir (Lasp's gen_flow) and Phoenix channels (riak_pg, later replaced by lasp_pg). There are several papers outlining our work and designs at the Erlang Workshop (co-located with ICFP).
I'm one of the maintainers of tremor; happy to get together and talk about Rust event processing if you ever want to :)