Hacker News | CSDude's comments

A major use case for LocalStack is CI/CD.

When you're running hundreds of integration test suites per day in CI pipelines, the free tier is irrelevant. You need fast, deterministic, isolated environments that spin up and tear down in seconds, not real AWS calls that introduce network latency, eventual consistency flakiness, rate limits, and costs that compound with every merge request.

It'd be great to just use AWS, but in practice it doesn't happen. Even if billing doesn't bite you, rate limits plus the lack of any notion of namespacing will hit you very quickly in CI. It's also not practical to give every dev their own AWS account; I did it with 200 people and it was OK, but it always caused management pain. The free tier also doesn't cover organizations.
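One hypothetical workaround for the namespacing problem is to prefix every resource with a per-pipeline identifier so parallel runs in a shared account don't collide. A minimal sketch (the env var names are GitLab-style placeholders, not from any particular setup):

```python
import os
import re


def ci_resource_name(base: str) -> str:
    """Prefix a resource name with branch + pipeline id so parallel CI
    runs sharing one account don't collide. Env var names are
    GitLab-style placeholders; adapt them to your CI system."""
    branch = os.environ.get("CI_COMMIT_REF_SLUG", "local")
    run_id = os.environ.get("CI_PIPELINE_ID", "0")
    name = f"{branch}-{run_id}-{base}".lower()
    # Keep to S3-bucket-style constraints: lowercase alphanumerics and dashes.
    return re.sub(r"[^a-z0-9-]", "-", name)[:63]
```

Teardown then becomes "delete everything with this prefix," which is the isolation property real AWS doesn't give you for free.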

> they MUST learn that there are no hard spend limits, and the only way to actually learn it, is to be bitten by it as early as possible

This is a bizarre take. "The best way to learn fire safety is to get burned." You can understand AWS billing without treating surprise charges as a rite of passage.


All my vertical videos in iCloud show up cropped horizontal for some reason. If I go to edit, I see the whole video. I really do not want to trust any cloud provider to maintain my years of archived family photos and videos. Glad things like this exist. I just need properly date-foldered files with no duplicates. Is that so hard?


Sounds like the contact sheet view is just using square preview thumbnails?


All my vertical videos in iCloud show up cropped horizontal for some reason.

Turn your phone? /ducks


It allows you to mock the whole universe, so it becomes a hammer instead of encouraging nicely designed functions and interfaces.


I had a similar experiment ~10yr ago, see relevant discussion https://news.ycombinator.com/item?id=11064694

And updated domain: https://mustafaakin.dev/posts/2016-02-08-writing-my-own-init...


Interesting ... Do you still maintain the site?


I used Xmonad for a while, then switched to awesomewm and used it for years. It was good for using space efficiently on a 1366x768 screen.


Imagine being a world-class F1 driver and still having to upload your CV somewhere.


A couple of weeks ago Verstappen raced in an "advanced amateur" competition in Germany - he had to be "trained" by an official instructor in a restricted car because he hadn't raced there before.

I imagine the instructor "What could I teach Verstappen now..."


DuckDB has had an embedded one for a while now and it's great. I get the appeal of yours, but this one is much easier to use for the same cases:

https://duckdb.org/2025/03/12/duckdb-ui


This is not a self-hosted one, though. You can't use the default UI offline, and you can't guarantee data safety.


It's very weird they don't offer it by default, but there are workarounds.

(You can use it offline)

https://github.com/duckdb/duckdb-ui/issues/62


Some of the comments on that thread are surprising. Are people not aware that software can be bundled in such a way as to run on machines without internet access?


The background is that this UI is the MotherDuck UI for their cloud SaaS app. MotherDuck is a VC-backed DuckDB SaaS company, not to be confused with DuckDB Labs or the DuckDB Foundation.

MotherDuck decided to take their web app UI and make it a locally usable extension via DuckDB. However, as noted in that thread, the architecture is a bit odd: the actual page loads from MotherDuck's servers once the extension is running (hence the online requirement).

I don't think it's intentionally malicious or bad design or anything, just how this extension came about (and it sounds like they're fixing it).

disclaimer: I do know and actively work with the MotherDuck folks, I’ve also worked w/ DuckDB Labs in the past


Thanks, any recommendations on where to find the best information reference/resource for DuckDB-Wasm?


> MotherDuck is a VC-backed DuckDB SaaS company, not to be confused with DuckDB Labs or the DuckDB foundation

Separate entities, but cooperative/comprised of many overlapping people, right?


Afaik no overlapping people


Anyone know if there is a similar self-hosted / run-local option?


`duckdb -ui` launches a local server bound to 127.0.0.1


Doesn't do charts, though (meaning: it does statistical histograms on your columns, but no custom charts like OP's software does).


It's very opinionated, but many people find it okay. And it's hard to install Arch successfully. Compared to Ubuntu's, Arch's package manager (combined with the AUR) is great.

I use every possible opportunity to say "Fuck Ubuntu Snaps".


>And it's hard to install Arch successfully.

archinstall. You can even select a DE in it


I only learned it with Omarchy after all of these years :(


Blanket statements like this miss the point. Not all data is waste. Especially high-cardinality, non-sampled traces. On a 4-core ClickHouse node, we handled millions of spans per minute. Even short retention windows provided critical visibility for debugging and analysis.

Sure, we should cut waste, but compression exists for a reason. Dropping valuable observability data to save space is usually shortsighted.

And storage isn't the bottleneck it used to be. Tiered storage with S3 or similar backends is cheap and lets you keep full-fidelity data without breaking the budget.
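As a toy illustration of why dropping data is shortsighted (the field names and figures below are made up, not from the original setup), even generic compression recovers much of the redundancy in structured span data:

```python
import json
import zlib

# Generate a batch of repetitive JSON-encoded spans, the kind of
# high-redundancy payload observability pipelines ship constantly.
spans = [
    {
        "trace_id": f"{i:032x}",
        "service": "checkout",
        "name": "db.query",
        "duration_us": 1200 + i % 7,
    }
    for i in range(10_000)
]
raw = "\n".join(json.dumps(s) for s in spans).encode()
packed = zlib.compress(raw, level=9)
ratio = len(raw) / len(packed)
```

Columnar stores like ClickHouse do far better still by compressing each field separately, which is why "just keep everything, compressed" is often viable.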


I agree with both you and the person you're replying to, but...

My centrist take is that data can be represented wastefully, which is often ignored.

Most "wide" log formats are implemented... naively. Literally just JSON REST APIs or the equivalent.

Years ago I did some experiments where I captured every single metric Windows Server emits every second.

That's about 15K metrics, down to dozens of metrics per process, per disk, per everything!

There is a poorly documented API for grabbing everything ('*') as a binary blob of 64-bit counters. My trick was to keep the previous such blob and simply take the binary difference. This set most values to zero, so a trivial run-length encoding (RLE) reduced a few hundred KB to a few hundred bytes.

Collect an hour of that, compress, and you can store per-second metrics collected over a month for thousands of servers in a few terabytes. Then you can apply a simple "transpose" transformation to turn this into a bunch of columns and get 1000:1 compression ratios. The data just... crunches down into gigabytes that can be queried and graphed in real time.
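A minimal sketch of the diff-then-RLE step (my reconstruction in Python, not the original code; the encoding format is invented for illustration):

```python
def delta_rle(prev: bytes, curr: bytes) -> bytes:
    """XOR the new blob against the previous one, then run-length
    encode the zero runs; unchanged counters collapse to almost nothing.
    Format (invented for this sketch): a 0x00 byte followed by a 2-byte
    big-endian run length marks a zero run; any other byte is a literal."""
    diff = bytes(a ^ b for a, b in zip(prev, curr))
    out = bytearray()
    i = 0
    while i < len(diff):
        if diff[i] == 0:
            run = 0
            while i < len(diff) and diff[i] == 0 and run < 0xFFFF:
                run += 1
                i += 1
            out += b"\x00" + run.to_bytes(2, "big")
        else:
            out.append(diff[i])
            i += 1
    return bytes(out)


def rle_undelta(prev: bytes, enc: bytes) -> bytes:
    """Invert delta_rle: expand zero runs, then XOR against prev."""
    diff = bytearray()
    i = 0
    while i < len(enc):
        if enc[i] == 0:
            diff += b"\x00" * int.from_bytes(enc[i + 1:i + 3], "big")
            i += 3
        else:
            diff.append(enc[i])
            i += 1
    return bytes(a ^ b for a, b in zip(prev, diff))
```

Because consecutive samples of thousands of counters barely change, the XOR is almost all zeros and the encoded frame shrinks by orders of magnitude.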

I've experimented with Open Telemetry, and its flagrantly wasteful data representations make me depressed.

Why must everything be JSON!?


I think Prometheus works similarly to this, with some other tricks like compressing metric names.

OTEL can do gRPC, and a storage backend can encode that however it wants. However, I agree it doesn't seem like efficiency was at the forefront when OTEL was designed.


These tricks are essential for every database optimized for metrics / logs / traces. For example, you can read on how VictoriaMetrics can compress production metrics to less than a byte per sample (every sample includes metric name, key=value labels, numeric metric value and metric timestamp with millisecond precision). https://faun.pub/victoriametrics-achieving-better-compressio...
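One of the standard tricks behind sub-byte-per-sample figures is delta-of-delta timestamp encoding, popularized by the Gorilla paper. The sketch below is a generic illustration, not VictoriaMetrics' actual code: with a fixed scrape interval, the second difference is zero almost everywhere.

```python
def delta_of_delta(timestamps: list[int]) -> list[int]:
    """Encode timestamps as the first value, the first delta, then
    deltas of deltas. With a regular scrape interval the tail is all
    zeros, which any entropy coder squeezes to nearly nothing."""
    if not timestamps:
        return []
    out = [timestamps[0]]
    prev_delta = 0
    for prev, curr in zip(timestamps, timestamps[1:]):
        delta = curr - prev
        out.append(delta - prev_delta)
        prev_delta = delta
    return out
```

The same idea applies to monotonic counter values, which is why per-sample costs can fall below a byte.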


Very curious to read your code doing it. Thought of a very similar approach but never had the time. Are you keeping it somewhere?


I only ever got it to a proof of concept. The back end worked as advertised, the issue was that there are too many bugs in WMI so collecting that many performance counters had weird side effects.

Google was doing something comparable internally and this spawned some fun blog titles like “I have 64 cores but I can’t even move my mouse cursor.”


Ah, I don't mean the Windows-specific stuff. I mean the binary diffing and RLE.

While not difficult, I am just curious how others approached it.


> Dropping valuable observability data to save space is usually shortsighted

That's a bit of a blanket statement, too :) I've seen many systems where a lot of stuff is logged without much thought. "Connection to database successful" - does this need to be logged on every connection request? Log level info, warning, debug? Codebases are full of this.


Yes, it allows you to bisect a program: you can see the block of code between log statements where the program malfunctioned. More log statements slice the code into smaller blocks, meaning fewer places to look.


Probably not very useful for prod (non debug) logging, but it’s useful when such events are tracked in metrics (success/failure, connect/response times). And modern databases (including ClickHouse) can compress metrics efficiently so not much space will be spent on a few metrics.


There's always another log that could have been key to getting to the bottom of an incident. It's impossible to know completely what will be useful in advance.


In our app each user polls for resource availability every 5 mins. Do we really need "connection successful" 500x per minute? I don't see this as breaking up the logs into smaller sections; I see it as noise. I'd much rather have a ton of "connection failed" whenever that occurs than constant "success".


I agree that education needs an overhaul, it's scary for newcomers, and AI can make mistakes you need to be careful about (so did old StackOverflow answers), but let's be honest: most employers aren't paying for your art or your dopamine.


Most employers aren’t paying for your degrees!

Universities as we know them are obsolete.

