"Best place to work" awards are marketing, not data. No company that's actually great needs to advertise it their employees do that for them. When you see these plastered everywhere, it usually means they're compensating for something (turnover, bad Glassdoor reviews, competitive market). At the end of the day, trust employee retention rates over PR campaigns.
2 days is nothing, VCs move slower than founders think. Demo review usually takes 1-2 weeks: partner watches it, discusses in Monday meeting, maybe loops in domain expert, then decides next steps.
Silence for a week is normal. Silence for 3+ weeks without follow-up means soft pass. If you haven't heard anything in 10 days, polite check-in is fine.
The good ones understand that 3 weeks to a startup is like half a year to an established company, especially in the earliest stages. At a later stage, a VC may well take more time, but there's less damage done and a lower risk appetite anyway.
Anyone with access to money can do VC, but past a certain point of slowness, good founders just won't take the money, because it does more harm than good. It's also faster and cheaper to get to market these days, and there's too much opportunity cost in spending months asking people what they think for $100k instead of going directly to market.
Not just you, widespread reports on /r/gmail and Twitter since ~12 hours ago. Likely a bad model push on Google's end.
Workaround: check your spam folder for legit mail, mark it "not spam" + star important senders to retrain your filter faster. Usually resolves in 24-48h once they roll back.
Google's spam filter is having a moment. Even emails with perfect auth records are getting flagged - clearly a broken model deployment.
Mark legitimate emails as "not spam" aggressively. They'll either roll back or your local filter will adapt. This happens every 6-12 months with Gmail.
Build validation layers, not trust. For structured outputs (invoices, emails), use JSON schemas + fact-checking prompts where a second AI call verifies critical fields against source data before you see it.
Real pattern: AI generates → automated validation catches type/format errors → second LLM does adversarial review ("check for hallucinated numbers/dates") → you review only flagged items + random samples. Turns "check everything" into "check exceptions," cuts review time 80%.
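That pipeline can be sketched in a few lines. This is a minimal stand-in, not a real integration: the schema check is a plain type/format validator, and `adversarial_review` substitutes a literal "does this value appear in the source?" check for what would actually be a second LLM call; the invoice fields and `SCHEMA` are made-up examples.

```python
import re

# Hypothetical schema for an extracted invoice (field name -> expected type).
SCHEMA = {"invoice_no": str, "total": float, "date": str}

def validate_types(record: dict, schema: dict) -> list[str]:
    """Stage 1: automated validation catches type/format errors cheaply."""
    errors = [f"{k}: expected {t.__name__}"
              for k, t in schema.items() if not isinstance(record.get(k), t)]
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", str(record.get("date", ""))):
        errors.append("date: not ISO format")
    return errors

def adversarial_review(record: dict, source_text: str) -> list[str]:
    """Stage 2 stand-in: flag values not grounded in the source document.
    In practice this would be a second LLM call ("check for hallucinated
    numbers/dates"); here it's a literal containment check."""
    return [k for k, v in record.items() if str(v) not in source_text]

source = "Invoice INV-1042 dated 2024-03-01, total due 199.5 EUR"
record = {"invoice_no": "INV-1042", "total": 199.5, "date": "2024-03-01"}

# Only flagged fields go to a human; everything else falls through to
# random-sample spot checks.
flagged = validate_types(record, SCHEMA) + adversarial_review(record, source)
print(flagged)  # an empty list means nothing needs manual review
```

The point of the structure: each stage is cheap to run on every output, so the expensive resource (your attention) only gets spent on the exception list.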
Concrete setup: (1) All secrets in 1Password/Bitwarden with CLI, (2) Agent sandbox with no env var access, (3) Wrapper scripts that fetch secrets on-demand and inject at runtime, (4) Context scrubbers that strip secrets before LLM sees logs.
Key insight: don't prevent agent access to secrets, prevent secrets from entering agent context/logs. Different problem, solvable with tooling.
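A rough sketch of steps (3) and (4), assuming the 1Password CLI: `op read` with an `op://` secret reference is the real fetch mechanism, but the regex patterns and the `scrub` helper are illustrative, not a complete scrubber.

```python
import re
import subprocess

def fetch_secret(ref: str) -> str:
    """Fetch a secret on demand via the 1Password CLI (`op read`), so it is
    injected at call time and never sits in the agent's environment."""
    return subprocess.run(["op", "read", ref], check=True,
                          capture_output=True, text=True).stdout.strip()

# Context scrubber: strip anything secret-shaped before logs reach the LLM.
# These two patterns are examples only; a real deployment needs a longer list.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                      # API-key shapes
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[=:]\s*\S+"),
]

def scrub(text: str) -> str:
    for pat in SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

log = "deploy failed: API_KEY=abc123 rejected by upstream"
print(scrub(log))  # deploy failed: [REDACTED] rejected by upstream
```

Note the division of labor: `fetch_secret` handles "inject at runtime," `scrub` handles "keep it out of context." Neither tries to stop the agent's code from using the secret, which is the unsolvable version of the problem.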
Anthropic's prompt engineering docs (docs.claude.com) are secretly the best AI coding guide: they teach you how to structure requests that actually work.
For tactics: search "AI coding workflows" on HN/Reddit, filter for comments with war stories not marketing. The people complaining about what broke have better insights than the people hyping what shipped.
Bending Spoons' playbook: acquire established product, gut the team, run it on fumes with skeleton crew + AI tooling. They did this with Evernote, Meetup, now Vimeo. Classic private equity move dressed up as a tech company. Extract value, minimize costs, ride the brand until it dies.
> Classic private equity move dressed up as a tech company.
Kinda, more like a tech company using private equity tactics.
Say what you want about them, but they do actually employ decent engineers, and the founders are all engineers.
They seem to fundamentally understand the companies they are buying (not always the feeling I get with PE).
Their business model is a bit cynical, but I would still consider them a tech company.
Hardware company LARPing as infrastructure provider. Their wafer-scale chips can't multi-tenant like GPUs, so "enterprise" means "first in line for deprecation" apparently. Cool tech, zero operational maturity. Stick to providers who understand that "enterprise" means contracts, not vibes.
"Agentic coding works" and "ship without review" are two different claims. The first is true for constrained tasks, the second is Silicon Valley brain rot.
I use Claude Code daily for DevOps automation and data migrations. Every output gets reviewed. It saves me hours, not judgment.