The post does a good job selling the “why” (hot-row contention + strict serializability) but it’d help the community to see a clearer “when.” A practical rubric would be: if X% of writes touch Y “house accounts” with RTT Z, OLTP throughput on MVCC/row-lock engines caps at ~N TPS; TB sustains ~M TPS—showing the apples-to-apples path from workload shape to DB choice. Framed that way, TB becomes a “consensus-backed integer ALU” you pair with a string DB, not a replacement for OLTP in general.
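To make the rubric concrete, here is a back-of-envelope sketch (my own model, not from the post): if a fraction p of transactions write one hot account, and each holds the row lock for roughly one commit RTT on a synchronously replicated MVCC/row-lock engine, the hot row serializes to at most 1/t_hold TPS, capping total throughput at (1/t_hold)/p regardless of core count.

```python
def hot_row_tps_cap(hot_fraction: float, lock_hold_s: float) -> float:
    """Upper bound on total TPS when hot-row writes serialize.

    hot_fraction: share of transactions touching the hot account (0, 1].
    lock_hold_s:  lock hold time per hot-row write, ~ commit RTT.
    """
    if not (0.0 < hot_fraction <= 1.0):
        raise ValueError("hot_fraction must be in (0, 1]")
    # The hot row admits at most 1/lock_hold_s writes per second;
    # scaling by 1/hot_fraction gives the total-workload ceiling.
    return (1.0 / lock_hold_s) / hot_fraction

# e.g. 30% of writes hit "house accounts", 5 ms lock hold
# (cross-AZ commit RTT): cap ≈ 667 TPS total, on any core count.
print(round(hot_row_tps_cap(0.30, 0.005)))  # → 667
```

The exact constants are illustrative; the point is that the cap falls out of two measurable numbers, which is the apples-to-apples comparison the post could publish.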
Two gaps I keep hearing in this thread that could unblock adoption:
1. Reference architectures: serverless (Workers/Lambda) patterns, auth/VPN/stunnel/WireGuard blueprints, and examples for “OLGP control plane + TB data plane.”
2. Scaling roadmap: the single-core, single-leader design is philosophically clean—what’s the long-term story when a shard/ledger outgrows a core or a region’s latency budget?
Also +1 to publishing contentious, real-world case studies (e.g., “fee siphon to a hot account at 80–90% contention”) with end-to-end SLOs and failure drills. That would defuse the “100–1,000 TPS” debate and make the tradeoffs legible next to Postgres, FDB, and Redis.
I found the “masking” meter the most interesting—and contentious—design choice. It seems to bundle two different phenomena: the active effort of conforming (costly even when alone, via internalized norms) and the risk of social detection. That would explain why self-care choices can still deplete the masking meter in private, which confused many players.
Two ideas that might deepen the simulation without adding complexity:
- Make the intent explicit by labeling masking as “conformity effort” (with a short tooltip), and split “detection risk” into a separate, slower-moving gauge that mostly changes in social contexts.
- Offer a couple of selectable “profiles” (e.g., sensory sensitivity high/low, ADHD comorbid/not) so players see how the same day plays out differently.
As you open-source it, a “debug view” showing the per-choice deltas and rationale would also help neurotypical players connect the dots. Would you consider parameterizing those profiles so the community can contribute balanced variants?
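To illustrate the split-gauge idea, here is a minimal sketch (names, numbers, and mechanics are my assumptions, not the game's actual model): “conformity effort” drains in any context via internalized norms, while “detection risk” mostly moves in social contexts and decays slowly in private, with profiles as tunable parameters.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    sensory_sensitivity: float  # multiplier on conformity-effort cost
    adhd_comorbid: bool         # flat extra drain when present (assumed)

@dataclass
class Meters:
    conformity_effort: float = 100.0  # depletes; low = exhausted
    detection_risk: float = 0.0       # accumulates; high = risky

    def apply_choice(self, effort_cost: float, social: bool,
                     profile: Profile) -> None:
        # Conformity effort drains everywhere, scaled by the profile.
        cost = effort_cost * profile.sensory_sensitivity
        if profile.adhd_comorbid:
            cost += 2.0  # assumed flat surcharge
        self.conformity_effort = max(0.0, self.conformity_effort - cost)
        if social:
            # Risk rises faster the more depleted the player already is.
            self.detection_risk += 5.0 * (1.0 - self.conformity_effort / 100.0)
        else:
            # Slow private decay: the gauge barely moves outside social contexts.
            self.detection_risk = max(0.0, self.detection_risk - 1.0)

# A "debug view" is then just the before/after diff of Meters per choice.
p = Profile(sensory_sensitivity=1.5, adhd_comorbid=True)
m = Meters()
m.apply_choice(effort_cost=10.0, social=False, profile=p)  # self-care, alone
print(m.conformity_effort, m.detection_risk)  # → 83.0 0.0
```

Note the private choice still costs conformity effort (matching the confusing-in-a-good-way behavior above) while detection risk stays flat, and the Profile dataclass is exactly the kind of parameterization the community could contribute variants for.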