
You can easily put it into an antimatter tank ;-)

Only if you wear antimatter gloves while doing it.

Also, now your tank is just fuel as well.


You can throw matter on it. But this needs to be confined carefully...

I'd like to run my personal DNS server for privacy reasons on a cheap VPS. But how can I make it available to me only? There's no auth on DNS, right?

It can't be fully secure, but you can use a domain or path with a UUID or similar so that no one could guess your DNS endpoint, over DoT or DoH. In theory someone could log your DNS query and then replay it against your DNS server, though.

You could also whitelist known IPs (or at least ranges) on your DNS server to limit exposure, add rate limiting, detect query patterns you wouldn't exhibit, etc.

You could rotate your dns endpoint address every x minutes on some known algorithm implemented client and server side.

But in the end it's mostly security through obscurity, unless you go via your own tailnet or similar.


A personal DNS server provides no privacy. Even if you were using a caching resolver, it would barely even provide any obfuscation.

If you want DNS that is only for you, edit your hosts file.


Let me address a sibling comment first:

stub resolver (client) -> OPTIONAL forwarding resolver (server) -> recursing / caching resolver (server) -> authoritative server. "Personal DNS server" doesn't disambiguate whether your objective is recursive or authoritative... or both (there is dogma about not using the same server for both auth and recursion; if you're not running your resource as a public benefit you can mostly ignore it). If it's recursive, I don't know why you'd run it in the cloud and not on-prem.

You'll find that you can restrict clients based on IP address, and you can configure what interfaces / addresses the server listens on. The traditional auth / nonrepudiation mechanism is TSIG, a shared secret. Traditionally utilized for zone transfers, but it can be utilized for any DNS request.
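With BIND, for instance, that might look like the following named.conf fragment. Key name, secret, and network are placeholders (generate a real key with `tsig-keygen`); this is a sketch, not a complete config:

```
key "personal-key" {
    algorithm hmac-sha256;
    secret "PLACEHOLDER-BASE64-SECRET==";
};

// Clients must either be on the trusted network or sign with the key.
acl trusted { 192.0.2.0/24; key "personal-key"; };

options {
    allow-query { trusted; };
    allow-transfer { key "personal-key"; };
};
```

A signed query can then be sent with something like `dig -y hmac-sha256:personal-key:PLACEHOLDER-BASE64-SECRET== @server example.com`.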

The traditional mechanism for encryption has been tunnels (VPNs), but now we have DoH (web-based DNS requests) and DoT (literally putting nginx in front of the server as a TCP connection terminator if it's not built in). These technologies are intended to protect traffic between the client and the recursing resolver. Encryption between recursing resolvers and auths is a work in progress; DNSSEC will protect the integrity of DNS traffic between recursives and auths.

I don't know how big your personal network is. For privacy / anonymity of the herd you might want to forward your local recursing resolver's traffic to a cloud-based server and co-mingle it with some additional traffic; check the servers' documentation to see whether you can protect that forwarder -> recursive traffic with DoT, or you're not gaining any additional privacy. It's extra credit and mostly voodoo if you don't know what you're doing. I don't bother; I let my on-prem recursives reach out directly to the auths. Once the DNS traffic leaves my ISP it's all going in different directions, or at least it should be, notwithstanding the pervasive centralization of what passes for the federated / distributed internet at present.


You could run it within a Tailscale VPN network. In fact, Headscale (an open-source Tailscale server) has a very basic DNS server built in.

That assumes a device that can join a VPN. I'd like to run a DNS server for a group of kids playing Minecraft on a Switch. Since they're not in the same (W)LAN, I can't do it at the local network level. And the Switch doesn't have a VPN client.

Perhaps it seems obvious to some, but it's not obvious to me, so I need to ask: What's the advantage of a selectively-available DNS for kids playing Minecraft on a Nintendo Switch instead of regular DNS [whether self-hosted or not]?

All I can think of is that it adds obscurity, in that it makes the address of the Minecraft server more difficult to discover or guess (and thus keeps everything a bit more private/griefing-resistant while still letting kids play the game together).

And AXFR zone transfers are one way that DNS addresses leak. (AXFR is a feature, not a bug.)

As a potential solution:

You can set up DNS that resolves the magic hardcoded Minecraft server name (whatever that is) to the address of your choosing, and that has AXFR disabled. In this way, nobody will be able to discover the game server's address unless they ask that particular DNS server for the address of that particular name.
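With dnsmasq, for instance, that could be a two-line sketch. The lobby hostname and address below are placeholders, since the real hardcoded name isn't given here; and since dnsmasq is a forwarder rather than an authoritative server, there are no zone transfers to disable in the first place:

```
# Answer the hardcoded server name with our own address...
address=/lobby.example.invalid/203.0.113.10
# ...and forward everything else to a regular upstream resolver.
server=9.9.9.9
```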

It's not airtight (obscurity never is), but it's probably fine. It increases the size of the haystack.

(Or... Lacking VPN, you can whitelist only the networks that the kids play from. But in my experience with whitelisting, the juice isn't worth the squeeze in a world of uncontrollably-dynamic IP addresses. All someone wants to do is play the game/access the server/whatever Right Now, but the WAN address has changed, so that doesn't work until they get someone's attention and wait for them to make time to update the whitelist. By the time this happens, Right Now is in the past. Whitelisting generally seems antithetical to getting things done in a casual fashion.)


Ok, why would I want to do that? Because when Microsoft bought Minecraft, they decided to split the ecosystem into the Java Edition (everyone playing on a computer) and the Bedrock Edition (consoles, tablets, ...), and cross-play is not possible on the official Realms. That rules out the option of just paying to rent a Realm for the group.

So we're hosting our own Minecraft server and a suitable connector for cross-play. It's easy to join on tablets, computers and so on, because there's a button that allows you to enter an address. But on the Switch, Microsoft in its wisdom decided that there'd be no "join random server" button. There are, however, some official Realm servers, and they just happen to host a lobby, and the client understands some interface commands sent by the server (1). Some folks in the community devised a great hack: you host a lobby yourself that presents a list of servers of your choice. But to do that, you need to bend the DNS entries of the few select hostnames that host the "official" lobbies so that they point to your lobby instead. Which means you need to run a resolver capable of resolving all hostnames, because you have to set it as the primary DNS server in the Switch's network settings.

Now, there are people in the community who run resolvers, and that might be one option, but I'm honestly a bit picky about who gets to see which hostnames my kids' Switch wants to resolve.

Whitelisting networks is impossible - it's residential internet.

The reason I'd be interested in running this behind a VPN is that I don't want to run an open resolver and become part of an amplification attack. (And sadly, the Switch 1 does not have a sufficiently modern DNS stack, so I can't just enable DNS Cookies and be done with it. The Switch 2 supports them.)

Sorry if this sounds complicated. It's just hacks on hacks on hacks. But it works.

(1) Judging by the look and feel, this is actually implemented as a Minecraft game interface, and the client just treats it as a game server. It even reports the number of players hanging out in the lobby.


Thanks. I suspected that this is where things were heading. I don't see a problem with using hacks-on-hacks to get a thing done with closed systems; one does what one must.

On the DNS end, it seems the constraints are shaped like this:

  1.  Provides custom responses for arbitrary DNS requests, and resolves regular [global] DNS
  2.  Works with residential internet
  3.  Uses no open resolvers (because of amplification attacks)
  4.  Works with standalone [Internet-connected] Nintendo Switch devices
  5.  Avoids VPN (because #4 -- Switch doesn't grok VPN)
With that set of rules, I think the idea is constrained completely out of existence. One or more of them need to be relaxed in order for it to get off the ground.

The most obvious one to relax seems to be #3, open resolvers. If an open resolver is allowed, then the rest of the constraints fit just fine.

DNS amplification can be mitigated well-enough for limited-use things like this Minecraft server in various ways, like implementing per-address rate limiting and denying AXFR completely. These kinds of mitigations can be problematic with popular services, but a handful of Switch devices won't trip over them at all.

Or: VPN could be used. But that will require non-zero hardware for remote players (which can be cheap-ish, but not free), that hardware will need power, and the software running on it will need to be configured for each WLAN it has to work with. That path is something I wouldn't wish upon a network engineer, much less a kid with a portable game console. It's possible, but it feels like a complete non-starter.


Yep, I agree. It's essentially impossible given the constraints. I'm mostly responding to a post that says "just run it on a VPN" with an example that just can't run on a VPN.

(3) would be easy to handle if DNS Cookies were sufficiently well supported, because they solve reflection attacks, which are the most prominent concern. Rate limiting also helps.

At the moment I'm selectively running the DNS server when the kids want to play, because we're still at the stage of supervised, pre-planned play sessions. And I hope that by the time they plan their own sessions, they've all moved on to a Switch 2.


Thank you for the explanation, it was most interesting. I had no idea Bedrock could be coerced into talking to Java servers.

Here are a few ideas:

1. Geoblocking. Not ideal, but it can make your resolver public to fewer people.

2. What if your DNS only answers queries for a single domain? Depending on the system, the fallback DNS server may handle other requests?

3. You could always hand out a device that connects to the WLAN. Think a cheap ESP32. It only needs to be powered on when doing the resolution. Then you have a bit more freedom: IPv6 router advertisements + VPN, or try hijacking DNS queries (won't work with client isolation), or set it as the resolver (may need manual config on each LAN, impractical).

4. IP whitelist, but ask them to visit an HTTP server from their LAN if it does not work (the Switch has a browser, I think). This will give you the IP to allow, and you can even password-protect it.

I'd say 2 is worth a try. 4 is easy enough to implement, but not entirely frictionless.
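Idea 4 could be sketched with Python's standard library. The page text and the in-memory set are illustrative; a real setup would push the recorded IP into the firewall whitelist and put a password in front of the page:

```python
import http.server

ALLOWLIST: set[str] = set()  # source IPs permitted to use the DNS server

class AllowlistHandler(http.server.BaseHTTPRequestHandler):
    """Record the visitor's source address so it can be allowed
    through to port 53. Add auth before exposing this for real."""

    def do_GET(self):
        ip = self.client_address[0]
        ALLOWLIST.add(ip)
        body = f"Added {ip} to the DNS allowlist.\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass
```

Run it with `http.server.HTTPServer(("", 8080), AllowlistHandler).serve_forever()` and have the kids open the page from the Switch's browser whenever resolution stops working.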


You could run a DNS server and configure the server with a whitelist of allowed IPs on the network level, so connections are dropped before even reaching your DNS service.

For example, any Red Hat-based Linux distro comes with firewalld. You could set rules that block all external connections by default and only allow your kids' and their friends' IP addresses to connect to your server (and only specifically on port 53). So your DNS server will only receive connections from the whitelisted IPs. Of course, the downside is that if an IP changes, you'll have to troubleshoot and whitelist the new one, and there is the tiny possibility that they might be behind CGNAT, where their IPv4 is shared with another random person who is looking to exploit DNS servers.

But I'd say that is a pretty good solution; no one will even know you are running a DNS service except for the whitelisted IPs.
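A sketch of those firewalld rules (the address is a placeholder; note DNS needs both UDP and TCP on port 53):

```
# Drop unsolicited traffic by default.
firewall-cmd --set-default-zone=drop

# Allow DNS only from a known player address, for UDP and TCP.
firewall-cmd --permanent --zone=drop --add-rich-rule='rule family="ipv4" source address="198.51.100.7" port port="53" protocol="udp" accept'
firewall-cmd --permanent --zone=drop --add-rich-rule='rule family="ipv4" source address="198.51.100.7" port port="53" protocol="tcp" accept'
firewall-cmd --reload
```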


They're all playing from home, connected to their residential internet. I don't know their IP addresses.

Correct me if I misunderstand what you're trying to do:

What you want to do is, on each LAN that has a Switch that should play on your specific Minecraft server, report that the hostname the Switch would ordinarily connect to resolves to the IP of the server you're hosting?

If you're using OpenWRT, it looks like you can add the relevant entries to '/etc/hosts' on the system and dnsmasq will serve up that name data. [0] I'd be a little shocked (but only a little) if something similar were impossible on all non-OpenWRT consumer-grade routers.

My Switch 1 is more than happy to use the DNS server that DHCP tells it to. I assume the Switch 2 is the same way.

[0] <https://openwrt.org/docs/guide-user/base-system/dhcp.dnsmasq>
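Concretely, on an OpenWRT router that might look like this (hostname and address are placeholders):

```
# Append the override to /etc/hosts on the router:
echo '203.0.113.10 lobby.example.invalid' >> /etc/hosts

# dnsmasq reads /etc/hosts by default; restart it to pick up the change:
/etc/init.d/dnsmasq restart
```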


I can do that for my network - but the group is multiple kids who play from their homes. I'm not going to teach all of those parents how to mess with their networks. There are just way too many things that can go wrong. Also, it won't work if a kid is traveling.

From all this, what I got is that Microsoft is connecting to some random servers without TLS and then somehow outputting that data straight onto the Nintendo Switch.

Why do you want to do this? What would you redirect / override on this?

I just use a VPN like Tailscale or WireGuard. You can normally also tell clients what DNS to use when on the VPN.

The article is about running your own DNS server, which is, and must be, always available to everyone. What you are talking about is running a DNS resolver, but that is not the topic.

Run it over WireGuard? I have this setup: cloud-hosted private DNS protected by Noise/ChaCha20. Only my devices can use it, because only they are configured as peers.
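A minimal wg-quick client config for that kind of setup might look like this; all keys, addresses, and the endpoint name are placeholders:

```
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
# Resolve through the private DNS server on the VPS while the tunnel is up.
DNS = 10.8.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vps.example.net:51820
# Route only the DNS server's address through the tunnel.
AllowedIPs = 10.8.0.1/32
PersistentKeepalive = 25
```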

SSH tunnels are a possibility.

For labels I use a Phomemo printer. Quite cheap. I wrote some Python code to drive it.

I built such bumblebee houses a few years ago with the kids. The flap is essential against a kind of fly that lays its eggs in the bumblebee nest; its larvae eat the nest. Either the queen or the others learn to use it quite fast. Sometimes next-generation queens remember it the following year.

That's so cool! I have to try this with my kids. They will almost certainly not care, but what the hell.

Are you saying that a queen will die and its successor somehow knows how to use the door without learning it like its mother had to?


No, the next-generation queens grow up in that swarm and learn it there. All swarm members learn it. You teach it to the queen (or the first few workers); the rest learn it from the others. But the workers are short-lived (a few weeks). The queens live for about a year and can carry the knowledge into the next year.

Can I turn a real font into my handwriting?


Ironically, that's exactly what calligraphy is.

And learning to write in 'fonts' (hands) like block-print is still a form of calligraphy.


Asking the right questions


An AST-based conflict resolver could eliminate the same kind of merge conflicts on a text-based RCS


Yeah, I suppose that's true, too. You've got to do the conversion at some point. I don't know that you get any benefit from storing the text, doing the transformation to support whatever ops (deconflicting, etc.), and then transforming back to text again, vs. just storing it in the intermediate format. Ideally, this would all be transparent to the user anyway.


For one merge, yes. The fun starts when you have a sequence of merges. CRDTs put IDs on tokens, so things are a bit more deterministic. Imagine a variable rename or a whitespace change; it messes up text diffing completely.


Full system access? Do people run npm install as root?


If they run npm at all, quite often.


Of course, how else could it install the system packages it needs? /s


If the load balancer can force a downgrade, an attacker can do it as well.


Only if the attacker has a valid certificate for the domain to complete the handshake with.

Relying on HTTPS and SVCB records will probably allow a downgrade for some attackers, but if browsers roll out something akin to the HSTS preload list, then downgrade attacks become pretty difficult.

DNSSEC can also protect against malicious SVCB/HTTPS records and the spec recommends DoT/DoH against local MitM attacks to prevent this.


DNSSEC can't protect against an ECH downgrade. ECH attackers are all on-path, and selectively blocking lookups is damaging even if you can't forge them. DoH is the answer here, not record integrity.


DNSSEC alone is obviously useless because any attacker interested in SNI hostnames can just as easily monitor DNS traffic.

However, DoH/DoT without record integrity is about as useful as self-signed HTTPS certificates. You need both for the system to work right in every case.

To quote the spec:

> Clearly, DNSSEC (if the client validates and hard fails) is a defense against this form of attack, but encrypted DNS transport is also a defense against DNS attacks by attackers on the local network, which is a common case where ClientHello and SNI encryption are desired. Moreover, as noted in the introduction, SNI encryption is less useful without encryption of DNS queries in transit.


I don't think this is true; I think this misunderstands the ECH threat model. You don't need record integrity to make ECH a strong defense against on-path ISP attackers; you just need to trust the resolver you're DoH'ing to.


This actually reminds me of the "God of the gaps" problem. A gradual retreat in the face of inconvenient facts.

Many years ago when I was a student the argument was that integrity isn't a big deal so plaintext telnet is just fine. If you're paranoid you use an "enhanced" telnet where the authentication step is protected but not everything else [Yes I'm an old man]

By the turn of the century everybody agreed telnet was stupid, use SSH, but integrity still wasn't a big deal when it came to ordinary web sites. Only your bank needs SSL, fool.

And I suppose that 8-10 years ago that changed too, and it's now recognised that plaintext HTTP really isn't good enough; you need HTTPS. But still I see you saying integrity isn't important when it comes to DNS records.

Integrity is the hardest thing to get ordinary users to care about. Given how freely even young kids lie, we should probably take it more seriously, but it remains hard to get ordinary people to care. Ultimately, though, this does matter.


Sir, this is a Wendy's. We're talking about ECH. Can you maybe rephrase all this to be specifically about how DNS record integrity practically impacts the threat model for ECH? The threat actor for Encrypted Client Hello is ISPs.

This same thing happened with DNS cache poisoning, which went unaddressed from the mid-1990s to 2008, despite the known fix of port/ID randomization, because the DNS operator community was fixated on the "real" fix of... DNS record integrity.


> you just need to trust the resolver you're DoH'ing to

I don't trust the public DoH resolvers that much, actually, and neither do I trust my own ISP. I know for a fact that they mess with DNS records because of court orders, and I want to know when that happens.

DoH and DoT are not the modern DNSSEC alternatives we need. They naively assume that the DNS resolver always speaks the truth.


> but if browsers roll out something akin to the HSTS preload list, then downgrade attacks become pretty difficult.

Can you explain why, considering it is at the client's side ("browsers")?


If browsers remember which domains do ECH and refuse to downgrade to non-ECH connections afterwards, the way the HSTS cache forces browsers to connect over HTTPS despite direct attempts to load over HTTP, then you only need an entry in the browser database to make downgrade attacks aimed at SNI-snooping impossible.

For HSTS, browsers come with a preloaded list of known-HTTPS domains that requests are matched against. That means they will never connect over HTTP at all, rather than connecting over HTTP once and then upgrading and maintaining a cache when the HSTS header is present. If ECH comes with a preload list, then browsers connecting to ECH domains will simply fail to connect rather than permit the network to downgrade their connection to non-ECH TLS.


Linux runs everywhere


Except on my stupid iPad “Pro”. :(


IIRC there's an app on the App Store that's basically a small Alpine container.


Well, there's iSH and a-Shell, but they don't have GUI capability and are somewhat limited in other ways. There's also UTM, but without weird hacks you can only get the SE version, which is very slow.


IMO it depends a bit, but in most cases: No!

If you do proper software development (planning, spec, task breakdown, test case spec, implementation, unit tests, acceptance tests, ...), implementation is just a single step, and the generated artifact is the source code. And that's what needs to be checked in. All the other artifacts are usually stored elsewhere.

If you do spec and planning with AI, you should also commit the outcome, and maybe also the prompt and session (like meeting notes from a spec meeting). But it's a different artifact then.

But if you skip all these steps and put your idea directly to a coding agent in the hope that the result is final, tested, production-ready software, you should absolutely commit the whole chat session (or at least make the AI create a summary of it).


LLMs frequently hallucinate and go off on wild goose chases. It's admittedly gotten a lot better, but it still happens.

From that perspective alone, the session would be important metainformation that could be used to determine the rationale of a commit - right from the intent (prompt) to what the harness (Claude Code etc.) made of it. So there is more value in keeping it, even in your second scenario.


I try to use AI incrementally and verify each result. If it goes mad, I just revert and start over. It's a bit slower, but it ensures consistency and correctness, and it's still a huge improvement over doing everything manually.

