Hacker News | new | past | comments | ask | show | jobs | submit | login

It generally boils down to three things: 1) how many resources a service needs to run well, 2) how much the service wants to consume if left unchecked, and 3) how performant you want your service to be.

After understanding these parameters, you can limit your application's resources by running it under a cgroup. This prevents the service from surpassing the limits you've set, and the cgroup will apply pressure (e.g., memory reclaim) as the service nears those limits.
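A minimal sketch of this with systemd, which places each unit in its own cgroup; the unit name, binary path, and limit values below are illustrative, not from the original comment:

```ini
# my-service.service -- hypothetical unit; adjust path and limits to your service
[Service]
ExecStart=/usr/bin/my-service
MemoryHigh=200M   # soft cap: kernel applies memory pressure above this
MemoryMax=256M    # hard cap: the service cannot exceed this (OOM kill beyond it)
CPUQuota=50%      # at most half of one CPU core
```

`MemoryHigh` is the "pressure" knob mentioned above, while `MemoryMax` is the hard wall; setting both gives the service room to be slowed down gracefully before it is killed.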

Also, sharing resources is good. Instead of having 10 web server containers, you can, most of the time, host all of the sites under a single server with virtual hosts. This allows good resource sharing and doing more with fewer processes.
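The virtual-host setup described above can be sketched with nginx; the hostnames and document roots here are made up for illustration. One nginx instance serves both sites, dispatching on the HTTP Host header:

```nginx
# Name-based virtual hosts: one server process, two sites
server {
    listen 80;
    server_name blog.example.com;   # requests with this Host header...
    root /srv/www/blog;             # ...are served from this directory
}

server {
    listen 80;
    server_name wiki.example.com;
    root /srv/www/wiki;
}
```

Each additional site is just another `server` block, not another container with its own runtime overhead.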

As an extreme case, I'm running a home server (DNS server, torrent client, Syncthing client, a small HTTP server, and an FTP server, along with some other services) on a 512MB OrangePi Zero. The little guy works well and never locks up. It has plenty of free RAM, and none of the services are choking.



I agree, but at the same time: inter-process communication is also faster when a process is allowed to write to or read from another process's memory. That doesn't make it a good idea, though.


The way I deploy them doesn't mean "compromise one, compromise all". For example, I generally leave SELinux intact, so the services are properly isolated from each other.



