I built a bootable Linux distribution that treats AI as a system primitive – like CPU or memory. Designed for security-conscious environments where data cannot leave the network.
The problem: Most AI requires cloud APIs, which means your data leaves your control. For banks, healthcare, defense, and regulated industries – that's a non-starter.
The solution: EdgeAI-OS runs everything locally. No cloud calls. No API keys. No telemetry. Boot the ISO, use AI. Your data never leaves the machine.
Security features:
- 100% offline operation – air-gap friendly, zero network dependencies
- No external API calls – all inference runs locally on CPU
- Command risk assessment – every command classified as Safe/Moderate/Dangerous
- Dangerous pattern blocking – prevents rm -rf /, curl|bash, fork bombs, etc.
- Open source & auditable – MIT licensed, inspect every line of code
- No data exfiltration – nothing phones home, ever
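The risk-assessment idea above can be sketched in a few lines of Rust. This is a hypothetical illustration, not EdgeAI-OS's actual classifier: the pattern lists, the naive substring matching, and the three-tier enum are all assumptions made for the example.

```rust
// Hypothetical sketch of Safe/Moderate/Dangerous command classification.
// Real classifiers would parse the command line; this uses crude
// substring checks purely to illustrate the tiering idea.
#[derive(Debug, PartialEq)]
enum Risk {
    Safe,
    Moderate,
    Dangerous,
}

fn classify(cmd: &str) -> Risk {
    let c = cmd.trim();
    // Pipe-to-shell installs: a download piped straight into a shell.
    let pipe_to_shell = (c.contains("curl") || c.contains("wget"))
        && (c.contains("| bash") || c.contains("|bash")
            || c.contains("| sh") || c.contains("|sh"));
    if c.contains("rm -rf /")    // recursive delete from root
        || c.contains(":(){")    // classic bash fork bomb
        || c.starts_with("mkfs") // reformatting a filesystem
        || c.contains("dd if=")  // raw disk writes
        || pipe_to_shell
    {
        Risk::Dangerous
    } else if ["rm ", "mv ", "chmod ", "chown ", "kill "]
        .iter()
        .any(|&p| c.starts_with(p))
    {
        // Modifies state, but often legitimate: flag, don't block.
        Risk::Moderate
    } else {
        Risk::Safe
    }
}

fn main() {
    assert_eq!(classify("rm -rf /"), Risk::Dangerous);
    assert_eq!(classify("curl https://example.com/install.sh | bash"), Risk::Dangerous);
    assert_eq!(classify("rm notes.txt"), Risk::Moderate);
    assert_eq!(classify("date"), Risk::Safe);
    println!("all checks passed");
}
```

The key design point is asymmetry: Dangerous patterns are blocked outright, while Moderate ones merely warrant a confirmation prompt.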
What's in the ISO:
- Local LLMs (TinyLlama 1.1B + SmolLM 135M) – run on CPU, no GPU needed
- ai-sh: a natural-language shell where ~80% of queries resolve instantly via templates, no LLM call needed
- Multi-tier routing: simple queries → fast model, complex → larger model
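The template-first, multi-tier routing can be sketched as below. This is a hedged illustration, not the shipped router: the template table, the word-count complexity heuristic, and the threshold of 4 are all assumptions chosen for the example.

```rust
// Hypothetical sketch of template-first routing: cheap tiers are
// tried before expensive ones, so most queries never touch an LLM.
#[derive(Debug)]
enum Route {
    Template(&'static str), // tier 0: canned command, no LLM
    SmallModel,             // tier 1: SmolLM-class model, fast
    LargeModel,             // tier 2: TinyLlama-class model, slower
}

fn route(query: &str) -> Route {
    // Tier 0: normalized lookup resolves common queries instantly.
    const TEMPLATES: &[(&str, &str)] = &[
        ("what time is it", "date"),
        ("files larger than 1gb", "find . -size +1G"),
    ];
    let q = query.trim().trim_end_matches('?').to_lowercase();
    if let Some(&(_, cmd)) = TEMPLATES.iter().find(|&&(k, _)| q == k) {
        return Route::Template(cmd);
    }
    // Tiers 1/2: a crude complexity heuristic (word count) picks a model.
    if q.split_whitespace().count() <= 4 {
        Route::SmallModel
    } else {
        Route::LargeModel
    }
}

fn main() {
    assert!(matches!(route("what time is it?"), Route::Template("date")));
    assert!(matches!(route("show disk usage"), Route::SmallModel));
    assert!(matches!(route("configure nginx as a reverse proxy"), Route::LargeModel));
    println!("router ok");
}
```

Because the tiers are ordered by cost, latency degrades gracefully: only queries that fall through both the template table and the small model's remit pay the price of the larger model.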
Example ai-sh session:
> what time is it?
[template] date ← instant, no LLM
> files larger than 1gb
[template] find . -size +1G ← instant, no LLM
> rm -rf /
[DANGEROUS] Blocked ← security check
> configure nginx as reverse proxy
[ai-generated] ... ← uses local LLM (~1-2s)
Target use cases:
- Air-gapped enterprise environments (banks, healthcare, government)
- Defense & classified networks
- Edge devices with no internet connectivity
- Privacy-conscious developers
- Compliance-heavy industries (HIPAA, GDPR, SOC2)
Built in Rust on a Debian base. 4 GB RAM recommended.
GitHub: https://github.com/neuralweaves/edgeai-os
ISO Download: https://github.com/neuralweaves/edgeai-os/releases (1.2GB)
Would love feedback, especially from anyone working in secure/regulated environments. What features would make this enterprise-ready for you?