More like a how-to for computing poverty! It even has the parallels of predatory landlords ("Google Cloud") and sharecropping (Chromebook/Android).
A 35W draw will cost you $5/mo at $0.20/kWh. This will either let you run an oldish laptop continuously, or you can go down the path of newer hardware to optimize your constant draw, and spend that energy budget on in-use amenities (nice monitor or faster processing).
But looking at the bigger picture, why would you want to set a limit on your computing costs so low? Today's dominant business model is to use scaled computational processes to trick people into overpaying for products/services. One of the least prudent things an individual could do is further hinder their ability to exercise their own computational agency!
Assuming I did my math right, your cost estimate assumes 714 hours/month, or 23 hours/day. I'm guessing that's 24/7 usage plus some rounding?
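For anyone wanting to check the arithmetic, here's the back-of-the-envelope in Python, using the thread's numbers (35 W, $0.20/kWh, a $5 budget):

```python
# Sanity-check the $5/month claim: how many hours of a 35 W draw
# does $5 buy at $0.20/kWh?
def hours_for_budget(budget_usd, watts, usd_per_kwh):
    """Hours of continuous draw that a monthly budget covers."""
    kwh = budget_usd / usd_per_kwh      # kWh the budget buys
    return kwh * 1000 / watts           # kWh -> watt-hours -> hours

def monthly_cost(watts, usd_per_kwh, hours=730):
    """Cost of a constant draw over an average month (~730 h)."""
    return watts / 1000 * hours * usd_per_kwh

print(round(hours_for_budget(5, 35, 0.20)))   # -> 714 h, i.e. ~23.5 h/day
print(round(monthly_cost(35, 0.20), 2))       # -> 5.11 for true 24/7
```

So "$5/mo" is a rounded 24/7 figure; actual around-the-clock cost is closer to $5.11.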
Clearly that's not a realistic usage, but beyond that I'd say 35W is a high estimate even during use. CodingHorror has some numbers for a Dell XPS M1330 [1], a laptop from 2008 with a Core 2 Duo (2 GHz) and 13.3" display.
Idle at max screen brightness is 20W, wifi bandwidth test bumps it up to 24W. Running an instance of prime95 pushes it up to 50W, but most people don't do that. Especially with hardware accelerated video, I don't think people are spending the majority of their computer use with the CPU pegged.
A home laptop probably spends 18+ hours a day off/asleep, and the maybe 6 hours of use aren't going to all be heavy loads.
OP mentioned a server, so 24/7 is indeed what I was shooting for. 35W was a figure from when I plugged a Thinkpad T61 (Core 2 duo, as well) into a Kill-a-Watt. I actually did use that machine as a traveling server.
You can obviously buy newer tech for even better power efficiency, but that gets into a discussion of accounting for capital cost vs recurring.
35W for a T61?! Undervolt it (Core 2 can still do it from software alone), set the CPU governor to conservative, and turn off the display (it's a server), the modem, the FireWire, the PCMCIA, the ExpressCard, and the wifi (again, server, cable is good), and you're under 10W. I'm using a T400 like this with 2x 2TB HDDs.
Depends on the laptop, my new Thinkpad (T470P) has a processor with a TDP of 45W (limited to 35W) plus the screen.
Theoretically, if I pegged it out with the screen on max brightness, I'd get about 90 minutes out of the battery (which is about what Linux estimated running stress-ng). In reality, depending on what I'm doing, I get 5-6 hours, and if it's light surfing on an evening, over 7. Currently it's saying 3 hours 58 minutes at 33% charge. I'm actually surprised how long it lasts given I didn't expect it to, but looking at powertop, it spends a good chunk of its time running at about half its top clock rate and it rarely bursts.
Though to be fair, I didn't buy this laptop for its battery life (I can swap those out if needed) or weight (not that it's excessive), but because it sat on a local maximum for what I wanted (fast quad core with HT, relatively light, relatively good battery life).
The GF's little Asus regularly goes over 10 hours with her using it for the usual net stuff, and it's got a tiny battery. It's a cracking little machine, but IntelliJ would wreck it.
I largely agree, but wanted to add a few data points about power usage.
5.5 Watts:
My Dell XPS13 with a bunch of browser/slack/java processes on Linux. (it's below 4W at idle)
40 Watts:
My 6-year old Xeon E3-1230 (v1) with 14TiB usable ZFS and 32GB of ECC RAM.
Also, if you're really going for very low power, I'd suggest an RPi Zero W instead of some Android phone. $10 @ MicroCenter, and you can install pretty much anything your heart desires from the standard Raspbian repos. I can't even measure its wattage at the outlet with my power meter.
I think I have an APC Back-UPS 425VA (or 600VA) that takes care of that for a sprinkle of RPis, a firewall, an access point and the ONT outside. I like to keep my connectivity during the very brief power outages (underground power).
I've been meaning to try using an Anker portable battery as an extended-run UPS, but haven't tried it, and I don't know if it'll keep the USB port powered on continuously.
None of Anker's batteries charge and provide power at the same time so they cannot be used as cheap UPSs.
I think you'd be better off looking at some of the DIY electronic battery things, e.g. the WeMos battery shield or the Adafruit PowerBoost (https://www.adafruit.com/product/2465) and a LiPo battery, but they will only provide ~1 Amp.
My issue with the whole article is that I don't understand the use-case here, barring personal interest, because that's a universal use-case. If I'm legitimately poor, I'm going to buy the shittiest tablet I can for content consumption. This will run me $30-$60 and let me have my own personal machine to use with the library's wifi. If I have my own internet, I probably have enough income that within a year I could save up for a shitty tablet w/ attachable keyboard or a cheap used laptop. After that, computing power doesn't really matter because I'm just using the thing to go to community college while I either: a.) Pursue the scholarships that will get me into a 4-year college
or
b.) Pursue a sufficiently lucrative 2-year certification.
If neither of those things are attainable I should be sticking with just using the library's computers to try and manage to produce some form of profit I can put towards eventually getting into community college.
If I'm not dirt broke, this entire article is sort of pointless from a practical standpoint unless you just really enjoy this sort of cost-saving exercise. But if you don't, and you're making enough for a real computer, then you may as well just get the real computer.
Remembering to turn off or disable/delete cloud resources is a thing of the (very distant) past: you can automate that problem away with ease. AWS provides free services for doing this.
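As a sketch of what that automation might look like (assuming boto3, and an "auto-stop" tag convention that is my invention, not an AWS default), a nightly cron job could stop anything tagged for it:

```python
def ids_to_stop(reservations):
    """Pick running instances tagged auto-stop=true out of a
    describe_instances()-style response."""
    ids = []
    for res in reservations:
        for inst in res.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if inst["State"]["Name"] == "running" and tags.get("auto-stop") == "true":
                ids.append(inst["InstanceId"])
    return ids

if __name__ == "__main__":
    import boto3  # needs AWS credentials configured
    ec2 = boto3.client("ec2")
    ids = ids_to_stop(ec2.describe_instances()["Reservations"])
    if ids:
        ec2.stop_instances(InstanceIds=ids)
```

AWS's own options (CloudWatch alarm actions, Instance Scheduler) do the same thing without you hosting the script.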
[OP] Depends what you want. I amuse myself optimising cost/resource usage; I know it's playing at the margins. I think 'sharecropping' is a bit harsh... in this particular case, my Android phone is actually running AOSP, so by having a debian chroot it's not so different from running an old laptop 24/7 (admittedly with more binary blobs, but no Intel ME!).
No way a cheap $150 Chromebook will last 4 years. I've been buying Chromebooks for my wife for a while now and you really do get what you pay for. It's like that parable with the poor and rich person's boots.
Sub-$200 Chromebooks are plastic, so they break easily. I've had issues with keyboard switches not working any more. The hole where the power adapter plugs in gets loose and stops making a good connection. You're looking at about 2 years of everyday use, max, before something critical breaks.
A good, aluminum Chromebook that will actually last a few years of everyday use is about $300.
Anecdotal point: I have the exact same Acer Chromebook 11 that the OP mentions in the article, which I bought for around $150 USD. It all feels like plastic, but has been very durable for me.
Case in point: I was assaulted and robbed last year while walking on the street, carrying this laptop in my backpack. I didn't have much else in the backpack, and it's a thin one without padded protection. The robber pistol-whipped me and I fell to the ground. He kicked me in the back once (while I had the backpack on). My Chromebook took most of the force. There is still a visible dent on the front cover of the laptop right now as a result. It still works perfectly (other than the dent being an aesthetic issue). This was a year ago and I am still using this Chromebook.
I also had none of the issues you mentioned with keyboard switches, power adapter plugs, etc. Maybe it's a manufacturer thing and not a Chromebook thing? Maybe Acer is better?
I'm loath to go down this rabbit hole, but I have anecdotal experience here. My wife convinced me to buy a pair of what I thought were very expensive boots (Wolverine 1000 Mile boots). They are nice, and supposed to last "a very long time."
There's a hidden cost, though. I recently had them re-soled and had to buy some laces (this is after about a year and a half of wear), and re-soling the boots was also very expensive! I'm not sure I'm saving much over the cost of regularly buying less expensive boots.
The key to shoes lasting long is taking care of them. Don't wear them in the rain. Give them a break every couple of days. Most importantly, you have to keep polishing them, at least once a week. If you take care of the upper leather of a shoe, it can last pretty long, not to mention that any shoe with good arches is worth it.
For throwaway shoes that you can't resole, I like Ecco. Excellent!!
What good are boots that you can't wear in the rain and you have to polish more than 50 times per year? Even if polishing only takes 12 minutes, you'll spend more than 10 hours per year just on maintaining that one pair of shoes/boots.
I just want my feet covered; I don't want another hobby...
Personally, I've found that polishing my boots is a really enjoyable task; it's quite cathartic to sit there for a few minutes with dubbin and a brush and clean your boots up.
It's funny, because I used to hate polishing my school shoes as a kid.
I only polish my boots around once a month, so it's a couple of hours a year. I spend more time sewing up clothes and sewing on replacement buttons, which I've also discovered now that I'm an adult is a cathartic activity.
There's something nice about trying to distance yourself from the throwaway disposable consumer culture we have now, and actually fixing things.
I don't want to put words in your mouth or anything but I find a lot of enjoyment out of "menial" tasks as well and attribute it to practicing mindfulness without realizing it.
I recently had to sort huge boxes of legos for a hacker space. It sounded miserable but it really was relaxing just letting the mind blank out.
Use a horsehair brush to get whatever chunks of crap off - I pick up a lot of mud, maybe you do too. Remove any remaining dirt or dust with a wet rag, maybe a little saddle soap if they're really filthy.
Wipe off any lingering water, let the leather dry, and then apply a thin layer of mink oil - work it into the creases, that's where the leather will split if you don't keep it supple.
Let that dry, and then apply a very thin layer of polish with a damp - not wet - rag. Work it into the seams; if it's not under your fingernails, you're not trying hard enough. Once the entire upper is done, give it a few minutes to sit, so the wax can solidify a bit, then buff it lightly with long strokes of the rag. Repeat with another thin layer of polish.
You won't get much of a shine the first time. The temptation is to apply polish more heavily, but then it just gets sticky. Next time, you can buff it more firmly, and next time more firmly still. After a few goes, you'll have enough of a layer built up to develop a proper shine, and then it's just a few minutes of maintenance at a time. You'll never get a really good gleam off anything but black leather, but anything else will still look much better than if you don't put the effort in. Same for any leather that's not full-grain, but it still helps. And don't polish suede! Just brush it.
Mink oil every half-dozen times or so is enough to keep cracks from forming; more often than that and it'll soften the leather too much.
I get ten or more years out of a good pair of boots this way. Dress shoes are more delicate, I suppose, but they'll still do better with proper care than without. Good luck!
I've owned the same pair of Danner Mountain Light II's since 2012 and have never had them resoled or serviced. These boots have been with me to 15+ countries including trekking in India, have been on many, many hikes and all I do is clean and polish them every few months. I estimate I have walked at least 2000km in them, never had any issues. Can't recommend Danner enough.
Just as a counterpoint, I've got a cheap C720 I got 4 years ago for lightweight mucking about, and it has held up to daily use perfectly fine. I wiped ChromeOS and installed mainline Linux, which has great support for it. No problems with the keyboard, screen, power supply, or anything; I've only had to open it once, to remove the flash-protect screw, and it's survived several moves, travel, and outdoor use.
I suppose I could be just lucky, but as is I could probably use it for another 4 years without trouble. The biggest issue is the soldered in RAM which is only 2GB. But that's plenty for emacs, light firefox, and even videos since it's got hardware decoding acceleration.
How are you treating them? I'm using a 2014 Toshiba Chromebook 2, which is made entirely out of plastic, and while it's not in perfect cosmetic condition, it's still 100% functional and doesn't show any signs of impending failure. I've dropped it on hard and soft surfaces alike from table height, and I've had it under 20 or 30 pounds of other objects fairly regularly. The charging cable still makes a good connection, but I've been particularly careful not to use the Chromebook as a handle for the charger or vice versa, as I've seen so many other people doing. I'm nearly three years into using it and nothing critical has broken.
I find it amazing, but maybe I was just lucky. I bought a cheap plastic HP ultrabook a decade ago and it survived four years of being carried every day on a backpack on the bus and being taken to camping vacations. Sure, it had scratches aplenty, and a couple of broken spots on the outer rim, but it worked fine.
[OP] I've owned this laptop for almost 2 years, and it's been heavily used, first as a work laptop for a year (I used EC2 instances for 'real stuff') and then a home laptop. Still works perfectly and looks fine. We'll see how it holds up...
Anecdotally, I have never had a laptop die in over 20 years, except for the backlight failing after many years on my second-hand Toshiba T1100, and my aged HP 2510p becoming flaky after a few disassemblies to clean dust out (and attempts to use better thermal paste on the heat sink).
My Asus C100P was around $280 when it was released. It is now down to $230. The 4GB model specifically, because it is my daily driver and I do my dev work on it.
I do "actual computing" on it too, since it has way more RAM and disk than any of my cloud stuff.
If you don't mind that it's four years old at this point, it was by far the best Chromebook I ever used, and my girlfriend is still using it as her daily driver (I gave it to her because I thought it was too nice a piece of equipment to gather dust in a drawer).
It's BIOS ROM code for (some) chromebooks and chromeboxes to allow you to run a normal Linux distribution. Not something like crouton, but regular Linux booting.
I used it with an ASUS chromebox to get a cheap but decent desktop experience. It's a Celeron, but easy to upgrade the storage and memory (2 DIMM slots!).
Careful there. On some models, if your battery goes totally flat, the only way to unlock the BIOS to boot Linux involves a restore to defaults, which erases your Linux partition.
Been there. Lost a couple days of work over it. Gave away the chrome book.
I'd love to run hardware at my house. I have two modest sized servers already: one for storage and a few other things, and one for backups.
My biggest issue isn't cost of running the stuff. The problem is the network: I am limited to a 5 Mbps upload on a goddamn commercial line*. I literally don't have an option for a faster connection. No static IP, no IPv6, no SLA, no anything. Do you know how long it takes to upload 1 TB over a link like this, while still having the web be usable? Because I do, and it's not pretty.
This is why home computing isn't happening. I am not running a rendering farm or a BTC mining operation here, so just having CPU cycles is doing me no good. But being able to run small apps on a self hosted PaaS would be really great, and I can't do that, basically ever.
Yes and no. There are some in the area, but my specific house has super poor reception. Also, hosting web apps where the latency due to radios starts at 300ms is not something I really want. Wireless/cellular is OK for some consumers, but it's not great for my needs.
300ms latency is closer to geo-synchronous satellite than decent terrestrial broadband or even next gen satellite.
Last time I used a WISP, the expected latency was something like half the best case for comcast or sonic dsl, and bandwidth was 6mbps symmetric.
When set up right, latency is ~zero queuing delay plus the speed of light to a peering closet ~3 miles away.
This was with Canopy gear and unwired ltd; unwired is at least one generation newer than that now. FWIW, Musk's rainbows-and-unicorns low orbit satellite internet is targeting 25ms.
I talked to a wireless ISP in our area the other day. Latency is ~1ms/hop, so compared to 300ms it's next to nothing. Sometimes another relay is needed, and then you're good to go.
I have 1x Raspberry Pi 3 running docker that has the following containers running
- A service I wrote to control my TV
- A service I wrote that sends commands to a smart plug that turns my lamp on and off (I can then invoke it via curl and have setup Alfred workflows so I can do it from my mac)
- GOGS to store private code repos and mirror ones on github
- ZNC (IRC bouncer)
- A service I wrote to present a dashboard of common things I like to glance at (e.g. twitter feed, pocket, pinboard, the weather etc)
- A service I wrote that hooks up to Twilio to manage my voicemail
- Portainer (a gui for managing docker containers)
- A docker registry to store my custom images
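For flavor, the curl-driven lamp service above could be as small as this stdlib-only sketch; the /lamp/on and /lamp/off routes and the send_to_plug stub are my inventions to show the shape, not the commenter's actual code:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_command(path):
    """Map a request path to a smart-plug command, or None if unknown."""
    return {"/lamp/on": "ON", "/lamp/off": "OFF"}.get(path)

def send_to_plug(command):
    # Placeholder: real code would speak the plug's actual protocol here.
    print("sending", command, "to plug")

class LampHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cmd = parse_command(self.path)
        if cmd is None:
            self.send_response(404)
        else:
            send_to_plug(cmd)
            self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Then from any machine on the LAN: curl http://<pi-address>:8000/lamp/on
    HTTPServer(("", 8000), LampHandler).serve_forever()
```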
I have another Raspberry Pi 3 running Pi-Hole to block internet advertisements at the DNS level
Then I have a Raspberry Pi 1 (model B) that runs NGINX to act as a reverse proxy to services in my network that I want to be exposed. These are all protected by SSL (LetsEncrypt) and client side certificates authentication. It also runs a service that updates a DNS record on AWS Route53 whenever my IP changes as I don't have a static IP.
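The Route53 updater can be a very short script. A sketch, assuming boto3, with a placeholder hosted zone ID and hostname (checkip.amazonaws.com is Amazon's public what's-my-IP endpoint):

```python
def change_batch(hostname, ip, ttl=300):
    """Build the Route53 UPSERT payload for an A record."""
    return {"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": hostname,
            "Type": "A",
            "TTL": ttl,
            "ResourceRecords": [{"Value": ip}],
        },
    }]}

if __name__ == "__main__":
    import urllib.request
    import boto3  # needs AWS credentials configured
    # Ask Amazon's public endpoint for our current external IP
    ip = urllib.request.urlopen(
        "https://checkip.amazonaws.com").read().decode().strip()
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId="YOUR_ZONE_ID",                        # placeholder
        ChangeBatch=change_batch("home.example.com.", ip))  # placeholder name
```

Run it from cron every few minutes and the record tracks the dynamic IP.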
It's very simple at the moment, I have my phone forward all missed calls to a Twilio number, which has a webhook to my service.
The service just plays a message to the caller, gives them the option to leave a voicemail, then I call the transcription API to try and transcribe the message into an SMS (sort of like what Google Voice and Apple offer on iPhones, but I don't have an iPhone, and I believe Google Voice is only available in the US).
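A voicemail webhook along those lines returns TwiML telling Twilio to play a greeting, record, and request transcription. A rough stdlib-only sketch; the greeting text and callback path are illustrative (`transcribe` and `transcribeCallback` are attributes of Twilio's Record verb):

```python
def voicemail_twiml(callback_url):
    """TwiML: greet the caller, record, and ask Twilio to transcribe."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        "<Response>"
        "<Say>Sorry, I cannot take your call. Leave a message after the beep.</Say>"
        f'<Record transcribe="true" transcribeCallback="{callback_url}" maxLength="120"/>'
        "</Response>"
    )
```

Twilio POSTs the transcription text to the callback URL, at which point the service can forward it as an SMS.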
I plan to do more with it soon, Twilio's transcription API is a bit rubbish, I think it might be geared towards American accents so it fails quite miserably with British ones with hilarious consequences.
It's started for me too. I just got a Pi Zero W, and I stumbled trying to explain to my missus why exactly I'd purchased it. I think I sounded like a sales engineer, because I mumbled something like "cloud", "decentralized internet", and "Bitcoin". She just walked away.
Yeah, I can't bring myself to "upgrade" to a RPi3 since I know my threshold for a desktop I might regularly use is 2gb in 32bit. Upgrading to a RPi2 was a pretty marginal benefit while I'm just playing with GPIO via ssh.
Some of the clones look better but the community is too biased toward the RPi brand.
I lost lots of nerd cred when I bought an Intel Synology NAS, but it is truly a turn-key solution. 20-30W idle, Docker, VMs, and I have it set to do crash-consistent offsite backup of all that (and my data) to the cloud. It supports many storage services, vanilla rsync, other Synologies, etc.
It also lets me vpn into the house and stream locally stored music / video to my cell phone.
This took a day or so to set up.
Can't recommend it highly enough, but cost/performance/functionality are all >10x a Raspberry Pi. The only thing it's missing is HDMI out + RetroPie.
I got the DS916+. If I were buying today, I'd consider the DS1517+ (which has a faster cpu, can be upgraded to 10GBit, and supports m.2 slots for flash cache).
These are the high end home/smb choices. You can go much cheaper and get 90% of the functionality from other synology models.
Set up a RPi to flash disk images via a USB SDcard hub!
I got tired of keeping track of the cards so I threw a small pile of them in the drawer and a script to pick from the handful of images I use & flash them.
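Something like that pick-and-flash script could look like this; the image directory and device node are assumptions, and dd to the wrong device destroys data, so the real thing wants more safety checks:

```python
import os, random, subprocess

def pick_image(filenames):
    """Choose one .img file from a directory listing (random, like the drawer)."""
    images = sorted(f for f in filenames if f.endswith(".img"))
    return random.choice(images) if images else None

def flash(image_path, device):
    """Write the image to the card with dd."""
    subprocess.run(
        ["dd", f"if={image_path}", f"of={device}", "bs=4M", "status=progress"],
        check=True)

if __name__ == "__main__":
    image_dir = "/home/pi/images"   # assumption
    device = "/dev/sda"             # the hub's card slot; verify before running!
    img = pick_image(os.listdir(image_dir))
    if img:
        flash(os.path.join(image_dir, img), device)
```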
Like the parent, I tend to use them for servers/services on my LAN at home. So a VM on DigitalOcean or Vultr etc. would be on the wrong side of the firewall, and a VM on one of my 'real' computers wouldn't be available 24/7 since they are often sleeping. Plus it's nice be able to hook sensors to the GPIO pins.
My thoughts exactly. I keep seeing posts on Hacker News promoting Google Cloud through innocent-looking blogs. I don't know if that was the case here, but there was another, also from Australia, at https://news.ycombinator.com/item?id=14809102
I'm pretty sure the platform can hold its own being a Google product and all, but this reads a lot like the posts from Microsoft MVPs back in the day.
You know what would be an interesting consumer product? A $100 black box you can plug into your home network that runs email, media, and social applications with no hassle. Knock out dependencies on a dozen third-party services in one fell swoop.
EDIT: to clarify, I believe consumer hardware has progressed enough to provide an elegant solution to a slew of simple cloud services which people pay monthly for, i.e. storage. The markup on simple software that is hosted in the cloud is just too high. Now convincing investors to touch a hardware start-up with a 10ft pole...
I agree. email is practically impossible to manage for a hobby-IT person. You need to manage accidental blacklistings, spam, make sure your ISP doesn't block port 25, etc.
Look at synology or one of the other NAS solutions. If you can deal with ARM (ie, not quite enough oomph for docker+vm use) you can get close to $100 price point, and hit all the requirements you mentioned.
However, I suggest forking out a few hundred more for an intel model if you can. My only remaining windows+office instance is now a vm on a 25w box that hosts a half dozen other things. The electricity this saves vs a desktop/home-built server has easily paid for the beefier CPU.
[edit: I estimate this setup is costing something like $20/month, including power + depreciation. Cloud backup is another $10/month, and I run a $1.30 / month cloud vm to host my website.]
Yeah, I had a big hulking CoolerMaster Stacker Core i7 super home server before I ditched all my stuff and did the digital nomad thing for a while. When I got a home again, I was used to just living off a laptop, so all I had was an Apple Time Capsule for backups. Eventually I moved to a place where it wouldn't route right (no manual MTU settings), so I got a separate router and decided to get the cheapest NAS I could find to replace the backup functionality. I ended up with an ARM Synology, and now it does everything that home server did (VPN, git, rsync, backups, full Dropbox archive, IRC bouncer, etc.), and most of that was set up with a few clicks in a web UI. And the security updates are someone else's job.
Mail would be hard to host at home since that's what many spammers did. Too many measures in place to discourage it. You would need at least a proxy service to make it work reliably.
Gonna get _so_ hacked. Eventually this is gonna happen but it's gonna be a mostly cloud service, perhaps with an integrated proprietary non-customizable auto-updating chromecast/appleTV like box for media locally. And Google or Apple will sell it to you. I mean, this is what they're already trying to do, but it's not all as integrated as it oughta be.
> You know what would be an interesting consumer product? A $100 black box you can plug into your home network that runs email, media, and social, applications with no hassle.
I think ben_jones is trying to imply that the black box would BE your email server, etc. He did say "Knock out dependencies on a dozen third party services in one fell swoop.", which seems to strongly imply not relying on an email provider, social network provider, etc.
I think he's talking about _services_ not the interface to them, ie: gmail.com, dropbox.com, facebook.com (which is weird, so maybe call it IRC/Usenet?).
Who (non-business) really pays for cloud storage and email anyway? For music you pay for the copyright licenses really. Why would you want to host your own version of facebook? Who would want to join 10+ personal social platforms...
I think that using a cloud environment for development is very promising. I tried it with a Linode 8GB instance, and the compilation speed of a Scala project was the same as on a 2012 MBA. The problem is that you usually want some GUI setup for an IDE and such, and you need to tune VNC to the limits of what it can offer. Docker didn't quite work for me, due to port-forwarding hell or, probably, my own clumsy setup...
The trend over the last few years is to work locally and push changes through git.
Working remotely on a server gives you cloud abilities because you can easily switch devices and continue working.
Having a cloud service provide the GUI or editor is an option available now. But having a local editor plus SSH means you can work on your phone with a proper mobile editor, run a full IDE on your desktop, or use vim directly on the server.
The security and privacy of your code remains in your hands. Best of all worlds.
I have a dedicated server in a datacenter somewhere that has essentially become my main dev machine. It can only be connected to through a VPN that I built/manage. It's a fantastic setup, honestly.
"Ok honey, now type in gcloud compute instances create my-vm --zone us-central1-b --preemptible. But don't forget the --preemptible otherwise the bank will take our house."
"You don't need a bigger screen than that."
"Sorry kids, Google says you have to stop playing Minecraft now."
"Oh don't mind that noise, telemarketers call the server sometimes."
I've been wondering, is there some service that runs an SSH proxy that will automatically spin up a machine when I try to connect, and automatically spin it down after some idle period?
I've been meaning to try something along those lines with DigitalOcean, they have an API so presumably i can script starting a new instance and then SSH in to it; should take about 60s based on the manual route.
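A sketch of that scripting, using only the stdlib against DigitalOcean's v2 API; the droplet name, region, size, and image slug are illustrative, and the token comes from the environment:

```python
import json, os, urllib.request

API = "https://api.digitalocean.com/v2/droplets"

def create_droplet_request(token, name):
    """Build the POST request for DigitalOcean's droplet-create endpoint."""
    body = {
        "name": name,
        "region": "nyc3",           # illustrative choices
        "size": "s-1vcpu-1gb",
        "image": "ubuntu-20-04-x64",
    }
    return urllib.request.Request(
        API,
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = create_droplet_request(os.environ["DO_TOKEN"], "dev-box")
    with urllib.request.urlopen(req) as resp:
        droplet = json.load(resp)["droplet"]
        # Next steps: poll until active, grab its public IP, then ssh in
        print("created droplet", droplet["id"])
```

Tearing it down on idle would be the mirror image: a DELETE to the same endpoint from a cron job that checks for active SSH sessions.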
Yelp built something you can run on a server that will do that with Docker containers (https://github.com/Yelp/dockersh) but for you to not pay for it during the downtime, you'd need someone else to provide this as a service to you.
I have an absurdly expensive MacBook for work, and I thought about building a gaming rig, but didn't want to drop the ~$3k to get it (yes, I could build one for cheaper, but I get carried away when I'm shopping and have a weakness for best-in-class components). I started looking into AWS gaming (very disappointing performance) and discovered Paperspace, which has P5000 and P6000 16GB dedicated GPU instances.

I think it's about $0.60/hr, and I end up spending about $30/mo on my weekend gaming binges. I have 1000Mb Google Fiber (sorry), so YMMV if you have constrained bandwidth, but the P5000 is beastly and has easily brushed aside every game I've thrown at it with ultra settings at 1440p (the Parsec streaming client I use is capped at 1440p or I'd be running it at 2160p).

Eventually the costs will catch up with me, but in the meantime I can't think of a good reason to buy my own hardware (aside from the urge to tinker). Obviously for personal computing I feel differently, but for a gaming machine Paperspace is very convenient. No affiliation, I'm just super happy with the product.
Can you point to any writeups about this? The cost is low enough that I could just try it myself, but I do have one question about "controller support." I play PC Racing sims - I wonder about the support for wheel/pedals.
I'm assuming VR is out of the question due to the latencies.
I don't think VR is supported, and yeah, even a low ping would make it a vomit comet anyway.
Here's the pricing: https://www.paperspace.com/pricing
You have to sign up and then request access to the GPU instances (each instance type has to be requested individually, unfortunately; I found the GPU+ instance to be a little underpowered, and the P5000 has been performant enough that I haven't bothered to try the P6000). It can take up to 24 hours to get access. Once you have that, though, you can create an instance with the Parsec image, and it comes with all the drivers, as well as Steam and EA Origin pre-installed. The Parsec client by default starts up without admin mode, which crashes a lot in my experience, so I log in using the browser client and restart the Parsec server as admin manually. It's basically ready to go out of the box, and the longest part of the setup is just waiting for your Steam games to install.
Interesting, thanks! I got an eGPU adapter to try to play without having to buy a full desktop, since I'd have to sell it in a few months, but I still haven't got it to work. This sounds like a nice alternative. Shame they only have US-based DCs, not sure the latency will be low enough.
Agreed. On my one visit I couldn't get the page to scroll at first except with the mouse wheel, which I don't normally scroll with. That unfortunate animation sequence, where it seemed like copies of the page were being dealt to me like cards, must have messed with the focus. It would have been easier to cope with an ad!
To have someone talking from a position of simplicity and economy use such an excess of technical design to present it was a bit much.
I would be surprised if the site renders decently on a Chromebook. Ironically.
I'd pick a Raspberry Pi over an Android, just because with the Pi I could plug in an external HD. But in terms of what I actually do, I just run a full server as a NAS, and don't pay any attention to the power bill.
Also, without external keyboard+display, using a laptop as my primary computer would ruin my body in a month flat. Ergonomics are important.
This dude doesn't have any storage anywhere. I mean, I don't take many photos and even my photos use more than 50GB. I can't imagine anyone who makes a post like that requiring less than a Terabyte for all the stuff they create and/or download.
I have a friend that was doing something like that but he suddenly changed his mind when he lost his broadband internet connectivity for 2 weeks and put his "home computing" on a 4G/3G network.
Termux on ChromeOS is a terrible experience though. Basic things like ^W to delete a word will cause the Termux window to disappear. Copying is HORRIBLE, requiring you to treat your very precise pointing instrument like a tiny finger -- you have to move the mouse somewhere, hold down the button for a few seconds, move both ends of the selection, then click "Copy". Ctrl-<number> does things in ChromeOS rather than going to the terminal, so no quick switching of application windows. You can't run two instances of Termux on Chrome OS, which is a major bummer if you have multiple monitors.
Really, most Android apps are a terrible desktop experience. They handle the mouse poorly, they handle multiple monitors poorly, few of them at this point can deal with being resized to arbitrary sizes (i.e., you get only phone-aspect-ratio size or full screen), and they handle co-existing with other running processes poorly (e.g., if you change focus from a window playing a video to another window on a different monitor, the video pauses).
I'm running Termux on a Samsung Chromebook 3. Can you explain what you're running, so we can try to see why it doesn't work for you?
Also, don't forget, you can run Termux in the background, with an sshd, and then run any SSH client you want to talk to it --- which puts the copy and paste into your control.
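For anyone trying that route, here's a minimal sketch of the sshd setup (using the stock Termux openssh package; note Termux's sshd listens on port 8022 rather than 22, and the exact address you connect to depends on your ARC networking):

```shell
# Inside Termux: install and start an SSH server (listens on port 8022 by default)
pkg install -y openssh
passwd    # set a password for the Termux user
sshd

# From your SSH client of choice -- Termux is single-user, and to my
# understanding its sshd accepts any username with password auth:
ssh -p 8022 <termux-ip>
```

Once you're in over SSH, copy and paste belong to whatever terminal client you chose, not to the Android window.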
Also, Termux is open source... So, if you have ideas about Copy and Paste, you can try to implement them.
To switch application windows in Termux, Control-Alt-C makes a new window, Control-Alt-N goes to next, Control-Alt-P goes to previous. It works great, and is super quick.
You can run multiple Termux Application Windows, and then you can run multiple SSH clients, so multiple monitors will work.
I know some of these are kind of hacks, but I'm just trying to be helpful.
Android Apps do handle the mouse terribly, no argument.
I'm running on an Acer Chromebook 15 (x86, 4GB of RAM).
For me Control-W closes the Termux window.
For me Control-Alt-N switches between sessions, which all live in the same physical window, and I can't figure out how to open more than one of those windows.
It seems like your suggestion is to use Termux just to run an SSH server locally and then use another, non-Android SSH client, to deal with the fact that Android apps on ChromeOS work terribly UI-wise. That's a good idea, it leads me to my next problem:
It lives in a separate IP address space from ChromeOS... I have some weird IP address inside the ARC that is not the same IP as ChromeOS... I did not try to SSH to that address from ChromeOS though; does it work?
It is in a different space, yes, but you can ifconfig to find its IP and go from there. I've run webservers from Termux and browsed them from Chrome. And I've read about people doing the SSH thing.
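Roughly what that looks like (interface names and the 100.115.92.x range are from my setup -- treat those specifics as guesses for yours):

```shell
# Inside Termux: find the container's address
ifconfig 2>/dev/null | grep 'inet '   # look for a non-loopback address, e.g. 100.115.92.x

# Serve the current directory as a quick reachability test
pkg install -y python
python -m http.server 8080

# Then visit http://<that-ip>:8080/ from Chrome on the same machine.
# From a *different* machine it usually won't work as-is, since the ARC
# address is (as I understand it) NATed behind ChromeOS.
```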
To be honest, I still can't browse a Termux server from a different machine, and I've spent a while trying.
This completely ignores the cost of setting up your cloud VMs, the cost of downtime if your Internet is down, etc.
Sure, an actual home server might be more expensive in electricity alone, but you can keep your VMs there 24/7 for no extra cost or hassle.
If spinning up new instances takes you a day a month in extra admin work, that alone eats way more than the $5 saved in electricity.
Not to mention that you actually own your server, etc.
Can you expand on that? It's an interesting parallel that you draw. I'm going out on a limb and assuming you mean that the small price per month is eventually going to implode, and engineers aren't going to be making as much money in the future, similar to how a musician gets a very small portion of the cost of a music stream?
The only thing I meant by that was the relatively small revenue created for the performing musician when paid out by stream, vs when paid out by royalties of album sales.
But when the people don't want albums, what do we expect?
Today, we listen to music more than ever, but only those supported by a large audience really see any benefit from the streaming model. People just aren't spending the kind of money they used to on music, and it's affecting the music itself. [1] Years ago (for a solid 10-20 years) your average unsigned regional band could do well on CD sales alone. Yeah, it was up to them to make a good album, but the sale of 100 albums (which could easily happen over the course of one or two good nights), assuming 100% ownership, could give the band $1,000-$1,300 to split. Today they'd need about 1,800 downloads to achieve the same thing (which people don't even really do anymore), or ~70k-700k streams just to hit that same number. [2]
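If anyone wants to check those numbers, here's the back-of-the-envelope version (the per-unit rates are my assumptions: ~$12.50 per CD at the merch table, ~70% of a $0.99 download, and $0.0018-$0.018 per stream):

```shell
awk 'BEGIN {
  target = 1250                              # midpoint of the $1,000-$1,300 figure
  printf "albums:    %d\n", target / 12.50   # ~$12.50 per CD, 100% ownership
  printf "downloads: %d\n", target / 0.70    # ~70% cut of a $0.99 download
  printf "streams:   %d to %d\n", target / 0.018, target / 0.0018
}'
```

Which lands right around the 100 albums / ~1,800 downloads / ~70k-700k streams in the comment.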
Ironically, I just heard of a Netflix suit suggesting something similar for movies[3]. I'd imagine if this were to happen, we'd see the same thing in Hollywood: a tightening of belts high up the food chain, which may or may not affect A-list actors, but a decimation of the lower market. AMC has a nice quote (although they'd naturally be opposed to such a thing, so take it with a grain of salt): "...[the] price level is unsustainable and only sets up consumers for ultimate disappointment down the road if or when the product can no longer be fulfilled."
I really appreciate the reply! You definitely bring up great points. I suppose that's why we see the bigger names like Disney, Blizzard(Activision), EA, etc building out their own distribution/streaming channels compared to utilizing Netflix/Steam to avoid that 30% and keep a bigger chunk of the revenue. Obviously game sales are different than streaming TV/movies but I'd expect similar economics to be in play.
I think you're right. But I don't think casual consumers want that. I can't wait to explain to my 4 year old that Disney movies aren't on Netflix now.
I think what went wrong with music was the initial value set, which Davids and Goliaths alike then competed into extinction. Steve Jobs valued a song at $0.99 eleven years ago, and we've yet to adjust that for inflation. It's only gotten worse with streaming valuations. The real problem now isn't that we don't listen to albums anymore (OK, maybe that's half the problem) or that we don't have physical music; it's that Steve Jobs valued it too low, and that a song that took $2,000,000 to produce commands the same $0.99 price point as a song that took $0 to produce.
I'm guessing this was an intentionally snarky comment, but the article includes the cost of buying a Chromebook in the figures it provides so "buy a decent laptop" would have a monthly cost associated with it.
In addition, Starbucks -- at least the one near me -- doesn't allow you to occupy a seat without purchasing something, and since the Internet service offered there is subsidized by coffee (very expensive coffee, at that), the cost of using Starbucks[0] daily -- 5 business days a week, over 4.5 weeks a month, assuming you get away with buying only one Venti regular coffee at USD$2.45 -- is about the cost of broadband at home, not including fuel to drive there and back (assuming you didn't walk/ride a bike)... and there's no preemptible VMs included with the brew.
[0] I skipped library internet access because, at least for me, the performance is so bad at my public library I wouldn't consider that viable Internet service. And it smells like books there.
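For what it's worth, the coffee math above works out like this (one $2.45 Venti per visit, 5 days a week, ~4.5 weeks a month):

```shell
awk 'BEGIN { printf "$%.2f/mo\n", 2.45 * 5 * 4.5 }'
```

Roughly $55 a month -- in the same ballpark as a home broadband bill, as claimed.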
It wasn't snarky at all; I have no idea why people downvoted it. I am extremely poor, and Starbucks and libraries are great places to use the internet and get things done. I've never encountered a Starbucks that made you buy something, but that doesn't matter because free wifi is everywhere: just find a Safeway, a cafe, or any other such place. I actually prefer the slower speeds you typically find in those places because they are perfectly adequate for productive things -- downloading documentation, pushing to GitHub and so on -- but they make wasting time on YouTube very painful.
Probably because you didn't include the laptop's price. If you spent $300 and it lasts you four years, that's already $6.25/month. Sure, you get internet access for free, but so could the author; it's not like you can't take a Chromebook to Starbucks or the library.
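The amortization, spelled out (using the $300 / four-year figures above):

```shell
# $300 laptop spread over 4 years of monthly "rent"
awk 'BEGIN { printf "$%.2f/mo\n", 300 / (4 * 12) }'
# prints $6.25/mo
```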