>Carl August, Grand Duke of Saxe-Weimar-Eisenach, Residenzschloss, Weimar (by 1804–d. 1828);
>by descent to Wilhelm Ernst, Grand Duke of Saxe-Weimar-Eisenach, Residenzschloss, Weimar, later Schloss Heinrichau, Lower Silesia, Germany (now Henryków, Poland) (1901–d. 1923);
>his widow, Feodora, Grand Duchess of Saxe-Weimar-Eisenach, Schloss Heinrichau (1923–1929;
>sold in May, 1929, to Kahlert & Sohn);
>[E. Kahlert & Sohn, Berlin, 1929;
>sold on December 14, 1929, for $135,000, to Sir Joseph Duveen for Mackay];
>Clarence H. Mackay, New York (1929–d. 1939; his estate, 1939, inv. no. A-17;
>sold through Jacques Seligmann & Co. on May 15, 1939, to MMA).
Unfortunately, this does not answer "why did it leave France?"
However, the book "Merchants of Art, 1880-1960: Eighty Years of Professional Collecting" (1961) by the rather famous art dealer Germain Seligman offers this missing link:
>Parade armor of King Henri II, embossed, damascened and gilded. Later presented by King Louis XIII to Bernhard von Weimar.
In addition, I think token efficiency will continue to be a problem. So you could imagine very terse programming languages that are roughly readable for a human, but optimized to be read by LLMs.
That's an interesting idea. But IMO the real 'token saver' isn't the language keywords; it's the naming of things like variables, classes, etc.
There are languages that are already pretty sparse with keywords. E.g. in Go you can write `func hello() string`, with no need to declare it public, static, etc. So combining a less verbose language with 'code-golfing' the variable names might be enough.
I'm not an expert in LLMs, but I don't think character length matters. Text is deterministically tokenized into byte sequences before being fed as context to the LLM, so in theory `mySuperLongVariableName` uses the same number of tokens as `a`. Happy to be corrected here.
Running it through https://platform.openai.com/tokenizer: "mySuperLongVariableName" takes 5 tokens; "a" takes 1. "mediumvarname" is 3, though. ("though" is 1.)
You're more likely to save tokens in the architecture than the language. A clean, extensible architecture will communicate intent more clearly, require fewer searches through the codebase, and take up less of the context window.
I think there's a huge range here - ChatGPT to me seems extra verbose on the web version, but when running with Codex it seems extra terse.
Claude seems more consistently _concise_ to me, both in web and cli versions.
But who knows, after 12 months of stuff it could be me who is hallucinating...
It's not verbose to some of us. It is explicit in what it does, meaning I don't have to wonder if there's syntactic sugar hiding intent. Drastically more minimal than equivalent code in other languages.
Code readability is another, correlated factor, but more subjective. To me Go scores pretty low here: the code flow would be readable were it not for the huge amount of noise from error "handling" (mostly syntactic ceremony, often failing to properly handle the error case, and people are so desensitized to these blocks that code reviews are more likely to miss them).
For function signatures, they made it terser at the expense of readability, in my subjective opinion. There were two very mainstream schools of thought regarding type signature syntax, `type ident` and `ident : type`. Go opted for a third that is unfamiliar to both camps, while not even having the benefits of the second syntax (e.g. easy type syntax; subjectively, that `:` helps the eye "pattern match" these expressions).
Every time I hear complaints about error handling, I wonder if people have next to no try catch blocks or if they just do magic to hide that detail away in other languages? Because I still have to do error handling in other languages roughly the same? Am I missing something?
Exceptions travel up the stack on their own. Given that most error cases can't be handled immediately at the call site (otherwise they would be handled there rather than returned as errors) but only higher up (e.g. a web server deciding to return an error code), exceptions save you a lot of boilerplate: you only need the throw at the source and the catch at the handler.
Meanwhile Go requires some boilerplate at every single level.
Errors as values can be made ergonomic, there is the FP-heavy monadic solution with `do`, or just some macro like Rust. Go has none of these.
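The contrast the thread describes can be sketched in a few lines of Python; the function names are made up for illustration, and the tuple-return style stands in for Go's `(value, err)` convention:

```python
# Exception style: the error travels up the stack on its own.
def parse_exc(text: str) -> int:
    return int(text)          # raises ValueError at the source

def handler_exc(text: str) -> str:
    try:
        return f"ok: {parse_exc(text)}"
    except ValueError:        # one catch, at the level that can decide
        return "bad request"

# Errors-as-values, Go style: every intermediate level repeats the check.
def parse_val(text: str):
    try:
        return int(text), None
    except ValueError as e:
        return None, e

def middle_val(text: str):
    n, err = parse_val(text)
    if err is not None:       # the per-level boilerplate under discussion
        return None, err
    return n * 2, None

def handler_val(text: str) -> str:
    n, err = middle_val(text)
    if err is not None:
        return "bad request"
    return f"ok: {n}"
```

Rust's `?` operator collapses that `if err is not None: return` block into a single character while still keeping errors as values, which is the ergonomic middle ground the comment points at.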
You’re not missing anything. I’ve worked with many developers who are clueless about error handling, treating it as a mostly optional side quest. It’s not surprising that such folks see the explicit error handling in Go as a grotesque interruption of the happy path.
I would be very interested in this research... I'm trying to write a language that is simple and concise like Python, but fast and statically typed. My gut feeling is that anything more concise than Python (J, K, or some code-golfing language) is bad for readability, but so is the verbosity of Rust, Zig, or Java.
People make fun of me but I'll never skip a chance to complain about how large these phones are. I hate it so much. I have a standard iPhone, not a max, and it causes real pain in my wrist if I use it too much. Was honestly thinking about downgrading to the last SE model even though it's several years out of date.
I want to +1 every comment in this thread. Phones are too big now. I don't understand Apple's weird obsessions: first trying to make all their phones so thin you could cut your hand holding one, then making them too big to fit comfortably in your pocket unless you're walking around in camo pants.
You know what I would like? When I tap on the search and type the first few letters of an app on my phone, and the app appears, and I click on that -- I would like the app to open. Only happens about half the time now. UI is getting worse with every release.
Many people used low iPhone mini sales to point at the idea that small phones aren't popular anymore.
They might be right, but the "Mini" was more like a return to the size of the 6 & 8; not the same size as the 5 or prior SE. So for me it was still too large.
The "usable screen" is where my thumb can reach, not whatever idea people have in their heads about the total size of the phone or anything, truthfully.
Anyway; hit recognition of the keyboard is so far behind where it was in the iPhone 4/5 generation that I doubt modern iOS would even be functional; even if you excused the padding issues that would inevitably be an issue.
> Anyway; hit recognition of the keyboard is so far behind where it was in the iPhone 4/5 generation that I doubt modern iOS would even be functional; even if you excused the padding issues that would inevitably be an issue
Right?? It is worse than I remember, right? I'm not crazy.
POS Apple just made me upgrade my iPhone Mini to 26 so that I could pair my new Apple Watch, because I just broke the old one.
I wasn't sure I wanted another Apple Watch, but it was the easiest thing to buy, and I don't have to figure out how to transfer all the data and set it up somewhere else.
But I definitely regret going the "easy" way; iOS 26 is truly awful, what the fuck.
I'm going to figure out what fitness/sport watch I really want to use next because I doubt I'll be sticking to iPhone with what they have on offer these days...
Luckily, hearing all the complaints early adopters of 26 had, I disabled auto updates on my SE. Since you can't go back to a previous iOS version, leaving it on is a bit risky in general.
I recently switched from an iPhone 16 to an Air and my experience is the opposite: I type far more accurately on the Air (even with both dictionaries reset and no screen protector that could make the touch less sensitive). I don't know why.
> The "usable screen" is where my thumb can reach, not whatever idea people have in their heads about the total size of the phone or anything, truthfully.
In the early days of the phablets I had an observation that has mostly held true all these years later. At the time I noticed you could accurately predict whether someone wanted the large or small form factor based on their usage patterns. Did they tend to use their device while sitting down? Or did they tend to use their device while on the move? This indicated whether or not they typically used 2 hands vs 1 hand.
It turned out the two-handers dominated the market, unfortunately for people like you and me.
Until you hit 'Search' at the bottom right, it shows a preview result set that can differ completely from the one you get afterwards. Because two result sets were not enough, they added a third, 'Siri Suggestions', as the top row, which lives not in the Search settings but in the ones for Siri. The iOS docs[1] misname the toggle 'Suggest App' when it is actually called 'Suggest Apps Before Searching', which only the iPadOS docs[2] get right. Did I mention they cut useful info from the iOS 26 version[3] and changed the URL?
Apple follows the market. There just aren’t many people who want small phones, HN notwithstanding. If they sold like hotcakes they’d have a full lineup.
And I kind of get it. Philosophically I want a small phone. Realities of age and eyesight forbid.
The market is basically people who don’t read or watch videos on their phone, and who have excellent eyesight, and who don’t care about having the best cameras. 100% legit market segment, but that Venn intersection is too small to be worth it.
I don't agree with this. In my view, there are plenty of cases where the product changes are shoved down our throats.
I think the problem is that the product folks don't actually listen to the market. They read Jobs' biography and are convinced that they will tell their users what product they will like and that they will see the light later on.
The sad reality is: they are not Jobs (and even he was not faultless). So, we get Mac like Windows interfaces, we get mail clients losing features, we get AI in every single app you see, etc.
The iPhone Air isn’t popular either, and yet here we are. They preferred releasing a huge thin phone to a tiny thin phone. Even if the percentage of customers is small, there are still millions of potential Mini customers.
I interpret the same facts differently: I see Apple realizing that the SE form factor doesn’t sell enough to be worth it, and trying something different with the Air. It sounds like the Air will likely go the way of the SE, with occasional updates but not every year.
Apple is very good at market research and understanding users… but not perfect. I think they genuinely believed the Air would sell a lot more than it did.
And “millions” is not necessarily a lot. Apple sells 250 million phones a year. A SKU that sells 3 million is a distraction with much lower ROI against R&D than a mainline phone. It takes just as much engineering to create and as much manufacturing to produce, so fixed costs are spread among many fewer units.
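The fixed-cost argument can be made concrete with toy numbers. The dollar figure and unit volumes below are hypothetical, chosen only to show the shape of the arithmetic:

```python
# Hypothetical figures: spreading one model's fixed R&D/tooling cost
# across mainline vs niche sales volumes.
fixed_cost = 1_000_000_000       # assumed $1B development cost per model
mainline_units = 50_000_000      # assumed mainline SKU volume
niche_units = 3_000_000          # the "3 million" niche SKU from the comment

print(fixed_cost / mainline_units)   # fixed-cost burden per mainline phone
print(fixed_cost / niche_units)      # burden per niche phone, ~17x higher
```

With these made-up inputs the niche phone carries roughly $333 of fixed cost per unit versus $20 for the mainline one, which is the ROI gap the comment describes.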
Am old. Am experiencing presbyopia. Am still very much tied to my mini on the default font size. When I can't read something I just pinch/zoom. Meanwhile it's easy to hold & use in one hand while walking down the street, and fits into normal sized pockets.
Why do almost all phones have to be in that narrow band of 6.5 to 6.9 inches?
I wish there were more size choices on both ends of the spectrum.
While most people prefer more choice below 6", I would like some choice above 7", since I keep my phone in my belly pouch, and never use it one-handed. My current Huawei Mate20X is actually ok at 7.2" (but worse than the Mediapad X1 I had before which at 7" was actually wider) but is way behind on Android updates, and will soon stop running my banking app.
While I agree with the spirit of the thread and dearly love my mini, I think this reasoning doesn’t account for a substantial reduction in bezels: my iPhone 5S had more than a centimetre of black bars above and below its 4" display (altogether it was 5.4" in diagonal), I bet those phablets you mentioned had even bigger bezels and were closer to modern 8.5" phones.
Yes. It's Apple's product-first philosophy that Steve Jobs repeated again and again:
"A lot of times, people don't know what they want until you show it to them."
"Some people say, 'Give the customers what they want.' But that's not my approach. Our job is to figure out what they're going to want before they do."
"You can't just ask customers what they want and then try to give that to them. By the time you get it built, they'll want something new."
"If I had asked customers what they wanted, they would have said 'a faster horse.'"
Is that the direction of causality, or is it the other way around? Maybe people buy larger screens because they want to watch Netflix or TikTok on their phone more comfortably than on smaller screens. I do love small and light phones (an A40 right now), but I watch movies on a tablet. If I were often on the move or sharing a home with many people, maybe I would use a larger phone.
I dove into the niche world of small phones recently while looking for a replacement for my malfunctioning Pixel 4a (which is apparently now considered a compact phone). There are a few small manufacturers in China making some, with 4-inch or 5-inch screens, like Aiphor or Unihertz. And by "small" I mean "they use Kickstarter to fund their R&D" small.
Other than that... Nobody's really bothering with compact phones anymore, in the US or in the rest of the world. Bummer.
> Nobody's really bothering with compact phones anymore, in the US or in the rest of the world. Bummer.
And the worst thing is that app developers don't bother testing their apps on small phones. So even if someone produced a small phone, many apps would be broken on that UI. So there's no way back.
PS 4 inch is not a small phone. iPhone 4S had 3.5" display and it wasn't small, it was normal. Small is something like 2" screen I suppose. All modern phones including these "iPhone Minis" are egregiously huge.
I would not go as far as calling the iPhone Minis "egregiously huge", keep in mind that screen size is not a great measure for phone sizes across different generations. You could easily fit a 4+ inch display into the form factor of the 4S with modern technology, the bezels on those phones were huge. Unless my math is off, the housing of the 4S has a diagonal of just over 5 inches.
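The "just over 5 inches" claim checks out against the 4S's published housing dimensions (115.2 mm × 58.6 mm):

```python
import math

# iPhone 4S housing: 115.2 mm tall, 58.6 mm wide (published spec).
diag_mm = math.hypot(115.2, 58.6)
diag_in = diag_mm / 25.4
print(round(diag_in, 2))   # about 5.09 inches, so "just over 5" is right
```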
Yep. Aiphor's BlueFox NX1 with 4 inch screen is roughly the same size as original iPhone, but has a larger screen (iPhone had bigger bezels and the home button underneath). To me it feels a bit too small for things like typing/texting for example.
Unihertz Jelly Star has 3 inch screen, that's way too small for me.
We do, and it is a pain. It is incredibly easy to defeat any kind of design or in fact HID guidelines by cranking text size to the max on these smaller devices.
> Screen size is area (x^2) and battery size is volume (x^3). As battery life is a critical feature, a bigger screen supports (a nonlinear) better battery life.
This does not square with Apple's unending obsession with making phones as thin as possible, which is doubly stupid when it makes them so fragile that the first thing you do after taking one out of the box is wrap it in a thick rubber shell.
What obsession about making thin phones? iPhones are pretty thick and have been that way for years. The Air being an outlier, of course, but it's an intentionally thin phone in a lineup of thick and heavy ones.
I think it’s even better than that. Your cellular modem (on all the time) scales at O(1) with phone size. Same for on-board tasks that do not involve the screen. Powering your RAM (also on all the time) is similar, but larger (more expensive) phones may tend to have more RAM.
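The area-vs-volume argument can be sketched as a toy model. All the wattage and capacity numbers below are assumptions picked for illustration, not measurements:

```python
# Toy model: scale every linear dimension of a phone by s.
def battery_hours(s: float,
                  screen_watts: float = 1.0,   # assumed screen draw at s = 1
                  fixed_watts: float = 0.5,    # modem/RAM: roughly O(1) in size
                  battery_wh: float = 3.0) -> float:
    # Screen power grows with area (s^2); battery capacity with volume (s^3);
    # the fixed draw does not scale at all.
    return battery_wh * s**3 / (screen_watts * s**2 + fixed_watts)

print(battery_hours(1.0))   # baseline runtime in this toy model
print(battery_hours(1.2))   # a 20% larger phone gains superlinearly
```

Because the constant modem/RAM term stays in the denominator while capacity grows cubically, the bigger device wins by more than the bare s²-vs-s³ ratio suggests, which is the point of the comment above.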
I have found the iPhone Air much easier to hold than the iPhone 13 Pro it replaced because of how light it is, even though the iPhone Air has a bigger screen.
The 17e weighs roughly the same at a smaller size, and the mini weighs significantly less. Not to mention the first SE, compared to which even the mini is heavy. Yes the Air is lightweight compared to the Pro, but that’s a low bar.
The other thing with the Air is that you can’t really use it one-handed, which is what most people who like small phones are after, besides pockability.
The first SE was the best form factor I've ever owned.
Incredibly small. Incredibly light. Pretty thin, even in a case. Had a headphone jack, Lightning and Touch ID.
The only thing I like about the new iPhone designs is the action button. Having an automation which automatically turns silent mode off or on based on whether I'm home or not is pretty cool. You can't do that with a physical switch.
I don’t think anyone should make fun of you for it but I’m in the opposite boat. I’m so glad that they make the pro max variants because most smartphones are so small that it hurts my fingers to bend them in the unnaturally inward way it requires to hold and interact with them.
It wouldn't be so bad if both options were available. By all means, have your giant pro max or whatever if you want, but that shouldn't be the only reasonable option.
I switched from a pixel 3 to a pixel 9 pro over a year ago, and I still miss the smaller form factor. the pixel 3 really was the perfect size for me and I am sad I can no longer get a smallish phone with a high end processor.
I switched from Pixel 4a to Unihertz Max (5G phone with 5 inch screen from a small Chinese startup). Love the form factor, I can keep the phone in my front pants pocket again, next to my keys or wallet. I'm somewhat reluctant to put anything sensitive on that phone (like my email), but happy overall.
I still have my Pixel3. I use it without a SIM for random stuff, and miss the small form factor. It is half the thickness of a Pixel 10, my current phone!
I’m still running an SE2020. I was expecting the latest update (with liquid) to be the death of it. But performance has actually improved significantly! Very unexpected.
Funnily, the large display is the most important thing for me. I find my efficiency directly proportional to display size (which holds for laptops too).
If a 30 second task can be done in just 20 on a device with a larger display, that's absolutely worth it for me.
Also larger device tends to imply longer battery life too.
If the task can’t be done in a few taps I feel I’m better off opening a laptop anyways.
However the market agrees with you so I must be missing something. I used to think it was driven by media consumption on phones, and that I try to avoid, but this isn’t the first time I have heard people tout phone productivity gains from a slightly larger screen.
The expression 'fat fingers' refers to the phenomenon where users (including myself) lack the eyesight and fine motor skills required to type accurately on a small keyboard, so a slightly larger display makes all the difference.
Perhaps you simply have those fine motor skills (and good eyesight), so a larger device isn't necessary to prevent typos and remain productive.
I was able to thumb type at high speed and accuracy on the 3.5 inch iPhones. On modern iPhones, I produce more typos than ever, because apparently Apple thinks it knows which key I meant to hit better than I do, even with all the autocorrect and suggestions turned off.
I've banned social and don't use my phone much anymore, so it's less of an issue than it used to be, but it's really frustrating when I'm clearly hitting the right key and it insists on pretending I hit an adjacent key.
It’s so strange. Like, the obviously correct thing is to have a small ML model that learns the user’s typing patterns, which of their own typos they fix, which auto- and suggested fixes they reject, what rare, made-up, and jargon words they use, what acronyms they use, etc.
Instead, after 20 years of iPhone usage, I am not allowed to type the names of projects I use all the time without fixing the autocorrect every time, or (as you say) carefully hitting the left side of the F key because dead center will produce a G.
My preferred conspiracy theory is that larger, brighter screens hold attention better, so everyone involved in the whole “user experience” (phone manufacturer, application developers, advertisers, etc.) prefers (whether they consciously realize it or not!) phones to have a larger screen. Smaller phones make fewer demands; who would want to make a device like that?
Yes I can just print out directions on Mapquest before I leave home, tell people to page me and I will call them back from the nearest pay phone, carry around my Walkman and my Polaroid camera with me.
Have you ever thought that with 80% of web traffic coming from mobile, you might be the outlier?
What next? The old Slashdot meme “I haven’t watched TV in 20 years. Do people still watch TV?”
I said you don't have to do every task, not do no tasks.
> Have you ever thought that with 80% of web traffic coming from mobile, you might be the outlier?
Wow, snark too. In recent years, I've taken a much more luddite stance against mobile device usage for my own mental wellbeing. Maybe other people should follow suit.
"You should do your taxes on the train". No, I don't think that I will. You're free to stress yourself out like that. Have fun.
So park_match is the arbiter of what tasks should and should not be done on your phone?
> "You should do your taxes on the train". No, I don't think that I will. You're free to stress yourself out like that. Have fun.
I along with 90% of the taxpayers in the US take the standard deduction - meaning my taxes are stupid simple.
I logged into the TurboTax app, it offered to download my W-2s, I answered five questions, entered the date I wanted the IRS to take out the taxes we owed, and we were done. I don’t even have to file state taxes in the state I live in.
How would that have been easier from a computer? In fact it would have been harder, since the other option for submitting my W-2 was to take a picture of it.
I believe the GP was talking about trying to do “real work” on a phone, which is something many people try to do — but which many others find a repugnant idea, as they currently use the excuse of the impracticality of doing work on a phone as a lever to push back on letting work intrude on their personal life.
Have you ever thought that a lot of people work remotely and don’t sit at their desk all day? I have deliverables and deadlines to meet like everyone else. But sometimes I would rather go for a swim in the middle of the day in the heated pool while the sun is still out (a benefit of living in Florida in the winter) and work late while staying contactable (wearing my watch), or go to the gym (downstairs) during the day. Business travel is also a thing (much less than it used to be), as is working with people in different time zones, where I’m not going to refuse to answer a message from a coworker in India if they need me.
It’s a fair trade off. My company gives me a lot of leeway during the day and I am flexible about time zones.
Is this really a driving factor for people? If I anticipate tasks that I can't wait to get back to a good work environment to do, I'll bring my laptop and tether on my phone. It's a fantastically more productive setup than trying to ssh in via a phone keyboard or even write a long email. 1 inch extra on the phone screen diagonal won't move the needle there for me.
It's not feigned. I'm astonished to learn how hard people will work for the (seemingly to me) false convenience of doing things on their phone which would be (to me) much more straightforward to do on a more suitable device.
So I tend to assume that these stories are often the outliers, and that my personal experience is more common. I recognize the fallacy, and I suspect we're both wrong and we're both right. I just honestly don't know which one of us is more of which.
It probably comes down to what kind of work we're talking about. The work that I do (or the way I do it) could not, I believe, be done effectively on a phone or tablet most of the time. I work with people whose work can be done there, and there are probably more of them than there are of me. But that does not mean I could become one of them.
(addressing your comment on another subthread): if music, camera, and web are a person's "work", then sure. But that does not resemble "work" for me in any way.
Again, you can look at the worldwide penetration of cell phones vs laptops, where most web traffic comes from, the amount of resources spent on mobile development vs desktop, the amount of revenue globally of phone sales vs PC sales, etc
I also don’t spend all day working and I definitely don’t take out my laptop when I’m not working
Worldwide is not relevant, and mobile-vs-desktop dev is not relevant.
Mobile-vs-web dev is probably a better metric. And developed, mature markets only. Anything else introduces the second- and third-generation tech gap inconsistencies.
> Anything else introduces the second- and third-generation tech gap inconsistencies
This is completely responsive to your thread, unless you think the fact that other countries use their phones more than the US does is some type of signal that they are third-world countries.
Only about 70% of Americans even own a laptop[1]. Factor in many of those being ancient with 15 minute battery life, plus user preferences… it’s hard to see how that could be the majority use case.
It's also generational. My 18-year-old sister-in-law is now applying for colleges, and the word "application" immediately made her look for an app. That the whole process happened on a (not mobile-friendly) website was rather surprising to her.
I am 51. The amount of Ludditism on HN shouldn’t come as a surprise to me. But it does. Most older 70+ year old people I know don’t own a computer at all and would never use one. But they do know how to get to things they need on their phones.
It's not feigned ignorance, it's disbelief that people are comfortable working in such an inefficient and frankly unpleasant way.
Can I file my taxes on my phone? Probably. But I could also set myself on fire, and I think that might be more fun. Why would I not want to use a tool that is 100x faster and 1000x easier to use for any task more complex than writing a sentence?
I'm a developer. I've heard of developers SSH'ing from their phone and developing that way. It's impressive, in the same way removing all your fingernails is impressive.
Really? I did file my taxes by phone. It took me all of five minutes.
90% of taxpayers claim the standard deduction, meaning their taxes are really simple.
I launched TurboTax, it offered to download my and my wife’s W2s, I clicked through a few buttons on a wizard and I was done. It had all of my information from the prior year so it already knew my employer.
As for speed, have you compared the fastest iPhone to a low- to mid-range x86 PC? The latest A-series chips in the iPhone are faster in single-core performance than an M1 MacBook Air, which is no slouch. But all that is beside the point. How fast a computer do you think you need to file taxes? There was tax-filing software for the 1 MHz Apple //e in 1986. You just had to print it out.
I entered maybe one number?
I live in a state without state income taxes, so I didn’t even have to file a state return.
FWIW, I also shopped for, did all of the paperwork before closing, for the house we had built in 2016 from my phone.
The things that require more than a few taps to do aren't things that need to be done at a moment's notice. Those things can wait until I'm at my laptop.
Just Thursday, I left home at 6 AM, got in an Uber, waited at the airport, got on a plane for an hour and a half, waited at another airport, got on another plane for four hours, took an Uber to the Airbnb, and while I was out to dinner that night, my wife and I were planning a trip we were taking during the summer.
Are you suggesting that I just queue everything up until I set my laptop up?
Again, you realize you’re the odd one out, right, with most activity these days taking place on mobile?
Is there anything you need to do during that time? Or are you looking to fill that time with whatever to keep you occupied and enjoy whatever?
If it's the former, you lead a very different life from me. There are very few things in my life that show up and require immediate action (or action within 24-ish hours for that matter. Most things can wait). If it's the latter, I try to fill that time with reading.
Again, are you so much in the HN bubble you don’t realize that most people don’t wait to get home to their laptop (if they even have a laptop) to get things done in 2026?
Is it really that hard to look at stats and realize that you might not be the normal one?
I'm sure they do it that way. I'm also not convinced there's any actual need to do it that way.
You also didn't answer my question. Nothing in your travel scenario there, if I were in your shoes, would need me to use my phone for more than a few taps per actual task, while the rest of my phone use would go to mindless browsing or reading. What specific tasks are you imagining popping up here that I would then queue to my laptop?
I'm not trying to say my way is superior. On the contrary, I'm asking what use cases you have that you are unable to solve. If you have a genuine need to send emails from your phone at a moment's notice, then I can't argue with that; if you can't wait to respond to the emails you receive, there's nothing else to really do about it. That's why I'm asking what needs you have. I'm trying to better understand your situation, trying to put myself in your shoes.
But if you have no desire to actually respond to my inquiry, I shall remain in the dark.
> Yes you will if you think most communication personally or even work related is happening via email…
The same principles apply to Slack, Teams or whatever else you may use. I don't do work outside of work hours, so what would I know. Email was just the example I thought of in the moment. Again, I'm asking you a question out of a desire to better understand your situation.
Personal correspondence doesn't take many taps to do. It's rarely more than 25 characters at a time in my experience.
> You know sending email via mobile has been popular since 2003 right?
'Sending' and 'popular' are doing some pretty heavy lifting here. Reading, sure, I'll buy that. But sending? I'm not sure sending emails longer than two sentences from any device without a keyboard has ever been popular, for any value of 'popular'. It's probably more popular than ever given that touch keyboards make it reasonably possible, but James S. Casual isn't sending a lot of emails from his phone, through the sheer power of not sending many emails to begin with.
And 'popular' for that matter. Possible, sure, but how many people ever even had a mobile device that could send email before the iPhone came out?
I'm sure sarcasm and implying I'm stupid are great ways to convince your interlocutor, or the unseen masses for that matter.
I’m not implying you are stupid. I’m saying straight out that you’re feigning ignorance (i.e., not that you are ignorant) and that you know how the world works in 2026.
Myself personally, I work remotely. I might be running errands during the day and still be monitoring Slack so I can be on a call at 6 or 7 at night with someone in another time zone.
I also travel for work - consulting - and travel personally during the work day and may work after I land. Even if not for work, do you wait to get to your computer to respond to text messages? Check HN?
Believe it or not, I'm not feigning ignorance. I just lead a very different life from you.
> Myself personally, I work remotely. I might be running errands during the day and still be monitoring Slack so I can be on a call at 6 or 7 at night with someone in another time zone.
> I also travel for work - consulting - and travel personally during the work day and may work after I land.
See, I would never do this. A.) I don't work remotely (not out of a desire not to, but it's just not viable with my current line of work), and B.) If I did, that work would be zoned off away from my personal life. If there's downtime, I can kill time by browsing whatever, but I wouldn't be out and about but also 'at work' at the same time. Work-time and personal time basically never mix in my life, and I'd like to keep it that way.
If you're 'at work' for 48 hours at a time, while travelling, then having to respond instantly at any given time makes a lot more sense, although I'd probably still want to defer those responses until I can get some downtime during any given travels to then type up my responses on an actual keyboard. I can however understand if that's not really viable in your life of work.
> do you wait to get to your computer to respond to text messages?
I've never(?) sent a text message longer than maybe 100 characters. Most are a fair bit shorter than that, and I don't send that many to begin with. Same goes for Discord, although confirming that is harder, since it's contaminated with messages written with an actual keyboard.
> Check HN?
To read? Sure. I even read books on my phone. Respond to a comment? Not unless my response is really short.
You're being pretty defensive / aggressive about what some might call a phone addiction.
Most on HN know the data: healthier people tend to enforce boundaries with their devices. The average person is addicted, yes, but I'm not sure being "the odd one" in an era of actually decreasing literacy and numeracy and attention span is the insult that you seem to think.
I was ready to agree with you, as that was my belief. (I also agree it's a sign of a dangerous addiction, but just like everyone in the 60s smoked, everyone today uses phones.)
Then I came across this, showing a roughly even split between laptop and phone
Yes, I'm sure that using my phone for things that in the before times I would have used a desktop computer to do over a 2400 baud modem is a negative for my life. The actual negatives are around social media.
Yes. During the first night of our 45-day stay in another country, she got a text from someone she is meeting on the first leg of our summer 45-day domestic trip, asking whether we could come 3 days earlier. We were looking at our calendar, our Hyatt points, flights, etc., while enjoying live music and planning our next getaway.
I'm sure you would have thought we should have waited until we got back home to take out my laptop.
I don't understand why you are being downvoted. Are people in this thread really pulling out a laptop and getting it connected (or paying for one with a cellular modem) every time they need to send a two-word reply to an email, call an Uber, or look up the nearest coffee shop that's open at an odd hour?
HN seems to have a really weirdly prescriptive view of how people ought to use their devices, almost like Steve Jobs.
> I don't understand why you are being downvoted. Are people in this thread really pulling out a laptop and getting it connected (or paying for one with a cellular modem) every time they need to send a two-word reply to an email, call an Uber, or look up the nearest coffee shop that's open at an odd hour?
Because some of us read the original comment and thought maybe the discussion should be responsive to it:
> If the task can’t be done in a few taps I feel I’m better off opening a laptop anyways.
Uber, email, and directions in Maps are literally "task[s] that can be done in a few taps". Perhaps being less "weirdly" defensive and taking the time to think about the discussion you're about to jump into would be helpful?
Surely your laptop has a mic on it and probably a camera. It also has blueteeth, wifi and stuff. Your phone has much the same and can act as a proxy to whatever is missing on your laptop and vice versa. Obviously, getting your laptop to fit under or within your "lap" is a bit of an ask!
Things like KDE Connect provide a direct bridge and a bit of imagination does the rest.
If your laptop isn't cutting the mustard then ditch it ...
... Oh your phone has a tiny screen and a shit mic and speakers, unless you stick it in your ear?
Oddly enough, I don’t carry around my laptop in my pocket all of the time. You do realize that in 2026 most people do most of their day to day non work tasks on phones don’t you?
At least for me, the effect is real, and is driven not by media consumption but ergonomics of use. But at the same time, I'd say you're not missing that much. I always preferred large screens because of productivity gains[2], but even as screens kept getting larger, the set of things that "I feel I’m better off opening a laptop" for remained the same for me.
That is, until I switched to a foldable phone (Galaxy Z Fold 7) half a year ago, and - I kid you not - I haven't used my personal laptop since that day.
FWIW, I still have a proper desktop PC; In the past decade+, I've been using a PC at home, and a "sidearm" on the go / away from home: always a 2-in-1 Windows laptop with top specs[0]. Being always with me, this laptop often replaced use of PC at home too, because of convenience & portability.
So by amount of productive use, for the past 10+ years it was sidearm >> PC >> smartphone. But getting a foldable flipped it around. Having twice the screen size of a regular (large) phone is a big productivity win[1], but it's the folding that makes the actual qualitative difference. Folded, the device becomes a regular smartphone - i.e. something that fits in my pocket, meaning it's always on me, in my hands, or less than 1 second away. Contrast that with tablets, whose form factor makes them basically just shitty laptops (the same logistics as an ultraportable, but the toy OS of a phone).
I didn't expect this. I didn't even feel the change - I only noticed two months later that my laptop had been sitting unused on my desk, covered by a pile of stuff. Doing "laptop tasks" on a mobile device is still annoying (no keyboard, toy OS), but combining a tablet-sized screen with the portability of a phone makes them less annoying than the logistics overhead of a laptop - and at least in my case, this eliminated the entire[3] space between "smartphone" and "PC".
--
[0] - Think Microsoft Surface, except I could get better specs at half the price if I bought an off-lease but pristine Dell or Lenovo.
[1] - It's not immediately obvious to people, but as things stand today, a foldable phone isn't any better at media consumption than a regular one, because almost all cinema, TV, videogames, etc. are produced for widescreen - meanwhile, the inner screen of my Fold is approximately square, so e.g. for most TV, half or more of it is black at all times. However, all that extra space lets me effectively use multiple (3+) apps on screen, not to mention it makes spreadsheets actually usable.
[2] - Bigger screen = less scrolling and tapping in menus, but also with text size scaled to minimum, my previous phone (S22) had a big enough screen that running two apps in split-screen became useful on a regular basis.
[3] - Well, almost. There are some tasks I really like a physical keyboard and a larger screen for - but for those, I just plug the phone into the screen via USB-C, and voilà, it turns into a regular desktop. A shitty one, but good enough for occasional use.
That's interesting. I knew foldables have been selling well, and I assumed they were basically the promise tablets were trying to sell but, as you said, usable this time. I've just never heard anyone's actual story laid out like this before.
Now I'm having second thoughts on what I'll do myself because I would have never guessed a foldable would be ideal as you described.
I've been trying to avoid building an $8,000 tech stack of redundant devices that I don't need. Which is what Apple is all about, and then some. It's not the initial investment that bothers me, it's calculating replacement costs over time. Pretty quickly you have half a new vehicle's worth of redundant electronics. It leaves you asking: why?
So while I appreciate the longevity and durability of my iPhone 12 mini, along with seamless Airdrop and the Airtag network being as handy as it gets, I'm thinking about going back to Android for docking support. This is a feature I don't think Apple will ever add until the end of time, so I may as well bite the bullet now and get another OS switch over with.
I'm not entirely convinced I would love a foldable like you do, but I am rethinking that now. I've been on the idea that Microsoft's partnership with Samsung for Phone Link features will make my life delightful at my desktop battlestation, and DeX with a lapdock will cover any mobile needs. A lapdock really does create an alternative to the battery life offered by the M-series Macbooks, while leaving me with only two devices to maintain and replace with my desktop and phone.
It's amazing, given the flexibility and options offered in the Android space, whether it be my proposal or your foldable experience, that they don't have more market share. I think the issue is marketing: people need to be shown what they can do with a product, and Apple makes Continuity and closed-ecosystem features seem like a value add. When it's kind of a lure into an iCloud subscription and an $8,000 personal tech stack.
What, ummm, efficiency benefits are you finding on a smart phone? Is it related directly to the keyboard size when typing? That's kind of all I can think of, other than a really tiny display + big fingers being an issue.
I find my efficiency directly proportional to the distance from my smart phone.
I did downgrade back to my SE (from iPhone 16). Big selling point (aside from its size and rounded corners) is the physical button with fingerprint. I missed that even more than I disliked carrying a big phone around.
Former small-phone person here: I went from a small iPhone to a large one just as a substitute for carrying around my iPad. I really wish the iPhone fold were here sooner.
Same. The Pixel 4a was the perfect phone for me: Light, screen exactly the right size to navigate with a single thumb whilst holding the phone in one hand, enough battery life, small enough to fit in my jean pockets comfortably.
But people buy big phones in preference to small ones, so that’s what Google & Apple manufacture. Nobody (from the POV of Apple/Google decision makers) buys these smaller phones.
I don't think that's true. Every iPhone user I've texted in the last 6 months at least has had RCS turned on, and that includes some very non-tech-savvy friends who I doubt did it manually.
I really enjoyed Obj-C when I did some iOS work back in 2015/2016. It was my first non-JS language, and it taught me so much that I didn't understand since I started out doing web dev.
It's very obviously not "the easy part", it's definitely hard. It's just not the only hard part. And there may be other parts that are harder in some sense.
Something can be hard and also be the easy part. Imagine you got to see into the future and use a popular app before it was released, and you decided to make it yourself and reap the profits. Would be an absolute cinch to copy it compared to trying to make a successful app from a blank page.
Some code is hard. Most business logic (in the sense of pushing data around databases) isn't. The value is in analysis and action, which the system enables, not the machine itself.
Creating a high performance general purpose database is hard, but once it exists and is properly configured, the SQL queries are much easier. They'd better be or we wasted a lot of time building that database.
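To put that asymmetry in concrete terms, here's a minimal sketch using Python's stdlib sqlite3 module (the schema and data are invented purely for illustration): decades of engine engineering sit underneath, while the query the application actually writes stays short and declarative.

```python
import sqlite3

# Hypothetical schema and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "alice", 40.0), (2, "bob", 25.0), (3, "alice", 35.0)],
)

# The "easy part": one short, declarative query. The hard parts
# (parsing, planning, indexing, transactions) live inside the engine.
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('alice', 75.0), ('bob', 25.0)]
```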
Still easy. You just have to learn different concepts. Someone in web dev needs to be familiar with databases, some protocols like HTTP and DNS, some basic Unix admin... Someone in low-level kernel code will need to know about hardware architecture, operating system concepts, assembly... But the coding skill stays mostly the same. You're still typing code, organizing files, and fiddling with build systems.
Different types of "coding skills" and different types of complexity make these two impossible to put into the same bucket of "still easy". You've probably never done the latter, so you're under the impression that it is easy. I assure you it is not. Grasping the concepts of a new framework vs. making algorithmic, state-of-the-art improvements are two totally different and incomparable things. Of the 30M software engineers around the globe, only a handful are doing the latter, and there's a reason for it: it's much, much more complicated.
You are conflating problem solving and the ability to write code. Web dev has its own challenges, especially at scale. There are not a lot of people writing web servers, designing distributed protocols, and resolving sandboxing issues either.
I'm not conflating one with the other; I am saying that "coding skill", when dealing with difficult topics, is not just a "coding skill" anymore. It's part of the problem.
Not knowing C after a course on operating systems will block you from working on FreeBSD. Knowing C without a grasp on operating systems will prevent you from understanding the problem the code is solving.
Both are needed to do practical work, but they are orthogonal.
Exactly, but they are not as orthogonal as you make them out to be. That's trivializing things too much and ignoring all the nuance. You sound like my uncle, who spent a career in IT but never really touched programming, yet nevertheless has a strong opinion on how easy and trivial programming really is, and on how it was never that interesting to him because it is work done by other, unimportant folks. In reality, you know, he just cannot admit that he was not committed enough, or shall I say likely not capable enough, to end up in that domain, and instead he ended up writing test specifications or whatnot. A classic example of the Dunning-Kruger effect.
There is a nuance in what you say. You say it is "still easy", but it is not. It is not enough to take a course on operating systems and learn C to start contributing to an operating system kernel in an impactful way. Apart from the other software "courses" you need, such as algorithms, advanced data structures, concurrency, lock-free algorithms, probably compilers, etc., the one that is really significant, and not purely a software domain, is understanding the hardware. And this is a big one.
You cannot write efficient algorithms if you don't know the intricacies of the hardware, or if you don't know how to get the best out of your compiler. This cannot be taught out of context as you suggest, so in reality all of these skills are intertwined and not quite orthogonal to each other.
I do agree with you that there's a skill tree for any practical work to be done. And nodes can be simple or hard. But even if there are dependencies between them, the nodes are clearly separated from each other and some are shared between some skill sets.
If you take the skill tree you need to be a kernel contributor, it does not take much to jump over to database systems development, or to writing GUIs. You may argue that the barrier to entry for web dev is lower, but that's because of all the foundational work that has been done to add guardrails. In kernel work, they are too expensive, so there's no hand-holding there. But in web dev, often enough, you'll have to go past the secure boundary of those guardrails, and the same skill nodes, like advanced data structures and concurrency, will be helpful there.
Kernel dev is not some mythical land full of dragons. A lot of the required knowledge can be learned while working in another domain (or if you're curious enough).
No, it's not mythical, but it is vastly more difficult and more complex than the majority of other software engineering roles. The entry barrier being lower elsewhere is not something I would argue at all; it's common sense, unless you're completely delusional. While there are a lot of skills you can translate from the systems programming domain elsewhere, there are not a lot of skills you can translate vice versa.
99.9% of the code I write is easy, but that's just because of the sort of work I do. It's not far from basic CRUD. Maybe with pubsub and caching thrown in for fun.
But that doesn't mean there isn't some tricksy stuff. All the linear algebra in graphics programming takes a while to wrap your head around. Actually, I find game programming in general a bit hard. Physics, state management, multithreading, networking...
Right, the actual engineering part is hard. Typing out the code without botching syntax usually isn't very hard. Unless it's a C++ type with a dozen modifiers.
Sure; but I'm not humblebragging at how talented at coding I am. I'm good at it because I have a lot of practice and experience, but I'm hardly the best.
It's the easiest part because the hard parts of the job are everything else -- you're a knowledge worker so people look to you to make decisions and figure it out. You figure it out and make it work for whatever "it" happens to be.
> It's the easiest part because the hard parts of the job are everything else -- you're a knowledge worker so people look to you to make decisions and figure it out. You figure it out and make it work for whatever "it" happens to be.
If you thought coding was easy, wait till you see the competition for knowledge workers. You're in a spot now where the part that made you valuable (implementing business rules in software) can now be done by virtually anyone.
Doing all the non-coding parts (or, as you put it, "the hard parts") can now be done by almost any white collar worker.
Sure, anyone with the knowledge and experience lol
"Knowledge worker" isn't a cutesy phrase, it means I don't get paid for my time, I get paid for what I know. Contrast that to, say, working retail where you are paid to staff the store from 8-6. It's not a value judgement (retail is hard work) it's a description.
We've already had years and years of predicting the death of software engineering to offshoring and that didn't happen for the same reason. India turns out plenty of fantastic engineers who can do my job. Those people also have better options than staffing some cut rate code factory, and you can't substitute the latter if you need the former. But nice try lol
> "Knowledge worker" isn't a cutesy phrase, it means I don't get paid for my time, I get paid for what I know.
What you appear to be missing is that (if AI coding is as good as we are told) there will be considerably more people with the business knowledge to drive an AI to create their solutions.
The bit that made developers valuable was the ability to actually implement those business rules in software. You will be competing with all those laid off devs as well as those non-developers who have all that business knowledge.
In simple terms, there are two groups of people:
1. Developers, who have some business knowledge, and
2. White collar workers who have no development knowledge.
Previously (or currently, say) the supply of solutions providers came only from group 1. Now they come from both group 1 and group 2.
The supply of solutions providers just exploded, so you can expect the same sort of salary that the people in group 2 get, which is nowhere close to what the people in group 1 used to get.
> I'm not a code factory who occasionally talks to the suits. That isn't the job lol
The problem you are facing is that "person who talks to business" is a huge pool of talent, and now you have to compete with them. Previously your only competition was "person who talks to business and can code".
"I already told you what I actually do, you're free to read it and learn. Or not, I ain't the boss of you"
Nobody listens to someone who talks like this. Nobody learns from someone who talks like this. You're not a leader and you're not a very good software engineer and likely if you boss anyone around, they think you're a clown.
The people I've seen for whom "coding is the hard part" are typically promoted out of the way or fired. They never entered a flow state like those who considered it easy and addictive. The latter are the pillars of the eng team.
It depends on whether you mean programming (typing your solution into your text editor) or programming (formalizing your solution to a problem in a way that can be programmed).
Honestly? Anything that requires a lot of manual dexterity because that takes a long time to master, like a trade or art.
People love to lionize it, but honestly I can teach the basics of coding to anyone in a weekend. I love teaching people to code. It can be frustrating to learn but it's just not that difficult. I've mostly written Python and Ruby and Node for my career. They're not super hardcore languages lol.
What is hard is learning the engineering side. I don't get paid for the code I write, I get paid because I get handed a business wishlist and it's my job to figure out how to make that business reality and execute it, from start to finish.
I tell my boss what I'm going to be working on, not the other way around, and that's true pretty early in your career. At my current level of seniority, I can personally cost the company millions of dollars. That's not even a flex, most software engineers can do that. Learning to make good decisions like that takes a long time and a lot of experience, and that's just not something you can hand off.
Sure, but they're going to be stuck writing software for yesterday's problems. As our tools become more powerful, we're going to unlock new problems and expectations that would be impossible or impractical to solve with yesterday's tooling.
How feasible would it be to scale this up to several feet in diameter? Like if you wanted to scan furniture? The device itself by default looks to hold much smaller items.
The dinosaur example lists an iPhone as the source and none of their scanner models. It also says it was recorded at a dinosaur theme park in Germany. This one might be meters long.
In that case I think you just take hundreds of photos by hand, probably with software which varies the focus as you take them so everything has a chance to be in focus.
The device is a way to automate taking those ~300 photos (number from the marigold example).
Can you please explain a bit more about why it's a difficult photogrammetry challenge, or point me in the direction of resources so I can learn more about it myself? This is an exact project on my projects list, so I'd love to have a better grounding in the topic when I get around to diving in to it.
Edit: I'm more focused on getting a dimensionally accurate/stable model, vs. an aesthetically pleasing one, if that matters. The hope is to be able to scan a broken chair and then design a jig in CAD that I could 3D print for holding a specific piece in place while everything goes back together.
Most recent gaussian and nerf to mesh algorithms are surprisingly good at getting reasonable results for objects that traditional photogrammetry would struggle with.
The main challenges are reflective and uniform surfaces (e.g. leather or coated wood). See this overview of what you'd want for perfect photogrammetry: https://openscan-org.github.io/OpenScan-Doc/photogrammetry/b... and also the challenging surfaces lower on that page.
Same, which is why I asked. My naive intuition is that if you had an industrial grade turntable, like the one in the below video, you could hack together a hardware setup.
The way I like to think about it is to split work into two broad categories - creative work and toil. Creative work is the type of work we want to continue doing. Toil is the work we want to reduce.
edit - an interesting facet of AI progress is that the split between these two types of work gets more and more granular. It has led me to actively be aware of what I'm doing as I work, and to critically examine whether certain mechanics are inherently toilistic or creative. I realized that a LOT of what I do feels creative but isn't - the manner in which I type, the way I shape and format code. It's more in the manner of catharsis than creation.
You cannot remove the toil without removing the creative work.
Just like how, in writing a story, a writer must also toil over each sentence, and should this be an emdash or a comma? and should I break the paragraph here or there? All this minutia is just as important to the final product as grand ideas and architecture are.
If you don't care about those little details, then fine. But you sacrifice some authorship of the program when you outsource those things to an agent. (And I would say, you sacrifice some quality as well).
Quote: "Toil is the kind of work tied to running a production service that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows."
Variations of this definition are widely used.
If we map that onto your writing example, "toil" would be related to tasks like getting the work published, not the writing process itself.
With this definition of toil, you can certainly remove the toil without removing the creative work.
You can remove a lot of toil from the writing process without taking away a writer's ability to do line edits. There's a lot of outlining, organization, bookkeeping and continuity work AI automates in the early draft/structural editing process.
Most writers can't even get a first draft of anything done, and labor under the mistaken assumption that a first draft is just a few minor edits away from being the final book. The reality is that a first draft might be 10% of the total time of the book, and you will do many rounds of rereading and major structural revision, then several rounds of line editing. AI is bad at line editing (though it's ok at finding things to nitpick), so even if your first draft and rough structural changes are 100% AI, you have basically a 0% chance of getting published unless you completely re-write it as part of the editing process.
It all depends on how you split the difference. I wouldn't call the emdash vs comma problem toil. It's fine-grained and there are technical aspects to the decision, but it's also fundamentally part of the output.
Agree that it's not the best for UI stuff. The best solution I've found is to add skills that define the look and feel I want (basically a design system in markdown format). Once the codebase has been established with enough examples of components, I tend to remove the skill as it becomes unnecessary context. So I think of the design skills as a kind of training wheel for the project.
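Purely as an illustration of what such a skill might contain (the format, names, and values below are all invented; real skill files depend on your agent's conventions), a "design system in markdown" can be as simple as:

```markdown
# Skill: UI look and feel

- Colors: use the brand indigo (#4F46E5) for primary actions, dark gray for body text.
- Spacing: 4px base grid; components use multiples (8/12/16).
- Buttons: medium rounded corners, no drop shadows.
- Forms: always labeled inputs, never placeholder-only.
- Once a canonical Button component exists in the codebase, reference it instead of this file.
```

The last line captures the "training wheel" idea from the comment above: once real components embody the rules, the skill can be dropped from context.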
Not to self-promote, but I am working on what I think is the right solution to this problem. I'm creating an AI-native browser for designers: https://matry.design
I have lots of ideas for features, but the core idea is that you expose the browser to Claude Code, OpenCode, or any other coding agent you prefer. By integrating them into a browser, you have lots of seamless UX possibilities via CDP as well as local filesystem access.
Lee Pace's performance in that show is one of my all time favorites. It's incredibly hard to play a charismatic marketing guru because in some sense, you're not acting. In a given scene, the character might be trying to convince people around him of some crazy idea, but if he hasn't convinced you, the viewer, then the entire illusion falls apart. So he really has to do in real life what he's pretending to do on screen.
Funny that this came up today. Last night I started re-watching the series after several years. Just this afternoon I was reflecting on how genuinely charismatic Lee Pace's Joe McMillen is.
You really feel it. Even when we know he's a manipulative sonuvabitch. It's mesmerizing. You have to admire his ability to spin shit into gold. The man has vision.
There's a sequence around S01E07 that I'm looking forward to reaching again, in which Joe is out on the front lawn with Donna's daughters during a hurricane and it's FEELS like magic. His performance feels earnest, and hypnotizing, and genuinely magical as he puts on a show for these young girls in the rain.
There's something intangible and hard to describe about the series. The writers have a way of making it transcend its core drama and feel very different from just about any other show I can recall. Somehow it feels like pure creative expression that manages to defy outside expectations and tell a story that feels true to life, conveying the ambitions of creative people who are fighting to make something beautiful.
It's shocking how few people have heard of this show, let alone watched it. Part of that probably has to do with how inaccessible it is on streaming. It's only readily available on AMC+. And no one has AMC+.
This is one of those shows that would likely shoot to the top if Netflix got the rights to it and even did a mild push. It's genuinely peak prestige TV.
That is where I originally watched it. It was on Netflix at one point. And now, it is not. Which is most of the problem with streaming services in general.
Scroll past the subscription options to find the full series listing. "Box Set" licensing terminology is as anachronistic as "Seasons", but both are used in Apple TV product listings for non-subscription streaming media purchases.
I'm not seeing anything anachronistic about either term. "Seasons" is absolutely aligned to the way television series are still produced and distributed. "Box Set" implies physical media. Using the latter term to refer to something else sounds like a case of false advertising.
Apple offers refunds for unwanted digital purchases, and this description in Apple TV app:
When you purchase access to this item, you can permanently download it to your iPhone, iPad, Mac, or PC. Once downloaded, you can access this without an internet connection, and Apple can't remove it from your device.
Wait, so it's actually a standalone, DRM-free download? If that's the case, then while the term is still somewhat misleading, it's considerably less so than I assumed.
Not DRM free, but unlike most streaming services Apple TV will download purchased media via different countries or VPNs and has no time limit to watch the download. In practice, it "just works". Buying all 4 seasons individually would be 4x$13=$52.
I don't see how it qualifies as a legitimate download or ownership. You cannot save the file to a disk you control, and you have no way to ensure you have continued access to it. Apple or the IP holder can cause this "download" to disappear from your device/account without prior warning. It's actually written in the terms.
With the advent of digital music, "record album" morphed from referring to the physical medium, to referring to the recording that would be put on it. I think something similar is happening for "box set".
Not sure I'd agree. "Record album" never specifically referred to anything physical, and just means "collection of recordings", regardless of what medium is used for them.
The term "album" by itself did originally refer to something physical -- a collection of photos bound into a book by a glue made from egg whites ("albumen") -- but the semantic shift to "album" meaning any kind of collection offered as a single unit happened well before "record albums" were a thing.
But the term "box set" has not experienced a comparable semantic shift, and still implies the presence of an actual box.
It's available on Prime Video (at least on amazon.de). For a long while they would only sell access to season 1, but I've just checked now and all 4 seasons are available at the moment.
> There's something intangible and hard to describe about the series. The writers have a way of making it transcend its core drama and feel very different from just about any other show I can recall.
[actors gathered] at Pace's house on weekends to prepare dinner, drink wine, and discuss the scripts and their characters.. "it was really nice, because you got to hear other people's point of views about your character." For the third season, Pace, Davis, and McNairy lived together in a rented house in Atlanta, with Toby Huss joining them for the fourth season..
Rogers called Lisco the duo's mentor, saying: "He.. showed us the ropes.. it was a master class in how to run a room, both in terms of getting a great story out of people, and.. being a really good and decent and fair person in what can sometimes be a brutal industry.." Between the second and third seasons, all of the series's writers departed to work on their own projects, requiring Cantwell and Rogers to build a new writing staff.
I watched the first two seasons a few years ago and didn't continue because I was getting so emotionally invested it was making me anxious, not just in front of the screen but also for quite some time afterwards. I'm looking forward to finishing it once I decide my skin has grown thick enough :D
I've had Lee Pace on my radar since Singh's The Fall.
Your assessment of movie magic is only partially correct. Obviously, a character has to be convincing on his own, but the heavy lifting of the illusion is done by the peer characters acting as if they believe the role he plays.
"The king is always played by the others"
Not sure who is to credit for this quote but in my opinion it is one of the most important insights to understand how movies work and also why movie characters are never relevant role models.
He's also extraordinary in Apple's Foundation, some say he carries the show. I treasure The Fall and every frame of it, in this he's uniquely blended with other great actors and images.
Apparently part of The Fall's magic stems from the fact that the girl playing Alexandria (Catinca Untaru) somehow didn't really understand that she was playing in a movie. The director, as well as Pace, received some criticism for this manipulation. She also didn't really continue acting afterwards.
IMO the plot of Apple's "Foundation" is an infuriating insult to the original series. However, the production is great and Lee Pace is awesome as usual.
I think it's best appreciated as an original space opera that just happens to have the same name, especially given that so much of the show is genuinely original.
I generally agree, and also that it's impossible to take a book to video without change. I tend to think of it like this: imagine Bob and Jim watched a battle scene, but one from the west side, the other from the east. Bob wrote the book, Jim the movie.
Naturally, although it was the same battle, they'll have seen different things up close, and have different views on the battle overall.
Having said that...
It's like someone wrote the Foundation movie three generations after the book was written, turned into a play, and then told over the campfire for decades.
It has no more connection to Asimov's works than Star Wars has to Star Trek. All of the technology is different, the size of the Empire is wildly different; literally almost nothing maps.
Is it good? Yes, sorta. But it's not Foundation, by any stretch. It's not even remotely in the same "world".
My problem is that the show essentially "says" the opposite of the novels.
For compelling TV you need recurring characters for the audience to become invested in. But the whole point of Foundation was that the individuals don't matter (mostly).
The show had to jump through all these hoops to keep the same actors around and make them heroes. And it expanded/emphasized the metaphysical element in a way that undermined the psychohistory. And IMO makes/will make (honestly don't know where they're at now) the series ending reveal far less interesting and thought provoking.
Last season's Brother Dude was awesome. I really felt sad for him. I have to say, however, my tolerance for manipulative sociopaths is very low - I'd totally punch McMillen in the face.
I was only aware of The Fall for its brilliant photography.
Often in movies you have the scrappy character that rises to the occasion by making a great speech, winning everybody over. I used to love those scenes.
Now, I've realized, in real life they wouldn't have let them finish their first sentence.
stuff like this. if i enjoy a movie but the script simply doesn't check out from a rational perspective (plot holes, implausible behavior, inconsistencies etc.) then i sometimes decide to switch to a fairy tale mental mode where those issues are excused magically. only works with some movies. kingdom of heaven comes to mind.
Project: Hail Mary, a fantasy world where geopolitics are trivially simple and every state in the world collectively agrees how great it would be to cede power and work together. (And therefore enable a genuinely fun and amazing science story which was the actual focus of the book to begin with, 10/10).
I remember seeing this discussed around the show The Marvelous Mrs Maisel, which is about a midcentury NYC divorcee getting into the world of stand up comedy. Overall it works and is a funny and enjoyable show, but there's definitely some of the standup routines depicted on-screen that are not actually as funny as the baked-in audience laughs might indicate. Because yeah... you can't really fake delivering good standup, even with a whole writer's room preparing the jokes and all the editing magic in the world, you still have to actually stand there and tell them in a funny way. That part can't be faked.
It never occurred to me that the jokes were oversold. I think the show is genuinely funny, with a very high batting average. Easily one of the funniest shows on television.
I sure do miss 'Mrs. Maisel'. What a stellar series.
I think I really loved Barry for exactly the opposite of this reason. Seeing a truly great actor play a bad actor was both impressive and hilarious at the same time.
Sadly, Season 1 Joe is just incoherent. Like, you want there to be some structural reason behind his madness and there just isn't any, because there's too much crazy. Season 2 tries to walk much of that back.
I haven't yet seen season 3 and beyond, but it's clear the OP blogger agrees:
> The best thing the show’s writers ever did was realize that Joe wasn’t the most interesting character.
Like, Lee is a good actor for sure, he was just given a poorly written role.
If you like Lee Pace, check out The Fall (2006). It's my favorite film, incredibly ambitious and funny and yet virtually unknown to the public. Lee's performance is incredible, as is his young co-star's.
Yeah, it's somewhat splintered in that you're unsure what movie you're watching between different parts, but I have a strong love for movies that dare, and that one certainly does.
I'll also second your comment about the kid, which is one of the best child performances I've seen.
Are we watching the same clip? I feel like I'm taking crazy pills.
This is from the pilot and I watched it based on high recommendations, and I couldn't keep going because the character you're describing as so convincing and charismatic is so dramatically unlikeable!?
In this scene, he is:
* disrespectful and entitled with a coworker
* privileged and self-important about his background with a client
* then makes an admittedly pretty rousing speech, but TBH the show doesn't really trust us to understand that "this is meant to be inspirational" because it keeps cutting to the other character reacting "inspired", which is significant because
* he doesn't make the sale
* then proceeds to verbally scream abuse at the other character.
and then i'm supposed to be excited about watching the two of these start a computer company together? ..........why?
The guy gives me chills, he reminds me of every sales douche who has ever tried to pull the wool over my eyes, or sell a customer something so horrendous and undeliverable as to be actively business ending.
> The guy gives me chills, he reminds me of every sales douche who has ever tried to pull the wool over my eyes, or sell a customer something so horrendous and undeliverable as to be actively business ending.
The thing is, Joe is supposed to actually have substance and vision. He's not faking it. The difference is that all those sales guys are pretending to be someone like Joe.
No, Joe wants to have substance and vision. The tragedy of his character is his slow realization that he just doesn't have it. Indeed it's the tragedy of all the main cast that each has some of what it takes to make something truly revolutionary, but they lack some key aspect. They each know that another has the missing piece they need, but they can't sustainably maintain a relationship with them.
There's a line in the first season that runs as an undercurrent through the whole show ("Computers aren't the thing. They're the thing that gets you to the thing"). Joe originally says this to make the viewer think about technology, evoking the dawn of the personal computer and subsequently the internet. But later on, you're invited to re-interpret that statement as being about people: computers and technology were the thing that got the main characters to work together. It's the -people- that are the thing.
Part of what makes the show so good is that it's one of the few renditions in TV / movies of the joy of engineering something, and the constant tension that comes from working with great people. Great people inspire you, but they also challenge you. The show does a great job of portraying realistic conflicts that arise between different personality types and roles, as well as cleverly exposing the limitations of those personalities. With just Gordon, you'll get a stable and well engineered product but it won't be revolutionary. Joe has the vision but he can't actually _do_ the substantive part. Cameron has great substance and technical ability, but she's impractical and inflexible. Donna is responsible, effective, and clear-eyed - but unchecked, purely rational decisions erode the soul of a company into nothing. These differences frustrate our characters, and yet there can be no success without them.
I think many of us spend our whole careers chasing those rare moments where the right people are in the room solving problems, butting heads, but ultimately doing things they could never do all by themselves.
He's basically supposed to be a Steve Jobs character - manipulative, with weak technical knowledge, but with high charisma. The part where he takes credit for Gordon's work is very much a reference to the Jobs/Wozniak relationship.
I don't know about substance, but possibly vision. It's an old pattern: he kept selling more until the technical reality caught up with him. And he would abuse the technical staff to try and squeeze more out, but mostly because his reputation was riding on having sold it.
It was easy to dismiss the show at the time because, though Pace’s performance was great from the beginning, it felt like he was a Temu Don Draper in an 80s Mad Men wannabe with ‘tech’ replacing ‘ads’.
The show is not at all that if you stick with it for even a short while.
Totally agree, he was incredibly good in that show.
He's also really great in the show Foundation, with a pretty different role. I watched Foundation much more recently and it took me a while to realize it was the same actor from Halt.
I got really disappointed at the mainframe booting into PC-DOS with a CGA font on a 3278 terminal. The show did such an impeccable job at rebuilding the 3033 CPU and the 3278 terminal, only to do such a horrible job depicting its boot process. A VM/SE banner or an MVS login screen would have been sufficient (if inaccurate, if we are looking at the operator console). Didn't the research point out that mainframes don't run PC operating systems?
Lee Pace is a first rate actor but I could not recognize him or indeed, most of the characters in this show, as representative of their roles. I struggled to suspend my disbelief. The show felt like it was written by people who imagined what it must have been like rather than people who had any experience of it. I still enjoyed it somewhat. Not Silicon Valley good but okay.
I'm always surprised Lee Pace doesn't get more recognition; I've loved a lot of his quirkier projects like Wonderfalls, Pushing Daisies, and Miss Pettigrew Lives for a Day, but it's not like he hasn't also been in mainstream things like The Hobbit and Guardians of the Galaxy.
He's in very heavy makeup in Guardians of the Galaxy (and his blink-and-you'll-miss-it cameo in Captain Marvel), and while you can get a good look at his face in The Hobbit, his character doesn't get much screentime and isn't especially prominent - and indeed I don't think the Hobbit trilogy really turned any actors into household names who weren't already.
I love Lee Pace but there really hasn't been a blockbuster where he's front and center.
That's fair. I think his starring moment was really Pushing Daisies, but that kind of thing is not for everyone; even just the hyperreal aesthetic would be a barrier for some.
I really liked the show despite Lee Pace's performance.
Pace really nails the intense Jobs vibe, but having seen his other work, it seems like it might not be 100% acting. There's consistency to the off feeling he gives across roles.
Gordon's role was probably the most setting accurate, but I do feel the story would have suffered if the entire cast was realistic to 80s standards rather than translated into late-2010s sensibilities.
> I struggled to suspend my disbelief. The show felt like it was written by people who imagined what it must have been like rather than people who had any experience of it.
This! It's not a bad show but people calling it the Best Drama are wildly overselling it.
The articles I can find say he's staying on as an EP, just stepping down as the main showrunner. That seems very different from leaving the show behind.
Maybe I should watch a full episode but this clip doesn't sell -me- on it. Heavy handed and a bit phony. Great talent in these scenes, not directed or crafted for my tastes. I'm saying my feelings not downvoting!
Anyone modeling themselves after someone else isn't going to have that electricity.
You really have to believe in yourself and your plan, and have a real plan even if it's in flux, to communicate like that and carry it off. But when audacity is backed up by substance, it really gets people's attention.
I wish they had captured one of their Faberge eggs; those are almost more impressive.