
I exclusively buy tickets online, and whatever seats show up as empty online are empty when we show up.

Yeah, unless it's just me on a whim (which doesn't happen often), I'd always reserve online.

Yes, absolutely. Just like any other AI-generated content.

> Want even more ways to justify flying?

> There's no lack of ways to offset your carbon footprint from flying. For example:

For example, regulate industries that produce far, far more carbon than consumers do, or stop building new oil/gas/coal power plants entirely in favor of solar, nuclear, and wind. Stop blaming individual people for a societal problem that needs collective action to solve.

A huge fraction of all emissions come from power plants.


> Arguably, principle of least surprise is very Apple.

Principle of least surprise is good engineering practice. The question is always whose surprise. Someone who expects tar to behave like other UNIX systems is going to be surprised by this. Someone who expects tar on Apple to have perfect fidelity would be surprised by not-this.

I increasingly feel like build systems should never be relying on any "native" utilities from the host system, and should instead be bringing them in via dependencies. You can't have this problem if your packaging system pulls in a specific portable `tar` library.
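
For instance, here's a rough sketch of what that looks like using Python's stdlib tarfile instead of shelling out to whatever /usr/bin/tar the host happens to have (the paths and normalization choices here are illustrative, not any particular standard):

    import tarfile

    def make_archive(paths, out):
        # Aim for the same archive bytes on macOS and Linux: pax format,
        # no copyfile(3)/AppleDouble side channel involved, and ownership
        # and mtime pinned to fixed values.
        def normalize(info):
            info.uid = info.gid = 0
            info.uname = info.gname = ""
            info.mtime = 0
            return info

        with tarfile.open(out, "w", format=tarfile.PAX_FORMAT) as tf:
            for path in sorted(paths):
                tf.add(path, filter=normalize)

    make_archive(["src", "Cargo.toml"], "dist.tar")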


What should be really surprising for the users of UNIX-like operating systems is when they lose data because traditional UNIX utilities like cp, tar or cpio do not make complete copies of files, as one would expect from their description.

What is worse is that these utilities do not give any warnings when they do not make complete copies. For cp, the root cause is that it has bad default options, while for tar and cpio the standard file formats cannot store the metadata of modern file systems.

The various tar programs have their own different file format extensions to deal with modern file systems, which are guaranteed to work only when using the same tar program for both creation and extraction. The better tar programs implement both their own file format extensions and the file format extensions used by other popular tar programs.

The author of TFA used some obsolete tar program, which is the cause of the surprising behavior that was seen.

To avoid loss of data on Linux, I always use the PAX file format instead of tar or cpio, with the extensions implemented by "bsdtar --create --format=pax" from libarchive, and I always alias cp to '/bin/cp --no-dereference --recursive --one-file-system --preserve=all --strip-trailing-slashes --verbose --interactive', where cp has been built with extended attributes support.


> The question is always whose surprise.

I think that the surprise of more data than expected is more desirable than the surprise of data loss. So in this case, it seems like the safe choice.


Agreed. I usually hate on Apple and its terribly ancient utilities, whose gratuitous incompatibility with modern Linux utilities is motivated by hatred of the GPL license.

But in this case, I think what it's doing is… basically fine? "Tar should faithfully reproduce the semantics of the source filesystem" is a perfectly reasonable starting point.

Ideally there would be a documented way to turn off the Apple-specific metadata with Apple's own tar, though.


From tar(1):

     --no-mac-metadata
             (x mode only) Mac OS X specific.  Do not archive or extract ACLs
             and extended file attributes using copyfile(3) in AppleDouble
             format.  This is the reverse of --mac-metadata, and the default
             behavior if tar is run as non-root in x mode.
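
(And for the create side, my understanding is that setting COPYFILE_DISABLE=1 in the environment, e.g. `COPYFILE_DISABLE=1 tar -cf out.tar dir`, tells Apple's tar not to generate the AppleDouble entries in the first place; worth verifying against your macOS version.)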

Apple is always surprised that non-Apple devices exist.

See: the permanent undismissable red icon to "finish setting up your Apple TV with your iPhone"


Apple can't control non-Apple devices. They can only control their own. So this makes perfect sense.

They could control their own Apple TVs to allow that dialogue to be dismissed via the TV controls.

Agreed, but why not just finish setting it up? Or do people own Apple TVs without iPhones? That never occurred to me, since a large part of the value prop is phone integration.

No, the value prop is a streaming device with a clean UX not filled with ads. My phone (which is not an iPhone) has nothing to do with it. Apple TV is a far better YouTube device than Google TV. It's also the best device for Plex, Netflix, and all the streaming apps.

Yes, I believe it's possible to buy an Apple TV without owning an iPhone.

What integrations do you use? I can't really think of what I would miss on the Apple TV if I switched from iPhone. I rarely use AirPlay, disable Photos for in-house privacy reasons, and… oh yeah, the remote control for keyboard, volume, and navigation via iPhone is neat! I think the Apple TV is just a strong product on its own.

I use screen mirroring, a lot. Guess I’m in the minority around here. It’s really nice projecting your phone on a massive OLED to multitask on the phone. Or even for pair programming and conference calls: you can mirror the phone to the TV for the call while coding on the laptop.

I use my Apple TV like it’s a big iPad stuck to the wall. Because that’s basically what it is. I honestly had no idea so many people just buy it to stream the same content on every other platform


> Someone who expects tar to behave like other UNIX systems is going to be surprised by this

They shouldn’t. The GNU tar manual already shows this behavior. https://www.gnu.org/software/tar/manual/html_node/What-tar-D...:

“Because the archive created by tar is capable of preserving file information and directory structure, tar is commonly used for performing full and incremental backups of disks”

And yes, that same page also says:

“You can create an archive on one system, transfer it to another system, and extract the contents there. This allows you to transport a group of files from one system to another.”

> You can't have this problem if your packaging system pulls in a specific portable `tar` library.

You can’t pull in specific portable stuff all the way down (not even when running in Docker or a VM), so that will decrease the risk, but it cannot completely remove it. As an example, I think GNU tar will happily include .DS_Store files in archives.


> I increasingly feel like build systems should never be relying on any "native" utilities from the host system, and should instead be bringing them in via dependencies.

Well, you see, while this frankly applies not just to build systems but to most software, the consensus in the distro-maintainer community is that it's actually wrong: you should use your system's package manager and the tools it can install, and let it fiddle with the ambient environment and give you that delicious "path dependency". And if your distro's packaging environment doesn't let you do the things you need (e.g. installing both mongodb 3.8 and mongodb 5.0, ideally at the same time; but okay, I can keep running apt remove/install over and over, though I do need to check whether my app correctly handled the wire protocol changes), well, that's your problem for desiring strange things.


Nixos has a pretty solid solution to this issue: key your dependencies with checksums of the content. That way you get the best of both worlds: you always get the exact version you want, and you can share a copy of that exact version with other software that wants to use that exact version too!
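
A toy sketch of the content-addressing idea in Python (nothing like Nix's real store layout; the store path is made up):

    import hashlib, pathlib

    STORE = pathlib.Path("/tmp/toy-store")

    def add_to_store(content: bytes) -> pathlib.Path:
        # Identical bytes hash to the same key, so two packages that
        # want the same dependency content share a single store entry.
        key = hashlib.sha256(content).hexdigest()
        entry = STORE / key
        if not entry.exists():
            STORE.mkdir(parents=True, exist_ok=True)
            entry.write_bytes(content)
        return entry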

Yeah, Nix-like distributions (e.g. guix, lix) do for Linux systems what some language package managers (e.g. cargo) do for individual projects.

So it sounds like you don’t get the exact version you want because metadata is thrown away.

Curious, what is your software doing that it depends on specific metadata in your dependencies? What metadata do you require? Most file metadata is stuff like creation timestamp, last-edit timestamp, read/write/execute permissions...

I'm just trying to think of a case where metadata would be relevant in a dependency?


It's a checksum, not the content itself.

Are the xattr / chattr / umask checksums rolled into the main data fork content or are they hashed separately (or not at all)?

IIRC Nix's checksum is a hash of the source of the content, not of the results.

Hash of a normalization of the derivation, so this roughly means source, dependencies, and the ‘build recipe’. The exceptions are fixed-output derivations, which are typically content-hashed.

That said, a lot of work is being done on content-addressed hashing, but AFAIK it’s not the default yet.
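
Roughly this, as a toy illustration of input-addressing (not Nix's actual normalization or field names):

    import hashlib, json

    def derivation_key(src_hash, dep_keys, recipe):
        # The key is derived from the inputs and the build recipe,
        # not from the build output; a fixed-output derivation would
        # instead pin an expected hash of the output itself.
        blob = json.dumps(
            {"src": src_hash, "deps": sorted(dep_keys), "recipe": recipe},
            sort_keys=True,
        )
        return hashlib.sha256(blob.encode()).hexdigest()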


The main problem is that they often don't stick with it.

As far as I can tell, Intel more-or-less pioneered the idea of SSDs being the best storage rather than the cheap storage, for instance. The X25-M and X25-E were absurdly good. Then, once the market was established...they pulled out of it.


I’m still waiting for the Intel Arc B770, since the 5060 Ti and 9060 XT are already overpriced, and if Intel just committed to something for once it wouldn’t be marginally worse.

Not that releasing the GPU would be something super innovative; they already have the B70.


> Then, once the market was established...they pulled out of it.

This makes perfect sense given that Intel's target margins are pretty high. They only want to sell advanced tech, not commodities. Once SSDs became commoditized Intel was out.


It's possible that Intel wanted to seed the SSD industry.

They knew it wouldn't be profitable enough long term, but it would increase demand for their products (faster storage removes a bottleneck that makes faster CPUs worth buying).


An extreme and related example: https://en.wikipedia.org/wiki/Opticom_(company)

Popular science kind of backgrounder (can't vouch for the accuracy/relevancy - details are very scarce): https://www.geeksforgeeks.org/digital-logic/polymer-memory/


I've often found that trying to compile decade-old C code with a current toolchain and current libraries will have issues. It isn't always clear what versions the code is expecting (no equivalent to a lockfile), newer C compilers or standards can break old code, and newer libraries especially can break old code. It might still build if you could recreate exactly what it expects, but it becomes decreasingly possible to do that if you weren't compiling it a decade ago and archived off exactly what worked then.

It's a useful warning label for LLMed code. (When an editor isn't gratuitously adding it to non-LLMed code.)

> Honest question, what's the problem with crash dumps that include no personal info?

In addition to the other response: crash dumps are difficult to anonymize, both because useful crash dumps include something like a minidump (or some other small alternative to a core file), and because even without that, any random information from a backtrace may be sensitive (e.g. a URL).

There's nothing wrong with saving a crash dump and giving the user control of whether to submit a bug report.


I'm thinking more of Python crashes, where you just get the lines that executed, and ~zero identifiable data.

> What calculators are you guys using that aren't in academia anymore and don't need the "exam approved" limitations?

I still have my TI-85, but I essentially haven't used it since I left college. For 99% of what I need, I use either Python, or what's built into Firefox (e.g. unit conversion), or DDG. For that last 1% (e.g. full CAS functionality), I tend to grab whatever web-based non-AI tool is handy.
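
For that 99% tier, a quick Python session covers a lot of the old calculator muscle memory (toy examples):

    >>> from math import comb, sqrt
    >>> comb(10, 3)     # what the TI spelled "10 nCr 3"
    120
    >>> sqrt(2) / 2
    0.7071067811865476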


Web-based AI tools are remarkably helpful these days, since they no longer try to do the math themselves and instead write Python to do it.

Advanced calculators are in an unusual space, with external constraints on them. Some of the features or differentiation they add serve the constraint of "if you don't, we won't let students use it in the classroom".

When a calculator is used in a classroom, there's a concern about people using the calculator to replace the skill that's being taught. So, for instance, there's space for a calculator with no CAS, for a class that's trying to teach you to do algebra. That is in some ways easier than "don't use this function of the calculator".


Yeah there's not really a purpose for advanced calculators anymore (apart from the niche market of people who just enjoy using them). Calculators are basically only a thing now to make it harder to cheat on exams. If you don't have that constraint, you might as well use Wolfram or Matlab or whatever.

Or, here's a wild idea - exam problems should be structured such that they do not require any advanced calculator.

Math problems should not require any calculator. Physics problems should require no more than a scientific calculator. Overcomplicating the arithmetic shouldn't be the point.


That rules out classes of problem which we want to teach, or falls back to using lookup tables, which is more arduous and limits the number of problems that can be put on an exam.

Teaching students to use lookup tables at all is a largely pointless exercise. Teaching students to graph or use statistical functions on an advanced calculator transfers very well to other environments.


> That rules out classes of problem which we want to teach

Does it? Could you give a contrived example of a high school problem that would be ruled out by a lack of a graphing calculator?

> Teaching students to graph

They should be able to plot any of the functions they'll be working with by hand, very quickly.

> statistical functions

If they are using statistics, they should be able to provide the relevant combinatorial coefficients as the answer (xCy, etc), without actually doing the computation.

Not to mention that scientific calculators all support basic stats functions.


You've already rejected elsewhere in the comments the style of problem these calculators are used for, as either "more complicated than a high schooler is taught" or "your teachers have wasted your time".

Which is fine, you have an idiosyncratic view of modern mathematical pedagogy (at least as it exists in the US). When you're a high school math teacher you can argue with your state dept. of ed. about it.

These calculators are also used at the undergrad level, fwiw, so the "high school level" (whatever limit you're putting on that; many high schools will accelerate students into undergrad stats and as far as Calc II) is not a factor in their use overall.


Calculators can do a lot of things; a lot of physics is greatly improved by access to a good calculator.

My linear algebra class used F_2 as our field probably half the time a field was specified at all. Realistically, almost any course probably doesn't need calculators at all (or they could at least be kept for homework). If you're not teaching arithmetic, you keep the arithmetic simple. If you're not teaching algebra, you keep the algebra simple. Etc.

It is not really about the classroom. It is more about setting a testing standard that matches the standardized testing that schooling aims for. This of course then extends to classroom tests, as that is the best way to prepare students.

Not that any of this matters anymore, as it can all be entirely replaced with LLMs in the near future.

