Hacker News | fweimer's comments

> A human can't search 10 apps for the best rates / lowest fees but an agent can.

Why would those apps permit access by agents?

It's always been possible for “agents” to watch content with ads so that users can watch the same content later without ads. The technology never went mainstream, though. I expect agents posing as humans would have a similar whiff of illegality, preventing wide adoption.

Local agents running open weights models won't really work because everybody will train their services against the most popular ones anyway.


What whiff of illegality? Personal recording and ad skipping DVRs are completely legal products (at least in the US). Courts have ruled on this.

As a U.S. consumer, can you buy a DVR that can record HDCP streams (without importing it yourself from a different country)? Even one that does not automatically edit out ads?

If I search "HDCP remover" on Amazon I see tons of results for $15-$30, sure. Reviews say they work as advertised. Those typically exist in a different space from DVRs, since as far as I know there's nothing HDCP-related for DVRs to remove from broadcast TV in the first place, but it'd be easy enough to chain one in if you needed to.

The IEEE 754 standard covers decimal floating point arithmetic, too. Decimal floating point avoids issues like 0.1 + 0.1 + 0.1 not being equal to 0.3 despite usually being displayed as 0.3. Maybe it's reasonable to use that instead?
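A quick Python sketch of the difference (the stdlib `decimal` module implements decimal floating point in the IEEE 754-2008 style; the values below are just the 0.1 example from the comment):

```python
from decimal import Decimal

# Binary floating point: 0.1 has no exact representation, so the
# rounding errors accumulate and the comparison fails.
print(0.1 + 0.1 + 0.1 == 0.3)   # False
print(0.1 + 0.1 + 0.1)          # 0.30000000000000004

# Decimal floating point represents 0.1 exactly.
d = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
print(d == Decimal("0.3"))      # True
```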

Some earlier spreadsheets such as Multiplan used it (but not in the IEEE variety) because it was all soft-float for most users anyway.


Those tools exist, but you have to pay by the token. I'm not sure if they scale financially to large code bases such as the Linux kernel. They are far more accessible than Coccinelle or Perl, though.

Honestly, I'd rather use Coccinelle, where I understand exactly what it does, when it does it, and why it does it…

I would also rather use a tool that I trust than delegate the task to an unreliable third party.

But to the person bringing up AI, you don't have to choose one or the other! Models use tools. Good tools for people are usually also good tools for models. The problem models have in learning to use tools like Coccinelle effectively is that there are too many tools and not enough documentation for each one. If there were a unified, standard platform, however, many humans would start to gain abilities through fluent tool use, and enough of those people would write docs and blog posts. Where people lead, models follow without doubt. Once a large enough corpus of writing documenting a single platform existed, the models would also be fluent, just like they are fluent in JS and React because of how large the web platform is.


How does LibreOffice handle ODF standardization? If they want to add a new feature that changes how things are formatted visually, do they write papers to update the ISO standard for ODF, work with other office suite implementers to achieve interoperability, wait a couple of years for the new standard with the changes to be published, and finally turn on the feature for users?

My impression is that this is more or less how ISO standards are supposed to work. Personally, I don't want to work in such an environment.


Pretty much, and yes, this is not a desirable path for progress.

But communists have an absurd love for bureaucracy, and their need to control is unlimited, so they'll argue to the death about stupid shit instead of, you know, actually competing.


There is the VEX justification Vulnerable_code_not_in_execute_path. But it's an application-level assertion. I don't think there's a standardized mechanism that can describe this at the component level, from which the application-level assertion could be synthesized. Standardized vulnerability metadata is per component, not per component-to-component relationship. So it's just easier to fix the vulnerability.

But I don't quite understand what Dependabot is doing for Go specifically. The vulnerability goes away without source code changes if the dependency is updated from version 1.1.0 to 1.1.1. So anyone building the software (producing an application binary) could just do that, and the intermediate packages would not have to change at all. But it doesn't seem like the standard Go toolchain automates this.


There is one non-technical countermeasure that Apple seems unwilling to try: Apple could totally de-legitimize the secondary access market if they established a legal process for accessing their phones. If only shady governments required exploits, selling access to exploits could be criminalized.

We have a word for this: a backdoor. It wouldn't de-legitimize the secondary access market. It would just delegitimize Apple itself to the same level. Apple seems to care about its reputation as the defender of privacy, regardless of how true it is in practice, and providing that mechanism destroys it completely.

It would not completely de-legitimize it. Maybe a government doesn't want anyone to know they are surveilling a suspect. But it definitely would reduce cash flow at commercial spyware companies, which could put some out of business.

Your opinion is that Apple should have just handed over Jamal Khashoggi's information to the Saudi Arabian agents who were trying to kill him, because then Saudi Arabia wouldn't have been incentivized to hack his phone? I think you'll find most people's priorities differ from yours.

As many people in this space have found out recently, there is no real thing as a non-shady government.

The Gmail requirement is actually slightly different: the header must be present and unique. Gmail only keeps one copy of a message per user and message ID. Combined with a mail source that uses predictable message IDs (such as Github), you can abuse this to suppress delivery of certain messages to Gmail users.

Interesting, but what do you gain by sending an email which you know will not land?

They mean sending an email in advance, with a message ID that will later be used in the target email. The first email gets ignored, moved to spam, or simply not read yet. Then the target email gets sent with the predictable message ID, and gets bounced.

Comments on issues use the format <[OrgName]/[RepoName]/issues/[IssueNumber]/[CommentID]@github.com>

A mitigation to this would be to take the combination of message ID and the sending domain and use that as the unique value, because message ID is not guaranteed to actually contain a domain label that's owned by the sender.

For example SendGrid's message IDs are <[RandomValue]@geopod-ismtpd-[Integer]>.
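The suppression trick can be sketched roughly in Python (all org/repo names and IDs below are made up; the dedup-on-Message-ID behavior just models the Gmail description in this thread):

```python
# Sketch of per-user de-duplication keyed on the Message-ID header alone.
# All identifiers are illustrative, not real GitHub data.

def github_issue_comment_message_id(org, repo, issue, comment_id):
    # The predictable format quoted in the thread.
    return f"<{org}/{repo}/issues/{issue}/{comment_id}@github.com>"

class Mailbox:
    def __init__(self):
        self.seen = set()   # Message-IDs already delivered to this user
        self.inbox = []

    def deliver(self, message_id, body):
        if message_id in self.seen:
            return False    # duplicate: silently dropped
        self.seen.add(message_id)
        self.inbox.append((message_id, body))
        return True

box = Mailbox()
mid = github_issue_comment_message_id("SomeOrg", "some-repo", 42, 123456789)

# Attacker pre-sends a message with the predicted Message-ID...
box.deliver(mid, "attacker-controlled decoy")
# ...so the real notification is de-duplicated away later:
delivered = box.deliver(mid, "new security issue reported")
print(delivered)   # False
```

Keying the dedup on (sending domain, Message-ID) instead, as suggested above, would make the decoy and the real notification distinct entries.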


Minor correction: The message doesn't get bounced, it gets de-duplicated against the first message. Effectively, it's deleted.

If I send it first, the real message won't get delivered. The real message could be a newly reported security issue.

It should be possible to get a better idea where the filtering happens with a tool like tcptraceroute (possibly patched to use other segments beyond the default TCP SYN).

I haven't found evidence of extremely widespread filtering. Why would there be? The installation count is not that high. The potential side effects from uncoordinated port filtering could be quite severe. This isn't netkit's telnetd or Busybox. (I'm aware of Debian switching defaults, but that was fairly recently.)


Aren't Mali GPUs designed in Europe?


The last time this came up, people said that it was important to filter out unrelated address records in the answer section (with names to which the CNAME chain starting at the question name does not lead). Without the ordering constraint (or a rather low limit on the number of CNAMEs in a response), this needs a robust data structure for looking up DNS names. Most in-process stub resolvers (including the glibc one) do not implement a DNS cache, so they presently do not have a need to implement such a data structure. This is why eliminating the ordering constraint while preserving record filtering is not a simple code change.


Doesn't it need to go through the CNAME chain no matter what? If it's doing that, isn't filtering at most tracking all the records that matched? That requires a trivial data structure.

Parsing the answer section in a single pass requires more finesse, but does it need fancier data structures than a string to string map? And failing that you can loop upon CNAME. I wouldn't call a depth limit like 20 "a rather low limit on the number of CNAMEs in a response", and max 20 passes through a max 64KB answer section is plenty fast.
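Something like this map-based filtering (a rough Python sketch over simplified, already-parsed records; real code needs DNS name canonicalization, label-compression handling, and so on):

```python
# Filter answer-section records down to those the CNAME chain starting
# at the question name actually leads to. Records are simplified
# (owner name, type, data) tuples.

MAX_CNAME_DEPTH = 20   # the depth limit discussed above

def filter_answer(qname, records):
    # String-to-string map: CNAME owner name -> target name.
    cnames = {}
    for name, rtype, data in records:
        if rtype == "CNAME":
            cnames[name.lower()] = data.lower()

    # Follow the chain from the question name, bounded by the depth limit.
    chain = {qname.lower()}
    cur = qname.lower()
    for _ in range(MAX_CNAME_DEPTH):
        nxt = cnames.get(cur)
        if nxt is None:
            break
        chain.add(nxt)
        cur = nxt

    # Keep only records whose owner name the chain reaches.
    return [(n, t, d) for n, t, d in records if n.lower() in chain]

answer = [
    ("www.example.com", "CNAME", "cdn.example.net"),
    ("cdn.example.net", "A", "192.0.2.1"),
    ("evil.example.org", "A", "203.0.113.9"),   # unrelated: filtered out
]
print(filter_answer("www.example.com", answer))
```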


I don't know if the 20 limit is large enough in practice. People do weird things (after migrating from non-DNS naming services, for example). Then there is label compression, so you can theoretically have several thousand RRs in a single 64 KiB response. These numbers are large enough that a simple multi-pass approach is probably not a good idea.

And practically speaking, none of this CNAME-chain chasing adds any functionality because recursive servers are expected to produce ready-to-use answers.

