Hacker News — Orphis's comments

Zstd is used in a lot of places now. Many servers and browsers support it, as it is usually faster and more efficient than other common compression standards. Some Linux distributions also compress packages, or even the kernel, with it, which is preferred in situations where decompression speed matters more than storage cost.


There are also alternatives that can be good enough, such as the Swedish BankID system, which is managed by a private company owned by a consortium of banks. It provides authentication and a chain of trust for the great majority of the population on almost all websites (government, healthcare, banking and other commercial services) and is also used to validate online payments (3D Secure will launch the BankID app).

While it's not without faults (services do not always support alternative authentication methods, which can exclude foreigners who have the right to live in the country), it has been quite reliable for many years.

So, just to say: you can have successful alternatives to a government-controlled system, as many actors may decide that developing and maintaining such a system is valuable and aligned with their interests, and it can then become a de facto standard.


How does that prevent the ID service from discovering which services you use it for?


You don't "get into the sandbox"; if a cheat program opted in, it would be launched into a separate instance that's distinct from the game.

And you would sign your files, which get verified by the integrity platform, allowing you to authenticate with the servers securely.


Sounds very similar to a total platform lockdown.


It is similar except it's only a total lockdown of the sandbox.


In some cases, you can start by using the "at" functions (openat...) to operate on a directory tree. If your logical "locking" is done at the top level of the tree, it can be a fine option.
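A minimal sketch of the directory-relative pattern in Python, which exposes the *at family of calls through the `dir_fd` parameter (the directory and file names here are just illustrative):

```python
import os
import tempfile

# Open a directory once; subsequent operations are relative to this fd,
# so renaming or moving the tree elsewhere doesn't invalidate our paths.
root = tempfile.mkdtemp()
dfd = os.open(root, os.O_RDONLY | os.O_DIRECTORY)
try:
    # Equivalent to openat(dfd, "state.txt", O_WRONLY|O_CREAT, 0o644)
    fd = os.open("state.txt", os.O_WRONLY | os.O_CREAT, 0o644, dir_fd=dfd)
    with os.fdopen(fd, "w") as f:
        f.write("hello")
    # fstatat-style lookup, again relative to the directory fd
    size = os.stat("state.txt", dir_fd=dfd).st_size
    print(size)  # 5
finally:
    os.close(dfd)
```

`dir_fd` support requires an OS with the *at syscalls (Linux, the BSDs); `os.supports_dir_fd` can be checked at runtime.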

In some other cases, I've used a pattern based on a symlink to a folder. The symlink is created, resolved, and updated atomically, and all I need is eventual consistency.

That last case was for managing several APT repository indices. The indices were constantly updated to publish new testing or unstable releases of software, and machines in the fleet were regularly fetching the repository index. The APT protocol and structure, being a bit "dumb" (for better or worse), require you to fetch files (many of them) in the reverse of the order they are created, which leads to obvious issues: the signature is updated only after the list of files is updated, and the list of files is created only after the list of packages is created.

Long story short: each update would create a new, internally consistent folder, and a symlink would point to the last created one (atomically replacing the old link, as it was not possible to swap whole folders). A small HTTP server would start a server-side session when the first file was fetched and only serve files from the same index list. Everything is eventually consistent, and we never got APT complaining about signature or hash mismatches. The pivotal piece was the atomicity of the symlink replacement, as the Java implementation didn't have access to a more modern "openat"-style syscall relative to a specific folder.
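The atomic symlink swap described above can be sketched in Python (the folder names are illustrative). The key property is that rename(2) atomically replaces an existing symlink, so readers always resolve either the old or the new target, never a broken or half-written link:

```python
import os
import tempfile

base = tempfile.mkdtemp()
os.mkdir(os.path.join(base, "index-001"))
os.mkdir(os.path.join(base, "index-002"))

link = os.path.join(base, "current")
os.symlink("index-001", link)  # initial publication

# Publish the new index: create the link under a temporary name,
# then rename it over the live one. rename() is atomic on POSIX,
# so "current" always points at a complete, consistent folder.
tmp = os.path.join(base, "current.tmp")
os.symlink("index-002", tmp)
os.replace(tmp, link)

print(os.readlink(link))  # index-002
```

`os.replace` maps to rename(2) and operates on the link itself rather than following it, which is what makes the swap safe under concurrent readers.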


As someone who worked on Meet at Google: it could have been networking issues on the path to the datacenters the call was routed through, or UDP problems on your network that triggered a bad fallback to WebRTC over TCP. It could also have been an issue with the browser version you used.

Since Teams uses the rather old H264 codec and Meet uses VP8 or VP9 depending on the context, it's also possible you hit decoding issues (usually handled in software, occasionally by the hardware).

Overall, that shouldn't be representative of the Meet experience, based on what I've seen, including all the bug reports I've read.


Google is made up of many thousands of individuals. Some experts will be aware of all of those details; some won't. On my team, many didn't know about them, as they were handled by other build teams for specific products or entire domains at once.

But since each product in different domains had to actively enable those optimizations for itself, they were occasionally forgotten, and I found a few missing in the app I worked for (but not directly on).


ICF (identical code folding) seems like a good one to keep in the box of flags people don't know about, because, like everything in life, it's a tradeoff, and keeping that one problematic artifact under 2GiB is pretty much the only non-debatable use case for it.


Which app are you using?


It's not that relevant for video conferencing: most apps are still doing H264, VP8 or VP9, or jumping straight to AV1.

And for video streaming, AV1 is increasingly used on YouTube and Netflix, for example ( https://aomedia.org/av1-adoption-showcase/netflix-story/ ).

It is used a lot more by people who don't have to worry too much about licensing at scale, such as for pirated content or local streaming (quite often backed by an OS-wide license). Doing a quick search on various pirate content search engines, I can see a lot of AV1 content now, so it'll eventually get more popular!


But it doesn't really apply when big, well-funded entities are building the video conferencing services that would use paid codecs. The consortiums then have clear targets from which to request license payments.


They were still using H264 last time I checked, so it's irrelevant to them.

