>File changes within .git directories occur far too often[..]
That's a crazy statement. The cloud backup system I use can be configured for how often it should bother looking for new files, and for the section where I keep my .git repos (they're actually "bare" git repos that I push to locally) I've set it to every two hours. Which is overkill, because they absolutely do not change that quickly.
> changes within .git directories occur far too often and over so many files that the Backblaze software simply would not be able to keep up.
I don’t really understand that. I’m using Windows File History, and while it’s limited to backing up changes only every 15 minutes, and is writing to a local network drive, it doesn’t seem to have any trouble with .git directories.
This is idiotic. All they have to do is schedule those directories and introduce enough hysteresis to avoid constant churn on their end. Even backing them up at most once a day would be better than this.
I had a back and forth with them about .git folders a couple of years back and their defence was something like "we are a consumer product - not a professional developer product. Pay for our business offering"
But if that's truly their stance, then they are being deceptive about their non-business offering at the point of sale.
EDIT - see my other comment where I found the actual email
Well I do pay for their business product, I have a "Business Groups" package with a few dozen endpoints all backing up for $99/year per machine.
According to support's reply just now, my backups are crippled just like every other customer. No git, no cloud synced folders, even if those folders are fully downloaded locally.
(This is also my personal backup strategy for iCloud Drive: one Mac is set to fully download complete iCloud contents, and that Mac backs up to Backblaze.)
Professional? We indeed use git at the company where I work, but there we have a dedicated backup system used by professionals. No BB involved.
I, on the other hand, as a private consumer, use git for all my hobby projects and note-taking. And my language learning. Of course I do, or I couldn't keep track of what I'm doing over the years, and I wouldn't be able to sort things out. There's nothing professional there. Are BB saying that if you try to do something in an orderly and controlled manner, then it's "professional" and shouldn't be backed up? If that's their stance, then no wonder people are leaving BB. I for sure won't ever recommend them again.
You can probably get around this problem by compressing the file and uploading it in a .zip. Google Files allows for making zip files at least, so I don't think it's a rare feature.
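A quick sketch of why the zip trick should work: the archive treats the photo as opaque bytes, so any metadata-stripping pipeline the uploader applies to recognized image types never sees it. The file name and photo bytes here are hypothetical stand-ins, using only the Python standard library:

```python
import io
import zipfile

# Stand-in bytes for a JPEG that contains embedded EXIF GPS data.
photo = b"\xff\xd8\xff\xe1" + b"fake-exif-gps-payload" + b"\xff\xd9"

# Wrap the photo in a zip archive before uploading.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("photo.jpg", photo)

# On the receiving side, the extracted file is byte-for-byte identical:
# the archive is opaque binary, so embedded metadata survives the trip.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    restored = zf.read("photo.jpg")

assert restored == photo
```

Of course, this only helps when the recipient can be bothered to unzip the file.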
I think the linked spec suggestion makes the most sense: make the feature opt-in in the file picker, probably require the user to grant location permissions when uploading files with EXIF location information.
Yeah, it does sound kind of dodgy that there's no option even for advanced users to bypass this; I'd guess it's mainly a moat to protect Google Photos. I wonder whether online photo competitors have found a workaround, since searching your photos by location seems like a big feature there.
I don't know when Google's EXIF protections are supposed to kick in, but so far my photos auto-synced to Nextcloud still contain location information as expected.
I don't think this has anything to do with Google Photos. People fall victim to doxxing or stalking or even location history tracking by third party apps all the time because they don't realize their pictures contain location information. It's extra confusion to laypeople now that many apps (such as Discord) will strip EXIF data but others (websites, some chat apps) don't.
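As a byte-level illustration of what "stripping EXIF" means: in a JPEG, EXIF metadata (including the GPS tags) lives in an APP1 segment, so a stripper essentially drops that segment and keeps the rest. This is a toy sketch over a hand-built JPEG, not a production stripper (it ignores the entropy-coded data after SOS and non-EXIF uses of APP1 such as XMP):

```python
import struct

def strip_app1(jpeg: bytes) -> bytes:
    """Drop APP1 (0xFFE1) segments, where EXIF metadata (incl. GPS) lives."""
    out = bytearray(jpeg[:2])           # keep the SOI marker
    i = 2
    while i < len(jpeg) - 1:
        marker = jpeg[i:i + 2]
        if marker == b"\xff\xd9":       # EOI: end of image
            out += marker
            break
        # Each segment stores a big-endian length covering its payload.
        length = struct.unpack(">H", jpeg[i + 2:i + 4])[0]
        if marker != b"\xff\xe1":       # keep every segment except APP1
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)

# Build a toy JPEG: SOI, an APP1 segment carrying fake GPS data, APP0, EOI.
app1 = b"\xff\xe1" + struct.pack(">H", 12) + b"Exif\x00\x00GPS!"
app0 = b"\xff\xe0" + struct.pack(">H", 7) + b"JFIF\x00"
jpeg = b"\xff\xd8" + app1 + app0 + b"\xff\xd9"

stripped = strip_app1(jpeg)
assert b"GPS!" not in stripped and b"JFIF" in stripped
```

The point being: apps that do this must each reimplement it, which is exactly why behavior varies between Discord, websites, and chat apps.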
> It's extra confusion to laypeople now that many apps (such as Discord) will strip EXIF data but others (websites, some chat apps) don't.
You've given me a lot of sympathy for the young'uns whose first experiences on the web might have been with EXIF-safe apps. Then one day they use a web browser to send a photo, and there's an entirely new behavior they've never learned.
> Then one day they use a web browser to send a photo, and there's an entirely new behavior they've never learned.
The article is actually about Google's web browser stripping the EXIF location-data when uploading a photo to a webpage, and the author complains about that behavior.
This is not a behavior of the browser itself. Android Chrome behaves that way because the app didn't request the required permission for that data from the OS (which would prompt the user), so the files it receives to upload already have the data removed.
Thank you! I meant my comment for anyone who's not on the very latest version, anyone who has experienced Android or another OS with disparate privacy-related behaviors for as long as that OS has been around. Yes, the issue I'm talking about is now solved for the general public on the latest Android devices, at a reported cost to power users.
Just to add some more context: The change was applied in Android 10, which was released in 2019.
At the OS level there is no reduction in functionality; the implementation just ensures that the user agrees to sharing their location data with an app, and until they agree it is not shared (so as not to hinder normal app operation).
Now, the fact that the Chrome app doesn't trigger the user-permission prompt is another topic, with its own (huge) complexity: if the user declines to share their location history with a webpage, and Android can only enforce this for known media file types (while Windows, for example, cannot do this for ANY file type, and on iOS I believe the user cannot even choose not to have it stripped), Chrome actually cannot commit to any decision taken by the user.
It's a known dilemma in the W3C: the browser should ensure user privacy, but for opaque binary data it technically can't...
You're replying to someone who is talking about a native app, but the overall issue here is about web apps. Chrome and Firefox don't request the appropriate permission (which, as things stand right now, is probably the safer choice), and there's no way for a website to signal to the browser that it wants that permission, so that the browser could prompt the user only for websites that ask for it, and persist the allow/deny response, similarly to how general location permission works via the JS location APIs.
It seems to be quite simple: an app that wants to access this info just needs to request the permission for it.
Chrome doesn't seem to request that permission, so the OS doesn't provide the location data to the app. So Chrome ended up in this state by doing nothing, not by explicitly doing something...
If your app targets Android 10 (API level 29) or higher and needs to retrieve unredacted EXIF metadata from photos, you need to declare the ACCESS_MEDIA_LOCATION permission in your app's manifest, then request this permission at runtime.
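Per the Android documentation, the manifest declaration for this is:

```xml
<uses-permission android:name="android.permission.ACCESS_MEDIA_LOCATION" />
```

The app must additionally request the permission at runtime before the OS will hand over unredacted EXIF.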
That's not sufficient. We need a standardized attribute on the HTML form to request the permission as well. If Chrome requests the permission, great, but that's not fine-grained enough for a web browser.
Well yes, agreed, but as stated, Chrome didn't end up with this behavior because it did something; the browser behaves like this because it never implemented any logic for this permission.
A standardized attribute on an HTML form would be difficult to define, because in this context the page just requests/receives a binary file, so a generic "strip embedded location information" decision from the user would be hard to enforce and uphold (and by whom?).
In this case Android only knows the file structure and EXIF because Chrome requests the file from a media library in the OS, not a file manager.
The W3C keeps revisiting this data-minimization topic [0]; so far they have managed to define the principles [1], but enforcing them technically is quite hard when any kind of content can be submitted from storage to a webpage...
Ideally this should be something search engines handle - but they do a poor job in specialised areas like code repos.
It's helpful to have a github mirror of your "real" repo (or even just a stub pointing to the real repo if you object to github strongly enough that mirroring there is objectionable to you).
One day maybe there will be an aggregator that indexes repos hosted anywhere. But in many ways that would be back to square one: a single point of failure.
The Fediverse seems to dislike global search. Or is that just a mastodon thing?
IMHO, I disagree, but it depends on your point of view, so this is not "you are wrong" but "in my view it's not like that".
I think it’s the role of the software vendor to offer a package for a modern platform.
Not the role of OS vendor to support infinite legacy tail.
I don’t personally ever need generational program binary compatibility. What I generally want is data compatibility.
I don’t want to operate on my data with decades old packages.
My point of view is either you innovate or offer backward compatibility. I much prefer forward thinking innovation with clear data migration path rather than having binary compatibility.
If I want 100% reproducible computing I think viable options are open source or super stable vendors - and in the latter case one can license the latest build. Or using Windows which mostly _does_ support backward binaries and I agree it is not a useless feature.
Software shouldn't rot. If you ignore the cancer of everything as a subscription service, algorithms don't need to be tweaked every 6 months. A tool for accounting or image editing or viewing text files or organizing notes can be written well once and doesn't need to change.
Most software that was ever written was written by companies that no longer exist, or by people (not working for a software company) no longer associated with the company they wrote the tool for. In many of these cases the source is not available, so there is no way to recompile it or update it for a new platform, but the tool works as well as ever.
It makes honest people feel rewarded, valued, and acknowledged. It teaches people who wish to follow the rules and conform to social norms what those norms are and where we actually draw the line in practice.
Looked at slightly differently, given a split between high trust and low trust preventing conversions from high to low is similarly important to inducing conversions from low to high.
Yes, my understanding (and I suspect the reason why the airflow experiment worked) is that a large part of the reason this happens is because of a mismatch between the output from the vestibular and visual systems. So, the automated defenses of your body freak out and go into a defensive mode.
I think that ~30% of the population just has more sensitivity to the mismatch.
There is always going to be some movement; it's impossible for there not to be. Whether it is rendered in the VR environment or happens in real life through small motions, there are a lot of little things that help establish the mismatch.
It’s probably most like getting car sick. You are obviously moving, but you are also stationary at the same time. This doesn’t happen to folks suffering from motion sickness when they are driving, though, because there is now a physical action tying the motion to your inputs.
This may lead you to ask why people watching a movie in a theater don’t get motion sick and the reason is the same, multiple inputs tell you otherwise. You can see the edges of the screen, you can see the audience, there’s a lot of input telling your body there’s nothing weird going on here. The more immersive, the more some people’s bodies do not handle the illusion well.
Have you considered that it's unsolvable? Or - at least - there is an irreconcilable tension between capability and safety. And people will always choose the former if given the choice.
In a pure sense, no, it's probably not completely solvable. But in a practical sense, yes, I think it's solvable enough to support broad use cases of significant value.
The most unsolvable part is prompt injection. For that you need full tracking of the trust level of content the agent is exposed to and a method of linking that to what actions it has accessible to it. I actually think this needs to be fully integrated to the sandboxing solution. Once an agent is "tainted" its sandbox should inherently shrink down to the radius where risk is balanced with value. For example, my fully trusted agent might have a balance of $1000 in my AWS account, while a tainted one might have that reduced to $50.
So another aspect of sandboxing is making the security model dynamic.
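A minimal sketch of that dynamic model, with hypothetical trust levels and limits (the policy table, class names, and dollar figures are all illustrative, not any real product's API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SandboxPolicy:
    """The 'blast radius' granted to an agent at a given trust level."""
    spend_limit_usd: int
    can_call_cloud_apis: bool

# Hypothetical policy table: tainted agents get a much smaller sandbox.
POLICIES = {
    "trusted": SandboxPolicy(spend_limit_usd=1000, can_call_cloud_apis=True),
    "tainted": SandboxPolicy(spend_limit_usd=50, can_call_cloud_apis=False),
}

class Agent:
    def __init__(self) -> None:
        self.trust = "trusted"

    def observe(self, content: str, source_trusted: bool) -> None:
        # Exposure to any untrusted content is one-way: the agent stays
        # tainted, and its sandbox shrinks accordingly.
        if not source_trusted:
            self.trust = "tainted"

    @property
    def policy(self) -> SandboxPolicy:
        return POLICIES[self.trust]

agent = Agent()
assert agent.policy.spend_limit_usd == 1000
agent.observe("Issue body fetched from a public tracker", source_trusted=False)
assert agent.policy.spend_limit_usd == 50
```

The key design choice is that tainting is monotonic: once the agent has read untrusted input, nothing it does can restore the larger sandbox.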
> The issue title was interpolated directly into Claude's prompt via ${{ github.event.issue.title }} without sanitisation.
How would sanitisation have helped here? From my understanding, Claude will "generously" attempt to understand requests in the prompt and subvert most effects of sanitisation.
It would not have helped. People are losing their minds over agent "security" when it's always the same story: you have a black box whose behavior you cannot predict (prompt injection _or not_). You need to assume worst-case behavior and guardrail around it.
And yet people keep not learning the same lesson. It's like giving an extremely gullible intern who signed no NDA admin rights to everything, and yet people keep doing it.
What was the injected title? Why was Claude acting on these messages anyway? This seems to be the key part of the attack and isn’t discussed in the first article.
Because that's how LLMs work. The prompt template for the triage bot contained the issue title. If your issue title looks like an instruction for the bot, it cheerfully obeys that instruction because it's not possible to sanitize LLM input.
> Bob (Backblaze Help)
> Aug 5, 2021, 11:33 PDT
> Hello there,
> Thank you for taking the time to write in,
> Unfortunately .git directories are excluded by Backblaze by default. File
> changes within .git directories occur far too often and over so many files
> that the Backblaze software simply would not be able to keep up. It's beyond
> the scope of our application.
> The Personal Backup Plan is a consumer grade backup product. Unfortunately we
> will not be able to meet your needs in this regard.
> Let me know if you have any other questions.
> Regards,
> Bob
> The Backblaze Team