Hacker News | hello_there's comments

This looks like a very nice product and it matches exactly something that I've been wanting and thinking about. So I tried to order one (using PayPal), but it doesn't seem like you're shipping to Norway. Are you planning to open up for shipments to more countries any time soon?


Norway requires CE marking, as far as I know. I don't have that certification yet; it's quite expensive, about 7,000 euros.

It pains me that I cannot sell within Europe, I can see the sales I am losing.

Hopefully I can bootstrap those funds before the holiday season :D

EDIT: I created a Google form. When it is available in Europe, I will notify everyone who leaves their email: https://forms.gle/96sGBWFsxgAomG3Q9


I don't think you would need to pay that much. In most cases (i.e., not health products, not children's products, etc.) you can even self-certify: https://europa.eu/youreurope/business/product-requirements/l...


I clicked through to the tiling WM that he was using for KDE and one of the things under "Features" that got my attention was this:

> Support for setting windows to floating or quitting tiling altogether, per-desktop (Meta+Shift+F11) and per-window (Meta-f) (“Meta” refers to the “super” or “windows” key here)

I think this might be worth a try...


If you find this interesting then you might also be interested in SnapRAID: https://www.snapraid.it/

> SnapRAID is a backup program for disk arrays. It stores parity information of your data and it recovers from up to six disk failures.

> SnapRAID is mainly targeted for a home media center, with a lot of big files that rarely change.

> If the failed disks are too many to allow a recovery, you lose the data only on the failed disks. All the data in the other disks is safe.

I'm not affiliated with any of the projects.
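For the curious, a SnapRAID setup is essentially one config file plus a couple of commands. A minimal hypothetical snapraid.conf might look like this (the paths are made-up examples):

```
# One parity file per tolerated disk failure; "content" lists are
# kept on several disks so the metadata itself survives a failure.
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
exclude *.tmp
```

After that, `snapraid sync` computes the parity, `snapraid scrub` periodically verifies it, and `snapraid fix` recovers files after a disk failure.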


I’ve used MergerFS and SnapRAID for my media server for about two years. Together they’re really flexible and I’ve been able to drop extra drives into the array as I’ve needed the storage. On that capability alone I highly recommend using them.

I haven’t had a failure yet but I imagine I wouldn’t be in trouble unless I lost a significant portion of my parity drives at the same time.


> multi-touch gestures added in Gnome 40

This sounds like a great improvement! Is this Gnome-specific or can we soon expect to see this in other DEs as well?


As far as I know, the bulk of the magic happens in libinput, the touchpad driver. It is the DE's job to take the recognized gestures and turn them into useful actions. It is even easier on Wayland, but only KDE and Gnome seem to be looking into it. I don't expect sway, for example, to start shipping fancy animations any time soon :D


I wonder if something like this could be effective against grey silverfish? They move slowly, usually on the floor at night and they're big and nice for the camera to see.


This looks very nice! I've been thinking about something like this for quite a while. May I suggest two additional features that I would love to see in such a window manager:

1. A way to neatly arrange all the windows on a virtual sphere that surrounds the user, possibly arranging them automatically in a similar manner as a tiling window manager.

2. A way to rotate the aforementioned sphere around you without forcing the user to rotate their head. This would avoid much of the neck strain. It could be done, for example, by holding a button on the keyboard while moving the mouse, or by a simple keyboard shortcut to rotate the sphere by X degrees in any direction.

This concept could also be extended to virtual desktops where each desktop is a sphere around the next, like an onion, with the ability to "zoom" in to the next desktop.


> 2. A way to rotate the aforementioned sphere around you without forcing the user to rotate their head. This would avoid much of the neck strain. It could be done, for example, by holding a button on the keyboard while moving the mouse, or by a simple keyboard shortcut to rotate the sphere by X degrees in any direction.

One more thing for the author to play around with (I don't have the hardware to try it myself) is experimenting with speed/acceleration. The mouse pointer's movement has speed and acceleration parameters that affect how it moves, so that if you want, you only have to move your mouse very little in real dimensions to move it thousands of pixels.

It might cause motion sickness, but maybe you can get away with pitching/yawing 5 degrees for every 1 degree of head movement, so to look "straight up" you only have to tilt your head up 18 degrees. Hopefully you'd still have the illusion of being oriented in a space.
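The gain idea above is just a linear scale from head angles to view angles. A minimal sketch (the function name is invented and the 5x factor comes straight from the comment; no real VR API is used):

```python
# Map physical head rotation to virtual-camera rotation with a gain
# factor, as suggested above: 5 degrees of view motion per 1 degree
# of head motion. Purely illustrative.
GAIN = 5.0

def view_angles(head_pitch_deg: float, head_yaw_deg: float,
                gain: float = GAIN) -> tuple[float, float]:
    """Return the (pitch, yaw) the virtual camera should use."""
    return head_pitch_deg * gain, head_yaw_deg * gain

# Looking "straight up" (90 degrees of view pitch) then needs only
# 18 degrees of actual head tilt:
pitch, yaw = view_angles(18.0, 0.0)
print(pitch, yaw)  # -> 90.0 0.0
```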


> It might cause motion sickness, but maybe you can get away with pitching/yawing 5 degrees for every 1 degree of head movement, so to look "straight up" you only have to tilt your head up 18 degrees.

That would cause extreme motion sickness.


Great ideas. I've started a Github project to keep track of usability improvements and added them there.


What special tooling is required to deal with a monorepo that is not required for multi repo?


Must have: Tooling that can interact on a file or sub directory level. Git cannot do that.

Should have: Access control to view and change files on a subdirectory basis. Everyone can see the repo, so you can't permission users per repo anymore. It's optional, but these companies have it.

Recommended: Global search tools, global refactoring tools, global linting that can identify file types automatically and apply sane rules, unit test checks and on-commit checks available out of the box for everything and that run quickly on remote machines, etc.

It's regular tooling that every development company should have, but only big companies with mono repos have it.

It's not that the tooling is needed to deal with the monorepo; it's that the tools are great and you want them. But they can't be implemented in a multi-repo setup.

Think about it: how could you have a global search tool in a multi-repo setup? Most likely, you can't even identify what repos exist inside the company.

Makes me realize. If I ever go back to another tech company, the shit tooling is gonna make me cry.


IIRC, Bitbucket Enterprise has pretty decent global search. GitHub Enterprise doesn't seem to have much of any cross-repo tooling, which is one of my least favorite things about it.

Global refactoring seems a lot less necessary if you have clean separation among your processes. Maybe this is me coming from a more microservices perspective, but I'm inclined to say that needing to do a refactor that cuts across several different functional areas is a sign that things are becoming hopelessly snarled together.


Google has dedicated language, platform, library, etc. teams (I'm no longer there) that can push really huge refactoring changelists. For example, if they noticed that code had plenty of "if (someString == null || someString.empty())", they would replace it with something simpler.

Or if they found some bad pattern, they would pull it out too. I remember when a certain Java hash map was replaced, and they replaced it across the board. It broke some tests (that were relying on a specific iteration order, which was wrong), and people quickly jumped in and fixed them.

This level of coordination is great. And it's not just "let's do it today"; things are prepared in advance: days, weeks, months, even years if it has to be. With careful rollout plans, getting everyone aware, helping everyone get to their goal, etc.

It's also easy to establish code style guides and remove the bikeshedding over tabs vs. spaces, brace placement, switch/case statement styles, etc. Once one tool has been written to reformat (either an IDE or other means) and another to check style and some semantics, then whether people like it or not they soon get on that style and keep going. There are more important things to discuss.


The idea of global refactoring is mostly that you can decide to modify a private API, and in the process actually update all the consumers of that API, because they all live in the same repo as the component they're consuming. (This is also the argument of the BSD "base system" philosophy, vs. the Linux "distro" philosophy: with a base-system, you can do a kernel update that requires changes to system utilities, and update the relevant system utilities in the very same commit.)


Code search in Bitbucket Server is dismal. All punctuation characters are removed, including colons, full stops, braces, and underscores. This makes it close to useless for searching source code.

Regarding global refactorings, think new language features or library versions.


Bitbucket PM here. Thanks for the feedback!

Support for punctuation in search is something we knew wasn't ideal when we first added code search. As with all software, there were some technical constraints that made it hard to do.

We plan to add support for full stops and underscores in a future version and are exploring how best to handle more characters longer term. Our focus, based on feedback, is on "joining" punctuation characters to better allow searching for tokens. Support for the full range of characters threatens to blow out index sizes, but if we get more feedback on specific use cases we're always happy to consider them.


That boggles the mind. Why wouldn't they just ship Hound or something else based on the Go regex search backend?


There's always a reason ;)

Being a self-hosted product, we have to make tradeoffs for the thousands of people operating (scaling, upgrading, configuring, troubleshooting...) instances. In short, we try to keep the system architecture fairly simple, using available technology and keeping the broad skillsets of admins in mind.

It was a somewhat difficult call to add Elasticsearch for its broad search capability, but the fact that it's also used for other purposes helped justify it. Adding Hound or the similar services that were considered would have added administrative complexity without providing for a broader range of search needs.

We continue to iterate on search, making it better over time.


A fair point, but I will just say that Hound is _astonishingly_ low maintenance. I set it up at my current employer like two years ago and have logged into that VM maybe twice in the entire time. It just hums along and answers thousands of requests a week with zero fuss.


You really need a good "search and replace", whether it's called a refactoring tool or something else.


> Must have: Tooling that can interact on a file or sub directory level. Git cannot do that.

I mean, when you get big, sure. But until you're big, git is fine. Working at fb, I don't use some crazy invocation to replace `hg log -- ./subdir`, I just do `hg log -- ./subdir`. Sparse checkouts are useful, but their necessity is based on your scale - the bigger you are, the more you need them. Most companies aren't big enough to need them.
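Plain git does handle per-path operations fine at small scale. A throwaway demo (repo contents invented; the sparse-checkout step needs git >= 2.25):

```shell
# Create a scratch repo with two "subprojects".
tmp="$(mktemp -d)" && cd "$tmp" && git init -q
git config user.email demo@example.com
git config user.name demo
mkdir -p services/auth services/billing
echo v1 > services/auth/main.go
git add . && git commit -qm "auth: initial"
echo v1 > services/billing/main.go
git add . && git commit -qm "billing: initial"

# History restricted to one subdirectory, analogous to `hg log -- ./subdir`:
git log --oneline -- services/auth

# Check out only part of the tree:
git sparse-checkout set services/auth
```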

> Should have: Access control to view and change file on a subdirectory basis. Everyone can see the repo so you can't permissions users per repo anymore. It's optional but these companies have that.

Depends on your culture (and regulatory requirements). I prefer companies where anyone can modify anyone's code.

> Recommended: Global search tools, global refactoring tools, global linting that can identify file types automatically and apply sane rules, unit test checks and on commit checks available out of the box for everything and that run remotely quickly, etc...

I'd bump this up to `should have`. The power of a monorepo is being able to modify a lib that is used by everyone in the company, and have all of the dependencies recursively tested. Global search is required, but until you're big, ripgrep will probably be fine (and after that you just dump it into elasticsearch).


> Depends on your culture (and regulatory requirements). I prefer companies where anyone can modify anyone's code.

This is still true at Google, except for some very sensitive things. However, every directory is covered by an OWNERS file (specific or parent) that governs who needs to sign off on changes. If I’m an owner, I just need any one other engineer to review the code. If I’m not, I specifically need someone that owns the code. IMHO, this is extremely permissive and the bare minimum any engineering organization should have. No hot-rodding code in alone without giving someone the chance to veto.
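The review rule described above boils down to a small predicate. A toy reconstruction for illustration (this is my own sketch, not Google's actual implementation):

```python
# OWNERS rule sketch: an author who is an owner needs any one other
# engineer's approval; a non-owner author needs an owner's approval.
def change_approved(author: str, approvers: set[str], owners: set[str]) -> bool:
    others = approvers - {author}      # self-approval doesn't count
    if not others:
        return False
    if author in owners:
        return True                    # any one other engineer suffices
    return bool(others & owners)       # otherwise an owner must sign off

assert change_approved("alice", {"bob"}, owners={"alice"})      # owner + any reviewer
assert not change_approved("carol", {"bob"}, owners={"alice"})  # non-owner needs an owner
assert change_approved("carol", {"alice"}, owners={"alice"})
```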

> ripgrep, Elasticsearch

Having something understand syntax when indexing makes these tools feel blunt. SourceGraph is making a good run at this problem.


Eh, at least in FB, I see more unstructured querying.


Elasticsearch is too dumb. You need to use a parser and build a syntax tree to have a good representation of the code base. That's what Facebook and Google do for their Java code.

Agree that any small to medium company could have a mono repo without special tooling. Yet they don't.

There are companies that care about development and there is the rest of the world.


GitHub uses Elasticsearch [1]. I agree that ES is too dumb by default; however, the analysis pipeline can be customized for searching source code.

1. https://www.elastic.co/use-cases/github
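For example, a custom analyzer that keeps underscores and full stops inside tokens (so `my_func` or `pkg.Name` stay searchable) might look roughly like this. This is a sketch of Elasticsearch index settings only; field mappings and edge cases are omitted:

```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "code": {
          "type": "custom",
          "tokenizer": "code_tokens",
          "filter": ["lowercase"]
        }
      },
      "tokenizer": {
        "code_tokens": {
          "type": "pattern",
          "pattern": "[^A-Za-z0-9_.]+"
        }
      }
    }
  }
}
```

The pattern tokenizer splits on runs of characters matching the pattern, so anything outside the pattern (here letters, digits, `_`, `.`) is kept inside a token.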


Might I suggest using a tool designed for searching source code rather than dumping it into Elasticsearch: Bitbucket, Sourcegraph, GitHub search, or my own searchcodeserver.com.

Unless designed to search source code, most search tools will be lacking.


I had a bad time at Google and was glad to leave, but wow did I ever miss that culture of commitment to dev process improvement and investment in tooling. The next startup I joined was kind of a shocking letdown. It became clear pretty early on that nobody else there had ever seen anything like the systems at Google, couldn't imagine why they might be worth investing in, and therefore the level of engineering chaos we wasted so much time struggling with was going to be permanent.

The startup I'm working for now is roughly half ex-googlers, so it is a different story. Of course we can't afford Google level infrastructure, but there is at least a strong cultural value around internal tooling, and a belief that issues with repetitive or error-prone tasks are problems with systems, not the people trying to use them.


Worked at Google for 2-3 years, mainly Java, under google3. My thoughts: with everything under a single repo, and with a system like Blaze (Bazel), I can quickly link against other systems, or be prevented/warned when that's not a good idea (a system may be headed for deprecation, or be brand new, and you need visibility permission, which can be overridden locally).

Build systems, release systems, integration tests, etc.: everything works more easily, as you refer to things by global, path-like names.

Blaze helps a lot: one language for linking protobufs, Java, C++, Python, etc.

Lately docs are going into it too, with renderers.

Best features I've seen: Code Search lets you jump to all references by clicking, lets you "debug" things running directly on servers, and lets you link to specific versions and check history, changes, and diffs.

GitHub is very far from this, if for no other reason than that it's simply not possible for it to know how things are linked. Even if github.com/someone/somelibrary is used by github.com/someone-else/sometool, GitHub would not know how they are connected: is it CMake, Makefiles, .sln, .vcxproj? It may be able to guess, but that would be lies in the end... Not the case at Google: you can browse things better than in your IDE, since you couldn't even produce this information for your IDE yourself (a process runs periodically to update the index, using huge MapReduce jobs).

Then local client spaces: I can just create a dir, open a workspace there, and virtually everything is visible from it (the whole monolithic depot) plus my changes. There are also a couple of other ways to do it (git-like includes), but I haven't explored those.

What's missing? I dunno... I guess just the overwhelming fact that such a beast exists and is already tamed by thousands of SREs, SWEs, managers, and just the most awesome folks.

I certainly miss the feeling of it all, being back on good ole P4, but the awesome company I'm at now also realized that a single depot is the way to go (with Perforce, that is). We also have Git, but our main business is game development, so huge .tiff files, model files, etc. require Perforce.

Also, ReviewBoard and now Swarm (the p4 web interface and review system) are nice so far. Not as advanced as what Google had internally for review (no, it's not Gerrit; I still can't get my head around that thing), but getting there.

One last point: a monotonically incrementing changelist number will always be easier to work with than random, unordered SHAs. You can build whole systems of feature toggles, experiments, and build verifications around it, like:

This feature is present if built with CL > 12345, or with cherrypicks of CL 12340 and CL 12300. You may come up with ways to do this with SHAs too, but imagine what your configuration would look like. It's also easier to explain to non-eng people: it's just a version number.
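That kind of gate is trivial to express precisely because CLs are ordered. A toy version (the helper is illustrative, not a real Google API; the CL numbers are the ones from the example above):

```python
# Feature gating by changelist number: present if built past CL 12345,
# or if both backport cherrypicks (CL 12340 and CL 12300) are applied.
def feature_present(built_cl: int, cherrypicked_cls: set[int]) -> bool:
    if built_cl > 12345:
        return True
    return {12340, 12300} <= cherrypicked_cls

assert feature_present(12400, set())              # new enough build
assert feature_present(12000, {12340, 12300})     # both cherrypicks applied
assert not feature_present(12000, {12340})        # one cherrypick missing
```

Doing the same with SHAs would require the configuration to carry an explicit ancestry check instead of a single integer comparison.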


Sounds like an opportunity for Google cloud


Wouldn't it be better to just adjust the linter, refactoring tools, etc. to work on a multi-repo hierarchy? Most of them already mostly do.


> What special tooling is required to deal with a monorepo that is not required for multi repo?

From my time at Google the first thing that came to mind was citc. But I couldn't remember if citc was publicly known, so I did an Internet search for "google citc". The first search result was this article.

"CitC supports code browsing and normal Unix tools with no need to clone or sync state locally."


Here's a slightly off-topic dream about online comments:

In an ideal comment system, I believe that articles, comments, and moderation events should come from three different, decentralized streams (like Atom) that the end user can subscribe to individually and that are joined in the end user's client. That would provide transparency to the moderation process and the ability to comment anywhere, and it would allow moderators to become effective spam filters without giving them the power of censorship. Now, imagine if this system were built into the browser and became the default commenting platform for all websites...
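A toy client-side join of those streams might look like this (the event shapes and names are invented for illustration; a real system would consume Atom feeds):

```python
# Moderation as a client-side filter the user opts into, rather than
# server-side deletion: the comment stream is untouched, and hiding
# only happens if the user subscribes to the moderator stream.
from dataclasses import dataclass

@dataclass
class Comment:
    id: str
    text: str

@dataclass
class ModEvent:
    comment_id: str
    action: str  # e.g. "hide"

def render(comments, mod_events, subscribed=True):
    hidden = {e.comment_id for e in mod_events if e.action == "hide"} if subscribed else set()
    return [c for c in comments if c.id not in hidden]

comments = [Comment("1", "on topic"), Comment("2", "spam")]
mods = [ModEvent("2", "hide")]
assert [c.id for c in render(comments, mods)] == ["1"]
assert [c.id for c in render(comments, mods, subscribed=False)] == ["1", "2"]
```

Because the client does the join, the user can always diff the filtered view against the raw stream and verify that nothing on-topic was censored.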


I like where you're going with this, but here's a question for you... Who controls the streams that are available on a website? Is it the end user or the website owner?


I haven't thought out the details, but I really think that it should not be the website owner, or any other individual or organization, but rather some sort of decentralized community effort. For example, one could imagine a distributed log that every commenter and moderator appends to and that is replicated to different parts of the world -- each of which a client could subscribe to.


The end user should always be in control.


You say that, but have you considered the downsides?

Imagine you know someone in the public eye, let's say a musician, with their own website. They enable comments, and their site now has comments all over it that they have no editorial control over whatsoever. People are posting a large amount of offensive content. What do you advise that this musician does?


The musician can display whatever they want on their website; they don't have to host content they don't like. In the "dream" commenting system, the comments are independent; you can apply your own filters and fetch comments from sources the site owner may not approve of.


But they're still associated with that site, and that musician.


Only in the sense that someone who makes a Wix site that says "Neil Young Sux" is associated with Neil Young.


I disagree. I don't think this is like that at all. Especially given that the comments are displayed on the actual site in question. If you're going to say it's something separate, then you're talking about Twitter or, more likely, Mastodon.


If you look at the top-level comment you're replying to, we're describing "an ideal commenting system", not Mozilla Talk.


> Imagine you know someone in the public eye, let's say a musician, with their own website. They enable comments

In my mind it would be the users, not the site owners that would enable the comments.

> People are posting a high amount of offensive content. What do you advise that this musician does?

My advice in this case would be to create a moderator stream that the end users can subscribe to. Perhaps some mechanisms could be put into the system to make it easy for site owners to suggest a "default" moderation stream that the end-users can opt-in to.

In this case the site owners would be able to moderate comments through voluntary cooperation with their users, but they wouldn't be able to censor opinions they didn't agree with, because the end users would always be in control of how their stream is filtered and would always be able to verify that on-topic posts aren't censored.


Isn't this now Mastodon? If it's not actually connected to the site in question?

"My advice in this case would be to create a moderator stream that the end users can subscribe to."

That sounds like a pretty complex thing to do, which doesn't solve the problem of, "The comments on my site are overrun with people posting racial slurs."

"In this case the site owners would be able to moderate comments through voluntary cooperation with its users, but it wouldn't be able to censor opinions that it didn't agree with, because the end users would always be in control of how its stream is filtered and would always be able to verify that on-topic posts aren't censored."

I don't believe that's actually a problem, though. You can always go make your own site if you want your voice heard.


> doesn't solve the problem of, "The comments on my site are overrun with people posting racial slurs."

I think you're misunderstanding the proposal. Comments and moderation are independent of the site, not on the site.

If distributed commenting and moderation is too complex to implement, then we need to move to network designs that make it simpler.


Commenting systems are set up and enabled by the admins of the site. Otherwise you're just talking about Twitter or Mastodon.


We're talking about something like Twitter or Mastodon but (a) can be found using the original URL of the site, as in IPFS or content-addressable networking; and (b) uses distributed opt-in moderation, meta-moderation, and filtering; like a decentralized AdBlock.

It's something that doesn't actually exist yet, but maybe will.


So they're still associated with the site, but don't actually keep people on the site; site owners have no ability to screen the content of these comments, which are still associated with the site; and it seeks to deny revenue to the site.

I'm sure you'll have hordes of sites signing up for that.


No, the idea is that they don't sign up for it. It's independent of the originating site.


Except it's not, because you still want it to be associated with the site. If you just want to comment on something, you already have Twitter, Mastodon, Facebook, your own blog, and probably a dozen other outlets. What you're asking for is the added legitimacy of the site itself, without their consent.


It's literally like having a browser add-on enable comments on a website by appending the Reddit/HN thread after an article [1]. But with the added benefit of the user being able to choose from a number of algorithmic/community moderation strategies to apply to the existing comments in order to show/hide/rank them.

[1] This add-on actually exists for YouTube/Reddit: https://addons.mozilla.org/en-US/firefox/addon/reddit-on-you....


Read the top-level comment from hello_there:

> In an ideal comment system I believe that articles, comments and moderation events should come from three different, decentralized streams (like Atom) that the end user can subscribe to individually and that are joined in the end user's client.

What he is asking for is the exact opposite of "the added legitimacy of the site itself". He's asking for a user interface to integrate content that does not come from the site itself.

That would be a lot cooler than yet another comment moderation system, of which there are already multiple open-source implementations. Could someone at least provide an argument for why Mozilla Talk is better than the existing solutions?


If that's what you want, then again, you have multiple sources for that. Twitter, Facebook, Mastodon, your own blog, etc.


What's still needed is a set of tools to

1. Aggregate comments from Mastodon, personal blogs, etc.

2. Interact with these comments by upvoting and applying filters, etc (i.e. moderate)

3. Publish your moderation actions and apply the same type of metadata from other moderators (and moderation aggregators).

If Talk has any value, it's to serve as a starting point for Tool 2.


Agreed, this is the correct framework for web comments.

I'd perhaps do most of my commenting in a private comment stream with 2 or 3 friends.


Why would I, as a site owner, want to cede control of the comments that are associated with my site? If, for example, a comment was just a string of racial slurs, there's no way in hell I would want that associated with my site, and I would want that deleted as soon as possible, regardless if you would want to see it.


What hello_there proposes is a comment system that is not associated with your website. It would be akin to having an article on your website linked to a subreddit and people commenting on it there.


What do these things cost?


According to some other comments, around the $1.5 million mark.


if you have to ask, you can't afford it


> do not autoplay video setting.

Not to mention sound.


At least they added the feature to mute a tab. No more hunting for that one ad that's making sounds.

