paffdragon's comments

That's not how it works. You're supposed to contract a consulting company that contracts some offshore company to connect you to SAP.

But you can't just hire one; you have to hire functional consultants (who tell you your flow is wrong and that you have to adjust it to how SAP does things) and then implementation consultants who don't know how the process works but can actually implement the integration. And then again after the next release, because the integration broke.

And the customer, being cheap, doesn't pay for the proper modules, so everything gets mapped to PSP elements -- to keep the same old garbage piles that get pushed around.

I wonder if it’s cheaper to just have an AI write the parts of SAP you actually need.

If the AI is a certified SAP consultant, sure. But then it would probably cost you $20K/month in subscription.

We also used to run spyware and adware scanners and removal tools, but now the ad/spyware has rebranded and gone mainstream...

I often click accept, at least for site banners that get through uBlock. But my browser blocks third-party cookies and then clears all cookies on close (except for trusted sites). I also use private browsing for random sites, where I usually don't bother rejecting cookies.

I really think this cookie consent concept should have been a browser functionality, so I can make my default choice for all sites and be done with it.
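Desktop Firefox already gets partway there via preferences. A minimal `user.js` sketch of the setup described above (pref names as of recent Firefox releases; verify them in about:config before relying on this):

```
// Sketch of a user.js fragment -- assumes a recent desktop Firefox.
user_pref("network.cookie.cookieBehavior", 5);          // 5 = Total Cookie Protection; 1 = block only third-party cookies
user_pref("privacy.sanitize.sanitizeOnShutdown", true); // run cleanup when the browser closes
user_pref("privacy.clearOnShutdown.cookies", true);     // include cookies in that cleanup
```

The per-site "trusted site" carve-outs live in the permissions store rather than in prefs, so those exceptions are set through the cookie settings UI, not `user.js`.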


Well, maybe not everybody

  $ ls -ld /tmp
  drwxrwx--x. 2 shell shell 40 Jan 15  2022 /tmp
edit: sorry, I should have added this is termux :)

I also ditched Google years ago for DuckDuckGo, but it's not without problems, for sure. Results are often still full of obviously fake sites, which I try to report. Many times it just returns nothing where Google still manages to give results. And I still have to scroll through their ads when I'm on a machine without an ad blocker (like Firefox Focus/Klar on Android). But I'd still rather use them than Google; if I don't find something, it's usually not the end of the world and I just move on. Recently I switched all my browsers to their noai site; on some I still have the lite version, I think.


I was kind of interested in the content, but I am so overloaded with AI slop by now that reading this generated text gives me nausea.

I was looking to see why they landed on this stack, but there are no alternatives or evaluation criteria listed - given the generated article, I wonder how much of the infra was selected by an LLM.


Claude helped write the article. It is 2026. I proofread it, though, and yes, giving an LLM a list of specific criteria for what you are looking for in a product is actually a pretty good experience.


If it works for you, it works. I just see the same phrases repeated so frequently nowadays - including in my own LLM conversations.

Regarding the use of LLMs for picking infra: the issue I usually have with such tasks is that they frequently omit things - either from the list of options or from the features compared. And depending on my familiarity with the topic, I might never notice, which might steer my decision making in a different direction. Basically a certain bias. Sometimes prompting it to repeat reveals more, but ultimately I end up hitting the search and doing my own research; then I might use the LLM again with more knowledge and data. Did you run into this too? What was your process?


I do understand what you mean by bias.. some models were quite stubbornly ignoring things like "I want made in the EU - not GDPR compliant - not one office or data center in the EU". I remember this being especially painful for TEM and marketing email providers. Usually they suck at finding the right pricing data on the first try.. so I ended up throwing screenshots of pricing pages at them. Now that I am writing this up, in some instances manually comparing them would have been faster :D ... The bias might come from the huge US dominance in the training data and might not even be intentional. In some niches you don't have many options; that's what I tried leaning on in the article.


> Claude helped write the article. It is 2026.

If that's the case, why do we have to suffer through an AI-generated article? Just give us the prompt.

This topic interests me but I stopped reading as soon as I noticed the slop. I'd much rather read a couple of human-written paragraphs with your personal experience.


Hah, what a coincidence - I just started looking into how to set up LDAP/OIDC on FreeBSD yesterday, and today I was going to try FreeIPA or Keycloak. Thanks for sharing.


I also covered Keycloak on FreeBSD in the past - here:

- https://vermaden.wordpress.com/2024/03/10/keycloak-on-freebs...

Hope that helps.

Regards,

vermaden


Awesome, it definitely helps. I realized I already have your blog bookmarked; I subscribed to the RSS feed now as well :) I am new to FreeBSD and these kinds of practical articles are really helpful. Thank you very much for sharing your knowledge with others.


Thanks.


Thanks for mentioning this. I am just beginning my FreeBSD journey and wanted to set up a small pre-boot env with mfsBSD[1]; I didn't know FreeBSD already has a tool to do something like that.

[1]: https://github.com/mmatuska/mfsbsd


You can run a container on Synology and install your custom services and tools there. At least that is what I do. For custom kernel modules you still need a Synology package, for something like WireGuard.

If you have OPNsense, it has an ACME plugin with a Synology action. I use that to automatically renew and push a cert to the NAS.

That said, since I like to tinker, Synology does feel a bit restrictive. Although there is some value in a stable core system (like those immutable distros from Fedora Atomic).


The extremely old kernel on Synology makes it hard or impossible to run some containers.


I have a fairly recent DS920+ and have never had issues with containers - I have probably 10+ containers on it: Grafana, VictoriaMetrics/Logs, Jellyfin, Immich with ML, my custom Ubuntu toolboxes for net, media, and ffmpeg builds, Gluetun for VPN, Home Assistant, Wallabag, ...

Edit: I just checked Grafana and cadvisor reports 23 containers.

Edit2: 4.4.302+ (2022) is my kernel version. There might be specific tools that require more recent kernels, of course, but so far I've been lucky enough not to run into those.
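For checking whether a tool's kernel requirement is met, `sort -V` gives a cheap version comparison. A sketch using the 4.4.302 kernel above against the 5.6 minimum for in-kernel WireGuard (the sample values are hard-coded; on a real box you'd feed in `uname -r`):

```shell
kernel="4.4.302"   # sample value; on a real box: kernel="$(uname -r)"
need="5.6"         # in-kernel WireGuard landed in Linux 5.6

# sort -V orders version strings numerically; if the minimum required
# version sorts first, the running kernel is at least that new.
if [ "$(printf '%s\n%s\n' "$need" "$kernel" | sort -V | head -n1)" = "$need" ]; then
  echo "kernel new enough"
else
  echo "kernel too old"   # 4.4.302 < 5.6, so this branch fires here
fi
```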


While Gluetun works great, there are other implementations of WireGuard that fail without the kernel modules. I've also run into issues with containers wanting the kernel modules for iptables-nft, but Synology only has legacy iptables.


I believe even for Gluetun I had to add the WG kernel module. I think I used this to compile it myself: https://github.com/runfalk/synology-wireguard

I know there are userspace implementations, but I can't remember the specifics right now and don't have my notes with me.

> kernel modules for iptables-nft

I think you meant nftables. The iptables-nft package provides the iptables interface on top of nftables, for code that still expects it, afaik. I haven't run into that issue yet (knock on wood). According to the docs, nftables has been available since kernel 3.13, so in theory it might be possible to build the modules for Synology.
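A quick way to tell which backend a given iptables binary is, since modern builds tag it in the version banner. On a real box you would just inspect `iptables --version`; the strings below are sample stand-ins for that output so the logic is self-contained:

```shell
# Classify an iptables version banner by its backend tag.
detect_backend() {
  case "$1" in
    *"(nf_tables)"*) echo "nft" ;;      # iptables-nft wrapper over nftables
    *"(legacy)"*)    echo "legacy" ;;   # classic x_tables backend
    *)               echo "unknown" ;;  # very old builds print no tag at all
  esac
}

detect_backend "iptables v1.8.7 (nf_tables)"
detect_backend "iptables v1.8.7 (legacy)"
```

If a container reports `nf_tables` but the host kernel (like Synology's) only ships the legacy x_tables modules, rule manipulation fails in exactly the way described above.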

However, I don't think I will be buying another Synology, mainly because of other issues, like them restricting what RAM I can use or what I want to use the M.2 slots for, or their recent experiment with pushing their own drives only, etc. I might give TrueNAS a try, if I'm not bored enough to just build one on top of a general-purpose OS...


I had to look it up, and I think it was a mix of user error and a bad container. At one point I had been trying to use the nicolaka/netshoot container as a sidecar to troubleshoot iptables on another container, and it is/was(?) missing the iptables-legacy package, so it was unable to interact with the first container's iptables.

As great as containerization is, having the right kernel modules available goes a long way, and I probably wouldn't have run into trouble like that if the first container hadn't fallen back to iptables because nftables was unavailable.

All of these NAS OSes that include Docker work great for the most popular containers, but once you get into the more complex ones, strange quirks start popping up.


Sublime maybe?

