I made this a while back to move us off our on-prem Atlassian to GitLab [1]. Maybe it'll help someone who wants something similar. Fair warning: I haven't tried this recently, so YMMV.
Microsoft lost my 80-year-old aunt and my two preteen kids. My last hold-out at home is my son's laptop, which he needed Windows on for a proctored exam (now completed). He's excited to soon be on the same OS as the rest of the family.
Project Ozone 3, Enigmatica 2 Expert, Nomifactory, GregTech New Horizons, Sky Factory 4, and SevTech Ages all run fine under GNU/Linux. Is there some modpack that doesn't work?
If you can get Bedrock working on it, I’ll be happy to follow your steps. None of their friends play Java Edition, and it’s not compatible with their Realms.
Was the utility called slomo? I recall having to do something like `slomo sopwith.exe` to bring the processing loop back down into human ranges of reaction times.
At $DAYJOB we run Proxmox VMs that are running HashiCorp's Nomad orchestration. The Nomad clients then turn around and run the Docker containers (Proxmox -> Nomad VM -> Docker). For us it's easier to manage and segregate duties on the bare metal this way.
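If it helps picture that bottom layer, a minimal Nomad job looks something like this (the job/group/task names and image are made-up examples, not our actual config):

    job "web" {
      datacenters = ["dc1"]
      group "app" {
        task "nginx" {
          # the Nomad client on each VM drives the local Docker daemon
          driver = "docker"
          config {
            image = "nginx:stable"
          }
        }
      }
    }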
I just installed Kubuntu last week so I could get the additional shift-drag targets to split my 34" ultrawide into 3 sections, or snap windows to the edges to fill half the screen.
Something I realized after spending a few months in sway (i3) and then niri is that I only care about a few windows (code editor, terminal, browser, apps I use moment to moment).
All the rest I'd prefer to just summon as-needed and then dismiss without navigating away from the windows I care about.
sway/niri want me to tile every window into some top-level spot.
Took me a while to admit it, but the usual Windows/macOS/DE "stacking" method is what I want, plus a few hotkeys to arrange the few windows I care about.
Yeah, I came to the same conclusion a few months back. Sadly I had to ditch KDE for GNOME due to an issue[0] specific to NixOS but after going through the gauntlet of tiling window managers and PaperWM/Niri over the years I've also settled on a traditional DE.
I'm surprised to hear that niri didn't work for you, I feel like it's a really good middle ground between tiling and floating window managers. It handles a lot of window resizing and arranging for me, without being too rigid. Windows can have any width they need without having to evenly divide my monitor.
Makes sense I guess. I mostly work with a few long-lived applications, and I hate having to do any manual window management myself.
I'm fairly sure you could use scripting to come up with a Niri workflow that worked for your use case. Maybe something like niri-scratchpad (https://github.com/Vizkid04/niri-scratchpad). But I sympathize if you don't want to spend a ton of time experimenting with your tools when you already have something that works for you.
In sway, put the lower-priority windows in another workspace, or the scratchpad, or in tabs/stacks. You can also bind keys to focus specific programs by their app_id/class, so even if they're on another workspace or monitor it'll jump right there.
It sounds like the scratchpad may be especially close to what you want.
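For example, something along these lines in your sway config (the keybindings and the Spotify class here are just placeholders; Wayland-native apps match on app_id instead of class):

    # park the focused window on the scratchpad, and toggle it back
    bindsym $mod+Shift+minus move scratchpad
    bindsym $mod+minus scratchpad show
    # jump straight to Spotify by window class, even from another workspace
    bindsym $mod+m [class="Spotify"] focus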
Your sway solutions are all hacks to approximate what a stacking desktop environment's MRU stack gives you for free, though.
I don't want to leave the workspace, or go find which tab/stack I've put Spotify in, just to use it. And the scratchpad is no better, since I'd have to do an explicit summon/dismiss cycle between workspace and scratchpad just to recreate behavior I already get on a normal desktop environment.
Datacenter revenue for nVidia last quarter was something like $62B, vs under $4B for gaming. While not quite a rounding error, it feels like the gaming market is just too small for them to put more resources toward it for us consumer folks.
One strategic reason is to remove oxygen from competitors. Otherwise someone will scoop up the gaming market and put the proceeds into developing technology to compete with NVIDIA in the more lucrative AI space.
I wonder who is going to fill the gaming market if AI-focused companies can simply outbid them for manufacturing capacity? All available (and not-yet-available) capacity is pivoting to the AI market.
“I wouldn’t pick up $20 if there was $100 on the ground!”
Most people would pick up both.
These economic proclamations don’t seem to make sense when applied to different contexts — which suggests what you’re saying might be folk wisdom rather than sound theory (and greatly oversimplifying the problem).
You’re also discounting ecosystem effects — gaming GPUs driving demand for datacenter and workstation GPUs as hobbyist experimentation turns into industrial usage. We don’t know what would happen if nVidia stopped suppressing the GPU market, because it’s never been tried — nVidia has always viciously undercut their own grassroots.
> “I wouldn’t pick up $20 if there was $100 on the ground!” Most people would pick up both.
No, it’s more like there’s a massive pile of both $20s and $100s on the ground. You wouldn’t waste time running between the two; you’d focus on the $100s.
If you're within reach of both, then it's not a choice, and there's no opportunity cost in picking just one - you'd be taking both.
If you're within reach of just one, and picking it up means someone else might pick up the other, then which would you choose? The one you forgo is then, by definition, the opportunity cost.
Cash cannot buy more fabs. ASML machines are the constraint, and so is TSMC's capacity.
Not to mention that Nvidia's cash pile isn't magic - they shouldn't overpay for capacity; they're better off returning cash to investors in that case.
You’re standing on a traffic island in the middle of a busy road. The lights change allowing you to cross. On one side there is a $20 note, on the other there is a $100 note. Which side do you go to first?
I wouldn't pick up either even with empty hands. No idea where they've been. Maybe a fiver; a twenty, sure. At that point I'd put down my bags and grab both.
But so many gamers want to buy GPUs and can’t because they’re sold out, or won’t because prices are so inflated. Wouldn’t the gaming market be larger if the products were actually available, and at their actual MSRP?
Nvidia can't sell 10x the number of GPUs they sell now. As much as the supply issues are discussed, it would likely take them a long time just to double the market. They could try to become the vendor of choice for the PS6/next Xbox, but that's a big strategy shift for, again, maybe double the market, not 10x the market.
On the other hand right now the market doesn't seem to think that the >60bn of datacenter revenue is going away or even going to slow down _growing_ any time soon. Just adding 10% more revenue there (~$6B) is worth more than doubling their gaming business (~$4B), which they likely can't do anyway.
I am not saying it would be anywhere near equal, just that it would be "bigger" than $4B if it weren't so constrained.
>On the other hand right now the market doesn't seem to think that the >60bn of datacenter revenue is going away or even going to slow down _growing_ any time soon.
That is not substantiated. The AI bubble is wealthy hype, like claiming a single drop of blood can validate 100 different diagnostic tests; in reality it fails at parts-per-million rates, along with the reusable medium. Wealth latches onto idiocy.
The gaming and CAD markets are real expectations that latch onto reality: grow the education systems and you grow both. The same goes for matrix math, such as hashing.
AI has reached the point where the bottleneck is software, not hardware. And the divergence of specialized AI hardware does not equate to the math CAD and gaming need.
How many of the last ten years have had some kind of "temporary" GPU shortage? It was crypto, now it's LLMs, who knows what's next?
The only winning strategy for these guys is to exploit the market for all it's worth during shortages and carefully control production to manage the inevitable gluts.
> AI has reached the point where the bottleneck is software, not hardware
Citation very much needed.
At the very least, OpenAI seems to believe more and larger datacenters are the path to better models... and they've been right about that every time so far.
Slop is still slop. There is no legitimate evidence that these systems get any better just by throwing more hardware at them. Every one of the authors of this paper is involved with OpenAI, so its findings are very suspect.
I am afraid the GPU chips will often be useless (too power-hungry, running too hot, and needing too-expensive accessories), but it might be possible to harvest the memory chips and put them on useful GPU cards.
Shouldn't AI be able to take this one step further and just analyze the binary (of the Samba server, in this case) and create all kinds of interface specs from it?
As brazzy said, there's no such thing as extended ASCII. There's just a huge number of ASCII-compatible eight-bit encodings. The original IBM (and DOS) character set, hardwired into ROM, is the one you're thinking of, and went by various names such as "Personal Computer, MS-DOS United States, MS-DOS Latin US, OEM United States, DOS Extended ASCII (United States), PC-ASCII" [1].
DOS 3.3, in 1987, was the first version to support localized character sets, via a system of "code pages". You'd select an encoding/"character set" that suited your language in AUTOEXEC.BAT – or just use the default 437 if you were a US user and never had to worry about these things. For me, the most relevant code page was 850, aka "OEM Multilingual Latin 1" (not at all the same as ISO/IEC 8859-1, which is also known as "Latin 1").
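If memory serves, the incantation for 850 looked roughly like this (it also required DISPLAY.SYS to be loaded in CONFIG.SYS, and the paths varied by install):

    MODE CON CODEPAGE PREPARE=((850) C:\DOS\EGA.CPI)
    MODE CON CODEPAGE SELECT=850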
Why the apparently arbitrary numbers? I'm not sure, but Claude and ChatGPT both claim the codes were simply drawn from a more general-purpose sequence of product numbers used at IBM at the time.
This application, like other similar ones, uses Unicode box drawing characters that now all reside comfortably out of the eight-bit range.
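To illustrate, a quick sketch in a Python REPL (Python happens to ship a cp437 codec): the classic DOS box-drawing bytes decode to code points well above 0xFF:

    >>> b'\xc9\xcd\xbb'.decode('cp437')
    '╔═╗'
    >>> [hex(ord(c)) for c in '╔═╗']
    ['0x2554', '0x2550', '0x2557']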
> Why the apparently arbitrary numbers? I'm not sure, but Claude and ChatGPT both claim the codes were simply drawn from a more general-purpose sequence of product numbers used at IBM at the time.
Claude and ChatGPT are (probably) wrong. Wikipedia has three citations for the following statement:
> Originally, the code page numbers referred to the page numbers in the IBM standard character set manual
The reason they're so high is because code pages were assigned to EBCDIC first.
Yeah, I later found that quote on Wikipedia too. Though I don't think the cited source is super reliable either; it might just be folklore ("Oh, 'code page' refers to actual dead-tree pages"). All the IBM documentation I could find showed big gaps in the sequence of code pages.
But I just now found the list at [1]; I don't know why I didn't notice it before. It's certainly comprehensive! There must have been some real detective work involved in compiling that list. The gaps are much smaller, though they still exist, e.g. from 40 to 251. The 300s are rather sparse, there are only a few 4xx codes, and then there's a jump from 500 to 8xx (with some 7xx assigned later, I think).
In any case, I agree that the LLMs seem to have hallucinated the "more general sequence" part. The code page IDs, or more formally CCSIDs, always were a specific set of 16-bit ID numbers. Why exactly the various gaps exist is probably lost in history by now, if there ever even were any particular reasons.
There is a single row of apps that can be favorited on the bottom row of the screen for quick access. There is also a search bar that searches across apps, some direct app actions (like Firefox: New Tab), contacts, and some settings. The search bar might be able to pass the query to the default browser, but I haven't tried it.
There is no second "desktop" that can be swiped to, left or right. Widgets can be added, if desired.
Thankful for all the answers. I'll give it a try for a while to 'learn' me and see if I can lean into the workflow.
Optimistically speaking, the only drawback would then be that I only get one screen/desktop for adding widgets – which, I guess, might be a reasonable trade-off.
[1] https://gitlab.com/jeremygonyea/jira-to-gitlab-migration-too...