This would not be a problem if your phone did not have an IMEI and IMSI, and if the telco only provided an anonymous Internet channel. The problem is that you must have a phone number, often linked to your ID, and pay with a bank card (also linked to your ID) instead of cryptocurrency. Towers and beamforming are not the problem at all.
Yeah, whether or not precise location info is required, even coarse 24/7 location tracking is a huge privacy issue. Privacy was simply never a part of the core design of our phone system in the first place. That needs to change. Device anonymization would be a great first step.
This shows the importance of having open-source software and firmware - commercial companies will betray you every day. Remember HDCP, unskippable DVD sections, TPM, updates you cannot disable, "security enclaves", content-protection areas on SSDs and all the similar stuff where the interests of the owner are not considered.
I think it is possible to run CPU code on a GPU (including a whole OS), because a GPU has registers, memory, arithmetic and branch instructions, and that should be enough. However, it would only be able to use a few cores out of many thousands, because GPU cores are effectively wide SIMD cores grouped into clusters, and CPU-style code would use only a single SIMD lane. Am I wrong?
Xeon Phi was so cool. I wanted to use the ones we had so much... but couldn't find any applications that would benefit enough to make it worth the effort. I guess that's why it died lol.
This seems correct to me. Of course you'd need to build a CPU emulator to run CPU code. A single GPU core is apparently about 100x slower than a single CPU core. With emulation a 1000x slowdown might be expected. So with a lot of handwaving, expect performance similar to a 4 MHz processor.
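The arithmetic behind that handwave can be sketched out; every factor here is a rough guess taken from the comment, not a measurement:

```python
# Back-of-envelope estimate; all numbers are assumptions, not benchmarks.
cpu_clock_hz = 4e9        # assume a ~4 GHz CPU core as the baseline
gpu_lane_penalty = 100    # claimed slowdown of one GPU "core" (SIMD lane) vs a CPU core
emu_penalty = 10          # extra factor so the total emulation slowdown is ~1000x
effective_hz = cpu_clock_hz / (gpu_lane_penalty * emu_penalty)
print(effective_hz / 1e6, "MHz-equivalent")  # -> 4.0 MHz-equivalent
```

So "similar to a 4 MHz processor" is just a 4 GHz baseline divided by the guessed 1000x combined penalty.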
Obviously code designed for a GPU is much faster. You could probably build a reasonable OS that runs on the GPU.
GPUs having thousands of cores is just silly marketing newspeak.
They rebranded SIMD lanes as "cores". For example, Nvidia 5000 series GPUs have 50-170 SMs, which are the equivalent of CPU cores there. So more than desktop CPUs, less than bigger server CPUs. By this math, each AVX-512 CPU core has 16-64 "gpu cores".
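The lane math works out like this; the SM and lane counts below are illustrative assumptions, not exact specs for any particular part:

```python
# Counting "cores" the marketing way vs the CPU way (illustrative numbers).
sm_count = 170                  # SMs on a large consumer GPU, per the comment
lanes_per_sm = 128              # fp32 lanes ("CUDA cores") per SM, an assumption
print(sm_count * lanes_per_sm)  # -> 21760 "cores" by the marketing count

lanes_per_avx512_reg = 512 // 32  # 16 fp32 lanes in one AVX-512 register
fma_units_per_core = 2            # many server cores have two 512-bit FMA units
print(lanes_per_avx512_reg * fma_units_per_core)  # -> 32 "gpu cores" per CPU core
```

Counting each fp32 lane as a "core" is exactly how you get the five-digit core counts on GPU spec sheets.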
170 compute units is still a crapload of 'em for a non-server platform with non-server platform requirements. So the broad "lots of cores" point is still true, just highly overstated, as you said. Plus those cores are running the equivalent of n-way SMT processing, which gives you an even higher crapload of logical threads. AND these logical threads can also access very wide SIMD when relevant, which even early Intel E-cores couldn't. All of that absolutely matters.
Each SM can typically schedule 4 warps, so it’s more like 400 “cores”, each with 1024-bit SIMD instructions. If you look at it this way, they clearly outclass CPU architectures.
This level corresponds to SMT in CPUs, I gather. So you can argue your 192-core EPYC server CPU has 384 "vCPUs", since execution resources per core are overprovisioned, and when execution blocks waiting for e.g. memory, another thread can run in its place. As Intel and AMD only do 2-way SMT, this doesn't make the numbers go up as much.
A single GPU warp is both beefier and wimpier than the SMT thread: it's in-order and barely superscalar, whereas on the CPU side it's a wide-superscalar, big-window, OoO brainiac. But on the other hand, the SM has wider SIMD execution resources, and there's enough throughput for several warps without blocking.
A major difference is how the execution resources are tuned to the expected workloads. CPUs run application code that likes big low-latency caches and high single-thread performance on branchy integer code, but it doesn't pay to put in execution resources for maximizing AVX-512 FP math instructions per cycle, or to increase memory bandwidth indefinitely.
Yep. But from the point of view of running CPU-style code on GPUs (e.g. the Rust std lib), and of how the "thousands of cores" fiction relates to that, those details are less relevant.
And for GenAI matrix math there are of course all the non-GPU acceleration features in various shapes and forms, like the on-chip edge TPU on G phones, or Intel's and Apple's matrix extensions, which are confusingly both called AMX.
Merely misled by marketing. The x64 arch has 512-bit registers and a hundred or so cores. The GPU arch has 1024-bit registers and a few hundred SMs or CUs, those being the equivalent of an x64 core.
The software stacks running on them are very different but the silicon has been converging for years.
Cooperating with law enforcement cannot be fraud. Fraud is lying to get illegal gains. I think it's legally OK to lie if the goal is to catch a criminal and help the government.
For example, in the 20th century, a European manufacturer of encryption machines (Crypto AG [1]) built a backdoor at the request of governments and was never punished - instead it received generous payments.
I don't think anybody is interested in reverse-engineering a closed-source OS to check whether it works as documented; it's easier to just use Linux, which has open-source code.
The law makes a distinction between storing it on a disk and just remembering the content. The latter is not a "copy" and not subject to copyright law:
> “Copies” are material objects, other than phonorecords, in which a work is fixed by any method now known or later developed, and from which the work can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device. The term “copies” includes the material object, other than a phonorecord, in which the work is first fixed.
> A work is “fixed” in a tangible medium of expression when its embodiment in a copy or phonorecord, by or under the authority of the author, is sufficiently permanent or stable to permit it to be perceived, reproduced, or otherwise communicated for a period of more than transitory duration. A work consisting of sounds, images, or both, that are being transmitted, is “fixed” for purposes of this title if a fixation of the work is being made simultaneously with its transmission.
Interesting. How long is a “transitory duration”? The interpretation of that likely has yet to be determined by a court case, and it can evolve, similar to how “all men are created equal” doesn’t just refer to men.
Seems to me a possible interpretation is just deleting the data after training is finished.
If I am not mistaken, the law prohibits producing any unauthorized copies. So if you download a pirated book on a computer, you produce an illegal copy: [1]. If I am not missing anything, ML companies are galaxy-scale infringers.
> 106. Exclusive rights in copyrighted works
> Subject to sections 107 through 122, the owner of copyright under this title has the exclusive rights to do and to authorize any of the following:
> (1) to reproduce the copyrighted work in copies or phonorecords;
> 501. Infringement of copyright
> (a) Anyone who violates any of the exclusive rights of the copyright owner as provided by sections 106 through 122 or of the author as provided in section 106A(a), or who imports copies or phonorecords into the United States in violation of section 602, is an infringer of the copyright or right of the author, as the case may be.
NT is a hybrid kernel; it has microkernel features, particularly around drivers, if I understand correctly.
But that said, I didn't say either were "good", I said that NT is "arguably better".
ETA: I reread my comment; you're right, I actually said that NT "isn't bad at all". I stand by what I said mostly though; that doesn't imply it's "good" necessarily, just that it's arguably better than Linux.
NT is a monolithic kernel and "hybrid kernel" is a pure marketing term. You can move functionality out of the kernel into userspace or not. There is no in-between.
I also don't get why people claim NT is "better." Linux is a modern kernel under very active development.
The graphics stack in NT is done in a microkernel fashion: it runs in kernel space but doesn't (generally) crash the whole OS in case of bugs.
There are a few interviews with Dave Cutler (NT's architect) around where he explains this far better than I can here.
Overall, if you have classic needs and don't care about OSS (whether for auditability, for customizability, or as a philosophical choice about open source), it's a workable option with its strengths and weaknesses, just like the Linux kernel.
Parts of the kernel can be made more resilient against failures, but that won't make it a microkernel. It'll still run in a shared address space without hardware isolation. It's just not possible to get the benefits of microkernels without actually making it one.
Also, Linux being OSS can't be dismissed, because it means it'll have features that Microsoft isn't interested in adding to Windows.
Modern Linux supports asynchronous I/O. It's debatable whether NT's lack of memory overcommitting is a superior choice. OS personalities have never been relevant outside of marketing and might even be technical debt at this point, as virtualization offers a more effective solution with significantly less complexity. Moreover, the Linux kernel maintains a stable ABI.
Much of the discussion surrounding NT's supposed superiority is outdated or superficial at best. Linux, on the other hand, offers several advantages that actually matter. It supports a wider variety of filesystems natively, with FUSE providing exceptional utility. Linux also accommodates more architectures and allows for more creative applications through features like User Mode Linux and the Linux Kernel Library. It also has a more robust debugging ecosystem thanks to its large community and open-source nature. All of these things are possible because Linux isn't bound by a single company's commercial interests.
Also, is Microsoft putting as much effort into NT these days? I find it hard to believe they care about NT when they stopped caring about what runs on top of it, leading to articles like this one.
Not nearly to the depth and breadth of NT. NT is async I/O throughout. Linux has a bunch of libs that ride on top of pretend async I/O, with io_uring as a more recent bonus.
> It's debatable whether NT's lack of memory overcommitting is a superior choice.
NT won't randomly kill a process. That's the winning play.
> OS personalities have never been relevant outside of marketing
Until you've used them for non-marketing purposes, then they're invaluable. Personalities existed when virtualization didn't exist on x86.
> Moreover, the Linux kernel maintains a stable ABI.
The only stable ABI on Linux is Win32.
> It supports a wider variety of filesystems natively,
Most distros suggest ext4 out of the box. Sysadmins are going to deploy ZFS where it counts. Some might use XFS. Having access to a ton of filesystems is great, but the usage outside of ext4 is going to be comparatively low. ext4 is the only FS I'd want to see as first-party on Windows as a data drive. But that would have been more important before pervasive networking.
> It also has a more robust debugging ecosystem thanks to its large community and open source nature.
This ignores the debugging tools on Win32 by a country mile.
> is Microsoft putting as much effort into NT these days
Yes. Even if you do the bare minimum investigative effort and follow the "what's new" for each version of Windows, you can see the kernel-level investment. Much of this is around security and isolation of kernel components. Microsoft is also in talks (finally, again) with EDR vendors to isolate their solutions; hopefully game devs are next.
> leading to articles like this one.
This isn't an article. It's an uninformed blog post.
> Not nearly to the depth and breth of NT. NT is async I/O throughout. Linux has a bunch of libs that ride on top of pretend async I/O with io_uring as a more recent bonus.
Is this a purity thing or does it have practical implications?
> NT won't randomly kill a process. That's the winning play.
Every OS will have to when it runs out of resources. No overcommitting means it's less resource-efficient too, so things aren't that simple.
> Until you've used them for non-marketing purposes, then they're invaluable. Personalities existed when virtualization didn't exist on x86.
When have OS personalities ever been a commercial success? Every product that built on it went nowhere.
> The only stable ABI on Linux is Win32.
Containers and Flatpaks prove otherwise. Static binaries exist, too.
Also, if you're extending this Linux / Windows comparison to include the userland, then Windows is no match for Linux. Not when Microsoft is actively sabotaging Windows.
> Having access to a ton of file systems is great, but the usage outside of ext4 is going to be comparatively low.
What on earth? There's more use to filesystems than mounting it at root. Are you really claiming that OS personalities are useful, but being able to mount any filesystem is not? That's absurd.
Which doesn't mean much without an ecosystem of programs using WinFsp that's comparable to Linux. Moreover, the long-term development of WinFsp isn't guaranteed, and there remains the risk that Microsoft could introduce changes that might impede the functionality of third-party filesystems.
> It's an uninformed blog post.
Uninformed? While an official Windows-themed Linux distro doesn't make sense, the observation that Windows is declining and Microsoft no longer cares at all what users think is very much correct and obvious to anyone. The fact that Microsoft hasn't ceased development doesn't negate this fact.
> Every OS will have to when it runs out of resources. No overcommitting means it's less resource-efficient too, so things aren't that simple.
NT does memory overcommit...
> When have OS personalities ever been a commercial success? Every product that built on it went nowhere.
Xceed made money off of it. Yes, every product has a shelf life. Just like every commercial Unix. They were successful at what they did until a replacement came along.
> Also, if you're extending this Linux / Windows comparison to include the userland, then Windows is no match for Linux. Not when Microsoft is actively sabotaging Windows.
You're not saying anything, here. "No match" how, exactly?
> There's more use to filesystems than mounting it at root. Are you really claiming that OS personalities are useful, but being able to mount any filesystem is not? That's absurd.
Absurd, how? Is mounting HFS /really/ that critical to your day-to-day?
> Which doesn't mean much without an ecosystem of programs using WinFsp that's comparable to Linux.
Movin' those goal posts!
> Uninformed? While an official Windows-themed Linux distro doesn't make sense,
You uh... did read the post, right? That's what the entire thing was about!