Exactly, for all the hate of Windows, I could at least just look for shit named co-pilot and uninstall it for a pretty nice experience on my new computer. Phones aren't always as straightforward (especially jarring as "Google services" are required in Sweden on Android for stuff like mobile identity systems).
This is so absurd... I have to keep an old phone (rooted, in order to hide that adb is enabled) connected to my home server just to use such an app, because GrapheneOS without Google services is apparently not secure enough.
Narrow mustache was leading a marginal party at the start of 1930 (Black Tuesday happened only at the end of October 1929, so the Great Depression was only just starting), and his party "only" gained 18% of the popular vote in September of 1930; it's the years after that that made his rise, so with a start-of-1930 cutoff he's still mostly a marginal player.
Broad mustache had risen to power, but had only properly gotten rid of the other faction in his country in the years before.
If it wasn't built by Matz I'd have severe doubts, but it's clearly defined and I presume he knows all limitations of the Ruby semantics well.
My thesis work (back when ECMAScript 5 was new) was an AOT JS compiler. It worked, but there were limitations with regard to input data that made me abandon it, since JS developers overall didn't seem to be aware of how to restrict themselves properly (JSON.parse is inherently unknown; today with TypeScript it's probably more feasible).
The limitations are also clear: the general lambda calculus points to limits in the type-inference system (there are plenty of good papers on the subject from e.g. Matt Might, as well as from the Shed Skin Python people).
eval, send, method_missing, define_method: as a non-Rubyist, how common are these in real-world code? And how is untyped parsing done (i.e. JSON ingestion)?
> If it wasn't built by Matz I'd have severe doubts, but it's clearly defined and I presume he knows all limitations of the Ruby semantics well.
It's a very pragmatic design: Uses Prism - parsing Ruby is almost harder than the actual translation - and generates C. Basic Ruby semantics are not all that hard to implement.
On the other extreme, I have a long-languishing, buggy, pure-Ruby AOT compiler for Ruby, and I made things massively harder for myself (on purpose) by insisting on it being written to be self-hosting, and using its own parser. It'll get there one day (maybe...).
But one of the things I learned early on from that is that you can half-ass the first 80% and a lot of Ruby code will run. The "second 80%" are largely in things Matz has omitted from this (and from Mruby), like encodings, and all kinds of fringe features (I wish Ruby would deprecate some of them - there are quite a few things in Ruby I've never, ever seen in the wild).
> eval, send, method_missing, define_method: as a non-Rubyist, how common are these in real-world code? And how is untyped parsing done (i.e. JSON ingestion)?
They are pervasive. The limitations are similar to those of mruby, though, which has its uses.
Supporting send, method_missing, and define_method is pretty easy.
Supporting eval() is a massive, massive pain, but with the giant caveat that a huge proportion of eval() use in Ruby can be statically reduced to the block version of instance_eval, which can be AOT compiled relatively easily: e.g. if you can statically determine the string eval() is called with, or can split it up, since a lot of the uses are unnecessary or are workarounds for relatively simple introspection that you can statically check for and handle. For my own compiler, if/when I get to a point where that is a blocking issue, that's my intended first step.
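To make that concrete, here's a rough sketch of the kind of reduction meant (my own made-up example, not taken from any particular compiler): the string form hides the code from an AOT compiler, while the block/define_method form exposes everything except the method name statically.

    # String eval: the method body only comes into existence at runtime,
    # so an AOT compiler can't see it.
    class Config
      def self.add_reader(name)
        class_eval "def #{name}; @#{name}; end"
      end
    end

    # Statically reducible form: the structure is ordinary Ruby that can be
    # compiled ahead of time; only the method name is data.
    class Config
      def self.add_reader(name)
        define_method(name) { instance_variable_get(:"@#{name}") }
      end
    end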
> eval, send, method_missing, define_method: as a non-Rubyist, how common are these in real-world code?
Quite a lot; that's what allows you to build something like Rails, with magic sprinkled all around. I'm not 100% sure, but the untyped JSON ingestion example probably uses those.
Remove that, and you have a very compact and readable language that is less strongly typed than Crystal but less metaprogrammable than official Ruby. So I think it has quite a lot of potential but time will tell.
> Quite a lot, that's what allows you to build something like Rails with magic sprinkled all around
True, but I'd point out that frameworks/DSLs etc. are the main place you see those things, and most of the code people write in their own projects doesn't use them.
In my experience (YMMV), eval and send are rare outside of things like slightly cowboy unit tests (send basically lets you call private methods that you shouldn't be able to call, so it's considered terrible form to use it 'IRL'. Though there is a public_send which is a non-boundary-violating version too).
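A quick illustration of that boundary difference in plain Ruby (nothing project-specific):

    class Account
      def initialize
        @balance = 100
      end

      private

      # deliberately not part of the public API
      def reset!
        @balance = 0
      end
    end

    a = Account.new
    a.send(:reset!)        # works: send bypasses visibility (handy in tests, dubious "IRL")
    a.public_send(:reset!) # NoMethodError: private method `reset!' called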
Also in my opinion, unless you're developing a framework or something, metaprogramming (things like define_method etc.) is Considered Harmful 95% of the time (at least in Ruby), as I think only about 5% of Ruby developers even grok it enough to work in a codebase with that going on. So while it might seem clever to a Staff Eng with 15 years of Ruby experience, the less experienced Rubyist who has to maintain the application later is going to be in pain the whole time, unable to find any of the method definitions that appear to be being called.
I disagree, I use metaprogramming in application code quite regularly, although I tend to limit myself to a single construct (instance_eval) because I find that makes things more manageable.
In my opinion the main draw of Ruby is that it's kind of Lisp-y in the way you can quickly build a metalanguage tailored to your specific problem domain. For problems where I don't need metaprogramming, I'd rather use a language that is statically typed.
The two are not mutually exclusive. On many occasions I've used C# to define domain-specific environments in which snippets of code, typically expressions, are compiled and evaluated at runtime, "extending the language" by evaluating expressions in the scope of domain-specific objects and/or defining extension methods on simple types (e.g., defining "Cabinet" and "Title" properties on the object and a "Matches" extension method on System.String so I can write 'Cabinet.EndsWith("_P") || Title.Matches("pay(roll|check)", IgnoreCase)').
I don't think instance_eval is too nasty. The toughest "good" codebase I've worked in was difficult because it used method_missing magic everywhere, which built tons of methods whose existence you had to just infer, based on configuration stored in a database. So most method calls could not be "command clicked" or whatever to jump to their definition, because none were ever defined.
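For anyone who hasn't run into that pattern, a minimal (made-up) sketch of why jump-to-definition breaks: the methods are never defined anywhere, calls are intercepted and resolved against external configuration.

    class Settings
      def initialize(config)
        # imagine this hash actually being loaded from a database
        @config = config
      end

      def method_missing(name, *args)
        key = name.to_s
        @config.key?(key) ? @config[key] : super
      end

      def respond_to_missing?(name, include_private = false)
        @config.key?(name.to_s) || super
      end
    end

    s = Settings.new("timeout" => 30)
    s.timeout   # => 30, but there is no `def timeout` anywhere to jump to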
Or even just a compiler to C piggybacking off <objc/runtime.h>; I think Apple still spends a lot of time making even dynamic class definition work fast. I haven't touched Cocoa/Foundation in a while, but I think (emphasis on think) a lot of proxy patterns in Apple frameworks still need this functionality.
> eval, send, method_missing, define_method: as a non-Rubyist, how common are these in real-world code?
The interesting bunch (to me, based on experience) is `eval`, `exec`, and `define_method` (as well as creating new classes with `Class.new` and `Struct.new`). My sense is that the majority of their use happens at application boot, while requiring files. In some ways, it is nearly a compilation step already.
> eval, send, method_missing, define_method: as a non-Rubyist, how common are these in real-world code?
This depends on the individual writing code. Some use it more than others.
I can only give my use case.
.send() I use a lot. I feel that it is simple to understand: you simply invoke a specific method. Of course people can just use .method_name() instead (usually without the () in Ruby), but sometimes you may autogenerate methods and then need to call something dynamically.

.define_method() I use sometimes, when I batch-create methods. For instance, I use the HTML colour names (steelblue, darkgreen and so forth) and often batch-generate the methods for these, e.g. with the correct RGB code (sketched below), plus similar use cases. But from about 50 of my main projects in Ruby, at best only... 20 or so use it, whereas about 40 may use .send (or both a bit lower than that).
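A rough sketch of that batch-generation pattern (the colour table here is made up for illustration, not the actual project code):

    # Hypothetical colour table; a real project might read this from a data file.
    HTML_COLOURS = {
      "steelblue" => "#4682b4",
      "darkgreen" => "#006400",
      "tomato"    => "#ff6347"
    }

    module Colours
      HTML_COLOURS.each do |name, rgb|
        # one reader method per colour, generated in a loop rather than written by hand
        define_method(name) { rgb }
      end
    end

    class Widget
      include Colours
    end

    Widget.new.steelblue   # => "#4682b4"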
eval() I try to avoid; in a few cases I use it or its variants. For instance, in a simple but stupid calculator, I use eval() to calculate the expression (I sanitize it before). It's not ideal, but it's simple. I use instance_eval and class_eval more often, usually for aliases (my brain is bad so I need aliases to remember things, and sometimes it helps to think properly about a problem).
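For illustration, the sanitize-then-eval approach could look roughly like this (the whitelist regex is my own guess at what "sanitize" means here, not the poster's code):

    def calculate(expression)
      # only digits, whitespace, parentheses, decimal points and basic operators get through
      unless expression.match?(%r{\A[\d\s.+\-*/()]+\z})
        raise ArgumentError, "refusing to eval: #{expression.inspect}"
      end
      eval(expression)
    end

    calculate("2 * (3 + 4.5)")   # => 15.0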
method_missing I almost never use anymore. There are a few use cases where it is nice to have, but I found that whenever I used it, the code became more complex and harder to understand, and I kind of got tired of that. So I try to avoid it; it's not always possible, but I do when I can.
So, to answer your second question: to me personally, only .send() is very important; the others are sometimes useful but not that important to me. Real-world code may differ; the Rails ecosystem is super-weird to me. They even came up with HashWithIndifferentAccess, and while I understand why, it also shows a lack of UNDERSTANDING. This is a really big problem with the Rails ecosystem - many Rails people really did not, or do not, know Ruby. It is strange.
"untyped parsing" I don't understand why that would ever be a problem. I guess only people whose brain is tied to types think about this as a problem. Types are not a problem to me. I know others disagree but it really is not a problem anywhere. It's interesting to see that some people can only operate when there is a type system in place. Usually in ruby you check for behaviour and capabilities, or, if you are lazy, like me, you use .is_a?() which I also do since it is so simple. I actually often prefer it over .respond_to?() as it is shorter to type. And often the checks I use are simple, e. g. "object, are you a string, hash or array" - that covers perhaps 95% of my use cases already. I would not know why types are needed here or fit in anywhere. They may give additional security (perhaps) but they are not necessary IMO.
Why do you say HashWithIndifferentAccess shows a lack of understanding? Like many Rails features, it's a convenience that abstracts away details that some find unpleasant to work with. Rails sometimes takes "magic" to the extreme through meta-programming. However, looking at the source [1], HashWithIndifferentAccess doesn't use eval, send, method_missing, or define_method. So I'm not sure how it seems weird to someone who works more with plain Ruby.
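For anyone who hasn't used it: the convenience is just that string and symbol keys become interchangeable. A stripped-down approximation of the idea (the real ActiveSupport class handles far more, e.g. fetch, merge and nested conversion):

    # Minimal sketch of the idea behind ActiveSupport::HashWithIndifferentAccess:
    # normalise every key to a string on write and on read.
    class IndifferentHash < Hash
      def []=(key, value)
        super(key.to_s, value)
      end

      def [](key)
        super(key.to_s)
      end
    end

    h = IndifferentHash.new
    h[:user] = "alice"
    h["user"]   # => "alice"
    h[:user]    # => "alice"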
Seeing the performance improvement numbers, I'm pretty sure there's a type-inference system below it that realizes types in all paths (same as in the AOT JS compiler I created).
It's not about being beholden to types per se, but rather that fixed types are way faster to execute, since they map to basic CPU instructions rather than every operation having to first determine the type and then branch depending on the type used.
The problem with dynamic types is that they either need to somehow join into fixed types (as with TypeScript specifying a type for the parsed object) or remain dynamic through execution (thus costing performance).
I think you could work around send(). Not a Ruby person, but in most languages you could store functions in a hashmap, and write an implementation of send that does a lookup and invokes the method (passing the instance pointer through if need be).
Won’t work with actual class methods, but if you know ahead of time all the functions it will call are dynamic then it’s not a big deal.
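Roughly what that could look like in Ruby terms (a generic sketch, not tied to the compiler under discussion): build the lookup table for the dynamically-callable subset ahead of time and route a send() replacement through it.

    class Handler
      def on_create(payload); "created #{payload}"; end
      def on_delete(payload); "deleted #{payload}"; end

      # The table a compiler could emit ahead of time for the methods
      # that are known to be called dynamically.
      DISPATCH = {
        on_create: instance_method(:on_create),
        on_delete: instance_method(:on_delete)
      }.freeze

      # send() replacement limited to the statically known entries.
      def static_send(name, *args)
        DISPATCH.fetch(name).bind(self).call(*args)
      end
    end

    Handler.new.static_send(:on_delete, 42)   # => "deleted 42"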
Anyone tried using Kaitai descriptions? It seems like a fairly flexible system that would be an excellent starting point for a hex-editor that wants to add good higher level coloring (and perhaps even editing).
I think going bare-metal at a cut-rate price might've been the only way to actually kick-start that. E.g. ECUs make engines run more efficiently, so buyers will be paying extra for fuel in a bad economy, but the lock-down was worse when it was causing downtime.
But tech in general is perhaps in a growing-up phase: we had Arduinos and Raspberry Pis filling a similar need (connecting computers to electronics was needlessly complicated) that was initially filled from the low end, but now we have faster SBCs and things like Framework laptops that are expanding the range of repairable/replaceable/hackable options up to the high end, and farming equipment is probably destined to get a similar range of options.
An interesting question here is whether cars will also start getting a range of more hackable options. Mechanics are ingenious already, but it's still very much hacks without manufacturer support; a new manufacturer providing a low-cost base could very well pop up and grow quickly if they establish an ecosystem.
That's true but employees offer more than code output, and you still need people operating the "machine" at this stage.
I am interested in how corporate politics evolves in this new environment. Usually, all the way up the chain, managers and directors use head count as a measure of power and influence (and compensation). Who's going to pay a director top-level pay when all they're doing is funneling requirements to various agents? That seems like a technical role that isn't particularly aligned with the soft skills management excels at, either.
Management seems disconnected from reality. Real employees accumulate tribal knowledge, have an almost infinite context, and don't keep disabling unit tests because they don't pass. They don't really cost money when it's information workers who build almost all of the modern service industry. It's management that we should see as a cost center.
> seems like x86 and the major 8-bit CPUs had the same speed, pondering if this might be a remnant from the 4-bit ALU times.
I think that era of CPUs used a single circuit capable of doing add, sub, xor etc. They'd have 8 of them and the signals propagate through them in a row. I think this page explains the situation on the 6502: https://c74project.com/card-b-alu-cu/
In any ALU the speed is determined by the slowest operation, so XOR is never faster. It does not matter what the width of the ALU is; all that matters is that an ALU does many kinds of operations, including XOR and subtraction, where the operation done by an ALU is selected by some control bits.
I have explained in another comment that the only CPUs where XOR can be faster than subtraction are the so-called superpipelined CPUs. Superpipelined CPUs have been made only after 1990, and there were very few such CPUs. Even if in superpipelined CPUs it is possible for XOR to be faster than subtraction, it is very unlikely that this was implemented in any of the few superpipelined CPU models that have ever been made, because it would not have been worthwhile.
For general-purpose computers, there have never been "4-bit ALU times".
The first monolithic general-purpose processor was Intel 8008 (i.e. the monolithic version of Datapoint 2200), with an 8-bit ISA.
Intel claims that Intel 4004 was the first "microprocessor" (in order to move its priority earlier by one year), but that was not a processor for a general-purpose computer, but a calculator IC. Its only historical relevance for the history of personal computers is that the Intel team which designed 4004 gained a lot of experience with it and they established a logic design methodology with PMOS transistors, which they used for designing the Intel 8008 processor.
Intel 4004, its successors and similar 4-bit processors introduced later by Rockwell, TI and others, were suitable only for calculators or for industrial controllers, never for general-purpose computers.
The first computers with monolithic processors, a.k.a. microcomputers, used 8-bit processors, and then 16-bit processors, and so on.
For cost reduction, it is possible for an 8-bit ISA to use a 4-bit ALU or even just a serial 1-bit ALU, but this is transparent for the programmer and for general-purpose computers there never were 4-bit instruction sets.
> I have explained in another comment that the only CPUs where XOR can be faster than subtraction are the so-called superpipelined CPUs. Superpipelined CPUs have been made only after 1990 and there were very few such CPUs.
(And I'm choosing 386 to avoid it being "a superpipelined CPU".)
> Or you do not consider MUL/DIV "arithmetic", or something.
Multiplier and divider are usually not considered part of the ALU, yes. Not uncommon for those to be shared between execution threads while there's an ALU for each.
The 386 is a microprogrammed CPU where a multiplication is done by a long sequence of microinstructions, including a loop that is executed a variable number of times, hence its long and variable execution time.
A register-register operation required 2 microinstructions, presumably for an ALU operation and for writing back into the register file.
Unlike the later 80486, which had execution pipelines that allowed consecutive ALU operations to be executed back-to-back (so the throughput was 1 ALU operation per clock cycle), the 80386 had only some pipelining of the overall instruction execution: instruction fetching and decoding was overlapped with microinstruction execution, but there was no pipelining at a lower level, so it was not possible to execute ALU operations back to back. The fastest instructions required 2 clock cycles and most instructions required more.
In 80386, the ALU itself required the same 1 clock cycle for executing either XOR or SUB, but in order to complete 1 instruction the minimum time was 2 clock cycles.
Moreover, this time of 2 clock cycles was optimistic: it assumed that the processor had succeeded in fetching and decoding the instruction before the previous instruction was completed. This was not always true, so a XOR or a SUB could randomly require more than 2 clock cycles when it needed to finish instruction decoding or fetching before doing the ALU operation.
In very old or very cheap processors there are no dedicated multipliers and dividers, so a multiplication or division is done by a sequence of ALU operations. In any high performance processor, multiplications are done by dedicated multipliers and there are also dedicated division/square root devices with their own sequencers. The dividers may share some circuits with the multipliers, or not. When the dividers share some circuits with the multipliers, divisions and multiplications cannot be done concurrently.
In many CPUs, the dedicated multipliers may share some surrounding circuits with an ALU, i.e. they may be connected to the same buses and they may be fed by the same scheduler port, so while a multiplication is executed the associated ALU cannot be used. Nevertheless the core multiplier and ALU remain distinct, because a multiplier and an ALU have very distinct structures. An ALU is built around an adder by adding a lot of control gates that allow the execution of related arithmetic operations, e.g. subtraction/comparison/increment/decrement and of bitwise operations. In cheaper CPUs the ALU can also do shifts and rotations, while in more performant CPUs there may be a dedicated shifter separated from the ALU.
The term ALU can be used with 2 different senses. The strict sense is that an ALU is a digital adder augmented with control gates that allow the selection of any operation from a small set, typically of 8 or 16 or 32 operations, which are simple arithmetic or bitwise operations. Before the monolithic processors, computers were made using separate ALU circuits, like TI SN74181+SN74182 or circuits combining an ALU with registers, e.g. AMD 2901/2903.
In the wide sense, ALU may be used to designate an execution unit of a processor, which may include many subunits, which may be ALUs in the strict sense, shifters, multipliers, dividers, shufflers etc.
An ALU in the strict sense is the minimal kind of execution unit required by a processor. The modern high-performance processors have much more complex execution units.
Most of mul/div was implemented in hardware since the 80186 (and the more or less compatible NEC V30 too). The microcode only loaded the operands into internal ALU registers, and did some final adjustment at the end. But it was still done as a sequence of single bit shifts with add/sub, taking one clock cycle per bit.
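The shift-and-add loop being described is essentially the following (written out in Ruby just to show the per-bit work; the real thing is microcode driving the ALU and internal registers):

    # One conditional add plus a shift per multiplier bit, which is why the
    # hardware version takes roughly one clock cycle per bit.
    def shift_add_multiply(a, b, bits: 16)
      product = 0
      bits.times do
        product += a if (b & 1) == 1   # add the multiplicand when the low bit is set
        a <<= 1                        # shift multiplicand left...
        b >>= 1                        # ...and multiplier right
      end
      product
    end

    shift_add_multiply(25, 37)   # => 925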
> For general-purpose computers, there have never been "4-bit ALU times".
Well, consider minicomputers made from bit-slices. Those would be 4-bit ALUs with CLA.
What drives me crazy about the 8-bit era is the lack of orthogonality. We're having this whole discussion because they didn't have a ZERO or ONES opcode. In 1972's 74181 chip those were just cases among 48 modes.
The minicomputers made with bit-slices had 16-bit ALUs or 32-bit ALUs.
Those 16-bit or 32-bit ALUs were made from 2-bit, 4-bit or 8-bit slices, but this did not matter for the programmer, and it did not matter even for the micro-programmer who implemented the instruction set architecture by writing microcode.
The size of the slices mattered a little for the schematic designer, who had to draw the corresponding slices and their interconnections, and it mattered a lot for the PCB designer, because each RALU slice (RALU = registers + ALU) was a separate integrated circuit package.
Intel made 2-bit RALU slices (the Intel 3000 series), while AMD made 4-bit RALU slices (the 2900 series), which were the most successful on the market. There were a few other 4-bit RALU slices, e.g. the faster ECL 10800 series from Motorola. Later, there were a few 8-bit RALU slices, e.g. from Fairchild and from TI, but by that time the monolithic processors had quickly become dominant, so the bit-sliced designs were abandoned.
The width of the slices mattered for cost, size and power consumption, but it did not matter for the architecture of the processor, because the slices were made to be chained into ALUs of any width that was a multiple of the slice width.
I think the framebuffer device is a least common denominator that is available on even minuscule or emulated hardware, whilst anything above that starts requiring a whole lot more infrastructure.
And honestly, you don't need much more for an image viewer.
This is exactly right - you can get a framebuffer on just about anything, including pretty much any video card made since about 1990, and also more fun things like the little i2c display that your toaster has. No need to restrict relatively simple software like fbi/fim to running on less hardware by using drm.
I used to play videos with the framebuffer just fine and read PDFs with fbpdf2, along with watching TV with fbtv and the like. I didn't miss nearly anything, as most games under SDL1/2 can render into the FB with
export SDL_VIDEODRIVER=fbcon
or
export SDL_VIDEODRIVER=kmsdrm
And tons of games too, such as Supertux2, Crispy Doom, SDLQuake, FreeCiv-SDL...
another fun one if you're in an exotic situation and don't have a framebuffer (or if you just want to have some fun by making your games look worse):
Illegal for whom? The manufacturers? It's the same as it's now illegal for Boeing and Airbus to sell parts to Russia, yet Russia developed a network of intermediaries in several countries that buy the parts on their behalf so they can maintain their planes. PEWEX stores used to sell all kinds of goods from the West, including computers and even cars; if you had the dollars, it was far easier to buy a Western car or computer than to wait for a domestically made one. Maintaining it afterwards was a different question of course, but PEWEX stores were created specifically by the government to obtain dollars: they bought goods in the West, usually by barter, and then sold them domestically for dollars, which they then used to buy the goods they really wanted, since no one would take Polish zloty in the West, but dollars opened many doors.
Space (SpaceX showed that reusable rockets are feasible), programmable health (the Covid vaccine, and remember that mRNA treatment curing that dog?), etc.
Sadly, I think there's a risk we might also be heading towards a dark age with few advances, since fundamental research has been squeezed away for being unprofitable or hobbled by an industrialized publishing/review system for a while now, and we've been coasting along on profitable applications rather than (expensive) breakthroughs in basics.