you could just treat argument initialization as an expression that is evaluated every time you call the function. If you have a=[], it's a new [] every time. If a=MYLIST, it's a reference to the same MYLIST. Simple. Most sane languages do it this way; I really don't know why Python has (and maintains) this quirk.
b = ComplexObject(...)
# do things with b

def foo(self, arg=b):
    # use b
    return foo
Should it create a copy of b every time the function is invoked? If you want a copy, you can just call b.copy() yourself; but if the language always made that copy, you could not implement the current behavior at all.
I wonder why that kind of ambiguity or complexity even comes to mind at all. Just because Python is weird?
def foo(self, arg=expression):

could, and should, work as if it were written like this (pseudocode):

def foo(self, arg?):
    if is_not_given(arg):
        arg = expression
if "expression" is a literal or a constructor, it'd be called right there and produce a new object; if "expression" is a reference to an object in an outer scope, it'd still be the same object.
it's a simple code transformation with very predictable behavior, and most languages with closures and default argument values do it this way. Except Python.
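To make the contrast concrete, here's a minimal sketch (function names are made up for illustration) comparing Python's actual evaluate-once-at-def behavior with the per-call transformation described above:

```python
# Actual Python: the default expression is evaluated once, at `def` time,
# and the resulting object is shared by every subsequent call.
def append_actual(item, bucket=[]):
    bucket.append(item)
    return bucket

# The per-call semantics described above, spelled out with a sentinel
# (the standard workaround in today's Python):
_MISSING = object()

def append_per_call(item, bucket=_MISSING):
    if bucket is _MISSING:
        bucket = []  # the "default expression", re-evaluated on every call
    bucket.append(item)
    return bucket

print(append_actual(1))    # [1]
print(append_actual(2))    # [1, 2]  (the same list again)
print(append_per_call(1))  # [1]
print(append_per_call(2))  # [2]     (a fresh list each call)
```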
What you want is for an assignment in a function definition to be a lambda.
def foo(self, arg=lambda: expression):
Assignment of unevaluated expressions is not a thing yet in Python and would be really surprising. If you really want that, that is what you get with a lambda.
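A sketch of that workaround, assuming the goal is a fresh list on every call (names are made up):

```python
def foo(make_arg=lambda: []):
    # The default is an unevaluated expression; calling it here means
    # each invocation builds its own fresh list.
    arg = make_arg()
    arg.append("x")
    return arg

print(foo())  # ['x']
print(foo())  # ['x'], not ['x', 'x'] -- no state leaks between calls
```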
> most languages with closures and default values for arguments do it this way.
Do these also evaluate function definitions at runtime?
basically all object-oriented languages work like that. You access a member; you call a method which changes that member; you expect that change to be visible lower in the code, and there are no statically computable guarantees that a particular member is not touched in the called method (which is potentially shadowed in a subclass). It's not dynamism; even C++ works the same. It's an inherent tax on OOP. All you can do is try to minimize the cost of that additional dereference.
I'm not even touching threads here.
now, functional languages don't have this problem at all.
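In CPython, the usual way to minimize the cost of that additional dereference is to hoist the attribute lookup into a local before a hot loop. A micro-benchmark sketch (timings will vary; no speedup is guaranteed in every case):

```python
import timeit

def slow(n=50_000):
    out = []
    for i in range(n):
        out.append(i)    # 'append' is re-resolved on every iteration
    return out

def fast(n=50_000):
    out = []
    append = out.append  # one attribute lookup, then plain local-variable calls
    for i in range(n):
        append(i)
    return out

print("attribute lookup per iteration:", timeit.timeit(slow, number=10))
print("hoisted into a local:         ", timeit.timeit(fast, number=10))
```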
OOP has nothing to do with it. In your C++ example, foo(bar const&) is basically the same as bar.foo(). At the end of the day, whether you pass it in as an argument or access this via method-call syntax, it's just a pointer to a struct. Not to mention, a C++ compiler can, and often does, choose to put even member variables in registers and access them that way within the method call.
This is a Python-specific problem, caused by everything being boxed by default: the interpreter doesn't even know what's in the box until it dereferences it, and that problem extends to the `self` object. In contrast, in C++ the compiler knows everything there is to know about the type of `this`, which avoids the issue.
That's not true. I mean: it's true that it has little to do with OOP, but most imperative languages (the only exception I know of is Rust) have this issue; it's not "Python specific". For example (https://godbolt.org/z/aobz9q7Y9):
#include <cstdio>

struct S {
    const int x;
    int f() const;
};

int S::f() const {
    int a = x;
    printf("hello\n");
    int b = x;
    return a - b;
}
The compiler can't reuse 'x' unless it can prove that it definitely couldn't have changed during the `printf()` call, and it is unable to prove that, so the member is loaded twice. C++ compilers can usually only prove it for trivial code with completely inlined functions that doesn't mutate any external state, or mutates it in a definitely-not-aliasing way (strict aliasing). (And the `const` makes no difference here at all.)
In Python the difference is that it can basically never prove it at all.
> This is a Python specific problem caused by everything being boxed
I would say it is part Python being highly dynamic and part C++ being full of undefined behavior.
A C++ compiler will only optimize member access if it can prove that the member isn't overwritten in the same thread. Compatible pointers, opaque method calls... the list of reasons why that optimization can fail is near endless; C even added the `restrict` keyword because just having write access to two pointers of compatible types can force the compiler to reload values constantly. In Python, anything is a function call to some unknown code, and any function can get access to any variable on the stack (manipulating Python stack frames is fun).
Then there is the fun stuff the C++ compiler gets up to with variables that are modified by different threads: while(!done) turning into while(true) because you didn't tell the compiler that `done` needs to be thread-safe is always fun.
What is going on here is not that an attribute might be changed concurrently and the interpreter can't optimize the access. That is also a consideration. But the major issue is that an attribute doesn't really refer to a single thing at all; instead it means whatever object is returned by a function call that implements a string lookup. `__getattr__` is not an implementation detail of the language, but something that an object can implement however it wants, just like `__len__` or `__gt__`. It's part of the object's behaviour, not part of a static interface. This is a fundamental design goal of the Python language.
> This is a Python specific problem caused by everything being boxed by default and the interpreter does not even know what's in the box until it dereferences it
That's not the whole story of what is going on. Every attribute access goes through a function call (`__getattribute__`, with `__getattr__` as the fallback when normal lookup fails), and that call can return whatever object it wants.
bar.foo(...) is roughly type(bar).__getattribute__(bar, 'foo')(...)
This dynamism is what makes Python Python and it allows you to wrap domain state in interface structure.
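A minimal sketch of that dynamism (the `Proxy` class and its data are made up for illustration):

```python
class Proxy:
    def __init__(self, data):
        self._data = data

    def __getattr__(self, name):
        # __getattr__ is only the fallback, invoked when normal lookup
        # fails; either way, "attribute access" ends up running code the
        # object controls, which can return whatever it wants.
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name) from None

p = Proxy({"foo": lambda x: x * 2})
print(p.foo(21))  # 42 -- 'foo' is fabricated from a dict at access time
```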
it starts with a pretty common char, but almost never gets in the way to the point I forget it exists. Meanwhile docker -t uses ^P which I use all the time for history instead of arrow keys. It's possible to configure it, but it's not worth the hassle on servers. Really, really annoying.
It amplifies sequences that contain the two primer sequences on each end of the target. So if you had synthesized sequence XYZ with some mistakes like YZX, then you could target X and Z and purify.
You're correct that PCR has a limited max length, but it is longer and cheaper than vanilla DNA synthesis.
very much off topic, but that reminded me: my first PC (a 286) miraculously had a 40MB 2.5" Apple-branded HDD connected via a SCSI adapter. Who knows where it was sourced from. One weird thing was that it initialized on boot for about 40 seconds, displaying nothing. I was really surprised later to see how fast other PCs with ATA drives booted. I still wonder, and maybe someone has a clue, why init took so long? Is it something inherent to SCSI?
Nothing to do with SCSI itself; possibly a long timeout polling for devices. Some dumb firmware would do silly things like poll each possible target ID and wait for a timeout in series. Six possible devices on an old early SPI bus times 5 seconds each gets you in the neighborhood.
Having flashbacks to troubleshooting bus termination on DEC equipment.
For contrast, I had an Amiga with a 120MB Maxtor SCSI drive, and power-on to looking at the loaded Workbench GUI was about 6-7 seconds. The slowest part was waiting for the drive to spin up, which seems like an acceptable reason for a delay. Warm reboots were a few seconds faster.
So no, that's not anything inherent to SCSI. It could've been either the SCSI driver being slow to initialize, or the adapter being glacial, or the drive itself taking forever to come online.
I was 10 or 11 at the time, and half the games I had didn't have an obvious "quit" menu option. I hated pressing the hardware "reset" button because it meant waiting for a minute again, staring at the BIOS setup screen.
Every time I figured out a weird hidden keyboard combination to exit from yet another game was a happy day.
I was a kid without any PC within 40 miles of me, and had no idea that SCSI had to be terminated or anything. I don't remember any jumpers on the drive, though.
nope. it was in a 3.5" bay in my standard AT box, but it was smaller, on some massive rusty metal adapter. It looks like it came from some early Apple PowerBook.
I got my computer second-hand from some rural school accounting department in the south of Russia, circa 1994. Who knows how it got there, and who got and wired a SCSI adapter compatible with the ISA bus into that box.
it's just that "gitflow" is unnecessarily complex (for most applications). With rebase you can work more or less as with "patches" and a single master, like many projects did in the '90s, just much more comfortably and safely.
whoa. Well, if it really works for you. The thing is, git has practically zero "destructive" commands; you can almost always (unless you ran the garbage collector aggressively) return to a previous state of anything committed to it. `git reflog` is a good starting point.
I think I've seen someone code a user-friendlier `git undo` front-end for it.
TL;DR is: people feel safer when they can see that their original work is safe. While just making a new branch and playing around there is safe in 99% of cases, people are more willing to experiment when you isolate what they want to keep.
I learned Perl by scripting a client for one MUD back in 2000, and that led me to web development.
Just a month ago I remembered a MUD I'd been trying out back in 2003, and found that it is still online and my password still works. Very weird feeling, logging in to something 22 years later.
It was pretty advanced at the time, with 256 color support, IPv6, advanced encoding support, etc.
https://cryosphere.org/