
In other words, not everyone involved is consenting to the things they're doing at runtime.


it's also a pain for compilers


If Python is interpreted, what’s the pain? Do you mean static compilers like Cython/pybind11/Boost.Python, or JIT compilers?


What does this mean?


It's hard to make optimizations, because you cannot make a lot of assumptions ahead of time about how the code is going to run.

For instance, one can override most of the builtin functions like len().
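For instance, here is a minimal sketch of what that override looks like — replacing the builtin len() at runtime changes what every module's bare len() call does, which is exactly the kind of assumption a compiler can't bake in:

```python
import builtins

# Snapshot the real builtin so we can restore it afterwards.
original_len = builtins.len

# Replace it: any code that falls through to the builtins namespace
# when resolving the name "len" now gets this lambda instead.
builtins.len = lambda obj: -1

assert len([1, 2, 3]) == -1   # the override is in effect

builtins.len = original_len   # restore the real builtin
assert len([1, 2, 3]) == 3
```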


I am curious how this affects the compiler in particular. Does the compiler measure this in some way and respond with different behavior? Or is it already designed less efficiently to account for these types of operations? Or a mixture of both, and/or more nuanced hurdles? Thanks for any info you can provide.


Python 3.6 added a version field to dictionaries. Since all variables are looked up in dictionaries (aka namespaces), a compiler can replace a call to a builtin with a test of the version of the builtins dict (a guard), which chooses between an optimized version and the "naive" code.

Further reading: https://www.python.org/dev/peps/pep-0509/
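The dict version tag from PEP 509 is a CPython internal and is not exposed to pure Python, so the following sketch uses an identity check on the builtin itself as a stand-in guard — the structure (cheap check, then fast path or fallback) is the same idea:

```python
import builtins

# "Compile time": snapshot the builtin we want to specialize for.
_orig_len = builtins.len

def fast_len(seq):
    # Guard: cheap check that the builtin has not been replaced.
    # (A real JIT would compare the builtins dict's version tag here.)
    if builtins.len is _orig_len:
        return _orig_len(seq)   # fast, specialized path
    return builtins.len(seq)    # naive fallback: honor the override

assert fast_len([1, 2, 3]) == 3

builtins.len = lambda seq: 999  # someone overrides the builtin...
assert fast_len([1, 2]) == 999  # ...and the guard routes around the fast path

builtins.len = _orig_len        # restore
```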



