I found it interesting that one of their use cases is numerical integration. As of now, Hope does method-by-method JIT, and numerical integration is one of those problems where "method by method" JIT leaves substantial room for improvement. It is the poster child used to showcase the strength of Stalin, the Scheme compiler that substantially outperforms numerical integrators written in C. The main feature Stalin uses to achieve that impressive performance is inlining the function to be integrated right into the integration code. Such inlining is hard to do when the code resides in separate compilation units and separate functions and is loaded dynamically, especially when the caller is only aware of a function pointer and not much else. The later the integration routine is compiled, the more the compiler can know about that function pointer, so a JIT could potentially help a lot here.
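To make the problem concrete, here is a minimal sketch (plain Python, nothing Hope-specific) of the kind of loop I mean: the integrand arrives as an opaque callable, so a method-by-method compiler must emit an indirect call per sample instead of inlining the body.

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule; `f` is an opaque callable.

    A method-by-method JIT compiles this loop without knowing
    which `f` it will receive, so every iteration pays a full
    call. A whole-program compiler like Stalin can instead
    inline `f` into the loop body and optimize across it.
    """
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Integrating x**2 over [0, 1]: the arithmetic inside the
# integrand is so cheap that the per-call overhead dominates.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```

The indirect call is exactly what gets multiplied in the multidimensional case, since the inner loop runs once per sample per dimension.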
It is indeed possible to do this kind of inlining in C++ using its template machinery, but it seems Hope does not do this yet. I am really curious about what Hope does and what plans they have for the future. Inlining gives a huge speedup for this application, more so if one is performing multidimensional numerical integration.
Another thing I was curious about was how Hope optimizes away the dynamic semantics of Python. If one were to isolate the one aspect of Python that makes it ridiculously hard to speed up, it is its dynamic nature. It seems that Hope removes the dynamic aspects by analyzing the AST of the Python code. Since Python was never intended to be type inferred, this surely poses a substantial obstacle. I wonder whether the Hope devs intend to add Cython's functionality of selectively removing the dynamic nature of specific functions via annotations. Given the difficulty of automatically statically typing a piece of Python code, any little help afforded to the compiler should go a long way.
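A tiny illustration of why AST-level inference is hard (generic Python, not Hope's actual analysis): the same expression can mean entirely different operations depending on runtime types, and only an annotation or whole-program analysis can pin it down.

```python
def scale(x, factor):
    # The AST is just `x * factor`; whether that is float
    # multiplication, string repetition, or list repetition is
    # decided afresh at each call. A Cython-style annotation
    # (e.g. declaring x a double) collapses this to one case.
    return x * factor

print(scale(2.0, 3))   # float multiplication -> 6.0
print(scale("ab", 3))  # string repetition -> "ababab"
```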
The other thing I am really curious about is what Hope does with NumPy vector expressions. Cython, for example, calls back into NumPy for these, so there is not much speed benefit to be had; in fact, if this happens inside a loop, it hurts performance because of the overhead of the call. So if one really wants performance, one needs to write the low-level indexing code in Cython syntax. It is a matter of personal taste, but if I were going to write low-level C/C++, I would rather write it in C/C++ itself, which has the additional benefit of mature toolchains.
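A small sketch of the two styles, using a hypothetical saxpy-style kernel in plain Python/NumPy (not actual Cython syntax):

```python
import numpy as np

def saxpy_vectorized(a, x, y):
    # One NumPy expression. Fast on its own, but if a compiler
    # merely calls back into NumPy here, every invocation pays
    # dispatch overhead and allocates temporaries.
    return a * x + y

def saxpy_indexed(a, x, y):
    # Explicit element-wise indexing: slow in pure Python, but
    # exactly the shape of loop that can become tight C once
    # the element types are known (what Cython syntax buys you).
    out = np.empty_like(y)
    for i in range(len(y)):
        out[i] = a * x[i] + y[i]
    return out

x = np.arange(4, dtype=np.float64)
y = np.ones(4)
# Both compute the same result; they differ only in how much
# a compiler can do with them.
assert np.allclose(saxpy_vectorized(2.0, x, y), saxpy_indexed(2.0, x, y))
```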
Finally, a number of tools aimed at similar use cases have been mentioned in the post and in the comments. I will add a shout-out for Copperhead. It isolates a subset of Python over which one can define ML-like semantics, which is then used to aid compilation and parallelization, all hiding behind familiar Python syntax.
Just a note: if this use case is important, I've observed that Julia will often properly inline numerical integrands, even some fairly tricky ones. But you do generally have to make sure your loops stride the quadrature grid in the correct memory order; Julia doesn't seem to reorder them.
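The same concern applies on the Python side: NumPy arrays are row-major (C order) by default, so a quadrature loop should keep the last index innermost. A minimal sketch (nothing here is Julia- or Hope-specific):

```python
import numpy as np

# Integrand samples on a 2D grid. NumPy defaults to C order
# (row-major), so the last index varies fastest in memory.
samples = np.arange(10000, dtype=np.float64).reshape(100, 100)

def accumulate_row_major(a):
    # Inner loop over the last axis walks memory contiguously.
    # Swapping the two loops would jump a full row width on
    # every inner iteration, which is the striding mistake the
    # note above warns about.
    total = 0.0
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            total += a[i, j]
    return total
```

The loop order doesn't change the result, only the memory access pattern, which is why a compiler that doesn't reorder loops leaves it up to you.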