Turns out the syntax is backwards compatible with Python 3 releases before 3.5 (though not with Python 2; yet another reason, if you actually needed one, to start new stuff in Python 3)
I'm insanely happy that I have a new job that is not stuck on Python 2.x
This is one reason. All my personal fun-type projects are in the latest Python, and I have the leeway to update work to this.
I know there are a lot of people who don't like or want this, but coming from a C# background before Python, I really like this feature. (not that I want Python to be more like C#, I just like the safety net and guaranteed documentation.)
Using ":" in one place and "->" in another for type declarations is inconsistent. Not to mention that the dash is not always aligned with the tip of the ">", so it looks ugly, not like a real arrow!
The "name: str" implies that the type of the "name" object is str. If you were consistent, then "def greeting(...): str" would imply that the type of the greeting object is str, which is definitely not the case (it is "function"). The -> isn't defining the type of the function; it's specifying what it returns.
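For reference, a minimal version of the two annotation sites being contrasted:

```python
def greeting(name: str) -> str:
    # "name: str" annotates the parameter; "-> str" annotates the return value
    return 'Hello ' + name

print(greeting('Ada'))  # Hello Ada
```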
Correct. Annotations were added in 3.0 with no encouragement to use them for anything in particular, so they could observe what the community would use them for. They were just put in a dictionary attached to the function, and by inspection you could access them and do whatever the heck you wanted with them.
A lot of third-party libraries cropped up that used them to type-check stuff. Type-checking was a popular enough usage for them it's official now, to help gather people under the same libraries and terminology.
You can still use the annotations for other things, of course – the dictionary is still attached to the function and can be inspected as before – but if nothing else is specified, we can now assume they are type declarations.
They are just arbitrary expressions (though in Python, type names like 'int' or 'str' are themselves objects). They don't do anything by default but are accessible as fields of the function object.
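A quick illustration of that (the function here is made up): the annotation expressions simply land in the function's `__annotations__` dictionary, with nothing enforced.

```python
def greeting(name: str, excited: bool = False) -> str:
    return 'Hello ' + name + ('!' if excited else '')

# The expressions are just stored on the function object, available for inspection.
print(greeting.__annotations__)
# e.g. {'name': <class 'str'>, 'excited': <class 'bool'>, 'return': <class 'str'>}
```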
I have no idea how they managed to find something that didn't throw a syntax error, yet looked reasonable. Perhaps someone who followed the process will chime in.
To me it sounds like addressing a pain point for larger projects, moving the language forward, and adding something many in the community have long wanted (as evidenced by blog posts and articles, earlier type checking tool attempts, the acceptance of the relevant PEP, etc.).
Yes and that sounds unfamiliar from Python cultural POV. Python used to be the language that didn't try to please everyone and kept a coherent design philosophy that resulted in a clean and simple language.
>Python used to be the language that didn't try to please everyone
On the contrary. I have used Python professionally since 1997, and it has had all kinds of features added on for different needs, from decorators and generators to async and virtual environments, context managers, list comprehensions, dict comprehensions and what have you.
Of the languages I know of, only C# has had a similar pace of adding new stuff into the core language (not just libraries).
When did Python shy away from adding new features?
It seems that Python didn't try to please only 2 categories of people: those who dislike significant whitespace, and those who want proper closures.
>and kept a coherent design philosophy that resulted in a clean and simple language
At this point I'd say that Python, with its full feature set, is as complex as a scripting language gets. Maybe Perl 6 trumps it, not sure -- but what you describe is closer to a language like Lua, or tcl or even Go than Python.
It's true that the pace of cruft accumulation is worrying lately. Note that I said "used to"! Personally I think it all started to go overboard around the introduction of list comprehensions when we had perfectly good map, filter and lambda.
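For anyone who hasn't seen the two styles side by side, here is the same computation in each (a hypothetical squares-of-evens example):

```python
nums = [1, 2, 3, 4, 5]

# map/filter/lambda style
squares_fn = list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, nums)))

# list comprehension style
squares_lc = [n * n for n in nums if n % 2 == 0]

assert squares_fn == squares_lc == [4, 16]
```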
> Yes and that sounds unfamiliar from Python cultural POV.
Addressing pain points is unfamiliar to the Python culture?
In practice, type intransparency is a serious problem – duck typing is only fun as long as everyone a) knows what a duck is and b) agrees on what it should do. Type hints make it a lot easier to work with opaque libraries and inside large codebases.
> Python used to be the language that didn't try to please everyone and kept a coherent design philosophy that resulted in a clean and simple language.
Uh, ever seen "Batteries included" referenced in their documentation?
Yes, but it's definitely not the same thing. The vice of trying to please everyone means giving various people the features they ask for instead of weighing against the common good of the user base.
Edit: it was just an idea, it would be a bit more C-style syntax; of course backwards compatibility wouldn't be possible. Instead of downvoting would you care to explain?
I didn't downvote, but I don't like the syntax for a couple of reasons:
1. Instead of being more like C-style function declarations, they look a lot more like C-style type conversions.
2. However close it may be to C (type before identifier, I guess?), it's pretty far from any other mainstream language. For instance, C++ has support for -> syntax for function types (http://stackoverflow.com/questions/22514855/arrow-operator-i...) and some other languages (I can think of Scala) use name: Type declarations.
As a personal preference, I find the python type hinting more familiar than your example, despite (myself) being most familiar with C-like syntax.
Well, C, C++, C#, and Java come to mind. But you are right, C++ and PHP7/HHVM/Hack have a syntax similar to Python 3.5's. It's probably just a personal preference of mine.
You're too familiar with C-style languages. When I initially picked up Scala the : type syntax felt a bit strange, but I really came to like it and am happy when it's in other languages.
Putting the type after the name feels more natural, and the : also translates nicely into 'of type' (or, if you prefer for functions, 'returns type').
Python is not C so why should it suddenly "pretend to be" when doing type hinting? Also, I personally find that the type declaration format of C is one of my least favorite things about C.
Tons of extra needless parentheses, and for what exactly? To look more like C? It's not like (foo: x) isn't also a common style of type annotation. Besides, to a C programmer they look more like casts than type declarations.
I didn't downvote, but it being C-style syntax is not a recommendation in this context.
For familiarity, while the C family of languages favors "type name" declarations, the Pascal and ML family of languages use "name: type" instead. So, familiarity is entirely a function of what language you used before.
Second, C-style declarations have their problems, which is why they aren't used much outside of languages that derive from C. Generally, you look up declarations by searching for the name of the variable to determine the type; this generally has less cognitive overhead if the variable is at the start of the clause, not the end, ideally separated by punctuation (note how the "name: type" syntax mirrors Python dictionary "key: value" syntax) [1].
[1] Also, in C/C++ the "type name" syntax complicates the parser, as the parser has to be able to distinguish between type names and other identifiers; while this wouldn't be an issue for Python, there is still no reason to perpetuate this syntax choice.
Definitely not downvoting, but all the extra parentheses are kind of lispy, in having to grok (and balance) them. Also, having the (str) at the beginning makes me think that it's an input, whereas -> makes it clear (to me at least) that it's an output return type.
I'll bite: what would you have liked to see, in terms of syntax, to (optionally, as others have noted) provide hints as to what type each data element should be?
Recently stumbled upon IntelliJ using the Go plugin package after a Rubyist friend of mine spoke very highly of RubyMine. I have to say I was incredibly impressed. I looked at it a couple years ago when I was mostly Python and it was ok, but it's really stepped up its game - both feature-wise and visually.
The IntelliJ IDE set is really far better than I understood. IMHO.
The things you can do in Go vs. in (Python | Ruby) are actually results of the difference between static and dynamic typing. Yeah, static typing requires more key presses and some thinking, but in the case of tools (navigation, refactoring) it makes a huge difference. Ruby support would be years ahead of what it is now if the tools could understand the types in all cases.
I can see this being very useful in teams. Hell, it's even useful for a single developer as a form of future documentation. I generally write code today and think: when I read this in 6 months and have no memory of it, how can I help my future self? For new code bases I'll be adding this.
For dynamically typed languages, using a popular type annotation syntax (e.g. rdoc for ruby, closure compiler for javascript) and an IDE (like Intellij) can give you 80% of the benefits of type safety by doing code inspection on-the-fly and warning you when it finds incorrect types. The main downside is no compiler safety net when you try to refactor code.
I'm the author of the article. When I gave a preview of this at the PyRVA meetup, I gave this exact illustration, even using "my future self". So...jinx. :)
I've wondered this exact same thing myself for other languages. I doubt it's a lack of familiarity, since I would bet most compiler/language authors are aware of most Haskell concepts/idioms. I think it probably has more to do with function definition semantics in non-functional languages, but it seems that could be easily worked around...
Anyway the haskell way, especially with Typeclasses, has to be clearest way to express type constraints for functions I've encountered at least from a code readability standpoint.
I love Haskell, but I'm not sure this would actually be better for Python. To be Haskell-like, the comment would actually be different than what you wrote:
# greeting :: str -> int -> str
and this is specifically related to the functional nature of Haskell. `greeting "hello"` now instantly becomes a function with type :: int -> str. So the Haskell syntax is about currying, it's about how the function you wrote is really just a cascade of functions.
Doing something silly like making the "arguments" reside in an artificial tuple would definitely be bad practice in most Haskell cases, and so syntactically looks a bit weird and out of place, like it's shoehorned on just because Python function arguments are not like Haskell arguments.
Since Python is not based on this model of a function, I think it's a lot better to keep the syntax in the actual def statement. Adding the return type annotation at the end is nice.
Most probably, this stuff was added to the signature because there were already natural tools like the inspect library that would provide the immediate hooks for making use of it. But even if the implementation didn't need it to be part of the signature like that, I still think it's better for Python's paradigm.
I don't think this is a case of shoehorning anything. It is true that in Haskell you use currying more often, while in Python you usually supply a tuple with all the arguments at once.
However there are also cases in Python where you get currying, like decorators. And for complicated types like:
# greeting :: (str -> (str, str), int -> List[str]) -> Dict[str, int -> int]
being able to separate the signature from the declaration certainly makes reading it easier.
But in Python, to do currying, you either use decorators, functools.partial, or forced early binding (like nested function factories), etc. Just because currying is a thing you can do in the language (in a much different way than in Haskell where it is fundamental to every function) doesn't seem like a good reason to me to adopt Haskell-like type signatures.
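For concreteness, a sketch of what partial application usually looks like in Python via `functools.partial` (the `greeting` signature here is invented):

```python
import functools

def greeting(name: str, excitement: int) -> str:
    return 'Hello ' + name + '!' * excitement

# Roughly analogous in spirit to Haskell's `greeting "Bob"` yielding an
# int -> str function, but it's an explicit library call, not a language feature.
greet_bob = functools.partial(greeting, 'Bob')
print(greet_bob(3))  # Hello Bob!!!
```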
Just for example, in Python it's extremely common to use default arguments, and in fact a lot of stuff in the typing module exists to make it easier to do something like make an Optional type that can either be `None` or a `str` or something. So whatever type signature you're going to mimic from Haskell, you won't be able to give optional types or default arguments without making it totally un-Haskell. And of course no one is going to go off and change Python to work based on Haskell-like type classes (nor should they).
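A small sketch of the Optional/default-argument pattern mentioned above (the function is invented for illustration):

```python
from typing import Optional

def greeting(name: Optional[str] = None) -> str:
    # Optional[str] is shorthand for Union[str, None]
    if name is None:
        return 'Hello, stranger'
    return 'Hello, ' + name
```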
They stuck with a syntax that was valid in older versions of Python, and used the function annotation work from Python 3.4. Thus, no "clean slate, choose the ideal syntax" discussion.
There's quite some dissonance between the artificial "generics" vs normal types. Why the concrete str type for strings, but not for lists? I realize there are implementation level arguments for it, but the resulting compromise does not seem good.
And specifying the concrete str type goes diametrically against the "don't check against a concrete type, embrace duck typing" mantra that Python programmers are taught early.
Probably because Python doesn't distinguish characters and strings. When you index into a string, you get a string back (that happens to be length one, of course), which means that a string is a list of strings. Thus, exploding string into a list type isn't really possible as the type of `str` is something like `List[str]`, which doesn't make a whole lot of sense.
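To see the behavior being described:

```python
s = 'abc'
# Indexing a str yields another str (of length one); Python has no char type.
first = s[0]
print(type(first))  # <class 'str'>
assert first == 'a' and len(first) == 1
```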
> Why the concrete str type for strings, but not for lists
Because you would need to extend the base list type to support indexing just for this feature, which is a bit iffy if you ask me. I think you have a point about "don't check against concrete types" though, they should update the docs to suggest using an 'Iterable[type]' over a 'List[type]' unless you really really really want a list.
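For example, annotating with `Iterable[str]` keeps the duck typing intact (a made-up function, assuming the 3.5 `typing` module):

```python
from typing import Iterable

def shout(words: Iterable[str]) -> str:
    # Accepts any iterable of strings: list, tuple, generator, ...
    return ' '.join(w.upper() for w in words)

print(shout(['hi', 'there']))        # HI THERE
print(shout(w for w in ('a', 'b')))  # A B
```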
the type hinting in Python is just that -- hinting, right?
It's for IDEs like PyCharm to use to generate better docs and to be more intelligent about suggestions when writing code. But what it does NOT do is inform the interpreter/compiler about type information, correct?
I.e., Python3 code with perfectly implemented type hints would only make it easier on the developer, and not the interpreter/compiler, yeah?
The CPython interpreter does not enforce any errors/warnings/etc. based on annotations. It does programmatically expose annotations if you want to inspect them at runtime, but the feature is mostly meant for other tools to optionally hook into.
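To make that concrete, calling an annotated function with a "wrong" type raises nothing at runtime:

```python
def double(x: int) -> int:
    return x * 2

# CPython happily ignores the annotation; this runs without any warning.
print(double('ha'))  # haha
print(double(3))     # 6
```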
Yes, but there's nothing to say that an interpreter couldn't use the type hinting. Imagine a pypy-like interpreter that analysed the code and kept the hints; it could theoretically JIT it better.
Oh, I was somehow hoping this would allow you to have multiple function signatures. i.e. two functions foo, both accepting one variable named bar, but one would implement str and one int, for example.
To help with the phrasing: this is called function overloading. Type hinting is ignored by the interpreter so it doesn't add any new functionality, only documentation.
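That said, the standard library does offer a limited runtime version of this: `functools.singledispatch` (since 3.4) dispatches on the type of the first argument. Note this is independent of type hints; the example names here are made up:

```python
from functools import singledispatch

@singledispatch
def describe(bar):
    return 'something else'

@describe.register(str)
def _(bar):
    return 'a string: ' + bar

@describe.register(int)
def _(bar):
    return 'an int: %d' % bar

print(describe('hi'))  # a string: hi
print(describe(7))     # an int: 7
```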
So far I've been using Obiwan [1] to check types during testing. It's definitely not something you want to do all the time, because it has a huge runtime cost.
Are there any good libraries to turn type checking on/off using `if __debug__` as part of the decoration logic? If so, then running with -O in production would solve some of the problem with this.
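In case it helps, here is a minimal sketch of such a decorator, handling only plain (non-generic) class annotations; under `python -O`, `__debug__` is False and the wrapper is skipped entirely:

```python
import functools
import inspect

def checked(func):
    """Validate arguments against simple type annotations, debug builds only."""
    if not __debug__:
        return func  # zero overhead when running with -O

    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = func.__annotations__.get(name)
            # Only plain classes; generics like List[int] would need more work.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError('%s: %r is not %s'
                                % (name, value, expected.__name__))
        return func(*args, **kwargs)
    return wrapper

@checked
def greeting(name: str) -> str:
    return 'Hello ' + name
```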
People are going to get them wrong some of the time. Dumping the checking problem on some IDE, with a very vague specification of what an error is, is not progress. If functions have type specifications, you should not be able to call them with the wrong type, ever. This is the job of the compiler and the runtime system, not the IDE.
Somebody should stop Python's little tin god before he creates another giant step backwards like the Python 2 to 3 mess.
If there is a tool which statically analyzes your code and reports errors due to type mismatches, does it matter to you whether that tool has the exact literal name "ahead-of-time compiler"?
What does matter to me is whether the standard build tool for the language will run that tool on a proposed library release before that library gets uploaded to PyPI (or equivalent). If there are type annotations in the standard package repository that are lies then that seriously limits the usefulness of the whole thing.
> It's optional. If the library maintainer wants to do that then he can.
A library with no annotations in PyPI is fine-ish. A library with correct annotations in PyPI is fine. A library with incorrect annotations in PyPI is extremely not-fine.
I'm fine-ish with annotations being optional, but if they're there then their correctness needs to be enforced, otherwise they're worse than useless.
> A library with incorrect annotations in PyPI is extremely not-fine.
As is a library with broken logic or one that throws exceptions randomly. You want Guido and the developers to implement some kind of system that automatically scans all PyPi packages for any kind of errors? I'm sure them solving the halting problem won't take that long.
Incorrect annotations are a bug. The onus is on the maintainers of packages to fix bugs in their packages, not PyPi which is a distribution channel.
> As is a library with broken logic or one that throws exceptions randomly. You want Guido and the developers to implement some kind of system that automatically scans all PyPi packages for any kind of errors?
Yes I do, but perfect is the enemy of good; it's ok to do something that makes things better without solving the whole problem. I want the build system to encourage people to do the right thing. E.g. any serious build system already ensures that you run the unit tests as part of the build or at least release process, because it would be just dumb to publish a library release on PyPI if it failed the unit tests.
> I'm sure them solving the halting problem won't take that long.
We already know how to avoid the halting problem, it's called types; see e.g. Idris.
But you don't need to solve the halting problem to check annotations. It's not even hard. Why would you want to not check them? I mean if you're not even going to check them then why bother having annotations in the first place? Comments would provide the same functionality and have less danger of being misleading, because everyone already knows comments often end up being out of date.
I'm really confused here. Are you really advocating that PyPi perform some kind of magic on the thousands and thousands of packages that are uploaded, to ensure that all their unit tests pass, including testing the type annotations? Can you please give me an example of another language's packaging site with a non-trivial number of packages that does this, or at least explain why this isn't the developer's responsibility?
All you seem to be saying is that "Code should be tested before being uploaded to PyPi", and somehow that equates to type annotations being bad? Again, it's up to the developer to ensure his tests pass (by using a ci, or running tests locally, or even making a blood sacrifice to Cthulhu), and yes this should include a linter to ensure type annotations are correct if they are using them.
> I'm really confused here. Are you really advocating that PyPi performs some kind of magic on the thousands and thousands of packages that are uploaded to ensure that all their unit tests pass, including testing the type annotations?
I'm saying that the build tool should run unit tests and check annotations before uploading a package to PyPI. It doesn't matter whether checking annotations is done by a "compiler" or some other tool, but it does matter that whatever tool that is runs as part of the language standard release process (and in practice it tends to be the case that when a compiler enforces types then this happens, and when an external tool does then it doesn't).
> it's up to the developer to ensure his tests pass (by using a ci, or running tests locally, or even making a blood sacrifice to Cthulhu)
Leaving it up to the developer is a recipe for it not happening. We're programmers, we should automate these things.
> I'm saying that the build tool should run unit tests
Building in Python is little more than creating a .tar.gz (or other format) of your source directory. Testing is very different and is very specific per project.
Anyway, this whole discussion is pointless: you just need to run 'python setup.py test' before you run 'python setup.py bdist'. Is that so hard? If you really want, you can edit your setup.py to do this first.
I can edit my setup.py but I can't edit everyone else's. Defaults matter. My experience is that the average quality of random PyPI package is... not great, and I can't help thinking the anemic state of python build/project tooling is partly to blame.
The tooling is fine for that kind of stuff. In fact there's an almost embarrassing number of different ways to do what you want (you can use your own local build/test/release, probably scripted with tox and other tools, or you can have Travis CI do it for you automatically, or...).
I personally take a sort of halfway approach: I use Travis heavily for public documentation of the fact that my packages are fully tested and passing, but stop short of having it automatically release to PyPI for me when I tag a new version.
But in general, the tools can't fix the fact that PyPI is not heavily curated. Any non-curated package repository will have the issues you're complaining about, even if the tools default to doing what you want them to do.
Maybe. Maven Central isn't really curated, but by being associated with maven it encourages packages to run tests as part of the release process, and also to follow some standards in terms of e.g. package metadata pointing to the source repository and release tag in a standard format.
From the IDE point of view, there are other benefits besides correctness. Proper autocomplete for objects (huge whenever you have a chain of method calls), ctrl+click to go to a method, showing a method's documentation in a popup, find usages - all these require the IDE to know about types if they are to work reliably.
No, they aren't. In order to make use of type hints for anything, they need to be correct. If they aren't, anything else you build on top of them will be unreliable and confusing. As long as there is no way to check them, they're worthless.
If a library comes annotated with types, then an error in the typehints is just as much of a bug as an error in code or documentation.
Having most of the standard library plus large portions of the ecosystem come with these hints will be hugely beneficial for productivity.
And of course, there IS a way to check them: IDEs, linters, etc. I'm sure if you say that your function returns an instance of User and then you try to return a boolean, they will yell at you. And I expect that the standard library and popular libraries WILL run these tools before releasing a stable version. A 90% solution is still a step forward compared to relying on much less reliable human brains at every step.
I understand that errors in type hints are bugs just like errors in code are. That's the problem.
The entire point of type systems is to enforce correctness. They allow you to describe problems on the type level and ensure that the code being described matches that description, eliminating certain classes of bugs. That's why languages have them, that's the expectation.
Type hints are just an overhyped standardized comment format with special syntax support.
I agree that type hints are basically just "machine-readable documentation" (and human-readable as well!).
I still think that machine-readable documentation can be immensely useful when combined with a static code analysis tool. Not as useful as "real" static types would be, but useful nonetheless.
> If functions have type specifications, you should not be able to call them with the wrong type, ever.
This assumes you can represent a type specification for the function you are calling, this isn't always possible in a dynamic language with advanced metaprogramming facilities.
Think of it as modularizing the compiler and runtime so that the dials on dynamism can be tuned by the team.
PyCharm already uses pretty sophisticated type inference; this just helps it. But the real win is being able to run mypy or pytype over a codebase automatically.
Types, Contracts, Properties, Tests are different directions in a tensor field. With Python you can mix and match at your will. This is in _no way_ relatable to the 2-3 mess.
With the rise of online CI tools, I've seen it become much more common for Python-based projects to include a lint run as part of their tests and break the build on lint output.
I have fond memories of using Python 8-10 years ago, but looking at it today, I'm not sure I see compelling reasons not to just use C# instead.
I can have real static typing if I want it. I can also have dynamic typing if I want that. I can use lambda functions. I have LINQ. I can use ScriptCS or some of the new Roslyn stuff if I want to use it as a scripting language. Unix support is not as great, but they are coming along with that. There's no GIL.
Is there some killer feature for Python that I'm overlooking?
Still on average more concise to express the same idea.
Standard library is a nice mix of function-based vs. class-based solutions where appropriate, rather than forcing OO on you even when it's not the ideal approach.
Cross-platform C# usage is still not even a reality, let alone as good as what Python already has.
In several domains of programming, Python has best-of-breed, and sometimes just flat-out the best, available libraries.
I think its ubiquity is important (installed by default on most non-Windows machines, at least in my experience) combined with the simplicity of running a python script.
Now that Microsoft is supporting non-Windows with .Net core, C# is more attractive. That said, if I was thinking compiled Python with static typing my mind would go to Go.
I like this. I use Ruby as a scripting language more often than Python, and I would be happy to see similar additions to Ruby to improve code readability and IDE support (although RubyMine already does an awesome job of warning of possible code problems just using code inspection).
I sometimes wish that one programming language would work for all of my requirements, but I keep needing: scripting languages for quickly getting stuff done; practical languages like Clojure and Java; and type-strict languages like Haskell (which I would like to use more, but development takes longer).
Looking ahead a few years, the ideal for me would be a loose scripting language that had awesome static analysis tools. So, for example, I would get the safety of Haskell when hacking in Ruby.
It's cool that PyCharm added support for this -- the point of the typing module and the type hinting PEP is for IDEs to adopt it for linting purposes.
For those of you using vim, emacs, and the standard UNIX editing toolchain, the mypy linter provides command-line support. This blog post from April 2015 describes the first version of mypy that includes support for the official typing library.
If you can deal with a different file navigation setup (and outside a terminal obviously), VIM emulation in PyCharm is very good.
Having recently switched, PyCharm really does make larger projects a lot easier to refactor / manage.
Edit: Also, to actually answer your question, this is a relatively new feature so I don't think any of the VIM plugins support it yet. Jedi has had type hinting from doc strings for some time now. I'd imagine they'll update that to use this soon.
Has anyone tried GNOME Builder? I found it quite good for C, but the Python mode of PyCharm is still superior regarding search and introspection; yet it is not Free Software…