They did this to us as well. And not only this, but they blocked our ability to push new releases until we opted into the new plan. And this literally happened on the day of a major release for our biggest client. It felt like we were being held at gunpoint.
We called them and begged them to give us an extension so we could perform the release, and their sales rep treated us like we were the irresponsible ones for not reading the emails they had sent us carefully enough.
We've since moved to Vercel and will never use Netlify again because of the way they managed this.
Rust is not well suited for the same things as JavaScript / TypeScript / JSX / etc.
Also, JavaScript and TypeScript are not well suited for compilers and dev tooling.
It shouldn't be seen as a negative that Deno uses Rust in places where Rust really shines, which is a totally different use case from what Deno itself is typically used for.
Other than browser based applications why would you choose JavaScript/TypeScript etc over Rust?
Developer tooling is not complicated, and there are plenty of existing tools written in JavaScript, so it seems weird when they rule out their own language as not good enough. It should make anyone think twice before using Deno server side if it is not good enough for themselves.
The parts used to make your car were likely delivered to the factory by a semi truck, and not by another car. Should you buy a semi truck to go to the mall then, since even the carmaker thinks that cars aren't good enough? Obviously not. Different requirements are a thing.
I'm the creator / maintainer of a widely used authentication library, and I also don't know the answer to this.
There have been several issues reported around this, but no one seems to have a good proposal for a solution.
I've also discussed this with security auditors from Cobalt (because they flagged it as an issue), but they also did not propose any solutions (other than using httpOnly cookies instead of tokens, which doesn't really address the issue).
I had to use my old 2012 MBP the other day after using the 2016 MBP since it was released, and the 2012 keyboard was a stunning improvement over the 2016.
Which is strange because I never thought the 2012 keyboard was exceptional.
I like the new keyboard; the keys feel more stable and I can generally type faster due to less travel.
However, the problems are real. My ‘t’ key gets stuck regularly. Some other keys have had problems as well. And I rarely use the keyboard (at work I connect a Microsoft Natural Keyboard). I have also heard that colleagues have stuck keys.
I think it is absolutely inexcusable for a laptop that cost 1,699 euros new. I have been a Mac user for 10 years. But Apple’s inability and unwillingness to address the problem leaves a bad taste in my mouth. I am now also unlikely to recommend MacBooks until they fix this issue.
I do like the new keyboard too, but I think the old one was waaay more accurate.
Mine got replaced after one year of use (and I got AppleCare immediately after that). Unfortunately, after less than a month, keys started to get stuck again, and I do keep quite good hygiene around my computer.
I actually love the way it _feels_. If they could get the same feel with normal reliability, it'd be the best keyboard ever IMO. But no key feel, no matter how good, is worth this ludicrous failure rate.
I think they removed it because the battery life of the new MBPs has been getting a lot of negative attention, deserved or not, and in general people don't seem to be capable of understanding why the estimate would change for different workloads. Maybe the best PR move was just to remove it altogether. I wish they hadn't.
It's not the same machine, but I routinely get > 10 hours on my 2016 12" Macbook with a workflow that includes moderate to heavy browsing in Opera, coding in Tmux/Vim, and various compilers and interpreters invoked periodically. Have you had a look in Activity Monitor to see where all your power is going?
I'm using two non-Apple apps as the core of my workflow (Opera and Alacritty for terminal: https://github.com/jwilm/alacritty) but my battery life is still excellent. However, I chose them carefully on the basis of their efficiency. Things written in Javascript seem to be real resource hogs (Atom comes to mind), and Chrome is notorious for inefficiency in memory, at least.
Click on the battery icon in your menu bar, and see if there are any programs using "Significant Energy". Some programs will switch on the discrete graphics card without needing it, for example.
You cannot invalidate JWT tokens
This is simply not true. You will always sign your tokens with a well-known secret; you could eventually even add some salt from a database to it.
I think one of the benefits of some (most?) JWT implementations is that it doesn't hit the database on every request - the token is only validated against a global secret.
So you can't invalidate a single user's session without invalidating all users' sessions.
I keep a counter in the JWT to at least mostly get around this issue. When processing a request, the counter is checked for the user, which isn't a big deal since all of the requests already require looking up the user. A counter increment invalidates all of that user's existing tokens.
If a user changes their password, their roles change, etc, then the counter gets incremented so all tokens issued up to that point won't be valid anymore.
Can you explain this a little bit more? You keep a counter on the user object in the DB? What is the JWT buying you if you still have to hit the DB on every request?
Presumably you can keep counters like these on the server edges, and just push new values out to servers whenever things change, as opposed to query a DB every time. This wouldn't invalidate individual tokens however, but all tokens that have that counter value. It'd also mean there's a window where tokens can still be used while servers are being updated with the new value(s).
These are just some random and half-baked thoughts, I have no idea what OP does, but there are options to limit hitting backing DBs anyway.
You can store the user info in the JWT so you don't need to hit the database to get user info every time. I usually just store an id in each issued token and store/remove it from redis or memory as needed for invalidating it.
Yes you can. The system I built uses a revocation list (propagated to all servers in the cluster) to invalidate the session.
Just as if you were storing state on the server, it still requires a means of propagating the user's state to every server. But we only need to propagate a single ID once per session (at logout) instead of propagating all the session data to all the servers on every request.
To revoke with a revocation list, you need to know the ID of the token being revoked. That only works if (a) the token is present at the event that triggered the revocation, or (b) you are tracking all tokens, in which case there's probably not much point to using JWTs at all.
Another approach is what I call the "subject epoch" pattern. The subject of a token (the "sub" JWT claim) is often something like a user ID. When an event occurs that requires all tokens for a given subject to be revoked, save that time stamp as the subject's "epoch". When processing JWTs, those issued before the subject's epoch must be considered invalid.
Not particularly. There are other layers of security here - the sessions expire (so there's a limited window to exploit each one), the sessions are always transmitted via SSL (so you pretty much have to have an exploit on the customer's system to get one), and the sessions are restricted to one customer (so you only have an attack against the customer whose system you have an exploit on).
If we used a different approach, then the same error (losing the data that's being synched) would result in losing all of the customer sessions.
The same problem occurs with every other method,
i.e. bearer tokens and cookies.
As already said, most of these vulnerabilities apply to all methods. But people are just too blind to see that.
The revocation list only needs to contain tokens that haven't expired. If there is an 'event' that causes this info to be lost, then expire everything by changing the global secret.
Out of curiosity, how are you propagating the IDs to other systems in the cluster? I am building something which could benefit from pushing config data across a cluster.
In most cases it shouldn't matter; revocation lists ought to be trimmed to the lifetime of the issued tokens--when they are used at all, revoking a JWT rather than just letting it expire is likely not extremely common--so you could stuff them anywhere at all that is convenient to your other technology selections.
I hit the database on requests - I keep their identity in the JWT but not their permissions. And if they're hitting a protected route (the only time their identity is necessary anyways) you best be sure I'm checking their canonical permissions.
Sure, but that's expiration, not invalidation, i.e. we're talking about the ability to declare "this particular token is invalid now".
Incidentally, this limitation isn't too surprising to me... Is there any possible token-based authentication scheme that is both stateless (i.e. no round trip to the database on every call) AND invalidate-able? Seems like any form of invalidation would require storing the "is valid" state somewhere else...
> Is there any possible token-based authentication scheme that is both stateless (i.e. no round trip to the database on every call) AND invalidate-able?
I suspect this is provably impossible.
There are all sorts of things you can do at the protocol/platform level if you have a shared secret, but with only the constraints of an open authentication scheme you lack the tools to do this.
If you're willing to delve into the fun world of CRLs you can sorta do it. This isn't truly stateless of course, but for some design constraints it could be "practically" stateless since you're eliminating auth server round trips, which is probably why you were aiming for statelessness in the first place.
CRLs of course introduce lots of replication complexity and timing bounds to consider, and you probably want to pair them with short lived tokens to keep the CRL size manageable. (and then delve into refresh tokens)
As the OP points at, you most likely don't need any of this.
- A really short-lived JWT of 1 minute.
- If the JWT is invalid and the user hasn't made a request in the last minute, we query the database for a session token (we use session tokens and JWTs together). If the token is in the database/redis/ehcache/whatever, the user still gets a new JWT.
Actually, we did this because we needed a "sane" way of revoking tokens fast while still keeping the user logged in until the browser is closed.
We don't have a mobile client (yet), but I guess we'd try to do something like that too, just not with a session.
This works really well; most of our users won't hang around for more than a minute, and when they do, it's not a problem to make a single backend call.
> stateless (i.e. no round trip to the database on every call)
AFAIK, Stateless is about independent request/response and not needing a server to retain session information through the course of multiple requests. It has nothing to do with whether or not you're checking a database to cross-reference credentials - and I wouldn't keep anything more than an ID/name in a JWT... ever.
JWT is good for much more than an ID+name; it's sensible to allow your partially-trusted token issuers to vouch for a _limited_ set of user roles and the like. Just the same way you probably wouldn't allow the @acme.com security authority to vouch for @example.com principals in a multi-tenant system... in ye olde SAML lingo this is called "claim filtering / transform / passthrough."
Practically speaking, we use a bit of metadata (jsonb documents in pgsql, mostly) for each JWT-issuing party, which describes how to validate principals and how to map incoming claims to "our" claims (e.g., "by what name does sts.acme.com call a 'role'? is sts.acme.com allowed to vouch for the 'admin' role?"), in addition to the more central things like shared secrets, certificates, etc. This kind of partial trust is how claims auth is supposed to work, and avoids any unnecessary provisioning/syncing of user details.
> Is there any possible token-based authentication scheme that is both stateless (ie. no round trip to the database on every call) AND invalidate-able?
SPKI (RFC 2692/2693) solved this in 1999, with its timed CRLs. Exactly one CRL is valid for any given period of time: when a token (= certificate) is received, it is invalid if it is referenced in the CRL, and valid otherwise.
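A loose sketch of the timed-CRL idea. This shows only the validity rule, not SPKI's actual certificate encoding, and all names and the window length are illustrative:

```python
import time

CRL_PERIOD = 300  # each CRL covers one 5-minute window (illustrative)

# One CRL per time window: window index -> set of revoked token/cert IDs.
# In SPKI terms these would be fetched and signature-verified CRLs.
crls: dict[int, set[str]] = {}

def window(now: float) -> int:
    return int(now // CRL_PERIOD)

def is_valid(token_id: str, now: float) -> bool:
    # Exactly one CRL applies to "now"; the token is valid iff it is
    # absent from that CRL.
    return token_id not in crls.get(window(now), set())

now = time.time()
crls[window(now)] = {"cert-7"}
assert not is_valid("cert-7", now)
assert is_valid("cert-8", now)
```

Because exactly one CRL is authoritative per window, verifiers never consult the issuer per request; they only need the current window's CRL, delivered ahead of time.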
The react test utils work in combination with jsdom. For example, TestUtils.renderIntoDocument will use jsdom as the document.
There is a facebook-sponsored test framework called "jest" that is more akin to mocha or jasmine, but of those three frameworks it is the worst in my opinion. It's absurdly slow, poorly supported, and it mocks all imported/required modules by default which causes me more problems than it solves.