FWIW, depending on your state and policy, "glass only" claims don't affect your premiums. Agreed with the general principle of a higher deductible though.
I may have missed my chance to participate in this discussion, but this is the part I didn't understand.
Does this mean that when one phone goes past 15GB, it's necessary to opt out of the cap to return to normal speeds?
If not (and the cap doesn't make much sense if opting out of it is somehow possible), is there ever a case where someone would _not_ want to go back to normal speed as they cross the 15GB mark?
Is the intention just to throw in a minor speed bump that forces the affected 1% of users to contact support every month (leaving the few who don't know any better throttled)?
Great question. If I remember, I'll reply in about a month; I'll try to go past the 15GB mark by then. Opting out of the cap would make this useless for me.
But if an employer reimburses you up to $X and that's all you want to spend, not going back to normal speed starts to make sense.
auto_increment doesn't lock the table. When you use an auto_increment column, MySQL grabs the next int as it creates the insert's write-ahead log record. Two concurrent transactions, with T1 beginning first and T2 beginning second but committing out of order, can thus end up with out-of-order ids, e.g. T2 (id=10) commits before T1 (id=9).
Also, note that this means auto_increment IDs are not contiguous (read: a reader looking after T2 commits but before T1 does will see a gap, and if T1 fails, that gap is permanent!)
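To make the interleaving concrete, here's a toy model (not MySQL internals, just the allocation scheme described above): the id comes off a shared counter at insert time, independent of when the transaction commits.

```python
import itertools

# Toy model of MySQL-style auto_increment: the id is grabbed from a shared
# counter at INSERT time, not at COMMIT time, so commit order need not match
# id order, and a rolled-back transaction leaves a permanent gap.
counter = itertools.count(9)  # next auto_increment value happens to be 9

def insert():
    """Allocate an id the way auto_increment does: immediately, under a
    short-lived latch, regardless of when the transaction commits."""
    return next(counter)

t1_id = insert()           # T1 begins first, gets id=9
t2_id = insert()           # T2 begins second, gets id=10

committed = []
committed.append(t2_id)    # T2 commits first...
committed.append(t1_id)    # ...then T1: ids appear out of commit order
print(committed)           # [10, 9]

# If T1 had rolled back instead of committing, id=9 would never appear
# in the table: a permanent gap in the sequence.
```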
"While initializing a previously specified AUTO_INCREMENT column on a table, InnoDB sets an exclusive lock on the end of the index associated with the AUTO_INCREMENT column. In accessing the auto-increment counter, InnoDB uses a specific AUTO-INC table lock mode where the lock lasts only to the end of the current SQL statement, not to the end of the entire transaction. Other sessions cannot insert into the table while the AUTO-INC table lock is held; see Section 14.5.2, “InnoDB Transaction Model”. "
If you have a long-running statement, it can block concurrent transactions. I've seen it specifically with 'load data infile', IIRC. We had to go through some painful migrations to remove auto-increment on some fairly large tables when we started seeing this.
That is an odd design. PostgreSQL only holds the lock long enough to increment a counter in memory, and only every 32nd increment also writes a write-ahead log record. I can't see why one would need to lock the counter for the duration of the statement.
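A sketch of the scheme described above, under my understanding of it: bump an in-memory counter under a short lock, and only every 32nd call write a WAL record, logged far enough ahead that a crash can skip unlogged values rather than repeat them. The class and field names here are illustrative, not PostgreSQL's actual internals.

```python
LOG_EVERY = 32  # how often the counter is persisted to the WAL

class Sequence:
    """Toy cached sequence: cheap in-memory increments, periodic WAL writes."""

    def __init__(self):
        self.value = 0
        self.wal = []  # stand-in for the write-ahead log

    def nextval(self):
        # In a real system this increment happens under a short-lived lock,
        # released immediately; no lock is held for the calling statement.
        self.value += 1
        if self.value % LOG_EVERY == 1:
            # Log a value LOG_EVERY ahead. After a crash, the sequence
            # resumes from the logged value, so unlogged ids are skipped
            # (a gap) but never handed out twice.
            self.wal.append(self.value + LOG_EVERY - 1)
        return self.value

seq = Sequence()
ids = [seq.nextval() for _ in range(70)]
print(len(seq.wal))  # 3 WAL records cover 70 allocations
```

The point is the cost model: 70 allocations caused only 3 WAL writes, and the lock is held only for the increment itself.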
We adopted it in the SpeedTracer Chrome plugin a while back, and we were very happy with the results. Though for the stuff I've been working on lately, most JSON APIs are defined by protos, and we just bridge with JsonFormat/ProtoTypeAdapter.
Our Atlanta, GA engineering office is in Atlantic Station, and we're hiring for iOS, Android, go, java, and ruby developers. We have a great team here with a lot of Xooglers, and we work on everything from the Square Register app to the highest SLA systems inside of Square.
We've started hosting monthly tech talk meetups. Even if you aren't looking for a job, please come by if you're in the area and want to nerd out with us: http://www.meetup.com/Square-Atlanta-Tech-Talks/
Of course, we're hiring at our headquarters in SF, but we also have an awesome engineering office in Atlanta, GA. We are about two dozen engineers (mostly Xooglers) and looking to grow even more. Our office is in Atlantic Station in Midtown.
It's got a much different API than java.util.regex, but it lets you work with automata as first-class objects instead of operating only on regexes. Being able to compute intersections, unions, shortest-match examples, etc. from multiple automata can be really useful.
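To illustrate what "automata as first-class objects" buys you, here's a minimal sketch in Python of intersecting two DFAs via the standard product construction. This is the textbook algorithm, not dk.brics.automaton's implementation, and the helper names are my own; the library does this (plus union, shortest example, etc.) for automata compiled from regexes.

```python
from itertools import product

def make_dfa(states, delta, start, accepting):
    """A DFA as plain data: states, transition map, start state, accept set."""
    return {"states": states, "delta": delta, "start": start, "accepting": accepting}

def accepts(dfa, s):
    state = dfa["start"]
    for ch in s:
        state = dfa["delta"].get((state, ch))
        if state is None:  # no transition: reject
            return False
    return state in dfa["accepting"]

def intersect(a, b, alphabet="ab"):
    """Product construction: run both DFAs in lockstep; accept iff both do."""
    delta = {}
    for (sa, sb) in product(a["states"], b["states"]):
        for ch in alphabet:
            ta = a["delta"].get((sa, ch))
            tb = b["delta"].get((sb, ch))
            if ta is not None and tb is not None:
                delta[((sa, sb), ch)] = (ta, tb)
    return make_dfa(
        set(product(a["states"], b["states"])),
        delta,
        (a["start"], b["start"]),
        {(sa, sb) for sa in a["accepting"] for sb in b["accepting"]},
    )

# "even number of a's" over the alphabet {a, b}
even_a = make_dfa({0, 1}, {(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1}, 0, {0})
# "contains at least one b"
has_b = make_dfa({0, 1}, {(0, "a"): 0, (1, "a"): 1, (0, "b"): 1, (1, "b"): 1}, 0, {1})

both = intersect(even_a, has_b)
print(accepts(both, "aab"))  # True: two a's and a b
print(accepts(both, "ab"))   # False: odd number of a's
```

This kind of composition is exactly what's awkward to do when all you have is a regex string.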