From what I have heard of it: yes. I hope the formula language is also a bit better. AFAIK Improv could only have one formula per dimension element. Egeria allows formulas on any subspace inside the cube.
Certainly copyleft has a place, but changing the license to LGPLv3 would allow this very useful library to be used much more broadly while continuing to require that improvements to the shared code be made public. If glibc were GPLv3 rather than LGPLv3 (same goes for Boost, etc.) almost no one would use it. IMO this library will either be re-written under a less restrictive license by some other author (wasting time and effort of the community) or migrate to a more open license (LGPL, ASLv2, MPLv2, MIT, etc.).
I think the business, regulatory and infrastructure requirements are entirely different. They co-opt existing networks and only have to provide a SIM card, like other MVNOs. They don't have to deal with multi-year back-room deals for community access to dig trenches or hang wire to get into neighborhoods. I'm a Google Fi user and I hope that your prediction doesn't come true. It's a fantastic service. If Google Fiber was available in my neighborhood I'd be a subscriber to that too, so perhaps I'm a bit biased. :)
While I commend the developers of Minio for building what appears to be a functional S3-API-compatible system in Go, it seems to me to be missing the key thing that makes S3 a compelling object/data storage solution -- the distributed part. A Minio developer (y4m4b4) talks about erasure coding, which in the AWS/S3 context normally refers to the way data is encoded and replicated across nodes to mitigate outright data loss as well as bit rot. But their description states "You may lose roughly half the number of drives..." -- "drives", not "systems" or "nodes". This appears to be a single-node solution, or have I overlooked the documentation describing how to join nodes together into a cluster? The description given is much more akin to RAID, which is fine and useful for distributing data across disks connected to a single system.
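To make the drive-level (as opposed to node-level) framing concrete, here's a toy single-parity sketch -- this illustrates the RAID-like idea only, not Minio's actual Reed-Solomon scheme (which tolerates losing up to half the drives), and all the names here are made up:

```python
# Toy erasure coding across "drives" on one machine: N data drives
# plus one XOR parity drive. Losing any single drive is survivable.

def xor_bytes(blocks):
    # XOR all equal-length byte blocks together
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def encode(data, n_data_drives):
    # Pad so the data splits evenly, then append one parity "drive"
    chunk = -(-len(data) // n_data_drives)  # ceiling division
    data = data.ljust(chunk * n_data_drives, b"\0")
    drives = [data[i * chunk:(i + 1) * chunk] for i in range(n_data_drives)]
    drives.append(xor_bytes(drives))  # parity drive
    return drives

def recover(drives, lost_index):
    # XOR of every surviving drive reconstructs the lost one
    survivors = [d for i, d in enumerate(drives) if i != lost_index]
    return xor_bytes(survivors)

drives = encode(b"hello object store!!", 4)
assert recover(drives, 2) == drives[2]  # rebuild "failed" drive 2
```

Note this protects against disk failure, not machine failure -- which is exactly the distinction between RAID-style erasure coding and S3-style cross-node replication.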
I hope that this is just an early announcement of a thing that is going to mature into a fully distributed solution, or that it is made clear that this is like SQLite (Minio is to AWS/S3 as SQLite is to RDBMS systems [PostgreSQL, Oracle, etc.]) -- something intended to be smaller in scope and single-node only. Leaving this fuzzy will lead to confusion, and potentially to someone depending on this system and later suffering massive data loss when their drive or drives fail.
Could the developers of Minio please make a statement as to which direction they intend to go? Is this a single-node S3-API-compatible solution (which is valuable for a specific class of problems), or something that will eventually be designed to store data across tens, hundreds, or thousands of geographically distributed nodes, all working together to maintain some degree of availability and data integrity?
What's Minio going to be when it grows up?
a) S3Lite
b) S3
This is a much more recent presentation by Kyle (the author of the linked article) with a more mature version of his Jepsen tool. I imagine he'll get around to a written version of his findings soon; until then it's worth the time to watch this and learn a bit about distributed systems, databases and testing. https://www.youtube.com/watch?v=XiXZOF6dZuE
If you have a 1.4.2 cluster you can do a rolling upgrade to 2.0 without doing a "dump/restore". Please remember, this is a tech preview -- don't upgrade your production cluster to 2.0 until it's been released for production use. Feel free to test drive this preview, and if you do so please send us feedback! #disclaimer I work for Basho
I think it's this facet of Bitcoin that's going to end up causing problems in the not-too-distant future. There will be an ever-increasing number of "trapped" coins: BTC that belongs to someone who has abandoned it, leaving it unused and thereby lowering the liquidity of the currency.
A potential solution would be to incorporate demurrage -- gradually reducing the value of currency the longer it's held -- into the BTC algorithms. Essentially, by encouraging the holders of a currency to use that currency you'd a) create a more fluid market and b) remove trapped value in some fixed amount of time. If you wanted to maintain a pool of BTC for a long time you'd simply cycle it between two addresses faster than the decay rate so that its value wouldn't decay.
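As a rough sketch of what that decay could look like -- assuming a simple exponential demurrage on balances held at one address; the 5%/year rate and the function name are arbitrary illustrations, not a proposal:

```python
import math

# Hypothetical demurrage: a balance sitting untouched at an address
# decays exponentially with the time it has been held there.
ANNUAL_RATE = 0.05  # 5%/year, chosen arbitrarily for illustration

def demurraged_value(balance, years_held):
    # Continuous decay: value = balance * e^(-rate * t)
    return balance * math.exp(-ANNUAL_RATE * years_held)

print(round(demurraged_value(100.0, 1), 2))   # ~95.12 after one year
print(round(demurraged_value(100.0, 10), 2))  # ~60.65 after a decade
```

Under a scheme keyed to address age like this one, moving coins to a fresh address would reset the clock, which is what makes the "cycle between two addresses" workaround possible; abandoned coins, by contrast, would decay away on a predictable schedule.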
There are roughly 2.1 quadrillion satoshis in the eventual total supply (21 million BTC × 100 million satoshis per BTC), so even if most of them are lost there should still be plenty left for people to use. By analogy, a certain amount of the world's gold has been lost but people still use it as a store of value.
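The back-of-the-envelope supply arithmetic, as a sketch:

```python
# Bitcoin's divisibility: the supply cap in base units (satoshis).
SATOSHI_PER_BTC = 100_000_000   # 10^8 satoshis per BTC
MAX_BTC = 21_000_000            # eventual supply cap

total_satoshis = MAX_BTC * SATOSHI_PER_BTC
print(total_satoshis)  # 2_100_000_000_000_000, i.e. 2.1 quadrillion

# Even if 90% of coins were permanently lost, hundreds of
# trillions of satoshis would remain in circulation:
remaining = total_satoshis // 10
print(remaining)  # 210_000_000_000_000
```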
It's weird that systems as different as S3 and Redis both fall under the umbrella of "NoSQL". We should probably replace the SQL/NoSQL dichotomy with terms that more accurately reflect the real differences.
It's got an API, you can store data in it and retrieve it later by key -- since when was S3 not a database? It's a key/value store that lets you store very large values.
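To illustrate the point, S3's data model reduces to put/get by key. Here a plain dict stands in for a bucket -- the class and method names merely echo the shape of the real S3 API (PutObject/GetObject), they are not it:

```python
# A bucket is essentially a map from string keys to (possibly huge)
# byte values. This is the whole key/value contract S3 exposes.

class Bucket:
    def __init__(self):
        self._objects = {}  # key -> bytes

    def put_object(self, key, body):
        self._objects[key] = body

    def get_object(self, key):
        return self._objects[key]

bucket = Bucket()
bucket.put_object("backups/2015-06-01.tar.gz", b"...many gigabytes...")
assert bucket.get_object("backups/2015-06-01.tar.gz") == b"...many gigabytes..."
```

The differences from Redis are in value size limits, durability, and latency -- not in the fundamental "store by key, fetch by key" model.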