They would have to admit that they were wrong to switch away from the proven Aerojet Rocketdyne AJ-60A boosters (developed for, and still flying on, the Atlas V family).
Conferences are a long-lead-time project. The contract described here was signed in July 2023 - 2 years and 9 months before the event date. Even if it were possible to set up a new event, the current contract would not be nullified.
Additionally, the Python Software Foundation is a US-based nonprofit - spending money outside the country is generally more difficult than spending it in-country. PyCon was held in Canada in 2014/2015, and there are apparently many smaller local PyCon events.
> Once your event outgrows academic spaces, donated conference rooms, or theatre spaces, working with the hotels is the industry’s standard way to pay for a professional convention center space. You commit to a certain number of hotel nights blocked off at nearby hotels, based on your event’s numbers from previous years, and in return, you get a reduced rental charge at the convention center. If you sell enough rooms, you additionally earn a small percentage of the revenue from those rooms, i.e. a commission. If, on the other hand, you don’t sell enough rooms, you owe damages to the hotels–essentially paying the full rate for the rooms they reserved for your event but didn’t sell.
Attendees pay the hotel directly for their rooms. If the event does not book enough rooms to cover the committed block, the organizer (PyCon) owes the hotel a minimum amount. If more rooms are booked than expected, the organizer gets a check. This is a normal hotel-industry arrangement.
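The attrition/commission mechanics described above can be sketched with some arithmetic. All of the figures here (block size, rate, 80% attrition floor, 5% commission) are made-up illustrative numbers, not terms from any actual PyCon contract:

```python
# Hypothetical room-block agreement - none of these figures come
# from a real contract; they just illustrate the mechanics.
block_nights = 4000      # room-nights committed at nearby hotels
room_rate = 250.0        # dollars per night
attrition_floor = 0.80   # must sell at least 80% of the block
commission = 0.05        # organizer's share of room revenue if the floor is met

def settle(nights_sold: int) -> float:
    """Positive = commission paid to the organizer; negative = damages owed."""
    floor_nights = attrition_floor * block_nights
    if nights_sold >= floor_nights:
        return commission * nights_sold * room_rate
    shortfall = floor_nights - nights_sold
    # Pay roughly full rate on the unsold portion of the guarantee.
    return -shortfall * room_rate

settle(3600)  # met the 3,200-night floor: earns a commission check
settle(2800)  # 400 nights short of the floor: owes damages
```

The asymmetry is the point: a good year earns a small rebate, while a bad year can cost six figures, which is why organizers size the block conservatively from previous years' numbers.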
PyCon itself is run by the Python Software Foundation; according to publicly-available records they spent approximately US$2,491,000 on PyCon US expenses in 2024, including supporting 552 travel grant recipients: https://www.python.org/psf/records/
It is an old-school UNIX experience - not great for desktops, but excellent for long-lived “pet servers” where stability over decades of service is valued. I treasure it for running small web servers and shell hosts instead of Debian/Ubuntu.
Same. I've been running it on a "pet" server since the mid-'90s for shell, web, email, etc. I started on FreeBSD 2.x and it has been through many upgrades and migrations! I also worked at an early ISP, and FreeBSD was our go-to for email, NNTP, and DNS.
I built an NFSv3-over-OpenVPN network for a startup about a decade ago; it worked “okay” for transiting an untrusted internal cloud-provider network, and even over the internet to other datacenters, but ran into mount issues when the outer tunnels dropped a connection mid-write. They ran out of money before it had to scale past a few dozen nodes.
Nowadays I would recommend NFSv4 with TLS, or Gluster with TLS, if you need filesystem semantics. Better still would be a proper S3-style or custom REST API that can handle the particulars of whatever strange problem led to this architecture.
Publicly available data[1] on the pilot project in Nevada suggests a total of “50MW” of generation capacity is planned across 10 rail lines, but the photos on the website seem to show only 1 set being built so far - with a claimed output of 5MW. A per-car mass of 720,000 lb (about 327 tonnes) lowered 229 ft ≈ 70 m (510 ft of track × sin 26.8°) in Earth’s 9.81 m/s² gravity field represents a maximum potential energy of only about 220 MJ, or roughly 61 kWh per car. Reaching the 5MW peak requires a car to be dispatched about every 45 seconds, so 10 cars would provide about 7.5 minutes of runtime - which matches the advertised 15-minute cycle length.
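The back-of-the-envelope numbers above are easy to reproduce; here is the same calculation spelled out (standard unit conversions, figures from the comment above):

```python
import math

# Per-car potential energy for the Nevada pilot, from the figures above.
mass_kg = 720_000 * 0.45359237                        # ~326,600 kg per car
drop_m = 510 * math.sin(math.radians(26.8)) * 0.3048  # ~70 m vertical drop
g = 9.81                                              # m/s^2

energy_j = mass_kg * g * drop_m      # ~2.2e8 J (~220 MJ) per car
energy_kwh = energy_j / 3.6e6        # ~62 kWh per car
dispatch_s = energy_j / 5e6          # seconds between cars to sustain 5 MW
runtime_min = 10 * dispatch_s / 60   # ~7.5 minutes of runtime for 10 cars
```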
This all seems reasonable - but it is a far cry from the performance of existing pumped-hydro storage plants, which have routinely exceeded 1GW since the 1970s and can run for several hours per cycle. They do require a lot of water and a mountain’s worth of elevation change, which limits site selection, whereas this system seems to work with any open-pit mine.
It will be interesting to see if this technology can be made competitive with existing grid-stabilization techniques, and what challenges will be encountered along the way.
Pumped hydro should be expanded as part of a national water grid to cope with droughts and floods. NSF studied it and reached positive conclusions many years ago, but no one is serious about implementing it.
To be fair, dams can be immensely destructive to ecosystems, with run-off effects that harm everything around them (humans included). My ex worked for one of the NGOs that campaign for better dams rather than no dams at all.
The great thing about this gravity storage system is how easy it is to scale. You just need a hill. Sure, it's not going to deliver the power of pumped hydro, but it's easier to build and much safer to operate. And it's certainly a better design than those concrete-block tower designs you occasionally see, which are just a windy accident waiting to happen.
If you have a hill, then you can just put a water tank at the top and bottom, with a pipe, a pump, and a generator in between. Even if your rolling mass were iron, you would only need a tank about 8x its volume (2x per dimension) to store the same energy. Much easier to build and safer than a 300-ton railcar barreling down a hill. It also scales better, has lower operating and capital costs, and loses less energy.
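The "8x the volume" figure falls straight out of the density ratio: same height drop and equal stored energy means equal mass, so water just needs proportionally more volume than iron. A quick check:

```python
# Density ratio check for the water-tank-vs-iron-railcar comparison.
RHO_WATER = 1000.0   # kg/m^3
RHO_IRON = 7870.0    # kg/m^3 (typical for cast iron/steel)

# Equal mass at the same height stores the same potential energy,
# so the water tank needs this many times the iron mass's volume:
volume_ratio = RHO_IRON / RHO_WATER   # ~7.9x, i.e. "roughly 8x"
side_scale = volume_ratio ** (1 / 3)  # ~2x per linear dimension
```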
Yes, using Microsoft SQL Server for Linux, hosted both on-premises with VMware and in Azure Virtual Machines - later migrated to Azure SQL Managed Instances. It worked great for the business’s needs. The major architectural advantage was that each customer had a completely isolated database, which eased compliance auditing. Each database could be exported or migrated to a different instance or region, and migration scripts running slow for “whale” customers had no effect on the small fish. Monitoring the servers and individual instances was straightforward, albeit very verbose at our eventual scale.
There were a few administrative drawbacks, largely because the SQL Server Management Studio tooling does not scale well to hundreds of active connections from a single workstation; we worked around that with lots of Azure Functions runs instead. Costs and instance sizing were a constant struggle, though other engines like Postgres or even SQLite would likely be more efficient.
I have also seen this used in other formats quite successfully - Fandom/Wikia (used to?) use a MySQL database for each sub-site.
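A toy sketch of the database-per-tenant idea, using SQLite files in place of MS-SQL databases so it's self-contained - the tenant names and schema are hypothetical, but the isolation property is the same one described above:

```python
import os
import sqlite3
import tempfile

# One isolated database file per customer; backups, migrations, and
# compliance audits can then be scoped to a single tenant.
DATA_DIR = tempfile.mkdtemp()

def tenant_conn(tenant_id: str) -> sqlite3.Connection:
    conn = sqlite3.connect(os.path.join(DATA_DIR, f"{tenant_id}.db"))
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT)"
    )
    return conn

whale = tenant_conn("whale_corp")    # hypothetical big customer
minnow = tenant_conn("minnow_llc")   # hypothetical small customer

whale.execute("INSERT INTO orders (item) VALUES ('big batch')")
whale.commit()

# The small tenant's database is completely untouched by the whale's writes.
whale_rows = whale.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
minnow_rows = minnow.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

The trade-off, as noted above, is operational: connection routing, monitoring, and migrations all have to loop over N databases instead of touching one.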
> I have also seen this used in other formats quite successfully - Fandom/Wikia (used to?) use a MySQL database for each sub-site.
Stack Overflow used it as well, with a database per site (DBA.StackExchange.com, ServerFault, SuperUser, Ask Ubuntu, etc.)
I have a bunch of clients using it. Another drawback with this design is that high availability and disaster recovery can become more complex when you have to account for an ever-growing number of databases.
Location: United States, New York Metropolitan area
Remote: Sure
Willing to relocate: Within continental US
Technologies: Web, Networking protocols especially DNS, Virtualization and Clustering, Databases of all types from CSV and SQLite to CouchDB and MS-SQL, Revision Control/CI/CD (git and friends), distributed filesystems, many Languages
Résumé/CV: https://kashpureff.org/eugene/resume.html
Email: In Resume
Have been working across technical disciplines since the 1990s, always looking for Interesting Problems to solve. Please let me know if you have any questions about my background - I can guarantee an interesting story!
Computing power has increased tremendously, along with the resolution of digital imaging sensors compared to analog film plates. Sky-survey projects like the Vera C. Rubin Observatory have come online in recent years, generating terabytes of image data each night that can be rapidly examined for differences from previous captures. In the past, each exposure had to be hand-aligned on a light table and “flipped” between to spot differences.
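The automated version of that "flipping" is difference imaging: subtract an aligned reference frame from the new exposure and flag any pixel that changed by more than a noise threshold. A deliberately tiny sketch with made-up 3x3 frames (real pipelines also handle registration, PSF matching, and calibration):

```python
# Toy difference-imaging sketch: frames are assumed to be already
# aligned and calibrated; values are made-up pixel intensities.
reference = [
    [10, 10, 10],
    [10, 50, 10],
    [10, 10, 10],
]
new_frame = [
    [10, 10, 10],
    [10, 50, 10],
    [90, 10, 10],   # a new transient source appears here
]
THRESHOLD = 20  # ignore changes within the noise floor

changed = [
    (row, col)
    for row in range(3)
    for col in range(3)
    if abs(new_frame[row][col] - reference[row][col]) > THRESHOLD
]
# changed == [(2, 0)] - the candidate transient to follow up on
```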