Battlezone and Battlezone 2 [0] were great for this. Many hours lost, even if BZ2 was a buggy mess on release. It was also one of the first games to really have a modding community.
The web would be one of the better-known technologies to come out of running collider experiments. More directly, a whole lot of medical imaging, including PET, is only possible because of isotopes manufactured with accelerators or sensors developed for colliders.
I really want people to crowdsource the DMT prime factorisation project. I know at least one person who tried but lost interest before they met an elf. It just seems like such a fun experiment to run. Is it possible to recall numbers at all while taking DMT? Can you memorise new ones? If not, why not? And maybe the machine God factorises numbers for you!
Given it's an old article I'm sure someone else has pointed this out by now, but the article mixes up RSA modulus sizes expressed in base-10 (as in RSA-260, 260 decimal digits) and base-2 (as in RSA-768, 768 bits). The RSA challenge moduli are themselves inconsistently named.
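As a quick sanity check on the two naming conventions (plain Python, nothing from the article itself), converting between digit counts and bit counts shows how different the same-sounding names really are:

```python
import math

# RSA challenge names are inconsistent: RSA-260 counts decimal digits,
# while RSA-768 counts bits. Converting between the two bases:

def digits_to_bits(d):
    return d * math.log2(10)   # one decimal digit carries ~3.32 bits

def bits_to_digits(b):
    return b * math.log10(2)   # one bit carries ~0.301 decimal digits

print(round(digits_to_bits(260)))  # RSA-260 is roughly an 864-bit modulus
print(round(bits_to_digits(768)))  # RSA-768 is roughly 231 decimal digits
```

So RSA-260 is actually a bigger modulus than RSA-768, despite the smaller number in the name.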
The interesting thing about DMT is that it doesn't really seem to affect your mind/cognition (that is, if you can observe all the weirdness happening in a very detached way instead of being overwhelmed by it, it is an exceptionally clear-headed experience). Not exactly the same, but I've drunk ayahuasca hundreds of times (worked at retreat centers/apprenticed) and it's amazing how much work you can do on a light dose vs. being blasted to the center of the universe (though that is good once in a while to get over blockages). It can be hard to get a footing with DMT because it's so fast, and it's really hard to land in the "workable" zone compared to drinking ayahuasca.
I was also in Kyoto right before the lockdowns, for the first time in a decade, and it was magical, just like it was in my childhood. When I went back a few years ago, I nearly cried; the lovely quiet city of books was so noisy and everyone was so angry. I don't really have a point; I don't think you should stop people from travelling, but it still makes me sad.
What I'm a bit confused about is why Japan doesn't just make more wonderful things, so there is more to go around. Like, what happened to building amazing temples and gardens? Where did that spirit go? I find it quite sad and strange that the temples are kind of ghost towns without any practicing monks (for example).
I guess that time is over, and that's sad, I just don't feel like it has to be.
Maybe the declining population is just part of it all; there is just less incentive to go bigger. On the other hand, even 100 million people on an island the size of Japan is a lot, so I'm not sure that's it either.
There are actually a lot of other amazing temples, gardens, and shrines; they're just not as well known. And there's a healthy industry of traditional craftsmen who still build and maintain shrines, temples, and other buildings the traditional way. Kyoto will continue gobbling up the crowds no matter how many other alternatives there are, because that's the nature of tourism, but it's not for a shortage of other beautiful places.
It is sad for the people of Kyoto though, because overtourism can really rip the heart out of a city for the locals.
There is more to go around, but there is a practical limit to how much you can see as a tourist in a limited window with good public transport access etc.
There's already plenty to go around. It's like paintings in the Louvre, the Mona Lisa is overrun because there's a kind of mythology about it, not because nobody paints anymore.
As another poster commented, there are people who still build with traditional materials and methods. The temples are made of wood and have to be renovated. Some are completely rebuilt, symbolic of the transitory nature of the material world. Enryakuji is undergoing renovations, and they had to completely cover it with a metal shed while they work on the roof. But it's still open, and you can still visit either as a sacred site or to learn about the traditional methods. It is supposed to be finished in 2026.
As for building new temples: those monasteries had thousands of resident monks. They were so populous that they were significant military powers. Even though the overall population of Japan has grown, far fewer people want to live that life. But again, there's no shortage of temples.
You're missing the bit where he has never studied under a Buddhist master and actively refuses to. Both Chan and Zen are traditions characterised by the belief that written works are always flawed and can't contain the actual teachings, and that if you want to learn, you should find someone who already knows.
Do you have any resources for your SQL workflow in Emacs? I use Emacs for everything else, but keep going back to DBeaver whenever I write SQL or interact with PostgreSQL. All the tutorials I've turned up seem to be doing simple things with small tables, rather than a more complex workflow. I'm sure there are ways to use Emacs effectively for SQL, but I just can't find out how.
Not really. I write SQL in text files, manage them in version control (git these days, but in the past it has been Mercurial or Subversion, doesn't really matter) and execute them in psql via an SQLi (interactive SQL) Postgres buffer: Emacs comint mode wrapping a psql command shell in a buffer.
I haven't used (or heard of) DBeaver, but took a quick look at it, and I have tried other SQL tools that seem similar. Never really felt they gave me much.
The main thing that I like about DBeaver is that it gives you nice table autocomplete as well as some useful metadata, plus some connection management. And it drives me mad, because I am sure there must be a way to get psql's autocomplete to work in a buffer, but I have never managed it.
Are you writing all that SQL simply to examine the existing data, or are you actively writing migrations and such?
For the former: if you're willing to completely alter your perspective here, you may discover something unusually good. To understand the data, I find myself more often reaching for a Clojure REPL with a basic ODBC lib dependency. Clojure can at times feel like hands-down the best tool for dealing with certain types of data. It really is great for quickly retrieving/grouping/sorting/slicing/dicing; these days I use it for exploring JSON results from APIs (which I automatically convert to EDN), or for dealing with DB data.
And you can perfectly manage that in org-source blocks too - I typically keep at least one Clojure REPL instance running in Emacs - I'd connect to it and go like:
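A sketch of what such a session might look like (purely illustrative: `next.jdbc`, the `ds` datasource, and the `orders` table are my own assumptions, not the parent commenter's actual setup):

```clojure
;; illustrative only: assumes next.jdbc is on the classpath
;; and `ds` is an already-configured datasource
(require '[next.jdbc :as jdbc])

;; pull rows into plain Clojure data
(def rows (jdbc/execute! ds ["select * from orders"]))

;; then group/aggregate/sort with ordinary sequence functions
(->> rows
     (group-by :orders/status)
     (map (fn [[status rs]] [status (count rs)]))
     (sort-by second >))
```

Once the rows are plain maps, the whole standard library becomes your query language.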
For someone with no clue, all these parens may feel like an annoyance; for me, they are the best way to deal with structured expressions. Composing and writing them is far more satisfying than writing even plain SQL. Although, sure, SQL can be a mind-blowingly great tool on its own.
For the exploratory stuff I tend to use R in code blocks, as I'm familiar with it. Sadly I've never really done much with Clojure, though its data capabilities always look very cool, especially Datomic. It's just always been a bit of a poor fit for what I've been doing, so I've never quite got round to it. Maybe this will give me the push I need!
Sadly, the places where the extra tooling is missed are larger databases, where I'm either rebuilding or restructuring an existing database or writing more complex queries to help with this, and I want to do it all in the database. Hence autocomplete and built-in knowledge of the schema being nice.
I didn't even realize psql had autocomplete. It's nice, but I pretty quickly learn the table and column names I'm working with, or I just run \dt and copy/paste from the output. I guess I understand if someone finds that tedious; I learned to program before autocomplete was a thing in any editor, and I'm just used to it.
All the autocomplete and other smart editor features quickly outrun the speed at which I can think about what I'm doing, so I find them of limited use.
Did you ever try extending it to methods of probability estimation other than regression? I have only skimmed your excellent article, but I think you are first calculating the win probabilities from a regression model and then minimizing the loss to calculate Harville corrections for the place and show markets? Is that correct, or am I missing something here? I guess I am curious whether there has been any improvement on using regression to combine the various initial odds, as I don't really follow the literature anymore.
Yes! There have been big improvements since then but they are beyond the scope of the post. I just wanted to reproduce the calculations in the paper using PyTorch.
Bill Benter subsequently replaced the multinomial logit model with a multinomial probit model, which assumes normally distributed errors rather than the Gumbel (extreme value) distributed errors that the logit model implies.
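For readers unfamiliar with the Harville corrections mentioned in the question, the standard Harville (1973) formula derives place probabilities from win probabilities by removing each potential winner and renormalising. A minimal sketch (my own illustration, with made-up win probabilities, not the post's actual code):

```python
def harville_place(p, j):
    """P(horse j finishes 1st or 2nd) under the Harville model,
    given win probabilities p (which sum to 1)."""
    # P(i wins, then j runs second) = p[i] * p[j] / (1 - p[i])
    second = sum(p[i] * p[j] / (1 - p[i])
                 for i in range(len(p)) if i != j)
    return p[j] + second

p = [0.5, 0.3, 0.2]                    # illustrative win probabilities
print(round(harville_place(p, 0), 3))  # favourite places with prob ~0.839
```

A useful sanity check is that the place probabilities across all horses sum to exactly 2, since exactly two horses fill the first two positions.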
This is used by a number of betting syndicates, notably by David Walsh [0] and Zeljko [1]. They knew of and worked with Benter in HK and adapted his system. One of the things they did was pay large numbers of experts to watch and evaluate each horse and give it a standardised rating, which they then used as an extra parameter along with the public odds. Another gambler who extended that system was Alan Woods [2], but I don't think he did anything as sophisticated in terms of modeling, concentrating more on the execution side. Regarding the yearly turnover, I have no idea, but billions would be on the low end. From the ATO filings, we know David Walsh alone was generating hundreds of millions of dollars a year in profits, let alone turnover.
[0] https://en.wikipedia.org/wiki/Battlezone_(1998_video_game)