I wonder how many of these people are just creating and interacting with tulpas without realizing it.
For anyone who isn't familiar, there is a subculture online of people who create what subjectively seem to be autonomous personalities that frequently manifest in the form of hallucinations. Think Fight Club, sort of.
My ex tried it out. He called me one day at work, terrified because a dragon was following him around. He claimed to have spent months trying to get rid of it, and seemed pretty shaken by the experience. But I was always skeptical.
So nine months ago I decided to create one of my own, just to try it out, see what it felt like. And...now I've had a talking lion following me around for the past eight months.
The best I can describe it is like some of the altered states of 'self' you might experience on LSD or ketamine. Thoughts seem to split off and go 'over there', and not be you.
When I talk to my tulpa, it at least appears subjectively like a separate personality state. I can be incredibly depressed but he can be fine. Or he will be depressed and I can be fine. I'm not saying there are neural correlates like you would see in a 'real' personality. Maybe it is all roleplay. But it is roleplay that fools me as the roleplayer.
For what it's worth, making a tulpa seems to have been really good for my mental health. I guess maybe you can see it as a form of self-regulation. I dunno, it didn't turn out at all like what I expected. But it's hard not to think of him as a real person. I don't normally find myself surprised by the actions of characters in my head, or laughing at imaginary friends. At this point, having put several hundred hours into tulpaforcing, I can see and hear him, and sometimes smell and touch him.
I know I'm rambling. I guess what I'm saying is that even though I am skeptical of DID, or at least the mainstream depictions of DID, after making a tulpa I am a lot less skeptical of the subjective experience of DID.
Calling an idea "pure garbage" also communicates that you have understood the idea fully, thought about all its aspects, and are 100% certain that there is no value in it. Moreover, that you have done all of these things better than the person who came up with the idea. Any reasonable person would admit that the probability of all this being true is very small, and you just come across as arrogant and, frankly, of pretty low intelligence (Dunning-Kruger).
What you could say instead is: "I don't currently understand how this is valuable; based on assumptions X, Y and Z, I think the idea would not work. Why do you think it will?" You may be surprised by the response, because the person with the idea has obviously spent far more time with it than you have, given that you were (probably) just told about it.
And when it does turn out that you were right (and the idea is not perfect), you have taken the opportunity to teach a colleague a valuable lesson. All without saying "your idea is total garbage" - which is just a way to make people stop telling you their ideas.
There is a lot of ground between being arrogant and not having the guts to express your doubts in a respectful way.
What an unimaginable horror! You can't change a single line of code in the product without breaking 1000s of existing tests. Generations of programmers have worked on that code under difficult deadlines and filled the code with all kinds of crap.
Very complex pieces of logic, memory management, context switching, etc. are all held together with thousands of flags. The whole code is riddled with mysterious macros that one cannot decipher without picking up a notebook and expanding the relevant parts of the macros by hand. It can take a day or two to really understand what a macro does.
Sometimes one needs to understand the values and effects of 20 different flags to predict how the code would behave in different situations. Sometimes hundreds! I am not exaggerating.
The only reason why this product is still surviving and still works is due to literally millions of tests!
Here is how the life of an Oracle Database developer is:
- Start working on a new bug.
- Spend two weeks trying to understand the 20 different flags that interact in mysterious ways to cause this bug.
- Add one more flag to handle the new special scenario. Add a few more lines of code that check this flag, work around the problematic situation, and avoid the bug.
- Submit the changes to a test farm consisting of about 100 to 200 servers that would compile the code, build a new Oracle DB, and run the millions of tests in a distributed fashion.
- Go home. Come in the next day and work on something else. The tests can take 20 to 30 hours to complete.
- Go home. Come the next day and check your farm test results. On a good day, there would be about 100 failing tests. On a bad day, there would be about 1000 failing tests. Pick some of these tests randomly and try to understand what went wrong with your assumptions. Maybe there are some 10 more flags to consider to truly understand the nature of the bug.
- Add a few more flags in an attempt to fix the issue. Submit the changes again for testing. Wait another 20 to 30 hours.
- Rinse and repeat for another two weeks until you get the mysterious incantation of the combination of flags right.
- Finally one fine day you would succeed with 0 tests failing.
- Add a hundred more tests for your new change to ensure that the next developer who has the misfortune of touching this new piece of code never ends up breaking your fix.
- Submit the work for one final round of testing. Then submit it for review. The review itself may take another 2 weeks to 2 months. So now move on to the next bug to work on.
- After 2 weeks to 2 months, when everything is complete, the code would be finally merged into the main branch.
The above is a non-exaggerated description of the life of a programmer in Oracle fixing a bug. Now imagine what horror it is going to be to develop a new feature. It takes 6 months to a year (sometimes two years!) to develop a single small feature (say something like adding a new mode of authentication like support for AD authentication).
The fact that this product even works is nothing short of a miracle!
I don't work for Oracle anymore. Will never work for Oracle again!
> I'm a developer in Windows and contribute to the NT kernel. (Proof: the SHA1 hash of revision #102 of [Edit: filename redacted] is [Edit: hash redacted].) I'm posting through Tor for obvious reasons.
> Windows is indeed slower than other operating systems in many scenarios, and the gap is worsening. The cause of the problem is social. There's almost none of the improvement for its own sake, for the sake of glory, that you see in the Linux world.
> Granted, occasionally one sees naive people try to make things better. These people almost always fail. We can and do improve performance for specific scenarios that people with the ability to allocate resources believe impact business goals, but this work is Sisyphean. There's no formal or informal program of systemic performance improvement. We started caring about security because pre-SP3 Windows XP was an existential threat to the business. Our low performance is not an existential threat to the business.
> See, component owners are generally openly hostile to outside patches: if you're a dev, accepting an outside patch makes your lead angry (due to the need to maintain this patch and to justify it in shiproom as an unplanned design change), makes test angry (because test is on the hook for making sure the change doesn't break anything, and you just made work for them), and PM is angry (due to the schedule implications of code churn). There's just no incentive to accept changes from outside your own team. You can always find a reason to say "no", and you have very little incentive to say "yes".
> There's also little incentive to create changes in the first place. On linux-kernel, if you improve the performance of directory traversal by a consistent 5%, you're praised and thanked. Here, if you do that and you're not on the object manager team, then even if you do get your code past the Ob owners and into the tree, your own management doesn't care. Yes, making a massive improvement will get you noticed by senior people and could be a boon for your career, but the improvement has to be very large to attract that kind of attention. Incremental improvements just annoy people and are, at best, neutral for your career. If you're unlucky and you tell your lead about how you improved performance of some other component on the system, he'll just ask you whether you can accelerate your bug glide.
> Is it any wonder that people stop trying to do unplanned work after a little while?
> Another reason for the quality gap is that we've been having trouble keeping talented people. Google and other large Seattle-area companies keep poaching our best, most experienced developers, and we hire youths straight from college to replace them. You find SDEs and SDE IIs maintaining hugely important systems. These developers mean well and are usually adequately intelligent, but they don't understand why certain decisions were made, don't have a thorough understanding of the intricate details of how their systems work, and most importantly, don't want to change anything that already works.
> These junior developers also have a tendency to make improvements to the system by implementing brand-new features instead of improving old ones. Look at recent Microsoft releases: we don't fix old features, but accrete new ones. New features help much more at review time than improvements to old ones.
> (That's literally the explanation for PowerShell. Many of us wanted to improve cmd.exe, but couldn't.)
> More examples:
> * We can't touch named pipes. Let's add %INTERNAL_NOTIFICATION_SYSTEM%! And let's make it inconsistent with virtually every other named NT primitive.
> * We can't expose %INTERNAL_NOTIFICATION_SYSTEM% to the rest of the world because we don't want to fill out paperwork and we're not losing sales because we only have 1990s-era Win32 APIs available publicly.
> * We can't touch DCOM. So we create another %C#_REMOTING_FLAVOR_OF_THE_WEEK%!
> * XNA. Need I say more?
> * Why would anyone need an archive format that supports files larger than 2GB?
> * Let's support symbolic links, but make sure that nobody can use them so we don't get blamed for security vulnerabilities (Great! Now we get to look sage and responsible!)
> * We can't touch Source Depot, so let's hack together SDX!
> * We can't touch SDX, so let's pretend for four releases that we're moving to TFS while not actually changing anything!
> * Oh god, the NTFS code is a purple opium-fueled Victorian horror novel that uses global recursive locks and SEH for flow control. Let's write ReFS instead. (And hey, let's start by copying and pasting the NTFS source code and removing half the features! Then let's add checksums, because checksums are cool, right, and now with checksums we're just as good as ZFS? Right? And who needs quotas anyway?)
> * We just can't be fucked to implement C11 support, and variadic templates were just too hard to implement in a year. (But ohmygosh we turned "^" into a reference-counted pointer operator. Oh, and what's a reference cycle?)
It is what it is, really. Some of my colleagues cringe at the thought of touching a computer either after hours or on the weekend. Others literally spend their entire 16 awake-hours on computers (not necessarily just programming, but a large majority are doing technical side projects, etc).
Is there a difference in their work output? Absolutely there is, in my experience here. Contrary to my prior assumptions, it is actually the _die hard_ tech guys that
- Produce excellent work
- Produce a consistent amount of work over long periods
- Keep up to date with tech
- Don't really seem to suffer "burnout"
- Consistently improve their skills and progress their careers
- Have a high probability of being enthusiastic about a new project, technology or their work
Whereas the ones who are "work-only" programmers (not always) tend to
- Produce excellent work in short bursts, and then average work the rest of the time
- Do not produce consistent amounts of work over long periods
- Not only struggle to keep up with tech, but actively _fight_ the idea of having to learn something new
- Have a high probability of going to management to discuss their "burnout" and take extra time off
- Seem to have one specialized skill set that neither improves nor gets worse - they just float around the same level
- Rarely show enthusiasm for a new project, technology, or their work
People often refer to "burnout" in this context as a short-term problem - but I honestly think that "burnout" as we know it is the result of someone having already "checked out" or "lost the passion" for what they do (or perhaps they never had it, and just went into this career for other reasons).
This seems really grim for the non _die hard_ people, so here's some positive traits I've seen in them that are usually lacking in the _die hard_ people.
- Typically have much better communication skills
- Usually better at reasoning for which technology to use in a project (ignore "language or framework of the week")
- Are better with customers and maintaining relations
- Provide more support to their colleagues
- Willingness to take on different kinds of tasks (no attitude of "that's not my job")
I think it's fair to say that balance is key between these very dynamic cultural traits we observe as programmers.
For those not aware of the background, the author is a wizard from a secretive underground society of wizards known as the Familia Toledo; he and his family (it is a family) have been designing and building their own computers (and ancillary equipment like reflow ovens) and writing their own operating systems and web browsers for some 40 years now. Unfortunately, they live on the outskirts of Mexico City, not Sunnyvale or Boston, so the public accounts of their achievements have been mostly written by vulgar journalists without even rudimentary knowledge of programming or electronics.
And they have kept their achievements mostly private, perhaps because whenever they've talked about the details publicly, the commentary has mostly been of the form "This isn't possible" and "This is obviously a fraud" from the sorts of ignorant people who make a living installing virus scanners and pirate copies of Windows and thus imagine themselves to be computer experts. (All of this happened entirely in Spanish, except I think for a small amount which happened in Zapotec, which I don't speak; the family counts the authorship of a Zapotec dictionary among their public achievements.) In particular, they've never published the source or even binary code of their operating systems and web browsers, as far as I know.
This changed a few years back when Óscar Toledo G., the son of the founder (Óscar Toledo E.), won the IOCCC with his Nanochess program: https://en.wikipedia.org/wiki/International_Obfuscated_C_Cod... and four more times as well. His obvious achievements put to rest — at least for me — the uncertainty about whether they were underground genius hackers or merely running some kind of con job. Clearly Óscar Toledo G. is a hacker of the first rank, and we can take his word about the abilities of the rest of his family, even if they do not want to publish their code for public criticism.
I look forward to grokking BootOS in fullness and learning the brilliant tricks contained within! Getting a full CLI and minimalist filesystem into a 512-byte floppy-disk boot sector is no small achievement.
It's unfortunate that, unlike the IOCCC entries, BootOS is not open source.
The hardest thing to understand here about Guix is exactly what it does, because there's only 3 tools out there that operate like this (that I'm aware of, at least):
- GNU Guix
- NixOS
- Chef Habitat
These three tools have some common themes but ultimately accomplish different end goals.
Some common things:
- They all provide a package manager that exists outside of the regular scope of your operating system.
- They all bundle together your compiled application code AND configuration code AND runtime dependencies into one package, making it easy to just run your application.
Some differences:
Nix. [Edit: It was pointed out that the package manager Nix is separate from NixOS, and this package manager can be run on any operating system.] (https://nixos.org/) (https://github.com/NixOS)
GNU Guix has a larger focus on reproducibility and the archival properties of software. In other words, Guix uses its ability to run software independently of your operating system as a way of ensuring authentic, reproducible, and 100% portable applications. Guix wants to build your application FIRST, and ignore whatever problems your OS might present to you. It lets you build a development environment in a "clean room" with the environment command. It also lets you write those configurations and environments using Guile programming interfaces and the Scheme programming language, which is super powerful. (https://www.gnu.org/software/guix/) (https://github.com/pjotrp/guix)
Chef Habitat is very similar to GNU Guix, except it also works for Windows packages. Its primary goal is to let you build and deploy your applications completely independently of the constraints of your operating system or runtime environment. Just like GNU Guix, this makes it super awesome for taking old applications that are "stuck" on old operating system environments and quickly porting them to whatever modern OS or runtime format you wish. Like GNU Guix, it has a chrooted clean-room environment called the Studio, and it uses Bash/PowerShell to let you build packages. Chef Habitat diverges from GNU Guix in that it provides a runtime process supervisor with auto-update / continuous-delivery capabilities, plus a gossip protocol so that applications can talk to each other and do things like leader election automatically. (https://habitat.sh) (https://github.com/habitat-sh/)
Overall, I think most people will be confused when they approach these tools because they've never used anything like them before. But once you get your hands in them you'll realize how awesome they really are.
Yeah, we just came across someone on HN trying to write a testing DSL with NLP. [0] He's apparently gotten funding for this.
To be frank, the tech community has really been leaving a sour taste in my mouth the last few years. Everyone is anxious to bandwagon on these buzzwords that provide little to no benefit or are applicable only to a small segment of the tech population.
In most cases, even a superficial understanding of the problem space should make it obvious that $This_Weeks_Sexy_Solution is a very bad fit. Do you need a server that persists data and is individually addressable? Then why are you using Kubernetes and Docker, which are still struggling to figure out these very basic things? k8s and Docker have very specific uses, but unless you're Google, they're probably not the right fit for your production environment right now.
This phenomenon seemed to hit a critical mass with document databases and single-page apps, and it continues to iterate with every open-source release that comes out of Google or Facebook. Since Google released TensorFlow last year and the compounded hype of [much worse than advertised attempts at] conversational speech recognition in Siri and Alexa, the "machine learning" bandwagon is starting to try to edge into the spotlight, and it sounds like it's already a mandatory part of any VC pitch.
Fortunately, machine learning is pretty hard and you get into hairy math practically right away, so I don't think the legs on this one will last as long. But we'll see. We'll at least have a lot of faux-ML going around and a lot of people making spurious claims on their resumes.
The economic collisions that make Silicon Valley and the tech industry in general a hive for inexperienced and insecure youth are bearing some really interesting effects this way. How can a company that's not "blown about by every wind of [tech fad]" fully exploit its relative sanity for competitive advantage?
But they don't look like the places cool startups do cool startup things and the company wants to foster a cool startup feel so that everyone acts like it's a cool startup and starts doing cool startup things and attracting all that cool startup talent so that all that cool startup money starts flowing in and cool startup vibe permeates the office.
There is also one sure-fire way to increase telomere length: get cancer.
So these people were in a thin metal cube, exposed to abnormal levels of ionizing radiation, ... and after that there was telomere lengthening? I always heard that astronauts, given the groups they're selected from, have suspiciously short lifespans. Still somewhat above average, but these guys got selected from the crème de la crème. Half of them should live to 120, and that is definitely not happening. I've never seen a good study actually comparing it though.
According to several doctor friends of mine, balance is the best way to go for a long life. Being too thin will kill you, because once you're 65 or 70 or so you will lose the ability to quickly gain weight. A significant number of people dying from "natural causes" die as follows: they get infected with something stupid, like a flu virus. Or they break a hip or something. Either way, they get really under the weather. Result: they lose weight, a lot of weight, rapidly. If your weight falls under about 35 kg, odds of survival drop dramatically, and they die from "complications" (in practice: secondary infections resulting in metabolic exhaustion: your body simply cannot maintain the minimum energy level to keep you alive. On the plus side: a very peaceful way to go, and likely quite comfortable too). Keep in mind it will take a year to work your way back from 40 kg to 50 kg at such an age, so the higher you go the less likely you'll drop back down due to another incident before recovering.
And of course, exercise only helps up to a normal level. If you spend 2 hours every day running, that is definitely in the "shortens lifespan" area. 10 minutes, probably very good for you. And of course, the obvious: exercise increases the odds of accidents happening. Accidents, even stupid ones, can kill.
> even then it could still just stop working one day
This is where automated testing helps. My mail server has a sister VM out in the wild that once a day picks up the latest backup from the offsite-online backups and restores it, sending me a copy of the last message received according to the data in that backup. If I don't get the message, restoring the backup failed. If the message looks too old then making or transferring the backup failed.
My source control services and static web servers do something similar. None of the shadow copies are available to the world, though I can see them via VPN to perform manual checks occasionally, and if something nasty happens they are only a few firewall and routing rules away from being used as DR sites (they are slower, as in their normal just-testing-the-backups operation they don't need nearly the same resources as the live copies, but slow is better than no!).
This won't catch everything of course, but it catches many things that not doing it would not. The time spent maintaining the automation (which itself could have faults, of course) is time well spent if done intelligently. For a system as large in scale as GitLab's, a full daily restore is probably not practical, so a more selective heuristic will need to be chosen if you are operating at such a scale. My arrangement still needs some manual checking and sometimes I'm too busy or just forget, so again it isn't perfect, but the risk of making it more clever and inviting failure that way is at this point worse than the risk of my being lazy at exactly the wrong time.
One thing my arrangement doesn't test is point-in-time restores (because sometimes the problem happened last week, so last night's backup is no use) but there is a limit to how much you can practically do.
> The problem of restoring non-existing backups should be treated as a more serious problem in our industry
It is by people that care about it, but not enough people care, and too many people see the resources needed to get it right as an expense rather than an investment for future mental health.
It isn't just non-existent backups. Any backup arrangement could be subject to corruption either through hardware fault, process fault, or deliberate action (the old "they hacked us and took out our backups too" - I really must get around to publishing my notes on soft-offline backups...).
> Let's identify the diseases of this sort in our industry
Apathy mainly.
The people who care most are either naturally paranoid (like me), have lost important data at some point in the past so know the feeling first hand (thankfully not me, though in part to having a backup strategy that worked) or have had to have the difficult conversation with another party (sorry, I can't magic your data back for you, it really is gone) and watch the pitiful expressions as they beg for the impossible.
The only way to enforce the correct due diligence is to make someone responsible for it; it is more a management problem than a technical one, because the technical approaches needed pretty much all exist and for the most part are well studied and documented.
Of course to an extent you have to accept reasonable risks. It is usually not practical to do everything that could be done, and understandable human error always needs to be accounted for as do "acts of Murphy". But someone needs to be responsible for deciding what sort of risk to take (by not doing something, or doing something less ideal) rather than them just being taken by general inaction.