Hacker News | vercaemert's comments

I worked on the Human Connectome Project.

If they freeze the vesicles that deliver transmitters and make them analyzable, you've got all the information you need. In terms of a modern ANN, it's the connections (axons) and the weights (transmitters/receptors in tandem).
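To sketch the ANN analogy (a toy rate model with made-up numbers, not data from any real connectome):

```python
import math

# Toy "connectome": W[i][j] is the effective synaptic weight from
# neuron j to neuron i (sign standing in for excitatory/inhibitory,
# magnitude for transmitter/receptor strength).
W = [
    [0.0,  0.8, -0.3],
    [0.5,  0.0,  0.9],
    [-0.6, 0.4,  0.0],
]

def step(activity):
    """One update of a simple rate model: weighted input through a nonlinearity."""
    return [math.tanh(sum(w * a for w, a in zip(row, activity))) for row in W]

activity = [0.1, 0.0, 0.2]
for _ in range(3):
    activity = step(activity)
print(activity)
```

Connections plus weights are all this toy needs to run; the question downthread is whether a real brain needs more.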

That said, this article doesn't get to the point in the free section. How are they collecting the information? Slicing is inherently destructive. Someone's got to manufacture an entirely novel imaging modality. Perhaps they could scan millimeters ahead of the slice at a resolution high enough to image receptors. Not possible currently.


> If they freeze the vesicles that deliver transmitters and make them analyzable, you've got all the information you need.

How can we possibly know that the non-connectome details of the brain don't influence computation or conscious experience?

It seems we ignore these only because they don't fit neatly into our piles of linear algebra that we call ANNs.


Take a gander at the OpenWorm project. It's a great example of how simple neuronal activity is (given details like the connections, number of receptors, and transmitter infrastructure). SOTA models of neuronal activity are simple enough for problem sets in undergraduate biomedical engineering programs.
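To give a sense of what "simple enough for undergrad problem sets" means, here's a minimal leaky integrate-and-fire neuron with forward-Euler integration. The parameters are illustrative textbook-ballpark values, not fitted to anything:

```python
# Minimal leaky integrate-and-fire neuron, forward-Euler integration.
# Parameter values are illustrative, roughly textbook ballpark.
tau_m    = 10.0   # membrane time constant (ms)
v_rest   = -70.0  # resting potential (mV)
v_thresh = -54.0  # spike threshold (mV)
v_reset  = -70.0  # reset potential after a spike (mV)
r_m      = 10.0   # membrane resistance (MOhm)
dt       = 0.1    # time step (ms)

def simulate(i_inj, steps):
    """Return spike times (ms) for a constant injected current (nA)."""
    v, spikes = v_rest, []
    for n in range(steps):
        dv = (-(v - v_rest) + r_m * i_inj) / tau_m
        v += dv * dt
        if v >= v_thresh:
            spikes.append(n * dt)
            v = v_reset
    return spikes

print(simulate(2.0, 5000))  # 2 nA for 500 ms: regular spiking
```

With 2 nA the steady-state drive clears threshold and the cell spikes regularly; with 1 nA it settles below threshold and stays silent.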

Sure, to your point, we don't know. But the worm above (nematode) swims and seeks food when dropped into a physics engine.

My main point is that the scale of the human brain is well beyond the capabilities of modern imaging modalities, and it will likely remain so indefinitely. Fascicles we can image, individual axons we cannot. I guess, theoretically, we'll eventually be able to (but it's not relevant to us or any of our remote descendants).


> But the worm above (nematode) swims and seeks food when dropped into a physics engine.

Nematode worms have an oxytocin analogue called nematocin that is known to influence learning and social behaviors like mating. As far as I can find, the project doesn't account for this, or only minimally, but aims to in the future.

It's not surprising that immediate short-term behaviors like movement depend mostly on the faster signaling of the connectome. But since we know of other mechanisms that most definitely influence the connectome's behavior, and we know we don't account for those at the moment, it is not accurate to say that the connectome is "all the information you need".

I agree that mapping the connectome of the human brain is impractical to the point of impossibility. But even if we could, the resulting "circuit diagram" would not capture all the details needed to fully replicate human cognition. Aspects of it, sure. Maybe even enough to make it do useful tasks for EvilCorp LLC while being prodded with virtual sticks and carrots. But it would be incomplete.


I saw a putative 3D animation of a fly whose brain had been digitized and then run in a simulation. It buzzed around, sipped food it had found on the ground, even rubbed its forelegs together as flies do. A true Dixie Flyline. We live in strange times...

Why would axons be unimageable?

There's research on the translation process where cells are basically flash-frozen (to avoid ice crystals), then imaged with cryo-electron microscopy, AFM, etc., capturing snapshots of translation (RNA to protein) to get a better understanding of how the folding proceeds and is aided.

If we can image sub-cellular features, what makes you believe we can't trace all the axons, dendrites, and synapses?

It seems more like a question of how to do it cost effectively at scale, not so much a question of "can we or not?".


> If they freeze the vesicles that deliver transmitters and make them analyzable, you've got all the information you need. In terms of a modern ANN, it's the connections (axons) and the weights (transmitters/receptors in tandem).

This is exactly what I'm doubting. How can you be so sure?


Same question, answered under another comment.

Yeah, but it wasn't, though. I found your answer unconvincing. I suppose "we don't know" is an answer, but that is nothing like "we have all the information we need".

Am I right in thinking that even if you had all of the connections and weights mapped out for a brain, the specifics of synaptic plasticity are still pretty poorly understood?

All the information to replicate the structure we have delineated. But what else?

What is the state of the art in regards to how neurons learn over time? Do existing neuron models account for that? Being trapped, unable to learn anything, sounds terrible.

I love this conundrum.

I have a good analogy. 10 years ago, I was convinced that a 24-inch 1080p monitor at arm's length was perfection. There could never be any reason to improve over it. I could do everything I ever wanted to, to a standard I would never need to improve upon.

Yet here we are. The simplest and most obvious improvement is a 24" 4k monitor at 200% scaling. Basically, better in every way.
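The arithmetic bears this out: at the same 24" diagonal, 4K is exactly double the pixel density of 1080p, so 200% scaling gives you the same logical layout with twice the detail:

```python
import math

def ppi(w_px, h_px, diagonal_in):
    """Pixels per inch from resolution and diagonal size."""
    return math.hypot(w_px, h_px) / diagonal_in

print(round(ppi(1920, 1080, 24)))  # ~92 PPI
print(round(ppi(3840, 2160, 24)))  # ~184 PPI, exactly double
```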

There's a discussion to be had about whether you need the better setup, which I think is your point, but there's no denying you'd want it (all other variables the same).


At some point specs don’t matter. I don’t wonder about the processor in my thermostat either. I don’t know how many horsepower my XC90 has. I don’t know the rated power of my chainsaw.

All I care about is: do they work, are they ‘safe’, are they comfortable, etc.


A thermostat’s capabilities and what’s expected of it won’t change even if the tech gets better, though, and that’s the key difference.

Yes. Not a drug in the typical sense. Simply replacing the protein in which the mice are deficient (that was identified by measuring levels in human spinal fluid).

Peter often reiterates that he doesn't recommend the use of open models for OpenClaw; the main reason being that they're much weaker at resisting prompt injection. I'd be interested to understand the security measures put in place here.

The joke is that Willie Nelson has used such high concentrations that he's simultaneously frying his brain cells and staving off Alzheimer's.

Willie Nelson is pretty sharp for his age. I compare him to the much younger President of the United States who blathers absolute nonsense constantly despite no known history of cannabis use and a claimed history of abstaining from all substances.

> claimed history of abstaining from all substances

[rolls eyes]

lots of anecdotal evidence suggests donnie t likes stimulants, esp. the kind you can put up your nose.


at present, it's just a fun discussion

the complexity of advanced connectomes is so far beyond our imaging capabilities that we have no way of knowing how far away from understanding intelligence we are


see the open worm project to get an idea of what artificial neuronal architecture requires to express anything meaningful. (and an interesting ethical perspective on digital consciousness.) my point being that the number of neurons is fairly meaningless. you could take neuron models and link them circuit-style to play doom at the 10^2 scale if you wanted. from a cellular neurophysiological perspective, there's nothing particularly special here (as opposed to sentience/intelligence that's a paradigm shift beyond our understanding). and, in my opinion, absolutely nothing to be even the slightest bit worried about ethically.
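to make the "link them circuit-style" point concrete: a handful of McCulloch–Pitts-style threshold units (a deliberately crude neuron model, toy weights) already gives you logic gates, which is the sense in which a 10^2-neuron circuit could in principle run arbitrary computation:

```python
# McCulloch-Pitts-style threshold units wired circuit-style.
# Weights and biases are hand-picked toy values.
def unit(weights, bias):
    return lambda *inputs: int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)

AND = unit([1, 1], -1.5)
OR  = unit([1, 1], -0.5)
NOT = unit([-1], 0.5)

# XOR from a two-layer circuit of the same units.
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

print([XOR(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```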


I support further research along the lines of what is being done with neurons here, however, I don't think we quite know enough about consciousness or general self awareness (and how it comes about) yet to make sweeping generalizations saying there's _nothing_ to worry about. Proceeding with caution is always warranted when the stakes involve living organisms in my book.


This will be a Harvard Business case study on market share.

Claude Code was instrumental for Anthropic.

What's interesting is that people haven't heard of it/them outside of software development circles. I work on a volunteer project, a webapp basically, and even the other developers don't know the difference between Cursor and Claude Code.


It's impressive, even if the books and the posts you're talking about were both key parts of the training data.

There are many academic domains where the research portion of a PhD is essentially what the model just did. For example, PhD students in some of the humanities will spend years combing ancient sources for specific combinations of prepositions and objects, only to write a paper showing that the previous scholars were wrong (and that a particular preposition has examples of being used with people rather than places).

This sort of experiment shows that Opus would be good at that. I'm assuming it's trivial for the OP to extend their experiment to determine how many times "wingardium leviosa" was used on an object rather than a person.

(It's worth noting that other models are decent at this, and you would need to find a way to benchmark between them.)
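The mechanical part of that experiment is a corpus scan; something like the following (the `corpus` string is a stand-in for the actual book text, and the classification of object vs. person would still be done by hand or by the model):

```python
import re
from collections import Counter

# Hypothetical corpus scan: count each incantation and grab trailing
# context for manually tagging the target as object vs. person.
corpus = """
Harry pointed his wand at the feather. "Wingardium Leviosa!" The feather rose.
"Wingardium Leviosa," she said, and the club flew out of the troll's hand.
"""

spells = ["wingardium leviosa", "expelliarmus"]
counts = Counter()
for spell in spells:
    for m in re.finditer(re.escape(spell), corpus, re.IGNORECASE):
        counts[spell] += 1
        # Context window after the match, for manual tagging.
        print(spell, "->", corpus[m.end():m.end() + 40].strip())

print(counts)
```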


I don’t think this example proves your point. There’s no indication that the model actually worked this out from the input context, instead of regurgitating it from the training weights. A better test would be to subtly modify the books fed in as input to the model so that there were actually 51 spells, and see if it pulls out the extra spell, or to modify the names of some spells, etc.

In your example, it might be the case that the model simply spits out the consensus view, rather than actually finding/constructing this information on its own.


Ah, that's a good point.


I'm surprised there isn't more "hope" in this area. Even things like the GPT Pro models; surely that sort of reasoning/synthesis will eventually make its way into local models. And that's something that's already been discovered.

Just the other day I was reading a paper about ANNs whose connections aren't strictly feedforward but, rather, circular connections proliferate. It increases expressiveness at the (huge) cost of eliminating the current gradient descent algorithms. As compute gets cheaper and cheaper, these things will become feasible (greater expressiveness, after all, equates to greater intelligence).
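One way to see the difference (a toy single-unit sketch, not the paper's method): a feedforward net is a DAG, so one pass computes everything, while a cyclic connection means the activation depends on itself and you have to iterate to a fixed point, which is why the usual layer-by-layer gradient computation doesn't apply as-is:

```python
import math

# Feedforward: a DAG, so one pass computes everything.
def feedforward(x, w1, w2):
    h = math.tanh(w1 * x)
    return math.tanh(w2 * h)

# With a cycle (h depends on itself) there is no single pass;
# iterate to a fixed point instead (converges here since |w_rec| < 1).
def recurrent_fixed_point(x, w_in, w_rec, iters=100):
    h = 0.0
    for _ in range(iters):
        h = math.tanh(w_in * x + w_rec * h)
    return h

print(feedforward(0.5, 1.2, 0.8))
print(recurrent_fixed_point(0.5, 1.2, 0.7))
```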


It seems like a lot of the benefits of SOTA models come from data, though, not architecture? Won't the moat of the big 3/4 players in getting data only grow as they are integrated deeper into business workflows?


That's a good point. I'm not familiar enough with the various moats to comment.

I was just talking at a high level. If transformers are HDD technology, maybe there's SSD right around the corner that's a paradigm shift for the whole industry (but for the average user just looks like better/smarter models). It's a very new field, and it's not unrealistic that major discoveries shake things up in the next decade or less.

