Hacker News | username90's comments

Start that argument once we can model insect brains. Mouse brains aren't even on the horizon of what we can do.


> Start that argument once we can model insect brains.

I'm not sure what level of modelling you'd accept, but we appear to be close:

https://pubmed.ncbi.nlm.nih.gov/32880371/

> Mouse brains aren't even on the horizon of what we can do.

I would say that they are "on the horizon", given that the mouse brain connectome has already been published:

https://advances.sciencemag.org/content/6/51/eabb7187


They have mapped out the synapse structure, but there is no evidence that synapse structure alone is enough to actually run the brain. If you can run that brain and show it is an accurate representation of a flea brain, then you'd have something; until then I'll believe that neurons do far more than ML researchers hope they do.


General intelligence requires solving real problems in the real world. It isn't about emulating humans, but about emulating anything resembling an intelligent being we're aware of. It would be totally exceptional if we could properly emulate the intelligence of a fly or an ant, but we can't even do that. "Emulate a human brain," you say, but we can't even emulate brains a million times smaller than that.


https://en.m.wikipedia.org/wiki/AnimatLab

We totally do emulate organisms on that scale. The real challenge is simulating the sensory inputs and the feedback loop: outputs act on the environment, the environment changes as the body acts, and then new inputs arrive.

Disembodied simulations of neural networks don't work. A nervous system is part of a body, an environment, and all the feedback loops that come with them.

It sounds like you really just want to see an ML algorithm have a body to learn in. Why we would ever expect AGI to happen without letting an ML algorithm learn by interacting with a "real" reality seems strange to me. By all means, keep making glorified optic nerves and expecting them to "wake up".


You don't need sensors, you just need a virtual room.

> We totally do emulate organisms on that scale. The

There is no evidence those emulations actually emulate those organisms. They just built a neural net with the same structure and assumed the cells don't matter. But cells are really smart and can navigate environments on their own; they are intelligent beings in their own right, and building a flea out of a thousand of those is far more plausible than doing it with a neural net of similar size.

And yes, in order to prove that we actually emulated those organisms, you need to show that the emulation does the same things in the same scenarios. You don't even need to do everything; just something simple, like being able to move around, gather material, and build a home in a physics engine, would be huge.


> You don’t need sensors, you just need a virtual room.

While technically true, I actually think this is way more difficult than it sounds, bordering on practical impossibility.

I think the other commenter was making a really important point. The simulated environment would need to be incredibly rich, to a point as to almost defy imagination.

Consider what happens to a human mind when confined in a box (prison) with limited opportunities for stimulation. There's a room, a gym, other people with whom to socialize, food, walls, an outdoor enclosure... And yet someone who spends their entire life in this type of environment will certainly face serious neurodevelopmental issues.

For AI of the human/mammal order, I would even argue that simulating adequate inputs might be a more difficult problem than building the AI that responds to them!


Have you seen these C. elegans emulations in a robot body?

https://www.youtube.com/watch?v=YWQnzylhgHc

https://www.youtube.com/watch?v=xu_oYLmPX9U


Also note, organisms not only navigate the environment, they interact with it, handle their own capture and consumption of energy from it, and reproduce. Autonomously.


Numbers imply you can do mathematical operations on them that make sense.

So how would you quantify "good" or "bad"? You can't, unless you also answer what "good" + "bad" should be. In psychology they just assume that mapping those onto 5 and 1 makes sense, so "good" + "bad" = 5 + 1 = 6, but that doesn't make sense, since it would imply that "good" is the same as "bad" + "bad" + "bad" + "bad" + "bad". You get similar but different issues if you start including negative numbers, or if you just use relative measures without a proper zero. No matter what you do, numbers don't properly represent feelings as we know them.
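A toy sketch of the point above, assuming the common 1-5 Likert coding (the labels and numbers here are hypothetical, just for illustration): once you treat the codes as numbers, absurd equations hold, and an order-preserving relabeling (which is a perfectly valid transform for ordinal data) breaks them, showing the arithmetic was never meaningful.

```python
# Hypothetical Likert coding: "bad" -> 1, "good" -> 5.
ratings = {"bad": 1, "good": 5}

# Treating the codes as real numbers licenses nonsense like:
assert ratings["good"] + ratings["bad"] == 6       # what feeling is a "6"?
assert ratings["good"] == 5 * ratings["bad"]       # "good" = five "bad"s?

# Relabel while preserving order (bad < good still holds) --
# legitimate for ordinal data, but it destroys the arithmetic:
relabel = {1: 10, 5: 11}
assert relabel[ratings["bad"]] < relabel[ratings["good"]]   # order survives
assert relabel[ratings["good"]] != 5 * relabel[ratings["bad"]]  # sums/ratios don't
```

Only the ordering carries over between codings; any conclusion that depends on sums or ratios of the codes is an artifact of the particular numbers chosen.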


They never promised that the output would be error free, and output with errors is still useful for many applications. And the issues you are talking about got fixed as soon as they were discovered, and since then Google has made sure to always diversify their datasets by race. Nowadays it is common knowledge that you need to do that, but back then it wasn't obvious that a model wouldn't generalize across human races, and it is largely thanks to that mistake that everyone now knows it is an issue.


It was discovered by others, not them; they fixed the issue only retroactively, after it was called out in public. This lack of oversight is part of what I mean by applying things with caution.

And why would they have assumed in the first place that the model _would_ generalize across human races, or any other factor for that matter?


Human thinking includes building new models/filters for understanding the world, not just applying old ones. And that isn't only used for learning; we do it all the time when solving any kind of challenging problem, or even simple ones like trying to recognize a face. Computer models might never compete with human performance unless they can learn how to solve a problem while they are solving it, because that is what humans do.


I am on the same page.

To talk about the models some more...

There's this big mass of models. And it's got all kinds of sections. Special sections that we learn about in school. Special sections called "science". Sections that we invent ourselves. Sections that we inherit from our parents, religion, etc. It's partially biological. Partially cultural. A massive library of models, mostly inherited.

You move in relationship with the mass in different ways.

You can create new models. That's what basic science is. Extending the edge of the mass. Naming the nameless.

You can operate freely from the mass. Creating your own models or maybe operating model-less. Artists, mystics, weirdos.

You can operate completely within the mass. Never really contending with unmodelled reality. The map and territory become one. Like in a videogame. I think that's the most popular way.


They aren't very similar errors. ML solutions are about as accurate as humans at a glance, but over longer tasks humans clearly win. I'd say that the system is similar to humans in some ways, but humans have a system above that which checks whether the results make sense. That higher system is completely lacking from modern ML theory, and it doesn't seem to work like our neural-net models at all (the brain isn't a neural net).


10 billion a year in global tax avoidance for those tech companies combined doesn't sound like a lot considering their revenue is around 600 billion a year.

Also consider that all the workers are taxpayers too, and they want the company to continue existing; their taxes would also go towards protecting the bay. Those tax dollars wouldn't exist in California if those tech companies weren't there.


10 billion a year of tax money can fund a lot of stuff. Just because it is a fraction of 600 billion does not make it somehow better.


That figure is across the entire world; it wouldn't be used in the bay area. They already pay taxes properly for all the workers and buildings they have in the bay area. If that isn't enough, then those taxes ought to be raised, since they should cover the cost of operating there.


So it's not useful because it wouldn't benefit the bay area?

In fact, it's from places all over the world, significantly poorer places, where the money would stretch even further.

You're making it sound even worse.


The discussion is about issues in the bay area. They contribute a huge amount to the tax base there, so saying they don't pay taxes isn't fruitful to this discussion. If those taxes aren't enough to handle issues in the bay area, then bay area taxes should be increased, simple as that. And if that makes tech companies leave the area, then those tech companies couldn't afford to operate there; and if they can't afford it, then nobody can, and everyone would have to leave the area.

Edit: The main point is that nobody can say that San Francisco has less money with those tech companies operating there than it would if those tech companies left. It is among the richest places on earth and should easily be able to afford any measures that can be afforded anywhere.


Just because Mr X gives me 50 when he owes me 500, and I'd have 0 if he didn't exist, doesn't mean he's not taking advantage of me.


I don't think it has to serve a purpose other than power projection for the rich owners. I'd say that owning all the media is the best bang for your buck if you want to convert money into power.


Big media exists to define the Overton window for the plebs.


They don't even compete to solve the problem; they compete for funding. People who are better at politics get more funding, so adding more people could even be a net negative, with them draining the funding away from those who do the actual research.


iPhone users aren't allowed to install Chrome. They want Chrome, but all they get is a shitty Chrome skin on Safari. It does trick a lot of people into thinking that they actually have Chrome, though.

