oteret's comments | Hacker News

Hi, I'm the first author of this article. I explored self-replication with and without variation quite a bit - we have a paper at ALIFE 2021 that is all about self-replication with neural networks. One thing I point out in the soon-to-be-published paper, where we use feedforward neural networks instead of Neural CAs for self-replication, is that CA rules can generally be seen as environment rules, while CAs can specialise and diversify through their state vectors. There is a wide spectrum along which to choose how many rules to embed in the environment and how many the models themselves must learn for self-replication. As von Neumann said (paraphrasing): we don't want to explain away the problem by making the environment too complex, nor make it so simple that progress becomes too difficult.
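To make the environment-rule vs. learned-rule split concrete, here is a minimal sketch of a 1-D Neural-CA-style update step. All names (`env_rule`, `learned_update`, `STATE_DIM`) are illustrative assumptions, not from the paper; the point is only that part of the dynamics can be hard-coded while the rest sits in learnable weights.

```python
import numpy as np

STATE_DIM = 4  # channels per cell (illustrative choice)
rng = np.random.default_rng(0)
# "Learned" weights mapping a cell + its two neighbours to a new state.
W = rng.normal(scale=0.1, size=(3 * STATE_DIM, STATE_DIM))

def env_rule(grid):
    """Fixed environment rule, e.g. a slow decay applied to every channel."""
    return grid * 0.99

def learned_update(grid):
    """Per-cell neural update from the cell and its two neighbours."""
    left = np.roll(grid, 1, axis=0)
    right = np.roll(grid, -1, axis=0)
    perception = np.concatenate([left, grid, right], axis=1)  # (N, 3*STATE_DIM)
    return np.tanh(perception @ W)

def step(grid):
    # The split below is the design choice discussed above: how much
    # behaviour to embed in env_rule vs. leave to learned_update.
    return env_rule(grid) + learned_update(grid)

grid = rng.normal(size=(16, STATE_DIM))
grid = step(grid)
```

Sliding `env_rule` between "does nothing" and "does everything" traces out the spectrum described above.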

Having said that, I already have some unpublished results showing ways to have traditional Neural CAs self-replicate while also retaining functional capabilities (such as persisting a pattern). But as they stand now, you could easily see these as being mostly "environmental rules". There is certainly room for plenty of research in this area. I encourage researchers to focus on finding interesting forms of imperfect self-reproduction.
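One way to operationalise "imperfect self-reproduction" is to score a child pattern against its parent with a tolerance rather than demanding an exact copy. This is a hypothetical sketch, not the metric used in the paper; `replication_score` is an assumed helper name.

```python
import numpy as np

def replication_score(parent, child):
    """Cosine similarity between flattened parent and child states (1.0 = perfect copy)."""
    p, c = parent.ravel(), child.ravel()
    return float(p @ c / (np.linalg.norm(p) * np.linalg.norm(c)))

parent = np.ones((8, 8))
# A noisy, imperfect replica of the parent pattern.
child = parent + 0.05 * np.random.default_rng(1).normal(size=(8, 8))
score = replication_score(parent, child)
```

Any threshold on such a score (say, accepting replicas above 0.9) defines a notion of "good enough" reproduction that still leaves room for variation.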

