
Apart from NEAT/HyperNEAT, there are other approaches to neuroevolution (in this context it is often referred to as "Evolutionary Neural Architecture Search" [0]). Evolution can be applied in different ways, e.g. optimizing the architecture, or replacing gradient-descent training entirely.
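To make the "optimizing the architecture" idea concrete, here is a hypothetical toy sketch of evolutionary architecture search, not taken from the linked papers: a genome is just a list of hidden-layer widths, and mutation adds, removes, or resizes a layer. The `score` function is a stand-in; a real system would train each candidate network and use validation accuracy as fitness.

```python
import random

random.seed(1)

# Toy fitness stand-in (assumption, purely illustrative): prefer
# architectures with ~3 hidden layers of width ~32. In practice this
# would be replaced by training the candidate and measuring accuracy.
def score(genome):
    return -abs(len(genome) - 3) - sum(abs(w - 32) for w in genome) / 32

def mutate(genome):
    """Return a copy of the genome with one random structural change."""
    g = genome[:]
    op = random.choice(["add", "remove", "resize"])
    if op == "add":
        g.insert(random.randrange(len(g) + 1), random.choice([8, 16, 32, 64]))
    elif op == "remove" and len(g) > 1:
        g.pop(random.randrange(len(g)))
    else:
        g[random.randrange(len(g))] = random.choice([8, 16, 32, 64])
    return g

# Start from single-layer genomes; evolve with simple elitist selection.
population = [[random.choice([8, 16, 32, 64])] for _ in range(10)]
for _ in range(50):
    population.sort(key=score, reverse=True)
    parents = population[:5]                       # keep the top half
    population = parents + [mutate(random.choice(parents)) for _ in range(5)]

best = max(population, key=score)
```

The same loop structure carries over to real NAS systems; only the genome encoding and the (much more expensive) fitness evaluation change.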

A while ago I co-authored a paper in this space [1] and released some code for interested folks [2].

[0]: https://arxiv.org/pdf/2008.10937.pdf

[1]: https://arxiv.org/abs/1801.00119

[2]: https://gitlab.com/pkoperek/pytorch-dnn-evolution/-/tree/mas...



I also find the related idea of neuroevolution of the weights of a neural network fascinating in its own right.

I've implemented "cooperative coevolution", which felt like magic given how well it performs on some tasks (e.g. continuous-control RL problems) relative to known-good gradient-based methods.

I wish this stuff were explored a bit more. It seems we are moving away from the paradigm of evolutionary methods...





