
Your argument is wrong, or miscommunicated. The AI can determine only some halting-problem instances, not all of them, and it can make mistakes. It is not an oracle.

Recursive neural networks do not necessarily halt when executed in arbitrary-precision arithmetic.
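A minimal sketch of the point, under assumptions of my own choosing (a single-unit recurrent update and a "halt when a state repeats" stopping rule are illustrative, not taken from the thread): with exact rational arithmetic via Python's `fractions`, the state can grow in complexity forever, so the halting condition is never met.

```python
from fractions import Fraction

def run_rnn(x0, max_steps):
    """Iterate a one-unit recurrent update x <- 4x(1-x) in exact
    rational arithmetic. Halt only if a state ever repeats (which
    would force the trajectory into a cycle)."""
    x = Fraction(x0)
    seen = {x}
    for step in range(max_steps):
        x = 4 * x * (1 - x)
        if x in seen:           # halting condition: state revisited
            return step + 1     # halted after this many steps
        seen.add(x)
    return None                 # never halted within the budget

# With exact rationals the denominators grow without bound
# (1/3 -> 8/9 -> 32/81 -> 6272/6561 -> ...), so no state ever
# repeats and the halting condition is never triggered.
print(run_rnn(Fraction(1, 3), 50))  # None
```

In fixed-precision floating point the state space is finite, so a trajectory must eventually cycle and this rule would always halt; it is exactly the move to arbitrary precision that reopens the question.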



Have you read the referenced article?


I’ve read a fair portion of it now (at least, as fair a portion as reading on one’s phone allows), but not all of it yet. While it does seem to be making some interesting points, I feel it takes a fair bit of time to get to the point.

Maybe that’s partly due to the expectations of the medium, but I feel it could be much more concise. Instead of saying, “here is a non-rigorous version of an argument which was previously presented at [other location] (note that while we believe the argument as we present it is essentially the same, we are making a different choice of terminology). Now, here is a reason why that argument is insufficiently rigorous, and a way in which that lack of rigor could be used to prove too much. Now, here is a more rigorous, valid version of the argument (with various additional side tangents)”, one could instead say “Let objects X, Y, Z satisfy properties P, Q, R. Then, see this argument. Now, to see that this applies to the situations it is meant to represent, [blah].”


Ya, it could be more concise, but I think that would require more prerequisites from the reader in terms of model theory and formal logic.



