
>through some cryptographic trickery you can prove that the backdoor can't be detected.

Can you explain more about this? E.g., in the worst case, if I know the learning algorithm, couldn't I retrain the model myself and notice the difference? What exactly is the threat model?


