Hacker News | bpcrd's comments


I completely agree! Please see the bottom of our post where we offer free access in two ways:

1) Email up to 50 files to yc@realitydefender.com and we'll scan them for you; no setup required

2) 1-click add to Zoom/Teams (via the App Store) to try detection live in your own calls immediately


This is something for us to consider in the near future!


See my other response on this, but this is something we are actively protecting against.


This is something we're constantly updating, upgrading, iterating, and improving on. Every. Single. Day.

Whether it's introducing new models, deprecating old ones, or improving existing ones, there is an element of both staying current and looking ahead at research. We catch many of the new models generating hyperreal content on day one, because they're based on existing technology and/or research.


This sounds exhausting, and I don’t know what you will do when the day comes that you can’t keep up. Any day could be the end.


It is in fact something we support with our enterprise clients, and we will roll it out at a later date via our Public API, including the free tier.


Noted. Our marketing team only uses 640x480 CRTs and works exclusively in IE6, so will flag to them via Yahoo Messenger.


As noted elsewhere, we give confidence scores between 1% and 99%. We also use many different models for each modality, for a more robust and complete answer with each scan, and each model has its own confidence score.


That doesn't fix the fundamental potential for abuse, moral hazard, and accountability sink[1].

[1]: https://en.wikipedia.org/wiki/The_Unaccountability_Machine


I understand this is in jest, but unfortunately AI generation tools more or less stopped the six-finger issue a couple of years ago. We are decidedly not a model used for the express detection of finger abnormalities, but a multi-model and multimodal detection platform — driven by our Public API (which you can try for free right now, btw) — which uses many different techniques to differentiate between content that is likely manipulated and likely not manipulated.

That said, neat gag.


Thank you. As an inference-based detection platform, our models go into every scan with the assumption that the file is not the original/ground truth and has likely been transcoded. We never say something is 0% or 100% fake because we don't have that ground truth. Instead, our award-winning models return a confidence score from 1% to 99% (higher meaning more likely manipulated), which is sent to the team running the detection to act on as they see fit. Some use it as one of many signals to make an informed decision manually. Others have chosen to moderate or label accordingly. There are experts who've been called to testify on matters like this one, and some of them work on these very models.
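A downstream team acting on that 1-99 score might apply a policy like the sketch below. The thresholds and action names are purely illustrative assumptions, not Reality Defender's recommendation:

```python
def moderation_action(confidence: float) -> str:
    """Map a manipulation-confidence score (1-99) to a policy action.

    Hypothetical thresholds: each team tunes these to its own risk
    tolerance, or uses the score as one signal among many.
    """
    if confidence >= 90:
        return "remove"        # very likely manipulated
    if confidence >= 70:
        return "label"         # mark as possibly synthetic
    if confidence >= 40:
        return "human_review"  # ambiguous: route to a reviewer
    return "allow"             # likely authentic

print(moderation_action(95))  # remove
```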

As for synthetic content that is undetectable to the naked eye or ear, we are already there.


Have you checked the calibration of that confidence value? When it reports 99% confidence, are 99/100 of those manipulated?
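Concretely, the calibration check being asked about could be sketched as follows: bucket scans by reported confidence and compare each bucket's mean confidence to the fraction that turned out to be manipulated. The data and bin width here are made up for illustration:

```python
from collections import defaultdict

def calibration_table(preds: list[tuple[float, bool]],
                      bin_width: float = 10.0) -> dict:
    """preds: (reported confidence in [0, 100], actually_manipulated) pairs.

    Returns {bin_start: (mean_confidence, observed_rate_pct, count)}.
    A well-calibrated detector has mean_confidence close to
    observed_rate_pct in every bin.
    """
    bins: dict[float, list[tuple[float, bool]]] = defaultdict(list)
    for conf, label in preds:
        bins[bin_width * (conf // bin_width)].append((conf, label))
    table = {}
    for start, items in sorted(bins.items()):
        mean_conf = sum(c for c, _ in items) / len(items)
        rate = 100.0 * sum(lbl for _, lbl in items) / len(items)
        table[start] = (mean_conf, rate, len(items))
    return table

# 100 scans reported at 95% confidence, 99 of which were truly manipulated:
scans = [(95.0, True)] * 99 + [(95.0, False)]
print(calibration_table(scans))  # {90.0: (95.0, 99.0, 100)}
```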


I’m curious what awards the models have won?


We've won multiple awards, including at RSA: https://www.rsaconference.com/library/press-release/reality-...

And we’ve published peer-reviewed research at top AI conferences (e.g., CVPR, NeurIPS, ECCV, AAAI, Interspeech), available at https://www.realitydefender.com/research

