What I do is I don't ask it to generate results, but to give me a program that does the processing, then have it give me a test suite where I can 'trivially' (usually) verify the program on synthetic data (which, of course, I also have ChatGPT create for me). It usually even includes good edge cases.
I've had a few times where I didn't like the library it used to get the results; I just told it 'use library xyz' and it rewrote the whole thing using idioms native to that library. It's amazing. It's eliminated like 75% of the drudgery that I've come to hate so much about programming in recent years.
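To illustrate the workflow, here's a minimal hypothetical sketch: the kind of small program plus 'trivially' verifiable test suite over synthetic data you'd ask the model for. The function, its name, and the data are all illustrative assumptions, not from any actual ChatGPT session.

```python
# Hypothetical example of the workflow: don't ask the LLM for results,
# ask it for a program that does the processing...
def dedupe_keep_order(items):
    """Remove duplicates while preserving first-seen order."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# ...plus a test suite on synthetic data that's trivial to verify by eye,
# including edge cases (empty input, all duplicates, interleaved repeats).
def test_dedupe():
    assert dedupe_keep_order([]) == []
    assert dedupe_keep_order([1, 1, 1]) == [1]
    assert dedupe_keep_order([3, 1, 3, 2, 1]) == [3, 1, 2]

test_dedupe()
```

The point is that eyeballing the test cases is much cheaper than verifying the program's output on real data directly.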
> How do you verify if what ChatGPT returns is actually correct?
A bit ironically, if it concerns a fact, I perform a search on DDG or Google and find an authoritative website or paper to corroborate it. So search outside of an LLM still feels like an important tool. If the result is code, I can test it myself.
There needs to be some lexicon for this kind of thing. There's a parallel with NP problems: some things are hard to figure out but easy to verify, while others are hard to do both.
It works, too. I asked it for the history of a term, and it gave a very plausible-sounding answer citing three different links. All of them pointed to the same page, which didn't say what Bing claimed it did. A rare miss, but at least I could check its work.
I liked it at the time due to its meta nature. Integrating suspension of disbelief as a core mechanic in its power system is very appealing for my unread ass. Reminds me of Pratchett's novels and how they handle the concept of "stories".
I'd like to have more fantastical stuff to read that can take me along for the ride of the author's mindset and problems while they weave their story :) Recommendations are welcome.
We are a collection of highly-skilled software developers working on a host of projects related to enterprise network engineering, monitoring and reporting using tools designed and developed by the best minds in the IT industry.
We have multiple teams working on different projects, and we hold our software development practices to high standards, although this can vary by team. We try to balance the ability to deliver projects with being aggressive about refactoring, code reviews, and automated tests.
We're mostly looking for senior software engineers, but we're also open to mid-level/junior SWEs :)
I am an experienced fullstack software developer (8 years) still open to new areas to specialize in. I've led a few small teams, mentored juniors, and contributed to and kickstarted dozens of projects.
My interest is in anything that increases productivity via automation, visualization, and process streamlining. Anything outside that box will probably still be pretty interesting to me. Please shoot me an email and I'll look forward to helping you out!
Although hmm, on the other hand, it's also arguable that what we may have coded ourselves would be subject to errors too.