A colleague was working on a replication study. We got the code from the original researcher and from another researcher who did a follow-on study. The code barely runs, and the results seem off. I spent days debugging it to no avail. Just because researchers provide code does not mean it is well-written, or that it even works.
Then I'd be skeptical about those results. Releasing the code allows others to judge the likely accuracy and integrity of the findings. A lot can go wrong in complex, multi-step computational processes, and if care and rigor have not been put into them, I'd have no reason to believe in the integrity of the output. The general public should have the right to judge that, as well as to build on the code when it's useful and valuable.
Every publication involving data and technical analysis should release its data and code to a degree that makes validation possible, at least as thoroughly as in "Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff" (http://www.peri.umass.edu/236/hash/31e2ff374b6377b2ddec04dea...)