Exactly - here's an example from India. India has the India Meteorological Department (IMD) precipitation dataset. Even if your estimates are more accurate, no company will use your dataset to design or validate a civil engineering project; this comes down to the liability of relying on data from a "non-official" source. Right now you are stuck in the middle: it is not viable for non-US companies to use it, while US companies will mostly rely on the NOAA Atlas. If this is to become a public-facing product, the current pricing is too high, and you might have to develop an alternative business model. Maybe people would pay to check the floodplain zonation/xrain before buying a house, for example - but that's not a SaaS.
How flexible is your codebase when it comes to incorporating regional datasets? I think you will have to do regional merging.
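To make the question concrete, here is a minimal sketch of what I mean by regional merging: prefer an authoritative regional dataset (e.g. IMD over India) wherever it has coverage, and fall back to the global estimate elsewhere. All names here (`merge_precip`, the coverage lambdas, the stand-in values) are hypothetical, not from any actual codebase.

```python
# Illustrative regional-merge lookup: try regional sources in priority
# order, fall back to a global grid. Each source is a tuple of
# (name, covers(lat, lon) -> bool, lookup(lat, lon) -> value).

def merge_precip(lat, lon, regional_sources, global_lookup):
    """Return (value, source_name) from the first regional source
    covering the point, else from the global estimate."""
    for name, covers, lookup in regional_sources:
        if covers(lat, lon):
            return lookup(lat, lon), name
    return global_lookup(lat, lon), "global"

# Toy usage: a hypothetical "IMD" source covering a rough India bounding box,
# with made-up constant values standing in for real grid lookups.
sources = [
    ("IMD",
     lambda lat, lon: 6.0 <= lat <= 36.0 and 68.0 <= lon <= 98.0,
     lambda lat, lon: 1200.0),
]
value, src = merge_precip(19.0, 72.8, sources, lambda lat, lon: 900.0)
# Mumbai-ish point -> served by "IMD"; a point outside the box -> "global"
```

The design question is really whether your pipeline has a seam like `regional_sources` at all, or whether the global dataset is baked in end to end.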
What are your current costs of running the setup? Any possibility/plans of white-labeling the codebase?
When I was looking at purchasing a house in the Portland area, I wanted to know the sunlight per day over a year (the house was on a hill). There was a Swedish company that had an interesting service which would generate a sunlight report. I paid for it, about $20 if I recall correctly. The conversation here about people using this to determine flooding before purchasing a house seems similar.
Policy impediments to use are real! Your data gap-filler approach is interesting though.
Along this line… occasionally there is official but obviously-wrong data even from WMO-accredited providers whose automatic weather stations ('AWS') are busted. Perhaps your approach could provide a widely validated bounds check? The trouble is that this kind of undetected, obviously-wrong data is often also a symptom of 'we have no money to fix it'…
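The kind of bounds check I have in mind is trivially simple; the hard part is having an independent, trusted envelope to check against. A rough sketch, with made-up thresholds and a hypothetical `climatology_mm` value standing in for whatever your gridded estimates would supply:

```python
# Illustrative QC bounds check for daily AWS rainfall readings (mm):
# flag anything negative, or implausibly far above the local
# climatological daily maximum. The tolerance factor is arbitrary
# here, not from any real QC standard.

def flag_suspect(readings, climatology_mm, tolerance=3.0):
    """Return a bool per reading: True if it is negative or exceeds
    `tolerance` times the climatological daily maximum."""
    upper = tolerance * climatology_mm
    return [r < 0 or r > upper for r in readings]

flags = flag_suspect([0.0, 12.5, 999.9, -1.0], climatology_mm=80.0)
# Catches the classic busted-sensor values: the 999.9 spike and the -1.0
```

Even a crude check like this would catch the stuck-at-999.9 style failures, but as you say, flagging the station doesn't fund fixing it.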