It is easy to underappreciate the loss of accuracy that occurs with this type of change. We tend to think there's an adjustment screw you can simply turn to go from 0 - 10"W.C. to 0 - 500"W.C. (or the equivalent in software). In fact, the working parts of the instrument have a particular accuracy; they will respond to a change of (say) x"W.C. If your range is 0-10", the likely error can be thought of as x/10; if it's 0-500", the error would be x/500.
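To put some numbers on the arithmetic above, here is a small sketch. The 0.05"W.C. figure is a made-up example value for x (the smallest change the working parts resolve), not a real transmitter spec; the point is that the same absolute error reads very differently when quoted as a percent of span.

```python
def percent_of_span_error(absolute_error_inwc, span_inwc):
    """Express a fixed absolute error (in "W.C.) as a percent of the calibrated span."""
    return 100.0 * absolute_error_inwc / span_inwc

# Hypothetical x: smallest differential the instrument reliably responds to.
absolute_error = 0.05  # "W.C.

for span in (10.0, 500.0):
    pct = percent_of_span_error(absolute_error, span)
    print(f'span 0-{span:g}"W.C.: error = +/-{absolute_error}"W.C. '
          f'= {pct:.3f}% of span')
```

The absolute error is identical in both cases; only the percent-of-span figure changes, which is why the percentage alone can mislead.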
Actually, things are more complicated than that. I've seen vendors' literature that states incredible accuracy for their instruments. In fact, a transmitter on a test bench under ideal conditions does have a very high level of accuracy. Field instruments set and then neglected for long periods don't do nearly as well. There are also many potential sources of unexpected error. If you're measuring a vapor flowrate using an orifice, for example, a small slug of condensate can form in your instrument's impulse line. Capillary attractive forces could easily hold it in place, deadening the transmitter's response. With a large differential this might go unnoticed, but with a small differential it could cause an appreciable error.
As you might infer, I believe it is much better to think of instrument errors as +/- some value and NOT as a % of span.
For what it's worth,
Doug