Interesting data, thanks for sharing. It’s nice to see that the few spray heads you have are exactly on the manufacturer’s spec of 1.5 in/hr, which is the default used in the simulator. For rotors, the default is 0.75 in/hr; your values vary quite a bit but are all well below that. There are so many variables at play here that I always try to keep in mind that the precision of the calculated results is not the same as accuracy – everything is just a ballpark number and needs to be looked at skeptically. Also, the underlying research was done for crop irrigation, and managing acres of soybeans or a baseball field is a lot different than a home landscape. In my case, I have a few areas in beds that get little or no water because the spray heads are blocked by shrubs. No amount of calculated watering will fix this. I could add or relocate sprinkler heads, but it’s easier just to hand water these areas.
There are several numbers worth measuring for your zones so that the measured values can be used instead of the defaults. The first is the precipitation rate, as you’ve done. You might also want to calculate the distribution uniformity – DU(LQ). It’s described in the Landscape Water Management Training Manual (in the References directory). If you have the individual catch can values from your tests, you might not even need to retest. Basically, the uniformity is the ratio of the average water volume in the lower quarter (hence LQ) of the catch cans to the average of all the cans. In the simulation it’s the field called “Efficiency”. Here’s a web site that calculates efficiency from catch can measurements:
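If you’d rather compute it yourself, the DU(LQ) calculation is simple enough to script. Here’s a minimal sketch in Python (the function name and sample numbers are just illustrative, not from the manual):

```python
def du_lq(catches):
    """Distribution uniformity DU(LQ): the mean of the lowest
    quarter of catch-can readings divided by the mean of all
    readings. Values closer to 1.0 mean more uniform coverage."""
    vals = sorted(catches)
    n = max(1, len(vals) // 4)  # size of the lower quarter
    low_quarter_mean = sum(vals[:n]) / n
    overall_mean = sum(vals) / len(vals)
    return low_quarter_mean / overall_mean

# Example: eight catch-can readings in mL (made-up numbers)
print(round(du_lq([20, 22, 25, 25, 26, 28, 30, 32]), 2))  # → 0.81
```

A DU(LQ) like 0.81 would then go into the simulator’s “Efficiency” field for that zone.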
Another thing that’s easy to measure is the maximum watering cycle time. I found that in several of my zones, runoff starts in 5 or 6 minutes, which is less than the calculated value (which is based on soil type, slope, and sprinkler type). So I set MaxCycleTime to the measured value instead.
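The idea behind capping the cycle time is cycle-and-soak: if a zone needs more runtime than it can absorb before runoff, you split the runtime into shorter cycles with soak periods in between. Here’s a rough sketch of that arithmetic – I’m not claiming this is how the simulator implements it internally, just illustrating the logic:

```python
import math

def split_into_cycles(total_minutes, max_cycle_minutes):
    """Split a required runtime into equal cycles no longer than
    the measured runoff limit (cycle-and-soak)."""
    cycles = math.ceil(total_minutes / max_cycle_minutes)
    return cycles, total_minutes / cycles

# e.g. a zone needing 18 minutes total, with runoff starting at 5 minutes:
print(split_into_cycles(18, 5))  # → (4, 4.5): four 4.5-minute cycles
```

So a measured 5-minute runoff limit turns one long run into several short ones, with time for the water to soak in between.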