The radar data provided at level 1 would consist of parameters such as radar reflectivity, Doppler velocity and spectral width, and should conform to the suggested Cloudnet NetCDF convention. The parameters should be calibrated, but not corrected for attenuation. Artifacts due to instrumental effects should be removed, but data should not be removed for other problems that affect all radars, such as insect contamination and the unreliability of reflectivity in rain at 94 GHz. It would be the user's responsibility to remove rain events if they wish, using rain gauge observations at the site, since other aspects of data taken in rain (such as the velocity measurements) may well still be of interest to other users. In the case of insects, there is no completely reliable way to remove them, so it is better to leave them in the data too.
Ideally the calibration convention used would ensure that "in the absence of attenuation, a cloud at 273 K containing one million 100-micron droplets per cubic metre will have a reflectivity of 0 dBZ at all frequencies." This convention is described more fully in the documentation for the Galileo radar data on BADC. The convention used should be indicated in the comment attribute of the Z variable, as described in the NetCDF conventions page.
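The stated convention can be checked with a little arithmetic: in the Rayleigh regime the reflectivity factor is the sum of the sixth powers of the droplet diameters per unit volume, so one million 100-micron (0.1 mm) droplets per cubic metre give Z = 10^6 x (0.1 mm)^6 = 1 mm^6 m^-3, which is 0 dBZ. A minimal sketch of this check:

```python
import math

# Calibration-convention check: in the Rayleigh regime the reflectivity
# factor Z is the sum of D^6 over a unit volume, independent of frequency.
n_droplets = 1.0e6    # droplets per cubic metre
diameter_mm = 0.1     # 100 microns expressed in mm

z_linear = n_droplets * diameter_mm ** 6   # mm^6 m^-3
z_dbz = 10.0 * math.log10(z_linear)

print(z_dbz)   # 0 dBZ (to within floating-point rounding)
```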
The reflectivity reported should indicate the actual target reflectivity, i.e. the noise contribution should have been subtracted from the measured power. In the case of a simple "incoherent" (or "pulse-pair") processing system, the typical processing would involve:
Radar data often contain spurious signals that manifest as horizontal lines in the background noise field and, if not handled correctly, could be misinterpreted as cloud. To make things easier for the user, it is better if these effects are cleaned up before the data are provided to Cloudnet.
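One way such lines could be detected (an illustrative sketch only; the method, function names and threshold are assumptions, not part of the Cloudnet specification) is to flag range gates whose time-median power is persistently elevated relative to neighbouring gates, which is exactly how a spurious horizontal line appears:

```python
# Illustrative sketch: flag range gates whose time-median power stands
# well above that of adjacent gates, as a spurious "horizontal line"
# in the background noise field would.  The 3 dB threshold is an
# assumption chosen purely for the example.

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def flag_spurious_gates(power_db, threshold_db=3.0):
    """power_db[gate][ray] in dB; return set of suspect gate indices."""
    gate_medians = [median(row) for row in power_db]
    suspect = set()
    for g, m in enumerate(gate_medians):
        neighbours = [gate_medians[j]
                      for j in (g - 1, g + 1)
                      if 0 <= j < len(gate_medians)]
        if neighbours and m - median(neighbours) > threshold_db:
            suspect.add(g)
    return suspect

# A gate with a persistent +10 dB line amid -90 dB background noise:
noise = [[-90.0] * 5 for _ in range(4)]
noise[2] = [-80.0] * 5
print(flag_spurious_gates(noise))  # {2}
```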
In the case of a "coherent" processing system, which performs spectral analysis on the complex returned signal in order to achieve better sensitivity, the noise component is subtracted from every ray in real time, so unlike an incoherent system there is nothing to be gained in sensitivity by further averaging. However, in order to keep file sizes reasonable for most applications, it still makes sense to average the data to a similar temporal resolution. Therefore the processing of the raw data for such systems would consist only of items 1, 4, 5 and 6 in the list above.
So what is a suitable radar temporal resolution for Cloudnet? In the case of an incoherent processing system there is a trade-off between sensitivity and resolution: the more you average, the more sensitivity you get, because averaging more samples reduces the fluctuations in the background noise, allowing smaller meteorological signals to be identified. However, once the reflectivity begins to change significantly within the averaging period (scales larger than 500 m?), it is much less clear that anything is gained by averaging. For a coherent system there is nothing to be gained by further averaging of the data that come out of the data acquisition system (which typically has a resolution of the order of 1 s), because the noise has already been subtracted.

The principal aim of Cloudnet is to generate products to compare with models, and currently the highest-resolution model in the project is the Met Office mesoscale model with a gridbox size of 12 km. For a typical wind speed of 20 m/s, this is equivalent to a 10-minute radar dwell, although for comparison of subgrid-scale parameters, such as cloud fraction, at least 10 radar pixels are required in each model box, implying an averaging time of a minute or less. To get accurate cloud fraction, as well as cloud microphysical parameters, many algorithms will make use of the Vaisala CT75K lidar, which has a maximum temporal resolution of 30 seconds. This therefore seems like a sensible averaging time to use for the radar.

However, some felt at the Paris meeting that this was rather too long. I'm happy for people to use shorter averaging times (particularly if their system is coherent, as this won't imply reduced sensitivity); I'd just suggest not going below 5 seconds, to keep file sizes reasonable. The Chilbolton radars will use 10 or 30 seconds. ARM data use 10-second averaging.
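The dwell-time arithmetic above is simple enough to verify: a 12 km gridbox advected at 20 m/s takes 600 s (10 minutes) to cross the radar, and requiring at least 10 radar pixels per box implies an averaging time of 60 s or less.

```python
gridbox_m = 12_000.0   # Met Office mesoscale model gridbox size
wind_m_s = 20.0        # typical advection speed

dwell_s = gridbox_m / wind_m_s   # time for one gridbox to advect over the radar
max_avg_s = dwell_s / 10         # >= 10 radar pixels per box for cloud fraction

print(dwell_s, max_avg_s)   # 600.0 s (10 min) and 60.0 s
```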
To give the user maximum flexibility with incoherently processed data, it may be desirable to provide a Z_raw parameter. This would be the reflectivity at the same temporal resolution as Z, but without noise and artifact removal (although it should still be calibrated and probably range-corrected). The user can then do further averaging and their own noise subtraction/cloud masking if they wish.
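As a sketch of what a user might then do with Z_raw (the method, function name and 3 dB detection threshold here are illustrative assumptions, not part of the proposed convention): convert to linear units, subtract an estimated noise floor, and mask gates that do not clear the noise by a chosen signal-to-noise margin.

```python
import math

def noise_subtract(z_raw_dbz, noise_dbz, min_snr_db=3.0):
    """Subtract an estimated noise floor from a calibrated Z_raw value
    (both in dB) and mask gates too close to the noise.  Illustrative
    sketch only: the threshold and noise estimate are assumptions."""
    z_lin = 10.0 ** (z_raw_dbz / 10.0)
    n_lin = 10.0 ** (noise_dbz / 10.0)
    # Keep the gate only if the measured power clears the noise floor
    # by the required signal-to-noise margin (compared in linear units).
    if z_lin / n_lin < 10.0 ** (min_snr_db / 10.0):
        return None                     # below detection threshold
    return 10.0 * math.log10(z_lin - n_lin)

print(noise_subtract(-20.0, -40.0))   # ~ -20.04 dBZ after noise subtraction
print(noise_subtract(-39.0, -40.0))   # None (within 3 dB of the noise floor)
```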
Note: please be careful when averaging the various parameters:
- Reflectivity should be averaged in linear units.
- Velocity averaging should account for the fact that the velocity may well fold within the averaging time: convert each high-resolution velocity into an angle theta = pi * velocity / folding_velocity, average the cos(theta) and sin(theta) values separately, then recalculate the velocity from the averaged angle.
- The average linear depolarisation ratio should be the average cross-polar power divided by the average co-polar power (both calculated in linear units).
- The average spectral width should be the simple average of the individual spectral width values.
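The linear-units reflectivity average and the fold-aware velocity average described above can be sketched as follows (variable and function names are illustrative):

```python
import math

def average_reflectivity_dbz(z_values_dbz):
    """Average reflectivities in linear units, then convert back to dB."""
    mean_lin = sum(10.0 ** (z / 10.0) for z in z_values_dbz) / len(z_values_dbz)
    return 10.0 * math.log10(mean_lin)

def average_velocity(velocities, folding_velocity):
    """Fold-aware velocity average: map each velocity to an angle,
    average the unit vectors, and map the mean angle back to a velocity."""
    thetas = [math.pi * v / folding_velocity for v in velocities]
    mean_cos = sum(math.cos(t) for t in thetas) / len(thetas)
    mean_sin = sum(math.sin(t) for t in thetas) / len(thetas)
    return folding_velocity * math.atan2(mean_sin, mean_cos) / math.pi

# Two samples straddling the folding velocity of 5 m/s: a naive mean of
# 4.9 and -4.9 would give 0 m/s; the fold-aware mean is close to 5 m/s.
print(average_velocity([4.9, -4.9], 5.0))
# Averaging 0 and 10 dBZ in linear units gives ~7.4 dBZ, not 5 dBZ:
print(average_reflectivity_dbz([0.0, 10.0]))
```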