When evaluating the performance of DEMs generated from various satellite images, comparisons are usually made against USGS DTED or other high-accuracy DEMs. But how is the accuracy of the DEM used for comparison itself measured? In other words, how is the accuracy of the so-called ‘ground truth’ DEM, used as the benchmark, determined?
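For context, this is roughly what I understand such a comparison to look like: per-pixel vertical differences between the evaluated DEM and the reference, summarized as bias, RMSE, and a robust spread measure. This is only a minimal sketch assuming the two DEMs are already co-registered on the same grid; the arrays below are synthetic stand-ins for real rasters.

```python
import numpy as np

# Synthetic stand-ins for two co-registered DEMs on the same grid.
# In practice these would be read from aligned GeoTIFFs (e.g. with rasterio);
# they are generated here so the sketch is self-contained.
rng = np.random.default_rng(0)
reference = rng.uniform(100.0, 500.0, size=(50, 50))    # "ground truth" DEM (m)
test = reference + rng.normal(0.0, 2.0, size=(50, 50))  # evaluated DEM, ~2 m noise

diff = test - reference             # per-pixel vertical error (m)
bias = diff.mean()                  # systematic offset
rmse = np.sqrt((diff ** 2).mean())  # the usual headline accuracy number
# NMAD: robust spread estimate, less sensitive to outliers than RMSE
nmad = 1.4826 * np.median(np.abs(diff - np.median(diff)))

print(f"bias={bias:.2f} m, RMSE={rmse:.2f} m, NMAD={nmad:.2f} m")
```

My question is about what anchors the `reference` array itself, since every statistic above is only as trustworthy as that raster.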
A DEM is a raster dataset that stores an elevation value for every pixel. But how can accuracy be established at every point? If the reference were interpolated from spot heights measured by leveling, there would seem to be limitations, especially as the resolution increases. Is its accuracy instead measured indirectly, using LiDAR or the geoid?
In fact, once LiDAR comes into play, that raises the further question of how the accuracy of LiDAR itself is determined… I’m curious about where DEM accuracy ultimately comes from. Could you recommend some resources for me to read?