How to Evaluate a DEM? (The origin of DEM accuracy)

When evaluating the performance of DEM data generated from satellite imagery, comparisons are typically made against USGS DTED or other high-accuracy DEMs. But how is the accuracy of the DEM used for comparison itself measured? In other words, how is the accuracy of the so-called ‘ground truth’ DEM, used as the benchmark, determined?

A DEM is a raster data type that stores an elevation value for each pixel. But how can accuracy be established for every point? It seems there would be limitations if the reference were interpolated from spot heights measured by leveling, especially as the resolution increases. Is it measured indirectly using LiDAR, or via the concept of the geoid?

In fact, once LiDAR comes into play, it raises the question of how the accuracy of LiDAR itself is determined… I’m curious about the origin of DEM accuracy overall. Could you recommend some resources for me to look at?

There’s quite a bit of literature about it:
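In practice, DEM accuracy is usually reported as vertical error statistics at independent checkpoints (e.g., GNSS or leveling surveys) rather than at every pixel. A minimal sketch of the common metrics — mean error (bias), RMSE, and the outlier-robust NMAD — using hypothetical checkpoint values:

```python
import numpy as np

# Hypothetical checkpoint elevations (e.g., from GNSS or leveling surveys)
# and the DEM elevations sampled at the same locations.
truth = np.array([102.3, 98.7, 110.5, 95.2, 101.8])
dem   = np.array([103.1, 98.2, 111.9, 94.0, 102.3])

errors = dem - truth

mean_error = errors.mean()             # systematic bias
rmse = np.sqrt(np.mean(errors ** 2))   # overall vertical accuracy
# NMAD: normalized median absolute deviation, a robust spread estimate
# that is less sensitive to outliers than RMSE
nmad = 1.4826 * np.median(np.abs(errors - np.median(errors)))

print(f"bias: {mean_error:.2f} m, RMSE: {rmse:.2f} m, NMAD: {nmad:.2f} m")
```

The same statistics are what the literature reports when a DEM is validated against a higher-order reference, which is why the chain ultimately bottoms out in survey-grade checkpoints.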


I assumed there was a commonly used method for measuring the accuracy of DEM data in areas where no reference DEM is available, but it appears this is still a field with many unresolved issues.
I’ll look into it with the help of some research papers!

Thank you

If you are working with Sentinel-1, here’s a great review of the process by Andreas Braun: