In my analysis of Sentinel-3 images, I’m often working with inland water pixels that lie on the edges of lakes. This obviously causes issues with edge effects, whereby an edge pixel will contain some signal from the fresh water and some signal from the surrounding land.
To try to overcome this issue, I’ve gathered shape data that describes the exact outline of each lake. If I also knew the bounds of a pixel, I could combine the two datasets to determine the percentage of water coverage within that pixel. (In case I’m using the wrong terminology, by ‘bounds’ of a pixel I mean the lat/long of each of its corners.)
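To make concrete what I mean by combining the two datasets, here is a rough sketch of the calculation I have in mind: clip the lake outline against the pixel footprint and take the ratio of areas. The lake and pixel coordinates below are made-up placeholders, and I’m using a plain Sutherland–Hodgman clip just to illustrate the idea — in practice a GIS library would do this.

```python
def clip_polygon(subject, clip_poly):
    """Sutherland-Hodgman: clip `subject` (list of (x, y) vertices, CCW)
    against the convex polygon `clip_poly` (also CCW)."""
    def inside(p, a, b):
        # True if p lies on or left of the directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersection(s, e, a, b):
        # Intersection of segment s-e with the infinite line through a-b
        dcx, dcy = a[0]-b[0], a[1]-b[1]
        dpx, dpy = s[0]-e[0], s[1]-e[1]
        n1 = a[0]*b[1] - a[1]*b[0]
        n2 = s[0]*e[1] - s[1]*e[0]
        d = dcx*dpy - dcy*dpx
        return ((n1*dpx - n2*dcx)/d, (n1*dpy - n2*dcy)/d)
    output = subject
    for i in range(len(clip_poly)):
        a, b = clip_poly[i], clip_poly[(i+1) % len(clip_poly)]
        input_list, output = output, []
        if not input_list:
            break
        s = input_list[-1]
        for e in input_list:
            if inside(e, a, b):
                if not inside(s, a, b):
                    output.append(intersection(s, e, a, b))
                output.append(e)
            elif inside(s, a, b):
                output.append(intersection(s, e, a, b))
            s = e
    return output

def polygon_area(poly):
    # Shoelace formula
    n = len(poly)
    return abs(sum(poly[i][0]*poly[(i+1) % n][1]
                   - poly[(i+1) % n][0]*poly[i][1]
                   for i in range(n))) / 2

# Hypothetical lake outline and pixel footprint (lon, lat), CCW
lake = [(0.0, 0.0), (0.05, 0.0), (0.05, 0.05), (0.0, 0.05)]
pixel = [(0.04, 0.04), (0.06, 0.04), (0.06, 0.06), (0.04, 0.06)]

water_fraction = polygon_area(clip_polygon(lake, pixel)) / polygon_area(pixel)
print(water_fraction)  # 0.25 for this toy geometry
```

One caveat I’m aware of: areas computed directly in lat/long degrees are distorted, so for a real analysis the coordinates would need to be reprojected into an equal-area (or locally metric) CRS before taking the ratio.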
Has anyone tried finding the bounds of satellite pixels in this way, or dealt with edge effects by some other means? I did consider applying a buffer to the lake data mentioned above, in order to remove any pixel within a certain distance of the lake edge, but I suspect that approach would discard too many data points from my analysis.
Cheers for the help,
Edit: I’ve realised that I can use diagonally adjacent pixel positions to estimate the bounds of a pixel. Could someone confirm that the lat/long of a pixel in Sentinel-3 data corresponds to the centre of the pixel?
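For clarity, this is the sort of thing I mean: assuming the geolocation arrays give pixel centres, each interior corner can be approximated as the mean of the four surrounding centres (equivalently, the midpoint between a centre and its diagonal neighbour). The grid values below are invented just for illustration.

```python
import numpy as np

# Hypothetical 3x3 grid of pixel-centre coordinates, stand-ins for the
# lat/lon geolocation arrays read from a Sentinel-3 product
lat = np.array([[10.0, 10.0, 10.0],
                [10.1, 10.1, 10.1],
                [10.2, 10.2, 10.2]])
lon = np.array([[20.0, 20.1, 20.2],
                [20.0, 20.1, 20.2],
                [20.0, 20.1, 20.2]])

def interior_corners(centres):
    # Each interior corner is the average of the four centres around it,
    # so an (n, m) grid of centres yields an (n-1, m-1) grid of corners
    return 0.25 * (centres[:-1, :-1] + centres[:-1, 1:]
                   + centres[1:, :-1] + centres[1:, 1:])

corner_lat = interior_corners(lat)
corner_lon = interior_corners(lon)
print(corner_lat[0, 0], corner_lon[0, 0])  # 10.05 20.05
```

This only recovers the interior corners, of course — corners along the edge of the swath would need extrapolation — and it rests entirely on the centre-of-pixel assumption I’m asking about above.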