Please excuse my amateur question. I am trying to detect urban damage by fusing two datasets (Sentinel-1 and Sentinel-2). Are there any generalised post-processing steps I should follow for this?
There are many forms of change detection (or damage assessment) and many forms of image fusion, so there is no general answer to this.
Maybe you can tell us a bit more about your workflow or idea. For any damage detection you need information from before and after the incident. There is a change detection module for SAR data in SNAP, but it only takes one polarization as input.
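Just to illustrate the before/after idea: a common SAR change indicator is the log-ratio of the two acquisitions. Here is a minimal sketch in Python with numpy, using synthetic pre- and post-event intensity arrays (in practice these would be calibrated, speckle-filtered, terrain-corrected Sigma0 bands exported from SNAP); the 3 dB threshold and the array values are assumptions for the example only.

```python
import numpy as np

# Synthetic stand-ins for co-registered pre- and post-event Sigma0 intensity
# (real data: calibrated + speckle-filtered + terrain-corrected in SNAP)
rng = np.random.default_rng(0)
pre = rng.gamma(shape=4.0, scale=0.05, size=(100, 100))
post = pre.copy()
post[40:60, 40:60] *= 3.0  # simulated damage: backscatter change in one patch

# Log-ratio change indicator, expressed in dB
log_ratio = 10.0 * np.log10(post / pre)

# Flag pixels whose backscatter changed by more than 3 dB (threshold needs tuning)
change_mask = np.abs(log_ratio) > 3.0
print(change_mask.sum())  # number of flagged pixels
```

The log-ratio is popular for SAR because it treats increases and decreases of backscatter symmetrically and compresses the multiplicative speckle; the threshold is scene-dependent and usually chosen empirically or via a method like Otsu.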
Fusion of both datasets gives you a larger feature space, but this only makes sense when both image products are acquired at comparable times.
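As a sketch of what "larger feature space" means at the pixel level: once the S1 and S2 products are co-registered and resampled to a common grid, you can stack the bands into one feature array per pixel and feed that to a classifier. The band choice (VV, VH, red, NIR) and the constant values below are illustrative assumptions, not a prescribed recipe.

```python
import numpy as np

# Hypothetical co-registered layers on a common 50x50 grid:
# two S1 backscatter channels and two S2 reflectance bands
h, w = 50, 50
s1_vv = np.full((h, w), -8.0)    # dB
s1_vh = np.full((h, w), -14.0)   # dB
s2_red = np.full((h, w), 0.08)   # surface reflectance
s2_nir = np.full((h, w), 0.35)

# Feature-level fusion: one row per pixel, one column per band
features = np.stack([s1_vv, s1_vh, s2_red, s2_nir], axis=-1).reshape(-1, 4)
print(features.shape)  # (pixels, bands)
```

From here, each pixel is a 4-dimensional feature vector that a classifier (or a change-detection rule comparing pre/post feature vectors) can operate on.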
Have you seen this tutorial? Synergetic use of S1 (SAR) and S2 (optical) data and use of analysis tools