Sentinel-1 SNAP Processing - Step-by-step pixel coordinate transform (and inverse)

I am using the SNAP Python functions to do:
GPF.createProduct('Calibration', …
and then:
GPF.createProduct('Terrain-Correction', …
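For context, a minimal sketch of the processing chain I am running (the operator names are the standard SNAP operators; the input path and parameter values below are illustrative, not my actual settings):

```python
# Sketch of the snappy processing chain described above.
# Assumes a working snappy/SNAP installation; parameter values are illustrative.
from snappy import ProductIO, GPF, jpy

HashMap = jpy.get_type('java.util.HashMap')

slc = ProductIO.readProduct('S1A_IW_SLC__example.zip')  # illustrative path

cal_params = HashMap()
cal_params.put('outputSigmaBand', True)
calibrated = GPF.createProduct('Calibration', cal_params, slc)

tc_params = HashMap()
tc_params.put('demName', 'SRTM 3Sec')
tc_params.put('pixelSpacingInMeter', 10.0)
terrain_corrected = GPF.createProduct('Terrain-Correction', tc_params, calibrated)
```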

I can access the final calibrated, terrain-corrected product and run object detection algorithms on the layer. I want to be able to easily access both the calibrated, terrain-corrected magnitude image chip around a detection AND the original complex SLC image chip for the same location.

Because of the non-linear coordinate transformations involved in reprojecting the data to the ground, I cannot access the same actual location in the SLC image and the calibrated, terrain-corrected image simply by taking the (lat, lon) of the pixel location in the calibrated, terrain-corrected image. Example code would be:
detect_ll = geocoding.getGeoPos(PixelPos(cal_terrcorr_det_col, cal_terrcorr_det_row), None)
original_pixel_pos = initial_geocoding.getPixelPos(GeoPos(detect_ll.lat, detect_ll.lon), None)
Here the resulting original_pixel_pos.x and .y are significantly shifted in range from the position in the calibrated, terrain-corrected image.
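To make the round trip concrete, here is a self-contained toy model of the pixel → (lat, lon) → pixel chaining. The two affine "geocodings" below are hypothetical stand-ins for SNAP's GeoCoding objects (the real ones are non-linear), but they show why the recovered SLC pixel coordinates legitimately differ from the terrain-corrected ones: the two products live on different grids.

```python
# Toy stand-in for chaining two geocodings. 'tc_pixel_to_geo' mimics a
# north-up terrain-corrected grid; 'slc_geo_to_pixel' mimics a skewed
# slant-range SLC grid. Both mappings are hypothetical, for illustration only.

def tc_pixel_to_geo(col, row):
    # north-up grid: origin at (lat0, lon0), 1e-4 deg per pixel
    lat0, lon0 = 50.0, 8.0
    return lat0 - row * 1e-4, lon0 + col * 1e-4

def slc_geo_to_pixel(lat, lon):
    # inverse of an assumed skewed forward model:
    #   lat = lat0 - row * 1.5e-4
    #   lon = lon0 + col * 8e-5 + row * 2e-5
    lat0, lon0 = 50.0, 8.0
    dlat, dlon = lat - lat0, lon - lon0
    row = -dlat / 1.5e-4
    col = (dlon - row * 2e-5) / 8e-5
    return col, row

# track a detection from the terrain-corrected grid back into the SLC grid
det_col, det_row = 120.0, 80.0
lat, lon = tc_pixel_to_geo(det_col, det_row)
slc_col, slc_row = slc_geo_to_pixel(lat, lon)
print(slc_col, slc_row)  # differs substantially from (det_col, det_row)
```

In SNAP terms, `tc_pixel_to_geo` plays the role of `geocoding.getGeoPos(...)` on the terrain-corrected product and `slc_geo_to_pixel` the role of `initial_geocoding.getPixelPos(...)` on the SLC product; the shift between the two pixel positions is inherent to the change of geometry, not necessarily an error.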

I don't see functions that expose the step-by-step pixel-coordinate transformations. Is there a set of functions within SNAP's Python API to support this type of position tracking through processed layers of the same image?