From my experience, the first and most difficult step is getting snappy installed in your Python environment. If you work with multiple Python environments, I suggest you set up one specifically for SNAP. Until they release the new SNAP 10, you will have to set up SNAP 9 with an old version of Python.
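Once snappy-conf has linked SNAP 9 to that environment, a quick sanity check along these lines will tell you whether the install actually worked (the filename is just a placeholder):

```python
# Quick check that snappy is importable and can open a product.
# Assumes snappy has already been configured against this Python interpreter.
from snappy import ProductIO

product = ProductIO.readProduct('S1A_IW_SLC__1SDV_example.zip')  # placeholder filename
print(product.getName(), product.getSceneRasterWidth(), product.getSceneRasterHeight())
product.dispose()
```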
I am week one, day one with SNAP, but I have average Python experience, so I will research snappy and see if I can pull this off.
I was hoping it would be something like a Python interface in SNAP where you could just input code to batch process.
I did read something (not confirmed as genuine) about using the Graph Builder in SNAP and loading the saved graph file into a Python script.
No idea if this is the best way to do this.
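From what I read, the idea seems to be something along these lines (completely unverified on my end; the gpt path, folder names, and -P parameter names are guesses that would have to match the ${input}/${output} variables in the graph XML):

```python
# Rough sketch: run a saved Graph Builder XML over many scenes by calling SNAP's
# gpt command-line tool from Python. Paths and parameter names are placeholders.
import subprocess
from pathlib import Path

GPT = '/opt/snap/bin/gpt'        # assumed SNAP install location
GRAPH = 'my_saved_graph.xml'     # graph exported from the SNAP Graph Builder
OUT_DIR = Path('processed')
OUT_DIR.mkdir(exist_ok=True)

for scene in sorted(Path('slc_scenes').glob('*.zip')):
    target = OUT_DIR / (scene.stem + '.dim')
    subprocess.run([GPT, GRAPH, f'-Pinput={scene}', f'-Poutput={target}'], check=True)
```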
Essentially, I have 40 Sentinel-1 SLC files and I want to use them for multiple analyses (polarimetry, interferometry), so the processing steps will be slightly different, as I understand it.
Is SLC best for interferometry? Would SLC also work for polarimetric analysis, or should that be GRD (Sentinel-1)?
There is a Graph Builder in the SNAP GUI that may fit your needs. You build the graph (or input an XML that describes your graph) and use it for your batch processing.
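If you'd rather stay inside Python, snappy can also chain the same kind of operators directly through GPF instead of an XML graph. A rough sketch (the operator names are standard SNAP operators, but the parameters and filenames here are just placeholders):

```python
# Rough sketch: chain SNAP operators via snappy's GPF instead of an XML graph.
# Operator names are standard SNAP operators; parameters and filenames are placeholders.
import snappy
from snappy import GPF, ProductIO

HashMap = snappy.jpy.get_type('java.util.HashMap')  # Java map holding operator parameters

src = ProductIO.readProduct('S1A_IW_SLC__1SDV_example.zip')  # placeholder input

orbited = GPF.createProduct('Apply-Orbit-File', HashMap(), src)

cal_params = HashMap()
cal_params.put('outputSigmaBand', True)
calibrated = GPF.createProduct('Calibration', cal_params, orbited)

ProductIO.writeProduct(calibrated, 'calibrated_scene', 'BEAM-DIMAP')
```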
Snappy was built as something like a wrapper around SNAP's Java classes, so if you've never seen Java code, it looks a little janky in a Python setting. I think this will change once they release SNAP 10? Not sure. Note that snappy is also the name of an existing compression library, which just adds to the confusion.
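To give a flavour of what I mean, this is the kind of thing you end up writing (just an illustration; java.io.File is an ordinary Java class reached through jpy):

```python
# Why snappy reads like Java: plain Java classes are pulled into Python via jpy
# and you call their Java methods directly.
import snappy

File = snappy.jpy.get_type('java.io.File')   # an ordinary Java class
f = File('scene.dim')
print(f.getAbsolutePath())                   # Java-style getter, not a Python attribute
```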
I recommend waiting until they release this: GitHub - senbox-org/esa-snappy.
I think the rename to esa-snappy and the other changes will help the Python interface become more Pythonic and more accessible to scientists. I believe they said they'll release it with SNAP 10.