The content of this document has no claim to be correct and comes with absolutely no warranty.
A large part of this document is a summary of the discussions and presented workflows from MAINSAR and the step forum. Thanks to all members and developers contributing ideas, solving problems and helping to make this workflow as smooth as possible.
Code presented in this summary is written to match the paths on a certain machine; you have to adapt the path information to your settings. For instance, the user on my machine is thho, which will differ from yours.
The guidance given in this summary is technical. It is absolutely necessary to know the basics of the theoretical background of radar remote sensing and InSAR. Here are some first introductions; you are welcome to send me literature to complete this list:
To give a full introduction even to users who have not worked with the SNAP Toolbox etc., the document starts with installing a Linux based OS and the software and configurations you need during the workflow. If you are already familiar with all the software needed, skip these sections and move on to the processing part you are interested in.
It is strongly recommended to use Linux as the OS; Cygwin under Windows was reported to end with errors.
All software packages, OS versions and variations to install and configure them, must be seen as suggestions that worked for me in January 2018.
When I started using Linux, I tried Kubuntu. It works fine for me, so I never changed my running system. There is no other reason for using Kubuntu; you might prefer another Linux distribution.
To prepare and install Kubuntu I assume you have access to a Windows computer with internet and a USB device with at least 4GB of storage. If you do not have a second machine running Kubuntu, I suggest using an external SSD (~500GB) via USB 3.0 and installing Kubuntu there. If you want to use Kubuntu, boot from this device after setup. Thus, you leave your Windows installation as it is and all Linux stuff is separated from your Windows installation.
Because I have a GIS background I am used to working with Python 2.7. I have not tested 3.x, but I assume my approach will work for it too. To install Python and spyder I use Anaconda.
Here is a short summary of the steps I followed for the installation:
Go to https://www.anaconda.com/download/#linux and download Anaconda2.
#in terminal
#maybe you have to change the version number
bash ~/Downloads/Anaconda2-5.1.0-Linux-x86_64.sh
#agree to the license terms
#choose an installation folder
#add anaconda installation to PATH environment
Go to http://step.esa.int/main/download/ and download Sentinel Toolboxes Unix 64-bit.
#in terminal
#navigate to the Download directory
cd Downloads
#do not use sudo for installation
chmod +x esa-snap_sentinel_unix_6_0.sh
./esa-snap_sentinel_unix_6_0.sh
Follow the instructions and do not configure python during the installation. You could do that here, but we will do it manually in the next step. Run SNAP and update the program (lower right corner, green check mark symbol). If you get an error message telling you the World View (WorldWind) viewer cannot be launched, you probably have not installed your graphics drivers. Go to the application launcher in Kubuntu, search for drivers, open the Driver Manager and install the needed drivers for your system.
Close SNAP.
In 5.x I faced some trouble doing this; since the 6.0 release it works fine again. Before starting the configuration, install jpy, a Java-Python bridge, since SNAP is written in Java.
sudo apt-get update
sudo apt-get install python-jpy
I configured snappy by following this guide, which tells you to do:
#in terminal
cd <snap-install-dir>/bin
./snappy-conf /home/<user>/anaconda2/bin/python
If it works, go to /home/<user>/.snap/snap-python/ and copy the snappy folder to the site-packages folder of your python installation /home/<user>/anaconda2/lib/python2.7/site-packages/
cp -r /home/thho/.snap/snap-python/snappy /home/thho/anaconda2/lib/python2.7/site-packages
To test if all is right, call spyder:
spyder
and run this python script; in spyder, use F9 to run the selected lines or the line your cursor is in.
from snappy import ProductIO
p = ProductIO.readProduct('/home/thho/.snap/snap-python/snappy/testdata/MER_FRS_L1B_SUBSET.dim')
list(p.getBandNames())
your output should look like this:
[‘radiance_1’, ‘radiance_2’, ‘radiance_3’, ‘radiance_4’, ‘radiance_5’, ‘radiance_6’, ‘radiance_7’, ‘radiance_8’, ‘radiance_9’, ‘radiance_10’, ‘radiance_11’, ‘radiance_12’, ‘radiance_13’, ‘radiance_14’, ‘radiance_15’, ‘l1_flags’, ‘detector_index’]
In case the configuration returned an error, I followed the hints found in the STEP forum, posted by marpet on the 19th of September in this thread.
Copy the snappy folder /home/<user>/.snap/snap-python/snappy/ to the site-packages folder of your python installation /home/<user>/anaconda2/lib/python2.7/site-packages/.
cp -r /home/thho/.snap/snap-python/snappy /home/thho/anaconda2/lib/python2.7/site-packages
The snappy folder you have just copied was probably created when you tried to configure snappy and the configuration ended with an error; therefore the snappy.ini file in the folder is empty by now. Enter the copied folder and create or overwrite a file named snappy.ini with a text editor, with this content:
[DEFAULT]
snap_home = /home/<user>/snap
java_max_mem: 21G
# the java_max_mem value should be 70-80% of your overall RAM
# snap_start_engine: False
# java_class_path: ./target/classes
# java_library_path: ./lib
# java_options: -Djava.awt.headless=false
# debug: False
To test if all is right, open spyder:
#in terminal
spyder
and run this script within spyder
from snappy import ProductIO
p = ProductIO.readProduct('/home/thho/.snap/snap-python/snappy/testdata/MER_FRS_L1B_SUBSET.dim')
list(p.getBandNames())
your output should look like this:
[‘radiance_1’, ‘radiance_2’, ‘radiance_3’, ‘radiance_4’, ‘radiance_5’, ‘radiance_6’, ‘radiance_7’, ‘radiance_8’, ‘radiance_9’, ‘radiance_10’, ‘radiance_11’, ‘radiance_12’, ‘radiance_13’, ‘radiance_14’, ‘radiance_15’, ‘l1_flags’, ‘detector_index’]
sentinelsat is a python module which gives you convenient access to ESA’s Copernicus Open Access Hub to download remote sensing data. Have a look here. You need to have an account, which is free!
sudo apt-get install python-pip
pip install sentinelsat
pygeoj is a python module to read, write and work with the GeoJSON format. Have a look here.
pip install pygeoj
triangle is a software package generating exact Delaunay triangulations, constrained Delaunay triangulations, conforming Delaunay triangulations, Voronoi diagrams, and high-quality triangular meshes. It is used in the StaMPS workflow in step 4, PS weeding. Have a look at this page for a full description.
sudo apt-get update
sudo apt-get install triangle-bin
Matlab is the only non-free software in the whole workflow. Due to the different licensing options and the good documentation on how to install Matlab, this is skipped in this summary.
snaphu is a software package used for two-dimensional phase unwrapping during the StaMPS processing in step 6, phase unwrapping. Have a look at this page for a full description.
sudo apt-get update
sudo apt-get install snaphu
csh is an interpreter for the C shell, which is needed to run scripts within the StaMPS installation.
sudo apt-get install csh
Go to https://homepages.see.leeds.ac.uk/~earahoo/stamps/, where you can find the handbook and the download link. The next steps can be found in the handbook in chapter 2, Installation:
Download the .tar.gz from the StaMPS homepage
mv /home/thho/Downloads/StaMPS_v3.3b1.tar.gz /home/thho/
tar -xvf StaMPS_v3.3b1.tar.gz
cd StaMPS_v3.3b1/src
make
make install
cd /home/thho
rm StaMPS_v3.3b1.tar.gz
After the installation is complete, the StaMPS_CONFIG.bash file must be prepared to configure StaMPS on your machine. Be sure Matlab, snaphu, triangle and csh are installed. I assume your installations are at these locations:
whereis matlab snaphu triangle csh
## matlab: /usr/local/bin/matlab
## snaphu: /usr/bin/snaphu /usr/share/man/man1/snaphu.1.gz
## triangle: /usr/bin/triangle
## csh: /bin/csh /usr/share/man/man1/csh.1.gz
The task of StaMPS_CONFIG.bash is to extend your PATH variable, so that your machine finds all these applications and some additional directories which are used in StaMPS. Open StaMPS_CONFIG.bash with scite or some other text editor. You will notice that the configuration is prepared to point to many more applications and directories. Since the preprocessing will be performed in SNAP, DORIS, for instance, will never be used in our setting, so we can comment out large parts of the script. If you followed the installation guide in this summary, you can use the script below, adapting the user thho in each path to your user name. In case your installation of snaphu or one of the other applications is not located in /usr/local/bin/ or /usr/bin/, you can set the path to the folder containing the application binary, using one of the prepared rows which are commented out in my version. Do not miss the last line, where the unneeded variables are excluded from the final export to PATH. Notice that /usr/local/bin/ and /usr/bin/ are already part of PATH, hence we do not have to point to special folders containing snaphu or triangle, because they are installed in the default paths mentioned.
export STAMPS="/home/thho/StaMPS_v3.3b1"
#export SAR="/home/thho/ROI_PAC_3_0"
#export GETORB_BIN="/home/thho/getorb/bin"
#export SAR_ODR_DIR="/home/thho/SAR_FILES/ODR"
#export SAR_PRC_DIR="/home/thho/SAR_FILES/PRC"
#export VOR_DIR="/home/thho/SAR_FILES/VOR"
#export INS_DIR="/home/thho/SAR_FILES/INS"
#export DORIS_BIN="/home/thho/doris_v4.02/bin"
#export TRIANGLE_BIN="/home/thho/triangle/bin"
#export SNAPHU_BIN="/home/thho/snaphu-v1.4.2/bin"
#export ROI_PAC="$SAR/ROI_PAC"
#####################################
# ROI_PAC VERSION 3
#####################################
#export INT_BIN="$ROI_PAC/INT_BIN"
#export INT_SCR="$ROI_PAC/INT_SCR"
#####################################
#####################################
# ROI_PAC VERSION 2.3 and before
#####################################
#set MACH=`uname -s`
#if ($MACH == "HP-UX") then
# export ARCHC=HP
#else if ($MACH == "IRIX") then
# export ARCHC=SGI
#else if ($MACH == "SunOS") then
# export ARCHC=SUN
#else if ($MACH == "Linux") then
# export ARCHC=LIN
#else if ($MACH == "Darwin") then
# export ARCHC=MAC
#fi
#export INT_LIB="$ROI_PAC/LIB/$ARCHC"
#export INT_BIN="$ROI_PAC/BIN/$ARCHC"
#export FFTW_LIB="$SAR/FFTW/$ARCHC""_fftw_lib"
#####################################
#####################################
# shouldn't need to change below here
#####################################
#export MY_BIN="$INT_BIN"
export MATLABPATH=$STAMPS/matlab:`echo $MATLABPATH`
#export DORIS_SCR="$STAMPS/DORIS_SCR"
# Needed for ROI_PAC (a bit different to standard)
### use points not commas for decimals, and give dates in US english
export LC_NUMERIC="en_US.UTF-8"
export LC_TIME="en_US.UTF-8"
#export MY_SAR="$SAR"
#export OUR_SCR="$MY_SAR/OUR_SCR"
#export MY_SCR="$STAMPS/ROI_PAC_SCR"
export SAR_TAPE="/dev/rmt/0mn"
#export PATH=${PATH}:$STAMPS/bin:$MY_SCR:$INT_BIN:$INT_SCR:$OUR_SCR:$DORIS_SCR:$GETORB_BIN:$DORIS_BIN:$TRIANGLE_BIN:$SNAPHU_BIN
export PATH=${PATH}:$STAMPS/bin:$MATLABPATH
Open the terminal and type:
source /home/<user>/StaMPS_v3.3b1/StaMPS_CONFIG.bash
To check if something happened, type:
printenv PATH
## /home/thho/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/thho/StaMPS_v3.3b1/bin:/home/thho/StaMPS_v3.3b1/matlab:
The PATH variable now contains the information where to look for the StaMPS shell scripts and binaries (../StaMPS_v3.3b1/bin/) and where Matlab can search for the .m scripts used by StaMPS, which are stored in the /StaMPS_v3.3b1/matlab/ directory.
Notice that you have to call
source /home/<user>/StaMPS_v3.3b1/StaMPS_CONFIG.bash
every time you want to work with StaMPS, see StaMPS PS analysis
Even though it is not necessary in all cases, some bugs occurred using the StaMPS scripts to process interferometric stacks preprocessed by SNAP. In order to overcome these bugs, FeiLiu changed two scripts, which we can now include in the StaMPS installation.
Notice
I am not totally sure whether the changed scripts are necessary after the SNAP 6.0 release, because they handle some problems which were solved with the newest release. Because I do not have a dataset produced with older SNAP versions to compare to, someone who has such “old” products could try a workflow without the changed scripts and tell if there is a difference at all.
Download mt_prep_gamma_snap(changed) and rename it to mt_prep_gamma_snap. After that, move it to the bin folder of your StaMPS installation.
#in terminal
mv '/home/thho/Downloads/mt_prep_gamma_snap(changed)' /home/thho/Downloads/mt_prep_gamma_snap
mv /home/thho/Downloads/mt_prep_gamma_snap /home/thho/StaMPS_v3.3b1/bin/
Because the script was not installed during the StaMPS setup, we have to set the execution flag manually to be able to call it from the terminal.
#in terminal
chmod u+x /home/thho/StaMPS_v3.3b1/bin/mt_prep_gamma_snap
The second changed script is ps_load_initial_gamma.m. A script with this name is already in the matlab folder of the StaMPS installation. In order to replace it with the downloaded script, I recommend saving the original script.
#in terminal
mkdir /home/thho/StaMPS_v3.3b1/matlab/exclude/
mv /home/thho/StaMPS_v3.3b1/matlab/ps_load_initial_gamma.m /home/thho/StaMPS_v3.3b1/matlab/exclude/
#rename and move changed matlab script to StaMPS matlab folder
mv '/home/thho/Downloads/ps_load_initial_gamma(changed).m' /home/thho/Downloads/ps_load_initial_gamma.m
mv /home/thho/Downloads/ps_load_initial_gamma.m /home/thho/StaMPS_v3.3b1/matlab/
Downloading a bulk of Sentinel images, either S1 or S2, is not very smooth if you use the GUI of the Open Access Hub. But if you have an account, you can use the sentinelsat module from your python interpreter.
#in terminal
spyder
#within spyder
from sentinelsat import SentinelAPI, read_geojson, geojson_to_wkt
api = SentinelAPI('username', 'password')
footprint = geojson_to_wkt(read_geojson('/path/to/roi.geojson'))
products = api.query(footprint,
                     platformname='Sentinel-1',
                     producttype='SLC',
                     orbitdirection='DESCENDING',
                     beginposition='[2017-06-01T00:00:00.000Z TO 2017-07-01T00:00:00.000Z]',
                     sensoroperationalmode='IW',
                     polarisationmode='VV')
api.download_all(products, directory_path='/path/to/directory')
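Before launching a big download, it can be worth inspecting what the query matched. The returned products object behaves like a dictionary keyed by product id; a minimal sketch (the 'title' field is part of the metadata returned by the hub):
#inspect the query result before downloading
print(len(products))
for key in products:
    print(products[key]['title'])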
To see all options you can use to communicate with the server have a look at the sentinelsat documentation and the original API documentation of the data hub.
The whole workflow is a summary and combination of the StaMPS manual and discussions found mainly in two threads from the google group MAINSAR and the step forum. The aim of the summary presented here is to document the development of this workflow and to disentangle some steps and workarounds, which were developed over time but in some cases are no longer necessary due to the developments of the SNAP toolbox 6.0.
The images used for PS analysis have to be IW SLC products with VV or HH polarization. 20+ images are recommended; to build up a first test setting, ~8 images are sufficient to check if your workflow is stable. To reduce computing time in this test setting, choose a ROI which lies within one burst if possible.
The first step is to read the images as .zip files as you get them from the Copernicus Data Hub. Each .zip file is about 4.5GB in size. To reduce the amount of data to be processed and to process only one subswath, the TOPSAR-Split operator is used. To get this done you have to know in which subswath (IW 1-3; 3 subswaths make one S1 SLC scene) your ROI is located. Load one image in SNAP using the GUI and visually check which IW you have to choose using the World View tool. Later the .geojson which defines your ROI (region of interest) is passed to the operator, which then selects the right bursts (9 bursts make one subswath) from the selected subswath. This approach decreases the data to be processed significantly! For further information about image acquisition modes see De Zan and Guarnieri (2006).
Sentinel-1 IW SLC product made of 3 subswaths, each made of 9 bursts. ESA (n.d.)
After the TOPSAR-Split, the precise orbit file is applied to subtract the orbit phase and to increase the coregistration accuracy. The files are downloaded automatically if available.
#in terminal
spyder
In the python code adapt inpath_dir (directory where your downloaded images are stored as .zip files), outpath (directory where the split images with precise orbit information will be stored), aoi_dat (.geojson of the ROI) and IWnum (number of the subswath the ROI lies within).
#1 Read data
print('Reading, TOPSAR-Split and Apply-Orbit-File...')
#get filenames for each scene
inpath_dir = '/path/to/directory/containing/IWSLC_products/'
import os
filenames = os.listdir(inpath_dir)
import sys
import snappy
from snappy import ProductIO
from snappy import GPF
from snappy import HashMap
#Get snappy Operators
GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
inpath = ''
outpath = '/path/to/directory/'
out_form = '.dim'
aoi_dat = '/path/to/ROI.geojson'
from sentinelsat import read_geojson, geojson_to_wkt
aoi = geojson_to_wkt(read_geojson(aoi_dat))
#put parameters for TOPSAR-Split operator
#https://github.com/senbox-org/s1tbx/blob/master/s1tbx-op-sentinel1/src/main/java/org/esa/s1tbx/sentinel1/gpf/TOPSARSplitOp.java
#number of IW where the ROI is within 'IW1', 'IW2' or 'IW3'
IWnum = 'IW2'
parameters_split = HashMap()
parameters_split.put('subswath', IWnum)
parameters_split.put('selectedPolarisations', 'VV')
parameters_split.put('wktAoi', aoi)
#put parameters for Apply-Orbit-File
parameters_ao = HashMap()
parameters_ao.put('orbitType', "Sentinel Precise (Auto Download)")
parameters_ao.put('polyDegree', 3)
parameters_ao.put('continueOnFail', False)
#prepare object to store split_ao_results for Back-Geocoding operator
prodset = []
for i in range(len(filenames)):
    inpath_r = inpath_dir + filenames[i]
    outpath_r = outpath + filenames[i][0:25] + '_split_oa' + out_form
    img = ProductIO.readProduct(inpath_r)
    ####################
    ####TOPSAR-Split####
    ####################
    print('TOPSAR-Split for ' + filenames[i][0:25])
    img_split = GPF.createProduct('TOPSAR-Split', parameters_split, img)
    print('TOPSAR-Split for ' + filenames[i][0:25] + ' done!')
    #deleting input data
    del img
    ########################
    ####Apply-Orbit-File####
    ########################
    print('Apply-Orbit-File for ' + filenames[i][0:25])
    #execute Apply-Orbit-File operator
    img_split_ao = GPF.createProduct('Apply-Orbit-File', parameters_ao, img_split)
    ProductIO.writeProduct(img_split_ao, outpath_r, 'BEAM-DIMAP')
    del img_split
    print('Apply-Orbit-File for ' + filenames[i][0:25] + ' done!')
    #store split_ao result for the Back-Geocoding operator
    prodset.append(img_split_ao)
    del img_split_ao
The products produced in the last step have to be stacked. In order to do so, one product has to be the master of this stack. Finding the optimal master is done in python as well.
#find Optimal Master Product
InSARStackOverview = snappy.jpy.get_type('org.esa.s1tbx.insar.gpf.InSARStackOverview')
opt_master = InSARStackOverview.findOptimalMasterProduct(prodset)
#get PRODUCT string for optimal Master Product
opt_master = opt_master.getMetadataRoot().getElement('Abstracted_Metadata').getAttribute('PRODUCT').getData()
ProductData = snappy.jpy.get_type('org.esa.snap.core.datamodel.ProductData')
master = ProductData.getElemString(opt_master)
print master
Note the name printed in the console and close spyder.
To coregister the SLC images, the Back-Geocoding operator can be executed using snappy, but python is rather slow doing this step; using the GUI is much faster. Go to your output directory and open all split, orbit-corrected products (.dim) in SNAP. Go to Radar ⇨ Coregistration ⇨ S1 TOPS Coregistration ⇨ S-1 Back Geocoding. In ProductSet-Reader choose Add Opened to add all loaded products. Select the product which should become the master and use the ⤒ to make it the first product of the set. You can also sort the images by their date with these arrows, but be sure the first image remains the optimal master image, regardless of its date. Set the parameters in Back-Geocoding and an output path in Write and hit Run; the process takes some time.
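For completeness, here is a minimal snappy sketch of the same step for users who prefer scripting despite the speed penalty. It assumes prodset from the script above was reordered so that the optimal master is its first element, and it only sets the DEM, leaving the other parameters of the operator at their defaults:
#minimal Back-Geocoding sketch via snappy (slower than the GUI)
#assumes the objects from the split/orbit script above
#(HashMap, GPF, ProductIO, prodset, outpath, out_form)
parameters_bgc = HashMap()
parameters_bgc.put('demName', 'SRTM 3Sec')
stack = GPF.createProduct('Back-Geocoding', parameters_bgc, prodset)
ProductIO.writeProduct(stack, outpath + 'stack_bgc' + out_form, 'BEAM-DIMAP')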
It is possible that the Back-Geocoding operator returns an error like org.jblas.NativeBlas.dgemm(CCIIID[DII[DIID[DII)V). If that is the case, the libgfortran3 package must be installed on your machine. This error occurs mainly on freshly installed Kubuntu systems which have not been used much and on which no packages beside the ones mentioned in the Installation section are installed, hence the package is missing.
#in terminal
sudo apt-get install libgfortran3
The result is a stack; the first three bands are the bands of the master product: i, q and Intensity. The edges of the Intensity band look like this:
Edge of a burst of a Sentinel-1 IW SLC product.
To get rid of the effects at the edges shown in the last section, the TOPSAR-Deburst operator is used. Since the stack was created using the GUI, I recommend opening spyder again and continuing the processing in python.
#in terminal
spyder
Adapt the parameters inpath_stack (the path to the output of the Back-Geocoding operator) and outpath (the path where the output will be stored).
#Read data
print('Reading stack...')
inpath_stack = '/home/thho/PSI/backgeoc/S1A_IW_SLC__1SDV_20170514_Stack.dim'
import sys
import snappy
from snappy import ProductIO
from snappy import GPF
from snappy import HashMap
#Get snappy Operators
GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
outpath = '/home/thho/PSI/dbrst/'
out_form = '.dim'
aoi_dat = '/home/thho/PSI/input/roi_poly.geojson'
from sentinelsat import read_geojson, geojson_to_wkt
aoi = geojson_to_wkt(read_geojson(aoi_dat))
prodset = ProductIO.readProduct(inpath_stack)
print('Reading stack done!')
######################
####TOPSAR-Deburst####
######################
print('TOPSAR-Deburst...')
#put parameters for TOPSAR-Deburst
parameters = HashMap()
parameters.put('selectedPolarisations', 'VV')
#execute TOPSAR-Deburst
prodset_bgc_dbrst = GPF.createProduct('TOPSAR-Deburst', parameters, prodset)
ProductIO.writeProduct(prodset_bgc_dbrst, outpath + "stack_bgc_dbrst" + out_form, 'BEAM-DIMAP')
print('TOPSAR-Deburst done')
The SubsetOp, when used via snappy, does not preserve the data structure of the metadata (or I cannot handle it right). Using SubsetOp via snappy leads to the problem that the next step, interferogram formation, is not possible. Hence the workflow will continue using the GUI of SNAP until the StaMPS export is finished.
A spatial subset will be done to further reduce the dataset to be processed. The bounding box of the ROI is the extent of the subset. Before we say goodbye to python, we will use it to retrieve the bbox coordinates.
#in terminal
spyder
import pygeoj
aoi_gjson = pygeoj.load(filepath=aoi_dat)
aoi_gjson.bbox
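pygeoj should return the bounding box as [xmin, ymin, xmax, ymax]; the values below are hypothetical placeholders:
## [103.40, 31.50, 103.90, 31.90]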
Now switch to the SNAP GUI, load the debursted product and go to Raster ⇨ Subset…, using the python bbox output to fill in the coordinates:
Spatial subsetting of a coregistered, debursted stack of S1 IW SLC products, done manually using the bounding box coordinates of the ROI.
The processed product is not saved yet. To save it, right click it, select Save Product and save it as stack_deb_sub.dim.
Do not use this now; it is meant for interested users.
This was my first guess to subset the debursted stack using SubsetOp via snappy. Maybe the operator will be able to handle the data in the future and this chunk can then be used.
print('Subset...')
SubsetOp = snappy.jpy.get_type('org.esa.snap.core.gpf.common.SubsetOp')
WKTReader = snappy.jpy.get_type('com.vividsolutions.jts.io.WKTReader')
wkt = geojson_to_wkt(read_geojson(aoi_dat))
geom = WKTReader().read(wkt)
op = SubsetOp()
op.setSourceProduct(prodset_bgc_dbrst)
op.setGeoRegion(geom)
sub_product = op.getTargetProduct()
ProductIO.writeProduct(sub_product, outpath + 'dbrst_sub_py' + out_form, "BEAM-DIMAP")
print('Subset done!')
Now it is time to process interferograms with the stacked and subsetted data. Since SNAP 6.0 is available, some steps have changed from the original workflow introduced by katherine. With the current version, it is possible to add the needed data as lat, lon and elevation bands in one step, and even terrain correction afterwards is no longer needed to avoid a shifting effect, which was seen in previous versions, tested and reported by FeiLiu here.
Select your subset and go to Radar ⇨ Interferometric ⇨ Products ⇨ Interferogram Formation. Name the product to be written stack_deb_sub_ifg.dim. In Processing Parameters… enable Subtract topographic phase, Output Elevation and Output Orthorectified Lat/Lon. Hit Run.
Finally…we can export to StaMPS and the stopover will be reached soon!
Create a target folder named INSAR_master_date
#in terminal
mkdir /home/thho/PSI/INSAR_master_date/
In SNAP, go to Radar ⇨ Interferometric ⇨ PSI/SBAS ⇨ Export StaMPS. As input select the two products stack_deb_sub.dim and stack_deb_sub_ifg.dim. The target folder is the INSAR_master_date folder which was created just before. Hit Run.
The folder INSAR_master_date has to look like this:
ls /home/thho/PSI/INSAR_master_date/
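In my run the StaMPS export created four sub-folders; the exact listing may differ with your SNAP version:
## dem  diff0  geo  rslc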
As mentioned in Configuration, StaMPS_CONFIG.bash must be sourced in each terminal which is opened to work with StaMPS. That means this terminal must stay open while you work; Matlab, for instance, which will call scripts from the StaMPS installation, must be started from this terminal only and not via the application launcher or similar.
The steps described below are just a brief walk through StaMPS; I really recommend reading the StaMPS handbook, where the steps and parameters are described in detail!
#open terminal (further StaMPS-terminal) and close it when you finished all steps in StaMPS
source /home/thho/StaMPS_v3.3b1/StaMPS_CONFIG.bash
#<yyyymmdd> must be the date of the master
#the / in the end of the path information is crucial!
mt_prep_gamma_snap <yyyymmdd> /home/thho/PSI/INSAR_master_date/ 0.4
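The StaMPS manual also documents optional patch arguments for mt_prep_gamma (number of patches in range and azimuth plus their overlap in pixels), which split the work for machines with limited RAM; check the manual of your StaMPS version before relying on them, e.g.:
mt_prep_gamma_snap <yyyymmdd> /home/thho/PSI/INSAR_master_date/ 0.4 2 2 50 200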
Before we use the first Matlab script, we should check if Matlab works properly.
#in StaMPS-terminal
matlab
Matlab opens and we can try:
getparm
A list of parameters should be returned, if not, check section Configuration and your PATH variable.
Now it is more or less technically ‘easy’ to analyse PS with StaMPS.
StaMPS for PS consists of eight steps, where you can repeat step six after you have run step seven once. You are able to run all StaMPS steps in one go like this:
stamps(1,8)
But to know what is happening, it is a good idea to execute the steps one by one. The first of the two parameters is the step to begin with, the second the step to end with. To run just the first step, do:
stamps(1,1)
After Step five, you have the option to plot the wrapped phase, to check them visually. The command in matlab is:
ps_plot('w')
This produced an error in my case because of a break command in the Matlab script parseplotprm.m. This may occur in some other plot commands too; it did in my case. But it is easy to fix. The error will tell you in which line the break command occurs; go to the matlab folder of your StaMPS installation, open the affected script, comment out (%) the whole line and save the script. In Matlab, call the same plot again and it should work.
After all, you should be able to process your data. Time to dive into some parameters to produce interesting outputs. Have fun and good luck!
To prepare the export for matlab, the script mt_prep_gamma_snap must be run. The script has some parameters; the first one has a huge impact on the performance of the whole processing.
The amplitude dispersion index \(D_A\) is a value that describes the amplitude stability, which is used to preselect pixels and therefore reduces the number of pixels for the phase analysis.
\[D_A = \frac{\sigma_A}{\mu_A}\] where \(\sigma_A\) is the standard deviation and \(\mu_A\) is the mean of a series of amplitude values. The default threshold is 0.4; the higher the threshold, the more pixels will be selected for phase analysis. Surfaces like water and vegetation, where the amplitude is unstable, have a higher ratio than bedrock outcrops or man-made structures (the most likely PS pixels), see Ferretti et al. (2001), Hooper et al. (2007).
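To make the preselection tangible, here is a toy python sketch of the index on a hypothetical amplitude stack (a numpy array of shape images x rows x columns; all names and values are illustrative):
import numpy as np
#hypothetical amplitude time series: 20 images of 100 x 100 pixels
amp = np.random.rand(20, 100, 100)
#amplitude dispersion per pixel: standard deviation over time divided by mean
d_a = np.std(amp, axis=0) / np.mean(amp, axis=0)
#pixels below the threshold are kept as PS candidates
ps_candidates = d_a < 0.4
print(ps_candidates.sum())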
Parameter | Default | Description |
---|---|---|
clap_alpha | 1 | CLAP \(\alpha\) term. Together with the \(\beta\) term, determines the relative contribution of the low-pass and adaptive phase elements to the CLAP filter. |
Step 1 loads into Matlab the PS candidates which were selected by the mt_prep_gamma_snap script using the amplitude dispersion as criterion.
stamps(1,1)
Once step 1 has been run details can be listed for each ifg by using
ps_info
To set parameters do:
setparm('parameter_name', value)
To obtain the temporal coherence, the spatially correlated phase is estimated and subtracted. From the remaining phase, the spatially uncorrelated DEM error is then estimated and subtracted too. After that, the temporal coherence is estimated from the residual phase.
Parameter | Default | Description | level of influence |
---|---|---|---|
max_topo_err | 5 | Maximum uncorrelated DEM error (pixels with uncorrelated DEM error greater than this will not be picked) | unknown |
filter_grid_size | 50 | Pixel size of grid which is used to look for spatially correlation | low |
filter_weighting | ‘P-square’ | Approach or weighting scheme, other one is ‘SNR’. P-squared was found to be better (Hooper et al. (2007)). | high |
clap_win | 32 | Window size in pixels (64 is another option), depending on the distance over which pixels are expected to remain spatially correlated | unknown |
clap_low_pass_wavelength | 800 | Cutoff wavelength for phase filtering | medium |
clap_alpha | 1 | Weighting parameters for phase filtering | unknown |
clap_beta | 0.3000 | Weighting parameter for phase filtering | unknown |
gamma_change_convergence | 0.0050 | Threshold for iteration process in phase noise estimation | low |
quick_est_gamma_flag | ‘y’ | unknown | unknown |
stamps(2,2)
Parameter | Default | Description | level of influence |
---|---|---|---|
select_method | ‘DENSITY’ | Other option ‘PERCENT’ | unknown |
density_rand | 20 | Maximum acceptable spatial density (per km²) of selected pixels with random phase. Only applies if select_method is ‘DENSITY’. | unknown |
percent_rand | 20 | Max acceptable percentage of selected pixels having random phase. They are selected on the basis of their noise characteristics. The value can be set higher, hence bad pixel will be weeded in the next steps 4 and 5. Only applies if select_method is ‘PERCENT’ | medium |
slc_osf | 1 | unknown | unknown |
gamma_stdev_reject | 0 | unknown | unknown |
stamps(3,3)
Parameter | Default | Description | level of influence |
---|---|---|---|
weed_time_win | 730 | unknown | unknown |
weed_standard_dev | 1 | Threshold standard deviation. For each pixel, the phase noise standard deviation for all pixel pairs including the pixel is calculated. If the minimum standard deviation is greater than the threshold, the pixel is dropped. If set to 10, no noise-based weeding is performed | high, increasing the sd increases the number of selected PS but also the noise level, 1-1.2 is a good start |
weed_max_noise | Inf | Threshold for the maximum noise allowed for a pixel. For each pixel the minimum pixel-pair noise is estimated per interferogram. Pixels whose maximum interferogram noise value is higher than the indicated threshold are dropped | unknown |
weed_neighbours | ‘y’ | If ‘y’ proximity weeding is turned on; pixels are dropped based on their proximity to each other | high, should be ‘y’ |
weed_zero_elevation | ‘n’ | unknown | unknown |
drop_ifg_index | [] | Listed ifgs will not be included in the weeding | unknown |
stamps(4,4)
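If, for example, the weeding keeps too few pixels, you could loosen the noise threshold along the lines suggested in the table above and rerun the step:
setparm('weed_standard_dev', 1.2)
stamps(4,4)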
Parameter | Default | Description | level of influence |
---|---|---|---|
merge_resample_size | 0 | Coarser posting (in m) to resample to. If set to 0, no resampling is applied. | unknown |
merge_standard_dev | inf | Only applied in case of resampling. | unknown |
When step 5 is done and the PS and SBAS approaches are to be combined, the data for both have to be processed to this step before continuing with step 6.
stamps(5,5)
From StaMPS handbook:
Check the wrapped phase of the selected pixels after running this step, e.g.,
ps_plot('w')
In terms of reprocessing, the first parameter to play with is weed standard dev. If it looks like too many noisy pixels are being chosen, the value can be reduced. If very few pixels are chosen, the value can be increased. If still too few pixels are being selected such that any signal is generally undersampled, variation of Step 2 parameters can be tried. The number of initial candidates can also be increased by setting the amplitude dispersion higher in mt prep.
Parameter | Default | Description | level of influence |
---|---|---|---|
unwrap_patch_phase | ‘n’ | If ‘y’ uses the patch phase from Step 3 as prefiltered phase. If set to ‘n’ PS phase is filtered using a Goldstein adaptive phase filter. | high ‘n’ recommended |
scla_deramp | ‘n’ | Used after Step 7. If ‘y’, the ramp estimated in step 7 will be subtracted before unwrapping. This is useful if one is interested in local signals only. | medium/unknown, interesting when looking for landslides in a larger area |
drop_ifg_index | [] | Listed ifgs will not be included in unwrapping. This might be helpful if after unwrapping and testing parameters an ifg is still noisy. Then the index can be set and the steps from 2 onwards should be redone. | high if needed |
unwrap_time_win | 730 | Window to smooth phase in time. The number is the filter length (in days), used to estimate the noise contribution for each phase arc (phase difference between neighbouring pixels). Could be important when looking for landslides. | high, see ‘unwrap_grid_size’ |
unwrap_method | ‘3D’ | Unwrapping method, alternative ‘3D_QUICK’ for SBAS. ‘3D_QUICK’ is the default for SBAS but can be set to ‘3D’ before running step 6 to potentially improve accuracy, although this will take longer. | medium/high |
unwrap_grid_size | 200 | Resampling grid spacing if unwrap_prefilter_flag is ‘y’ phase is resampled to this grid. Should be at least as large as ‘merge_grid_size’ | high making it large reduces noise but may undersample signal |
unwrap_gold_n_win | 32 | Window size Goldstein filter | unknown |
unwrap_prefilter_flag | ‘y’ | Phase is prefiltered before unwrapping for noise reduction; SCLA and AOE subtraction is applied when step 6 is redone after step 7. To avoid the subtraction use the scla_reset command in matlab | high, ‘y’ is recommended |
unwrap_gold_alpha | 0.8000 | The larger the value the stronger the filtering | medium/high |
unwrap_la_error_flag | ‘y’ | unknown | unknown |
unwrap_spatial_cost_func_flag | ‘n’ | unknown | unknown |
#to reset SCLA and AOE to no values if no subtraction is wanted
scla_reset
stamps(6,6)
This summary is based on a discussion in the step forum. FeiLiu created a group conversation to present her approach of a modified InterferogramOp which should be tested to use Sentinel-1 data for the SBAS approach in StaMPS.
During the test, the members @FeiLiu, @memorid and @thho tested, discussed and modified scripts. The automation of the preprocessing workflow was scripted by @thho and designed for Unix based OS in April 2018.
FeiLiu configured the InterferogramOp from the s1tbx in order to create interferograms between images coregistered to a superior master image, which is crucial for the SBAS approach. This guide summarizes the installation of the software needed to run SNAP from source and to preprocess and process the data for the SNAP-StaMPS workflow.
This first version must be seen as a first experiment; any advice or error reports are highly appreciated.
The software setup for the SBAS workflow is the same as for the PS workflow: install Kubuntu, Anaconda python with snappy, sentinelsat, pygeoj, triangle, Matlab, snaphu and csh, then install StaMPS and configure and source StaMPS_CONFIG.bash exactly as described in the Installation and Configuration sections above. The only difference concerns the two changed scripts, described next.
Even though it is not necessary in all cases, some bugs occurred using the StaMPS scripts to process interferometric stacks preprocessed by SNAP. In order to overcome these bugs, FeiLiu changed two scripts, which we can now include in the StaMPS installation.
Download mt_prep_gamma_snap(changed) and rename it to mt_prep_gamma_snap. After that, move it to the bin folder of your StaMPS installation. A script with this name is already in the bin folder of the StaMPS installation; in order to replace it with the downloaded script, I recommend saving the original script.
#in terminal
mkdir /home/thho/StaMPS_v3.3b1/bin/exclude/
mv /home/thho/StaMPS_v3.3b1/bin/mt_prep_gamma_snap /home/thho/StaMPS_v3.3b1/bin/exclude/mt_prep_gamma_snap
#insert changed script
mv '/home/thho/Downloads/mt_prep_gamma_snap(changed)' /home/thho/Downloads/mt_prep_gamma_snap
mv /home/thho/Downloads/mt_prep_gamma_snap /home/thho/StaMPS_v3.3b1/bin/
Because the script was not installed during the StaMPS setup, we have to set the execution flag manually to be able to call it from the terminal.
#in terminal
chmod u+x /home/thho/StaMPS_v3.3b1/bin/mt_prep_gamma_snap
The second changed script is ps_load_initial_gamma.m. A script with this name is already in the matlab folder of the StaMPS installation. In order to replace it with the downloaded script, I recommend saving the original script.
#in terminal
mkdir /home/thho/StaMPS_v3.3b1/matlab/exclude/
mv /home/thho/StaMPS_v3.3b1/matlab/ps_load_initial_gamma.m /home/thho/StaMPS_v3.3b1/matlab/exclude/
#rename and move changed matlab script to StaMPS matlab folder
mv '/home/thho/Downloads/ps_load_initial_gamma(changed).m' /home/thho/Downloads/ps_load_initial_gamma.m
mv /home/thho/Downloads/ps_load_initial_gamma.m /home/thho/StaMPS_v3.3b1/matlab/
To install SNAP from source you need to install Maven, OpenJDK 8, Git and IntelliJ IDEA; the commands below set these up.
To further preprocess data for SBAS you need R, RStudio, some packages and a LaTeX distribution installed.
sudo apt-get install maven
sudo apt-get install openjdk-8-jdk openjdk-8-demo openjdk-8-doc openjdk-8-jre-headless openjdk-8-source
sudo apt-get install git
sudo add-apt-repository ppa:mmk2410/intellij-idea
sudo apt-get update
sudo apt-get install intellij-idea-community
The following code chunks and the IntelliJ IDEA settings presented were found and modified from here and here.
#create directories
mkdir snap2
cd snap2
#clone source from Github
git clone https://github.com/senbox-org/snap-engine.git
git clone https://github.com/senbox-org/snap-desktop.git
git clone https://github.com/senbox-org/s1tbx.git
Download the InterferogramOp.java operator, changed by @FeiLiu and modified by @thho.
#archive the old operator
mkdir ~/snap2/old
cd ~/snap2/s1tbx/s1tbx-op-insar/src/main/java/org/esa/s1tbx/insar/gpf
mv InterferogramOp.java ~/snap2/old
#download the new operator
wget -O ~/snap2/s1tbx/s1tbx-op-insar/src/main/java/org/esa/s1tbx/insar/gpf/InterferogramOp.java "http://forum.step.esa.int/uploads/default/original/2X/e/ea716f33dc49dd5004cb9d591235cd12819ac415.java"
##build each cloned repository, including the changed operator
cd ~/snap2/snap-engine
mvn clean install -DskipTests=true
cd ..
cd snap-desktop
mvn clean install -DskipTests=true
cd ..
cd s1tbx
mvn clean install -DskipTests=true
To run SNAP from IntelliJ IDEA, click the green play icon in the upper toolbar, next to the s1tbx application drop down menu and the bug icon.
This installation is tested for Kubuntu 17.10.1 artful and RStudio 1.1.414; you have to adapt the information for later versions!
R installation:
sudo add-apt-repository 'deb https://mirror.ibcp.fr/pub/CRAN/bin/linux/ubuntu artful/'
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E084DAB9
sudo apt-get update
sudo apt-get install libgdal-dev libproj-dev libgeos-dev
sudo apt-get install r-base-core
sudo apt-get install gdebi-core
RStudio installation: visit the RStudio homepage, check for the newest version and adapt the version number in the first line below:
wget https://download1.rstudio.org/rstudio-1.1.414-amd64.deb
wget http://ftp.ca.debian.org/debian/pool/main/g/gstreamer0.10/libgstreamer0.10-0_0.10.36-1.5_amd64.deb
wget http://ftp.ca.debian.org/debian/pool/main/g/gst-plugins-base0.10/libgstreamer-plugins-base0.10-0_0.10.36-2_amd64.deb
sudo dpkg -i libgstreamer0.10-0_0.10.36-1.5_amd64.deb
sudo dpkg -i libgstreamer-plugins-base0.10-0_0.10.36-2_amd64.deb
sudo apt-mark hold libgstreamer-plugins-base0.10-0
sudo apt-mark hold libgstreamer0.10-0
sudo gdebi rstudio-1.1.414-amd64.deb
#install needed packages
#SBAS preprocessing
install.packages("rmarkdown")
install.packages("magrittr")
install.packages("xtable")
install.packages("rgdal")
#StaMPS-visualizer
install.packages("shiny")
install.packages("lubridate")
install.packages("leaflet")
install.packages("colorRamps")
#this is a huge installation of several 100MB!
sudo apt-get install texlive-full
The preprocessing workflow is highly data intensive. I suggest using an HDD as storage for the input data (downloaded images from sci hub) and an SSD as working storage where you write your products to.
The preprocessing of Sentinel-1 data is a complex workflow using different scripts. To set up the folder structure needed by these scripts, a project directory has to exist; it can be located wherever you want. In this project directory the folder structure will be created. Open a terminal and cd to the project directory:
cd ~/projectdir
wget -O "$PWD/snap_sbas_wf.zip" "http://forum.step.esa.int/uploads/default/original/2X/c/c1fc64285366cb920c809a2ffff309830a5ed15b.zip"
unzip snap_sbas_wf.zip
cd snap_sbas_wf
source create_folder_str.bash
Documentation of create_folder_str.bash:
#!/bin/bash
#creating folder structure for semiautomated SBAS processing
cd ..
projectDirectory=$PWD
mkdir SBAS
cd SBAS
mkdir export snap_prepro
cd snap_prepro
mkdir superior_master spl_oa stack_deb stack_deb_rename ifg subset
cd subset
mkdir ifg stack
cd $projectDirectory
mv snap_sbas_wf "$projectDirectory/SBAS"
rm -r snap_sbas_wf.zip
You can now see a folder called SBAS in your chosen project directory. Within this folder there are three sub-directories: snap_sbas_wf (containing the scripts), snap_prepro (will contain the preprocessing results of your images) and export (will contain the exported data for StaMPS).
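For orientation, this is the structure the script creates (snap_sbas_wf is moved into SBAS at the end):
## SBAS/
## ├── export/
## ├── snap_prepro/
## │   ├── ifg/
## │   ├── spl_oa/
## │   ├── stack_deb/
## │   ├── stack_deb_rename/
## │   ├── subset/
## │   │   ├── ifg/
## │   │   └── stack/
## │   └── superior_master/
## └── snap_sbas_wf/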
After the images to be analyzed have been downloaded to your HDD, the data must be split and the orbit files applied using the TOPSAR-Split and the Apply-Orbit-File operators. To do this automatically using a ROI (region of interest) polygon, snappy is used in this step. Go to the snap_sbas_wf directory and open the script dbrst_ao_sbas.py in spyder. Under User Configuration adapt the paths and parameters to your machine and dataset; projpath should be the same as used before, pointing to the project directory which contains the SBAS folder structure.
For some reason I was not able to write a script which clears the memory after each loop, therefore I recommend processing a limited number of images in one run; 8 images with 32GB RAM seem to work well. After you finish one run, restart your kernel in spyder, change the input data in the folder you selected in the script and run it again. It does not take too long to do this.
A possible java error occurs if a hidden .directory file is within the input folder. Go to this folder, enable show hidden files and, if this file is present in the folder, delete it.
####################
#User Configuration#
####################
inpath_dir = '/home/thho/huge/RS_data/sentinel1data/SLC/maoxian/process/'
projpath = '/home/thho/master/kitchen/maoxian_deskt'
aoi_dat = '/home/thho/master/kitchen/maoxian_deskt/input/maoxian_poly.geojson'
#number of IW where the ROI is within 'IW1', 'IW2' or 'IW3'
IWnum = 'IW2'
polarisation = 'VV'
###################################
#import modules and SNAP operators#
###################################
print('Importing modules and SNAP operators...')
import os
import sys
import snappy
from snappy import ProductIO
from snappy import GPF
from snappy import HashMap
from sentinelsat import read_geojson, geojson_to_wkt
#Get snappy Operators
GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
print('Importing modules and SNAP operators done!')
####################################
#prepare objects needed in for loop#
####################################
print('Preparing objects needed in for loop...')
filenames = os.listdir(inpath_dir)
inpath = ''
out_form = '.dim'
aoi = geojson_to_wkt(read_geojson(aoi_dat))
print('Preparing objects needed in for loop done!')
###################################
#put parameters for SNAP operators#
###################################
print('Reading data, TOPSAR-Split and Apply-Orbit-File...')
#put parameters for TOPSAR-Split operator
#https://github.com/senbox-org/s1tbx/blob/master/s1tbx-op-sentinel1/src/main/java/org/esa/s1tbx/sentinel1/gpf/TOPSARSplitOp.java
parameters_split = HashMap()
parameters_split.put('subswath', IWnum)
parameters_split.put('selectedPolarisations', polarisation)
parameters_split.put('wktAoi', aoi)
#put parameters for Apply-Orbit-File
parameters_ao = HashMap()
parameters_ao.put('orbitType', "Sentinel Precise (Auto Download)")
parameters_ao.put('polyDegree', 3)
parameters_ao.put('continueOnFail', False)
for i in range(len(filenames)):
    ######################
    ####reading images####
    ######################
    inpath_r = inpath_dir + filenames[i]
    outpath_r = projpath + '/SBAS/snap_prepro/spl_oa/' + filenames[i][17:25] + out_form
    img = ProductIO.readProduct(inpath_r)
    ####################
    ####TOPSAR-Split####
    ####################
    print('TOPSAR-Split for ' + filenames[i][17:25])
    img_split = GPF.createProduct('TOPSAR-Split', parameters_split, img)
    print('TOPSAR-Split for ' + filenames[i][17:25] + ' done!')
    #deleting input data
    del img
    ########################
    ####Apply-Orbit-File####
    ########################
    print('Apply-Orbit-File for ' + filenames[i][17:25])
    #execute Apply-Orbit-File operator
    img_split_ao = GPF.createProduct('Apply-Orbit-File', parameters_ao, img_split)
    ProductIO.writeProduct(img_split_ao, outpath_r, 'BEAM-DIMAP')
    del img_split
    print('Apply-Orbit-File for ' + filenames[i][17:25] + ' done!')
    del img_split_ao
print('Reading data, TOPSAR-Split and Apply-Orbit-File done!')
After this step it is a good idea to check that the split images are correct. Sometimes during this step data is processed wrongly and no values or zero values are stored in one layer. What I usually do is open all results in SNAP and look at the intensity band: if it appears correct, we can carry on; if not, rerun the step for the affected image. It takes some time to double check the results, but it is worth it! A mistake in this step might become a huge problem later on, forcing you to redo the whole preprocessing.
If you have not already created a stack with these images for a PS workflow, you have to stack all images now. To do so, open the SNAP GUI, read all processed images and select Radar ⇨ Interferometric ⇨ InSAR Stack Overview. Load all images by clicking All Opened, then click Overview. The first image in the lower window should be the master of your stack and the later used superior master in SBAS. Create the stack with Radar ⇨ Coregistration ⇨ S-1 TOPS Coregistration ⇨ S-1 Back Geocoding. Load all opened products, put the master image first in the table using the arrows to the right, and build the stack. This process will take some time!
Open the stack and choose View ⇨ Tool Windows ⇨ Radar ⇨ InSAR Stack. In the opened window, click on Baselines and copy the table to your clipboard with the top right button. Open a text editor, paste the data, change the ∞ to NA and save it as …/SBAS/baseline_info.csv. Be very precise with the name and location of this csv file! I know it is quite a workaround, but as far as I know the baseline info cannot be retrieved via gpt or snappy.
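If you prefer to script the ∞ replacement instead of editing by hand, here is a tiny sketch, assuming the pasted table was saved as UTF-8 text:
#replace the infinity symbol with NA in the pasted baseline table
import io
path = '/home/thho/master/kitchen/maoxian_deskt/SBAS/baseline_info.csv'
with io.open(path, encoding='utf-8') as f:
    txt = f.read()
with io.open(path, 'w', encoding='utf-8') as f:
    f.write(txt.replace(u'\u221e', u'NA'))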
The script basel_info_sbas.bash generates a PDF containing a baseline plot and information about the baselines and master-slave combinations for each image pair, two .csv files with the image pairs needed for processing, and a bash script that creates export folders for each image pair. Call it from your project directory (it is very important that you always call the scripts from the project directory!):
cd ~/projectdir
source ~/projectdir/SBAS/snap_sbas_wf/basel_info_sbas.bash
Documentation of basel_info_sbas.bash:
#!/bin/bash
projectDirectory="$PWD"
cd "${projectDirectory}/SBAS/snap_sbas_wf"
R -e "rmarkdown::render('baseline_info.Rmd')"
mv baseline_info.pdf ../baseline_info.pdf
To change header information or baseline thresholds, open the Rmd file …/SBAS/snap_sbas_wf/baseline_info.Rmd. Header information can be adapted in the yaml header, thresholds and some other parameters in the third chunk called userconfig.
For each image combination shown in baseline_info.pdf in the SBAS folder, we have to coregister the images to the superior master image. A file with the superior master date is produced during the last step and can be found in …/SBAS/snap_sbas_wf/superior_master.txt. With this file and the master_slave_info.csv it is now possible to run a bash script calling SNAP's Back-Geocoding and TOPSAR-Deburst operators. We use SNAP's gpt instead of snappy in this step because calling the operators directly in Java is much faster thanks to better parallelized processing.
Open a terminal in your project directory and call back_geoc_sbas.bash; this process will take some time, ~3-4 min per stack:
source ./SBAS/snap_sbas_wf/back_geoc_sbas.bash
Documentation of back_geoc_sbas.bash:
#!/bin/bash
# enable next line for debugging purpose
# set -x
############################################
# User Configuration
############################################
# adapt this path to your needs
export PATH=~/snap/bin:$PATH
gptPath="gpt"
############################################
# Main processing
############################################
projectDirectory="$PWD"
graphXmlPath="${projectDirectory}/SBAS/snap_sbas_wf/back_geoc_sbas.xml"
oaDirectory="${projectDirectory}/SBAS/snap_prepro/spl_oa"
supmaster="$(cat ${projectDirectory}/SBAS/snap_sbas_wf/superior_master.txt)"
cp -R ${projectDirectory}/SBAS/snap_prepro/spl_oa/$supmaster* ${projectDirectory}/SBAS/snap_prepro/superior_master/
supmstpath="${projectDirectory}/SBAS/snap_prepro/superior_master/$supmaster.dim"
dbDirectory="${projectDirectory}/SBAS/snap_prepro/stack_deb"
cd "${projectDirectory}/SBAS/snap_sbas_wf"
for C in $(awk -F, '{print $1 "_" $2}' "$projectDirectory/SBAS/snap_sbas_wf/mst_slv_info.csv"); do
    mst="$oaDirectory/${C:0:8}.dim"
    slv="$oaDirectory/${C:9:8}.dim"
    fileList="${supmstpath},${mst},${slv}"
    targetFile="$dbDirectory/${C:0:8}_${C:9:8}.dim"
    echo "fileList=$fileList">back_geoc_sbas.properties
    echo "file=$targetFile">>back_geoc_sbas.properties
    parameterFilePath="$projectDirectory/SBAS/snap_sbas_wf/back_geoc_sbas.properties"
    ${gptPath} ${graphXmlPath} -e -p ${parameterFilePath}
done
Documentation of back_geoc_sbas.xml:
<graph id="back_geoc_sbas">
<version>1.0</version>
<node id="ProductSet-Reader">
<operator>ProductSet-Reader</operator>
<sources/>
<parameters class="com.bc.ceres.binding.dom.XppDomElement">
<fileList>${fileList}</fileList>
</parameters>
</node>
<node id="Back-Geocoding">
<operator>Back-Geocoding</operator>
<sources>
<sourceProduct.3 refid="ProductSet-Reader"/>
</sources>
<parameters class="com.bc.ceres.binding.dom.XppDomElement">
<demName>SRTM 3Sec</demName>
<demResamplingMethod>BICUBIC_INTERPOLATION</demResamplingMethod>
<externalDEMFile/>
<externalDEMNoDataValue>0.0</externalDEMNoDataValue>
<resamplingType>BISINC_5_POINT_INTERPOLATION</resamplingType>
<maskOutAreaWithoutElevation>true</maskOutAreaWithoutElevation>
<outputRangeAzimuthOffset>false</outputRangeAzimuthOffset>
<outputDerampDemodPhase>false</outputDerampDemodPhase>
<disableReramp>false</disableReramp>
</parameters>
</node>
<node id="TOPSAR-Deburst">
<operator>TOPSAR-Deburst</operator>
<sources>
<sourceProduct refid="Back-Geocoding"/>
</sources>
<parameters class="com.bc.ceres.binding.dom.XppDomElement">
<selectedPolarisations/>
</parameters>
</node>
<node id="Write">
<operator>Write</operator>
<sources>
<sourceProduct refid="TOPSAR-Deburst"/>
</sources>
<parameters class="com.bc.ceres.binding.dom.XppDomElement">
<file>${file}</file>
<formatName>BEAM-DIMAP</formatName>
</parameters>
</node>
</graph>
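Once back_geoc_sbas.bash has finished, it is worth checking that a stack was written for every image pair before moving on. Here is a small sketch reading the same mst_slv_info.csv as the bash script above:
#check that Back-Geocoding produced a stack for every image pair
import csv
import os
projpath = '/home/thho/master/kitchen/maoxian_deskt'
with open(projpath + '/SBAS/snap_sbas_wf/mst_slv_info.csv') as f:
    for row in csv.reader(f):
        stack = projpath + '/SBAS/snap_prepro/stack_deb/' + row[0] + '_' + row[1] + '.dim'
        if not os.path.exists(stack):
            print('missing stack: ' + stack)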
After the stacks have been made, they must be renamed to prepare the modified interferogram formation process. Thanks to the sorting that happens in Superior-Master-Stack and Image Combinations with Small Baselines, we can automate this step with snappy: open spyder, adapt the projpath in User Configuration and run the script renaming_sbas.py:
####################
#User Configuration#
####################
projpath = '/home/thho/master/kitchen/maoxian_deskt'
###################################
#import modules and SNAP operators#
###################################
print('Importing modules and SNAP operators...')
import os
import sys
import snappy
from snappy import ProductIO
from snappy import GPF
from snappy import HashMap
#Get snappy Operators
GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
from numpy import genfromtxt
print('Importing modules and SNAP operators done!')
##############################
##load master and slave info##
##############################
comb_info_path = projpath + '/SBAS/snap_sbas_wf/mst_slv_info.csv'
comb_info = genfromtxt(comb_info_path, delimiter=',')
out_form = ".dim"
######################
##rename stack bands##
######################
for i in range(comb_info.shape[0]):
    mst_name = str(int(comb_info[i, 0]))
    slv_name = str(int(comb_info[i, 1]))
    inpath = projpath + '/SBAS/snap_prepro/stack_deb/' + mst_name + '_' + slv_name + out_form
    ################
    ###read stack###
    ################
    stack = ProductIO.readProduct(inpath)
    ####################
    ###renaming bands###
    ####################
    #rename in two passes via a temporary 'wait' tag,
    #then tag master ('mst') and slave ('slv1') bands
    bnames = list(stack.getBandNames())
    stack.getBand(bnames[0]).setName(bnames[0][:9] + 'wait' + bnames[0][-10:])
    stack.getBand(bnames[1]).setName(bnames[1][:9] + 'wait' + bnames[1][-10:])
    stack.getBand(bnames[2]).setName(bnames[2][:17] + 'wait' + bnames[2][-10:])
    stack.getBand(bnames[3]).setName(bnames[3][:9] + 'mst' + bnames[3][-10:])
    stack.getBand(bnames[4]).setName(bnames[4][:9] + 'mst' + bnames[4][-10:])
    stack.getBand(bnames[5]).setName(bnames[5][:17] + 'mst' + bnames[5][-10:])
    bnames = list(stack.getBandNames())
    stack.getBand(bnames[0]).setName(bnames[0][:9] + 'slv1' + bnames[0][-10:])
    stack.getBand(bnames[1]).setName(bnames[1][:9] + 'slv1' + bnames[1][-10:])
    stack.getBand(bnames[2]).setName(bnames[2][:17] + 'slv1' + bnames[2][-10:])
    outpath = projpath + '/SBAS/snap_prepro/stack_deb_rename/' + mst_name + '_' + slv_name + out_form
    ProductIO.writeProduct(stack, outpath, 'BEAM-DIMAP')
    del stack
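To verify that the renaming worked, you can print the band names of one of the renamed stacks and look for the mst and slv1 tags; the pair name here is only an example, adapt it to one of yours:
#print the band names of a renamed stack
from snappy import ProductIO
stack = ProductIO.readProduct('/home/thho/master/kitchen/maoxian_deskt/SBAS/snap_prepro/stack_deb_rename/20170606_20170618.dim')
print(list(stack.getBandNames()))
stack.dispose()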
In this step the amount of stack data doubles, because the renamed stacks are saved in a new folder. If you are sure your stacks were built correctly, I recommend deleting the folder …/SBAS/snap_prepro/stack_deb or moving it to an HDD to save space on your SSD.
This step must be done manually; if someone finds a way to use the changed operator and SNAP built from source via gpt or snappy, please let me know, then I can script this section too.
For now, you have to build SNAP in IntelliJ IDEA, load the first renamed stack and go to Radar ⇨ Interferometric ⇨ Products ⇨ Interferogram Formation. The changed operator will have all parameters set for this process (subtract topographic phase, output DEM and orthorectified lat/lon). Change the output path to …/SBAS/snap_prepro/ifg; the output file name must have the format masterdate_slavedate_ifg.dim. Then click Run. When finished, close the dialog window, right click one of the products and click Close All Products. You will be asked if you want to save changes; select Yes (this saves the very important metadata change in the stack). Repeat this step until all ifgs are built. I recommend this window arrangement to walk through your files without missing one:
Window arrangement for manual interferogram formation.
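To make sure no pair is missed, a small sketch can list which of the expected ifg products, derived from mst_slv_info.csv, are not yet present in the ifg folder:
#flag interferograms that still have to be built manually
import csv
import os
projpath = '/home/thho/master/kitchen/maoxian_deskt'
ifg_dir = projpath + '/SBAS/snap_prepro/ifg/'
with open(projpath + '/SBAS/snap_sbas_wf/mst_slv_info.csv') as f:
    for row in csv.reader(f):
        name = row[0] + '_' + row[1] + '_ifg.dim'
        if not os.path.exists(ifg_dir + name):
            print('still missing: ' + name)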
When you have processed all stacks, it is a good idea to make a spatial and band subset. Skip the spatial subset if you want to work with the whole selected burst, but this might exceed your RAM in later processing. Running the script subset_sbas.py in spyder will subset the bands, which you should do in any case, because it is important for further processing and the export to StaMPS. For the spatial subset we use a .geojson polygon. To create this polygon I recommend using Google Earth; make sure the polygon lies within the bursts you have selected and not too close to the edges, because the ifgs have 0 values there due to the debursting step. To convert Google Earth's .kml to .geojson, you can use the R script kml_to_geojson.R:
###################################
##convert google's kml to geojson##
###################################
library(rgdal)
#as .kml
in.path <- "~/path/to/polygon.kml"
#as .geojson
out.path <- "~/path/to/polygon.geojson"
l.name <- "layername"
dat <- readOGR(in.path)
writeOGR(dat, out.path,
driver = "GeoJSON", layer = l.name,
overwrite_layer = T)
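If you would rather stay in Python for the conversion, geopandas offers a possible alternative; this is a sketch assuming geopandas and fiona are installed, where fiona's KML driver has to be enabled by hand:
#convert Google Earth's .kml to .geojson with geopandas
import fiona
import geopandas as gpd
fiona.drvsupport.supported_drivers['KML'] = 'rw'
dat = gpd.read_file('/home/thho/path/to/polygon.kml')
dat.to_file('/home/thho/path/to/polygon.geojson', driver='GeoJSON')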
When you have all your data ready, open spyder, adapt this script and run it. We use snappy for this step, since we use a polygon for subsetting; I was not able to get this done in gpt. If anybody can, tell me about it and I will change the step to gpt:
####################
#User Configuration#
####################
projpath = '/home/thho/master/kitchen/maoxian_deskt'
subpoly_path = '/home/thho/master/kitchen/maoxian_deskt/input/mao_sub.geojson'
out_form = ".dim"
IWnum = 'IW2'
polarisation = 'VV'
###################################
#import modules and SNAP operators#
###################################
print('Importing modules and SNAP operators...')
import os
import sys
import snappy
import csv
from snappy import ProductIO
from snappy import GPF
from snappy import HashMap
from sentinelsat import read_geojson, geojson_to_wkt
#Get snappy Operators
GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
WKTReader = snappy.jpy.get_type('com.vividsolutions.jts.io.WKTReader')
print('Importing modules and SNAP operators done!')
####################################
##read in .geojson, convert to WKT##
####################################
print('Reading process infos and geojson, converting to WKT...')
wkt = geojson_to_wkt(read_geojson(subpoly_path))
geom = WKTReader().read(wkt)
comb_info_path = projpath + '/SBAS/snap_sbas_wf/mst_slv_info.csv'
from numpy import genfromtxt
comb_info = genfromtxt(comb_info_path, delimiter=',')
comb_info_date_path = projpath + '/SBAS/snap_sbas_wf/mst_slv_ifg.csv'
comb_info_date = []
with open(comb_info_date_path) as f:
    reader = csv.reader(f)
    for row in reader:
        comb_info_date.append([row[0], row[1]])
print('Reading process infos and geojson, converting to WKT done!')
#######################################
###band and spatial subset of stacks###
#######################################
print('Subsetting stacks...')
for i in range(comb_info.shape[0]):
    stack_name = str(int(comb_info[i, 0])) + "_" + str(int(comb_info[i, 1]))
    stack_path = projpath + '/SBAS/snap_prepro/stack_deb_rename/' + stack_name + '.dim'
    stack = ProductIO.readProduct(stack_path)
    #keep the mst and slv1 bands (indices 3 to 8)
    band_names = list(stack.getBandNames())
    band_names = ','.join(band_names[3:9])
    parameters = HashMap()
    parameters.put('geoRegion', geom)
    parameters.put('sourceBands', band_names)
    parameters.put('copyMetadata', True)
    sub_stack = GPF.createProduct("Subset", parameters, stack)
    del stack
    outpath_substack = projpath + '/SBAS/snap_prepro/subset/stack/' + stack_name + out_form
    ProductIO.writeProduct(sub_stack, outpath_substack, 'BEAM-DIMAP')
    del sub_stack
print('Subsetting stacks done!')
#####################################
###band and spatial subset of ifgs###
#####################################
print('Subsetting ifgs...')
for i in range(len(comb_info_date)):
    ifg_name = str(int(comb_info[i, 0])) + "_" + str(int(comb_info[i, 1])) + '_ifg'
    ifg_path = projpath + '/SBAS/snap_prepro/ifg/' + ifg_name + '.dim'
    ifg = ProductIO.readProduct(ifg_path)
    #assemble the ifg band names plus elevation and orthorectified lat/lon
    tag = IWnum + '_' + polarisation + '_' + comb_info_date[i][0] + '_' + comb_info_date[i][1]
    band_names = ('i_ifg_' + tag + ',q_ifg_' + tag + ',Intensity_ifg_' + tag +
                  ',Phase_ifg_' + tag + ',coh_' + tag +
                  ',elevation,orthorectifiedLat,orthorectifiedLon')
    parameters = HashMap()
    parameters.put('geoRegion', geom)
    parameters.put('sourceBands', band_names)
    parameters.put('copyMetadata', True)
    sub_ifg = GPF.createProduct("Subset", parameters, ifg)
    del ifg
    outpath_subifg = projpath + '/SBAS/snap_prepro/subset/ifg/' + ifg_name + out_form
    ProductIO.writeProduct(sub_ifg, outpath_subifg, 'BEAM-DIMAP')
    del sub_ifg
print('Subsetting ifgs done!')
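A quick way to see how much the subset shrank the data is to compare the raster dimensions of one pair before and after subsetting; the pair name is only an example:
#compare raster dimensions before and after subsetting
from snappy import ProductIO
projpath = '/home/thho/master/kitchen/maoxian_deskt'
stack_name = '20170606_20170618'
full = ProductIO.readProduct(projpath + '/SBAS/snap_prepro/stack_deb_rename/' + stack_name + '.dim')
sub = ProductIO.readProduct(projpath + '/SBAS/snap_prepro/subset/stack/' + stack_name + '.dim')
print('full stack: ' + str(full.getSceneRasterWidth()) + ' x ' + str(full.getSceneRasterHeight()))
print('subset: ' + str(sub.getSceneRasterWidth()) + ' x ' + str(sub.getSceneRasterHeight()))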
This step reduces the amount of data significantly; how much depends on the size of the polygon.
We can now use the StaMPS Export operator. This will not produce a folder and file structure we can directly plug into StaMPS, since the export is meant for PSI, but the exported data can later be rearranged to fit the structure needed for SBAS. This is the last step using SNAP! Go to your project directory, open a terminal there and call stamps_export_sbas.bash:
cd ~/projectdir
source ./SBAS/snap_sbas_wf/stamps_export_sbas.bash
Documentation of stamps_export_sbas.bash:
#!/bin/bash
# enable next line for debugging purpose
# set -x
############################################
# User Configuration
############################################
# adapt this path to your needs
export PATH=~/snap/bin:$PATH
gptPath="gpt"
############################################
# Main processing
############################################
projectDirectory="$PWD"
graphXmlPath="${projectDirectory}/SBAS/snap_sbas_wf/stamps_export_sbas.xml"
exportDirectory="${projectDirectory}/SBAS/export"
cd "$exportDirectory"
rm "$exportDirectory/*"
bash "${projectDirectory}/SBAS/snap_sbas_wf/mkdir_sbas_exp.bash"
cd "${projectDirectory}/SBAS/snap_sbas_wf"
for D in $(ls -1 "${exportDirectory}"); do
    targetFolder="$exportDirectory/$D"
    stack="$projectDirectory/SBAS/snap_prepro/subset/stack/$D.dim"
    ifg="$projectDirectory/SBAS/snap_prepro/subset/ifg/"$D"_ifg.dim"
    fileList="${stack},${ifg}"
    echo "targetFolder=$targetFolder">stamps_export.properties
    echo "psiFormat=true">>stamps_export.properties
    echo "fileList=$fileList">>stamps_export.properties
    parameterFilePath="$projectDirectory/SBAS/snap_sbas_wf/stamps_export.properties"
    ${gptPath} ${graphXmlPath} -e -p ${parameterFilePath}
done
Documentation of stamps_export_sbas.xml:
<graph id="stamps_export">
<version>1.0</version>
<node id="ProductSet-Reader">
<operator>ProductSet-Reader</operator>
<sources/>
<parameters class="com.bc.ceres.binding.dom.XppDomElement">
<fileList>${fileList}</fileList>
</parameters>
</node>
<node id="StampsExport">
<operator>StampsExport</operator>
<sources>
<sourceProduct.2 refid="ProductSet-Reader"/>
</sources>
<parameters class="com.bc.ceres.binding.dom.XppDomElement">
<targetFolder>${targetFolder}</targetFolder>
<psiFormat>${psiFormat}</psiFormat>
</parameters>
</node>
</graph>
The common data structure for SBAS in StaMPS is the INSAR_master_date directory containing three subdirectories: geo, dem and SMALL_BASELINES. In SMALL_BASELINES each ifg is stored in its own directory, named with the master_slave dates. To build this structure and merge the exported data, which contains redundant and useless information, into it, call the bash script merge_stamps_sbas.bash from your project directory:
cd ~/projectdir
source ./SBAS/snap_sbas_wf/merge_stamps_sbas.bash
All directories have now been made and the data moved from the export folder to the INSAR_master_date directory. Before processing, I usually back up INSAR_master_date with a simple copy and a *_backup suffix, but this is not necessary.
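For reference, the layout that merge_stamps_sbas.bash builds can be sketched like this; the master date is only an example:
#the StaMPS SBAS layout: INSAR_master_date with geo, dem and SMALL_BASELINES
import os
master_date = '20170606'
base = os.path.expanduser('~/projectdir/INSAR_' + master_date)
for sub in ['geo', 'dem', 'SMALL_BASELINES']:
    path = os.path.join(base, sub)
    if not os.path.isdir(path):
        os.makedirs(path)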
Finally we have finished the preprocessing and are now able to run SBAS in StaMPS!
Before we start processing in StaMPS, you have to update a bash script in the StaMPS install directory. To exchange the script without deleting the original, go to the snap_sbas_wf directory and move the script mt_prep_gamma_snap to the bin folder of your StaMPS installation. If you installed StaMPS in your home directory, the following will do; adapt the path if StaMPS is somewhere else:
#save the original script and move the changed one into the installation
mv ~/StaMPS_v3.3b1/bin/mt_prep_gamma_snap ~/StaMPS_v3.3b1/bin/mt_prep_gamma_snap_old
cp ~/projectdir/SBAS/snap_sbas_wf/mt_prep_gamma_snap ~/StaMPS_v3.3b1/bin/mt_prep_gamma_snap
#set an execution flag manually, to make it accessible from the terminal
chmod u+x ~/StaMPS_v3.3b1/bin/mt_prep_gamma_snap
To start processing with StaMPS, enter INSAR_master_date and then SMALL_BASELINES. In the SMALL_BASELINES directory call the mt_prep_gamma_snap script and, when it has finished, launch Matlab from there, as you used to do in INSAR_master_date when processing PSI. But now, as we are processing SBAS, you have to do all this in the SMALL_BASELINES directory!
mt_prep_gamma_snap <yyyymmdd> ~/projectdir/INSAR_master_date/SMALL_BASELINES 0.6
matlab
When a SMALL_BASELINES folder exists in INSAR_master_date, the small_baseline_flag parameter is set to 'y' and, along with it, many other parameters change compared to the PS process in StaMPS; call getparm to get familiar with the parameters and start processing with stamps(1,1) etc. When you are ready to plot a time series, you cannot simply plot it from the processed ifgs but have to do so from the inverted data. Since the data is already inverted and ready to plot, you call ps_plot('V-D', 'ts') (for example) with capital letters to plot time series from SBAS data.
This step is optional but if you want to export data to a shiny application which allows you to plot and export your data in a comfortable way, you can do this in some very simple steps.
After you have created a plot like ps_plot('V-D', 'ts') and chosen a location for which you want to see the time series, objects are created which can be compiled into a table and exported as a .csv file:
To prepare the dataset in StaMPS/MTI-Matlab, run your process in Matlab until step 6 or further. Create a 'ts' plot and choose a region of interest you want to export. I recommend a search radius of 500 m, but this depends on point density and study site. This action creates some needed objects in Matlab. Which ps_plot you create is up to you, 'V-DO', 'V-D', 'V-O' etc., but be sure to repeat the same plot without 'ts' but with the -1 argument after that; see lines 1, 3 and 4 in the chunk below. In this example the 'V-D' argument must be the same in all lines; adapt it to your chosen method. Additional data is prepared in this step, similar to the Google Earth kml export in StaMPS/MTI.
The last adaption is optional: you can change the name of the output .csv file in the last line. The file will be written into your SMALL_BASELINES directory. After the export, move the .csv file to your case study folder in the stusi directory of the application.
ps_plot('V-D', 'ts');
load parms.mat;
ps_plot('V-D', -1);
load ps_plot_v-d.mat
lon2_str = cellstr(num2str(lon2));
lat2_str = cellstr(num2str(lat2));
lonlat2_str = strcat(lon2_str, lat2_str);
lonlat_str = strcat(cellstr(num2str(lonlat(:,1))), cellstr(num2str(lonlat(:,2))));
ind = ismember(lonlat_str, lonlat2_str);
disp = ph_disp(ind);
export_res = [lon2 lat2 disp ts];
metarow = [ref_centre_lonlat NaN transpose(day)-1];
k = 0;
export_res = [export_res(1:k,:); metarow; export_res(k+1:end,:)];
export_res = table(export_res);
writetable(export_res,'stamps_tsexport.csv')
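For a quick look at the exported table outside the shiny application, a pandas sketch could be used; as built above, the first data row is the meta row holding the reference lon/lat, a NaN and the day numbers, while all following rows hold lon, lat, mean displacement and the time series values:
#inspect the StaMPS time series export
import pandas as pd
dat = pd.read_csv('stamps_tsexport.csv')
meta = dat.iloc[0]    #reference lon/lat, NaN, day numbers
points = dat.iloc[1:] #lon, lat, displacement, time series values
print(points.head())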
Create a new folder, named after the study site or some other information, in ~/projectdir/SBAS/snap_sbas_ef/stamps_visualizer/stusi and move the .csv table to the newly created folder. Go to ~/projectdir/SBAS/snap_sbas_ef/stamps_visualizer, open the ui.R script in RStudio and click Run App in the upper right corner of the upper left window.
De Zan, F., Guarnieri, A.M., 2006. TOPSAR: Terrain observation by progressive scans. IEEE Transactions on Geoscience and Remote Sensing 44, 2352–2360. https://doi.org/10.1109/TGRS.2006.873853
ESA, n.d. User Guides. Sentinel-1 SAR. Acquisition Modes. Interferometric Wide Swath.
Ferretti, A., Monti-Guarnieri, A., Prati, C., Rocca, F., Massonnet, D., 2007. InSAR Principles. Guidelines for SAR Interferometry Processing and Interpretation. ESA Publications TM-19, Noordwijk.
Ferretti, A., Prati, C., Rocca, F., 2001. Permanent scatterers in SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing 39, 8–20. https://doi.org/10.1109/36.898661
Grandin, R., 2015. Interferometric Processing of SLC Sentinel-1 TOPS Data, in: Ouwehand, L. (Ed.), Proceedings of Fringe'15: Advances in the Science and Applications of SAR Interferometry and Sentinel-1 InSAR Workshop. ESA Publication SP-731, pp. 1–14. https://doi.org/10.5270/Fringe2015.pp116
Hooper, A., Segall, P., Zebker, H., 2007. Persistent scatterer interferometric synthetic aperture radar for crustal deformation analysis, with application to Volcán Alcedo, Galápagos. Journal of Geophysical Research: Solid Earth 112, 1–21. https://doi.org/10.1029/2006JB004763
Yagüe-Martínez, N., Prats-Iraola, P., González, F.R., Brcic, R., Shau, R., Geudtner, D., Eineder, M., Bamler, R., 2016. Interferometric processing of Sentinel-1 TOPS data. IEEE Transactions on Geoscience and Remote Sensing 54, 2220–2234. https://doi.org/10.1109/TGRS.2015.2497902