Glad to see that snap2stamps works fine for you as well!
Regarding the mt_prep_gamma_snap run you made, could you tell me which values were reported as the average amplitude per scene? If an s was written in the script instead of an f for the format type, the script will return values around 0.xxxxx instead of much higher ones, and then you may get errors later inside the StaMPS processing.
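To make the format-type issue concrete, here is a purely illustrative NumPy sketch (not the snap2stamps or StaMPS code itself): interpreting the same raw samples with the wrong sample format gives a mean amplitude far outside the expected range. The dtypes and value ranges below are assumptions for the illustration only.

```python
import numpy as np

# Illustration only: the same raw bytes, read with the right and the
# wrong sample format, give completely different mean amplitudes.
rng = np.random.default_rng(0)
amp_f = rng.uniform(20000, 30000, size=1000).astype(">f4")  # float samples ("f")
raw = amp_f.tobytes()

# Correct format: big-endian float32 ("f"-style reading)
mean_correct = float(np.frombuffer(raw, dtype=">f4").mean())

# Wrong format: the very same bytes read as 16-bit shorts ("s"-style reading)
mean_wrong = float(np.abs(np.frombuffer(raw, dtype=">i2").astype(np.int64)).mean())

print(mean_correct)  # roughly 25000, i.e. a plausible average for this data
print(mean_wrong)    # a very different number computed from the same bytes
```

The point is simply that a single wrong format character silently corrupts the reported averages, which is why checking them is a useful sanity test.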
Regarding your question: in both cases you should keep in the search folder only the data that you want to process or export.
The splitting starts by reading the data within the slaves folder, but later steps read all the other folders.
The StaMPS export reads the ifg and coreg folders.
The current best option is to move the already processed data elsewhere. But this indeed increases the need for an overwrite option (true or false) in the processing. I will take care of it.
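Until an overwrite option exists, the workaround could be sketched as below. This is not part of snap2stamps: the coreg/ifg folder names follow the thread above, and the processed_backup destination is just an assumed name for illustration.

```python
import shutil
from pathlib import Path

def stash_processed(project: str, subfolders=("coreg", "ifg")) -> int:
    """Move already-processed results out of the folders that get scanned.

    Hypothetical helper, not part of snap2stamps: it relocates the contents
    of the coreg/ and ifg/ folders into processed_backup/ so that a rerun
    or the StaMPS export only sees the data you still want handled.
    """
    root = Path(project)
    backup = root / "processed_backup"  # assumed destination name
    backup.mkdir(parents=True, exist_ok=True)
    moved = 0
    for sub in subfolders:
        src = root / sub
        if not src.is_dir():
            continue
        for item in sorted(src.iterdir()):
            # Prefix with the source folder name so names cannot collide.
            shutil.move(str(item), str(backup / f"{sub}_{item.name}"))
            moved += 1
    return moved
```

Running it between export batches keeps the scanned folders limited to the data you still want handled, without deleting anything.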
Thank you for pointing it out. I noticed it in the other topic but overlooked it.
With mt_prep_gamma_snap I get mean amplitudes of 30-50, while with mt_prep_gamma I get values between 24000 and 26000. I am currently testing how this difference affects the later steps.
In that case it should be OK. The values you got are in the expected range for C-band (and obviously it changes from site to site).
Hence, I expect you will manage to get through the entire StaMPS processing.
It is normal, as these are the default values for those variables; it means that you had not set the reference point, so it takes the entire scene and uses the average value as reference (in fact, I was still wondering what you meant by “no lat/lon values displayed when checking with getparm”).
I am still surprised by the 22 PS given amplitude values in that 30-50 range. I am wondering whether the processing of some slave went wrong. Could you check it in calamp.out? There the average values per slave are described, and maybe some of them are near 0… not sure.
Anyway, please let me know which output you get using this latter script.
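A minimal way to scan calamp.out for suspicious slaves could look like the sketch below. It assumes the usual two-column layout (file path first, mean amplitude as the last column); the threshold of 1.0 is an arbitrary choice for spotting near-zero slaves.

```python
def flag_low_amplitudes(calamp_lines, threshold=1.0):
    """Return (file, value) pairs whose mean amplitude looks suspiciously low.

    Assumes one slave per line, with the file path first and the mean
    amplitude as the last column; the threshold is an arbitrary cutoff
    for spotting near-zero slaves.
    """
    flagged = []
    for line in calamp_lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed or empty lines
        name, value = parts[0], float(parts[-1])
        if value < threshold:
            flagged.append((name, value))
    return flagged
```

Feeding it the lines of calamp.out (e.g. via `open("calamp.out").read().splitlines()`) would list any slave whose average sits near 0.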
Thank you. I checked again: all (n=36) lie between 28 and 56. Before weeding there are around 17000 PS, but after weeding only 20 remain. Again, the geocoding is correct, but the script seems to miss the correct PS candidates.
The PROJECTFOLDER should contain the full path under which the slaves folder can be found.
My guess is that if step 1 (the slave preparation) worked, the problem is probably not the PROJECTFOLDER.
Could you provide more info about this? You had run step 1, right?
Was your first try also run on Windows? I have only tested it on several Linux distros.
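Independently of the OS question, a quick sanity check of the PROJECTFOLDER entry could be sketched as follows. This is a hypothetical helper, assuming a plain KEY=value configuration file in which one line defines PROJECTFOLDER (as in the snap2stamps conf).

```python
from pathlib import Path

def check_projectfolder(conf_path: str) -> bool:
    """Return True if PROJECTFOLDER points at a directory with a slaves/ folder.

    Hypothetical check, assuming a plain KEY=value configuration file in
    which one line defines PROJECTFOLDER.
    """
    for line in Path(conf_path).read_text().splitlines():
        if line.strip().startswith("PROJECTFOLDER="):
            root = Path(line.split("=", 1)[1].strip())
            return (root / "slaves").is_dir()
    return False  # key not found at all
```

If this returns False while step 1 succeeded, the mismatch would point at the configuration rather than the processing itself.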
It was the same machine, and as far as I know I entered a full project folder path in both cases. Step 1 was successful; all data was sorted according to the acquisition date. I am currently repeating it from scratch, and now it works again…
Strange, but as long as it runs, fine.