S1TBX taking up a lot of space in the /tmp directory

Hi,
I installed S1TBX and SNAP on our Linux server. After several weeks of use, I found that the /tmp directory was full because of the many temporary files S1TBX writes there. Is there any way to change the directory it writes into?

Best regards,
Luyi

Hi Luyi,

I had the same problem with S1TBX due to limited space in my /tmp directory. I added the following line to the default s1tbx.vmoptions file to specify a new location for the temporary files:
-Djava.io.tmpdir=/data/Sentinel-1A/temp

This worked successfully. I had a look at my SNAP installation with S1TBX and there does not seem to be a default s1tbx.vmoptions file provided, but I think you can create one. The gpt shell script tries to read a file with this name:
read_vmoptions "$prg_dir/$progname.vmoptions"
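
If I read that line correctly, it looks for <scriptname>.vmoptions in the script's own directory, so for gpt you would create gpt.vmoptions next to the gpt script. A rough sketch (the bin path and the temp directory are from my setup; adjust both):

# create a vmoptions file next to the gpt script (adjust the path to your install)
cat > /opt/S1TBX/bin/gpt.vmoptions <<'EOF'
-Djava.io.tmpdir=/data/Sentinel-1A/temp
EOF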

Perhaps even more simply, you can just add the temporary directory option to the java exec command at the end of the gpt shell script.

I have pasted below the contents of my s1tbx.vmoptions from S1TBX. You can also change other settings through this configuration file, such as increasing the memory available to S1TBX.

I hope this information is helpful for you.

Kind regards,
Steven

$ cat /opt/S1TBX/s1tbx.vmoptions

# Enter one VM parameter per line
#
# For example, to adjust the maximum memory usage to 512 MB, uncomment the following line:
# -Xmx512m
#
# To include another file, uncomment the following line:
# -include-options [path to other .vmoption file]

-Xmx11164M
-Ds1tbx.home=/opt/S1TBX
-Ds1tbx.splash.image=/opt/S1TBX/resource/images/snap_splash.png
-Ds1tbx.build.date=20150506
-Dceres.context=s1tbx
-Djava.io.tmpdir=/data/Sentinel-1A/temp
-Xverify:none
-XX:+AggressiveOpts
-XX:+UseFastAccessorMethods
-XX:+UseParallelGC
-XX:+UseNUMA
-XX:+UseLoopPredicate


As a workaround you can do what srhubbard suggested, but you should add the line

-Djava.io.tmpdir=/data/Sentinel-1A/temp

to the file $SNAP_HOME/etc/snap.properties.

Actually, though, these files should be cleaned up. Can you tell us what type of files you find in the tmp directory that come from SNAP/S1TBX?

https://senbox.atlassian.net/browse/SNAP-215

Hi Marpet,

One of these files is "imageio1194935035275329431.tmp", for example.

Kind regards,
Luyi

Dear Marco,

I am receiving a java.io.IOException: No space left on device
and consequently:
org.esa.snap.core.gpf.OperatorException: I/O error reading image metadata!

My toolbox is installed in /opt/s1tbx/.

I added
-Djava.io.tmpdir=/my/big/temp/dir
to the file /opt/s1tbx/etc/snap.properties

However, when running
/opt/s1tbx/bin/gpt -e graphfile.xml
I still got the I/O error mentioned above.

Then, I tried to run as follows:
/opt/s1tbx/bin/gpt -e -Djava.io.tmpdir=/my/big/temp/dir graphfile.xml
And I did not get the error anymore.

Any idea why the property defined in /opt/s1tbx/etc/snap.properties is not working?

Thanks in advance,
Joaquim

snap.properties is just for SNAP properties, not for VM properties :wink:

For gpt you can specify the VM property in /opt/s1tbx/bin/gpt.vmoptions.
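
The .vmoptions file takes one VM option per line, so in your case it would only need to contain something like this (using the directory from your post):

# /opt/s1tbx/bin/gpt.vmoptions, one VM option per line
-Djava.io.tmpdir=/my/big/temp/dir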

Thank you for the suggestion.

Indeed, some small temporary files are written to the tmpdir defined in gpt.vmoptions (on the partition I pointed it to) and also to /tmp, but I still see that the used space on my main partition keeps growing while gpt is running, until I receive the "no space left" error. After the error, the space that was used up during the gpt run is freed.

Therefore, other temporary files are being written somewhere else… I verified that this is not /tmp. Do you know where these files might be written, or what I can do to debug this behavior?
Thanks in advance.

Are you working with Sentinel-2 or Sentinel-1 data? What kind of operators are you using in your processing chain?
This would be good to know before I can give you some more hints.

OK, the data is clear. It’s Sentinel-1.

In this test I am actually using a high-resolution RADARSAT-2 product.
Attached is the XML graph file, which has only one Read node and one Write node.
test.xml (677 Bytes)

I just found, using the command fatrace, that the temporary file causing the error is being written to my home directory, in .snap/var/cache/temp/imageio7565718895106317660.tmp
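
For anyone else debugging this: a plain fatrace run in a second terminal while gpt is processing is enough to see which files are written where; the grep pattern below is just an example to narrow the output.

# needs root; run while gpt is processing, the grep only filters the output
sudo fatrace | grep imageio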

For gpt, try setting the value of snap.userdir in snap.properties to another directory.
Then .snap/var/cache/temp/imageio7565718895106317660.tmp should end up in the new location.
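
In snap.properties this is an ordinary property line, for example (the target directory is only an illustration; point it at a partition with enough space):

# in snap.properties; the path is only an example
snap.userdir = /my/big/temp/dir/snap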

That's exactly what I did.
It worked in the sense that the temporary file is created in the directory I redefined in snap.properties (as expected), but strangely the used space on my main partition still keeps growing.
It now seems that the temporary file is being written to two different folders (one on each partition), but no longer to .snap/var/cache/temp/

I have now tested this with the SNAP Desktop application.
I open the product and then export it to BEAM-DIMAP.
I see the "Writing bands of product" dialog, but after a while the dialog closes (though I see no exception or error). The BEAM-DIMAP file has been created, although it is incomplete (I cannot open the bands, for example), because there was no more disk space available to complete the operation.
So this issue seems to affect SNAP Desktop as well.

We found the root cause of our issue. It was actually confusion over the mount points: my temporary folder was not actually mounted on an external device, but still resided on the main partition.
Sorry for this, and thanks for your help again.

No problem. Good that it is solved.
You’re welcome.

Using SNAP 6 on Ubuntu, I see that these files are still not being cleaned up, and my /tmp dir ran out of space while processing a stack for StaMPS using gpt.
I adjusted gpt.vmoptions and snap.properties to point to a new temp dir with more available storage, which I don't really see as a permanent solution.
It also creates quicklooks for Sentinel-1 in the /tmp folder: /tmp/snap/1525859720503-0/S1A_IW_SLC__1SDV_20150530T172543_20150530T172610_006153_007FF3_4207.SAFE/preview/…png
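
A possible stopgap, not a fix: leftovers under /tmp/snap can be cleared between runs with something along these lines (assuming nothing else on the machine uses /tmp/snap; it only removes entries older than one day):

# remove stale /tmp/snap entries older than one day; assumes nothing else uses /tmp/snap
find /tmp/snap -mindepth 1 -maxdepth 1 -mtime +1 -exec rm -rf {} +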