Yes, I also find this interesting. Maybe it's an accidental coincidence. I can't really think of a reason why this would help prevent the memory error. … Maybe because the data can be transferred faster and less of it needs to be kept in memory? Not sure.
I updated my Java and enabled the Use FileCache option, and this worked for me. I was having issues before, even on a 16 GB RAM system.
Simple solution that worked for me: I closed all the unnecessary products in the Product Explorer panel. I have fairly powerful hardware (i7-4720 at 2.6 GHz, 16 GB RAM, a dedicated graphics card, and a dual SSD/classic hard drive setup), so this error was quite unexpected.
16 GB should definitely be enough, but more RAM never hurts.
I have a 16 GB RAM system and I am processing a stack of 30 dates of Sentinel-1 data in order to apply a multi-temporal filter. My study area is not covered by the same track on all dates (for some dates I have two S-1 images for the same date), so I can't apply the same subset to all images.
Due to the "cannot construct data buffer" problem on the whole image, I wrote a shell/Python script that added dates, month by month, to my master image subset (January 1st, 2016), and it performed very well until November.
It is impossible to perform the stack with all the December images. I increased the memory parameters as specified below, and I even tried to stack just the December images (6 dates), but I always got the same "cannot construct data buffer" error. Stacking the December images two by two worked, but with the 6-date stack I got the famous error! I find that very strange. Does anyone have an idea or a suggestion? Thanks in advance for your help!
Perhaps you have installed the 32-bit version of SNAP while your Java is 64-bit. Also, the data should not be on an external hard drive.
Have you changed the snappy.ini file?
Hi Marpet, thank you for the reply. No, I didn't, because I'm not using snappy in my Python script; it is just a script to rename the stacked images and the .dim file after each stack (the image names became too long, with the master name appended each time, e.g. Sigma0_VV_slv10_25Mar2016_mst_01Jan2016.img to Sigma0_VV_slv10_25Mar2016.img).
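For later readers, a rename step like the one described can be sketched in Python. The `_mst_<date>` suffix pattern is an assumption based on the example filename above, so adjust the regular expression to your actual naming scheme:

```python
import re

def strip_master_suffix(name):
    """Remove the '_mst_<date>' part that coregistration appends, e.g.
    'Sigma0_VV_slv10_25Mar2016_mst_01Jan2016.img'
    -> 'Sigma0_VV_slv10_25Mar2016.img'."""
    return re.sub(r"_mst_\d{2}[A-Za-z]{3}\d{4}", "", name)

# Applied to every band file in a stack's .data folder, something like:
# for f in os.listdir(data_dir):
#     os.rename(os.path.join(data_dir, f),
#               os.path.join(data_dir, strip_master_suffix(f)))
```

Note that the .dim file references the band file names, so it would need the same substitution applied to its contents.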
But my problem was solved when working on a 32 GB RAM computer…
I would like to share my solution in case somebody has "suffered" the same error as I did. I am working with the snappy module on Python 2.7. I tried everything discussed here on my own laptop, which has 10 GB, and nothing worked. So I thought there was nothing else to do and that it was all about the RAM. I then tried on the computer in my office, where I don't have administrator permissions and which has 32 GB… There it worked. I then decided to reinstall everything on my own laptop in the same way I had at the office. Since I had no administrator permissions there, I had installed both SNAP and the Intel Distribution for Python in my C:\Users\user\AppData\Local folder, where no admin rights are required. This is what finally did the trick for me! The only other things I changed were enabling the file cache option and writing 10G in the snappy.ini file. I'm new to Python and this is probably a very basic solution, which is perhaps why it wasn't discussed here, but for all the new users, there it is. I hope it helps you!
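For anyone looking for the snappy.ini change mentioned above: the file sits in the snappy installation folder, and the relevant key raises the JVM heap limit. A 10G setting would look roughly like this (the `snap_home` path is just an example and varies per install):

```
[DEFAULT]
snap_home = C:\Program Files\snap
java_max_mem = 10G
```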
I know that this adds to an old post, but I keep getting the "Cannot construct data buffer" error on a 32 GB RAM computer. I work on Linux (launching a graph with GPT for SLC-to-GRD processing with the latest SNAP version) and RAM consumption does not climb above 10 GB.
Have you configured your gpt.vmoptions file? Please see, for example, the following post:
Thanks for the quick answer. There are indeed many posts on the topic, but I couldn't find the best solution. I've previously tried -c 30G and -c 16G without success.
I've tried with -Xmx16G, but then I get another error:
INFO: org.hsqldb.persist.Logger: dataFileCache open start
…10%. Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000003f2f80000, 1206386688, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 1206386688 bytes for committing reserved memory.
Is there a compromise between -c and -Xmx16G? (Like using both at 16G?)
Update: I tested -Xmx together with -c and it didn't work, but using -Xmx alone worked fine. Thank you for the tips.
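To summarize the working setup for later readers: gpt reads its JVM options from the gpt.vmoptions file next to the gpt executable in SNAP's bin folder, one option per line. Leaving gpt's -c tile-cache flag at its default and only raising the heap, as done above, would mean a file containing just:

```
-Xmx16G
```

The 16G value is simply what was tried in this thread, not a general recommendation; it has to fit within the machine's physical RAM, which is why combining a large -Xmx with a large -c can fail with "Cannot allocate memory".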
I have read through this forum, and it appears that there is no solution.
I am having the same problem. I have used three machines (one Mac and two Dells). The Mac has low memory, but both Dells have 16 GB RAM, with Java and all products up to date. The processes do not appear to use all the RAM resources on the Dells. I have also enabled the file cache option in Tools and tried that. Moreover, I have tried incorporating the lines of code that foran suggested, without success.
When I process SLC products using the SNAP GUI, I get the same "Cannot construct data buffer" message for Sentinel-1A products. This is not the case with EOLI SLC products, nor with GRD Sentinel-1A products.
Could someone give a concrete answer (perhaps if you have solved this issue)? Please double-check your grammar before submitting, as some replies are hard to follow. Furthermore, could this be a constructive conversation and not a contest?
With kind regards,
I have found a potential solution to the problem: to process large images with SNAP (around 7 GB), a computer with a large amount of RAM, and preferably multiple processors and cores, is required. Personally, I spent time processing images and attempting InSAR on a computer that has two processors with eight cores each and 256 GB RAM, with no errors. Because of this, using a computer with 16 GB of RAM or less is not recommended when processing complete datasets.
Following advice from other users, it is best to split the dataset into its sub-swaths. Using S-1 TOPS Coregistration, the sub-swaths can be split into their IW components. By doing this, I have had no trouble processing data from coregistration through Range-Doppler Terrain Correction on a single processor with two cores (four logical processors) and 8 GB RAM. Furthermore, I found that restarting the computer and removing unnecessary processes (closing excess programs in the task bar can help) also stopped this issue on a computer with a single processor, two cores, and 8 GB RAM.
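The split described above can also be done from the command line before coregistration. A sketch using gpt's TOPSAR-Split operator follows; the input file name is a placeholder, and the parameter names should be checked against `gpt TOPSAR-Split -h` for your SNAP version:

```
# Extract one sub-swath (and one polarisation) so each stack stays small
gpt TOPSAR-Split -Ssource=S1A_IW_SLC_product.zip \
    -Psubswath=IW1 -PselectedPolarisations=VV \
    -t split_IW1_VV.dim
```

Processing one sub-swath at a time keeps each data buffer well under the full-scene size, which is why it avoids the error on low-RAM machines.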
I hope this helps other users / researchers.