Compression of odb results
(OP)
I've found that gzip (DEFLATE) compression reduces most .odb file sizes by a factor of two or more, depending on model size. However, CPFE simulations with 1-2 million DOF sometimes push 10k-20k increments in a step, so I end up with very large .odb files that can't be reduced further, given the memory usage during the simulation and the number of variables I need for post-processing. These .odb files aren't terrible on their own, but when running studies with 100+ of these simulations, storing the data becomes a pain.
I'm sure I'm not the only one with this issue. Does anyone have a Python code or know of a method for retroactively rewriting the .odb files with some stride and perhaps dropping select data?
RE: Compression of odb results
There is a C++ utility included with the Abaqus installation called odbFilter. If you create a duplicate of your .odb by running a datacheck, you can then use odbFilter to copy the results for specific frames from the large .odb into the duplicate.
It is described in the Abaqus Scripting User's Guide:
Section 10.15.4 Decreasing the amount of data in an output database by retaining data at specific frames
Often I will run analyses and save data from multiple frames in each step for post-processing. Later, when archiving the files, I duplicate the .odb and use odbFilter to copy only the results from the final frame of each step into the duplicate. This reduces the size of the .odb significantly.
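As a rough command-line sketch of that workflow (the job names are placeholders, and the exact fetch/build/run commands for odbFilter vary by release, so verify them against the guide section cited above before relying on this):

```shell
# Fetch and build the odbFilter example utility shipped with Abaqus
# (file/target names here are assumptions; check your installation):
abaqus fetch job=odbFilter
abaqus make job=odbFilter

# Run a datacheck on the original input to create a duplicate .odb that
# contains the model data but no results:
abaqus job=myJob_archive input=myJob datacheck

# Run odbFilter to copy only the frames you want to keep from the full
# .odb into the duplicate:
abaqus odbFilter
```

The stride the OP asks about can then be expressed through which frames you tell odbFilter to retain.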
Hope this helps,
Dave