mellejgr
Materials
- Feb 4, 2019
Hi everyone,
I am collecting results from a very large .odb (4 GB) using a Python script, and it is taking a very long time. Does anyone know whether it is possible to run the following code in parallel, or have any other suggestions for speeding it up? As you can see, I already use a node set so that I only collect results at the points of interest, and I am only looking at a single frame.
odb = session.openOdb(name=pathway+job+'.odb')
a = odb.rootAssembly
step = odb.steps['Step-1']
FRAMEOFINTEREST=10
nodeNAME='NODES'+str(nx*ny)
nodeSET = a.nodeSets[nodeNAME]
du=[]
dv=[]
dn=[]
frame = step.frames[FRAMEOFINTEREST]
field = frame.fieldOutputs['U'].getSubset(region=nodeSET)
for value in field.values:
    n = value.nodeLabel
    u, v = value.data
    du.append(u)
    dv.append(v)
    dn.append(n)
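One idea I have not tried yet, so please treat this as a rough, untested sketch rather than working code, is reading the subset through bulkDataBlocks instead of looping over field.values one value at a time, which I understand can be much faster for large fields. This assumes bulkDataBlocks behaves the same on the result of getSubset, that U has two components here (2D model), and that numpy is available in the Abaqus Python interpreter:

import numpy as np

frame = step.frames[FRAMEOFINTEREST]
field = frame.fieldOutputs['U'].getSubset(region=nodeSET)

labels = []
data = []
for block in field.bulkDataBlocks:
    # one block per instance; grab node labels and component data in bulk
    labels.append(np.asarray(block.nodeLabels))
    data.append(np.asarray(block.data))   # shape (nValues, nComponents)

dn = np.concatenate(labels)
uv = np.concatenate(data)
du, dv = uv[:, 0], uv[:, 1]   # two components assumed; a 3D model would have three

If anyone has experience with bulkDataBlocks on node-set subsets like this, or with splitting the read across processes, I would be glad to hear it.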

Kind regards,
Melle