
Wall clock times lower for cpu only versus cpu+gpu simulation

(OP)
Hi all,

I'm running a strain model with millions of elements that takes several hours. Our group has recently started running simulation work on GPU clusters, and I thought I might try to take advantage of the new hardware for some of these ABAQUS/Implicit jobs. I've found the results in R2018x quite perplexing. I've included the job time summary data for each of two cases and was hoping someone might point me toward why I'm seeing this discrepancy, or toward literature on best practices for job formulations that take advantage of a CPU+GPU combination. I was under the impression that the job using the GPU would run significantly faster.

I do monitor the GPU using nvidia-smi, and it is most certainly being used for parts of the simulation.
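
In case it helps anyone reproducing this, here is a minimal sketch of how that utilization can be logged over time. The --query-gpu and --format flags are standard nvidia-smi options; the field list and the 5 s polling interval are arbitrary choices for illustration.

    import subprocess
    import time

    # Poll GPU utilization and memory use while the solver runs.
    # The query fields and the 5 s interval are illustrative choices.
    while True:
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=timestamp,utilization.gpu,memory.used",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        print(out.stdout.strip())
        time.sleep(5)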

job=x cpus=16 double=both int
JOB TIME SUMMARY
USER TIME (SEC) = 2.24008E+05
SYSTEM TIME (SEC) = 12359.
TOTAL CPU TIME (SEC) = 2.36367E+05
WALLCLOCK TIME (SEC) = 20171

job=x cpus=16 gpus=1 double=both int
JOB TIME SUMMARY
USER TIME (SEC) = 1.99401E+05
SYSTEM TIME (SEC) = 18010.
TOTAL CPU TIME (SEC) = 2.17411E+05
WALLCLOCK TIME (SEC) = 39364

These two cases were run on the same box. As far as I can tell, the job with the added GPU takes almost twice as long to run based on the wall clock time. It is possible I am misunderstanding this information. I would appreciate any insight the community can lend.
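
For what it's worth, if the summaries follow the usual accounting (total CPU time accumulated across all cores), then dividing total CPU time by wallclock time estimates the average number of busy cores. A quick sketch using only the figures quoted above:

    # Average core utilization implied by the two JOB TIME SUMMARY blocks,
    # assuming total CPU time is accumulated across all cores.
    runs = {
        "cpu only (cpus=16)":         (2.36367e5, 20171.0),
        "cpu + gpu (cpus=16 gpus=1)": (2.17411e5, 39364.0),
    }
    for name, (total_cpu, wallclock) in runs.items():
        print(f"{name}: {total_cpu / wallclock:.1f} cores busy on average, "
              f"{wallclock / 3600:.1f} h elapsed")
    # -> cpu only : ~11.7 cores busy on average, ~5.6 h elapsed
    # -> cpu + gpu: ~5.5 cores busy on average, ~10.9 h elapsed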

RE: Wall clock times lower for cpu only versus cpu+gpu simulation

Isn't the run with the GPU ~11% faster (user time)? Could the other quoted times be "virtual" because of the GPU use?

RE: Wall clock times lower for cpu only versus cpu+gpu simulation

(OP)
Thanks for your response, JXB. I may very well have misunderstood, but I thought that wallclock time meant the time elapsed as seen by the "clock on the wall". If user time is the more appropriate metric for elapsed real-world time, then I suppose that would make sense.

Can someone confirm?
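
For reference, under the usual Unix accounting (which the ABAQUS summary presumably follows), wallclock time is elapsed real time, while user and system time are CPU time accumulated across every core the job uses, so a 16-core job can report far more CPU time than elapsed time. A minimal, generic Python sketch of the distinction, assuming nothing ABAQUS-specific and at least 4 free cores:

    import os
    import time
    from concurrent.futures import ProcessPoolExecutor

    def burn(seconds):
        # Busy-loop to consume CPU time on one core.
        end = time.perf_counter() + seconds
        while time.perf_counter() < end:
            pass

    if __name__ == "__main__":
        start = time.perf_counter()
        with ProcessPoolExecutor(max_workers=4) as pool:
            # 4 workers x 2 s of CPU work each, run in parallel.
            list(pool.map(burn, [2.0] * 4))
        wallclock = time.perf_counter() - start

        # os.times() reports CPU time of this process and its (waited-on)
        # children; user time sums across all cores the workers used.
        t = os.times()
        print(f"wallclock ~ {wallclock:.1f} s")  # ~2 s elapsed
        print(f"user+sys  ~ {t.children_user + t.children_system:.1f} s")  # ~8 s of CPU time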
