goben_2003
Advanced Cruncher
Joined: Jun 16, 2006
Post Count: 146
Re: OpenPandemics - GPU Stress Test

Time is as time does. If you contribute by processing 100 GPU WUs at 30 seconds each then you have done 50 minutes processing.

To say that if you had done them on the CPU it would have taken 150 hours is ridiculous.

If we are talking about how fast a core is, yes. A Raspberry Pi is much slower than many other CPUs. It has 4 cores and is able to get roughly 4 days of time credit per day, just like some earlier 2-core/4-thread Intel i5s.

The issue here, though, is not about speed. It is about the # of cores.
With a CPU you get time spent × number of cores (really threads), not just 1 hour per CPU. The GPUs are also multicore, but time credit pretends they only have 1 core. It should be per core, just like CPUs.


Duh, yes.

As I said, it should be the sum of the time taken by all of the GPU WUs against the time taken by all of the CPU WUs, not the number of GPU WUs times the time that would have been taken had they been CPU WUs, which is what the poster appeared to be saying.

Certainly, if you do 100 GPU WUs and they take 30 seconds each, you should get 50 minutes credit and not 25 because you were doing two at a time, but that was not how the post I responded to was phrased: “but that could be gotten round by allocating notional time credits to GPU units that are equivalent to time to process as CPU only”.

I think that I did not state what I meant well. It is OK if we disagree, but understanding each other's point is nice. smile

A GPU WU uses multiple cores. You cannot run 1 WU per core. Some GPUs have enough cores that you need to run multiple WUs to saturate the GPU, but that is nowhere close to 1 per core. I will use an Intel UHD 520 as an example since it has fewer cores (24). 1 WU is 70% average/100% peak utilization. It is faster partially because it is parallelizing the work across many cores. The time credit calculation pretends that it is 1 core.

So 24 cores are being used, but you get time credit for 1 core.

Please note that I am not saying that a GPU should be getting time credit based on the time it would take a CPU to complete it. Rather, it should be based on the # of cores used, just like with CPUs. So if the UHD 620 completes the WU in 1 hour it should get 24 cores × 1 hour = 24 hours of credit. This is very different from saying it should get the ~70 hours the same amount of work would take on the i7 in the same machine.
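To put numbers on it, here is a tiny sketch of the two schemes (the names and figures are just illustrations of my point; this is not how WCG actually computes credit):

// Made-up illustration of the two crediting schemes discussed above.
// None of these names come from WCG/BOINC; only the arithmetic matters.
#include <cstdio>

// Current behaviour: a GPU WU is credited as if a single core did the work.
double creditCurrent(double wuHours) { return wuHours; }

// Proposed behaviour: credit elapsed time once per core used,
// the same way CPU credit is elapsed time x threads.
double creditPerCore(double wuHours, int coresUsed) { return wuHours * coresUsed; }

int main() {
    const double wuHours = 1.0; // the 1-hour WU from the example above
    const int gpuCores = 24;    // the 24-core iGPU from the example above
    printf("current scheme : %.0f hour(s)\n", creditCurrent(wuHours));           // prints 1
    printf("per-core scheme: %.0f hour(s)\n", creditPerCore(wuHours, gpuCores)); // prints 24
    return 0;
}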

Hopefully I stated what I meant more clearly this time. smile
----------------------------------------

[May 6, 2021 7:45:24 AM]
siu77
Cruncher
Russia
Joined: Mar 12, 2012
Post Count: 22
Re: OpenPandemics - GPU Stress Test

Thank you, poppinfresh99, that one line of code is clear to me now.

One more question: does it make sense to move all calculations to the GPU, or is computing on a CPU preferable in some cases?
[May 6, 2021 7:46:35 AM]
aegidius
Cruncher
Joined: Aug 29, 2006
Post Count: 25
Re: OpenPandemics - GPU Stress Test

My little ol' GTX-750 is just waiting for some more GPU WUs to make its heart sing and its fan spin :-)
[May 6, 2021 11:11:21 AM]
poppinfresh99
Cruncher
Joined: Feb 29, 2020
Post Count: 49
Re: OpenPandemics - GPU Stress Test

One more question: does it make sense to move all calculations to the GPU, or is computing on a CPU preferable in some cases?


Good question. Many calculations run slower on a GPU even if someone spends a large amount of time writing the best GPU code for them, because the calculations cannot be made highly parallel. I put the if/else statement in my code example to show how GPUs can start to be slower: threads sometimes have to wait on other threads.
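Here is a tiny illustrative kernel (CUDA-style; to be clear, this is not the actual OPNG code) showing that effect:

// Illustrative only -- not OPNG's real kernel. The if/else makes threads of
// the same warp take turns instead of all running at once (branch divergence).
__global__ void divergentKernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (in[i] > 0.0f) {
        out[i] = in[i] * 2.0f;   // these threads run while the rest of the warp waits...
    } else {
        out[i] = -in[i];         // ...then these run while the first group waits
    }
}

The warp executes both branches one after the other, so a branch-heavy algorithm loses much of the GPU's advantage, while a CPU pays almost nothing for the same branch.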

This is why few people want to run OPN1 (CPU-only) tasks when OPNG exists. The OPN algorithm can be made highly parallel, and WCG spent the resources to write the GPU code, so we don't want to waste CPU time on it; that time could instead be spent on calculations that cannot be made highly parallel.
[May 6, 2021 1:10:11 PM]
poppinfresh99
Cruncher
Joined: Feb 29, 2020
Post Count: 49
Re: OpenPandemics - GPU Stress Test

It should be per core just like CPUs.


A powerful GPU could then do multiple years of work in a day! Though maybe this isn't a bad thing?

A compromise could be to give time credit per GPU work group of let's say 32 threads. (Nvidia calls it a "warp", AMD calls it a "wavefront", etc.). In fact, I believe that some people refer to these work groups as the "cores" of the GPU since the 32 threads are acting more like a single core that is a vector processor.

If doing this timing is tricky, the value could be predetermined.
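Something like this, purely as a sketch with made-up names and numbers (the 32 is the warp/wavefront width mentioned above, and nothing here is real BOINC/WCG code):

const int kWarpSize = 32;  // "warp" (Nvidia) / "wavefront" (AMD) width

// Credit elapsed time once per group of 32 threads the WU actually kept busy.
double creditPerWarp(double wuHours, int threadsUsed) {
    int warps = (threadsUsed + kWarpSize - 1) / kWarpSize;  // round up
    return wuHours * warps;
}

// Made-up example: a WU that keeps 96 GPU threads busy for 1 hour would earn
// 96 / 32 = 3 warps x 1 hour = 3 hours of credit.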

My apologies if I'm using the terminology incorrectly!
----------------------------------------
[Edited 1 time, last edit by poppinfresh99 at May 6, 2021 1:41:15 PM]
[May 6, 2021 1:19:16 PM]
Bryn Mawr
Senior Cruncher
Joined: Dec 26, 2018
Post Count: 346
Re: OpenPandemics - GPU Stress Test

Time is as time does. If you contribute by processing 100 GPU WUs at 30 seconds each then you have done 50 minutes processing.

To say that if you had done them on the CPU it would have taken 150 hours is ridiculous.

If we are talking about how fast a core is, yes. A Raspberry Pi is much slower than many other CPUs. It has 4 cores and is able to get roughly 4 days of time credit per day, just like some earlier 2-core/4-thread Intel i5s.

The issue here, though, is not about speed. It is about the # of cores.
With a CPU you get time spent × number of cores (really threads), not just 1 hour per CPU. The GPUs are also multicore, but time credit pretends they only have 1 core. It should be per core, just like CPUs.


Duh, yes.

As I said, it should be the sum of the time taken by all of the GPU WUs against the time taken by all of the CPU WUs, not the number of GPU WUs times the time that would have been taken had they been CPU WUs, which is what the poster appeared to be saying.

Certainly, if you do 100 GPU WUs and they take 30 seconds each, you should get 50 minutes credit and not 25 because you were doing two at a time, but that was not how the post I responded to was phrased: “but that could be gotten round by allocating notional time credits to GPU units that are equivalent to time to process as CPU only”.

I think that I did not state what I meant well. It is OK if we disagree, but understanding each other's point is nice. smile

A GPU WU uses multiple cores. You cannot run 1 WU per core. Some GPUs have enough cores that you need to run multiple WUs to saturate the GPU, but that is nowhere close to 1 per core. I will use an Intel UHD 520 as an example since it has fewer cores (24). 1 WU is 70% average/100% peak utilization. It is faster partially because it is parallelizing the work across many cores. The time credit calculation pretends that it is 1 core.

So 24 cores are being used, but you get time credit for 1 core.

Please note that I am not saying that a GPU should be getting time credit based on the time it would take a CPU to complete it. Rather, it should be based on the # of cores used, just like with CPUs. So if the UHD 620 completes the WU in 1 hour it should get 24 cores × 1 hour = 24 hours of credit. This is very different from saying it should get the ~70 hours the same amount of work would take on the i7 in the same machine.

Hopefully I stated what I meant more clearly this time. smile


OK, that makes more sense, but a GPU core is not equivalent to a CPU core and you are only using 70% of the 24 cores.

There must be a compromise position, but a 24x markup in your example is excessive. On that basis, a GTX 1660 running a single WU would have about a 1400x markup, which is ridiculous.
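To put numbers on that (the core count is mine, from Nvidia's published spec, not from the post above): a GTX 1660 has 1408 CUDA cores, so per-core crediting would multiply elapsed time by roughly 1408, whereas the per-warp compromise suggested above would multiply it by only 1408 / 32 = 44.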
----------------------------------------
[Edited 1 time, last edit by Bryn Mawr at May 6, 2021 1:42:24 PM]
[May 6, 2021 1:38:45 PM]
Crystal Pellet
Veteran Cruncher
Joined: May 21, 2008
Post Count: 1324
Re: OpenPandemics - GPU Stress Test

Result Name: OPNG_0021605_00029_2--

<core_client_version>7.16.11</core_client_version>
<![CDATA[
<message>
WU download error: couldn't get input files:
<file_xfer_error>
<file_name>a68d4e2e07883d1a8044b3af79be8e02.job</file_name>
<error_code>-200 (wrong size)</error_code>
</file_xfer_error>
</message>
[May 6, 2021 3:59:06 PM]
erich56
Senior Cruncher
Austria
Joined: Feb 24, 2007
Post Count: 295
Re: OpenPandemics - GPU Stress Test

I observe something very strange:
one of my 4 PCs, the one which crunched most of the WUs during the stress test, now does not receive a single WU, whereas the remaining 3 PCs do receive WUs at least once in a while.

Does anyone have an explanation for this?
----------------------------------------
[Edited 1 time, last edit by erich56 at May 6, 2021 5:23:54 PM]
[May 6, 2021 4:18:52 PM]
Bryn Mawr
Senior Cruncher
Joined: Dec 26, 2018
Post Count: 346
Re: OpenPandemics - GPU Stress Test

It should be per core just like CPUs.


A powerful GPU could then do multiple years of work in a day! Though maybe this isn't a bad thing?

A compromise could be to give time credit per GPU work group of let's say 32 threads. (Nvidia calls it a "warp", AMD calls it a "wavefront", etc.). In fact, I believe that some people refer to these work groups as the "cores" of the GPU since the 32 threads are acting more like a single core that is a vector processor.

If doing this timing is tricky, the value could be predetermined.

My apologies if I'm using the terminology incorrectly!


If the proposal is number of WUs × number of warps the WU uses, then I could support that; it would take account of the actual usage of the GPU and give a level of equivalence between a GPU core and a CPU thread.
[May 6, 2021 5:02:51 PM]
DennyInDurham
Cruncher
USA
Joined: Aug 4, 2020
Post Count: 23
Re: OpenPandemics - GPU Stress Test

How you gonna keep'em down on the farm after they've seen Paris?

It will be a month before I can even see the current points contribution on the bar graph wink
[May 6, 2021 5:14:09 PM]