Thread Status: Active
Total posts in this thread: 781
This topic has been viewed 621236 times and has 780 replies
ca05065
Senior Cruncher
Joined: Dec 4, 2007
Post Count: 328
Status: Offline
Re: OpenPandemics - GPU Stress Test

@erich56
After I completed the final GPU work unit in my queue, the work request changed from:
Requesting new tasks for CPU and NVIDIA GPU
[World Community Grid] [sched_op] CPU work request: 3798671.04 seconds; 0.00 devices
[World Community Grid] [sched_op] NVIDIA GPU work request: 341697.74 seconds; 0.00 devices
to:
Requesting new tasks for CPU
[World Community Grid] [sched_op] CPU work request: 3660133.20 seconds; 0.00 devices
[World Community Grid] [sched_op] NVIDIA GPU work request: 0.00 seconds; 0.00 devices

I have not been able to find any explanation. The graphics card section of the device settings is still set to request GPU work units.
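For anyone watching for the same symptom, here is a small sketch that scans event-log lines for the sched_op work requests shown above and reports whether the most recent NVIDIA request has dropped to zero. It assumes the standard BOINC log wording quoted in this post; the function name is mine.

```python
import re

# Matches BOINC sched_op lines such as:
# [World Community Grid] [sched_op] NVIDIA GPU work request: 341697.74 seconds; 0.00 devices
REQUEST_RE = re.compile(
    r"\[sched_op\] (?P<resource>.+?) work request: "
    r"(?P<seconds>[\d.]+) seconds"
)

def gpu_request_stopped(log_lines):
    """Return True if the most recent NVIDIA GPU work request is 0.00 seconds."""
    last = None
    for line in log_lines:
        m = REQUEST_RE.search(line)
        if m and "NVIDIA" in m.group("resource"):
            last = float(m.group("seconds"))
    return last == 0.0

log = [
    "[World Community Grid] [sched_op] CPU work request: 3660133.20 seconds; 0.00 devices",
    "[World Community Grid] [sched_op] NVIDIA GPU work request: 0.00 seconds; 0.00 devices",
]
print(gpu_request_stopped(log))  # True
```

This only detects the symptom; note that in the excerpt above the request header itself changed to "Requesting new tasks for CPU", i.e. the client stopped asking for GPU work rather than the server declining it.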
[May 6, 2021 7:14:49 PM]
jeffwy
Advanced Cruncher
Taiwan
Joined: Nov 17, 2004
Post Count: 77
Status: Offline
Re: OpenPandemics - GPU Stress Test

I did not see the posts about the GPU stress test, so I did not join. What I did notice was that the GPU WUs were not really utilizing much of my GPU; the fans on the card weren't even spinning during the days the project was running. They did cause my CPU fans to constantly change speeds, though, which other projects, GPU and non-GPU alike, typically do not.

In addition, was that it? A week of GPU units and we're finished for now?
[May 7, 2021 12:54:09 AM]
DennyInDurham
Cruncher
USA
Joined: Aug 4, 2020
Post Count: 23
Status: Offline
Re: OpenPandemics - GPU Stress Test

jeffwy wrote:
I did not see the posts about a GPU stress test so did not join, but what I did notice was that the GPU WUs were not really utilizing a lot of my GPU, the fans on the GPU weren't even spinning during the days the project was running. But it would cause my CPU fans to constantly change speeds where other projects including both GPU and non-GPU projects typically would not do.

In addition, was that it? A week of GPU units and we're finished for now?


About 30,000 GPU WUs about as fast as possible. It found a hiccup or two and melted the file system on the server when they packaged some of the results, along with some client SSDs. wink

GPU WUs are back to trickling out now.
[May 7, 2021 4:23:23 AM]
erich56
Senior Cruncher
Austria
Joined: Feb 24, 2007
Post Count: 295
Status: Offline
Re: OpenPandemics - GPU Stress Test

DennyInDurham wrote:
...along with some client SSDs. wink
I saw this coming, and in order to avoid it I switched to a RAM disk early enough smile
[May 7, 2021 5:05:03 AM]
jeffwy
Advanced Cruncher
Taiwan
Joined: Nov 17, 2004
Post Count: 77
Status: Offline
Re: OpenPandemics - GPU Stress Test

I am certainly hoping that you are being sarcastic about SSDs dying and GPUs overheating...

I store all my WUs on an HDD, not an SSD, anyway. lol.
[May 7, 2021 6:42:20 AM]
squid
Advanced Cruncher
Germany
Joined: May 15, 2020
Post Count: 56
Status: Offline
Re: OpenPandemics - GPU Stress Test

Here are some AutoDock alternatives.

https://click2drug.org/index.php#Docking
[May 7, 2021 7:06:10 AM]
Crystal Pellet
Veteran Cruncher
Joined: May 21, 2008
Post Count: 1324
Status: Offline
Re: OpenPandemics - GPU Stress Test

DennyInDurham wrote:
About 30,000 GPU WUs about as fast as possible.

Not 30,000 WUs, but 30,000 batches. I don't know how many WUs are in one batch.
[May 7, 2021 10:55:16 AM]
goben_2003
Advanced Cruncher
Joined: Jun 16, 2006
Post Count: 146
Status: Offline
Re: OpenPandemics - GPU Stress Test

OK, that makes more sense but a GPU core is not equivalent to a CPU core and you are only using 70% of 24 cores.

There must be a compromise position but a 24x markup in your example is excessive. Using that basis a GTX1660 running a single WU would have about a 1400x markup which is ridiculous.

Sure, then maybe 1 hr * 24 cores * (70/100) ≈ 17 would be fine? I am not sure I agree that 24 is excessive, but I do think it should be more than 2 (it uses a CPU core too).
I would like to note some things: in my testing, there was more total science done (jobs/hr) when running just the iGPU without CPU units, so I ran just the iGPU on the machines without discrete graphics. My time credit earned per day went downhill, but the amount of science done went way up. I ran it this way anyway, since I consider the science more important than time credit. I still do like time credit, though. smile

Now for discrete graphics, maybe counting all 1920 cores would be excessive (using the NVIDIA RTX 2060 as an example). Maybe per warp group, as poppinfresh suggested, would be fine? That would still be a very large number. So in this example, 1920 / 32 * 1 hr * % load?

One major problem is that this is all much more complicated to implement in practice. For one, I do not think load % is sent. It also gets trickier with how many WUs run at a time: the maximum set to run at a time is sent to the server, but I do not think the number actually running is. Also, while my NVIDIA card does report its warpSize (32), it does not appear to report the number of cores (1920) or the number of warp groups (60).
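For concreteness, here is a quick sketch of the two multipliers proposed above, using the numbers from the discussion (24 CPU cores at 70% load, and an RTX 2060 with 1920 CUDA cores and a warpSize of 32). The function names are mine, and neither formula is anything WCG or BOINC actually implements; they are just the posters' suggestions worked through.

```python
# Proposal 1: treat the GPU WU like a CPU run,
# 1 hr * 24 cores * (70/100) ≈ 17 "core-hours".
def cpu_style_multiplier(cores=24, load_pct=70):
    return cores * load_pct / 100

# Proposal 2: count per warp group instead of per CUDA core,
# (1920 cores / warpSize 32) * 1 hr * load%.
def warp_group_multiplier(cuda_cores=1920, warp_size=32, load_pct=100):
    return cuda_cores / warp_size * load_pct / 100

print(round(cpu_style_multiplier()))  # 17
print(warp_group_multiplier())        # 60.0
```

Even the per-warp-group version yields a 60x multiplier at full load, which illustrates the poster's point that it "would still be a very large number".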
[May 7, 2021 10:55:33 AM]
DennyInDurham
Cruncher
USA
Joined: Aug 4, 2020
Post Count: 23
Status: Offline
Re: OpenPandemics - GPU Stress Test

Crystal Pellet wrote:
About 30,000 GPU WUs about as fast as possible.

Not 30,000 WUs, but 30,000 batches. Don't know how many WUs are in 1 batch.

Yes, you're correct.
[May 7, 2021 6:51:42 PM]
jlrobins58@gmail.com
Cruncher
Joined: Jan 2, 2021
Post Count: 5
Status: Offline
Re: OpenPandemics - GPU Stress Test

I kind of miss the GPU WUs, other than that they blocked an African Rainfall Project unit from running.

I am new and just a casual supplier of CPU/GPU power, but the GPU test increased my throughput score to over a million a day!?! That is up from a few hundred per day before and since.

My laptop did run noticeably hotter, though. I have both an Intel HD 630 iGPU and an NVIDIA GeForce GTX 1050, and it appeared that both were grinding through WUs.
[May 19, 2021 3:00:32 AM]