World Community Grid Forums
Thread Status: Active | Total posts in this thread: 14
Former Member (Cruncher) | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hey, I am getting the following error when trying to download work. I've tried four different projects, and all return the same error:

>> Fri 15 May 2009 10:06:26 PM CDT|World Community Grid|Message from server: (reached daily quota of 64 results)

What's up with that? Am I just locked out until midnight?
----------------------------------------
Former Member (Cruncher) | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Yep
----------------------------------------
Former Member (Cruncher) | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I guess I don't understand having a limit on how much work we can run. I'm in the process of building a quad-core 64-bit system. Does this mean I can only run for about half a day? And why is it that others seem to have a higher limit?
----------------------------------------
Former Member (Cruncher) | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hello g00dkn16ht,

The daily quota is lowered every time you return an error. I think a quad-core starts off with a quota of 320 right now, so your quota has been cut to a fifth. Every time you return a valid result, your quota is raised again.

Lawrence
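A minimal sketch of the adjustment mechanism Lawrence describes, assuming a halve-on-error / double-on-success rule. The rule, the class name HostQuota, and the constants are invented for illustration; the thread does not show WCG's or BOINC's actual server logic, and the exact step sizes may differ.

    # Hypothetical per-host daily quota, matching the behaviour described
    # above: lowered on each error, raised on each valid result, with a
    # per-core default. Update rules are assumptions, not WCG server code.

    DEFAULT_QUOTA_PER_CORE = 80  # default mentioned later in the thread

    class HostQuota:
        def __init__(self, cores: int):
            self.max_quota = DEFAULT_QUOTA_PER_CORE * cores  # e.g. 320 for a quad-core
            self.quota = self.max_quota

        def on_error_result(self):
            # Assumed rule: each errored/aborted result halves the quota,
            # never below one work unit per day.
            self.quota = max(1, self.quota // 2)

        def on_valid_result(self):
            # Assumed rule: each valid result doubles the quota again,
            # capped at the per-host maximum.
            self.quota = min(self.max_quota, self.quota * 2)

    host = HostQuota(cores=4)
    print(host.quota)      # 320
    host.on_error_result()
    host.on_error_result()
    print(host.quota)      # 80 under the assumed halving rule (320 -> 160 -> 80)
    host.on_valid_result()
    print(host.quota)      # 160: climbing back toward the cap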
----------------------------------------
Former Member (Cruncher) | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The reason behind this quota system is to stop systems that have a problem from working through and erroring on a work unit every few seconds, which would lead to over 10,000 errored work units in a day from one machine.

In normal operation no currently existing computer should be able to crunch more than 80 work units per core (although they do seem able to on the micro work units of HCMD2 at the moment). If your computer really has a problem, the quota is acting as a healthy protection mechanism for WCG, and you should solve the problem at your end. If you've just had some bad luck (a bunch of bad work units in a row), the problem should resolve itself once you start returning healthy results again.

Note: I had this problem once myself when my computer was racing through HPF2 work units, erroring out within minutes on all of them, as some HPF2 batches don't seem to like Vista. In that case I had to turn off HPF2 for a while to get healthy results back and restore my quota, so I suggest you check your Results page to see whether you can spot a pattern in your errors.
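A quick back-of-the-envelope check of the "over 10,000 errored work units in a day" figure, assuming one error every 8 seconds. The 8-second interval is an assumption chosen only to illustrate the "every few seconds" claim above.

    # How many work units a broken host could error through in a day.
    # The 8-second interval is an assumed example, not a measured value.

    SECONDS_PER_DAY = 24 * 60 * 60   # 86,400
    seconds_per_error = 8            # assumed: one error every few seconds

    errors_per_day = SECONDS_PER_DAY // seconds_per_error
    print(errors_per_day)            # 10800 -> over 10,000, as stated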
----------------------------------------
JmBoullier (Former Community Advisor, Normandy - France) | Joined: Jan 26, 2007 | Post Count: 3716 | Status: Offline
Hi g00dkn16ht!

In addition to what has been said before, which is perfectly right, I must add:
- tasks that you abort manually are also considered errors, and
- when you detach a device, all tasks in its queue at that time are aborted and thus count as errors.

I hope that will help you understand what has got your daily quota so low.

Lastly, the daily quota is reset to its default (usually 80 tasks per core, maybe more right now because of the short HCMD2 WUs) at 00:00:00 UTC, in case it is still under the default at that time.

Cheers. Jean.
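Since the reset happens at 00:00:00 UTC rather than local midnight, a small sketch like this can work out how long a host waits for the next reset. Purely illustrative; only the reset time itself comes from the post above.

    # Time remaining until the daily quota reset at 00:00:00 UTC.
    from datetime import datetime, timedelta, timezone

    now = datetime.now(timezone.utc)
    next_midnight_utc = (now + timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0
    )
    print("Quota resets in:", next_midnight_utc - now)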
----------------------------------------
AnRM (Advanced Cruncher, Canada) | Joined: Nov 17, 2004 | Post Count: 102 | Status: Offline
>> Note: I had this problem once myself when my computer was racing through HPF2 workunits, erroring out within minutes on all of them, as some HPF2 batches don't seem to like Vista.

Thanks for the info. The only random errors on HPF2 we've seen are on Vista machines; I never made the connection until your note. Cheers!
----------------------------------------
Former Member (Cruncher) | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I have a question about aborted units.

>> "- that tasks that you abort manually are also considered as errors - and that when you detach a device all tasks in the queue at this time are aborted and thus in error."

I understand that manually aborted tasks, and tasks aborted when you detach a device, are treated as errors and should affect your quota. But what if the WCG server aborts a WU before you even have a chance to crunch it? Is that still considered an error against your quota?

BETA_CMD2_0003-PPP5A.clustersOccur-SKP1A.clustersOccur_39_63064_63439_2-- 611 | Valid | 5/17/09 07:46:51 | 5/17/09 09:56:24 | 2.03 | 24.3 / 23.9
BETA_CMD2_0003-PPP5A.clustersOccur-SKP1A.clustersOccur_39_63064_63439_0-- 611 | Aborted | 5/17/09 07:46:50 | 5/17/09 10:07:27 | 0.00 | 0.0 / 0.0 (not even 10:00 my time yet)
BETA_CMD2_0003-PPP5A.clustersOccur-SKP1A.clustersOccur_39_63064_63439_1-- 611 | Valid | 5/17/09 07:46:43 | 5/17/09 09:54:22 | 1.55 | 23.5 / 23.9
----------------------------------------
nasher (Veteran Cruncher, USA) | Joined: Dec 2, 2005 | Post Count: 1423 | Status: Offline
If you are returning valid results, you shouldn't hit this quota limit.

If you are returning errors, then yes, you might hit the quota limit, but it is designed to protect against computers that aren't reporting properly. Hopefully you won't hit too many bad jobs.

Another thing you could try is to find out whether all of your errors come from one project... if so, you should probably shift to different projects to prevent this from happening.
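One way to follow nasher's suggestion: tally errors per project from a copy of your Results page. A sketch, assuming the rows have been copied by hand into (project, status) pairs; the sample data here is invented for illustration.

    # Count errored results per project so a single misbehaving project
    # stands out. Sample rows are invented; real data would come from
    # your Results page.
    from collections import Counter

    results = [
        ("HPF2", "Error"),
        ("HPF2", "Error"),
        ("HCMD2", "Valid"),
        ("HPF2", "Error"),
        ("FightAIDS@Home", "Valid"),
    ]

    errors_by_project = Counter(
        project for project, status in results if status == "Error"
    )
    print(errors_by_project)  # Counter({'HPF2': 3}) -> HPF2 stands out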
----------------------------------------
Former Member (Cruncher) | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Thanks for all the responses. I hadn't posted a reply myself because my ISP has been down for three days. I think I have a good handle on the issue now. Thanks again, everyone.