World Community Grid Forums
Thread Status: Active | Total posts in this thread: 13
Ingleside
Veteran Cruncher | Norway | Joined: Nov 19, 2005 | Post Count: 974 | Status: Offline
Please DO NOT give more time to do calculations! I know I would be extremely peed off if I had to wait two months for a wingman. If you cannot return a WU in less than two weeks, you should consider only those projects that do a single replication. Penalising those who return a WU in a matter of days and then wait two months for it to be validated is detrimental.

A long deadline is an advantage when a project with a non-steady work supply suddenly has work, like the so-called DDDT2 "rain" or additional work for SIMAP beyond the normal monthly supply (though granted, SIMAP has had a continuous work supply lately), or when a project suddenly goes from 1+ year left to run down to 1 month and you want another colour on your badge.

For someone running a project 24/7, it doesn't really matter if the occasional result takes 1+ month to validate, since they'll get N points/day on average anyway. But granted, for many reasons, limiting the deadline to 14 days is in most instances an advantage for both the users and the projects, except for projects like Climateprediction.net where many of the WUs take 2+ weeks to crunch even while running 24/7.

BTW, even Folding@home, where they need fast turnaround times because the next WU is based on the previous result, has some work with a very long deadline. While much SMP and GPU work has only a 2- or 3-day deadline, going by the status page there's some GPU work that AFAIK can easily be done in 1 day but has a 48-day deadline.

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
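To make the "N points/day on average" argument concrete, here is a minimal sketch (not from any actual BOINC credit code; all numbers are hypothetical). For a host that returns results at a steady rate, validation latency only shifts when credit lands; the steady-state daily average is unchanged:

```python
# Minimal sketch: validation latency delays when credit is granted,
# but not the steady-state points/day for a 24/7 cruncher.
# All numbers below are made up for illustration.

RESULTS_PER_DAY = 4         # results returned per day, every day
POINTS_PER_RESULT = 100     # credit granted once a result validates
VALIDATION_DELAY_DAYS = 35  # wingman takes 1+ month

DAYS = 365
granted = [0.0] * (DAYS + VALIDATION_DELAY_DAYS)
for day in range(DAYS):
    # Credit for today's results only lands after validation.
    granted[day + VALIDATION_DELAY_DAYS] += RESULTS_PER_DAY * POINTS_PER_RESULT

# After the initial ramp-up, the daily average settles at the same
# value it would have with zero validation delay.
steady = granted[VALIDATION_DELAY_DAYS:DAYS]
print(sum(steady) / len(steady))  # 400.0 points/day either way
```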
JoergH
Cruncher | Joined: Mar 27, 2006 | Post Count: 6 | Status: Offline
Dynamic sizing is a very good idea. I think that could help. It's such a waste, if the time expires and you end up with 80% of a work unit. So, I'll wait and see. Maybe it also could be some help to bundle the cores for a work unit, if this is possible. On my Dual core machine it takes 10 hours, that would shrink to a convenient 5 hours. On a quad core, you could be finished in about 2 hours. Faster CPUs seem to be impossible, as there is still a frequency of about 3.6 Ghz unaltered for some years now. There seems to be a boarder, until silicon will be replaced by graphene some day... ;-)
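The halving assumes the whole work unit parallelizes perfectly; in practice Amdahl's law caps the speedup. A small sketch of that limit (the 10-hour runtime is the figure from the post above; the 5% serial fraction is an invented assumption):

```python
# Amdahl's law: speedup on n cores when a fraction s of the work
# is inherently serial. The 10-hour runtime is from the post above;
# the serial fraction of 0.05 is a hypothetical assumption.

def amdahl_speedup(n_cores: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

RUNTIME_HOURS = 10.0
for n in (1, 2, 4):
    ideal = RUNTIME_HOURS / n
    real = RUNTIME_HOURS / amdahl_speedup(n, 0.05)
    print(f"{n} cores: ideal {ideal:.1f} h, with 5% serial work {real:.2f} h")
```

On this assumption a quad core finishes in about 2.9 hours rather than the ideal 2.5, which is why bundling cores helps most when the per-WU computation has little serial work.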
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Multi-threading is a great idea being toyed with. There is, though, a big "but": if results are serially dependent, how do you parallelize? If one thread starts waiting on pieces computed by other threads, there can be substantial inefficiency, as someone complained about over at Berkeley [not to speak of the overhead that comes with multithreading]. So no, this is not the route for the kind of work we tend to do at WCG. Sizing to device power is what's coming, and as I understand it the server-side code is presently being integrated from Berkeley... loads of testing ahead for the work generation and distribution system. "Dynamically sized" is, BTW, an overstatement: there will be multiple fixed sizes, and the distributor will pick the appropriate size based on the device class, which is my simplified understanding.
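A rough sketch of what "pick from multiple sizes by device class" could look like, purely to illustrate the idea (the tier names, GFLOP costs, and thresholds are all invented; this is not the actual WCG/BOINC scheduler logic):

```python
# Hypothetical illustration: instead of generating a workunit sized
# exactly per device ("dynamic"), the work generator produces a few
# fixed sizes and the distributor picks one per device. All values
# below are made-up assumptions.

# Pre-generated workunit sizes, in estimated GFLOP of computation.
SIZE_TIERS = {"small": 5_000, "medium": 50_000, "large": 500_000}

def pick_tier(device_gflops: float, hours_per_day: float) -> str:
    """Pick the largest tier the device can finish in roughly a day."""
    budget = device_gflops * hours_per_day * 3600  # GFLOP available per day
    best = "small"  # fall back to the smallest size
    for name, cost in sorted(SIZE_TIERS.items(), key=lambda kv: kv[1]):
        if cost <= budget:
            best = name
    return best

print(pick_tier(device_gflops=0.5, hours_per_day=4))    # slow part-time host -> small
print(pick_tier(device_gflops=10.0, hours_per_day=24))  # fast 24/7 cruncher -> large
```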