World Community Grid Forums
Thread Status: Active. Total posts in this thread: 29
[B@H] Kokomiko
Cruncher Joined: Jul 11, 2008 Post Count: 6 Status: Offline
I don't crunch only for credits, otherwise I would not crunch here. But calculating for science must not squander electrical power. If I could get three times the output from my computational power with a better-compiled program, why shouldn't I ask for optimization? The electricity bill is the same whether I crunch bad code or optimized code. And I have more than one PC running.
----------------------------------------
Besides, a lot of crunchers do look at the credits. After Cosmology cut their credits per hour, they lost 85% of their volunteers. Look here for the situation. Knowing what the volunteers need, someone should think about it ...
KerSamson
Master Cruncher Switzerland Joined: Jan 29, 2007 Post Count: 1684 Status: Offline
Hello Kokomiko,
----------------------------------------
I fully understand what you mean. Around nine months ago I already mentioned this fact, after I noticed some "incompatibilities" between particular projects and specific host architectures. The advantage of this observation was that some problems were discovered and ... solved (see, for example, the "page fault" problem).

At that time, I also mentioned that, given the immense need for computational power, optimized code is not a luxury but simple fairness to the projects waiting for resources. At the same time, I understand the technical support's point of view: they are afraid of having to maintain multiple code implementations and optimizations, with the risk of significantly losing efficiency and reliability, particularly when debugging and testing.

Indeed, I do not crunch for credits, even if it is flattering to reach good performance. However, I consider granted credits a performance indicator, and that is why I try to understand deviations. I never privileged or deselected a project because of the granted credits. Nonetheless, I did temporarily deselect some projects because of significant instability or poor performance (and I maintain from time to time a dashboard of performance observations by project and host architecture). It is not worthwhile to devote resources to a project which does not fit the host architecture.

The question of optimization depends - IMHO - strictly on the (financial and human) resources available to the technical support. Likewise, if we consider the necessity of improving (or putting in place) some governance, resources are needed for that too. After around 18 months crunching at WCG, I am really impressed by the accuracy and reliability of the technical support, even if improvements are always welcome.

Have a nice day,
Sekerob
Ace Cruncher Joined: Jul 24, 2005 Post Count: 20043 Status: Offline
Quoting KerSamson:
"Hi, just to confirm Jean's assessment. I have one powerful system running XP64 and it has been chronically under-granted for a year. The contrast is particularly obvious if I compare a Q6600 running XP32 (over 170 results) with the dual E5345 running XP64 (over 320 results). The first host generally receives 12.67% more than claimed, and the second host receives 9.14% less than claimed. I use 32-bit BOINC with XP32 and 64-bit BOINC with XP64. Considering the theoretical advantage of 64-bit systems, the recognition of the host performance does not seem appropriate. Furthermore, even though the Q6600 has a slightly higher frequency (2.4 GHz) than the dual E5345 (2.33 GHz), the E5345 should deliver better performance thanks to the Xeon design. It is difficult to identify the root cause, but it looks like 64-bit systems experience some credit prejudice. You can take a look at the comparison here. Anyway, have a good crunch! Cheers, Yves"

This situation may more or less end with the zero-redundancy DDDT, which kicks in soon for FAAH too. There is a test at the beginning of each job which forms part of the validation and credit computation. Interested to hear from you how the credit on your 64-bit system looks for DDDT and FAAH after the switch.
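The over- and under-granting percentages quoted above are just the relative difference between granted and claimed credit. A minimal sketch of that arithmetic (the helper function is illustrative, not part of BOINC or WCG):

```python
def deviation_pct(claimed: float, granted: float) -> float:
    """Percent deviation of granted credit relative to claimed credit."""
    return (granted - claimed) / claimed * 100.0

# Figures in the spirit of the comparison: a host claiming 100 credits
# that is granted 112.67 is over-granted by 12.67%; one granted 90.86
# is under-granted by 9.14%.
print(round(deviation_pct(100.0, 112.67), 2))  # 12.67
print(round(deviation_pct(100.0, 90.86), 2))   # -9.14
```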
WCG
----------------------------------------
Please help to make the Forums an enjoyable experience for All!
[Edited 1 time; last edit by Sekerob at Aug 5, 2008 7:52:08 AM]
KerSamson
Master Cruncher Switzerland Joined: Jan 29, 2007 Post Count: 1684 Status: Offline
Sekerob,
----------------------------------------
At the beginning of September, I will perform another investigation to evaluate only the new WUs. I will keep you informed about the results; perhaps we will be in for some surprises? (It would not be the first time that feelings, impressions, and numerical facts did not really match; some impressions can be illusory.)

By the way, I would appreciate it if the result status pages could be returned as XML (the "&xml=true" parameter is not honored for such requests). Performing cut & paste between browser and text editor for over 40 pages is neither fun nor efficient.
[B@H] Kokomiko
Cruncher Joined: Jul 11, 2008 Post Count: 6 Status: Offline
Quoting KerSamson:
"At this time, I mentioned also that, regarding the immense need for computational power, optimized code is not a luxury but simple fairness to the projects waiting for resources."

Hello KerSamson, from my point of view this means that I let my old hardware (Intel D925, T2400, PIII 800) run for WCG, while the modern and efficient hardware crunches for projects with optimized code. Anything else would be a waste of electrical power, and basic research for mathematical or chemical projects is also needed.
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
I see that the full facts haven't been explained in this thread (although they have been explained many times in the past).
To create optimised applications for specific architectures takes a large investment of time. It is also extremely difficult when the source code is in FORTRAN. If optimised applications were created, the number of applications WCG would be forced to maintain would skyrocket. There are already three for each project: that's 15-18 different applications maintained by a small team. With optimised variants, that number would jump to 60 or more. Clearly this is impractical! Strike one.

Even if that weren't a problem, validation would be affected. Some optimisations have no effect on results, but more aggressive optimisation often changes the results, which makes validation harder. Strike two.

Then we have the perceived inequality of 64-bit machines. Why do some projects perform better than others, and why do the claimed credits differ from the granted credits? This is a little tricky, so bear with me. The benchmark has two components, integer and floating point. 64-bit computers tend to have a high integer benchmark. This means they tend to claim high for floating-point-intensive projects and low for integer-intensive projects (I think I got that the right way around). Is this unfair? No. The granted credit will still be proportional to the actual work done; the difference is that such computers really can do integer computation faster. A few talented people have compared the performance of native 64-bit applications with 32-bit applications, and the difference is surprisingly small. Strike three.

And that is why World Community Grid provides the applications it does: not because they don't want to squeeze the most from your computer, but for eminently practical reasons.

As for those crunchers who go where the credits are: adiós. We are concerned with humanity here, not credit counting. If you try to compare credits across different BOINC projects, you will fail. There is simply no basis for comparison.
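For readers unfamiliar with how the two benchmark components feed into a credit claim: in classic BOINC accounting, one cobblestone is defined as 1/200 of a day of computing at a sustained 1 GFLOPS, and the claim uses the average of the floating-point (Whetstone) and integer (Dhrystone) benchmarks. A simplified sketch of that scheme (the function and figures are illustrative, not WCG's actual scheduler code):

```python
def claimed_credit(cpu_seconds: float,
                   whetstone_ops_per_sec: float,
                   dhrystone_ops_per_sec: float) -> float:
    """Classic BOINC-style credit claim: average the floating-point
    (Whetstone) and integer (Dhrystone) benchmarks, then scale so one
    day at a sustained 1 GFLOPS is worth 200 cobblestones."""
    avg_gops = (whetstone_ops_per_sec + dhrystone_ops_per_sec) / 2 / 1e9
    return cpu_seconds / 86400 * avg_gops * 200

# A host with a high integer benchmark claims more for the same CPU
# time, even on a floating-point-heavy workunit -- the imbalance
# described above.
balanced = claimed_credit(3600, 2.0e9, 2.0e9)   # 2 GFLOPS, 2 GIPS
int_heavy = claimed_credit(3600, 2.0e9, 4.0e9)  # 2 GFLOPS, 4 GIPS
print(round(balanced, 2), round(int_heavy, 2))
```

Because the grant is proportional to work actually done, two hosts with different benchmark mixes can show opposite claimed-versus-granted deviations on the same project, which is consistent with the 64-bit observations earlier in the thread.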
Cross-project credit is a myth. This has been discussed extensively by the BOINC developers, and only a few still cling to the idea that a workable solution can be found. So, don't bother comparing WCG credits with credits from other projects.
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
I would prefer to avoid sounding argumentative. Efficiency depends on which metric you apply. If the figure of merit is science per kilowatt-hour, then optimized programs would be more efficient. If the figure of merit is science per programmer-hour, we think our current approach is more efficient.
Since the idea is for people contributing computer time to feel good about their contribution, we are using an open-source system (BOINC) that allows people a wide choice once they learn how to use BOINC. They are not restricted to the World Community Grid. It has been a while since I last posted the list of active distributed computing projects maintained at http://distributedcomputing.info/projects.html Give it a look.

Lawrence
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
Hello Kokomiko; hello KerSamson,
Hello everyone,

Reading the ideas in the various posts by Mr. Kokomiko and Mr. KerSamson (say, KK, for short), and the interaction between them, suggests to me that KK are just about on the same page about what they want to ask for here at WCG, and each seems quite clear about it as well: optimized code that makes efficient use of the host architecture. KK have put forward arguments for the idea; whether it will prove to be a good one, of course, remains to be seen. Responses from CAs and others ranged from some degree of agreement to skepticism.

Mr. Didactylos' post [Aug 6, 2008 6:14:10 PM] stands out as the most pessimistic thus far in this thread, sounding more interested in striking out the opponent (three strikes, right?) than in playing his hand well: pressing, for example, the idea that the current status quo (non-optimized code) is as good an overall situation as we can have. Since that cannot always be the case, unless one does not want improvement at all, it only shows that the way forward is to give KK's idea the light of day.

Sure enough, KK (as well as others) have been consistently clear that nobody is talking about points for the sake of making points. A portion of Mr. Kokomiko's post [Aug 4, 2008 11:10:31 PM] ...

"I don't crunch only for credits, otherwise I would not crunch here. But calculating for science must not squander electrical power. If I could get three times the output from my computational power with a better-compiled program, why shouldn't I ask for optimization? The electricity bill is the same whether I crunch bad code or optimized code. And I have more than one PC running."

... is not asking for more points, as I see it. A portion of Mr. KerSamson's post [Aug 5, 2008 7:23:50 AM] just about nailed the matter of optimization right on the head ...

"The question of optimization depends - IMHO - strictly on the (financial and human) resources available to the technical support. Likewise, if we consider the necessity of improving (or putting in place) some governance, resources are needed for that too. After around 18 months crunching at WCG, I am really impressed by the accuracy and reliability of the technical support, even if improvements are always welcome."

Thus, I guess it all boils down (at this point) to us helping get the parameters and their numbers out, and seeing whether we can justify the cost of optimizing. That'll be for later; for now ...

Happy crunching everyone :-)

[N.B. to lawrencehardin: how about "people contributing computer time to feel good about their contribution" while running on optimized code? Now I'm asking too much ... :-)]
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
andzgrid, this is my whole point, really. The World Community Grid techs are vastly over-stretched, and have many demands on their time. Since World Community Grid is not operated for profit, resources are very limited.
We all want better, more efficient software. My post was mainly to explain why things are the way they are. I left out positive notes, such as: the World Community Grid techs do spend time (as much as they can spare) optimising the science applications. But rather than waste time on architecture-specific optimisation, they make optimisations which benefit all architectures and all members, and thus give the greatest benefit to the project.

Any single one of the reasons I gave for not providing optimised applications would alone make it impractical. But there are so many other, easier optimisations possible. The World Community Grid techs make the best use of their time that they can.
[B@H] Kokomiko
Cruncher Joined: Jul 11, 2008 Post Count: 6 Status: Offline
Quoting Didactylos:
"We all want better, more efficient software. My post was mainly to explain why things are the way they are."

... and is it so hard to use a flag on a Fortran compiler? http://www.intel.com/support/performancetools/fortran/sb/CS-010417.htm