World Community Grid Forums
Thread Status: Active | Total posts in this thread: 5541
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
Hey guys - I apologize. I certainly didn't mean to stir up such a storm. Snurk - you are doing an absolutely super job with your signatures. I have read all the various comments, and have to agree that your technique is probably the fairest one going. So - let's end it here. I said I would abide by your decision, and I do so.
----------------------------------------Once again - many thanks for your great work. [Edit 1 times, last edit by Former Member at Mar 9, 2012 6:02:59 PM] |
||
|
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
My thinking would be that if you retain the daily export for 30 days and take the newest one and the one nearest the 30-day mark, take the run-time difference and divide by the days between those two exports, you'd have a fair mean. For starters you could accept to just use that formula, or test the join date against the last export date to see if it's less than 30 days, and use that as an averager... molto complicato ;? Either that, or keep stats for every day and take the average of the last 30 'good' days ('good' being the days on which the script succeeded in getting the data). That way there's a possibility to take the spikes out. Say: pick 30 days, take the two highest spikes and the two lowest spikes out, and divide the remainder by 26. Or something... That's what I did way back when I created a signature just for myself, with the history available under My Grid / My Statistics History.

As I commented in the ''Going under...'' and "Building 600 TFLOPS machine" threads, [we] may have to review whether to stick to the old 100 cobblestones per GFLOPS for performance indication, or go with the new BOINC 200 cobblestones/GFLOPS. For the moment the credits have not at all doubled [compared to what SETI gives per second, though that includes their GPU credits] under the new server 7 credit system. We used to run 1.59 TFLOPS per year, and after a few days on the new system it's 1.806/year, so I'm holding off for a little, maybe until we get GPU contributions running at WCG. Anticipating, but not overly, that this will lift credit substantially [not having been allowed to sniff around the tech kitchen at all as to how the credits will be approached for that part]. Time will though remain time: as I understand it, one day of GPU work is one day of run time as clocked by the elapsed time, meaning the term 'run time' has to remain and is not clean CPU time or GPU time [GPUs do not understand the time concept, which is why elapsed time is taken].

Oops, that last statement surprised me. Do I understand correctly that for GPU work, all that's available will be elapsed time, not run time? Meaning that if you set the GPU to work only 50% of the time in BOINC, it will still report a run time of 24 hours per day instead of 12?

Then to the GFLOPS calculation. Do you know of any instance where 'our' GFLOPS have been compared to the real GFLOPS that a particular system is capable of? I suppose the theoretical GFLOPS of a particular CPU can be found in a datasheet somewhere, or an application could test and tell you the GFLOPS of your system; BOINC has such a benchmark, if that's any good. It would be interesting to know whether these values are close or way off. Perhaps that could bring us somewhat closer to the multiplication value to settle on.

The last 30 good stat days is fine too. Taking the increase between two dates and dividing by the days passed is, I think, just as good. No need for scientific precision down to the 8th decimal... I can't imagine dogfights because one member has 7.256 days and the other 7.255; it's generally indicative to me, in good jest.

Yes, what BOINC logs is elapsed time for GPU tasks. If this is throttled to 50%, then that's reflected in the elapsed time; I don't think you'll get the ol' UD agent running at 25% and still clocking 100% run time per day. We'll just have to wait for what WCG decides on. By the way, I consider run time and elapsed time interchangeable, a semantic variation that only slightly differs. For credit, the official wiki page was changed by an admin in July 2010 from 100 to 200 credits per GFLOPS; the history: http://boinc.berkeley.edu/w/?title=Computatio...;diff=2818&oldid=1126 . No notes as to the motivation [widespread hyperthreading?]. I know of no place other than GFLOPS where we compute off the points/credits. I'm kind of not expecting a separate credit/time recording scheme for GPU work; expecting it all to go into a single HCC pool adding to the same badge. --//-- |
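The two averaging schemes discussed above can be sketched in a few lines. This is a hypothetical illustration, not the actual signature script: `trimmed_mean` implements the "drop the two highest and two lowest spikes out of 30 good days" idea, and `span_average` the "run-time difference between two exports divided by the days between them" variant. The function and parameter names are my own.

```python
def trimmed_mean(values, trim=2):
    """Average daily run times after dropping the `trim` highest and
    `trim` lowest spikes (30 good days -> average of the remaining 26)."""
    if len(values) <= 2 * trim:
        raise ValueError("need more than 2*trim values")
    kept = sorted(values)[trim:len(values) - trim]
    return sum(kept) / len(kept)

def span_average(newest_total, oldest_total, days_between):
    """Cumulative run time of the newest export minus the export nearest
    the 30-day mark, divided by the days separating the two exports."""
    return (newest_total - oldest_total) / days_between
```

Either way you get a daily mean that shrugs off a single missed or doubled stats day, which is the whole point of the exercise.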
||
|
acp134
Cruncher Joined: Feb 14, 2011 Post Count: 2 Status: Offline |
It's amazing! Please sign me up if possible! Thank you!
----------------------------------------Account number: 734762 [Edit 1 times, last edit by none1996 at Mar 11, 2012 10:45:23 AM] |
||
|
SNURK
Veteran Cruncher The Netherlands Joined: Nov 26, 2007 Post Count: 1217 Status: Offline |
Hi Sek, have a look at this check-in note, "David 10 Jun 2010", that I found while doing some Google searching on the subject: http://boinc.berkeley.edu/svn/tags/boinc_core_release_6_11_9/checkin_notes Could this be about the same 100/200 change we are debating here? The date seems to coincide nicely with the official wiki change. If 200 is the official word from Berkeley, then I'm beginning to lean towards being in favour of changing it here too. |
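For reference, the cobblestone-to-GFLOPS arithmetic being debated is just a division by the per-GFLOPS-day credit rate. A hedged sketch (the function name and defaults are mine; 200 is the post-2010 wiki figure and 100 the old one):

```python
def estimated_gflops(credits_per_day, cobblestones_per_gflops_day=200):
    """Rough sustained GFLOPS implied by a daily credit rate, under the
    BOINC convention of granting N cobblestones per GFLOPS-day of work.
    Pass 100 to reproduce the pre-July-2010 figure."""
    return credits_per_day / cobblestones_per_gflops_day
```

For example, a member earning 3,000 credits a day rates as 15 GFLOPS under the new figure but 30 GFLOPS under the old one, which is why the choice of constant doubles or halves every signature's reported speed.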
||
|
SNURK
Veteran Cruncher The Netherlands Joined: Nov 26, 2007 Post Count: 1217 Status: Offline |
...And of course there are new signatures:
----------------------------------------http://i1007.photobucket.com/albums/af195/wcgsig/717121.gif ; JBL71 http://i1007.photobucket.com/albums/af195/wcgsig/734762.gif ; none1996 Greetings, SNURK |
||
|
NKrader
Cruncher Joined: Jan 1, 2010 Post Count: 2 Status: Offline |
I'd like one :)
----------------------------------------User ID 662595 [Edit 1 times, last edit by NKrader at Mar 13, 2012 5:43:20 PM] |
||
|
pfm3136
Cruncher Joined: Apr 11, 2010 Post Count: 13 Status: Offline |
I'd also like one :)
----------------------------------------Account Number: 680818 Thanks. |
||
|
acp134
Cruncher Joined: Feb 14, 2011 Post Count: 2 Status: Offline |
Thank you! I love it!
|
||
|
JBL71
Cruncher Joined: Nov 2, 2010 Post Count: 2 Status: Offline |
Thank You Snurk.
---------------------------------------- |
||
|
SNURK
Veteran Cruncher The Netherlands Joined: Nov 26, 2007 Post Count: 1217 Status: Offline |
Signatures added:
----------------------------------------http://i1007.photobucket.com/albums/af195/wcgsig/662595.gif ; NKrader http://i1007.photobucket.com/albums/af195/wcgsig/680818.gif ; pfm3136 Happy crunching! SNURK |
||