World Community Grid Forums
Thread Status: Active | Total posts in this thread: 7
BoatYuma
Cruncher | Joined: Nov 6, 2008 | Post Count: 1 | Status: Offline
What if someone with a workstation cracks the BOINC result and sends an intentionally altered, erroneous result back to the WCG server?
----------------------------------------
If WCG sent the same task to two different workstations and then compared the results for agreement, that would be avoided and there would be no errors in the results. [Edit 1 times, last edit by spekulanten at Mar 30, 2010 6:57:43 PM]
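The two-copy scheme suggested above is essentially quorum validation as used in BOINC-style grids: accept a result only when enough independent hosts agree. A minimal sketch, assuming a hypothetical `validate_quorum` helper (names and the minimum quorum of 2 are illustrative, not WCG's actual server code):

```python
def validate_quorum(results, min_quorum=2):
    """Return the canonical result if at least `min_quorum` hosts agree, else None.

    `results` is a list of (host_id, result) pairs returned by crunchers.
    """
    agreeing = {}
    for host_id, result in results:
        agreeing.setdefault(result, []).append(host_id)
    for result, hosts in agreeing.items():
        if len(hosts) >= min_quorum:   # independent hosts agree
            return result
    return None                        # inconclusive: send out another copy

# Two hosts returned the same value, so a lone tampered copy cannot win:
print(validate_quorum([("hostA", "abc123"), ("hostB", "abc123"), ("hostC", "zzz")]))
# → abc123
```

When no two copies match, the function returns `None`, mirroring the "inconclusive result" state mentioned later in this thread, and the server would issue the task to yet another host.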
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Spekulanten, if you are referring to the zero-redundancy FAAH and HFCC projects, which send out only one copy most of the time: there are so many safeguards, both permanent and random cross-verifiers, that tampering is practically impossible. Those safeguards include techniques inside the result itself that would immediately flag it for re-verification, i.e. it would turn into an inconclusive result.
----------------------------------------
WCG
Please help to make the Forums an enjoyable experience for All!
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hello everyone,
Thank you for posting this. In my opinion, it is a pity that not all projects are constructed with zero redundancy. Why do only some projects use those safety techniques that allow zero redundancy, and not all of them? Sending out the same job more than once consumes a lot of crunching power; power that could and should be used to move forward instead, shouldn't it? All the best to everyone. Martin Schnellinger
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The main thing is the CPU power required to verify. Some jobs are, simply put, A+B=x; others are A+B (computed 5 to 5,000,000 different ways) = x. If the server can recalculate a result quickly, single redundancy works; if a recalculation takes 30 minutes, the check is better offloaded to another cruncher. While 30 minutes is not very long, multiply it by 100,000+ results a day and you begin to see the reasoning. Now if we could just get every person to be honest and NEVER play with the results, AND configure every computer the same, so that what is returned is valid, then.......... [Edit 1 times, last edit by Former Member at May 1, 2010 6:23:28 AM]
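The cost argument in the post above can be worked through directly. Using the post's own illustrative figures (30 minutes of server CPU per recheck, 100,000 results per day), server-side verification would consume thousands of CPU-days of work per calendar day, which is why the recheck is pushed out to a second cruncher instead:

```python
# Back-of-the-envelope arithmetic using the figures quoted in the post.
recheck_minutes = 30        # server CPU time to re-verify one result
results_per_day = 100_000   # results returned per day

cpu_minutes = recheck_minutes * results_per_day
cpu_days = cpu_minutes / (60 * 24)   # minutes in a day

print(round(cpu_days))  # → 2083 server CPU-days of rechecking per calendar day
```

In other words, the server farm would need on the order of two thousand dedicated CPUs just to keep up with verification, whereas redundant distribution spreads that load across the volunteer grid.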
nasher
Veteran Cruncher | USA | Joined: Dec 2, 2005 | Post Count: 1423 | Status: Offline
@Fredski - it's not that we expect people to be dishonest.
----------------------------------------
Some projects want redundancy, and while low redundancy is a nice idea, sometimes the scientists really need the redundancy. In the past there was the rice project: while that one SEEMED to have huge redundancy, it in fact did not, because each result differed in some ways. Not to mention there will always be computer errors, and while they are few and far between, do you really want to risk "the cure" being lost to an undetected computer error that came back negative instead of a possible or highly possible hit? Remember, there are not many people who will intentionally mess things up, but errors happen.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Expecting dishonesty and guarding against it are not the same thing; all science requires verifiable, repeatable results. It only takes one invalid result slipping through to call the entire project into question. Whether it is caused by computer error, human error, or outside intervention, the fact that it was not caught places the entire process in doubt.
----------------------------------------
PS. I agree people are honest, but it only takes 1 out of 1,000,000 to change everything. [Edit 1 times, last edit by Former Member at May 5, 2010 4:34:59 PM]
uplinger
Former World Community Grid Tech | Joined: May 23, 2005 | Post Count: 3952 | Status: Offline
There are checks on the backend. I will not explain them, but the checks are there.
As for zero redundancy, some projects cannot use it because the verification of results is different for each project. -Uplinger