World Community Grid Forums
Thread Status: Active | Total posts in this thread: 23
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I just rejoined after being away for a while working on a different project, but they had estimated that the computing time on one PIII @ 1 GHz would have taken almost 28,000 years at 24/7 and 100% computing power/speed. I rejoined after receiving a 'spam' mailing in my Yahoo mail which reminded me of the grid. However, I have not seen even one reference to it publicly in Canada. Perhaps more awareness would cut the computing time considerably. I am also looking for a Team to join, preferably in Canada, but anywhere is fine.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Welcome back, Orcasman! If you're still in need of a Team, you can join us: http://www.worldcommunitygrid.org/team/viewTeamInfo.do?teamId=FF5RSMBR9N1 Not in Canada, but right on the border. Hope to see you around.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
http://www.worldcommunitygrid.org/stat/global.html
So if there are 50,000 proteins and a total of 21,553 members (at the time of this writing), then what does that say about the level of redundancy? Obviously there must be more to it... Perhaps each protein gets analyzed a number of different ways, or perhaps there are actually a lot more than just 50,000 proteins being tested here.
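For what it's worth, the raw ratio is easy to compute (numbers taken from the post above; this says nothing about how many work units each protein actually generates, it only frames the question):

```python
proteins = 50_000
members = 21_553      # figure quoted from the global stats page at the time
print(proteins / members)   # ~2.32 proteins per member if each were folded only once
```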
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
On the systemsbiology.org site there is a link to a news release on the Human Proteome page. It says that the work on each protein is split up into a number of Work Units and that millions of Work Units will have to be processed. I took it to mean that each Work Unit only covers a small range of possible values for the entire protein rather than that the protein itself was split up. I don't see how the latter would work.
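One way to picture that interpretation is a small sketch (all names here are hypothetical; the actual WCG work-unit format is not described in this thread): each work unit carries the whole sequence plus only a slice of the random-seed range, so the protein itself is never split.

```python
from dataclasses import dataclass

@dataclass
class WorkUnit:
    protein_id: str      # which protein sequence to fold
    sequence: str        # the full amino-acid sequence (never split)
    seed_start: int      # first random seed this unit should use
    seed_count: int      # how many independent folding trials to run

def make_work_units(protein_id, sequence, total_trials, trials_per_unit):
    """Cover a large sampling job with many small work units.

    Each unit gets the whole protein but only a slice of the random seeds,
    i.e. a 'small range of possible values' rather than a piece of the chain.
    """
    units = []
    for start in range(0, total_trials, trials_per_unit):
        count = min(trials_per_unit, total_trials - start)
        units.append(WorkUnit(protein_id, sequence, start, count))
    return units

# e.g. 100,000 folding trials for one protein, 500 trials per unit -> 200 work units
units = make_work_units("P12345", "MKTAYIAKQR...", 100_000, 500)
print(len(units), units[0])
```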
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I agree. It would seem improbable that they could assign each protein a unique range of values when the essence of the project is to start with a basic unit and evolve it into various conceivable shapes.
Hmmm... perhaps it is more evolutionary than we think. If it is, in essence, evolving via a sequence of random numbers/algorithms, then it would make sense that each run on each machine would give it the maximum probability of what it is most likely to evolve into. Certainly there could be a number of environmental variables they could use to differentiate each individual protein, but this is not written anywhere that I have seen.

No·MAD
u12349768
Cruncher | Joined: Nov 16, 2004 | Post Count: 9 | Status: Offline
http://www.systemsbiology.org/extra/PressRelease_111604.html
The Human Proteome project running on World Community Grid will split the problem of folding the Human proteome into millions of smaller problems called "work units".

And how was it actually done? Do the parameters differ for each of the 30,000 or 50,000 proteins? Does the same work unit give the same result, or are there intentionally random elements?
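On the last question, the common pattern in distributed sampling (an assumption here, not something the press release spells out) is that the randomness is driven by an explicit seed carried in the work unit: re-running the same unit with the same seed reproduces the same result, which makes verification possible, while different seeds explore different foldings. A toy illustration:

```python
import random

def fold_trial(sequence, seed):
    """Stand-in for one folding attempt; a seeded RNG makes it deterministic."""
    rng = random.Random(seed)
    # a real client would build and score a 3-D structure; here we just fake a score
    return sum(rng.random() for _ in sequence)

seq = "MKTAYIAKQR"                                            # made-up sequence fragment
print(fold_trial(seq, seed=42) == fold_trial(seq, seed=42))   # True: reproducible
print(fold_trial(seq, seed=42) == fold_trial(seq, seed=43))   # False: new sampling
```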
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Viktors, 3 Dec 2004, on Screen Saver and Random Seed for Protein Folding:
http://www.worldcommunitygrid.org/forums/wcg/viewthread?thread=783

"Since we are returning the structures (in some form or another) that we have created through folding, together with the score for that structure, my personal guess is that the Institute for Systems Biology will do some additional computing of their own for likely structures. For all I know, we (the Project as a whole) may be creating and scoring hundreds of thousands of possible configurations for each protein. We will be getting some more information when the Human Proteome Folding Project staff at the Institute for Systems Biology start their posting on the progress of the Project."

Did you catch that post from the Project Manager?
http://www.worldcommunitygrid.org/forums/wcg/viewthread?thread=908
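Viktors' description of "returning the structures ... together with the score" suggests a generate-and-score loop on the client. A minimal sketch under that assumption (hypothetical names and a placeholder scoring function, not the actual Rosetta code):

```python
import random

def random_structure(sequence, rng):
    # placeholder: a real client would run the fold simulation here
    return [rng.uniform(-180.0, 180.0) for _ in sequence]   # fake torsion angles

def score(structure):
    # placeholder energy function: lower is better
    return sum(abs(angle) for angle in structure)

def run_work_unit(sequence, seed, n_trials, keep=10):
    """Create many candidate structures, score them, and return the best few."""
    rng = random.Random(seed)
    candidates = []
    for _ in range(n_trials):
        s = random_structure(sequence, rng)
        candidates.append((score(s), s))
    candidates.sort(key=lambda pair: pair[0])
    return candidates[:keep]     # structures + scores sent back to the server

best = run_work_unit("MKTAYIAKQR", seed=7, n_trials=1000)
print(best[0][0])   # best (lowest) score found in this work unit
```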
u12349768
Cruncher | Joined: Nov 16, 2004 | Post Count: 9 | Status: Offline
Thanks for the links.
http://www.worldcommunitygrid.org/forums/wcg/viewthread?thread=783
"1b. Each work unit works on one particular gene sequence and therefore one protein. It folds the protein many times, each time using a different random number seed so that many subtle alternatives are tried during the folding process. The results of all of the foldings that your computer processed are combined with those computed by many other computers. This gives a sufficiently large statistical sample to produce useful final results."

http://www.worldcommunitygrid.org/projects_showcase/proteome_faqs.html
"What ISB wants to do is take the 30-50% of protein domains that are of unknown function and begin the process of figuring out what roles they play in the human animal."

http://www.systemsbiology.org/Default.aspx?pagename=humanproteome
"Depending on how genes are counted, there are over 30,000 genes in the human genome."

So the project is looking at those 9,000-15,000 genes with unknown function? The more results for each gene, the better the statistical sample. That would mean that each work unit is sent out 100 or 1,000 times, leading to millions of (different) results. (This leaves room for the question of how large a "sufficiently large" sample is, or how you'll know when the work is done.)

Edit: The 50,000 came from here:
http://www.systemsbiology.org/Default.aspx?pagename=humanproteome
"We'll use the spare computing power from huge numbers of volunteers to run Rosetta on more than fifty thousand protein sequences."
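The arithmetic in that reasoning can be made explicit (the per-gene replication count below is purely a guess; only the gene count and percentages come from the quoted pages):

```python
genes_total = 30_000                       # "over 30,000 genes" from the quote above
unknown_low, unknown_high = 0.30, 0.50     # 30-50% of domains with unknown function

targets_low = int(genes_total * unknown_low)     # 9,000
targets_high = int(genes_total * unknown_high)   # 15,000

replicas_per_gene = 200    # guess: how many times each target might be sent out
print(targets_low * replicas_per_gene)     # 1,800,000
print(targets_high * replicas_per_gene)    # 3,000,000 -> "millions of work units"
```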
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
My idea is that each protein is sent out many times, but each Work Unit is assigned a different set of seed numbers used in generating trial configurations. I do not think there is large-scale duplication of effort. Of course, there may be a number of cases in which identical configurations are generated starting from different seed numbers. Anyway, this is just guessing, but I am sure that a great deal of thought has been put into maximizing the return from computer time.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hmmm... would an additional 500 2.1 GHz PCs working at 100% for 18 hours each day make a difference?

If it does, I'll see if I can get the IT admin at my high school to install the client on every PC.
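For a rough sense of scale, here is a back-of-the-envelope comparison against the 28,000 single-PIII-years quoted at the top of the thread (a naive sketch that scales only by clock speed and daily uptime, and ignores CPU architecture, the other volunteers already crunching, and per-project differences):

```python
baseline_ghz, baseline_hours, baseline_years = 1.0, 24, 28_000   # PIII @ 1 GHz, 24/7
school_pcs, school_ghz, school_hours = 500, 2.1, 18

# naive speedup relative to the single baseline machine
speedup = (school_pcs * school_ghz * school_hours) / (baseline_ghz * baseline_hours)
print(round(speedup))                       # ~788 baseline-machine equivalents
print(round(baseline_years / speedup, 1))   # ~35.6 years for those PCs alone
```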