Thread Status: Active
Total posts in this thread: 4
This topic has been viewed 1574 times and has 3 replies
Petrctale
Cruncher
USA
Joined: Nov 26, 2005
Post Count: 1
Mapping Cancer Markers taking over work units

Help. I can't find any information on how WCG distributes its work units, or on how many tasks from each chosen project BOINC will work on at the same time. Currently I've chosen to participate in all projects. I realize some are intermittent and some projects have more work than others. I was getting a few tasks from each project and they seemed fairly well mixed. Two days ago, Mapping Cancer Markers seems to have started taking over. Currently I have BOINC on sixteen cores: MCM is running on fourteen, Open Pandemics on the other two, and I'm not getting any other projects' WUs. It's also nudged out Rosetta in BOINC.

Settings:
Each project is set to 100%.
Project limits under Settings are set to unlimited.

Any answers?
Suggestions?
Setting I've missed?

Thanks
[Jun 17, 2020 11:32:29 PM]
Aurum
Master Cruncher
The Great Basin
Joined: Dec 24, 2017
Post Count: 2391
Re: Mapping Cancer Markers taking over work units

Put an app_config.xml file in your World Community Grid project folder and tell BOINC Manager to read the config files (Options → Read config files). E.g.:
<app_config>
    <app>
        <name>arp1</name>
        <!-- needs 1 GB RAM per arp1 WU -->
        <!-- Xeon E5-2686v4 18c36t, 32 GB RAM, L3 cache = 45 MB -->
        <max_concurrent>1</max_concurrent>
    </app>
    <app>
        <name>opn1</name>
        <max_concurrent>8</max_concurrent>
    </app>
    <app>
        <name>mcm1</name>
        <max_concurrent>5</max_concurrent>
    </app>
    <app>
        <name>hst1</name>
        <max_concurrent>2</max_concurrent>
    </app>
    <app>
        <name>mip1</name>
        <!-- needs 5 MB L3 cache per mip1 WU -->
        <max_concurrent>3</max_concurrent>
    </app>
</app_config>
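If you just want to cap how many WCG tasks run at once in total, rather than per subproject, recent BOINC clients also accept a project-wide limit in the same file. A minimal sketch (the value 10 is only an example; pick what suits your core count):

```xml
<app_config>
    <!-- Cap the total number of concurrently running tasks for this
         project, across all subprojects. -->
    <project_max_concurrent>10</project_max_concurrent>
</app_config>
```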

----------------------------------------

...KRI please cancel all shadow-banning
----------------------------------------
[Edit 2 times, last edit by Aurum420 at Jun 17, 2020 11:45:39 PM]
[Jun 17, 2020 11:44:52 PM]
Aurum
Master Cruncher
The Great Basin
Joined: Dec 24, 2017
Post Count: 2391
Re: Mapping Cancer Markers taking over work units

You can also set limits in Device Profiles:
https://www.worldcommunitygrid.org/ms/device/viewProfiles.do
----------------------------------------

...KRI please cancel all shadow-banning
----------------------------------------
[Edit 1 time, last edit by Aurum420 at Jun 17, 2020 11:47:13 PM]
[Jun 17, 2020 11:46:57 PM]
hchc
Veteran Cruncher
USA
Joined: Aug 15, 2006
Post Count: 865
Re: Mapping Cancer Markers taking over work units

The root cause may be subproject weight. Not all subprojects are weighted the same, so MCM1 may have a higher distribution priority on the feeder than other subprojects.
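The actual feeder weights aren't published anywhere I know of, but the effect is easy to picture with a toy weighted draw. All the weights below are made up purely for illustration:

```python
import random
from collections import Counter

# Hypothetical subproject weights -- WCG does not publish the real values.
WEIGHTS = {"mcm1": 5, "opn1": 2, "arp1": 1, "hst1": 1, "mip1": 1}

def draw_tasks(n, weights, seed=42):
    """Simulate n task assignments, each subproject picked in
    proportion to its weight, like a weighted feeder would."""
    rng = random.Random(seed)
    names = list(weights)
    return rng.choices(names, weights=[weights[k] for k in names], k=n)

counts = Counter(draw_tasks(1000, WEIGHTS))
# With a 5:2 mcm1:opn1 ratio you end up with roughly 2.5x as many MCM1
# tasks over time, even though every subproject stays enabled.
```

Under weights like these, a host would see mostly MCM1 work without any setting being "wrong" on its end.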

I've re-enabled MCM1 on my end to see if I experience the same as you (that I get more MCM1 tasks than OPN1).

Edited to add: It's only been 2 hours since making this post, but out of the 9 most recent tasks I've received, 7 were MCM1, and 2 were OPN1. I'll give it a couple days to normalize, but so far, I'm experiencing the same as you.

It's possible that because so many people are opting into crunching only OPN1, the other subprojects are neglected, so the feeder is more likely to give non-OPN1 tasks to devices that aren't picky.

Editing about 2 days after making this post: Over half -- maybe 2/3 or more -- of new tasks have been MCM1 compared to OPN1, so I'm able to reproduce what @Petrctale is seeing. So I think there is such a thing as subproject weight. I don't know if this weight is visible (e.g. via the API) or if we just have to guess.

Mitigation steps are as discussed: limit the number of tasks downloaded per device in the WCG profile custom preferences, and limit the number running concurrently via an app_config.xml parameter.
----------------------------------------
  • i5-7500 (Kaby Lake, 4C/4T) @ 3.4 GHz
  • i5-4590 (Haswell, 4C/4T) @ 3.3 GHz
  • i5-3570 (Ivy Bridge, 4C/4T) @ 3.4 GHz

----------------------------------------
[Edit 3 times, last edit by hchc at Jun 20, 2020 12:39:42 PM]
[Jun 18, 2020 6:50:22 AM]