World Community Grid Forums
Thread Status: Active | Total posts in this thread: 9
tekennelly
Cruncher | Joined: Oct 10, 2005 | Post Count: 45 | Status: Offline
I have a laptop with 2 GB of memory, and when I run the XP process monitor (i.e., CTRL-ALT-DEL / Task Manager / Processes) I see the PF Delta column for the two BOINC science processes showing numbers in the hundreds. I believe these are soft page faults, because I do not see the disk light illuminated, but I wonder whether the application is running as efficiently as possible. Are the soft page faults reducing throughput?

Note that ALL of the other processes show a PF Delta of nearly zero. Other metrics: Commit Charge is 1140M/3941M, and each BOINC process has a memory footprint of approximately 80,000K, or roughly 160M combined. The CPU usage for each is nearly 50%, indicating that between the two of them the entire dual core is devoted to BOINC.
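For anyone who wants to watch this outside Task Manager, here is a minimal sketch of the same measurement. It assumes the cross-platform psutil package (on Windows its memory_info() result carries the cumulative page-fault counter that the PF Delta column is derived from) and a hypothetical "wcg" name substring to pick out the science applications:

```python
# Minimal sketch: print a per-interval page-fault delta for the BOINC
# science processes, roughly what Task Manager shows as "PF Delta".
import time
import psutil

SAMPLE_SECONDS = 1          # Task Manager refreshes on roughly this interval
NAME_HINT = "wcg"           # hypothetical substring matching the science apps

def boinc_procs():
    """Return processes whose name looks like a WCG science application."""
    return [p for p in psutil.process_iter(["name"])
            if NAME_HINT in (p.info["name"] or "").lower()]

last = {}                   # pid -> previous cumulative fault count
while True:
    for p in boinc_procs():
        try:
            faults = p.memory_info().num_page_faults   # cumulative since start
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        delta = faults - last.get(p.pid, faults)       # faults this interval
        last[p.pid] = faults
        print(f"{p.info['name']:40s} PF Delta: {delta}")
    time.sleep(SAMPLE_SECONDS)
```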
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
What project sciences are showing these PF Deltas? On a quad I see only one producing any.
----------------------------------------
Added: I don't know, but since BOINC is by default set to write to disk only every 60 seconds, one would expect little disk-light flicker.
WCG
----------------------------------------
Please help to make the Forums an enjoyable experience for All!
[Edit 2 times, last edit by Sekerob at Oct 31, 2007 6:36:25 PM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Memory usage varies a lot by project. Peak memory use is usually during checkpointing, where it can be 2-10 times the normal working-set size. Given the memory-intensive nature of the computation, faster memory with less contention and better caching will significantly improve performance.

The figure to check is available physical memory. If this is healthy, then swapping will only occur rarely - when memory priorities change, such as when an inactive application is brought to the foreground, or when the science application starts working on a different part of the dataset. The mere quantity of page faults isn't all that helpful: if the OS moves a large chunk of memory at once, the operation is fairly efficient. The danger is repeated swapping when memory is in contention, and you don't have that problem. All in all, with 2 GB you have nothing to worry about.

All this said, the page fault delta for AutoDock on my computer is a steady zero, and I have much less memory than you. Can you tell me which project you noticed this issue with?
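As a quick illustration of that check, here is a minimal sketch (again assuming psutil, with the same hypothetical "wcg" name filter) that compares available physical memory against the combined working set of the science applications:

```python
# Minimal sketch: is there enough physical headroom that any faults
# must be soft faults rather than real swapping?
import psutil

vm = psutil.virtual_memory()
print(f"Available physical memory: {vm.available / 2**20:.0f} MB "
      f"of {vm.total / 2**20:.0f} MB")

boinc = [p for p in psutil.process_iter(["name", "memory_info"])
         if "wcg" in (p.info["name"] or "").lower()]      # hypothetical filter
working_set = sum(p.info["memory_info"].rss
                  for p in boinc if p.info["memory_info"])
print(f"Combined BOINC working set: {working_set / 2**20:.0f} MB")

# Rough rule of thumb from the post above: as long as available memory stays
# well clear of the combined working set, hard faults should be rare.
if vm.available > 2 * working_set:
    print("Plenty of headroom - hard faults (real swapping) are unlikely.")
```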
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
It is my understanding that one sees the PF Delta at the moment something is written to or read from the swap file. It depends on the project: with AC@H I see them immediately and in large quantities; with other projects I have to wait some time. It's a momentary figure, not a cumulative one.
tekennelly
Cruncher | Joined: Oct 10, 2005 | Post Count: 45 | Status: Offline
The AutoDock tasks have zero PF Delta, whereas the Rosetta tasks have hundreds to thousands of page faults.

wcg_faah_autodock_5.18_windows_intelx86 - 0 page faults
wcg_hpf2_rosetta_5.18_windows_intelx86 - hundreds of page faults
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I see them too on HPF. Why sad? It just means that something is being written to or read from the swap file. See here and here.
----------------------------------------
[Edit 1 times, last edit by Former Member at Nov 3, 2007 12:19:30 AM]
tekennelly
Cruncher | Joined: Oct 10, 2005 | Post Count: 45 | Status: Offline
Thanks for the pointers. I did find in one of the references the following on minor page faults:
Minor page fault: if the page is loaded in memory at the time the fault is generated, but its status is not updated as 'present' in hardware, then it is called a minor or soft page fault. This could happen if the memory is shared by different programs and the page has already been brought into memory for other programs. Since these faults do not involve disk latency, they are faster and less expensive than major page faults.

Since I do not see any disk I/O (via the disk light on my laptop), I am assuming what I am experiencing are minor page faults. Did I mention the laptop is dual core, and that a Rosetta and an AutoDock task are running at the exact same time? The question becomes why the AutoDock processes show 0 page faults whereas the Rosetta processes show non-zero page faults. There has to be some, perhaps small, overhead with minor page faults, and if it were removed the work units would complete faster. Just thinking about optimization here.
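To put a rough number on that "small overhead", here is a sketch of a micro-benchmark, assuming psutil on Windows (where memory_info().num_page_faults is the same cumulative counter Task Manager uses). Touching each page of a freshly created anonymous mapping forces one demand-zero soft fault per page:

```python
# Rough micro-benchmark: price a demand-zero soft fault by touching a
# fresh anonymous mapping one page at a time.
import mmap
import time
import psutil

PAGE = 4096
PAGES = 25_000                          # ~100 MB of anonymous memory

proc = psutil.Process()
region = mmap.mmap(-1, PAGE * PAGES)    # mapped, but no page touched yet

faults_before = proc.memory_info().num_page_faults
t0 = time.perf_counter()
for i in range(PAGES):
    region[i * PAGE] = 1                # first write to each page -> soft fault
elapsed = time.perf_counter() - t0
faults = proc.memory_info().num_page_faults - faults_before

print(f"{faults} faults in {elapsed:.3f} s "
      f"(~{elapsed / max(faults, 1) * 1e6:.1f} microseconds per fault)")
region.close()
```

The per-fault figure includes the Python loop overhead, so treat it as an upper bound; the point is only that soft faults cost microseconds, not the milliseconds a disk read would.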
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The Help Conquer Cancer project has this problem to an even greater degree (thousands of faults per second).
----------------------------------------
I confirmed that these are nearly all soft faults*. However, soft faults do have an impact on performance - you will see that the CPU is spending more time in the kernel than usual (I observed 10-20% kernel time). I have also mentioned the problem to the techs.

Frankly, I don't know what would cause this. I hope the techs will find time to look more closely at the problem, but since the issue is with the application code, fixing this is non-trivial.

* Checking the disk activity light is insufficient. I set up performance counters for pages in/out and page reads/writes as well as total page faults.

[Edit 1 times, last edit by Former Member at Nov 3, 2007 2:37:07 PM]
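The footnote's counter setup can be reproduced from a script by shelling out to Windows' built-in typeperf tool; the counter paths below are the standard Memory-object counters, and a high Page Faults/sec alongside a near-zero Page Reads/sec is the signature of soft faults:

```python
# Minimal sketch: sample the system-wide memory counters for 30 seconds.
import subprocess

COUNTERS = [
    r"\Memory\Page Faults/sec",     # hard + soft faults combined
    r"\Memory\Pages Input/sec",     # pages actually read from disk
    r"\Memory\Page Reads/sec",      # read operations that hit the disk
    r"\Memory\Pages Output/sec",    # pages written back to disk
]

# -si 1: sample every second; -sc 30: take 30 samples, then exit.
subprocess.run(["typeperf", *COUNTERS, "-si", "1", "-sc", "30"], check=True)
```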
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Alright, awaiting that then.
----------------------------------------
[Edit 1 times, last edit by Former Member at Nov 4, 2007 12:35:36 AM]