World Community Grid Forums
Category: Active Research | Forum: Africa Rainfall Project | Thread: For Current Volunteers: Advance Information on Our Newest Project
Thread Status: Active | Total posts in this thread: 164
Jim1348
Veteran Cruncher | USA | Joined: Jul 13, 2009 | Post Count: 1066 | Status: Offline
Is it possible with arp1 to become a Lone Ranger? I haven't seen it, which is slightly odd. You wouldn't think that a single point on a climate projection would be that critical, or that hard to weed out if it were bad. But I know nothing of their methodology either.
Aurum
Master Cruncher | The Great Basin | Joined: Dec 24, 2017 | Post Count: 2384 | Status: Offline
Once a result has been validated for the 48 hours, the output will be used to build the input for the next 48 hours of runtime. (armstrdj)

Seems like if they want quick turnaround time then they should allow Lone Ranger status. When I look through my Valid Results, turnaround is typically 7 days. It could be 1 day with Quorum = 1.
...KRI please cancel all shadow-banning
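A toy sketch of why a quorum of 2 stretches turnaround: two copies of each workunit go out, and validation has to wait for the slower one to come back. This is not WCG's actual scheduler, and the 3.5-day mean return time is a made-up assumption purely for illustration.

```python
import random

def expected_turnaround(quorum, mean_days=3.5, trials=100_000, seed=1):
    """Toy model: each copy's return time is exponential with the given
    mean (a hypothetical figure); a result validates only once `quorum`
    copies have returned, so turnaround is the max over the copies sent."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.expovariate(1 / mean_days) for _ in range(quorum))
    return total / trials

# Max of two i.i.d. exponentials has mean 1.5/lambda, so quorum 2
# inflates the expected turnaround by about 50% in this toy model.
```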
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Once a result has been validated for the 48 hours, the output will be used to build the input for the next 48 hours of runtime. (armstrdj)

This raises a LOT of questions about the validity of this project's output. This is output from a weather forecast model (WRF), so it can deviate from actual observational data. That means bad data is being fed into the next 48-hour run, and so on through the sequence; unless there is a reset back to a set of known observational data, the error is going to multiply. The end result is output that doesn't reflect actual conditions two years on. Each 48-hour run should start with actual observational data, not output from a previous model run.

If the goal is to start with a set of observational data and run a forecast two years out, then the process is probably valid, but I would bet most forecasters would tell you that a two-year forecast is not going to reasonably resemble actual conditions that far out. If we could forecast two years out with any kind of accuracy, we would be doing it now.

Maybe the goal of the project is to see how much the forecast deviates from the actual observed conditions. Most reasonable people would say the deviation is going to be quite large. Is it worth the computational effort to quantify the magnitude?

[Edit 1 times, last edit by Former Member at Aug 22, 2020 4:54:09 PM]
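The error-multiplication concern above can be illustrated with a standard toy chaotic system. This is the logistic map, not WRF, and the parameters are arbitrary; it only demonstrates the general point that when each run's output seeds the next run's input, a tiny initial error grows exponentially until the two trajectories no longer resemble each other.

```python
def diverging_runs(x0=0.3, eps=1e-10, steps=60, r=4.0):
    """Iterate the logistic map from two nearly identical initial states
    and record how far apart they drift at each step. Stands in for
    chaining model runs: each iteration feeds its output to the next."""
    a, b = x0, x0 + eps
    seps = []
    for _ in range(steps):
        a, b = r * a * (1 - a), r * b * (1 - b)
        seps.append(abs(a - b))
    return seps

seps = diverging_runs()
# A 1e-10 perturbation grows to order 1 within a few dozen iterations.
```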
Aurum
Master Cruncher | The Great Basin | Joined: Dec 24, 2017 | Post Count: 2384 | Status: Offline
entity, I bet IBM knows in exquisite detail since they spent a year getting it to run ;-)
----------------------------------------
All I know is what they said here: https://www.tudelft.nl/citg/over-faculteit/af...dr-ir-nick-van-de-giesen/ And it looks like there's more available on the PI's website: https://www.tudelft.nl/citg/over-faculteit/af...dr-ir-nick-van-de-giesen/ They're working on a conference paper using these data, but I bet there's a prior journal article that would give the details.
...KRI please cancel all shadow-banning
Aurum
Master Cruncher | The Great Basin | Joined: Dec 24, 2017 | Post Count: 2384 | Status: Offline
Has anyone figured out how ARP behaves? If I run it exclusively on a computer, it slows down tremendously, even when I have enough RAM (minimum 1 GB per WU).
----------------------------------------
None of the Xeon E5s or i7s kick the CPU up to its turbo frequency; they just stay stuck at the base frequency. My i9-9960X will increase the CPU frequency a few percent, but nowhere near its turbo frequency. The i7-6950X can finish an ARP WU 3 hours sooner than an i9-9960X (21 vs 24 hr), but the i9 as a whole completes more work per day than the i7 (32t vs 20t). Is hyperthreading choking this project? Once I clear the deck I'm going to disable HT on the i9 and see how it runs. There's something going on that seriously slows this project down, but I can't put my finger on what it is. Anybody have a suggestion?
...KRI please cancel all shadow-banning
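Before an experiment like the one proposed above, it helps to confirm whether SMT/HT is actually active. A minimal sketch for Linux (where these machines run Mint): compare the logical CPU count against the number of distinct (physical id, core id) pairs in /proc/cpuinfo. The parsing assumes the x86 field layout; on systems without those fields it reports the physical count as unknown.

```python
import os

def count_cpus():
    """Return (logical, physical) CPU counts. `physical` is None when
    /proc/cpuinfo is missing or lacks the x86 topology fields.
    logical > physical indicates SMT/HT is active."""
    logical = os.cpu_count() or 1
    try:
        with open("/proc/cpuinfo") as f:
            cores = set()
            phys = core = None
            for line in f:
                if line.startswith("physical id"):
                    phys = line.split(":")[1].strip()
                elif line.startswith("core id"):
                    core = line.split(":")[1].strip()
                    cores.add((phys, core))
            physical = len(cores) or None
    except OSError:
        physical = None
    return logical, physical

logical, physical = count_cpus()
if physical:
    print("SMT active" if logical > physical else "SMT off or absent")
```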
Sgt.Joe
Ace Cruncher | USA | Joined: Jul 4, 2006 | Post Count: 7581 | Status: Offline
What is the difference in the L1, L2, and L3 caches on the two machines?
----------------------------------------
Do they use the same operating system?
Cheers
Sgt. Joe
*Minnesota Crunchers*
Aurum
Master Cruncher | The Great Basin | Joined: Dec 24, 2017 | Post Count: 2384 | Status: Offline
They're all Linux Mint 19.3 (Ubuntu 18.04 Bionic). I'm working on a table to summarize.
----------------------------------------
The i9-9960X has:
Level 1 cache: 16 x 32 KB 8-way set associative instruction caches and 16 x 32 KB 8-way set associative data caches
Level 2 cache: 16 x 1 MB 16-way set associative caches
Level 3 cache: 22 MB 11-way set associative shared cache

The i7-6950X has:
Level 1 cache: 10 x 32 KB 8-way set associative instruction caches and 10 x 32 KB 8-way set associative data caches
Level 2 cache: 10 x 256 KB 8-way set associative caches
Level 3 cache: 25 MB 20-way set associative shared cache

Both computers have 4x8 GB of DDR4-2666 RAM.
...KRI please cancel all shadow-banning
[Edit 3 times, last edit by Aurum420 at Aug 23, 2020 1:16:56 AM]
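One quick piece of arithmetic from those numbers, assuming every physical core runs an ARP task at once: the shared L3 divided across cores comes out very differently on the two chips, which is consistent with the cache-contention theory discussed in this thread.

```python
# Shared L3 divided by physical core count when every core runs an ARP WU.
# Cache sizes and core counts are the ones quoted above.
l3_per_core = {
    "i9-9960X": 22 / 16,   # 22 MB shared L3 / 16 cores -> 1.375 MB per core
    "i7-6950X": 25 / 10,   # 25 MB shared L3 / 10 cores -> 2.5 MB per core
}
```

The i7 gives each concurrent WU nearly twice the L3 share of the i9, which may help explain why individual WUs finish faster on it.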
Aurum
Master Cruncher | The Great Basin | Joined: Dec 24, 2017 | Post Count: 2384 | Status: Offline
My test on an i9-9960X with HT disabled has only been running for 3.5 hours, but it looks like it's running twice as fast as it did with hyperthreading enabled. That's using 15 CPU cores running ARP1 WUs, leaving one core for the GPU.
----------------------------------------
There are severe limitations to this program that have not been disclosed to us.
...KRI please cancel all shadow-banning
[Edit 1 times, last edit by Aurum420 at Aug 23, 2020 6:08:09 AM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
That's to be expected. Even if you set BOINC to half the available threads with HT on, you will observe faster processing; at least I do on my machine with W10. Nothing needed to be hyperthreaded.
Jim1348
Veteran Cruncher | USA | Joined: Jul 13, 2009 | Post Count: 1066 | Status: Offline
Has anyone figured out how ARP behaves? If I run it exclusively on a computer it slows down tremendously even if I have enough RAM (minimum 1 GB per WU). (Aurum)

No, it is not the hyperthreading. It is the cache (apparently, though it could be some other shared resource). The optimum number varies somewhat per machine, but in general I limit ARP to a maximum of four work units on my best machines (Ryzen 3000 series). On lesser machines, two is OK.

And MIP makes it worse. I would limit MIP to two work units when running ARP, and then limit ARP also to two. But you have to check it out.

Hyperthreading always slows down individual threads as you add more of them, because each real core is shared between two threads, creating two virtual cores per real core. The total output increases with more virtual cores, however.
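For anyone wanting to apply a per-project cap like the one described above: BOINC supports limiting concurrent tasks per application with an app_config.xml file placed in the project's directory. A minimal sketch follows; the application name "arp1" is an assumption here, so confirm the actual name in your client_state.xml (or the client's event log) before using it.

```xml
<!-- app_config.xml in the World Community Grid project directory.
     The app name "arp1" is an assumption; confirm it in client_state.xml. -->
<app_config>
  <app>
    <name>arp1</name>
    <max_concurrent>4</max_concurrent>
  </app>
</app_config>
```

After creating or editing the file, use the BOINC Manager's "Read config files" option so the client picks up the change without a restart.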