World Community Grid Forums
Thread Status: Active. Total posts in this thread: 26
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
Hello Dagorath,
Sounds great so far! I would not expect WOW on 64-bit Windows to slow things down much. It simply supplies an emulation layer so that OS calls get mapped from 32-bit pointers to 64-bit pointers, and since the application makes very few such calls, the overhead should not be noticeable.

The primary benefit of 64-bit Windows is that a second set of registers becomes available in 64-bit mode for the compiler to use. That typically lets an application compiled for 64-bit run 10% - 15% faster, unless it handles a lot of pointers. I heard of one application built around linked lists that actually slowed down because every pointer it had to process doubled in length.

Running a 32-bit application is a wash: the slowdown from the WOW emulation layer is offset by the slightly faster OS routines it calls, while the 32-bit application itself runs at about the same speed.

Lawrence
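A quick C sketch of my own (not from Lawrence's post) showing the pointer effect he describes: a linked-list node roughly doubles in size when the same source is compiled for 64-bit, because the embedded pointer grows from 4 to 8 bytes and drags alignment padding with it.

```c
#include <stdio.h>

/* A typical linked-list node: one payload word plus one pointer.
 * On a 32-bit build the pointer is 4 bytes; on a 64-bit build it is 8,
 * so pointer-heavy data structures take up more cache per element. */
struct node {
    int value;
    struct node *next;
};

int main(void) {
    printf("sizeof(void *)      = %zu bytes\n", sizeof(void *));
    printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
    /* Typical output: 4 and 8 on a 32-bit build, 8 and 16 on a 64-bit
     * build (the int is padded up to the pointer's alignment). */
    return 0;
}
```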
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
I am still not convinced; it might be a small advantage. Usually my Pentium is happy enough with a couple of WUs on HT, but I have noticed a killer work unit now, hmm, 9 hrs at 3%. I will turn off new work until it is done, i.e. the other WU is 70% done at 7 hrs, but I will give that over to help with the 3% one, arghhh... easier than resetting the profile. Then, once it is nicely underway and almost done, I will re-enable new work :) ... I am keen to see this HT 1-or-2 thing sorted out once and for all, but I can say that HT with 2 WUs generally works OK for me; if I hit a killer WU, then I swap back down to one work unit to get it crunched in a reasonable time frame. OK, I know that BOINC gives a time window for the return deadline, but I like to be wayyyyy ahead of it; in fact, as my puters have reasonable specs, I am often the first one back with a result and waiting for the others, so I am the impatient sort... cheers
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
With regard to hyperthreading, I'm pretty sure it only makes a difference when you're running two dissimilar tasks.
Modern processors (even single-core, non-hyperthreaded ones) are never actually doing just one thing at a time. They have separate ALUs (arithmetic logic units, for integer calculations) and FPUs (floating point units, for calculations involving decimal points), plus long pipelines, so they can start processing the next bit of code before the first bit of code is completely finished.

Let me explain what I mean by 'dissimilar tasks', since it's not immediately clear: some tasks don't make use of both the FPU and the ALU at the same time, and if you have two different tasks, one of which wants the FPU and the other of which wants the ALU, hyperthreading allows the processor to schedule them to run simultaneously. Some tasks, like rendering 3D graphics, switch back and forth between integer and floating point operations, so if you run two such threads at the same time, sometimes one will want the FPU while the other wants the ALU, and the overall job finishes faster. I'm not certain, but I suspect that current WCG projects involve floating-point operations almost exclusively, so it's unlikely that running two tasks on a hyperthreaded processor will provide much of a speed-up.

I'm also hearing a bit of a question about why hyperthreading will soon disappear. There are a couple of reasons. First, processors are starting to actually have more than one core, which reduces the benefit of pretending to have additional cores; there may not be enough threads to go around so that every core has something to run simultaneously. Beyond that, there's a more fundamental shift going on: hyperthreading dupes your OS into believing it has twice as many processor cores at its disposal as it actually has, so that the OS will schedule two tasks to run at the same time. If the OS happens to schedule two dissimilar tasks, the processor can run them both at once. Recently, however, processor manufacturers have come up with a better way: let the OS schedule everything in one thread, and have the processor look through it for the juicy bits that can be run simultaneously to best effect. This out-of-order execution is still fairly new, and at the moment Intel's hyperthreading is still a little better than their out-of-order execution, but soon it won't be.

AMD is actually looking at doing the opposite of hyperthreading: making a real dual-core processor look like a single-core one to the OS, so that the processor gets to make intelligent decisions about what to run on each core, even running the same thread on both cores when there are parts that can be run ahead of time because they don't depend on the outcome of current calculations. And as the number of cores in a processor continues to increase, the ability to run one thread on more than one core will become much more important than the ability to run two threads on one.
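Here is a rough C sketch of the 'dissimilar tasks' idea, purely my own illustration (not a WCG workload): one thread hammers the integer units while the other hammers the FPU, so on a hyperthreaded core the two can overlap instead of queueing for the same unit. It assumes a POSIX toolchain (build with gcc -O2 -pthread); the loop counts are arbitrary.

```c
#include <pthread.h>
#include <stdio.h>

/* Integer-heavy workload: keeps the ALU busy. */
static void *integer_work(void *arg) {
    unsigned long sum = 0;
    for (unsigned long i = 1; i <= 200000000UL; i++)
        sum += i % 7;                               /* integer add/modulo only */
    *(unsigned long *)arg = sum;
    return NULL;
}

/* Floating-point-heavy workload: keeps the FPU busy. */
static void *float_work(void *arg) {
    double product = 1.0;
    for (long i = 0; i < 200000000L; i++)
        product = product * 1.0000001 - 0.0000001;  /* FP multiply/subtract only */
    *(double *)arg = product;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    unsigned long int_result = 0;
    double fp_result = 0.0;

    /* Run the two dissimilar tasks concurrently. If the OS puts them on
     * sibling hyperthreads, the integer and FP units can work in parallel;
     * two copies of the SAME loop would instead fight over one unit. */
    pthread_create(&t1, NULL, integer_work, &int_result);
    pthread_create(&t2, NULL, float_work, &fp_result);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("int result: %lu, fp result: %f\n", int_result, fp_result);
    return 0;
}
```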
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
Exactly right. I remember one of the techs confirming this.
I thought out-of-order execution was a lot older than hyperthreading. It's hardly new, anyway. AMD's multiplexing sounds fascinating, though.
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
Yeah, I fudged a little on some of the details. The original use of out-of-order execution was to keep the processor running code when a branch prediction failed or it had to wait for data to be fetched from main memory into the cache. If I understand correctly, though, they're now heading toward using more sophisticated out-of-order execution to do things similar to what hyperthreading does, which would render HT obsolete.
I am speculating a bit about the capabilities of AMD's future reverse-hyperthreading, but it seems logical that the next step in improving the performance of a processor through out-of-order execution (after achieving the kind of gains that can be made through hyperthreading) would be to get additional cores involved in processing the same thread. It's still several years down the road, and not on their official roadmap yet, so we'll have to see what actually develops.
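To make 'the juicy bits that can be run simultaneously' concrete, here is a small C sketch of my own: both functions compute the same sum (up to floating-point rounding), but the second breaks the single dependency chain into four independent accumulators, giving an out-of-order core several additions it can issue at once with no help from the OS scheduler. The loop count is arbitrary.

```c
#include <stdio.h>

/* One long dependency chain: every addition depends on the previous sum,
 * so even an out-of-order core must execute them one after another. */
static double sum_dependent(long n) {
    double sum = 0.0;
    for (long i = 0; i < n; i++)
        sum += (double)i * 0.5;
    return sum;
}

/* The same total split across four independent accumulators: the four
 * additions in each step don't depend on one another, so out-of-order
 * hardware can keep several floating-point units busy at once. */
static double sum_independent(long n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    for (long i = 0; i < n; i += 4) {
        s0 += (double)(i)     * 0.5;
        s1 += (double)(i + 1) * 0.5;
        s2 += (double)(i + 2) * 0.5;
        s3 += (double)(i + 3) * 0.5;
    }
    return s0 + s1 + s2 + s3;
}

int main(void) {
    long n = 100000000L;   /* multiple of 4 */
    printf("dependent:   %f\n", sum_dependent(n));
    printf("independent: %f\n", sum_independent(n));
    return 0;
}
```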
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
AMD has been a bit coy, but they are thinking about changing the basic architecture of the core. Instead of bundling one of each execution unit into a core and then putting multiple cores on a chip, they *might* produce a chip with varying numbers of the different execution units. The real limit for a chip is the bandwidth it can access. Right now, all I hear is that their architects are working on the concept; it might prove to be a dud. On the other hand, Clearspeed seems to be proving that this sort of design can be very efficient for floating point.