World Community Grid Forums
Thread Status: Active | Total posts in this thread: 33
KLiK
Master Cruncher, Croatia. Joined: Nov 13, 2006. Post Count: 3108. Status: Offline.
Quoting an earlier post: "Well, it's my understanding that the 360 is running a tri-core Xenon with 2 threads per core, for a total of 6 hardware threads (clocked at 3.2 GHz), and incredibly high bandwidth (21.6 GB/s), which is by far better than any computer I have around my house... And that link is for the original Xbox; anyone got one for the 360?"

You cannot make these kinds of assumptions, because they are wrong! Triple core means 3 cores, so it can run at most 3 separate projects. Why? Because you have to disable every virtual core on any computer you use for crunching; there have been numerous problems with the P4's HyperThreading and virtual cores, as you can find in other sub-forums on this site. And even if you do get this running, you only have 512 MB of memory, which is simply (almost) not enough for the system plus 3 running projects! Maybe it can be done, but it hasn't been done yet...

http://gizmodo.com/gadgets/home-entertainment...ng-to-xbox-360-248072.php
http://www.engadget.com/2007/05/13/folding-ho...-360-under-consideration/
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline.
I'm sorry, I must be overlooking something (probably obvious), but I don't see why the fact that it uses a PowerPC Xenon matters. After all, that only affects the core itself and its architecture, right? (Well, I know that's not correct, but why?)

OK, and I know that P4HT didn't work very well, especially for WCG, but the new i7 HT works just fine, so why would we immediately say it wouldn't work? And even if HT turns out not to work, what's to stop us from running 3 NRW tasks on Linux (each needs only 128 MB, so 3 x 128 MB = 384 MB fits in the 512 MB), or scaling back to two of anything else? I mean, after all, at 3.2 GHz it could do quite a bit of work... I'm guessing I'm obviously wrong, but being the computer geek that I am, I'd like to know why.

[Edit 1 times, last edit by Former Member at Mar 5, 2009 10:23:03 PM]
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline.
Hello Otis11,
The Xbox 360 runs PPC instructions. We support compiled application programs for Linux that use x86 instructions. We do not have any projects compiled to run as Linux PPC code.
Lawrence
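(Editor's illustration, not WCG code: a minimal sketch using GCC's predefined architecture macros to show that the same source compiles into different, mutually incompatible machine code per target. The printed messages are made up for the example.)

    #include <cstdio>

    int main() {
        // GCC predefines one of these macro sets per compile target; a
        // binary built for one instruction set cannot run on the other.
    #if defined(__x86_64__) || defined(__i386__)
        std::puts("x86 build: the instruction set WCG's Linux apps are compiled for");
    #elif defined(__powerpc__) || defined(__PPC__)
        std::puts("PowerPC build: the Xbox 360/PS3-era chips; no WCG binaries exist");
    #else
        std::puts("some other architecture");
    #endif
        return 0;
    }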
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline.
Ah, good to know. Thanks!
KerSamson
Master Cruncher, Switzerland. Joined: Jan 29, 2007. Post Count: 1684. Status: Offline.
Hi,
Please let me correct the statement regarding the P4HT. My first (historical) host running WCG projects is a P4HT! It has worked very well since 2007-01-29, even if it is getting a little dated now. I have not had more problems with this system than with any other multi-core system. Indeed, some projects do not seem to fit n-core architectures very well, but HT causes no more problems than real multi-core does. Instead of spreading blanket (and uncertain) statements about which architectures are supposedly appropriate, it would be much better to encourage good performance-monitoring practice among WCG members! For myself, I spend a little time (too little for my taste) looking at host performance. If a (new) project, or a new version of a project, seems to cause disturbances (e.g. too many page faults), I temporarily deselect the project concerned. Finally, I no longer participate in projects that never showed real improvement in crunching efficiency!
Cheers, Yves
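(Editor's illustration of the kind of check Yves describes: a minimal sketch, assuming a Linux host, that reads a process's page-fault counters from /proc/<pid>/stat, where field 10 is minflt and field 12 is majflt. The program name "faults" is made up for the example.)

    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    int main(int argc, char* argv[]) {
        if (argc != 2) { std::cerr << "usage: faults <pid>\n"; return 1; }
        std::ifstream stat("/proc/" + std::string(argv[1]) + "/stat");
        std::string line;
        if (!std::getline(stat, line)) { std::cerr << "no such pid\n"; return 1; }
        // comm (field 2) can contain spaces, so skip past its closing ')'.
        std::istringstream fields(line.substr(line.rfind(')') + 2));
        std::string token;
        unsigned long minflt = 0, majflt = 0;
        for (int f = 3; fields >> token; ++f) {            // field 3 is the state char
            if (f == 10) minflt = std::stoul(token);       // minor page faults
            if (f == 12) { majflt = std::stoul(token); break; }  // major page faults
        }
        std::cout << "minor faults: " << minflt
                  << ", major faults: " << majflt << '\n';
        return 0;
    }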
manafta
Cruncher. Joined: Apr 5, 2009. Post Count: 2. Status: Offline.
I was surprised to find out that WCG does not run on IBM technology. So my PS3 is not running WCG, but other projects instead.
Sekerob
Ace Cruncher. Joined: Jul 24, 2005. Post Count: 20043. Status: Offline.
WCG does not create the research projects; it hosts them. So if a scientist comes along with a serious proposal that falls within WCG's target research areas, and is funded well enough to see the project through to the end and beyond, so that the results can be published and remain in the public domain, there might be a chance. Not for a gimmick.

PS: F@H puts the CPU/GPU equivalency at 2.1 times, not 10 to 20 times. That is what I derive from their stats summary page.
WCG
Please help to make the Forums an enjoyable experience for All!
manafta
Cruncher. Joined: Apr 5, 2009. Post Count: 2. Status: Offline.
I am referring to Linux on PPC. Currently only x86 is supported.
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline.
April 01, 2009, Dr. Dobb's Journal: 'A First Look at the Larrabee New Instructions (LRBni)'
http://www.ddj.com/hpc-high-performance-computing/216402188?pgno=1

What Does It All Add Up To?

I'd sum up my experience in writing a software graphics pipeline for Larrabee by saying that Larrabee's vector unit supports extremely high theoretical processing rates, and LRBni makes it possible to extract a large fraction of that potential in real-world code. For example, real pixel-shader code running on simulated Larrabee hardware is getting 80% of theoretical maximum performance, even after accounting for work wasted by pixels that are off the triangle but still get processed due to the use of 16-wide vector blocks.

Tim Sweeney, of Epic Games, who provided a great deal of input into the design of LRBni, sums up the big picture a little more eloquently:

Larrabee enables GPU-class performance on a fully general x86 CPU; most importantly, it does so in a way that is useful for a broad spectrum of applications and that is easy for developers to use. The key is that Larrabee instructions are "vector-complete." More precisely: any loop written in a traditional programming language can be vectorized, to execute 16 iterations of the loop in parallel on Larrabee vector units, provided the loop body meets the following criteria:

* Its call graph is statically known.
* There are no data dependencies between iterations.

Shading languages like HLSL are constrained so developers can only write code meeting those criteria, guaranteeing a GPU can always shade multiple pixels in parallel. But vectorization is a much more general technology, applicable to any such loops written in any language.

This works on Larrabee because every traditional programming element (arithmetic, loops, function calls, memory reads, memory writes) has a corresponding translation to Larrabee vector instructions running it on 16 data elements simultaneously. You have: integer and floating-point vector arithmetic; scatter/gather for vectorized memory operations; and comparison, masking, and merging instructions for conditionals.

This wasn't the case with MMX, SSE and AltiVec. They supported vector arithmetic, but could only read and write data from contiguous locations in memory, rather than with random access as Larrabee can. So SSE was only useful for operations on data that was naturally vector-like: RGBA colors, XYZW coordinates in 3D graphics, and so on. The Larrabee instructions are suitable for vectorizing any code meeting the conditions above, even when the code was not written to operate on vector-like quantities. It can benefit every type of application!

A vital component of this is Intel's vectorizing C++ compiler. Developers hate having to write assembly language code, and even dislike writing C++ code using SSE intrinsics, because the programming style is awkward and time-consuming. Few developers can dedicate resources to doing that, whereas Larrabee is easy; the vectorization process can be made automatic and compatible with existing code. In short, it will be possible to get major speedups from LRBni without heroic programming, and that surely is A Good Thing.

Of course, nothing's ever that easy; as with any new technology, only time will tell exactly how well automatic vectorization will work, and at the least it will take time for the tools to come fully up to speed.
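(Editor's illustration of the two criteria above, not code from the article: the first loop meets both conditions, so 16 iterations can run at once, one per vector lane; the second needs gather support, which Larrabee has and plain SSE lacks; the third has a loop-carried dependency and cannot be vectorized as written.)

    #include <cstddef>
    #include <cstdio>

    // Vectorizable: the call graph is statically known (no calls at all)
    // and no iteration depends on another, so 16 iterations can execute
    // in parallel, one per vector lane.
    void scale_add(float* out, const float* a, const float* b, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            out[i] = 2.0f * a[i] + b[i];
    }

    // Also vectorizable on Larrabee, but not with plain SSE: the loads
    // come from arbitrary locations, which requires gather support.
    void gather(float* out, const float* table, const int* idx, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            out[i] = table[idx[i]];
    }

    // Not vectorizable as written: iteration i reads the result of
    // iteration i - 1, a loop-carried data dependency.
    void prefix_sum(float* x, std::size_t n) {
        for (std::size_t i = 1; i < n; ++i)
            x[i] += x[i - 1];
    }

    int main() {
        float a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1}, out[4];
        scale_add(out, a, b, 4);
        std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
    }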
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline.
"We do not have any projects compiled to run Linux PPC code."

I understand the lack of motivation, since there are probably not a lot of people asking for workloads on Linux on PPC. But is it really that much effort? Compared with enabling GPUs and Cell, this is only a matter of recompilation.
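(Editor's note: recompilation is usually most of the work, but not always all of it. One classic caveat is byte order, since x86 is little-endian while the Cell/Xenon-era PowerPC chips run big-endian. A minimal sketch:)

    #include <cstdint>
    #include <cstdio>

    int main() {
        std::uint32_t value = 0x01020304;
        const auto* bytes = reinterpret_cast<const unsigned char*>(&value);
        // Prints "04 03 02 01" on little-endian x86, but "01 02 03 04" on
        // big-endian PowerPC, so code that serializes raw integer bytes
        // produces incompatible files on the two platforms.
        std::printf("%02x %02x %02x %02x\n",
                    bytes[0], bytes[1], bytes[2], bytes[3]);
        return 0;
    }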