Any One For A Dual?

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,693
mmm :) maybe in two years time :)
 

Jonty

Fledgling Freddie
Joined
Dec 22, 2003
Messages
1,411
Interesting results :) I have to admit, I still don't know of anybody who really needs the power the Extreme Edition range has to offer. My system isn't bleeding edge any more but it can still render lovely scenes at playable framerates, e.g. (sorry about the file size)



Given that Splinter Cell: Chaos Theory implements Shader Model 3 and advanced features such as soft shadows and displacement mapping (which future engines will use), I really don't understand why people would pay the premium for the Extreme Edition range when: a) they're sometimes outclassed by other CPUs in gameplay; b) their gains are often in specialist applications, such as rendering or editing; and c) they cost so much more for little genuine improvement. Still, I guess it gives Intel's marketers something to keep busy with :)

Kind Regards

Jonty

P.S. It's a shame SC:CT doesn't presently support Shader Model 2.x to bridge the feature gap between GeForce 6 cards and the rest
 

Escape

Can't get enough of FH
Joined
Dec 26, 2003
Messages
1,643
The lack of applications/games which can use dual CPUs/cores will hold them back from the mainstream. I guess that'll only change when processor speeds hit their limit!
 

Jonty

Fledgling Freddie
Joined
Dec 22, 2003
Messages
1,411
Escape said:
The lack of applications/games which can use dual CPUs/cores will hold them back from the mainstream.
That's true. I was reading something about Unreal Engine 3 a while back (which supports multithreading) and it sounds like a lot of hassle to get things up and running, and even then not every area of a game, say, can utilise multithreading effectively. Still, time will tell :)

Kind Regards
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,693
well, for example they could have all game-related IO in a separate thread, or all AI-related operations in a separate thread. then in theory you'd never have game operations like loads/saves, or the AI scripting up something really nasty, causing degradation of your gaming pleasure. currently your (single) CPU does everything, and that can be noticed at times.

my NwN saves dir was huge at a certain point. this caused the game to judder to a halt when in the loads/saves option menu as the engine read out the data from each save I had done. now if there were two threads in two cores, one could be talking to the disk while the other could be presenting me a nicely smooth scrolling save-games menu :)

in theory ofc ;)

if you had a huge amount of cores and a cool engine, you could split out pre-rendering calculation and stuff amongst your cores. all your GPU would have to do is put things together :)
 

Jonty

Fledgling Freddie
Joined
Dec 22, 2003
Messages
1,411
I finally found the Tim Sweeney article I was thinking of (why is it always so difficult to find something when you need it and yet trip over it when you don't? :)).

Tim Sweeney said:
For multithreading optimizations, we're focusing on physics, animation updates, the renderer's scene traversal loop, sound updates, and content streaming. We are not attempting to multithread systems that are highly sequential and object-oriented, such as the gameplay ...

... it's especially important to focus multithreading efforts on the self-contained and performance-critical subsystems in an engine that offer the most potential performance gain. You definitely don't want to execute your 150,000 lines of object-oriented gameplay logic across multiple threads - the combinatorial complexity of all of the interactions is beyond what a team can economically manage. But if you're looking at handing off physics calculations or animation updates to threads, that becomes a more tractable problem ...

... You can expect games to take advantage of multi-core pretty thoroughly in late 2006 as games and engines also targeting next-generation consoles start making their way onto the PC.

Writing multithreaded software is very hard; it's about as unnatural to support multithreading in C++ as it was to write object-oriented software in assembly language. The whole industry is starting to do it now, but it's pretty clear that a new programming model is needed ...
Kind Regards
 

Embattle

I am a FH squatter
Joined
Dec 22, 2003
Messages
10,439
Personally I had an idea how the dual-core processors would perform, and with the HardOCP results I can honestly say they matched my expectations. This is the start of a long road, but it's a road that both of the main processor manufacturers seem committed to, unlike SMP systems, which were only really a side product designed for certain applications; thus, when the odd game did arrive with SMP support it was only in its basic form and showed no benefit.

TdC said:
my NwN saves dir was huge at a certain point. this caused the game to judder to a halt when in the loads/saves option menu as the engine read out the data from each save I had done. now if there were two threads in two cores, one could be talking to the disk while the other could be presenting me a nicely smooth scrolling save-games menu :)
I would have thought that would happen no matter how many processor cores, esp since I would consider that a HDD-related issue... basically they are still as slow as hell relative to the other parts of a computer.
 

Danya

Fledgling Freddie
Joined
Dec 23, 2003
Messages
2,466
Thing is, faster CPUs are currently a lot less bang for buck than faster gfx cards. Buy a top CPU and it's only maybe 10-20% quicker than a fairly cheap CPU, while a top gfx card may be as much as 2 or 3 times faster than a low-end one. And GPUs are still getting faster at an incredible rate. Expect to see the gfx card become even more important than it is currently as people try to shove more physics and such onto it.
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,693
Embattle said:
I would have thought that would happen no matter how many processor cores, esp since I would consider that a HDD-related issue... basically they are still as slow as hell relative to the other parts of a computer.
that's not entirely true. it happening, that is. HDDs being slow as hell is a given naturally :) my point was that all IO to devices could be "handed off" (as Tim puts it) to a helper thread for (for example) seamless background loads while the main threads keep the game running smoothly. What Tim says is very logical from the game engine's perspective, i.e. having certain tasks that the engine has to perform in separate threads so that the engine runs as smoothly as possible.
that's all nice and stuff, but I personally would like to see the game run asap. point being that you'd never have to wait for anything again. ever. all IO, physics, animation being done while the "game" thread keeps the eye-candy flooding your monitor.
 

Jonty

Fledgling Freddie
Joined
Dec 22, 2003
Messages
1,411
There's an interesting, semi-related article in the April edition of PC Format (UK) which discusses the Cell processor being used in the PS3, amongst other things. In theory it sounds like an utter powerhouse, but in part that's the problem, much of the power is theoretical.

PC Format April 2005 said:
The Cell is designed for vector processing ... handled by something called a Synergistic Processor Element or SPE. [...] a single Cell processor has eight SPEs all running simultaneously ...

Each SPE is a complete, independent processor core with four integer maths units, four floating-point units and 256K of its own internal memory. Controlling all of these is yet another processor ... based on IBM's PowerPC chip. [...] nine complete processors plus local storage all packed onto a single chip
In defence of the present Intel/AMD architecture, however, it is noted that existing software must be rewritten to support the Cell (and other dual/multi-core CPUs) and that most software doesn't make efficient use of parallel execution anyway, which is where multithreaded systems can excel. It seems hardware abstraction (i.e. being able to forget about what hardware your software will run on) is also dumped by the Cell design, which could create more complications if a similar stance were adopted by Intel/AMD's dual/multi-core products.

Kind Regards

Jonty

P.S. There's also a funny little statistic quoted that the theoretical power of two PS3s adequately linked together would actually rank as the 500th most powerful computer on the planet :)
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,693
Jonty said:
P.S. There's also a funny little statistic quoted that the theoretical power of two PS3s adequately linked together would actually rank as the 500th most powerful computer on the planet :)

heheh you could build a PS/3 supercomputer! have one in the comfort of your own home! a huge selling-point, I'm sure you'll agree :D
 

Danya

Fledgling Freddie
Joined
Dec 23, 2003
Messages
2,466
TdC said:
that's not entirely true. it happening, that is. HDD's being slow as hell is a given naturally :) my point was that all IO to devices could be "handed off" (as Tim puts it) to a helper thread for (example) seamless background loads while the main threads keep the game running smoothly. What Tim says is very logical from the game engine's perspective, ie having certain tasks that the engine has to perform in separate threads so that the engine runs as smoothly as possible.
that's all nice and stuff, but I personally would like to see the game run asap. point being that you'd never have to wait for anything again. ever. all IO, physics, animation being done while the "game" thread keeps the eye-candy flooding your monitor.
Well, you can already do asynchronous loading from a HDD - basically the program says "go" to the disk subsystem, then gets on with processing the game (or whatever) and gets a notification at some point in the future when the data is available. A lot of the time games will stall because without the data they just can't do anything; there's only so much predictive loading that can be done.

As for CELL, while it is 9 complete processors, the SPEs can only access their local memory (256K) directly, so it's not quite 9 general-purpose CPUs. For what it's designed for, though, it's a really nice chip.
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,693
Danya said:
A lot of the time games will stall because without the data they just can't do anything; there's only so much predictive loading that can be done.
ah, but that is the thing indeed. the point being that a helper thread could be looking after this continually while a main thread would be showing what it had ready. in my mind there is not only a whole bunch of real and virtual CPUs doing their best to keep threads working at optimal performance, but there will be a change in how stuff is done from a programming, or scheduling, standpoint. I'm finding it hard to explain as I am not a programmer. In this respect I am a disciple of Sun Microsystems and AMD: "you don't have to do something at blinding gigahertz if you can do it in the proper order, or at optimal efficiency."
 

Danya

Fledgling Freddie
Joined
Dec 23, 2003
Messages
2,466
Thing is, disk I/O doesn't take any CPU anyway, so you really don't gain much. It's really easy to stream a game in without pauses if you're going to follow a nice linear path through the world, but people find that rather boring. You could put it in a thread now and it would be about as fast as if you had dual CPUs, simply because the loader thread will just be blocked waiting on the HDD 99% of the time, which takes no CPU.

Offloading things like physics and animation is more worthwhile, as they do take significant processor time. It is, as Mr. Sweeney states, a large change to do that though, and currently you have a bit of a chicken-and-egg situation: you can't push SMP or other multi-processing devices because games don't use them, and games don't use them because no one has them.

Even so, you'll probably get better results by just pushing more physics and animation on to the GPU (most animation is GPU based already). GPUs are a lot faster than CPUs, and they are increasing in power at a much higher rate than CPUs.
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,693
troo, troo, the thread would be sleeping a majority of the time but doing something with the data returned would be costly imo. to my mind your comment on a GPU being faster by far than a CPU is because it's a specialized bit of kit that does nothing else but graphics operations. if you had threads preprocessing animation and other stuff apart from the main engine, things would go very much more swimmingly, though I don't agree with the "only linear games being able to stream" stance. you don't have to know where a player's pawn will go to be able to preload/cache/whatever. I think it's because engines get tailored to be able to run on the "lowest common denominator" of cpu/mem/gfx combinations and thus don't do as much as they possibly could.
that's a huge guess though: I'm not a programmer and know nothing of how game engines actually work :) still, that said it would be nice to have an option in game.cfg called preload_everything :D
 

Jonty

Fledgling Freddie
Joined
Dec 22, 2003
Messages
1,411
TdC said:
you don't have to know where a player's pawn will go to be able to preload/cache/whatever. I think it's because that engines get taylored to be able to run on the "lowest common denominator" of cpu/mem/gfx combinations and thusly don't do as much as they possibly could
Just on that tiny point, rather than the whole discussion, I recall John Carmack (I think) stating that the Doom 3 engine is such that, with the right amount of RAM on a fairly powerful system, the entire game's content could be preloaded without any significant problems (I guess this holds true for Source et al).

It's merely the fact that people don't have a few gigabytes of RAM and a system to match that prevents this kind of en masse preloading (indeed, I suppose it would be a bit silly to load an entire game with xx hours of content for just one session, but background loading large segments during gameplay to create a seamless environment would be very cool).

I believe Unreal Engine 3 will feature so-called seamless worlds whereby large areas can be seamlessly 'stitched' together to create one large patchwork which dwarfs even UT2004's largest maps.

Kind Regards
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,693
ooh, I stand corrected! still, it's a bit naff that my single gig of RAM suddenly pales in comparison to the gaming rig I'd "need" to run our hypothetical game properly :( I still stand by my point that parcelling the load out will get you some win though :)

*plans quad fx-55 with 16 gigs ram and lto raid array*

*watches credit card writhe in agony*
 

Jonty

Fledgling Freddie
Joined
Dec 22, 2003
Messages
1,411
I thought I was supporting your point? hehe. Granted, the amount of RAM needed to preload a game will vary (Doom 3 vs a 2D sidescroller, etc.) but your theory is obviously sound (and let's face it, John Carmack programs all day for a job and builds rockets as a hobby in his spare time, so I'm guessing he knows his stuff :D).

Hopefully with 64-bit arriving with Windows XP x64, the 4GB RAM cap previously imposed will be of less importance and motherboards will start to allow for greater amounts of system RAM. Then all our credit cards will writhe in agony ;)

Kind Regards
 

Escape

Can't get enough of FH
Joined
Dec 26, 2003
Messages
1,643
btw TdC, there was a patch for NWN to help the long loading times. If you're still playing, might be worth checking it out ;)
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,693
really? ooh, I shall have to give it a go :)

it's not even installed atm though. the only games I play are Call of Duty and X2: The Threat
 

Danya

Fledgling Freddie
Joined
Dec 23, 2003
Messages
2,466
TdC said:
troo, troo, the thread would be sleeping a majority of the time but doing something with the data returned would be costly imo. to my mind your comment on a GPU being faster by far than a CPU is because it's a specialized bit of kit that does nothing else but graphics operations. if you had threads preprocessing animation and other stuff apart from the main engine, things would go very much more swimmingly, though I don't agree with the "only linear games being able to stream" stance. you don't have to know where a player's pawn will go to be able to preload/cache/whatever. I think it's because engines get tailored to be able to run on the "lowest common denominator" of cpu/mem/gfx combinations and thus don't do as much as they possibly could.
The GPU is much faster, and they're getting quite general-purpose these days. Even so, they are specialised for the operations that games need - physics, animation, etc. can all run just fine on a modern GPU because most of the work they do is 3D vector maths, which is what GPUs are very good at. AI can be a bit tougher because making decisions is not something a GPU does well.

I didn't mean to say that only linear games can stream, just that it's much easier to stream linear games, and you can often get load stalls in streamed non-linear games. It depends how much memory you're prepared to use for pre-caching really. A lot of PC games are, as you say, quite conservative about what they load to avoid sucking up too much memory on low spec machines.

Ideally when streaming you don't do much processing on the streamed-in data - a quick decompress at most. If you're having to do a lot of processing, odds are you're going to stall the game a lot. Preloading everything tends to be infeasible; many games have several gigabytes of compressed data on the CD/DVD, and loading it all in and decompressing it would require a silly amount of memory. I have noticed console games tend to do a better job of streaming than PC games in general though. If you want an example of a large streamed game currently, try World of Warcraft - it's huge, yet there are almost no loading screens in the game; it just streams the world in as you travel.

FWIW I am a games programmer, and have worked on streaming content in games (it worked really well but memory is a big issue, load screens take far less time).
 

Embattle

I am a FH squatter
Joined
Dec 22, 2003
Messages
10,439
Regarding preloading, I was quite happy to see GTA:SA ditch the loading screen when passing from one area to another; I wasn't so happy that HL2 seemed to keep it, especially when the game was so linear.

I still hope that dual cores will enable improved physics - I don't mean on the graphics side either, but better physical interaction with items and scenery. An example is CS:S, where ragdoll physics exist but, while nice, are also a little weak; even weaker are items like the cabinets on maps such as Office. Basic interactivity has been achieved to some extent, but they don't actually behave as they would in real life should a grenade go off right next to them, imho.

Dual cores offer good promise for the future. I personally believe it is important to think of them in the same way most currently look at 64-bit extensions to processors... it's a bonus and something for the future that will become commonplace, and at least I'll have it even if the tools aren't technically in place at the moment to take advantage of it right now.
 
