Boni said: b) Texture room - this is a misconception about how the hardware should be used. The PS2 can upload 10MB of textures per frame. The way the GS and DMA operate on the PS2 means that you don't need more texture memory, but you do need a good texture manager, which is hard to program, and many, many programmers simply aren't up to the job.

Danya said: No matter how good your texture manager, you can still have a lot more texture space on Xbox than PS2 - you can, for instance, use 40MB of textures on Xbox should you wish to, and the hardware can easily support that much being accessed per frame. In contrast, you can't even load 40MB of textures on a PS2.
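To make the "good texture manager" idea a bit more concrete, here is a minimal sketch of a per-frame upload scheduler, using Boni's 10MB-per-frame figure purely as the example budget. The TextureUploadManager class and all its names are invented for illustration and just stand in for real DMA transfers into video memory.

```cpp
// Hypothetical sketch of a per-frame texture upload scheduler: textures live in
// main memory and only the ones needed this frame are queued for transfer to a
// small amount of video memory, subject to a per-frame byte budget.
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

struct Texture {
    std::string name;
    std::uint32_t sizeBytes;   // size of the texture data to transfer
    bool resident;             // already in video memory this frame?
};

class TextureUploadManager {
public:
    explicit TextureUploadManager(std::uint32_t frameBudgetBytes)
        : budget_(frameBudgetBytes) {}

    // Called for every texture a draw call needs this frame.
    void request(Texture& tex) {
        if (!tex.resident) pending_.push_back(&tex);
    }

    // Called once per frame: issue uploads until the budget runs out.
    // On real hardware this would kick DMA transfers; here we just mark
    // textures resident and report what happened.
    void flush() {
        std::uint32_t used = 0;
        for (Texture* tex : pending_) {
            if (used + tex->sizeBytes > budget_) {
                std::printf("deferred %s (budget exhausted)\n", tex->name.c_str());
                continue;
            }
            used += tex->sizeBytes;
            tex->resident = true;
            std::printf("uploaded %s (%u bytes)\n", tex->name.c_str(), tex->sizeBytes);
        }
        pending_.clear();
    }

private:
    std::uint32_t budget_;
    std::vector<Texture*> pending_;
};

int main() {
    // Budget chosen to match the 10MB/frame figure above, purely for illustration.
    TextureUploadManager mgr(10u * 1024u * 1024u);
    Texture a{"ground", 2u * 1024u * 1024u, false};
    Texture b{"sky",    4u * 1024u * 1024u, false};
    Texture c{"props",  6u * 1024u * 1024u, false};
    mgr.request(a); mgr.request(b); mgr.request(c);
    mgr.flush();   // "props" is deferred to the next frame in this sketch
}
```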
I agree with the 512x512 thing - that does seem to be the most common res used for backbuffers on PS2; 320x240 just looks too coarse. Xbox games usually run with a 640x480 backbuffer, though the console does support 720x480 for NTSC, 720x576 for PAL and 1280x720p/1920x1080i for HDTV. I would think most games could run at 720p without FSAA and maintain framerate, if they can run normal PAL/NTSC res with FSAA.
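Some rough arithmetic behind those resolution choices. The byte-per-pixel figures and the supersampling assumption are illustrative only, not a description of either console's actual buffer formats.

```cpp
// Back-of-the-envelope buffer sizes for the resolutions mentioned above.
// Assumes a 32-bit colour buffer plus a 32-bit depth/stencil buffer and 2x
// supersampling for "FSAA"; real consoles use various formats, so treat the
// numbers as illustrative only.
#include <cstdio>

int main() {
    struct Mode { const char* name; int w, h; int ssaa; };  // ssaa = supersample factor
    const Mode modes[] = {
        {"PS2-style 512x512",    512,  512, 1},
        {"640x480, no FSAA",     640,  480, 1},
        {"640x480, 2x FSAA",     640,  480, 2},
        {"1280x720, no FSAA",   1280,  720, 1},
    };
    const int bytesPerPixel = 4 /*colour*/ + 4 /*depth+stencil*/;
    for (const Mode& m : modes) {
        double mb = double(m.w) * m.h * m.ssaa * bytesPerPixel / (1024.0 * 1024.0);
        std::printf("%-22s ~%.1f MB of colour+Z\n", m.name, mb);
    }
    // A 1280x720 buffer is in the same ballpark as a 2x-supersampled 640x480 one
    // (about 1.5x the samples), which is why trading FSAA for 720p is plausible
    // if fill rate holds up.
}
```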
I've yet to see any technique possible on PS2 but not Xbox - you do actually have direct hardware access on Xbox should you wish to use it; very few do, though, because DirectX encapsulates all the functionality of the hardware anyway (NB: Xbox DX is not the same as PC DX).
Nxs et al who are waiting for the next gen hardware - don't hold your breath, you're looking at 2006 at the earliest before any of that hits the shelves.
Graknak said: Two PS2s, broadband adapters and a hub give you a nice home network for the PS2, FYI, and you can just go online with it, meet some friends and play there.
Danya said: There's no large bus hit for doing CPU geometry generation on Xbox - it's UMA: you write geometry to memory with the CPU, and the GPU reads the same memory and uses it for drawing. It's entirely possible to do that multiple times per frame should you wish to.
Similarly, it's quite possible (and even common) on Xbox to draw a texture and reuse it the same frame - multiple times if you wish. In addition, the Xbox supports more sophisticated blending techniques than the PS2, allowing for much richer and more visually impressive effects. If anything the Xbox is better at frame buffer effects because of the higher memory bandwidth.
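To illustrate the UMA geometry point in this post, here is a minimal sketch in which a plain shared array stands in for a dynamic vertex buffer: the CPU writes vertices and the "GPU" reads the very same memory, several batches per frame. gpuDraw() and buildRing() are invented placeholders for real draw and geometry-building calls.

```cpp
// Minimal model of the UMA point above: the CPU writes vertices straight into
// memory and the "GPU" draws from that same memory, several batches per frame.
// No copy of the vertex data is made anywhere.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };

// Pretend GPU: in a real engine this would be a draw call that points the
// graphics chip at `verts`; here it just reads the shared memory.
void gpuDraw(const Vertex* verts, std::size_t count) {
    std::printf("drawing %zu vertices from %p\n", count, static_cast<const void*>(verts));
}

// CPU-side procedural geometry: write a ring of vertices into the shared buffer.
void buildRing(std::vector<Vertex>& shared, float radius, int segments) {
    shared.clear();
    for (int i = 0; i < segments; ++i) {
        float a = 6.2831853f * float(i) / float(segments);
        shared.push_back({radius * std::cos(a), radius * std::sin(a), 0.0f});
    }
}

int main() {
    std::vector<Vertex> shared;                  // one block of memory, visible to CPU and "GPU"
    for (int batch = 0; batch < 3; ++batch) {    // several times per frame
        buildRing(shared, 1.0f + 0.5f * batch, 64);
        gpuDraw(shared.data(), shared.size());   // GPU reads what the CPU just wrote
    }
}
```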
Danya said: As for directly tinkering with the hardware - you can if you wish. The Xbox runs in ring 0, which means you have direct hardware access at all times. The only things Microsoft doesn't like you playing with directly are the hard disk and BIOS, for security reasons. The more pertinent question is why you'd need to, given DirectX on Xbox is a very thin layer and has many extensions to let you exploit the unique hardware features of the platform.
Boni said: There's an enormous hit for doing procedural geometry on the Xbox; writing and reading verts CPU-side and using UMA on a per-vertex basis to shift them to the GPU is too slow to do some of the more impressive tricks the PS2 is capable of. On a PS2 you should be able to achieve in excess of half a million dynamic verts per frame and be limited only by GPU speed; on an Xbox you'll be lucky to get 100k.

Danya said: Not really - the whole point of UMA is that the CPU and GPU share the same memory. It doesn't cost any more to shift CPU-generated verts to the GPU than pre-generated ones - you still have to read the same amount of data. The only real hit is that you have to use the CPU rather than having a dedicated chip, but the Xbox has a lot more CPU power than the PS2 anyway.
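Some back-of-the-envelope numbers behind that rebuttal. The vertex size and the bandwidth figure are assumptions chosen for illustration, not measured hardware specs.

```cpp
// Rough arithmetic behind the claim that pushing dynamic vertices through
// shared memory is not, by itself, the bottleneck. The 32-byte vertex and the
// bandwidth figure are illustrative assumptions, not hardware documentation.
#include <cstdio>

int main() {
    const double vertsPerFrame   = 500000.0;   // Boni's PS2 figure, used as a target
    const double bytesPerVertex  = 32.0;       // e.g. position + normal + UV, assumed
    const double framesPerSecond = 60.0;
    const double assumedBandwidthGBs = 6.4;    // ballpark UMA memory bandwidth, assumed

    double gbPerSecond = vertsPerFrame * bytesPerVertex * framesPerSecond / 1.0e9;
    std::printf("writing %.0fk verts/frame is ~%.2f GB/s, about %.0f%% of %.1f GB/s\n",
                vertsPerFrame / 1000.0, gbPerSecond,
                100.0 * gbPerSecond / assumedBandwidthGBs, assumedBandwidthGBs);
    // Under these assumptions the traffic is a modest slice of the memory budget,
    // so the practical limit is the CPU cycles spent generating the vertices,
    // not the act of moving them through shared memory.
}
```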
Boni said: No, what I am talking about is reading from the same frame you are drawing to several times in succession, not just setting up a texture to render to and using that; hardware limitations stop you dead doing this on Xbox or PC architecture (or at least without stalling the hardware). It's not a particularly obvious or useful thing, but it is something I came across that caused some headaches but was trivial to do on GC & PS2.

Danya said: The game I'm working on currently reads the frame buffer and renders using it in the space of a frame without any issues on Xbox. It's more than possible, and indeed not particularly difficult to do without completely stalling the hardware. I'm not sure I'd want to do it on PC though.
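A toy sketch of the feedback trick being argued about, assuming the "resolve the live buffer to a texture" step can be modelled by a plain copy. Buffer, drawBand() and drawWithTexture() are invented stand-ins for real render-target and draw calls.

```cpp
// Toy model of the feedback trick: part-way through a frame, snapshot the
// buffer that is being drawn to, then immediately draw using that snapshot as
// a texture, and repeat. On real hardware the "snapshot" step is a resolve or
// copy of the render target into a texture; here it is just a buffer copy.
#include <cstdio>
#include <cstring>
#include <vector>

struct Buffer {
    int w, h;
    std::vector<unsigned char> pixels;   // one byte per pixel for simplicity
    Buffer(int w_, int h_) : w(w_), h(h_), pixels(w_ * h_, 0) {}
};

// Stand-in for "draw something": brighten a horizontal band.
void drawBand(Buffer& b, int row, unsigned char value) {
    std::memset(&b.pixels[row * b.w], value, b.w);
}

// Stand-in for "draw a quad textured with `tex`": blend the snapshot back in,
// sampled one row up, which is what gives feedback effects their trails.
void drawWithTexture(Buffer& b, const Buffer& tex) {
    for (int y = b.h - 1; y > 0; --y)
        for (int x = 0; x < b.w; ++x)
            b.pixels[y * b.w + x] = (b.pixels[y * b.w + x] + tex.pixels[(y - 1) * b.w + x]) / 2;
}

int main() {
    Buffer frame(8, 8);
    Buffer snapshot(8, 8);
    drawBand(frame, 0, 255);                   // draw something into the frame
    for (int pass = 0; pass < 3; ++pass) {     // several times in succession
        snapshot.pixels = frame.pixels;        // resolve the live buffer to a "texture"
        drawWithTexture(frame, snapshot);      // and render with it straight away
    }
    std::printf("column 0 after feedback: ");
    for (int y = 0; y < frame.h; ++y) std::printf("%d ", frame.pixels[y * frame.w]);
    std::printf("\n");
}
```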
Boni said: Memory bandwidth wouldn't come into it for frame buffer effects on the PS2 at all; all you need to do is render, no information needs to go anywhere at all, and you can even do operations in place (see previous paragraph). If anything the PS2 is better because full-screen buffer effects are essentially done at the cost of a handful of polygons; the only limiting factor is really your fill rate, and the PS2 wins there.

Danya said: Um, the frame buffer has to live somewhere, it's not just magically "there". Generally the frame buffer lives in memory (I don't know of any machine with enough registers to hold an entire frame), and if you read or write it, that uses memory bandwidth.
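To put numbers on the bandwidth point, here is a small worked example; the buffer format, pass count and frame rate are assumptions for illustration.

```cpp
// What "it uses memory bandwidth" amounts to in numbers: a full-screen pass
// that reads and writes a 32-bit 640x480 buffer moves a fixed amount of data
// per frame. Figures are illustrative; formats vary per platform.
#include <cstdio>

int main() {
    const double width = 640.0, height = 480.0;
    const double bytesPerPixel = 4.0;          // 32-bit colour, ignoring Z for simplicity
    const double fps = 60.0;
    const int passes = 3;                      // e.g. a few stacked full-screen effects

    double perPassMB = width * height * bytesPerPixel * 2.0 / (1024.0 * 1024.0); // read + write
    double totalGBs  = perPassMB * passes * fps / 1024.0;
    std::printf("each pass touches ~%.1f MB; %d passes at %.0f fps is ~%.2f GB/s\n",
                perPassMB, passes, fps, totalGBs);
    // Whether that is cheap or painful depends entirely on how much bandwidth
    // and fill rate the frame buffer memory has, which is exactly the point in
    // dispute between the two posts above.
}
```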
Boni said: The trouble is that there isn't any documentation available for the graphics hardware; Microsoft don't support developers who tinker (with the exception of a couple of 'favoured', mainly in-house teams). In fact it's highly likely that a title that did use the hardware directly would be failed by Microsoft through some breach of their standards. Why would you want to get round DX and at the hardware directly? To make better games through better use of the hardware, to outperform your competitors' rendering engines.

Danya said: As mentioned, the only things you can fail standards for are direct hard disk and BIOS interaction. MS will allow you to directly use the graphics and sound hardware if you wish, but there really isn't much you can do with it that you can't do with DirectX, and spending a lot of time writing direct-hardware routines for what is probably less than 5% extra speed really isn't going to be worth it for developers.
Boni said: Interesting - is there no cost for UMA at all? I thought you could gain by having your memory read-only... The speed of the PS2 vector units with their custom instructions is suited perfectly to the purpose of building geometry and will outperform the Xbox CPU plowing through fields of DirectX verts many times over. Basically, see how many different moving objects you can draw on Xbox...

Danya said: UMA means it's the same memory for the graphics card and the CPU - all your geometry has to live there, so how you generate it doesn't really add any cost. The only possible hit is the CPU cycles themselves and waiting for the CPU to flush the data down the bus; that's typically a minimal hit and can be avoided with a bit of care.
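One example of the "bit of care" mentioned above: a sketch of double-buffering the dynamic vertex memory so the CPU never overwrites a buffer the GPU might still be reading. FakeGpu and the buffer sizes are invented for illustration; only the pattern matters.

```cpp
// Double-buffer the dynamic vertex memory: the CPU fills buffer A while the
// GPU is still reading buffer B from the previous batch, instead of stalling
// until the GPU is finished. The GPU here is faked.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };

struct FakeGpu {
    const Vertex* busyWith = nullptr;          // buffer the GPU is "reading"
    void kick(const Vertex* verts, std::size_t n) {
        busyWith = verts;
        std::printf("GPU consuming %zu verts at %p\n", n, static_cast<const void*>(verts));
    }
};

int main() {
    std::vector<Vertex> buffers[2] = { std::vector<Vertex>(1024), std::vector<Vertex>(1024) };
    FakeGpu gpu;

    for (int frame = 0; frame < 4; ++frame) {
        std::vector<Vertex>& writeBuf = buffers[frame & 1];   // CPU writes this one
        // Safe to overwrite: the GPU was last pointed at the other buffer.
        for (std::size_t i = 0; i < writeBuf.size(); ++i)
            writeBuf[i] = { float(frame), float(i), 0.0f };   // procedural fill
        gpu.kick(writeBuf.data(), writeBuf.size());           // GPU reads it next
    }
}
```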
Boni said: Cool, perhaps it was PC I couldn't get this silly effect to work on. It made nice 'stargate'-like effects if you can get a few thousand sprites doing it.
Danya said: As for plowing through verts repeatedly - you wouldn't do that on the CPU, you'd do it on the GPU in a vertex shader. That allows for massive numbers of moving objects relatively cheaply, as long as they don't have to move in too complicated a fashion.
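A CPU-side model of what that vertex-shader approach amounts to, assuming simple per-object motion. animateVertex() is a made-up stand-in for the per-vertex program the GPU would run, and ObjectConstants for the handful of constants the CPU sets per object.

```cpp
// CPU-side model of the vertex-shader approach: the mesh is uploaded once, and
// per object the CPU only supplies a few "shader constants" (here an offset
// and a phase). animateVertex() stands in for the per-vertex program the GPU
// would run, so thousands of moving objects share one static mesh.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct ObjectConstants { Vec3 offset; float phase; };   // what would go in constant registers

// The work the vertex shader would do for each vertex of each object: a simple
// sway driven by the per-object constants. Nothing per-vertex comes from the CPU.
Vec3 animateVertex(const Vec3& v, const ObjectConstants& c, float time) {
    float sway = std::sin(time + c.phase) * 0.25f;
    return { v.x + c.offset.x + sway, v.y + c.offset.y, v.z + c.offset.z };
}

int main() {
    const std::vector<Vec3> mesh = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} };   // one shared static mesh
    std::vector<ObjectConstants> objects;
    for (int i = 0; i < 10000; ++i)                                       // lots of moving objects
        objects.push_back({ { float(i % 100), 0.0f, float(i / 100) }, 0.01f * i });

    float time = 1.0f;
    double checksum = 0.0;                       // just to show the work being done
    for (const ObjectConstants& c : objects)     // per object: set constants, draw
        for (const Vec3& v : mesh)               // per vertex: GPU-side animation
            checksum += animateVertex(v, c, time).x;
    std::printf("animated %zu objects, checksum %.1f\n", objects.size(), checksum);
}
```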