GeForce FX Hands On :)


GypsyCurse

Guest
dscf0027.jpg

dscf0026.jpg
 

~YuckFou~

Guest
Awful lot of heatsinks/copper, and a hooge fan :)
Venting out the back is very wise.
 

Wilier

Guest
That's all very well and good, but if PCI slots are in short supply anyway, why would you want to give up another? :p
 

Insane

Guest
very nice indeed :D

but I wouldn't mind the TFT monitor in the 2nd picture tho :) it looks strangely like a 19" iiyama TFT monitor...

Anyhow, what FOOL would use PCI1 for a PCI card?!? You'll cause compatibility problems!*

*Note: I noticed recently that my old MSI 6167 Slot A Athlon board has AGP, PCI1 and PCI3 on the same IRQ interrupt :eek6: but it doesn't matter really since W2K is doing the IRQ11 thing... every PCI slot used except PCI1 :) 2x Intel Pro/100 cards, a DPT SCSI RAID controller and a MuseLT
 

Testin da Cable

Guest
Looks like it indeed. I didn't get it though [a 22" CRT instead :)]
 

PR.

Guest
Uh, I can't even get a card into PCI1 when there's an AGP card fitted, but that looks soooo thick it may take PCI2 as well.
 

bodhi

Guest
Originally posted by Insane
very nice indeed :D

but I wouldn't mind the TFT monitor in the 2nd picture tho :) it looks strangely like a 19" iiyama TFT monitor...

Anyhow, what FOOL would use PCI1 for a PCI card?!? You'll cause compatibility problems!*


A fool who didn't buy a shit chipset, perchance?


GeForce FX will need all that cooling cos it runs at 500MHz, which is a huge advance on anything before. Looks like nVIDIA had to go to 0.13 micron technology so they could ramp up the clock speed in order to get NV30 to even touch Rad 9700 speeds.
 

kameleon

Guest
Resize the picture, it's too big.

Why do people need see-through windows in their box anyway?
 

bodhi

Guest
Originally posted by Sar
Touch/trounce.

Meh, it's all semantics tbh.

:p

Man, I see nVIDIA's marketing bullshit has another victim.
 

Xavier

Guest
Originally posted by bodhi


A fool who didn't buy a shit chipset, perchance?


GeForce FX will need all that cooling cos it runs at 500MHz, which is a huge advance on anything before. Looks like nVIDIA had to go to 0.13 micron technology so they could ramp up the clock speed in order to get NV30 to even touch Rad 9700 speeds.

OK, a quick comparison:

Radeon 9700 Pro
110M transistors @ 0.15 micron w/ aluminum interconnects @ 325MHz
96-bit floating point accuracy ROUNDED to 128-bit values - no discrete 64-bit mode for games such as Doom III
Full DX9 spec as per PS2.0/VS2.0
256-bit wide memory interface @ 310MHz - 19.2GB/s bandwidth with nil compression = 19.2GB/s effective

GeForce FX (aka NV30)
125M transistors @ 0.13 micron w/ copper interconnects @ 500MHz
128-bit floating point accuracy with true 32-bit granularity, resulting in discrete 32, 64 and 128-bit modes with respective accuracy/performance benefits
"Beyond" DX9 spec with up to 1024 Pixel Shader 2.0+ operations, 65,535 Vertex Shader 2.0+ operations, Conditional Write Masks, Call and Return, Dynamic Flow Control, Instruction Predicates
128MB of "DDR2" memory @ 500MHz - 16GB/s physical bandwidth with all colour and Z data losslessly compressed @ 4:1 (Intellisample technology) = 64GB/s effective
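
For anyone who wants the sums spelled out, here's a rough sketch of the arithmetic (purely illustrative, using only the figures quoted above):

def physical_bandwidth_gb(bus_bits, mtransfers_per_sec):
    # Bytes moved per second, expressed in GB/s (10^9 bytes)
    return (bus_bits / 8) * (mtransfers_per_sec * 1e6) / 1e9

def effective_bandwidth_gb(physical_gb, compression_ratio):
    # Effective figure if the quoted lossless compression ratio held for all traffic
    return physical_gb * compression_ratio

# GeForce FX as quoted: 128-bit bus, 500MHz DDR2 (~1000 MT/s), claimed 4:1 compression
fx_physical = physical_bandwidth_gb(128, 1000)          # 16.0 GB/s
fx_effective = effective_bandwidth_gb(fx_physical, 4)    # 64.0 GB/s

# Radeon 9700 Pro as quoted above: 19.2GB/s physical with nil compression
r300_effective = effective_bandwidth_gb(19.2, 1.0)       # 19.2 GB/s

print(fx_physical, fx_effective, r300_effective)          # 16.0 64.0 19.2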


While ATI are building a GPU on a tried and tested process, NVIDIA chose to make the hop to 0.13 micron sooner and have probably set themselves in better stead for their whole NV3X family. ATI still have to tackle all the hurdles of the new process - with NV30 it's done.

Don't forget, NVIDIA were confident enough to demonstrate their new technology on A1 silicon, the first spin which made it into boards - a genuine testament to the amount of time and effort spent in the simulation stage. ATI demonstrated the R300 on A12, and didn't take silicon to retail until A13 which needed a metal layer change before they could pin down clock speeds and respin the GPU.

Rather than being a case of them needing 0.13 micron to move up to 500MHz, what we're seeing is the very first yields of a relatively new process @ TSMC. Given time alone, the NV30 core (let alone new dies altogether) will be seen running much quicker.

On the cooling solution front it's worth noting two things. Firstly, the cooler in John's shots is only the NVIDIA reference for FlowFX - their partners can fit what they like, and if they have something thinner that does the job then they can use it with no problems. Secondly, that cooling solution is designed to run unobtrusively. Within Windows and 2D applications the GPU is clock-gated and runs far cooler; FlowFX monitors the temperatures and kicks the blower in when you fire up games or other applications which stress the chip... So when you're writing email you won't hear it, and during sessions of Doom III you'll have headphones or speakers piping out sound and won't notice it.
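
If it helps picture that behaviour, here's a tiny made-up sketch of a temperature-driven blower with hysteresis - purely illustrative, not NVIDIA's actual FlowFX logic, and the thresholds and sensor/fan calls are invented:

import time

SPIN_UP_C = 60     # hypothetical: 3D load has heated the chip, go loud
SPIN_DOWN_C = 45   # hypothetical: back to clock-gated 2D idle, go quiet

def control_blower(read_gpu_temp_c, set_blower, poll_seconds=1.0):
    # read_gpu_temp_c() and set_blower() are stand-ins for whatever the
    # board's monitoring hardware actually exposes
    loud = False
    while True:
        temp = read_gpu_temp_c()
        if not loud and temp >= SPIN_UP_C:
            set_blower("full")    # games / 3D apps stressing the chip
            loud = True
        elif loud and temp <= SPIN_DOWN_C:
            set_blower("quiet")   # writing email, browsing, 2D desktop
            loud = False
        time.sleep(poll_seconds)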

John wasn't the only one to see all the next-gen goodness either - I flew out to Vegas this week for COMDEX and caught the American launch of the GPU, and pretty spinky it was too.

NVIDIA CEO Jen-Hsun Huang on the stage @ the NV30 launch.

nv30_1.jpg


Oh, one final thing - Bodhi - I know you're a fan of Intel boards/CPUs, so I checked the resource sharing on my three main boxes, which all use the Intel 850E - all three share IRQs between PCI1 and the AGP, and one goes even further, sharing AGP/PCI1/PCI5. Taking into account the ventilation needed around the AGP (don't forget the ATX design was basically done by Intel and is very CPU-centric), it makes sense that if you're going to have to double up IRQs on any one slot, manufacturers do so with PCI1.
 

bodhi

Guest
I like how you put nVIDIA's compression technology in the sums but left Hyper-Z out of ATi's.
 

Xavier

Guest
Hyper-Z delivers variable compression, and only of colour data, so at some points it compresses zero data and sometimes manages as much as 50% - an extra 50% bandwidth (30GB/s total effective) is still HALF of what GeForce FX delivers.

If you want me to go into the other sides of intellisample just say ;)

anyhow child, just go away... you bore me.
 

Xavier

Guest
Oh, and remember - we're talking only about the effective bandwidth after compression - Hyper-Z III focuses mainly on discarding pixels which won't be rendered in the final scene. If you want comparisons of the z-culling between the two as well just ask.
 

bodhi

Guest
Originally posted by Xavier
anyhow child, just go away... you bore me.


A little bit rich after you have just tried your best to baffle me with tech speak but never mind.



It would seem Tom disagrees with you.

As does Anand


Anyway, as you've danced around any issues I've brought up so far, I'm pretty sure you'll start bleating on about them using different calculations or something. I mean, your numbers must be right. They make nVIDIA look better.

I'm also pretty sure ATi have something up their sleeve to counter it, and I'm also pretty sure the initial batch of Geforce FX drivers will generally suck.
 

Xavier

Guest
By 'Tom' you do of course mean, ahem, Lars Weinand - as for Anand, he's only seen documentation, no solid hardware.

Actually, browsing Anand's piece, he seems quite vocally supportive:

The compression engine is completely invisible to the rest of the architecture and the software running on the GeForce FX, which is key to its success. It is this technology that truly sets the GeForce FX apart from the Radeon 9700 Pro.

The compression engine and the high clock speed of the GeForce FX enabled NVIDIA to introduce two new anti-aliasing modes: 6XS under Direct3D, and 8X AA under both OpenGL and Direct3D. Because of the compression engine, performance with AA enabled should be excellent on the GeForce FX.

NVIDIA's Intellisample technology is the perfect example of the type of innovation we're used to seeing from them. NVIDIA will undoubtedly make the move to a 256-bit memory interface eventually, but until then the combination of high speed memory and their compression engine make for a very efficient use of memory bandwidth.
Odd, you must have read another article... or something :rolleyes:
 

bodhi

Guest
From FiringSquad:-

As far as NVIDIA’s bandwidth claims of GeForce FX’s 48GB/sec memory bandwidth, ATI states that the color compression in their HYPERZ III technology performs the same thing today, and with all of the techniques they use in RADEON 9700, they could claim bandwidth of nearly 100GB/sec, but if they did so no one would believe them, hence they’ve stood with offering just shy of 20GB/sec of bandwidth.


Kinda makes the 64GB/s you made up look even sillier tbh.


But then I knew you'd sidestep the fact that 64GB/s is just plain wrong.
 

Xavier

Guest
Yeah, a lot of journos have done that - 4:1 = 4x the base, not 4x the base minus the base.

i.e. 4 x 16GB/s = 64GB/s, not 64GB/s - 16GB/s = 48GB/s, otherwise the compression would only be 3:1 effective.

And ATI have never made those claims - with only half of their data being colour, only half the data they pass is a candidate for compression... so say 10GB/s of data remains uncompressed (Z data); that means they would have to be compressing their colour by a factor of 9:1 - which isn't possible losslessly with any known algorithm.
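
A quick back-of-the-envelope check of both of those sums, using only the figures quoted in this thread (my arithmetic, nobody's official numbers):

# 4:1 compression applied to the 16GB/s physical figure
fx_physical = 16.0
print(4 * fx_physical)                 # 64.0 -> 4x the base
print(4 * fx_physical - fx_physical)   # 48.0 -> the journos' figure, i.e. only 3:1 of extra

# ATI side: ~20GB/s physical, roughly half colour and half Z
ati_physical = 20.0
z_data = ati_physical / 2              # assumed to stay uncompressed
colour_data = ati_physical / 2
claimed_effective = 100.0              # the "nearly 100GB/sec" quote
needed_from_colour = claimed_effective - z_data   # 90GB/s delivered from 10GB/s of colour
print(needed_from_colour / colour_data)           # 9.0 -> a 9:1 lossless colour ratio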

Come on, keep bleating, your sad fanboy colours burn bright and you remind us yet again how petty you can be...

I'm waiting...
 

Xavier

Guest
Funnier still

Hmm, so - I had a quick look over the FiringSquad coverage of the GeForce FX but your quote was nowhere to be seen... so, getting on with other stuff, I looked at their COMDEX coverage and found the chunk you quoted under:

ATI’s take on GeForce FX

Hmm, nice - I'd go get that woolly coat shaved off mate, quit bleating and give us all a break... LOL
 

Xavier

Guest
Anyhow, I don't think anything more needs to be said on this matter, except that if you want to talk in terms of physical bandwidth rather than effective bandwidth then that's your call - the benchmarks will disprove you in the end.

We did quite a lot with a card last week and already have initial indications which totally disprove your opinions - and they're exactly that.

As for FiringSquad or anyone else: if you're going to fall in line like a little sheep and listen to ATI's take on their competitor's technology (and I'd see that as the worst angle to be at - after all, they haven't even had briefings, let alone product, so aside from yourself they are the least educated parties on the matter), then you deserve everything you get when you pick up one of their cards at retail. With the likes of Rage3D providing direct feedback to the ATI driver guys, and bug lists for games with the 9700 Pro reaching the 80-title mark, it really speaks for itself, doesn't it?
 
