TweakTown

nVidia GeForce FX 5900 Ultra - Fourth Time Lucky

For many months enthusiasts awaited the arrival of the nVidia GeForce FX 5800, only to be let down by poor performance, jet-engine levels of noise from its cooler and supply problems due to the use of DDR-II. nVidia acknowledged the FX 5800 was a failure and has recently released the GeForce FX 5900 Ultra with many improvements over its predecessor. Read on as Cameron "Sov" Johnson tells us whether nVidia has learned from its mistakes.
Published Tue, Jun 10 2003 11:00 PM CDT   |   Updated Tue, Apr 7 2020 12:25 PM CDT
Rating: 90% | Manufacturer: nVidia

GeForce FX 5900 Ultra - Introduction

Introduction

It certainly has been a long and hard road for the 3D giant nVidia - three product releases and not a winner amongst them. Given what we have seen in the past from nVidia, with high speed cards leading the market, it is certainly a first to see the company's newest products falling to the bottom of the order. Design wise, nVidia set out to bring in a new wave of 3D entertainment, but by borrowing Intel's idea that core speed is everything, nVidia seems to have forgotten that the 3D world is fickle - remember the Voodoo2 and TNT2?

Just as graphics cards based on the GeForce FX 5800 chipset, known to us as the NV30, are finally making their way into the retail channel, nVidia is introducing its follow-up product. Following the company's nomenclature, the new high-end chip is called the nVidia GeForce FX 5900.

nVidia made a few mistakes in its first attempts at the FX line. First, the memory. The NV30 was originally intended to use a 256-bit interface, similar to the ATI Radeon 9700 and 9800, but due to the unavailability of the Flip Chip package a 256-bit interface was deemed unusable, which led nVidia to outfit the card with a 128-bit DDR-II interface - this was mistake number one.

Mistake number two was the use of 0.13 micron technology. nVidia banked on the 0.13 micron process being fully mature and free of bugs. This was not the case, which pushed the NV30 release date back many months and eventually led to the recent supply problems. On top of that, nVidia's reference cooling solution proved to be unacceptably loud, earning the reference card a host of degrading names amongst enthusiasts. Lastly, the chip's image quality when using anisotropic filtering didn't live up to expectations and was criticized in many reviews as a result.

When ATI launched its Radeon 9800 Pro, it was able to beat nVidia's flagship in practically every discipline. In short, the FX 5800/NV30 is too loud, too expensive, offers sub-par image quality and is slower than its direct competitor, the Radeon 9800. Now nVidia has introduced the GeForce FX 5900 to take on the ATI giant - but can it? Read on and find out.

GeForce FX 5900 Ultra - GeForce FX 5900 Ultra in Detail

GeForce FX 5900 Ultra in Detail

nVidia is aiming to remedy all of the problems of the NV30 core with the new FX 5900 (NV35) series. Obviously, this is not a completely new design, but a lot of details have been changed and improved.

- Memory

The NV35 is built on the 0.13 micron process just like the NV30 and NV31. We do get a few new buzzwords with this release, CineFX 2.0 and Intellisample HCT, which we will go over a bit later. There are a few big improvements in the NV35 over the NV30. The hardest hitting change is the use of a 256-bit memory bus. You will recall the ATI Radeon 9700/Pro and 9800/Pro already use a 256-bit memory bus. Now the NV35 utilizes this bus width as well, allowing for much more bandwidth than the NV30 could move. Another addition is the use of 256MB of total video memory on the high end GeForce FX 5900 Ultra board. This added memory gives you more room for framebuffer duties at high resolutions and AA settings, combined with texture storage. The GeForce FX 5900 Ultra ships with an 850MHz DDR memory speed on a 256-bit memory bus, which produces a total of 27.2GB/sec of raw memory bandwidth. That is a 70% increase in raw memory bandwidth over the GeForce FX 5800 Ultra and 21% more raw bandwidth than the ATI Radeon 9800 Pro 256MB (see the worked figures after this section). However, this is still a 4 pixel pipeline card.

- Cooling

Many gamers and definitely all hardware testers will be relieved to know that the new cooling solution sounds a lot less like a vacuum cleaner and operates at a much more bearable volume.

- CineFX 2.0 and UltraShadow

CineFX 2.0 is the name given to the cinematic quality part of the NV35. This is where the 2x increase in shader ops comes into play, as well as the UltraShadow technology and 128-bit internal precision. UltraShadow is nVidia's name for a new routine in the NV35 that helps accelerate shadow volumes. As you know, shadow volumes will be a big part of Doom III, and any card that wants to compete with the big boys will need to handle these extensive stencil shadows very efficiently. The easiest way to think about UltraShadow is to visualize that the card is not calculating shadows that cannot be seen by the camera. Shadows behind an object on your screen, or out of view of your character (or you), are not given to the GPU to figure out. Therefore the 5900 Ultra is not doing work that doesn't need to be done, freeing those GPU cycles for other priorities.
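As a quick sanity check on the memory bandwidth figures quoted in the Memory section above, here is a minimal back-of-the-envelope sketch (C, purely illustrative and not part of nVidia's material). It assumes the commonly quoted clocks for the comparison cards - 1000MHz effective DDR-II on a 128-bit bus for the FX 5800 Ultra and 700MHz effective DDR-II on a 256-bit bus for the Radeon 9800 Pro 256MB - and reproduces the 27.2GB/sec, 70% and 21% numbers.

```c
/* Back-of-the-envelope check of the bandwidth figures quoted above.
   Effective DDR rate (MHz) x bus width (bits) / 8 = bytes per second. */
#include <stdio.h>

static double bandwidth_gb(double ddr_mhz, int bus_bits)
{
    return ddr_mhz * 1e6 * bus_bits / 8.0 / 1e9; /* GB/s, decimal */
}

int main(void)
{
    double fx5900 = bandwidth_gb(850.0, 256);  /* GeForce FX 5900 Ultra */
    double fx5800 = bandwidth_gb(1000.0, 128); /* GeForce FX 5800 Ultra (assumed clock) */
    double r9800  = bandwidth_gb(700.0, 256);  /* Radeon 9800 Pro 256MB (assumed clock) */

    printf("FX 5900 Ultra: %.1f GB/s\n", fx5900);                       /* 27.2 GB/s */
    printf("vs FX 5800 Ultra: +%.0f%%\n", (fx5900 / fx5800 - 1) * 100); /* ~70 percent */
    printf("vs Radeon 9800 Pro: +%.0f%%\n", (fx5900 / r9800 - 1) * 100);/* ~21 percent */
    return 0;
}
```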

GeForce FX 5900 Ultra - GeForce FX 5900 Ultra in Detail Continued

GeForce FX 5900 Ultra in Detail Continued

- Intellisample HCT

nVidia incorporates the new Intellisample HCT. HCT stands for High Compression Technology, which tells us that Intellisample now uses new compression algorithms. The following is taken from the nVidia handbook on the new GeForce FX 5900 Ultra:

NVIDIA Intellisample HCT delivers the next step in performance and visual quality, compressing color, texture, and z data. Performance gains can be seen in all applications, especially at high resolutions with antialiasing. This new technology allows for up to a 50-percent increase in compression efficiency in these modes, and delivers unprecedented visual quality for resolutions up to 1600 × 1280. NVIDIA Intellisample HCT includes a "quality" mode that delivers true anisotropic filtering and that allows programmers to achieve unmatched visual clarity for every portion of the scene, even difficult areas representing surfaces viewed from a sharp angle or from a distance.

- More on UltraShadow

UltraShadow is nVidia's name for a new routine employed to help accelerate shadow volumes. Doom 3 will be the first game to put a huge shadowing load on a video card, and here is what nVidia has to say:

Shaders are also used to create sophisticated shadows for enhanced realism. The new NVIDIA UltraShadow™ technology accelerates the complex computations for lighting source and object interactions. For details on this patent-pending NVIDIA innovation, please refer to the NVIDIA GeForce FX: UltraShadow Technology paper at www.nvidia.com.

UltraShadow gives programmers the ability to calculate shadows much more quickly by eliminating unnecessary areas from consideration. With UltraShadow, programmers can define a bounded portion of the scene (often called depth bounds) that limits calculations of lighting source effects to objects within a specified area. By limiting calculations to the area most affected by a light source, the overall shadow generation process can be greatly accelerated. Programmers can fine-tune shadows within critical regions, create incredible visualizations that effectively mimic reality, and still achieve awesome performance for fast-action games. The accelerated shadow generation can also free up time that can be allocated to other sophisticated but time-consuming effects.

Because stenciled shadow volumes require no texturing or color updates, the hardware "doubles up" the rendering horsepower to generate stenciled shadow volumes at speeds of up to double the standard pixel-processing rate. Other graphics solutions have to render stenciled shadow volumes in two passes. UltraShadow accomplishes the shadow volume rendering in a single pass, reducing CPU overhead and improving GPU performance. The NVIDIA approach also interoperates with NVIDIA Intellisample™ high-resolution compression technology (HCT) to make sure that shadow edges are properly antialiased. The GeForce FX 5900 GPUs maintain the stencil information on a sub-pixel basis, ensuring that shadow edges are antialiased rather than "blocky" or "jaggy".
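For programmers, the "depth bounds" idea nVidia describes above was later exposed to OpenGL applications via the EXT_depth_bounds_test extension. The sketch below is a minimal illustration of that usage pattern rather than anything taken from nVidia's paper; drawShadowVolume() and the zmin/zmax values are hypothetical placeholders, and in a real application the extension entry point would be loaded at runtime and its availability checked first.

```c
/* Minimal sketch: limiting stencil shadow-volume rasterization with the
   EXT_depth_bounds_test extension (the OpenGL face of the depth-bounds idea
   behind UltraShadow). drawShadowVolume() and the bounds are placeholders. */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

extern void drawShadowVolume(void);  /* hypothetical: issues the shadow volume geometry */

void renderShadowVolume(double zmin, double zmax)
{
    /* Fragments whose stored depth falls outside [zmin, zmax] - the slab of the
       scene the light can actually affect - are discarded before stencil work. */
    glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
    glDepthBoundsEXT(zmin, zmax);

    drawShadowVolume();

    glDisable(GL_DEPTH_BOUNDS_TEST_EXT);
}
```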

GeForce FX 5900 Ultra - The Card

The Card in Detail

What kind of review would it be from us if we didn't give you a look at the new nVidia offering?
First off we have a look at the card itself. This is by far the largest card nVidia has ever produced. Using a 6 layer PCB, the reference FX 5900 Ultra is definitely a card you need a lot of space for. nVidia has confirmed that this is not the final size and retail cards will come out smaller than the one we have here, which is definitely good news.

With the size of the card and the heatsink unit, the reference card is a lot fatter than most - users of the ABIT OTES systems will understand. The unit takes up one PCI slot, so you will lose PCI1 on most motherboards. The unit also requires you to place screws in the AGP and PCI1 backplane lock downs, which lets you know you can't use PCI1 at all. nVidia has done this to protect users. According to Steve Sims in our conference call with nVidia, they've purposely taken up PCI1 to stop users shorting out their cards; since you could probably still squeeze a card into PCI1, the chances of it touching the FX 5900 Ultra would be high.
The cooling solution nVidia uses on the FX 5900 Ultra, while as large as the NV30's, is far quieter than its predecessor. Producing around 38dB, it is much quieter than the 70dB that the 5800 Ultra cards produced. The heat that the NV35 core generates is higher than any previous chip nVidia has released, which required a much larger cooling solution on the GPU. Taking up an extra PCI slot is one of the disadvantages of this solution; however, 90% of users out there don't populate PCI1, so it won't be a big issue for many.

The cooler itself is a three piece unit. On the front of the card you have the fan assembly, which is held onto the card with push pins; removing these reveals the NV35 GPU as pictured below. On the back of the card, four screws hold the rear heatsink onto the memory and screw into the front RAM sinks to hold them in place; once removed, this shows the bare essentials of the card.
Under the massive heatsink is the largest GPU core we have ever seen from nVidia. From our photo you can see that the core is only a reference sample, with NV35 printed on it rather than GeForce FX 5900. Built on a Flip Chip package, this core is now able to dissipate heat much more efficiently than the 5800 Ultra, which should be a delight to all of us.
Totalling 256MB of memory onboard, 16 TinyBGA memory modules are included, which is the most we have ever seen on any card. nVidia uses Samsung 2.2ns TinyBGA DDR-I memory on a 256-bit interface to deliver a huge bandwidth boost over the FX 5800 series - finally, something done right.
Power consumption of the FX 5900 has increased very slightly over the 5800. nVidia provides a four pin Molex connector to allow you to give the card the extra power it requires. Onboard you can see some very serious power regulation circuitry put into place to deliver clean power to the FX 5900.

GeForce FX 5900 Ultra - Benchmarks - Test System and Synthetics

Benchmarks

Test System
Processor: Intel Pentium 4 3.0GHz (800MHz FSB) (Supplied by Spectrum Communications)
Memory: 2x 256MB Kingmax DDR-400 (Supplied by Kingmax Australia)
Hard Disk: Western Digital WD80 7200RPM (Supplied by Techbuy)
Motherboard: Gigabyte 8KNXP Pro Canterwood (Supplied by Gigabyte)
Operating System: Windows XP Pro
Drivers: Detonator FX
Software: 3DMark03, Vulpine GLMark 1.1, Quake 3 Arena, Star Trek Voyager, Jedi Knight II, Comanche 4, Aquanox, UT2003

Synthetic 3D

3DMark03 Build 330
3DMark03 is the latest instalment in the popular 3DMark series. By combining DirectX 9 support with completely new graphics tests and support for the latest hardware (including the GeForce FX and ATI Radeon 9800), it continues to provide benchmark results that empower you to make informed hardware assessments.
3DMark03 shows a marked improvement over the Radeon and the 5600 series. This shows what can be done when it's done right.

Vulpine GLMark 1.1p
Vulpine GLMark is a Windows based OpenGL benchmark designed to stress the OpenGL subsystems of a 3D accelerator. Patch 1.1p adds support for the ATI Radeon 9800's fast 256-bit memory interface optimisations and early support for the nVidia GeForce FX.
The new FX 5900 is starting to show how the new engines onboard can give it the lead in the synthetics.

GeForce FX 5900 Ultra - Benchmarks - OpenGL

Real World OpenGL

Quake 3 Arena
Quake 3 Arena is a real-world OpenGL benchmark that we have been using here at TweakTown for quite a while now because it has proven itself to be one of the best gaming benchmarks around for comparing a range of different products.
Quake 3, while not using all of the latest engine features, still shows the increased memory bandwidth of the FX 5900 over the Radeon and the FX 5600.

Star Trek Voyager
Star Trek Voyager is a real-world OpenGL benchmark. Based on the Quake 3 Arena engine, this game is an OpenGL master utilising DirectX 8. We also apply the new Opt3D patch to allow for the use of hardware T&L as well as new optimisations for the AMD Athlon XP and Pentium 4 SSE2.
Again, like Quake 3 Arena, the FX 5900 wins Voyager.

Jedi Knight II
Jedi Knight II: Jedi Outcast is a recently released OpenGL game that many have been waiting for. It has much improved graphics over its predecessor and fully supports advanced shaders, as well as very high texture resolutions and effects. There is one demo included in the multi-player section that is good for benchmarking use. In order to enable benchmarking mode, you have to make a shortcut to the jk2mp.exe program located in the GameData folder of Jedi Knight 2. You have to put the switch "+set sv_cheats 1" (no quotes) at the end of the line in the Target area so that it looks like this: "C:\Star Wars JK II Jedi Outcast\GameData\jk2mp.exe" +set sv_cheats 1. The demo file used is jk2ffa.
Jedi Knight II also benefits from the latest addition from nVidia.

GeForce FX 5900 Ultra - Benchmarks - Direct3D

Real World Direct3D

Comanche 4
Comanche 4 is a helicopter flying game using the DirectX 8.1 graphics interface. It is used to test the memory and 3D subsystems of a motherboard and video processor. Any weaknesses will show up on this baby.
Comanche 4 also shows the improvements in the FX 5900 and gives it the edge in D3D applications.

Aquanox
Aquanox is the latest addition to our benchmark suite. This game is based heavily on DirectX 8 and 8.1 advancements and is designed to stress video cards to their ultimate limit - all in all, the best D3D benchmark to date.
Aquanox, being heavy on the 3D side, shows that the FX 5900 has improved since the NV30 generation.

UT2003
Unreal Tournament 2003 continues the success that Unreal Tournament generated as an online game and benchmark. UT2003 puts all of its weight on the 3D and memory subsystems; pushing graphical realism to the maximum is its game, and you need some serious power to pull it off.
In our last test of the day we see nVidia take the final victory.

GeForce FX 5900 Ultra - Conclusion

Conclusion

nVidia let the entire hardware community down with the FX 5200, 5600 and 5800; these lines were considered a total waste of money compared to what was otherwise available to the end user, and we here agree. The mothership of the nVidia line, the FX 5800 Ultra, was unable to beat a Radeon 9700 Pro in most benchmarks, let alone the 9800 series. With the price of the FX 5800 being well over $800 AUD, why would you take the inferior product? You wouldn't.

nVidia has now shown what it can do with a bit more research, a few good ideas and by listening to what people want rather than simply sticking something in front of them. After all the troubles the FX 5800 Ultra had, such as poor memory bandwidth, graphic distortion with FSAA enabled and the ultra high noise courtesy of the horrid cooling solution, the NV35 is definitely a giant leap in the right direction for nVidia. The improved 256-bit memory interface easily picks up the slack and then some. The FX 5900 is now able to outpace the Radeon 9800 Pro in all relevant benchmarks and can reclaim the ultimate 3D card crown for nVidia - for the moment.

While the FX 5900 has its good points, there are a few drawbacks with the new 3D marvel. First off, the PCB is much larger than any previous card from nVidia. This can cause serious space issues in SFF computers, which are the biggest sellers amongst the enthusiast community these days. The larger PCB also costs a considerable amount more to produce compared to the relatively small PCB design of the Radeon 9800. The other major drawback is the size of the cooler. The unit blocks the first PCI slot on most motherboards, reducing you to five or in some cases four usable PCI slots.

- Pros
Faster than the FX 5800 Ultra
Quieter cooling solution
Improved FSAA image quality
Improved driver support

- Cons
Retail cards expected to cost well over $900 AUD
Cooling solution blocks PCI slot 1

Rating: 9 out of 10 and TweakTown's Editors Choice
