63 million transistors (6 million more than the GeForce3)
Manufactured in TSMC's 0.15 µm process
Chip clock 225 - 300 MHz
Memory clock 500 - 650 MHz
Memory bandwidth 8,000 - 10,400 MB/s
T&L performance of 75 - 100 million vertices/s
128 MB frame buffer by default
nfiniteFX II engine
Accuview Anti Aliasing
Light Speed Memory Architecture II
There are currently three planned versions in the GeForce4 Ti line.
First is the budget-oriented Ti 4200, with a 225MHz core clock and 500MHz (effective) DDR SDRAM, giving a total memory bandwidth of 8GB/s and a theoretical fill rate of 900 megapixels per second.
The middle of the line is the Ti 4400, at 275MHz core and 550MHz memory, giving a total memory bandwidth of 8.8GB/s and a theoretical fill rate of 1.1 gigapixels per second.
The top of the line is the Ti 4600, at 300MHz core and 650MHz memory, giving a total memory bandwidth of 10.4GB/s and a theoretical fill rate of 1.2 gigapixels per second.
All three cards come standard with a 128MB frame buffer for maximum performance. Now that we have covered the basic specifications, let's look at some of the features specific to the GeForce4 cards.
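The bandwidth and fill-rate figures above follow directly from the clocks: a 128-bit memory bus moving data on a DDR (effective) clock, and four pixel pipelines running at the core clock. A quick sketch of the arithmetic (the helper names here are our own, purely for illustration):

```python
# Worked arithmetic for the three GeForce4 Ti models (illustrative helpers,
# not NVIDIA code). Bandwidth = effective DDR clock (MHz) x 128-bit bus / 8;
# fill rate = core clock (MHz) x 4 pixel pipelines.

def bandwidth_gbs(effective_mem_mhz, bus_bits=128):
    """Peak memory bandwidth in GB/s (using 1 GB/s = 1000 MB/s)."""
    return effective_mem_mhz * (bus_bits // 8) / 1000.0

def fill_rate_mpix(core_mhz, pipelines=4):
    """Theoretical pixel fill rate in megapixels per second."""
    return core_mhz * pipelines

for name, core, mem in [("Ti 4200", 225, 500),
                        ("Ti 4400", 275, 550),
                        ("Ti 4600", 300, 650)]:
    print(name, bandwidth_gbs(mem), "GB/s,", fill_rate_mpix(core), "Mpix/s")
```

Plugging in the Ti 4600's numbers gives exactly the quoted figures: 650 x 16 bytes = 10.4GB/s and 300 x 4 = 1,200 megapixels (1.2 gigapixels) per second.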
First introduced with the GeForce3, nfiniteFX is NVIDIA's name for its programmable vertex and pixel shader engine. The feature is exposed through DirectX 8, so game programmers can access it via DirectX 8 optimizations in their titles. The GeForce4 gets two of these engines, allowing for more speed in the T&L stage.
New Anti-Aliasing: Accuview
When the GeForce3 was released, NVIDIA introduced what is known as high resolution anti-aliasing (HRAA), based on multi-sampling AA. The GeForce4 comes with Accuview, an advanced multi-sampling AA that is supposed to be improved in terms of quality as well as performance. While anti-aliasing is a nice cosmetic effect, it demands a lot from the video card, and enabling it can cut frame rates from 200FPS to around 100FPS or even less.
Lightspeed Memory Architecture
The most important factor behind the GeForce4 Ti's impressive performance leap over the GeForce3 is the new and improved Lightspeed Memory Architecture.
Crossbar Memory Controller
The GeForce 3 was already equipped with this feature, enabling it to access memory in 64-bit, 128-bit as well as the usual 256-bit chunks, significantly improving memory bandwidth usage. For LMA II, NVIDIA improved the load balancing algorithms for the different memory partitions and improved the priority scheme to make more efficient use of memory across the four partitions.
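One way to picture the crossbar idea is memory addresses interleaved across independent partitions, so sequential accesses spread the load rather than queueing on one controller. This is a purely illustrative sketch (the names and the interleaving scheme are our assumptions; NVIDIA's actual LMA II balancing algorithms are not public):

```python
# Illustrative model of interleaving requests across four independent
# memory partitions (all names hypothetical). Addresses map to partitions
# in small chunks, so streaming accesses spread evenly over all queues.

from collections import defaultdict

NUM_PARTITIONS = 4

def partition_of(address):
    # Interleave addresses across partitions in 4-byte chunks.
    return (address // 4) % NUM_PARTITIONS

queue_depth = defaultdict(int)  # outstanding requests per partition

def issue_request(address):
    p = partition_of(address)
    queue_depth[p] += 1         # request waits in that partition's queue
    return p

# Sixteen sequential accesses land evenly on all four partitions:
for addr in range(0, 64, 4):
    issue_request(addr)
print(dict(queue_depth))
```

With a single monolithic controller, all sixteen requests would sit in one queue; here each partition sees only four, which is the bandwidth-utilization win the crossbar design is after.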
Visibility Subsystem - Z-Occlusion Culling
This feature was also present in the GeForce3, but for the NV25 it has been tuned to cull more pixels while using less memory bandwidth to do so. The culling is now done in a dedicated on-chip culling-surface cache to avoid off-chip memory accesses.
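The principle behind Z-occlusion culling can be shown in a few lines: each candidate pixel is depth-tested against the stored Z value, and fragments that lie behind what is already drawn are discarded before any expensive frame-buffer work. A minimal sketch (our own simplification, not the hardware's actual pipeline):

```python
# Minimal sketch of Z-occlusion culling: test a fragment's depth against
# the Z-buffer and skip all further work for occluded fragments.

def draw_pixel(x, y, depth, color, z_buffer, frame_buffer):
    """Draw only if this fragment is nearer than what is already stored."""
    if depth >= z_buffer[y][x]:
        return False                  # occluded: culled, no color write
    z_buffer[y][x] = depth            # visible: update depth and color
    frame_buffer[y][x] = color
    return True

z = [[1.0] * 4 for _ in range(4)]     # depth buffer initialized to far plane
fb = [[0] * 4 for _ in range(4)]
print(draw_pixel(1, 1, 0.5, 0xFF0000, z, fb))  # near fragment: drawn, True
print(draw_pixel(1, 1, 0.8, 0x00FF00, z, fb))  # behind it: culled, False
```

The NV25's refinement is where this test happens: keeping the working set in an on-chip cache means culled fragments never cost off-chip memory bandwidth at all.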
Lossless Z-Buffer Compression
This is another feature that was already included in the GeForce3. However, in LMA II the 4:1 compression is supposed to succeed more often, thanks to a new compression algorithm.
Vertex Cache
The vertex cache stores vertices after they are sent across the AGP bus. It makes AGP transfers more efficient by avoiding multiple transmissions of the same vertices (e.g. for primitives that share edges).
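The saving comes from indexed geometry: a vertex shared by two triangles is sent and transformed once, then served from the cache on its second reference. A small sketch of that behavior (class and counter names are ours, and the real cache is in hardware, not software):

```python
# Illustrative post-transform vertex cache: vertices referenced by index
# are processed once and reused for shared edges, instead of being re-sent
# and re-transformed for every triangle that touches them.

from collections import OrderedDict

class VertexCache:
    def __init__(self, capacity=16):
        self.cache = OrderedDict()   # index -> transformed vertex
        self.capacity = capacity
        self.transforms = 0          # counts actual (expensive) transforms

    def fetch(self, index, transform):
        if index in self.cache:      # hit: reuse, no retransmission
            self.cache.move_to_end(index)
            return self.cache[index]
        self.transforms += 1         # miss: transform once and cache it
        self.cache[index] = transform(index)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return self.cache[index]

cache = VertexCache()
# Two triangles sharing an edge: indices 1 and 2 each appear twice.
for idx in [0, 1, 2, 1, 2, 3]:
    cache.fetch(idx, transform=lambda i: i * 10)
print(cache.transforms)  # 4 transforms instead of 6
```

For long triangle strips the ratio improves further, since most vertices are shared by multiple triangles.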
Primitive Assembly
This stage assembles vertices after processing (after the vertex shader) into fundamental primitives to pass on to triangle setup.
Dual Texture Caches
These were already found in the GeForce3. The new cache algorithms 'look ahead' more efficiently in cases of multitexturing or higher-quality filtering, which contributes to the significantly improved 3- and 4-texture performance of the GeForce4 Ti.
This cache at the end of the rendering pipeline is a coalescing cache, very similar to the 'write combining' feature of Intel and AMD processors. It waits until a certain number of pixels have been drawn before writing them to memory in burst mode.
Memory banks need to be pre-charged before they can be read, adding a nasty clock penalty to every read from a new bank of memory. To avoid this wasted time, the GeForce4 Ti can schedule memory banks for pre-charge ahead of time, according to a prediction algorithm.
Fast Z-Clear
This feature has been around for several years and was first used on ATI's Radeon chips. It simply sets a flag for a defined area of the frame buffer so that, instead of filling the whole area with zeros, only the flag has to be set, saving memory bandwidth.
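The mechanics are easy to model: a per-region flag stands in for the clear value, reads from a flagged region return the clear value without touching memory, and a region is only materialized when something is actually written to it. A sketch under those assumptions (class layout and tile size are ours):

```python
# Sketch of fast Z-clear: rather than writing the clear value across the
# whole depth buffer each frame, set one 'cleared' flag per tile; reads
# from a flagged tile return the clear value with no memory traffic.

class FastClearZBuffer:
    CLEAR_VALUE = 1.0                       # far plane

    def __init__(self, num_tiles, tile_pixels=64):
        self.cleared = [True] * num_tiles   # one flag per tile, not per pixel
        self.tiles = [None] * num_tiles
        self.tile_pixels = tile_pixels

    def clear(self):
        # O(tiles) flag writes instead of O(pixels) memory writes.
        self.cleared = [True] * len(self.cleared)

    def read(self, tile, offset):
        if self.cleared[tile]:
            return self.CLEAR_VALUE         # served from the flag alone
        return self.tiles[tile][offset]

    def write(self, tile, offset, depth):
        if self.cleared[tile]:              # first write materializes the tile
            self.tiles[tile] = [self.CLEAR_VALUE] * self.tile_pixels
            self.cleared[tile] = False
        self.tiles[tile][offset] = depth

zb = FastClearZBuffer(num_tiles=4)
zb.write(0, 5, 0.25)
print(zb.read(0, 5), zb.read(3, 0))  # 0.25 1.0
```

Since most frames only touch part of the screen with geometry, the untouched tiles never cost any clear bandwidth at all.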
nView was first seen on GeForce2 MX cards, allowing output across two or more displays. This feature carries over to the GeForce4 cards for maximum display flexibility.