Old 08-01-2006, 03:17 AM #1
Pwn Nubs?!?!? (Banned)
Wa-pa-sha Mofo
 
Pwn Nubs?!?!?'s Avatar
 
Join Date: May 2006
Location: Austin, Texas
[Tutorial] Video Cards 101

Refresh Rate

Just like movies or television, your computer simulates motion on your monitor by displaying a series of different images. Your monitor's refresh rate is the number of times the on-screen image is redrawn every second. A refresh rate of 75 Hz means that the monitor image is refreshed 75 times per second.

Refresh rate problems may arise in a video game when the computer is processing frames faster than the monitor's refresh rate. For example, if the computer is fast enough to process 100 frames per second, and the monitor's refresh rate is 75 Hz, there will be times when a frame is calculated and is displayed halfway through one of the monitor's refreshes. This can cause "tearing" or "artifacts," which is a nuisance.

As a solution, V-sync (short for vertical synchronization) can be enabled. This limits the frames the computer processes to the exact refresh rate of the monitor, and prevents artifacts. For example, with V-sync enabled the calculated frames in a game will never exceed the refresh rate. A 75-Hz refresh rate would limit the computer from calculating more than 75 frames per second.
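
A rough way to picture this cap (a simplified Python sketch; the frame rates are purely illustrative and not from any particular game):

# Simplified model of how V-sync caps displayed frames (illustrative only).
def displayed_fps(render_fps, refresh_hz, vsync=True):
    # With V-sync on, the card never outputs more frames than the monitor refreshes.
    if vsync:
        return min(render_fps, refresh_hz)
    # Without V-sync, extra frames are drawn but may tear mid-refresh.
    return render_fps

print(displayed_fps(100, 75))               # 75  -> capped at the refresh rate
print(displayed_fps(100, 75, vsync=False))  # 100 -> faster, but tearing is possible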

Shaders

Currently there are two forms of shaders: vertex shaders and pixel shaders. Vertex shaders deform or transform 3D elements. Pixel shaders can change pixel colors based on complex input; think of a light source in a 3D scene in which some colors appear brighter when the object is lit, while others create shadows that are generated by changing pixels' color information.

Pixel shaders are frequently used for complex effects you see in your favorite games. For example, a shader could make the pixels surrounding a 3D sword appear to glow. A different shader could affect all of the vertices in a complex 3D object and make it appear to explode. Game developers rely more and more on complex shader programs and logical units to produce realistic graphics. Chances are, the most graphically appealing games you have seen make extensive use of shaders.
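
Real pixel shaders are written in GPU languages like HLSL or GLSL, but the core idea of "changing a pixel's color based on a light source" boils down to a little math per pixel. Here is a toy Python sketch of simple diffuse (Lambert) lighting, assuming the direction vectors are already normalized:

# Toy per-pixel diffuse lighting: the kind of math a pixel shader runs for every pixel.
def lambert(pixel_color, surface_normal, light_dir):
    nx, ny, nz = surface_normal
    lx, ly, lz = light_dir
    # How directly the surface faces the light: 1.0 = fully lit, 0.0 = facing away.
    intensity = max(0.0, nx * lx + ny * ly + nz * lz)
    return tuple(channel * intensity for channel in pixel_color)

print(lambert((255, 200, 180), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # fully lit
print(lambert((255, 200, 180), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)))  # in shadow (all zeros)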

With the launch of Microsoft's next Application Programming Interface (API), DirectX 10, a third shader type will be introduced, called a geometry shader. This new unit will be able to break objects apart, modify them, and even destroy them, depending on the intended outcome. The three shader types will be very similar programming-wise but will serve different purposes.

Unified Shaders

Unified shaders are not quite here yet in the PC world, but the upcoming DirectX 10 specification calls for a unified shader architecture. That means that vertex, geometry, and pixel shader code structures will be functionally similar but have dedicated roles. A unified architecture of this kind can already be found in the Xbox 360's graphics processor, which ATI developed for Microsoft. It will be interesting to see the potential of the new DirectX 10 requirement unfold.

Pipelines

Pipeline is a term used to describe the graphics card's architecture, and it provides a generally accurate idea of the computing power of a graphics processor.

"Pipeline" isn't formally accepted as a technical term, and there are several different pipelines within a graphics processor, since separate functions are being performed at any given time. Historically, a pipeline has referred to a pixel processor attached to a dedicated TMU (texture mapping unit). Graphics cards like the Radeon 9700 had eight pixel processors, each attached to a single TMU, and as such it was considered an eight-pipeline card.

The term pipeline no longer accurately describes some of the newer graphics processor architectures, which have a more fragmented structure than past designs. ATI, with its X1000 series graphics cards, was the first to deviate from this norm in order to achieve performance boosts through substructure optimizations. Some units are used more than others, and in an effort to increase the processor's overall performance, ATI attempted to find a "sweet spot" in the number of units needed for optimum efficiency without wasting excess silicon. In this architecture the name pixel pipeline lost its meaning, as pixel processors were no longer attached to single TMUs. For example, ATI's Radeon X1600 graphics processor has 12 pixel shader units and only four TMUs. It cannot accurately be described as a 12-pipeline architecture, nor as a 4-pipeline architecture, although it is often referred to as either.

As such, the number of pipelines in a graphics processor is mainly useful for comparing two different cards (ATI's X1x00 series aside). For example, when comparing a card with 24 pipelines and a card with 16 pipelines, it is reasonable to assume that the card with 24 pipelines will generally be faster.

Last edited by durrell : 04-22-2009 at 04:51 PM.
Pwn Nubs?!?!? is offline   Reply With Quote
Old 08-01-2006, 03:19 AM #2
Pwn Nubs?!?!? (Banned)
Wa-pa-sha Mofo
 
Pwn Nubs?!?!?'s Avatar
 
Join Date: May 2006
Location: Austin, Texas
Graphics Processor Clock Speed

Graphics processor clock speed is measured in Megahertz (MHz), which can be described as 'millions of cycles per second'.

The clock speed has a direct effect on the performance of the graphics processor: the faster it runs, the more work it does per second. As a first example, let's consider the Nvidia GeForce 6600 and 6600 GT. The 6600 GT has a graphics processor speed of 500 MHz, while the regular 6600 has a clock speed of 400 MHz. Because the processors are technically identical, the 25% clock speed boost of the 6600 GT translates into higher performance.

Clock speed isn't everything, however. You need to keep in mind that the architecture has a large impact on performance as well. As a second example, let's consider the GeForce 6600 GT and GeForce 6800 GT. The 6600 GT has a graphics processor clock speed of 500 MHz, but the 6800 GT runs at only 350 MHz. This does not tell the whole story, as the 6800 GT has a 16-pipeline architecture, while the 6600 GT has eight pipelines. In a sense, the 6800 GT with 16 pipelines at 350 MHz offers roughly the same performance as eight pipelines at twice the speed (700 MHz). This is a very simplified comparison, but it can be used as a reasonable performance indicator.
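
As a very rough back-of-the-envelope check of that comparison (a Python sketch; real-world performance depends on much more than these two numbers):

# Crude throughput estimate: pipelines x clock speed (MHz). Illustrative only.
def relative_throughput(pipelines, clock_mhz):
    return pipelines * clock_mhz

print(relative_throughput(8, 500))   # GeForce 6600 GT -> 4000
print(relative_throughput(16, 350))  # GeForce 6800 GT -> 5600
print(relative_throughput(8, 700))   # hypothetical 700 MHz 8-pipeline part -> also 5600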

Graphics Memory Size

The amount of video RAM is probably the most overrated part of a graphics card. Uninformed consumers tend to use the amount of RAM on a card to differentiate it from other cards, but in reality the amount of RAM has a very small impact on performance when compared to other considerations like clock speed and the memory interface.

Generally speaking, a card with 128 MB of RAM will perform almost identically to a card with 256 MB of RAM in most situations. There are situations where more RAM is advantageous for performance, but it is important to keep in mind that more RAM does not automatically translate into more performance.

Where RAM does come in useful is for higher-resolution texture sets. Game developers often make multiple texture sets for their games, and the more RAM your graphics card has, the higher resolution textures you will be able to use. Higher resolution textures provide clearer textures in the game. Having said all this, it is still common sense to choose a card with the largest amount of RAM available, but only if all other considerations are equal. Still, the memory bus and memory clock speed are far more important performance factors than the total amount of physical memory on the card.

Memory Bus

The memory bus is one of the most important aspects of memory performance. A modern graphics card's memory bus can range from 64 bits to 256 bits and, in some cases, can be 512 bits wide. As the bus width increases, so does the amount of data it can carry per cycle, and that is very important for performance. For example, given two buses running at the same clock frequency, a 128-bit bus could theoretically carry twice as much data per clock cycle as a 64-bit bus, and a 256-bit bus four times as much.

Higher memory bandwidth (channel capacity in terms of bits per second) equals higher memory performance. This is why the memory bus is so much more important than the amount of RAM. Assuming identical clock speeds, memory on a 64-bit bus practically works at only 25% of the speed of memory on a 256-bit bus!

Coming back to the amount of RAM: a graphics card with 128 MB of 256-bit memory will have much higher memory performance than a 512-MB model with 64-bit memory. It is notable that some of ATI's X1x00 series graphics cards advertise their "internal" memory bus specifications, but the "external" bus is the number that matters. For example, the X1600 has a 256-bit internal "ring bus," but its external bus is 128 bits, so in reality the memory delivers 128-bit performance.
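
To see why bus width trumps memory size, here is the usual peak-bandwidth calculation in a short Python sketch (the 1000 MHz effective clock is a made-up figure applied to both cards purely for illustration):

# Peak memory bandwidth in GB/s = (bus width in bytes) x (effective clock in MHz) / 1000.
def bandwidth_gb_s(bus_bits, effective_clock_mhz):
    return (bus_bits / 8) * effective_clock_mhz / 1000.0

# Same hypothetical 1000 MHz effective memory clock, different bus widths:
print(bandwidth_gb_s(64, 1000))   # 8.0  GB/s - e.g. a 512 MB card on a 64-bit bus
print(bandwidth_gb_s(256, 1000))  # 32.0 GB/s - e.g. a 128 MB card on a 256-bit bus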

Memory Types

Memory comes in two main categories: SDR (single data rate) and DDR (double data rate), which transfers twice the data per clock cycle. Single data rate memory has long been obsolete as far as graphics cards go. Because DDR memory does twice the work of SDR memory per cycle, DDR memory is usually advertised at double its physical clock speed. For example, DDR memory is described as "1000 MHz DDR" memory (a.k.a. "1000 MHz effective") when its actual clock speed is 500 MHz.

For this reason, many people are surprised when their graphics card's 1200 MHz DDR memory is reported to be running at 600 MHz. This should not be a cause for alarm, as it's simply how DDR memory clock speed is reported. DDR2 and GDDR3 memory work on the same principle, transferring data at double the physical clock speed. The difference between DDR, DDR2, and GDDR3 memory lies mainly in manufacturing technology: DDR2 can generally run at higher clock speeds than DDR, and GDDR3 can generally run at higher clock speeds than DDR2.

Memory Clock Speed

Like the graphics processor, the graphics card memory works at a clock speed that is measured in megahertz. Similarly, increasing the clock speed will have a direct impact on memory performance. As such, the memory clock speed is one of the numbers used to compare memory performance. For example, assuming all other factors (like the memory bus width) are equal, when comparing a graphics card with 500 MHz memory and a graphics card with 700 MHz memory, it is reasonable to assume that the card with 700 MHz memory will be generally faster at memory operations.

Once again, clock speed isn't everything. A 700-MHz memory component with a 64-bit bus will perform slower than 400-MHz memory with a 128-bit bus; a 400 MHz clock speed on a 128-bit bus is roughly equivalent to 800 MHz on a 64-bit bus. It should also be noted that the graphics processor speed and memory speed are completely separate and usually run at different clock speeds.
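
Plugging those numbers into the same rough bandwidth formula used in the memory bus sketch above shows why (illustrative only; DDR doubling is ignored since it applies equally to every case):

# Peak bandwidth (GB/s) = bus_bits / 8 * clock_mhz / 1000
print(128 / 8 * 400 / 1000)  # 6.4 GB/s - 400 MHz on a 128-bit bus
print(64 / 8 * 800 / 1000)   # 6.4 GB/s - 800 MHz on a 64-bit bus, the same
print(64 / 8 * 700 / 1000)   # 5.6 GB/s - 700 MHz on 64 bits still loses to 400 MHz on 128 bits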

Graphics Card Interface

All data communicated between the graphics card and the rest of the computer travels through the graphics card's slot, or interface. There are three types of graphics interfaces currently in use: PCI, AGP, and PCI Express. Different graphics interfaces allow for different amounts of bandwidth, and more bandwidth means better performance. It is important to note, however, that contemporary graphics cards can only use so much bandwidth; once the interface provides enough, it stops being a bottleneck.

The slowest graphics card bus, the PCI bus (Peripheral Component Interconnect), is detrimental to graphics card performance. AGP (Accelerated Graphics Port) is much better, but even the AGP 1.0 and AGP 2x specifications can limit performance. Once we reach AGP 4x, however, we begin to approach the practical limit of bandwidth required by contemporary graphics cards. The AGP 8x specification (roughly 2.1 GB/s) has twice the bandwidth of AGP 4x, but there is a negligible difference in performance between these two standards.

The newest and highest-bandwidth interconnect is the PCI Express bus. New graphics cards typically use the PCI Express x16 specification, which combines 16 separate PCI Express links (or lanes) to reach as much as 4 GB/s of bandwidth, twice that of AGP 8x. PCI Express offers this bandwidth both for uploading data from the graphics card to the computer and for downloading data to the graphics card. However, the AGP 8x specification still provides adequate bandwidth, and we have not yet seen a PCI Express graphics card perform better than an otherwise identical AGP 8x model (assuming all other hardware and parameters are equal). For example, an AGP version of a GeForce 6800 Ultra will perform identically to a PCI Express 6800 Ultra.
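
For reference, the peak figures commonly quoted for these interfaces, worked out in a quick Python sketch (approximate numbers, per direction; real sustained throughput is lower):

# Approximate peak bandwidth of common graphics card interfaces, in MB/s.
interface_mb_s = {
    "PCI (32-bit / 33 MHz)": 133,       # also shared with every other PCI device
    "AGP 1x": 266,
    "AGP 4x": 1066,
    "AGP 8x": 2133,
    "PCI Express x16": 16 * 250,        # 16 lanes x ~250 MB/s each, per direction
}
for name, mb_s in interface_mb_s.items():
    print(f"{name:22} ~{mb_s / 1000:.1f} GB/s")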

PCI Express is the preferred graphics card slot today and will be around for several years. The highest performance parts are not manufactured for the aging AGP 8x bus anymore, and PCI Express solutions tend to be cheaper and easier to find now than their AGP counterparts.
Pwn Nubs?!?!? is offline   Reply With Quote
Old 08-01-2006, 03:20 AM #3
Pwn Nubs?!?!? (Banned)
Wa-pa-sha Mofo
 
Pwn Nubs?!?!?'s Avatar
 
Join Date: May 2006
Location: Austin, Texas
Multi-Card Solutions

Using multiple graphics cards for more graphics power is not a new concept. In the early days of 3D graphics, 3dfx pioneered multi-graphics card technology, but with the demise of 3dfx, consumer-level multi-card technology became unavailable, even though ATI had been building multi-card systems for industrial simulators since the Radeon 9700. Consumers were recently reintroduced to this technology with the release of Nvidia's SLI solution and, more recently, ATI's Crossfire solution. Using multiple graphics cards together provides enough graphics performance to support the highest visual quality settings and resolutions. However, the choice to purchase a multi-card solution is not a simple one.

First, power and heat have to be considered; multi-card solutions require a great deal of energy, and therefore an expensive high-output power supply unit is a must. In addition, graphics cards produce a lot of heat, and careful attention must be paid to the PC case and cooling equipment to ensure that the system does not overheat.

In addition, remember that SLI and Crossfire each require a motherboard that supports them, which can cost a premium compared to single-card solutions. Nvidia's SLI will only work on certain nForce4 motherboards, while ATI's Crossfire solution will work on ATI's Crossfire chipsets and even on Intel motherboards with a certified multi-card chipset. On the downside, some Crossfire setups require one of the graphics cards to be a special Crossfire Edition card. Since the launch of Crossfire, ATI has enabled software Crossfire through the PCI Express bus on some models and is continuing to increase the number of supported combinations with further driver revisions. The "master" cards required for the non-software Crossfire combinations cost a premium over a regular card. Currently, the Radeon X1300, X1600 and X1800 GTO do not require Crossfire Edition cards for dual-graphics Crossfire mode to work.

There are other factors to consider as well. While two graphics cards linked together will offer a performance boost, the result is rarely anywhere near twofold, so from a budgetary standpoint it is important to keep in mind that paying twice the money will not yield twice the performance. Total performance of roughly 120% to 160% of a single card is a more realistic expectation with multi-card solutions, and in some cases performance will not increase at all. For this reason, multi-card solutions usually don't make sense with cheaper graphics cards, because a single more expensive card will almost always outperform a pair of cheaper ones. With this in mind, SLI/Crossfire solutions do not make sense for most consumers. But when one is looking to enable more image quality features or to run games at extreme resolutions such as 2560x1600, which generates over 4 million pixels per frame, solutions like these are the only answer.
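
Two quick sanity checks on the numbers above (Python; the card price is a made-up figure purely to illustrate the cost argument):

# 2560x1600 really is over 4 million pixels per frame.
print(2560 * 1600)  # 4096000

# If a second identical card adds 20-60% performance while doubling the price,
# performance per dollar goes down, not up.
card_price = 400  # hypothetical price in dollars
print(100 / card_price)        # 0.25  perf-per-dollar, single card
print(120 / (2 * card_price))  # 0.15  perf-per-dollar, SLI/Crossfire worst case
print(160 / (2 * card_price))  # 0.20  perf-per-dollar, SLI/Crossfire best case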

HDR Lighting & OpenEXR HDR

HDR stands for "High Dynamic Range." A video game with HDR lighting can display a much more realistic image than a game without it; however, not all graphics cards are able to display HDR graphics.

Before DirectX 9 compliant graphics processors, graphics cards were severely limited in the range of lighting precision they could calculate: all lighting had to be computed at 8-bit precision, or 256 integer levels.

Once true DirectX 9 class graphics processors arrived, graphics cards gained the ability to render a much higher range of lighting precision: a full 24 bits, or about 16.7 million levels.

With 16.7 million levels of brightness and the increased computational power that DirectX 9 / Shader Model 2.0 compatible graphics cards brought to the table, HDR lighting in PC games became possible. HDR lighting is a complex concept to describe, and it has to be seen in motion to be truly appreciated. A simplified explanation is that HDR lighting allows increased contrast (darker darks and brighter lights) while at the same time increasing the amount of lighting detail displayed in both the dark and bright areas. A video game that features HDR appears to have much more life-like lighting and depth than a game without it.

Graphics processors that comply with the latest Pixel Shader 3.0 specification allow for even more lighting precision (32 bit), and also have the capability to use floating-point precision blending. This means that all SM 3.0 graphics cards are capable of supporting a specific method of HDR called "OpenEXR," which is a specification developed for the movie industry.
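
The precision figures quoted above fall straight out of powers of two; here is a quick Python check (the 32-bit Shader Model 3.0 case is floating point, so counting discrete "levels" is only a loose analogy there):

# Distinct levels available at a given integer bit depth.
for bits in (8, 24):
    print(f"{bits:2d} bits -> {2 ** bits:,} levels")
# 8 bits  -> 256 levels         (pre-DirectX 9 lighting)
# 24 bits -> 16,777,216 levels  (about 16.7 million, DirectX 9 / SM 2.0)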

Some games support only the OpenEXR method of HDR and therefore do not support HDR on Shader Model 2.0 graphics cards, while games using non-OpenEXR methods will work with any DirectX 9 graphics processor. For example, the PC game Oblivion uses OpenEXR HDR and will only display this feature on graphics cards that support the Shader Model 3.0 specification, such as Nvidia's GeForce 6800 or ATI's Radeon X1800. Games based on the Half-Life 2 3D engine, such as Counter-Strike: Source and the upcoming Half-Life 2: Aftermath, will allow HDR to be displayed on older DirectX 9 graphics cards that only support Pixel Shader 2.0, such as the GeForce 5 series or ATI's Radeon 9500.

Finally, it should be noted that all forms of HDR require tremendous computational power and can bring all but the most powerful graphics processors and computer systems to a crawl. If you want to play the newest HDR-enabled games, high-performance graphics hardware is a must.

Anti-Aliasing


Aliasing is a term that describes the jagged or blocky patterns inherent in displaying digital images. In 3D graphics, it refers to the stair-step look of angled edges when displayed on the monitor. Anti-aliasing (abbreviated 'AA') is a graphics feature that reduces this effect. However, since anti-aliasing calculations use a fair amount of graphics processor power, enabling it will cause a significant drop in frame rates.

Anti-aliasing relies heavily on graphics memory performance, so high-performance graphics cards with fast memory will be able to use anti-aliasing with less of a performance hit than low-end graphics cards. Anti-aliasing can be enabled at different levels. For example, 4x anti-aliasing will produce higher quality images than 2x anti-aliasing, but at a higher performance cost. In simple supersampling terms, 2x doubles the number of samples taken per pixel, while 4x quadruples it.
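
A simplified way to see where the cost comes from (a Python sketch that models AA as extra samples per pixel, in the supersampling sense; real techniques such as multisampling are cheaper than this naive count suggests):

# Rough count of samples the card must shade and resolve per frame at a given AA level.
def samples_per_frame(width, height, aa_level):
    return width * height * aa_level

print(samples_per_frame(1280, 1024, 1))  # 1310720 - no AA
print(samples_per_frame(1280, 1024, 2))  # 2621440 - 2x AA, twice the work
print(samples_per_frame(1280, 1024, 4))  # 5242880 - 4x AA, four times the work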

Texture Filtering

All 3D objects in a video game are textured, and as the viewing angle of a displayed texture increases, the texture will appear more and more blurry and distorted in the game. To combat this effect, texture filtering was introduced for graphics processors.

The earliest texture filtering was called bilinear filtering, and it produced fairly obvious filtering 'bands' that were not very attractive. Trilinear filtering improved on the bilinear technique and largely fixed this. Both of these filtering options carry almost no performance cost on modern graphics cards.
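
For the curious, bilinear filtering is just a weighted average of the four texels nearest the sample point. A bare-bones Python sketch (a tiny grayscale grid stands in for a real texture, and no edge handling is done):

# Minimal bilinear sample of a grayscale "texture" (a list of rows of values).
def bilinear_sample(texture, u, v):
    x0, y0 = int(u), int(v)          # top-left texel
    x1, y1 = x0 + 1, y0 + 1          # bottom-right texel
    fx, fy = u - x0, v - y0          # fractional position between texels
    top    = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

tex = [[0, 100],
       [100, 200]]
print(bilinear_sample(tex, 0.5, 0.5))  # 100.0 - an even blend of all four texels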

The best filtering available is now anisotropic filtering (often abbreviated as AF). Similar to anti-aliasing, anisotropic filtering can be enabled at different levels. For example, 8x AF will produce superior filtering quality compared to 4x AF. Also similar to anti-aliasing, anisotropic filtering demands processing power, and will increasingly affect performance as the level of AF is raised.

High Definition Texture Sets

All 3D video games are developed with target specifications, and one of these specifications is the amount of texture memory the game will require. All of the required textures must fit into graphics card memory while playing, otherwise performance will be heavily taxed, as the extra textures required will have to be stored in slower system RAM, or even on the hard disk. So if a game's developers have targeted 128 MB of memory as their minimum requirement for that game, the textures to support it would be called a 'texture set' and would not require more than 128 MB of memory on a graphics card at any given time.
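
A back-of-the-envelope way to see how textures consume that memory (Python sketch; it assumes uncompressed 32-bit textures plus roughly one third extra for mipmaps, both simplifications, since real games compress their textures):

# Approximate memory used by one uncompressed 32-bit texture, including mipmaps.
def texture_mb(width, height, bytes_per_pixel=4, mipmaps=True):
    size = width * height * bytes_per_pixel
    if mipmaps:
        size = size * 4 // 3  # a full mip chain adds roughly one third
    return size / (1024 * 1024)

print(round(texture_mb(512, 512), 2))    # ~1.33 MB
print(round(texture_mb(2048, 2048), 2))  # ~21.33 MB - a handful of these fills a small card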

Newer games often support multiple texture sets, so that the game will run on older graphics cards with less texture memory as well as on the newest graphics cards with the most onboard memory. For example, a game might include three texture sets: one each for 128 MB, 256 MB, and 512 MB of graphics memory. Games that support 512 MB of graphics card memory are few and far between, but they are the most compelling reason to buy a graphics card with that much memory. More memory generally has a relatively minor effect on raw performance, but it can increase visual quality considerably if the game supports it.
Pwn Nubs?!?!? is offline   Reply With Quote
Old 08-01-2006, 03:45 AM #4
WGP>You27k
_____
 
WGP>You27k's Avatar
 
Join Date: Jul 2004
Location: 310
 has been a member for 10 years
WGP>You27k is offline   Reply With Quote
Old 08-01-2006, 04:07 AM #5
The Dread Pirate
Warning: Choking Hazard
 
The Dread Pirate's Avatar
 
Join Date: Jul 2004
Location: The High Seas
The Dread Pirate is a Supporting Member
 has been a member for 10 years
Added to the ultimate sticky.
__________________
I am going to become rich and famous when I invent a device that allows you to stab people in the face over the internet.
The Dread Pirate is offline   Reply With Quote
Old 08-01-2006, 04:12 AM #6
mousek801
Blasé Polar Bear
 
mousek801's Avatar
 
Join Date: Jul 2005
good read
__________________
"Forget the necessities, it's the luxuries I can't live without."
mousek801 is offline   Reply With Quote
Old 08-01-2006, 08:26 AM #7
Pwn Nubs?!?!? (Banned)
Wa-pa-sha Mofo
 
Pwn Nubs?!?!?'s Avatar
 
Join Date: May 2006
Location: Austin, Texas
cool i've been stickied
Pwn Nubs?!?!? is offline   Reply With Quote
Old 08-01-2006, 08:36 AM #8
12345421
ST:T Resident Canadian
 
12345421's Avatar
 
Join Date: Oct 2005
Where did you get that info from?
__________________
Feedback
"Originally posted by brickwall329: Ian is my favorite Canadian"
"Originally posted by kmarriner: Because we are americans and that would make too much sense."
"Originally posted by zellthemedic: I swear Ian. You're my roll model."

ST:Tech - We has Canadians
12345421 is offline   Reply With Quote
Old 08-01-2006, 09:06 AM #9
Pwn Nubs?!?!? (Banned)
Wa-pa-sha Mofo
 
Pwn Nubs?!?!?'s Avatar
 
Join Date: May 2006
Location: Austin, Texas
different places
Pwn Nubs?!?!? is offline   Reply With Quote
Old 08-01-2006, 09:10 AM #10
Randomperson
Fear of the dodgy!
 
Randomperson's Avatar
 
Join Date: Oct 2004
What would be the suggested price for a used 9800 Pro with an added pro cool fan? I think that's the name of the fan, but I'm not sure.
Randomperson is offline   Reply With Quote
Old 08-01-2006, 09:20 AM #11
Mps2216
Control Yourself
 
Mps2216's Avatar
 
Join Date: Apr 2004
Location: London
 has been a member for 10 years
You mean he didn't write it himself?
__________________
WashU '12
SC Rage RKPB
Mps2216 is offline   Reply With Quote
Old 08-01-2006, 09:25 AM #12
Mustang7302
has been here too long
 
Mustang7302's Avatar
 
Join Date: Sep 2001
Location: Houston, TX
Mustang7302 is a founding member
 has been a member for 10 years
Mustang7302 has achieved Level 3 in PbNation Pursuit
Great information, thank you for putting it all together!
__________________
Cars, Technology, and Stogies ... All the fine things in life.

"Originally posted by Volucris: Computers are tools. Man is separated by how well he chooses his tool."

"Originally posted by Rebeltilldeath3: We went from Allan Sheppard, Gus Grissom, Neil Armstrong, and Buzz Aldrin to Charlie Sheen, Lindsay Lohan, and Snooki."
Mustang7302 is offline   Reply With Quote
Old 08-01-2006, 03:41 PM #13
Topher.
25 Characters.....No Way!
 
Topher.'s Avatar
 
Join Date: Mar 2006
Location: Ten-er-see
Very informative, but the list made me think about which card I want to upgrade to. I'm torn between these 3.

http://www.newegg.com/Product/Produc...82E16814130033

http://www.newegg.com/Product/Produc...82E16814131428

http://www.newegg.com/Product/Produc...82E16814195019

Which would be best if my goal was high setting gaming?
__________________
ΛΧΑ
Topher. is offline   Reply With Quote
Old 08-01-2006, 03:45 PM #14
Volucris
asmuchtextastheywillallow
 
Volucris's Avatar
 
Join Date: Sep 2005
Location: Nashville
Very good read. Some pictures to go along with the individual sections would help a lot. If you have requests for pics, I can get you anything.
Edit*
Topher, you would want the 7900GT model you listed. It's made by eVGA and is an Nvidia card. eVGA is the greatest graphics card distributor at the moment. They have spectacular warranties and an awesome website for support.
Edit*2*
This is a great site for card comparison.
http://www.gpureview.com

Last edited by Volucris : 08-01-2006 at 03:50 PM.
Volucris is offline   Reply With Quote
Old 08-01-2006, 03:48 PM #15
WGP>You27k
_____
 
WGP>You27k's Avatar
 
Join Date: Jul 2004
Location: 310
 has been a member for 10 years
Quote:
Originally Posted by Topher.
Very informative, but the list made me think about which card I want to upgrade to. I'm torn between these 3.

http://www.newegg.com/Product/Produc...82E16814130033

http://www.newegg.com/Product/Produc...82E16814131428

http://www.newegg.com/Product/Produc...82E16814195019

Which would be best if my goal was high setting gaming?
first or second, then just decide if you're a nvidia guy or ati guy.
WGP>You27k is offline   Reply With Quote
Old 08-01-2006, 03:48 PM #16
Kookies
Rogue Leader
 
Kookies's Avatar
 
Join Date: Mar 2006
Location: SoCal •949•
Nice post/find....
__________________
I'm Big
Orange County
XBL:CaptainCh33dMan
Kookies is offline   Reply With Quote
Old 08-01-2006, 03:49 PM #17
akatch79
Queen of the Norse
 
Join Date: Oct 2004
Thank you so much! I'm slowly but surely picking out parts for my first PC build and I was having LOTS of trouble finding the right graphics card. Your post was a lifesaver! Thanks!
__________________
"Those who can make you believe absurdities can make you commit atrocities"
- Voltaire
A
akatch79 is offline   Reply With Quote
Old 08-01-2006, 03:51 PM #18
TenaciousTomato
 
 
TenaciousTomato's Avatar
 
Join Date: Apr 2006
Location: Rhode Island
Nice.
TenaciousTomato is offline   Reply With Quote
Old 08-01-2006, 03:53 PM #19
Wendig0
^_^
 
Wendig0's Avatar
 
Join Date: May 2004
Location: Utah
 has been a member for 10 years
psh, I was thinking about doing this. But what ever.
oooo got to go to the store
__________________
Get-Tuned.com

PBDirectory.net
Wendig0 is offline   Reply With Quote
Old 08-01-2006, 03:54 PM #20
Topher.
25 Characters.....No Way!
 
Topher.'s Avatar
 
Join Date: Mar 2006
Location: Ten-er-see
Quote:
Originally Posted by Volucris
Very good read. Some pictures to go along with the individual sections would help a lot. If you have requests for pics, I can get you anything.
Edit*
Topher, you would want the 7900GT model you listed. It's made by eVGA and is an Nvidia card. eVGA is the greatest graphics card distributor at the moment. They have spectacular warranties and an awesome website for support.
Edit*2*
This is a great site for card comparison.
http://www.gpureview.com

Thanks. The 7900GT was my initial choice, but after seeing the list I decided to check out some other options. I'll stick with the 7900, and then use eVGA's program for trading in for DirectX 10 cards (forgot their name for it) when DX10 cards are released.
__________________
ΛΧΑ
Topher. is offline   Reply With Quote
Old 08-01-2006, 03:56 PM #21
yellowsnowman
CCIE
 
yellowsnowman's Avatar
 
Join Date: Feb 2004
Location: Montgomery, Alabama
yellowsnowman is a Supporting Member
 has been a member for 10 years
Good deal.

Here you can get a Voodoo 1 for 99 cents, lol
http://cgi.ebay.com/Gainward-CardExp...QQcmdZViewItem
__________________
"During my service in the United States Congress, I took the initiative in creating the Internet.
- Al Gore describing his 1986 legislation to interconnect five supercomputer centers (17 years after the first Internet servers hooked up) "
yellowsnowman is offline   Reply With Quote