
The Video Card Guide - Advice, Technologies, and More


guezz


This is a "beta" release (maybe 80% done @ 4,900 words). Feedback is very welcome!

 

Disclaimer

I have tried to be as objective as possible. If you find factual errors or something worth adding, please don’t hesitate to tell me about it (preferably by using a PM).

 

1. Determining your actual needs.

 

2. Video card basics. How much video memory do I need? PCIe? And so on.

 

3. Introduction to technical terms on a 101 level. What is AA? AF? And so on.

 

4. A more advanced introduction, including topics like: What happens inside a video card?

 

5. Drivers and utilities for ATI, nVidia and others.

 

Remember that you can use Ctrl+F and enter a section headline for easy access (e.g. “• Video memory”).

 

Content Overview

Determine your actual needs

• Budget ceiling

• What kind of applications will be used?

• Do you have any special preferences?

• So you want to play games

• Will the system bottleneck the card?

• Cost vs. Gain

• Please do some research

• Please ask for help if you're uncertain

 

Video Card Basics

• What is a video card?

• Vendors

• Connection interfaces

• External connections (inc. HDCP)

• Power and its connections

• Size

• Noise

• Video memory

 

Technical Terms: 101

• Theoretical Performance

• Image Quality Enhancements

 

Advanced Introduction

• How a video card works [about 50%]

 

Drivers & Utilities

• Official and third party drivers

• Utilities (inc. DriverCleaner)

 

 

Determine your actual needs

Today it’s pretty easy to get lost in the myriad of different cards and technical terms, so you should first determine your actual needs in order to narrow down the viable options.

 

1. Budget ceiling

How much money are you willing to spend? Even though the right card for you (if you actually need one) might be way below the ceiling it will at least exclude the more expensive ones. You can always raise the ceiling later if you think the potential benefits (performance, features, etc.) are worth the extra cost.

 

2. What kind of applications will be used?

A family PC which is only used for Word, banking and surfing the Internet usually doesn’t need a dedicated video card – almost anything will do as long as you can connect your monitor to it. The only real exception is if you would like to use Microsoft’s Vista Aero GUI, which requires a DX9-capable card to run – an old IGP (integrated on the motherboard) or old video card will not suffice. In that case you will need an IGP (Intel GMA 950 / ATI Xpress 200 / nVidia 6100 or better) or a video card (ATI 9500 / nVidia FX5200 SE or better) and drivers which support Vista.

 

If you would like to play games then a dedicated video card is highly recommended, although IGPs like ATI X1150 and nVidia 6150 are pretty decent for games which are a few years old or more.

 

Applications used by professionals usually benefit from cards like the nVidia Quadro or ATI FireGL, whose drivers are written specifically to provide good image quality and performance. I can’t say much else, since I know little about this field.

 

3. Do you have any special preferences?

Such as:

- Noise (passive cooling, loudness)

- Power usage

- Size (length, half-size, single/dual-slot cooler)

- Multimedia capabilities (HDCP, 3D glasses, de-/encoding performance and support)

- External connections (HDMI, dual-link DVI, etc.)

- Image quality capabilities (D3D, AA, AF, HDR, etc.)

- Operating System and driver support

- Etc.

 

4. So you want to play games

Ask yourself what kind of games will be played (and how often), which settings and resolution you prefer, and how long you plan to keep the card – these are all important questions when choosing the right card for you. Obviously, what you want and what you can afford don’t always coincide, but it will at least give you an idea of what to look for.

 

Old games like Counter-Strike 1.6 require a much less powerful video card than e.g. Crysis – so the latest and greatest isn’t necessary if you mostly play older games.

 

The latest games require a very powerful card to be enjoyed in all their glory (high settings / high resolution / AA / AF) but this might be something of less importance to you.

 

If all you play is WoW, then you should pay special attention to cards which excel in that game.

 

5. Will the system bottleneck the card?

A new and powerful video card isn't very useful if it's severely bottlenecked by the rest of the system. 8800 GTX + 512MB RAM + AMD Athlon 1800+ = performance nightmare, while 8800 GTX + 2GB RAM + Intel C2D E6600 = very good. Less powerful cards don’t need such a capable system to run well.

 

When you don't have enough system memory the PC will try to compensate by using the much slower hard drive to store temporary game files - the game stutters (negative performance spikes). This can also happen if the video card runs out of video memory and is forced to use system memory which in a worst case scenario also results in the system memory exceeding its capacity.

 

The CPU’s role while gaming is to "fuel" the video card with information while performing other roles (AI, physics, sound, network (when playing online), etc.). A slow CPU will have a very limited capacity to fill all these roles if you have a powerful video card.

 

6. Cost vs. Gain

The old card can always be sold but will the new card's performance or other functions be worth the price difference?

 

Let's say you’re upgrading from an ATI X1900 XTX to the faster nVidia 8800 GTS 640MB. Will the extra performance, more video memory, DX10 and other technologies (e.g. CSAA, better AF quality and better HD performance) be worth it?

 

7. Please do some research

Read some reviews / articles of your potential video cards and about technologies which might be of importance to you (HDCP, etc.).

 

8. Please ask for help if you're uncertain

You can ask here, someone you know, or elsewhere – just remember to provide the crucial information about budget and desired usage. Please stay away from people who are ignorant about video cards (nVidia 6200 512MB (ZOMG, it has 512MB!!11!), anyone?)!

 

 

 

 

 

Video Card Basics

[image: nVidia 8800 GTX]

 

• What is a video card?

• Vendors

• Connection interfaces

• External connections (inc. HDCP)

• Power and its connections

• Size

• Noise

• Video memory

 

 

• What is a video card?

It provides the means to manipulate and display information through a cable (VGA, DVI, etc.) to a display so you can perform tasks like: use the Internet, watch movies and play games. The card can be integrated onto a motherboard (IGP) or be a dedicated solution (see picture above).

 

• Vendors

All vendors use the reference design set by the chip producers (ATI, nVidia, etc.), which ensures equal build quality. The main differences are: the cooler used (reference or third party), warranty terms and length, support quality, performance (i.e. factory overclocked or reference), bundle and price.

 

• Connection interfaces

The common interfaces are AGP 4x/8x and PCIe, which connect the video card to the motherboard.

 

- PCI (≤133MBps and 25W)

It was commonly used before the introduction of AGP 1.0 in 1997. Its severely limited bandwidth doesn’t allow any fast cards to exist (the ATI X1550 is currently the fastest). If you are going to softmod a card, it can be useful to have a PCI card as a backup in case the mod fails.

[image: PCI slot]

 

- AGP 4x/8x (2.1GBps@8x and 42W)

The latest version of PCI’s successor has barely enough bandwidth to supply even the latest G7X and R5X0 cards – so AGP will probably not suffice for cards like the 8800 GTS and faster. The interface supports both 4x and 8x cards. Even though cards are still being bridged from PCIe to AGP, performance has stalled with the release of the nVidia 7950 GT and ATI X1950 XT – which might be a good stopgap before moving to PCIe.

[image: AGP slot]

 

- PCIe x16 (4.0GBps up/down and 75W)

This has become the standard interface. Since it provides more power through the slot itself, external power connections are less important.

[image: PCIe x16 slot]

 

- PCIe x16 2.0 (8.0GBps up/down and 75W)

Version 2.0 of the PCIe interface offers twice the bandwidth of 1.1, and older PCIe cards are still supported. It should be excellent for bridgeless SLI/Crossfire.

 

________________________________________________________________

 

• Power and its connections

Extra power connections are needed when the power provided by the connection interface isn't sufficient.

 

To summarize the power being supplied by the motherboard:

- PCI = 25W

- AGP = 42W

- PCIe = 75W

 

The power connectors listed below draw their power from the +12V rail only, which in turn requires a lot of amperes (A) to “fuel” the most power-hungry video cards. The PCIe slot uses the +3.3V and +12V rails, while PCI uses +5.0V.
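Since wattage on a rail translates directly into current (I = P / V), you can sanity-check a PSU’s +12V rating with a one-liner. A minimal sketch – the 145W figure is just an assumed worst-case load for a high-end card, not a measured value:

```python
def amps_required(watts, volts=12.0):
    """Current draw in amperes for a given load on one rail: I = P / V."""
    return watts / volts

# Assumed ~145W worst-case card load, drawn almost entirely from +12V:
print(round(amps_required(145.0), 1))  # -> 12.1 A
```

Compare that number against the +12V amperage printed on the PSU’s label.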

 

- 4 pin floppy cable (36W)

This power connector is used on cards like the ATI 9700 Pro AGP.

[image: 4-pin floppy power connector]

 

- 4 pin molex (72W)

It's recommended to only use dedicated cable(s).

[image: 4-pin molex connector]

 

- 6 pin PCIe (75W)

This has become the standard power connection. It's recommended to only use dedicated cable(s).

[image: 6-pin PCIe power connector]

 

- 8 pin PCIe (150W)

It's recommended to only use dedicated cable(s).

[image: 8-pin PCIe power connector]

 

This is a decent Internet based PSU calculator

 

The graph below clearly shows that power requirements can vary greatly between cards. A good power supply unit (PSU) is both efficient and stable – thus a good 500W unit (PC Power & Cooling, etc.) can easily match a bad 700W one (Q-Tec, etc.).

[image: 8600 GTS power consumption graph]

Graph from X-bit Labs

 

More information

 

________________________________________________________________

 

• External connections (inc. HDCP)

 

Great article discussing digital vs. analogue

 

Digital

- DVI (Digital Visual Interface)

This has become the new standard way of connecting your monitor to the video card. It offers no real image quality improvements over the analogue VGA.

 

The card must support HDCP if such DRM-protected material is to be played over DVI.

 

DVI-A: Analogue only

[image: DVI-A connector]

DVI-D: Digital only

[image: DVI-D connector]

DVI-I: Digital and analogue (a DVI to VGA converter can be used). You will normally find this one on video cards.

[image: DVI-I connector]

 

Single-link DVI offers enough bandwidth to run up to 1920x1200@60Hz (165 MHz pixel clock, ~4 Gbps), while dual-link (more pins) supports up to 2560x1600@60Hz (~269 MHz pixel clock, ~7.9 Gbps).
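A quick back-of-the-envelope check of these limits (counting active pixels only; real DVI timings add blanking overhead on top, which is why 2560x1600@60 needs roughly 269 MHz in practice):

```python
def active_pixel_clock_mhz(width, height, refresh_hz):
    """Pixel clock (MHz) needed for the active pixels alone (no blanking)."""
    return width * height * refresh_hz / 1e6

SINGLE_LINK_LIMIT_MHZ = 165  # single-link DVI TMDS clock ceiling

print(active_pixel_clock_mhz(1920, 1200, 60) <= SINGLE_LINK_LIMIT_MHZ)  # True: fits single link
print(active_pixel_clock_mhz(2560, 1600, 60) <= SINGLE_LINK_LIMIT_MHZ)  # False: needs dual link
```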

DVI-I Dual Link

[image: DVI-I dual-link connector]

 

- HDMI (High-Definition Multimedia Interface)

DVI’s successor offers more bandwidth and can also transfer sound - all in a smaller physical size.

 

HDMI 1.0-1.2a: 4.9Gbps offers up to 1080p@60Hz and 8-channel/192kHz/24-bit audio.

HDMI 1.3(a): 10.2Gbps offers up to 1440p+@60Hz and 8-channel compressed lossless audio (Dolby TrueHD or DTS-HD Master Audio).

 

Audio FAQ

 

The card must support HDCP if such DRM-protected material is to be played over HDMI.

[image: HDMI vs. DVI connectors]

 

More information

 

- S/PDIF (Sony/Philips Digital Interface)

It's used for transferring digital audio and is rarely supported by video cards.

[image: S/PDIF connector]

 

 

Analogue

- VGA (Video Graphics Array)

This is DVI’s predecessor and has a competitive image quality. The maximum resolution it supports is 2048×1536@60Hz when using the standard 400MHz RAMDAC (Random Access Memory Digital-to-Analogue Converter).

[image: VGA connector]

 

- HDTV Output (YPrPb component)

It works great up to 1080p and offers very good image quality - easily comparable with DVI and HDMI.

[image: component (YPrPb) connectors]

 

- S-video (Separate video)

Since the connection only uses six pins over two channels (brightness and colour), it provides image quality that is a lot worse than component (nine pins).

 

It’s good enough for a SDTV (480i or 576i).

 

Note: Most SDTVs that support SCART don’t support S-video, which results in a black-and-white image when using S-video to SCART because the colour channel isn’t transmitted.

[image: S-video connector]

 

- Composite

It uses a single-channel signal, which provides pretty bad image quality even on an SDTV. S-video and component use the same connection on the video card.

[image: composite connector]

 

To summarize (best to worst quality when connected to a TV)

1. HDMI / DVI

2. VGA*

3. Component*

 

4. S-video

 

5. Composite

 

* VGA normally offers slightly better IQ than component, but some HDTVs prefer component. Try both and see what is best for your HDTV if it supports both!

 

Important!

Only HDTVs based on DLP or LCoS technology are 100% digital – LCD and plasma are analogue at the pixel/gas-pocket level. VGA and component can easily match DVI/HDMI image quality on these “analogue” sets, while they lag behind when the HDTV is a DLP or LCoS.

More information (pdf)

 

HDCP (High-Bandwidth Digital Content Protection)

This is a DRM (Digital Rights Management) technology which is currently used by HD-DVD and Blu-ray movies. The movie industry thinks this will reduce piracy, but it makes it a lot harder for people to back up their originals.

 

You need a video card and monitor/HDTV which are both HDCP ready if you want to use digital connections (HDMI, DVI).

 

Analogue connections currently have no such limitations, since ICT (Image Constraint Token) is not yet implemented – when it is (2009, or perhaps as late as 2012), the resolution will be reduced to 540p or output won’t be allowed at all. It should be noted that AACS (Advanced Access Content System) doesn’t allow analogue connections to transfer more than 1080i.

[image: HDCP diagram]

 

You can bypass HDCP by using Slysoft’s AnyDVD HD ($79) – it will then work with video cards and displays which don’t support it! It can also remove the other user restrictions set by DRM.

 

________________________________________________________________

 

• Size

Most medium sized cases can use the 8800 GTX (currently the longest), although fitting it can be a bit of a hassle in cases like the Thermaltake Tsunami:

[image: 8800 GTX barely fitting in a Thermaltake Tsunami]

 

Many Shuttle-style small form factor systems (preferably with a 400W PSU) also have enough room (though none left for an extra expansion card).

 

When you run SLI/Crossfire with cards using two-slot coolers, the available expansion slots will be rather limited:

[image: SLI setup with dual-slot coolers]

 

Unofficial 8800 GTS/GTX Compatible Cases List

 

________________________________________________________________

 

• Noise

This is something which is really difficult to accurately quantify:

I'd be wary of noise figures (even though I include them in my own reviews) as they're heavily system-dependent and I've yet to see anyone include frequency ranges, sensitivity, sound spectral analysis data, etc with noise values; unless you're willing to cross-reference them with several other sources, they're probably not worth inserting them into the guide.
dBA isn't anymore "accurate" than using dB as neither are actual measures of volume - they're both units of a ratio of sound pressures - and neither take into account the human ear; even dBA with its weighted filter isn't the same. But you're right in that collecting a valid cross-reference with sound is going to be a problem, not least because of what units/filters are used.

From a physics teacher and Beyond3D writer

 

When someone says a card is quiet, this might not be the case, since noise is a highly subjective experience. However, if the same person tries more than one card, it at least gives us an idea of how noisy they are relative to each other, even if the actual loudness remains undetermined.

 

Cards using passive coolers have no such issues – but other issues become more apparent. Temperature and case ventilation now matter more, since there is no fan to effectively move heat away from the heat sink, and components in its close vicinity can become hotter. Overclocking can also be restricted by the higher temperatures, although this is often not an issue for the intended usage (e.g. HTPC).

 

There exist several very quiet and well-performing third-party coolers you can buy, but installing one will void the warranty (except on EVGA cards). To name a couple: Arctic Cooling’s Silencer and Zalman Tech’s VF900.

 

________________________________________________________________

 

• Video Memory

Its purpose is to temporarily store information to ensure fast access when the GPU (Graphics Processing Unit) wants to manipulate it. The reasons for having dedicated onboard memory rather than using system memory are due to much lower timings and much faster internal transfers between the GPU and video memory (called memory bandwidth).

 

Video memory is used to store such things like:

- A few finished and pending frames (frame buffer: front, back) and its size increases dramatically when using AA

- Other buffers (vertex, w, stencil, etc.)

- Textures

- Maps (bump, light, etc.)

 

As games become more and more complex, they require more video memory as a minimum to run – and the requirement of course increases with higher in-game settings, resolution and AA.
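To get a feel for how resolution and AA drive memory usage, here is a deliberately naive estimate of the frame buffer alone. Real cards compress buffers and don’t scale every buffer by the sample count, so treat this as an upper-bound sketch:

```python
def framebuffer_mb(width, height, aa_samples=1, bytes_per_pixel=4, buffers=3):
    """Rough footprint (MB) of front + back + z buffers (buffers=3),
    naively multiplying everything by the MSAA sample count."""
    return width * height * bytes_per_pixel * buffers * aa_samples / 2**20

print(framebuffer_mb(1280, 1024))                # -> 15.0 MB without AA
print(framebuffer_mb(1280, 1024, aa_samples=4))  # -> 60.0 MB with 4xAA
```

Textures, maps and the other buffers come on top of this, which is why high settings can exhaust a card’s memory.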

 

When I used a Mobility X700 256MB (performance ~ desktop 9700 Pro) in 2006, many of the games I tried came close to filling its video memory – so 128MB would have bottlenecked its performance.

 

Since video memory usage is so sensitive to the settings used, the amount needed scales with the card’s performance. This in turn means that an nVidia 6200 512MB is way too slow to effectively utilize the available memory – 128MB would be more fitting.

 

Size recommendations

- 128MB: up to X1600 Pro / 6600 GT (256MB is good from ATI 9700)

- 256MB: up to X1950 XT / 7900 GTX (512MB is good from ATI X1800 XT / nVidia 7900 GT)

- ≤512MB: up to 8800 GTS

- >512MB: 8800 GTS and faster

 

If I would like to use the card for a long while (2008 or later) I would rather have an X1900 XT 512MB than an X1950 XT 256MB, even though they are pretty equal performance wise now (Q1 2007).

 

________________________________________________________________

 

 

 

 

 

Technical Terms: 101

• Theoretical Performance

• Image Enhancements

- AA (Anti-aliasing)

- AF (Anisotropic filtering)

• How a video card works

 

• Theoretical Performance

It should be noted that these numbers are purely theoretical and should therefore not be used by themselves when comparing cards. X850 XTX vs. X1900 XTX is an excellent example of this, and it tells us that architecture is extremely important for real-life performance.

 

- Fragment Rate

This is how many fragments (pixel parts) per second the card can perform math on – this manipulates the appearance of a pixel.

Formula: number of pixel shader units (or stream processors on a unified architecture)*core (or shader) frequency (MHz).

 

- Fill-rate (Gp/s)

Tells us how many finished pixels the card can output per second.

Formula: number of ROPs*core frequency (MHz).

 

- Texture Fill-rate (Gt/s)

This is how fast it can texture pixels – basically giving 3D objects a surface.

Formula: number of TMUs*core frequency (MHz).

 

- Memory Bandwidth (GB/s)

It’s how fast information travels between the GPU and video memory. DDR (Double Data Rate) memory allows two information transfers per clock-cycle – this is the effective memory frequency.

Formula: (memory bus width (bit)*effective memory frequency (MHz))/8.

1 Byte = 8 bits.
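The formulas above can be collected into a small calculator. The example figures are the commonly quoted 8800 GTX specifications (24 ROPs, 575 MHz core, 384-bit bus, 1800 MHz effective memory):

```python
def pixel_fillrate_gps(rops, core_mhz):
    """Fill-rate (Gp/s) = number of ROPs * core frequency (MHz)."""
    return rops * core_mhz / 1000

def texture_fillrate_gts(tmus, core_mhz):
    """Texture fill-rate (Gt/s) = number of TMUs * core frequency (MHz)."""
    return tmus * core_mhz / 1000

def memory_bandwidth_gbs(bus_width_bits, effective_mem_mhz):
    """Memory bandwidth (GB/s) = (bus width (bit) * effective frequency (MHz)) / 8."""
    return bus_width_bits * effective_mem_mhz / 8 / 1000

print(pixel_fillrate_gps(24, 575))      # -> 13.8 Gp/s
print(memory_bandwidth_gbs(384, 1800))  # -> 86.4 GB/s
```

As the X850 XTX vs. X1900 XTX example shows, these numbers say nothing about architectural efficiency on their own.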

 

 

________________________________________________________________

 

• Image Quality Enhancements

 

- AA (Anti-Aliasing)

[image: aliasing example]

The screen consists of a lot of pixels (square entities), and because of this shape, aliasing occurs – along with crawling (aliasing in motion). This isn’t a problem for lines which are perfectly vertical or horizontal, since they follow the pixel’s sides. Anti-aliasing reduces the problem by making the transition between pixels smoother, adjusting the light intensity of each pixel. This is done by creating sub-pixels which sample the actual luminance (colour) within the pixel, producing a final pixel colour. More samples ensure an even smoother transition between pixels.

[image: anti-aliasing comparison]

 

There exist two main solutions:

 

MS (Multi-Sampling)

This method only works on polygon edges. Think of polygons as something with a skeleton (called a mesh). Not everything on screen is made up of polygons, so it won’t catch everything (e.g. vegetation). It requires a lot of memory bandwidth and memory.

4xRGMS (D3D FSAAViewer)

[image: 4xRGMS sample pattern]

Pink = sub-pixels

Green = texture sample

 

SS (Super-Sampling)

It’s a “catch all” method. The normal implementation renders a higher-resolution version of the frame (4x = 4x the resolution!) which is then down-sampled to the actual screen resolution. This ensures that everything on-screen gets anti-aliased. It requires a lot of fill-rate, memory bandwidth and memory.
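The down-sampling step can be sketched in a few lines: pretend the card rendered a tiny 4x4 frame for a 2x2 screen (4x OGSS) and average each 2x2 block into one final pixel:

```python
def downsample(hi_res, factor=2):
    """Average factor x factor blocks of a high-res frame (rows of grayscale
    values) down to the target resolution -- the core of ordered-grid SS."""
    out = []
    for y in range(0, len(hi_res), factor):
        row = []
        for x in range(0, len(hi_res[0]), factor):
            block = [hi_res[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A polygon edge crossing the frame: the mixed block becomes an in-between shade.
frame = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [0, 255, 255, 255],
         [255, 255, 255, 255]]
print(downsample(frame))  # -> [[0.0, 255.0], [191.25, 255.0]]
```

The 191.25 value is exactly the smoothed edge pixel that anti-aliasing is after.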

 

Ordered Grid vs. Rotated Grid

Ordered grid = samples follow the shape of the pixel

Rotated grid = the same as above, but at an angle (see picture above)

RGSS is seen to have one basic advantage over OGSS: More effective anti-aliasing near the horizontal and vertical axes, where the human eye can most easily detect screen aliasing (jaggies). This advantage also permits the use of fewer sub-samples to achieve approximately the same visual effect as OGSS.

Quote

 

- AF (Anisotropic Filtering)

No AF............................16xAF

[image: no AF vs. 16xAF comparison]

If you look down at the ground beside your feet, it’s highly detailed – this is the base texture map.

 

As you look down the street it becomes more and more blurry – these are mipmaps (lower-resolution versions of the base texture map). If the base texture map is 1024x1024, then there are ten mipmaps (512x512, 256x256, etc.) to use. The blurriness comes from pixels being blended together at a higher and higher rate as the mipmap resolution shrinks. AF tries to fix this by taking a higher number of texture (texel) samples per pixel.
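The count of ten mipmaps follows from repeated halving down to 1x1. A small sketch:

```python
def mipmap_levels(base_size):
    """Number of mipmap levels below a square base texture,
    halving the resolution each step down to 1x1."""
    levels = 0
    while base_size > 1:
        base_size //= 2
        levels += 1
    return levels

print(mipmap_levels(1024))  # -> 10 (512x512, 256x256, ..., 1x1)
```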

 

You can choose the level of AF from the available: 2x, 4x, 8x and 16x. I would recommend using at least 8x when playing games where it’s beneficial (i.e. close to the mipmaps, e.g. First Person Shooters).

 

Game developers use these mipmaps to save performance, but today’s cards normally take only a small performance hit when applying 16xAF. The extra memory bandwidth cost isn’t a big issue today.

 

[image: colored mipmap AF test on G80]

You might have seen something similar in a review. This is a test meant to show the card’s AF image quality. The picture above is from an nVidia 8800, which currently has the best AF quality available – even without the high quality setting.

- Colours = different mipmaps

- Distance from viewer = how far out each mipmap is used. This can depend on the angle at which the surface is viewed, which is caused by the card’s architecture.

 

________________________________________________________________

 

 

 

 

 

Advanced Introduction

 

How a video card works

 

This introduction will cover what the video card does when you play a game – first on a traditional architecture (e.g. 7900 GTX) then comparing it to a unified one (e.g. 8800 GTX).

 

It requires a lot of resources to turn the original data into information which can be displayed on a screen – this has produced many interesting workload-reducing solutions, without which fluid motion couldn’t be achieved.

 

Basics

Today almost all games are three dimensional:

[image: 3D coordinate system]

 

Video cards are built around parallelism, which yields impressive performance at a much lower frequency than a single “pipeline” would need. The great thing about video cards is that they can perform calculations on both vertices and pixels (even from the same triangle) at the same time, despite these being two completely different processes.

 

Application

The card must first rely on the game engine (e.g. CryEngine2), API (e.g. Direct3D), and display driver (e.g. nVidia ForceWare 158.18) before it can perform any actual work.

[image: API stack]

The game usually uses vertices (a corner with two sides) as building blocks to make primitives (polygons, lines and dots). Polygons consist of three (a flat triangle) or more (e.g. a volume) vertices. Before a new frame is created, the game engine moves the geometry (primitives), which the video card then transforms.

 

To save performance, one can use level of detail (LOD): versions of an object containing fewer vertices, lower resolution, etc. than the original right in front of you. One example is the mipmapping covered in the AF section earlier in this guide. For objects, a simple dot is often used to determine the distance, which then decides the version used.

 

Fans and strips are a great way of reducing the number of vertices (e.g. in a tire), because many of the vertices are shared by more than one triangle and can therefore be excluded without loss of information.

[image: triangle fans and strips]

 

Geometry Basics

- Model Space: Each model has its own coordinate system

- World Space: Everything is in the same coordinate system (normally not used)

- View Space: The eye represents the beginning of the coordinate system.

- Clip Space: The view space with coordinates ranging from -1 to 1 (x, y) and 0 to 1 (z). A w-factor is used to standardize (scale) clipping operations.

- Screen Space: Makes a 2D image (the screen) from a 3D image.

 

The release of nVidia GeForce 256 in 1999 was a revolution because it introduced hardware transform and lighting (T&L) which were previously performed by the CPU.

 

Translation: moving something along any of the three axes.

Rotation: rotation on an arbitrary axis

Scaling: changing the size, shape or both by a factor.

Skewing: changing the shape by rotating it on one or more axes.

 

Transforming from world space to view space usually takes a translation and a rotation.

 

The transforms are typically done by multiplications and additions through matrix math.

 

Instead of using a 3x3 matrix (x,y,z axes) one also incorporates the earlier mentioned w-factor:

[image: transform matrices]
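As a concrete instance of this matrix math, here is a 4x4 homogeneous translation – the w component is exactly what lets translation be expressed as a multiplication:

```python
def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by an (x, y, z, w) vertex."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Move the vertex (1, 2, 3) by (10, 0, -5); w stays 1:
print(mat_vec(translation(10, 0, -5), [1, 2, 3, 1]))  # -> [11, 2, -2, 1]
```

Rotation and scaling are just different 4x4 matrices multiplied the same way, so a whole chain of transforms collapses into one matrix.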

 

The Geometry Process

Occlusion culling (trivial rejection) decides whether something should be rendered at all (i.e. whether it is visible) to save video card processing power. An object is compared against the view space – if it doesn’t exist there, it’s discarded. It’s extremely compute-intensive to check every triangle of an object, so one normally puts it inside a bounding enclosure (e.g. 8 vertices for a box) and tests that instead.

 

Next is back-face culling, which discards a triangle’s surface if its normal (blue arrow) points more than 90 degrees away from the view camera. This can also be done in screen space.

[image: back-face culling]
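The back-face test itself is just a dot product – a minimal sketch, assuming the camera looks along -z:

```python
def is_back_facing(normal, to_viewer):
    """Cull a surface whose normal points away from the viewer, i.e. the
    angle between the normal and the direction toward the camera exceeds
    90 degrees (dot product <= 0)."""
    return sum(n * v for n, v in zip(normal, to_viewer)) <= 0

to_viewer = (0, 0, 1)  # camera looks down -z, so "toward viewer" is +z
print(is_back_facing((0, 0, 1), to_viewer))   # -> False: front-facing, kept
print(is_back_facing((0, 0, -1), to_viewer))  # -> True: back-facing, culled
```

On closed objects this alone discards roughly half the triangles before any pixel work happens.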

 

The lighting process happens after all the transforms are finished. It’s calculated by taking into consideration the properties of both the light and the material’s surface. For example, real-time radiosity dynamically lights an object, which may also be affected by the light of other objects, e.g. creating colour bleed.

 

nVidia's Chief Scientist Dave Kirk: "lighting is the luminance value, whereas shading is about reflectance and/or transmittance."

 

Next is the clipping stage, which decides what to do with triangles that cross the outer boundaries of the view space. If only one of a triangle’s three vertices is inside the view space, the rest is discarded (clipped). This is where the w-factor comes in, since dividing x, y and z by the scale factor w produces a more manageable perspective cube. After clipping, the polygon must be retessellated – new vertices are created so the polygon becomes complete again, but now inside the view space.

 

 

 

Drivers & Utilities

 

Drivers

 

ATI

 

Official

Desktop

 

Notebook

 

Third Party

Omega

Tweaked version which supports most Mobility and desktop cards, all in one driver.

 

ModTool

Makes regular Catalyst drivers work on Mobility cards.

 

nVidia

 

Official

Desktop

 

Notebook (GO7)

 

Third Party

LaptopVideo2Go

Great support of most GO cards!

 

Matrox

Official

 

3dfx

Falconfly (official and third party)

 

Intel

Official

 

 

Utilities

 

ATI Tray Tools (ATI)

Overclocking, temperature, fan control, game profiles. Can replace CCC (Crossfire support is still in its infancy, though). Great tool!

 

NVTray (nVidia)

nVidia's equivalent to ATI Tray Tools (no G8X support), but in my experience it’s less user-friendly.

 

ATI Tool (ATI and nVidia)

Overclocking, temperature, fan control.

 

NVTweak

Tweaking of 3D Stereo & ForceWare settings.

 

nTune (official from nVidia)

Overclocking, temperature, fan control, system overclocking on approved motherboards.

 

RivaTuner (ATI and nVidia)

Overclocking, temperature, fan control, softmodding. Discontinued (2.0 Final).

 

DriverCleaner

Removes traces after a driver uninstall. This is very important when moving from one brand to another (e.g. ATI -> nVidia) so potential conflicts can be minimized if you don't perform a format.

 

1. Uninstall the display driver

 

2. Boot up in safe mode. Run -> type “msconfig” -> tick “safeboot” in the “Boot.ini” tab and apply the change.

 

3. Use DriverCleaner and choose the components which belong to your brand.

 

4. Disable safe mode and reboot.

 

5. Install the new driver and reboot.

 

________________________________________________________________

 

23 April 2007: Initial release

29 April 2007: Added a “How a video card works” teaser :p

[Quoting the AGP 4x/8x section above]

I'm discussing this in another thread right now. Do you have any sources for the claim that AGP 8x isn't up to the task for fast graphics cards? Any performance tests showing how much or how little AGP reduces performance compared to PCIe?

[Quoting the AGP 4x/8x section and the question above]

Firingsquad

vnunet

 

I don't see the point of launching an AGP version of, say, the 8800 GTS. It would be both heavily overpriced and perform worse than the PCIe version.

 

Tom's Hardware has tested different versions of PCIe (x4, etc.). Even though the results can't be compared directly with AGP, since PCIe is duplex, they still give an indication.

[Quote:] Nice guide, but it needs to be translated into Norwegian if it's to serve any real purpose on this forum.

I basically agree. Personally I have no problem understanding it (perhaps the technical parts, but not the language itself). Not long ago a moderator closed a work log on this forum because it was written in English. It was written in English because the author ran the thread on two different forums (the other one was, naturally, English-speaking). By that logic this thread should also be closed, even though I'm strongly against it. The best thing would be if the thread starter could take the trouble to translate it in the near future, or get someone else to do it.

 

The guidelines also state that you should write in Norwegian.

 

Goscinny ;)

What vnunet writes is about what I expected: a performance loss of 0-23% from using AGP instead of PCIe on a Radeon X1950 Pro. They also write that they saw no further loss from enabling AA. That's logical enough, since AA doesn't increase traffic over the PCIe or AGP bus, only the traffic within the GPU and the memory bus on the video card. This shows there can still be a point in buying fast AGP cards if having to upgrade the rest of the system at the same time would be much more expensive.

 

The Firingsquad test says roughly the same, but documents it with far more games. Many of the games don't suffer noticeably from AGP. The higher the resolution used, the smaller the percentage difference in performance.

 

Their conclusion also says:

AGP interface: Technically the AGP interface is dead, but obviously it's still delivering solid performance in today's games. While we did see a slight advantage in favor of the PCIe-based X1950 Pro card in our benchmarks, keep in mind that the PCIe card was running on a high-end nForce 590 motherboard with DDR2-800 memory. The AGP-based PowerColor X1950 Pro card was using NVIDIA's older nForce3 chipset and relied on slower DDR400 RAM.

 

The THG test doesn't test AGP at all, so it's poorly suited for comparing AGP with anything; 2GB/s of PCIe need not be comparable to 2GB/s of AGP. Besides, both AGP and PCIe are duplex. The difference is that AGP is half duplex (two-way communication, one direction at a time), while PCIe is full duplex (two-way communication, both directions simultaneously).

 

Price-wise the AGP-PCIe bridge probably won't add much extra cost, but the low production volume and the PCB changes needed for power delivery will do their part to raise the price.

 

My own conclusion is that AGP is somewhat unfavourable, but not so bad that it ruins performance, and that it still makes sense in the mid-range (NOK 1000-2000).

 

Otherwise I'd point the thread starter to Babelfish, which can do the rough work of the translation. It must of course be read through and corrected afterwards.

I agree with almost everything you write, especially that very fast cards no longer belong on the AGP platform.

 

Do you dispute that the Tom's Hardware test still gives some indication, even just a little ...

 

I should have written full duplex :(

 

Interesting discussion.

 

By the way, it is allowed to write in English here, just like the other Scandinavian languages, isn't it?

Edited by guezz
Nice guide, but it has to be translated into Norwegian if it's to serve any real purpose on this forum.

I pretty much agree. [...]

Goscinny ;)

 

I absolutely don't think the thread should be closed in any way. I was only commenting on the fact that the guide would be more useful if it had been written in Norwegian. I personally have no trouble reading it in English, since I speak and write English as well as Norwegian. But this is a Norwegian forum, so it's logical to assume that most readers prefer reading Norwegian.
  • 4 weeks later...

Click to show/hide the content below
This is a "beta" release (maybe 80% done @ 4900 words). Please come with feedback!

 

Disclaimer

I have tried to be as objective as possible. If you find factual errors or something worth adding, please don’t hesitate to tell me about it (preferably by using a PM).

 

1. Determining your actual needs.

 

2. Video card basics. How much video memory do I need? PCIe? And so on.

 

3. Introduction to technical terms on a 101 level. What is AA? AF? And so on.

 

4. A more advanced introduction, covering topics like: What happens inside a video card?

 

5. Drivers and utilities for ATI, nVidia and others.

 

Remember that you can use Ctrl+F and enter a section headline for easy access (e.g. “• Video memory”).

 

Content Overview

Determine your actual needs

• Budget ceiling

• What kind of applications will be used?

• Do you have any special preferences?

• So you want to play games

• Will the system bottleneck the card?

• Cost vs. Gain

• Please do some research

• Please ask for help if you're uncertain

 

Video Card Basics

• What is a video card?

• Vendors

• Connection interfaces

• External connections (inc. HDCP)

• Power and its connections

• Size

• Noise

• Video memory

 

Technical Terms: 101

• Theoretical Performance

• Image Quality Enhancements

 

Advanced Introduction

• How a video card works [about 50%]

 

Drivers & Utilities

• Official and third party drivers

• Utilities also inc. DriverCleaner

 

 

Determine your actual needs

Today it's pretty easy to get lost in the myriad of different cards and technical terms, which makes it all the more important to determine your actual needs so the viable options can be narrowed down.

 

1. Budget ceiling

How much money are you willing to spend? Even though the right card for you (if you actually need one) might be way below the ceiling it will at least exclude the more expensive ones. You can always raise the ceiling later if you think the potential benefits (performance, features, etc.) are worth the extra cost.

 

2. What kind of applications will be used?

A family PC which is only used for Word, banking and surfing the Internet usually doesn't need a dedicated video card – almost anything can be used as long as you can connect your monitor to it. The only real exception is if you would like to use Microsoft's Vista Aero GUI, which requires a DX9-capable card – an old IGP (integrated on the motherboard) or video card will not suffice. You will need an IGP (Intel GMA 950 / ATI Xpress 200 / nVidia 6100 or better) or a video card (ATI 9500 / nVidia FX5200 SE or better) and drivers which support Vista.

 

If you would like to play games then a dedicated video card is highly recommended, although IGPs like ATI X1150 and nVidia 6150 are pretty decent for games which are a few years old or more.

 

Professional applications usually favour cards like the nVidia Quadro or ATI FireGL because their drivers are written specifically to provide good image quality and performance. I can't say much else since I'm fairly ignorant in this field.

 

3. Do you have any special preferences?

Such as:

- Noise (passive cooling, loudness)

- Power usage

- Size (length, half-size, single/dual-slot cooler)

- Multimedia capabilities (HDCP, 3D glasses, de-/encoding performance and support)

- External connections (HDMI, dual-link DVI, etc.)

- Image quality capabilities (D3D, AA, AF, HDR, etc.)

- Operating System and driver support

- Etc.

 

4. So you want to play games

Ask yourself what kind of games (and how often) will be played and which settings and resolution are the most preferable – also how long do you plan to keep the card – these are all important questions when choosing the right card for you. Obviously what you want and what you can afford don’t always coexist but it will at least give you an idea to what to look for.

 

Old games like Counter-Strike 1.6 require a much less powerful video card than e.g. Crysis – so the latest and greatest isn't necessary if you mostly play older games.

 

The latest games require a very powerful card to be enjoyed in all their glory (high settings / high resolution / AA / AF) but this might be something of less importance to you.

 

If all you play is WoW, then you should especially take note of cards which excel in that game.

 

5. Will the system bottleneck the card?

A new and powerful video card isn't very useful if it's severely bottlenecked by the rest of the system. 8800 GTX + 512MB + AMD Athlon 1800+ = performance nightmare, while 8800GTX + 2GB + Intel C2D E6600 = very good. Less powerful cards don’t need such a good system to run well.

 

When you don't have enough system memory the PC will try to compensate by using the much slower hard drive to store temporary game files - the game stutters (negative performance spikes). This can also happen if the video card runs out of video memory and is forced to use system memory which in a worst case scenario also results in the system memory exceeding its capacity.

 

The CPU’s role while gaming is to "fuel" the video card with information while performing other roles (AI, physics, sound, network (when playing online), etc.). A slow CPU will have a very limited capacity to fill all these roles if you have a powerful video card.

 

6. Cost vs. Gain

The old card can always be sold but will the new card's performance or other functions be worth the price difference?

 

Let's say you’re upgrading from an ATI X1900 XTX to the faster nVidia 8800 GTS 640MB. Will the extra performance, more video memory, DX10 and other technologies (e.g. CSAA, better AF quality and better HD performance) be worth it?

 

7. Please do some research

Read some reviews / articles of your potential video cards and about technologies which might be of importance to you (HDCP, etc.).

 

8. Please ask for help if you're uncertain

It might be here, from someone you know, or elsewhere – just remember to provide the crucial information about budget and desired usage. Please stay away from people who are ignorant about video cards (nVidia 6200 512MB (ZOMG, it has 512MB!!11!), anyone?)!

 

 

 

 

 

Video Card Basics

kopiavbfg8800gtxez8.jpg

 

• What is a video card?

• Vendors

• Connection interfaces

• External connections (inc. HDCP)

• Power and its connections

• Size

• Noise

• Video memory

 

 

• What is a video card?

It provides the means to manipulate and display information through a cable (VGA, DVI, etc.) to a display so you can perform tasks like: use the Internet, watch movies and play games. The card can be integrated onto a motherboard (IGP) or be a dedicated solution (see picture above).

 

• Vendors

All vendors use the reference design set by the chip producers (ATI, nVidia, etc.), which ensures equal build quality. The main differences are: the cooler used (reference or third party), warranty terms and length, support quality, performance (i.e. factory overclocked or reference), bundle and price.

 

• Connection interfaces

The common interfaces are AGP 4x/8x and PCIe, which connect the video card to the motherboard.

 

- PCI (≤133MBps and 25W)

It was commonly used before the introduction of AGP 1.0 in 1997. Its severely limited bandwidth doesn't allow any fast cards to exist (the ATI X1550 is currently the fastest). If you are going to softmod a card, it might be good to have a PCI card as a backup in case the mod fails.

pcief8.jpg

 

- AGP 4x/8x (2.1GBps@8x and 42W)

This is the latest version of PCI's successor, and it has barely enough bandwidth to supply even the latest G7X and R5X0 cards – so AGP will probably not suffice for PCIe cards like the 8800 GTS and faster. The interface supports both 4x and 8x cards. Even though cards are still being bridged from PCIe to AGP, progress has stalled performance-wise with the release of the nVidia 7950 GT and ATI X1950 XT – which might be a good stopgap before moving to PCIe.

agpcm8.png

 

- PCIe x16 (4.0GBps up/down and 75W)

This has become the standard interface. It provides more power through the slot itself, so external power connections are less important.

pciewz3.png

 

- PCIe x16 2.0 (8.0GBps up/down and 75W)

Version 2.0 of the PCIe interface offers twice the bandwidth of 1.1, and older PCIe cards are still supported. It should be excellent for bridgeless SLI/Crossfire.

 

________________________________________________________________

 

• Power and its connections

Extra power connections are needed when the power provided by the connection interface isn't sufficient.

 

To summarize the power being supplied by the motherboard:

- PCI = 25W

- AGP = 42W

- PCIe = 75W

 

The power connectors listed below draw their power from the +12V rail only, which in turn requires a lot of amperes (A) to "fuel" the most power-hungry video cards. The PCIe connection interface uses the +3.3V and +12V rails, while PCI uses +5.0V.
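To make the +12V requirement concrete, here's a tiny sketch (my own illustration, not from any vendor documentation) converting the connector wattages below into amperes:

```python
# Power (W) drawn on a rail at a given voltage (V) maps directly to
# current in amperes: A = W / V.
def amps_needed(watts, volts=12.0):
    return watts / volts

for name, watts in [("6-pin PCIe", 75), ("8-pin PCIe", 150)]:
    print(f"{name}: {watts}W -> {amps_needed(watts):.2f}A on +12V")
```

This is why PSU +12V amperage ratings matter more than the headline wattage for power-hungry cards.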

 

- 4 pin floppy cable (36W)

This power connector is used on cards like the ATI 9700 Pro AGP.

powerfloppyfw4.jpg

 

- 4 pin molex (72W)

It's recommended to only use dedicated cable(s).

power4pinmolexin7.jpg

 

- 6 pin PCIe (75W)

This has become the standard power connection. It's recommended to only use dedicated cable(s).

power6pindo0.jpg

 

- 8 pin PCIe (150W)

It's recommended to only use dedicated cable(s).

power8pinmolexjpghk4.jpg

 

This is a decent Internet based PSU calculator

 

The graph below clearly shows that the power requirements can vary greatly between cards. A good power supply unit (PSU) is both efficient and stable – thus a good 500W (PC Power & Cooling, etc.) can easily match a bad 700W (Q-Tec, etc).

8600gtspowerei0.gif

Graph from X-bit Labs

 

More information

 

________________________________________________________________

 

• External connections (inc. HDCP)

 

Great article discussing digital vs. analogue

 

Digital

- DVI (Digital Visual Interface)

This has become the new standard way of connecting your monitor to the video card. It offers no real image quality improvements over the analogue VGA.

 

The card must support HDCP if such DRM-protected material is used over DVI.

 

DVI-A: Analogue only

dviarx6.gif

DVI-D: Digital only

dvidssm2.gif

DVI-I: Digital and analogue (a DVI to VGA converter can be used). You will normally find this one on video cards.

dviisvz2.gif

 

Single-link DVI offers enough bandwidth to run up to 1920x1200@60Hz, while dual link (more pins) supports up to 2560x1600@60Hz (2560x1600@60Hz needs about a 256 MHz pixel clock (<7.4 Gbps), beyond the 165 MHz (3.7 Gbps) single-link DVI limit).

DVI-I Dual Link

dviidkd6.gif
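As a rough sanity check of the single vs. dual link limit, here's a sketch I put together (not from the DVI spec): it estimates the pixel clock as width × height × refresh × a 1.1 blanking-overhead factor, which is a crude stand-in for real reduced-blanking timings, so treat the numbers as approximate.

```python
# Estimate the TMDS pixel clock for a display mode. The 1.1x blanking
# overhead roughly approximates reduced-blanking timings; real CVT
# numbers differ slightly.
SINGLE_LINK_LIMIT_MHZ = 165

def pixel_clock_mhz(width, height, refresh_hz, overhead=1.1):
    return width * height * refresh_hz * overhead / 1e6

for mode in [(1920, 1200, 60), (2560, 1600, 60)]:
    clock = pixel_clock_mhz(*mode)
    link = "single link OK" if clock <= SINGLE_LINK_LIMIT_MHZ else "dual link needed"
    print(f"{mode[0]}x{mode[1]}@{mode[2]}Hz ~ {clock:.0f} MHz -> {link}")
```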

 

- HDMI (High-Definition Multimedia Interface)

DVI's successor offers more bandwidth and can also carry sound – all in a smaller connector.

 

HDMI 1.0-1.2a: 4.9Gbps offers up to 1080p@60Hz and 8-channel/192kHz/24-bit audio.

HDMI 1.3(a): 10.2Gbps offers up to 1440p+@60Hz and 8-channel compressed lossless audio (Dolby TrueHD or DTS-HD Master Audio).

 

Audio FAQ

 

The card must support HDCP if such DRM-protected material is used over HDMI.

hdmivsdvipd1.jpg

 

More information

 

- S/PDIF (Sony/Philips Digital Interface)

It's used for transferring digital audio and is rarely supported by video cards.

spdfdg9.jpg

 

 

Analogue

- VGA (Video Graphics Array)

This is DVI’s predecessor and has a competitive image quality. The maximum resolution it supports is 2048×1536@60Hz when using the standard 400MHz RAMDAC (Random Access Memory Digital-to-Analogue Converter).

vgaty4.jpg

 

- HDTV Output (YPrPb component)

It works great up to 1080p and offers very good image quality - easily comparable with DVI and HDMI.

componentqw2.jpg

 

- S-video (Separate video)

Since the connection only uses six pins over two channels (brightness and colour), it provides image quality a lot worse than component (nine pins).

 

It’s good enough for a SDTV (480i or 576i).

 

Note: Most SDTVs which support SCART don't support S-video, which results in a black-and-white image when using S-video to SCART, because the colour channel isn't transmitted.

svideotg8.jpg

 

- Composite

It uses a single-channel signal, which gives pretty bad image quality even on an SDTV. S-video and component use the same connection on the video card.

compositeny2.jpg

 

To summarize (best to worst quality when connected to a TV)

1. HDMI / DVI

2. VGA*

3. Component*

 

4. S-video

 

5. Composite

 

* VGA normally offers slightly better IQ than component, but some HDTVs like component better. Try both and see what's best for your HDTV if it supports both!

 

Important!

Only HDTVs based on DLP or LCOS technology are 100% digital – LCD and plasma are analogue at the pixel/gas-pocket level. VGA and component will easily have IQ competitive with DVI/HDMI on these "analogue" sets, while they lag behind when the HDTV is a DLP or LCOS.

More information (pdf)

 

HDCP (High-Bandwidth Digital Content Protection)

This is a DRM (Digital Rights Management) technology which is currently used by HD-DVD and Blu-ray movies. The movie industry thinks this will reduce piracy, but it makes it a lot harder for people to back up their originals.

 

You need a video card and monitor/HDTV which are both HDCP ready if you want to use digital connections (HDMI, DVI).

 

Analogue connections currently have no such limitations, since ICT (Image Constraint Token) is not yet implemented – once it is (2009, or even as late as 2012), the resolution will be reduced to 540p or blocked entirely. It should be noted that AACS (Advanced Access Content System) doesn't allow analogue connections to carry more than 1080i.

hdcphn3.png

 

You can bypass HDCP by using Slysoft's AnyDVD HD ($79) – it will then work with video cards and displays which don't support it! It can also remove the other user restrictions set by DRM.

 

________________________________________________________________

 

• Size

Most medium sized cases can use the 8800 GTX (currently the longest), although fitting it can be a bit of a hassle in cases like the Thermaltake Tsunami:

closecallpx5.jpg

 

Many Shuttle systems (preferably with a 400W PSU) also have enough room (though no room for an extra expansion card).

 

When you SLI/Crossfire cards which use two-slot HSFs, the available expansion slots will be rather limited:

sliintro2mb.jpg

 

Unofficial 8800 GTS/GTX Compatible Cases List

 

________________________________________________________________

 

• Noise

This is something which is really difficult to accurately quantify:

I'd be wary of noise figures (even though I include them in my own reviews) as they're heavily system-dependent and I've yet to see anyone include frequency ranges, sensitivity, sound spectral analysis data, etc with noise values; unless you're willing to cross-reference them with several other sources, they're probably not worth inserting them into the guide.
dBA isn't anymore "accurate" than using dB as neither are actual measures of volume - they're both units of a ratio of sound pressures - and neither take into account the human ear; even dBA with its weighted filter isn't the same. But you're right in that collecting a valid cross-reference with sound is going to be a problem, not least because of what units/filters are used.

From a physics teacher and Beyond3D writer

 

When someone says a card is quiet, that might not hold for you, since noise is a highly subjective experience. If the same person tries more than one card, though, it at least gives us an idea of how noisy they are compared to each other, even if the absolute loudness is undetermined.

 

Cards using passive coolers have no such issues – but other issues become more apparent. Temperature and case ventilation now matter more, since there is no fan to transfer heat effectively away from the heat sink, and components in its close vicinity can also become hotter. Higher temperatures can also restrict overclocking, although this is often not an issue for the intended usage (e.g. an HTPC).

 

There exist several very silent, well-performing third-party coolers you can buy, but fitting one will void the warranty (except on EVGA cards). To name a couple: Arctic Cooling's Silencer and Zalman Tech's VF900.

 

________________________________________________________________

 

• Video Memory

Its purpose is to temporarily store information to ensure fast access when the GPU (Graphics Processing Unit) wants to manipulate it. The reasons for having dedicated onboard memory rather than using system memory are much lower latencies and much faster transfers between the GPU and the video memory (called memory bandwidth).

 

Video memory is used to store such things like:

- A few finished and pending frames (frame buffer: front, back), whose size increases dramatically when using AA

- Other buffers (vertex, w, stencil, etc.)

- Textures

- Maps (bump, light, etc.)

 

As games become more and more complex they also require more video memory as a minimum to run, and the requirement of course grows with higher in-game settings, resolution and AA.
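To get a feel for how resolution and AA drive frame buffer usage, here's a back-of-the-envelope sketch (my own simplification – real drivers compress buffers and manage memory differently, so treat it as a rough upper bound):

```python
# Rough frame buffer footprint: a 32-bit colour front + back buffer
# plus a 32-bit z/stencil buffer; MSAA multiplies the sample storage
# of the back and z buffers.
def framebuffer_mib(width, height, msaa=1):
    bytes_per_pixel = 4                      # 32-bit RGBA
    front = width * height * bytes_per_pixel
    back = width * height * bytes_per_pixel * msaa
    depth = width * height * 4 * msaa        # 32-bit z/stencil
    return (front + back + depth) / 2**20

print(f"1600x1200, no AA : {framebuffer_mib(1600, 1200):.0f} MiB")
print(f"1600x1200, 4xMSAA: {framebuffer_mib(1600, 1200, msaa=4):.0f} MiB")
```

Even this crude estimate shows why enabling AA at high resolutions eats video memory so quickly.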

 

When I used a Mobility X700 256MB (performance ~ desktop 9700 Pro) in 2006, many of the games I tried came close to filling its video memory – so 128MB would have bottlenecked its performance.

 

Since video memory usage is so sensitive to the settings used, it also gives us a guideline for the amount needed as video card performance increases. This in turn means that an nVidia 6200 512MB is way too slow to utilize the available memory effectively – 128MB would be more fitting.

 

Size recommendations

- 128MB: up to X1600 Pro / 6600 GT (256MB is good from ATI 9700)

- 256MB: up to X1950 XT / 7900 GTX (512MB is good from ATI X1800 XT / nVidia 7900 GT)

- ≤512MB: up to 8800 GTS

- >512MB: 8800 GTS and faster

 

If I were going to use the card for a long while (2008 or later), I would rather have an X1900 XT 512MB than an X1950 XT 256MB, even though they are pretty equal performance-wise now (Q1 2007).

 

________________________________________________________________

 

 

 

 

 

Technical Terms: 101

• Theoretical Performance

• Image Enhancements

- AA (Anti-aliasing)

- AF (Anisotropic filtering)

• How a video card works

 

• Theoretical Performance

It should be noted that these numbers are purely theoretical and should therefore not be used on their own when comparing cards. X850 XTX vs. X1900 XTX is an excellent example of this, and tells us that architecture is extremely important for real-life performance.

 

- Fragment Rate

This is how many fragments (pixel parts) per second the card can perform math on – this manipulates a pixel's appearance.

Formula: number of pixel shader (or Stream/unified shader) units * core (or Stream) frequency (MHz).

 

- Fill-rate (Gp/s)

Tells us how many finished pixels per second it can output.

Formula: number of ROPs*core frequency (MHz).

 

- Texture Fill-rate (Gt/s)

This is how fast it can apply textures to pixels – basically giving 3D objects a surface.

Formula: number of TMUs*core frequency (MHz).

 

- Memory Bandwidth (GB/s)

It’s how fast information travels between the GPU and video memory. DDR (Double Data Rate) memory allows two information transfers per clock-cycle – this is the effective memory frequency.

Formula: (memory bus width (bit)*effective memory frequency (MHz))/8.

1 Byte = 8 bits.
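To tie the formulas above together, here's a quick sketch applying them to 7900 GTX-class figures (650 MHz core, 16 ROPs, 24 TMUs, 256-bit bus, 1600 MHz effective memory – treat the exact specs as illustrative):

```python
# The three throughput formulas from the text, as plain functions.
def fill_rate_gpps(rops, core_mhz):
    return rops * core_mhz / 1000               # Gp/s

def texture_fill_rate_gtps(tmus, core_mhz):
    return tmus * core_mhz / 1000               # Gt/s

def memory_bandwidth_gbps(bus_bits, effective_mhz):
    return bus_bits * effective_mhz / 8 / 1000  # GB/s (1 byte = 8 bits)

print(fill_rate_gpps(16, 650))           # 10.4 Gp/s
print(texture_fill_rate_gtps(24, 650))   # 15.6 Gt/s
print(memory_bandwidth_gbps(256, 1600))  # 51.2 GB/s
```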

 

 

________________________________________________________________

 

• Image Quality Enhancements

 

- AA (Anti-Aliasing)

antialiasingyh0.png

The screen consists of a lot of pixels (square entities), and because of their shape aliasing occurs, which also produces crawling (aliasing in motion). This isn't a problem for vertical or horizontal lines, since they follow the pixels' sides. Anti-aliasing reduces the problem by making the transition between pixels smoother by adjusting each pixel's light intensity. This is done by creating sub-pixels which sample the actual luminance (colour) within the pixel, from which the final pixel colour is computed. More samples ensure an even smoother transition between pixels.

aauj3.jpg

 

There exist two main solutions:

 

MS (Multi-Sampling)

This method only works on polygon edges. Think of polygons as something with a skeleton (called a mesh). Not everything on screen is made of polygons, so MS will not catch everything (e.g. vegetation). It requires a lot of memory bandwidth and memory.

4xRGMS (D3D FSAAViewer)

4xmsaaqi9.png

Pink = sub-pixels

Green = texture sample

 

SS (Super-Sampling)

It's a "catch all" method. The normal implementation renders a higher-resolution version (4x = 4x the resolution!) of the frame, which is then down-sampled to the actual screen resolution. This ensures that everything on screen gets anti-aliased. It requires a lot of fill-rate, memory bandwidth and memory.
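A toy sketch of ordered-grid 2x super-sampling (purely illustrative – the "high-resolution frame" here is just a hand-written coverage grid):

```python
# Box-filter a frame rendered at 2x resolution down to target size:
# each output pixel is the average of its four sub-pixels.
def downsample_2x2(hi_res):
    h, w = len(hi_res) // 2, len(hi_res[0]) // 2
    return [[(hi_res[2*y][2*x] + hi_res[2*y][2*x + 1] +
              hi_res[2*y + 1][2*x] + hi_res[2*y + 1][2*x + 1]) / 4
             for x in range(w)] for y in range(h)]

# A hard black/white edge in the 4x4 "render" turns into a grey
# transition pixel in the 2x2 output - that is the anti-aliasing.
hi = [[0, 0, 1, 1],
      [0, 0, 1, 1],
      [0, 1, 1, 1],
      [0, 1, 1, 1]]
print(downsample_2x2(hi))  # [[0.0, 1.0], [0.5, 1.0]]
```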

 

Ordered Grid vs. Rotated Grid

Ordered grid = samples follow the shape of the pixel

Rotated grid = same as the above but at an angle (see picture above)

RGSS is seen to have one basic advantage over OGSS: More effective anti-aliasing near the horizontal and vertical axes, where the human eye can most easily detect screen aliasing (jaggies). This advantage also permits the use of fewer sub-samples to achieve approximately the same visual effect as OGSS.

Quote

 

- AF (Anisotropic Filtering)

No AF............................16xAF

0xafvt3.jpg16xaftz3.jpg

If you look down at the ground beside your feet, it's highly detailed; this is the base texture map.

 

As you look down the street it becomes more and more blurry; these are mipmaps (lower-resolution versions of the base texture map). If the base texture map is 1024x1024, you have ten mipmaps (512x512, 256x256, etc.) to use. The reason for the blurriness is that pixels get blended together at a higher and higher rate as the mipmap resolution shrinks. AF tries to fix this by taking more texture (texel per pixel) samples.
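The mipmap chain itself is easy to sketch (illustrative only):

```python
# Each mipmap level halves the previous size until 1x1 is reached.
def mipmap_chain(base_size):
    sizes = []
    while base_size > 1:
        base_size //= 2
        sizes.append(base_size)
    return sizes

# A 1024x1024 base texture gives the ten mipmaps mentioned above.
print(mipmap_chain(1024))  # [512, 256, 128, 64, 32, 16, 8, 4, 2, 1]
```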

 

You can choose the AF level from the available 2x, 4x, 8x and 16x. I would recommend using at least 8x in games where it's beneficial (i.e. where you get close to the mipmaps, e.g. first-person shooters).

 

Game developers use mipmaps to save performance, but today's cards normally take only a small performance hit from applying 16xAF. The extra memory bandwidth cost isn't a big issue today.

 

g80-colored-aniso.png

You might have seen something similar in a review. This test is meant to show the card's AF IQ. The picture above is from an nVidia 8800, which currently has the best AF IQ available – even without the high-quality setting.

- Colours = different mipmaps

- Distance from viewer = how far each mipmap reaches. This may depend on the angle at which the mipmap is viewed, which is determined by the card's architecture.

 

________________________________________________________________

 

 

 

 

 

Advanced Introduction

 

How a video card works

 

This introduction will cover what the video card does when you play a game – first on a traditional architecture (e.g. 7900 GTX) then comparing it to a unified one (e.g. 8800 GTX).

 

It requires a lot of resources to turn the original data into information which can be displayed on a screen – this has spawned many interesting workload-reducing solutions, without which fluid motion couldn't be achieved.

 

Basics

Today almost all games are three dimensional:

xymatrixhv3.gif

 

Video cards are built around parallelism, which yields impressive performance at a relatively low frequency compared to what a single "pipeline" would need. The great thing about video cards is that they can perform calculations on both vertices and pixels (even from the same triangle) at the same time, even though these are two completely different processes.

 

Application

The card must first rely on the game engine (e.g. CryEngine2), API (e.g. Direct3D), and display driver (e.g. nVidia ForceWare 158.18) before it can perform any actual work.

apistackym8.gif

The game usually uses vertices (a corner where two sides meet) as building blocks for primitives (polygons, lines and dots). Polygons consist of three (a flat triangle) or more (e.g. a volume) vertices. Before a new frame is created, the game engine moves the geometry (primitives), which the video card then transforms.

 

To save performance one can use level of detail (LOD): versions with fewer vertices / lower resolution / etc. than the original version right in front of you. One example is the mipmaps covered in the AF section earlier in this guide. For objects, a simple dot is often used to determine the distance, which then decides the version used.

 

Fans and strips are a great way of reducing the number of vertices (e.g. in a tire), because many vertices are shared by more than one triangle and can then be excluded without loss of information.

fanstipstt0.gif
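The vertex savings are easy to quantify: an independent triangle list needs 3n vertices for n triangles, while a strip (or fan) shares vertices and needs only n + 2. A small sketch:

```python
# Vertex counts for n triangles as a plain list vs. a strip/fan.
def list_vertices(n_triangles):
    return 3 * n_triangles

def strip_vertices(n_triangles):
    return n_triangles + 2

for n in (1, 8, 100):
    print(f"{n:>3} triangles: list={list_vertices(n):>3}, strip={strip_vertices(n):>3}")
```

The saving approaches a factor of three as the strip grows longer.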

 

Geometry Basics

- Model Space: Each model has its own coordinate system

- World Space: Everything is in the same coordinate system (normally not used)

- View Space: The eye represents the beginning of the coordinate system.

- Clip Space: The view space with coordinates ranging from -1 to 1 (x, y) and 0 to 1 (z). A w-factor is used to standardize (scale) clipping operations.

- Screen Space: Makes a 2D image (the screen) from a 3D image.

 

The release of the nVidia GeForce 256 in 1999 was a revolution because it introduced hardware transform and lighting (T&L), which was previously performed by the CPU.

 

Translation: moving something along any of the three axes.

Rotation: rotation on an arbitrary axis

Scaling: changing the size, shape or both by a factor.

Skewing: changing the shape by rotating it on one or more axes.

 

Transforming from world space to view space usually takes a translation and a rotation.

 

The transforms are typically done by multiplications and additions through matrix math.

 

Instead of using a 3x3 matrix (x,y,z axes) one also incorporates the earlier mentioned w-factor:

transformsrz7.gif
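As a toy illustration of the w-factor (homogeneous coordinates) at work – my own sketch, not from the guide – a translation becomes a single 4x4 matrix multiply, and transforms compose by multiplying matrices:

```python
# 4x4 homogeneous translation matrix and a matrix-vector multiply.
def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def apply(matrix, v):  # v = (x, y, z, w)
    return tuple(sum(matrix[r][c] * v[c] for c in range(4)) for r in range(4))

# Move the point (1, 1, 1) by (+5, 0, -2); w stays 1.
print(apply(translation(5, 0, -2), (1, 1, 1, 1)))  # (6, 1, -1, 1)
```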

 

The Geometry Process

Occlusion culling (trivial rejection) decides whether something should be rendered at all (i.e. whether it is visible) to save video card processing power. Each object is compared against the view space – if it isn't inside it, it's discarded. It's extremely compute-intensive to test every triangle of an object, so one normally puts it inside a bounding enclosure (e.g. 8 vertices for a box).

 

Next is back-face culling, which discards a triangle's surface if its normal (blue arrow) points more than 90 degrees away from the view camera. This can also be done in screen space.

cullingcp8.gif

 

The lighting process happens after all transforms are finished. It's calculated from the properties of both the light and the material's surface. For example, real-time radiosity lights an object dynamically, and the object may also be affected by light from other objects, e.g. creating colour bleeding.

 

nVidia's Chief Scientist Dave Kirk: "lighting is the luminance value, whereas shading is about reflectance and/or transmittance."

 

Next is the clipping stage, which decides what to do with triangles that cross the outer boundaries of the view space. If only one of a triangle's three vertices is inside the view space, the rest is discarded (clipped). This is where the w-factor comes in, since it creates a more manageable perspective cube by dividing x, y and z by the scale factor w. After clipping, the polygon must be retessellated, which creates a vertex or vertices so the polygon becomes complete again, now inside the view space.

 

 

 

Drivers & Utilities

 

Drivers

 

ATI

 

Official

Desktop

 

Notebook

 

Third Party

Omega

A tweaked version which supports most Mobility and desktop cards, all in one driver.

 

ModTool

Makes regular Catalyst drivers work on Mobility cards.

 

nVidia

 

Official

Desktop

 

Notebook (GO7)

 

Third Party

LaptopVideo2Go

Great support for most GO cards!

 

Matrox

Official

 

3dfx

Falconfly (official and third party)

 

Intel

Official

 

 

Utilities

 

ATI Tray Tools (ATI)

Overclocking, temperature, fan control, game profiles. Can replace CCC (Crossfire support is still in its infancy, though). Great tool!

 

NVTray (nVidia)

nVidia’s equivalent of ATI Tray Tools (no G8X support), but in my experience it is less user friendly.

 

ATI Tool (ATI and nVidia)

Overclocking, temperature, fan control.

 

NVTweak

Tweaking of 3D Stereo & ForceWare settings.

 

nTune (official from nVidia)

Overclocking, temperature, fan control, system overclocking on approved motherboards.

 

RivaTuner (ATI and nVidia)

Overclocking, temperature, fan control, softmodding. Discontinued (2.0 Final).

 

DriverCleaner

Removes leftover traces after a driver uninstall. This is very important when moving from one brand to another (e.g. ATI -> nVidia), since it minimizes potential conflicts if you don’t do a clean format.

 

1. Uninstall the display driver

 

2. Boot into safe mode: Run -> type “msconfig” -> tick “/SAFEBOOT” in the “BOOT.INI” tab, apply the change and reboot.

 

3. Use DriverCleaner and choose the components belonging to your brand.

 

4. Disable safe mode again (untick the option in msconfig) and reboot.

 

5. Install the new driver and reboot.

 

________________________________________________________________

 

23 April 2007: Initial release

29 April 2007: Added a “How a video card works” teaser :p


 

 

Did you spend a long time on the guide? :ohmy::ohmy:
