
[Solved] Intel or AMD?


Recommended posts

I was wondering if anyone can tell me which of these systems is the better one:

 

INTEL

Asus Rampage II GENE, X58

Intel Core™ i7 Quad Processor i7-920

Corsair Dominator DHX+ DDR3 1600MHz 6GB

Gainward GeForce GTX 275 896MB PhysX

8390 kr

Or

 

AMD

XFX Radeon HD 4890 1GB "BLACK" GDDR5

Asus M4A79T Deluxe, Socket-AM3

AMD Phenom II X4 955 Black Edition

OCZ Platinum XTC DDR3 1600MHz 6GB KIT

7600 kr
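For reference, the gap between the two quoted prices is easy to put in perspective. A quick sketch, using only the figures from the lists above:

```python
# Price gap between the two quoted builds (prices from the lists above).
intel_nok = 8390  # Intel build
amd_nok = 7600    # AMD build

diff_nok = intel_nok - amd_nok
pct_dearer = round(diff_nok / amd_nok * 100, 1)

print(diff_nok)    # 790
print(pct_dearer)  # 10.4 -> the Intel build costs roughly 10% more
```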

Edited by Hunex

Hi!

The topic title of this thread does little to describe its content, and is therefore not a good title. The better and more descriptive the topic title is, the easier it is for others to understand what the thread is about, and the easier it will be to reach the right forum user with the right answer. I therefore ask you to change the topic title. Please try to keep this in mind the next time you start a thread, and read what our netiquette says about poor use of topic titles.

Remember that a good topic title should describe or summarize what problem you have - not just that you have a problem. Nor should a good topic title consist solely of a product name.

Use the edit button in the first post to change the topic title.

(This post will be removed once the topic title is changed. Do not comment on this post, but feel free to report it once the title has been changed, and it will be removed.)


It will depend on what you're going to play, and at what resolution.

In both setups the graphics card is the bottleneck, unless you're going to play at 1024x768 with low graphics settings :p

Both setups would also handle twice as much GPU power without the processors becoming a bottleneck, so if you're ONLY going to game and not do other CPU-intensive things, it's the graphics card that will be decisive.

For the best value, you could go for 2x HD 4770 in CrossFire on that AMD setup. That will give the best results in just about every game at 1680x1050.

Here is an article from Tom's Hardware about 4770 CF.

By the way, I have a friend with exactly the same setup as the AMD one you listed, just with different (1333) RAM. He overclocked the CPU to 3.8 GHz on the stock cooler, and the graphics card to 900 (or 950?) MHz core and 1000 MHz (x4 for GDDR5) memory. Both stable and without noticeable fan noise at full load. Now that's value ;)

The HD 4770 also overclocks well (and stays quiet), even in CrossFire according to Tom's Hardware. It's easy to do in CCC.
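The "1000 MHz (x4 for GDDR5)" figure above reflects that GDDR5 transfers four data words per memory clock. A rough sketch of what that means for bandwidth; the 256-bit bus width of the HD 4890 is assumed here:

```python
# Effective data rate and peak bandwidth of a GDDR5 card clocked
# as described above (1000 MHz memory, quad data rate).
def effective_rate_mtps(mem_clock_mhz: int, multiplier: int = 4) -> int:
    """GDDR5 transfers 4 data words per command clock."""
    return mem_clock_mhz * multiplier

def bandwidth_gbps(mem_clock_mhz: int, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: data rate times bus width in bytes."""
    return effective_rate_mtps(mem_clock_mhz) * bus_width_bits / 8 / 1000

# HD 4890 overclocked to 1000 MHz memory on its 256-bit bus:
print(effective_rate_mtps(1000))  # 4000 (MT/s)
print(bandwidth_gbps(1000, 256))  # 128.0 (GB/s)
```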

Edited by GullLars
You could also go for the Sapphire 4850X2, which is basically two fairly good cards fused into one! Btw, quad (X4) processors aren't much use for gaming :)

 

That quads aren't suited for gaming is old news. We've had this same discussion 1000 times; if you disagree, search before you "argue" :p

 

 

The 4850X2 gives a lot for the money, but makes a lot of noise.

 

 

4770 in CF gives the most bang for the buck atm.


Quads are great for gaming, but not all games manage to make use of all the cores.

The 720BE is clearly an alternative to the 955, and it is cheaper.

720BE + 2x 4890 CF will beat the Ci7 + 275 quite soundly, especially at high resolution with the eye candy turned on.

If you're going to game at 1680x1050 or lower without max AA/AF, 2x 4770 in CF gives the most for the money.

If you're not going to do hardcore gaming, 720BE, 8GB of 1333 MHz DDR3, and 2x 4770 in CF is clearly where you get the most value.

 

The reason I'm not suggesting the 4850X2 is that it performs only about the same as, or worse than, 2x 4770 if both are overclocked, and at stock it varies from 0-10% more performance depending on the game and resolution, but at a considerably higher price.

 

 

If you're going to overclock moderately, 720BE + 2x 4770 will beat the Ci7 + 275 in most games at 1680x1050 without max AA/AF. You can also save a bit by going for 1333 MHz RAM instead of 1600, since there won't be much practical difference.

(read the Tom's Hardware review before criticizing me ;))

Edited by GullLars
I would definitely go for the INTEL setup! A slightly better graphics card, and a better processor.

 

The ATI card is DX 10.1

The Nvidia card is DX 10

 

But wouldn't the ATI card do better in future games, since it's DX 10.1?

Edited by Hunex
No.

 

But what about the new games that are coming with DX 10.1, and DX 11 when that time comes?

Surely the ATI card with DX 10.1 will do best then.

 

I've heard the ATI card crushes Nvidia in HAWX, since that game is DX 10.1.

Let's be honest, DX10.1 brings a lot of new features that don't really matter much, if at all, and you can read all about them here. That said, there is one that will matter a lot, contrary to what MS people say. This magic feature is the multi-sample buffer reads and writes (MSBRW). If you are wondering how you missed that big one in the feature list, well shame on you, read better next time.

 

What MSBRW does is quite simple, it gives shaders access to depth and info for all samples without having to resolve the whole pixel. Get it now? No? OK, we'll go into a bit more detail. DX10 forced you to compute a pixel for AA (or MSAA) to be functional, and this basically destroyed the underlying samples. The data was gone, and to be honest, there was no need for it to be kept around.

 

Games like Quake3 would do a lighting pass, then a shader pass, and another lighting followed by shaders and so on until everything was rendered right. This was quite precise but also quite slow. Dog slow.

 

To optimize around this, a technique called deferred shading was invented. This does all the lighting passes followed by a single shader pass. If you have five passes, you can basically skip four trips through the shaders. The problem? Because the pixel isn't fully computed, just a pile of AA data, there is no way for it to be read. This is horribly simplified, but I don't want to go into the low-level stuff here; go look it up if you really care.

 

What this meant is that you can't turn on AA if you have deferred rendering unless you do supersampling, which is rendering at a higher resolution and sampling down. This is unusably slow, so it went out the door, meaning that if you were designing a game, you picked speed in the form of deferred shading, or beauty in the form of AA. Most DX10 games will go for speed, meaning the AA hardware will sit more or less idle.
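The speed-versus-beauty trade-off described above can be put in rough, purely illustrative numbers: supersampling shades every sample of every pixel, while MSAA shades once per pixel and only needs per-sample work on edge pixels (the 10% edge fraction below is an assumption for illustration):

```python
# Illustrative cost model: fragments shaded per frame for 4x supersampling
# versus 4x MSAA with per-sample reads (what DX10.1 MSBRW enables).
def ssaa_shaded_fragments(width: int, height: int, samples: int) -> int:
    # Supersampling shades every sample: effectively a higher-res render.
    return width * height * samples

def msaa_shaded_fragments(width: int, height: int, samples: int,
                          edge_fraction: float = 0.1) -> int:
    # MSAA shades once per pixel; only edge pixels (assumed ~10% here)
    # need extra per-sample shading work.
    full = width * height
    edges = int(full * edge_fraction) * (samples - 1)
    return full + edges

w, h = 1680, 1050
print(ssaa_shaded_fragments(w, h, 4))  # 7056000
print(msaa_shaded_fragments(w, h, 4))  # 2293200 -> roughly a third the work
```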

 

DX10.1 brings the ability to read those sub-samples to the party via MSBRW. To the end user, this means that once DX10.1 hits, you can click the AA button on your shiny new game and have it actually do something. This is hugely important.

 

The first reaction most people have is that if a game is written for DX10, then new 10.1 features won't do anything; AA awareness needs to be coded in the engine. That would be correct, but we are told it is quite patchable, i.e. you will probably see upgrades like the famous 'Chuck patch' for Oblivion. Nothing is guaranteed, but there is a very good chance that most engines will have an upgrade available.

 

In the end, DX10.1 is mostly fluff with an 800-pound gorilla hiding among the short cropped grass. MSBRW will enable AA and deferred shading, so you can have speed and beauty at the same time, not a bad trade-off.

 

Since NV has not done the usual 'we can do it too' song and dance when they are being beaten about the head and neck by a bullet point feature they don't have, you can be pretty sure they can't do it.

 

Close looks at the drivers, and more tellingly no PR trumpeting that they will have it out before the release of SP1, almost assuredly mean that it will never happen. If you have a G8x or a G9x card, the only feature of DX10.1 you will miss is the important one.

 

The difference is there, but I doubt you'll notice it.


Ok, but will a DX 10.1 card work better with DX 11 than a DX 10 card, or will there be no difference?
