84 Comments

  • yacoub - Tuesday, January 3, 2006 - link

    quote:

    The Athlon 64 X2 4800+ actually is faster in the Splinter Cell: CT benchmark without anything else running, but here we see a very different story. Although its 66 fps average frame rate is reasonably competitive with the Presler HT system, its minimum frame rate is barely over 10 fps - approximately 1/3 that of the Presler HT.


    Yet no mention of the Max, where the 4800+ utterly trounces the two Intel chips. Does Max not matter (in which case why bother listing it), or does it matter but you just neglected to mention that (whether on purpose or by accident)?
  • jjunk - Tuesday, January 3, 2006 - link

    quote:

    Yet no mention of the Max, where the 4800+ utterly trounces the two Intel chips. Does Max not matter (in which case why bother listing it), or does it matter but you just neglected to mention that (whether on purpose or by accident)?


    It's right there in the chart. As for further discussion, it's not really necessary. Screaming frame rates might look good on a chart, but they don't help gameplay. A 10 fps minimum will definitely be noticeable.
  • IntelUser2000 - Sunday, January 1, 2006 - link

    quote:

    When we do receive the new motherboard, we will take a look at power consumption once more to get an idea of the final state of Intel's 65nm power consumption, but until then, we don't want to draw any conclusions based on what we've seen.

    I don't like that paragraph. It makes it sound like Presler is all there is to 65nm power consumption, and it will make people judge 65nm based on Presler, since that's the first CPU on 65nm.

    In fact, it's not that simple. Taking a CPU that's on a certain process, like Smithfield, and putting it on a smaller process won't mean an instant 40-50% decrease in power consumption. That's called a dumb shrink. The reason Northwood had significantly lower power than Willamette was that Northwood was optimized for lower power consumption.

    A CPU that runs well at 130nm may do badly at 90nm and even worse at 65nm, for example. Presler was said not to be Intel's main focus, and Intel moved its design teams to Conroe, so the people who were supposed to be optimizing Presler for 65nm all went away and Presler got just a dumb shrink.

    Sleep transistors were an optional feature at 65nm, not required, so Presler may not have them.
  • IntelUser2000 - Monday, January 2, 2006 - link

    Why use DDR2-667 with 5-5-5-15 timings? Most DDR2-667 can do 4-4-4-8 (or thereabouts). This is going to skew the results in AMD's favor, since the DDR400 used is at the lowest latency possible.

    In reality, nobody is going to use DDR400 at 2-2-2-7 latency or DDR2-667 at 4-4-4-8 latency. Nobody I have ever heard of outside the internet uses RAM at those timings.

    Anandtech should either benchmark them all at JEDEC timings or use low-latency settings for all of them. I understand they want to be sure the new test system works properly, but using low-latency RAM only for the comparison system is just not fair.

    The JEDEC timing for DDR400 is 3-3-3-8. Where is your DDR400 advantage over DDR2 now?
  • hans007 - Sunday, January 1, 2006 - link

    I think the 9xx series is a big improvement over the 8xx.

    I have an 8xx myself, the 820, which is the lowest power. The leakage is exponential, so the 955 is going to draw a much higher amount than, say, a 920 will.

    I bet the 920 will be a half-decent CPU drawing maybe only 70 watts, which isn't TOO terrible in the grand scheme of power. The 920 would only run at 2.8GHz and have a not-as-high leakage percentage, so I think it will be the one to get.

    True, Intel is not better yet, but they are getting there. And their dual cores still cost less.

    I also think that Intel should be commended for writing the SMP code for Q4. That is the Doom 3 engine, which will go into a LOT of games. And since it speeds up the AMD chips as well, it is a free upgrade for everyone. Sure, it makes up for a large deficiency in the Intel chips, but it is FREE.

    And it makes the really cheap 920/820 chips very price competitive, as the 820 chips are very, very cheap - about $150 on eBay (which is probably near what OEMs get them for in bulk, thus the rampant Dell 820 deals going on).
  • jjmcwill - Saturday, December 31, 2005 - link

    I do professional software development for a living, using Visual Studio 2003 to build the code for a product I work on. We have over 1000 .cpp files and over 1500 header files.

    On my work box, an HP xw6200 workstation with a single 3.0GHz Xeon CPU, 2MB L2 cache, and 1GB RAM, compilation takes 10:45 for a single project in our solution. On my home system, a Socket 754 Athlon 64 3000+ with 1.5GB RAM, compilation takes 7:30. Both systems build the code off of the exact same external IDE hard drive in a FireWire enclosure. I use it to carry all my work back and forth between work and home.

    At some point we'll be investigating make to launch parallel compiles, and I would be VERY interested in seeing dual-core CPU comparisons which include compilation benchmarks: using Visual Studio 2003 under Windows, using make -j2 or make -j3 under Windows, and using gcc/make under Linux.

    Based on what I've seen with the Xeon, I'm leaning toward an AMD X2 or dual core Opteron for my next upgrade.


    Thanks.

  • Calin - Tuesday, January 3, 2006 - link

    I think that an Extreme Edition CPU (while much more expensive) would give better results with hyperthreading enabled than a simple Pentium D and maybe even than an Athlon64 X2 while doing several threads of compile.
  • Brian23 - Saturday, December 31, 2005 - link

    The second valuable post in this thread.

    I own a X2 3800 and I'm pleased with the results anand posted. I won't need to upgrade for a while.

    I'm looking forward to AMD implementing something similar to Sun's design: multiple threads running simultaneously. It shouldn't be that hard to do. It's just adding GPRs and a little logic that controls the thread contexts.
  • Missing Ghost - Saturday, December 31, 2005 - link

    Some other web sites report that the cpu becomes too hot with the stock heatsink.
  • Gary Key - Saturday, December 31, 2005 - link

    quote:

    Some other web sites report that the cpu becomes too hot with the stock heatsink.


    The initial press release kits that contained the Intel D975XBX motherboard had an issue that created higher than normal idle/load temperatures. We have new boards on the way from Intel. I can promise you that the first results shown in other 955EE reviews do not occur on the 975x boards from Gigabyte and Asus, nor will it occur on the production release Intel D975XBX. I highly recommend a different air cooling system than the stock heatsink but most of the reported results at this time are incorrect.
  • Aenslead - Saturday, December 31, 2005 - link

    As J.J., from Spider-Man would say:

    "Crap, crap, mega-crap!" and then toss it away.
  • ElJefe - Saturday, December 31, 2005 - link

    Well, it does move very fast in games. That is nice to see, finally.

    It would be great if the overall power draw numbers were shown, as on Tom's Hardware. Even there they showed a 90 watt difference between the 4800+ and the new 65nm chip, and that wasn't on the overclocked one. The overclocked one showed 150 more watts of draw.
  • Viditor - Saturday, December 31, 2005 - link

    quote:

    well it does move very fast in games. that is nice to see finally

    Agreed...if it weren't for the X2, this would be an excellent chip by comparison!
  • Betwon - Friday, December 30, 2005 - link

    Now Anandtech begins to learn the truth. There is still much about CPUs that Anandtech needs to learn.
    quote:

    . Through some extremely clever and effective engineering, Prescott actually wasn't any slower than its predecessors, despite the increase in pipeline stages.


    The results of the tests are simple and clear, but the reasons are complex.

    In past years, Anandtech made many mistakes about the correct reasons.
  • bldckstark - Monday, January 2, 2006 - link

    You do realize that none of this stuff is very important, right? Both chips work well. Nobody should be criticized for buying either one of them.
    I love my FIVE computers, but making sure my wife and kids are healthy and happy is way more important than any electronic device, especially just one piece of it.
    Your damaging and hostile statements make it appear as if you have forgotten this, and as if the most important thing in the world is making all of us geeks think Anandtech is not perfect. News update - WE ALL KNOW THAT! We still like it.
  • bob4432 - Friday, December 30, 2005 - link

    Why don't you do the gaming benchmark with the BF2 fps cap unlocked? It appears that it is just hitting its built-in cap with both the FX-57 and the P955 EE 3.46 CPUs.
  • Spacecomber - Friday, December 30, 2005 - link

    I believe that they are using the timedemo feature of the game and that the frame rate max doesn't affect this. It would be nice to see more than just average frame rates reported for games, though. At least a range should be mentioned and maybe a standard deviation.

    Space
  • Betwon - Friday, December 30, 2005 - link

    We see a test where the average fps of the PD is slightly less (about 1-2%) than the fps of AMD's, but the PD's fps is more stable than AMD's.

    In the case where the average fps of NetBurst is better than the average fps of the K8, the test shows that NetBurst is more stable than the K8.
  • Betwon - Friday, December 30, 2005 - link

    The test isn't bf2.
  • bob4432 - Friday, December 30, 2005 - link

    Any link you could give me on how to run the timedemo from within BF2? Is this new with the 1.12 patch?

    thanks
  • JarredWalton - Friday, December 30, 2005 - link

    See above post. The 3800+ OC article has the BF2 benchmarks/tools in it.
  • bob4432 - Friday, December 30, 2005 - link

    Thanks, I had just found that. Excellent tool ;). What is the difference between average fps and actual fps?
  • Spacecomber - Friday, December 30, 2005 - link

    If you need more direction on how to go about creating and running a timedemo in BF2, take a look at this article over at overclockers.com.au (http://www.overclockers.com.au/article.php?id=3841...).

    The timedemo records the time it takes for each frame to be rendered over the course of the demo being run. It sums these times and divides by the number of frames to come up with an average. You end up with just one number standing in for a rather large collection of data. Some sites, such as HardOCP, try to show more than just an average, usually by presenting a graph of the framerates over the length of the timedemo. This can be helpful, because when you are trying to evaluate how well a particular hardware setup will work with your favorite game, you really are looking to see whether it will maintain playable minimum framerates at the resolution and graphics settings that you want to use. An average alone only gives you a rough idea about this, though it does give you a quick and dirty way to compare different video cards in the same game setting.

    If you create and run a Battlefield 2 timedemo and look at the complete results, you'll see how very wide the range of framerates is. For example, running the timedemo, I have gotten an average of 50 fps, but the range is from 2 to 105 fps, with a standard deviation of 12.3. Graphing out the individual frame rates will let you see how often the frame rates drop below 20 fps, for example, which many would consider too low for online gaming.
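    The statistics described above (average, minimum/maximum range, standard deviation) are easy to pull out of the timedemo data with a short script. Here's a minimal sketch in Python, assuming you already have the per-frame fps values as a list; the sample numbers are made up for illustration:

```python
import statistics

def summarize_fps(fps_values):
    """Reduce per-frame fps readings to the summary statistics
    discussed above: average, min/max range, standard deviation."""
    return {
        "average": sum(fps_values) / len(fps_values),
        "minimum": min(fps_values),
        "maximum": max(fps_values),
        "stdev": statistics.pstdev(fps_values),
    }

# Made-up sample: mostly playable rates with one bad dip and one spike.
sample = [55, 60, 48, 2, 105, 50, 45, 52, 58, 25]
stats = summarize_fps(sample)
print(stats["average"], stats["minimum"], stats["maximum"])  # 50.0 2 105
```

    The point of the min and stdev is exactly what's argued above: this made-up run averages 50 fps, yet its minimum of 2 fps is what you'd actually feel in-game.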

    Here is a graph of a BF2 timedemo (http://www.sequoyahcomputer.com/Analysis/BF2memory...). It's for the data that gave me the average of 50 fps that I mentioned previously. Although 50 fps sounds like an OK average, looking at the graph, you can see that many might consider these settings on this hardware to be barely playable.

    Space
  • bob4432 - Saturday, December 31, 2005 - link

    Thanks, what program did you use to graph the data?
  • Spacecomber - Saturday, December 31, 2005 - link

    The full results of the timedemo are saved in a csv file, timedemo_framerates.csv, which can be opened with a spreadsheet program. I used the spreadsheet in OpenOffice to view the data and eliminate the framerates that are erroneously recorded before the actual gameplay demo has begun (they are easy to recognize, since they are at the beginning of the data and unnaturally high), and I also used it to graph the data.
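    That cleanup step can also be scripted instead of done in a spreadsheet. A rough sketch in Python; the file layout (one fps value per row) and the 200 fps sanity threshold are assumptions for illustration, not anything the game documents:

```python
import csv

def load_and_trim(path, sane_max=200.0):
    """Read per-frame fps values from a timedemo csv and drop the
    leading run of unnaturally high readings recorded before the
    actual gameplay demo begins."""
    with open(path, newline="") as f:
        fps = [float(row[0]) for row in csv.reader(f) if row]
    start = 0
    # Skip leading frames until the readings drop into a sane range;
    # spikes later in the run are kept, since those are real data.
    while start < len(fps) and fps[start] > sane_max:
        start += 1
    return fps[start:]
```

    The trimmed list can then be graphed or fed into whatever summary statistics you like.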

    Space
  • JarredWalton - Friday, December 30, 2005 - link

    I believe Anand is using the same benchmark that I linked in my overclocking article (http://www.anandtech.com/cpuchipsets/showdoc.aspx?...). He's probably running the 1.12 version now, which would account for the slightly lower scores than what I got with the 1.03 version and demo files. BF2 is VERY GPU limited, so even at 1024x768 you will start to hit FPS limits on high-end systems. You can see in the above page how FPS scaled with CPU speed on an X2 3800+ chip, and I only improved average frame rates by 18% with a 35% overclock at 1024x768. That dropped to 8% at 1280x1024 and less than 4% at 1600x1200 and above.
  • danidentity - Friday, December 30, 2005 - link

    Has there been any official word on whether or not 975X will support Conroe?
  • coldpower27 - Friday, December 30, 2005 - link

    A 975X Rev 2.0 is probably needed. The i965 chipset series will support it for sure, though, as they are rumored to be launched simultaneously.
  • Shintai - Friday, December 30, 2005 - link

    You're gonna need i965, I bet, especially if Conroe is going to use a 1333MHz bus.

    However, Merom (the mobile counterpart of Conroe) should fit in Yonah's socket.
  • Beenthere - Friday, December 30, 2005 - link

    Every hardware site that has tested the power consumption and operating temps of Presler knows full well this is a 65nm FLAME THROWER, almost making the P4 FLAME THROWER look good by comparison. "Normal" operating temps of 80C are OUTRAGEOUS, as is equal or higher power consumption than the FLAME THROWING P4 series. And as the benches show, this is a Hail Mary approach by Intel to baffle the naive with B.S. No one with a clue would touch this inferior CPU design. And to add insult to injury, after the paper launch - when they are actually available for purchase in Feb. or later - the asking price is $999. Yeah, I'll run right out and buy a truckload of Preslers to use for space heaters in my house...
  • skunkbuster - Friday, December 30, 2005 - link

    cramitpal is that you?
  • coldpower27 - Friday, December 30, 2005 - link

    This is the Pentium Extreme Edition; of course its price is going to be $999 US.

    If you want cheaper Presler cores, wait for the Pentium D 920 to 950 line to come out in mid-January.
  • Betwon - Friday, December 30, 2005 - link

    The INQ says the Presler 920 will be about $240.

    It is very interesting that the PD 820 defeats the FX-57 in an SMP game.
  • phaxmohdem - Friday, December 30, 2005 - link

    quote:

    It is pretty much a toss-up at this point, but we'd recommend sticking with AMD for now and re-evaluating Intel's offerings when Conroe arrives.


    Let's recap: the X2 4800+ was ahead in most tests, and at worst could probably be called the 955 EE's equivalent....

    955EE = $999
    4800+ = ~$785

    Yeah, I'd definitely recommend "sticking with AMD for now and re-evaluating Intel's offerings when Conroe arrives."

    Did anyone else notice how the lowly 3800+ did better in most gaming scenarios?

    955EE = $999
    3800+ = ~$315

    Tasty :)
  • GhandiInstinct - Friday, December 30, 2005 - link

    LOL, honestly your post is the only necessary post here. It compares and contrasts the two perfectly in terms of which is the better buy, given the reader has seen all of the benchmarks in which the 4800+ beats the 955EE.

    Intel just can't win because of EGO.
  • Anemone - Friday, December 30, 2005 - link

    Kudos, because no matter where you sit personally, you seem to have called the shots fairly. I'd agree with the conclusion as well: you go either Conroe or A64, and the P4 is an overdue dead end. It performs well, but it is hot and uses lots of electricity to do so. Overclocking wasn't needed because, quite frankly, the X2 chips OC too, and you'd find they probably do it better.

    Socket M2 is, again, something you "should" wait for if you can, as is Conroe. These are heavy recommendations; you really would be very smart to wait for these two things. Barring that, given the better of two bad options (meaning you have to upgrade now when you should be waiting), AMD is the better choice, partially for the power consumption, partially for the "less of a dead end than a P4" issue.

    Still, heavy, heavy emphasis on "you should wait", as a complete changeover is going on with both AMD and Intel, and your ability to perform minor upgrades 1-2 years from now will depend on waiting patiently for a few more months.

    :)
  • JarredWalton - Friday, December 30, 2005 - link

    Socket M2 doesn't appear to be anything special. Why wait 6 months for a 5% performance boost and a RAM change? Just like waiting for Prescott ended up being much ado about nothing, M2 isn't going to be wildly different from today's 939 chips. Get a good socket 939 system with an X2 and SLI, and you should be set for at least 18 months.
  • Calin - Tuesday, January 3, 2006 - link

    I don't find SLI important - except for the possibility of running two top-of-the-line video cards. And increased speed won't come from higher RAM speed - not enough to be worth the wait, anyway.
    I just wonder how long Socket 939 will be kept around - considering that the value line is the cheaper Socket 754 (cheaper in having a single memory channel, so half as many lines to the memory banks). Or whether Socket 754 will be abandoned before Socket 939, or whether a single-channel DDR2 Socket M2 variant will appear.
  • nserra - Friday, December 30, 2005 - link

    Two cores on the same package is an excellent idea!

    Will AMD do that with M2?
    It could lower the dual-core price, and even at 90nm they could put two dual-core dies on the same package and build a 4-core processor (a fake one, but the 4 cores are there).
  • ViRGE - Friday, December 30, 2005 - link

    The problem with 2 physical cores is that you're forgoing any sort of on-die communication benefits by doing so. It's certainly cheaper for Intel to make things this way, but it's a poor way to go for performance, as it makes it harder for the cores to quickly send data to each other and share resources. It's certainly a valid solution (especially given how Intel didn't have any inter-core communication even when both cores were on the same die), but ultimately a combined die with inter-core communication is superior for performance and scaling.
  • Betwon - Friday, December 30, 2005 - link

    NO.

    The speed is still very slow for AMD - latency 101ns. It is even slower than the latency of RAM (5x ns - 8x ns).

    With so large a latency, we don't find any benefit for apps which communicate frequently between the 2 cores, but it will hurt performance.

    The best way for core communication is a shared L2 cache. The latency of Yonah will be very low, much faster than the Athlon X2 and Presler.
  • mlittl3 - Friday, December 30, 2005 - link

    Not to mention the crossbar switch would not be possible if the dies were separated. Remember AMD did dual-core the right way by bringing the memory controller on die and using the crossbar switch to switch memory communications between the two cores with little latency. If the dies were separated the crossbar switch would have to be moved off die and that would make the whole point of on-die memory controller, well, pointless really.
  • ricardo dawkins - Friday, December 30, 2005 - link

    Why recommend an S939 AMD chip when these chips are being phased out by M2 and the like, or am I crazy?
  • Calin - Tuesday, January 3, 2006 - link

    Because you can still find good processors for Socket 754. Socket 939 will become the "value" or "mid-range" socket for AMD, and not the premier one (like it is now). New chips will still come to Socket 939, but the top of the line will be the new M2 - so a new 939 now is a good investment that should be upgradable in a couple of years.
  • Griswold - Friday, December 30, 2005 - link

    Would you rather recommend presler when the next big thing will yet again bring a new socket?
  • ricardo dawkins - Friday, December 30, 2005 - link

    Are you dead sure Conroe will need a new socket? ...LGA775 will be with us for a few more years. Stop spreading FUD. BTW, I'm not an Intel fanboy, but I read a lot of news.
  • coldpower27 - Friday, December 30, 2005 - link

    No, you're correct; there are images of the Conroe processor showing that its pin-out is LGA775. I predict we will most likely ditch LGA775 when Intel ditches NetBurst FSB technology in favor of CSI in 2008.
  • JarredWalton - Friday, December 30, 2005 - link

    Conroe should be socket 775, but it appears that it will require a new chipset - possibly 965/Broadwater, but it might also be something else. I am almost positive that 945/955 *won't* support the next gen Intel chips, which is too bad.
  • michaelpatrick33 - Friday, December 30, 2005 - link

    The power draw numbers from other websites are nothing short of frightening for Intel. They have closed the performance gap with AMD's current X2 4800+, but at double the power draw. It is getting ridiculous that a 65nm processor uses more power at idle than a competitor's 90nm part draws at full load. Conroe is the true competitor to AMD in 2006, and it will be interesting to see the power numbers for the FX-60 and the new AMD socket early next year.
  • Spacecomber - Friday, December 30, 2005 - link

    I thought that part of the big news coming out in prior reviews of this chip was its overclocking potential. Not that anyone would necessarily buy this processor in order to overclock it, but it was suggestive of what the core was capable of.

    Unless I overlooked it, overclocking wasn't mentioned in this article.

    Space
  • Anand Lal Shimpi - Friday, December 30, 2005 - link

    I had some serious power/overclocking issues with the pre-production board Intel sent for this review. I could overclock the chip and the frequency would go up, but the performance would go down significantly - and the chip wasn't throttling. Intel has a new board on the way to me now, and I'm hoping to be able to do a quick overclocking and power consumption piece before I leave for CES next week.

    Take care,
    Anand
  • Betwon - Friday, December 30, 2005 - link

    quote:


    We tested four different scenarios:

    1. A virus scan + MP3 encode
    2. The first scenario + a Windows Media encode
    3. The second scenario + unzipping files, and
    4. The third scenario + our Splinter Cell: CT benchmark.

    The graph below compares the total time in seconds for all of the timed tasks (everything but Splinter Cell) to complete during the tests:

    AMD Athlon 64 X2 4800+ AVG LAME WME ZIP Total
    AVG + LAME 22.9s 13.8s 36.7s
    AVG + LAME + WME 35.5s 24.9s 29.5s 90.0s
    AVG + LAME + WME + ZIP 41.6s 38.2s 40.9s 56.6s 177.3s
    AVG + LAME + WME + ZIP + SCCT 42.8s 42.2s 46.6s 65.9s 197.5s

    Intel Pentium EE 955 (no HT) AVG LAME WME ZIP Total
    AVG + LAME 24.8s 13.7s 38.5s
    AVG + LAME + WME 39.2s 22.5s 32.0s 93.7s
    AVG + LAME + WME + ZIP 47.1s 37.3s 45.0s 62.0s 191.4s
    AVG + LAME + WME + ZIP + SCCT 40.3s 47.7s 58.6s 83.3s 229.9s


    We find that it isn't scientific. Anandtech is wrong.
    You should give the end time of the last completed task, not the sum of each task's time.

    For example: task1 and task2 run at the same time.

    System A only spends 51s to complete task1 and task2:
    task1 -- 50s
    task2 -- 51s

    System B spends 61s to complete task1 and task2:
    task1 -- 20s
    task2 -- 61s

    It is correct: System A (51s) is faster than System B (61s).
    It is wrong: System A (51s+50s=101s) is slower than System B (20s+61s=81s).
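    The arithmetic above can be sketched in a few lines of code; the numbers are the hypothetical ones from this comment, not from any real benchmark, and the wall-clock-equals-max shortcut assumes all tasks start together and run fully concurrently:

```python
def wall_clock(task_times):
    """All tasks start together and run concurrently, so the elapsed
    (wall clock) time is set by the slowest one."""
    return max(task_times)

def summed(task_times):
    """Adding per-task times double-counts overlapped work."""
    return sum(task_times)

system_a = [50, 51]  # task1 = 50s, task2 = 51s, run concurrently
system_b = [20, 61]  # task1 = 20s, task2 = 61s, run concurrently

# By wall clock, System A (51s) finishes before System B (61s)...
print(wall_clock(system_a), wall_clock(system_b))  # 51 61
# ...but summing flips the ranking: A looks slower (101s vs. 81s).
print(summed(system_a), summed(system_b))  # 101 81
```

    The two metrics rank the systems in opposite order, which is exactly the complaint being made.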
  • tygrus - Tuesday, January 3, 2006 - link

    The problem is that they don't all finish at the same time, plus the ambiguity of an FPS task running alongside.

    You could start them all and measure the time taken for all tasks to finish. That's a workload, but it can be susceptible to the slowest task being limited by its single-thread performance (once all the other tasks are finished, SMP is underutilised).

    Another way is to use tasks that take longer and run at a measurable and consistent speed.
    Is it possible to:
    * loop the tests with a big enough working set (to ensure repeatable runs);
    * determine the average speed of each sub-test (or runs per hour) while the other tasks are running and being monitored;
    * specify a workload based on how many runs, MB, frames, etc. are processed by each;
    * calculate the equivalent time to do a theoretical workload (being careful of the method)?

    Sub-task times/speeds can be compared to when they were run by themselves (single thread, single active task). This is complicated by Hyper-Threading and also by the multi-threaded apps under test. You can work out the efficiency/scaling of running multiple tasks versus one task at a time.

    You could probably rejig the process priorities to get better 'Splinter Cell' performance.
  • Viditor - Saturday, December 31, 2005 - link

    Scoring needs to be done on a focused window...
    By doing multiple runs with all of the programs running simultaneously, it's possible to extract a speed value for each of the programs in turn, under those conditions. The cumulative number isn't representative of how long it actually took, but it's more of a "score" on the performance under a given set of conditions.
  • Betwon - Saturday, December 31, 2005 - link

    NO! It is the time (time spent), not a speed value.
    You see:
    24.8s + 13.7s = 38.5s
    42.8s + 42.2s + 46.6s + 65.9s = 197.5s

    Anandtech's way is wrong.
  • Viditor - Saturday, December 31, 2005 - link

    quote:

    It is the time(spend time), not the speed value

    It's a score value... whether it's stated in time or on an arbitrary number scale matters very little. The values are still justified...
  • Betwon - Saturday, December 31, 2005 - link

    You don't know how to test.
    But you still say it is correct.

    We all need an explanation from Anandtech.
  • Viditor - Saturday, December 31, 2005 - link

    quote:

    You don't know how to test


    Then I better get rid of these pesky Diplomas, eh?
    I'll go tear them up right now...:)
  • Betwon - Saturday, December 31, 2005 - link

    I mean: you don't know how Anandtech ran the tests.
    The test method.
    What the data is.

    We only need the explanation from Anandtech, not your guess.

    Because you do not know it!
    You are not Anandtech!
  • Viditor - Saturday, December 31, 2005 - link

    Thank you for the clarification (does anyone have any sticky tape I could borrow? :)
    What we do know is:
    1. All of the tests were started simultaneously..."To find out, we put together a couple of multitasking scenarios aided by a tool that Intel provided us to help all of the applications start at the exact same time"
    2. The 2 ways to measure are: finding out individual times in a multitasking environment (what I think they have done), or producing a batch job (which is what I think you're asking for) and getting a completion time.

    Personally, I think that the former gives us far more usefull information...
    However, neither scenario is more scientifically correct than the other.
  • Betwon - Saturday, December 31, 2005 - link

    NO, 2 is wrong.

    We need to know the end time of all the tasks.

    The sum of each task's time will mislead.

    Because it cannot show the real time spent to complete those tasks. (The times are overlapped.)
  • Viditor - Saturday, December 31, 2005 - link

    quote:

    The sum of each task's time will mislead

    That's what I thought you meant... it's not misleading to me (nor to most of the other readers, I gather, since nobody else has come forward). If you want to know the time to complete all tasks, then just take the largest time number of whatever test you wish.
    The reason that the setup they used appeals to me is that it helps me understand how an individual application is affected under those conditions, and the totals give me a relative picture of each of the apps as a whole. They haven't said that the time listed in the "Total" is actually how long things took in reality; they said it was the total of the times.
    I understand that the difference between those two phrases is perhaps a difficulty that many have when dealing with a foreign language...

    In the future, you might want to be less confrontational with your questions...
    Phrases like "There are still many knowledge about CPU that anandtech need to learn" are considered quite inhospitable...
  • fitten - Saturday, December 31, 2005 - link

    No. What is being discussed here is "wall clock time" vs. a summation of execution times. You start a stopwatch at the instant you start your task bundle, and when the last task in the bundle is finished, you stop your stopwatch. That's the wall clock time. Measuring CPU utilization time is quite easily seen to be wrong: with two CPUs, two tasks may take 20s each to finish, but they may start and finish at the same time after 20s of wall clock time... not 20s + 20s = 40s (each task will see 20s of CPU utilization time, but those sets of 20s are used simultaneously - 20s on one CPU and 20s on the other CPU at the same time - for a wall clock finish time of 20s, not 40s).

    And you cannot simply take the largest time number. For example, suppose a task runs for 1s, is blocked by a second task which takes 10s, and then takes another 1s to finish. While 10s is larger than 2s, the wall clock time for this bundle is actually 12s (1s + 10s + 1s), not 10s or 2s.
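    That blocking scenario can be sketched the same way; the (start, end) schedule below is the hypothetical 1s + 10s + 1s example from the comment:

```python
# Hypothetical schedule from the comment, as (start, end) intervals
# in seconds: task1 runs 0-1, is blocked while task2 runs 1-11,
# then task1 finishes during 11-12.
intervals = [(0, 1), (1, 11), (11, 12)]

wall_clock = max(end for _, end in intervals) - min(start for start, _ in intervals)
largest_task = max(end - start for start, end in intervals)

print(wall_clock)    # 12 - not 10 (the largest task) and not 2 (task1's own time)
print(largest_task)  # 10
```

    With dependencies between tasks, the wall clock time follows the whole chain, so neither the largest single time nor any sum of CPU times recovers it.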
  • bldckstark - Monday, January 2, 2006 - link

    Ummmmm, all of the times you are screaming about are listed. You can work it out for yourself. Although, when you look at the concurrent timing for each app, you will find that the AMD posted a better score. Concurrent timing results -
    AMD 4800+ - 65.9s
    955EE no HT - 83.3s
    955EE with HT - 71.1s

    Consecutive times of course show a different picture, and most of all, SCCT is a wreck during all of this for AMD.

    I have to say, I can't remember when I last opened 4 huge memory- and CPU-hogging programs at exactly the same time I tried to play a game. These CPUs may be great at doing this many activities at once, but I can only do one thing at a time. Each of these programs would be started separately, and once they were on their way, I might start gaming. This is a great test, but not a realistic one.

  • Betwon - Friday, December 30, 2005 - link

    Your test of the SMP game, Quake 4:
    Your result is different from the result of the more detailed test from FiringSquad:
    http://www.firingsquad.com/hardware/quake_4_dual-c...

    We find that both HT and multi-core improve the fps. The P4 540 with HT shows about a 1x% improvement.

    We need your explanation. Why do you say that HT will not help in the SMP game Quake 4?

    And we do not find that the Athlon X2 shows a better improvement than the PD when they go from single-core work to multi-core work.

    Where are the benefits of on-die communication? 101ns latency? Why is it slower than the latency of memory? Is your cache2cache test software wrong?

    The test shows that:
    SMP on/SMP off, PD 840: 102.9 fps / 74.8 fps --> 37.6% improvement
    SMP on/SMP off, X2 3800+: 101.1 fps / 74.4 fps --> 35.9% improvement
    SMP on/SMP off, X2 4800+: 103.2 fps / 87.7 fps --> 17.7% improvement
    AMD test:
    http://www.firingsquad.com/hardware/quake_4_dual-c...
    Intel test:
    http://www.firingsquad.com/hardware/quake_4_dual-c...

    The improvement ratio of PD is better than that of athlonX2.
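    The percentages quoted above can be reproduced from the SMPon/SMPoff frame rates with a quick standalone check (not part of the original benchmark, just arithmetic):

    ```python
    # Speedup from enabling SMP, as a percentage over the SMP-off baseline.
    def speedup_pct(smp_on, smp_off):
        return round((smp_on / smp_off - 1) * 100, 1)

    print(speedup_pct(102.9, 74.8))  # PD 840   -> 37.6
    print(speedup_pct(101.1, 74.4))  # X2 3800+ -> 35.9
    print(speedup_pct(103.2, 87.7))  # X2 4800+ -> 17.7
    ```
    
    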
  • psychobriggsy - Saturday, December 31, 2005 - link

    > SMPon/SMPoff PD840 102.9 fps/74.8 fps --> 37.6% improvement
    > SMPon/SMPoff X2 3800+ 101.1 fps/74.4 fps --> 35.9% improvement
    > SMPon/SMPoff X2 4800+ 103.2 fps/87.7 fps --> 17.7% improvement

    Looks like the issue is an upper performance limit around the 103 fps mark that probably isn't caused by the CPU - e.g., GPU or something else.

    If it is a memory bandwidth issue (which should be easy to test for by using faster memory and running the tests again) then there isn't much that can be done. Then again, the Intel processor uses DDR2 so ...

    If the 4800+ improved by 36% like the 3800+ then it would achieve around 120fps.

    In the end it just shows that the lower-priced dual-cores are still a better deal ... especially as they can be overclocked quite nicely.
  • Viditor - Friday, December 30, 2005 - link

    quote:

    The improvement ratio of PD is better than that of athlonX2.

    I would hope so, since the patch was partially written by Intel...
    quote:

    the 1.0.5 patch mentions Intel by name as a collaborator with no word on AMD...While it isn’t optimized for AMD64, frame rates on a dual-core Athlon 64 X2 3800+ are 63 percent faster at 800x600 with threading enabled. The 4800+ also feeds back good gains

    http://firingsquad.com/hardware/quake_4_dual-core_...
  • Betwon - Saturday, December 31, 2005 - link

    PD840 139.1fps/83fps --> 67.6%
    The PD840 is 67.6 percent faster at 800x600 with threading enabled.

    67.6% > 63%

    The patch was partially written by Intel...?
    But the patch is excellent!

    This patch gives the biggest improvement of any SMP game patch.
    We cannot find another SMP game patch that improves game performance so much.

    Good quality code!
  • Viditor - Saturday, December 31, 2005 - link

    quote:

    But the patch is very excellent!

    Possibly, but Intel is well known for creating an imbalance in performance for their processors using software (e.g. the Intel Compiler). Most likely, future versions of the patch will correct for this. Either way, it really says less about the CPU than it does about the patch...
  • Betwon - Saturday, December 31, 2005 - link

    No.
    Don't you think that future versions of the patch will also be written by Intel?
  • Viditor - Saturday, December 31, 2005 - link

    quote:

    Don't You think that Future versions of the patch will be written by intel

    Doubtful (but who knows)...I can't see Intel spending 100s of millions with every developer (or even 1 developer) for the long term, just to keep tweaking their patches. It's just not a very smart long term strategy (and Intel is quite smart).
  • Betwon - Saturday, December 31, 2005 - link

    You are just guessing.

    We find that good quality code can provide better performance for both AMD and Intel.
    Intel can often benefit more, because Intel's performance potential is higher.

    Right now, you cannot find another SMP game that improves the fps of an SMP CPU so greatly.
    If you find one, please tell us.

    No one has found one.
  • Viditor - Saturday, December 31, 2005 - link

    quote:

    Intel can often benefit more, because the performance potential of Intel is high

    Now it's you who's guessing...
  • Betwon - Saturday, December 31, 2005 - link

    NO.

    It is true.
  • Viditor - Saturday, December 31, 2005 - link

    quote:

    It is true.

    OK...prove it!
  • Betwon - Saturday, December 31, 2005 - link

    For example:
    we saw a test (from AnandTech).
    With good quality code, AMD became faster than before, but Intel became much faster than before.
    They used Intel's compiler.
  • Betwon - Saturday, December 31, 2005 - link

    When not using Intel's compiler, AMD becomes slower.
  • Viditor - Saturday, December 31, 2005 - link

    quote:

    When not use the intel's compiler, AMD become slow

    I know you've often quoted from the spec.org site...
    I suggest you revisit there and look at the difference between AMD systems using Intel compilers and the PathScale or Sun compilers. In general, the Spec scores for AMD improve by as much as 30% when not using an Intel compiler...especially in FP.
    quote:

    The code produced by the Intel compiler checks to see if it's running on an Intel chip. If not, it deliberately won't run SSE or SSE2 code, even if the chip capability flags (avaialble [sic] through the 'cpuid' instruction) say that it can. In other words, the code has been nobbled to run slower on non-Intel chips

    http://www.swallowtail.org/naughty-intel.html
  • defter - Saturday, December 31, 2005 - link

    quote:

    I know you've often quoted from the spec.org site...
    I suggest you revisit there and look at the difference between AMD systems using Intel compilers and the PathScale or Sun compilers. In general, the Spec scores for AMD improve by as much as 30% when not using an Intel compiler...especially in FP.



    This is not true, for example:
    FX-57, Intel compiler, SpecInt base 1862:
    http://www.spec.org/osg/cpu2000/results/res2005q2/...
    FX-57, Pathscale compiler, 1745:
    http://www.spec.org/osg/cpu2000/results/res2005q2/...

    Opteron 2.8GHz, Intel compiler, SpecInt base 1837:
    http://www.spec.org/osg/cpu2000/results/res2005q3/...
    Opteron 2.8GHz, Sun compiler, SpecInt base 1660:
    http://www.spec.org/osg/cpu2000/results/res2005q4/...

    In SpecFP the Intel compiler produces slightly slower results, but the difference isn't 30%:
    Opteron 2.8GHz (HP hardware), Intel compiler, SpecFP base 1805:
    http://www.spec.org/osg/cpu2000/results/res2005q3/...

    Opteron 2.8GHz (HP hardware), Pathscale compiler, SpecFP base 2052:
    http://www.spec.org/osg/cpu2000/results/res2005q3/...

    Opteron 2.8GHz (Sun hardware), Sun compiler, SpecFP base 2132:
    http://www.spec.org/osg/cpu2000/results/res2005q4/...

    So let's see:
    Intel vs Sun compiler:
    - Intel compiler is 10.7% faster in SpecInt
    - Sun compiler is 18.1% faster in SpecFP

    Intel vs Pathscale compiler:
    - Intel compiler is 6.7% faster in SpecInt
    - Pathscale compiler is 13.7% faster in SpecFP

    It is quite surprising that Intel's compiler gives the best results for AMD's processors in many situations.
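    The head-to-head percentages above follow directly from the Spec base scores quoted in the comment; a quick check of the arithmetic (standalone script, not affiliated with spec.org):

    ```python
    # How much faster score a is than score b, as a percentage.
    def faster_pct(a, b):
        return round((a / b - 1) * 100, 1)

    print(faster_pct(1837, 1660))  # Intel vs Sun, SpecInt       -> 10.7
    print(faster_pct(2132, 1805))  # Sun vs Intel, SpecFP        -> 18.1
    print(faster_pct(1862, 1745))  # Intel vs Pathscale, SpecInt -> 6.7
    print(faster_pct(2052, 1805))  # Pathscale vs Intel, SpecFP  -> 13.7
    ```
    
    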
  • Betwon - Friday, December 30, 2005 - link

    edit:
    Why is it slower than the latency of memory? 101ns is much more than 5x ns. Where is the 'on-die' communication? Is your test program wrong?
  • Viditor - Friday, December 30, 2005 - link

    Thanks Anand!

    I don't know if you'll have time, but one of the things lacking in all of the other reviews of the OC XE955 is a comparison to an OC X2 4800...
    Speculation is quite rife, and it would be a good comparison IMNSHO.

    Cheers!
  • Gigahertz19 - Friday, December 30, 2005 - link

    Intel's back...back again...backkkkkkk again..backkkkkkkk again...du dah duh da
  • yacoub - Tuesday, January 3, 2006 - link

    If by "back" you mean finally (barely) able to compete with existing AMD performance, then yes. ;P
