38 Comments

  • Idoxash - Monday, April 11, 2005 - link

    I'm glad to see RAMBUS back in the fight :) Maybe this time they will prove to everyone that they have the better tech that's not dino old.

    --Idoxash--
  • Viditor - Monday, February 14, 2005 - link

    Derek - RDRAM is higher latency than SDRAM for anything less than 800MHz (which was essentially unavailable due to poor yields). Also, DDR had far lower latency than even the 800MHz...
    So, yes...the RDRAM roadmap would have been bad for AMD.
    Even with the on-die memory controller, latency appears to be far more critical to Athlon's performance than bandwidth (and of course the opposite is true for Netburst chips).
  • ceefka - Monday, February 14, 2005 - link

    #17 Would Rambus dare to sue Sony? That'll be the day.
  • srg - Monday, February 14, 2005 - link

    At first, I just wasn't impressed with Cell; now, with Rambus on the case, I'm positively against it.

    srg
  • DerekWilson - Monday, February 14, 2005 - link

    Though the discussion is probably over by now, I'm gonna add my two cents and say that AMD would definitely have benefited from RDRAM at the time. When it came out it was higher bandwidth and lower latency than PC100 or PC133. Doing these two things at the time when AMD still relied on the northbridge as a memory interface would have increased performance -- it's one more link in the chain that's stronger (or faster and wider as the case may be).

    If you don't think I'm right, look at nForce2. Improved bandwidth to the system increased performance. At the time, latency was also a larger issue, but now that AMD has an integrated memory controller it's not as much of a problem.

    Saying RDRAM would have been "bad" for AMD in terms of performance is probably not true. As far as business goes, AMD made a good decision not to support RDRAM.

    Going back to what many have said before, with RAMBUS, the technology was not the problem (it was very good) as much as their business philosophy (which was horrifically bad).

    But what about XDR + K10 ??

    It'll never happen (and thankfully so, if I do say so myself), and the bandwidth would likely be overkill for even the next AMD solution. Still, it's interesting to think about.
  • Viditor - Thursday, February 10, 2005 - link

    Thanks for the reply Jarred.

    I absolutely agree with you that RDRAM was a FAR better fit for the Netburst architecture (which is why AMD never embraced it; it would have been terrible for the Athlon architecture).
    On price however...IIRC, the price never came down until Intel began subsidizing it (I believe they spent ~$500 million doing so). The inherent problem wasn't market acceptance, it was:
    1. DDR is made on the same fab lines as SDRAM, and the manufacturers could actually determine which kind of memory they wanted at the last stage of assembly
    2. RDRAM required all new testing equipment, while DDR could continue using SDRAM testing equipment
    3. The bin-splits for higher-clocked RDRAM (800 MHz) were extremely poor (~15% IIRC), and the lower-clocked RDRAM wasn't as good as the DDR.

    These are all the main reasons (IMHO) that Intel abandoned RDRAM, because from a business standpoint (all things being equal), RDRAM was perfect for them and bad for AMD.

    As for DDR2, yields are still a bit low while they ramp up. But I don't disagree that they are milking it...

    Your point on 1MB/1066 is well taken, and I was quite surprised that Intel went with the 2MB cache choice (a VERY expensive decision!). I can only assume that they have been running into production problems...
    All that said, I don't see Intel being very competitive on the performance side until next year (JMHO) when Conroe is released. My impression is that they are (wisely) pushing that release as hard as they can and I wouldn't be surprised if it's quite early.

    Cheers, mate!
  • retrospooty - Thursday, February 10, 2005 - link

    RAMBUS = CACA ;)
  • JarredWalton - Thursday, February 10, 2005 - link

    11 - Sorry to not get back to you earlier on this, Viditor. What I said about Rambus and Pentium 3 not going well together is very accurate. Forget the price for a minute. The P3 could only have something like 2 outstanding (unfulfilled) RAM requests at the same time. I think the chipsets could also only support 4 open banks of memory at a time, so the fact that RDRAM could support up to 32 open banks went completely unused.

    P4, on the other hand, could handle more open banks/pages, more outstanding requests, and it had deeper buffers. Up until the 875P chipset, none of the DDR chipsets were actually able to surpass 850E for performance - and even then not in all areas. If Intel had stuck with RDRAM, PC1200 and even PC1600 would have surfaced, and it would be interesting to see performance of a P4 system with PC1600 RDRAM instead of PC3200 DDR.

    If you look at historical price trends, once production of RDRAM ramped up, there was actually a brief period where it was slightly cheaper than DDR memory. Then Intel released DDR chipsets and abandoned RDRAM, demand for RDRAM dried up, and the prices climbed back up. Anyway, look at DDR2 and tell me that memory manufacturers aren't milking new technologies for all they can.

    Shoulda, coulda, woulda... I don't hold any ill will towards Rambus, and if they can actually design a product that noticeably outperforms competitors, more power to them! In reality, of course, caches and such make the memory subsystem less of an impact on performance in a lot of applications. That's why FSB1066 is not doing much for Intel right now: the only official support is with CPUs that have 2MB of cache. I think a 1MB (or 512K) cache with FSB1066 would show more of a benefit. Maybe not enough to make it truly worthwhile, but more than the 3% or so that we saw with the P4XE 3.46.
  • retrospooty - Thursday, February 10, 2005 - link

    ICE 9

    All roads still lead to Rambus? You ain't been around long, have you?

    As I said before... We have been hearing this for years. R&D and unreleased products mean nothing. Rambus is full of it, and cannot be believed until there is a shipping product, and it's independently benchmarked and isn't 10x more expensive than the competition.

    It's one thing to have specs and partnerships; it's totally another thing to ship working product at a price that consumers will be able to buy in mass quantities.

    Rambus has proven inept at the latter.
  • Viditor - Thursday, February 10, 2005 - link

    Ice9 - Answer truthfully now, are you a Rambus shareholder? :-)
  • Ice9 - Thursday, February 10, 2005 - link

    #17, first of all, tech companies sue each other all the time.

    Intel has sued far more companies in its lifetime than Rambus has. Rambus to date has sued what, 5 companies? All for the same patent infringement?

    How many companies has Intel or AMD sued over its patents?

    Honestly, if you look at how many open patent lawsuits there are for any given technology company, you'll see that Rambus is actually a tame little kitten by comparison. But people love to hate Rambus because they have the better (and more threatening) technology, so it's smear-smear-smear until they hopefully go away.

    Well, they aren't. So either embrace the superior technology or let the memory manufacturing cartels tell you that you need the slower stuff that costs more :)
  • Ice9 - Thursday, February 10, 2005 - link

    Rambus is going to be all over the place with Cell. They're using 3 key technologies for Cell, all invented by Rambus.

    XDR (octal data rate memory), Redwood chip-to-chip interconnects, and FlexPhase - which gets around that pesky equal-trace-length limitation that's been dogging DRAM for years.

    Thus far, there's simply nothing coming out of the lazy memory manufacturers that make up JEDEC to compete with it. And if they decide to try, they better make sure they steer clear of Rambus patents :)

    Oh, and yeah, none of it is vaporware either, at least not on Rambus' side. Toshiba has been sampling XDR for quite some time, and the interconnects have been available for even longer.

    All roads still lead to Rambus.
  • Viditor - Thursday, February 10, 2005 - link

    Tujan - "Why leave this out.Hyperthreaded software will remain to those that can pay for the higher priced equipment"

    Hyperthreading is a way of simulating dual cores without all of the execution resources to actually process two threads at once. This is beneficial to Intel because they have a problem keeping the pipes fed. However, real dual cores use exactly the same software as HT, but they can process it completely.
  • Viditor - Thursday, February 10, 2005 - link

    March...what retro said is correct.

    Live - "It looks to me that AMD:s lack of production capacity will really hurt us this year"

    I doubt it, though on its face it might appear so. The missing element is the 90nm ramp...
    For example, at 130nm you get 186 candidate 3500+ dice per wafer, and at 90nm you get 329 (a rough sketch of that math is at the end of this post). At the moment, they are only halfway through their ramp and accelerating. Also, we are entering the low-demand period...
    This might also explain why AMD is delaying most of their desktop dual cores until 2006 (when Fab36 comes on line). 90nm dual cores will be about the same size as their 130nm single-core counterparts.
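
    For anyone curious, here is a rough sketch of that die-per-wafer math (a Python toy, purely illustrative; the ~144 mm^2 and ~84 mm^2 die areas and the 200mm wafer are my own assumptions, not figures from this post):

        import math

        # Crude gross-die estimate: usable wafer area divided by die area,
        # minus a term for the partial dice lost around the wafer edge.
        def dice_per_wafer(wafer_diameter_mm, die_area_mm2):
            wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
            edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
            return int(wafer_area / die_area_mm2 - edge_loss)

        # Assumed die sizes: ~144 mm^2 at 130nm, ~84 mm^2 after the 90nm shrink
        print(dice_per_wafer(200, 144))  # ~181 candidate dice per 200mm wafer
        print(dice_per_wafer(200, 84))   # ~325 candidate dice per 200mm wafer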
  • Tujan - Thursday, February 10, 2005 - link

    "What’s interesting is that the 90nm dual core Pentium 4 Extreme Edition will feature Hyper Threading support, something that is left out of the regular Pentium 4 8xx series."

    This is a weird habit of Intel's: leaving a technology they ventured into out of the next "great new thing". Software which was created to take advantage of hyperthreading will no longer be a criterion for software shoppers. Nor will the coders who put their software up to spec be satisfied that their job was well done.

    Why leave this out? Hyperthreaded software will remain with those who can pay for the higher-priced equipment. Development will remain with those who purchase that equipment. Those buying the new technology will have another reason to consider themselves "low end".

    - hi ya
  • retrospooty - Wednesday, February 9, 2005 - link

    #20

    AMD scrapped the K9, and decided to go straight on with the K10 design.
  • Live - Wednesday, February 9, 2005 - link

    It looks to me that AMD's lack of production capacity will really hurt us this year. The only part that seems able to put up a fight against AMD is the ridiculously priced Extreme Editions. On the other hand, AMD are already at full capacity, and making dual cores will only reduce that even more. So Intel faces no real threat. Here in Sweden there has been a shortage of AMD 90 nm parts for two months now, so they can't be making enough to satisfy demand.

    Hopefully AMD will get the new fab up soon and/or Cell will be as good as can be and we can play with that instead. Intel looks like a nightmare for a value-minded customer.
  • MarchTheMonth - Wednesday, February 9, 2005 - link

    "I'm also looking forward to AMD's K10 (also due out in 2006)!"
    Considering that AMD has gone by increasing the number scheme by 1, and the current processor core is K8, do you mean K9, or did K10 development just look so good that they skipped over K9 and went right to K10? I figure an investor would know...
  • MarchTheMonth - Wednesday, February 9, 2005 - link

    "Yup...SCO :-)"

    That just totally made my day.
  • Viditor - Wednesday, February 9, 2005 - link

    "I mean is there ANYONE in the tech world they HAVEN'T sued yet?"

    Yup...SCO :-)
  • Desslok - Tuesday, February 8, 2005 - link

    How long before RAMBUS starts to sue everyone in this agreement? I mean is there ANYONE in the tech world they HAVEN'T sued yet?
  • Viditor - Tuesday, February 8, 2005 - link

    The bottom line is that both AMD and Intel will have multicore, 64-bit, and virtualization available across all their lines by the time it's needed for Longhorn. The big question will be performance, and on this I have quite a bit more faith in AMD's offering because they have been designed for it since day 1. That said, I'm looking forward to Intel's next generation after Prescott (probably late 2006), Conroe. Prescott and all of its derivations have been less than impressive (to say the least), but Conroe will be based on the quite impressive Dothan core.
    I'm also looking forward to AMD's K10 (also due out in 2006)! 2006 should be a VERY interesting year (especially if MS gets off the dime on Longhorn!).
  • erikvanvelzen - Tuesday, February 8, 2005 - link

    Hmm, I forgot the Xeon MPs which are coming. So it's more like one Xeon more. But again, the point is clear.
  • erikvanvelzen - Tuesday, February 8, 2005 - link

    Uhm, the scheme I tried to make above doesn't really make sense, but yours is even worse.

    I forgot Yonah. Maybe I have one Itanium too many, and a Xeon. But the point is clear: 11 projects means 11 codenames.
  • erikvanvelzen - Tuesday, February 8, 2005 - link

    I quote:

    The first thing they sent out to us was an interesting fact - that Intel has 11 multi-core projects that they’re working on for the 2005 - 2006 time period. Doing a quick number check we’re left with the following breakdown:

    3 - “Smithfield” based Pentium 4 8xx series CPUs
    1 - dual core Pentium 4 Extreme Edition
    3 - “Yonah” based Pentium M CPUs (in 2006)

    That leaves us with four unaccounted for chips - we’d expect Xeon and Itanium to fill in those blanks nicely.

    (end of quote)

    Sorry I must disagree with you on that. I think every code name has only ONE project. I think the list is more like this:

    DESKTOP (5):
    Smithfield
    Presler
    Conroe (desktop Merom)
    Merom (mobile Conroe)
    Allendale (maybe moved to 2007)

    Then we have 4 Xeon products related to the desktop parts and 2 Itanium products (I'm too lazy to look for the code names).

    That makes 11!
  • benk - Tuesday, February 8, 2005 - link

    It's a shame that Intel is basically crippling the consumer-level parts compared to the EEs. I wonder if it's to maintain the price premium, or if it has more to do with getting an acceptable yield. I can't imagine that it's a big enough percentage of sales to justify any additional engineering, so I would guess it's the latter.
  • Viditor - Tuesday, February 8, 2005 - link

    Jarred - "That is the heart of the whole Rambus problem. The Pentium 3 was not designed AT ALL to make use of Rambus"

    I disagree somewhat...the real problem that Rambus had was "bang for the buck". They had some good engineering, but the only parts worth anything were the high-clocked ones, and they were WAY overpriced for the boost (if any) you received.
    The royalty problem isn't so much that they charge one (as you say, many do), it's that it's so nose-bleedingly high (many times what others charge)! Also, since they are an IP-only company, there can be none of the usual cross-licensing deals which help keep costs down for the manufacturers...
    They also have a nasty habit of litigating at the drop of a hat!

    JMHO
  • Viditor - Tuesday, February 8, 2005 - link

    Call me cynical (if you must), but I'm reminded strongly of the old 1 GHz race...
    AMD and Intel came out within a few days of one another (AMD sure snuck THAT one in!), but actually finding shipping chips was another matter (biggest paper launch Intel has ever done IIRC). Of course this time, they are both releasing low demand parts...Opteron and EE (though I think Opteron will have a much greater demand than EE).
    It will be interesting to see when the parts are really available this time...
  • retrospooty - Tuesday, February 8, 2005 - link

    Bah... With Rambus, the best thing to do is ignore them... All the specs and hype mean absolutely nothing.

    Wait until there is a shipping product, and judge it by price and performance. Until then, everything they say or do means nothing.
  • IntelUser2000 - Tuesday, February 8, 2005 - link

    #7

    Actually, it seems Prescott doesn't want more memory bandwidth AT ALL, so XDR would do nothing. It seems Prescott is designed for nothing but clock speed scaling: it cares nothing about cache, nothing about bandwidth. That's perfect for scaling clock speeds, since an increase in clock speed means more bandwidth is required, but since Prescott doesn't care, it's all good. It's all moot now though.

    If you want to see that, go to www.x86-secret.com for reviews. The 2MB cache Prescotts do absolutely nothing for performance while costing 20% more. The same goes for the 3.73GHz Prescott 2M EE: even with the 1066MHz bus it's slower than the 3.46GHz Northwood-based EE with 2MB L3 on the same 1066MHz bus.
  • stevty2889 - Monday, February 7, 2005 - link

    I was worried for a second there when they mentioned Rambus and Intel dual core in the heading together... although it would be interesting to see how much Prescott would like all the bandwidth XDR RAM could provide...
  • JarredWalton - Monday, February 7, 2005 - link

    Don't get so down on Rambus, guys. Did they cause some trouble initially? Yes. But then, AMD, Intel, ATI, NVIDIA, VIA.... Just about any major technology company has made a product that people didn't like, or one that was timed poorly, offered less than acceptable performance, etc. Rambus is actually some pretty interesting stuff at the low level. The basic point of any memory technology was covered in the memory article we published a while ago:

    "Any design can be modified to work with higher or lower latencies; it is but one facet of the overall goal which needs to be addressed."

    That is the heart of the whole Rambus problem. The Pentium 3 was not designed AT ALL to make use of Rambus. It couldn't really use DDR well either, although Intel never pursued that. The Pentium 4 actually made very good use of Rambus, and up until the 865/875 were released, the Rambus chipsets (850/850E) were the fastest performing P4 chipsets. Graphics chips in particular are usually quite happy with higher latencies as long as they also get higher bandwidth. XDR certainly offers that!

    XDR at 400 MHz octal-pumped gives an effective 3.2 GHz per pin, so a 64-bit interface can provide a whopping 25.6 GB/s of bandwidth (a quick sketch of the math is at the end of this post). Yeah, the 256-bit interface on an X850XT or 6800U provides more total bandwidth, but only by using four times as many pins. Considering most graphics chips with 64-bit DDR solutions are only providing 3.2 GB/s of bandwidth, XDR is very interesting.

    We'll see how the actual implementation turns out. Personally, I would not be at all surprised to see NVIDIA try XDR with an upcoming GPU. The "royalty problem" of Rambus really isn't that big of a deal in the larger scheme of things. Remember that Pentium, Athlon, Opteron, etc. are all proprietary technologies that have a built-in "royalty" to their designers.
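
    Here's the quick sketch of that bandwidth math (a Python toy, purely illustrative; the helper function and the 200 MHz DDR comparison point are my own framing, not figures from the article):

        # Peak theoretical bandwidth = base clock x transfers per clock x bus width.
        def bandwidth_gb_s(base_clock_mhz, transfers_per_clock, bus_width_bits):
            transfers_per_sec = base_clock_mhz * 1e6 * transfers_per_clock
            bytes_per_transfer = bus_width_bits / 8
            return transfers_per_sec * bytes_per_transfer / 1e9

        # XDR: 400 MHz base clock, octal data rate, 64-bit interface
        print(bandwidth_gb_s(400, 8, 64))   # 25.6 GB/s
        # 64-bit DDR at 200 MHz (400 MT/s), the low-end graphics case mentioned above
        print(bandwidth_gb_s(200, 2, 64))   # 3.2 GB/s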
  • knitecrow - Monday, February 7, 2005 - link

    The FlexIO sounds interesting. Does anyone actually know what the bandwidth is like for such an I/O, and how does it compare to HyperTransport?

  • Brian23 - Monday, February 7, 2005 - link

    #2, Who says that I'm going to connect my PS3 to the internet? If I just use it as a game machine, Sony can't do anything with my hardware. Even if I did connect it to the internet, I HIGHLY doubt that there is a backdoor for Sony to do stuff to it. It would just be a hole for someone to exploit.
  • xsilver - Monday, February 7, 2005 - link

    Is there any word / are there any ideas on the possibility of overclocking the dual core P4s? Running an 800MHz FSB may not make that easy, but........

    And also, with Rambus I think a lot of people will take the stance of "I'll believe it when I see it", since so many people still have a bad taste in their mouths :)
  • DestruyaUR - Monday, February 7, 2005 - link

    Anyone else think it's kind of odd that Cells are using Rambus, a technology we were ALSO told was going to revolutionize computing as we then knew it?

    Also, I'd like to know - since Cell's power depends on interconnectivity, is there a central, compromisable link to control the network it'll create?

    A conspiracy theorist could almost make the case that Sony would have access to one of, if not THE, most powerful supercomputer in the world...and if not them, whoever could compromise the net.
  • MarchTheMonth - Monday, February 7, 2005 - link

    I really think AMD will release a press announcement tomorrow that they will have dual core Opterons and Athlon 64s ready 2 days before Intel has their dualies out.
