  • Zak - Wednesday, August 22, 2007 - link

    I seem to remember reading somewhere, probably a couple of years ago, about research being done on superconductivity at "normal" temperatures. Right now superconductivity occurs only at extremely low temperatures, right? If materials were developed that achieve the same at normal temperatures, it'd solve a lot of these issues, like wire delay and power loss, wouldn't it?

    Z.
  • Tellme - Monday, February 21, 2005 - link

    Carl, what I meant was that soon we might not see much improved performance from multicores either, because the data arrives at the processor too late for quick execution. (That is true for single cores as well.)

    Did you check the link?
    Their idea is simple:
    "If you can't bring the memory bandwidth to the processor, then bring the processors to the memory."
    Interesting, no?
    Currently the processor spends most of its time waiting for the data it needs to process.

  • carl0ski - Saturday, February 19, 2005 - link

    #61 I thought the P4 already had memory bandwidth problems.
    AMD has a temporary workaround (the on-die memory controller), which helps multiple CPUs/dies using the same FSB access the RAM.

    Intel has proposed multiple FSBs, one per CPU/die.

    Does anyone know if that means they will need separate RAM DIMMs for each FSB? Because that would make for an expensive system.
  • carl0ski - Saturday, February 19, 2005 - link

    [quote]59 - Posted on Feb 12, 2005 at 11:28 AM by fitten Reply
    #57 What was the performance comparison of the 1GHz Athlon vs. the 1GHz P3? IIRC, the Athlon was faster by some margin. If this was the case, then there was a little more than tweaking that went on in the Pentium-M line. Because they started out looking at the P3 doesn't mean that what they ended up with was the P3 with a tweak here or there. :)[/quote]

    #59 Didn't the 1GHz P3 run 133MHz SDRAM on a 133MHz FSB?
    The 1GHz Athlon had a nice DDR 266 FSB to support it.

  • Tellme - Monday, February 14, 2005 - link

    Nice article.

    I think dual cores will soon hit the wall, i.e. memory bandwidth.

    Hopefully memory and processors are integrated in the near future.

    See
    http://www.ee.ualberta.ca/~elliott/cram/

  • ceefka - Monday, February 14, 2005 - link

    Though still a little too technical for me, it makes a good read.

    It's good to know that Intel has eaten their words and realized they had to go back to the drawing board.

    I believe that sooner rather than later multicore will mean 4-8 cores, providing the power to emulate everything that is not necessarily native, like running Mac OS X on an AMD or Intel box. In other words, the Cell will meet its match.
  • fitten - Saturday, February 12, 2005 - link

    #57 What was the performance comparison of the 1GHz Athlon vs. the 1GHz P3? IIRC, the Athlon was faster by some margin. If this was the case, then there was a little more than tweaking that went on in the Pentium-M line. Because they started out looking at the P3 doesn't mean that what they ended up with was the P3 with a tweak here or there. :)
  • avijay - Friday, February 11, 2005 - link

    EXCELLENT Article! One of the very best I've ever read. Nice to see all the references at the end as well. Could someone please point me to Johan's first article at AT? Thanks.
    Great Work!
  • fishbreath - Friday, February 11, 2005 - link

    For those of you who don't actually know this:

    1) The Dothan IS a Pentium 3. It was tweaked by Intel in Israel, but its heart and soul is just a PIII.

    1b) All P4's have hyperthreading in them, and always have had. It was a fuse feature that was not announced until there were applications to support it. But anyone who has HT and Windows XP knows that Windows simply has a smoother 'feel' when running on an HT processor!

    2) Complex array processors are already in the pipeline (no pun intended). However, the lack of an operating system or language to support them demands that they make their first appearance in dedicated applications such as H.264 encoders.
  • blckgrffn - Friday, February 11, 2005 - link

    Yay for Very Large Scale Integration (more than 10,000 transistors per chip)! :) I wonder when the historians will put down in the history books that we have hit the fifth generation of computing org....
  • WhoBeDaPlaya - Thursday, February 10, 2005 - link

    Ain't no way you can get those repeaters out of there - that's already the optimum solution for driving the large load (interconnect). It probably equalizes the stage effort required (you can work out the math and find that for multi-stage logic, the optimal config is one where each stage has the exact same effort level). E.g. instead of driving an interconnect with a "unit" inverter, it might be more feasible to drive it with a chain of them, each with different fan in/out. Repeater insertion is tricky and (as far as I know) can't readily be automated.

    Interconnects are getting to the point where traversing a die diagonally can take multiple clock cycles. Some folks are suggesting that a pipelined approach could be extended to interconnects, esp. clock trees. But the most fun problem (for me at least :P) is the handling of inductance extraction - how in the h*ll do you model it accurately? High-speed digital design == analog design. Long live analog / mixed-signal VLSI designers :P
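A minimal sketch of the equal-stage-effort sizing idea mentioned in the comment above. This is an illustration only, not anything from the article; the per-stage effort target of 4 and the function name are assumptions.

[code]
import math

def size_repeater_chain(c_load, c_in, stage_effort=4.0):
    """Size an inverter/repeater chain so every stage bears the same effort.

    Classic logical-effort result: delay is minimized when the total effort
    is split evenly; ~4 per stage is the usual rule of thumb with parasitics.
    """
    F = c_load / c_in                                  # total electrical effort
    n = max(1, round(math.log(F) / math.log(stage_effort)))
    f = F ** (1.0 / n)                                 # equalized per-stage effort
    sizes = [c_in * f ** i for i in range(n + 1)]      # relative input cap per stage
    return n, f, sizes

# Example: a wire that presents ~300x the load of a unit inverter
n, f, sizes = size_repeater_chain(c_load=300.0, c_in=1.0)
print(f"{n} stages at effort {f:.2f} each")
print("stage sizes:", [round(s, 1) for s in sizes])
[/code]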
  • fitten - Thursday, February 10, 2005 - link

    [quote]Well-written multicore-aware code should have the number of cores as a _variable_, so you just set it to 1 on a uniprocessor platform.[/quote]

    Sometimes parallel algorithms aren't very good for serial execution. In these cases, you may actually have one algorithm for multiple processors and another algorithm for a single processor.

    [quote]So, if Intel were to use less repeaters the heat output could be lowered significantly. [/quote]

    Well... I'm sure the Intel engineers didn't just up-and-say one day, "Hey, I know something cool to do... let's put some more repeaters into the core." I'm sure there's a reason for them being in there. It would probably take a bit of redesign to get the repeaters out. (I'm pretty sure this is what you meant, but I just wanted to clarify that stuff like repeaters aren't just put into a CPU for no reason. Things like repeaters are put in because there wasn't a more viable solution to some signalling problem that's there.)
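To make the "number of cores as a variable" quote and the "one algorithm for multiple processors, another for a single processor" point above concrete, here is a minimal sketch. The function names, the 100,000-element cutoff, and the use of threads are all illustrative assumptions, and in CPython the GIL limits how much a CPU-bound version like this actually gains.

[code]
import os
from concurrent.futures import ThreadPoolExecutor

def sum_squares_serial(data):
    # Plain loop: no thread start-up or synchronization overhead.
    return sum(x * x for x in data)

def sum_squares_parallel(data, workers):
    # Different algorithm: split into chunks, combine partial sums.
    chunk = (len(data) + workers - 1) // workers
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_squares_serial, pieces))

def sum_squares(data):
    cores = os.cpu_count() or 1          # the number of cores as a variable
    if cores == 1 or len(data) < 100_000:
        return sum_squares_serial(data)  # parallel version isn't worth it here
    return sum_squares_parallel(data, cores)

print(sum_squares(list(range(1_000))))
[/code]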
  • sphinx - Thursday, February 10, 2005 - link

    So, the reason for Prescott's shortcomings is the use of too many repeaters, as shown in the image of the Itanium 2. If I remember correctly, the article said that the repeaters were using too much power as well. So if Intel were to use fewer repeaters, the heat output could be lowered significantly.
  • AtaStrumf - Thursday, February 10, 2005 - link

    Nice article and pretty easy to understand as well. I'm happy to hear that there may still be hope for controlling the power leakage, because without it I just can't see anybody getting beyond 65 nm, since even 65 nm will, without improvements, leak almost 3 times as much power as 90nm does now.

    Anxiously waiting for E0 A64 to see what AMD has managed to cook up.
  • mickyb - Wednesday, February 9, 2005 - link

    There are plenty of multi-threaded apps out there. I am not sure pure single threaded apps exist any more outside of "Hello World" and some old Cobol/FORTRAN ports that are on floppy.

    Quake and UT have been multi-threaded for a while. Quake was multi-threaded when I had a dual Pentium pro. There were even benchmarks. The benefits seen with hyper-threading also show that many apps are multi-threaded. The performance gain was negligible due to the graphics drivers and OpenGL/DirectX not being thread optimized. I am sure that has been worked out by now.

    Multi-threading is not all about making use of multiple CPUs. There are many conditions where a program would be stopped dead in its tracks waiting for a response from some outside program or hardware device. You can solve this with events, multi-process, multi-threading, call-backs, etc. Goal wise, they are related. In the Winders world, threading is the method of choice.

    I really can't believe there are still arguments going on about programs not being multi-threaded. This is not that much of an issue any more. Even if your apps is not threaded, the OS is and it can run on one CPU while your app runs on the other. Or if you have 2 apps, then they can run on different CPUs.

    With all that said, I agree with the thought that creating performance for all applications is better served by a faster single-core CPU than by dual CPUs. I think this way because when you have a unit of work to be done (even with multiple threads), it is more likely to be done quicker with a single CPU that is capable of the same computing power as 2 CPUs. A single unit of work will ultimately be smaller than a thread in all cases; the smallest is a single instruction.

    Now... with that said, if the limiting factor is technology and they cannot obtain the equivalent performance of a dual core with a single core, then it makes sense to go dual core to obtain it, especially with the power leakage. I like the thinking behind dual core on a laptop, but I am skeptical about the part that says the CPU is turned off and on rapidly to keep it cool and efficient. It will probably work if it isn't turned on and off too quickly, but heat spreads pretty quickly. You wouldn't even get past POST without a heatsink, and that silicon insulator keeps everything pretty cozy.
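A minimal sketch of the point above about not sitting "stopped dead" while waiting on an outside program or device: hand the blocking call to a thread and keep the main thread responsive. The device call is faked with a sleep; all names are illustrative.

[code]
import threading
import time

def read_from_device():
    # Stand-in for a call that blocks on some outside program or device.
    time.sleep(2.0)
    return "sensor reading"

result = {}

def worker():
    result["value"] = read_from_device()

t = threading.Thread(target=worker)
t.start()

# The main thread keeps doing useful work while the blocking call runs.
while t.is_alive():
    print("main thread still responsive...")
    time.sleep(0.5)

t.join()
print("got:", result["value"])
[/code]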
  • NegativeEntropy - Wednesday, February 9, 2005 - link

    Johan, another excellent article, I'm looking forward to part 2.
  • Evan Lieb - Wednesday, February 9, 2005 - link

    It's pretty much impossible to get a "newbie" explanation of CPU architectures without at least a basic understanding of how CPUs work. Rand's suggestions were quite good; you should start there if you're overwhelmed by Johan's explanations, IceWindius. It also wouldn't hurt to start with Anand's CPU articles from last year.
  • Rand - Wednesday, February 9, 2005 - link

    "I wish someone like Arstechinca would make something really built ground up like CPU's for morons so I could start understanding this stuff better."

    You may want to read parts 1-5 of "The Secrets of High Performance CPUs"
    http://www.aceshardware.com/list.jsp?id=4
    A bit outdated now, as it was written in '99 if I recall correctly, but it's still broadly relevant and a nice series of articles if you're looking to get a better understanding of microprocessors without being drowned in the technical side of things.

    ArsTechnica also has some good articles with a newbie friendly slant.

    There are some excellent articles at RealWorldTech as well, but they're definitely written for engineers rather than the average person.
    Unfortunately, most of the more notable books, like those by Hennessy & Patterson, assume you already have some knowledge of computer architecture.
  • stephenbrooks - Wednesday, February 9, 2005 - link

    #46, Well-written multicore-aware code should have the number of cores as a _variable_, so you just set it to 1 on a uniprocessor platform. I also think there already exists a multithreaded version of one of the big engines (Quake, UT?) that apparently does not lose any performance on a single core either.

    But I agree with the main thrust of your post, which is "Buy AMD".
  • Noli - Wednesday, February 9, 2005 - link

    Not to belittle dual core development and I know there are a lot of people who run technical programs that will benefit from dual core on this site, but when I spend a small fortune on a pc, the primary driver is being able to play the most advanced games in the world. Unfortunately, I don't feel multi-threaded game code is going to get written for a longggggg time (what's the point of reducing potential customers?). How long till a very large percentage of users have dual cores? End of 2006 at the very earliest? So it's really a just a theoretical interest till then for me...
  • Momental - Wednesday, February 9, 2005 - link

    #41, I understood what he meant when he stated that AMD could only be so lucky to have something which was a technological failure, i.e. Prescott, sell as well as it has. Even the article clearly summarizes that Prescott in and of itself isn't a piece of junk per se, only that it has no more room for evolution as Intel had originally hoped.

    #36 wasn't saying that it was a flop sales-wise, quite the contrary. The thing has sold like hotcakes!

    I, like many others here, literally got dizzy as I struggled to keep up with all of the technical terminology and mathematical formulas. My brain is, as of this moment, threatening to strike if I don't get it a better health and retirement plan along with a shorter work week. ;)
  • Ivo - Wednesday, February 9, 2005 - link

    1. About multiprocessing: Of course, there are many (important!) applications which are more than satisfied with the existing mono-CPU performance. Some others will benefit from dual CPUs. Matrix 2CPU+2GPU combinations could be essential, e.g. for stereo visualization. Probably, desktop machines with enhanced voice/image analysis capabilities could require even more sophisticated CPU matrices. I suppose the mono- and multi-CPU solutions will coexist in the near future.

    2. About the leakage problem: New materials like SOI are part of the solution. Another part is new techniques. Let us take a lesson from nature: our blood-transport system consists of tiny capillaries and much thicker arteries. Maybe it could make sense to combine 65 nm transistors, e.g. in the cache memory, with 90 nm transistors in the ALU?
  • Noli - Wednesday, February 9, 2005 - link

    "Netburst architecture is very innovative and even genial"

    genius-like?
    If by genial you mean 'having a pleasant or friendly disposition', it sounds weird. It can mean 'conducive to growth' in this context but that's not so intuitive because a) it wasn't and b) at best it was only theoretically genial.

    Presumably it's not genial as in 'of or relating to the chin' :)

    Agree monolithic was confusing but it was the intel dude who said it - I thought it meant 'large single unit' rather than 'old (as in technology)' as in: increasing processing power by increasing the size and complexity of a single core is now not as efficient as strapping two cores together - a duallithic unit :)

    Sorry to be a pedantic twat.
  • Xentropy - Wednesday, February 9, 2005 - link

    Some of the verbiage in that final chapter makes me wonder how much better Prescott might have done if Intel had just left out everything 64-bit and developed an entirely different processor for 64-bit. Especially since we won't have a mainstream OS that'll even utilize those instructions for another few months, and it's already been about a year since release, they could have easily gotten away with putting 64-bit off until the next project. It's pretty obvious by now that even the 32-bit Prescotts have those 64-bit transistors sitting around. Even if not active, they aren't exactly contributing to the power efficiency of the processor.

    I think one big reason Intel thinks dual core will be the savior of even the Prescott line is that supposedly dual cores running at 3GHz only require power draw equivalent to a single core at 3.6GHz, and should be just as fast in some situations (multitasking, at least). Dual core at 85% clockspeed will be slower for gaming, though, so dual core Prescott still won't close the gap with AMD for gaming enthusiasts (98% of this site's readership), and may even represent a further drop in performance per watt. Here's hoping for Pentium-M on the desktop. :>
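A back-of-the-envelope check of that power claim, assuming dynamic power scales as f*V^2 and that voltage can be dropped roughly in step with frequency. Both are simplifications for illustration, not figures from the article.

[code]
def relative_power(cores, freq_scale, volt_scale=None):
    # Dynamic power ~ number of cores * frequency * voltage^2 (normalized).
    volt_scale = freq_scale if volt_scale is None else volt_scale
    return cores * freq_scale * volt_scale ** 2

single = relative_power(1, 1.0)        # one core at 3.6GHz
dual = relative_power(2, 3.0 / 3.6)    # two cores at 3GHz, voltage scaled too
print(f"dual vs. single core power: {dual / single:.2f}x")
# ~1.16x here; a slightly deeper voltage drop brings the two roughly even,
# which is the spirit of the "3GHz dual ~= 3.6GHz single" claim.
[/code]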
  • piroroadkill - Wednesday, February 9, 2005 - link

    #36 -- You really didn't read the article and get the point of it. It wasn't a failure from a sales point of view, and this article was not written from a sales point of view, but a technical point of view, and how the Prescott helped in furthering CPU technology.

    Thus, a failure.
  • ViRGE - Wednesday, February 9, 2005 - link

    Although I think I sank more than I swam, that was a very good and informative article Johan. I just have one request for a future article since I'm guessing the next one is on multi-core tech: will someone at AT run the full AT benchmark suite against a SMP Xeon machine so that we can get a good idea ahead of time what dual-core performance will be like against single core? My understanding is that the Smithfields aren't going to be doing much else new besides putting 2 cores on one die (i.e. no cache sharing or other new tech), so SMP benchmarks should be fairly close to dual-core benchmarks.
  • Griswold - Wednesday, February 9, 2005 - link

    Case in point as to why the marketing department is the most important (and powerful) part of any highly successful company. It's not the R&D labs who tell you what works and what comes next, it's the PR team.
  • quidpro - Wednesday, February 9, 2005 - link

    Someone needs to make a new Tron movie so I can understand this better.
  • tore - Wednesday, February 9, 2005 - link

    Great article. On page 3 you talk about a BJT transistor with a base, collector, and emitter; since all modern CPUs use MOSFETs, should you talk about a MOSFET with a gate, source, and drain instead?
  • Questar - Wednesday, February 9, 2005 - link

    "The Pentium 4 "Prescott" is, despite its innovative architecture, a failure."


    AMD wishes they had a "failure" that sold like Prescott.


  • stephenbrooks - Wednesday, February 9, 2005 - link

    #28 - that's interesting. I was thinking myself just a few days ago "I wonder if those wires go the long way on a rectangular grid or do they go diagonally?" Looks like there's still room for improvement.
  • Chuckles - Wednesday, February 9, 2005 - link

    The word comes from Greek: "mono" meaning one, "lithic" meaning stone. So monolithic refers to the fact that it is a single cohesive unit.
    The reason you associate "lithic" with old is only due to the fact that anthropologists use Paleolithic and Neolithic to describe time periods in human history in the Stone Age. The words translate as "old stone" and "new stone" respectively.
    I have seen plenty of monolithic benches around here. Heck, a slab granite countertop qualifies as a monolith.
  • theOracle - Wednesday, February 9, 2005 - link

    Very good article - looks like a university paper with all the references etc! Looking forward to part two.

    Re "monolithic", granted the word doesn't mean old but anything '-lithic' instantly makes me think ancient (think neolithic etc). -lithic means a period in stone use by humans, and a monolith is a (usually ancient) stone monument; I think its fair to say Intel were trying to make the audience think 'old technology'.
  • DavidMcCraw - Wednesday, February 9, 2005 - link

    Great article, but this isn't accurate:

    "Note the word "monolithic", a word with a rather pejorative meaning, which insinuates that the current single core CPUs are based on old technology."

    Neither the dictionary nor technical meanings of monolithic imply 'old technology'. Rather, it simply refers to the fact that the single-core CPU being referred to is as large as the two smaller chips, but is in one part.

    In the context of OS kernel architectures, the Linux kernel is a good example of monolithic technology... but I doubt many people consider it old tech!
  • IceWindius - Wednesday, February 9, 2005 - link

    Even this article makes my head hurt; so much about CPUs is hard to understand and grasp. I wish I knew how those CPU engineers do this for a living.

    I wish someone like Ars Technica would make something really built from the ground up, like "CPUs for Morons", so I could start understanding this stuff better.
  • JohanAnandtech - Wednesday, February 9, 2005 - link

    Jason and Anand have promised me (building some pressure ;-) a threaded comment system so I can answer more personally. Until then:

    1. Thanks for all the encouraging comments. It really gives a warm feeling to read them, and it is basically the most important motivation for writing more.

    2. Slashbin (27): Typo, just typed in a small period of insanity. Voltage, of course; fixed.

    3. CSMR: the SPEC numbers for Intel are artificially high, as they have been spending more and more time on aggressive compiler optimisations. All other benchmarks clearly show the slowdown.
  • CSMR - Tuesday, February 8, 2005 - link

    Excellent article. Couple of odd things you might want to amend in chapter one: "CPUs run 40 to 60% faster each year" contradicts the previous discussion about slowed CPU speed increases. Also power formula explanation on the same page doesn't really make sense as pointed out by #27.
  • Doormat - Tuesday, February 8, 2005 - link

    Good article. The only real thing I wanted to bring up was something called the "X Consortium". I wrote a paper on it in my solid-state circuit design class a few years ago. Basically, instead of having all the interconnects within a chip laid out in a grid-like fashion, it allows them to be diagonal (and thus a savings of at most about 29%, since a diagonal run is 1/sqrt(2) of the grid path length). Perhaps the tools aren't there or it's too patent-encumbered. If interconnects are really an issue then they should move to this diagonal interconnect technology. I actually don't think they are a very pressing need right now - leakage current is the most pressing issue. The move to copper interconnects a while ago helped (increased conductivity over aluminum; smaller die sizes mean shorter distances to traverse, typically).

    It will be very interesting to see what IBM does with their Cell chips and SOI (and what clock speed AMD releases their next A64/Opteron chips at, since they've teamed with IBM). If indeed these Cell chips run at 4GHz and don't have leakage current issues, then there is a good chance that issue is mostly remedied (for now at least).
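A worked version of the "at most ~29%" figure above; this is just the geometry, not anything from the paper being recalled.

[code]
import math

def diagonal_savings(dx, dy):
    # Grid (Manhattan) routing covers dx + dy; a diagonal route covers
    # sqrt(dx^2 + dy^2). Savings peak at dx == dy, where the ratio is 1/sqrt(2).
    manhattan = dx + dy
    euclidean = math.hypot(dx, dy)
    return 1 - euclidean / manhattan

print(f"{diagonal_savings(1, 1):.1%}")  # 29.3% in the best case
print(f"{diagonal_savings(3, 1):.1%}")  # much less for mostly-straight runs
[/code]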
  • slashbinslashbash - Tuesday, February 8, 2005 - link

    " In other words, dissipated power is linear with the e ffective capacitance, activity and frequency. Power increases quadratically with frequency or clock speed." (Page 2)

    Typo there? Frequency can't be both linear and quadratic..... from the equation itself, it looks like voltage is quadratic. (assuming the V is voltage)
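For reference, the standard CMOS dynamic-power relation the quoted sentence is describing, with the quadratic term on voltage rather than frequency; the numbers below are made up for illustration.

[code]
def dynamic_power(activity, c_eff, voltage, freq):
    # P_dynamic = activity * effective capacitance * V^2 * f
    # Linear in activity, capacitance, and frequency; quadratic in voltage.
    return activity * c_eff * voltage ** 2 * freq

# Example: dropping the core voltage from 1.4V to 1.2V at the same clock
before = dynamic_power(0.2, 1e-9, 1.4, 3.6e9)
after = dynamic_power(0.2, 1e-9, 1.2, 3.6e9)
print(f"{after / before:.2f}x the dynamic power")  # ~0.73x
[/code]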
  • AnnoyedGrunt - Tuesday, February 8, 2005 - link

    And of course I meant to refer to post 23 above.
    -D!
  • AnnoyedGrunt - Tuesday, February 8, 2005 - link

    It's possible that 22 was referring solely to the grammar of the sentence, which could potentially make more sense if it were rewritten as, "while other applications will REQUIRE exponential investments in development..."

    Very good article overall, but some portions could be polished a bit perhaps to make it easier for people only slightly familiar with processor details (people like myself) to understand.

    Really looking forward to part 2!

    -D'oh!
  • JarredWalton - Tuesday, February 8, 2005 - link

    23 - Not at all. Have you ever tried writing multi-threaded code? If it takes 12 months to write and debug a single-threaded program that handles a task, and you try to do the same thing in multi-threaded code, I would expect 24 to 36 months to get everything done properly.

    Let's not even get into the discussion of the fact that not all code really *can* benefit from multi-threadedness. I had a similar conversation with several others in the Dual Core AMD Roadmap article. You can read the comments there for additional insight, I hope:

    http://www.anandtech.com/talkarticle.aspx?i=2303
  • cosmotic - Tuesday, February 8, 2005 - link

    "while the other applications will see exponential investments in development time to achieve the same performance increase." Thats a really stupid statement.
  • cosmotic - Tuesday, February 8, 2005 - link

    That first image really sucks. You should at least make it look decent. It looks like crap now.
  • IceWindius - Tuesday, February 8, 2005 - link

    Math hurts, and thus my head hurts.......


    Either way, Intel finally admits they fucked up and AMD spanked them for it. Justice is served.
  • faboloso112 - Tuesday, February 8, 2005 - link

    only about halfway through the article but this is a damn good article.

    not a fanboi of any sort but i certainly do hate intel's pr team.

    i think the reason amd has done well for itself is because it doesn't pride itself on, nor rely on, fake product specs and their exaggerated capabilities and scalability...unlike intel...and ill admit...i got caught up in the hype too with the whole 10ghz thing at the time, because based on moore's law and how things had been going w/ the clock speed jumps...i thought one day it would be possible...but look at where the prescott stands now...and look at how instead of blabbing about 10ghz..they talk of multi-core cpus.

    i think ill stop talking now and return to the article...
  • erikvanvelzen - Tuesday, February 8, 2005 - link

    I eat up these sorts of articles about CPUs, memory and the like, which reference hardware that I actually use.

    If you like this, check out these articles by John 'Hannibal' Stokes @ arstechnica.com:
    http://arstechnica.com/cpu/index.html
    http://arstechnica.com/articles/paedia/cpu.ars
  • jbond04 - Tuesday, February 8, 2005 - link

    AWESOME article, Johan. Good to see someone do some real research regarding the Prescott processor. Keep up the good work!
  • Oxonium - Tuesday, February 8, 2005 - link

    Johan used to write very good articles for Ace's Hardware. I'm glad to see him writing those same high-quality articles for Anandtech. Keep up the good work!
  • BlackMountainCow - Tuesday, February 8, 2005 - link

    Wow, very interesting read. Finally some stuff based on real facts and not some "Prescott just sux" stuff. Two thumbs up!
  • Cybercat - Tuesday, February 8, 2005 - link

    It's sad that software isn't moving in the direction of AMD's architectural emphasis, and instead heading toward a more media-oriented design. As said above, AMD is better at keeping in mind the future of their processors, by keeping up with low-leakage technologies (E0 stepping being a good example).

    I do think though that the whole dual-core thing is a gimmick. I certainly won't be buying into it any time soon.
  • fitten - Tuesday, February 8, 2005 - link

    Good read! I'm looking forward to the next installment.
  • reactor - Tuesday, February 8, 2005 - link

    Half of it went over my head, but it was nonetheless very interesting. The Prescott chapter was very informative.

    Well Done.
  • Rand - Tuesday, February 8, 2005 - link

    I'm still getting accustomed to seeing your byline on articles published on AnandTech rather than Ace's Hardware :)

    As always, it's an excellent and fascinating read.

  • Regs - Tuesday, February 8, 2005 - link

    Pentium-M can't*
  • Regs - Tuesday, February 8, 2005 - link

    Thanks to this article I now know why the PM can reach faster clock cycles, and why AMD is still behind in multimedia tasks like video encoding.

    Awesome article! I see someone has been lurking in the forums.
  • FinalFantasy - Tuesday, February 8, 2005 - link

    Nice article!
  • bersl2 - Tuesday, February 8, 2005 - link

    Yay! I get to use some of the stuff from my CS2110 class!
  • Gnoad - Tuesday, February 8, 2005 - link

    This is one hell of an in depth article! Great job!
  • WooDaddy - Tuesday, February 8, 2005 - link

    I have to say this is the most technical article from AnandTech I have read. Good thing I'm a hardware engineer... I think it could be a difficult read for someone with even an average understanding of microprocessor development.

    Good though.
  • sandorski - Tuesday, February 8, 2005 - link

    While reading the article I couldn't help but think that when Intel states something, it becomes all the buzz in the industry and is accepted as fact. OTOH, AMD has been way ahead of Intel concerning these issues, adopting the technologies needed to avoid them while Intel ran ahead right into the wall. Given the history between the two, I'd hope that AMD's musings on the future become more relevant, as they seem more in tune with the technology and its limitations. It likely won't happen, though.
  • Mingon - Tuesday, February 8, 2005 - link

    I thought it was originally reported that Prescott's ALUs were single-pumped vs. double-pumped for Northwood et al.
  • segagenesis - Tuesday, February 8, 2005 - link

    Heh heh heh, good timing with the recent news. Very well written and good insight on low level technology.

    It is starting to become obvious even to the average joe user now that computer power for PCs has plateaued over the past year or so. You can have a perfectly functional and snappy desktop with just 2GHz or less if you use the right apps.

    I think the recent walls hit by processor technology should be an indication for developers to work better with what they have rather than keep demanding more power. We used to make jokes about how much processor power is needed for word processing, but considering MS Word runs no faster really than it did on a P2-266MHz in Office 97... urrrgh.
  • sandorski - Tuesday, February 8, 2005 - link

    hehe, you said, "clocks peed" hehe :D
    (Chapter 1)

    good article.
  • Ender17 - Tuesday, February 8, 2005 - link

    Interesting. Great read.
