I've been reading a little about Larrabee, Intel's reaction to CUDA on NVIDIA GPUs and ?? (the ATI SDK) on ATI GPUs.
Apparently a developer release is scheduled for November this year.
What are your thoughts on Larrabee? Personally I see it having an advantage over NVIDIA/ATI since it supports the x86 instruction set, which we're all well familiar with by now.
But at the same time, I'd rather not see the few remaining companies go under just because Intel has deep pockets.
IMHO, without AMD we'd still be stuck with P4 chips, and without the GeForce 8 series we wouldn't have Larrabee on the horizon. Competition is good for everyone, but it looks like Intel is eating them up.
By the way, I know this is probably a better topic for the Colosseum; feel free to move it there.
I don't know if you know this or not, but Intel already sells more graphics hardware than Nvidia and ATi combined.
That said, I love my GeForce cards, but I honestly don't see Intel crushing the competition any time soon. Both Nvidia and ATi are too entrenched, and it would take a lot to unseat either of them. Graphics is not something Intel has ever done well, especially in the dedicated market, which is non-existent for them, or the enthusiast market, which is also non-existent for them.
I do hope that their offering is at least decent, because that will put a bit of pressure on the other hardware makers and will only drive innovation and competition. And that, my friend, will give you, me, and others lower prices.
Cheers!
Quote from: Kernel_Gaddafi on July 25, 2008, 02:39:45 AM
Personally I see it having an advantage over NVIDIA/ATI since it supports the x86 instruction set, which we're all well familiar with by now.
That's not so much of an advantage. Any generic ISA would have worked. The vector extensions are believed to be entirely new anyway, so it might not resemble x86 very closely. The biggest advantage is that it's much closer to a multi-core CPU than other GPUs. You can program it in much the same way, while with today's GPUs you have to be really careful about data layout and processing order. Larrabee has a cache hierarchy which should make it much more accessible for the average programmer. So while GPUs are still only really good at graphics and closely related things, Larrabee could run far more diverse software at high throughput.
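To make that data-layout point concrete, here's a minimal C++ sketch (the names and numbers are mine, purely illustrative): on a cached, CPU-like chip such as Larrabee you can walk an ordinary array of structs and let the cache hierarchy soak up the access pattern, whereas on today's GPUs you would typically rearrange the same data into separate arrays so that neighbouring threads touch neighbouring memory.

#include <cstddef>
#include <vector>

// AoS layout: fine on a cached CPU-style core; the cache line that holds x
// also brings y, z and mass along for free.
struct Particle { float x, y, z, mass; };

void update_aos(std::vector<Particle>& p, float dt) {
    for (std::size_t i = 0; i < p.size(); ++i)
        p[i].x += p[i].mass * dt;
}

// SoA layout: what you'd usually switch to for a GPU, so that thread i and
// thread i+1 read adjacent addresses (coalesced access).
struct Particles { std::vector<float> x, y, z, mass; };

void update_soa(Particles& p, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i)
        p.x[i] += p.mass[i] * dt;
}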
Quote
But at the same time, I'd rather not see the few remaining companies go under just because Intel has deep pockets.
They won't. First of all, it's not like the other companies have no money at all and are sitting on their asses all day. Intel may have an advantage in process technology but lacks experience in designing this type of chip. The next round of chips from NVIDIA and AMD will have an architecture close to that of Larrabee, or better. Secondly, there's antitrust law. It makes it unlikely for Intel to ever have a monopoly in any market, no matter how deep their pockets are.
Quote
IMHO, without AMD we'd still be stuck with P4 chips, and without the GeForce 8 series we wouldn't have Larrabee on the horizon. Competition is good for everyone, but it looks like Intel is eating them up.
Why the panic? Larrabee hasn't proven anything yet. The specifications might look impressive but by the time Larrabee hits the retail market there might be far more impressive products from NVIDIA and/or AMD.
Either way, there are definitely interesting times ahead. To some extent Intel might even become its own competitor. Desktop CPUs are rapidly gaining performance due to multi-core and things like AVX/FMA (http://softwareprojects.intel.com/avx/)...
Quote from: Cobra on July 27, 2008, 04:25:39 PM
I don't know if you know this or not, but Intel already sells more graphics hardware than Nvidia and ATi combined.
That's only because of the number of integrated chips on motherboards and in laptops. These cost a few bucks apiece, so they're not making big money on those sales. Furthermore, for desktops, many of those motherboards still get a discrete graphics card (essentially making the IGP useless). So you have to put these market share numbers into perspective.
Quote
...I honestly don't see Intel crushing the competition any time soon. Both Nvidia and ATi are too entrenched, and it would take a lot to unseat either of them. Graphics is not something Intel has ever done well, especially in the dedicated market, which is non-existent for them, or the enthusiast market, which is also non-existent for them.
Couldn't agree more.
Quote
Larrabee has a cache hierarchy which should make it much more accessible for the average programmer. So while GPUs are still only really good at graphics and closely related things, Larrabee could run far more diverse software at high throughput.
This is exactly why Intel and their Larrabee product may have the advantage over AMD/ATI or NVIDIA. ;)
Quote
They won't. First of all, it's not like the other companies have no money at all and are sitting on their asses all day.
Well, if you've been following the financial markets, you'll notice that AMD posted a loss of $1.19 billion, its seventh quarterly loss in a row. Who is investing in this company?
AMD bought ATI for how much? And how much is it worth now? Certainly not what they paid.
Intel has dramatically dropped the prices of its chips, putting pressure on AMD to do the same.
Quote
Intel may have an advantage in process technology but lacks experience in designing this type of chip.
True, but it's not like they couldn't reverse engineer NVIDIA's or ATI's products and learn from them.
Is it hard to believe? Not for me.
Quote from: Kernel_Gaddafi on July 28, 2008, 02:55:36 PM
This is exactly why Intel and their Larrabee product may have the advantage over AMD/ATI or NVIDIA. ;)
Sure, but the real question then becomes: for how long? Both NVIDIA and ATI have been using the same overall architecture for over two years now. They certainly have been working on something entirely new and must realize that the only way forward is to make things more accessible for the programmers. Technologies such as eDRAM and ZRAM could make Larrabee's advantages insignificant.
Quote
Well, if you've been following the financial markets, you'll notice that AMD posted a loss of $1.19 billion, its seventh quarterly loss in a row. Who is investing in this company?
Right before the successful Athlon 64 appeared, AMD wasn't doing so great financially either. They have to take the beating, lick their wounds, and then strike again when the time is right. Phenom doesn't have a bad architecture, but it looks like it requires 45 nm technology to reach higher clocks and to enable bigger caches. Note that AMD is actually ahead of Intel in having a native quad-core and an integrated memory controller. Their experience with this could lead to a successful competitor to Nehalem.
Quote
AMD bought ATI for how much? And how much is it worth now? Certainly not what they paid.
Return on investment doesn't happen overnight. New products that take the best of both AMD's and ATI's technology could still be years away. But just imagine what they can achieve. Larrabee is clearly a GPU designed by a CPU designer, and could fail miserably. AMD, on the other hand, has everything it needs to create a successful merger of CPU and GPU technology...
Just look at the (unexpected) success of AMD's Radeon 4850/4870 and the poor reception of NVIDIA's GeForce GTX 260/280. For a while it looked like AMD could never create a more successful card, but they pulled it off thanks to superior design.
Quote
True, but it's not like they couldn't reverse engineer NVIDIA's or ATI's products and learn from them.
Is it hard to believe? Not for me.
They are making something entirely different, based on CPU technology. So no, I don't expect they're reverse engineering anything. They are just arrogant enough to think they can beat NVIDIA and AMD. Maybe they can, but I wouldn't put any bets on it. After all, this is their very first chip aimed at the enthusiast market. They are entering the playground of NVIDIA and ATI, and anything can happen... So sit back and watch the spectacle with a large bag of popcorn. :8)
A complete DirectX 9 rasterizer has already been implemented in software for x86 chips: Pixomatic 3, which was bought by Intel a few years ago.
Some of the people who worked on it are now on the Larrabee software team.
Larrabee uses multiple in-order x86 CPU cores that are augmented by a wide vector processor unit,
as well as fixed-function co-processors. This provides dramatically higher performance per watt and
per unit of area than out-of-order CPUs on highly parallel workloads and greatly increases the flexibility
and programmability of the architecture as compared to standard GPUs.
Pixomatic 3 info
http://www.radgametools.com/pixomain.htm
The Larrabee info is part of the abstract of a Siggraph 2008 paper.
Quote from: dsouza123 on August 01, 2008, 12:36:13 AM
A complete DirectX 9 rasterizer has already been implemented in software for x86 chips: Pixomatic 3, which was bought by Intel a few years ago.
Interesting. I wonder how performance compares to SwiftShader. Also, would Pixomatic 3 actually run on Larrabee or is it more like a research software renderer?
Pixomatic 3 (DirectX 9)

Optimization
  - SSE-optimized vertex and pixel shaders
  - SSE2 used if available
  - 16 general-purpose and 16 XMM registers used in 64-bit mode
  - Automatic support for multithreading across up to 16 cores

Requirements
  - An x86-compatible processor with SSE
  - Microsoft Windows, either 32- or 64-bit
Performance
Pixomatic 3 is about 4 times slower than Pixomatic 2, due to its full DX9 compatibility.
However, Pixo 3 was also designed with multi-CPU support built right in!
With near-perfect linear scaling, Pixomatic takes full advantage of multi-core and multi-CPU machines.
At the heart of Pixomatic 3 is an Intel-optimized SSE shader compiler.
This is the same compiler that drives the vertex shading on some Intel hardware GPUs;
we have optimized it further and added pixel shading features.
Your DX9 games will run, right out of the box, just a bit slower!
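The near-linear multi-core scaling they claim is easy to picture: a software rasterizer can give each thread its own band of the framebuffer, so no two threads ever write the same pixels. This is only a rough C++ sketch of that idea, not Pixomatic's actual code; the type and function names are made up.

#include <algorithm>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

struct Framebuffer { int width, height; std::vector<uint32_t> pixels; };

// Hypothetical per-band worker: shade every pixel in rows [y0, y1).
void rasterize_band(Framebuffer& fb, int y0, int y1) {
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < fb.width; ++x)
            fb.pixels[y * fb.width + x] = 0xFF000000u;   // placeholder "shade"
}

void rasterize_parallel(Framebuffer& fb, int num_threads) {
    std::vector<std::thread> workers;
    int band = (fb.height + num_threads - 1) / num_threads;
    for (int t = 0; t < num_threads; ++t) {
        int y0 = t * band;
        int y1 = std::min(fb.height, y0 + band);
        if (y0 < y1)
            workers.emplace_back(rasterize_band, std::ref(fb), y0, y1);
    }
    for (std::thread& w : workers)
        w.join();   // each thread owned its own rows, so there is no write sharing
}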
-------------------------------------------------------------------------------------
Info on the previous version of Pixomatic
Pixomatic 2 (DirectX 7) was used in Unreal Tournament 2004, Microsoft Flight Simulator 2004, and other games.
At the heart of Pixomatic 2's performance and quality is what we call the welder,
the software that compiles the pixel pipeline on the fly whenever the rasterization state changes,
producing code equivalent to hand-tuned assembly language.
(In fact, it effectively is hand-tuned assembly code; the compilation involves intelligent stitching together
and fixing up of hand-optimized code fragments.) The welded pixel pipeline uses all 8 MMX registers and
all 8 general-purpose registers to keep dynamic variables in registers at almost all times ...
The texel lookup itself requires a mere 5 instructions, thanks to careful use of MMX.
Only one branch - the loop branch - is performed per pixel, apart from the z, stencil,
and alpha tests, if they're enabled. The pipeline early-outs on z or stencil failure.
The span generator automatically uses SSE or 3DNow if either is present,
and the SSE version is written entirely in hand-tuned assembly language,
with 7 general-purpose registers, 8 MMX registers, and 6 XMM registers in use simultaneously.
Z prefetching is used to improve effective memory latency when prefetch instructions are available.
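A heavily simplified way to picture what the welder does, as a C++ sketch: the real welder stitches hand-tuned machine-code fragments together at run time, but the effect, one level up, is like picking a pre-specialized inner loop once per state change so the per-pixel path carries no per-feature branches. All the names here are invented for illustration.

#include <cstdint>

// Tiny stand-in for the rasterization state; the real thing has many more
// dimensions (texture format, blend mode, z/stencil/alpha tests, ...).
struct RasterState { bool alpha_blend; };

using PixelLoop = void (*)(uint32_t* dst, const uint32_t* src, int count);

// Pre-specialized inner loops: one per state combination, no per-pixel branching.
static void loop_copy(uint32_t* dst, const uint32_t* src, int n) {
    for (int i = 0; i < n; ++i) dst[i] = src[i];
}
static void loop_blend(uint32_t* dst, const uint32_t* src, int n) {
    for (int i = 0; i < n; ++i)
        dst[i] = ((dst[i] >> 1) & 0x7F7F7F7Fu) + ((src[i] >> 1) & 0x7F7F7F7Fu);  // crude 50/50 blend
}

// The "weld" step: run once whenever the state changes. The real welder emits
// machine code here; this sketch just selects a ready-made specialization.
PixelLoop weld(const RasterState& s) {
    return s.alpha_blend ? loop_blend : loop_copy;
}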
Intel keeps number of Larrabee cores under wraps (http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9111498&intsrc=hm_list)
Intel's Larrabee Architecture Disclosure: A Calculated First Move (http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3367)
Intel says 48 core graphics just over the horizon (http://www.reghardware.co.uk/2008/08/04/larrabee_gpu_benifits/)
The beauty of Larrabee isn't that it has anything at all to do with graphics; it's that it's what's been coming even if graphics wasn't part of the focus.
4 cores, 8 cores, 16 cores, 32 cores, ... asymmetric many-cores.
That's the future.
The fact that such architectures could replace the need for GPUs is merely a consequence, not the goal itself.
What is very interesting are the real-time raytracing prospects of a fully programmable many-core. Real-time raytracing is inevitable on many-cores because of the way raytracing scales algorithmically as O(log N), where N is the number of primitives within the view frustum. This is different from rasterization, which scales as O(N).
Rasterization has a much smaller per-iteration constant, so for "small N" it is superior (much like bubble-sort beating quicksort for small N). Research suggests that the break-even point, where rasterization and raytracing are neck and neck due to the relative per-iteration constants, is somewhere between 1 million and 10 million primitives. This just happens to be the magnitude games are starting to approach today. Another factor is how trivial it is to scale a raytracer up to more cores, whereas for rasterization it is non-trivial due to shared-memory considerations: raytracing begins with "for each pixel..." and writes sequentially, whereas rasterization begins with "for each primitive..." and writes randomly.
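The asymptotic difference falls straight out of the two loop structures. Here's a compilable C++ skeleton of both (everything interesting is stubbed out or left as comments; it's only meant to show where the O(N) versus O(log N) behaviour and the write patterns come from):

#include <cstddef>
#include <vector>

struct Prim { /* triangle data */ };

// Rasterization: the outer loop visits every primitive in the frustum, O(N),
// and each primitive scatters writes to whatever pixels it happens to cover.
void rasterize(const std::vector<Prim>& prims, std::vector<unsigned>& fb, int w, int h) {
    for (std::size_t i = 0; i < prims.size(); ++i) {
        // for each pixel covered by prims[i]:
        //     fb[y * w + x] = shade(...);    // random-access writes, shared between cores
    }
}

// Raytracing: the outer loop visits every pixel, and each ray descends a spatial
// hierarchy over the N primitives, roughly O(log N) per ray. Writes are sequential
// and private, which is why it splits across cores so easily.
void raytrace(std::vector<unsigned>& fb, int w, int h /*, const Bvh& bvh */) {
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            // Ray r = camera_ray(x, y);
            // Hit hit = traverse(bvh, r);    // ~log N node visits
            fb[y * w + x] = 0;                // each core writes only its own pixels
        }
}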
And FYI, Larrabee isn't a response to anything NVidia/ATI did. Larrabee has been in the works for a very long time now. NVidia has been putting out press releases trying to play down the danger Larrabee represents, while putting out other press releases saying that they intend to put together a similarly powered architecture.
AMD purchased ATI in response to what Intel has been working on.
NVidia is now between a rock and a hard place (Intel and AMD), two giants who are both threatening to blow them out of the performance water in due time, because NVidia doesn't get to design the CPUs that will command their own flavor of many-core. NVidia has to piggy-back on Intel and AMD's products and that puts them at a very severe long-term disadvantage. The gap will grow because neither Intel nor AMD is going to hand them the keys to the city.
I still like NVidia's video cards today, but I suggest selling their stock before it's too late.
Quote
NVidia has to piggy-back on Intel and AMD's products and that puts them at a very severe long-term disadvantage.
And you see no possibility of NVIDIA moving into the x86 CPU business?
Quote from: MichaelW on August 26, 2008, 03:18:17 PM
Quote
NVidia has to piggy-back on Intel and AMD's products and that puts them at a very severe long-term disadvantage.
And you see no possibility of NVIDIA moving into the x86 CPU business?
I heard stuff from people in the business that Nvidia was definitely doing that.
EDIT: I was also contacted by two different recruiters from the Larrabee group at Intel.
EDIT2: I have a link to the website of those specific jobs for that group; there are about 9. If you want me to post it, let me know.
EDIT3: I grabbed the link from my inbox :)
http://www.intel.com/jobs/careers/visualcomputing/
Quote from: MichaelW on August 26, 2008, 03:18:17 PM
Quote
NVidia has to piggy-back on Intel and AMD's products and that puts them at a very severe long-term disadvantage.
And you see no possibility of NVIDIA moving into the x86 CPU business?
They don't have a license to do that. Think they will pay for one?
Welcome to the world of IP rights. You can't sell an x86 clone without paying Intel a license fee, and you can't sell an x86-64 clone without paying both Intel and AMD a license fee.
AMD is paying Intel and Intel is paying AMD, so for them it's a wash. For a 3rd player... well... what do they have to trade?
Quote from: Rockoon on August 26, 2008, 11:40:43 PM
Quote from: MichaelW on August 26, 2008, 03:18:17 PM
Quote
NVidia has to piggy-back on Intel and AMD's products and that puts them at a very severe long-term disadvantage.
And you see no possibility of NVIDIA moving into the x86 CPU business?
They don't have a license to do that. Think they will pay for one?
Welcome to the world of IP rights. You can't sell an x86 clone without paying Intel a license fee, and you can't sell an x86-64 clone without paying both Intel and AMD a license fee.
AMD is paying Intel and Intel is paying AMD, so for them it's a wash. For a 3rd player... well... what do they have to trade?
Yepper, the way it works is Nvidia would buy a company that is already licensed to sell x86. If I remember correctly it was Asus, but it was a while back when I heard about it.
NVidia is firmly denying rumors of them entering the x86 market...
http://www.pcpro.co.uk/news/221229/nvidia-we-wont-build-a-cpu.html
Quote from: Rockoon on August 27, 2008, 04:38:00 PM
NVidia is firmly denying rumors of them entering the x86 market...
http://www.pcpro.co.uk/news/221229/nvidia-we-wont-build-a-cpu.html
Every big company does that. Until they are ready to announce it, they firmly deny it. When big companies deny they are going to do something, take it with a grain of salt. I've been in the industry quite a while, and this is standard practice.
they have to do it to remain competitive with Intel and AMD/ATI.
Mark
In this case though, they didn't just say that they 'won't' try to compete with Intel; they are saying they 'can't' compete with Intel.
Quote from: Rockoon on August 27, 2008, 05:13:33 PM
In this case though, they didn't just say that they 'won't' try to compete with Intel; they are saying they 'can't' compete with Intel.
Still more garbage. When they entered the 3D market, they were competing against ATI and 3dfx, who had been in the market longer and had more resources and experience. Yet they still competed and eventually ended up on top. They eventually bought 3dfx.
I have one friend at Nvidia whom I used to work with at Dell. He works in the video BIOS group. He was the one who told me. Nvidia itself can't enter the x86 market and be competitive; it would take them years to get to that point. That is why they would have to buy a company with lots of x86 experience. So technically their statement is true: it'll be some company that they buy that does it.
EDIT:
So let's look at two of the quotes from the article. The quote below isn't true: they actually sell a range of products, including motherboards. He is just trying to misdirect. How is a motherboard a "visual product"? Here is their shopping page.
http://store.nvidia.com
Quote
"That's not our business," he insisted. "It's not our business to build a CPU. We're a visual computing company, and I think the reason we've survived the other 35 companies who were making graphics at the start is that we've stayed focused."
This is also completely a misdirect. The 3D graphics market is just as fierce as the x86 market; that is why most of the 3D companies went bye-bye and only two remain. And again, they wouldn't be doing it themselves; they would buy a company with many years of experience. Not to mention the fact that the product cycle for GPUs started at two years and kept getting shorter and shorter; it's about six months now. Do you think Intel could produce a new x86 processor in six months? Hell no. So I would actually argue that the 3D graphics market is even more intense. In addition, have you ever looked at CUDA? It's a platform for programming an Nvidia GPU directly in C. I point you to this Dr. Dobb's article about it. The guy writing the article shows you how to turn your computer into a supercomputer. He has access to a real supercomputer and posts his timings for both; the speedup from using CUDA is usually in the range of 50 times faster than the standard applications a supercomputer would run. I highly recommend y'all read it. He goes into the speedups on the first page.
http://www.ddj.com/hpc-high-performance-computing/207200659
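For anyone who hasn't looked at it yet, here's roughly what minimal CUDA code looks like. This is just a generic vector-add sketch of my own, not anything from the article: you write the per-element kernel, launch it across thousands of threads, and copy the results back.

#include <cuda_runtime.h>
#include <cstdio>

// One thread per element; the block/thread indices give each thread its slot.
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    vec_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);   // 4096 blocks of 256 threads
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);  // copy back (also waits for the kernel)

    printf("c[0] = %f\n", hc[0]);                       // expect 3.000000

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}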
Quote
"He also pointed out that such a move would expose the company to fierce competition. "Are we likely to build a CPU and take out Intel?" he asked."I don't think so, given their thirty-year head start and billions and billions of dollars invested in it. I think staying focused is our best strategy."
Everyone agrees that the future of computing is to combine a CPU and a GPU. That is why Intel is doing Larrabee: so they can extend their knowledge of the graphics market. Their graphics solutions so far have been bad, but now with Larrabee they are taking it seriously.
Quote from: Mark_Larson on August 27, 2008, 05:24:59 PM
Still more garbage. When they entered the 3D market, they were competing against ATI and 3dfx, who had been in the market longer and had more resources and experience.
Note the differences, not the similarities.
a) The 3D market was rapidly growing.
b) They didn't use their own money. Instead, Sequoia Capital took the risk, lucky for them.
Do you think that before they asked Sequoia for money, they announced that they couldn't compete with ATI or 3dfx, and then went shopping for money anyway?
Here, they announce that they cannot compete with Intel or AMD, pretty much putting the nix on ever getting any external investors. It also rules out mergers, stock trades, and other common methods of acquiring another company's assets. Their only choice now is a flat buy-out, and I highly doubt that they have enough liquidity to do so on their own. They can't borrow the money now, they can't use their stock as collateral now, and a massive public offering would open up the door for AMD or Intel to gobble them up.
...and even after they acquire a license to produce x86 clones, they still have to spend a billion or two on R&D, since nobody else has any competitive technology which they could acquire.
IMHO NVidia is highly unlikely to be making x86 clones any time soon, if ever. They will eventually downsize.
Mark,
The CUDA stuff looks interesting. I wonder just how general purpose it is; the massive multithreading interests me.
Quote from: hutch-- on August 31, 2008, 06:29:24 AM
Mark,
The CUDA stuff looks interesting. I wonder just how general purpose it is; the massive multithreading interests me.
Extremely general purpose and highly parallel; that is what shocked me. I've seen raytracers using it. Pi programs could use it, because they rely on an FFT. I have it installed under Linux and I am playing with it, but I am having a problem compiling code.
For the geeks, I recommend y'all download it and play around with it. It is way cool :)
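On the FFT point: you wouldn't necessarily have to write that kernel yourself, since the CUDA toolkit ships a GPU FFT library (CUFFT). A minimal sketch of calling it, with the transform size and variable names made up and the data transfers elided:

#include <cuda_runtime.h>
#include <cufft.h>

int main() {
    const int n = 1 << 16;                      // transform length (example value)

    cufftComplex* d_signal;
    cudaMalloc((void**)&d_signal, n * sizeof(cufftComplex));
    // ... cudaMemcpy the input signal into d_signal ...

    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);        // one 1-D complex-to-complex transform
    cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD);  // in-place forward FFT on the GPU
    // ... cudaMemcpy the result back to the host ...

    cufftDestroy(plan);
    cudaFree(d_signal);
    return 0;
}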
I'm running it on Windows, and it is worth learning how to use even if you don't use it for an everyday purpose.
I have a feeling Larrabee will offer more power though; we'll have to wait and see.