INTEL Larrabee and threat to AMD/ATI/NVIDIA

Started by bozo, July 25, 2008, 02:39:45 AM


bozo

I've been reading a little about Larrabee, INTEL's reaction to CUDA on NVIDIA and ?? (ATI SDK) on ATI GPUs.
Apparently a developer release is scheduled for November this year.

What are your thoughts on Larrabee? Personally I see it having an advantage over NVIDIA/ATI since it supports the x86 instruction set, which we're well familiar with by now.

But at the same time, I'd rather not see the last few remaining companies go under just because INTEL has deep pockets.
IMHO, without AMD we'd still be stuck with P4 chips, and without the GeForce 8 series we wouldn't have Larrabee on the horizon. So it's good for all of us that the competition is there, but it looks like INTEL is eating them up.

By the way, I know this is probably a better topic for the Colosseum; feel free to move it there.

Cobra

I don't know if you know this or not, but Intel already sells more graphics hardware than Nvidia and ATi combined.

I love my GeForce cards, but I honestly don't see Intel crushing the competition any time soon. Both Nvidia and ATi are too entrenched, and it would take a lot to unseat either of them. Graphics is not something Intel has ever done well, especially in the dedicated and enthusiast markets, where they have no presence at all.

I do hope that their offering is at least decent, because that will put a bit of pressure on the other h/w makers and will only drive innovation and competition. And that, my friend, will give you, me, and everyone else lower prices.

Cheers!

c0d1f1ed

Quote from: Kernel_Gaddafi on July 25, 2008, 02:39:45 AM
Personally I see it having an advantage over NVIDIA/ATI since it supports the x86 instruction set, which we're well familiar with by now.

That's not so much of an advantage. Any generic ISA would have worked. The vector extensions are believed to be entirely new anyway, so it might not resemble x86 very closely. The biggest advantage is that it's much closer to a multi-core CPU than other GPUs. You can program it in much the same way, while with today's GPUs you have to be really careful about data layout and processing order. Larrabee has a cache hierarchy which should make it much more accessible for the average programmer. So while GPUs are still only really good at graphics and closely related things, Larrabee could run far more diverse software at high throughput.
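To make that concrete, here's a minimal sketch of what I mean by "program it much like a multi-core CPU": plain Windows threads working over shared arrays, with the cache hierarchy handling the data movement. This is ordinary C, nothing Larrabee-specific, and the thread count, array size and per-element work are made up purely for illustration.

#include <windows.h>
#include <stdio.h>

#define N        (1 << 20)
#define NTHREADS 4

static float src[N], dst[N];

typedef struct { int first, last; } Range;

static DWORD WINAPI worker(LPVOID arg)
{
    Range *r = (Range *)arg;
    for (int i = r->first; i < r->last; i++)
        dst[i] = src[i] * 2.0f + 1.0f;      /* any per-element work */
    return 0;
}

int main(void)
{
    HANDLE th[NTHREADS];
    Range  rg[NTHREADS];

    /* split the loop range across the cores, nothing more */
    for (int t = 0; t < NTHREADS; t++) {
        rg[t].first = t * (N / NTHREADS);
        rg[t].last  = (t + 1) * (N / NTHREADS);
        th[t] = CreateThread(NULL, 0, worker, &rg[t], 0, NULL);
    }
    WaitForMultipleObjects(NTHREADS, th, TRUE, INFINITE);
    printf("%f\n", dst[123]);
    return 0;
}

On a GPU you'd instead have to express this as a kernel and think hard about how the data is laid out and streamed; here the caches absorb that.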

Quote: But at the same time, I'd rather not see the last few remaining companies go under just because INTEL has deep pockets.

They won't. First of all, it's not like the other companies have no money at all and are sitting on their asses all day. Intel may have an advantage in process technology but lacks experience in designing this type of chip. The next round of chips from NVIDIA and AMD will have an architecture close to that of Larrabee, or better. Secondly, there's antitrust law. It makes it unlikely for Intel to ever have a monopoly in any market, no matter how deep their pockets are.

Quote: IMHO, without AMD we'd still be stuck with P4 chips, and without the GeForce 8 series we wouldn't have Larrabee on the horizon. So it's good for all of us that the competition is there, but it looks like INTEL is eating them up.

Why the panic? Larrabee hasn't proven anything yet. The specifications might look impressive but by the time Larrabee hits the retail market there might be far more impressive products from NVIDIA and/or AMD.

Either way, there are definitely interesting times ahead. To some extent Intel might even become its own competitor. Desktop CPUs are rapidly gaining performance due to multi-core and things like AVX/FMA...

c0d1f1ed

Quote from: Cobra on July 27, 2008, 04:25:39 PM
I don't know if you know this or not, but Intel already sells more graphics hardware than Nvidia and ATi combined.

That's only because of the number of integrated chips on motherboards and in laptops. These cost a few bucks apiece, so they're not making big money out of those sales. Furthermore, for desktops, many of those motherboards still get a discrete graphics card (essentially making the IGP useless). So you have to put these market share numbers into perspective.

Quote: ...I honestly don't see Intel crushing the competition any time soon. Both Nvidia and ATi are too entrenched, and it would take a lot to unseat either of them. Graphics is not something Intel has ever done well, especially in the dedicated and enthusiast markets, where they have no presence at all.

Couldn't agree more.

bozo

Quote: Larrabee has a cache hierarchy which should make it much more accessible for the average programmer. So while GPUs are still only really good at graphics and closely related things, Larrabee could run far more diverse software at high throughput.

This is exactly why INTEL and their Larrabee product may have the advantage over AMD/ATI or NVIDIA. ;)

Quote: They won't. First of all, it's not like the other companies have no money at all and are sitting on their asses all day.

Well, if you've been following the financial markets, you'll notice that AMD posted a loss of $1.19 billion, the 17th loss in a row... who is investing in this company?

AMD bought ATI for how much? And how much is it worth now? Certainly not what they paid.
INTEL has dramatically dropped the price of its chips, putting pressure on AMD to do the same.

Quote: Intel may have an advantage in process technology but lacks experience in designing this type of chip.

True, but it's not like they couldn't reverse engineer NVIDIA's or ATI's products and learn from them.
Is that hard to believe? Not for me.

c0d1f1ed

Quote from: Kernel_Gaddafi on July 28, 2008, 02:55:36 PM
This is exactly why INTEL and their Larrabee product may have the advantage over AMD/ATI or NVIDIA. ;)

Sure, but the real question then becomes: for how long? Both NVIDIA and ATI have been using the same overall architecture for over two years now. They have certainly been working on something entirely new, and must realize that the only way forward is to make things more accessible for programmers. Technologies such as eDRAM and ZRAM could make Larrabee's advantages insignificant.

Quote: Well, if you've been following the financial markets, you'll notice that AMD posted a loss of $1.19 billion, the 17th loss in a row... who is investing in this company?

Right before the successful Athlon 64 appeared, AMD wasn't doing so great financially either. They have to take the beating, lick their wounds, and then strike again when the time is right. Phenom doesn't have a bad architecture, but it looks like it requires 45 nm technology to reach higher clocks and to enable bigger caches. Note that AMD is actually ahead of Intel by having a native quad-core and an integrated memory controller. Their experience with this could lead to a successful competitor to Nehalem.

Quote: AMD bought ATI for how much? And how much is it worth now? Certainly not what they paid.

Return on investment doesn't happen overnight. New products that take the best of both AMD's and ATI's technology could still be years away. But just imagine what they can achieve. Larrabee is clearly a GPU designed by a CPU designer, and could fail miserably. AMD, on the other hand, has everything it needs to create a successful merge of CPU and GPU technology...

Just look at the (unexpected) success of AMD's Radeon 4850/4870 and the bad reception of NVIDIA's GeForce 260/280. For a while it looked like AMD could never create a more successful card, but they pulled it off thanks to superior design.

Quote: True, but it's not like they couldn't reverse engineer NVIDIA's or ATI's products and learn from them.
Is that hard to believe? Not for me.

They are making something entirely different, based on CPU technology. So no, I don't expect they're reverse engineering anything. They are just arrogant enough to think they can beat NVIDIA and AMD. Maybe they can, but I wouldn't put any bets on it. After all, this is their very first chip aimed at the enthusiast market. They are entering the playground of NVIDIA and ATI, and anything can happen... So sit back and watch the spectacle with a large bag of popcorn. :8)

dsouza123

A complete DirectX 9 rasterizer has already been done in software for x86 chips;
it's called Pixomatic 3 and was bought by Intel a few years ago.

Some of the people who worked on it are now on the Larrabee software team.

Larrabee uses multiple in-order x86 CPU cores that are augmented by a wide vector processor unit,
as well as fixed-function co-processors. This provides dramatically higher performance per watt and
per unit of area than out-of-order CPUs on highly parallel workloads and greatly increases the flexibility
and programmability of the architecture as compared to standard GPUs.

Pixomatic 3 info
http://www.radgametools.com/pixomain.htm

The Larrabee info is part of the abstract of a Siggraph 2008 paper.
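To get a feel for what the "wide vector processor unit" in that abstract buys you, here's the same principle shown with plain SSE, which only handles 4 floats per instruction (Larrabee's unit is reportedly much wider). The function below is just an invented example of one instruction operating on several data elements at once; it is not Larrabee or Pixomatic code.

#include <xmmintrin.h>

/* out[i] = a[i]*k + b[i], four floats per SSE instruction */
void scale_add(float *out, const float *a, const float *b, int n, float k)
{
    __m128 vk = _mm_set1_ps(k);
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vk), vb));
    }
    for (; i < n; i++)                      /* scalar tail */
        out[i] = a[i] * k + b[i];
}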

c0d1f1ed

Quote from: dsouza123 on August 01, 2008, 12:36:13 AM
A complete DirectX 9 rasterizer has already been done in software for x86 chips;
it's called Pixomatic 3 and was bought by Intel a few years ago.
Interesting. I wonder how performance compares to SwiftShader. Also, would Pixomatic 3 actually run on Larrabee or is it more like a research software renderer?

dsouza123

Pixomatic 3 (DirectX 9)

Optimization
- SSE-optimized vertex and pixel shaders
- SSE2 used if available
- 16 general-purpose and 16 XMM registers used in 64-bit mode
- Automatic support for multithreading across up to 16 cores

Requirements
- An x86-compatible processor with SSE
- Microsoft Windows, either 32- or 64-bit

Performance

Pixomatic 3 is about 4 times slower than Pixomatic 2, due to its full DX9 compatibility.
However, Pixo 3 was also designed with multi-CPU support built right in!
With near-perfect linear scaling, Pixomatic takes full advantage of multi-core and multi-CPU machines.

At the heart of Pixomatic 3 is an Intel-optimized SSE shader compiler.
This is the same compiler that drives the vertex shading on some Intel hardware GPUs;
we have optimized it further and added pixel shading features.
Your DX9 games will run, right out of the box, just a bit slower!

-------------------------------------------------------------------------------------
Info on the previous version of pixomatic

Pixomatic 2 (DirectX 7) was used in Unreal Tournament 2004 and Microsoft Flight Simulator 2004 and other games.

At the heart of Pixomatic 2's performance and quality is what we call the welder,
the software that compiles the pixel pipeline on the fly whenever the rasterization state changes,
producing code equivalent to hand-tuned assembly language.
(In fact, it effectively is hand-tuned assembly code; the compilation involves intelligent stitching together
and fixing up of hand-optimized code fragments.) The welded pixel pipeline uses all 8 MMX registers and
all 8 general-purpose registers to keep dynamic variables in registers at almost all times ...

The texel lookup itself requires a mere 5 instructions, thanks to careful use of MMX.
Only one branch - the loop branch - is performed per pixel, apart from the z, stencil,
and alpha tests, if they're enabled. The pipeline early-outs on z or stencil failure.

The span generator automatically uses SSE or 3DNow if either is present,
and the SSE version is written entirely in hand-tuned assembly language,
with 7 general-purpose registers, 8 MMX registers, and 6 XMM registers in use simultaneously.
Z prefetching is used to improve effective memory latency when prefetch instructions are available.
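To make the "welder" idea a bit more concrete, here's a toy sketch of the general technique (my own illustration, not Pixomatic code, and it assumes a 32-bit cdecl build): pre-written machine-code fragments are copied end-to-end into an executable buffer at run time and then called as an ordinary function. The real welder also patches constants and register assignments inside the fragments, which this doesn't attempt.

#include <windows.h>
#include <string.h>
#include <stdio.h>

/* hand-written x86 fragments to be "welded" together */
static const BYTE frag_load[] = { 0x8B, 0x44, 0x24, 0x04 };  /* mov eax,[esp+4] */
static const BYTE frag_add[]  = { 0x03, 0x44, 0x24, 0x08 };  /* add eax,[esp+8] */
static const BYTE frag_ret[]  = { 0xC3 };                    /* ret             */

typedef int (__cdecl *AddFn)(int, int);

int main(void)
{
    /* allocate a writable + executable buffer */
    BYTE *buf = VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE,
                             PAGE_EXECUTE_READWRITE);
    BYTE *p = buf;

    /* stitch the fragments together in order */
    memcpy(p, frag_load, sizeof frag_load); p += sizeof frag_load;
    memcpy(p, frag_add,  sizeof frag_add);  p += sizeof frag_add;
    memcpy(p, frag_ret,  sizeof frag_ret);  p += sizeof frag_ret;

    /* call the freshly welded code */
    printf("%d\n", ((AddFn)buf)(2, 3));     /* prints 5 */

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}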


Rockoon

The beauty of Larrabee isn't that it has anything at all to do with graphics; it's that it's what's been coming even if graphics wasn't part of the focus.

4 cores, 8 cores, 16 cores, 32 cores, ... asymmetric many-cores.

That's the future.

The idea that such architectures could replace the need for GPUs is merely a consequence, not the goal itself.

What is very interesting is the real-time raytracing prospect of a fully programmable many-core. Real-time raytracing is inevitable on many-cores because raytracing scales algorithmically as O(log N), where N is the number of primitives within the view frustum. This is different from rasterization, which scales as O(N).

Rasterization has a much smaller per-iteration constant, so for "small N" it is superior (much like bubble-sort beating quicksort for small N). Research suggests that the break-even point where rasterization and raytracing are neck-and-neck, due to the relative per-iteration constants, is somewhere between 1 million and 10 million primitives. This just happens to be the magnitude that games are starting to approach today. Another factor is how trivial it is to scale up a raytracer to more cores, whereas for rasterization it is non-trivial due to shared memory considerations: raytracing begins "for each pixel..." and writes sequentially, whereas rasterization begins "for each primitive..." and writes randomly.
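To show those two loop shapes side by side, here's a bare-bones sketch in C. The scene, the shading, and the primitive placement are trivial stand-ins I made up just so the thing compiles; the point is only that the raytraced loop writes each pixel exactly once from its own iteration (so pixels split across cores with no locking), while the rasterizer's writes land wherever each primitive happens to fall, so parallel cores would have to coordinate their framebuffer stores.

#include <stdio.h>
#include <stdlib.h>

typedef struct { float r, g, b; } Color;

/* trivial stand-in: one result per pixel, touched nowhere else */
static Color trace_pixel(int x, int y)
{
    Color c = { (float)x, (float)y, 0.0f };
    return c;
}

/* trivial stand-in: each primitive scribbles somewhere in the framebuffer */
static void raster_primitive(int i, Color *fb, int w, int h)
{
    int px = (i * 37) % w, py = (i * 91) % h;
    fb[py * w + px].r += 1.0f;
}

/* raytracing: "for each pixel..." - sequential, independent writes */
static void render_raytraced(Color *fb, int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            fb[y * w + x] = trace_pixel(x, y);
}

/* rasterization: "for each primitive..." - scattered writes into shared memory */
static void render_rasterized(Color *fb, int w, int h, int nprims)
{
    for (int i = 0; i < nprims; i++)
        raster_primitive(i, fb, w, h);
}

int main(void)
{
    enum { W = 64, H = 64 };
    Color *fb = calloc(W * H, sizeof *fb);
    render_raytraced(fb, W, H);
    render_rasterized(fb, W, H, 1000);
    printf("%f\n", fb[0].r);
    free(fb);
    return 0;
}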

And FYI, Larrabee isn't a response to anything NVidia/ATI did. Larrabee has been in the works for a very long time now. NVidia has been putting out press releases trying to play down the danger Larrabee represents, while putting out other press releases saying that they intend to put together a similarly powered architecture.

AMD purchased ATI in response to what Intel has been working on.

NVidia is now between a rock and a hard place (Intel and AMD): two giants who are both threatening to blow them out of the water, performance-wise, in due time, because NVidia doesn't get to design the CPUs that will command their own flavor of many-core. NVidia has to piggy-back on Intel and AMD's products and that puts them at a very severe long-term disadvantage. The gap will grow because neither Intel nor AMD is going to hand them the keys to the city.

I still like NVidia's video cards today... but I suggest selling their stock before it's too late.
When C++ compilers can be coerced to emit rcl and rcr, I *might* consider using one.

MichaelW

Quote: NVidia has to piggy-back on Intel and AMD's products and that puts them at a very severe long-term disadvantage.

And you see no possibility of NVIDIA moving into the x86 CPU business?
eschew obfuscation

Mark_Larson

Quote from: MichaelW on August 26, 2008, 03:18:17 PM
Quote: NVidia has to piggy-back on Intel and AMD's products and that puts them at a very severe long-term disadvantage.

And you see no possibility of NVIDIA moving into the x86 CPU business?


I heard from people in the business that Nvidia was definitely doing that.

EDIT:  I was also contacted by two different recruiters from the Larrabee group at Intel.

EDIT2:  I have a link to the website of those specific jobs for that group; there are about 9. If you want me to post it, let me know.

EDIT3:  I grabbed the link from my inbox :)
http://www.intel.com/jobs/careers/visualcomputing/
BIOS programmers do it fastest, hehe.  ;)

My Optimization webpage
http://www.website.masmforum.com/mark/index.htm

Rockoon

Quote from: MichaelW on August 26, 2008, 03:18:17 PM
Quote: NVidia has to piggy-back on Intel and AMD's products and that puts them at a very severe long-term disadvantage.

And you see no possibility of NVIDIA moving into the x86 CPU business?


They don't have a license to do that. Think they will pay for one?

Welcome to the world of IP rights. You can't sell an x86 clone without paying Intel a license fee, and you can't sell an x86-64 clone without paying both Intel and AMD a license fee.

AMD is paying Intel and Intel is paying AMD... for them it's a wash. For a third player... well... what do they have to trade?
When C++ compilers can be coerced to emit rcl and rcr, I *might* consider using one.

Mark_Larson

Quote from: Rockoon on August 26, 2008, 11:40:43 PM
Quote from: MichaelW on August 26, 2008, 03:18:17 PM
Quote: NVidia has to piggy-back on Intel and AMD's products and that puts them at a very severe long-term disadvantage.

And you see no possibility of NVIDIA moving into the x86 CPU business?


They don't have a license to do that. Think they will pay for one?

Welcome to the world of IP rights. You can't sell an x86 clone without paying Intel a license fee, and you can't sell an x86-64 clone without paying both Intel and AMD a license fee.

AMD is paying Intel and Intel is paying AMD... for them it's a wash. For a third player... well... what do they have to trade?


Yepper, the way it works is Nvidia would buy a company that is already licensed to sell x86. If I remember correctly it was Asus? But it was a while back when I heard about it.
BIOS programmers do it fastest, hehe.  ;)

My Optimization webpage
http://www.website.masmforum.com/mark/index.htm