Dual processor action

Started by RedXVII, May 02, 2006, 12:24:13 PM

RedXVII

Yo!

Dual processors seem to be coming forward a bit. What I want to know is what effect this will have on writing programs in assembly. With 64-bit processors it's obvious what it would mean to a MASM programmer. But dual processors: what do they actually mean? Or is it all just handled by the OS and we continue as normal? Will there be any extra new ways of doing things?

Anyone got any info on this?

Cheers  :U

Ossa

I guess it just becomes much more efficient to use multithreaded apps now. But the optimisation of those can be a bit difficult. That's all it means to me anyway.

Ossa
Website (very old): ossa.the-wot.co.uk

asmfan

The only difficulty is getting the parallel, deadlock-free logic right in the program... creating the needed threads everywhere, nonsequential execution... etc.
Russia is a weird place

RedXVII

#3
My thread programming is pretty rubbish. But the thing is, how do you know if the thread is running on the other processor or just on the same one? If it isn't where you expect, it could just f*ck some things up.

I suspect someone will write some sort of large manual on it.

Ratch

RedXVII,
    I used to work with a mainframe where multiprocessor/multiuser/multitasking operations were no big thing, especially if the OS was up to the job. Tasks get put on a switch list prioritized by the OS, and then any idle processor can grab the next task off the list and run with it. As long as your job mix is such that the tasks do not fight over access to the same resources, for instance writing to the same part of a particular file, the multiprocessor system should be transparent to the user. In a multiuser system this was usually not a big problem either, because the users usually had their own files or accessed a common file on a read-only basis.

    Problems occur when a particular processor has some special feature, for example floating point capability, where the task or the OS has to designate a specific processor to do the work. Then there might be an asymmetric load on a particular processor. But if all processors have equivalent capabilities, no sweat. The designers found that up to four processors was the optimum before they started tripping over each other for memory or I/O channels. Nowadays, with newer technology, perhaps more CPUs can be run together.

    As a user, you should notice increased performance with a multiprocessor system, especially if it has a memory management system that allows each processor to access all memory addresses and manages any conflicts, instead of just allotting each processor an exclusive block of memory.  Ratch

Ratch

RedXVII,

Quote... how do you know if the thread is running on the other processor or not...

     Why should you give a damn?  As long as the task gets done, who cares which CPU did it?  For the most part, a dually should be transparent to you unless the processors have asymmetric capabilities.  All you have to do is make sure the threads synchronize with each other, just as you need to do now on a uniprocessor.  Ratch

Mark Jones

BTW, setting a task to a specific processor is called "affinity", at least in NT. I had a dual Pentium Pro box once with NT on it, and in Task Manager it was possible to assign tasks to certain processors. I'm sure 2k/XP is leaps and bounds ahead of this behavior though.
"To deny our impulses... foolish; to revel in them, chaos." MCJ 2003.08

RedXVII

What I meant was, let's say I had a program which did stuff, and a thread which calculated something and sent a message to the main program. The main program doesn't know when the calculating thread has completed its task. Isn't it possible that the main program could miss the check on the calculated solution from the thread, and continue on without the response it needed? This could potentially lead to waits/lags or, if it only checked once (badly programmed), cause the program to malfunction [kind of defeating the whole point of multiple processors]?

I thought dual processing was meant to make our computers faster (super fast games), but in order to use it, you have to multithread; but then the threads are fighting over resources...

Well, it's good to know that we won't have to change the way we program much.

Red  :U

Ratch

RedXVII,

Quote
What I meant was, let's say I had a program which did stuff, and a thread which calculated something and sent a message to the main program. The main program doesn't know when the calculating thread has completed its task. Isn't it possible that the main program could miss the check on the calculated solution from the thread, and continue on without the response it needed? This could potentially lead to waits/lags or, if it only checked once (badly programmed), cause the program to malfunction [kind of defeating the whole point of multiple processors]?

     There are synchronizing APIs and techniques that compel threads to execute in the correct order, and keep a phenomenon called "deadly embrace" (deadlock) from happening.  If there is a queue at a single resource, then no amount of extra CPUs is going to make the programs involved with that resource queue run any faster during the queue-up time.

Quote
I thought dual processing was meant to make our computers faster (super fast games), but in order to use it, you have to multithread; but then the threads are fighting over resources...

     If one has X programs running, then there are automatically at least X threads present.  If two of the programs/threads are going after a file at the same time, one thread will have to wait.  Meanwhile, every thread is only allowed to execute for a certain amount of time before it goes back on the switch list to wait for its next turn.  A sophisticated OS will not let a thread off the switch list if it is waiting for a resource.  That is, the OS prioritizes which threads get to execute.  That should be happening on a uniprocessor also, but there is more help to get the job done with two CPUs doing the work.  If a thread is waiting, then other threads that don't need to wait can be run.  I don't understand what you perceive to be the problem, other than two threads trying to access a resource at the same time.  Ratch

Roger

RedXVII,

You should think about the difference between multi-threading/multi-tasking and multi-processing.

With multi-threading/multi-tasking on a single processor we have one programme/thread/task running at any given moment in time, and it is usually the OS's job to share out the available time to each task. It is the job of each thread to signal to other related threads when they have/haven't performed particular tasks and also to make sure that all necessary tasks are completed before progressing forward. Nowadays we expect real-time or multitasking OSs to provide the tools to do this. A significant point with regard to overall speed is that if, for example, your main program has to wait for a solution from a thread, it can be arranged that the wait takes no processing time, and this leaves more time for the other thread to catch up.

When it comes to multi-processing we no longer have the situation where only one thread/task is actually active; several can be active together, and it is now necessary to determine which will run together. If there are separate tasks, i.e. completely separate, say a word processor and a media player for music while you type, then it is relatively easy for the OS to partition the tasks between processors, and we expect the OS to do this transparently. It is when it is required to efficiently split a single task between processors that the current generation of OSs starts to struggle. In your example, when the main program has to wait for the thread on another processor, its processor's time is not available to the thread to speed it up. The OS will need help from somewhere to be able to do this partitioning of a single programme between processors. One obvious way is for the programme to provide the information as part of its loadable file, and a new generation of concurrent programming languages is needed/on its way to do this.


Quote from: Ratch on May 02, 2006, 02:01:32 PM
    Why should you give a damn?  As long as the task gets done, who cares what CPU did it?  
In 3 years/10 years/sometime, multiprocessor boxes will be here, and we, or at least some of us, will want to use them.  Let us hope that someone has cared enough to develop the software needed and that someone has also spent the time making the necessary information available to all of us, rather than it being unreachable within corporate walls.

Regards Roger

RedXVII

Oh, I see. That helps clear things up for me.

Well, it's another case of "don't worry about it, the OS is doing it". Ahhhh, such a convenient world.

Ian_B

The trick is to find which parts of your program can be split up into concurrent tasks. There is a little more management overhead involved, but it can be worth it. And use the multithreading built into the API whenever possible to help a single-threaded app, like the asynchronous (overlapped) IO functions. Even a single-threaded app can benefit from setting multiple consecutive buffers to load in from disk, then working on each buffer in sequence as they become available, while the separate (asynchronous) OS threads continue to read in the next file segments. If you can work on each buffer of data independently and can minimise memory access conflicts, then this is a perfect opportunity to multithread in the app too, as long as you can synchronise when the threads are finished. If you can get asynchronous IO working, you shouldn't have any problems figuring out how to use multithreading effectively.  :U

Ian_B

daydreamer

Well, if somebody owns a multi-CPU machine, why not make a few benchmarks on splitting execution, test-run it on one thread vs. two threads, and put together some different tests?

sluggy

Quote from: !Czealot on May 07, 2006, 02:38:51 PM
Well, if somebody owns a multi-CPU machine, why not make a few benchmarks on splitting execution, test-run it on one thread vs. two threads, and put together some different tests?

There is no need to do this; it will just run faster, but not twice as fast. AFAIK even if there are two distinct processors, most of the time they still share the same memory bus - unless it is a very expensive server mobo. And with dual core, both cores share the same cache. But the object of multi-threading is not to *speed* the app up, it is to spread the load. Dual core/processor machines don't run twice as fast; they just have close to twice the horsepower, which is a big difference.

Just to make your mouths water - I am currently working on a large data warehousing and business intelligence project. It runs on a machine that has 8 CPUs - and while processing we max them all out  :eek :8)

P1

Quote from: sluggy on May 10, 2006, 09:42:12 AM
Just to make your mouths water - I am currently working on a large data warehousing and business intelligence project. It runs on a machine that has 8 CPUs - and while processing we max them all out  :eek :8)
Any clustering plans???

Regards,  P1  :8)