Any advantages to 64 bit?

Started by Damos, August 05, 2009, 02:26:02 PM

Astro

Hi,

Interesting discussion.

Anyone run MS FSX?

On XP 32-bit it runs OK, and on Vista 64-bit it's a bit quicker, but on Windows 7 32-bit it runs faster than either!

I haven't downloaded Win 7 64-bit yet to compare, but this is with it running on the same hardware. Windows 7 is currently using my XP graphics drivers as there is a problem with the newer nVidia drivers.

I can't see how you can avoid a certain amount of memory usage, especially where graphics or video are concerned. The only real advantage I can see to 64-bit is the addressable memory.

Say NO to BLOAT.  :U :cheekygreen:  I think it's the only real way to improve things.

Best regards,
Astro.

dedndave

64-bit architecture, by nature, propagates "bloat"
when you double the width of the architecture, it seems you quadruple the requirement for file and memory space
that translates to needing a faster cpu to handle the enlarged data
which, in turn, translates to ms, intel, and computer mfg companies selling higher volumes of their products
i think that is what it boils down to - it has nothing to do with what the marketplace desires or requires

the problem is, they have the power to force you to upgrade
if ms stops supporting 32-bit OS's, you won't have a choice in the matter
just look how many people are still using Windows 2000 and XP
that is because they never needed vista or above, to begin with

donkey

I agree with Dave here, we seem to be at a plateau of sorts. Back when multimedia and desktop publishing were just gleams in Steve Jobs' eye we needed ever faster processors with more and more memory to meet the demands of having our computers become a complete media and entertainment center. When the Internet took off we needed ever faster communications to meet the demands of ever more complex web pages.

Well, my laptop is A LOT more powerful than my first Win95 PC and not even comparable to my first IBM PC (old school with 2 full-height floppies). It can do everything that I want, has more than enough storage space for what I need, and through my home network I can access almost 2 TB of storage. I can surf the net and download at MBs per sec, I can play full-motion HD videos, and the demands on my system for word processing and spreadsheets haven't changed much in the last ten years, so that's not an issue.

So, what do I need upgrades for ? There is no killer "must have" application out there that pushes me towards upgrading. I do it anyway as one thing or another goes in my PCs or I want a personal laptop, forcing me to Windows Vista, which I must say isn't bad, but seriously, there is little in it that I use that was unavailable in Win2K and certainly nothing I would miss.

My question is what killer application do you think will make the upgrade a must have and put a bump in this plateau ?
"Ahhh, what an awful dream. Ones and zeroes everywhere...[shudder] and I thought I saw a two." -- Bender
"It was just a dream, Bender. There's no such thing as two". -- Fry
-- Futurama

Donkey's Stable

dedndave

that's just it - unless you are doing some high-powered CAD, animation (disney employees ?) or something, you don't need it
what is likely to happen is ms will stop supporting all 32-bit OS's in order to force the issue
at that point, they may find out whether the market will sway, or go another way
we all may be using linux or some other alternative in 10 years - lol
it will be a great time for a new competitor to come along and possibly knock ms off its stool

bruce1948

A comment I found on Donald Knuth's web pages:


Quote: A Flame About 64-bit Pointers
It is absolutely idiotic to have 64-bit pointers when I compile a program that uses less than 4 gigabytes of RAM. When such pointer values appear inside a struct, they not only waste half the memory, they effectively throw away half of the cache.

The gcc manpage advertises an option "-mlong32" that sounds like what I want. Namely, I think it would compile code for my x86-64 architecture, taking advantage of the extra registers etc., but it would also know that my program is going to live inside a 32-bit virtual address space.

Unfortunately, the -mlong32 option was introduced only for MIPS computers, years ago. Nobody has yet adopted such conventions for today's most popular architecture. Probably that happens because programs compiled with this convention will need to be loaded with a special version of libc.

Please, somebody, make that possible.
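
A minimal C sketch (not part of Knuth's text) of the effect he describes, assuming typical ABIs with 4-byte pointers on a 32-bit target and 8-byte pointers plus alignment padding on a 64-bit one:

    /* The same node type compiled for 32-bit and 64-bit targets; the
       sizes in the comments assume common ILP32 / LP64 conventions. */
    #include <stdio.h>

    struct node {
        struct node *next;   /* 4 bytes on 32-bit, 8 bytes on 64-bit */
        int          value;  /* 4 bytes on both                      */
    };

    int main(void)
    {
        /* gcc -m32: sizeof(struct node) == 8
           gcc -m64: sizeof(struct node) == 16 (8-byte pointer, 4-byte int,
                     4 bytes of padding) - the pointer plus padding now take
                     12 of the 16 bytes in every cached copy of the node. */
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }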


Rockoon


Modern high-performance SSDs' lifetimes are equivalent to those of mechanical HDs. Failure is graceful (readable, but not writable) as well.

They have 10,000 or more write cycles, which means that even high-throughput usage will take a very long time to kill the drive.

Let's take one of the raved-about high-performance drives, the 120GB OCZ Vertex, with a sustained write of 100MB/sec.

120GB * 10,000 write cycles = max 1,200,000GB of writes. That's 1,200 terabytes.

1,200,000GB / 100MB/sec = 12,000,000 seconds of 100% write activity to kill the drive (that's over 138 days of constant writing)
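
A quick back-of-the-envelope check of that arithmetic, a sketch using only the figures quoted above (120GB capacity, 10,000 write cycles, 100MB/sec sustained writes):

    #include <stdio.h>

    int main(void)
    {
        double capacity_gb  = 120.0;     /* drive capacity              */
        double write_cycles = 10000.0;   /* assumed write cycles/cell   */
        double write_mb_s   = 100.0;     /* sustained write speed       */

        double total_gb = capacity_gb * write_cycles;      /* 1,200,000 GB  */
        double seconds  = total_gb * 1000.0 / write_mb_s;  /* ~12,000,000 s */
        double days     = seconds / 86400.0;               /* ~138.9 days   */

        printf("total writes : %.0f GB (%.0f TB)\n", total_gb, total_gb / 1000.0);
        printf("time to wear : %.0f seconds (about %.0f days)\n", seconds, days);
        return 0;
    }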

Now, one might argue that it's then possible to kill the drive in 138 days.. and that's true.. but would a mechanical HD survive as long under those same conditions? The answer is almost certainly that no, it wouldn't.

For the average user, these new SSDs will last a decade or more.


The main arguing point against the modern SSDs is that of write verification, making them unsuitable for some enterprise scenarios. On an enterprise-class mechanical RAID array, if the drive reports that something was written, then it was. Power can be lost after the write confirmation, knowing full well that the data is stored. On an SSD RAID array that is not the case. The modern high-performance SSDs require write caches to achieve that performance, and if power is lost soon after write confirmation then the fate of that data is unknown. The solution to this problem (read after write) tanks their write performance back down to below that of mechanical drives.

SSDs are great for home/desktop/laptop users, and are well suited to enterprise environments with UPSs on their data arrays, as well as to WORM (Write Once, Read Many) scenarios, but are not well suited to enterprise environments without UPSs.

The general gist of what I am saying is that if you aren't managing a RAID array for redundancy purposes, and are also not performing constant 100% writing, then there is no reason not to consider SSDs. They will last a very long time, probably a lot longer than most of your current mechanical drives. If you need more capacity than SSDs offer, that's another thing entirely.
When C++ compilers can be coerced to emit rcl and rcr, I *might* consider using one.

ecube

rockoon, but SSDs are still very expensive and don't come in very large sizes; when prices come down at least they'll be a lot better, I'd imagine.

Astro

I haven't found one app that required a 64-bit OS for anything. In the case of FSX, it is bloat-ware of another kind, but its very own 32-bit limitation makes running it on a 64-bit OS useless.

I'm not convinced about SSDs yet, although interesting comment on writes failing but maintaining read capability.

Best regards,
Astro.

dedndave

the price will drop
i think ssd's are up and coming and have a definite place in the industry
actually, it wouldn't be too difficult to replace the flash chip and renew a "worn out" ssd
you may be able to turn used ones in for credit - lol - like pop bottles   :bg
better yet, if the flash chip was on a replaceable "cartridge", just toss it out like a bic lighter and replace it
oddly, i have heard that it is critical to partition them on sector boundaries (for some strange reason)
it makes a huge difference in speed - i can only guess this has more to do with the driver than the flash memory
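
A small C sketch (assuming a hypothetical 128 KiB erase-block size) of the alignment issue described above: the old partitioning default started partitions at sector 63, which never lands on an erase-block boundary, while a 2048-sector start does.

    #include <stdio.h>

    int main(void)
    {
        const unsigned long long sector_size = 512;          /* bytes                    */
        const unsigned long long erase_block = 128 * 1024;   /* assumed erase-block size */

        /* old MS-DOS style default start vs. a 1 MiB-aligned start */
        unsigned long long start_sectors[] = { 63, 2048 };

        for (int i = 0; i < 2; i++) {
            unsigned long long offset = start_sectors[i] * sector_size;
            printf("partition at sector %llu (offset %llu bytes): %s\n",
                   start_sectors[i], offset,
                   (offset % erase_block == 0) ? "aligned" : "NOT aligned");
        }
        return 0;
    }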

Astro

Quote: oddly, i have heard that it is critical to partition them on sector boundaries (for some strange reason)
it makes a huge difference in speed - i can only guess this has more to do with the driver than the flash memory
I heard this too but never looked into it.

Best regards,
Astro.

Alloy

One advantage I found of 64-bit is an old-fashioned ramdisk. On older computers they keep commonly used executables safe from reboots and speed up their loading. I tried running virtual machines off ramdisks and think the Windows caching stole most of the benefit of them. Too bad they aren't bootable. Maybe one of those multibooters like http://www.expresshd.com/p135/EFi-X-USB-V1/product_info.html can solve that.

   Superspeed claims that 32-bit Vista can access more than 4GB with its ramdisk. http://www.superspeed.com/desktop/ramdisk.php

We all used to be something else. Nature has always recycled.

Rockoon

I do not think that Knuth was talking specifically about any particular 64-bit architecture. I think that x86-64 has a lot to offer, but Knuth's argument is only about some fractional increase in cache contention. x86-64 has about 4 times as much general-purpose register space. Seriously.
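
The arithmetic behind that "about 4 times", counting only the classic integer register file (a rough sketch; it ignores the FPU/SSE registers):

    #include <stdio.h>

    int main(void)
    {
        unsigned ia32_bits = 8 * 32;    /* EAX..EDI: 8 GPRs x 32 bits  = 256 bits  */
        unsigned x64_bits  = 16 * 64;   /* RAX..R15: 16 GPRs x 64 bits = 1024 bits */

        printf("IA-32  GPR space: %4u bits\n", ia32_bits);
        printf("x86-64 GPR space: %4u bits (%ux as much)\n", x64_bits, x64_bits / ia32_bits);
        return 0;
    }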

I admire and respect Knuth greatly, but structures with pointers in them imply frequent cache misses by their very nature. I think that worrying about data density in this case is unreasonable, since in practice you miss the cache nearly every time you follow an arbitrary pointer (what's it doing in the cache when you didn't even have a pointer to it?).

When C++ compilers can be coerced to emit rcl and rcr, I *might* consider using one.

GregL

Advantages or not, everything (hardware, operating systems and software) is in the process of moving to it.


dedndave

it is Greg, but only because we are being forced to move
they have hit a plateau in terms of making newer, better, faster machines
current technology won't allow much more reduction in size, thus, they are stuck around 3.3 GHz
they have used up all the tricks such as bus compression, out of order execution, caching, etc
even the number of cores in a machine - i bet if you get more than, say 4 to 16 cores (somewhere in that range),
the advantage levels off due to the overhead of managing threads and cores
so, what's left to play with ? - width - and that is why we are messing with 64-bit machines (that are also beyond a plateau)
i would say if you have 4 good cores (not P4's - lol) running at 3.2 GHz, 32-bits wide,
you are running at what i would consider an optimal level of performance
that level may change in the future, but not rapidly
ms, intel, amd make their bread and butter from growth - they are trying to keep the momentum rolling, is all

MichaelW

I suspect that Intel and AMD could actually make much higher clock speeds work, but they are more or less cooperating now, releasing advances at a slower, and more profitable, pace.
eschew obfuscation