dundermuppen, you are what your name says, I suppose. I don't expect you to understand, or even to want to understand. And since you can't be bothered to find out the facts for yourself, I'll have to do it for you. Have you ever tried the search engine Google?
The x86 architecture does things that almost no other modern architecture does, but due to its overwhelming popularity, people think that the x86 way is the normal way and that everybody else is weird.
Let's get one thing straight: The x86 architecture is the weirdo.
The x86 has a small number (8) of general-purpose registers; the other modern processors have far more. (PPC, MIPS, and Alpha each have 32; ia64 has 128.)
The x86 uses the stack to pass function parameters; the others use registers.
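As a rough sketch of the difference, consider the call below. Compiled for 32-bit x86 under the cdecl convention, the three arguments are pushed onto the stack; compiled for PPC or MIPS, they arrive in registers (the register names in the comments are the conventional ABI assignments, not something this code checks at runtime):

    #include <stdio.h>

    /* Classic 32-bit x86 (cdecl): the caller pushes c, b, a onto the
     * stack before the call.  PPC passes the same arguments in r3-r5,
     * MIPS (o32) in a0-a2. */
    static int add3(int a, int b, int c)
    {
        return a + b + c;
    }

    int main(void)
    {
        printf("%d\n", add3(1, 2, 3));  /* prints 6 under either ABI */
        return 0;
    }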
The x86 forgives access to unaligned data, silently fixing up the misalignment. The others raise a misalignment exception, which can optionally be emulated by the supervisor at an amazingly huge performance penalty.
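This one is easy to trip over in C. A minimal sketch of the trap (the pointer cast below is formally undefined behavior in ISO C and is shown only because it happens to work on x86; the memcpy form is the portable one):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char buf[8] = {1, 2, 3, 4, 5, 6, 7, 8};

        /* Misaligned load: buf + 1 is not 4-byte aligned.  x86 quietly
         * fixes it up; a strict-alignment machine may deliver SIGBUS. */
        uint32_t *p = (uint32_t *)(buf + 1);
        printf("cast load:   %08" PRIx32 "\n", *p);

        /* Portable alternative: memcpy lets the compiler emit whatever
         * loads the target actually supports. */
        uint32_t v;
        memcpy(&v, buf + 1, sizeof v);
        printf("memcpy load: %08" PRIx32 "\n", v);
        return 0;
    }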
The x86 has variable-sized instructions. The others use fixed-sized instructions. (PPC, MIPS, and Alpha each have fixed-sized 32-bit instructions; ia64 has fixed-sized 41-bit instructions. Yes, 41-bit instructions.)
The x86 has a strong memory model: with minor exceptions, memory accesses become visible externally in the order the code stream issues them. The others have weak memory models, requiring explicit memory barriers to ensure that memory operations are issued to the bus (and completed) in a specific order.
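In portable code that ordering has to be requested explicitly rather than assumed. A minimal C11 sketch of the classic flag-and-data handoff, using release/acquire ordering, which x86 provides almost for free but weakly ordered machines must enforce with barrier instructions:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int data;
    static atomic_int ready;

    static void *producer(void *arg)
    {
        (void)arg;
        atomic_store_explicit(&data, 42, memory_order_relaxed);
        /* Release: the data store becomes visible before the flag.
         * On x86 this is an ordinary store; on PPC or Alpha the
         * compiler must emit a barrier instruction here. */
        atomic_store_explicit(&ready, 1, memory_order_release);
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        /* Acquire: once the flag is seen, the data store is too. */
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;
        printf("data = %d\n",
               atomic_load_explicit(&data, memory_order_relaxed));
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }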
The x86 supports atomic load-modify-store operations. Most of the others instead provide load-linked/store-conditional pairs and build atomic updates out of retry loops.
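The contrast shows up directly in compiler output: the single atomic increment below becomes one locked instruction on x86 but a load-linked/store-conditional retry loop on the LL/SC machines. A minimal sketch:

    #include <stdatomic.h>
    #include <stdio.h>

    int main(void)
    {
        atomic_int counter = 0;

        /* One atomic read-modify-write.  x86: a single locked
         * instruction (lock add / lock xadd).  MIPS (ll/sc) and
         * PPC (lwarx/stwcx.): a loop that retries on contention. */
        atomic_fetch_add(&counter, 1);

        printf("counter = %d\n", atomic_load(&counter));
        return 0;
    }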
The x86 passes function return addresses on the stack. The others use a link register.
Bear this in mind when you write what you think is portable code. Like many things, the culture you grow up with is the one that feels "normal" to you, even if, in the grand scheme of things, it is one of the more bizarre ones out there.
The longevity of the x86 architecture is perhaps one of the most surprising achievements of the Information Age thus far. Nobody, probably not even its Intel inventors, envisioned the dominance it has attained in the industry. After more than 25 years, the lowly x86 rules the all-important desktop, laptop and server markets.
For the past decade the x86 has been swallowing the high performance computing market, paralleling the rise of cluster computing. In the enterprise market, RISC/Unix boxes have been giving way to x86/Linux machines. And finally, with last year's conversion of Apple from PowerPC to Intel, the last bastion of non-x86 personal computers was removed from the desktop. In fact, had IBM anticipated the critical importance of the desktop platform earlier and been a little quicker on the trigger with the development of the PowerPC chip, the whole history of computing might have followed a very different path.
As it was, the "Wintel" platform attracted a substantial software base in the 1980s before any RISC competitors could mount a challenge. The early accumulation of software, especially compiler/runtime tools and system software, created the initial momentum which propelled the x86 forward. With the thousands of applications that now run on x86 platforms, the cost of losing binary compatibility would be overwhelming for many users. It represents the technological version of the rich-get-richer syndrome: The bigger your market share, the more developers will be attracted to your architecture, which results in yet more market share.
Which brings us to the question: Will the x86 architecture ever lose its dominance? And if so, how will this happen? In 2020 it's conceivable that we'll be using terascale processors (and exascale supercomputers) based on the x86 ISA and implemented on post-CMOS technology. The demise of the x86 has been predicted before, so I hesitate to write its epitaph here. But all technologies have a lifespan and there is reason to believe that the architecture might not survive the age of terascale processors.
One problem to confront is that we're running out of Moore's Law. Before non-silicon-based processor technology -- compound semiconductors, carbon nanotubes, nanowires, molecular electronics, three-dimensional transistor designs and spintronics -- is developed and commercialized, the physics of sub-32nm process technology will constrain the number of transistors that can be placed on a die. The general-purpose x86 architecture, with its relatively complex instruction set, has to drag around a lot of transistors and microcode that have only limited utility for many types of computing, including high performance computing.
There's reason to believe that some of the problems of sub-32nm technology will actually be solved, but most analysts believe CMOS-based silicon devices will no longer be practical at some point between 2015 and 2020. When this happens, transistor space on the die will become such a limiting factor that more efficient processor architectures will have an enormous advantage.
But even before that occurs, Intel and AMD may have moved beyond their x86 heritage. The current limitations of power consumption and heat dissipation are pushing chipmakers to explore not only multi-core designs but alternative processing engines as well. While the engineers at Intel and AMD have been extremely clever at increasing performance/watt, market demand seems to be outstripping their efforts.
With the acquisition of ATI, AMD seems to have its sights set on a hybrid CPU-GPU approach, which could theoretically evolve away from strict x86 compatibility. The addition of GPU cores to general-purpose processors may be part of a trend that portends greater processor heterogeneity -- the Cell chip being an early example. As for the x86-only roadmap, AMD has not publicized any plans beyond an 8-core processor. Of course, the company would be expected to change direction if their major customers demanded a many-core x86 solution.
Intel itself has tried to move beyond the x86 twice before (not counting the iAPX 432 processor), once with the i860/i960 chips and more recently with the Itanium processor. The failure of the i860 and the (as yet) unrealized potential of the Itanium show how even Intel can be a victim of its own success. In 2006, the company previewed a very non-x86 80-core prototype of a terascale processor, which it expects to commercialize by the middle of the next decade. Intel will be showing the next prototype of this processor at the upcoming International Solid-State Circuits Conference next month in San Francisco. According to Intel, "the 65nm 100-million transistor die is designed to achieve a peak performance of 1.0 teraflops at 1V while dissipating 98 watts."
With its UltraSPARC T1 (Niagara) chip, Sun Microsystems has demonstrated that a simplified processor can achieve much greater throughput than a more general-purpose architecture. The T1 processor provides up to eight 4-way multithreaded cores (32 threads), while consuming just 72 watts. The processor is low on floating-point horsepower, making it unsuitable for scientific computing, but the design is well suited for Web servers and a wide variety of enterprise applications.
In contrast, SiCortex, an HPC cluster startup, developed a non-x86 architecture expressly targeted at high performance technical computing. Its MIPS-based chip holds six 64-bit CPUs, cache, two interleaved memory controllers, the interconnect fabric links and switch, a DMA engine, and a PCI Express interface. The simplicity of the MIPS architecture enables a tightly integrated solution, for which SiCortex claims two orders of magnitude better performance/watt than a typical x86 system. Their 5.8-teraflop, 8-terabyte cluster is housed in a single cabinet and consumes just 20 kilowatts of power. The system relies on GNU and PathScale compilers for the MIPS target and open source Linux to insulate applications from the non-standard hardware.
The SiCortex case is interesting in another respect. The MIPS CPU, like many RISC chips, was a high-end processor that got relegated to the embedded market when it couldn't compete as a workstation chip. The embedded market is much more diversified than the desktop, laptop and server markets. The latter run a relatively limited set of applications, while embedded applications are far more varied, spanning PDAs, laser printers, set-top boxes, network switches, automobile diagnostic controllers, game machines and so on. That variety is mirrored in the range of processors used: PowerPC, MIPS, ARM, 68K, SPARC, and even x86. Given how dynamic the market is, no single processor has maintained dominance for any length of time.
But as power, heat and space constraints become increasingly important in the non-embedded world, the simpler, embedded RISC processors are looking more attractive. The simpler processor architectures enable more aggressive multi-core and multi-threaded designs. This advantage is especially important for HPC applications, where parallel throughput is usually much more critical than single thread performance. IBM's use of the energy-efficient PowerPC processors in its Blue Gene supercomputers is a reflection of this strategy.
While the end of the x86 dynasty will not happen in 2007, some of the forces that could end its dominance are already in motion. In a decade or so we'll probably look back at this time and wonder how we could ever have been so dependent on a single architecture for so long. Its 30-year reign will be seen as an anomalous blip in the early history of computer technology.
*Rudeness deleted by moderator*