Your Supercomputer Arrives In The Cloud

For as long as there have been supercomputers, people like us have seen the announcements and said, “Boy! I’d love to get some time on that computer.” But now that most of us have computers and phones that greatly outpace a Cray 2, what are we doing with them? Of course, a supercomputer today is still bigger than your PC by a long shot, and if you actually have a use case for one, [Stephen Wolfram] shows you how you can easily scale up your processing by borrowing resources from the Wolfram Compute Services. It isn’t free, but you pay with Wolfram service credits, which are not terribly expensive, especially compared to buying a supercomputer.

[Stephen] says he has about 200 cores of local processing at his house, and he still sometimes has programs that run overnight. If your program is already written in the Wolfram Language and uses parallelism (something that's easy to do with its toolbox), you can simply submit it as a remote batch job.
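We haven't seen [Stephen]'s exact code, but as a rough sketch of the pattern, with `expensiveSimulation` and `paramList` as made-up placeholders and assuming you already have a remote batch submission environment and service credits set up, the Wolfram Language side looks something like this:

```
(* Hypothetical workload: "expensiveSimulation" and "paramList"
   stand in for whatever you actually need to compute. *)
expensiveSimulation[p_] := Total[RandomReal[1, 10^6]^p];
paramList = Range[64];

(* Run it in parallel on your own cores... *)
localResults = ParallelMap[expensiveSimulation, paramList];

(* ...or ship essentially the same expression off as a remote batch job
   (assumes a default batch submission environment is configured). *)
job = RemoteBatchSubmit[ParallelMap[expensiveSimulation, paramList]];
job["JobStatus"]        (* poll progress *)
job["EvaluationResult"] (* retrieve the answer once it's finished *)
```

The appeal is that the expression handed to the batch submission is more or less the same one you'd evaluate locally, which is what makes scaling out feel like a one-line change.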


So What Is A Supercomputer Anyway?

Over the decades, many designations have been coined to classify computer systems, usually as they found use in new fields or as technological improvements caused significant shifts. While the very first electronic computers were very limited and often not programmable, they would soon morph into something we'd recognize today as a computer, starting with World War 2's Colossus and ENIAC, which saw use in cryptanalysis and military weapons programs, respectively.

The first commercial digital electronic computer wouldn't appear until 1951, however, in the form of the Ferranti Mark 1. These 4.5-ton systems mostly found their way to universities and similar institutions, where they found welcome use in engineering, architecture, and scientific calculations. That sort of number-crunching became the focus of new computer systems, which were effectively the equivalent of a scientific calculator. Until the invention of the transistor, the idea of a computer being anything but a hulking, room-sized monstrosity was preposterous.

A few decades later, more computing power could be crammed into less space than ever before, along with ever-denser storage. Computers were even found in toys, and amidst a whirlwind of mini-, micro-, super-, home-, minisuper- and mainframe computer systems, one could be excused for asking: what even is a supercomputer?


Remembering Seymour Cray

If you think of supercomputers, it is hard not to think of Seymour Cray. He built giant computers at Control Data Corporation and went on to build the famous Cray supercomputers. While those computers aren’t especially amazing today, for their time, they were modern marvels. [Asianometry] has a great history of Cray, starting with his work at ERA, which would, of course, eventually produce the computer known as the Univac 1103.

ERA was bought up by Remington Rand, which eventually became Sperry Rand. Due to internal conflict, some of the ERA staff left to form Control Data Corporation, and Cray went with them. The new company decided to focus on computers aimed at heavy simulation work, such as nuclear test simulations.


Pssst… Wanna Buy An Old Supercomputer?

If you spend your time plotting evil world domination while stroking your fluffy white cat in your super-villain lair, it's clear that only the highest of high-performance computing is going to help you achieve your dastardly aims. But computers of that scale are expensive, and not even your tame mad scientist can whistle one out of thin air. Never mind though, because if your life lacks a supercomputer, there's one for sale right now in Wyoming.

The Cheyenne Supercomputer was ranked in the top 20 of global computing power back in 2016, when it was installed to work on atmospheric simulation and earth sciences. There's a page containing exhaustive specs, but overall we're talking about a Silicon Graphics ICE XA system with 8,064 processors at 18 cores each for a total of 145,152 cores, and a not inconsequential 313,344 GB of memory. In terms of software it ran the SuSE Linux Enterprise Server OS, but don't let that stop you from installing your distro of choice.

It's now being sold on a government auction site in a decommissioned but reactivatable state, and given that it takes up a LOT of space, we're guessing that arranging the trucks to move it will cost more than the computer itself. If you're interested, it's standing at a shade over $40,000 at the time of writing with its reserve not met, and you have until the 3rd of May to snag it.

It’s clear that the world of supercomputing is a fast-moving one and this computer has been superseded. So whoever buys it won’t be joining the big boys any time soon — even though it remains one heck of a machine by mere mortal standards. We’re curious then who would buy an old supercomputer, if anyone. Would its power consumption for that much computing make it better off as scrap metal, or is there still a place for it somewhere? Ideas? Air them in the comments.

The Apple Silicon That Never Was

Over Apple's decades-long history, they have been quick to adapt to new processor technology when they see an opportunity. Their switch from PowerPC to Intel in the early 2000s made Apple machines more accessible to the wider PC world, which was already accustomed to x86 processors, and a decade earlier they had moved from Motorola 68000 processors to take advantage of the scalability, performance-per-watt, and raw performance of the PowerPC platform. They've recently made the switch to their own in-house silicon, but, as reported by [The Chip Letter], this wasn't the first time they attempted to design their own chips from the ground up rather than using chips from other companies like Motorola or Intel.

In the mid-1980s, Apple was already looking to move away from the Motorola 68000 for performance reasons, and part of the reason it took so long to make the switch is that in the intervening years they launched Project Aquarius in an attempt to design their own silicon. As the article linked above explains, they needed a large amount of computing power to get this done and purchased a Cray X-MP/48 supercomputer to help, as well as assigning a large number of engineers and designers to see the project through to the finish. A critical error was made, though, when they decided to build their design around a stack architecture rather than RISC. They eventually switched to a RISC design, but the project still struggled to ever get a working prototype. In the end the whole effort was scrapped, and the company later moved on to PowerPC, but not without a tremendous loss of time and money.

Interestingly enough, another team was designing its own architecture at about the same time and ended up creating what would eventually become the modern-day ARM architecture, which Apple was involved with and currently licenses to build its M1 and M2 chips as well as its mobile processors. It was only by accident that Apple didn't decide on a RISC design in time for its personal computers. The computing world might look a lot different today if Apple hadn't languished in the early 2000s as the ultimate result of its failure to develop a competitive system in the mid-'80s. Apple's distance from PowerPC now doesn't mean that architecture has been completely abandoned, though.

Thanks to [Stephen] for the tip!

A History Of NASA Supercomputers, Among Others

The History Guy on YouTube has posted an interesting video on the history of supercomputers, with a specific focus on NASA's use of them to run computational fluid dynamics (CFD) models of aeronautical assemblies.

The aero designers of the day were quickly finding out the limitations of the wind tunnel testing approach, especially for so-called transonic flow conditions. This occurs when an object moving through a fluid (which is how air is modeled) produces regions of supersonic flow mixed in with subsonic flow, creating additional sources of drag that severely impact aircraft performance. Not accounting for these effects is not an option, hence the great industry interest in CFD modeling. But the governing equations (usually based around the Navier-Stokes system) are non-linear and extremely computationally intensive to solve.
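For a sense of why this eats so many compute cycles, the workhorse is the Navier-Stokes momentum equation. In its simplest incompressible form (a deliberately stripped-down illustration; real transonic CFD also has to handle compressibility and turbulence modeling on top of this) it reads:

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0$$

The convective term $(\mathbf{u}\cdot\nabla)\mathbf{u}$ is the non-linear culprit: there is no general closed-form solution, so the equations have to be marched numerically over millions of mesh cells and time steps, which is exactly the kind of brute-force arithmetic a supercomputer exists for.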