Remembering Seymour Cray

If you think of supercomputers, it is hard not to think of Seymour Cray. He built giant computers at Control Data Corporation and went on to create the famous Cray supercomputers. While those machines aren’t especially impressive by today’s standards, for their time, they were modern marvels. [Asianometry] has a great history of Cray, starting with his work at ERA, which would, of course, eventually produce the computer known as the Univac 1103.

ERA was bought up by Remington Rand, which eventually became Sperry Rand. After internal conflict, some of the ERA staff left to form Control Data Corporation, and Cray went with them. The new company decided to focus on computers for heavy simulation work, such as modeling nuclear tests.


Is That A Large Smartwatch? Or A Tiny Cray?

While we aren’t typically put off by a large wristwatch, we were taken a bit aback by [Chris Fenton]’s latest timepiece — if you can call it that. It’s actually a 1/25th-scale Cray C90 worn as a wristwatch. The whole thing started with [Chris] trying to build a Cray in Verilog. He started with a Cray-1 but then moved to a Cray X-MP, which is essentially a Cray-1 with two extra address bits. Then he expanded it to 32 bits, which makes it a Cray Y-MP/C90/J90 core. As he puts it, “If you wanted something practical, go read someone else’s blog.”

The watch emulates a Cray C916 and uses a round OLED display on the top. While a move from 22 to 32 address bits may still sound dated by modern standards, keep in mind that the Cray addresses 64-bit words exclusively, so we’re talking about access to 32 gigabytes of memory. The hardware consists of an off-the-shelf FPGA board and a Teensy microcontroller to handle mundane tasks like driving the OLED display and booting the main CPU. Interestingly, the actual Cray-1A used Data General computers for a similar task.
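To put that in perspective, the arithmetic (using the 64-bit word size mentioned above) works out to:

$$2^{32}\ \text{words} \times 8\ \tfrac{\text{bytes}}{\text{word}} = 2^{35}\ \text{bytes} = 32\ \text{GB}.$$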

Of course, any supercomputer needs a super program, so [Chris] uses the screen to display a full simulation of Jupiter and 63 of its moons. The Cray excels at programs like this because of its vector processing abilities. The whole program is 127 words long and sustains 40 MFLOPS. That means that to read the current time, you need to know where Jupiter’s moons should be at any given moment so you can match them against the display. He did warn us this would not be practical.
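As a rough sketch of why this kind of workload suits a vector machine (this is not [Chris]’s actual Cray program, and the orbital parameters below are made up), the trick is that the same arithmetic gets applied to all 63 moons at once rather than one moon at a time:

```python
# Illustrative only: a vectorized position update for 63 hypothetical moons on
# idealized circular orbits. A vector machine like the C90 pipelines exactly this
# kind of "same arithmetic across a whole array" work very efficiently.
import numpy as np

N_MOONS = 63
rng = np.random.default_rng(0)
radius = rng.uniform(1.0e8, 2.0e9, N_MOONS)   # orbital radii in meters (made up)
period = rng.uniform(1.0e5, 1.0e7, N_MOONS)   # orbital periods in seconds (made up)

def moon_positions(t_seconds: float) -> np.ndarray:
    """Return an (N_MOONS, 2) array of x/y positions at time t."""
    angle = 2.0 * np.pi * t_seconds / period           # one angle per moon, all at once
    return np.stack((radius * np.cos(angle),
                     radius * np.sin(angle)), axis=1)  # two vector multiplies, no loop

print(moon_positions(3600.0)[:3])   # where the first three moons sit after an hour
```

The real watch obviously does something more involved, but the shape of the computation, long arrays fed through identical floating-point operations, is exactly what vector registers were built for.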

While the Cray wouldn’t qualify as a supercomputer today, we love learning about what was state-of-the-art not that long ago. The machine was named, of course, after [Seymour Cray], who had earlier worked on the Univac 1103 and designed several iconic CDC computers before the Cray machines that bear his name.

The Apple Silicon That Never Was

Over Apple’s decades-long history, they have been quick to adapt to new processor technology when they see an opportunity. Their switch from PowerPC to Intel in the mid-2000s made Apple machines more accessible to a wider PC world already accustomed to x86 processors, and a decade earlier they had moved from Motorola 68000 processors to take advantage of the scalability, power efficiency, and performance of the PowerPC platform. They’ve recently made the switch to their own in-house silicon, but, as reported by [The Chip Letter], this wasn’t the first time they attempted to design their own chips from the ground up rather than using parts from other companies like Motorola or Intel.

In the mid-1980s, Apple was already looking to move away from the Motorola 68000 for performance reasons, and part of the reason it took so long to make the switch is that in the intervening years they launched Project Aquarius to attempt to design their own silicon. As the article linked above explains, they needed a large amount of computing power to get this done and purchased a Cray X-MP/48 supercomputer to help, as well as assigning a large number of engineers and designers to see the project through to the finish. A critical error was made, though, when they decided to build their design around a stack architecture rather than a RISC one. They later switched to a RISC design, but the project still struggled to ever produce a working prototype. In the end, the entire project was scrapped and the company moved on to PowerPC, but not without a tremendous loss of time and money.

Interestingly enough, another team was designing their own architecture at about the same time and ended up creating what would eventually become the modern-day ARM architecture, which Apple was involved with and currently licenses to build their M1 and M2 chips as well as their mobile processors. It was only by accident that Apple didn’t decide on a RISC design in time for their personal computers. The computing world might look a lot different today if Apple hadn’t languished through the 1990s as the ultimate result of their failure to develop a competitive system in the mid-1980s. Apple’s distance from PowerPC now doesn’t mean that architecture has been completely abandoned, though.

Thanks to [Stephen] for the tip!

A History Of NASA Supercomputers, Among Others

[The History Guy] on YouTube has posted an interesting video on the history of the supercomputer, with a specific focus on their use by NASA for the implementation of computational fluid dynamics (CFD) models of aeronautical assemblies.

The aero designers of the day were quickly finding out the limitations of the wind tunnel testing approach, especially for so-called transonic flow conditions. This occurs when an object moving through a fluid (which is how air can be modeled) produces regions of supersonic flow mixed in with subsonic flow, creating additional sources of drag that severely impact aircraft performance. Not accounting for these effects is not an option, hence the great industry interest in CFD modeling. But the governing equations (usually based around the Navier-Stokes system) are non-linear and extremely computationally intensive.
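For reference, here is one commonly quoted textbook form (the incompressible momentum and continuity equations; transonic work actually requires the compressible version, but the structure is similar), not anything specific to NASA’s codes:

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu \nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0.$$

The non-linear convective term $(\mathbf{u}\cdot\nabla)\mathbf{u}$ is what rules out simple closed-form solutions and forces the enormous grid-by-grid number crunching that made supercomputers attractive in the first place.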