Experimenting with supercomputers in outer space

On the 14th of August this year, Hewlett Packard Enterprise sent a supercomputer to the International Space Station (ISS) aboard a SpaceX resupply mission. The supercomputer is part of an experiment to see how such a sophisticated machine behaves in space for a year without any modifications, roughly the time it might take us to reach Mars.

Dr. Frank Fernandez, Hewlett Packard Enterprise’s head payload engineer, explained that computers deployed on the ISS have always been modified for spaceflight. Unfortunately, the hardening that makes a computer fit to work in space takes so long that the certified machines are usually several generations behind those we use on Earth today. As a result, complex computations are still performed on Earth and the results transmitted back to the ISS. This model works for now, but the farther a spacecraft travels, the longer transmissions from Earth take to reach it.
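
To put that distance argument in numbers, the short sketch below estimates the one-way, speed-of-light signal delay for the ISS and for Mars. The distances are rounded public figures used purely for illustration; they are not values from the experiment.

```python
# Rough one-way signal delay at the speed of light for a few destinations.
# Distances are approximate, illustrative figures.
C_KM_PER_S = 299_792.458  # speed of light in km/s

distances_km = {
    "ISS (low Earth orbit)": 400,          # ~400 km altitude
    "Mars at closest approach": 54.6e6,    # ~54.6 million km
    "Mars at farthest distance": 401e6,    # ~401 million km
}

for destination, d in distances_km.items():
    delay_s = d / C_KM_PER_S
    if delay_s >= 60:
        print(f"{destination}: ~{delay_s / 60:.1f} minutes one way")
    else:
        print(f"{destination}: ~{delay_s * 1000:.1f} ms one way")
```

The round trip to Mars can therefore range from several minutes to the better part of an hour, which is why relying on Earth-based computation stops being practical beyond low Earth orbit.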

Taking into account the complexity of a trip to Mars and a landing on its surface, it is crucial to ensure immediate access to state-of-the-art computational power on board, not hardware that is two or even five years old. The idea of this experiment is to see whether processor speed and memory refresh rates deviate in orbit and to make adjustments wherever necessary so that the system keeps delivering correct results.
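
The article does not describe how HPE’s software actually monitors the hardware, but the general idea of watching for performance drift against a ground baseline and reacting to it can be sketched roughly as follows; the benchmark, tolerance, and response are hypothetical placeholders.

```python
import time

TOLERANCE = 0.05  # flag deviations larger than 5% (illustrative threshold)

def run_benchmark(iterations: int = 1_000_000) -> float:
    """Tiny CPU benchmark: arithmetic operations completed per second."""
    start = time.perf_counter()
    acc = 0
    for i in range(iterations):
        acc += i * i
    return iterations / (time.perf_counter() - start)

# Baseline measured once (on the ground, in the real scenario).
baseline = run_benchmark()

def check_deviation() -> None:
    score = run_benchmark()
    deviation = (score - baseline) / baseline
    if abs(deviation) > TOLERANCE:
        # In a real system this is where workloads would be throttled or
        # rescheduled; here we only report the drift.
        print(f"performance drifted {deviation:+.1%} from baseline")
    else:
        print(f"within tolerance ({deviation:+.1%})")

check_deviation()
```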

Dr. Fernandez said that error detection and correction can be handled by built-in hardware when it is given enough time. When exploring space, however, time is sometimes a luxury. The main idea is to have the computing tools on board the spacecraft: the less time is spent waiting on computations, the more experiments can take place in space. Another benefit is lower bandwidth usage on the link between Earth and the ISS, much of which is currently consumed by transmitting computational data. If the processing power sits in the station itself, that bandwidth can be freed up for other purposes.
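
HPE has not published the details of its software-based hardening, but one textbook way to detect and correct transient errors without dedicated hardware is to run a computation redundantly and take a majority vote on the results. The sketch below is purely a conceptual illustration of that idea, not the method used on the Spaceborne Computer.

```python
from collections import Counter
from typing import Callable, TypeVar

T = TypeVar("T")

def vote(compute: Callable[[], T], runs: int = 3) -> T:
    """Run a computation several times and return the majority result.

    A transient fault that corrupts a single run is detected (the results
    disagree) and corrected (the majority value wins). Real systems combine
    ideas like this with ECC memory, checksums, and checkpointing.
    """
    results = [compute() for _ in range(runs)]
    value, count = Counter(results).most_common(1)[0]
    if count == runs:
        return value  # all runs agree, no error observed
    if count > runs // 2:
        print("transient error detected and corrected by majority vote")
        return value
    raise RuntimeError("no majority; recompute or fall back to ground control")

# Example: a deterministic computation run with three-way redundancy.
print(vote(lambda: sum(i * i for i in range(1000))))
```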

In the end, the goal is to find the optimal technology we currently have to aid our efforts in space exploration.
