
As the regimen of self-isolation, designed to deprive the novel coronavirus of fresh victims, continues, the term "folding" has been making the rounds in scientific circles.

It is not a new way of tackling that laundry pile that seems to grow week by week; it's a fascinating technological approach to helping scientists discover a cure for COVID-19.

According to scientists, proteins are made of a linear chain of chemicals, amino acids, that, when functioning properly, "fold" into compact, functional structures. How a protein's components arrange and move determines its function. Viruses have proteins as well, which they use to suppress our immune systems and reproduce themselves.

To help fight coronavirus, scientists and doctors need to understand how the viral protein works, or "folds," if they are going to find ways to stop it.

This is where Big Data meets epidemiology. By running computer simulations that help them understand the moving parts of proteins, researchers believe the data they gather will get them closer to a cure.
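To get a feel for what these simulations compute, here is a toy sketch. Folding@home runs full molecular dynamics, but the classic hydrophobic-polar (HP) lattice model, a deliberately simplified stand-in used in textbooks, captures the core idea: score candidate shapes of a chain and prefer the lower-energy fold. Everything below (the sequence, coordinates, and function name) is illustrative, not part of any real folding code.

```python
# Toy HP lattice model: a protein is a chain of hydrophobic ('H') and
# polar ('P') residues on a 2D grid. Folding is favoured when H residues
# end up next to each other; each such contact lowers the energy by 1.

from itertools import combinations

def contact_energy(sequence, coords):
    """Energy = -1 for every pair of 'H' residues that sit on adjacent
    lattice sites without being direct neighbours along the chain."""
    energy = 0
    for i, j in combinations(range(len(sequence)), 2):
        if j - i > 1 and sequence[i] == sequence[j] == "H":
            (xi, yi), (xj, yj) = coords[i], coords[j]
            if abs(xi - xj) + abs(yi - yj) == 1:  # adjacent on the grid
                energy -= 1
    return energy

seq = "HPPH"
stretched = [(0, 0), (1, 0), (2, 0), (3, 0)]  # fully extended chain
folded = [(0, 0), (1, 0), (1, 1), (0, 1)]     # hairpin turn: the two H's touch
print(contact_energy(seq, stretched))  # -> 0
print(contact_energy(seq, folded))     # -> -1
```

A real simulation explores astronomically many such conformations with far richer physics, which is exactly why the computing power discussed below matters.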

"There have never been more experts coming together to focus on a single topic than right now," says Michael Schmidt, architect for Converged Cloud at the company.

However, running the countless simulations that are required takes a massive amount of computing power. That's where companies and the general public come in. Donating unused computing power can accelerate the speed at which these simulations run, which may get us closer to a cure.


Big Data Meets Epidemiology

The initiative got a big boost when a call went out to gamers across the globe, asking them to join the fight.

"Gaming computers are extremely powerful machines," Schmidt explains. "Before this crisis, gamers often used their spare capacity to 'mine' cryptocurrency and make a little money on the side. But now they are donating their graphics processing unit (GPU) power to science."

The company has joined this effort. To get the project off the ground quickly, Schmidt's DevOps team automated its capacity contribution, drawing on existing spare computing capacity. This capacity is located on the company's flagship converged cloud enterprise edition platform, the same platform that hosts many of its customers. When the COVID-19 crisis hit, the team used this early implementation to dynamically schedule and scale Folding@home central processing unit (CPU) and GPU clients on the platform, scaling up when capacity was idle and scaling down when it was needed by other payloads.
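The scale-up-when-idle, scale-down-when-needed policy described above can be sketched in a few lines. This is a hypothetical illustration, not the team's actual implementation: the function name, the 10% customer reserve, and the 85% utilisation cut-off are all assumptions, and a real setup would read metrics from the platform's monitoring system and drive an orchestrator rather than return a number.

```python
# Illustrative autoscaling policy for donating spare capacity to
# Folding@home. All thresholds and names here are hypothetical.

def target_folding_slots(total_slots, busy_slots,
                         reserve_fraction=0.10, max_util=0.85):
    """How many Folding@home worker slots the spare capacity can host.

    A fixed reserve is always kept free for customer workloads, and the
    donation drops to zero as soon as utilisation crosses max_util, so
    no productive payload is ever starved.
    """
    utilisation = busy_slots / total_slots
    if utilisation >= max_util:
        return 0                       # customers need the capacity: back off
    reserve = int(total_slots * reserve_fraction)
    spare = total_slots - busy_slots - reserve
    return max(spare, 0)

# Quiet cluster: most of the capacity can fold.
print(target_folding_slots(1000, 200))   # -> 700
# Busy cluster: the donation scales back to nothing.
print(target_folding_slots(1000, 900))   # -> 0
```

Run periodically against live utilisation figures, a policy like this gives exactly the behaviour Schmidt describes next: the contribution grows when the cloud is quiet and retreats automatically under customer load.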

"One of the things our cloud does really well is distributing the load across the many computers we have in our data centers," Schmidt shares. "We can measure in real time the pressure we put on the system, and automatically scale back the capacity we're giving to Folding@home if needed, really making sure no productive payload is being affected."

Additionally, since the company always maintains spare capacity for its customers, a few of its spare GPU bare-metal nodes were added to the project. While these dedicated servers are few in number, they can process large work units, delivering very high throughput for very heavy workloads. The team subsequently increased its contribution to the Folding@home project, providing an average of 19 petaflops, equivalent to approximately 50,000 CPUs plus additional GPUs, from its normal cloud spare capacity.
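A quick back-of-the-envelope check shows how the two figures quoted above relate. Since GPUs also contribute part of the 19 petaflops, the per-CPU number derived here is an upper bound implied by the article's own figures, not a measurement.

```python
# Sanity-check the quoted figures: 19 petaflops across ~50,000
# CPU-equivalents implies roughly 380 GFLOPS per CPU-equivalent.

PETA = 1e15
contribution_flops = 19 * PETA   # ~19 petaflops donated on average
cpu_count = 50_000               # quoted CPU-equivalent count

flops_per_cpu = contribution_flops / cpu_count
print(f"{flops_per_cpu / 1e9:.0f} GFLOPS per CPU-equivalent")  # -> 380 GFLOPS per CPU-equivalent
```

A few hundred GFLOPS is a plausible order of magnitude for a modern multi-core server CPU, so the quoted numbers are internally consistent.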

The load is spread across three continents and nine regions. The company currently ranks in the top 200 contributors, higher than other software companies. Schmidt says he's ecstatic about the results but notes that this is not really a competition. "We're all in this together, and I'm glad the other companies are donating so much as well."

To hear more from Michael Schmidt, listen to the full interview:


IN FOCUS PODCAST: Big Data Meets Epidemiology
