
Trek Bicycles in the Fast Lane with CFD in the Cloud 

When the company’s in-house HPC server proved inadequate, Mio Suzuki, an analyst engineer at the fabled bicycle company, turned to cloud computing to run more complex CFD cases in far less time.

When she joined Trek Bicycle as an analyst engineer a few years ago, Mio Suzuki already had her head in the clouds.

During her graduate work at the University of Wisconsin-Madison, she and one of her principal investigators had access via the cloud to the high performance computing capabilities of the National Energy Research Scientific Computing Center (NERSC), the primary scientific computing facility for the Office of Science in the U.S. Department of Energy.

NERSC is a division of Lawrence Berkeley National Laboratory in Berkeley, Calif., where Suzuki did her undergraduate work. And NERSC is synonymous with heavy-duty high performance computing (HPC): its Hopper system, for example, is a Cray XE6 with a peak performance of 1.29 petaflops.

At Trek, she found a somewhat different situation.

Trek is a fast-growing, medium-sized manufacturer headquartered in Waterloo, Wisconsin. The company, founded in 1976, today has about 1,800 employees worldwide. In 2005 the company added 43,000 square feet to its headquarters to accommodate fast-growing engineering, R&D, and marketing departments.

Despite the added square footage, CFD-specific computational resources at Trek have not kept pace with the corporation’s rapid growth. “We simply did not have enough local resources to handle all the jobs we wanted to run,” says Suzuki.

More, Richer CFD
In particular, she wanted to ramp up the number and complexity of the computational fluid dynamics (CFD) cases she runs, which range from full bike-and-rider simulations down to individual components and products, such as helmets.

The problem was that rather than a petaflop Cray sitting in the data center, Suzuki’s HPC resources consisted of half of a rack-mounted, single-node server.

The HPC server did a good job on small and medium-sized CFD cases, but Suzuki quickly found that larger simulations, such as a turbulence analysis of a full bike-and-rider configuration, were just too computationally demanding for the system.

“I wanted to pack a lot more physics into the jobs and also to use CFD to predict the wind tunnel prototype testing outcomes more efficiently,” she comments. “Wind tunnel tests are expensive. Trek will continue to use the wind tunnel to benchmark the aerodynamic drag numbers, but we want to be smart in how we use the tunnel.”

Because of the limited compute capacity, the HPC server could run only one full bike-and-rider configuration case at a time, and it had to run overnight. She was also limited in how much complexity the system could handle: resolving small components with a fine mesh takes a massive toll on computation time, since halving the characteristic cell size multiplies the cell count roughly eightfold. Plus, Suzuki says, she normally works on four or five jobs at a time. Running even one or two of these jobs simultaneously on the HPC server would swamp the system and take many hours to complete.

“Recently we’ve been doing a lot more computer-oriented design and analysis, and one of my jobs is to elevate the level of CFD analysis at Trek,” she says. “To do that, I obviously need a lot more computational power. From my graduate school experience, I was familiar with solving large cases in the cloud, and I wanted to bring that same kind of sophisticated approach to Trek.”

Finding the Cloud
She knew what she wanted, but didn’t know where to go. Friends at other tech companies recommended universities with cluster time available; others pointed her toward various government programs. In the end, it was her account executive at CD-adapco who provided the answer.

“The account exec recommended R Systems, a company in Champaign, Ill., that provides HPC cluster resources in the cloud,” Suzuki recalls. “I contacted them right away. They worked fast; I didn’t have to wait months to get underway.”

R Systems immediately set up an account for Trek and built a proof-of-concept space before fully connecting her workstation to its remote HPC cloud services. She says the proof of concept worked very well, and the R Systems cluster’s performance exceeded her expectations. “They also went the extra mile on service,” she adds. “Having the additional expertise available from R Systems support is an absolute plus.”

Another enabler was CD-adapco’s STAR-CCM+ Power on Demand licensing scheme, which lets users tap effectively unlimited computational resources, such as the R Systems private cloud.
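As a rough illustration only, and not a description of Trek’s actual setup, a Power on Demand run is typically launched in batch mode from the command line. In the minimal sketch below, the PoD key, license server, core count, macro, and simulation file names are all placeholders, and exact flags can vary by STAR-CCM+ version.

```python
# Minimal sketch: launching a STAR-CCM+ batch run under a Power on Demand
# (PoD) license on a cloud cluster node. All values are placeholders.
import os
import subprocess

pod_key = os.environ["STARCCM_POD_KEY"]        # PoD key kept out of the script itself

cmd = [
    "starccm+",
    "-power",                                  # Power on Demand licensing mode
    "-podkey", pod_key,
    "-licpath", "1999@flex.cd-adapco.com",     # placeholder PoD license server
    "-np", "128",                              # cores requested on the cloud cluster
    "-batch", "run_aero.java",                 # macro that drives the solver run
    "full_bike_rider.sim",                     # placeholder simulation file
]

subprocess.run(cmd, check=True)                # raises if the solver exits nonzero
```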

Suzuki still runs small cases on the local Trek server, but when it comes to full bike simulations with high degrees of granularity, it’s off to the cloud.

Previously, evaluating a number of design options took tens of hours, and each option had to be run individually. Now she can run four or five cases at a time and, instead of waiting tens of hours or even days, she gets the results in a matter of hours. Armed with this information, the Trek design team can rapidly develop new products and shorten the company’s time to market in a highly competitive marketplace.
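To make the several-cases-at-a-time workflow concrete, here is a minimal sketch that reuses the hypothetical batch invocation shown above; the variant file names, core counts, and environment variable are illustrative, and a production setup would normally hand these jobs to the cluster’s scheduler rather than launching them directly.

```python
# Sketch of running several design-variant cases side by side on cloud nodes.
# File names, core counts, and the PoD environment variable are hypothetical.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

CASES = ["variant_a.sim", "variant_b.sim", "variant_c.sim", "variant_d.sim"]

def run_case(sim_file):
    """Run one STAR-CCM+ case in batch mode; returns the file name when done."""
    subprocess.run(
        ["starccm+", "-power", "-podkey", os.environ["STARCCM_POD_KEY"],
         "-np", "64", "-batch", "run_aero.java", sim_file],
        check=True,
    )
    return sim_file

# Each thread simply blocks while its solver process runs, so all variants
# execute concurrently and results are reported as individual cases finish.
with ThreadPoolExecutor(max_workers=len(CASES)) as pool:
    futures = [pool.submit(run_case, case) for case in CASES]
    for done in as_completed(futures):
        print("finished:", done.result())
```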

“Trek emphasizes innovation,”  Suzuki says. “With the R Systems HPC cloud resources, I can pack more physics into my simulation models and potentially uncover design information that was unknown to us previously.  For example, by implementing more fine-grained, realistic physics models, we can discover how different kinds of turbulence models impact the performance of our products.  HPC computing in the cloud allows us to run simulations that we could only dream about before.”

Editor’s Note: Mio Suzuki will be presenting a case study based on her work at Trek at the upcoming R Systems HPC 360 conference, to be held Oct. 23-24 in Champaign, Ill.

EnterpriseAI