Advanced Computing in the Age of AI | Tuesday, March 19, 2024

CAE Experiment Provides Insight into Computing as a Service 

After a fast-paced three months, Round 1 of the CAE Experiment concluded last month, with more than 160 digital manufacturing organizations and individuals from 25 countries working together in 25 international teams. In this summary, Wolfgang Gentzsch and Burak Yenier present the experiment’s main findings and challenges. Round 2 is now open for new CAE participants; learn how you can be part of this grand experiment.

The aim of the CAE Experiment is to explore the end-to-end process of accessing and using remote resources in Computer Centers (CILEA, FCSCL, FutureGrid, HSR, and SDSC) and in HPC Clouds (Amazon, Bull, Nimbix, Penguin, SGI, and TotalCAE), and to study and overcome the potential roadblocks.

The Experiment kicked off in July 2012 and brought together four categories of participants: industry end users with their digital manufacturing applications; software providers; computing and storage resource providers; and experts. End users can achieve many benefits by gaining access to additional compute resources beyond their current internal resources (e.g., workstations).

Arguably the most important benefits are:
•    Agility gained by speeding up product design cycles through shorter simulation run times.
•    Superior quality achieved by simulating more sophisticated geometries or physics, or by running many more iterations to look for the best product design.

During the three months of the experiment, we built 25 teams, each formed around a digital manufacturing project proposed by an end user.

These teams were:
•    Team Anchor Bolt
•    Team Resonance
•    Team Radiofrequency
•    Team Supersonic
•    Team Liquid-Gas
•    Team Wing-Flow
•    Team Ship-Hull
•    Team Cement-Flows
•    Team Sprinkler
•    Team Space Capsule
•    Team Car Acoustics
•    Team Dosimetry
•    Team Weathermen
•    Team Wind Turbine
•    Team Combustion
•    Team Blood Flow
•    Team Turbo-Machinery
•    Team Gas Bubbles
•    Team Side Impact
•    Team ColombiaBio
•    Team Cellphone

The final report, available to all of our registered participants, contains the use cases of many of the teams, offering valuable insight in the teams’ own words. We look forward to future rounds of the experiment, where this accumulating knowledge will yield ever more successful projects.

Roadblocks and Recommendations on How to Overcome Them

Our teams have reported the following main roadblocks and provided information on how they did or did not resolve them:

•    Security and privacy: guarding the raw data, processing models, and results
•    Unpredictable costs, which can be a major problem in securing a budget for a given project
•    Lack of easy, intuitive self-service registration and administration
•    Incompatible software licensing models that hinder adoption of Computing-as-a-Service
•    High expectations that can lead to disappointing results
•    Lack of reliability and availability of resources, which can lead to long delays
•    …and more.

Recommendations from the teams for circumventing some of these roadblocks include the following. End users should clearly document their security and privacy requirements at the beginning of the project. Automated, policy-driven monitoring of usage and billing is essential to keep costs under control.
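As a rough illustration of what policy-driven cost control can mean in practice, the sketch below checks spend against a project budget and decides whether to keep going, warn the project owner, or stop submitting jobs. The threshold, the dollar figures, and the function itself are illustrative assumptions, not part of the Experiment's report; a real setup would pull usage data from the provider's billing interface.

```python
# Hypothetical sketch of a policy-driven budget check. The 80% warning
# threshold and the sample figures are illustrative assumptions only.

def check_budget(spent: float, budget: float, warn_at: float = 0.8) -> str:
    """Return an action based on how much of the project budget is spent."""
    if budget <= 0:
        raise ValueError("budget must be positive")
    ratio = spent / budget
    if ratio >= 1.0:
        return "suspend"  # budget exhausted: stop submitting new jobs
    if ratio >= warn_at:
        return "warn"     # notify the project owner before overrunning
    return "ok"

# Example: a team has spent $850 of a $1,000 cloud allocation.
print(check_budget(850, 1000))  # warn
```

Running such a check automatically, on every billing update rather than at month's end, is what turns an unpredictable cost into a bounded one.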

To speed up the resource allocation process, we recommend that resource providers consider setting up queues specific to end-user needs and assigning the queue during the registration process. Resource providers could also develop self-service knowledge-base tools that increase the efficiency of their support processes. Concerning incompatible software licensing, successful on-demand licensing models already exist from some forward-looking ISVs, and we believe others can learn from them.

To set the right level of expectations, define goals that are incrementally better than the current capabilities of your organization, technology infrastructure, and processes. And finally, selecting a reliable resource provider with adequate available resources is paramount. The final report explains each of these recommendations in detail.

We hope that our participants will extract value out of the Experiment and the final report. They certainly deserve to do so in return for their generous contributions, support and participation.

We now look forward to Round 2 of the Experiment – which already has over 250 participants – and the learning that will result. To participate in Round 2 or just monitor it closely, you can register at http://cfdexperiment.com. More information about Round 1 – including two use cases – has been published in the Digital Manufacturing Report.

EnterpriseAI