Published in collaboration with NCMS
Digital Manufacturing Report

News & information about the fast-moving world
of digital manufacturing, modeling & simulation


Making Digital Manufacturing Affordable: A Vendor Perspective


If you're building an aircraft carrier, designing a wing for a new jetliner, or engineering a state-of-the-art light water nuclear reactor, chances are you're using a supercomputer and the very latest modeling, simulation and analysis software. You also probably work at a very large company, government lab, or university. And you have some serious funding.

But if you're a small- to medium-sized manufacturer (SMM) further down the supply chain, a big high performance computing (HPC) system is probably not part of your development environment. You may have some older but serviceable workstations, some 2D CAD software, a limited budget, and a small, overworked IT staff dedicated primarily to fighting fires. Reducing design and prototyping time and costs through the adoption of HPC is a desirable but not yet affordable option, in terms of both money and staffing.

Organizations like the National Center for Manufacturing Sciences (NCMS) and the Alliance for High Performance Digital Manufacturing (AHPDM) are trying to change all that (see last week's feature article, Hope for the Missing Middle). But some HPC industry vendors are stepping up to the plate as well.

NVIDIA and the Pervasive GPU

NVIDIA is one of those companies. We spoke with Sumit Gupta, product lead, computing products, who gave us his perspective on how the benefits of HPC can be made available to SMMs — in particular the "missing middle" who are not yet making full use of the technology.

Gupta points out that 15 to 20 years ago, manufacturing software ran on desktop workstations, and this setup was relatively affordable. But over time, desktop workstations failed to keep up with the software's growing performance requirements, and the software migrated to increasingly powerful HPC clusters, which delivered raw horsepower at a lower cost than the typical high-end supercomputer.

Says Gupta, "As soon as software products migrate off the desktop, they start to become prohibitively expensive for small business users. For HPC to truly make inroads into the SMMs it has to be easily available — and the best way to make this happen is through an affordable desktop machine. Not every office has an HPC cluster; but every office does have a desktop system." He points out that these new affordable workstations are not only powered by multicore CPUs and GPUs, but the software has also evolved to take advantage of this parallel computing capability in the workstations.

One of the problems that has to be overcome is the fundamental scaling wall that manufacturing software has run up against: applications are not speeding up in proportion to the number of cores added. Going from one core to two may yield close to a 2X speedup, but because of fundamental memory bandwidth limitations, adding more cores to a system with the same bus and memory can actually choke the system.
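That scaling wall can be sketched with a toy performance model: if every core shares one memory bus, a kernel's runtime is bounded by the slower of its compute work and its memory traffic. The numbers below are illustrative round figures, not measurements of any real system.

```python
# Back-of-the-envelope model: cores share one memory bus, so a
# memory-bound kernel stops scaling no matter how many cores are added.
# All rates below are illustrative, not specs of any real machine.

def kernel_time(cores, flops=8e9, bytes_moved=4e9,
                flops_per_core=2e9, bandwidth=8e9):
    """Runtime is bounded by the slower of compute and memory traffic."""
    compute_time = flops / (cores * flops_per_core)  # shrinks with cores
    memory_time = bytes_moved / bandwidth            # shared bus: fixed
    return max(compute_time, memory_time)

base = kernel_time(1)
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores: speedup {base / kernel_time(n):.1f}x")
```

With these made-up rates the speedup climbs 1x, 2x, 4x, 8x and then flattens: beyond eight cores the kernel is waiting on the memory bus, not on arithmetic, which is exactly the choke point described above.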

Because these applications are extremely sensitive to memory size and bandwidth, NVIDIA's solution, as one might expect, is to use GPUs in these desktop systems. For example, the company recently announced that Dassault Systèmes is using its Quadro and Tesla GPUs coupled with CPUs to run computer-aided engineering (CAE) simulations — its Abaqus 6.11 finite element analysis (FEA) suite — twice as fast as with a CPU alone.

Now a 2X speedup may not seem like a huge leap forward, but the fact is that for the past five years, manufacturing has been experiencing only incremental speedups despite trying all sorts of technological fixes. GPUs, however modestly, are breaking the logjam. And GPUs have a history of becoming faster every 18 months to two years through the addition of hundreds of small cores — a technique that works very well with manufacturing application software.

Memory and I/O are still limiting factors, but the memory bandwidth of a GPU is about 10X that of a CPU and this advantage is expected to be maintained as solutions such as fast graphic memory are incorporated. For example, the NVIDIA Tesla M2070Q features 6 GB of GDDR5 memory per GPU with ultra-fast bandwidth. This kind of capability is particularly important for modeling, simulation and analysis — the backbone of digital manufacturing.
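To see why bandwidth matters so much for modeling and simulation, consider a single pass over a data set large enough to fill that 6 GB of graphics memory. The bandwidth figures below are assumed round numbers chosen to illustrate the roughly 10X gap, not vendor specifications.

```python
# Rough estimate of how memory bandwidth bounds a streaming workload,
# e.g. one sweep over a large simulation data set. The bandwidth
# figures are illustrative round numbers, not measured values.

GB = 1e9
data_set = 6 * GB        # data set filling a 6 GB graphics memory
cpu_bw = 15 * GB         # assumed CPU memory bandwidth, bytes/sec
gpu_bw = 150 * GB        # assumed ~10X GPU memory bandwidth, bytes/sec

cpu_pass = data_set / cpu_bw   # seconds per sweep on the CPU
gpu_pass = data_set / gpu_bw   # seconds per sweep on the GPU
print(f"CPU: {cpu_pass:.2f} s/pass, GPU: {gpu_pass:.2f} s/pass")
```

Under these assumptions each sweep drops from 0.4 seconds to 0.04 seconds; over the thousands of sweeps a solver makes, that bandwidth gap, not peak flops, sets the ceiling on bandwidth-bound simulation codes.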

So what does all this have to do with SMMs who would like to leverage HPC for their business, but can't afford the price tag and overhead associated with conventional clusters or supercomputers? This is where the new breed of personal supercomputers comes in.

NVIDIA, along with a number of other companies, began offering these powerful desktop systems about three years ago. NVIDIA claims that the Tesla Personal Supercomputer delivers the performance of a cluster in a desktop system — nearly 4 teraflops (up to 250 times faster than your average PC or workstation) — for under $10,000.

The company hopes that by offering a relatively low-cost system that can easily handle advanced modeling and simulation software, it will make inroads into the roughly 285,000 SMMs that constitute the "missing middle." However, as we noted in a recent blog, there are a number of other speed bumps to be navigated before digital manufacturing, modeling and simulation — and the personal supercomputers that make them possible — enjoy widespread adoption in this nascent mid-market. (HPC in the cloud is another rapidly developing option for those manufacturers that don't want to own and support their own HPC system.)

But the odds are that as the price of personal supercomputers continues to drop while their processing power continues to rise, an increasing number of SMMs will be ready to take the plunge.

Dirty Cotton and Microwaved Pizza — Affordable Supercomputer Solutions

When them cotton balls get rotten, you can lose a lot of money. Fortunately, the cotton manufacturing industry has gotten an assist from researchers at the U.S. Department of Agriculture, who used NVIDIA GPUs to create a machine vision system that does a far better job of detecting contaminants on cotton lint traveling down an assembly line for cleaning.

Current CPU-based solutions can't react fast enough to take a precise reading of the level of trash contamination on the cotton. The result is overwashing and significant lint loss. The GPU-based system uses pattern recognition software to identify the dirt level of each batch of cotton and precisely control the washing process.

The prototype system indicates that lint loss could be cut by more than 30 percent, speeding up processing and saving a significant amount of cotton fiber that would otherwise be washed away. This simple innovation could save the US cotton industry up to $100 million per year.

Zapping a Pizza

General Mills is not exactly a member of the "missing middle," but the company did recently use a CUDA-based system in a way that could be emulated by SMMs in the food industry.

The question: what's the optimal way to cook a frozen pizza in the microwave? Instead of experimenting with thousands of combinations, the company created virtual pizza models to test the effects of microwave radiation on various permutations of mozzarella cheese, tomato paste and crust. This allowed the researchers to cook up only the best candidates — a great savings in time and money and, presumably, a lot easier on their digestion.


Copyright © 2011-2014 Tabor Communications, Inc. All Rights Reserved.