Thought Leadership

Companies Lack Resources to Get Deep Learning Models into Production [Survey]

Apr 30, 2020

How many deep learning models do companies typically have in production? A lot fewer than you’d think: 84% of companies have five or fewer models in production. For many teams, the process is simply too hard or too costly. We recently surveyed more than 290 machine learning engineers and data scientists to find out how they’re executing deep learning in practice. (It’s not too late to take or re-take the survey, as our goal is to track and report on changes year over year.) Despite many technical advancements in the deep learning field, we found that most teams may not be taking full advantage of them.

For example, the majority of companies (59%) are not optimizing their machine learning models for production, despite the performance gains that techniques like quantization and pruning can offer. (One researcher, for example, found that pruning made a VGG-16-based classifier 3x faster and 4x smaller.)
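To make the pruning idea concrete, here is a minimal NumPy sketch of unstructured magnitude pruning: zero out the fraction of weights with the smallest absolute values. This is an illustrative toy, not the method from the VGG-16 result above; production frameworks apply the same idea per layer with masks and usually fine-tune afterwards.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, amount: float) -> np.ndarray:
    """Zero out the `amount` fraction of weights with the smallest magnitude."""
    k = int(weights.size * amount)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value across the whole tensor.
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.5, -0.05, 0.3],
              [0.01, -0.8, 0.002]])
pruned = magnitude_prune(w, amount=0.5)
# The three smallest-magnitude entries (-0.05, 0.01, 0.002) become zero.
```

Sparse tensors like `pruned` compress well and, with sparsity-aware kernels, can run faster; the speedups cited above come from exploiting that sparsity, not from the zeroing alone.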

[Chart: optimizing deep learning models for production]

There are a few different reasons why organizations may not be taking advantage of optimizations. 

  1. Optimizations can take a painfully long time to execute, and standard machine learning frameworks are only just starting to officially support techniques like quantization. 
  2. For those who do have the knowledge and resources to optimize, deployment tools are few and far between (or extremely difficult to set up). In addition, CPUs are only starting to gain support for quantization and pruning.
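For readers unfamiliar with quantization, here is a minimal sketch of symmetric linear int8 quantization in NumPy, with assumed helper names (`quantize_int8`, `dequantize`). Storing weights as int8 instead of float32 is a 4x size reduction; real framework implementations add per-channel scales, calibration, and quantized kernels.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric linear quantization: map floats to int8 with a single scale."""
    scale = max(float(np.abs(x).max()), 1e-8) / 127.0  # largest magnitude -> 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from int8 codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding error per weight is at most scale / 2.
```

The trade-off is the small rounding error introduced per weight, which is why teams validate accuracy after quantizing.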

Beyond optimizations, here are three of the most surprising findings revealed within our survey. 

#1: Low Numbers of Models in Production

Surprisingly, 42% of organizations had only 2-5 deep learning models in production, while 27% had none. These numbers suggest that getting models into production is hard; companies are likely experimenting with far more models than they deploy. With optimizations, more of those models could reach production at lower cost.

[Chart: deep learning models in production]

#2: Companies are Resource-Strapped for Training

Despite the fact that training is a compute-intensive process, 60% of our survey respondents were using CPUs for training. This finding suggests that GPUs may be too expensive for organizations to use, at least for every model they train. The organizations using CPUs are likely sacrificing performance in favor of cost. Given the low number of models in production, it’s likely that most teams still find deep learning too hard, due to resource constraints, costs, and performance issues.

[Chart: deep learning training hardware]

#3: Image Classification, Object Detection Growing as Use Cases for Deep Learning 

Nearly half of our participants were using machine learning for image classification (47%). Object detection was also a prominent use case (37%), which isn’t surprising given the market for practical applications such as visual search, security, and defect inspection.

[Chart: machine learning use cases]

Deployment: Top Tools, Frameworks, and More

Beyond these top three findings, we also found valuable insights into the types of frameworks and tooling that data scientists and machine learning engineers are using today, as well as how they’re deploying their models in production. Overall, TensorFlow was the dominant framework, used by 70% of practitioners. In addition, TensorFlow ranked as the top deployment tool (59%). Most organizations were using GPUs (64%) in production and deploying to the cloud (69%). Finally, container and orchestration tooling such as Docker and Kubernetes was the most popular CI/CD tooling, used by nearly half of practitioners.

See the full results below.

[Chart: deep learning frameworks]
[Chart: deep learning deployment tools]
[Chart: deep learning deployment hardware]
[Chart: deep learning deployment locations]
[Chart: deep learning CI/CD pipeline tools]

Lowering the Deep Learning Barrier to Entry

Overall, our inaugural deep learning survey showed us that data scientists and machine learning engineers still face many hurdles when it comes to getting their models into production. Whether it’s prohibitive cost or a lack of the right experts or tools, these challenges could be standing in the way of the next big machine learning breakthrough. 

Fortunately, optimization can help lower the barrier to entry for deep learning, so teams can put more models into production at lower cost. Reach out to us to learn more about how to make optimization techniques simpler to execute!
