Engineers increasingly want to integrate AI into their projects and applications while climbing their own AI learning curve. To get started, they first need to understand what AI is and how it fits into their current workflow, which is not as straightforward as it seems. A simple search of “What is AI?” yields millions of results on Google, with varying degrees of technical depth and relevance.
So, what is AI to engineers?
Most of the focus on AI leans heavily on the AI model, which drives engineers to quickly dive into the modeling aspect of AI. After a few starter projects, engineers quickly learn that AI is not just modeling, but rather a complete set of steps that includes data preparation, modeling, simulation and test, and deployment.
Engineers using machine learning and deep learning often expect to spend a large percentage of their time developing and fine-tuning AI models. Yes, modeling is an important step in the workflow, but the model is not the end of the journey. The key to success in practical AI implementation is uncovering any issues early on and knowing which aspects of the workflow deserve time and resources for the best results—and these are not always the most obvious steps.
Two important asides to consider before diving into the complete workflow:
- Most often, AI is only a small piece of a larger system, and it needs to work correctly in all scenarios with all other working parts of the end product, including other sensors and algorithms such as control, signal processing, and sensor fusion.
- Engineers in this scenario already have the skills to be successful incorporating AI. They have inherent knowledge about the problem, and with tools for data preparation and designing models, they can get started even if they’re not AI experts, allowing them to leverage their existing areas of expertise.
The AI-Driven Workflow
Now we can dive into the four steps of the complete AI-driven workflow and better understand how each step plays its own critical role in successfully implementing AI into a project.
Step 1: Data Preparation
Data preparation is arguably the most important step in the AI workflow: Without robust and accurate data to train a model, projects are more likely to fail. If an engineer feeds the model “bad” data, they will not get insightful results—and will likely spend many hours trying to figure out why the model is not working.
To train a model, begin with as much clean, labeled data as you can gather. This is often one of the most time-consuming steps of the workflow. When deep learning models do not work as expected, many engineers focus on making the model better—tweaking parameters, fine-tuning, and running more training iterations. However, they would frequently be better served by focusing on the input data: preprocessing the data and ensuring it is labeled correctly, so that the model can actually learn from it.
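As a rough illustration of this preprocessing-and-labeling step, here is a minimal Python sketch using pandas (the article's workflow centers on MATLAB tools; Python is used here only for illustration). The sensor names, value ranges, and the threshold-based labeling rule are all hypothetical.

```python
import pandas as pd

# Hypothetical raw sensor readings; column names and values are illustrative.
raw = pd.DataFrame({
    "temperature": [21.5, None, 22.1, 250.0, 21.9],  # None = dropped reading
    "vibration":   [0.02, 0.03, None, 0.05, 0.04],
})

# Remove rows with missing values and physically implausible outliers.
clean = raw.dropna()
clean = clean[clean["temperature"].between(-40, 125)]

# Attach labels: a simple threshold rule stands in for manual/automatic labeling.
clean["label"] = (clean["vibration"] > 0.035).map({True: "fault", False: "normal"})
print(clean)
```

In practice the cleaning rules come from domain expertise (which readings are plausible, which events count as faults), which is exactly the knowledge engineers already bring to the table.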
One example of the importance of data preparation comes from the construction machinery and equipment company Caterpillar, which takes in high volumes of field data from various machines. This plethora of data is necessary for accurate AI modeling, but the sheer volume can make the data cleaning and labeling process even more time intensive than usual. To streamline that process, Caterpillar uses automatic labeling and integration with MATLAB to quickly develop clean, labeled data for input into machine learning models, providing more promising insights from field machinery. The process is scalable and gives users the flexibility to apply their domain expertise without having to become experts in AI.
Step 2: AI Modeling
Once the data is clean and properly labeled, it’s time to move on to the modeling stage of the workflow, where the data is used as input and the model learns from it. The goal of a successful modeling stage is a robust, accurate model that can make intelligent decisions based on the data. This is also where deep learning, machine learning, or a combination thereof comes into the workflow, as engineers decide which approach will yield the most accurate, robust results.
At this stage, regardless of deciding between deep learning (neural networks) or machine learning models (SVM, decision trees, etc.), it’s important to have direct access to many algorithms used for AI workflows, such as classification, prediction, and regression. You may also want to use a variety of prebuilt models developed by the broader community as a starting point or for comparison.
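To make the idea of comparing model families concrete, here is a hedged Python sketch using scikit-learn (again, merely an illustration of the concept; the article's tooling is MATLAB-based). The synthetic dataset stands in for the prepared, labeled data from Step 1.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the clean, labeled data produced in Step 1.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compare more than one model family on held-out data before committing.
scores = {}
for model in (DecisionTreeClassifier(random_state=0), SVC()):
    model.fit(X_train, y_train)
    scores[type(model).__name__] = model.score(X_test, y_test)
print(scores)
```

The point is not the specific models but the habit: evaluate several candidate algorithms on held-out data rather than committing to one up front.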
Using flexible tools, like MATLAB and Simulink, offers engineers the support needed in an iterative environment. While algorithms and prebuilt models are a good start, they’re not the complete picture. Engineers learn how to use these algorithms and find the best approach for their specific problem by using examples, and MATLAB provides hundreds of examples for building AI models across multiple domains.
AI modeling is an iterative step within the complete workflow, and engineers must track the changes they are making to the model throughout this step. Tracking changes and recording training iterations, with tools like Experiment Manager, is crucial as it helps explain the parameters that lead to the most accurate model and create reproducible results.
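A minimal sketch of what such tracking records, shown here in plain Python (a stand-in for dedicated tooling like Experiment Manager; the hyperparameter grid and model choice are hypothetical):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=1)

# Record every training run so the best configuration is reproducible.
runs = []
for n_estimators in (10, 50, 100):
    for max_depth in (3, None):
        score = cross_val_score(
            RandomForestClassifier(n_estimators=n_estimators,
                                   max_depth=max_depth,
                                   random_state=1),
            X, y, cv=3).mean()
        runs.append({"n_estimators": n_estimators,
                     "max_depth": max_depth,
                     "score": score})

best = max(runs, key=lambda r: r["score"])
print("best config:", best)
```

Because every run's parameters and score are logged, the winning configuration can be explained and retrained exactly, which is the reproducibility the text describes.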
Step 3: Simulation and Test
AI models exist within a larger system and must work with all other pieces in the system. Consider an automated driving scenario: Not only do you have a perception system for detecting objects (pedestrians, cars, stop signs), but this has to integrate with other systems for localization, path planning, controls, and more. Simulation and testing for accuracy are key to validating that the AI model is working properly, and everything works well together with other systems, before deploying a model into the real world.
To build this level of accuracy and robustness prior to deployment, engineers must ensure that the model will respond the way it is supposed to, no matter the situation. Questions you should ask in this stage include:
- What is the overall accuracy of the model?
- Does the model perform as expected in each scenario?
- Does it cover all edge cases?
Trust is achieved once you have successfully simulated and tested all cases you expect the model to see and can verify that the model performs on target. By using tools like Simulink, engineers can verify that the model works as desired for all the anticipated use cases, avoiding redesigns that are costly both in money and time.
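The scenario-level questions above can be sketched numerically in a few lines. This hedged Python example (the scenario tags, labels, and predictions are all hypothetical) shows why overall accuracy alone is not enough: a single aggregate number can hide a scenario where the model underperforms.

```python
import numpy as np

# Hypothetical predictions and ground truth, tagged by test scenario.
scenarios = np.array(["clear", "clear", "rain", "rain", "night", "night"])
y_true    = np.array([1, 0, 1, 0, 1, 0])
y_pred    = np.array([1, 0, 1, 1, 1, 0])

# Overall accuracy can hide a weak scenario, so break it down per scenario.
overall = (y_true == y_pred).mean()
per_scenario = {s: (y_true[scenarios == s] == y_pred[scenarios == s]).mean()
                for s in np.unique(scenarios)}
print("overall:", overall)
print("per scenario:", per_scenario)
```

Here the overall accuracy looks healthy, but the per-scenario breakdown exposes a weak spot in one condition, which is exactly the kind of issue simulation and testing should surface before deployment.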
Step 4: Deployment
Once you are ready to deploy, the target hardware comes next: the model must be readied in the final language in which it will be implemented. This step typically requires design engineers to deliver an implementation-ready model that can be fit into the designated hardware environment.
That designated hardware environment can range from desktop to the cloud to FPGAs, and MATLAB can handle generating the final code in all scenarios. These types of flexible tools will offer engineers the leeway to deploy their model across a variety of environments without having to rewrite the original code.
Take the example of deploying a model directly to a GPU: Automatic code generation eliminates coding errors that could be introduced through manual translation and provides highly optimized CUDA code that will run efficiently on the GPU.
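Whatever the target, a common sanity check is a back-to-back comparison: run the same inputs through the original model and the deployment-oriented version and confirm the outputs agree within tolerance. This Python sketch illustrates the idea with a toy model whose weights are invented, using single precision as a stand-in for the optimized target implementation.

```python
import numpy as np

def reference_model(x):
    # Double-precision reference implementation (toy model, invented weights).
    return np.tanh(x @ np.ones((4, 2)) * 0.5)

def deployed_model(x):
    # Single-precision stand-in for the generated target code.
    x32 = x.astype(np.float32)
    return np.tanh(x32 @ np.ones((4, 2), dtype=np.float32) * np.float32(0.5))

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 4))

# Back-to-back test: deployed output must track the reference within tolerance.
max_err = np.max(np.abs(reference_model(x) - deployed_model(x).astype(np.float64)))
print("max abs error:", max_err)
```

Automatic code generation makes this check far more likely to pass, since there is no hand-translated code to introduce discrepancies beyond expected numerical precision effects.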
Engineers don’t have to become data scientists, or even AI experts, to achieve success with AI. Tools designed for engineers and scientists, functions and apps for integrating AI into a workflow, and available experts to answer questions about AI integration are crucial resources for setting up engineers, and their AI models, for success. Ultimately, engineers are at their best when they can focus on what they do best and build on it with the right resources to help them bring AI into the picture.