Jim Witherspoon, Product Manager, Machine Vision, Zebra Technologies
The interest, innovation, and buzz around various types of artificial intelligence (AI) — machine learning, neural networks, and computer vision (CV) — is well recognised. What is less recognised is the renewed interest in, and application of, modern machine vision (MV) systems by warehousing operators and manufacturers.
MV and CV are both intelligence-based systems used for image capture, processing, and analysis. However, the speed and level at which this intelligence is gathered, distributed, and applied is one of the key factors distinguishing the two technologies.
MV systems tend to be self-contained, meaning the image capture and analysis occur right there on the line — data does not have to be sent to a back-office system for processing.
On the other hand, CV is often used as a back-end processing platform for front-line image capture technologies. Though CV enables fast decision-making and action, the lead time tends to be longer, given the depth and breadth of data being processed through the system.
Every company looking to implement MV systems today must decide how to share and manage image datasets and how to use them to deliver quality, actionable results on a consistent basis — data has become as important as code. However, while there are established industrial practices for code management, the practices for managing image data still have some way to go.
One of the biggest challenges we face today, whether as MV specialists or businesses looking to implement MV systems, is the correct capture and annotation of data. However, there are now new solutions to help industrial imaging professionals think and act like data scientists.
For that to happen, engineers need modern software tools that make working with today’s machine vision systems easier and accessible within minutes of set-up.
A THREE-STEP APPROACH TO DATA TOOLS
One way to conceptualise today’s need for engineers to think and act like data scientists is to set the groundwork by breaking down the process into three parts.
Stage one we could call ‘Capture,’ with a focus on setting up the exposure, lighting, and triggering. The second stage, ‘Build,’ would aim to make it easier to add and configure MV/FIS tools. And third, ‘Connect,’ makes it painless to get image data from the MV/FIS camera to the host system. At each stage, users need modern software that will augment their experience and help them get faster ROI.
For example, with ‘Capture,’ engineers could utilise software that allows them to capture a number of images in one go, each with its own ideal lighting, creating a ‘perfect’ library of images. This matters because parts are often not evenly lit, or have inspections at different depths and focus, necessitating changes to exposure settings.
With such a tool, an engineer could quickly adjust image settings, link the tool they need for the inspection to the ‘perfect’ library they’ve created, and get a pass/fail response with a single trigger pull. By eliminating the need for multiple job changes or tracking, hours are saved when deploying a system.
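To make the ‘Capture’ idea concrete, the selection of a well-exposed frame from a multi-shot bracket can be sketched in a few lines. This is a simplified illustration, not any vendor’s actual implementation: the `best_exposed` helper, the mid-grey target of 128, and the synthetic frames are all assumptions for the example.

```python
import numpy as np

def best_exposed(images, target_brightness=128.0):
    """From a bracket of captures at different exposures, pick the frame
    whose mean brightness is closest to the target.

    `images` is a list of greyscale frames as NumPy arrays; the default
    target of 128 (mid-grey for 8-bit images) is an illustrative choice.
    """
    deltas = [abs(float(img.mean()) - target_brightness) for img in images]
    return images[int(np.argmin(deltas))]

# Simulate a three-shot bracket: under-, well-, and over-exposed frames.
bracket = [
    np.full((4, 4), 40, dtype=np.uint8),   # too dark
    np.full((4, 4), 130, dtype=np.uint8),  # close to mid-grey
    np.full((4, 4), 240, dtype=np.uint8),  # blown out
]
print(best_exposed(bracket).mean())  # prints 130.0
```

In a real deployment the bracket would come from the camera, with the software sweeping exposure settings automatically, and the retained frames would populate the ‘perfect’ library described above.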
When it comes to ‘Build,’ how about software with the ability to draw vision tools directly onto the image? Such an approach would eliminate unnecessary clicks and drags when adding a tool to a solution. That may not sound like much, but as solutions get more sophisticated, the time savings can really add up.
And being able to capture images at the time of set-up is highly desirable—it’s when the system works best. Now, imagine being able to capture and save time-of-set-up images in perpetuity. If an issue causes the system to malfunction, the user can simply click on the image captured at the time of set-up and get a side-by-side view next to the new image.
If the camera was bumped, or a light is no longer working, or if key settings were changed, the software would help a user quickly identify the problem and get the system back up and running immediately. Such an approach would help eliminate downtime should something happen to the camera or setup.
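The side-by-side diagnosis described above can also be approximated programmatically: compare the current frame against the image saved at set-up time and flag large deviations. The `setup_drift` helper and its thresholds below are illustrative assumptions, not values from any shipping product.

```python
import numpy as np

def setup_drift(golden, current, pixel_tol=25, frac_tol=0.05):
    """Flag likely setup drift by comparing the current frame against the
    reference image saved at set-up time.

    Returns True when more than `frac_tol` of pixels differ by more than
    `pixel_tol` grey levels; both thresholds are illustrative defaults.
    """
    diff = np.abs(golden.astype(np.int16) - current.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_tol) / diff.size
    return changed > frac_tol

golden = np.full((8, 8), 120, dtype=np.uint8)   # saved at set-up time
same = golden.copy()
darker = np.full((8, 8), 60, dtype=np.uint8)    # e.g. a failed light

print(setup_drift(golden, same))    # prints False: nothing changed
print(setup_drift(golden, darker))  # prints True: scene darker than at set-up
```

A bumped camera, a dead light, or changed settings would all show up as a large fraction of changed pixels, pointing the user straight at the physical cause.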
A POWERFUL TOOLBOX FOR A RANGE OF USERS
Progress is being made when it comes to dataset management, which is vital for newer machine vision systems and those who use them. Engineers need a software toolbox if they’re to become more like data scientists: no one relies on just one screwdriver to complete a complex construction job, so why rely on just one vision tool when it comes to the complexities of data capture and processing?
Another example: getting optical character recognition (OCR) inspection right can be challenging. A variety of factors, including stylised fonts; blurred, distorted, or obscured characters; reflective surfaces; and complex, non-uniform backgrounds, can make it impossible to achieve stable results using traditional OCR techniques.
However, there are new industrial-quality deep learning OCR tools that make reading text quick and easy. They come with ready-to-use neural networks that are pre-trained on thousands of different image samples. Such a tool can deliver high accuracy straight out of the box, even when dealing with very difficult cases. Users can create robust OCR applications in just a few simple steps—all without the need for machine vision expertise.
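Whatever the engine, a common pattern when working with pre-trained OCR models is to keep only predictions the network is confident about. The sketch below assumes the engine returns (text, confidence) pairs, which most OCR libraries can be adapted to; `filter_ocr`, the 0.90 cut-off, and the sample output are all hypothetical.

```python
def filter_ocr(predictions, min_conf=0.90):
    """Keep only OCR predictions the model is confident about.

    `predictions` is assumed to be a list of (text, confidence) pairs;
    the 0.90 cut-off is an illustrative default, tuned per application.
    """
    return [text for text, conf in predictions if conf >= min_conf]

# Hypothetical engine output for a partly obscured label.
raw = [("LOT-4821", 0.98), ("EXP 2025", 0.95), ("B?rc0de", 0.41)]
print(filter_ocr(raw))  # prints ['LOT-4821', 'EXP 2025']
```

Low-confidence reads like the obscured third entry can then be routed to a re-capture or manual review rather than silently passed downstream.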
Even things like user interfaces can be, and are being, improved to reduce excessive mouse clicks and scroll bars. That might sound small, but the time savings add up, and the user experience feels more like working with a photo editor. Similarly, the navigation process should be intuitive for MV inspection and barcode scanning specialists as well as for novice users.
The goal is to give users—experienced experts and non-specialists alike—the best and most appropriate solutions to address their business needs, and that starts with getting to grips with better data tools.
To discuss how industrial imaging professionals can overcome today’s data challenges, get in touch with Jim Witherspoon here.