Computer vision in manufacturing has been misunderstood for too long


An interview with Albane Dersy, Co-Founder and COO of Inbolt

Why do you believe computer vision in manufacturing has been so widely misunderstood, and what’s changed in recent years to make it more viable?

Computer vision was long seen as unreliable and expensive. Limited by early 2D systems that struggled with lighting, reflections and depth, manufacturers often dismissed the technology, assuming it couldn’t handle the complexity of real-world production.

The turning point came with the shift to 3D vision. By enabling depth perception and spatial awareness, 3D vision opened automation to previously inaccessible tasks. Early 3D vision relied on techniques like structured light and stereo vision. Today’s systems combine advanced imaging with AI, allowing robots to interpret and adapt in real time. This evolution has made vision not just viable but essential for modern, flexible manufacturing.

Can you explain how vision-guided robotics reduces the need for traditional infrastructure like fixed tooling and precision fixtures?

Industrial robots are built for precision, but not perception. Even slight misalignments can lead to errors, downtime, and cost overruns. Traditionally, manufacturers relied on pricey fixtures and tedious reprogramming to keep robots on track. Vision-guided robotics flips that script.

Given the power to “see” and react in real time, robots adapt much as a human would. Modern 3D vision systems can be retrained in minutes using CAD data, without the need for retooling or precision fixtures. The vision system also travels with the robot, making it easy to integrate, flexible, and resilient to factory conditions. Now, robots adapt to the environment, not the other way around – unlocking faster changeovers and more agile production.
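The core idea behind replacing a precision fixture with vision can be sketched in a few lines: a pick pose is taught relative to a part's nominal position, and when the camera reports where the part actually is, the same relative offset is simply replayed from the detected pose. The sketch below is a hypothetical 2D illustration (position in millimetres plus a yaw angle), not Inbolt's implementation; real systems work with full 6-DoF transforms.

```python
import math

def corrected_pick_pose(nominal_pick, nominal_part, detected_part):
    """Re-target a taught pick pose when the part has shifted.

    Poses are (x_mm, y_mm, yaw_rad). The pick pose was taught relative to
    the part's nominal pose; vision reports the part's actual pose, and
    the same relative offset is replayed from there.
    """
    # Express the taught pick point in the part's own frame
    dx = nominal_pick[0] - nominal_part[0]
    dy = nominal_pick[1] - nominal_part[1]
    dyaw = nominal_pick[2] - nominal_part[2]
    c, s = math.cos(-nominal_part[2]), math.sin(-nominal_part[2])
    local = (c * dx - s * dy, s * dx + c * dy)

    # Replay that offset from the pose the camera actually measured
    c, s = math.cos(detected_part[2]), math.sin(detected_part[2])
    return (
        detected_part[0] + c * local[0] - s * local[1],
        detected_part[1] + s * local[0] + c * local[1],
        detected_part[2] + dyaw,
    )

# Part taught at the origin, pick point 100 mm along x; the camera then
# finds the part shifted 5 mm and rotated 90 degrees.
pose = corrected_pick_pose((100.0, 0.0, 0.0), (0.0, 0.0, 0.0),
                           (5.0, 0.0, math.pi / 2))
```

Because the correction is computed per cycle, nothing in the cell has to hold the part in exactly the same place – which is precisely the infrastructure a fixture used to provide.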

[Image: Inbolt vision system on a FANUC CRX robot]

What are some real-world examples where computer vision has significantly improved reliability and reduced downtime on production lines?

At Stellantis Valenciennes, an advanced computer vision system improved production precision and efficiency by addressing two key challenges: inconsistent part positioning and pallet deformation after heat treatment. Two Universal Robots were equipped with real-time 3D vision, enabling precise handling and eliminating the need for an operator to manually move 2.7 tonnes of parts.

Meanwhile, at Stellantis Detroit, a FANUC cell was retrofitted on the Body Side Outer Aperture line, where ageing racks meant parts didn’t always sit perfectly, leading to frequent breakdowns. The vision-guided system eliminated pick errors, cut downtime by 97%, and achieved ROI in just three months – delivering consistent, collision-free performance without mechanical rework.

How does computer vision technology enable the concept of “dark factories”?

“Dark factories” require more than automation. They demand autonomy. To realise the vision of fully automated factories, capable of operating without human intervention, you need reliability from every system, especially vision. Without human oversight, robots must be able to perceive and respond to even the slightest deviations in their surroundings.

Computer vision provides this capability. It allows robots to detect subtle deviations, such as a misaligned rack, and adapt in real time. Without vision, these errors could halt production. With it, robots operate continuously and safely. Computer vision acts as both the eyes and the intelligence behind robotic systems, enabling factories to run 24/7 with precision and resilience. It’s the foundation for lights-out manufacturing that’s not just automated but truly autonomous.
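In a lights-out cell, the decision logic sitting on top of the vision measurement matters as much as the measurement itself: small deviations are corrected in-stride, larger ones trigger a re-plan, and anything beyond a safe envelope halts the cell. The following is a minimal, hypothetical sketch of that escalation policy; the threshold values are illustrative, not drawn from any real deployment.

```python
# Illustrative thresholds – real values depend on the process and robot.
ADAPT_LIMIT_MM = 25.0   # small offsets: correct the target pose in-stride
FAULT_LIMIT_MM = 80.0   # beyond this, something is wrong; stop safely

def react_to_deviation(deviation_mm: float) -> str:
    """Decide how an unattended cell responds to a measured part offset."""
    if deviation_mm <= ADAPT_LIMIT_MM:
        return "adjust"   # nudge the taught pose and keep running
    if deviation_mm <= FAULT_LIMIT_MM:
        return "replan"   # recompute the approach path, then continue
    return "fault"        # e.g. a deformed rack: halt and raise an alert
```

Without vision there is no measurement to feed this logic, so every deviation lands in the equivalent of the "fault" branch – a line stop waiting for a human.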

[Image: Inbolt camera at a Stellantis plant]

What advice would you give to manufacturers who are hesitant to adopt vision-based automation due to concerns about cost or complexity?

With modern 3D vision, implementation no longer has to be costly or complex. Engineers don’t need specialised coding skills: they can retrofit existing production cells from robot brands such as ABB and KUKA, training the AI on a part’s CAD file or 3D model within 30 minutes.

This also helps to reduce CapEx and infrastructure costs since production lines don’t need to be ripped out and replaced when manufacturing a new model. Added to that, 3D vision makes the line more reliable and efficient, delivering 80% fewer rejected parts and reducing downtime by up to 97%.

All of this means that manufacturers can see ROI in as little as 3–12 months, demonstrating that the upfront cost pays off quickly.