GigE Vision: fully integrating the output from all sensors – machine vision and the cloud

Today, designers are converting the image feeds from these sensors into GigE Vision so they can be analyzed with traditional machine vision processing. Looking ahead, there is obvious value in fully integrating the output from all of the sensors within an application to provide a complete data set for analysis and, eventually, AI.
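For a sense of what this looks like in code, below is a minimal acquisition sketch in Python using the open-source Harvesters library, which streams from GigE Vision cameras through a vendor-supplied GenTL producer. The producer path is illustrative, and method names can vary slightly between Harvesters releases.

    # Minimal GigE Vision acquisition sketch using the Harvesters library.
    from harvesters.core import Harvester

    h = Harvester()
    h.add_file('/opt/vendor/sdk/lib/producer.cti')  # illustrative GenTL producer path
    h.update()                                      # enumerate GigE Vision devices

    ia = h.create(0)          # image acquirer for the first camera found
    ia.start()                # begin streaming over GigE Vision
    with ia.fetch() as buffer:
        component = buffer.payload.components[0]
        frame = component.data.reshape(component.height, component.width)
        print('Acquired frame:', frame.shape, frame.dtype)
    ia.stop()
    ia.destroy()
    h.reset()

Once frames are available as arrays like this, they can be fed into any downstream analysis, which is what makes GigE Vision a convenient common denominator for mixed sensor types.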

Machine vision and the cloud

The cloud, and the access it provides to a wider data set, will play an important role in bringing IoT to the vision market. Traditionally, production data has been confined to a single facility. There is now an evolution toward cloud-based data analysis, in which a wider data set drawn from a number of global facilities can be used to improve inspection processes.

Instead of relying on rules-based programming, vision systems can be trained to make decisions using models learned from the collected data. With a scalable, cloud-based approach to learning from new data sets, AI and machine learning algorithms can be continually updated and improved to drive efficiency. Smart frame grabbers and embedded imaging devices provide a straightforward entry point for integrating preliminary AI capabilities into a vision system.
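As a hedged illustration of that data-driven approach, the sketch below fits a scikit-learn classifier to labeled image patches rather than encoding pass/fail rules by hand; the data files, shapes, and labels are hypothetical stand-ins for data collected on a production line.

    # Data-driven inspection sketch: a classifier is fit to labeled examples
    # instead of hand-tuned rules. Assumes scikit-learn and NumPy.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X = np.load('patches.npy')   # hypothetical (N, H, W) grayscale patches
    y = np.load('labels.npy')    # hypothetical (N,) labels: 0 = pass, 1 = defect
    X = X.reshape(len(X), -1)    # flatten each patch into a feature vector

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)    # the training step replaces manual rule tuning
    print('Held-out accuracy:', clf.score(X_test, y_test))

Retraining on a wider, cloud-aggregated data set is then just a matter of re-running the same fit on more examples, which is the continual-improvement loop described above.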

Inexpensive cloud computing also means that algorithms once considered too computationally expensive, because of their dedicated infrastructure requirements, are now affordable. For applications such as object recognition, detection, and classification, the learning portion of the process that once required vast computing resources can now happen in the cloud rather than on dedicated, owned, and expensive infrastructure. The processing power required for imaging systems to accurately and repeatedly simulate human understanding, learn new processes, and identify and even correct flaws is now within reach of any system designer.

Providing this data is a first step toward machine learning, AI, and eventually deep learning for vision applications that leverage a deeper set of information to improve processing and analysis. Traditional rules-based inspection excels at identifying defects based on a known set of variables, such as whether a part is present or located too far from the next part.
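A toy version of such a rules-based check might look like the sketch below, where fixed thresholds encode the known variables (a part must be present, and part spacing must stay within tolerance); the centroid list would come from an upstream detector, and the 5 mm tolerance is illustrative.

    # Toy rules-based inspection: fixed thresholds over known variables.
    def inspect(parts, max_gap_mm=5.0):
        """parts: list of (x_mm, y_mm) part centroids, ordered along the board."""
        if not parts:
            return 'FAIL: part missing'
        for (x1, _), (x2, _) in zip(parts, parts[1:]):
            gap = abs(x2 - x1)
            if gap > max_gap_mm:
                return f'FAIL: gap {gap:.1f} mm exceeds {max_gap_mm} mm'
        return 'PASS'

    print(inspect([(0.0, 0.0), (4.2, 0.1), (12.0, 0.0)]))  # FAIL: gap 7.8 mm ...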

In comparison, a machine learning-based system can better solve inspection challenges when defect criteria change – for example, when the system needs to identify and classify scratches that differ in size or location, or that appear on various materials of an inspected device. With a new reference data set, the system can easily be trained to identify scratches on various types of devices, or to apply pass/fail tolerances based on the requirements of different customers. Beyond identifying objects, a robotic vision system designed for parts assembly can use a proven data set to program itself to understand patterns, know what its next action should be, and execute that action.

The need to transport and share large amounts of potentially sensitive, high-bandwidth sensor data with the cloud will also drive technology development in new areas for the vision industry, including lossless compression, encryption, security, and higher-bandwidth wireless sensor interfaces.
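As a rough sketch of that transport step, the snippet below losslessly compresses a raw frame with zlib and encrypts it using the Fernet recipe from the widely used cryptography package; both are generic tools chosen for illustration rather than vision-specific protocols, and the file name is hypothetical.

    # Lossless compression plus symmetric encryption before cloud upload.
    import zlib
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()       # in practice, provision keys securely
    cipher = Fernet(key)

    frame_bytes = open('frame.raw', 'rb').read()      # hypothetical raw frame
    compressed = zlib.compress(frame_bytes, level=9)  # lossless: fully reversible
    payload = cipher.encrypt(compressed)              # encrypt before transport

    # The receiving side reverses the steps exactly:
    restored = zlib.decompress(cipher.decrypt(payload))
    assert restored == frame_bytes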

With some incremental technology shifts toward bandwidth-efficient devices that integrate preliminary AI functionality, IoT will become a reality for the vision industry. The first key step is to bring processing to the edge of the network via smart devices. These devices must then be able to communicate and share data without adding cost or complexity to existing or new vision systems. Finally, while the concept of shared data is fundamentally new to the vision industry, it is the key to ensuring that processes can be continually updated and improved to drive efficiencies.

Contact us for further information: sales@innomiles.com