What’s the difference between machine vision and computer vision?

Computer vision and machine vision are overlapping technologies. A machine vision system requires a computer and specific software to operate, and it is integrated with a machine, while computer vision needs no such integration: it can analyze digital online images or videos, as well as "images" from motion detectors, infrared sensors, or other sources, not just a photo or video. In this sense, machine vision is a sub-category of computer vision.

Select the right software for your applications

At the very core of every machine vision application is the software that performs the actual processing and analysis of the image. Here, specific software tools ("algorithms," "operators," etc., depending on the terminology used by the application or library vendor) are configured or programmed to perform specific analyses on the pixel-based data in the acquired image.
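As a minimal sketch of what such an operator chain can look like in code, the snippet below strings together a few common pixel-level operators using the OpenCV library; the file name, kernel size, and threshold value are illustrative placeholders, not values from the text.

```python
import cv2

# Acquire: load an image as single-channel grayscale pixel data.
# "part.png" is a hypothetical file name used only for illustration.
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Operator 1: Gaussian blur to suppress sensor noise.
blurred = cv2.GaussianBlur(image, (5, 5), 0)

# Operator 2: fixed threshold to segment bright features from background.
_, binary = cv2.threshold(blurred, 128, 255, cv2.THRESH_BINARY)

# Analysis: count connected components; label 0 is the background.
num_labels, _ = cv2.connectedComponents(binary)
print(f"Detected {num_labels - 1} distinct blobs")
```

Each step here is one "operator" in the vendor-neutral sense: it takes pixel data in and produces transformed pixel data, or a measurement, out.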

Machine Vision Software: What’s “Under the Hood”

In terms of practical implementation, one construct for machine vision software is an application that "configures" the system components and how they execute machine vision functions and tasks. These applications tend to have graphical user interfaces (GUIs) devoted to ease of use, with intuitive, graphically manipulated configuration steps.
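One way to picture this construct is a declarative recipe, of the kind a GUI-based tool might save, that is mapped onto concrete operators at run time. The sketch below assumes OpenCV; the configuration schema and operator names are invented purely for illustration.

```python
import cv2

# A hypothetical pipeline description, as a GUI-based tool might emit it.
PIPELINE_CONFIG = [
    {"op": "blur", "kernel": 5},
    {"op": "threshold", "level": 128},
]

# Registry mapping configured operator names to actual implementations.
OPERATORS = {
    "blur": lambda img, p: cv2.GaussianBlur(img, (p["kernel"], p["kernel"]), 0),
    "threshold": lambda img, p: cv2.threshold(img, p["level"], 255,
                                              cv2.THRESH_BINARY)[1],
}

def run_pipeline(image, config):
    """Apply each configured operator to the image, in order."""
    for step in config:
        image = OPERATORS[step["op"]](image, step)
    return image
```

In this picture, the GUI's job is to build and validate PIPELINE_CONFIG, while the run-time engine simply walks it step by step, which is how such applications trade raw flexibility for ease of use.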

Let’s ensure that all your machine vision applications will be successful and reliable, now and in the future

The successful application of machine vision technology involves a carefully balanced mix of elements. While the hardware components that perform the tasks of image formation, acquisition, component control, and interfacing are decidedly critical to the solution, machine vision software is the engine "under the hood" that supports and drives the imaging, the processing, and ultimately the results.
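To make that division of labor concrete, here is a schematic acquire-process-decide loop, assuming an OpenCV-compatible camera at index 0; the threshold and pass/fail rule are placeholders for whatever analysis a real application would be configured to perform.

```python
import cv2

capture = cv2.VideoCapture(0)  # hardware side: camera interface / acquisition

while True:
    ok, frame = capture.read()  # image formation and acquisition
    if not ok:
        break

    # Software side: the processing "engine under the hood".
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

    # The result: a simple, illustrative pass/fail decision.
    bright_pixels = int((binary == 255).sum())
    print("PASS" if bright_pixels < 1000 else "FAIL")

capture.release()
```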
