Machine vision systems are used to perform complex visual inspections for robotic applications, delivering precise, multi-dimensional feedback on a target part in a language robots can recognise and use.
Vision systems also provide information on what action robotic components should take to interact with the target object. Successfully employing vision in robotic applications requires comprehension of how vision systems work and knowledge of the tools specifically designed for robotic application needs.
Vision sensors provide a way for machines to "see." Whereas traditional sensors analyse and interpret data from a single point, vision sensors capture an entire image. These sensors consist of a camera that snaps a picture of the part.
The image is then transferred to memory, processed, analysed and compared against predetermined parameters.
The vision sensor evaluates the features of the part against user-defined tolerances for each parameter, determines whether the part passes or fails the inspection, and outputs the results for use in robot control.
The controller and camera constitute the hardware elements of a vision system. The software elements include the control system, graphical user interface and image algorithms.
A vision system's feature set includes its vision tools and its method of communicating data. Robot applications that can benefit from a machine vision system fall into several classes. The most common involves randomly oriented parts moving along a conveyor belt in a range of different positions.
Coordinate transformation
The robot must adjust itself according to the orientation of the parts, grasp the items, and then palletise them. In this case, vision sensors supply the link between the randomly oriented part and the robot.
For example, a machine vision system can be applied to control robots at a pick-and-place machine for assembling electronic circuit boards.
Another common class of applications consists of robots that transfer parts from one station to the next in a process. The vision system supplies the information that allows the robot to grab a target object and move it to the next station in a manufacturing or inspection system.
When a machine vision camera detects an object in its field of view, the camera can locate it and establish the object's x and y coordinates with respect to the upper left-hand corner of the image, the (0, 0) point.
Yet the robot works in its own coordinate system, centred on its own (0, 0) point, which generally does not correspond to the origin the vision system employs.
To simplify communication between the vision sensor and the robot, and allow the robot to easily execute the correct action, vision systems utilise robot coordinate transformation.
Through this capability, vision systems convert information regarding the location of the point of interest in the camera's frame of reference into the robot's coordinate system.
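As a rough sketch of what such a transformation involves, the snippet below maps a pixel position into robot coordinates using a fixed scale, rotation and offset. All of the calibration values are invented for illustration; in practice they come from a calibration routine in which the robot touches reference points the camera can also see.

```python
import math

# Invented calibration values; real systems derive these from a
# camera-to-robot calibration routine.
MM_PER_PIXEL = 0.25                 # image scale
FRAME_ROTATION = math.radians(90)   # camera frame rotation relative to the robot frame
ORIGIN_X, ORIGIN_Y = 150.0, 320.0   # camera origin's position in robot space (mm)

def camera_to_robot(px: float, py: float) -> tuple[float, float]:
    """Map a pixel coordinate, measured from the image's upper left-hand
    corner, into the robot's own coordinate system (millimetres)."""
    # Scale pixels to millimetres, rotate into the robot frame,
    # then translate by the camera origin's position in robot space.
    x_mm, y_mm = px * MM_PER_PIXEL, py * MM_PER_PIXEL
    rx = x_mm * math.cos(FRAME_ROTATION) - y_mm * math.sin(FRAME_ROTATION) + ORIGIN_X
    ry = x_mm * math.sin(FRAME_ROTATION) + y_mm * math.cos(FRAME_ROTATION) + ORIGIN_Y
    return rx, ry

print(camera_to_robot(640, 480))  # part found at pixel (640, 480)
```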
In addition to the x and y position coordinates, machine vision systems frequently need to tell the robots the theta coordinate, θ, or the angle of rotation of a target object.
The inclusion of the θ coordinate allows robots not only to identify where a part is located but also to pick it up correctly. Vision tools can report both the position of the object and how it is rotated, so the robot can adjust itself appropriately before picking up the object and completing a task.
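Assuming the same fixed 90° frame rotation as in the sketch above, the reported angle needs only that constant offset applied before the gripper aligns itself:

```python
def camera_theta_to_robot(theta_deg: float) -> float:
    """Offset the part's rotation by the (assumed) fixed camera-to-robot
    frame rotation so the gripper approaches at the correct angle."""
    return (theta_deg + 90.0) % 360.0
```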
The x, y and θ coordinates of a particular part can be ascertained using a variety of vision tools, which are part of the software components of a vision system.
The precision available in these tools varies, as does the amount of time required to analyse the point of interest.
For instance, edge-based tools provide the x and y coordinates for wherever an edge is found on the product.
When several edge-detecting tools are combined with an analysis tool, the angle, or θ coordinate, can be determined.
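As an illustration of the idea, assuming two edge tools have each reported a point along the same straight edge of the part, the angle follows from basic trigonometry:

```python
import math

def angle_from_edges(x1: float, y1: float, x2: float, y2: float) -> float:
    """Estimate the theta coordinate from two points found along the
    same edge of a part by two edge-detection tools."""
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

# Two edge hits along the top edge of a tilted part (pixel coordinates).
print(angle_from_edges(100, 200, 300, 250))  # roughly 14 degrees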
A more sophisticated blob tool can usually find the x, y and θ coordinates of the centre of mass of an object (based on the two-dimensional average of all the pixel x and y positions and information about the overall shape of the part), allowing the robot to grab and pick up the object in a balanced way.
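A minimal sketch of that centre-of-mass calculation, assuming the blob has already been segmented into a list of pixel coordinates:

```python
def blob_centroid(pixels: list[tuple[int, int]]) -> tuple[float, float]:
    """Centre of mass of a blob: the two-dimensional average of the
    x and y positions of every pixel belonging to the object."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return sum(xs) / len(xs), sum(ys) / len(ys)

# A small L-shaped blob given as (x, y) pixel coordinates.
print(blob_centroid([(2, 2), (2, 3), (2, 4), (3, 4), (4, 4)]))  # (2.6, 3.4)
```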
Even more precise (and more time-consuming) are pattern-matching tools, which provide information on the centre of an object, as well as how it is rotated, so the robot understands how it has to adjust to pick up the object.
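The snippet below sketches the position-finding half of pattern matching using OpenCV's normalised cross-correlation; this is a generic stand-in rather than any particular vendor's tool, and the synthetic images are invented. Rotation is commonly handled by also matching rotated copies of the template, which is part of what makes these tools slower.

```python
import cv2
import numpy as np

# Synthetic scene: a bright 40x40 "part" on a dark background.
scene = np.zeros((200, 200), dtype=np.uint8)
scene[60:100, 80:120] = 255

# Template: the same part with a margin of background around it.
template = np.zeros((60, 60), dtype=np.uint8)
template[10:50, 10:50] = 255

# Slide the template over the scene and take the best correlation peak.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)
h, w = template.shape
centre = (top_left[0] + w / 2, top_left[1] + h / 2)
print(centre, round(score, 2))  # (100.0, 80.0) with a perfect score of 1.0
```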
Transferring data
Vision sensors communicate information to the robots in several different ways. The simplest and most cost-effective method uses an ASCII string.
In this method, the camera detects the x, y and θ positions and sends them to the robot via an RS-232 serial connection or a TCP/IP Ethernet connection.
The robot controller does not request the information; it simply receives whatever the camera sends, whenever the camera sends it.
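A rough sketch of the robot-controller side of this scheme, assuming the camera pushes one comma-separated reading per line over TCP; the port and field order are invented, and real cameras vary:

```python
import socket

# Listen for the camera's unsolicited ASCII output, e.g. "142.5,88.0,12.3\n".
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 3000))   # assumed port
listener.listen(1)
conn, _ = listener.accept()

buffer = b""
while True:
    data = conn.recv(1024)
    if not data:                   # camera closed the connection
        break
    buffer += data
    while b"\n" in buffer:         # parse each complete line as it arrives
        line, buffer = buffer.split(b"\n", 1)
        x, y, theta = (float(v) for v in line.decode().strip().split(","))
        print(f"part at x={x}, y={y}, theta={theta}")
```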
A remote command channel (RCC) feature allows a robot controller to instruct the camera to carry out a task, such as taking a picture or reporting information about an image.
With RCC, data is also sent over a serial or Ethernet connection, though in a more controlled manner. The camera only transmits information to the robot when it is requested, streamlining the flow of data and increasing overall efficiency.
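A sketch of that request/response pattern; the commands shown here are purely illustrative, as each camera vendor defines its own remote command set:

```python
import socket

cam = socket.create_connection(("192.168.0.10", 3000))  # assumed camera address

def command(cmd: str) -> str:
    """Send one command and wait for the camera's reply."""
    cam.sendall((cmd + "\r\n").encode())
    return cam.recv(1024).decode().strip()

command("TRIGGER")          # ask the camera to acquire and inspect an image
reply = command("GETPOS")   # then request the coordinates, e.g. "142.5,88.0,12.3"
x, y, theta = (float(v) for v in reply.split(","))
```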
In the third and most complex method, an Industrial Ethernet connection links the camera to a PLC or advanced robot controller.
Like RCC, this method allows more regulated control over the transfer of information from the camera. Further, it provides a better storage system for the information.
The camera identifies the x, y and θ coordinates of the target objects, and each piece of data is mapped to a dedicated area in memory.
Now, when the robot controller requests information, it receives the data in a much more orderly fashion.
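The sketch below illustrates the mapped-memory idea with an invented register layout: each result lives at a fixed address, so the controller reads a known location instead of parsing a stream. Real fieldbus layouts and scaling factors are vendor-specific.

```python
# Invented register map: each coordinate occupies a fixed address and is
# stored as a scaled integer, a common fieldbus convention.
REGISTER_MAP = {
    "part_x_mm": 0x0100,
    "part_y_mm": 0x0102,
    "part_theta_deg": 0x0104,
}

def read_result(memory: dict[int, int], name: str) -> float:
    return memory[REGISTER_MAP[name]] / 100.0  # undo the x100 scaling

# Simulated camera output area after one inspection.
memory = {0x0100: 14250, 0x0102: 8800, 0x0104: 1230}
print({name: read_result(memory, name) for name in REGISTER_MAP})
# {'part_x_mm': 142.5, 'part_y_mm': 88.0, 'part_theta_deg': 12.3}
```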
In the simplest method of data transfer, operators must set up the robot controller to listen to whatever data the camera delivers and make sense of it.
This parsing can be challenging for less capable robot controllers.
With RCC, a more sophisticated method of data transfer, the robot can request specific information at specific times, but it must still interpret the data in whatever format the camera sends it.
By utilising a PLC or advanced robot controller, a robot can request whatever information the camera holds, whenever it needs it, and the data arrives in an ordered and grouped fashion.
However, some less sophisticated robots cannot support this method, and some operators may not want to use an expensive PLC as the controller.
Interconnectivity
In early robotic applications, robot controllers and software were often exclusive to a specific company. To integrate other technology, such as a vision system, manufacturers needed to design a custom solution.
Since then, developments on both sides have opened these otherwise closed systems, allowing components to work with a wider range of solutions.
In addition to offering more sophisticated capabilities, such as regulated control and greater processing power, robotics and vision sensors can now be integrated more easily than ever before, making them suitable for a wide range of industrial applications.
[Simon Webb is Product Support Engineer, Micromax Sensors & Automation.]