Electronics – Bernd-Dietmar Becker, Robert E. Bridges, Ariane Stiebeiner, Rolf Heidemann, Matthias Wolke, Faro Technologies Inc

Abstract for “Registration and measurement of three-dimensional coordinates on the interior and exterior of an object”

A dimensional measuring device includes an overview camera and a triangulation scanner. The dimensional measuring device is tracked by a six-DOF tracking device while the triangulation scanner measures three-dimensional (3D) coordinates on the exterior of an object. Cardinal points identified in images from the overview camera are used to register, in a common frame of reference, the 3D coordinates measured by the triangulation scanner on the interior and exterior of the object.

Background for “Registration and measurement of three-dimensional coordinates on the interior and exterior of an object”

“Today, dimensional measurements may be made with handheld instruments that are not automated and with fixed instruments that are automated. Triangulation scanners, such as structured light scanners, are examples of such handheld instruments. A laser tracker used with a spherically mounted retroreflector (SMR) is another example: an operator carries the SMR from point to point, and the laser tracker determines the 3D coordinates of each point the SMR contacts. A Cartesian CMM is an example of an instrument that is both automated and fixed in place. A robot holding a measuring device such as a triangulation scanner or 3D imager is another example of an automated instrument that is fixed in place. Some 3D measuring devices combine non-contact scanning capability with tactile measurement capability.

A common problem in making 3D measurements of complex objects is that some 3D instruments, such as scanners, are registered with an external coordinate measurement device, such as a laser tracker, while in other cases the 3D instruments have no connection to a device that would allow them to be registered in a global frame of reference. The 3D data obtained in these various cases must be registered together. An example is a tactile probe used to measure an internal feature, such as a hole, where there is no way to register the tactile probe directly with an external 3D measuring instrument. A related opportunity is the design of measuring devices that can make 3D measurements in both interior and exterior areas of an object.

“While current 3D measuring devices, such as scanners, are adequate for their intended purpose, what’s needed is a device with improved ability to combine locally and globally registered 3D measurement data.”

A method is provided for measuring three-dimensional (3D) coordinates in accordance with one or more embodiments. The method includes: determining a first position and orientation of a dimensional measurement device with a six-DOF tracking system, the device being configured to determine 3D coordinates of points on the surface of an object in a local frame of reference; determining a second position and orientation of the device with the six-DOF tracking system; capturing a second overview image of the object; and determining the 2D coordinates of first cardinal points in the second overview image and, from these 2D coordinates, the 3D coordinates of second cardinal points.

These and other advantages and features will become more readily apparent from the following description taken in conjunction with the drawings.

“FIGS. 1A-1D show perspective, left, top, and front views, respectively, of a mobile measurement platform 100. The platform includes a mobile base assembly 110, an arm electronics assembly 130, and a robotic articulated arm 150. The mobile measurement platform 100 may be driven by many different types of wheels. In an embodiment, two active wheels are used to propel the mobile measurement platform 100, while one or more additional wheels are used for balance. In other embodiments, additional active wheels may be used. Wheel types include simple wheels, omni-wheels, and Mecanum wheels. In one embodiment, wheels 112A and 112B are simple active wheels, while wheels 114A and 114B are simple passive, free-turning wheels. Active wheels 112A and 112B may be operated independently, so each can rotate forward, rotate backward, or hold in place. If both wheels turn at the same speed, the mobile measurement platform 100 is propelled forward or backward. If the left wheel 112B holds in place while the right wheel 112A turns forward, the platform pivots about the left wheel, turning counterclockwise as seen in the top view. If the left wheel 112B rotates slowly while the right wheel 112A rotates relatively quickly, the platform turns counterclockwise, as seen in the top view, but in an arc. By adjusting the rotation rates and directions of the wheels 112A and 112B, the mobile measurement platform 100 can be directed to follow any desired path.
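The wheel behavior described above follows directly from standard differential-drive kinematics. Below is a minimal sketch of that relationship; it is illustrative only, and the wheel radius, track width, and function name are assumptions rather than values from the patent.

```python
def differential_drive_twist(omega_left, omega_right, wheel_radius=0.1, track_width=0.5):
    """Forward kinematics of a two-wheel differential drive.

    omega_left, omega_right: wheel angular speeds in rad/s
    wheel_radius, track_width: illustrative dimensions in meters
    Returns (v, omega): forward speed (m/s) and turn rate (rad/s),
    counterclockwise positive as seen in the top view.
    """
    v_left = wheel_radius * omega_left
    v_right = wheel_radius * omega_right
    v = 0.5 * (v_left + v_right)              # equal speeds -> straight line
    omega = (v_right - v_left) / track_width  # unequal speeds -> arc
    return v, omega

# Equal wheel speeds drive the platform straight ahead...
print(differential_drive_twist(5.0, 5.0))  # (0.5, 0.0)
# ...while a slow left wheel and a fast right wheel produce a counterclockwise arc.
print(differential_drive_twist(2.0, 5.0))  # (0.35, 0.6)
```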

“In an embodiment, the mobile measurement platform 100, with the robotic articulated arm 150, is considered a human-friendly robotic device, that is, a type of robotic device that is safe to operate around people. In an embodiment, the speed of the mobile platform is limited to one meter per second, the maximum speed permitted for autonomous vehicles in a factory environment shared with people. The mobile measurement platform 100 also includes an obstacle detection and avoidance system. In an embodiment, obstacles are detected by two two-dimensional (2D) scanners 116A, 116B. In an embodiment, the 2D scanners are SICK model S300 Professional safety scanners. Each scanner emits a horizontal plane of light 117 and measures the distance for each angle over an angular range of 270 degrees. By measuring distances and angles relative to the mobile base assembly 110, the scanners obtain information about potential obstacles. In one embodiment, the horizontal plane 117 is located close to the floor, the region in which obstacles are most easily detected. When an obstacle is detected, a signal is sent to processors in the system, directing the mobile platform to stop moving. A person can also stop the platform by pressing an emergency stop button 122A. In an embodiment, ultrasound sensors are included for obstacle avoidance where glass or other transparent surfaces are present. In an embodiment, the mobile base assembly 110 includes an on-off switch 128 as well as indicator lights 118A, 118B and 120A-120D.
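As a sketch of the protective-field logic described above, the fragment below checks a 270-degree sweep of (angle, distance) returns against a stop radius. It is a simplified illustration, not the SICK S300 interface; the 0.5 m radius and the function name are assumptions.

```python
def obstacle_detected(scan, stop_radius=0.5):
    """Check a 2D safety-scanner sweep for obstacles.

    scan: iterable of (angle_deg, distance_m) pairs covering the
          270-degree horizontal plane measured by the scanner.
    stop_radius: protective-field radius in meters (illustrative value).
    Returns True if any return falls inside the protective field.
    """
    return any(distance < stop_radius for _, distance in scan)

# A simulated sweep with one return 0.3 m away triggers a stop signal.
sweep = [(angle, 4.0) for angle in range(-135, 136)]
sweep[50] = (sweep[50][0], 0.3)
if obstacle_detected(sweep):
    print("obstacle in protective field: send stop signal to platform")
```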

“In an embodiment, the robotic articulated arm 150 is safe to use around people. In an embodiment, the robotic articulated arm 150 is a Universal Robots model UR10. In keeping with human anatomy, the robotic articulated arm 150 includes a shoulder 152, an elbow 170, and a wrist 180. In an embodiment, the robotic articulated arm 150 includes six joints 154, 158, 172, 182, 186, 190 configured to rotate about axes 155, 159, 173, 183, 187, 191, respectively, bidirectionally through angular movements 156, 160, 174, 184, 188, 192.

“In an embodiment, the arm electronics assembly 130 includes an arm electronics housing 132. In an embodiment, the robotic articulated arm 150 is mounted on the top surface of the arm electronics housing 132 through an arm base element 153. In an embodiment, cables from the robotic articulated arm 150 are routed from the arm base element 153 through a hole 133 into the arm electronics assembly 130, where they are routed to the robot control box and teach pendant electronics 250 shown in FIG. 2B.”

“In one embodiment, the robotic articulated arm 150 is attached at a first end to the top surface of the arm electronics housing 132 and at a second end to an end effector that includes a 3D measurement device, as discussed below. Several connected arm segments 162, 176 lie between the first and second ends. Each joint assembly 157, 161, 175, 185, 189, 193 includes a motor and an angle-measuring device, typically an angular encoder.

“The robotic articulated arm includes a plurality of joints, each having a corresponding axis of rotation. Each joint is coupled to an associated arm segment or to the end effector, with no intervening arm segment between the joint and the associated arm segment or end effector. Some of the joints are swivel joints having a swivel axis as the corresponding axis of rotation; each swivel joint is configured to produce rotation of the associated arm segment or end effector about the swivel axis. Other joints are hinge joints having a hinge axis as the corresponding axis of rotation; each hinge joint is configured to produce rotation of the associated arm segment or end effector in a direction perpendicular to the hinge axis. In FIG. 2B, the swivel joints are 154, 186, and 190, with corresponding swivel axes 155, 187, and 191. The hinge joints are 158, 172, and 182, with corresponding hinge axes 159, 173, and 183.

“The physical hardware of the teach pendant 140 is connected to the control box through a cable. In one embodiment, the teach pendant hardware 140 is attached to the arm electronics assembly 130; in another embodiment, the operator removes the teach pendant 140 from the arm electronics assembly 130 and holds it in hand. The teach pendant is used to teach the robotic articulated arm 150 desired movements. In one embodiment, the teach pendant 140 includes a touch screen display 146 and an emergency stop button 142 for the robotic articulated arm; a user can press the emergency stop button to cause the arm to stop moving. The arm electronics assembly 130 may include fans that cooperate with vents 131 to provide cooling.

“In one embodiment, all movements of the mobile measurement platform 100 are controlled by one or more processors in the system. The one or more processors may be any of the processors 220 shown in FIG. 2B, including computer processors 270 available on a network, and may also include processors beyond those in the elements 220. In an embodiment, all movements, including those of the wheels and the robotic articulated arm, are made autonomously. In another embodiment, an operator trains the robotic articulated arm 150 to perform desired movements using the teach pendant 140.

“FIG. 2A is an isometric view of a complete mobile measurement system 200. It is the same as the mobile measurement platform 100 but further includes an end effector 205 comprising an electronics interface box 207 and a triangulation scanner 210. In an embodiment, a connector 212 allows the triangulation scanner 210 to be detachably connected to the electronics box 207. Relative to FIGS. 1A-1D, the motors of the robotic articulated arm 150 have rotated the arm segments and end effector into a different orientation: the shoulder 152 is unchanged, but the elbow 170 has rotated 90 degrees about a hinge axis and the wrist 180 has rotated 90 degrees about a swivel axis. The interior of an automobile body-in-white (BiW), shown in FIG. 2C, is one example of a complex object that may be measured in 3D. To perform the variety of 3D measurements needed, the mobile measurement system 200 requires a number of swivel and hinge joints; FIG. 2C shows hinge and swivel joints rotated to measure such an area.

“In one embodiment, a battery 276 supplies 24 volts to multiple devices within the system, as shown in FIG. 2B. A recharger 278 is accessed through the mains adapter 126 shown in FIG. 1B. In another embodiment, a separate battery supplies power to the robot control box and teach pendant electronics 250 in the arm electronics assembly 130.

“The I/O bus coupler cooperates with bus terminals to transmit and receive digital data. In an embodiment, the I/O bus coupler is a Beckhoff BK5150 CANopen bus coupler that cooperates with Beckhoff KL1408 digital input terminals and Beckhoff KL2408 digital output terminals. The digital input and output terminals operate at 24 volts. In an embodiment, digital output signals are sent over digital output channels to the signal lamps, brakes, and sound maker.

“In an embodiment, a microcontroller 268 serves as a watchdog that monitors digital traffic and assists in troubleshooting problems. In an embodiment, the microcontroller is integrated with a touchscreen or keypad for use by an operator.

“In one embodiment, an extended IMU 269 includes a magnetometer that provides heading information in addition to the accelerometers and gyroscopes of traditional IMUs. The IMU may also include a pressure sensor that provides elevation information. A processor located within the extended IMU unit fuses the information from the various sensors to improve the accuracy of position and orientation information over time. The extended IMU 269 communicates with the IPC 262 over a Universal Serial Bus communication link. In an embodiment, the extended IMU is the x-IMU manufactured by x-io Technologies.
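A common way to fuse gyroscope and magnetometer data into a drift-free heading, of the general kind such an IMU processor performs, is a complementary filter. The sketch below is a generic illustration, not the x-IMU's actual algorithm; the blend factor, sample rate, and function name are assumptions.

```python
import math

def fuse_heading(heading_prev, gyro_rate_z, mag_heading, dt, alpha=0.98):
    """One step of a complementary filter for heading.

    Integrates the gyroscope yaw rate (rad/s) for short-term accuracy and
    pulls toward the magnetometer heading (rad) to cancel long-term drift.
    alpha close to 1 trusts the gyro; (1 - alpha) weights the magnetometer.
    """
    predicted = heading_prev + gyro_rate_z * dt
    # Wrap the innovation to (-pi, pi] so the blend takes the short way around.
    innovation = math.atan2(math.sin(mag_heading - predicted),
                            math.cos(mag_heading - predicted))
    return predicted + (1.0 - alpha) * innovation

heading = 0.0
for _ in range(100):  # 1 s of samples at 100 Hz while turning at 0.1 rad/s
    heading = fuse_heading(heading, gyro_rate_z=0.1, mag_heading=0.12, dt=0.01)
print(round(heading, 3))
```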

The IPC 262 can communicate with multiple devices over Ethernet 282, including the 3D measuring device electronics 230, the robot control box and teach pendant electronics 250, the 2D scanner controllers 270, the WLAN access point 272, and the cellular gateway 274. The 2D scanner controllers allow the IPC to communicate with the 2D scanners 116A, 116B and to receive distance information for a collection of horizontal angles.

The WLAN access point 272 allows wireless devices, such as tablets and smart phones, to connect over a communication channel 288 to the wired network of the mobile measurement system 200, using the IEEE 802.11 standard (WiFi) or a related standard. Wireless communication can also be established over Bluetooth using a Bluetooth transceiver 289. In an embodiment, devices such as cellular phones establish wireless communication over cellular communication channels such as 3G or 4G LTE; in an embodiment, such a channel is established using the cellular gateway 274. In an embodiment, the cellular gateway is a Sierra Wireless model GX400.

“In some embodiments, signals are sent from the 3D measurement device directly to the IPC 262, either over a wired network or wirelessly. In an embodiment, a cable is attached to the robotic articulated arm 150 at various positions and then routed through the hole 134 to the IPC 262. Some robots permit signals to be routed through the interior of the robot arm. In an embodiment, signals from the 3D measurement device electronics 230 are routed over a real-time bus, which may be EtherCAT, SERCOS III, PROFINET, POWERLINK, or EtherNet/IP, for example. Such a real-time bus may attach to hundreds or even thousands of devices in an automation network.

“FIGS. 3A-3C show a portion of the mobile measurement system 200 having an end effector 205 that includes a triangulation scanner 210 attached to an electronics box 207 through a connector 212. The end effector 205 is attached to the wrist elements 180 of the robotic articulated arm 150. The triangulation scanner 210 includes a camera 508 and a projector 510. The projector 510 uses a light source that projects a line onto an object surface or a pattern of light over an area. The light source may be a laser, a superluminescent diode (SLD), an incandescent lamp, or a light emitting diode (LED), for example. The projected light may be visible or invisible, though visible light may be more convenient in some cases. The camera 508 includes a lens and an imaging sensor. The imaging sensor is a photosensitive array that may be a charge-coupled device (CCD) two-dimensional (2D) area sensor or a complementary metal-oxide-semiconductor (CMOS) 2D area sensor, for example, or it may be some other type of device. An imaging sensor comprises a 2D array, i.e., rows and columns, of light-sensing photo elements (pixels). Each pixel typically contains at least one photodetector that converts light into an electrical charge stored within the pixel well, which is then read out as a voltage. An analog-to-digital converter (ADC) converts the voltage values into digital values. For a CMOS sensor chip, the ADC is usually contained within the chip; for a CCD sensor chip, the ADC is typically located outside the chip on a circuit board.

The enclosure 218 houses the electrical circuit 219 that is electrically connected to the projector 510, the camera 508, and other components. The electrical circuit 219 may include one or more microprocessors, digital signal processors, memory, and other types of signal conditioning and/or storage circuits.

“In an embodiment, a marker light source 509 emits a beam of light that intersects the beam of light from the projector 510. By observing the position of the intersection of these two beams on the object, the user can determine the optimal distance between the scanner 500 and the object under test. In an embodiment, a camera mounted on the end effector 205 assists in identifying the optimal distance from the object surface under investigation; in another embodiment, other information is used to guide the triangulation scanner to the optimal distance from the object surface.

“FIG. 4 shows elements of a laser line probe (LLP) 4500 that includes a projector 4520 and a camera 4540. The projector 4520 includes a source pattern of light 4521 and a projector lens 4522. The source pattern of light includes an illuminated pattern in the shape of a line. The projector lens includes a projector perspective center and a projector optical axis that passes through the projector perspective center. In the example of FIG. 4, a central ray of the beam of light 4524 is aligned with the projector optical axis. The camera 4540 includes a camera lens 4542 and a photosensitive array 4541. The lens has a camera optical axis 4543 that passes through a camera lens perspective center 4544. In the exemplary system 4500, the projector optical axis, aligned to the beam of light 4524, and the camera lens optical axis 4543 are perpendicular to the line of light 4523 projected by the source pattern of light 4521; in other words, the line 4523 is in the direction perpendicular to the paper of FIG. 4. The line of light strikes an object surface, which at a first distance from the projector is object surface 4510A and at a second distance is object surface 4510B; at different heights above or below the paper of FIG. 4, the object surface may be at a different distance from the projector. The line of light intersects surface 4510A (in the plane of the paper) at a point 4526 and intersects surface 4510B (in the plane of the paper) at a point 4527. For the intersection point 4526, a ray of light travels from the point 4526 through the camera lens perspective center 4544 to intersect the photosensitive array 4541 at an image point 4546. For the intersection point 4527, a ray of light travels from the point 4527 through the camera lens perspective center to intersect the photosensitive array 4541 at an image point 4547. By noting the position of the intersection point on the photosensitive array relative to the camera lens optical axis 4543, the distance from the projector (and camera) to the object surface can be determined by triangulation. The distance from the projector to other points on the line of light 4523, that is, points not in the plane of the paper of FIG. 4, may be found in the same way.”
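The distance computation described in this paragraph can be sketched as a ray-plane intersection: the projector emits a plane of light, and each illuminated pixel defines a camera ray that is intersected with that plane. The short example below illustrates the idea; the baseline, focal length, camera angle, and function names are illustrative assumptions, not parameters of the LLP 4500.

```python
import numpy as np

baseline = 0.2                 # camera perspective center at (0.2, 0, 0), meters
focal_length = 0.016           # camera focal length, meters
cam_angle = np.radians(-20.0)  # camera yawed toward the light plane x = 0

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def triangulate(u, v):
    """3D point on the light plane x = 0 seen at image coordinates (u, v) in meters."""
    ray_cam = np.array([u, v, focal_length])  # pixel ray in the camera frame
    ray_world = rot_y(cam_angle) @ ray_cam    # rotate the ray into the world frame
    center = np.array([baseline, 0.0, 0.0])
    t = -center[0] / ray_world[0]             # parameter where the ray hits x = 0
    return center + t * ray_world

print(triangulate(-0.002, 0.0))  # smaller pixel offset -> farther object point
print(triangulate(-0.004, 0.0))  # larger pixel offset -> nearer object point
```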

“In one embodiment, the photosensitive array 4541 is aligned to place either its rows or its columns in the direction of the reflected laser stripe. The position of a spot of light along one direction of the array then provides the information needed to determine the distance to the object, as indicated by the difference in position of the spots 4546 and 4547 in FIG. 4. The position of the spot of light along the orthogonal direction of the array provides the information needed to determine where, along the length of the laser line, the plane of light intersects the object.

“In this specification, it is understood that the terms column and row refer simply to a first direction along the photosensitive array and a second direction perpendicular to the first. As such, the terms row and column as used herein do not necessarily refer to rows and columns according to documentation provided by the manufacturer of the photosensitive array 4541. In the discussion that follows, the rows are taken to be in the plane of the paper on the surface of the photosensitive array, and the columns are taken to be orthogonal to the rows and on the surface of the photosensitive array. Other arrangements are possible, however.

“As described above, light from the scanner may be projected in a line pattern to collect 3D coordinates over a line. Alternatively, light from the scanner may be projected over an area, thereby obtaining 3D coordinates over an area of the object surface. The projector 510 of FIG. 3C, for example, is an area projector rather than a line projector. The principles of triangulation are illustrated by the system 2560 of FIG. 5A and the system 4760 of FIG. 5B. Referring first to FIG. 5A, the system 2560 includes a projector 2562 and a camera 2564. The projector 2562 includes a source pattern of light 2570 lying on a source plane and a projector lens 2572; the projector lens may include several lens elements. The projector lens has a lens perspective center 2575 and a projector optical axis 2576. A ray of light 2573 travels from a point 2571 on the source pattern of light through the lens perspective center onto the object 2590, which it intercepts at a point 2574.

The camera 2564 includes a camera lens 2582 and a photosensitive array 2580. The camera lens 2582 has a lens perspective center 2585 and an optical axis 2586. A ray of light 2583 travels from the object point 2574 through the camera perspective center 2585 and intercepts the photosensitive array 2580 at a point 2581.

The line segment that connects the perspective centers is the baseline 2588 in FIG. 5A and the baseline 4788 in FIG. 5B. The angle between the projector optical axis and the baseline is the baseline projector angle (2594, 4794), and the angle between the camera optical axis (2586, 4786) and the baseline is the baseline camera angle (2596, 4796). If a point 2571 on the source pattern of light is known to correspond to a point 2581 on the photosensitive array, then the baseline length, the baseline projector angle, and the baseline camera angle can be used to determine the sides of the triangle connecting the points 2585, 2574, and 2575, and hence the surface coordinates of points on the object 2590 relative to the frame of reference of the measurement system 2560. To do this, the angle of the small triangle between the projector lens 2572 and the source pattern of light 2570 is found using the distance between the lens and the source plane 2570 and the distance between the point 2571 and the intersection of the optical axis 2576 with the plane 2570; this angle is then added to or subtracted from the larger angles 2596 and 2594, as appropriate, to obtain the desired angles of the triangle. One of ordinary skill in the art will recognize that comparable mathematical methods may be used to find the lengths of the sides of the triangle 2574-2585-2575, and that other related triangles may equally be used.
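The triangle solution described above can be written compactly with the law of sines: given the baseline length and the two base angles, the distances from the camera and projector perspective centers to the object point follow directly. The sketch below is illustrative; the numeric values and function name are assumptions.

```python
import math

def triangulate_planar(baseline, angle_projector, angle_camera):
    """Solve the projector-camera-object triangle (a sketch of the FIG. 5A idea).

    baseline:        distance between perspective centers (e.g., 2575 to 2585)
    angle_projector: angle at the projector vertex between the baseline and the
                     projected ray (radians)
    angle_camera:    angle at the camera vertex between the baseline and the
                     ray to the object point (radians)
    Returns (range_from_camera, range_from_projector) by the law of sines.
    """
    angle_object = math.pi - angle_projector - angle_camera  # angles sum to pi
    k = baseline / math.sin(angle_object)
    return k * math.sin(angle_projector), k * math.sin(angle_camera)

# A 0.3 m baseline with 75 and 70 degree base angles places the object point
# roughly half a meter from either perspective center.
print(triangulate_planar(0.3, math.radians(75), math.radians(70)))
```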

“Referring now to FIG. 5B, the system 4760 is similar to the system 2560 of FIG. 5A except in the construction of its projector. The system includes a projector 4762 and a camera 4764. In the embodiment shown in FIG. 5B, the projector includes a light source 4778 and a light modulator 4770. The light source 4778 may be a laser light source, since such a light source may remain in focus for a long distance using the geometry of FIG. 5B. A ray of light 4773 from the light source 4778 strikes the optical modulator 4770 at a point 4771; other rays of light from the light source 4778 strike the optical modulator at other positions on the modulator surface. In an embodiment, the optical modulator 4770 changes the power of the emitted light, in most cases by decreasing the optical power to some degree. In this way, the optical modulator imparts an optical pattern to the light, referred to here as the source pattern of light, which lies at the surface of the optical modulator 4770. The optical modulator 4770 may be a DLP or LCOS device, for example, and may be transmissive or reflective. Light emerging from the optical modulator 4770 appears to emerge from a virtual light perspective center 4775: each ray of light appears to emanate from the virtual light perspective center 4775, pass through the point 4771, and travel to the point 4774 on the surface of the object 4790.

The baseline is the line segment extending from the camera lens perspective center 4785 to the virtual light perspective center 4775. In general, the method of triangulation involves finding the lengths of the sides of a triangle, for example the triangle having vertex points 4774, 4785, and 4775. One way to do this is to find the baseline length, the angle between the baseline and the camera optical axis 4786, and the angle between the baseline and the projector reference axis 4776. To find the desired angles, additional smaller angles are found. The small angle between the camera optical axis 4786 and the ray 4783 is found by solving the small triangle between the camera lens 4782 and the photosensitive array 4780, using the distance from the lens to the photosensitive array and the distance of the pixel from the camera optical axis; this small angle is then added to the angle between the baseline and the camera optical axis to obtain the desired angle. Similarly, the angle between the projector reference axis 4776 and the ray 4773 is found from the distance between the light source 4778 and the surface of the optical modulator and the distance between the projector point 4771 and the intersection of the reference axis 4776 with the modulator surface; this angle is subtracted from the angle between the baseline and the projector reference axis to obtain the desired angle.

The camera 4764 includes a camera lens 4782 and a photosensitive array 4780. The camera lens 4782 has a camera lens perspective center 4785 and a camera optical axis 4786. The camera optical axis is an example of a camera reference axis: any axis that passes through the camera lens perspective center may be used in the triangulation calculations, but the camera optical axis, which is an axis of symmetry for the lens, is conventional. A ray of light 4783 travels from the object point 4774 through the camera perspective center 4785 and intercepts the photosensitive array 4780 at a point 4781. Other equivalent mathematical methods may be used to find the lengths of the sides of the triangle 4774-4785-4775, as will be clear to one of ordinary skill in the art.

“Although the triangulation method described here is well known, some additional technical information is given below for completeness. Each lens system has an entrance pupil and an exit pupil. The entrance pupil is the point from which the light appears to emerge when considered from the point of view of first-order optics; the exit pupil is the point from which light appears to emerge in traveling from the lens system to the photosensitive array. For a multi-element lens system, the entrance pupil and exit pupil do not necessarily coincide, and the angles of rays with respect to the entrance pupil and exit pupil are not necessarily the same. The model can be simplified, however, by considering the perspective center to be the entrance pupil of the lens and then adjusting the distance from the lens to the source or image plane so that rays continue to travel along straight lines to intercept the source or image plane. In this way, the simple and widely used model of FIG. 5A is obtained. This description is a good approximation of the behavior of the light, but it should be noted that lens aberrations can cause the light to shift relative to the positions calculated with the model of FIG. 5A.

“In some cases, a scanner system may include two cameras in addition to a projector; in other cases, a triangulation system may be constructed from two cameras alone, the cameras being configured to image points of light on an object or in an environment. For such systems, triangulation is performed between the camera images using a baseline between the two cameras. The triangulation may be understood with reference to FIG. 5A, with the projector 2562 replaced by a camera.

“In some cases, different types of scan patterns may be advantageously combined to obtain better performance in less time. In one embodiment, a fast measurement method uses a two-dimensional coded pattern of light, from which three-dimensional coordinate data may be obtained in a single shot. In a method using coded patterns, different characters, different shapes, different thicknesses or sizes, or different colors may be used to provide distinctive elements, also known as coded elements or coded features. Such features may be used to match the point 2571 with the point 2581, since a coded feature from the source pattern of light 2570 may be identified on the photosensitive array 2580.

The advantage of using coded patterns is that three-dimensional coordinates of object surface points can be obtained quickly. In most cases, however, a sequential structured light approach, such as the sinusoidal phase-shift approach described below, gives more accurate results. Depending on the accuracy required, the user may therefore choose to measure certain objects, regions, or features with different projection methods, which is conveniently done with a programmable pattern of light.

A line emitted by a laser line scanner intersects an object in a linear projection; the illuminated shape traced on the object is two dimensional. In contrast, a projector that projects a two-dimensional pattern of light creates an illuminated shape on the object that is three dimensional. One way to distinguish the laser line scanner from the structured light (area) scanner is to define the structured light scanner as a type of scanner that contains at least three non-collinear pattern elements. For a two-dimensional coded pattern of light, the three non-collinear pattern elements are recognizable by their codes, and because they are projected in two dimensions, the at least three pattern elements must be non-collinear. For a periodic pattern, such as a sinusoidally repeating pattern, each sinusoidal period represents a pattern element; since the periodic pattern repeats across two dimensions, the pattern elements must be non-collinear. In contrast, for a laser line scanner that emits a line of light, all of the pattern elements lie on a straight line. Although the line has width, and the tail of the line cross section may have less optical power than the peak, these aspects of the line are not evaluated separately in finding surface coordinates of the object and therefore do not represent separate pattern elements. The line may contain multiple pattern elements, but these pattern elements are collinear.
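The collinearity criterion that separates area patterns from line patterns reduces to a simple geometric test: three pattern elements are non-collinear exactly when the triangle they form has nonzero area. A minimal sketch, with an illustrative tolerance and function name:

```python
def non_collinear(p1, p2, p3, tol=1e-9):
    """Return True if three 2D pattern elements are NOT collinear.

    Uses twice the signed triangle area (a cross product). A structured
    light (area) pattern must contain at least three such elements, while
    every triple of elements in a line pattern fails this test.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    area2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    return abs(area2) > tol

print(non_collinear((0, 0), (1, 0), (2, 0)))  # False: elements on a line
print(non_collinear((0, 0), (1, 0), (0, 1)))  # True: an area pattern
```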

“It is important to note that although the descriptions given above distinguish between line scanners and area (structured light) scanners based on whether three or more pattern elements are collinear, the intent of this criterion is to distinguish patterns projected as areas from patterns projected as lines. Consequently, a pattern projected in a linear fashion, having information along only a single path, is still a line pattern even if the one-dimensional pattern is curved.

A scanner 2500, which may be a line scanner or an area scanner, may also be used with a six-DOF laser tracker 900, as shown in FIG. 6A. In an embodiment, the projector 2520 projects a two-dimensional pattern of light, which emerges from the projector lens perspective center and travels outward until it intersects the object 2528. Such patterns include the coded and periodic patterns described above. Alternatively, the projector 2520 may project a one-dimensional pattern of light, thereby acting as a line scanner (LLP).

FIG. 6B shows an exemplary laser tracker system 4005. An exemplary gimbaled beam-steering mechanism 4012 of the laser tracker 4010 comprises a zenith carriage 4014 mounted on an azimuth base 4016 and rotated about an azimuth axis 4020. A payload 4015 is mounted on the zenith carriage 4014 and rotated about a zenith axis 4018. The zenith axis 4018 and the azimuth axis 4020 intersect orthogonally, internally to the tracker 4010, at a gimbal point 4022, which is typically the origin for distance measurements. A laser beam 4046 virtually passes through the gimbal point 4022 and is pointed orthogonal to the zenith axis 4018; in other words, the laser beam 4046 lies in a plane approximately perpendicular to the zenith axis 4018 that passes through the azimuth axis 4020. The outgoing laser beam 4046 is pointed in the desired direction by rotation of the payload 4015 about the zenith axis 4018 and rotation of the zenith carriage 4014 about the azimuth axis 4020. A zenith angular encoder, internal to the tracker, is attached to a zenith mechanical axle aligned to the zenith axis 4018; an azimuth angular encoder, internal to the tracker, is attached to an azimuth mechanical axle aligned to the azimuth axis 4020. The zenith and azimuth angular encoders measure the zenith and azimuth angles of rotation to relatively high accuracy. The outgoing laser beam 4046 travels to a retroreflector target 4026, which may be, for example, a spherically mounted retroreflector (SMR) as described above. By measuring the radial distance between the gimbal point 4022 and the retroreflector 4026, the rotation angle about the zenith axis 4018, and the rotation angle about the azimuth axis 4020, the position of the retroreflector 4026 is found within the spherical coordinate system of the tracker.
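Converting a tracker measurement (one distance plus the two encoder angles) into Cartesian coordinates uses the standard spherical-to-Cartesian formula sketched below. The convention chosen (zenith angle measured from the vertical axis, z up) and the function name are assumptions, not the tracker's internal implementation.

```python
import math

def tracker_point(distance, azimuth, zenith):
    """Convert one tracker measurement to Cartesian coordinates.

    distance: radial distance from the gimbal point to the retroreflector (m)
    azimuth:  rotation about the azimuth axis (rad), from the angular encoder
    zenith:   angle from the zenith (vertical) axis (rad), from the encoder
    Returns (x, y, z) in the tracker frame with origin at the gimbal point.
    """
    sin_z = math.sin(zenith)
    x = distance * sin_z * math.cos(azimuth)
    y = distance * sin_z * math.sin(azimuth)
    z = distance * math.cos(zenith)
    return x, y, z

# An SMR 5 m away, 30 degrees above the horizontal plane, 45 degrees in azimuth.
print(tracker_point(5.0, math.radians(45), math.radians(60)))
```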

“The outgoing laser beam 4046 may include one or more wavelengths, as described below. For the sake of clarity and simplicity, a steering mechanism of the sort shown in FIG. 6B is assumed in the following discussion, but other types of steering mechanisms are possible. For example, it is possible to reflect a laser beam off a mirror rotated about the azimuth and zenith axes. The techniques described here are applicable regardless of the type of steering mechanism.

“Several laser trackers are available or have been proposed for measuring six degrees of freedom rather than the ordinary three. Exemplary six degree-of-freedom (six-DOF) systems are described by U.S. Pat. No. 7,800,758 ('758) to Bridges et al., U.S. Pat. No. 8,525,983 ('983) to Bridges et al., U.S. Pat. No. 6,166,809 ('809) to Pettersen et al., and U.S. Patent Application No. 2010/0149525 ('525) to Lau, the contents of all of which are incorporated by reference. Six-DOF systems provide measurements of three orientational degrees of freedom as well as three positional degrees of freedom (i.e., x, y, z).

“FIG. 6A illustrates an embodiment of a six-DOF scanner 2500 used in conjunction with a six-DOF laser tracker 900. The six-DOF laser tracker 900 sends a beam of light 784 to a retroreflector 2510 or 2511 on the six-DOF scanner 2500. The six-DOF tracker 900 measures the distance from the tracker 900 to the scanner 2500 as well as the two angles from the tracker 900 to the six-DOF scanner 2500. The six-DOF scanner 2500 includes a body 2514, one or more retroreflectors 2510, 2511, a scanner camera 2530, a scanner light projector 2520, an optional electrical cable 2546, an optional battery 2544, an antenna 2548, and an electronics circuit board 2542. The antenna 2548, if present, provides wireless communication between the six-DOF scanner 2500 and other computing devices such as the laser tracker 900 and external computers. The scanner projector 2520 and the scanner camera 2530 together are used to measure the three-dimensional coordinates of a workpiece 2528. The camera 2530 includes a camera lens system 2532 and a photosensitive array 2534; the photosensitive array 2534 may be a CCD or CMOS array, for example. The scanner projector 2520 includes a projector lens system 2523 and a source pattern of light 2524. The source pattern of light may emit a point of light, a line of light, or a structured (two-dimensional) pattern of light. If the scanner light source emits a point of light, the point may be scanned, for example with a moving mirror, to produce a line or an array of lines. If the scanner light source emits a line of light, the line may be scanned, for example with a moving mirror, to produce an array of lines. In an embodiment, the source pattern of light is an LED, laser, or other light source reflected off a digital micromirror device (DMD) such as a Texas Instruments digital light projector (DLP), or a liquid crystal device (LCD) or similar device used in transmission mode. The source of light might also be a slide pattern, for example a chrome-on-glass slide, which might have a single pattern or multiple patterns, the slides moving in and out of position as needed. Additional retroreflectors, such as the retroreflector 2511, may be added to the first retroreflector 2510 to enable the laser tracker to track the six-DOF scanner from a variety of directions, thereby giving greater flexibility in the directions to which light from the six-DOF scanner 2500 may be projected.

The scanner projector 2520 and scanner camera 2530 measure the three-dimensional coordinates of the workpiece 2528 using the principles of triangulation. There are several ways in which the triangulation measurement may be implemented, depending on the pattern of light emitted by the scanner light source 2520 and the type of photosensitive array 2534. For example, if the pattern of light emitted by the scanner light source 2520 is a line of light or a point of light scanned into the shape of a line, and if the photosensitive array 2534 is a two-dimensional array, then one dimension of the array 2534 corresponds to a direction of a point 2526 on the surface of the workpiece 2528, while the other dimension corresponds to the distance of the point 2526 from the scanner light source 2520. Hence the three-dimensional coordinates of each point 2526 along the line of light emitted by the scanner light source 2520 are known relative to the local frame of reference of the six-DOF scanner 2500. The six degrees of freedom of the six-DOF scanner are determined using six-DOF laser tracker methods. From the six degrees of freedom, the three-dimensional coordinates of the scanned line of light may be found in the tracker frame of reference, which in turn may be converted into the frame of reference of the workpiece 2528, for example through measurement by the laser tracker of three points on the workpiece.

“If the six-DOF scanner 2500 is moved by the end effector 205 of the mobile measurement platform 200, a line of laser light emitted by the scanner light source 2520 may be moved in such a way as to "paint" the surface of the workpiece 2528, thereby obtaining the three-dimensional coordinates of the entire surface. It is also possible to "paint" the surface of a workpiece using a scanner light source 2520 that emits a structured pattern of light. Alternatively, when using a scanner 2500 that emits a structured pattern of light, more accurate measurements may be made by holding the scanner stationary during a measurement. In an embodiment, the structured light pattern emitted by the scanner light source 2520 includes a pattern of fringes, each fringe having an irradiance that varies sinusoidally over the surface of the workpiece 2528. In an embodiment, the sinusoids are shifted by three or more phase values. The amplitude level recorded by each pixel of the camera 2530 for each of the phase values is used to determine the phase of the sinusoid at that pixel, and from this information the three-dimensional coordinates of each point 2526 may be determined. In another embodiment, the structured light is in the form of a coded pattern that may be evaluated to determine three-dimensional coordinates from a single image frame collected by the camera 2530. Use of a coded pattern may allow relatively accurate measurements while the six-DOF scanner 2500 is moved by hand at a reasonable speed.
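The per-pixel phase recovery described above can be illustrated with the standard four-step phase-shift formula, in which four frames shifted by 90 degrees cancel the background and modulation amplitude. This is a textbook sketch, not the scanner's actual processing; the simulated pixel values are assumptions.

```python
import math

def phase_from_shifts(i0, i1, i2, i3):
    """Recover the fringe phase at one pixel from four phase-shifted frames.

    i0..i3 are the intensities recorded with the sinusoidal pattern shifted
    by 0, 90, 180, and 270 degrees. The standard four-step formula cancels
    the background and modulation amplitude, leaving only the phase.
    """
    return math.atan2(i3 - i1, i0 - i2)

# Simulated pixel: background 100, modulation 50, true phase 0.7 rad.
true_phase = 0.7
frames = [100 + 50 * math.cos(true_phase + k * math.pi / 2) for k in range(4)]
print(round(phase_from_shifts(*frames), 3))  # -> 0.7
```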

“In some cases, it may be advantageous to measure features such as edges or holes using an optional tactile probe 2550 attached to the six-DOF scanner 2500. The tactile probe 2550 in FIG. 6A includes a probe tip 2554, which is part of a probe extension assembly. In an embodiment, the projector 2520 emits a beam of light to illuminate the region to be measured.

“The laser tracker 900 measures a distance and two angles to determine the three positional degrees of freedom (x, y, z) of the six-DOF scanner 2500. There are several possible methods of determining the three orientational degrees of freedom of the six-DOF scanner 2500, as described in more detail below.

“FIGS. 7A and 7B show isometric and exploded isometric views of a six-DOF tracker target assembly 710 coupled to a triangulation scanner 210. FIG. 7A shows the six-DOF tracker target assembly 710 attached to the triangulation scanner 210; the coupling is made through a mechanical and electrical interface 216. The interface 216 includes two parts: a first part 216A, which is the scanner connector on the upper side, and a second part 216B, which is the six-DOF tracker assembly connector on the lower side. The first and second parts are joined so as to hold the scanner 210 in a fixed position and orientation relative to the six-DOF tracker target assembly 710.

“The six-DOF tracker target assembly 710 cooperates with a laser tracker 4010 to determine the six degrees of freedom of the assembly 710. The six degrees of freedom include three translational degrees of freedom (e.g., x, y, z), determined by the tracker as described above with reference to FIG. 6B, and three orientational degrees of freedom (e.g., pitch, yaw, and roll), determined in cooperation with the six-DOF target assembly 710 using any of a variety of methods, such as those described in the patents '758, '983, and '809 and the patent application '525, which are incorporated herein by reference. By measuring the six degrees of freedom of the attached six-DOF accessory 710, the tracker is able to track the position and orientation of the scanner 210 relative to the object, which enables relatively simple and accurate registration of multiple line scans and area scans. In an embodiment, a probe tip 718 is attached to a probe coupler 719, and the tracker determines the 3D coordinates of the probe tip 718 based on the measured six degrees of freedom.

“In one embodiment, the laser tracker 4010 cooperates with the six-DOF target assembly 710 and a processor to determine the six degrees of freedom of the six-DOF target assembly 710. In an embodiment, the laser tracker 4010 sends a beam of light to a retroreflector target of the assembly 710, which in one embodiment is a cube-corner retroreflector; a collection of six-DOF retroreflector targets 720 may be provided to allow the six-DOF targets to be viewed from a wide range of angles. A first portion of the light returned by the retroreflector travels to a distance meter in the tracker 4010 to measure the distance from the tracker to the retroreflector, and a second portion travels to a position detector, which generates an electrical signal indicating the position of the returning beam of light. The position detector provides this signal to a control system that includes motors to steer the beam of light so that it remains centered on the retroreflector. In addition, angular transducers in the tracker, such as angular encoders, provide two angles that specify the direction of the laser beam. These two angles and the distance from the distance meter together provide the three translational degrees of freedom for the six-DOF tracker target assembly 710. In an embodiment, electrical signals carrying information from the six-DOF targets are sent to an electrical processing unit 730 for data processing and synchronization, via a connector 712 and cable 714.

“As described herein above, there are many possible methods of determining the three orientational degrees of freedom, such as those taught in the patents '758, '983, and '809 and the patent application '525. These methods include: (1) measuring the positions of multiple light sources on a six-DOF tracker target with a camera attached to the laser tracker; (2) measuring lines marked on a cube-corner retroreflector to determine the three orientational degrees of freedom; and (3) measuring light passing through an opening in a cube-corner retroreflector to determine pitch and yaw angles, with an angle of inclination measured to determine the roll angle. Many other methods of measuring three orientational degrees of freedom are possible, and the six-DOF target assembly 710 may be used with any method that measures the three orientational degrees of freedom.

“As a preliminary step in the methods described below, a common frame of reference is found for the scanner 210 and the six-DOF tracker target assembly 710. Such a preliminary step may be carried out at the manufacturer's factory or by the operator following procedures prescribed by the manufacturer. The common frame of reference can be obtained, for example, by viewing common features with the scanner 210 and the tracker and then performing a least-squares optimization procedure to match the observed features. Such methods are well known in the art and are not discussed further.
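The least-squares matching step mentioned above is commonly solved with the Kabsch/SVD method, which finds the rigid rotation and translation best mapping one set of corresponding feature points onto the other. The sketch below is a generic illustration of that computation, not the manufacturer's procedure; the sample points and function name are assumptions.

```python
import numpy as np

def best_fit_transform(source, target):
    """Least-squares rigid transform mapping source points onto target points.

    source, target: (N, 3) arrays of corresponding 3D feature coordinates
    Returns (R, t) with R a 3x3 rotation and t a translation such that
    R @ source[i] + t ~= target[i] (the Kabsch/SVD solution).
    """
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Three or more common features seen in both frames pin down the transform.
scanner_pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
tracker_pts = scanner_pts @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = best_fit_transform(scanner_pts, tracker_pts)
print(np.allclose(R, R_true), np.round(t, 3))
```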

“In one embodiment, the six-DOF tracker target assembly 710 includes a tactile probe 718 that connects to the collection of six-DOF targets 720 through an interface unit 719. The interface unit may provide convenient attaching and detaching of different tactile probes 718. It may also provide the electrical functionality needed for special types of probes, such as a touch probe that issues a signal when it contacts an object.

“In one embodiment, the triangulation scanner 210 is removed and the six-DOF tracker target assembly 710 is attached directly to the electronics box 207 or to a mechanical coupler on the final joint of the robotic articulated arm 150. This arrangement is illustrated in FIGS. 7C and 7D.

“FIG. 8A shows a perspective view of a three-dimensional tactile probing system 5100 that includes a camera bar 5110 and a probe assembly 5140. The camera bar includes a mounting structure 5112 and at least two triangulation cameras 5120, 5124; it may also include an optional camera 5122. Each of the cameras includes a lens and a photosensitive array, as in the camera 2564 of FIG. 5A. The optional camera 5122 may be similar to the cameras 5120 and 5124, or it may be a color camera. The probe assembly 5140 includes a housing 5142, a collection of lights 5144, optional pedestals 5146, a shaft 5148, a stylus 5150, and a probe tip 5152. The positions of the lights 5144 are known relative to the probe tip 5152; the lights may be light sources such as light emitting diodes, or they may be reflective spots illuminated by an external source. The positions may be determined using factory or on-site compensation procedures. The shaft may be used as a handle, or an alternative handle may be provided.

“Triangulation of the image data collected by the cameras 5120 and 5124 of the camera bar 5110 is used to find the three-dimensional coordinates of each point of light 5144 within the frame of reference of the camera bar. Throughout this document and in the claims, the term "frame of reference" is taken to be synonymous with the term "coordinate system." Mathematical calculations, which are well known in the art, are used to find the position of the probe tip 5152 within the frame of reference of the camera bar. The probe tip 5152 is brought into contact with the object 5160 to measure points on the object's surface.

An electrical system 5101 may include an electrical circuit board 5102 and an external computer 5104. The external computer 5104 may comprise a network of computers. The electrical system 5101 may include wired and wireless portions, internal and external to the components shown in FIG. 8A, that together perform the measurements and calculations required to determine the three-dimensional coordinates of points on the surface. In general, the electrical system 5101 includes one or more processors, which may be microprocessors or computers, for example.

“FIG. 8B is a perspective view of a three-dimensional area scanning system 5200 that includes the camera bar 5110 and a scanner assembly 5240. The camera bar was described above with reference to FIG. 8A. The housing 5142, lights 5144, optional pedestals 5146, and shaft 5148 have the characteristics described with reference to FIG. 8A. The scanner assembly 5240 additionally includes a projector 5252 and a camera 5254. The projector 5252 may be any of a number of types: for example, it may reflect light off a digital micromirror device such as a Texas Instruments digital light projector (DLP), a liquid crystal device (LCD), or a liquid crystal on silicon (LCOS) device, or the projected light may be created by passing light through a slide pattern, for example a chrome-on-glass slide, which might have one or more patterns, the slides moved in and out of position as required. The projector 5252 projects light 5262 onto a region 5266 of the object 5160. The camera 5254 images a portion of the illuminated region 5266 to obtain digital data.

“The digital data may be partially processed using electrical circuitry within the scanner assembly 5240, with the partially processed data provided to an electrical system 5201 that includes an electrical circuit board 5202 and an external computer 5204. The external computer 5204 may comprise a network of computers. The electrical system 5201 may include wired and wireless portions, internal and external to the components of FIG. 8B, that together perform the measurements and calculations required to determine the three-dimensional coordinates of points on the surface of the object 5160. In general, the electrical system 5201 includes one or more processors, which may be microprocessors, computers, field programmable gate arrays (FPGAs), or digital signal processor (DSP) units, for example. The result of the calculations is a set of coordinates in the camera-bar frame of reference, which may in turn be converted into another frame of reference if desired.

“FIG. 8C is a perspective view of a three-dimensional scanning system 5300 that includes the camera bar 5110 and a scanner assembly 5340. The camera bar was described above with reference to FIG. 8A. The housing 5142, lights 5144, optional pedestals 5146, and shaft 5148 have the characteristics described with reference to FIG. 8A. The scanner assembly 5340 includes a projector 5352 and a camera 5354. The projector 5352 projects light onto the object 5160 and may be a source of one of the types described with reference to FIG. 8B. The camera 5354 images a portion of the stripe pattern formed on the object to obtain digital data. The digital data may be processed in a manner similar to that described with reference to FIG. 8B, for example using the electrical components 5201. The result of the calculations is a set of 3D coordinates of points on the object surface in the camera-bar frame of reference, which may in turn be converted into another frame of reference if desired.

“FIG. 9 is an isometric view of a six-DOF target assembly 910 attached to the triangulation scanner 210. The targets on the six-DOF target assembly 910 may be measured with a camera bar, such as the camera bar 5110 of FIGS. 8A-8C. Alternatively, the targets may be measured with two or more cameras mounted separately in the environment. A camera bar includes two or more cameras spaced apart by a camera-bar baseline. Triangulation, performed by a processor using the observed target images together with geometrical quantities such as the camera-bar baseline and the orientations of the cameras along the camera bar, is used to calculate the six degrees of freedom of the six-DOF target assembly 910.
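The camera-bar triangulation described here can be illustrated by the classic two-ray midpoint construction: each camera's image of a target defines a ray from its perspective center, and the target's 3D position is estimated as the midpoint of the shortest segment between the two rays. A minimal sketch, with illustrative geometry and function names:

```python
import numpy as np

def triangulate_stereo(c1, d1, c2, d2):
    """Midpoint triangulation of a target seen by two camera-bar cameras.

    c1, c2: camera perspective centers (the camera-bar baseline joins them)
    d1, d2: ray directions from each camera toward the imaged target
    Returns the midpoint of the shortest segment between the two rays,
    a common least-squares estimate of the target's 3D position.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for ray parameters s, t minimizing |(c1 + s d1) - (c2 + t d2)|.
    A = np.array([[d1 @ d1, -d1 @ d2], [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    s, t = np.linalg.solve(A, b)
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Two cameras 0.6 m apart both sighting a target near (0.1, 0.2, 2.0).
target = np.array([0.1, 0.2, 2.0])
c1, c2 = np.array([-0.3, 0.0, 0.0]), np.array([0.3, 0.0, 0.0])
print(np.round(triangulate_stereo(c1, target - c1, c2, target - c2), 3))
```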

“In one embodiment, the six-DOF target assembly 910 includes a collection of points of light 920, an electrical enclosure 930, and a tactile probe 918. In an embodiment, the collection of points of light 920 includes points of light 922 mounted directly to the structure and points of light 924 mounted on pedestals 926. In an embodiment, the points of light 922, 924 are LEDs; in another embodiment, they are reflective spots illuminated by an external source of light. In an embodiment, the points of light are positioned so as to be visible from a wide range of viewing angles relative to the scanner 210.

“In an embodiment, the six-DOF target assembly includes a tactile probe 918 that connects to the electrical enclosure 930 through a probe interface 215. The probe interface may provide touch-probe or analog-probe electronics. The scanner 210 can provide a large amount of detailed surface information quickly, but it may give less information than desired about edges and holes; an operator can use the tactile probe 918 to obtain this information.

“FIG. 10A depicts a camera assembly 1850 configured for attachment to a triangulation scanner or another 3D measurement device. The camera assembly 1850 includes at least one camera. In an embodiment, the camera assembly 1850 includes two cameras 1853A, 1853B. The camera 1853A includes a lens assembly 1854A and an electronics box 1856A that includes a photosensitive array (not shown). The camera 1853B likewise includes a lens assembly 1854B and an electronics box 1856B that includes a photosensitive array. The cameras 1853A, 1853B have partially overlapping fields of view (FOVs), thereby providing stereo imaging, which enables the determination of 3D coordinates of targets using triangulation methods, as described above. In some embodiments, each of the cameras has a FOV larger than that of the camera 508; in other embodiments, the combined FOV of the two cameras is larger than that of the camera 508. In some embodiments, a single camera with a wide FOV is provided on the camera assembly 1850; in other cases, several wide-FOV cameras having FOVs that do not overlap are provided on the camera assembly 1850. In another embodiment, a camera assembly 1850B includes a single camera 1853C having a lens assembly 1854C and an electronics box 1856C that includes a photosensitive array.

If the triangulation scanner 210 is a line scanner, the 3D coordinates are collected along a line of light projected onto the object. If the scanner 210 is an area scanner, the 3D coordinates are collected over a 2D area of light projected onto the object's surface. In either case, multiple collections of 3D coordinates obtained in successive scans by the scanner 210 must be registered: individual line scans must be registered for a line scanner 210, and individual area scans must be registered for an area scanner 210.

“Methods are now described for using the camera assembly 1850 or 1850B with the scanner 210 to register the multiple scans taken by the scanner 210, thereby allowing scans to be taken without tracking by a six-DOF measuring device.”

“As a preliminary step in the procedures described below, a common frame of reference is found for the scanner 210 and the camera assembly 1850 or 1850B. Such a preliminary step may be carried out at the manufacturer's factory or by the operator following procedures prescribed by the manufacturer. The common frame of reference can be obtained, for example, by viewing common features with the scanner 210 and the camera assembly and then performing a least-squares optimization procedure to match the observed features. Such methods are well known in the art and are not discussed further.

“FIG. 11A shows a first instance in which the projector 510 projects a first line of light 1810 onto an object 1801. The object 1801 may include some regions with fine details, such as the features 1802 and 1803, and other regions that are large and relatively featureless, such as the region 1804. The camera 508 views the first line of light 1810 within a region 1815 of the object imaged by its 2D image sensor (e.g., photosensitive array). As explained above with reference to FIG. 4, the pattern of the first line of light 1810 on the 2D image sensor of the camera 508 provides the information needed for a processor to determine the 3D coordinates of the first line of light on the object 1801, these coordinates being given in the frame of reference of the scanner 210.

“In a second instance, the projector 510 projects a second line of light 1812 onto the object 1801. The pattern of the second line of light 1812 on the 2D image sensor provides the information needed for the scanner 210 to determine the 3D coordinates of the second line of light, again in the frame of reference of the scanner 210. To place the 3D coordinates of the first and second lines of light in a common frame of reference, the scans of the first and second instances must be registered.

“In a first method, registration is based on matching natural features. In the first instance, the cameras 1853A, 1853B image a region 1820 of the object, capturing the detailed features 1806, 1807, and 1808. Using triangulation, a processor determines from the images of the cameras 1853A, 1853B the 3D coordinates of these detailed features in the frame of reference of the scanner 210. As explained above, this triangulation requires the baseline distance between the cameras 1853A and 1853B and the orientation of the cameras relative to the baseline. In the second instance, the cameras again capture detailed features of the object 1801; as long as the 3D coordinates of detailed features captured in the first and second instances overlap, the features can be matched, providing the coordinate transformation needed to place the first line of light 1810 and the second line of light 1812 in a common frame of reference. Natural features that have a clearly defined position in 3D space, such as the intersection point 1809 of three planes shown in FIG. 11A, are easiest to match in multiple camera images and are hence particularly useful for registration based on natural targets.

“FIG. 11B shows a second method of using the cameras 1853A, 1853B to register multiple sets of 3D coordinates obtained from line scans taken with a line scanner 210, this time based on matching physical targets rather than natural features. FIG. 11B is the same as FIG. 11A except that FIG. 11B further includes markers 1832 on the object 1801 and/or markers 1834 in the vicinity of the object but not on it. In an embodiment, the targets are reflective targets, for example white circular targets sometimes referred to as photogrammetry targets. Such targets may be illuminated by light sources 1858A, 1858B shown in FIGS. 10A and 10B, or by light sources 1858C shown in FIG. 10C. In another embodiment, the targets 1832, 1834 are themselves light sources, such as LEDs. In a further embodiment, the targets 1832, 1834 are a combination of photogrammetry targets and LEDs.

“In a first instance the projector 510 projects a line of light 1810 onto the object 1801. In a second instance the projector 510 projects a second line of light 1812 onto the object 1801. In an embodiment the cameras 1853A and 1853B image three common non-collinear targets, which may be targets 1832 or 1834, in both instances. These common points allow a processor to place the 3D coordinates of the first and second lines of light in a common frame of reference. This procedure is repeated as the scanner 210 is moved across the object 1801, thereby allowing the processor to determine the 3D coordinates of the surface of the object 1801. In another embodiment, image information from physical targets is combined with image information from natural features to register the 3D coordinates of the surface of the object 1801.

“FIG. 11C shows a third method of registering the line scans. A separate external projector 1840, distinct from the scanner 210 and the camera assembly 1850, projects spots of light 1832 onto the object and/or spots of light 1834 off the object but within its vicinity. These spots of light are imaged by the cameras 1853A and 1853B in the same manner as the physical targets of FIG. 11B, and the processor determines the 3D coordinates of the object surface in the same manner in each case.

“FIG. 11D illustrates a first method of using the cameras 1853A and 1853B to register multiple 3D coordinates obtained from area scans taken with the scanner 210 used as an area scanner. The registration is based on the matching of natural features. In a first instance the projector 510 projects a first area of light 1810B onto the object 1801. The camera 508 images a region 1815 of the object, and an overlap region 1817 is formed where the projected area of light 1810B overlaps the imaged region 1815. Within this overlap region 1817 a processor can determine the 3D coordinates of the surface of the object 1801, these coordinates being given in the frame of reference of the scanner 210.

“FIG. 11E shows a second instance, in which the projector 510 projects a second area of light onto the object 1801, forming a second overlap region 1817B. In some cases the overlap regions contain enough common features to allow registration of the 3D coordinates determined by the scanner 210 in the first and second instances. If the object 1801 has very few features in the overlap regions 1817 and 1817B, however, it may not be possible to register the first and second scans using the scan data alone.

“In one embodiment, the cameras 1853A and 1853B have wider FOVs than the camera 508, allowing additional features such as 1806, 1807, and 1808 to enhance the registration through 3D feature matching, as discussed above with respect to the methods of FIGS. 11A, 11B, and 11C. If the object 1801 lacks distinct features, as in the region 1804, the registered 3D images can end up warped. The flat region 1804 might, for example, end up looking like a saddle, an effect often referred to as the “potato chip” or “potato crisp” effect.”

“For scan regions without many features, registration can be improved by placing targets on or near the object 1801. FIG. 11F shows a second method of using the cameras 1853A and 1853B to register multiple 3D coordinates obtained from area scans taken with the scanner 210. Here the registration is based on the matching of physical targets rather than natural features. FIG. 11F is the same as FIG. 11D except that FIG. 11F also includes markers 1832 on the object 1801 and/or markers 1834 in the vicinity of the object but not on it. The method described with reference to FIG. 11B can be used to improve the registration of the 3D coordinates obtained from successive scans.

“FIG. 11G shows a third method of registering the area scans. A separate external projector 1840, distinct from the scanner 210 and the camera assembly 1850, projects spots of light 1832 onto the object and/or spots of light 1834 off the object but within its vicinity. These spots of light are imaged by the cameras 1853A and 1853B in the same manner as the physical targets of FIG. 11F, and the processor determines the 3D coordinates of the object surface in the same manner in each case.”

“Mark” is a term used herein to refer to any of the physical features that aid in the registration of multiple sets of 3D coordinates. Four kinds of marks were discussed above: (1) natural features of the object surface (or features on a stationary surface proximate the object); (2) LED markers (targets) on or proximate the object; (3) reflective markers (targets) on or proximate the object; and (4) spots of light projected onto or proximate the object by an external projector not attached to the scanner 210 or the camera assembly 1850.

“FIG. 12A illustrates the mobile measurement platform 200 with an end effector that includes the triangulation scanner 210 and the camera assembly 1850. The mobile measurement platform is measuring the rear section of an automobile body-in-white (BiW) 1202. An external projector 1210 projects spots of light 1212 over a large area, allowing the camera assembly 1850 to provide the information the processors need to register multiple scans from the scanner 210. In an embodiment the external projector 1210 is mounted on a mobile tripod 1220, which may move autonomously or under user direction to project the spots wherever they are needed.

“FIG. 12B illustrates the mobile measurement platform 200 with an end effector that includes the triangulation scanner 210 and the camera assembly 1850, here measuring an interior portion of the BiW 1202. A projector placed outside one of the windows would likely project spots as elongated ellipses because of the shallow angle at which the spots strike the interior surfaces. In addition, the mobile platform 200 would block some of the spots as it moves its end effector from place to place, which could cause problems in some cases. These difficulties can be avoided by mounting the external projector 1210 on a mobile platform such as the platform 100, whose robotic articulated arm can position the projector in whatever position and orientation best suits the situation.

“FIG. 13A shows an apparatus 1310 that includes a laser tracker 4010 mounted on a mobile structure 1220. In an embodiment the mobile structure 1220 includes tripod legs 1224 and a motorized dolly 1222.

“FIGS. 13B-13E illustrate a method for placing measurements taken over a large volume into a common frame of reference. FIG. 13B shows the mobile measurement platform 200 measuring 3D coordinates of the rear section of the BiW 1202. The end effector of the mobile platform includes the scanner 210 and a six-DOF tracker target assembly 710. The six-DOF tracker target assembly 710 receives a beam of light from the laser tracker 4010, which determines the six degrees of freedom of the scanner 210. The tactile probe 718 of the six-DOF tracker target assembly 710 may also be used to make measurements.

“Suppose now that the mobile platform 200 needs to move to a different position to take additional measurements, and that in the new position the BiW would block the beam of light from the laser tracker. The laser tracker must then be moved in such a way that 3D measurements taken at each tracker location can be placed in a common frame of reference. The procedure for relocating the laser tracker is to measure one or more six-DOF targets 1340 in a first instance, as shown in FIG. 13C, and then to measure the same six-DOF targets again in a second instance, as shown in FIG. 13D. Alternatively, the tracker may measure three or more three-DOF targets, also labeled 1340 herein, which may for example be SMRs, in the first instance as shown in FIG. 13C, and then measure the same three-DOF targets again as shown in FIG. 13D.”

“These tracker measurements of a common six-DOF target, or of three or more common three-DOF targets, allow a transformation matrix to be calculated that places the measurements made from both tracker locations in a common frame of reference. As FIG. 13E shows, the mobile platform then continues to measure the interior of the BiW while the tracker monitors the six degrees of freedom of the scanner 210, correcting for movement and keeping the scanner readings in the common frame of reference. A single six-DOF target measured by the laser tracker is sufficient to create the transformation matrix, but three 3-DOF retroreflectors may be measured instead. The same relocation method can be used with a camera bar device and a six-DOF light-point target assembly such as that of FIG. 15A.”
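Concretely, the relocation step amounts to fitting a rigid transformation to the target positions measured from the two tracker stations and applying it, as a homogeneous matrix, to everything measured from the second station. The sketch below assumes the fit_rigid_transform routine from the earlier sketch is in scope; all names are illustrative.

```python
import numpy as np

def relocation_transform(targets_station1, targets_station2):
    """4x4 matrix taking station-2 coordinates into the station-1 frame.

    targets_station1, targets_station2: (N, 3) arrays (N >= 3, non-collinear)
    holding the same physical targets measured from each tracker station.
    """
    # fit_rigid_transform is the SVD-based fit sketched earlier
    R, t = fit_rigid_transform(targets_station2, targets_station1)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# A point p2 measured from the second station is then expressed in the
# first station's frame as:  p1 = (T @ np.append(p2, 1.0))[:3]
```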

Automated inspections can be performed by the mobile measurement platform 200 using any combination of the end effector devices described above. In an embodiment, an automated inspection is directed by centralized production-scheduling software, and the inspection instructions are modified according to the type of object being inspected. In an embodiment the inspection instructions are placed on a tag, which may be a near-field communication (NFC) tag, a radio-frequency identification (RFID) tag, a bar code, a QR code, or any other storage device that can conveniently convey the information. The tag travels with the object to be tested and is read by the mobile measurement system. An inspection plan specifies where measurements are to be taken and what types of data are to be collected and analyzed. A 3D measuring device can evaluate many dimensional characteristics, including points, distances, lengths, and angles, and an inspection plan may list the specific measurements to be made and the characteristics to be evaluated. In an embodiment a nominal value is provided for each measured dimensional characteristic; the error is the measured value less the nominal value, and an alarm is given if the absolute value of the error exceeds a tolerance, the tolerance value being included in the inspection plan. In an embodiment a CAD model is provided to one or more processors to guide the movement and measurement of the mobile platform, since the CAD model contains dimensional information about the object that may be used to select a path. Alternatively, the mobile platform may choose a path based on the object's dimensions as determined by the 2D scanners or other sensors on the platform.
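The error and tolerance logic described above reduces to a simple per-characteristic comparison. The sketch below is a minimal, hypothetical rendering of that rule; the class fields and names are assumptions for illustration, not part of any inspection-plan format described here.

```python
from dataclasses import dataclass

@dataclass
class CharacteristicResult:
    name: str         # e.g. "hole_diameter_rear_panel" (hypothetical)
    measured: float   # value reported by the 3D measuring device
    nominal: float    # nominal value from the inspection plan
    tolerance: float  # allowed absolute error from the inspection plan

    @property
    def error(self) -> float:
        # The error is the measured value less the nominal value.
        return self.measured - self.nominal

    def alarm(self) -> bool:
        # Alarm condition: absolute error exceeds the tolerance.
        return abs(self.error) > self.tolerance

result = CharacteristicResult("hole_diameter", 10.06, 10.00, 0.05)
if result.alarm():
    print(f"ALARM: {result.name} error {result.error:+.3f} out of tolerance")
```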

There are two types of navigation that the mobile platform can use. In the first type, the platform's internal sensors guide its movement; this method usually provides a movement accuracy of one to five centimeters. The 2D scanner sensors, which emit a horizontal plane of light near the floor, are internal sensors; they are useful for preventing collisions and can also assist navigation. The angular encoders in the wheel assemblies may likewise provide information that assists navigation, as may heading sensors, IMUs, and 2D or 3D cameras mounted around the mobile platform.

“In the second type of navigation, a six-DOF measuring device such as a laser tracker (or a camera bar, which may consist of two cameras separated and stabilized but not necessarily connected by a physical bar) directs the movement of the mobile platform or the articulated arm. Depending on speed and other factors, such navigation can achieve an accuracy on the order of 100 micrometers.

The 3D coordinates of the outside and inside of an object, such as the automobile BiW 1202, are often difficult to measure and to register in a single frame of reference. The coordinates of the exterior of an object are often obtained with a globally registered system, for example registration by a laser tracker or registration by photogrammetry with targets that extend across the length of the object. The 3D coordinates of the interior of an object, however, are not usually measurable globally: a laser tracker or similar device, as used in FIG. 13E, generally has no line of sight into the object's interior. The present invention aims to tie together 3D measurements made in global and local frames of reference.

“FIG. 14A shows a 3D measuring device 1410 being used to measure the exterior of the object 1202 while being tracked in six DOF by a six-DOF measuring device 1420. Examples of possible six-DOF measuring devices include a six-DOF laser tracker such as that shown in FIGS. 13A-13E and a camera bar such as that shown in FIGS. 8A-8C.

“FIG. 14A shows 3D coordinates being collected while the 3D measuring device 1410 is moved. The device may be held and moved by a human operator, or it may be held by a mechanical device such as the mobile robotic platform 200 illustrated in FIGS. 2A, 2C, and 12A.

“FIG. 14B is a close-up of the 3D measuring device 1410. It includes a triangulation scanner 210 that measures the 3D coordinates of an object, as described herein. Attached to the triangulation scanner is a six-DOF tracker target assembly 710, which enables six-DOF measurement by a six-DOF laser tracker. Also attached to the triangulation scanner 210 is a camera assembly 1850 or 1850B that provides registration by one of the methods described above: (a) matching natural features, as illustrated in FIGS. 11A and 11D; (b) matching artificial targets (markers) placed on or near the object, as illustrated in FIGS. 11B and 11F; or (c) matching projected spots or patterns of light, as illustrated in FIGS. 11C and 11G. The 3D measuring device 1410 can be configured in many other ways. For example, a two-axis inclinometer 1490 may be attached to any of the devices 210, 1850, or 710 of FIG. 14B.”

“In FIG. 14A the six-DOF measuring device 1420 is a six-DOF laser tracker. The tracker sends a beam of light 1422 to a retroreflector in the six-DOF target assembly 710 and uses the returned light to measure the distance and two angles to the six-DOF target. Any of several additional methods, described below, may be used to determine the three orientational degrees of freedom of the six-DOF target. The triangulation scanner 210 determines the 3D coordinates of the object 1202. In other embodiments the 3D coordinates are measured with a tactile probe, such as the probe 718 on the six-DOF target assembly 710.

“In one embodiment, the six-DOF measuring device 1420 is mounted on a mobile platform 1220. In one embodiment the mobile platform can be rolled on wheels 1222 by an operator to reach desired locations. In another embodiment the mobile platform 1220 is moved under computer control via motorized wheels 1222.

“FIG. 14C shows the 3D measuring device 1410 measuring a portion of the object in the transition region between an exterior portion and an interior portion of the object 1202. The beam of light 1422 is still intercepted by the retroreflector of the six-DOF probe, so global registration is maintained. The camera assembly 1850 determines the 3D coordinates of at least one cardinal point 1470 in the local frame of reference of the 3D measuring device 1410. As explained above, a cardinal point can be either a natural feature or a target marker. The term interest point detection is used for the process of identifying cardinal points associated with natural features, and the points so detected are referred to as interest points. An interest point has a mathematically well-founded definition, a well-defined position in space, an image structure around it that is rich in local information, and stability with respect to changes in illumination level over time. One example of an interest point is a corner point, which may be any point corresponding to the intersection of three planes. Another example of signal processing that can be used to detect interest points is the scale-invariant feature transform (SIFT), a method well known in the art and described in the patent literature. Edge detection, blob detection, and ridge detection are other common methods of detecting cardinal points. In FIG. 14C the FOV of the camera assembly 1850 includes the cardinal point 1470, and the stereo cameras of the camera assembly 1850 can determine the 3D coordinates of the cardinal point 1470 in the frame of reference of the 3D measuring device 1410.
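As an illustration of interest point detection, the following sketch finds and matches SIFT keypoints between two overview images using OpenCV. This is a generic sketch rather than the specific processing of the device described here; the file names are placeholders, and it assumes an OpenCV build (4.4 or later) in which SIFT is available in the main module.

```python
import cv2

# Two overview images taken from different device poses (placeholder names)
img1 = cv2.imread("overview_pose1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("overview_pose2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)  # keypoints and descriptors
kp2, desc2 = sift.detectAndCompute(img2, None)

# Match descriptors, keeping only distinctive matches (Lowe's ratio test)
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(desc1, desc2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# 2D pixel coordinates of the matched candidate cardinal points; imaging the
# same points with a stereo pair yields their 3D coordinates by triangulation.
pts1 = [kp1[m.queryIdx].pt for m in good]
pts2 = [kp2[m.trainIdx].pt for m in good]
print(f"{len(good)} candidate cardinal-point matches")
```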

“In FIG. 14D the 3D measuring device has moved outside the line of sight of the six-DOF measuring device 1420; an exterior portion 1204 of the object 1202 now blocks the beam of light 1422. Before losing the beam, the 3D measuring device 1410 determined the coordinates of the first cardinal point 1470, as in FIG. 14C. In some cases, described below in relation to FIGS. 14L-14R, continued use of this method at other locations in the interior region requires at least a second cardinal point 1471.”

“Consider FIGS. 14L and 14M. FIG. 14L represents a measurement 6000 that includes a collection of 3D coordinates 6010 obtained with the triangulation scanner 210, which illuminates the object surface with patterned light. The object surface is illustrated with a feature 6012 whose 3D coordinates are among the 3D coordinates 6010. The measurement 6000 also includes at least one cardinal point 6014, which may be measured with a camera assembly 1850 having the two cameras 1853A and 1853B or with a single camera 1853C of a camera assembly 1850B. The 3D coordinates 6010 and the 2D or 3D coordinates of the cardinal point 6014 are representative of the situation of FIG. 14C, the cardinal point 6014 of FIG. 14L corresponding to the cardinal point 1470 of FIG. 14C.

“In FIG. 14D the 3D measuring device 1410 has been translated and rotated relative to its position in FIG. 14C, and FIG. 14M represents the corresponding measurement. To place the 3D coordinates of FIGS. 14L and 14M in a common frame of reference, the translations dx, dy, and dz, indicated by the coordinate axes 6020, and the rotations dα, dβ, and dγ, indicated by the coordinate axes 6030, must be determined. The rotation angles dα, dβ, and dγ are often referred to as the pitch, roll, and yaw angles; other parameterizations of rotation may also be used.
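In matrix form, the six unknowns relate the two measurements through a rigid-body transformation. One conventional way to write this (one of several possible angle conventions, chosen here for illustration) is

```latex
\mathbf{p}_{14L} \;=\; R(d\alpha, d\beta, d\gamma)\,\mathbf{p}_{14M} + \mathbf{t},
\qquad
\mathbf{t} = \begin{pmatrix} dx \\ dy \\ dz \end{pmatrix},
\qquad
R = R_z(d\gamma)\,R_y(d\beta)\,R_x(d\alpha),
```

where R_x, R_y, and R_z are the elementary rotations about the coordinate axes. Matching cardinal points between the two measurements supplies the constraints from which the six parameters are solved, for example by the least-squares fit sketched earlier.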

Summary for “Registration and measurement of three-dimensional coordinates on the interior and exterior of an object”


“In an embodiment, the mobile measurement platform 100 with the robotic articulated arm 150 is considered a human-friendly robotic device, a type of robotic device that is safe to operate around people. In an embodiment the speed of the mobile platform is one meter per second, the maximum speed allowed for autonomous vehicles in a factory environment in which people are present. The mobile measurement platform 100 also includes an obstacle detection and avoidance system. In an embodiment, obstacles are detected by two two-dimensional (2D) scanners 116A and 116B. In an embodiment the 2D scanners are SICK model S300 Professional safety scanners. Each scanner emits a horizontal plane of light 117 and measures the distance and angle to surrounding objects over an angular range of 270 degrees, thereby obtaining information about potential obstacles around the mobile base assembly 110. In an embodiment the horizontal plane 117 is close to the floor, the region in which obstacles are most easily detected. When an obstacle is detected, a signal is sent to the processors of the system to stop the movement of the mobile platform. A person can also stop the movement of the mobile platform by pressing an emergency stop button 122A. In an embodiment, ultrasound sensors are included for obstacle avoidance where glass or other transparent surfaces are present. In an embodiment the mobile base assembly 110 includes an on-off switch 128 as well as indicator lights 118A, 118B, 120A, 120B, 120C, and 120D.

“In an embodiment, the robotic articulated arm 150 is safe to operate around people. In an embodiment the robotic articulated arm 150 is a Universal Robots model UR10. By analogy with human anatomy, the robotic articulated arm 150 includes a shoulder 152, an elbow 170, and a wrist 180. In an embodiment the robotic articulated arm 150 includes six joints 154, 158, 172, 182, 186, and 190 configured to rotate about axes 155, 159, 173, 183, 187, and 191, respectively, bidirectionally through angular movements 156, 160, 174, 184, 188, and 192.

“In an embodiment, the arm electronics assembly 130 includes an arm electronics housing 132. In an embodiment the robotic articulated arm 150 is mounted on the top surface of the arm electronics housing 132 through an arm base element 53. In an embodiment, cables from the robotic articulated arm 150 are routed from the arm base element 53 through a hole 133 into the arm electronics assembly 130, where they are routed to the control box and teach pendant electronics 250 shown in FIG. 2B.”

“In one embodiment, the robotic articulated arm is attached at a first end to the top surface of the arm electronics housing 132 and at a second end to an end effector that includes a 3D measuring device, as discussed below. Several connected arm segments 162, 175, and 176 lie between the first and second ends. Each joint assembly includes a motor and an angle measuring device, typically an angular encoder.

“The robotic articulated arm includes a variety of joints, each having a corresponding axis of rotation. Each joint is coupled to an associated arm segment or to the end effector, with no intervening arm segment between the joint and the associated arm segment or end effector. Some of the joints are swivel joints, each having a swivel axis as its corresponding axis of rotation; each swivel joint is configured to rotate its associated arm segment or end effector about the swivel axis. Others of the joints are hinge joints, each having a hinge axis as its corresponding axis of rotation; each hinge joint is configured to rotate its associated arm segment or end effector in a direction perpendicular to the hinge axis. In FIG. 2B, the swivel joints are 154, 186, and 190, with corresponding swivel axes 155, 187, and 191. The hinge joints are 158, 172, and 182, with corresponding hinge axes 159, 173, and 183.

“The physical hardware of the teach pendant 140 is connected to the control box through a cable. In one embodiment the teach pendant hardware 140 is attached to the arm electronics assembly 130; in another embodiment the operator can remove the teach pendant 140 from the arm electronics assembly 130 and hold it in his or her hand. The teach pendant is used to teach the robotic articulated arm 150 how to perform prescribed movements. In one embodiment the teach pendant 140 includes a touch screen display 146 and an emergency stop button 142 for the robotic articulated arm; a user can press the emergency stop button to cause the arm to stop moving. The arm electronics assembly 130 may include fans that cooperate with vents 131 to provide cooling.

“In one embodiment, the overall movement of the mobile measurement platform 100 is controlled by one or more processors in the system. The one or more processors may be any of the processors shown in elements 220 of FIG. 2B, including computer processors 270 available on a network, and may also include processors beyond those of elements 220. In an embodiment, all movements, including those of the wheels and the robotic articulated arm, are performed autonomously. In another embodiment an operator trains the robotic articulated arm 150 to perform desired movements using the teach pendant 140.

“FIG. 2A is an isometric view of the complete mobile measurement system 200, which is identical to the mobile measurement platform 100 except for the addition of an end effector 205 that includes an electronics interface box 207 and a triangulation scanner 210. In an embodiment, a connector 212 allows the triangulation scanner 210 to be detachably coupled to the electronics interface box. The motors of the robotic articulated arm 150 have rotated the arm segments and the end effector into orientations different from those of FIGS. 1A-1D: the shoulder 152 is unchanged, but the elbow has been rotated 90 degrees about a hinge axis and the wrist has been rotated 90 degrees about a swivel axis. Such rotations are needed, for example, to measure the interior of a complex object such as the automobile body-in-white (BiW) shown in FIG. 2C. To perform a variety of 3D measurements, the measurement platform 200 makes use of a number of swivel and hinge joint rotations, such as those used in FIG. 2C to measure the interior area.

“In one embodiment, a battery 276 supplies 24 volts to multiple devices in the system, as shown in FIG. 2B. A recharger 278 is accessed through the mains adapter 126 of FIG. 1B. In an embodiment, a separate battery supplies power to the robot control box and teach pendant electronics 250 in the arm electronics assembly 130.

“The I/O bus coupler cooperates with bus terminals to transmit and receive digital data. In an embodiment the I/O bus coupler is a Beckhoff BK5150 CANopen bus coupler that cooperates with Beckhoff KL1408 digital input terminals and Beckhoff KL2408 digital output terminals. The digital input and output terminals operate at 24 volts. In an embodiment, digital output signals are sent over digital output channels to the signal lamps, brakes, and sound maker.

“In an embodiment, a microcontroller 268 serves as a watchdog that monitors digital traffic and troubleshoots problems. In an embodiment the microcontroller is integrated with a touch screen or keypad for use by an operator.

“In one embodiment, an extended IMU 269 includes a magnetometer that provides heading information as well as a pressure sensor that provides elevation information, in addition to the accelerometers and gyroscopes of traditional IMUs. A processor within the extended IMU fuses the information from the various sensors to improve the accuracy of the position and orientation information over time. The extended IMU 269 communicates with the IPC 262 over a Universal Serial Bus (USB) communication link. In an embodiment the extended IMU is the x-IMU manufactured by x-io Technologies.

The IPC 262 communicates with several devices over Ethernet 282, including the 3D measuring device electronics 230, the robot control box and teach pendant electronics 250, the 2D scanner controllers 270, the WLAN access point 272, and the cellular gateway 274. The 2D scanner controllers allow the IPC to communicate with the 2D scanners 116A and 116B and to receive distance information for a collection of horizontal angles.

The WLAN access point 272 allows wireless devices, such as tablets and smart phones, to connect over communication channel 288 to the wired network of the mobile measurement platform 200 using the IEEE 802.11 (Wi-Fi) standard or a related standard. Wireless communication may also be established over Bluetooth using a Bluetooth transceiver 289. In an embodiment, devices such as cellular phones establish wireless communication over cellular communication channels such as 3G/4G LTE; in an embodiment such a channel is established using the cellular gateway 274, which in an embodiment is a Sierra Wireless model GX400.

“In some embodiments, signals are sent from the 3D measuring device electronics 230 directly to the IPC 262, either over a wired network or wirelessly. In an embodiment a cable is attached to the robotic articulated arm 150 at various positions and then routed through the hole 134 to the IPC 262. Some robots provide for routing of signals directly through the robot arm. In an embodiment, signals from the 3D measuring device electronics 230 are routed over a real-time bus, which may be EtherCAT, SERCOS III, PROFINET, POWERLINK, or EtherNet/IP. A real-time bus of this kind may attach to hundreds or thousands of devices in an automation network.

“FIGS. 3A-3C show a portion of the mobile measurement system 200 having an end effector 205 that includes a triangulation scanner 210 attached to an electronics box 207 via a connector 212. The end effector 205 is attached to the wrist elements 180 of the robotic articulated arm 150. The triangulation scanner 210 includes a camera 508 and a projector 510. The projector 510 is a light source that projects a line of light onto the object surface or a pattern of light over an area. The light source may be a laser, a superluminescent diode (SLD), an incandescent lamp, or a light-emitting diode (LED), for example, and the projected light may be visible or invisible, with visible light being more useful in some cases. The camera 508 includes a lens and an imaging sensor. The imaging sensor is a photosensitive array, which may be a charge-coupled device (CCD) two-dimensional (2D) area sensor or a complementary metal-oxide-semiconductor (CMOS) 2D area sensor, for example, or it may be some other type of device. An imaging sensor comprises a 2D array (rows and columns) of light-sensing picture elements (pixels). Each pixel typically contains at least one photodetector that converts light into an electric charge stored within the pixel well, which is then read out as a voltage. An analog-to-digital converter (ADC) converts the voltage values into digital values. For a CMOS sensor chip the ADC is usually contained within the chip; for a CCD sensor chip the ADC is typically located outside the chip on a circuit board.

The enclosure 218 houses the electrical circuit 219, which is electrically coupled to the projector 510 and the camera 508. The electrical circuit 219 may include one or more microprocessors, digital signal processors, memory, and other types of signal conditioning and/or storage circuits.

“The beam of light from the projector 510 intersects the beam of light from a marker light source 509. By observing the position of the intersection of these beams on the object, the user can determine whether the scanner is at the optimal distance from the object under test. In an embodiment, a camera mounted on the end effector 205 is also used to help establish the optimal distance to the object surface being investigated; in another embodiment, other information is used to move the triangulation scanner 210 to the optimal distance from the object surface.

“FIG. 4 shows the elements of a laser line probe (LLP) 4500, which includes a projector 4520 and a camera 4540. The projector 4520 includes a source pattern of light 4521 and a projector lens 4522. The source pattern of light is an illuminated pattern in the form of a line. The projector lens has a projector perspective center and a projector optical axis that passes through it; in the example of FIG. 4, a central ray of the beam of light 4524 is aligned with the projector optical axis. The camera 4540 includes a camera lens 4542 and a photosensitive array 4541. The lens has a camera optical axis 4543 that passes through a camera lens perspective center 4544. In the exemplary system 4500, the projector optical axis, which is aligned with the beam of light 4524, and the camera optical axis 4543 are both perpendicular to the line of light 4523 projected by the source pattern of light 4521; in other words, the line 4523 points in a direction perpendicular to the plane of the paper of FIG. 4. The line strikes an object surface, which at a first distance from the projector is surface 4510A and at a second distance is surface 4510B. At heights above or below the plane of the paper of FIG. 4, the object surface may be at a different distance from the projector. The line of light intersects the surface 4510A (in the plane of the paper) at a point 4526 and intersects the surface 4510B (in the plane of the paper) at a point 4527. For the intersection point 4526, a ray of light travels from the point 4526 through the camera lens perspective center 4544 to intersect the photosensitive array 4541 at an image point 4546. For the intersection point 4527, a ray of light travels from the point 4527 through the camera lens perspective center to intersect the photosensitive array 4541 at an image point 4547. By noting the position of the intersection point on the array relative to the position of the camera optical axis 4543, the distance from the projector (and camera) to the object surface can be determined using the principles of triangulation. The distance from the projector to other points on the line of light, that is, points not lying in the plane of the paper of FIG. 4, may similarly be found.”

“In one embodiment, the photosensitive array 4541 is aligned so that its rows or columns are parallel to the direction of the reflected laser stripe. The position of a spot of light along one dimension of the array (as indicated by the difference between the spots 4546 and 4547 in FIG. 4) provides the information needed to determine the distance to the object. The position of the spot of light along the orthogonal dimension of the array provides the information needed to determine where, along the length of the laser line, the plane of light intersects the object.

“In this specification, it is understood that the terms column and row refer simply to a first direction along the photosensitive array and a second direction perpendicular to the first. As such, the terms row and column as used herein do not necessarily correspond to the rows and columns defined in documentation provided by the manufacturer of the photosensitive array 4541. In the discussion that follows, the rows are taken to lie in the plane of the paper on the surface of the photosensitive array, and the columns to lie on the surface of the photosensitive array orthogonal to the rows. Other arrangements are possible, however.

“As described above, light from the scanner may be projected in a line pattern to collect 3D coordinates over a line. Alternatively, light from the scanner may be projected over an area to obtain 3D coordinates over an area of the object surface. In FIG. 3C the projector 510 is an area projector rather than a line projector. The principles of triangulation for the area case are explained here with reference to the system 2560 of FIG. 5A and the system 4760 of FIG. 5B. Referring first to FIG. 5A, the system 2560 includes a projector 2562 and a camera 2564. The projector 2562 includes a source pattern of light 2570 lying on a source plane and a projector lens 2572, which may include several lens elements. The projector lens has a lens perspective center 2575 and a projector optical axis 2576. A ray of light 2573 travels from a point 2571 on the source pattern of light through the lens perspective center onto an object 2590, which it intercepts at a point 2574.

The camera 2564 includes a camera lens 2582 and a photosensitive array 2580. The camera lens 2582 has a lens perspective center 2585 and an optical axis 2586. A ray of light 2583 travels from the object point 2574 through the camera perspective center 2585 and intercepts the photosensitive array 2580 at a point 2581.

The line segment that connects the perspective centers is the baseline: baseline 2588 in FIG. 5A and baseline 4788 in FIG. 5B. The baseline-projector angle (2594, 4794) is the angle between the projector optical axis and the baseline, and the baseline-camera angle (2596, 4796) is the angle between the camera optical axis (2586, 4786) and the baseline. If it is known that a point on the source pattern of light corresponds to a point on the photosensitive array (2581 or 4781), then the baseline length, the baseline-projector angle, and the baseline-camera angle make it possible to determine the sides of the triangle connecting the points 2585, 2574, and 2575, and hence the surface coordinates of points on the object 2590 relative to the frame of reference of the measurement system 2560. To do this, the angles of the sides of the small triangle between the projector lens 2572 and the source pattern of light 2570 are found using the known distance between the lens 2572 and the plane 2570 and the distance between the point 2571 and the intersection of the optical axis 2576 with the plane 2570; these small angles are then added to or subtracted from the larger angles 2596 and 2594, as appropriate, to obtain the desired angles of the triangle. One of ordinary skill in the art will recognize that equivalent mathematical methods can be used to find the lengths of the sides of the triangle 2574-2585-2575, and that other related triangles may be used instead.
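Written out, this triangulation reduces to the law of sines. With baseline length B, baseline-projector angle α, and baseline-camera angle β (plain symbols used here for illustration, not reference numerals), the triangle's third angle is π − α − β, so

```latex
\frac{r}{\sin\alpha} \;=\; \frac{B}{\sin(\pi-\alpha-\beta)} \;=\; \frac{B}{\sin(\alpha+\beta)}
\qquad\Longrightarrow\qquad
z \;=\; r\sin\beta \;=\; \frac{B\,\sin\alpha\,\sin\beta}{\sin(\alpha+\beta)},
```

where r is the distance from the camera perspective center to the object point and z is the perpendicular distance from the baseline to the object point. Each pixel of the photosensitive array changes β slightly, which is how a single image yields coordinates across the entire illuminated region.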

“Referring now to FIG. 5B, the system 4760 is similar to the system 2560 of FIG. 5A except that the system 4760 does not include a lens at the projector. The system includes a projector 4762 and a camera 4764. In the embodiment of FIG. 5B, the projector includes a light source 4778 and a light modulator 4770. The light source 4778 may be a laser light source, since such a source can remain in focus over a long distance using the geometry of FIG. 5B. A ray of light 4773 from the light source 4778 strikes the optical modulator 4770 at a point 4771; other rays from the light source 4778 strike the optical modulator at other positions on its surface. In an embodiment, the optical modulator 4770 changes the power of the emitted light, in most cases by decreasing the optical power to some degree, and in this way imparts an optical pattern to the light, referred to here as the source pattern of light, located at the surface of the optical modulator 4770. The optical modulator 4770 may be a DLP or LCOS device, for example, and may be transmissive or reflective. The light emerging from the optical modulator 4770 appears to emanate from a virtual light perspective center 4775: a ray of light appears to leave the virtual light perspective center 4775, pass through the point 4771, and travel to the point 4774 on the surface of the object 4790.

The baseline is the line segment extending from the camera lens perspective center 4785 to the virtual light perspective center 4775. In general, the method of triangulation involves finding the lengths of the sides of a triangle, for example the triangle having the vertex points 4774, 4785, and 4775. One way to do this is to find the baseline length, the angle between the baseline and the camera optical axis 4786, and the angle between the baseline and the projector reference axis 4776. To find the desired angle on the camera side, the small angle between the camera optical axis 4786 and the ray 4783 is found by solving the small triangle between the camera lens 4782 and the photosensitive array 4780, using the distance from the lens to the photosensitive array and the distance of the pixel from the camera optical axis; this small angle is then added to the angle between the baseline and the camera optical axis to obtain the desired angle. Similarly, the angle between the projector reference axis 4776 and the ray 4773 is found from the distance between the light source 4778 and the surface of the optical modulator and the distance of the projector pixel 4771 from the intersection of the reference axis 4776 with the modulator surface; this angle is subtracted from the angle between the baseline and the projector reference axis to obtain the desired angle.

The camera 4764 includes a camera lens 4782 and a photosensitive array 4780. The camera lens 4782 has a camera lens perspective center 4785 and a camera optical axis 4786. The camera optical axis is an example of a camera reference axis: since any axis that passes through the camera lens perspective center may equally be used in the triangulation calculations, the camera optical axis, which is an axis of symmetry for the lens, is simply a convenient choice. A ray of light 4783 travels from the object point 4774 through the camera perspective center 4785 and intercepts the photosensitive array 4780 at a point 4781. Other equivalent mathematical methods may be used to find the lengths of the sides of the triangle 4774-4785-4775, as will be clear to one of ordinary skill in the art.

“Although the triangulation method described here is well known, some additional technical information is given below for completeness. Each lens system has an entrance pupil and an exit pupil. The entrance pupil is the point from which the light appears to emerge, when considered from the point of view of first-order optics. The exit pupil is the point from which light appears to emerge in traveling from the lens system to the photosensitive array. For a multi-element lens system, the entrance pupil and exit pupil do not necessarily coincide, and the angles of rays with respect to the entrance pupil and exit pupil are not necessarily the same. The model can nevertheless be simplified by considering the perspective center to be the entrance pupil of the lens and then adjusting the distance from the lens to the source or image plane so that rays continue to travel along straight lines to intercept the source or image plane. In this way, the simple and widely used model of FIG. 5A is obtained. This description provides a good first-order approximation of the behavior of the light, but it should be noted that lens aberrations can cause the light to shift relative to the positions calculated with the model of FIG. 5A.

“In some cases, a scanner system may include two cameras in addition to a projector. In other cases, a triangulation system may be constructed using two cameras alone, the cameras being configured to image points of light on an object or in an environment. For systems with two cameras, triangulation may be performed between the camera images using the baseline between the two cameras. In this case the triangulation may be understood with reference to FIG. 5A, with the projector 2562 replaced by a camera.

“In some cases, different types of scan patterns may be advantageously combined to obtain better performance in less time. In one embodiment, a fast measurement method uses a two-dimensional coded pattern from which three-dimensional coordinate data may be obtained in a single shot. In a method using coded patterns, different characters, different shapes, different thicknesses or sizes, or different colors are used to provide distinctive elements, also known as coded elements or coded features. Such features may be used to enable the matching of the point 2571 to the point 2581; that is, a coded feature on the source pattern of light 2570 may be identified on the photosensitive array 2580.

The advantage of using coded patterns is that the three-dimensional coordinates of object surface points can be obtained quickly. In most cases, however, a sequential structured-light approach, such as the sinusoidal phase-shift approach discussed above, gives more accurate results. By choosing among the available projection methods, the user can measure particular objects, or particular areas or features of an object, with the accuracy each requires. A programmable source pattern of light makes such a selection easy.

A laser line scanner emits a line of light that intersects an object in a linear projection, so the illuminated shape traced on the object is two-dimensional. By contrast, a projector that projects a two-dimensional pattern of light creates an illuminated shape on the object that is three-dimensional. One way to distinguish a structured-light (area) scanner from a laser line scanner is to require that the structured-light scanner have at least three non-collinear pattern elements. For a coded two-dimensional pattern of light, the three non-collinear pattern elements are recognizable by their codes; because the pattern is projected in two dimensions, at least three of the pattern elements must be non-collinear. For a periodic pattern, such as a sinusoidally repeating pattern, each sinusoidal period represents a pattern element, and since the periodic pattern repeats in two dimensions, the pattern elements must be non-collinear. The laser line scanner, by contrast, emits a line of light, so all of its pattern elements lie on a straight line. Although the line has width, and the tail of its cross section may carry less optical power than the peak, these aspects of the line are not evaluated separately in finding the surface coordinates of the object and therefore do not represent separate pattern elements. The line may contain multiple pattern elements, but these pattern elements are collinear.

“It should be noted that although the above descriptions distinguish between line scanners and area (structured-light) scanners based on whether three or more pattern elements are collinear, the intent of this criterion is to distinguish patterns projected as areas from patterns projected as lines. Consequently, a pattern projected in a linear fashion, having information along only a single path, is still a line pattern even if the one-dimensional pattern is curved.

It is also possible to use a scanner 2500, which may be a line scanner or an area scanner, with a six-DOF laser tracker 900, as shown in FIG. 6A. In an embodiment the scanner projector 2520 projects a two-dimensional pattern of light from its lens perspective center, the light traveling outward until it intersects the object 2528. Patterns of this type include the coded and periodic patterns described above. Alternatively, the projector 2520 may project a one-dimensional pattern of light, thereby acting as an LLP (line scanner).

FIG. 6B shows an exemplary laser tracker system 4005. An exemplary gimbaled beam-steering mechanism 4012 of the laser tracker 4010 comprises a zenith carriage 4014 mounted on an azimuth base 4016 and rotated about an azimuth axis 4020. A payload 4015 is mounted on the zenith carriage 4014 and rotated about a zenith axis 4018. The zenith axis 4018 and azimuth axis 4020 intersect orthogonally, internally to the tracker 4010, at a gimbal point 4022, which is typically the origin for distance measurements. A laser beam 4046 virtually passes through the gimbal point 4022 and points orthogonal to the zenith axis 4018; in other words, the laser beam 4046 lies in a plane approximately perpendicular to the zenith axis 4018 and passing through the azimuth axis 4020. The outgoing laser beam 4046 is pointed in the desired direction by rotation of the payload 4015 about the zenith axis 4018 and rotation of the zenith carriage 4014 about the azimuth axis 4020. A zenith angular encoder, internal to the tracker, is attached to a zenith mechanical axis aligned to the zenith axis 4018, and an azimuth angular encoder, internal to the tracker, is attached to an azimuth mechanical axis aligned to the azimuth axis 4020. The zenith and azimuth angular encoders measure the rotation angles about their respective axes to relatively high accuracy. The outgoing laser beam 4046 travels to a retroreflector target 4026, which may be, for example, a spherically mounted retroreflector (SMR) as described above. By measuring the radial distance between the gimbal point 4022 and the retroreflector 4026 together with the rotation angles about the zenith and azimuth axes, the position of the retroreflector 4026 is found within the spherical coordinate system of the tracker.
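The tracker's native measurement is thus a spherical triple: the distance d from the gimbal point and the two encoder angles. With θ the zenith angle, φ the azimuth angle, and the z axis taken along the vertical (azimuth) axis of the tracker (symbols assumed here for illustration), the Cartesian coordinates in the tracker frame follow from the usual spherical-coordinate relations:

```latex
x = d\,\sin\theta\cos\varphi, \qquad
y = d\,\sin\theta\sin\varphi, \qquad
z = d\,\cos\theta .
```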

“The outgoing laser beam 4046 may include one or more wavelengths, as described below. For the sake of clarity and simplicity, a steering mechanism of the type shown in FIG. 6B is assumed in the following discussion, but other types of steering mechanisms are possible. For example, it is possible to reflect a laser beam off a mirror rotated about the azimuth and zenith axes. The techniques described herein are applicable regardless of the type of steering mechanism.

“Several laser trackers exist or have been proposed for measuring six degrees of freedom rather than the ordinary three. Exemplary six-degree-of-freedom (six-DOF) systems are described in U.S. Pat. No. 7,800,758 (’758) to Bridges et al., U.S. Pat. No. 8,525,983 (’983) to Bridges et al., U.S. Pat. No. 6,166,809 (’809) to Pettersen et al., and U.S. Patent Application No. 2010/0149525 (’525) to Lau, the contents of all of which are incorporated by reference. Six-DOF systems measure three orientational degrees of freedom as well as three positional degrees of freedom (i.e., x, y, z).

“FIG. 6A illustrates a six-DOF scanner 2500 used in conjunction with a six-DOF laser tracker 900. The six-DOF laser tracker 900 sends a beam of light 784 to a retroreflector 2510 or 2511 on the six-DOF scanner 2500 and measures the distance and the two angles from the tracker 900 to the scanner 2500. The six-DOF scanner 2500 includes a body 2514, one or more retroreflectors 2510 and 2511, a scanner camera 2530, a scanner light projector 2520, an optional electrical cable 2546, an optional battery 2544, an antenna 2548, and an electronics circuit board 2542. The antenna 2548, if present, provides wireless communication between the six-DOF scanner 2500 and other computing devices such as the laser tracker 900 and external computers. The scanner projector 2520 and the scanner camera 2530 together are used to measure the three-dimensional coordinates of a workpiece 2528. The camera 2530 includes a camera lens system 2532 and a photosensitive array 2534, which may be a CCD or CMOS array, for example. The scanner projector 2520 includes a projector lens system 2523 and a source pattern of light 2524. The source pattern of light may emit a point of light, a line of light, or a structured (two-dimensional) pattern of light. If the scanner light source emits a point of light, the point may be scanned, for example with a moving mirror, to produce a line or an array of lines. If the scanner light source emits a line of light, the line may be scanned, for example with a moving mirror, to produce an array of lines. In an embodiment, the source pattern of light might be an LED, a laser, or another light source reflected off a digital micromirror device (DMD) such as a Texas Instruments digital light projector (DLP), a liquid crystal device (LCD), or a similar device used in transmission mode. The source pattern of light might also be a slide pattern, for example a chrome-on-glass slide, which might have one or more patterns, the slides being moved in and out of position as needed. Additional retroreflectors, such as the retroreflector 2511, may be added to the first retroreflector 2510 to allow the laser tracker to track the six-DOF scanner from a variety of directions, thereby giving greater flexibility in the directions to which light from the scanner 2500 may be projected.

The scanner camera 2530, together with the scanner projector 2520, measures the three-dimensional coordinates of the workpiece 2528 using the principles of triangulation. There are several ways the triangulation measurement can be implemented, depending on the pattern of light emitted by the scanner light source 2520 and the type of photosensitive array 2534. For example, if the pattern of light emitted by the scanner light source 2520 is a line of light, or a point of light scanned into the shape of a line, one dimension of the two-dimensional array 2534 corresponds to the direction of a point 2526 on the surface of the workpiece 2528, while the other dimension corresponds to the distance of the point 2526 from the scanner light source 2520. Hence the three-dimensional coordinates of each point 2526 along the line of light emitted by the scanner light source 2520 are known relative to the local frame of reference of the six-DOF scanner 2500. The six degrees of freedom of the six-DOF scanner are known by the six-DOF laser tracker using the methods described below. From the six degrees of freedom, the three-dimensional coordinates of the scanned line of light may be found in the tracker frame of reference, which in turn may be converted into the frame of reference of the workpiece 2528 through measurement by the laser tracker of three points on the workpiece, for example.
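Concretely, converting scanner-frame coordinates into the tracker frame is just the application of the measured pose. If the six-DOF measurement gives the rotation R and translation t of the scanner frame with respect to the tracker frame (symbols chosen here for illustration), each scanned point transforms as

```latex
\mathbf{p}_{\text{tracker}} \;=\; R\,\mathbf{p}_{\text{scanner}} + \mathbf{t},
```

and a further fixed transformation, established for example by measuring three known points on the workpiece, carries tracker-frame coordinates into the workpiece frame.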

“As the six-DOF scanner 2500 is moved by the end effector 205 of the mobile measurement platform 200, a line of laser light emitted by the scanner light source 2520 may be moved in such a way as to “paint” the surface of the workpiece 2528, thereby obtaining the three-dimensional coordinates of the entire surface. It is also possible to “paint” the surface of a workpiece using a scanner light source 2520 that emits a structured pattern of light; alternatively, more accurate measurements may be made by mounting the six-DOF scanner in a stationary position while projecting a structured light pattern. In an embodiment the structured light pattern emitted by the scanner light source 2520 includes a pattern of fringes, each fringe having an irradiance that varies sinusoidally over the surface of the workpiece 2528. In an embodiment, the sinusoids are shifted by three or more phase values, and the amplitude level recorded by each pixel of the camera 2530 for each phase value is used to find the position of each pixel on the sinusoid. This information is used to help determine the three-dimensional coordinates of each point 2526. In another embodiment, the structured light is a coded pattern that may be evaluated to determine three-dimensional coordinates from a single image frame collected by the camera 2530. Use of a coded pattern may allow relatively accurate measurements while the six-DOF scanner 2500 is moved by hand at a reasonable speed.
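For the phase-shift step, one common choice (assumed here for illustration; the text above only requires three or more shifts) is three camera frames recorded with the projected sinusoid shifted by −2π/3, 0, and +2π/3 radians, from which the wrapped phase at each pixel follows in closed form:

```python
import numpy as np

def phase_from_three_shifts(i1, i2, i3):
    """Per-pixel wrapped phase from three phase-shifted fringe images.

    i1, i2, i3: 2D intensity arrays recorded with the projected sinusoid
    shifted by -2*pi/3, 0, and +2*pi/3 radians, respectively.
    Returns the wrapped phase in (-pi, pi], which locates each pixel on the
    sinusoid and feeds the triangulation that yields 3D coordinates.
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```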

“In some cases it is advantageous to measure features such as edges or holes using an optional tactile probe 2550 attached to the six-DOF scanner 2500. In FIG. 6A the tactile probe 2550 includes a probe tip 2554, which is part of a probe extension assembly 2552. In an embodiment, the projector 2520 sends a beam of laser light to illuminate the region to be measured.

“The laser tracker 900 measures a distance and two angles to determine the three positional degrees of freedom (x, y, z) of the six-DOF scanner 2500. There are many possible methods of determining the three orientational degrees of freedom of the six-DOF scanner 2500; these are described in more detail below.

“FIGS. 7A and 7B show isometric and exploded isometric views of a six-DOF tracker target assembly 710 coupled to a triangulation scanner 210. FIG. 7A is an isometric view in which the six-DOF tracker target assembly 710 is attached to the triangulation scanner 210 through a mechanical and electrical interface 216. The electrical interface 216 includes two parts: a first part 216A, which in this case is an upper scanner connector 216A, and a second part 216B, which in this case is a lower six-DOF tracker assembly connector 216B. The first and second parts couple together to hold the scanner 210 in a fixed position and orientation relative to the six-DOF tracker target assembly 710.

“The six-DOF laser tracker target assembly 710 cooperates with a laser tracker 4010 to determine the six degrees of freedom of the assembly 710. The six degrees of freedom include three translational degrees of freedom (e.g., x, y, z), which the tracker determines as described above with reference to FIG. 6B. The six-DOF target assembly can be any of a variety of types, such as those described in the patents ‘758, ‘983, and ‘809, or the patent application ‘525, all of which are incorporated herein by reference. Tracking the six degrees of freedom of the attached six-DOF accessory 710 enables the tracker to follow the position and orientation of the scanner 210 relative to the object, which allows relatively easy and accurate registration of multiple line scans and area scans. In an embodiment, a probe tip 718 is attached to a probe coupler 719, and the tracker determines the 3D coordinates of the probe tip 718 based on the six degrees of freedom.

“In one embodiment, the laser tracker 4010 works in conjunction with the six-DOF target assembly 710 and a processor to determine the six degrees of freedom of the six-DOF target assembly 710. The laser tracker 4010 sends a beam of light to the six-DOF target 710, which in one embodiment includes a retroreflector target that may be a cube corner retroreflector. A collection of six-DOF retroreflector targets 720 may be provided to allow easy viewing of the six-DOF targets from a wide range of angles. A first portion of the light returning from the retroreflector travels to a distance meter in the tracker 4010, which measures the distance from the tracker to the retroreflector. A second portion of the light travels to a position detector that generates an electrical signal indicating the position of the returning beam; the position detector provides this signal to a control system that includes motors to steer the beam of light so it remains centered on the retroreflector. The tracker also uses angular transducers, such as angular encoders, to provide two angles that specify the direction of the laser beam. These two angles, together with the distance measured by the distance meter, give the three translational degrees of freedom of the six-DOF tracker target assembly 710. In an embodiment, electrical signals from the six-DOF targets are sent through the connector 712 and cable 714 to an electrical processing unit 730 for data processing and synchronization.
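
As a rough illustration (not taken from the patent), the distance-meter reading and the two encoder angles convert into Cartesian coordinates in the tracker frame by ordinary spherical-to-Cartesian geometry; the angle conventions assumed here are one common choice:

```python
import math

def tracker_xyz(distance_m, azimuth_rad, zenith_rad):
    """Convert a tracker's distance and two encoder angles to Cartesian
    coordinates in the tracker frame.

    azimuth_rad -- rotation about the vertical (standing) axis
    zenith_rad  -- angle of the beam measured down from the vertical axis
    """
    x = distance_m * math.sin(zenith_rad) * math.cos(azimuth_rad)
    y = distance_m * math.sin(zenith_rad) * math.sin(azimuth_rad)
    z = distance_m * math.cos(zenith_rad)
    return x, y, z

# Example: a target 5 m away, 30 degrees around, 80 degrees down from zenith.
print(tracker_xyz(5.0, math.radians(30), math.radians(80)))
```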

“As described herein, there are several methods of determining the three orientational degrees of freedom, such as those taught in the patents ‘758, ‘983, and ‘809 and the patent application ‘525. These methods include (1) measuring the positions of multiple light sources on a six-DOF tracker target with a camera attached to the laser tracker; (2) measuring lines marked on a cube corner retroreflector to determine the three orientational degrees of freedom; and (3) measuring light passing through an opening in a cube corner retroreflector to determine pitch and yaw angles while using an inclinometer to determine the roll angle. Many other methods of measuring three orientational degrees of freedom are possible, and the six-DOF target assembly 710 can be used with any method that measures three orientational degrees of freedom.

“As a preliminary step in the methods described herein, a common frame of reference is required for the scanner 210 and the six-DOF tracker target assembly 710. This preliminary step can be performed at the manufacturer’s factory or by the operator following procedures prescribed by the manufacturer. The common frame of reference can be obtained, for example, by viewing common features with the scanner 210 and the six-DOF target assembly 710 and then performing a least-squares optimization to match the observed features. Such methods are well known in the art and are not discussed further.
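
One well-known way to perform such a least-squares match (a sketch, not the manufacturer's procedure) is the SVD-based rigid alignment of commonly observed features; all names here are illustrative:

```python
import numpy as np

def best_fit_transform(p_scanner, p_target):
    """Least-squares rigid transform mapping features observed in the
    scanner frame onto the same features observed in the target frame.

    p_scanner, p_target -- (N, 3) arrays of matched feature coordinates.
    Returns (R, t) such that p_target ~= p_scanner @ R.T + t.
    """
    ca, cb = p_scanner.mean(axis=0), p_target.mean(axis=0)
    # Cross-covariance of the centered point sets.
    h = (p_scanner - ca).T @ (p_target - cb)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cb - r @ ca
```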

“In one embodiment, the six-DOF tracker target assembly 710 includes a tactile probe 718 that connects to the collection of six-DOF targets 720 through an interface unit 719. The interface unit 719 may be used to conveniently attach and detach different tactile probes 718. It may also provide electrical functionality needed by special probes, such as a touch-trigger probe that takes a reading the moment the probe contacts an object.

“In one embodiment, the triangulation scanner 210 is removed and the six-DOF tracker target assembly 710 is attached directly to the electronics 207 or to a mechanical coupler at the last joint of the robotic articulated arm 150. FIGS. 7C and 7D illustrate this arrangement.

“FIG. 8A shows a perspective view of a three-dimensional tactile probing system 5100 that includes a camera bar 5110 and a probe assembly 5140. The camera bar includes a mounting structure 5112 and at least two triangulation cameras 5120 and 5124; an optional camera 5122 may also be included. Each camera includes a lens and a photosensitive array, for example the lens 2564 of FIG. 5A. The optional camera 5122 may be identical to the cameras 5120 and 5124, or it may be a color camera. The probe assembly 5140 includes a housing 5142, a collection of lights 5144, optional pedestals 5146, a shaft 5148, a stylus 5150, and a probe tip 5152. The positions of the lights 5144 are known relative to the probe tip 5152; the lights may be light sources such as light-emitting diodes, or reflective spots illuminated by an external source. Their positions may be found using factory or on-site compensation procedures. The shaft 5148 may serve as a handle, or an alternative handle may be provided.

“Triangulation using the image data from the cameras 5120 and 5124 of the camera bar 5110 determines the three-dimensional coordinates of each point of light 5144 within the frame of reference of the camera bar. Throughout this document and in the claims, the term “frame of reference” is taken to be synonymous with “coordinate system.” From these coordinates, the position of the probe tip within the frame of reference is determined by mathematical calculations that are well known in the art. By bringing the probe tip 5152 into contact with the object 5160, surface points of the object can be measured.
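
A minimal sketch of this well-known calculation, assuming each camera's center and the direction of the ray through the imaged light point are already known in the camera-bar frame, is midpoint triangulation of the two rays; the function name is illustrative:

```python
import numpy as np

def intersect_rays(o1, d1, o2, d2):
    """Midpoint triangulation for one light point seen by two cameras.

    o1, o2 -- camera centers in the camera-bar frame
    d1, d2 -- unit direction vectors of the rays through the imaged point

    Returns the 3D point midway between the two rays at closest approach.
    """
    o1, d1, o2, d2 = map(np.asarray, (o1, d1, o2, d2))
    # Solve for parameters s, t minimizing |(o1 + s*d1) - (o2 + t*d2)|.
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    s, t = np.linalg.solve(a, b)
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```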

An electrical system 5101 may include an electrical circuit board 5102 and an external computer 5104; the external computer 5104 could be a networked collection of computers. The electrical system 5101 can include wired and wireless components, internal and external to the elements shown in FIG. 8A, that perform the measurements and calculations required to determine the three-dimensional coordinates of points on the surface of the object. The electrical system 5101 generally includes one or more processors, which may be microprocessors or computers.

“FIG. 8B is a perspective view of a three-dimensional area scanner system 5200, which includes the camera bar 5110 of FIG. 8A and a scanner assembly 5240. The housing 5142, lights 5144, optional pedestals 5146, and shaft 5148 have the characteristics described with reference to FIG. 8A. The projector 5252 can be any of a number of types; for example, it may reflect light off a digital micromirror device such as a Texas Instruments digital light projector (DLP), a liquid crystal device (LCD), or a liquid crystal on silicon (LCOS) device. Alternatively, the projected light could be generated by light passing through a slide pattern, for example a chrome-on-glass slide carrying one or more patterns, with slides moved in and out of position as required. The projector 5252 projects light 5262 into an area 5266 on the object 5160, and the camera 5254 images a portion of the illuminated area 5266 to obtain digital data.

“The digital data can be processed in part using electrical circuitry within the scanner assembly 5240, with the partially processed data provided to an electrical system 5201 that includes an electrical circuit board 5202 and an external computer 5204. The external computer 5204 could be a networked collection of computers. The electrical system 5201 can include wired and wireless components, internal or external to the elements of FIG. 8B, that perform the measurements and calculations required to determine the three-dimensional coordinates of points on the surface of the object 5160. The electrical system 5201 generally includes one or more processors, which may be microprocessors, computers, field-programmable gate arrays (FPGAs), or digital signal processing (DSP) units, for example. The result of the calculations is a set of coordinates in the camera-bar frame of reference, which may then be converted into another frame of reference if desired.

“FIG. 8C is a perspective view of a three-dimensional line scanner system 5300, which includes the camera bar 5110 of FIG. 8A and a scanner assembly 5340. The housing 5142, lights 5144, optional pedestals 5146, and shaft 5148 have the characteristics described with reference to FIG. 8A. The projector 5352 projects a stripe of light onto the object 5160, and the camera 5354 images a portion of the stripe pattern on the object to obtain digital data. The digital data are processed in a manner similar to that described with reference to FIG. 8B, for example using the electrical components 5201. The result of the calculations is a set of 3D coordinates of the object surface in the camera-bar frame of reference, which may then be converted into another frame of reference if needed.

“FIG. 9 is an isometric view of a six-DOF target assembly 910 attached to a triangulation scanner 210. The targets on the six-DOF target assembly 910 may be measured with a camera bar such as the camera bar 5110 of FIGS. 8A-8C. Alternatively, the targets may be measured with two or more cameras mounted separately in the environment. A camera bar includes two or more cameras separated by a camera-bar baseline. Triangulation is used to calculate the six degrees of freedom of the six-DOF target assembly; in the triangulation calculation, a processor may also use additional geometrical values, such as the camera-bar baseline and the orientations of the cameras on the camera bar.

“In one embodiment, the six-DOF target assembly 910 includes a collection of light points 920, an electrical enclosure 930, and a tactile probe 918. In an embodiment, the collection of light points includes some points 922 mounted directly on the structure and other points 924 mounted on pedestals 926. In an embodiment, the points of light 922 and 924 are LEDs; in another embodiment, they are reflective spots illuminated by an external light source. In an embodiment, the points of light are positioned so as to be visible from a wide range of angles relative to the scanner 210.

“In an embodiment, the six-DOF target assembly includes a tactile probe 918 that connects to the electrical enclosure 930 through a probe interface 216. The probe interface 216 may provide touch-probe or analog-probe electronics. The scanner 210 can provide detailed 3D information over an area quickly, but may provide less information than needed about edges and holes; an operator can use the tactile probe 918 to obtain this information.

“FIG. 10A depicts a camera assembly 1850 that can be attached to a triangulation scanner or another 3D measurement device. In an embodiment, the camera assembly 1850 includes at least one camera; in the illustrated embodiment it includes two cameras 1853A and 1853B. Camera 1853A includes a lens assembly 1854A and an electronics box 1856A that includes a photosensitive array (not shown). Camera 1853B likewise includes a lens assembly 1854B and an electronics box 1856B that includes a photosensitive array. The cameras 1853A and 1853B are configured with partially overlapping fields of view (FOVs) to provide stereo imaging, which allows 3D coordinates of targets to be determined by triangulation methods, as described above. In some embodiments, each of the cameras provides a larger FOV than the camera 508; in other embodiments, the combined FOV of the two cameras is larger than that of the camera 508. In some embodiments, a single wide-FOV camera is provided on the assembly 1850; in other cases, several wide-FOV cameras with non-overlapping views are provided on the camera assembly 1850. In another embodiment, a camera assembly 1850B includes a single camera 1853C, which contains a lens 1854C and an electronics box 1856C that includes a photosensitive array.

In some embodiments, the triangulation scanner 210 is a line scanner, meaning that 3D coordinates of the object are obtained along a projected line of light. In other embodiments, the scanner 210 scans over an area, so that 3D coordinates are obtained over a 2D region of the object’s surface. In either case, the multiple collections of 3D coordinates obtained from successive scans by the scanner 210 must be registered together.

“Methods are now described for using the camera assembly 1850 or 1850B with the scanner 210 to register multiple scans taken by the scanner, which allows scans to be taken without being tracked by a six-DOF measuring device.”

“A common frame of reference is required for the scanner 210 and the camera assembly 1850 or 1850B as a preliminary step in all of the procedures described below. This preliminary step can be performed at the manufacturer’s factory or by the operator following procedures prescribed by the manufacturer. The common frame of reference can be obtained by viewing common features with the scanner 210 and the camera assembly and then performing a least-squares optimization procedure to match the observed features. Such methods are well known in the art and are not discussed further.

“FIG. 11A illustrates a first instance in which a projector 510 projects a first line of light 1810 onto an object 1801. The object 1801 may include fine details in some regions, such as the features 1802 and 1803, and large featureless regions elsewhere, such as the region 1804. The 2D image sensor (e.g., photosensitive array) of the camera 508 views the first line of light 1810 within a region 1815 of the object imaged by the camera 508. As discussed above with reference to FIG. 4, the first line of light 1810 as it appears on the 2D image sensor of the camera 508 provides the information a processor needs to determine the 3D coordinates of the first line of light on the object 1801, these coordinates being given in the frame of reference of the scanner 210.

“In a second instance, the projector 510 projects a second line of light 1812 onto the object 1801. The second line of light 1812 as it appears on the 2D image sensor of the camera 508 provides the information the scanner 210 needs to determine its 3D coordinates, again in the frame of reference of the scanner 210. What is needed is a way to register the scans of the first and second instances so that the 3D coordinates of the first and second lines of light can be placed in a common frame of reference.

“In a first method of registration, based on natural features, the cameras 1853A and 1853B image a region 1820 of the object, which in this example includes the detailed features 1806, 1807, and 1808. Using triangulation, a processor can use the images from the cameras 1853A and 1853B to determine the 3D coordinates of these detailed features in the frame of reference of the scanner 210. As explained above, this triangulation requires the baseline distance between the cameras 1853A and 1853B and the relative orientation of these cameras with respect to the baseline. Because the 3D coordinates of the detailed features captured by the cameras 1853A and 1853B cover a region of the object 1801, it is possible to match the features imaged in the first and second instances, which allows a coordinate transformation to place the first and second lines of light 1810 and 1812 in the same frame of reference. Natural features such as the intersection point of three planes 1809 shown in FIG. 11A have a well-defined position in 3D space and are easy to match in images from multiple cameras, making them particularly useful for registration based on natural targets.

“FIG. 11B shows a second method of using the cameras 1853A and 1853B to register multiple sets of 3D coordinates from line scans taken with a line scanner 210; here the registration is based on matching physical targets rather than natural features. FIG. 11B is the same as FIG. 11A except that FIG. 11B also includes markers 1832 on the object 1801 and/or markers 1834 in the vicinity of the object but not on it. In an embodiment, the targets are reflective targets, for example white circular targets sometimes referred to as photogrammetry targets. Such targets may be illuminated by the light sources 1858A and 1858B shown in FIGS. 10A and 10B, or by the light sources 1858C of FIG. 10C. In another embodiment, the targets 1832 and 1834 are themselves light sources, such as LEDs. In a further embodiment, the targets 1832 and 1834 are a combination of photogrammetry targets and LEDs.

“In a first instance, the projector 510 projects a line of light 1810 onto the object 1801; in a second instance, the projector 510 projects a second line of light 1812 onto the object 1801. In an embodiment, the cameras 1853A and 1853B image at least three common non-collinear targets, which may be targets 1832 and/or 1834. These common points allow a processor to place the 3D coordinates of the first and second lines of light in a common frame of reference. This procedure is repeated as the scanner 210 is moved across the object 1801, allowing the processor to determine the 3D coordinates of the surface of the object 1801. In another embodiment, image information from physical targets is combined with information from natural features to register the 3D coordinates of the object surface 1801.
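
A simple way to see why three non-collinear common targets suffice (an illustrative sketch, not the patent's algorithm) is that they define an orthonormal frame; expressing both scans in the frame built from the same three targets registers them:

```python
import numpy as np

def frame_from_three_targets(p0, p1, p2):
    """Build an orthonormal frame from three non-collinear targets.

    Seeing the same three targets in two scan positions lets both scans
    be expressed in the frame returned here, which registers them.
    Returns (origin, 3x3 rotation whose columns are the frame axes).
    """
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    x = p1 - p0
    x = x / np.linalg.norm(x)
    z = np.cross(x, p2 - p0)          # normal to the target plane
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)                # completes a right-handed frame
    return p0, np.column_stack((x, y, z))

# Express a measured point in the target-defined frame:
# p_local = rot.T @ (p_world - origin)
```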

“FIG. 11C illustrates a third method, in which an external projector 1840, separate from the scanner 210 and the camera assembly 1850, projects spots of light 1832 onto the object and/or spots 1834 off the object but in its vicinity. These spots of light are imaged by the cameras 1853A and 1853B in the same manner as the physical targets of FIG. 11B, and the processor determines the 3D coordinates of the object surface in the same manner in each case.

“FIG. 11D illustrates a first method of using the cameras 1853A and 1853B to register multiple sets of 3D coordinates from area scans taken with an area scanner 210; the registration is based on matching natural features. In a first instance, the projector 510 projects a first area of light 1810B onto the object 1801. The overlap of the projected area of light 1810B with the region 1815 imaged by the camera forms an overlap region 1817. Within this overlap region 1817, a processor can determine the 3D coordinates of the surface of the object 1801, these coordinates being given in the frame of reference of the scanner 210.

“FIG. 11E shows a second instance, in which the scanner has been moved and projects a second area of light onto the object. In some cases, the overlap regions of the first and second instances contain enough common features to allow the 3D coordinates measured by the scanner 210 in the two instances to be registered directly. If the object 1801 has very few features in the overlap regions 1817 and 1817B, however, it may not be possible to register the first and second scans accurately using the scan data alone.

“In one embodiment, the cameras 1853A and 1853B have wider FOVs than the camera 508, which allows additional features such as 1806, 1807, and 1808 to improve the registration by matching 3D features, as discussed above with respect to FIGS. 11A, 11B, and 11C. If the object 1801 lacks distinct features, as in the region 1804, the registered 3D images can end up warped; the flat region 1804 might, for example, come out looking like a saddle. This effect is often called the “potato chip” or “potato crisp” effect.”

“For scan regions without many features, registration can be improved by placing targets on or near the object 1801. FIG. 11F shows a second method of using the cameras 1853A and 1853B to register multiple sets of 3D coordinates from area scans taken with an area scanner 210; the registration is based on matching physical targets rather than natural features. FIG. 11F is the same as FIG. 11D except that FIG. 11F also includes markers 1832 on the object 1801 and/or markers 1834 in the vicinity of the object but not on it. The method described with reference to FIG. 11B can be used to improve the registration of the 3D coordinates obtained from successive scans.

“In a third method, an external projector 1840, separate from the scanner 210 and the camera assembly 1850, projects spots of light 1832 onto the object and/or spots 1834 off the object but in its vicinity. These spots of light are imaged by the cameras 1853A and 1853B in the same manner as the physical targets of FIG. 11F, and the processor determines the 3D coordinates of the object surface in the same manner in each case.”

“Mark” is a term that refers to any physical feature used to register multiple sets of 3D coordinates. Four kinds of marks were discussed above: (1) natural features of the object’s surface (or features on a stationary surface proximate the object); (2) LED markers (targets) on or proximate the object; (3) reflective markers (targets) on or proximate the object; and (4) spots of light projected onto or proximate the object by an external projector not located on the scanner 210 or the camera assembly 1850.

“FIG. 12A illustrates a mobile measurement platform 200 with an end effector that includes a triangulation scanner 210 and a camera assembly 1850, measuring the rear section of an automobile BiW 1202. An external projector 1210 projects spots of light 1212 over a large area, providing marks that the camera assembly 1850 can image so that the processors can register multiple scans from the scanner 210. In an embodiment, the external projector 1210 is mounted on a mobile tripod 1220, which may move autonomously or under user direction to project the spots wherever they are needed.

“FIG. 12B illustrates the mobile measurement platform 200, with an end effector that includes the triangulation scanner 210 and the camera assembly 1850, measuring an interior portion of the BiW 1202. A projector placed outside one of the windows would likely project spots stretched into elongated ellipses, because of the shallow angle at which the projected spots strike the interior surfaces. In addition, the mobile platform 200 would block some of the projected spots as it moves its end effector from place to place, which could cause problems in some cases. These problems can be avoided by placing the external projector 1210 on its own mobile platform 100; the robotic articulated arm can then position the external projector in whatever position and orientation best suits the situation.

“FIG. 13A shows an apparatus 1310 that includes a laser tracker 4010 mounted on a mobile structure 1220. In an embodiment, the mobile structure 1220 includes tripod legs 1224 and a motorized dolly 1222.

“FIGS. 13B-13E illustrate a method of placing measurements taken over a large volume into a common frame of reference. FIG. 13B shows a mobile measurement platform 200 measuring 3D coordinates of the rear section of a BiW 1202. The end effector of the mobile platform includes a scanner 210 and a six-DOF tracker target assembly 710. The six-DOF tracker target assembly 710 receives a beam of light from the laser tracker 4010, which determines the six degrees of freedom of the scanner 210. The tactile probe 718 on the six-DOF tracker target assembly 710 may also be used to take measurements.

“Suppose the mobile platform 200 needs to move to a different position to take additional measurements, and that in the new position the BiW blocks the beam of light from the laser tracker. The laser tracker must then be moved, and in such a manner that the 3D measurements taken from each tracker location can be placed in a common frame of reference. The relocation procedure is to measure one or more six-DOF targets 1340 in a first instance, as shown in FIG. 13C, and then to measure the same six-DOF targets again in a second instance, as shown in FIG. 13D. Alternatively, the tracker may measure three or more three-DOF targets (also labeled 1340 herein), such as SMRs, in the first instance shown in FIG. 13C and then measure the same three-DOF targets again in FIG. 13D.”

“These tracker measurements of a common six-DOF target, or of three or more common three-DOF targets, allow the calculation of a transformation matrix that places the measurements from both tracker locations in a common frame of reference. As FIG. 13E shows, the mobile platform then continues measuring the interior of the BiW while the tracker monitors the six degrees of freedom of the scanner 210; this corrects for movement and puts the scanner readings into the common frame of reference. Measuring a single six-DOF target with the laser tracker is sufficient to create the transformation matrix, as is measuring three three-DOF retroreflectors. The same relocation method can be used with a camera-bar device and a six-DOF light-point target assembly, such as that shown in FIG. 15A.”
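
As an illustrative sketch of the relocation math (assuming a rotation and translation already found, e.g., by the least-squares alignment of the common targets sketched earlier; all names are illustrative), the result is a 4×4 transformation matrix that maps measurements from the second tracker station into the first station's frame:

```python
import numpy as np

def to_homogeneous(r, t):
    """Pack a 3x3 rotation and a translation vector into a 4x4 matrix."""
    m = np.eye(4)
    m[:3, :3] = r
    m[:3, 3] = t
    return m

def move_to_station1_frame(points_station2, r, t):
    """Express points measured from the relocated tracker (station 2) in
    the original station-1 frame, given the transform (r, t) found from
    the common targets measured at both stations."""
    m = to_homogeneous(r, t)
    # Append a 1 to each point so the 4x4 matrix applies in one step.
    pts = np.column_stack((points_station2, np.ones(len(points_station2))))
    return (pts @ m.T)[:, :3]
```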

Automated inspections can be performed by a mobile measurement platform 200 that includes any combination of the end-effector devices described above. In an embodiment, the automated inspection is directed by centralized production-scheduling software, and the inspection instructions may be modified according to the type of object being inspected. In an embodiment, the inspection instructions are placed on a tag, which could be a near-field communication (NFC) tag, a radio-frequency identification (RFID) tag, a bar code, a QR code, or any other storage device that can conveniently convey the information. The tag is carried with the object to be tested and is read by the mobile measurement system. An inspection plan specifies where measurements will be taken and what types of data will be collected and analyzed. A 3D measuring device can evaluate many dimensional characteristics, including points, distances, lengths, and angles, and an inspection plan may list the specific measurements to be made and the characteristics to be evaluated. In an embodiment, a nominal value is provided for each measured dimensional characteristic; the error is the measured value minus the nominal value, and an alarm is given if the absolute value of the error exceeds a tolerance, the tolerance value being included in the inspection plan. In an embodiment, a CAD model is provided to one or more processors to guide the movement and measurements of the mobile platform; the CAD model contains dimensional information about the object that can be used to direct the platform's movement. Alternatively, the mobile platform may choose a path based on the object's dimensions as determined by 2D scanners or other sensors attached to the platform.
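
The error-and-alarm logic described for the inspection plan is simple enough to state directly; this sketch uses illustrative names and values:

```python
def check_dimension(measured, nominal, tolerance):
    """Evaluate one inspection-plan characteristic.

    Returns (error, alarm): the signed error (measured minus nominal)
    and whether the absolute error exceeds the stated tolerance.
    """
    error = measured - nominal
    return error, abs(error) > tolerance

# Example: a hole diameter of 10.08 mm against a 10.00 +/- 0.05 mm callout.
err, alarm = check_dimension(10.08, 10.00, 0.05)
print(f"error = {err:+.3f} mm, alarm = {alarm}")
```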

There are two types of navigation that a mobile platform can use. In the first type, the platform’s internal sensors guide its movement; this method typically provides a movement accuracy of one to five centimeters. The internal sensors include 2D scanner sensors that emit a horizontal plane of light near the floor; these scanners are useful for preventing collisions and can also assist in navigation. The angular encoders in the wheel assemblies may provide information that further assists in navigation, and heading sensors, IMUs, and 2D or 3D cameras mounted around the mobile platform may assist as well.

“In the second type of navigation, a six-DOF measuring device such as a laser tracker (or a camera bar, which may consist of two cameras separated and stabilized but not connected by an actual bar) is used to direct the movement of the mobile platform or the articulated arm. Depending on speed and other factors, such navigation can achieve an accuracy on the order of 100 micrometers.

The 3D coordinates of the outside and inside of an object, such as an automobile BiW 1202, are often difficult to measure in a single frame of reference. The coordinates of the exterior are often obtained using a global registration method, such as registration by a laser tracker or registration by a photogrammetry system with targets extending across the length of the object. The 3D coordinates of the interior of an object, however, are not usually globally measurable, since a laser tracker or similar device used in the manner shown in FIG. 13E generally cannot see into the object’s interior. The present invention aims to tie together 3D measurements taken in global and local frames of reference.

“FIG. 14A shows a 3D measuring device 1410 measuring the exterior of an object 1202 while being tracked in six DOF by a six-DOF measuring device 1420. Examples of possible six-DOF measuring devices include a six-DOF laser tracker such as the one shown in FIGS. 13A-13E and a camera bar such as the one shown in FIGS. 8A-8C.

“FIG. 14A shows the collection of 3D coordinates while the 3D measuring device 1410 is being moved. The movement may be made by a human operator who holds and moves the measuring device, or the 3D measuring device may be held by a mechanical device, such as the mobile robotic platform 200 illustrated in FIGS. 2A, 2C, and 12A.

“FIG. 14B is a close-up of the 3D measuring device 1410. It includes a triangulation scanner 210 that measures the 3D coordinates of an object as described herein. Attached to the triangulation scanner is a six-DOF measuring target 710, which can be used for six-DOF measurement with a six-DOF laser tracker. Also attached to the triangulation scanner 210 is a camera assembly 1850 or 1850B, which provides registration using one of the methods described above: (a) matching natural features, as illustrated in FIGS. 11A and 11D; (b) matching artificial targets (markers) placed on or near the object, as illustrated in FIG. 11B; or (c) matching projected spots of light or projected patterns, as illustrated in FIGS. 11C and 11F. The 3D measuring device 1410 can be configured in many other ways. For example, a two-axis inclinometer 1490, shown in FIG. 14B, may be added to any of the three devices 210, 1850, or 710.”

“In FIG. 14A, the six-DOF measuring device 1420 is a six-DOF laser tracker. The six-DOF tracker sends the beam of laser light 1422 to a retroreflector in the six-DOF target 710, and the return light is used by the laser tracker to measure the distance and two angles to the six-DOF target. Any of several additional methods, described above, may be used to determine the three orientational degrees of freedom of the six-DOF target. The triangulation scanner 210 determines the 3D coordinates of the object 1202. In other embodiments, the 3D coordinates are measured with a tactile probe, such as the probe 718 on the six-DOF target 710.

“In one embodiment, the six-DOF measuring device 1420 is mounted on a mobile platform 1220. In an embodiment, the mobile platform can be rolled on wheels 1222 by an operator to reach desired locations; in another embodiment, the mobile platform 1220 is moved under computer control via motorized wheels 1222.

“FIG. 14C shows the 3D measuring device 1410 measuring a portion of the object in a transition region between the exterior portion 1204 and the interior portion of the object 1202. To maintain global registration, the beam of light 1422 is intercepted by the retroreflector on the six-DOF probe. At the same time, the camera assembly 1850 determines the 3D coordinates of at least one cardinal point 1470 within the local frame of reference of the 3D measuring device 1410. As explained above, a cardinal point can be either a natural feature or a target marker. The term “interest point detection” is used for methods that identify cardinal points associated with natural features; the points so detected are called interest points. An interest point has a clear, mathematically well-founded definition, a well-defined position in space, an image structure around it that is rich in local information, and stability over time with respect to illumination level. A common example of an interest point is a corner point, for example a point corresponding to the intersection of three planes. Scale-invariant feature transform (SIFT) is another example of signal processing that can be used to find interest points; this method is well known in the art and is described in the patent literature. Edge detection, blob detection, and ridge detection are other common methods of detecting cardinal points. In FIG. 14C, the FOV of the camera assembly 1850 includes the cardinal point 1470, and the stereo cameras of the camera assembly 1850 can determine the 3D coordinates of the cardinal point 1470 in the frame of reference of the 3D measuring device 1410.
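
As a hedged example of interest point detection (the text names SIFT; this sketch assumes OpenCV's implementation, available as `cv2.SIFT_create` in recent opencv-python builds, and the function name is illustrative):

```python
import cv2

def detect_cardinal_candidates(image_path, max_points=50):
    """Detect interest points that could serve as cardinal points.

    Uses SIFT as the detector; points found this way are well-defined
    positions surrounded by rich local image structure, and the returned
    descriptors allow the same points to be matched across images.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create(nfeatures=max_points)
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    # Return (pixel position, detector response) pairs plus descriptors.
    return [(kp.pt, kp.response) for kp in keypoints], descriptors
```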

“FIG. 14D shows the 3D measuring device 1410 positioned outside the line of sight of the six-DOF measuring device 1420: an exterior portion 1204 of the object 1202 blocks the beam of light 1422. The 3D measuring device 1410 now relies on the coordinates of the first cardinal point 1470, determined earlier as shown in FIG. 14C, in ways described below in relation to FIGS. 14L-14R. Continued use of this method at other locations in the interior region requires at least a second cardinal point 1471.”

“Consider FIGS. 14L and 14M. FIG. 14L represents a measurement 6000 that includes a collection of 3D coordinates 6010 obtained with the triangulation scanner 210, which illuminates the object surface with patterned light. The object surface is illustrated with a feature 6012 whose 3D coordinates are part of the collection 6010. The measurement 6000 also includes at least one cardinal point 6014, which may be measured, for example, with a camera assembly 1850 having two cameras 1853A and 1853B, or with the single camera 1853C of a camera assembly 1850B. The 3D coordinates 6010 and the 2D or 3D coordinates of the cardinal point 6014 represent the situation of FIG. 14C, in which the cardinal point 1470 lies within the FOV of the camera assembly.

“In FIG. 14D, the 3D measuring device with camera assembly 1850 or 1850B has been translated and rotated relative to its position in FIG. 14C; FIG. 14M represents the measurement made in this second position. To place the 3D coordinates of FIGS. 14L and 14M in a common frame of reference, the translations dx, dy, and dz, indicated by the coordinate axes 6020, and the rotations dα, dβ, dγ, indicated by the coordinate axes 6030, must be determined. The rotation angles dα, dβ, dγ may, for example, be taken as pitch, roll, and yaw angles, although other parameterizations of rotation can also be used.
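
Under an assumed roll-about-x, pitch-about-y, yaw-about-z convention (the text notes other rotation parameterizations are possible), the six values combine into a single homogeneous transform between the two scanner poses; this sketch uses illustrative names:

```python
import numpy as np

def pose_increment(dx, dy, dz, d_pitch, d_roll, d_yaw):
    """Homogeneous transform for the translation (dx, dy, dz) and the
    rotations (d_pitch, d_roll, d_yaw) between the two scanner poses.

    Assumed convention: roll about x, pitch about y, yaw about z,
    composed as Rz @ Ry @ Rx.
    """
    cx, sx = np.cos(d_roll), np.sin(d_roll)
    cy, sy = np.cos(d_pitch), np.sin(d_pitch)
    cz, sz = np.cos(d_yaw), np.sin(d_yaw)
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    m = np.eye(4)
    m[:3, :3] = rz @ ry @ rx
    m[:3, 3] = (dx, dy, dz)
    return m
```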
