Patent Analysis of

SYSTEMS AND METHODS THAT IMPROVE ALIGNMENT OF A ROBOTIC ARM TO AN OBJECT

Updated: 15 March 2019

Patent Registration Data

Publication Number

WO2018182538A1

Application Number

PCT/SG2018/050167

Application Date

02 April 2018

Publication Date

04 October 2018

Current Assignee

AGENCY FOR SCIENCE, TECHNOLOGY AND RESEARCH

Original Assignee (Applicant)

AGENCY FOR SCIENCE, TECHNOLOGY AND RESEARCH

International Classification

B25J9/10, G06F19/00, G01S17/88

Cooperative Classification

G01S17/89, B25J9/10, G01S7/4808, G01S7/497, G01S17/50

Inventor

LI, JUN; TEE, KENG PENG; WAN, KONG WAH; CHEN, LAWRENCE TAI PENG; YAU, WEI YUN; LI, RENJUN; LAI, FON LIN

Patent Images

This patent contains figures and images illustrating the invention and its embodiment.


Abstract

Systems and methods that improve alignment of a robotic arm to an object or parts of the object. Sensors sense a 3D protrusion on the object and a salient regular surface to which the 3D protrusion attaches. A method infers, based on a known geometry of the object, a component of the object that is next to the 3D protrusion and not detectable with the sensors. The robotic arm moves, based on a location of the 3D protrusion and a location of the component, with respect to the component.


Claims

CLAIMS

What is claimed is:

1. A method executed by one or more electronic devices in a robotic system that improves alignment of a robotic arm to an object, the method comprising: detecting, with Lidar sensors, a salient regular surface on the object;

detecting, with the Lidar sensors, a three-dimensional (3D) protrusion attached to the salient regular surface on the object;

inferring, based on a known location of the 3D protrusion on the object, a component of the object adjacent to the 3D protrusion; and

aligning, based on the known location of the 3D protrusion and a known location of the component, the robotic arm with respect to the object.

2. The method of claim 1, wherein the 3D protrusion is detected on the object without prior knowledge of a shape of the 3D protrusion.

3. The method of claim 1 further comprising:

calibrating the 2D Lidar and the 3D Lidar to determine a correspondence between components monitored by the 2D Lidar and components monitored by the 3D Lidar;

monitoring the components of the object with 2D Lidar and 3D Lidar; and monitoring movement of surrounding objects with 2D Lidar and 3D Lidar.

4. The method of claim 1 further comprising:

monitoring movement of the object by building a statistical model for each scanned point of a point cloud determined from data sensed by the Lidar sensors; determining whether deviations from the statistical model are larger than a threshold;

grouping detected points with large deviations into pieces according to Euclidean distance; and

determining if each of the pieces is a component of the object or an external object.

5. The method of claim 1 further comprising:

detecting the 3D protrusion attached to the salient regular surface by:

segmenting the object from a background of the object; detecting the salient regular surface by RANSAC;

selecting points connected to the salient regular surface; and segmenting the points connected to the salient regular surface.

6. The method of claim 1, wherein the object is an automobile, the 3D protrusion is a side mirror on a door of the automobile, the component being inferred is a side window on the door of the automobile, and the salient regular surface is a surface adjacent the window to which the side mirror attaches.

7. The method of claim 1 further comprising:

detecting, with a 3D Lidar sensor, a car door window when the object is an automobile; and

monitoring, with a 2D Lidar sensor, movement of a door of the automobile.

8. The method of claim 1 further comprising:

generating a 2D point cloud of the object; and

monitoring movements of the object or other objects in the environment within a field of view of the Lidar sensors by observing deviations within the 2D point cloud.

9. A non-transitory computer-readable storage medium that one or more electronic devices in a robotic system execute to improve alignment of a robotic arm to an object, the method comprising:

sensing, with a three-dimensional (3D) sensor and a two-dimensional (2D) sensor, a 3D protrusion on the object and a salient regular surface to which the 3D protrusion attaches;

inferring, based on a known geometry of the object, a component of the object that is next to the 3D protrusion and that is not detectable with the 3D sensor; and moving, based on a location of the 3D protrusion and a location of the component, the robotic arm to deliver an object to the location of the component.

10. The non-transitory computer-readable storage medium of claim 9 in which the method further comprises:

calibrating the 3D sensor with the 2D sensor using a predetermined object by: obtaining a first set of image data of the object with the 3D sensor; obtaining a second set of image data of the object with the 2D sensor; generating a first set of coordinates of a point on the predetermined object with the first image data;

generating a second set of coordinates of the point on the

predetermined object with the second image data; and

matching the first set of coordinates with the second set of coordinates.

11. The non-transitory computer-readable storage medium of claim 9 in which the method further comprises:

determining a location of a side mirror on the object when the object is an automobile;

inferring, based on the location of the side mirror, a side door window that is adjacent to the side mirror without detecting the side door window; and

moving the robotic arm to the side door window to deliver an object to a person in the automobile.

12. The non-transitory computer-readable storage medium of claim 9 in which the method further comprises:

detecting changes to the object or other objects in the environment by:

receiving a point cloud from the 3D and 2D sensors;

estimating a mean distance μ and a standard deviation σ for each point in the point cloud; and

assigning a point as a foreground point when |d - μ| / σ > δ, where d is the distance of the point and δ is a positive threshold value;

grouping all foreground points into pieces according to Euclidean distance; and

determining whether the foreground pieces belong to the object or to other objects in the environment.

13. The non-transitory computer-readable storage medium of claim 9 in which the method further comprises:

monitoring components of the object with the 3D sensor and the 2D sensor; and

calibrating the 2D sensor and the 3D sensor based on a design pattern of a 3D cube with a triangular hole on each surface, wherein the 3D cube for the 3D sensor is detected from 3D point cloud by template matching, and the 3D cube for the 2D sensor is detected by measuring a length of a projected point cloud on each surface of the 3D cube and a length of broken parts of a projected line.

14. The non-transitory computer-readable storage medium of claim 9, wherein the 3D sensor and the 2D sensor are Lidar sensors positioned around the object, and the object is an automobile.

15. The non-transitory computer-readable storage medium of claim 9 in which the method further comprises:

segmenting protrusions detected on the object into protrusion clusters based on Euclidean distance constraints; and

inferring components that are occluded from the 3D sensor and from the 2D sensor based on the protrusion clusters and on known geometric properties of the object.

16. A robotic system comprising:

two-dimensional (2D) Lidar sensors positioned around an object;

three-dimensional (3D) Lidar sensors positioned around the object; a computer that includes a memory that stores instructions and a processor that executes the instructions to improve aligning of a robotic arm to the object by:

receiving data from the 2D Lidar sensors and the 3D Lidar sensors; determining, from the data, a salient regular surface on the object; determining, from the data, a three-dimensional (3D) protrusion attached to the salient regular surface on the object;

inferring, based on a known geometry of the object, a component of the object that is occluded and not detectable with the 3D Lidar sensors;

moving, based on a location of the 3D protrusion and a location of the component, the robotic arm to align the robotic arm with respect to the location of the component; and

monitoring, based on calibrated 2D Lidar data, movement of components of the object or other objects in the environment.

17. The robotic system of claim 16, wherein the object is an automobile, the 3D protrusion is a side view mirror of the automobile, and the component is a side door window of the automobile.

18. The robotic system of claim 16, wherein the processor further executes the instructions by:

calibrating the 2D Lidar sensors and the 3D Lidar sensors by estimating a first set of locations of a cube from a 2D Lidar coordinate system,

estimating a second set of locations of the cube from a 3D Lidar coordinate system, and matching the first set of locations with the second set of locations.

19. The robotic system of claim 16, wherein the processor further executes the instructions by:

generating a 2D point cloud of the object; and

monitoring dynamic changes to the object within a field of view of the 2D Lidar sensors and the 3D Lidar sensors by observing deviations within the 2D point cloud.

20. The robotic system of claim 16, wherein the regular salient surfaces are smooth surfaces that include planes, spheres, and cylinders.



Description

SYSTEMS AND METHODS THAT IMPROVE ALIGNMENT OF A

ROBOTIC ARM TO AN OBJECT

PRIORITY CLAIM

[0001] This application claims priority from Singapore Patent Application No. 10201702706P filed on 31 March 2017.

TECHNICAL FIELD

[0002] The present invention generally relates to object detection and robotic arm alignment, and more particularly relates to systems and methods that improve alignment of a robotic arm to an object.

BACKGROUND OF THE DISCLOSURE

[0003] Many robot-related applications require alignment of robot arms to objects or their specific components. Three-dimensional (3D) information of an object or its unique components is important for this purpose. Knowing 3D information of an object is also important for interaction between the robotic arm and objects.

[0004] Currently, there are a number of conventional methods to detect and align robot arms to objects or their components using various types of sensed data. These methods typically aim for specific objects, e.g. detection of door knobs for door opening and detection of car door windows to deliver food to in-vehicle passengers. These methods, however, are difficult to apply to other objects (e.g., door handle detection methods may not be suitable for car window detection). In addition, these methods require good observations of objects or their components. If objects or their components are occluded, unseen, or hard to detect, these methods will fail.

[0005] In most robotic applications, safety issues are given a high priority. Consider examples of a robotic arm that opens doors or one that delivers food or other objects through a car window to occupants. During a door-opening operation, the dynamic status of the door should be monitored at all times to avoid breaking or damaging the door with force. Likewise, when the robot arm delivers food to in-vehicle passengers, the status of the car door also needs to be monitored to avoid damaging the vehicle or harming the occupants.

[0006] If the object and parts of the object are not accurately detected, then damage can occur to the robotic arm, the object, or people or objects in the environment. These tasks are not trivial, and conventional systems and methods often fail in these endeavors.

[0007] Thus, what is needed are methods and systems that improve object and protrusion detection, improve alignment of robotic arms with the detected object or components of the object, and improve detection of changes to the object and its environment. Furthermore, other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and this background of the disclosure.

SUMMARY

[0008] One example embodiment in accordance with the present invention includes a method that aligns a robotic arm to an object or to one or more components associated with the object. The method includes detecting, with Lidar sensors, a salient regular surface on the object; detecting, with the Lidar sensors, a three-dimensional (3D) protrusion attached to the salient regular surface on the object; inferring, based on a known location of the 3D protrusion on the object, a component of the object adjacent to the 3D protrusion; and aligning, based on the known location of the 3D protrusion and a known location of the component, the robotic arm with respect to the object.

[0009] The 3D protrusion can be detected on the object without prior knowledge of a shape of the 3D protrusion.

[0010] The method includes calibrating the 2D Lidar and the 3D Lidar to determine the correspondence between the components monitored by the 2D Lidar and those monitored by the 3D Lidar.

[0011] The method further includes monitoring movement of the object by building a statistical model for each scanned point of a point cloud determined from data sensed by the Lidar sensors; determining whether deviations from the statistical model are larger than a threshold; grouping the points with large deviations into pieces; and determining whether these pieces result from movement of the object.

[0012] The method further includes detecting the 3D protrusion attached to the salient regular surface by: segmenting the object from a background of the object; detecting the salient regular surface by RANSAC; selecting points connected to the salient regular surface; and segmenting the points connected to the salient regular surface.

[0013] By way of example, the object is an automobile, the 3D protrusion is a side mirror on a door of the automobile, the component being inferred is a side window on the door of the automobile, and the salient regular surface is a surface adjacent the window to which the side mirror attaches.

[0014] The method also includes detecting, with a 3D Lidar sensor, a car door window when the object is an automobile; and monitoring, with a 2D Lidar sensor, movement of the automobile.

[0015] The method further includes generating a 2D point cloud of the object and/or other objects in the environment; monitoring movements of the object and/or other objects in the environment within a field of view of the Lidar sensors by observing deviations within the 2D point cloud; grouping the points with large deviations into pieces; and detecting changes in the status of the object and/or other objects in the environment based on these pieces.

[0016] One example embodiment is a non-transitory computer-readable storage medium that one or more electronic devices in a robotic system execute to improve alignment of a robotic arm to an object. The method includes sensing, with a three-dimensional (3D) sensor and a two-dimensional (2D) sensor, a 3D protrusion on the object and a salient regular surface to which the 3D protrusion attaches; inferring, based on a known geometry of the object, a component of the object that is next to the 3D protrusion and that is not detectable with the 3D sensor and the 2D sensor; moving, based on a location of the 3D protrusion and a location of the component, the robotic arm to deliver an object to the location of the component; and monitoring, based on calibrated 2D and 3D Lidar, the movement of components of the object and the movement of other objects in the environment.

[0017] The method per the storage medium includes calibrating the 3D sensor with the 2D sensor using a predetermined object by: obtaining first image data of the object with the 3D sensor; obtaining second image data of the object with the 2D sensor; generating a first coordinate of a point on the predetermined object with the first image data; generating a second coordinate of the point on the predetermined object with the second image data; and matching the first coordinate with the second coordinate.

[0018] The method per the storage medium includes determining a location of a side mirror on the object when the object is an automobile; inferring, based on the location of the side mirror, a side door window that is adjacent to the side mirror without detecting the side door window; moving the robotic arm to the side door window to deliver an object to a person in the automobile; and monitoring movement of the object and movement of other surrounding objects.

[0019] The method per the storage medium includes detecting changes to the object and/or other objects in the environment by: receiving a point cloud from the 3D and 2D sensors; estimating a mean distance μ and a standard deviation σ for each point in the point cloud; assigning a point as a foreground point when |d - μ| / σ > δ, where d is the distance of the point and δ is a positive threshold value; grouping foreground points into pieces; and detecting change in status of the object and/or other objects in the environment based on these pieces.

[0020] The method per the storage medium includes calibrating the 2D sensor and the 3D sensor based on a design pattern of a 3D cube with a triangular hole on each surface, wherein the 3D cube is detected from the 3D sensor by template matching on the 3D point cloud, and from the 2D sensor by measuring a length of a projected point cloud on each surface of the 3D cube and a length of broken parts of a projected line.

[0021] The 3D sensor and the 2D sensor are Lidar sensors positioned around the object, such as an automobile or other vehicle.

[0022] The method per the storage medium includes segmenting protrusions detected on the object into protrusion clusters based on Euclidean distance constraints; and inferring components that are occluded from the 3D sensor and from the 2D sensor based on the protrusion clusters and on known geometric properties of the object.

[0023] One example embodiment is a robotic system. This robotic system includes two-dimensional (2D) Lidar sensors positioned around an object; three-dimensional (3D) Lidar sensors positioned around the object; and a computer that includes a memory that stores instructions and a processor. The processor executes the instructions to improve aligning of a robotic arm to the object by: receiving data from the 2D Lidar sensors and the 3D Lidar sensors; determining, from the data, a salient regular surface on the object; determining, from the data, a three-dimensional (3D) protrusion attached to the salient regular surface on the object; inferring, based on a known geometry of the object, a component of the object that is occluded and not detectable with the 2D Lidar sensors and the 3D Lidar sensors; moving, based on a location of the 3D protrusion and a location of the component, the robotic arm to align the robotic arm with respect to the location of the component; and monitoring, based on calibrated 2D and 3D Lidar, the movement of the object and/or other objects in the environment.

[0024] In the example embodiment of the robotic system, the object is an automobile; the 3D protrusion is a side view mirror of the automobile; and the component is a side door window of the automobile.

[0025] In the example embodiment of the robotic system, the processor further executes the instructions by: calibrating the 2D Lidar sensors and the 3D Lidar sensors by estimating a first set of locations of a cube from a 2D Lidar coordinate system, estimating a second set of locations of the cube from a 3D Lidar coordinate system, and matching the first set of locations with the second set of locations.

[0026] In the example embodiment of the robotic system, the processor further executes the instructions by: generating a 2D point cloud of the object; and

monitoring dynamic changes to the object within a field of view of the 2D Lidar sensors and the 3D Lidar sensors by observing deviations within the 2D point cloud.

[0027] In the example embodiment of the robotic system, the regular salient surfaces are smooth surfaces that include planes, spheres, and cylinders.

[0028] Other example embodiments in accordance with the invention are disclosed herein.

BRIEF DESCRIPTION OF THE DRAWINGS

[0029] The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views and which together with the detailed description below are incorporated in and form part of the specification, serve to illustrate various embodiments and to explain various principles and advantages in accordance with present embodiments.

[0030] Figure 1 depicts a computer system or robotic system in accordance with an example embodiment of the present invention.

[0031] Figure 2 is a schematic diagram of a subsystem of a computer system or robotic system comprising Lidar sensor calibration, protrusion detection, object component inference, and object/environment change detection, in accordance with an example embodiment of the present invention.

[0032] Figure 3 is a method to calibrate a plurality of three-dimensional (3D) and two-dimensional (2D) Lidar sensors with designed patterns in accordance with an example embodiment of the present invention.

[0033] Figure 4 is a design pattern to illustrate calibration between 3D Lidar and 2D Lidar in accordance with an example embodiment of the present invention.

[0034] Figure 5 is a method to calibrate patterns for 2D Lidar and 3D Lidar for a cube in accordance with an example embodiment of the present invention.

[0035] Figure 6 is a method to align a robotic arm to a component of an object by detecting protrusions attached to salient regular surfaces of the object without prior knowledge of shapes of protrusions or full model data, and extrapolating the position of the component of the object from the detected protrusion, in accordance with an example embodiment of the present invention.

[0036] Figure 7 is a method to detect changes of a status of an object or an environment of the object from a sequence of sensor data in accordance with an example embodiment of the present invention.

[0037] Figure 8 shows an example of inferring protrusions in an automobile in accordance with an example embodiment of the present invention.

[0038] Figure 9 shows a robotic system in accordance with an example embodiment of the present invention.

[0039] Figure 10A shows an automobile with an example 2D Lidar configuration in accordance with an example embodiment of the present invention.

[0040] Figure 10B shows the automobile with an example 3D Lidar configuration in accordance with an example embodiment of the present invention.

[0041] Figure 11 shows detecting a proximity of a vehicle and determining whether the robotic arm is within a predefined safety limit in accordance with an example embodiment of the present invention.

[0042] Figure 12 shows a portion of a robotic system illustrating shared control between a robotic arm and an arm of a human in accordance with an example embodiment of the present invention.

[0043] Figure 13 is a method to optimize placement of the robotic arm in accordance with example embodiment of the present invention.

[0044] Figure 14 shows a robotic system checking for reachability of a robotic arm to an automobile based on grid spacing in accordance with an example embodiment of the present invention.

[0045] Figure 15 shows a system workflow of multiple robotic arms loading objects to passengers in an automobile in accordance with an example embodiment of the present invention.

[0046] Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been depicted to scale.

DETAILED DESCRIPTION

[0047] The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any theory presented in the preceding background of the invention or the following detailed description. In comparison to conventional systems and methods, example embodiments present unique systems and methods that improve object and protrusion detection, improve alignment of robotic arms with the detected object or components of the object, and improve detection of changes to the object and its environment.

[0048] Example embodiments include methods and systems that detect 3D protrusions attached to salient regular surfaces of objects in order to locate objects, to locate components on the objects, and to align one or more robotic arms with the objects and/or components. In contrast to many conventional techniques, the methods and systems do not rely on any known template. Furthermore, example embodiments represent an improvement over conventional techniques since components of the object are inferred or extrapolated based on sensing and determining locations of protrusions and salient regular surfaces to which the protrusions are attached. Advantageously, these components can be determined even when they are occluded or not otherwise detectable with the sensors in the robotic system. In this way, the robotic system can quickly ascertain components of the object even though such components are not visible, or not currently visible, to the sensors. Furthermore, it is not necessary to sense data from an entirety of the object in order to ascertain its components, and this expedites processing of data and reduces the amount of data needed to move and align robotic arms in the system. Example embodiments also lessen the likelihood of the robotic arm unintentionally contacting and thus damaging the object or the robotic arm itself, since occluded components are inferred even though they are not detectable.

[0049] Example embodiments include methods and systems that use a two-dimensional (2D) point cloud to monitor dynamic changes in real time within a field of view or observable area of the sensors. These systems and methods use both the 2D point cloud and a three-dimensional (3D) point cloud to detect and differentiate internal changes and external changes to the object once the 3D point cloud and the 2D point cloud are calibrated.

[0050] Figure 1 depicts a computer system or robotic system 100 in accordance with an example embodiment of the present invention.

[0051] System 100 includes a computer 110, one or more sensors 120, one or more robotic arms 130, and one or more objects 140.

[0052] The computer 110 includes a processing unit 160, memory 162, display 164, Lidar sensor calibration 166, regular surface and protrusion detection 168, object component inference 170, object and/or environment change detection 172, and a robotic arm controller 174.

[0053] By way of example, the computer 110 includes one or more of a server, desktop computer, laptop computer, or other electronic device for executing example embodiments.

[0054] The processing unit 160 includes one or more processors (such as a central processing unit (CPU), microprocessor, application-specific integrated circuit (ASIC), etc.) for controlling the overall operation of memory 162 (such as random access memory (RAM) for temporary data storage, read only memory (ROM) for permanent data storage, and firmware). The processing unit 160 communicates with memory 162 and performs operations and tasks that implement one or more blocks of the flow diagrams discussed herein. The memory, for example, stores applications, data, programs, algorithms (including software to implement or assist in implementing example embodiments), and other data.

[0055] Lidar sensor calibration 166 includes software and/or hardware that calibrate a plurality of 3D Lidar sensors with a plurality of 2D Lidar sensors with designed patterns. This calibration enables the computer 110 to monitor internal changes that occur to the object 140 being monitored and external changes that occur in an environment of the object. By way of example, Lidar sensor calibration executes per the method shown in example embodiments discussed herein.

[0056] Regular surface and protrusion detection 168 includes software and/or hardware that detect regular surfaces (or salient regular surfaces) and protrusions that extend from or attach to these surfaces. By way of example, regular surface and protrusion detection executes per the method shown in example embodiments discussed herein.

[0057] Salient regular surfaces are quite common for man-made objects. The regular surface and protrusion detection 168 detects these surfaces with one or more algorithms, such as an iterative method executing random sample consensus (RANSAC).

[0058] In contrast to salient regular surfaces, protrusions can be very different in terms of relative locations and appearance. As such, protrusions often have unique or distinguishable features of objects. Protrusions can be hard to detect directly due to their irregular shapes and appearance. The regular surface and protrusion detection 168, however, facilitates this detection by first detecting the regular surface to which the protrusion is attached and then segmenting the protrusion.

[0059] Further, unlike conventional techniques, example embodiments do not require full observation of the object in order to detect the salient regular surfaces and the protrusions attached thereto. Instead, example embodiments can detect the protrusion from a partial view of the object.

[0060] Object component inference 170 includes software and/or hardware that execute to detect components or parts of the object (including protrusions) that are hidden or otherwise not detectable with the sensors. Based on information known about the object being sensed, the object component inference 170 infers the identity and location of the hidden component or part of the object. By way of example, the object component inference 170 executes per the method shown in example embodiments discussed herein.

[0061] In an example embodiment, protrusions provide unique features that are used to infer other components of the object when the geometric structure of the object is known to the system. Therefore, if the targeted components of objects are occluded or hard to detect, the object component inference 170 is still able to infer the position of the targeted components or parts of the object 140.

[0062] The object and/or environment change detection 172 includes software and/or hardware that execute to detect changes to one or more of the object 140, an environment in which the object is located, or other objects in the environment. By way of example, the object and/or environment change detection 172 executes per the method shown in example embodiments discussed herein.

[0063] The object and/or environment change detection 172 monitors dynamic changes that include internal changes to the object (e.g., movements of the object or components of the object) and external changes away from the object (e.g., movements of objects that exist in the environment of the object). These changes are determined from calibration of 2D and 3D data sensed from the field of view of one or more sensors in the system.

[0064] The robotic arm controller 174 includes software and/or hardware that executes to control movement and alignment of the robotic arm 130 that includes a proximity sensor 132 (e.g., mounted at an end effector of the arm). The robotic arm controller (e.g., a microcontroller) executes a set of instructions of one or more programs to perform various functions, such as grasping and releasing objects, moving objects, assembly operations, and handling machine tools.

[0065] By way of example, the controller includes a programmable logic controller (PLC) with an input and output interface controlled by a program for automation (e.g., in industrial, commercial, or consumer applications). The input and output of the PLC are programmable in different forms and stored in memory, such as ladder diagrams, structural text, and functional block diagrams. PLCs generally operate in real time systems in which output (e.g., manipulation of the robotic arm) is produced in response to input conditions (e.g., data from the sensors) within a limited time in order to avoid unintended operations.

[0066] The robotic arm 130 is a programmable mechanical arm often with an end effector. The robotic arm, for example, includes one or more of a shoulder, elbow, wrist (or other joint) movement to position the end effector in a correct location to execute a task. Typically, the end effector mechanically connects to the arm via one or more joints to function as a hand and give the robotic arm various degrees of freedom. An engine, motor, or drive moves the links or sections between joints to various positions via hydraulic, electric, or pneumatic drive. These actuators are controlled by the robotic arm controller 174. Examples of a robotic arm include, but are not limited to, a Cartesian robot, cylindrical robot, and articulated robot.

[0067] Sensors 120 enable the computer 110 to receive feedback about the object 140, protrusions 150 on the object, the environment of the object 140, and other objects 152 in the environment of the object 140. The sensors collect and send information to the computer 110 and the robotic arm controller 174 to execute the various tasks discussed herein.

[0068] One or more different sensors 120 can be used for object detection and alignment for robotic systems. These robotic systems can be separated into three categories based on the type of sensor: camera based robotic systems, range sensor based robotic systems, and RGB-D sensor based robotic systems.

[0069] The robotic arm 130 includes one or more sensors 132.

[0070] Besides Lidar sensors, example embodiments include other sensors. For example, the robotic arm 130 includes one or more sensors 132, such as a proximity sensor for sensing proximity of the end effector of the robotic arm to the vehicle. This proximity sensor can be a tilted proximity sensor mounted on the end effector. In the event of impending contact with the vehicle, where close proximity is detected, emergency stop of the robot will be activated.

[0071] Generally speaking, conventional camera based systems that utilize a single camera lack information pertaining to depth. These systems are suitable in situations where information pertaining to depth has less importance, such as object tracking. Systems that utilize multiple cameras are able to provide depth information but with high computation cost. In addition, rich texture must exist in order to reconstruct 3D information. Further, the illumination conditions heavily affect the performance. Further yet, both single camera based methods and multiple camera based methods face the additional difficulty that the appearance of the same type of object may change significantly.

[0072] Range sensor based systems use shape and structure information from sparse point cloud. One possible way to detect and align the objects is to register the point cloud to a known template by iterative closest point (ICP) or template matching. Another way to detect and align the objects is to detect geometric features of objects without any prior known templates.

[0073] RGB-D sensor based systems are good for indoor applications due to their capability to obtain both rich texture information and 3D shape information. For outdoor applications, the capability of these sensors decreases dramatically. Hence, these systems have limited or restricted usability.

[0074] Example embodiments provide an improved robotic system with the use of sensors 120 in combination with other elements. By way of example, these sensors include one or more 2D Lidar sensors and 3D Lidar sensors.

[0075] Lidar sensors illuminate the object with pulsed laser light and then detect reflected light from the object with a sensor. The system constructs a digital representation, image, or map of the object based on the return times of the light pulses from the object and wavelengths of the detected light pulses.

[0076] Lidar sensors are active sensors in that they emit light or energy from their own power source, as opposed to passive sensors that detect light or energy naturally emitted from the object. A laser transmits a light pulse, and a receiver (sensor) detects the backscattered or reflected light from the object. The distance to the object is calculated based on the speed of light and the time between transmitting the pulse and receiving the return pulse.

[0077] Output from the Lidar sensors, such as a point cloud, provides data for the computer 110 to determine a location of the robotic arm 130 with respect to the object 140 and locations and structural identities of the object 140, protrusions 150, and other objects 152 in an environment of the object 140.

[0078] Example embodiments can use various methods to process Lidar data to determine identities and locations of the objects, protrusions, and other components. These methods include retrieving and processing data from other types of sensors through sensor fusion. By way of example, these methods include, but are not limited to, fusing data from lidar sensors with data from cameras, radar systems, and other types of active and passive sensors.

[0079] Figure 2 is a schematic diagram of a computer system or robotic system 200 in accordance with an example embodiment of the present invention. Hardware and software (including algorithms) for components are discussed in more detail in the accompanying figures.

[0080] The system 200 includes 3D Lidar 202 and 2D Lidar 204. Data or output from 3D Lidar 202 couples to background modeling 206 and object segmentation from the background 208. Data or output from 2D Lidar 204 couples to 2D object modeling 210. Calibration of the 2D and 3D Lidar occurs at 2D to 3D calibration 212. Data or output from the 2D Lidar 204, 2D object modeling 210, and object segmentation from background 208 couple to change detection 214, which detects changes to the object itself and/or to objects in the environment.

[0081] As shown in FIG. 2, the system 200 detects regular surfaces or salient regular surfaces 222 based on object segmentation from background 208, and detects protrusions on surfaces of the object 224 based on object segmentation from background 208 and regular surfaces or salient regular surfaces 222.

[0082] The system 200 also extrapolates positions of components of objects 232 (including components that are hidden or otherwise not detectable), and detects the object's orientation 234 to help align the robotic arm to the object.

[0083] Data or output from the 3D Lidar 202 and 2D Lidar 204 enable the system 200 to compute internal change pertaining to the object itself 240 and/or external change pertaining to the environment of the object 242.

[0084] Figure 3 is a method to calibrate a plurality of three-dimensional (3D) and two-dimensional (2D) Lidar sensors with designed patterns in accordance with an example embodiment of the present invention.

[0085] Block 300 states obtain, with one or more 3D sensors, a first set of image data from an object. For example, the first set of image data is obtained with a 3D Lidar sensor or other sensor.

[0086] Block 310 states obtain, with one or more 2D sensors, a second set of image data from the object. For example, the second set of image data is obtained with a 2D Lidar sensor or other sensor.

[0087] Block 320 states generate a first set of coordinates of a point on the object using the first set of image data and a second set of coordinates of the point using the second set of image data. For example, a set of coordinates of a corner point of a cube object is generated from the first set of image data obtained with a 3D Lidar sensor based on template matching with the help of the known size of the designed pattern; a set of coordinates of the same corner point of the cube object is generated from the second set of image data obtained with a 2D Lidar sensor based on the 2D sensor points on the object with the help of the known size of the designed pattern.

[0088] Block 330 states calibrate the 3D Lidar and 2D Lidar by matching the first set of coordinates of the point with the second set of coordinates of the point. For example, the translation and rotation parameters between the 2D Lidar and 3D Lidar coordinate systems can be estimated from a set of paired cube corner points acquired from the 2D and 3D Lidar sensors.

[0089] The purpose of calibration between 3D Lidar and 2D Lidar is to find the correspondences between the image data from the 3D Lidar and the image data from the 2D Lidar. In this way, the system is able to register data from both the 3D Lidar and the 2D Lidar in a common frame of reference for detecting features and making inferences on the object.

[0090] For safety concerns, monitoring the dynamic status of the object requires real time performance. Such monitoring prevents, for example, the robotic arm from unintentionally hitting and thus damaging the object or the robotic arm itself. The 2D Lidar is more suitable for such tasks than the 3D Lidar.

[0091] Figure 4 is a design pattern 400 to illustrate calibration between 3D Lidar and 2D Lidar in accordance with an example embodiment of the present invention.

[0092] By way of example, the designed pattern 400 is shown as a 3D cube with a triangular hole 410 on each surface with known dimensions and location. This 3D cube can be detected from the 3D point cloud by template matching since the shape is known.

[0093] As for detecting the cube from the 2D Lidar, an example embodiment measures the length of the projected point cloud on each surface and the length of the broken parts of the projected lines. According to the relationship between the broken lines 420 on two surfaces, the location of a corner of the cube can be determined.

[0094] Figure 5 is a method to calibrate patterns for 2D Lidar and 3D Lidar for a cube in accordance with an example embodiment of the present invention.

[0095] Block 500 states estimate, from a 2D Lidar coordinate system, a location of a point on the cube Ai = [Axi, Ayi, Azi]^T by the projections of a 2D line on a surface of the cube, where the superscript T denotes the transpose operator on a vector.

[0096] Block 510 states obtain, from a 3D Lidar coordinate system, the following: (1) a point cloud of the cube, and (2) an estimated location of the same point on the cube Bi = [Bxi, Byi, Bzi]^T by matching the point cloud to a standard cube point cloud.

[0097] Block 520 states repeat blocks 500 and 510 N times to obtain N correspondences.

[0098] Block 530 states find the solution for R and t in the equation B = R*A + t. In this equation, B = [B1, B2, ..., BN] is a matrix aggregating the locations of the N cube points in the 3D Lidar coordinate frame, R is a matrix representing the rotation from the 2D Lidar coordinate frame to the 3D Lidar coordinate frame, A = [A1, A2, ..., AN] is a matrix aggregating the locations of the N cube points in the 2D Lidar coordinate frame, and t is a vector from the origin of the 3D Lidar coordinate frame to the origin of the 2D Lidar coordinate frame, expressed in the 3D Lidar coordinate frame.
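A minimal sketch of one way to solve B = R*A + t from the N paired corner locations is shown below. It uses the standard SVD-based least-squares fit for a rigid transform; the function name, the NumPy implementation, and the handling of reflections are illustrative assumptions rather than details taken from the patent.

import numpy as np

def estimate_rigid_transform(A, B):
    # A: 3xN corner locations in the 2D Lidar frame (the Ai points).
    # B: 3xN corner locations in the 3D Lidar frame (the Bi points).
    # Returns R (3x3) and t (3x1) such that B ~ R @ A + t.
    cA = A.mean(axis=1, keepdims=True)            # centroid of the 2D-Lidar points
    cB = B.mean(axis=1, keepdims=True)            # centroid of the 3D-Lidar points
    H = (A - cA) @ (B - cB).T                     # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    D = np.diag([1.0, 1.0, sign])
    R = Vt.T @ D @ U.T
    t = cB - R @ cA
    return R, t

Once R and t are known, any point sensed by the 2D Lidar can be expressed in the 3D Lidar frame, which provides the common frame of reference needed to relate components monitored by the two sensors.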

[0099] Figure 6 is a method to align an object by detecting protrusions attached to salient regular surfaces of the object without prior knowledge of shapes of protrusions or full model data in accordance with an example embodiment of the present invention.

[00100] Block 600 states identify, from image data from the object, information associated with at least one surface to which the object is attached. This step can include segmenting an object from a background.

[00101] Block 610 states detect a salient regular surface of the object using information of the at least one identified surface and the image data of the object.

[00102] Block 620 states select points connected to the salient regular surfaces and segment into candidate protrusion clusters based on Euclidean distance constraints.

[00103] Block 630 states for each candidate cluster, classify as a protrusion if the cluster size is above a threshold and, optionally, if the cluster lies within a known expected area for the protrusion on the salient regular surface.

[00104] Block 640 states align robotic arm to a component of the object whose position is extrapolated from the position of a detected protrusion based on known geometrical relationship between the component of the object and the protrusion.
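By way of illustration only, the sketch below mirrors blocks 610 through 630 with plain NumPy: a RANSAC plane fit stands in for detection of the salient regular surface, and a greedy Euclidean grouping stands in for the protrusion clustering. The thresholds, helper names, and use of NumPy are assumptions made for the sketch, not values prescribed by the patent.

import numpy as np

def ransac_plane(points, n_iters=500, dist_thresh=0.02, seed=0):
    # Fit the dominant plane (the salient regular surface) to an Nx3 point array.
    # Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    rng = np.random.default_rng(seed)
    best_mask, best_plane = None, None
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(n) < 1e-9:
            continue                                   # degenerate sample, try again
        n = n / np.linalg.norm(n)
        d = -n @ p1
        mask = np.abs(points @ n + d) < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (n, d)
    return best_plane[0], best_plane[1], best_mask

def euclidean_clusters(points, eps=0.05, min_size=30):
    # Group points into pieces whose neighbours lie within eps of each other
    # (blocks 620 and 630); clusters smaller than min_size are discarded.
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed_idx = unvisited.pop()
        cluster, frontier = [seed_idx], [seed_idx]
        while frontier:
            idx = frontier.pop()
            remaining = list(unvisited)
            dists = np.linalg.norm(points[remaining] - points[idx], axis=1)
            near = [j for j, dj in zip(remaining, dists) if dj < eps]
            unvisited.difference_update(near)
            cluster.extend(near)
            frontier.extend(near)
        if len(cluster) >= min_size:
            clusters.append(np.asarray(cluster))
    return clusters

Points that lie just off the fitted plane on its outward side would be the candidates passed to euclidean_clusters; any sufficiently large cluster is then treated as a protrusion such as a side mirror.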

[00105] This method identifies the location and orientation of an object by detecting its protrusions and identifying the salient regular surfaces to which the protrusions are attached. The protrusions can be unique features of the object that are used to infer the rest of the object.

[00106] The regular salient surfaces are defined as smooth surfaces that are large compared to the object. Examples of regular salient surfaces include, but are not limited to, planes, spheres, and cylinders. Advantageously, the detection of protrusions does not require full observation of the object. Hence, the position of the object can be identified through protrusion detection even though a portion of the object is occluded or undetectable. The location of the remaining parts of the object can be inferred by extrapolating from the detected protrusions.

[00107] Protrusions are generally quite unique for an object. If the geometric information among components of the object is available or known, an example embodiment extrapolates and infers other parts of the object based on the protrusions and their attached surfaces.

[00108] Such inferences are quite useful in cases where components of the object are unseen, occluded, or hard to detect. For example, in an automated security clearance system, car door windows need to be detected in order to deliver biometric devices or other items to passengers located in the vehicle. Car door windows are not easy to detect from a point cloud since they can disappear behind car door window blinds. This detection of windows is also complicated when passengers or their arms extend from the windows. On the other hand, car side mirrors are considered stable and unique features of the car. The relative positions of car door windows and car side mirrors are fixed. So there is an alternative and easier way to locate car door windows: detect the car side mirrors and estimate the positions of the car door windows according to their geometric relationship.

[00109] Figure 7 is a method to detect changes of a status of an object or an environment of the object from a sequence of sensor data in accordance with an example embodiment of the present invention.

[00110] By way of example, this method includes detecting movement of components of the object or objects in the environment. For example, the method detects when a car door opens and stops movement of the robotic arm to avoid a collision between the car door and the robotic arm.

[00111] Block 700 states estimate a mean distance and standard deviation for each point in the first N images of the object.

[00112] Block 710 states check connectivity of each individual point by Euclidean distance constraints.

[00113] Block 720 states select the largest set of connected points with their mean distances and standard deviations as the target object template for change detection.

[00114] For each point in the following frames, the method detects change by calculating the difference between its value and the corresponding point in the statistical model. If the absolute difference exceeds a threshold, a change is detected; otherwise, no change is detected. The method groups all detected changed points into pieces by their connectivity. Any piece overlapping the target object indicates an internal change; otherwise, the change is deemed external.

[00115] Consider an example embodiment of a method that detects motion between an object template and other scanned data. This method includes generating the object template by establishing a statistical model from a series of scanned data from the object. The method checks connectivity of each individual point by Euclidean distance constraints and selects the longest and/or closest group as the target object. The method then performs point-level change detection on newly scanned data: if the distance between corresponding points on the object template and on the scanned data exceeds a predetermined threshold, a change is detected; otherwise, no change is detected. All detected changed points are grouped into pieces by their connectivity, and any piece overlapping the target object indicates an internal change; otherwise the change is external.

[00116] An example embodiment identifies dynamic changes of the object within the field of view of one or more sensors by observing large deviations within a computer-generated 2D point cloud (dot image) of the object. By way of example, an example embodiment builds a Gaussian model for each scanned point. A change is detected when a large deviation from that point's Gaussian model is observed.

[00117] Block 730 states, for each point in the following frame, if the absolute value of the distance minus the mean distance, divided by the standard deviation, is greater than a threshold, then assign the point as foreground; else assign it as background.

[00118] Block 740 states group the foreground points as the candidate change region. If the detected region has enough length, then:

(1) an internal change is detected if the change region belongs to the object;

(2) obstacles are detected if the change region does not belong to the object. Otherwise, no internal change or obstacles are detected.
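The block sequence above (700 through 740) amounts to a per-point Gaussian model followed by thresholding and Euclidean grouping. The following is a minimal sketch of that pipeline, assuming each 2D scan is a fixed-length array of range readings; the function names, threshold, grouping gap, and minimum region length are illustrative values rather than parameters taken from the patent.

```python
# Sketch of the per-point change detection described in blocks 700-740.
import numpy as np

def build_template(first_n_scans):
    """Blocks 700-720: per-point Gaussian model from the first N range images.

    first_n_scans: array of shape (N, P) holding a range reading for each of
    the P scan points in each of the N frames.
    """
    mean = first_n_scans.mean(axis=0)          # per-point mean distance
    std = first_n_scans.std(axis=0) + 1e-6     # per-point standard deviation
    return mean, std

def detect_changes(frame, mean, std, angles, z_thresh=3.0, gap=0.15, min_len=5):
    """Blocks 730-740: flag foreground points and group them into pieces."""
    # Block 730: |d - mean| / std > threshold  ->  foreground
    foreground = np.abs(frame - mean) / std > z_thresh

    # Convert the polar scan to Cartesian so Euclidean connectivity can be checked.
    xy = np.stack([frame * np.cos(angles), frame * np.sin(angles)], axis=1)

    # Block 740: group connected foreground points into candidate change regions.
    pieces, current = [], []
    for i in np.flatnonzero(foreground):
        if current and np.linalg.norm(xy[i] - xy[current[-1]]) > gap:
            pieces.append(current)
            current = []
        current.append(i)
    if current:
        pieces.append(current)

    # Keep only regions long enough to be a meaningful internal change or obstacle.
    return [p for p in pieces if len(p) >= min_len]
```

Whether a surviving region is an internal change or an obstacle then depends on whether it overlaps the stored target object template, as described above.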

[00119] Example embodiments include inferring parts of an object by extrapolating detected protrusions. The method includes inferring occluded components of the object based on the protrusion clusters and predetermined geometric properties of the object.

[00120] Detecting a car window with a computer system or robotic system is not an easy task due to heavy reflections of light from the side of the car and the windows of the car, due to the potential presence of window blinds in the windows, due to occlusion by the head of the driver, and due to other factors. Example embodiments, however, are able to successfully detect a car window in spite of these difficulties.

[00121] By way of example, an example embodiment executes a perception system of automated security clearance (PSASC). Here, the system calculates coordinates for windows of the car door based on the location of the 3D protrusion (e.g., a side view mirror on the door) and the inferred or extrapolated component (e.g., the window of the door where the side view mirror is located). The coordinates are sent to the robot arm(s) so they can deliver an object or item (e.g., a biometric device) to occupants or passengers in the car.

[00122] Figure 8 shows an example of inferring protrusions in an automobile in accordance with an example embodiment of the present invention.

[00123] Figure 8 shows two different view angles of an automobile 800 in which a window 810 in the car is inferred after locating a protrusion (a side mirror 820) that appears on a regular salient surface (car side body plane 830). The position and location of the window 810 is extrapolated after determining the position and location of the side mirror 820. This extrapolation and inference of the window location is possible since the structure of the car is already known (i.e., a side mirror is located on a perimeter of a front window of an automobile).

[00124] Consider an example in which the robotic system includes a plurality of Lidar sensors placed around an automobile, such as 2D Lidar and 3D Lidar sensing above the automobile and on each side of the automobile. For instance, one 3D Lidar and one 2D Lidar are positioned on each side of the automobile. The 3D Lidar detects the car windows, and the 2D Lidar monitors movements or dynamic changes to the automobile. These sensors are calibrated in advance in accordance with an example embodiment.

[00125] The sensors detect the side view mirror. The robotic system infers the location of the front window from the side view mirror, and uses coordinate or location information of the window to deliver objects to occupants in the automobile via a robotic arm. The computer in the robotic system provides coordinates of the window to the robot arm so the robot arm can automatically deliver objects to the people in the automobile without hitting the automobile or the occupants.

[00126] Instead of direct detection of car door windows, the robotic system first detects one or more car side mirrors as the protrusion of the car side body. Since the relative positions of car side mirrors and car door windows are fixed, the robotic system uses the position of the car side mirror and its attached car side body plane to infer the positions of car door windows. For example, it is known that the car side mirror is located next to or on a perimeter of the front window of the car. Further, if the make and model of the car is known, then the position and coordinate locations of the window can be precisely determined since these objects are fixed on the body of the car.

[00127] Figure 9 shows a robotic system 900 in accordance with an example embodiment of the present invention. The robotic system 900 includes a perception system 910 and an action system 920 for an automobile.

[00128] The perception system 910 includes 2D Lidars 930 that provide obstacle and door opening detection 932, 3D Lidars 934 that provide window localization 936, and proximity sensors 938 that provide vehicle proximity detection 940.

[00129] The action system 920 includes collision avoidance 950, window reaching 952, shared control 954, proximity-triggered emergency stop 956, and one or more robots or robotic arms 958.

[00130] The perception system 910 includes 2D Lidar sensors 930 and 3D Lidar sensors 934 with fields of view directed towards the vehicle. At least one 2D Lidar sensor and one 3D Lidar sensor are on each side of the vehicle, such as on or near a curb. By way of example, the 2D Lidar sensors 930 are located at a height of about 0.6 m above the road. These sensors detect vehicle door opening or obstacles (such as human legs) using a high scan rate of about 30-40 Hz. The 3D Lidar sensors 934 are located at a height of about 1.3 m above the road. These sensors acquire a 3D scan of the vehicle side profile for window localization at a slower rate of about 0.5 Hz.

[00131] The action system 920 consists of one or more robot arms and the control modules to position the object-carrying end effectors near the vehicle windows for easy access by the passengers inside the vehicle. The control modules include the collision avoidance module 950 that reduces the chances of making unintended contact with obstacles and vehicle doors, the window reaching module 952 to convey the objects to the respective window target points, and the shared control module 954 to enable the passenger to easily reposition the robot arm and interact with the object.

[00132] An example embodiment of the action system 920 is for the security clearance use case, and includes four robot arms. Each robotic arm has six degrees of freedom and is mounted with a facial and finger print reader for biometric clearance. The arms move the face and fingerprint readers to within convenient reach by the passengers in the car. Four arms are chosen to maximize speed of clearance, with each arm designated to one of the four car windows.

[00133] Another example embodiment is for the drive-through grocer use case, and includes one robot arm with six degrees of freedom. The end effector carries a mobile device for ordering items or verifying pre-orders, as well as a specialized gripper for carrying the package of items to the passenger.

[00134] Upon detection of door opening or obstacles, appropriate collision avoidance 950 action for affected robot arms is activated. If a detected obstacle lies between the current position and target destination of the end-effector, the robot arm will not extend until its workspace is clear of the obstacle. In the case of a door detected to be opening, the proximal robot shall retract to stow position from an extended position, or remain in stow position if it has not extended.
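A minimal sketch of this collision-avoidance decision logic is shown below. The data types and function names are hypothetical stand-ins, not part of the disclosed system; the perception flags are assumed to come from the 2D Lidar detectors described above.

```python
# Hypothetical sketch of the collision-avoidance behavior in paragraph [00134].
from dataclasses import dataclass
from enum import Enum, auto

class ArmCommand(Enum):
    EXTEND = auto()
    HOLD = auto()
    RETRACT_TO_STOW = auto()

@dataclass
class PerceptionState:
    door_opening: bool        # from the 2D Lidar door-opening detector
    obstacle_in_path: bool    # obstacle between the end effector and its target

def collision_avoidance(state: PerceptionState, extended: bool) -> ArmCommand:
    # A detected door opening sends the proximal arm to stow, or keeps it stowed.
    if state.door_opening:
        return ArmCommand.RETRACT_TO_STOW if extended else ArmCommand.HOLD
    # Do not extend while an obstacle lies between the current pose and the target.
    if state.obstacle_in_path:
        return ArmCommand.HOLD
    return ArmCommand.EXTEND
```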

[00135] The window reaching control module 952 receives the window target key points from the window localization module 936 of the perception system. Each robotic arm positions the object in a target zone near the corresponding target key point such that the object can be easily retrieved by the passenger sitting at the window without removing the seat belt. After the interactions with the passengers are completed (e.g., biometric verification, object handover), the robot arms retract to their respective home positions.

[00136] In case physical contact still occurs during the reaching phase (despite the collision avoidance behavior) due to detection or avoidance failure (e.g., the car door opens very rapidly), the robot arms have active compliance to ensure that the contact force is limited to a safe value to prevent injury to humans or damage to the car. When the robot senses a contact force, it compliantly moves in the direction of the force. If the force is transient and vanishes within a time window, the robotic arm resumes motion towards the target position. Otherwise, when the timeout is reached, the robotic arm returns to home position and requests perception system 910 to reinitiate window and obstacle detection.
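The compliant reaction above can be sketched as follows. The robot and force_sensor objects, the force limit, and the timeout are assumed placeholders for the actual robot interface and tuned safety values, which are not specified here.

```python
# Hypothetical sketch of the compliant contact reaction in paragraph [00136]:
# move with the sensed force, resume if it vanishes quickly, otherwise go home.
import time
import numpy as np

FORCE_LIMIT = 20.0   # illustrative safe contact force, in newtons
TIMEOUT_S = 2.0      # illustrative time window for a "transient" contact

def comply_with_contact(robot, force_sensor):
    """robot and force_sensor are assumed interfaces, not a real API."""
    t0 = time.monotonic()
    while time.monotonic() - t0 < TIMEOUT_S:
        f = force_sensor.read()                  # 3D contact force vector
        magnitude = np.linalg.norm(f)
        if magnitude < 0.1:                      # force vanished: transient contact
            robot.resume_motion_to_target()
            return
        # Move compliantly in the direction of the force, capped to a safe value.
        step = f / max(magnitude, 1e-6) * min(magnitude, FORCE_LIMIT)
        robot.move_incremental(step * 1e-3)      # small compliant displacement
    # Contact persisted past the timeout: retreat and ask for re-detection.
    robot.return_to_home()
    robot.request_redetection()
```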

[00137] The action system 920 includes the proximity-triggered emergency stop 956. Power to the robot is removed and mechanical brakes are engaged when the end effector crosses the critical safety limit. This is an independent and redundant system in case the robot hardware malfunctions. For example, if any of the encoders in the robot arm fails, the motion may become unpredictable and the control software may not be able to recover the robot arm to a safe position. In this case, the best mitigation action is to bypass the control software, activate an immediate power cut to the robot actuator, and engage mechanical brakes to lock the robot arm to prevent it from collapsing. This prevents unexpected and catastrophic consequences of a robot with hardware defects and puts the robot in a safe mode for manual recovery.

[00138] Figure 10A shows an automobile 1000 with an example 2D Lidar configuration in accordance with an example embodiment of the present invention. A 2D Lidar sensor 1010 is positioned on each side of the automobile 1000. Each sensor has a field of view 1020. An object 1030 is shown in the field of view of one of the sensors.

[00139] Figure 10B shows the automobile 1000 with an example 3D Lidar configuration in accordance with an example embodiment of the present invention. A 3D Lidar sensor 1050 is positioned on each side of the automobile 1000. Each sensor has a field of view 1060.

[00140] The proposed sensor setup provides some redundancy of sensing to improve accuracy as well as robustness in case of sensing failure (e.g., the car is too near one side of the curb, or a sensor malfunctions). If one of the 3D Lidar sensors fails, the counterpart sensor on the other side of the car can still localize the window positions. Since the car is symmetrical, the window positions on the blind side can be inferred from those measured on the working side.
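Inferring the blind-side windows then reduces to reflecting the measured window target points across the vehicle's longitudinal symmetry plane. A minimal sketch, assuming a vehicle-centered frame in which the x-z plane is that symmetry plane:

```python
# Sketch of blind-side window inference by symmetry (paragraph [00140]).
import numpy as np

def mirror_window_points(window_points_working_side: np.ndarray) -> np.ndarray:
    """Reflect window target points (N x 3) across the symmetry plane (y -> -y)."""
    mirrored = window_points_working_side.copy()
    mirrored[:, 1] *= -1.0
    return mirrored
```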

[00141] By calibrating all the Lidars and cameras on both sides, information from multiple Lidar sensors is fused into a coherent 3D coordinate system, which facilitates robot motion planning and control. The 3D coordinates of the windows, any detected obstacles and opened doors, as well as emergency safety-related stop signals, are fed to the system, such as the action system 920 in FIG. 9.
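Fusing calibrated sensors into one coordinate system amounts to applying each sensor's extrinsic rigid transform and stacking the results. The sketch below assumes the per-sensor extrinsics have already been obtained from an offline calibration; the function names are illustrative.

```python
# Sketch of fusing calibrated Lidar data into a coherent 3D frame ([00141]).
import numpy as np

def to_homogeneous(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 rigid transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def fuse_point_clouds(clouds, extrinsics):
    """Transform each sensor's points (N_i x 3) into the common frame and stack them."""
    fused = []
    for pts, T in zip(clouds, extrinsics):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # N x 4 homogeneous points
        fused.append((pts_h @ T.T)[:, :3])
    return np.vstack(fused)
```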

[00142] One example of detecting changes of the object status or environment from a sensor data sequence is detecting an object in the environment of an automobile or detecting a door opening of the automobile. Door opening and obstacle detection methods involve the use of a Gaussian model of the 2D laser scan of the static vehicle. For example, an example embodiment conducts a planar Lidar scan of a side of a vehicle to determine whether the door is open or closed. If a large deviation (above a threshold) from this Gaussian model is detected, then it is classified as an obstacle or a door opening event. Furthermore, based on geometrical analysis of this deviation from the original model, the system not only discriminates between a door opening event and an obstacle, but also localizes the obstacle or opened door.

[00143] Example embodiments can also be used to detect windows in an automobile. By way of example, localization of windows is based on one or more of detecting protrusions (e.g., a side mirror on a door) attached to salient regular surfaces (e.g., a metal frame of the door that surrounds the window), and inferring the remaining parts of an object (e.g., the window in a door of the automobile) by extrapolation from detected protrusions.

[00144] From the dense point cloud of the vehicle profile captured by the 3D Lidar sensors, features of the vehicle structure (windscreen frame, side mirror) are localized to determine the positions of the window target points for the conveyed objects to reach. First, the point cloud of the vehicle is segmented by background subtraction against a previously stored scan of the background without the vehicle. Then, the side and roof of the vehicle are segmented using plane detection from the 3D point cloud. The side mirrors are segmented by detection of abruptly protruding objects from the vehicle side surface.
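A minimal sketch of these three segmentation steps follows, using a brute-force background check and a plain RANSAC plane fit. The distance thresholds, iteration count, and protrusion offsets are illustrative assumptions, not values disclosed in the patent.

```python
# Sketch of paragraph [00144]: background subtraction, RANSAC plane fit for the
# vehicle side, and extraction of mirror-sized protrusions from that plane.
import numpy as np

def subtract_background(cloud, background, radius=0.05):
    """Keep points farther than `radius` from every stored background point."""
    d = np.linalg.norm(cloud[:, None, :] - background[None, :, :], axis=2)
    return cloud[d.min(axis=1) > radius]

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Fit the dominant plane; return (normal n, offset d, inlier mask) for n.x + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-9:          # degenerate sample, skip
            continue
        n = n / np.linalg.norm(n)
        d = -n @ p[0]
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers

def extract_protrusions(points, n, d, min_offset=0.05, max_offset=0.30):
    """Points that stick out of the side plane by a mirror-sized amount."""
    offset = np.abs(points @ n + d)
    return points[(offset > min_offset) & (offset < max_offset)]
```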

[00145] The front window target point is estimated from the positions of a side mirror and front roof corner. Specifically, the height of the target point lies between the heights of the side mirror and front roof corner, the lengthwise coordinate depends on the lengthwise difference between the side mirror and front roof corner, and the lateral coordinate is at a foot's distance from the vehicle side surface. The back door window is estimated based on a backwards shift from the detected front window position, mediated by any detected hole feature in the point cloud representing the rear window.
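The target-point construction in this paragraph can be sketched as simple coordinate arithmetic on the detected features. The weights, the lateral stand-off, and the back-window shift below are illustrative assumptions; the patent only states that the height lies between the two features, the lengthwise coordinate depends on their lengthwise difference, and the lateral coordinate is about a foot off the side surface.

```python
# Sketch of the window target-point estimates in paragraph [00145].
# Vehicle frame: x = lengthwise (toward the front), y = lateral, z = height.
import numpy as np

def front_window_target(side_mirror, front_roof_corner, lateral_offset=0.30):
    """Estimate the front-window target point from the mirror and roof corner."""
    x = side_mirror[0] + 0.5 * (front_roof_corner[0] - side_mirror[0])  # between the two, lengthwise
    y = side_mirror[1] + lateral_offset      # fixed stand-off from the vehicle side surface
    z = 0.5 * (side_mirror[2] + front_roof_corner[2])                   # between the two heights
    return np.array([x, y, z])

def back_window_target(front_window, backwards_shift=0.9):
    """Back-door window estimated as a lengthwise shift from the front window."""
    return front_window + np.array([-backwards_shift, 0.0, 0.0])
```

In the full method, the backwards shift would be mediated by any detected hole feature representing the rear window, as noted above.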

[00146] Figure 11 shows detecting a proximity of a vehicle 1100 and determining whether the robotic arm 1110 is within a predefined safety limit in accordance with an example embodiment of the present invention.

[00147] Vehicle proximity detection involves checking for breaching of a safety limit by the end effector 1115 or the object carried by the end effector of the robotic arm 1110. By way of example, this figure includes two safety limits: a first safety limit 1120 and a critical safety limit 1130. If the end effector 1115 passes the critical safety limit 1130, then the robotic arm will hit the automobile 1100 and possibly cause damage to the automobile, the end effector, or an object carried by the robotic arm.

[00148] Consider an example in which this critical safety limit 1130 is set at 15 cm from the side of the vehicle to allow for the emergency braking distance of the robot arm 1110. When the proximity sensor measures that the robot arm has transgressed the critical safety limit, an emergency signal is transmitted to the robot safety controller. If the measurement is of low confidence, then the signal is treated as a false alarm.
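A minimal sketch of this check is shown below. The 15 cm limit comes from the example above; the confidence gate and the safety_controller interface are assumptions used only for illustration.

```python
# Hypothetical sketch of the proximity-triggered emergency stop in [00148].
CRITICAL_LIMIT_M = 0.15   # critical safety limit from the example: 15 cm
MIN_CONFIDENCE = 0.8      # illustrative confidence gate for the proximity reading

def check_proximity(distance_to_vehicle: float, confidence: float, safety_controller) -> None:
    """Trigger an emergency stop only on confident transgressions of the limit."""
    if distance_to_vehicle < CRITICAL_LIMIT_M:
        if confidence >= MIN_CONFIDENCE:
            safety_controller.emergency_stop()   # cut power and engage the brakes
        # Low-confidence readings are treated as false alarms and ignored.
```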

[00149] As shown in FIG. 11, the end effector 1115 has a downward tilt. This downward tilt increases the chance of detecting the vehicle body when the windows are wound down.

[00150] Figure 12 shows a portion of a robotic system 1200 illustrating shared control between a robotic arm 1210 and an arm 1220 of a human in accordance with an example embodiment of the present invention. The robotic arm 1210 couples to a controller or robot controller 1230 that controls movement of the robotic arm. When the robotic arm 1210 is within a predetermined distance or area of the object (e.g., an automobile), the controller initiates shared control of the robotic arm. This area is designated with box 1240. The shared control prevents motion of the end effector 1250 across the box 1240, but allows free repositioning by the person when the end effector is inside the box.
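One simple way to realize this constraint is to clamp any commanded end-effector position to the interaction box. This is a minimal sketch under that assumption; the patent does not specify the exact constraint mechanism.

```python
# Sketch of the shared-control box constraint in paragraph [00150].
import numpy as np

def clamp_to_shared_control_box(desired_pos, box_min, box_max):
    """Clamp a desired end-effector position to the allowed interaction box."""
    return np.minimum(np.maximum(desired_pos, box_min), box_max)
```

For example, with an (assumed) box of box_min = [0.4, -0.3, 0.9] and box_max = [0.8, 0.3, 1.3] meters, a commanded position outside those bounds is projected back onto the nearest face, while any position inside the box is passed through unchanged, allowing free repositioning by the passenger.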

[00151] As shown in FIG. 12, an end effector 1250 includes an object 1260 being exchanged between the robotic arm 1210 and the hand of the person. When the robotic arm is in this area, shared control occurs as explained herein.

[00152] Example embodiments include optimization of placement of the robotic arm. Placement of robotic arm bases affects how well the robot end effectors can reach vehicles of different forms parked in different positions within the lane, subject to environmental constraints. For example, the presence of a back wall constrains how close the end effector can move towards the base, since the elbow of the robotic arm will collide with the back wall. In a different region of the workspace, the elbow will protrude forward and risk contacting the vehicle. Also, if a vehicle is parked slightly off position (e.g., too close to the curb), there is a risk that the window target point is no longer reachable. Due to these constraints on the reachability of the robot arms, suitable robot placements are identified to ensure system operability.

[00153] An objective of robot placement optimization is to maximize reachable workspace and minimize area of overall layout. This will be a one-off optimization performed before deployment to determine where to fix the bases of the robot arms.

[00154] Figure 13 is a method to optimize placement of the robotic arm in accordance with an example embodiment of the present invention.

[00155] Block 1300 states choose a set of candidate base positions B for each robotic arm.

[00156] Block 1310 states for each base position, choose grid space G between the lane and the robot base.

[00157] Block 1320 states for each point pg in G, set the end effector position pe = pg, and set the end effector orientation qe to be orthogonal to the longitudinal plane of the vehicle.

[00158] Block 1330 states compute inverse kinematics based on (pe, qe). If a solution does not exist, mark pg as infeasible and go to the next pg. Else, proceed.

[00159] Block 1340 states compute forward kinematics to determine the Cartesian positions of key points (e.g., the elbow). If any key point violates a constraint (e.g., exceeds the back wall), mark pg as infeasible and go to the next pg. Else, add pg to the feasible workspace F.

[00160] Block 1350 states take the union of the feasible workspaces for all robots to obtain the combined workspace AU = ∪ F.

[00161] Block 1360 states test for variation of vehicle form and position. If the robot end effector positions pw corresponding to the vehicle windows belong to the combined workspace AU, then the set of candidate base positions {B} is feasible for the variations considered.

[00162] Block 1370 states for all feasible base position candidates, compute a metric J = Σi di / A, where di is the shortest distance of pwi to the boundary of the combined workspace AU, and A is the footprint area spanned by the robot bases. Then, the optimal set of base positions is the one with the highest metric J.
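A minimal sketch of this placement search is given below. The inverse_kinematics, forward_key_points, violates_constraints, and distance_to_boundary functions are hypothetical stand-ins for the robot-specific kinematics and workspace geometry, and the metric follows the reconstructed form J = Σi di / A, which is an assumption consistent with the stated objective of maximizing reachable workspace while minimizing layout area.

```python
# Sketch of the base-placement optimization of Figure 13 (blocks 1300-1370).
import numpy as np

def feasible_workspace(base, grid, inverse_kinematics, forward_key_points, violates_constraints):
    """Blocks 1310-1340: keep grid points the arm can reach without violating constraints."""
    feasible = []
    for pg in grid:
        q = inverse_kinematics(base, position=pg, orientation="orthogonal_to_vehicle")
        if q is None:                                            # no IK solution: infeasible
            continue
        if violates_constraints(forward_key_points(base, q)):    # e.g., elbow past the back wall
            continue
        feasible.append(pg)
    return np.array(feasible)

def distance_to_boundary(point, workspace):
    """Illustrative stand-in: clearance of a window point within the combined workspace."""
    return float(np.min(np.linalg.norm(workspace - point, axis=1)))

def placement_metric(window_points, combined_workspace, footprint_area):
    """Block 1370: J = (sum of window-point clearances) / (layout footprint area)."""
    clearances = [distance_to_boundary(pw, combined_workspace) for pw in window_points]
    return sum(clearances) / footprint_area
```

Candidate base sets would be ranked by placement_metric, and the set with the highest J retained, as the block description states.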

[00163] Figure 14 shows a robotic system 1400 checking for reachability of a robotic arm 1410 to an automobile 1420 based on grid spacing 1430 in accordance with an example embodiment of the present invention.

[00164] A case study of reachable workspace analysis was performed for an example embodiment based on four Universal Robot UR5 arms with back wall serving a 2.5m width lane. The study included three car models, namely one model representing larger cars, one model representing medium-sized cars, and one model representing smaller cars. The study also considered three variations in parked position, namely ideal position, lateral offset by 20cm, and overshoot by 20cm.

[00165] The data confirms an example embodiment in which a feasible set of robot placements includes a 1.3 m mounting height, 0.65 m away from the curb, and 1 m between adjacent robots.

[00166] Figure 15 shows a system workflow of a robotic arm 1510 loading an object 1515 to a passenger in an automobile 1520 in accordance with an example embodiment of the present invention.

[00167] As shown in position 1500A, the vehicle 1520 parks in the approximate position, and the robotic system starts to scan the vehicle so as to localize the window target points and check for the presence of obstacles or door opening.

[00168] As shown in position 1500B, in the absence of obstacles and door opening events, the robot arms 1510 extend forward to convey the objects 1515 of interest to the localized window target points.

[00169] As shown in position 1500C, once the target points are reached, the passengers are free to interact with or use the objects 1515 and, if necessary, reposition the robot arm within a safe zone.

[00170] As shown in position 1500D, after the user interaction process is completed, the robot arms 1510 retract to their respective home positions and the vehicle drives off as shown in position 1500C. The robotic system then waits for the next vehicle to repeat the work cycle.

[00171] As used herein, "2D lidar" is a remote sensing device which uses the pulse from a laser to collect planar measurements of the distance to the targets.

[00172] As used herein, "3D lidar" is a remote sensing device which uses the pulse from a laser to collect three-dimensional measurements of the distance to the targets.

[00173] As used herein, "point cloud" is a collection of data points defined by a given coordinate system and created from scanning or sensing a surface of an object. Point clouds create models of the object, such as 3D models that define the shape of the object.

[00174] As used herein, a "protrusion" is something that protrudes from a regular surface. Protrusions can have various shapes and are often small components relative to the surface to which they are attached.

[00175] As used herein, a "regular surface" is a smooth surface of an object in differential geometry. Examples of a salient regular surface include, but are not limited to, planes, spheres, and cylinders.

[00176] As used herein, "salient" means prominent or important.

[00177] In some example embodiments, the methods illustrated herein and data and instructions associated therewith, are stored in respective storage devices that are implemented as computer-readable and/or machine-readable storage media, physical or tangible media, and/or non-transitory storage media. These storage media include different forms of memory including semiconductor memory devices such as DRAM, or SRAM, Erasable and Programmable Read-Only Memories (EPROMs), Electrically Erasable and Programmable Read-Only Memories (EEPROMs) and flash memories; magnetic disks such as fixed and removable disks; other magnetic media including tape; optical media such as Compact Disks (CDs) or Digital Versatile Disks (DVDs). Note that the instructions of the software discussed above can be provided on computer-readable or machine-readable storage medium, or alternatively, can be provided on multiple computer-readable or machine-readable storage media distributed in a large system having possibly plural nodes. Such computer-readable or machine-readable medium or media is (are) considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to a manufactured single component or multiple components.

[00178] Blocks and/or methods discussed herein can be executed by a software application, an electronic device, a computer or computer system, a robotic system, firmware, hardware, and/or a process. Furthermore, blocks and/or methods discussed herein can be executed automatically with or without instruction from a user.

[00179] While exemplary embodiments have been presented in the foregoing detailed description of the present embodiments, it should be appreciated that a vast number of variations exist. It should further be appreciated that the exemplary embodiments are only examples, and are not intended to limit the scope, applicability, operation, or configuration of the invention in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing exemplary embodiments of the invention, it being understood that various changes may be made in the function and arrangement of steps and method of operation described in the exemplary embodiments without departing from the scope of the invention as set forth in the appended claims.
