Robots are ideal surrogates for performing tasks that are dull, dirty, and dangerous. To fully realize this ideal, a robotic teammate should be able to autonomously perform human-level tasks in unstructured environments where we do not want humans to go. In this paper, we take a step toward that vision by integrating state-of-the-art advances in intelligence, perception, and manipulation on the RoMan (Robotic Manipulation) platform. RoMan comprises two 7-degree-of-freedom (DoF) limbs connected to a 1-DoF torso and mounted on a tracked base. Multiple lidars are used for navigation, and a stereo depth camera provides point clouds for grasping. Each limb has a 6-DoF force-torque sensor at the wrist, with a dexterous three-finger gripper on one limb and a stronger four-finger claw-like hand on the other. Tasks begin with an operator specifying a mission type, a desired final destination for the robot, and a general region where the robot should look for grasps. All other portions of the task are completed autonomously: navigation; object identification and pose estimation (if the object is known) via deep learning, or perception through search; fine maneuvering; grasp planning via a grasp library; arm motion planning; and manipulation planning (e.g., dragging if the object is deemed too heavy to lift freely). Finally, we present initial test results on two notional tasks: clearing a road of debris, such as a heavy tree or a pile of unknown light debris, and opening a hinged container to retrieve a bag inside it.
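The abstract describes the autonomy pipeline only in prose; the following Python sketch makes the sequencing concrete. Every class and method name here (MissionSpec, navigate_to, plan_grasp, etc.) is a hypothetical stand-in rather than the authors' software interface, and the lift-weight threshold is an assumed placeholder.

```python
"""Illustrative sketch of the task pipeline described above.

All names are hypothetical stand-ins, not the authors' API; LIFT_LIMIT_KG
is an assumption (the paper only says objects "deemed too heavy to freely
lift" are dragged instead).
"""
from dataclasses import dataclass


@dataclass
class MissionSpec:
    """The three inputs the operator provides; everything else is autonomous."""
    mission_type: str      # e.g. "clear_debris" or "retrieve_bag"
    destination: tuple     # desired final position for the robot
    grasp_region: tuple    # general region to search for graspable objects


@dataclass
class DetectedObject:
    pose: tuple                # estimated object pose
    estimated_mass_kg: float
    known: bool                # True if recognized by the learned detector


LIFT_LIMIT_KG = 10.0           # hypothetical payload threshold


def run_mission(robot, spec: MissionSpec) -> None:
    robot.navigate_to(spec.grasp_region)          # lidar-based navigation
    obj = robot.find_object(spec.grasp_region)    # deep-learned pose estimation if known,
                                                  # perception-through-search otherwise
    robot.fine_maneuver(obj.pose)                 # position the base for manipulation
    grasp = robot.plan_grasp(obj)                 # query the grasp library
    robot.execute(robot.plan_arm_motion(grasp))   # collision-free arm trajectory
    if obj.estimated_mass_kg > LIFT_LIMIT_KG:     # manipulation planning: drag, don't lift
        robot.drag(obj, spec.destination)
    else:
        robot.lift_and_carry(obj, spec.destination)
```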
Emulating the sense of touch is fundamental to endowing robotic systems with perception abilities. This work presents a mechanoreceptor-like neuromorphic tactile sensor implemented with fiber-optic sensing technology. A robotic gripper was sensorized with soft, flexible tactile sensors based on Fiber Bragg Grating (FBG) transducers and a neuro-bio-inspired model that extracts tactile features. The FBGs, connected to the neuron model, emulated biological mechanoreceptors by encoding tactile information as spikes. Converting the inflowing tactile information into event-based spikes reduces the bandwidth required for communication between the sensing and computational subsystems of a robot. The sensor outputs were converted into spiking on-off events by an architecture implemented on a Field-Programmable Gate Array (FPGA) and applied to robotic manipulation tasks to evaluate the effectiveness of this information-encoding strategy. Different tasks were performed with the objective of enabling fine manipulation using features extracted from the grasped objects (i.e., size and hardness). This is envisioned as a futuristic sensor technology combining two promising approaches: optical and neuromorphic sensing.
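The abstract does not specify the neuron model used for the spike conversion, so the sketch below uses a standard leaky integrate-and-fire (LIF) neuron as a stand-in to illustrate the basic stimulus-to-spike encoding; all parameter values are illustrative, and the FPGA on/off-channel architecture is not reproduced.

```python
"""Sketch of mechanoreceptor-like spike encoding for one FBG tactile channel.

A generic leaky integrate-and-fire (LIF) neuron stands in for the paper's
neuro-bio-inspired model; parameters are illustrative, not from the paper.
"""
import numpy as np


def lif_encode(stimulus, dt=1e-4, tau=0.01, gain=500.0, v_thresh=1.0, v_reset=0.0):
    """Convert a normalized contact signal (e.g. an FBG wavelength shift)
    into a train of spike events via leaky integration and thresholding."""
    v = v_reset
    spikes = np.zeros_like(stimulus, dtype=bool)
    for i, s in enumerate(stimulus):
        v += dt * (-v / tau + gain * s)   # leaky integration of the input
        if v >= v_thresh:                 # threshold crossing -> emit event
            spikes[i] = True
            v = v_reset                   # reset membrane after firing
    return spikes


# Example: a ramp-and-hold press. Firmer contact drives a higher spike
# rate, the kind of rate-coded feature usable to infer object hardness.
t = np.arange(0.0, 0.5, 1e-4)
press = np.clip(t / 0.1, 0.0, 1.0) * 0.5   # normalized wavelength shift
events = lif_encode(press)
print(f"{events.sum()} spike events in {t[-1] + 1e-4:.1f} s")
```

Because downstream electronics only need the event timestamps rather than the full sampled waveform, this encoding is what yields the reduced bandwidth requirement mentioned above.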