Robotic Grasping In Clutter And Occlusion With Deep Reinforcement Learning

Citation

Mohammed, Marwan Qaid and Kwek, Lee Chung and Chua, Shing Chyi (2021) Robotic Grasping In Clutter And Occlusion With Deep Reinforcement Learning. In: 2nd FET PG Engineering Colloquium Proceedings 2021, 1-15 Dec. 2021, Online Conference. (Unpublished)

Abstract

The first proposed PDDBA approach contains two main components that handle the above-mentioned issues: 1) a single Fully Connected Network that predicts the best pushing and grasping poses, and 2) a pixel-depth difference-based criterion that synergizes the execution of push and grasp actions. The second proposed MV-COBA approach is divided into two parts: 1) using multiple cameras to set up a multi-view configuration that addresses the occlusion issue while also improving grasp performance in cluttered and occluded environments and increasing the likelihood of a successful grasp.
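To make the push-grasp synergy concrete, the sketch below illustrates one way a pixel-depth difference-based criterion could decide between pushing and grasping: compare the target's depth against its immediate surroundings and grasp only when the target stands sufficiently clear of nearby clutter. The function names, the dilation margin, and the clearance threshold are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def pixel_depth_difference(depth_map, target_mask, margin=5):
    """Mean depth gap between the target object and its local surroundings.

    depth_map:   HxW array of depth values (metres).
    target_mask: HxW boolean mask marking the target object's pixels.
    margin:      pixels to dilate the mask when sampling the neighbourhood.
    """
    neighbourhood = binary_dilation(target_mask, iterations=margin) & ~target_mask
    if not neighbourhood.any():
        return 0.0  # no surrounding pixels to compare against
    target_depth = depth_map[target_mask].mean()
    surround_depth = depth_map[neighbourhood].mean()
    # Positive values mean the target sits above (closer than) its surroundings.
    return surround_depth - target_depth

def choose_action(depth_map, target_mask, clearance_threshold=0.02):
    """Grasp if the target is sufficiently exposed; otherwise push to make room."""
    if pixel_depth_difference(depth_map, target_mask) >= clearance_threshold:
        return "grasp"
    return "push"
```

In this reading, a push is only executed when the depth difference indicates the target is buried in clutter, so grasp attempts are reserved for poses the network predicts on sufficiently exposed objects.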

Item Type: Conference or Workshop Item (Paper)
Uncontrolled Keywords: Depth Difference, Multi-View, Change Observation, Synergizing Two Actions, Deep-RL, Robotic Grasping, Cluttered Scene
Subjects: T Technology > TJ Mechanical Engineering and Machinery
Divisions: Faculty of Engineering and Technology (FET)
Depositing User: Ms Nurul Iqtiani Ahmad
Date Deposited: 26 Jan 2022 04:08
Last Modified: 26 Jan 2022 04:08
URI: http://shdl.mmu.edu.my/id/eprint/9912
