
Robust robot packing

We built a fully automatic robot packing station around a Universal Robots UR5e arm equipped with a suction gripper (Robotiq EPick), one RGB-D camera (RealSense SR300), and two high-resolution black-and-white stereo / structured-light depth cameras (Ensenso N35).

The robot packer can pack both seen and unseen item sets into a shipping box of optimized size, with each item identified, its 3D model acquired, and a stable placement found, all fully automatically.

For pre-trained objects, classification is performed with a ResNet18. The system can also recognize newly added objects using Siamese one-shot learning. It achieves 99% top-1 accuracy on the trained object set and roughly 70% accuracy on objects seen only once.
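
As a rough illustration of the one-shot branch only (not the station's actual code), the sketch below enrolls a single reference image per new object and recognizes a query by comparing normalized embeddings from a shared ResNet18 backbone; the similarity threshold and function names are assumptions made for the example.

```python
# Minimal sketch, assuming a ResNet18 trunk is used as a shared embedding
# network; enrollment stores one embedding per new object, and recognition
# picks the nearest enrolled embedding by cosine similarity.
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet18(weights=None)   # in practice, a trained/fine-tuned trunk
backbone.fc = torch.nn.Identity()          # drop the classifier head, keep 512-d features
backbone.eval()

@torch.no_grad()
def embed(images):                         # images: (N, 3, 224, 224), normalized
    return F.normalize(backbone(images), dim=1)

def enroll(reference_images):              # one reference image per newly added object
    return embed(reference_images)         # gallery: (num_objects, 512)

def recognize(query_image, gallery, threshold=0.7):
    """Return the index of the closest enrolled object, or None if nothing matches."""
    q = embed(query_image.unsqueeze(0))    # (1, 512)
    sims = (gallery @ q.T).squeeze(1)      # cosine similarity to every enrolled object
    best = int(sims.argmax())
    return best if sims[best] > threshold else None
```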

We wrote a novel placement planner that finds placements for objects of arbitrary shape (convex and concave) such that each placement is (1) stable against the already-placed pile, and (2) graspable and collision-free while satisfying the robot's kinematic constraints.
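
The planner itself is not reproduced here; the minimal, runnable sketch below only illustrates the two placement checks on heavily simplified geometry (items as axis-aligned boxes, stability as a center-of-mass support test), with the grasp and kinematic checks omitted.

```python
# A toy version of the per-candidate checks, with items simplified to
# axis-aligned boxes; the real planner works on arbitrary meshes and also
# verifies graspability and kinematic feasibility.
from dataclasses import dataclass

@dataclass
class Box3D:
    x: float; y: float; z: float   # minimum corner
    w: float; d: float; h: float   # extents along x, y, z

def overlaps(a: Box3D, b: Box3D) -> bool:
    """Collision check between two axis-aligned boxes (touching faces are allowed)."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.d and b.y < a.y + a.d and
            a.z < b.z + b.h and b.z < a.z + a.h)

def supported(item: Box3D, pile: list[Box3D], tol: float = 1e-6) -> bool:
    """Crude stability proxy: the item rests on the floor, or on a placed item
    whose top face is at the item's base and whose footprint contains the
    item's center of mass."""
    cx, cy = item.x + item.w / 2, item.y + item.d / 2
    if item.z < tol:
        return True
    for p in pile:
        on_top = abs(item.z - (p.z + p.h)) < tol
        if on_top and p.x <= cx <= p.x + p.w and p.y <= cy <= p.y + p.d:
            return True
    return False

def feasible(item: Box3D, pile: list[Box3D]) -> bool:
    """A candidate placement passes if it is collision-free and supported."""
    return not any(overlaps(item, p) for p in pile) and supported(item, pile)
```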


Object dataset tools

Tools to create custom object datasets with ease. This tool generates pixel-wise segmentation masks, 2D and 3D bounding-box labels, and a 3D object model (PLY triangle mesh) for object sequences filmed with an RGB-D camera. It can be used to prepare training and testing data for a variety of deep learning tasks, such as 6D object pose estimation, object detection, and instance segmentation.
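
The repository's own scripts are not reproduced here; as a rough sketch of the underlying idea, once the reconstructed object model and per-frame camera poses are known, a pixel-wise mask and a 2D bounding box for each frame can be obtained by transforming the model into the camera frame and projecting it through the intrinsics. All names below are illustrative.

```python
# Sketch of label generation by projection (not the tool's actual code):
# transform model vertices into the camera frame, apply the pinhole model,
# and splat the projected vertices into a mask (no occlusion handling).
import numpy as np

def project_model(points, pose, K):
    """points: (N, 3) model vertices; pose: 4x4 model-to-camera transform;
    K: 3x3 camera intrinsics. Returns (N, 2) pixel coordinates."""
    pts_cam = pose[:3, :3] @ points.T + pose[:3, 3:4]   # (3, N) in camera frame
    uv = K @ pts_cam
    return (uv[:2] / uv[2]).T                            # perspective divide

def labels_for_frame(points, pose, K, img_h, img_w):
    """Return a boolean mask and a 2D bounding box (x_min, y_min, x_max, y_max)."""
    uv = np.round(project_model(points, pose, K)).astype(int)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < img_w) & (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
    uv = uv[inside]
    mask = np.zeros((img_h, img_w), dtype=bool)
    mask[uv[:, 1], uv[:, 0]] = True
    x_min, y_min = uv.min(axis=0)
    x_max, y_max = uv.max(axis=0)
    return mask, (int(x_min), int(y_min), int(x_max), int(y_max))
```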

Used and recommended by several popular deep learning projects, including PVNet and Microsoft's singleshotpose.

URL: https://github.com/F2Wang/ObjectDatasetTools


In-hand object scanning

RGB-D in-hand object manipulation is potentially the fastest and easiest way for novices to construct 3D models of household objects. However, it remains challenging to accurately segment the target object from the user's hands and the background.

This project presents a 3D model acquisition pipeline driven by in-hand interaction. The pipeline includes a novel object tracking technique and a set of reconstruction and post-processing procedures. With it, a non-expert can scan arbitrary objects with only a single hand-held RGB-D camera and light manual annotation.
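
The sketch below covers only the fusion stage, assuming the per-frame object masks and the object poses from the tracker are already available; it uses Open3D's TSDF integration with placeholder intrinsics and is not the project's actual code.

```python
# Fusion sketch: mask out hands/background in each depth frame, then integrate
# the masked RGB-D frame into a TSDF volume using the tracked object pose.
import numpy as np
import open3d as o3d

# Placeholder RGB-D intrinsics (width, height, fx, fy, cx, cy).
intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 615.0, 615.0, 320.0, 240.0)

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.002,   # 2 mm voxels
    sdf_trunc=0.01,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

def integrate_frame(color, depth, object_mask, obj_to_cam):
    """color: HxWx3 uint8; depth: HxW float32 in meters; object_mask: HxW bool
    mask of the target object; obj_to_cam: 4x4 transform from the object frame
    to the camera frame, as estimated by the tracker."""
    depth = np.where(object_mask, depth, 0.0).astype(np.float32)   # drop hands/background
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.geometry.Image(color), o3d.geometry.Image(depth),
        depth_scale=1.0, depth_trunc=0.6, convert_rgb_to_intensity=False)
    volume.integrate(rgbd, intrinsic, obj_to_cam)   # extrinsic maps object frame -> camera

# After calling integrate_frame for every tracked frame:
mesh = volume.extract_triangle_mesh()
mesh.compute_vertex_normals()
```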

URL: https://www.rgbdinhandmanipulation.com/


Live demo at ICRA 2019

Robot Button Pressing In Human Environments

A small, relatively inexpensive 3-DOF robot that recognizes and actuates buttons reliably. Its operating characteristics were derived from a systematic study of buttons and switches in human environments.

© 2022 by Fan Wang
