Resources

Vine Robots

Vine robots are a class of soft continuum robots.
In contrast to traditional robots that move based on flight or repeated contacts with a surface (e.g., walking, running, rolling), vine robots achieve movement through growth, on time scales much faster than their biological counterparts. As vine robots grow, they expand from the tip, allowing them to use their newly established "stem" as a base from which to traverse gaps, climb vertically, and grow to over 100 times their original length. Because they do not rely on contact with the environment to achieve movement, they can navigate over rough, slippery, sticky, and sharp terrain. Growth from the tip also enables a vine robot to withstand being stepped on and to extend through gaps a quarter of its height. Within its region of growth, a vine robot can provide not only sensing but also a physical conduit, such as a water hose that grows to a fire or an oxygen tube that grows to a trapped disaster victim. Vine robots could also protect trapped victims and infrastructure by gently wrapping themselves around unstable rubble or grasping a gas valve to be pulled shut.
Open source designs are available here: http://vinerobots.org

The Hapkit Family

The Hapkit family is a set of open source kinesthetic haptic devices developed for educational applications.
Hapkit 1.0, a one-degree-of-freedom haptic kit that uses a friction drive and acrylic structural materials, was first designed based on the original haptic paddle. Hapkit 1.0 uses an Arduino Uno-based board for computation, making it a standalone device that can be used outside of a laboratory. Hapkits 2.0 and 3.0 further improved on the Hapkit design to make it more accessible. Hapkit 3.0 uses 3D-printed structural materials and a capstan drive.
Graphkit and Haplink are two-degree-of-freedom kinesthetic haptic devices based on Hapkit 3.0. Graphkit combines two Hapkit 3.0 devices using a pantograph mechanism, and Haplink customizes the sector pulleys of two Hapkit 3.0 devices and combines them using a novel serial mechanism.
The open source designs, as well as example code, are available here: http://hapkit.stanford.edu/.
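To give a sense of the kind of example code involved, below is a minimal Arduino-style sketch of a one-degree-of-freedom haptic rendering loop (read the handle position, render a virtual spring, command the motor). The pin assignments, calibration constants, and stiffness value are placeholders for illustration, not the values from the official Hapkit firmware.

// Minimal sketch: render a virtual spring on a 1-DoF haptic device.
// Pin numbers and calibration constants below are illustrative placeholders.

const int posSensorPin = A2;   // analog position sensor (assumed wiring)
const int pwmPin       = 5;    // motor PWM pin (assumed)
const int dirPin       = 8;    // motor direction pin (assumed)

const double countsToMeters = 0.00011; // sensor counts -> handle position [m] (placeholder)
const double springK        = 200.0;   // virtual spring stiffness [N/m] (placeholder)
const double maxForce       = 2.0;     // force mapped to full PWM duty [N] (placeholder)

void setup() {
  pinMode(pwmPin, OUTPUT);
  pinMode(dirPin, OUTPUT);
}

void loop() {
  // 1. Sense: read the handle position relative to the center of travel.
  int raw = analogRead(posSensorPin);
  double xh = (raw - 512) * countsToMeters;

  // 2. Render: a virtual spring pulling the handle back toward x = 0.
  double force = -springK * xh;

  // 3. Actuate: sign sets motor direction, magnitude sets PWM duty cycle.
  digitalWrite(dirPin, force >= 0 ? HIGH : LOW);
  double duty = constrain(fabs(force) / maxForce, 0.0, 1.0);
  analogWrite(pwmPin, (int)(duty * 255));
}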

GelSight Video Dataset of 93 Textures

Rich haptic sensory feedback in response to user interactions is desirable for an effective, immersive virtual reality or teleoperation system. However, this feedback depends on material properties and user interactions in a complex, non-linear manner. Therefore, it is challenging to model the mapping from material and user interactions to haptic feedback in a way that generalizes over many variations of the user's input. Current methodologies are typically conditioned on user interactions but require a separate model for each material. In this project, we present a learned action-conditional model that uses data from a vision-based tactile sensor (GelSight) and the user's action as input and predicts the induced acceleration. We trained our proposed model on a publicly available dataset (Penn Haptic Texture Toolkit) that we augmented with GelSight measurements of the different materials.
We have made these GelSight videos publicly available here: https://sites.google.com/stanford.edu/haptic-texture-generation.
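For readers working with the dataset, the sketch below illustrates only the input/output interface of such an action-conditional model: a window of GelSight frames plus the user's scanning force and speed map to a predicted acceleration trace. The structs, dimensions, and the trivial placeholder mapping are assumptions for illustration; this is not the learned model from the paper.

#include <vector>
#include <cstddef>

// Illustrative interface only: the real model is a learned network trained on
// GelSight video and tool motion. The placeholder computation below exists only
// to show the shapes of the inputs and output.

struct GelSightFrame {
  std::vector<float> pixels;           // one grayscale tactile image, row-major
};

struct UserAction {
  float normalForceN;                  // pressing force [N]
  float scanSpeedMmPerS;               // tangential scanning speed [mm/s]
};

// Predict a short window of tool-tip acceleration samples from tactile frames
// and the current action. Placeholder: scales a flat profile by the action.
std::vector<float> predictAcceleration(const std::vector<GelSightFrame>& frames,
                                       const UserAction& action,
                                       std::size_t numSamples = 100) {
  std::vector<float> accel(numSamples, 0.0f);
  float roughnessProxy = 0.0f;
  if (!frames.empty() && !frames.back().pixels.empty()) {
    for (float p : frames.back().pixels) roughnessProxy += p;
    roughnessProxy /= frames.back().pixels.size();
  }
  for (std::size_t i = 0; i < numSamples; ++i)
    accel[i] = roughnessProxy * action.normalForceN * action.scanSpeedMmPerS * 1e-4f;
  return accel;
}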

Multilateral Manipulation Software Framework

This work is a software design methodology to be applied in the first stages of describing a multilateral manipulation task. The multilateral manipulation software framework can be built on top of ROS or CISST, or built independent of these libraries. We break down the formalization of a multilateral manipulation task into seven base classes. The base classes facilitate completion of the task and include its input/output components: human-interface devices, graphical display, and data logging. Extensions to these base classes specify the framework for the task at hand; in this repository, a simulated surgical task. Interfaces between the classes are simple, well defined, and easily extensible, facilitating adaptation of this structure to tasks other than the one demonstrated in this work. Following the specifications of this software framework, the repository implements five different collaboration models between a human operator and a robotic agent in an inclusion segmentation task. Different collaboration models can broaden our understanding of multilateral manipulation and enable us to think about new ways to investigate human-centered autonomy. Source code is available here: https://github.com/nichollka/MMSF-Barebones-Framework.
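As a rough sketch of what such a base-class decomposition can look like, the C++ below defines abstract input/output components and a task class that composes them. The class names and the three interfaces shown are illustrative assumptions, not the seven base classes actually defined in the repository.

#include <memory>
#include <string>
#include <vector>

// Illustrative base classes for a multilateral manipulation task.
// Names and interfaces are assumptions for this sketch; see the repository
// for the actual class hierarchy.

class HumanInterfaceDevice {           // e.g., a haptic master device
 public:
  virtual ~HumanInterfaceDevice() = default;
  virtual std::vector<double> readPose() = 0;                  // operator input
  virtual void renderForce(const std::vector<double>& f) = 0;  // feedback output
};

class GraphicalDisplay {               // task visualization for the operator
 public:
  virtual ~GraphicalDisplay() = default;
  virtual void update(const std::vector<double>& taskState) = 0;
};

class DataLogger {                     // time-stamped experiment logging
 public:
  virtual ~DataLogger() = default;
  virtual void log(const std::string& channel, double value) = 0;
};

// A task-specific extension wires the components together; a different
// collaboration model swaps in a different step() policy, not new plumbing.
class ManipulationTask {
 public:
  ManipulationTask(std::unique_ptr<HumanInterfaceDevice> hid,
                   std::unique_ptr<GraphicalDisplay> display,
                   std::unique_ptr<DataLogger> logger)
      : hid_(std::move(hid)), display_(std::move(display)), logger_(std::move(logger)) {}
  virtual ~ManipulationTask() = default;
  virtual void step() = 0;             // one control/render cycle
 protected:
  std::unique_ptr<HumanInterfaceDevice> hid_;
  std::unique_ptr<GraphicalDisplay> display_;
  std::unique_ptr<DataLogger> logger_;
};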

3-DoF Skin Deformation Haptic Device

The 3-DoF skin deformation haptic device is a tactile haptic device that is capable of rendering translational skin deformation cues to the user's index finger, middle finger, and thumb without providing kinesthetic force feedback. The SolidWorks design files (which include the parts and the complete assembly) can be downloaded below.

Download the SolidWorks files and other part information for the RC servo-powered 3-DoF skin deformation device

Download the SolidWorks files and other part information for the Micromo 1516 DC motor-powered 3-DoF skin deformation device

Download the C++ Qt project file for the RC servo-powered 3-DoF skin deformation device, which is needed to control the device

Download the C++ Qt project file for the DC motor-powered 3-DoF skin deformation device, which is needed to control the device

Wearable 3-DoF Skin Deformation Device

The wearable 3-DoF skin deformation device is an extension of previous designs that attached to kinesthetic manipulators. It is worn on the fingertip and is capable of deforming the skin in 3-DoF. When integrated with a free-space tracking method, such as a magnetic or optical tracking system, it can be used to provide real-time feedback of interaction forces. CAD models of the entire assembly can be found here: https://github.com/sschorr/WearableDevice
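As a sketch of how the device might be paired with a tracker to render interaction forces in free space, the snippet below maps a tracked fingertip position to a skin deformation command using a simple penetration-depth contact model. The flat virtual surface, stiffness, force-to-displacement scaling, and workspace limit are placeholder assumptions, not the published controller.

#include <algorithm>
#include <array>

// Illustrative only: tracked fingertip position -> 3-DoF deformation command.

struct Vec3 { double x, y, z; };

// Virtual wall at z = 0; stiffness converts penetration depth to a normal force [N].
double contactForce(const Vec3& fingertip, double stiffness = 500.0) {
  double penetration = std::max(0.0, -fingertip.z);   // meters below the surface
  return stiffness * penetration;
}

// Convert the desired force into a tactor displacement command [mm], clamped
// to an assumed +/-1.5 mm device workspace.
std::array<double, 3> deformationCommand(const Vec3& fingertip) {
  double fn = contactForce(fingertip);
  double mmPerNewton = 0.5;                            // placeholder mapping
  double normal = std::clamp(fn * mmPerNewton, 0.0, 1.5);
  // Tangential components could encode shear or friction cues; zero in this sketch.
  return {0.0, 0.0, normal};
}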

Stereoscopic Vision System

With the Raven-II, we use a pair of Point Grey Flea3 cameras and a Samsung UN46FH6030 3D TV to display the Raven's manipulators to the user. The camera mount allows both the view angle and the distance between the cameras to be adjusted. It is designed to be laser cut from 1/8" acrylic and can be attached to either English or metric 80/20.

Download the SolidWorks files
The source code used in our stereoscopic visualization system for teleoperation is hosted at https://github.com/cliffbar/Stereoscopic_3D_Display. A .zip file is also available for download here.

Download the source code
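For context, here is a minimal OpenCV sketch of the kind of side-by-side stereo pipeline such a system uses: grab frames from two cameras and pack them into the side-by-side format many 3D TVs accept. The camera indices and the assumption that both cameras deliver identically sized frames are placeholders; see the repository above for the actual implementation.

#include <opencv2/opencv.hpp>

// Minimal sketch: capture two cameras and compose a side-by-side 3D frame.
int main() {
  cv::VideoCapture leftCam(0), rightCam(1);            // assumed camera indices
  if (!leftCam.isOpened() || !rightCam.isOpened()) return 1;

  cv::Mat left, right, sideBySide;
  while (true) {
    leftCam >> left;
    rightCam >> right;
    if (left.empty() || right.empty()) break;

    // Halve each image horizontally, then concatenate: standard side-by-side 3D.
    cv::resize(left,  left,  cv::Size(left.cols  / 2, left.rows));
    cv::resize(right, right, cv::Size(right.cols / 2, right.rows));
    cv::hconcat(left, right, sideBySide);

    cv::imshow("stereo", sideBySide);
    if (cv::waitKey(1) == 27) break;                   // Esc to quit
  }
  return 0;
}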

OmniGrip Haptic Device

The OmniGrip is a master manipulator gripper device capable of rendering a programmed stiffness. This open source project aims to add a seventh degree of freedom to the conventional Sensable Touch (Phantom Omni) haptic device. The project's files can be found at our GitHub repository.
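Rendering a programmed stiffness reduces to a simple proportional law, sketched below for a gripper degree of freedom. The angle convention, sign handling, and torque limit are placeholder assumptions for illustration, not the OmniGrip firmware.

// Illustrative stiffness rendering: torque resists closing the gripper past the
// angle at which a virtual object is first contacted.
double gripperTorque(double gripAngleRad,        // measured gripper opening
                     double restAngleRad,        // opening at virtual contact
                     double stiffnessNmPerRad,   // programmed stiffness
                     double maxTorqueNm) {       // actuator torque limit
  double torque = 0.0;
  if (gripAngleRad < restAngleRad)               // only resist squeezing into the object
    torque = stiffnessNmPerRad * (restAngleRad - gripAngleRad);
  if (torque > maxTorqueNm) torque = maxTorqueNm;
  return torque;
}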

Finite Element Models

Comparing Lump Detection by Human and Artificial Tactile Sensing

Finite Element Method (FEM) model of a human finger interacting with embedded lumps of different sizes and depths. Download Abaqus Input File
Finite Element Method (FEM) model of an artificial tactile sensor interacting with embedded lumps of different sizes and depths. Download Abaqus Input File
Nodal coordinates for both models are in millimeters, and material parameters are in MPa. See the paper for material property values and applied boundary conditions.