Learning Human-Environment Interactions Using Conformal Tactile Textiles
https://doi.org/10.1038/s41928-021-00558-0
Recording, modelling and understanding tactile interactions is important in the study of human behaviour and in the development of applications in healthcare and robotics. However, such studies remain challenging because existing wearable sensory interfaces are limited in terms of performance, flexibility, scalability and cost. Here, we report a textile-based tactile learning platform that can be used to record, monitor and learn human–environment interactions. The tactile textiles are created via digital machine knitting of inexpensive piezoresistive fibres, and can conform to arbitrary three-dimensional geometries. To ensure that our system is robust against variations in individual sensors, we use machine learning techniques for sensing correction and calibration. Using the platform, we capture diverse human–environment interactions (more than a million tactile frames) and show that the artificial-intelligence-powered sensing textiles can classify humans' sitting poses, motions and other interactions with the environment. We also show that the platform can recover dynamic whole-body poses, reveal environmental spatial information and discover biomechanical signatures.
Living organisms extract information and learn from the surroundings through constant physical interactions1. Humans are particularly receptive to tactile cues (on hands, limbs and torso), which allow complex tasks such as dexterous grasp and locomotion to be carried out2. Observing and modelling interactions between humans and the physical world are thus fundamental to the study of human behaviour3, and also for the development of applications in healthcare4, robotics5,6 and human–computer interactions7. However, studies of human–environment interactions typically rely on easily observable visual or audible datasets8, and obtaining tactile data in a scalable manner remains difficult.

Wearable electronics have benefited from innovations in advanced materials9–11, designs12–16 and manufacturing techniques17,18. However, sensory interfaces that offer conformal, full-body coverage and can record and analyse whole-body interactions have not been developed. Such full-sized tactile sensing garments could be used to equip humanoid robots with electronic skin for physical human–robot collaboration2,6, could serve as auxiliary training devices for athletes by providing real-time interactive feedback and recording19, and could assist high-risk individuals, such as the elderly, in emergencies (a sudden fall, for example) and early disease detection (heart attacks or Parkinson's disease, for example)20 by acting as unobtrusive health-monitoring systems.

In this Article, we report a full-body tactile textile for the study of human activities (Fig. 1). Our textiles are based on coaxial piezoresistive fibres (conductive stainless-steel threads coated with a piezoresistive nanocomposite), produced using an automated coating technique. The functional fibres can be turned into large-scale sensing textiles that can conform to arbitrary three-dimensional (3D) geometries, using digital machine knitting, an approach that addresses current challenges in the large-scale manufacturing of functional wearables. An artificial intelligence (AI)-based computational workflow is also developed here to assist in the calibration, recording and analysis of full-body human–environment interactions using the tactile textiles.

Conformal tactile sensing textiles
Our low-cost (~US$0.2 per metre; details about cost estimation are provided in Supplementary Table 1) coaxial piezoresistive fibres are created using a scalable automated fabrication process (Fig. 1a; further details on characterization and preparation are provided in Fig. 2a–c and the Methods). A pair of fibres are then orthogonally overlapped to create a sensing unit, which converts pressure stimuli (a normal force acting on the surface) into electrical signals1,8. Figure 2d shows that the measured resistance of a typical sensor drops from ~8 kΩ to 2 kΩ in response to an increasing applied normal force (or pressure) in the range of 0.1–2 N (with an average peak hysteresis of −22%; Supplementary Fig. 1f). Our functional fibre has reliable performance over 1,000 load and unload cycles (Fig. 2f). It also maintains stable performance in a wide range of daily environments, such as different temperatures (20–40 °C) and humidities (40–60%, Supplementary Fig. 1i–k). The performance of the sensing unit can be tuned on demand by adjusting the material compositions (copper and graphite weight percentages, Supplementary Fig. 1c,d) and fabrication process (pulling speed and material feeding rate, Supplementary Fig. 1e).

To fabricate 3D conformal tactile textiles in a scalable manner, we use digital machine knitting to seamlessly integrate the functional fibres into shaped fabrics and full-sized garments. Although weaving has been widely attempted for inserting functional fibres into fabric, knitting is chosen here as it has two primary advantages over weaving. First, the fabrication process is simpler: whereas woven fabric must be cut and sewn to form a garment, full-garment machine knitting can directly manufacture wearables with arbitrary 3D geometry21. Second, the interlocking loops of yarn (that is, stitches) used in knitting create a softer, stretchier fabric, which ensures comfort and compatibility during natural human motions (Supplementary Video 1).
Fig. 1 | Textile-based tactile learning platform. a, Schematic of the scalable manufacturing of tactile sensing textiles using a customized coaxial
piezoresistive fibre fabrication system and digital machine knitting. A commercial conductive stainless-steel thread is coated with a piezoresistive
nanocomposite (composed of polydimethylsiloxane (PDMS) elastomer as the matrix and graphite/copper nanoparticles as the conductive filler).
b, Digitally designed and automatically knitted full-sized tactile sensing wearables: (i) artificial robot skin, (ii) vest, (iii) sock and (iv) glove. c, Examples of
tactile frames collected during human–environment interactions and their applications explored using machine learning techniques.
Because of its relative stiffness compared to regular knitting yarn (Supplementary Fig. 1g), the piezoresistive functional fibre is not suitable to be knitted directly by forming loops. We thus used an alternative knitting technique—inlay—which horizontally integrates the yarn in a straight configuration (Supplementary Fig. 5e and Supplementary Video 2). We assembled two knitted fabrics with functional fibres inlaid perpendicularly to form a sensing matrix. Different knitting patterns can be used to tune the performance of the sensing matrix for customized applications. The highest achieved sensitivity of 1.75 kPa⁻¹ and a detection range up to 87.5 kPa are demonstrated in Fig. 2e (the configuration is shown in Supplementary Fig. 5f).

We computationally designed and automatically knitted several example garments: socks (216 sensors over an area of 144 cm², Supplementary Fig. 6a), a vest (1,024 sensors over 2,100 cm², Supplementary Fig. 6b), a robot arm sleeve (630 sensors over 720 cm², Supplementary Fig. 6c) and full-sized gloves (722 sensors over 160 cm², Supplementary Fig. 6d). Thanks to the digital machine knitting system22, these garments are fully customizable: they can be adapted to individual shape, size, surface texture and colour preference, meeting the needs of personalization and fashion design. Details of the knitting operation and our designs can be found in Supplementary Figs. 5 and 6.

Self-supervised sensing correction
During the large-scale manufacturing of wearable sensors, it is inevitable that variations, and hence failures, will be introduced. Although researchers conventionally attempt to fabricate flawless sensor arrays23,24, we draw inspiration from living organisms, which develop the ability to adapt their sensory system in the presence of localized defects or environmental variations using overall sensing1,2,4. We adopt a similar mechanism to provide robust sensing capabilities while relaxing the flawless requirement in sensor fabrication. This is critical for many applications, as it is impractical to perform individual calibration and correction of our sensing units due to their high density, complex geometries and diverse applications. To implement such a mechanism, we developed a self-supervised learning paradigm that learns from weak supervision, using spatial-temporal contextual information to normalize sensing responses, compensate for variation and fix malfunctioning sensors. In particular, we first calibrate our tactile socks and gloves by collecting synchronized tactile responses and readings from a digital scale stepped or pressed by a wearer (Fig. 3a and Supplementary Video 3). At each frame, the scale reading indicates the overall applied pressure, which is expected to correlate linearly with the sum of tactile responses at all sensing points. We then train a fully convolutional neural network with four hidden layers, which takes a small sequence of raw tactile array responses as input and outputs a single frame with the same spatial array resolution (Supplementary Fig. 8a). The output represents the corrected tactile response of the middle frame of the input sequence.
Fig. 2 | Characterization of the functional fibre. a, Photograph of the piezoresistive functional fibre (>100 m) and sensing fabrics. b, Optical image
of a stainless-steel thread, coaxial piezoresistive fibre and acrylic knitting yarn. c, Scanning electron microscopy (SEM) cross-sectional image of the
piezoresistive fibre. d, The resistance profile of a typical sensor (composed of two piezoresistive fibres) in response to pressure (or normal force).
Error bars indicate the standard deviation of four individual sensors. e, The influence of fabric structures (‘manual’ inlay and automatic inlay) on device
performance. Behaving as a buffer, the introduction of the soft fabric (red curve) decreases the sensitivity and increases the sensing range. Also, the ribbed
structures obtained from automatic inlay induce gaps between two aligned fabrics (dark blue and light blue curves), further decreasing the sensitivity and
increasing the detection range. Detailed structures are shown in Supplementary Fig. 5f. f, The stable performance of a typical sensor over 1,000 cycles of
force load and unload. (Additional characterization of the fibre and fabric is provided in the Supplementary Information).
The neural network is optimized via stochastic gradient descent with an objective consisting of two components: one encourages the output to preserve the spatial details from the input, and the other requires the summed tactile response to be close to the reading from the scale. Our network consistently increases the correlation between the tactile response and the reference (the reading from the scale): the correlation improves from 75.9% to 91.1% in the right sock, from 92.4% to 95.8% in the left sock and from 77.7% to 88.3% in the glove (Fig. 3a and Supplementary Fig. 9a–d).

To further calibrate the sensing fabrics with arbitrary shapes, such as the vest and robot arm sleeve, we treat the corrected glove as a mobile, flexible 'scale' and record data from both the tactile glove and the target garments while researchers randomly press on the latter through the glove. With the same self-supervised learning framework (Fig. 3b), the correlation between the responses increases from 32.1% to 74.2% for the tactile vest and from 58.3% to 90.6% for the tactile robot arm sleeve (Fig. 3b and Supplementary Fig. 9e). The self-supervised calibration network exploits the inductive bias underlying the convolutional layers25, learns to remove artefacts, and produces more uniform and continuous responses (Fig. 3c–h and Supplementary Fig. 10). It enables the large-scale sensing matrix to be robust against variation among the individual elements and even their occasional disruption, consequently improving the reliability of the measurement.
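As an illustration of how such a calibration network and its two-component objective might be set up in PyTorch (the paper specifies a four-hidden-layer fully convolutional network trained with stochastic gradient descent; the layer widths, kernel sizes, loss weighting and the interpretation of the spatial term below are assumptions, not the published implementation):

```python
import torch
import torch.nn as nn

class CalibrationNet(nn.Module):
    """Fully convolutional network: a short window of raw tactile frames in,
    one corrected frame (for the middle of the window) out."""
    def __init__(self, window: int = 5, hidden: int = 32):
        super().__init__()
        layers, in_ch = [], window          # treat the temporal window as input channels
        for _ in range(4):                  # four hidden layers
            layers += [nn.Conv2d(in_ch, hidden, kernel_size=3, padding=1), nn.ReLU()]
            in_ch = hidden
        layers.append(nn.Conv2d(hidden, 1, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):                   # x: (batch, window, H, W)
        return self.net(x).squeeze(1)       # (batch, H, W)

def calibration_loss(corrected, raw_middle, reference, alpha=0.1):
    """Two-part objective: preserve the spatial detail of the raw middle frame,
    and keep the summed response close to the reference (scale) reading."""
    spatial = ((corrected - raw_middle) ** 2).mean()
    summed = ((corrected.sum(dim=(1, 2)) - reference) ** 2).mean()
    return spatial + alpha * summed

model = CalibrationNet()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# One illustrative optimization step on random stand-in data:
# a batch of 8 five-frame windows from a 32 x 32 tactile array.
frames = torch.rand(8, 5, 32, 32)
reference = torch.rand(8)
corrected = model(frames)
loss = calibration_loss(corrected, frames[:, 2], reference)
loss.backward()
optimizer.step()
```

In the actual pipeline, the reference comes from the synchronized digital scale for the socks and gloves, and from the summed response of the already calibrated glove when correcting the vest and robot arm sleeve.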
Fig. 3 | Self-supervised correction. a, Procedure for correcting the tactile sock. The wearer steps on a digital scale (a tennis ball was placed between the foot
and the scale to enhance conformal contact) and synchronized readings from the sock and scale are collected. The same method is used for tactile gloves.
b, Procedure for self-supervised correction of the vest using the response of the calibrated glove as the reference. The same method is also used for the
robot arm sleeve. The bar plots on the right (a,b) show the correlations between the tactile response and the scale reading: 'Raw' indicates the correlation of the original, unprocessed tactile signal; 'Manual' indicates the correlation of manually adjusted data where all saturated tactile signals were clipped; and 'Self-supervised' indicates the correlation obtained after self-supervised correction. c–h, Examples of raw and corrected readouts from the sock (c,d), vest (e,f) and robot arm sleeve (g,h). Our method removes artefacts and enhances the smoothness of the sensors. The colour bar indicates the relative pressure at each sensing point.
Learning on human–environment interactions
Using our full-body sensing garments, we collected a large tactile dataset (over 1,000,000 frames recorded at 14 Hz; details are provided in the Methods)26 featuring various human–environment interactions, including diverse contact patterns from sitting on chairs of different materials with different postures, complex body movements and other daily activities (Supplementary Videos 3–7). The capture of human behaviour, skills and crafts is essential for cultural preservation and transfer of knowledge, as well as for human and robot performance optimization6,19. We demonstrate the capability and utility of our platform by leveraging our data for environment and action classification, motion pattern discovery and full-body human pose prediction.

Our full-sized sensing vest shows the characteristic pressure distributions during sitting, standing, reclining and other actions, which indicate the wearer's pose, activity and the texture of the contacted surfaces (Supplementary Video 6). We captured a dataset for a wearer performing different poses over various surfaces (Fig. 4a and Supplementary Fig. 11a). Projecting the high-dimensional sensory responses into 2D space via t-distributed stochastic neighbour embedding (t-SNE)27, we observe that the recordings from different classes naturally form distinctive clusters, indicating the discriminative power of the vest (Fig. 4b). Our vest also exhibits a discriminative resolution of 2 cm, which is superior to that of a human's back (~4.25 cm)28. We dressed a manikin in the vest and collected a dataset by pressing three letter cutouts—'M', 'I' and 'T'—against the back
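A minimal sketch of such a t-SNE projection, assuming the recorded vest frames are stored as a NumPy array of shape (n_frames, 32, 32) with one integer class label per frame (scikit-learn's t-SNE is used here purely for illustration; the authors do not state which implementation or settings they used):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Stand-in data: replace with recorded tactile frames and their pose/surface labels.
frames = np.random.rand(2000, 32, 32)         # (n_frames, H, W)
labels = np.random.randint(0, 9, size=2000)   # one class label per frame

# Flatten each frame into a feature vector and embed into 2D.
features = frames.reshape(len(frames), -1)
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(features)

plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=3, cmap="tab10")
plt.title("t-SNE of tactile vest frames")
plt.show()
```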
Fig. 4 | Learning on human–environment interactions. a, Example photographs and tactile frames. b, t-SNE plot from our pose dataset recorded by the
tactile vest. The separation of clusters corresponding to each pose illustrates the discriminative capability of the sensing vest. c, Example photographs
and tactile frames of ‘M’, ‘I’ and ‘T’ pressed on the tactile vest. d, Letter classification accuracy drops as sensor resolution decreases (top). The confusion
matrix for classifying the letter and the orientation (bottom). e, Location of the 19 joint angles representing body pose in our model. f, Mean squared error
in pose prediction. g,h, Influence of sensor resolution (g) and the number of input frames (h, temporal window) on prediction performance. The dashed
lines represent the baseline defined as the canonical mean pose of the training data. i, Comparison of various poses recorded by MOCAP (ground truth)
and the same poses recovered by our model from the tactile socks pressure frames. Notable errors in the arm region are highlighted in red. j, Time series
prediction of walking. k, PCA on tactile maps from walking (insets are corresponding tactile frames).
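Figure 4e–j summarizes a model that recovers full-body pose (19 joints) from temporal windows of tactile sock frames and is evaluated by mean squared error against MOCAP ground truth. A minimal sketch of such a regressor is given below; the convolutional backbone, window length, input resolution and per-joint 3D output are assumptions for illustration, not the published architecture (the actual model is described in the Methods).

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """Regress a full-body pose (19 joints) from a temporal window of tactile
    sock frames. Backbone, window length, input resolution and the
    3-coordinate-per-joint output are illustrative assumptions."""
    def __init__(self, window: int = 20, n_joints: int = 19):
        super().__init__()
        self.n_joints = n_joints
        self.encoder = nn.Sequential(
            nn.Conv2d(window, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),                  # (batch, 64, 4, 4)
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, n_joints * 3),             # x, y, z per joint
        )

    def forward(self, x):                             # x: (batch, window, H, W)
        return self.head(self.encoder(x)).view(-1, self.n_joints, 3)

model = PoseRegressor()
frames = torch.rand(4, 20, 32, 32)                    # stand-in sock pressure windows
predicted = model(frames)                             # (4, 19, 3)
target = torch.zeros_like(predicted)                  # MOCAP poses would go here
loss = nn.functional.mse_loss(predicted, target)      # mean squared error, as in Fig. 4f
```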
Reporting Summary
Nature Research wishes to improve the reproducibility of the work that we publish. This form provides structure for consistency and transparency
in reporting. For further information on Nature Research policies, see our Editorial Policies and the Editorial Policy Checklist.
Data analysis: All data visualization and analysis were performed with customized code based on open-source libraries, including PyTorch and OpenCV.
For manuscripts utilizing custom algorithms or software that are central to the research but not yet described in published literature, software must be made available to editors and
reviewers. We strongly encourage code deposition in a community repository (e.g. GitHub). See the Nature Research guidelines for submitting code & software for further information.
Data
Policy information about availability of data
All manuscripts must include a data availability statement. This statement should provide the following information, where applicable:
- Accession codes, unique identifiers, or web links for publicly available datasets
- A list of figures that have associated raw data
The data that support the plots within the paper and other findings of this study are available from the corresponding authors upon reasonable request.
Replication: Different devices made with the same materials were tested multiple times with different activities.
Authentication: NA
Mycoplasma contamination: NA
Recruitment: Consenting subjects were recruited from within the research group.
Ethics oversight: NA
Note that full information on the approval of the study protocol must also be provided in the manuscript.