
Articles
https://doi.org/10.1038/s41928-021-00558-0
Nature Electronics 4, 193–201 (2021) | www.nature.com/natureelectronics

Learning human–environment interactions using conformal tactile textiles

Yiyue Luo1,2,3, Yunzhu Li1,3 ✉, Pratyusha Sharma1,3, Wan Shou1,3 ✉, Kui Wu1,3, Michael Foshey1,3, Beichen Li1,3, Tomás Palacios2,3, Antonio Torralba1,3 ✉ and Wojciech Matusik1,3 ✉

Recording, modelling and understanding tactile interactions is important in the study of human behaviour and in the development of applications in healthcare and robotics. However, such studies remain challenging because existing wearable sensory interfaces are limited in terms of performance, flexibility, scalability and cost. Here, we report a textile-based tactile learning platform that can be used to record, monitor and learn human–environment interactions. The tactile textiles are created via digital machine knitting of inexpensive piezoresistive fibres, and can conform to arbitrary three-dimensional geometries. To ensure that our system is robust against variations in individual sensors, we use machine learning techniques for sensing correction and calibration. Using the platform, we capture diverse human–environment interactions (more than a million tactile frames) and show that the artificial-intelligence-powered sensing textiles can classify humans' sitting poses, motions and other interactions with the environment. We also show that the platform can recover dynamic whole-body poses, reveal environmental spatial information and discover biomechanical signatures.

Living organisms extract information and learn from the surroundings through constant physical interactions1. Humans are particularly receptive to tactile cues (on hands, limbs and torso), which allow complex tasks such as dexterous grasp and locomotion to be carried out2. Observing and modelling interactions between humans and the physical world are thus fundamental to the study of human behaviour3, and also for the development of applications in healthcare4, robotics5,6 and human–computer interactions7. However, studies of human–environment interactions typically rely on easily observable visual or audible datasets8, and obtaining tactile data in a scalable manner remains difficult.

Wearable electronics have benefited from innovations in advanced materials9–11, designs12–16 and manufacturing techniques17,18. However, sensory interfaces that offer conformal, full-body coverage and can record and analyse whole-body interactions have not been developed. Such full-sized tactile sensing garments could be used to equip humanoid robots with electronic skin for physical human–robot collaboration2,6, could serve as auxiliary training devices for athletes by providing real-time interactive feedback and recording19, and could assist high-risk individuals, such as the elderly, in emergencies (a sudden fall, for example) and early disease detection (heart attacks or Parkinson's disease, for example)20 by acting as unobtrusive health-monitoring systems.

In this Article, we report a full-body tactile textile for the study of human activities (Fig. 1). Our textiles are based on coaxial piezoresistive fibres (conductive stainless-steel threads coated with a piezoresistive nanocomposite), produced using an automated coating technique. The functional fibres can be turned into large-scale sensing textiles that can conform to arbitrary three-dimensional (3D) geometries, using digital machine knitting, an approach that addresses current challenges in the large-scale manufacturing of functional wearables. An artificial intelligence (AI)-based computational workflow is also developed here to assist in the calibration, recording and analysis of full-body human–environment interactions using the tactile textiles.

Conformal tactile sensing textiles
Our low-cost (~US$0.2 per metre; details about cost estimation are provided in Supplementary Table 1) coaxial piezoresistive fibres are created using a scalable automated fabrication process (Fig. 1a; further details on characterization and preparation are provided in Fig. 2a–c and the Methods). A pair of fibres are then orthogonally overlapped to create a sensing unit, which converts pressure stimuli (a normal force acting on the surface) into electrical signals1,8. Figure 2d shows that the measured resistance of a typical sensor drops from ~8 kΩ to 2 kΩ in response to an increasing applied normal force (or pressure) in the range of 0.1–2 N (with an average peak hysteresis of −22%; Supplementary Fig. 1f). Our functional fibre has reliable performance over 1,000 load and unload cycles (Fig. 2f). It also maintains stable performance in a wide range of daily environments, such as different temperatures (20–40 °C) and humidities (40–60%, Supplementary Fig. 1i–k). The performance of the sensing unit can be tuned on demand by adjusting the material compositions (copper and graphite weight percentages, Supplementary Fig. 1c,d) and fabrication process (pulling speed and material feeding rate, Supplementary Fig. 1e).

To fabricate 3D conformal tactile textiles in a scalable manner, we use digital machine knitting to seamlessly integrate the functional fibres into shaped fabrics and full-sized garments. Although weaving has been widely attempted for inserting functional fibres into fabric, knitting is chosen here as it has two primary advantages over weaving. First, the fabrication process is simpler: whereas woven fabric must be cut and sewn to form a garment, full-garment machine knitting can directly manufacture wearables with arbitrary 3D geometry21. Second, the interlocking loops of yarn (that is, stitches) used in knitting create a softer, stretchier fabric, which ensures comfort and compatibility during natural human motions (Supplementary Video 1).
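As a worked example of how a single sensing unit's reading can be used downstream, the monotonic resistance drop quoted above (~8 kΩ at light touch to ~2 kΩ near 2 N) allows a resistance sample to be mapped back to an approximate load by interpolating a calibration curve. The sketch below is illustrative only: the calibration table is an assumed stand-in, not the measured curve of Fig. 2d.

```python
import numpy as np

# Hypothetical calibration table for one sensing unit. The values are assumed,
# loosely based on the ~8 kOhm -> ~2 kOhm drop over 0.1-2 N quoted in the text;
# a real deployment would use the measured curve of Fig. 2d.
calib_force_n = np.array([0.1, 0.5, 1.0, 2.0])          # applied normal force (N)
calib_resistance_kohm = np.array([8.0, 5.0, 3.2, 2.0])  # sensor resistance (kOhm)

def resistance_to_force(r_kohm: float) -> float:
    """Estimate the applied force from a resistance reading by linear interpolation.

    np.interp needs monotonically increasing x values, so the (decreasing)
    calibration arrays are reversed before interpolating.
    """
    return float(np.interp(r_kohm, calib_resistance_kohm[::-1], calib_force_n[::-1]))

print(resistance_to_force(4.0))  # ~0.78 N for this illustrative table
```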

1Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA. 2Microsystems Technology Laboratories, Massachusetts Institute of Technology, Cambridge, MA, USA. 3Electrical Engineering and Computer Science Department, Massachusetts Institute of Technology, Cambridge, MA, USA. ✉e-mail: [email protected]; [email protected]; [email protected]; [email protected]


[Figure 1 schematic: a, coaxial piezoresistive fibre fabrication line (nanocomposite feeding, stainless-steel thread feeding, thermal curing, winding system) feeding a digital knitting machine with a readout circuit; b, knitted wearables (i)–(iv); c, neural networks for self-supervised calibration, interaction identification, signature discovery and full-body motion prediction.]

Fig. 1 | Textile-based tactile learning platform. a, Schematic of the scalable manufacturing of tactile sensing textiles using a customized coaxial
piezoresistive fibre fabrication system and digital machine knitting. A commercial conductive stainless-steel thread is coated with a piezoresistive
nanocomposite (composed of polydimethylsiloxane (PDMS) elastomer as the matrix and graphite/copper nanoparticles as the conductive filler).
b, Digitally designed and automatically knitted full-sized tactile sensing wearables: (i) artificial robot skin, (ii) vest, (iii) sock and (iv) glove. c, Examples of
tactile frames collected during human–environment interactions and their applications explored using machine learning techniques.

Because of its relative stiffness compared to regular knitting yarn (Supplementary Fig. 1g), the piezoresistive functional fibre is not suitable to be knitted directly by forming loops. We thus used an alternative knitting technique—inlay—which horizontally integrates the yarn in a straight configuration (Supplementary Fig. 5e and Supplementary Video 2). We assembled two knitted fabrics with functional fibres inlaid perpendicularly to form a sensing matrix. Different knitting patterns can be used to tune the performance of the sensing matrix for customized applications. The highest achieved sensitivity (1.75 kPa) and a detection range of up to 87.5 kPa are demonstrated in Fig. 2e (the configuration is shown in Supplementary Fig. 5f).

We computationally designed and automatically knitted several example garments: socks (216 sensors over an area of 144 cm2, Supplementary Fig. 6a), a vest (1,024 sensors over 2,100 cm2, Supplementary Fig. 6b), a robot arm sleeve (630 sensors over 720 cm2, Supplementary Fig. 6c) and full-sized gloves (722 sensors over 160 cm2, Supplementary Fig. 6d). Thanks to the digital machine knitting system22, these garments are fully customizable: they can be adapted to individual shape, size, surface texture and colour preference, meeting the needs of personalization and fashion design. Details of the knitting operation and our designs can be found in Supplementary Figs. 5 and 6.

Self-supervised sensing correction
During the large-scale manufacturing of wearable sensors, it is inevitable that variations, and thus failures, will be introduced. Although researchers conventionally attempt to fabricate flawless sensor arrays23,24, we draw inspiration from living organisms, which develop the ability to adapt their sensory system in the presence of localized defects or environmental variations using overall sensing1,2,4. We adopt a similar mechanism to provide robust sensing capabilities while relaxing the flawless requirement in sensor fabrication. This is critical for many applications, as it is impractical to perform individual calibration and correction of our sensing units due to their high density, complex geometries and diverse applications. To implement such a mechanism, we developed a self-supervised learning paradigm that learns from weak supervision, using spatial-temporal contextual information to normalize sensing responses, compensate for variation and fix malfunctioning sensors. In particular, we first calibrate our tactile socks and gloves by collecting synchronized tactile responses and readings from a digital scale stepped on or pressed by a wearer (Fig. 3a and Supplementary Video 3). At each frame, the scale reading indicates the overall applied pressure, which is expected to correlate linearly with the sum of tactile responses at all sensing points. We then train a fully convolutional neural network with four hidden layers, which takes a small sequence of raw tactile array responses as input and outputs a single frame with the same spatial array resolution (Supplementary Fig. 8a). The output represents the corrected tactile response of the middle frame of the input sequence. The neural network is optimized via stochastic gradient descent with an objective consisting of two components.

[Figure 2 panels: a, photograph of the functional fibre and sensing fabrics; b, stainless-steel thread, functional fibre and acrylic yarn; c, SEM cross-section; d, resistance (kΩ) versus load (N) for a single sensor; e, resistance (kΩ) versus pressure (kPa) for functional fibres without fabrics, manual inlay, automatic inlay plus manual inlay, and automatic inlay; f, resistance (kΩ) over repeated load/unload cycles (time steps).]

Fig. 2 | Characterization of the functional fibre. a, Photograph of the piezoresistive functional fibre (>100 m) and sensing fabrics. b, Optical image
of a stainless-steel thread, coaxial piezoresistive fibre and acrylic knitting yarn. c, Scanning electron microscopy (SEM) cross-sectional image of the
piezoresistive fibre. d, The resistance profile of a typical sensor (composed of two piezoresistive fibres) in response to pressure (or normal force).
Error bars indicate the standard deviation of four individual sensors. e, The influence of fabric structures (‘manual’ inlay and automatic inlay) on device
performance. Behaving as a buffer, the introduction of the soft fabric (red curve) decreases the sensitivity and increases the sensing range. Also, the ribbed
structures obtained from automatic inlay induce gaps between two aligned fabrics (dark blue and light blue curves), further decreasing the sensitivity and
increasing the detection range. Detailed structures are shown in Supplementary Fig. 5f. f, The stable performance of a typical sensor over 1,000 cycles of
force load and unload. (Additional characterization of the fibre and fabric is provided in the Supplementary Information).

One component encourages the output to preserve the spatial details from the input, and the other requires the summed tactile response to be close to the reading from the scale. Our network consistently increases the correlation between the tactile response and the reference (the reading from the scale): the correlation improves from 75.9% to 91.1% in the right sock, from 92.4% to 95.8% in the left sock and from 77.7% to 88.3% in the glove (Fig. 3a and Supplementary Fig. 9a–d).

To further calibrate the sensing fabrics with arbitrary shapes, such as the vest and robot arm sleeve, we treat the corrected glove as a mobile, flexible 'scale' and record data from both the tactile glove and the target garments while researchers randomly press on the latter through the glove. With the same self-supervised learning framework (Fig. 3b), the correlation between the responses increases from 32.1% to 74.2% for the tactile vest and from 58.3% to 90.6% for the tactile robot arm sleeve (Fig. 3b and Supplementary Fig. 9e). The self-supervised calibration network exploits the inductive bias underlying the convolutional layers25, learns to remove artefacts, and produces more uniform and continuous responses (Fig. 3c–h and Supplementary Fig. 10). It enables the large-scale sensing matrix to be robust against variation among the individual elements, and even their occasional disruption, consequently improving the reliability of the measurement.
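A minimal PyTorch sketch of the two-term objective described above (scale matching plus detail preservation; the Methods specify a 0.1 weight on the reconstruction term). The tiny network and tensor shapes are placeholders rather than the architecture of Supplementary Fig. 8a.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CalibrationNet(nn.Module):
    """Toy stand-in for the fully convolutional correction network."""
    def __init__(self, seq_len: int = 5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(seq_len, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):            # x: (batch, seq_len, H, W) raw tactile frames
        return self.body(x)          # (batch, 1, H, W) corrected middle frame

def self_supervised_loss(net, frames, scale_reading, recon_weight=0.1):
    """Two-term objective: match the weak scale supervision and preserve detail."""
    corrected = net(frames)
    mid = frames[:, frames.shape[1] // 2]                    # raw middle frame
    # Term 1: summed corrected response should match the scale reading.
    scale_loss = F.mse_loss(corrected.sum(dim=(1, 2, 3)), scale_reading)
    # Term 2: L1 reconstruction keeps the corrected frame close to the raw input.
    recon_loss = F.l1_loss(corrected.squeeze(1), mid)
    return scale_loss + recon_weight * recon_loss

net = CalibrationNet()
frames = torch.rand(8, 5, 32, 32)     # batch of raw tactile sequences (dummy data)
scale = torch.rand(8) * 100.0         # synchronized scale readings (dummy data)
loss = self_supervised_loss(net, frames, scale)
loss.backward()
```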


[Figure 3 panels: a,b, correlation bar plots (Raw, Manual, Self-supervised) for the sock/glove and vest/robot arm sleeve corrections (for example, 0.759 to 0.911 and 0.321 to 0.742); c–h, image, raw and corrected tactile readouts.]

Fig. 3 | Self-supervised correction. a, Procedure for correcting the tactile sock. The wearer steps on a digital scale (a tennis ball was placed between the foot and the scale to enhance conformal contact) and synchronized readings from the sock and scale are collected. The same method is used for the tactile gloves. b, Procedure for self-supervised correction of the vest using the response of the calibrated glove as the reference. The same method is also used for the robot arm sleeve. The bar plots on the right (a,b) show the correlations between the tactile response and the scale reading: 'Raw' indicates the correlation of the original, unprocessed tactile signal; 'Manual' indicates the correlation of manually adjusted data in which all saturated tactile signals were clipped; and 'Self-supervised' indicates the correlation obtained after self-supervised correction. c–h, Examples of raw and corrected readouts from the sock (c,d), vest (e,f) and robot arm sleeve (g,h). Our method removes artefacts and enhances the smoothness of the sensor responses. The colour bar indicates the relative pressure at each sensing point.
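The correlation values reported in Fig. 3a,b compare the per-frame summed tactile response with the synchronized reference reading. A minimal sketch of such a metric, assuming a Pearson correlation and dummy arrays in place of real recordings:

```python
import numpy as np

def tactile_scale_correlation(tactile_frames: np.ndarray, reference: np.ndarray) -> float:
    """Pearson correlation between the summed tactile response and reference readings.

    tactile_frames: (n_frames, H, W) raw or corrected sensor responses.
    reference: (n_frames,) synchronized reference (digital scale, or glove sum in Fig. 3b).
    """
    summed = tactile_frames.reshape(len(tactile_frames), -1).sum(axis=1)
    return float(np.corrcoef(summed, reference)[0, 1])

# Illustrative dummy data only.
frames = np.random.rand(100, 32, 32)
reference = frames.sum(axis=(1, 2)) + np.random.randn(100) * 5.0
print(tactile_scale_correlation(frames, reference))
```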

Learning on human–environment interactions
Using our full-body sensing garments, we collected a large tactile dataset (over 1,000,000 frames recorded at 14 Hz; details are provided in the Methods)26 featuring various human–environment interactions, including diverse contact patterns from sitting on chairs of different materials with different postures, complex body movement and other daily activities (Supplementary Videos 3–7). The capture of human behaviour, skills and crafts is essential for cultural preservation and the transfer of knowledge, as well as for human and robot performance optimization6,19. We demonstrate the capability and utility of our platform by leveraging our data for environment and action classification, motion pattern discovery and full-body human pose prediction.

Our full-sized sensing vest shows the characteristic pressure distributions during sitting, standing, reclining and other actions, which indicate the wearer's pose, activity and the texture of the contacted surfaces (Supplementary Video 6). We captured a dataset for a wearer performing different poses over various surfaces (Fig. 4a and Supplementary Fig. 11a). Projecting the high-dimensional sensory responses into 2D space via t-distributed stochastic neighbour embedding (t-SNE)27, we observe that the recordings from different classes naturally form distinctive clusters, indicating the discriminative power of the vest (Fig. 4b). Our vest also exhibits a discriminative resolution of 2 cm, which is superior to that of a human's back (~4.25 cm)28. We dressed a manikin in the vest and collected a dataset by pressing cutouts of three letters—'M', 'I' and 'T'—against the back in different orientations (Fig. 4c and Supplementary Fig. 11c).
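A minimal scikit-learn sketch of the kind of embedding used for Fig. 4b: flattened vest frames are projected into two dimensions with t-SNE and can then be scatter-plotted by class. The array shapes and perplexity are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
from sklearn.manifold import TSNE

# Illustrative stand-in for recorded vest frames: (n_frames, 32, 32) pressure maps
# with an integer pose/surface label per frame.
frames = np.random.rand(2000, 32, 32)
labels = np.random.randint(0, 10, size=2000)

# Flatten each frame into a vector and embed into 2D.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(frames.reshape(len(frames), -1))

# embedding[:, 0] against embedding[:, 1], coloured by label, shows whether the
# classes form separable clusters as in Fig. 4b.
```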

[Figure 4 panels: a, example photographs and tactile frames; b, t-SNE plot of vest recordings for ten pose/surface classes (Sit_plasticchair, Sit_woodenchair, Laiddown_cushion, Laiddown_dots, Laiddown_sofa, Sit_sofalazy, Sit_sofaleft, Sit_sofaright, Sit_sofastraight, Lean_wall); c, 'M', 'I' and 'T' letter presses; d, classification accuracy (top-1, top-3, random) versus effective input resolution (32 × 32 down to 1 × 1) and the letter/orientation confusion matrix; e, locations of the 19 joints; f–h, mean squared error of pose prediction per joint, versus sensor resolution and versus number of input frames; i,j, tactile footprints with ground-truth and predicted poses for squatting, lifting a leg, twisting, lunging, bending and walking; k, PCA of walking tactile maps.]

Fig. 4 | Learning on human–environment interactions. a, Example photographs and tactile frames. b, t-SNE plot from our pose dataset recorded by the
tactile vest. The separation of clusters corresponding to each pose illustrates the discriminative capability of the sensing vest. c, Example photographs
and tactile frames of ‘M’, ‘I’ and ‘T’ pressed on the tactile vest. d, Letter classification accuracy drops as sensor resolution decreases (top). The confusion
matrix for classifying the letter and the orientation (bottom). e, Location of the 19 joint angles representing body pose in our model. f, Mean squared error
in pose prediction. g,h, Influence of sensor resolution (g) and the number of input frames (h, temporal window) on prediction performance. The dashed
lines represent the baseline defined as the canonical mean pose of the training data. i, Comparison of various poses recorded by MOCAP (ground truth)
and the same poses recovered by our model from the tactile socks pressure frames. Notable errors in the arm region are highlighted in red. j, Time series
prediction of walking. k, PCA on tactile maps from walking (insets are corresponding tactile frames).
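A simplified PyTorch stand-in for the pose-prediction model behind Fig. 4e–j (full details are in the Methods): a convolutional network consumes a short window of left/right sock pressure maps and regresses 19 joint angles, trained with a mean-squared-error loss. Layer widths and the 32 × 32 map size are illustrative assumptions, not the architecture of Supplementary Fig. 8c.

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """Regress joint angles from a window of sock pressure maps (illustrative sizes)."""
    def __init__(self, n_frames: int = 30, n_joints: int = 19, n_angles: int = 3):
        super().__init__()
        # Input: (batch, 2 * n_frames, 32, 32) -- left and right sock frames stacked.
        self.conv = nn.Sequential(
            nn.Conv2d(2 * n_frames, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_joints * n_angles),   # axis-angle parameters per joint
        )
        self.n_joints, self.n_angles = n_joints, n_angles

    def forward(self, x):
        return self.head(self.conv(x)).view(-1, self.n_joints, self.n_angles)

model = PoseRegressor()
window = torch.rand(4, 60, 32, 32)    # 4 samples, 30 frames x 2 feet (dummy data)
target = torch.rand(4, 19, 3)         # ground-truth joint angles from MOCAP (dummy)
loss = nn.functional.mse_loss(model(window), target)
loss.backward()
```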


The classification network (Supplementary Fig. 8b) takes a small window of tactile responses and predicts the type and orientation of the letter with an accuracy of 63.76%; the accuracy drops as the effective resolution decreases (Fig. 4d). The discriminative capability of the sensing socks was demonstrated similarly with action classification. Details are provided in Supplementary Fig. 11e.

Humans maintain the dynamic balance of the body by redirecting the centre of mass and exerting forces on the ground, which results in distinct pressure distributions on the feet3,29,30. Given this, we hypothesize that a person's pose can be estimated from the change of pressure distribution over time obtained by our tactile socks, shown as a sequence of pressure maps (Fig. 4i,j). Here, the body pose is represented by 19 joint angles spanning the legs, torso and arms (Fig. 4e). We record synchronized tactile data from a pair of sensing socks and a full-body motion capture (MOCAP) suit while the user conducts versatile actions (Supplementary Video 4). We model the pose prediction task as a regression problem using a convolutional neural network (Supplementary Fig. 8c). The model processes a time series of tactile array footprints that contain the evolving information about the contact events and predicts the human pose in the middle frame. We optimize the neural network by minimizing the mean squared error between the predicted and ground-truth joint angles (MOCAP data) using stochastic gradient descent.

The model learns to make accurate predictions that are both smooth and consistent over time, achieving a mean squared error that is 70.1% lower than our baseline, which always outputs the mean pose. As shown in Fig. 4f–j and Supplementary Video 5, our model achieves higher accuracy for the poses in the torso and legs than in the arms. This is congruous with our observation that the pressure distributions on the feet are mostly affected by lower-body movement, and that the majority of body mass is located in the torso3. We also observe that the recovered dynamic motion implies the environmental spatial information (Supplementary Video 5), for example, whether the stairs are upwards or downwards. The significance of sensing resolution and temporal information is reiterated, as the performance drops with a systematic reduction in either the input resolution of the tactile pressure map or the context size of the input tactile frames (Fig. 4g,h).

To further understand the patterns that emerged from the tactile dataset recorded by the sensing socks, we used principal component analysis (PCA) to project the tactile signals collected from walking onto a 2D plane (Fig. 4k). Intriguingly, the walking signals naturally form a circular pattern in the reduced PCA space. The pressure distribution smoothly transitions back and forth between the left and right foot as we traverse the circle, which describes signatures of the different phases during walking3.

Our results indicate that a pair of tactile socks can potentially replace the bulky MOCAP system. Our approach sets a path towards the analysis and study of human motion activities without much physical obtrusion in domains like sports, entertainment, manufacturing activities and care of the elderly. Furthermore, such footprints contain dynamic body balance strategies, demonstrating a valuable, instructive paradigm for robot locomotion and manipulation3,6.

Finally, we also demonstrate that our 3D conformal sensing textiles can be used as tactile robot skin. Most modern robots rely solely on vision; however, large-scale and real-time tactile feedback is critical for dexterous manipulation and interaction skills, especially when vision is occluded or disabled31,32. Our functional textile enables conformal tactile coverage of the robot arm (Fig. 1b(i)), gripper, limbs and other functional parts with complex 3D geometries, endowing robots with real-time tactile feedback (Fig. 3e and Supplementary Video 7). Our platform offers a critical ingredient for unobtrusive multi-point collision detection and physical human–robot interaction, which remains challenging with torque sensors embedded in the robot arm6.

Conclusions
We have reported large-scale tactile sensing textiles with dense sensor arrays that are conformal to arbitrary 3D geometries. The textiles are manufactured via an automated, inexpensive and scalable approach that combines functional fibres and full-garment digital machine knitting. A self-supervised sensing correction has been developed to normalize the sensor responses and correct malfunctioning sensors in the array. Demonstrations of various human–environment interaction learning using the system suggest that our approach could be of use in biomechanics, cognitive sciences and child development, as well as in imitation learning for intelligent robots.

Methods
Fabrication of coaxial piezoresistive functional fibre. The functional fibre was constructed in two parts: a conductive core and a piezoresistive sheath. The piezoresistive sheath was prepared by mixing graphite nanoparticles (400–1,200 nm, US Research Nanomaterials), copper nanoparticles (580 nm, US Research Nanomaterials) and PDMS elastomer (Sylgard-184, base to curing agent weight ratio of 10:1, Dow Corning). Silicone solvent OS2 (Dow Corning) was added to optimize the viscosity of the mixture for the coating. The mixture was thoroughly mixed by a speed mixer (FlackTek) at 2,500 r.p.m. for 90 s. The prepared piezoresistive composite was loaded into a syringe (Nordson), which was connected to a customized material reservoir with a 500-µm-diameter inlet and a 700-µm-diameter outlet (Fig. 1a). Constant pressure (20 psi) was applied to the syringe with a Nordson dispenser, while the 3-ply stainless-steel thread (Sparkfun, DEV-13814) was fed to the inlet (Fig. 1a). The thread was then pulled by a continuously rotating motor and coated with the nanocomposites. After thermal curing at 150 °C, the resulting coaxial piezoresistive fibre (namely, the functional fibre) was wound into a roll (Fig. 1a). Each sensor was constructed at the intersection of two orthogonally overlapped functional fibres, forming a layered structure with the piezoresistive nanocomposite sandwiched between two conductive electrodes (inset, Fig. 2d).

Morphological characterization. The longitudinal uniformity is illustrated by optical images (Olympus SZ61, Supplementary Fig. 1a). The microstructure of the coaxial piezoresistive fibre was further characterized using a scanning electron microscope (Zeiss Merlin, Supplementary Fig. 1b). All fibres were characterized without an additional conductive coating in the scanning electron microscope using a voltage of 3–5 kV.

Electrical characterization. An individual sensor, constructed from two orthogonally aligned functional fibres, was used for electrical characterization. The resistance profile was recorded by a digital multimeter (DMM 4050, Tektronix) while an adjustable pressure (or normal force) was applied to the sensor by a mechanical testing system (Instron 5944). The applied load was controlled at a specific strain rate for all tests (0.05–0.5 mm min−1). To evaluate the robustness of the sensor, we cycled the force between 0.1 and 0.5 N for more than 1,000 cycles at a constant strain rate of 0.1 mm min−1 (Fig. 2f). We measured the performance of sensors composed of fibres with various graphite and copper compositions and coating thicknesses (Supplementary Fig. 1a,b,h). Sensor stability was studied with different fibre alignment angles, humidities and temperatures (Supplementary Fig. 1i–k). The effect of fabric structure on single-sensor performance was evaluated by recording the resistance of an isolated sensor embedded in knitted fabrics with different structures. Also, sensors consisting of different combinations of fabric structures inlaid with functional fibres (automatic and manual inlaying) were investigated (Fig. 2e and Supplementary Fig. 5f). The typical sensing resolution ranged from 0.25 cm2 to 4 cm2 (for details see Supplementary Section 1.4).

Mechanical characterization. The tensile test was conducted on an Instron mechanical tester (Instron 5984). The fabricated functional fibre (diameter of 600 µm) was compared with the stainless-steel core thread and two common kinds of acrylic knitting yarn (Tamm Petit C4240 and Rebel TIT8000), as shown in Supplementary Fig. 1g. Samples with a length of 10 cm were pulled at a strain rate of 5 mm min−1. The yield strength of the functional fibre is over six times larger than that of the acrylic knitting yarns. However, its ultimate strain is less than 10% of the acrylic knitting yarn's ultimate strain. These mechanical characteristics require special accommodations during the machine knitting process. The functional fibre shows mechanical properties similar to those of the core stainless-steel thread. More mechanical characterizations of the tactile sensing fabric are provided in Supplementary Section 1.4.

Conformal 3D sensing textiles manufacturing. Digital machine knitting. The full-sized conformal garments and coverings were computationally designed using KnitPaint22 and the stitch meshing framework, and later automatically fabricated with a V-bed digital knitting machine (SWG091N2, Shima Seiki), which has two beds of needles (front and back) forming an inverted 'V' shape.

Each needle is composed of a hook, which catches the yarn and holds the topmost loop, and a slider, which can be actuated to move vertically to close the loop and assist in holding the loop. Movement of the yarn carrier is synchronized to the needle operation, where yarns are fed in with appropriate tension (Supplementary Fig. 5a). We used three basic needle operations during fabrication: knit, tuck and transfer. A knit operation actuates the needles to grab the fed yarn from the yarn carrier, form a new loop, and pull it through the existing loop to connect the loops in columns (Supplementary Fig. 5b). A tuck operation actuates the needles to grab the yarn and hold it without forming a new loop (Supplementary Fig. 5c). A transfer operation actuates needles from both beds to pass the existing loop from one bed to the other (Supplementary Fig. 5d). We also utilized racking, where the back bed shifts laterally to the left or to the right as a whole to create needle offsets during transferring, to guide the complex 2D and 3D shaping.

Inlaying. Two methods of inlaying were utilized to seamlessly integrate the piezoresistive fibres into conformal 3D knitted textiles: automatic inlaying and manual inlaying. Automatic inlaying requires the fabric structure of 'ribs', which are textured vertical stripes created by alternating columns of knit stitches on the front and back bed. The functional fibre is forced to move simultaneously with the normal knitting yarn, which is caught by the needles on the two beds and forms alternating knit stitches to hold down the inlaid functional fibre (Supplementary Fig. 5e and Supplementary Video 2). The ribbed structure (alternating stitches on the front and back bed) allows a straightforward inlaying design; however, it creates a textured gap when two fabrics are aligned orthogonally to act as a functioning device, which lowers the sensitivity and accuracy. Manual inlaying requires consecutive movements of the normal knitting yarn and the functional fibre. Four steps of operations are conducted: (1) knitting with normal knitting yarn; (2) transfer of specific loops from the front to the back bed; (3) moving the functional fibre across the fabric; and (4) transfer of specific loops from the back to the front bed to their original positions (Supplementary Fig. 5e and Supplementary Video 2). A flat, inlaid fabric (without ribbed structure) was fabricated with manual inlay, and designs with continuous functional fibre coverage were achieved by alternating the transfer direction (from the front bed to the back bed or from the back bed to the front bed). However, due to the fibre stiffness and the limited space between the two beds, the functional fibre can barely stay down during the second round of stitch transfer and is easily caught by the needles, leading to the destruction of fibre functionality. Therefore, manual inlay can only be applied to 2D structures.

Full-sized conformal 3D sensing textiles. Two knitted fabrics of specific 2D and 3D shapes with horizontally and vertically inlaid functional fibres were arranged as a double-layer structure to form the large-scale sensing matrix (Supplementary Fig. 6). To optimize the manufacturability and sensing performance of the garments, we exploited both automatic inlaying and manual inlaying. Different combinations of fabric texture (ribbed or not ribbed) were selected for different devices to obtain the desired sensitivity and detection range (Fig. 2e). Full-sized socks, a vest, gloves and a conformal robot arm (LBR iiwa, KUKA) sleeve were fabricated with the designs shown in Supplementary Fig. 6. To connect the embedded sensing matrices to the readout circuit for data transmission without disrupting the main knitted fabric, cutting was performed at the edge of the fabric, where 1–2-cm functional fibres were pre-reserved through additional tucking at the edge for connection. A modified electrical-grounding-based circuit architecture26 was used to eliminate most crosstalk and parasitic effects of the passive matrix (Supplementary Fig. 7a).

Self-supervised sensing correction. Dataset. During data collection for the tactile sock correction, our researcher wore the sock (with a customized printed circuit board mounted on the calves) and conducted random stepping on a digital weighing scale (Etekcity). The tactile learning platform is powered by a portable energy source (Dell XPS 13 9380, 30 Wh), which allows real-time recording (data acquisition and serialization) while being carried by the researcher in a backpack. The fully charged laptop allows 4 h of recording. We recorded both the real-time tactile information from the sensing matrix and the readings from the scale. To ensure the conformal contact of all sensors, we placed a tennis ball on the scale during the data-collection process (Fig. 3a and Supplementary Video 3). Sensing correction for the tactile gloves used a similar data-collection process. For data collection with gloves, the printed circuit boards were mounted on the arms. A person wore the glove and conducted random pressing on a digital scale (Lansheng). We used 3D-printed shapes (dots, lines and curves) to ensure conformal contact of individual sensors (Supplementary Video 3). In total, we collected 25,024 frames for the left sock correction, 37,275 frames for the right sock correction and 108,305 paired frames for glove calibration. We split the dataset sequentially, with the first 80% reserved for training, another 10% for validation and the remaining 10% for testing.

Data processing and network architecture. The data-collection process is assumed to be quasistatic, so the readings from the scale should have a linear correlation with the sum of all sensor responses on the glove or the sock. Based on this observation, we developed a self-supervised learning framework that uses the scale's reading as weak supervision to guide correction of the garment. We preprocessed all frames by replacing shorted or dead sensors with the average value of their surroundings. A fully convolutional neural network augmented with skip connections was developed (Supplementary Fig. 8a). The model takes a small sequence of the processed frames as input and then adds an individual bias term to each sensor. The output of the model (denoted conv4 in Supplementary Fig. 8a) is a tensor of the same spatial resolution as the input with a channel size of 16. The scale prediction is then defined as the mean of all values in this tensor. We applied an affine transformation to the first channel of the tensor to derive the calibration result, which corresponded to the middle frame of the input sequence. The affine transformation was parameterized by two scalar numbers—a scaling factor and a bias term—which were shared by all sensors in the same garment.

The calibration network was expected to predict the readings from the scale accurately and preserve the details of the input frame. Hence, the overall objective was a reweighted sum of two components: one was the mean squared error between the predicted scale reading and the actual scale reading, while the other was an L1 distance between the calibration result and the corresponding input frame, where we scaled the second reconstruction loss with a factor of 0.1. We optimized the parameters in the convolutional layers, the position-wise bias and the global affine transformation using stochastic gradient descent, which employed the Adam optimizer with an initial learning rate of 0.001 and a batch size of 64. We decreased the learning rate by a factor of 0.8 whenever the loss on the validation set plateaued for five epochs.

Self-correction. After properly correcting the glove, we used it as our new 'scale' to calibrate other garments like the vest and the robot arm sleeve. During data collection, the customized printed circuit board was mounted on the leg for the vest and on the robot arm for the sleeve. We randomly pressed the target garment embedded with the sensing matrix using the corrected glove (Supplementary Video 3). In total, 56,031 paired frames and 44,172 frames were collected for the vest and robot arm sleeve corrections, respectively. We used the same data split, network architecture and training procedure, except that we used the sum of the sensors from the calibrated glove as the target value to minimize the scale prediction error.

Human pose classification. To evaluate the discriminative power of the vest and the quality of the obtained signal, a neural-network-based classifier was developed to distinguish different sitting postures and contacted surfaces. As shown in Supplementary Fig. 11a, we selected 10 different classes of poses and recorded 82,836 tactile frames in total to construct the dataset. The input to the model is a small sequence of 45 consecutive frames, which totals ~3 s in real time. The sequences were selected from the dataset with a stride of 2, among which 5,000 sequences were used for validation (500 per class), another 10,000 sequences were reserved as the test set (1,000 per class), and we used the rest for training.

As shown in Supplementary Fig. 8b, we passed each tactile frame through three shared convolutional layers. The resulting 45 vectors with a length of 512 after flattening were then passed through a bidirectional gated recurrent unit33 and two fully connected layers to derive the final probabilistic distribution over the 10 classes. We trained the model using the standard cross-entropy loss for 20 epochs with the Adam optimizer with a learning rate of 0.001 and a batch size of 32. We obtained an accuracy of 99.66% on the test set in distinguishing different lying postures and supporting surfaces (Supplementary Fig. 11b). A real-time pose classification with an accuracy of 94% is demonstrated in Supplementary Video 8. Here, the real-time classifier predicts the probability of each class, accompanied by the ground-truth human pose.

Letter classification. The dataset was collected from the tactile vest worn by a manikin while models of three letters—'M', 'I' and 'T'—were pressed against the back (Supplementary Fig. 11c). We recorded 62,932 frames in total for the 10 classes and retrieved a top-1 accuracy of 63.76% and a top-3 accuracy of 88.63% using the same network architecture, data splitting and training procedure as the pose classification task (Supplementary Fig. 11d). To ablate the resolution of the sensor, we reassigned the value in each 2 × 2 grid with the average of the four values, which reduced the effective resolution to 16 × 16. We then used the same classification training pipeline to obtain the accuracy. A similar procedure was employed for calculating the accuracy for sensors with effective resolutions of 8 × 8, 4 × 4 and 1 × 1.

Action classification and signature discovery. In this experiment, we used tactile information from the two socks worn by a person to identify which action the wearer was conducting. The dataset consists of tactile frames retrieved from a pair of socks while the wearer conducts nine different activities: walking, climbing up the stairs, climbing down the stairs, fast walking, standing on toes, jumping, leaning on the left foot, leaning on the right foot and standing upright (Supplementary Fig. 11e). The dataset contains 90,295 frames across the different action categories. The same classification pipeline achieved a top-1 accuracy of 89.61% and a top-3 accuracy of 96.97% (Supplementary Fig. 11f).

To further analyse the patterns underlying the signals collected from the socks, we performed PCA to visualize their distribution. We used the 12,245 frames from the class of walking in the action classification dataset. We concatenated and flattened the signals from the left and right sock as the high-dimensional representation at each time step, each of which has 2,048 dimensions. We extracted the principal components with the highest and second-highest variance to project the high-dimensional responses to a 2D space and visualize them in Fig. 4k.
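A minimal scikit-learn sketch of the PCA projection just described, with small random arrays standing in for the 12,245 recorded walking frames:

```python
import numpy as np
from sklearn.decomposition import PCA

# Small random stand-ins for the recorded walking frames (the paper uses 12,245);
# each sock frame is treated as a 32 x 32 pressure map, consistent with the
# 2,048-dimensional per-time-step representation described above.
n_frames = 2000
left_sock = np.random.rand(n_frames, 32, 32)
right_sock = np.random.rand(n_frames, 32, 32)

# Concatenate and flatten both socks into one 2,048-dimensional vector per frame.
signals = np.concatenate(
    [left_sock.reshape(n_frames, -1), right_sock.reshape(n_frames, -1)], axis=1
)  # shape: (n_frames, 2048)

# Keep the two components with the largest variance and project into 2D; plotting
# the two columns against each other traces the circular gait pattern of Fig. 4k.
projected = PCA(n_components=2).fit_transform(signals)  # (n_frames, 2)
```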

Full-body pose prediction. Dataset. The key idea of full-body pose prediction is to look at how different sequences of tactile footprints correlate with the different poses as the person transitions from one pose to another. To this end, we simultaneously collected data from a pair of tactile socks (frame rate of 13–14 Hz) and an XSENS motion capture system (MOCAP, frame rate of 50 Hz) worn by a person. The person conducted exercises and other daily activities, including walking, bending forward, twisting, lunging and so on (Supplementary Video 4). The collected dataset is diverse in terms of poses as the person transitions through different tasks. The MOCAP is composed of 17 inertia-based sensors, mounted on 17 key points on the human body to record and estimate the orientation of 19 different joints during the movement. The real-time pressure imprints from both feet were recorded and fed into the network. The 19 different joints include the joints of the legs, arms and torso. The collected dataset includes 282,747 frames of concurrent MOCAP and tactile pressure maps: 236,036 frames (~83.5%) were used as the training set, 10,108 frames (~3.5%) as the validation set and 36,603 frames (~13%) as the test set.

Data processing and network architecture. We trained a deep convolutional neural network to predict the 19 different joints of the human body, given the tactile footprints of the person. The architecture of the network is described in Supplementary Fig. 8c. The network consists of two convolution layers, which take in a sequence of tactile frames from the socks from time step t − k to time step t + k. The input to the model is 30 consecutive frames of the pressure maps of the left and right feet. These layers extract patterns from the 2D signal, and the resulting embedding is then passed through three fully connected layers to finally output the predicted relative joint angles of the human body, corresponding to the person's pose at time step t. The relative joint angles are the relative angle transformations of the distal joints with respect to the proximal joints, represented in the axis-angle format. The network was trained using stochastic gradient descent to minimize the mean squared error between the predicted joint angles and the ground truth.

We trained the network with a batch size of 128 and a learning rate of 0.01 using the Adam optimizer. Predicting the pose in the joint angle space instead of the position space was a design choice, but the code provided can be easily modified to predict the pose in position space. The collected dataset also contains positions of the different joints that could be used to train such a model.

Evaluation. The model is validated with the test dataset, which includes various exercises. The ability of the model to infer each of the joint angles is illustrated in Supplementary Fig. 13. The model can infer the motion of the lower body better than the motion of the upper body. It is also found that the motion of the hands does not induce a systematic change in the tactile pressure map, and hence their joint angles are more difficult to infer. There was no additional loss used to train the system to be temporally consistent. The network figured out the correspondence between the trajectory and the pose, and smoothly transitioned from one pose to another accordingly. More results are provided in Supplementary Fig. 12 and Supplementary Video 5. We also evaluated how the performance of the model changes with a systematic reduction in the resolution of each of the feet from 32 × 32 to 1 × 1 (with the same ablation method used for letter classification).

Reporting Summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability
Data that support the findings of this study are available from the corresponding authors upon reasonable request. Source data are provided with this paper.

Code availability
The code used to generate the plots within this paper is available from the corresponding authors upon reasonable request.

Received: 26 August 2020; Accepted: 18 February 2021; Published online: 24 March 2021

References
1. Dahiya, R. S., Metta, G., Valle, M. & Sandini, G. Tactile sensing—from humans to humanoids. IEEE Trans. Robot. 26, 1–20 (2009).
2. Winter, D. A. Human balance and posture control during standing and walking. Gait Posture 3, 193–214 (1995).
3. Someya, T., Bao, Z. & Malliaras, G. G. The rise of plastic bioelectronics. Nature 540, 379–385 (2016).
4. Yousef, H., Boukallel, M. & Althoefer, K. Tactile sensing for dexterous in-hand manipulation in robotics—a review. Sens. Actuator A Phys. 167, 171–187 (2011).
5. Yang, G. Z. et al. The grand challenges of Science Robotics. Sci. Robot. 3, eaar7650 (2018).
6. Sundaram, S., Kellnhofer, P., Zhu, J. Y., Torralba, A. & Matusik, W. Learning the signatures of the human grasp using a scalable tactile glove. Nature 569, 698–702 (2019).
7. Poupyrev, I. et al. Project Jacquard: interactive digital textiles at scale. In Proc. 2016 CHI Conference on Human Factors in Computing Systems (CHI '16) 4216–4227 (ACM, 2016).
8. Zeng, W. et al. Fiber-based wearable electronics: a review of materials, fabrication, devices and applications. Adv. Mater. 26, 5310–5336 (2014).
9. Yan, W. et al. Advanced multimaterial electronic and optoelectronic fibers and textiles. Adv. Mater. 31, 1802348 (2019).
10. Rogers, J. A., Someya, T. & Huang, Y. Materials and mechanics for stretchable electronics. Science 327, 1603–1607 (2010).
11. Engler, A. J., Sen, S., Sweeney, H. L. & Discher, D. E. Matrix elasticity directs stem cell lineage specification. Cell 126, 677–689 (2006).
12. Moeslund, T. B., Hilton, A. & Kruger, V. A survey of advances in vision-based human motion capture and analysis. Comput. Vis. Image Underst. 104, 90–126 (2006).
13. Rein, M. et al. Diode fibres for fabric-based optical communications. Nature 560, 214–218 (2018).
14. Kim, D. H. et al. Stretchable and foldable silicon integrated circuits. Science 320, 507–511 (2008).
15. Wang, S., Oh, J. Y., Xu, J., Tran, H. & Bao, Z. Skin-inspired electronics: an emerging paradigm. Acc. Chem. Res. 51, 1033–1045 (2018).
16. Xenoma; https://xenoma.com/#products/
17. Ahn, B. Y. et al. Omnidirectional printing of flexible, stretchable and spanning silver microelectrodes. Science 323, 1590–1593 (2009).
18. Truby, R. L. & Lewis, J. A. Printing soft matter in three dimensions. Nature 540, 371–378 (2016).
19. Adams, J. A. A closed-loop theory of motor learning. J. Mot. Behav. 3, 111–150 (1971).
20. Wang, Z., Yang, Z. & Dong, T. A review of wearable technologies for elderly care that can accurately track indoor position, recognize physical activities and monitor vital signs in real time. Sensors 17, 341 (2017).
21. Narayanan, V., Wu, K., Yuksel, C. & McCann, J. Visual knitting machine programming. ACM Trans. Graph. 38, 63 (2019).
22. Shima Seiki. SDS-ONE APEX3 http://www.shimaseiki.com/product/design/sdsone_apex/flat/ (2011).
23. Briseno, A. L. et al. Patterning organic single-crystal transistor arrays. Nature 444, 913–917 (2006).
24. Khang, D. Y., Jiang, H., Huang, Y. & Rogers, J. A. A stretchable form of single-crystal silicon for high-performance electronics on rubber substrates. Science 311, 208–212 (2006).
25. Ulyanov, D., Vedaldi, A. & Lempitsky, V. Deep image prior. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018) 9446–9454 (IEEE, 2018).
26. D'Alessio, T. Measurement errors in the scanning of piezoresistive sensors arrays. Sens. Actuator A Phys. 72, 71–76 (1999).
27. Maaten, L. V. D. & Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008).
28. Lederman, S. J. & Klatzky, R. L. Haptic perception: a tutorial. Atten. Percept. Psychophys. 71, 1439–1459 (2009).
29. Bauby, C. E. & Kuo, A. D. Active control of lateral balance in human walking. J. Biomech. 33, 1433–1440 (2000).
30. Scott, J. et al. From kinematics to dynamics: estimating center of pressure and base of support from video frames of human motion. Preprint at https://arxiv.org/abs/2001.00657 (2020).
31. Okamura, A. M., Smaby, N. & Cutkosky, M. R. An overview of dexterous manipulation. In Proc. 2000 ICRA Millennium Conference, IEEE International Conference on Robotics and Automation, Symposia Proceedings Vol. 1, 255–262 (IEEE, 2000).
32. Cheng, G. et al. A comprehensive realization of robot skin: sensors, sensing, control and applications. Proc. IEEE 107, 2034–2051 (2019).
33. Cho, K. et al. Learning phrase representations using RNN encoder–decoder for statistical machine translation. Preprint at https://arxiv.org/abs/1406.1078 (2014).

Acknowledgements
This work is supported by the Toyota Research Institute. We thank L. Makatura, P. Kellnhofer, A. Kaspar and S. Sundaram for their helpful suggestions for this work. We also thank D. Rus for the use of the mechanical tester and J. L. McCann for providing us with the necessary code to programmatically work with our industrial knitting machine and visualize the knitting structure.

Author contributions
Y. Luo, W.S. and M.F. developed and implemented the functional fibre fabrication set-up.

Y. Luo and W.S. conceived and implemented the sensor design and performed characterizations. Y. Luo and K.W. designed and fabricated the full-body sensing textiles. Y. Luo, W.S., Y. Li, P.S., K.W. and B.L. conducted data collection. Y. Li conducted the self-supervised sensing correction. P.S. conducted the experiments on 3D pose prediction from the tactile footprints. Y. Li and P.S. implemented the classification framework. T.P., A.T. and W.M. supervised the work. All authors contributed to the study concept, conceived of the experimental methods, discussed and generated the results, and prepared the manuscript.

Competing interests
The authors declare no competing interests.

Additional information
Supplementary information The online version contains supplementary material available at https://doi.org/10.1038/s41928-021-00558-0.
Correspondence and requests for materials should be addressed to Y. Li, W.S., A.T. or W.M.
Peer review information Nature Electronics thanks Sihong Wang, Jun Chen and Lining Yao for their contribution to the peer review of this work.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© The Author(s), under exclusive licence to Springer Nature Limited 2021, corrected publication 2021


Reporting Summary

Corresponding author(s): Wan Shou, Yunzhu Li, Antonio Torralba, Wojciech Matusik
Last updated by author(s): Feb 17, 2021

Nature Research wishes to improve the reproducibility of the work that we publish. This form provides structure for consistency and transparency in reporting. For further information on Nature Research policies, see our Editorial Policies and the Editorial Policy Checklist.

Statistics
For all statistical analyses, confirm that the following items are present in the figure legend, table legend, main text, or Methods section:
- The exact sample size (n) for each experimental group/condition, given as a discrete number and unit of measurement
- A statement on whether measurements were taken from distinct samples or whether the same sample was measured repeatedly
- The statistical test(s) used AND whether they are one- or two-sided (only common tests should be described solely by name; describe more complex techniques in the Methods section)
- A description of all covariates tested
- A description of any assumptions or corrections, such as tests of normality and adjustment for multiple comparisons
- A full description of the statistical parameters including central tendency (e.g. means) or other basic estimates (e.g. regression coefficient) AND variation (e.g. standard deviation) or associated estimates of uncertainty (e.g. confidence intervals)
- For null hypothesis testing, the test statistic (e.g. F, t, r) with confidence intervals, effect sizes, degrees of freedom and P value noted (give P values as exact values whenever suitable)
- For Bayesian analysis, information on the choice of priors and Markov chain Monte Carlo settings
- For hierarchical and complex designs, identification of the appropriate level for tests and full reporting of outcomes
- Estimates of effect sizes (e.g. Cohen's d, Pearson's r), indicating how they were calculated

Our web collection on statistics for biologists contains articles on many of the points above.

Software and code
Policy information about availability of computer code
Data collection: Tactile frames from on-body human recording were captured using a custom-developed electronic module and commands. The poses and motions of the human subject were captured with a commercial motion capture system, Xsens MVN.
Data analysis: All data visualization and analysis were performed with customized code based on open-source libraries, including PyTorch and OpenCV.
For manuscripts utilizing custom algorithms or software that are central to the research but not yet described in published literature, software must be made available to editors and reviewers. We strongly encourage code deposition in a community repository (e.g. GitHub). See the Nature Research guidelines for submitting code & software for further information.

Data
Policy information about availability of data
All manuscripts must include a data availability statement. This statement should provide the following information, where applicable:
- Accession codes, unique identifiers, or web links for publicly available datasets
- A list of figures that have associated raw data
- A description of any restrictions on data availability

The data that support the plots within the paper and other findings of this study are available from the corresponding authors upon reasonable request.

Field-specific reporting
Please select the one below that is the best fit for your research. If you are not sure, read the appropriate sections before making your selection.
Life sciences Behavioural & social sciences Ecological, evolutionary & environmental sciences
For a reference copy of the document with all sections, see nature.com/documents/nr-reporting-summary-flat.pdf

Life sciences study design
All studies must disclose on these points even when the disclosure is negative.
Sample size: One consenting female subject with multiple recordings
Data exclusions: No data excluded
Replication: Different devices made with the same materials were tested multiple times with different activities.
Randomization: Subject was selected within the research group
Blinding: Data from the tests were analyzed by different authors

Reporting for specific materials, systems and methods
We require information from authors about some types of materials, experimental systems and methods used in many studies. Here, indicate whether each material, system or method listed is relevant to your study. If you are not sure if a list item applies to your research, read the appropriate section before selecting a response.

Materials & experimental systems: Antibodies | Eukaryotic cell lines | Palaeontology and archaeology | Animals and other organisms | Human research participants | Clinical data | Dual use research of concern
Methods: ChIP-seq | Flow cytometry | MRI-based neuroimaging

Eukaryotic cell lines
Policy information about cell lines
Cell line source(s): NA
Authentication: NA
Mycoplasma contamination: NA
Commonly misidentified lines (see ICLAC register): NA

Human research participants
Policy information about studies involving human research participants
Population characteristics: Healthy female subject in the age range of 20–40
Recruitment: Consenting subjects were recruited from within the research group
Ethics oversight: NA
Note that full information on the approval of the study protocol must also be provided in the manuscript.
