
International Journal of Production Research

ISSN: 0020-7543 (Print) 1366-588X (Online) Journal homepage: https://www.tandfonline.com/loi/tprs20

Human-robot collaboration in disassembly for sustainable manufacturing

Quan Liu, Zhihao Liu, Wenjun Xu, Quan Tang, Zude Zhou & Duc Truong Pham

To cite this article: Quan Liu, Zhihao Liu, Wenjun Xu, Quan Tang, Zude Zhou & Duc Truong Pham (2019): Human-robot collaboration in disassembly for sustainable manufacturing, International Journal of Production Research, DOI: 10.1080/00207543.2019.1578906

To link to this article: https://doi.org/10.1080/00207543.2019.1578906

Published online: 14 Feb 2019.


Human-robot collaboration in disassembly for sustainable manufacturing


Quan Liu a,b, Zhihao Liu a,b, Wenjun Xu a,b,*, Quan Tang a,b, Zude Zhou a and Duc Truong Pham c

a School of Information Engineering, Wuhan University of Technology, Wuhan, China; b Hubei Key Laboratory of Broadband Wireless Communication and Sensor Networks, Wuhan University of Technology, Wuhan, China; c Department of Mechanical Engineering, University of Birmingham, Birmingham, UK
(Received 26 January 2018; accepted 30 January 2019)

Sustainable manufacturing is a pressing global issue oriented to the sustainable development of humanity and society. In this context, this paper addresses human-robot collaborative disassembly (HRCD) and its contribution to economic, environmental and social sustainability. A detailed enabling systematic implementation for HRCD is presented, combining a set of advanced technologies such as cyber-physical production systems (CPPS) and artificial intelligence (AI). It involves five aspects, namely perception, cognition, decision, execution and evolution, aimed at the dynamics, uncertainties and complexities of disassembly. Deep reinforcement learning, incremental learning and transfer learning are also investigated within the systematic approaches for HRCD. The case study demonstrates multi-modal perception of the robot system and the human body in a hybrid human-robot collaborative disassembly cell, sequence planning for an HRCD task, a distance-based safety strategy and a motion-driven control method. The results manifest the feasibility and effectiveness of the proposed approaches for HRCD and verify the functionalities of the systematic framework.
Keywords: sustainable manufacturing; human-robot collaboration; product disassembly; cyber-physical production system; artificial intelligence

1. Introduction
So far, the focus on and discussion of sustainability and sustainable development have existed for nearly 50 years (Haapala et al. 2013), making them pillars of smart manufacturing (Kusiak 2018). Sensing, smart and sustainable elements have become essential for enterprises facing global challenges (Miranda et al. 2017). As the backbone of industry, sustainable manufacturing has shown great influence on the economy, the environment and society. Economically, sustainable manufacturing promotes innovation and change in business models, creates new space for economic growth, orients business services towards the whole life cycle of production, and accelerates the development of diversified economic modes and markets. For the environment, sustainable manufacturing reduces the use and waste of raw materials, increases the utilisation of resources, and slows pollution and emissions. For society, sustainable manufacturing creates new human capital and provides more and better work (Jovane, Westkämper, and Williams 2008; Jovane et al. 2008).
In sustainable manufacturing, disassembly, as the main production mode of remanufacturing, is of great significance for economic and environmental benefits such as resource recycling, energy saving and emission reduction. On the social side, the development and deployment of industrial robotics is reflected in robots taking over the heavy, repetitive and dirty jobs formerly held by human operators in disassembly. However, in many existing disassembly environments, robots cannot fully replace human operators, owing to the individual differences among recycled products, which demand a high level of human intelligence. To cope with this, human-robot collaboration (HRC) is one solution that aims to assist, not replace, workers engaged in a wide variety of applications (Djuric, Urbanic, and Rickli 2016). One path to future manufacturing is to let humans and robots work more closely together (Ore et al. 2016).
Before HRC was introduced into the manufacturing area, human-robot interaction (HRI) in robotics had already been an extremely extensive and diverse R&D activity (Tsarouchi, Makris, and Chryssolouris 2016), mainly embodied in the design and development of collaborative robots and of collaborative capabilities for traditional ones. In human-robot collaboration, the combination of humans, robots and products creates the need to connect different systems such as human sensing systems, robot control systems, product status sensing systems and process control systems. Information systems have a long history in manufacturing technologies (Sanchez and Nagi 2001), but things become different when multi-modal sensors, robots and intelligent algorithms are taken into consideration.
*Corresponding author. Email: [email protected]


© 2019 Informa UK Limited, trading as Taylor & Francis Group

All the elements in the physical world and the information systems in the cyber world make up a typical cyber-physical production system (CPPS). This system is based on progress in computer science, information and communication technology, sensor technology and network technology. It includes information systems and hardware resources (Lee, Bagheri, and Kao 2015), and supports communication between humans, machines and production (Monostori 2014). Undoubtedly, R&D work related to human-robot collaborative manufacturing (HRC-Mfg) has already taken place in CPPS (Wang et al. 2017). CPS, in turn, has also become a powerful booster for HRC applications (Nikolakis, Maratos, and Makris 2019).
However, even though HRC has been widely applied in industry, it is still at an early, user-experience stage whose intelligence needs to be improved, especially in disassembly applications. This is mainly caused by the difficulty of perceiving human intention, robot motion and product status. Besides, HRC decision making should be in a position to consider both the real-time dynamics and the recorded knowledge of humans, robots and products, which brings challenges to existing decision systems. Fortunately, owing to the rapid development of artificial intelligence and computing power, it is now foreseeable that these powerful new technologies can be utilised to promote the efficiency of HRC, which is also a new trend in manufacturing. Moreover, intelligent algorithms should be deployed in the CPPS because robot systems lack computing power, just as data from all kinds of sensors are gathered into the cloud in the Internet of Things. On the other hand, this also supplements the concept and function of CPPS.
In this paper, a systematic development framework called PCDEE-Circle is proposed for human-robot collaborative disassembly (HRCD) in sustainable manufacturing. Artificial intelligence methods for perception, cognition, decision making, and knowledge formation and evolution are also proposed to meet the special requirements of HRCD in sustainable manufacturing. From the perspective of innovative information technologies, this paper also presents numerous advanced intelligent methods and analyses why they could and should be implemented in HRCD. The case study verifies the feasibility of the proposed framework in perception, decision making and control, namely through the implementation of a multi-modal perception platform for an industrial robot system and the human body, a bees-algorithm-based sequence planning method for an HRCD task, a safety assurance strategy, and a motion-driven control method. As for cognition and knowledge formation and evolution, this paper discusses the workflow in light of the characteristics and requirements of HRCD but leaves their deployment to future work.

2. Related work
HRC is a comprehensive research area, also known as human-robot cooperation and interaction. With the rapid development of robotic and AI technologies, working with robots has become a desire of human beings. The pioneering work can be traced to articles from the 1980s (Awad, Engelhardt, and Leifer 1983). In 1985, researchers started to identify the factors in the design and development of human-robot interactive workstations (Holloway, Leifer, and Van der Loos 1985). However, due to the limitations of enabling technologies in the last century, research on HRC stalled at basic design (Rahimi and Karwowski 1990; Kobayashi 2000). At the beginning of the twenty-first century, key technologies for HRC were designed and tested, such as the perception of human gestures (Waldherr, Romero, and Thrun 2000).
The application of HRC-Mfg was introduced around 2010 (Kato, Fujita, and Arai 2010; Tan and Arai 2010). Based on almost 30 years of study on HRI and HRC, applications of HRC-Mfg, including the perception of industrial robots and humans, have from the very beginning taken a multitude of aspects into account. Safety, as the top priority of HRC, has been studied by a number of researchers. Methods for collaborative zone design, robot speed limitation and vision-based human motion monitoring have been investigated (Tan et al. 2012; Wang 2015). Safety-related ISO standards and metrics have been reviewed (Hu et al. 2013; Zanchettin et al. 2016), proving the feasibility of RGB-D camera applications in HRC. The estimation and evaluation of injuries in human-robot collisions have also been researched to minimise the consequences of collisions (Robla-Gómez et al. 2017). Path planning is another topic explored by many investigators; it needs to incorporate human factors, especially in mixed HRC-Mfg environments (Zanchettin and Rocco 2013). Task allocation and procedure arrangement are crucial and distinctive parts of HRC-Mfg compared with common HRC applications; examples include Rahman's trust-based optimal subtask allocation in HRC-Mfg (Rahman, Sadrfaridpour, and Wang 2016) and optimised scheduling using integer linear programming (Bogner et al. 2018). Evaluation and assessment in HRC are the basis of strategy making. Research on this topic mainly includes manufacturing capability assessment through data fusion (Cheng et al. 2017), mental strain evaluation using physiological parameters (Kato, Fujita, and Arai 2010) and analytic-hierarchy-process-based evaluation for multiple criteria (Tan and Arai 2010).
In the industrial area, the number of industrial robots deployed in manufacturing environments is growing at an overwhelmingly high rate, significantly facilitating the development of intelligent manufacturing. In recent years, the concept of collaborative robots has appeared and been adopted in practical industry, and new collaborative industrial robots, e.g. KUKA iiwa, ABB YuMi, Rethink Baxter and Rethink Sawyer (Weber 2014; Han et al. 2016), have gradually been put onto the industrial market. For disassembly, an intuitive programming environment based on 3D safety sensors has been researched and implemented for the disassembly of lithium-ion batteries (Gerbers et al. 2018). Nevertheless, the psychological and social factors of HRC-Mfg need to be addressed and embedded in development to make robot actions acceptable and comfortable for humans (Sadrfaridpour, Saeidi, and Wang 2016).
Currently, there is still no standard implementation paradigm for HRCD. Although HRC has attracted attention in the manufacturing industry, the uncertainty and complexity of disassembly are much higher than those of assembly, so research on HRC and HRC-Mfg remains relatively rare in product disassembly. Abdullah, Popplewell, and Page (2003) concluded that for tasks like assembly, method implementation should consider not only product technology but also the industrial environment where the task occurs. Methods of perception, cognition, task allocation and assessment need to be modified according to the characteristics of disassembly. Besides, artificial intelligence is being rapidly adopted in machine vision, autonomous driving, games and primitive HRI; but for HRC in manufacturing, especially HRCD, the lack of implementations, or even of well-designed frameworks, is obvious.

3. Human-robot collaborative disassembly within CPPS: PCDEE-circle framework


Although modern manufacturing devices are constantly being introduced, fully automated disassembly is still impractical. HRCD bridges the gap between fully manual operation and full automation in sustainable manufacturing. Based on the design ideas of CPPS, this paper presents an HRCD framework named PCDEE-Circle, shown in Figure 1. The PCDEE-Circle is divided into five phases, namely perception, cognition, decision, execution and evolution, reflected in one external circle and two internal circles.
Since HRCD is obviously a complex production model containing humans, robots, multiple products and background environments, it requires many kinds of sensors and interfaces of multiple modalities. Multi-modal perception is an integration of sensing technologies. Aiming at the dynamics (human behaviour, robot motion, product delivery, etc.), individual differences (different human individuals, different cases of damage to recycled products, etc.) and uncertainties (human intention, impacts on programme and time delay caused by long-term usage of products, etc.) in HRCD, multi-target cognition is built for content analysis. Decision and execution realise the physical interaction of different individuals in HRCD, while knowledge formation and evolution supports the whole framework from behind.

Figure 1. PCDEE-circle: an HRCD framework.



The first internal circle is a human-in-loop circle (PCDE-Circle); it contains multi-modal perception, multi-target cognition, strategy and decision making, and control. Though robot and human participate together in this PCDE-Circle, the human is the main factor in it: human factors are the key to decision making and guide the control and execution of robots and programmes. The next internal circle is a robot-in-loop circle (DEE-Circle), which embodies the last three aspects of the whole external circle. From Figure 1, we can see that the intersection of these two internal circles comprises the decision and execution aspects, which are exactly the kernel for realising physical HRC. The external PCDEE circle takes a macroscopic view of the whole HRCD process: it is a product-in-loop circle, representing that the human-robot collaboration here serves production.
Regarding the logic of the PCDEE-Circle, multi-modal perception technology captures the parameters of the industrial robot system and the behaviour of human beings during HRCD. Multi-target cognition technology then recognises the industrial robot body, human behaviour, disassembly objects, disassembly tools, the background environment and disassembly tasks, so as to support strategy and decision making. Intelligent decision making based on reinforcement learning (RL) or swarm intelligence is trained continuously in the CPPS to satisfy the requirements of HRCD. Finally, knowledge formation and evolution based on incremental learning (IL) and transfer learning (TL) accumulate the knowledge generated during HRCD and achieve knowledge sharing through an industrial cloud robot system and other related technologies.
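As a minimal illustration of this logic, the following Python sketch expresses one pass of the external PCDEE circle as a control cycle. Every function in it is a hypothetical placeholder for the subsystems detailed in Section 4; only the ordering of the phases follows the framework.

```python
# A minimal sketch of one PCDEE pass; all phase functions are hypothetical
# stand-ins for the subsystems described in Sections 4.1-4.5.
def pcdee_cycle(perceive, cognise, decide, execute, evolve, knowledge):
    raw = perceive()                      # multi-modal perception (4.1)
    targets = cognise(raw)                # multi-target cognition (4.2)
    plan = decide(targets, knowledge)     # strategy and decision making (4.3)
    outcome = execute(plan)               # execution and control (4.4)
    evolve(knowledge, plan, outcome)      # knowledge formation/evolution (4.5)
    return outcome

# Stub run: each phase is replaced by a trivial function.
knowledge = {'episodes': 0}
outcome = pcdee_cycle(
    perceive=lambda: {'joints': [0.0] * 6, 'skeleton': []},
    cognise=lambda raw: {'human_near': len(raw['skeleton']) > 0},
    decide=lambda t, k: 'reduce_speed' if t['human_near'] else 'full_speed',
    execute=lambda plan: {'plan': plan, 'collision': False},
    evolve=lambda k, p, o: k.update(episodes=k['episodes'] + 1),
    knowledge=knowledge,
)
print(outcome, knowledge)
```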
Compared with the systems in the published literature, the PCDEE-Circle framework takes an original view of the human-in-loop, robot-in-loop and product-in-loop characteristics of implementing HRC in disassembly. Not only do we integrate perception, cognition, decision making and control into a holistic architecture, but we also, for the first time, bring in knowledge formation and evolution and the idea of using knowledge to support decision making, which is rarely seen in published frameworks.

4. Systematic approaches
4.1. Multi-modal perception
In order to achieve HRCD, integrating the real-time states of the industrial robot, human worker, manufacturing cells and tasks is the key problem to be solved (Liu et al. 2017). In the PCDEE-Circle, multi-modal perception in the CPPS is the key technology for this. As shown in Figure 2, the multi-modal perception architecture for HRCD includes four layers: the physical layer, the transport layer, the cyber layer and the application layer.
The physical layer consists of industrial robots, robot controllers, a multi-modal sensor group and other electronic equipment. Among them, the industrial robot, with its high programmability, can be equipped with different tools to cooperate with human operators in handling different disassembly tasks.

Figure 2. Multi-modal perception in HRCD.



Table 1. Multi-modal data in cyber modules.

| Module | Data name | Source | Data type | Units |
|---|---|---|---|---|
| Industrial robot | Joint angles | Robot controller | Array | ° |
| | Position of the tool | Robot controller | Array | cm |
| | Torque | Robot controller | Numerical value | N·m |
| | Payload | Robot controller | Numerical value | kg |
| | Task | Robot controller | String | / |
| | Programme | Robot controller | String and files | / |
| HMI | Inputs | HMI | I/O | Boolean |
| Vision towards the human body | RGB | Microsoft Kinect | Image | / |
| | Infrared | Microsoft Kinect | Image | / |
| | Depth | Microsoft Kinect | Image | / |
| | Skeleton | Microsoft Kinect | Array | cm |
| | Point cloud | Microsoft Kinect | Array | cm |
| Vision towards disassembly products | RGB | Industrial camera | Image | / |
| | Colour | Industrial vision software | String | / |
| | Shape | Industrial vision software | String | / |
| | Position of product | Industrial vision software | Array | cm |
| | Type of product | Cognition system | String | / |
| | Damaged condition | Assessment system | String or numerical value | / |
| PLC & IPC | Triggers | PLCs & IPCs | I/O | Boolean |
| Detailed human factor | Hands | Leap Motion | Array | cm |
| | Fingers | Leap Motion | Array | cm |
| | Forearms | Leap Motion | Array | cm |
| Energy consumption | Voltage | Energy metre | Numerical value | V |
| | Current | Energy metre | Numerical value | A |
| | Power | Energy metre | Numerical value | W |
| | Cost | Energy metre | Numerical value | kW·h or $ |
| Alternative modules | Laser | Laser radar | Numerical value or image | cm |
| | Ultrasonic | Ultrasonic sensor | Numerical value | cm |
| | Touch | Touch sensor | Numerical value or I/O | cm or Boolean |
| | Voice | Microphone | Audio | / |
| | Metal detection | Metal sensor | I/O | Boolean |

The sensor group includes the energy metre, industrial cameras, RGB-D cameras and human factor sensors. Additionally, there is also basic electronic equipment such as human-machine interfaces, programmable logic controllers (PLCs) and industrial personal computers (IPCs).
The transport layer is based on the industrial fieldbus and the factory network; it realises the data transmission of the multi-modal perception sensors and provides different data interfaces for different cyber modules.
The cyber layer accepts various data from the physical layer through the transport layer and constructs different cyber modules according to their sources. As shown in Figure 2, cyber modules are incorporated into the cyber layer according to the specific perception source, such as industrial robots, cameras, the energy metre, human-machine interfaces (HMIs), PLCs, IPCs and so on. For different disassembly tasks and sensing needs, the cyber layer needs to be elastic, meaning that it can freely build cyber modules for alternative sensors. All the perception information in HRCD is shown in Table 1.
The application layer is the transition from the cyber layer to specific HRCD tasks. Most directly, operators and managers can manage data from the cyber layer in applications and conduct remote monitoring of HRCD tasks. Besides, the application layer also provides interfaces for other intelligent systems and computing resources, such as Hadoop, Spark and other big-data processing architectures, or TensorFlow, Keras and other training systems.
On the other hand, data fusion is a crucial and large-scale problem in HRCD. However, since the cyber modules change with different HRCD tasks, the data fusion methods cannot be fixed. A general solution to the data fusion problem is extremely hard to give; it should instead be replaced by a set of specific methods combined with specific HRCD tasks and cyber modules.
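As a minimal illustration, the sketch below shows how an elastic cyber layer might register task-specific cyber modules for the sources in Table 1. The module names, payload format and parsing logic are illustrative assumptions; the paper does not specify the cyber layer at code level.

```python
# A minimal sketch of an elastic cyber-module registry; source names and
# payload formats are hypothetical, not the paper's implementation.
import json
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class CyberModule:
    """One cyber module mirroring a physical perception source (Table 1)."""
    source: str                      # e.g. 'robot_controller', 'kinect'
    parse: Callable[[bytes], Any]    # raw transport payload -> typed data
    latest: Any = None               # most recent parsed sample

class CyberLayer:
    """Elastic registry: modules can be added or dropped per HRCD task."""
    def __init__(self) -> None:
        self.modules: Dict[str, CyberModule] = {}

    def register(self, name: str, module: CyberModule) -> None:
        self.modules[name] = module

    def ingest(self, name: str, payload: bytes) -> None:
        mod = self.modules[name]
        mod.latest = mod.parse(payload)

# Example: a joint-angle module fed by the robot controller.
layer = CyberLayer()
layer.register('joint_angles',
               CyberModule(source='robot_controller',
                           parse=lambda b: json.loads(b)['joints']))
layer.ingest('joint_angles', b'{"joints": [0.0, -30.0, 45.0, 0.0, 60.0, 0.0]}')
print(layer.modules['joint_angles'].latest)
```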

4.2. Multi-target cognition


Different from traditional CPPS applications, the objects of HRCD are highly complex ones, such as disassembly products, human behaviour and the manufacturing environment. This means the step from perception to application cannot be realised directly.

Figure 3. Multi-target cognition in HRCD.

Multi-target cognition is an artificial intelligence technology built on machine learning (ML) and pattern recognition. It takes various data formats as input and outputs cognitive results for different cognitive targets. As shown in Figure 3, in the hybrid human-robot collaborative manufacturing cell, the human-robot collaborative manufacturing system (HRCMS, an aggregation of multiple sub-systems) needs an overall sense of human workers, industrial robots, disassembly tools, disassembly objects and the background environment.

The cognition of human workers mainly takes RGB images and point cloud images as data sources and establishes human skeleton models using the human physiological structure. From these we obtain the human skeleton point cloud model, including real-time location and space occupancy. Combining the human skeleton point cloud model with the safety assessment system and the minimum safe distance calculation in the ISO standard (Matthias 2015), a human dynamic security model can be obtained for safety in HRCD. The force, coordinates and gestures of industrial robots can be calculated from the joint angle and torque data of multi-modal perception, based on kinematics, dynamics and the pre-designed programme; it is unnecessary to cognise them from vision information, which greatly reduces the difficulty of modelling. For disassembly products and tools, the traditional way is to place them at fixed positions in a fixed order. This method has obvious limitations. Firstly, it requires the location of each tool to be encoded and initialised in the robot programme, and staff must undergo a complex training process to adapt to a specific disassembly task. Secondly, the collaboration between human and robot brings more uncertainty to HRCD, so it is arduous to ensure that the needs and usage of tools in disassembly tasks remain unchanged. Therefore, it is indispensable to recognise the types and status of disassembly products and tools by multi-target cognition. In this process, one must combine the characteristics of the products with the data format, select an appropriate learning network, and train it with a well-designed sample set. The cognition of the background environment is the final part of multi-target cognition in HRCD, yet it remains a paramount part of cognition in the manufacturing environment (Christensen 2016). Through cognition of the background environment, the HRCD cell and production line can recognise additional workers, industrial robots, AGVs and other manufacturing equipment in the environment, as well as their behaviours and intentions. This improves the intelligence of the collaborative manufacturing system as a whole.
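As an illustration of the human dynamic security model, the following sketch computes the separation between a skeleton point cloud and the robot TCP and compares it with a simplified speed-and-separation minimum distance in the spirit of the ISO work cited above (Matthias 2015). The point data, velocities and timing parameters are illustrative assumptions, not values from the standard.

```python
# A minimal sketch, assuming skeleton points and the robot TCP are already
# in one world frame (metres); the protective distance is a simplified
# speed-and-separation form with illustrative parameters.
import numpy as np

def min_separation(skeleton_pts: np.ndarray, tcp: np.ndarray) -> float:
    """Smallest Euclidean distance from any skeleton point to the TCP."""
    return float(np.min(np.linalg.norm(skeleton_pts - tcp, axis=1)))

def protective_distance(v_human: float, v_robot: float,
                        t_react: float, t_stop: float,
                        margin: float = 0.1) -> float:
    """Distance both agents may cover before the robot is fully stopped,
    plus an intrusion margin."""
    return v_human * (t_react + t_stop) + v_robot * t_react + margin

skeleton = np.random.rand(25, 3) + np.array([1.5, 0.0, 0.0])  # 25 joints
tcp = np.array([0.4, 0.1, 0.8])
d = min_separation(skeleton, tcp)
s_min = protective_distance(v_human=1.6, v_robot=0.5, t_react=0.1, t_stop=0.3)
print('safe' if d > s_min else 'reduce speed or stop')
```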

4.3. Strategy and decision making


Studies in automated disassembly decision making and recovery planning have always been a key area in remanufacturing research (Tao et al. 2018). In traditional task decision making and scheduling, the capacity and responsible procedures of each manufacturing device are generally fixed, which can be regarded as a static scheduling process. For HRCD, in contrast, the decision-making environment is dynamic, unstructured and uncertain. Machine learning or swarm intelligence (Tang et al. 2017; Liu et al. 2018) can be adopted for sequence planning in HRCD.
RL is a kind of ML method in the field of AI. In RL, the decision-maker and all the external effects that may influence it together constitute the decision environment, while RL uses value functions to represent the sum of future rewards and punishments (Kulkarni 2012). For any action taken in the environment, the decision-maker receives rewards or punishments from the environment according to the corresponding action results. Through constant trial and correction, the decision-maker learns the strategy most likely to solve the problem. This method originated in the late twentieth century and achieved a major breakthrough with the AlphaGo project in 2015-2016, which enabled an intelligent system to defeat many professional players at the game of Go (Mnih et al. 2015; Silver et al. 2016). Since then, research on RL and deep reinforcement learning has become a hot spot in the academic and industrial sectors and has gradually revealed its application prospects in intelligent manufacturing (Zhao et al. 2016).
However, from the perspective of CPPS, unlike the game of Go, which runs fully in the cyber world, HRCD occurs completely in the physical world. Accordingly, HRCD must be transformed into a model in the cyber world so that RL can be applied for decision-making training. This is because RL training needs to accumulate experience from mistakes, but HRCD does not allow errors: any minor mistake can bring safety risks to humans. Therefore, the simulation and reproduction of HRCD in the cyber world is the precondition for the implementation of RL. The digital twin (Tao et al. 2017) is one possible paradigm for solving this problem. The definition of the digital twin given by NASA (Glaessgen and Stargel 2012) is a simulation process integrating multiple physical quantities, dimensions and probabilities; it builds a simulation model that completely reflects the physical structure and describes the full life cycle of the physical object using historical and real-time data. From this perspective, we propose multiple twin models for humans, industrial robots and manufacturing tasks. These models can be applied in RL training, so as to ultimately improve the decision-making ability of the system.
As shown in Figure 4, in the cyber world, digital twin models of the human and the industrial robot are built based on multi-modal perception data and on robotics and ergonomics theories. These models are not single models but combinations of several. The digital twin model of the human is composed of the kinematics model, the point cloud model and the skeleton model, which together represent the movement and space occupancy of the human body. The industrial robot twin model should include the kinematic, dynamic and visual models; these can be specially designed according to the type of industrial robot, such as its size and shape. Besides, we also need to establish mathematical models for the operating mechanism, disassembly tasks and safety assessment of HRCD to reflect production uncertainty. In the cyber world, production uncertainty and error models are represented by probability functions and models, mainly expressing safety assurance, task decomposition, sequence planning and scheduling evaluation. Finally, the twin models of humans, industrial robots and manufacturing tasks form a virtual hybrid human-robot collaborative manufacturing cell in the cyber world, which can further become a virtual production line.
Take safety assurance as an example: the virtual industrial robot, as the decision-maker, performs a disassembly task in a shared environment and tries to ensure that it does not collide with people. Obviously, if a collision occurs, the decision-maker is punished; otherwise, if there is no collision and the cooperative disassembly task is successfully completed, the decision-maker is rewarded. Over the iterations of RL, the decision-maker changes its strategy to maximise the value function. After a large amount of repeated training, an optimal collaborative disassembly strategy is formed on the premise of safety. Once a strategy has been repeatedly verified in the cyber world, it can be downloaded to the physical world. Finally, it drives the industrial robot to collaborate with the human according to the strategy, ultimately achieving a safe and efficient mode of collaboration. In addition, during RL the value function should be adjusted according to specific disassembly needs, so as to meet sustainable manufacturing requirements such as 'completing the task in the shortest time under the premise of ensuring safety' or 'minimising energy consumption under the premise of ensuring safety', as sketched below.
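The sketch below is a toy tabular Q-learning loop over a one-dimensional distance-band model of the shared cell. The states, actions, rewards and dynamics are all illustrative stand-ins for the digital-twin environment; only the reward-and-punishment pattern follows the safety-assurance example.

```python
# A toy Q-learning sketch of the safety-assurance example; the model of the
# cell is hypothetical, not the paper's digital twin.
import random
from collections import defaultdict

ACTIONS = ['work', 'wait']

def step(d, action):
    """d: discretised human-robot distance band (0 = contact, 4 = far).
    Returns (next_d, reward, done)."""
    if action == 'work':
        if d <= 1:
            return d, -100.0, True   # safety violation: heavy punishment
        return d, +10.0, True        # disassembly step done at a safe distance
    # 'wait': the robot holds position while the human drifts one band
    d = max(0, min(4, d + random.choice([-1, 0, 1])))
    return d, -1.0, False            # small time cost encourages progress

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.2
for _ in range(20000):
    d, done = random.randint(0, 4), False
    while not done:
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: Q[(d, x)])
        d2, r, done = step(d, a)
        target = r + (0.0 if done else gamma * max(Q[(d2, x)] for x in ACTIONS))
        Q[(d, a)] += alpha * (target - Q[(d, a)])
        d = d2

# The learned policy works only when the human is far enough away.
print({d: max(ACTIONS, key=lambda x: Q[(d, x)]) for d in range(5)})
```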

4.4. Execution and control


Device control and command execution are the key threads in transforming the decisions of the cyber world into the actions of the physical world. However, quite a few challenges remain to be solved. First of all, even industrial robots of the same brand may run different operating systems and software architectures. In addition, owing to the requirements of business secrecy and industrial stability, the structure of contemporary industrial robot controllers is mostly closed.

Figure 4. Strategy and decision making in HRCD.

Figure 5. Execution and control in HRCD.

Secondly, HRCD requires various types of sensors, which often come from different manufacturers with different design patterns. In short, closed industrial systems and limited sensor adaptation bring development challenges to HRCD in the design of the executive control system. In view of this, we put forward a device control and command execution architecture for HRCD, as shown in Figure 5.
Hardware, including robot controllers and sensor drivers, occupies the bottom layer of this architecture. It holds the firmware written by the industrial robot and sensor manufacturers as the basic drivers. Above the hardware are the industrial robots and sensor groups. Software modules such as perception, cognition and decision making, however, have to be implemented in HRCMS on a computer operating system. For this reason, the multi-modal perception data from the industrial robot system and the sensor group must first be brought into the operating system through interfaces. These interfaces can be official SDKs (such as the ABB PC SDK, the PC Interface option or the Microsoft Kinect 2.0 SDK) or packages in ROS-Industrial (Edwards and Lewis 2012), running on Windows and Ubuntu Linux respectively. At the top is the control and execution layer, with HRCMS as its core. After strategies are made in HRCMS, they are sent back to the robot controllers, finally realising device control and command execution.
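As a minimal sketch of the ROS-Industrial route, the following Python node publishes one joint-space target on the 'joint_path_command' topic conventionally consumed by ROS-Industrial robot drivers. The joint names, target angles and node name are illustrative assumptions; an ABB PC SDK route would use a different (C#) API.

```python
# A minimal ROS-Industrial sketch; joint names and values are illustrative.
import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

def send_joint_target(angles_deg, duration_s=2.0):
    """Publish one joint-space target produced by HRCMS decision making."""
    pub = rospy.Publisher('joint_path_command', JointTrajectory,
                          queue_size=1, latch=True)
    traj = JointTrajectory()
    traj.joint_names = ['joint_1', 'joint_2', 'joint_3',
                        'joint_4', 'joint_5', 'joint_6']
    pt = JointTrajectoryPoint()
    pt.positions = [a * 3.14159 / 180.0 for a in angles_deg]  # deg -> rad
    pt.time_from_start = rospy.Duration(duration_s)
    traj.points = [pt]
    pub.publish(traj)

if __name__ == '__main__':
    rospy.init_node('hrcms_executor')
    send_joint_target([0, -30, 30, 0, 60, 0])
    rospy.sleep(1.0)  # give the latched publisher time to deliver
```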

4.5. Knowledge formation and evolution


The formation and evolution of knowledge build on previous HRCD experience to guide future disassembly tasks, relying mainly on IL, TL and other techniques.

With the continuous operation of HRCD, data from sensors, manufacturing execution systems, quality assurance systems and human resources feedback grow rapidly in manufacturing enterprises. A manufacturing system without the ability to learn will gradually lose knowledge and decision-making efficiency, and waste the potential value of industrial big data. Moreover, considering the growth rate of the data, traditional ML methods that train once and discard previous learning results not only need more learning time but also limit learning efficiency and knowledge retention.
IL enables HRCD to accumulate knowledge gradually, as shown in Figure 6. It not only allows knowledge accumulation but can also update knowledge as new events emerge, without losing the useful knowledge already established. Discovering and updating knowledge is a key factor for the next generation of HRC systems. Making new decisions requires using acquired knowledge, and a new decision brings new knowledge, giving the decision system the characteristic of learning through practice. Obviously, learning takes place in every aspect of HRCD; the new data and learning materials generated at each stage create the need for IL. Knowledge formation and evolution in HRCD are mainly embodied in the following three aspects.
• IL for human behaviour. This enables HRCMS to drive industrial robots to respond to human actions more precisely, and to learn and accumulate knowledge of different individuals' behavioural habits.
• IL for disassembly tasks. This enables HRCMS to record and analyse the characteristics of different tasks, deduce the special needs of specific tasks, and ultimately support the optimisation of decision making.
• TL for knowledge migration oriented to similar behaviours and tasks. This makes it possible for multiple industrial robots to share knowledge in the cloud.
The knowledge in (1) and (2) needs to be stored in the knowledge base so that it can be retrieved at any time. As HRCD proceeds under different disassembly tasks, HRCMS can realise the collection, integration, expression and expansion of knowledge, and finally establish a complete knowledge base. With constantly updated knowledge, it can catalyse new knowledge from the old, form new knowledge, and evolve the whole knowledge system.
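As a minimal sketch of aspect (1), the following code accumulates a human-behaviour classifier batch by batch with scikit-learn's partial_fit, which realises the accumulate-without-retraining idea; the feature layout, labels and data are illustrative assumptions.

```python
# A minimal IL sketch; features and labels are synthetic placeholders for
# recorded human-behaviour episodes.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1, 2])          # e.g. reach / handover / retract
model = SGDClassifier(loss='log_loss')

def learn_from_batch(features: np.ndarray, labels: np.ndarray) -> None:
    """Update the behaviour model on a new batch without discarding
    previously learned knowledge."""
    model.partial_fit(features, labels, classes=classes)

# Each HRCD session contributes a new batch as it is recorded.
rng = np.random.default_rng(0)
for _ in range(10):
    X = rng.normal(size=(32, 6))       # 6 illustrative skeleton features
    y = rng.integers(0, 3, size=32)
    learn_from_batch(X, y)
print(model.predict(rng.normal(size=(1, 6))))
```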

Figure 6. Knowledge formation and evolution in HRCD.



5. Case study and implementation analysis


In order to realise the multi-modal perception of the hybrid HRCD cell in Figure 7, self-designed software (RobotCube) was developed to perceive information from the ABB IRC5 robot controller. Besides, we utilised the Microsoft Kinect 2.0 to generate an infrared point cloud of the human body and combined it with the human skeleton from the Microsoft Kinect 2.0 SDK. Furthermore, a diaphragm coupling disassembly task was designed for the case study. Based on the multi-modal perception results, we developed and implemented a simple safety strategy mechanism and designed a motion-driven control mode in HRCD.

5.1. Perception for robot system


Our software is based on the ABB PC SDK 6.00.01 and runs on a PC platform. We tested it with both an ABB IRB1200 industrial robot (with RobotWare 5.15.13) and a virtual robot in ABB RobotStudio 6.00.01. Functions and experimental results are illustrated in Figure 8.

In Figure 8, module 1 is the network scanner for robot systems. It links all the ABB robot systems on the network in our lab. From the scanner, we obtained robot controller information such as IP address, ID, availability, virtual status, system name, firmware (RobotWare) version, controller name, execution level, station name and MAC address. Module 2 is the event log of the robot controller, containing the log, messages and alerts of the robot system. Module 3 is the database interface; its data table is shown in Figure 8(h). Modules 4 and 5 show controller and task information respectively. Module 6 embodies the real-time tool centre point (TCP) position, joint angles, quaternion and speed data of the robot. Multi-modal data under different tasks are shown in Figure 8(e) to Figure 8(g). Modules 7 and 8 provide two kinds of telemanipulation mode for our robot. Module 9 processes the kinematic data of the robot.

5.2. Perception for human body


To build the infrared point cloud model of the human body, the software processing data flow was implemented on a PC group (3 PCs with 3 Kinects) with a master-slave architecture to realise the fusion of multi-source data (Yang et al. 2018). The Microsoft Kinect 2.0 is the Xbox edition with a PC adapter connected to a USB 3.0 port. Experimental results are illustrated in Figure 9.
In Figure 11(a), the red dot in the red circle represents the TCP of the industrial robot. The snapshots from (b) to (j) demonstrate a distinct movement of the worker, and it is apparent that the red dot moves according to the direction of the human's hand. In (h) and (j), snapshots from other views demonstrate the three-dimensional character of the infrared point cloud model. Owing to the large size and high frequency of the data from the three Kinect sensors, the processing speed for point cloud registration is limited. To solve this problem, we extracted only the depth data and RGB images and deployed a downsampling algorithm. Finally, we could achieve data transmission above 20 frames per second (20 sets of point cloud per second) and ensure the real-time performance of the point cloud model.
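As an illustration of the downsampling step, the sketch below voxel-downsamples one fused frame with Open3D; the paper does not name its implementation, and the library choice and voxel size are assumptions.

```python
# A minimal sketch of point cloud downsampling, assuming Open3D; the input
# array is a synthetic stand-in for one fused three-Kinect frame.
import numpy as np
import open3d as o3d

def downsample(points_xyz: np.ndarray, voxel_size_m: float = 0.03):
    """Reduce point count so registration and streaming keep up with
    the three-sensor data rate."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    return np.asarray(pcd.voxel_down_sample(voxel_size_m).points)

dense = np.random.rand(200000, 3)          # stand-in for one fused frame
sparse = downsample(dense)
print(len(dense), '->', len(sparse), 'points')
```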

5.3. Sequence planning for diaphragm coupling disassembly


A diaphragm coupling disassembly task (Figure 10(a)) was deployed for sequence planning in HRCD. This product has 37 independent procedures with a symmetric structure. Therefore, once half of the procedures have been planned, the remaining part can be handled in the same way.

Figure 7. The case study scenario.



Figure 8. Results of robot system perception.



Figure 9. Results of human body perception.

In Figure 10(a), we illustrate only half of the procedures with serial numbers, but Figure 10(b) shows the structure and priority of all the procedures.
The sequence planning method is based on the bees algorithm illustrated in our former work (Tang et al. 2017). In this process, one diaphragm coupling was snapshotted and analysed in a virtual explosive model, yielding the disassembly hybrid graph in Figure 10(b). Figure 10(c) indicates the variation trend of the algorithm's solution, and Figure 10(d) demonstrates the result of sequence planning. The time consumed by each procedure is assumed to be one disassembly unit time, as also illustrated in Figure 10(c).
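The full enhanced discrete bees algorithm is described in the cited work (Tang et al. 2017; Liu et al. 2018); the sketch below shows only the skeleton of the idea, scout bees plus neighbourhood search under precedence constraints, on a made-up five-procedure graph with an illustrative objective, not the diaphragm coupling data.

```python
# A skeletal bees-algorithm sketch; the graph, times and objective are
# made-up illustrations, not the paper's enhanced discrete variant.
import random

PRECEDENCE = {1: [], 2: [1], 3: [1], 4: [2, 3], 5: [4]}  # task -> prerequisites
TIME = {1: 1.0, 2: 2.0, 3: 1.5, 4: 1.0, 5: 0.5}          # illustrative times

def random_sequence():
    """Topological sequence sampled at random (a scout bee)."""
    done, seq = set(), []
    while len(seq) < len(PRECEDENCE):
        ready = [t for t in PRECEDENCE
                 if t not in done and all(p in done for p in PRECEDENCE[t])]
        t = random.choice(ready)
        seq.append(t)
        done.add(t)
    return seq

def feasible(seq):
    pos = {t: i for i, t in enumerate(seq)}
    return all(pos[p] < pos[t] for t in PRECEDENCE for p in PRECEDENCE[t])

def neighbour(seq):
    """Local search move: swap two adjacent procedures if precedence holds."""
    s = seq[:]
    i = random.randrange(len(s) - 1)
    s[i], s[i + 1] = s[i + 1], s[i]
    return s if feasible(s) else seq

def cost(seq):
    """Position-weighted completion time: finishing long procedures early is
    rewarded (a stand-in for the real HRCD objective)."""
    return sum(i * TIME[t] for i, t in enumerate(seq))

scouts = [random_sequence() for _ in range(20)]
for _ in range(100):
    scouts.sort(key=cost)
    elite = [min([s] + [neighbour(s) for _ in range(5)], key=cost)
             for s in scouts[:5]]                        # recruited bees
    scouts = elite + [random_sequence() for _ in range(15)]  # fresh scouts
best = min(scouts, key=cost)
print(best, cost(best))
```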

5.4. Safety strategy and motion control demonstration


This demonstration combines a safety strategy based on the distance between human and robot with a motion-driven control mode.

In (a) to (b) of Figure 11, when the human was far away from the industrial robot, the robot ran at full speed. When the human gradually moved towards the cell in (c) to (d), the industrial robot first detected the proximity of the human and then reduced its speed so as to decrease the safety risk. When the human entered the shared space of the cell in (e), the robot first stopped the current task and stood by; it then settled at the pre-set target point and started the motion-driven control mode. From (f) to (j) of Figure 11, the TCP of the industrial robot followed the movement of the human hand in the shared space. At the end of the collaboration, in (k) and (l), the human left the shared space, and the industrial robot gradually increased its operating speed and resumed the task it had been performing before the collaboration.
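As a minimal sketch of the strategy just demonstrated, the function below maps the perceived human-robot distance to a speed override; the thresholds and ratios are illustrative assumptions, not the values used in the experiment.

```python
# A minimal sketch of the distance-based safety strategy; thresholds and
# speed ratios are illustrative.
def speed_override(distance_m: float) -> float:
    """Map human-robot distance to a speed ratio (1.0 = full speed).
    Returns 0.0 when the human is inside the shared space, where the
    robot stops its task and switches to motion-driven control."""
    if distance_m > 2.0:       # far away: full speed
        return 1.0
    if distance_m > 1.0:       # approaching: reduced speed
        return 0.4
    return 0.0                 # shared space: stop, then hand-following mode

for d in (3.0, 1.5, 0.5):
    print(d, '->', speed_override(d))
```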
Traces of the TCP and the hand are illustrated in Figure 12(a). From point 1 to point 2, the robot was executing tasks at full speed. At point 2, the robot stopped, corresponding to Figure 11(e). Point 3 in Figure 12(a) is the pre-set target point. After point 3, the robot was driven by the hand motion of the human, representing the collaboration process. It can be noted that the traces of the TCP and the hand basically coincide, manifesting the accuracy of the motion-driven control method. In order to observe the time delay during hand following, we selected seven key points: the start point (point 1), the end point (point 7) and the points at turns (points 2-6). The time delay at these points is illustrated in Figure 12(b), from which we can see that the sensitivity of the hand movement tracing is not constant. The time delay from the motion of the human to the movement of the robot varies from about 100 ms to nearly 1000 ms, which is caused not only by the point cloud processing but also by the robot system. Figure 12(c) and Figure 12(d) show the time delay over the whole trace, running from blue to red; points and lines of the same colour occurred at the same time. Because of the closed industrial robot system, an industrial robot cannot achieve strictly real-time sensitivity. This means that industrial robots will be subject to a delay of up to one second when the human moves faster. A numerical comparison with Du and Zhang (2014) on the time delay of motion control is given in Table 2.
Figure 10. Results of sequence planning.

The related work in Table 2 used the Microsoft Kinect v1 as the depth camera, with a depth resolution of 320*240. They utilised the arm motion of a human to control a dual-arm robot in a virtual environment. However, they recorded a relatively long phase of motion, up to 60 s. The step value in their figures is much larger than ours, so we had to enlarge their figure to obtain an estimated value of the time delay between the paths of 'Hand' and 'End-Effector' using a pixel ruler. Similarly, we selected seven different points randomly at turns in the start, middle and end phases of the movement.
From Table 2, we can see that the average time delay in our work is less than that of the related work. One possible reason is the improved performance of the newer generation of sensor. Besides, the purpose of the related work was to make the position and rotation errors as small as possible without much regard for the time delay, while we adopted algorithms such as downsampling to control the data size. Furthermore, there are three sensors in our system and the resolution is much higher, which means our work captures more detail and a wider scope of depth vision.
To discuss the time delay of the proposed system, Figure 13 shows the architectural difference between the typical Kinect SDK (single sensor) and our three-sensor network. Our system sacrifices 10 frames per second to obtain a much wider view of the human point cloud compared with a single-sensor solution. As for the total time delay of the robot reacting to human motion, it comprises the processing time for 20 frames per second plus the time delay of signal transmission, resulting in the observed range of 100-1000 ms.
To address the time delay problem, we are considering improving the data format and sampling algorithms, as well as optimising the structure of the software, for example by using ROS 2.0 ('ROS 2') and improving the sampling rate of the industrial robot system. Moreover, different brands of industrial robots run on different hardware and software architectures, which poses a great challenge for satisfying the requirements of international standards.

Figure 11. Demonstration of safety strategy and motion control.

Figure 12. Results of trace and time delay in collaboration.



Table 2. Numerical comparison on the time delay of motion control.

| | Sensor | Number of sensors | Initial frame rate (fps) | Output frame rate (fps) | Resolution (depth image) (Skarredghost 2016) | Robot | Algorithms |
|---|---|---|---|---|---|---|---|
| Related work (Du and Zhang 2014) | Microsoft Kinect v1 | 1 | 30 | N/A | 320*240 | Virtual dual-arm robot | Over damping |
| Our work | Microsoft Kinect v2 | 3 | 30 | 20 | 512*424 | Real industrial robot | ICP and downsampling |

Time delay (ms) at the seven selected points:

| Point No. | Related work (estimated value from Du and Zhang 2014) | Our work |
|---|---|---|
| 1 | 1327 | 101 |
| 2 | 1264 | 850 |
| 3 | 1801 | 714 |
| 4 | 1043 | 785 |
| 5 | 1106 | 739 |
| 6 | 1043 | 991 |
| 7 | 1264 | 796 |

Figure 13. Analysis of frame output and total time delay.



6. Conclusion and future work


HRC is currently a hot topic in robotics and artificial intelligence and represents one direction of robotic development. HRCD is a quintessential application of HRC and makes multiple contributions to sustainable manufacturing.

In this paper, a systematic development framework named PCDEE-Circle was presented, and its key technologies, namely perception, cognition, decision, execution and evolution, were discussed. At the same time, this work lays a foundation in the fields of disassembly and intelligent manufacturing, manifesting the application prospects of AI technologies such as deep RL, IL and TL. In the case study, we demonstrated multi-modal perception for ABB industrial robots and the human body, and sequence planning for an HRCD task, and finally realised a distance-based security strategy and a motion-driven control mode. The results manifest the feasibility and effectiveness of the proposed approaches for HRCD and verify the functionalities of the systematic framework.
Future work is summarised as follows. Firstly, we will establish an integrated industrial robot perception system for more kinds of industrial robots. Secondly, we will go deeper into digital human modelling. Thirdly, we will implement the twin models of industrial robots and the digital human body in the cyber world and use RL to make intelligent strategies. Fourthly, we will work through the bottom-up control and command execution architecture to realise the omni-directional control of an HRCD production line. Finally, we will develop human-robot collaborative knowledge formation and evolution software based on IL and TL.
Geolocation information: Wuhan, Hubei Province, China.

Disclosure statement
No potential conflict of interest was reported by the authors.

Funding
This work is supported by National Natural Science Foundation of China (Grant No. 51775399 and 51675389), the Keygrant Project of
Hubei Technological Innovation Special Fund (Grant No. 2016AAA016), and the Engineering and Physical Sciences Research Council
(EPSRC), UK (Grant No. EP/N018524/1).

ORCID
Zhihao Liu http://orcid.org/0000-0002-0222-912X
Wenjun Xu http://orcid.org/0000-0001-5370-3437
Duc Truong Pham http://orcid.org/0000-0003-3148-2404

References
Abdullah, T. A., K. Popplewell, and C. J. Page. 2003. “A Review of the Support Tools for the Process of Assembly Method Selection and
Assembly Planning.” International Journal of Production Research 41 (11): 2391–2410.
Awad, Roger E., K. G. Engelhardt, and Larry J. Leifer. 1983. “Development of Training Procedures for an Interactive Voice-Controlled
Robotic Aid.” Paper presented at the Proceedings of the 6th Annual Conference on Rehabilitation Engineering: The Promise of
Technology., San Diego, CA, USA.
Bogner, Karin, Ulrich Pferschy, Roland Unterberger, and Herwig Zeiner. 2018. “Optimised Scheduling in Human-Robot Collaboration –
A Use Case in the Assembly of Printed Circuit Boards.” International Journal of Production Research 56 (16): 5522–5540.
doi:10.1080/00207543.2018.1470695.
Cheng, Huiping, Wenjun Xu, Qingsong Ai, Quan Liu, Zude Zhou, and Pham Duc Truong. 2017. “Manufacturing Capability Assessment
for Human-Robot Collaborative Disassembly based on Multi-Data Fusion.” In 45th SME North American Manufacturing Research
Conference, edited by L. Wang, L. Fratini and A. J. Shih, 26–36.
Christensen, H. 2016. “A Roadmap for US Robotics from Internet to Robotics, 2016 edn.” Sponsored by National Science Foundation &
University of California, San Diego.
Djuric, Ana M., R. J. Urbanic, and J. L. Rickli. 2016. “A Framework for Collaborative Robot (CoBot) Integration in Advanced
Manufacturing Systems.” SAE International Journal of Materials and Manufacturing 9 (2). doi:10.4271/2016-01-0337.
Du, Guanglong, and Ping Zhang. 2014. “Markerless Human–Robot Interface for Dual Robot Manipulators Using Kinect Sensor.” Robotics
and Computer-Integrated Manufacturing 30 (2): 150–159.
Edwards, Shaun, and Chris Lewis. 2012. “ROS-Industrial: Applying the Robot Operating System (ROS) to Industrial Applications.”
Paper presented at the IEEE Int. Conference on Robotics and Automation, ECHORD Workshop.
Gerbers, Roman, Kathrin Wegener, Franz Dietrich, and Klaus Dröder. 2018. “Safe, Flexible and Productive Human-Robot-Collaboration
for Disassembly of Lithium-Ion Batteries.” In Recycling of Lithium-Ion Batteries, 99–126. Cham: Springer.

Glaessgen, E. H., and D. S. Stargel. 2012. “The Digital Twin Paradigm for Future NASA and U.S. Air Force Vehicles.” Paper presented at
the 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference 2012, April 23, 2012 - April
26, 2012, Honolulu, HI, United states.
Haapala, Karl R., Fu Zhao, Jaime Camelio, John W. Sutherland, Steven J. Skerlos, David A. Dornfeld, I. S. Jawahir, Hong Chao Zhang,
Andres F. Clarens, and Jeremy L. Rickli. 2013. “A Review of Engineering Research in Sustainable Manufacturing.” Journal of
Manufacturing Science & Engineering 135 (4): 599–619.
Han, Bo, Dhanya Menoth Mohan, Muhammad Azhar, Kana Sreekanth, and Domenico Campolo. 2016. “Human-Robot Collaboration for
Tooling Path Guidance.” Paper presented at the IEEE International Conference on Biomedical Robotics and Biomechatronics.
Holloway, Karen, Larry Leifer, and Machiel Van der Loos. 1985. “Factors in the Design and Development of an Interactive Human-robot
Workstation.” Paper presented at the Proceedings of the Eighth Annual Conference on Rehabilitation Technology: Technology - A
Bridge to Independence., Memphis, TN, USA.
Hu, Jhen-Jia, Chung-Ning Huang, Hau-Wei Wang, Po-Huang Shieh, and Jwu-Sheng Hu. 2013. Safety-based Human-Robot Collaboration
in Cellular Manufacturing: A Case Study of Power Protector Assembly, 2013 International Conference on Advanced Robotics and
Intelligent Systems.
Jovane, Francesco, Engelbert Westkämper, and David Williams. 2008. The ManuFuture Road to High-Adding-Value Competitive
Sustainable Manufacturing. Berlin: Springer.
Jovane, F., H. Yoshikawa, L. Alting, C. R. Boër, E. Westkamper, D. Williams, M. Tseng, G. Seliger, and A. M. Paci. 2008. “The Incoming
Global Technological and Industrial Revolution Towards Competitive Sustainable Manufacturing.” CIRP Annals 57 (2): 641–659.
Kato, Ryu, Marina Fujita, and Tamio Arai. 2010. Development of Advanced Cellular Manufacturing System with Human-Robot
Collaboration-Assessment of Mental Strain on Human Operators Induced by the Assembly Support, 2010 IEEE Ro-Man.
Kobayashi, H. 2000. “Special Issue on Human-Robot Interaction.” Robotics and Autonomous Systems 31 (3): 129–130.
doi:10.1016/s0921-8890(99)00102-5.
Kulkarni, P. 2012. Reinforcement and Systemic Machine Learning for Decision Making. Hoboken, NJ: IEEE PRESS; WILEY.
Kusiak, Andrew. 2018. “Smart Manufacturing.” International Journal of Production Research 56 (1–2): 508–517. doi:10.1080/00207543.
2017.1351644.
Lee, Jay, Behrad Bagheri, and Hung An Kao. 2015. “A Cyber-Physical Systems Architecture for Industry 4.0-Based Manufacturing
Systems.” Manufacturing Letters 3: 18–23.
Liu, Zhihao, Quan Liu, Wenjun Xu, Aiming Liu, Zude Zhou, and Duc Truong Pham. 2017. “Multi-Mode Perception for Human-
Robot Collaborative Manufacturing: Framework and Enabling Technologies.” Paper presented at the International Conference
on Innovative Design and Manufacturing.
Liu, Jiayi, Zude Zhou, Duc Truong Pham, Wenjun Xu, Chunqian Ji, and Quan Liu. 2018. “Robotic Disassembly Sequence Planning Using
Enhanced Discrete Bees Algorithm in Remanufacturing.” International Journal of Production Research 56 (9): 3134–3151.
Matthias, Bjoern. 2015. “ISO/TS 15066 - Collaborative Robots - Present Status.” Paper presented at the European Robotics Forum.
Miranda, Jhonattan, Roberto Pérez-Rodríguez, Vicente Borja, Paul K Wright, and Arturo Molina. 2017. “Sensing, Smart and Sustainable
Product Development (S3 Product) Reference Framework.” International Journal of Production Research 57 (4): 1–22.
Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, et al. 2015. “Human-
level Control Through Deep Reinforcement Learning.” Nature 518 (7540): 529–533.
Monostori, László. 2014. “Cyber-Physical Production Systems: Roots, Expectations and R&D Challenges.” Procedia CIRP 17: 9–13.
Nikolakis, Nikolaos, Vasilis Maratos, and Sotiris Makris. 2019. “A Cyber Physical System (CPS) Approach for Safe
Human-Robot Collaboration in a Shared Workplace.” Robotics and Computer-Integrated Manufacturing 56: 233–243.
doi:10.1016/j.rcim.2018.10.003.
Ore, Fredrik, Bhanoday Reddy Vemula, Lars Hanson, and Magnus Wiktorsson. 2016. “Human – Industrial Robot Collaboration:
Application of Simulation Software for Workstation Optimisation.” Procedia CIRP 44: 181–186. doi:10.1016/j.procir.2016.02.002.
Rahimi, Mansour, and Waldemar Karwowski. 1990. “A Research Paradigm in Human-Robot Interaction.” International Journal of
Industrial Ergonomics 5 (1): 59–71. doi:10.1016/0169-8141(90)90028-Z.
Rahman, S. M. Mizanoor, Behzad Sadrfaridpour, and Yue Wang. 2016. Trust-based Optimal Subtask Allocation and Model Predictive
Control for Human-robot Collaborative Assembly in Manufacturing, Proceedings of the ASME 8th Annual Dynamic Systems and
Control Conference, 2015, Vol 2.
Robla-Gómez, S., Victor M Becerra, Jose R Llata, Esther Gonzalez-Sarabia, Carlos Torre-Ferrero, and Juan Perez-Oria. 2017. “Working
Together: A Review on Safe Human-Robot Collaboration in Industrial Environments.” IEEE Access 5: 26754–26773.
“ROS 2.” https://github.com/ros2/ros2/wiki.
Sadrfaridpour, Behzad, Hamed Saeidi, and Yue Wang. 2016. “An Integrated Framework for Human-Robot Collaborative Assembly in
Hybrid Manufacturing Cells.” Paper presented at the IEEE International Conference on Automation Science and Engineering.
Sanchez, Luis M, and Rakesh Nagi. 2001. “A Review of Agile Manufacturing Systems.” International Journal of Production Research
39 (16): 3561–3600.
Silver, D., A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, et al. 2016. “Mastering the Game of Go
with Deep Neural Networks and Tree Search.” Nature 529 (7587): 484–489.
Skarredghost. 2016. “The Difference Between Kinect v2 and v1.” https://skarredghost.com/2016/12/02/the-difference-between-kinect-v2-and-v1/.

Tan, Jeffrey Too Chuan, and Tamio Arai. 2010. “Analytic Evaluation of Human-Robot System for Collaboration in Cellular
Manufacturing System.” In 2010 IEEE/ASME International Conference on Advanced Intelligent Mechatronics.
Tan, Jeffrey Too Chuan, Feng Duan, Ryu Kato, and Tamio Arai. 2012. “Safety Strategy for Human-Robot Collabora-
tion: Design and Development in Cellular Manufacturing (vol 24, pg 839, 2012).” Advanced Robotics 26 (10): 1203.
doi:10.1080/01691864.2012.704724.
Tang, Quan, Wenjun Xu, Zude Zhou, and Duc Truong Pham. 2017. “Sequence Planning for Human-Robot Collaborative Disassembly in
Remanufacturing.” Paper presented at the International Conference on Innovative Design and Manufacturing.
Tao, Yiyun, Kai Meng, Peihuang Lou, Xianghui Peng, and Xiaoming Qian. 2018. “Joint Decision-Making on Automated Disassem-
bly System Scheme Selection and Recovery Route Assignment Using Multi-Objective Meta-Heuristic Algorithm.” International
Journal of Production Research 57 (1): 1–19.
Tao, Fei, Meng Zhang, Jiangfeng Cheng, and Qinglin Qi. 2017. “Digital Twin Workshop: A New Paradigm for Future Workshop.”
Computer Integrated Manufacturing Systems 23: 1–9.
Tsarouchi, Panagiota, Sotiris Makris, and George Chryssolouris. 2016. “Human-Robot Interaction Review and Challenges
on Task Planning and Programming.” International Journal of Computer Integrated Manufacturing 29 (8): 916–931.
doi:10.1080/0951192x.2015.1130251.
Waldherr, S., R. Romero, and S. Thrun. 2000. “A Gesture Based Interface for Human-Robot Interaction.” Autonomous Robots 9 (2):
151–173. doi:10.1023/a:1008918401478.
Wang, Lihui. 2015. “Collaborative Robot Monitoring and Control for Enhanced Sustainability.” International Journal of Advanced
Manufacturing Technology 81 (9–12): 1433–1445.
Wang, Xi Vincent, Zsolt Kemény, József Váncza, and Lihui Wang. 2017. “Human–Robot Collaborative Assembly in Cyber-physical
Production: Classification Framework and Implementation.” CIRP Annals - Manufacturing Technology 66 (1): 5–8.
Weber, A. 2014. “Human-Robot Collaboration Comes of Age.” Assembly 57.
Yang, Sida, Wenjun Xu, Zhihao Liu, Zude Zhou, and Duc Truong Pham. 2018. “Multi-Source Vision Perception for Human-Robot
Collaboration in Manufacturing.” Paper presented at the IEEE 15th International Conference on Networking, Sensing and Control.
Zanchettin, Andrea Maria, Nicola Maria Ceriani, Paolo Rocco, Hao Ding, and Bjoern Matthias. 2016. “Safety in Human-Robot Collab-
orative Manufacturing Environments: Metrics and Control.” IEEE Transactions on Automation Science and Engineering 13 (2):
882–893. doi:10.1109/tase.2015.2412256.
Zanchettin, Andrea Maria, and Paolo Rocco. 2013. “Path-Consistent Safety in Mixed Human-Robot Collaborative Manufacturing
Environments.” In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, edited by N. Amato, 1131–1136.
Zhao, Dong-Bin, Kun Shao, Yuan-Heng Zhu, Dong Li, Ya-Ran Chen, Hai-Tao Wang, De-Rong Liu, Tong Zhou, and Cheng-Hong
Wang. 2016. “Review of Deep Reinforcement Learning and Discussions on the Development of Computer Go.” Kongzhi Lilun Yu
Yingyong/Control Theory and Applications 33 (6): 701–717. doi:10.7641/CTA.2016.60173.
