Human-Robot Collaboration in Disassembly for Sustainable Manufacturing
Quan Liu, Zhihao Liu, Wenjun Xu, Quan Tang, Zude Zhou & Duc Truong Pham
a School of Information Engineering, Wuhan University of Technology, Wuhan, China; b Hubei Key Laboratory of Broadband Wireless Communication and Sensor Networks, Wuhan University of Technology, Wuhan, China; c Department of Mechanical Engineering, University of Birmingham, Birmingham, UK
(Received 26 January 2018; accepted 30 January 2019)
Sustainable manufacturing is a pressing global issue oriented towards the sustainable development of humanity and society. In this context, this paper takes human-robot collaborative disassembly (HRCD) as its topic and examines its contribution to economic, environmental and social sustainability. A detailed systematic implementation for HRCD is presented, combining a set of advanced technologies such as cyber-physical production systems (CPPS) and artificial intelligence (AI); it involves five aspects, namely perception, cognition, decision, execution and evolution, aimed at the dynamics, uncertainties and complexities in disassembly. Deep reinforcement learning, incremental learning and transfer learning are also investigated in the systematic approaches for HRCD. The case study demonstrates experimental results of multi-modal perception for the robot system and the human body in a hybrid human-robot collaborative disassembly cell, sequence planning for an HRCD task, a distance-based safety strategy and a motion-driven control method. It shows the high feasibility and effectiveness of the proposed approaches for HRCD and verifies the functionalities of the systematic framework.
Keywords: sustainable manufacturing; human-robot collaboration; product disassembly; cyber-physical production system; artificial intelligence
1. Introduction
The focus and discussion on sustainability and sustainable development have now existed for nearly 50 years (Haapala et al. 2013), making them pillars of smart manufacturing (Kusiak 2018). Sensing, smart and sustainable elements have become essential for enterprises facing global challenges (Miranda et al. 2017). As the backbone of industries, sustainable manufacturing has greatly influenced the economy, the environment and society. Economically, sustainable manufacturing promotes innovation and change in business models, creates new space for economic growth, orients business services towards the whole life cycle of production and accelerates the development of diversified economic models and markets. For the environment, sustainable manufacturing reduces the use and waste of raw materials, increases the utilisation of resources, and slows down pollution and emissions. For society, sustainable manufacturing creates new human capital and provides more and better work (Jovane, Westkämper, and Williams 2008; Jovane et al. 2008).
In sustainable manufacturing, disassembly, as the main production mode of remanufacturing, is of great significance for economic and environmental benefits such as resource recycling, energy saving and emission reduction. On the social side, owing to the development and deployment of industrial robotics, robots can take over the heavy, repetitive and dirty jobs formerly held by human operators in disassembly. However, in many existing disassembly environments, robots cannot fully replace human operators, because the individual differences between recycled products demand a high level of human intelligence. To cope with this, human-robot collaboration (HRC) is one solution that aims to assist, not replace, the workers engaged in a wide variety of applications (Djuric, Urbanic, and Rickli 2016). One way towards future manufacturing is to let humans and robots work more closely together (Ore et al. 2016).
Before HRC was introduced into the manufacturing area, human-robot interaction (HRI) in robotics had already been an extremely extensive and diverse R&D activity (Tsarouchi, Makris, and Chryssolouris 2016). This is mainly embodied in the design and development of collaborative robots, and of collaborative capability for traditional ones. In human-robot collaboration, the combination of humans, robots and products requires the connection of different systems such as human sensing systems, robot control systems, product status sensing systems and process control systems. Information systems have a long history in manufacturing technologies (Sanchez and Nagi 2001), but matters become different when multi-modal sensors, robots and intelligent algorithms are taken into consideration. All the elements in the physical world, together with the information systems in the cyber world, make up a typical cyber-physical production system (CPPS). This system is based on progress in computer science, information and communication technology, sensor technology and network technology. It includes information systems and hardware resources (Lee, Bagheri, and Kao 2015), and supports the communication between humans, machines and production (Monostori 2014). Undoubtedly, R&D work related to human-robot collaborative manufacturing (HRC-Mfg) has already taken place in CPPS (Wang et al. 2017). CPS, in turn, has also become a powerful booster for HRC applications (Nikolakis, Maratos, and Makris 2019).
However, even though HRC has been widely applied in industry, it is still at the user-experience stage and its intelligence needs to be improved, especially in disassembly applications. This is mainly caused by the difficulty of perceiving human intention, robot motion and product status. Besides, the decision making of HRC should be able to consider both the real-time dynamics and the recorded knowledge of humans, robots and products, which brings challenges to existing decision systems. Fortunately, owing to the rapid development of artificial intelligence and computing power, it is now foreseeable to utilise these powerful new technologies to promote the efficiency of HRC, which is also a new trend in manufacturing. Moreover, intelligent algorithms should be deployed in the CPPS because of the lack of computing power in robot systems, just as data from all kinds of sensors are gathered in the cloud in the Internet of Things. This deployment, in turn, also supplements the concept and function of the CPPS.
In this paper, a systematic development framework called the PCDEE-Circle is proposed for human-robot collaborative disassembly (HRCD) in sustainable manufacturing. Artificial intelligence methods for perception, cognition, decision making, and knowledge formation and evolution are also proposed to meet the special requirements of HRCD in sustainable manufacturing. From the perspective of innovative information technologies, this paper also presents several advanced intelligent methods and analyses why they could and should be implemented in HRCD. The case study verifies the feasibility of the proposed framework regarding perception, decision making and control, that is, the implementation of a multi-modal perception platform for an industrial robot system and the human body, a bees-algorithm-based sequence planning method for an HRCD task, a safety assurance strategy and a motion-driven control method. As for cognition and knowledge formation and evolution, this paper discusses the workflow in combination with the characteristics and requirements of HRCD but leaves the deployment to future work.
2. Related work
HRC is a comprehensive research area, also known as human-robot cooperation and interaction. With the rapid development of robotic and AI technologies, working alongside robots has become a long-standing human aspiration. The pioneering work can be traced to articles in the 1980s (Awad, Engelhardt, and Leifer 1983). In 1985, researchers started to summarise the factors in the design and development of human-robot interactive workstations (Holloway, Leifer, and Van der Loos 1985). However, owing to the limitations of the enabling technologies in the last century, research on HRC stalled at the basic design stage (Rahimi and Karwowski 1990; Kobayashi 2000). At the beginning of the twenty-first century, key technologies for HRC were designed and tested, such as the perception of human gestures (Waldherr, Romero, and Thrun 2000).
The application of HRC-Mfg was introduced around 2010 (Kato, Fujita, and Arai 2010; Tan and Arai 2010). Building on almost 30 years of study on HRI and HRC, the application of HRC-Mfg, including the perception of industrial robots and humans, has taken a multitude of aspects into account from the very beginning. Safety, as the top priority of HRC, has been studied by a number of researchers. Methods for collaborative zone design, robot speed limitation and vision-based human motion monitoring have been investigated in (Tan et al. 2012; Wang 2015). Safety-related ISO standards and metrics have been reviewed in (Hu et al. 2013; Zanchettin et al. 2016), which proved the feasibility of RGBD camera applications in HRC. The estimation and evaluation of injuries in human-robot collisions have also been researched to minimise the consequences of collisions (Robla-Gómez et al. 2017). Path planning is another topic that has been explored by many investigators; it needs to incorporate human factors, especially in mixed HRC-Mfg environments (Zanchettin and Rocco 2013). Task allocation and procedure arrangement are crucial and distinctive parts of HRC-Mfg compared with common HRC applications; examples include Rahman's method (Rahman, Sadrfaridpour, and Wang 2016) of trust-based optimal subtask allocation in HRC-Mfg, as well as optimised scheduling using integer linear programming in (Bogner et al. 2018). Evaluation and assessment in HRC are the basis of strategy making. Research on this topic mainly includes manufacturing capability assessment through data fusion (Cheng et al. 2017), mental strain evaluation using physiological parameters (Kato, Fujita, and Arai 2010) and analytic-hierarchy-process-based evaluation for multiple criteria (Tan and Arai 2010).
In the industrial area, the number of industrial robots deployed in manufacturing environments is growing at an overwhelmingly high rate and significantly facilitates the development of intelligent manufacturing. In recent years, the concept of collaborative robots has appeared and been adopted in practical industry, and new collaborative industrial robots, e.g. KUKA iiwa, ABB YuMi, Rethink Baxter and Rethink Sawyer (Weber 2014; Han et al. 2016), have gradually been put onto the industrial market. For disassembly, an intuitive programming environment for HRCD based on 3D safety sensors has been researched and implemented in the disassembly of lithium-ion batteries (Gerbers et al. 2018). Nevertheless, the psychological and social factors of HRC-Mfg need to be addressed and embedded in development to make robot actions acceptable and comfortable for the human (Sadrfaridpour, Saeidi, and Wang 2016).
Currently, there is still no standard implementation paradigm for HRCD. Although HRC has attracted attention in the manufacturing industry, the uncertainty and complexity of disassembly are much higher than those of assembly, so research on HRC and HRC-Mfg is relatively rare in product disassembly. Abdullah, Popplewell, and Page (2003) concluded that for tasks like assembly, method implementation should consider not only factors of product technology, but also the industrial environment where the task occurs. Methods of perception, cognition, task allocation and assessment need to be modified according to the characteristics of disassembly. Besides, artificial intelligence is rapidly being utilised in machine vision, autonomous driving, games and primitive HRI. But for HRC in manufacturing, especially HRCD, the lack of implementations, or even of well-designed frameworks, is obvious.
The first internal circle is a human-in-loop circle (PCDE-Circle); it contains multi-modal perception, multi-target cognition, strategy and decision making, and control. Though the robot and the human participate together in this PCDE-Circle, the human is the main factor in this internal circle. Here, human factors are the key to decision making and guide the control and execution of robots and programmes. The next internal circle is a robot-in-loop circle (DEE-Circle). It embodies the last three aspects of the whole external circle. From Figure 1, we can see that the intersection of these two internal circles includes the decision and execution aspects, which are exactly the kernel of realising physical HRC. The external PCDEE-Circle takes a macroscopic view of the whole process of HRCD. It is a product-in-loop circle, which represents the fact that the human-robot collaboration here serves production.
Regarding the logic of the PCDEE-Circle, multi-modal perception technology connects the parameters of the industrial robot system and the behaviour of human beings in the process of HRCD. After that, multi-target cognition technology is utilised to recognise the industrial robot body, human behaviour, disassembly objects, disassembly tools, the background environment and disassembly tasks, so as to support strategy and decision making. Intelligent decision making based on reinforcement learning (RL) or swarm intelligence is trained continuously in the CPPS to satisfy the requirements of HRCD. Finally, knowledge formation and evolution based on incremental learning (IL) and transfer learning (TL) can accumulate the knowledge generated during the process of HRCD, and achieve knowledge sharing through an industrial cloud robot system and other related technologies.
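As an illustration only, the following Python sketch shows one way this loop could be wired together; the class and method names (PCDEECircle, sense, recognise, decide, execute) are our illustrative assumptions rather than components of an actual implementation.

# A minimal sketch of the PCDEE-Circle control loop described above.
# All class and method names are illustrative assumptions.

class PCDEECircle:
    def __init__(self, perceiver, cogniser, decider, executor, knowledge_base):
        self.perceiver = perceiver        # multi-modal perception (Section 4.1)
        self.cogniser = cogniser          # multi-target cognition
        self.decider = decider            # RL / swarm-intelligence decision making
        self.executor = executor          # robot control and command execution
        self.knowledge = knowledge_base   # IL / TL knowledge store

    def step(self):
        raw = self.perceiver.sense()                            # P: fuse sensor data
        targets = self.cogniser.recognise(raw)                  # C: humans, robots, parts, tools
        action = self.decider.decide(targets, self.knowledge)   # D: strategy making
        result = self.executor.execute(action)                  # E: device control
        self.knowledge.update(targets, action, result)          # E: knowledge evolution

    def run(self, stop_event):
        # Run the circle until an external threading.Event is set.
        while not stop_event.is_set():
            self.step()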
Compared with the systems in the published literature, the PCDEE-Circle framework offers an original view of the human-in-loop, robot-in-loop and product-in-loop characteristics in the implementation of HRC in disassembly. Not only do we integrate perception, cognition, decision making and control into a holistic architecture, but we also, for the first time, put forward knowledge formation and evolution, and the idea of using knowledge to support decision making, which is rarely seen in published frameworks.
4. Systematic approaches
4.1. Multi-modal perception
In order to achieve HRCD, how to integrate the real-time states of the industrial robot, the human worker, the manufacturing cell and the tasks is the key problem to be solved (Liu et al. 2017). In the PCDEE-Circle, multi-modal perception in the CPPS is the key technology for this. As shown in Figure 2, the multi-modal perception architecture for HRCD includes four layers: the physical layer, the transport layer, the cyber layer and the application layer.
The physical layer consists of industrial robots, robot controllers, the multi-modal sensor group and other electronic equipment. Among them, industrial robots with high programmability can be equipped with different tools to cooperate with human operators on different disassembly tasks. The sensor group includes energy meters, industrial cameras, RGBD cameras and human factor sensors. Additionally, there is also basic electronic equipment such as human-machine interfaces, programmable logic controllers (PLCs) and industrial personal computers (IPCs).
The transport layer is based on the industrial fieldbus and the factory network, which realise the data transmission of the multi-modal perception sensors and provide different data interfaces for the different cyber modules.
The cyber layer accepts various data from the physical layer through the transport layer and constructs different cyber modules according to the data source. As shown in Figure 2, cyber modules are incorporated in the cyber layer according to specific perception sources such as industrial robots, cameras, energy meters, the human-machine interface (HMI), PLCs, IPCs and so on. For different disassembly tasks and sensing needs, the cyber layer has to be elastic, meaning that it can freely build cyber modules for alternative sensors. All the perception information in HRCD is shown in Table 1.
The application layer is the transition from the cyber layer to specific HRCD tasks. Most directly, operators and managers can manage data from the cyber layer in the applications and conduct remote monitoring of HRCD tasks. Besides, the application layer also provides interfaces for other intelligent systems and computing resources, such as Hadoop, Spark and other big data processing architectures, or TensorFlow, Keras and other training systems.
On the other hand, data fusion is definitely a crucial and large-scale problem in HRCD. However, since the cyber modules change along with different HRCD tasks, the data fusion methods cannot be constant. A general solution to the data fusion problem is extremely hard to give; it should instead be replaced by a set of specific methods matched to specific HRCD tasks and cyber modules.
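To illustrate what such an elastic cyber layer could look like, the following minimal Python sketch registers and removes cyber modules per perception source; the names (CyberModule, CyberLayer, on_data) are assumptions for illustration, not the paper's implementation.

# A minimal sketch of an 'elastic' cyber layer: one cyber module per
# perception source, added or removed as the HRCD task changes.

from typing import Any, Callable, Dict

class CyberModule:
    """Wraps one perception source (robot, camera, energy meter, HMI, ...)."""
    def __init__(self, source: str, parser: Callable[[bytes], Any]):
        self.source = source
        self.parser = parser
        self.latest: Any = None

    def on_data(self, payload: bytes) -> None:
        # Parse raw transport-layer data; task-specific fusion happens downstream.
        self.latest = self.parser(payload)

class CyberLayer:
    def __init__(self):
        self.modules: Dict[str, CyberModule] = {}

    def register(self, module: CyberModule) -> None:
        self.modules[module.source] = module   # build a module for a new sensor

    def unregister(self, source: str) -> None:
        self.modules.pop(source, None)         # drop it when the task changes

    def snapshot(self) -> Dict[str, Any]:
        # Expose the current multi-modal state to the application layer.
        return {name: m.latest for name, m in self.modules.items()}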
4.2. Multi-target cognition
Multi-target cognition is an artificial intelligence technology built on machine learning (ML) and pattern recognition. It takes various data formats as input and outputs the cognitive results for different cognitive targets. As shown in Figure 3, in the hybrid human-robot collaborative manufacturing cell, the human-robot collaborative manufacturing system (HRCMS, an aggregation of multiple sub-systems) needs an overall sense of the human workers, industrial robots, disassembly tools, disassembly objects and the background environment.
The cognition of human workers mainly takes RGB images and point cloud images as the main data sources and establishes human skeleton models by using the human physiological structure. From these, we can obtain the human skeleton point cloud model, including real-time location and space occupancy. Combining the human skeleton point cloud model with the safety assessment system and the minimum safe distance calculation in the ISO standard (Matthias 2015), a human dynamic security model can be obtained for safety in HRCD (a minimal sketch of this distance-based strategy is given after this section). The force, coordinates and pose of industrial robots can be calculated from the joint angle and torque data of multi-modal perception, based on kinematics, dynamics and the pre-designed programme. It is unnecessary to cognise the robot through vision information, which greatly reduces the difficulty of modelling.

For disassembly products and tools, the traditional way is to place them in a fixed position in a fixed order. This method has obvious limitations. Firstly, it requires the location of the tools to be encoded and initialised in the robot programme, while the staff are required to undergo a complex training process to adapt to a specific disassembly task. Secondly, the collaboration between human and robot brings more uncertainty to HRCD; consequently, it is arduous to ensure that the needs and usage of tools in disassembly tasks remain unchanged. Therefore, it is indispensable to recognise the types and status of disassembly products and tools by multi-target cognition. In this process, it is necessary to combine the characteristics of the products with the data format, select an appropriate learning network, and train it with a well-designed sample set.

The cognition of the background environment is the final part of multi-target cognition in HRCD, yet it remains a paramount part of cognition in the manufacturing environment (Christensen 2016). Through the cognition of the background environment, the HRCD cell and production line can recognise more people, industrial robots, AGVs and other manufacturing equipment in the environment, as well as their behaviours and intentions. This improves the intelligence of the collaborative manufacturing system as a whole.
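The following minimal Python sketch illustrates a distance-based safety strategy of this kind: the minimum distance between the human skeleton point cloud and the robot body points is mapped to a speed override. The thresholds and the linear ramp are illustrative assumptions in the spirit of ISO/TS 15066, not values from the paper.

# A minimal sketch of the distance-based safety strategy.
# Thresholds s_stop and s_slow are illustrative assumptions.

import numpy as np

def min_human_robot_distance(human_points: np.ndarray,
                             robot_points: np.ndarray) -> float:
    """Both inputs are (N, 3) arrays of 3D points in a common frame."""
    # Pairwise distances between every human point and every robot point.
    diff = human_points[:, None, :] - robot_points[None, :, :]
    return float(np.min(np.linalg.norm(diff, axis=2)))

def speed_override(distance: float,
                   s_stop: float = 0.3,
                   s_slow: float = 0.8) -> float:
    """Map the separation distance (m) to a robot speed override in [0, 1]."""
    if distance <= s_stop:
        return 0.0                    # protective stop
    if distance <= s_slow:
        # Linear ramp between the stop and full-speed thresholds.
        return (distance - s_stop) / (s_slow - s_stop)
    return 1.0                        # full programmed speed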
controllers is mostly closed. Secondly, HRCD requires various types of sensors, which often come from different manufacturers with different design patterns. In a word, closed industrial systems and limited sensor adaptation bring development challenges to HRCD in the design phase of the executive control system. In view of this, we put forward a device control and command execution architecture for HRCD, as shown in Figure 5.

Hardware, including robot controllers and sensor drivers, occupies the bottom layer of this architecture. It holds the firmware written by the industrial robot and sensor manufacturers as the basic drivers. Above the hardware are the industrial robots and sensor groups. However, software modules, such as perception, cognition and decision making, have to be implemented in the HRCMS on a computer operating system. For this reason, first of all, the multi-modal perception data from the industrial robot system and the sensor group have to be introduced into the operating system through interfaces. These interfaces could be official SDKs (such as the ABB PC SDK, the PC Interface Option or the Microsoft Kinect 2.0 SDK) or packages in ROS-Industrial (Edwards and Lewis 2012), running on Windows and Ubuntu Linux respectively. At the top is the control and execution layer with the HRCMS as the core. After strategies are made in the HRCMS, they are sent back to the robot controllers, finally realising device control and command execution.
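As a minimal illustration of such an interface, the sketch below subscribes to the /joint_states topic conventionally published by ROS-Industrial robot drivers and forwards the robot state to an HRCMS queue; the node name and the queue are illustrative assumptions.

# A minimal sketch of pulling robot state into the operating system
# through a ROS-Industrial interface, for consumption by the HRCMS.

import queue
import rospy
from sensor_msgs.msg import JointState

hrcms_inbox: "queue.Queue[dict]" = queue.Queue()

def on_joint_state(msg: JointState) -> None:
    # Repackage the robot state for the perception cyber module.
    hrcms_inbox.put({
        "stamp": msg.header.stamp.to_sec(),
        "names": list(msg.name),
        "positions": list(msg.position),
        "efforts": list(msg.effort),
    })

if __name__ == "__main__":
    rospy.init_node("hrcd_robot_interface")   # illustrative node name
    rospy.Subscriber("/joint_states", JointState, on_joint_state)
    rospy.spin()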
part can be handled in the same way. In Figure 10(a), we illustrate just half of the procedures with serial numbers, but in Figure 10(b) we show the structure and priority of all the procedures.
The sequence planning method is based on the bees algorithm, which was illustrated in our former work (Tang et al. 2017). In this process, a diaphragm coupling was captured and analysed in the virtual exploded model, yielding the disassembly hybrid graph in Figure 10(b). Besides, Figure 10(c) indicates the variation trend of the solution of this algorithm, and Figure 10(d) shows the result of the sequence planning. The time consumed by each procedure is assumed to be the disassembly unit time, which is also indicated in Figure 10(c).
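For illustration, the following is a minimal sketch of a bees-algorithm search over precedence-feasible disassembly sequences; the precedence table, unit times, cost function and parameters are toy assumptions, not the data of the diaphragm coupling case or the exact method of Tang et al. (2017).

# A minimal sketch of bees-algorithm-based disassembly sequence planning.

import random

PRECEDENCE = {3: {1}, 4: {1, 2}, 5: {3, 4}}   # task -> tasks that must come first
TASKS = [1, 2, 3, 4, 5]
UNIT_TIME = {1: 1, 2: 2, 3: 1, 4: 3, 5: 2}    # assumed disassembly unit times

def feasible(seq):
    seen = set()
    for t in seq:
        if not PRECEDENCE.get(t, set()) <= seen:
            return False
        seen.add(t)
    return True

def random_sequence():
    while True:
        seq = random.sample(TASKS, len(TASKS))
        if feasible(seq):
            return seq

def cost(seq):
    # Stand-in objective: weighted completion time in unit times.
    return sum(UNIT_TIME[t] * (i + 1) for i, t in enumerate(seq))

def neighbour(seq):
    # Swap two tasks; keep only precedence-feasible neighbours.
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s if feasible(s) else seq

def bees_algorithm(n_scouts=20, n_best=5, n_recruits=10, iters=100):
    sites = [random_sequence() for _ in range(n_scouts)]
    best = min(sites, key=cost)
    for _ in range(iters):
        sites.sort(key=cost)
        new_sites = []
        for site in sites[:n_best]:     # neighbourhood search around best sites
            local = min((neighbour(site) for _ in range(n_recruits)), key=cost)
            new_sites.append(min([site, local], key=cost))
        # Remaining scouts explore new random sequences.
        new_sites += [random_sequence() for _ in range(n_scouts - n_best)]
        sites = new_sites
        best = min([best] + sites, key=cost)
    return best, cost(best)

print(bees_algorithm())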
to get an estimated value of the time delay between the paths of 'Hand' and 'End-Effector' using a pixel ruler. Similarly, we randomly select 7 different points at turns in the start, middle and end phases of the movement.
From Table 2, we can see that the average time delay in our work is less than that of the related work. One possible reason is the better performance of the newer generation of sensor. Besides, the purpose of the related work was to make the position and rotation errors as small as possible without considering the time delay too much, whereas we adopt algorithms such as downsampling to control the data size. However, there are 3 sensors in our system and the resolution is much higher, which means our work can provide more detail and a larger scope of depth vision.
To discuss the time delay of the proposed system, Figure 13 shows the architectural difference between the typical Kinect SDK (single-sensor) pipeline and our 3-sensor network. Our system sacrifices 10 frames per second to obtain a much wider view of the human point cloud compared with a single-sensor solution. The total time delay of the robot reacting to human motion comprises the processing time for 20 frames per second and the time delay of signal transmission, resulting in a figure of 100–1000 ms.
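As an automated alternative to the manual pixel-ruler measurement, the sketch below estimates such a delay by cross-correlating the 'Hand' and 'End-Effector' motion traces; this is an illustrative assumption, not the procedure used in the experiment, and frame_rate=20 matches the 20 fps noted above.

# A minimal sketch of time-delay estimation via cross-correlation.

import numpy as np

def estimate_delay(hand: np.ndarray, effector: np.ndarray,
                   frame_rate: float = 20.0) -> float:
    """Return the delay (s) by which `effector` lags `hand`."""
    hand = hand - hand.mean()
    effector = effector - effector.mean()
    corr = np.correlate(effector, hand, mode="full")
    lag = np.argmax(corr) - (len(hand) - 1)   # lag in samples
    return lag / frame_rate

# Synthetic check: a trace delayed by 6 frames -> about 0.3 s at 20 fps.
t = np.linspace(0, 10, 200)
hand = np.sin(t)
effector = np.roll(hand, 6)
print(estimate_delay(hand, effector))   # approximately 0.3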
In order to address the time delay problem, we consider improving the data format and the sampling algorithms, as well as optimising the structure of the software, for example by using ROS 2.0 ('ROS 2') and improving the sampling rate of the industrial robot system. Moreover, different brands of industrial robots run on different hardware and software architectures, which poses a great challenge for satisfying the requirements of international standards.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work is supported by the National Natural Science Foundation of China (Grant Nos. 51775399 and 51675389), the Key Grant Project of the Hubei Technological Innovation Special Fund (Grant No. 2016AAA016), and the Engineering and Physical Sciences Research Council (EPSRC), UK (Grant No. EP/N018524/1).
ORCID
Zhihao Liu http://orcid.org/0000-0002-0222-912X
Wenjun Xu http://orcid.org/0000-0001-5370-3437
Duc Truong Pham http://orcid.org/0000-0003-3148-2404
References
Abdullah, T. A., K. Popplewell, and C. J. Page. 2003. “A Review of the Support Tools for the Process of Assembly Method Selection and
Assembly Planning.” International Journal of Production Research 41 (11): 2391–2410.
Awad, Roger E., K. G. Engelhardt, and Larry J. Leifer. 1983. “Development of Training Procedures for an Interactive Voice-Controlled
Robotic Aid.” Paper presented at the Proceedings of the 6th Annual Conference on Rehabilitation Engineering: The Promise of
Technology., San Diego, CA, USA.
Bogner, Karin, Ulrich Pferschy, Roland Unterberger, and Herwig Zeiner. 2018. “Optimised Scheduling in Human-Robot Collaboration –
A Use Case in the Assembly of Printed Circuit Boards.” International Journal of Production Research 56 (16): 5522–5540.
doi:10.1080/00207543.2018.1470695.
Cheng, Huiping, Wenjun Xu, Qingsong Ai, Quan Liu, Zude Zhou, and Pham Duc Truong. 2017. “Manufacturing Capability Assessment
for Human-Robot Collaborative Disassembly based on Multi-Data Fusion.” In 45th SME North American Manufacturing Research
Conference, edited by L. Wang, L. Fratini and A. J. Shih, 26–36.
Christensen, H. 2016. “A Roadmap for US Robotics from Internet to Robotics, 2016 edn.” Sponsored by National Science Foundation &
University of California, San Diego.
Djuric, Ana M., R. J. Urbanic, and J. L. Rickli. 2016. “A Framework for Collaborative Robot (CoBot) Integration in Advanced
Manufacturing Systems.” SAE International Journal of Materials and Manufacturing 9 (2). doi:10.4271/2016-01-0337.
Du, Guanglong, and Ping Zhang. 2014. “Markerless Human–Robot Interface for Dual Robot Manipulators Using Kinect Sensor.” Robotics
and Computer-Integrated Manufacturing 30 (2): 150–159.
Edwards, Shaun, and Chris Lewis. 2012. “ROS-Industrial: Applying the Robot Operating System (ROS) to Industrial Applications.”
Paper presented at the IEEE Int. Conference on Robotics and Automation, ECHORD Workshop.
Gerbers, Roman, Kathrin Wegener, Franz Dietrich, and Klaus Dröder. 2018. “Safe, Flexible and Productive Human-Robot-Collaboration
for Disassembly of Lithium-Ion Batteries.” In Recycling of Lithium-Ion Batteries, 99–126. Cham: Springer.
Glaessgen, E. H., and D. S. Stargel. 2012. “The Digital Twin Paradigm for Future NASA and U.S. Air Force Vehicles.” Paper presented at
the 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference 2012, April 23, 2012 - April
26, 2012, Honolulu, HI, United states.
Haapala, Karl R., Fu Zhao, Jaime Camelio, John W. Sutherland, Steven J. Skerlos, David A. Dornfeld, I. S. Jawahir, Hong Chao Zhang,
Andres F. Clarens, and Jeremy L. Rickli. 2013. “A Review of Engineering Research in Sustainable Manufacturing.” Journal of
Manufacturing Science & Engineering 135 (4): 599–619.
Han, Bo, Dhanya Menoth Mohan, Muhammad Azhar, Kana Sreekanth, and Domenico Campolo. 2016. “Human-Robot Collaboration for
Tooling Path Guidance.” Paper presented at the IEEE International Conference on Biomedical Robotics and Biomechatronics.
Holloway, Karen, Larry Leifer, and Machiel Van der Loos. 1985. “Factors in the Design and Development of an Interactive Human-robot
Workstation.” Paper presented at the Proceedings of the Eighth Annual Conference on Rehabilitation Technology: Technology - A
Bridge to Independence., Memphis, TN, USA.
Hu, Jhen-Jia, Chung-Ning Huang, Hau-Wei Wang, Po-Huang Shieh, and Jwu-Sheng Hu. 2013. "Safety-based Human-Robot Collaboration in Cellular Manufacturing: A Case Study of Power Protector Assembly." In 2013 International Conference on Advanced Robotics and Intelligent Systems.
Jovane, Francesco, Engelbert Westkämper, and David Williams. 2008. The ManuFuture Road to High-Adding-Value Competitive
Sustainable Manufacturing. Berlin: Springer.
Jovane, F., H. Yoshikawa, L. Alting, C. R. Boër, E. Westkamper, D. Williams, M. Tseng, G. Seliger, and A. M. Paci. 2008. “The Incoming
Global Technological and Industrial Revolution Towards Competitive Sustainable Manufacturing.” CIRP Annals 57 (2): 641–659.
Kato, Ryu, Marina Fujita, and Tamio Arai. 2010. "Development of Advanced Cellular Manufacturing System with Human-Robot Collaboration: Assessment of Mental Strain on Human Operators Induced by the Assembly Support." In 2010 IEEE RO-MAN.
Kobayashi, H. 2000. “Special Issue on Human-Robot Interaction.” Robotics and Autonomous Systems 31 (3): 129–130.
doi:10.1016/s0921-8890(99)00102-5.
Kulkarni, P. 2012. Reinforcement and Systemic Machine Learning for Decision Making. Hoboken, NJ: IEEE PRESS; WILEY.
Kusiak, Andrew. 2018. “Smart Manufacturing.” International Journal of Production Research 56 (1–2): 508–517. doi:10.1080/00207543.
2017.1351644.
Lee, Jay, Behrad Bagheri, and Hung An Kao. 2015. “A Cyber-Physical Systems Architecture for Industry 4.0-Based Manufacturing
Systems.” Manufacturing Letters 3: 18–23.
Liu, Zhihao, Quan Liu, Wenjun Xu, Aiming Liu, Zude Zhou, and Duc Truong Pham. 2017. “Multi-Mode Perception for Human-
Robot Collaborative Manufacturing: Framework and Enabling Technologies.” Paper presented at the International Conference
on Innovative Design and Manufacturing.
Liu, Jiayi, Zude Zhou, Duc Truong Pham, Wenjun Xu, Chunqian Ji, and Quan Liu. 2018. “Robotic Disassembly Sequence Planning Using
Enhanced Discrete Bees Algorithm in Remanufacturing.” International Journal of Production Research 56 (9): 3134–3151.
Matthias, Bjoern. 2015. “ISO/TS 15066 - Collaborative Robots - Present Status.” Paper presented at the European Robotics Forum.
Miranda, Jhonattan, Roberto Pérez-Rodríguez, Vicente Borja, Paul K Wright, and Arturo Molina. 2017. “Sensing, Smart and Sustainable
Product Development (S3 Product) Reference Framework.” International Journal of Production Research 57 (4): 1–22.
Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, et al. 2015. “Human-
level Control Through Deep Reinforcement Learning.” Nature 518 (7540): 529–533.
Monostori, László. 2014. “Cyber-Physical Production Systems: Roots, Expectations and R&D Challenges.” Procedia CIRP 17: 9–13.
Nikolakis, Nikolaos, Vasilis Maratos, and Sotiris Makris. 2019. “A Cyber Physical System (CPS) Approach for Safe
Human-Robot Collaboration in a Shared Workplace.” Robotics and Computer-Integrated Manufacturing 56: 233–243.
doi:10.1016/j.rcim.2018.10.003.
Ore, Fredrik, Bhanoday Reddy Vemula, Lars Hanson, and Magnus Wiktorsson. 2016. “Human – Industrial Robot Collaboration:
Application of Simulation Software for Workstation Optimisation.” Procedia CIRP 44: 181–186. doi:10.1016/j.procir.2016.02.002.
Rahimi, Mansour, and Waldemar Karwowski. 1990. “A Research Paradigm in Human-Robot Interaction.” International Journal of
Industrial Ergonomics 5 (1): 59–71. doi:10.1016/0169-8141(90)90028-Z.
Rahman, S. M. Mizanoor, Behzad Sadrfaridpour, and Yue Wang. 2016. "Trust-based Optimal Subtask Allocation and Model Predictive Control for Human-Robot Collaborative Assembly in Manufacturing." In Proceedings of the ASME 8th Annual Dynamic Systems and Control Conference, 2015, Vol. 2.
Robla-Gómez, S., Victor M Becerra, Jose R Llata, Esther Gonzalez-Sarabia, Carlos Torre-Ferrero, and Juan Perez-Oria. 2017. “Working
Together: A Review on Safe Human-Robot Collaboration in Industrial Environments.” IEEE Access 5: 26754–26773.
“ROS 2.” https://github.com/ros2/ros2/wiki.
Sadrfaridpour, Behzad, Hamed Saeidi, and Yue Wang. 2016. “An Integrated Framework for Human-Robot Collaborative Assembly in
Hybrid Manufacturing Cells.” Paper presented at the IEEE International Conference on Automation Science and Engineering.
Sanchez, Luis M, and Rakesh Nagi. 2001. “A Review of Agile Manufacturing Systems.” International Journal of Production Research
39 (16): 3561–3600.
Silver, D., A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, et al. 2016. “Mastering the Game of Go
with Deep Neural Networks and Tree Search.” Nature 529 (7587): 484–489.
Skarredghost. 2016. “The Difference Between Kinect v2 and v1.” https://skarredghost.com/2016/12/02/the-difference-between-kinect-
v2-and-v1/.
Tan, Jeffrey Too Chuan, and Tamio Arai. 2010. “Analytic Evaluation of Human-Robot System for Collaboration in Cellular
Manufacturing System.” In 2010 IEEE/ASME International Conference on Advanced Intelligent Mechatronics.
Tan, Jeffrey Too Chuan, Feng Duan, Ryu Kato, and Tamio Arai. 2012. “Safety Strategy for Human-Robot Collabora-
tion: Design and Development in Cellular Manufacturing (vol 24, pg 839, 2012).” Advanced Robotics 26 (10): 1203.
doi:10.1080/01691864.2012.704724.
Tang, Quan, Wenjun Xu, Zude Zhou, and Duc Truong Pham. 2017. “Sequence Planning for Human-Robot Collaborative Disassembly in
Remanufacturing.” Paper presented at the International Conference on Innovative Design and Manufacturing.
Tao, Yiyun, Kai Meng, Peihuang Lou, Xianghui Peng, and Xiaoming Qian. 2018. “Joint Decision-Making on Automated Disassem-
bly System Scheme Selection and Recovery Route Assignment Using Multi-Objective Meta-Heuristic Algorithm.” International
Journal of Production Research 57 (1): 1–19.
Tao, Fei, Meng Zhang, Jiangfeng Cheng, and Qinglin Qi. 2017. “Digital Twin Workshop: A New Paradigm for Future Workshop.”
Computer Integrated Manufacturing Systems 23: 1–9.
Tsarouchi, Panagiota, Sotiris Makris, and George Chryssolouris. 2016. “Human-Robot Interaction Review and Challenges
on Task Planning and Programming.” International Journal of Computer Integrated Manufacturing 29 (8): 916–931.
doi:10.1080/0951192x.2015.1130251.
Waldherr, S., R. Romero, and S. Thrun. 2000. “A Gesture Based Interface for Human-Robot Interaction.” Autonomous Robots 9 (2):
151–173. doi:10.1023/a:1008918401478.
Wang, Lihui. 2015. “Collaborative Robot Monitoring and Control for Enhanced Sustainability.” International Journal of Advanced
Manufacturing Technology 81 (9–12): 1433–1445.
Wang, Xi Vincent, Zsolt Kemény, József Váncza, and Lihui Wang. 2017. “Human–Robot Collaborative Assembly in Cyber-physical
Production: Classification Framework and Implementation.” CIRP Annals - Manufacturing Technology 66 (1): 5–8.
Weber, A. 2014. “Human-Robot Collaboration Comes of Age.” Assembly 57.
Yang, Sida, Wenjun Xu, Zhihao Liu, Zude Zhou, and Duc Truong Pham. 2018. “Multi-Source Vision Perception for Human-Robot
Collaboration in Manufacturing.” Paper presented at the IEEE 15th International Conference on Networking, Sensing and Control.
Zanchettin, Andrea Maria, Nicola Maria Ceriani, Paolo Rocco, Hao Ding, and Bjoern Matthias. 2016. “Safety in Human-Robot Collab-
orative Manufacturing Environments: Metrics and Control.” IEEE Transactions on Automation Science and Engineering 13 (2):
882–893. doi:10.1109/tase.2015.2412256.
Zanchettin, Andrea Maria, and Paolo Rocco. 2013. “Path-Consistent Safety in Mixed Human-Robot Collaborative Manufacturing
Environments.” In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, edited by N. Amato, 1131–1136.
Zhao, Dong-Bin, Kun Shao, Yuan-Heng Zhu, Dong Li, Ya-Ran Chen, Hai-Tao Wang, De-Rong Liu, Tong Zhou, and Cheng-Hong
Wang. 2016. “Review of Deep Reinforcement Learning and Discussions on the Development of Computer Go.” Kongzhi Lilun Yu
Yingyong/Control Theory and Applications 33 (6): 701–717. doi:10.7641/CTA.2016.60173.