

Article
Electromyography-Based Biomechanical Cybernetic Control of a
Robotic Fish Avatar
Manuel A. Montoya Martínez 1,† , Rafael Torres-Córdoba 1 , Evgeni Magid 2,3 and Edgar A. Martínez-García 1, *,†

1 Laboratorio de Robótica, Institute of Engineering and Technology, Universidad Autónoma de Ciudad Juárez,
Ciudad Juárez 32310, Mexico; [email protected] (M.A.M.M.); [email protected] (R.T.-C.)
2 Institute of Information Technology and Intelligent Systems, Kazan Federal University, Kazan 420008, Russia;
[email protected]
3 HSE Tikhonov Moscow Institute of Electronics and Mathematics, HSE University, Moscow 101000, Russia
* Correspondence: [email protected]
† These authors contributed equally to this work.

Abstract: This study introduces a cybernetic control and architectural framework for a robotic fish
avatar operated by a human. The behavior of the robot fish is influenced by the electromyographic
(EMG) signals of the human operator, triggered by stimuli from the surrounding objects and scenery.
A deep artificial neural network (ANN) with perceptrons classifies the EMG signals, discerning
the type of muscular stimuli generated. The research unveils a fuzzy-based oscillation pattern
generator (OPG) designed to emulate functions akin to a neural central pattern generator, producing
coordinated fish undulations. The OPG generates swimming behavior as an oscillation function,
decoupled into coordinated step signals, right and left, for a dual electromagnetic oscillator in the
fish propulsion system. Furthermore, the research presents an underactuated biorobotic mechanism
of the subcarangiform type comprising a two-solenoid electromagnetic oscillator, an antagonistic
musculoskeletal elastic system of tendons, and a multi-link caudal spine composed of helical springs.
The biomechanics dynamic model and control for swimming, as well as the ballasting system for
submersion and buoyancy, are deduced. This study highlights the utilization of EMG measurements
encompassing sampling time and µ-volt signals for both hands and all fingers. The subsequent
feature extraction resulted in three types of statistical patterns, namely, Ω, γ, λ, serving as inputs
for a multilayer feedforward neural network of perceptrons. The experimental findings quantified
controlled movements, specifically caudal fin undulations during forward, right, and left turns, with
a particular emphasis on the dynamics of caudal fin undulations of a robot prototype.

Keywords: biorobotics; cybernetics; neural network; robot fish; EMG signals; robotic avatar; dynamic control

Citation: Montoya Martínez, M.A.; Torres-Córdoba, R.; Magid, E.; Martínez-García, E.A. Electromyography-Based Biomechanical Cybernetic Control of a Robotic Fish Avatar. Machines 2024, 12, 124. https://doi.org/10.3390/machines12020124

Academic Editor: Med Amine Laribi

Received: 3 January 2024; Revised: 5 February 2024; Accepted: 6 February 2024; Published: 9 February 2024

Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Avatar robotics involves remotely controlling a robot to interact with the physical
environment on behalf of a human operator, enabling them to virtually embody the robot
and perform actions as if physically present [1,2]. This transformative technology extends
human presence to remote or hazardous locations, with applications spanning space
exploration, disaster response, remote inspection, telemedicine, and diverse domains.
Leveraging progress in robotics, teleoperation systems, sensory feedback interfaces, and
communication networks, avatar robotics enhances human capabilities, ensures safer
operations, and broadens human presence and expertise in various fields [3,4].
Furthermore, cybernetic control functions as a regulatory system utilizing feedback
mechanisms to uphold stability and achieve desired outcomes [5]. The incorporation of
feedback loops is central to cybernetic control systems, continuously monitoring a system’s
behavior, comparing it to a reference state, and generating corrective actions to address any
deviations. This iterative feedback process facilitates self-regulation and goal attainment

Machines 2024, 12, 124. https://doi.org/10.3390/machines12020124 https://www.mdpi.com/journal/machines



within the system. The application domains of cybernetic control span engineering, biology,
and psychology, with the goal of enabling robots to interact with humans in more intuitive
ways [6]. This involves adapting their actions and responses based on human feedback and
behavior, cultivating a more seamless and responsive human–robot interaction.
Cybernetic biorobotics, at its core, is an interdisciplinary frontier that harmonizes
principles from cybernetics, biology, and robotics. Its primary mission is the exploration and
development of robots or robotic systems intricately inspired by the marvels of biological
organisms. This field is driven by the ambition to conceive robots capable of mimicking
and integrating the sophisticated principles and behaviors observed in living entities.
Researchers draw inspiration from the intricate control systems of biological organisms
and this creative synthesis results in the creation of robots characterized by adaptive and
intelligent behaviors, thus mirroring the intricacies found in the natural world. Bioinspired
robotics, a central focus within this discipline, involves distilling the fundamental principles
and behaviors intrinsic to biological entities and skillfully incorporating them into the
design and control of robotic systems. It has the potential to advance the development
of robots endowed with locomotion and manipulation capabilities akin to animals, as
well as robots capable of adapting to dynamic environments or interacting with humans
in more natural and intuitive ways [7]. Moreover, research in cybernetic biorobotics can
offer valuable insights into comprehending biological systems, fostering advancements in
disciplines like neuroscience and biomechanics. Furthermore, remote cybernetic robots
may rely on haptic systems as essential interfaces. A haptic system, characterized by its
ability to provide users with a sense of touch or tactile feedback through force, vibration,
or other mechanical means, comprises a haptic interface and a haptic rendering system.
Collaboratively, these components simulate touch sensations, enabling users to engage
with virtual or remote environments in a tactile manner [8].
This research introduces a control and sensing architecture that integrates a cybernetic
scheme based on the recognition of electromyographic control signals, governing a range of
locomotive behaviors in a robotic fish. Conceptually, the human operator receives feedback
signals from the sensors of the biorobotic avatar, conveying information about its remote
environment. The proposed approach stands out due to its key features and contributions,
which include:
1. The exposition of an innovative conceptual cybernetic fish avatar architecture.
2. The creation of an EMG data filtering algorithm, coupled with a method for extract-
ing, classifying, and recognizing muscular patterns using a deep ANN, serves as a
cybernetic interface for the governance of the fish avatar.
3. The development of a fuzzy-based oscillation pattern generator (OPG) designed to
generate periodic oscillation patterns around the fish’s caudal fin. These coordinated
oscillations are decoupled into right and left step functions, specifically crafted to input
into a lateral pair of electromagnetic coils, thereby producing undulating swimming
motions of the robot fish.
4. The conception of a bioinspired robotic fish mechanism is characterized by the incor-
poration of underactuated elements propelled by serial links featuring helical springs.
This innovative design is empowered by a dual solenoid electromagnetic oscillator
and a four-bar linkage, reflecting a novel approach to bioinspired robotics.
5. The derivation of closed-form control laws for both the undulation of the underactu-
ated caudal multilink dynamics and the ballasting system.
Section 2 provides a comprehensive discussion of the comparative analysis of the
current state of the art. Section 3 provides a detailed description of the proposed
architecture of the cybernetic system model. Section 4 presents an approach for filtering
electromyography (EMG) data and delves into an in-depth discussion of a classifier based
on a deep ANN for the recognition of hand-motion EMG stimuli patterns. Section 5 presents
the development of a fuzzy-based oscillation pattern generator. Section 6 details the robot’s
mechanism parts and its dynamic model. Section 7 focuses on the development of a feedback
control for the fish’s ballasting system. Finally, Section 8 provides the concluding
remarks of the research study.

2. Analysis of the State of the Art


This section synthesizes the relevant literature and provides insights into the current
state of the art. Further, it aims to examine and evaluate the existing research and advance-
ments in the field. This brief analysis identifies and compares different aspects, providing a
comprehensive overview of the relevant research and advancements, their methodologies,
and their outcomes.
Multiple basic concepts of cybernetics [9] at the intersection of physics, control theory,
and molecular systems were presented in [10], where a speed-gradient approach to mod-
eling the dynamics of physical systems is discussed. A novel research approach, namely
Ethorobotics, proposes the use and development of advanced bioinspired robotic replicas
as a method for investigating animal behavior [11]. In the domain of telepresence and
teleoperation, diverse systems and methodologies have been devised to facilitate remote
control of robots [12]. One such system is the multi-robot teleoperation system based on a
brain–computer interface, as documented by [13]. This system aims to enable individuals
with severe neuromuscular deficiencies to operate multiple robots solely through their brain
activity, thus offering telepresence via a thought-based interaction mode. A comprehen-
sive review addressing the existing teleoperation methods and techniques for enhancing
the control of mobile robots has been presented by [14]. This review critically analyzes,
categorizes, and summarizes the existing teleoperation methods for mobile robots while
highlighting various enhancement techniques that have been employed. It makes clear
the relative advantages and disadvantages associated with these methods and techniques.
The field of telepresence and teleoperation robotics has witnessed substantial attention and
interest over the past decade [15], finding extensive applications in healthcare, education,
surveillance, disaster recovery, and corporate/government sectors. In the specific context of
underwater robots, gesture recognition-based teleoperation systems have been developed
to enable users to control the swimming behavior of these robots. Such systems foster
direct interaction between onlookers and the robotic fish, thereby enhancing the intuitive
experience of human–robot interaction. Furthermore, efforts have been made to enhance
the consistency and quality of robotic fish tails through improved fabrication processes, and
target tracking algorithms have been developed to enhance the tracking capabilities of these
robots [16]. The study in [17] developed teleoperation for remote control of a robotic fish by
hand-gesture recognition. It allowed direct interaction between onlookers and the biorobot.
Another notable system is the assistive telepresence system employing augmented reality
in conjunction with a physical robot, as detailed in the work by [18]. This system leverages
an optimal non-iterative alignment solver to determine the optimally aligned pose of the
3D human model with the robot, resulting in faster computations compared to baseline
solvers and delivering comparable or superior pose alignments. The review presented
in [19] analyzes the progress of robot skin in multimodal sensing and machine perception
for sensory feedback in feeling proximity, pressure, and temperature for collaborative robot
applications considering immersive teleoperation and affective interaction. Reference [20]
reported an advanced robotic avatar system designed for immersive teleoperation, having
some key functions such as human-like manipulation and communication capabilities,
immersive 3D visualization, and transparent force-feedback telemanipulation. Suitable
human–robot collaboration in medical application has been reported [21], where force
perception is augmented for the human operator during needle insertion in soft tissue.
Telepresence of mobile robotic systems may incorporate remote video transmission to steer
the robot by seeing through its eyes remotely. Reference [22] presented an overview includ-
ing social application domains. Research has been conducted on the utilization of neural
circuits to contribute to limb locomotion [23] in the presence of uncertainty. Optimizing
the data showed the combination of circuits necessary for efficient locomotion. A review
has also been conducted on central pattern generators (CPGs) employed for locomotion

control in robots [24]. This review encompasses neurobiological observations, numerical


models, and robotic applications of CPGs. Reference [25] describes an extended mathe-
matical model of the CPG supported by two neurophysiological studies: identification of
a two-layered CPG neural circuitry and a specific neural model for generating different
patterns. The CPG model is used as the low-level controller of a robot to generate walking
patterns, with the inclusion of an ANN as a layer of the CPG to produce rhythmic and
non-rhythmic motion patterns. The work in [26] presented a review of bionic robotic
fish, tackling major concepts in kinematics, control, learning, hydrodynamic forces, and
critical concepts in locomotion coordination. The research presented in [27] reviews the
human manual control of devices in cybernetics using mathematical models and advances
of theory and applications, from linear time-invariant modeling of stationary conditions
to methods and analyses of adaptive and time-varying cybernetics–human interactions
in control tasks. New foundations for cybernetics are expected to emerge and impact numerous
domains involving human manual control and neuromuscular system modeling.
Building upon the preceding analysis regarding the relevant literature, the subsequent
table (Table 1) encapsulates the primary distinctions articulated in this study in relation to
the most pertinent literature identified.

Table 1. Pertinent related work: comprehensive comparison.

Research Topic | References | Distinctive Aspect of This Study
Remote mobile robots; HRI teleoperation | [13,28]; [16,17] | Swimming response from biological EMG stimuli.
Teleoperation and telepresence: HRI reviews, techniques, and applications | [12,14]; [15,29] | Haptic perception from robot to human; cybernetic control from human to robot.
Telepresence by avatar and immersion systems | [18,30,31]; [20] | Haptic and 2D visual data avatar, and neuromuscular control response.
Central pattern generator (CPG); neural and locomotion studies | [23,24]; [25,32] | Neuro-fuzzy caudal swim undulation pattern generator.
Human–robot collaboration, haptics and teleoperation | [19,21,22]; [33] | Reactive swimming by remote human stimuli and haptic robot feedback.
Cybernetic control and bionic systems | [9–11]; [26,27] | Underactuated biomechanical model and propulsive electromagnetic oscillator.

As delineated in Table 1, the present study introduces distinctive elements that set it
apart from the recognized relevant literature. However, it is noteworthy to acknowledge
that various multidisciplinary domains may exhibit commonalities. Across these diverse
topics, shared elements encompass robotic avatars, teleoperation, telepresence, immersive
human–robot interfaces, as well as haptic or cybernetic systems in different application
domains. In this research, the fundamental principle of a robotic avatar entails controlling
its swimming response to biological stimuli from the human operator. The human controller
is able to gain insight into the surrounding world of the robotic fish avatar through a haptic
interface. This interface allows the human operator to yield biological electromyography
stimuli as the result of their visual and skin impressions (e.g., pressure, temperature,
heading vibrations). The biorobotic fish generates its swimming locomotive behavior,
which is governed by EMG stimuli yielded in real time by the human. Through a neuro-fuzzy
controller, the neuronal part (cybernetic observer) classifies the type of human EMG
reaction, and the fuzzy part determines the swimming behavior.

3. Conceptual System Architecture


This section encompasses a comprehensive framework that highlights the integration
of various components to propose a cohesive cybernetic robotic model. In addition, this
section outlines the key concepts and elucidates their interactions within the system.
Figure 1 presents an overview of the key components constituting the proposed system
architecture. This manuscript thoroughly explores the modeling of four integral elements:
(i) the cybernetic human controller, employing ANN classification of EMG signals; (ii) a
fuzzy-based locomotion pattern generator; (iii) an underactuated bioinspired robot fish;
and (iv) the robot’s sensory system, contributing feedback for the haptic system. While
we will discuss the relevance and impact of the latter item within the architecture, it is
important to note that the detailed exploration of topics related to haptic development and
wearable technological devices goes beyond the scope of this paper and will be addressed
in future work. Nevertheless, we deduce the observable variables that serve as crucial
inputs for the haptic system.

Figure 1. Cybernetic robotic avatar system architecture. Signal electrodes (green circles) and ground
electrodes (red circles) are experimentally positioned on the Flexor Digitorum Superficialis, Flexor
Digitorum Profundus, and Flexor Carpi muscles.

Essentially, there are six haptic feedback sensory inputs of interest for the human,
representing the observable state of the avatar robot: Eulerian variables, including angular
and linear displacements and their higher-order derivatives; biomechanical caudal motion;
hydraulic pressure; scenario temperature; and passive vision. Figure 2 left provides an
illustration of the geometric distribution of the sensing devices.
The instrumented robot is an embodiment of the human submerged in water, featuring
an undulatory swimming mechanical body imbued with muscles possessing underactuated
characteristics. These features empower the biorobotic avatar to execute movements and
swim in its aquatic surroundings.

Figure 2. Robot’s underactuated mechanisms and sensory system onboard.

The observation models aim to provide insights into how these sensory perceptions
are conveyed to the haptic helmet with haptic devices, including a wheel reaction mecha-
nism. A comprehensive schema emerges wherein the haptic nexus, bolstered by pivotal

human biosensorial components, including gravireceptors, Ruffini corpuscles, Pacinian


receptors, and retinal photoreceptors, converges to interface with the sensory substrate
of the human operator. Such a convergence engenders a cascading sequence wherein
biological input stimuli coalesce to yield discernible encephalographic activity, the primary
layer of subsequent electromyographic outputs. These consequential EMG outputs un-
dergo processing in a swim oscillatory pattern generator, thereby embodying control of
biomechanical cybernetic governance.
In accordance with Figure 1, it is noteworthy that the various haptic input variables,
such as temperature, pressure, and the visual camera images, represent direct sensory
measurements transmitted from the robotic fish to the components of the haptic interface.
Conversely, the robotic avatar takes on the role of a thermosensory adept in order for the
human to assess the ambient thermal landscape. Thus, from this discernment, a surrogate
thermal approach is projected onto the tactile realm of the human operator through the
modulation of thermally responsive plates enmeshed within the haptic interface. There-
fore, a crafted replica of the temperature patterns detected by the robotic aquatic entity is
seamlessly integrated into the human sensory experience. This intertwining of thermal em-
ulation is reached by the network of Ruffini corpuscles, intricately nestled within the human
skin, thereby enhancing the experiential authenticity of this multisensory convergence. As
for interaction through the haptic functions, the Paccinian corpuscles function as discerning
receptors, proficiently registering subtle haptic pressures. It finds its origin in the dynamic
tactile signals inherent to the aquatic habitat, intricately associated with the underwater
depth traversed by the robotic avatar. Integral to the comprehensive sensory scheme, the
optical sensors housed within the robotic entity acquire visual data. These visual data are
subsequently channeled to the human’s cognition through the haptic interface’s perceptual
canvas. Within this, the human sensory apparatus assumes the role of an engaged receptor,
duly transducing these visual envoys through the lattice of retinal photoreceptors.
Embedded within the robotic fish’s body, several inertial measurement units (IMUs)
play a pivotal role in quantifying Eulerian inclinations intrinsic to the aquatic environment.
These intricate angular displacements are subsequently channeled to the human operator,
thereby initiating an engagement of the reaction wheel mechanism. As a consequential
outcome of this interplay, a synchronized emulation of tilting motion is induced, mirroring
the nuanced cranial adjustments executed by the human operator. Any alignment of move-
ments assumes perceptible form, relayed through the human’s network of gravireceptors
nestled within the internal auditory apparatus. The Euler angular and linear speeds are not
directly measured; instead, they must be integrated using various sensor fusion approaches
to enhance the avatar’s fault tolerance in reading its environment. For example, the angular
observations of the robot fish are obtained through numerical integro-differential equations,
which are solved online as measurements are acquired. Let us introduce the following
notation: \alpha_\iota for the inclinometer angles, a_\alpha for the accelerometer readings, and
\dot{\alpha}_g for the direct angular-rate measurement derived from the gyroscopes. The observation
of the fish’s roll velocity, combining the three inertial sensors, is

    \omega_\alpha = \frac{d\alpha_\iota}{dt} + \dot{\alpha}_g + \frac{1}{d_\alpha} \int_t a_\alpha \, dt,    (1)

while the pitch velocity is modelled by

    \omega_\beta = \frac{d\beta_\iota}{dt} + \dot{\beta}_g + \frac{1}{d_\beta} \int_t a_\beta \, dt,    (2)

and the yaw velocity is obtained by

    \omega_\gamma = \frac{d\gamma_\iota}{dt} + \dot{\gamma}_g + \frac{1}{d_\gamma} \int_t a_\gamma \, dt.    (3)

Within this context, the tangential accelerations experienced by the robot body are
denoted a_{\alpha,\beta,\gamma} [m/s^2]. Additionally, the angular velocities measured by the
gyroscopes are represented by \dot{\alpha}, \dot{\beta}, \dot{\gamma} [rad/s]. Correspondingly,
the inclinometers provide angle measurements denoted \alpha, \beta, \gamma [rad]. These
measurements collectively contribute to the comprehensive observability and characterization
of the robot’s dynamic behavior and spatial orientation.
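As a rough numerical sketch of the roll-channel fusion of Eq. (1), assuming uniformly sampled, time-aligned signals; the function name, the rectangular integration rule, and the lever-arm value `d_alpha` are illustrative assumptions rather than values from this study:

```python
import numpy as np

def roll_velocity(alpha_incl, alpha_dot_gyro, a_alpha, dt, d_alpha=0.05):
    """Roll-velocity observation following the structure of Eq. (1):
    finite difference of the inclinometer angle, plus the gyroscope rate,
    plus the scaled time integral of the tangential acceleration.
    d_alpha (an effective lever arm, in metres) is a placeholder value."""
    d_incl = np.gradient(np.asarray(alpha_incl, float), dt)        # d(alpha_iota)/dt
    integ = np.cumsum(np.asarray(a_alpha, float)) * dt / d_alpha   # (1/d_a) * int a_a dt
    return d_incl + np.asarray(alpha_dot_gyro, float) + integ
```

The same pattern would apply to the pitch and yaw channels of Eqs. (2) and (3) with their respective sensor streams.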
Furthermore, the oscillations of the caudal tail are reflections of the dynamics of the
underactuated spine. These dynamics are captured by quantifying encoder pulses, denoted
ηt , which provide precise angular positions for each vertebra. Given that real-time angular
measurements of the vertebrae are desired, higher-order data are prioritized. Consequently,
derivatives are computed by initiating from the Taylor series to approximate the angle of
each vertebral element with respect to time, denoted t.

    \varphi_i \approx \frac{\varphi_i^{(0)}}{0!}(t_2 - t_1)^0 + \frac{\varphi_i^{(1)}}{1!}(t_2 - t_1)^1 + \frac{\varphi_i^{(2)}}{2!}(t_2 - t_1)^2 + \ldots    (4a)

thus, rearranging the notation and truncating up to the first derivative,

    \varphi_2 \approx \varphi_1 + \varphi_i^{(1)}(t_2 - t_1),    (4b)

solving for \varphi_i^{(1)}, the first-order derivative (\varphi^{(1)}(t) \equiv \dot{\varphi}(t)) is given by

    \dot{\varphi}(t) = \frac{\varphi_2 - \varphi_1}{t_2 - t_1},    (4c)

and assuming a vertebra’s angular measurement model in terms of the encoder’s pulses \eta
with resolution R, substituting the pulse-count model into the angular speed function gives,
for the first vertebra,

    \dot{\varphi}_1 = \frac{2\pi}{R(t_2 - t_1)}\eta_2 - \frac{2\pi}{R(t_2 - t_1)}\eta_1 = \frac{2\pi}{R}\left(\frac{\eta_2 - \eta_1}{t_2 - t_1}\right).    (5a)

As for the subsequent vertebrae,

    \dot{\varphi}_2 = \dot{\varphi}_1 + \frac{2\pi}{R}\left(\frac{\eta_2 - \eta_1}{t_2 - t_1}\right),    (5b)

    \dot{\varphi}_3 = \dot{\varphi}_1 + \dot{\varphi}_2 + \frac{2\pi}{R}\left(\frac{\eta_2 - \eta_1}{t_2 - t_1}\right),    (5c)

and

    \dot{\varphi}_4 = \dot{\varphi}_1 + \dot{\varphi}_2 + \dot{\varphi}_3 + \frac{2\pi}{R}\left(\frac{\eta_2 - \eta_1}{t_2 - t_1}\right).    (5d)
The preliminary sensing models serve as a comprehensive representation, strategically
integrated into the control models as crucial feedback terms. A detailed exploration of this
integration is elucidated in Sections 6 and 7.
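The cumulative encoder relations of Eqs. (5a)–(5d) can be sketched as follows; the function name and the encoder resolution `R` are assumptions for illustration, not parameters reported in this study:

```python
import math

def vertebra_rates(eta1, eta2, t1, t2, R=2048, n_joints=4):
    """Angular rates of the caudal vertebrae from encoder pulse counts,
    following Eqs. (5a)-(5d): every joint adds the common pulse-rate term
    (2*pi/R)*(eta2 - eta1)/(t2 - t1) to the sum of all preceding rates.
    R (pulses per revolution) is an assumed resolution."""
    base = (2.0 * math.pi / R) * (eta2 - eta1) / (t2 - t1)
    rates = []
    for _ in range(n_joints):
        rates.append(sum(rates) + base)  # phi1_dot, phi2_dot, phi3_dot, phi4_dot
    return rates
```

Under these relations each successive rate accumulates all preceding ones plus the common pulse-rate term, reflecting the serially coupled underactuated spine.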

4. Deep ANN-Based EMG Data Classification


This section details the experimental acquisition of EMG data, their spatial filtering,
and pattern extraction achieved through the statistical combination of linear envelopes.
Additionally, an adaptive method for class separation and data dispersion reduction is
described. This section also covers the structure of a deep neural network, presenting its
classification output results from mapping input EMG stimuli.
A related study reported a system for automatic pattern generation for neurosimu-
lation in [34], where a neurointerface was used as a neuro-protocol for outputting finger
deflection and nerve stimulation. In the present research, numerous experiments were
carried out to pinpoint the optimal electrode placement and achieve precise electromyo-
graphic readings for each predefined movement in the experiment. The positions of the

electrodes were systematically adjusted, and the results from each trial were compared.
Upon data analysis, it was discerned that the most effective electrode placement is on the
ulnar nerve, situated amidst the flexor digitorum superficialis, flexor digitorum
profundus, and flexor carpi ulnaris muscles. A series of more than ten experiments was executed
for each planned stimulus or action involving the hands, allowing a 2 s interval between each
action, including the opening and closing of the hands, as well as the extension and flexion of
the thumb, index, middle, ring, and little fingers. The data were measured by a g.MOBIlab+
device with two-channel electrodes and quantified in microvolts [µV] over sampling time [s], as
depicted in Figure 3.
The data acquired from the electromyogram often exhibit substantial noise, attributed
to both the inherent nature of the signal and external vibrational factors. To refine the data
quality by mitigating this noise, a filtering process is essential.


Figure 3. Experimental raw EMG data (from left to right): (a) Left and right hand, left and right
thumb. (b) Left and right index, left and right middle. (c) Left and right ring, left and right little.

In this context, a second-order Notch filter was utilized. This filter is tailored to target
specific frequencies linked to noise, proving particularly effective in eliminating electrical
interferences and other forms of stationary noise [35]. A Notch filter is a band-rejection
filter to greatly reduce interference caused by a specific frequency component or a narrow
band signal. Hence, in the Laplace space, the second-order filter is represented by the
analog Laplace domain transfer function:

    H(s) = \frac{s^2 + \omega_o^2}{s^2 + 2\xi\omega_o s + \omega_o^2},    (6)

where ω0 signifies the cut angular frequency targeted for elimination, and 2ξ signifies the
damping factor or filter quality, determining the bandwidth. Consequently, we solve to
obtain its solution in the physical variable space. The bilinear transformation relates the
variable s from the Laplace domain to the variable zk in the Z domain, considering T as the
sampling period, and is defined as follows:

    s \doteq \frac{1}{T}\left(\frac{z_k - 1}{z_k + 1}\right).    (7)

Upon substituting the previous expression into the transfer function of the analog Notch
filter and algebraically simplifying, the following transfer function in the Z domain is

obtained, redefining the notation as \upsilon_t = z_k just to meet equivalence with the physical
variable:

    h(\upsilon_t) = \frac{1 - 2\cos(\omega_0 t)\,\upsilon_{t-1} + \upsilon_{t-2}}{1 - 2\xi\cos(\omega_0 t)\,\upsilon_{t-1} + \xi\,\upsilon_{t-2}}.    (8)
The second-order Notch filter h(υ) was employed on raw EMG data to alleviate noise
resulting from electrical impedance and vibrational electrode interference, with parameters
set at ω0 = 256 Hz and ξ = 0.1 and results depicted in Figure 4.
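A minimal digital notch in the spirit of Eqs. (6)–(8) can be sketched as below. Note it uses the common constrained-pole form, with zeros on the unit circle at the notch frequency and poles at radius `r` just inside it, as a stand-in for the paper's exact discretization; the function name and sample parameter values are illustrative assumptions:

```python
import numpy as np

def notch(x, f0, fs, r=0.95):
    """Second-order notch filter: zeros on the unit circle at the notch
    frequency f0, poles at radius r just inside it (r sets the rejection
    bandwidth, playing the role of the quality/damping factor)."""
    x = np.asarray(x, dtype=float)
    theta = 2.0 * np.pi * f0 / fs
    b = (1.0, -2.0 * np.cos(theta), 1.0)        # numerator coefficients
    a = (1.0, -2.0 * r * np.cos(theta), r * r)  # denominator coefficients
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = b[0] * x[n]
        if n >= 1:
            y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
    return y
```

A sinusoid at exactly `f0` is driven to zero in steady state, while components away from the notch pass with near-unit gain, which is the behavior exploited here to suppress narrowband electrical interference in the EMG stream.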



Figure 4. Notch-filtered EMG showing one period. (a) Right hand. (b) Right index. (c) Right middle.

Subsequently, whereas other studies explored time- and frequency-domain feature extraction, as in [36], the present work processes the Notch-filtered data through three distinct filters, or linear envelopes. This stage serves as a secondary spatial filter and functions as a pattern extraction mechanism. The envelopes comprise a filter for average variability, one for linear variability, and another for average dispersion; each serves a specific purpose, enabling the analysis of different aspects of the signal. Consider n as the number of measurements constituting a single experimental stimulus, and let N represent the entire sampled data space obtained from multiple measurements related to the same stimulus. Furthermore, denote v̂i as the ith EMG measurement of an upper limb, measured in microvolts (µV). From these statements, the following Propositions 1–3 are introduced as new data patterns.

Proposition 1 (Filter γ). The γ pattern refers to a statistical linear envelope described by the
difference of a local mean v̂k in a window of samples and the statistical mean υ̂i of all samples in an
experiment.
γ(vk) = v̂i − (1/n) ∑_{k=1}^{n} vk.   (9)

Proposition 2 (Filter λ). The λ(vk ) pattern refers to a statistical linear envelope denoted by
the difference of a local mean v̂k in a window of samples and the statistical mean υ̂i of the whole
population of experiments of the same type,

λ(vk) = v̂i − (1/Nk) ∑_{k=1}^{Nk} v̂k.   (10)

Proposition 3 (Filter Ω). The Ω pattern refers to a statistical linear envelope denoted by the
difference of statistical means between the population of one experiment v̂i and the whole population
of numerous experiments of the same type:

Ω(vk) = (1/n) ∑_{k=1}^{n} vk − (1/N) ∑_{k=1}^{N} vk.   (11)
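The three envelopes of Propositions 1–3 reduce to simple running statistics. The sketch below is one interpretation of the windowing described in the text (one experiment of n samples versus the whole population of N samples of the same stimulus type); the function and variable names are illustrative assumptions.

```python
def emg_patterns(v, population):
    """Statistical linear envelopes of Eqs. (9)-(11) (interpretive sketch).

    v          : samples of one experimental stimulus (n samples, in µV)
    population : all samples of every experiment of the same type (N samples)
    Returns per-sample gamma and lambda lists plus the scalar Omega.
    """
    n = len(v)
    N = len(population)
    mean_v = sum(v) / n                   # local experiment mean, Eq. (9)
    mean_pop = sum(population) / N        # whole-population mean, Eq. (10)
    gamma = [vi - mean_v for vi in v]     # γ: sample minus experiment mean
    lam = [vi - mean_pop for vi in v]     # λ: sample minus population mean
    omega = mean_v - mean_pop             # Ω: difference of means, Eq. (11)
    return gamma, lam, omega
```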

Hence, let the vector δ⃗ ∈ R³, such that δk = (Ωk, γk, λk)⊤, represent filtered data points in the Ωγλ-space. This study includes a brief data preprocessing stage as a method to improve multi-class separability and reduce data scattering in pattern extraction. The three distinctive patterns γ, λ, and Ω converge in Figure 5, which exclusively features patterns associated with sequences of muscular stimuli from both the right and left hands. For supplementary stimulus plots, refer to Appendix A at the end of this manuscript.

Figure 5. Components of hand pattern space: filters γ, λ, and Ω. (a) Filters applied to the left hand. (b) Filters applied to the right hand.

From numerous laboratory experiments, over 75% of the sampled raw data fall within the range of one standard deviation. Consider the vector σ⃗ ∈ R³ such that the standard deviation vector σ⃗ = (σΩ, σγ, σλ)⊤ encompasses the three spatial components by its norm:

σ = √( (1/N) ∑_{k=1}^{N} ∥(Ωk, γk, λk)⊤ − (µΩ, µγ, µλ)⊤∥² ).   (12)

Building upon the preceding statement, we can formulate an adaptive discrimination criterion, as elucidated in Definition 1.

Definition 1 (Discrimination condition). Consider the scalar value δj as preprocessed EMG data
located within a radius of magnitude κd times the standard deviation ∥σ ∥:
δj = { δk,  if κd∥σ∥ ≤ ∥δ⃗∥;  0,  if κd∥σ∥ > ∥δ⃗∥ }   (13)

where 0 = (0, 0, 0)⊤ represents discriminated data.

Hence, consider the recent Definition 1 in the current scenario with κd = 1.0, which
serves as a tuning discrimination factor. Therefore, the norm lh represents the distance
between the frame origin and any class in the Ωγλ-space. This distance is adaptively
calculated based on the statistics of each EMG class.
lh = √( (κΩ σΩ)² + (κγ σγ)² + (κλ σλ)² )   (14)

where the coefficients κΩ, κγ, and κλ are smooth adjustment parameters that set separability along the axes. Hence, relocating each class center to a new position is stated by Proposition 4.

Proposition 4 (Class separability factor). A new class position µ⁺Ω,γ,λ in the Ωγλ-space is established by the statistically adaptive linear relationship:

µ⁺Ω = µΩ + ζΩ lh,   (15a)
µ⁺γ = µγ + ζγ lh,   (15b)
µ⁺λ = µλ + ζλ lh,   (15c)

where ζΩγλ are coarse in-space separability factors. The mean values µΩγλ are the actual class positions obtained from the linear envelopes Ω(υk), γ(υk), and λ(υk).
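Equations (14) and (15a)–(15c) can be combined into a small helper. The default parameter values below are illustrative assumptions, not the paper's tuned coefficients.

```python
import math

def relocate_class_center(mu, sigma, kappa=(1.0, 1.0, 1.0), zeta=(1.0, 1.0, 1.0)):
    """Sketch of Eqs. (14)-(15c): shift a class center in Ωγλ-space.

    mu    : current class center (µΩ, µγ, µλ)
    sigma : per-axis standard deviations (σΩ, σγ, σλ)
    kappa : smooth per-axis adjustment parameters κΩ, κγ, κλ
    zeta  : coarse separability factors ζΩ, ζγ, ζλ
    """
    # Eq. (14): adaptive distance from the frame origin to the class
    lh = math.sqrt(sum((k * s) ** 2 for k, s in zip(kappa, sigma)))
    # Eqs. (15a)-(15c): relocated class center
    return tuple(m + z * lh for m, z in zip(mu, zeta))
```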

Thus, by following the step-by-step method outlined earlier, Figure 6 showcases the extracted features of the EMG data, representing diverse experimental muscular stimuli. These results hold notable significance in the research, as they achieve the desired class separability and reduced data scattering, serving as crucial inputs for the multilayer ANN.

Figure 6. EMG stimuli pattern space (γ, λ, Ω). (a) Left hand classes. (b) Right hand classes.

Henceforth, the focus lies on identifying and interpreting the EMG patterns projected
in the γλΩ-space, as illustrated in Figure 6. The subsequent part of this section delves into
the architecture and structure of the deep ANN employed as a classifier, providing a de-
tailed account of the training process. Additionally, this section highlights the performance
metrics and results achieved by the classifier, offering insights into its effectiveness. Despite
the challenges posed by nonlinearity, multidimensionality, and extensive datasets, various
neural network structures were configured and experimented with. These configurations
involved exploring different combinations of hidden layers, neurons, and the number of
outputs in the ANN.
A concise comparative analysis was performed among three distinct neural network
architectures: the feedforward multilayer network, the convolutional network, and the
competing self-organizing map. Despite notable configuration differences, efforts were
made to maintain similar features, such as 100 training epochs, five hidden layers (except
for the competing structure), and an equal number of input and output neurons (three
input and four output neurons). The configuration parameters and results are presented in
Table 2. This study delves deeper into the feedforward multilayer ANN due to its superior
classification rate. Further implementations with enhanced features are planned in the
C/C++ language as a compiled program.

Table 2. Comparative results of neural network architectures for EMG classification.

Measures                   Feedforward Multilayer   Convolutional   Competing Self-Organizing Map
Training epochs            100                      100             100
Classification rate        69.85%                   24.17%          23.88%
Training time rate 1       1                        0.9046          0.9861
Number of hidden layers    5                        5               1
Neurons per hidden layer   20                       20              32
Neurons per output layer   4                        4               4
Total neurons              104                      104             32

1 The training dataset comprised 103,203 samples. The training time rate was set to 1 as the comparative reference.

To achieve the highest success rate in accurate data classification through experimentation, the final ANN was designed with perceptron units, as depicted in Figure 7. It featured three inputs, corresponding to the three EMG patterns γ, λ, Ω, and included 12 hidden layers, each with 20 neurons. The supervised training process, conducted on a standard-capability computer, took approximately 20–30 min, resulting in nearly 1% error in pattern classification.

Figure 7. Multi-layered ANN for EMG pattern recognition.

However, in the initial stages of the classification computations, with some mistuned adaptive parameters, the classification error was notably higher, even with much deeper ANN structures, such as 100 hidden layers with 99 neurons per layer. To facilitate the implementation, this work used the multilayer feedforward architecture with training increased to 300 epochs, implemented in C/C++ with the Fast Artificial Neural Network (FANN) library, which generates extremely fast binary code once the ANN is trained.
In the training process of this research, about 50 datasets from separate experiments for
each type of muscular stimulus were collectively stored, each comprising several thousand
muscular repetitions. A distinct classification label was assigned a priori for each class
type within the pattern space. To demonstrate the reliability of the approach, 16 different
stimuli per ANN were established for classification and recognition, resulting in the ANN
having four combinatory outputs, each with two possible states. Figure 8 depicts mixed
sequences encompassing all types of EMG stimuli, with the ANN achieving a 100% correct
classification rate.

Figure 8. Sequence of mixed EMG stimuli over time and ANN's decimal output with 100% classification success. (a) Right limb. (b) Left limb.

Moreover, Table 3 delineates the mapping relationship between the ANN’s input,
represented by the EMG stimuli, and the ANN’s output linked to a swimming behavior for
controlling the robotic avatar.

Table 3. ANN results of mapping EMG to robotic avatar swimming behaviors.

ANN's EMG Inputs       y3 y2 y1 y0   Swimming Style 1
quiet                  0  0  0  0    Sink
right hand             0  0  0  1    Buoyant
right thumb            0  0  1  0    Gliding
right index            0  0  1  1    Slow thrusting
right middle           0  1  0  0    Medium thrusting
right ring             0  1  0  1    Fast thrusting
right little           0  1  1  0    Slow right maneuvering
left hand              0  1  1  1    Medium right maneuvering
left thumb             1  0  0  0    Fast right maneuvering
left index             1  0  0  1    Slow left maneuvering
left middle            1  0  1  0    Medium left maneuvering
left ring              1  0  1  1    Fast left maneuvering
left little            1  1  0  0    Speed up right turn
both index             1  1  0  1    Speed up left turn
right thumb–little     1  1  1  0    Slow down right turn
left thumb–little      1  1  1  1    Slow down left turn

1 The variables y0,1,2,3 represent combinatory outputs; yC denotes the corresponding decimal value used subsequently.
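The mapping in Table 3 amounts to interpreting the four binary outputs as a decimal index yC. A minimal lookup sketch (function name is an assumption):

```python
# Swimming styles indexed by the decimal value yC of the ANN outputs
# (y3 y2 y1 y0), following Table 3.
SWIM_STYLES = [
    "Sink", "Buoyant", "Gliding", "Slow thrusting",
    "Medium thrusting", "Fast thrusting", "Slow right maneuvering",
    "Medium right maneuvering", "Fast right maneuvering",
    "Slow left maneuvering", "Medium left maneuvering",
    "Fast left maneuvering", "Speed up right turn", "Speed up left turn",
    "Slow down right turn", "Slow down left turn",
]

def decode_ann_output(y3, y2, y1, y0):
    """Convert the four combinatory ANN outputs into (yC, swimming style)."""
    y_c = 8 * y3 + 4 * y2 + 2 * y1 + y0
    return y_c, SWIM_STYLES[y_c]
```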

5. Fuzzy-Based Oscillation Pattern Generator


This section delineates the methodology utilized to produce electric oscillatory signals,
essential for stimulating the inputs of electromagnetic devices (solenoids) embedded within
the mechanized oscillator of the biorobotic fish. The outlined approach for generating
periodic electric signals encompasses three key components: (a) the implementation of
a fuzzy controller; (b) the incorporation of a set of periodic functions dictating angular
oscillations to achieve the desired behaviors in the caudal undulation of the fish; and (c) the
integration of a transformation model capable of adapting caudal oscillation patterns into
step signals, facilitating the operation of the dual-coil electromagnetic oscillator.
Another study [37] reported a different approach: a neuro-fuzzy-topological biodynamical controller for muscular-like joint actuators. In the present research, the strategy proposed for the fuzzy controller combines three distinct input fuzzy sets: the artificial neural network outputs transformed into crisp sets, and the linear and angular velocities of the robot derived from sensor measurements. Simultaneously, the fuzzy outputs correspond to magnitudes representing the time periods used to regulate the frequency and periodicity of the caudal oscillations. This comprehensive integration enables the fuzzy controller to process both neural network-derived information and real-time sensor data, dynamically adjusting the temporal parameters that govern the fish's undulatory motions.
The depiction of the outputs from the EMG pattern recognition neural network is
outlined in Table 3. Each binary output in the table is linked to its respective crisp-type
input fuzzy sets when represented in the decimal numerical base yC , as illustrated in
Figure 9a. Moreover, Definition 2 details the parametric nature of the input sets.

Figure 9. Swimming behavior fuzzy sets. (a) Crisp input (ANN's output). (b) Input of robot's thrust velocity observation. (c) Input of robot's angular speed observation.

Definition 2 (Input fuzzy sets). The output of the artificial neural network (ANN) corresponds to the fuzzy input, denoted yC; only when it falls within the crisp set C = [0, 1, . . . , 15] is the membership in the crisp set, referred to as µy(yC), equal to one:

µy(yC) = { 0, yC ∉ C;  1, yC ∈ C.   (16a)

In relation to the sensor observations of the biorobot, the thrusting velocity v [cm/s] and angular velocity ω [rad/s] exhibit S-shaped sets modeled by sigmoid membership functions. This modeling approach is applied consistently to both types of input variables, capturing their extreme-sided characteristics. Let µs,f(v) define the sets labeled "stop" and "fast" in relation to the thrusting velocity v, elucidated by

µs,f(v) = 1 / (1 + e^(±v∓a)).   (16b)

Likewise, for the sets designated "left–fast" (lf) and "right–fast" (rf) concerning the angular velocity ω, the S-shaped sets are modeled as

µlf,rf(ω) = 1 / (1 + e^(±ω∓b)).   (16c)

In addition, the rest of the sets in between are the kth Gauss membership functions ("slow", "normal", and "agile") for the robot's thrusting velocity, with parametric mean v̄ and standard deviation σvk,

µk(v) = e^(−(v̄−v)²/(2σvk²)),   (16d)

and for its angular velocity ("left–slow", "no turn", and "right–slow"), with parametric mean ω̄ and standard deviation σωk,

µk(ω) = e^(−(ω̄−ω)²/(2σωk²)).   (16e)
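The input membership functions of Definition 2 can be sketched as follows. The offsets a, b, the per-set means and deviations, and the function names are unspecified in the excerpt, so the defaults here are assumptions.

```python
import math

def mu_crisp(y_c):
    """Eq. (16a): crisp membership of the ANN decimal output yC."""
    return 1.0 if y_c in range(16) else 0.0

def mu_sigmoid(x, offset, rising=True):
    """Eqs. (16b)-(16c): extreme-sided S-shaped set.

    rising=True gives the right-sided ("fast") slope; rising=False
    mirrors it for the left-sided ("stop") set.
    """
    s = 1.0 if rising else -1.0
    return 1.0 / (1.0 + math.exp(-s * (x - offset)))

def mu_gauss(x, mean, sigma):
    """Eqs. (16d)-(16e): intermediate Gaussian set."""
    return math.exp(-((mean - x) ** 2) / (2.0 * sigma ** 2))
```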

Therefore, the reasoning rules articulated in the inference engine have been devised by
applying inputs derived from Table 3, specifically tailored to generate desired outputs that
align with the oscillation periods T [s] of the fish’s undulation frequency (see Figure 10).

Definition 3 (v, ω = any). For any linguistic value v representing sensor observations of the fish's thrusting velocity,

v = any ≐ stop or slow or normal or agile or fast.

Likewise, for any linguistic value ω representing sensor observations of the fish's angular velocity,

ω = any ≐ left-fast or left-slow or no turn or right-slow or right-fast.

Therefore, the following inference rules describe the essential robot fish swimming behavior.

1. if yC = sink and (v = any or ω = any) then Tf,r,l = too_slow
2. if yC = buoyant and (v = any or ω = any) then Tf,r,l = too_slow
3. if yC = gliding and v = any and ω = any then Tf = slow, Tr,l = too_slow
4. if yC = slow_thrust and v = any and ω = any then Tf = slow, Tr,l = too_slow
5. if yC = medium_thrust and v = any and ω = any then Tf = normal, Tr,l = too_slow
6. if yC = fast_thrust and v = any and ω = any then Tf = agile, Tr,l = too_slow
7. if yC = slow-right_maneuvering and v = any and ω = any then Tf,l = too_slow, Tr = normal
8. if yC = medium-right_maneuvering and v = any and ω = any then Tf = slow, Tr = agile, Tl = too_slow
9. if yC = fast-right_maneuvering and v = any and ω = any then Tf = normal, Tr = fast, Tl = too_slow
10. if yC = slow-left_maneuvering and v = any and ω = any then Tf,r = too_slow, Tl = normal
11. if yC = medium-left_maneuvering and v = any and ω = any then Tf = slow, Tr = too_slow, Tl = agile
12. if yC = fast-left_maneuvering and v = any and ω = any then Tf = normal, Tr = too_slow, Tl = fast
13. if yC = speed-up_right-turn and v = any and ω = any then Tf,l = too_slow, Tr = fast
14. if yC = speed-up_left-turn and v = any and ω = any then Tf,r = too_slow, Tl = fast
15. if yC = slow-down_right-turn and v = any and ω = any then Tf,l = too_slow, Tr = slow
16. if yC = slow-down_left-turn and v = any and ω = any then Tf,r = too_slow, Tl = slow
Hence, the output fuzzy sets delineate the tail undulation speeds of the robot fish across the caudal oscillation period T [s], as illustrated in Figure 10. Notably, three identical output sets with distinct concurrent values correspond to the periods of three distinct periodic functions: forward (f), right (r), and left (l), as subsequently defined by Equations (24a), (24b), and (24c).

Figure 10. Three identical output fuzzy sets depicting the oscillation period of the caudal tail's undulation.

The inference engine's rules dictate the application of fuzzy operators to assess the terms involved in fuzzy decision-making. As for the thrusting velocity fuzzy sets, where v = any was previously established, applying Definition 3 gives the fuzzy operator

µv^max = max_{µk ∈ v} ( µstop, µslow, µnormal, µagile, µfast ).

Likewise, for the angular velocity fuzzy sets, where ω = any was previously stated, applying the second part of Definition 3 yields the fuzzy operator

µω^max = max_{µk ∈ ω} ( µleft−fast, µleft−slow, µnoturn, µright−slow, µright−fast ).

Therefore, according to the premise (v = any or ω = any), the following fuzzy operator applies:

µv,ω^max = max_{µk ∈ v∪ω} ( µv^max, µω^max ).

Essentially, the fuzzification process applies strictly similarly for the rest of the inference rules, according to the following proposition.

Proposition 5 (Combined rules µi*). The general fuzzy membership expression for the ith inference rule that combines multiple inference propositions is

µi* = min_{µ ∈ yC ∩ v∪ω} ( µyi(yCi) = 1, µk^max(v, ω) ).   (17)

In any crisp set, each µi attains a distinct value of 1, irrespective of the corresponding inference rule indexed by i. This value aligns with the ith entry in the neural network outputs outlined in Table 3. Additionally, k represents a specific fuzzy set associated with the same input.

Executing the previously mentioned proposition, Remark 1 provides a demonstration of its application.

Remark 1 (Proposition 5 example). Let us consider rule i = 1, where yC1 = "sink" and either (v = 10.0 cm/s or ω = 0.0 rad/s). Thus, articulated in the context of the resulting fuzzy operator,

µ1* = min_{µk ∈ yC ∩ v∪ω} ( µsink(yC), max_{v∪ω} µv,ω(1, 1) ) = min(1, 1) = 1.

Here, µv(10.0) = 1 and µω(0.0) = 1. Based on the earlier proposition, the resulting µ1* = 1, and given that rule 1 indicates an output period T = "too-slow", its inverse outcome is T(µ1*) = 6.0 s. This outcome is entirely accurate, because the fish's swim undulation slows down to a 6 s period, the slowest oscillation.

Moving forward, during the defuzzification process, the primary objective is to attain an inverse solution. The three output categories for periods T include "forward", "right", and "left", all sharing identical output fuzzy sets of T (Figure 10). Nevertheless, the output fuzzy sets consist of two categories of distributions, Gauss and sigmoid sets, as outlined in Definition 4.

Definition 4 (Output fuzzy sets). The membership functions for both extreme-sided output sets are defined as "fast" with µf and "too slow" with µts, such that

µf,ts(T) = 1 / (1 + e^(±Tk∓ck)).   (18)

Here, T [s] denotes the period of time for oscillatory functions, with the slope direction determined by its sign. The parameter c represents an offset, and k is the numerical index of a specific set.

Furthermore, the membership functions for the three intermediate output sets are defined as "agile" with µa, "normal" with µn, and "slow" with µs, such that

µa,n,s(T) = e^(−(T̄k−T)²/(2σTk²)).   (19)

Here, T̄k represents the mean value, and σTk denotes the standard deviation of set k, with k serving as the numeric index of a specific set.

In accordance with Definition 4, any µk possesses a normalized membership outcome within the interval µk ∈ [0, 1]. The inverse sigmoid membership function, denoted Tk ∈ R ∀ µk, is determined by the general inverse expression

T(µk) = ∓ ln(1/µk) ± ck.   (20)

Similarly, the inverse Gaussian membership function, where Tk ∈ R ∀ µk, is defined by the inverse function

T(µk) = T̄k ∓ √(−2σk² ln(µk)).   (21)

Hence, exclusively for the jth category among the output fuzzy sets affected by the fuzzy inference rule essential for estimating the value of T, the centroid method is employed for defuzzification through the following expression:

Tf,r,l = ∑j µj* Tj(µj*) / ∑j µj*,   (22)

or more specifically,

Tf,r,l = ( µf* T(µf*) + µa* T(µa*) + µn* T(µn*) + µs* T(µs*) + µts* T(µts*) ) / ( µf* + µa* + µn* + µs* + µts* ).   (23)

For terms T (µ f ) and T (µts ), the inverse membership function (20) is applicable,
whereas for the remaining sets in the jth category, the inverse membership (21) is applied.
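The centroid defuzzification of Eqs. (22) and (23) reduces to a weighted average over the activated output sets. The sketch below assumes the activations µj* and inverse values T(µj*) have already been computed via Eqs. (17), (20), and (21); the data-structure choice is an assumption.

```python
def defuzzify_period(memberships, inverse_T):
    """Centroid defuzzification of Eqs. (22)-(23) (sketch).

    memberships : dict mapping output-set name -> activation µj* in [0, 1]
    inverse_T   : dict mapping output-set name -> inverse value T(µj*) [s]
    Returns the crisp oscillation period T for one category (f, r, or l).
    """
    num = sum(mu * inverse_T[name] for name, mu in memberships.items() if mu > 0.0)
    den = sum(mu for mu in memberships.values() if mu > 0.0)
    return num / den if den > 0.0 else 0.0
```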
Reference [38] reported a CPG model to control a robot fish’s motion in swimming
and crawling and let it perform different motions influenced by sensory input from light,
water, and touch sensors. The study of [39] reported a CPG controller with proprioceptive
sensory feedback for an underactuated robot fish. Oscillators and central pattern generators
(CPGs) are closely related concepts. Oscillators are mathematical or physical systems
exhibiting periodic behavior and are characterized by the oscillation around a stable
equilibrium point (limit cycle). In the context of CPGs, these are neural networks that utilize
oscillators that create positive and negative feedback loops, allowing for self-sustaining
oscillations and the generation of rhythmic patterns, particularly implemented in numerous
robotic systems [40]. CPGs are neural networks found in the central nervous system of
animals (e.g., fish swimming [41]) that generate rhythmic patterns of motor activity and
are responsible for generating and coordinating optimized [42] repetitive movements.
The present research proposes a different approach from the basic CPG model and, unlike other wire-driven robot fish motion approaches [43], introduces three fundamental undulation functions: forward, right-turning, and left-turning. These functions are derived from empirical measurements of the robot's caudal fin oscillation angles. A distinctive behavioral undulation swim is achieved by blending these three oscillation functions, each incorporating corresponding estimation magnitudes derived from the fuzzy controller outputs. The formulation of each function involves fitting empirical data through Fourier series. Unlike other approaches to CPG parameter adjustment [44], the fuzzy outputs obtained from (23) to estimate the time periods Tf, Tr, Tl play a pivotal role in parameterizing the periodic oscillation functions, as outlined in Proposition 6.

Proposition 6 (Oscillation pattern function). Three fundamental caudal oscillation patterns, designed to generate swimming undulations, are introduced, each characterized by 11 pre-defined numerical coefficients. These patterns are described by amplitude functions, denoted ψ(ϕ, T), where ϕ represents oscillation angles, and the time period T is an adjustable parameter.

The undulation pattern for forward motion is provided by the following function:

ψf(ϕ, Tf) = 0.0997 + 0.3327 cos((2π/Tf)ϕ) − 0.1297 sin((2π/Tf)ϕ) − 0.5760 cos((2π/Tf)2ϕ) + 0.3701 sin((2π/Tf)2ϕ) − 0.1431 cos((2π/Tf)3ϕ) + 0.1055 sin((2π/Tf)3ϕ) − 0.0870 cos((2π/Tf)4ϕ) + 0.06323 sin((2π/Tf)4ϕ) − 0.0664 cos((2π/Tf)5ϕ) − 0.0664 sin((2π/Tf)5ϕ).   (24a)

Likewise, the undulation pattern for right-turn motion is given by the function

ψr(ϕ, Tr) = 0.3324 + 0.1915 cos((2π/Tr)ϕ) + 0.0622 sin((2π/Tr)ϕ) + 0.4019 cos((2π/Tr)2ϕ) + 0.2920 sin((2π/Tr)2ϕ) − 0.264 cos((2π/Tr)3ϕ) − 0.3634 sin((2π/Tr)3ϕ) − 0.0459 cos((2π/Tr)4ϕ) − 0.1413 sin((2π/Tr)4ϕ) + 0.0665 sin((2π/Tr)5ϕ).   (24b)

Finally, the undulation pattern for left-sided turning motion is established by the expression

ψl(ϕ, Tl) = −0.1994 + 0.125 cos((2π/Tl)ϕ) − 0.0622 sin((2π/Tl)ϕ) + 0.3354 cos((2π/Tl)2ϕ) − 0.292 sin((2π/Tl)2ϕ) − 0.3305 cos((2π/Tl)3ϕ) + 0.3634 sin((2π/Tl)3ϕ) − 0.1124 cos((2π/Tl)4ϕ) + 0.1413 sin((2π/Tl)4ϕ) − 0.0664 cos((2π/Tl)5ϕ) + 0.2659 sin((2π/Tl)5ϕ).   (24c)
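Eq. (24a) is a five-harmonic Fourier series and can be evaluated directly. The sketch below encodes only the forward pattern, with coefficients taken verbatim from (24a); the right- and left-turn patterns (24b), (24c) follow the same structure with their own coefficients.

```python
import math

FWD_A0 = 0.0997
FWD_COS = [0.3327, -0.5760, -0.1431, -0.0870, -0.0664]   # harmonics 1..5
FWD_SIN = [-0.1297, 0.3701, 0.1055, 0.06323, -0.0664]

def psi_forward(phi, T_f):
    """Caudal oscillation amplitude for forward swimming, Eq. (24a).

    phi : oscillation angle; T_f : fuzzy-estimated time period [s]
    """
    w = 2.0 * math.pi / T_f
    s = FWD_A0
    for k in range(1, 6):
        s += FWD_COS[k - 1] * math.cos(w * k * phi)
        s += FWD_SIN[k - 1] * math.sin(w * k * phi)
    return s
```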
Tl Tl Tl

The approaches to forward, right-turn, and left-turn based on the findings of Propo-
sition 6 are illustrated in Figure 11. Additionally, a novel combined oscillation pattern
emerges by blending these three patterns (25), each assigned distinct numerical weights
through the neuro-fuzzy controller.

ψ(ϕ, T f , Tr , Tl ) = ψ f (ϕ, T f ) + ψr (ϕ, Tr ) + ψl (ϕ, Tl ). (25)

The proposed robotic mechanism features a multilink-based propulsive spine driven by an electromagnetic oscillator composed of a pair of antagonistic solenoids that necessitate a synchronized sequence of electric pulses (see Figure 12b). The amplitudes generated by
ψ(ϕ, T f , Tr , Tl ) in Equation (25) essentially represent the desired undulation pattern for the
robotic fish’s caudal fin. However, these oscillations are not directly suitable for the inputs
of the coils. To address this, our work introduces a decomposition of ψ into two step signals
centered around a stable equilibrium point (limit cycle), one for the right coil (positive with
respect to the limit cycle) and another for the left coil (negative with respect to the limit
cycle). The coil’s step function, for either the right-sided or left-sided coil, is given by sr,l ,
taking the equilibrium point as their limit value ξ:

sr,l = { 0, ψ ≤ ξ;  1, ψ > ξ.   (26)
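Eq. (26) thresholds the blended amplitude ψ into coil step trains. The sketch below assumes a symmetric-side interpretation for the two coils (right coil above +ξ, left coil below −ξ), which is not stated explicitly in the excerpt.

```python
def coil_steps(psi_values, xi=0.0):
    """Decompose the undulation amplitude ψ into two Boolean step trains
    (sketch of Eq. (26)).

    psi_values : sequence of ψ(ϕ, Tf, Tr, Tl) amplitudes, Eq. (25)
    xi         : equilibrium (limit cycle) threshold
    Returns (right, left) step trains for the two solenoids.
    """
    right = [1 if p > xi else 0 for p in psi_values]    # positive side
    left = [1 if p < -xi else 0 for p in psi_values]    # mirrored threshold
    return right, left
```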

The Boolean [0, 1] values derived from (26) are illustrated in Figure 11a–c as sequences
of step signals for both the right (R) and left (L) solenoids of the robot. Each plot maintains
a uniform time scale, ensuring vertical alignment for clarity. In contrast to the work
presented in [45] focusing on the swimming modes and gait transition of a robotic fish,
the current study, as depicted in Figure 11, introduces a distinctive context. The three
oscillatory functions, ψ f ,r,l , are displayed both overlapped and separated, highlighting
their unique decoupled step signals. Assuming ξ = 0 for all ϕ in each case, in Figure 11a,
ψr = ψl ≈ 0, with Tr,l ≥ 6; in Figure 11b, ψ f = ψl ≈ 0, with T f ,l ≥ 6; and in Figure 11c,
ψ f = ψr ≈ 0, with T f ,r ≥ 6. For a more comprehensive understanding, Figure 12b presents
the electromechanical components of the caudal motion oscillator.

Figure 11. Oscillation functions for the caudal tail (measured in radians) synchronized with dual Boolean coil step patterns (dimensionless), presented on a consistent time scale: (a) Forward undulations. (b) Right-turn undulations. (c) Left-turn undulations.

Figure 12. Model of the robot fish mechanism, illustrating: (a) Top view of the musculoskeletal system. (b) Top view of the robot's head with antagonistic muscle-based electromagnetic oscillator. (c) Side view of the ballast system device positioned beneath the robot's head.

6. Robot Fish Biomechanical Model


This section introduces the design of the robotic fish mechanism and explores the model of the underactuated physical system to illustrate the fish undulation motions. The conceptualization of the proposed system is inspired by an underactuated structure featuring a links-based caudal spine with passive joints, utilizing helical springs to facilitate undulatory locomotion (see Figure 12a). The robotic fish structure introduces a mechanical oscillator comprising a pair of solenoids activated through coordinated sequences of step signals, as described by (26). Essentially, the electromagnetic coils generate antagonistic attraction/repulsion linear motions, translating into rhythmic oscillations within a mechanized four-bar linkage (depicted in Figure 12b). This linkage takes the form of a trapezoid composed of two parallel rigid links and two lateral linear springs functioning as antagonistic artificial muscles. Moreover, beneath the electromagnetic oscillator, there is a ballasting device for either submersion or buoyancy (Figure 12c). The robot's fixed reference system consists of the X axis, which intersects the lateral sides, and the Y axis aligned with the robot's longitudinal axis.
In Figure 12a, the electromagnetic oscillator of the robotic avatar responds to opposing
coordinated sequences of step signals. The right-sided (R) and left-sided (L) solenoids
counteract each other’s oscillations, generating angular moments in the trapezoid linkage
(first vertebra). Both solenoids are identical, each comprising a coil and a cylindrical
neodymium magnet nucleus. The trapezoid linkage, depicted in Figure 12b, experiences
magnetic forces ± f osRL at the two neodymium magnet attachments situated at a radius of
ros , resulting in two torques, τos and τs , with respect to their respective rotation centers. As
input forces ± f osRL come into play, the linear muscle in its elongated state stores energy.

Upon restitution contraction, this stored energy propels the rotation of the link rs , which
constitutes the first vertebra of the fish.
Furthermore, the caudal musculoskeletal structure, comprising four links (ℓ1 , ℓ2 , ℓ3 , ℓ4 )
and three passive joints (θ1 , θ2 , θ3 ), facilitates a sequential rotary motion transmitted from
link 1 to link 4. This transmission is accompanied by an incremental storage of energy in
each helical spring that is serially connected. Consequently, the last link (link 4) undulates
with significantly greater mechanical advantage. In summary, a single electrical pulse in
any coil is sufficient to induce a pronounced undulation in the swimming motion of the
robot’s skeleton.
As for the ballasting control device situated beneath the floor of the electromechanical
oscillator, activation occurs only when either of two possible outputs from the artificial
neural network (ANN) is detected: when yC equals “sink” or “buoyancy”. However, the
fuzzy nature of these inputs results in a gradual slowing down of the fish’s undulation to
its minimum speed. Additionally, both actions are independently regulated by a dedicated
feedback controller overseeing the ballasting device.
Now, assuming knowledge of the input train of electrical step signals sr,l applied to the coils, let us derive the dynamic model of the biorobot, starting from the electromagnetic oscillator and extending to the motion transmitted to the last caudal link. Thus, as illustrated in Figure 12a,b, the force f [N] of the solenoid's magnetic field oscillator is established on either side (right, denoted R, or left, denoted L):

f = B²A / (2µo).   (27)

In this context, A [m²] represents the area of the solenoid's pole. The symbol µo denotes the magnetic permeability of air, expressed as µo = 4π × 10⁻⁷ H/m (henries per meter). Hence, the magnetic field B (measured in teslas) at one extreme of a solenoid is approximated by

B = µo i N / l,   (28)
where i represents the coil current [A], N is the number of wire turns in a coil, and l denotes the coil length [m]. Furthermore, a coil's current is described as a function of time t (in seconds), taking into account a potential difference v (in volts):

i = (1/L) ∫₀ᵀ v dt + i0.   (29)
Here, L represents the coil's inductance (measured in henries, H), with an initial current condition denoted i0. Additionally, the coil's induction model is formulated by

L = µo N² A / l.   (30)
In essence, this study states that both lateral solenoids exhibit linear motion characterized by an oscillator force fos. This force is expressed as

fos = i²Al / (2µo N²A),   (31)

and due to the linear impacts of the solenoids on both sides R and L (refer to Figure 12a), the first bar of the oscillator mechanism generates a torque, expressed as

τos = (fos)(ros).   (32)
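Equations (31) and (32) chain directly from the coil parameters to the torque on the first vertebra. The numeric values in the test are illustrative assumptions, not measured robot parameters.

```python
import math

MU0 = 4.0 * math.pi * 1e-7  # magnetic permeability of air [H/m]

def solenoid_force(i, N, A, l):
    """Oscillator force of one solenoid, Eq. (31) as printed.

    i : coil current [A]; N : wire turns; A : pole area [m^2]; l : length [m]
    """
    return (i ** 2 * A * l) / (2.0 * MU0 * N ** 2 * A)

def oscillator_torque(f_os, r_os):
    """Torque on the first vertebra linkage, Eq. (32), at radius r_os [m]."""
    return f_os * r_os
```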

It is theorized that the restitution/elongation force along the muscle is denoted fm (R or L), with this force being transmitted from the electromechanical oscillator to the antagonistic muscle in the opposite direction (refer to Figure 12b). This implies that the force generated by the linear-motion solenoid in the oscillator's right-sided coil, denoted fosR, is applied at point R and subsequently reflected towards point L in the opposite direction, represented as −fosR. Conversely, the force fosL from the oscillator's left-sided solenoid is applied at point L and transmitted to point R as −fosL. For fosL applied at L,

fmR = −fosL / cos(αR);   fmL = −fosR / sin(αL).   (33)

Hence, the angles α R,L assume significance as the forces acting along the muscles f m
differ, resulting in distinct instant elongations xm (t). Consequently, the four-bar trapezoid-
shaped oscillator mechanism manifests diverse inner angles, namely, θ1,2 , β, and γ1,2 , as
illustrated in Figure 12b.
Thus, prior to deriving an analytical solution for α, it is imperative to formulate a
theoretical model for the muscle. In this study, a Hill’s model is adopted, as depicted in
Figure 12a (on the right side). The model incorporates a serial element SE (overdamped), a
contractile element CE (critically damped), and a parallel element PE (critically damped),
each representing distinct spring-mass-damper systems.
The generalized model for the antagonistic muscle is conceptualized in terms of the restitution force, and it is expressed as:

f_m = f_{SE} - (f_{CE} + f_{PE}). (34)

Therefore, by postulating an equivalent restitution/elongation mass m_w associated with instantaneous weight-force loads w (such as those due to hydrodynamic flows), the preceding model is restated through Newton’s second law of motion,

f_m = m_w \ddot{x}_{SE} - m_w \ddot{x}_{CE} - m_w \ddot{x}_{PE}. (35)

Furthermore, through the independent solution of each element within the system in terms of elongations, the SE model can be expressed as:

x_{SE}(t) = s_1 e^{\lambda_1 t} + s_2 e^{\lambda_2 t}. (36)

Here, s_{1,2} represent arbitrary constants describing the damping amplitude. The terms \lambda_{1,2} denote the root factors,

\lambda_{1,2} = -\frac{c_{SE}}{2 m_w} \pm \frac{\sqrt{(c_{SE}/m_w)^2 - 4 k_{SE}/m_w}}{2}. (37)

Here, the factors \lambda_{1,2} are expressed in relation to the damping coefficient c_{SE} (in kg/s) and the elasticity coefficient k_{SE} (in kg/s²).
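Since (36) is the general solution of the overdamped second-order system m_w \ddot{x} + c_{SE} \dot{x} + k_{SE} x = 0, the roots of (37) can be verified against the characteristic polynomial. The sketch below uses illustrative overdamped constants (c_{SE}^2 > 4 m_w k_{SE}), not identified muscle parameters:

```python
import math

def se_roots(m_w: float, c_se: float, k_se: float) -> tuple:
    """Characteristic roots of m_w*x'' + c_se*x' + k_se*x = 0, Eq. (37)."""
    disc = math.sqrt((c_se / m_w) ** 2 - 4.0 * k_se / m_w)  # real if overdamped
    lam1 = -c_se / (2.0 * m_w) + disc / 2.0
    lam2 = -c_se / (2.0 * m_w) - disc / 2.0
    return lam1, lam2

# Illustrative overdamped parameters (c^2 > 4*m*k).
m_w, c_se, k_se = 0.2, 3.0, 1.5
lam1, lam2 = se_roots(m_w, c_se, k_se)
# Each root must make the characteristic polynomial vanish.
res1 = m_w * lam1**2 + c_se * lam1 + k_se
res2 = m_w * lam2**2 + c_se * lam2 + k_se
```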
Similarly, for the contractile element CE, its elongation is determined by:

x_{CE}(t) = (c_1 + c_2 t) e^{-\frac{c_{CE}}{m_w} t}. (38)

With amplitude factors c_{1,2} and damping coefficient c_{CE}, a similar expression is obtained for the parallel element PE:

x_{PE}(t) = (p_1 + p_2 t) e^{-\frac{c_{PE}}{m_w} t}. (39)

With amplitude factors p_{1,2} and damping coefficient c_{PE}, the next step involves substituting these functional forms into the general muscle model,

f_m(t) = m_w \left( \frac{d^2}{dt^2} x_{SE}(t) + \frac{d^2}{dt^2} x_{CE}(t) + \frac{d^2}{dt^2} x_{PE}(t) \right) (40)

such that the complete muscle’s force model f_m is formulated by

f_m(t) = m_w s_1 \lambda_1^2 e^{\lambda_1 t} + m_w s_2 \lambda_2^2 e^{\lambda_2 t} + c_1 \frac{c_{CE}^2}{m_w} e^{-\frac{c_{CE}}{m_w} t} - 2 c_2 c_{CE} e^{-\frac{c_{CE}}{m_w} t} + c_2 \frac{c_{CE}^2}{m_w} t e^{-\frac{c_{CE}}{m_w} t} + p_1 \frac{c_{PE}^2}{m_w} e^{-\frac{c_{PE}}{m_w} t} - 2 p_2 c_{PE} e^{-\frac{c_{PE}}{m_w} t} + p_2 \frac{c_{PE}^2}{m_w} t e^{-\frac{c_{PE}}{m_w} t}. (41)

Subsequently, simplifying the preceding expression leads to the formulation presented in Proposition 7.

Proposition 7 (Muscle force model). The solution to the muscle force model, based on a Hill-type approach, is derived as a time-dependent function f_m(t) encompassing its three constituent elements (serial, contractile, and parallel). This formulation is expressed as:

f_m(t) = m_w \left( s_1 \lambda_1^2 e^{\lambda_1 t} + s_2 \lambda_2^2 e^{\lambda_2 t} \right) + \left( \frac{c_1 + c_2 t}{m_w} c_{CE}^2 - 2 c_2 c_{CE} \right) e^{-\frac{c_{CE}}{m_w} t} + \left( \frac{p_1 + p_2 t}{m_w} c_{PE}^2 - 2 p_2 c_{PE} \right) e^{-\frac{c_{PE}}{m_w} t}. (42)
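As a numerical sanity sketch, the closed-form force — taken as f_m = m_w(\ddot{x}_{SE} + \ddot{x}_{CE} + \ddot{x}_{PE}) per (40) — can be cross-checked against a finite-difference second derivative of the total elongation. All constants below are illustrative assumptions, not identified muscle parameters:

```python
import math

def x_se(t, s1, s2, lam1, lam2):
    return s1 * math.exp(lam1 * t) + s2 * math.exp(lam2 * t)

def x_ce(t, c1, c2, c_ce, m_w):
    return (c1 + c2 * t) * math.exp(-c_ce * t / m_w)

def x_pe(t, p1, p2, c_pe, m_w):
    return (p1 + p2 * t) * math.exp(-c_pe * t / m_w)

def f_m(t, m_w, s1, s2, lam1, lam2, c1, c2, c_ce, p1, p2, c_pe):
    """Closed-form muscle force of Proposition 7 (Eq. 42)."""
    se = m_w * (s1 * lam1**2 * math.exp(lam1 * t) + s2 * lam2**2 * math.exp(lam2 * t))
    ce = ((c1 + c2 * t) / m_w * c_ce**2 - 2 * c2 * c_ce) * math.exp(-c_ce * t / m_w)
    pe = ((p1 + p2 * t) / m_w * c_pe**2 - 2 * p2 * c_pe) * math.exp(-c_pe * t / m_w)
    return se + ce + pe

# Illustrative constants; cross-check against m_w * (numerical second derivative).
pars = dict(m_w=0.5, s1=0.01, s2=-0.004, lam1=-0.6, lam2=-2.5,
            c1=0.02, c2=0.01, c_ce=1.2, p1=0.015, p2=0.005, c_pe=0.8)
t, h = 0.3, 1e-4

def total(tt):
    return (x_se(tt, pars['s1'], pars['s2'], pars['lam1'], pars['lam2'])
            + x_ce(tt, pars['c1'], pars['c2'], pars['c_ce'], pars['m_w'])
            + x_pe(tt, pars['p1'], pars['p2'], pars['c_pe'], pars['m_w']))

num = pars['m_w'] * (total(t - h) - 2 * total(t) + total(t + h)) / h**2
```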

Thus, without loss of generality, considering a muscle model characterized by elongation x_m and a force-based model f_m, we proceed to derive the passive angles of the oscillator and the output forces f_x and f_y for ℓ_1.

Under initial conditions, the trapezoid oscillator bars are assumed to have θ_0 = 0°, aligning the four-bar mechanism with the X axis. As the bars rotate by an angle θ_1 due to solenoid impacts at points R or L, the input bar of the oscillator, with a radius r_{os}, undergoes an arc displacement s_1. Simultaneously, the output bar of shorter radius r_s experiences a displacement s_2, such that:

s_2 = \frac{r_s}{r_{os}} s_1. (43)

The arc displacement at point R or L is given by s_1 = r_{os} \theta_{os}. Consequently, the rotation angle of the input oscillator is expressed as:

\theta_{os} = \frac{s_1}{r_{os}}. (44)

Therefore, by formulating this relationship in the context of forces and subsequently substituting the newly introduced functions, the resulting expression is

\theta_{os} = \frac{1}{r_{os}} \iint_t \ddot{y} \, dt^2. (45)

Here, \ddot{y} denotes the linear acceleration of either point R or L along the robot’s Y axis. By replacing the solenoid’s mass–force formulation,

\theta_{os} = \frac{1}{m r_{os}} \iint_t f_{os} \, dt^2 = \frac{f_{os} t^2}{2 m r_{os}}. (46)

Hence, the functional expression for s_2 takes the form

s_2 = \frac{r_s f_{os} t^2}{2 m r_{os}^2}. (47)

Without loss of generality, the inner angle θ_1 of the oscillator mechanism (refer to Figure 12b) is derived as:

\theta_1 = \frac{\pi}{2} \pm \theta_{os}. (48)
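The input-bar kinematics of (46) and (48) can be sketched as two small functions. The numeric values (impact force, duration, mass, bar radius) are illustrative assumptions, and the ± operator of (48) is encoded as a sign argument for the R/L sides:

```python
from math import pi

def theta_os(f_os: float, t: float, m: float, r_os: float) -> float:
    """Input-bar rotation of Eq. (46): theta_os = f_os*t^2 / (2*m*r_os)."""
    return f_os * t**2 / (2.0 * m * r_os)

def theta1(f_os: float, t: float, m: float, r_os: float, sign: int = +1) -> float:
    """Inner angle of Eq. (48): theta1 = pi/2 +/- theta_os; the sign encodes
    the robot's side (R or L)."""
    return pi / 2.0 + sign * theta_os(f_os, t, m, r_os)

# Illustrative values: force [N], time [s], mass [kg], radius [m].
f, t, m, r_os = 0.8, 0.05, 0.12, 0.04
th_R = theta1(f, t, m, r_os, +1)
th_L = theta1(f, t, m, r_os, -1)
```

By construction, the two side angles are symmetric about π/2, so their sum is π.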
Initially, when the oscillator bars are aligned with respect to the X axis, an angular
displacement denoted by θ1 occurs as a result of the transfer of motion from the solenoid’s
tangential linear motion to the input bar. Similarly, in the output bar, the corresponding
angular displacement is represented by θ2 ,

θ2 = π ± ( θ + ∆ θ ). (49)

Here, ∆θ signifies a minute variation resulting from motion perturbation along the various
links of the caudal spine. The selection of the ± operator depends on the robot’s side,
whether it is denoted R or L. As part of the analysis strategy, the four-bar oscillator was
geometrically simplified to half a trapezoid for the purpose of streamlining deductions
(refer to Figure 12b). Within this reduced mechanism, two triangles emerge. One triangle is
defined by the parameters ros , ℓ, d, while the other is characterized by xm (t), ℓ, rs , where ℓ
serves as the hypotenuse and the sides d, ros , and rs remain constant. Consequently, the
instantaneous length of the hypotenuse is deduced as follows:

\ell^2 = r_{os}^2 + d^2 - 2 d r_{os} \cos(\theta_1). (50)

Upon determining the value of ℓ, the inner angle γ_1 can be derived as follows:

r_{os}^2 = d^2 + \ell^2 - 2 \ell d \cos(\gamma_1). (51)

Therefore, by isolating γ_1,

\gamma_1 = \arccos\left( \frac{r_{os}^2 - \ell^2 - d^2}{-2 d \ell} \right). (52)

Until this point, given the knowledge of γ1 and θ2 , it is feasible to determine the inner
complementary angle γ2 through the following process:

γ2 = θ2 − γ1 . (53)

Subsequently, the angle formed by the artificial muscle and the output bar can be
established according to the following principle:

\frac{\sin(\gamma_2)}{x_m} = \frac{\sin(\beta)}{\ell}. (54)
Thus, the inner angle β is

\beta = \arcsin\left( \frac{\ell \sin(\gamma_2)}{x_m} \right), (55)

or, alternatively, an approximation of the muscle length is

x_m = \ell \frac{\sin(\gamma_2)}{\sin(\beta)}. (56)
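The geometric chain (50)–(56) can be sketched directly. The dimensions below are illustrative, not the prototype’s; the test closes the loop by checking the law of cosines of (51) and the β/x_m round trip of (55)–(56):

```python
import math

def link_length(r_os: float, d: float, theta1: float) -> float:
    """Instantaneous hypotenuse of Eq. (50) via the law of cosines."""
    return math.sqrt(r_os**2 + d**2 - 2.0 * d * r_os * math.cos(theta1))

def gamma1(r_os: float, d: float, ell: float) -> float:
    """Inner angle of Eq. (52)."""
    return math.acos((r_os**2 - ell**2 - d**2) / (-2.0 * d * ell))

def beta_angle(gamma2: float, ell: float, x_m: float) -> float:
    """Inner angle of Eq. (55): sin(beta) = ell*sin(gamma2)/x_m."""
    return math.asin(ell * math.sin(gamma2) / x_m)

def muscle_length(gamma2: float, ell: float, beta: float) -> float:
    """Approximate muscle length of Eq. (56)."""
    return ell * math.sin(gamma2) / math.sin(beta)

# Illustrative geometry (not the prototype's dimensions).
r_os, d = 0.04, 0.10
th1 = math.pi / 2 + 0.12
ell = link_length(r_os, d, th1)
g1 = gamma1(r_os, d, ell)
g2 = 0.5                      # gamma2 = theta2 - gamma1, taken directly here
b = beta_angle(g2, ell, 0.15)
```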

This is the mechanism through which the input bar transmits a force f_{m1}, as defined in expression (33), from the tangent f_{os} to the output bar, achieving a mechanical advantage denoted f_{m2},

f_{m2} = \frac{r_{os}}{r_s} f_{m1}. (57)

Hence, in accordance with the earlier stipulation in expression (33), Definition 5 delineates the instantaneous angles α_{R,L}.

Definition 5 (Angles α_{R,L}). The instantaneous angle α, expressed as a function of the inner angles of the oscillator, is introduced by:

\alpha_{R,L} = \beta_{R,L} - \theta_{1R,L}. (58)

It is noteworthy that, owing to the inertial system of the robot, the longitudinal force output component f_y aligns with the input force f_{os} in direction. Consequently, for a right-sided force, we have \alpha_R \doteq \beta_R - \theta_{1R}, where:

f_{xR} = \frac{r_{os}}{r_s} \frac{\sin(\alpha_R)}{\cos(\alpha_R)} f_{osL} (59a)

and

f_{yR} = \frac{r_{os}}{r_s} f_{osR}. (59b)
.
Likewise, for the left-sided \alpha_L \doteq \beta_L - \theta_{1L},

f_{xL} = \frac{r_{os}}{r_s} \frac{\sin(\alpha_L)}{\cos(\alpha_L)} f_{osR} (60a)

as well as

f_{yL} = \frac{r_{os}}{r_s} f_{osL}. (60b)
In this scenario, an inverse solution is only applicable for f_{xR,L}, with no necessity for determining f_{yR,L}. Consequently, the mechanical advantage transferred between the input and output bars can be expressed by a simplified coefficient

\kappa \doteq \frac{r_{os}}{r_s}. (61)

Furthermore, the following trigonometric identity,

\frac{\sin(\beta - \theta_1)}{\cos(\beta - \theta_1)} \equiv \tan(\beta - \theta_1), (62)

can be used to substitute into and streamline the ensuing system of nonlinear equations by solving them simultaneously. Additionally, let θ_1 be defined as:

\theta_{1R,L} = \frac{\pi}{2} \pm \frac{r_s f_{osR,L} t^2}{2 m r_{os}^2}. (63)

Hence, the simultaneous nonlinear system is explicitly presented solely for the force components along the X axis:

f_{xR} = \kappa f_{osL} \tan(\beta_R - \theta_{1R}) (64a)

and

f_{xL} = \kappa f_{osR} \tan(\beta_L - \theta_{1L}). (64b)

Therefore, for the numerical solution of the system, a multidimensional Newton–Raphson approach is employed as outlined in the provided solution:

\beta_{R,t+1} = \beta_{R,t} - \frac{ f_{xR} \frac{\partial f_{xL}}{\partial \beta_L} - f_{xL} \frac{\partial f_{xR}}{\partial \beta_L} }{ \frac{\partial f_{xR}}{\partial \beta_R} \frac{\partial f_{xL}}{\partial \beta_L} - \frac{\partial f_{xR}}{\partial \beta_L} \frac{\partial f_{xL}}{\partial \beta_R} } (65a)

and

\beta_{L,t+1} = \beta_{L,t} - \frac{ f_{xL} \frac{\partial f_{xR}}{\partial \beta_R} - f_{xR} \frac{\partial f_{xL}}{\partial \beta_R} }{ \frac{\partial f_{xR}}{\partial \beta_R} \frac{\partial f_{xL}}{\partial \beta_L} - \frac{\partial f_{xR}}{\partial \beta_L} \frac{\partial f_{xL}}{\partial \beta_R} }. (65b)

Thus, by defining all derivative terms to finalize the system,

\frac{\partial f_{xR}}{\partial \beta_R} = \kappa f_{osR} \left( \frac{-\theta_{1R}}{\cos^2(\beta_R - \theta_{1R})} \right), (66a)

\frac{\partial f_{xR}}{\partial \beta_L} = 0, (66b)

\frac{\partial f_{xL}}{\partial \beta_R} = 0, (66c)

\frac{\partial f_{xL}}{\partial \beta_L} = \kappa f_{osL} \left( \frac{-\theta_{1L}}{\cos^2(\beta_L - \theta_{1L})} \right). (66d)
Therefore, by subsequently organizing and algebraically simplifying,

\beta_{R,t+1} = \beta_{R,t} - \frac{ f_{xR} \frac{\partial f_{xL}}{\partial \beta_L} }{ \frac{\partial f_{xR}}{\partial \beta_R} \frac{\partial f_{xL}}{\partial \beta_L} } = \beta_{R,t} + \frac{ \tan(\beta_R - \theta_{1R}) \cos^2(\beta_R - \theta_{1R}) }{ \theta_{1R} } (67a)

and

\beta_{L,t+1} = \beta_{L,t} - \frac{ f_{xL} \frac{\partial f_{xR}}{\partial \beta_R} }{ \frac{\partial f_{xR}}{\partial \beta_R} \frac{\partial f_{xL}}{\partial \beta_L} } = \beta_{L,t} + \frac{ \tan(\beta_L - \theta_{1L}) \cos^2(\beta_L - \theta_{1L}) }{ \theta_{1L} }. (67b)

The objective is to achieve numerical proximity, aiming for \beta_{t+1} \approx \beta_t. Consequently, through this inverse solution, the lateral force components of the first spinal link, denoted f_x, are estimated because they are perpendicular to the links and produce the angular moments at each passive joint.

\mathbf{f}_x = \begin{pmatrix} f_{xR} \\ f_{xL} \end{pmatrix} = \kappa \cdot \begin{pmatrix} f_{osL} \tan\left( \beta_{R,t+1} - \frac{\pi}{2} + \frac{r_s f_{osL} t^2}{2 m r_{os}^2} \right) \\ f_{osR} \tan\left( \beta_{L,t+1} - \frac{\pi}{2} - \frac{r_s f_{osR} t^2}{2 m r_{os}^2} \right) \end{pmatrix}. (68)
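Because (66b) and (66c) vanish, each side of the system decouples into a scalar root-finding problem. The sketch below is not the paper’s exact recursion: it applies a plain scalar Newton–Raphson to the right-sided equation of (64), using the textbook derivative d tan(x)/dx = 1/cos²(x), and solves for the β that produces an assumed target lateral force; all numeric values are illustrative:

```python
import math

def solve_beta(f_target: float, kappa: float, f_os: float,
               theta1: float, beta0: float, iters: int = 50) -> float:
    """Scalar Newton-Raphson for kappa*f_os*tan(beta - theta1) = f_target,
    a sketch of the inverse solution behind Eqs. (64)-(67)."""
    beta = beta0
    for _ in range(iters):
        g = kappa * f_os * math.tan(beta - theta1) - f_target
        dg = kappa * f_os / math.cos(beta - theta1) ** 2  # textbook derivative
        beta -= g / dg
    return beta

# Illustrative values; theta1 near pi/2 as in Eq. (63).
kappa, f_os, theta1 = 2.5, 0.8, math.pi / 2 + 0.05
f_target = 0.6
beta = solve_beta(f_target, kappa, f_os, theta1, beta0=theta1 + 0.1)
# Closed-form check: beta = theta1 + atan(f_target / (kappa * f_os)).
```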

Thus, given that the torque of the trapezoid’s second bar is \tau_s = f_s r_s (see Figure 12b), we establish a torque–angular moment equivalence, denoted \tau_s \equiv M_1. Leveraging this equivalence and the prior understanding of the torque \tau_s acting on the second bar of the trapezoid, mechanically connected to the first link ℓ_1, we affirm their shared angular moment. Consequently, the general expression for the tangential force f_k applied at the end of each link ℓ_k is:

f_k = \frac{M_k}{\ell_k}. (69)
Yet, considering the angular moment M_k for each helical-spring joint, supporting the mass of the successive links, let us introduce equivalent inertial moments, starting with I_{\varepsilon_1} = I_1 + I_2 + I_3 + I_4. Subsequently, we define I_{\varepsilon_2} = I_2 + I_3 + I_4, I_{\varepsilon_3} = I_3 + I_4, and finally, I_{\varepsilon_4} = I_4. Thus, in the continuum of the caudal spine, the transmission of energy to each link is contingent upon the preceding joints, as established by

M_1 = I_{\varepsilon_1} \ddot{\theta}_1, \quad M_2 = I_{\varepsilon_2} \ddot{\theta}_2, \quad M_3 = I_{\varepsilon_3} \ddot{\theta}_3, \quad M_4 = I_4 \ddot{\theta}_4. (70)

Each helical spring, connecting pairs of vertebrae, undergoes an input force f = -kx, directly proportional to the angular spring deformation indicated by elongation x. Here, k [kg m²/s²] represents the stiffness coefficient. External forces result in an angular moment, given by τ = -kθ, where torque serves as an equivalent variable to angular momentum, such that I\alpha = -k\theta. Consequently, when expressing the formula as a linear second-order differential equation, we have:

\ddot{\theta} + \frac{k}{I} \theta = \ddot{\theta}_{Lk}. (71)
Here, \ddot{\theta}_{Lk} represents undulatory accelerations arising from external loads or residual motions along the successive caudal links, which are detectable through encoders and IMUs. Assuming an angular frequency \omega = \sqrt{k/I}, a period p = 2\pi\sqrt{I/k}, and moments of inertia expressed as I_k = r_k^2 m_k, the general equation is formulated as follows:

M_k = I_{\varepsilon_k} \ddot{\theta}_k, (72)

where \ddot{\theta}_k is replaced by the helical-spring expression (71) to derive

M_k = I_{\varepsilon_k} \left( \ddot{\theta}_{Lk} - \frac{k_k}{I_{\varepsilon_k}} \theta_k \right). (73)

By algebraically extending, omitting terms, and rearranging for all links in the caudal spine, we arrive at the following matrix-form equation:

\begin{pmatrix} M_1 \\ M_2 \\ M_3 \\ M_4 \end{pmatrix} = \begin{pmatrix} I_{\varepsilon_1} & 0 & 0 & 0 \\ 0 & I_{\varepsilon_2} & 0 & 0 \\ 0 & 0 & I_{\varepsilon_3} & 0 \\ 0 & 0 & 0 & I_{\varepsilon_4} \end{pmatrix} \cdot \begin{pmatrix} \ddot{\theta}_{L1} \\ \ddot{\theta}_{L2} \\ \ddot{\theta}_{L3} \\ \ddot{\theta}_{L4} \end{pmatrix} - \begin{pmatrix} k_1 \theta_1 \\ k_2 \theta_2 \\ k_3 \theta_3 \\ k_4 \theta_4 \end{pmatrix}. (74)

Hence, in accordance with Expression (69), the tangential forces exerted on all the caudal links of the robotic fish are delineated by the following expression:

\begin{pmatrix} f_{l1} \\ f_{l2} \\ f_{l3} \\ f_{l4} \end{pmatrix} = \begin{pmatrix} \frac{I_{\varepsilon_1}}{\ell_1} & 0 & 0 & 0 \\ 0 & \frac{I_{\varepsilon_2}}{\ell_2} & 0 & 0 \\ 0 & 0 & \frac{I_{\varepsilon_3}}{\ell_3} & 0 \\ 0 & 0 & 0 & \frac{I_{\varepsilon_4}}{\ell_4} \end{pmatrix} \cdot \begin{pmatrix} \ddot{\theta}_{L1} \\ \ddot{\theta}_{L2} \\ \ddot{\theta}_{L3} \\ \ddot{\theta}_{L4} \end{pmatrix} - \begin{pmatrix} \frac{k_1}{\ell_1} & 0 & 0 & 0 \\ 0 & \frac{k_2}{\ell_2} & 0 & 0 \\ 0 & 0 & \frac{k_3}{\ell_3} & 0 \\ 0 & 0 & 0 & \frac{k_4}{\ell_4} \end{pmatrix} \cdot \begin{pmatrix} \theta_1 \\ \theta_2 \\ \theta_3 \\ \theta_4 \end{pmatrix} (75)

Alternatively, the last expression can be denoted as the following control law:

\mathbf{f} = \mathbf{M} \ddot{\boldsymbol{\theta}}_L - \mathbf{Q} \boldsymbol{\theta}_t \equiv \frac{\mathbf{M} (\dot{\boldsymbol{\theta}}_{t_2} - \dot{\boldsymbol{\theta}}_{t_1})}{t_2 - t_1} - \mathbf{Q} \boldsymbol{\theta}_t, (76)

where \mathbf{f} = (f_{l1}, f_{l2}, f_{l3}, f_{l4})^\top, \mathbf{M} represents mass dispersion, and \ddot{\boldsymbol{\theta}}_L = (\ddot{\theta}_{L1}, \ddot{\theta}_{L2}, \ddot{\theta}_{L3}, \ddot{\theta}_{L4})^\top denotes the vector of angular accelerations for the caudal vertebrae, including external loads. Additionally, \mathbf{Q} stands for the matrix of stiffness coefficients. Therefore, the inverse dynamics control law, presented in recursive form, is:

\dot{\boldsymbol{\theta}}_{L,t+1} = \dot{\boldsymbol{\theta}}_{L,t} + (t_2 - t_1) \mathbf{M}^{-1} (\mathbf{f} + \mathbf{Q} \boldsymbol{\theta}_t). (77)
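Since both matrices in (75) are diagonal, the force computation and the recursive update reduce to elementwise operations. The sketch below uses illustrative spine parameters (not the prototype’s) and takes the update step as θ̇_{t+1} = θ̇_t + Δt·M⁻¹(f + Qθ), the finite-difference reading of (76):

```python
# Illustrative caudal-spine parameters for the four links (not measured values).
I_eps = [4.0e-4, 2.6e-4, 1.5e-4, 0.6e-4]   # cumulative inertias I_eps_k [kg m^2]
k_s   = [0.9, 0.8, 0.7, 0.6]               # joint stiffness coefficients k_k
ell   = [0.06, 0.05, 0.04, 0.03]           # link lengths [m]
theta    = [0.10, 0.08, 0.05, 0.02]        # joint angles [rad]
theta_dd = [1.2, 0.9, 0.6, 0.3]            # undulatory accelerations [rad/s^2]

# Diagonal M and Q make Eq. (75) elementwise:
# f_k = (I_eps_k/ell_k)*theta_dd_k - (k_k/ell_k)*theta_k
f = [ie / l * tdd - kk / l * th
     for ie, kk, l, tdd, th in zip(I_eps, k_s, ell, theta_dd, theta)]

# Recursive update: theta_dot(t+1) = theta_dot(t) + dt * M^-1 * (f + Q*theta).
dt = 0.01
theta_d = [0.0, 0.0, 0.0, 0.0]
theta_d_next = [td + dt * (fi + kk / l * th) / (ie / l)
                for td, fi, ie, kk, l, th in zip(theta_d, f, I_eps, k_s, ell, theta)]
```

By construction, substituting f back into the update recovers the accelerations, so each new joint rate equals Δt times its undulatory acceleration.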

Finally, for feedback control, both equations are simultaneously employed within a
computational recursive scheme, and angular observations are frequently derived from
sensors on both joints: encoders and IMUs.

7. Ballasting Control System


This section delineates the integration of the ballasting control system, crafted to
complement the primary structure of the biorobot. It introduces the ballasting model-based
control system, selectively activated in response to the artificial neural network’s (ANN)
output, particularly triggered when the ANN signals “sink” or “buoyancy”. Figure 13a
visually depicts the biorobot’s ballasting system, while Figure 13b presents a diagram illus-
trating the fundamental components of the hydraulic piston, crucial for control modeling.

Figure 13. Ballasting system of the robot fish. (a) Detailed 3D model of the robot fish with the ballast device positioned beneath its floor. (b) Components of the basic ballasting device designed for modeling and control purposes.

The core operational functions of the ballasting device involve either filling its con-
tainer chamber with water to achieve submergence or expelling water from the container
to attain buoyancy. Both actions entail the application of a linear force for manipulating
a plunger or hydraulic cylindrical piston, thereby controlling water flow through either
suction or exertion. Consequently, the volume of the liquid mass fluctuates over time,
contingent upon a control reference or desired level marked H, along with quantifying a
filling rate u(t) and measuring the actual liquid level h(t).
Hence, we can characterize the filling rate u(t) as the change in volume V with respect to time, expressed as

\frac{dV(t)}{dt} = u(t), (78)

and assuming a cylindrical plunger chamber with radius r and area A = \pi r^2, the volume is expressed as

V(t) = A h(t). (79)
Here, h(t) represents the actual position of the plunger due to the incoming hydraulic
mass volume. Consequently, the filling rate can also be expressed as

u(t) = k( H − h(t)). (80)

Consider k as an adjustment coefficient, and let H be the reference or desired filling


level. The instantaneous longitudinal filling level is denoted h(t). By substituting the
previous expressions into the initial Equation (78), we derive the following first-order linear
differential equation:
A \frac{dh(t)}{dt} = k (H - h(t)). (81)

To solve the aforementioned equation, we employ the integrating factor method, such that

\dot{h}(t) + \frac{k}{A} h(t) = \frac{k}{A} H. (82)

In this instance, the integrating factor is determined as follows:

\mu(t) = e^{\int \frac{k}{A} dt} = e^{\frac{kt}{A}}. (83)

Thus, by applying the integrating factor, we effectively reduce the order of derivatives in the subsequent steps,

e^{\frac{kt}{A}} \dot{h}(t) + \frac{k}{A} e^{\frac{kt}{A}} h(t) = \frac{k}{A} H e^{\frac{kt}{A}}. (84)

Through algebraic simplification of the left side of the aforementioned expression, the following result is determined:

\left( h(t) e^{\frac{kt}{A}} \right)' = \frac{k}{A} H e^{\frac{kt}{A}}. (85)
Following this, by integrating both sides of the equation with respect to time,

\int_t \left( h(t) e^{\frac{kt}{A}} \right)' dt = \int_t \frac{k}{A} H e^{\frac{kt}{A}} dt, (86)

where the expression on the left side undergoes a transformation into

h(t) e^{\frac{kt}{A}} = \frac{k}{A} H \int_t e^{\frac{kt}{A}} dt, (87)

and the right side of the equation, once solved, transforms into

h(t) e^{\frac{kt}{A}} = H e^{\frac{kt}{A}} + c. (88)

Now, to obtain the solution for h(t), it is isolated by rearranging the term e^{\frac{kt}{A}}:

h(t) = H + c e^{-\frac{kt}{A}}. (89)

For initial conditions where h(t_0) = 0 indicates the plunger is completely inside the contained chamber at the initial time t_0 = 0 s, the integration constant c is determined as

0 = H + c e^0. (90)

Therefore, the value of c takes on c = -H, and substituting it into the previously obtained solution,

h(t) = H \left( 1 - e^{-\frac{kt}{A}} \right). (91)
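The closed-form level of (91) can be cross-checked against a direct numerical integration of the first-order model (81). The desired level H, gain k, and chamber radius below are illustrative assumptions, not the prototype’s ballast dimensions:

```python
import math

def fill_level(t: float, H: float, k: float, A: float) -> float:
    """Closed-form plunger level of Eq. (91): h(t) = H*(1 - exp(-k*t/A))."""
    return H * (1.0 - math.exp(-k * t / A))

# Illustrative parameters: desired level H [m], gain k, chamber radius r [m].
H, k, r = 0.08, 0.5, 0.02
A = math.pi * r**2

# Cross-check against forward-Euler integration of A*dh/dt = k*(H - h), Eq. (81).
dt, n_steps = 1e-5, 2000
h = 0.0
for _ in range(n_steps):
    h += dt * k * (H - h) / A
t_end = n_steps * dt
```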
In addition, the required force of the piston f_e is hence given by:

f_e = m(t) \frac{dv}{dt} + f_k + \rho_a A, (92)

where f_k is the friction force of the piston in the cylinder, and \rho_a A refers to the water pressure at that depth acting over the piston’s entry area. The instantaneous mass comprises the piston’s mass m_e and the liquid mass m_a of the incoming water:

m(t) = m_e + m_a, (93)

where the water density is \delta_a = \frac{m_a}{V_a} and V_a = \pi r^2 h(t), thus completing the mass model:

m(t) = m_e + \delta_a \pi r^2 H \left( 1 - e^{-\frac{kt}{A}} \right). (94)

Therefore, the force required to pull/push the plunger device is stated by the control law, given as

f_e = \left( m_e + \delta_a \pi r^2 H \left( 1 - e^{-\frac{kt}{A}} \right) \right) \frac{dv}{dt} + f_k + \rho_a A. (95)
dt
Finally, two sensory aspects were considered for feedback in the ballast system control and depth estimation. Firstly, the ballast piston is displaced by a motor-driven worm screw with an advance parameter L_p [m]; the sensory measurement is taken from the motor rotations \phi_p by an encoder of angular resolution R delivering \hat{\eta}_t pulses, where \phi_p = \frac{2\pi}{R} \hat{\eta}_t. This allows the instantaneous position of the piston to be measured by the observation \hat{h}_t,

\hat{h}_t = L_p \phi_p = \frac{2 L_p \pi}{R} \hat{\eta}_t. (96)
Secondly, the robot’s aquatic depth estimation, denoted y_t, is acquired using a pressure sensor, as previously described in Figure 2. This involves measuring the variation in pressure according to Pascal’s principle between two consecutive measurements taken at different times,

f_t - f_{t-1} - m_f g = 0, (97)

where the instantaneous fluid force at a certain depth is expressed in terms of the measured pressure \hat{\rho}. For a robot of mass m_f and averaged body area A_f subjected to the fluid force f_t,

f_t = \hat{\rho}_a A_f. (98)
Assuming an ideal fluid, we substitute the fluid forces in terms of measurable pressures and express the fish mass, which describes the weight of the robot’s body, in terms of its density δ_f and volume V_f. It follows that

\rho_2 A_f - \rho_1 A_f - \delta_a V_f g = 0, (99)

where y_1 and y_2 represent the surface depth and the actual depth of the robot, respectively, such that y = y_2 - y_1 denotes the total depth. Therefore,

A_f (\rho_2 - \rho_1 - \delta_a y g) = 0, (100)

and by determining the pressure in the liquid as a function of depth, we select the level of the liquid’s free surface such that \rho_1 \equiv \rho_A, where \rho_A represents the atmospheric pressure, resulting in

\hat{\rho} - \rho_A - \delta_a y g = 0, (101)

establishing \hat{\rho} \doteq \rho_2 as the water pressure sensor measurement. Thus, the feedback robot fish depth estimation is given by

y(\hat{\rho}) = \frac{\hat{\rho} - \rho_A}{\delta_a g}. (102)
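Both feedback observations can be sketched as one-line estimators. The screw advance, encoder resolution, pulse count, and pressure readings below are illustrative assumptions; a water density of 1000 kg/m³ and g = 9.81 m/s² are assumed in (102):

```python
from math import pi

def piston_position(eta: int, L_p: float, R: int) -> float:
    """Encoder-based plunger position of Eq. (96): h = L_p * (2*pi/R) * eta."""
    return L_p * 2.0 * pi * eta / R

def depth_from_pressure(p_meas: float, p_atm: float,
                        rho_water: float = 1000.0, g: float = 9.81) -> float:
    """Depth estimate of Eq. (102): y = (p - p_A) / (rho * g)."""
    return (p_meas - p_atm) / (rho_water * g)

# Illustrative readings: 2 mm/rev screw, 1024-pulse encoder, 2 m of water column.
h_hat = piston_position(eta=1200, L_p=0.002, R=1024)
y_hat = depth_from_pressure(p_meas=101325.0 + 2.0 * 1000.0 * 9.81, p_atm=101325.0)
```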

8. Conclusions and Future Work


In summary, this study introduces a cybernetic control approach integrating elec-
tromyography, haptic feedback, and an underactuated biorobotic fish avatar. Human
operators control the fish avatar using their muscular stimuli, eliminating the need for
handheld apparatus. The incorporation of fuzzy control, combining EMG stimuli with
motion sensor observations, has proven highly versatile in influencing the decision-making
process governing the fish’s swimming behavior. The implementation of a deep neural

network achieved remarkable accuracy, surpassing 99.02%, in recognizing sixteen distinct


electromyographic gestures. This underscores the system’s robustness, effectively translat-
ing human intentions into precise control commands for the underactuated robotic fish.
The adoption of robotic fish technologies as human avatars in deep-sea exploration
offers significant benefits and implications. Robotic fish avatars enable more extensive
and efficient exploration of hazardous or inaccessible deep-sea environments, leading to
groundbreaking discoveries in marine biology, geology, and other fields. By replacing
human divers, robotic avatars mitigate risks associated with deep-sea exploration, ensuring
the safety of researchers by avoiding decompression sickness, extreme pressure, and
physical danger. Robotic avatar technology presents a more cost-effective alternative to
manned missions, eliminating the need for specialized life support systems, extensive
training, and logistical support for human divers. Robotic avatars facilitate continuous
monitoring of deep-sea ecosystems, collecting data over extended periods without being
limited by human endurance or logistical constraints. Minimizing human presence in the
deep sea reduces environmental disturbance and the risk of contamination or damage
to fragile ecosystems, preserving them for future study and conservation efforts. The
development of robotic fish technologies drives innovation in robotics, artificial intelligence,
and sensor technologies, with potential applications extending beyond marine science into
various industries. Overall, while robotic fish avatars offer numerous benefits for deep-sea
exploration and research, their deployment should be carefully managed to maximize
scientific advancement while minimizing potential negative consequences.
While the adoption of robotic fish avatars holds significant promise for deep-sea ex-
ploration, several limitations and areas for future research must be addressed. Developing
robotic fish avatars capable of accurately mimicking the behaviors of real fish in diverse
deep-sea conditions remains a considerable challenge, necessitating improvements in
propulsion, maneuverability, and energy efficiency. Enhancing their sensory capabilities to
detect environmental stimuli effectively, alongside improving communication systems for
real-time data transmission, is crucial for efficient exploration and navigation. Additionally,
these avatars must adapt to the harsh deep-sea conditions, requiring research into materials
and components that withstand extreme pressures, low temperatures, and limited visibility
while maintaining functionality. Ensuring their long-term reliability and durability through
maintenance strategies and robust designs is essential for sustained exploration missions.
Integrating robotic fish avatars with emerging technologies like artificial intelligence and
advanced sensors could further enhance their effectiveness. Addressing these challenges
through continued research is vital for realizing the full potential of robotic fish avatars in
deep-sea exploration.
This manuscript presents the outcomes of experimental EMG data classification and
recognition achieved through a multilayered artificial neural network. These results signify
a significant advancement in the field of robotic control, as they demonstrate the successful
classification and recognition of EMG data, paving the way for enhanced control strategies
in robotics. Utilizing an oscillation pattern generator, real signals were supplied to an
experimental prototype of an underactuated robotic avatar fish equipped with an electro-
magnetic oscillator. This successful integration showcases the feasibility of incorporating
real-time EMG data into robotic control systems, enabling more dynamic and responsive
behavior. Additionally, validation of the fuzzy controller and the fish’s dynamical con-
trol model was conducted via computer simulations, providing further evidence of the
effectiveness and reliability of the proposed control architecture. While the introduction
of haptic feedback and interface remains conceptual within the proposed architecture, it
signifies a promising direction for future research, aiming to augment remote operation
with immersive experiences. The advancements demonstrated in this study hold sub-
stantial potential for future applications in underwater exploration, offering immersive
cybernetic control capabilities that could revolutionize the field. The future trajectory of
this research endeavors to enhance telepresence and avatar functionalities by integrating
electroencephalography signals from the human brain. This integration will harness more

sophisticated deep learning artificial neural network structures to achieve superior signal
recognition. This advancement promises to unlock new levels of immersive interaction
and control, paving the way for transformative applications in fields such as virtual reality,
human-robot interaction, and neurotechnology.

Author Contributions: Conceptualization, E.A.M.-G.; project administration, E.A.M.-G.; supervision,


E.A.M.-G. and R.T.-C.; writing original draft preparation, E.A.M.-G.; data curation, M.A.M.M.;
investigation, M.A.M.M. and E.A.M.-G.; methodology, M.A.M.M., R.T.-C. and E.A.M.-G.; software,
M.A.M.M.; formal analysis, M.A.M.M. and E.A.M.-G.; writing—review and editing, E.A.M.-G. and
E.M.; validation, E.A.M.-G. and E.M.; visualization, M.A.M.M., E.A.M.-G. and E.M. All authors have
read and agreed to the published version of the manuscript.
Funding: This research received partial funding through scholarship grant number 2022-000018-
02NACF, awarded to CVU 1237846 by the Consejo Nacional de Humanidades, Ciencias y Tecnologías
(CONAHCYT).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The dataset for this research, along with a supporting experimental
video, has been made accessible on Zenodo. The files were uploaded on 12 January 2024 at https:
//doi.org/10.5281/zenodo.10477143.
Acknowledgments: The corresponding author acknowledges the support of Laboratorio de Robótica. The third author acknowledges the support of the Kazan Federal University Strategic Academic Leadership Program (“PRIORITY-2030”).
Conflicts of Interest: The corresponding author asserts that they served as the guest editor for the
Special Issue of the journal, emphasizing their lack of influence on the blind peer-review process
or the final decision on the manuscript. Additionally, the authors declare that there are no conflicts
of interest.

Appendix A. EMG Stimuli Patterns

Figure A1. Components of thumb pattern space: filters γ, λ, and Ω. (a) Filters applied to the left thumb. (b) Filters applied to the right thumb.

Figure A2. Components of index finger pattern space: filters γ, λ, and Ω. (a) Filters applied to the left index finger. (b) Filters applied to the right index finger.

Figure A3. Components of middle finger pattern space: filters γ, λ, and Ω. (a) Filters applied to the left middle finger. (b) Filters applied to the right middle finger.

Figure A4. Components of ring finger pattern space: filters γ, λ, and Ω. (a) Filters applied to the left ring finger. (b) Filters applied to the right ring finger.

Figure A5. Components of little finger pattern space: filters γ, λ, and Ω. (a) Filters applied to the left little finger. (b) Filters applied to the right little finger.

References
1. Pan, Y.; Steed, A. A Comparison of Avatar-, Video-, and Robot-Mediated Interaction on Users’ Trust in Expertise. Front. Robot. AI
2016, 6, 12. [CrossRef]
2. Tanaka, K.; Nakanishi, H.; Ishiguro, H. Comparing Video, Avatar, and Robot Mediated Communication: Pros and Cons of
Embodiment. In Collaboration Technologies and Social Computing. CollabTech; Yuizono, T., Zurita, G., Baloian, N., Inoue, T., Ogata,
H., Eds.; Communications in Computer and Information Science; Springer: Berlin/Heidelberg, Germany, 2014.
3. Khatib, O.; Yeh, X.; Brantner, G.; Soe, B.; Kim, B.; Ganguly, S.; Stuart, H.; Wang, S.; Cutkosky, M.; Edsinger, A.; et al. Ocean One: A
Robotic Avatar for Oceanic Discovery. IEEE Robot. Autom. Mag. 2016, 23, 20–29. [CrossRef]
4. Baba, J.; Song, S.; Nakanishi, J.; Yoshikawa, Y.; Ishiguro, H. Local vs. Avatar Robot: Performance and Perceived Workload of
Service Encounters in Public Space. Front. Robot. AI 2021, 8, 778753. [CrossRef]
5. Wiener, N. Cybernetics. Bull. Am. Acad. Arts Sci. 1950, 3, 2–4. [CrossRef]
6. Novikov, D.A. Cybernetics: From Past to Future; Springer: Berlin, Germany, 2015.
7. Tamburrini, G.; Datteri, E. Machine Experiments and Theoretical Modelling: From Cybernetic Methodology to Neuro-Robotics.
Mind Mach. 2005, 15, 335–358. [CrossRef]
8. Skogerson, G. Embodying robotic art: Cybernetic cinematics. IEEE MultiMedia 2001, 8, 4–7. [CrossRef]
9. Beer, S. What is cybernetics? Kybernetes 2002, 31, 209–219. [CrossRef]
10. Fradkov, A.L. Application of cybernetic methods in physics. Physics-Uspekhi 2005, 48, 103. [CrossRef]
11. Romano, D.; Benelli, G.; Kavallieratos, N.G.; Athanassiou, C.G.; Canale, A.; Stefanini, C. Beetle-robot hybrid interaction: Sex,
lateralization and mating experience modulate behavioural responses to robotic cues in the larger grain borer Prostephanus
truncatus (Horn). Biol. Cybern. 2020, 114, 473–483. [CrossRef] [PubMed]
12. Park, S.; Jung, Y.; Bae, J. An interactive and intuitive control interface for a tele-operated robot (AVATAR) system. Mechatronics
2018, 55, 54–62. [CrossRef]
13. Escolano, C.; Antelis, J.M.; Minguez, J. A telepresence mobile robot controlled with a noninvasive brain–computer interface. IEEE
Trans. Syst. Man Cybern. Part B (Cybern.) 2011, 8, 793–804. [CrossRef]
14. Moniruzzaman, M.; Rassau, A.; Chai, D.; Islam, S.M. Teleoperation methods and enhancement techniques for Mobile Robots: A
comprehensive survey. Robot. Auton. Syst. 2022, 150, 103973. [CrossRef]
15. A Perspective on Robotic Telepresence and Teleoperation Using Cognition: Are We There Yet? Available online: https://arxiv.
org/abs/2203.02959 (accessed on 10 August 2023).
16. Williamson, R. MIT SoFi: A Study in Fabrication, Target Tracking, and Control of Soft Robotic Fish. Bachelor’s Thesis, Mas-
sachusetts Institute of Technology, Cambridge, MA, USA, February 2022.
17. Mi, J.; Sun, Y.; Wang, Y.; Deng, Z.; Li, L.; Zhang, J.; Xie, G. Gesture recognition based teleoperation framework of robotic fish.
In Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics, Qingdao, China, 3–7 December 2016;
pp. 1–6.
Machines 2024, 12, 124 40 of 41

18. An Avatar Robot Overlaid with the 3D Human Model of a Remote Operator. Available online: https://arxiv.org/abs/2303.02546
(accessed on 14 September 2023).
19. Pang, G.; Yang, G.; Pang, Z. Review of Robot Skin: A Potential Enabler for Safe Collaboration, Immersive Teleoperation, and
Affective Interaction of Future Collaborative Robots. IEEE Trans. Med. Robot. Bion. 2021, 3, 681–700. [CrossRef]
20. Schwarz, M.; Lenz, C.; Rochow, A.; Schreiber, M.; Behnke, S. NimbRo avatar: Interactive immersive telepresence with force-
feedback telemanipulation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Praque,
Czech Republic, 27 September–1 October 2021.
21. Li, H.; Nie, X.; Duan, D.; Li, Y.; Zhang, J.; Zhou, M.; Magid, E. An Admittance-Controlled Amplified Force Tracking Scheme for
Collaborative Lumbar Puncture Surgical Robot System. Int. J. Med. Robot. Comp. Assis. Surg. 2022, 18, e2428. [CrossRef] [PubMed]
22. Kristoffersson, A.; Coradeschi, S.; Loutfi, A. A Review of Mobile Robotic Telepresence. Adv. Hum.-Comp. Interact. 2013, 2013,
902316. [CrossRef]
23. Ryu, H.X.; Kuo, A.D. An optimality principle for locomotor central pattern generators. Sci. Rep. 2021, 11, 13140. [CrossRef]
24. Ijspeert, A.J. Central pattern generators for locomotion control in animals and robots: A Review. Neural Netw. 2008, 21, 642–653.
[CrossRef]
25. Nassour, J.; Henaff, P.; Ouezdou, F.B.; Cheng, G. Multi-layered multi-pattern CPG for adaptive locomotion of humanoid robots.
Biol. Cybern. 2014, 108, 291–303. [CrossRef]
26. Yu, J.; Wang, M.; Dong, H.; Zhang, Y.; Wu, Z. Motion Control and Motion Coordination of Bionic Robotic Fish: A Review. J. Biol.
Eng. 2018, 15, 579–598. [CrossRef]
27. Mulder, M.; Pool, D.M.; Abbink, D.A.; Boer, E.R.; Zaal, P.M.T.; Drop, F.M.; El K.; Van Paassen, M.M. Manual Control Cybernetics:
State-of-the-Art and Current Trends. IEEE Tran. Hum.-Mach. Syst. 2017, 48, 1–18. [CrossRef]
28. Kastalskiy, I.; Mironov, V.; Lobov, S.; Krilova, N.; Pimashkin, A.; Kazantsev, V. A Neuromuscular Interface for Robotic Devices
Control. Comp. Math. Meth. Med. 2018, 2018, 8948145. [CrossRef]
29. Inami, M.; Uriu, D.; Kashino, Z.; Yoshida, S.; Saito, H.; Maekawa, A.; Kitazaki, M. Cyborgs, Human Augmentation, Cybernetics,
and JIZAI Body. In Proceedings of the AHs 2022: Augmented Humans, Chiba, Japan, 13–15 March 2022; pp. 230–242.
30. De la Rosa, S.; Lubkull, M.; Stephan, S.; Saulton, A.; Meilinger, T.; Bülthoff, H.; Cañal-Bruland, R. Motor planning and control:
Humans interact faster with a human than a robot avatar. J. Vis. 2015, 15, 52. [CrossRef]
31. Osawa, H.; Sono, T. Tele-Nininbaori: Intentional Harmonization in Cybernetic Avatar with Simultaneous Operation by Two-
persons. In Proceedings of the HAI’21: Proceedings of the 9th International Conference on Human-Agent Interaction, Virtual
Event, Japan, 9–11 November 2021; pp. 235–240.
32. Wang, G.; Chen, X.; Han, S. Central pattern generator and feedforward neural network-based self-adaptive gait control for a
crab-like robot locomoting on complex terrain under two reflex mechanisms. Int. J. Adv. Robot. Syst. 2017, 14, 1729881417723440.
[CrossRef]
33. Aymerich-Franch, L.; Petit, D.; Ganesh, G.; Kheddar, A. Object Touch by a Humanoid Robot Avatar Induces Haptic Sensation in
the Real Hand. J. Comp.-Med. Commun. 2017, 22, 215–230. [CrossRef]
34. Talanov, M.; Suleimanova, A.; Leukhin, A.; Mikhailova, Y.; Toschev, A.; Militskova, A.; Lavrov, I.; Magid, E. Neurointerface
implemented with Oscillator Motifs. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems,
Prague, Czech Republic, 27 September–1 October 2021; pp. 4150–4155.
35. Ladrova, M.; Martinek, R.; Nedoma, J.; Fajkus, M. Methods of Power Line Interference Elimination in EMG Signals. J. Biomim.
Biomater. Biomed. Eng. 2019, 40, 64–70. [CrossRef]
36. Tigrini, A.; Verdini, F.; Fioretti, S.; Mengarelli, A. On the decoding of shoulder joint intent of motion from transient EMG: Feature
evaluation and classification. IEEE Trans. Med. Robot. Bion. 2023, 5, 1037–1044. [CrossRef]
37. Ivancevic, V.; Beagley, N. Brain-like functor control machine for general humanoid biodynamics. Intl. J. Math. Math. Sci. 2005,
2005, 171485. [CrossRef]
38. Crespi, A.; Lachat, D.; Pasquier, A.; Ijspeert, A.J. Controlling swimming and crawling in a fish robot using a central pattern
generator. Auton. Robot. 2008, 25, 3–13. [CrossRef]
39. Manduca, G.; Santaera, G.; Dario, P.; Stefanini, C.; Romano, D. Underactuated Robotic Fish Control: Maneuverability and
Adaptability through Proprioceptive Feedback. In Biomimetic and Biohybrid Systems. Living Machines 2023; Meder, F., Hunt, A.,
Margheri, L., Mura, A., Mazzolai, B., Eds.; LNCS, Vol. 14157; Springer: Berlin, Germany, 2023. [CrossRef]
40. Pattern Generators for the Control of Robotic Systems. arXiv 2015, arXiv:1509.02417.
41. Uematsu, K. Central nervous system underlying fish swimming [A review]. In Bio-Mechanisms of Swimming and Flying; Kato, N.,
Kamimura, S., Eds.; Springer: Tokyo, Japan, 2008; pp. 103–116.
42. Ki-In, N.; Chang-Soo, P.; In-Bae, J.; Seungbeom, H.; Jong-Hwan, K. Locomotion generator for robotic fish using an evolutionary
optimized central pattern generator. In Proceedings of the IEEE International Conference on Robotics and Biomimetics, Tianjin,
China, 14–18 December 2010.
43. Xie, F.; Zhong, Y.; Kwok, M.F.; Du, R. Central Pattern Generator Based Control of a Wire-driven Robot Fish. In Proceedings of the
IEEE International Conference on Information and Automation, Wuyishan, China, 11–13 August 2018; pp. 475–480.
Machines 2024, 12, 124 41 of 41
44. Wang, M.; Yu, J.; Tan, M. Parameter Design for a Central Pattern Generator Based Locomotion Controller. In Proceedings of the
ICIRA 2008, Wuhan, China, 15–17 October 2008; pp. 352–361.
45. Wang, W.; Guo, J.; Wang, Z.; Xie, G. Neural controller for swimming modes and gait transition on an ostraciiform fish robot. In
Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Wollongong, NSW, Australia,
9–12 July 2013; pp. 1564–1569.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.