Reconstructing Creative Thoughts: Hopfield Neural Network
Neurocomputing
journal homepage: www.elsevier.com/locate/neucom
ARTICLE INFO

Communicated by Zidong Wang

Keywords: Creative Thinking; Hopfield Neural Network; Patterns; Memory; Associative Chains; Semantic Association

ABSTRACT

From a brain processing perspective, the perception of creative thinking is rooted in the underlying cognitive process, which facilitates exploring and cultivating novel avenues and problem-solving strategies. However, it is challenging to emulate the intricate complexity with which the human brain presents a novel way to uncover unique solutions. One potential approach to mitigating this complexity is incorporating creative cognition into evolving artificial intelligence systems and associated neural models. Hopfield neural networks (HNNs) are commonly acknowledged as a simplified neural model, renowned for their biological plausibility in storing and retrieving information, specifically patterns of neurons. Our findings suggest utilizing a modern HNN to emulate creative thinking by making meaningful associations between seemingly disparate concepts. This semantic link is represented as a radio knob that can be set to determine whether the network solves problems creatively or shuts down; the threshold is the controlling parameter. We used the term "first knob of creativity" to describe this threshold setting and utilized the "second knob of creativity" to aid in the examination of alternatives within the network. By manipulating the knobs, it is possible to selectively suppress specific patterns, facilitating the creative functioning of the HNN and identifying other patterns with which an input can be linked.
1. Introduction

Through the creative act, we can perceive the world differently, find hidden patterns, and construct connections between seemingly unrelated events to generate extraordinary outputs [6,23,29,30]. Consequently, there is a broad consensus that creative thinking is recognized for its ability to generate solutions that are novel, useful, and surprising [16,19,22,30,42,51,58,64]. Nevertheless, creativity is multifaceted: it can be expressed in various forms, hues, and shapes and is thus enormously challenging to pin down [5,20,27,52,53,56,57]. One approach to reducing this complexity is applying neural network architectures and their relevant learning rules, as Khalil and Moustafa [30] suggested. Therefore, we intend to incorporate creative cognitive process-based semantic association [12,28,37] into a conceptual framework for a creative, plausible neural network using the Hopfield neural network (HNN). The HNN is a biologically inspired mathematical model developed by John J. Hopfield in 1982. It is best recognized for its ability to store and recall knowledge [43], and establishing the HNN model was remarkable in the neural computational and modeling era [2,24,33,48]. The HNN is a recurrent associative neural network model that features input from all neurons to retrieve memories from partial or noisy input [24]. After receiving an input, the HNN updates the neuron states iteratively, synchronously or asynchronously, to converge toward one of the stored memories and reach a stable state that recalls the information it was trained on [36]. Therefore, HNNs have been used for associative memory research, including associating an input with its most similar memorized pattern or identifying unknown systems by computing the learning mechanism's optimized coefficients to fit the nonlinear system [43,61,62].

This network structure strongly resembles the architecture of human memory, which encodes several neurons that fire together in response to certain "microfeatures" [18]. The HNN recalls patterns by considering the association between the neurons using the weight matrix, rather than storing the patterns themselves. Therefore, we chose the HNN model [24] to explain this semantic linkage and, thus, better understand the creative process's foundation based on the semantic association framework. The associative foundation of the creative process was first introduced by Mednick [44], and the semantic hierarchy of creative operation could be elucidated through Mendelson's (1974) model of knowledge access [45], which engaged the concepts of defocused and focused attention.
* Corresponding author.
E-mail address: [email protected] (R. Khalil).
1 https://orcid.org/0000-0003-2632-8306
https://doi.org/10.1016/j.neucom.2024.127324
Received 14 September 2023; Received in revised form 2 January 2024; Accepted 21 January 2024
Available online 25 January 2024
0925-2312/© 2024 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
D. Checiu et al. Neurocomputing 575 (2024) 127324
Fig. 1. Description of the neural network structure of Hopfield neural networks (HNN) and its implementation. This schematic representation of a binary HNN illustrates its connectivity and the potential states of the neurons, which can be either on (active) or off (inactive). The energy landscape of the HNN has multiple point attractors; stable fixed points are shown as small blue dots, and a dotted circle denotes a basin of attraction. The network consists of a pattern and a weight matrix [W], which enables single-step training, as the computation is executed in a single operation. The diagonal symmetry of W can be attributed to the symmetric, bi-directional connections among the neurons, a consequence of the complete connectivity. The updating is illustrated as a graphical visualization of a successful implementation of an HNN: a randomly chosen pattern, its perturbation as the input fed into the HNN, and the output after two complete cycles of asynchronous updates from the initial experiments. Asynchronous neuron updates of an input iterated through the network are represented in Fig. 2.
Fig. 2. Representation of asynchronous neuron updates of an input iterated through the network. Green cells mark neurons that were updated, while red cells mark neurons that kept their previous value.
unexplored links between them. Based on this reasoning, our patterns represent unrelated concepts; therefore, a large percentage of neurons firing per pattern (which would imply a lower likelihood of association) was not required, in line with suggestions from previous research [7,9,24,43].
2.1.2. Weight Matrix

We designed the connectivity of the neurons as a matrix with defined weights, storing n patterns in the HNN to memorize; this is performed through the computation of the weight matrix W.

We used the Hebbian learning rule to store the patterns in the HNN. The Hebbian rule is based on the premise that neurons that activate simultaneously establish connections: "Neurons that fire together wire together." Neurons that do not fire in synchrony do not establish a connection. When two neurons fire simultaneously, the strength of their connection (i.e., the connectivity value) increases, and their activity amplifies with time [47]. The same notion applies to the alternative scenario, wherein the neurons exhibit contrasting states: their connection value diminishes and reaches a null state. This tendency facilitates the convergence of an input towards the most similar pattern, signifying the synchronized firing of neurons and indicating the connection between them [8,15].

Based on this Hebbian rule, we created W: the weight matrix is computed by summing the outer products of all the patterns p_i, to measure the correlation between the patterns and the input:

W = Σ_{i=1}^{n} p_i^T p_i = p_1^T p_1 + p_2^T p_2 + ... + p_n^T p_n

We considered each pattern p_i a row vector; as such, p_i^T is a column vector, and the outer product computation for a pattern results in a matrix.

One of the remarkable advantages of the HNN is its ability to train in one step, as the computation of the weight matrix W is completed in one operation [2,4,10,25,33,48]. We used the diagonal symmetry of W, caused by the symmetric and bi-directional connections of the neurons (Fig. 1).
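As an illustration, the one-step Hebbian construction of W can be sketched as follows (a minimal NumPy sketch; the pattern sizes, sparsity, and variable names are our own assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100          # number of neurons per pattern
n_patterns = 3   # number of stored patterns

# Binary patterns as row vectors with roughly 10% active (firing) neurons.
patterns = (rng.random((n_patterns, N)) < 0.1).astype(int)

# One-step Hebbian learning: W is the sum of the outer products p_i^T p_i,
# so W[a, b] counts how often neurons a and b fire together across patterns.
W = sum(np.outer(p, p) for p in patterns)

# Symmetric, bi-directional connections give W its diagonal symmetry.
assert np.array_equal(W, W.T)
```

Because the computation is a single sum of outer products, no iterative training is needed, which is the one-step property noted above.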
2.1.3. Updating

Given a binary input x of N neurons, we iterate it through the HNN and make it converge towards a stable state. This convergence is facilitated
by utilizing the weight matrix W. Notably, the patterns themselves are not required for this operation: once the network has been trained, all the necessary information is encapsulated within W [48,55].

There are two ways to perform an update: synchronously, meaning all of the neurons of the input x are updated in one iteration [31,46], and asynchronously, where at each iteration a random neuron (out of the N available ones) is chosen and updated [11,24,48]. We used asynchronous updates to comprehensively examine the intermediate steps in reaching a solution. Consequently, an iteration is defined as updating a single input neuron, necessitating N iterations to fully update all input neurons.
Considering x as an input, W as the weight matrix, t as a chosen threshold value, and the Heaviside function H, a synchronous update follows the formula:

x_new = H(x·W − t)

When performing an asynchronous update of the input x at a randomly chosen position j, we computed the dot product between the input x and column j of the weight matrix.
The output of this product is a non-negative numerical value with a possible minimum of 0, which occurs if no correlations were observed between the firing neurons in the input and the training patterns, or if the specific position j was never active in any of the training patterns. After subtracting a threshold parameter, the Heaviside function converts each value to 0 or 1:

H(z) = 1 if z > 0; H(z) = 0 if z ≤ 0
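A sketch of both update modes under these definitions (function names are ours; the Heaviside convention H(0) = 0 follows the text):

```python
import numpy as np

def heaviside(z):
    # H(z) = 1 if z > 0, else 0 (including z = 0), as defined above.
    return (z > 0).astype(int)

def synchronous_update(x, W, t):
    # x_new = H(x·W − t): all N neurons are updated in one iteration.
    return heaviside(x @ W - t)

def asynchronous_update(x, W, t, rng):
    # One iteration: pick a random position j and update only that neuron,
    # using the dot product of x with column j of the weight matrix.
    j = rng.integers(len(x))
    x = x.copy()
    x[j] = 1 if x @ W[:, j] - t > 0 else 0
    return x
```

For a single stored pattern p with k active neurons, x·W at the active positions equals the overlap between x and p, so any threshold t below that overlap lets a slightly perturbed input snap back to p in one synchronous step.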
Recalling a memorized pattern corresponds to reaching a (stable) stationary state, which can be interpreted as reaching a local minimum of an energy function; this is the main idea behind the updating process of the HNN [32,48,55]:

E = −(1/2) · x^T · W · x
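In code, the energy and its behavior under perturbation can be checked directly (the toy sizes below are our own, not the paper's setup):

```python
import numpy as np

def energy(x, W):
    # E = -(1/2) · x^T · W · x; stored patterns sit at local minima.
    return -0.5 * x @ W @ x

p = np.zeros(20, dtype=int)
p[[0, 3, 7, 11, 15]] = 1   # a stored pattern with 5 active neurons
W = np.outer(p, p)         # Hebbian weights for this single pattern

x = p.copy()
x[0] = 0                   # perturb: switch one active neuron off

# The stored pattern has strictly lower energy than its perturbed version.
assert energy(p, W) < energy(x, W)
```

Minimizing E step by step is exactly what the threshold updates above achieve, which is why stored patterns act as attractors.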
Each stored pattern has a corresponding energy minimum, which acts as an attractor. Therefore, it is rational to consider minimizing the energy function, as these minima are the states we aim to reach with each update [32,48,55]. This approach is effective for a limited number of loosely connected patterns and can retrieve all of them. Nevertheless, the capacity is constrained, and attempts to increase the number of patterns or to store similar patterns tend to fail. One notable feature is that the energy function develops minima corresponding to spurious states; these are not stored patterns but modifications or linear combinations of them, which can potentially cause an input to become trapped [32,55].
2.1.4. Initial Experiments and Pattern Suppression

The implementation process was initiated by constructing the system from its fundamental components, whereby HNN variants with binary (i.e., 0s and 1s) and bipolar (i.e., 1s and −1s) patterns and inputs were explored through experimentation.
We chose to construct the binary HNN instead of the bipolar HNN because of its better alignment with the plausibility of neuron behavior, specifically firing when surpassing a specific activation threshold: the emphasis is placed on the magnitude of activation rather than its sign, as supported by previous studies [40,63]. The preference for asynchronous over synchronous neuron updates is based on their advantage in visualizing the network's iterative process of shifting an input towards a stable state, as opposed to only perceiving the result in a single step [11,24], and in avoiding oscillatory behavior.
Each cycle of neuron updates is performed asynchronously and in randomized order (Fig. 2). Consequently, we secured the full update after N iterations of the neurons and subsequently implemented this asynchronous updating method.
Afterward, we used perturbed versions of the HNN's memorized patterns as initial states to determine the outcome after a series of iterations. The algorithm stochastically selects a single pattern from the n training patterns and perturbs it by flipping a specified number of neurons, i.e., changing them from 0 to 1 or vice versa.

As a logical progression, we gradually increased the number of patterns n and evaluated the behavior of the HNN. It has been indicated that the memory capacity² of the HNN decreases as the overlap of patterns increases [24,43,48].

² We refer to the memory capacity as the maximum number of patterns n that can be stored and retrieved without errors [36].
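The perturbation procedure can be sketched as follows (a toy helper of our own; the paper's experiments flip a fixed number of neurons in a randomly selected training pattern):

```python
import numpy as np

def perturb(patterns, n_flips, rng):
    # Stochastically select one training pattern and flip n_flips randomly
    # chosen neurons (0 -> 1 or 1 -> 0).
    idx = int(rng.integers(len(patterns)))
    x = patterns[idx].copy()
    flip = rng.choice(len(x), size=n_flips, replace=False)
    x[flip] = 1 - x[flip]
    return idx, x

rng = np.random.default_rng(1)
patterns = (rng.random((3, 50)) < 0.1).astype(int)
idx, x = perturb(patterns, n_flips=5, rng=rng)

# Exactly n_flips positions differ from the selected pattern.
assert (x != patterns[idx]).sum() == 5
```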
We considered an input that is a slightly perturbed version of pattern no. 1; this means that once we iterate the input through the network, we expect to return to that same pattern no. 1, as the input features are closest to those of the pattern we disturbed. To be creative and find another pattern to connect with, we need to ensure that our network will not let the input iterate towards pattern no. 1, while simultaneously retaining that pattern in our system's memory for later use [3]. In other words, we do not erase a concept or break its connections; instead, we shift focus to finding another solution, and this is the rationale behind the idea of suppressing a pattern.

We suppress a pattern from the network's response without deleting it by effectively subtracting its contribution from the weight matrix W, which measures the correlations between patterns as perceived by each neuron. When performing a specific neuron update, we have to reduce the value of a given pattern to disable its contribution. Therefore, the new updating formula for input x while suppressing pattern no. k, which will later be evaluated through a threshold, replaces x·W by:

x·(W − p_k^T p_k)

This formula is what we refer to as the "second knob of creativity", which enables the network to reach an alternative solution. By manipulating the knob, certain patterns are inhibited, enabling the HNN to dynamically choose an alternative pattern to establish an association with the input, and hence a creativity-based semantic association.
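A sketch of this suppressed update (the subtraction is the formula above; the function name and toy setup are ours):

```python
import numpy as np

def update_with_suppression(x, W, p_k, t):
    # "Second knob": replace x·W by x·(W − p_k^T p_k), disabling the
    # contribution of pattern p_k without deleting it from W itself.
    z = x @ (W - np.outer(p_k, p_k)) - t
    return (z > 0).astype(int)

# Toy demonstration (our own setup): two disjoint stored patterns.
p0 = np.zeros(20, dtype=int); p0[:5] = 1
p1 = np.zeros(20, dtype=int); p1[10:15] = 1
W = np.outer(p0, p0) + np.outer(p1, p1)

# Suppressing the pattern the input matches shuts the network down...
assert update_with_suppression(p0, W, p0, 2).sum() == 0
# ...while another stored pattern is still recalled normally.
assert np.array_equal(update_with_suppression(p1, W, p0, 2), p1)
```

With overlapping patterns instead of disjoint ones, the suppressed pattern's overlap regions can seed an alternative pattern, which is the creative behavior discussed below.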
Throughout the implementation, the significant role of the threshold in the binary HNN became evident. Specifically, we observed only a small range of threshold values within which the network can attain a stable non-zero state while simultaneously suppressing certain patterns. One possible approach is to utilize the first knob that was introduced and tweak the threshold values gradually until the network exhibits creative behavior and ultimately converges to an alternative pattern.

3. Results

3.1. Threshold limits

We measured the success rate by counting how many tests resulted in a perfectly reconstructed memory. We simulated several experiments, each with n ∈ {1, 2, …, 9} new patterns, each including 10% randomly positioned firing neurons. We perturbed a randomly selected pattern by flipping 100 neurons and fed it as the input (initial state) to the respective network, running the updates with threshold values t ∈ {0, 10, 20, …, 100}.

As n increases, the range of threshold values t that leads to a success rate of 100% (i.e., the network effectively recalls the perturbed pattern in all tests) narrows (Fig. 3).

For values of n greater than or equal to 8, none of the networks achieved a complete success rate of 100%: for n = 8, the maximum success rate observed was 87%, and for n = 9, the maximum dropped to 72%. For every n, there was a range of thresholds t over which the maximum success rate was achieved. Remarkably, the upper boundary of all these ranges was a threshold of 80, and the network completely shuts down for thresholds t ≥ 100, having a 0% recall rate. The
Fig. 3. Successful reconstruction of perturbed patterns and threshold values. Panel A illustrates a heatmap of success rates across threshold values for each number n of stored patterns. A perfect success rate is observed only for limited ranges of threshold values; these ranges get smaller with increasing n and disappear entirely for n > 7. The distribution of the success rates for each number of patterns (i.e., one graph for each tested value of n) is shown in Supplementary Fig. 1. Panel B depicts the successful threshold values as maximum and minimum limits for the 9 patterns. The lowest and highest threshold values with a perfect success rate visualize the decrease of these ranges as n increases. Threshold ranges are shown for a perfect success rate and for a reasonable success rate (≥75% and <100%); the maximum threshold of 80 for a total success rate is the same for all values of n.
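The sweep behind these results can be reproduced in miniature as follows (a scaled-down sketch with sizes, flip counts, and threshold grids of our own choosing, not the paper's exact setup):

```python
import numpy as np

def success_rate(N, n_patterns, t, n_tests=20, active=0.1, n_flips=10, seed=0):
    # Fraction of tests in which a perturbed stored pattern is perfectly
    # reconstructed after a few synchronous update sweeps at threshold t.
    rng = np.random.default_rng(seed)
    patterns = (rng.random((n_patterns, N)) < active).astype(int)
    W = sum(np.outer(p, p) for p in patterns)
    hits = 0
    for _ in range(n_tests):
        k = int(rng.integers(n_patterns))
        x = patterns[k].copy()
        flip = rng.choice(N, size=n_flips, replace=False)
        x[flip] = 1 - x[flip]                    # perturb the chosen pattern
        for _ in range(5):                       # a few full synchronous sweeps
            x = ((x @ W - t) > 0).astype(int)
        hits += int(np.array_equal(x, patterns[k]))
    return hits / n_tests

# Sweep a grid of thresholds for each number of stored patterns.
rates = {(n, t): success_rate(200, n, t)
         for n in (1, 2, 3) for t in (0, 4, 8, 12)}
```

Plotting `rates` as a heatmap over (n, t) gives a small-scale analog of Panel A.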
Fig. 4. Hypothetical model of the experiment. Initially, we started with the binary HNN and, due to its limitations, we decided on the modern HNN (a two-layer model) as a suitable model to represent the associative chains of creativity. The first layer of the modern HNN comprises the neurons of the input, and the connections between the two layers are realized through the components of the patterns. The second layer contains the core part of the updating process, which may be regarded as the "new weights," denoted c_i. The new weights are a function of the correlation between the current state of the input x and the respective pattern i, later put through a normalization performed by the softmax function. We suggest using controls of these new weights (second-layer neurons) to implement the first and second knobs of creativity, symbolized as a semantic association.
threshold. Accordingly, all current HNN implementations used a threshold of 0.5.

We achieved a significantly higher recall rate and increased storage capacity using the modern HNN. We tested the network and obtained a perfect success rate even when storing 100 patterns. This first step is impressive compared to the simple HNN, whose capacity was 8 stored patterns. The new updating rule improves the model's performance by using pattern components instead of the weight matrix.
In modern HNNs, the 'new weights' function as neurons, each storing a single value from 0 to 1 (due to normalization) and dictating which idea is activated, similar to a 'decision' neuron. After successfully implementing the two HNN layers, our second aim was to establish the associative chains. The most crucial part of their construction was suppressing a pattern in this new framework. We deactivated the "decision" neuron responsible for the pattern we intend to inhibit, denoted c_i = 0, resembling the process of memory reconstruction. Thus, we retained the pattern's linkages but focused on the other concepts, creating associative chains that link previously unrelated concepts in a framework with a higher capacity.
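Under these assumptions, a minimal two-layer ("modern") update might look like the sketch below; zeroing a second-layer neuron implements the suppression just described. Whether the remaining weights are renormalized after suppression is our own choice here (we re-apply the softmax over the remaining patterns), and the 0.5 output threshold follows the text:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def modern_hnn_update(x, patterns, beta=1.0, suppress=None, t=0.5):
    # Layer 2: one "new weight" c_i per stored pattern, a softmax-normalized
    # function of the correlation between the input x and pattern i.
    z = beta * (patterns @ x).astype(float)
    if suppress is not None:
        z[suppress] = -np.inf   # knob: deactivate this "decision" neuron
    c = softmax(z)
    # Layer 1: project back through the pattern components and threshold.
    return (c @ patterns > t).astype(int)
```

Feeding a perturbed version of one stored pattern recalls that pattern; suppressing its decision neuron makes the network settle on the next-best-correlated pattern instead.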
The emphasis lies not on the normalization but on the substantial nonlinearity introduced through the softmax function. This nonlinear transformation of the c vector, almost a winner-takes-all rule, leads to a better separation of patterns and results in a larger storage capacity, as Krotov and Hopfield [36] suggested. Thus, we propose a second operation on top of this competition. The second knob of creativity had the task of effectively taking a certain pattern out of the coupling matrix W on demand and suppressing this particular association. In the binary HNN, we have to subtract the corresponding contribution. The evident question is: where would the necessary information be stored?

In the modern HNN, the information is available in the connections from layer 2 to layer 1, and the influence of pattern k is effectively diminished by suppressing the "new weight" c_k.

3.2.2. First knob of creativity

We used the threshold t as a parameter to twist this "first knob of creativity", regulating whether the network operates creatively and finds a solution (for intermediate thresholds), shuts down completely (for high thresholds), or enters a state of indecision by simultaneously activating neurons from different patterns (for low thresholds).

As an example, if we twist the knob to a threshold of 60 for n = 3 stored patterns, the network will act creatively and find a solution, but if we twist it to a threshold of 35, it will fire more neurons and not provide a distinct response. Likewise, twisted to a threshold of 70, the network will act creatively for n = 5 patterns and find a solution.

3.2.3. The second knob of creativity

For intermediate threshold values t and a limited number n of stored patterns, we can feed the HNN a perturbed version of a trained pattern, i.e., pattern 0, and consequently the HNN manages to
remove those perturbations until the fully reconstructed original pattern 0 remains. For lower threshold values, the removal is not fully successful; additional neurons can even join in, as lower thresholds make firing easier for them. Nevertheless, this is not a disadvantage but an opportunity to explore more complex processes.

The key observation is that the persistent perturbation components typically result from the overlap of pattern 0 with other (associated) patterns. These components require input to persist, which they obtain via the overlap with pattern 0. This mechanism remains intact even if we suppress the original pattern 0 by compensating for its contribution to the weight matrix. With intermediate thresholds, suppressing pattern 0 turns off the whole network; with low enough non-zero thresholds, overlapping patterns can still be completed even after suppressing pattern 0. For a threshold value of 15, patterns no. 1 and no. 2 build up from the overlap; for a threshold of 16.5, only pattern no. 1 manages to, as its overlap is slightly larger.

Consequently, suppression deletes the original pattern, but others overlapping with it may survive, depending on the threshold. This process requires thorough parameter tuning if only a single pattern is supposed to respond. However, it can be extended to whole association chains by sequentially deleting the original pattern, building up an associated pattern from the overlap regions, then deleting the new pattern and allowing the next to develop (Fig. 5).
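Schematically, the chain-building loop reads as follows (a deliberate abstraction: simple correlation matching stands in for the full HNN relaxation, and the toy patterns are our own):

```python
import numpy as np

def best_match(x, patterns, suppressed):
    # Correlation of x with every stored pattern; suppressed ones are excluded.
    scores = patterns @ x
    for k in suppressed:
        scores[k] = -1
    return int(np.argmax(scores))

def associative_chain(x, patterns, length):
    # Repeatedly find the most associated pattern, record it as a link,
    # suppress it together with all earlier links, and continue from it.
    chain, suppressed = [], set()
    for _ in range(length):
        k = best_match(x, patterns, suppressed)
        chain.append(k)
        suppressed.add(k)
        x = patterns[k]          # the new link becomes the next query
    return chain

# Three patterns that overlap pairwise, so a chain 0 -> 1 -> 2 can form.
patterns = np.zeros((3, 12), dtype=int)
patterns[0, 0:5] = 1
patterns[1, 3:8] = 1
patterns[2, 6:11] = 1

assert associative_chain(patterns[0], patterns, 3) == [0, 1, 2]
```

Each link is found only through its overlap with the previous one, mirroring how suppression forces the network away from the obvious recall.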
Feeding an input into the network, which iterates toward its most associated pattern, i.e., pattern no. 6, establishes the first associative connection. We then feed pattern no. 6 itself into the network while simultaneously inhibiting it, forcing the HNN to establish an association between pattern no. 6 and another pattern; consequently, the second link, with pattern no. 1, is generated. This new link is valid because pattern no. 6 and the initial input have correlated features; therefore, the essence of the input is not lost when finding the next association. To extend the associative chain, we supply pattern no. 1 as an input while suppressing both it and pattern no. 6, which guarantees the identification of the subsequent link, specifically with pattern no. 3.

4. Discussion and concluding remarks

We constructed a creativity-based semantic network conceptual framework using the HNN, validating the advantage of using the modern HNN rather than the binary HNN to establish this framework. Nevertheless, using the binary HNN in the first step was crucial, because it directed us toward the significance of the threshold after initial experiments testing several threshold values for different numbers of patterns. We defined the "first knob of creativity", which controls whether the network acts creatively (i.e., finds a new path) or completely shuts down.

The network's capacity (n = 8) was determined by how many patterns we could store while maintaining a perfect recall rate. At this stage, we defined how to suppress a pattern such that it still exists in the binary HNN model and keeps its connections; this guided us to take a theoretical step back and focus on finding another connection, thus establishing a "second knob of creativity" to assist the network in finding a solution other than the suppressed one.

Being constrained by the small capacity of the binary HNN, we transitioned to a modern HNN consisting of two layers. After redefining the suppression of a pattern, the successfully generated associations were tested in an implementation using more patterns.

The first and second knobs assisted us in creating associative links between previously unrelated concepts and in further developing them computationally into associative chains connecting several concepts; for a better understanding of the concept, see Fig. 6.

In conclusion, we successfully presented, for the first time, a neurocomputational framework model for creativity-based semantic associations, implemented using both a binary and a modern HNN. We benefit from retrieving the memorized items seamlessly instead of searching the whole network and comparing the input with all the patterns to find a solution; this could be an analog of creative thinking based on semantic associations [13,28,37,41], allowing a search on the lower levels of memories to find which concepts relate even on a micro-level.

The patterns and how they are stored and reconstructed align with previous research on the distribution, coarse-coding, and content-addressing of the memory process [13,14,21,43]. A potential interpretation of our findings is that, through pattern inhibition, neurons take a step back from the contextual focus and find alternatives associated with seemingly unrelated concepts, which aligns with previous studies on creativity-based semantic associations [12,18,36,43,44]. Therefore, an associative link is created when a cluster of neurons fires together while being part of both connected concepts; this allows finding a solution that would not be an obvious choice but solves the problem.

5. Future directions

Our preliminary results open numerous possibilities and directions for investigation. For instance, our findings are promising for a second step toward the mechanistic level, especially shifting neuron activation from discrete to continuous, which allows more notions based on shades of activation. By finding creative ways to connect concepts and determining how far a chain can reach without imposing a predetermined limit on the extent of these connections, one can imagine how the associative-links algorithm may be integrated into a more extensive neural network and provide a mechanism for problem-solving through association. The potential for providing a comprehensive creative framework based on semantic associations is fascinating, but several questions remain open, and we list a few.

One direction is to consider memristive HNN applications in
Fig. 5. Depiction of the input pattern exhibiting the highest correlation, indicating distinct associative chains represented via various links.
Fig. 6. An illustrative situation demonstrating the linking of concepts and the potential extent of an associative chain without imposing a preconceived restriction on these links. If we were to examine the input "rose" and the desired endpoint "honey," what would the associative chains be? Which strategies can be utilized to determine the connection between these entities? The rose is classified as a flower and is mainly associated with the concept of flowers; therefore, an initial connection is established with the category of flowers. In this context, the focus is directed towards the attributes of a flower, which may be considered the initial catalyst for the first knob of creativity. Pollen emerges as the subsequent element, representing the second facet of creative exploration (i.e., the second knob of creativity). Here, constructing associative chains and engaging in cognitive processes is required to identify additional associations with pollen while avoiding simplistic connections such as flowers, roses, or the pollen itself. Consequently, we discover a correlation between bees and their role in pollen collection. As the desired objective has yet to be achieved, further exploration is started to ascertain the factors bees associate with accessing honey, going beyond the bee, the flower, and pollen.
understanding creative thinking-based semantic association at a mechanistic level. In a recent review by Lin et al. [39], numerous noteworthy outcomes of memristive HNN-based chaotic systems were emphasized. The authors introduced the fundamental concepts of HNNs, memristors, and chaotic dynamics; discussed the categorization of memristive HNNs into four models based on their memristor functions (autapses, synapses, electromagnetic radiation, and electromagnetic induction); and presented the modeling mechanism, modeling method, and pioneering works of each type (for a review, see Table 1 of [39]). Considering the mechanistic level of memristive HNN models, a memristor-coupled asymmetric neural network (MANN) model was examined in a study by Lin et al. [38]. This study suggested that the MANN can exhibit hyper-chaos with an infinitely wide range, hyper-chaotic initial-boosted behavior, and many multi-structure attractors. Memristive HNN-based chaotic systems and their chaotic dynamics are still under development and need more investigation; however, such progress will benefit future research on creativity-based semantic associations.
Another direction is to investigate the patterns representative of real-
life concepts (i.e., create a dataset for and define a set of multiple fea
tures that should be defined for each of them (e.g., human/object,
weight, height, color, etc.)) as a next step, daily creativity [49,50].
Although it is challenging from an empirical perspective, it is feasible to 3
AI research can be classified into two primary categories: the emulation
develop an algorithm that takes a concept as input, and by having the category and the application category. The first seeks to understand human
participants answer questions about its features, we can collect the data perception, cognition, and motor abilities (e.g., humanoid robots), and the
and transform it into a pattern. A feature can be conceptualized as a second aims to use AI methods to model and create large-scale products (e.g.,
teleoperated devices) [26, 54].
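The pipeline proposed above (fix a feature questionnaire, collect yes/no answers per concept, and transform them into patterns an HNN can store and retrieve) can be illustrated with a minimal sketch. The feature names, the two toy concepts, and their answers below are purely hypothetical placeholders, not data from the study; the network itself is a standard Hebbian Hopfield model with bipolar states:

```python
import numpy as np

# Hypothetical feature questionnaire: every concept is described by yes/no
# answers to the same fixed set of feature questions.
FEATURES = ["is_living", "is_heavy", "is_tall", "is_red", "is_manmade"]

def concept_to_pattern(answers):
    """Map yes/no feature answers to a bipolar (+1/-1) pattern."""
    return np.array([1 if answers[f] else -1 for f in FEATURES])

def hebbian_weights(patterns):
    """Store patterns with the Hebbian rule; zero diagonal (no self-coupling)."""
    n = patterns[0].size
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / len(patterns)

def recall(w, probe, steps=10):
    """Synchronous updates until the state stops changing."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.where(w @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Two toy concepts described by (illustrative) participant answers.
tree = concept_to_pattern({"is_living": True, "is_heavy": True,
                           "is_tall": True, "is_red": False, "is_manmade": False})
car = concept_to_pattern({"is_living": False, "is_heavy": True,
                          "is_tall": False, "is_red": True, "is_manmade": True})
W = hebbian_weights([tree, car])

# A probe with one flipped feature settles back onto the stored "tree" pattern.
noisy = tree.copy()
noisy[3] *= -1
print(recall(W, noisy))  # → [ 1  1  1 -1 -1]
```

In this sketch, suppressing a feature would amount to clamping or masking the corresponding units during retrieval, which is one plausible reading of how specific features could be suppressed in a larger-scale version of such a network.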
D. Checiu et al. Neurocomputing 575 (2024) 127324
CRediT authorship contribution statement

Checiu Denisa: Data curation, Methodology, Writing – original draft. Bode Mathias: Conceptualization, Investigation, Methodology, Supervision, Validation. Khalil Radwa: Conceptualization, Methodology, Supervision, Visualization, Writing – original draft, Writing – review & editing.

Declaration of Competing Interest

The authors declare that they do not have competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data Availability

Data will be made available on request.

Appendix A. Supporting information

Supplementary data associated with this article can be found in the online version at doi:10.1016/j.neucom.2024.127324.

References

[1] M. Abdullah, A. Madain, Y. Jararweh, ChatGPT: fundamentals, applications and social impacts, 2022 Ninth Int. Conf. Soc. Netw. Anal. Manag. Secur. (SNAMS) (2022) 1–8, https://doi.org/10.1109/SNAMS58071.2022.10062688.
[2] S. Abe, Theories on the Hopfield neural networks, Int. Jt. Conf. Neural Netw., vol. 1 (1989) 557–564, https://doi.org/10.1109/IJCNN.1989.118633.
[3] S. Abe, Global convergence and suppression of spurious states of the Hopfield neural networks, IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 40 (4) (1993) 246–257, https://doi.org/10.1109/81.224297.
[4] S. Abe, A.H. Gee, Global convergence of the Hopfield neural network with non-zero diagonal elements, IEEE Trans. Circuits Syst. II: Analog Digit. Signal Process. 42 (1) (1995) 39–45, https://doi.org/10.1109/82.363543.
[5] A. Abraham, The promises and perils of the neuroscience of creativity, Front. Hum. Neurosci. 7 (2013), https://doi.org/10.3389/fnhum.2013.00246.
[6] A. Abraham, The Neuroscience of Creativity, Cambridge University Press, 2018, https://doi.org/10.1017/9781316816981.
[7] S.-I. Amari, K. Maginu, Statistical neurodynamics of associative memory, Neural Netw. 1 (1) (1988) 63–73, https://doi.org/10.1016/0893-6080(88)90022-6.
[8] D.J. Amit, The Hebbian paradigm reintegrated: local reverberations as internal representations, Behav. Brain Sci. 18 (4) (1995) 617–626, https://doi.org/10.1017/S0140525X00040164.
[9] A. Anishchenko, A. Treves, Autoassociative memory retrieval and spontaneous activity bumps in small-world networks of integrate-and-fire neurons, J. Physiol.-Paris 100 (4) (2006) 225–236, https://doi.org/10.1016/j.jphysparis.2007.01.004.
[10] M. Atencia, G. Joya, F. Sandoval, Hopfield neural networks for parametric identification of dynamical systems, Neural Process. Lett. 21 (2) (2005) 143–152, https://doi.org/10.1007/s11063-004-3424-3.
[11] Y. Aviel, D. Horn, M. Abeles, The doubly balanced network of spiking neurons: a memory model with high capacity, Adv. Neural Inf. Process. Syst. (2004).
[12] R.E. Beaty, Y.N. Kenett, Associative thinking at the core of creativity, Trends Cogn. Sci. 27 (7) (2023) 671–683, https://doi.org/10.1016/j.tics.2023.04.004.
[13] R.E. Beaty, Y.N. Kenett, R.W. Hass, D.L. Schacter, Semantic memory and creativity: the costs and benefits of semantic memory structure in generating original ideas, Think. Reason. 29 (2) (2023) 305–339, https://doi.org/10.1080/13546783.2022.2076742.
[14] M. Benedek, R.E. Beaty, D.L. Schacter, Y.N. Kenett, The role of memory in creative ideation, Nat. Rev. Psychol. 2 (4) (2023) 246–257, https://doi.org/10.1038/s44159-023-00158-z.
[15] N. Brunel, Hebbian learning of context in recurrent neural networks, Neural Comput. 8 (8) (1996) 1677–1710, https://doi.org/10.1162/neco.1996.8.8.1677.
[16] G.E. Corazza, Potential originality and effectiveness: the dynamic definition of creativity, Creat. Res. J. 28 (3) (2016) 258–267, https://doi.org/10.1080/10400419.2016.1195627.
[17] A.P. Davison, PyNN: a common interface for neuronal network simulators, Front. Neuroinformatics 2 (2008), https://doi.org/10.3389/neuro.11.011.2008.
[18] L. Gabora, Revenge of the "Neurds": characterizing creative thought in terms of the structure and dynamics of memory, Creat. Res. J. 22 (1) (2010) 1–13, https://doi.org/10.1080/10400410903579494.
[19] H. Gardner, Creating Minds: An Anatomy of Creativity as Seen Through the Lives of Freud, Einstein, Picasso, Stravinsky, Eliot, Graham, and Gandhi (1994), https://doi.org/10.1002/j.2162-6057.1996.tb00770.x.
[20] B. Gaut, The philosophy of creativity, Philos. Compass 5 (12) (2010) 1034–1046, https://doi.org/10.1111/j.1747-9991.2010.00351.x.
[21] C.R. Gerver, J.W. Griffin, N.A. Dennis, R.E. Beaty, Memory and creativity: a meta-analytic examination of the relationship between memory systems and creative cognition, Psychon. Bull. Rev. (2023), https://doi.org/10.3758/s13423-023-02303-4.
[22] J. Han, H. Forbes, D. Schaefer, An exploration of how creativity, functionality, and aesthetics are related in design, Res. Eng. Des. 32 (3) (2021) 289–307, https://doi.org/10.1007/s00163-021-00366-9.
[23] N. Herrmann, The creative brain, Train. Dev. 35 (10) (1981) 10–16.
[24] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. 79 (8) (1982) 2554–2558, https://doi.org/10.1073/pnas.79.8.2554.
[25] G. Joya, M.A. Atencia, F. Sandoval, Hopfield neural networks for optimization: study of the different dynamics, Neurocomputing 43 (1–4) (2002) 219–237, https://doi.org/10.1016/S0925-2312(01)00337-X.
[26] A. Kappas, J. Gratch, These aren't the droids you are looking for: promises and challenges for the intersection of affective science and robotics/AI, Affect. Sci. (2023), https://doi.org/10.1007/s42761-023-00211-3.
[27] J.C. Kaufman, R.A. Beghetto, Beyond big and little: the four C model of creativity, Rev. Gen. Psychol. 13 (1) (2009) 1–12, https://doi.org/10.1037/a0013688.
[28] Y.N. Kenett, R.E. Beaty, On semantic structures and processes in creative thinking, Trends Cogn. Sci. (2023), https://doi.org/10.1016/j.tics.2023.07.011.
[29] R. Khalil, V. Demarin, Creative therapy in health and disease: inner vision, CNS Neurosci. Ther. (2023), https://doi.org/10.1111/cns.14266.
[30] R. Khalil, A.A. Moustafa, A neurocomputational model of creative processes, Neurosci. Biobehav. Rev. 137 (2022) 104656, https://doi.org/10.1016/j.neubiorev.2022.104656.
[31] M. Kobayashi, Complex-valued Hopfield neural networks with real weights in synchronous mode, Neurocomputing 423 (2021) 535–540, https://doi.org/10.1016/j.neucom.2020.10.072.
[32] P. Koiran, Dynamics of discrete time, continuous state Hopfield networks, Neural Comput. 6 (3) (1994) 459–468, https://doi.org/10.1162/neco.1994.6.3.459.
[33] S. Köksal, S. Sivasundaram, Stability properties of the Hopfield-type neural networks, Dyn. Stab. Syst. 8 (3) (1993) 181–187, https://doi.org/10.1080/02681119308806157.
[34] D. Krotov, A new frontier for Hopfield networks, Nat. Rev. Phys. 5 (2023) 366–367, https://doi.org/10.1038/s42254-023-00595-y.
[35] D. Krotov, J. Hopfield, Large associative memory problem in neurobiology and machine learning (2021), http://arxiv.org/abs/2008.06996.
[36] D. Krotov, J.J. Hopfield, Dense associative memory for pattern recognition, Adv. Neural Inf. Process. Syst. (2016) 1180–1188.
[37] Y. Li, Y.N. Kenett, W. Hu, R.E. Beaty, Flexible semantic network structure supports the production of creative metaphor, Creat. Res. J. 33 (3) (2021) 209–223, https://doi.org/10.1080/10400419.2021.1879508.
[38] H. Lin, C. Wang, J. Sun, X. Zhang, Y. Sun, H.H.C. Iu, Memristor-coupled asymmetric neural networks: bionic modeling, chaotic dynamics analysis and encryption application, Chaos Solitons Fractals 166 (2023), https://doi.org/10.1016/j.chaos.2022.112905.
[39] H. Lin, C. Wang, F. Yu, J. Sun, S. Du, Z. Deng, Q. Deng, A review of chaotic systems based on memristive Hopfield neural networks, Mathematics 11 (2023), https://doi.org/10.3390/math11061369.
[40] L.B. Litinskii, Hopfield model with a dynamic threshold, Theor. Math. Phys. 130 (1) (2001) 136–151, https://doi.org/10.1023/A:1013840801300.
[41] S. Luchini, Y.N. Kenett, D.C. Zeitlen, A.P. Christensen, D.M. Ellis, G.A. Brewer, R.E. Beaty, Convergent thinking and insight problem solving relate to semantic memory network structure, Think. Skills Creat. 48 (2023) 101277, https://doi.org/10.1016/j.tsc.2023.101277.
[42] S. Mastria, S. Agnoli, G.E. Corazza, M. Grassi, L. Franchin, What inspires us? An experimental analysis of the semantic meaning of irrelevant information in creative ideation, Think. Reason. (2022) 1–28, https://doi.org/10.1080/13546783.2022.2132289.
[43] R. McEliece, E. Posner, E. Rodemich, S. Venkatesh, The capacity of the Hopfield associative memory, IEEE Trans. Inf. Theory 33 (4) (1987) 461–482, https://doi.org/10.1109/TIT.1987.1057328.
[44] S. Mednick, The associative basis of the creative process, Psychol. Rev. 69 (3) (1962) 220–232, https://doi.org/10.1037/h0048850.
[45] G.A. Mendelsohn, Associative and attentional processes in creative performance, J. Personal. 44 (2) (1976) 341–369, https://doi.org/10.1111/j.1467-6494.1976.tb00127.x.
[46] A.N. Michel, J.A. Farrell, H.-F. Sun, Analysis and synthesis techniques for Hopfield-type synchronous discrete time neural networks with application to associative memory, IEEE Trans. Circuits Syst. 37 (11) (1990) 1356–1366, https://doi.org/10.1109/31.62410.
[47] R.G. Morris, D.O. Hebb: The Organization of Behavior, Wiley, New York, 1949, Brain Res. Bull. 50 (1999) 437, https://doi.org/10.1016/S0361-9230(99)00182-3.
[48] H. Ramsauer, B. Schäfl, J. Lehner, P. Seidl, M. Widrich, T. Adler, L. Gruber, M. Holzleitner, D. Kreil, M. Kopp, G. Klambauer, J. Brandstetter, S. Hochreiter, Hopfield networks is all you need, ICLR 2021 – 9th Int. Conf. Learn. Represent. (2021).
[49] R. Richards, Everyday creativity, eminent creativity, and psychopathology, Psychol. Inq. 4 (3) (1993) 212–217, https://doi.org/10.1207/s15327965pli0403_12.
[50] M.A. Runco, M.D. Bahleda, Implicit theories of artistic, scientific, and everyday creativity, J. Creat. Behav. 20 (2) (1986) 93–98, https://doi.org/10.1002/j.2162-6057.1986.tb00423.x.
[51] M.A. Runco, G.J. Jaeger, The standard definition of creativity, Creat. Res. J. 24 (1) (2012) 92–96, https://doi.org/10.1080/10400419.2012.650092.
[52] R.K. Sawyer, Explaining Creativity: The Science of Human Innovation, Creat. Conscious.: Philos. (2006), https://doi.org/10.1016/0140-1750(88)90050-4.
[53] R.K. Sawyer, The cognitive neuroscience of creativity: a critical review, Creat. Res. J. 23 (2) (2011) 137–154, https://doi.org/10.1080/10400419.2011.571191.
[54] B. Shneiderman, Design lessons from AI's two grand goals: human emulation and useful applications, IEEE Trans. Technol. Soc. 1 (2) (2020) 73–82, https://doi.org/10.1109/TTS.2020.2992669.
[55] J. Šíma, P. Orponen, T. Antti-Poika, On the computational complexity of binary and analog symmetric Hopfield nets, Neural Comput. 12 (12) (2000) 2965–2989, https://doi.org/10.1162/089976600300014791.
[56] J. Squalli, K. Wilson, Intelligence, creativity, and innovation, Intelligence 46 (1) (2014), https://doi.org/10.1016/j.intell.2014.07.005.
[57] T.Z. Tardif, R.J. Sternberg, What do we know about creativity? Nat. Creat.: Contemp. Psychol. Perspect. (1988).
[58] E.P. Torrance, The Torrance Tests of Creative Thinking: Norms-Technical Manual, Research Edition, Verbal Tests Forms A and B, Figural Tests Forms A and B, Personnel Press, Princeton, NJ, 1974.
[59] M. Virvou, The emerging era of human-AI interaction: keynote address, 2022 13th Int. Conf. Inf., Intell., Syst. Appl. (IISA) (2022) 1–10, https://doi.org/10.1109/IISA56318.2022.9904422.
[60] D. Wang, L. Feng, J. Ye, J. Zou, Y. Zheng, Accelerating the integration of ChatGPT and other large-scale AI models into biomedical research and healthcare, MedComm – Future Med. 2 (2) (2023), https://doi.org/10.1002/mef2.43.
[61] Wang, I.-H. Li, W.-M. Wang, S.-F. Su, N.-J. Wang, A new convergence condition for discrete-time nonlinear system identification using a Hopfield neural network, IEEE Int. Conf. Syst., Man Cybern. 1 (2005) 685–689, https://doi.org/10.1109/ICSMC.2005.1571226.
[62] Wang, Z. Tang, Q.P. Cao, An efficient approximation algorithm for finding a maximum clique using Hopfield network learning, Neural Comput. 15 (7) (2003) 1605–1619, https://doi.org/10.1162/089976603321891828.
[63] G. Weisbuch, F. Fogelman-Soulie, Scaling laws for the attractors of Hopfield networks, J. Phys. Lett. 46 (14) (1985) 623–630, https://doi.org/10.1051/jphyslet:019850046014062300.
[64] R.C. Wilson, J.P. Guilford, P.R. Christensen, D.J. Lewis, A factor-analytic study of creative-thinking abilities, Psychometrika 19 (4) (1954) 297–311, https://doi.org/10.1007/BF02289230.
[65] X. Zhang, X. Mao, Y. Yin, C. Chai, T. Zhang, Melting your models: an integrated AI-based creativity support tool for inspiration evolution, 2022 15th Int. Symp. Comput. Intell. Des. (ISCID) (2022) 97–101, https://doi.org/10.1109/ISCID56505.2022.00029.

Denisa Checiu received her BSc degree in Robotics and Intelligent Systems with a minor in Computer Science from Constructor University, Bremen, Germany. LinkedIn profile: https://www.linkedin.com/in/denisa-checiu-georgiana/.

Dr. Mathias Bode received his Dr. rer. nat. in Physics from the University of Münster, Germany, in 1992 and his habilitation in 1998. He is a distinguished lecturer at Constructor University, Bremen, Germany. His research interests are pattern generation, reaction-diffusion systems, dynamical systems, signal processing, and artificial neural networks.

Dr. Radwa Khalil is a neuroscientist and lecturer with expertise in multiple disciplines, including psychology and neuroscience, employing comprehensive methods that combine experimental and computational approaches. An area of particular interest is investigating how creativity is perceived and the neural mechanisms that influence it, such as cognition, emotion, and their interaction. Further information at https://neuroscience.pub/.