An Innovative Hashing Scheme and BiLSTM-based Dynamic Resume Ranking System
Case Study of HM Application: In this case study, d1 and d2 are two English sentences. These sentences are experimental and are not real job descriptions; they have been carefully chosen to illustrate the HM module's operations.

• d1: Software Engineering Salary
• d2: Software Engineering with excellent salary and excellent benefits.

The RMS-Prop optimizer calculates a running average of the squared gradients for each weight and then divides the gradient by the square root of this mean square. First, the running average v(w, t) is calculated using equation 11 [43].

v(w, t) = γ v(w, t − 1) + (1 − γ)(δQ_i(w))^2 (11)

After that, v(w, t) is used to update the weight using equation 12.

w_t = w_{t−1} − (η / √v(w, t)) δQ_i(w) (12)

After obtaining the experimental results from the Adagrad and RMS-Prop algorithms, we experimented with the network using the Adaptive Moment Estimation (ADAM) algorithm. The adaptive momentum used to update the weights is calculated using equation 13 [44].

m_t = β_1 m_{t−1} + (1 − β_1)[δL/δw_t] (13)

The network architecture details are listed in table II. The embedding layer, which receives the signal from the HM module, processes the input with 12,000 learning parameters. The bidirectional layer contains 194,460 parameters, and the dense layer consists of 564 parameters. Together, the network has 207,024 learning parameters. The proposed network uses a Log-Loss function, defined by equation 8 [41], to measure the loss during the learning process.

R_s = Σ_{i=1} s[i] (14)

In equation 14, s is the skill set specified by the HR hiring managers. The skill set of a particular candidate is a subset of R_s, as defined by equation 15.

s ∈ R_s (15)

The comparison between s and R_s is made through the probability score obtained from the processing unit: the higher the probability, the higher the resume's rank for a particular job circular.
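As a concrete illustration, the RMS-Prop update (equations 11 and 12) and the Adam first-moment estimate (equation 13) can be sketched as below. The toy quadratic loss, learning rate, and decay values are illustrative assumptions, not the paper's actual training settings.

```python
import numpy as np

def rmsprop_step(w, grad, v, gamma=0.9, eta=0.01, eps=1e-8):
    """One RMS-Prop step: running mean of squared gradients (eq. 11),
    then a weight update scaled by its square root (eq. 12)."""
    v = gamma * v + (1 - gamma) * grad ** 2       # eq. 11
    w = w - eta / (np.sqrt(v) + eps) * grad       # eq. 12 (eps avoids /0)
    return w, v

def adam_moment(m, grad, beta1=0.9):
    """Adam's first-moment (adaptive momentum) estimate (eq. 13)."""
    return beta1 * m + (1 - beta1) * grad         # eq. 13

# Toy loss L(w) = w^2, so dL/dw = 2w (illustrative only).
w, v = 5.0, 0.0
for _ in range(100):
    w, v = rmsprop_step(w, 2 * w, v)
```

On this toy loss the weight moves steadily toward the minimum at w = 0; the eps term is a standard numerical-stability guard that the paper's equations omit.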
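The paper reports that the network is trained with a Log-Loss objective (equation 8, cited from [41] but not reproduced in this excerpt). A minimal sketch of the standard binary Log-Loss, which is what that name usually denotes, is:

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Textbook binary Log-Loss (cross-entropy); assumed to match the
    form the paper cites as equation 8."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)   # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

print(round(log_loss([1, 0, 1], [0.9, 0.1, 0.8]), 4))  # → 0.1446
```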
Figure 5: (a) R^2 on 7000 instances, (b) RMSE on 500 instances, and (c) MAE on 500 instances
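The subset comparison and probability-based ranking of equations 14 and 15 can be sketched as follows. The required skill set, the candidate records, and the probability scores standing in for the processing unit's output are all illustrative assumptions, not data from the paper.

```python
# R_s: skill set specified by the HR hiring managers (eq. 14, assumed values).
required = {"python", "sql", "machine learning"}

# Candidate skill sets with probability scores standing in for the
# BiLSTM processing-unit output (all hypothetical).
candidates = [
    {"name": "A", "skills": {"python", "sql"}, "prob": 0.91},
    {"name": "B", "skills": {"python", "java"}, "prob": 0.88},  # java ∉ R_s
    {"name": "C", "skills": {"sql", "machine learning"}, "prob": 0.75},
]

# eq. 15: keep only candidates whose skill set is a subset of R_s.
eligible = [c for c in candidates if c["skills"] <= required]

# Higher probability → higher resume rank for the job circular.
ranking = sorted(eligible, key=lambda c: c["prob"], reverse=True)
print([c["name"] for c in ranking])  # → ['A', 'C']
```

Candidate B is filtered out before ranking because one of their skills falls outside the HR-specified set.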