Measuring and Improving Agile Processes in a Small-Size Software Development Company
ABSTRACT Context: Agile software development has become commonplace in software development
companies due to the numerous benefits it provides. However, conducting Agile projects is demanding
in Small and Medium Enterprises (SMEs), because projects start and end quickly, but still have to fulfil
customers’ quality requirements. Objective: This paper aims at reporting a practical experience on the use
of metrics related to the software development process as a means of supporting SMEs in the development of
software following an Agile methodology. Method: We followed Action-Research principles in a Polish
small-size software development company. We developed and executed a study protocol suited to the needs
of the company, using a pilot case. Results: A catalogue of Agile development process metrics practically
validated in the context of a small-size software development company, adopted by the company in their
Agile projects. Conclusions: Practitioners may adopt these metrics in their Agile projects, especially if
working in an SME, and customise them to their own needs and tools. Academics may use the findings as a
baseline for new research work, including new empirical studies.
INDEX TERMS Agile software development, process metrics, software engineering, software quality, rapid
software development, SMEs.
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
VOLUME 8, 2020
M. Choraś et al.: Measuring and Improving Agile Processes in a Small-Size Software Development Company
metrics are aggregated into quality factors related to development and usage. Finally, the quality factors are aggregated into high-level indicators named strategic indicators, which can be aligned to the strategic goals of the organisation [27]. The assessment data processed by the Data Modelling and Analysis module is visualised by the dashboards included in the Strategic Decision Making module. The data is presented in aggregated form to the end-user through the web-based GUI, also named the Strategic Dashboard. The Strategic Dashboard also includes links to customised dashboards that can be developed to visualise dedicated charts, including the data ingested from the data producers' tools. This dashboard allows the user to display the data calculated for the current stage of the project, as well as the evolution of the metrics, factors and strategic indicators over time. Another configurable property of the data visualisation is the possibility to adjust the grid of the time-based charts to the current needs and to present the evolution of data with a granularity from days up to months. The dashboard also allows navigating through the different elements, which provides traceability and reinforces the understanding of the assessment.

Other software analytics tools similar to Q-Rapids have recently emerged in the software engineering landscape. Some of them are domain-dependent, e.g. the European Cooperation for Space Standardization (ECSS) metric framework to improve transparency of software development in customer-supplier relationships of space missions [35]. Also, some commercial tools are available on the market with characteristics similar to the Q-Rapids tool. For instance, Squore (https://www.vector.com/int/en/products/products-a-z/software/squore/) provides a dashboard similar to Q-Rapids' and includes several software metrics and indicators measuring software quality, although they are not comparable with the set of metrics analysed in this paper.

III. RELATED WORK
There is a long history of research on metrics programs (MPs) [24], and plenty of literature recommending success factors for their implementation [17], [28], [34], [44]. However, literature on SMEs using MPs in the context of Agile is rather scarce. Moreover, the literature on measuring software processes and their role in improving SME processes is even scarcer. Measuring software processes with the use of process metrics enables an objective and quantitative evaluation of software processes, which can lead to continuous improvement and learning [34], [42]. However, measuring process metrics is a challenge [43]. Software processes are inherently complex and intangible, which makes their measurement more difficult than that of their product counterparts [19], [41]. Ideally, measurement activities should consume little effort and time, while being adequate enough to meet an organization's measurement demands. Software organizations need to weigh in cost-efficiency while prioritizing measurement objectives and targets. SMEs have the added constraints of limited budget, ambitious deadlines, and short-term strategy [45]. For these reasons, measuring software processes, especially in an SME, becomes an even bigger challenge.

Kupiainen et al. [22] conducted a systematic review of the use and impact of software metrics in ASD in industry. The authors reported that software metrics are mainly used for sprint planning, tracking progress, improving software quality, fixing the software process, and motivating people. They also reported that metrics like velocity, effort estimation, customer satisfaction, defect count, technical debt and build are used prominently in ASD. In their systematic review, Tahir et al. [46] observed that metrics for defects, effort, size, duration, productivity, employee commitment, and customer satisfaction are commonly reported in the state of the art. These findings complement another review by Gómez et al. [15], where complexity and size were found to be the most measured attributes in MPs. Other usages of metrics in ASD discussed in the literature are planning and tracking software development [22], understanding development performance and product quality [48], measuring process quality [50], estimating effort [47], and reporting progress and quality to stakeholders not involved in the actual development [4]. Taken together, metrics targeting sprint planning, fixing the software process, effort estimation, development performance, and software defects can be used to measure an organization's process performance. However, this objective is not expressly stated in any of the reviews mentioned above. On the contrary, [22] remarked that more studies are needed to explore the rationale behind utilizing the metrics the authors found in their review.

Most of the studies present initial, emerging results of MP implementation in organizations, which have not been evaluated within a larger industrial context. One of the exceptions is the study by Dubinsky et al. [11] reporting on the experience of using an MP at an extreme programming (XP) development team of the Israeli Air Force. The authors found that using metrics to measure the amount, quality, pace, and work status could lead to more accurate and professional decision-making. A similar study by Díaz-Ley et al. [9] proposed a measurement framework customized for SMEs. One key benefit the authors reported was better measurement goals that align with the company's maturity. Specific to process metrics, most studies focus on using process metrics mainly to predict software faults/defects [18], [29], [40]. However, the role of process metrics in improving an organization's overall process performance, especially in the context of SMEs and ASD, is missing from these studies.

There have been studies evaluating MPs in SMEs, but their scope has been limited to a particular region, which makes it difficult to generalize their findings. For example, with the goal of evaluating MPs in the Pakistani software industry, Tahir et al. [46] conducted a systematic mapping study combined with a survey among 200 practitioners to highlight the state of measurement practices. Forty-two percent of the organizations that responded to the survey were SMEs. Overall, SMEs fared poorer than their larger counterparts. For instance, SMEs have the lowest share among
organizations that have any defined measurement process, measurement standards, and usage of measurement models and tools. Furthermore, 65% of SMEs tend to use MPs primarily at the project level, and only 13% of SMEs implement them across the organization. One of the positive findings, with respect to measuring software processes, was that 70% of the SMEs reported to either focus on measuring process or on a combination of process and the other two entities. However, the corresponding primary studies were unclear on the context in which the process measurement was undertaken, and the focus on process metrics for process improvement was missing. For example, the study by Díaz-Ley et al. [9] reported the experiences of a Spanish SME in implementing an MP and reported that the practitioners could now objectively evaluate the trade-off between on-time releases and software product reliability. Tosun et al. [49] collaborated with a Turkish healthcare SME to institutionalize process improvement practices, and reported improvements in the organization's time allocation for the requirements, coding, and testing steps. Furthermore, the authors found that defect rates as well as testing effort estimation decreased. One of the more interesting approaches for improving an SME's process was documented in an experimental study by Caballero et al. [5]. The authors introduced Scrum to enhance process productivity without impacting product quality at a very small enterprise. The authors claim that Scrum can prove to be a good alternative for process improvement in an organization with very limited resources, which has been a long-time concern in implementing MPs in SMEs. It is evident from these studies that the evaluations of MPs in SMEs are concerned mainly with overall software process improvement, where the role of process metrics towards this objective is either implied or absent altogether.

As per the state of the art, there is extensive reliance on measurement experts and experience [46], and organizations tend to prefer employee perception to objective measurement processes for process improvements [32]. In contrast, our study provides empirical evidence of using process metrics for improving process performance, and even facilitating decision-making. The empirical validation is an especially distinguishing aspect of our study, as it has been identified as a research gap in [22]. Furthermore, it should be noted that the Q-Rapids solution, embodying the MP, integrates basic features like automatic data collection, support for diverse data sources, expert-based metrics elicitation, and visualisation; something that is absent from the MPs reported in the literature.

IV. RESEARCH METHODOLOGY
A. CONTEXT
ITTI is a software development and consulting company based in Poznan, Poland. ITTI currently has about 70-90 employees. ITTI delivers software products in a number of application domains (e.g. administration, utilities, e-Health, and crisis management). In this paper, we report an Action-Research study focused on one particular ITTI software product, named and branded CONTRA. CONTRA is an enterprise-class integrated software system for Warehouse (WMS) and Manufacturing (MES) management, deployed in the form of a web application.

ITTI applies Scrum in its software development projects, including CONTRA. Typically, from 7 up to 10 developers work daily on specific deployments or on new features to improve the product. The Scrum team holds weekly sprint meetings on the last day of the sprint. Each Scrum Team meeting consists of the following parts: review, retrospective and planning for the next sprint.

B. ACTION-RESEARCH APPROACH
To conduct this research, we applied an Action-Research cycle: diagnosis, action planning and design, action taking, evaluation, and specifying learning [31], [38]. ITTI participants in the Q-Rapids project played a double role as researchers and project champions in the company.

The action started in September 2018 with the diagnosis of the industry needs, in the form of the process improvements that ITTI wanted to address. This originated our research goal and research questions (documented in Section IV-C).

As a participant in the H2020 Q-Rapids project, in order to tackle these improvements, ITTI decided to customise the Q-Rapids approach and tool with extended process metrics. In October 2018, both the team of researchers and the CONTRA Scrum team decided which company repositories would be used and how to make these data actionable with the outcomes of the Q-Rapids project (action planning and design).

Next, the same joint researchers-practitioners team elicited 25 candidate process metrics fed with the selected company data. The resulting process metrics were used by the CONTRA Scrum Team during their meetings to reflect on the process performance and the quality of the product, as well as to estimate tasks (action taking). This action took place from November 2018 to May 2019.

The process metrics were evaluated in a retrospective session with the Scrum team of CONTRA in June 2019 (evaluating).

Finally, the team learned which subset of the process metrics is most effective for the diagnosed problems (specifying learning). From July 2019 to the present, this subset of process metrics has been used in other projects at ITTI.

C. GOAL AND RESEARCH QUESTIONS
In the regular meetings reported in Section IV-A, ITTI Scrum Teams diagnosed the need to: (a) monitor the process performance of the team, (b) keep a stable product quality level while adding new features, and (c) improve the estimation of tasks during sprints.

Bearing in mind these industrial needs, ITTI considered the customisation of process metrics, a concept already implemented in the Q-Rapids project, and applied them to CONTRA, serving as the pilot case. Following the Goal-Question-Metric (GQM) approach [2], we can define the resulting
research goal of this study as: Analyze process metrics with the purpose of evaluating them with respect to monitoring and estimating the Agile process performance, from the viewpoint of the Scrum team, in the context of an SME.

We break this generic research goal into three research questions, aligned with the needs anticipated above:
• RQ1. Do process metrics help the Scrum team of an SME to monitor their own process performance?
• RQ2. Do process metrics support the Scrum team of an SME in keeping a stable product quality level while adding new features?
• RQ3. Do process metrics help the Scrum team of an SME to improve the estimation of tasks during sprints?

D. INSTRUMENTATION
In the action planning and design phase of the Action-Research cycle, we decided to use GitLab as the data source for the pilot. GitLab is an open-source, web-based tool that supports the full software development life cycle, with a focus on repository management and issue tracking, among other capabilities. GitLab is used extensively in all ITTI projects and, in particular, the CONTRA Scrum Team affirmed that it is the tool that may best reflect the process followed by the team during development.

ITTI gathered data from GitLab covering about 12 months of the CONTRA development history, so that the process metrics could be assessed over a long life span. During this time, a total of 31 unique assignees opened up to 2,975 issues, of which they closed up to 2,651, and 40,947 events occurred describing the changes of status of tasks or issues.

Table 1 includes the data collected for each issue from GitLab. This data is stored in a dedicated index in the ElasticSearch engine. From this data, during the studied period, the Q-Rapids tool provided a total of 1,830 metrics, 1,098 factors, and 732 strategic indicators assessment data points.

In order to evaluate the usefulness of the process metrics defined in CONTRA, the Scrum Team proceeded as follows:
• They implemented the connectors to GitLab that allowed to effectively gather the data to start the measurement process (see Section II-B).
• Given the existence of two strategic indicators provided by the Q-Rapids project, related to the first two research questions (Process Performance and Product Quality), they used the Q-Rapids dashboard for the process metrics related to these questions.
• For the third research question, they preferred to deploy Kibana dashboards in order to assess task estimation. In order to provide an integrated solution, these Kibana dashboards were integrated into the Q-Rapids dashboard.

Figure 2 illustrates the Q-Rapids dashboard by showing some examples of historical data views. In these charts we can see that Density of tickets pending testing and Density of bugs have been stable at low and high quality values, respectively, in the period of fifteen days. On the other hand, in the same period, there is a clear improvement in the Average resolved issues' throughput (last 7 days) and some improvement in Density of estimated development tickets (last 7 days).

Figure 3 shows an example of the use of Kibana dashboards. In this case, the presented view aggregates several metrics related to the average elapsed time for tasks (according to their state).

V. RESULTS
A. PROCESS METRICS DEFINITION
In the action taking phase of the Action-Research cycle, the ITTI CONTRA Scrum Team and the research team discussed and analysed what type of GitLab-based process metrics they considered candidates for assessing the Agile development processes at CONTRA. These metrics were implemented in the CONTRA case in order to understand their significance.

The CONTRA Scrum Team provided relevant observations that drove the design of the candidate set of metrics:
• The three main concepts that they use in their daily practices to monitor progress are: Task, Issue/Bug and Effort.
• The state transitions among development tasks (opened → completed → closed) are particularly important in analysing progress.
• The effort is particularly interesting in relation to its estimation, because resource planning in the team (e.g., developer allocation) greatly depends on its accuracy.
• The concepts above may be analysed mainly from two perspectives: numerical (e.g., number, accumulated sum or average) and the time dimension.
• The metrics should be measurable using GitLab data.

Considering these principles, the team consolidated a proposal of 25 candidate metrics. Although many other metrics appeared interesting, the Action-Research team preferred to keep the proposal manageable in this first iteration, thus focusing on those metrics which the developers agreed upon as being the most determinant. The metrics can be divided into several categories, as shown in Table 2:
• General metrics. Following the Scrum Team observations, we propose an indication of the total number and the average number of development tasks (metrics #1 and #4), the number of tasks based on their status, e.g. completed, closed, etc. (#2 and #3), and the average task lifetime (#5 and #6). Metrics #2 and #3 can be extended so that the number of tasks marked as ‘‘in progress’’, ‘‘testing’’, ‘‘ready’’, etc., can also be analysed, but we discarded this in order to keep the approach simple in this first iteration. Each of the general metrics can be calculated in different dimensions, i.e. per developer, per specific project, per area (frontend/backend), per sprint or release, and narrowing the timespan to the specific range.
• Task estimation metrics. This category includes the metrics related to planning and effort allocation and the analysis of effort/resources consumption. They can indicate the accuracy of such estimation, including the average deviation of the estimation in relation to the real effort consumption (metrics #7 and #8), the total sum of estimated or used resources (#11 and #12), as well as the completeness of task estimation (#9 and #10). Similarly to the general metrics, those related to task estimation can also be calculated in different dimensions, such as taking assignees, projects, sprints or a time range into account.

For the specific stages of the development process, we distinguished the metrics related to task implementation, bug fixing and testing.
• Task implementation metrics. This category includes time-based metrics indicating the average time of implementation and the average time of waiting for implementation (metrics #13 and #14). The other three metrics in this category are related to the task implementation status, namely assignment to the given sprint (#15 and #17) and to a given developer (#16).
• Bug fixing metrics. They include the number of tasks with reported bugs (metrics #18 and #21) and an indication of the average time needed/pending to fix the bug (#19 and #20).
• Testing metrics. Similarly, metrics #22 and #23 indicate the average testing time and the average pending testing time, while metric #24 shows the current percentage of pending tasks to be tested.
• Other metrics. Moreover, we propose a metric showing the number of non-commented merge requests (metric #25), identified as relevant by the CONTRA developers.

B. PROCESS METRICS ASSESSMENT
Next, we report the impact of using the process metrics in the CONTRA pilot project (corresponding to the evaluating phase of the Action-Research cycle) on the three research questions: (a) monitoring process performance, (b) keeping a stable product quality level, and (c) improving the estimation of tasks during sprints.

The process of using the metrics was led by the CONTRA Product Owner. With respect to process performance and product quality, the Product Owner analysed monthly the strategic indicators rendered by the Q-Rapids dashboard for these two concepts.

1) PROCESS PERFORMANCE
Figure 4 shows an example in which we can see that the Product Quality remains at a steady level, while in the case of Process Performance there are significant changes.
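As a rough illustration of the kind of computation behind the metric catalogue above, the sketch below derives a few of the categories (task counts per status, average task lifetime, estimation deviation and completeness) from GitLab-like issue records. This is a minimal sketch under our own assumptions: the field names mirror GitLab issue attributes, but the helper functions and the sample records are illustrative, not the actual Q-Rapids implementation.

```python
from datetime import datetime

# Illustrative issue records with GitLab-like fields (sample data, not CONTRA data).
issues = [
    {"state": "closed", "created_at": "2019-01-10", "closed_at": "2019-01-14",
     "time_estimate_h": 8, "time_spent_h": 10},
    {"state": "closed", "created_at": "2019-01-12", "closed_at": "2019-01-13",
     "time_estimate_h": 4, "time_spent_h": 4},
    {"state": "opened", "created_at": "2019-01-15", "closed_at": None,
     "time_estimate_h": None, "time_spent_h": 2},
]

def parse(day):
    return datetime.strptime(day, "%Y-%m-%d")

# In the spirit of metrics #2/#3: number of tasks in a given status.
def count_by_state(issues, state):
    return sum(1 for i in issues if i["state"] == state)

# In the spirit of metrics #5/#6: average task lifetime in days (closed tasks only).
def avg_lifetime_days(issues):
    closed = [i for i in issues if i["closed_at"]]
    return sum((parse(i["closed_at"]) - parse(i["created_at"])).days
               for i in closed) / len(closed)

# In the spirit of metrics #7/#8: average deviation of estimate vs. real effort.
def avg_estimation_deviation_h(issues):
    estimated = [i for i in issues if i["time_estimate_h"] is not None]
    return sum(abs(i["time_spent_h"] - i["time_estimate_h"])
               for i in estimated) / len(estimated)

# In the spirit of metrics #9/#10: share of tasks that carry an estimate at all.
def estimation_completeness(issues):
    return sum(1 for i in issues if i["time_estimate_h"] is not None) / len(issues)
```

In the pilot, analogous computations were fed from the ElasticSearch index; calculating the same functions over filtered subsets of the records yields the per-developer, per-project, per-area or per-sprint dimensions described above.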
• Metric #21: Percentage of ‘non-bug’ type tasks to total tasks on the board.

These metrics significantly improve the management of processes such as task estimation and bug fixing, which are crucial in the rapid development of high-quality and stable software. Moreover, after applying those metrics, team management is now more efficient and transparent.

VI. DISCUSSION
The overall assessment of the process metrics in the CONTRA case proved their value to the company. The proposed, calculated and visualized process metrics (using either the Q-Rapids dashboard, Kibana views or even ad-hoc visualizations developed at ITTI) were assessed as useful by the CONTRA Scrum Team, and some of the metrics are now used in practice not only in the pilot project but company-wide.

A. PROCESS METRICS IN THE SCRUM PROCESS
Once the selection was made, at each Scrum Team retrospective meeting the team usually spends 15 to 20 minutes on visualizing and analysing the selected process metrics. Process metrics are a great fit, since this part of the meeting is devoted to people, processes, tools, lessons learnt and how to improve the way of working. Of course, the role of the Product Owner and Scrum Master is to make those discussions and displays interesting, but this turned out to be an easy job for them, because developers usually like statistics and trends/graphs, such as those shown in the previous section. These results, trends and metric values are used to motivate the team and improve the process, and also to find problems in order to resolve them.

B. BENEFITS AND ADOPTION OF PROCESS METRICS AT ITTI
The most important advantage of the process metrics perceived by ITTI is the focus on the process and team effectiveness. The proposed solution has improved the way developers report time spent on issues/tickets and allows for comparison with the effort planned.

Including the dashboards and the process metrics in the ITTI software development process enhanced the willingness and efficiency in reporting spent time and planning the effort. Moreover, the gap between the effort planned and spent is continuously decreasing, which means that Product Owners, Scrum Masters and developers estimate much better.

Among the more practical advantages and possible decisions, the proposed metrics allow for efficient tracking of tasks/issues in the project, per developer and per sprint (or a chosen timespan).

As shown in the previous section, the efficiency of each developer can now be checked. What we found out is that the optimal reported time/effort should be close to 4 days, which means that 1 day of the working week is spent on unreported aspects (e.g. when experienced developers help less experienced ones), and this is well understood and justified. What Product Owners and Scrum Masters mostly seek is information about the bottlenecks of the process: basically, how much time a ticket ‘lies’ in each phase of the process. The board of the
FIGURE 8. Kibana dashboard including a chart of assigned issues per developer. Black boxes are used to hide developers’ real names.
process at ITTI is presented in Figure 10. It consists of 8 steps, and the proposed process metrics nicely show the status of tickets in the project, per phase of the process, per developer and in a given timespan.

At ITTI, in the CONTRA project, the solution showed at first that the bottleneck was in the testing phase. This situation facilitated a quick decision to engage more testers, but that did not solve all the problems. While looking at the process metrics, the company found out that the bottleneck had shifted to the ‘merge request’ phase. This situation meant that experienced developers (those who can perform code review and merge) did not have enough time and resources to perform the tasks in this phase of the software development process. The situation is now solved by granting the rights to perform ‘merges’ to medium-experienced developers in order to improve the overall process.

C. CONSIDERATIONS ON HUMAN ASPECTS
An important aspect to note is that some of the proposed process metrics, while calculated per developer, have to be used wisely by the Product Owners and Scrum Masters, taking into account a plethora of human factors and aspects. This is extremely important especially now (as of early 2020), when IT is an employee’s market and developers are scarce. At ITTI, the usage of the process metrics is also compliant with the General Data Protection Regulation, as well as with some practical guidelines such as those from the European project CIPHER [7].

VII. THREATS TO VALIDITY
As with any empirical study, there might be limitations to our research method and findings. This section discusses possible threats to validity in terms of construct, conclusion, internal, and external validity, and emphasises the mitigation actions applied.

A. CONSTRUCT VALIDITY
The retrospective session enabled us to further elaborate on the practical relevance of the process metrics with two members of the Scrum team. The use of quantitative and qualitative measures and observations reduced the mono-method bias. Furthermore, we aimed at creating a safe environment, encouraging the participants to highlight any negative aspects and make suggestions for the improvement of the process metrics. Finally, some of our results could be caused by a non-optimal implementation of the process metrics (see Section VI.A). Still, these results are useful for others to learn how to build such an infrastructure in realistic settings.

B. CONCLUSION VALIDITY
To ensure the reliability of this evaluation, the measurement plan and procedure were documented in detail. Additionally, the results were reviewed by the Scrum team. In this way,
FIGURE 10. The view of the process board used at ITTI (the board actually in use is in Polish, but any other language can also be used).
we mitigated risks such as fishing for results during the analysis, which would have led to a subjective analysis.

C. INTERNAL VALIDITY
We evaluated the integrated Q-Rapids solution by drawing a convenience sample of a Scrum Master and a developer. One limitation of our work is that we were not able to get a random sample of participants in the pilot project. In addition, we defined an evaluation protocol in advance, which included a specific description of our planned procedure and the order of using the materials, i.e., an explanation of all the steps that had to be performed. After all the partners had agreed on the final version of the evaluation guidelines, we executed the evaluation accordingly. This should mitigate the fact that we needed to split the work of conducting the evaluation among different researchers and partners. Some of the five researchers who conducted the evaluation were involved in developing the Q-Rapids tool components. To minimise that bias, we made sure that in each case there were at least two researchers present, one acting as the moderator/experimenter and one as the observer, to emphasise that the participants could speak freely.

D. EXTERNAL VALIDITY
Our results are tied to the context of CONTRA. Our goal was to better understand practitioners’ perception. We characterised the environment as realistically as possible and studied the suitability of our sampling (see Section IV.A). GitLab is one of the most extensively used software management tools in SME software development companies. Therefore, we can expect that this metrics analysis may provide actionable insights to a software development company for improving the quality of its processes.

VIII. CONCLUSION
In this paper we presented an approach to the definition and utilisation of process metrics related to Agile software development. This has been implemented with: the formulation of a set of process metrics, their assessment in a real project, and the description of their practical and empirical usage in a particular SME company.

More precisely, in terms of the research questions:
• RQ1: The major contribution of the paper is the much-needed solution to monitor the performance of each phase of the software development process. The solution includes a subset of effective process metrics.
• RQ2: Indeed, the major benefit for ITTI is the positive impact on the stability of the CONTRA system while adding new features. In fact, the offered software product (CONTRA) needs customisation for each client (while the domains of the clients’ businesses vary significantly). The proposed process metrics are continuously used to help assure the quality and stability of the software.
• RQ3: The value of the proposed solution lies in the mechanisms for visualizing task estimation, e.g., tracing tasks (tickets, issues) live (in real time). By using these process metrics visualisations, the Product Owner in the Scrum Team of ITTI was able to improve task estimations.

Even though our findings are based on a particular SME company and product, we believe that the presented findings on process metrics and Q-Rapids usage can be applicable to a wider context. ‘‘If the forces within an organization that drove the observed behavior are likely to exist in other organizations, it is likely that those other organizations, too, will exhibit similar behavior’’ [39].
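The per-phase bottleneck analysis reported in the Discussion, i.e. how much time a ticket ‘lies’ in each phase of the board, essentially accumulates the elapsed time between consecutive status-change events of each ticket. A minimal sketch under our own assumptions, using hypothetical event tuples rather than the real GitLab event schema:

```python
from datetime import datetime

# Hypothetical status-change events for one ticket (illustration only; the real
# data are the GitLab issue events ingested by the Q-Rapids connectors).
events = [
    ("2019-03-01 09:00", "opened"),
    ("2019-03-02 09:00", "in progress"),
    ("2019-03-04 09:00", "testing"),
    ("2019-03-05 09:00", "closed"),
]

def time_in_phase(events):
    """Accumulate hours spent in each phase between consecutive status changes."""
    fmt = "%Y-%m-%d %H:%M"
    hours = {}
    for (t0, phase), (t1, _) in zip(events, events[1:]):
        delta = datetime.strptime(t1, fmt) - datetime.strptime(t0, fmt)
        hours[phase] = hours.get(phase, 0.0) + delta.total_seconds() / 3600
    return hours
```

Summing such per-ticket dictionaries over a project and timespan yields the per-phase totals of the kind that exposed first the testing and later the ‘merge request’ bottleneck at ITTI.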
In fact, most SME software development companies use Scrum-like processes such as the one presented in Fig. 10, and would be interested in practical metrics related to processes and team effectiveness. Even though the tools used, or the labels and names of the process phases, may differ, the solution is general, although it would of course require some customisation. It is worth noting that, in this paper, we showed real benefits based on a real implementation for GitLab; however, other tools such as JIRA can also be used as the data source, and in fact, the connectors to JIRA are already implemented and available as an output of the Q-Rapids project.5

Indeed, the proposed metrics and the related Q-Rapids solutions fill the current need for process-oriented tools in Agile software development. Most tools focus on software quality or continuous integration, without measures for the process. Essentially, there is only one competing solution that could be used to analyse the process, namely GitLab Time Tracker. However, as shown in this paper, we propose a wider set of calculated process metrics, better visualisation, and much more enhanced analysis capabilities.

5 https://github.com/q-rapids/

ACKNOWLEDGMENT
The authors would like to thank all the members of the Q-Rapids H2020 project consortium.

REFERENCES
[1] E. Arisholm and L. C. Briand, "Predicting fault-prone components in a Java legacy system," in Proc. ACM/IEEE Int. Symp. Empirical Softw. Eng. (ISESE), 2006, pp. 8–17.
[2] V. Basili, G. Caldiera, and H. Rombach, "The goal question metric approach," Encyclopedia Softw. Eng., vol. 1, pp. 528–532, 1994.
[3] A. M. Bhatti, H. M. Abdullah, and C. Gencel, "A model for selecting an optimum set of measures in software organizations," in Proc. Eur. Conf. Softw. Process Improvement, 2009, pp. 44–56.
[4] M. P. Boerman, Z. Lubsen, D. A. Tamburri, and J. Visser, "Measuring and monitoring agile development status," in Proc. IEEE/ACM 6th Int. Workshop Emerg. Trends Softw. Metrics, May 2015, pp. 54–62.
[5] E. Caballero, J. A. Calvo-Manzano, and T. S. Feliu, "Introducing Scrum in a very small enterprise: A productivity and quality analysis," Commun. Comput. Inf. Sci., vol. 172, pp. 215–224, 2011.
[6] M. Choraś, R. Kozik, D. Puchalski, and R. Renk, "Increasing product owners' cognition and decision-making capabilities by data analysis approach," Cognition, Technol. Work, vol. 21, no. 2, pp. 191–200, May 2019.
[7] M. Choraś, R. Kozik, R. Renk, and W. Hołubowicz, "A practical framework and guidelines to enhance cyber security and privacy," in Proc. 8th Int. Conf. Comput. Intell. Secur. Inf. Syst., Burgos, Spain, Jun. 2015, pp. 15–17.
[8] V. Claudia, M. Mirna, and M. Jezreel, "Characterization of software processes improvement needs in SMEs," in Proc. Int. Conf. Mechatronics, Electron. Automot. Eng. (ICMEAE), Dec. 2013, pp. 223–228.
[9] M. Díaz-Ley, F. García, and M. Piattini, "Implementing a software measurement program in small and medium enterprises: A suitable framework," IET Softw., vol. 2, no. 5, pp. 417–436, 2008.
[10] K. Dikert, M. Paasivaara, and C. Lassenius, "Challenges and success factors for large-scale agile transformations: A systematic literature review," J. Syst. Softw., vol. 119, pp. 87–108, Sep. 2016.
[11] Y. Dubinsky, D. Talby, O. Hazzan, and A. Keren, "Agile metrics at the Israeli air force," in Proc. Agile Develop. Conf. (ADC), 2005, pp. 12–19.
[12] T. Dybå, "Factors of software process improvement success in small and large organizations: An empirical study in the Scandinavian context," in Proc. 9th Eur. Softw. Eng. Conf., 2003, pp. 148–157.
[13] W. A. Florac and A. D. Carleton, Measuring the Software Process. Reading, MA, USA: Addison-Wesley, 1999.
[14] A. Fuggetta, "Software process: A roadmap," in Proc. Conf. Future Softw. Eng., 2000, pp. 25–34.
[15] O. Gómez, H. Oktaba, M. Piattini, and F. García, "A systematic review measurement in software engineering: State-of-the-art in measures," in Software and Data Technologies. Berlin, Germany: Springer, 2006, pp. 165–176.
[16] L. Guzmán, M. Oriol, P. Rodríguez, X. Franch, A. Jedlitschka, and M. Oivo, "How can quality awareness support rapid software development?—A research preview," in Proc. REFSQ, 2017, pp. 167–173.
[17] T. Hall and N. Fenton, "Implementing effective software metrics programs," IEEE Softw., vol. 14, no. 2, pp. 55–65, 1997.
[18] D. Radjenović, M. Heričko, R. Torkar, and A. Živkovič, "Software fault prediction metrics: A systematic literature review," Inf. Softw. Technol., vol. 55, no. 8, pp. 1397–1418, Aug. 2013.
[19] M. Kasunic, "The state of software measurement practice: Results of 2006 survey," Software Engineering Institute (SEI), Pittsburgh, PA, USA, Tech. Rep. CMU/SEI-2006-TR-009, ESC-TR-2006-009, 2006.
[20] R. Kozik, M. Choraś, D. Puchalski, and R. Renk, "Q-Rapids framework for advanced data analysis to improve rapid software development," J. Ambient Intell. Hum. Comput., vol. 10, no. 5, pp. 1927–1936, May 2019.
[21] B. Kitchenham, "What's up with software metrics?—A preliminary mapping study," J. Syst. Softw., vol. 83, no. 1, pp. 37–51, 2010.
[22] E. Kupiainen, M. V. Mäntylä, and J. Itkonen, "Using metrics in agile and lean software development—A systematic literature review of industrial studies," Inf. Softw. Technol., vol. 62, pp. 143–163, Jun. 2015.
[23] C. Y. Laporte, S. Alexandre, and R. V. O'Connor, "A software engineering lifecycle standard for very small enterprises," in Proc. Softw. Process Improvement, 2008, pp. 129–141.
[24] F. van Latum, R. van Solingen, M. Oivo, B. Hoisl, D. Rombach, and G. Ruhe, "Adopting GQM based measurement in an industrial environment," IEEE Softw., vol. 15, no. 1, pp. 78–86, 1998.
[25] L. López, S. Martínez-Fernández, C. Gómez, M. Choraś, R. Kozik, L. Guzmán, A. M. Vollmer, X. Franch, and A. Jedlitschka, "Q-Rapids tool prototype: Supporting decision-makers in managing quality in rapid software development," in Proc. CAiSE Forum, 2018, pp. 200–208.
[26] S. Martínez-Fernández, P. Jovanovic, X. Franch, and A. Jedlitschka, "Towards automated data integration in software analytics," in Proc. Int. Workshop Real-Time Bus. Intell. Anal., 2018, p. 6.
[27] S. Martínez-Fernández, A. M. Vollmer, A. Jedlitschka, X. Franch, L. López, P. Ram, P. Rodríguez, S. Aaramaa, A. Bagnato, M. Choraś, and J. Partanen, "Continuously assessing and improving software quality with software analytics tools: A case study," IEEE Access, vol. 7, pp. 68219–68239, 2019.
[28] M. G. Mendonça and V. R. Basili, "Validation of an approach for improving existing measurement frameworks," IEEE Trans. Softw. Eng., vol. 26, no. 6, pp. 484–499, Jun. 2000.
[29] R. Moser, W. Pedrycz, and G. Succi, "A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction," in Proc. Int. Conf. Softw. Eng. (ICSE), 2008, pp. 181–190.
[30] D. J. Paulish and A. D. Carleton, "Case studies of software-process-improvement measurement," Computer, vol. 27, no. 9, pp. 50–57, Sep. 1994.
[31] K. Petersen, C. Gencel, N. Asghari, D. Baca, and S. Betz, "Action research as a model for industry-academia collaboration in the software engineering context," in Proc. Int. Workshop Long-Term Ind. Collaboration Softw. Eng. (WISE), New York, NY, USA, 2014, pp. 55–62, doi: 10.1145/2647648.2647656.
[32] F. J. Pino, F. García, and M. Piattini, "Software process improvement in small and medium software enterprises: A systematic review," Softw. Qual. J., vol. 16, no. 2, pp. 237–261, Jun. 2008.
[33] P. Ram, P. Rodríguez, and M. Oivo, "Software process measurement and related challenges in agile software development: A multiple case study," in Proc. Int. Conf. Product-Focused Softw. Process Improvement. Cham, Switzerland: Springer, Nov. 2018, pp. 272–287.
[34] P. Ram, P. Rodríguez, M. Oivo, and S. Martínez-Fernández, "Success factors for effective process metrics operationalization in agile software development: A multiple case study," in Proc. IEEE/ACM Int. Conf. Softw. Syst. Processes (ICSSP), May 2019, pp. 14–23.
[35] C. R. Prause and A. Hönle, "Emperor's new clothes: Transparency through metrication in customer-supplier relationships," in Product-Focused Software Process Improvement (Lecture Notes in Computer Science), vol. 11271, M. Kuhrmann, Ed. Cham, Switzerland: Springer, 2018.
[36] I. Richardson and C. G. von Wangenheim, "Guest editors' introduction: Why are small software organizations different?" IEEE Softw., vol. 24, no. 1, pp. 18–22, Jan. 2007.
[37] P. Rodríguez, J. Markkula, M. Oivo, and K. Turula, "Survey on agile and lean usage in Finnish software industry," in Proc. ACM-IEEE Int. Symp. Empirical Softw. Eng. Meas. (ESEM), Lund, Sweden, 2012, pp. 139–148.
[38] P. S. M. dos Santos and G. H. Travassos, "Action research can swing the balance in experimental software engineering," Adv. Comput., vol. 83, pp. 205–276, 2011.
[39] P. B. Seddon and R. Scheepers, "Towards the improved treatment of generalization of knowledge claims in IS research: Drawing general conclusions from samples," Eur. J. Inf. Syst., vol. 21, no. 1, pp. 6–21, Jan. 2012.
[40] E. Shihab, Z. M. Jiang, W. M. Ibrahim, B. Adams, and A. E. Hassan, "Understanding the impact of code and process metrics on post-release defects: A case study on the Eclipse project," in Proc. ACM-IEEE Int. Symp. Empirical Softw. Eng. Meas. (ESEM), Sep. 2010, p. 4.
[41] J. Soini, "A survey of metrics use in Finnish software companies," in Proc. Int. Symp. Empirical Softw. Eng. Meas., Sep. 2011, pp. 49–57.
[42] R. van Solingen and E. Berghout, The Goal/Question/Metric Method: A Practical Guide for Quality Improvement of Software Development. New York, NY, USA: McGraw-Hill, 1999.
[43] M. Söylemez and A. Tarhan, "Challenges of software process and product quality improvement: Catalyzing defect root-cause investigation by process enactment data analysis," Softw. Qual. J., vol. 26, no. 2, pp. 779–807, Jun. 2018.
[44] M. Staron and W. Meding, "Factors determining long-term success of a measurement program: An industrial case study," e-Inform. Softw. Eng. J., vol. 1, no. 1, pp. 7–23, Jan. 2012.
[45] M. Sulayman and E. Mendes, "A systematic literature review of software process improvement in small and medium Web companies," in Proc. Adv. Softw. Eng., 2009, pp. 1–8.
[46] T. Tahir, G. Rasool, W. Mehmood, and C. Gencel, "An evaluation of software measurement processes in Pakistani software industry," IEEE Access, vol. 6, pp. 57868–57896, 2018.
[47] B. Tanveer, L. Guzmán, and U. M. Engel, "Understanding and improving effort estimation in agile software development," in Proc. Int. Workshop Softw. Syst. Process (ICSSP), 2016, pp. 41–50.
[48] A. Tarhan and S. G. Yilmaz, "Systematic analyses and comparison of development performance and product quality of incremental process and agile process," Inf. Softw. Technol., vol. 56, no. 5, pp. 477–494, May 2014.
[49] A. Tosun, A. Bener, and B. Turhan, "Implementation of a software quality improvement project in an SME: A before and after comparison," in Proc. 35th Euromicro Conf. Softw. Eng. Adv. Appl., 2009, pp. 203–209.
[50] M. Unterkalmsteiner, T. Gorschek, A. K. M. M. Islam, C. K. Cheng, R. B. Permadi, and R. Feldt, "Evaluation and measurement of software process improvement—A systematic literature review," IEEE Trans. Softw. Eng., vol. 38, no. 2, pp. 398–424, Apr. 2012.

TOMASZ SPRINGER received the M.Sc. degree in applied computer science from Adam Mickiewicz University, Poland, in 2009. He currently holds the position of Lead IT Architect at ITTI sp. z o.o., Poznań, Poland. He is also an experienced Team Leader, a Product Owner, a Scrum Master, and an active Agile practitioner. He is involved in the development of the commercial software product CONTRA. He took part as an expert, end-user, and practitioner in the H2020 Q-Rapids project. For many years, he was an active member of different EU FP7, H2020, European Defence Agency, European Space Agency, and NATO projects. He is personally interested in user-centered design, IT systems ergonomics, and usability.

RAFAŁ KOZIK received the Ph.D. degree in telecommunications from the University of Science and Technology (UTP) in Bydgoszcz, in 2013, and the D.Sc. degree in computer science from the West Pomeranian University of Technology in Szczecin, in 2019. Since 2009, he has been involved in a number of international and national research projects related to cybersecurity, critical infrastructures protection, software quality, and data privacy (e.g., FP7 INTERSECTION, FP7 INSPIRE, FP7 CAMINO, FP7 CIPRNet, SOPAS, SECOR, and H2020 Q-Rapids). He is currently an Assistant Professor with the Department of Telecommunication, University of Science and Technology in Bydgoszcz (UTP). He has authored more than 70 reviewed scientific publications.