Assessing Students' Prior Knowledge and Learning in an Engineering Management Course for Civil Engineers
Abstract
The objectives of this study were (1) to use a pre-test to assess Civil Engineering students'
understanding of engineering management topics at The Citadel prior to taking a required
junior-level course in engineering management, and (2) to use a post-test to assess student
learning resulting from the various pedagogies employed throughout the course instruction.
Additional insight into broader student performance indicators was gained by comparing
post-test results with Embedded Indicator data, which are collected annually, evaluated
against department standards, and used in the department's assessment of student outcomes.
Introduction
Departmental outcomes aligning the curriculum with professional skills were established to link
course goals through a course-by-course strategy for student development. An essential
component of this plan was the adoption of Embedded Indicators, aligned with CEE Department
outcomes and mapped across all four years of the undergraduate curriculum. Embedded
Indicators are mapped to appropriate Bloom's Taxonomy levels and organized sequentially to
provide a progression of student learning and instructional development.
After students are exposed to instructional material, Embedded Indicators collectively measure
student performance as determined by graded test questions, assignments, reports, and projects
commonly used by instructors to assess student learning. Prior to teaching a Civil Engineering
course, faculty pre-identify specific Embedded Indicator tools for measuring each goal
contained in the course syllabus. Throughout the semester, students are assessed using these
pre-designated tools. If the student performance average for an Embedded Indicator is 75% or
higher, it is concluded that students have collectively achieved the appropriate learning
requirements and met departmental standards. Example work from three representative students
(good, average, poor) is included with an Embedded Indicator summary that provides an
assessment of student performance and is mapped to reflect linkage with the appropriate 1-22
program outcomes and Bloom's Taxonomy.
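As a minimal illustration of this standard (using hypothetical scores rather than actual
departmental data), the following Python sketch shows how an Embedded Indicator average could
be computed and compared against the 75% threshold:

# Minimal sketch, assuming hypothetical graded scores (in percent) for one
# pre-designated Embedded Indicator tool; not the department's actual tooling.
indicator_scores = [82, 68, 91, 74, 77, 80, 63, 88]

average = sum(indicator_scores) / len(indicator_scores)
meets_standard = average >= 75  # departmental standard described above

print(f"Indicator average: {average:.1f}% -> "
      f"{'meets' if meets_standard else 'does not meet'} the 75% standard")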
Students learn more effectively by actively analyzing, discussing, and applying content in
meaningful ways rather than by passively absorbing information [13]. Various teaching and
learning techniques were employed to improve student learning of key concepts in
engineering management.
To assist students with learning the course material, each student was required to teach a lesson
during the semester. This method benefits both the students being taught and the peer teachers
[14, 15]: peer teachers reinforce their own learning by instructing others, and students feel
more comfortable when interacting with a peer [14, 15]. Daily quizzes on assigned reading were
administered at the beginning of class to increase students' attendance, preparation,
participation, and study habits, and to improve exam scores. Short YouTube videos were shown
daily to stimulate introductory discussions on each day's topic. One-minute papers [16] were
used to monitor student learning and address students' misconceptions and preconceptions.
Students were typically asked to write a concise summary of the presented topic, write an exam
question for the topic, or answer, in 60 seconds, a big-picture question from the material
presented in the current or previous course lesson.
A commonly accepted assessment instrument useful for both diagnostic and formative purposes
is the concept inventory, a research-based assessment technique that measures conceptual
understanding [19, 20, 21]. Concept inventories help instructors measure the effectiveness of
their teaching [20, 21] and determine whether students have a correct understanding of important
concepts on specific topics. When the same set of questions is used, concept inventories can
help evaluate students' pre- and post-knowledge of a subject. Pre-tests establish students' prior
knowledge of a subject, and post-tests measure learning at the end of the educational experience
[19, 20, 22]. These tests are also helpful in distinguishing between learning and performance [22].
Students' academic performance, such as examination scores, is often used as a proxy for
learning [23-26] but does not represent direct evidence of learning [26-29]. Learning is
different from performance. Learning is the difference between what students know at the
beginning of the semester and what they know at the end of the semester [22]. Performance is a
demonstration of mastery of course material, such as a correct answer to a question on the final
exam. This distinction between learning and performance is important because students enter
courses with unequal and varying levels of knowledge, skills, and educational experiences [22].
Consequently, pre-tests are useful for establishing prior knowledge, and post-tests are effective
for measuring student learning [22].
An eight-question pre- and post-test was developed based upon key concepts in the engineering
management course (see Table 2). The pre-test was administered to measure students' prior
engineering management knowledge and to identify student misconceptions at the beginning of
the semester. The same short-answer test was administered on the last day of the term to assess
knowledge gained as a result of the course experience. Each question was scored against an
established correct answer. One of the authors graded the short-answer test for all sections to
ensure uniform application of the grading rubric. When grading the pre- and post-test
instruments, the professor looked for key words, phrases, and concepts, and scored each of the
eight questions with a bracketed score of zero (no credit for an incorrect answer or no answer),
0.5 (partial credit for a partially correct answer), or one (full credit for a correct answer).
It is important to note that neither the pre-test nor the post-test counted toward the course grade.
Table 2. The short-answer questions for the Engineering Management pre- and post-test (n = 57)
No.  Question
Q1   Describe what the critical path is in project management.
Q2   Define the acronym RFP.
Q4   Explain the need for lifelong learning and describe skills required of a lifelong learner.
Q5   Describe the term multiplier, which is commonly used in the consulting business.
Q6   Describe the consequences of uncompensated scope creep.
Q7   Explain the role of a leader and list important leadership principles and attitudes.
Q8   What is the difference between Quality Control and Quality Assurance?
The pre-test and post-test questions are strategically mapped to course goals and learning
objectives. Each of the eight course goals is linked to support a specific student learning
outcome as part of the Department Assessment process. A list of 23 student learning outcomes
was developed and aligned with the ASCE Body of Knowledge (BOK) 2 [7-9]. Course Embedded
Indicators are used to measure overall student performance through an assignment or test
question aligned with a student learning outcome, largely focusing on professional
skills. Currently, the pre-test and post-test results are stand-alone measures of student
knowledge growth and are not used to support the Department Assessment process.
Research Question 1: To what degree do junior or senior Civil Engineering majors at The
Citadel have exposure to engineering management prior to the introductory engineering
management course?
Figure 1 summarizes students' performance on the pre-test by showing the mean score (in
percent) for each question. The pre-test mean of the overall scores ranged from 13% to 19%
across the three course sections, and the pre-test scores for individual students ranged from
zero to 46%. All students scored below 10% on Questions Q1, Q2, Q5, and Q6. Student
performance below the 50% level on every question of the pre-test is extremely poor, indicating
little to no prior experience with these concepts. The highest pre-test score was on Question 7
(explain the role of a leader and list important leadership principles and attitudes), an
important theme in the Engineering Management course that the students successfully mastered.
The lowest pre-test scores were on Question 1 (critical path in project management), Question 2
(defining the acronym RFP), Question 5 (describe the term multiplier), and Question 6
(consequences of uncompensated scope creep).
The pre-test standard deviation of the overall scores ranges from 8% to 37% (Figure 2). The
pre-test standard deviation for each question, a measure of the variation in student
performance, is also shown in Figure 2 and ranges from zero to 30%. Across the three sections,
the pre-test standard deviations for Questions 1 through 8 range from 10% to 13%, 0 to 19%,
22% to 30%, 26% to 29%, 0 to 11%, 13% to 19%, 13% to 23%, and 21% to 25%, respectively.
Figure 1. Pre-test mean score (%) for each question (Q1-Q8) and the overall total.
Figure 2. Pre-test standard deviation (%) for each question (Q1-Q8) and the overall total.
The same short-answer test shown in Table 2 was administered on the last day of the term to
assess knowledge gained as a result of the course educational experience. Figures 3 and 4
illustrate the mean and standard deviation of the post-test scores. The post-test mean and
standard deviation of the overall scores on Questions 1-8 range from 50% to 84% and from 10% to
23%, respectively.
Figure 3. Post-test mean score (%) for each question (Q1-Q8) and the overall total.
Figure 4. Post-test standard deviation (%) for each question (Q1-Q8) and the overall total.
Table 3. Paired t-test results for the pre- and post-test results (Total score)

Section              Mean Diff (%)   Paired t   p-value
Section 1 (df = 18)       37            7.04    < 0.001
Section 2 (df = 23)       37           10.57    < 0.001
Section 3 (df = 13)       69           26.60    < 0.001
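The paired t-test is appropriate here because the pre- and post-test totals come from the same
students. As a sketch only (using hypothetical totals, not the study data), the kind of paired
statistics reported in Table 3 can be computed with SciPy's ttest_rel function:

from scipy import stats

# Hypothetical pre- and post-test total scores (in percent) for the same students.
pre_totals  = [13, 19, 8, 25, 17, 11, 21, 15]
post_totals = [50, 63, 45, 71, 56, 48, 66, 60]

mean_diff = sum(post - pre for pre, post in zip(pre_totals, post_totals)) / len(pre_totals)
t_stat, p_value = stats.ttest_rel(post_totals, pre_totals)

print(f"Mean difference: {mean_diff:.1f}%, paired t = {t_stat:.2f}, p = {p_value:.4g}")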
Embedded Indicator data from 2017 and 2018 used for course assessment are tabulated and
presented in Table 4. The following observations are noted:
• Relatively small variations exist in the data for these eight course outcomes over the
  two-year analysis period, with an overall student performance of 82.5%.
• A possible explanation for the lower-performing outcomes is the difficulty of conveying to
  undergraduate students, who have little experience or knowledge of real-world situations, the
  complicated engineering business and project management nuances needed for successful future
  careers. In addition, students knew in advance they would receive a grade for Embedded
  Indicator questions, which are typically implemented on tests and/or major projects, while the
  pre-test and post-test were not administered as graded assignments.
A comparison of the mean post-test and Embedded Indicator results is provided in Figure 5.
Post-test combined results trailed Embedded Indicator combined results by an average of 21%
over the two-year period. Concept inventories are widely credited as a viable means of providing
reliable data and positively influencing pedagogical practices [21]. Results from the post-test
concept inventory are helpful in distinguishing between student learning and performance [22].
Figure 5. Comparison of mean post-test and Embedded Indicator results (%) for 2017 and 2018.
Conclusions
Based on the pre- and post-test results, students experienced measurable gains in conceptual
understanding of engineering management concepts during the course, across all three sections
taught in 2017 and 2018, which include a total sample of 57 students. The overall average
percentage correct across the eight questions increased from 16% on the pre-test to 61% on the
post-test. In an evaluation of post-test results for the combined 24 question responses (eight
questions in each of three sections), 21 of 24 demonstrated statistically significant
differences between paired pre-test and post-test results (all p < 0.001), equating to an 87%
effectiveness for the identified course concepts. Because the post-test was unannounced, the
results provide a truer measure of student learning. The pre-test to post-test changes in
overall scores were influenced by the variety of pedagogical techniques used in the course, as
described in this paper.
Future research should include expanding the pre-test and post-test instrument to subsequent
engineering management course offerings to increase the sample size and provide improved insight
into student learning and performance across this important course subject matter.
Additionally, post-test results should be compared with Fundamentals of Engineering subject
area results and corresponding Senior Exit Survey questions that map to program outcomes of
interest. Lastly, course professors should continue to identify and administer teaching
techniques, instructional methods, and strategic assignments and exercises to continue improving
student learning and performance.
References
[1] R. Unal, C. Keating, P. Kauffmann, and W. Peterson, “Engineering Management: The Minor of Choice,”
Proceedings of the American Society of Engineering Education Annual Conference, Montreal, Canada, 2002.
[2] P. Dunn, B. Pearce, “Introducing Project Management to Senior Civil Engineering Students,” Proceedings of
the American Society of Engineering Education Annual Conference, Chicago, IL, 2006.
[3] E.T. Pascarella and P.T. Terenzini, How College Affects Students: Volume 2, A Third Decade of
Research, 2005.
[4] D. Merino, “A Proposed Engineering Management Body of Knowledge (Embok)” Proceedings of the American
Society of Engineering Education Annual Conference, Chicago, IL 2006.
[5] S. Murray and S. Raper, “Encouraging Lifelong Learning for Engineering Management Undergraduates,”
Proceedings of the American Society of Engineering Education Annual Conference, Honolulu, Hawaii, 2007.
[6] W. Davis, K. Bower, R. Welch, D. Furman, “Developing and Assessing Student’s Principled Leadership Skills:
to achieve the Vision for Civil Engineers in 2025,” Proceedings of the 120th. American Society for Engineering
Education Annual Conference, Atlanta, GA, 2013.
[7] The Vision for Civil Engineering in 2025, American Society of Civil Engineers, Reston, VA, June 2006.
[8] Achieving the Vision for Civil Engineering in 2025: A Roadmap for the Profession, American Society of Civil
Engineers, Reston, VA, 2009.
[9] Civil Engineering Body of Knowledge for the 21st Century: Preparing the Civil Engineer for the Future, Second
Edition, Committee on Academic Prerequisites for Professional Practice, American Society of Civil Engineers,
Reston, VA, 2008.
[10] S.G. Walesh, "The Raise The Bar Effort: Charting The Future By Understanding The Path To The Present –
The BOK and Lessons Learned," Proceedings of the American Society for Engineering Education Annual
Conference, Austin, TX, 2012.
[11] C. Bonwell and J. Eison, Active Learning: Creating Excitement in the Classroom, Washington, D.C.: Jossey-
Bass, 1991.
[12] C. Meyers and T. Jones, Promoting Active Learning: Strategies for the College Classroom, San Francisco:
Jossey-Bass, 1993.
[13] S.T. Ghanat, J. Kaklamanos, K. Ziotopoulou, I. Selvaraj, and D. Fallon, “A Multi-Institutional Study of Pre-
and Post-Course Knowledge Surveys in Undergraduate Geotechnical Engineering Courses,” Proceedings of ASEE,
New Orleans, LA, 2016.
[14] N.A. Whitman, and J.D. Fife, Peer Teaching: To Teach Is To Learn Twice. ASHE-ERIC Higher Education
Report No. 4. 1988.
[15] S.T. Ghanat, and W. Davis, “Pedagogical Techniques Employed in an Engineering Management Course,”
proceedings of ASEE-SE, Daytona Beach, FL, 2018.
[16] T.A. Angelo and K.P. Cross, Classroom Assessment Techniques: A Handbook for College Teachers, 2nd ed.,
Jossey-Bass Publishers, San Francisco, CA, 1993.
[17] A. Snider and M. Schnurer, Many Sides: Debate Across the Curriculum, New York: International Debate
Education Association, 1999.
[18] A. Freeley and D. Steinberg, Argumentation and Debate: Critical Thinking for Reasoned Decision Making, 11th
ed., Belmont, CA, 2005.
[19] S.T. Ghanat, J. Kaklamanos, I. Selvaraj, C. Walton-Macaulay, and M. Sleep, “Assessment of students’ prior
knowledge and learning in an undergraduate foundation engineering course,” Proceedings of the American Society
for Engineering Education Annual Conference and Exposition, Columbus, Ohio, June 25-28, 2017.
[20] T. Reed-Rhoads, and P.K. Imbrie, “Concept inventories in engineering education,” School of Engineering
Education, Purdue University.
[21] A. Madsen, S.B. McKagan, and E.C. Sayre, “Best practices for administering concept inventories,” The Physics
Teacher, vol. 55, no. 9, pp. 530-536, 2017.
[22] M. Delucchi, “Measuring student learning in social statistics: A pretest-posttest study of knowledge gain,”
Teaching Sociology, vol. 42, no. 3, pp. 231-239, 2014.
[23] R.C. Borresen, “Success in Introductory Statistics with Small Groups,” College Teaching, vol. 38, no. 1,
pp. 26-28, 1990.
[24] M. Delucchi, “Assessing the Impact of Group Projects on Examination Performance in Social Statistics,”
Teaching in Higher Education, vol. 12, no. 4, pp. 447-460, 2007.
[25] D.V. Perkins and R.N. Saris, “A ‘Jigsaw Classroom’ Technique for Undergraduate Statistics Courses,”
Teaching of Psychology, vol. 28, no. 2, pp. 111-113, 2001.
[26] S. Yamarik, “Does Cooperative Learning Improve Student Learning Outcomes?” Journal of Economic
Education, vol. 38, pp. 259-277, 2007.
[27] P. Baker, “Does the Sociology of Teaching Inform Teaching Sociology?” Teaching Sociology, vol. 12, no. 3,
pp. 361-375, 1985.
[28] J. Chin, “Is There a Scholarship of Teaching and Learning in Teaching Sociology? A Look at Papers from 1984
to 1999,” Teaching Sociology, vol. 30, no. 1, pp. 53-62, 2002.
[29] B. Lucal, C. Albers, J. Ballantine, J. Burmeister-May, J. Chin, S. Dettmer, and S. Larson, “Faculty Assessment
and the Scholarship of Teaching and Learning: Knowledge Available/Knowledge Needed,” Teaching Sociology,
vol. 31, no. 2, pp. 146-161, 2003.