
Paper ID #13196

A Metric for Assessment of ABET Student Outcome “b” – Experimental Design and Analyzing the Results
Dr. Allen L Jones PE, South Dakota State University
Dr. Allen Jones is a Professor of Civil Engineering at South Dakota State University (SDSU). His areas
of specialty are geotechnical engineering and general civil engineering. Prior to joining SDSU, he was
a Predoctoral Associate at the University of Washington, teaching graduate courses and completing his
PhD in Civil Engineering. Prior to that, he was a Senior Engineer for 18 years at a consulting/design
firm in Seattle. He is registered or licensed as a Civil Engineer, Geotechnical Engineer, Geologist, and
Engineering Geologist.

© American Society for Engineering Education, 2015
A METRIC FOR ASSESSMENT OF ABET ACCREDITATION
STUDENT OUTCOME “b” – EXPERIMENTAL DESIGN AND
ANALYZING THE RESULTS

Allen L. Jones, PE, PhD


South Dakota State University

Introduction

The Accreditation Board for Engineering and Technology, Inc. (ABET) requires evaluation of
student outcomes (SOs) as part of the accreditation process for undergraduate engineering curricula.
Assessment under this criterion consists of one or more processes that identify, collect, and prepare
data to evaluate the achievement of student outcomes. The Department of Civil and Environmental
Engineering at South Dakota State University (SDSU) chose to use the student outcomes originally
established by ABET, known as the “a” through “k” outcomes. Evaluation of outcome “b”, “a graduating
student should have an ability to design and conduct experiments, as well as to analyze and
interpret data,” was accomplished using a well-designed rubric and is the subject of this paper.
The rubric was established and administered in CEE-346L, Geotechnical Engineering
Laboratory. The means of assessment was a particular laboratory experiment, the one
dimensional consolidation test. The rubric consisted of several indicators in each of the
categories: below expectations, meets expectations, and exceeds expectations, with a desired
average metric threshold score of 2.0 or greater. The rubric was applied to the entire class for the
selected laboratory exercise during the years of 2007, 2009, and 2011 through 2014. The class
average was used as the assessment measure relative to the threshold score. Data collected to date
indicate that the threshold score is being met; however, evaluation of the metric has prompted minor
adjustments in selected areas of the curriculum to improve scores. This paper outlines the details
of the assessment process, the metric results, and the resulting changes to the curriculum.

Accreditation Framework

The ABET student outcomes (SOs) are statements that describe what students are expected
to know and be able to apply at the time of graduation. Achievement of the SOs indicates that students
are equipped to attain the program educational objectives (PEOs) at the start of their careers.
SOs are measured and assessed routinely through national, university, department, and
curriculum level assessment processes. The SOs themselves are evaluated and updated
periodically to maintain their ties to both the department’s mission and PEOs. The assessment
and evaluation process for the student outcomes follows a continuous improvement process. The
first step is to establish student outcomes that are tied directly to the program educational
objectives. The student outcomes were adopted from the ABET Engineering Criteria 2000. The
SOs were reviewed by the faculty in the Department of Civil and Environmental Engineering
(CEE) at SDSU as well as the department’s advisory board before being adopted by the program.
SDSU’s Civil Engineering student outcomes “a” through “k” are adopted from ABET criterion
three. During the fall semester of 2008, the CEE department faculty established the following
formal methodology for reviewing and revising student outcomes. In general terms, the
following steps outline the Student Outcome Assessment Process (SDSU, 2009); a brief sketch of the
decision logic is given after the list:

1. A metric or metrics will be established for an SO.
2. A threshold value will be established for each metric by faculty and the advisory board.
3. The value of the metric will be determined for an evaluation cycle and compared to the
threshold value. Typically, the value will be determined and evaluated annually based on
a 2-year moving average value of the metric.
4. For the first evaluation cycle:
a. If the value of the metric exceeds the threshold value, then no action is necessary,
b. If the value of the metric is less than the threshold value, then the variance is
noted and possible causes for the variance will be discussed and reported by the
department faculty, but no additional action is required at this time.
5. For the second evaluation cycle:
a. For those metrics that previously exceeded the established threshold from 4a:
i. If the value of the metric again exceeds the threshold value, then no action is
necessary,
ii. If the value of the metric is now less than the established threshold, then the
response is the same as in 4b above.
b. For those metrics that previously were less than the established threshold from 4b:
i. If the metric now exceeds the threshold value, then no action is required,
ii. If the value of the metric is again less than the established threshold value, then
the situation is considered a concern. The departmental faculty will at
this time develop potential corrective action(s) to be agreed upon by
consensus.
6. For subsequent evaluation cycles:
a. If the value of the metric exceeds the established threshold value, then no action is
necessary,
b. If the value of the metric exceeds the threshold value for three consecutive
evaluations, the department will consider increasing the threshold value.
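The following is a minimal Python sketch of the decision logic in steps 4 through 6. The function name is hypothetical, “exceeds” is read as “equals or exceeds” (consistent with the 2.0 cutoff being considered met by a class average of exactly 2.0, as reported later), and the handling of a below-threshold value in subsequent cycles is an assumption, since step 6 does not address that case.

```python
# Minimal sketch of the evaluation-cycle decision logic outlined above.
# The function name is hypothetical; "exceeds" is treated as ">= threshold".

def evaluate_cycle(metric_history, threshold):
    """Suggest an action for the most recent evaluation cycle.

    metric_history: metric values (e.g., 2-year moving averages), ordered
                    oldest to newest; the last entry is the current cycle.
    threshold: threshold value set by faculty and the advisory board.
    """
    current = metric_history[-1]
    cycle = len(metric_history)

    if cycle == 1:  # step 4: first evaluation cycle
        if current >= threshold:
            return "No action necessary"
        return "Note variance; discuss and report possible causes (no action yet)"

    previous = metric_history[-2]

    if cycle == 2:  # step 5: second evaluation cycle
        if previous >= threshold:  # 5a: previously met the threshold
            if current >= threshold:
                return "No action necessary"
            return "Note variance; discuss and report possible causes (no action yet)"
        if current >= threshold:   # 5b: previously below the threshold
            return "No action required"
        return "Concern: faculty develop corrective action(s) by consensus"

    # step 6: subsequent evaluation cycles
    if all(value >= threshold for value in metric_history[-3:]):
        return "Consider increasing the threshold value"  # 6b
    if current >= threshold:
        return "No action necessary"  # 6a
    # Case not covered by step 6 above; treated like step 5b(ii) for illustration.
    return "Concern: faculty develop corrective action(s) by consensus"


# Example: three cycles evaluated against a 2.0 threshold.
print(evaluate_cycle([2.0, 2.5, 2.2], threshold=2.0))
```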

Evaluation Rubric

The CEE departmental faculty have established evaluation metrics to assess the achievement of
each of the eleven SOs. These metrics include a multitude of survey results, laboratory and course
rubrics, class assignments, interviews, and results from the Fundamentals of Engineering (FE)
examination. A critical threshold value for each metric has been established that is realistic and
attainable, yet ambitious enough to drive continuous improvement. Evaluation of ABET SO “b”, the
subject of this paper, “a graduating student should have an ability to design and conduct experiments,
as well as to analyze and interpret data,” was accomplished using a well-designed rubric.

Rubrics are scoring tools that are generally considered subjective assessments. A set of criteria
and/or standards is created to assess a student’s performance relative to an educational outcome.
The distinguishing feature of a rubric is that it allows for standardized evaluation of each student
against specified criteria, making assessment more transparent and objective. A well-designed
rubric allows instructors to assess complex criteria and identify areas of instruction that may
require revision to achieve the desired outcome.

There are numerous articles on methods for rubric development and validation within the context
of program assessment. Moskal and Leydens (2000) discuss validity and reliability in
developing rubrics. Lane (1993) discusses aligning instruction, learning, and assessment in a
methodical manner and how to define and measure competence in programs. Gardiner et al.
(2009) discuss rubrics within the framework of program assessment and accreditation. Although
focused on design, Moazzen et al. (2013) provide a literature review of engineering
assessment tools that includes rubrics.

At the time the rubric was developed, the literature was sparse on assessing SO “b” directly in
civil engineering; therefore, literature from other engineering disciplines was consulted in
constructing the rubric. Felder and Brent (2003) discuss instructional techniques for meeting
evaluation criteria for the various SOs. The Engineering Education Assessment Methodologies
and Curricula Innovation Website (2007) also discusses some strategies for SO assessment, but
in a broad, general sense. McCreanor (2001) discusses assessing program outcomes from an Industrial,
Electrical, and Biomedical Engineering perspective. Du et al. (2005) discuss meeting SO
“b” from a Mechanical and Aeronautical Engineering perspective. Review of the literature
revealed the following common features of rubrics: each focused on a stated objective
(evaluating a minimum performance level), each used a range of evaluative scores to rate
performance, and each contained a list of specific performance indicators arranged in levels that
characterized the degree to which a standard had been met.

Many engineering programs also publish their evaluation rubrics for SO “b” (formerly called
Program Outcomes by ABET) on their program websites. Although these are not vetted in the
literature, they provided a useful basis for developing the rubric discussed in this paper.
Although dated (downloaded in 2006), the following program websites were reviewed for rubrics
in developing the SO “b” rubric at SDSU:

• Auburn University, Department of Chemical Engineering,
• University of Alabama at Birmingham, School of Engineering,
• University of Delaware, Department of Civil and Environmental Engineering, and
• Michigan State University, Department of Chemical Engineering and Materials Science.

Information obtained in the literature and program websites was coupled with the CEE
department’s needs relative to the continuous improvement model established for ABET
accreditation to produce an evaluation rubric. Table 1 presents the various scoring areas of the
rubric.

Experiment Design

The author believes that conducting experiments and analyzing and interpreting the data are well
within the capabilities of an undergraduate student. However, designing an experiment with
objectives that produce a specific result is a challenge for students. Designing experiments was
therefore a key consideration in developing the metric to measure SO “b”. The CEE program at
SDSU chose a laboratory experiment that allowed the students the opportunity to choose
elements of experimental design in satisfying the SO.

The One Dimensional Consolidation Test laboratory exercise in CEE 346L – Geotechnical
Engineering Laboratory was chosen for the rubric. The laboratory exercise was initially
evaluated to have the expectation elements outlined in Table 1. The consolidation test is used to
evaluate the load deformation properties of fine-grained soils. When an area of soil is loaded
vertically, the compression of the underlying soil near the center of the loaded area can be
assumed to occur in only the vertical direction, that is, one-dimensionally. This one-dimensional
nature of soil settlement can be simulated in a laboratory test device called a consolidometer.
Using this device, one can obtain a relationship between load and deformation for a soil.
Analysis of the results ultimately allows the calculation or estimation of the settlement under
induced loads such as a building or other large structure.

The elements of experimental design included in the laboratory exercise consisted of choosing:

• A testing method that is stress or strain controlled. The student must choose one over the other.
• Analog or digital devices for deformation measurement. Both methods require design in how deformation will be measured and incorporated into the experiment.
• When to apply the next load increment. A primary premise of the test is that the soil reaches equilibrium prior to applying the next load. The students are presented with two methods for determining the time when equilibrium is reached and must decide which to use. The students are also encouraged to devise alternative methods to those presented.
• A method to manage the measurement of time. Without the aid of test automation, managing the measurement of time becomes important in the test. The students are required to devise a method by which they can do this.
• Methods to appropriately represent the data as they perform the experiment. This experiment is an excellent opportunity for the students to design plotting methods during the experiment. Given that the methods for determining soil equilibrium are graphically based, the students must devise a plotting scheme for the data as they collect test data (see the sketch following this list). This involves choosing appropriate axis ranges and scales.
• A load schedule by which they will apply the various loads to the soil specimen. The results of the test are usually plotted as a function of log-stress to obtain a linear relation. The students must use testing design to determine the starting and final values of stress.
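Because several of these choices are graphical, a minimal plotting sketch is given below. It is illustrative only: the readings are placeholder values rather than data from the SDSU laboratory, and the axis choices (deformation versus log-time within a load increment, strain versus log-stress for the completed test) reflect common consolidation-test practice rather than a scheme prescribed by the laboratory manual.

```python
# Illustrative sketch of a plotting scheme a student might devise for the
# consolidation test; the readings below are made-up placeholder values.
import matplotlib.pyplot as plt

# Dial readings for a single load increment: elapsed time (min) vs. deformation (mm)
elapsed_min = [0.1, 0.25, 0.5, 1, 2, 4, 8, 15, 30, 60, 120, 240, 480, 1440]
deformation_mm = [0.02, 0.04, 0.06, 0.09, 0.13, 0.18, 0.24, 0.29, 0.33,
                  0.36, 0.38, 0.39, 0.395, 0.40]

# End-of-increment results across the load schedule: applied stress (kPa) vs. strain (%)
stress_kpa = [12.5, 25, 50, 100, 200, 400, 800]
strain_pct = [0.3, 0.7, 1.4, 2.6, 4.3, 6.4, 8.8]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Deformation vs. log-time, used to judge when equilibrium (end of primary
# consolidation) is reached before applying the next load increment.
ax1.semilogx(elapsed_min, deformation_mm, "o-")
ax1.invert_yaxis()                      # settlement plotted downward
ax1.set_xlabel("Elapsed time (min, log scale)")
ax1.set_ylabel("Deformation (mm)")
ax1.set_title("Single load increment")

# Strain vs. log-stress, the conventional presentation of the completed test.
ax2.semilogx(stress_kpa, strain_pct, "s-")
ax2.invert_yaxis()
ax2.set_xlabel("Vertical stress (kPa, log scale)")
ax2.set_ylabel("Vertical strain (%)")
ax2.set_title("Compression curve")

fig.tight_layout()
plt.show()
```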

Once the test is complete, the students analyze the data and assemble the results in a report.
Given the size of the class and the limited number of instructors, there is no in-laboratory assessment
relative to the rubric. A cutoff score of 2.0 (Meets Expectations, Table 1) was established after
the rubric was initially developed. The rubric was then applied across the multiple laboratory
sections of the entire class for the selected laboratory exercise. The class average was used as the
assessment measure relative to the cutoff score. The rubric was originally developed to be administered
every other academic year, but was changed to every year in 2009.
Table 1. Rubric Scoring Criteria (expectations at each level are characterized by the indicators listed)

Level 1 – Below Expectations (evaluative score of 1)
• Uses unsafe and/or risky procedures
• Does not develop a systematic plan of data gathering; experimental data collection is disorganized and incomplete
• Data are poorly documented
• Does not follow experimental procedure
• Cannot select the appropriate equipment and instrumentation
• Does not operate instrumentation and process equipment, does so incorrectly, or requires frequent supervision
• Makes no attempt to relate data to theory
• Is unaware of measurement error
• Seeks no additional information for experiments other than what is provided by the instructor
• Reporting: reporting methods are poorly organized, illogical, and incomplete; uses unconvincing language; is devoid of engaging points of view; uses inappropriate word choices for the purpose; consistently uses inappropriate grammar structure and has frequent misspellings

Level 2 – Meets Expectations (evaluative score of 2)
• Observes occasional unsafe laboratory procedures
• Development of the experimental plan does not recognize the entire scope of the laboratory exercise; therefore data gathering is overly simplistic (not all parameters affecting the results are considered)
• Not all data collected are thoroughly documented, units may be missing, or some measurements are not recorded
• Experimental procedures are mostly followed, but occasional oversight leads to loss of experimental efficiency or loss of some data
• Needs some guidance in selecting appropriate equipment and instrumentation
• Needs some guidance in operation of instruments and equipment
• Needs some guidance in applying appropriate theory to data; may occasionally misinterpret the physical significance of the theory or variables involved and may make errors in conversions
• Is aware of measurement error but does not systematically account for it, or does so at a minimal level
• Seeks reference material from a few sources, mainly the textbook or the instructor
• Reporting: reporting methods are mostly organized, with areas that are incomplete; uses language that mostly supports means and methods; occasionally makes engaging points of view; mostly uses appropriate word choices for the purpose; occasionally uses incorrect grammar structure and/or misspellings

Level 3 – Exceeds Expectations (evaluative score of 3)
• Observes the established laboratory safety plan and procedures
• Formulates an experimental plan of data gathering to attain the stated laboratory objectives (develops a plan, tests a model, checks performance of equipment)
• Carefully documents data collected
• Develops and implements logical experimental procedures
• Independently selects the appropriate equipment and instrumentation to perform the experiment
• Independently operates instruments and equipment to obtain the data
• Independently analyzes and interprets data using appropriate theory
• Systematically accounts for measurement error and incorporates error into analysis and interpretation
• Independently seeks additional reference material and properly references sources to substantiate analysis
• Reporting: reporting methods are well organized, logical, and complete; uses convincing language; makes engaging points of view; uses appropriate word choices for the purpose; consistently uses appropriate grammar structure and is free of misspellings
It should be emphasized that the rubric was used to evaluate the department’s student outcomes,
not the course outcomes of the particular course in which the rubric was administered. The
scores/grades that students received on the laboratory assignment were assigned relative to the
course outcomes. Therefore, when the rubric was applied, the laboratory assignments were
graded twice, once for each evaluation purpose. As such, students were not aware of the assessment
relative to the department’s SO “b”. This was by design, so as not to bias students’ effort and
work on the particular laboratory assignment.
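As a rough illustration of how the class-average assessment against the 2.0 cutoff might be computed, the sketch below assumes each report’s score is the simple mean of its indicator scores from Table 1 and the class metric is the mean of the report scores; the paper does not state how the indicator scores are combined, and the scores shown are made-up placeholders.

```python
# Hypothetical sketch of the class-average assessment against the 2.0 cutoff.
# A simple mean of indicator scores per report is assumed; the scores below
# are placeholders, not data from the SDSU laboratory sections.
from statistics import mean, stdev

# Each inner list holds one report's indicator scores (1, 2, or 3 per Table 1).
report_indicator_scores = [
    [2, 3, 2, 2, 1, 2, 2, 3, 2, 2],
    [3, 3, 2, 3, 2, 3, 2, 2, 3, 3],
    [1, 2, 2, 1, 2, 2, 1, 2, 2, 2],
]

CUTOFF = 2.0  # Meets Expectations (Table 1)

report_scores = [mean(scores) for scores in report_indicator_scores]
class_average = mean(report_scores)
class_stdev = stdev(report_scores)

print(f"Report scores: {[round(s, 2) for s in report_scores]}")
print(f"Class average: {class_average:.2f} (std. dev. {class_stdev:.2f})")
print("Outcome met" if class_average >= CUTOFF else "Outcome not met")
```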

Results

The constructed rubric was initiated in the 2006-2007 academic year in multiple laboratory
sections. Laboratory sections were taught by the same Teaching Assistant to avoid epistemic
variation. The laboratory data were collected by the students in the first week and subsequently
analyzed in a second week of the laboratory. The students’ reports were submitted for grading
one week after that. Thirty-three laboratory reports were evaluated with a resulting average score
of 2.0 and a standard deviation of 0.9. Therefore, the student outcome for 2007 was achieved
and a baseline for future evaluation was established. Although the cutoff was met, the class
average was exactly at the cutoff score, and enhancements were qualitatively deemed advisable to
address the Level 1 performers. Therefore, selected technical aspects of the lecture materials were
enhanced to address areas of the rubric that were scored lower than desired. These included:

• Terminology was updated to the standard of practice and synchronized with the laboratory Teaching Assistant’s lecture notes.
• Figures in the course lecture notes and the self-developed laboratory manual were edited for consistency.
• Photographs were added as figures to the laboratory manual.
• Additional reading materials were placed on the course webpage (as a Desire2Learn page).

The rubric was re-administered in the 2008-2009 academic year, and then every
academic spring starting in 2011. Table 2 summarizes the results of the rubric effort. As shown
in Table 2, the average scores are consistently above the threshold of 2.0. Given that the averages
increased and the standard deviations decreased compared to the baseline, the improvements
implemented after the first administration of the rubric were reflected in evaluated student
performance. Most notable was the improvement in the range of student performance; fewer
students performed at Level 1. The student outcome was considered achieved
and no changes were made to the lecture materials thereafter.
Table 2. Rubric Results by Year

Semester Administered   Number of Scored Reports   Average   Standard Deviation
Spring 2007                        33                2.0            0.9
Spring 2009                        48                2.5            0.4
Spring 2011                        50                2.2            0.5
Spring 2012                        46                2.4            0.6
Spring 2013                        45                2.3            0.4
Spring 2014                        31                2.1            0.6
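For illustration, the 2-year moving average described in step 3 of the assessment process can be computed from the class averages in Table 2. Because the rubric was not administered every calendar year, the sketch below simply averages each administration with the previous one; this pairing is an assumption, since the paper does not state how gaps between administrations are handled.

```python
# Illustrative moving-average computation over the class averages in Table 2.
# Pairing each administration with the previous one is an assumption.
yearly_averages = {
    2007: 2.0,
    2009: 2.5,
    2011: 2.2,
    2012: 2.4,
    2013: 2.3,
    2014: 2.1,
}

THRESHOLD = 2.0
years = sorted(yearly_averages)

for previous, current in zip(years, years[1:]):
    moving_avg = (yearly_averages[previous] + yearly_averages[current]) / 2
    status = "meets" if moving_avg >= THRESHOLD else "below"
    print(f"{previous}-{current}: moving average {moving_avg:.2f} ({status} threshold)")
```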

Conclusions

A well-established evaluation metric, a rubric in this case, can be used to both evaluate and
enhance Student Outcomes in an ABET accreditation process. Based on the experience from
the process outlined in this paper, the following conclusions are offered:

• Evaluation metrics should be conceived based on the continuous improvement process of: desired outcome → devise metrics → establish threshold and actions → first evaluation cycle and actions, if necessary → subsequent evaluation cycles and actions, if necessary.
• Evaluation metrics can take many forms; choose the appropriate metric to measure the desired outcome.
• The rubric used to assess ABET SO “b” allowed for evaluation relative to meeting the desired outcomes and also allowed the faculty to review the curriculum and address specific areas of concern.
• Stated outcomes are reasonably assessed by rubric scoring.

Acknowledgement

This manuscript was greatly improved by the enhancements suggested by one of the reviewers,
whose thorough and thoughtful review comments are appreciated.

Epilog

During the ABET CE program review at SDSU in the fall of 2009, the Program Evaluator (PEV)
made complimentary comments regarding the rubric presented in this paper. In fact, the main
reason for the change in administering the rubric from biennial to yearly in 2009 was based on
discussions with the PEV. It is with the encouragement of that PEV that this paper is being
published. The department now has six cycles of continuing data and looks forward to its
program review in 2015.
References

Engineering Education Assessment Methodologies and Curricula Innovation Website, “Learning
Outcomes/Attributes, ABET b—Designing and conducting experiments, analyzing and interpreting data”,
University of Pittsburgh, accessed January 2007.

Felder, R.M., and Brent, R. (2003). “Designing and Teaching Courses to Satisfy the ABET Engineering Criteria”,
Journal of Engineering Education, 92:1, 7-25.

Lane, S. (1993). “The Conceptual Framework for the Development of a Mathematics Performance Assessment
Instrument”, Educational Measurement: Issues and Practice, Volume 12, pages 16–23.

Gardiner, L. R., Corbett, G., and Adams, S. J. (2009). “Program Assessment: Getting to a Practical How-To Model”,
Journal of Education for Business, Volume 85, Issue 3.

McCreanor, P.T. (2001). “Quantitatively Assessing an Outcome on Designing and Conducting Experiments and
Analyzing Data for ABET 2000”, Proceedings, Frontiers in Education Conference, October 10 – 13, 2001, Las
Vegas, Nevada.

Moazzen, I., Hansen, T., Miller, M., Wild, P., Hadwin, A., and Jackson, L. A. (2013). “Literature Review on
Engineering Design Assessment Tools”, Proceedings of the 2013 Canadian Engineering Education Association
(CEEA13) Conference, Montreal, QC, June 17-20, 2013.

Moskal, B. M. and Leydens, J. A. (2000). “Scoring Rubric Development: Validity and Reliability”, Practical
Assessment, Research & Evaluation, 7(10).

South Dakota State University, Civil Engineering Program. (2009) “ABET Self-Study Report”, Confidential
Document.

Du, W. Y., Furman, B. J., and Mourtos, N. J. (2005). “On the Ability to Design Engineering Experiments”, 8th
UICEE Annual Conference on Engineering Education, Kingston, Jamaica, February 7-11, 2005.

Biographical Information

Allen L. Jones, PE PhD, Professor, South Dakota State University, Box 4419, CEH 212, Brookings, SD 57006, 605-
688-6467, [email protected]
