BSBITU311 Use Simple Relational Databases

This document discusses the importance of validity and reliability when developing instruments for research studies. It defines validity as the degree to which an instrument accurately measures what it intends to measure. The three main types of validity discussed are content validity, construct validity, and criterion validity. Reliability is defined as the degree to which an instrument yields consistent results. Common measures of reliability include internal consistency, test-retest, and inter-rater reliabilities. The document provides details on how to understand and test for each type of validity and reliability.


BSBITU311 Use Simple Relational Databases & BSBINM301 Organise Workplace Information

Q1 Answer
Records provide evidence of what an organisation has done, and why. Making and keeping full and
accurate records of business activities means that your organisation can account for its actions, meet
legislative requirements, and make informed and consistent decisions. Records can be in any format,
including hard copy files, letters, notes (including file notes and informal notes), emails, databases,
photographs, text messages, and social media posts.
Organisations should make and keep records of their business activities, transactions and decisions.
A record should be made if a matter relates to the organisation’s work, or may be required for future
reference.
For a meeting or conversation, a record (file note) may need to be created manually. These notes should include details of the meeting or conversation, the matters discussed, any decisions or commitments made, and follow-up actions.
Records should be managed in a designated system/s or register, so information can be kept
together and be easily accessed.
An organisation’s recordkeeping practices should be set out in formalised policies and procedures.
These should cover activities such as: how records should be captured/filed, storage of hard copy or digital
records, security and access, recovering records following a disaster, retention periods, and training for staff
and volunteers.
Records must be retained with consideration given to accountability and legal requirements, and
business needs.
Recordkeeping requirements are statements specifying which records are to be created and
maintained by public offices. These requirements may be set out in:
 legislation and regulations
 whole-of-government policies and procedures
 major government or industry standards and codes of practice imposed on or adopted by the organisation
 internal policies, procedures, processes or business rules
 agreements and other contracts.
Sometimes the public expects government to create and keep certain records of its activities as part of the provision of services and citizens’ rights and entitlements. These expectations reflect either an interest in the records themselves as sources for research, or the desire for government to be transparent and accountable through good recordkeeping.
Types of recordkeeping requirements
Recordkeeping requirements usually relate to:
 creating a record
 capturing a record, including information that needs to be captured
 providing or accepting supporting documentation
 maintaining a record, including security, storage and handling
 providing access to records
 retention and disposal of records.
Requirements can be explicit, but are more often implicit.
Sources of recordkeeping requirements
There are many sources of recordkeeping requirements, and this page outlines only some of these.
 NSW legislation
 Your organisation’s enabling legislation
 Laws that your organisation is responsible for overseeing
 Administrative legislation and associated regulations such as State Records Act 1998, Government
Information (Public Access) Act 2009, Privacy and Personal Information Protection Act 1998,
Government Sector Employment Act 2013, Electronic Transactions Act 2000
 Memoranda and circulars, such as Premier’s Memoranda and Department of Premier and Cabinet Circulars
 NSW Treasurer’s Directions and Treasury Circulars, via the Administrative Requirements Portal
 Retention and disposal authorities
 Relevant audit reports
 The NSW Ombudsman’s Good conduct and administrative practice – guidelines for state and local government

Q2 Answer
In 2016, there were 712,884 enrolments generated by 554,179 full-fee paying international students in Australia on a student visa. This represents a 10.9% increase on 2015 and compares with an average annual enrolments growth rate of 6.5% per year over the preceding ten years. There were 414,292 commencements (new enrolments) in 2016, representing a 10.0% increase on 2015 figures. This compares with the average annual commencements growth rate of 7.1% per year over the preceding ten years.

The higher education sector had the largest share of enrolments at 43.0%. Enrolments and commencements in the sector increased by 12.9% and 13.2% respectively. China and India accounted for 36.8% and 14.6% respectively of enrolments by students in higher education. Bachelor degree commencements grew by 11.6% in 2016. Postgraduate research commencements increased by 3.8%, while other postgraduate commencements increased by 18.9% on 2015 figures.

The VET sector accounted for 26.3% of total enrolments and 28.9% of total commencements. Enrolments and commencements in the sector increased 11.6% and 10.1% respectively in 2016. India had the largest share of total enrolments (14.7%) and total commencements (13.4%). The Republic of Korea was the next largest source country for enrolments with 8.6%, followed by Thailand (8.3%) and China (7.4%).

The English Language Intensive Courses for Overseas Students (ELICOS) sector accounted for 21.2% of total enrolments and 27.8% of total commencements in 2016. Enrolments and commencements grew by 4.3% and 3.6% respectively in the sector. China was the largest ELICOS market in the period with a 27.7% share of enrolments and 26.9% of commencements. Brazil was the next largest nationality for ELICOS enrolments with 10.2%, followed by Thailand (7.8%) and Colombia (7.5%).

In 2016, the schools sector accounted for 3.3% of total enrolments and 3.0% of total commencements. Enrolments and commencements in the sector grew by 13.6% and 12.7% respectively. By nationality, China contributed the largest share of enrolments in schools at 51.8%, followed by Vietnam and the Republic of Korea at 9.4% and 5.5% respectively.

Enrolments and commencements in non-award courses (such as exchange and foundation programs) increased by 17.3% and 20.0% respectively. China (35.2%), the USA (13.3%) and the UK (4.9%) accounted for more enrolments in non-award courses than any other nationality. Commencements from China and the USA increased by 47.8% and 4.2% respectively.

Q3 Answer
Established Melbourne is successfully growing housing near jobs, services and transport, and this will be enhanced through the implementation of Plan Melbourne.
• The success of activity centre policy shows that strategic planning takes time and requires statutory
implementation and a clear logic linking objectives to plans.
• Government is seeking to maintain high quality urban environments as the city increases its density. The
introduction of apartment standards, garden area requirements and new set back and height requirements in
the central city are important steps in achieving liveability while the city changes.
• Competition for land between economic and residential land uses requires ongoing monitoring.

Validity and reliability are two important factors to consider when developing and testing any instrument (e.g., a content assessment test or questionnaire) for use in a study. Attention to these considerations helps to ensure the quality of your measurement and of the data collected for your study.

Understanding and Testing Validity

Validity refers to the degree to which an instrument accurately measures what it intends to measure. Three common types of validity for researchers and evaluators to consider are content, construct, and criterion validity.
 Content validity indicates the extent to which items adequately measure or represent the content
of the property or trait that the researcher wishes to measure. Subject matter expert review is
often a good first step in instrument development to assess content validity, in relation to the
area or field you are studying.
 Construct validity indicates the extent to which a measurement method accurately represents a construct (e.g., a latent variable or phenomenon that cannot be measured directly, such as a person’s attitude or belief) and produces observations distinct from those produced by a measure of another construct. Common methods to assess construct validity include, but are not limited to, factor analysis, correlation tests, and item response theory models (including the Rasch model).
 Criterion-related validity indicates the extent to which the instrument’s scores correlate with an
external criterion (i.e., usually another measurement from a different instrument) either at present
(concurrent validity) or in the future (predictive validity). A common measurement of this type of
validity is the correlation coefficient between two measures.
Often, when developing, modifying, and interpreting the validity of a given instrument, researchers and evaluators test for evidence of several different forms of validity collectively, rather than viewing or testing each type individually (e.g., see Samuel Messick’s work regarding validity).
Understanding and Testing Reliability
Reliability refers to the degree to which an instrument yields consistent results. Common measures of
reliability include internal consistency, test-retest, and inter-rater reliabilities.

 Internal consistency reliability looks at the consistency of the score of individual items on an instrument with the scores of a set of items, or subscale, which typically consists of several items measuring a single construct. Cronbach’s alpha is one of the most common methods for checking internal consistency reliability. Group variability, score reliability, number of items, sample size, and the difficulty level of the instrument can also affect the Cronbach’s alpha value.
 Test-retest reliability measures the correlation between scores from one administration of an instrument and another, usually within an interval of 2 to 3 weeks. Unlike pre-post tests, no treatment occurs between the first and second administrations of the instrument when assessing test-retest reliability. A similar type of reliability, called alternate-forms reliability, involves using slightly different forms or versions of an instrument to see whether the different versions yield consistent results.
 Inter-rater reliability checks the degree of agreement among raters (i.e., those completing items on an instrument). More than one rater may be involved, for example, when several people conduct classroom observations using an observation protocol, or score an open-ended test using a rubric or other standard protocol. Kappa statistics, correlation coefficients, and the intra-class correlation (ICC) coefficient are some of the commonly reported measures of inter-rater reliability.
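To make the internal consistency discussion above concrete, here is a minimal sketch of Cronbach’s alpha computed from a small invented dataset, where each row is a respondent and each column is an item on one subscale; the scores are illustrative only:

```python
# Hypothetical subscale data: 6 respondents x 5 items (made-up scores).
scores = [
    [3, 4, 3, 4, 3],
    [5, 5, 4, 5, 5],
    [2, 3, 2, 2, 3],
    [4, 4, 5, 4, 4],
    [3, 2, 3, 3, 2],
    [5, 4, 5, 5, 4],
]

def variance(values):
    """Sample variance (n - 1 denominator)."""
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / (n - 1)

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])
    item_vars = [variance([row[i] for row in rows]) for i in range(k)]
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

alpha = cronbach_alpha(scores)
print(round(alpha, 3))
```

Values of alpha above roughly 0.7 are conventionally treated as acceptable internal consistency, though the factors listed above (group variability, number of items, sample size) should temper any interpretation.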
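Similarly, the inter-rater agreement described above can be illustrated with Cohen’s kappa for two raters assigning the same categorical labels; the ratings below are invented for illustration:

```python
# Hypothetical ratings from two raters on ten items.
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]

def cohen_kappa(a, b):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    labels = sorted(set(a) | set(b))
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of each rater's marginal proportions per label.
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

kappa = cohen_kappa(rater_a, rater_b)
print(round(kappa, 3))
```

Unlike simple percent agreement, kappa corrects for the agreement two raters would reach by chance alone, which is why it is preferred for reporting inter-rater reliability.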
Developing a valid and reliable instrument usually requires multiple iterations of piloting and testing, which can be resource intensive. Therefore, when available, I suggest using already established valid and reliable instruments, such as those published in peer-reviewed journal articles. However, even when using these instruments, you should re-check validity and reliability using the methods of your study and your own participants’ data before running additional statistical analyses. This process will confirm that the instrument performs as intended in your study with the population you are studying, even when your purpose and population resemble those for which the instrument was initially developed.
