
Cambridge International Advanced Subsidiary Level and Advanced Level

9713 Applied ICT June 2014


Principal Examiner Report for Teachers

APPLIED ICT
Paper 9713/11
Written A

Key Messages

Overall, candidates appeared to have been well prepared for this assessment.

Candidates showed a reasonable level of understanding though there are still areas of the syllabus which
appear to be left untouched by many candidates. This was particularly noticeable with the comparison of
OMR and OCR, online banking, aggregated information and the use of software in computer aided
assessment. Comparison of printers, home-working, batch processing and payroll also seemed to be topics
that many candidates struggled with.

There still appeared to be a degree of rote-learned answers from previous years’ mark schemes. Rote-
learning mark schemes is not recommended as, although questions might cover a similar topic, the
questions themselves might change considerably. This was particularly the case with Questions 3a and
10b.

Candidates must read questions carefully before answering. This was particularly the case with Question
5c where some candidates answered from the point of view of the customer rather than the bank. In this
paper, as with any exam paper at this standard, candidates are required to show a level of understanding as
well as a depth of knowledge. As has been highlighted in previous reports, this cannot be achieved by
simply repeating mark points from previous mark schemes.

Centres are again reminded that this is ‘Applied ICT’ and candidates are expected to apply their knowledge
to the context of the scenario. It is important for candidates to realise that they need to refer back to the
scenario when answering questions. This was particularly the case with Question 10b where a number of
candidates seemed to ignore the scenario and wrote generic answers.

The standard of presentation of answers was poorer than in previous sessions with some of the writing, on
occasions, being almost unintelligible. Candidates must be aware that if the answer is illegible then no credit
can be given by the examiner.

Comments on specific questions

Question 1

Candidates did fairly well on this question, particularly with part (c).

(a) The majority of candidates achieved the mark but it was disappointing to see the number of
candidates who did not understand what a check digit was. Equal numbers of these put flight
number or number of passengers for their response. Occasionally name of destination was
suggested.

(b) The majority of candidates achieved this mark but it was again disappointing to see the number of
candidates who did not understand what a format check was. Equal numbers of these put bar
code number or name of destination for their response.

(c) Candidates did better on this part with the vast majority of candidates gaining the mark, but it was
still surprising to see the number of candidates who did not understand the most appropriate use of
a range check with many suggesting any of the other three options.
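For illustration only, the three checks referred to in this question can be sketched as follows. This is a minimal sketch, not part of the mark scheme; the field names, rules and sample values are all invented:

```python
# Illustrative validation checks (hypothetical fields and rules, not from the exam paper).

def check_digit_ok(barcode: str) -> bool:
    """Check digit: the last digit is recomputed from the others (simple modulo-10 sum)."""
    digits = [int(d) for d in barcode]
    return sum(digits[:-1]) % 10 == digits[-1]

def format_check_ok(flight_number: str) -> bool:
    """Format check: two letters followed by three digits, e.g. 'BA123'."""
    return (len(flight_number) == 5
            and flight_number[:2].isalpha()
            and flight_number[2:].isdigit())

def range_check_ok(passengers: int, low: int = 0, high: int = 400) -> bool:
    """Range check: the value must lie between sensible limits."""
    return low <= passengers <= high

print(check_digit_ok("27504"))   # True: 2+7+5+0 = 14, and 14 mod 10 matches the final digit 4
print(format_check_ok("BA123"))  # True
print(range_check_ok(5000))      # False: outside the assumed 0-400 range
```

The point of the distinction is that each check rejects a different kind of error: a check digit catches transcription errors, a format check catches wrongly structured entries, and a range check catches implausible values.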

© 2014
Cambridge International Advanced Subsidiary Level and Advanced Level
9713 Applied ICT June 2014
Principal Examiner Report for Teachers
Question 2

The vast majority of candidates achieved all four marks for this question. The most common incorrect
answer was the first option in the list.

Question 3

This question was fairly well answered with more able candidates performing very well, particularly on part (b).

(a) Generally, candidates were able to gain at least one mark but many gave three different types of
user feedback, answering a different question from a previous paper.

(b) Most candidates gained at least two marks on this question. Some candidates were able to name
the type of documentation but did not elaborate on the content. Others seemed to be confused
and gave User documentation as a type of technical documentation. A large number of candidates
omitted this question, more than any other question.

Question 4

Candidates, generally, did not perform well on this question with few candidates achieving more than half
marks. Part (a) produced better responses than part (b).

(a) A number of candidates did not gain any marks as they responded without mentioning either
method of input. They just gave the features of a well-designed paper-based form. Many answers
were generalised descriptions of such a form without going into specifics such as easy to read,
suitable colour schemes etc.

(b) This part was poorly answered, with many candidates failing to make comparisons. Most
candidates just described what each method was, with many candidates mixing up what OMR is
with what OCR is. A lot of answers focused on cost and many just said quicker or easier without
any qualification.

Question 5

This question was fairly well answered with candidates performing better on part (a) than on parts (b) and
(c).

(a) Many candidates gained at least two marks on this part. A number of candidates were able to
describe phishing and pharming without being able to describe spyware accurately.

(b) This question was not well answered with most candidates only making one valid point.
Candidates frequently provided incomplete or vague answers.

(c) Candidates did not do as well as expected with many only making one valid point. Candidates
often lost marks by writing that the bank did not need any premises (instead of fewer premises) or
that they would pay their workers less (instead of paying fewer workers).

Question 6

This question was answered reasonably well with the majority of candidates performing better on part (a).

(a) Most candidates seemed to know what anonymised information was but very few knew what
aggregated information was.

(b) A significant number of candidates omitted this question and those that did answer seemed not to
know what aggregated information was. Many wrote about security and hacking rather than how
useful the data might be.

(c) Candidates did relatively well on this question with many achieving at least two marks. The most
common answers were for duty of confidence and keeping data secure. A number of responses
just listed data protection rules.

Question 7

This question was answered quite well with the majority of candidates achieving one or two marks.
However, despite the guidance given in the question a number of candidates wrote about increased
unemployment and employment.

Question 8

On the whole, this question was not answered well; in particular, parts (c) and (d).

(a) Candidates performed the best on this part with most managing at least two features. However,
many did not give an adequate use. There was little evidence of a genuine understanding of either
spreadsheets or the scenario. A worrying number of candidates gave features of a database rather
than a spreadsheet.

(b) Most candidates did not really seem to understand what analysing results meant. Many just
reworded the question and consequently gave one-word answers, writing that spreadsheets
were more accurate or quicker without saying in what way.

(c) Most candidates did not perform well on this part. A number gave general features of word
processing without addressing the requirements of the question.

(d) This part was, surprisingly, poorly answered. Most candidates failed to compare with a specific
printer just repeating the question’s phrase ‘other printers’. Many said cheaper or faster without
saying than which type of printer.

Question 9

This question was answered quite well with the majority of candidates doing better on part (b) than part (a).

(a) Candidates did not do as well on this part with less than half the candidates gaining one mark and
few getting two or more. Many gave lower costs as an answer without saying in what way.

(b) Candidates did a lot better on this question though some squandered their mark-scoring
opportunities by giving three different ways that a worker could arrange their day to suit
themselves.

Question 10

Generally, candidates did not do as well as expected on this question. Part (b), in particular, tended to be
poorly answered.

(a) Most candidates gained the mark, though a number of candidates reworded the question simply
saying that there was no order.

(b) Many candidates gave answers relating to batch process control and the mixture of raw materials.
Some tried, without much success, to describe batch process control in terms of a payroll. Those
that tried to provide an answer relating to batch processing ignored the scenario completely
thereby limiting their opportunity to achieve marks.

Question 11

Generally, the majority of candidates managed to achieve at least one mark but only the more able
candidates were able to achieve more than half marks. Having said that, there were some excellent answers
from these candidates.


APPLIED ICT
Paper 9713/12
Written A



APPLIED ICT
Paper 9713/13
Written A

Key Messages

Candidates showed a reasonable level of understanding with their responses to questions. However, there
are still areas of the syllabus which appear to be left untouched by many candidates. These included the
benefits to the company of working from home, and confidentiality of data. Other topics included facsimile
transmission, batch processing, financial reports, systems analysis and testing.

Fewer candidates appeared to have rote-learned answers from previous years’ mark schemes. Rote-
learning mark schemes is not recommended as, although questions might cover a similar topic, the
questions themselves might change considerably. This was particularly the case with Question 6c. In this
paper, as with any exam paper at this standard, candidates are required to show a level of understanding as
well as a depth of knowledge. As has been highlighted in previous reports, this cannot be achieved by
simply repeating mark points from previous mark schemes. A number of candidates explained batch
process control instead of batch processing.

Overall, however, marks were distributed quite well throughout the paper with more able candidates being
able to score well.

Comments on specific questions

Question 1

Candidates did fairly well on this question. The less able candidates managed to score a few marks with the
stronger candidates achieving ten or more. Part (b) was not well answered and, to a lesser extent, part (e).

(a) Candidates did well on this question with the majority achieving at least two marks. Most answered
product advertising and gained a mark for defining this, but only the most able went on to make
a further valid point.

(b) Most candidates lost marks by ignoring the requirement to describe the information, concentrating
instead on a description of the software despite the question saying they only needed to identify the
software and describe the information.

(c) This was well answered with most candidates gaining two or more marks. The less able
candidates confused this with generally advertising on someone else’s website.

(d) Most candidates did not answer this part as well as part (c). However, the more able candidates
tended to gain at least two marks. A surprising number of candidates did not attempt this part
despite some of these having answered part (c) quite well. There were many vague answers such
as cheaper, more effective etc.

(e) The majority of candidates gained one of the four marks available but there were a number of
misunderstandings with many candidates saying it would be more expensive to advertise on
someone else’s website. Other points tended to be quite vague or incomplete, lacking the detail
required at this level.

Question 2

Most candidates did quite well on part (a), but many found part (b) very challenging.

(a) Many candidates achieved a mark for employees being able to arrange their work schedule to suit
themselves but laboured the point, repeating it in different words. Another popular
answer was saving money travelling to work but again this was sometimes repeated in other
answers.

(b) This was a challenging question concentrating as it did on the saving of costs to the company.
Most answers were quite vague or incomplete. Some candidates seemed to think that they could
pay lower wages if employees were working from home.

Question 3

This question was answered quite well with the more able candidates performing very well. Candidates did
slightly better on part (b) compared to part (a).

(a) A significant number of candidates did not attempt this question. However, the remainder
answered this question better than has been the case with similar questions on previous papers.
Many candidates mentioned sensors but lacked detail in the description of their use. Few seemed
very clear about the part the computer plays.

(b) The vast majority of candidates gave at least one good benefit with half the candidates giving at
least two. A number of answers, such as greater accuracy or continuous monitoring, were
rephrased and offered again as a second answer. Candidates should realise that the mark for any
mark point can only be awarded once.

Question 4

Candidates, generally, did not perform well on this question, with several candidates not attempting it. Many
candidates concentrated on not sharing the information without giving other points. A number of candidates
just quoted elements of the Data Protection Act.

Question 5

This question was not answered very well, with candidates seeming to be unaware of what a fax machine
consists of. Many candidates concentrated on how a fax is sent without mentioning specific features of a fax
machine. Again, a substantial number of candidates did not attempt this question.

Question 6

Overall this question proved to be quite difficult, especially parts (c) and (d). The majority of candidates did,
however, perform well on part (b).

(a) Most candidates seemed to ignore the requirement of the question to name items of information
related to their pay. Many items were listed which would be found on a master file but not related
to their pay.

(b) This part was answered much better, with the majority of candidates giving at least one correct
response.

(c) Many candidates found this part challenging. They answered as if it were a question on batch
process control, giving answers related to the mixing of quantities of raw materials, thereby ignoring
the scenario, the stem of the question and the preceding parts of the question which led into this
part.

Others were quite vague about what batch processing is with some trying to explain the processing
of payrolls in terms of batch process control.

(d) This was the least well answered question on the paper. This should not have been surprising with
so few candidates understanding batch processing. Many answers revolved around hacking and
most concentrated on batch processing without referring to online processing.


(e) A large number of candidates did not attempt this part of the question. However, of those that did
respond, the majority made at least one good point, with the more able making more than three.
A number of candidates gave general overviews of the process rather than describing the
processing in detail.

(f) Again, a large number of candidates did not attempt this part of the question. The remainder,
however, did well, with many making at least one good point and the most able scoring two or three
marks. Some confused these reports with a payslip and listed the items that would be found there.

Question 7

This question was quite well answered with the majority of candidates achieving at least two marks. Many
gained marks for naming a way but did not go on to describe it in sufficient detail.

Question 8

This question was answered reasonably well by many candidates. Performance was equally split on both
parts (a) and (b).

(a) Candidates did quite well on this part with many achieving at least one mark. Where marks were
lost, it was due to ignoring the scenario and referring to, for example, fewer shops and high street
branches.

(b) Most candidates gained at least one mark with some more able candidates giving two or three
good answers. There were some confusing answers as some candidates wrote about the expense
of having a website installed. There were a number of vague references to hacking.

Question 9

Overall this question was not well answered although the majority of candidates did better on part (b) than
part (a).

(a) A substantial number of candidates did not answer this part. Of the candidates who did attempt it
many ignored the requirements of the question. Some gave methods of research despite the
question saying this had been completed. Others gave various stages of the systems life cycle.
Only the most able appeared to understand the requirements of the question but even these did
not make more than two good points.

(b) Most candidates achieved at least one mark with the more able candidates usually gaining three
marks. Most candidates were limited in the number of validation rules they could describe and
often described a rule without providing an example.
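The final point above can be illustrated with a minimal sketch of two validation rules, each accompanied by a worked example. The rules and sample values are invented, not taken from the question or mark scheme:

```python
# Two hypothetical validation rules, each with a worked example
# (illustrative only; not taken from the exam paper).

def presence_check(value: str) -> bool:
    """Presence check: the field must not be left empty."""
    return value.strip() != ""

def lookup_check(title: str) -> bool:
    """Lookup check: the value must be one of a fixed list of allowed entries."""
    return title in {"Mr", "Mrs", "Ms", "Dr"}

print(presence_check("Smith"))  # True - a value was entered
print(presence_check("   "))    # False - the field is effectively empty
print(lookup_check("Dr"))       # True
print(lookup_check("Sir"))      # False - not in the allowed list
```

Pairing each rule with a concrete accepted and rejected value, as here, is exactly the kind of example the question rewarded.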


APPLIED ICT
Paper 9713/02
Practical Test A

General comments

The majority of candidates attempted and completed all elements of the paper. There were significant
differences in the range of results from Centre to Centre and from candidate to candidate within Centres.
The paper gave a good spread of marks. Candidate errors were spread evenly over the sections of the
paper, although the application of candidates’ knowledge to produce appropriate database structures with
the given naming conventions caused a number of candidates some issues. A significant number of
candidates omitted this section.

A number of candidates did not print their name, Centre number and candidate number on some of the
documents submitted for assessment. Without clear printed evidence of the author of the work, Examiners
were unable to award any marks for these pages. It is not acceptable for candidates to annotate their
printouts by hand with their name as there is no real evidence that they are the originators of the work.

A few candidates omitted one or more of the pages from the required printouts. Some partially completed
the database report but only printed the first page. Some candidates did not realise that if printouts
contained a significant number of pages it might be worth checking their table structure or query criteria;
extremes of over 200 and even more than 2000 pages were seen. A small number of candidates
submitted multiple printouts for some of the tasks and failed to cross out those printouts that were draft
copies. Where multiple printouts are submitted, Examiners will only mark the first occurrence of each page.
Candidates must be aware of the dangers of cutting and pasting cropped versions of evidence in order to
save space on a sheet. It often looks impressive but this invariably leads to the loss of crucial data which
could achieve marks. Overall the paper performed very well.

Comments on specific questions

Question 1

This question was completed well by most candidates, as evidenced by their subsequent printouts of this
evidence document.

Question 2

This question was completed well by almost all candidates.

Question 3

This question was completed well by almost all candidates. The majority used =LEFT(B2,3) although a
significant number used a MID function, with functions like =MID(B2,1,3). The evidence of this was not often
shown for each employee, as a significant number of candidates did not show any evidence of replicating
this function.

Question 4

This question was completed well by many candidates. There was a range of acceptable answers shown
by candidates: =C2&REPT(0,6-LEN(D2))&D2 was seen a number of times, as was a similar function using
=CONCATENATE(C2,REPT(0,6-LEN(D2)),D2). Other candidates set some formatting and fixed the data
length using TEXT functions like =TEXT(C2,"LLL")&TEXT(D2,"000000"). The evidence of replicating this
was not shown by many candidates.
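All of these approaches pad the numeric part to six digits before joining it to the code. A minimal sketch of that underlying logic, with invented cell values standing in for C2 and D2:

```python
# The logic behind the padding formulas (illustrative only; cell values invented).
# REPT("0", 6-LEN(D2)) prepends zeros until the number is six characters long,
# while TEXT(D2, "000000") formats it directly to six digits.

c2, d2 = "STO", 42        # hypothetical cell contents

rept_style = c2 + "0" * (6 - len(str(d2))) + str(d2)   # build padding by repetition
text_style = c2 + f"{d2:06d}"                          # format to a fixed width

print(rept_style)   # STO000042
print(text_style)   # STO000042 - both approaches give the same result
```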

Question 5

The creation of the table was completed well by the vast majority of candidates. A small number did not call
the table Employees. A significant number of candidates did not follow the case or naming conventions used
in this step, instead relying on the import wizard to create the field names from the original data file. The
most appropriate data types were selected by the majority of candidates. Most candidates showed evidence
of their table structure, but a significant number omitted the datasheet view of this table to show that the
contents had been imported as specified.

Question 6

Despite the clear instruction as to which fields to include in this table, a significant number of candidates
added or omitted some of these fields. Candidates were required to select suitable field names that fitted
within the naming conventions (as set out in Question 5), but many added address fields like "Address 1"
without the underscore used to delimit the words. Some candidates had clearly examined the data and
selected suitable fieldnames but others included fields like postcode or zipcode which were not appropriate
to the data given as some entries in this field contained other address data (as found in different parts of the
world). Some successful candidates manipulated the data, moving data items from one address field to
another to ensure the consistency of the data, which was an acceptable solution to the problem. This task
was designed to be more challenging than Question 5, requiring candidates to manipulate the data outside
the package before using the import wizard. The wide range of solutions seen showed that this task
differentiated well between candidates of differing abilities and problem-solving skills.

Many candidates used the clue in the provided data file: the address for each employee in Male is
given as 94-96 STO Trade Centre. Many candidates manipulated the data to ensure the same address was
used for all employees in Bangkok, Tawara and Shanghai, some showing the use of editing the
address and then the 'remove duplicates' tool within the spreadsheet. A small number of candidates
identified the sequential addresses for each of these three cities and changed all the data for Bangkok to
1090-1105 SoiSomkid, for Tawara to 1427-1439 Acacia Heights, and for Shanghai to 56-71 Moganshan
Road. More candidates used a solution which set each address to any one of the single addresses
provided; this would ensure that the mail was delivered within the company and, as this was the first time
this style of questioning had been used, full credit was given by Examiners. Likewise with the CEP (postal
code) in the second address field for the Rio addresses: any of the postal codes would allow the mail to
reach the correct company office, so full credit was given. A variety of fields was used by candidates as
the key field for this table.
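The "set every employee in a city to one of that city's addresses" approach can be sketched as follows; the row structure and the sample data are assumptions for illustration:

```python
def standardise_addresses(rows):
    """rows is a list of (city, address) pairs (a hypothetical structure).
    Give every employee in a city the first address seen for that city,
    so that duplicate office records can then be removed."""
    chosen = {}
    for city, address in rows:
        chosen.setdefault(city, address)
    return [(city, chosen[city]) for city, _ in rows]

# Hypothetical sample data, for illustration only
rows = [("Male", "94-96 STO Trade Centre"),
        ("Bangkok", "1090 SoiSomkid"),
        ("Bangkok", "1091 SoiSomkid")]
print(standardise_addresses(rows))  # both Bangkok rows now share one address
```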

Question 7

Most candidates added this field to the Offices table, although some candidates created a new table with this
field and another field, which was set as a key field to allow them to create a relationship. Most candidates
successfully added some of the telephone numbers, although not all printed the evidence of this showing all
of the telephone numbers had been entered with accuracy.

Question 8

The creation of the table was completed well by the vast majority of candidates. A small number did not call
the table Salaries. A significant number of candidates did not follow the case or naming conventions shown
in Question 5 which included case and underscores. The most appropriate data types were selected by the
majority of candidates. Most candidates showed evidence of their table structure and the datasheet view of
this table to show that the contents had been imported as specified and that the table was saved in £ sterling
to 2 decimal places.

Question 9

Most candidates selected the correct field and attempted to restrict the data entry. Few restricted the
length of the field to 3 characters; most candidates left the field set to 255 characters. The most
effective and frequently used method of enforcing letters only was an input mask set to LLL. An input
mask set to ??? would also allow numeric values, so did not gain any credit.
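A minimal sketch of why LLL gained credit and ??? did not, using a Python regular expression as a stand-in for the input mask:

```python
import re

def letters_only(value: str) -> bool:
    """Accept exactly three letters, mirroring an input mask of LLL.
    A mask of ??? would also accept digits, which gained no credit."""
    return re.fullmatch(r"[A-Za-z]{3}", value) is not None

print(letters_only("LON"))  # True
print(letters_only("L0N"))  # False: contains a digit, which ??? would allow
```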

Question 10

Most candidates who attempted this question set some form of validation rule. Most used Between 7000
And 100000, although some candidates used the mathematical notation for this. A significant
number of candidates erroneously entered >7000 And <100000, which would wrongly reject the two
extreme values. Few candidates achieved full marks for the message to the user: most included the correct
values in their message, but few told the user that an invalid entry had been made.
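The difference between the inclusive rule and the erroneous strict-inequality rule can be shown in a short sketch:

```python
def valid_salary(salary: float) -> bool:
    """Inclusive bounds, equivalent to Between 7000 And 100000.
    Writing >7000 And <100000 would wrongly reject the two extreme values."""
    return 7000 <= salary <= 100000

print(valid_salary(7000))    # True: the boundary value is accepted
print(valid_salary(100001))  # False
```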

Question 11

Where candidates had created the tables as specified, the majority created both relationships correctly. A
small number created the Office_Code to Office_Code relationship with an indeterminate join type as they
had not removed all duplicate data in an earlier question.

Question 12

This report was generally created well by the vast majority of candidates. Errors were occasionally found in
the title, usually where it did not offer sufficient information about the report to the user, and in
extracting the correct fields (with 7 fields rather than 8 being displayed). Most candidates grouped the report
by office name. Although it was not specifically asked for in the question paper, some candidates also
included the currency field in the group header which avoided the duplication of data in the printout. Many
candidates sorted the data (within the grouping) by the Pay_Grade field. A small number set this into
ascending order rather than descending order. Almost all the printouts seen were a single page wide.

Question 13

Many of the candidates who attempted this question gained full marks. Most calculated the correct average
earnings and sorted the data into the correct order.

Question 14

This question posed many candidates some problems. A significant number of candidates used their results
from Question 13 and produced graphs or charts based upon that data. Few candidates chose to create a
pie chart. This question required the candidates to calculate the total salaries for each office and use this to
create a pie chart (which would, in many packages, automatically calculate the percentage values). The
chart was rarely labelled correctly: few titles indicated that the chart compared the total salaries of each
office and displayed percentage values. Most candidates added the office names as segment labels (or as
category axis labels if they had erroneously created a bar chart) but fewer candidates displayed percentage
values with each segment.
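The percentage calculation a pie chart performs automatically can be sketched as follows; the office totals are hypothetical figures:

```python
def segment_percentages(totals):
    """totals maps office name to total salary (hypothetical figures);
    return each office's share of the grand total, as the pie chart's
    percentage labels should show."""
    grand = sum(totals.values())
    return {office: round(100 * t / grand, 1) for office, t in totals.items()}

print(segment_percentages({"Bangkok": 250000, "Shanghai": 750000}))
# prints {'Bangkok': 25.0, 'Shanghai': 75.0}
```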

Question 15

Many candidates who had produced a chart for Question 14 removed Tawara and Male from the data and
created a similar chart to their original. A significant number of candidates produced the same chart again
with the same data as their original submission for Question 14, sometimes changing the theme or colour
scheme for the chart. The labelling of this chart was expected to be similar to Question 14, although the title
should refer to the removal of the data from Male and Tawara.

Question 16

The candidates were required to produce evidence of their query structure in response to this question.
Most candidates who submitted an answer included Tour or Technician in the criteria row/s of the query.
Many gained full marks by setting these as wildcard values using asterisks (*) around both of these words.
The most significant omission seen was candidates who did not show that all the correct fields had been
included in the query (either through partial screen shots or cropped screen shots).

Question 17

Many candidates exported their extract into a spreadsheet but did not extract the correct rows from their
data. Those who were successful added extra formulae to extract the rightmost 2 characters of the employee
number, converted these into a numeric value (with VALUE, INT, *1 or similar) and then used this 2-digit
numeric value to check that the number was less than 10. A number of candidates extracted the correct 4
employees but did not show their method.
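The successful approach can be sketched in Python; the employee codes are hypothetical examples:

```python
def low_employee_number(employee_code: str) -> bool:
    """Take the rightmost two characters and convert them to a number,
    as VALUE, INT or *1 would in a spreadsheet, then test against 10."""
    return int(employee_code[-2:]) < 10

# Hypothetical employee codes, for illustration only
print(low_employee_number("LON000004"))  # True: 04 is less than 10
print(low_employee_number("LON000042"))  # False
```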


Question 18

The majority of candidates who attempted this question inserted the correct number of rows and copied the
cells into this space.

Question 19

This question was performed well by many candidates. Most showed the evidence in their evidence
document as a screen shot showing the dialogue box, but many showed the highlighted range with the range
name in the name box in the spreadsheet toolbar.

Question 20

Most candidates who attempted this question added a new column called Bonus. Many completed the
calculation correctly, although a variety of methods was evidenced. Few candidates applied appropriate
formatting to the cells in this column; each data item should have been formatted in the currency for that
country/region. The results of the calculations used for this field frequently showed the need for candidates
to validate their own work, as some of the bonuses calculated appeared to be excessive.

Question 21

This question was completed well by almost all candidates.


APPLIED ICT
Paper 9713/31
Written B

Key Messages

Many candidates appeared to know the syllabus content quite well but did not apply their knowledge to the
given scenarios and to the context set in the questions. It was also apparent that candidates did not read the
question carefully before attempting to answer it. There were instances where candidates did not answer the
question as set on the question paper but appeared to spot ‘key words’ in the question and wrote answers
based on those without applying their knowledge. Candidates must read the scenarios carefully and they
must apply their knowledge when answering the questions.

When answering the questions, candidates must read the rubric and give the required number of responses
where appropriate. Candidates who give too many or too few responses risk losing marks. Where a
question asks for e.g. three descriptions then only the first three descriptions will be marked and subsequent
descriptions will be ignored by the marker.

Some candidates did not attempt to answer all the questions and consequently lost the opportunity to
achieve those marks. A number of candidates added extra pages to the answer booklet – when doing this it
is important that candidates indicate on the main question paper that they have done so and where the
Examiner can find the additional parts to their responses.

Comments on specific questions

Question 1

(a) This question was about the online services that an airline could offer its passengers – apart from
issuing e-tickets. While many candidates correctly identified an appropriate online service, and
many were able to describe the service, too many candidates described features of a website or
wrote about the issuing of e-tickets and so did not score marks. Other common
misconceptions/errors included ‘provision of extra baggage space’ and ‘provide the passenger with
the ability to fly the plane’. Candidates must apply their knowledge to the scenario and not give
generic or unrelated responses.

(b) This question was about the steps taken by a customer when booking flights online. Marks were
only awarded for the actual booking process and not for setting up accounts. Most candidates
could describe the steps and this question was well answered by most candidates with marks
being awarded for inputting the departure/destination airports and dates and times.

(c) This question was not well answered. Candidates were required to describe an e-ticket – how to
use an e-ticket was not required; many candidates referring to printing out the ticket and presenting
it at check-in rather than giving a description that included the facts that an e-ticket is a digital ticket
that exists (as a digital record of booking/reservation) in an airline computer and which contains a
reservation number and an e-ticket number.

(d) This question required candidates to describe the advantages to a customer of booking airline
tickets online. Most candidates answered this question quite well; pointing out that the booking
could be carried out from anywhere with an internet connection and at times convenient to the
customer. Good answers also included the ability to compare prices and there being no costs of
e.g. paying a travel agent.

Question 2

(a) This question asked candidates to describe the benefits to the airline of using a call centre to
answer customer queries. Many candidates could describe the benefits but many failed to apply
these to the airline. Further, many candidates did not seem to know the role of call centres. This
question was not well answered.

(b) This question required candidates to discuss the advantages and disadvantages to a passenger of
using a call centre and, unlike Question 2(a), was quite well answered. However, there were
many candidates who gave generic answers that did not apply to the scenario.

(c) This question was omitted by a number of candidates. There were some good answers that
described how an airline could use the features of CTI software but CTI software appears not to be
understood or known about by many candidates. Good answers included reference to queuing
calls that are waiting for attention, routing calls to the next available operative, and combining voice
and data inputs with a description of how these features could be used by the airline.

Question 3

(a) This question was about ‘specialist computer hardware’ so responses that referred to e.g.
keyboards or a mouse, or referred to the mechanical components of the simulator, e.g. hydraulic
rams, did not score the marks. Most candidates, however, referred to joysticks, large monitors
and/or loudspeakers so scored at least three marks. Better responses went on to describe the use
within the simulator of these items of specialist computer hardware.

Centres are advised that repetition of the answers to questions set in previous examination series
will not score marks as these answers do not apply to the question set here; candidates must apply
their knowledge to the current scenario and question. A few candidates omitted this question
completely and failed to score any marks at all.

(b) As in part (a), candidates must apply their knowledge to the current scenario and question. It
appeared that many did not read the question and wrote about “trains.” Many answers were
generic and did not apply to the scenario or were not descriptions. Good answers included
rehearsing unusual scenarios, practising different situations repeatedly, and practising take-
offs/landings at particular airports.

Question 4

This question proved difficult for most candidates. Candidates were unable to explain the use of computer
models in a financial context. Most answers were minimal, e.g. 'do calculations', and did not refer to
financial aspects at all. Better responses explained the inputs, e.g. wages or salaries, and explained
the use of, for example, goal seek and 'what if...?' analysis in the context of company finances.

Question 5

(a) This question required candidates to describe the difficulties that disabled people might encounter
when using online banking. Marks were not awarded for descriptions of the website accessibility
options as these are designed to help alleviate the difficulties. It was not possible to award marks
to candidates who did not make it clear which disability was encountering the described difficulty. Better
responses mentioned the disability and then described the difficulties that might be encountered.

(b) This question required candidates to describe ‘software’ configurations but too many candidates
described the hardware that might be used and so did not score marks. However, most candidates
managed to score good marks on this question with responses that referred to e.g. the sticky keys
feature so that e.g. the shift key is not needed for accessing upper case, the use of filter keys to
prevent unintended multiple key presses and the use of the zoom feature to enlarge a font size for
easier reading. Other good responses included the use of voice recognition for inputting
commands or data and/or the use of eye control software to move the cursor.

Question 6

This question was not well answered with many candidates giving generic responses about user names and
passwords but not explaining how these actually check that the person is authorised to access the account.
Good explanations could include the use of customer IDs to identify the user, the use of a PIN that is known
only to the customer, the use of a card reader with the customer's bank card to generate a unique TAN, the
use of security questions and answers known only to the individual, and the checking of biometrics that are
unique to the individual.

Question 7

(a) This question proved surprisingly difficult for most candidates. To score marks the components of
the expert system had to be referred to in the context of financial planning. While most candidates
appeared to be aware of the components of an expert system, most were unable to describe in any
detail their use in financial planning. Generic responses about the components did not achieve
marks as candidates must relate their answers to the current scenario.

(b) This question was slightly better answered than part (a). However, many answers were too vague
e.g. ‘it is expensive’ or ‘it is faster’ without any description or expansion showing a lack of
knowledge about the benefits of expert systems.

Question 8

This question should have been an easy question as the context should be familiar to candidates but it was
not well answered. Many answers showed some understanding but were often left lacking in detail e.g.
many candidates mentioned CAL/CBT/CAI but failed to describe how these would improve candidates’
learning.

Question 9

This question was quite well answered by most candidates who described the use of ICT for recording and
monitoring of candidates’ progress. However, many candidates failed to describe how ICT could be used to
report the progress to parents. To score the higher marks, candidates had to describe the use of ICT for
both recording/monitoring and reporting the progress.

Question 10

(a) Most candidates could answer this question well with good descriptions of how their identified
device dealt with data packets. However, there is still some confusion amongst candidates about
the functioning of hubs, switches and routers when handling data packets.

(b) Most candidates knew the meaning of the term VPN and many could explain what it was but a
significant number of candidates appeared to have little knowledge of virtual private networks,
many of these not even knowing what the letters VPN stand for.


APPLIED ICT
Paper 9713/32
Written B

Key Messages

Many candidates appeared to know the syllabus content quite well but did not apply their knowledge to the
given scenarios and to the context set in the questions. It was also apparent that candidates did not read the
question carefully before attempting to answer it. There were instances where candidates did not answer the
question as set on the question paper but appeared to spot ‘key words’ in the question and wrote answers
based on those without applying their knowledge. Candidates must read the scenarios carefully and they
must apply their knowledge when answering the questions.

When answering the questions, candidates must read the rubric and give the required number of responses
where appropriate. Candidates who give too many or too few responses risk losing marks. Where a
question asks for e.g. three descriptions then only the first three descriptions will be marked and subsequent
descriptions will be ignored by the marker.

Some candidates did not attempt to answer all the questions and consequently lost the opportunity to
achieve those marks. A number of candidates added extra pages to the answer booklet – when doing this it
is important that candidates indicate on the main question paper that they have done so and where the
Examiner can find the additional parts to their responses.

Comments on specific questions

Question 1

(a) This question was about the online services that an airline could offer its passengers – apart from
issuing e-tickets. While many candidates correctly identified an appropriate online service, and
many were able to describe the service, too many candidates described features of a website or
wrote about the issuing of e-tickets and so did not score marks. Other common
misconceptions/errors included ‘provision of extra baggage space’ and ‘provide the passenger with
the ability to fly the plane’. Candidates must apply their knowledge to the scenario and not give
generic or unrelated responses.

(b) This question was about the steps taken by a customer when booking flights online. Marks were
only awarded for the actual booking process and not for setting up accounts. Most candidates
could describe the steps and this question was well answered by most candidates with marks
being awarded for inputting the departure/destination airports and dates and times.

(c) This question was not well answered. Candidates were required to describe an e-ticket – how to
use an e-ticket was not required; many candidates referring to printing out the ticket and presenting
it at check-in rather than giving a description that included the facts that an e-ticket is a digital ticket
that exists (as a digital record of booking/reservation) in an airline computer and which contains a
reservation number and an e-ticket number.

(d) This question required candidates to describe the advantages to a customer of booking airline
tickets online. Most candidates answered this question quite well; pointing out that the booking
could be carried out from anywhere with an internet connection and at times convenient to the
customer. Good answers also included the ability to compare prices and there being no costs of
e.g. paying a travel agent.

Question 2

(a) This question asked candidates to describe the benefits to the airline of using a call centre to
answer customer queries. Many candidates could describe the benefits but many failed to apply
these to the airline. Further, many candidates did not seem to know the role of call centres. This
question was not well answered.

(b) This question required candidates to discuss the advantages and disadvantages to a passenger of
using a call centre and, unlike Question 2(a), was quite well answered. However, there were
many candidates who gave generic answers that did not apply to the scenario.

(c) This question was omitted by a number of candidates. There were some good answers that
described how an airline could use the features of CTI software but CTI software appears not to be
understood or known about by many candidates. Good answers included reference to queuing
calls that are waiting for attention, routing calls to the next available operative, and combining voice
and data inputs with a description of how these features could be used by the airline.

Question 3

(a) This question was about ‘specialist computer hardware’ so responses that referred to e.g.
keyboards or a mouse, or referred to the mechanical components of the simulator, e.g. hydraulic
rams, did not score the marks. Most candidates, however, referred to joysticks, large monitors
and/or loudspeakers so scored at least three marks. Better responses went on to describe the use
within the simulator of these items of specialist computer hardware.

Centres are advised that repetition of the answers to questions set in previous examination series
will not score marks as these answers do not apply to the question set here; candidates must apply
their knowledge to the current scenario and question. A few candidates omitted this question
completely and failed to score any marks at all.

(b) As in part (a), candidates must apply their knowledge to the current scenario and question. It
appeared that many did not read the question and wrote about “trains.” Many answers were
generic and did not apply to the scenario or were not descriptions. Good answers included
rehearsing unusual scenarios, practising different situations repeatedly, and practising take-
offs/landings at particular airports.

Question 4

This question proved difficult for most candidates. Candidates were unable to explain the use of computer
models in a financial context. Most answers were minimal, e.g. 'do calculations', and did not refer to
financial aspects at all. Better responses explained the inputs, e.g. wages or salaries, and explained
the use of, for example, goal seek and 'what if...?' analysis in the context of company finances.

Question 5

(a) This question required candidates to describe the difficulties that disabled people might encounter
when using online banking. Marks were not awarded for descriptions of the website accessibility
options as these are designed to help alleviate the difficulties. It was not possible to award marks
to candidates who did not make it clear which disability was encountering the described difficulty. Better
responses mentioned the disability and then described the difficulties that might be encountered.

(b) This question required candidates to describe ‘software’ configurations but too many candidates
described the hardware that might be used and so did not score marks. However, most candidates
managed to score good marks on this question with responses that referred to e.g. the sticky keys
feature so that e.g. the shift key is not needed for accessing upper case, the use of filter keys to
prevent unintended multiple key presses and the use of the zoom feature to enlarge a font size for
easier reading. Other good responses included the use of voice recognition for inputting
commands or data and/or the use of eye control software to move the cursor.

Question 6

This question was not well answered with many candidates giving generic responses about user names and
passwords but not explaining how these actually check that the person is authorised to access the account.
Good explanations could include the use of customer IDs to identify the user, the use of a PIN that is known
only to the customer, the use of a card reader with the customer's bank card to generate a unique TAN, the
use of security questions and answers known only to the individual, and the checking of biometrics that are
unique to the individual.

Question 7

(a) This question proved surprisingly difficult for most candidates. To score marks the components of
the expert system had to be referred to in the context of financial planning. While most candidates
appeared to be aware of the components of an expert system, most were unable to describe in any
detail their use in financial planning. Generic responses about the components did not achieve
marks as candidates must relate their answers to the current scenario.

(b) This question was slightly better answered than part (a). However, many answers were too vague
e.g. ‘it is expensive’ or ‘it is faster’ without any description or expansion showing a lack of
knowledge about the benefits of expert systems.

Question 8

This question should have been an easy question as the context should be familiar to candidates but it was
not well answered. Many answers showed some understanding but were often left lacking in detail e.g.
many candidates mentioned CAL/CBT/CAI but failed to describe how these would improve candidates’
learning.

Question 9

This question was quite well answered by most candidates who described the use of ICT for recording and
monitoring of candidates’ progress. However, many candidates failed to describe how ICT could be used to
report the progress to parents. To score the higher marks, candidates had to describe the use of ICT for
both recording/monitoring and reporting the progress.

Question 10

(a) Most candidates could answer this question well with good descriptions of how their identified
device dealt with data packets. However, there is still some confusion amongst candidates about
the functioning of hubs, switches and routers when handling data packets.

(b) Most candidates knew the meaning of the term VPN and many could explain what it was but a
significant number of candidates appeared to have little knowledge of virtual private networks,
many of these not even knowing what the letters VPN stand for.


APPLIED ICT
Paper 9713/33
Written B

Key Messages

Many candidates appeared to know the syllabus content quite well but did not apply their knowledge to the
given scenarios and to the context set in the questions. It was also apparent that candidates did not read the
question carefully before attempting to answer it. There were instances where candidates did not answer the
question as set on the question paper but appeared to spot ‘key words’ in the question and wrote answers
based on those without applying their knowledge. Candidates must read the scenarios carefully and they
must apply their knowledge when answering the questions.

When answering the questions, candidates must read the rubric and give the required number of responses
where appropriate. Candidates who give too many or too few responses risk losing marks. Where a
question asks for e.g. three descriptions then only the first three descriptions will be marked and subsequent
descriptions will be ignored by the marker.

Some candidates did not attempt to answer all the questions and consequently lost the opportunity to
achieve those marks. A number of candidates added extra pages to the answer booklet – when doing this it
is important that candidates indicate on the main question paper that they have done so and where the
Examiner can find the additional parts to their responses.

Comments on specific questions

Question 1

(a) This question was about the online services that an airline could offer its passengers – apart from
issuing e-tickets. While many candidates correctly identified an appropriate online service, and
many were able to describe the service, too many candidates described features of a website or
wrote about the issuing of e-tickets and so did not score marks. Other common
misconceptions/errors included ‘provision of extra baggage space’ and ‘provide the passenger with
the ability to fly the plane’. Candidates must apply their knowledge to the scenario and not give
generic or unrelated responses.

(b) This question was about the steps taken by a customer when booking flights online. Marks were
only awarded for the actual booking process and not for setting up accounts. Most candidates
could describe the steps and this question was well answered by most candidates with marks
being awarded for inputting the departure/destination airports and dates and times.

(c) This question was not well answered. Candidates were required to describe an e-ticket; how to
use an e-ticket was not required. Many candidates referred to printing out the ticket and presenting
it at check-in rather than giving a description that included the facts that an e-ticket is a digital ticket
that exists (as a digital record of the booking/reservation) on an airline computer and which contains
a reservation number and an e-ticket number.

(d) This question required candidates to describe the advantages to a customer of booking airline
tickets online. Most candidates answered this question quite well; pointing out that the booking
could be carried out from anywhere with an internet connection and at times convenient to the
customer. Good answers also included the ability to compare prices and the absence of costs such
as a travel agent’s fee.

© 2014
Cambridge International Advanced Subsidiary Level and Advanced Level
9713 Applied ICT June 2014
Principal Examiner Report for Teachers
Question 2

(a) This question asked candidates to describe the benefits to the airline of using a call centre to
answer customer queries. Many candidates could describe the benefits but many failed to apply
these to the airline. Further, many candidates did not seem to know the role of call centres. This
question was not well answered.

(b) This question required candidates to discuss the advantages and disadvantages to a passenger of
using a call centre and, unlike Question 2 (a), was quite well answered. However, there were
many candidates who gave generic answers that did not apply to the scenario.

(c) This question was omitted by a number of candidates. There were some good answers that
described how an airline could use the features of CTI software but CTI software appears not to be
understood or known about by many candidates. Good answers included reference to queuing
calls that are waiting for attention, routing calls to the next available operative, and combining voice
and data inputs with a description of how these features could be used by the airline.

Question 3

(a) This question was about ‘specialist computer hardware’ so responses that referred to e.g.
keyboards or a mouse, or referred to the mechanical components of the simulator, e.g. hydraulic
rams, did not score the marks. Most candidates, however, referred to joysticks, large monitors
and/or loudspeakers so scored at least three marks. Better responses went on to describe the use
within the simulator of these items of specialist computer hardware.

Centres are advised that repetition of the answers to questions set in previous examination series
will not score marks as these answers do not apply to the question set here; candidates must apply
their knowledge to the current scenario and question. A few candidates omitted this question
completely and failed to score any marks at all.

(b) As in part (a), candidates must apply their knowledge to the current scenario and question. It
appeared that many did not read the question and wrote about “trains.” Many answers were
generic and did not apply to the scenario or were not descriptions. Good answers included
rehearsing unusual scenarios, practising different situations repeatedly, and practising take-
offs/landings at particular airports.

Question 4

This question proved difficult for most candidates. Candidates were unable to explain the use of computer
models in a financial context. Most answers were minimal, e.g. ‘do calculations’, and did not refer to
financial aspects at all. Better responses explained the inputs, e.g. wages or salaries, and the use of
features such as goal seek and ‘what if...?’ analysis in the context of company finances.
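
By way of illustration, a response of the kind that attracted credit might describe a simple salary model in
which changing a single input cell answers a ‘what if...?’ question (the cell references below are
hypothetical):

    =SUM(B2:B20)     total monthly salary cost, in B21
    =B21*(1+C1)      projected total, in B22, if salaries rise by the percentage entered in C1

Entering, say, 0.05 in C1 immediately shows the effect of a 5% pay rise on the total wage bill; goal seek
could then be used to find the largest rise the company could afford within a given budget.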

Question 5

(a) This question required candidates to describe the difficulties that disabled people might encounter
when using online banking. Marks were not awarded for descriptions of the website accessibility
options as these are designed to help alleviate the difficulties. It was not possible to award marks
to candidates who did not make it clear which disability was encountering the described difficulty. Better
responses mentioned the disability and then described the difficulties that might be encountered.

(b) This question required candidates to describe ‘software’ configurations but too many candidates
described the hardware that might be used and so did not score marks. However, most candidates
managed to score good marks on this question with responses that referred to e.g. the sticky keys
feature so that e.g. the shift key is not needed for accessing upper case, the use of filter keys to
prevent unintended multiple key presses and the use of the zoom feature to enlarge a font size for
easier reading. Other good responses included the use of voice recognition for inputting
commands or data and/or the use of eye control software to move the cursor.

Question 6

This question was not well answered, with many candidates giving generic responses about user names and
passwords but not explaining how these actually check that the person is authorised to access the account.
Good explanations could include the use of customer IDs to identify the user, the use of a PIN that is known
only to the customer, the use of a card reader with the customer’s bank card to generate a unique TAN
(transaction authentication number), the use of security questions and answers known only to the individual,
and the checking of biometrics that are unique to the individual.

Question 7

(a) This question proved surprisingly difficult for most candidates. To score marks the components of
the expert system had to be referred to in the context of financial planning. While most candidates
appeared to be aware of the components of an expert system, most were unable to describe in any
detail their use in financial planning. Generic responses about the components did not achieve
marks as candidates must relate their answers to the current scenario.

(b) This question was slightly better answered than part (a). However, many answers were too vague,
e.g. ‘it is expensive’ or ‘it is faster’, without any description or expansion, showing a lack of
knowledge about the benefits of expert systems.

Question 8

This should have been an easy question as the context was familiar to candidates, but it was
not well answered. Many answers showed some understanding but often lacked detail, e.g.
many candidates mentioned CAL/CBT/CAI but failed to describe how these would improve candidates’
learning.

Question 9

This question was quite well answered by most candidates who described the use of ICT for recording and
monitoring of candidates’ progress. However, many candidates failed to describe how ICT could be used to
report the progress to parents. To score the higher marks, candidates had to describe the use of ICT for
both recording/monitoring and reporting the progress.

Question 10

(a) Most candidates could answer this question well with good descriptions of how their identified
device dealt with data packets. However, there is still some confusion amongst candidates about
the functioning of hubs, switches and routers when handling data packets.

(b) Most candidates knew the meaning of the term VPN and many could explain what it was but a
significant number of candidates appeared to have little knowledge of virtual private networks,
many of these not even knowing what the letters VPN stand for.


APPLIED ICT
Paper 9713/04
Practical Test B

General comments

It is pleasing to note that many Centres had clearly addressed issues detailed in the reports for previous
sessions.

Although this paper did not specify that solutions to the tasks were to comprise a system to be used by
others, it was still important to bear in mind that solutions had to be “repeatable” in that results would
automatically reflect updates or changes to data. A number of candidates provided solutions that, although
successful in terms of output for the given data, would have required significant intervention when new or
updated data were used.

Comments on specific questions

Task 1 – Populate a results table from scores provided

In the first task candidates were required to display test scores recorded in a specified external data source
(JS1_Scores.csv), compare the scores with the given table of threshold marks and display the correct result.

The first part, displaying the candidate scores, was best tackled using the HLOOKUP function. Some
candidates preferred to use the VLOOKUP function, and most of these recognised that the source data
would need to be transposed. This was not credited as a valid solution since, in order for changes to the
data to be reflected in correct outcomes, the source file would need to be processed manually once again.
Centres should also note that, in general, it is expected that candidates will reference only the source files
provided and not data copied to other worksheets.
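
By way of illustration, a formula of the following general form (the cell references and sheet name are
hypothetical) would look up a candidate’s score directly from the source file:

    =HLOOKUP(A3,[JS1_Scores.csv]JS1_Scores!$A$1:$Z$2,2,FALSE)

Because the formula references the source file itself, any update to the scores is reflected automatically in
the results table, which is why a manually transposed copy of the data was not credited.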

The second part of the task required candidates to compare the scores to the table of mark thresholds and
determine the result for each candidate. There were a number of valid solutions to this task and it was
pleasing to see some sophisticated problem solving. The simplest and most common solution was a nested
“IF” statement, but a number of candidates failed to identify the simplest logic for applying the criteria.
Possibly the most efficient solution was the use of an inverted threshold table and the VLOOKUP formula
using the TRUE parameter. Some candidates recognised that this would allow for changes to the thresholds
to be reflected in the results automatically.
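
The two approaches might be sketched as follows, assuming hypothetical thresholds of 80, 60 and 40 and a
score in cell B3:

    =IF(B3>=80,"Distinction",IF(B3>=60,"Merit",IF(B3>=40,"Pass","Fail")))

or, with the thresholds stored in ascending order (0, 40, 60, 80) in a small table at $E$1:$F$4:

    =VLOOKUP(B3,$E$1:$F$4,2,TRUE)

With the TRUE parameter, VLOOKUP returns the result for the largest threshold that does not exceed the
score, so editing the threshold table updates every result automatically.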

Marks were awarded for the formatting of the table, the accuracy of the scores and configuration of the
formulae.

A number of candidates lost marks for inserting formulae due to lack of attention to relative and absolute cell
referencing. This was of particular relevance to marks for the replication of the formulae.

Task 2 – Creation of a chart to analyse the test results

The creation of the chart was straightforward and almost all candidates managed to display a chart showing
the number of correct answers for each question in the test shown in the JS1_Scores file. Very few,
however, managed to display the number of marks for each question from the Module_JS1 file correctly.
candidates who attempted this part of the task displayed just the data labels above each column. The task
required them to add another series.

A key part of this task was for candidates to recognise the importance of deriving information from data. The
chart title, the axes titles and any legend used had to convey the purpose of the chart clearly, namely that
the chart showed the number of correct responses to each question and the marks awarded for correct
responses to each question.


Also worth noting is that the task required a printout. Candidates lost marks if they submitted only
screenshots.

Task 3 – A mail merge using a template document

Most candidates produced merged letters but many failed to select the recipients correctly and printed letters
to all the teachers listed in the Course_Teachers file. Centres should make candidates aware that it is
extremely unlikely that a task will require a large number of letters to be printed. For this paper, only letters
for the three teachers of the JS1 module were required. Selection of the recipients could have been
automated during the merge by use of a SKIPIF field or by applying a filter to the mail merge recipients.
Evidence of the method was required, however, and manual selection was not awarded marks.
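
For example, a field of the following general form (the field name Module is hypothetical) placed at the start
of the main document would exclude every record whose module is not JS1:

    { SKIPIF { MERGEFIELD Module } <> "JS1" }

When the comparison is true the merge moves straight on to the next record, so only the three JS1 teachers
receive letters, and the selection remains correct if the recipient list changes.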

Inclusions to the letters were to be decided by a conditional field. Many candidates managed to complete
this part of the task successfully but some chose to include the chart shown as an image file. This was not
acceptable since changes to the data and hence the chart would not update in an image.
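
A conditional merge field of the following general form (the field name and wording are hypothetical) could
have been used to vary the content of each letter:

    { IF { MERGEFIELD PassRate } < 50 "Please arrange additional support for this class." "No further action is required." }

Because the inserted text is generated at merge time, it reflects the current data, whereas a pasted image of
the chart would not.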

Once again, the importance of “proofing” results before submission needs to be stressed. The template for
the mail merge showed where required data was to be placed but a number of candidates did not pay
enough attention to spacing and layout when inserting the fields. A great many candidates also failed to
ensure the final letters were fit for purpose due to incorrect capitalisation or missing full stops in the text
inserted by the conditional fields.

Task 4 – Populate a table of test scores

This task required the comparison of values in the VB1_Responses file with the correct answers in the
Module_VB1 file. The table was required to show 0 for an incorrect response and the mark for each correct
response.

There needed to be a total for each column (candidate codes) and a count of the correct answers for each
row (question numbers). Marks were awarded for the accuracy and formatting of the labels and the accuracy
of the data.

In general, the table was successfully completed by most candidates but some lost marks for lack of
attention to the formatting which was shown in the question paper. At this level the ability to work to a
specification with total accuracy has to be clearly demonstrated.

Even when the data displayed in the table was accurate, the methods used by most candidates were very
inefficient. A simple IF statement was sufficient but many candidates used complex lookups and took
advantage of the relatively small dataset to manually alter the formulae for each row or column.

Centres would profit from attention to relative and absolute cell referencing and replication. At this level,
marks for replication are only awarded if formulae are clearly configured for valid replication and the required
outcome.

In the case of task 4, the most efficient formula would be:


=IF(VB1_Responses.csv!B3=Module_VB1.csv!$B3,Module_VB1.csv!$C3,0)

The careful use of relative and absolute referencing ensures this formula would replicate across columns and
down rows.

For formulae without the need for such careful referencing, evidence of the replication was required. In this
case candidates who showed only screenshots of the formula bar were not awarded the marks for
replication.

Task 5 – Automate the printing of three documents of selected and sorted data

The most common problem with this task seemed to be the two-level sort for the first printout. Many
candidates sorted on the “Class” field correctly but the sort was then cleared when sorting on the “Score”
field. This showed that they had sorted the data in two operations rather than adding a level.

A number of candidates also submitted many pages of macro code in which it could be seen that they were
experimenting with the process of producing the required printouts. Centres could profit from placing some
emphasis on encouraging candidates to refine their recordings until they have produced the required output
efficiently.

It is pleasing to note that almost all candidates who attempted this task inserted comments using the correct
syntax.
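
A tidied recording for the first printout might take the following general form in VBA (the range and sheet
names are hypothetical):

    Sub PrintClassResults()
        ' Sort by Class, then by Score within each class, as a single two-level sort
        Worksheets("Results").Range("A1:D50").Sort _
            Key1:=Range("B1"), Order1:=xlAscending, _
            Key2:=Range("C1"), Order2:=xlDescending, Header:=xlYes
        ' Print the sorted sheet
        Worksheets("Results").PrintOut
    End Sub

Recording the sort as one operation with two levels, rather than as two separate sorts, avoids clearing the
first sort when the second is applied.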

In conclusion

For this session, the main issues for Centres to bear in mind seem to be:

● the importance of efficient solutions
● the selection or filtering of recipients for mail merge
● careful cell referencing in formulae
● the logic of conditional mergefields
● the need to “proof” outcomes
● the refinement of macro recordings before submission.
