SDD Notes - From Class of 2015
Hey guys,
Just a little information about these notes. Being of the class of 2015, I wrote these notes for myself
a few years back and never got around to publishing them. I found myself on BOS a few days ago
and noticed that some people were asking for notes and decided I may as well publish mine.
Keep in mind I never intended on publishing these onto the internet. I can't guarantee that
everything I said was correct and I'm not even sure if they're complete. That’s up to you guys to
figure out! If anything seems wrong or inconsistent with more reliable sources of information, do a
little research and determine which would be more correct.
You can always use these to complement your own notes, to help you understand many of the
concepts in the course, to follow along while listening to a teacher who wasn't qualified to teach
SDD, or when you decide to study the entire course within 24 hours of your exam. They can be
used in pretty much any situation.
If you have any questions, I'd suggest PMing me, although I most likely wouldn't see it, soz.
Y2K problem
Due to the price of storage during the 60s and 70s, developers would attempt to use the
least amount of storage space possible. This would include writing the date using 2 digits
e.g. 60 instead of 1960. As the year 2000 approached, people anticipated that computers would
interpret the year as 1900 rather than 2000. This could potentially have caused major problems
in all computer systems, from small personal computers to large airline systems.
Malware
Malware is any software which performs an unwanted or uninvited task. This includes
viruses, Trojan horses, worms and spyware although the syllabus only mentions "malware
such as viruses". Viruses are unique in the way that they replicate themselves. They quickly
spread throughout a computer and to other computers to expand their reach and complete
their desired task.
*Syllabus doesn’t mention the following forms of malware although I think a basic
understanding is still necessary and/or useful.*
1. Worms aim to slow down a system by replicating themselves until the entire disk (and RAM)
is full and overwhelmed (sometimes RAM becomes so full that the system can't execute
any tasks).
2. Trojan horses are malicious software which disguises itself as legitimate software. For
example, you may download a game, but once you run the program the malicious
software will run instead (usually in the background without you knowing... while you
play your game).
3. Spyware is solely built to obtain your information. They may attempt to access your
browsing habits, emails, keystrokes etc. and may be used for identity theft.
Reliance on software
• Software must be very reliable as it runs planes, electrical grids, dams and many other
critical services (as well as everyday products).
Social networking
A social network is an online service which allows people to communicate with each other.
Examples include Facebook, Twitter, LinkedIn etc. Although there are many advantages to
these sites there are also disadvantages such as identity theft, stalking and online bullying as
well as the fact that whatever you put up online, stays online and can be detrimental for
some people e.g. what a young person puts up online could be the reason they don’t get a
job 5 years down the track.
Cyber safety
Cyber safety is about protecting yourself online and minimising the risk of online dangers.
Precautions should be taken when meeting people online, sharing information, and speaking to
known and unknown people (who could be cyberbullies or even stalkers), and you should watch
your digital footprint. (Cyber safety also covers identity theft and online purchasing.)
Quality
A customer's perception of quality is influenced by their expectations. If their expectations
are exceeded they will perceive the product to be one of high quality, and vice versa. Quality
Assurance aims to ensure high quality is achieved throughout the development process. It is
the developer's responsibility that this is achieved. Financial constraints and development
expertise are the main internal factors affecting quality. Hardware, the OS, and interference
from other running software are all external factors affecting quality.
Response to problems
It is the developer's responsibility to have systems in place to deal with problems in their
software accurately, efficiently and in a timely manner.
Code of conduct
This is a set of standards which the developers agree to abide by. Its aim is to enhance their
(and the industry's) reputation and standards. Some points may be: to maintain
professionalism, to honour contracts, respect laws etc.
Malware
Developers have the responsibility to ensure their products don’t include malware. They
should make sure any software they include also doesn't contain malware. It is the user's
right to purchase software without malware included.
Ergonomic issues
Definition: ergonomics is the study of the relationship between human workers and their
work environment (if just mentioning the definition, make sure to link software somehow).
Ergonomic strategies in UI design include consistency in design, explanation/manual or
walkthrough of usage, colours/fonts/alignment/sizes should be chosen appropriately.
According to the Davis text, usability testing is about the efficiency and satisfaction users
experience as they use the software.
Inclusivity issues
This means not excluding people based on culture, gender, economic background, disability
etc. (use examples, such as those in the Davis text, pp. 21-25)
Privacy
Privacy is about protecting personal information (information which may identify someone).
Users have the right to know how their information is being handled. Developers have the
responsibility to handle their users' information according to law, and to only use their
information for what they (developer and user) have agreed upon.
Intellectual property
Discussed previously.
Plagiarism
Plagiarism is taking somebody else's work and passing it off as your own.
Copyright laws
Legal protection given to the author of an original work against illegal copying, modification or
distribution. They safeguard intellectual property and aim to foster the creation of creative
work as authors know their work will not be illegally copied, modified or distributed.
Shareware
Shareware is acquiring a licence which allows you to use the software for a limited time
(free trials, basically). It is protected by the same copyright laws as commercial software,
with the exception that the software can be freely distributed.
Open Source
Open Source software allows anybody to freely modify and redistribute the software, as long as
the author is recognised and the modified product is released under the same open source
licence. This encourages collaboration and the sharing of ideas (think Linux).
Public Domain
This is when the copyright holder gives up all rights to the software. This means that
basically no copyright laws apply so the software can be modified, redistributed,
decompiled, used in other projects etc.
Decompilation
Decompilation is the opposite of compilation. It involves translating executable machine
code back into higher-level code (usually assembly code, not source code). This allows the
program's design to be more easily understood.
Reverse engineering
Reverse engineering is analysing a product and its parts to understand how it works and to
recreate its original design, usually with the purpose of creating a similar product based on
its design. (In the half yearly's the correct answer -after finding the answers online- was
'analysing a product to determine its original design'). (Also, reverse engineering usually
involves decompilation)
Use of networks
Use of networks by developers
1. Access to resources: the internet provides many resources including everything from
documentation to source code, all available in an instant, some free and some for a
fee.
2. Ease of communication: networks and the internet allow developers to communicate
and collaborate on projects without having to meet face-to-face, they may even live in
different countries and speak different languages.
3. Productivity: the ease of communication and access to resources both improve
productivity.
Legal implications
Software implemented throughout a country can lead to significant legal action if the
software breaks the law in some way, e.g. breaching a contract, breaching copyright or
breaking that country's laws.
Structured approach
• Follows the SDC strictly.
• Each step must be completed before moving to the next (which is why this approach is
also called the 'waterfall method')
• The 'defining and understanding the problem' stage is vital. Although errors created
during this stage are easy to fix before moving on, they become increasingly time-consuming
and expensive to fix as you work through the remaining stages.
Agile approach
• Focused on the team (generally up to 6 people) developing the project instead of
following predetermined requirements.
• Well suited for web development and mobile app development.
• Quick to develop initial release.
• Continual improvements with added features are regularly made and delivered.
• Developed closely with users/clients.
• Can be difficult to outsource because detailed requirements aren't made. The solution
is usually to set a fixed budget and time, and once these are exhausted, then the
software is released. This requires a lot of trust between developer and customer.
Prototyping approach
• Was created for the same reason as the agile approach, to adapt to changing
requirements, unlike the structured approach.
• The problem is defined, then a prototype is planned, implemented, tested and shown to
the user, who will define new requirements. This repeats until the software is considered
ready, at which point either the prototype is made the final solution (evolutionary
prototype) or a new system is built based on the prototype and then released.
• There are two types of prototypes:
1. Concept prototypes are prototypes that are never meant to be the final
product. They are made to aid in the definition and refinement of requirements.
2. Evolutionary prototypes are prototypes which are intended to become the final
product. The prototype will be continually refined until it is considered suitable
for final implementation.
Combinations of approaches
Usually, a combination of approaches will be most suitable as opposed to strictly following
one approach.
Oracle Designer
Oracle designer assists in the creation of system modelling diagrams and source code
generation (production of code). Help files for the application can also be automatically
generated (production of documentation). Whenever an object is modified, Oracle Designer
stores a record of the modification coupled with the user who made the modification and
then increments the version number of the object (version control).
DataFactory
DataFactory is a test data generation tool. It allows the testing of software with large
amounts of data without actually having to release the product to the masses. It can
simulate many real-world situations, such as testing unusual dates e.g. leap years.
Methods of installation
When a new product is produced it must be installed and implemented on site. There are
four common methods of installation (visual representations of these methods are extremely
useful for memorisation):
Direct cut-over
This is where the old system is completely dropped, and the new system is put into full
function. To do this, you must make sure the new system is fully functional and meets all its
requirements. This method is usually chosen when it isn't feasible to run two systems at
once.
Parallel
Parallel involves running both systems at the same time for a specific amount of time. Once
the time is up, the old system will be terminated usually using the direct cut-over method.
This method allows any critical problems encountered with the new system to be fixed
while the old system is still fully operational (sort of as a backup).
Phased
Phased installation involves the gradual introduction of the new system whilst gradually
removing the old system. This is usually done by replacing the old modules with the new
ones. This method is often used when the system as a whole isn't complete.
Pilot
The pilot method involves introducing the new system to a small number of users (think of
beta testers). This allows a small group of users to test the product and, when it is finally
introduced, they can teach the rest of the team how to use the new software.
*Note: examples, benefits and disadvantages of each method will most likely be required if
they are asked about in the HSC exam.*
Outsourcing
Businesses nowadays seek the services of a specialist software developer instead of writing
their own in-house software. It is often more feasible to outsource than to create in-house
software, especially when a business doesn't possess the resources to do so. There are many
benefits to outsourcing including:
• Better response to change
• Saves time
• Generally higher quality results
• Reduces costs
Contract programmers
Many job positions are being offered as short-term contracts. They are usually either specified
for a specific time (e.g. 6 months) or until a project is finished (think of seek.com.au contract
jobs). They can also be freelance jobs.
From page 2 (before the SDC): "the requirements are thoroughly understood including
determining whether the existing system is viable or a new system is required."
The following will also need to be considered as part of understanding client needs.
1. Functionality requirements describe the aim of the system. If the aim is achieved, the
needs are met. Requirements must be verifiable, which means it must be easy to check
whether the needs have been met; verifiable requirements are usually measurable
(mathematical or scientific). For example, 'the system will be able to redraw the screen
at least 12 times per second' is a result which can be measured and verified.
2. Compatibility issues: Requirements must be specified so that compatibility issues are
at a minimum. Questions such as 'what software/OS will the program run on?' or
'what hardware will the program support?' must all be considered.
3. Performance issues: when defining the problem, performance should always be
considered and developers should attempt to create efficient algorithms and software.
It is sometimes hard to gauge the consistent performance of a system before it is built.
Depending on the complexity, size, networking and many other factors of the system,
its performance may vary.
Cost effectiveness
Costs and budgets are obviously a major constraint on development. Constant monitoring of
costs should take place to ensure development won't go over budget. When creating a
budget, some of the areas that should be considered are:
• Hardware costs - will new hardware have to be purchased/leased?
• Software costs - will CASE tools, DBMSs, graphic creation software have to be
purchased?
• Personnel costs - salaries of all staff (dev. team, support, trainers, management etc.)
• Outsourcing costs - what will have to be outsourced? What's its cost?
Developer's perspective
The developer's specifications will not directly affect the product from the user's perspective,
but they will give the developers a framework for all of them to work by. These may include:
• Specifying what system modelling tools will be used (e.g. flowcharts, DFDs, IPO charts etc.)
• Descriptions of algorithms
• Setting up naming conventions for data types and structures
• Setting up a system to maintain a data dictionary
CASE tools will aid in making sure developers comply with the specifications. Once the
specifications are developed, a system model can be developed and specific tasks for each
member can be allocated.
User's perspective
These specifications should include any design specs that may affect the user's experience
such as:
• Interface design: consistency should be used when designing the screen. It should also
be user friendly and appealing to the eye.
• Social and ethical issues: ergonomics should also be considered. e.g. what text size,
font, colour? Keyboard shortcuts? What data entry methods should be used?
• User's environment and computer configuration: the software should utilise OS
settings such as fonts, font sizes, colours and printer settings (as the user may have
changed these settings to meet their needs). Also, consistency with common usage
should be aimed for, such as common keyboard shortcuts and design elements
(think Google design guidelines which streamline usage across all apps e.g. 3-dot menu
button, swipe-over navigation drawer etc.), as this allows a transfer of skill sets and
minimal retraining.
… (skipped a large portion of the textbook and syllabus. Refer to other resources for this
section such as MrBrightside's notes and other online resources as well as the textbook/s)
Quality Assurance
Quality assurance ensures standards and requirements are met throughout the
development of software such that the final products are of high quality. This is important
as people's perception of the product's quality impacts the overall success of the product.
The areas that should be assessed to determine quality include:
• Efficiency - how efficiently does the software utilise the system's/computer's resources
e.g. RAM and CPU?
• Integrity - this is the correctness of data. This improves when data validation is utilised
e.g. checking for valid phone numbers, addresses and email addresses.
• Reliability - will the software work without failures? If the product does fail (or
encounter some kind of error) then how long will it take to fix?
• Usability - user friendly? Easy to learn and use? Ergonomic?
• Accuracy - does the software perform its functions correctly according to its
specifications? The code should be well documented to maintain accuracy during
future upgrades.
• Maintainability - how easy can changes be made to the software? Well documented
code is much easier to maintain.
• Testability - the ability to test all aspects of the software. Testing should occur on
individual modules and the system as a whole.
• Reusability - the ability to reuse code in other systems. Once modules are created
there is no need to create them again. This saves time, effort and money, and reused
modules have generally already been thoroughly tested.
4. Planning and Designing Software Solutions
…
Small note: you plan and design the software using pseudocode, flowcharts, storyboards,
IPO charts etc.
*Note: there are large parts of this chapter which I haven't written notes for. Also, Davis has
structured the textbook quite differently to the syllabus so headings may be different and
this chapter's notes may be (and probably are) incomplete. Just follow the syllabus closely and
make sure to cover everything, even though they may be out of order.
Efficient data structures assist in the development of algorithms. Here are some common
data structures from the syllabus:
1. One-dimensional arrays are a collection of data items of the same data type.
2. Multi-dimensional arrays are arrays with multiple indexes/dimensions. Often the
dimensions/indexes are called subscripts.
3. Two-dimensional arrays and other multi-dimensional arrays (a small sketch appears after
this list):
a. Two-dimensional arrays can be visualised as a table.
b. Three-dimensional arrays can be visualised as a cube.
c. Four-dimensional arrays can't be visualised but have 4 indexes.
d. An example of a multi-dimensional array would be: AssessResults(YearLevel,
ClassNum, StudentNum, TaskNum). So AssessResults(12, 4, 8, 3) would retrieve
the results of the 3rd task of a year 12 student in class number 4, with a student
number of 8.
4. Files
5. Records
6. Arrays of records
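To make the multi-dimensional array idea concrete, here is a minimal Python sketch (Python is
used purely for illustration; the AssessResults structure and its sizes come from the hypothetical
example above, not from the syllabus):

```python
# A 2D array visualised as a table: 3 rows x 4 columns, initialised to 0.
table = [[0 for col in range(4)] for row in range(3)]
table[1][2] = 75                      # row 1, column 2

# The hypothetical AssessResults(YearLevel, ClassNum, StudentNum, TaskNum)
# as a 4-dimensional structure (the sizes are made up for the sketch).
assess_results = [[[[0 for task in range(5)]
                    for student in range(30)]
                   for class_num in range(10)]
                  for year in range(13)]

assess_results[12][4][8][3] = 87      # year 12, class 4, student 8, task 3
print(assess_results[12][4][8][3])
```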
Arrays of records
Records are data structures which are a collection of fields. Arrays of records are just a
collection of records. Once a record is declared, it becomes a data type and can be accessed
the same as any other data type e.g. you can enter data into a record called MyDetails by
typing MyDetails.Surname = "Peters".
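Records don't exist natively in Python, but a rough sketch of a record and an array of records
(using a dataclass to stand in for a record type; the field names other than Surname are invented)
looks like this:

```python
from dataclasses import dataclass

# A record is a collection of fields, possibly of different data types.
@dataclass
class StudentRecord:
    surname: str = ""
    first_name: str = ""
    mark: int = 0

# Once declared, the record behaves like any other data type.
my_details = StudentRecord()
my_details.surname = "Peters"          # MyDetails.Surname = "Peters" from the notes

# An array of records is just a collection of records.
class_list = [StudentRecord() for _ in range(3)]
class_list[0].surname = "Nguyen"
class_list[0].mark = 92
print(class_list[0])
```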
Definition: Files are a collection of data that is stored in a logical manner.
Sequential files
• Sequential data must be accessed from beginning to end (think of cassette tapes). You
cannot access a part of the file if you haven't first accessed what's preceding it.
• Because of this, if you want to write to the file, you can only append (you can't change
any data, you can only add data).
• You can open a sequential file in three ways - input, output and append. Input is used
to read data, output is to write data to a new file and append is used to write data to
the end of an existing file.
• A sentinel value is a dummy value used to indicate the end of the file (or logical breaks
in data).
• When reading a file, a priming read is used before the main processing to check if the
file only contains a sentinel value.
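A minimal Python sketch of the three modes and a sentinel-value read loop (the file name,
the 'ZZZ' sentinel and the data are made up for illustration):

```python
SENTINEL = "ZZZ"

# Output mode: write data to a new file (any existing contents are replaced).
with open("students.txt", "w") as f:
    for name in ["Peters", "Nguyen"]:
        f.write(name + "\n")

# Append mode: add data to the end of the existing file, then write the sentinel.
with open("students.txt", "a") as f:
    f.write("Lee\n")
    f.write(SENTINEL + "\n")            # dummy value marking the end of the data

# Input mode: read sequentially from beginning to end.
with open("students.txt", "r") as f:
    record = f.readline().strip()       # priming read: handles a file that holds
    while record != SENTINEL:           # nothing but the sentinel value
        print("Processing:", record)
        record = f.readline().strip()   # read the next record
```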
Audience identification
The interface needs to be designed to suit the intended audience. For example, the design of
a program written for pre-schoolers will differ from that of a program written to carry out
complex calculations for engineers.
Consistency in approach
Consistent user interfaces allow users to transfer their existing skills to new systems.
Examples of consistent design include:
• Names of commands - common commands should keep the same names e.g. copy, cut
and paste.
• Use of icons - e.g. the save button is always a floppy disk.
• Placement of screen icons - e.g. save button is always in the toolbar.
• Feedback - use of loading bars and icons.
• Forgiveness - warnings about potentially dangerous operations (such as deleting a file)
and their recovery methods should always be included and kept consistent.
This is the third stage of the SDC, where the previously planned algorithms (planned using
pseudocode, flowcharts, diagrams etc.) are actually written, along with the rest of the
source code, in a form that can finally be processed by the computer.
EBNF
○ Extended Backus-Naur Form is a metalanguage which supplies a formal method of
describing the syntax of a language.
○ It differs slightly from BNF as it adds repetition and optional elements (i.e. '{ }' means
repetition and '[ ]' means optional, while '|' means 'alternative').
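As an illustration (a toy example of my own, not from the syllabus), a signed number could be
defined in EBNF as:

digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
integer = digit {digit}
number = ["+" | "-"] integer

Here {digit} means 'zero or more further digits', the square brackets mark an optional sign and
'|' separates alternatives; a railroad diagram would express exactly the same rules graphically.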
Railroad diagrams
○ These are a visual alternative to EBNF; they express the same syntax rules graphically, so they work in a very similar way.
Translation methods in software solutions (syllabus: the need for translation to machine code
from source code)
Definition: translation is the process of converting high-level source code into executable
machine code (or object code). The two common methods of translation are interpretation
and compilation.
Interpretation
Interpretation is where source code is translated, line by line, into an intermediate language
and then immediately executed. So basically, it translates the first line and runs it, then
translates the second line and runs it etc. The intermediate language isn't necessarily
machine code.
Commonly used on web servers and languages such as Python and Ruby.
Advantages:
○ Easier to debug because translation occurs line by line until there is an error, at which
point it stops so the error can be quickly corrected by the developer; that single line is
then retranslated and execution continues.
Disadvantages:
○ Programs run slower than compiled programs, as each line of source code is translated right
before it is executed.
○ For a program to be executed, the user must obtain the source code itself, which
raises the issue of intellectual rights.
○ Users must also have an interpreter installed, which uses resources i.e. memory.
○ The intermediate code can take longer to process compared to machine code.
Compilation
Compilation is where the source code, as a whole, is translated into machine code. The
executable code can then be executed at a later time without the need for a translator.
When there is an error, the translation will fail and a list of the errors will be displayed.
Advantages:
○ Program runs faster because code is already translated.
○ Program will be distributed in machine code, thus protecting intellectual rights, and
avoiding many issues of interpretation.
○ Compiled/machine code is usually smaller than source code and interpreted code,
thus reducing file size and resource requirements.
Disadvantages:
○ Harder to debug, as the entire program must be translated before errors are reported.
○ The whole program needs to be recompiled when changes are made.
○ Because the code will be in machine language, the code will be machine specific,
which means the program will need to be recompiled using a compiler for the specific
processor or OS.
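A rough way to see the difference from inside Python (purely illustrative; Python's compile()
produces bytecode rather than machine code, so this only mirrors the idea):

```python
source = """
total = 0
for i in range(5):
    total += i
print("total is", total)
"""

# 'Compilation' style: the whole program is translated first; any syntax error
# would be reported here, before a single statement runs.
code_object = compile(source, "<example>", "exec")
exec(code_object)            # the translated code can be executed (and re-executed) later

# 'Interpretation' style: translate and execute one statement at a time, so
# earlier statements have already run by the time a later error is found.
for statement in ["x = 10", "print(x * 2)", "print(x +)"]:
    try:
        exec(statement)
    except SyntaxError as err:
        print("stopped at a syntax error:", err.msg)
        break
```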
Lexical analysis
Lexical analysis is the process of converting source code, one character at a time, into a
sequence of tokens.
Each character is read according to the rules of the specific programming language being
used. Characters such as spaces, indentation and comments are discarded. Programming
languages will have a predefined table of symbols (a symbol table) which source code will
be compared to, character by character. If there is a match e.g. WHILE, that part of the
source code (called a lexeme, which is a string of characters) will be replaced by a token.
Symbols created by the programmer, which aren't predefined, will be given a token, and will
be added to the symbol table. This will continue until the entire source code has been
tokenised.
Error messages may be produced during lexical analysis as a result of incorrect symbols,
invalid names, undeclared identifiers etc. This is because they don't conform with the specific
programming language's rules.
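A very rough sketch of the idea in Python (the keyword table, token names and the sample line
are invented for illustration; a real lexer follows the language's full grammar):

```python
import re

# Predefined symbol table of keywords for a toy language.
KEYWORDS = {"WHILE", "ENDWHILE", "IF", "PRINT"}
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("NAME",   r"[A-Za-z_]\w*"),   # keyword or programmer-defined identifier
    ("OP",     r"[+\-*/=<>]"),
    ("SKIP",   r"[ \t]+"),         # spaces/indentation are discarded
]

def tokenise(source):
    symbol_table = {}              # programmer-defined identifiers get added here
    tokens = []
    pos = 0
    while pos < len(source):
        for name, pattern in TOKEN_SPEC:
            match = re.match(pattern, source[pos:])
            if match:
                lexeme = match.group()       # the matched string of characters
                pos += len(lexeme)
                if name == "SKIP":
                    break
                if name == "NAME" and lexeme.upper() in KEYWORDS:
                    tokens.append(("KEYWORD", lexeme.upper()))
                elif name == "NAME":
                    symbol_table.setdefault(lexeme, len(symbol_table))
                    tokens.append(("IDENTIFIER", lexeme))
                else:
                    tokens.append((name, lexeme))
                break
        else:
            raise SyntaxError(f"unrecognised character {source[pos]!r}")
    return tokens, symbol_table

print(tokenise("WHILE count < 10"))
```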
Syntactical analysis
Syntactic analysis, or parsing, is an examination which 'tests the validity' of whether the
identified elements (those assigned with a token) of a statement are combined together in a
way that is legal according to the syntax of the language. This is because the syntax not only
consists of rules to determine the elements of the language, but also the ways in which these
elements interact, e.g. age = 17 is syntactically correct as it is assigning a value to a variable,
whereas age = "ABC" + 17 is syntactically incorrect (a string can't be added to an integer).
There are two aims of a syntactical analysis: (1st) to ensure the source code adheres to the
specific language's syntax rules, and (2nd) to check the validity and integrity of data types.
Step 1: Parsing is the process of actually checking the syntax of a statement (it's the tokens
produced by the lexical analysis that are being 'parsed'). A parse tree is created using the
tokens from the lexical analysis, then each statement is compared against the language's
specific syntax rules. If any errors occur at this stage, i.e. the tokens cannot be parsed, it
means the tokens are arranged incorrectly, which is a syntax error.
Step 2: Type checking "tests the validity and integrity of data types" (i.e. makes sure strings
aren't added to integers); appropriate storage locations are also allocated.
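A toy sketch of the type-checking step over a token stream (the token format and the single
'no mixing strings and integers' rule are my own simplifications, not how a real compiler is
structured):

```python
# Tokens as a hypothetical lexical analysis might produce for: age = "ABC" + 17
tokens = [("IDENTIFIER", "age"), ("ASSIGN", "="),
          ("STRING", '"ABC"'), ("OP", "+"), ("NUMBER", "17")]

def type_check(tokens):
    # Collect the data types of the operands on the right-hand side.
    operand_types = {kind for kind, _ in tokens if kind in ("STRING", "NUMBER")}
    if len(operand_types) > 1:
        raise TypeError("type check failed: strings cannot be added to integers")
    return True

try:
    type_check(tokens)
except TypeError as err:
    print(err)      # -> type check failed: strings cannot be added to integers
```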
Code generation
Code is generated once the lexical and syntactical analyses are completed without error.
This is where the tokens are converted to machine code; no errors will be found in this stage.
If an interpreter is used, each statement will be executed as soon as it's translated.
The parse tree is traversed from left to right when generating the code.
If a compiler is being used, the resulting machine code is stored in an object file. A linker
links runtime libraries and dynamic link libraries (DLLs) required for execution. (A dynamic
link library, or DLL, is a file containing object code subroutines that add extra functionality to
high level languages. They need to be distributed with applications - they may just exist in a
folder or may be installed - so that they are present on the user's computer.)
If an interpreter is being used then each instruction, once generated, will be executed
immediately. The interpreter takes care of access to runtime libraries.
The role of machine code in the execution of a program
Machine code
1. Word size: the number of bits the processor can process in one go. Generally 32 bits or
64 bits in modern computers.
2. Register: temporary storage location within the CPU. The size of the register is the
same as the word size of the processor.
○ Accumulator: general purpose register used to accept the results of
computations from the ALU.
○ Program counter: stores the address of the next instruction to be executed by
the CPU (it is normally just incremented by one, or reset/loaded when a jump occurs).
○ Instruction register: stores the current instruction being executed by the CPU.
Every CPU contains three basic units (which are all combinations of circuits):
• Arithmetic Logic Unit (ALU): carries out all the arithmetic (e.g. +, -, /, x) and logic
(AND, OR, NOT, NAND, XOR etc.) operations in the CPU.
https://www.techopedia.com/definition/2849/arithmetic-logic-unit-alu
• Control Unit (CU): decodes instructions to be executed (i.e. determines the nature
of each instruction).
• Registers: explained above.
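A toy fetch-decode-execute loop in Python showing how the program counter, instruction
register and accumulator interact (the three-instruction 'machine language' is entirely made up
for illustration):

```python
# A made-up machine language: each instruction is (opcode, operand).
program = [
    ("LOAD", 5),     # accumulator <- 5
    ("ADD", 3),      # accumulator <- accumulator + 3
    ("HALT", 0),
]

program_counter = 0      # address of the next instruction to execute
accumulator = 0          # general purpose register accepting ALU results

while True:
    instruction_register = program[program_counter]    # fetch the current instruction
    program_counter += 1                                # point at the next instruction
    opcode, operand = instruction_register              # decode (the control unit's job)
    if opcode == "LOAD":                                # execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand                          # the ALU performs the arithmetic
    elif opcode == "HALT":
        break

print("accumulator =", accumulator)                     # -> accumulator = 8
```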
Types of errors:
1. Syntax: syntax errors are those that break the rules of the specific programming
language. They prevent the translation of code and are found during translation (the
lexical and syntactical analyses). Examples would be typing and spelling mistakes of
keywords or data types.
2. Logic: these occur in syntactically correct code which doesn't produce the expected output
or complete the desired task. They are the most difficult to debug as there is
technically nothing wrong with the code. An example would be Avg = 1 + 2 / 2 instead
of Avg = (1 + 2) / 2. If the developer cannot find the issue, peer checking and desk
checking may help.
3. Runtime: these aren't detected until the program is run. They can be caused by:
○ Arithmetic overflow: when the result of a calculation is too large for the
allocated storage area. For example, a 16-bit integer can only hold values up to
approximately ±32767, so producing a value larger than this would cause an
arithmetic overflow (a runtime error).
○ Division by zero: mathematically, dividing any number by zero is undefined and
cannot be done.
○ Accessing inappropriate memory locations: for example, creating an array with
10 items, and trying to access the 11th.
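A quick Python sketch of a logic error and two runtime errors (the values are arbitrary; note
that plain Python integers don't overflow, so the overflow case isn't shown):

```python
# Logic error: syntactically fine, but operator precedence gives the wrong answer.
avg_wrong = 1 + 2 / 2        # evaluates to 2.0, not the intended average
avg_right = (1 + 2) / 2      # evaluates to 1.5

# Runtime error: division by zero is only detected when the line actually runs.
try:
    result = 10 / 0
except ZeroDivisionError:
    print("runtime error: division by zero")

# Runtime error: accessing an inappropriate memory location
# (an array of 10 items, then trying to access the 11th).
items = [0] * 10
try:
    print(items[10])         # valid indexes are 0-9, so this is the '11th' item
except IndexError:
    print("runtime error: index out of range")
```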
Debugging tools:
1. Breakpoints: temporarily halting the execution of a program. Often used to inspect
the program at a specific point/line.
2. Resetting variable contents: sometimes, it can be helpful to change a variable to one
that is known (e.g. simple, easy numbers as opposed to large, complex numbers in a
mathematical calculation) to easily test the output.
3. Program traces: allow the order of execution of statements, or the changing contents of
variables, to be tracked. A trace is a log of the program's execution and analyses the
flow of execution. Some development environments display each line of code as it's
executed; others allow you to analyse the call stack (which shows info about any
active subroutines).
4. Single line stepping: halting execution after each statement is executed.
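A minimal Python illustration of a breakpoint, a print-style trace and single stepping
(breakpoint() drops you into Python's debugger, pdb, where you can step line by line and
inspect or reset variables; the function and values here are made up):

```python
def average(total, count):
    # breakpoint()          # uncomment to halt here and inspect/reset variables in pdb
    result = total / count
    print(f"TRACE: total={total} count={count} result={result}")   # simple program trace
    return result

average(30, 4)   # in pdb you could type 'n' to single-step and 'p result' to inspect a variable
```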
Emerging technologies
*Davis textbook doesn’t say much about this, only (slightly) notable info is spoken about
below*
• Game consoles which include various motion, video and sound sensors.
• Biometrics e.g. finger prints, iris, face recognition and other biometric scanning
devices.
• Scanning pens to scan text, phone camera to scan QR codes, smart card reader within
eftpos, RFID for stock control.
• Mind control devices which detect facial expressions, emotional state and cognitive
process using 14 sensors.
6. Testing and Evaluating Software Solutions
Levels of testing
Module testing:
• Each module is tested independently to make sure it works on its own, without any
external influences.
• A driver may need to be used to substitute for the mainline in order to provide inputs
and accept outputs (a small sketch of a driver and a stub appears after the program
testing notes below).
• Will usually use both black and white box testing.
Program testing:
• Tests to see that all modules (the entire program) work together as a whole.
• Concentrates on the interface between modules (because after module testing,
everything should be working, right?).
• Program testing is usually done using bottom up or top down testing:
○ Bottom-up testing incorporates each module into the program for testing, one
at a time, from the lowest/deepest modules up to the higher modules. Drivers
will be used to sub for the mainline. The program as a whole will be tested last.
○ Top-down testing starts with the main program and uses stubs for some
modules as it incorporates each module into the program.
○ The choice of testing is usually a matter of preference.
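A small Python sketch of a driver and a stub (the module names and values are invented; in
real top-down testing the stub would be replaced once calculate_gst is actually written):

```python
# Module under test.
def calculate_total(prices):
    return sum(prices) + calculate_gst(prices)

# Stub: stands in for a module that hasn't been written yet (top-down testing).
def calculate_gst(prices):
    return 0.0                       # fixed, known value so calculate_total can run

# Driver: substitutes for the mainline, feeding test inputs and checking outputs.
def driver():
    test_prices = [10.0, 20.0]
    result = calculate_total(test_prices)
    print("PASS" if result == 30.0 else "FAIL", "- got", result)

driver()
```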
System testing:
• System testing is the testing of the program outside of its development environment,
so basically, real world testing. This is usually done by those outside of the
development team.
• This is done because the software may work perfectly in the controlled (or high-end)
developers' environment and on their computers, but behave differently elsewhere.
• Usually black box testing (as you can't really white box test a user's computer)
• Acceptance testing is to determine whether the program has met its requirements. So
basically, if it is of acceptable quality for delivery.
Communication with those for whom the solution has been developed
1. Test results: results should be communicated to both developers and clients and
should cover both positive and negative results. Test results should contain info about
problems identified (e.g. bugs, long response times etc.) and limitations of the
program.
2. Comparison with the original design specification: the final report should detail the
way in which the solution meets its design specifications.
Documenting changes
1. Source code documentation: includes comments and intrinsic documentation.
2. Updating associated hardcopy documentation: hardcopy documentation is rarely
used anymore; replacing it with online help is becoming more popular.
3. Use of CASE tools to monitor changes and versions: CASE tools aid in tracking changes
such as documentation, testing, data structures, bug fixing etc. Version control CASE
tools keep track/manage different versions of each module in the software. This is
done to the extent that previous versions of the software could be completely
reconstructed/restored if the developer wishes. (If asked about CASE tools to monitor
changes and versions, speak about tracking changes and version control.)
8. Developing A Solution Package
*more of a practical chapter which integrates previous chapters. Notes don’t seem
necessary, rather just read over the chapter to know how all the theory is implemented*
10. The Interrelationship Between Software and Hardware
*Note: I didn't find it very practical to write notes for this chapter. It is heavily based on
fundamental concepts and your ability to understand and make use of these concepts.*
Rough/random notes
ASCII vs Unicode (HSC sample answers)
• Unicode can represent many more characters than ASCII (more than 1 million
compared with 128). Unicode is a 'superset' of ASCII and can therefore represent all
ASCII characters.
• The characters of most languages can be represented using Unicode, however the
letters represented by ASCII are in English only.
http://www.differencebetween.net/technology/software-technology/difference-between-
unicode-and-ascii/
http://stackoverflow.com/questions/19212306/difference-between-ascii-and-unicode
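A quick Python check of the difference (just a demonstration of the character counts and
encodings, not from the sample answers):

```python
print(ord("A"))                 # 65 - fits within ASCII's 128 characters
print("A".encode("ascii"))      # b'A' - one byte

text = "café"                   # 'é' is outside ASCII but within Unicode
print(text.encode("utf-8"))     # b'caf\xc3\xa9' - 'é' takes two bytes in UTF-8
try:
    text.encode("ascii")
except UnicodeEncodeError:
    print("'é' cannot be represented in ASCII")
```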
Quick notes
1. A latch is a circuit that is used to store 1 binary digit. (it’s the S/R flip flop which just
loops around).
2. The clocking component allows us to control when a latch is allowed to change state
(where a 0 means no changes can occur and a 1 means changes are allowed to
occur...think of it as a gate where 1 is open and 0 is closed)
3. A D-latch is where the R input is removed; since R is meant to be the inverse of the
data input (D, which feeds S), you can achieve the same outcome by driving R from D
through a NOT gate, as long as the clocking component is present.
4. The edge trigger component is used to ensure the flip-flop only changes state once for
each tick (a tick is a 1) of the clock. They consist of two latch circuits, a master and a
slave.
Shift registers are used to store binary data. They allow data to be moved into and
out of the register, and the flip-flops all share a common clock.
a. This (shift registers) is what is used to perform binary multiplication and
division.
A shift of one bit to the left multiplies by two and a shift of one bit to the right
divides by two.
b. They are just flip flops grouped together in a chain.
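The multiply/divide-by-two behaviour is easy to check in Python, and a shift register itself can
be faked with a list of bits (the 4-bit register below is just for illustration):

```python
x = 6
print(x << 1)                    # 12 - shifting one bit left multiplies by two
print(x >> 1)                    # 3  - shifting one bit right divides by two

# A 4-bit shift register modelled as a list of bits (most significant bit first).
register = [0, 1, 1, 0]                      # holds binary 0110 = 6
shifted_left = register[1:] + [0]            # [1, 1, 0, 0] = binary 1100 = 12
shifted_right = [0] + register[:-1]          # [0, 0, 1, 1] = binary 0011 = 3
print(shifted_left, shifted_right)
```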