

Information Search and Analysis Skills ( ISAS )

SOFTWARE SECURITY

Name                : Aris Suryadi
Registration number : R063015400051
Faculty Advisor     : Mr. Joko Handoko
Semester            : 4th

Center for Computing and Information Technology


Gedung Pasca Sarjana Lt. 3, Fakultas Teknik, Universitas Indonesia


CHAPTER I INTRODUCTION
I.1. Background
Information security breaches pose a significant and increasing threat to national security and economic wellbeing. According to the Symantec Internet Security Threat Report (2003), each company surveyed experienced on average 30 attacks per week. These attacks often exploited software defects or vulnerabilities, and anecdotal evidence suggests that losses from such cyber-attacks can run into the millions. Software vendors, including Microsoft, have announced their intention to increase the quality of their products and reduce vulnerabilities. Despite this, it is likely that vulnerabilities will continue to be discovered and disclosed in the foreseeable future. Often, vulnerability discoverers report vulnerabilities to vendors and keep them secret for a time, to allow the vendors to develop a patch. The argument was that the vendor would come up with a workaround strategy or a patch and make the vulnerability public in due course, balancing the costs of patching and disclosure against the benefits. However, many discoverers came to believe that disclosure was frequently delayed or inadequate, leading to the creation of full-disclosure mailing lists, such as Bugtraq, in the late 90s. The proponents of full disclosure claim that the threat of instant disclosure increases public awareness, puts pressure on the vendors to issue high-quality patches quickly, and improves the quality of software over time. But many believe that disclosure of vulnerabilities, especially without a good patch, is dangerous, for it leaves users defenseless against attackers.
Growth in information and communications technology is currently very fast and has a very significant effect on communities as well as individuals, influencing every
aspect of activity: daily life, work, ways of learning, lifestyles, and ways of thinking. Therefore, the use of information and communications technology has to be introduced widely to society, so that people have enough knowledge and experience to apply it in everyday activities, especially in the field of information technology. Bringing information and communications technology into society helps people learn it and use all of its potential to develop their own abilities. Studying information and communications technology gives people both the means and the motivation to learn and to work independently.
As we know, security is closely related to technology. It is used in many sides of life, such as office security, home security, health security and more. Using technology for security is very useful, which means technology and security can be a good combination. As programmers, besides writing the code and analyzing the lifecycle of the application, we must also look at the security side, for example by registering the application online or requiring a serial number. Many programmers think that this should not always be the first priority; nevertheless, we can secure our software using a license. There are many techniques to secure an application: some use online registration, entering a serial number, limiting the number of copies, limiting the usage time, and many others. But if we look at the other side, many underground communities can crack these protections, and the technique they use is reversing.
Software protection is one of the most important issues concerning computer practice. There exist many heuristics and ad-hoc methods for protection, but the problem as a whole has not received the theoretical treatment it deserves.
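As an illustration of the serial-number technique mentioned above, the following sketch shows the basic idea of validating a key typed in by the user. The XXXX-XXXX-XXXX format and the checksum rule are invented for this example; real products use their own, usually stronger, schemes.

#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Hypothetical format: XXXX-XXXX-XXXX, where the last group holds a
 * 4-hex-digit checksum of the first two groups. Illustration only. */
static int serial_is_valid(const char *serial)
{
    unsigned sum = 0;
    unsigned stated = 0;

    if (strlen(serial) != 14 || serial[4] != '-' || serial[9] != '-')
        return 0;
    for (int i = 0; i < 9; i++) {
        if (i == 4) continue;                     /* skip the first dash */
        if (!isalnum((unsigned char)serial[i]))
            return 0;
        sum = sum * 31 + (unsigned char)serial[i];
    }
    /* the last group states the checksum as four hex digits */
    if (sscanf(serial + 10, "%4x", &stated) != 1)
        return 0;
    return (sum & 0xFFFF) == stated;
}

int main(void)
{
    char key[64];
    printf("Enter serial number: ");
    if (fgets(key, sizeof key, stdin)) {
        key[strcspn(key, "\r\n")] = '\0';         /* strip the newline */
        puts(serial_is_valid(key) ? "Registered." : "Invalid serial.");
    }
    return 0;
}

Of course, as discussed later in this paper, a check like this can be located and patched by a reverser, which is why serious protection layers it with obfuscation, encryption or online validation.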


In this paper we provide a theoretical treatment of software protection. We reduce the problem of software protection to the problem of efficient simulation on an oblivious machine. A machine is oblivious if the sequence in which it accesses memory locations is equivalent for any two inputs with the same running time. For example, an oblivious Turing Machine is one for which the movement of the heads on the tapes is identical for each computation. (Thus, it is independent of the actual input.) What is the slowdown in the running time of any machine, if it is required to be oblivious? In 1979 Pippenger and Fischer showed how a two-tape oblivious Turing Machine can simulate, on-line, a one-tape Turing Machine, with a logarithmic slowdown in the running time. We show an analogous result for the random-access machine (RAM) model of computation. In particular, we show how to do an on-line simulation of an arbitrary RAM by a probabilistic oblivious RAM with a poly-logarithmic slowdown in the running time. On the other hand, we show that a logarithmic slowdown is a lower bound. Software is very expensive to create and very easy to steal. Software piracy is a major concern (and a major loss of revenue) to all software-related companies. Software pirates borrow or rent the software they need, copy it to their computer and use it without paying anything for it. Thus, the question of software protection is one of the most important issues concerning computer practice. The problem is to sell programs that can be executed by the buyer, yet cannot be redistributed by the buyer to other users. Much engineering effort is put into trying to provide software protection, but this effort seems to lack theoretical foundations. In particular, there is no crisp definition of what the problems are and what should be considered as a satisfactory solution.
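To make the notion of obliviousness defined above concrete, here is a small sketch of my own (not taken from the paper being summarized) that contrasts a direct array lookup, whose memory access pattern reveals the secret index, with a linear-scan lookup whose access pattern is identical for every index:

#include <stddef.h>

/* Non-oblivious: the single address touched depends on the secret index. */
int lookup_direct(const int *table, size_t n, size_t secret_index)
{
    (void)n;
    return table[secret_index];
}

/* Oblivious: every element is read in the same order regardless of the
 * secret index, so the access sequence leaks nothing about it. The cost
 * is a linear slowdown; oblivious-RAM constructions achieve the same
 * hiding with only poly-logarithmic overhead. */
int lookup_oblivious(const int *table, size_t n, size_t secret_index)
{
    int result = 0;
    for (size_t i = 0; i < n; i++) {
        /* branchless select: keep table[i] only when i == secret_index */
        int match = (i == secret_index);
        result = result * (1 - match) + table[i] * match;
    }
    return result;
}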


In this paper, we provide a theoretical treatment of software protection by distilling a key problem and solving it efficiently. Before going any further, we distinguish between two folklore notions: the problem of protection against illegitimate duplication and the problem of protection against redistribution (or fingerprinting software). Loosely speaking, the first problem consists of ensuring that there is no efficient method for creating executable copies of the software, while the second problem consists of ensuring that only the software producer can prove in court that he designed the program.
I.2. Objective
The intention of this paper is to explain in more detail how to analyze the security of an application. I hope it can be useful for every programmer and system analyst. By studying standard ways of securing an application, we can learn how to build software more securely and efficiently, even for a simple application.
The writing method applied is simple: the writer gathered materials to be studied and analyzed, checked them against open sources, applied a learn-by-doing method, and drew conclusions from the resulting research.

I.3. Problem Domain


Security is a serious problem and, if present trends continue, could be much worse in the future. No simple silver bullets will solve the software security problem. As a long-term, multifaceted problem, it requires multiple solutions and the application of resources throughout the lifecycle. Improving software security and safeguarding the IT infrastructure is a research and education issue for universities; a skill, process, and incentives issue for producers; a requirements issue for customers; a quality and testing issue for providers; a maintenance and patching issue for IT administrators; an ease-of-use issue for users; a configuration issue for installers; and an enforcement issue for governments. There are many securing techniques, whether for networked or offline applications. In this paper the writer focuses on software security: how to analyze the security of an application and how to defend our software from piracy or reversing.
I.4. Methodology
My methodology is learn and do: first I collect the research material, then I analyze it, and after doing the research I write the report.

I.5. Writing Structure
To make the discussion more directed, the report is organized in the following way:
Chapter I, Introduction, elaborates the background of the problem, the definition of the problem, the purposes and objectives, the benefits, the approach method used, and the writing structure.
Chapter II, Basis of Theory,
elaborates the various theoretical material, covering the concepts of analysis, systems and design, the system life cycle, systems-analysis tools, a description of software, and security.
Chapter III, Solution and Analysis, discusses in more detail how to defend software, how to analyze the way an application is built, and how it works.
Chapter IV, Conclusion and Suggestions, contains the writer's conclusions from the research experience and the steps taken, along with useful suggestions.


CHAPTER II BASIS OF THEORY
II.1 History of Software Engineering


Software engineering has evolved steadily from its founding days in the 1940s until today in the 2000s. Applications have evolved continuously. The ongoing goal of improving technologies and practices seeks to improve the productivity of practitioners and the quality of applications for users. The most important development was that new computers were coming out almost every year or two, rendering existing ones obsolete. Software people had to rewrite all their programs to run on these new machines. Programmers did not have computers on their desks and had to go to the "machine room". Jobs were run by signing up for machine time or by operational staff. Jobs were run by putting punched cards for input into the machine's card reader and waiting for results to come back on the printer. The field was so new that the idea of management by schedule was non-existent. Making predictions of a project's completion date was almost impossible. Computer hardware was application-specific; scientific and business tasks needed different machines. Due to the need to frequently translate old software to meet the needs of new machines, high-order languages like FORTRAN, COBOL, and ALGOL were developed. Hardware vendors gave away systems software for free, as hardware could not be sold without software. A few companies sold the service of building custom software, but no software companies were selling packaged software. The notion of reuse flourished. As software was free, user organizations commonly gave it away. Groups like IBM's scientific user group SHARE offered catalogs of reusable components. Academia did not yet teach the
principles of computer science. Modular programming and data abstraction were already being used in programming. The term software engineering was first used in the late 1950s and early 1960s. Programmers have always known about civil, electrical, and computer engineering and debated what engineering might mean for software. The NATO Science Committee sponsored two conferences on software engineering, in 1968 (Garmisch, Germany) and 1969, which gave the field its initial boost. Many believe these conferences marked the official start of the profession of software engineering. Software engineering was spurred by the so-called software crisis of the 1960s, 1970s, and 1980s, which identified many of the problems of software development. Many software projects ran over budget and schedule. Some projects caused property damage. A few projects caused loss of life. Some used the term software crisis to refer to their inability to hire enough qualified programmers. The software crisis was originally defined in terms of productivity, but evolved to emphasize quality. Cost and Budget Overruns: The OS/360 operating system was a classic example. This decade-long project from the 1960s and 1970s eventually produced one of the most complex software systems ever created. OS/360 was one of the first large (1000-programmer) software projects. Fred Brooks claims in The Mythical Man-Month that he made a multi-million dollar mistake by not developing a coherent architecture before starting development. Property Damage: Software defects can cause property damage. Poor software security allows hackers to steal identities, costing time, money, and reputations.
Life and Death: Software defects can kill. Some embedded systems used in radiotherapy machines failed so catastrophically that they administered lethal doses of radiation to patients. Peter G. Neumann has kept a contemporary list of software problems and disasters at Computer Risks. The software crisis has been slowly fizzling out, because it is unrealistic to remain in crisis mode for more than 20 years. SEs are accepting that the problems of SE are truly difficult and that only hard work over many decades can solve them. For decades, solving the software crisis was paramount to researchers and companies producing software tools. Seemingly, they trumpeted every new technology and practice from the 1970s to the 1990s as a silver bullet to solve the software crisis. Tools, discipline, formal methods, process, and professionalism were touted as silver bullets. Tools: Structured programming, object-oriented programming, CASE tools, Ada, Java, documentation, standards, and the Unified Modeling Language were especially emphasized and touted as silver bullets. Discipline: Some pundits argued that the software crisis was due to the lack of discipline of programmers. Formal methods: Some believed that if formal engineering methodologies were applied to software development, then production of software would become as predictable an industry as other branches of engineering. They advocated proving all programs correct. Process: Many advocated the use of defined processes and methodologies like the Capability Maturity Model. Professionalism: This led to work on a code of ethics, licenses, and professionalism. Cheap Asian labour then led to the mass lay-offs of
(close to pensionable) North American design staff by the early 21st century, as the shipment of jobs to foreign countries increased dramatically, not long before it became established that Asian-based project support costs far exceeded project costs in North America. In 1987, Fred Brooks published the No Silver Bullet article, arguing that no individual technology or practice would ever make a 10-fold improvement in productivity within 10 years. Debate about silver bullets raged over the following decade. Advocates for Ada, components, and processes continued arguing for years that their favorite technology would be a silver bullet. Skeptics disagreed. Eventually, almost everyone accepted that no silver bullet would ever be found. Yet, claims about silver bullets pop up now and again, even today. Some interpret no silver bullet to mean that SE failed. The search for a single key to success never worked. All known technologies and practices have only made incremental improvements to productivity and quality. Yet, there are no silver bullets for any other profession, either. Others interpret no silver bullet as proof that SE has finally matured and recognized that projects succeed due to hard work. However, it could also be said that there are, in fact, a range of silver bullets today, including lightweight methodologies (see "Project management"), spreadsheet calculators, customized browsers, in-site search engines, database report generators, integrated design-test coding-editors with memory/differences/undo, and specialty shops that generate niche software, such as information websites, at a fraction of the cost of totally customized website development. Nevertheless, the field of software engineering appears too complex and diverse for a single "silver bullet" to improve most issues, and each issue accounts for only a small portion of all software problems.


The rise of the Internet, based on pre-planned government-sponsored technology, led to very rapid growth in the demand for international information display/e-mail systems on the world wide web. Programmers were required to handle illustrations, maps, photographs, and other images, plus simple animation, at a rate never before seen, with few well-known methods to optimize image display/storage (such as the use of thumbnail images). The growth of browser usage, running on the HTML language, changed the way in which information-display and retrieval was organized. The wide-spread network connections led to the growth and prevention of international computer viruses on MS Windows computers, and the vast proliferation of spam e-mail became a major design issue in e-mail systems, flooding communication channels and requiring semi-automated pre-screening. Keyword-search systems evolved into web-based search engines, and many software systems had to be re-designed, for international searching, depending on Search Engine Optimization (SEO) techniques. Human natural-language translation systems were needed to attempt to translate the information flow in multiple foreign languages, with many software systems being designed for multi-language usage, based on design concepts from human translators. Typical computer-user bases went from hundreds, or thousands of users, to, often, many-millions of international users. With the expanding demand for software in many smaller organizations, the need for inexpensive software solutions led to the growth of simpler, faster methodologies that developed running software, from requirements to deployment, quicker & easier. The use of rapid-prototyping evolved to entire lightweight methodologies, such as Extreme Programming (XP), which attempted to simplify many areas of software engineering, including requirements gathering and reliability testing for the growing, vast number of small software systems. Very large software systems still used heavily-documented methodologies, with many volumes in the documentation set; however, smaller systems had a simpler, faster

alternative approach to managing the development and maintenance of software calculations and algorithms, information storage/retrieval and display. There are a number of areas where the evolution of software engineering is notable: Emergence as a profession: By the early 1980s, software engineering had already emerged as a bona fide profession, to stand beside computer science and traditional engineering. See also software engineering professionalism. Role of women: In the 1940s, 1950s, and 1960s, men often filled the more prestigious and better paying hardware engineering roles, but often delegated the writing of software to women. Grace Hopper, Jamie Fenton and many other unsung women filled many programming jobs during the first several decades of software engineering. Today, many fewer women work in software engineering than in other professions, a complex problem related to sexual discrimination, cyberculture, education, and individual identity, and one which many academic and professional organizations are trying hard to solve. Processes and Methodology: Processes and methodologies have become big parts of software engineering, and are both hailed for their potential to improve software and sharply criticized for their potential to constrict programmers. Cost of hardware: The relative cost of software versus hardware has changed substantially over the last 50 years. When mainframes were expensive and required large support staffs, the few organizations buying them also had the resources to fund large, expensive custom software engineering projects. Computers are now much more numerous and much more powerful, which has several effects on software. The larger market can support large projects to create commercial off-the-shelf software, as done by companies such as Microsoft. The cheap machines allow each programmer to have a terminal capable of fairly rapid compilation. The programs in question can use techniques such as garbage collection, which
make them easier and faster for the programmer to write, although slower for the machine to run. On the other hand, many fewer organizations are interested in employing programmers for large custom software projects, instead using commercial off-the-shelf software as much as possible. Privacy software is software built to protect the privacy of its users. The software typically works in conjunction with Internet usage to control or limit the amount of information made available to third parties. The software can apply encryption or filtering of various kinds. Privacy software can refer to two different types of protection. One type is protecting a user's Internet privacy from the World Wide Web. There are software products that will mask or hide a user's IP address from the outside world in order to protect the user from identity theft. The other type of protection is hiding or deleting the user's Internet traces that are left on their PC after they have been surfing the Internet. There is software that will erase all the user's Internet traces and there is software that will hide and encrypt a user's traces so that others using their PC will not know where they have been surfing. The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships between them. The term also refers to documentation of a system's software architecture. Documenting software architecture facilitates communication between stakeholders, documents early decisions about high-level design, and allows reuse of design components and patterns between projects. The field of computer science has come across problems associated with complexity since its formation. Earlier problems of complexity were solved by developers by choosing the right data structures, developing algorithms, and by applying the concept of separation of concerns. Although the term software
architecture is relatively new to the industry, the fundamental principles of the field have been applied sporadically by software engineering pioneers since the mid-1980s. Early attempts to capture and explain the software architecture of a system were imprecise and disorganized, often characterized by a set of box-and-line diagrams. During the 1990s there was a concentrated effort to define and codify fundamental aspects of the discipline. Initial sets of design patterns, styles, best practices, description languages, and formal logic were developed during that time. The software architecture discipline is centered on the idea of reducing complexity through abstraction and separation of concerns. To date there is still no agreement on the precise definition of the term software architecture. As a maturing discipline with no clear rules on the right way to architect a system, the action of architecting is still a composition of art and science. The art aspect of software architecture is due to the fact that a commercial software system supports some aspect of a business or a mission. How a system supports key business drivers is described via scenarios, while the non-functional requirements of a system, also known as quality attributes, determine how a system will behave. Every system is unique due to the nature of the business drivers it supports; as such, the degree of quality attributes exhibited by a system, such as fault-tolerance, backward compatibility, extensibility, reliability, maintainability, availability, security, usability, and other such "-ilities", will vary with each implementation. To bring a software architecture user's perspective into the software architecture, it can be said that software architecture gives the direction to take steps and do the tasks involved in each such user's speciality area and interest, e.g. the stakeholders of software systems, the software developer, the software system operational support group, the software maintenance specialists, the deployer, the tester and also the business end user. In this sense software architecture is really the amalgamation of the multiple perspectives a system always embodies. The fact that those several different perspectives can be put together into a software architecture
stands as the vindication of the need for, and the justification of, creating a software architecture before software development in a project attains maturity. The origin of software architecture as a concept was first identified in the research work of Edsger Dijkstra in 1968 and David Parnas in the early 1970s. The scientists emphasized that the structure of a software system matters and that getting the structure right is critical. The study of the field has increased in popularity since the early 1990s, with research work concentrating on architectural styles (patterns), architecture description languages, architecture documentation, and formal methods. Research institutions have played a prominent role in furthering software architecture as a discipline. Mary Shaw and David Garlan of Carnegie Mellon wrote a book titled Software Architecture: Perspectives on an Emerging Discipline in 1996, which brought forward the concepts in software architecture, such as components, connectors, styles and so on. The University of California, Irvine's Institute for Software Research's efforts in software architecture research are directed primarily at architectural styles, architecture description languages, and dynamic architectures. ANSI/IEEE 1471-2000: Recommended Practice for Architecture Description of Software-Intensive Systems is the first formal standard in the area of software architecture, and was recently adopted by ISO as ISO/IEC DIS 25961. Architecture description languages (ADLs) are used to describe a software architecture. Several different ADLs have been developed by different organizations, including Wright (developed by Carnegie Mellon), Acme (developed by Carnegie Mellon), xADL (developed by UCI), Darwin (developed by Imperial College London), and DAOP-ADL (developed by the University of Málaga). Common elements of an ADL are component, connector and configuration.


Software architecture is commonly organized in views, which are analogous to the different types of blueprints made in building architecture. Within the ontology established by ANSI/IEEE 1471-2000, views are instances of viewpoints, where a viewpoint exists to describe the architecture in question from the perspective of a given set of stakeholders and their concerns. Some possible views (actually, viewpoints in the 1471 ontology) are:
Functional/logic view
Code view
Development/structural view
Concurrency/process/thread view
Physical/deployment view
User action/feedback view

Several languages for describing software architectures have been devised, but no consensus has yet been reached on which symbol-set and view-system should be adopted. The UML was established as a standard "to model systems (and not just software)," and thus applies to views about software architecture. Others believe that effective development of software relies on understanding the unique constraints of each problem, and so universal notations are doomed because each provides a notational bias that necessarily makes the notation useless or dangerous for some set of tasks. They point to the proliferation of programming languages, and a succession of failed attempts to impose a single 'universal language' on programmers, as proof that software thrives on diversity and not on standards.

II.2 Software Categories


Like many things in life, software has categories, and these help the programmer classify what has been built. In common international usage, software falls into three categories, as follows:

II.2.1 Free Software


Free software, usually called freeware, is a category of software that the user does not have to buy and is free to use, with or without a warranty. A free software licence is a software licence which grants recipients additional rights to modify and redistribute the software beyond those granted by copyright law. These freedoms would normally be prohibited by copyright law, so with free software, the copyright holder must give recipients explicit permission to do these things. Freeware is copyrighted computer software which is made available for use free of charge, for an unlimited time, as opposed to shareware, where the user is required to pay (e.g. after some trial period or for additional functionality). Authors of freeware often want to "give something to the community", but also want credit for their software and to retain control of its future development. Sometimes when programmers decide to stop developing a freeware product, they will give the source code to another programmer or release the product's source code to the public as free software. FSF-approved free software licences: the Free Software Foundation, the group that maintains The Free Software Definition, maintains a list of free software licences. The list distinguishes between free software licences that are compatible or incompatible with the FSF licence of choice, the GNU General Public License, which is a copyleft licence. The list also contains licences which the FSF considers non-free for various reasons. OSI-approved "open source" licences: another group, the Open Source Initiative, also maintains a list of approved licences. Their list differs slightly, but in almost all cases the definitions apply to the same licences. Freedom-preserving restrictions: in order to preserve the freedom to use, study, modify, and redistribute free software, most free software licences
carry requirements and restrictions which apply to distributors. There exists an ongoing debate within the free software community regarding the fine line between restrictions which preserve freedom and those which reduce it. During the 1990s, free software licences began including clauses, such as patent retaliation, in order to protect against software patent litigation cases which had not previously existed. This new threat became the primary purpose for composing the new version 2 of the GNU GPL. In the 2000s, tivoisation has emerged as yet another new threat which current free software licences are not protected from. The term freeware was coined by Andrew Fluegelman when he wanted to sell a communications program named PC-Talk that he had created, but for which he did not wish to use traditional methods of distribution because of their cost. Previously, he held a trademark on the term "freeware", but this trademark has since been abandoned. Fluegelman actually distributed PC-Talk via a process now referred to as shareware. The only criterion for being classified as "freeware" is that the software must be made available for use for an unlimited time at no cost. The software license may impose one or more other restrictions on the type of use, including personal use, individual use, non-profit use, non-commercial use, academic use, commercial use or any combination of these. For instance, the license may be "free for personal, non-commercial use." Everything created with the freeware programs can be distributed at no cost (for example graphics, documents, or sounds made by the user). There is some software which may be considered freeware, but which has limited distribution; that is, it may only be downloaded from a specific site, and cannot be redistributed. Hence, this software wouldn't be freely redistributable
software. According to the basic definition, that software would be freeware; according to stricter definitions, it wouldn't be. Freeware contrasts with free software (open-source software) because of the different meanings of the word "free". Freeware is gratis and refers to zero cost, versus free software that is described as "libre", which means free to study, change, copy, redistribute, share and use the software for any purpose. However, many programs are both freeware and free software. They are available for zero cost, provide the source code and are distributed with free software permissions. This software would exclusively be called free software to avoid confusion with freeware, which usually does not come with the source code and is therefore proprietary software.

Figure II.1 Freeware

II.2.2 History of free software


As early as the 1950s and into the 1970s, software was seen as an add-on supplied by mainframe vendors to make computers useful. Thus, programmers and developers frequently shared their software freely. This was especially common in large users groups, such as SHARE and DECUS. DECUS was the Digital Equipment Corporation (DEC) Users Group and SHARE was a user group for the IBM 701. In the 1960s and 1970s, people who received software generally had the freedoms to run it for any purpose, to study and modify the source code, and to redistribute modified versions. Software was produced largely by academics and corporate researchers working in collaboration and was not itself seen as a commodity. Operating systems, such as early versions of UNIX, were widely distributed and maintained by the community of users. Source code was distributed with software because users frequently modified the software themselves - to fix bugs, or modify the software to different hardware. Thus in this era, software was principally free software, not because of any concerted effort by software users or developers, but rather because software was developed inside an existing academic community of sharing. Additionally, early versions of UNIX were distributed freely in part because of court rulings in anti-trust cases that forbid AT&T (the company that owned the license to the software) from selling anything other than phone service. AT&T's lawyers interpreted these rulings very conservatively, and thus forbade AT&T from commercializing the software it developed. In the late 1970s and early 1980s, companies began routinely imposing restrictions on programmers with software license agreements. Sometimes this was because companies were now making money from proprietary software or they were trying to keep hardware characteristics secret by hiding the source code. Other times, the increasingly corporatised attitude in the growing and previously eclectic industry saw protecting source code and trade secrets as a norm, even if it didn't provide any benefit to business. Bill Gates signaled the change of the times in 1976 when he wrote his

Open Letter to Hobbyists, sending out the message that what hackers call "sharing" is, in his words, "stealing".

II.2.3 Shareware
Shareware is a marketing method for computer software. Shareware software is typically obtained free of charge, either by downloading from the Internet or on magazine cover-disks. A user tries out the program, and thus shareware has also been known as "try before you buy". A shareware program is accompanied by a request for payment, and the software's distribution license often requires such a payment. Usually shareware limits the usage time, so the user must register to continue using the software; registration can be done by entering a registration number, registering online, registering by telephone, and in many other ways.
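A minimal sketch of the time-limit idea described above is shown below; the 30-day period, the stamp file name, and the storage format are invented for this illustration, and a real shareware product would protect the stored date far more carefully.

#include <stdio.h>
#include <time.h>

#define TRIAL_DAYS 30
#define STAMP_FILE "install.stamp"   /* hypothetical location of the first-run timestamp */

/* Returns 1 while the trial is still running, 0 once it has expired.
 * On the very first run the current time is written as the install date. */
static int trial_still_valid(void)
{
    time_t installed, now = time(NULL);
    FILE *f = fopen(STAMP_FILE, "rb");

    if (!f) {                                   /* first run: record the date */
        f = fopen(STAMP_FILE, "wb");
        if (f) { fwrite(&now, sizeof now, 1, f); fclose(f); }
        return 1;
    }
    if (fread(&installed, sizeof installed, 1, f) != 1) {
        fclose(f);
        return 0;                               /* unreadable stamp: fail closed */
    }
    fclose(f);
    return difftime(now, installed) < TRIAL_DAYS * 24.0 * 3600.0;
}

A determined user can simply delete the stamp file or set the clock back, so in practice this check is combined with registry entries, obfuscation or online registration.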

Figure II.2 Shareware unregistered


Figure II.3 Shareware Registered

II.2.4 Open Source


The Open Software License (OSL) is a software license created by Lawrence Rosen. The Open Source Initiative (OSI) has certified it as an open source license, but the Debian project judged version 1.1 to be incompatible with the DFSG. The OSL is a copyleft license, with a termination clause triggered by filing a lawsuit alleging patent infringement. Many people in the free software / open source community feel that software patents are harmful to software, and are particularly harmful to open source software. The OSL attempts to counteract that by creating a pool of software which a user may use as long as that user does not harm the pool by attacking it with a patent lawsuit. As of the end of 2006, the OSL is not used by any well-known free software projects. The OSL has a termination clause intended to dissuade users from filing patent infringement lawsuits:


Termination for Patent Action. This License shall terminate automatically and You may no longer exercise any of the rights granted to You by this License as of the date You commence an action, including a cross-claim or counterclaim, against Licensor or any licensee alleging that the Original Work infringes a patent. This termination provision shall not apply for an action alleging patent infringement by combinations of the Original Work with other software or hardware.

Figure II.4 Open Source (ref : www.vbbego.com)

Figure II.5 The Open Source Code

II.2.5 Application Security


Application Security encompasses measures taken to prevent exceptions in the security policy of an application or the underlying system (vulnerabilities) through flaws in the design, development, or deployment of the application. Applications only control the use of resources granted to them, and not which resources are granted to them. They, in turn, determine the use of these resources by users of the application through application security. Security testing techniques scour for vulnerabilities or security holes in applications. These vulnerabilities leave applications open to exploitation. Ideally, security testing is implemented throughout the entire software development life cycle (SDLC) so that vulnerabilities may be addressed in a timely and thorough manner. Unfortunately, testing is often conducted as an afterthought at the end of the development cycle.

II.2.6 Decompiler
A decompiler is the name given to a computer program that performs the reverse operation to that of a compiler. That is, it translates a file containing information at a relatively low level of abstraction (usually designed to be computer readable rather than human readable) in to a form having a higher level of abstraction (usually designed to be human readable). The term "decompiler" is most commonly applied to a program which translates executable programs (the output from a compiler) into source code in a (relatively) high level language (which when compiled will produce an executable whose behavior is the same as the original executable program). By comparison, a disassembler translates an executable program into assembly language (an assembler could be used to assemble it back into an executable program). Decompilation is the act of using a decompiler, although the term can also refer to the decompiled output. It can be used for the recovery of lost source code, and is also useful in some cases for computer security, interoperability, error

correction, and more (see "Why Decompilation"). The success of decompilation depends on the amount of information present in the code being decompiled and the sophistication of the analysis performed on it. The first decompilation phase is the loader, which parses the input machine code program's binary file format. The loader should be able to discover basic facts about the input program, such as the architecture (Pentium, PowerPC, etc.) and the entry point. In many cases, it should be able to find the equivalent of the main function of a C program, which is the start of the user-written code. This excludes the runtime initialisation code, which should not be decompiled if possible. The bytecode formats used by many virtual machines (such as Java's JVM) often include extensive metadata and high-level features that make decompilation quite feasible. Machine code typically has much less metadata, and is therefore much harder to decompile. Some compilers and post-compilation tools obfuscate the executable code (that is, attempt to produce output that is very difficult to decompile). This is done to make it more difficult to reverse engineer the executable. The next logical phase is the disassembly of machine code instructions into a machine-independent intermediate representation (IR). For example, the Pentium machine instruction mov eax, [ebx+0x04] might be translated to the IR eax := m[ebx+4]; Idiomatic machine code sequences are sequences of code whose combined semantics is not immediately apparent from the instructions' individual semantics. Either as part of the disassembly phase, or as part of later analyses, these idiomatic sequences need to be translated into known equivalent IR. For example, the x86 assembly code:
cdq                 ; edx is set to the sign-extension of eax
xor eax, edx
sub eax, edx

could be translated to eax := abs(eax); Some idiomatic sequences are machine independent; some involve only one instruction. For example, xor eax, eax clears the eax register (sets it to zero). This can be implemented with a machine-independent simplification rule, such as a xor a = 0. In general, it is best to delay detection of idiomatic sequences, if possible, to later stages that are less affected by instruction ordering. For example, the instruction scheduling phase of a compiler may insert other instructions into an idiomatic sequence, or change the ordering of instructions in the sequence. A pattern matching process in the disassembly phase would probably not recognize the altered pattern. Later phases group instruction expressions into more complex expressions, and modify them into a canonical (standardized) form, making it more likely that even the altered idiom will match a higher level pattern later in the decompilation. A good machine code decompiler will perform type analysis. Here, the way registers or memory locations are used results in constraints on the possible type of the location. For example, an and instruction implies that the operand is an integer; programs do not use such an operation on floating point values (except in special library code) or on pointers. An add instruction results in three constraints, since the operands may be both integer, or one integer and one pointer (with integer and pointer results respectively; the third constraint comes from the ordering of the two operands when the types are different).
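Returning to the idiom-recognition rule mentioned above (xor a, a becomes a := 0), the following sketch shows how such a rewrite might look over a deliberately simplified, invented IR; it is not the data structure of any real decompiler.

#include <stddef.h>

/* A deliberately simplified three-address IR instruction. */
typedef enum { OP_XOR, OP_CONST, OP_ADD, OP_SUB } Opcode;

typedef struct {
    Opcode op;
    int    dst;    /* destination register number   */
    int    src1;   /* first source register number  */
    int    src2;   /* second source register number */
    int    imm;    /* immediate value for OP_CONST  */
} Instr;

/* Rewrite the idiom "xor r, r" as "r := 0" wherever it appears. */
static void simplify_xor_self(Instr *code, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (code[i].op == OP_XOR &&
            code[i].src1 == code[i].dst &&
            code[i].src2 == code[i].dst) {
            code[i].op  = OP_CONST;
            code[i].imm = 0;        /* a xor a == 0 for any value of a */
        }
    }
}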


Various high level expressions can be recognized which trigger recognition of structures or arrays. However, it is difficult to distinguish many of the possibilities, because of the freedom that machine code or even some high level languages such as C allow with casts and pointer arithmetic. The example from the previous section could result in the following high level code:

struct T1* ebx;
struct T1 {
    int v0004;
    int v0008;
    int v000C;
};
ebx->v000C -= ebx->v0004 + ebx->v0008;

The penultimate decompilation phase involves structuring of the IR into higher level constructs such as while loops and if/then/else conditional statements. For example, the machine code

    xor eax, eax
l0002:
    or ebx, ebx
    jge l0003
    add eax, [ebx]
    mov ebx, [ebx+0x4]
    jmp l0002
l0003:
    mov [0x10040000], eax

could be translated into:

eax = 0;
while (ebx < 0) {
    eax += ebx->v0000;
    ebx = ebx->v0004;
}
v10040000 = eax;

Unstructured code is more difficult to translate into structured code than already structured code. Solutions include replicating some code, or adding boolean variables. The majority of computer programs are covered by copyright laws. Although the precise scope of what is covered by copyright differs from region to region, copyright law generally provides the author (the programmer(s) or employer) with a collection of exclusive rights to the program. These rights include the right to make copies, including copies made into the computer's RAM memory. Since the decompilation process involves making multiple such copies, it is generally prohibited without the authorization of the copyright holder. However, because decompilation is often a necessary step in achieving software interoperability, copyright laws in both the United States and Europe permit decompilation to a limited extent. In the United States, the copyright fair use defense has been successfully invoked in decompilation cases. For example, in Sega v. Accolade, the court held that Accolade could lawfully engage in decompilation in order to circumvent the software locking mechanism used by Sega's game consoles. In Europe, the 1991 Software Directive explicitly provides for a right to decompile in order to achieve interoperability. The result of a heated debate between, on the one side, software protectionists, and, on the other, academics as well as independent software developers, Article 6 permits decompilation only if a number of conditions are met:


First, the decompiler must have a license to use the program to be decompiled. Second, decompilation must be necessary to achieve interoperability with the target program or other programs. Interoperability information may therefore not be readily available, such as through manuals or API documentation. This is an important limitation. The necessity must be proven by the decompiler. The purpose of this important limitation is primarily to provide an incentive for developers to document and disclose their products' interoperability information.

Third, the decompilation process must, if possible, be confined to the parts of the target program relevant to interoperability. Since one of the purposes of decompilation is to gain an understanding of the program structure, this third limitation may be difficult to meet. Again, the burden of proof is on the decompiler. Overall, the decompilation right provided by Article 6 is interesting, as it

codifies what is claimed to be common practice in the software industry. Few European lawsuits are known to have emerged from the decompilation right. This could be interpreted as meaning either one of two things: 1) the decompilation right is not used frequently and the decompilation right may therefore have been unnecessary, or 2) the decompilation right functions well and provides sufficient legal certainty not to give rise to legal disputes. In a recent report regarding implementation of the Software Directive by the European member states, the European Commission seems to support the second interpretation.


CHAPTER III PROBLEM ANALYSIS
III.1 Software Analysis


The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships between them. The term also refers to documentation of a system's software architecture. Documenting software architecture facilitates communication between stakeholders, documents early decisions about high-level design, and allows reuse of design components and patterns between projects. A software development process is a structure imposed on the development of a software product. Synonyms include software lifecycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process. A growing body of software development organizations implement process methodologies. Many of them are in the defense industry, which in the U.S. requires a rating based on 'process models' to obtain contracts. The international standard for describing the method of selecting, implementing and monitoring the life cycle for software is ISO 12207. The Capability Maturity Model (CMM) is one of the leading models. Independent assessments grade organizations on how well they follow their defined processes, not on the quality of those processes or the software produced. CMM is gradually replaced by CMMI. ISO 9000 describes standards for formally organizing processes with documentation.


Table II.1 Standard Software Development

ISO 15504, also known as Software Process Improvement Capability Determination (SPICE), is a "framework for the assessment of software processes". This standard is aimed at setting out a clear model for process comparison. SPICE is used much like CMM and CMMI. It models processes to manage, control, guide and monitor software development. This model is then used to measure what a development organization or project team actually does during software development. This information is analyzed to identify weaknesses and drive improvement. It also identifies strengths that can be continued or integrated into common practice for that organization or team.


Figure III.1 Security Analysis Steps

Six Sigma is a methodology to manage process variations that uses data and statistical analysis to measure and improve a company's operational performance. It works by identifying and eliminating defects in manufacturing and service-related processes. The maximum permissible defect rate is 3.4 per one million opportunities. However, Six Sigma is manufacturing-oriented and needs further research on its relevance to software development.
Software Elements Analysis: The most important task in creating a software product is extracting the requirements. Customers typically know what they want, but not what software should do; incomplete, ambiguous or contradictory requirements are recognized by skilled and experienced software engineers. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect.
Specification: Specification is the task of precisely describing the software to be written, possibly in a rigorous way. In practice, most successful specifications are written to understand and fine-tune applications that were already well-developed, although safety-critical software systems are often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable.
Software architecture: The architecture of a software system refers to an abstract representation of that system. Architecture is concerned with
making sure the software system will meet the requirements of the product, as well as ensuring that future requirements can be addressed. The architecture step also addresses interfaces between the software system and other software products, as well as the underlying hardware or the host operating system.
Implementation (or coding): Reducing a design to code may be the most obvious part of the software engineering job, but it is not necessarily the largest portion.
Testing: Testing of parts of software, especially where code by two different engineers must work together, falls to the software engineer.
Documentation: An important (and often overlooked) task is documenting the internal design of software for the purpose of future maintenance and enhancement. Documentation is most important for external interfaces.
Software Training and Support: A large percentage of software projects fail because the developers fail to realize that it doesn't matter how much time and planning a development team puts into creating software if nobody in the organization ends up using it. People are occasionally resistant to change and avoid venturing into an unfamiliar area, so as a part of the deployment phase, it's very important to have training classes for the most enthusiastic software users (to build excitement and confidence), then shift the training towards the neutral users intermixed with the avid supporters, and finally incorporate the rest of the organization into adopting the new software. Users will have lots of questions and software problems, which leads to the next phase of software.
Maintenance: Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. Not only may it be necessary to add code that does not fit the original design, but just determining how software works at some point after it is completed may require
significant effort by a software engineer. A large share of all software engineering work is maintenance, but this statistic can be misleading. A small part of that is fixing bugs. Most maintenance is extending systems to do new things, which in many ways can be considered new work. In comparison, a large share of all civil engineering, architecture, and construction work is maintenance in a similar way.

Figure III.2 Software Development Lifecycle

III.2 Application Vulnerability


In computer security, the word vulnerability refers to a weakness in a system allowing an attacker to violate the confidentiality, integrity, availability (the C.I.A. triangle of the NSTISSC model), access control, consistency or audit mechanisms of the system or the data and applications it hosts. Vulnerabilities may result from bugs or design flaws in the system. A vulnerability may exist only in theory, or it may have a known exploit. Vulnerabilities are of significant interest when the program containing the vulnerability operates with special privileges, performs
authentication or provides easy access to user data or facilities (such as a network server or RDBMS). A construct in a computer language is said to be a vulnerability when many program faults can have their root cause traced to its use. Vulnerabilities often result from the carelessness of a programmer, though they may have other causes. A vulnerability may allow an attacker to misuse an application through (for example) bypassing access control checks or executing commands on the system hosting the application. Some vulnerabilities arise from un-sanitized user input, often allowing the direct execution of commands or SQL statements (known as SQL injection). Others arise from the programmer's failure to check the size of data buffers, which can then be overflowed, causing corruption of the stack or heap areas of memory (including causing the computer to execute code provided by the attacker). The method of disclosing vulnerabilities is a topic of debate in the computer security community. Some advocate immediate full disclosure of information about vulnerabilities once they are discovered. Others argue for limiting disclosure to the users placed at greatest risk, and only releasing full details after a delay, if ever. Such delays may allow those notified to fix the problem by developing and applying patches, but may also increase the risk to those not privy to full details. This debate has a long history in security; see full disclosure and security through obscurity. More recently a new form of commercial vulnerability disclosure has taken shape, see for example TippingPoint's Zero Day Initiative which provides a legitimate market for the purchase and sale of vulnerability information from the security community.


From the security perspective, only a free and public disclosure can ensure that all interested parties get the relevant information. Security through obscurity is a concept that most experts consider unreliable. The disclosure channel should be unbiased, to enable fair dissemination of security-critical information. Most often a channel is considered trusted when it is a widely accepted source of security information in the industry (e.g. CERT, SecurityFocus, Secunia and FrSIRT). Analysis and risk rating ensure the quality of the disclosed information. A mere discussion of a potential flaw on a mailing list, or vague information from a vendor, therefore does not qualify. The analysis must include enough details to allow a concerned user of the software to assess his individual risk or take immediate action to protect his assets.

Figure III.3 The Way of Application Security

Many software tools exist that can aid in the discovery (and sometimes the removal) of vulnerabilities in a computer system. Though these tools can provide an auditor with a good overview of the possible vulnerabilities present, they cannot replace human judgment. Relying solely on scanners will yield false positives and a limited-scope view of the problems present in the system. Vulnerabilities have been found in every major operating system, including Windows, Mac OS, various forms of Unix and Linux, OpenVMS, and others. The only way to reduce the chance of a vulnerability being used against a system is constant vigilance, including careful system maintenance (e.g. applying software patches), best practices in deployment (e.g. the use of firewalls and access controls) and auditing (both during development and throughout the deployment lifecycle).

III.2.1 Protecting
Outlaws always live beside the law, so the bad and the good cannot be separated. As programmers, all we can do is minimize the holes and bugs, so that a newbie cracker or reverser will find it hard to patch or crack our software. Many techniques can be used to protect software, such as encryption or an external protector application, but at the most basic level we can protect the software with standard checks: verifying a code or serial stored somewhere in the file system or in an inconspicuous file. Many programmers keep the key code in the registry.
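
As a minimal sketch of such a registry check in classic Visual Basic, the fragment below uses the built-in SaveSetting and GetSetting functions, which store values under HKEY_CURRENT_USER\Software\VB and VBA Program Settings; the names "MyApp", "License", "KeyCode" and the helper ExpectedSerial() are hypothetical:

' Store the key code the user entered.
Private Sub SaveKeyCode(ByVal keyCode As String)
    SaveSetting "MyApp", "License", "KeyCode", keyCode
End Sub

' Read the stored key code back and compare it with the serial the program
' expects; ExpectedSerial() would recompute that value elsewhere.
Private Function IsRegistered() As Boolean
    Dim storedKey As String
    storedKey = GetSetting("MyApp", "License", "KeyCode", "")
    IsRegistered = (storedKey = ExpectedSerial())
End Function

A check like this is easy to write, but, as discussed later, a single comparison is also easy to find and patch, so it should never be the only line of defense.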


Figure III.4 Cycle of Verifying the Registration

Intrusion and malicious software cost US industry and government more than ten billion dollars per year, and potential attacks on critical infrastructure remain a serious concern. New automatic attack triggers require no human action to deliver destructive payloads. Security incidents reported to the CERT Coordination Center rose 2,099 percent from 1998 through 2002, an average annual compounded rate of 116 percent. During 2003 the total was 137,529 incidents, up from 82,094 in 2002. An incident may involve one to hundreds (or even thousands) of sites and ongoing activity for long periods. These incidents resulted from vulnerabilities; the CERT/CC also tracks the yearly number of vulnerabilities reported to it. These can impact the critical infrastructure of the US as well as its commerce and security.


III.2.2 Encryption
In cryptography, encryption is the process of obscuring information to make it unreadable without special knowledge, sometimes referred to as scrambling. Encryption has been used to protect communications for centuries, but for most of that time only organizations and individuals with extraordinary privacy or secrecy requirements bothered to exert the effort required to implement it. In the mid-1970s, strong encryption emerged from the preserve of secretive government agencies into the public domain, and it is now used to protect many kinds of systems, such as Internet e-commerce, mobile telephone networks and bank automatic teller machines. Encryption can be used to ensure secrecy and/or privacy, but other techniques are still needed to make communications secure, particularly to verify the integrity and authenticity of a message; for example, a message authentication code (MAC) or digital signatures. Another consideration is protection against traffic analysis. Encryption and software code obfuscation are also used in software copy protection against reverse engineering, unauthorized application analysis, cracking and software piracy, and are implemented in various encryption and obfuscation tools.

III.2.3 Cipher
In cryptography, a cipher (or cypher) is an algorithm for performing encryption and decryption: a series of well-defined steps that can be followed as a procedure. An alternative term is encipherment. In most cases, the process is varied depending on a key, which changes the detailed operation of the algorithm. In non-technical usage, a "cipher" is the same thing as a "code"; however, the concepts are distinct in cryptography. In classical cryptography, ciphers were distinguished
from codes. Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, UQJHSE could be the code for "Proceed to the following coordinates". The original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it; it should resemble random gibberish to those not intended to read it. The operation of a cipher usually depends on a piece of auxiliary information, called a key or, in traditional NSA parlance, a cryptovariable. The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext. Most modern ciphers can be categorized in several ways:
1. By whether they work on blocks of symbols, usually of a fixed size (block ciphers), or on a continuous stream of symbols (stream ciphers).
2. By whether the same key is used for both encryption and decryption (symmetric key algorithms), or a different key is used for each (asymmetric key algorithms). If the algorithm is symmetric, the key must be known to the recipient and to no one else. If the algorithm is asymmetric, the enciphering key is different from, but closely related to, the deciphering key. If one key cannot be deduced from the other, the asymmetric key algorithm has the public/private key property, and one of the keys may be made public without loss of confidentiality.
The Feistel cipher uses a combination of substitution and transposition techniques; most block cipher algorithms are based on this structure.


There are a variety of different types of encryption. Algorithms used earlier in the history of cryptography are substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys. Historical pen-and-paper ciphers are sometimes known as classical ciphers. They include simple substitution ciphers and transposition ciphers. For example, GOOD DOG can be encrypted as PLLX XLP, where L substitutes for O, P for G, and X for D in the message. Transposition of the letters of GOOD DOG can result in DGOGDOO. These simple ciphers are easy to crack, even without plaintext-ciphertext pairs. Simple ciphers were replaced by polyalphabetic substitution ciphers, which changed the substitution alphabet for every letter. For example, GOOD DOG can be encrypted as PLSX TWF, where L, S, and W substitute for O. With even a small amount of known plaintext, polyalphabetic substitution ciphers and letter transposition ciphers designed for pen-and-paper encryption are easy to crack. During the early twentieth century, electro-mechanical machines were invented to do encryption and decryption using a combination of transposition, polyalphabetic substitution, and "additive" substitution. In rotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided transposition. Keys were easily changed by changing the rotor disks and the plugboard wires. Although these encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the British Bombe were invented to crack these encryption methods. Modern encryption methods can be divided into symmetric key algorithms (private-key cryptography) and asymmetric key algorithms (public-key cryptography). In a symmetric key algorithm (e.g. DES or AES), the sender and receiver must have a shared key set up in advance and kept secret from all other
parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. In an asymmetric key algorithm (e.g. RSA), there are two separate keys: a public key is published and enables any sender to perform encryption, while a private key is kept secret by the receiver and enables only him to perform decryption. Symmetric key ciphers can be divided into two types, depending on whether they work on blocks of symbols of fixed size (block ciphers) or on a continuous stream of symbols (stream ciphers).
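
As an illustration of the symmetric idea, the toy routine below applies a repeating-key XOR to a string, so the very same call with the same key both encrypts and decrypts. It is only a sketch of the principle; a real application should use an established algorithm such as AES rather than anything home-made:

' Toy symmetric "stream" cipher: XOR each character with a repeating key.
' Applying the function twice with the same key restores the original text.
Private Function XorCrypt(ByVal text As String, ByVal key As String) As String
    Dim i As Long
    Dim result As String
    For i = 1 To Len(text)
        result = result & Chr$(Asc(Mid$(text, i, 1)) Xor _
                               Asc(Mid$(key, (i - 1) Mod Len(key) + 1, 1)))
    Next i
    XorCrypt = result
End Function

For example, XorCrypt(XorCrypt("GOOD DOG", "KEY"), "KEY") returns "GOOD DOG" again, which is exactly the symmetric-key property described above.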

Figure III.5 The Ciphers Scheme

III.3 Protector and Cracking Tools


We can protect our applications using many tools, such as protectors, packers, compressors, and encryption; on the other side, software can be debugged and patched using debuggers and patchers. Some examples:

Protectors
1. Armadillo
2. ASProtect


Debuggers and Disassemblers
1. SoftIce
2. OllyDebugger
3. Wdasm
4. VbDebug
5. P32Dasm

Hex Editors
1. UltraEdit
2. WinHex


CHAPTER IV
SUMMARY

IV.1 Conclusion


Many applications are secured by a variety of techniques, but in the end nothing is impossible for a cracker or reverser; there is nothing that cannot be broken. As programmers, we can only minimize the bugs and keep studying the holes and flaws, so the application must be patched and kept up to date. Do not expect the protection to be perfect; just make it the best you can.

IV.2 Suggestion
So far, many applications can be broken by reversers, but we can reduce the damage by being careful with the code. Consider the fragments below:

Private Function GenSerial() As String
    ' -= calculated =-
    ' <isiserialnumber> stands for the serial value computed here
    GenSerial = <isiserialnumber>
End Function

or

Private Sub CheckSerial()
    If serialnumberOK Then
        tipe = "REGISTERED"
    Else
        tipe = "UNREGISTERED"
    End If
End Sub

Code like this is a very easy way in for a cracker: a single returned string or a single flag decides whether the program is registered. Avoid such constructions. Spread the decision over many checks and code paths, and always use a protector as well.
