
Event Management System

A PROJECT REPORT

Submitted by

SANJAY BITRA

SALMAN AHAMED.J

SYED MUHAMMED DHANISH.M.R

in partial fulfilment for the award of the

degree of

BACHELOR OF TECHNOLOGY

IN

INFORMATION TECHNOLOGY

ANNA UNIVERSITY : CHENNAI 600 025

MARCH 2022

ANNA UNIVERSITY : CHENNAI 600 025

BONAFIDE CERTIFICATE

Certified that this project report "Event Management System" is the bonafide work of "SANJAY BITRA, SALMAN AHAMED.J, SYED MUHAMMED DHANISH.M.R", who carried out the project work under my supervision.

SIGNATURE
DR.M.AMANULLAH
HEAD OF THE DEPARTMENT
PROFESSOR
Department of Information Technology
Aalim Muhammed Salegh College of Engineering
Avadi-I.A.F, Muthapudupet
Chennai-600 055

SIGNATURE
R.LAVANYA, AP/IT
SUPERVISOR
ASSISTANT PROFESSOR
Department of Information Technology
Aalim Muhammed Salegh College of Engineering
Avadi-I.A.F, Muthapudupet
Chennai-600 055

CERTIFICATE OF EVALUATION

COLLEGE NAME  : AALIM MUHAMMED SALEGH COLLEGE OF ENGINEERING
BRANCH        : INFORMATION TECHNOLOGY
PROJECT TITLE : EVENT MANAGEMENT SYSTEM

NAME OF THE STUDENTS         REGISTRATION NUMBER    NAME OF THE SUPERVISOR

SANJAY BITRA                 110118205013
SALMAN AHAMED.J              110118205011           R.LAVANYA, AP/IT
SYED MUHAMMED DHANISH.M.R    110118205015

The report of this project, submitted by the above students in partial fulfilment of the award of Bachelor of Technology in Information Technology of Anna University, has been evaluated and confirmed as a report of the work done by the above students during the academic year 2018-2022.

This project report is submitted for the Anna University project viva-voce held on

INTERNAL EXAMINER EXTERNAL EXAMINER


ACKNOWLEDGEMENT

First and foremost, we would like to thank God, the Almighty Allah, who is our refuge and strength. We would like to express our heartfelt thanks to our beloved parents, who sacrificed their presence for our better future.

We are very much indebted to our college Founder Alhaj. Dr. S.M.SHAIK NURDDIN and Chairperson JANABA ALHAJIYANI M.S. HABIBUNNISA, Aalim Muhammed Salegh Group of Educational Institutions, and to the Honourable Secretary & Correspondent JANAB ALHAJI S.SEGU JAMALUDEEN, Aalim Muhammed Salegh Group of Educational Institutions, for providing the necessary facilities all through the course.

We take this opportunity to put forth our deep sense of gratitude to our beloved
Principal Prof. Dr. M. AFZAL ALI BAIG for granting permission to
undertake the project.

We also express our gratitude to Dr. M.AMANULLAH, Head of the Department, Information Technology, Aalim Muhammed Salegh College of Engineering, Chennai, for his involvement, constant support and valuable guidance in effectively completing our project in our college.

We would also like to express our sincere thanks to our project guide Ms. R.LAVANYA, Assistant Professor, Information Technology, Aalim Muhammed Salegh College of Engineering, who persuaded us to take up this project and never ceased to lend her encouragement and support.

Finally, we take immense pleasure in thanking our family members, faculty and friends for their constant support and encouragement in doing our project.
ABSTRACT

Agricultural research is a vast and important field that has strengthened optimized economic profit and benefits. Agriculture has great scope in the future, but most people lack the knowledge to choose the right crop to cultivate on their land, which leads to loss. In our proposed system, we implement an application to identify the type of soil, the water source of the land (whether it is based on rain or bore water), and the suitable crop for that soil. Thus we provide a solution for people to do better agriculture through this application.
TABLE OF CONTENTS

CHAPTER NO   TITLE

             ABSTRACT
             LIST OF FIGURES
             LIST OF SYMBOLS
             LIST OF ABBREVIATIONS

1            INTRODUCTION
             1.1 PROJECT INTRODUCTION
             1.2 PROBLEM DEFINITION
             1.3 DATA MINING
             1.4 BIG DATA

2            SYSTEM STUDY
             2.1 FEASIBILITY STUDY
             2.2 ECONOMICAL FEASIBILITY
             2.3 TECHNICAL FEASIBILITY
             2.4 OPERATIONAL FEASIBILITY

3            LITERATURE SURVEY
             3.1 PAPER 1
             3.2 PAPER 2
             3.3 PAPER 3
             3.4 PAPER 4

4            SYSTEM ANALYSIS
             4.1 EXISTING SYSTEM
             4.2 PROPOSED SYSTEM

5            REQUIREMENTS AND SPECIFICATION
             5.1 SYSTEM REQUIREMENTS
             5.1.1 HARDWARE REQUIREMENTS
             5.1.2 SOFTWARE REQUIREMENTS

6            SOFTWARE DESCRIPTION
             6.1 JAVA JDK
             6.2 MYSQL
             6.3 HADOOP

7            PROJECT DESCRIPTION
             7.1 AIM OF THE PROJECT
             7.2 USE CASE DIAGRAMS
             7.3 CLASS DIAGRAM
             7.4 SEQUENCE DIAGRAM
             7.5 COLLABORATION DIAGRAM
             7.6 ACTIVITY DIAGRAM

8            SYSTEM IMPLEMENTATION
             MODULES LIST
             MODULES DESCRIPTION

9            TESTING CODING STANDARDS
             9.1 NAMING CONVENTION
             9.1.2 SCRIPT WRITING COMMENTING
             9.1.3 MESSAGE BOX FORMAT
             9.2 TESTING PROCEDURE
             9.3 TEST DATA AND OUTPUT
             9.3.1 UNIT TESTING
             9.3.2 FUNCTIONAL TESTING
             9.3.3 PERFORMANCE TESTING
             9.3.4 STRESS TEST
             9.3.5 STRUCTURED TEST
             9.3.6 INTEGRATION TESTING
             9.3.7 TESTING STRATEGIES

10           CONCLUSION
             10.1 CONCLUSION
             10.2 FUTURE ENHANCEMENT

11           APPENDIX 1
12           APPENDIX 2
13           REFERENCE
LIST OF FIGURES

FIG. NO      TITLE

1.3          DATA PROCESS OVERVIEW
1.5          FOUR Vs OF BIG DATA
6.1          SOFTWARE DEVELOPMENT PROCESS
             ARCHITECTURE DIAGRAM
             WORKFLOW DIAGRAM
             USE CASE DIAGRAM
             CLASS DIAGRAM
             SEQUENCE DIAGRAM
             COLLABORATION DIAGRAM
             ACTIVITY DIAGRAM
LIST OF SYMBOLS

SYMBOL    NAME              MEANING/DEFINITION

==        EQUAL SIGNS       EQUALITY
()        PARENTHESES       OPERATOR PRECEDENCE
[]        BRACKETS          ARRAY CONSTRUCTION
+         PLUS SIGN         ADDITION
-         MINUS SIGN        SUBTRACTION
*         MTIMES            MATRIX MULTIPLICATION
,         COMMA             SEPARATOR
;         SEMICOLON         SUPPRESS OUTPUT OF THE CODE LINE
:         COLON             FOR LOOP ITERATION
" "       DOUBLE QUOTES     STRING CONSTRUCTOR
' '       SINGLE QUOTES     CHARACTER ARRAY CONSTRUCTOR
=         EQUAL SIGN        ASSIGNMENT
LIST OF ABBREVIATIONS

ARMA   - Auto-Regressive Moving Average
SARIMA - Seasonal Auto-Regressive Integrated Moving Average
ARMAX  - ARMA with exogenous variables
JDK    - Java Development Kit
MYSQL  - My Structured Query Language
IDE    - Integrated Development Environment
UI     - User Interface
SVM    - Support Vector Machine
CHAPTER 1

INTRODUCTION

PROJECT INTRODUCTION
Agricultural advancement has strengthened optimized economic growth globally. It is a very vast and important field of industry with great potential for benefit. In the future, agriculture has the potential to be one of the most crucial fields for people. But today, many people who own land and want to start an agricultural project do not have the knowledge and awareness of the technicalities of crop cultivation or of market demands. As a result, most of them perform agriculture by cultivating crops on soil that is not suitable for that area.

In this system, we implement an application that identifies the type of soil and the water source of the land, whether it is based on rain or bore water, by performing analysis of the dataset. These data suggest what type of crop is suitable for that soil. Through this application we therefore provide a way for people to perform agriculture in a systematic manner.

PROBLEM DEFINITION
The major problem is that we could not achieve proper stability with this recommendation process. The recommendation problem is reduced to the problem of estimating ratings for the items that have not yet been seen by a user; this estimation is usually based on the other available ratings given by this and/or other users.
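As a minimal sketch of this rating-estimation idea, the following Java snippet estimates a rating a user has not given as the plain average of other users' ratings for the same item. The users, items and values are invented examples, and a real recommender would weight users by similarity rather than averaging uniformly.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of rating estimation: a rating a user has not given
// is estimated from the available ratings of other users for the same
// item (here, a plain average). All names and values are placeholders.
public class RatingEstimator {

    // ratings.get(user).get(item) = rating value
    private final Map<String, Map<String, Double>> ratings = new HashMap<>();

    public void rate(String user, String item, double value) {
        ratings.computeIfAbsent(user, u -> new HashMap<>()).put(item, value);
    }

    /** Estimate a missing (user, item) rating as the mean of other users' ratings. */
    public double estimate(String user, String item) {
        double sum = 0;
        int n = 0;
        for (Map.Entry<String, Map<String, Double>> e : ratings.entrySet()) {
            if (e.getKey().equals(user)) continue;   // skip the target user
            Double r = e.getValue().get(item);
            if (r != null) { sum += r; n++; }
        }
        return n == 0 ? 0.0 : sum / n;               // 0.0 when no data exists
    }

    public static void main(String[] args) {
        RatingEstimator est = new RatingEstimator();
        est.rate("u1", "rice", 4.0);
        est.rate("u2", "rice", 2.0);
        System.out.println(est.estimate("u3", "rice")); // 3.0
    }
}
```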
Fig 1.3. Data Process Overview

The above figure represents the application flow of this project. The user (farmer) gives input data to the user interface, such as the type of soil, type of water and weather conditions. These data are processed by the system, which is programmed based on machine learning concepts. Once the data is processed, the result is displayed to the user.

The best crop is recommended by the system according to the input obtained from the user. This allows the user to take an informed decision on which crops to cultivate on the land where he wants to do agriculture, given the facilities accessible in that location and his budget. This improves the yield and provides a regular market supply, helping the development of agricultural cultivation and business.

DATA MINING

Data mining is a vital area of the modern research world for processing, analysing and evaluating large datasets to identify associations, classifications, clustering, etc. between different attributes, and to predict the best results with relevant patterns. Significantly, these methods can be used in the field of agriculture and can produce extraordinarily significant benefits and predictions for commercial and scientific purposes. Traditionally, agricultural decision making is based on experts' judgments, and these judgments may not apply to classifying soil suitability and may lead to lower crop yield. Explicit dataset management by data mining techniques and algorithms has huge analytical potential for accurate and valid results, and can help to automate the classification process, depending on the predefined parameters developed by agricultural research centres.

Decision trees, the Naïve Bayes algorithm, rule-based classification, neural networks, Support Vector Machines (SVM), genetic algorithms, etc. are very well known algorithms for data classification and, further, for knowledge discovery.
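As an illustration of the rule-based classification mentioned above, the following sketch encodes a few hand-written soil/water rules in Java, the project's implementation language. The soil types, water sources and crops are invented placeholders, not values from the project's dataset; a real rule base would be derived from the predefined parameters of agricultural research centres.

```java
// Illustrative rule-based crop classifier in the spirit of the
// rule-based classification methods named above. All soil types,
// water sources and crops here are invented placeholders.
public class RuleBasedCropClassifier {

    /** Return a crop for the given soil type and water source, or "unknown". */
    public static String classify(String soilType, String waterSource) {
        if (soilType.equals("alluvial") && waterSource.equals("river")) return "rice";
        if (soilType.equals("black")) return "cotton";                  // any water source
        if (soilType.equals("red") && waterSource.equals("bore")) return "groundnut";
        if (soilType.equals("sandy") && waterSource.equals("rain")) return "millet";
        return "unknown";                                               // no rule fired
    }

    public static void main(String[] args) {
        System.out.println(classify("black", "rain")); // cotton
        System.out.println(classify("clay", "rain"));  // unknown
    }
}
```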

BIG DATA
Big data is an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process using traditional data processing applications. The challenges include analysis, capture, curation, search, sharing, storage, transfer, visualization, and privacy violations. Big data is characterized by the four-V definition, namely volume, variety, velocity, and value.
Fig 1.5. Four Vs of Big Data

Volume - the amount of all types of data generated from different sources, enabling the creation of hidden information and patterns through data analysis.

Variety - the types of data collected via sensors, smart phones, or social networks. Such data types include video, image, text, audio, and data logs, in either structured or unstructured format.

Velocity - the speed of data transfer. The contents of data constantly change because of the absorption of complementary data collections, the introduction of previously archived data or legacy collections, and streamed data arriving from multiple sources.

Value - the most important aspect of big data; it refers to the process of discovering huge hidden values from large datasets of various types and rapid generation.

Big data are classified into different categories to better understand their characteristics. This classification is important because of the large-scale data in the cloud. It is based on five aspects: (i) data sources, (ii) content format, (iii) data stores, (iv) data staging, and (v) data processing.
HADOOP
Hadoop is an open-source Apache Software Foundation project written in Java that enables the distributed processing of large datasets across clusters of commodity hardware. Hadoop has two primary components, namely HDFS and the MapReduce programming framework. The most significant feature of Hadoop is that HDFS and MapReduce are closely related to each other; they are co-deployed such that a single cluster is produced. Therefore, the storage system is not physically separated from the processing system.

HDFS is a distributed file system designed to run on top of the local file systems of the cluster nodes and to store extremely large files suitable for streaming data access. HDFS is highly fault tolerant and can scale from a single server to thousands of machines, each offering local computation and storage.

Map Reduce

MapReduce is a popular cloud computing framework that automatically runs scalable distributed applications and provides an interface that allows for parallelization and distributed computing in a cluster of servers. The approach is to map scientific computing problems onto the MapReduce framework, where scientists can efficiently utilize existing resources in the cloud to solve computationally large-scale scientific problems.
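The MapReduce data flow described above can be sketched in plain Java as a single-process simulation of word counting: a map phase emits (word, 1) pairs and a reduce phase sums them per key. Real Hadoop code would instead extend org.apache.hadoop.mapreduce.Mapper and Reducer and run distributed over HDFS; this sketch only illustrates the idea.

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Single-process simulation of the MapReduce word-count data flow.
public class MapReduceSketch {

    public static Map<String, Integer> wordCount(List<String> lines) {
        // Map phase: tokenize each line into (word, 1) pairs.
        List<Map.Entry<String, Integer>> mapped = new ArrayList<>();
        for (String line : lines) {
            for (String word : line.split("\\s+")) {
                if (!word.isEmpty()) {
                    mapped.add(new AbstractMap.SimpleEntry<>(word, 1));
                }
            }
        }
        // Shuffle + reduce phase: group the pairs by key and sum the values.
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, Integer> pair : mapped) {
            counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("rice wheat rice", "wheat maize");
        System.out.println(wordCount(input).get("rice")); // 2
    }
}
```

In Hadoop the shuffle step and the per-key reduction run on different nodes; collapsing them into two in-memory loops keeps the example self-contained.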
CHAPTER 2
SYSTEM STUDY

FEASIBILITY STUDY
The feasibility of the project is analysed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget; this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements.
OPERATIONAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.
CHAPTER 3

LITERATURE SURVEY

PAPER 1:

IMPACTS OF POPULATION GROWTH, ECONOMIC DEVELOPMENT AND TECHNICAL CHANGE ON GLOBAL FOOD PRODUCTION AND CONSUMPTION

U.A. Schneider et al., Agricultural Systems 104 (2011) 204–215 (24 Dec. 2010)
- Uwe A. Schneider, Petr Havlik, Erwin Schmid, Hugo Valin, Aline Mosnier, Michael Obersteiner

Over the next decades mankind will demand more food from fewer land and water resources. This study quantifies the food production impacts of four alternative development scenarios from the Millennium Ecosystem Assessment and the Special Report on Emission Scenarios. Partially and jointly considered are land and water supply impacts from population growth and technical change, as well as forest and agricultural commodity demand shifts from population growth and economic development. The income impacts on food demand are computed with dynamic elasticities. Simulations with a global, partial equilibrium model of the agricultural and forest sectors show that per capita food levels increase in all examined development scenarios with minor impacts on food prices. Global agricultural land increases by up to 14% between 2010 and 2030.


PAPER 2:

BRIEF HISTORY OF AGRICULTURAL SYSTEMS MODELLING

Jones, J.W., et al., Brief history of agricultural systems modeling, Agricultural Systems (2016) (20 May 2016)
- James W. Jones, John M. Antle, Bruno O. Basso, Kenneth J. Boote

Agricultural systems science generates knowledge that allows researchers to consider complex problems or take informed agricultural decisions. The rich history of this science exemplifies the diversity of systems and scales over which they operate and have been studied. Modeling, an essential tool in agricultural systems science, has been accomplished by scientists from a wide range of disciplines, who have contributed concepts and tools over more than six decades. As agricultural scientists now consider the "next generation" models, data, and knowledge products needed to meet the increasingly complex systems problems faced by society, it is important to take stock of this history and its lessons to ensure that we avoid re-invention and strive to consider all dimensions of associated challenges.


PAPER 3:

A SURVEY ON DATA MINING AND PATTERN RECOGNITION TECHNIQUES FOR SOIL DATA MINING

IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 3, No. 1 (May 2011)
- Dr. D. Ashok Kumar, N. Kannathasan

Data mining has emerged as one of the major research domains in recent decades, in order to extract implicit and useful knowledge. This knowledge can be comprehended by humans easily. Initially, this knowledge extraction was computed and evaluated manually using statistical techniques. Subsequently, semi-automated data mining techniques emerged because of advancements in technology. Such advancement was also in the form of storage, which increased the demands of analysis. In such cases, semi-automated techniques have become inefficient. Therefore, automated data mining techniques were introduced to synthesize knowledge efficiently. A survey of the available literature on data mining and pattern recognition for soil data mining is presented in this paper. Data mining on agricultural soil datasets is a relatively novel research field.


PAPER 4:

PREDICTING FARMER UPTAKE OF NEW AGRICULTURAL PRACTICES

Agricultural Systems 156 (2017) 115–125
- Kuehne et al.

There is much existing knowledge about the factors that influence the adoption of new practices in agriculture, but few attempts have been made to construct predictive quantitative models of adoption for use by those planning agricultural research, development, extension and policy. ADOPT (Adoption and Diffusion Outcome Prediction Tool) is the result of such an attempt, providing predictions of a practice's likely rate and peak level of adoption as well as estimating the importance of various factors influencing adoption. It employs a conceptual framework that incorporates a range of variables, including variables related to economics, risk, environmental outcomes, farmer networks, characteristics of the farm and the farmer, and the ease and convenience of the new practice. The ability to learn about the relative advantage of the practice, as influenced by characteristics of both the practice and the potential adopters, plays a central role.


CHAPTER 4

SYSTEM ANALYSIS

EXISTING SYSTEM:

 The existing system predicts crop yield based on temperature and rainfall using a Fuzzy Logic model.
 Three models, ARMA, SARIMA and ARMAX, are used for weather prediction.
 The three models are compared, and the best model is used to predict temperature and rainfall, which in turn are used to predict crop yield based on the Fuzzy Logic model.

DRAWBACKS:

 Takes a huge amount of time to analyze the datasets.
 Needs to collect more information such as soil type, sand type, location and water resource.

PROPOSED SYSTEM:

 We implement the application to identify the type of soil and the water source of the land, whether based on rain or bore water, and to suggest the suitable crop for that soil.
 Through this application we provide a solution for people to know more about agriculture.
 We predict which type of crop is suitable for that particular soil, weather condition, temperature and so on.
 We use machine learning with a set of datasets to identify the crop for the corresponding soil.

ADVANTAGES:

 People can easily learn about crops.
 People get benefits through this application.
 Educated people can easily understand soil and crops.
 We can increase income by cultivating the right crop on the land based on the weather conditions of a particular location.
CHAPTER 5

REQUIREMENTS AND SPECIFICATION

SYSTEM REQUIREMENTS :

HARDWARE REQUIREMENTS:

 The hardware requirements may serve as the basis for a contract for the implementation of the system and should therefore be a complete and consistent specification of the whole system.
 They are used by software engineers as the starting point for the system design.
 They show what the system should do, not how it should be implemented.
 Processor : Core i3/i5/i7
 RAM : 2-4 GB
 HDD : 500 GB

SOFTWARE REQUIREMENTS :

⚫ The software requirements are the specification of the system.
⚫ They should include both a definition and a specification of requirements.
⚫ They are a set of what the system should do rather than how it should do it.
⚫ The software requirements provide a basis for creating the software requirements specification.
⚫ They are useful in estimating cost, planning team activities, performing tasks and tracking the team's progress throughout the development activity.

 Platform : Windows XP/7/8
 Front End : Java JDK 1.7
 Back End : MySQL
 Tool : Hadoop
CHAPTER 6

SOFTWARE DESCRIPTION

JAVA-JDK1.7 :

 The Java platform is ideal for network computing, running across all platforms from servers to cell phones to smart cards.
 The Java platform benefits from a massive community of developers and supporters who actively work on delivering Java technology-based products and services.
 The fact is, today you can find Java technology just about everywhere!

Fig 6.1. Software Development Process

JAVA DEVELOPMENT KIT:


 The Java Development Kit is an implementation of either the Java Platform Standard Edition, Enterprise Edition or Micro Edition, released by Oracle Corporation in the form of a binary product aimed at Java developers on Solaris, Linux, macOS or Windows.
 The JDK includes a private JVM and a few other resources needed to finish the development of a Java application.
 Since the introduction of the Java platform, it has been by far the most widely used Software Development Kit (SDK).
 The Java Development Kit (JDK) is a software development environment used for developing Java applications and applets.
 The JDK includes the Java Runtime Environment (JRE), an interpreter/loader (java), a compiler (javac), an archiver (jar), a documentation generator (javadoc) and other tools needed in Java development.
 The JDK provides the environment to develop and execute Java programs.
 The JDK is a kit which includes two things: development tools and the JRE.

APPLICATION :

 Servlets
 Java Server Pages(JSPs)
 Utility Classes
 Static documents, including HTML, images, JavaScript libraries, Cascading Style Sheets, and so on
 Client-side classes
 Meta-information describing the web application

MYSQL :

Definition :

 MySQL, the most popular open-source SQL database management system, is developed, distributed, and supported by Oracle Corporation.
 The MySQL web site provides the latest information about the MySQL software.
 The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL database server.
 The MySQL software is dual licensed.
 Users can choose to use the MySQL software as an open-source product under the terms of the General Public License (GPL) or can purchase a standard commercial license from MySQL.

MySQL OVERVIEW :

 MySQL is a database management system.
 MySQL is a relational database management system.
 MySQL software is open source.
 MySQL Server works in client/server or embedded systems.
 A large amount of contributed MySQL software is available.

MySQL ARCHITECTURE :

 The architecture consists of three layers.

 Application Layer – contains common network services for connection handling, authentication and security. This layer is where different clients interact with MySQL. These clients can be written against different APIs: .NET, Java, C, C++, Python, Ruby, Tcl, Eiffel, etc.
 Logical Layer – where the MySQL intelligence resides; it includes functionality for query parsing, analysis, caching and all built-in functions. This layer also provides functionality common across storage engines.
 Physical Layer – responsible for storing and retrieving all data stored in MySQL. Associated with this layer are storage engines, with which MySQL interacts through very basic standard APIs. Each storage engine has its strengths and weaknesses; some of these engines are MyISAM, InnoDB, CSV, NDB Cluster, Falcon, etc.

HADOOP :

DEFINITION :

 Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware.
 It provides massive storage for any kind of data, enormous processing power and the ability to handle virtually limitless concurrent tasks or jobs.
 Hadoop provides HDFS, MapReduce and YARN.

IMPORTANCE OF HADOOP :

 Ability to store and process huge amounts of any kind of data, quickly
 Computing power
 Fault tolerance
 Flexibility
 Low cost
 Scalability
 High availability
 Economic
 Easy to use
 Data Locality
 Distributed processing
 Reliability
CHAPTER 7

PROJECT DESCRIPTION

AIM OF THE PROJECT

 This project predicts the crop for cultivation based on soil, weather conditions and water resources.

 To increase the yield and income.

 To make agriculture even smarter and better.

ARCHITECTURE DIAGRAM
WORKFLOW DIAGRAM

UML DIAGRAM

USE CASE DIAGRAM

[Use case diagram: the actors (admin, user, server) perform registration, feed input to the training set, give new input, compare with the existing dataset, and predict the crop.]
CLASS DIAGRAM

SEQUENCE DIAGRAM

[Sequence diagram with participants admin, training set, system server, user input and crop prediction: the admin feeds the input to the training set; the dataset is maintained on the system server; the user gives the soil name; the input is compared with the training set; the result is returned to the UI; the crops to cultivate on the land are identified; the best crop is suggested.]

COLLABORATION DIAGRAM

[Collaboration diagram with objects admin, system server, training set, user and crop prediction: 1: admin feeds the input to the training set; 2: dataset is maintained on the system server; 3: user gives the soil name; 4: compare with the training set; 5: return result to the UI; 6: identify the crops to cultivate on the land; 7: suggest the best crop.]

ACTIVITY DIAGRAM

[Activity diagram: the admin provides input for the training set; the user supplies input; the system compares the input with the existing dataset and suggests the best crop.]

CHAPTER 8

SYSTEM IMPLEMENTATION

MODULES LIST
1. User interface design

2. Dataset comparison

3. Soil estimation

4. Water source and weather analysis

5. Best crop recommendation

MODULES DESCRIPTION

User Interface Design:

 In this module the user identifies the soil type on the UI.

 To develop our application we use NetBeans as the IDE and MySQL as the back end.

 All inputs and outputs are passed through this IDE only.

Maintaining Training Dataset:

 The server monitors the entire dataset information in its database and verifies it if required.

 The server also stores the entire information in its database. Here the server has to establish the connection to communicate with the users.

 The server updates each soil and input detail in its database.

 The soil and crop datasets are the main input of the user. Based on these, the system compares and predicts the best crop for the user.

Soil Estimation:

 In this module, the soil type is analyzed. Soil type usually refers to the different sizes of mineral particles in a particular sample.

 Soil is made up in part of finely ground rock; the hard surface beneath is called hard strata. Soil particles are grouped according to size as sand and silt, in addition to clay and organic material such as decomposed plant matter.

 We have to feed the different types of soil and their features into the dataset.

Water Source And Weather Analysis:

 This module gathers information about the water and temperature of the land in a particular area.

 The best crop to cultivate depends highly on the weather conditions and the water facility.

 The water source of the land depends on wells or rainfall. Through this we can easily predict the crop type.

Best Crop Suggestion:

 In this module the system compares the new input with the existing training set data.

 It generates a new set of output for the given input; the user gets the output based on the input.

 If the user gives a soil as input, the output is the type of crop to be cultivated on that land.

 If they give a crop name, the output is the suitable soil.

 The output shows in which soil those crops will grow.
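A minimal sketch of the two-way lookup this module describes, assuming exact matches against an in-memory table. The soil-crop pairs below are invented placeholders; the actual system compares the input against its training dataset rather than a hard-coded map.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the suggestion module's comparison step as a two-way
// lookup table: soil -> crop and crop -> soil. All pairs are invented.
public class CropSuggestion {
    private final Map<String, String> soilToCrop = new HashMap<>();
    private final Map<String, String> cropToSoil = new HashMap<>();

    /** Store one soil-crop pair in both lookup directions. */
    public void addRecord(String soil, String crop) {
        soilToCrop.put(soil, crop);
        cropToSoil.put(crop, soil);
    }

    public String cropFor(String soil) { return soilToCrop.getOrDefault(soil, "no match"); }
    public String soilFor(String crop) { return cropToSoil.getOrDefault(crop, "no match"); }

    public static void main(String[] args) {
        CropSuggestion s = new CropSuggestion();
        s.addRecord("black", "cotton");
        s.addRecord("alluvial", "rice");
        System.out.println(s.cropFor("black")); // cotton
        System.out.println(s.soilFor("rice"));  // alluvial
    }
}
```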

ALGORITHM EXPLANATION

Support Vector Machine

 Support Vector Machines (SVMs) are a set of related supervised learning methods used for classification and regression.

 This algorithm is applied here to predict the right crop for the given soil of the location.

 We give input to the training set to obtain sample output.

 We keep testing the training set until we receive the intended output.
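The SVM idea above can be illustrated with a minimal linear SVM trained by stochastic sub-gradient descent on the hinge loss (a Pegasos-style update). The toy 2-D points stand in for numeric soil and weather features; the actual project would train on its own dataset, and a production system would normally use an established SVM library rather than this sketch.

```java
// Minimal linear SVM sketch: hinge-loss stochastic sub-gradient descent.
public class LinearSvmSketch {
    double[] w = new double[2]; // weight vector for 2-D features
    double b = 0.0;             // bias term

    void train(double[][] X, int[] y, int epochs, double eta, double lambda) {
        for (int e = 0; e < epochs; e++) {
            for (int i = 0; i < X.length; i++) {
                double margin = y[i] * (w[0] * X[i][0] + w[1] * X[i][1] + b);
                // L2 regularization shrinks w on every step
                w[0] -= eta * lambda * w[0];
                w[1] -= eta * lambda * w[1];
                if (margin < 1) { // point inside the margin: push the boundary
                    w[0] += eta * y[i] * X[i][0];
                    w[1] += eta * y[i] * X[i][1];
                    b    += eta * y[i];
                }
            }
        }
    }

    int predict(double[] x) {
        return (w[0] * x[0] + w[1] * x[1] + b) >= 0 ? 1 : -1;
    }

    public static void main(String[] args) {
        // Toy, linearly separable data with labels +1 / -1.
        double[][] X = {{2, 2}, {3, 1}, {2, 3}, {-1, -1}, {-2, 0}, {0, -2}};
        int[] y = {1, 1, 1, -1, -1, -1};
        LinearSvmSketch svm = new LinearSvmSketch();
        svm.train(X, y, 200, 0.1, 0.01);
        System.out.println(svm.predict(new double[]{2, 2}));   // 1
        System.out.println(svm.predict(new double[]{-1, -1})); // -1
    }
}
```

This captures only the linear, two-class case; kernels and multi-class handling, which a crop classifier over many soil types would need, are left to a proper library.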
CHAPTER 9

TESTING CODING STANDARDS

Coding standards are guidelines for programming that focus on the physical structure and appearance of the program. They make the code easier to read, understand and maintain. This phase of the system actually implements the blueprint developed during the design phase. The coding specification should be such that any programmer is able to understand the code and can make changes whenever necessary. Some of the standards needed to achieve the above-mentioned objectives are as follows:

• Program should be simple, clear and easy to understand.
• Naming conventions
• Value conventions
• Script and comment procedure

NAMING CONVENTIONS

Naming conventions for classes, data members, member functions, procedures, etc. should be self-descriptive. One should be able to get the meaning and scope of a variable from its name. The conventions are adopted for easy understanding of the intended message by the user, so it is customary to follow them. These conventions are as follows:
VALUE CONVENTIONS

Value conventions ensure proper values for variables at any point of time. This involves the following:

• Proper default values for the variables.
• Proper validation of values in the field.

SCRIPT WRITING AND COMMENTING

Script writing is an art in which indentation is of utmost importance. Conditional and looping statements are to be properly aligned to facilitate easy understanding. Comments are included to minimize the number of surprises that could occur when going through the code.

MESSAGE BOX FORMAT

When something has to be prompted to the user, he must be able to understand it properly. To achieve this, a specific format has been adopted for displaying messages to the user. The formats are as follows:

• X – User has performed an illegal operation.
• ! – Information to the user.

TEST PROCEDURE SYSTEM TESTING

Testing is performed to identify errors and is used for quality assurance. It is an integral part of the entire development and maintenance process. The goal of testing during this phase is to verify that the specification has been accurately and completely incorporated into the design, as well as to ensure the correctness of the design itself. For example, any logic fault in the design must be detected before coding commences; otherwise the cost of fixing it later will be considerably higher. Detection of design faults can be achieved by means of inspections as well as walkthroughs.

Testing is one of the important steps in the software development phase. Testing checks for errors; taken as a whole, project testing involves the following test cases:

• Static analysis is used to investigate the structural properties of the source code.

• Dynamic testing is used to investigate the behavior of the source code by executing the program on the test data.
TEST DATA AND OUTPUT:

UNIT TESTING

Unit testing is conducted to verify the functional performance of each modular component of the software. It focuses on the smallest unit of the software design, i.e. the module. White-box testing techniques were heavily employed for unit testing.
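As a hypothetical illustration of a unit test for one small module function (plain Java, no test framework; the yield calculation shown is an assumed example, not the project's actual module):

```java
// Hypothetical unit test for one small "unit": a yield-per-acre calculator
// such as a crop prediction module might expose (assumed, not project code).
public class UnitTestDemo {
    static double yieldPerAcre(double totalYield, double acres) {
        if (acres <= 0) throw new IllegalArgumentException("acres must be positive");
        return totalYield / acres;
    }

    public static void main(String[] args) {
        // Nominal case: known input, known expected result.
        if (yieldPerAcre(100.0, 4.0) != 25.0) throw new AssertionError("nominal case failed");
        // Error case: invalid input must be rejected.
        try {
            yieldPerAcre(100.0, 0.0);
            throw new AssertionError("expected rejection of zero acres");
        } catch (IllegalArgumentException expected) { /* pass */ }
        System.out.println("all unit checks passed");
    }
}
```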

FUNCTIONAL TEST

Functional test cases involved exercising the code with nominal input
values for which the expected results are known, as well as boundary values and
special values, such as logically related inputs, files of identical elements, and
empty files.

Functional testing comprises three types of tests:

• Performance Test

• Stress Test

• Structure Test

PERFORMANCE TEST
It determines the amount of execution time spent in various parts of the unit, the program throughput, the response time, and the device utilization of the program unit.
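A minimal sketch of such a measurement, timing a unit with System.nanoTime() (the workload function here is hypothetical):

```java
// Hypothetical micro performance check: measure a unit's execution time.
public class PerformanceTestDemo {
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i;  // stand-in workload
        return sum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = work(1_000_000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Report both the result (to keep the work observable) and the timing.
        System.out.println("result=" + result + " elapsedMs=" + elapsedMs);
    }
}
```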

STRESS TEST

Stress tests are those designed to intentionally break the unit. A great deal can be learned about the strengths and limitations of a program by examining the manner in which a program unit breaks.

STRUCTURED TEST

Structure tests are concerned with exercising the internal logic of a program and traversing particular execution paths. A white-box test strategy was employed to guarantee that:

• All independent paths within a module have been exercised at least once.

• All logical decisions are exercised on their true and false sides.

• All loops execute at their boundaries and within their operational bounds.

• Internal data structures are exercised to assure their validity.

• Attributes are checked for correctness.

• End-of-file conditions, I/O errors, buffer problems and textual errors in output information are handled.
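The path-coverage goal above can be illustrated with a minimal hypothetical sketch: two inputs drive a single decision down both its true and false sides (the rainfall threshold is an assumed example, not from the project):

```java
// Hypothetical white-box target: the two calls in main() together cover
// both sides of the decision, i.e. all independent paths through the unit.
public class BranchCoverageDemo {
    static String classifyRainfall(double mm) {
        if (mm < 500) {          // path 1: true side
            return "LOW";
        } else {                 // path 2: false side
            return "ADEQUATE";
        }
    }

    public static void main(String[] args) {
        System.out.println(classifyRainfall(300));  // exercises the true side
        System.out.println(classifyRainfall(800));  // exercises the false side
    }
}
```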
INTEGRATION TESTING

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing; i.e., integration testing is the complete testing of the set of modules which makes up the product. The objective is to take unit-tested modules and build a program structure; the tester should identify critical modules and test them as early as possible. One approach is to wait until all the units have passed testing, then combine them and test them together; this approach evolved from the unstructured testing of small programs. Another strategy is to construct the product in increments of tested units: a small set of modules is integrated and tested, another module is added and tested in combination, and so on. The advantage of this approach is that interface discrepancies can be easily found and corrected.

The major error faced during the project was a linking error: when all the modules were combined, the links to the supporting files were not set properly, so we checked the interconnections and the links. Errors are thus localized to the new module and its intercommunications. The product development can be staged, with modules integrated as they complete unit testing. Testing is completed when the last module is integrated and tested.
TESTING TECHNIQUES / TESTING STRATEGIES

 TESTING:

Testing is a process of executing a program with the intent of finding an error. A good test case is one that has a high probability of finding an as-yet-undiscovered error; a successful test is one that uncovers such an error. System testing is the stage of implementation which is aimed at ensuring that the system works accurately and efficiently as expected before live operation commences. It verifies that the whole set of programs hangs together. System testing consists of several key activities and steps covering program, string and system testing, and is important in adopting a successful new system. This is the last chance to detect and correct errors before the system is installed for user acceptance testing.

The software testing process commences once the program is created and the documentation and related data structures are designed. Software testing is essential for correcting errors; without it, the program or the project cannot be said to be complete. Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. Any engineering product can be tested in one of two ways.

 WHITE BOX TESTING

This testing is also called glass box testing. In white box testing, by knowing the internal operation of a product, tests can be conducted to ensure that "all gears mesh", that is, the internal operation performs according to specification and all internal components have been adequately exercised. It is a test case design method that uses the control structure of the procedural design to derive test cases. Basis path testing is a white box technique.

Basis path testing involves:

• Flow graph notation

• Deriving test cases

• Graph matrices

 BLACK BOX TESTING

In black box testing, by knowing the specified functions that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function. It fundamentally focuses on the functional requirements of the software.

The steps involved in black box test case design are:

• Graph based testing methods

• Boundary value analysis

• Comparison testing
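Boundary value analysis, for instance, probes the edges of a specified input range without looking at the implementation. A minimal sketch (the [60, 365] day range is an assumed specification, not from the project):

```java
// Hypothetical boundary-value sketch: a black-box tester exercises the values
// just inside and just outside the specified valid range.
public class BoundaryValueDemo {
    // Assumed spec: a crop duration must lie in [60, 365] days.
    static boolean isValidDuration(int days) {
        return days >= 60 && days <= 365;
    }

    public static void main(String[] args) {
        System.out.println(isValidDuration(59));   // just below minimum -> false
        System.out.println(isValidDuration(60));   // minimum -> true
        System.out.println(isValidDuration(365));  // maximum -> true
        System.out.println(isValidDuration(366));  // just above maximum -> false
    }
}
```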

 SOFTWARE TESTING STRATEGIES:

A software testing strategy provides a road map for the software developer. Testing is a set of activities that can be planned in advance and conducted systematically. For this reason a template for software testing (a set of steps into which specific test case design methods can be placed) is needed. A strategy should have the following characteristics:

• Testing begins at the module level and works "outward" toward the integration of the entire computer-based system.

• Different testing techniques are appropriate at different points in time.

• The developer of the software and an independent test group conduct the testing.

• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
 INTEGRATION TESTING:

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. Individual modules, which are highly prone to interface errors, should not be assumed to work instantly when we put them together. The problem, of course, is "putting them together", i.e. interfacing. Data may be lost across an interface; one module's sub-functions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; and global data structures can present problems.

 PROGRAM TESTING:

The logical and syntax errors have been pointed out by program testing. A syntax error is an error in a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or an omitted keyword are common syntax errors. These errors are shown through error messages generated by the compiler. A logic error, on the other hand, deals with incorrect data fields, out-of-range items and invalid combinations. Since the compiler will not detect logical errors, the programmer must examine the output. Condition testing exercises the logical conditions contained in a module. The possible types of elements in a condition include a Boolean operator, a Boolean variable, a pair of Boolean parentheses, a relational operator or an arithmetic expression. The condition testing method focuses on testing each condition in the program; its purpose is to detect not only errors in the conditions of a program but also other errors in the program.
 SECURITY TESTING:

Security testing attempts to verify that the protection mechanisms built into a system will, in fact, protect it from improper penetration. The system must be tested for invulnerability from frontal attack as well as from rear attack. During security testing, the tester plays the role of an individual who desires to penetrate the system.

 VALIDATION TESTING:

At the culmination of integration testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of software tests, validation testing, begins. Validation testing can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can reasonably be expected by the customer. Software validation is achieved through a series of black box tests that demonstrate conformity with requirements. After the validation tests have been conducted, one of two conditions exists.

 USER ACCEPTANCE TESTING:

User acceptance of the system is a key factor in the success of any system. The system under consideration was tested for user acceptance by constantly keeping in touch with prospective users during development and making changes whenever required.
CHAPTER 10

CONCLUSION

CONCLUSION
Thus we infer that, using machine learning, we have implemented a system to predict the crop and the yield for that crop. Through this app, farmers as well as the general public can gain greater advantage.

FUTURE ENHANCEMENT
In future, using Support Vector Machines (SVM) and machine learning, farmers and the related land owners can make an informed decision to predict a suitable crop that gives quality yield and good business.
APPENDIX 1

SAMPLE SOURCE CODE

User registration:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.nura.servlet;

import com.nura.db.dao.UserDetailsDAO;
import com.nura.db.entity.UserDetails;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
public class SaveUserDetailsController extends HttpServlet {

protected void processRequest(HttpServletRequest request, HttpServletResponse response)


throws ServletException, IOException {
response.setContentType("text/html;charset=UTF-8");
PrintWriter out = response.getWriter();
try {
String userid = request.getParameter("uname");
String pwd = request.getParameter("pwd");
String mailid = request.getParameter("mailid");
String mobno = request.getParameter("mobno");
String userType = request.getParameter("user_type");
UserDetails ud = new UserDetails();
ud.setUserName(userid);
ud.setPasswd(pwd);
ud.setMobNo(mobno);
ud.setMailid(mailid);
ud.setRoleType(userType);
//persisting the details in the db
boolean isSaved = new UserDetailsDAO().persistUserDetails(ud);
if (isSaved) {
out.print("User details updated in the db");
response.sendRedirect("loginPage.jsp");
}
} finally {
out.close();
}
}

// <editor-fold defaultstate="collapsed" desc="HttpServlet methods. Click on the + sign on the left to edit the code.">
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
processRequest(request, response);
}

@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
processRequest(request, response);
}
@Override
public String getServletInfo() {
return "Short description";
}// </editor-fold>

}
Validate user:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.nura.servlet;

import com.nura.db.dao.UserDetailsDAO;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

/**
*
* @author Arun
*/
public class ValidateUser extends HttpServlet {
/**
* Processes requests for both HTTP
* <code>GET</code> and
* <code>POST</code> methods.
*
* @param request servlet request
* @param response servlet response
* @throws ServletException if a servlet-specific error occurs
* @throws IOException if an I/O error occurs
*/
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
response.setContentType("text/html;charset=UTF-8");
PrintWriter out = response.getWriter();
HttpSession session = request.getSession();
try {
String uname = request.getParameter("uname");
String pwd = request.getParameter("pwd");
UserDetailsDAO uDAO = new UserDetailsDAO();
boolean isValid = uDAO.validateUser(uname, pwd);
String roleType = uDAO.getUstDtls(uname).get(0).getRoleType();
//response.sendRedirect("index.html");
if (isValid) {
response.sendRedirect("UserMenu.jsp");

} else {
response.sendRedirect("response.jsp?msg=Invalid User");
}
} finally {
out.close();
}
}

// <editor-fold defaultstate="collapsed" desc="HttpServlet methods. Click on the + sign on the left to edit the code.">
/**
* Handles the HTTP
* <code>GET</code> method.
*
* @param request servlet request
* @param response servlet response
* @throws ServletException if a servlet-specific error occurs
* @throws IOException if an I/O error occurs
*/
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
processRequest(request, response);
}

/**
* Handles the HTTP
* <code>POST</code> method.
*
* @param request servlet request
* @param response servlet response
* @throws ServletException if a servlet-specific error occurs
* @throws IOException if an I/O error occurs
*/
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
processRequest(request, response);
}

/**
* Returns a short description of the servlet.
*
* @return a String containing servlet description
*/
@Override
public String getServletInfo() {
return "Short description";
}// </editor-fold>

}
User Post:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.nura.servlet;

import com.nura.db.dao.UserPostDetailsDAO;
import com.nura.db.entity.UserPostDetails;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

/**
*
* @author Arun
*/
public class UserPost extends HttpServlet {

/**
* Processes requests for both HTTP
* <code>GET</code> and
* <code>POST</code> methods.
*
* @param request servlet request
* @param response servlet response
* @throws ServletException if a servlet-specific error occurs
* @throws IOException if an I/O error occurs
*/
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
response.setContentType("text/html;charset=UTF-8");
PrintWriter out = response.getWriter();
HttpSession session = request.getSession();
try {
/* TODO output your page here. You may use following sample code. */
UserPostDetails mDtls = new UserPostDetails();
mDtls.setduration(request.getParameter("days"));
mDtls.setmoney(request.getParameter("money"));
mDtls.setlocation(request.getParameter("state"));
mDtls.setDistrict(request.getParameter("district"));
mDtls.setStatus("WAITING");
UserPostDetailsDAO _usr=new UserPostDetailsDAO();
if( _usr.persistUserDetails(mDtls)){
response.sendRedirect("response.jsp?msg=" + ". Waiting for Hadoop Response");
}else{
response.sendRedirect("response.jsp?msg=" + ". Db Error");
}

//System.out.println("" + recMdDtls);
} finally {
out.close();
}
}

// <editor-fold defaultstate="collapsed" desc="HttpServlet methods. Click on the + sign on the left to edit the code.">
/**
* Handles the HTTP
* <code>GET</code> method.
*
* @param request servlet request
* @param response servlet response
* @throws ServletException if a servlet-specific error occurs
* @throws IOException if an I/O error occurs
*/
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
processRequest(request, response);
}

/**
* Handles the HTTP
* <code>POST</code> method.
*
* @param request servlet request
* @param response servlet response
* @throws ServletException if a servlet-specific error occurs
* @throws IOException if an I/O error occurs
*/
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
processRequest(request, response);
}

/**
* Returns a short description of the servlet.
*
* @return a String containing servlet description
*/
@Override
public String getServletInfo() {
return "Short description";
}// </editor-fold>

}
Crop suggestion:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.nura.ui;

import com.faceset.database.AddService;
import com.hadoopanalyzer.HadoopAnalyzer;
import com.nura.hadoop.HadoopAnalysis;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.time.Duration;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.StringTokenizer;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.swing.JOptionPane;

/**
*

* @author Vinayak
*/
public class CropSaMPLE {
//public static String User_location="Tamil Nadu";
//public static String User_Sub_location="VELLORE";
//public static String User_Cost="50000";
//public static String User_Duration="200";
public static String User_location="";
public static String User_Sub_location="";
public static String User_Cost="";
public static String User_Duration="";

public static ArrayList<String> FINAL_MAP_Result = new ArrayList<String>();

public static boolean RAINFALL=false;


public static boolean COST=false;

public static int FINAL_INDEX=0;


public static String Rainfall_Mapping="";

static ArrayList <String>Cost_mappingIndex=new ArrayList<String>();


static ArrayList <String>Water_mappingIndex=new ArrayList<String>();
static ArrayList <String>Location_mappingIndex=new ArrayList<String>();

static ArrayList <String>subUserlocation=new ArrayList<String>();


static ArrayList <String>MasterFile=new ArrayList<String>();

static ArrayList <String>subResultsSoil=new ArrayList<String>();


static ArrayList <String>subResultsCrops=new ArrayList<String>();
static ArrayList <String>subResultsDuration=new ArrayList<String>();
static ArrayList <String>subResultsWaterLevel=new ArrayList<String>();
static ArrayList <String>subResultsCost=new ArrayList<String>();
static ArrayList <String>Costcontent=new ArrayList<String>();
static ArrayList <String>RainfallContent=new ArrayList<String>();
static ArrayList <String>LocationContent=new ArrayList<String>();

static ArrayList <String>CostResult=new ArrayList<String>();


static ArrayList <String>RainfallResult=new ArrayList<String>();
static boolean status=false;
/**

* @param args the command line arguments


*/
public static void main(String[] args) throws FileNotFoundException, IOException {
try {
AddService ad=new AddService();
String result=ad.getCrop();
StringTokenizer strrr=new StringTokenizer(result,"$");
User_Sub_location=strrr.nextToken();
User_Duration=strrr.nextToken();
User_location=strrr.nextToken();
User_Cost=strrr.nextToken();
// TODO code application logic here
// new HadoopAnalysis().processFiles(new
File(constants.Constants.FILE_HADOOP_IN_LOCATION));
//new HadoopAnalyzer().processFiles(new
java.io.File(constants.Constants.FILE_HADOOP_IN_LOCATION));
User_location=User_location.replace(" ","");
File f=new File("D:\\temp\\test\\crop.csv");
File rainfall_file=new File("D:\\temp\\test\\rainfall.csv");
int wordindex=0;
ArrayList<String> Soil_types = new ArrayList<String>();
ArrayList<String> Location_types = new ArrayList<String>();
ArrayList<String> Crops_types = new ArrayList<String>();
ArrayList<String> Cost_types = new ArrayList<String>();
ArrayList<String> Water_level_types = new ArrayList<String>();
ArrayList<String> Duration_types = new ArrayList<String>();
ArrayList<String> MappingResults = new ArrayList<String>();
String cc;
String s="";
String line="";
BufferedReader bf=new BufferedReader(new FileReader(f));
HashMap<String, String> hm = new HashMap<String, String>();
while((s=bf.readLine())!=null){
line=s;
line=line.replace(" ", "");
System.out.println(line);
String[] ss = line.split("\\|"); // "|" is a regex metacharacter and must be escaped
System.out.println("First Index " + ss[0]);

hm.put(ss[0],line);
MasterFile.add(line);
}
Set set=hm.entrySet();
Iterator iteartor=set.iterator();
while(iteartor.hasNext()){
Map.Entry me=(Map.Entry)iteartor.next();
String value=me.getValue().toString();
//System.out.print("Map Value-->"+me.getValue());
StringTokenizer stt = new StringTokenizer(value, "|");
stt.nextToken();
Soil_types.add(stt.nextToken());
Location_types.add(stt.nextToken());
Crops_types.add(stt.nextToken());
Duration_types.add(stt.nextToken());
Water_level_types.add(stt.nextToken());
Cost_types.add(stt.nextToken());
System.out.print("Map String Value-->"+value);

}
int size=Crops_types.size();
print("Sizeeee "+String.valueOf(size));
for(String cropp:Crops_types){
cc=cropp;
System.out.println(cc);
}
for (int i = 0; i < Location_types.size(); i++) {
System.out.println("Filter-->" + Location_types.get(i));
String LocationResult = Location_types.get(i);
StringTokenizer st = new StringTokenizer(LocationResult, ",");
int k = 0;
while (st.hasMoreTokens()) {
String dblocation = (String) st.nextToken();
if (dblocation.equalsIgnoreCase(User_location.trim())) {
LocationContent.add(String.valueOf(k));
Location_mappingIndex.add(String.valueOf(i));
subUserlocation.add(String.valueOf(i));
}
k++;
}
}
for(String subresults:subUserlocation){
String masterdata=MasterFile.get(Integer.parseInt(subresults));
System.out.println(masterdata);
StringTokenizer st=new StringTokenizer(masterdata,"|");
st.nextToken();
subResultsSoil.add(st.nextToken());
st.nextToken();
subResultsCrops.add(st.nextToken());
subResultsDuration.add(st.nextToken());
subResultsWaterLevel.add(st.nextToken());
subResultsCost.add(st.nextToken());

}
//sample
for (String soil : subResultsWaterLevel) {
System.out.println("Water=====--->" + soil);
}
//Reading Rain fall Data

BufferedReader rain_br=null;
rain_br=new BufferedReader(new FileReader(rainfall_file));
String Rain_data="";
String rems="";
while((s=rain_br.readLine())!=null){
rems=s;
rems=rems.replace(" ", "");
//print(User_location.toUpperCase());
//print(rems);
if(rems.trim().startsWith(User_location.trim().toUpperCase())){
Rain_data+=rems+"\n";
Rain_data=Rain_data.replace(" ", "");

}
}
StringTokenizer st=new StringTokenizer(Rain_data,"\n");
while(st.hasMoreTokens()){
String splitdata=st.nextToken();
StringTokenizer st_sub=new StringTokenizer(splitdata,",");
st_sub.nextToken();
String sublocation=st_sub.nextToken();
System.out.println(sublocation +" "+User_Sub_location);
if(sublocation.equalsIgnoreCase(User_Sub_location)){

// skip the 12 intermediate columns before the rainfall figure
for (int skip = 0; skip < 12; skip++) {
st_sub.nextToken();
}
String Rain_Fall=st_sub.nextToken();
System.out.println("Rain Level- - ->> "+Rain_Fall);
System.out.println("<----MAPPING END- - -> ");
Rainfall_Mapping=Rain_Fall;

}
}

/*Rainfalll*/

int d=0;
ArrayList <String>rainfall_templist=new ArrayList<String>();
for(int h=0;h<subResultsWaterLevel.size();h++){
System.out.println("III"+h);
String waterlevel=subResultsWaterLevel.get(h);
System.out.println("Water Level"+waterlevel);
StringTokenizer stm=new StringTokenizer(waterlevel,",");
while (stm.hasMoreTokens()) {
rainfall_templist.add(stm.nextToken());
}
for(d=0;LocationContent.size()>d;d++){
String RainFall=rainfall_templist.get(Integer.parseInt(LocationContent.get(h)));
print("Predict Water--->"+RainFall);

print(String.valueOf(Rainfall_Mapping));
StringTokenizer str=new StringTokenizer(RainFall,"-");
String start=str.nextToken();
String end=str.nextToken();
print(start +" "+end);
//print(end);
double start_int=Double.parseDouble(start);
double end_int=Double.parseDouble(end);
print("RainFall..Mapping"+Rainfall_Mapping);
double dbrainfall=Double.parseDouble(Rainfall_Mapping);
if (start_int <= dbrainfall && dbrainfall <= end_int){
COST=true;
print("true");
FINAL_MAP_Result.add("TRUE"+","+LocationContent.get(h)+","+RainFall);

}
else{
FINAL_MAP_Result.add("FALSE"+","+LocationContent.get(h)+","+RainFall);

}
break;
}

}
/*Cost Prediction*/

int cd=0;
ArrayList <String>cost_templist=new ArrayList<String>();
for(int h=0;h<subResultsCost.size();h++){
String costlevel=subResultsCost.get(h);
System.out.println("Cost Level"+costlevel);
cost_templist=new ArrayList<String>();
StringTokenizer stm=new StringTokenizer(costlevel,",");
while(stm.hasMoreTokens()){
cost_templist.add(stm.nextToken());
}
for(cd=0;cd<LocationContent.size();cd++){
String RainFall=cost_templist.get(Integer.parseInt(LocationContent.get(h)));
print("PEDICT "+RainFall);
print(String.valueOf(User_Cost));
StringTokenizer str=new StringTokenizer(RainFall,"-");
String start=str.nextToken();
String end=str.nextToken();
print(start +" "+end);
//print(end);
double start_int=Double.parseDouble(start);
double end_int=Double.parseDouble(end);
double user_amount=Double.parseDouble(User_Cost);
if (start_int <= user_amount && user_amount <= end_int){
COST=true;
print("true");
FINAL_MAP_Result.add("TRUE" + "," + LocationContent.get(h) + "," + RainFall);
}else{
FINAL_MAP_Result.add("FALSE"+","+LocationContent.get(h)+","+RainFall);
System.out.println("index "+h);
}
break;
}
}

/*User Duration Time*/

int dt=0;
ArrayList <String>durationtemplist=new ArrayList<String>();
for(int h=0;h<subResultsDuration.size();h++){
String costlevel=subResultsDuration.get(h);
System.out.println("Duration Level"+costlevel);
durationtemplist=new ArrayList<String>();
StringTokenizer stm=new StringTokenizer(costlevel,",");
while(stm.hasMoreTokens()){
durationtemplist.add(stm.nextToken());
}
for(dt=0;dt<LocationContent.size();dt++){
String RainFall=durationtemplist.get(Integer.parseInt(LocationContent.get(h)));
print("PREDICT Duration"+RainFall);
print(String.valueOf(User_Duration));
StringTokenizer str=new StringTokenizer(RainFall,"-");
String start=str.nextToken();
String end=str.nextToken();
print(start +" "+end);
//print(end);
double start_int=Double.parseDouble(start);
double end_int=Double.parseDouble(end);
double user_amount=Double.parseDouble(User_Duration);
if (start_int <= user_amount && user_amount <= end_int){
COST=true;
print("true");
FINAL_MAP_Result.add("TRUE"+","+LocationContent.get(h)+","+RainFall);
}else{
FINAL_MAP_Result.add("FALSE"+","+LocationContent.get(h)+","+RainFall);
System.out.println("index "+h);
}

break;
}
}

/*Crop Filter*/

int ct=0;
ArrayList <String>Crop_templist=new ArrayList<String>();
for(int h=0;h<subResultsCrops.size();h++){
String costlevel=subResultsCrops.get(h);
System.out.println("Duration Level"+costlevel);
Crop_templist=new ArrayList<String>();
StringTokenizer stm=new StringTokenizer(costlevel,",");
while(stm.hasMoreTokens()){
Crop_templist.add(stm.nextToken());
}
StringBuilder sb=new StringBuilder();
for(ct=0;ct<LocationContent.size();ct++){
String RainFall=Crop_templist.get(Integer.parseInt(LocationContent.get(h)));
print("PREDICT Crops"+RainFall);
sb.append("CROPS ARE : "+"\n"+RainFall);

break;
}
JOptionPane.showMessageDialog(null, sb.toString());
}

} catch (Exception ex) {
Logger.getLogger(CropSaMPLE.class.getName()).log(Level.SEVERE, null, ex);
}

}
public static void print(String Message){
System.out.println(Message);
}
}

Hadoop Analysis:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.nura.hadoop;

import com.faceset.database.AddService;
import com.nura.dao.impl.JSONEntityDAOImpl;

import com.nura.entity.JSONEntity;
import constants.ServerIP;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

import javax.swing.JOptionPane;

/**
*
* @author ArunRamya
*/
public class HadoopAnalysis {
private static String result = "";
private static boolean status=false;
private static float NetBal;
private static String ProductName="";
private static String ProductOne="";
private static String ProductTwo="";
private static float NetAmount=0;
private static String FinalProduct="";
static String s1="",s2="";
static String content="";
static String originalcontent="";
static ArrayList <String>origin_sq=new ArrayList<String>();
static ArrayList <String>user_sq=new ArrayList<String>();
static int i=0;
static String User_id="";
private static float FinalNetAmount=0;
    /** Mapper stub: the map body performs no processing in the report's listing. */
    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, LongWritable, Text> {

        @Override
        public void map(LongWritable key, Text value,
                OutputCollector<LongWritable, Text> output, Reporter reporter)
                throws IOException {
            // Intentionally empty: no key/value pairs are collected map-side.
        }
    }

    public static class Reduce extends MapReduceBase
            implements Reducer<LongWritable, Text, LongWritable, Text> {

        @Override
        public void reduce(LongWritable key, Iterator<Text> values,
                OutputCollector<LongWritable, Text> output, Reporter reporter)
                throws IOException {
            // Emits each key with an empty value; the aggregation logic is not
            // shown in the report's listing.
            String likes = "";
            output.collect(key, new Text(likes));
        }
    }

    public void processFiles(File inputFile) throws Exception {

        // Configure the MapReduce job against the local pseudo-distributed cluster.
        JobConf conf = new JobConf(HadoopAnalysis.class);
        conf.set("fs.defaultFS", "hdfs://127.0.0.1:9000");
        conf.set("mapred.job.tracker", "127.0.0.1:9001");
        conf.setJobName("hadooptrans");
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);
        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        // Access the HDFS file system and print the home directory.
        FileSystem hdfs = FileSystem.get(conf);
        Path homeDir = hdfs.getHomeDirectory();
        System.out.println("Home folder - " + homeDir);

        // Recreate the HDFS input directory /hinput under the working directory.
        Path workingDir = hdfs.getWorkingDirectory();
        Path newFolderPath = new Path("/hinput");
        newFolderPath = Path.mergePaths(workingDir, newFolderPath);
        if (hdfs.exists(newFolderPath)) {
            hdfs.delete(newFolderPath, true); // delete the existing directory
        }
        hdfs.mkdirs(newFolderPath); // create a fresh directory

        // Copy the input file from the local file system into HDFS and register
        // it as the job's input path. (The original listing repeated the copy with
        // a malformed call, `copyFromLocalFile(localFilePathnewFolderPath)`; the
        // redundant line is dropped here.)
        String filePath = inputFile.getAbsolutePath();
        System.out.println("FilePath: " + filePath);
        Path localFilePath = new Path(filePath);
        Path hdfsFilePath = new Path(newFolderPath + "/" + inputFile.getName());
        hdfs.copyFromLocalFile(localFilePath, hdfsFilePath);
        FileInputFormat.addInputPath(conf, hdfsFilePath);
        // Clear any previous output directory, run the job, then copy the
        // reducer output back to the local file system.
        FileSystem fs = FileSystem.get(conf);
        Path out = new Path("hdfs://127.0.0.1:9000/hout");
        fs.delete(out, true);
        FileOutputFormat.setOutputPath(conf, out);
        JobClient.runJob(conf);
        fs.copyToLocalFile(new Path("hdfs://127.0.0.1:9000/hout/part-00000"),
                new Path(constants.Constants.FILE_HADOOP_OUT_LOCATION));
        System.out.println("End of the program");
    }
    public static void main(String[] args) throws Exception {
        User_id = JOptionPane.showInputDialog("Enter User-ID");
        new HadoopAnalysis().processFiles(
                new File(constants.Constants.FILE_HADOOP_IN_LOCATION));
    }
}
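Because the job uses `TextOutputFormat`, the `part-00000` file copied back to `FILE_HADOOP_OUT_LOCATION` holds one tab-separated key/value record per line. The sketch below shows how such a file could be parsed locally with plain Java I/O; the class name `PartFileParser` and the sample records are illustrative assumptions, not part of the report's code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartFileParser {

    /** Parse MapReduce text output: one "key<TAB>value" record per line. */
    static Map<Long, String> parse(List<String> lines) {
        Map<Long, String> pairs = new LinkedHashMap<>();
        for (String line : lines) {
            if (line.isEmpty()) {
                continue;
            }
            // TextOutputFormat separates the key and the value with a tab.
            String[] kv = line.split("\t", 2);
            pairs.put(Long.parseLong(kv[0]), kv.length > 1 ? kv[1] : "");
        }
        return pairs;
    }

    public static void main(String[] args) throws IOException {
        // Simulate a copied part-00000 file with two illustrative records.
        Path out = Files.createTempFile("part-00000", ".txt");
        Files.write(out, List.of("0\tliked", "42\tattended"));

        Map<Long, String> pairs = parse(Files.readAllLines(out));
        System.out.println(pairs); // {0=liked, 42=attended}
    }
}
```

Keeping the parser separate from the job class makes the output format easy to verify without a running Hadoop cluster.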
APPENDIX 2

SAMPLE SCREENSHOTS


