Project Report - 03-IT-2016-2020
A PROJECT REPORT
Submitted by
SANJAY BITRA
SALMAN AHAMED.J
in partial fulfilment of the requirements for the award of the degree of
BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY
MARCH 2022
ANNA UNIVERSITY : CHENNAI 600 025
BONAFIDE CERTIFICATE
SIGNATURE SIGNATURE
DR. M. AMANULLAH                                R. LAVANYA, AP/IT
This project report is submitted by the above students in partial fulfilment of
the requirements for the award of the degree of Bachelor of Technology in
Information Technology of Anna University, and is certified to be a bonafide
record of the work done by the above students during the academic year
2018-2022.
This report is submitted for the Anna University project viva-voce examination
held on
First and foremost, we would like to thank God Almighty, who is our refuge
and strength. We would like to express our heartfelt thanks to our beloved
parents, who sacrificed their presence for our better future.
We are very much indebted to our college Founder Alhaj. Dr.
S.M. SHAIK NURDDIN, Chairperson JANABA ALHAJIYANI M.S.
HABIBUNNISA, Aalim Muhammed Salegh Group of Educational Institutions,
and Honourable Secretary & Correspondent JANAB ALHAJI S. SEGU
JAMALUDEEN, Aalim Muhammed Salegh Group of Educational Institutions,
for providing the necessary facilities all through the course.
We take this opportunity to put forth our deep sense of gratitude to our beloved
Principal Prof. Dr. M. AFZAL ALI BAIG for granting permission to
undertake the project.
We would also like to express our sincere thanks to our project guide
Ms. R. LAVANYA, Assistant Professor, Information Technology, Aalim
Muhammed Salegh College of Engineering, who persuaded us to take up this
project and never ceased to lend her encouragement and support.
Finally, we take immense pleasure in thanking our family members, faculty and
friends for their constant support and encouragement in doing our project.
ABSTRACT
Agricultural research is a vast and important field which has strengthened
optimized economic profit and benefits. Agriculture has great scope in the
future, but most people lack the knowledge to choose the right crop to cultivate
on their land, which leads to losses. In our proposed system, we implement an
application that identifies the type of soil, the water source of the land
(whether it depends on rain or bore water) and the crop suitable for that soil.
Thus we provide a solution for people to practise better agriculture through
this application.
TABLE OF CONTENTS
ABSTRACT
LIST OF FIGURES
LIST OF SYMBOLS
1 INTRODUCTION
1.3 DATA MINING
2 SYSTEM STUDY
3 LITERATURE SURVEY
3.1 PAPER 1
3.2 PAPER 2
3.3 PAPER 3
3.4 PAPER 4
4 SYSTEM ANALYSIS
5 REQUIREMENTS AND SPECIFICATION
6 SOFTWARE DESCRIPTION
6.2 MYSQL
6.3 HADOOP
7 PROJECT DESCRIPTION
MODULES LIST
MODULES DESCRIPTION
10 CONCLUSION
10.1 CONCLUSION
11 APPENDIX 1
12 APPENDIX 2
13 REFERENCE
LIST OF FIGURES
LIST OF SYMBOLS
, comma separator
INTRODUCTION
PROJECT INTRODUCTION
Agricultural advancement has strengthened optimized economic growth
globally. It is a very vast and important field of industry with great
potential for profit. In the future, agriculture has the potential to be one
of the crucial fields for people. But today, many people who own land and
want to start an agricultural project do not have the knowledge and
awareness of the technicalities of crop cultivation or of market demands.
As a result, most people perform agriculture by cultivating crops on soil
that is not suitable for that area.
PROBLEM DEFINITION
The major problem is that we could not achieve proper stability with this
recommendation process. The recommendation problem is reduced to the
problem of estimating ratings for the items that have not yet been seen by a
user; this estimation is usually based on the other available ratings given
by this and/or other users.
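As a minimal, illustrative baseline for this estimation idea (not the system's actual method), an unseen item's rating can be approximated by the mean of the ratings the user has already given:

```java
import java.util.Arrays;

// Baseline recommender sketch: estimate a user's rating for an unseen
// item as the mean of the ratings that user has already given.
public class RatingBaseline {
    public static double estimate(double[] seenRatings) {
        // With no history, fall back to a neutral midpoint (an assumption).
        if (seenRatings.length == 0) {
            return 3.0;
        }
        return Arrays.stream(seenRatings).average().getAsDouble();
    }
}
```

A real recommender would also weight in ratings from similar users; this sketch only shows the shape of the estimation step.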
Data Process Overview
The above figure represents the application flow of this project. The user
(farmer) gives input data to the user interface, such as the type of soil, type
of water source and weather conditions. These data are processed by the system,
which is programmed on machine learning concepts. Once the data is processed,
the result is displayed to the user.
The best crop will be recommended by the system according to the input
obtained from the user. This allows the user to take an informed decision on
which crops to cultivate in the land where he wants to do agriculture, based on
the facilities accessible in that location and his budget. There will be an
improvement in yield, providing a regular market supply. This will help the
development of agricultural cultivation and business.
DATA MINING
BIG DATA
Big data is an all-encompassing term for any collection of data sets so
large and complex that it becomes difficult to process them using traditional
data-processing applications. The challenges include analysis, capture,
curation, search, sharing, storage, transfer, visualization, and privacy
violations. Big data is characterized by the 4V definition, namely,
volume, variety, velocity, and value.
Fig. Four V's of Big Data
Volume- The amount of data generated and stored; its sheer scale is what
makes data "big".
Variety- Types of data collected via sensors, smart phones, or social networks.
Such data types include video, image, text, audio, and data logs, in either
structured or unstructured format.
Velocity- Refers to the speed of data generation and transfer. The contents of
data change constantly.
Value- The most important aspect of big data; it refers to the process of
discovering huge hidden values from large datasets with various types and rapid
generation.
Big data are classified into different categories to better understand their
characteristics. The classification is important because of the large scale of
data in the cloud. It is based on five aspects: (i) data sources, (ii) content
format, (iii) data stores, (iv) data staging, and (v) data processing.
HADOOP
Hadoop is an open-source Apache Software Foundation project written in
Java that enables the distributed processing of large datasets across clusters
of commodity hardware. Hadoop has two primary components, namely, HDFS and
the MapReduce programming framework. The most significant feature of
Hadoop is that HDFS and MapReduce are closely related to each other;
they are co-deployed such that a single cluster is produced. Therefore, the
storage system is not physically separated from the processing system.
HDFS is a distributed file system designed to run on top of the local file
systems of the cluster nodes and to store extremely large files suited to
streaming data access. HDFS is highly fault tolerant and can scale from a
single server to thousands of machines, each offering local computation and
storage.
MapReduce
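The MapReduce flow can be sketched in plain Java without a cluster: a map phase emits (word, 1) pairs and a reduce phase sums the counts per key. This is a stand-alone simulation of the pattern, not the Hadoop API itself (the real org.apache.hadoop classes appear in Appendix 1):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java simulation of the MapReduce word-count pattern:
// the map phase emits (word, 1) pairs, the shuffle groups them by key,
// and the reduce phase sums the values for each key.
public class WordCountSim {
    public static Map<String, Integer> run(List<String> lines) {
        // Map + shuffle: collect the emitted (word, 1) pairs per key.
        Map<String, List<Integer>> grouped = new HashMap<>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\s+")) {
                if (word.isEmpty()) continue;
                grouped.computeIfAbsent(word, k -> new ArrayList<>()).add(1);
            }
        }
        // Reduce: sum the grouped values for each key.
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            counts.put(e.getKey(), sum);
        }
        return counts;
    }
}
```

On a real cluster, Hadoop performs the shuffle step between distributed Map and Reduce tasks; the logic per key is the same.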
FEASIBILITY STUDY
The feasibility of the project is analysed in this phase, and a business
proposal is put forth with a very general plan for the project and some
cost estimates. During system analysis, the feasibility study of the
proposed system is carried out. This is to ensure that the proposed
system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements of the system is essential.
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the
system will have on the organization. The amount of funds that the
company can pour into the research and development of the system is
limited. The expenditures must be justified. The developed system was
well within the budget, which was achieved because most of the
technologies used are freely available. Only the customized products had
to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is,
the technical requirements of the system. Any system developed must not
place a high demand on the available technical resources, as this would
lead to high demands being placed on the client. The developed system
must have modest requirements.
OPERATIONAL FEASIBILITY
This aspect of the study checks the level of acceptance of the
system by the user. This includes the process of training the user to use
the system efficiently. The user must not feel threatened by the system,
but must instead accept it as a necessity. The level of acceptance by the
users solely depends on the methods that are employed to educate the user
about the system and to make him familiar with it. His level of confidence
must be raised so that he is also able to make some constructive criticism,
which is welcomed, as he is the final user of the system.
CHAPTER 3
LITERATURE SURVEY
PAPER 1:
Over the next decades mankind will demand more food from fewer land
and water resources. This study quantifies the food production impacts of four
and the Special Report on Emission Scenarios. Partially and jointly considered
are land and water supply impacts from population growth, and technical
partial equilibrium model of the agricultural and forest sectors show that per
capita food levels increase in all examined development scenarios with minor
(Systems, 2016; 20 May 2016)
PAPER 2:
The history of this science exemplifies the diversity of systems and scales over
which they operate and have been studied. Modeling, an essential tool in
range of disciplines, who have contributed concepts and tools over more than
models, data, and knowledge products needed to meet the increasingly complex
and its lessons to ensure that we avoid re-invention and strive to consider all
PAPER 3:
Data mining has emerged as one of the major research domains in recent
decades for extracting implicit and useful knowledge. This knowledge can
the technology. Such advancement was also in the form of storage which
literature on data mining and pattern recognition for soil data mining is
PAPER 4:
There is much existing knowledge about the factors that influence the adoption
of new practices in agriculture, but few attempts have been made to construct
networks, characteristics of the farm and the farmer, and the ease and
convenience of the new practice. The ability to learn about the relative
SYSTEM ANALYSIS
EXISTING SYSTEM:
This system predicts crop yield based on temperature and rainfall using a
Fuzzy Logic model.
Three models, ARMA, SARIMA and ARMAX, are used for weather
prediction.
The three models are compared and the best model is used to predict
temperature and rainfall, which in turn are used to predict crop yield based
on the Fuzzy Logic model.
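The compare-the-models step can be sketched generically: score each candidate's forecast against held-out observations and keep the model with the lowest error. Using RMSE as the selection criterion is an assumption for illustration; the existing system may use a different measure.

```java
import java.util.Map;

// Pick the candidate model with the lowest RMSE against observed values.
// RMSE as the selection criterion is an assumption for illustration.
public class ModelSelector {
    public static double rmse(double[] predicted, double[] observed) {
        double sum = 0;
        for (int i = 0; i < observed.length; i++) {
            double d = predicted[i] - observed[i];
            sum += d * d;
        }
        return Math.sqrt(sum / observed.length);
    }

    // forecasts maps a model name (e.g. "ARMA") to its predictions.
    public static String best(Map<String, double[]> forecasts, double[] observed) {
        String bestName = null;
        double bestErr = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> e : forecasts.entrySet()) {
            double err = rmse(e.getValue(), observed);
            if (err < bestErr) { bestErr = err; bestName = e.getKey(); }
        }
        return bestName;
    }
}
```

The winning model's temperature and rainfall forecasts would then feed the Fuzzy Logic yield prediction.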
DRAWBACKS :
PROPOSED SYSTEM:
SYSTEM REQUIREMENTS :
HARDWARE REQUIREMENTS:
The hardware requirements may serve as the basis for a contract for
the implementation of the system and should therefore be a complete
and consistent specification of the whole system.
They are used by software engineers as the starting point for the
system design.
They state what the system should do, not how it should be implemented.
Processor : Core i3/i5/i7
RAM : 2-4 GB
HDD : 500 GB
SOFTWARE REQUIREMENTS :
SOFTWARE DESCRIPTION
JAVA - JDK 1.7 :
The Java platform is ideal for network computing, running across all
platforms from servers to cell phones to smart cards.
The Java platform benefits from a massive community of
developers and supporters that actively work on delivering
Java technology-based products and services.
The fact is, today you can find Java technology just
about everywhere!
APPLICATION :
Servlets
JavaServer Pages (JSPs)
Utility classes
Static documents, including HTML, images, JavaScript libraries,
Cascading Style Sheets, and so on
Client-side classes
Meta-information describing the web application
MYSQL :
Definition :
MySQL OVERVIEW :
MySQL ARCHITECTURE :
HADOOP :
DEFINITION :
IMPORTANCE OF HADOOP :
Ability to store and process huge amounts of any kind of data, quickly
Computing power
Fault tolerance
Flexibility
Low cost
Scalability
High availability
Economic
Easy to use
Data Locality
Distributed processing
Reliability
CHAPTER 7
PROJECT DESCRIPTION
ARCHITECTURE DIAGRAM
WORKFLOW DIAGRAM
UML DIAGRAM
USECASE DIAGRAM
[Use case diagram: user registration and crop prediction]
CLASS DIAGRAM
SEQUENCE DIAGRAM
[Sequence diagram: the admin feeds the input to the training set; the dataset
is maintained on the system server. Participants: Admin, system server,
training set]
ACTIVITY DIAGRAM
[Activity diagram participants: Admin, User, System]
SYSTEM IMPLEMENTATION
MODULES LIST
1. User interface design
2. Dataset comparison
3. Soil estimation
MODULES DESCRIPTION
All inputs and outputs will be given and received through this user interface
only.
Maintaining the training dataset:
The server will monitor the entire dataset information in its database
and verify it if required.
The server will also store the entire information in its database. Here
the server has to establish a connection to communicate with the
users.
The server will update each soil and input detail in its database.
Dataset comparison:
The soil and crop datasets are the main input of the user. Based on
these, the system will compare and predict the best crop for the user.
Soil Estimation:
In this module, the soil type is analyzed. Soil type usually refers to the
different sizes of mineral particles in a particular sample.
In this module the system compares the new input with the existing
training set data.
It generates a new set of output for the given input; the user gets the
output based on the input.
If the user gives a soil type as input, the output will be the type of crop
which is to be cultivated on that land.
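The compare-and-output step described above can be sketched as a lookup against the training records. The soil types and crops below are illustrative placeholders, not the project's real dataset:

```java
import java.util.Map;

// Sketch of the soil-estimation compare step: match the user's soil type
// against training records and return the recommended crop.
// The records below are illustrative placeholders, not real data.
public class SoilEstimator {
    static final Map<String, String> TRAINING = Map.of(
            "clay", "Rice",
            "sandy", "Groundnut",
            "loamy", "Sugarcane");

    public static String suggestCrop(String soilType) {
        // Unknown soil types yield no recommendation rather than a guess.
        return TRAINING.getOrDefault(soilType.toLowerCase().trim(), "No match");
    }
}
```

The real module would also weigh water source and weather conditions before recommending a crop.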
ALGORITHM EXPLANATION
The system uses a regression algorithm. This algorithm is applied here to
predict the right crop for the given input and produce the output.
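The report does not name the regression variant used; purely as an illustration, an ordinary least-squares fit of crop yield against a single numeric feature (rainfall, say) could look like:

```java
// Ordinary least-squares fit of y = a + b*x for one feature.
// Illustrative only: the report does not state the exact regression
// variant or the features used by the system.
public class SimpleRegression {
    // Returns {intercept a, slope b}.
    public static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
        double a = (sy - b * sx) / n;                         // intercept
        return new double[]{a, b};
    }

    public static double predict(double[] coef, double x) {
        return coef[0] + coef[1] * x;
    }
}
```

A multi-feature model (soil, water, weather) would extend this to multiple regression, but the fitting idea is the same.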
CHAPTER 9
NAMING CONVENTIONS
Value conventions ensure consistent values for variables at any point of time.
This involves the following:
Testing is one of the important steps in the software development phase.
Testing checks for errors; as a whole, testing of the project involves the
following test cases:
UNIT TESTING
FUNCTIONAL TEST
Functional test cases involve exercising the code with nominal input
values for which the expected results are known, as well as boundary values and
special values, such as logically related inputs, files of identical elements, and
empty files.
• Performance Test
• Stress Test
• Structure Test
PERFORMANCE TEST
It determines the amount of execution time spent in various parts of the
unit, program throughput, response time and device utilization by the
program unit.
STRESS TEST
Stress tests are those tests designed to intentionally break the unit. A great
deal can be learned about the strengths and limitations of a program by
examining the manner in which a program unit breaks.
STRUCTURED TEST
Structure tests are concerned with exercising the internal logic of a program
and traversing particular execution paths. A White-Box test strategy was
employed to ensure that the test cases could guarantee that all independent
paths within a module have been exercised at least once.
• Execute all loops at their boundaries and within their operational bounds.
• Handle end-of-file conditions, I/O errors, buffer problems and textual
errors in output information.
INTEGRATION TESTING
The major error faced during the project was a linking error: when all the
modules were combined, the links were not set properly with all support files.
We then checked the interconnections and the links. Errors are localized to the
new module and its intercommunications. Product development can be staged,
and modules integrated in as they complete unit testing. Testing is complete
when the last module is integrated and tested.
TESTING TECHNIQUES / TESTING STRATEGIES
TESTING:
The software testing process commences once the program is created and the
documentation and related data structures are designed. Software testing is
essential for correcting errors; otherwise the program or the project cannot be
said to be complete. Software testing is a critical element of software quality
assurance and represents the ultimate review of specification, design and
coding. Testing is the process of executing the program with the intent of
finding errors. A good test case design is one that has a high probability of
finding an as yet undiscovered error. A successful test is one that uncovers
such an error. Any engineering product can be tested in one of two
ways.
• Comparison testing
• Testing begins at the module level and works "outward" toward the
integration of the entire computer-based system.
• The developer of the software and an independent test group conduct testing.
PROGRAM TESTING:
Program testing points out logical and syntax errors. A syntax error is an
error in a program statement that violates one or more rules of the language in
which it is written. An improperly defined field dimension or omitted keywords
are common syntax errors. These errors are shown through error messages
generated by the computer. A logic error, on the other hand, deals with
incorrect data fields, out-of-range items and invalid combinations. Since the
compiler will not detect logical errors, the programmer must examine the
output. Condition testing exercises the logical conditions contained in a
module. The possible types of elements in a condition include a Boolean
operator, a Boolean variable, a pair of Boolean parentheses, a relational
operator or an arithmetic expression. The condition testing method focuses on
testing each condition in the program; the purpose of condition testing is to
detect not only errors in the conditions of a program but also other errors in
the program.
SECURITY TESTING:
VALIDATION TESTING:
User acceptance of the system is a key factor in the success of any
system. The system under consideration is tested for user acceptance by
constantly keeping in touch with prospective users at the time of
development and making changes whenever required. This is done with regard to
the following points:
CHAPTER 10
CONCLUSION
CONCLUSION
Thus we infer that, using machine learning, we have implemented a system to
predict the crop and the yield for that crop. Through this application, farmers
and ordinary people can gain more advantages.
FUTURE ENHANCEMENT
In future, using a Support Vector Machine (SVM) and machine learning,
farmers and the related land owners can make an informed decision to
predict a suitable crop that gives quality yield and good business.
APPENDIX 1
User registration:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.nura.servlet;
import com.nura.db.dao.UserDetailsDAO;
import com.nura.db.entity.UserDetails;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
public class SaveUserDetailsController extends HttpServlet {

    // The original listing omits processRequest; the body below is a
    // reconstruction that persists the submitted details through
    // UserDetailsDAO (method and parameter handling are assumptions).
    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        UserDetails user = new UserDetails();
        // ... populate user from the registration form parameters ...
        UserDetailsDAO uDAO = new UserDetailsDAO();
        if (uDAO.persistUserDetails(user)) {
            response.sendRedirect("response.jsp?msg=Registration Successful");
        } else {
            response.sendRedirect("response.jsp?msg=Db Error");
        }
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }

    @Override
    public String getServletInfo() {
        return "Short description";
    }
}
Validate user:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.nura.servlet;
import com.nura.db.dao.UserDetailsDAO;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
/**
*
* @author Arun
*/
public class ValidateUser extends HttpServlet {
/**
* Processes requests for both HTTP
* <code>GET</code> and
* <code>POST</code> methods.
*
* @param request servlet request
* @param response servlet response
* @throws ServletException if a servlet-specific error occurs
* @throws IOException if an I/O error occurs
*/
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
response.setContentType("text/html;charset=UTF-8");
PrintWriter out = response.getWriter();
HttpSession session = request.getSession();
try {
String uname = request.getParameter("uname");
String pwd = request.getParameter("pwd");
UserDetailsDAO uDAO = new UserDetailsDAO();
boolean isValid = uDAO.validateUser(uname, pwd);
//response.sendRedirect("index.html");
if (isValid) {
    // Look up the role type only after a successful login.
    String roleType = uDAO.getUstDtls(uname).get(0).getRoleType();
    response.sendRedirect("UserMenu.jsp");
} else {
    response.sendRedirect("response.jsp?msg=Invalid User");
}
} finally {
    out.close();
}
}
/**
* Handles the HTTP
* <code>POST</code> method.
*
* @param request servlet request
* @param response servlet response
* @throws ServletException if a servlet-specific error occurs
* @throws IOException if an I/O error occurs
*/
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
processRequest(request, response);
}
/**
* Returns a short description of the servlet.
*
* @return a String containing servlet description
*/
@Override
public String getServletInfo() {
return "Short description";
}// </editor-fold>
}
User Post:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.nura.servlet;
import com.nura.db.dao.UserPostDetailsDAO;
import com.nura.db.entity.UserPostDetails;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
/**
*
* @author Arun
*/
public class UserPost extends HttpServlet {
/**
* Processes requests for both HTTP
* <code>GET</code> and
* <code>POST</code> methods.
*
* @param request servlet request
* @param response servlet response
* @throws ServletException if a servlet-specific error occurs
* @throws IOException if an I/O error occurs
*/
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
response.setContentType("text/html;charset=UTF-8");
PrintWriter out = response.getWriter();
HttpSession session = request.getSession();
try {
/* TODO output your page here. You may use following sample code. */
UserPostDetails mDtls = new UserPostDetails();
mDtls.setduration(request.getParameter("days"));
mDtls.setmoney(request.getParameter("money"));
mDtls.setlocation(request.getParameter("state"));
mDtls.setDistrict(request.getParameter("district"));
mDtls.setStatus("WAITING");
UserPostDetailsDAO _usr=new UserPostDetailsDAO();
if( _usr.persistUserDetails(mDtls)){
response.sendRedirect("response.jsp?msg=" + ". Waiting for Hadoop Response");
}else{
response.sendRedirect("response.jsp?msg=" + ". Db Error");
}
//System.out.println("" + recMdDtls);
} finally {
    out.close();
}
}
/**
* Handles the HTTP
* <code>POST</code> method.
*
* @param request servlet request
* @param response servlet response
* @throws ServletException if a servlet-specific error occurs
* @throws IOException if an I/O error occurs
*/
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
processRequest(request, response);
}
/**
* Returns a short description of the servlet.
*
* @return a String containing servlet description
*/
@Override
public String getServletInfo() {
return "Short description";
}// </editor-fold>
}
Crop suggestion:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.nura.ui;
import com.faceset.database.AddService;
import com.hadoopanalyzer.HadoopAnalyzer;
import com.nura.hadoop.HadoopAnalysis;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.time.Duration;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.StringTokenizer;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.swing.JOptionPane;
/**
*
* @author Vinayak
*/
public class CropSaMPLE {
//public static String User_location="Tamil Nadu";
//public static String User_Sub_location="VELLORE";
//public static String User_Cost="50000";
//public static String User_Duration="200";
public static String User_location="";
public static String User_Sub_location="";
public static String User_Cost="";
public static String User_Duration="";
// (The original listing omits the surrounding file-reading loop that
// splits each line into ss and populates hm and MasterFile.)
hm.put(ss[0], line);
MasterFile.add(line);
}
Set set=hm.entrySet();
Iterator iteartor=set.iterator();
while(iteartor.hasNext()){
Map.Entry me=(Map.Entry)iteartor.next();
String value=me.getValue().toString();
//System.out.print("Map Value-->"+me.getValue());
StringTokenizer stt = new StringTokenizer(value, "|");
stt.nextToken();
Soil_types.add(stt.nextToken());
Location_types.add(stt.nextToken());
Crops_types.add(stt.nextToken());
Duration_types.add(stt.nextToken());
Water_level_types.add(stt.nextToken());
Cost_types.add(stt.nextToken());
System.out.print("Map String Value-->"+value);
}
int size=Crops_types.size();
print("Sizeeee "+String.valueOf(size));
for(String cropp:Crops_types){
cc=cropp;
System.out.println(cc);
}
for (int i = 0; i < Location_types.size(); i++) {
    System.out.println("Filter-->" + Location_types.get(i));
    String LocationResult = Location_types.get(i);
StringTokenizer st=new StringTokenizer(LocationResult,",");
int k=0;
while(st.hasMoreTokens()){
String dblocation=(String)st.nextToken();
if(dblocation.equalsIgnoreCase(User_location.trim())){
LocationContent.add(String.valueOf(k));
Location_mappingIndex.add(String.valueOf(i));
subUserlocation.add(String.valueOf(i));
} else {
}
k++;
}
}
for(String subresults:subUserlocation){
String masterdata=MasterFile.get(Integer.parseInt(subresults));
System.out.println(masterdata);
StringTokenizer st=new StringTokenizer(masterdata,"|");
st.nextToken();
subResultsSoil.add(st.nextToken());
st.nextToken();
subResultsCrops.add(st.nextToken());
subResultsDuration.add(st.nextToken());
subResultsWaterLevel.add(st.nextToken());
subResultsCost.add(st.nextToken());
}
//sample
for (String soil : subResultsWaterLevel) {
    System.out.println("Water=====--->" + soil);
}
//Reading Rain fall Data
BufferedReader rain_br=null;
rain_br=new BufferedReader(new FileReader(rainfall_file));
String Rain_data="";
String rems="";
while((s=rain_br.readLine())!=null){
rems=s;
rems=rems.replace(" ", "");
//print(User_location.toUpperCase());
//print(rems);
if(rems.trim().startsWith(User_location.trim().toUpperCase())){
Rain_data+=rems+"\n";
Rain_data=Rain_data.replace(" ", "");
}
}
StringTokenizer st=new StringTokenizer(Rain_data,"\n");
while(st.hasMoreTokens()){
String splitdata=st.nextToken();
StringTokenizer st_sub=new StringTokenizer(splitdata,",");
st_sub.nextToken();
String sublocation=st_sub.nextToken();
System.out.println(sublocation +" "+User_Sub_location);
if(sublocation.equalsIgnoreCase(User_Sub_location)){
// Skip 12 intermediate columns (presumably the monthly values)
// to reach the annual rainfall figure.
for (int m = 0; m < 12; m++) {
    st_sub.nextToken();
}
String Rain_Fall=st_sub.nextToken();
System.out.println("Rain Level- - ->> "+Rain_Fall);
System.out.println("<----MAPPING END- - -> ");
Rainfall_Mapping=Rain_Fall;
}
}
/*Rainfalll*/
int d=0;
ArrayList <String>rainfall_templist=new ArrayList<String>();
for(int h=0;h<subResultsWaterLevel.size();h++){
System.out.println("III"+h);
String waterlevel=subResultsWaterLevel.get(h);
System.out.println("Water Level"+waterlevel);
StringTokenizer stm=new StringTokenizer(waterlevel,",");
while (stm.hasMoreTokens()) {
    rainfall_templist.add(stm.nextToken());
}
for(d=0;LocationContent.size()>d;d++){
String RainFall=rainfall_templist.get(Integer.parseInt(LocationContent.get(h)));
print("Predict Water--->"+RainFall);
print(String.valueOf(Rainfall_Mapping));
StringTokenizer str=new StringTokenizer(RainFall,"-");
String start=str.nextToken();
String end=str.nextToken();
print(start +" "+end);
//print(end);
double start_int=Double.parseDouble(start);
double end_int=Double.parseDouble(end);
print("RainFall..Mapping"+Rainfall_Mapping);
double dbrainfall=Double.parseDouble(Rainfall_Mapping);
if (start_int <= dbrainfall && dbrainfall <= end_int){
COST=true;
print("true");
FINAL_MAP_Result.add("TRUE"+","+LocationContent.get(h)+","+RainFall);
}
else{
FINAL_MAP_Result.add("FALSE"+","+LocationContent.get(h)+","+RainFall);
}
break;
}
}
/*Cost Prediction*/
int cd=0;
ArrayList <String>cost_templist=new ArrayList<String>();
for(int h=0;h<subResultsCost.size();h++){
String costlevel=subResultsCost.get(h);
System.out.println("Cost Level"+costlevel);
cost_templist=new ArrayList<String>();
StringTokenizer stm=new StringTokenizer(costlevel,",");
while(stm.hasMoreTokens()){
cost_templist.add(stm.nextToken());
}
for(cd=0;cd<LocationContent.size();cd++){
String RainFall=cost_templist.get(Integer.parseInt(LocationContent.get(h)));
print("PEDICT "+RainFall);
print(String.valueOf(User_Cost));
StringTokenizer str=new StringTokenizer(RainFall,"-");
String start=str.nextToken();
String end=str.nextToken();
print(start +" "+end);
//print(end);
double start_int=Double.parseDouble(start);
double end_int=Double.parseDouble(end);
double user_amount=Double.parseDouble(User_Cost);
if (start_int <= user_amount && user_amount <= end_int){
COST=true;
print("true");
FINAL_MAP_Result.add("TRUE" + "," + LocationContent.get(h) + "," + RainFall);
}else{
FINAL_MAP_Result.add("FALSE"+","+LocationContent.get(h)+","+RainFall);
System.out.println("index "+h);
}
break;
}
}
/*Duration Prediction*/
int dt=0;
ArrayList <String>durationtemplist=new ArrayList<String>();
for(int h=0;h<subResultsDuration.size();h++){
String costlevel=subResultsDuration.get(h);
System.out.println("Duration Level"+costlevel);
durationtemplist=new ArrayList<String>();
StringTokenizer stm=new StringTokenizer(costlevel,",");
while(stm.hasMoreTokens()){
durationtemplist.add(stm.nextToken());
}
for(dt=0;dt<LocationContent.size();dt++){
String RainFall=durationtemplist.get(Integer.parseInt(LocationContent.get(h)));
print("PREDICT Duration"+RainFall);
print(String.valueOf(User_Duration));
StringTokenizer str=new StringTokenizer(RainFall,"-");
String start=str.nextToken();
String end=str.nextToken();
print(start +" "+end);
//print(end);
double start_int=Double.parseDouble(start);
double end_int=Double.parseDouble(end);
double user_amount=Double.parseDouble(User_Duration);
if (start_int <= user_amount && user_amount <= end_int){
COST=true;
print("true");
FINAL_MAP_Result.add("TRUE"+","+LocationContent.get(h)+","+RainFall);
}else{
FINAL_MAP_Result.add("FALSE"+","+LocationContent.get(h)+","+RainFall);
System.out.println("index "+h);
}
break;
}
}
/*Crop Filter*/
int ct=0;
ArrayList <String>Crop_templist=new ArrayList<String>();
for(int h=0;h<subResultsCrops.size();h++){
String costlevel=subResultsCrops.get(h);
System.out.println("Duration Level"+costlevel);
Crop_templist=new ArrayList<String>();
StringTokenizer stm=new StringTokenizer(costlevel,",");
while(stm.hasMoreTokens()){
Crop_templist.add(stm.nextToken());
}
StringBuilder sb=new StringBuilder();
for(ct=0;ct<LocationContent.size();ct++){
String RainFall=Crop_templist.get(Integer.parseInt(LocationContent.get(h)));
print("PREDICT Crops"+RainFall);
sb.append("CROPS ARE : "+"\n"+RainFall);
break;
}
JOptionPane.showMessageDialog(null, sb.toString());
}
public static void print(String Message){
System.out.println(Message);
}
}
Hadoop Analysis:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.nura.hadoop;
import com.faceset.database.AddService;
import com.nura.dao.impl.JSONEntityDAOImpl;
import com.nura.entity.JSONEntity;
import constants.ServerIP;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.StringTokenizer;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
import javax.swing.JOptionPane;
/**
*
* @author ArunRamya
*/
public class HadoopAnalysis {

    private static String result = "";
    private static boolean status = false;
    private static float NetBal;
    private static String ProductName = "";
    private static String ProductOne = "";
    private static String ProductTwo = "";
    private static float NetAmount = 0;
    private static String FinalProduct = "";
    static String s1 = "", s2 = "";
    static String content = "";
    static String originalcontent = "";
    static ArrayList<String> origin_sq = new ArrayList<String>();
    static ArrayList<String> user_sq = new ArrayList<String>();
    static int i = 0;
    static String User_id = "";
    private static float FinalNetAmount = 0;

    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, LongWritable, Text> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(LongWritable key, Text value,
                OutputCollector<LongWritable, Text> output, Reporter reporter)
                throws IOException {
            try {
                // map logic left empty in the report listing
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }
}
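The map method in the listing above was left empty. Under the old mapred API, a mapper typically tokenizes each input line and emits one (key, value) pair per token through the OutputCollector. A plain-Java sketch of that per-line logic, without the Hadoop types, is shown below; the class and method names (MapSketch, mapLine) are ours, not part of the report's code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class MapSketch {

    // Simulates what a word-count style map() would emit for one input line:
    // one "token<TAB>1" pair per whitespace-separated token.
    static List<String> mapLine(String line) {
        List<String> emitted = new ArrayList<>();
        StringTokenizer st = new StringTokenizer(line);
        while (st.hasMoreTokens()) {
            emitted.add(st.nextToken() + "\t1");
        }
        return emitted;
    }

    public static void main(String[] args) {
        // Each token becomes a separate (key, 1) emission; the reducer
        // would later sum the 1s per key.
        System.out.println(mapLine("rain fall rain"));
    }
}
```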
APPENDIX 2
1. Schneider, U. A., Havlik, P., Schmid, E., Valin, H., Mosnier, A., Obersteiner, M., Böttcher, H., Skalský, R., Balkovič, J., Sauer, T., & Fritz, S. (2011). Impacts of population growth, economic development, and technical change on global food production and consumption. Agricultural Systems, 104, 204–215. Elsevier.
2. Wahbeh, A. H., Al-Radaideh, Q. A., Al-Kabi, M. N., & Al-Shawakfa, E. M. (2011). A comparison study between data mining tools over some classification methods. International Journal of Advanced Computer Science and Applications, 8(2), 18-26.
3. Eiben, A. E., Raue, P. E., & Ruttkay, Z. (1994, October). Genetic algorithms with multi-parent recombination. In International Conference on Parallel Problem Solving from Nature (pp. 78-87). Springer, Berlin, Heidelberg.
4. Jones, J. W., Antle, J. M., Basso, B. O., Boote, K. J., Conant, R. T., Foster, I., Godfray, H. C. J., Herrero, M., Howitt, R. E., Janssen, S., Keating, B. A., Muñoz-Carpena, R., Porter, C. H., Rosenzweig, C., & Wheeler, T. R. Brief history of agricultural systems modeling. ScienceDirect, Elsevier.
5. Kuehne, G., Llewellyn, R., Pannell, D. J., Wilkinson, R., Dolling, P., Ouzman, J., & Ewing, M. Predicting farmer uptake of new agricultural practices: A tool for research, extension and policy. ScienceDirect, Elsevier.
6. Zhou, S., Ling, T. W., Guan, J., Hu, J., & Zhou, A. (2003, March). Fast text classification: a training-corpus pruning based approach. In Proceedings of the Eighth International Conference on Database Systems for Advanced Applications (DASFAA 2003) (pp. 127-136). IEEE.
7. Li, Y., & Bontcheva, K. (2008). Adapting support vector machines for F-term-based classification of patents. ACM Transactions on Asian Language Information Processing (TALIP), 7(2), 7.
8. Tubiello, F. N., Salvatore, M., Cóndor Golec, R. D., Ferrara, A., Rossi, S., Biancalani, R., ... & Flammini, A. (2014). Agriculture, forestry and other land use emissions by sources and removals by sinks. Rome, Italy.