
June 2018

MATURI VENKATASUBBA RAO (MVSR) ENGINEERING COLLEGE
(Sponsored by Matrusri Education Society, Estd. 1980)
Affiliated to Osmania University, Recognized by AICTE
EAMCET/PGECET/ICET Code: MVSR
NBA Accredited | NAAC Accredited | ISO 9001:2015 Certified
www.mvsrec.edu.in

Department of Computer Science and Engineering

Annual Technical Magazine
Department of Computer Science and Engineering
 VISION
 To impart technical education of the highest standards, producing competent and confident engineers with an ability to use computer science knowledge to solve societal problems.

 MISSION
 To make the learning process exciting, stimulating and interesting.
 To impart adequate fundamental knowledge and soft skills to students.
 To expose students to advanced computer technologies in order to excel in engineering practices by bringing out the creativity in students.
 To develop economically feasible and socially acceptable software.

B.E. PEOs, POs & PSOs


PROGRAM EDUCATIONAL OBJECTIVES (PEOs)
The Program Educational Objectives of the undergraduate program in Computer Science &
Engineering are to prepare graduates who will:
1. Obtain strong fundamental concepts, technical competency and problem-solving skills to generate innovative solutions to engineering problems.
2. Continuously enhance their skills through training, independent inquiry and professional practice, and pursue higher education or research by adapting to rapidly changing technology.
3. Advance in their professional careers, taking on greater technical, multidisciplinary and managerial responsibility as well as leadership positions, thus becoming competent professionals at the global level.
4. Exhibit commitment to ethical practices, societal contributions and lifelong learning.

PROGRAM OUTCOMES (POs)
At the end of the program the students (Engineering Graduates) will be able to:
1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering
problems.
2. Problem analysis: Identify, formulate, review research literature, and analyse complex engineering problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis
of the information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modelling to complex engineering
activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant
to the professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and
need for sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader
in diverse teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and
write effective reports and design documentation, make effective presentations, and give
and receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of
engineering and management principles and apply these to one's own work, as a member
and leader in a team, to manage projects and in multidisciplinary environments.
12. Lifelong learning: Recognize the need for, and have the preparation and ability to engage
in independent and life-long learning in the broadest context of technological change.

PROGRAM SPECIFIC OUTCOMES (PSOs)


13. Efcient coding: an ability to analyse a problem, design the algorithm and optimally code its
solution.
14. Software deployment: an ability to identify & dene computing requirements to test,
implement and maintain a software product.

M.Tech PEOs, POs & PSOs


PROGRAM EDUCATIONAL OBJECTIVES (PEOs)
The Program Educational Objectives of the postgraduate program in Computer Science &
Engineering are to prepare graduates who will:
1. Gain in-depth knowledge of advanced computational methods, to apply in relevant real-world issues within the context of a specific application domain.
2. Design and develop innovative solutions making use of modern computing platforms by exhibiting commitment to ethical practices and lifelong learning.
3. Understand and contribute to prevalent literature for pursuing research in the field of computer science and engineering.
4. Exhibit technical and managerial skills in multidisciplinary domains and become competent professionals.

PROGRAM OUTCOMES (POs)
At the end of the program the students (Engineering Graduates) will be able to:

1. An ability to independently carry out research/investigation and development work to solve practical problems.
2. An ability to write and present a substantial technical report/document.
3. An ability to demonstrate a degree of mastery over computer science and engineering for holistic professional development.
4. An ability to demonstrate understanding for designing and developing software for multidisciplinary problems.
PROGRAM SPECIFIC OUTCOMES (PSOs)

1. Conduct research using the knowledge gained to identify and solve problems in multidisciplinary domains.
2. Demonstrate critical thinking ability to propose efficient solutions to real-world computational problems, taking into consideration environmental and societal issues.

Creative Desk

Dr. Akhil Khare, Professor (Reviewer, Technical Magazine)
Dr. B. Sandhya, Professor (Reviewer, Technical Magazine)
Meduri Anupama, Associate Professor (Reviewer, Technical Magazine)
Dr. Daggubati Sirisha, Assistant Professor (In-charge, Technical Magazine)
V. Sudhindra, Assistant Programmer (Magazine Design Team)
M. Nagarani, Programming Assistant (Magazine Design Team)

Index

Title: Page No.
Faculty Articles: 1-8
Student Articles: 9-14
Faculty Research Publications: 15-34
Student Publications: 35-39
Best Student Projects: 40-41
Alumni: 42-43
Achievements: 44-47
Faculty/Student Corner: 48-50
Messages

Chairman's Message

It is often said, "Give me a copy of your college technical magazine, and I will tell you about the quality of your college." I strongly believe in this statement. A magazine carries contributions reflecting the ethos and aspirations of the faculty, students and other team members of an institution. I am happy to know that the Computer Science & Engineering Department is bringing out its first department technical magazine this year. It is my pleasure to congratulate the editorial team for bringing out a quality technical magazine. Reading this technical magazine will definitely be an inspiration and motivation for all students and staff to contribute even more to the forthcoming issues.

Dr. K. P. Srinivasa Rao,
MBBS, MS (Ophthalmology),
Chairman, MVSREC

Principal's Message

TekEssenCSE is the manifestation of the desire of the Computer Science faculty and students to share their innovative ideas on a common platform. It gives me great pleasure to know that the TekEssenCSE magazine is ready for publication. This magazine is a perfect blend of magnificent and groundbreaking articles. It concentrates on disseminating information to the student community and quenching their thirst for knowledge. I am very glad to congratulate the editors on their hard work in bringing out this edition.

Dr. G. Kanaka Durga,
Principal, MVSREC
(Professor, Department of Electronics and Communication Engineering)
HOD's Message

TekEssenCSE is the annual magazine released by the Department of Computer Science & Engineering. It is a blend of exquisite articles and innovative ideas from the faculty and new-age students of the Computer Science & Engineering Department. I strongly believe that the informative articles and innovative ideas presented in the magazine will be appealing and useful to the readers.

Prof. J. Prasanna Kumar,
HOD, MVSREC
(Department of Computer Science and Engineering)

Creative Desk

"Coming together is a beginning, keeping together is progress and working together is success." This magazine, TekEssenCSE, the flagship magazine of the Computer Science & Engineering Department of Maturi Venkatasubba Rao Engineering College, is the culmination of the untiring initiative and endeavours of the faculty and students of CSE. The magazine strives to inform, engage, inspire and educate a diverse readership on developments in the field of Computer Science.
Securing The Insecure
Meduri Anupama
Associate Professor

The internet is a war zone. Enter if you dare.


The internet was born of the need for unfettered scientific communication across different locales. People adopted it into their everyday lives so they could share in that democratization of information. Digital economies emerged, and more powerful computing systems allowed for more powerful web services.

Being able to carry out bank transactions on your phone, for example, may be a boon to a traveling salesman, but it's also a recipe for ending up as prey for bad actors looking to get rich.

Nearly every internet fad has its own ledger of breaches, cybersecurity attacks, or digital ransoms.

It goes back to the dotcom boom. Yahoo dominated cyberspace in the late 1990s and early 2000s, its email services reigning supreme in the fledgling digital society. The California-based company was valued at $54.9 billion – or INR 410,900 crores – in 2006. Following a 2013 data breach that left 3 billion accounts compromised, the poster child of the Y2K era was purchased in 2017 by Verizon for $4.48 billion – a 92% nosedive.

The early 2010s started with a fad to throw out physical hard drives and toss our personal data into the cloud – as if it were really floating, white puffs of ones and zeros. The cybersecurity attacks that followed included a 2012 Dropbox hack that exposed 68 million users' information, a 2014 phishing campaign that exposed photos of celebrities, and a 2012 LinkedIn breach that involved the theft of countless encrypted passwords. Heck, even Microsoft's cloud services were hacked in 2010.

And here we are creating vacuum cleaners that connect to the internet.

To no one's surprise, the internet of things has its own record of unglamorous cyber break-ins. Hackers at the 2016 DEF CON security conference – equivalent to an annual county fair for hackers looking to have fun and do some good – found 47 vulnerabilities across 23 devices. Everything from door locks to wheelchairs to thermostats walked out of the conference with newly documented weaknesses.

Researchers in 2015 demonstrated the ability not just to kill the engine of a Jeep Cherokee in the middle of the highway, but also to mess with its air conditioning, radio settings, and windshield wipers – all via its internet-connected, onboard entertainment system.

You get the point: the more devices we hook up to the World Wide Web, the more we hook up our lifelines to this network. Add in enough internet-connected utilities and you cook up a cyberstorm that can wipe out large swaths of our daily lives.

That's what happened in the fall 2016 Mirai botnet attack.

The attack exploited one of the essential facets of how the internet works: communication. Devices connected to the internet communicate via multi-byte units of information known as packets. They come with sender and recipient addresses, known as Internet Protocol addresses – IP for short – and hop along network hubs stationed across the world via fiber optic cables deep under the sea or satellites up in space – a digital form of hot potato, if you will.

Every time you re up your web browser and type in google.com, you're sending packets along
this distributed network, eventually to one of Google's many high-powered web servers, which then
sends packets of data hopping back to you. Every time a new page loads, the more the two of you
exchange data.
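
To make that request-and-response dance concrete, here is a minimal, illustrative Python sketch that performs one such exchange by hand over a plain TCP socket. The host example.com is just a stand-in; any reachable web server would behave the same way:

import socket

# Open a TCP connection to a web server: our packets hop across the
# network to port 80, the conventional port for unencrypted HTTP.
host = "example.com"
with socket.create_connection((host, 80), timeout=5) as sock:
    # A minimal HTTP/1.1 request: one small burst of packets goes out...
    request = "GET / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))

    # ...and the server's reply comes hopping back, packet by packet.
    chunks = []
    while True:
        data = sock.recv(4096)      # read up to 4 KB of the reply at a time
        if not data:
            break                   # connection closed: reply complete
        chunks.append(data)

reply = b"".join(chunks)
print(reply.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"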

Now imagine that back-and-forth happening simultaneously, trillions of times over.


That's what happened in 2016.

The Mirai botnet was quite simple: An unknowing user would download the malware, which
would then infect internet-connected devices – webcams and routers, for example – by guessing every
password from its in-built dictionary of common passwords.

Didn't know your webcam and router had passwords? Neither did a lot of people.

Many users kept, and still keep, the default passwords on their internet-connected devices, and the internet of things was no exception. The malware webbed together hundreds, if not thousands, of infected devices – from smart toasters to printers to smart washers to smart ovens to DVRs to even baby monitors – and had them do one thing: repeatedly send web requests to internet services.

That's all it took to break the internet.

Fittingly so, “mirai” is Japanese for “future.” It's a dark one, alright.

Consequently, we shouldn't be asking ourselves how to sign out of this digital nightmare. The
better question is whether the internet of things can be tamed.

If the boom and bust of businesses like Yahoo has taught us anything, it's that rushing to embrace something just because it's a fad – and because it's connected to the internet – is a bad idea.

In other words, when Nike's light-up, self-lacing shoes hit stores near you, wait before rushing to the cash register. A security researcher might just show you why an app-controlled shoe that needs a nightly charge is asking for trouble.

Zuckerberg and Pichai can change their tunes all they want, but their message is surprisingly
consistent: The internet is inevitable – and so is the loss of your privacy.

Green Tomorrow (GT) - An Energy Efcient
Novel Approach Using Crowd Computing
Dr.AkhilKhare
Professor

'Save Energy – Save Earth' is a novel approach for GT using crowd computing. Crowd computing is one of the upcoming research areas, allowing people around the world to work together in partnership with machines. A decade from now, machines may largely take over control amid the heavy boom of the industrial revolution (IR). The exponential increase in data and its associated processing techniques, if not managed properly, will result in high energy consumption (EC) and e-waste.

Currently, all of us are living in and experiencing the era of the 'Advanced Computer/Network Age', where almost everything is handled by computer software. Today, software alone might seem to oversee the world. But there will always be a margin between what humans understand and what computers can perform. Certain things are difficult to model with computers but can be performed by human beings with their extraordinary intelligence. Consider an example where a medical prescription written by your family doctor, which is really hard to read, is given to a computer/robot to identify the medicines. Here, the various available algorithms, which involve huge computation and a large amount of power consumption (PC), fail to decipher the scrawls.

When the same data are given to a group of people who are asked to recognize it with each other's help, they are able to glean words fairly accurately out of the seemingly unreadable. A human perceives and classifies the samples (or patterns) in the original space (the pattern space), whereas a computer does not understand the pattern space. Thus, crowd computing allows humans to proactively use their own natural energy rather than artificial computations requiring large power consumption. We have all learnt the law of energy conservation in science, which says that in a closed system the amount of energy is fixed: energy inside the system can neither be created nor destroyed, only converted from one form to another (and sometimes back again). Today, the distributed computing infrastructure makes use of millions of devices which are part of the IR, and this results in an increase in carbon emissions, a serious threat to energy and power. The environmental cost function C(x) considered in many existing studies is as follows:

C(x) = c · r · PUE · P(x)    ...(1)

where:
c – carbon footprint cost ($/gm)
r – average carbon emission rate (gm/kWh)
PUE – Power Usage Effectiveness
P(x) – server power, proportional to server utilization:

P(x) = Requirement/Capacity = P_busy / (P_busy + P_idle)    ...(2)
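
As a quick illustration of equations (1) and (2), the sketch below evaluates the cost function for one server. All numeric values are made-up placeholders chosen only to show the shape of the formulas, not figures from any study:

# Illustrative evaluation of the environmental cost function C(x).
# All numbers below are hypothetical placeholders.

def server_power_fraction(p_busy: float, p_idle: float) -> float:
    """P(x) = P_busy / (P_busy + P_idle), a utilization-style fraction."""
    return p_busy / (p_busy + p_idle)

def environmental_cost(c: float, r: float, pue: float, p_x: float) -> float:
    """C(x) = c * r * PUE * P(x)."""
    return c * r * pue * p_x

p_x = server_power_fraction(p_busy=300.0, p_idle=150.0)   # -> 0.667
cost = environmental_cost(
    c=0.0001,   # carbon footprint cost ($/gm), assumed
    r=500.0,    # avg. carbon emission rate (gm/kWh), assumed
    pue=1.6,    # power usage effectiveness, assumed
    p_x=p_x,
)
print(f"P(x) = {p_x:.3f}, C(x) = {cost:.4f}")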

[Figure: Green Tomorrow (GT)]

GT ∝ ES · PS    ...(3)
Green Tomorrow (GT), as shown in the figure above, proposes an energy-efficient approach in which humans and machines make a great team to save 'Mother Earth'. GT is directly proportional to the combination of energy savings (ES) and power savings (PS). Reductions in energy consumption and in power consumption will collectively contribute to vast energy and power savings. These valuable savings could carry us all the way to 2080, rather than to a hazardous energy crisis in 2050 that might create critical survival scenarios for living beings.

Rob Miller, a professor of computer science in the User Interface Design Group at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL), describes the 'crowd computing' revolution as one that makes workers/employers and machines/software colleagues rather than competitors. The dependence of human beings on various advanced gadgets and their accompanying software for daily use is increasing day by day, threatening to supersede human intelligence. The industrial revolution has given humans an inferiority complex, as they may become secondary to the technologies they create, which require huge amounts of power and energy. Day-to-day technological innovations and advancements demand more and more energy resources. Thus, energy conservation is one of the most challenging tasks for humans to survive in the future.

There are various denitions and views for crowd computing, which takes a different approach for solving
different problems. SriniDevadas, professor of electrical engineering and computer science at the MIT CSAIL,
states crowd computing as a symbiotic relationship between software and humans. Crowd computing analyses and
recognizes the strong and weak points of both, and efciently utilizes those qualities for improved energy savings.
The human knowledge is combined with the varying technological advances to yield efciency and protability. It
leverages the human intelligence through its experience and subjectivity, against that of articial intelligence through
its speed and objectivity. Brain-guided computations can perform certain tasks at a great speed, that computers
alone cannot, such as transcription, video moderation, etc and result in the increase of techno power savings. The
power of crowd computing has benetted the digital universe with increase in the energy savings and reduction in
harmful emissions, e-wastes. According to Murray et al. in crowd computing, opportunistic networks can be used
to spread computation and collect results. The mobile phones having large bandwidth (BW) is used as nodes and
crowd computing allows the distributed human interaction tasks with optimized utilization of various resources.
Cooke and Gillam describes crowd computing as the group of people who are offering their intellect and
computers to solve problems which are at present unsuitable for computational approaches. According to
Schneider crowd computing is a myriad of human interaction tools that allow the exchange of ideas, nonhierarchical
decision making, and full use of the world's mind space.

The emergence of ubiquitous computing is rapidly changing the industry scenario from an ownership-based approach to a subscription-oriented approach, where access to scalable infrastructure and services is on demand: anytime, anywhere. As shown in the accompanying figure, this massive growth of the industrial revolution demands high-end computing resources (HCR) with huge capital investments (CI). Inefficient management and handling of these resources results in critical penalties. The intensive increase in IR has the following impacts:

 The traditional approach demands high energy consumption/usage (EC) for computational operations, resulting in low resource utilization and wastage of energy.

 There is an increasing demand for heavy power consumption (PC); for example, a typical datacenter with 2,000 racks needs nearly 25 megawatts of power to operate, which results in higher operational cost along with the additional cost incurred on cooling.

 The continuous increase in carbon emissions (CO2) by industry is a dangerous pollutant that has adverse effects on the environment and all living organisms. Gartner [2007] estimated that the Information and Communication Technologies (ICT) industry generates about 2% of total global CO2 emissions. The proposed approach suggests that a decrease in emission volume of 15% to 30% is essential before the year 2025 to manage the global warming situation and keep the global temperature increase below 2 °C. It is predicted that if the global temperature rises by 3.6 °C, the polar ice caps and glaciers would melt, raising ocean levels by about 100 m and hence leading to the flooding of low-lying coastal areas of the earth.

 The increase in e-waste (ew) due to sophisticated advancements in electronic gadgets with large energy and power requirements.

Thus, energy consumption, power consumption, carbon emissions and e-waste from industrial revolution infrastructures have become key environmental concerns. Together, these factors pose a Big Threat to Environmental Sustainability – 'a state in which the demands placed on the environment can be met without reducing its capacity to allow all living beings to live well, today and in the future.'

Hence, GT is considered to be a function of IR:

GT = f(IR)    ...(4)
IR ∝ HCR · CI

IR depends directly on the heavy demand for HCR with huge CI. Therefore, to arrive at a green solution (GS), we need to control IR through appropriate utilization of the various resources:

GT ∝ 1/IR
IR = (EC · PC · CO2 · ew · CI)^n    ...(5)

where n > 0 and n ∈ {HCR | as per the application requirements}

Max(GT) = Min(IR) ≈ (appropriate resource utilization)

The proposed framework of GT gives the green solution to the Big Threat. GT is the result of crowd computing, a new energy-efficient research paradigm. Here, human intelligence and machine intelligence work hand in hand to create a miracle called 'GT'. EENACC (the energy-efficient novel approach using crowd computing) is a great boon for a green future: Save Energy – Save Earth. Crowd computing enables improvement while consuming less energy; it is a fantastic way to get a massive amount of computing power for a fairly cheap amount of energy. Energy consumption can be minimized by actually measuring the amount of energy being used, by monitoring the machines of the people who participate in or contribute to the solution of a problem as part of the crowd computing process. There should be efficient resource management for proper industrial work with less power consumption. One important objective, as well as an economic incentive for an organization, is a cutback in the energy budget of a datacenter. Crowd sourcing makes traditional datacenters more energy efficient by using technologies such as resource virtualization and efficient workload allocation. Server consolidation reduces energy consumption by allowing different workloads to share the same physical host using virtualization and by switching off unused servers. Thus, energy optimization can be achieved by combining resources as per current utilization, efficient virtual network topologies, and the thermal status of computing hardware and nodes. EC and PC can be further reduced by executing massive computations at slow speed, with the additional advantage of doing other work simultaneously using all the computing resources without disturbing the know-how of the machines. Thus, we bring together the unused computational power (PS) and save the additional amount of energy (ES) which would otherwise have been required in future. Today we can have plenty of computational power at a fairly cheap amount of energy, with efficient use of resources and heavy financial gains and profits.

A recent research survey of various energy-efficient solutions and strategies shows that shifting business applications to crowd computing can reduce, and eventually eliminate, the carbon footprints of organizations in the coming days. The use of crowd sourcing practices has shown the following reductions in carbon emissions:

 Small-scale industries – up to 80-85 percent
 Large-scale industries – up to 40-65 percent
 Small-scale industries – up to 65-80 percent

Electricity is a signicant source of energy which is used to power homes, business, and industries. The
combustion of fossil fuels to generate electricity is the largest single source of CO2 emissions in the world. The
lifetime of CO2 is hard to dene as the gas is not destroyed over time, but instead moves among different parts of the
ocean–atmosphere–land system and has a negative effect on all living organisms. Hence by accepting and using the
green solution there will denitely be a signicant reduction in the carbon wastes. Saving the electricity will directly
minimize the corresponding CO2 emissions. Also, there must be efcient management and handling of the
advanced electronic gadgets to reduce the e-wastes. There should be proper dispersal of the e-wastes or
appropriate recycling methods must be used for the e-wastes to have reduction in energy and power consumption.

MIT Media Lab Open Agriculture Initiative
(OpenAg)
Dr. Daggubati Sirisha - Assistant Professor
MVR Jyothisree - Assistant Professor

She was saying to me, "Small boys become big men through the influence of big men who care about small boys, and for that reason we must be truly thankful and grateful to our farmers." I was pondering whether being thankful and grateful is the most that we can do. Can't we do something more? I was petrified when I came across certain facts, like the fact that the average period between an apple being plucked from the farm and the apple being consumed by us, in our country, is 10 months. This means that by the time we eat it, the apple is nothing more than a sugar ball; it has lost all its antioxidants.

This proved that being grateful and thankful won't suffice. The other terrifying fact is that the population is exponentially increasing while the number of farmers is exponentially decreasing due to climate change and water drought. So what will we do for food?

In 2015, engineers from MIT came up with the idea of food computers. This made climate democratic, such that farming can be done anywhere, independent of climate. The plants were tracked: each plant had a plant profile, and all plants had a plant "Facebook", thereby monitoring their progress day by day. If the pH went down, they adjusted it immediately, and if humidity was low they adjusted it accordingly. At the end of harvest they obtained nutrient-rich food. Every 8 seconds, parameters like temperature, humidity, CO2, light spectrum, light intensity, water, dissolved oxygen, etc. were tracked. AI monitored crops with stationary cameras, tracking growth and potential diseases over time. We estimate that with computer vision and AI, 5-10% of crops can be saved by early detection.
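
The tracking-and-adjusting cycle described above is, at heart, a simple control loop. Below is a minimal, hypothetical Python sketch of such a loop; the sensor and actuator functions and the target ranges are invented placeholders, not part of any real OpenAg interface:

import time

# Hypothetical placeholders: in a real food computer these would talk
# to pH, humidity and other sensors and to dosing pumps/humidifiers.
def read_sensors() -> dict:
    return {"ph": 6.1, "humidity": 52.0}   # stand-in readings

def adjust(parameter: str, direction: str) -> None:
    print(f"adjusting {parameter} {direction}")

TARGETS = {"ph": (5.8, 6.5), "humidity": (60.0, 80.0)}  # assumed ranges

while True:
    readings = read_sensors()
    for name, (low, high) in TARGETS.items():
        value = readings[name]
        if value < low:
            adjust(name, "up")      # e.g. add pH-up solution, mist the air
        elif value > high:
            adjust(name, "down")
    time.sleep(8)   # the article's 8-second tracking cadence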

As a result of this, we can be sure that the food we consume will be nutrient-rich, not the conundrum it is now. As Caleb Harper said, "The future of food is nothing but creating a network of shared tools that empowers the next 1 billion people to simply ask: what if?" It is roughly estimated that in a few years there will be 1 billion farmers all over the world. So, with the tremendous development in technology, it is time to stop merely being thankful and to start contributing from our side.

Demystifying Quantum Computing
Varun Kumar Palakodeti
B.E. 4/4 CSE – A - 2451-16-733-044

Quantum computing is a relatively young, booming field of computer science, one that promises to be significantly faster at attacking NP-complete problems. Big-name corporations like Microsoft, Google and IBM are investing millions into this technology gold rush, and in the recent budget session our finance minister Mrs. Nirmala Sitharaman announced $1.12 billion for the area of quantum technologies in India.

The emergence of the technology in the 2010s is significant: Moore's Law is plateauing, as is evident from transistor sizes having already shrunk to 7 nm. Any further reduction in size causes electrons to experience quantum tunneling; the transistor fails to work normally because the familiar laws of classical physics stop applying at the 4-6 nm scale, where quantum physics takes over. Thus, we can say that nature is inevitably forcing us to shift towards quantum. Coming to the mathematical/computational aspect of the technology, it needs to be viewed as an entirely different paradigm of computing: unlike traditional Turing machines, this is not a deterministic computation but a probabilistic one. In simple terms, when an operation is performed on a bit of a normal computer, it changes from one state to another, and the output is entirely predictable using truth tables; but with a quantum computer and quantum bits (qubits), the output is given as probabilities of zero and one. This is known as the superposition of a qubit.

A very popular myth regarding quantum computers is that they will replace the conventional computers we use today. The reality is far different: quantum processing units, or QPUs, when they come into existence, will work together with the conventional CPU, just as the GPU does today. GPUs have a particular job to perform, one at which the CPU cannot match the GPU's performance. Likewise, the QPU will have a unique job that the CPU cannot perform; hence, CPUs will not be replaced entirely. Another point of interest is that quantum computers could solve complicated problems like protein folding, which could help find a cure for cancer, and crack RSA cryptosystems. But in reality it would take around two to three decades to have a QPU with such potential.

In terms of programming, a quantum computer uses a completely different style of code; it is safe to say that there would not be code like the high-level programming languages of today, but rather a circuit constructed from quantum (reversible) logic gates on quantum wires. The circuit and the gates that constitute it are not physical but abstract; technically speaking, they would be vibrations on a quantum field. For starters, there are a few quantum programming languages, like Q# (pronounced 'q sharp') from Microsoft, QISKIT (Quantum Information Science Kit) from IBM, and Google's Cirq. In most of these quantum-specific programming languages, we don't write code to be converted into binary or assembly that a computer later reads bit by bit, the way humans read a book word by word. Rather, the circuit written in code for quantum computers is like music, where each gate acts as a musical note.
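
As a small taste of this circuit-as-code style, here is a minimal sketch using IBM's Qiskit (assuming a recent Qiskit release with the qiskit-aer simulator package installed). It puts one qubit into superposition and samples it, so roughly half the measurements read 0 and half read 1:

from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator   # assumes the qiskit-aer package

# One qubit, one classical bit to hold the measurement result.
circuit = QuantumCircuit(1, 1)
circuit.h(0)            # Hadamard gate: puts qubit 0 into superposition
circuit.measure(0, 0)   # collapse the superposition into a 0 or a 1

# Sample the circuit 1000 times on a local simulator.
counts = AerSimulator().run(circuit, shots=1000).result().get_counts()
print(counts)           # e.g. {'0': 503, '1': 497}: probabilistic output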

"Programming a traditional computer is like writing a book: the better your lines, the better the performance. But programming a quantum computer is like composing music: the better the circuit, the better the rhythm."

Breast Cancer Analysis
P. Nischala
M.Tech. [CSE] – 2451-19-742-007

I. Introduction:
There has been a growing increase in the incidence of breast cancer, which is still the most significant cancer-related cause of female mortality. In spite of significant progress in the management of breast cancer, the search for a curative treatment is still ongoing. Although a number of crucial studies and clinical trials have significantly contributed to the improvement of breast cancer care, many often remain unknown to the majority of clinicians, suggesting a need to identify at least the top 100 most cited studies in the field. Breast cancer is most frequently discovered as an asymptomatic nodule on a mammogram. A new breast symptom should be taken seriously by both patients and their doctors because of the possibility of an underlying breast cancer at almost any age.

Machine learning a sub-eld of Articial Intelligence is used to achieve thorough understanding of the learning
process and to implant learning capabilities in computer system. It has various applications in the areas of science,
engineering and the society. Machine learning approaches can provide generalized solutions for a wide range of
problems effectively and efciently. The machine learning approaches make computers more intelligent. Machine
learning helps in solving prognostic and diagnostic problems in a variety of medical domains. It is mainly used for
prediction of disease progression, for therapy planning, support and for overall patient management. Hypothesis
from the patient data can be drawn from expert systems mechanisms that use medical diagnostic reasoning. As
mentioned earlier breast cancer is dreadful, so there is a need for computerized systems that emulate the doctors
expertise in detecting the disease and help in accurate diagnosis. Machine learning has various approaches for
building such systems. There is no single approach for all the problems and each approach perform differently for
different problems. So there is a need for nding the approaches that perform well for a particular problem. In this
thesis various approaches are used for breast cancer diagnosis and they are compared to nd the best
performingones.

II. Problem Statement

In the current system, the tumour images and screenings take a lot of time to be analyzed by radiologists before a mammogram report is produced. The mammogram report consists of certain characteristics of the tumour, such as its radius, shape and texture. All these characteristics are later analyzed by oncologists, letting them decide which factors contribute to a malignant tumour. The entire process takes a few weeks and also puts a lot of pressure on the patient. To reduce the stress and cost, a new system is required which generates instant results and also gives the patient some relief.

III. Proposed System

This project makes accurate predictions about the presence of a tumour in a patient based upon the test report. The project eliminates the need for a doctor's consultation in order to find out whether a benign or malignant tumour is present. As it is a well-trained machine learning model, the accuracy with which it gives results is very high: the accuracy of the model is calculated to be 90%, making it an effective way to address the existing problem.

IV. Domain Information
Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow computers to learn automatically, without human intervention or assistance, and adjust actions accordingly.

Some machine learning methods:

Machine learning algorithms are often categorized as supervised or unsupervised.
 Supervised machine learning algorithms can apply what has been learned in the past to new data, using labeled examples to predict future events. Starting from the analysis of a known training dataset, the learning algorithm produces an inferred function to make predictions about the output values. The system is able to provide targets for any new input after sufficient training. The learning algorithm can also compare its output with the correct, intended output and find errors in order to modify the model accordingly.
 In contrast, unsupervised machine learning algorithms are used when the information used to train is neither classified nor labeled. Unsupervised learning studies how systems can infer a function to describe a hidden structure from unlabeled data. The system doesn't figure out the right output, but it explores the data and can draw inferences from datasets to describe hidden structures in unlabeled data.

V. Experimentation Analysis
The dataset containing the report details is initially loaded into the program, and the attributes with high correlation are chosen. The get_dummies() method is used to encode the categorical diagnosis values 'M' and 'B'. The cases are split into test and train sets and analyzed. The training data is fit to the model, and the model then predicts values for the test data. The accuracy of the model is calculated by counting how many test cases are predicted correctly and how many are predicted wrongly.
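
The sketch below mirrors that workflow with pandas and scikit-learn. The file name breast_cancer.csv and its 'diagnosis' column holding 'M'/'B' labels are assumptions about the dataset used, not details confirmed by the project:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Assumed file/column names: a CSV with a 'diagnosis' column of 'M'/'B'.
df = pd.read_csv("breast_cancer.csv")

# Encode the categorical label: 'M' (malignant) vs 'B' (benign).
y = pd.get_dummies(df["diagnosis"], drop_first=True)  # keeps the 'M' column
X = df.drop(columns=["diagnosis"])

# Split into train and test cases, then fit the model on the training data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y.values.ravel(), test_size=0.2, random_state=42)
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Accuracy = fraction of test cases predicted correctly.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))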

VI. Architecture of Proposed System

An architecture diagram is a diagram of a system in which the principal parts or functions are represented by blocks connected by lines that show the relationships of the blocks. The block diagram is typically used for a higher-level, less detailed description, aimed more at understanding the overall concepts and less at understanding the details of implementation.

VII. Algorithms
Logistic Regression:
Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable, although many more complex extensions exist. In regression analysis, logistic regression (or logit regression) estimates the parameters of a logistic model (a form of binary regression). Mathematically, a binary logistic model has a dependent variable with two possible values, such as pass/fail, represented by an indicator variable whose two values are labelled "0" and "1". In the logistic model, the log-odds (the logarithm of the odds) for the value labelled "1" is a linear combination of one or more independent variables ("predictors"); the independent variables can each be a binary variable (two classes, coded by an indicator variable) or a continuous variable (any real value). The corresponding probability of the value labelled "1" can vary between 0 (certainly the value "0") and 1 (certainly the value "1"), hence the labelling; the function that converts log-odds to probability is the logistic function, hence the name. The unit of measurement for the log-odds scale is called a logit, from "logistic unit", hence the alternative names. Analogous models with a different sigmoid function instead of the logistic function can also be used, such as the probit model; the defining characteristic of the logistic model is that increasing one of the independent variables multiplicatively scales the odds of the given outcome at a constant rate, with each independent variable having its own parameter; for a binary dependent variable this generalizes the odds ratio.

Logistic regression is a statistical method for predicting binary classes. The outcome or target variable is dichotomous in nature; dichotomous means there are only two possible classes. For example, it can be used for cancer detection problems. It computes the probability of an event occurring.

It is a special case of linear regression where the target variable is categorical in nature. It uses the log of odds as the dependent variable. Logistic regression predicts the probability of occurrence of a binary event utilizing a logistic function.

Linear Regression Equation:

y = b0 + b1·x1 + b2·x2 + ... + bn·xn

where y is the dependent variable and x1, x2, ..., xn are explanatory variables.

Sigmoid Function:
Logistic regression is named for the function used at the core of the method, the logistic function.

The logistic function, also called the sigmoid function, was developed by statisticians to describe properties of population growth in ecology, rising quickly and maxing out at the carrying capacity of the environment. It is an S-shaped curve that can take any real-valued number and map it into a value between 0 and 1, but never exactly at those limits:

sigmoid(z) = 1 / (1 + e^(-z))

Applying the sigmoid function to the linear regression equation gives the probability of the positive class:

p = 1 / (1 + e^-(b0 + b1·x1 + ... + bn·xn))
Properties of Logistic Regression:

 The dependent variable in logistic regression follows the Bernoulli distribution.
 Estimation is done through maximum likelihood.
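
To see these formulas in action, here is a tiny sketch applying the sigmoid to a linear combination of inputs; the coefficient and input values are arbitrary examples, not fitted parameters:

import numpy as np

def sigmoid(z):
    """Logistic function: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Arbitrary example coefficients b0..b2 and one input vector (x1, x2).
b = np.array([-1.0, 0.8, 0.3])      # intercept and two weights, assumed
x = np.array([1.0, 2.5, 4.0])       # leading 1.0 multiplies the intercept

z = b @ x                           # linear combination: b0 + b1*x1 + b2*x2
p = sigmoid(z)                      # probability of the class labelled "1"
print(f"log-odds z = {z:.2f}, P(y=1) = {p:.3f}")   # z = 2.20, P = 0.900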

VIII. Results and Discussions

IX. Conclusion
In this project, we have outlined a technique for the problem of breast cancer detection. Although different classification techniques have been developed for cancer classification, there are still many drawbacks in their classification capability. In order to enhance breast cancer classification, in this project we proposed a new framework for breast cancer classification by combining mammogram wavelet transformation and a neural network. According to the results, classification based on the locations of any abnormalities that may be present, the character of the background tissue, and the class of abnormality present does not always show the desired result. Finally, the evaluation and performance analysis of the proposed approach clearly shows that the preliminary results are promising for breast cancer discovery at an early stage.

Topic Modeling for Online Social Network
Dr. Akhil Khare
International Journal of Research in Advent Technology, Vol. 6, No. 9, September 2018
E-ISSN: 2321-9637, available online at www.ijrat.org

Abstract
A huge amount of data is generated through the use of social networks. Topic modelling is the study and analysis of this data to find which topics are discussed heavily in the network. Existing studies show that LDA is an effective method for topic modelling, and it has been shown to produce good results over many domains. Considering social attributes along with the content on the network increases the accuracy and efficiency of the hot topics identified. We propose a modified LDA model that considers different attributes of the social network.

Conclusion
The experimentation shows that our proposed approach outperforms the existing methods. The consideration of the social attributes of users gives better accuracy compared to the traditional LDA and Twitter-LDA models. The perplexity values of MP-LDA are much lower than those of the other two models; thus MP-LDA models the data better than the traditional methods.
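
For readers unfamiliar with the baseline, here is a minimal sketch of standard LDA topic modelling with the gensim library. The toy documents are invented for illustration, and the paper's MP-LDA modification is not reproduced here:

from gensim import corpora
from gensim.models import LdaModel

# Toy corpus standing in for social network posts (illustration only).
posts = [
    "election votes polling results leaders",
    "cricket match score team innings",
    "election campaign rally leaders speech",
    "cricket team bowling batting match",
]
texts = [p.split() for p in posts]

# Build the vocabulary and bag-of-words corpus, then fit standard LDA.
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary,
               passes=10, random_state=0)

for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)   # top words per discovered topic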

Social Network Topic Diffusion for Influential Node Cluster Identification
Dr. Akhil Khare
International Journal of Allied Practice, Research and Review, website: www.ijaprr.com (ISSN 2350-1294)
Research Scholar, Faculty of Engineering, Udaipur, Rajasthan, India

Abstract
Social networking and e-commerce are the backbone of information diffusion. This paper contributes an analysis of information influence over the internet, and also presents a proposed Modified Topic Diffusion Model algorithm. Quantitative analysis is usually carried out for information diffusion study; hence, for the proposed study, we analyzed visitor responses on the Twitter and Flipkart web portals. Data clusters are optimized to identify the most influential information spreaders. Based on this information, analysis is done to identify the nearest influential network nodes. The analysis shows positive results for third-party web clients on the unsupervised network.

Conclusion
We proposed an integrated and novel methodology to model opinion/idea diffusion in web forums. The present modified topic diffusion model algorithmic approach is used to analyze topic selection and knowledge diffusion for information spreader analysis and subsequent node analysis. The model was evaluated on a large dataset of Twitter and Flipkart customer reviews. The analysis results revealed that the proposed model performed well in modeling topic diffusion for the selected web forums.

Identication of Malicious node for Effective


Top-k Query Processing inMANETS
Dr. AkhilKhare
International Journal on Future Revolution in Computer Science & Communication
Engineering ISSN: 2454-4248 Volume: 4 Issue: 10 14 – 19

Abstract
In mobile ad-hoc networks, query processing is optimized using top-k query processing. The accuracy of the results can be lowered if a malicious node exists. In our proposed system, we assume that the malicious node performs a Data Replacement Attack (DRA), in which it replaces necessary data sets with false data sets. In our malicious node identification method, the query-issuing node receives reply messages from the nodes; if the query-issuing node detects a DRA, it performs subsequent inquiries with the nodes which received information from the malicious node. In this way the query-issuing node identifies the malicious node and shares the information with the neighbouring nodes. The nodes then share the information regarding the malicious node with other nodes which are far away. Each node tends to identify the malicious node in the network and then floods the information. The query-issuing node groups the nodes based on the similarity of the information on malicious nodes detected by the nodes, and identification of the malicious node is performed based on the results of the malicious node identifications by these groups.

Conclusion
In this system, we have proposed methods for the identification of malicious nodes for effective top-k query processing, and detection of the DRA attack is performed. The system helps in preventing or avoiding an attack in its initial stage. It can identify the addresses of all nodes on the selected routing path from source to destination after the source has received the RREP message. The system helps in improving the packet delivery rate and in achieving reduced overhead. As future work we plan to extend the system to multiple malicious nodes and also to design a message authentication method to prevent malicious nodes from performing false notification attacks.

Unied Framework For Coupling Measurement
In Object Oriented Systems
Dr. AkhilKhare
JASC: Journal of Applied Science and Computations ISSN NO: 1076-5131

Abstract
Coupling measurement has been a focus of study for many software professionals over the last few years. Object-oriented programming is an efficient programming technique because of features such as reusability and data abstraction. Coupling is a very important factor in object-oriented programming for software quality measurement, and coupling measures are used as predictors of software quality attributes such as fault proneness, impact analysis, ripple effects of changes, and changeability. Many researchers have worked on coupling measurement and found various dimensions of coupling, including static coupling measurement, dynamic coupling measurement, class-level coupling, and object-level coupling. But there is still no worldwide-accepted standardization in the field of coupling measurement. As a result, it is very difficult to select any existing measure and obtain a clear picture of the state of the art of coupling measurement for object-oriented systems. This paper analyses some previously proposed terminologies of coupling measurement and discusses the usefulness of each. Coupling in software has been linked with maintainability, and existing metrics are used as predictors of external software quality attributes such as fault-proneness, impact analysis, ripple effects of changes, changeability, etc. Many coupling measures for object-oriented (OO) software have been proposed, each of them capturing specific dimensions of coupling.

Conclusion
In this paper, we describe and evaluate some recently introduced coupling metrics for object-oriented (OO) design. We present an investigation into the run-time behavior of objects in Java programs, using specially adapted coupling metrics. These new metrics seek to quantify coupling at different levels of granularity, that is, at the class-class and object-class levels. For each measure, we indicate the type of coupling it uses, what factors determine the strength of coupling, whether it is an import or export coupling measure, how indirect coupling is accounted for, and how inheritance is dealt with. We have analyzed two coupling metrics proposed by the unified framework and selected the measures which are sufficient to predict the complexity of object-oriented software. We have collected the class-wise values of each measure from our code implemented in Java. The values of each measure are useful for analyzing the complexity of any class. The paper simplifies the work of coupling measurement; the proposed metrics could be further refined by adopting a more detailed formalism for each measure.

A Basic Analysis of Machine Learning and Its Algorithm Types
Dr. Sesham Anand
JASC: Journal of Applied Science and Computations, Volume VI, Issue I, January 2019, ISSN: 1076-5131

Abstract
Over the past few decades, Machine Learning (ML) has evolved from the endeavour of a few computer enthusiasts exploring the possibility of computers learning to play games, and a part of mathematics (statistics) that seldom considered computational approaches, into an independent research discipline that has not only provided the necessary base for statistical-computational principles of learning procedures, but has also developed various algorithms that are regularly used for text interpretation, pattern recognition, and many other commercial purposes, and has led to a separate research interest in data mining to identify hidden regularities or irregularities in social data that grows by the second. This paper focuses on explaining the concept and evolution of machine learning and some of the popular machine learning algorithms.

Conclusion
The foremost target of ML researchers is to design more efficient (in terms of both time and space) and practical general-purpose learning methods that can perform better over a widespread domain. In the context of ML, the efficiency with which a method utilizes data resources is also an important performance paradigm, along with time and space complexity. Higher accuracy of prediction and humanly interpretable prediction rules are also of high importance. Being completely data-driven and having the ability to examine a large amount of data in small intervals of time, ML algorithms have an edge over manual or direct programming; they are also often more accurate and not prone to human bias.

Consider the following scenarios. First, the development of software to solve perception tasks using sensors, like speech recognition and computer vision: it is easy for anyone to label an image of a letter with the alphabet it denotes, but designing an algorithm to perform this task is difficult. Second, the customization of software to the environment it is deployed in: consider speech recognition software that has to be customized to the needs of the customer, e-commerce sites that customize the products displayed to each customer, or an email reader that enables spam detection according to user preferences. Direct programming lacks the ability to adapt when exposed to a different environment.

An Attribute-Based Storage System with Secure Deduplication in a Hybrid Cloud Setting
Dr. Sesham Anand
International Journal of Management, Technology and Engineering, Volume IX, Issue V, May 2019, ISSN: 2249-7455

Abstract
Attribute-based encryption (ABE) has been widely used in cloud computing, where a data provider outsources his/her encrypted data to a cloud service provider and can share the data with users possessing specific credentials (or attributes). However, the standard ABE system [11] does not support secure deduplication, which is crucial for eliminating duplicate copies of identical data in order to save storage space and network bandwidth. In this paper, we present an attribute-based storage system with secure deduplication in a hybrid cloud setting, where a private cloud is responsible for duplicate detection and a public cloud manages the storage. Compared with prior data deduplication systems, our system has two advantages. Firstly, it can be used to confidentially share data with users by specifying access policies rather than sharing decryption keys. Secondly, it achieves the standard notion of semantic security for data confidentiality, while existing systems only achieve it under a weaker security notion. In addition, we put forth a method to transform a ciphertext over one access policy into ciphertexts of the same plaintext under other access policies without revealing the underlying plaintext.

Conclusion
Attribute-based encryption (ABE) [11] has been widely used in cloud computing, where data providers outsource their encrypted data to the cloud and can share the data with users possessing specified credentials. On the other hand, deduplication is an important technique to save storage space and network bandwidth by eliminating duplicate copies of identical data. However, standard ABE systems [11] do not support secure deduplication, which makes them costly to apply in some commercial storage services. In this paper, we presented a novel approach to realize an attribute-based storage system supporting secure deduplication. Our storage system is built under a hybrid cloud architecture, where a private cloud handles the computation and a public cloud manages the storage. The private cloud is provided with a trapdoor key associated with the corresponding ciphertext, with which it can transform the ciphertext over one access policy into ciphertexts of the same plaintext under other access policies without being aware of the underlying plaintext. After receiving a storage request, the private cloud first checks the validity of the uploaded item through the attached proof. If the proof is valid, the private cloud runs a tag-matching algorithm to see whether the same data underlying the ciphertext has already been stored. If so, whenever necessary, it regenerates the ciphertext into a ciphertext of the same plaintext over an access policy which is the union set of both access policies. The proposed storage system enjoys two major advantages. Firstly, it can be used to confidentially share data with other users by specifying an access policy rather than sharing the decryption key. Secondly, it achieves the standard notion of semantic security, while existing deduplication schemes only achieve it under a weaker security notion.
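
To give a flavour of the duplicate-detection step, here is a minimal, greatly simplified sketch: the private cloud keeps a tag per stored item (here just a SHA-256 digest of the plaintext) and matches incoming tags against its index. Real ABE-based deduplication uses far more elaborate cryptographic tags and proofs; this only illustrates the tag-matching idea:

import hashlib

# Simplified stand-in for the private cloud's tag index:
# tag -> identifier of the ciphertext already kept in the public cloud.
tag_index: dict[str, str] = {}

def dedup_tag(plaintext: bytes) -> str:
    """Toy deduplication tag: a SHA-256 digest of the plaintext."""
    return hashlib.sha256(plaintext).hexdigest()

def store(plaintext: bytes, ciphertext_id: str) -> str:
    """Return the id of the stored copy, reusing a duplicate if present."""
    tag = dedup_tag(plaintext)
    if tag in tag_index:                 # tag matching: duplicate found,
        return tag_index[tag]            # no second copy is stored
    tag_index[tag] = ciphertext_id       # first copy: record its tag
    return ciphertext_id

print(store(b"report.pdf contents", "ct-001"))   # ct-001 (new)
print(store(b"report.pdf contents", "ct-002"))   # ct-001 (deduplicated)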

Data Mining Planning Model Based
On Socio-EconomicStatus
Dr. Sesham Anand
International Journal of Research, Volume VIII,
Issue V, MAY/2019, ISSN NO: 2236-6124

Abstract
Planning a trip not only depends on the travelling cost, time and path, but also on the socio-economic status
of the traveller. Data mining techniques support numerous applications of intelligent transportation systems (ITSs).
This paper critically reviews various data mining techniques for achieving trip planning in ITSs. The literature review
starts with the discussion on the contributions of descriptive and predictive mining techniques in ITSs, and later
continues on the contributions of the clustering techniques. Being the largely used approach, the use of cluster
analysis in ITSs is assessed. The relevance of the socio-economic constraints is defined using correlations, whereas
the frequent as well as the feasible attributes are mined through the sequential pattern mining approach. The
proposed model maintained a substantial trade-off between multiple performance metrics, though the trip mean
model performed statistically

Conclusion
This paper has introduced a new trip planning model using data mining approaches. Real-time travel
information has been acquired from the Indian city of Hyderabad, and the experimentation has been carried out to
demonstrate the performance of the proposed planning model. The proposed planning model was able to
produce the socio-economic constraints which are highly relevant to the trip, rather than its frequency. Three
levels of performance investigation have revealed that the proposed model has maintained an adequate trade-off
between all these performance metrics.

20
Novel Design of Machine Learning for Malicious
Software Analysis – Malicious URL Case Study
Dr. Sesham Anand
International Journal of Interdisciplinary Research and Innovations ISSN 2348-1226 (online)
Vol. 6, Issue 4, pp: (292-298), Month: October - December 2018, Available at: www.researchpublish.com

Abstract

This research work proposes a novel and innovative idea of the application of Machine Learning to malicious
software analysis, with a case study of malicious URL implementation and validation. Traditionally, Data Mining and
its associated tools were developed for malware detection. Data Mining and Machine Learning strategies have also
been used in the literature for cyber security, and deep learning is also used for malware analysis. Machine learning
techniques for malware detection include, in supervised learning, Hidden Markov Models (HMM), Profile Hidden
Markov Models (PHMM), Support Vector Machines (SVM) etc., and in unsupervised learning, Principal
Component Analysis (PCA), K-means etc. Machine learning for web mining includes strategies like Web Structure
Mining (web crawlers / indexer / ranking – the PageRank algorithm), Web Content Mining (parsing), Natural Language
Processing (information retrieval models – TF-IDF, Latent Semantic Analysis (LSA), Doc2Vec (word2vec), the CBOW
model) and post-processing (Latent Dirichlet Allocation and opinion mining (sentiment analysis)) etc.

Conclusion
This research paper proposed a novel and innovative idea of the application of Machine Learning to malicious
software analysis, with a case study of malicious URL classification. Future work includes applying this innovative
framework as technology transfer to the software development industry, for secure software engineering and for
systems and software assurance in malware analysis, exploring research areas like code and design flaw
vulnerabilities and malware-analysis-driven use cases. Distributed systems involve high-performance computing
environments like cloud computing, Big Data etc. Design patterns and paradigms for scalable, reliable services need
to be explored using machine learning, with implementations using Docker, Kubernetes, DC/OS etc. Further work
involves implementations using HMM, PHMM, PCA, SVM and K-means clustering for malware analysis and detection.
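
As one concrete instance of the ideas above, a malicious-URL classifier can be sketched with character n-gram TF-IDF features and a linear SVM. This is a minimal scikit-learn sketch; the tiny inline dataset and the chosen n-gram range are illustrative assumptions, not the paper's data or exact pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Illustrative toy samples (label 1 = malicious, 0 = benign).
urls = [
    "http://paypal.com.secure-login.verify-account.ru/update",
    "http://free-prizes-winner.xyz/claim?id=123",
    "https://www.wikipedia.org/wiki/Machine_learning",
    "https://github.com/scikit-learn/scikit-learn",
]
labels = [1, 1, 0, 0]

# Character n-grams capture lexical patterns (odd TLDs, long subdomain
# chains, query junk) without manual feature engineering.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LinearSVC(),
)
model.fit(urls, labels)

print(model.predict(["http://login-verify.account-update.tk/x"]))
```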

21
An Approach to Establish Correlation of Courses
to POs & PSOs and Attainment of COs, POs & PSOs
Dr.H. Jayasree, Dr.G. Kanaka Durga
IOSR Journal of Research & Method in Education - ISSN: 2320–7388

Abstract
Outcome based education (OBE) is a student-centered instruction model that focuses on measuring student
performance through outcomes. Outcomes include knowledge, skills and attitudes. An important component of
OBE is the attainment of Course Outcomes (COs), Program Outcomes (POs) and Program Specific Outcomes
(PSOs). Criterion 3 of the Self-Assessment Report (SAR) emphasizes the attainment of COs, POs and PSOs. The
correlation between a CO and a PO describes the level at which a particular PO is addressed through that CO. The
correlation is justified based on the number of sessions mapped. Direct and indirect assessment tools are applied for
measuring the attainment of COs, POs and PSOs. The paper presents a simple yet robust approach followed for
establishing correlation between COs and POs/PSOs, and for measurement of the attainment of COs, POs and
PSOs as per the guidelines of the SAR.

Conclusion
In this paper a more realistic approach for establishing CO-PO & PSO correlation and for measurement or
computation of the attainment of COs, POs and PSOs has been presented. The attainment values thus obtained can
be compared with the set target levels, and an action plan can be suggested for those POs & PSOs whose attained
values are less than the target value.
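
One common computation behind such attainment measurements can be sketched as follows. The sketch assumes a typical rubric (attainment level 3/2/1 when at least 70/60/50 percent of students meet the CO target) and a CO-PO correlation matrix on the usual 1-3 scale; the thresholds and data are illustrative, not necessarily the exact rubric of the paper:

```python
import numpy as np

def co_attainment_level(percent_meeting_target):
    """Map % of students meeting a CO target to an attainment level (0-3)."""
    if percent_meeting_target >= 70: return 3
    if percent_meeting_target >= 60: return 2
    if percent_meeting_target >= 50: return 1
    return 0

# Illustrative direct-assessment results for 4 COs of one course.
pct = [82, 65, 58, 45]
co_levels = np.array([co_attainment_level(p) for p in pct], dtype=float)

# Illustrative CO-PO correlation matrix (rows: COs, cols: 3 of the POs;
# entries 0-3 as justified by the number of sessions mapped).
co_po = np.array([
    [3, 2, 0],
    [2, 3, 1],
    [0, 2, 3],
    [1, 0, 2],
])

# PO attainment: average CO level weighted by correlation strength.
weights = co_po.sum(axis=0)
po_attainment = (co_levels @ co_po) / np.where(weights == 0, 1, weights)
print(po_attainment.round(2))   # compare against the set target levels
```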

22
Heart Disease Prediction System (HDPS)
Dr.H.Jayasree
JASC: Journal of Applied Science and Computations, Volume VI, Issue VI, JUNE/2019
ISSN NO: 1076-5131

Abstract
The healthcare industry collects huge amounts of health care data which, unfortunately, are not mined and
analyzed in a proper manner to discover hidden information, to take decisions effectively, and to discover the
relations that connect patterns. The aim of this paper is to develop a decision support Heart Disease Prediction
System (HDPS) using effective machine learning algorithms. Using the medical profile of the patient (age, gender,
blood pressure, blood sugar, cholesterol, chest pain, ECG graph etc.), it can predict the likelihood of the patient
getting a heart disease. The likelihood (class label) may be of 5 stages: no, low, medium, high and very high. If an
unknown sample comes, the system will predict the class label of the sample. Hence two basic functions, namely
classification and prediction, will be performed. Initially binary classification is performed to find whether there is a
likelihood of disease. If yes, then multiclass classification is used to classify the disease among the remaining four
stages. It is implemented in Python as an application which takes medical test parameters as input. A comparison
between the algorithms' performances is also depicted. It can be used as a training tool to train nurses and medical
students to diagnose patients with heart disease.

Conclusion
The Heart Disease Estimation System is an enhancement that makes estimation systems more effective by
having more accuracy and a better confidence level on the data at present, based on its knowledge and behavior.
The aim of the Heart Disease Estimation System is to provide the right patient with the right information at the right
time. The heart disease estimation system is based on two principal classification models: a binary model (which
classifies the test data into either of the 2 classes available) and a multiclass classification model (which classifies the
test data into one of the remaining classes available). An important aspect of this model is its high capability of
producing a good confidence level. It is highly accurate at determining the accuracy of a 'Yes' case. The proposed
system can recommend and prescribe a suitable medication to the patient and immediately let the patient know the
information needed at the right time.
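
The two-stage classification described above can be sketched as a binary gate followed by a multiclass model trained only on positive cases. This is a scikit-learn sketch with a synthetic stand-in for the medical-profile data; the feature layout, models and random data are assumptions, not the paper's exact setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))            # age, BP, sugar, cholesterol, ...
stage = rng.integers(0, 5, size=500)     # 0=no, 1=low, 2=medium, 3=high, 4=very high

# Stage 1: binary model -- is there any likelihood of disease?
binary_y = (stage > 0).astype(int)
gate = LogisticRegression(max_iter=1000).fit(X, binary_y)

# Stage 2: multiclass model over the four disease stages,
# trained only on the samples where disease is present.
mask = stage > 0
grader = RandomForestClassifier(n_estimators=100, random_state=0)
grader.fit(X[mask], stage[mask])

def predict_stage(x):
    x = x.reshape(1, -1)
    if gate.predict(x)[0] == 0:
        return 0                          # "no" likelihood
    return int(grader.predict(x)[0])      # low / medium / high / very high

print(predict_stage(X[0]))
```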

23
Feature Selection & Classication on Human
Activity Recognition using Machine Learning Approaches
B. Saritha
IJSRD - International Journal for Scientic Research & Development|
Vol. 6, Issue 07, 2018 | ISSN (online): 2321-0613

Abstract
Human Activity Recognition (HAR) is a well-researched area that aims to recognize the activity performed by
a person. But practical applications often encounter complications such as “the curse of dimensionality” and
“redundant features”, which result in their poor performance. Hence, the need for feature selection is very
imperative in such cases. This paper aims at identifying the subsets of the HAR dataset that consist of the most
important and relevant features using the Boruta feature selection algorithm, because a dataset with a smaller
number of more relevant features requires less computational time to train the classifier and also improves the
accuracy rate of the classification model. Upon identification, this paper also implements the Support Vector
Machine (SVM) classification algorithm on the identified subsets as well as the full HAR dataset, furthermore
comparing the accuracy rates attained by the classifier on the different subsets as well as their computational time.

Conclusion
On implementing the Boruta feature selection algorithm, the computational time taken to train the classifier
decreased, and the accuracy rate of the trained classification model improved with the right subsets of the HAR
dataset. From all the results obtained, we can say that the HAR dataset is more linearly separable, as the SVM linear
kernel obtained the highest accuracy for all test cases. The scope of this paper is to enhance the accuracy rate of a
classification model by implementing feature selection on the HAR dataset, which identifies subsets that consist of
only the most important and relevant features. This research has assisted us in understanding the HAR dataset and
the mechanism of the algorithms.
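
The pipeline described above (Boruta to keep only relevant features, then an SVM on the reduced set) can be sketched with the BorutaPy port of the algorithm. This sketch assumes the boruta Python package and a generic (X, y) HAR matrix with random placeholder data; it is not necessarily the authors' exact configuration:

```python
import numpy as np
from boruta import BorutaPy
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Placeholder HAR data: (n_samples, n_features) sensor features, activity labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = rng.integers(0, 6, size=300)

# Boruta compares real features against shuffled "shadow" copies and keeps
# only those that beat the shadows significantly.
forest = RandomForestClassifier(n_jobs=-1, max_depth=5)
selector = BorutaPy(forest, n_estimators="auto", random_state=0)
selector.fit(X, y)

X_sel = X[:, selector.support_]           # confirmed features only
print("kept", X_sel.shape[1], "of", X.shape[1], "features")

# The linear kernel performed best in the paper's experiments.
clf = SVC(kernel="linear").fit(X_sel, y)
```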

24
A Multi Order key Sharing and Dual
Channel Based Secure Routing for WSN
Mohammed Abdul Azeem
International Journal of Recent Technology and Engineering (IJRTE)
ISSN: 2277-3878, Volume-7 Issue-4, November 2018

Abstract
Growth in remote monitoring and automation of maintenance across all fields of industrialization is
motivating sensor research more and more. The major challenge faced by practitioners and researchers is to keep
up with the advancing data accumulation and analytics demands, as the resource capabilities of sensors, especially
wireless sensors, are limited in terms of battery or energy, processing capacity and security. A number of wireless
sensor networks collect mission-critical and sensitive data for processing. Also, the feedback systems through the
same sensor networks are equally important and sensitive. Due to the fragile structure of the network, it is often
vulnerable to attacks. A number of studies have demonstrated the types of attacks and their effect on the network.
The identified attacks are highly versatile in nature, thus leaving little scope for a single solution to prevent them.
Numerous research attempts have been presented to date to find the most effective method of securing wireless
sensor networks. Nevertheless, all these solutions are criticised for neglecting one or the other possible threat. It
has been observed that the majority of attacks happen during data transmission and during new node registration.
The transmission of data in the network is managed by the routing protocols, and the registration of new nodes into
the network is managed by node registration algorithms or strategies. Thus, these two are the most vulnerable
situations in any wireless sensor network's life cycle. Hence, this work addresses two unique solutions for these two
situations, which are again mutually exclusive. The major outcome of this work is to secure the routing using
randomized channels and the node registration process using a multi-order key, in order to avoid the majority of
attacks on the network. Also, during the transmission or routing of data through the network channels, it is often
recommended that the data be encrypted. Nonetheless, the encryption and decryption of data is a significant load
on the limited processing capabilities of the sensor nodes. Thus this highly recommended process is habitually
ignored, compromising on security. Yet another outcome of this work is to separate the header and the content
part of the data packets in order to reduce the network load.

Conclusion
The wide use of wireless sensor networks for various critical information sharing purposes makes the
domain highly adopted for research. A number of attempts at securing the routing scheme without compromising
the low-cost implementation, flexible structure and low energy consumption have been carried out. The most
popular among them, the trust-based management schemes, are also not guaranteed to be secure. Thus the
demand for enhancements has continued to persist. The major drawback of the trust-based systems is the
provocation of newer types of attacks. The trust-based schemes are easy to guess, and that makes the policies
vulnerable to attacks. Thus, the proposed multi-order key sharing with dual-channel based secure routing protocol
has been established. The proposed scheme demonstrates a high packet delivery ratio with resilience against the
majority of the attacks. This work is proven to be a newer direction of research in order to fulfil the demand of
secure routing for worldwide secure communication.
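
The packet-splitting idea in the last paragraph can be conveyed with a toy sketch: the routing header travels in the clear on one randomly chosen channel while only the payload is encrypted and sent on another, sparing the node full-packet crypto. Everything here (the channel model, the hash-based keystream) is a deliberately simplified illustration, not the proposed protocol:

```python
import hashlib
import os
import random

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream by iterated hashing -- illustration only, not secure."""
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def send_packet(header: dict, payload: bytes, session_key: bytes, channels):
    # Header goes on a randomly picked control channel; only the payload
    # is encrypted and sent on a separate data channel.
    ctrl, data = random.sample(channels, 2)
    ks = keystream(session_key, len(payload))
    enc = bytes(p ^ k for p, k in zip(payload, ks))
    return (ctrl, header), (data, enc)

key = os.urandom(16)
(hdr_ch, hdr), (dat_ch, enc) = send_packet(
    {"src": 4, "dst": 9, "seq": 17}, b"temperature=31.5C", key,
    channels=[11, 12, 13, 14])
print(hdr_ch, hdr, dat_ch, enc.hex())
```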

25
Survey On Swine Flu Prediction
Dr A.V. Krishna Prasad
International Journal of Management, Technology And Engineering
Volume IX, Issue V, MAY/2019, ISSN NO: 2249-7455

Abstract
The healthcare industry gathers an enormous amount of data which is not really mined and not put to ideal
use. Discovery of these hidden patterns and associations very often goes unexploited. However, there is ongoing
research in medical diagnosis which can predict diseases of the heart and lungs and various tumours on the basis of
past data gathered from patients. Our study focuses on this aspect of medical diagnosis by learning patterns from
the data amassed for Swine Flu. This study has produced a prototype Intelligent Swine Flu Prediction model for the
problem area. We used the DLSC classifier (Dynamic Learning divided classifier). Data mining plays a huge role in
predicting diseases. The database record of a medical patient is not very effective by itself, so here we implement
an undertaking to identify the most widely spread disease across the world, named Swine Flu. Swine Flu is a
respiratory illness for which a number of tests are required from the patient in order to diagnose the disease.
Advanced data mining techniques help us to remedy this situation.

Conclusion
This literature has portrayed how the data mining approach enhances computational power so that it can be
reasonably used to support public health epidemiology. The clustering algorithm k-means is used to group the
patients according to their Swine Flu indications on a Google map overlay. The prediction algorithms are utilized to
obtain a genuine estimate of infections and probable threat. As a comparative study, the ordinary decision tree
algorithm, for instance C4.5, provides a powerful means of generating classification rules, yet when dealing with
varying atmospheric parameters it settles on a complicated decision tree structure. The Bayesian theorem is a
classical probabilistic computation that deals only with the probabilities of the various possible outcomes; hence, in
the event of varying atmospheric parameters, it performs better than the decision tree estimation.

26
Environmental Data Analytics for Empirical
Values on Environmental Issues
Dr A.V. Krishna Prasad
International Journal of Recent Technology and Engineering (IJRTE)
ISSN: 2277-3878, Volume-8, Issue-1, May 2019

Abstract
In the contemporary era, big data is highly regarded as the driver to promote productivity, efficiency and
innovation. The emergence of big data and data science paved the way for comprehensive analysis of data for
obtaining business intelligence. Big data analytics has become crucial for enterprises to garner accurate know-how
for making well-informed decisions. The cloud-big data ecosystem has been realized, and thus it became easier to
deal with big data and its processing, as the storage and processing are outsourced to the cloud. Different cloud
computing platforms like Amazon AWS, Google Cloud and Microsoft Azure made it a reality to work with big data,
which provides comprehensive understanding of data. With big data, environmental issues, especially air pollution
measurement and prediction, can add value to existing infrastructure so as to improve the quality of prediction and
also help in making strategic decisions. This paper represents the present state of the art on the usage of big data
analytics for adding value to different industries, focusing more on environmental issues. It also provides the
empirical values obtained with Apache Flink and Apache Spark for handling environment data. The preliminary
results revealed that these frameworks play a crucial role in processing big data.

Conclusion
In this paper we explored the big-data ecosystem and distributed programming frameworks such as
Apache Flink and Apache Spark. It throws light on an introduction to big data, the need for big data processing and
how big data can help add value to businesses. Especially, it covers the utility of big data for adding value to
environmental study. It is understood that big data can help in the research of climate change and the environment,
including air pollution, in terms of prediction and making strategic decisions for the well-being of society. Our
investigation into big data revealed that the emergence of cloud computing and many distributed programming
frameworks paved the way for unprecedented possibilities. With pollution data (big data) we have made an
empirical study on the usage of the Apache Flink and Apache Spark frameworks. In future we will continue this
research to develop a framework that can help in measuring and predicting air pollution by exploiting big data and
the cloud eco-system. Such a framework can be used in Decision Support Systems (DSS) pertaining to Pollution
Control Boards (PCBs).
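
As a flavour of the kind of empirical processing referred to above, a PySpark sketch that aggregates pollution readings is shown below; the CSV schema (city, pollutant, value columns) and the file path are assumptions made for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pollution-analytics").getOrCreate()

# Hypothetical schema: city, station, pollutant (e.g. PM2.5), value, timestamp.
df = spark.read.csv("hdfs:///data/air_quality.csv", header=True, inferSchema=True)

# Average and peak concentration per city and pollutant -- the kind of
# batch aggregation both Spark and Flink handle over large datasets.
summary = (df.groupBy("city", "pollutant")
             .agg(F.avg("value").alias("avg_value"),
                  F.max("value").alias("peak_value"))
             .orderBy(F.desc("avg_value")))

summary.show(20)
spark.stop()
```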

27
Two Classes Of Algorithms To Minimize The Makespan And The
Total Completion Time For An Offline MapReduce Workload
Dr A.V. Krishna Prasad
International Journal of Management, Technology And Engineering, Volume IX,
Issue IV, APRIL/2019, ISSN NO: 2249-7455

Abstract
MapReduce is a standard parallel computing paradigm for large-scale data processing in clusters and data
centres. A MapReduce [3] workload generally contains a set of jobs, each of which consists of multiple map tasks
followed by multiple reduce tasks. Because 1) map tasks can only run in map slots and reduce tasks can only run in
reduce slots, and 2) of the general execution constraint that map tasks are executed before reduce tasks, different
job execution orders and map/reduce slot configurations for a MapReduce workload have significantly different
performance and system utilization. This paper proposes two classes of algorithms to minimize the makespan [6]
and the total completion time for an offline MapReduce [10] workload. Our first class of algorithms focuses on the
job ordering optimization for a MapReduce workload under a given map/reduce slot configuration. In contrast, our
second class of algorithms considers the scenario where we can also optimize the map/reduce slot configuration
for a MapReduce workload. We perform simulations as well as experiments on Amazon EC2 and show that our
proposed algorithms produce results that are up to 15-80 percent better than the currently unoptimized Hadoop,
leading to significant reductions in running time in practice.

Conclusion
This paper centres on the job ordering and map/reduce slot configuration issues for MapReduce
production workloads that run periodically in a data warehouse, where the average execution time of map/reduce
tasks for a MapReduce job can be profiled from history runs, under FIFO scheduling for a Hadoop cluster. Two
performance metrics are considered, i.e., makespan and total completion time. We first focus on the makespan [6].
We propose a job ordering optimization algorithm and a map/reduce slot configuration optimization algorithm. We
observe that the total completion time can be poor subject to obtaining the optimal makespan; therefore, we
further propose another greedy job ordering algorithm and a map/reduce slot configuration algorithm to minimize
the makespan and the total completion time together. Theoretical analysis is also given for our proposed heuristic
algorithms, including approximation ratios and upper and lower bounds on the makespan. Finally, we conduct
extensive experiments to validate the adequacy of our proposed algorithms and their theoretical results.
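
For the makespan objective above, a classical reference point is Johnson's rule for two-stage flow shops, which fits the map-before-reduce precedence when each stage is viewed as one aggregated machine. The sketch below orders jobs by Johnson's rule and simulates the resulting makespan; it is a textbook heuristic under strong simplifications (one aggregate map stage, one aggregate reduce stage), not the paper's exact algorithm:

```python
def johnson_order(jobs):
    """jobs: list of (name, map_time, reduce_time) tuples."""
    front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    back  = sorted((j for j in jobs if j[1] >  j[2]), key=lambda j: -j[2])
    return front + back

def makespan(order):
    """A job's reduce stage starts only after its map stage finishes."""
    map_free = reduce_free = 0.0
    for _, m, r in order:
        map_free += m                          # maps run back-to-back
        reduce_free = max(reduce_free, map_free) + r
    return reduce_free

jobs = [("A", 3, 6), ("B", 5, 2), ("C", 1, 2), ("D", 6, 6)]
print(makespan(jobs), makespan(johnson_order(jobs)))  # e.g. 21.0 vs 18.0
```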

28
Two-Level Image Encryption Algorithm
Based On Key-Image
V. Sridhar, M. Dyna
JCSE International Journal of Computer Sciences and Engineering, Research Paper
Vol.-6, Issue-8, Aug 2018 E-ISSN: 2347-2693

Abstract
In the contemporary world images play a key role in information interchange. Medical, defence, space and
various other domains make use of high-scale images in several applications. Security becomes the main concern
wherein the images are to be protected so that they cannot be seen by any adversary. This can be achieved by image
encryption. There are various image encryption methods that are based on a textual key and text data, which are not
efficient for high-definition images. In this paper we propose a three-step image encryption algorithm which uses
another image as a key. In the first step, the key image is scaled and tiled; in step 2, encryption is achieved using
grey-value substitution; and in step 3, the output generated from step 2 is scrambled using a Fibonacci
transformation to add additional security. This multistep encryption provides high security for images. The
performance of this algorithm is analyzed using different attack models, which results in high security without any
loss of the input image.

Conclusion

The proposed image encryption process can be combined with image compression so that images can be
transferred over the network with less transmission delay. Different existing image compression methods can be
applied in the process of encryption.
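
The three steps of the abstract (tile the key image, grey-value substitution, Fibonacci scrambling) can be prototyped in a few lines of NumPy. This is a sketch under simplifying assumptions: square images, XOR as the grey-value substitution, and the 2x2 Fibonacci Q-matrix as the coordinate scrambler; it illustrates the pipeline, not the paper's exact transforms:

```python
import numpy as np

def tile_key(key_img, shape):
    """Step 1: scale/tile the key image to cover the plain image."""
    reps = (-(-shape[0] // key_img.shape[0]), -(-shape[1] // key_img.shape[1]))
    return np.tile(key_img, reps)[:shape[0], :shape[1]]

def fibonacci_scramble(img, rounds=1):
    """Step 3: move pixel (x, y) to ((x + y) mod N, x mod N), i.e. the
    Fibonacci Q-matrix [[1, 1], [1, 0]] acting on coordinates (invertible)."""
    n = img.shape[0]                      # assumes a square image
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(rounds):
        scr = np.empty_like(out)
        scr[(x + y) % n, x % n] = out[x, y]
        out = scr
    return out

def encrypt(plain, key_img, rounds=3):
    key = tile_key(key_img, plain.shape)
    sub = plain ^ key                     # step 2: grey-value substitution
    return fibonacci_scramble(sub, rounds)

rng = np.random.default_rng(0)
plain = rng.integers(0, 256, (8, 8), dtype=np.uint8)
key_img = rng.integers(0, 256, (4, 4), dtype=np.uint8)
cipher = encrypt(plain, key_img)
```

Decryption would invert the coordinate map and XOR the same tiled key again; XOR makes step 2 its own inverse.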

29
Querying methods of Encrypted Cloud Data
V. Sridhar, M. Dyna
Asian Journal of Convergence in Technology Volume IV Issue II
ISSN NO: 2350-1146 I.F-5.11

Abstract
In recent times business organizations are opting for cloud-based storage to reduce maintenance and
storage cost, for which data security is a major issue of concern. The organizations may not completely rely on the
security provided by the cloud service provider. Instead they would prefer their own security model. Here data may
be relational or non-relational, with its own structures. If the data stored in the cloud is in encrypted format,
querying will be difficult, because for every retrieval process the data has to be decrypted. This is again a security
problem with the server. This paper explores how to process and query the encrypted data stored in the cloud.
Different methods of querying, ranging over relational to non-relational data, are discussed in this paper.

Conclusion
The survey given in this paper discussed different ways of processing encrypted cloud data. For structured
and unstructured encrypted data it is possible to retrieve information without decryption using the special
encryption methods and CryptDB discussed above. Some of them are not completely in public use, but they are
open for research.

30
Adaptive TerraSAR-X Image Registration(AIR)
Using Spatial Fisher Kernel Framework
B. Sirisha, B. Sandhya
Springer Lecture Notes in Computer Science (LNCS) ISSN: 0302-9743

Abstract
TerraSAR-X image registration is a forerunner for remote sensing applications like target detection, which
need accurate spatial transformation between the real-time sensed image and the reference off-line image. It is
observed that the outcome of registration of two TerraSAR images, even when acquired from the same sensor, is
unpredictable even when all the parameters of the feature extraction, matching and transformation algorithms are
fixed. Hence we have approached the problem by trying to predict whether the given TerraSAR-X images can be
registered, without actually registering them. The proposed adaptive image registration (AIR) approach
incorporates a classifier into the standard pipeline of feature-based image registration. The attributes for the
classifier model are derived by fusing the spatial parameters of the feature detector with the descriptor vector in a
Fisher kernel framework. We have demonstrated that the proposed AIR approach saves the time of feature
matching and transformation estimation for SAR images which cannot be registered.

Conclusion
Look-angle varied TerraSAR image registration is a challenging task, as a slight shift in look-angle alters the
geometric and photometric characteristics of the images. The proposed framework can predetermine whether the
given TerraSAR image pairs can be accurately registered. The asset of this framework is that the attributes required
for the prediction model are computed from the features extracted as part of the registration pipeline using the
spatial Fisher vector framework. It is established that detector parameters, when fused with the descriptor in the
Fisher vector framework, improve prediction accuracy. While registering two TerraSAR images without prior
knowledge of deformation, the fused detector and descriptor parameters in the scrim of the spatial Fisher vector
framework are productive in predicting the registration outcome.
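
For reference, the Fisher vector underlying such a framework encodes a set of local descriptors X = {x_t} by the normalized gradient of a GMM log-likelihood; in the spatial variant, each descriptor is augmented with its keypoint's spatial parameters before encoding. A reminder of the standard formulation (the paper's exact normalization is not restated here):

$$\mathcal{G}^{X}_{\lambda} \;=\; \frac{1}{T}\, L_{\lambda}\, \nabla_{\lambda} \sum_{t=1}^{T} \log u_{\lambda}(x_t),\qquad
\mathcal{G}^{X}_{\mu_k} \;=\; \frac{1}{T\sqrt{w_k}} \sum_{t=1}^{T} \gamma_t(k)\,\frac{x_t-\mu_k}{\sigma_k},$$

where $u_\lambda$ is a GMM with weights $w_k$, means $\mu_k$ and diagonal deviations $\sigma_k$, $\gamma_t(k)$ is the posterior of component $k$ for descriptor $x_t$, and $L_\lambda$ is the Cholesky factor of the inverse Fisher information matrix.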

31
Feature Selection on Deep CNN features
used for Image Classication
B. Venkataramana, Dr. B Sandhya

Abstract
Feature extraction forms the core of vision-based applications such as image classification, recognition,
retrieval etc. Due to the success of deep learning in several domains, feature learning has gained importance as
compared to conventional image feature extraction. It has been observed that activation values extracted from
pre-trained convolutional neural nets such as AlexNet and VGGNet give efficient results as compared to generic
feature extractors in matching, retrieval etc. However, features obtained from fully connected layers of such deep
nets are of considerable size as compared to conventional features of an image. Hence, it is important to select the
best features which can represent the image distinctly without reducing the efficiency of the operation being
performed. In this paper we have experimented with features extracted from fully connected layers of VGG19 for
classification of images. We have experimentally shown that feature selection, when applied on fc1 or fc2 features,
greatly reduces the time to build a classification model without affecting accuracy across all kinds of classifiers.

Conclusion
The features extracted from VGG16 are compared with conventional global features in terms of
classification accuracy.

32
Secure Data Dissemination In Wireless
Sensor Networks Using Enhanced DiDrip
H. Jayasree, N.Sabitha
Indian J.Sci.Res. 17(2): 52-525, 2018 ISSN: 0976-2876 (Print)
ISSN: 2250-0138(Online)

Abstract
Data discovery and dissemination protocols are applied to update configuration parameters & distributed
management commands in Wireless Sensor Networks (WSNs). Available protocols have two drawbacks: firstly,
they are constructed on a centralized procedure, where data items are distributed by only the base station, and
hence this procedure does not support the emerging concept of multi-owner-multi-user WSNs. Secondly, these
protocols were not built to support security, so intruders can easily initiate attacks to harm the network. In this
paper, we present the first secure and distributed data discovery and dissemination protocol, known as DiDrip.
This enables the network owners to grant multiple network users with different permissions to simultaneously and
directly disseminate data items to sensor nodes. The DiDrip protocol is enhanced (EDiDrip) to improve the
network lifetime in a distributed wireless sensor network with a pre-failure rectification technique. In this enhanced
version of the DiDrip protocol we replace a node in case of node failure in order to continue the process of data
dissemination. DiDrip is demonstrated as provably secure by extensive security analysis. Our analysis reveals that
EDiDrip can solve a viable number of the security issues that are identified.

Conclusion
Wireless sensor networks are a wide and open area in networking research, and are increasingly being
deployed for monitoring applications. This demands the need for quickly and efficiently disseminating data to
sensor nodes to reprogram them to suit the current needs of the application. We experimented with the EDiDrip
protocol, the first distributed information discovery and dissemination protocol that permits network owners and
approved users to disperse information items into WSNs without depending on the base station, and with network
lifetime management. From the results obtained, we conclude that the EDiDrip protocol provides a good
energy-efficient security architecture for wireless sensor networks. The efficiency and security can be improved by
adding additional mechanisms to ensure data confidentiality in the design of the secure and distributed data
discovery and dissemination routing internet protocol.

33
Attacks in RPL and Detection Technique used
for Internet of Things
M.V.R Jyothisree
International Journal of Recent Technology and Engineering (IJRTE)
ISSN: 2277-3878, Volume-8, Issue-1, May 2019

Abstract
The Internet of Things (IoT) is a fast-growing technology. In IoT, the devices are connected through the
Internet and controlled from any remote area. Before the advent of IoT, the interaction between the users was only
through the internet. By 2020 there will be 75.4 billion devices interconnected through the internet. Machine-
to-machine (M2M) interaction is achieved by sending and receiving information such as room temperature,
humidity etc. IoT can be viewed as heterogeneous networks that bring some security challenges like network
privacy problems, confidentiality, integrity and availability. In IoT, we have the Routing Protocol for Low-Power
and Lossy networks (RPL). RPL is a light-weight protocol which has good routing functionality and context
awareness, and it supports dynamic topology, but it has only basic security functionality. This paper elaborates on
attacks in the Routing Protocol for Low Power and Lossy networks (RPL) and their implementation using the Cooja
simulation methodology in the Contiki operating system. The blackhole and version number attacks are the most
vulnerable security attacks based on the routing of data in IoT networks. We propose a common prevention
technique to overcome those attacks based on measurements of throughput, packet delay and packet delivery
ratio values. The results show that our proposed detection technique is very secure against both attacks. This
technique can be used in real-time applications like smart living, smart mobility, smart environment etc.

Conclusion
We conclude that our proposed algorithm is very efficient in detecting the blackhole and version number
modification attacks. The TBBVD algorithm is a common detection algorithm for both attacks, which is mainly used
to detect the type of attack based on the behavior of and trust between the motes. The proposed algorithm not
only increases the packet delivery ratio but also decreases the delay time. We have focused on two main attacks in
RPL and gave a common solution. There are other vulnerable attacks like the flooding attack and overload attack in
RPL which can be implemented as future work.
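
The flavour of such a behaviour-and-trust check can be conveyed by a toy monitor that flags neighbours whose observed packet delivery ratio collapses (a blackhole symptom) or who advertise an inflated version number. This Python sketch only mirrors the decision logic; the actual implementation runs on Contiki motes in Cooja, and the threshold and stats here are invented:

```python
def suspicious_motes(stats, dodag_version, pdr_threshold=0.5):
    """stats: {mote_id: {"sent": int, "forwarded": int, "version": int}}.
    Returns mote ids suspected of blackhole or version-number attacks."""
    flagged = {}
    for mote, s in stats.items():
        pdr = s["forwarded"] / max(s["sent"], 1)
        if pdr < pdr_threshold:              # drops most traffic: blackhole?
            flagged[mote] = "blackhole-suspect (PDR %.2f)" % pdr
        if s["version"] > dodag_version:     # illegitimate version increase
            flagged[mote] = "version-number-suspect"
    return flagged

stats = {
    7:  {"sent": 120, "forwarded": 114, "version": 3},
    12: {"sent": 100, "forwarded": 22,  "version": 3},   # dropping packets
    19: {"sent": 90,  "forwarded": 85,  "version": 5},   # inflated version
}
print(suspicious_motes(stats, dodag_version=3))
```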

34
EPCWebX: An unsophisticated approach for Embarrassingly
Parallel Computations over the Web using XAMPP stack - IEEE Confe...
Suryaa Pranav Meduri, Sujanavan Tiruvayipati

Abstract
Web-based parallel computing is a present reality that eases execution over the heterogeneous
arrangements that exist. To address the particular needs of the server programming language to access parallel
computing equipment over the web, only one broadly utilized language has existed for most client-side
computations: JavaScript. Current solutions do not carry over well to the world of JavaScript because of differences
in programming models, the extra requirements of the web and developers' expectations. To address this we
propose an unsophisticated approach designed explicitly using the XAMPP stack along with JavaScript that fulfils
the requirements of embarrassingly parallel computing over the web. To demonstrate that the proposed
methodology is practical, an application to haze removal is experimented with.

Conclusion
We have demonstrated how EPCWebX's methodology meets the prerequisites of the web developer: the
requirement of deterministic execution, the necessity of keeping up a single programming model to preserve rapid
prototyping, and lastly the requirement of portability. The EPCWebX model demonstrates that it is feasible and can
be productively implemented. In this paper, we have proposed an architecture that permits machines connected to
the Internet to make a portion of their computing resources available to remote clients and also use resources
offered by other machines. A master node is utilized to register and allocate resources, and to do the bookkeeping.
EPCWebX exhibits a simple browser-based intermediary architecture using JavaScript which can handle scenarios
of conceivable improvements and different uses of the proposed framework, from coarse-grained supercomputing
applications running on a great many nodes, to Internet stations. We foresee a few technical issues while building a
complete version of the proposed architecture. Specifically, we analyzed the high interpretation overhead of
JavaScript code, and solutions like just-in-time compilation and dynamic assembly. The presented infrastructure is
fundamentally planned to work over the entire Internet. It is likewise usable on a smaller scale, and we envision that
its initial usage would be inside large organizations running their own intranets. This would enable such
organizations to utilize their total hardware base more efficiently. At the same time this simplifies several of the
technical issues that are not yet completely resolved. In our further research, we intend to make our simple
architecture more usable and easy to use, and to improve its execution, bookkeeping and validation mechanisms.

35
Two Classes Of Algorithms To Minimize The Makespan And
The Total Completion Time For An Offline MapReduce Workload
K. Sindhuja, A.V. Krishna Prasad
International Journal of Management, Technology And Engineering Volume IX,
Issue IV, APRIL/2019 ISSN NO : 2249-7455

Abstract
MapReduce is a standard parallel computing paradigm for large-scale data processing in clusters and data
centres. A MapReduce [3] workload generally contains a set of jobs, each of which consists of multiple map tasks
followed by multiple reduce tasks. Because 1) map tasks can only run in map slots and reduce tasks can only run in
reduce slots, and 2) of the general execution constraint that map tasks are executed before reduce tasks, different
job execution orders and map/reduce slot configurations for a MapReduce workload have significantly different
performance and system utilization. This paper proposes two classes of algorithms to minimize the makespan [6]
and the total completion time for an offline MapReduce [10] workload. Our first class of algorithms focuses on the
job ordering optimization for a MapReduce workload under a given map/reduce slot configuration. In contrast, our
second class of algorithms considers the scenario where we can also optimize the map/reduce slot configuration
for a MapReduce workload. We perform simulations as well as experiments on Amazon EC2 and show that our
proposed algorithms produce results that are up to 15-80 percent better than the currently unoptimized Hadoop,
leading to significant reductions in running time in practice.

Conclusion
This paper centres on the job ordering and map/reduce slot configuration issues for MapReduce
production workloads that run periodically in a data warehouse, where the average execution time of map/reduce
tasks for a MapReduce job can be profiled from history runs, under FIFO scheduling for a Hadoop cluster. Two
performance metrics are considered, i.e., makespan and total completion time. We first focus on the makespan [6].
We propose a job ordering optimization algorithm and a map/reduce slot configuration optimization algorithm. We
observe that the total completion time can be poor subject to obtaining the optimal makespan; therefore, we
further propose another greedy job ordering algorithm and a map/reduce slot configuration algorithm to minimize
the makespan and the total completion time together. Theoretical analysis is also given for our proposed heuristic
algorithms, including approximation ratios and upper and lower bounds on the makespan. Finally, we conduct
extensive experiments to validate the adequacy of our proposed algorithms and their theoretical results.

36
An Attribute-Based Storage System with Secure
Deduplication in a Hybrid Cloud Setting
B.Saisudha, Dr.Sesham Anand
International Journal of Management, Technology And Engineering, Volume IX,
Issue V, MAY/2019 ISSN NO : 2249-7455

Abstract
Attribute-based encryption (ABE) has been widely used in cloud computing, where a data provider outsources
his/her encrypted data to a cloud service provider and can share the data with users possessing specific
credentials (or attributes). However, the standard ABE [11] system does not support secure deduplication, which is
crucial for eliminating duplicate copies of identical data in order to save storage space and network bandwidth. In
this paper, we present an attribute-based storage system with secure deduplication in a hybrid cloud setting, where
a private cloud is in charge of duplicate detection and a public cloud manages the storage. Compared with the prior
data deduplication systems, our system has two advantages. Firstly, it can be used to confidentially share data with
users by specifying access policies rather than sharing decryption keys. Secondly, it achieves the standard notion of
semantic security for data confidentiality, while existing systems only achieve it by defining a weaker security
notion. In addition, we put forth a methodology to modify a ciphertext over one access policy into ciphertexts of the
same plaintext but under other access policies without revealing the underlying plaintext.

Conclusion
Attribute-based encryption (ABE) [11] has been widely used in cloud computing, where data providers
outsource their encrypted data to the cloud and can share the data with users possessing specified credentials.
On the other hand, deduplication is an important technique to save storage space and network bandwidth, as it
eliminates duplicate copies of identical data. However, the standard ABE [11] systems do not support secure
deduplication, which makes them costly to apply in some commercial storage services. In this paper, we presented
a novel approach to realize an attribute-based storage system supporting secure deduplication. Our storage
system is built under a hybrid cloud architecture, where a private cloud manipulates the computation and a public
cloud manages the storage. The private cloud is provided with a trapdoor key associated with the corresponding
ciphertext, with which it can transfer the ciphertext over one access policy into ciphertexts of the same plaintext
under any other access policies without being aware of the underlying plaintext. After receiving a storage request,
the private cloud first checks the validity of the uploaded item through the attached proof. If the proof is valid, the
private cloud runs a tag matching algorithm to see whether the same data underlying the ciphertext has already
been stored. If so, whenever it is necessary, it regenerates the ciphertext into a ciphertext of the same plaintext
over an access policy which is the union set of both access policies. The proposed storage system enjoys two
major advantages. Firstly, it can be used to confidentially share data with other users by specifying an access policy
rather than sharing the decryption key. Secondly, it achieves the standard notion of semantic security while existing
deduplication schemes only achieve it under a weaker security notion.

37
Identication of Malicious node for Effective
Top-k Query Processing in MANETS
H. Swathi, Dr. Akhil Khare
International Journal on Future Revolution in Computer Science & Communication
Engineering, ISSN: 2454-4248, Volume: 4, Issue: 10, pp. 14-19

Abstract
In Mobile Ad-hoc Networks, query processing is optimized using top-k query processing. The accuracy of
the results can be lowered if malicious nodes exist. In our proposed system, we assume that malicious nodes
perform a Data Replacement Attack (DRA), in which the malicious node replaces necessary data sets with false data
sets. In our malicious node identification method, the query-issuing node receives the reply messages from the
nodes; if the query-issuing node detects a DRA, it performs subsequent inquiries with the nodes which received
the information from the malicious node. In this way the query-issuing node identifies the malicious node, and
shares the information with the neighbouring nodes. Then the nodes share the information regarding the malicious
node with the other nodes which are far away. Each node tends to identify the malicious node in the network, and
then floods the information. The query-issuing node performs grouping of the nodes based on the similarity of the
information on the malicious node detected by the nodes. Identification of the malicious node is performed based
on the results of malicious node identifications by these groups.

Conclusion
In this system, we have proposed methods for identification of malicious nodes for effective top-k query
processing. The detection of the DRA attack is performed. This system helps in preventing or avoiding an attack in
its initial stage. It can identify all the addresses of nodes in the selected routing path from a source to a destination
after the source has received the RREP message. This system helps in improving the packet delivery rate and in
achieving reduced overhead. As future work we plan to implement the system for multiple malicious nodes and
also design a message authentication method to prevent malicious nodes from performing false notification
attacks.

38
Multi Level Key Frame Selection for Video Summarization
T. Soumya, T. Chandrakanth, B. Sandhya, B. Sirisha
International Journal on Future Revolution in Computer Science & Communication
Engineering, ISSN: 2454-4248, Volume: 4, Issue: 10, pp. 14-19

Abstract
Due to the exponential growth of video technology, there is huge multimedia content available on the
internet; the main challenge for users is how to inspect and review these large multimedia data rapidly. Video
summarization is a technique that permits a rapid overview of multimedia data and is widely used in computer
vision related applications like video browsing and video retrieval systems. Video summarization aims to segment
the input video into shots and extract the most informative video frames, referred to as key frames. In our paper we
propose a new approach for video summarization by introducing a BOW and entropy model for extracting an
informative and meaningful summary. Evaluation is done using the VSUMM dataset by calculating the fidelity
category using the Manhattan distance between the summarized key frames and the total set of video frames.

Conclusion
Video summarization is done using entropy in different color spaces and evaluated using the fidelity
measure, and we observed that summarization in the RGB colorspace gives a better fidelity compared with the Lab
and HSV colorspace summarizations. In our future work, we will focus mainly on improving summarization by using
different feature extraction methods and by changing the clustering techniques.
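
The entropy-driven key-frame idea can be prototyped with OpenCV: compute a colour-histogram entropy per frame, cut a shot where the entropy jumps, and keep the most informative frame of each shot. This is a sketch with an assumed jump threshold, not the exact BOW+entropy pipeline of the paper:

```python
import cv2
import numpy as np

def frame_entropy(frame, bins=32):
    """Shannon entropy of the frame's 3-channel colour histogram."""
    hist = cv2.calcHist([frame], [0, 1, 2], None,
                        [bins] * 3, [0, 256] * 3).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def keyframes(path, jump=0.8):
    cap, shots, cur = cv2.VideoCapture(path), [], []
    prev_h = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h = frame_entropy(frame)
        if prev_h is not None and abs(h - prev_h) > jump and cur:
            shots.append(cur); cur = []       # entropy jump => shot boundary
        cur.append((h, frame))
        prev_h = h
    if cur:
        shots.append(cur)
    cap.release()
    # Most informative (max-entropy) frame of each shot becomes a key frame.
    return [max(shot, key=lambda t: t[0])[1] for shot in shots]
```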

39
General Election Using Block Chain
2451-15-733-066, 116, 119
Guide - K. MuraliKrishna

Abstract
The Election Application is run using a decentralized system. The application is deployed on a local
blockchain network or the Ethereum network. The local blockchain is run on the host system with the help of
Ganache. The Ganache application provides 10 Ethereum accounts. The MetaMask extension is added to the
browser, which connects the Ethereum accounts to the front end so that the account can cast a vote. The results
are updated on all the nodes in the network when a vote has been cast. This provides transparency. Once the votes
are cast the results cannot be changed, since the Ethereum blockchain is immutable.
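
The cast-a-vote interaction described above can also be driven from Python with web3.py against the same Ganache endpoint (default http://127.0.0.1:7545); the contract address and the ABI fragment below are placeholders standing in for the deployed Election contract:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:7545"))   # local Ganache

# Placeholder ABI/address: substitute the deployed Election contract's own.
abi = [{"name": "vote", "type": "function", "stateMutability": "nonpayable",
        "inputs": [{"name": "candidateId", "type": "uint256"}], "outputs": []},
       {"name": "votesFor", "type": "function", "stateMutability": "view",
        "inputs": [{"name": "candidateId", "type": "uint256"}],
        "outputs": [{"name": "", "type": "uint256"}]}]
election = w3.eth.contract(address="0x...deployed-address...", abi=abi)

voter = w3.eth.accounts[0]                 # one of Ganache's ten accounts
tx = election.functions.vote(1).transact({"from": voter})
w3.eth.wait_for_transaction_receipt(tx)    # the vote is now on-chain

print(election.functions.votesFor(1).call())
```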

Classication Of Deformation Complexity Of


Images For Image Registration
2451-15-733-040, 047, 058
Guide - D.Sirisha

Abstract
Image registration is a pre-processing step for different applications. It consists mainly of four steps: feature
detection, feature matching, transformation model estimation and image resampling. View synthesis is
incorporated into the standard pipeline of feature-based image registration, to feed the feature detector with
additional synthetic views of an image. A predictive approach is proposed in which the number of synthetic views
to be generated in order for the image to get registered is estimated based on the images' deformation complexity
w.r.t. one another, thereby eliminating the need to generate extra synthetic views as in the iterative approach,
which incurs the cost of additional memory and time spent on generating relatively more views and extracting
features across all those views.

40
Detect, Vision & Speech System (DVS System)
2451-15-733-079, 080
Guide - Dr. H. Jayasree

Abstract
The DVS system is an automated version of the existing Material Gate Pass system. The purpose of this
system is to detect a vehicle, generate and scan a QR-code on the vehicle and produce relevant voice-synthesized
instructions. The vehicle loaded with material is detected. It carries a pertinent QR-code that is generated and later
scanned under authorized conditions. Here, the Admin phase generates a particular format of QR-code inclusive
of the materials present in a vehicle, checks the messages or details provided by security, and maintains the
database by clearing unnecessary data. The Security phase detects the vehicle, scans the QR-code and sends an
appropriate message to the admin.
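
The generate-and-scan part of the system maps directly onto two common Python libraries: qrcode for generation and OpenCV's QRCodeDetector for scanning. A sketch with an assumed payload format for the material gate pass:

```python
import cv2
import qrcode

# Admin phase: encode vehicle/material details into a QR code image.
payload = "vehicle=TS09AB1234;materials=cement:50,steel:20;pass_id=G-0042"
qrcode.make(payload).save("gate_pass.png")

# Security phase: detect and decode the QR code from a camera frame.
frame = cv2.imread("gate_pass.png")
data, points, _ = cv2.QRCodeDetector().detectAndDecode(frame)
if points is not None and data:
    print("authorized scan:", data)   # then notify admin / speak instruction
else:
    print("no QR code found")
```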

41
Advancements In The Field Of Wireless Network Communication
AkshithaShinde - M.Tech Research Assistant
Email : [email protected]
Indian Institute Of Technology Hyderabad

Telecommunication and networking has been and will be one of the core technologies in helping the
evolution of mankind and of technology itself. If it wasn't for these channels of communication and data
transmission, we would probably still be in an era where technology isn't as advanced as it is today.
We are currently in the 4th Generation, trying to move to the 5th Generation.

Image Ref: https://mse238blog.stanford.edu/2017/07/ssound/1g-2g-5g-the-evolution-of-the-gs/

But where did it all start?


If you remember, we all started with 2G. It was only about being able to make a call to someone else far
away from us (CDMA). We then moved to the Global System for Mobile Communications (GSM), which enabled
data transfer on top of voice communications. When we started off with data transfers, the maximum we had
achieved was 30-35 kbps.

Then came GPRS (General Packet Radio Service), with technology similar to GSM, but data speeds improved to
110 kbps.

Then with EDGE (Enhanced Data rates for GSM Evolution) in 2003 we were able to achieve 135 kbps
of data speed. It is still used in many parts of the world and by many operators, as it satisfies the basic needs of both
carriers and users.

Then we entered the 3rd Generation, where there was drastic development and we achieved
speeds of 2 Mbps. This is when we could easily send emails and messages from our smartphones. Then in 4G,
data rates of 100 Mbps were achieved.

But the main drawback was the cost. 1 GB of data cost us more than 150 rupees.

42
The game-changer technology that came into the market was 4G LTE (Long Term Evolution). It is a complete
redesign of the previous architecture.

Why do you think the very famous telecom JIO offers only 4G LTE services?
Technologies until 4G and those from 4G LTE have different architectures. It is going to cost a company a lot
to maintain two different architectures. Even companies like Airtel are planning to withdraw services below 4G.

This LTE technology has drastically brought down the charges. 1 GB of data on average costs us around 5 rupees now.

We are now in the era of the Internet of Things. We have so many smart devices, smart vehicles, smart
homes etc. The need for a reliable network is greater than ever. That is what drives us to transition to the 5th
Generation, which aims at meeting the increasing demands and is more reliable. There are so many features that 5G
offers; one of them is 5G New Radio (NR). 5G NR is specifically meant for vehicular technologies.

There have been a lot of advancements in the field of wireless network communication over the years in
terms of overall development and change in core functionality, which has been crucial to put us in an era that is
driven by technology all around us; and with 5G a couple of years away, technologies such as IoT, cloud computing
and AI will completely redefine our world by 2025.

Reference : https://mse238blog.stanford.edu/2017/07/ssound/1g-2g-5g-the-evolution-of-the-gs/

43
Faculty Achievements
Faculty Name | Category | Level | Title | Date (DD-MM-YYYY) | Description
Ambati Saritha | Achievement | National | National Eligibility Test | 01-07-2018 | This exam is an eligibility for admission into PhD in many universities in the state.
Tiruvayipati Sujanavan | Appreciation | Institution | For efforts & services as Organizing Member | 14-08-2018 | During the "Big Data Analytics" FDP organized in association with E&ICT Academy, NIT(W).
Kanajam Muralikrishna | Appreciation | Institution | For efforts & services as Organizing Member | 14-08-2018 | During the "Big Data Analytics" FDP organized in association with E&ICT Academy, NIT(W).
Pothavarjula Phani Prasad | Achievement | National | Wipro Certified Faculty | 22-08-2018 | To work as Talent Next Training Mentor to train the pre-final year students in the Java programming area.
Vikram Narayandas | Appreciation | Institution | Department Coordinator | 12-10-2018 | A three-day workshop on "Innovations, Intellectual Property Rights and Startups" organised by the Entrepreneurship Development Cell & Innovations and Incubation Center.
Nagamala Sabitha | Appreciation | Institution | Certificate of Appreciation | 17-11-2018 | For organizing spoken tutorials courses.
Gummedelli Srishailam | Appreciation | National | Spoken Tutorial Organizing Member | 17-11-2018 | Certificate of Appreciation for organizing spoken tutorial certification courses.

44
Faculty Achievements

Faculty Name | Category | Level | Title | Date (DD-MM-YYYY) | Description
Gummedelli Srishailam | Achievement | National | Wipro Certified Faculty | 22-12-2018 | Wipro's Project Based Learning framework in Java-J2EE.
Battula Venkata Ramana | Achievement | National | National Eligibility Test | 05-01-2019 | The award of JRF and/or Eligibility for Assistant Professor depends on the aggregate performance of the candidate in Paper-I and Paper-II of UGC-NET.
Nagamala Sabitha | Appreciation | State | Letter of Recommendation | 12-04-2019 | Swecha NGO.

45
Student Achievements

Roll Number | Name of the Award | Institution / Organisation from where the award has been received
2451-15-733-023 | OU Rank 1 | OU
2451-15-733-067 | OU Rank 3 | OU
2451-15-733-128 | OU Rank 7 | OU
2451-15-733-071 | Rank-808, GATE 2019 | GATE
2451-15-733-121 | Rank-1357, GATE 2019 | GATE
2451-15-733-086 | Rank-4669, GATE 2019 | GATE
2451-15-733-134 | GATE 2019 | GATE
2451-15-733-171 | Rank-5189, GATE 2019 | GATE
2451-15-733-306 | Rank-1470 in TS-PGECET 2019 | TSPGECET
2451-15-733-023 | Rank-1406 in GATE 2019 | GATE
2451-15-733-143 | MS Admission at Illinois Inst. of Technology, USA | MS
2451-15-733-015 | Score-298, GRE | MS
2451-15-733-064 | Score-319, GRE 2018 | MS
2451-15-733-067 | Score-321, GRE 2018 | MS
2451-15-733-077 | Score-309, GRE 2018 | MS
2451-15-733-079 | Score-304, GRE | MS
2451-15-733-100 | Score-290, GRE 2018 | MS
2451-15-733-102 | Score-319, GRE | MS
2451-15-733-027 | MS, Monash University, Australia | MS
2451-15-733-009 | IEEE Publication | Publication
2451-15-733-009 | EPCWebX | Copyright

46
Student Achievements

Roll Number | Name of the Award | Institution / Organisation from where the award has been received
2451-15-733-152 | 92%, took admission in IIM Bangalore | —
2451-15-733-306, 2451-15-733-034 | Got copyright: SAMYAMINI – Meta-cognition framework for Multi faced User | SAMYAMINI
2451-15-733-067, 2451-15-733-064 | Got copyright: ANUVANI – A rustic wizard framework for naive users | ANUVANI

47
Jokes
1. A programmer gets stopped at an airport and is asked, "Do you have anything to
declare?"
He answers, "Yes, three variables and a constant."
2. Why didn't the integer and the string fall in love? It was a type mismatch.
3. Why do Java programmers tend to wear glasses?
Because they can't C#.
4. Why are computers like women?

 No one but the Creator understands their internal logic.
 The native language they use to communicate with other computers is incomprehensible to
everyone else.
 Even your smallest mistakes are stored in long-term memory for later retrieval.
 As soon as you make a commitment to one, you find yourself spending half your paycheck on
accessories for it.


M.Naga Rani
Programming Assistant

48
Tricky Riddles
1) WHICH WORD IS WRITTEN INCORRECTLY IN A DICTIONARY?
2) PEOPLE BUY ME TO EAT, BUT NEVER EAT ME. WHAT AM I?
3) WHO MAKES MOVES WHILE BEING SEATED?
4) EVA'S MOTHER HAD THREE CHILDREN. THE FIRST WAS CALLED APRIL, THE
SECOND WAS CALLED MAY. WHAT WAS THE NAME OF THE THIRD?

5) CAN YOU FIND THE NUMBER OF TRIANGLES IN THE GIVEN FIGURE?

Answers:
1) THE WORD IS WRITTEN “INCORRECTLY”
2) A PLATE

49
Tricky Riddles
3) A CHESS PLAYER

4) IT'S EVA!
5) THE NUMBER OF TRIANGLES IN THE GIVEN FIGURE IS '24'

M.Naga Rani

50
Cross Words
H V D W O R K J K R O W L I N G N S H Z
P A E L D F A T Z B P N Q A L F H U Q M
Q J N Q R U S L M A I L R F D K O S N F
V E N N P S B O J E V E T S Z N G D Q K
Z N I R A A V K I F E Q O A O Q S A W E
G S S Q G A S A D H F Y E S I W E K J R
N E R X A R N X S Y A D T U A E M I A T
I N I N T Y A D N K Z A M L D I N L D Y
K A T O H G E R B O W O T N O S P M E G
W C C T A N Q X M A N E F X U H A O N S
A K H Y C I C Z M Z R S I L B K Q G S P
H L I L H L B M F E V B P A M I J S M N
N E E B R S E R L N L M E H N R X E I L
E S O D I O U I I E F R G R U E O M T X
H V G I S G A U V C N U B N A K Y Q H X
P Q S N T S C R U S L M A I L L I W R Y
E C E E I E A N I U A L S I R R A H P S
T M M S A M O H T Y S S E T P W L U A P
S B W U R A S M U S L E R D O R F O R D
Q A H P H J V U E W Q E S I U R C M O T

CLUES : ANSWERS :
1.FOUNDER OF JAVA 1.JAMES GOSLING
2.FATHER OF VIDEO GAMES 2.RALPH BAER
3.FAMOUS SERIES AUTHOR 3.J.K.ROWLING
4.TRENDING SONG LYRICIST 4.ED SHEERAN
5.WALT DISNEY PRODUCER 5.WALTER ELIAS
6.CARTOON SERIES WRITER 6.HANNA AND BARBERA
7.ANDROIDS COMPETITOR 7.STEVE JOBS
8.SCIENCE'S BRIGHTEST STAR 8.STEPHEN HAWKING
9.“NEVER SAY NEVER” KID 9.JADEN SMITH
10.INDIA'S MISSILE WOMAN 10.TESSY THOMAS

Dr.D.Sirisha, Asst. Professor

51
MATURI VENKATASUBBA RAO (MVSR) ENGINEERING COLLEGE
(Sponsored by Matrusri Education Society, Estd.1980)
Affiliated to Osmania University, Recognized by AICTE
EAMCET/ PGECET/ ICET Code: MVSR

Department of Computer Science and Engineering


www.csemvsr.blogspot.com
