Autonomic fault-aware scheduling is an important capability for cloud computing, since it allows the system to adapt to workload variation. In this context, this paper proposes a fault-aware, pattern-matching autonomic scheduler for cloud computing based on autonomic computing concepts. To validate the proposed solution, we performed two experiments: one with a traditional approach and one with the pattern-recognition fault-aware approach. The results show the effectiveness of the scheme.
Score based deadline constrained workflow scheduling algorithm for cloud systems (ijccsa)
Cloud computing is the latest emerging trend in the information technology domain. It offers utility-based IT services to users over the Internet. Workflow scheduling is one of the major problems in cloud systems. A good scheduling algorithm must minimize the execution time and cost of a workflow application while meeting the QoS requirements of the user. In this paper we take the deadline as the major constraint and propose a score-based deadline-constrained workflow scheduling algorithm that executes a workflow within a manageable cost while meeting the user-defined deadline. The algorithm uses the concept of a score, which represents the capabilities of hardware resources; this score value is used when allocating resources to the tasks of a workflow application. The algorithm allocates to the workflow application those resources which are reliable, reduce the execution cost, and complete the application within the user-specified deadline. The experimental results show that the score-based algorithm exhibits less execution time and also reduces the failure rate of workflow applications within a manageable cost. All simulations were done using the CloudSim toolkit.
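The score idea described above can be sketched as follows. This is a minimal illustration under assumed field names (`mips`, `ram`, `bw`, `price`) and assumed score weights, not the paper's actual formula: each resource's score summarizes its hardware capability, and a task is placed on the cheapest resource whose estimated finish time meets the user deadline, with the score breaking ties.

```python
def score(resource):
    # Weighted sum of normalized hardware capabilities (weights are assumptions).
    return (0.5 * resource["mips"] / 1000
            + 0.3 * resource["ram"] / 1024
            + 0.2 * resource["bw"] / 100)

def pick_resource(task, resources, deadline):
    feasible = []
    for r in resources:
        finish = task["length"] / r["mips"]   # estimated execution time
        if finish <= deadline:
            cost = finish * r["price"]        # pay-per-use cost
            feasible.append((cost, -score(r), r["name"]))
    if not feasible:
        return None                           # deadline cannot be met at all
    feasible.sort()                           # cheapest first; higher score breaks ties
    return feasible[0][2]
```

Under this sketch, a fast but expensive resource is chosen only when the deadline rules out the cheaper ones.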
This document proposes a new task scheduling algorithm called Dynamic Heterogeneous Shortest Job First (DHSJF) for heterogeneous cloud computing systems. DHSJF aims to reduce makespan and energy consumption by considering the heterogeneity of resources and workloads. It discusses existing scheduling algorithms such as Round Robin and First Come First Serve and their limitations. The proposed DHSJF algorithm prioritizes tasks with the shortest estimated completion time to optimize resource utilization and improve the overall performance of the cloud computing system. Simulation results show that DHSJF gives better results for metrics such as average waiting time and turnaround time compared to the Round Robin and First Come First Serve scheduling algorithms.
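A shortest-job-first dispatcher on heterogeneous VMs, as summarized above, can be sketched like this. The structure is an assumption (the paper's exact tie-breaking and data model are not given here): each round picks the task/VM pair with the smallest estimated completion time, accounting for each VM's speed and when it becomes free.

```python
def dhsjf_schedule(tasks, vms):
    """Sketch of a DHSJF-style dispatcher: repeatedly dispatch the (task, VM)
    pair with the smallest estimated completion time on heterogeneous VMs."""
    free_at = {vm["name"]: 0.0 for vm in vms}   # time each VM becomes idle
    order = []
    pending = list(tasks)
    while pending:
        best = None
        for t in pending:
            for vm in vms:
                # completion = VM's ready time + task length / VM speed
                done = free_at[vm["name"]] + t["length"] / vm["mips"]
                if best is None or done < best[0]:
                    best = (done, t, vm)
        done, t, vm = best
        free_at[vm["name"]] = done
        order.append((t["name"], vm["name"]))
        pending.remove(t)
    return order, max(free_at.values())   # dispatch order and makespan
```

Note how the faster VM absorbs both tasks in a small example because even queued behind the short task it finishes the long one sooner than the slow VM would.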
IRJET- Scheduling of Independent Tasks over Virtual Machines on Computati... (IRJET Journal)
This document discusses scheduling independent tasks over virtual machines in a cloud computing environment. It compares the performance of four scheduling algorithms: First Come First Serve (FCFS), Shortest Job First (SJF), Round Robin, and Particle Swarm Optimization (PSO). The algorithms are tested on virtual machines with 1, 2, and 4 CPU cores. PSO consistently achieves the shortest makespan (task completion time). While FCFS, SJF, and Round Robin perform similarly on single-core and dual-core VMs, Round Robin's performance degrades on quad-core VMs likely due to core collision issues. Overall, PSO schedules tasks most efficiently across all virtual machine configurations.
A hybrid approach for scheduling applications in cloud computing environment (IJECEIAES)
Cloud computing plays an important role in our daily life. It has a direct, positive impact on sharing and updating data, knowledge, storage, and scientific resources between regions. Cloud computing performance depends heavily on the job scheduling algorithms used to manage queue waiting in modern scientific applications, and researchers consider cloud computing a popular platform for new applications. These scheduling algorithms help design efficient queues in the cloud and play a vital role in reducing waiting and processing time. A novel job scheduling algorithm is proposed in this paper to enhance the performance of cloud computing and reduce the delay that jobs spend waiting in the queue. The proposed algorithm tries to avoid some significant challenges that hold back the development of cloud computing applications; to that end, a smart scheduling technique is proposed to improve processing performance in cloud applications. Our experimental results show that the proposed job scheduling algorithm achieves substantial improvements, with a clear reduction in the time jobs wait in the queue.
A Survey on Service Request Scheduling in Cloud Based Architecture (IJSRD)
Cloud computing has become quite popular nowadays. It allows users to store and process their data in third-party data centers. Today in the IT sector, everything is run and managed in the cloud environment. As the number of users increases day by day, faster and more efficient processing of large volumes of data and resources is desired at all levels, so the management of resources attains prime importance. While using cloud computing, various issues are encountered, such as load balancing and traffic during computation. Job scheduling is one solution to these problems: it reduces waiting time and maximizes quality of service. In job scheduling, "priority" is an important factor. In this paper, we discuss various scheduling algorithms and review a dynamic priority scheduling algorithm.
This document discusses and compares various load balancing techniques in cloud computing. It begins by introducing load balancing as an important issue in cloud computing for efficiently scheduling user requests and resources. Several load balancing algorithms are then described, including honeybee foraging algorithm, biased random sampling, active clustering, OLB+LBMM, and Min-Min. Metrics for evaluating and comparing load balancing techniques are defined, such as throughput, overhead, fault tolerance, migration time, response time, resource utilization, scalability, and performance. The algorithms are then analyzed based on these metrics.
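Of the algorithms listed above, Min-Min is the most compact to sketch. The following is a standard formulation (the survey itself does not give code): `etc[t][m]` is the expected time to compute task `t` on machine `m`, and the heuristic repeatedly assigns the task whose earliest completion time is smallest.

```python
def min_min(etc):
    """Min-Min heuristic: repeatedly pick the unassigned task with the
    smallest earliest completion time over all machines, assign it there,
    and update that machine's ready time."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # when each machine becomes free
    unassigned = set(range(n_tasks))
    mapping = {}
    while unassigned:
        # earliest completion time over every (task, machine) pair
        ct, t, m = min((ready[m] + etc[t][m], t, m)
                       for t in unassigned for m in range(n_machines))
        mapping[t] = m
        ready[m] = ct
        unassigned.remove(t)
    return mapping, max(ready)          # assignment and resulting makespan
```

Min-Min favors short tasks, which keeps machines balanced early but can starve long tasks; that trade-off is exactly why surveys compare it against the other heuristics on throughput and fault-tolerance metrics.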
Resource Provisioning Algorithms for Resource Allocation in Cloud Computing (IRJET Journal)
This document discusses resource provisioning algorithms for resource allocation in cloud computing. It begins by introducing distributed computing and cloud computing, and how effective resource management is important for cloud suppliers and users. The existing systems of using virtual machines (VMs) to map to physical resources is described, along with its disadvantages such as not adapting well to heterogeneous demands. The proposed system aims to dynamically allocate resources to meet VM demands while minimizing physical machine usage. It introduces the concept of "skewness" to measure uneven resource usage across machines. The system design and literature survey on related topics like load balancing and green computing are also summarized.
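The "skewness" measure mentioned above is commonly defined (as in the dynamic resource allocation literature this document builds on) as the root of the summed squared deviations of each resource's utilization from the server's average utilization; a minimal version:

```python
import math

def skewness(utilizations):
    """Skewness of a server's resource usage: sqrt(sum((r_i / r_avg - 1)^2))
    over its per-resource utilizations (CPU, memory, network, ...).
    Lower skewness means more even usage; placement decisions that minimize
    skewness improve overall utilization of the physical machine."""
    avg = sum(utilizations) / len(utilizations)
    return math.sqrt(sum((r / avg - 1.0) ** 2 for r in utilizations))
```

A server running 90% CPU but only 30% memory is heavily skewed, so the allocator would prefer placing a memory-hungry, CPU-light VM there rather than another CPU-bound one.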
IRJET-Framework for Dynamic Resource Allocation and Efficient Scheduling Stra... (IRJET Journal)
This document discusses a framework for dynamic resource allocation and efficient scheduling strategies in cloud computing platforms for high-performance computing (HPC). It proposes using a parallel genetic algorithm to find optimal allocation of virtual machines to physical resources in order to maximize resource utilization. The algorithm represents the resource allocation problem as an unbalanced job scheduling problem. It uses genetic operators like mutation and crossover to efficiently allocate requests for resources to idle nodes. Compared to a traditional genetic algorithm, the parallel genetic algorithm improves the speed of finding the best allocation and increases resource utilization. Future work could explore implementing dynamic load balancing and using big data concepts on the cloud.
A Review on Scheduling in Cloud Computing (ijujournal)
Cloud computing provides software, infrastructure, and platform as a service on a pay-per-use basis, driven by client requirements. The main goal of scheduling is to achieve accuracy and correctness in task completion; scheduling in the cloud environment enables the various cloud services to support framework implementation. This survey covers a wide range of scheduling algorithms in the cloud computing environment, including workflow scheduling and grid scheduling, and gives an elaborate view of grid, cloud, and workflow scheduling aimed at minimizing energy cost while improving the efficiency and throughput of the system.
Cloud computing Review over various scheduling algorithms (IJEEE)
Cloud computing has taken an important position in the field of research as well as in government organisations. Cloud computing uses virtual network technology to provide computing resources to end users and customers. Due to the complex computing environment, the use of elaborate logic and task scheduler algorithms increases, which results in costly operation of the cloud network. Researchers are attempting to build job scheduling algorithms that are compatible with and applicable to the cloud computing environment. In this paper, we review research work recently proposed on the basis of energy-saving scheduling techniques. We also study various scheduling algorithms and the issues related to them in cloud computing.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZED RESPONSE TIME (ijccsa)
To improve the performance of cloud computing, there are many parameters and issues to consider, including resource allocation, resource responsiveness, connectivity to resources, discovery of unused resources, resource mapping, and resource planning. Planning the use of resources can be based on many kinds of parameters, and service response time is one of them. Users can easily observe the response time of their requests, and it has become one of the important QoS measures. Explored further, response time can drive solutions for distributing and load balancing resources with better efficiency, which is one of the most promising research directions for improving cloud technology. This paper therefore proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA models to predict the upcoming response time, giving a better way of resolving resource allocation against a threshold value. The experimental results are promising for load balancing with predicted response times and show that prediction is a strong direction for load balancing.
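The predict-then-compare step can be illustrated with a deliberately simplified stand-in for the ARIMA model: a least-squares AR(1) fit (`y[t] ~= c + phi * y[t-1]`). The real APRA uses full ARIMA forecasting; this sketch only shows the shape of the decision against the threshold.

```python
def predict_next(history):
    """Least-squares AR(1) forecast of the next response time
    (a simplified stand-in for the ARIMA model used by APRA)."""
    x, y = history[:-1], history[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    c = my - phi * mx
    return c + phi * history[-1]

def needs_rebalance(history, threshold):
    # Trigger load balancing when the *predicted* response time
    # crosses the threshold, rather than waiting for it to happen.
    return predict_next(history) > threshold
```

Acting on the forecast rather than the last observed value is the whole point: the balancer can shift load before response time actually degrades.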
Final Report - Optimizing Work Distribution for NP Orders (Brian Kaiser, PE)
This document discusses optimizing work distribution for non-pay (NP) orders at Indianapolis Power & Light (IPL). After IPL implemented a new Computer-Aided Dispatch (CAD) system in 2011, job routing became inefficient for 5 of 6 work types. A cross-functional team analyzed NP order data and identified issues. They found legacy route management practices led to workload variability. Recommendations include redrawing billing districts using quantitative data, reducing workload through new technology, adjusting estimated work times, and using more robust routing algorithms. The goals are decreasing average cut costs and increasing routing accuracy.
A survey of various scheduling algorithm in cloud computing environment (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document provides a summary of a student's seminar paper on resource scheduling algorithms. The paper discusses the need for resource scheduling algorithms in cloud computing environments. It then describes several types of algorithms commonly used for resource scheduling, including genetic algorithms, bee algorithms, ant colony algorithms, workflow algorithms, and load balancing algorithms. For each algorithm type, it provides a brief introduction, overview of the basic steps or concepts, and some examples of applications where the algorithm has been used. The paper was submitted by a student named Shilpa Damor to fulfill requirements for a degree in information technology.
AUTO RESOURCE MANAGEMENT TO ENHANCE RELIABILITY AND ENERGY CONSUMPTION IN HET... (IJCNCJournal)
This document summarizes an article from the International Journal of Computer Networks & Communications that proposes an Auto Resource Management (ARM) scheme to improve reliability and reduce energy consumption in heterogeneous cloud computing environments. The ARM scheme includes three components: 1) static and dynamic thresholds to detect host over/underutilization, 2) a virtual machine selection policy, and 3) a method to select placement hosts for migrated VMs. It also proposes a Short Prediction Resource Utilization method to improve decision making by considering predicted future utilization along with current utilization. The scheme is tested on a cloud simulator using real workload trace data, and results show it can enhance decision making, reduce energy consumption and SLA violations.
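The combination of static and dynamic thresholds for over/underutilization detection can be sketched as below. The specific threshold values and the mean-plus-deviation form of the dynamic cap are assumptions for illustration (the ARM paper defines its own detection rules); the point is that the dynamic cap tightens the static one when recent usage is volatile.

```python
import statistics

def host_state(cpu_history, upper_static=0.8, lower=0.2, k=2.5):
    """Threshold-based utilization check: a host is overutilized when its
    current CPU exceeds either a static cap or a dynamic cap derived from
    recent usage (mean + k * stdev here), and underutilized below a floor."""
    current = cpu_history[-1]
    dynamic = statistics.mean(cpu_history) + k * statistics.pstdev(cpu_history)
    if current > min(upper_static, dynamic):
        return "over"        # candidate source for VM migration
    if current < lower:
        return "under"       # candidate for full evacuation and shutdown
    return "ok"
```

Hosts flagged "over" feed the VM selection policy; hosts flagged "under" are drained so they can be switched to a low-power state.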
This document discusses scheduling in cloud computing. It proposes a priority-based scheduling protocol to improve resource utilization, server performance, and minimize makespan. The protocol assigns priorities to jobs, allocates jobs to processors based on completion time, and processes jobs in parallel queues to efficiently schedule jobs in cloud computing. Future work includes analyzing time complexity and completion times through simulation to validate the protocol's efficiency.
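The protocol's three steps (prioritize, allocate by completion time, run parallel queues) can be sketched as follows; the data model and tie-breaking are assumptions, not the paper's specification:

```python
import heapq

def priority_schedule(jobs, n_processors):
    """Priority-based placement sketch: jobs are taken in priority order
    (lower value = higher priority) and each goes to the processor that
    will complete it earliest, forming parallel per-processor queues."""
    free_at = [(0.0, p) for p in range(n_processors)]   # (ready_time, processor)
    heapq.heapify(free_at)
    queues = {p: [] for p in range(n_processors)}
    for job in sorted(jobs, key=lambda j: j["priority"]):
        ready, p = heapq.heappop(free_at)               # earliest-free processor
        queues[p].append(job["name"])
        heapq.heappush(free_at, (ready + job["runtime"], p))
    makespan = max(t for t, _ in free_at)
    return queues, makespan
```

The heap keeps allocation O(log n) per job, and the resulting makespan is what the proposed simulation work would measure to validate efficiency.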
This document summarizes an article from the International Journal of Research in Advent Technology that proposes algorithms for energy-aware resource allocation in datacenters with minimized virtual machine migrations. It discusses how virtualization allows servers to be consolidated onto fewer physical machines to reduce hardware and power consumption. The algorithms aim to dynamically reallocate VMs according to current resource needs while ensuring quality of service and reliability, with the goal of minimizing the number of active physical nodes and switching idle nodes to a low-power state. It describes two proposed VM selection policies: the Minimum Migrations policy, which selects the minimum number of VMs to migrate from overloaded hosts, and the Highest Potential Growth policy, which migrates VMs with the lowest current CPU usage to prevent future host overload.
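A simplified version of the Minimum Migrations idea can be sketched like this (the published policy also weighs migration overhead; the greedy largest-first choice here is an assumption that merely keeps the migration count small):

```python
def minimum_migrations(vm_cpu, host_util, upper):
    """From an overloaded host, pick the fewest VMs whose removal drops
    utilization back under the upper threshold. Removing the biggest CPU
    consumers first needs the fewest migrations."""
    selected = []
    util = host_util
    for cpu in sorted(vm_cpu, reverse=True):
        if util <= upper:
            break
        selected.append(cpu)
        util -= cpu
    return selected, util
```

Fewer migrations means fewer VM downtimes and less network traffic, which is why the policy minimizes the count rather than, say, total migrated CPU.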
Modeling and Optimization of Resource Allocation in Cloud [PhD Thesis Progres... (AtakanAral)
This document summarizes Atakan Aral's PhD thesis progress report on modeling and optimizing resource allocation in cloud computing. The report outlines Aral's contributions, including developing a topology-based matching algorithm for distributed VM placement and evaluating it against baseline methods. Evaluation covers factors like bandwidth, costs, loads, and optimization criteria including deployment time, communication latency, throughput, and rejection rates. Future work is planned to enhance the algorithm and evaluation.
Case Study: Vivo Automated IT Capacity Management to Optimize Usage of its Cr... (CA Technologies)
Learn how Vivo used CA Capacity Management to monitor current capacity and ensure optimized usage of their critical infrastructure environments, enabling them to eliminate manual procedures and spreadsheets and achieve faster time to value.
For more information on DevOps solutions from CA Technologies, please visit: http://bit.ly/1wbjjqX
An optimized scientific workflow scheduling in cloud computing (DIGVIJAY SHINDE)
The document discusses optimizing scientific workflow scheduling in cloud computing. It begins with definitions of workflow and cloud computing. Workflow is a group of repeatable dependent tasks, while cloud computing provides applications and hardware resources over the Internet. There are three cloud service models: SaaS, PaaS, and IaaS. The document explores how to efficiently schedule workflows in the cloud to reduce makespan, cost, and energy consumption. It reviews different scheduling algorithms like FCFS, genetic algorithms, and discusses optimizing objectives like time and cost. The document provides a literature review comparing various workflow scheduling methods and algorithms. It concludes with discussing open issues and directions for future work in optimizing workflow scheduling for cloud computing.
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
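The threshold rule in point 2) can be sketched as a single placement decision; field names and the transfer-time model are assumptions, not the paper's notation:

```python
def place_task(task, current_dc, next_dc, saturation=0.8):
    """Bandwidth-aware placement sketch: bind the task locally while the
    current data center's load is under the saturation threshold; otherwise
    migrate it to the next data center, paying a bandwidth-dependent
    transfer cost on top of execution time."""
    if current_dc["load"] < saturation:
        return "local", task["length"] / current_dc["mips"]
    transfer = task["size"] / next_dc["bandwidth"]   # data transfer over the link
    return "migrated", transfer + task["length"] / next_dc["mips"]
```

Migration only pays off when the remote execution plus transfer time beats waiting behind a saturated local data center, which is why both bandwidth and computing power enter the decision.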
Cloud workflow scheduling with deadlines and time slot availability (Kamal Spring)
Allocating service capacity in cloud computing is often based on the assumption that capacity is unlimited and available at any time. However, from the cloud provider's perspective, available service capacity changes with workload and cannot satisfy users' requests at all times, because cloud services can be shared by multiple tasks. Cloud service providers therefore offer available time slots for new user requests based on available capacity. In this paper, we consider workflow scheduling with deadlines and time slot availability in cloud computing. An iterated heuristic framework is presented for the problem under study, consisting mainly of initial solution construction, improvement, and perturbation. Three initial solution construction strategies, two greedy- and fair-based improvement strategies, and a perturbation strategy are proposed; different strategies in the three phases yield several heuristics. Experimental results show that different initial solution and improvement strategies have different effects on solution quality.
Task Scheduling using Tabu Search algorithm in Cloud Computing Environment us... (AzarulIkhwan)
1. The document proposes using Tabu Search algorithm for task scheduling in cloud computing environments using CloudSim simulator. It aims to maximize throughput and minimize turnaround time compared to traditional algorithms like FCFS.
2. The methodology section describes how CloudSim simulator works and the components involved in task scheduling. It also provides an overview of how the Tabu Search algorithm guides the search process to avoid getting stuck at local optima.
3. The expected result is that Tabu Search algorithm will provide higher throughput and lower turnaround times for cloud tasks compared to FCFS, as Tabu Search is designed to escape local optima and find better solutions.
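The way Tabu Search escapes local optima, as described in point 2, can be sketched for task-to-VM mapping. The move structure (reassigning one task), tabu tenure, and iteration count are assumptions for illustration:

```python
import random

def makespan(assign, lengths, speeds):
    load = [0.0] * len(speeds)
    for t, vm in enumerate(assign):
        load[vm] += lengths[t] / speeds[vm]
    return max(load)

def tabu_search(lengths, speeds, iters=200, tenure=5, seed=0):
    """Tabu Search sketch: explore single-task reassignments, forbid
    recently reversed moves for `tenure` iterations (even when they look
    good), and keep the best mapping ever seen."""
    rng = random.Random(seed)
    assign = [rng.randrange(len(speeds)) for _ in lengths]
    best = list(assign)
    tabu = {}                      # (task, vm) -> iteration until which it is forbidden
    for it in range(iters):
        candidates = []
        for t in range(len(lengths)):
            for vm in range(len(speeds)):
                if vm != assign[t] and tabu.get((t, vm), -1) < it:
                    trial = list(assign)
                    trial[t] = vm
                    candidates.append((makespan(trial, lengths, speeds), t, vm, trial))
        if not candidates:
            break
        cost, t, vm, trial = min(candidates)
        tabu[(t, assign[t])] = it + tenure   # forbid moving t straight back
        assign = trial                       # accept even a worsening move
        if cost < makespan(best, lengths, speeds):
            best = list(assign)
    return best, makespan(best, lengths, speeds)
```

Accepting the best non-tabu move even when it worsens the makespan is what lets the search climb out of local optima; the tabu list stops it from immediately undoing that climb.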
A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Comp... (1crore projects)
A novel methodology for task distribution (ijesajournal)
Modern embedded systems are modeled as Heterogeneous Reconfigurable Computing Systems (HRCS), where reconfigurable hardware, i.e. a Field Programmable Gate Array (FPGA), and soft-core processors act as computing elements. An efficient task distribution methodology is therefore essential for obtaining high performance in modern embedded systems. In this paper, we present a novel task distribution methodology called the Minimum Laxity First (MLF) algorithm, which takes advantage of the runtime reconfiguration of the FPGA to effectively utilize the available resources. The MLF algorithm is a list-based dynamic scheduling algorithm that uses attributes of both tasks and computing resources as a cost function to distribute the tasks of an application across the HRCS. An on-chip HRCS computing platform is configured on a Virtex-5 FPGA using Xilinx EDK. The real-time applications JPEG and OFDM transmitters are represented as task graphs, and the tasks are distributed, both statically and dynamically, to the HRCS platform in order to evaluate the performance of the designed task distribution model. Finally, the performance of the MLF algorithm is compared with existing static scheduling algorithms; the comparison shows that MLF achieves more efficient utilization of on-chip resources and speeds up application execution.
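The core selection rule of a minimum-laxity-first scheduler is small enough to sketch directly (field names are assumptions; the paper's cost function also weighs resource attributes): laxity is the slack a task has before its deadline becomes unmeetable, and the ready task with the least slack is dispatched first.

```python
def minimum_laxity_first(tasks, now):
    """Pick the ready task with minimum laxity, where
    laxity = deadline - current time - remaining execution time.
    A task with less slack is more urgent and is dispatched first."""
    ready = [t for t in tasks if not t.get("done")]
    return min(ready, key=lambda t: t["deadline"] - now - t["remaining"])
```

Unlike plain earliest-deadline-first, this rule also accounts for how much work is left, so a near-deadline task with little remaining work can yield to a farther-deadline task that still has a lot to do.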
IRJET- An Energy-Saving Task Scheduling Strategy based on Vacation Queuing & ... (IRJET Journal)
This document summarizes a research paper that proposes an energy-saving task scheduling strategy for cloud computing based on vacation queuing and optimization of resources. The proposed approach aims to minimize energy consumption, reduce processing time, and increase the number of sleeping nodes to make the system more efficient. It introduces a task scheduling algorithm that assigns tasks to computing nodes based on their properties using a load balancer. Simulation results show the proposed algorithm reduces energy consumption while meeting task performance compared to the vacation queuing algorithm. The document discusses related work on energy optimization techniques, presents the proposed approach, and analyzes results showing improvements in energy usage, time, and idle nodes.
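The sleeping-nodes idea can be illustrated with a toy dispatcher; the structure and field names are assumptions, not the paper's vacation-queuing model, which analyzes sleep periods stochastically. Here, tasks go to the least-loaded awake node, a sleeping node is woken only when every awake node is full, and nodes that received no work are put to sleep.

```python
def dispatch(tasks, nodes):
    """Energy-saving placement sketch: keep only enough nodes awake for the
    load; idle nodes go "on vacation" (sleep) to save energy.
    Returns the number of nodes sleeping after dispatch."""
    for n in nodes:
        n["queue"] = []
    for task in tasks:
        awake = [n for n in nodes if not n["asleep"]]
        target = min(awake, key=lambda n: len(n["queue"]))
        if len(target["queue"]) >= target["capacity"]:
            sleeping = [n for n in nodes if n["asleep"]]
            if sleeping:                      # wake a node only under pressure
                sleeping[0]["asleep"] = False
                target = sleeping[0]
        target["queue"].append(task)
    for n in nodes:
        n["asleep"] = not n["queue"]          # unused nodes go on vacation
    return sum(n["asleep"] for n in nodes)
```

The energy saving comes from the last step: any node the load balancer did not need is transitioned to the low-power state instead of idling.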
The document proposes an Earthquake Disaster Based Resource Scheduling (EDBRS) framework for efficiently allocating cloud computing resources during earthquake disasters. The framework aims to minimize execution costs and times of cloud workloads by prioritizing urgent workloads related to emergency response. It models the resource scheduling problem and considers factors like workload deadlines, resource speeds and costs. The framework also presents algorithms for optimally assigning equal-length and variable-length workloads across multiple public and private cloud resources to balance performance and cost. The goal is to efficiently allocate cloud resources to disaster response zones based on urgency to reduce loss of life during earthquakes.
IRJET- Advance Approach for Load Balancing in Cloud Computing using (HMSO) Hy... (IRJET Journal)
This document proposes a new hybrid multi-swarm optimization (HMSO) algorithm for load balancing in cloud computing. It aims to minimize response time and costs while improving resource utilization and customer satisfaction. The HMSO algorithm uses multi-level particle swarm optimization to find an optimal resource allocation solution. Simulation results show that the proposed HMSO technique reduces response time and datacenter costs compared to other algorithms. It also achieves a more balanced load distribution across resources.
Differentiating Algorithms of Cloud Task Scheduling Based on various Parameters (iosrjce)
Cloud computing is a new design structure for large, distributed data centers. A cloud computing system promises to offer end users a "pay as you go" model. To meet the quality expectations of users, cloud computing needs to offer differentiated services, and QoS differentiation is very important to satisfy different users with different QoS requirements. In this paper, various QoS-based scheduling algorithms, their scheduling parameters, and the future scope of the discussed algorithms are studied. The paper summarizes various cloud scheduling algorithms, their findings, scheduling factors, the type of scheduling, and the parameters considered.
This document discusses different algorithms for task scheduling in cloud computing environments based on various quality of service (QoS) parameters. It summarizes several QoS-based scheduling algorithms including QDA, Improved Cost Based, PAPRIKA, ANT Colony, CMultiQoSSchedule, and SHEFT Workflow. It also provides a comparative table of these algorithms and discusses the various metrics considered by QoS-based scheduling algorithms like time, cost, makespan, trust, and resource utilization. The paper concludes that scheduling is an important factor for cloud environments and that existing algorithms can be improved by considering additional parameters like trust values, execution rates, and success rates.
A Review on Scheduling in Cloud Computingijujournal
Cloud computing is the requirement based on clients that this computing which provides software,
infrastructure and platform as a service as per pay for use norm. The scheduling main goal is to achieve
the accuracy and correctness on task completion. The scheduling in cloud environment which enables the
various cloud services to help framework implementation. Thus the far reaching way of different type of
scheduling algorithms in cloud computing environment surveyed which includes the workflow scheduling
and grid scheduling. The survey gives an elaborate idea about grid, cloud, workflow scheduling to
minimize the energy cost, efficiency and throughput of the system.
Cloud computing Review over various scheduling algorithmsIJEEE
Cloud computing has taken an importantposition in the field of research as well as in thegovernment organisations. Cloud computing uses virtualnetwork technology to provide computer resources tothe end users as well as to the customer’s. Due tocomplex computing environment the use of high logicsand task scheduler algorithms are increase which resultsin costly operation of cloud network. Researchers areattempting to build such kind of job scheduling algorithms that are compatible and applicable in cloud computing environment.In this paper, we review research work which is recently proposed by researchers on the base of energy saving scheduling techniques. We also studying various scheduling algorithms and issues related to them in cloud computing.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZE RESPONE TIMEijccsa
To improve the performance of cloud computing, there are many parameters and issues that we should consider, including resource allocation, resource responsiveness, connectivity to resources, unused resources exploration, corresponding resource mapping and planning for resource. The planning for the use of resources can be based on many kinds of parameters, and the service response time is one of them.
The users can easily figure out the response time of their requests, and it becomes one of the important QoSs. When we discover and explore more on this, response time can provide solutions for the distribution, the load balancing of resources with better efficiency. This is one of the most promising
research directions for improving the cloud technology. Therefore, this paper proposes a load balancing algorithm based on response time of requests on cloud with the name APRA (ARIMA Prediction of Response Time Algorithm), the main idea is to use ARIMA algorithms to predict the coming response time, thus giving a better way of effectively resolving resource allocation with threshold value. The experiment
result outcomes are potential and valuable for load balancing with predicted response time, it shows that prediction is a great direction for load balancing.
Final Report - Optimizing Work Distribution for NP OrdersBrian Kaiser, PE
This document discusses optimizing work distribution for non-pay (NP) orders at Indianapolis Power & Light (IPL). After IPL implemented a new Computer-Aided Dispatch (CAD) system in 2011, job routing became inefficient for 5 of 6 work types. A cross-functional team analyzed NP order data and identified issues. They found legacy route management practices led to workload variability. Recommendations include redrawing billing districts using quantitative data, reducing workload through new technology, adjusting estimated work times, and using more robust routing algorithms. The goals are decreasing average cut costs and increasing routing accuracy.
A survey of various scheduling algorithm in cloud computing environmenteSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document provides a summary of a student's seminar paper on resource scheduling algorithms. The paper discusses the need for resource scheduling algorithms in cloud computing environments. It then describes several types of algorithms commonly used for resource scheduling, including genetic algorithms, bee algorithms, ant colony algorithms, workflow algorithms, and load balancing algorithms. For each algorithm type, it provides a brief introduction, overview of the basic steps or concepts, and some examples of applications where the algorithm has been used. The paper was submitted by a student named Shilpa Damor to fulfill requirements for a degree in information technology.
AUTO RESOURCE MANAGEMENT TO ENHANCE RELIABILITY AND ENERGY CONSUMPTION IN HET...IJCNCJournal
This document summarizes an article from the International Journal of Computer Networks & Communications that proposes an Auto Resource Management (ARM) scheme to improve reliability and reduce energy consumption in heterogeneous cloud computing environments. The ARM scheme includes three components: 1) static and dynamic thresholds to detect host over/underutilization, 2) a virtual machine selection policy, and 3) a method to select placement hosts for migrated VMs. It also proposes a Short Prediction Resource Utilization method to improve decision making by considering predicted future utilization along with current utilization. The scheme is tested on a cloud simulator using real workload trace data, and results show it can enhance decision making, reduce energy consumption and SLA violations.
This document discusses scheduling in cloud computing. It proposes a priority-based scheduling protocol to improve resource utilization, server performance, and minimize makespan. The protocol assigns priorities to jobs, allocates jobs to processors based on completion time, and processes jobs in parallel queues to efficiently schedule jobs in cloud computing. Future work includes analyzing time complexity and completion times through simulation to validate the protocol's efficiency.
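The priority-and-completion-time rule summarized above can be sketched as follows; the job fields, the two-processor model, and the greedy earliest-finish allocation are illustrative assumptions, not the protocol's actual data structures:

```python
import heapq

def priority_schedule(jobs, n_processors):
    """Hypothetical sketch: run jobs in priority order (smaller number =
    higher priority), assigning each to the processor that can finish
    it earliest, and report the resulting makespan."""
    ordered = sorted(jobs, key=lambda j: j["priority"])
    # Min-heap of (next_available_time, processor_id).
    procs = [(0.0, p) for p in range(n_processors)]
    heapq.heapify(procs)
    plan = []
    for job in ordered:
        avail, pid = heapq.heappop(procs)
        finish = avail + job["runtime"]
        plan.append((job["name"], pid, finish))
        heapq.heappush(procs, (finish, pid))
    makespan = max(f for _, _, f in plan)
    return plan, makespan

jobs = [
    {"name": "backup", "priority": 3, "runtime": 4.0},
    {"name": "query",  "priority": 1, "runtime": 2.0},
    {"name": "index",  "priority": 2, "runtime": 3.0},
]
plan, makespan = priority_schedule(jobs, n_processors=2)
```

The parallel queues in the protocol correspond here to the per-processor availability times tracked in the heap.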
This document summarizes an article from the International Journal of Research in Advent Technology that proposes algorithms for energy-aware resource allocation in datacenters with minimized virtual machine migrations. It discusses how virtualization allows servers to be consolidated onto fewer physical machines to reduce hardware and power consumption. The algorithms aim to dynamically reallocate VMs according to current resource needs while ensuring quality of service and reliability, with the goal of minimizing the number of active physical nodes and switching idle nodes to a low-power state. It describes two proposed VM selection policies - the Minimum Migrations policy that selects the minimum number of VMs to migrate from overloaded hosts, and the Highest Potential Growth policy that migrates VMs with the lowest current CPU usage to prevent future
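A minimal sketch of a Minimum-Migrations-style selection, assuming a simple per-VM CPU-load model (the threshold value and host representation are hypothetical, not the article's actual policy interface):

```python
def minimum_migrations(vm_loads, host_capacity, threshold=0.8):
    """Hypothetical sketch of a Minimum-Migrations-style policy: from an
    overloaded host, migrate as few VMs as possible so the remaining
    load drops to or below threshold * capacity."""
    limit = threshold * host_capacity
    total = sum(vm_loads.values())
    if total <= limit:
        return []                 # host is not overloaded
    # Removing the biggest VMs first minimises the migration count.
    migrate = []
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        migrate.append(vm)
        total -= load
        if total <= limit:
            break
    return migrate

host = {"vm1": 10, "vm2": 35, "vm3": 25, "vm4": 20}   # CPU load units
selected = minimum_migrations(host, host_capacity=100, threshold=0.8)
```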
Modeling and Optimization of Resource Allocation in Cloud [PhD Thesis Progres...AtakanAral
This document summarizes Atakan Aral's PhD thesis progress report on modeling and optimizing resource allocation in cloud computing. The report outlines Aral's contributions, including developing a topology-based matching algorithm for distributed VM placement and evaluating it against baseline methods. Evaluation covers factors like bandwidth, costs, loads, and optimization criteria including deployment time, communication latency, throughput, and rejection rates. Future work is planned to enhance the algorithm and evaluation.
Case Study: Vivo Automated IT Capacity Management to Optimize Usage of its Cr...CA Technologies
Learn how Vivo used CA Capacity Management to monitor current capacity and assure optimized usage of their critical infrastructure environments, enabling them to replace manual procedures and spreadsheets and achieve faster time to value.
For more information on DevOps solutions from CA Technologies, please visit: http://bit.ly/1wbjjqX
An optimized scientific workflow scheduling in cloud computingDIGVIJAY SHINDE
The document discusses optimizing scientific workflow scheduling in cloud computing. It begins with definitions of workflow and cloud computing. Workflow is a group of repeatable dependent tasks, while cloud computing provides applications and hardware resources over the Internet. There are three cloud service models: SaaS, PaaS, and IaaS. The document explores how to efficiently schedule workflows in the cloud to reduce makespan, cost, and energy consumption. It reviews different scheduling algorithms like FCFS, genetic algorithms, and discusses optimizing objectives like time and cost. The document provides a literature review comparing various workflow scheduling methods and algorithms. It concludes with discussing open issues and directions for future work in optimizing workflow scheduling for cloud computing.
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
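The threshold rule in points 1)–3) can be sketched as a small decision function; all parameter names (saturation level, inter-datacenter bandwidth, MIPS ratings) are assumptions for illustration:

```python
def place_task(task_len, local_load, saturation, bw_to_next,
               local_mips, next_mips):
    """Sketch of the bandwidth-aware rule: bind the task locally while
    the current datacenter load is below the saturation threshold;
    otherwise migrate it to the next datacenter, charging the transfer
    time over the inter-datacenter bandwidth."""
    if local_load < saturation:
        return "local", task_len / local_mips
    # Migration cost = transfer time + remote execution time.
    transfer = task_len / bw_to_next
    return "migrated", transfer + task_len / next_mips

light = place_task(1000, local_load=0.5, saturation=0.8,
                   bw_to_next=500, local_mips=100, next_mips=200)
heavy = place_task(1000, local_load=0.9, saturation=0.8,
                   bw_to_next=500, local_mips=100, next_mips=200)
```

Migration pays off here because the remote datacenter's extra computing power outweighs the bandwidth cost, which is the trade-off the policy is designed to exploit.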
Cloud workflow scheduling with deadlines and time slot availabilityKamal Spring
Allocating service capacities in cloud computing is often based on the assumption that they are unlimited and can be used at any time. However, available service capacities change with workload and, from the cloud provider's perspective, cannot satisfy users' requests at all times because cloud services are shared by multiple tasks. Cloud service providers therefore offer available time slots for new users' requests based on available capacities. In this paper, we consider workflow scheduling with deadline and time slot availability in cloud computing. An iterated heuristic framework is presented for the problem under study, consisting mainly of initial solution construction, improvement, and perturbation. Three initial solution construction strategies, two greedy- and fair-based improvement strategies, and a perturbation strategy are proposed; different strategies in the three phases result in several heuristics. Experimental results show that different initial solution and improvement strategies have different effects on solution quality.
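The three-phase framework (construction, improvement, perturbation) can be sketched generically; the toy single-machine ordering problem below is an assumption for illustration, not the paper's workflow model:

```python
import random

def iterated_heuristic(construct, improve, perturb, cost, rounds=20, seed=7):
    """Generic sketch of the iterated framework: build an initial
    solution, improve it, then repeatedly perturb and re-improve,
    keeping the best solution found."""
    random.seed(seed)
    best = improve(construct())
    for _ in range(rounds):
        cand = improve(perturb(best))
        if cost(cand) < cost(best):
            best = cand
    return best

# Toy instance: order tasks to minimise total completion time.
times = [4, 1, 3, 2]

def construct():           # initial solution: identity order
    return list(range(len(times)))

def improve(order):        # greedy pass: shortest processing time first
    return sorted(order, key=lambda t: times[t])

def perturb(order):        # random swap to escape local optima
    a, b = random.sample(range(len(order)), 2)
    new = order[:]
    new[a], new[b] = new[b], new[a]
    return new

def cost(order):           # sum of completion times
    done, total = 0, 0
    for t in order:
        done += times[t]
        total += done
    return total

best = iterated_heuristic(construct, improve, perturb, cost)
```

Swapping in different `construct` or `improve` functions yields different heuristics from the same skeleton, which mirrors how the paper derives several heuristics from its three phases.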
Task Scheduling using Tabu Search algorithm in Cloud Computing Environment us...AzarulIkhwan
1. The document proposes using Tabu Search algorithm for task scheduling in cloud computing environments using CloudSim simulator. It aims to maximize throughput and minimize turnaround time compared to traditional algorithms like FCFS.
2. The methodology section describes how CloudSim simulator works and the components involved in task scheduling. It also provides an overview of how the Tabu Search algorithm guides the search process to avoid getting stuck at local optima.
3. The expected result is that Tabu Search algorithm will provide higher throughput and lower turnaround times for cloud tasks compared to FCFS, as Tabu Search is designed to escape local optima and find better solutions.
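The tabu-list mechanism described in point 2 can be sketched on a toy task-to-VM assignment instance (the neighbourhood, tabu length, and runtimes are illustrative assumptions, not the paper's CloudSim setup):

```python
from collections import deque

def tabu_search(init, neighbors, cost, tabu_len=5, iters=50):
    """Minimal tabu-search skeleton: move to the best non-tabu
    neighbour each step (even if worse, which escapes local optima),
    remembering recent solutions in a fixed-length tabu list."""
    current = best = init
    tabu = deque([init], maxlen=tabu_len)
    for _ in range(iters):
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=cost)
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best

# Toy: assign 3 tasks to 2 VMs minimising makespan; a state is a tuple.
runtimes = [5, 3, 4]

def neighbors(state):               # flip one task's VM assignment
    out = []
    for i in range(len(state)):
        s = list(state)
        s[i] = 1 - s[i]
        out.append(tuple(s))
    return out

def makespan(state):
    loads = [0, 0]
    for task, vm in enumerate(state):
        loads[vm] += runtimes[task]
    return max(loads)

best = tabu_search((0, 0, 0), neighbors, makespan)
```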
A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Comp...1crore projects
A novel methodology for task distributionijesajournal
Modern embedded systems are modeled as Heterogeneous Reconfigurable Computing Systems (HRCS), where reconfigurable hardware, i.e. a Field Programmable Gate Array (FPGA), and soft-core processors act as computing elements. An efficient task distribution methodology is therefore essential for obtaining high performance in modern embedded systems. In this paper, we present a novel task distribution methodology called the Minimum Laxity First (MLF) algorithm, which takes advantage of the runtime reconfiguration of the FPGA in order to effectively utilize the available resources. MLF is a list-based dynamic scheduling algorithm that uses attributes of both tasks and computing resources as a cost function to distribute the tasks of an application to the HRCS. An on-chip HRCS computing platform is configured on a Virtex-5 FPGA using Xilinx EDK. The real-time applications, JPEG and OFDM transmitters, are represented as task graphs, and the tasks are distributed, both statically and dynamically, to the HRCS platform in order to evaluate the performance of the designed task distribution model. Finally, the performance of the MLF algorithm is compared with existing static scheduling algorithms; the comparison shows that MLF achieves more efficient utilization of on-chip resources and also speeds up application execution.
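A sketch of how a Minimum Laxity First dispatch order could be computed; the task fields are hypothetical and the FPGA/soft-core placement logic is omitted:

```python
def minimum_laxity_first(tasks, now=0.0):
    """Sketch of MLF ordering: laxity = deadline - now - remaining
    execution time; the task that can least afford to wait is
    distributed first."""
    def laxity(t):
        return t["deadline"] - now - t["exec_time"]
    return sorted(tasks, key=laxity)

tasks = [
    {"name": "fft",  "deadline": 20.0, "exec_time": 5.0},   # laxity 15
    {"name": "jpeg", "deadline": 12.0, "exec_time": 8.0},   # laxity 4
    {"name": "ofdm", "deadline": 10.0, "exec_time": 4.0},   # laxity 6
]
order = [t["name"] for t in minimum_laxity_first(tasks)]
```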
IRJET- An Energy-Saving Task Scheduling Strategy based on Vacation Queuing & ...IRJET Journal
This document summarizes a research paper that proposes an energy-saving task scheduling strategy for cloud computing based on vacation queuing and optimization of resources. The proposed approach aims to minimize energy consumption, reduce processing time, and increase the number of sleeping nodes to make the system more efficient. It introduces a task scheduling algorithm that assigns tasks to computing nodes based on their properties using a load balancer. Simulation results show the proposed algorithm reduces energy consumption while meeting task performance compared to the vacation queuing algorithm. The document discusses related work on energy optimization techniques, presents the proposed approach, and analyzes results showing improvements in energy usage, time, and idle nodes.
The document proposes an Earthquake Disaster Based Resource Scheduling (EDBRS) framework for efficiently allocating cloud computing resources during earthquake disasters. The framework aims to minimize execution costs and times of cloud workloads by prioritizing urgent workloads related to emergency response. It models the resource scheduling problem and considers factors like workload deadlines, resource speeds and costs. The framework also presents algorithms for optimally assigning equal-length and variable-length workloads across multiple public and private cloud resources to balance performance and cost. The goal is to efficiently allocate cloud resources to disaster response zones based on urgency to reduce loss of life during earthquakes.
IRJET- Advance Approach for Load Balancing in Cloud Computing using (HMSO) Hy...IRJET Journal
This document proposes a new hybrid multi-swarm optimization (HMSO) algorithm for load balancing in cloud computing. It aims to minimize response time and costs while improving resource utilization and customer satisfaction. The HMSO algorithm uses multi-level particle swarm optimization to find an optimal resource allocation solution. Simulation results show that the proposed HMSO technique reduces response time and datacenter costs compared to other algorithms. It also achieves a more balanced load distribution across resources.
Differentiating Algorithms of Cloud Task Scheduling Based on various Parametersiosrjce
Cloud computing is a new design structure for large, distributed data centers, promising end users a "pay as you go" model. To meet the expected quality requirements of users, cloud computing needs to offer differentiated services, and QoS differentiation is very important for satisfying users with different QoS requirements. In this paper, various QoS-based scheduling algorithms, their scheduling parameters, and the future scope of the discussed algorithms have been studied. The paper summarizes various cloud scheduling algorithms, the findings of each, the scheduling factors, the type of scheduling, and the parameters considered.
This document discusses different algorithms for task scheduling in cloud computing environments based on various quality of service (QoS) parameters. It summarizes several QoS-based scheduling algorithms including QDA, Improved Cost Based, PAPRIKA, ANT Colony, CMultiQoSSchedule, and SHEFT Workflow. It also provides a comparative table of these algorithms and discusses the various metrics considered by QoS-based scheduling algorithms like time, cost, makespan, trust, and resource utilization. The paper concludes that scheduling is an important factor for cloud environments and that existing algorithms can be improved by considering additional parameters like trust values, execution rates, and success rates.
Time and Reliability Optimization Bat Algorithm for Scheduling Workflow in CloudIRJET Journal
This document describes using a meta-heuristic optimization algorithm called the Bat Algorithm (BA) to schedule workflows in cloud computing environments. The BA is applied to optimize a multi-objective function that minimizes workflow execution time and maximizes reliability while keeping costs within a user-specified budget. The BA is compared to a basic randomized evolutionary algorithm (BREA) that uses greedy approaches. Experimental results show the BA performs better by finding schedules that have lower execution times and higher reliability within the given budget constraints. The BA is well-suited for this problem because it can efficiently search large solution spaces and automatically focus on optimal regions like other metaheuristics.
Intelligent Workload Management in Virtualized Cloud EnvironmentIJTET Journal
Abstract— Cloud computing is a rising high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and an elastic computational design. To improve the overall performance of the cloud environment under a deadline constraint, a task scheduling model is formulated for reducing the system energy consumption of cloud computing and improving the profit of service providers. For this scheduling model, a solving technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study concentrates on encoding rules, crossover operators, selection operators, and the method of sorting Pareto solutions. The model is implemented on the open-source cloud computing simulation platform CloudSim; compared with existing scheduling algorithms, the results show that the proposed algorithm can obtain an improved solution, balancing the load across multiple objectives.
Cloud computing is the fastest emerging technology and a novel buzzword in the IT domain, offering distinct services and applications and focusing on providing sustainable, reliable, scalable and virtualized resources to its consumers. The main aim of cloud computing is to enhance the use of distributed resources to achieve higher throughput and resource utilization in large-scale computation problems. Scheduling affects the efficiency of the cloud and plays a significant role in creating a high-performance environment. The Quality of Service (QoS) requirements of user applications define the scheduling of resources. A number of researchers have tried to solve these scheduling problems using different QoS-based scheduling techniques. In this paper, a detailed analysis of resource scheduling methodology is presented; different types of scheduling based on soft computing techniques, their comparisons, benefits and results are discussed. The major findings of this paper help researchers decide on a suitable approach for scheduling users' applications considering their QoS requirements.
Load Balancing in Cloud Computing Through Virtual Machine PlacementIRJET Journal
This document discusses load balancing in cloud computing through virtual machine placement. It proposes using a binary search tree approach to map virtual machines to host machines in a way that optimizes resource utilization, minimizes resource allocation time, and reduces violations of service level agreements. The approach is analyzed using the CloudSim simulator and compared to other placement strategies. The document provides background on resource allocation, types of virtual machine placement algorithms, and related work on power-aware and energy-efficient placement strategies.
Scheduling of Heterogeneous Tasks in Cloud Computing using Multi Queue (MQ) A...IRJET Journal
This document proposes a Multi Queue (MQ) task scheduling algorithm for heterogeneous tasks in cloud computing. It aims to improve upon the Round Robin and Weighted Round Robin algorithms by overcoming their drawbacks. The MQ algorithm splits tasks and resources into separate queues based on size/length and speed. Small tasks are scheduled on slower resources and large tasks on faster resources. The document compares the performance of MQ to Round Robin and Weighted Round Robin algorithms based on makespan, average resource utilization, and load balancing level using CloudSim simulations. The results show that MQ scheduling performs better than the other algorithms in most cases in terms of these metrics.
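The queue-splitting idea can be sketched with hypothetical length and speed cut-offs (the real MQ algorithm's thresholds and dispatch order may differ):

```python
def multi_queue(tasks, resources, length_cut, speed_cut):
    """Sketch of the MQ idea: split tasks by length and resources by
    speed, then pair short tasks with slow resources and long tasks
    with fast ones, dispatching round-robin inside each pairing.
    Both resource pools are assumed non-empty."""
    short = [t for t in tasks if t["length"] <= length_cut]
    long_ = [t for t in tasks if t["length"] > length_cut]
    slow = [r for r in resources if r["mips"] <= speed_cut]
    fast = [r for r in resources if r["mips"] > speed_cut]
    plan = {}
    for queue, pool in ((short, slow), (long_, fast)):
        for i, task in enumerate(queue):
            plan[task["name"]] = pool[i % len(pool)]["name"]
    return plan

tasks = [{"name": "t1", "length": 100}, {"name": "t2", "length": 900},
         {"name": "t3", "length": 150}, {"name": "t4", "length": 700}]
resources = [{"name": "vmA", "mips": 250}, {"name": "vmB", "mips": 1000}]
plan = multi_queue(tasks, resources, length_cut=500, speed_cut=500)
```

Reserving the fast resources for long tasks is what lets this scheme beat plain Round Robin on makespan in the comparison the document describes.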
IRJET- Time and Resource Efficient Task Scheduling in Cloud Computing Environ...IRJET Journal
This document summarizes a research paper that proposes a Task Based Allocation (TBA) algorithm to efficiently schedule tasks in a cloud computing environment. The algorithm aims to minimize makespan (completion time of all tasks) and maximize resource utilization. It first generates an Expected Time to Complete (ETC) matrix that estimates the time each task will take on different virtual machines. It then sorts tasks by length and allocates each task to the VM that minimizes its completion time, updating the VM wait times. The algorithm is evaluated using CloudSim simulation and is shown to reduce makespan, execution time and costs compared to random and first-come, first-served scheduling approaches.
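The allocation loop described (ETC matrix, longest-task-first ordering, minimum wait-plus-ETC VM choice) can be sketched as follows; the MIPS-based runtime model is an assumption:

```python
def task_based_allocation(lengths, vm_mips):
    """Sketch of a TBA-style pass: build an Expected-Time-to-Complete
    matrix, sort tasks longest first, then give each task to the VM
    whose current wait time plus ETC is smallest."""
    etc = [[l / m for m in vm_mips] for l in lengths]      # ETC matrix
    wait = [0.0] * len(vm_mips)                            # per-VM wait
    order = sorted(range(len(lengths)), key=lambda t: -lengths[t])
    assign = {}
    for t in order:
        vm = min(range(len(vm_mips)), key=lambda v: wait[v] + etc[t][v])
        assign[t] = vm
        wait[vm] += etc[t][vm]
    return assign, max(wait)                               # makespan

assign, makespan = task_based_allocation([400, 100, 300], [100.0, 200.0])
```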
An Enhanced Throttled Load Balancing Approach for Cloud EnvironmentIRJET Journal
The document proposes an enhanced throttled load balancing approach for cloud environments. It discusses existing load balancing techniques like round robin, weighted round robin, and throttled approaches. It identifies that existing throttled approaches can lead to overloading as they do not consider task size when assigning tasks to virtual machines. The proposed approach aims to improve performance for cloud users by enhancing the basic throttled mapping approach to better distribute tasks among resources. The approach is evaluated using the CloudAnalyst simulator and results show it performs better than original techniques.
Hybrid Task Scheduling Approach using Gravitational and ACO Search AlgorithmIRJET Journal
The document proposes a hybrid task scheduling approach for cloud computing called ACGSA that combines ant colony optimization and gravitational search algorithms. It describes using the Cloudsim simulator to test the performance of ACGSA and comparing it to ant colony optimization. The results show that ACGSA achieves better performance than the basic ant colony approach on relevant parameters like task scheduling time and resource utilization.
Efficient fault tolerant cost optimized approach for scientific workflow via ...IAESIJAI
Cloud computing is one of the most effective distributed computing models, offering a tremendous opportunity to address scientific problems with large-scale characteristics. Despite being such a dynamic computing paradigm, it faces several difficulties and falls short of meeting the necessary quality of service (QoS) standards. For sustainable cloud computing workflows, QoS is essential and needs to be addressed. Recent studies have examined quantitative fault-tolerant programming to reduce the number of replicas while still achieving the reliability requirement of a process on the heterogeneous infrastructure-as-a-service (IaaS) cloud. In this study, we create an optimal replication technique (ORT) with a fault-tolerance and cost-driven mechanism, known as the optimal replication technique with fault tolerance and cost minimization (ORT-FTC). ORT-FTC employs an iterative method that chooses the virtual machine, and its replicas, with the shortest makespan for the given tasks. ORT-FTC is tested on test cases built from scientific workflows such as CyberShake, the laser interferometer gravitational-wave observatory (LIGO), Montage, and SIPHT. Additionally, ORT-FTC is shown to improve slightly over the current model in all cases.
SERVICE LEVEL AGREEMENT BASED FAULT TOLERANT WORKLOAD SCHEDULING IN CLOUD COM...ijgca
Cloud computing is a concept of providing user- and application-oriented services in a virtual environment. Users can use the various cloud services dynamically as per their requirements. Different users have different requirements in terms of application reliability, performance and fault tolerance. Static and rigid fault tolerance policies provide a consistent degree of fault tolerance as well as overhead. In this research work we have proposed a method to implement dynamic fault tolerance considering customer requirements. The cloud users have been classified into sub-classes as per their fault tolerance requirements, and their jobs have also been classified into compute-intensive and data-intensive categories. A varying degree of fault tolerance has been applied, consisting of replication and an input buffer. From simulation-based experiments we have found that the proposed dynamic method performs better than the existing methods.
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENTIJCNCJournal
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands complex algorithms to solve the problem of task allocation. The overall performance of cloud systems is rooted in the efficiency of task scheduling algorithms, and the dynamic nature of cloud systems makes it challenging to find an optimal solution satisfying all evaluation metrics. The new approach is built on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, while Shortest Job First decreases average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
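One way the two policies might be combined (this exact blending is an assumption, not necessarily the paper's algorithm): rotate with a time quantum as in Round Robin, but visit jobs shortest-remaining-first within each round, as in Shortest Job First.

```python
def hybrid_rr_sjf(bursts, quantum):
    """Sketch of an RR/SJF hybrid: each round gives every unfinished
    job up to one quantum, but jobs are visited shortest-remaining
    first. The quantum caps starvation; the ordering cuts waiting
    time. Returns each job's completion time."""
    remaining = dict(enumerate(bursts))
    clock, completion = 0, {}
    while remaining:
        for job in sorted(remaining, key=remaining.get):
            run = min(quantum, remaining[job])
            clock += run
            remaining[job] -= run
            if remaining[job] == 0:
                completion[job] = clock
        remaining = {j: r for j, r in remaining.items() if r > 0}
    return completion

completion = hybrid_rr_sjf([6, 2, 4], quantum=3)
```

With bursts of 6, 2 and 4 and a quantum of 3, the short job finishes immediately while the long job still makes progress each round, illustrating both advantages at once.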
A Novel Dynamic Priority Based Job Scheduling Approach for Cloud EnvironmentIRJET Journal
The document proposes a new dynamic priority-based job scheduling algorithm for cloud environments to optimize the problem of starvation. It assigns priority to jobs based on criteria like CPU requirements, I/O requirements, and job criticality. The algorithm aims to reduce wait time, turnaround time, and increase throughput and CPU utilization. It was tested against the Shortest Job First algorithm in CloudSim simulation software. The results showed improvements in wait time, turnaround time, and total finish time compared to the SJF algorithm.
Cloud Computing Task Scheduling Algorithm Based on Modified Genetic AlgorithmIRJET Journal
This document presents a cloud computing task scheduling algorithm based on a modified genetic algorithm. It begins with an abstract discussing scalable cloud computing and the need for efficient task scheduling and virtual machine allocation. It then discusses the problem of existing scheduling algorithms having high overhead and slow convergence. The proposed methodology uses a heuristic-based prediction model with a logistic normal distribution technique to improve data transmission prediction. Simulation results show the proposed approach has better throughput and computation time than existing algorithms for different data packet sizes. The conclusion discusses overcoming drawbacks of earlier algorithms and future work focusing on algorithms with better tradeoffs between performance characteristics.
Service Request Scheduling in Cloud Computing using Meta-Heuristic Technique:...IRJET Journal
This document discusses using the Teaching Learning Based Optimization (TLBO) meta-heuristic technique for service request scheduling between users and cloud service providers. TLBO is a nature-inspired algorithm that mimics the teacher-student learning process, and it is compared to other meta-heuristic algorithms such as the Genetic Algorithm. The key steps of TLBO involve initializing a population, evaluating fitness, selecting the best solution as the teacher, and updating the population through teacher and learner phases until the termination criterion is met. The document proposes using the number of users and virtual machines as parameters for TLBO scheduling in cloud computing. MATLAB simulation results show the initial and final iterations converging to an optimal scheduling solution.
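The teacher and learner phases can be sketched for a toy continuous objective; the population size, bounds, and sphere function below are assumptions (the paper applies TLBO to scheduling parameters in MATLAB):

```python
import random

def tlbo(fitness, dim, pop_size=10, iters=30, lo=-5.0, hi=5.0, seed=1):
    """Compact TLBO sketch (minimisation). Teacher phase pulls each
    learner toward the best solution relative to the class mean;
    learner phase lets pairs of learners teach each other. Moves are
    accepted only when they improve fitness."""
    random.seed(seed)
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    def clamp(x): return max(lo, min(hi, x))
    for _ in range(iters):
        teacher = min(pop, key=fitness)
        mean = [sum(p[d] for p in pop) / pop_size for d in range(dim)]
        for i, p in enumerate(pop):
            # Teacher phase: X_new = X + r * (teacher - TF * mean).
            tf = random.choice((1, 2))
            cand = [clamp(p[d] + random.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            if fitness(cand) < fitness(p):
                pop[i] = p = cand
            # Learner phase: learn from a random classmate.
            q = pop[random.randrange(pop_size)]
            sign = 1 if fitness(p) < fitness(q) else -1
            cand = [clamp(p[d] + random.random() * sign * (p[d] - q[d]))
                    for d in range(dim)]
            if fitness(cand) < fitness(p):
                pop[i] = cand
    return min(pop, key=fitness)

# Toy objective: sphere function, minimum at the origin.
best = tlbo(lambda x: sum(v * v for v in x), dim=2)
```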
Square transposition: an approach to the transposition process in block cipherjournalBEEI
The transposition process is needed in cryptography to create a diffusion effect in the data encryption standard (DES) and advanced encryption standard (AES) algorithms, the standard information security algorithms of the National Institute of Standards and Technology. The problem with the DES and AES algorithms is that their transposition index values form patterns rather than random values. This condition makes it easier for a cryptanalyst to look for relationships between ciphertexts, because some processes are predictable. This research designs a transposition algorithm called square transposition. Each process uses an 8 × 8 square as a place to insert and retrieve 64 bits. Pairing an input scheme and a retrieval scheme with unequal flow is an important factor in producing a good transposition. The square transposition can generate random, pattern-free indices, so transposition can be done better than in DES and AES.
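As a simplified illustration of inserting into and retrieving from an 8 × 8 square: the paper pairs insertion and retrieval schemes with unequal flow, whereas the row-write/column-read pairing below is only the simplest possible example, not the paper's scheme:

```python
def square_transpose(symbols64):
    """Toy 8x8 square transposition: write 64 symbols row by row,
    then read them back column by column, so the output index
    pattern differs from the input order."""
    assert len(symbols64) == 64
    square = [[None] * 8 for _ in range(8)]
    for i, s in enumerate(symbols64):
        r, c = divmod(i, 8)
        square[r][c] = s            # insertion scheme: by rows
    # Retrieval scheme with a different flow: by columns.
    return [square[r][c] for c in range(8) for r in range(8)]

msg = list(range(64))
scrambled = square_transpose(msg)
restored = square_transpose(scrambled)   # transposing twice restores it
```

Note that this plain row/column pairing is exactly the kind of predictable pattern the paper warns against; its contribution is choosing scheme pairs whose indices do not form such a pattern.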
Hyper-parameter optimization of convolutional neural network based on particl...journalBEEI
The document proposes using a particle swarm optimization (PSO) algorithm to optimize the hyperparameters of a convolutional neural network (CNN) for image classification. The PSO algorithm is used to find optimal values for CNN hyperparameters like the number and size of convolutional filters. In experiments on the MNIST handwritten digit dataset, the optimized CNN achieved a testing error rate of 0.87%, which is competitive with state-of-the-art models. The proposed approach finds optimized CNN architectures automatically without requiring manual design or encoding strategies during training.
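A minimal PSO loop of the kind used for such hyper-parameter searches might look as follows; the quadratic proxy objective stands in for an actual CNN training run, and all constants (swarm size, inertia, bounds) are assumptions:

```python
import random

def pso(fitness, bounds, n_particles=12, iters=40,
        w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal PSO sketch (minimisation): each particle is pulled
    toward its personal best and the global best; positions are
    clamped to the hyper-parameter bounds."""
    random.seed(seed)
    dim = len(bounds)
    pos = [[random.uniform(*bounds[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = max(lo, min(hi, pos[i][d] + vel[i][d]))
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Proxy objective: pretend validation error is minimised at
# (filters=32, kernel=3) for a hypothetical CNN.
def err(x):
    return (x[0] - 32) ** 2 + (x[1] - 3) ** 2

best = pso(err, bounds=[(8, 64), (1, 7)])
```

In the paper's setting, evaluating `fitness` would mean training the CNN with the candidate hyper-parameters and returning its validation error, which is what makes the search expensive in practice.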
Supervised machine learning based liver disease prediction approach with LASS...journalBEEI
In this contemporary era, the use of machine learning techniques is increasing rapidly in the field of medical science for detecting various diseases such as liver disease (LD). Around the globe, a large number of people die because of this deadly disease. By diagnosing the disease at a primary stage, early treatment can help cure the patient. In this research paper, a method is proposed to diagnose LD using supervised machine learning classification algorithms, namely logistic regression, decision tree, random forest, AdaBoost, KNN, linear discriminant analysis, gradient boosting and support vector machine (SVM). We also deployed a least absolute shrinkage and selection operator (LASSO) feature selection technique on the chosen dataset to suggest the attributes most highly correlated with LD. The predictions, made with 10-fold cross-validation (CV), are tested in terms of accuracy, sensitivity, precision and F1-score to forecast the disease. It is observed that the decision tree algorithm has the best performance, with accuracy, precision, sensitivity and F1-score values of 94.295%, 92%, 99% and 96% respectively with the inclusion of LASSO. Furthermore, a comparison with recent studies is shown to prove the significance of the proposed system.
A secure and energy saving protocol for wireless sensor networksjournalBEEI
The research domain of wireless sensor networks (WSN) has been extensively explored thanks to innovative technologies and research directions addressing the usability of WSNs under various schemes. This domain permits dependable tracking of a diversity of environments for both military and civil applications. The key management mechanism is a primary protocol for keeping the privacy and confidentiality of the data transmitted among different sensor nodes in WSNs. Since nodes are small, they are intrinsically limited by inadequate resources such as battery lifetime and memory capacity. The proposed secure and energy saving protocol (SESP) for wireless sensor networks has a significant impact on overall network lifetime and energy dissipation. To encrypt sent messages, SESP uses the concept of public-key cryptography, and it depends on sensor nodes' identities (IDs) to prevent message replay, so that the security goals of authentication, confidentiality, integrity, availability, and freshness are achieved. Finally, simulation results show that the proposed approach produced better energy consumption and network lifetime compared to the LEACH protocol: sensors die after 900 rounds under the proposed SESP protocol, while in the low-energy adaptive clustering hierarchy (LEACH) scheme they die after 750 rounds.
Plant leaf identification system using convolutional neural networkjournalBEEI
This paper proposes a leaf identification system using convolutional neural network (CNN). This proposed system can identify five types of local Malaysia leaf which were acacia, papaya, cherry, mango and rambutan. By using CNN from deep learning, the network is trained from the database that acquired from leaf images captured by mobile phone for image classification. ResNet-50 was the architecture has been used for neural networks image classification and training the network for leaf identification. The recognition of photographs leaves requested several numbers of steps, starting with image pre-processing, feature extraction, plant identification, matching and testing, and finally extracting the results achieved in MATLAB. Testing sets of the system consists of 3 types of images which were white background, and noise added and random background images. Finally, interfaces for the leaf identification system have developed as the end software product using MATLAB app designer. As a result, the accuracy achieved for each training sets on five leaf classes are recorded above 98%, thus recognition process was successfully implemented.
Customized moodle-based learning management system for socially disadvantaged...journalBEEI
This study aims to develop Moodle-based LMS with customized learning content and modified user interface to facilitate pedagogical processes during covid-19 pandemic and investigate how teachers of socially disadvantaged schools perceived usability and technology acceptance. Co-design process was conducted with two activities: 1) need assessment phase using an online survey and interview session with the teachers and 2) the development phase of the LMS. The system was evaluated by 30 teachers from socially disadvantaged schools for relevance to their distance learning activities. We employed computer software usability questionnaire (CSUQ) to measure perceived usability and the technology acceptance model (TAM) with insertion of 3 original variables (i.e., perceived usefulness, perceived ease of use, and intention to use) and 5 external variables (i.e., attitude toward the system, perceived interaction, self-efficacy, user interface design, and course design). The average CSUQ rating exceeded 5.0 of 7 point-scale, indicated that teachers agreed that the information quality, interaction quality, and user interface quality were clear and easy to understand. TAM results concluded that the LMS design was judged to be usable, interactive, and well-developed. Teachers reported an effective user interface that allows effective teaching operations and lead to the system adoption in immediate time.
Understanding the role of individual learner in adaptive and personalized e-l...journalBEEI
Dynamic learning environment has emerged as a powerful platform in a modern e-learning system. The learning situation that constantly changing has forced the learning platform to adapt and personalize its learning resources for students. Evidence suggested that adaptation and personalization of e-learning systems (APLS) can be achieved by utilizing learner modeling, domain modeling, and instructional modeling. In the literature of APLS, questions have been raised about the role of individual characteristics that are relevant for adaptation. With several options, a new problem has been raised where the attributes of students in APLS often overlap and are not related between studies. Therefore, this study proposed a list of learner model attributes in dynamic learning to support adaptation and personalization. The study was conducted by exploring concepts from the literature selected based on the best criteria. Then, we described the results of important concepts in student modeling and provided definitions and examples of data values that researchers have used. Besides, we also discussed the implementation of the selected learner model in providing adaptation in dynamic learning.
Prototype mobile contactless transaction system in traditional markets to sup...journalBEEI
1) Researchers developed a prototype contactless transaction system using QR codes and digital payments to support physical distancing during the COVID-19 pandemic in traditional markets.
2) The system allows sellers and buyers in traditional markets to conduct fast, secure transactions via smartphones without direct cash exchange. Buyers scan sellers' QR codes to view product details and make e-wallet payments.
3) Testing showed the system's functions worked properly and users found it easy to use and useful for supporting contactless transactions and digital transformation of traditional markets. However, further development is needed to increase trust in digital payments for users unfamiliar with the technology.
Wireless HART stack using multiprocessor technique with laxity algorithmjournalBEEI
The use of a real-time operating system is required for the demarcation of industrial wireless sensor network (IWSN) stacks (RTOS). In the industrial world, a vast number of sensors are utilised to gather various types of data. The data gathered by the sensors cannot be prioritised ahead of time. Because all of the information is equally essential. As a result, a protocol stack is employed to guarantee that data is acquired and processed fairly. In IWSN, the protocol stack is implemented using RTOS. The data collected from IWSN sensor nodes is processed using non-preemptive scheduling and the protocol stack, and then sent in parallel to the IWSN's central controller. The real-time operating system (RTOS) is a process that occurs between hardware and software. Packets must be sent at a certain time. It's possible that some packets may collide during transmission. We're going to undertake this project to get around this collision. As a prototype, this project is divided into two parts. The first uses RTOS and the LPC2148 as a master node, while the second serves as a standard data collection node to which sensors are attached. Any controller may be used in the second part, depending on the situation. Wireless HART allows two nodes to communicate with each other.
Implementation of double-layer loaded on octagon microstrip yagi antennajournalBEEI
This document describes the implementation of a double-layer structure on an octagon microstrip yagi antenna (OMYA) to improve its performance at 5.8 GHz. The double-layer consists of two double positive (DPS) substrates placed above the OMYA. Simulation and experimental results show that the double-layer configuration increases the gain of the OMYA by 2.5 dB compared to without the double-layer. The measured bandwidth of the OMYA with double-layer is 14.6%, indicating the double-layer can increase both the gain and bandwidth of the OMYA.
The calculation of the field of an antenna located near the human headjournalBEEI
In this work, a numerical calculation was carried out in one of the universal programs for automatic electro-dynamic design. The calculation is aimed at obtaining numerical values for specific absorbed power (SAR). It is the SAR value that can be used to determine the effect of the antenna of a wireless device on biological objects; the dipole parameters will be selected for GSM1800. Investigation of the influence of distance to a cell phone on radiation shows that absorbed in the head of a person the effect of electromagnetic radiation on the brain decreases by three times this is a very important result the SAR value has decreased by almost three times it is acceptable results.
Exact secure outage probability performance of uplinkdownlink multiple access...journalBEEI
In this paper, we study uplink-downlink non-orthogonal multiple access (NOMA) systems by considering the secure performance at the physical layer. In the considered system model, the base station acts a relay to allow two users at the left side communicate with two users at the right side. By considering imperfect channel state information (CSI), the secure performance need be studied since an eavesdropper wants to overhear signals processed at the downlink. To provide secure performance metric, we derive exact expressions of secrecy outage probability (SOP) and and evaluating the impacts of main parameters on SOP metric. The important finding is that we can achieve the higher secrecy performance at high signal to noise ratio (SNR). Moreover, the numerical results demonstrate that the SOP tends to a constant at high SNR. Finally, our results show that the power allocation factors, target rates are main factors affecting to the secrecy performance of considered uplink-downlink NOMA systems.
Design of a dual-band antenna for energy harvesting applicationjournalBEEI
This report presents an investigation on how to improve the current dual-band antenna to enhance the better result of the antenna parameters for energy harvesting application. Besides that, to develop a new design and validate the antenna frequencies that will operate at 2.4 GHz and 5.4 GHz. At 5.4 GHz, more data can be transmitted compare to 2.4 GHz. However, 2.4 GHz has long distance of radiation, so it can be used when far away from the antenna module compare to 5 GHz that has short distance in radiation. The development of this project includes the scope of designing and testing of antenna using computer simulation technology (CST) 2018 software and vector network analyzer (VNA) equipment. In the process of designing, fundamental parameters of antenna are being measured and validated, in purpose to identify the better antenna performance.
Transforming data-centric eXtensible markup language into relational database...journalBEEI
eXtensible markup language (XML) appeared internationally as the format for data representation over the web. Yet, most organizations are still utilising relational databases as their database solutions. As such, it is crucial to provide seamless integration via effective transformation between these database infrastructures. In this paper, we propose XML-REG to bridge these two technologies based on node-based and path-based approaches. The node-based approach is good to annotate each positional node uniquely, while the path-based approach provides summarised path information to join the nodes. On top of that, a new range labelling is also proposed to annotate nodes uniquely by ensuring the structural relationships are maintained between nodes. If a new node is to be added to the document, re-labelling is not required as the new label will be assigned to the node via the new proposed labelling scheme. Experimental evaluations indicated that the performance of XML-REG exceeded XMap, XRecursive, XAncestor and Mini-XML concerning storing time, query retrieval time and scalability. This research produces a core framework for XML to relational databases (RDB) mapping, which could be adopted in various industries.
Key performance requirement of future next wireless networks (6G)journalBEEI
The document provides an overview of the key performance indicators (KPIs) for 6G wireless networks compared to 5G networks. Some of the major KPIs discussed for 6G include: achieving data rates of up to 1 Tbps and individual user data rates up to 100 Gbps; reducing latency below 10 milliseconds; supporting up to 10 million connected devices per square kilometer; improving spectral efficiency by up to 100 times through technologies like terahertz communications and smart surfaces; and achieving an energy efficiency of 1 pico-joule per bit transmitted through techniques like wireless power transmission and energy harvesting. The document outlines how 6G aims to integrate terrestrial, aerial and maritime communications into a single network to provide ubiquitous connectivity with higher
Noise resistance territorial intensity-based optical flow using inverse confi...journalBEEI
This paper presents the use of the inverse confidential technique on bilateral function with the territorial intensity-based optical flow to prove the effectiveness in noise resistance environment. In general, the image’s motion vector is coded by the technique called optical flow where the sequences of the image are used to determine the motion vector. But, the accuracy rate of the motion vector is reduced when the source of image sequences is interfered by noises. This work proved that the inverse confidential technique on bilateral function can increase the percentage of accuracy in the motion vector determination by the territorial intensity-based optical flow under the noisy environment. We performed the testing with several kinds of non-Gaussian noises at several patterns of standard image sequences by analyzing the result of the motion vector in a form of the error vector magnitude (EVM) and compared it with several noise resistance techniques in territorial intensity-based optical flow method.
Modeling climate phenomenon with software grids analysis and display system i...journalBEEI
This study aims to model climate change based on rainfall, air temperature, pressure, humidity and wind with grADS software and create a global warming module. This research uses 3D model, define, design, and develop. The results of the modeling of the five climate elements consist of the annual average temperature in Indonesia in 2009-2015 which is between 29oC to 30.1oC, the horizontal distribution of the annual average pressure in Indonesia in 2009-2018 is between 800 mBar to 1000 mBar, the horizontal distribution the average annual humidity in Indonesia in 2009 and 2011 ranged between 27-57, in 2012-2015, 2017 and 2018 it ranged between 30-60, during the East Monsoon, the wind circulation moved from northern Indonesia to the southern region Indonesia. During the west monsoon, the wind circulation moves from the southern part of Indonesia to the northern part of Indonesia. The global warming module for SMA/MA produced is feasible to use, this is in accordance with the value given by the validate of 69 which is in the appropriate category and the response of teachers and students through a 91% questionnaire.
An approach of re-organizing input dataset to enhance the quality of emotion ...journalBEEI
The purpose of this paper is to propose an approach of re-organizing input data to recognize emotion based on short signal segments and increase the quality of emotional recognition using physiological signals. MIT's long physiological signal set was divided into two new datasets, with shorter and overlapped segments. Three different classification methods (support vector machine, random forest, and multilayer perceptron) were implemented to identify eight emotional states based on statistical features of each segment in these two datasets. By re-organizing the input dataset, the quality of recognition results was enhanced. The random forest shows the best classification result among three implemented classification methods, with an accuracy of 97.72% for eight emotional states, on the overlapped dataset. This approach shows that, by re-organizing the input dataset, the high accuracy of recognition results can be achieved without the use of EEG and ECG signals.
Parking detection system using background subtraction and HSV color segmentationjournalBEEI
Manual system vehicle parking makes finding vacant parking lots difficult, so it has to check directly to the vacant space. If many people do parking, then the time needed for it is very much or requires many people to handle it. This research develops a real-time parking system to detect parking. The system is designed using the HSV color segmentation method in determining the background image. In addition, the detection process uses the background subtraction method. Applying these two methods requires image preprocessing using several methods such as grayscaling, blurring (low-pass filter). In addition, it is followed by a thresholding and filtering process to get the best image in the detection process. In the process, there is a determination of the ROI to determine the focus area of the object identified as empty parking. The parking detection process produces the best average accuracy of 95.76%. The minimum threshold value of 255 pixels is 0.4. This value is the best value from 33 test data in several criteria, such as the time of capture, composition and color of the vehicle, the shape of the shadow of the object’s environment, and the intensity of light. This parking detection system can be implemented in real-time to determine the position of an empty place.
Quality of service performances of video and voice transmission in universal ...journalBEEI
The universal mobile telecommunications system (UMTS) has distinct benefits in that it supports a wide range of quality of service (QoS) criteria that users require in order to fulfill their requirements. The transmission of video and audio in real-time applications places a high demand on the cellular network, therefore QoS is a major problem in these applications. The ability to provide QoS in the UMTS backbone network necessitates an active QoS mechanism in order to maintain the necessary level of convenience on UMTS networks. For UMTS networks, investigation models for end-to-end QoS, total transmitted and received data, packet loss, and throughput providing techniques are run and assessed and the simulation results are examined. According to the results, appropriate QoS adaption allows for specific voice and video transmission. Finally, by analyzing existing QoS parameters, the QoS performance of 4G/UMTS networks may be improved.
Department of Environment (DOE) Mix Design with Fly Ash.MdManikurRahman
Concrete Mix Design with Fly Ash by DOE Method. The Department of Environmental (DOE) approach to fly ash-based concrete mix design is covered in this study.
The Department of Environment (DOE) method of mix design is a British method originally developed in the UK in the 1970s. It is widely used for concrete mix design, including mixes that incorporate supplementary cementitious materials (SCMs) such as fly ash.
When using fly ash in concrete, the DOE method can be adapted to account for its properties and effects on workability, strength, and durability. Here's a step-by-step overview of how the DOE method is applied with fly ash.
UNIT-1-PPT-Introduction about Power System Operation and ControlSridhar191373
Power scenario in Indian grid – National and Regional load dispatching centers –requirements of good power system - necessity of voltage and frequency regulation – real power vs frequency and reactive power vs voltage control loops - system load variation, load curves and basic concepts of load dispatching - load forecasting - Basics of speed governing mechanisms and modeling - speed load characteristics - regulation of two generators in parallel.
DIY Gesture Control ESP32 LiteWing Drone using PythonCircuitDigest
Build a gesture-controlled LiteWing drone using ESP32 and MPU6050. This presentation explains components, circuit diagram, assembly steps, and working process.
Read more : https://circuitdigest.com/microcontroller-projects/diy-gesture-controlled-drone-using-esp32-and-python-with-litewing
Ideal for DIY drone projects, robotics enthusiasts, and embedded systems learners. Explore how to create a low-cost, ESP32 drone with real-time wireless gesture control.
UNIT-5-PPT Computer Control Power of Power SystemSridhar191373
Introduction
Conceptual Model of the EMS
EMS Functions and SCADA Applications.
Time decomposition of the power system operation.
Open Distributed system in EMS
OOPS
This presentation provides a comprehensive overview of a specialized test rig designed in accordance with ISO 4548-7, the international standard for evaluating the vibration fatigue resistance of full-flow lubricating oil filters used in internal combustion engines.
Key features include:
ISO 4020-6.1 – Filter Cleanliness Test Rig: Precision Testing for Fuel Filter Integrity
Explore the design, functionality, and standards compliance of our advanced Filter Cleanliness Test Rig developed according to ISO 4020-6.1. This rig is engineered to evaluate fuel filter cleanliness levels with high accuracy and repeatability—critical for ensuring the performance and durability of fuel systems.
🔬 Inside This Presentation:
Overview of ISO 4020-6.1 testing protocols
Rig components and schematic layout
Test methodology and data acquisition
Applications in automotive and industrial filtration
Key benefits: accuracy, reliability, compliance
Perfect for R&D engineers, quality assurance teams, and lab technicians focused on filtration performance and standard compliance.
🛠️ Ensure Filter Cleanliness — Validate with Confidence.
Expansive soils (ES) have a long history of being difficult to work with in geotechnical engineering. Numerous studies have examined how bagasse ash (BA) and lime affect the unconfined compressive strength (UCS) of ES. Due to the complexities of this composite material, determining the UCS of stabilized ES using traditional methods such as empirical approaches and experimental methods is challenging. The use of artificial neural networks (ANN) for forecasting the UCS of stabilized soil has, however, been the subject of a few studies. This paper presents the results of using rigorous modelling techniques like ANN and multi-variable regression model (MVR) to examine the UCS of BA and a blend of BA-lime (BA + lime) stabilized ES. Laboratory tests were conducted for all dosages of BA and BA-lime admixed ES. 79 samples of data were gathered with various combinations of the experimental variables prepared and used in the construction of ANN and MVR models. The input variables for two models are seven parameters: BA percentage, lime percentage, liquid limit (LL), plastic limit (PL), shrinkage limit (SL), maximum dry density (MDD), and optimum moisture content (OMC), with the output variable being 28-day UCS. The ANN model prediction performance was compared to that of the MVR model. The models were evaluated and contrasted on the training dataset (70% data) and the testing dataset (30% residual data) using the coefficient of determination (R2), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) criteria. The findings indicate that the ANN model can predict the UCS of stabilized ES with high accuracy. The relevance of various input factors was estimated via sensitivity analysis utilizing various methodologies. For both the training and testing data sets, the proposed model has an elevated R2 of 0.9999. It has a minimal MAE and RMSE value of 0.0042 and 0.0217 for training data and 0.0038 and 0.0104 for testing data. 
As a result, the generated model excels the MVR model in terms of UCS prediction.
Forensic Science – Digital Forensics – Digital Evidence – The Digital Forensi...ManiMaran230751
Forensic Science – Digital Forensics – Digital Evidence – The Digital Forensics Process – Introduction – The
Identification Phase – The Collection Phase – The Examination Phase – The Analysis Phase – The
Presentation Phase.
This presentation provides a comprehensive overview of air filter testing equipment and solutions based on ISO 5011, the globally recognized standard for performance testing of air cleaning devices used in internal combustion engines and compressors.
Key content includes:
Kevin Corke Spouse Revealed A Deep Dive Into His Private Life.pdfMedicoz Clinic
Kevin Corke, a respected American journalist known for his work with Fox News, has always kept his personal life away from the spotlight. Despite his public presence, details about his spouse remain mostly private. Fans have long speculated about his marital status, but Corke chooses to maintain a clear boundary between his professional and personal life. While he occasionally shares glimpses of his family on social media, he has not publicly disclosed his wife’s identity. This deep dive into his private life reveals a man who values discretion, keeping his loved ones shielded from media attention.
Filters for Electromagnetic Compatibility ApplicationsMathias Magdowski
In this lecture, I explain the fundamentals of electromagnetic compatibility (EMC), the basic coupling model and coupling paths via cables, electric fields, magnetic fields and wave fields. We also look at electric vehicles as an example of systems with many conducted EMC problems due to power electronic devices such as rectifiers and inverters with non-linear components such as diodes and fast switching components such as MOSFETs or IGBTs. After a brief review of circuit analysis fundamentals and an experimental investigation of the frequency-dependent impedance of resistors, capacitors and inductors, we look at a simple low-pass filter. The input impedance from both sides as well as the transfer function are measured.
MODULE 5 BUILDING PLANNING AND DESIGN SY BTECH ACOUSTICS SYSTEM IN BUILDINGDr. BASWESHWAR JIRWANKAR
: Introduction to Acoustics & Green Building -
Absorption of sound, various materials, Sabine’s formula, optimum reverberation time, conditions for good acoustics Sound insulation:
Acceptable noise levels, noise prevention at its source, transmission of noise, Noise control-general considerations
Green Building: Concept, Principles, Materials, Characteristics, Applications
Bituminous binders are sticky, black substances derived from the refining of crude oil. They are used to bind and coat aggregate materials in asphalt mixes, providing cohesion and strength to the pavement.
Tesia Dobrydnia brings her many talents to her career as a chemical engineer in the oil and gas industry. With the same enthusiasm she puts into her work, she engages in hobbies and activities including watching movies and television shows, reading, backpacking, and snowboarding. She is a Relief Senior Engineer for Chevron and has been employed by the company since 2007. Tesia is considered a leader in her industry and is known to for her grasp of relief design standards.
Bulletin of Electrical Engineering and Informatics
ISSN: 2302-9285
Vol. 6, No. 2, June 2017, pp. 174~180, DOI: 10.11591/eei.v6i2.649
Received March 7, 2017; Revised May 8, 2017; Accepted May 22, 2017

Proactive Scheduling in Cloud Computing

Ripandeep Kaur1*, Gurjot Kaur2
Department of CSE, Chandigarh University, Gharuan, Punjab, India
*Corresponding author, e-mail: [email protected]1, randhawa789@gmail.com2
Abstract
Autonomic fault-aware scheduling is an important feature for cloud computing, as it relates to adapting to workload variation. In this context, this paper proposes a fault-aware pattern-matching autonomic scheduling scheme for cloud computing based on autonomic computing concepts. To validate the proposed solution, we performed two experiments, one with the traditional approach and the other with the pattern-recognition fault-aware approach. The results show the effectiveness of the scheme.
Keywords: fault tolerance, scheduling, performance metrics, cloud computing, QoS
1. Introduction
Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as "services" to end-users under a usage-based payment model. It can leverage virtualized services even on the fly, based on requirements (workload patterns and QoS) varying with time [1]. According to the NIST definition: "Cloud computing (CC) is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider (SP) interaction [2]." Cloud service users demand from their SPs end-to-end QoS assurance, high levels of service reliability, and continued availability. Nowadays, IT enterprises are adopting cloud computing in order to reduce the total cost involved and also to improve the QoS delivered to customers.
There are no standard metrics or a standard way to ensure QoS to the customers. Several models and algorithms have been proposed to ensure QoS to users and to manage workloads properly for QoS and performance. Among the important research issues in CC that need attention for efficient performance are fault tolerance and scheduling [3].
Various types of scheduling algorithms exist for cloud computing systems. Most of them can be applied in the cloud environment with suitable verification. The main goal of a job scheduling algorithm is to achieve high-performance computing and the best system throughput. During scheduling, existing algorithms are not fully capable of evaluating faults and taking decisions accordingly. Multiple reasons exist for the low performance of scheduling algorithms. The majority of the literature has focused on decreasing response time in order to provide scheduling in cloud environments with Quality of Service (QoS). Performance is therefore definitely one of the major concerns with existing scheduling algorithms, but improving performance while also enhancing the fault tolerance of the cloud system is a major research area that has not been explored very well [4-5]. To provide guaranteed QoS to users, it is necessary that jobs be efficiently mapped to the given resources.
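As a concrete, deliberately simplified illustration of mapping jobs to resources while accounting for faults, the sketch below greedily assigns each job to the VM with the lowest failure-penalized completion-time estimate. The VM fields, the penalty formula, and all numbers are our own assumptions for illustration, not an algorithm from this paper:

```python
# Greedy fault-aware mapping sketch: prefer the VM with the lowest
# expected completion time, penalized by its observed failure rate.
# All names and numbers here are illustrative assumptions.

def expected_cost(job_len, vm):
    # Penalize slow VMs and VMs with a history of failures.
    return (job_len / vm["mips"]) * (1.0 + vm["failure_rate"])

def map_jobs(jobs, vms):
    """Assign each job (length in million instructions) to a VM, longest first."""
    schedule = {}
    for job_id, length in sorted(jobs.items(), key=lambda j: -j[1]):
        best = min(vms, key=lambda vm: expected_cost(length, vm))
        schedule[job_id] = best["name"]
    return schedule

vms = [
    {"name": "vm1", "mips": 1000, "failure_rate": 0.10},
    {"name": "vm2", "mips": 800, "failure_rate": 0.01},
]
jobs = {"j1": 4000, "j2": 1000}
print(map_jobs(jobs, vms))
```

Here the faster vm1 wins both jobs despite its higher failure rate; tuning the penalty term shifts the balance between speed and reliability.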
A Service Level Agreement (SLA) is the major parameter considered for assuring QoS, and it is the responsibility of SPs, whether at the infrastructure, platform, or software level, to provide quality guarantees, usually in terms of availability and performance, to their customers in the form of SLAs. The system should therefore be fault-tolerant, and recovery time should be minimal to avoid SLA violations. A replica should be maintained near the customer's location to reduce the recovery time after any failure or disaster. An SLA should thus include availability, response time, and degree of support [6].
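The availability and response-time guarantees mentioned above can be checked mechanically. The following minimal sketch, in which the field names and thresholds are illustrative assumptions, flags which SLA terms a measurement violates:

```python
# Sketch of an SLA violation check: compare measured availability and
# response time against agreed thresholds. Field names and values are
# illustrative assumptions, not terms from a real SLA.

def sla_violations(sla, measured):
    violations = []
    if measured["availability"] < sla["availability"]:
        violations.append("availability")
    if measured["response_time_ms"] > sla["response_time_ms"]:
        violations.append("response_time")
    return violations

sla = {"availability": 0.999, "response_time_ms": 200}
measured = {"availability": 0.995, "response_time_ms": 180}
print(sla_violations(sla, measured))
```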
This research paper proposes a service ranking algorithm for CC based on detailed performance monitoring and historical analysis. Based on their contribution, a weightage is assigned to all service quality factors or performance metrics, which are finally aggregated by the developed formula to compute a ranking score (R) for a service. This new model is used for VM allocation, re-allocation, and placement, taking into consideration the best/highest-ranked virtual machine/datacenter available. Workloads requested by users are handled by pre-analyzing the job requests and the resource status of the data center, considering various parameters such as reliability, reputation, network latency, processing time, and availability.
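The weighted aggregation of quality factors into a single ranking score R can be sketched as follows. The metric names, weights, and normalization to [0, 1] are illustrative assumptions, not the exact formula developed in this paper:

```python
# Sketch of the ranking idea: each quality metric gets a weight and the
# weighted, normalized values are aggregated into a single score R per
# VM/datacenter. Weights and values below are illustrative assumptions.

WEIGHTS = {"reliability": 0.3, "reputation": 0.2, "availability": 0.2,
           "processing_time": 0.2, "network_latency": 0.1}

def ranking_score(metrics):
    """Aggregate normalized metrics (all in [0, 1], higher = better)."""
    return sum(WEIGHTS[m] * v for m, v in metrics.items())

candidates = {
    "dc1_vm1": {"reliability": 0.9, "reputation": 0.8, "availability": 0.95,
                "processing_time": 0.7, "network_latency": 0.6},
    "dc2_vm1": {"reliability": 0.7, "reputation": 0.9, "availability": 0.85,
                "processing_time": 0.9, "network_latency": 0.7},
}
ranked = sorted(candidates, key=lambda c: ranking_score(candidates[c]), reverse=True)
print(ranked[0])  # highest-ranked candidate receives the next VM placement
```

Sorting candidates by R turns the monitored history directly into a placement preference list.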
We further summarize our objectives as follows:
a. To develop a system that pre-estimates the time consumption of a workload in order to identify the highest-ranked VMs/DCs.
b. To establish the effectiveness of the system with respect to Service Level Agreement (SLA) violations. We will evaluate the impact of faults on scheduling and improve scheduling by minimizing SLA violations.
c. To develop a QoS system for users in terms of response time, i.e., the time taken by the Cloud to respond to a user's request.
2. Literature Review
Stelios Sidiroglou et al. [7] presented ASSURE, a new self-healing software (s/w) approach that introduces rescue points (RPs) for detecting, tolerating, and recovering from s/w failures in server applications while preserving system availability and integrity. Using fuzzing, they identify rescue points, which are implemented with a checkpoint/restart technique. When a fault is first detected, an application replica is used to find out which RPs can be employed to recover the execution of future programs. This approach was implemented on various server applications such as proxy servers, domain name servers, databases, and web servers. The main goal of this approach is to heal s/w services automatically from s/w failures that were previously unidentified or unknown.
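The checkpoint/restart idea behind rescue points can be illustrated with a toy sketch: snapshot the state before a risky operation, and on failure roll back to the snapshot and return a safe fallback instead of crashing the service. This is our own illustration, not ASSURE's actual mechanism:

```python
# Minimal checkpoint/restart sketch in the spirit of rescue points.
# This is a toy illustration, not ASSURE's real implementation.
import copy

def with_rescue_point(state, risky_op, fallback):
    checkpoint = copy.deepcopy(state)   # snapshot at the rescue point
    try:
        return risky_op(state), state
    except Exception:
        return fallback, checkpoint     # roll back and keep serving

def handler(state):
    state["requests"] += 1
    if state["requests"] > 2:
        raise RuntimeError("injected fault")
    return "ok"

state = {"requests": 0}
for _ in range(3):
    result, state = with_rescue_point(state, handler, "degraded")
print(result, state)
```

The third call triggers the injected fault, so the service answers "degraded" and the state is rolled back to the last consistent snapshot rather than being corrupted.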
Hai Jin et al [8] introduced SHelp, a novel self-healing software approach that extends ASSURE by applying error virtualization and weighted rescue points (WRPs), which help server applications avoid faulty execution paths. It can survive software failures and ensure high service availability in a cloud computing environment. SHelp contributes two techniques: first, WRPs to recover from faults that are difficult for ASSURE to handle; second, a two-level RP database that shares fault-related information among applications, which helps in recovering from further faults.
Sheheryar Malik et al [9] proposed AFTRC (Adaptive Fault Tolerance in Real-time Cloud Computing), a model that tolerates faults by computing the reliability (R) of every VM. Owing to its adaptive behavior, the reliability of each VM is updated after every cycle. The main goal of this model is to assign a reliability weight to every VM and to remove (or add) a VM if it is not performing efficiently in the real-time environment. AFTRC also provides backward/forward recovery if any VM does not achieve the minimum reliability level, and it uses a replication technique to achieve fault tolerance.
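The per-cycle reliability adaptation in AFTRC can be illustrated with a small sketch. The exact update rule, the adaptability factor and the minimum reliability level below are assumptions for illustration, not the published model: reliability rises after a successful cycle and decays after a failed one, and a VM falling below the minimum level is removed from the active pool.

```python
def update_reliability(r, success, adaptability=0.1, initial=1.0):
    """Adaptively update a VM's reliability after one computing cycle.

    Illustrative multiplicative scheme (an assumption, not AFTRC's exact
    formula): success moves reliability toward the initial level,
    failure decays it.
    """
    if success:
        return min(initial, r + adaptability * (initial - r))
    return r * (1.0 - adaptability)

MIN_RELIABILITY = 0.5  # assumed minimum level; below this the VM is removed

vms = {"vm1": 0.9, "vm2": 0.9}
vms["vm1"] = update_reliability(vms["vm1"], success=True)
for _ in range(7):  # vm2 fails seven cycles in a row
    vms["vm2"] = update_reliability(vms["vm2"], success=False)

# Only VMs that still meet the minimum reliability remain schedulable.
active = {name: r for name, r in vms.items() if r >= MIN_RELIABILITY}
```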
Dilbag Singh et al [10] proposed a smart failover approach for offering high availability to cloud customers using a new algorithm, Integrated Checkpointing with Load Balancing (ICWLB), which reduces checkpointing overhead by using multilevel checkpoints. The proposed strategy uses two different algorithms, global and local checkpointing. Its performance was compared with existing methods on metrics such as maximum/minimum execution time (Max/Min ET) and maximum/minimum waiting time (Max/Min WT), and the proposed strategy gave better results than the other strategies.
Pranesh Das et al [11] proposed a smart failover approach, Virtualization Fault Tolerance (VFT), which attains fault tolerance through redundancy/replication. They presented a virtualization technique in which the load balancer (LB) distributes load to those nodes whose computing nodes have an excellent performance history, measured by the success rate of those nodes. The model helps to decrease service time and improve availability through its decision maker and cloud manager modules.
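The success-rate-based routing idea in VFT can be sketched briefly. The node names, histories and tie-breaking rule below are assumptions for the example, not details from the original paper.

```python
def success_rate(history):
    """Fraction of successfully completed tasks in a node's history
    (1 = success, 0 = failure)."""
    return sum(history) / len(history) if history else 0.0

def pick_node(nodes):
    """Route the next task to the node with the best performance history.

    Ties are broken by node name for determinism (an assumption made
    here for illustration).
    """
    return max(sorted(nodes), key=lambda n: success_rate(nodes[n]))

history = {
    "node_a": [1, 1, 0, 1],  # 75% success
    "node_b": [1, 1, 1, 1],  # 100% success
    "node_c": [0, 1, 0, 1],  # 50% success
}
best = pick_node(history)
```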
Deepak Poola et al [12] proposed an algorithm to schedule workflow tasks on cloud resources using the spot instance (SI) and on-demand instance (ODI) pricing models, reducing execution cost while respecting task deadlines. The algorithm uses a bidding method to decrease cost and bids according to the requirements of the workflow. It also tolerates faults arising from early termination of spot instances and is robust against performance variations of cloud instances. Using a checkpointing technique, this work saves up to 14% in cost.
Mohammed Amoon [13] proposed an economy-based fault tolerance framework that maintains monetary profit by providing a dynamic number of replicas and tolerating faults to avoid failures. The main work is presented through two algorithms, VMC (VM Classification) and FTSS (FT Strategy Selection). VMC classifies cloud VMs using the available information on service usage time and VM failure probability, and selects the most valuable VMs that are profitable for the cloud. FTSS selects a suitable fault tolerance approach for the selected virtual machine, depending on customer requirements such as time deadlines and application cost. The framework employs both proactive and reactive fault tolerance, providing hybrid FT. On the reactive side it uses strategies such as checkpointing and replication, with parallel and multiversion mechanisms of the replication strategy.
Anju Bala et al [14] proposed an Autonomic Fault Tolerant (AFT) scheduling approach to assist the execution of parallel tasks in cloud applications such as scientific workflows (SWs). It combines a well-organized fault-tolerant (FT) scheduling technique with hybrid heuristics (HH) that merge features of the FCFS, Min-Min and Max-Child heuristics. In the FT technique, if a task failure occurs due to over-consumption of resources, VM migration (VMM) automatically migrates the VM. The AFT approach significantly reduces makespan, standard deviation and total mean execution time, and improves the performance of scientific workflows.
Punit Gupta et al [15] proposed a load and fault aware honey bee (FLHB) scheduling algorithm for cloud IaaS. It provides high quality of service to the customer at least cost, considering datacenter QoS parameters such as system load (MIPS), network load, initialization time and fault rate to improve performance and service quality in the cloud infrastructure environment.
3. System Model
This section explains our proposed system model, the Fault Aware Scheduling Technique (FAST), shown in Figure 1. The workload generator is responsible for creating workloads; it plays the role of the users requesting VMs. These users define a set of quality parameters that the system needs to meet.
Figure 1. Proposed system model
Therefore, a model is required that involves the following steps:
a. A monitoring application collects component values for each VM. After the monitored values are retrieved, a fuzzy prediction process is initiated that sets the minimum and maximum performance of the VM components: for each request to process there is a CPU requirement in the range [CPUmin∼CPUmax], and similarly memory in [MEMmin∼MEMmax] and bandwidth in [BWmin∼BWmax]. After the degree of truth is obtained for each VM component (fuzzification), these values are stored in a LOG.
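The fuzzification step can be sketched with a simple linear membership function. The paper does not fix a particular membership function, so the linear form below, the component ranges and the monitored values are all illustrative assumptions.

```python
def degree_of_truth(value, lo, hi):
    """Linear membership of a monitored value within [lo, hi].

    Returns 0 below lo, 1 above hi, and a linear degree in between
    (an assumed simple fuzzifier for illustration).
    """
    if value <= lo:
        return 0.0
    if value >= hi:
        return 1.0
    return (value - lo) / (hi - lo)

# Hypothetical [min, max] ranges and monitored values per VM component:
ranges = {"cpu": (200, 1000), "mem": (256, 2048), "bw": (100, 1000)}
monitored = {"cpu": 600, "mem": 1152, "bw": 1000}

# The degrees of truth per component are what gets stored in the LOG.
log_entry = {comp: degree_of_truth(monitored[comp], *ranges[comp])
             for comp in ranges}
```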
b. Clustering of VMs: each VM with a common set of configurations is put into a common cluster.
c. A commonly used VM allocation policy (the Round Robin algorithm) allocates incoming requests to these clusters.
d. SLA violations are tracked continuously, and on any positive sign, a pattern of the VM's behavior is obtained by comparing the current value with the LOG.
e. The pattern algorithm is based on density-based spatial clustering [10], which identifies the distribution of data in the current cluster and generates a trigger when a change is required. Hence the first SLA violation acts as the threshold value and is represented by ε.
f. After the first SLA violation, the input process is restarted and the obtained results are re-fuzzified; the current performance is then logged. The resulting new cluster is used for scheduling new job requests, which identifies faulty VMs that are not meeting user requirements.
g. Identifying these faulty VMs helps the load balancer redirect incoming requests to VMs that are working up to their capacity, so that only a minimal flow of requests goes to the faulty VMs.
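The density-based trigger in steps d-e can be illustrated with a simplified neighborhood check: if the VM's current performance point has no logged neighbor within ε, a re-clustering trigger fires. This is a sketch only; the paper uses full density-based spatial clustering, and the logged points, ε value and distance metric below are assumptions.

```python
def needs_reclustering(current, log_points, eps):
    """Fire a trigger when the current performance point has no logged
    neighbor within eps (a simplified, density-inspired check)."""
    def dist(p, q):
        # Euclidean distance between two performance points.
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return all(dist(current, p) > eps for p in log_points)

# Logged (cpu, mem, bw) degrees of truth for a healthy VM (illustrative):
log = [(0.5, 0.5, 1.0), (0.52, 0.48, 0.9)]
eps = 0.2  # threshold, set from the first observed SLA violation

trigger_ok = needs_reclustering((0.51, 0.5, 0.95), log, eps)    # near the log
trigger_faulty = needs_reclustering((0.1, 0.1, 0.2), log, eps)  # far away
```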
Algorithm 1: Clustering
For every VMi
    Resi = Get_monitored_result(VMi)   // [CPUmin∼CPUmax], [MEMmin∼MEMmax], [BWmin∼BWmax]
    LOG(Resi)
    Set(min, max)i                     // Fuzzification
    Get_result = Match_Cluster_to_VMi(Cluster_Name, VMi)
    If (Get_result == TRUE)
        Set_VMi(Cluster_Name)
    EndIf
EndFor
Algorithm 2: Tracking Faults
Set_threshold = first_SLA_violation(VMi)
If (SLA_violation == TRUE)
    Set(min, max)                      // Re-fuzzification
    Get_result = Match_Cluster_to_VMi(Cluster_Name, VMi)
    If (Get_result == TRUE)
        Set_VMi(Cluster_Name)
    EndIf
EndIf
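The two algorithms above can be rendered as a runnable sketch. The data structures, the placeholder fuzzifier and the average-degree cluster-matching rule are assumptions made for illustration; the paper leaves these details to the implementation.

```python
# Runnable sketch of Algorithms 1 and 2 (structures and rules assumed).
clusters = {"high": [], "low": []}
log = {}

def fuzzify(res):
    # Placeholder: in the paper, monitored [min, max] ranges per
    # component are converted to degrees of truth; here the input is
    # assumed to already be degrees in [0, 1].
    return res

def match_cluster(degrees):
    """Assign a VM to a cluster by its average component degree
    (an assumed matching rule)."""
    avg = sum(degrees.values()) / len(degrees)
    return "high" if avg >= 0.5 else "low"

def cluster_vm(vm, monitored):          # Algorithm 1: Clustering
    degrees = fuzzify(monitored)
    log[vm] = degrees
    clusters[match_cluster(degrees)].append(vm)

def track_fault(vm, monitored):         # Algorithm 2: Tracking Faults
    degrees = fuzzify(monitored)        # re-fuzzification after a violation
    for members in clusters.values():   # detach the VM from its old cluster
        if vm in members:
            members.remove(vm)
    log[vm] = degrees
    clusters[match_cluster(degrees)].append(vm)

cluster_vm("vm1", {"cpu": 0.8, "mem": 0.7, "bw": 0.9})
track_fault("vm1", {"cpu": 0.2, "mem": 0.3, "bw": 0.1})  # SLA violation seen
```

After the tracked violation, vm1 is moved from the "high" cluster to the "low" cluster, so the load balancer sends it only minimal traffic.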
4. Experimental Set up and Results
Following are the simulation parameters:
Number of Datacenters: 4
Number of Host/DC: 1
Number of VM/Host: 4
a. Experiment No. 1
In this experiment, a fault is created by lowering the CPU capacity, which directly lowers the CPU consumption. In Figure 2 we can observe the CPU consumption, where VM 1 offers lowered capacity.
Figure 2. Average CPU consumption
b. Experiment No. 2
In this experiment we analyze the response time of the commonly used Round Robin algorithm. We observe that with the introduction of faults the response time increases drastically: the average response time over all processed requests, shown in Figure 3, is 7.9 ms, which is very high.
Figure 3. Response time
c. Experiment No. 3
In Figure 4, the SLA violations observed for the Round Robin allocation policy are very high for VM 1.
Figure 4. SLA violation for VM allocation policy
d. Experiment No. 4
In this experiment, a fault is again created to observe the behavior of the proposed algorithm (FAST). The average CPU usage of two VMs is constant, but for the third VM it fluctuates, so an underperforming CPU is observed. Requests are allocated near-optimally because the workload is better distributed among the better-performing virtual machines: the response time of requests initially has the lowest values, and most requests are allocated to VMs 2, 3 and 4. As the workload increases, more allocations go to these VMs only. The observed response time in this case is 4.46 ms.
This directly corresponds to the SLA violation performance: as we can observe in Figure 7, all the VMs perform equally. This is achieved by moving the underperforming VM to its correct cluster.
Figure 5. VM performance indicator
Figure 6. Response time for FAST
Figure 7. SLA violations
Both experiments used the same workload and resource allocation strategy. However, the thresholds differed because of different SLA violations.
5. Conclusion and Future Work
Fault aware cloud computing environments supporting elastic provisioning have proved to be very beneficial. The experiments conducted to validate the architecture clearly show that autonomic computing and cloud computing can be used together across various technologies and different providers. Future work involves different criteria for rule design (e.g., average response time of requests, or latency). Furthermore, the use of other levels of control loops may improve the architecture's effectiveness, focusing on better performance.
References
[1] Q Zhang, L Cheng, R Boutaba. Cloud Computing: State-of-the-art and Research Challenges. In Cloud
Computing. The Brazilian Computer Society Conference on Springer. 2010: 7-18.
[2] G Shroff. Enterprise Cloud Computing Technology, Architecture, Applications, Cambridge South Asian
ed., 2011: 51-60. ISBN: 978-1-107-64889-0.
[3] A Bahga, V Madisetti. Cloud Computing A hands-on Approach. Universities Press, 1st ed., 2014: 117-
120. ISBN: 978-81-7371-923-3.
[4] A Ganesh, Dr M Sandhya, Dr S Shankar. A Study on Fault Tolerance Methods in Cloud Computing. In
International Advance Computing Conference (IACC), 2014 IEEE Conference on 2014: 844-849.
[5] V Kumar, S Sharma. A Comparative Review on Fault Tolerance Methods and Models in Cloud
Computing. In International Research Journal of Engineering and Technology (IRJET). Nov 2015;
2(8): 1-7.
[6] K Chandrasekaran. Essentials of Cloud Computing. CRC Press, 3rd Ed., 2015: 49-60. ISBN: 978-1-4822-0544-2.
[7] S Sidiroglou, O Laadan, C Perez, N Viennot, J Nieh, AD Keromytis. Assure: Automatic Software Self-
healing Using Rescue Points. In ACM Sigplan Notices. 2009; 44(3): 37-48.
[8] G Chen, H Jin, D Zou, BB Zhou, W Qiang, G Hu. SHelp: Automatic Self-healing for Multiple
Application Instances in a Virtual Machine Environment. In Cluster Computing (CLUSTER), 2010
IEEE International Conference on IEEE. 2010: 97-106.
[9] S Malik, F Huet. Adaptive Fault Tolerance in Real Time Cloud Computing. In Services (SERVICES),
2011 IEEE World Congress on IEEE. 2011; 280-287.
[10] D Singh, J Singh, A Chhabra. High Availability of Clouds: Failover Strategies for Cloud Computing
using Integrated Checkpointing Algorithms. IEEE International Conference on Communication
Systems and Network Technologies, 2012.
[11] P Das, PM Khilar. VFT: A Virtualization and Fault Tolerance Approach for Cloud Computing. In
Information & Communication Technologies (ICT), 2013 IEEE Conference on, 2013: 473-478.
[12] D Poola, K Ramamohanarao, R Buyya. Fault-Tolerant Workflow Scheduling Using Spot Instances on
Clouds. 14th International Conference on Computational Science (ICCS), Elsevier. 2014: 523–533.
[13] M Amoon. A Framework for Providing a Hybrid Fault Tolerance in Cloud Computing. Science and
Information Conference, London, UK, July 2015: 28-30.
[14] A Bala, I Chana. Autonomic Fault Tolerant Scheduling Approach for Scientific Workflows in Cloud
Computing. Concurrent Engineering: Research and Applications, SAGE. 2015: 1-13.
[15] P Gupta, SP Ghrera. Load and Fault Aware Honey Bee Scheduling Algorithm for Cloud Infrastructure.
Proc. of the 3rd Int. Conf. on Front. of Intell. Comput. (FICTA) 2014, Springer. 2015: 135-143.
[16] Cloud Service Measurement Index Consortium (CSMIC), SMI framework. [Last accessed:] 2/15, 2015,
[Online]. Available: http://beta-www.cloudcommons.com/servicemeasurementindex.
[17] SK Garg, S Versteegb, R Buyya. A Framework for Ranking of Cloud Computing Services. Future
Generation Computer Systems, Elsevier. 2013; 29(4): 1012–1023.