Mergesort and Quicksort are two efficient sorting algorithms that run in O(n log n) time. Mergesort uses a divide-and-conquer approach, recursively splitting the array in half until single elements remain, then merging the sorted halves back together. Quicksort chooses a pivot element and partitions the array into elements less than or greater than the pivot, then recursively sorts the subarrays. The document provides pseudocode for both algorithms and analyzes their time complexities.
Quicksort is an internal sorting algorithm based on the divide-and-conquer strategy. In this approach:
The array is partitioned repeatedly until the subarrays can be divided no further.
It is also known as “partition exchange sort”.
It uses a key element (the pivot) for partitioning the elements.
The left partition contains all elements smaller than the pivot, and the right partition contains all elements greater than the key element.
Merge sort is also based on the divide-and-conquer strategy. In this approach:
The elements are split into two sub-arrays (n/2) again and again until only one element is left in each part.
Merge sort requires additional storage for an auxiliary array.
Merge sort uses three arrays: two store the two halves, and a third, external one stores the final sorted list produced by merging the other two; each half is sorted recursively.
Finally, all the subarrays are merged back into a single array of n elements.
Quick Sort vs Merge Sort
Partition of elements in the array: in merge sort, the array is always split into two halves (i.e. n/2), whereas in quicksort the array can be split in any ratio; there is no requirement that the parts be equal.
Worst-case complexity: the worst-case complexity of quicksort is O(n^2), since a poor pivot forces many comparisons, whereas in merge sort the worst case and average case have the same complexity, O(n log n).
Usage with datasets: merge sort works well on data sets of any size, large or small, whereas quicksort does not handle large datasets as well.
Additional storage space requirement: merge sort is not in-place because it requires additional memory to store the auxiliary arrays, whereas quicksort is in-place, needing no additional storage beyond the recursion stack.
Efficiency: merge sort is more efficient and works faster than quicksort on larger arrays or datasets, whereas quicksort is more efficient and works faster on smaller ones.
Sorting method: quicksort is an internal sorting method, where the data is sorted in main memory, whereas merge sort suits external sorting, where the data to be sorted cannot fit in memory and auxiliary storage is needed.
Stability: merge sort is stable, as two elements with equal values appear in the sorted output in the same order as in the unsorted input, whereas quicksort is unstable in this scenario, though it can be made stable with some changes to the code.
Preferred for: quicksort is preferred for arrays, whereas merge sort is preferred for linked lists.
Locality of reference: quicksort exhibits good cache locality, which makes it faster than merge sort in many settings, such as virtual-memory environments.
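The stability point can be seen concretely. Below is a minimal stable merge sort (an illustrative sketch; the record names and keys are made up): taking the left half's element on ties is exactly what keeps equal-keyed records in input order.

```python
def merge_sort(items, key):
    # Minimal stable merge sort: split, sort halves, merge.
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid], key)
    right = merge_sort(items[mid:], key)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(left[i]) <= key(right[j]):  # <= takes the left element on ties -> stable
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

# Hypothetical records sharing keys 1 and 2:
records = [("b", 2), ("a", 1), ("c", 2), ("d", 1)]
print(merge_sort(records, key=lambda r: r[1]))
# ("a", 1) stays ahead of ("d", 1), and ("b", 2) ahead of ("c", 2)
```

Changing the `<=` to `<` would take the right half's element on ties and break this ordering, which is the kind of detail that makes a naive quicksort partition unstable.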
Quicksort has an average time complexity of O(n log n) but a worst case of O(n^2). It has O(log n) space complexity for the recursion stack. It works by picking a pivot element, partitioning the array into smaller sub-arrays based on each element's value relative to the pivot, and recursively sorting those sub-arrays.
The quicksort algorithm works by first picking a pivot element. It then partitions the array into two sub-arrays - one with elements less than the pivot and one with elements greater than the pivot. It recursively sorts the two sub-arrays. In the best case, the pivot splits the array evenly in half each time, resulting in a runtime of O(n log n).
The document discusses quicksort and merge sort algorithms. It provides pseudocode for quicksort, explaining how it works by picking a pivot element and partitioning the array around that element. Quicksort has average time complexity of O(n log n) but worst case of O(n^2). Merge sort is also explained, with pseudocode showing how it recursively splits the array in half and then merges the sorted halves. Merge sort runs in O(n log n) time in all cases.
The document discusses various sorting algorithms like bubble sort, selection sort, insertion sort, merge sort, quicksort and heap sort. It provides pseudocode to explain the algorithms. For each algorithm, it explains the basic approach, complexity analysis and provides an example to illustrate the steps. Quicksort is explained in more detail with pseudocode and examples to demonstrate how it works by picking a pivot element, partitioning the array and recursively sorting the sub-arrays.
The quicksort algorithm sorts an array by recursively dividing it into smaller sub-arrays by picking a pivot element. It partitions the array into two halves based on the pivot, putting elements less than the pivot in one half and greater elements in the other. The process is then repeated on each sub-array until the entire array is sorted. It works by first choosing a pivot element, partitioning the array around the pivot so that elements less than the pivot come before elements greater than it, and then quicksorting the two resulting sub-arrays.
The quicksort algorithm works by recursively sorting arrays of data. It first selects a pivot element and partitions the array around the pivot so that all elements less than the pivot come before it and all elements greater than the pivot come after it. It then recursively sorts the sub-arrays to the left and right of the pivot until the entire array is sorted.
The document discusses algorithms and their analysis. It begins by defining an algorithm and key aspects like correctness, input, and output. It then discusses two aspects of algorithm performance - time and space. Examples are provided to illustrate how to analyze the time complexity of different structures like if/else statements, simple loops, and nested loops. Big O notation is introduced to describe an algorithm's growth rate. Common time complexities like constant, linear, quadratic, and cubic functions are defined. Specific sorting algorithms like insertion sort, selection sort, bubble sort, merge sort, and quicksort are then covered in detail with examples of how they work and their time complexities.
The document discusses quicksort analysis and ways to improve its performance. It shows that quicksort has a best case running time of O(n log n) when keys are randomly distributed, but a worst case of O(n^2) if the array is already sorted. To avoid the worst case, the document suggests improving pivot selection by choosing the median of three randomly selected elements rather than just the first element. It also recommends using brute force for small subarrays of size 3 or less.
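The median-of-three idea can be sketched as follows. The text suggests sampling three random elements; this sketch uses the common first/middle/last variant instead (an assumption here), which likewise avoids the sorted-input worst case.

```python
def median_of_three(arr, lo, hi):
    # Sketch of the median-of-three pivot heuristic: look at the first,
    # middle and last positions of arr[lo..hi] and return the index
    # holding the median of those three values, to be used as the pivot.
    mid = (lo + hi) // 2
    a, b, c = sorted([lo, mid, hi], key=lambda i: arr[i])
    return b  # index of the median value

# On an already-sorted array the chosen pivot is the middle element,
# so the partition splits evenly instead of degenerating to O(n^2):
print(median_of_three([1, 2, 3, 4, 5, 6, 7], 0, 6))  # → 3 (value 4)
```

A full implementation would also apply the other suggestion above: fall back to a simple brute-force sort for subarrays of size 3 or less.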
Sorting plays a very important role in data structures.
The most popular and widely used types of sorting are presented, described in an easy and brief way.
Please provide detailed, complete answers for each part. Make sure you provide clear
picture(s) of the solutions that I can read, with all numbers and letters clearly readable and easy
to recognize. You can also type it, which I prefer, so I can copy it all later and paste it into Microsoft
Word to enlarge and edit if needed. Thanks for all :) 1. Sort the record 9, 34, 17, 6, 15, 38 by using
the following sorting techniques, respectively: 1) QuickSort with a good choice for the first pivot
value 2) MergeSort 3) HeapSort (including the details for building the 1st heap, and all 6 heaps) 4)
Binsort (choose bins as: 0-9, 10-19, 20-29, 30-39) 5) Radix sort.
The details of each sorting step should be included.
Solution
QuickSort:
QuickSort is a Divide and Conquer algorithm. It picks an element as the pivot and partitions the
given array around the picked pivot. There are many different versions of quickSort that pick the
pivot in different ways.
The key process in quickSort is partition(). Given an array and an element x of the array as the
pivot, the goal of partition() is to put x at its correct position in the sorted array, with all smaller
elements (smaller than x) before x and all greater elements (greater than x) after x. All of this
should be done in linear time.
elements:
9 34 17 6 15 38   (9 is chosen as the pivot; leftmark stops at 34, rightmark stops at 6)
9 6 17 34 15 38   (34 and 6 are swapped)
6 9 17 34 15 38   (the marks cross, so the pivot 9 is swapped into place; each side is then partitioned the same way)
Program
def quickSort(alist):
    quickSortHelper(alist, 0, len(alist) - 1)

def quickSortHelper(alist, first, last):
    if first < last:
        splitpoint = partition(alist, first, last)
        quickSortHelper(alist, first, splitpoint - 1)
        quickSortHelper(alist, splitpoint + 1, last)

def partition(alist, first, last):
    pivotvalue = alist[first]
    leftmark = first + 1
    rightmark = last
    done = False
    while not done:
        # advance leftmark past elements <= pivot
        while leftmark <= rightmark and alist[leftmark] <= pivotvalue:
            leftmark = leftmark + 1
        # retreat rightmark past elements >= pivot
        while alist[rightmark] >= pivotvalue and rightmark >= leftmark:
            rightmark = rightmark - 1
        if rightmark < leftmark:
            done = True
        else:
            alist[leftmark], alist[rightmark] = alist[rightmark], alist[leftmark]
    # put the pivot into its final position
    alist[first], alist[rightmark] = alist[rightmark], alist[first]
    return rightmark

alist = [9, 34, 17, 6, 15, 38]
quickSort(alist)
print(alist)
Mergesort:
MergeSort is a Divide and Conquer algorithm. It divides the input array into two halves, calls
itself for the two halves, and then merges the two sorted halves. The merge() function is used for
merging the two halves. merge(arr, l, m, r) is the key process: it assumes that arr[l..m] and
arr[m+1..r] are sorted and merges the two sorted sub-arrays into one. See the following Python
implementation for details.
9 34 17 6 15 38
divide into two parts: 9 34 17 | 6 15 38
divide into separate parts: 9 | 34 17 | 6 | 15 38
compare and merge the pairs: 9 | 17 34 | 6 | 15 38
merge section with section: 9 17 34 | 6 15 38
merge the two sections: 6 9 15 17 34 38
# Merges two subarrays of arr[].
# First subarray is arr[l..m]
# Second subarray is arr[m+1..r]
def merge(arr, l, m, r):
    n1 = m - l + 1
    n2 = r - m
    # create temp arrays
    L = [0] * n1
    R = [0] * n2
    # Copy data to temp arrays L[] and R[]
    for i in range(0, n1):
        L[i] = arr[l + i]
    for j in range(0, n2):
        R[j] = arr[m + 1 + j]
    # Merge the temp arrays back into arr[l..r]
    i = 0  # Initial index of first subarray
    j = 0  # Initial index of second subarray
    k = l  # Initial index of merged subarray
    while i < n1 and j < n2:
        if L[i] <= R[j]:
            arr[k] = L[i]; i += 1
        else:
            arr[k] = R[j]; j += 1
        k += 1
    # Copy any remaining elements of L[] and R[]
    while i < n1:
        arr[k] = L[i]; i += 1; k += 1
    while j < n2:
        arr[k] = R[j]; j += 1; k += 1
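A recursive driver ties merge() together. Here is a self-contained sketch of the whole algorithm, with a compact merge included so the example runs on its own:

```python
def merge(arr, l, m, r):
    # Merge the sorted runs arr[l..m] and arr[m+1..r] using temp copies.
    L, R = arr[l:m + 1], arr[m + 1:r + 1]
    i = j = 0
    k = l
    while i < len(L) and j < len(R):
        if L[i] <= R[j]:
            arr[k] = L[i]; i += 1
        else:
            arr[k] = R[j]; j += 1
        k += 1
    while i < len(L):
        arr[k] = L[i]; i += 1; k += 1
    while j < len(R):
        arr[k] = R[j]; j += 1; k += 1

def mergeSort(arr, l, r):
    # Recursive driver: split arr[l..r] in half, sort each half, merge.
    if l < r:
        m = (l + r) // 2
        mergeSort(arr, l, m)
        mergeSort(arr, m + 1, r)
        merge(arr, l, m, r)

arr = [9, 34, 17, 6, 15, 38]
mergeSort(arr, 0, len(arr) - 1)
print(arr)  # → [6, 9, 15, 17, 34, 38]
```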
In computer science, divide and conquer is an algorithm design paradigm. A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.
Designing efficient divide-and-conquer algorithms can be difficult. As in mathematical induction, it is often necessary to generalize the problem to make it amenable to a recursive solution. The correctness of a divide-and-conquer algorithm is usually proved by mathematical induction, and its computational cost is often determined by solving recurrence relations.
The divide-and-conquer paradigm is often used to find an optimal solution of a problem. Its basic idea is to decompose a given problem into two or more similar, but simpler, subproblems, to solve them in turn, and to compose their solutions to solve the given problem. Problems of sufficient simplicity are solved directly. For example, to sort a given list of n natural numbers, split it into two lists of about n/2 numbers each, sort each of them in turn, and interleave both results appropriately to obtain the sorted version of the given list. This approach is known as the merge sort algorithm.
Quicksort is a sorting algorithm that uses a divide and conquer approach. It works by selecting a pivot element and partitioning the array around the pivot so that elements less than the pivot are to its left and greater elements are to its right. It then recursively applies this process to the subarrays until each contains a single element, at which point the array is fully sorted. The example demonstrates quicksort sorting an array from 0 to 7.
Quicksort is a divide and conquer algorithm that works by picking an element as a pivot and partitioning the array around it. It recursively sorts elements before and after the pivot. The average runtime is O(n log n) but it can be O(n^2) in the worst case if the pivot selection is poor. The algorithm involves picking a pivot element, partitioning the array around it, and then recursively sorting the subarrays.
Quicksort is a divide and conquer algorithm that works by partitioning an array around a pivot value and recursively sorting the subarrays. It first selects a pivot element and partitions the array by moving all elements less than the pivot before it and greater elements after it. The subarrays are then recursively sorted through this process. When implemented efficiently with an in-place partition, quicksort is one of the fastest sorting algorithms in practice, with average case performance of O(n log n) time but worst case of O(n^2) time.
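An in-place partition of the kind described can be sketched with the Lomuto scheme, one common choice (not necessarily the variant this passage has in mind):

```python
def lomuto_partition(arr, lo, hi):
    # In-place partition using arr[hi] as the pivot: everything smaller
    # than the pivot is swapped to the front, then the pivot is placed
    # at the boundary. Returns the pivot's final index.
    pivot = arr[hi]
    i = lo
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    return i

def quicksort(arr, lo, hi):
    # Recursively sort the sub-arrays on each side of the pivot.
    if lo < hi:
        p = lomuto_partition(arr, lo, hi)
        quicksort(arr, lo, p - 1)
        quicksort(arr, p + 1, hi)

data = [9, 34, 17, 6, 15, 38]
quicksort(data, 0, len(data) - 1)
print(data)  # → [6, 9, 15, 17, 34, 38]
```

Because the partition only swaps elements within the original array, no auxiliary array is needed, which is the "avoids extra data movement" advantage over mergesort mentioned above.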
The document describes the merge sort algorithm. Merge sort is a sorting algorithm that works by dividing an unsorted list into halves, recursively sorting each half, and then merging the sorted halves into one sorted list. It has the following steps:
1. Divide: Partition the input list into equal halves.
2. Conquer: Recursively sort each half by repeating steps 1 and 2.
3. Combine: Merge the sorted halves into one sorted output list.
The algorithm has a runtime of O(n log n), making it an efficient sorting algorithm for large data sets. An example execution is shown step-by-step to demonstrate how merge sort partitions, recursively sorts, and merges sublists to produce
Quicksort is a divide and conquer algorithm that picks an element as a pivot and partitions the array around that pivot. It recursively sorts the sub-arrays on each side of the pivot. The algorithm involves picking a pivot element, partitioning the array by the pivot value, and then recursively sorting the sub-arrays. Quicksort has average case performance of O(n log n) time but can perform poorly on worst-case inputs.
Quicksort is a divide-and-conquer sorting algorithm that has average case performance of O(N log N). It works by recursively dividing the array into two partitions based on a pivot value and sorting them independently. While quicksort's average case is efficient, its worst case is O(N^2) if the pivot choices are poor, though this seldom occurs in practice with randomization techniques. Quicksort is generally faster than mergesort due to its simpler in-place partitioning avoiding extra data movement.
The document discusses the divide and conquer algorithm and provides examples of algorithms that use this approach, including merge sort, quicksort, and binary search. It explains that divide and conquer works by dividing a problem into smaller subproblems, solving those subproblems recursively, and then combining the solutions to solve the original problem. Specific steps and pseudocode are provided for merge sort, quicksort, and binary search to illustrate how they apply the divide and conquer strategy.
The document discusses two sorting algorithms: Quicksort and Mergesort. Quicksort works by picking a pivot element and partitioning the array around that pivot, recursively sorting the subarrays. It has average time complexity of O(n log n) but worst case of O(n^2). Mergesort works by dividing the array into halves, recursively sorting the halves, and then merging the sorted halves together. It has time complexity of O(n log n) in all cases. The document also includes Java code for implementing MergeSort and discusses how it works.
2. Sorting algorithms
• Insertion, selection and bubble sort have
quadratic worst-case performance
• Faster comparison-based algorithms?
O(n log n)
• Mergesort and Quicksort
3. Merge Sort
• Apply divide-and-conquer to sorting problem
• Problem: Given n elements, sort elements into
non-decreasing order
• Divide-and-Conquer:
– If n=1 terminate (every one-element list is already
sorted)
– If n>1, partition elements into two or more sub-
collections; sort each; combine into a single sorted list
• How do we partition?
4. Partitioning - Choice 1
• First n-1 elements into set A, last element set B
• Sort A using this partitioning scheme recursively
– B already sorted
• Combine A and B using method Insert() (= insertion
into sorted array)
• Leads to recursive version of InsertionSort()
– Number of comparisons: O(n²)
• Best case = n−1
• Worst case = ∑_{i=2}^{n} (i−1) = n(n−1)/2 = O(n²)
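Partitioning choice 1, written out, is just a recursive insertion sort. A minimal sketch on int arrays (class and method names are illustrative, not from the slides):

```java
public class RecursiveInsertionSort {
    // Choice 1 as code: recursively sort the first n-1 elements (set A),
    // then insert the last element (set B) into the sorted prefix.
    public static void sort(int[] a, int n) {
        if (n <= 1) return;              // one element: already sorted
        sort(a, n - 1);                  // sort a[0..n-2] recursively
        int last = a[n - 1];
        int j = n - 2;
        while (j >= 0 && a[j] > last) {  // shift larger keys one slot right
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = last;                 // insert B into its place
    }
}
```

The insert step costs up to n−1 comparisons at the top level, which is where the O(n²) worst case comes from.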
5. Partitioning - Choice 2
• Put element with largest key in B, remaining
elements in A
• Sort A recursively
• To combine sorted A and B, append B to sorted A
– Use Max() to find largest element → recursive
SelectionSort()
– Use bubbling process to find and move largest element to
right-most position → recursive BubbleSort()
• All O(n²)
6. Partitioning - Choice 3
• Let’s try to achieve balanced partitioning
• A gets n/2 elements, B gets the remaining half
• Sort A and B recursively
• Combine sorted A and B using a process
called merge, which combines two sorted
lists into one
– How? We will see soon
9. Static Method mergeSort()
public static void mergeSort(Comparable[] a, int left, int right)
{
   // sort a[left:right]
   if (left < right)
   {  // at least two elements
      int mid = (left + right) / 2;   // midpoint
      mergeSort(a, left, mid);
      mergeSort(a, mid + 1, right);
      merge(a, b, left, mid, right);  // merge from a to auxiliary array b
      copy(b, a, left, right);        // copy result back to a
   }
}
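The merge() and copy() helpers are not shown on the slide. A minimal sketch, assuming b is a caller-allocated auxiliary array of the same length as a (the class and exact signatures are illustrative):

```java
public class MergeHelpers {
    // Merge the sorted runs a[left:mid] and a[mid+1:right] into b[left:right].
    public static void merge(Comparable[] a, Comparable[] b,
                             int left, int mid, int right) {
        int i = left, j = mid + 1, k = left;
        while (i <= mid && j <= right) {
            if (a[i].compareTo(a[j]) <= 0) b[k++] = a[i++];
            else                           b[k++] = a[j++];
        }
        while (i <= mid)   b[k++] = a[i++];  // drain leftover left run
        while (j <= right) b[k++] = a[j++];  // drain leftover right run
    }

    // Copy b[left:right] back into a[left:right].
    public static void copy(Comparable[] b, Comparable[] a,
                            int left, int right) {
        for (int k = left; k <= right; k++) a[k] = b[k];
    }
}
```

Each call to merge() does O(right − left) work, which gives the c2n term in the recurrence solved on the next slide.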
12. Solution
By Substitution:
T(n) = 2T(n/2) + c2n
T(n/2) = 2T(n/4) + c2n/2
T(n) = 4T(n/4) + 2 c2n
T(n) = 8T(n/8) + 3 c2n
T(n) = 2^i T(n/2^i) + i·c2n
Assuming n = 2^k, expansion halts when we get T(1) on the right side; this
happens when i = k: T(n) = 2^k T(1) + k·c2n
Since 2^k = n, we know k = log n; since T(1) = c1, we get
T(n) = c1n + c2nlogn;
thus an upper bound for TmergeSort(n) is O(nlogn)
13. Quicksort Algorithm
Given an array of n elements (e.g., integers):
• If array only contains one element, return
• Else
– pick one element to use as pivot.
– Partition elements into two sub-arrays:
• Elements less than or equal to pivot
• Elements greater than pivot
– Quicksort two sub-arrays
– Return results
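The recipe above can be sketched as a minimal int-array version. The first element is used as pivot, matching the later slides; the partition scheme below is one possible reading of the two-index steps shown there (names are illustrative):

```java
public class QuickSortSketch {
    // Sort a[lo:hi] in place: pick a[lo] as pivot, partition so keys
    // <= pivot end up left of it, then recurse on both sub-arrays.
    public static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;              // 0 or 1 element: done
        int p = partition(a, lo, hi);      // pivot's final index
        quickSort(a, lo, p - 1);           // sort left sub-array
        quickSort(a, p + 1, hi);           // sort right sub-array
    }

    // Two-index partition with a[lo] as pivot.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[lo];
        int i = lo + 1, j = hi;
        while (true) {
            while (i <= j && a[i] <= pivot) i++;   // advance past small keys
            while (i <= j && a[j] >  pivot) j--;   // retreat past large keys
            if (i > j) break;                      // indices crossed
            int t = a[i]; a[i] = a[j]; a[j] = t;   // swap out-of-place pair
        }
        int t = a[lo]; a[lo] = a[j]; a[j] = t;     // move pivot into place
        return j;
    }
}
```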
15. Pick Pivot Element
There are a number of ways to pick the pivot element. In this
example, we will use the first element in the array:
40 20 10 80 60 50 7 30 100
16. Partitioning Array
Given a pivot, partition the elements of the array
such that the resulting array consists of:
1. One sub-array that contains elements <= pivot
2. Another sub-array that contains elements > pivot
The sub-arrays are stored in the original data array.
Partitioning loops through, swapping elements
below/above pivot.
49. Quicksort Analysis
• Assume that keys are random, uniformly
distributed.
• What is best case running time?
– Recursion:
1. Partition splits array in two sub-arrays of size n/2
2. Quicksort each sub-array
– Depth of recursion tree? O(log2n)
– Number of accesses in partition? O(n)
51. Quicksort Analysis
• Assume that keys are random, uniformly
distributed.
• Best case running time: O(n log2n)
• Worst case running time?
52. Quicksort: Worst Case
• Assume first element is chosen as pivot.
• Assume we get array that is already in
order:
2 4 10 12 13 50 57 63 100
pivot_index = 0
[0] [1] [2] [3] [4] [5] [6] [7] [8]
too_big_index too_small_index
53. 1. While data[too_big_index] <= data[pivot]
++too_big_index
2. While data[too_small_index] > data[pivot]
--too_small_index
3. If too_big_index < too_small_index
swap data[too_big_index] and data[too_small_index]
4. While too_small_index > too_big_index, go to 1.
5. Swap data[too_small_index] and data[pivot_index]
2 4 10 12 13 50 57 63 100
pivot_index = 0
[0] [1] [2] [3] [4] [5] [6] [7] [8]
too_big_index too_small_index
63. Quicksort Analysis
• Assume that keys are random, uniformly
distributed.
• Best case running time: O(n log2n)
• Worst case running time?
– Recursion:
1. Partition splits array in two sub-arrays:
• one sub-array of size 0
• the other sub-array of size n-1
2. Quicksort each sub-array
– Depth of recursion tree? O(n)
– Number of accesses per partition? O(n)
65. Quicksort Analysis
• Assume that keys are random, uniformly
distributed.
• Best case running time: O(n log2n)
• Worst case running time: O(n²)!!!
• What can we do to avoid worst case?
66. Improved Pivot Selection
Pick median value of three elements from data array:
data[0], data[n/2], and data[n-1].
Use this median value as pivot.
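Median-of-three selection might be sketched like this (hypothetical helper; it returns the index of the median of the three probed values, which the caller would then use as the pivot):

```java
public class MedianOfThree {
    // Return the index of the median of data[0], data[n/2], data[n-1].
    public static int medianOfThreeIndex(int[] data) {
        int n = data.length;
        int lo = 0, mid = n / 2, hi = n - 1;
        int a = data[lo], b = data[mid], c = data[hi];
        if ((a <= b && b <= c) || (c <= b && b <= a)) return mid;
        if ((b <= a && a <= c) || (c <= a && a <= b)) return lo;
        return hi;
    }
}
```

On an already-sorted array this picks the middle element, so the sorted worst case from the earlier slides now splits roughly in half instead of 0 / n−1.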
67. Improving Performance of
Quicksort
• Improved selection of pivot.
• For sub-arrays of size 3 or less, apply brute
force search:
– Sub-array of size 1: trivial
– Sub-array of size 2:
• if(data[first] > data[second]) swap them
– Sub-array of size 3: left as an exercise.
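The size-3 case is the slide's exercise; one possible answer uses three compare-and-swaps (helper names are illustrative):

```java
public class SmallSort {
    private static void swap(int[] d, int i, int j) {
        int t = d[i]; d[i] = d[j]; d[j] = t;
    }

    // Brute-force sort of d[first..first+2]: bubble the largest to the
    // right, then fix the remaining pair. Three comparisons, at most
    // three swaps, no recursion.
    public static void sort3(int[] d, int first) {
        if (d[first] > d[first + 1])     swap(d, first, first + 1);
        if (d[first + 1] > d[first + 2]) swap(d, first + 1, first + 2);
        if (d[first] > d[first + 1])     swap(d, first, first + 1);
    }
}
```

Cutting off the recursion at tiny sub-arrays avoids the per-call overhead of partitioning when it can no longer pay for itself.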