Merge sort is a sorting algorithm based on the divide and conquer technique. With a worst-case time complexity of O(n log n), it is one of the most widely respected algorithms.
Merge sort first divides the array into equal halves and then combines them in sorted order.
Merge sort is a divide and conquer algorithm that divides an array into halves, recursively sorts the halves, and then merges the sorted halves back together. The key steps are:
1. Divide the array into equal halves until reaching base cases of arrays with one element.
2. Recursively sort the left and right halves by repeating the divide step.
3. Merge the sorted halves back into a single sorted array by comparing elements pairwise and copying the smaller element into the output array.
Merge sort has several advantages: it runs in O(n log n) time in all cases, it accesses data sequentially with little need for random access, and it is well suited to external sorting of large data sets that do not fit in memory.
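As a concrete sketch of these steps, here is a minimal top-down merge sort in Python (illustrative only; the function names and test array are my own, not from the original slides):

def merge_sort(arr):
    # Base case: arrays of zero or one element are already sorted.
    if len(arr) <= 1:
        return arr
    # Divide: split the array into two halves.
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves.
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    # Copy the smaller front element until one half is exhausted.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # Append whatever remains of the other half.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]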
John von Neumann invented the merge sort algorithm in 1945. Merge sort follows the divide and conquer paradigm: it divides the unsorted list into halves, recursively sorts each half, and then merges the sorted halves back into a single sorted list. Its time complexity is O(n log n) in all cases (best, average, and worst), while its space complexity is O(n) for the temporary list used during merging.
Merge sort is a sorting algorithm that uses a divide and conquer approach. It divides an array into halves, recursively sorts the halves, and then merges the sorted halves into a single sorted array. The key steps are dividing, conquering by recursively sorting subarrays, and combining the sorted subarrays through a merging process. It has the advantages of being a stable sort and having no pathological worst case (its worst-case time matches its average case at O(n log n)), though it requires more memory than in-place sorts.
There are two broad categories of sorting methods based on merging: internal merge sort and external merge sort. Internal merge sort handles small lists that fit into primary memory, including simple merge sort and two-way merge sort. External merge sort is for very large lists that exceed primary memory, including balanced two-way merge sort and multi-way merge sort. The simple merge sort uses a divide-and-conquer approach to recursively split lists in half, sort each sublist, and then merge the sorted sublists.
Mergesort is a divide and conquer algorithm that works as follows:
1) Recursively sort the left and right halves of the array.
2) Merge the two sorted halves into a new sorted array.
3) Repeat until the entire array is sorted.
It achieves O(n log n) time complexity in all cases, but requires O(n) additional space for the auxiliary array used during merging.
The document discusses divide and conquer algorithms. It describes divide and conquer as a design strategy that involves dividing a problem into smaller subproblems, solving the subproblems recursively, and combining the solutions. It provides examples of divide and conquer algorithms like merge sort, quicksort, and binary search. Merge sort works by recursively sorting halves of an array until it is fully sorted. Quicksort selects a pivot element and partitions the array into subarrays of smaller and larger elements, recursively sorting the subarrays. Binary search recursively searches half-intervals of a sorted array to find a target value.
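To make the binary search example concrete, here is a minimal recursive sketch in Python (illustrative; the document gives no code and the function signature is my own):

def binary_search(arr, target, lo=0, hi=None):
    # Search for target in the sorted list arr[lo:hi]; return its index or -1.
    if hi is None:
        hi = len(arr)
    if lo >= hi:
        return -1  # Empty interval: target is not present.
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid
    elif arr[mid] < target:
        return binary_search(arr, target, mid + 1, hi)  # Search right half.
    else:
        return binary_search(arr, target, lo, mid)      # Search left half.

print(binary_search([3, 9, 10, 27, 38, 43, 82], 27))  # 3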
Insertion sort works by iterating through an array, inserting each element into its sorted position by shifting other elements over. It finds the location where each element should be inserted into the sorted portion using a linear search, moving larger elements out of the way to make room. This sorting algorithm is most effective for small data sets and can be implemented recursively or iteratively through comparisons and shifts.
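A minimal sketch of the shift-based insertion sort described above, in Python (illustrative; names are my own):

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]          # Element to insert into the sorted prefix arr[0:i].
        j = i - 1
        # Shift larger elements one slot right to make room for key.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]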
Merge sort is a divide and conquer algorithm that works as follows:
1) Divide the array to be sorted into two halves recursively until single element subarrays are reached.
2) Merge the subarrays in a way that combines them in a sorted order.
3) The merging process involves taking the first element of each subarray and comparing them to place the smaller element in the final array until all elements are merged.
Merge sort runs in O(n log n) time in all cases, making it one of the most efficient sorting algorithms.
Merge sort is a sorting algorithm that uses a divide and conquer technique. It divides an array into halves, recursively sorts each half, and then merges the sorted halves into a single sorted array. John von Neumann developed merge sort in 1945 for the EDVAC computer. Merge sort has a time complexity of O(n log n), making it one of the most efficient sorting algorithms.
The document discusses the divide and conquer algorithm design technique. It begins by explaining the basic approach of divide and conquer which is to (1) divide the problem into subproblems, (2) conquer the subproblems by solving them recursively, and (3) combine the solutions to the subproblems into a solution for the original problem. It then provides merge sort as a specific example of a divide and conquer algorithm for sorting a sequence. It explains that merge sort divides the sequence in half recursively until individual elements remain, then combines the sorted halves back together to produce the fully sorted sequence.
The document discusses asymptotic notations that are used to describe the time complexity of algorithms. It introduces big O notation, which describes asymptotic upper bounds, big Omega notation for lower bounds, and big Theta notation for tight bounds. Common time complexities are described such as O(1) for constant time, O(log N) for logarithmic time, and O(N^2) for quadratic time. The notations allow analyzing how efficiently algorithms use resources like time and space as the input size increases.
Tries are a data structure for storing strings that allow for fast pattern matching. A trie is a tree where each edge represents a character and each path from the root node to a leaf spells out a key. Standard tries insert strings by adding nodes for each character. Compressed tries reduce redundant nodes by compressing chains of single-child nodes. Suffix tries store all suffixes of a text in a compressed trie to enable quick string queries. Unlike hash tables, tries have no collisions between keys, and lookup time is proportional to the key length.
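A minimal standard-trie sketch in Python (illustrative; the node layout and method names are my own assumptions, not from the document):

class TrieNode:
    def __init__(self):
        self.children = {}   # Maps a character to the child TrieNode.
        self.is_key = False  # True if a stored key ends at this node.

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_key = True

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_key

t = Trie()
t.insert("bear"); t.insert("bell")
print(t.contains("bell"), t.contains("be"))  # True False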
Quicksort is a sorting algorithm that works by partitioning an array around a pivot value, and then recursively sorting the sub-partitions. It chooses a pivot element and partitions the array based on whether elements are less than or greater than the pivot. Elements are swapped so that those less than the pivot are moved left and those greater are moved right. The process recursively partitions the sub-arrays until the entire array is sorted.
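A small quicksort sketch in Python using the Lomuto partition scheme (an assumption on my part; the document does not specify a partition scheme):

def quicksort(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quicksort(arr, lo, p - 1)   # Sort elements left of the pivot.
        quicksort(arr, p + 1, hi)   # Sort elements right of the pivot.
    return arr

def partition(arr, lo, hi):
    pivot = arr[hi]  # Use the last element as the pivot.
    i = lo - 1
    for j in range(lo, hi):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]  # Move smaller element left.
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]  # Place pivot in final position.
    return i + 1

print(quicksort([9, 4, 7, 1, 3]))  # [1, 3, 4, 7, 9]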
The document describes insertion sort, a sorting algorithm. It lists the group members who researched insertion sort and provides an introduction. It then explains how insertion sort works by example, showing how it iterates through an array and inserts elements into the sorted portion. Pseudocode and analysis of insertion sort's runtime is provided. Comparisons are made between insertion sort and other algorithms like bubble sort, selection sort, and merge sort, analyzing their time complexities in best, average, and worst cases.
Strassen's algorithm improves on the basic matrix multiplication algorithm, which runs in O(N^3) time. It achieves this by dividing the matrices into sub-matrices and performing 7 multiplications and 18 additions/subtractions on the sub-matrices, rather than the 8 multiplications of the basic algorithm. This results in a runtime of O(N^2.81) using divide and conquer, an asymptotic improvement over the basic O(N^3) algorithm.
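For reference, these are the seven Strassen products and the resulting output quadrants for a 2x2 block split (the standard formulas, reproduced here since the summary omits them):

M_1 = (A_{11} + A_{22})(B_{11} + B_{22})
M_2 = (A_{21} + A_{22}) B_{11}
M_3 = A_{11} (B_{12} - B_{22})
M_4 = A_{22} (B_{21} - B_{11})
M_5 = (A_{11} + A_{12}) B_{22}
M_6 = (A_{21} - A_{11})(B_{11} + B_{12})
M_7 = (A_{12} - A_{22})(B_{21} + B_{22})

C_{11} = M_1 + M_4 - M_5 + M_7
C_{12} = M_3 + M_5
C_{21} = M_2 + M_4
C_{22} = M_1 - M_2 + M_3 + M_6

The recurrence T(N) = 7T(N/2) + Θ(N^2) solves to Θ(N^(log2 7)), approximately O(N^2.81).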
This document presents selection sort, an in-place comparison sorting algorithm. It works by dividing the list into a sorted part on the left and unsorted part on the right. It iterates through the list, finding the smallest element in the unsorted section and swapping it into place. This process continues until the list is fully sorted. Selection sort has a time complexity of O(n^2) in all cases. While it requires no extra storage, it is inefficient for large lists compared to other algorithms.
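A minimal in-place selection sort sketch in Python (illustrative; names are my own):

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the smallest element in the unsorted suffix arr[i:].
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap it into place at the boundary of the sorted prefix.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]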
The document discusses solving the 8 queens problem using backtracking. It begins by explaining backtracking as an algorithm that builds partial candidates for solutions incrementally and abandons any partial candidate that cannot be completed to a valid solution. It then provides more details on the 8 queens problem itself - the goal is to place 8 queens on a chessboard so that no two queens attack each other. Backtracking is well-suited for solving this problem by attempting to place queens one by one and backtracking when an invalid placement is found.
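A compact backtracking sketch for the N queens problem in Python (illustrative; the row-by-row placement order and names are my own):

def solve_queens(n, cols=()):
    # cols[r] is the column of the queen already placed in row r.
    row = len(cols)
    if row == n:
        return cols  # All n queens placed without conflicts.
    for col in range(n):
        # A partial candidate is abandoned unless the new queen shares
        # no column and no diagonal with any queen placed so far.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            solution = solve_queens(n, cols + (col,))
            if solution:
                return solution
    return None  # Dead end: backtrack to the previous row.

print(solve_queens(8))  # One valid placement, e.g. (0, 4, 7, 5, 2, 6, 1, 3)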
This document discusses randomized algorithms. It begins by listing different categories of algorithms, including randomized algorithms. Randomized algorithms introduce randomness to avoid worst-case behavior and to find efficient approximate solutions. Quicksort is presented as an example: choosing the pivot at random gives an expected O(n log n) runtime on every input, avoiding the quadratic worst case that a fixed pivot choice can hit on adversarial inputs. The document also discusses the randomized closest pair algorithm and a randomized algorithm for primality testing. Both introduce randomness to improve efficiency compared to deterministic algorithms for the same problems.
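The document does not name a specific primality test; as a minimal sketch of the idea, here is the Fermat test in Python, which draws random witnesses (probabilistic: some composites, notably Carmichael numbers, can fool it):

import random

def is_probably_prime(n, trials=20):
    # Fermat test: a prime n satisfies a^(n-1) = 1 (mod n) for all a in [2, n-2].
    # A composite usually fails this check for a randomly chosen a.
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # Definitely composite.
    return True  # Probably prime (Carmichael numbers can fool this test).

print(is_probably_prime(101), is_probably_prime(100))  # True False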
These slides explain the merge sort algorithm; read them attentively and you will easily understand how it works.
The document discusses the merge sort algorithm. It works by recursively dividing an array into two halves, sorting each half, and then merging the sorted halves back together. The key steps are:
1) Divide the array into equal halves recursively until arrays contain a single element.
2) Sort the halves by recursively applying the merge sort algorithm.
3) Merge the sorted halves back into a single sorted array by comparing elements and copying the smaller value into the output array.
The document describes the merge sort algorithm. It explains that merge sort uses a divide and conquer approach to sort an array. It works by recursively splitting the array into smaller sub-arrays of size n/2 until the sub-arrays contain a single element, which is trivially sorted. It then merges the sorted sub-arrays back together to produce the final sorted array. The time complexity of merge sort is O(n log n), as it recursively solves two subproblems of size n/2 at each step. This makes merge sort more efficient than insertion sort, which has a worst-case time of O(n^2), especially for large problem sizes.
The document discusses algorithms analysis and sorting algorithms. It introduces insertion sort and merge sort, and analyzes their time complexities. Insertion sort runs in O(n^2) time in the worst case, while merge sort runs in O(n log n) time in the worst case, which grows more slowly. Therefore, asymptotically merge sort performs better than insertion sort for large data sets. The document also covers asymptotic analysis, recurrences, and using recursion trees to solve recurrences.
This document discusses the divide and conquer algorithm design strategy and provides an analysis of the merge sort algorithm as an example. It begins by explaining the divide and conquer strategy of dividing a problem into smaller subproblems, solving those subproblems recursively, and combining the solutions. It then provides pseudocode and explanations for the merge sort algorithm, which divides an array in half, recursively sorts the halves, and then merges the sorted halves back together. It analyzes the time complexity of merge sort as Θ(n log n), proving it is more efficient than insertion sort.
Quicksort is a divide and conquer algorithm that works by partitioning an array around a pivot value and recursively sorting the subarrays. It has the following steps:
1. Pick a pivot element and partition the array into two halves based on element values relative to the pivot.
2. Recursively sort the two subarrays using quicksort.
3. The entire array is now sorted after sorting the subarrays.
The worst case occurs when the array is already sorted or reverse sorted, taking O(n^2) time due to linear-time partitioning at each step. The average and best cases take O(n log n) time, as the array is typically partitioned close to evenly.
The document provides an overview of several sorting algorithms, including insertion sort, bubble sort, selection sort, and radix sort. It describes the basic approach for each algorithm through examples and pseudocode. Analysis of the time complexity is also provided, with insertion sort, bubble sort, and selection sort having worst-case performance of O(n^2) and radix sort having performance of O(nk) where k is the number of passes.
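As a sketch of the O(nk) behavior described above, here is a least-significant-digit radix sort for non-negative integers in Python (illustrative; the decimal base and names are my choices):

def radix_sort(arr):
    # LSD radix sort: one stable bucket pass per decimal digit,
    # so the total work is O(nk) for k digits.
    if not arr:
        return arr
    exp = 1
    while max(arr) // exp > 0:
        buckets = [[] for _ in range(10)]
        for x in arr:
            buckets[(x // exp) % 10].append(x)  # Stable pass on one digit.
        arr = [x for bucket in buckets for x in bucket]
        exp *= 10
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]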
The document discusses algorithm analysis and asymptotic notation. It begins by explaining how to analyze algorithms to predict resource requirements like time and space. It defines asymptotic notation like Big-O, which describes an upper bound on the growth rate of an algorithm's running time. The document then provides examples of analyzing simple algorithms and classifying functions based on their asymptotic growth rates. It also introduces common time functions like constant, logarithmic, linear, quadratic, and exponential time and compares their growth.
The document discusses divide and conquer algorithms and merge sort. It provides details on how merge sort works, including: (1) divide the input array into halves recursively until single-element subarrays remain, (2) sort the subarrays by applying merge sort recursively, (3) merge the sorted subarrays back together. The overall running time of merge sort is analyzed to be Θ(n log n), as each level of recursion contributes Θ(n) work and there are log n levels of recursion.
This document discusses parallel algorithms for sorting. It begins by defining parallel algorithms and explaining that the lower bound for comparison-based sorting of n elements is Θ(n log n). It then discusses several parallel sorting algorithms: odd-even transposition sort on a linear array, quicksort, and sorting networks. It also covers sorting on different parallel models like CRCW, CREW, and EREW. An example is provided of applying an EREW sorting algorithm to a sample data set by recursively dividing it into subsequences until single elements remain to be sorted locally.
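A sequential Python simulation of odd-even transposition sort (illustrative; on a linear array of n processors, each phase's comparison-exchanges would run in parallel):

def odd_even_transposition_sort(arr):
    n = len(arr)
    for phase in range(n):  # n phases suffice to sort n elements.
        # Even phases compare pairs (0,1), (2,3), ...; odd phases (1,2), (3,4), ...
        start = 0 if phase % 2 == 0 else 1
        for i in range(start, n - 1, 2):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
    return arr

print(odd_even_transposition_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]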
Merge sort analysis and its real-time applications (Yazad Dumasia)
The document provides an analysis of the merge sort algorithm and its applications in real-time. It begins with introducing sorting and different sorting techniques. Then it describes the merge sort algorithm, explaining the divide, conquer, and combine steps. It analyzes the time complexity of merge sort, showing that it has O(n log n) runtime. Finally, it discusses some real-time applications of merge sort, such as recommending similar products to users on e-commerce websites based on purchase history.
Analysis and design of algorithms, part 2 (Deepak John)
Analysis of searching and sorting: insertion sort, quicksort, merge sort, and heap sort. Binomial heaps and Fibonacci heaps. Lower bounds for sorting by comparison of keys. Comparison of sorting algorithms. Amortized time analysis. Red-black trees: insertion and deletion.
Shell sort is a generalization of insertion sort that first sorts elements far apart using large gaps, then shrinks the gaps until a gap of 1, at which point it is a regular insertion sort. Its worst-case time complexity is O(n^2), with average-case behavior considerably better than O(n^2), depending on the gap sequence. Merge sort divides the array into halves recursively, then merges the sorted halves back together to fully sort the array. It has best, average, and worst-case time complexity of O(n log n) and requires O(n) auxiliary space, so it is not an in-place sorting algorithm.
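A minimal shell sort sketch in Python using the simple gap sequence n/2, n/4, ..., 1 (an assumption; the document does not fix a gap sequence, and better ones exist):

def shell_sort(arr):
    n = len(arr)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: each element is compared with
        # elements gap positions apart instead of its neighbor.
        for i in range(gap, n):
            key = arr[i]
            j = i
            while j >= gap and arr[j - gap] > key:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = key
        gap //= 2  # Shrink the gap; gap == 1 is plain insertion sort.
    return arr

print(shell_sort([12, 34, 54, 2, 3]))  # [2, 3, 12, 34, 54]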
This document discusses various sorting algorithms and their time complexities. It covers common sorting algorithms like bubble sort, selection sort, insertion sort, which have O(N^2) time complexity and are slow for large data sets. More efficient algorithms like merge sort, quicksort, heapsort with O(N log N) time complexity are also discussed. Implementation details and examples are provided for selection sort, insertion sort, merge sort and quicksort algorithms.
This document provides an overview of an introduction to algorithms course. It discusses insertion sort and merge sort algorithms. Insertion sort runs in O(n^2) time in the worst case, making it inefficient for large data sets. Merge sort runs in O(n log n) time, making it faster than insertion sort for most data sets. The document provides pseudocode and examples of how insertion sort and merge sort work, and uses recurrence relations and recursion trees to analyze their asymptotic runtime performances.
The document discusses sorting algorithms. It defines sorting as arranging a list of records in a certain order based on their keys. Some key points made:
- Sorting is important as it enables efficient searching and other tasks. Common sorting algorithms include selection sort, insertion sort, mergesort, quicksort, and heapsort.
- The complexity of sorting in general is Θ(n log n) but some special cases allow linear time sorting. Internal sorting happens in memory while external sorting handles data too large for memory.
- Applications of sorting include searching, finding closest pairs of numbers, checking for duplicates, and calculating frequency distributions. Sorting also enables efficient algorithms for computing medians and convex hulls.
This document summarizes an introduction to algorithms lecture. It introduces concepts like asymptotic analysis, worst case running times, and examples of sorting algorithms like insertion sort and merge sort. Insertion sort runs in O(n^2) time in the worst case, while merge sort runs faster in O(n log n) time due to dividing the problem into smaller subproblems and merging sorted lists. The document provides pseudocode and examples of how these algorithms work at a high level.
5. MERGE SORT:
Merge sort was invented by John von Neumann (1903 - 1957).
Merge sort is a divide & conquer technique for sorting elements.
Merge sort is one of the most efficient sorting algorithms.
The time complexity of merge sort is O(n log n).
6. DIVIDE & CONQUER
• DIVIDE: Divide the unsorted list into two sub-lists of about half the size.
• CONQUER: Sort each of the two sub-lists recursively. If they are small enough, just solve them in a straightforward manner.
• COMBINE: Merge the two sorted sub-lists back into one sorted list.
7. DIVIDE & CONQUER TECHNIQUE
[Diagram: a problem of size n is split into sub-problem 1 and sub-problem 2, each of size n/2; their two solutions are combined into a solution to the original problem.]
8. MERGE SORT PROCESS:
• The list is divided into two parts, as equal as possible.
• There are different ways to divide the list into two equal parts.
• The following algorithm divides the list until each sublist has just one item, which is trivially sorted.
• Recursive merge calls then combine these smaller pieces into larger sorted lists.
9. MERGE SORT ALGORITHM:
• Merge(A[], p, q, r)
{ n1 = q - p + 1
n2 = r - q
Let L[1 to n1+1] and R[1 to n2+1] be new arrays
for (i = 1 to n1)
L[i] = A[p + i - 1]
for (j = 1 to n2)
R[j] = A[q + j]
L[n1 + 1] = infinity    // sentinels guarantee neither array
R[n2 + 1] = infinity    // is exhausted during the merge loop
10. i = 1
j = 1
for (k = p to r)
{ if (L[i] <= R[j])
A[k] = L[i]
i = i + 1
else
A[k] = R[j]
j = j + 1
}
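For a runnable check of this pseudocode, here is a direct Python translation of the sentinel-based merge (illustrative; it uses 0-based inclusive indices instead of the slides' 1-based ones):

import math

def merge(A, p, q, r):
    # Merge the sorted runs A[p..q] and A[q+1..r] in place (0-based, inclusive).
    L = A[p:q + 1] + [math.inf]      # Sentinel: L is never exhausted early.
    R = A[q + 1:r + 1] + [math.inf]  # Sentinel: R is never exhausted early.
    i = j = 0
    for k in range(p, r + 1):
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

A = [2, 4, 5, 7, 1, 2, 3, 6]
merge(A, 0, 3, 7)
print(A)  # [1, 2, 2, 3, 4, 5, 6, 7]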
22. IMPLEMENTING MERGE SORT:
There are two basic ways to implement merge sort:
• In place: the merge is done using only the input array.
• Double storage: the merge is done with a temporary array of the same size as the input array.
23. MERGE-SORT ANALYSIS:
Merging takes O(n) time at each of the log n levels.
Total time for merging: O(n log n)
Total running time: O(n log n)
Total space: O(n)
[Diagram: recursion tree with n at the root, two subproblems of size n/2, four of size n/4, and so on.]
24. PERFORMANCE OF MERGE SORT:
• Unlike quicksort, merge sort guarantees O(n log n) in the worst case.
• The reason is that quicksort's behavior depends on the value of the pivot, whereas merge sort divides the list based on the index.
• Why is it O(n log n)? Each merge pass requires N comparisons, and each time the list is halved.
• So the standard divide & conquer recurrence applies to merge sort:
T(n) = 2T(n/2) + Θ(n)
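Unrolling this recurrence (a standard derivation, added here for completeness) confirms the bound:

T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = ...
     = 2^k T(n/2^k) + k·cn
     = n·T(1) + cn·log2(n)    (taking k = log2(n))
     = Θ(n log n)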
25. FEATURES OF MERGE SORT:
• It performs in O(n log n) time in the worst case.
• It is stable.
• Its running time is quite independent of how the initial list is organized.
• It is good for linked lists, and can be implemented in such a way that data is accessed sequentially.
Drawbacks:
It may require an auxiliary array of up to the size of the original list. This can be avoided, but the algorithm becomes significantly more complicated, which is rarely worthwhile. Instead of complicating merge sort, we can use HEAP SORT, which is also O(n log n); but remember that HEAP SORT is not stable, in contrast to MERGE SORT.