A step-by-step illustration of Quicksort to help you walk through a series of operations. The illustration is accompanied by the actual code, with a bold line indicating the current operation.
The document discusses the knapsack problem and greedy algorithms. It defines the knapsack problem as an optimization problem where, given constraints and an objective function, the goal is to find the feasible solution that maximizes or minimizes the objective. It describes the knapsack problem as having two versions: 0-1, where items are indivisible, and fractional, where items can be divided. The fractional knapsack problem can be solved using a greedy approach by sorting items by value-to-weight ratio and filling the knapsack accordingly until full.
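The greedy strategy summarized above can be sketched in Python. This is a minimal illustration, not code from the summarized document; the function name and the (value, weight) item representation are assumptions.

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: items is a list of (value, weight) pairs."""
    total = 0.0
    # Consider items in decreasing order of value-to-weight ratio.
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)        # the whole item, or a fraction of it
        total += value * (take / weight)
        capacity -= take
    return total
```

For example, with capacity 50 and items (60, 10), (100, 20), (120, 30), the greedy fill takes all of the first two items and 20/30 of the third, for a total value of 240.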
The document discusses various sorting algorithms that use the divide-and-conquer approach, including quicksort, mergesort, and heapsort. It provides examples of how each algorithm works by recursively dividing problems into subproblems until a base case is reached. Code implementations and pseudocode are presented for key steps like partitioning arrays in quicksort, merging sorted subarrays in mergesort, and adding and removing elements from a heap data structure in heapsort. The algorithms are compared in terms of their time and space complexity and best uses.
What Is Dynamic Programming? | Dynamic Programming Explained | Programming Fo... by Simplilearn
This presentation on 'What Is Dynamic Programming?' will acquaint you with a clear understanding of how this programming paradigm works with the help of a real-life example. In this Dynamic Programming Tutorial, you will understand why plain recursion falls short and how you can solve the problems involved in recursion using DP. Finally, we will cover the dynamic programming implementation of the Fibonacci series program. So, let's get started!
The topics covered in this presentation are:
1. Introduction
2. Real-Life Example of Dynamic Programming
3. Introduction to Dynamic Programming
4. Dynamic Programming Interpretation of Fibonacci Series Program
5. How Does Dynamic Programming Work?
What Is Dynamic Programming?
In computer science, something is said to be efficient if it is quick and uses minimal memory. By storing the solutions to subproblems, we can quickly look them up if the same problem arises again. Because there is no need to recompute the solution, this saves a significant amount of calculation time. But hold on! Efficiency comprises both time and space complexity. Why does it matter if we reduce the time required to solve the problem only to increase the space required? This is why it is critical to realize that the ultimate goal of dynamic programming is to obtain considerably quicker calculation time at the price of a minor increase in the space used. Dynamic programming is defined as an algorithmic paradigm that solves a given complex problem by breaking it into several sub-problems and storing the results of those sub-problems to avoid computing the same sub-problem over and over again.
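The idea of storing sub-problem results can be shown with the Fibonacci program mentioned above. A minimal Python sketch (an illustration, not the presentation's own code) that uses memoization so each fib(k) is computed only once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)        # the cache stores each sub-problem's result
def fib(n):
    if n < 2:
        return n
    # Without the cache this recursion recomputes the same values
    # exponentially many times; with it, each n is solved exactly once.
    return fib(n - 1) + fib(n - 2)
```

The cache trades a small amount of space for a large reduction in time, which is exactly the trade-off described above: fib(50) finishes instantly instead of taking billions of recursive calls.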
What is Programming?
Programming is the act of designing, developing, and deploying an executable software solution to a given user-defined problem.
Programming involves the following stages.
- Problem Statement
- Algorithms and Flowcharts
- Coding the program
- Debugging the program
- Documentation
- Maintenance
Simplilearn's Python Training Course is an all-inclusive program that will introduce you to the Python development language and expose you to the essentials of object-oriented programming, web development with Django, and game development. Python has surpassed Java as the top language used to introduce U.S.
Learn more at: https://www.simplilearn.com/mobile-and-software-development/python-development-training
An array is a data structure that stores a fixed number of items of the same type. It allows fast access to elements using indices. Basic array operations include traversing elements, inserting/deleting elements, searching for elements, and updating elements. Arrays are zero-indexed, and elements are accessed via their index.
The document discusses greedy algorithms and their application to optimization problems. It provides examples of problems that can be solved using greedy approaches, such as fractional knapsack and making change. However, it notes that some problems like 0-1 knapsack and shortest paths on multi-stage graphs cannot be solved optimally with greedy algorithms. The document also describes various greedy algorithms for minimum spanning trees, single-source shortest paths, and fractional knapsack problems.
Data structures allow for the organization and storage of data. There are linear and non-linear data structures. Linear structures include arrays, stacks, queues, and linked lists. Arrays store elements in contiguous memory locations. Stacks follow a last-in first-out rule, while queues follow a first-in first-out rule. Linked lists connect nodes using pointers. Non-linear structures include trees and graphs, which model hierarchical and network-like connections. Common operations on data structures include traversing, searching, insertion, and deletion.
This document provides information about Python lists. Some key points:
- Lists can store multiple elements of any data type. They are versatile for working with multiple elements.
- Lists maintain element order and allow duplicate elements. Elements are accessed via indexes.
- Lists support operations like concatenation, membership testing, slicing, and methods to add/remove elements.
- Nested lists allow lists within lists, for representing matrices and other complex data structures.
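The operations listed above can be demonstrated in a few lines (a minimal sketch; the variable names are illustrative):

```python
nums = [1, 2, 3]
nums = nums + [4, 5]            # concatenation
assert 3 in nums                # membership testing
assert nums[1:3] == [2, 3]      # slicing
nums.append(6)                  # method to add an element
nums.remove(1)                  # method to remove the first matching element

# A nested list representing a 2x2 matrix:
matrix = [[1, 2], [3, 4]]
assert matrix[1][0] == 3        # row 1, column 0
```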
Linked list in Data Structure, Simple and Easy Tutorial by Afzal Badshah
A linked list is a linear data structure where elements are linked using pointers. Each element contains a data field and a pointer to the next node. Linked lists allow for efficient insertion and deletion, but random access is slow. There are several types of linked lists including singly linked, doubly linked, and circular linked lists. Singly linked lists only traverse in one direction while doubly linked lists can traverse forwards and backwards. Circular linked lists connect the first and last nodes so the list has no end.
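A minimal singly linked list in Python, illustrating the node structure and the one-directional traversal described above (the class and method names are assumptions, not from the summarized document):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None              # pointer to the next node

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, data):       # O(1) insertion at the head
        node = Node(data)
        node.next = self.head
        self.head = node

    def to_list(self):                # traversal is one-directional
        out, cur = [], self.head
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out

lst = SinglyLinkedList()
for x in (3, 2, 1):
    lst.push_front(x)                 # the list is now 1 -> 2 -> 3
```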
This document contains a presentation on solving the coin change problem using greedy and dynamic programming algorithms. It introduces the coin change problem and provides an example. It then describes the greedy algorithm approach and how it works for some cases but fails to find an optimal solution in other cases when coin values are not uniform. The document next explains dynamic programming, its four step process, and how it can be applied to the coin change problem to always find an optimal solution using a bottom-up approach and storing results of subproblems to build the final solution.
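The contrast described above, greedy failing where dynamic programming succeeds, can be sketched in Python. With coins {1, 3, 4} and amount 6, greedy picks 4+1+1 (three coins) while DP finds 3+3 (two coins). The function names are illustrative:

```python
def greedy_change(coins, amount):
    # Repeatedly take the largest coin that fits; may miss the optimum.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_change(coins, amount):
    # Bottom-up DP: best[a] = fewest coins summing to a, built from subproblems.
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else None
```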
The document describes the quicksort algorithm. Quicksort works by:
1) Partitioning the array around a pivot element into two sub-arrays of less than or equal and greater than elements.
2) Recursively sorting the two sub-arrays.
3) Combining the now sorted sub-arrays.
In the average case, quicksort runs in O(n log n) time due to balanced partitions at each recursion level. However, in the worst case of an already sorted input, it runs in O(n^2) time due to highly unbalanced partitions. A randomized version of quicksort chooses pivots randomly to avoid worst case behavior.
The document discusses applications of stacks, including reversing strings and lists, Polish notation for mathematical expressions, converting between infix, prefix and postfix notations, evaluating postfix and prefix expressions, recursion, and the Tower of Hanoi problem. Recursion involves defining a function in terms of itself, with a stopping condition. Stacks can be used to remove recursion by saving local variables at each step.
The selection sort algorithm works by iterating through an array, finding the minimum/maximum value, and swapping it into the correct sorted position. It does this by keeping track of the index of the minimum/maximum value found on each pass. The number of passes is equal to the length of the array. In each pass, it finds the minimum/maximum value and swaps it into the current place, sorting the array further.
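The pass structure described above can be sketched in Python (a minimal illustration using the minimum-value variant):

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):                 # one pass per sorted position
        min_idx = i
        for j in range(i + 1, n):          # track the index of the minimum
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]   # swap into place
    return arr
```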
Divide and Conquer Algorithms - D&C forms a distinct algorithm design technique in computer science, wherein a problem is solved by repeatedly invoking the algorithm on smaller occurrences of the same problem. Binary search, merge sort, Euclid's algorithm can all be formulated as examples of divide and conquer algorithms. Strassen's algorithm and Nearest Neighbor algorithm are two other examples.
The talk is about simple data structures like queues and trees and their possible implementation in Scala. It also covers binary search trees and their traversals.
Selection sort is an in-place comparison sorting algorithm where the minimum element from the unsorted section of the list is selected in each pass and swapped with the first element. It has a time complexity of O(n^2), making it inefficient for large lists. The algorithm involves dividing the list into sorted and unsorted sublists, finding the minimum element in the unsorted sublist, swapping it with the first element, and moving the imaginary wall between the two sublists by one element. This process is repeated for n-1 passes to completely sort an input list of n elements. Pseudocode for the algorithm, using a nested for loop to find the minimum element and swap it, is also provided.
The document discusses minimum spanning trees (MSTs). It defines MSTs and provides examples of applications like wiring electronic circuits. It then describes two common algorithms for finding MSTs: Kruskal's algorithm and Prim's algorithm. Kruskal's algorithm finds MSTs by sorting edges by weight and adding edges that connect different components without creating cycles. Prim's algorithm grows an MST from a single vertex by always adding the lowest-weight edge connecting a vertex to the growing tree.
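Kruskal's algorithm as summarized above can be sketched in Python with a small union-find structure to detect cycles (a minimal illustration; the (weight, u, v) edge representation is an assumption):

```python
def kruskal(num_vertices, edges):
    # edges: list of (weight, u, v) tuples over vertices 0..num_vertices-1.
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):           # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # different components: adding is cycle-free
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst
```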
Counting sort is an algorithm that sorts elements by counting the number of occurrences of each unique element in an array. It works by:
1) Creating a count array to store the count of each unique object in the input array.
2) Modifying the count array to store cumulative counts.
3) Creating an output array by using the modified count array to output elements in sorted order.
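The three steps above can be sketched in Python (a minimal illustration for non-negative integers; iterating the input in reverse in step 3 keeps the sort stable):

```python
def counting_sort(arr):
    if not arr:
        return []
    k = max(arr)
    count = [0] * (k + 1)
    for x in arr:                    # 1) count each unique value
        count[x] += 1
    for i in range(1, k + 1):        # 2) cumulative counts give final positions
        count[i] += count[i - 1]
    out = [0] * len(arr)
    for x in reversed(arr):          # 3) place elements into the output array
        count[x] -= 1
        out[count[x]] = x
    return out
```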
Quicksort is a divide and conquer sorting algorithm that works by partitioning an array around a pivot value. It then recursively sorts the sub-arrays on each side. The key steps are: 1) Choose a pivot element to split the array into left and right halves, with all elements on the left being less than the pivot and all on the right being greater; 2) Recursively quicksort the left and right halves; 3) Combine the now-sorted left and right halves into a fully sorted array. The example demonstrates quicksorting an array of 6 elements by repeatedly partitioning around a pivot until the entire array is sorted.
BFS uses a queue to perform a traversal of a graph, visiting all adjacent unvisited vertices of the vertex at the front of the queue and adding them to the queue. This produces a spanning tree without loops as the final result, where each vertex in the graph can be reached from the starting vertex without cycles. The queue, which has a maximum size of the total number of vertices, ensures a breadth-first search where all vertices at each level are explored before moving to the next level out.
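The queue-driven traversal described above can be sketched in Python (a minimal illustration; the adjacency-list dictionary representation is an assumption):

```python
from collections import deque

def bfs(graph, start):
    # graph: dict mapping each vertex to a list of its neighbours.
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        v = queue.popleft()           # vertex at the front of the queue
        order.append(v)
        for w in graph[v]:            # enqueue all adjacent unvisited vertices
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order
```

Because every vertex is enqueued at most once, the queue never holds more than the total number of vertices, matching the bound noted above.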
This document introduces different data structures. It defines data structures as logical models for organizing data that are important for algorithm development and program implementation. It classifies data structures into primitive and non-primitive types. Primitive types include basic data like integers, while non-primitive types are more complex structures like arrays, linked lists, stacks, and queues that organize groups of data. Key non-primitive data structures are then defined, including their purposes and common operations.
This document discusses binary trees and various tree traversal algorithms. It defines what a binary tree is, including nodes, roots, leaves, and siblings. It explains different types of binary tree traversals including preorder, inorder, postorder, and level order. Pseudocode is provided for algorithms to perform inorder, preorder, and postorder traversals on a binary tree. Advantages of using trees are also listed.
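The three recursive traversals named above can be sketched in Python (a minimal illustration; the TreeNode class is an assumption):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(node):      # left, root, right
    return inorder(node.left) + [node.val] + inorder(node.right) if node else []

def preorder(node):     # root, left, right
    return [node.val] + preorder(node.left) + preorder(node.right) if node else []

def postorder(node):    # left, right, root
    return postorder(node.left) + postorder(node.right) + [node.val] if node else []

root = TreeNode(2, TreeNode(1), TreeNode(3))   # 2 is the root; 1 and 3 are leaves
```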
Knapsack problem algorithm, greedy algorithm by HoneyChintal
The document discusses the knapsack problem and algorithms to solve it. It describes the 0-1 knapsack problem, which does not allow breaking items, and the fractional knapsack problem, which does. It provides an example comparing the two. The document then explains the greedy algorithm approach to solve the fractional knapsack problem by calculating value to weight ratios and filling the knapsack with the highest ratio items first. Pseudocode for the greedy fractional knapsack algorithm is provided along with analysis of its time complexity.
This document discusses sparse matrices. It defines a sparse matrix as a matrix with more zero values than non-zero values. Sparse matrices can save space by only storing the non-zero elements and their indices rather than allocating space for all elements. Two common representations for sparse matrices are the triplet representation, which stores the non-zero values and their row and column indices, and the linked representation, which connects the non-zero elements. Applications of sparse matrices include solving large systems of equations.
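The triplet representation described above can be sketched in Python (a minimal illustration; the function names and (row, col, value) ordering are assumptions):

```python
def to_triplets(matrix):
    # Store only the non-zero values with their row and column indices.
    return [(r, c, v)
            for r, row in enumerate(matrix)
            for c, v in enumerate(row)
            if v != 0]

def from_triplets(triplets, rows, cols):
    # Rebuild the full matrix from the compact triplet form.
    matrix = [[0] * cols for _ in range(rows)]
    for r, c, v in triplets:
        matrix[r][c] = v
    return matrix
```

A 3x3 matrix with two non-zero entries needs only two triplets instead of nine stored values, which is where the space saving comes from.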
This document describes graph search algorithms like breadth-first search (BFS) and their applications. It provides details on how BFS works, including that it maintains queues to search levels outwards from the starting vertex, and outputs the distance and predecessor of each vertex. BFS runs in O(V+E) time by visiting each vertex and edge once. The document also discusses how BFS can be used to find connected components in a graph and determine if a graph is bipartite.
An array is a group of consecutive memory locations that share the same name and type. An array allows storing multiple values of the same type using a single name. Arrays have advantages like efficiently storing and processing large amounts of data. Array declaration specifies the array name, length, and data type. Array initialization assigns initial values to the elements at declaration time by enclosing comma-separated values in braces.
The document discusses quicksort, an efficient sorting algorithm. It begins with a review of insertion sort and merge sort. It then presents the quicksort algorithm, including choosing a pivot, partitioning the array around the pivot, and recursively sorting subarrays. Details are provided on the partition process with examples. Analysis shows quicksort has worst-case time complexity of O(n^2) but expected complexity of O(n log n) with only O(1) extra memory. Strict proofs are outlined for the worst and expected cases. The document concludes with notes on implementing quicksort in Java for practice.
Insertion sort is a simple sorting algorithm that works by building a sorted array from left to right by inserting each element into its sorted position. It is more efficient for smaller data sets but less efficient for larger data sets compared to other algorithms like merge sort. Merge sort works by dividing an array into halves, recursively sorting the halves, and then merging the sorted halves into a single sorted array.
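The divide, recursively sort, and merge steps of merge sort described above can be sketched in Python (a minimal illustration, not the summarized document's code):

```python
def merge_sort(arr):
    if len(arr) <= 1:                 # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # recursively sort each half
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0           # merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # append whichever half remains
```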
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. However, insertion sort provides several advantages:
The document describes the insertion sort algorithm sorting the array [15, 9, 9, 10, 12, 1, 11, 3, 9]. It works by taking each element from the unsorted part of the array and inserting it into the correct position in the sorted part. It iterates through the array, swapping elements if the current element is less than the element preceding it, until the subarray is sorted.
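The swap-while-smaller behaviour described above can be sketched in Python and run on the same array (a minimal illustration, not the summarized document's code):

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        j = i
        # Swap the current element leftwards while it is smaller
        # than the element preceding it.
        while j > 0 and arr[j] < arr[j - 1]:
            arr[j], arr[j - 1] = arr[j - 1], arr[j]
            j -= 1
    return arr
```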
Evaluating different techniques for pneumoperitoneum in comparison to Needle Scope, reaching a risk score for laparoscopy, and reaching the best technique for pneumoperitoneum for each individual patient.
Quick sort Algorithm Discussion And Analysis by SNJ Chaudhary
Quicksort is a divide-and-conquer algorithm that works by partitioning an array around a pivot element and recursively sorting the subarrays. In the average case, it has an efficiency of Θ(n log n) time, as the partitioning typically divides the array into balanced subproblems. However, in the worst case of an already sorted array, it can take Θ(n^2) time due to highly unbalanced partitioning. Randomizing the choice of pivot helps avoid worst-case scenarios and achieve average-case efficiency in practice, making quicksort very efficient and commonly used.
A guide for how to use image generating AI to create images you like. Walks you through fundamental concepts such as generative AI, AI models, and prompts. Shows you how to compose a prompt in step-by-step manner.
Find n-th Fibonacci iteratively - illustrated walkthrough by Yoshi Watanabe
A step-by-step illustration of the Find n-th Fibonacci function to help you walk through a series of operations. The illustration is accompanied by the actual code, with a bold line indicating the current operation.
https://github.com/yoshiwatanabe/Algorithms/blob/master/Finding/Fibonacci.cs
Binary search tree exact match - illustrated walkthrough by Yoshi Watanabe
A step-by-step illustration of Binary Search Tree (Exact Match) to help you walk through a series of operations. The illustration is accompanied by the actual code, with a bold line indicating the current operation.
Binary search: illustrated step-by-step walk through by Yoshi Watanabe
A step-by-step illustration of Binary Search to help you walk through a series of operations. The illustration is accompanied by the actual code, with a bold line indicating the current operation.
https://github.com/yoshiwatanabe/Algorithms/blob/master/Finding/BinarySearch.cs
Merge sort: illustrated step-by-step walk through by Yoshi Watanabe
A step-by-step illustration of Merge sort to help you walk through a series of operations. The illustration is accompanied by the actual code, with a bold line indicating the current operation.
2. Partition function
This function does most of the heavy lifting, so we look at it first and then see it in the context of the Quicksort algorithm.

int storeIndex = begin;
for (int i = begin; i < last; i++) {
    if (array[i] <= array[last]) {
        Swap(array, i, storeIndex);
        storeIndex = storeIndex + 1;
    }
}
Swap(array, storeIndex, last);
return storeIndex;

3. The slides trace this code on the array 12, 7, 14, 9, 10, 11 (indices [0] through [5]), with begin = 0, last = 5, and the pivot array[last] = 11:
- storeIndex is initialized to begin (0), and the loop starts at i = 0.
- i = 0: 12 <= 11 is false, so nothing is swapped; i advances to 1.
- i = 1: 7 <= 11 is true, so array[i] and array[storeIndex] are swapped, giving 7, 12, 14, 9, 10, 11; storeIndex advances to 1 and i advances to 2.
- i = 2: 14 <= 11 is false; i advances to 3.
- i = 3: 9 <= 11 is true, so array[3] and array[1] are swapped, giving 7, 9, 14, 12, 10, 11; storeIndex advances to 2 and i advances to 4.
- i = 4: 10 <= 11 is true, so array[4] and array[2] are swapped, giving 7, 9, 10, 12, 14, 11; storeIndex advances to 3, and i advances to 5, which equals last, so the loop ends.
- The final Swap exchanges array[storeIndex] and array[last], placing the pivot in its sorted position: 7, 9, 10, 11, 14, 12.
- The function returns storeIndex (3), the pivot's final index: every element to its left is <= 11, and every element to its right is greater.