From 26053d5382c4f2e26160bba7ea1385b92c228c5c Mon Sep 17 00:00:00 2001
From: Eshparsi
Date: Sun, 19 May 2024 12:08:10 +0530
Subject: [PATCH 1/7] graph

---
 contrib/Data-Structure-Graphs/graph.md | 220 +++++++++++++++++++++++++
 contrib/Data-Structure-Graphs/index.md |   3 +
 2 files changed, 223 insertions(+)
 create mode 100644 contrib/Data-Structure-Graphs/graph.md
 create mode 100644 contrib/Data-Structure-Graphs/index.md

diff --git a/contrib/Data-Structure-Graphs/graph.md b/contrib/Data-Structure-Graphs/graph.md
new file mode 100644
index 0000000..b9bd709
--- /dev/null
+++ b/contrib/Data-Structure-Graphs/graph.md
@@ -0,0 +1,220 @@
+# Graph Data Structure
+
+Graph is a non-linear data structure consisting of vertices and edges. It is a powerful tool for representing and analyzing complex relationships between objects or entities.
+
+## Components of a Graph:
+
+**Vertices:** Vertices, also known as nodes, are the fundamental units of the graph. Every vertex can be labelled or unlabelled.
+**Edges:** Edges connect two nodes of the graph. In a directed graph, an edge is an ordered pair of nodes. Edges can connect any two nodes in any possible way, and every edge can be labelled or unlabelled.
+
+## Basic Operations on Graphs:
+- Insertion of Nodes/Edges in the graph
+- Deletion of Nodes/Edges in the graph
+- Searching on Graphs
+- Traversal of Graphs
+
+## Types of Graph ##
+
+
+**1.Undirected Graph:** In an undirected graph, edges have no direction, and they represent symmetric relationships between nodes. If there is an edge between node A and node B, you can travel from A to B and from B to A.
+
+**2. Directed Graph (Digraph):** In a directed graph, edges have a direction, indicating a one-way relationship between nodes. If there is an edge from node A to node B, you can travel from A to B but not necessarily from B to A.
+
+**3. Weighted Graph:** In a weighted graph, edges have associated weights or costs. These weights can represent various attributes such as distance, cost, or capacity. Weighted graphs are commonly used in applications like route planning or network optimization.
+
+**4. Cyclic Graph:** A cyclic graph contains at least one cycle, which is a path that starts and ends at the same node. In other words, you can traverse the graph and return to a previously visited node by following the edges.
+
+**5. Acyclic Graph:** An acyclic graph, as the name suggests, does not contain any cycles. This type of graph is often used in scenarios where a cycle would be nonsensical or undesirable, such as representing dependencies between tasks or events.
+
+**6. Tree:** A tree is a special type of acyclic graph where each node has a unique parent except for the root node, which has no parent. Trees have a hierarchical structure and are frequently used in data structures like binary trees or decision trees.
+
+## Representation of Graphs:##
+There are two ways to store a graph:
+
+**1. Adjacency Matrix**
+In this method, the graph is stored as a 2D matrix whose rows and columns denote vertices. Each entry indicates whether an edge exists between the corresponding pair of vertices; in a weighted graph, the entry stores the weight of that edge.
+
+```python
+def create_adjacency_matrix(graph):
+    num_vertices = len(graph)
+
+    # Start from an all-zero matrix and mirror every edge so the
+    # result is symmetric (the input is assumed to be undirected).
+    adj_matrix = [[0] * num_vertices for _ in range(num_vertices)]
+
+    for i in range(num_vertices):
+        for j in range(num_vertices):
+            if graph[i][j] == 1:
+                adj_matrix[i][j] = 1
+                adj_matrix[j][i] = 1
+
+    return adj_matrix
+
+
+graph = [
+    [0, 1, 0, 0],
+    [1, 0, 1, 0],
+    [0, 1, 0, 1],
+    [0, 0, 1, 0]
+]
+
+adj_matrix = create_adjacency_matrix(graph)
+
+for row in adj_matrix:
+    print(' '.join(map(str, row)))
+
+```
+
+**2. Adjacency List**
+This graph is represented as a collection of lists: the list at index i stores the vertices adjacent to vertex i.
+
+```python
+def create_adjacency_list(edges, num_vertices):
+    adj_list = [[] for _ in range(num_vertices)]
+
+    # Each (u, v) edge is mirrored because the graph is undirected.
+    for u, v in edges:
+        adj_list[u].append(v)
+        adj_list[v].append(u)
+
+    return adj_list
+
+if __name__ == "__main__":
+    num_vertices = 4
+    edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 1)]
+
+    adj_list = create_adjacency_list(edges, num_vertices)
+
+    for i in range(num_vertices):
+        print(f"{i} -> {' '.join(map(str, adj_list[i]))}")
+```
+`Output
+0 -> 1 2
+1 -> 0 2 3
+2 -> 0 1 3
+3 -> 2 1 `
+
+
+
+# Traversal Techniques #
+
+## Breadth First Search (BFS) ##
+- It is a graph traversal algorithm that explores all the vertices in a graph at the current depth before moving on to the vertices at the next depth level.
+- It starts at a specified vertex and visits all its neighbors before moving on to the next level of neighbors.
+BFS is commonly used in algorithms for pathfinding, connected components, and shortest path problems in graphs.
+
+**Steps of the BFS algorithm**
+
+
+- **Step1:** Initially queue and visited arrays are empty.
+- **Step2:** Push node 0 into queue and mark it visited.
+- **Step 3:** Remove node 0 from the front of the queue, then visit and push its unvisited neighbours into the queue.
+- **Step 4:** Remove node 1 from the front of the queue, then visit and push its unvisited neighbours into the queue.
+- **Step 5:** Remove node 2 from the front of the queue, then visit and push its unvisited neighbours into the queue.
+- **Step 6:** Remove node 3 from the front of the queue, then visit and push its unvisited neighbours into the queue.
+- **Steps 7:** Remove node 4 from the front of the queue, then visit and push its unvisited neighbours into the queue.
+
+```python
+
+from collections import deque
+
+def bfs(adjList, startNode, visited):
+    q = deque()
+
+    visited[startNode] = True
+    q.append(startNode)
+
+    while q:
+        currentNode = q.popleft()
+        print(currentNode, end=" ")
+
+        for neighbor in adjList[currentNode]:
+            if not visited[neighbor]:
+                visited[neighbor] = True
+                q.append(neighbor)
+
+def addEdge(adjList, u, v):
+    adjList[u].append(v)
+
+def main():
+    vertices = 5
+
+    adjList = [[] for _ in range(vertices)]
+
+    addEdge(adjList, 0, 1)
+    addEdge(adjList, 0, 2)
+    addEdge(adjList, 1, 3)
+    addEdge(adjList, 1, 4)
+    addEdge(adjList, 2, 4)
+
+    visited = [False] * vertices
+
+    print("Breadth First Traversal", end=" ")
+    bfs(adjList, 0, visited)
+
+if __name__ == "__main__": #Output : Breadth First Traversal 0 1 2 3 4
+    main()
+
+```
+
+**Time Complexity:** `O(V+E)`, where V is the number of nodes and E is the number of edges.
+**Auxiliary Space:** `O(V)`
+
+
+## Depth-first search ##
+
+Depth-first search is an algorithm for traversing or searching tree or graph data structures.
+The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking.
+
+**Steps of the DFS algorithm**
+- **Step1:** Initially stack and visited arrays are empty.
+- **Step 2:** Visit 0 and put its adjacent nodes which are not visited yet into the stack.
+- **Step 3:** Now, node 1 is at the top of the stack, so visit node 1, pop it from the stack, and put all of its unvisited adjacent nodes into the stack.
+- **Step 4:** Now, node 2 is at the top of the stack, so visit node 2, pop it from the stack, and put all of its unvisited adjacent nodes (i.e., 3 and 4) into the stack.
+- **Step 5:** Now, node 4 is at the top of the stack, so visit node 4, pop it from the stack, and put all of its unvisited adjacent nodes into the stack.
+- **Step 6:** Now, node 3 is at the top of the stack, so visit node 3, pop it from the stack, and put all of its unvisited adjacent nodes into the stack.
+
+
+
+```python
+from collections import defaultdict
+
+class Graph:
+
+    def __init__(self):
+
+        self.graph = defaultdict(list)
+
+    def addEdge(self, u, v):
+        self.graph[u].append(v)
+
+    def DFSUtil(self, v, visited):
+
+        visited.add(v)
+        print(v, end=' ')
+
+        for neighbour in self.graph[v]:
+            if neighbour not in visited:
+                self.DFSUtil(neighbour, visited)
+
+    def DFS(self, v):
+
+        visited = set()
+
+        self.DFSUtil(v, visited)
+
+if __name__ == "__main__":
+    g = Graph()
+    g.addEdge(0, 1)
+    g.addEdge(0, 2)
+    g.addEdge(1, 2)
+    g.addEdge(2, 0)
+    g.addEdge(2, 3)
+    g.addEdge(3, 3)
+
+    print("Depth First Traversal (starting from vertex 2): ", end="")
+    g.DFS(2)  # Output: Depth First Traversal (starting from vertex 2): 2 0 1 3
+
+```
+
+**Time complexity:** `O(V + E)`, where V is the number of vertices and E is the number of edges in the graph.
+**Auxiliary Space:** `O(V)`, for the visited set and the recursion stack, each of which can hold up to V vertices.
+
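+## Weighted Graph Representation
+
+The representations earlier in this file store only whether an edge exists. As a hedged sketch (the function name and weights are made-up sample values, not from any source), a weighted graph can instead keep `(neighbour, weight)` pairs in each adjacency list:
+
+```python
+def create_weighted_adjacency_list(edges, num_vertices):
+    # Each edge is assumed to be a (u, v, weight) triple.
+    adj_list = [[] for _ in range(num_vertices)]
+    for u, v, w in edges:
+        adj_list[u].append((v, w))
+        adj_list[v].append((u, w))  # mirror the edge: the graph is undirected
+    return adj_list
+
+edges = [(0, 1, 4), (0, 2, 1), (1, 2, 2), (2, 3, 5)]
+for i, neighbours in enumerate(create_weighted_adjacency_list(edges, 4)):
+    print(i, "->", neighbours)
+```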
+
+
diff --git a/contrib/Data-Structure-Graphs/index.md b/contrib/Data-Structure-Graphs/index.md
new file mode 100644
index 0000000..6471576
--- /dev/null
+++ b/contrib/Data-Structure-Graphs/index.md
@@ -0,0 +1,3 @@
+# List of sections
+
+- [Graphs](graph.md)
\ No newline at end of file

From 0db9cef68cc452409013ad9d94335d8e1e0c4124 Mon Sep 17 00:00:00 2001
From: Eshparsi
Date: Sun, 19 May 2024 12:21:46 +0530
Subject: [PATCH 2/7] g

---
 contrib/Data-Structure-Graphs/graph.md | 61 +++++++++++++-------------
 1 file changed, 30 insertions(+), 31 deletions(-)

diff --git a/contrib/Data-Structure-Graphs/graph.md b/contrib/Data-Structure-Graphs/graph.md
index b9bd709..9837896 100644
--- a/contrib/Data-Structure-Graphs/graph.md
+++ b/contrib/Data-Structure-Graphs/graph.md
@@ -2,21 +2,22 @@
 
 Graph is a non-linear data structure consisting of vertices and edges. It is a powerful tool for representing and analyzing complex relationships between objects or entities.
 
-## Components of a Graph:
+## Components of a Graph
 
-**Vertices:** Vertices, also known as nodes, are the fundamental units of the graph. Every vertex can be labelled or unlabelled.
-**Edges:** Edges connect two nodes of the graph. In a directed graph, an edge is an ordered pair of nodes. Edges can connect any two nodes in any possible way, and every edge can be labelled or unlabelled.
+1. **Vertices:** Vertices, also known as nodes, are the fundamental units of the graph. Every vertex can be labelled or unlabelled.
+
+2. **Edges:** Edges connect two nodes of the graph. In a directed graph, an edge is an ordered pair of nodes. Edges can connect any two nodes in any possible way, and every edge can be labelled or unlabelled.
+
-## Basic Operations on Graphs:
+## Basic Operations on Graphs
 - Insertion of Nodes/Edges in the graph
 - Deletion of Nodes/Edges in the graph
 - Searching on Graphs
 - Traversal of Graphs
 
-## Types of Graph ##
+## Types of Graph
 
 
-**1.Undirected Graph:** In an undirected graph, edges have no direction, and they represent symmetric relationships between nodes. If there is an edge between node A and node B, you can travel from A to B and from B to A.
+**1. Undirected Graph:** In an undirected graph, edges have no direction, and they represent symmetric relationships between nodes. If there is an edge between node A and node B, you can travel from A to B and from B to A.
 
 **2. Directed Graph (Digraph):** In a directed graph, edges have a direction, indicating a one-way relationship between nodes. If there is an edge from node A to node B, you can travel from A to B but not necessarily from B to A.
 
@@ -28,10 +29,10 @@ Graph is a non-linear data structure consisting of vertices and edges. It is a p
 
 **6. Tree:** A tree is a special type of acyclic graph where each node has a unique parent except for the root node, which has no parent. Trees have a hierarchical structure and are frequently used in data structures like binary trees or decision trees.
 
-## Representation of Graphs:##
+## Representation of Graphs
 There are two ways to store a graph:
 
-**1. Adjacency Matrix**
+1. **Adjacency Matrix**
 In this method, the graph is stored as a 2D matrix whose rows and columns denote vertices. Each entry indicates whether an edge exists between the corresponding pair of vertices; in a weighted graph, the entry stores the weight of that edge.
 
 ```python
@@ -63,8 +64,8 @@ for row in adj_matrix:
 
 ```
 
-**2. Adjacency List**
-This graph is represented as a collection of lists: the list at index i stores the vertices adjacent to vertex i.
+2. **Adjacency List**
+In this method, the graph is represented as a collection of lists: the list at index i stores the vertices adjacent to vertex i.
 
 ```python
@@ -85,17 +86,17 @@ if __name__ == "__main__":
     for i in range(num_vertices):
         print(f"{i} -> {' '.join(map(str, adj_list[i]))}")
 ```
-`Output
-0 -> 1 2
-1 -> 0 2 3
-2 -> 0 1 3
-3 -> 2 1 `
+`Output`
+`0 -> 1 2`
+`1 -> 0 2 3`
+`2 -> 0 1 3`
+`3 -> 2 1 `
 
 
 
-# Traversal Techniques #
+# Traversal Techniques
 
-## Breadth First Search (BFS) ##
+## Breadth First Search (BFS)
 - It is a graph traversal algorithm that explores all the vertices in a graph at the current depth before moving on to the vertices at the next depth level.
 - It starts at a specified vertex and visits all its neighbors before moving on to the next level of neighbors.
 BFS is commonly used in algorithms for pathfinding, connected components, and shortest path problems in graphs.
 
@@ -103,13 +104,13 @@ BFS is commonly used in algorithms for pathfinding, connected components, and sh
 
 **Steps of the BFS algorithm**
 
 
-- **Step1:** Initially queue and visited arrays are empty.
-- **Step2:** Push node 0 into queue and mark it visited.
+- **Step 1:** Initially queue and visited arrays are empty.
+- **Step 2:** Push node 0 into queue and mark it visited.
 - **Step 3:** Remove node 0 from the front of the queue, then visit and push its unvisited neighbours into the queue.
 - **Step 4:** Remove node 1 from the front of the queue, then visit and push its unvisited neighbours into the queue.
 - **Step 5:** Remove node 2 from the front of the queue, then visit and push its unvisited neighbours into the queue.
 - **Step 6:** Remove node 3 from the front of the queue, then visit and push its unvisited neighbours into the queue.
-- **Steps 7:** Remove node 4 from the front of the queue, then visit and push its unvisited neighbours into the queue.
+- **Step 7:** Remove node 4 from the front of the queue, then visit and push its unvisited neighbours into the queue.
 
 ```python
@@ -154,16 +155,17 @@ if __name__ == "__main__": #Output : Breadth First Traversal 0 1 2 3 4
     main()
 
 ```
 
-**Time Complexity:** `O(V+E)`, where V is the number of nodes and E is the number of edges.
-**Auxiliary Space:** `O(V)`
+- **Time Complexity:** `O(V+E)`, where V is the number of nodes and E is the number of edges.
+- **Auxiliary Space:** `O(V)`
 
 
-## Depth-first search ##
+## Depth-first search
 
 Depth-first search is an algorithm for traversing or searching tree or graph data structures.
 The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking.
 
-**Steps of the DFS algorithm**
-- **Step1:** Initially stack and visited arrays are empty.
+**Steps of the DFS algorithm**
+
+- **Step 1:** Initially stack and visited arrays are empty.
 - **Step 2:** Visit 0 and put its adjacent nodes which are not visited yet into the stack.
 - **Step 3:** Now, node 1 is at the top of the stack, so visit node 1, pop it from the stack, and put all of its unvisited adjacent nodes into the stack.
 - **Step 4:** Now, node 2 is at the top of the stack, so visit node 2, pop it from the stack, and put all of its unvisited adjacent nodes (i.e., 3 and 4) into the stack.
 - **Step 5:** Now, node 4 is at the top of the stack, so visit node 4, pop it from the stack, and put all of its unvisited adjacent nodes into the stack.
 - **Step 6:** Now, node 3 is at the top of the stack, so visit node 3, pop it from the stack, and put all of its unvisited adjacent nodes into the stack.
 
 
 
 ```python
@@ -178,14 +180,12 @@ from collections import defaultdict
 class Graph:
 
     def __init__(self):
-
         self.graph = defaultdict(list)
 
     def addEdge(self, u, v):
         self.graph[u].append(v)
 
     def DFSUtil(self, v, visited):
-
         visited.add(v)
         print(v, end=' ')
 
@@ -194,9 +194,7 @@ class Graph:
             self.DFSUtil(neighbour, visited)
 
     def DFS(self, v):
-
        visited = set()
-
        self.DFSUtil(v, visited)
 
 if __name__ == "__main__":
@@ -208,12 +206,13 @@ if __name__ == "__main__":
     g.addEdge(2, 3)
     g.addEdge(3, 3)
 
     print("Depth First Traversal (starting from vertex 2): ", end="")
-    g.DFS(2)  # Output: Depth First Traversal (starting from vertex 2): 2 0 1 3
+    g.DFS(2)
 
 ```
+`Output: Depth First Traversal (starting from vertex 2): 2 0 1 3 `
 
-**Time complexity:** `O(V + E)`, where V is the number of vertices and E is the number of edges in the graph.
-**Auxiliary Space:** `O(V)`, for the visited set and the recursion stack, each of which can hold up to V vertices.
+- **Time complexity:** `O(V + E)`, where V is the number of vertices and E is the number of edges in the graph.
+- **Auxiliary Space:** `O(V)`, for the visited set and the recursion stack, each of which can hold up to V vertices.
From 05071b50871a74cfa03420c70014f6a6c16d96fd Mon Sep 17 00:00:00 2001
From: Eshparsi <112681516+Eshparsi@users.noreply.github.com>
Date: Sun, 19 May 2024 12:24:13 +0530
Subject: [PATCH 3/7] Update graph.md

---
 contrib/Data-Structure-Graphs/graph.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/contrib/Data-Structure-Graphs/graph.md b/contrib/Data-Structure-Graphs/graph.md
index 9837896..517c90d 100644
--- a/contrib/Data-Structure-Graphs/graph.md
+++ b/contrib/Data-Structure-Graphs/graph.md
@@ -32,7 +32,7 @@ Graph is a non-linear data structure consisting of vertices and edges. It is a p
 ## Representation of Graphs
 There are two ways to store a graph:
 
-1. **Adjacency Matrix**
+1. **Adjacency Matrix:**
 In this method, the graph is stored as a 2D matrix whose rows and columns denote vertices. Each entry indicates whether an edge exists between the corresponding pair of vertices; in a weighted graph, the entry stores the weight of that edge.
 
 ```python
@@ -64,7 +64,7 @@ for row in adj_matrix:
 
 ```
 
-2. **Adjacency List**
+2. **Adjacency List:**
 In this method, the graph is represented as a collection of lists: the list at index i stores the vertices adjacent to vertex i.
 
 ```python

From 7d98fe81f13817199ba58f2f2709423bebfe35b7 Mon Sep 17 00:00:00 2001
From: Eshparsi
Date: Thu, 23 May 2024 07:20:00 +0530
Subject: [PATCH 4/7] shifted content to ds-algorithms

---
 contrib/Data-Structure-Graphs/index.md                    | 3 ---
 contrib/{Data-Structure-Graphs => ds-algorithms}/graph.md | 0
 contrib/ds-algorithms/index.md                            | 1 +
 3 files changed, 1 insertion(+), 3 deletions(-)
 delete mode 100644 contrib/Data-Structure-Graphs/index.md
 rename contrib/{Data-Structure-Graphs => ds-algorithms}/graph.md (100%)

diff --git a/contrib/Data-Structure-Graphs/index.md b/contrib/Data-Structure-Graphs/index.md
deleted file mode 100644
index 6471576..0000000
--- a/contrib/Data-Structure-Graphs/index.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# List of sections
-
-- [Graphs](graph.md)
\ No newline at end of file
diff --git a/contrib/Data-Structure-Graphs/graph.md b/contrib/ds-algorithms/graph.md
similarity index 100%
rename from contrib/Data-Structure-Graphs/graph.md
rename to contrib/ds-algorithms/graph.md
diff --git a/contrib/ds-algorithms/index.md b/contrib/ds-algorithms/index.md
index 5b52155..347a9d0 100644
--- a/contrib/ds-algorithms/index.md
+++ b/contrib/ds-algorithms/index.md
@@ -2,3 +2,4 @@
 
 - [Section title](filename.md)
 - [Sorting Algorithms](sorting-algorithms.md)
+- [Graphs](graph.md)

From f52fa729cffddbeed121826024ca972ff97cd1a9 Mon Sep 17 00:00:00 2001
From: Eshparsi <112681516+Eshparsi@users.noreply.github.com>
Date: Thu, 23 May 2024 07:28:59 +0530
Subject: [PATCH 5/7] Update index.md

---
 contrib/ds-algorithms/index.md | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/contrib/ds-algorithms/index.md b/contrib/ds-algorithms/index.md
index 347a9d0..adec6aa 100644
--- a/contrib/ds-algorithms/index.md
+++ b/contrib/ds-algorithms/index.md
@@ -1,5 +1,11 @@
 # List of sections
 
-- [Section title](filename.md)
-- [Sorting Algorithms](sorting-algorithms.md)
+- [Queues in Python](Queues.md)
 - [Graphs](graph.md)
+- [Sorting Algorithms](sorting-algorithms.md)
+- [Recursion and Backtracking](recursion.md)
+- [Divide and Conquer Algorithm](divide-and-conquer-algorithm.md)
+- [Searching Algorithms](searching-algorithms.md)
+- [Greedy Algorithms](greedy-algorithms.md)
+- [Dynamic Programming](dynamic-programming.md)
+

From bc18bfed040ddae6301112acda317672950291fa Mon Sep 17 00:00:00 2001
From: Eshparsi
Date: Thu, 23 May 2024 07:32:15 +0530
Subject: [PATCH 6/7] .

---
 .../divide-and-conquer-algorithm.md           |  54 ++++++
 contrib/ds-algorithms/dynamic-programming.md  | 132 ++++++++++++++
 contrib/ds-algorithms/greedy-algorithms.md    | 135 +++++++++++++++
 contrib/ds-algorithms/recursion.md            | 107 ++++++++++++
 contrib/ds-algorithms/searching-algorithms.md | 161 ++++++++++++++++++
 5 files changed, 589 insertions(+)
 create mode 100644 contrib/ds-algorithms/divide-and-conquer-algorithm.md
 create mode 100644 contrib/ds-algorithms/dynamic-programming.md
 create mode 100644 contrib/ds-algorithms/greedy-algorithms.md
 create mode 100644 contrib/ds-algorithms/recursion.md
 create mode 100644 contrib/ds-algorithms/searching-algorithms.md

diff --git a/contrib/ds-algorithms/divide-and-conquer-algorithm.md b/contrib/ds-algorithms/divide-and-conquer-algorithm.md
new file mode 100644
index 0000000..b5a356e
--- /dev/null
+++ b/contrib/ds-algorithms/divide-and-conquer-algorithm.md
@@ -0,0 +1,54 @@
+# Divide and Conquer Algorithms
+
+Divide and Conquer is a paradigm for solving problems that involves breaking a problem into smaller sub-problems, solving the sub-problems recursively, and then combining their solutions to solve the original problem.
+
+## Merge Sort
+
+Merge Sort is a popular sorting algorithm that follows the divide and conquer strategy. It divides the input array into two halves, recursively sorts the halves, and then merges them.
+
+**Algorithm Overview:**
+- **Divide:** Divide the unsorted list into two sublists of about half the size.
+- **Conquer:** Recursively sort each sublist.
+- **Combine:** Merge the sorted sublists back into one sorted list.
+
+```python
+def merge_sort(arr):
+    if len(arr) > 1:
+        mid = len(arr) // 2
+        left_half = arr[:mid]
+        right_half = arr[mid:]
+
+        # Recursively sort both halves
+        merge_sort(left_half)
+        merge_sort(right_half)
+
+        i = j = k = 0
+
+        # Merge the two sorted halves back into arr
+        while i < len(left_half) and j < len(right_half):
+            if left_half[i] < right_half[j]:
+                arr[k] = left_half[i]
+                i += 1
+            else:
+                arr[k] = right_half[j]
+                j += 1
+            k += 1
+
+        # Copy any remaining elements of the left half
+        while i < len(left_half):
+            arr[k] = left_half[i]
+            i += 1
+            k += 1
+
+        # Copy any remaining elements of the right half
+        while j < len(right_half):
+            arr[k] = right_half[j]
+            j += 1
+            k += 1
+
+arr = [12, 11, 13, 5, 6, 7]
+merge_sort(arr)
+print("Sorted array:", arr)
+```
+
+## Complexity Analysis
+- **Time Complexity:** O(n log n) in all cases
+- **Space Complexity:** O(n) additional space for the merge operation
+
+---
diff --git a/contrib/ds-algorithms/dynamic-programming.md b/contrib/ds-algorithms/dynamic-programming.md
new file mode 100644
index 0000000..43149f8
--- /dev/null
+++ b/contrib/ds-algorithms/dynamic-programming.md
@@ -0,0 +1,132 @@
+# Dynamic Programming
+
+Dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems and solving each subproblem only once. It stores the solutions to subproblems to avoid redundant computations, making it particularly useful for optimization problems where the solution can be obtained by combining solutions to smaller subproblems.
+
+## Real-Life Examples of Dynamic Programming
+- **Fibonacci Sequence:** Computing the nth Fibonacci number efficiently.
+- **Shortest Path:** Finding the shortest path in a graph from a source to a destination.
+- **String Edit Distance:** Calculating the minimum number of operations required to transform one string into another.
+- **Knapsack Problem:** Maximizing the value of items in a knapsack without exceeding its weight capacity.
+
+# Some Common Dynamic Programming Techniques
+
+# 1. Fibonacci Sequence
+
+The Fibonacci sequence is a classic example used to illustrate dynamic programming. It is a series of numbers where each number is the sum of the two preceding ones, usually starting with 0 and 1.
+
+**Algorithm Overview:**
+- **Base Cases:** The first two numbers in the Fibonacci sequence are defined as 0 and 1.
+- **Memoization:** Store the results of previously computed Fibonacci numbers to avoid redundant computations.
+- **Recurrence Relation:** Compute each Fibonacci number by adding the two preceding numbers.
+
+## Fibonacci Sequence Code in Python (Top-Down Approach with Memoization)
+
+```python
+def fibonacci(n, memo={}):
+    # The mutable default argument doubles as a cache that persists
+    # across calls, which is intentional for this single-purpose function.
+    if n in memo:
+        return memo[n]
+    if n <= 1:
+        return n
+    memo[n] = fibonacci(n-1, memo) + fibonacci(n-2, memo)
+    return memo[n]
+
+n = 10
+print(f"The {n}th Fibonacci number is: {fibonacci(n)}.")
+```
+
+## Fibonacci Sequence Code in Python (Bottom-Up Approach)
+
+```python
+def fibonacci(n):
+    fib = [0, 1]
+    for i in range(2, n + 1):
+        fib.append(fib[i - 1] + fib[i - 2])
+    return fib[n]
+
+n = 10
+print(f"The {n}th Fibonacci number is: {fibonacci(n)}.")
+```
+
+## Complexity Analysis
+- **Time Complexity**: O(n) for both approaches
+- **Space Complexity**: O(n) for both as written; the bottom-up approach drops to O(1) if only the last two values are kept instead of the whole list
+
+
+
+# 2. Longest Common Subsequence
+
+The longest common subsequence (LCS) problem is to find the longest subsequence common to two sequences. A subsequence is a sequence that appears in the same relative order but is not necessarily contiguous.
+
+**Algorithm Overview:**
+- **Base Cases:** If one of the sequences is empty, the LCS is empty.
+- **Memoization:** Store the results of previously computed LCS lengths to avoid redundant computations.
+- **Recurrence Relation:** Compute the LCS length by comparing characters of the sequences and making decisions based on whether they match.
+
+## Longest Common Subsequence Code in Python (Top-Down Approach with Memoization)
+
+```python
+def longest_common_subsequence(X, Y, m, n, memo=None):
+    # Use a fresh memo per top-level call: a shared default dict keyed
+    # only by (m, n) would return stale results for different strings.
+    if memo is None:
+        memo = {}
+    if (m, n) in memo:
+        return memo[(m, n)]
+    if m == 0 or n == 0:
+        return 0
+    if X[m - 1] == Y[n - 1]:
+        memo[(m, n)] = 1 + longest_common_subsequence(X, Y, m - 1, n - 1, memo)
+    else:
+        memo[(m, n)] = max(longest_common_subsequence(X, Y, m, n - 1, memo),
+                           longest_common_subsequence(X, Y, m - 1, n, memo))
+    return memo[(m, n)]
+
+X = "AGGTAB"
+Y = "GXTXAYB"
+print("Length of Longest Common Subsequence:", longest_common_subsequence(X, Y, len(X), len(Y)))
+```
+
+## Complexity Analysis
+- **Time Complexity**: O(m * n) for the top-down approach, where m and n are the lengths of the input sequences
+- **Space Complexity**: O(m * n) for the memoization table
+
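+Mirroring the two Fibonacci variants above, here is a bottom-up sketch of the same computation (the function name is my own, not from the original):
+
+```python
+def longest_common_subsequence_bottom_up(X, Y):
+    m, n = len(X), len(Y)
+    # dp[i][j] holds the LCS length of X[:i] and Y[:j]
+    dp = [[0] * (n + 1) for _ in range(m + 1)]
+    for i in range(1, m + 1):
+        for j in range(1, n + 1):
+            if X[i - 1] == Y[j - 1]:
+                dp[i][j] = dp[i - 1][j - 1] + 1
+            else:
+                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
+    return dp[m][n]
+
+print(longest_common_subsequence_bottom_up("AGGTAB", "GXTXAYB"))  # 4
+```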
+
+
+
+# 3. 0-1 Knapsack Problem
+
+The 0-1 knapsack problem is a classic optimization problem where the goal is to maximize the total value of items selected while keeping the total weight within a specified limit.
+
+**Algorithm Overview:**
+- **Base Cases:** If the capacity of the knapsack is 0 or there are no items to select, the total value is 0.
+- **Memoization:** Store the results of previously computed subproblems to avoid redundant computations.
+- **Recurrence Relation:** Compute the maximum value by considering whether to include the current item or not.
+
+## 0-1 Knapsack Problem Code in Python (Top-Down Approach with Memoization)
+
+```python
+def knapsack(weights, values, capacity, n, memo=None):
+    # As with the LCS example, use a fresh memo per top-level call so
+    # results from one item set cannot leak into another.
+    if memo is None:
+        memo = {}
+    if (capacity, n) in memo:
+        return memo[(capacity, n)]
+    if n == 0 or capacity == 0:
+        return 0
+    if weights[n - 1] > capacity:
+        # Item n-1 does not fit, so it must be skipped
+        memo[(capacity, n)] = knapsack(weights, values, capacity, n - 1, memo)
+    else:
+        # Take the better of including or excluding item n-1
+        memo[(capacity, n)] = max(values[n - 1] + knapsack(weights, values, capacity - weights[n - 1], n - 1, memo),
+                                  knapsack(weights, values, capacity, n - 1, memo))
+    return memo[(capacity, n)]
+
+weights = [10, 20, 30]
+values = [60, 100, 120]
+capacity = 50
+n = len(weights)
+print("Maximum value that can be obtained:", knapsack(weights, values, capacity, n))
+```
+
+## Complexity Analysis
+- **Time Complexity**: O(n * W) for the top-down approach, where n is the number of items and W is the capacity of the knapsack
+- **Space Complexity**: O(n * W) for the memoization table
+
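+A bottom-up sketch of the same problem, in the spirit of the earlier sections (the function name is my own):
+
+```python
+def knapsack_bottom_up(weights, values, capacity):
+    n = len(weights)
+    # dp[i][c] = best value using the first i items with capacity c
+    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
+    for i in range(1, n + 1):
+        for c in range(capacity + 1):
+            dp[i][c] = dp[i - 1][c]  # skip item i-1
+            if weights[i - 1] <= c:
+                dp[i][c] = max(dp[i][c],
+                               values[i - 1] + dp[i - 1][c - weights[i - 1]])
+    return dp[n][capacity]
+
+print(knapsack_bottom_up([10, 20, 30], [60, 100, 120], 50))  # 220
+```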
+
+
\ No newline at end of file
diff --git a/contrib/ds-algorithms/greedy-algorithms.md b/contrib/ds-algorithms/greedy-algorithms.md
new file mode 100644
index 0000000..c79ee99
--- /dev/null
+++ b/contrib/ds-algorithms/greedy-algorithms.md
@@ -0,0 +1,135 @@
+# Greedy Algorithms
+
+Greedy algorithms are simple, intuitive algorithms that make a sequence of choices at each step with the hope of finding a global optimum. They are called "greedy" because at each step, they choose the most advantageous option without considering the future consequences. Despite their simplicity, greedy algorithms are powerful tools for solving optimization problems, especially when the problem exhibits the greedy-choice property.
+
+## Real-Life Examples of Greedy Algorithms
+- **Coin Change:** Finding the minimum number of coins to make a certain amount of change.
+- **Job Scheduling:** Assigning tasks to machines to minimize completion time.
+- **Huffman Coding:** Constructing an optimal prefix-free binary code for data compression.
+- **Fractional Knapsack:** Selecting items to maximize the value within a weight limit.
+
+# Some Common Greedy Algorithms
+
+# 1. Coin Change Problem
+
+The coin change problem is a classic example of a greedy algorithm. Given a set of coin denominations and a target amount, the objective is to find the minimum number of coins required to make up that amount. Note that the greedy strategy is only guaranteed to be optimal for canonical coin systems such as `[1, 5, 10, 25]`; for arbitrary denominations it can return more coins than necessary (see the example after the complexity analysis).
+
+**Algorithm Overview:**
+- **Greedy Strategy:** At each step, the algorithm selects the largest denomination coin that is less than or equal to the remaining amount.
+- **Repeat Until Amount is Zero:** The process continues until the remaining amount becomes zero.
+
+## Coin Change Code in Python
+
+```python
+def coin_change(coins, amount):
+    coins.sort(reverse=True)  # try the largest denominations first
+    num_coins = 0
+    for coin in coins:
+        num_coins += amount // coin  # take as many of this coin as fit
+        amount %= coin
+    if amount == 0:
+        return num_coins
+    else:
+        return -1
+
+coins = [1, 5, 10, 25]
+amount = 63
+result = coin_change(coins, amount)
+if result != -1:
+    print(f"Minimum number of coins required: {result}.")
+else:
+    print("It is not possible to make the amount with the given denominations.")
+```
+
+## Complexity Analysis
+- **Time Complexity**: O(n log n) for sorting (if not pre-sorted), O(n) for iteration
+- **Space Complexity**: O(1)
+
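+A quick illustration of the caveat above, with denominations of my own choosing:
+
+```python
+# Greedy picks 4 + 1 + 1 (three coins), but 3 + 3 (two coins) is optimal,
+# so greedy is not guaranteed to minimise the coin count here.
+print(coin_change([1, 3, 4], 6))  # prints 3, although the true minimum is 2
+```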
+
+
+
+# 2. Activity Selection Problem
+
+The activity selection problem involves selecting the maximum number of mutually compatible activities that can be performed by a single person or machine, assuming that a person can only work on one activity at a time.
+
+**Algorithm Overview:**
+- **Greedy Strategy:** Sort the activities based on their finish times.
+- **Selecting Activities:** Iterate through the sorted activities, selecting each activity if it doesn't conflict with the previously selected ones.
+
+## Activity Selection Code in Python
+
+```python
+def activity_selection(start, finish):
+    # Sort activity indices by finish time so the greedy choice is valid
+    # even when the input is not pre-sorted.
+    order = sorted(range(len(start)), key=lambda i: finish[i])
+    activities = [order[0]]
+    last_finish = finish[order[0]]
+    for j in order[1:]:
+        if start[j] >= last_finish:
+            activities.append(j)
+            last_finish = finish[j]
+    return activities
+
+start = [1, 3, 0, 5, 8, 5]
+finish = [2, 4, 6, 7, 9, 9]
+selected_activities = activity_selection(start, finish)
+print("Selected activities:", selected_activities)
+```
+
+## Complexity Analysis
+- **Time Complexity**: O(n log n) for sorting (if not pre-sorted), O(n) for iteration
+- **Space Complexity**: O(1)
+
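+A quick check with the same activities in shuffled order (sample data only) shows why the function sorts by finish time itself:
+
+```python
+start = [5, 1, 3, 0, 5, 8]
+finish = [9, 2, 4, 6, 7, 9]
+# Indices are reported relative to this shuffled input, e.g. [1, 2, 4, 5].
+print("Selected activities:", activity_selection(start, finish))
+```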
+
+
+
+# 3. Huffman Coding
+
+Huffman coding is a method of lossless data compression that efficiently represents characters or symbols in a file. It uses variable-length codes to represent characters, with shorter codes assigned to more frequent characters.
+
+**Algorithm Overview:**
+- **Frequency Analysis:** Determine the frequency of each character in the input data.
+- **Building the Huffman Tree:** Construct a binary tree where each leaf node represents a character and the path to the leaf node determines its code.
+- **Assigning Codes:** Traverse the Huffman tree to assign codes to each character, with shorter codes for more frequent characters.
+
+## Huffman Coding Code in Python
+
+```python
+from heapq import heappush, heappop, heapify
+from collections import defaultdict
+
+def huffman_coding(data):
+    # Count how often each symbol occurs
+    frequency = defaultdict(int)
+    for char in data:
+        frequency[char] += 1
+
+    # Each heap entry is [weight, [symbol, code], [symbol, code], ...]
+    heap = [[weight, [symbol, ""]] for symbol, weight in frequency.items()]
+    heapify(heap)
+
+    # Repeatedly merge the two lightest subtrees, extending their codes
+    while len(heap) > 1:
+        lo = heappop(heap)
+        hi = heappop(heap)
+        for pair in lo[1:]:
+            pair[1] = '0' + pair[1]
+        for pair in hi[1:]:
+            pair[1] = '1' + pair[1]
+        heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
+
+    return sorted(heappop(heap)[1:], key=lambda p: (len(p[-1]), p))
+
+data = "Huffman coding is a greedy algorithm"
+encoded_data = huffman_coding(data)
+print("Huffman Codes:")
+for symbol, code in encoded_data:
+    print(f"{symbol}: {code}")
+```
+
+## Complexity Analysis
+- **Time Complexity**: O(n log n) for heap operations, where n is the number of unique characters
+- **Space Complexity**: O(n) for the heap
+
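+The returned pairs can be used directly to encode the input; a short follow-up sketch reusing the variables from the example above:
+
+```python
+# Build a symbol -> code table from the result and encode the string.
+codes = {symbol: code for symbol, code in encoded_data}
+bitstring = ''.join(codes[ch] for ch in data)
+print(f"Encoded size: {len(bitstring)} bits (vs {8 * len(data)} bits in ASCII)")
+```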
+
+
diff --git a/contrib/ds-algorithms/recursion.md b/contrib/ds-algorithms/recursion.md
new file mode 100644
index 0000000..7ab3136
--- /dev/null
+++ b/contrib/ds-algorithms/recursion.md
@@ -0,0 +1,107 @@
+# Introduction to Recursion
+
+Recursion is when a function calls itself to solve smaller instances of the same problem until a specified condition is fulfilled. It is used for tasks that can be divided into smaller sub-tasks.
+
+# How Recursion Works
+
+To solve a problem using recursion we must define:
+- Base condition: the condition under which the recursion ends.
+- Recursive case: the part of the function which calls itself to solve a smaller instance of the problem.
+
+Steps of Recursion
+
+When a recursive function is called, the following sequence of events occurs:
+- Function Call: The function is invoked with a specific argument.
+- Base Condition Check: The function checks if the argument satisfies the base case.
+- Recursive Call: If the base case is not met, the function performs some operations and makes a recursive call with a modified argument.
+- Stack Management: Each recursive call is placed on the call stack. The stack keeps track of each function call, its argument, and the point to return to once the call completes.
+- Unwinding the Stack: When the base case is eventually met, the function returns a value, and the stack starts unwinding, returning values to previous function calls until the initial call is resolved.
+
+# What is Stack Overflow in Recursion
+
+Stack overflow is an error that occurs when the call stack memory limit is exceeded. During execution, recursive calls are kept on the call stack while they wait for the deeper calls to complete. Without a base case, the function would call itself indefinitely, leading to a stack overflow.
+
+# Example
+
+- Factorial of a Number
+
+  The factorial of i is i multiplied by the factorial of (i-1). The base case is i = 0, where we return 1, since the factorial of 0 is 1.
+
+```python
+def factorial(i):
+    # base case
+    if i == 0:
+        return 1
+    # recursive case
+    else:
+        return i * factorial(i-1)
+
+i = 6
+print("Factorial of", i, "is:", factorial(i))  # Output: Factorial of 6 is: 720
+```
+
+# What is Backtracking
+
+Backtracking is a recursive algorithmic technique used to solve problems by exploring all possible solutions and discarding those that do not meet the problem's constraints. It is particularly useful for problems involving combinations, permutations, and finding paths in a grid.
+
+# How Backtracking Works
+
+- Incremental Solution Building: Solutions are built one step at a time.
+- Feasibility Check: At each step, a check is made to see if the current partial solution is valid.
+- Backtracking: If a partial solution is found to be invalid, the algorithm backtracks by removing the last added part of the solution and trying the next possibility.
+- Exploration of All Possibilities: The process continues recursively, exploring all possible paths, until a solution is found or all possibilities are exhausted.
+
+# Example
+
+- Word Search
+
+  Given a 2D grid of characters and a word, determine if the word exists in the grid. The word can be constructed from letters of sequentially adjacent cells, where "adjacent" cells are horizontally or vertically neighboring. The same letter cell may not be used more than once.
+
+Algorithm for Solving the Word Search Problem with Backtracking:
+- Start at each cell: Attempt to find the word starting from each cell.
+- Check all Directions: From each cell, try all four possible directions (up, down, left, right).
+- Mark Visited Cells: Use a temporary marker to indicate cells that are part of the current path to avoid revisiting.
+- Backtrack: If a path does not lead to a solution, backtrack by unmarking the visited cell and trying the next possibility.
+
+```python
+def exist(board, word):
+    rows, cols = len(board), len(board[0])
+
+    def backtrack(r, c, suffix):
+        # An empty suffix means every character has been matched
+        if not suffix:
+            return True
+
+        if r < 0 or r >= rows or c < 0 or c >= cols or board[r][c] != suffix[0]:
+            return False
+
+        # Mark the cell as visited by replacing its character with a placeholder
+        temp = board[r][c]
+        board[r][c] = '#'
+        ret = False
+
+        # Explore the four possible directions
+        for row_offset, col_offset in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
+            ret = backtrack(r + row_offset, c + col_offset, suffix[1:])
+            if ret:
+                break
+
+        # Restore the cell's original value
+        board[r][c] = temp
+        return ret
+
+    for row in range(rows):
+        for col in range(cols):
+            if backtrack(row, col, word):
+                return True
+
+    return False
+
+# Test case
+board = [
+    ['A','B','C','E'],
+    ['S','F','C','S'],
+    ['A','D','E','E']
+]
+word = "ABCES"
+print(exist(board, word))  # Output: True
+```
+
+
diff --git a/contrib/ds-algorithms/searching-algorithms.md b/contrib/ds-algorithms/searching-algorithms.md
new file mode 100644
index 0000000..78b86d1
--- /dev/null
+++ b/contrib/ds-algorithms/searching-algorithms.md
@@ -0,0 +1,161 @@
+# Searching Algorithms
+
+Searching algorithms are techniques used to locate specific items within a collection of data. These algorithms are fundamental in computer science and are employed in various applications, from databases to web search engines.
+
+## Real Life Examples of Searching
+- Searching for a word in a dictionary
+- Searching for a specific book in a library
+- Searching for a contact in your phone's address book
+- Searching for a file on your computer, etc.
+
+# Some common searching techniques
+
+# 1. Linear Search
+
+Linear search, also known as sequential search, is a straightforward searching algorithm that checks each element in a collection until the target element is found or the entire collection has been traversed. It is simple to implement but becomes inefficient for large datasets.
+
+**Algorithm Overview:**
+- **Sequential Checking:** The algorithm iterates through each element in the collection, starting from the first element.
+- **Comparing Elements:** At each iteration, it compares the current element with the target element.
+- **Finding the Target:** If the current element matches the target, the search terminates, and the index of the element is returned.
+- **Completing the Search:** If the entire collection is traversed without finding the target, the algorithm indicates that the element is not present.
+
+## Linear Search Code in Python
+
+```python
+def linear_search(arr, target):
+    # Check each element in turn until the target is found
+    for i in range(len(arr)):
+        if arr[i] == target:
+            return i
+    return -1
+
+arr = [5, 3, 8, 1, 2]
+target = 8
+result = linear_search(arr, target)
+if result != -1:
+    print(f"Element {target} found at index {result}.")
+else:
+    print(f"Element {target} not found.")
+```
+
+## Complexity Analysis
+- **Time Complexity**: O(n)
+- **Space Complexity**: O(1)
+
+
+
+
+# 2. Binary Search
+
+Binary search is an efficient searching algorithm that works on sorted collections. It repeatedly divides the search interval in half until the target element is found or the interval is empty. Binary search is significantly faster than linear search but requires the collection to be sorted beforehand.
+
+**Algorithm Overview:**
+- **Initial State:** Binary search starts with the entire collection as the search interval.
+- **Divide and Conquer:** At each step, it calculates the middle element of the current interval and compares it with the target.
+- **Narrowing Down the Interval:** If the middle element is equal to the target, the search terminates successfully. Otherwise, it discards half of the search interval based on the comparison result.
+- **Repeating the Process:** The algorithm repeats this process on the remaining half of the interval until the target is found or the interval is empty.
+
+## Binary Search Code in Python (Iterative)
+
+```python
+def binary_search(arr, target):
+    low = 0
+    high = len(arr) - 1
+    while low <= high:
+        mid = (low + high) // 2
+        if arr[mid] == target:
+            return mid
+        elif arr[mid] < target:
+            low = mid + 1   # the target can only be in the upper half
+        else:
+            high = mid - 1  # the target can only be in the lower half
+    return -1
+
+arr = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
+target = 13
+result = binary_search(arr, target)
+if result != -1:
+    print(f"Element {target} found at index {result}.")
+else:
+    print(f"Element {target} not found.")
+```
+
+## Binary Search Code in Python (Recursive)
+
+```python
+def binary_search_recursive(arr, target, low, high):
+    if low <= high:
+        mid = (low + high) // 2
+        if arr[mid] == target:
+            return mid
+        elif arr[mid] < target:
+            return binary_search_recursive(arr, target, mid + 1, high)
+        else:
+            return binary_search_recursive(arr, target, low, mid - 1)
+    else:
+        return -1
+
+arr = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
+target = 13
+result = binary_search_recursive(arr, target, 0, len(arr) - 1)
+if result != -1:
+    print(f"Element {target} found at index {result}.")
+else:
+    print(f"Element {target} not found.")
+```
+
+## Complexity Analysis
+- **Time Complexity**: O(log n)
+- **Space Complexity**: O(1) (Iterative), O(log n) (Recursive)
+
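+In practice, Python's standard library provides the `bisect` module for exactly this kind of search on sorted sequences; a short sketch (the wrapper name is my own):
+
+```python
+import bisect
+
+def binary_search_bisect(arr, target):
+    # bisect_left returns the insertion point; check it actually holds target.
+    i = bisect.bisect_left(arr, target)
+    if i < len(arr) and arr[i] == target:
+        return i
+    return -1
+
+arr = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
+print(binary_search_bisect(arr, 13))  # 6
+```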
+
+
+
+# 3. Interpolation Search
+
+Interpolation search is an improved version of binary search, especially useful when the elements in the collection are uniformly distributed. Instead of always dividing the search interval in half, interpolation search estimates the position of the target element based on its value and the values of the endpoints of the search interval.
+
+**Algorithm Overview:**
+- **Estimating Position:** Interpolation search calculates an approximate position of the target element within the search interval based on its value and the values of the endpoints.
+- **Refining the Estimate:** It adjusts the estimated position based on whether the target value is likely to be closer to the beginning or end of the search interval.
+- **Updating the Interval:** Using the refined estimate, it narrows down the search interval iteratively until the target is found or the interval becomes empty.
+
+## Interpolation Search Code in Python
+
+```python
+def interpolation_search(arr, target):
+    low = 0
+    high = len(arr) - 1
+    while low <= high and arr[low] <= target <= arr[high]:
+        # Guard against division by zero when the remaining range holds
+        # a single distinct value (this also covers low == high).
+        if arr[low] == arr[high]:
+            return low if arr[low] == target else -1
+        # Estimate the target's position from the value distribution
+        pos = low + ((target - arr[low]) * (high - low)) // (arr[high] - arr[low])
+        if arr[pos] == target:
+            return pos
+        elif arr[pos] < target:
+            low = pos + 1
+        else:
+            high = pos - 1
+    return -1
+
+arr = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
+target = 60
+result = interpolation_search(arr, target)
+if result != -1:
+    print(f"Element {target} found at index {result}.")
+else:
+    print(f"Element {target} not found.")
+```
+
+## Complexity Analysis
+- **Time Complexity**: O(log log n) on average for uniformly distributed data, O(n) in the worst case
+- **Space Complexity**: O(1)
+
+
+
From 8b11aa5ade20c585a25ff48e34e08af59742d34b Mon Sep 17 00:00:00 2001
From: Eshparsi
Date: Thu, 23 May 2024 07:34:09 +0530
Subject: [PATCH 7/7] .

---
 contrib/ds-algorithms/index.md | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/contrib/ds-algorithms/index.md b/contrib/ds-algorithms/index.md
index 347a9d0..706729e 100644
--- a/contrib/ds-algorithms/index.md
+++ b/contrib/ds-algorithms/index.md
@@ -1,5 +1,10 @@
 # List of sections
 
-- [Section title](filename.md)
-- [Sorting Algorithms](sorting-algorithms.md)
+- [Queues in Python](Queues.md)
 - [Graphs](graph.md)
+- [Sorting Algorithms](sorting-algorithms.md)
+- [Recursion and Backtracking](recursion.md)
+- [Divide and Conquer Algorithm](divide-and-conquer-algorithm.md)
+- [Searching Algorithms](searching-algorithms.md)
+- [Greedy Algorithms](greedy-algorithms.md)
+- [Dynamic Programming](dynamic-programming.md)
\ No newline at end of file