# Divide and Conquer Algorithms

Divide and Conquer is a paradigm for solving problems that involves breaking a problem into smaller sub-problems, solving the sub-problems recursively, and then combining their solutions to solve the original problem.

## Merge Sort

Merge Sort is a popular sorting algorithm that follows the divide and conquer strategy. It divides the input array into two halves, recursively sorts the halves, and then merges them.

**Algorithm Overview:**
- **Divide:** Split the unsorted list into two sublists of roughly equal size.
- **Conquer:** Recursively sort each sublist.
- **Combine:** Merge the sorted sublists back into one sorted list.

```python
def merge_sort(arr):
    if len(arr) > 1:
        # Divide: split the array into two halves
        mid = len(arr) // 2
        left_half = arr[:mid]
        right_half = arr[mid:]

        # Conquer: recursively sort each half
        merge_sort(left_half)
        merge_sort(right_half)

        # Combine: merge the sorted halves back into arr
        i = j = k = 0
        while i < len(left_half) and j < len(right_half):
            if left_half[i] < right_half[j]:
                arr[k] = left_half[i]
                i += 1
            else:
                arr[k] = right_half[j]
                j += 1
            k += 1

        # Copy any remaining elements of the left half
        while i < len(left_half):
            arr[k] = left_half[i]
            i += 1
            k += 1

        # Copy any remaining elements of the right half
        while j < len(right_half):
            arr[k] = right_half[j]
            j += 1
            k += 1

arr = [12, 11, 13, 5, 6, 7]
merge_sort(arr)
print("Sorted array:", arr)
```

## Complexity Analysis
- **Time Complexity:** O(n log n) in all cases
- **Space Complexity:** O(n) additional space for the merge operation
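
The O(n log n) bound follows from the running-time recurrence: each call splits the array in two and does O(n) work to merge, giving T(n) = 2T(n/2) + O(n), which resolves to O(n log n).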

---
# Dynamic Programming

Dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems and solving each subproblem only once. It stores the solutions to subproblems to avoid redundant computations, making it particularly useful for optimization problems where the solution can be obtained by combining solutions to smaller subproblems.

## Real-Life Examples of Dynamic Programming
- **Fibonacci Sequence:** Computing the nth Fibonacci number efficiently.
- **Shortest Path:** Finding the shortest path in a graph from a source to a destination.
- **String Edit Distance:** Calculating the minimum number of operations required to transform one string into another.
- **Knapsack Problem:** Maximizing the value of items in a knapsack without exceeding its weight capacity.

# Some Common Dynamic Programming Techniques

# 1. Fibonacci Sequence

The Fibonacci sequence is a classic example used to illustrate dynamic programming. It is a series of numbers in which each number is the sum of the two preceding ones, usually starting with 0 and 1.

**Algorithm Overview:**
- **Base Cases:** The first two numbers in the Fibonacci sequence are defined as 0 and 1.
- **Memoization:** Store the results of previously computed Fibonacci numbers to avoid redundant computations.
- **Recurrence Relation:** Compute each Fibonacci number by adding the two preceding numbers.

## Fibonacci Sequence Code in Python (Top-Down Approach with Memoization)

```python
def fibonacci(n, memo=None):
    # Create a fresh memo dictionary per top-level call; a mutable
    # default argument would be shared across separate calls.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]

n = 10
print(f"The {n}th Fibonacci number is: {fibonacci(n)}.")
```

## Fibonacci Sequence Code in Python (Bottom-Up Approach)

```python
def fibonacci(n):
    # Build the sequence iteratively from the base cases up to n
    fib = [0, 1]
    for i in range(2, n + 1):
        fib.append(fib[i - 1] + fib[i - 2])
    return fib[n]

n = 10
print(f"The {n}th Fibonacci number is: {fibonacci(n)}.")
```

## Complexity Analysis
- **Time Complexity**: O(n) for both approaches
- **Space Complexity**: O(n) for the top-down approach (due to memoization) and for the list built by the bottom-up approach shown; the bottom-up approach drops to O(1) if only the last two values are kept
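
A minimal sketch of that O(1)-space reduction, keeping only the two most recent values (the function name is illustrative):

```python
def fibonacci_constant_space(n):
    a, b = 0, 1  # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b  # slide the window forward one step
    return a

print(fibonacci_constant_space(10))  # 55
```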

<br>
<hr>
<br>

# 2. Longest Common Subsequence

The longest common subsequence (LCS) problem asks for the longest subsequence common to two sequences. A subsequence is a sequence that appears in the same relative order but is not necessarily contiguous.

**Algorithm Overview:**
- **Base Cases:** If one of the sequences is empty, the LCS is empty.
- **Memoization:** Store the results of previously computed LCS lengths to avoid redundant computations.
- **Recurrence Relation:** Compute the LCS length by comparing characters of the sequences and making decisions based on whether they match.

## Longest Common Subsequence Code in Python (Top-Down Approach with Memoization)

```python
def longest_common_subsequence(X, Y, m, n, memo=None):
    # Create a fresh memo dictionary per top-level call; a shared mutable
    # default would return stale results for different input sequences.
    if memo is None:
        memo = {}
    if (m, n) in memo:
        return memo[(m, n)]
    if m == 0 or n == 0:
        return 0
    if X[m - 1] == Y[n - 1]:
        # Last characters match: extend the LCS of the shorter prefixes
        memo[(m, n)] = 1 + longest_common_subsequence(X, Y, m - 1, n - 1, memo)
    else:
        # Otherwise take the better of dropping a character from X or from Y
        memo[(m, n)] = max(longest_common_subsequence(X, Y, m, n - 1, memo),
                           longest_common_subsequence(X, Y, m - 1, n, memo))
    return memo[(m, n)]

X = "AGGTAB"
Y = "GXTXAYB"
print("Length of Longest Common Subsequence:", longest_common_subsequence(X, Y, len(X), len(Y)))
```

## Complexity Analysis
- **Time Complexity**: O(m * n) for the top-down approach, where m and n are the lengths of the input sequences
- **Space Complexity**: O(m * n) for the memoization table
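
The same recurrence can also be filled in iteratively. A bottom-up sketch (the function name is illustrative):

```python
def lcs_bottom_up(X, Y):
    m, n = len(X), len(Y)
    # dp[i][j] = length of the LCS of X[:i] and Y[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_bottom_up("AGGTAB", "GXTXAYB"))  # 4
```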

<br>
<hr>
<br>

# 3. 0-1 Knapsack Problem

The 0-1 knapsack problem is a classic optimization problem where the goal is to maximize the total value of items selected while keeping the total weight within a specified limit; each item is either taken whole or left out.

**Algorithm Overview:**
- **Base Cases:** If the capacity of the knapsack is 0 or there are no items to select, the total value is 0.
- **Memoization:** Store the results of previously computed subproblems to avoid redundant computations.
- **Recurrence Relation:** Compute the maximum value by considering whether to include the current item or not.

## 0-1 Knapsack Problem Code in Python (Top-Down Approach with Memoization)

```python
def knapsack(weights, values, capacity, n, memo=None):
    # Create a fresh memo dictionary per top-level call
    if memo is None:
        memo = {}
    if (capacity, n) in memo:
        return memo[(capacity, n)]
    if n == 0 or capacity == 0:
        return 0
    if weights[n - 1] > capacity:
        # Item n-1 does not fit: skip it
        memo[(capacity, n)] = knapsack(weights, values, capacity, n - 1, memo)
    else:
        # Take the better of including or excluding item n-1
        memo[(capacity, n)] = max(
            values[n - 1] + knapsack(weights, values, capacity - weights[n - 1], n - 1, memo),
            knapsack(weights, values, capacity, n - 1, memo))
    return memo[(capacity, n)]

weights = [10, 20, 30]
values = [60, 100, 120]
capacity = 50
n = len(weights)
print("Maximum value that can be obtained:", knapsack(weights, values, capacity, n))
```

## Complexity Analysis
- **Time Complexity**: O(n * W) for the top-down approach, where n is the number of items and W is the capacity of the knapsack
- **Space Complexity**: O(n * W) for the memoization table
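
The same problem can be solved bottom-up with a one-dimensional table indexed by capacity. A sketch (the function name is illustrative); iterating capacities downward ensures each item is used at most once:

```python
def knapsack_bottom_up(weights, values, capacity):
    # dp[c] = best value achievable with capacity c using the items seen so far
    dp = [0] * (capacity + 1)
    for i in range(len(weights)):
        for c in range(capacity, weights[i] - 1, -1):
            dp[c] = max(dp[c], values[i] + dp[c - weights[i]])
    return dp[capacity]

print(knapsack_bottom_up([10, 20, 30], [60, 100, 120], 50))  # 220
```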

<br>
<hr>
<br>
# Greedy Algorithms

Greedy algorithms are simple, intuitive algorithms that make a sequence of choices in the hope of reaching a global optimum. They are called "greedy" because at each step they choose the most advantageous option without considering future consequences. Despite their simplicity, greedy algorithms are powerful tools for solving optimization problems, especially when the problem exhibits the greedy-choice property.

## Real-Life Examples of Greedy Algorithms
- **Coin Change:** Finding the minimum number of coins to make a certain amount of change.
- **Job Scheduling:** Assigning tasks to machines to minimize completion time.
- **Huffman Coding:** Constructing an optimal prefix-free binary code for data compression.
- **Fractional Knapsack:** Selecting items to maximize the value within a weight limit.

# Some Common Greedy Algorithms

# 1. Coin Change Problem

The coin change problem is a classic example of a greedy algorithm. Given a set of coin denominations and a target amount, the objective is to find the minimum number of coins required to make up that amount. The greedy strategy is optimal for canonical coin systems such as the US denominations used below, but it can fail for arbitrary denominations; see the counterexample after the complexity analysis.

**Algorithm Overview:**
- **Greedy Strategy:** At each step, the algorithm selects the largest denomination coin that is less than or equal to the remaining amount.
- **Repeat Until Amount is Zero:** The process continues until the remaining amount becomes zero.

## Coin Change Code in Python

```python
def coin_change(coins, amount):
    # Try denominations from largest to smallest
    coins.sort(reverse=True)
    num_coins = 0
    for coin in coins:
        num_coins += amount // coin  # take as many of this coin as fit
        amount %= coin
    if amount == 0:
        return num_coins
    else:
        return -1

coins = [1, 5, 10, 25]
amount = 63
result = coin_change(coins, amount)
if result != -1:
    print(f"Minimum number of coins required: {result}.")
else:
    print("It is not possible to make the amount with the given denominations.")
```

## Complexity Analysis
- **Time Complexity**: O(n log n) for sorting (if not pre-sorted), O(n) for iteration
- **Space Complexity**: O(1)
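
For a non-canonical system the greedy choice is not optimal, which is why the result should be treated as a heuristic for arbitrary denominations:

```python
# Greedy picks 4 + 1 + 1 (3 coins), but 3 + 3 needs only 2
print(coin_change([1, 3, 4], 6))  # prints 3, not the optimum 2
```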

<br>
<hr>
<br>

# 2. Activity Selection Problem

The activity selection problem involves selecting the maximum number of mutually compatible activities that can be performed by a single person or machine, assuming that a person can only work on one activity at a time.

**Algorithm Overview:**
- **Greedy Strategy:** Sort the activities by finish time (the code below assumes the input is already sorted; see the sketch after the complexity analysis for the unsorted case).
- **Selecting Activities:** Iterate through the sorted activities, selecting each activity if it doesn't conflict with the previously selected ones.

## Activity Selection Code in Python

```python
def activity_selection(start, finish):
    # Assumes activities are already sorted by finish time
    n = len(start)
    activities = []
    i = 0
    activities.append(i)  # the first activity to finish is always selected
    for j in range(1, n):
        # Select activity j if it starts after the last selected one finishes
        if start[j] >= finish[i]:
            activities.append(j)
            i = j
    return activities

start = [1, 3, 0, 5, 8, 5]
finish = [2, 4, 6, 7, 9, 9]
selected_activities = activity_selection(start, finish)
print("Selected activities:", selected_activities)
```

## Complexity Analysis
- **Time Complexity**: O(n log n) for sorting (if not pre-sorted), O(n) for iteration
- **Space Complexity**: O(1)
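
If the input is not sorted by finish time, the sort can be folded into the function. A sketch (the function name is illustrative):

```python
def activity_selection_unsorted(start, finish):
    # Process activity indices in order of finish time
    order = sorted(range(len(finish)), key=lambda k: finish[k])
    selected = []
    last_finish = float('-inf')
    for k in order:
        if start[k] >= last_finish:
            selected.append(k)
            last_finish = finish[k]
    return selected

print(activity_selection_unsorted([1, 3, 0, 5, 8, 5], [2, 4, 6, 7, 9, 9]))  # [0, 1, 3, 4]
```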

<br>
<hr>
<br>

# 3. Huffman Coding

Huffman coding is a method of lossless data compression that efficiently represents characters or symbols in a file. It uses variable-length codes to represent characters, with shorter codes assigned to more frequent characters.

**Algorithm Overview:**
- **Frequency Analysis:** Determine the frequency of each character in the input data.
- **Building the Huffman Tree:** Construct a binary tree where each leaf node represents a character and the path to the leaf node determines its code.
- **Assigning Codes:** Traverse the Huffman tree to assign codes to each character, with shorter codes for more frequent characters.

## Huffman Coding Code in Python

```python
from heapq import heappush, heappop, heapify
from collections import defaultdict

def huffman_coding(data):
    # Count the frequency of each character
    frequency = defaultdict(int)
    for char in data:
        frequency[char] += 1

    # Each heap entry is [weight, [symbol, code], ...]
    heap = [[weight, [symbol, ""]] for symbol, weight in frequency.items()]
    heapify(heap)

    # Repeatedly merge the two lightest subtrees
    while len(heap) > 1:
        lo = heappop(heap)
        hi = heappop(heap)
        for pair in lo[1:]:
            pair[1] = '0' + pair[1]  # left branch
        for pair in hi[1:]:
            pair[1] = '1' + pair[1]  # right branch
        heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])

    # Return (symbol, code) pairs, shortest codes first
    return sorted(heappop(heap)[1:], key=lambda p: (len(p[-1]), p))

data = "Huffman coding is a greedy algorithm"
encoded_data = huffman_coding(data)
print("Huffman Codes:")
for symbol, code in encoded_data:
    print(f"{symbol}: {code}")
```

## Complexity Analysis
- **Time Complexity**: O(n log n) for heap operations, where n is the number of unique characters
- **Space Complexity**: O(n) for the heap
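
As a usage sketch, the returned (symbol, code) pairs can be turned into a lookup table to encode the original data and compare sizes:

```python
codes = dict(encoded_data)
encoded = "".join(codes[char] for char in data)
print(f"Original size: {len(data) * 8} bits, encoded size: {len(encoded)} bits")
```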

<br>
<hr>
<br>
- [Section title](filename.md)
- [Sorting Algorithms](sorting-algorithms.md)
- [Recursion and Backtracking](recursion.md)
- [Divide and Conquer Algorithm](divide-and-conquer-algorithm.md)
- [Searching Algorithms](searching-algorithms.md)
- [Greedy Algorithms](greedy-algorithms.md)
- [Dynamic Programming](dynamic-programming.md)

# Searching Algorithms

Searching algorithms are techniques used to locate specific items within a collection of data. These algorithms are fundamental in computer science and are employed in various applications, from databases to web search engines.

## Real-Life Examples of Searching
- Searching for a word in a dictionary
- Searching for a specific book in a library
- Searching for a contact in your phone's address book
- Searching for a file on your computer

# Some Common Searching Techniques

# 1. Linear Search

Linear search, also known as sequential search, is a straightforward searching algorithm that checks each element in a collection until the target element is found or the entire collection has been traversed. It is simple to implement but becomes inefficient for large datasets.

**Algorithm Overview:**
- **Sequential Checking:** The algorithm iterates through each element in the collection, starting from the first element.
- **Comparing Elements:** At each iteration, it compares the current element with the target element.
- **Finding the Target:** If the current element matches the target, the search terminates, and the index of the element is returned.
- **Completing the Search:** If the entire collection is traversed without finding the target, the algorithm indicates that the element is not present.

## Linear Search Code in Python

```python
def linear_search(arr, target):
    # Check each element in order until the target is found
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1

arr = [5, 3, 8, 1, 2]
target = 8
result = linear_search(arr, target)
if result != -1:
    print(f"Element {target} found at index {result}.")
else:
    print(f"Element {target} not found.")
```

## Complexity Analysis
- **Time Complexity**: O(n)
- **Space Complexity**: O(1)

<br>
<hr>
<br>

# 2. Binary Search

Binary search is an efficient searching algorithm that works on sorted collections. It repeatedly divides the search interval in half until the target element is found or the interval is empty. Binary search is significantly faster than linear search but requires the collection to be sorted beforehand.

**Algorithm Overview:**
- **Initial State:** Binary search starts with the entire collection as the search interval.
- **Divide and Conquer:** At each step, it calculates the middle element of the current interval and compares it with the target.
- **Narrowing Down the Interval:** If the middle element is equal to the target, the search terminates successfully. Otherwise, it discards half of the search interval based on the comparison result.
- **Repeating the Process:** The algorithm repeats this process on the remaining half of the interval until the target is found or the interval is empty.

## Binary Search Code in Python (Iterative)

```python
def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2  # middle of the current interval
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1  # target lies in the upper half
        else:
            high = mid - 1  # target lies in the lower half
    return -1

arr = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
target = 13
result = binary_search(arr, target)
if result != -1:
    print(f"Element {target} found at index {result}.")
else:
    print(f"Element {target} not found.")
```

## Binary Search Code in Python (Recursive)

```python
def binary_search_recursive(arr, target, low, high):
    if low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            # Recurse on the upper half
            return binary_search_recursive(arr, target, mid + 1, high)
        else:
            # Recurse on the lower half
            return binary_search_recursive(arr, target, low, mid - 1)
    else:
        return -1

arr = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
target = 13
result = binary_search_recursive(arr, target, 0, len(arr) - 1)
if result != -1:
    print(f"Element {target} found at index {result}.")
else:
    print(f"Element {target} not found.")
```

## Complexity Analysis
- **Time Complexity**: O(log n)
- **Space Complexity**: O(1) (Iterative), O(log n) (Recursive)
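
In practice, Python's standard library implements the same interval-halving logic in the bisect module; a usage sketch:

```python
import bisect

arr = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
target = 13
i = bisect.bisect_left(arr, target)  # leftmost insertion point for target
if i < len(arr) and arr[i] == target:
    print(f"Element {target} found at index {i}.")
else:
    print(f"Element {target} not found.")
```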

<br>
<hr>
<br>

# 3. Interpolation Search

Interpolation search is an improved version of binary search, especially useful when the elements in the collection are uniformly distributed. Instead of always dividing the search interval in half, interpolation search estimates the position of the target element based on its value and the values of the endpoints of the search interval.

**Algorithm Overview:**
- **Estimating Position:** Interpolation search calculates an approximate position of the target element within the search interval based on its value and the values of the endpoints.
- **Refining the Estimate:** It adjusts the estimated position based on whether the target value is likely to be closer to the beginning or end of the search interval.
- **Updating the Interval:** Using the refined estimate, it narrows down the search interval iteratively until the target is found or the interval becomes empty.

## Interpolation Search Code in Python

```python
def interpolation_search(arr, target):
    low = 0
    high = len(arr) - 1
    # The target must lie within the value range of the current interval
    while low <= high and arr[low] <= target <= arr[high]:
        if low == high:
            if arr[low] == target:
                return low
            return -1
        # Estimate the target's position by linear interpolation
        pos = low + ((target - arr[low]) * (high - low)) // (arr[high] - arr[low])
        if arr[pos] == target:
            return pos
        elif arr[pos] < target:
            low = pos + 1
        else:
            high = pos - 1
    return -1

arr = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
target = 60
result = interpolation_search(arr, target)
if result != -1:
    print(f"Element {target} found at index {result}.")
else:
    print(f"Element {target} not found.")
```
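
For the example above the first probe already lands on the target: pos = 0 + ((60 - 10) * (9 - 0)) // (100 - 10) = (50 * 9) // 90 = 5, and arr[5] is 60.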

## Complexity Analysis
- **Time Complexity**: O(log log n) on average for uniformly distributed data, O(n) in the worst case
- **Space Complexity**: O(1)

<br>
<hr>
<br>