# Divide and Conquer Algorithms

Divide and Conquer is a paradigm for solving problems that involves breaking a problem into smaller sub-problems, solving the sub-problems recursively, and then combining their solutions to solve the original problem.

## Merge Sort

Merge Sort is a popular sorting algorithm that follows the divide and conquer strategy. It divides the input array into two halves, recursively sorts each half, and then merges the sorted halves.

**Algorithm Overview:**
- **Divide:** Split the unsorted list into two sublists of roughly half the size.
- **Conquer:** Recursively sort each sublist.
- **Combine:** Merge the sorted sublists back into one sorted list.

```python
def merge_sort(arr):
    if len(arr) > 1:
        # Divide: split the array into two halves
        mid = len(arr) // 2
        left_half = arr[:mid]
        right_half = arr[mid:]

        # Conquer: recursively sort each half
        merge_sort(left_half)
        merge_sort(right_half)

        # Combine: merge the two sorted halves back into arr
        i = j = k = 0
        while i < len(left_half) and j < len(right_half):
            if left_half[i] < right_half[j]:
                arr[k] = left_half[i]
                i += 1
            else:
                arr[k] = right_half[j]
                j += 1
            k += 1

        # Copy any remaining elements of the left half
        while i < len(left_half):
            arr[k] = left_half[i]
            i += 1
            k += 1

        # Copy any remaining elements of the right half
        while j < len(right_half):
            arr[k] = right_half[j]
            j += 1
            k += 1

arr = [12, 11, 13, 5, 6, 7]
merge_sort(arr)
print("Sorted array:", arr)
```

## Complexity Analysis
- **Time Complexity:** O(n log n) in all cases
- **Space Complexity:** O(n) additional space for the merge operation
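
The version above sorts in place. For comparison, here is a minimal sketch of a variant that returns a new sorted list instead of mutating its input; the function names `merge` and `merge_sort_copy` are our own choices for illustration, not part of any library.

```python
def merge(left, right):
    # Merge two already-sorted lists into a new sorted list
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def merge_sort_copy(arr):
    # Base case: a list of length 0 or 1 is already sorted
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    return merge(merge_sort_copy(arr[:mid]), merge_sort_copy(arr[mid:]))

print(merge_sort_copy([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```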
---
# Dynamic Programming

Dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems and solving each subproblem only once. It stores the solutions to subproblems to avoid redundant computations, making it particularly useful for optimization problems where the solution can be obtained by combining solutions to smaller subproblems.

## Real-Life Examples of Dynamic Programming
- **Fibonacci Sequence:** Computing the nth Fibonacci number efficiently.
- **Shortest Path:** Finding the shortest path in a graph from a source to a destination.
- **String Edit Distance:** Calculating the minimum number of operations required to transform one string into another.
- **Knapsack Problem:** Maximizing the value of items in a knapsack without exceeding its weight capacity.

# Some Common Dynamic Programming Techniques

# 1. Fibonacci Sequence

The Fibonacci sequence is a classic example used to illustrate dynamic programming. It is a series of numbers in which each number is the sum of the two preceding ones, usually starting with 0 and 1.

**Algorithm Overview:**
- **Base Cases:** The first two numbers in the Fibonacci sequence are defined as 0 and 1.
- **Memoization:** Store the results of previously computed Fibonacci numbers to avoid redundant computations.
- **Recurrence Relation:** Compute each Fibonacci number by adding the two preceding numbers.

## Fibonacci Sequence Code in Python (Top-Down Approach with Memoization)

```python
def fibonacci(n, memo=None):
    # Use a fresh memo dictionary per top-level call; a mutable default
    # argument would be shared across calls, a common Python pitfall.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]

n = 10
print(f"The {n}th Fibonacci number is: {fibonacci(n)}.")
```

## Fibonacci Sequence Code in Python (Bottom-Up Approach)

```python
def fibonacci(n):
    # Build the table of Fibonacci numbers from the bottom up
    fib = [0, 1]
    for i in range(2, n + 1):
        fib.append(fib[i - 1] + fib[i - 2])
    return fib[n]

n = 10
print(f"The {n}th Fibonacci number is: {fibonacci(n)}.")
```

## Complexity Analysis
- **Time Complexity**: O(n) for both approaches
- **Space Complexity**: O(n) for both approaches as written (the memoization dictionary and the table, respectively); the bottom-up version can be reduced to O(1) by keeping only the last two values
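
As a quick illustration of that space optimization, here is a minimal constant-space sketch (the function name is our own) that keeps only the two most recent values:

```python
def fibonacci_constant_space(n):
    # Only the last two Fibonacci numbers are needed at any point
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev

print(fibonacci_constant_space(10))  # 55
```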

</br>
<hr>
</br>

# 2. Longest Common Subsequence

The longest common subsequence (LCS) problem asks for the longest subsequence common to two sequences. A subsequence is a sequence that appears in the same relative order but is not necessarily contiguous.

**Algorithm Overview:**
- **Base Cases:** If one of the sequences is empty, the LCS is empty.
- **Memoization:** Store the results of previously computed LCS lengths to avoid redundant computations.
- **Recurrence Relation:** Compute the LCS length by comparing characters of the sequences and making decisions based on whether they match.

## Longest Common Subsequence Code in Python (Top-Down Approach with Memoization)

```python
def longest_common_subsequence(X, Y, m, n, memo=None):
    # A fresh memo per top-level call avoids stale entries when the
    # function is reused with different input strings.
    if memo is None:
        memo = {}
    if (m, n) in memo:
        return memo[(m, n)]
    if m == 0 or n == 0:
        return 0
    if X[m - 1] == Y[n - 1]:
        # Last characters match: extend the LCS of the two prefixes
        memo[(m, n)] = 1 + longest_common_subsequence(X, Y, m - 1, n - 1, memo)
    else:
        # Otherwise take the better of dropping a character from either sequence
        memo[(m, n)] = max(longest_common_subsequence(X, Y, m, n - 1, memo),
                           longest_common_subsequence(X, Y, m - 1, n, memo))
    return memo[(m, n)]

X = "AGGTAB"
Y = "GXTXAYB"
print("Length of Longest Common Subsequence:", longest_common_subsequence(X, Y, len(X), len(Y)))
```

## Complexity Analysis
- **Time Complexity**: O(m * n) for the top-down approach, where m and n are the lengths of the input sequences
- **Space Complexity**: O(m * n) for the memoization table
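
For comparison, a bottom-up (tabulation) version can be sketched as follows. It fills the same table iteratively and also lets us recover the subsequence itself, not just its length; the function name and the reconstruction walk are our own additions.

```python
def lcs_bottom_up(X, Y):
    m, n = len(X), len(Y)
    # dp[i][j] holds the LCS length of X[:i] and Y[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])

    # Walk back through the table to reconstruct one LCS
    lcs = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            lcs.append(X[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return dp[m][n], "".join(reversed(lcs))

print(lcs_bottom_up("AGGTAB", "GXTXAYB"))  # (4, 'GTAB')
```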

</br>
<hr>
</br>

# 3. 0-1 Knapsack Problem

The 0-1 knapsack problem is a classic optimization problem in which the goal is to maximize the total value of items selected while keeping the total weight within a specified limit. Each item is either taken whole or left behind (hence "0-1").

**Algorithm Overview:**
- **Base Cases:** If the capacity of the knapsack is 0 or there are no items to select, the total value is 0.
- **Memoization:** Store the results of previously computed subproblems to avoid redundant computations.
- **Recurrence Relation:** Compute the maximum value by considering whether to include the current item or not.

## 0-1 Knapsack Problem Code in Python (Top-Down Approach with Memoization)

```python
def knapsack(weights, values, capacity, n, memo=None):
    # A fresh memo per top-level call avoids stale entries across
    # different weight/value inputs.
    if memo is None:
        memo = {}
    if (capacity, n) in memo:
        return memo[(capacity, n)]
    if n == 0 or capacity == 0:
        return 0
    if weights[n - 1] > capacity:
        # Item n-1 does not fit: skip it
        memo[(capacity, n)] = knapsack(weights, values, capacity, n - 1, memo)
    else:
        # Take the better of including or excluding item n-1
        memo[(capacity, n)] = max(values[n - 1] + knapsack(weights, values, capacity - weights[n - 1], n - 1, memo),
                                  knapsack(weights, values, capacity, n - 1, memo))
    return memo[(capacity, n)]

weights = [10, 20, 30]
values = [60, 100, 120]
capacity = 50
n = len(weights)
print("Maximum value that can be obtained:", knapsack(weights, values, capacity, n))
```

## Complexity Analysis
- **Time Complexity**: O(n * W) for the top-down approach, where n is the number of items and W is the capacity of the knapsack
- **Space Complexity**: O(n * W) for the memoization table
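
The table can also be filled bottom-up. A common refinement, sketched below under our own function name, keeps only a single row of size W + 1, iterating capacities in reverse so each item is used at most once:

```python
def knapsack_bottom_up(weights, values, capacity):
    # dp[c] holds the best value achievable with capacity c
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is counted at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack_bottom_up([10, 20, 30], [60, 100, 120], 50))  # 220
```

This drops the space cost from O(n * W) to O(W).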

</br>
<hr>
</br>
# Greedy Algorithms

Greedy algorithms are simple, intuitive algorithms that make a sequence of choices, at each step picking the locally best option in the hope of reaching a global optimum. They are called "greedy" because they choose the most advantageous option at each step without considering future consequences. Despite their simplicity, greedy algorithms are powerful tools for solving optimization problems, especially when the problem exhibits the greedy-choice property.

## Real-Life Examples of Greedy Algorithms
- **Coin Change:** Finding the minimum number of coins to make a certain amount of change.
- **Job Scheduling:** Assigning tasks to machines to minimize completion time.
- **Huffman Coding:** Constructing an optimal prefix-free binary code for data compression.
- **Fractional Knapsack:** Selecting items to maximize the value within a weight limit.

# Some Common Greedy Algorithms

# 1. Coin Change Problem

The coin change problem is a classic example of a greedy algorithm. Given a set of coin denominations and a target amount, the objective is to find the minimum number of coins required to make up that amount. Note that the greedy strategy is only guaranteed to be optimal for canonical coin systems such as [1, 5, 10, 25]; a counterexample for arbitrary denominations follows the complexity analysis below.

**Algorithm Overview:**
- **Greedy Strategy:** At each step, the algorithm selects the largest denomination coin that is less than or equal to the remaining amount.
- **Repeat Until Amount is Zero:** The process continues until the remaining amount becomes zero.

## Coin Change Code in Python

```python
def coin_change(coins, amount):
    # Consider the largest denominations first
    coins.sort(reverse=True)
    num_coins = 0
    for coin in coins:
        # Take as many of this coin as possible
        num_coins += amount // coin
        amount %= coin
    if amount == 0:
        return num_coins
    else:
        return -1

coins = [1, 5, 10, 25]
amount = 63
result = coin_change(coins, amount)
if result != -1:
    print(f"Minimum number of coins required: {result}.")
else:
    print("It is not possible to make the amount with the given denominations.")
```

## Complexity Analysis
- **Time Complexity**: O(n log n) for sorting (if not pre-sorted), O(n) for iteration
- **Space Complexity**: O(1)
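
To see why the greedy choice can fail for non-canonical denominations, consider coins [1, 3, 4] and amount 6: greedy takes 4 + 1 + 1 (three coins), while the optimum is 3 + 3 (two coins). A short check, reusing the function above:

```python
print(coin_change([1, 3, 4], 6))  # 3, but the true minimum is 2 (3 + 3)
```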

</br>
<hr>
</br>

# 2. Activity Selection Problem

The activity selection problem involves selecting the maximum number of mutually compatible activities that can be performed by a single person or machine, assuming that a person can only work on one activity at a time.

**Algorithm Overview:**
- **Greedy Strategy:** Sort the activities based on their finish times.
- **Selecting Activities:** Iterate through the sorted activities, selecting each activity if it doesn't conflict with the previously selected ones.

## Activity Selection Code in Python

```python
def activity_selection(start, finish):
    # Assumes the activities are already sorted by finish time
    n = len(start)
    activities = []
    i = 0
    activities.append(i)
    for j in range(1, n):
        # Select activity j if it starts after the last selected one finishes
        if start[j] >= finish[i]:
            activities.append(j)
            i = j
    return activities

start = [1, 3, 0, 5, 8, 5]
finish = [2, 4, 6, 7, 9, 9]
selected_activities = activity_selection(start, finish)
print("Selected activities:", selected_activities)
```

## Complexity Analysis
- **Time Complexity**: O(n log n) for sorting (if not pre-sorted), O(n) for iteration
- **Space Complexity**: O(n) for the list of selected activities
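
The function above assumes the input is already sorted by finish time, as the example is. When it is not, a thin wrapper (a sketch under our own names) can sort the activities first and map the result back to the original indices:

```python
def activity_selection_unsorted(start, finish):
    # Order the activity indices by finish time
    order = sorted(range(len(start)), key=lambda k: finish[k])
    sorted_start = [start[k] for k in order]
    sorted_finish = [finish[k] for k in order]
    # Map the selected positions back to the original indices
    return [order[k] for k in activity_selection(sorted_start, sorted_finish)]

print(activity_selection_unsorted([3, 1, 5], [4, 2, 9]))  # [1, 0, 2]
```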

</br>
<hr>
</br>

# 3. Huffman Coding

Huffman coding is a method of lossless data compression that efficiently represents characters or symbols in a file. It uses variable-length codes, with shorter codes assigned to more frequent characters.

**Algorithm Overview:**
- **Frequency Analysis:** Determine the frequency of each character in the input data.
- **Building the Huffman Tree:** Construct a binary tree where each leaf node represents a character and the path to the leaf determines its code.
- **Assigning Codes:** Traverse the Huffman tree to assign codes to each character, with shorter codes for more frequent characters.

## Huffman Coding Code in Python

```python
from heapq import heappush, heappop, heapify
from collections import defaultdict

def huffman_coding(data):
    # Frequency analysis
    frequency = defaultdict(int)
    for char in data:
        frequency[char] += 1

    # Each heap entry is [weight, [symbol, code], [symbol, code], ...]
    heap = [[weight, [symbol, ""]] for symbol, weight in frequency.items()]
    heapify(heap)

    # Repeatedly merge the two lightest subtrees
    while len(heap) > 1:
        lo = heappop(heap)
        hi = heappop(heap)
        # Prefix '0' for the lighter subtree and '1' for the heavier one
        for pair in lo[1:]:
            pair[1] = '0' + pair[1]
        for pair in hi[1:]:
            pair[1] = '1' + pair[1]
        heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])

    # Sort the codes by length, then lexicographically
    return sorted(heappop(heap)[1:], key=lambda p: (len(p[-1]), p))

data = "Huffman coding is a greedy algorithm"
encoded_data = huffman_coding(data)
print("Huffman Codes:")
for symbol, code in encoded_data:
    print(f"{symbol}: {code}")
```

## Complexity Analysis
- **Time Complexity**: O(n log n) for heap operations, where n is the number of unique characters
- **Space Complexity**: O(n) for the heap
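
As a small usage example (our own addition, continuing the snippet above), the returned codes can be turned into a lookup table to encode the original string:

```python
code_map = {symbol: code for symbol, code in encoded_data}
encoded = "".join(code_map[char] for char in data)
print(f"Encoded length: {len(encoded)} bits vs {8 * len(data)} bits uncompressed")
```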

</br>
<hr>
</br>

# Introduction to Recursions

Recursion occurs when a function calls itself to solve smaller instances of the same problem until a specified condition is fulfilled. It is used for tasks that can be divided into smaller sub-tasks.

# How Recursion Works

To solve a problem using recursion we must define:
- Base condition: the condition under which the recursion ends.
- Recursive case: the part of the function that calls itself to solve a smaller instance of the problem.

**Steps of Recursion**

When a recursive function is called, the following sequence of events occurs:
- Function Call: The function is invoked with a specific argument.
- Base Condition Check: The function checks if the argument satisfies the base case.
- Recursive Call: If the base case is not met, the function performs some operations and makes a recursive call with a modified argument.
- Stack Management: Each recursive call is placed on the call stack. The stack keeps track of each function call, its argument, and the point to return to once the call completes.
- Unwinding the Stack: When the base case is eventually met, the function returns a value, and the stack starts unwinding, returning values to previous function calls until the initial call is resolved.

# What is Stack Overflow in Recursion

Stack overflow is an error that occurs when the call stack memory limit is exceeded. During execution, recursive calls are stored on the call stack while they wait for the deeper calls to complete. Without a base case, the function would call itself indefinitely, leading to a stack overflow.
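
In Python specifically, exceeding the interpreter's recursion limit raises a `RecursionError` rather than crashing the process. A minimal sketch:

```python
import sys

def no_base_case(n):
    # No base case: recurses until Python's recursion limit is hit
    return no_base_case(n + 1)

print("Recursion limit:", sys.getrecursionlimit())  # typically 1000
try:
    no_base_case(0)
except RecursionError as err:
    print("Caught:", err)  # maximum recursion depth exceeded
```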

# Example

**Factorial of a Number**

The factorial of a non-negative integer i is i multiplied by the factorial of (i - 1). The base case is i = 0, whose factorial is defined to be 1.

```python
def factorial(i):
    # Base case: 0! = 1
    if i == 0:
        return 1
    # Recursive case: i! = i * (i - 1)!
    else:
        return i * factorial(i - 1)

i = 6
print("Factorial of", i, "is:", factorial(i))  # Output: Factorial of 6 is: 720
```

# What is Backtracking

Backtracking is a recursive algorithmic technique used to solve problems by exploring all possible solutions and discarding those that do not meet the problem's constraints. It is particularly useful for problems involving combinations, permutations, and finding paths in a grid.

# How Backtracking Works

- Incremental Solution Building: Solutions are built one step at a time.
- Feasibility Check: At each step, a check is made to see if the current partial solution is valid.
- Backtracking: If a partial solution is found to be invalid, the algorithm backtracks by removing the last added part of the solution and trying the next possibility.
- Exploration of All Possibilities: The process continues recursively, exploring all possible paths, until a solution is found or all possibilities are exhausted.

# Example

**Word Search**

Given a 2D grid of characters and a word, determine if the word exists in the grid. The word can be constructed from letters of sequentially adjacent cells, where "adjacent" cells are horizontally or vertically neighboring. The same letter cell may not be used more than once.

Algorithm for solving the word search problem with backtracking:
- Start at each cell: Attempt to find the word starting from each cell.
- Check all directions: From each cell, try all four possible directions (up, down, left, right).
- Mark visited cells: Use a temporary marker to indicate cells that are part of the current path to avoid revisiting.
- Backtrack: If a path does not lead to a solution, backtrack by unmarking the visited cell and trying the next possibility.

```python
def exist(board, word):
    rows, cols = len(board), len(board[0])

    def backtrack(r, c, suffix):
        # The whole word has been matched
        if not suffix:
            return True

        # Out of bounds, or the cell does not match the next character
        if r < 0 or r >= rows or c < 0 or c >= cols or board[r][c] != suffix[0]:
            return False

        # Mark the cell as visited by replacing its character with a placeholder
        ret = False
        board[r][c], temp = '#', board[r][c]

        # Explore the four possible directions
        for row_offset, col_offset in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
            ret = backtrack(r + row_offset, c + col_offset, suffix[1:])
            if ret:
                break

        # Restore the cell's original value
        board[r][c] = temp
        return ret

    for row in range(rows):
        for col in range(cols):
            if backtrack(row, col, word):
                return True

    return False

# Test case
board = [
    ['A', 'B', 'C', 'E'],
    ['S', 'F', 'C', 'S'],
    ['A', 'D', 'E', 'E']
]
word = "ABCES"
print(exist(board, word))  # Output: True
```

</br>
<hr>
</br>
# Searching Algorithms

Searching algorithms are techniques used to locate specific items within a collection of data. These algorithms are fundamental in computer science and are employed in various applications, from databases to web search engines.

## Real-Life Examples of Searching
- Searching for a word in a dictionary
- Searching for a specific book in a library
- Searching for a contact in your phone's address book
- Searching for a file on your computer, etc.

# Some Common Searching Techniques

# 1. Linear Search

Linear search, also known as sequential search, is a straightforward searching algorithm that checks each element in a collection until the target element is found or the entire collection has been traversed. It is simple to implement but becomes inefficient for large datasets.

**Algorithm Overview:**
- **Sequential Checking:** The algorithm iterates through each element in the collection, starting from the first element.
- **Comparing Elements:** At each iteration, it compares the current element with the target element.
- **Finding the Target:** If the current element matches the target, the search terminates and the index of the element is returned.
- **Completing the Search:** If the entire collection is traversed without finding the target, the algorithm indicates that the element is not present.

## Linear Search Code in Python

```python
def linear_search(arr, target):
    # Check each element in turn until the target is found
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1

arr = [5, 3, 8, 1, 2]
target = 8
result = linear_search(arr, target)
if result != -1:
    print(f"Element {target} found at index {result}.")
else:
    print(f"Element {target} not found.")
```

## Complexity Analysis
- **Time Complexity**: O(n)
- **Space Complexity**: O(1)

</br>
<hr>
</br>

# 2. Binary Search

Binary search is an efficient searching algorithm that works on sorted collections. It repeatedly divides the search interval in half until the target element is found or the interval is empty. Binary search is significantly faster than linear search but requires the collection to be sorted beforehand.

**Algorithm Overview:**
- **Initial State:** Binary search starts with the entire collection as the search interval.
- **Divide and Conquer:** At each step, it calculates the middle element of the current interval and compares it with the target.
- **Narrowing Down the Interval:** If the middle element is equal to the target, the search terminates successfully. Otherwise, it discards half of the search interval based on the comparison result.
- **Repeating the Process:** The algorithm repeats this process on the remaining half of the interval until the target is found or the interval is empty.

## Binary Search Code in Python (Iterative)

```python
def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            # Target is in the upper half
            low = mid + 1
        else:
            # Target is in the lower half
            high = mid - 1
    return -1

arr = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
target = 13
result = binary_search(arr, target)
if result != -1:
    print(f"Element {target} found at index {result}.")
else:
    print(f"Element {target} not found.")
```

## Binary Search Code in Python (Recursive)

```python
def binary_search_recursive(arr, target, low, high):
    if low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            return binary_search_recursive(arr, target, mid + 1, high)
        else:
            return binary_search_recursive(arr, target, low, mid - 1)
    else:
        return -1

arr = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
target = 13
result = binary_search_recursive(arr, target, 0, len(arr) - 1)
if result != -1:
    print(f"Element {target} found at index {result}.")
else:
    print(f"Element {target} not found.")
```

## Complexity Analysis
- **Time Complexity**: O(log n)
- **Space Complexity**: O(1) (Iterative), O(log n) (Recursive)
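
In practice, Python's standard library already provides binary search via the `bisect` module; `bisect_left` returns the insertion point, which equals the index when the target is present:

```python
import bisect

arr = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
target = 13
i = bisect.bisect_left(arr, target)
if i < len(arr) and arr[i] == target:
    print(f"Element {target} found at index {i}.")  # index 6
else:
    print(f"Element {target} not found.")
```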

</br>
<hr>
</br>

# 3. Interpolation Search

Interpolation search is an improved version of binary search that is especially useful when the elements in the collection are uniformly distributed. Instead of always dividing the search interval in half, interpolation search estimates the position of the target element based on its value and the values of the endpoints of the search interval.

**Algorithm Overview:**
- **Estimating Position:** Interpolation search calculates an approximate position of the target element within the search interval based on its value and the values of the endpoints.
- **Refining the Estimate:** It adjusts the estimated position based on whether the target value is likely to be closer to the beginning or end of the search interval.
- **Updating the Interval:** Using the refined estimate, it narrows down the search interval iteratively until the target is found or the interval becomes empty.

## Interpolation Search Code in Python

```python
def interpolation_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high and arr[low] <= target <= arr[high]:
        if low == high:
            if arr[low] == target:
                return low
            return -1
        # Estimate the target's position by linear interpolation
        pos = low + ((target - arr[low]) * (high - low)) // (arr[high] - arr[low])
        if arr[pos] == target:
            return pos
        elif arr[pos] < target:
            low = pos + 1
        else:
            high = pos - 1
    return -1

arr = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
target = 60
result = interpolation_search(arr, target)
if result != -1:
    print(f"Element {target} found at index {result}.")
else:
    print(f"Element {target} not found.")
```

## Complexity Analysis
- **Time Complexity**: O(log log n) on average for uniformly distributed data, O(n) in the worst case
- **Space Complexity**: O(1)

</br>
<hr>
</br>