Ruby Data Structures and Algorithms — Ruby Deep Dive [04]
DSA with Ruby, handpicked problems: RubyPrograms on GitHub
Let’s dive into the data structures and algorithms provided by most programming languages, plus a few specific to Ruby, and see how easy they are to use when we follow the Ruby way.
Before We Begin
Big O complexity, also known as Big O notation, is a mathematical notation used in computer science to describe the performance or complexity of an algorithm. Specifically, it describes the worst-case scenario for the time or space an algorithm needs as the input size grows.
Key points about Big O complexity:
1. It expresses the upper bound of an algorithm’s growth rate.
2. It focuses on the dominant term as the input size approaches infinity.
3. Common Big O notations include:
 - O(1): Constant time
 - O(log n): Logarithmic time
 - O(n): Linear time
 - O(n log n): Linearithmic time
 - O(n²): Quadratic time
 - O(2ⁿ): Exponential time
4. It helps compare algorithms’ efficiency and scalability.
5. It ignores constants and lower-order terms.
Big O notation is crucial for analyzing and optimizing algorithms, especially when dealing with large datasets.
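To make this concrete, here is a small sketch (the names `linear_comparisons` and `binary_comparisons` are mine, purely illustrative) that counts how many comparisons a linear scan and a binary search each make over the same sorted array. The first grows with n, the second with log n.

```ruby
# Count comparisons made by a linear scan versus a binary search.
def linear_comparisons(arr, target)
  arr.each_with_index { |v, i| return i + 1 if v == target }
  arr.size
end

def binary_comparisons(arr, target)
  low, high, count = 0, arr.size - 1, 0
  while low <= high
    count += 1
    mid = (low + high) / 2
    return count if arr[mid] == target
    arr[mid] < target ? low = mid + 1 : high = mid - 1
  end
  count
end

arr = (1..1024).to_a
puts linear_comparisons(arr, 1000) # on the order of n comparisons
puts binary_comparisons(arr, 1000) # on the order of log2(n) comparisons
```

For n = 1024 the linear scan makes about a thousand comparisons while the binary search needs roughly ten; that gap is exactly what O(n) versus O(log n) describes.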
Why Data Structures and Algorithms (DSA)?
Frameworks and libraries are built on top of data structures and algorithms, and they keep getting updated. The only foolproof plan to stay updated is to understand the foundational principles upon which these updates and new tools are based. This is why learning Data Structures and Algorithms is crucial.
Characteristics of the Best Code:
- Readable: Easy to understand and maintain.
- Memory Efficient: Uses the least amount of memory necessary.
- Time Efficient: Executes as quickly as possible.
Important Data Structures:
- Arrays
- Hashes
- Sets
- Strings
- Symbols
- Ranges
- Structs
- OpenStructs
- Queues
- Stacks
- Linked Lists
- Trees
- Graphs
- Heaps
Important Algorithms:
- Sorting
- Dynamic Programming
- BFS (Breadth-First Search) + DFS (Depth-First Search)
- Recursion
Why Data Structures?
Data structures are used to store data on our computers efficiently.
Where Data Gets Stored:
- Permanent: Storage devices (like hard drives).
- Temporary: RAM (Random Access Memory).
Who Processes This Data?
- CPU: The central processing unit processes the data.
Common Operations:
- Insertion
- Traversal
- Deletion
- Searching
- Sorting
- Access
Data Structures in Ruby
1. Arrays
Arrays are sequential collections whose elements are stored contiguously in memory.
Usage:
- Fast for index access and for push/pop at the end.
- Slow for insert/delete in the middle, since every later element must be shifted.
Big O Complexity:
- Access: O(1)
- Search: O(n)
- Insertion: O(n)
- Deletion: O(n)
Example:
# Creating an array
arr = [1, 2, 3, 4, 5]
# Accessing elements
puts arr[2] # Output: 3
# Adding elements
arr.push(6) # [1, 2, 3, 4, 5, 6]
arr << 7 # [1, 2, 3, 4, 5, 6, 7]
# Removing elements
arr.pop # [1, 2, 3, 4, 5, 6]
arr.shift # [2, 3, 4, 5, 6]
# Iterating over elements
arr.each do |element|
  puts element
end
# Multidimensional array (matrix)
matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
puts matrix[1][2] # Output: 6
2. Hashes
Hashes (also known as hash maps) are key-value pairs, offering fast lookups.
Usage:
- Ideal for quick lookups and associative arrays.
Big O Complexity (average case; worst case can degrade to O(n) under heavy key collisions):
- Access: O(1)
- Search: O(1)
- Insertion: O(1)
- Deletion: O(1)
Example:
# Creating a hash
char_count = { 'a' => 1, 'b' => 2, 'c' => 3 }
# Accessing elements
puts char_count['b'] # Output: 2
# Adding elements
char_count['d'] = 4
# Removing elements
char_count.delete('a')
# Iterating over elements
char_count.each do |key, value|
  puts "#{key}: #{value}"
end
3. Sets
Sets are collections of unordered values with no duplicates, implemented using hashes under the hood.
Usage:
- Useful for testing membership, eliminating duplicates.
Big O Complexity (average case, as with Hash):
- Access: O(1)
- Search: O(1)
- Insertion: O(1)
- Deletion: O(1)
Example:
require 'set'
# Creating a set
set = Set.new([1, 2, 3, 4, 5])
# Adding elements
set.add(6)
# Removing elements
set.delete(2)
# Checking membership
puts set.include?(3) # Output: true
# Iterating over elements
set.each do |element|
  puts element
end
4. Strings
Strings are mutable sequences of characters; for complexity purposes they behave much like arrays of characters.
Usage:
- Useful for text manipulation.
Big O Complexity:
- Access: O(1)
- Search: O(n)
- Insertion: O(n)
- Deletion: O(n)
Example:
# Creating a string
str = "Hello, world!"
# Accessing characters
puts str[7] # Output: w
# Adding to a string
str += " How are you?"
# Removing from a string
str.slice!(0, 5)
# Iterating over characters
str.each_char do |char|
  puts char
end
5. Symbols
Symbols are immutable, reusable constants represented internally by an integer value.
Usage:
- Useful for identifiers and keys, more memory efficient than strings.
Big O Complexity:
- Symbols are generally more efficient in memory usage and comparison than strings due to their immutability.
Example:
# Creating symbols
symbol = :my_symbol
# Comparing symbols
puts :a == :a # Output: true
puts :a == :b # Output: false
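The memory claim can be checked directly: every occurrence of the same symbol is one shared, frozen object, while each string literal normally allocates a fresh object. A quick sketch:

```ruby
# Symbols with the same name are one shared, frozen object; string
# literals normally allocate a new object each time they are evaluated.
puts :name.object_id == :name.object_id # true: same object
puts "name".equal?("name".dup)          # false: distinct objects
puts :name.frozen?                      # true: symbols are immutable
```

This is why symbols are the idiomatic choice for hash keys and identifiers that recur many times.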
6. Ranges
Ranges represent an interval of values.
Usage:
- Useful for creating sequences and checking inclusion.
Big O Complexity:
- Access: O(1)
- Search: O(1) inclusion checks for numeric ranges (a simple bounds comparison); O(n) when the elements must be enumerated
- Insertion/Deletion: not applicable; ranges are immutable
Example:
# Creating a range
range = (1..10)
# Iterating over a range
range.each do |value|
  puts value
end
# Checking inclusion
puts range.include?(5) # Output: true
7. Structs
Structs are a convenient way to bundle related attributes together using accessor methods.
Usage:
- Useful for creating simple data structures without defining a full class.
Big O Complexity:
- Access: O(1)
- Insertion: O(1)
- Deletion: O(1)
Example:
# Defining a struct
Person = Struct.new(:name, :age)
# Creating a new struct instance
person = Person.new("Alice", 30)
# Accessing attributes
puts person.name # Output: Alice
puts person.age # Output: 30
# Modifying attributes
person.age = 31
puts person.age # Output: 31
8. OpenStructs
OpenStructs provide a flexible data structure similar to a hash but with object-like access.
Usage:
- Useful for quick and flexible data structures.
Big O Complexity:
- Access: O(1)
- Insertion: O(1)
- Deletion: O(1)
Example:
require 'ostruct'
# Creating an OpenStruct
person = OpenStruct.new(name: "Alice", age: 30)
# Accessing attributes
puts person.name # Output: Alice
puts person.age # Output: 30
# Modifying attributes
person.age = 31
puts person.age # Output: 31
9. Queues
Queues follow the FIFO (First In, First Out) principle. Ruby does ship a built-in thread-safe Queue (Thread::Queue) aimed at producer/consumer threading, but for DSA practice it is common to implement a simple one using an array.
Usage:
- Useful for processing tasks in the order they arrive.
Big O Complexity:
- Access: O(1)
- Insertion: O(1)
- Deletion: O(1)
Example:
# Named SimpleQueue to avoid clashing with Ruby's built-in (thread-safe) Queue.
class SimpleQueue
  def initialize
    @elements = []
  end
  def enqueue(element)
    @elements.push(element)
  end
  def dequeue
    @elements.shift
  end
  def peek
    @elements.first
  end
  def empty?
    @elements.empty?
  end
end
# Using the queue
queue = SimpleQueue.new
queue.enqueue(1)
queue.enqueue(2)
puts queue.dequeue # Output: 1
puts queue.peek # Output: 2
puts queue.empty? # Output: false
10. Stacks
Stacks follow the LIFO (Last In, First Out) principle. Ruby does not have a built-in Stack class, but it can be implemented using an array.
Usage:
- Useful for managing nested structures or undo operations.
Big O Complexity:
- Access: O(1)
- Insertion: O(1)
- Deletion: O(1)
Example:
class Stack
  def initialize
    @elements = []
  end
  def push(element)
    @elements.push(element)
  end
  def pop
    @elements.pop
  end
  def peek
    @elements.last
  end
  def empty?
    @elements.empty?
  end
end
# Using the stack
stack = Stack.new
stack.push(1)
stack.push(2)
puts stack.pop # Output: 2
puts stack.peek # Output: 1
puts stack.empty? # Output: false
11. Linked Lists
Ruby does not have a built-in LinkedList class, but it can be implemented.
Usage:
- Useful for efficient insertion and deletion at known positions.
Big O Complexity:
- Access: O(n)
- Search: O(n)
- Insertion: O(1) at the head, or given a reference to the preceding node (appending without a tail pointer, as below, is O(n))
- Deletion: O(1) given the preceding node; O(n) if it must be found first
Example:
class Node
  attr_accessor :value, :next_node
  def initialize(value)
    @value = value
    @next_node = nil
  end
end
class LinkedList
  def initialize
    @head = nil
  end
  def append(value)
    if @head.nil?
      @head = Node.new(value)
    else
      current = @head
      current = current.next_node until current.next_node.nil?
      current.next_node = Node.new(value)
    end
  end
  def delete(value)
    return if @head.nil?
    if @head.value == value
      @head = @head.next_node
      return
    end
    current = @head
    while current.next_node && current.next_node.value != value
      current = current.next_node
    end
    current.next_node = current.next_node.next_node if current.next_node
  end
  def display
    current = @head
    while current
      print "#{current.value} -> "
      current = current.next_node
    end
    puts "nil"
  end
end
# Using the linked list
list = LinkedList.new
list.append(1)
list.append(2)
list.append(3)
list.display # Output: 1 -> 2 -> 3 -> nil
list.delete(2)
list.display # Output: 1 -> 3 -> nil
12. Trees
Ruby does not have a built-in Tree class, but it can be implemented.
Usage:
- Useful for hierarchical data structures, like file systems.
Big O Complexity (for a balanced tree; an unbalanced BST degrades to O(n)):
- Access: O(log n)
- Search: O(log n)
- Insertion: O(log n)
- Deletion: O(log n)
Example:
class TreeNode
  attr_accessor :value, :left, :right
  def initialize(value)
    @value = value
    @left = nil
    @right = nil
  end
end
class BinaryTree
  def initialize
    @root = nil
  end
  def insert(value)
    if @root.nil?
      @root = TreeNode.new(value)
    else
      insert_rec(@root, value)
    end
  end
  def insert_rec(node, value)
    if value < node.value
      if node.left.nil?
        node.left = TreeNode.new(value)
      else
        insert_rec(node.left, value)
      end
    else
      if node.right.nil?
        node.right = TreeNode.new(value)
      else
        insert_rec(node.right, value)
      end
    end
  end
  def inorder(node = @root)
    return if node.nil?
    inorder(node.left)
    print "#{node.value} "
    inorder(node.right)
  end
end
# Using the binary tree
tree = BinaryTree.new
tree.insert(10)
tree.insert(5)
tree.insert(15)
tree.inorder # Output: 5 10 15
13. Graphs
Graphs are collections of nodes (or vertices) and edges connecting pairs of nodes.
Usage: Useful for modeling relationships and connections, such as social networks, transportation networks, and dependency graphs.
Big O Complexity:
- Checking for an edge: O(1) (adjacency matrix), O(V) worst case (adjacency list, scanning a vertex’s neighbors)
- Adding a vertex: O(V²) (adjacency matrix), O(1) (adjacency list)
- Adding an edge: O(1) (adjacency matrix), O(1) (adjacency list)
- Traversals (DFS, BFS): O(V + E)
Example:
class Graph
  attr_accessor :adjacency_list
  def initialize
    @adjacency_list = {}
  end
  def add_vertex(vertex)
    @adjacency_list[vertex] = []
  end
  def add_edge(vertex1, vertex2)
    @adjacency_list[vertex1] << vertex2
    @adjacency_list[vertex2] << vertex1 # For undirected graph
  end
end
graph = Graph.new
graph.add_vertex('A')
graph.add_vertex('B')
graph.add_edge('A', 'B')
Graph-Specific Traversals:
- Depth-First Search (DFS)
- Breadth-First Search (BFS)
require 'set'

def dfs(graph, start, visited = Set.new)
  return if visited.include?(start)
  puts start
  visited.add(start)
  graph.adjacency_list[start].each do |neighbor|
    dfs(graph, neighbor, visited)
  end
end

def bfs(graph, start)
  queue = [start]
  visited = Set.new([start])
  until queue.empty?
    vertex = queue.shift
    puts vertex
    graph.adjacency_list[vertex].each do |neighbor|
      next if visited.include?(neighbor)
      visited.add(neighbor)
      queue << neighbor
    end
  end
end
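To see the two orders side by side, here is a self-contained sketch on a plain Hash adjacency list; `dfs_order` and `bfs_order` are my variants of the functions above that collect the visit order instead of printing it.

```ruby
require 'set'

# DFS: dive down one branch before backtracking.
def dfs_order(graph, start, visited = Set.new, order = [])
  return order if visited.include?(start)
  visited.add(start)
  order << start
  graph[start].each { |neighbor| dfs_order(graph, neighbor, visited, order) }
  order
end

# BFS: visit the whole current level before going deeper.
def bfs_order(graph, start)
  queue = [start]
  visited = Set.new([start])
  order = []
  until queue.empty?
    vertex = queue.shift
    order << vertex
    graph[vertex].each do |neighbor|
      next if visited.include?(neighbor)
      visited.add(neighbor)
      queue << neighbor
    end
  end
  order
end

graph = { 'A' => %w[B C], 'B' => %w[D], 'C' => %w[D], 'D' => [] }
p dfs_order(graph, 'A') # ["A", "B", "D", "C"]
p bfs_order(graph, 'A') # ["A", "B", "C", "D"]
```

DFS dives through B to D before ever touching C, while BFS finishes the whole level (B, C) before reaching D.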
14. Heaps
A Heap is a specialized tree-based data structure that satisfies the heap property.
Usage: Useful for implementing priority queues, heapsort, and efficient finding of minimum/maximum elements.
Big O Complexity:
- Accessing the minimum/maximum: O(1)
- Inserting an element: O(log n)
- Deleting the minimum/maximum: O(log n)
Min-Heap: A complete binary tree where the value of each node is greater than or equal to the value of its parent.
Max-Heap: A complete binary tree where the value of each node is less than or equal to the value of its parent.
Example:
class MinHeap
  attr_accessor :heap
  def initialize
    @heap = []
  end
  def insert(val)
    @heap << val
    heapify_up(@heap.size - 1)
  end
  def extract_min
    return nil if @heap.empty?
    min = @heap.first
    last = @heap.pop
    # Only reseat the last element if anything remains; otherwise the
    # popped element would be written straight back into the heap.
    unless @heap.empty?
      @heap[0] = last
      heapify_down(0)
    end
    min
  end
  private
  def heapify_up(index)
    parent_index = (index - 1) / 2
    return if index <= 0 || @heap[parent_index] <= @heap[index]
    @heap[parent_index], @heap[index] = @heap[index], @heap[parent_index]
    heapify_up(parent_index)
  end
  def heapify_down(index)
    child_index = 2 * index + 1
    return if child_index >= @heap.size
    right_child_index = child_index + 1
    if right_child_index < @heap.size && @heap[right_child_index] < @heap[child_index]
      child_index = right_child_index
    end
    return if @heap[index] <= @heap[child_index]
    @heap[index], @heap[child_index] = @heap[child_index], @heap[index]
    heapify_down(child_index)
  end
end
min_heap = MinHeap.new
min_heap.insert(3)
min_heap.insert(2)
min_heap.insert(1)
puts min_heap.extract_min # Output: 1
Algorithms
1. Sorting Algorithms
Bubble Sort
Bubble Sort repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order.
- Usage: Simple sorting algorithm, educational purposes.
- Big O Complexity: O(n²) worst case; O(n) best case on already-sorted input, thanks to the swapped flag.
Example:
def bubble_sort(arr)
  n = arr.size
  loop do
    swapped = false
    (n - 1).times do |i|
      if arr[i] > arr[i + 1]
        arr[i], arr[i + 1] = arr[i + 1], arr[i]
        swapped = true
      end
    end
    break unless swapped
  end
  arr
end
bubble_sort([4, 3, 2, 1]) # Output: [1, 2, 3, 4]
Merge Sort
Merge Sort divides the list into halves, recursively sorts them, and then merges the sorted halves.
- Usage: Efficient, stable sort.
- Big O Complexity: O(n log n)
Example:
def merge_sort(arr)
  return arr if arr.size <= 1
  mid = arr.size / 2
  left = merge_sort(arr[0...mid])
  right = merge_sort(arr[mid..-1])
  merge(left, right)
end
def merge(left, right)
  result = []
  until left.empty? || right.empty?
    result << (left.first <= right.first ? left.shift : right.shift)
  end
  result + left + right
end
merge_sort([4, 3, 2, 1]) # Output: [1, 2, 3, 4]
Quick Sort
Quick Sort selects a ‘pivot’ element, partitions the array into elements less than and greater than the pivot, and recursively sorts the partitions.
- Usage: Efficient sort, average case.
- Big O Complexity: Average O(n log n), Worst O(n²)
Example:
def quick_sort(arr)
  return arr if arr.size <= 1
  pivot = arr.delete_at(rand(arr.size))
  left, right = arr.partition { |x| x <= pivot }
  quick_sort(left) + [pivot] + quick_sort(right)
end
quick_sort([4, 3, 2, 1]) # Output: [1, 2, 3, 4]
2. Dynamic Programming (DP)
Dynamic Programming is an optimization technique that solves complex problems by breaking them into simpler subproblems and storing the results of subproblems to avoid redundant computations.
Fibonacci Sequence (DP)
Example:
def fibonacci(n, memo = {})
  return n if n <= 1
  memo[n] ||= fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
end
fibonacci(10) # Output: 55
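The same result can be computed bottom-up (tabulation): iterate from the base cases instead of recursing down, which also avoids deep call stacks. A sketch using O(1) extra space:

```ruby
# Bottom-up Fibonacci: start at the base cases and iterate upward.
def fibonacci_iter(n)
  a, b = 0, 1
  n.times { a, b = b, a + b }
  a
end

puts fibonacci_iter(10) # Output: 55
```

Memoization (top-down) and tabulation (bottom-up) are the two standard faces of dynamic programming.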
3. Breadth-First Search (BFS) and Depth-First Search (DFS)
BFS explores nodes level by level, while DFS explores as far as possible along each branch before backtracking.
BFS
Example:
require 'set'

def bfs(graph, start)
  queue = [start]
  visited = Set.new([start])
  until queue.empty?
    vertex = queue.shift
    puts vertex
    graph[vertex].each do |neighbor|
      next if visited.include?(neighbor)
      visited.add(neighbor)
      queue << neighbor
    end
  end
end
DFS
Example:
require 'set'

def dfs(graph, start, visited = Set.new)
  return if visited.include?(start)
  puts start
  visited.add(start)
  graph[start].each do |neighbor|
    dfs(graph, neighbor, visited)
  end
end
4. Recursion
Recursion is a technique where a function calls itself in order to solve smaller instances of the same problem.
Example:
def factorial(n)
  return 1 if n <= 1
  n * factorial(n - 1)
end
factorial(5) # Output: 120
5. Kadane’s Algorithm
Kadane’s Algorithm is used to find the maximum sum subarray in a given array.
Example:
def kadane(arr)
  max_ending_here = max_so_far = arr[0]
  arr[1..-1].each do |x|
    max_ending_here = [x, max_ending_here + x].max
    max_so_far = [max_so_far, max_ending_here].max
  end
  max_so_far
end
kadane([-2, 1, -3, 4, -1, 2, 1, -5, 4]) # Output: 6
6. Prim’s Algorithm
Prim’s Algorithm is used to find the Minimum Spanning Tree (MST) of a graph.
Example:
require 'set'

def prim(graph)
  start = graph.keys.first
  mst = []
  visited = Set.new([start])
  edges = graph[start].dup
  until edges.empty?
    edge = edges.min_by { |_, weight| weight }
    edges.delete(edge) # remove it, or a skipped edge would loop forever
    node, _weight = edge
    next if visited.include?(node)
    mst << edge
    visited.add(node)
    edges += graph[node].reject { |n, _| visited.include?(n) }
  end
  mst
end
7. Greedy Algorithms
Greedy Algorithms make locally optimal choices at each step with the hope of finding a global optimum.
Coin Change Problem (Greedy)
Note: the greedy approach is optimal only for canonical coin systems such as the one below; the general problem requires dynamic programming.
Example:
def coin_change(coins, amount)
  coins = coins.sort.reverse # sort returns a new array; reassign, or the order is lost
  count = 0
  coins.each do |coin|
    while amount >= coin
      amount -= coin
      count += 1
    end
  end
  count
end
coin_change([1, 2, 5], 11) # Output: 3
8. Dijkstra’s Algorithm
Used for finding the shortest path from a single source to all other nodes in a graph.
- Usage: Shortest path in weighted graphs.
- Big O Complexity: O((V + E) log V)
Example:
# Note: PriorityQueue is not part of Ruby's standard library; this assumes
# a heap-based priority queue implementation (e.g. from a gem).
def dijkstra(graph, start)
  distances = {}
  previous = {}
  nodes = PriorityQueue.new
  graph.each_key do |vertex|
    distances[vertex] = Float::INFINITY
    previous[vertex] = nil
    nodes.push(vertex, distances[vertex])
  end
  distances[start] = 0
  nodes.change_priority(start, 0)
  until nodes.empty?
    smallest = nodes.pop
    break if distances[smallest] == Float::INFINITY
    graph[smallest].each do |neighbor, weight|
      alt = distances[smallest] + weight
      if alt < distances[neighbor]
        distances[neighbor] = alt
        previous[neighbor] = smallest
        nodes.change_priority(neighbor, alt)
      end
    end
  end
  distances
end
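Since `PriorityQueue` is an external dependency, here is a self-contained alternative (a sketch; `dijkstra_simple` is my name for it) that scans for the nearest unvisited vertex each round instead. That makes it O(V²), which is fine for small graphs and needs no gem.

```ruby
# Dijkstra without a priority queue: pick the closest unvisited
# vertex by linear scan each iteration (O(V^2) overall).
def dijkstra_simple(graph, start)
  distances = Hash.new(Float::INFINITY)
  distances[start] = 0
  unvisited = graph.keys.dup
  until unvisited.empty?
    current = unvisited.min_by { |v| distances[v] }
    break if distances[current] == Float::INFINITY # rest is unreachable
    unvisited.delete(current)
    graph[current].each do |neighbor, weight|
      alt = distances[current] + weight
      distances[neighbor] = alt if alt < distances[neighbor]
    end
  end
  distances
end

graph = {
  'A' => { 'B' => 1, 'C' => 4 },
  'B' => { 'C' => 2, 'D' => 5 },
  'C' => { 'D' => 1 },
  'D' => {}
}
p dijkstra_simple(graph, 'A') # {"A"=>0, "B"=>1, "C"=>3, "D"=>4}
```

The A-to-D path goes A → B → C → D for a total weight of 4, beating the direct-looking A → B → D at 6.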
9. Bellman-Ford Algorithm
Used for finding the shortest paths from a single source to all other nodes in a graph, even with negative weights.
- Usage: Shortest path in graphs with negative weights.
- Big O Complexity: O(VE)
Example:
def bellman_ford(graph, start)
  distances = Hash.new(Float::INFINITY)
  distances[start] = 0
  (graph.keys.size - 1).times do
    graph.each do |u, edges|
      edges.each do |v, weight|
        if distances[u] + weight < distances[v]
          distances[v] = distances[u] + weight
        end
      end
    end
  end
  distances
end
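A useful extension: after the V − 1 relaxation rounds, one more pass that can still improve any distance proves the graph contains a negative cycle, in which case no shortest paths exist. A sketch (restating the relaxation loop so the example stands alone):

```ruby
# Bellman-Ford with negative-cycle detection: relax V-1 times, then
# verify that no edge can still be relaxed.
def bellman_ford_with_check(graph, start)
  distances = Hash.new(Float::INFINITY)
  distances[start] = 0
  (graph.keys.size - 1).times do
    graph.each do |u, edges|
      edges.each do |v, weight|
        distances[v] = distances[u] + weight if distances[u] + weight < distances[v]
      end
    end
  end
  # One extra pass: any further improvement means a negative cycle.
  graph.each do |u, edges|
    edges.each do |v, weight|
      raise "negative cycle detected" if distances[u] + weight < distances[v]
    end
  end
  distances
end

graph = { 'A' => { 'B' => 4, 'C' => 2 }, 'B' => { 'D' => 3 },
          'C' => { 'B' => -1, 'D' => 5 }, 'D' => {} }
p bellman_ford_with_check(graph, 'A') # {"A"=>0, "B"=>1, "C"=>2, "D"=>4}
```

Note how the negative edge C → B pulls B's distance down to 1 via C, something Dijkstra's algorithm would get wrong.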
10. A* Algorithm
A* (A-star) is a search algorithm that finds the shortest path between two nodes, using a heuristic to guide the search and speed it up.
- Usage: Pathfinding and graph traversal.
- Big O Complexity: depends on the heuristic; with a priority queue and a good heuristic it behaves like Dijkstra, O((V + E) log V).
Example:
require 'set'

# This sketch scans open_set with min_by for simplicity; a production
# implementation would use a priority queue instead.
def a_star(graph, start, goal, heuristic)
  open_set = Set.new([start])
  came_from = {}
  g_score = Hash.new(Float::INFINITY)
  g_score[start] = 0
  f_score = Hash.new(Float::INFINITY)
  f_score[start] = heuristic[start]
  until open_set.empty?
    current = open_set.min_by { |node| f_score[node] }
    return reconstruct_path(came_from, current) if current == goal
    open_set.delete(current)
    graph[current].each do |neighbor, weight|
      tentative_g_score = g_score[current] + weight
      if tentative_g_score < g_score[neighbor]
        came_from[neighbor] = current
        g_score[neighbor] = tentative_g_score
        f_score[neighbor] = g_score[neighbor] + heuristic[neighbor]
        open_set.add(neighbor)
      end
    end
  end
  nil
end

def reconstruct_path(came_from, current)
  total_path = [current]
  while came_from.key?(current)
    current = came_from[current]
    total_path.unshift(current)
  end
  total_path
end
11. Floyd-Warshall Algorithm
Floyd-Warshall is used for finding the shortest paths between all pairs of nodes in a graph.
- Usage: All-pairs shortest path in graphs.
- Big O Complexity: O(V³)
Example:
def floyd_warshall(graph)
  dist = Hash.new { |h, k| h[k] = Hash.new(Float::INFINITY) }
  graph.each do |u, edges|
    dist[u][u] = 0
    edges.each do |v, weight|
      dist[u][v] = weight
    end
  end
  graph.keys.each do |k|
    graph.keys.each do |i|
      graph.keys.each do |j|
        if dist[i][j] > dist[i][k] + dist[k][j]
          dist[i][j] = dist[i][k] + dist[k][j]
        end
      end
    end
  end
  dist
end
12. Topological Sort
Topological Sort is used to order vertices of a directed acyclic graph (DAG) linearly such that for every directed edge u -> v, u comes before v in the ordering.
- Usage: Task scheduling, course prerequisites.
- Big O Complexity: O(V + E)
Example:
require 'set'

def topological_sort(graph)
  visited = Set.new
  stack = []
  graph.keys.each do |vertex|
    topological_sort_util(vertex, visited, stack, graph) unless visited.include?(vertex)
  end
  stack.reverse
end

def topological_sort_util(vertex, visited, stack, graph)
  visited.add(vertex)
  graph[vertex].each do |neighbor|
    topological_sort_util(neighbor, visited, stack, graph) unless visited.include?(neighbor)
  end
  stack << vertex
end
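An iterative alternative is Kahn's algorithm (sketched below): repeatedly take a vertex with no remaining incoming edges. If the order stalls before covering every vertex, the graph has a cycle and no topological order exists.

```ruby
# Kahn's algorithm: repeatedly remove a vertex whose in-degree is zero.
def kahn_topological_sort(graph)
  in_degree = Hash.new(0)
  graph.each_value { |neighbors| neighbors.each { |v| in_degree[v] += 1 } }
  queue = graph.keys.select { |v| in_degree[v].zero? }
  order = []
  until queue.empty?
    vertex = queue.shift
    order << vertex
    graph[vertex].each do |neighbor|
      in_degree[neighbor] -= 1
      queue << neighbor if in_degree[neighbor].zero?
    end
  end
  raise "graph has a cycle" unless order.size == graph.size
  order
end

graph = { 'plan' => ['shop'], 'shop' => ['cook'], 'cook' => [] }
p kahn_topological_sort(graph) # ["plan", "shop", "cook"]
```

Unlike the DFS version, this one detects cycles for free and avoids deep recursion on long dependency chains.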
13. Kruskal’s Algorithm
Kruskal’s Algorithm is used to find the Minimum Spanning Tree (MST) of a graph by sorting edges and adding them to the MST while avoiding cycles.
- Usage: Finding MST in graphs.
- Big O Complexity: O(E log E)
Example:
class UnionFind
  def initialize(n)
    @parent = (0...n).to_a
    @rank = Array.new(n, 0)
  end
  def find(x)
    return x if @parent[x] == x
    @parent[x] = find(@parent[x]) # path compression
  end
  def union(x, y)
    root_x = find(x)
    root_y = find(y)
    return if root_x == root_y
    if @rank[root_x] > @rank[root_y]
      @parent[root_y] = root_x
    elsif @rank[root_x] < @rank[root_y]
      @parent[root_x] = root_y
    else
      @parent[root_y] = root_x
      @rank[root_x] += 1
    end
  end
end
# Note: this assumes vertices are the integers 0...n, since UnionFind
# indexes its arrays by vertex.
def kruskal(graph)
  edges = []
  graph.each do |u, neighbors|
    neighbors.each do |v, weight|
      edges << [weight, u, v]
    end
  end
  edges.sort_by!(&:first)
  mst = []
  uf = UnionFind.new(graph.size)
  edges.each do |weight, u, v|
    if uf.find(u) != uf.find(v)
      uf.union(u, v)
      mst << [u, v, weight]
    end
  end
  mst
end
14. Backtracking
Backtracking is a recursive technique for solving problems by trying to build a solution incrementally and removing solutions that fail to satisfy the constraints.
N-Queens Problem
Example:
def solve_n_queens(n)
  result = []
  solve(0, [], result, n)
  result
end
def solve(row, columns, result, n)
  if row == n
    result << columns.map { |c| '.' * c + 'Q' + '.' * (n - c - 1) }
  else
    (0...n).each do |col|
      if valid?(columns, row, col)
        solve(row + 1, columns + [col], result, n)
      end
    end
  end
end
def valid?(columns, row1, col1)
  columns.each_with_index do |col2, row2|
    return false if col1 == col2 || (row1 - row2).abs == (col1 - col2).abs
  end
  true
end
15. Divide and Conquer
Divide and Conquer is a technique where a problem is divided into smaller subproblems, each subproblem is solved recursively, and their solutions are combined.
Example:
Merge Sort (Already shown above)
Quick Sort (Already shown above)
16. Binary Search
Binary Search is an efficient algorithm for finding an item from a sorted list of items.
- Usage: Searching in sorted arrays.
- Big O Complexity: O(log n)
Example:
def binary_search(arr, target)
  left, right = 0, arr.size - 1
  while left <= right
    mid = left + (right - left) / 2
    return mid if arr[mid] == target
    if arr[mid] < target
      left = mid + 1
    else
      right = mid - 1
    end
  end
  -1
end
binary_search([1, 2, 3, 4, 5], 3) # Output: 2
17. Hashing
Hashing is a technique used to uniquely identify a specific object from a group of similar objects.
Hash Table (Ruby’s Hash)
Example:
hash = {}
hash[:key] = "value"
puts hash[:key] # Output: "value"
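Under the hood, the idea is a hash function that maps each key to one of a fixed number of buckets, so a lookup scans only one short bucket instead of every entry. A minimal sketch (`BUCKETS`, `bucket_for`, and `lookup` are illustrative names, not Ruby's actual internals):

```ruby
BUCKETS = 8

# Map a key to a bucket index using Ruby's built-in Object#hash.
def bucket_for(key)
  key.hash % BUCKETS
end

# Insert key-value pairs into their buckets.
table = Array.new(BUCKETS) { [] }
{ apple: 1, pear: 2, plum: 3 }.each do |key, value|
  table[bucket_for(key)] << [key, value]
end

# A lookup only scans the single bucket the key hashes to.
def lookup(table, key)
  pair = table[bucket_for(key)].find { |k, _| k == key }
  pair && pair.last
end

p lookup(table, :pear) # 2
```

With a good hash function, each bucket stays short, which is why Hash operations average O(1).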
18. Two-Pointer Technique
The Two-Pointer Technique is used for solving problems involving sorted arrays or linked lists by using two pointers to traverse the structure.
Example: Finding a pair with a given sum
def two_sum(arr, target)
  left, right = 0, arr.size - 1
  while left < right
    sum = arr[left] + arr[right]
    return [left, right] if sum == target
    if sum < target
      left += 1
    else
      right -= 1
    end
  end
  nil
end
two_sum([1, 2, 3, 4, 5], 9) # Output: [3, 4]
19. Sliding Window
The Sliding Window technique is used for problems involving subarrays or substrings by maintaining a window that satisfies certain conditions.
Example: Maximum sum of subarray of size k
def max_sum_subarray(arr, k)
  return nil if arr.size < k
  max_sum = arr[0, k].sum
  current_sum = max_sum
  (k...arr.size).each do |i|
    current_sum += arr[i] - arr[i - k]
    max_sum = [max_sum, current_sum].max
  end
  max_sum
end
max_sum_subarray([1, 2, 3, 4, 5, 6, 7, 8, 9], 3) # Output: 24
20. Bit Manipulation
Bit Manipulation involves using bitwise operations to solve problems efficiently at the bit level.
Example: Counting the number of 1 bits
def hamming_weight(n)
  count = 0
  while n != 0
    count += 1
    n &= n - 1 # clears the lowest set bit
  end
  count
end
hamming_weight(11) # Output: 3 (binary representation of 11 is 1011)
Conclusion
Understanding data structures and algorithms is crucial for writing efficient and maintainable code. Ruby provides a rich set of built-in data structures and methods that allow you to solve complex problems effectively. By mastering these fundamental concepts, you’ll be better equipped to handle any programming challenge that comes your way.