Choose Your Implementation
Browse through 51+ Data Structures and Algorithms implementations. Filter by category or search for specific topics.
A linear data structure where elements are stored in nodes, each pointing to the next node. Provides O(1) insertion/deletion at head, O(n) for search and access. Space complexity: O(n). Ideal for dynamic lists where size changes frequently.
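A minimal Python sketch of the O(1) head insertion and O(n) search described above (class and method names are illustrative):

```python
class Node:
    """A singly linked list node: a value plus a pointer to the next node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        """O(1): the new node simply points at the old head."""
        self.head = Node(value, self.head)

    def find(self, value):
        """O(n): walk node by node until the value is found or the list ends."""
        node = self.head
        while node is not None:
            if node.value == value:
                return node
            node = node.next
        return None
```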
A linear data structure with nodes containing data and pointers to both next and previous nodes. Allows bidirectional traversal. Time complexity: O(1) for insertion/deletion at known positions, O(n) for search. Space complexity: O(n) with extra memory for prev pointers.
A LIFO (Last In, First Out) data structure. Push, pop, and peek operations are O(1). Space complexity: O(n). Essential for expression evaluation, backtracking, and function call management.
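In Python, a plain list already provides the O(1) stack operations above (amortized for `append`), so no extra class is needed:

```python
stack = []
stack.append('a')    # push, O(1) amortized
stack.append('b')
top = stack[-1]      # peek at the most recently pushed item
item = stack.pop()   # pop, O(1): removes and returns the top item
```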
A FIFO (First In, First Out) data structure. Enqueue and dequeue operations are O(1) with deque implementation. Space complexity: O(n). Essential for BFS traversals, process scheduling, and handling requests in order.
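The deque-backed implementation mentioned above maps directly onto Python's `collections.deque`, which gives O(1) operations at both ends:

```python
from collections import deque

queue = deque()
queue.append('job1')     # enqueue at the back, O(1)
queue.append('job2')
first = queue.popleft()  # dequeue from the front, O(1)
```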
A hierarchical data structure where each node has at most two children. Tree traversals are O(n), search/insert/delete are O(h) where h is height. Space complexity: O(n). Foundation for BSTs, heaps, and expression trees.
A binary tree where left subtree contains smaller values and right subtree contains larger values. Average case operations are O(log n), worst case O(n). Space complexity: O(n). Efficient for searching, sorting, and range queries.
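A minimal sketch of the BST ordering invariant and O(h) insert/search (no balancing, so the worst case degrades to O(n)):

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def bst_insert(root, key):
    """Insert key, keeping smaller keys left and larger keys right; O(h)."""
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    elif key > root.key:
        root.right = bst_insert(root.right, key)
    return root

def bst_search(root, key):
    """Descend left or right by comparison; O(h)."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root
```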
A complete binary tree where parent nodes are smaller than children. Insert, delete, and extract-min are O(log n). Space complexity: O(n). Essential for priority queues, heap sort, and graph algorithms like Dijkstra's.
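Python's standard-library `heapq` module implements exactly this min-heap, backed by a list:

```python
import heapq

heap = []
for x in [5, 1, 4, 2]:
    heapq.heappush(heap, x)     # O(log n) insert

smallest = heapq.heappop(heap)  # O(log n) extract-min
```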
Fundamental graph traversal algorithms. BFS finds shortest path in unweighted graphs, DFS explores deeply. Both have O(V + E) time complexity. Space complexity: O(V). Essential for connectivity, pathfinding, and topological sorting.
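A sketch of BFS computing shortest edge-count distances in an unweighted graph, as described above (the adjacency-list format is an assumption):

```python
from collections import deque

def bfs_distances(graph, source):
    """Shortest distances (in edges) from source; O(V + E) time, O(V) space."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:            # first visit is the shortest path
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```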
Optimization technique solving complex problems by breaking them into simpler subproblems. Includes memoization and tabulation approaches. Time complexity varies by problem. Essential for optimization problems like knapsack, LCS, and path counting.
Essential array operations including sorting (quicksort, mergesort), searching (binary search), and manipulation. Time complexities range from O(n) to O(n log n). Space complexity varies. Fundamental for most algorithmic problems.
String processing algorithms including pattern matching (KMP), palindrome detection, and manipulation. Time complexity ranges from O(n) to O(n + m). Essential for text processing, DNA sequencing, and search engines.
Least Recently Used (LRU) cache using doubly linked list and hash map. Get and put operations are O(1). Space complexity: O(capacity). Essential for memory management and caching systems.
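A compact sketch of the O(1) get/put behavior: Python's `OrderedDict` tracks recency order, standing in for the explicit hash map plus doubly linked list pair:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used
```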
Tree-like data structure for storing strings efficiently. Insert, search, and prefix operations are O(m) where m is string length. Space complexity: O(ALPHABET_SIZE * N * M). Perfect for autocomplete and spell checkers.
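A minimal trie sketch showing the O(m) insert, search, and prefix operations (dict-based children rather than a fixed-size alphabet array):

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        """O(m) in the word's length: one node per character."""
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def _walk(self, s):
        node = self.root
        for ch in s:
            node = node.children.get(ch)
            if node is None:
                return None
        return node

    def search(self, word):
        node = self._walk(word)
        return node is not None and node.is_word

    def starts_with(self, prefix):
        return self._walk(prefix) is not None
```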
Data structure for tracking disjoint sets with union and find operations. With path compression and union by rank, operations are nearly O(1) amortized. Essential for detecting cycles and connected components.
Tree data structure for range queries and updates. Range queries and updates are O(log n). Space complexity: O(n). Excellent for range sum, min/max queries with updates.
Probabilistic data structure for membership testing. Space-efficient with possible false positives but no false negatives. Insert and lookup are O(k) where k is number of hash functions. Great for caching and databases.
Tree data structure for efficient prefix sum queries and updates. Both operations are O(log n). Space complexity: O(n). More space-efficient than segment trees for prefix sum queries.
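A minimal Fenwick tree (binary indexed tree) sketch; both operations are O(log n) by stepping through the lowest set bit of the index:

```python
class FenwickTree:
    """1-indexed binary indexed tree for prefix sums."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def update(self, i, delta):
        """Add delta at position i; O(log n)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)        # jump to the next node covering i

    def prefix_sum(self, i):
        """Sum of positions 1..i; O(log n)."""
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)        # drop the lowest set bit
        return total
```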
Suffix array provides sorted order of all suffixes. Construction is O(n log n), pattern search is O(m log n). LCP array gives longest common prefix between adjacent suffixes. Essential for string algorithms.
A complete binary tree where parent nodes are larger than children. Insert, delete, and extract-max are O(log n). Space complexity: O(n). Used in priority queues, heap sort, and finding largest elements.
Self-balancing binary search tree with height-balanced property. All operations are O(log n) guaranteed. Space complexity: O(n). More rigidly balanced than red-black trees.
Self-balancing binary search tree with color properties. All operations are O(log n) worst case. Space complexity: O(n). Used in many standard library implementations.
Self-balancing tree data structure for sorted data with multiple keys per node. All operations are O(log n). Space complexity: O(n). Essential for databases and file systems.
Data structure for key-value pairs using hash functions. Average case O(1) for insert, delete, lookup. Space complexity: O(n). Handles collisions with chaining or open addressing.
Linear data structure supporting insertion and deletion at both ends. All operations are O(1). Space complexity: O(n). Useful for sliding window problems and palindrome checking.
Abstract data type with priority-based access. Insert is O(log n), extract-min/max is O(log n). Space complexity: O(n). Essential for Dijkstra's algorithm and task scheduling.
Linked list where the last node points back to the first. Allows continuous traversal. Operations are O(1) for insertion/deletion at known positions, O(n) for search.
Probabilistic data structure with multiple levels for fast search. Average operations are O(log n). Space complexity: O(n). Alternative to balanced trees with simpler implementation.
Graph algorithm for finding shortest path from source to all vertices. Time complexity: O(V²) with an array, or O((V + E) log V) with a priority queue. Essential for GPS systems and network routing.
Graph algorithm for shortest path that handles negative weights. Time complexity: O(VE). Space complexity: O(V). Can detect negative cycles unlike Dijkstra's algorithm.
Dynamic programming algorithm for all-pairs shortest paths. Time complexity: O(V³). Space complexity: O(V²). Works with negative weights and detects negative cycles.
Linear ordering of vertices in a directed acyclic graph. Time complexity: O(V + E). Space complexity: O(V). Essential for dependency resolution and task scheduling.
Greedy algorithm for finding minimum spanning tree. Time complexity: O(E log E). Uses union-find data structure. Essential for network design and clustering.
Greedy algorithm for minimum spanning tree starting from a vertex. Time complexity: O(V²) with an adjacency matrix, or O(E log V) with a binary heap. Space complexity: O(V). Alternative to Kruskal's algorithm.
Efficient search algorithm for sorted arrays. Time complexity: O(log n). Space complexity: O(1) iterative, O(log n) recursive. Foundation for many divide-and-conquer algorithms.
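A sketch of the iterative O(log n) variant, which keeps space at O(1) by halving the search interval in a loop:

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1     # target can only be in the right half
        else:
            hi = mid - 1     # target can only be in the left half
    return -1
```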
Divide-and-conquer sorting algorithm. Time complexity: O(n log n) guaranteed. Space complexity: O(n). Stable sort, good for linked lists and external sorting.
Divide-and-conquer sorting algorithm. Average case: O(n log n), worst case: O(n²). Space complexity: O(log n) average. In-place sorting, widely used in practice.
Comparison-based sorting using binary heap. Time complexity: O(n log n) guaranteed. Space complexity: O(1). In-place sorting with consistent performance.
Non-comparison sorting algorithm for integers. Time complexity: O(d × n) where d is digits. Space complexity: O(n + k). Efficient for fixed-width integer keys.
Non-comparison sorting for integers in known range. Time complexity: O(n + k) where k is range. Space complexity: O(k). Stable sort, efficient for small ranges.
Efficient pattern matching algorithm. Time complexity: O(n + m). Space complexity: O(m). Uses failure function to avoid redundant comparisons.
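A sketch of KMP: the failure function records, for each prefix of the pattern, the longest proper prefix that is also a suffix, so the search never re-examines matched text characters:

```python
def kmp_search(text, pattern):
    """Index of the first occurrence of pattern in text, or -1; O(n + m)."""
    if not pattern:
        return 0
    # failure[i]: length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it.
    failure = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = failure[k - 1]        # fall back instead of restarting
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k
    # Scan the text, reusing the failure function on mismatches.
    k = 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = failure[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1
    return -1
```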
String matching using rolling hash. Average case: O(n + m), worst case: O(nm). Good for multiple pattern matching and plagiarism detection.
Linear time algorithm for finding all palindromic substrings of a string. Time complexity: O(n). Space complexity: O(n). Efficiently handles both odd and even length palindromes.
Dynamic programming algorithm for minimum edit distance between strings. Time complexity: O(mn). Space complexity: O(mn) or O(min(m,n)). Used in spell checkers and DNA analysis.
Dynamic programming algorithm for finding LCS of two sequences. Time complexity: O(mn). Space complexity: O(mn) or O(min(m,n)). Used in diff tools and bioinformatics.
Classic dynamic programming problem for optimal selection. Time complexity: O(nW). Space complexity: O(nW) or O(W). Models resource allocation and optimization problems.
Dynamic programming for minimum coins needed for target amount. Time complexity: O(n × amount). Space complexity: O(amount). Classic example of optimal substructure.
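A sketch of the bottom-up table: each amount's answer is built from smaller amounts, which is the optimal substructure mentioned above:

```python
def min_coins(coins, amount):
    """Fewest coins summing to amount, or -1 if impossible; O(n * amount)."""
    INF = float('inf')
    dp = [0] + [INF] * amount          # dp[a] = fewest coins making amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1  # use coin c on top of the best for a-c
    return dp[amount] if dp[amount] != INF else -1
```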
Dynamic programming for optimal matrix multiplication order. Time complexity: O(n³). Space complexity: O(n²). Minimizes scalar multiplications in matrix chain.
Dynamic programming for maximum sum contiguous subarray. Time complexity: O(n). Space complexity: O(1). Elegant solution to maximum subarray problem.
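The single-pass O(n)/O(1) solution above is Kadane's algorithm; a minimal sketch:

```python
def max_subarray(nums):
    """Maximum sum of any non-empty contiguous subarray; O(n) time, O(1) space."""
    best = current = nums[0]
    for x in nums[1:]:
        current = max(x, current + x)  # extend the current run or start fresh
        best = max(best, current)
    return best
```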
Algorithmic technique using two pointers for array/string problems. Time complexity varies, often O(n). Space complexity: O(1). Useful for pairs, palindromes, and sorted arrays.
Technique for problems involving contiguous subarrays/substrings. Time complexity: O(n). Space complexity: O(1) or O(k). Efficient for maximum/minimum in windows.
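A sketch of a fixed-size window: instead of re-summing each window (O(nk)), slide it by adding the entering element and subtracting the leaving one, keeping the whole pass O(n):

```python
def max_window_sum(nums, k):
    """Maximum sum over all contiguous windows of size k; O(n) time, O(1) space."""
    window = sum(nums[:k])
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]  # slide: add the new, drop the old
        best = max(best, window)
    return best
```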
Systematic search through solution space with pruning. Time complexity varies, often exponential. Includes N-Queens, Sudoku solver, and subset generation.