Big O Notation in Data Structures: Time and Space Complexity

Big O Notation in Data Structures: An Overview

In the previous tutorial on Complexity Analysis, we saw the three asymptotic notations used to denote time and space complexities. In this DSA tutorial, we will analyze algorithms in complete detail using Big O, the most commonly used asymptotic notation. For more information and understanding, consider our Best DSA Course program.

Big O Runtime Analysis of the Algorithm

We saw this mathematical representation of Big O notation in the previous tutorial:
O(g(n)) = { f(n): there exist positive constants c and n₀
            such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }
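For example (an illustrative check, not taken from the tutorial itself), f(n) = 3n + 2 is O(n): choosing c = 4 and n₀ = 2 gives 0 ≤ 3n + 2 ≤ 4n for all n ≥ 2.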
Now, we will see the general step-by-step process for Big O runtime analysis (a worked sketch follows the list):
  • Determine the input and what n stands for.
  • Describe the algorithm's highest limit of operations in terms of n.
  • Remove all but the terms with the highest order.
  • Eliminate all the constant factors.
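
As a minimal illustrative sketch (the function and its name are ours, not from the article), here is how these four steps apply to a simple Python function that sums a list:

def sum_list(values):
    # Step 1: n is the number of elements in `values`.
    # Step 2: the loop runs n times plus a constant amount of setup,
    #         so the operation count is roughly c1*n + c2.
    # Step 3: keep only the highest-order term -> c1*n.
    # Step 4: drop the constant factor -> O(n).
    total = 0
    for v in values:
        total += v
    return total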

Features of Big O Notation Analysis

  • Constant Multiplication: If f(n) = c·g(n), then O(f(n)) = O(g(n)), where c is a nonzero constant.
  • Polynomial Function: If f(n) = a₀ + a₁·n + a₂·n² + ... + aₘ·nᵐ, then O(f(n)) = O(nᵐ).
  • Summation Function: If f(n) = f₁(n) + f₂(n) + ... + fₘ(n) and fᵢ(n) ≤ fᵢ₊₁(n) for all i = 1, 2, ..., m, then O(f(n)) = O(max(f₁(n), f₂(n), ..., fₘ(n))).
  • Logarithmic Function: If f(n) and g(n) are logarithms of n to any two bases a and b, then O(f(n)) = O(g(n)); all logarithmic functions grow in the same manner in terms of Big O.
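
A small illustrative check in Python (ours, not part of the original article) of the polynomial and logarithmic rules: the ratio f(n)/n² settles near the leading coefficient, and the ratio of logarithms to two different bases is a fixed constant, which is why lower-order terms, constant factors, and log bases are all ignored.

import math

def f(n):
    # Illustrative polynomial: 5n^2 + 3n + 20
    return 5 * n**2 + 3 * n + 20

for n in (10, 1_000, 100_000):
    # Approaches the leading coefficient 5, so f(n) = O(n^2).
    print(n, f(n) / n**2)
    # Constant (about 3.32) for every n, so the log base is irrelevant.
    print(n, math.log2(n) / math.log10(n))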

Runtime Analysis of Algorithms

The performance of an algorithm depends on n, i.e., the size of the input or the number of operations required for each input item.

The algorithms can be classified from best to worst performance (running time complexity); illustrative code sketches follow the list:

  • Logarithmic algorithm O(log n): Runtime grows logarithmically in proportion to n.
  • Linear algorithm O(n): Runtime grows directly in proportion to n.
  • Superlinear algorithm O(n log n): Runtime grows in proportion to n log n, slightly faster than linear.
  • Polynomial algorithm O(nᶜ): Runtime grows faster than all the previous ones, as a polynomial in n.
  • Exponential algorithm O(cⁿ): Runtime grows even faster than the polynomial algorithm as n increases.
  • Factorial algorithm O(n!): Runtime grows the fastest and quickly becomes unusable for even small values of n.
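
The sketches below are illustrative Python shapes (ours, not from the article) behind the first three classes: a halving loop is logarithmic, a single pass is linear, and a comparison-based sort is superlinear.

def count_halvings(n):
    # O(log n): n is halved on every iteration until it reaches 1
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def linear_scan(values, target):
    # O(n): each element is inspected at most once
    for index, value in enumerate(values):
        if value == target:
            return index
    return -1

def sort_values(values):
    # O(n log n): a comparison-based sort is a typical superlinear example
    return sorted(values)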

Read More - Data Structure Interview Questions for Experienced

Examples of algorithms with high runtime complexity, i.e., the worst-performing classes:
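
The sketches below are illustrative Python examples (ours, not from the article): a polynomial O(n²) nested loop, a naive exponential O(2ⁿ) recursion, and a factorial O(n!) enumeration; the last two become impractical even for modest values of n.

from itertools import permutations

def count_duplicate_pairs(values):
    # Polynomial, O(n^2): every pair of elements is compared
    count = 0
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                count += 1
    return count

def fib(n):
    # Exponential, O(2^n): naive recursion recomputes the same subproblems
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def all_orderings(values):
    # Factorial, O(n!): every permutation of the input is generated
    return list(permutations(values))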

Big O Space Complexity Analysis of Algorithms

For the performance analysis of an algorithm, we need to consider not only its time complexity but also the amount of memory the program uses. We need to measure and compare the worst-case theoretical space complexities of algorithms.

Functions are categorized using Big O notation according to how quickly they grow; many functions with the same growth rate can be represented by the same notation. The symbol O is used because a function's order is another name for its growth rate. In Big O notation, a function's growth rate is typically characterized only by its upper bound.

Space complexity analysis depends on two major factors:

  • Program implementation for a specific algorithm.
  • The input size or the amount of storage required for each item.

Examples of the Space Complexity of Some Algorithms

  • Linear search, binary search (iterative), bubble sort, selection sort, heap sort, and insertion sort: O(1)
  • Radix sort: O(n+k)
  • Quick sort: O(n) in the worst case (recursion stack; O(log n) on average)
  • Merge sort: O(n) (auxiliary array)
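
As an illustrative Python sketch (ours, not from the article), the two functions below contrast O(1) auxiliary space, where only a few variables are reused, with O(n) auxiliary space, where a new structure proportional to the input is allocated:

def reverse_in_place(values):
    # O(1) auxiliary space: only two index variables; the list is mutated in place
    left, right = 0, len(values) - 1
    while left < right:
        values[left], values[right] = values[right], values[left]
        left += 1
        right -= 1
    return values

def reversed_copy(values):
    # O(n) auxiliary space: a new list of the same size is allocated
    return [values[i] for i in range(len(values) - 1, -1, -1)]
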
Summary
Big O Notation is particularly helpful for understanding algorithms when working with big data. It helps programmers determine the scalability of an algorithm or count the steps necessary to produce outputs based on the data the program uses. If you want to take it a level above, enroll in our Data Structures Certification program. It will benefit you a lot.

