Big-O notation describes how an algorithm scales in the worst case as the number of elements in a data structure increases. It can also describe other behavior, such as memory consumption. Sometimes you may need to choose a slower algorithm because it consumes less memory.
The algorithms used by various data structures for operations like search, insert, and delete fall into the following performance groups:
Constant time O(1)
Logarithmic O(log n)
Linear O(n)
Quadratic O(n^2)
Polynomial O(n^c)
Exponential O(c^n)
Factorial O(n!)
where n is the number of elements in the data structure.
Examples
1. HashMap - Finding an element
Usually constant time, which is O(1)
This is constant time because a hashing function is used to locate an element, and computing a hash value does not depend on the number of elements in the HashMap. (This assumes a good hash function that spreads keys evenly; with many collisions, lookups degrade.)
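A minimal sketch of the point above: `get()` hashes the key and jumps straight to the right bucket, so the cost of a lookup does not grow with the map's size. The map contents here are made up for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapLookup {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        ages.put("bob", 25);
        // get() computes the key's hash and goes directly to its bucket;
        // the work done is the same whether the map holds 2 entries or 2 million.
        System.out.println(ages.get("alice")); // prints 30
    }
}
```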
2. Array / List / LinkedList - Linear search
Linear, which is O(n)
This is linear because you will have to search the entire list. This means that if a list is twice as big, searching it will take twice as long.
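A linear search can be sketched as a single loop over the list; in the worst case (the element is last or absent) all n elements are examined. The helper name and sample data are illustrative, not from any library.

```java
import java.util.Arrays;
import java.util.List;

public class LinearSearch {
    // Walks the list from the front. Worst case, every one of the
    // n elements is examined, hence O(n): double the list, double the work.
    static int indexOf(List<Integer> list, int target) {
        for (int i = 0; i < list.size(); i++) {
            if (list.get(i) == target) {
                return i;
            }
        }
        return -1; // not found after checking all n elements
    }

    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(4, 8, 15, 16, 23, 42);
        System.out.println(indexOf(numbers, 23)); // prints 4
        System.out.println(indexOf(numbers, 99)); // prints -1
    }
}
```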
3. Array - Sorting
Comparing every element in an array against the others to sort the array has quadratic complexity, which is O(n^2)
A nested for loop is O(n^2)
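Selection sort, sketched below, is one such nested-loop sort: the outer loop runs n times and the inner loop up to n times, so the number of comparisons grows as n^2.

```java
import java.util.Arrays;

public class SelectionSort {
    // Outer loop: n passes. Inner loop: up to n comparisons per pass.
    // Total comparisons grow as n * n, i.e. O(n^2).
    static void sort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[min]) {
                    min = j;
                }
            }
            int tmp = a[i];
            a[i] = a[min];
            a[min] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 7};
        sort(data);
        System.out.println(Arrays.toString(data)); // prints [1, 2, 5, 7, 9]
    }
}
```

Note that better sorts such as mergesort reach O(n log n); the nested-loop versions are simply the easiest to reason about.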
4. Sorted array / ArrayList - Binary search
Logarithmic, which is O(log n)
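Java ships a binary search in `java.util.Arrays`; each probe halves the remaining range, so at most about log2(n) probes are needed. The sample array is illustrative.

```java
import java.util.Arrays;

public class BinarySearchDemo {
    public static void main(String[] args) {
        // Binary search requires the input to be sorted.
        int[] sorted = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
        // Each probe halves the search range: 10 -> 5 -> 2 -> 1 elements,
        // so the worst case is about log2(n) probes, i.e. O(log n).
        int index = Arrays.binarySearch(sorted, 23);
        System.out.println(index); // prints 5
    }
}
```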
5. LinkedList - Searching an element
Normally requires O(n)
This is one of the disadvantages of a LinkedList compared with an ArrayList or array: a linked list does not support constant-time access by index, so it cannot use binary search and must walk the list node by node from the head, which is O(n) no matter what.
6. ArrayList / Array - Searching an element
Linear search on an unsorted ArrayList or array is O(n), but because both support constant-time access by index, a sorted ArrayList or array can use binary search, which is O(log n) and scales far better as the number of elements increases.
A logarithmic running time means that if 10 items take at most x amount of time, 100 items will take at most about 2x, and 10,000 items at most about 4x.
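That slow growth can be seen by counting how many times n can be halved before reaching 1, which is roughly the worst-case number of probes a binary search makes. A small sketch (the helper name is made up):

```java
public class LogGrowth {
    // Counts how many times n can be halved (integer division) before
    // reaching 1 -- roughly the worst-case probe count of a binary search.
    static int halvings(long n) {
        int count = 0;
        while (n > 1) {
            n /= 2;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(halvings(10));     // prints 3
        System.out.println(halvings(100));    // prints 6
        System.out.println(halvings(10_000)); // prints 13
    }
}
```

Squaring the input size from 100 to 10,000 only roughly doubles the probe count, which is exactly the behavior described above.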