Access – O(1)
Appending – O(n)
Prepending – O(n)
Insertion – O(n)
Deletion – O(n)
Swapping – O(1)
Can I expect better performance for some operations than my assumptions here? I don’t expect any of this to be spelled out in a specification, but in practice it could be that all the major implementations use optimized arrays behind the scenes. Are there dynamically expanding arrays or other performance-boosting techniques at work?
The reason I’m wondering is that I’m researching some sorting algorithms, most of which seem to assume appending and deleting are O(1) operations when describing their overall big O.
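For concreteness, here is the kind of step I mean (just a rough sketch in JavaScript of a textbook merge-sort merge); the analyses I’m reading count each `push` here as O(1):

```js
// Merge step of merge sort: the textbook analysis counts each push() as O(1),
// which is what makes merging two halves of total length n an O(n) step.
function merge(left, right) {
  const out = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]); // append, assumed O(1)
  }
  while (i < left.length) out.push(left[i++]);
  while (j < right.length) out.push(right[j++]);
  return out;
}
```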
- Access – O(1)
- Appending – Amortized O(1) (sometimes resizing the hashtable is required; usually only insertion is required)
- Prepending – O(n) via `unshift`, since it requires reassigning all the indexes
- Insertion – Amortized O(1) if the value does not exist. O(n) if you want to shift existing values (e.g. using `splice`)
- Deletion – Amortized O(1) to remove a value, O(n) if you want to reassign indices via `splice` (see the sketch after this list)
- Swapping – O(1)
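As a quick sketch (assuming JavaScript, which the mention of `unshift` suggests), here is how those costs show up on a plain array; the complexities in the comments are the ones from the list above, not guarantees from the spec:

```js
const a = [10, 20, 30];

const second = a[1];          // access: O(1)
a.push(40);                   // append: amortized O(1); occasionally the backing store is resized
a.unshift(5);                 // prepend: O(n), every existing index must be reassigned
a.splice(2, 0, 15);           // insert in the middle: O(n), shifts everything after index 2
a.splice(1, 1);               // delete and reindex: O(n)
delete a[2];                  // delete without reindexing: O(1), but leaves a hole (sparse array)
[a[0], a[3]] = [a[3], a[0]];  // swap two elements: O(1)
a[100] = 99;                  // set an index that doesn't exist yet: amortized O(1), array becomes sparse
```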
In general, setting or unsetting any key in a dict is amortized O(1), and the same goes for arrays, regardless of what the index is. Any operation that requires renumbering existing values is O(n) simply because you have to update all the affected values.
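If you want to check this empirically, a rough micro-benchmark along these lines (just a sketch; warm-up and engine optimizations can skew small inputs) shows the total cost of building an array with `push` growing roughly linearly with n, while `unshift` grows roughly quadratically:

```js
// Total time to build an array of n elements with a given operation.
// push is amortized O(1) per call (roughly O(n) total); unshift is O(n) per call (roughly O(n^2) total).
function timeBuild(label, n, op) {
  const arr = [];
  const start = Date.now();
  for (let i = 0; i < n; i++) op(arr, i);
  console.log(`${label} n=${n}: ${Date.now() - start} ms`);
}

for (const n of [10000, 20000, 40000]) {
  timeBuild('push   ', n, (arr, i) => arr.push(i));    // doubling n roughly doubles the time
  timeBuild('unshift', n, (arr, i) => arr.unshift(i)); // doubling n roughly quadruples the time
}
```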