The naive implementation runs in O(n²) time in the worst case but uses only O(1) space.
/*
Without a buffer, we can iterate with two pointers:
* "current" iterates through the list as usual,
* while "runner" iterates through all prior nodes to check for duplicates.
Runner will only ever see one duplicate per node, because if there were multiple duplicates they would have been removed already.
*/
public static void deleteDups(LinkedListNode head) {
    if (head == null) return;
    LinkedListNode previous = head;
    LinkedListNode current = previous.next;
    while (current != null) {
        LinkedListNode runner = head;
        while (runner != current) { // check all earlier nodes for a duplicate
            if (runner.data == current.data) {
                LinkedListNode tmp = current.next; // remove current
                previous.next = tmp;
                current = tmp; // advance current to the next node
                break;         // all other duplicates have already been removed
            }
            runner = runner.next;
        }
        if (runner == current) { // no duplicate found - advance both pointers
            previous = current;
            current = current.next;
        }
    }
}
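For reference, here is a minimal sketch of the LinkedListNode class these snippets assume (it is not defined in the answer itself), along with a small usage example:

class LinkedListNode {
    int data;
    LinkedListNode next;
    LinkedListNode(int data) { this.data = data; }
}

// Example: 1 -> 2 -> 1 -> 3 -> 2 becomes 1 -> 2 -> 3
LinkedListNode head = new LinkedListNode(1);
head.next = new LinkedListNode(2);
head.next.next = new LinkedListNode(1);
head.next.next.next = new LinkedListNode(3);
head.next.next.next.next = new LinkedListNode(2);
deleteDups(head);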
How about using a HashMap? This way it takes O(n) time and O(n) space. Since we only need the keys, a HashSet is enough. Note that indexed access (get(i)/remove(i)) would cost O(n) per call on a linked list, so we walk the nodes directly:
import java.util.HashSet;

public static void removeDups(LinkedListNode head) {
    HashSet<Integer> seen = new HashSet<Integer>();
    LinkedListNode previous = null;
    LinkedListNode current = head;
    while (current != null) {
        if (seen.contains(current.data)) {
            previous.next = current.next; // unlink the duplicate
        } else {
            seen.add(current.data); // first occurrence of this value
            previous = current;
        }
        current = current.next;
    }
}
Of course, we assume that the hash set has O(1) amortized insert and lookup.
Another solution is to mergesort the list and then remove duplicates in a single pass from start to end (note that sorting does not preserve the original order of the elements). This takes O(n log n) overall:
- mergesort is O(n log n)
- removing duplicates from a sorted list is O(n)
- therefore the entire operation takes O(n log n)
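Here is a minimal sketch of the sort-based approach, reusing the LinkedListNode class assumed above (the sortAndRemoveDups, mergeSort, and merge helper names are illustrative, not part of the answer above):

public static LinkedListNode sortAndRemoveDups(LinkedListNode head) {
    head = mergeSort(head);                      // O(n log n)
    for (LinkedListNode n = head; n != null; ) { // O(n) dedup pass
        if (n.next != null && n.next.data == n.data) {
            n.next = n.next.next; // skip the duplicate
        } else {
            n = n.next;
        }
    }
    return head;
}

private static LinkedListNode mergeSort(LinkedListNode head) {
    if (head == null || head.next == null) return head;
    // Split the list in half with slow/fast pointers.
    LinkedListNode slow = head;
    LinkedListNode fast = head.next;
    while (fast != null && fast.next != null) {
        slow = slow.next;
        fast = fast.next.next;
    }
    LinkedListNode second = slow.next;
    slow.next = null;
    return merge(mergeSort(head), mergeSort(second));
}

private static LinkedListNode merge(LinkedListNode a, LinkedListNode b) {
    LinkedListNode dummy = new LinkedListNode(0); // placeholder head
    LinkedListNode tail = dummy;
    while (a != null && b != null) {
        if (a.data <= b.data) { tail.next = a; a = a.next; }
        else { tail.next = b; b = b.next; }
        tail = tail.next;
    }
    tail.next = (a != null) ? a : b;
    return dummy.next;
}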