Union Find

Union Find by rank and path compression
With union by rank alone, the time complexity of the union and find operations is O(log n); adding path compression brings the amortized cost down to O(α(n)) (inverse Ackermann), which is effectively constant.

// Detect cycle in an undirected graph.

public class Main {
	public static class Edge {
		int v1, v2;
		public Edge(int v1, int v2) {
			this.v1 = v1;
			this.v2 = v2;
		}
	}

	public static class Graph {
		int v;    // number of vertices
		Edge[] e; // edge list
		public Graph(int v, int edgeCount) {
			this.v = v;
			e = new Edge[edgeCount];
		}
	}

	public static class UnionFind {
		int rank;
		int parent;
		public UnionFind(int rank, int parent) {
			this.rank = rank;
			this.parent = parent;
		}
	}

	// Find with path compression: every node on the search path is
	// re-pointed directly at the root.
	public static int find(int x, UnionFind[] uf) {
		if (uf[x].parent != x) {
			uf[x].parent = find(uf[x].parent, uf);
		}
		return uf[x].parent;
	}

	// Union by rank: attach the shorter tree under the taller one;
	// only a union of equal ranks makes the tree grow.
	public static void union(int v1, int v2, UnionFind[] uf) {
		int p1 = find(v1, uf);
		int p2 = find(v2, uf);
		if (uf[p1].rank < uf[p2].rank) {
			uf[p1].parent = p2;
		} else if (uf[p1].rank > uf[p2].rank) {
			uf[p2].parent = p1;
		} else {
			uf[p1].parent = p2;
			uf[p2].rank++;
		}
	}

	// Returns 1 if the graph contains a cycle: an edge whose endpoints
	// are already in the same set closes a cycle.
	public static int checkCycle(Graph g) {
		UnionFind[] uf = new UnionFind[g.v];
		for (int i = 0; i < g.v; i++) {
			uf[i] = new UnionFind(0, i);
		}
		for (int i = 0; i < g.e.length; i++) {
			int p1 = find(g.e[i].v1, uf);
			int p2 = find(g.e[i].v2, uf);
			if (p1 == p2) {
				return 1;
			}
			union(g.e[i].v1, g.e[i].v2, uf);
		}
		return 0;
	}

	public static void main(String[] args) {
		Graph g = new Graph(5, 4);
		g.e[0] = new Edge(0, 1);
		g.e[1] = new Edge(1, 2);
		g.e[2] = new Edge(2, 3);
		g.e[3] = new Edge(3, 0);
		if (checkCycle(g) == 1) {
			System.out.println("Graph contains cycle");
		} else {
			System.out.println("Graph does not contain cycle");
		}
	}
}
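As a variant of the code above, here is a sketch of an array-based disjoint set with an iterative find, which avoids deep recursion on long parent chains. The class and method names (`DSU`, `union` returning a boolean) are my own choices for illustration, not from the original post:

```java
// Array-based union-find with iterative path compression and union by rank.
public class DSU {
    private final int[] parent;
    private final int[] rank;

    public DSU(int n) {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i; // each node starts as its own root
    }

    // Two-pass iterative find: first locate the root, then re-point
    // every node on the path directly at it (path compression).
    public int find(int x) {
        int root = x;
        while (parent[root] != root) root = parent[root];
        while (parent[x] != root) {
            int next = parent[x];
            parent[x] = root;
            x = next;
        }
        return root;
    }

    // Union by rank; returns false if x and y were already connected,
    // which is exactly the cycle test used in checkCycle above.
    public boolean union(int x, int y) {
        int rx = find(x), ry = find(y);
        if (rx == ry) return false;
        if (rank[rx] < rank[ry]) { int t = rx; rx = ry; ry = t; }
        parent[ry] = rx;
        if (rank[rx] == rank[ry]) rank[rx]++;
        return true;
    }
}
```

With this variant, cycle detection reduces to checking whether `union` ever returns false while adding edges.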

In the naive implementation, the time complexity of the union and find operations is O(n) in the worst case.

// Naive implementation of find
int find(int parent[], int i)
{
    if (parent[i] == -1)
        return i;
    return find(parent, parent[i]);
}

// Naive implementation of union()
void Union(int parent[], int x, int y)
{
    int xset = find(parent, x);
    int yset = find(parent, y);
    parent[xset] = yset;
}
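To make the O(n) behaviour concrete, here is a sketch of the same naive scheme in Java (the class name `NaiveUF` is mine): chained unions build a list-shaped tree, so a find may walk every link.

```java
// Naive union-find: no rank, no path compression.
// union(0,1); union(1,2); ... degenerates into a linked list,
// so find on the deepest node costs O(n).
public class NaiveUF {
    private final int[] parent;

    public NaiveUF(int n) {
        parent = new int[n];
        java.util.Arrays.fill(parent, -1); // -1 marks a root, as in the C code above
    }

    public int find(int i) {
        while (parent[i] != -1) i = parent[i]; // may walk the whole chain
        return i;
    }

    public void union(int x, int y) {
        int xset = find(x);
        int yset = find(y);
        if (xset != yset) parent[xset] = yset;
    }
}
```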

LRU Cache

The LRU caching scheme evicts the least recently used frame when the cache is full and a page is referenced that is not in the cache.

We use two data structures to implement an LRU Cache.

1. A Queue implemented using a doubly linked list. The maximum size of the queue equals the total number of frames available (the cache size).
The most recently used pages are near the front end and the least recently used pages are near the rear end.

2. A HashMap with page number as key and address of the corresponding queue node as value.

Time Complexity:

The LRU cache is a hash table of keys and doubly linked nodes. The hash table makes get() O(1), and the doubly linked list makes node insertion and removal O(1).

import java.util.HashMap;

class Node {
    int key;
    int value;
    Node pre;
    Node next;
    public Node(int key, int value) {
        this.key = key;
        this.value = value;
    }
}

public class LRUCache {
    int capacity;
    HashMap<Integer, Node> map = new HashMap<Integer, Node>();
    Node head = null; // most recently used
    Node end = null;  // least recently used

    public LRUCache(int capacity) {
        this.capacity = capacity;
    }

    public int get(int key) {
        if (map.containsKey(key)) {
            Node n = map.get(key);
            remove(n);   // move the accessed node to the front
            setHead(n);
            return n.value;
        }
        return -1;
    }

    // Unlink n from the list, fixing head/end when n sits at either end.
    public void remove(Node n) {
        if (n.pre != null) {
            n.pre.next = n.next;
        } else {
            head = n.next;
        }
        if (n.next != null) {
            n.next.pre = n.pre;
        } else {
            end = n.pre;
        }
    }

    // Insert n at the front (most recently used position).
    public void setHead(Node n) {
        n.next = head;
        n.pre = null;
        if (head != null)
            head.pre = n;
        head = n;
        if (end == null)
            end = head;
    }

    public void set(int key, int value) {
        if (map.containsKey(key)) {
            Node old = map.get(key);
            old.value = value;
            remove(old);
            setHead(old);
        } else {
            Node created = new Node(key, value);
            if (map.size() >= capacity) {
                map.remove(end.key); // evict the least recently used entry
                remove(end);
            }
            setHead(created);
            map.put(key, created);
        }
    }
}

Luckily, Java already provides a class that is very suitable for our purpose – LinkedHashMap. This class keeps the entries in a HashMap for fast lookup while also maintaining a doubly linked list of the entries in either access order or insertion order. This is configurable, so we set accessOrder to true. It also has a method removeEldestEntry() which we can override to return true when the cache size exceeds the specified capacity (upper limit). So here is the implementation.

import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private int capacity; // Maximum number of items in the cache.

    public LRUCache(int capacity) {
        super(capacity + 1, 1.0f, true); // Pass 'true' for accessOrder.
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return (size() > this.capacity);
    }
}
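A usage sketch for the LinkedHashMap approach: an anonymous subclass behaves the same way as the named class above. The helper name `newLruCache` and the capacity of 2 are illustrative choices of mine, not from the post.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    // Build an access-ordered LinkedHashMap that evicts once size exceeds capacity.
    static <K, V> Map<K, V> newLruCache(final int capacity) {
        return new LinkedHashMap<K, V>(capacity + 1, 1.0f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = newLruCache(2);
        cache.put(1, "one");
        cache.put(2, "two");
        cache.get(1);          // touching key 1 makes key 2 the eldest entry
        cache.put(3, "three"); // exceeds capacity: key 2 is evicted
        System.out.println(cache.keySet()); // prints [1, 3]
    }
}
```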

Sources for the above post: http://www.programcreek.com/2013/03/leetcode-lru-cache-java/, http://www.geeksforgeeks.org/implement-lru-cache/, http://www.codewalk.com/2012/04/least-recently-used-lru-cache-implementation-java.html