Friday, December 27, 2019

EnumMap walkthrough

EnumMap is a Map implementation for enum keys. It extends AbstractMap and implements the Map interface. EnumMap is a collection that stores key-value pairs similar to HashMap; however, the key in an EnumMap, as you might already have guessed, is an enum. Please go through https://delveintojava.blogspot.com/2019/11/hashmap-walkthrough.html to understand HashMap concepts before starting this section.

EnumMap maintains its entries in the natural order of their keys, i.e. in the order in which the enum constants are declared inside the enum. Let's clarify this with an example.

    public enum Seasons {
        WINTER, SPRING, SUMMER, FALL;
    }

    public static void main(String[] args) {
        EnumMap<Seasons, String> funMap = new EnumMap<>(Seasons.class);

        funMap.put(Seasons.WINTER, "Build a snowman");
        funMap.put(Seasons.SPRING, "Maintain the garden");
        funMap.put(Seasons.SUMMER, "Go surfing");
        funMap.put(Seasons.FALL, "Lots of Shopping!");
     
        System.out.println("EnumMap: " + funMap);

        System.out.println("Key: " + Seasons.WINTER + " Value: " + funMap.get(Seasons.WINTER));
        System.out.println("Does funMap has key " + Seasons.SPRING + ": " + funMap.containsKey(Seasons.SPRING));
        System.out.println("Does funMap has value - Maintain the garden: " + funMap.containsValue("Maintain the garden"));
        System.out.println("Does funMap has null value: " + funMap.containsValue(null));
    }


OUTPUT:
EnumMap: {WINTER=Build a snowman, SPRING=Maintain the garden, SUMMER=Go surfing, FALL=Lots of Shopping!}
Key: WINTER Value: Build a snowman
Does funMap has key SPRING: true
Does funMap has value - Maintain the garden: true
Does funMap has null value: false


From the example, you can see that we are creating an EnumMap object with Seasons, an enum, as the key type. If we try to instantiate EnumMap with any class other than an enum, it results in a compilation error - Bound mismatch. As you can see, the class declaration below shows that EnumMap expects a key that is an Enum.



Let's check EnumMap's constructor and understand why we pass the class object as a parameter while instantiating an EnumMap object.




In the constructor, the key type is stored first. Second, all the constants of the enum type we are using are returned by getKeyUniverse and cached, since the set of keys in an EnumMap is always constant. Finally, an array of objects is created to store the value for each key, and the array length is set to the number of enum constants. In the above example, it would be 4.
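The three steps above can be sketched as a standalone class. Names mirror the OpenJDK source, but the internal getKeyUniverse is replaced with the public getEnumConstants() so the sketch compiles on its own:

```java
// Simplified sketch of EnumMap's constructor state (not the real implementation).
public class EnumMapSketch<K extends Enum<K>> {
    private final Class<K> keyType;   // 1. remember the key class
    private final K[] keyUniverse;    // 2. cache all enum constants
    private final Object[] vals;      // 3. one value slot per constant

    public EnumMapSketch(Class<K> keyType) {
        this.keyType = keyType;
        this.keyUniverse = keyType.getEnumConstants(); // real code uses a cached getKeyUniverse
        this.vals = new Object[keyUniverse.length];
    }

    int capacity() { return vals.length; }

    enum Seasons { WINTER, SPRING, SUMMER, FALL }

    public static void main(String[] args) {
        EnumMapSketch<Seasons> m = new EnumMapSketch<>(Seasons.class);
        System.out.println(m.capacity()); // 4 - one slot per Seasons constant
    }
}
```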

PUT:
Let's look into the put implementation for EnumMap.


First, the put implementation checks whether the key is of the same class type as the one passed in the constructor; the typeCheck method handles this. If the check fails, a ClassCastException is thrown.


Next, the index at which to store the value is calculated using the ordinal method provided by Enum. The ordinal method returns the index of the enum constant, starting from 0. So, in our example, the ordinal of WINTER = 0, SPRING = 1, SUMMER = 2, FALL = 3.
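A quick way to see the indices EnumMap works with (the Seasons enum here is the one from the example above):

```java
// Prints the ordinal of each constant - the array index EnumMap uses for it.
public class OrdinalDemo {
    enum Seasons { WINTER, SPRING, SUMMER, FALL }

    public static void main(String[] args) {
        for (Seasons s : Seasons.values()) {
            System.out.println(s + " -> " + s.ordinal());
        }
        // WINTER -> 0, SPRING -> 1, SUMMER -> 2, FALL -> 3
    }
}
```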

In the put implementation, we can see that maskNull is called to get the value to be stored. Why is maskNull required?
For that, let's understand one more property of EnumMap. It allows storage of null values, but not null keys, since keys must be enum constants (you will get a NullPointerException if you try to insert a null key). The get method (which we will see next) returns null when nothing is stored for an enum-constant key. It also returns null if we stored a null value for that key. So, how do we know whether null is returned because we stored null, or because there is no value stored for the key? To handle this situation, maskNull returns a dummy object, and this object is stored in the EnumMap whenever the user inserts null for any key (we will clarify this more later).
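The masking idea fits in a few lines. This is a simplified sketch; NULL, maskNull and unmaskNull mirror the names used in the OpenJDK source:

```java
// Sketch of null masking: a private sentinel object stands in for null values.
public class NullMaskDemo {
    private static final Object NULL = new Object(); // sentinel stored instead of null

    static Object maskNull(Object value) {
        return value == null ? NULL : value;
    }

    static Object unmaskNull(Object value) {
        return value == NULL ? null : value;
    }

    public static void main(String[] args) {
        Object stored = maskNull(null);
        System.out.println(stored != null);      // true: a sentinel is stored, not null
        System.out.println(unmaskNull(stored));  // null: the caller still sees null
    }
}
```

Because the table slot holds the sentinel rather than null, a non-null slot always means "a mapping exists", even when the user-visible value is null.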


The value is then stored in the vals object array, and the old value that we retrieved earlier is returned via the unmaskNull method.


GET:
The get method calculates the index of the value for the passed key using the same ordinal method as the put implementation, and returns the value using unmaskNull.




isValidKey checks whether the key is null or not, and also whether the key is of the Enum type that we saved while instantiating EnumMap. If not, the get method directly returns null.

So, the get method can return null when the stored value is null, when there is no value for the key, or when we pass a null key. We can always check whether a null value actually exists by using the containsValue and containsKey methods (or the internal containsMapping method).
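Here is a small runnable check of that distinction using the real java.util.EnumMap:

```java
import java.util.EnumMap;

// get() returns null for both a stored null and an absent key;
// containsKey() tells the two cases apart.
public class NullValueDemo {
    enum Seasons { WINTER, SPRING, SUMMER, FALL }

    public static void main(String[] args) {
        EnumMap<Seasons, String> map = new EnumMap<>(Seasons.class);
        map.put(Seasons.WINTER, null);

        System.out.println(map.get(Seasons.WINTER));         // null (null was stored)
        System.out.println(map.get(Seasons.SUMMER));         // null (nothing stored)
        System.out.println(map.containsKey(Seasons.WINTER)); // true
        System.out.println(map.containsKey(Seasons.SUMMER)); // false
    }
}
```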



The put and get methods simply use an array to store only values; the implementation doesn't need complex data structures or logic. This makes EnumMap more efficient than HashMap. When you have a fixed set of keys to be used for storing values, it is probably a good idea to use EnumMap.

EnumMap is not synchronized and is not suitable for multi-threaded programs without external synchronization. Its iterators are weakly consistent (sometimes called fail-safe), as opposed to the fail-fast iterators of HashMap.


References:
http://androidxref.com/9.0.0_r3/xref/libcore/ojluni/src/main/java/java/util/EnumMap.java
http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/EnumMap.java







Friday, December 13, 2019

ConcurrentHashMap walkthrough

ConcurrentHashMap uses the same concept as HashMap to store key-value pairs. Before starting this section, please go through https://delveintojava.blogspot.com/2019/11/hashmap-walkthrough.html for a better understanding of ConcurrentHashMap.

As you might have noticed, the put and remove methods in HashMap are not synchronized. That means, if multiple threads try to update the same HashMap object, there is a chance of data inconsistency. So, HashMap is not preferred for multi-threaded programs. But you might have heard of a legacy class called Hashtable, which is built just for that purpose. So, why do we need one more class to serve the purpose of concurrency? Let's see a few methods inside the Hashtable class and try to find out the flaw.







Did you see the pattern? Take a look at one more.


Yes, you got that right. Every method in Hashtable is synchronized. So, if a thread updates the Hashtable, all other operations are blocked for the rest of the threads, since the Hashtable object's lock is used for synchronization. This ensures that all the threads work with the latest data in the Hashtable; however, it also makes Hashtable slow and inefficient. Can we improve its efficiency? You can now guess that this is the exact reason why ConcurrentHashMap was introduced in Java 1.5.
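The coarse-locking pattern can be sketched with a toy class. This is illustrative only: java.util.Hashtable has its own internal table, but the synchronized-method shape is the same.

```java
// Every public method locks `this`, so only one thread can touch the table at a time.
public class CoarseLockTable {
    private final java.util.HashMap<Object, Object> table = new java.util.HashMap<>();

    public synchronized Object get(Object key) {               // locks the whole object
        return table.get(key);
    }

    public synchronized Object put(Object key, Object value) { // same single lock
        return table.put(key, value);
    }

    public static void main(String[] args) {
        CoarseLockTable t = new CoarseLockTable();
        t.put("k", "v");
        System.out.println(t.get("k")); // "v" - correct, but one thread at a time
    }
}
```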

Let's find out the improvement ConcurrentHashMap has to offer.
We know that HashMap contains buckets to store the entries. Each bucket is like a different store house for the entries whose key evaluates to the same hash value and in turn the same bucket index. So, instead of locking the complete object, we can lock just the bucket that we are working on. This will allow just as many threads as the number of buckets to simultaneously operate on the Map object.

We know the default capacity of HashMap is 16; so are the initial capacity and concurrency level of ConcurrentHashMap. That means 16 threads can simultaneously update a ConcurrentHashMap, provided each key used by the different threads maps to a different bucket.
Using a significantly higher concurrency level can increase space and time overhead, and a significantly lower value can lead to thread contention.



You can increase the concurrency level of ConcurrentHashMap with the three-parameter constructor, the last parameter of which serves this purpose.
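For example (note that since Java 8, the concurrencyLevel argument is only a sizing hint, not a hard limit on concurrent writers):

```java
import java.util.concurrent.ConcurrentHashMap;

// The three-parameter constructor: initialCapacity, loadFactor, concurrencyLevel.
public class ConcurrencyLevelDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map =
                new ConcurrentHashMap<>(64, 0.75f, 32);
        map.put("key", "value");
        System.out.println(map.get("key")); // value
    }
}
```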



This is all good. Now let's get to the part where the magic happens. We will take the put method as an example.


You see "synchronized" somewhere in there. You probably figured things out. But let's get there one step at a time.

1. The first line is important: it tells us that ConcurrentHashMap does not allow null keys or null values. It throws a NullPointerException when you try to insert a null key or value. Why does ConcurrentHashMap not allow null keys and values?
One thing to note is that get operation in ConcurrentHashMap does not block, so it may overlap with update operations (put, remove). So, retrieval operation returns the value of the most recent update operation.

This causes ambiguity if we allow null keys and values. If get(key) returns null, you can't tell whether the key maps to null or the key isn't mapped at all. In the case of HashMap, you can check this via the containsKey() method, but in the case of ConcurrentHashMap, the value might change between the two calls due to other threads operating on the same node.

For example, after checking containsKey, the value might no longer be there because of a removal by another thread. You might say the same thing could happen with HashMap as well. Well, with a normal HashMap, we assume the program is not multi-threaded. If multi-threading is your goal, then HashMap shouldn't be used in the first place.
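A quick demonstration of the null rejection:

```java
import java.util.concurrent.ConcurrentHashMap;

// Both a null key and a null value throw NullPointerException immediately.
public class NullRejectDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        try {
            map.put(null, "value");
        } catch (NullPointerException e) {
            System.out.println("null key rejected");
        }
        try {
            map.put("key", null);
        } catch (NullPointerException e) {
            System.out.println("null value rejected");
        }
    }
}
```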


2. The next line is the calculation of the hash value for the key.


3. The next few lines contain 3 conditional checks:
    a. If the ConcurrentHashMap table is null or its length is 0, then initialize the table, and the for loop continues.
    b. If the node at the index is null, then insert the new node with the given key, value and hash as the first node at that index.
    c. If a resize is in progress, the helpTransfer method helps return the correct index for the new key-value pair, and the for loop continues.


4. If the conditions in step 3 are not satisfied, that means there are already some entries present in the bucket. We need to check whether the bucket already contains the key; if it does, replace the value for that entry, and if it does not, insert a new entry with the given key and value at the end.


The implementation is pretty much the same as HashMap's. However, the update operation is blocked for other threads using the synchronized keyword. Here, the lock of the first node of the bucket is used for the synchronization. Since we acquire the lock on the first entry of the bucket, all other threads trying to update the same bucket get blocked.
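The bucket-level locking idea can be sketched with a toy map. This is heavily simplified: the real implementation uses a CAS for the empty-bucket case, volatile table reads, resizing and tree bins, none of which are shown here.

```java
// Illustrative sketch of per-bucket locking: only the touched bucket is locked.
public class StripedPutDemo {
    static class Node {
        final Object key;
        Object value;
        Node next;
        Node(Object key, Object value) { this.key = key; this.value = value; }
    }

    private final Node[] table = new Node[16];

    public void put(Object key, Object value) {
        int i = (table.length - 1) & key.hashCode();
        if (table[i] == null) {
            table[i] = new Node(key, value);   // the real code does this with a CAS
            return;
        }
        synchronized (table[i]) {              // lock only this bucket's first node
            for (Node n = table[i]; ; n = n.next) {
                if (n.key.equals(key)) { n.value = value; return; }      // replace
                if (n.next == null) { n.next = new Node(key, value); return; } // append
            }
        }
    }

    public Object get(Object key) {
        int i = (table.length - 1) & key.hashCode();
        for (Node n = table[i]; n != null; n = n.next)
            if (n.key.equals(key)) return n.value;
        return null;
    }

    public static void main(String[] args) {
        StripedPutDemo m = new StripedPutDemo();
        m.put("a", 1);
        m.put("a", 2);
        System.out.println(m.get("a")); // 2 - second put replaced the value
    }
}
```

Threads hitting different buckets never contend for the same lock, which is the whole point of the design.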

The get implementation is not synchronized.


However, get will always see the latest value for the key. This is ensured by the use of the volatile keyword on the entry's value, as you can see in the Node's member variables.
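A sketch of the relevant part of the Node class (field names mirror the OpenJDK source; the rest of the class is omitted):

```java
// The volatile fields are what let an unsynchronized get() observe the latest put().
public class NodeSketch {
    static class Node<K, V> {
        final int hash;
        final K key;
        volatile V val;           // volatile: a reader always sees the latest write
        volatile Node<K, V> next; // volatile: safe traversal during concurrent updates

        Node(int hash, K key, V val) {
            this.hash = hash;
            this.key = key;
            this.val = val;
        }
    }

    public static void main(String[] args) {
        Node<String, String> n = new Node<>("k".hashCode(), "k", "v1");
        n.val = "v2"; // immediately visible to all threads, no lock needed
        System.out.println(n.val); // v2
    }
}
```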





References: http://androidxref.com/9.0.0_r3/xref/libcore/ojluni/src/main/java/java/util/concurrent/ConcurrentHashMap.java
http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/concurrent/ConcurrentHashMap.java





Monday, December 9, 2019

IdentityHashMap walkthrough

IdentityHashMap implements Map interface, that means it stores key value pairs similar to HashMap. What is the difference then you say? As the name suggests, IdentityHashMap compares the identity of the key object instead of checking object equality. In other words, IdentityHashMap uses reference equality (==) to compare keys while HashMap uses object equality.
You can go through https://delveintojava.blogspot.com/2019/11/hashmap-walkthrough.html before starting this section to understand the implementation of HashMap.

Unlike HashMap, IdentityHashMap stores just keys and values in an array - keys at even array index positions and values at the next adjacent odd-indexed positions. So, no nodes are required, and there is no need for buckets; or we can say a bucket consists of a single key-value pair.




IdentityHashMap uses System.identityHashCode(Object) to find the index location in the table array instead of hashCode(). The identityHashCode method always returns the same value as the default implementation of hashCode. In other words, the value returned by identityHashCode does not change throughout the life of an object.
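The practical consequence: two equal-but-distinct objects count as two different keys.

```java
// Reference equality vs object equality, and what IdentityHashMap does with it.
public class IdentityDemo {
    public static void main(String[] args) {
        String a = new String("key");
        String b = new String("key");

        System.out.println(a.equals(b)); // true  - object equality
        System.out.println(a == b);      // false - two different references
        // Identity hashes are per-object, not per-value, so these almost
        // certainly differ (a coincidental collision is theoretically possible):
        System.out.println(System.identityHashCode(a) == System.identityHashCode(b));

        java.util.IdentityHashMap<String, Integer> map = new java.util.IdentityHashMap<>();
        map.put(a, 1);
        map.put(b, 2);
        System.out.println(map.size()); // 2 - both kept, a HashMap would keep only one
    }
}
```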



The shift operations for calculating the returned index ensure that only even values between 0 and (length - 2) are returned. The even index stores the key, and the next odd index stores the value.

Let's check the put implementation for IdentityHashMap.




As you can see, there is no implementation to create new nodes to store key-value pairs. The index search implementation is pretty straightforward. The for loop checks whether the key is already present. If the index location does not have any key (it is empty), then the for loop exits, and the key is put at that index and the value at the next index.

Algorithm for put:
1. Find the index for the new key using the hash method.
2. If that index of the array is empty, put the new key at that location and its corresponding value at the next index, and return null.
3. If that index of the array is not empty, check:
      a. If the index holds the same key object (reference equality) as the one being inserted, replace the value at the next index with the new value, and return the old value.
      b. Otherwise, move to the next key slot (current index + 2) and repeat steps 2 and 3.

One last thing in the put method - maskNull. It allows values to be inserted for null keys. If you pass a null key, the maskNull method returns a final dummy object. This dummy key object is then used to calculate the index for storing the corresponding value of the null key. If the key is not null, maskNull returns the same key object.


You can now easily guess the implementation of the get method. Yes, for the given key, the index is calculated with the same hash method. Then the key being looked up is compared with the key present at that index using reference equality. If the keys match, the value stored at the next index is returned; otherwise, we move to the next key slot. If the key at an index is null, that means the key was never inserted, and so null is returned.



Easy, right!

If you have wondered enough, there is one important case that we are missing. Think about what will happen if we remove a key-value pair from the IdentityHashMap - specifically, if we remove one of several key-value pairs that were mapped to the same index.

In the example below, let's say key1, key2, and key3 map to the same index 0. Since index 0 was already occupied by key1, the other two keys - key2 and key3 - were inserted at the next subsequent locations. After the removal of key2, if the get method is called for key3, what would it return? Since the next subsequent index after index 0 has a null key, it would return null, which is wrong.



The closeDeletion method comes to the rescue. After every removal, this method is called to rehash colliding entries.





http://androidxref.com/9.0.0_r3/xref/libcore/ojluni/src/main/java/java/util/IdentityHashMap.java

http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/IdentityHashMap.java

Tuesday, December 3, 2019

WeakHashMap walkthrough

WeakHashMap stores key-value pairs just like HashMap; however, the keys in a WeakHashMap are stored indirectly as WeakReferences. This causes the entries in a WeakHashMap to get automatically removed when their keys no longer have any strong references. Well, you might not call it automatic, because a lot of work goes into removing the entries. We will see how in just a moment. But before that, go through my HashMap code walkthrough before starting this section, as it is absolutely required for understanding WeakHashMap.

https://delveintojava.blogspot.com/2019/11/hashmap-walkthrough.html

Let's first understand what a WeakReference is.
Unlike a strong reference, a weak reference allows the object that it references to be garbage collected.

Below is an example for WeakReference.

// val is a strong reference to the string object
String val = new String("Hello");

// Creating a WeakReference to the string object
WeakReference<String> weakReference = new WeakReference<String>(val);
// This condition always holds true here, as the string still has a strong reference.
if (null != weakReference.get()) {
    System.out.println(weakReference.get());
}
// Remove the strong reference to the string object
val = null;
// Request garbage collection. This might not result in immediate GC.
System.gc();
// At this point the object may be garbage collected, as it has only a weak reference.
if (null != weakReference.get()) {
    System.out.println(weakReference.get());
} else {
    System.out.println("Garbage collection cleared weak reference");
}


Now that we are clear about WeakReference, let's talk WeakHashMap. Let's start with an example.

WeakHashMap<String, String> weakHashMap = new WeakHashMap<>();
// Create strong references for both keys and values
String key1 = new String("key1");
String key2 = new String("key2");
String value1 = new String("val1");
String value2 = new String("val2");
weakHashMap.put(key1, value1);
weakHashMap.put(key2, value2);

System.out.println(weakHashMap);

// Removing the strong reference to key1
key1 = null;
System.gc();
System.out.println(weakHashMap);


Can you guess the output?

OUTPUT:
{key1=val1, key2=val2} // Before GC
{key2=val2} // After GC (note that System.gc() might not trigger GC immediately, so you might get both key-value pairs the second time as well)

As you can see, after garbage collection, the entry with key1 and val1 is removed, since the strong reference to key1 was removed by setting it to null. So, the key string had only a weak reference to it and was eligible for garbage collection.
Had it been a HashMap, the output would be as below:

{key1=val1, key2=val2} // Before GC
{key1=val1, key2=val2} // After GC

As you can see, all values are retained even after GC.

So, how does WeakHashMap know that a key has been garbage collected? That information is required to remove the WeakReference node, don't you think?

The answer is ReferenceQueue. Before we talk about ReferenceQueue, let's see the blueprint of WeakHashMap's entries.


You can see that here, the entry does not extend the regular HashMap node. The entry extends WeakReference, using the referent field as the key, since we pass the key to the superclass constructor, and not the whole entry object (the this reference).

We can also see, a reference queue has been passed to the super constructor. To understand how ReferenceQueue plays part in WeakHashMap implementation, let's understand it first.

It's all a garbage collection story. Let's rewind a little and see what the garbage collector does when it comes across an object that is weakly reachable, i.e. an object which has only weak references to it.

  • First, the WeakReference object's referent field is set to null. So, now WeakReference doesn't refer to any object in the heap.
  • Then the WeakReference object is added to the ReferenceQueue, and the object's memory in the heap is freed.
Whenever the JVM collects a WeakReference's key, the entry (which is a WeakReference object) is added to the ReferenceQueue that we passed during the entry's creation. So, WeakHashMap checks this reference queue, and if it finds any weak reference objects in the queue, it removes them from its own table/buckets as well.
This all happens in the expungeStaleEntries function. expungeStaleEntries keeps an eye on the queue and updates the internal table accordingly.
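The queue mechanics can be observed directly with java.lang.ref. Note that System.gc() is only a request, so the check below waits briefly for the enqueue rather than assuming it happens instantly:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

// After the referent is collected, the WeakReference itself shows up on the queue -
// exactly the signal expungeStaleEntries polls for.
public class RefQueueDemo {
    static boolean enqueuedAfterGc() {
        try {
            ReferenceQueue<Object> queue = new ReferenceQueue<>();
            // No strong reference is kept to the referent after this line.
            WeakReference<Object> ref = new WeakReference<>(new Object(), queue);
            System.gc();                                 // request a collection
            Reference<?> collected = queue.remove(2000); // wait up to 2s for enqueue
            return collected == ref;
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(enqueuedAfterGc());
    }
}
```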


queue is a private final member variable of WeakHashMap.


expungeStaleEntries function is called from multiple places within the WeakHashMap implementation. For example, re-sizing, calculating size, get value etc.

As you can see, it polls the reference queue. As long as it keeps finding WeakReference entry objects in the reference queue, it finds the bucket index where each object is present. Then it moves through the bucket to find the specific entry. Once an entry in the bucket matches the entry from the reference queue (notice that the entries are compared through reference (address) comparison: p == e), it removes that entry from the bucket.

At last, let's revise the differences between HashMap and WeakHashMap.


References:

Sunday, December 1, 2019

LinkedHashMap walkthrough

LinkedHashMap is a data structure that stores key-value pairs just like a HashMap, but with the added advantage of preserving insertion order.
To understand the internal working of LinkedHashMap, understanding the HashMap implementation is important. You can go through https://delveintojava.blogspot.com/2019/11/hashmap-walkthrough.html before starting this section.

LinkedHashMap extends HashMap and implements the Map interface. Its nodes contain two extra pointers to track the previous and next nodes in terms of insertion order.
The figure shows the internal structure of LinkedHashMap. As you can see in the figure, the before and after references point to the entries that were inserted before and after that node.

The following code would result into the LinkedHashMap internal structuring as shown in the figure.

LinkedHashMap<String, String> linkedHM = new LinkedHashMap<>();
linkedHM.put("key1", "val1");
linkedHM.put("key2", "val2");
linkedHM.put("key3", "val3");
linkedHM.put("key4", "val4");
System.out.println(linkedHM);

OUTPUT: {key1=val1, key2=val2, key3=val3, key4=val4}

Let's break things down. Let's see the constructor of LinkedHashMap entry.



As you can see, entries in LinkedHashMap extend HashMap.Node, so the structure of an entry is the same as a HashMap node with the addition of two more references - before and after.

Apart from that, we can also see two entry member variables declared - head and tail. These are required to track the first and last nodes that were inserted into the LinkedHashMap.

Whenever a new key-value pair is inserted into the LinkedHashMap, the insertion happens in the same way as in a HashMap. But before insertion, the new entry is linked with the last entry that was inserted (note that at this point the node is not yet linked with the last node in the bucket; the new entry is just linked to the last inserted entry). Let's see what happens when a new entry is created.



This is the overridden newNode method of HashMap. So, whenever HashMap's putVal method is called to insert a key-value pair, it internally calls LinkedHashMap's newNode implementation, which takes care of maintaining insertion order.
As you can see, after creation of the new node, it is linked with the last inserted node in the linkNodeLast function. If it's the first node, the node is saved as head. The new node is made the tail node and linked with the previous node.
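The linking logic can be sketched on its own, with the hash-table part left out (field and method names mirror the OpenJDK source):

```java
// Minimal runnable sketch of the insertion-order list LinkedHashMap maintains.
public class InsertionOrderList {
    static class Entry {
        final String key;
        Entry before, after; // the two extra pointers LinkedHashMap adds to each node
        Entry(String key) { this.key = key; }
    }

    Entry head, tail;

    void linkNodeLast(Entry p) {
        Entry last = tail;
        tail = p;
        if (last == null) {
            head = p;          // first entry ever inserted
        } else {
            p.before = last;   // append to the insertion-order list
            last.after = p;
        }
    }

    String order() {
        StringBuilder sb = new StringBuilder();
        for (Entry e = head; e != null; e = e.after) // walk the after pointers
            sb.append(e.key).append(' ');
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        InsertionOrderList list = new InsertionOrderList();
        list.linkNodeLast(new Entry("key1"));
        list.linkNodeLast(new Entry("key2"));
        list.linkNodeLast(new Entry("key3"));
        System.out.println(list.order()); // key1 key2 key3
    }
}
```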

So, you can tell that each bucket maintains a singly linked list that is used to search for a key within that bucket, while the insertion order is maintained by a doubly linked list, where each node might point to nodes in other buckets.

So, now let's see how the iterator in LinkedHashMap works. How does it return the entries in the order they were inserted?



The only thing worth noting is the traversal to the next node. In nextNode (second-to-last line), the next node comes from the after reference, not the next reference of the entry node. Since after reflects the insertion order, the traversal too happens in insertion order.

References:
http://androidxref.com/9.0.0_r3/xref/libcore/ojluni/src/main/java/java/util/LinkedHashMap.java
http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/LinkedHashMap.java

Saturday, November 23, 2019

HashMap walkthrough

Let's be done with the boring definitions first, or you could just skip to the next section if you already know what a HashMap does. HashMap is a data structure that maps keys to values, duh! It is a map-based collection class, you would have thought otherwise. However, HashMap does not guarantee the order in which data is stored. A HashMap uses a hash function to compute an index that finds the location of the bucket into which data is to be stored or from which it is retrieved. Woah, let's break things down.
So, before delving into the internal workings of HashMap, let's understand its building blocks.
1. Map Entry
2. Bucket
3. Hash function

1. Map Entry:
These are the actual data holders in a HashMap. When we store a key-value pair in a HashMap, it stores the data in the form of a Node. Node is a static class defined inside HashMap that implements the Map.Entry interface.



As you can see, Node class contains not just key and value, but also the hash and a reference to the next Node. That means when we call put method in HashMap, it stores hash and next node reference along with key and value.

2. Bucket:
A bucket is a list of Nodes. Nodes whose hashes map to the same bucket index fall under the same bucket. More about hash values next. A HashMap is nothing but an array of buckets: Node<K, V>[].

3. Hash function:
Each node stores a hash along with key and value. This hash value is used to determine the bucket index where the key value pairs will be stored as a Node.


From the hash function, we can infer that if the key is null, 0 is returned. That means the value for null key is always inserted in the first bucket (i.e. the bucket with index 0).

However, if the key is not null, then the hash code of the key is calculated. The higher bits (obtained by using the unsigned right shift operator) and the lower bits of the obtained hash code are XORed. This means all the bits of the hash code are involved in calculating the final hash value. The XOR operation is good for shuffling bits, so it helps spread the stored values more uniformly across the HashMap.
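The spreading function is short enough to reproduce and try out. This is essentially the JDK 8 formula; the bucket-index step uses the (n - 1) & h formula from the note that follows:

```java
// HashMap's hash-spreading: XOR the top 16 bits into the bottom 16 bits.
public class HashSpreadDemo {
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        System.out.println(hash(null));              // 0 -> null key lands in bucket 0
        System.out.println(hash("hello"));           // spread hash of "hello"
        int n = 16;                                  // default capacity
        System.out.println((n - 1) & hash("hello")); // bucket index, always in 0..15
    }
}
```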

Note: Index of bucket is found using ((n - 1) & h) formula (Explained later).

Before checking how the values are stored and retrieved from HashMap. Let's check the constructor of a HashMap.


HashMap has a no-argument constructor as well - the one we generally use. I chose the above constructor to explain two important ingredients that are used to create a HashMap. Let's tackle them one at a time.

1. initialCapacity: This looks pretty straightforward. Yes, you guessed it right! This value represents the number of buckets the HashMap can have initially. The default initial capacity of HashMap is 16.

static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

2. loadFactor: Why do we need a loadFactor? Let's suppose we have a HashMap that will always contain 16 buckets. If we have a large number of values to store, then multiple values will be stored in the same bucket. So, the complexity of search becomes linear, and we lose the significance of using a HashMap, which is to provide constant search time. Similarly, if we create a HashMap with a large number of buckets and only have a few values to store, the extra memory allocated is wasted.

So, it becomes necessary to increase the number of buckets as and when the number of values to be stored increases.

The default loadFactor value is 0.75

static final float DEFAULT_LOAD_FACTOR = 0.75f;


loadFactor decides when the HashMap has to be re-sized so that the complexity of the get and put operations remain O(1).

So, the above value means that when the number of values stored in the HashMap reaches 0.75 times its current capacity, the capacity of the HashMap is doubled.

Suppose we add 50 elements; the final capacity after adding all the values will be 128.

let's see how.

(0.75 X 16)  = 12th addition -> Capacity = 16 X 2 = 32
(0.75 X 32)  = 24th addition -> Capacity = 32 X 2 = 64
(0.75 X 64)  = 48th addition -> Capacity = 64 X 2 = 128

Phew ε-(´・`) フ

Now, we can get into the part where we actually do operations on HashMap. Let's put some value into HashMap.

put: put internally calls a final method putVal.


Key points to observe in HashMap putVal:
1. When we call new HashMap<>(), it does not allocate space for the buckets right away. Instead, it waits till the first insertion is done. If no bucket list has been created, memory is allocated using resize().


2. If the bucket has no nodes, then insert the new value as the first node in that bucket. The bucket position where the new value is to be inserted is calculated using the formula ((n - 1) & hash), where n is the number of buckets in the HashMap. This ensures that the index value does not exceed the bucket list size.


3. If there are already some nodes in the bucket, we need to traverse it and see if any node has the same key as the key we have passed.


4. The match is done based on 2 conditions:
    a. the hash values must match
    b. the keys refer to the same object, or are equal (object equality, measured by calling the equals method of the object).
    So, the HashMap traverses the bucket list one node at a time; once it finds the same key, it replaces the value for that node.


Extra note:
Why is it said that if we override the equals method for an object, we must override the hashCode method as well?
Let's search for the answer in the HashMap put implementation.
What if you override only the equals method? If you do so, the hashCode method of the key class will produce different hash codes for two keys that are equal (common wisdom is that the default implementation of hashCode returns an integer derived from the memory address). This could cause the values for equal keys to be stored in different buckets, which totally defeats the purpose of a HashMap: a single key should always map to the same value.

Follow the below two rules for the hashCode and equals methods while overriding them:
i. If two objects are equal according to equals(), then calling the hashCode method on each of those two objects must produce the same hash code.

ii. It is not required that if two objects are unequal according to equals(), then calling the hashCode method on each of the two objects must produce distinct values.
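As an illustration, here is a hypothetical Point class that keeps the two methods consistent, so equal points land in the same bucket:

```java
import java.util.Objects;

// equals and hashCode are derived from the same fields, satisfying rule i.
public class Point {
    final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {          // must agree with equals
        return Objects.hash(x, y);
    }

    public static void main(String[] args) {
        java.util.HashMap<Point, String> map = new java.util.HashMap<>();
        map.put(new Point(1, 2), "a");
        // A distinct-but-equal Point hashes to the same bucket and matches.
        System.out.println(map.get(new Point(1, 2))); // a
    }
}
```

Drop the hashCode override and the lookup above would usually return null, because the two Point instances would hash to different buckets.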

5. If the first node's key does not match, then traverse through each node in the bucket in search of the matching key.
    If the traversal reaches the end of the list, i.e. p.next is null, then create and insert a new node with the given key, value and calculated hash.
    One more thing you might have noticed is the call to the treeifyBin function. This helps prevent linear-time search of a key when multiple keys map to the same bucket. If the node count in a bucket is greater than or equal to the TREEIFY_THRESHOLD value (8), then instead of storing the nodes in a linked list, they are stored as a tree, so that the search takes O(log n) time.
    If a node with a matching key is found, break the loop.


6. After coming out of the else condition, if e is not null, that means a matching key was already present in some bucket. So, we retrieve the old value to return it, and save the new value.


If onlyIfAbsent value is true, HashMap does not change the current value.
afterNodeAccess is just a callback method that you can override to do any operation post update.

7. modCount is used to monitor any modification of the HashMap. If there is any change in modCount, HashMap's fail-fast iterator throws a ConcurrentModificationException. It lets you know that the HashMap's values were updated, and you might need to reiterate with a new iterator instance to get the latest values.


After every update of the HashMap, the resize method is called if the size increases beyond the load factor limit, as explained earlier.

If there was no matching key, return null in the end.

That is all for the put implementation.

get(): 
The HashMap get implementation is equally simple. get internally calls a final method, getNode. getNode retrieves the node which contains a matching key. If there is a node containing a matching key, its value is returned; else null is returned.



The getNode method first finds the bucket in which the key could be present. Then it traverses each node in the list to find a node with a matching key. Once it finds the node with the matching key, that node is returned. In all other cases, null is returned.
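That lookup path can be sketched as follows (a simplified sketch: tree bins, null keys and resizing are omitted):

```java
// Pick the bucket with (n - 1) & hash, then walk the chain comparing hash and key.
public class GetNodeSketch {
    static class Node {
        final int hash;
        final Object key;
        Object value;
        Node next;
        Node(int hash, Object key, Object value) {
            this.hash = hash;
            this.key = key;
            this.value = value;
        }
    }

    static Node getNode(Node[] table, int hash, Object key) {
        Node first = table[(table.length - 1) & hash];   // locate the bucket
        for (Node e = first; e != null; e = e.next)      // walk the chain
            if (e.hash == hash && (e.key == key || key.equals(e.key)))
                return e;                                // matching key found
        return null;                                     // no match in this bucket
    }

    public static void main(String[] args) {
        Node[] table = new Node[16];
        int h = "k".hashCode() ^ ("k".hashCode() >>> 16); // same spreading as HashMap
        table[(table.length - 1) & h] = new Node(h, "k", "v");
        System.out.println(getNode(table, h, "k").value); // v
    }
}
```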

References:
http://androidxref.com/9.0.0_r3/xref/libcore/ojluni/src/main/java/java/util/HashMap.java
http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/HashMap.java