Narasimha Karumanchi Data Structures 2018 PDF Download

This book serves as a guide to prepare for interviews, exams, and campus work. In short, it offers solutions to a wide range of complex data structure and algorithmic problems. What is unique? The main objective is not to propose theorems and proofs about data structures and algorithms; instead, the book takes the direct route and solves problems of varying complexity.

Data Structures and Algorithms Made Easy

Time Complexity: O(n), for scanning the complete list of size n. A node in a singly linked list cannot be removed unless we have a pointer to its predecessor. Similar to singly linked lists, let us implement the operations of a doubly linked list.

If you understand the singly linked list operations, then doubly linked list operations are straightforward. Inserting a Node in a Doubly Linked List at the Middle: as discussed for singly linked lists, traverse the list to the position node and insert the new node there. The new node's left (previous) pointer points to the position node, and its right (next) pointer points to the node that follows the position node. Now, let us write the code for all three cases. In the worst case, we may need to insert the node at the end of the list.
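The book's code listing for these cases is not reproduced above, so the following is only a minimal sketch of the middle-insertion case. It assumes a node type DLLNode with data, next and prev fields and a 1-based position count; the names are illustrative choices, not the book's.

#include <stdlib.h>

struct DLLNode {
    int data;
    struct DLLNode *next;
    struct DLLNode *prev;
};

/* Insert a new node with the given data after the node at `position`
   (1-based). Returns the (possibly updated) head pointer. */
struct DLLNode *insertAfterPosition(struct DLLNode *head, int data, int position) {
    struct DLLNode *newNode = malloc(sizeof(struct DLLNode));
    newNode->data = data;

    if (head == NULL || position < 1) {        /* insert at the front */
        newNode->prev = NULL;
        newNode->next = head;
        if (head != NULL)
            head->prev = newNode;
        return newNode;
    }

    struct DLLNode *pos = head;                /* walk to the position node */
    while (position > 1 && pos->next != NULL) {
        pos = pos->next;
        position--;
    }

    newNode->prev = pos;                       /* left pointer -> position node */
    newNode->next = pos->next;                 /* right pointer -> node after it */
    if (pos->next != NULL)
        pos->next->prev = newNode;
    pos->next = newNode;
    return head;
}

The front and end cases fall out of the same routine: an empty list (or a position below 1) inserts at the front, and walking past the last node inserts at the end.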

Then, dispose of the temporary node. Deleting the Last Node in a Doubly Linked List: this operation is a bit trickier than removing the first node, because the algorithm must first find the node previous to the tail. By the time we reach the end of the list, we will have two pointers: one pointing to the tail node and the other pointing to the node before the tail.
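A sketch of the last-node deletion under the same assumed DLLNode type. Because every node already stores its prev pointer, once the traversal reaches the tail, the node before it is available directly.

#include <stdlib.h>

struct DLLNode {
    int data;
    struct DLLNode *next;
    struct DLLNode *prev;
};

/* Delete the last node of the list and return the new head. */
struct DLLNode *deleteLastNode(struct DLLNode *head) {
    if (head == NULL)
        return NULL;
    if (head->next == NULL) {            /* single-node list */
        free(head);
        return NULL;
    }

    struct DLLNode *tail = head;
    while (tail->next != NULL)           /* stop at the last node */
        tail = tail->next;

    tail->prev->next = NULL;             /* unlink the old tail */
    free(tail);
    return head;
}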

Deleting an Intermediate Node in a Doubly Linked List: in this case, the node to be removed is always located between two nodes, so the head and tail links are not updated. In singly and doubly linked lists, the end of the list is indicated by a NULL pointer, but circular linked lists do not have ends. While traversing a circular linked list we should be careful; otherwise we will end up traversing the list infinitely. In circular linked lists, each node has a successor. Note that unlike singly linked lists, there is no node with a NULL pointer in a circular linked list.

In some situations, circular linked lists are useful. For example, when several processes are using the same computer resource (the CPU) for the same amount of time, we have to ensure that no process accesses the resource again before all other processes have done so (the round-robin algorithm).

The type declaration for a circular linked list of integers is the same as for a singly linked list: each node stores an integer and a pointer to the next node. In a circular linked list, we access the elements using the head node, similar to the head node in singly and doubly linked lists. Counting Nodes in a Circular Linked List: the circular list is accessible through the node marked head. To count the nodes, the list has to be traversed from the head node with the help of a dummy pointer current, and counting stops when current reaches the starting node head again.
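Since the declaration itself does not appear above, here is a sketch assuming a node type CLLNode with the same shape as a singly linked list node (data plus next), where the last node's next points back to the head, together with the counting routine just described. The type and function names are illustrative.

#include <stdio.h>

/* Same node shape as a singly linked list; the last node's next points back to head. */
struct CLLNode {
    int data;
    struct CLLNode *next;
};

/* Count the nodes of a circular list accessible through `head`. */
int circularListLength(struct CLLNode *head) {
    if (head == NULL)
        return 0;
    int count = 1;
    struct CLLNode *current = head->next;
    while (current != head) {            /* stop when we come back to the starting node */
        count++;
        current = current->next;
    }
    return count;
}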

If the list is empty, the count is 0. Otherwise, set the current pointer to the first node and keep counting until the current pointer reaches the starting node again. Printing the Contents of a Circular Linked List: we assume here that the list is being accessed through its head node. Since all the nodes are arranged in a circular fashion, the tail node of the list will be the node previous to the head node. Let us assume we want to print the contents of the nodes starting with the head node: print its contents, move to the next node, and continue printing until we reach the head node again.
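A sketch of the printing routine for the same assumed CLLNode type; a do-while loop lets us print the head node once and then stop when we come back to it.

#include <stdio.h>

struct CLLNode {
    int data;
    struct CLLNode *next;
};

/* Print every node, starting at head and stopping when we reach head again. */
void printCircularList(struct CLLNode *head) {
    if (head == NULL)
        return;
    struct CLLNode *current = head;
    do {
        printf("%d ", current->data);    /* print, then advance */
        current = current->next;
    } while (current != head);
    printf("\n");
}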

Space Complexity: O(1), for a temporary variable. Inserting a Node at the End of a Circular Linked List: let us add a node containing data at the end of a circular list headed by head. The new node will be placed just after the tail node (which is the last node of the list), which means it will have to be inserted between the tail node and the first node.

That means in a circular list we should stop at the node whose next node is the head. Inserting a Node at the Front of a Circular Linked List: the only difference between inserting a node at the beginning and at the end is that, after inserting the new node, we just need to update the head pointer. As before, we stop at the node that precedes the head; this node acts as the tail node, and its next field has to point to the new first node.
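A sketch of front insertion under the assumed CLLNode type (the function name insertAtFront is illustrative): walk to the node whose next is head, link the new node in front of the old head, and return the new node as the new head.

#include <stdlib.h>

struct CLLNode {
    int data;
    struct CLLNode *next;
};

/* Insert a new node with `data` at the front of the circular list and
   return the new head. */
struct CLLNode *insertAtFront(struct CLLNode *head, int data) {
    struct CLLNode *newNode = malloc(sizeof(struct CLLNode));
    newNode->data = data;

    if (head == NULL) {                  /* empty list: the node points to itself */
        newNode->next = newNode;
        return newNode;
    }

    struct CLLNode *tail = head;
    while (tail->next != head)           /* stop at the node whose next is head */
        tail = tail->next;

    newNode->next = head;                /* new node comes before the old head */
    tail->next = newNode;                /* tail must now point to the new first node */
    return newNode;                      /* the head pointer is updated */
}

End insertion uses exactly the same linking; the only difference is that the original head is returned, so the new node becomes the tail rather than the head.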

Consider the following list. To delete the last node 40, the list has to be traversed till we reach 7, the node just before it. Space Complexity: O(1), for a temporary variable. Deleting the First Node in a Circular List: the first node can be deleted by simply replacing the next field of the tail node with the next field of the first node.
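A sketch of deleting the first node of a circular list, again with the assumed CLLNode type: find the tail (the node previous to head), point its next field at the node after head, and free the old head through a temporary pointer.

#include <stdlib.h>

struct CLLNode {
    int data;
    struct CLLNode *next;
};

/* Delete the first node and return the new head of the circular list. */
struct CLLNode *deleteFrontNode(struct CLLNode *head) {
    if (head == NULL)
        return NULL;
    if (head->next == head) {            /* single-node list */
        free(head);
        return NULL;
    }

    struct CLLNode *tail = head;
    while (tail->next != head)           /* tail is the node previous to head */
        tail = tail->next;

    struct CLLNode *temp = head;         /* temporary node pointing to head */
    tail->next = head->next;             /* tail now points to the node after head */
    head = head->next;
    free(temp);
    return head;
}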

The tail node is the node previous to the head node, which we want to delete. Also, update the tail node's next pointer to point to the node after head, and create a temporary node which will point to head so it can be freed. Applications of Circular Lists: circular linked lists are used in managing the computing resources of a computer, and we can also use circular lists for implementing stacks and queues. Next, consider a memory-efficient doubly linked list. In a conventional doubly linked list, each element consists of data, a pointer to the next node, and a pointer to the previous node in the list.

This implementation is based on pointer difference: each node uses only one pointer field to traverse the list back and forth. New Node Definition: the ptrdiff pointer field contains the difference between the pointer to the next node and the pointer to the previous node. As an example, consider the following linked list. A memory-efficient implementation of a doubly linked list is thus possible with minimal compromise of timing efficiency.
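The book keeps a single ptrdiff field per node; one common way to realize the idea in C, sketched below under that assumption, is to combine the previous and next addresses with XOR (portable pointer subtraction between separate allocations is not defined in C). XOR-ing the stored value with the address of the previous node recovers the address of the next node, so one field supports traversal in both directions. The type and function names are illustrative.

#include <stdio.h>
#include <stdint.h>

/* One pointer-sized field encodes both neighbours: ptrdiff = prev XOR next. */
struct XORNode {
    int data;
    uintptr_t ptrdiff;
};

/* Walk from `head` toward the tail, printing each element. Because
   ptrdiff = prev XOR next, XOR-ing it with the previous node's address
   recovers the next node's address. */
void xorTraverse(struct XORNode *head) {
    struct XORNode *prev = NULL, *curr = head;
    while (curr != NULL) {
        printf("%d ", curr->data);
        struct XORNode *next = (struct XORNode *)(curr->ptrdiff ^ (uintptr_t)prev);
        prev = curr;
        curr = next;
    }
    printf("\n");
}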

However, it takes O(n) time to search for an element in a linked list. There is a simple variation of the singly linked list called the unrolled linked list.

An unrolled linked list stores multiple elements in each node (let us call it a block for convenience). In each block, a circular linked list is used to connect all nodes. Assume that there will be no more than n elements in the unrolled linked list at any time.

To simplify this problem, all blocks, except possibly the last one, should contain exactly ⌈√n⌉ elements. Searching for an Element in Unrolled Linked Lists: in unrolled linked lists, we can find the kth element in O(√n) time: 1. Traverse the list of blocks to the one that contains the kth node, i.e., the ⌈k/⌈√n⌉⌉th block; it takes O(√n) since we may find it by going through no more than √n blocks. 2. Find the (k mod ⌈√n⌉)th node in the circular linked list of this block; it also takes O(√n) since there are no more than ⌈√n⌉ nodes in a single block.

Suppose that we insert a node x after the ith node, and x should be placed in the jth block. Nodes in the jth block and in the blocks after the jth block have to be shifted toward the tail of the list so that each of them still has ⌈√n⌉ nodes.

In addition, a new block needs to be added to the tail if the last block of the list is out of space, i.e., it has more than ⌈√n⌉ elements. Performing the Shift Operation: note that each shift operation, which includes removing a node from the tail of the circular linked list in a block and inserting a node at the head of the circular linked list in the block after it, takes only O(1). The total time complexity of an insertion operation for unrolled linked lists is therefore O(√n); there are at most O(√n) blocks and therefore at most O(√n) shift operations.

To shift a node from a block A into the next block B, a temporary pointer is needed to store the tail of A. In block A, move the next pointer of the head node to point to the second-to-last node, so that the tail node of A can be removed. Let the next pointer of the node that will be shifted (the tail node of A) point to the tail node of B.

Let the next pointer of the head node of B point to the node temp points to. Finally, set the head pointer of B to point to the node temp points to. Now the node temp points to becomes the new head node of B.

We have completed the shift operation to move the original tail node of A to become the new head node of B. Unrolled linked lists have advantages over ordinary linked lists. First, if the number of elements in each block is appropriately sized (e.g., to fit a cache line), we get better cache performance from the improved memory locality. Comparing Linked Lists and Unrolled Linked Lists: to compare the overhead for an unrolled list, recall that elements in doubly linked list implementations consist of data, a pointer to the next node, and a pointer to the previous node in the list.

Assuming we have 4-byte pointers, each node is going to take 8 bytes. But the allocation overhead for the node could be anywhere between 8 and 16 bytes. So, if we want to store 1K items in this list, we are going to have 16KB of overhead.

Thinking about our 1K items from above, an unrolled list would need only a small fraction of that overhead. Also, note that we can tune the array size to whatever gets us the best overhead for our application.

Binary search trees work well when the elements are inserted in a random order. Some sequences of operations, such as inserting the elements in sorted order, produce degenerate data structures that give very poor performance. If it were possible to randomly permute the list of items to be inserted, trees would work well with high probability for any input sequence. In most cases queries must be answered on-line, however, so randomly permuting the input is impractical.

Balanced tree algorithms rearrange the tree as operations are performed to maintain certain balance conditions and assure good performance. Skip lists are a probabilistic alternative to balanced trees. A skip list is a data structure that can be used as an alternative to balanced binary trees (refer to the Trees chapter).

As compared to a binary tree, skip lists allow quick search, insertion and deletion of elements. This is achieved by using probabilistic balancing rather than strictly enforced balancing. A skip list is basically a linked list with additional pointers such that intermediate nodes can be skipped. It uses a random number generator to make some decisions. In an ordinary sorted linked list, search, insert, and delete are O(n) because the list must be scanned node-by-node from the head to find the relevant node.

If somehow we could scan down the list in bigger steps (skip down, as it were), we would reduce the cost of scanning. This is the fundamental idea behind skip lists. The find, insert, and remove operations on ordinary binary search trees are efficient, O(log n), when the input data is random, but less efficient, O(n), when the input data is ordered.

Skip list performance for these same operations, and for any data set, is about as good as that of randomly built binary search trees, namely O(log n). The nodes in a skip list have many next references (also called forward references). We speak of a skip list node having levels, one level per forward reference.
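A minimal sketch of the search walk, assuming a node type that carries an array of forward references (one per level), a list-wide current level, and a header sentinel that precedes every key. The names and the fixed MAX_LEVEL bound are illustrative choices, not the book's code.

#include <stdlib.h>

#define MAX_LEVEL 16

/* A node owns one forward reference per level it participates in. */
struct SkipNode {
    int key;
    struct SkipNode *forward[MAX_LEVEL];   /* forward[i] = next node at level i */
};

struct SkipList {
    int level;                             /* highest level currently in use */
    struct SkipNode *header;               /* sentinel that precedes every key */
};

/* Return the node holding `key`, or NULL if it is absent. Start at the top
   level and drop down a level whenever the next forward reference would
   overshoot the key. */
struct SkipNode *skipListSearch(struct SkipList *list, int key) {
    struct SkipNode *x = list->header;
    for (int i = list->level - 1; i >= 0; i--) {
        while (x->forward[i] != NULL && x->forward[i]->key < key)
            x = x->forward[i];
    }
    x = x->forward[0];                     /* candidate on the bottom level */
    return (x != NULL && x->key == key) ? x : NULL;
}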

The number of levels in a node is called the size of the node. In an ordinary sorted list, insert, remove, and find operations require sequential traversal of the list, which results in O(n) performance per operation. Skip lists allow intermediate nodes in the list to be skipped during a traversal, resulting in an expected performance of O(log n) per operation. Solution: Refer to the Stacks chapter. The next problem is to find the nth node from the end of a linked list. Solution: Brute-Force Method: start with the first node and count the number of nodes present after that node.

Continue this until the number of nodes after the current node is n - 1. Time Complexity: O(n²), for scanning the remaining list from the current node, for each node. Space Complexity: O(1). Can we solve this using a hash table? Solution: Yes. As an example, consider the following list. The key is the position of the node in the list and the value is the address of that node.

Position in List    Address of Node
1                   Address of the 5 node
2                   Address of the 1 node
3                   Address of the 17 node
4                   Address of the 4 node

By the time we traverse the complete list (while creating the hash table), we can find the list length. Let us say the list length is m. Space Complexity: since we need to create a hash table of size m, O(m). Can we solve this problem without the hash table? Solution: Yes.

If we observe the Problem-3 solution, what we are actually doing is finding the size of the linked list. That means we are using the hash table to find the size of the linked list.

We can find the length of the linked list just by starting at the head node and traversing the list. So we can find the length of the list without creating the hash table, and hence there is no need to create it. An even simpler approach uses two pointers, pTemp and pNthNode. Initially, both point to the head node of the list, and pNthNode starts moving only after pTemp has moved n nodes ahead. From there both move forward until pTemp reaches the end of the list, as sketched below.
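A sketch of this two-pointer approach, assuming the usual singly linked ListNode type (the function name is illustrative): give pTemp a head start of n nodes, then move both pointers one node at a time until pTemp falls off the end.

#include <stdlib.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

/* Return the nth node from the end, or NULL if the list is shorter than n. */
struct ListNode *nthNodeFromEnd(struct ListNode *head, int n) {
    struct ListNode *pTemp = head, *pNthNode = head;

    for (int i = 0; i < n; i++) {          /* give pTemp a head start of n nodes */
        if (pTemp == NULL)
            return NULL;                   /* the list has fewer than n nodes */
        pTemp = pTemp->next;
    }
    while (pTemp != NULL) {                /* now advance both, one node at a time */
        pTemp = pTemp->next;
        pNthNode = pNthNode->next;
    }
    return pNthNode;                       /* nth node from the end */
}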

As a result, pNthNode points to the nth node from the end of the linked list. Note: at any point of time, both move one node at a time. The next problem is to check whether a given linked list has a loop in it. Solution: Brute-Force Approach. As an example, consider the following linked list which has a loop in it. The difference between this list and a regular list is that, in this list, there are two nodes whose next pointers are the same.

That means the repetition of next pointers indicates the existence of a loop. In the brute-force approach, for each node we check whether any other node's next pointer holds the current node's address. If there is such a node, then some other node is pointing back to the current node and we can say a loop exists.

Continue this process for all the nodes of the linked list. Does this method work? As per the algorithm, we are checking the next pointer addresses, but how do we find the end of the linked list? Otherwise we will end up in an infinite loop.

Note: if we start with a node inside the loop, this method may work, depending on the size of the loop. Using hash tables we can solve this problem: traverse the list nodes one by one, and for each node check whether its address already exists in the hash table; if it does not, insert it and continue. If the address already exists, we have revisited a node, which is possible only if the given linked list has a loop in it.

Time Complexity: O(n), for scanning the linked list. Note that we are doing a scan of only the input. Space Complexity: O(n), for the hash table. Can we solve this problem using a sorting technique? Solution: No. Consider the following algorithm, which is based on sorting the next-pointer addresses and looking for repeats, and see why it fails. Time Complexity: O(nlogn), for sorting the next pointers array. Space Complexity: O(n), for the next pointers array. Problem with the above algorithm: it works only if we can find the length of the list.

But if the list has a loop, then finding its length will itself run into an infinite loop, and for this reason the algorithm fails. An efficient approach that needs no extra memory is the Floyd cycle finding algorithm.

It uses two pointers moving at different speeds to walk the linked list. Once they enter the loop they are expected to meet, which denotes that there is a loop. This works because the only way a faster moving pointer would point to the same location as a slower moving pointer is if somehow the entire list or a part of it is circular. Think of a tortoise and a hare running on a track. The faster running hare will catch up with the tortoise if they are running in a loop.
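A sketch of the Floyd cycle detection just described, assuming the usual singly linked ListNode type.

#include <stdbool.h>
#include <stdlib.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

/* Floyd cycle detection: slowPtr advances one node per step, fastPtr two.
   They can only meet again if the list contains a loop. */
bool hasLoop(struct ListNode *head) {
    struct ListNode *slowPtr = head, *fastPtr = head;
    while (fastPtr != NULL && fastPtr->next != NULL) {
        slowPtr = slowPtr->next;
        fastPtr = fastPtr->next->next;
        if (slowPtr == fastPtr)            /* the hare caught the tortoise */
            return true;
    }
    return false;                          /* reached the end: no loop */
}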

As an example, consider the following list and trace out the Floyd algorithm. After the final step, the two pointers meet at some point in the loop, which may not be the starting point of the loop.

Note: slowPtr (the tortoise) moves one node at a time and fastPtr (the hare) moves two nodes at a time. There are two possibilities for a list L: it either ends (a snake) or its last element points back to one of the earlier elements in the list (a snail). Give an algorithm that tests whether a given list L is a snake or a snail. Solution: it is the same as the cycle detection problem above. The next problem: if there is a cycle, find the start node of the loop.

Solution: the solution is an extension of the cycle detection solution above. After finding the meeting point inside the loop, we reinitialize slowPtr to the head of the linked list. From that point onwards, both slowPtr and fastPtr move only one node at a time. The point at which they meet is the start of the loop.
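A sketch of the combined detect-and-find-start routine under the same assumed ListNode type: once the two pointers meet inside the loop, the slow pointer is restarted from the head and both advance one node at a time until they meet again at the loop's first node.

#include <stdlib.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

/* Return the first node of the loop, or NULL if the list has no loop. */
struct ListNode *findLoopStart(struct ListNode *head) {
    struct ListNode *slowPtr = head, *fastPtr = head;
    while (fastPtr != NULL && fastPtr->next != NULL) {
        slowPtr = slowPtr->next;
        fastPtr = fastPtr->next->next;
        if (slowPtr == fastPtr) {          /* loop detected */
            slowPtr = head;                /* restart the slow pointer at the head */
            while (slowPtr != fastPtr) {   /* both now move one node at a time */
                slowPtr = slowPtr->next;
                fastPtr = fastPtr->next;
            }
            return slowPtr;                /* meeting point = start of the loop */
        }
    }
    return NULL;
}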

Generally we use this method for removing the loops. Solution: This problem is at the heart of number theory. Furthermore, the tortoise is at the midpoint between the hare and the beginning of the sequence because of the way they move.

Solution: yes, but the complexity might be high; trace out an example. The next problem: if there is a cycle, find the length of the loop. Solution: this solution is also an extension of the basic cycle detection problem. After finding the meeting point in the loop, keep slowPtr where it is.

The fastPtr keeps on moving until it comes back to slowPtr; while moving fastPtr, use a counter variable which increments by 1 at each step, and its final value gives the length of the loop. To insert an element into a sorted linked list, traverse the list, find the correct position for the element, and insert it there. To reverse a list recursively: the reverse of a single-element list is the element itself; otherwise it is the reverse of the rest of the list followed by the first element. Space Complexity: O(n), for the recursive stack.

Next, suppose two linked lists merge at some node, after which they share all of the remaining nodes. The head (start) pointers of both lists are known, but the intersecting node is not. Also, the number of nodes in each list before they intersect is unknown and may be different for each list. Give an algorithm for finding the merging point. Solution: Brute-Force Approach: one easy solution is to compare every node pointer in the first list with every node pointer in the second list; the matching node pointers will lead us to the intersecting node.

But the time complexity in this case will be O(mn), which is high. Time Complexity: O(mn). Consider the following algorithm, which is based on sorting, and see why it fails.

Any problem with the above algorithm? In the algorithm, we are storing all the node pointers of both lists and sorting them. But we are forgetting the fact that there can be many repeated pointers: after the merging point, all node pointers are the same for both lists. The algorithm works fine only in one case, namely when both lists have their ending node at the merge point.

Space Complexity: O(n) or O(m). By combining sorting and search techniques we can reduce the complexity. Space Complexity: O(max(m, n)). The next problem is to find the middle node of a linked list. Solution: Brute-Force Approach: for each node, count how many nodes there are in the list, and see whether the current node is the middle node.

The reasoning is the same as in the earlier problems. Time Complexity: O(n), the time for creating the hash table. Space Complexity: O(n), since we need to create a hash table of size n. Solution: Efficient Approach: use two pointers and move one pointer at twice the speed of the second. When the faster pointer reaches the end of the list, the slower pointer will be pointing to the middle node. To print a list in reverse, traverse recursively till the end of the linked list and, while coming back, print the elements.
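A sketch of the two-pointer middle-finding approach described above, with the same assumed ListNode type; when the list has an even number of nodes, this variant returns the second of the two middle nodes.

#include <stdlib.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

/* Return the middle node: the fast pointer moves two nodes per step and the
   slow pointer one, so slow stops at the middle when fast reaches the end. */
struct ListNode *findMiddle(struct ListNode *head) {
    struct ListNode *slow = head, *fast = head;
    while (fast != NULL && fast->next != NULL) {
        slow = slow->next;
        fast = fast->next->next;
    }
    return slow;
}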

To check whether the length of a linked list is even or odd, use a 2x pointer: take a pointer that moves at 2x (two nodes at a time). At the end, if the length is even, the pointer will be NULL; otherwise it will point to the last node. Solution: assume the sizes of the lists are m and n. Solution: refer to the Trees chapter. Solution: refer to the Sorting chapter. The next problem is to split a circular linked list into two halves; if the number of nodes in the list is odd, make the first list one node longer than the second.

As an example, consider the following circular list. The next problem is to check whether a given linked list is a palindrome. Solution: Algorithm: 1. Get the middle of the linked list. 2. Reverse the second half of the linked list. 3. Compare the first half and the second half. 4. Construct the original linked list by reversing the second half again and attaching it back to the first half.
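A sketch of the four-step palindrome check, assuming the ListNode type used earlier; the helper reverseList and the choice of splitting just before the second half are illustrative details.

#include <stdbool.h>
#include <stdlib.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

/* Reverse a list in place and return the new head. */
static struct ListNode *reverseList(struct ListNode *head) {
    struct ListNode *prev = NULL;
    while (head != NULL) {
        struct ListNode *next = head->next;
        head->next = prev;
        prev = head;
        head = next;
    }
    return prev;
}

/* Check whether the list reads the same forwards and backwards. */
bool isPalindrome(struct ListNode *head) {
    if (head == NULL || head->next == NULL)
        return true;

    /* 1. Find the node just before the second half. */
    struct ListNode *slow = head, *fast = head;
    while (fast->next != NULL && fast->next->next != NULL) {
        slow = slow->next;
        fast = fast->next->next;
    }

    /* 2. Reverse the second half. */
    struct ListNode *secondHalf = reverseList(slow->next);

    /* 3. Compare the first half with the reversed second half. */
    bool palindrome = true;
    struct ListNode *p = head, *q = secondHalf;
    while (q != NULL) {
        if (p->data != q->data) {
            palindrome = false;
            break;
        }
        p = p->next;
        q = q->next;
    }

    /* 4. Restore the original list by reversing the second half again. */
    slow->next = reverseList(secondHalf);
    return palindrome;
}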

Else return; otherwise, we can return the head. To get constant-time access to the elements of a linked list, create the linked list and at the same time keep its nodes in a hash table. For n elements we have to keep all the elements in a hash table, which gives a preprocessing time of O(n).

Hence, by using amortized analysis, we can say that element access can be performed within O(1) time. Time Complexity: O(1) [amortized]. Space Complexity: O(n), for the hash table. The next problem is a Josephus circle: find which person will be the last one remaining, with rank 1. Solution: assume the input is a circular linked list with N nodes, where each node has a number (in the range 1 to N) associated with it.

The head node has the number 1 as its data. The next problem is to give an algorithm for cloning a linked list. Solution: we can use a hash table to associate the newly created nodes with the corresponding nodes of the given list.

We scan the original list again and set the pointers, building the new list. The next problem: given only a pointer to a node, delete that node from the linked list. We do not have access to its predecessor, so what do we do? We can easily get away by moving the data from the next node into the current node and then deleting the next node. Time Complexity: O(1). The next problem is to rearrange a list so that all even nodes appear before all odd nodes. Solution: to solve this problem, we can use splitting logic. While traversing the list, split the linked list into two: one containing all even nodes and the other containing all odd nodes.

Now, to get the final list, we can simply append the odd node linked list after the even node linked list. To split the linked list, traverse the original linked list and move all odd nodes to a separate linked list of all odd nodes. At the end of the loop, the original list will have all the even nodes and the odd node list will have all the odd nodes. To keep the ordering of all nodes the same, we must insert all the odd nodes at the end of the odd node list.
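One way to realize this splitting logic, sketched below with illustrative names and reading even/odd as the parity of each node's value: nodes are appended to the tails of two separate chains so that relative order is preserved, and the odd chain is then attached after the even chain.

#include <stdlib.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

/* Rearrange the list so that all even-valued nodes come before all
   odd-valued nodes, preserving the relative order within each group. */
struct ListNode *evenBeforeOdd(struct ListNode *head) {
    struct ListNode *evenHead = NULL, *evenTail = NULL;
    struct ListNode *oddHead = NULL, *oddTail = NULL;

    while (head != NULL) {
        struct ListNode *next = head->next;
        head->next = NULL;
        if (head->data % 2 == 0) {                 /* append to the even chain */
            if (evenTail != NULL) evenTail->next = head; else evenHead = head;
            evenTail = head;
        } else {                                   /* append to the odd chain */
            if (oddTail != NULL) oddTail->next = head; else oddHead = head;
            oddTail = head;
        }
        head = next;
    }

    if (evenHead == NULL)                          /* no even nodes at all */
        return oddHead;
    evenTail->next = oddHead;                      /* odd chain goes after even chain */
    return evenHead;
}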

Solution: for this problem the value of n is not known in advance. Solution: for this problem too the value of n is not known in advance, and it is the same as finding the kth element from the end of the linked list. Assume the value of n is not known in advance.

The other steps run in O(1). Therefore the total time complexity is O(min(n, m)). The next problem is to find the median of a linked list. If we have an even number of elements, the median is the average of the two middle numbers in a sorted list of numbers.

We can solve this problem with both sorted and unsorted linked lists. First, let us try with an unsorted linked list. In an unsorted linked list, we can insert the element either at the head or at the tail. The disadvantage of this approach is that finding the median takes O(n).

Also, the insertion operation takes O(1). Now, let us try with a sorted linked list. Once the position is known, insertion at a particular location is also O(1) in any linked list. Note: for an efficient algorithm, refer to the Priority Queues and Heaps chapter. The next problem is to add two numbers represented by linked lists; the result should be stored in a third linked list.

Also note that the head node contains the most significant digit of the number.

Data Structures and Algorithms in Java is a book with solutions to a wide range of problems related to data structures and algorithms, and it is coded in the Java language.

Students studying computer science and engineering can use this book as a reference manual. It covers the following topics: recursion and backtracking, linked lists, stacks and queues, trees, priority queues and heaps, disjoint set ADT, graph algorithms, sorting, searching, selection algorithms (medians), symbol tables, hashing, dynamic programming, complexity classes, and other important concepts.

Readers can use Data Structures and Algorithms in Java as a guide to prepare for interviews, exams, and campus work.
