Data Structures and Algorithms Made Easy in Java, by Narasimha Karumanchi



The recurrence relation for the running time of this program is T(n) = T(n−1) + T(n−2) + c. Note that T(n) has two recursive calls, indicating a binary recursion tree. Each step recursively calls the program for n reduced by 1 and by 2, so the depth of the recursion tree is O(n). The number of leaves at depth n is 2^n since this is a full binary tree, and each leaf takes at least O(1) computation for the constant factor. The running time is therefore clearly exponential in n, and it is O(2^n).
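As a sketch of such a doubly-recursive function, consider the naive Fibonacci computation (an illustrative stand-in, since the original listing is not reproduced here); its recurrence is exactly T(n) = T(n−1) + T(n−2) + c, and the call counter shows the exponential blow-up:

```java
// Illustrative example of a function whose recurrence is
// T(n) = T(n-1) + T(n-2) + c: the naive Fibonacci recursion.
public class NaiveFib {
    static long calls = 0;              // counts how many times we recurse

    static long fib(int n) {
        calls++;
        if (n <= 1) return n;           // base cases: O(1) work
        return fib(n - 1) + fib(n - 2); // two recursive calls -> binary recursion tree
    }

    public static void main(String[] args) {
        System.out.println(fib(10));    // 55
        System.out.println(calls);      // grows exponentially with n
    }
}
```

Running it for increasing n makes the O(2^n) growth of `calls` easy to observe.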

Its running time is. First write the recurrence formula and then find its complexity. The recurrence for this code is. First write a recurrence formula, and show its solution using induction.

The if statement requires constant time, O(1). With the for loop, we neglect the loop overhead and only count the three times that the function is called recursively. This implies a time complexity recurrence. The given recurrence is not in the master theorem format; to make it simple we make a substitution, and applying the logic of the earlier problem then gives the solution.

Applying the logic of the earlier problem gives the solution. For the above code, the recurrence function is the same as that of the previous problem. Consider the comments in the pseudo-code below and call the running time of function(n) as T(n); T(n) can be defined as a recurrence, and using the master theorem gives its solution. Consider the comments in the pseudocode below: the complexity of the above program is O(n log n). According to the rate of growth, we can easily see the order of the given 3 functions by taking their logarithms; note that log(n!) = O(n log n).

Solution: Let us assume that the loop executes k times. After the kth step the value of j is 2^k. The loop stops when j reaches n, so 2^k = n; taking logarithms on both sides gives k = log n.

Since we are doing one more comparison for exiting from the loop, the answer is log n + 1. Let T(n) denote the number of times the for loop is executed by the program on input n.
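The doubling-loop analysis above can be checked empirically with a small sketch (the loop body and bound are illustrative):

```java
// Sketch: a loop in which j doubles each iteration runs about log2(n) times.
public class DoublingLoop {
    static int countSteps(int n) {
        int count = 0;
        for (long j = 1; j < n; j = j * 2) { // j takes the values 1, 2, 4, ..., i.e. 2^k
            count++;
        }
        return count;                        // after k steps j == 2^k, so k is about log2(n)
    }

    public static void main(String[] args) {
        System.out.println(countSteps(1024)); // 10, since 2^10 = 1024
    }
}
```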

Which of the following is true? Big O notation describes the tight upper bound and Big Omega notation describes the tight lower bound for an algorithm. How many recursive calls are made by this function? No option is correct.

Which one of the following is false? This indicates that the tight lower bound and the tight upper bound are the same, so option C is wrong. Start with 1 and multiply by 9 until reaching 9^n. Refer to the Divide and Conquer chapter. Let us solve this problem by the method of guessing. How much work do we do in each level of the recursion tree? In level 0, we take n^2 time.

At level 1, the two subproblems take a constant fraction of the time taken at level 0; at level 2 the four subproblems are smaller still. Similarly, the amount of work at level k shrinks geometrically. Summing over the levels, the total runtime is a geometric series, so the first level provides a constant fraction of the total runtime. Consider the worst case. Any function which calls itself is called recursive. A recursive method solves a problem by calling a copy of itself to work on a smaller problem.

This is called the recursion step. The recursion step can result in many more such recursive calls. It is important to ensure that the recursion terminates.

Each time the function calls itself with a slightly simpler version of the original problem.


The sequence of smaller problems must eventually converge on the base case. Recursion is a useful technique borrowed from mathematics. Recursive code is generally shorter and easier to write than iterative code. Generally, loops are turned into recursive functions when they are compiled or interpreted.

Recursion is most useful for tasks that can be defined in terms of similar subtasks. For example, sort, search, and traversal problems often have simple recursive solutions. At some point, the function encounters a subtask that it can perform without calling itself. This case, where the function does not recur, is called the base case. The former, where the function calls itself to perform a subtask, is referred to as the recursive case. We can write all recursive functions using a standard format: a base case plus a recursive case. As an example, consider the factorial function. The recursive definition of factorial is: n! = 1 if n ≤ 1, and n! = n × (n − 1)! otherwise. This definition can easily be converted to a recursive implementation.

Here the problem is determining the value of n! In the recursive case, when n is greater than 1, the function calls itself to determine the value of (n − 1)!

In the base case, when n is 0 or 1, the function simply returns 1. Once a method ends (that is, returns some data), the copy of that returning method is removed from memory. The recursive solutions look simple, but visualization and tracing take time. For better understanding, let us consider the following example. Now, let us consider our factorial function. The answer to this question depends on what we are trying to do.
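A minimal Java sketch of the recursive factorial just described (class and method names are illustrative):

```java
public class Factorial {
    // Recursive case: n! = n * (n-1)!; base case: 0! = 1! = 1.
    static long factorial(int n) {
        if (n <= 1) return 1;        // base case: the recursion terminates here
        return n * factorial(n - 1); // recursive case: a smaller subproblem
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
    }
}
```

Each call waits for the result of the smaller subproblem, so at most n stack frames are live at once.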

A recursive approach mirrors the problem that we are trying to solve. A recursive approach makes it simpler to solve a problem that may not have the most obvious of answers. That means any problem that can be solved recursively can also be solved iteratively. By the time you complete reading the entire book, you will encounter many recursion problems.

The Towers of Hanoi is a mathematical puzzle. It consists of three rods (or pegs, or towers), and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks on one rod in ascending order of size, the smallest at the top, thus making a conical shape. The objective of the puzzle is to move the entire stack to another rod, satisfying the following rules: only one disk may be moved at a time; each move takes the upper disk from one of the rods and slides it onto another rod, on top of any disks that may already be present on that rod; and no disk may be placed on top of a smaller disk. Once we solve Towers of Hanoi with three disks, we can solve it with any number of disks with the above algorithm.
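The recursive algorithm can be sketched as follows (peg names and the move list are illustrative); moving n disks takes 2^n − 1 moves:

```java
import java.util.ArrayList;
import java.util.List;

public class TowersOfHanoi {
    // Moves n disks from 'from' to 'to', using 'aux' as the spare peg.
    static void solve(int n, char from, char to, char aux, List<String> moves) {
        if (n == 0) return;                 // base case: nothing to move
        solve(n - 1, from, aux, to, moves); // move n-1 disks out of the way
        moves.add(from + "->" + to);        // move the largest disk
        solve(n - 1, aux, to, from, moves); // move n-1 disks on top of it
    }

    public static void main(String[] args) {
        List<String> moves = new ArrayList<>();
        solve(3, 'A', 'C', 'B', moves);
        System.out.println(moves.size()); // 7 moves, i.e. 2^3 - 1
    }
}
```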

Space Complexity: O(n), for recursive stack space. Backtracking is an improvement on the brute force approach. It systematically searches for a solution to a problem among all available options. In backtracking, we start with one possible option out of many and try to solve the problem with it; if we are able to solve the problem with the selected move, then we print the solution; else we backtrack, select some other option, and try to solve it.

If none of the options works out, we conclude that there is no solution for the problem. Backtracking is a form of recursion. The usual scenario is that you are faced with a number of options, and you must choose one of these. This procedure is repeated over and over until you reach a final state.

The tree is a way of representing some initial starting position (the root node) and a final goal state (one of the leaves). Backtracking allows us to deal with situations in which a raw brute-force approach would explode into an impossible number of options to consider. Backtracking is a sort of refined brute force. At each node, we eliminate choices that are obviously not possible and proceed to recursively check only those that have potential.

In general, that will be at the most recent decision point. Eventually, more and more of these decision points will have been fully explored, and we will have to backtrack further and further. If we backtrack all the way to our initial state and have explored all alternatives from there, we can conclude the particular problem is unsolvable. In such a case, we will have done all the work of the exhaustive recursion and will know that there is no viable solution possible.

Assume the array A[0..n−1] holds the current bit-string. Let T(n) be the running time of binary(n), and assume the print function takes time O(1). Using the Subtraction and Conquer master theorem we get T(n) = O(2^n). This means the algorithm for generating bit-strings is optimal.
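A sketch of the bit-string generator in Java (collecting the strings into a list instead of printing, so the output is easy to inspect; names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class BinaryStrings {
    // Fills A[0..n-1] with 0/1 in all possible ways; recurrence T(n) = 2T(n-1) + c.
    static void binary(int n, int[] A, List<String> out) {
        if (n == 0) {                       // base case: one complete string
            StringBuilder sb = new StringBuilder();
            for (int bit : A) sb.append(bit);
            out.add(sb.toString());
            return;
        }
        A[A.length - n] = 0;                // choose 0 for this position
        binary(n - 1, A, out);
        A[A.length - n] = 1;                // choose 1 for this position
        binary(n - 1, A, out);
    }

    public static void main(String[] args) {
        List<String> out = new ArrayList<>();
        binary(3, new int[3], out);
        System.out.println(out.size());     // 2^3 = 8 strings
    }
}
```

Since 2^n strings must be produced, the O(2^n) running time cannot be improved.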

Let us assume we keep the current k-ary string in an array A[0..n−1]. Call the function k-string(n, k), and let T(n) be the running time of k-string(n).

For more problems, refer to the String Algorithms chapter. Given a matrix of cells, each of which may be 1 or 0: the filled cells that are connected form a region. Two cells are said to be connected if they are adjacent to each other horizontally, vertically or diagonally.

There may be several regions in the matrix. How do you find the largest region (in terms of number of cells) in the matrix? The simplest idea is a recursive traversal that expands each filled cell into its connected neighbours. At each level of the recurrence tree, the number of problems is double that of the previous level, while the amount of work being done in each problem is half that of the previous level. Formally, the ith level has 2^i problems, each requiring 2^(n−i) work. Thus the ith level requires exactly 2^n work. The depth of this tree is n, because at the ith level the originating call will be T(n − i).

Thus the total complexity for T(n) is O(n2^n). A linked list is a data structure used for storing collections of data. A linked list has the following properties: successive elements are connected by pointers, and it allocates memory as the list grows. There are many other data structures that do the same thing as linked lists. Before discussing linked lists it is important to understand the difference between linked lists and arrays.

Both linked lists and arrays are used to store collections of data, and since both are used for the same purpose, we need to differentiate their usage. That means in which cases arrays are suitable and in which cases linked lists are suitable. The array elements can be accessed in constant time by using the index of the particular element as the subscript. To access an array element, the address of an element is computed as an offset from the base address of the array and one multiplication is needed to compute what is supposed to be added to the base address to get the memory address of the element.

First the size of an element of that data type is calculated, and then it is multiplied by the index of the element to get the value to be added to the base address. This process takes one multiplication and one addition. Since these two operations take constant time, we can say the array access can be performed in constant time. The size of the array is static (we must specify the array size before using it). When allocating the array at the beginning, it may not be possible to get the memory for the complete array if the array size is big.

To insert an element at a given position, we may need to shift the existing elements. This will create a position for us to insert the new element at the desired position. If the position at which we want to add an element is at the beginning, then the shifting operation is more expensive.

Dynamic Arrays A dynamic array (also called a growable array, resizable array, dynamic table, or array list) is a random access, variable-size list data structure that allows elements to be added or removed. One simple way of implementing dynamic arrays is to start with a fixed-size array. As soon as that array becomes full, create a new array of double the size of the original array. Similarly, reduce the array size to half if the number of elements in the array falls below half.
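The doubling strategy can be sketched in a few lines (a minimal sketch, not a production container; names are illustrative):

```java
import java.util.Arrays;

// Minimal sketch of a growable array that doubles its capacity when full.
public class DynamicArray {
    private int[] data = new int[1]; // start with space for a single element
    private int size = 0;

    void add(int value) {
        if (size == data.length)                       // full: double the capacity
            data = Arrays.copyOf(data, data.length * 2);
        data[size++] = value;
    }

    int get(int i) { return data[i]; }
    int size()     { return size; }
    int capacity() { return data.length; }

    public static void main(String[] args) {
        DynamicArray a = new DynamicArray();
        for (int i = 0; i < 5; i++) a.add(i * 10);
        System.out.println(a.size() + " " + a.capacity()); // 5 8
    }
}
```

Doubling makes the occasional copy cheap on average: each add costs O(1) amortized.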

We will see the implementation for dynamic arrays in the Stacks, Queues and Hashing chapters. Advantages of Linked Lists Linked lists have both advantages and disadvantages. The advantage of linked lists is that they can be expanded in constant time. To create an array, we must allocate memory for a certain number of elements.

To add more elements to the array when full, we must create a new array and copy the old array into the new array.

This can take a lot of time. We can prevent this by allocating lots of space initially but then we might allocate more than we need and waste memory.


With a linked list, we can start with space for just one allocated element and add on new elements easily without the need to do any copying and reallocating. Issues with Linked Lists (Disadvantages) There are a number of issues with linked lists. The main disadvantage of linked lists is access time to individual elements. An array is random-access, which means it takes O(1) to access any element in the array. Linked lists take O(n) for access to an element in the list in the worst case.

Another advantage of arrays in access time is spatial locality in memory. Arrays are defined as contiguous blocks of memory, and so any array element will be physically near its neighbors. This greatly benefits from modern CPU caching methods. Although the dynamic allocation of storage is a great advantage, the overhead of storing and retrieving data can make a big difference. Sometimes linked lists are hard to manipulate. If the last item is deleted, the last but one must then have its pointer changed to hold a NULL reference.

This requires that the list is traversed to find the last but one link, and its pointer set to a NULL reference.

Finally, linked lists waste memory in terms of extra reference points. This list consists of a number of nodes in which each node has a next pointer to the following element. The link of the last node in the list is NULL, which indicates the end of the list.

Following is a type declaration for a linked list of integers: The ListLength function takes a linked list as input and counts the number of nodes in the list.
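A sketch of such a declaration and the length function in Java (field and method names are illustrative, not the book's exact listing):

```java
// Minimal singly linked list node plus a length function.
public class ListLengthDemo {
    static class ListNode {
        int data;
        ListNode next;
        ListNode(int data) { this.data = data; }
    }

    // Walks the list once, counting nodes: O(n) time, O(1) space.
    static int listLength(ListNode head) {
        int count = 0;
        for (ListNode cur = head; cur != null; cur = cur.next)
            count++;
        return count;
    }

    public static void main(String[] args) {
        ListNode head = new ListNode(5);
        head.next = new ListNode(1);
        head.next.next = new ListNode(17);
        System.out.println(listLength(head)); // 3
    }
}
```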

The function given below can be used for printing the list data, with an extra print function for the element type.

Time Complexity: O(n), for scanning the list of size n. Space Complexity: O(1), for creating a temporary variable. Singly Linked List Insertion Insertion into a singly linked list has three cases: inserting a new node before the head (at the beginning), after the tail (at the end), or at some position in the middle of the list. To insert an element in the linked list at some position p, assume that after inserting the element the position of this new node is p.

Inserting a Node in Singly Linked List at the Beginning In this case, a new node is inserted before the current head node. Inserting a Node in Singly Linked List at the Ending In this case, we need to modify two next pointers: the last node's next pointer and the new node's next pointer.

Inserting a Node in Singly Linked List at the Middle Let us assume that we are given a position where we want to insert the new node. In this case also, we need to modify two next pointers. That means we traverse 2 nodes and insert the new node. For simplicity let us assume that the second node is called position node. The new node points to the next node of the position where we want to add this node.

Let us write the code for all three cases. We must update the first element pointer in the calling function, not just in the called function.
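A Java sketch covering all three insertion cases in one function (1-based positions; since Java has no pointers-to-pointers, the function simply returns the possibly new head; names are illustrative):

```java
public class SinglyInsert {
    static class ListNode {
        int data; ListNode next;
        ListNode(int d) { data = d; }
    }

    // Inserts a new node at a 1-based position and returns the (possibly new) head.
    static ListNode insert(ListNode head, int data, int position) {
        ListNode node = new ListNode(data);
        if (position == 1 || head == null) { // insert at the beginning
            node.next = head;
            return node;
        }
        ListNode prev = head;                // walk to the node before 'position'
        for (int i = 1; i < position - 1 && prev.next != null; i++)
            prev = prev.next;
        node.next = prev.next;               // also covers insertion at the end
        prev.next = node;
        return head;
    }

    public static void main(String[] args) {
        ListNode head = insert(null, 10, 1);  // 10
        head = insert(head, 30, 2);           // 10 -> 30
        head = insert(head, 20, 2);           // 10 -> 20 -> 30
        System.out.println(head.next.data);   // 20
    }
}
```

Returning the new head plays the role that the double pointer plays in a C implementation.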

For this reason we need to send a double pointer (in C; in Java we can simply return the new head). The following code inserts a node in the singly linked list. We can implement the three variations of the insert operation separately. Time Complexity: O(n), since, in the worst case, we may need to insert the node at the end of the list. Space Complexity: O(1), for creating one temporary variable. Deleting the first node can be done in two steps. Deleting the last node is a bit trickier than removing the first node, because the algorithm should find the node which is previous to the tail.

It can be done in three steps. By the time we reach the end of the list, we will have two pointers, one pointing to the tail node and the other pointing to the node before the tail node.

Deleting an Intermediate Node in Singly Linked List In this case, the node to be removed is always located between two nodes. Head and tail links are not updated in this case. Such a removal can be done in two steps. Time Complexity: O(n); in the worst case, we may need to delete the node at the end of the list.

Space Complexity: O(1), for one temporary variable. After freeing the current node, go to the next node with a temporary variable and repeat this process for all nodes. Time Complexity: O(n), for scanning the complete list of size n. A node in a singly linked list cannot be removed unless we have the pointer to its predecessor. The primary disadvantage of doubly linked lists is the extra space required for the previous pointer. Similar to a singly linked list, let us implement the operations of a doubly linked list. If you understand the singly linked list operations, then doubly linked list operations are obvious.

Following is a type declaration for a doubly linked list of integers. Doubly Linked List Insertion Insertion into a doubly linked list has three cases (same as a singly linked list). Inserting a Node in Doubly Linked List at the Beginning In this case, the previous and next pointers need to be modified, and it can be done in two steps. Inserting a Node in Doubly Linked List at the Middle As discussed in singly linked lists, traverse the list to the position node and insert the new node.

Also, the new node's left pointer points to the position node. Now, let us write the code for all of these three cases.
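A sketch of the front-insertion case in Java, showing that both the prev and next pointers must be updated (names are illustrative):

```java
public class DoublyInsert {
    static class DLLNode {
        int data; DLLNode prev, next;
        DLLNode(int d) { data = d; }
    }

    // Inserts a new node before the current head and returns the new head.
    static DLLNode insertAtFront(DLLNode head, int data) {
        DLLNode node = new DLLNode(data);
        node.next = head;      // new node points forward to the old head
        if (head != null)
            head.prev = node;  // old head points back to the new node
        return node;
    }

    public static void main(String[] args) {
        DLLNode head = insertAtFront(null, 3);
        head = insertAtFront(head, 2);
        head = insertAtFront(head, 1);           // list: 1 <-> 2 <-> 3
        System.out.println(head.next.data);      // 2
        System.out.println(head.next.prev.data); // 1
    }
}
```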

In the worst case, we may need to insert the node at the end of the list. Doubly Linked List Deletion Similar to singly linked list deletion, here we have three cases: deleting the first node, deleting the last node, and deleting an intermediate node. To delete the first node, move the head to the next node and then dispose of the temporary node.

Deleting the Last Node in Doubly Linked List This operation is a bit trickier than removing the first node, because the algorithm should find the node previous to the tail first. This can be done in three steps. By the time we reach the end of the list, we will have two pointers, one pointing to the tail and the other pointing to the node before the tail. Deleting an Intermediate Node in Doubly Linked List In this case, the node to be removed is always located between two nodes, and the head and tail links are not updated.

The removal can be done in two steps. Unlike singly linked lists, circular linked lists do not have ends. While traversing circular linked lists we should be careful; otherwise we will be traversing the list infinitely. In circular linked lists, each node has a successor. Note that unlike singly linked lists, there is no node with a NULL pointer in a circular linked list. In some situations, circular linked lists are useful. For example, when several processes are using the same computer resource (CPU) for the same amount of time, we have to assure that no process accesses the resource before all other processes do (the round robin algorithm).

The following is a type declaration for a circular linked list of integers: In a circular linked list, we access the elements using the head node similar to head node in singly linked list and doubly linked lists.

Counting Nodes in a Circular Linked List The circular list is accessible through the node marked head. To count the nodes, the list has to be traversed from the node marked head, with the help of a dummy node current, and the counting stops when current reaches the starting node head. If the list is empty, return 0. Otherwise, set the current pointer to the first node, and keep on counting till the current pointer reaches the starting node. Printing the Contents of a Circular Linked List We assume here that the list is being accessed by its head node.

Since all the nodes are arranged in a circular fashion, the tail node of the list will be the node previous to the head node. Let us assume we want to print the contents of the nodes starting with the head node. Print its contents, move to the next node and continue printing till we reach the head node again.
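The node-counting traversal just described can be sketched in Java; the same stop-at-head pattern works for printing (names are illustrative):

```java
public class CircularCount {
    static class ListNode {
        int data; ListNode next;
        ListNode(int d) { data = d; }
    }

    // Counts nodes by walking until the traversal returns to head.
    static int length(ListNode head) {
        if (head == null) return 0;
        int count = 1;
        for (ListNode cur = head.next; cur != head; cur = cur.next)
            count++;                        // stop when we are back at head
        return count;
    }

    public static void main(String[] args) {
        ListNode a = new ListNode(1), b = new ListNode(2), c = new ListNode(3);
        a.next = b; b.next = c; c.next = a; // circular: 1 -> 2 -> 3 -> 1
        System.out.println(length(a));      // 3
    }
}
```

Note that a plain `cur != null` test would loop forever here; the termination condition must compare against head.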

Space Complexity: O(1), for a temporary variable. Inserting a Node at the End of a Circular Linked List Let us add a node containing data at the end of a circular list headed by head. The new node will be placed just after the tail node (which is the last node of the list), which means it will have to be inserted in between the tail node and the first node. That means in a circular list we should stop at the node whose next node is head.

Inserting a Node at the Front of a Circular Linked List The only difference between inserting a node at the beginning and at the end is that, after inserting the new node, we just need to update the head pointer. The steps for doing this are given below. That means in a circular list we should stop at the node which is the previous node of the head in the list. This has to be named as the tail node, and its next field has to point to the first node. Consider the following list: to delete the last node 40, the list has to be traversed till you reach 7.

Space Complexity: O(1), for a temporary variable. Deleting the First Node in a Circular List The first node can be deleted by simply replacing the next field of the tail node with the next field of the first node. The tail node is the previous node to the head node which we want to delete.

Also, update the tail node's next pointer to point to the next node of head, as shown below. Create a temporary node which will point to head.

Applications of Circular List Circular linked lists are used in managing the computing resources of a computer. We can use circular lists for implementing stacks and queues. That means elements in doubly linked list implementations consist of data, a pointer to the next node and a pointer to the previous node in the list as shown below.

This implementation is based on pointer difference. Each node uses only one pointer field to traverse the list back and forth. New Node Definition The ptrdiff pointer field contains the difference between the pointer to the next node and the pointer to the previous node.

As an example, consider the following linked list. A memory-efficient implementation of a doubly linked list is possible with minimal compromise of timing efficiency. However, it takes O(n) to search for an element in a linked list. There is a simple variation of the singly linked list called the unrolled linked list. An unrolled linked list stores multiple elements in each node (let us call it a block for our convenience). In each block, a circular linked list is used to connect all nodes.

Assume that there will be no more than n elements in the unrolled linked list at any time. To simplify this problem, all blocks, except the last one, should contain exactly ⌈√n⌉ elements. Searching for an element in Unrolled Linked Lists In unrolled linked lists, we can find the kth element in O(√n): traverse the list of blocks to the one that contains the kth node, i.e., the ⌈k/⌈√n⌉⌉th block.

It takes O(√n) since we may find it by going through no more than ⌈√n⌉ blocks. Then find the (k mod ⌈√n⌉)th node in the circular linked list of this block. It also takes O(√n) since there are no more than ⌈√n⌉ nodes in a single block. Suppose that we insert a node x after the ith node, and x should be placed in the jth block. Nodes in the jth block and in the blocks after the jth block have to be shifted toward the tail of the list so that each of them still has ⌈√n⌉ nodes. In addition, a new block needs to be added to the tail if the last block of the list is out of space, i.e., has more than ⌈√n⌉ nodes.

Performing Shift Operation Note that each shift operation, which includes removing a node from the tail of the circular linked list in a block and inserting a node to the head of the circular linked list in the block after, takes only O(1). The total time complexity of an insertion operation for unrolled linked lists is therefore O(√n): there are at most O(√n) blocks and therefore at most O(√n) shift operations.

A temporary pointer is needed to store the tail of A. In block A, move the next pointer of the head node to point to the second-to-last node, so that the tail node of A can be removed.

Let the next pointer of the node which will be shifted (the tail node of A) point to the tail node of B. Let the next pointer of the head node of B point to the node temp points to. Finally, set the head pointer of B to point to the node temp points to. Now the node temp points to becomes the new head node of B. We have completed the shift operation to move the original tail node of A to become the new head node of B. First, if the number of elements in each block is appropriately sized (e.g., about ⌈√n⌉), the per-element overhead stays small.

Comparing Linked Lists and Unrolled Linked Lists To compare the overhead for an unrolled list, elements in doubly linked list implementations consist of data, a pointer to the next node, and a pointer to the previous node in the list, as shown below.

Assuming we have 4-byte pointers, each node is going to take 8 bytes of pointer overhead. But the allocation overhead for the node could be anywhere between 8 and 16 bytes. So, if we want to store 1K items in this list, we are going to have 16KB of overhead.

It will look something like this: thinking about our 1K items from above, it would take about 4KB of overhead. Also, note that we can tune the array size to whatever gets us the best overhead for our application. Binary search trees work well when the elements are inserted in a random order. Some sequences of operations, such as inserting the elements in order, produce degenerate data structures that give very poor performance.

If it were possible to randomly permute the list of items to be inserted, trees would work well with high probability for any input sequence. In most cases queries must be answered on-line, so randomly permuting the input is impractical.

Balanced tree algorithms rearrange the tree as operations are performed to maintain certain balance conditions and assure good performance. Skip lists are a probabilistic alternative to balanced trees. A skip list is a data structure that can be used as an alternative to balanced binary trees (refer to the Trees chapter). As compared to a binary tree, skip lists allow quick search, insertion and deletion of elements. This is achieved by using probabilistic balancing rather than strictly enforced balancing.

It is basically a linked list with additional pointers such that intermediate nodes can be skipped. It uses a random number generator to make some decisions. In an ordinary sorted linked list, search, insert, and delete are in O(n) because the list must be scanned node-by-node from the head to find the relevant node.

If somehow we could scan down the list in bigger steps (skip down, as it were), we would reduce the cost of scanning. This is the fundamental idea behind skip lists. The find, insert, and remove operations on ordinary binary search trees are efficient, O(log n), when the input data is random; but less efficient, O(n), when the input data is ordered.

Skip list performance for these same operations and for any data set is about as good as that of randomly built binary search trees, namely O(log n). The nodes in a skip list have many next references (also called forward references). We speak of a skip list node having levels, one level per forward reference. The number of levels in a node is called the size of the node.

In an ordinary sorted list, insert, remove, and find operations require sequential traversal of the list. This results in O(n) performance per operation. Skip lists allow intermediate nodes in the list to be skipped during a traversal, resulting in an expected performance of O(log n) per operation.

Refer to the Stacks chapter. Brute-Force Method: start with the first node and count the number of nodes present after that node. Continue this until the number of nodes after the current node is n − 1. Time Complexity: O(n^2), for scanning the remaining list from the current node for each node.

Yes, using a hash table. As an example, consider the following list. The key is the position of the node in the list, and the value is the address of that node:

Position 1: address of the 5 node
Position 2: address of the 1 node
Position 3: address of the 17 node
Position 4: address of the 4 node

By the time we traverse the complete list (for creating the hash table), we can find the list length.

Let us say the list length is M. Since we need to create a hash table of size M, the space complexity is O(M). If we observe the Problem-3 solution, what we are actually doing is finding the size of the linked list. That means we are using the hash table to find the size of the linked list.

We can find the length of the linked list just by starting at the head node and traversing the list. So, we can find the length of the list without creating the hash table; this solution needs two scans, but there is no need to create the hash table. Efficient Approach: Use two pointers, pNthNode and pTemp. Initially, both point to the head node of the list. pNthNode starts moving only after pTemp has moved n nodes. From there both move forward until pTemp reaches the end of the list.

As a result, pNthNode points to the nth node from the end of the linked list. At any point of time both move one node at a time. Brute-Force Approach: as an example, consider the following linked list which has a loop in it. The difference between this list and a regular list is that, in this list, there are two nodes whose next pointers are the same. That means the repetition of next pointers indicates the existence of a loop.
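The two-pointer approach described above (pTemp and pNthNode) might look like this in Java (a sketch; names follow the text, the node type is illustrative):

```java
public class NthFromEnd {
    static class ListNode {
        int data; ListNode next;
        ListNode(int d) { data = d; }
    }

    // Two pointers kept n nodes apart; when pTemp falls off the end,
    // pNthNode is at the nth node from the end. Returns null if n is too large.
    static ListNode nthFromEnd(ListNode head, int n) {
        ListNode pTemp = head, pNthNode = head;
        for (int i = 0; i < n; i++) {      // advance pTemp n nodes first
            if (pTemp == null) return null;
            pTemp = pTemp.next;
        }
        while (pTemp != null) {            // then move both one node at a time
            pTemp = pTemp.next;
            pNthNode = pNthNode.next;
        }
        return pNthNode;
    }

    public static void main(String[] args) {
        ListNode head = new ListNode(1);
        head.next = new ListNode(2);
        head.next.next = new ListNode(3);
        head.next.next.next = new ListNode(4);
        System.out.println(nthFromEnd(head, 2).data); // 3
    }
}
```

This makes a single scan of the list: O(n) time, O(1) space.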

If there is a node with the same address, that indicates that some other node is pointing to the current node, and we can say a loop exists. Continue this process for all the nodes of the linked list. Does this method work? As per the algorithm, we are checking the next pointer addresses, but how do we find the end of the linked list (otherwise we will end up in an infinite loop)?

If we start with a node in a loop, this method may work depending on the size of the loop. Using hash tables we can solve this problem. This is possible only if the given linked list has a loop in it. Time Complexity: O(n), for scanning the linked list. Note that we are doing a scan of only the input. Space Complexity: O(n), for the hash table.

Consider the following algorithm, which is based on sorting. Time Complexity: O(n log n) for sorting the next pointers array. Space Complexity: O(n) for the next pointers array. Problem with the above algorithm: the above algorithm works only if we can find the length of the list.

But if the list has a loop then we may end up in an infinite loop. For this reason the algorithm fails. Efficient Approach (Memoryless Approach): this problem was solved by Floyd. The solution is named the Floyd cycle finding algorithm. It uses two pointers moving at different speeds to walk the linked list. Once they enter the loop they are expected to meet, which denotes that there is a loop. This works because the only way a faster moving pointer would point to the same location as a slower moving pointer is if somehow the entire list, or a part of it, is circular.

Think of a tortoise and a hare running on a track. The faster running hare will catch up with the tortoise if they are running in a loop. As an example, consider the following example and trace out the Floyd algorithm. From the diagrams below we can see that after the final step they are meeting at some point in the loop which may not be the starting point of the loop.

There are two possibilities for L: it may end in a NULL pointer (a snake), or it may loop back on itself (a snail). Give an algorithm that tests whether a given list L is a snake or a snail. It is the same as the cycle-detection problem above. If there is a cycle, find the start node of the loop. The solution is an extension of the solution to the cycle-detection problem. After finding the loop in the linked list, we initialize slowPtr to the head of the linked list. From that point onwards both slowPtr and fastPtr move only one node at a time. The point at which they meet is the start of the loop.
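The Floyd cycle detection plus loop-start steps can be sketched together in Java (slow/fast names are illustrative):

```java
public class FloydCycle {
    static class ListNode {
        int data; ListNode next;
        ListNode(int d) { data = d; }
    }

    // Returns the first node of the loop, or null if the list has no loop.
    static ListNode findLoopStart(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;            // tortoise: one node at a time
            fast = fast.next.next;       // hare: two nodes at a time
            if (slow == fast) {          // they met, so a loop exists
                slow = head;             // restart the tortoise from the head
                while (slow != fast) {   // both now move one node at a time
                    slow = slow.next;
                    fast = fast.next;
                }
                return slow;             // meeting point is the loop start
            }
        }
        return null;                     // fast reached the end: no loop
    }

    public static void main(String[] args) {
        ListNode a = new ListNode(1), b = new ListNode(2),
                 c = new ListNode(3), d = new ListNode(4);
        a.next = b; b.next = c; c.next = d; d.next = b; // loop starts at b
        System.out.println(findLoopStart(a).data);      // 2
    }
}
```

Only two references are kept at any time, so the space complexity is O(1).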

Generally we use this method for removing the loops. This problem is at the heart of number theory. Furthermore, the tortoise is at the midpoint between the hare and the beginning of the sequence because of the way they move. Yes, but the complexity might be high. Trace out an example. If there is a cycle, find the length of the loop.

This solution is also an extension of the basic cycle detection problem. After finding the loop in the linked list, keep slowPtr as it is. The fastPtr keeps on moving until it comes back to slowPtr again. While moving fastPtr, use a counter variable which increments at the rate of 1; the final count gives the loop length. Traverse the list, find a position for the element, and insert it.

Recursive version: We will find it easier to start from the bottom up, by asking and answering tiny questions (this is the approach in The Little Lisper). What is the reverse of the empty list? The empty list itself. What is the reverse of a one-element list? The element itself.

What is the reverse of a two-element list? The second element followed by the first element; and in general, the reverse of the rest of the list followed by the first element. Space Complexity: O(n) for the recursive stack.

Next problem: two linked lists intersect at some node. The head or start pointers of both the lists are known, but the intersecting node is not known.

Also, the number of nodes in each of the lists before they intersect is unknown and may be different for each list. Give an algorithm for finding the merging point. Brute-Force Approach: one easy solution is to compare every node pointer in the first list with every node pointer in the second list; the first matching node pointers lead us to the intersecting node.

But the time complexity in this case will be O(mn), which is high. Consider the following algorithm, which is based on sorting, and see why it fails.

Any problem with the above algorithm? In the algorithm, we are storing all the node pointers of both the lists and sorting them. But we are forgetting the fact that there can be many repeated elements: after the merging point, all node pointers are the same for both the lists.

The algorithm works fine only in one case: when both lists have their ending node at the merge point. By combining sorting and search techniques we can reduce the complexity.
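One common efficient method, sketched here with details the text only summarizes (the `ListNode` class is an assumption), computes both lengths, advances the longer list by the difference, and then walks the two lists in lockstep:

```java
// Sketch of the length-difference method for finding the merge point,
// assuming a simple ListNode class (not from the text).
class ListNode {
    ListNode next;
}

class MergePoint {
    static int length(ListNode head) {
        int n = 0;
        for (ListNode p = head; p != null; p = p.next) n++;
        return n;
    }

    // Returns the first common node of the two lists, or null.
    static ListNode findMergePoint(ListNode l1, ListNode l2) {
        int n1 = length(l1), n2 = length(l2);
        // Advance the longer list so both have the same distance to go.
        while (n1 > n2) { l1 = l1.next; n1--; }
        while (n2 > n1) { l2 = l2.next; n2--; }
        while (l1 != l2) {          // walk in lockstep until pointers match
            l1 = l1.next;
            l2 = l2.next;
        }
        return l1;                  // merge point, or null if no intersection
    }
}
```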

Time Complexity: O(max(m,n)). Finding the middle node, Brute-Force Approach: for each node, count how many nodes there are in the list, and check whether the current node is the middle node of the list.

The reasoning is the same as that of the previous problem, plus the time for creating the hash table; since we need to create a hash table of size n, this costs O(n). Efficient Approach: use two pointers, moving one pointer at twice the speed of the second. When the faster pointer reaches the end of the list, the slower pointer will be pointing to the middle node.
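The two-pointer middle-node technique can be sketched as follows, assuming a minimal `ListNode` class (not from the text):

```java
// Sketch of finding the middle node with slow/fast pointers,
// assuming a simple ListNode class (not from the text).
class ListNode {
    int data;
    ListNode next;
    ListNode(int data) { this.data = data; }
}

class MiddleNode {
    // Returns the middle node; for an even-length list, returns the
    // second of the two middle nodes.
    static ListNode middle(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;        // one step
            fast = fast.next.next;   // two steps
        }
        return slow;
    }
}
```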

Traverse recursively till the end of the linked list; while coming back from the recursion, start printing the elements. To find whether the list length is even or odd, use a 2x pointer: take a pointer that moves two nodes at a time. At the end, if the length is even then the pointer will be NULL; otherwise it will point to the last node. Assume the sizes of the lists are m and n. Refer to the Trees chapter. Refer to the Sorting chapter. For splitting a circular list: if the number of nodes in the list is odd, make the first list one node longer than the second. As an example, consider the following circular list.

After the split, the above list will look like two circular doubly linked lists. To check whether a list is a palindrome: get the middle of the linked list, then reverse the second half of the linked list.

Compare the first half with the second half, then reconstruct the original linked list by reversing the second half again and attaching it back to the first half. Output for different K values: this is an extension of swapping nodes in a linked list. Else return; otherwise, we can return the head. For O(1) element access: create a linked list and, at the same time, keep its nodes in a hash table. For n elements we have to keep all the elements in the hash table, which gives a preprocessing time of O(n).


Hence, by using amortized analysis, we can say that element access can be performed within O(1) time. Time Complexity: O(1) [amortized]. Space Complexity: O(n) for the hash table. Josephus problem: N people have decided to elect a leader by arranging themselves in a circle and eliminating every Mth person around the circle, closing ranks as each person drops out.


Find which person will be the last one remaining (with rank 1). Assume the input is a circular linked list with N nodes, and each node has a number in the range 1 to N associated with it; the head node has number 1 as its data. Cloning a list: give an algorithm for cloning the list. We can use a hash table to associate each newly created node with the corresponding node of the given list; we then scan the original list again and set the pointers to build the new list. Next: given only a pointer to a node, delete that node from the linked list.

So what do we do? We can easily get away by moving the data from the next node into the current node and then deleting the next node. To separate odd and even nodes, we can use splitting logic: while traversing the list, split the linked list into two. Then, to get the final list, we can simply append the odd-node linked list after the even-node linked list. To split the linked list, traverse the original linked list and move all odd nodes to a separate linked list of all odd nodes.

At the end of the loop, the original list will have all the even nodes and the odd node list will have all the odd nodes.
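The splitting logic can be sketched as follows. The text does not say whether "odd/even" refers to node values or positions; this sketch assumes values, and the `ListNode` class is also an assumption:

```java
// Sketch of splitting a list into even-valued and odd-valued nodes,
// assuming "odd/even" refers to node values (an assumption) and a
// simple ListNode class (not from the text).
class ListNode {
    int data;
    ListNode next;
    ListNode(int data) { this.data = data; }
}

class OddEvenSplit {
    // Returns a list with all even nodes first, then all odd nodes,
    // each group keeping its original relative order.
    static ListNode split(ListNode head) {
        ListNode evenHead = null, evenTail = null;
        ListNode oddHead = null, oddTail = null;
        for (ListNode p = head; p != null; p = p.next) {
            ListNode node = new ListNode(p.data);   // copy to keep it simple
            if (p.data % 2 == 0) {                  // append to even list
                if (evenTail == null) evenHead = node; else evenTail.next = node;
                evenTail = node;
            } else {                                // append to odd list
                if (oddTail == null) oddHead = node; else oddTail.next = node;
                oddTail = node;
            }
        }
        if (evenTail == null) return oddHead;       // no even nodes at all
        evenTail.next = oddHead;                    // append odds after evens
        return evenHead;
    }
}
```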

To keep the ordering of all nodes the same, we must insert the odd nodes at the end of the odd-node list. For the next problem, the value of n is not known in advance, and it is the same as finding the kth element from the end of the linked list. Given a singly linked list, write a function to find the required element, where n is the number of elements in the list.

Assume the value of n is not known in advance. Merging two sorted lists: merge them into a third list in ascending order. The while loop takes O(min(n,m)) time, as it will run min(n,m) times; the other steps run in O(1). Therefore the total time complexity is O(min(n,m)). The median is the middle number in a sorted list of numbers if we have an odd number of elements; if we have an even number of elements, the median is the average of the two middle numbers in the sorted list.

We can solve this problem with linked lists, either sorted or unsorted. First, let us try an unsorted linked list: we can insert the element either at the head or at the tail. The disadvantage of this approach is that finding the median takes O(n).

Also, the insertion operation takes O(1). Now, let us try with a sorted linked list. Insertion at a particular (already located) position is also O(1) in any linked list. For an efficient algorithm refer to the Priority Queues and Heaps chapter. Adding two numbers given as lists: the result should be stored in a third linked list. Note that the head node contains the most significant digit of the number. Since integer addition starts from the least significant digit, we first need to visit the last node of both lists and add them up, create a new node to store the result, take care of the carry if any, and link the resulting node to the node that will hold the next least significant sum, continuing in the same way.

First of all, we need to take into account the difference in the number of digits in the two numbers. So before starting the recursion, we do some calculation and move the pointer of the longer list forward to the appropriate place, so that we reach the last node of both lists at the same time.

The other thing we need to take care of is the carry. If two digits add up to more than 9, we need to forward the carry to the next node and add it there.

If the most significant digit addition results in a carry, we need to create an extra node to store that carry. The function below is actually a wrapper that does all the housekeeping: calculating the lengths of the lists, calling the recursive implementation, creating an extra node for the carry from the most significant digit, and adding any remaining nodes left in the longer list.
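A sketch of this scheme follows; the `ListNode` class and all helper names are assumptions, since the text's own code did not survive extraction:

```java
// Sketch of adding two numbers stored as linked lists (most significant
// digit first), assuming a simple ListNode class (not from the text).
class ListNode {
    int data;
    ListNode next;
    ListNode(int data) { this.data = data; }
}

class ListAdder {
    static int carry;   // carry propagated out of the recursion

    static int length(ListNode head) {
        int n = 0;
        for (ListNode p = head; p != null; p = p.next) n++;
        return n;
    }

    // Adds two lists of equal length; returns head of the result list.
    static ListNode addSameLength(ListNode l1, ListNode l2) {
        if (l1 == null) { carry = 0; return null; }
        ListNode rest = addSameLength(l1.next, l2.next);
        int sum = l1.data + l2.data + carry;
        carry = sum / 10;
        ListNode node = new ListNode(sum % 10);
        node.next = rest;
        return node;
    }

    // Adds the first `diff` nodes of the longer list (plus carry) onto `rest`.
    static ListNode addRemaining(ListNode longer, int diff, ListNode rest) {
        if (diff == 0) return rest;
        ListNode tail = addRemaining(longer.next, diff - 1, rest);
        int sum = longer.data + carry;
        carry = sum / 10;
        ListNode node = new ListNode(sum % 10);
        node.next = tail;
        return node;
    }

    // Wrapper: aligns lengths, runs the recursion, handles the final carry.
    static ListNode add(ListNode l1, ListNode l2) {
        carry = 0;
        int n1 = length(l1), n2 = length(l2);
        if (n1 < n2) {                           // make l1 the longer list
            ListNode t = l1; l1 = l2; l2 = t;
            int tn = n1; n1 = n2; n2 = tn;
        }
        ListNode p = l1;
        for (int i = 0; i < n1 - n2; i++) p = p.next;   // align lengths
        ListNode rest = addSameLength(p, l2);
        ListNode result = addRemaining(l1, n1 - n2, rest);
        if (carry > 0) {                         // extra node for final carry
            ListNode head = new ListNode(carry);
            head.next = result;
            result = head;
        }
        return result;
    }
}
```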

Time Complexity: O(max(List1 length, List2 length)). Space Complexity: O(min(List1 length, List2 length)) for the recursive stack. It can also be solved using stacks. Simple insertion sort is easily adaptable to singly linked lists: to insert an element, the linked list is traversed until the proper position is found, or until the end of the list is reached. The element is inserted into the list by merely adjusting the pointers, without shifting any elements as in an array.

This reduces the time required for insertion but not the time required for searching for the proper position. Reordering a list: find the middle of the linked list using the slow and fast pointer approach. After finding the middle node, we reverse the second half; then we do an in-place merge of the two halves of the linked list. The solution is based on merge sort logic. Finding common elements of two sorted lists: since the elements are in sorted order, we run a loop till we reach the end of either of the lists.

We compare the values of list1 and list2; if the values are equal, we add the value to the common list. A stack is a simple data structure used for storing data, similar to linked lists.
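The common-elements procedure described above can be sketched as follows, assuming a minimal `ListNode` class (not from the text):

```java
// Sketch of collecting the common elements of two sorted lists,
// assuming a simple ListNode class (not from the text).
class ListNode {
    int data;
    ListNode next;
    ListNode(int data) { this.data = data; }
}

class CommonElements {
    // Returns a new sorted list of the values present in both lists.
    static ListNode common(ListNode l1, ListNode l2) {
        ListNode head = null, tail = null;
        while (l1 != null && l2 != null) {        // stop at end of either list
            if (l1.data < l2.data) l1 = l1.next;
            else if (l1.data > l2.data) l2 = l2.next;
            else {                                // equal: add to common list
                ListNode node = new ListNode(l1.data);
                if (tail == null) head = node; else tail.next = node;
                tail = node;
                l1 = l1.next;
                l2 = l2.next;
            }
        }
        return head;
    }
}
```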

