Algebra Question
Description
PLEASE, IF YOU ARE NOT FAMILIAR WITH ALGORITHMS AND COMPUTER SCIENCE, DON'T TAKE THIS TASK
Two different kinds of algorithms:
– Data-structure driven:
* red-black trees, or other binary search trees
* Hash maps, tree maps
– Algorithms that are somewhat agnostic about the data structure:
* Union-find (join group A and B, and find which group x is in)
* Sorting
+ though typically with arrays, not linked lists
* Graph search/traversal
+ Adjacency matrix or adjacency list are probably just as good for most cases
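To make the union-find item above concrete, here is a minimal sketch in Python (the notes name no language, so Python is my choice) with path compression; a sketch, not a production implementation:

```python
class UnionFind:
    """Minimal union-find (disjoint set) over elements 0..n-1."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts in its own group

    def find(self, x):
        # Walk up to the root; flatten the path as we go (path halving).
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        # Join the group containing a with the group containing b.
        self.parent[self.find(a)] = self.find(b)

uf = UnionFind(6)
uf.union(0, 1)
uf.union(1, 2)
print(uf.find(0) == uf.find(2))  # True: 0, 1, 2 are now one group
print(uf.find(0) == uf.find(5))  # False: 5 was never joined
```

With path compression (and union by rank, omitted here for brevity) the amortized cost per operation is nearly constant.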
Terminology about general algorithms:
– Divide and conquer algorithms:
* Strategy:
+ Divide the task into separate steps (using division, not just taking the same fixed size)
+ If we think of just dividing by two, for example:
# if we can throw away one half, we have:
* T + T + T + … down to a subproblem of size 1, where T is the time to decide which half to keep
* there are log(n) terms, so the sum is T*log(n)
* E.g., binary search:
# T is the time to look at the center value and decide to go left or right next (so constant time)
# thus, O(log(n))
* If we can't throw away half:
# We might need to examine the whole range before deciding how to proceed with the halves
+ T*n to examine; T*n + 2*T*(n/2) + 4*T*(n/4) + … + n*T*1 is O(n*log(n))
# E.g., mergesort:
+ Divides first, and for each step (when the divisions complete their steps), merges the two halves,
which takes n time; so log(n) levels of splits, n time for each = O(n*log(n))
# E.g., quicksort:
+ scans (partitions) the entire array first, then divides it; doesn't need to scan again when the
divisions are done with their steps, so still O(n*log(n))
# what if we examine the whole array on the way down and on the way up?
+ still O(n*log(n)) – we do the examination twice per division level, but twice is not
significantly worse than once, since it is 2*n vs n
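The two cases above, throwing half away (binary search) versus keeping both halves (mergesort), can be sketched as follows. This is my own Python illustration (the notes specify no language), not code from the notes:

```python
def binary_search(a, target):
    """Throw-away-half case: constant work per level, O(log n) total."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # T: look at the center value...
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1            # ...and throw away the left half
        else:
            hi = mid - 1            # ...or throw away the right half
    return -1

def merge_sort(a):
    """Keep-both-halves case: O(n) merge per level, log n levels, O(n log n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves: n work at this level.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(binary_search([0, 1, 3, 4, 5, 7], 4))  # 3
print(merge_sort([1, 4, 3, 5, 7, 0]))        # [0, 1, 3, 4, 5, 7]
```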
Backtracking algorithms:
* Depth-first search
* (not breadth-first search, because BFS exhausts every option at one level before moving deeper, so it leaves nothing behind to back up to)
* We can use our DFS/graph search to refuse to generate more adjacent nodes at some spot in the graph if we believe that no further search from that point will be fruitful; the search then runs out of nodes to examine from there, and backs up to something it saw earlier
* Cannot guarantee a better O() complexity for backtracking algorithms, because in the worst case we might have to search the whole tree
* some problems are best solved with this approach, e.g., solving Sudoku
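The notes cite Sudoku; a smaller classic with the same shape (try an option, prune hopeless branches, back up) is N-queens. A minimal Python sketch of DFS with pruning, my own illustration rather than anything from the notes:

```python
def n_queens(n):
    """Count ways to place n non-attacking queens via backtracking DFS."""
    count = 0
    cols = []  # cols[r] = column of the queen already placed in row r

    def safe(r, c):
        # Prune: a placement conflicting with any earlier queen is not expanded.
        return all(c != cc and abs(c - cc) != r - rr
                   for rr, cc in enumerate(cols))

    def place(row):
        nonlocal count
        if row == n:
            count += 1          # every row filled: one full solution
            return
        for col in range(n):
            if safe(row, col):
                cols.append(col)    # commit to this option...
                place(row + 1)      # ...search deeper...
                cols.pop()          # ...then back up and try the next option

    place(0)
    return count

print(n_queens(4))  # 2
print(n_queens(6))  # 4
```

Note that, as the notes say, nothing here guarantees a good O(): pruning merely shrinks the tree we actually visit.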
Greedy algorithms:
* Greedy = never backtrack, never undo a decision
* like a graph search that doesn’t remember where it’s been
# if it did, it could do backtracking (and it wouldn’t be greedy then)
* typically do not find the best solution but can find a working solution for some problems
# A case where the optimal technique is greedy:
+ Activity selection: you have N activities, each with a specific start and stop time
+ The task is to find which activities to attend so that you attend the most possible, without
overlapping activities
+ Solution: sort activities by end time, then pick the activity that finishes soonest first, then the next soonest (among those that start after the last chosen activity ends), etc.
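The activity-selection solution above can be sketched in Python (my choice of language; representing activities as (start, end) tuples is also my assumption):

```python
def select_activities(activities):
    """Greedy activity selection: sort by end time, always take the
    activity that finishes soonest and doesn't overlap the last choice."""
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(activities, key=lambda a: a[1]):
        if start >= last_end:          # compatible with everything chosen so far
            chosen.append((start, end))
            last_end = end             # this decision is never undone
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))  # [(1, 4), (5, 7), (8, 11)]
```

The dominant cost is the sort, so the whole thing is O(n log n); and true to the greedy definition above, no decision is ever revisited.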
Dynamic programming:
* recursive algorithms that have identical subproblems somewhere in the recursion (you see the same problem appear again from a different direction)
# Mergesort and quicksort are not examples of dynamic programming
* E.g., graph search, where you arrive at the same node from two+ directions
# no reason to search onward again, because that's repeat work
* also, caching function results: memoization – also called tabling
# you mark a function (in your code) to say "remember the outputs from this function for all its inputs"
# when this function is run again with the same input, just reproduce the saved output
# E.g., the Fibonacci sequence: fib(n) = fib(n-1) + fib(n-2); fib(1) = 1; fib(2) = 1
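The Fibonacci example above, memoized in Python. Using the standard-library functools.lru_cache is one common way to "mark a function" as described; the notes don't prescribe a particular mechanism:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # remember outputs for all inputs (memoization)
def fib(n):
    if n <= 2:
        return 1           # base cases: fib(1) = fib(2) = 1
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
print(fib(50))  # 12586269025
```

Without the cache, fib recomputes identical subproblems exponentially many times; with it, each fib(k) is computed once, so fib(n) takes O(n) calls.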
_____________________
Q1 – Suppose I have a large array, and I pick a particular random element. Describe an algorithm that can efficiently determine if that random element is in the same position it would be if the array were sorted ascending. For example, for the array [1, 4, 3, 5, 7, 0], if I pick element 2 (value 3), the algorithm would say yes, the element is in its correct sorted position, since the sorted array is [0, 1, 3, 4, 5, 7] and the element is in the same position in both arrays. Give the order of complexity in big-O terms. You can only receive full credit if your algorithm is the most efficient possible.
Q2 – Suppose I’m looking at the 4-SAT problem, which is where you need to find values for variables such that a conjunction of 4-element disjunctions is true, e.g., (a OR !b OR c OR !d) AND (!a OR b OR !c OR d) AND …; you find a way to convert a 4-SAT problem into a 3-SAT problem (perhaps by creating more variables). We know 3-SAT is NP-complete. You also find a way to convert backwards from any 3-SAT problem to the 4-SAT version. What does that say about the “difficulty” of the 4-SAT problem?
Q3 – You’re building an inventory of all devices that exist on your company’s network. The network addresses all start with 10. but then they range from 10.0.0.0 to 10.255.255.255. That’s 24 bits of range, so 2^24 possible network addresses. Your company only has ~10000 devices, so they are quite sparse compared to the range of addresses. How should you keep track of your company’s devices, so you can quickly identify if a device exists for a certain network address? And vice versa, how can you efficiently list all the network addresses that are used? Furthermore, tell me if your approach allows you to efficiently list all the devices in the 10.30.0.0 to 10.30.255.255 range (more efficiently than testing all numbers in that 2^16 range), or if it’s not efficient to do that with your approach, tell me why not. Give the big-O for all aspects of this question.
Q4 – We used red-black trees to get and put elements (searching). We didn't use them for sorting. Can we? How efficient would that be? Describe an algorithm and give the big-O.
Q5- Suppose I have a problem I want to solve, and I find a way to represent (“model”) the problem using a 3-SAT solver tool, like Google’s OR-tools. This means I can translate my problem into a 3-SAT problem and use a 3-SAT solver. What does that say about the “difficulty” of my problem?
Q6- Social media companies have huge graphs of people and people who follow them. What technique would you use to efficiently be able to say, “N of your friends or friends’ friends also shared this item”?
Q7- A* is a planning problem not a doing problem. Why is it not ideal (or is it impossible) to make a robot perform A* in the real world, rather than in its head? I.e., it has to actually move to the next spot in the path as it runs through A*. Is there a different search that works better for physically acting out a graph search / pathfinding algorithm?