- **Puzzles**
(*https://www.mersenneforum.org/forumdisplay.php?f=18*)

  - **Matching problem**
(*https://www.mersenneforum.org/showthread.php?t=26868*)

[QUOTE=Walter;579823]It is pretty hard to imagine this game with such a lot of letters and envelopes...[/QUOTE]
It's trivial to imagine.

[QUOTE=henryzz;579773]
What is the expected number of times you need to pair the letters with envelopes to get all matching?[/QUOTE]
You need at least (1-eps)*n guesses on average (for any fixed eps>0 and sufficiently large n). Proof: think of a strategy as a decision tree. On the k-th level (after k guesses) there are at most (n+1)^k nodes, because each answer is one of the n+1 values 0..n; a permutation identified after k guesses is a leaf on level k. Let L=argmax( h : 1+(n+1)+...+(n+1)^h<=n! ); of course L=(1-eps)*n, since log_{n+1}(n!)=(1-o(1))*n. Then, for the expected number of guesses,
F(n) >= ( (L+1)*(n!-sum(h=0,L,(n+1)^h)) + sum(k=0,L,k*(n+1)^k) ) / n!,
because at most (n+1)^k permutations can be identified with exactly k guesses, and every remaining permutation needs at least L+1. From this, F(n) > (1-eps2)*n. (It could be possible to improve this estimation a very little.)

My wild guess is that you can get the permutation with only O(n*log(n)) guesses, along the lines of your ideas: you only need to roughly halve the search space with each guess, which is not implausible because each guess has as many as (n+1) different possible answers. [ We know: log2(n!)=O(n*log(n)). ]

ps. Even n=10 could be a very hard problem.

[QUOTE=Walter;579823]So, with n = 1M, it is true that my approach would probably use over 100k tries in this worst case scenario (it needs to "touch" each wrong position at least once). However, what would the average case scenario look like? My gut feeling is telling me it would be somewhere in that range too. But I might be completely off on that estimate. It is pretty hard to imagine this game with such a lot of letters and envelopes...[/QUOTE]
By "bisection strategy", I meant something like rotating large blocks of guesses (if your permutation was 012345, rotating the first 3 gives 120345). If the block was originally all correct, it's now all incorrect, so the number of matches drops by the size of the rotated block. If one of the incorrect entries was in the block, it drops by less. This lets you bisect down to the location of the relevant two entries and solve the problem a lot faster. The issue with trying only attempts that could still be correct given the previous answers is that, while that is the optimal use of previous information to maximize the chance of solving the problem on the next try, it is not the optimal way to gain the maximum amount of new information from the try if it turns out not to be correct.
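As a toy illustration of the block-rotation probe (my own sketch, not the poster's code):

```python
def matches(guess, secret):
    """Number of positions where guess agrees with the hidden permutation."""
    return sum(g == s for g, s in zip(guess, secret))

def rotate_block(guess, lo, hi):
    """Copy of guess with the entries on positions [lo, hi) rotated left by one."""
    block = guess[lo:hi]
    return guess[:lo] + block[1:] + block[:1] + guess[hi:]

secret = [0, 1, 2, 3, 4, 5]
guess = [0, 1, 2, 3, 4, 5]            # all six positions correct
probe = rotate_block(guess, 0, 3)      # 012345 -> 120345, as in the post
drop = matches(guess, secret) - matches(probe, secret)
# drop == 3, the block size: the block was entirely correct.  A smaller
# drop would reveal an incorrect entry inside the block - the signal
# used to bisect down to the two relevant positions.
```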

[QUOTE=R. Gerbicz;579825]ps. even n=10 could be a very hard problem.[/QUOTE]
Like I said... Trivial to imagine...

It's not clear to me what is allowed. Apparently you are allowed to switch pairs, and keep track of which pairs you've switched. Given n envelopes and n letters, a given pairing-up of letters and envelopes corresponds to one of the n! permutations of n elements.
Assuming you eventually reach a complete matching (the identity permutation) by switching one pair at a time, this would correspond to expressing the (inverse of the) original permutation as a product of transpositions. I suppose it is possible to determine, by repeated switching, [i]which[/i] of the letters are in the correct envelopes. Perhaps it is possible to aim for a nested series of subgroups of S[sub]n[/sub].

Amusingly, if you suppose that each attempt is made totally from scratch, and any of the n! permutations is equally likely, the "expected" number of tries before you match them all up (identity permutation) is n!. This is actually not hard to prove. Unfortunately, n! grows quite rapidly with n; 10! is already 3,628,800, so this approach is likely to take a while. In fact, the probability that the desired matching has [i]not[/i] been found in n! steps is (1 - 1/n!)^n!, which for n >= 10 is very close to 1/e = 0.367879+.

The number of "derangements" - permutations with no fixed points, i.e. [i]none[/i] of the letters in the right envelope - is [tex]n!\sum_{i=0}^n\frac{(-1)^{i}}{i!}[/tex], sometimes denoted !n or n¡ ("subfactorial n"). If n is at all large, this is very close to n!/e.
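The subfactorial and the 1/e figure above are easy to check numerically; a minimal sketch (my own code):

```python
import math

def subfactorial(n):
    """Number of derangements !n via the recurrence !n = n*!(n-1) + (-1)^n."""
    d = 1  # !0 = 1
    for k in range(1, n + 1):
        d = k * d + (-1) ** k
    return d

n = 10
fact = math.factorial(n)             # 3,628,800
miss_prob = (1 - 1 / fact) ** fact   # chance of no success in n! blind tries
# subfactorial(10) == 1334961, within 1 of 10!/e, and miss_prob differs
# from 1/e = 0.367879... only far beyond the sixth decimal place.
```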

I was watching Netflix and only came back to this thread now.
Walter: So your program is basically doing what you explained with the Excel pictures? If so, now I get it; even though I'm still not quite educated in your code, I get the idea. To explain my 1-4 switch: I wanted it to take as many steps as possible while preserving the minimalism. In the situation before the 9th call, there were only four switches possible. 2-3 was absolutely bad because both 2 and 3 would land on already-known bad positions. 3-1 and 2-4 were, as I realize now, the best choices. I went with 1-4 because 4 would land on a known bad position but 1 would not, which I considered acceptable since I was trying to produce the worst-case scenario. I didn't realize that I should have kept my focus on making the worst best switches based on my information (worst by randomness, best by strategy).

[QUOTE=R. Gerbicz;579825]My wild guess is that you can get the permutation with only O(n*log(n)) guesses,
as your ideas, for this you only need to halve the search space, what is not very unlikely because we have much more, (n+1) different answers on the guesses. [ we know: log2(n!)=O(n*log(n)) ].[/QUOTE]
I'm pretty sure O(n*log(n)) is possible on average. For that it's enough to find a single correct position in O(log(n)) tries on average, and then repeat for the remaining problem of size n-1. I believe this is possible with a bisection-like strategy: if you change some subset of the elements and the number of correct matches changes, then you know the version with more matches has at least one correct element in that subset. You can get at least one correct element overall by trying random permutations. I think the expected number of tries for the simple strategy "pick some half of the remaining elements, randomly permute them" before the number of matches changes has a finite upper bound over all n. That would allow you to bisect to the location of one correct element in O(log(n)).
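The bisection idea can be sketched as a Las Vegas procedure (my own construction, not anything from the thread): a drop in the match count after rotating a block of positions *certifies* a correct entry inside that block, while an unchanged count proves nothing (rotations can create accidental matches), so ambiguous runs simply restart with a fresh random guess.

```python
import random

def count_matches(guess, secret):
    return sum(g == s for g, s in zip(guess, secret))

def rotated(guess, block, shift):
    """Copy of guess with the values on the positions in `block` rotated."""
    out = list(guess)
    vals = [guess[p] for p in block]
    for i, p in enumerate(block):
        out[p] = vals[(i + shift) % len(vals)]
    return out

def locate_one_match(secret, rng, max_restarts=10_000):
    """Return one (position, value) pair that is provably correct.

    Only certified deductions are used: a score drop after rotating a
    block proves the block held a correct entry, and a zero-score guess
    proves every value it places is wrong at that position.
    """
    n = len(secret)
    for _ in range(max_restarts):
        guess = list(range(n))
        rng.shuffle(guess)
        base = count_matches(guess, secret)
        if base == 0:
            continue                      # no correct entry to bisect for
        cand = list(range(n))             # positions that may hold a match
        while len(cand) > 2:
            k = max(2, len(cand) // 2)
            nxt = None
            for half in (cand[:k], cand[k:]):
                if len(half) < 2 or nxt:
                    continue
                for shift in range(1, len(half)):
                    if count_matches(rotated(guess, half, shift), secret) < base:
                        nxt = half        # certified: a match sits in `half`
                        break
            if nxt is None:
                break                     # ambiguous probes: restart
            cand = nxt
        if len(cand) != 2:
            continue
        p, q = cand                       # at least one of p, q is correct
        for _ in range(1000):
            h = list(range(n))
            rng.shuffle(h)
            if count_matches(h, secret) != 0:
                continue                  # need an all-wrong helper guess
            for pos, other in ((p, q), (q, p)):
                t = list(h)
                j = t.index(guess[pos])
                t[pos], t[j] = t[j], t[pos]     # force guess[pos] onto pos
                s = count_matches(t, secret)
                if s == 2:
                    return pos, guess[pos]      # both changed spots match
                if s == 0:
                    return other, guess[other]  # pos certified wrong
    raise RuntimeError("no certified match found")
```

With the secret hidden behind count_matches, each probe models one submission of a full guess. Nothing here is tuned for guess count; the point is only that every returned pair is certified correct regardless of the random choices.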

It occurred to me that you can also get O(n*log(n)) by kind of the opposite strategy: instead of proving, one element at a time, what its correct value must be, you can collect permutations with zero matches, which prove where every element can [i]not[/i] be, and get enough of those in O(n*log(n)) guesses to rule out every wrong value.
With each element having a 1/n chance of being correct, all of them being wrong has probability about 1/e. So just trying random permutations will get you zero-match guesses, each of which rules out every value it places. You can repeat this until you mostly get repeats of already-ruled-out values; with a constant multiple of n tries, you can rule out any particular percentage of the possible values. So with O(n) tries, you can assume that, for each position, about 90% of the values have been ruled out and n/10 possible values are left.

To proceed further, you can specifically select some of the remaining possible values in a permutation. If each has a one-in-n/10 chance of being correct, you can place n/10 of them in the same permutation and still have a decent chance of there being zero matches overall. Fill all the other places with known-false values (which should be pretty easy with 90% of possibilities known false). So you get one new known-wrong value for each of n/10 places, and need to do this 10 times to get one more for all n positions.

So how many tries in total? When there are k possible values remaining per position, you can test about k places in parallel, so you need n/k tries to process all n places once. The total is the sum of n/k for k from 2 to n (or to n/10, if you get to 90% ruled out by totally random permutations as above), which is O(n*log(n)).
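A quick simulation of the first stage - purely random guessing, harvesting zero-match answers - can make the numbers concrete (the function name and parameters are mine, not the poster's):

```python
import random

def zero_match_coverage(secret, tries, rng):
    """Collect zero-match guesses; each one proves, for every position,
    that the value it placed there is wrong (n eliminations per hit)."""
    n = len(secret)
    ruled_out = [set() for _ in range(n)]
    for _ in range(tries):
        g = list(range(n))
        rng.shuffle(g)
        if sum(a == b for a, b in zip(g, secret)) == 0:
            for pos, v in enumerate(g):
                ruled_out[pos].add(v)
    eliminated = sum(len(s) for s in ruled_out) / (n * n)
    return ruled_out, eliminated

rng = random.Random(0)
secret = list(range(12))
ruled_out, frac = zero_match_coverage(secret, 200, rng)
# Roughly 1/e of random guesses score zero against the secret, so a few
# hundred tries eliminate most wrong values; the correct value at each
# position can never be ruled out this way.
```

Note the plateau the post describes: this stage can never eliminate more than (n-1)/n of the values, since a zero-match guess never places the correct value anywhere, which is why the later, targeted stage is needed.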

An offline version of the puzzle, where you need to give all of your guesses in advance,
so you learn nothing from the previous answers. For n=10, here is a solution using "only" 23 guesses, which is quite good compared to log2(10!)=21.79 (this proves nothing, so we could possibly do even better than log2(n!)). In other words, there are no two different permutations of 1..10 that give the same 23 answers.
[CODE]
P=[[3,10,6,4,8,9,2,7,5,1],\
[3,10,1,7,9,6,5,2,4,8],\
[5,3,6,2,4,10,7,8,1,9],\
[2,6,5,8,7,1,9,4,3,10],\
[8,10,3,7,1,5,4,9,6,2],\
[6,5,10,8,4,2,3,1,7,9],\
[3,8,7,2,4,5,6,10,9,1],\
[4,1,7,5,2,3,10,9,8,6],\
[8,4,10,5,1,2,3,6,7,9],\
[10,9,8,2,5,3,6,1,4,7],\
[4,9,7,2,8,6,3,1,5,10],\
[4,5,1,9,3,10,8,7,6,2],\
[3,9,7,10,4,2,5,8,6,1],\
[1,4,2,6,10,8,5,7,3,9],\
[3,4,6,9,7,1,8,10,2,5],\
[2,10,9,4,6,1,5,8,7,3],\
[1,9,8,6,5,10,7,4,3,2],\
[1,10,3,2,9,8,4,7,5,6],\
[7,3,8,4,2,10,6,5,1,9],\
[1,5,10,8,9,6,7,3,4,2],\
[7,3,5,4,6,1,9,10,2,8],\
[9,6,2,1,4,3,7,5,8,10],\
[7,2,9,8,6,1,4,10,3,5]];
[/CODE]
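The separation property - no two permutations produce the same answer vector - is easy to test for smaller n. A sketch in Python (using random rather than carefully constructed guesses, so the count will not be as tight as the 23-guess table above):

```python
import itertools
import random

def matches(a, b):
    return sum(x == y for x, y in zip(a, b))

def offline_guess_set(n, rng):
    """Grow a fixed guess list until the answer vectors - the number of
    matches each permutation scores against every guess - are pairwise
    distinct, i.e. the guesses solve the offline puzzle for size n."""
    perms = list(itertools.permutations(range(n)))
    sigs = [()] * len(perms)
    guesses = []
    while len(set(sigs)) < len(perms):
        g = list(range(n))
        rng.shuffle(g)
        guesses.append(tuple(g))
        sigs = [s + (matches(p, g),) for s, p in zip(sigs, perms)]
    return guesses

guesses = offline_guess_set(5, random.Random(0))
# log2(5!) ~ 6.9, so a modest number of random guesses typically
# separates all 120 permutations; hand-tuned sets do better still.
```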

In the worst-case scenario this can be solved in ~N^2 tries.
A faster practical solution (not sure if this is allowed) is to build a hash table assigning each unique letter a number, sort both the letters and the envelopes in ascending order of those numbers, and then the matching can be completed in ~k*N trials. (It depends on the definition of the problem; the statement is slightly vague.)
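If I read this right, the point is that once every letter and envelope carries a comparable identifier, no guessing is needed at all: sort both sides by that key and zip them. A minimal sketch (the addressee-name key and sample data are my invention):

```python
def pair_by_key(letters, envelopes, key):
    """Sort both sides by a shared identifier and pair them positionally:
    O(N log N) with a comparison sort, or ~k*N with a hash/counting pass."""
    return list(zip(sorted(letters, key=key), sorted(envelopes, key=key)))

letters = [("carol", "letter 1"), ("alice", "letter 2"), ("bob", "letter 3")]
envelopes = [("bob", "envelope B"), ("carol", "envelope C"), ("alice", "envelope A")]
pairs = pair_by_key(letters, envelopes, key=lambda item: item[0])
# every resulting pair agrees on its addressee name
```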
