That is fine. I moved some things around, close to what you were talking about. I didn’t delete anything, just moved the score and pc-cycles columns far over to the right. Feel free to customize it as well.
That looks good, smokie. I should add the multiplicity to the solve log.
Another test run completed, for cipher jarlve_p20_8nulls. The overall solve rate is 5.6%, and at the correct number of nulls & skips it is 50%.
The high score and multiplicity table:
8 nulls: 24763, 0.183
7 nulls & 1 skip: 21737, 0.185
6 nulls & 2 skips: 21716, 0.187
5 nulls & 3 skips: 21974, 0.189
4 nulls & 4 skips: 21961, 0.191
3 nulls & 5 skips: 21726, 0.192
2 nulls & 6 skips: 21795, 0.194
1 null & 7 skips: 21506, 0.196
8 skips: 21597, 0.198
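Multiplicity here is, as I understand it, the number of distinct symbols divided by the cipher length, which is why the column shifts slightly as the division changes. Under that assumption the figure can be reproduced with a tiny helper (the string below is a toy example, not one of the test ciphers):

```python
def multiplicity(cipher: str) -> float:
    """Number of distinct symbols divided by cipher length."""
    return len(set(cipher)) / len(cipher)

# Toy cipher: 3 distinct symbols over 10 positions -> 0.3
print(multiplicity("ABCABCABAB"))
```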
50% is really great. We should calculate the number of restarts depending on the number of possible division combinations, right? Like for an 8 null & skip cipher there are 9 divisions, so how many restarts per division? If it is 50%, that is like flipping a coin, so how many coin tosses make us sure that we haven’t missed any opportunity to solve the cipher? I haven’t tried the new version yet, but will the output show the divisions? What about an option to lock in on a particular division?
50% is really great. We should calculate the number of restarts depending on the number of possible division combinations, right? Like for an 8 null & skip cipher there are 9 divisions, so how many restarts per division? If it is 50%, that is like flipping a coin, so how many coin tosses make us sure that we haven’t missed any opportunity to solve the cipher?
Yes, but we don’t know the cipher’s individual solve rate percentage at any given number of hc iterations, so we cannot make the coin flip assumption.
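For what it is worth, the arithmetic behind the coin-toss question is simple: if each restart independently solved with probability p, the chance of n restarts all missing would be (1-p)^n, so to be (1-ε) confident of not missing a solvable division you would need n ≥ log(ε)/log(1-p). A sketch, with the caveat above that per-restart independence and a known solve rate are exactly the assumptions we cannot make:

```python
import math

def restarts_needed(p_solve: float, miss_prob: float) -> int:
    """Smallest n with (1 - p_solve)**n <= miss_prob, assuming each
    restart is an independent trial with solve probability p_solve."""
    return math.ceil(math.log(miss_prob) / math.log(1.0 - p_solve))

# With a 50% per-restart solve rate, pushing the chance of missing
# a solvable division below 1% takes:
print(restarts_needed(0.5, 0.01))  # -> 7
```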
I haven’t tried the new version yet, but will the output show the divisions?
It makes directories under the output directory and sorts the results like this:
What about an option to lock in on a particular division?
Could be done. Or a switch between the divisions and random bias system. What do you want?
I suppose no additional options for me right now. I have thought about trying to predict what will happen, but maybe not. Do you think that work on a particular null skip count will show that the message more likely has a certain number of nulls and a certain number of skips, and then we will want to focus on that more? Or, if it can indicate how many nulls and skips, is there no need to focus because we will already have a decent solution?
I like the divisions approach much more than the random bias approach.
Thanks.
Do you think that work on a particular null skip count will show that the message more likely has a certain number of nulls and a certain number of skips, and then we will want to focus on that more? Or, if it can indicate how many nulls and skips, is there no need to focus because we will already have a decent solution?
I think it is potentially valuable information, but it is hard for me to say at this point, and I would prefer not to focus on any one division and instead give every division an equal fighting chance.
Doing a run on smokie_p20_5nulls_5skips and it has not solved yet:
10 nulls: 21983, 0.190
9 nulls & 1 skip: 21867, 0.192
8 nulls & 2 skips: 21789, 0.194
7 nulls & 3 skips: 21931, 0.196
6 nulls & 4 skips: 21746, 0.198
5 nulls & 5 skips: 22005, 0.200
4 nulls & 6 skips: 22213, 0.201 <— spike
3 nulls & 7 skips: 21986, 0.203
2 nulls & 8 skips: 21847, 0.205
1 null & 9 skips: 21975, 0.206
10 skips: 21983, 0.208
Spike at 4 nulls & 6 skips which is close to 5 nulls & 5 skips though I would not put my money on it. I see it as extra information to interpret.
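Picking out a spike like that can be done mechanically, e.g. by flagging the division whose high score sits furthest above the mean. A small sketch over the run above (a simple heuristic, not something the program itself does as far as I know):

```python
# High scores per division from the smokie_p20_5nulls_5skips run.
scores = {
    "10 nulls": 21983, "9 nulls & 1 skip": 21867, "8 nulls & 2 skips": 21789,
    "7 nulls & 3 skips": 21931, "6 nulls & 4 skips": 21746,
    "5 nulls & 5 skips": 22005, "4 nulls & 6 skips": 22213,
    "3 nulls & 7 skips": 21986, "2 nulls & 8 skips": 21847,
    "1 null & 9 skips": 21975, "10 skips": 21983,
}

mean = sum(scores.values()) / len(scores)
spike = max(scores, key=scores.get)  # division with the highest score
print(spike, round(scores[spike] - mean, 1))  # margin above the mean
```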
I like the divisions approach much more than the random bias approach.
Me too.
I am looking forward to trying it.
I am looking forward to trying it.
There is a download link on the first post of page 102.
O.k. I am downloading it now. Thanks.
I made a lot of additions to the spreadsheet, starting over at 8 nulls & skips. I felt that starting over at 1 null & skip would be a waste of time, but let me know if you want any more. I wanted to see the new program and options first, and then we should be pretty much set.
I am playing around with the program for the moment. Thanks.
I felt that starting over at 1 null & skip would be a waste of time, but let me know if you want any more.
I prefer to start over completely but keep the work we already did in a separate spreadsheet.
Currently doing more testing on smokie_p20_5nulls_5skips; the cipher is still very difficult. It worries me a bit, and I will take some time to figure things out.
O.k. I will make an all new spreadsheet. It will be on a new tab at the bottom. Working on it now.
Why do you think that smokie_p20_5nulls_5skips is so difficult? Because it has 10 nulls & skips? Something else?
The program does do a good job of detecting null and skip locations. For smokie_17/0 I put it on 17, with 1 restart, and 160k, 320k and 640k null skip hillclimber iterations.
With 160k, I got 4 exact null positions and 4 null positions within 1 or 2. For such a complicated mess, it came up with almost half of the positions.
With 320k and 640k, the scores were higher, but I only got 1 or 2 close positions out of 17.
I am working on smokie_5/5 now, at 160k. It seems like for every null skip count, there must be an optimum number of iterations, and that the program can get stuck on certain incorrect positions with no way to back out.
I wonder if taking all of the positions found, for all divisions, and then seeing which ones are found most frequently would help. Maybe then we could lock some of them in. I am sure that you would prefer that the program be fully automated, but maybe by looking at the stats we can learn something.
Why do you think that smokie_p20_5nulls_5skips is so difficult? Because it has 10 nulls & skips? Something else?
The amount and the positions. These kinds of test ciphers are exactly what we need.
It seems like for every null skip count, there must be an optimum number of iterations, and that the program can get stuck on certain incorrect positions with no way to back out.
Yes. At some point restarts will become more efficient from a solves per time perspective.
I wonder if taking all of the positions found, for all divisions, and then seeing which ones are found most frequently would help. Maybe then we could lock some of them in. I am sure that you would prefer that the program be fully automated, but maybe by looking at the stats we can learn something.
You can sort of lock positions in manually by going to manipulation, picking either add or remove character, and entering the position.
Do a certain number of restarts, lock in the best candidate, and repeat. Something like that, right?
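The voting part of that loop can be sketched like this: collect the candidate null/skip positions from each restart, keep the ones reported by a large enough fraction of restarts, and lock those in for the next round. The helper name and the data below are hypothetical; in the actual program the locking step would be done by hand via manipulation:

```python
from collections import Counter

def consensus_positions(restart_results, min_fraction=0.6):
    """Positions reported by at least min_fraction of the restarts."""
    votes = Counter(pos for result in restart_results for pos in set(result))
    cutoff = min_fraction * len(restart_results)
    return sorted(pos for pos, n in votes.items() if n >= cutoff)

# Toy data: candidate null/skip positions from five restarts.
runs = [[4, 17, 88], [4, 17, 90], [4, 23, 88], [4, 17, 88], [5, 17, 88]]
print(consensus_positions(runs))  # -> [4, 17, 88]
```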