I am going to resume testing on Jarlve 3/5, wanting to know where the point of diminishing returns is for homophonic hill climber iterations. I ran 500 restarts at 200,000 / 10,000 / 60 / 15 / 2 and got 12 solves. Will add 200,000 for another 500 restarts and so on. Curious. Will make entries on the optimization spreadsheet soon.
Okay, sounds cool.
Zdecrypt multigrams stats for:
——————————————————–
Bigrams: 41
– Normalized: 0.1547169811320755
Bigram ioc: 98
– Normalized: 0.0008552826796529996
Ngrams: 49
Asymmetry: 2446
Bigram map:
——————————————————–
10 28 19 10
25 4 30 50 10
28 13 17 5 15 19 53
27 62 34 5 19 6
16 47 7 23 51 14 20 9 27
5 19 7 25 21 19 53
21 19 5 19 15 19 11 14 20
55 3 30 50
18 35 59 40 63 55 19 6 22 16
20 23 29 42
37 51 58 19 20 37 51 18 35 21 19
22 16 23 11 5
19 19 20 58 19 20 22 16
7 25 19 40
29 42
17 5 55 3 19 53 11 5
16 47 7 23 51 55 19 40
29 42 59 40 63 9 27 62 34 28 13
Also has four trigrams; still no solve.
Hi Jarlve, any way you can drop a few shortcut buttons onto your program underneath the Load state button? One would put the 340 cipher into the input window and the other would set 17×20 dimensions for the input window. It's a bit awkward and, after a few hundred different tests, time consuming. Cheers.
Hey Mr lowe, sure would like to help, but why do you need that functionality? If you want a fresh copy of the 340 in the input window, then you could just use Save/Load state, no?
It's after I have used the transposition matrix. Basically there is no back button, not sure if you know what I mean. No big deal, I get around it okay; it just adds a few steps to get me to a new starting point. Cheers.
Okay. Try the following. Open the 340 and then click on Save state. Then do your transposition or whatever. And now, to get the 340 back just click on Load state. Is that what you want?
Will do. I will PM you if I have any questions. Cheers.
I see that you tried gradual shift from 0 to 20%. It looks like there may have been a slight improvement but you are doing another 1,000 restarts?
Here is another idea that we touched on before. Instead of a percentage, two variables, x and y. The program loops through 1 to x and chooses random positions for skips or nulls between 1 and 340. Then it loops through 1 to y and chooses random positions within the shift divisor. This way, we could try x = 85, y = 15, or x = 850, y = 150, etc. The program is still getting close but not exact for positions. This idea might interfere with the sub restarts though, huh?
And maybe a timer so that we can track how many solves we get per hour.
I am still working on sub its and am amazed that I have not found a point of diminishing returns for increasing sub its. Jarlve 3/5.
200,000 2.4%
400,000 10.2%
600,000 14.0%
800,000 16.0%
1,000,000 18.8%
1,200,000 22.0%
1,400,000 26.0%
It is only 500 restarts, but the Acer took 4 days to do the 1,400,000. I stuck with practical cryptography 5 grams PM. Once I find the point of diminishing returns, if I ever do, I will try reddit1805 to compare.
I was thinking with the timer we could calculate what percentage of the time the program allocates to gen its versus sub its, to see if it is more efficient to do just twice as many restarts instead of twice as many sub its. I really do like getting high percentages, though, because it helps us to judge whether we have thoroughly explored a particular division.
What is performance mode exactly?
The program searches through a huge array to find a match for the n-gram that it is going to score. I saw that the n-gram files are sorted by frequency, so sorting this way must save time. But I don't know about the performance mode. Have you tried sorting in alphabetical order, breaking up the array into 26 separate arrays, then sorting each array by frequency, and then making the program decide which array to look at by the first letter in the n-gram? Just wondering about the program.
Thanks.
I see that you tried gradual shift from 0 to 20%. It looks like there may have been a slight improvement but you are doing another 1,000 restarts?
2,000 restarts for all tests. Will also try from 0 to 25%, etc. My hunch is that around 0 to 30% will be optimal since that is closest to 15% on average.
Here is another idea that we touched on before. Instead of a percentage, two variables, x and y. The program loops through 1 to x and chooses random positions for skips or nulls between 1 and 340. Then it loops through 1 to y and chooses random positions within the shift divisor. This way, we could try x = 85, y = 15, or x = 850, y = 150, etc. The program is still getting close but not exact for positions. This idea might interfere with the sub restarts though, huh?
Sure. I am up for trying your ideas, but let me test a few things of my own first if you do not mind. Will let you know when I am done.
And maybe a timer so that we can track how many solves we get per hour.
Not a good idea I think, since the speed is never stable and differs from one system to another. Better to track solves per 1,000k sub iterations or something.
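For reference, a minimal sketch (in Python, just for illustration) of how that hardware-independent metric could be computed. The function name is made up, and the numbers plugged in are the ones smokie reports further down in the thread (500 restarts at 1,400,000 sub iterations with 26% solved), not new results.

def solves_per_million_sub_its(solves, restarts, sub_its_per_restart):
    # Normalize solves by total substitution work instead of wall-clock time,
    # so results from different machines stay comparable.
    total_sub_its = restarts * sub_its_per_restart
    return solves / (total_sub_its / 1_000_000)

# Example with figures quoted later in the thread:
# 26% of 500 restarts solved at 1,400,000 sub iterations each.
rate = solves_per_million_sub_its(solves=130, restarts=500,
                                  sub_its_per_restart=1_400_000)
print(round(rate, 3), "solves per 1,000k sub iterations")   # ~0.186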
I am still working on sub its and am amazed that I have not found a point of diminishing returns for increasing sub its. Jarlve 3/5.
200,000 2.4%
400,000 10.2%
600,000 14.0%
800,000 16.0%
1,000,000 18.8%
1,200,000 22.0%
1,400,000 26.0%
It is only 500 restarts, but the Acer took 4 days to do the 1,400,000. I stuck with practical cryptography 5 grams PM. Once I find the point of diminishing returns, if I ever do, I will try reddit1805 to compare.
Okay, wait a little bit before testing the reddit1805 ngrams. I am working on a few different ngram extraction techniques that may provide a better set.
I was thinking with the timer we could calculate what percentage of the time the program allocates to gen its versus sub its, to see if it is more efficient to do just twice as many restarts instead of twice as many sub its. I really do like getting high percentages, though, because it helps us to judge whether we have thoroughly explored a particular division.
It will differ from cipher to cipher.
What is performance mode exactly?
It hard-wires some of the solver's settings internally, plus a few other tricks, to improve performance. Only 26 letters can be used and they must be "ABC…", so it is not applicable to everything and therefore is not the standard setting.
The program searches through a huge array to find a match for the n-gram that it is going to score. I saw that the n-gram files are sorted by frequency, so sorting this way must save time. But I don't know about the performance mode. Have you tried sorting in alphabetical order, breaking up the array into 26 separate arrays, then sorting each array by frequency, and then making the program decide which array to look at by the first letter in the n-gram? Just wondering about the program.
The ngram order does not matter, but I usually sort them by frequency. The program does not search through a huge array; it simply stores them in a 5-dimensional array where each dimension has 26 elements, for example.
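To make the direct-lookup point concrete, here is a minimal Python sketch of that kind of 5-dimensional (26 × 26 × 26 × 26 × 26) score table, flattened to one array and indexed by the letters of the 5-gram. It is only an illustration of the idea, not the solver's actual code, and the names are made up.

ALPHABET = 26

def ngram_index(ngram):
    # Map a 5-letter string over A..Z straight to a position in the flat table,
    # e.g. 'THERE' -> one fixed index; no searching or sorting is needed.
    idx = 0
    for ch in ngram:
        idx = idx * ALPHABET + (ord(ch) - ord('A'))
    return idx

# One score slot per possible 5-gram (26^5 of them). The order of the ngram
# file does not matter, because every entry is written to its fixed index.
scores = [0.0] * ALPHABET ** 5
scores[ngram_index("THERE")] = 12.3   # e.g. a log-frequency score from the file

def score_plaintext(text):
    # Sum the scores of every overlapping 5-gram with direct lookups.
    return sum(scores[ngram_index(text[i:i + 5])] for i in range(len(text) - 4))

print(score_plaintext("THEREABOUTS"))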
Rushed response. I’ll add later if needed.
Smokie, I am ready to test some of the ideas you have mentioned for the nulls & skips solver. Not sure which idea you want to test first, but let us go through them one by one. If you can quote or reformulate the idea, that would be wonderful.
Okay, see what you think about this one for getting faster, more accurate skip/null positions. Maybe it will work, maybe not.
Instead of a shift percentage, use two variables, x and y. The program loops through 1 to x and chooses random positions for skips or nulls between 1 and 340. Then it loops through 1 to y and chooses random positions within the shift divisor, which seems to work really well set to 2. This way, we could try
x = 7, y = 1
x = 85, y = 15
x = 850, y = 150, etc.
Or we can set x and y so that the sum coincides with the sub restarts somehow, if we wanted to. Simple example: if the program tries 500 iterations before a sub restart, then set x = 425 and y = 75. Just before the sub restart, the program will try 75 times in a row within the shift divisor. Something like that.
This way we can still set it to x = 7 and y = 1, which is pretty much the same as just one variable, shift percentage 15%, if we want to.
So instead of having a random chance to do either A or B, do A x times and then B y times and repeat. Will work on it.
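Just to pin down the agreed reading ("do A x times, then B y times, and repeat"), here is a rough Python sketch of that schedule. It is an illustration only: move_random() and move_within_shift_div() are hypothetical stand-ins for the solver's real moves, and treating the shift divisor as a small window of at most 2 positions either way around an existing skip/null is an assumption, not confirmed behaviour.

import random

CIPHER_LEN = 340   # length of the 340

def move_random(positions):
    # A-move: drop one skip/null at a completely random position in 1..340.
    i = random.randrange(len(positions))
    positions[i] = random.randint(1, CIPHER_LEN)

def move_within_shift_div(positions, shift_div=2):
    # B-move: nudge an existing position by at most shift_div either way
    # (assumed meaning of "within the shift divisor").
    i = random.randrange(len(positions))
    shifted = positions[i] + random.randint(-shift_div, shift_div)
    positions[i] = min(max(shifted, 1), CIPHER_LEN)

def run_schedule(positions, x, y, cycles, shift_div=2):
    # Fixed-count schedule instead of a per-iteration coin flip:
    # x coarse A-moves, then y fine B-moves, repeated.
    for _ in range(cycles):
        for _ in range(x):
            move_random(positions)
        for _ in range(y):
            move_within_shift_div(positions, shift_div)

# e.g. x = 425, y = 75 lines up with 500 generation iterations per sub restart,
# and x = 85, y = 15 keeps roughly the same split as a 15% shift chance.
positions = [random.randint(1, CIPHER_LEN) for _ in range(5)]
run_schedule(positions, x=425, y=75, cycles=1)

The scoring and acceptance step of the hill climber would wrap around these moves; it is left out here to keep the sketch short.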
Deleted post due to a spreadsheet glitch.
@smokie, I decided not to test any more new things and to go with temp 40, shift 0 to 30%, and shift div 2 to attack the 340.
Here’s the version with the gradual shift: https://drive.google.com/open?id=1gOkMd … 70mWfC6nhT
Feel free to test it out a bit. Will wait for your feedback, and then we can agree on what the final test on the 340 will look like. I am thinking of going up to 14 skips & nulls, and, starting from 10, using 6-grams and 1,000,000 substitution iterations.
Not testing or optimizing any more is absolutely fine. I made the trials 2 spreadsheet a long time ago; see if you want to update it.
https://docs.google.com/spreadsheets/d/ … =509220377
My most recent testing shows that 1,400,000 has a much higher solve percentage than 1,000,000. I am working on temperature, and it seems as though the higher the temperature, the better the solve percentage. I get the best results at 80, but that is only with 500 restarts, so… maybe not accurate enough? 0% to 30% is fine, and I totally agree with shift div 2.