Elora
New Member
Posts: 18
Post by Elora on Nov 18, 2018 20:05:31 GMT
In most cases, certain aspects of an algorithm are calibrated to fit one's personal taste. Just as importantly, it's good practice to check a function's expected range and distribution by running parts of it a very large number of times. In the graphs below, each point represents one complete run of the algorithm. The only variable that matters during this calibration is the angle from the center; the radius is a random factor added purely for clarity over large numbers of runs. Each of the four circles represents 2500 runs of an unspecified algorithm, and each separate pattern is created with a separate starting expression, such as x, x^2, x^3, etc. The algorithm that uses the expression is unimportant from the standpoint of calibration, since calibration is only concerned with aligning the output's range and distribution.
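None of the original code was posted, so here is a minimal sketch of the kind of calibration run described above, in Python rather than JB/LB. The expression choices, the mapping of output to angle, and the amount of radius jitter are all assumptions for illustration:

```python
import math
import random

def calibration_points(expr, runs=2500, seed=1):
    """One complete algorithm run per point: the expression's output is
    mapped to an angle; the radius is random jitter, added only so that
    repeated runs spread out visibly instead of overprinting."""
    rng = random.Random(seed)
    points = []
    for _ in range(runs):
        x = rng.random()                        # one run's random input
        angle = (expr(x) % 1.0) * 2 * math.pi   # output mapped onto the circle
        radius = 1.0 + 0.2 * rng.random()       # jitter for visual clarity only
        points.append((radius * math.cos(angle), radius * math.sin(angle)))
    return points

# One circle per starting expression, as in the graphs:
patterns = {name: calibration_points(f)
            for name, f in [("x", lambda x: x),
                            ("x^2", lambda x: x * x),
                            ("x^3", lambda x: x ** 3)]}
```

Plotting the point lists (with any graphics library, or JB/LB's own drawing commands) reproduces the ring-shaped patterns: the density of points at each angle shows the output distribution, while the jittered radius keeps overlapping runs visible.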
The density at each angle from the center is the likelihood that something will generate at the specific point defined by the inside edge of the circle. By running the algorithm's framework a large number of times, these density patterns emerge. One can then alter the expressions that determine the range accordingly, and spot places where the output overlaps with itself.
In the graphs below, the third circle distributes the dense/light patches in the most useful way for my purposes. Of course, this is always a judgement call, and one has to design calibrations that reveal the sought-after information in a useful way.
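As a concrete (hypothetical) version of that density check, one can bin the output angles and compare counts per bin; dense bins correspond to the dark patches in the graphs. The bin count and the x^2 stand-in expression are assumptions:

```python
import math
import random

def angle_density(expr, runs=2500, bins=36, seed=1):
    """Histogram of output angles: dense bins are where the algorithm
    is most likely to generate a point on the circle's inner edge."""
    rng = random.Random(seed)
    counts = [0] * bins
    for _ in range(runs):
        angle = (expr(rng.random()) % 1.0) * 2 * math.pi
        counts[int(angle / (2 * math.pi) * bins) % bins] += 1
    return counts

# Uneven counts across bins expose the dense/light patches described above:
density = angle_density(lambda x: x * x)
```

Comparing the histograms of several candidate expressions is a cheap way to choose the one whose dense/light layout best suits the purpose.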
Attachments: [calibration graphs; images not preserved]
Elora
New Member
Posts: 18
Post by Elora on Nov 19, 2018 18:17:35 GMT
I'm hoping this will be useful as an example of diagnostic testing in general terms that can be applied to any situation. Anyway, in analyzing the patterns, it becomes evident that the distribution is Gaussian, increasing linearly with the angle, and that it can be normalized by factoring the inverse of the input through the algorithm. The result is the middle graph, which evens out the weights/biases in the distribution. It also reveals that the output itself has a quantized nature: even though each line is weighted evenly, there are clearly defined lines where the output tends to collect.
Attachments: [distribution graphs; images not preserved]
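The post doesn't show how "factoring the inverse of the input through the algorithm" was done. One plausible reading, sketched in Python with an assumed x^2 bias standing in for the unposted expression, is to pre-compose the expression with its inverse so the composed output comes out uniform again:

```python
import math
import random

def biased(x):
    """Stand-in expression whose output bunches up (assumed x^2 bias)."""
    return x * x

def normalized(x):
    """Factor the inverse of the input through first: sqrt undoes x^2."""
    return biased(math.sqrt(x))

rng = random.Random(1)
evened = [normalized(rng.random()) for _ in range(2500)]
# 'evened' is now uniform on [0, 1): the weights/biases are evened out.
```

This is the same inverse-transform idea in reverse: if the bias of the forward expression is known, feeding its inverse through first flattens the output distribution, which matches the description of the middle graph.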
Post by honkytonk on Nov 20, 2018 13:05:44 GMT
But where is the code for these pretty drawings?
Elora
New Member
Posts: 18
Post by Elora on Nov 20, 2018 15:04:59 GMT
Of course that would be the question. The graphs are merely used to reveal tendencies, and this example is specifically about critical thinking and concordance independent of code, since we tend to itemize and confine our experiences to specific examples. Far more important is honing one's ability to develop systems of adaptation that can be applied to any example. I remember a teacher who explained it as "engaging hyperspace".
Post by honkytonk on Nov 21, 2018 16:47:40 GMT
Oh OK, then the code for the drawings needs to be well commented, so one can understand whose it is.
Elora
New Member
Posts: 18
Post by Elora on Nov 21, 2018 21:02:29 GMT
Still focused on something in machine-runnable form? This post is about general diagnostic strategies.
Post by Rod on Nov 22, 2018 15:06:00 GMT
But what is the expected outcome? What is to say that a bland, even distribution is correct? Take text analysis: you would be looking for contrast differences and patterns. Letters have slopes, so we would be looking for slopes, keeping aligned-slope info and discounting non-aligned-slope info.
I can see the angle being useful in that context, but you lose me when you take a single point and smear it into an even-toned circle.
Elora
New Member
Posts: 18
Post by Elora on Nov 22, 2018 22:31:14 GMT
Of course the process is subjective, extremely so, and depends directly on what is being tested. It's good that it invokes problem-solving responses in unrelated areas; all too often, when we put a name on something we stop exploring it and accept what has been decided about it, usually at the expense of further discovery.
In the above example, each single point represents one complete run, and each graph is a calibration test to verify that the expected area will be completely and evenly covered by random selections. Since the goal in this case was completeness and evenness, the variation producing these properties is chosen, further refined, and repeated until arriving at a sufficient approximation. The algorithm itself wasn't terribly interesting, but as a foundation step, a lot of other things built on top of it can be affected, even to the point of the top layers appearing to be broken.
The radiating marks in the tests ended up being unimportant, as they were an artifact of scaling to show the calibration itself, and every valid output was indeed covered.
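Since the stated goal was completeness and evenness, a crude automated version of that check might look like the following sketch (hypothetical; the real algorithm was never posted, so simple expressions stand in for it):

```python
import random

def coverage_check(expr, runs=20000, bins=36, seed=1):
    """Crude completeness/evenness test: run the (stand-in) algorithm
    many times, confirm every angular bin is hit and no bin dominates."""
    rng = random.Random(seed)
    counts = [0] * bins
    for _ in range(runs):
        angle = expr(rng.random()) % 1.0      # output folded onto one turn
        counts[int(angle * bins) % bins] += 1
    complete = all(c > 0 for c in counts)     # every region covered
    even = max(counts) < 2 * (runs / bins)    # loose evenness tolerance
    return complete, even

ok = coverage_check(lambda x: x)        # identity: complete and even
bad = coverage_check(lambda x: x ** 3)  # skewed: covered, but uneven
```

Repeating this while refining the expression, and stopping once both flags pass, matches the "refine and repeat until a sufficient approximation" loop described above.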
Post by honkytonk on Nov 23, 2018 12:54:50 GMT
Without a concrete example (with the code commented) nothing proves that these drawings were not made with "Paint". Lol!
Elora
New Member
Posts: 18
Post by Elora on Nov 23, 2018 15:46:32 GMT
Your words of pathos will be ignored. I see through the attempt to manipulate people into expecting and demanding code, and your choice to attack people because you value code more than respect.
This is part of the importance of developing thinking processes that are independent of code: there is a danger of becoming a derivative thinker through absolute reliance on others' concrete examples, as evidenced by the measures you are willing to take at the expense of others. BYE!
Post by tenochtitlanuk on Nov 23, 2018 17:49:24 GMT
I think honkytonk simply didn't understand what you are doing. He should not have been rude and implied that you had not produced your results programmatically. But I too am not sure what your results represent! HOWEVER, this forum is a friendly place and I LOVE learning what others have been doing with JB/LB. Real examples of someone doing something that's useful for the OP are always the most interesting. Keep posting, Elora, and stretching our expectations!
Post by Rod on Nov 23, 2018 18:07:22 GMT
Yes, I am interested in the concept of image enhancement, and the example provided is classic. But how do we get there? Fundamental generic testing is fine, but can we get closer to a real-life analysis example that will keep me focused?
Post by honkytonk on Nov 24, 2018 15:21:42 GMT
Still no code?
Elora
New Member
Posts: 18
Post by Elora on Nov 24, 2018 16:30:15 GMT
Moderator, please block this person from my thread; they are single-minded in their expectations and (snipped).
This thread is about the discussion of diagnostic methodology; there is a section of the forums devoted to code examples, which this post is not located in.
Post by Rod on Nov 24, 2018 19:14:10 GMT
Well, free speech is in vogue. You are being a little sensitive. Honkytonk is showing just how insensitive he is, and he is not winning friends here. But if you want moderation, I can wipe the slate clean and we can all start again?
This is a coding forum and folks expect code, so don't expect a nebulous discussion about concepts to hold folks' attention for too long.
The idea is interesting, but you are talking to a forum full of coders.