What would be the downside of fractal diffusion?
I'm using TF, and using sampling to extract eigenvectors to be placed on the outer vertices as a function. What are the dangers to stability?
How are you implementing this in code?
(me being a mathlet, this is gibberish to me otherwise)
This is hardly a beginner question lol
C#, TensorFlow. Forgive me: I'm extracting a magnitude, finding the (0,0,0) xyz points, and getting a velocity. I'm sending that velocity to a 3D fractal-based diffusion pattern and sampling it. That's my basis vector, and I'm computing the function in n dimensions.
The rate of dissipation is the basis vector for curvature. Fractals typically being highly unstable, I want to evaluate the downside. This is the same fractal every time:
```csharp
// Spherical-power step of the fractal (Mandelbulb-style), assuming z is a
// System.Numerics.Vector3 and r is its magnitude:
float r = z.Length();
float theta = (r < 1e-6f) ? 0f : MathF.Acos(z.Z / r); // polar angle, guarded near the origin
float phi = MathF.Atan2(z.Y, z.X);                     // azimuthal angle
float newR = MathF.Pow(r, Power);                      // radius raised to Power
float newTheta = Power * theta;                        // angles scaled by Power
float newPhi = Power * phi;
```
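For anyone following along, here is a minimal sketch of the full iteration around that step. The back-conversion to Cartesian, the seed addition, and the bailout test are my assumptions about the surrounding loop (standard Mandelbulb convention), not code from the repo:
```csharp
using System;
using System.Numerics;

static class MandelbulbSketch
{
    // Repeatedly applies the spherical-power step, converts back to
    // Cartesian, and adds the seed point c. Returns the iteration count
    // before |z| escapes the bailout radius -- a crude probe of how fast
    // the dynamics dissipate or blow up at a given point.
    public static int Iterate(Vector3 c, float Power = 8f, int maxIter = 64, float bailout = 2f)
    {
        Vector3 z = c;
        for (int i = 0; i < maxIter; i++)
        {
            float r = z.Length();
            if (r > bailout) return i; // escaped: orbit is unstable here

            // Same spherical-power step as the snippet above
            float theta = (r < 1e-6f) ? 0f : MathF.Acos(z.Z / r);
            float phi = MathF.Atan2(z.Y, z.X);
            float newR = MathF.Pow(r, Power);
            float newTheta = Power * theta;
            float newPhi = Power * phi;

            // Back to Cartesian, plus the seed
            z = newR * new Vector3(
                    MathF.Sin(newTheta) * MathF.Cos(newPhi),
                    MathF.Sin(newTheta) * MathF.Sin(newPhi),
                    MathF.Cos(newTheta))
                + c;
        }
        return maxIter; // never escaped
    }
}
```
The stability danger in the original question lives in MathF.Pow(r, Power): for Power > 1, any radius above 1 gets amplified, so tiny float errors near the set boundary can flip a point between escaping and not.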
It's certainly beyond me, maybe we should have a Maths channel in this discord...
Thank you for taking the time to look at this. Maybe I can go step by step through what I am hoping to do?
Sure, it can't hurt.
So we're talking proof of concept: my objective is to circumvent the iterative nature of MLA (multi-head latent attention) within the compute procedure.
DeepSeek-V3 paper, page 10, figure 3:
https://huggingface.co/papers/2412.19437
In conjunction with the strategies in the recent Absolute Zero paper:
https://huggingface.co/papers/2505.03335
What process can we use to embed low-rank joint compression while at the same time providing structure for dynamic activation sequencing before MoE dispatch?
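As a reference point for what "low-rank joint compression" means in that paper: MLA down-projects the hidden state to a small shared latent and up-projects keys and values from it. A minimal sketch with plain float[] math; the matrix names follow the paper, but the dimensions and everything else here are my assumptions:
```csharp
using System;

static class MlaCompressionSketch
{
    // Naive dense matvec: y = W (rows x cols) * x (length cols)
    static float[] MatVec(float[,] W, float[] x)
    {
        int rows = W.GetLength(0), cols = W.GetLength(1);
        var y = new float[rows];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                y[i] += W[i, j] * x[j];
        return y;
    }

    // Low-rank joint KV compression, per token:
    //   c = W_DKV * h   (d -> dc, shared latent with dc << d)
    //   k = W_UK  * c   (dc -> d, up-projected key)
    //   v = W_UV  * c   (dc -> d, up-projected value)
    // Only the small latent c has to be cached, which is the point.
    public static (float[] k, float[] v, float[] c) CompressKV(
        float[] h, float[,] W_DKV, float[,] W_UK, float[,] W_UV)
    {
        var c = MatVec(W_DKV, h);
        var k = MatVec(W_UK, c);
        var v = MatVec(W_UV, c);
        return (k, v, c);
    }
}
```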
We could create a cascading isomorphism; the isomorphism's isometric behavior would not be correlated with the dependent vertices but could be based upon dissipation. The Absolute Zero paper has the right idea, but it is structured as a framework. That contextual premise suggests that dynamically structuring RL based upon pretense is viable, but how can that be established when structuring dependencies in MoE before activation?
This is the correlation between establishing zero entropy loss and providing that same context in both MoE and node activation, with both being contexts of initialization. Essentially: is it possible to have a shared reference across initialization, attention, and activation, with some basis of reinforcement pre-execution?
I am suggesting yes, with encapsulation and dynamic entropy. For example, in this repository my embedding is nested on the outermost vertex, and the basis vectors serve as both curvature and function, derived from a cumulative reference before training. This is just a PoC; of course the operations would run on the kernel. But what is cool is the ability to measure dissipation and, at the same time, use the rate of dissipation as a contingent to formulate curvature based on the extracted eigenvalue:
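To make that concrete, here is one way to read "rate of dissipation as a contingent for curvature". The orbit comes from the iteration sketched earlier; the mean-log-ratio dissipation estimate and the eigenvalue scaling are purely my guess at the intent, not code from the repo:
```csharp
using System;
using System.Numerics;

static class DissipationCurvatureSketch
{
    // Hypothetical reading: run the orbit, track |z| step to step,
    // estimate a mean log growth rate (a crude Lyapunov-style proxy),
    // then scale an extracted eigenvalue by it to get a curvature
    // coefficient. The exp(-rate) mapping at the end is arbitrary.
    public static float Curvature(Vector3 c, float eigenvalue,
                                  float Power = 8f, int steps = 32)
    {
        Vector3 z = c;
        float logRate = 0f;
        int counted = 0;

        for (int i = 0; i < steps; i++)
        {
            float r = z.Length();
            if (r < 1e-6f || r > 4f) break; // orbit collapsed or escaped

            // One spherical-power step, as in the earlier snippet
            float theta = MathF.Acos(z.Z / r);
            float phi = MathF.Atan2(z.Y, z.X);
            float nr = MathF.Pow(r, Power);
            z = nr * new Vector3(
                    MathF.Sin(Power * theta) * MathF.Cos(Power * phi),
                    MathF.Sin(Power * theta) * MathF.Sin(Power * phi),
                    MathF.Cos(Power * theta))
                + c;

            float rNext = z.Length();
            if (rNext > 1e-6f)
            {
                logRate += MathF.Log(rNext / r); // > 0 growth, < 0 dissipation
                counted++;
            }
        }

        if (counted == 0) return 0f;
        float dissipation = logRate / counted;       // mean log rate
        return eigenvalue * MathF.Exp(-dissipation); // damp curvature by growth
    }
}
```
Note that this is exactly where the instability from the first question bites: near the set boundary the mean log rate is effectively discontinuous across nearby seeds, so any curvature derived from it inherits that sensitivity.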
I will send you my PoC repo:
https://github.com/cdascientist/Base_Pre
@Pope here