nalka · C# · 3y ago

❔ Need help to optimize this simulation

For context, I'm trying to build a simulation where you start with a set of "nodes" and, each "round", one node is replaced by one of its predetermined successors (its .NextNodes, which may be empty), and so on until a given number of rounds is reached.

For example, we could be working with integers, where each round one of the integers we have gets +1 or +2. So if we start with 1 and 3 and play 2 rounds, some of the results could be:
  • 3 (1 + 1 + 1) and 3
  • 4 (1 + 1 + 2) and 3
  • 3 (1 + 2) and 4 (3 + 1)
    ...
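To make the setup concrete, here is one way the integer example could be modelled. The Node shape follows the post (.NextNodes), but MakeIntNode, the Value field, and the upper bound used to keep the graph finite are my assumptions, not the original code:

```csharp
using System.Collections.Generic;

// A node knows which nodes may replace it on a later round.
public class Node
{
    public int Value;
    public List<Node> NextNodes = new();
}

public static class IntNodes
{
    // For the integer example, a node's successors are Value+1 and Value+2.
    // Capping at maxValue makes .NextNodes empty eventually, so paths can end.
    public static Node Make(int value, int maxValue, Dictionary<int, Node> cache)
    {
        if (cache.TryGetValue(value, out var existing)) return existing;

        var node = new Node { Value = value };
        cache[value] = node;
        if (value + 1 <= maxValue) node.NextNodes.Add(Make(value + 1, maxValue, cache));
        if (value + 2 <= maxValue) node.NextNodes.Add(Make(value + 2, maxValue, cache));
        return node;
    }
}
```

Sharing nodes through the cache means equal values are the same object, which matters for the reference-equality checks (x == node) in the method below.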
The aim of the simulation is to compute every possible outcome. So far it works fine with a small amount of data or few start nodes, but once I scale up to a few million possibilities I start feeling the slowness, and beyond that it doesn't finish at all.

I'm pretty sure the most important fix is instantiating far fewer lists, but I can't figure out how. I also thought about parallelizing by creating one task per initial node, where each task would only handle the cases in which its node is replaced in the first round, but that wouldn't fix the memory eaten by all those lists.
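One sketch of the "fewer lists" idea: instead of materializing a full List<Node[]> per round, enumerate outcomes lazily with yield return, so only the paths on the current recursion branch are alive at once. Expand and the index-based replacement are my assumptions; only Node and .NextNodes come from the post:

```csharp
using System.Collections.Generic;

public static class Simulation
{
    // Lazily enumerates every outcome instead of building per-round lists.
    public static IEnumerable<Node[]> Expand(Node[] path, int depth)
    {
        if (depth == 0)
        {
            yield return path;
            yield break;
        }

        bool expanded = false;
        for (int i = 0; i < path.Length; i++)
        {
            foreach (Node next in path[i].NextNodes)
            {
                expanded = true;
                // One array copy per branch; replace position i directly.
                Node[] child = (Node[])path.Clone();
                child[i] = next;
                foreach (Node[] outcome in Expand(child, depth - 1))
                    yield return outcome;
            }
        }

        // No node could be replaced: the path itself is a final outcome.
        if (!expanded) yield return path;
    }
}
```

Replacing by position i also sidesteps the Count/ExceptNthOccurrenceOf bookkeeping for duplicate nodes, though if your original trick exists to deduplicate branches for repeated nodes, the index-based version will visit each position separately.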

Here's my current method:

public static List<Node[]> ComputePossibilities(List<Node[]> currentList, int depth)
{
    if (depth == 0)
    {
        return currentList;
    }

    List<Node[]> ret = new();
    foreach (Node[] path in currentList)
    {
        foreach (Node node in path)
        {
            foreach (Node nextNode in node.NextNodes)
            {
                // Replace the last occurrence of `node` with `nextNode`.
                // Every branch allocates a fresh Node[] here.
                ret.Add(path.ExceptNthOccurrenceOf(node, path.Count(x => x == node) - 1)
                            .Append(nextNode)
                            .ToArray());
            }
        }
    }

    // No node had any successor left: the current list is already final.
    if (ret.Count == 0)
    {
        return currentList;
    }
    return ComputePossibilities(ret, depth - 1);
}
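Rough arithmetic for why this blows up: if every node offers b replacements and each path has P replaceable positions, each recursion level multiplies the list by about P·b, so after d rounds you hold roughly (P·b)^d arrays at once. A back-of-envelope check (all numbers are assumptions, matching the 2-node, +1/+2 example):

```csharp
using System;

long paths = 1;     // one starting configuration
int positions = 2;  // nodes per path
int branching = 2;  // successors per node (+1 and +2)
int rounds = 20;

for (int r = 0; r < rounds; r++)
    paths *= positions * branching;

// 4^20 is on the order of 10^12 — and each of those paths
// allocates its own Node[] in the method above.
Console.WriteLine(paths);
```

Even before the final round, the intermediate lists alone account for most of the memory, which is why lazy enumeration or aggressive deduplication of identical paths tends to matter more than parallelism here.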