How to Automate Code Compare and Merge
Yesterday I had to manually compare 25 methods in version 3 of a class against 22 or so in version 4 of the same class. Beyond Compare is always my go-to, but this was complicated by the methods being ordered differently in each version. I could not get Rider's Code Cleanup to order them properly, because about 22 out of the 25 were overloads of the same method, differing only in the type of one parameter, and Rider's code style file layout XAML lacks the ability to order methods by their parameters.
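Since the file layout patterns can't sort by parameter types, a small script can. Here's a rough sketch in Python that orders signatures by method name and then by the parameter type list; it assumes simple one-line C# signatures (real parsing would need Roslyn), and the example signatures are hypothetical:

```python
import re

# Matches a simple one-line C# method signature, e.g.
# "public int Parse(string input)". A sketch only: it ignores
# generics constraints, attributes, and multi-line signatures.
SIGNATURE = re.compile(
    r'^\s*(?:public|private|protected|internal|static|virtual|override|async|\s)+'
    r'[\w<>\[\],\s]+?\s+(?P<name>\w+)\s*\((?P<params>[^)]*)\)'
)

def sort_key(signature_line):
    """Order methods by name, then by their parameter type list."""
    m = SIGNATURE.match(signature_line)
    if not m:
        return ('', '')
    # The first token of each parameter is its type.
    param_types = [p.strip().split()[0]
                   for p in m.group('params').split(',') if p.strip()]
    return (m.group('name'), ','.join(param_types))

# Hypothetical overloads standing in for the real class's methods.
signatures = [
    'public int Parse(string input)',
    'public int Parse(int input)',
    'public int Add(double a)',
]
for line in sorted(signatures, key=sort_key):
    print(line)
```

Applying the same sort to both versions of the class gives Beyond Compare identically ordered files to diff, without touching Rider at all.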
To help Beyond Compare along, I manually reordered the methods in each class identically, which was incredibly helpful. I then copied the methods not yet in v3 over from v4, and verified that the logic of each v3 method matched its v4 counterpart, i.e. that calling the v3 method produced exactly the same result as calling the v4 method. Finally, I manually restored the original ordering of the overloads in the updated v3 class, so the PR review wasn't a total headache.
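The "which methods are in v4 but not v3" step is also scriptable as a plain set difference, once you have the two signature lists. A minimal sketch with hypothetical signatures:

```python
# Hypothetical signature lists for v3 and v4 of the class.
v3 = {'Parse(string)', 'Parse(int)', 'Parse(bool)'}
v4 = {'Parse(string)', 'Parse(int)', 'Parse(bool)', 'Parse(double)'}

missing_from_v3 = sorted(v4 - v3)  # methods to copy over from v4
removed_in_v4 = sorted(v3 - v4)    # methods to double-check before dropping

print(missing_from_v3)
print(removed_in_v4)
```

This turns the "did I copy everything?" question into a checklist rather than a visual scan of two long files.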
Apart from generating a list of method declarations with signatures but not bodies, i.e. an inventory of the methods in the class, I found doing everything manually tedious, but far more accurate and easier than using any AI or other tooling at my disposal. The main hurdle: when I tried using Claude to order the methods, the sheer size of the file, 1.3k lines of C#, exceeded the prompt and response limits of the AI assistant. I have since learned that I can upload files to Claude, which probably would have helped quite a lot with the token limits for prompts and context, but without knowing to upload the file, I couldn't get Claude to get its head around the job.
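Even that signatures-only inventory doesn't need an AI. A sketch that scans a C# file and keeps only class-level declaration lines, assuming one-line signatures and tracking brace depth to skip method bodies (the `Demo` sample below is made up for illustration):

```python
import re

# An access modifier followed eventually by a parameter list.
METHOD = re.compile(r'^\s*(?:public|private|protected|internal)[\w\s<>\[\],]*\([^)]*\)')

def list_signatures(source):
    """Return method signature lines from C# source text.

    A sketch: assumes one-line signatures, and assumes members sit at
    brace depth 2 (namespace + class); nested statements are skipped.
    """
    depth = 0
    found = []
    for line in source.splitlines():
        if depth <= 2 and METHOD.match(line):
            # Drop any body that shares the line with the signature.
            found.append(line.split('{')[0].strip())
        depth += line.count('{') - line.count('}')
    return found

sample = """
namespace Demo
{
    public class Calc
    {
        public int Add(int a, int b)
        {
            return a + b;
        }
        private string Name(string s) { return s; }
    }
}
"""
print('\n'.join(list_signatures(sample)))
```

For anything beyond quick inventories (expression-bodied members, attributes, generics constraints), the robust route is a small Roslyn console app using `Microsoft.CodeAnalysis.CSharp`, which parses the file into a syntax tree instead of pattern-matching text.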
What could I have done differently to make this process more automated, and thus more repeatable and less prone to manual error?