Sharplab Clone Prototype
I wanna create a site like Sharplab in the hopes of possibly creating something more pleasant to use.
My goal is to embed the VSCode text editor with the C# grammar (for syntax highlighting) and the Roslyn LSP (for warnings and errors).
A major criterion to fulfill is fast loading of the page. I want to offer the ability to choose different versions/branches of Roslyn, and the ability to view decompiled C#, IL, and ASM outputs for different runtimes.
I'm looking for some opinions on what framework I should go with for my frontend. Since startup time is a concern, should I even go with .NET?
I mean, if you want to embed VS Code in the browser, you'll be using Monaco, not any other specific frontend framework
what exactly is monaco then? i imagined i could use anything to embed the editor?
Monaco is the editor
ah
Unless you want to, dunno, surround that editor window with other stuff
I guess you could do that, sure. Any frontend framework should be fine
well yes, i need to offer some controls for the user to choose the options i listed
Most sites that rely on Monaco and include their own stuff choose to extend Monaco itself instead
Be it via a plugin, or editing the menu bar, or what have you
Check out Codesandbox for example
i'd honestly just make it a vscode extension if i could, but i've always had a really tough time even getting started there
which makes me think opening this post might have been kind of a pointless effort
i think my idea is too ambitious for what i know about web dev (nothing)
i've seen it, it's really not that
It really is
does not have asm
does not have syntax highlighting
does not have diagnostics
Has syntax highlighting and diagnostics
Does not have asm, true, because that's very hard to do with a wasm running on your box, rather than on a real server somewhere
Already uses monaco, even has vim mode
Jan has been experimenting with more complete lsp support, not sure what the status on that is
i'm not really sure what i'm missing then, it does not have syntax highlighting for me

and neither does it for you
Uh
Yes it does?
it highlights keywords, whoopdiedoo
Ok then. If that's your reaction to "this already exists", you do you I guess
that's exactly why this post exists :p
Consider contributing to the existing project rather than trying to make your own though
Because this is a lot of work


i'm so ungrateful
Share a link
oh i know what the error is, if that's what you're wondering
ok but what is this
https://lab.razor.fyi/#47Ll4g4oyk8vSsw11EsuFtJLzkksLlaACXFVcykoKCgUlySWZCYrlOVnpij4JmbmaWgqVCvUctVycSG0G2FqNyKk3Yu5qDQvipPj2oS-9TdTBVgSGAE
wait

???
genons
You have two files
okay maybe some caching issue
well yeah
idk you wanted the link
@Jan Jones not sure why this isn't displaying the actual error in the list; it shows it when you go to the second tab
Probably compiling as a dll to show the errors, but then compiling as an executable to run?
That would also explain why IL and decompiled C# are visible
yes, we are compiling as a DLL by default (you can change that in Configuration), but we compile as an EXE in Run. I have improved the experience in a branch where I'm currently working on improving intellisense - see https://ls.dotnetinternals.pages.dev/#47Ll4g4oyk8vSsw11EsuFtJLzkksLlaACXFVcykoKCgUlySWZCYrlOVnpij4JmbmaWgqVCvUctVycSG0G2FqNyKk3Yu5qDQvipPj2oS-9TdTBVgSGAE
(If you try that out, make sure you turn on the experimental language services in options)

currently it has syntax highlighting only; semantic highlighting is coming right after I merge the intellisense improvements branch 🙂
didn't know there's a difference
but that makes sense actually
my main goal was really to have one app which can do everything
a merging of sharplab and godbolt
that's my goal too 😄
and even sicker if there's a vscode extension
what is sharplab missing that godbolt has?
in godbolt, you can choose the runtime for the asm
and the asm is also more accurate
let me show an example
so like coreclr and mono and whatever
I was investigating asm support too, but it would be very complicated to do in wasm; so I plan to create a MAUI hybrid version of the .NET Lab where it will be much easier (since it will run on full .NET)


left is sharplab of course
A side note, but I never heard of Godbolt, and they seem to advertise https://quick-bench.com/... I would love something like that for C#
never having heard of godbolt is crazy
being able to write quick benchmarks in .net lab is also on my todo list... it's a long todo list 😄
That was a motivating scenario for my developer sdk extension, but I haven't found the motivation to work on it like Jan has
i was thinking something like disasmo
just have an option on a method/file/project to show the il/asm and it opens to the side
man i'd be the happiest person in the world
never again do i need to use the browser for it
I kinda have the IL part working, though I do think my extension may have broken recently and I need to find time to fix that
https://github.com/333fred/compiler-developer-sdk/?tab=readme-ov-file#il-and-c-decompilation
I never care about the asm, so it's even harder to find motivation to work on that
Anyway, my hope is that we've convinced you to contribute to the existing things, ero, rather than trying to create your own
that looks awesome, but i'd use such an extension mainly for asm and decomp, il isn't a priority for me
I actually meant lab.razor.fyi
and i meant this
I know
i don't think i would have gotten far with this anyway, to be fair
But it looked like you were saying that as response to this
As the entire chain of messages, not just that specific message
i can try and take a look at how much i can contribute to lab.razor.fyi, but honestly, roslyn would be higher on that list for me
i'd prefer learning how that codebase works to contribute
Well, we're happy to help with that too
the tests are just so daunting
actually idk why i said that
everything is daunting in that codebase
it's so big and i know so little about it
Sure, but I would actually say the tests are one of the best parts
The tests are an incredible safety net
definitely, i wouldn't have it any other way
I can be pretty confident when I make big changes, because if I broke something important, there will near-certainly be a test that finds it
i'll just hijack my own thread and convert this into a roslyn good-first-issue help thread
ide or compiler?
let's see which one of my own issues i wanna do
err, probably ide
most of my issues are ide
Can't help you as much there, and Cyrus is currently on vacation so he may not be available for the next couple of days
oof
But I can try
i love you guys, can i just say that
like the whole compiler team
i have this one open https://github.com/dotnet/roslyn/issues/75113
but that doesn't actually appear compiler related?
that looks ide related to me
Assuming Rekkon was correct about that investigation, they're indeed right that it's non-trivial
this one too, looks to be in the correct area, but so much more difficult https://github.com/dotnet/roslyn/issues/75664
The IDE gets its information from the compiler. In this case, there appears to be no information to get
oh, huh
Honestly? This might be easier, but I don't know what we're currently accepting for collection expression optimization changes. @rikki or @jaredpar might know more
i had another issue the other day, put a code snippet aside, and then closed my ide without saving...
it had to do with renaming some generic parameter
Adding new collection expression optimizations is actually fairly well contained
i'm so mad that i didn't create an issue immediately
That would be squarely IDE codebase, 100%
absolutely, like i said, most of my issues are
Well, the two you've linked so far aren't 😄
wanted to show the only 2 open compiler issues i have 🥲

GitHub
Use explicit type instead of 'var' emitted on enum value assignment
from cyrus' comment, i assumed this was not going to be changed
That's not how I read it
oh my god
I read it as "yeah, I agree"
i keep reading it as "this does not"
for whatever reason
like for so long i thought, ok, i don't get it, but you do you
If he didn't think it would be good, he'd have closed it out
that looks fairly easy indeed
77181 also doesn't look too bad, you just need to learn about trivia
And that one is explicitly marked as help wanted
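(For anyone following along: "trivia" in Roslyn is the whitespace, end-of-line, and comment text attached to tokens as leading/trailing trivia, rather than stored as separate nodes. A tiny illustrative sketch, not related to 77181's actual fix:)

```csharp
// Minimal sketch of Roslyn syntax trivia: whitespace, newlines, and comments
// hang off tokens as leading/trailing trivia. Illustrative only.
using Microsoft.CodeAnalysis.CSharp;

class TriviaDemo
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText("int x = 1; // comment\n");
        var root = tree.GetRoot();

        foreach (var token in root.DescendantTokens())
        {
            foreach (var trivia in token.LeadingTrivia)
                System.Console.WriteLine($"leading  {trivia.Kind(),-30} '{trivia}'");
            System.Console.WriteLine($"token    {token.Kind(),-30} '{token}'");
            foreach (var trivia in token.TrailingTrivia)
                System.Console.WriteLine($"trailing {trivia.Kind(),-30} '{trivia}'");
        }
    }
}
```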
i've opened something similar in the past, where formatting would remove leading whitespace from one line
could be a similar fix
Sounds similar indeed
I’m happy to review collection-expr optimizations, but for anything ambitious I do first want the contributor to share a fairly specific plan for how they expect codegen to change and ideally a microbench of the old and expected new codegen.
For example, to motivate “use ROS for List<T> = [consts]”, some microbenching of list creation in the current and new ways, with various list lengths, to show where we actually think the right point is to kick in such an optimization.
Actually implementing that one should not be too bad, we already know how to make a ROS pointing to consts, we just have to CopyTo the span of the list elements.
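For concreteness, a rough hand-written sketch of the two shapes being compared (my own approximation assuming .NET 8's CollectionsMarshal APIs, not actual compiler output):

```csharp
// Rough sketch only, not actual compiler output.
// Source being lowered: List<int> list = [1, 2, 3, 4];
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

static class CollectionExpressionLoweringSketch
{
    // Roughly today's shape: set the count up front and write each element
    // through the list's backing span.
    public static List<int> CurrentShape()
    {
        var list = new List<int>(4);
        CollectionsMarshal.SetCount(list, 4);
        Span<int> span = CollectionsMarshal.AsSpan(list);
        span[0] = 1;
        span[1] = 2;
        span[2] = 3;
        span[3] = 4;
        return list;
    }

    // The shape being discussed: keep the constants in a ReadOnlySpan (which
    // the compiler can back with static data) and bulk-copy into the list.
    public static List<int> RosCopyToShape()
    {
        ReadOnlySpan<int> data = [1, 2, 3, 4];
        var list = new List<int>(data.Length);
        CollectionsMarshal.SetCount(list, data.Length);
        data.CopyTo(CollectionsMarshal.AsSpan(list));
        return list;
    }
}
```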
i do have benchmarks here https://gist.github.com/just-ero/bc8461fdd982f2b344c84d3a2cb7cde7
This particular time of the release cycle is busy so I would possibly need a few weeks to be able to get a review in.
i'd have to fine grain it to find a more optimal count, but it is < 100
so probably ~32
Nice! I would like to see how much worse CopyTo is with very small collections (size 1, 2, 4), and I would also like to see the margin of error reported by BenchmarkDotNet in all cases.
I’m expecting you are about right, enabling this somewhere between 10-32 elements will be the way. But just knowing the characteristics of more sizes will be helpful.
i see. i explicitly hide the Error column in my benchmarks. is that wrong? do you prefer hiding StdDev? or hide neither?
do you believe the benchmarks themselves are adequate?
or am i doing something wrong
I’ll take a look at the actual benchmark code when I get back to my pc
thanks
Yes, it is wrong to hide error. You cannot judge whether a result is significant without that column
i see. i believe i thought stddev did what error does
i'm not sure of the difference
Error is basically a +/-. The only way to say that a result is significant is to use the error column to take the worst possible "better result" (ie, better result + error) and compare with the best possible "worse result" (ie, worse result - error). If that overlaps, then the result is not significant
stddev is about how spread out the values tend to be: if you were to plot each individual result on a graph, std dev would tell you how clustered they are
IE, is it wildly all over the place, or is it mostly clustered right around the average, with just a few outliers
Both values are important to determining the significance of a result, but (imo) error is more so
As a concrete example, let's say that you have operation a and b. Operation a takes 7 nanoseconds, with an error of .6 nanoseconds. Operation B takes 8 nanoseconds, with an error of .4 nanoseconds. The "better" result is a, so we look at the worst possible a value, 7 + .6 = 7.6 ns. The "worse" result is b, so we look at the best possible value, 8 - .4 = 7.6 ns. Those values overlap, so there is likely no gain, but to confirm we can look at std dev: if the std dev for both operations is .01ns, then that means the best/worst here is real outlier, so maybe there actually is a perf improvement, but it's hard to say without more benchmarking
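For reference, a minimal sketch of what such a benchmark could look like while keeping the Error and StdDev columns visible (assumed shape, not the code from the gist; the class and method names here are made up for illustration):

```csharp
// Minimal BenchmarkDotNet sketch (illustrative only; not the gist's code).
// Keep the default summary columns so Error and StdDev stay visible.
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class ListCreationBenchmarks
{
    [Params(1, 2, 4, 8, 16, 32, 64)]
    public int Count;

    private int[] _source = Array.Empty<int>();

    [GlobalSetup]
    public void Setup()
    {
        _source = new int[Count];
        for (int i = 0; i < Count; i++)
            _source[i] = i;
    }

    // Element-by-element writes, roughly today's lowering shape.
    [Benchmark(Baseline = true)]
    public List<int> PerElement()
    {
        var list = new List<int>(Count);
        CollectionsMarshal.SetCount(list, Count);
        Span<int> span = CollectionsMarshal.AsSpan(list);
        for (int i = 0; i < Count; i++)
            span[i] = _source[i];
        return list;
    }

    // Bulk CopyTo from a span of source data, the proposed shape.
    [Benchmark]
    public List<int> BulkCopyTo()
    {
        var list = new List<int>(Count);
        CollectionsMarshal.SetCount(list, Count);
        _source.AsSpan().CopyTo(CollectionsMarshal.AsSpan(list));
        return list;
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<ListCreationBenchmarks>();
}
```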
We are a bit under the water right now. Later on there might be more appetite for this. But at the same time, for optimizations, we usually want a runtime person to sign off on the direction. At the least they need a heads up on what we are betting on
that's the impression i have as well, looks like
8 is already close, and from 16 onwards, CopyTo is definitely faster
would you like it even more fine tuned between 8 and 16?
benchmark code is just
I would say it's questionable until somewhere between 32 and 64
Questionable due to how close the results are?
I'll wait for rikki's input, but I see what you mean
Yes. Best compared to worst is within hundredths of a nanosecond
8 and below we can very definitively say is not significant. 16 and 32 are questionable. 64 is a clear improvement