HTTP scripts and Plugin scripts
hello! I know I've been MIA for a while (6 months?). The AI world and some traveling have pulled me in other directions. I'm currently in a surf town in Brazil (I don't surf) and brought my Quest 2 with me hoping to dip my toes back in. I've been demoing the real-time scripts to a couple of people, who've been enjoying painting with b&w pass-through in the late evenings.
Anyways, I've had this idea for a while for much richer text using the API. Without getting into all the learning ideas that did not work, I found you can automatically convert TTF (font) files to SVG?! I've always known a basic draw.svg was built into the http API and here was my chance to use it exclusively. However, unless I'm not understanding something, it seems buggy? Here is my quick and dirty documentation of the issue:
https://github.com/dwillington/open-brush/blob/main/python_scripts/text/README.md
I'm still dipping my toes back into playing with the API, so no need to prioritize on my behalf. I'll be in and out of Discord. But I'd be interested in publishing some code for an awesome unlock of so many fonts suddenly being available in Open Brush!
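For anyone following along, an HTTP API call here is just a GET with the command as a query string. A minimal Python sketch (the port and /api/v1 path are the defaults from the desktop setup; the SVG path string in the example is a placeholder, not a real glyph):

```python
import urllib.parse
import urllib.request

API_BASE = "http://localhost:40074/api/v1"  # default Open Brush HTTP API address

def build_command_url(command, value):
    """Build a one-command request URL, e.g. draw.svg=<svg path string>."""
    return API_BASE + "?" + urllib.parse.urlencode({command: value})

def send(command, value):
    """Fire the command at a running Open Brush instance."""
    return urllib.request.urlopen(build_command_url(command, value)).read()

# e.g. send("draw.svg", "M 0 0 L 10 10")  # placeholder path
```

urlencode handles the spaces in SVG path strings, which bite you quickly if you build the URL by hand.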
Thanks for the awesome bug report. I'll take a look.
I've rather been neglecting the http api in favour of the realtime lua API. Here's the equivalent. I'll see if that is broken in the same way.
https://docs.openbrush.app/alternate-and-experimental-builds/runtime-scripting/plugin-api-scripting-reference/svg#svg-drawpathstring-svg-tr
I just tried on a recent version of the scripting branch and it seems to work
Rule of thumb - anything related to scripting is probably more up to date on that branch.
(one thing to be aware of though - You have to manually subdivide strokes now - not tested that via the http api)
previously we automatically subdivided straight line segments i believe
okay, yes I should've specified the version I was using. I downloaded the latest from experimental-moonsharp and installed via SideQuest (0.1.0.618):
https://nightly.link/IxxyXR/open-brush/workflows/build/experiments%2Fmoonsharp/Oculus%20Quest.zip
However, I'm getting super thin lines in the headset, barely visible. And the connection points are often falling short. Hard to tell for sure with the lines so thin, tried increasing size with ob.brush.size.set("0.9"), no effect.
Also, since I'm traveling (no gpu on my Surface pro 7), I'm using GCP cloud VM for Monoscopic for previews. I wonder if I can request a recent Linux Monoscopic Experimental.zip? Even if it's off the main branch. I currently have something from Dec/2022:
http://wearcam.org/abaq/openbrush/
> I downloaded the latest from experimental-moonsharp and installed via SideQuest (0.1.0.618):
So - the original problem with the svg where there were extra lines. Is that still happening in this version?
no, it is not. I have the latest release, 2.3.52, installed just now - the extra line issues are there, but the lines and curves are really good, promising ha. But when I switch to the latest experimental, the extra lines issue is not there.
However, there is a circle, the center of which seems to represent where the "cursor" stopped. And the lines are bloody thin with default settings. And I can see that certain connections are stopping at 90% point, let me try and get you a pic from headset...
(don't quote version numbers - i've got no idea how they work with experimental feature builds! just tell me what branch and when you downloaded it)
i.e. It's recent - and you got it from here: https://docs.openbrush.app/alternate-and-experimental-builds/runtime-scripting ?
Yes, I just got it from that link about 30 minutes ago. Which translates to experiments-moonsharp:
Here is the image, I've put red squares where little segments are missing. I would continue experimenting with this if the brush thickness issue were fixed. It's so thin that it's not even showing up as a solid line in the pic and it's hard to tell where things are missing, but I can see better in the headset.

yeah. i can repro that issue on the lua side as well. i'll take a look in the next few days
all good. Looks like the real-time (not http) API has matured quite a bit:
https://docs.openbrush.app/alternate-and-experimental-builds/runtime-scripting/plugin-api-scripting-reference
I'll see if I get around to it. 6-7 months ago when I last played with it, I was frustrated with the slow edit, push, test cycle. I wonder if, with all the enhancements, there is a way to "edit file on p.c., hit save, trigger run via command line, put on headset to view" - basically no need to use joysticks. I see some new API calls "scripts.{toolscript, ...}.activate", but I wonder if this will trigger the script directly. Not even sure if it's appropriate for my use case, where I want to use an API (like the http API, although I see something new with websockets) to run lots of commands to draw something complex and additive, i.e. write out a whole poem using letters from draw.svg() and/or use draw.path to draw the lines that make up the elements of a complex structure like an entire generated city or building.
So - two things should help with that -
1. Google Drive sync for plugin scripts
2. A HTTP script called "Remote Control" that can turn plugins on and off
I haven't tested the Google Drive sync in a while - but it should work. The whole point of that was to make it viable to have a nice edit/run cycle on the Quest
Also - we now have a class of plugin called "Background Plugins" which don't require any user interaction. Usually I just put whatever code I want to run in the Start() function and then simply toggling that script (via Remote Control) will run my code. This has mostly taken over from using Http scripts for me.
Oh - by the way there's built-support for loading fonts in .chr format. You can grab more example fonts here: https://www.ncplot.com/stickfont/stickfont.htm
It's a format optimized for plotters, CNC machines etc. and seemed rather ideal for brush strokes.
Okay...I'll have to put in some time to get the realtime scripting testing environment setup going again:
https://docs.openbrush.app/alternate-and-experimental-builds/runtime-scripting/plugin-api-scripting-reference/app#app-setfont-fontdata
You can get a hell of a lot done in monoscopic mode. Is it the lack of a Linux build that's holding you up there?
I don't know much about that. @mikeage might know more than me.
oh cool, was going to ask if I can test in mono, much faster. However, only the latest Windows mono is shipped with the build, and that was giving me some dll error (even though the dll was in the folder structure) on a GCP Windows 2022 desktop server. I don't know if that's a Windows 2022 thing. Windows 2022 mono would start and paint some things and then nothing, and the log file showed the dll error (I wish I had saved the exact error, but I deleted the Win env; it also costs more for a Win image). Yes, Linux mono would speed up testing. I know we've requested this before - if we can ask @mikeage again for a Linux mono on the experiments-moonsharp branch, that would be good (stepping away for a swim)
Sorry, I'm not clear on the ask here. We do create a Linux Monoscopic build; what we no longer have is a Linux VR build (because it's not supported in OpenXR). @andybak , do you need to enable the nightly link for this -- is that what's asked for?
Oh - I didn't even think to check. I vaguely recall we'd removed something at some point and assumed that was what @dwillington was asking about
so mike's right - there is a linux monoscopic build. Just click on "Other Builds" and it takes you here: https://nightly.link/IxxyXR/open-brush/workflows/build/experiments%2Fmoonsharp
okay, I can get this latest to start...I have the following file, I wonder if that does anything? I've tried changing some fullscreen settings to no effect.
.config/unity3d/Icosa/Open\ Brush\ (experiments_moonsharp)/Player.log
Also, my draw.path commands are not showing anything, and I don't see any issues in:
.config/unity3d/Icosa/Open\ Brush\ (experiments_moonsharp)/Player.log
> okay, I can get this latest to start...
I presume that was meant to be "can't"?
Well, it starts up. The reso is off a bit, so I tried changing some pref settings to no effect. Then I went to see if I could just get something to draw, which I can't just yet. I tried the http://.../help page but access denied even though I have the following set:
more ~/Documents/Open\ Brush/Open\ Brush.cfg
{
  "User": {},
  "Brushes": {},
  "Video": {},
  "Flags": {
    "EnableApiRemoteCalls": true,
    "EnableApiCorsHeaders": true
  },
  "Export": {}
}
Also, the older OpenBrush-tempprexr2 was working, so I had something working, but I don't know how much has changed in all these months
ok. rewind a bit
the res stuff - can we leave that to one side for now?
So - you're running Linux monoscopic and trying to access the api via a browser on the same device. Correct?
And http://localhost:40074/help/ gives you a browser error "Access Denied"?
Or are you using a browser on a different machine?
I didn't test localhost b/c mono takes over the whole desktop so I can't get to a browser, but I just confirmed from an ssh session:
curl http://localhost:40074/help
<h3>Open Brush API Help</h3>
<ul>
<li>List of API commands: <a href='/help/commands'>/help/commands</a></li>
<li>List of brushes: <a href='/help/brushes'>/help/brushes</a></li>
<li>User Scripts: <a href='/scripts'>/scripts</a></li>
<li>Example Scripts: <a href='/examplescripts'>/examplescripts</a></li>
</ul>
ok. so - it's working but just not from another machine
so - firewall or IPtables stuff?
I suspect Linux is more locked down out of the box
I had this working for the older mono version, I've already opened the ports, but let's set this aside
(mono can't run open brush windowed?)
I have a script "ob" which sends http commands, so the following works - I see the orientation move back.
ob user.move.to=-10,10,20
but this does nothing:
ob draw.text=hello
these are my standard tests to see if all is working...
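The `ob` helper mentioned above could be as small as a shell function wrapping curl. A sketch under my own assumptions: OB_HOST and the OB_DRY_RUN switch are hypothetical knobs, not part of Open Brush, and the /api/v1 path assumes the default HTTP API setup.

```shell
# A sketch of an "ob" helper: forward one API command to Open Brush.
# OB_HOST and OB_DRY_RUN are hypothetical, not part of Open Brush itself.
ob() {
  url="http://${OB_HOST:-localhost}:40074/api/v1"
  if [ -n "$OB_DRY_RUN" ]; then
    printf '%s?%s\n' "$url" "$1"   # just show what would be sent
  else
    curl -s --get "$url" --data-urlencode "$1"
  fi
}

# usage: ob user.move.to=-10,10,20
```

`--data-urlencode` with `--get` makes curl append the command as a properly escaped query string.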
it's quite possible text drawing broke. remember i made a big change to drawing logic
in fact - it's quite possible drawing over the http api is broken entirely
as i didn't add an api call to subdivide paths like i did for lua
remind me why we're using the lua branch at the moment? there was a reason but i've forgotten without scrolling back
I was going to play with the new realtime api, and you said I could test most of it with mono (and I had to use linux), and that I could use some remote method to enable scripting, and then google drive, etc.
basically, just start playing with the latest realtime scripting api, but it all started with svg for font importing
ok. so it's only the http drawing that's broken so that plan still makes sense
okay, well I'll get to testing some of the realtime stuff at some point. I was working on some "line art" stuff with the http api which I'm biased towards as I've spent so much time knowing it well. I'll have to put in some time to achieve my usecase with the realtime scripts. But based on what I was working on I just happened to try it on the new mono build...
I'm using line detection via AI or other means to recreate as line art, check this out...
becomes...
you can use any other build for the http api - it's only broken on the plugin scripting branch

nice. @n1ckfg has done a load of stuff with line detection. specifically 3d line detection i believe
Make sure you've read this page: https://docs.openbrush.app/alternate-and-experimental-builds/runtime-scripting/writing-plugins/defining-and-drawing-brush-strokes
and the stuff on subdivide here: https://docs.openbrush.app/alternate-and-experimental-builds/runtime-scripting/writing-plugins/writing-a-tool-plugin
oh cool, I wonder if he has a repo. Yeah, I'm just searching "github python line detect" and modifying the code to output draw.path from its existing image-to-lines logic
So wait, that means I can get a recent linux mono build from head?
I don't see one here...
https://github.com/icosa-foundation/open-brush/releases/tag/2.3.0
So only for moonsharp where likely the http api may not do what I want for now
2.3.0 is the main release not the most recent release
we're on 2.3.86 currently: https://github.com/icosa-foundation/open-brush/releases
that's the current beta (although Quest Store beta is lagging for boring reasons to do with SDK changes)
I need to save all these links. I never end up at the right place starting from google. Okay. Either way, this new latest does not have linux mono either. I was just hoping to have something recent as there were other new http api methods that looked interesting. But, I guess I should probably just put in the time to get on the new approach...
> Either way, this new latest does not have linux mono either.
Ah ha! I see the problem... So there are two different places on GitHub where you can download a build - releases (which stick around forever) and the automated builds (which disappear after 30 days). We currently don't save all build targets in releases - so you have to go to the latter
trouble is - that mixes together all the different branches in one list: https://github.com/icosa-foundation/open-brush/actions
so you have to scroll back until you see "main":

note it says "main" on the right and "Builds" on the left. (The red cross implies the build failed, but it didn't in this case, so dunno why it's not got a tick)
The only time this doesn't work is when it's been more than 30 days since we merged anything to main. Which hopefully won't happen very often
Whoa, so are you running flask or something and sending API commands into OB, or are you translating the line detection into C#?
I made a Barracuda model to do line detection, or I guess they just renamed that Sentis
I'm gonna release all of it with my dissertation
Runs on Android standalones as well as desktop
I have a bigger model that tries to do point cloud segmentation but I haven't converted it into the format Barracuda/Sentis needs, which is ONNX
That's also 500MB, the line detection model is 16MB so a lot easier to include with a real build
@andybak I borrowed an iPad lidar to try that idea using the iOS version of the line detector app to send brushstrokes into OB
Has anyone worked out how devices are supposed to find each other on the network?
It's a perennial problem with mobile prototype apps, I find - where that logic should come from
> Has anyone worked out how devices are supposed to find each other on the network?
yeah - it's sucky
Hardcoding IP addresses, etc doesn't scale very well
simple dumb solution is a matchmaking server
One thing I've been trying in Raspberry Pi-land is to make the peripheral device the server and the desktop the client
Upnp/Bonjour was meant to handle this kinda thing but i don't know much about it
So the device with the keyboard attached, is where you type in the URL, while the little peripheral thing just has to listen for connections
Oh, bonjour is excellent but it's just human-readable names for computers instead of IP addresses
It wouldn't help sharing a prototype and having it connect on someone else's local network where the computers have different names
There are also virtual VPN tools like Tailscale where computers on different networks can appear to each other to be on the same network
Neither of those addresses this precise problem though
different networks is a harder problem
i'd definitely use a proxy server in that case
or a tunnelling app for tech-savvy users
Aren't some OB people working on network gaming right now
Maybe API connections can run on that same solution
Or maybe this is an excuse to do the parallel node.js thing that uses the same abstractions as whatever they're building for photon right now
-- oh yeah, not suggesting we also somehow build tailscale into OB or whatever
yeah - the api stuff can piggy back off multiplayer
I just started playing with the line-art -> line detection stuff mere days ago. I had some success with the first 3-4 repos (minus tennis-tracking).
https://github.com/topics/line-detection
I basically inject some python at the right place to output the line coordinates for an image and then send them to OB using the http api draw.path().
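The glue step - turning detected line segments into something draw.path can take - is mostly a coordinate flip and scale, since image origins are top-left. A sketch (the exact point format draw.path expects is an assumption here; check /help/commands on your build):

```python
def image_lines_to_paths(lines, img_height, scale=0.05):
    """Convert (x1, y1, x2, y2) pixel segments into 3D point pairs.

    Flips y (image y grows downward) and scales pixels to scene units.
    Each returned pair would become one draw.path call.
    """
    paths = []
    for x1, y1, x2, y2 in lines:
        start = (x1 * scale, (img_height - y1) * scale, 0.0)
        end = (x2 * scale, (img_height - y2) * scale, 0.0)
        paths.append([start, end])
    return paths
```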
Here is another repo I'm experimenting with, so this input:

@dwillington - lua can call remote api's and parse JSON
so it could connect directly to any AI service running locally
there's a few example scripts showing that working
Or is this flask right now
I've been meaning to learn flask servers, for exactly this reason, running scientific python stuff and ML models easily
Node.js isn't as straightforward for ML of course

./demo.py <input image> spits out the coordinates, which I later parse and send to OB. I'm doing very preliminary stuff piggybacking off others' work. But it could be interesting the more I play with it...
thanks, this is something cool for me to look into:
https://github.com/keijiro/MlsdBarracuda
https://github.com/navervision/mlsd
okay, I got the latest moonsharp linux mono to start...
I run the following:
ob scripts.backgroundscript.activate=BackgroundScript.Lines
and get this in the log:
KeyNotFoundException: The given key 'BackgroundScript.Lines' was not present in the dictionary.
Just in case, I downloaded the following file:
~/Documents/Open Brush/Scripts/BackgroundScript.Lines.lua
Still getting an error...will keep looking at it.
BTW, when you say "A HTTP script called "Remote Control" that can turn plugins on and off", do you simply mean scripts.*.activate(xx)?
Let me take a look tomorrow
Let me take a look tomorrow
> BTW, when you say "A HTTP script called "Remote Control" that can turn plugins on and off", do you simply mean scripts.*.activate(xx)?
No. I presumed you'd use that example script first, i.e. open it in a browser and click buttons. I can't remember the syntax without checking, but look at how the Remote Control script works. It might be that you omit the prefix and just do "BackgroundScript.activate=lines". But start with the actual tested working example before you branch out.
Yeah - you had the syntax wrong. It's scripts.backgroundscript.activate - look at remotecontrol.html - that's the only source of truth here, as that's the only working example.
I guess the reason you're doing it this way is that you can't run a browser that can access the machine that has Open Brush on it? That seems like the issue we need to resolve. (or just use the Monoscopic GUI to run scripts. It's not so bad when you get used to it!)
Yeah. It seems to be something like:
scripts.backgroundscript.activate=Lines to activate an individual background script, and
scripts.backgroundscript.activateall to turn on all active background scripts (the reason you need both is that - unlike other script types - you can have more than one active at the same time, so you also need a global switch).
EDIT - I just tried it and it works. I couldn't see any output from "Lines" for some reason, but try:
scripts.backgroundscript.activate=DrawAndAnimateStrokes
scripts.backgroundscript.activateall
I really like the idea of a local server sending API commands into OB for dev
Yeah. I'm slightly disturbed by the fact we have two completely separate scripting systems at the moment, but the HTTP scripting was quick to develop and served its purpose
the lua scripting is much more tightly integrated and has a different use case
Wait what's the other, Moonsharp?
Yes. We've got a simple scripting layer that's basically HTML get or posts
and then the actual realtime plugin API that uses lua/moonsharp
The new network protocol is going to be a third right?
wassat?
Photon or whatever
there's no scripting involved there (unless you squint)
(I guess you could use it for scripting if you reverse engineer the serialization format!)
lol. now you've done it. I want to try that
Does the Moonsharp API work over a network?
No. I nearly went that far and then someone whispered "security holes" in my ear
you can trigger scripts over the network - but I have to be much more careful with the HTTP API than the plugin API because of the whole network thing.
Before we launch it properly I need to carefully consider how to restrict clipboard and network access for plugins!
(such a shame to cripple functionality but otherwise I've just reinvented ActiveX for VR 😉 )
Makes sense, if it'll run any arbitrary code in Moonsharp
Vs. I imagine the attack surface of a brushstroke API is limited to drawing something rude
Been there. Done that.
There is a slightly bigger attack surface to the http api but hopefully it would take an undiscovered buffer overflow and a careless user to exploit it.
I have the browser now. I found "examplescripts/remotecontrol.html" - I didn't realize that was what you were talking about; this "script" word is getting thrown around all over. Now, when I use the examplescripts gui, I don't see any changes in mono. I also tried this method on the headset and it's inconsistent. When I enable "EnableActiveScripts" under "Background Scripts" I hear a rapid flurry of strokes, but they're nowhere to be seen in the headset. I then cycle through scripts and randomly a bunch of colored circles show up. But it doesn't line up with my clicks. That's okay, at least the headset can show me how it should work. But I want to trigger something from the API, using remotecontrol, and have it start something in mono. And then ideally update the script in ~/Documents.../Scripts and rerun.
okay wait, it's doing something in mono...wow, kind of inconsistent, maybe the network. would be good to see in the log if it's registering the command, but not all commands are getting logged
I've added the following script, pretty much a copy of Lines. I also added the corresponding meta file (copy of Lines.meta):
~/Documents/Open Brush/Scripts/BackgroundScript.test.lua
ob scripts.backgroundscript.activateall
ob scripts.backgroundscript.activate=test
When I run above I see the following in log:
KeyNotFoundException: The given key 'test' was not present in the dictionary.
I remember playing with this 7-8 months ago and you had it auto load if copied in to Scripts folder. I tested that on headset, not mono
1. The location to put them is now "Plugins" for the lua scripts (I am trying to disambiguate the two types of scripts - so I'm referring to lua as "plugins" or "plugin scripts". From the perspective of someone who just wants to use them and not write them, they are plugins)
2. You don't need .meta files. They are only used inside Unity itself
https://docs.openbrush.app/alternate-and-experimental-builds/runtime-scripting/writing-plugins/getting-started
The above docs page covers where to put the files and tips on setting up an editor
I totally missed this and was even looking through the docs, will report back
@andybak , coming back around to see if you got to look at why "ob.draw.svg" in the http API was not working properly? I documented my issues below...
https://github.com/dwillington/open-brush/tree/main/python_scripts/text
Looking through our chat history, it seems the svg path is drawn correctly in the moonsharp branch (i.e. plugin scripting), however there is a separate issue where the brush stroke is impossibly thin, which you acknowledged. I would prefer to just get it corrected on the main (http api) branch, as all the brushes look wonderful with draw.svg. It's just not drawing the paths correctly, and you suggested it could be that "subdivide" was introduced and not implemented in main? Something like that.
Motivation for me: There are some beautiful google fonts (https://fonts.google.com/) I would love to be able to use for labeling 3d charting. I'm trying to reproduce some of @PythonMaps (https://twitter.com/PythonMaps) work in Open Brush, in 3d of course.
Here is a quick and dirty 3d chart I created using the http API:
https://twitter.com/DwillingtonWest/status/1778112256619737297
You can see the labels are a little whacky b/c draw.svg does not work correctly. There is a rich genre of data visualization which I think would lend itself well to being visualized in Open Brush. Think of the hordes of crypto freaks who'll come running when they think visualizing 3d charts in OB gives them a trading edge ha
Hey! Nice to hear from you again.
So - I guess a way of using fonts directly would be nice?
GitHub: Text tool by andybak · Pull Request #512 · icosa-foundation/open-br... - "Text widgets and a text tool for creating them. Work in progress..."
GitHub: Ixxy-Open-Source/GlyphLoader-Unity - watertrans/GlyphLoader repackaged with some Unity specific tweaks
(basically - a long-winded way of saying "I've been doing a fair bit with type recently" - and SVG)
A ton of commits here that were specifically around type support: https://github.com/icosa-mirror/com.unity.vectorgraphics/commits/open-brush/
Okay, when I searched for svg to find our old conversation, I thought I saw some activity. But is this related to the http API? Because I'll be generating everything with a script
It will eventually be added to the Http API. I'm still more focused on the realtime API so that will probably come first.
Actually - on reflection - I usually add basic http API support just to help me test during development - so it might well be the other way round!
Okay. Well, draw.svg is like a superset function that gives me access to thousands of amazing fonts. Within OB, I can:
1) create amazing word art
2) add beautiful labeling to charts and data visuals (a rich deep genre)
3) and generally leverage free positioning (rotation, etc.) and sizing of fonts.
I'm using Python to:
1) Automate converting TTF to SVG (giving me access to thousands of fonts!)
2) Create a helper library to allow sizing and rotation of an svg path. Something like:
fontsDict = getFonts(google_font_name)
ascii = getAscii(fontsDict)
ascii = getSize(ascii, size)
ascii = getRotation(ascii, angle)
svgPath = ascii.getSVGPath()
brush.move.to()
brush.look*()
draw.svg(svgPath)
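The sizing and rotation helpers in that sketch boil down to a 2D transform applied to each point before it's serialized back into an SVG path. A minimal version (my own illustration, not the actual helper library):

```python
import math

def transform_points(points, size=1.0, angle_deg=0.0):
    """Scale then rotate 2D points about the origin."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = []
    for x, y in points:
        x, y = x * size, y * size  # scale first
        out.append((x * cos_a - y * sin_a, x * sin_a + y * cos_a))  # then rotate
    return out
```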
And, draw.svg is already supported in the http api. There's just a bug where it looks like ~ 80% of the character is drawn and then it goes whacky. Maybe a "Pick up the pen and Move" operation is not being respected.
I may actually be close to pinpointing the bug even further b/c I wanted this to work so bad I started implementing my own draw.svg(path) using draw.path 😆. let me share a video of the characters being drawn side by side, my implementation in Pink and draw.svg in Blue...
You can see at the 0.23s mark that it's also "caching" parts of the prior path...Anyways, I can share my code, will take a little time to upload. All this exercise, like anything with OB has been super enlightening for me as I've learned much more about SVG paths and am grateful to be able to go deeper like this 🙏 ...
Just a thought - have you checked in case this is fixed in the Plugin Scripting branch? That has had a few updates that touch svg code.
Yes, the svg path seems to paint correctly in that branch, but there is another issue where the character is barely visible. I'll reply to your old message further up this thread where you reproduced the issue a while back...Ideally, we could fix in the main branch if it was simple enough and you had time. Then I could benefit from using all the latest and greatest features and running the main app on my headset...
Here is our last discussion...
why not just use the plugin branch though? It's always ahead of main
are you using the beta of regular open brush?
It says "OpenBrush 2.7.13" on the back of my controller...
yeah - that's the beta
Is this the "plugin" branch?
https://nightly.link/IxxyXR/open-brush/workflows/build/experiments%2Fmoonsharp
yep
but - bookmark this page:
https://docs.openbrush.app/alternate-and-experimental-builds/runtime-scripting
that's the page i ensure always points to the right place
that nightly.link might not one day
the docs are the "source of truth" hopefully.
All right, just downloaded 0.1.0.715. I know in the past version numbers didn't mean anything to you. I installed the latest from this link and I think I remember where we left off at:
1) svg paths are painting correctly, but infinitesimally visible. I've tried different brushes
2) draw.path is broken! Ooof, I rely almost entirely on this. And you pointed me to path2d...and the same thing happened where I just wasn't ready to embrace the plugin scripting api (much more powerful though it is) yet
I'm going to carry on using the working http api for now. There are too many ideas I have for experimenting with data visualization in OB that I can create rapidly with existing http api knowledge
OK. At least I know what needs fixing. Which only happens when people try these builds out!
I'll let you know when I've fixed these issues.
Oh my god. You're using 2-space indentation in Python. I'm not sure we can be friends.
@dwillington - I can't get your text.py to run. The ob import was broken - I fixed that but then this bit:
path isn't defined at the point this code is called. It's defined elsewhere but text() is the first part that's called - and in any case path isn't a global.
I just want to follow the same steps as you to fix the svg bug.
oh I didn't realize you were going to run the same code, I've checked in my code
It's always easiest to follow someone else's steps to reproduce a bug. At least then you know you're both talking about the same thing.
So @dwillington - i think there's a couple of things to discuss.
1. You don't seem to be using DrawSvg - instead you're drawing everything manually. You'd be better off passing entire SVG path strings (or full SVG files) to the API.
The latter is probably untested but it's meant to work so let me know.
2. On the plugin scripting branch - the default behaviour for drawing polylines has changed. Not sure if this is one of the reasons you're seeing unexpected results - in any case you don't need to be doing it this way so we can leave that to one side for now.
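On point 1: pulling whole path strings out of an SVG file for the API is a few lines with the standard library. A sketch - only the SVG namespace URI is assumed here, and that's fixed by the SVG spec:

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def svg_path_strings(svg_text):
    """Return the d attribute of every <path> element in an SVG document."""
    root = ET.fromstring(svg_text)
    return [p.get("d") for p in root.iter(SVG_NS + "path")]
```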
I'd like to start at the other end - and properly understand your goals. I'm still 99% convinced that doing it in lua would be easier (even if I need to fill in a few gaps in the API)
The lua plugin API is *much* more polished and complete than the HTTP API.
We never seem to be online and active at the same time? It would be easier to have a realtime chat about this some time.
Yes, I think a chat would be great, I'm free now...
Ah cool. I must admit if I come back to 10 pages of text - I don't always take it all in properly!
so - give me a brief overview of your end goal. Draw charts with nice fonts?
I thought you might be suggesting a voice chat in the audio channel? faster?
dunno. maybe. i need to get to a different pc for that
i meant "typing in realtime" really
but give me 10m if you do want a voice chat.
I think that would be good. 10 minutes will also give me time to reacquaint myself with the plugin api issues I had
ack. you couldn't give me an hour could you? i've just noticed how sunny it is out and it won't be for much longer.
i can type outdoors but i can't VR!
i guess i could voice chat - but not screenshare
absolutely! Never want to get in the way of someone's sun. It's ~ 1 p.m. here on the beach where I am.
i'm seriously vitamin D deprived after a british winter
I can come back in 90 minutes. does that work? Or voice chat now. Whatever you prefer
90m works
Back now in case you are
Here's a google font rendered in 3d - not currently part of the API but the hard work has been done so it could be added fairly easily:

Here's one of the included example scripts that calls a web API, grabs some SVG and then renders it - pretty much in realtime
The bit that does the drawing itself is basically these 3 lines (with the 3rd line modified as it would be in a standalone script like your Python)
From the docs:
Svg:ParseDocument(svg, offsetPerPath, includeColors) - Parses an SVG document. Returns: a PathList representing the parsed SVG document. You can also parse json (assuming the data makes sense as a path)
I'm back
i've got time for a quick chat
let's meet in the chat channel?
yeah. i'm wondering if i can screenshare from one computer and chat from another
not got a mic on the PC
and the Mac is slow as a dog in Unity because Apple thinks 8gb is enough RAM for a base model
I don't need a screen share at the moment, just might be easier to talk and avoid confusion
question: How can I make a script run only once? I started looking into triggering the running of a plugin script.
1) http API: ob scripts.toolscript.activate=xyz
2) ToolScript.xyz.lua:
I can confirm that it's making a call to my machine on port 8000. But then, it keeps hammering it. I want it done once.
I can turn it off by running this from http API.
ob scripts.toolscript.deactivate
But I want that to be the result of line 2 in Main() above.
okay, never mind, for some reason shortening the line to the following is "working":
The call is still happening multiple times, like 1-3, but I can fix that by processing every X frames:
Now I just need my PC http server to respond with json that can be handled by the ToolScript
just a quick update... I tried to create a Delegate ToolScript, where I thought I could get an svg_path from a json WebRequest to my PC. I spent hours looking through the similar Svg*.lua and Random*.lua scripts from the examples and using them as templates. Things seemed to work and then not work. I was getting inconsistent results and it was deja vu (😆) trying to understand this significantly more complicated runtime model. E.g. I feel like there were race conditions; I couldn't tell which method calls were synchronous or asynchronous. BTW, where is the log file for lua output on the Quest 2? I couldn't find it anywhere...
Just for reference, here is a bare-bones python http server that listens for a WebRequest from a ToolScript. Its only job is to pass json {'svgPath': 'M703 1129l306 ...'}
https://github.com/dwillington/open-brush/blob/main/python_scripts/text/delegate.py
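A bare-minimum sketch of the same idea (hypothetical; the real delegate.py is at the link above, and the svgPath value here is just a placeholder):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder payload - the real svgPath comes from the TTF -> SVG conversion
PAYLOAD = {"svgPath": "M 0 0 L 0 100 L 100 100 L 100 0"}

class DelegateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET with the json the ToolScript's WebRequest expects
        body = json.dumps(PAYLOAD).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=8000):
    # Run this on the PC; the headset polls http://<pc-ip>:8000/
    HTTPServer(("0.0.0.0", port), DelegateHandler).serve_forever()
```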
Here is the ToolScript, where I had most of my trouble.
https://github.com/dwillington/open-brush/blob/main/LuaScriptExamples/ToolScript.Delegate.lua
Anyways, I want to leverage many functions like Svg:DrawPathString, pathList:Draw(), etc. But without the complexity of how Start(), Main(), and all the other lua / plugin model stuff works. I need to ease into this. I don't have the programming chops. And I'm going to go out on a limb and be bold and say I think it does feel too complex. This is the second or third time I'm taking a good-faith crack at it, and I'm not sure where I'm going wrong. So, as a power user of the API, maybe this could be factored into the adoption curve for future users. If I'm having these issues, others likely will too.
For the moment, I would love to see draw.path and draw.svg in the http API working again. I'm willing to continue down this delegate model where computation is done on my familiar Python side and drawing functions leverage the plugin scripting API. But, I need help understanding these lua scripts.
If you put your code in Start() instead of Main() it should only run once.
I couldn't tell which method calls were synchronous or asynchronous
As far as lua is concerned all calls should be synchronous. However in some cases I'm calling Open Brush code that might have its own delayed execution. I probably haven't spotted these cases so any examples would be useful. I know model import is a bit async - but the Open Brush code is pretty complex and I've not managed to get a simple "do this right now" version working. I'd be curious to know where else things weren't happening on the exact frame you called them. But yeah - a lot of this stuff hasn't been thoroughly tested - which is why it's great you're trying it!
BTW, where is the log file for lua output on the Quest 2, I couldn't find it anywhere...
it should just go to the regular Unity log.
which on android means using logcat: https://docs.unity3d.com/Manual/android-debugging-on-an-android-device.html#view-android-logs
@dwillington I had an idea for the best way to handle Http to Lua interaction.
A new type of Lua plugin script explicitly for interfacing with outside processes. Something like this:
This would add new endpoints. If the plugin was named Foobar then maybe:
http://localhost:40074/plugins/Foobar?SayHello=Andy
Would this be dynamic? i.e. one could
1) add a function "Sayxxx" to the special lua plugin file
2) and then just call http://localhost:40074/plugins/Foobar?Sayxxx=Andy
Basically, I could just expose things as needed? Everything would come in as strings, and I would parse to floats where it's expected?
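To illustrate what I mean, roughly this in Python (the endpoint and parameter names here are made up):

```python
from urllib.parse import urlparse, parse_qs

def parse_plugin_call(url):
    """Split a hypothetical /plugins/Foobar?DrawAt=1.5,2,0 style URL into
    {function: [args]}, parsing comma-separated values to floats where possible."""
    calls = {}
    for func, values in parse_qs(urlparse(url).query).items():
        args = []
        for part in values[0].split(","):
            try:
                args.append(float(part))  # numeric where expected
            except ValueError:
                args.append(part)  # otherwise keep the raw string
        calls[func] = args
    return calls
```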
not sure what you mean by dynamic? You add a new script and it automatically adds endpoints - I guess?
There's not one "special lua plugin file" - it's a new type of plugin.
so currently: Pointer plugins, Tool plugins etc. This would be - maybe - External plugin or something?
Okay. If this is the way to go to simplify http-to-plugin API interaction, then I'd love to try it. Anything that gets me moving... If I can start making draw.path operations via plugin functions, then I can start testing things. If you decide to refactor the scope of this new Plugin based on how it's being used, that can come after some feedback from usage.
My TODO list from our chat was:
1. fix draw paths
2. Allow http api to execute lua
3. Integrate new text functionality from Text Tool branch
I did (3) yesterday. I'll look at (1) tomorrow and I have a plan for (2)
That's amazing. For #1, I could also use draw.svg, which you acknowledged is drawing extremely thin lines. Both can be reproduced from the "Try it" links in the api docs.
#2 should enable me to start testing the plugin API without a lot of effort. I think something to aim for is an easy way to convert a lot of my code to using "http://localhost:40074/plugins".
I got 'access to the log' working earlier, this was critical...I've documented a bunch of my "working on lua scripts" process here:
https://github.com/dwillington/open-brush/blob/main/LuaScriptExamples/README.md
I'm getting an error where I can't seem to do a simple json parse:

I'm not looking for a fix...but just showing how challenged this type of development process is for me...
"bad argument #1 to 'parse' (string expected, got table)"
I'm printing out a valid json string in the result var before I try and parse it. It even concatenates to a string in the print statement. There is nothing seemingly irregular about the json compared to what one would get in any of the working scripts. I've looked at the following working scripts below that use "json:parse".

That's interesting. I think you're actually one step beyond me. You see where you set the accept headers?
I think that might be causing GET to return actual json rather than just a string. The error message:
"string expected, got table" implies that it's already got a table (lua-speak for a dictionary) so you don't need to parse it. If that's correct then I've just learned something! @dwillington


Cool! Will try it in a bit. Any way to control sizing? A major advantage for me would be to control sizing for labels...See my post from last night:
https://discord.com/channels/783806589991780412/805082441038823444/1229673996447121420
you can't try it until i release it 😉
that's on my machine at the moment
none of this is really suitable for labels. you want to use text objects for that. they will be more efficient
all a work in progress. be patient for now.
i want to fix strokes first
ah no worries! I thought you were looking for me to test. No rush on my side as I'm having fun learning more about geopandas and finding interesting datasets
@dwillington Can you try the latest build and see if the issues with the draw commands are fixed?
draw.path looks to be working! And draw.svg too. There are some differences between the main branch (original code) and moonsharp.
2 main issues that might affect me:
1) draw.path is not drawing the "same" for smaller line segments. I had observed in the original code that, with the Light brush, anything with math.dist(p0,p1) < 0.058 would not draw. I accepted this limitation and simply hardcoded that threshold in my lib so those segments aren't sent to draw.path. I tested the new threshold and it feels much higher...
2) draw.svg is not "closing" the "loop" when drawing letters. E.g. the square in the middle of a block letter A will have a small gap; you'll see in the video...
minor issues:
3) The controller buzzes every time something is drawn, it seems; this might help me debug
4) There is a circle (representing the pen?) that is visible. I almost want to keep this if it's a bug, as it may help me debug
5) The info panel, which contains useful info, is at 0,0,0, whereas main has it behind the right-hand controller. I prefer the latter of course. I think at one point I could capture lua print statements here
I've updated my text code to be a little clearer, showing the comparison between draw.text, draw.svg, and draw.path, with large comments marking the 3 sections:
https://github.com/dwillington/open-brush/blob/main/python_scripts/text/text.py
Here is a comparison of the first 20 or so characters running: 1) on main, 2) on moonsharp
white = draw.text
pink= draw.path to simulate draw.svg but with lines between all points
blue = draw.svg
The pink is very useful in showing how draw.path may be affected by smaller line segments, as they are showing up better in the first video than the second. BTW, this is not some catastrophe, as at least ~90% is now being drawn. But if it wasn't too hard to get that extra bit, and ensure that line segments as small as "threshold_value" are drawn, I could just count on that. I notice the "%" sign shows up nicely in the first video (original code) vs the second video (video1 = 0.27s mark vs video2 = 0.20s mark). The asterisk is also a good comparison for differences. I'm just being as detailed as possible in case you wanted to look.
Also notice, draw.text does not work as well in moonsharp. Not important for me, but it reveals how lines seem to behave worse when they are very small in moonsharp.
These are cursory observations. I can look closer if that's what you want. For my purposes, this is enough for me to start using moonsharp to continue some of the map work I'm interested in for now.
btw, just tested draw.svg for maps, just beautiful! It leaves a small gap, but it looks amazing for capturing the continuous curves in a map boundary. However, I thought ob.brush.turn would change the brush angle so I could "tilt" maps? Would be cool if I could draw them in more than the XY plane.
When you expose "External Plugin", I'm happy to look into converting a lot of my brush.move, draw.path code to exercising the plugin api version.
Also, thinking out loud, here is my typical use case around how I'm using the API from externally:
ob.color.html.set, ob.draw.path SINGLE line segment
with the http API being performant enough for my purposes, I can send these API commands individually and things are drawn reasonably fast.
If I can use the "External" Plugin to make fast-enough API calls that delegate to the plugin API via scripts, it would get my foot in the door of exercising the plugin API. From there, I imagine I could naturally be teased into experimenting with more of it. This is in fact how my exploration of the http API started.
To reproduce the minimum line length threshold, which does not draw at length < 0.1. Of course, I would prefer the original 0.058

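For reference, my filtering is roughly this (Python sketch; the 0.058 figure is just what I measured empirically on main):

```python
import math

MIN_SEGMENT_LENGTH = 0.058  # measured on main; feels closer to 0.1 on moonsharp

def drawable_segments(points, threshold=MIN_SEGMENT_LENGTH):
    """Drop consecutive point pairs shorter than the brush's minimum
    segment length before sending them to draw.path."""
    return [
        (p0, p1)
        for p0, p1 in zip(points, points[1:])
        if math.dist(p0, p1) >= threshold
    ]
```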
So - this is probably because I changed the previously hardcoded smoothing amount from 0.25 to 0.1 (to try and tighten up corners). I've made this something you can change, so you'll be able to do something like:
draw.smoothing=0.25
to change the setting.
This is handled differently in lua. No smoothing is applied by default. Instead you choose how many extra points to add before you draw a path:
This has advantages because brushes differ in how much smoothing they need - and the correct amount of smoothing varies based on the scale and type of path you're drawing.
I could probably come up with a better way to automatically determine the correct amount of smoothing - but it requires a load of trial and error and testing with different brushes under different conditions.
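The subdivision idea, sketched in Python (linear interpolation; the parameter name here is mine, not the actual API):

```python
def subdivide(points, extra=3):
    """Insert `extra` evenly spaced points between each pair of path points -
    a rough sketch of what manually subdividing a stroke does."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        out.append((x0, y0))
        for i in range(1, extra + 1):
            t = i / (extra + 1)
            out.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    out.append(points[-1])  # keep the final endpoint
    return out
```

More points gives the brush geometry more to work with, at the cost of heavier strokes.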
Not sure how far you are down (2), but having attempted a ToolScript.Delegate, I wonder if it would be easier to "expose" plugin API via "http" using existing functionality? I have a way to "trigger" execution: ob scripts.toolscript.activate=Delegate
1) We only want this to run once
2) This would ideally fetch json which could be all the http API commands calls in a json file for a particular Python execution
3) Delegate would just have if statements to determine which plugin API call to make:
draw.path -> Path:Draw
color.html.set -> Brush.colorHtml
https://github.com/dwillington/open-brush/blob/main/LuaScriptExamples/Delegate/ToolScript.Delegate.lua
If you could help with creating a working skeleton for ToolScript.Delegate and a ServeOpenBrush.py that does a few examples, I would be happy to run with that and just expand on your code. Right now I can't get a basic json served to a ToolScript, and I don't know what I'm doing wrong...
https://github.com/dwillington/open-brush/blob/main/LuaScriptExamples/Delegate/delegate.py
Anyways, just a thought in case this is less work than a new Plugin type, and I have something to experiment with...
Those endpoints already exist. You don't need delegates for them?
What lua functionality do you want to access that doesn't already have a http endpoint?
Are you saying I can call these methods via http endpoints? Like Path:Draw()?

No. I said the examples you gave existed already. What do you want that's currently not available? A lot of lua stuff only makes sense if it's returning values - which doesn't happen with the http api
Let's say I want to do a sequence of draw.path and color.html.set http API calls, but with the plugin API functions instead. This is to exercise the plugin API. I want the parameters, i.e.: "[0,0,0],[1,1,1],..." and "4CCD99" to come from Python. Since I can't pass them to a lua script directly, I want to trigger a lua script, to run once, and fetch these from my Python http server, via a WebRequest. Then it can call the Brush.colorHtml, Path:Draw functions using the parameters fetched in json from my Python server
OK. Just to keep it simple let's talk about drawing a path.
Currently you can:
1. Do this via the HTTP API
2. Do this directly in lua
You're proposing a third way - which is a general way to wrap any lua command and call it from HTTP? By writing a big long list of JSON that maps one to the other?
I must admit I'm not keen on the idea.
For commands that already exist in both APIs - just use the HTTP version. For stuff that's absent in the HTTP API - let me know what it is and we can discuss the best approach.
The whole power of the lua scripting is not about individual commands - it's about being able to pass values and lists around, process them, introspect the running Open Brush sketch etc etc. Just exposing single commands - I can do that easily enough just by adding a new endpoint.
Yes, I would prefer to use the http API. I just don't want to waste your effort with the "External Plugin" we talked about, if the reasoning for it was not right. At first, I just want to convert draw.path, color.set to the equivalent Plugin API, but am still generating the sequence and parameters in Python. If all I do is generate a lua file, and then use this new external plugin to "send and execute", I thought I could just do this via the delegate process I mentioned above. Anyways, I'm happy to wait and play with the External Plugin when it's available. Perhaps you've thought about the architecture and how it serves my use case and yet allows exercising and testing of the Plugin functions. But, if you want to level set again, I think it's better to jump on voice chat. I'm happy with the http API fixes so far
I just don't want to waste your effort with the "External Plugin"
I'd rather spend a bit longer on the right solution rather than implement something hacky that we'd be stuck with for eternity. I'm trying to anticipate future use-cases and avoid creating awful security holes or foot-guns. I think it's important for external apps and services to have a way to interact with real-time plugins. But - if there's ever a missing piece of the HTTP API that exists for Lua plugins, let me know. It's usually a job of a few minutes to add a new endpoint if the functionality already exists. I'm also not wedded to Lua. I'd have gone with Javascript if I could have found an open-source embeddable js runtime that had semi-reasonable docs. Same with Python - I would have preferred either over Lua, but Lua is just really well catered for when it comes to embedding in another app.
So, having learned more about svg, I'm leaning into it further, as it solves all the issues with approximating via small line segments, especially when the line segments fall below the minimum segment threshold, now 0.1. All the Python shapely objects return svg, very handy for all the maps. 2 questions:
1) Did you say I can directly pass an svg like the following to draw.svg?
<polyline fill="none" stroke="#66cc99" stroke-width="2.0" points="70.45,9.600000000002183...
Even a whole file? This would be very interesting and open up a lot of possibilities if this is easy to achieve. I don't believe this is how things worked b/c I tried it ha, and the API says only "path" statements:

2) I'm working on an interesting map right now, however, it is only open lines, and not closed polygons. Which is why I noticed what I think is a bug.
ob draw.svg="M 0 0 L 0 1000 L 1000 1000 L 1000 0"
This should not close the square:

But it does close the square. All draw.path strings are getting "closed" and I hadn't noticed this when I was drawing polygons but now I'm noticing it when drawing just lines and curves.
p.s.:
draw.svg has become extremely useful now that I know more about how svg works. Especially when it comes to map data. Some data is 1000s of line segments each 0.00x unit length long. However, if I scale these up and draw them individually, the whole structure scales off the OB canvas. But if I convert these connected segments into an svg, I can get the overall shape and fit it easily scaled to whatever works on the canvas. And with line segments there was always discontinuity. Becomes very obvious with most brushes. But now the whole shape can be drawn in a continuous stroke, preserving continuity, capturing subtle curvatures, etc. I can try this with all sorts of brushes.
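Roughly what my conversion does (Python sketch; the target size is just an example number, not an OB constant):

```python
def segments_to_svg_path(points, target_size=100.0):
    """Join a run of connected points into one continuous SVG path string,
    scaled to fit a target canvas size."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    span = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    scale = target_size / span
    cmds = []
    for i, (x, y) in enumerate(points):
        op = "M" if i == 0 else "L"  # one M then Ls keeps the stroke continuous
        cmds.append("%s %.3f %.3f" % (op, (x - min(xs)) * scale, (y - min(ys)) * scale))
    return " ".join(cmds)
```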
Did you say I can directly pass an svg like the following to draw.svg? <polyline fill="none" stroke="#66cc99" stroke-width="2.0" points="70.45,9.600000000002183... Even a whole file? This would be very interesting and open up a lot of possibilities if this is easy to achieve. I don't believe this is how things worked b/c I tried it ha, and the API says only "path" statements:
Ah - that's just an oversight. To maintain backwards compatibility I might add a new endpoint called draw.svgpath that explicitly only accepts paths - but keep the current behaviour of draw.svg (but only IF it doesn't first detect a valid full svg document). You're probably going to want to use POST rather than GET to call draw.svg - there's a limit on the size of url parameters. Plus escaping special characters is gonna be fun!
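Something like this on the Python side (a sketch; I'm assuming the usual /api/v1 endpoint, and form-encoding the body sidesteps the url-length and escaping issues):

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def encode_svg_body(path_string):
    # Form-encode so spaces and special characters survive transport
    return urlencode({"draw.svg": path_string}).encode("utf-8")

def post_svg(path_string, host="http://localhost:40074"):
    # Passing data= makes urllib issue a POST instead of a GET
    return urlopen(Request(host + "/api/v1", data=encode_svg_body(path_string)))
```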
But if I convert these connected segments into an svg, I can get the overall shape and fit it easily scaled to whatever works on the canvas. And with line segments there was always discontinuity. Becomes very obvious with most brushes. But now the whole shape can be drawn in a continuous stroke, preserving continuity, capturing subtle curvatures, etc. I can try this with all sorts of brushes.
If you draw a path using draw.path and pass in multiple points - it should be continuous. It would only be separate segments if you did multiple draw.path calls each with just 2 points. Essentially draw.svg ends up calling the same code - it just converts the path into a list of points first.
Ugh - the closing is a limitation of how I'm (mis)using the Unity Vector Graphics library. All the SVG parsing code I can find seems to be geared towards generating meshes from SVGs rather than polylines - and meshes are always closed shapes. I'd say if you want open curves then use draw.path - taking into account what I said above about separate segments.
argh! I keep forgetting this. I'm already getting much smoother results. I wish I'd done this before my latest video:
https://discord.com/channels/783806589991780412/805082441038823444/1231789210374111292
