If you haven't already, start with data manipulation using pandas and NumPy, understanding algorithms, and working with libraries like Scikit-learn, TensorFlow, and PyTorch. You can start by creating a basic AI chatbot until you understand the logic flow, then move on to more complex projects.
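To make that first step concrete, here's a minimal pandas/NumPy warm-up - just a sketch, the column names and fake sensor data are made up for illustration:
```python
import numpy as np
import pandas as pd

# Fake sensor data so the example is self-contained;
# swap in pd.read_csv("yourfile.csv") for real data.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "t": np.arange(100),
    "temp_c": 25 + rng.normal(0, 0.5, 100),
    "vibration": rng.normal(0, 1.0, 100),
})

df["temp_f"] = df["temp_c"] * 9 / 5 + 32      # vectorized column math, no loops
outliers = df[np.abs(df["vibration"]) > 2 * df["vibration"].std()]
print(df.describe())                           # quick statistical sanity check
print(f"{len(outliers)} outlier rows")
```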
@techielew I was just reading the scrollback - this is quite interesting. It also sounds almost identical to a hardware-in-the-loop CI setup, with the key differences being that we push from users instead of source control (I guess it would be encumbering for users to push git to flash the hardware?) and that users want to poke/shoogle/look at the hardware instead of running automated tests.
I'd be interested to discuss further / help with this.
My gut feeling re the AWS thing is that I'm on the side of the $5 Linode - I'm generally of the opinion that if the part talking to hardware needs to scale for users, the architecture was built wrong - we usually run these things on something in Raspberry Pi territory.
P.S. - if you do use a Pi, you get the GPIOs/busses, so it's easy to hang a couple of I2C/SPI devices off it to stimulate various parts of the system. The Linode would basically serve some RPC interface, which should be very light, and the "fancy web stuff" can be done client-side - the Linode is more there so that you aren't exposing a Pi in your house to the nasty world.
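To make the "light RPC" idea concrete, here's roughly the shape of it - a sketch only, using Python's standard library, where PI_URL, the ports, and the command names are all invented:
```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PI_URL = "http://pi.example.internal:8000"   # the Pi at home, reachable over a VPN/tunnel
ALLOWED = {"flash", "reset", "read_i2c"}     # only expose a tiny command set

class Relay(BaseHTTPRequestHandler):
    """Forwards whitelisted POSTs to the Pi and returns the result verbatim."""
    def do_POST(self):
        cmd = self.path.strip("/")
        if cmd not in ALLOWED:
            self.send_error(404, "unknown command")
            return
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(f"{PI_URL}/{cmd}", data=body,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            payload = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Relay).serve_forever()
```
The point is the Linode does nothing heavy: it whitelists a handful of commands and proxies them to the Pi, so the hardware access and state all stay at home.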
You could actually drive a system like this from git: people drop a link into something, and when it's their turn the system picks up the code, flashes it, and runs it.
The difficult part here is if you want people to be able to interact with the target in real time - now just throwing jobs at it doesn't work; it becomes more like booking an appointment.
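A toy sketch of that "appointment" model, just to show the shape - flash_and_run() is a placeholder for whatever actually clones, builds, and flashes:
```python
import time
from collections import deque

SLOT_SECONDS = 300          # 5-minute exclusive slot per user
queue = deque()             # (user, git_url) pairs, FIFO

def flash_and_run(user, git_url):
    # Placeholder: real version would git clone, build, flash over
    # SWD/serial, and pipe the UART back to the user.
    print(f"[{user}] cloning {git_url}, flashing target, streaming output...")

def run_scheduler():
    while True:
        if not queue:
            time.sleep(1)
            continue
        user, git_url = queue.popleft()
        flash_and_run(user, git_url)
        # Hold the target exclusively for this user until the slot expires,
        # so they can poke at it interactively instead of just batch-running.
        start = time.monotonic()
        while time.monotonic() - start < SLOT_SECONDS:
            time.sleep(1)

queue.append(("alice", "https://github.com/alice/blinky"))
```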
We were talking about building something like this on another server for an embedded Advent of Code. We hashed out some ideas but didn't have the time to commit and make it happen.
Not tagged, but the HAL and the use of device trees give more hardware abstraction than the other options. The closest thing in terms of completeness is the Arduino HAL.
The general quality of the drivers is also good enough that you don't have to hack them up all the time.
That's a great point, Zephyr's HAL and device tree really do provide that extra level of hardware abstraction, which can simplify development significantly. How would you compare working with Zephyr's device tree to other approaches you've used in the past? Do you find it as intuitive, or are there challenges you've had to overcome to fully leverage its potential?
Yeah, looking at it as scheduled HIL testing is a good point. There's a way to do this that could be much less expensive (i.e., a Linode) without reinventing the wheel, if we're willing to do round-robin 'appointment' scheduling. It's not what was originally envisioned, but perhaps a necessary next step.
We'd certainly welcome the assist, @ke7c2mi. Thanks for the offer. I have a lot going on this upcoming week but hopefully we'll be able to regroup next weekend.
Also, I'm open to any ideas from anyone that would provide an alternative to the "scheduling/appointment" phenomenon, which would be annoying.
Maybe it's an extension of the AMA we just did with Ming. You have to submit your question and code ahead of time, and then we'll select the ones that are of the broadest interest and let that user 'drive' under the guidance of the instructor/expert.
I don't think it's particularly intuitive, and I think it's quite a steep learning curve coming from bare metal / reading datasheets and poking memory. It has felt cumbersome for me compared to things like FreeRTOS/RTIC/bare metal.
Device trees themselves are neat, and I think pretty intuitive - it's just some descriptive, templatey IDL. On the other hand, they map to your entire system in an abstract way, so actually doing things with them can be complicated - the device tree itself isn't the problem there, though!
Hey guys, Cherry here, just joined the group. Excited to be here. A little intro about myself: I am a recent grad with over a year of experience as an Embedded Software Developer.
Thanks for the suggestion. @Edison_ngunjiri also suggested a tinyML project that involves hardware and code that runs on a microcontroller. Am I correct to assume that it would be the first thing I need to understand as a step towards AI/ML in embedded systems?
It's pretty neat. I was able to ship a few modules with Zephyr. Development is pretty direct, especially since I like the Linux kernel and the whole DTS style of development.
One oof moment was the difference in open-source licenses between the Linux kernel (GPL-2.0) and Zephyr (Apache-2.0) - I had to take down my PR due to the incompatibility.
If you are interested in ML/AI in embedded systems, I would suggest you go right into it. Just reproduce the existing tinyML/Picovoice projects.
With these two platforms, it's more of a process than actual coding. After several projects, you may want to look at pandas, NumPy, and scikit-learn for deeper understanding. This has always been my approach: from known to unknown.
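If it helps, that first scikit-learn step can be this small - a sketch using the bundled iris dataset so it runs anywhere; swap in your own sensor features later:
```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Bundled toy dataset: 150 flower samples, 4 numeric features, 3 classes.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)                       # train on 75% of the data
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```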
Hi, I currently work for a locomotive manufacturing company; it's been a year. My current work is mostly testing and migrating legacy projects to QNX.
In my job I get to speak to lots of people about Rust. Some are just starting out, some have barely ever heard of it, and then some people are running Rust silently in production at a very large c ...
It seems like the automotive industry is preparing to adopt Rust. However, the author admits that Rust may not be suitable for early prototypes and certain use cases. I wonder what those use cases might be?
In my experience Rust sucks if you have to do discovery or prototyping, because you almost have to build the whole house of cards before it works. However, if you know exactly where you are going, Rust is OK. I noticed that the earliest POC was in C, then Rust came along. Also, this is the 12-volt non-critical ECU, so it seems there is still a bit of work to get to the mission-critical stuff.
Hey guys, I could use some career advice! I'm currently in my second year of university studying Electronics and Communication, and I've been working with Arduino and microcontrollers since I was 14. Over the years, I've learned quite a bit about Arduino, AVR, STM32, ESP32, custom board design, Embedded C, register-level programming, and some IoT as well.
Right now, I'm trying to figure out what to specialize in so I can land a good job in the next two years. I'd love to work in a big MNC within the electronics industry or possibly join my dad's instrumentation manufacturing business - or even start my own hardware company down the line.
I feel like I've got a good foundation in generalized electronics, but I'm not sure which area I should focus on next, especially considering current trends. What fields do you guys think would be best for me to master, given my background? I also feel like my early start gives me an edge compared to my classmates (most of them haven't even touched an Arduino), so I want to make sure I'm leveraging that advantage.
Basically, in C, you can quickly prototype a motor control application by directly accessing registers and using global variables, even cutting corners for speed. In Rust, strict rules like ownership and borrowing force you to handle memory safety upfront, which slows down development during prototyping.
Hi folks, 46-year-old software engineer by trade, leveraging some time off to tinker around with some embedded, IoT, 3D printing, and other various projects.
Hi @ZacckOsiemo, my last gig was in the Open Source Program Office at Cisco, and I've spent a lot of time over the years working on Kubernetes and infrastructure/platform automation.
I've always been interested in alternative computing architectures, microcontrollers, and FPGAs, but never "had the time" to really explore. Now that I have a few months off, I'm planning on taking the time to dive in head first.
I'm currently furiously 3D printing some organizational bits for my office; then I'm planning on blogging, recording, and streaming my misadventures in hardware.
Hi @Ming, a friend of mine was wondering: how do you approach a situation where a build doesn't work as expected due to variables changing, skipping, or being overridden by included meta-layers? The build is technically successful, but the outcome isn't what we anticipated. How can we debug this kind of behavior and effectively track variable values and overrides?
For a concrete example, consider an SoC vendor meta-layer with two different versions, such as community and proprietary. We aim to include only specific parts of the vendor's meta-layer, but it's interfering with the development kit BSP layers.
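To illustrate the kind of inspection we're after: `bitbake -e <recipe>` gives this as text, with comments showing which file set, appended, or overrode each variable, but ideally we'd get at the same data programmatically. Something like this sketch using BitBake's tinfoil API - "base-files" is just a stand-in for the recipe being debugged, and the API details may vary between releases:
```python
import bb.tinfoil

# Run from an initialized build environment (i.e. after sourcing oe-init-build-env).
with bb.tinfoil.Tinfoil() as tinfoil:
    tinfoil.prepare()  # full parse so recipes can be looked up

    # Global configuration value, after all layers, .bbappends,
    # and overrides have been applied.
    print("MACHINE =", tinfoil.config_data.getVar("MACHINE"))

    # Per-recipe view: the value as the recipe actually sees it.
    # Substitute the recipe you're debugging for "base-files".
    rd = tinfoil.parse_recipe("base-files")
    print("SRC_URI =", rd.getVar("SRC_URI"))
```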