FreeRTOS™ Architecture: Source Code Organization and IoT Device integration server: are these the same event or different ones?
I recently ran into a tricky memory access violation on x86-64 that wasn't easy to catch with gdb. Switching to Intel's VTune gave me deeper insight into cache misses and pipeline stalls, which made all the difference. In lighter cases, I've also used perf to track CPU usage spikes and lock contention across cores.
What tools or methods do you rely on for debugging low-level issues on x86-64? Have tools like perf, gdb, or VTune been particularly helpful for you in diagnosing hardware-specific issues?
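For the perf side of this, here is a rough sketch of the commands I mean. The binary name `./myapp` is a placeholder, exact event names vary by CPU and kernel, and `perf lock` needs a kernel with the relevant tracepoints enabled:

```shell
#!/bin/sh
# Placeholder binary under investigation; substitute your own program.
BIN=./myapp

if command -v perf >/dev/null 2>&1 && [ -x "$BIN" ]; then
    # Quick hardware counters: one run, summary of cache behaviour.
    perf stat -e cycles,instructions,cache-references,cache-misses -- "$BIN"

    # Sample call graphs to locate CPU hotspots; inspect with `perf report`.
    perf record -g -- "$BIN"

    # Lock contention across cores (requires lock tracepoints in the kernel).
    perf lock record -- "$BIN"
    perf lock report
else
    echo "perf or $BIN not available; skipping demo"
fi
```

`perf record` writes a `perf.data` file in the current directory; running `perf report` afterwards gives an interactive per-function breakdown.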
I wonder how the devs here feel about AI/chatbots/copilots in their development workflows. My casual conversations have yielded all sorts of responses.
In Embeetle IDE we provide a chat window that is powered by Pieces for Developers. There you can discuss your embedded project with a chatbot (or ask it to generate some code, up to you). The LLM is context-aware: it has access to your project code, which is the benefit over simply firing up a browser and having the same discussion with ChatGPT there.
You can then copy-paste things from the Embeetle chat window into your Embeetle code files. That part isn't automated: you need to copy-paste manually if you want to use output from our chatbot. I'm not sure if we even want to automate things here. Doing it manually still feels the safest (I wouldn't want to give up control and let a chatbot mess with my code).
You can select from 49 different LLMs (including ChatGPT 4o, Google Gemini, ...). If you're worried about security, just pick an offline LLM to discuss your code with; then nothing leaks out.
There are 49 models to choose from, so I haven't tested them all yet. I must admit that the offline models are a bit sluggish, but that's probably because I'm working on a laptop with a moderate GPU; I expect they would perform better on high-end hardware.
So I prefer to use the online models when working with the Pieces AI in Embeetle. However, I can imagine that this isn't an option if you're worried about code leaking out; in that case, pick an offline model.
@ZacckOsiemo : the AI in Embeetle is not as invasive as in other IDEs. It doesn't constantly push suggestions while you're coding. At the moment it's only present in a separate chat window that you have to open manually. In the future we might integrate the AI in other parts of the IDE, but we'll always be careful not to become "too pushy".
I usually work with ChatGPT-4o and Google Gemini. The great thing about the AI chats in Embeetle IDE is that you can switch between models even mid-conversation, so you can try out a bit of everything.
Have you noticed any significant differences between the models you’ve tested so far? Curious if some are more efficient or better suited for certain tasks.