AI's current ability to generate cloud backend and edge logic is deceptive.

I asked AI to write AWS CDK code deploying the backend infrastructure for a sample Internet of Things system. It took a few prompts, but eventually it produced nice-looking, well-commented Python code. I was surprised by the overall quality of the output I received...

... until I reviewed that code more closely.

A thorough review made me doubt the merit of the solution. While the code looked correct at first glance, it contained several security vulnerabilities.
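To give a flavor of the problem class without reproducing the exact generated code: a typical issue is an AWS IoT policy statement that grants every action on every resource. The policy documents below are hypothetical reconstructions, and the checking helper is my own sketch, not part of any AWS library:

```python
# Hypothetical example: the kind of overly broad AWS IoT policy
# statement that AI-generated infrastructure code tends to produce,
# plus a small helper that flags it. Account ID and topic names
# are made up for illustration.

def find_wildcard_statements(policy_document: dict) -> list[dict]:
    """Return Allow statements that combine a wildcard action with
    a wildcard resource."""
    flagged = []
    for stmt in policy_document.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if (
            stmt.get("Effect") == "Allow"
            and any(a.endswith("*") for a in actions)
            and "*" in resources
        ):
            flagged.append(stmt)
    return flagged

# The generated policy let any device do anything, anywhere:
generated_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "iot:*", "Resource": "*"}],
}

# A scoped policy restricts each device to its own telemetry topic:
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["iot:Publish"],
        "Resource": [
            "arn:aws:iot:eu-west-1:123456789012:"
            "topic/devices/${iot:Connection.Thing.ThingName}/telemetry"
        ],
    }],
}

print(len(find_wildcard_statements(generated_policy)))  # 1 statement flagged
print(len(find_wildcard_statements(scoped_policy)))     # 0 statements flagged
```

The dangerous part is that both policies deploy and "work" equally well in a demo, so the flaw only surfaces if you know to look for it.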

I repeated the exercise and asked AI to generate edge logic that communicates with the backend system over MQTT. The result had similar issues.
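On the edge side, the recurring issue was connection hardening: for example, connecting over plain MQTT on port 1883 with TLS disabled and no device certificates, while AWS IoT Core expects mutual TLS on port 8883. The configuration fields below are assumptions for illustration, not a real SDK's API; the validator is my own sketch of the kind of sanity check such generated code needs:

```python
# Hypothetical sketch: sanity-check an edge device's MQTT connection
# settings before using them. Field names are assumed, not a real API.

def check_mqtt_config(cfg: dict) -> list[str]:
    """Return a list of human-readable problems found in the config."""
    problems = []
    if not cfg.get("use_tls", False):
        problems.append("TLS is disabled; broker traffic would be plaintext")
    if cfg.get("port") == 1883:
        problems.append("port 1883 is the unencrypted MQTT port; expected 8883")
    for key in ("ca_cert", "client_cert", "client_key"):
        if not cfg.get(key):
            problems.append(f"missing {key}; mutual TLS needs device credentials")
    return problems

# Simplified version of what the generated edge code configured:
generated_cfg = {"host": "broker.example.com", "port": 1883, "use_tls": False}

# A hardened config in the AWS IoT Core mutual-TLS style:
hardened_cfg = {
    "host": "broker.example.com",
    "port": 8883,
    "use_tls": True,
    "ca_cert": "AmazonRootCA1.pem",
    "client_cert": "device.pem.crt",
    "client_key": "device-private.pem.key",
}

print(check_mqtt_config(generated_cfg))  # reports five problems
print(check_mqtt_config(hardened_cfg))   # []
```

Again, both configurations connect happily to a permissive test broker, which is exactly why the generated version survives a quick demo.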

AI models are trained on publicly available code of widely varying quality. Ultimately, we get a blend of good and bad fragments in a single script. It takes in-depth domain knowledge to verify and adjust the generated solutions. Don't let the first impression mislead you!