llm.mojo: GPT2 fine-tuning and inference in a single Mojo file - Modular
Modular • 2y ago • 17 replies
Jack
llm.mojo: GPT2 fine-tuning and inference in a single Mojo file
https://github.com/dorjeduck/llm.mojo
GitHub - dorjeduck/llm.mojo: port of Andrej Karpathy's llm.c to Mojo. Contribute to dorjeduck/llm.mojo development by creating an account on GitHub.
Modular Discord — 20,199 members
This server is the home of the MAX and Mojo community! Join us to chat about all things Modular.
Similar Threads
Inference CNN model in Mojo (YOLO implementation) — Modular / community-showcase — 2y ago
heapq.mojo - Python's priority queue rewritten in Mojo — Modular / community-showcase — 2y ago
JavaScript in Mojo — Modular / community-showcase — 2y ago
Implement and benchmark Softmax algorithms in Mojo — Modular / community-showcase — 2y ago