Modular

This server is the home of the MAX and Mojo community. Join us to chat about all things Modular.

questions
How should I be loading the `get_scalar_from_managed_tensor_slice` kernel?

I am apparently missing it, despite using this as my session load:
```mojo
var model = session.load(
    graph,
...
```

Has anyone ever seen this error

While I was writing some code in Mojo, this error message appeared on the first line of my code: "A crash happened in the Mojo Language Server when processing this document. The Language Server will try to reprocess this document once it is edited again. Please report this issue in https://github.com/modularml/mojo/issues along with all the relevant source codes with their current contents." Afterwards, regardless of the changes I make (i.e. making a new file or altering the current file), I keep getting the following error and stack dump:
```
Stack dump without symbol names (ensure you have llvm-symbolizer in your PATH or set the environment var LLVM_SYMBOLIZER_PATH to point to it):
255 mojo 0x000057bc2c289433
```
After spitting out a couple hundred identical messages, the output finally ends with "mojo crashed! Please file a bug report...."

Question about the FFI, unsafe_cstr_ptr, and PathLike

When calling into a dylib from Mojo using the FFI, the `char*` I pass is seemingly freed before the C function receives it: the following code's output won't show the path. However, if I artificially "retain" the path_string until after the external call, it succeeds. It also succeeds if I use a string literal instead of a Path. Is this a bug, or a misunderstanding on my part of ASAP destruction?
```mojo
# example.mojo
#...
```

Tried the new Max custom_ops examples with my RTX 3050 and using CPU

When I run the custom_ops examples on the nightly branch, I noticed they were slower than in the demo, and then found that accelerator_count() was returning 0 even though I have an Nvidia GPU.

Is there a way in Mojo to know the type of a variable/object?

Something like type() in Python. If not, is this useful and/or on the roadmap?
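For reference, here is the Python behavior the question is alluding to — a quick illustration of `type()` and `isinstance()` in Python itself, not a statement about what Mojo offers or has planned:

```python
# Python's built-in type() inspects a value's runtime type.
x = 42
print(type(x))             # <class 'int'>
print(type(x) is int)      # True

# isinstance() is the usual way to branch on a type.
print(isinstance(x, int))  # True
```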

Unable to use PyTorch from Mojo

I just wanted to try PyTorch and created a project using the nightly build. I then installed PyTorch using `magic add "pytorch"`. This is the code I tried:
```mojo
from python import Python
...
```

cannot return reference with incompatible origin

What's the easiest way to deal with this error?

Mojo in CMD

How may I run Mojo directly in Command Prompt instead of running it through a virtual machine?

What are the best practices for handling slices and their performance in Mojo?

I'm trying to better understand how slices work in Mojo, particularly regarding performance. From what I've seen, slices in languages like Python can introduce overhead in high-performance scenarios. In Mojo, are there any specific optimizations for slices, or situations where their use should be avoided? Is there any comparison between slices and other approaches (like pointers or arrays) in terms of efficiency?

Is there a ticket to follow for regex in mojo?

Is there a ticket to follow for regex in mojo?

Publish mojo package

Hi, I am new to Mojo. I am wondering whether it is possible to build a package and then publish it to something similar to PyPI, so other users could install the package as a dependency in their projects. I only found information on how to build a package, but couldn't find anything on publishing it.

Passing a Slice to a function

What is happening when I pass a slice of a list to a function? With these examples (very contrived, but reflective of what I'm seeing in larger real code), passing a slice of a list to a function with signature `borrowed items: List[UInt8]` is way slower than any other approach. Is it allocating a new copy of the slice? Should I be looking into Spans instead? Or is this an area still being worked on (https://github.com/modularml/mojo/issues/3653)? The signature of `__getitem__` for List makes it look like it returns a ref to itself, though, which seemingly wouldn't need to allocate...
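For comparison, in CPython a list slice does allocate a fresh list before the call, which matches the copy this question suspects; whether Mojo's `List` slicing behaves the same way is exactly what's being asked. A quick Python check of the copy semantics:

```python
items = list(range(1000))

def consume(chunk: list) -> int:
    # Receives whatever object the caller built at the call site.
    return sum(chunk)

# items[100:200] builds a brand-new 100-element list before the call.
chunk = items[100:200]
print(chunk is items)           # False: separate allocation
chunk[0] = -1
print(items[100])               # 100: mutating the slice leaves the original untouched
print(consume(items[100:200]))  # 14950, i.e. sum(range(100, 200))
```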

Max Installation (Arch Linux)

On the Modular website there's nowhere to download MAX. How do I download MAX? Does the Magic🪄 venv already contain both MAX and Mojo🔥?

How to configure execution arguments for mojo-lldb in VSCode

Is there a way to pass command-line parameters to the Mojo debugger so that the following code prints out? I'm looking for the launch.json configuration that would be equivalent to:
```
$ mojo debug -D DEBUG ./debug_mode.mojo
```
...

I can no longer post in the other channels

And this feels like a dumb question, but I can't figure out why. I wanted to post in the Advent of Code channel. I've gone through all the "Start Here" material. Am I missing something, or are channels locked down?

Why is `alias U8 = DType.uint8` not a type?

```mojo
alias U8 = DType.uint8

@inline_always
fn is_digit(value: U8) -> Bool:
...
```

linker error: library not found when running mojo build

Hello, I am trying to compile a Mojo file, but the linker can't find the library zlib, which is installed on my system (nix package manager):
```zsh
❯ magic run mojo build psm.mojo
ld: library not found for -lz
...
```

When can we expect GPU kernels in Mojo?

I need to implement a GPU kernel to perform a highly parallel mathematical computation. I saw on GitHub that GPU support should land in the next MAX release, but when is that expected? I need to know whether I should wait or start implementing in CUDA instead...

What is the correct way to call polynomial_evaluate?

I am trying to follow this code from Mojo 24.2 in Mojo 24.5: Newton-Raphson. But I get the error `invalid call to 'polynomial_evaluate': failed to infer parameter #0`. Here is a relevant snippet of the code:
```mojo
from math.polynomial import polynomial_evaluate
from math import ulp
...
```

Arena Allocated Coroutines

I was watching the "Efficient Coroutine Implementation in MLIR" talk, and it seems like there isn't any room in that design to support arena-allocating the frames, nor any place for handling a failed allocation of a coroutine frame. This is somewhat concerning to me because, while being able to move to stack allocations is nice, being able to grab a right-sized allocation from an arena allocator is nicer, especially in the context of ensuring you have enough memory for the coroutine. For frequently allocated coroutines (consider the handle_request top-level function of an HTTP server), this means that instead of going through all of the machinery in tcmalloc, you may be performing a dequeue operation on a ring buffer of free frames, which is substantially faster.

Would it be possible to have the coroutine take an `alloc: Allocator[CoroutineFrameType] = DefaultMojoAllocator` parameter in some way, or otherwise inject an allocator into the coroutine? I'm still thinking over how I would want custom allocators to behave, but I know that this is a feature I and others will want.

As for my specialty of databases: not being able to handle allocation failures means you can't use the feature in production code, because it could lead to unnecessary crashes. (The database is likely the largest memory consumer on any system it runs on, and it typically has a lot of caching, so it can actually do something about allocation failures.)
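The ring-buffer-of-free-frames idea described above can be sketched in a few lines. This is a hypothetical illustration in Python (the `FrameArena` name and its methods are invented here), not the MLIR coroutine design or any Mojo API:

```python
from collections import deque

class FrameArena:
    """Hands out fixed-size 'frames' from a preallocated pool.

    Acquiring a frame is a deque pop instead of a general-purpose
    malloc, and exhaustion is reported explicitly so the caller can
    shed load rather than crash.
    """

    def __init__(self, frame_size: int, count: int):
        self._free = deque(bytearray(frame_size) for _ in range(count))

    def acquire(self):
        if not self._free:
            return None  # allocation failure is observable, not fatal
        return self._free.popleft()

    def release(self, frame):
        self._free.append(frame)

arena = FrameArena(frame_size=256, count=2)
a = arena.acquire()
b = arena.acquire()
print(arena.acquire())       # None: pool exhausted, caller can back off
arena.release(a)
print(arena.acquire() is a)  # True: the freed frame is reused
```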