nice_byte
today at 10:33 PM
> No you don't, cuMemAlloc(&ptr, size) will just give you device memory, and cuMemAllocHost will give you pinned host memory.
that's exactly what i said. You have to explicitly allocate one or the other type of memory. I.e. you have to think about what you need this memory _for_. It's literally just usage flags with extra steps.
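for reference, this is roughly what that looks like with the driver API (a minimal sketch, not from this thread; init boilerplate included, error checks omitted, sizes made up):
```c
#include <cuda.h>
#include <stddef.h>

int main(void) {
    // boilerplate: driver init plus a context (error checks omitted)
    CUdevice dev;
    CUcontext ctx;
    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    // two distinct entry points, chosen up front based on what the
    // memory is *for*, i.e. usage flags with extra steps
    CUdeviceptr d_buf;  // device memory, GPU-only
    cuMemAlloc(&d_buf, (size_t)1 << 20);

    void *h_buf;        // pinned host memory, CPU-visible
    cuMemAllocHost(&h_buf, (size_t)1 << 20);

    cuMemFreeHost(h_buf);
    cuMemFree(d_buf);
    cuCtxDestroy(ctx);
    return 0;
}
```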
> Why would UMA be necessary for this?
UMA is necessary if you want to be able to "just allocate some memory without caring about usage flags". Which is something you're not doing with CUDA.
> OpenGL handles this trivially,
OpenGL also doesn't allow you to explicitly manage memory. But you were asking for an explicit malloc. So which one do you want, "just make me a texture" or "just give me a chunk of memory"?
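for contrast, here's the entire "just make me a texture" path, roughly (a sketch assuming a current GL context; GL_RGBA8 may need <GL/glext.h> on some platforms):
```c
#include <GL/gl.h>

GLuint make_texture(void) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // no malloc, no heap types, no usage flags: the driver decides
    // where this lives and how big the backing allocation is
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    return tex;
}
```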
> Let me create a texture handle, and give me a function that queries the size that I can feed to malloc. That's it. No heap types, no usage flags.
Sure, that's what VMA gives you (modulo usage flags, which, as we've established, you can't get rid of). Excerpt from some code:
```c
VmaAllocationCreateInfo vma_alloc_info = {
.usage = VMA_MEMORY_USAGE_GPU_ONLY,
.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT};
VkImage img;
VmaAllocation allocn;
const VkResult create_alloc_vkerr = vmaCreateImage(
vma_allocator,
&vk_image_info, // <-- populated earlier with format, dimensions, etc.
&vma_alloc_info,
&img,
&allocn,
NULL);
```
Since i don't care about resource aliasing, that's the extent of "memory management" that i do in my rhi. The last time i had to think about different heap types or how to bind memory was approximately never.
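for comparison, here's roughly the raw-Vulkan dance that vmaCreateImage collapses into one call. A sketch, assuming the same device/vk_image_info as above; find_memory_type is a hypothetical helper, not a Vulkan function:
```c
#include <vulkan/vulkan.h>
#include <stdint.h>

// hypothetical helper: scan the device's memory types for one that
// matches the image's requirements and the wanted property flags
static uint32_t find_memory_type(VkPhysicalDevice pd, uint32_t type_bits,
                                 VkMemoryPropertyFlags props) {
    VkPhysicalDeviceMemoryProperties mp;
    vkGetPhysicalDeviceMemoryProperties(pd, &mp);
    for (uint32_t i = 0; i < mp.memoryTypeCount; ++i)
        if ((type_bits & (1u << i)) &&
            (mp.memoryTypes[i].propertyFlags & props) == props)
            return i;
    return UINT32_MAX; // no suitable type; real code must handle this
}

// create, query, pick a memory type, allocate, bind: five steps that
// vmaCreateImage does in one call (error checks omitted)
void create_image_by_hand(VkPhysicalDevice physical_device, VkDevice device,
                          const VkImageCreateInfo *vk_image_info) {
    VkImage img;
    vkCreateImage(device, vk_image_info, NULL, &img);

    VkMemoryRequirements reqs;
    vkGetImageMemoryRequirements(device, img, &reqs);

    VkMemoryAllocateInfo alloc_info = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
        .allocationSize = reqs.size,
        .memoryTypeIndex = find_memory_type(
            physical_device, reqs.memoryTypeBits,
            VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT)};

    VkDeviceMemory mem;
    vkAllocateMemory(device, &alloc_info, NULL, &mem);
    vkBindImageMemory(device, img, mem, 0);
}
```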