karolherbst (@karolherbst@chaos.social)

I'm currently looking into the best way to support SVM/USM in #rusticl, and one thing I'm wondering about is whether there are any drawbacks to doing:

mmap(some_chosen_address, ram_size, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, 0, 0);

to reserve a lot of virtual memory up front, and then suballocating SVM allocations out of that region with "PROT_READ | PROT_WRITE" and "MAP_FIXED"?

Alternatively, I was considering allocating smaller heaps on demand.
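
A minimal sketch of the reserve-then-commit variant, assuming Linux mmap() semantics; the base address hint and the sizes here are made up for illustration:

    #define _GNU_SOURCE          /* for MAP_FIXED_NOREPLACE */
    #include <stdio.h>
    #include <sys/mman.h>

    #define RESERVATION_SIZE (64ull << 30)  /* 64 GiB of address space, not RAM */
    #define SUBALLOC_OFFSET  (1ull << 20)
    #define SUBALLOC_SIZE    (2ull << 20)

    int main(void)
    {
        /* Reserve: PROT_NONE means no pages are committed, the kernel only
         * sets the address range aside.  MAP_FIXED_NOREPLACE fails instead
         * of clobbering anything already mapped at the hint address. */
        void *base = mmap((void *)0x200000000000ull, RESERVATION_SIZE, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0);
        if (base == MAP_FAILED) {
            perror("reserve");
            return 1;
        }

        /* Commit a sub-range: MAP_FIXED atomically replaces the PROT_NONE
         * pages in place, so the surrounding reservation stays intact. */
        void *svm = mmap((char *)base + SUBALLOC_OFFSET, SUBALLOC_SIZE,
                         PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        if (svm == MAP_FAILED) {
            perror("commit");
            return 1;
        }

        printf("reserved at %p, suballocation at %p\n", base, svm);
        return 0;
    }

The PROT_NONE reservation only claims address space; physical pages (and the commit charge) only appear once a sub-range is flipped to PROT_READ | PROT_WRITE.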

jhwgh1968 (@jhwgh1968@chaos.social)

@karolherbst I'm sleep-deprived today, but doesn't FIXED_NOREPLACE cause pinning, i.e. you can't move or swap it?

If I'm misremembering, then that's probably fine. Look at the heap allocation in "top" of old Golang programs 😄

karolherbst OP (@karolherbst@chaos.social)

@jhwgh1968 nah, FIXED_NOREPLACE just means the mmap call won't replace existing mappings.

jhwgh1968 (@jhwgh1968@chaos.social)

@karolherbst then I stand corrected. I say, go for it until something else breaks!

karolherbst OP (@karolherbst@chaos.social)

@jhwgh1968 mhhh... I can't really sub-allocate that way, because after munmap() the original reservation for that range is just gone as well...

Though maybe I can make it work without reserving such a huge range. I'm kinda worried about fragmentation, but... maybe that's fine...

jhwgh1968 (@jhwgh1968@chaos.social)

@karolherbst hence my last comment 😄

lina (@lina@vt.social)

@karolherbst @jhwgh1968 I think what you really want is to logically detach the mapping from its backing (and go back to PROT_NONE) without the address range ever becoming momentarily available, right? So instead of unmapping you just do a mmap(addr, size, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0); and that should work, I think?
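
A sketch of that release path (the helper name is made up): mapping fresh anonymous PROT_NONE pages with MAP_FIXED over the sub-range atomically replaces the committed pages in place, so the range drops back to a reserved-but-inaccessible state instead of becoming a hole another mapping could land in:

    #include <stddef.h>
    #include <sys/mman.h>

    /* "Free" a suballocation without punching a hole in the reservation:
     * the new anonymous PROT_NONE mapping discards the old pages and
     * leaves the address range reserved, exactly like right after the
     * initial big reservation. */
    static int svm_release_range(void *addr, size_t size)
    {
        void *p = mmap(addr, size, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        return p == MAP_FAILED ? -1 : 0;
    }

mprotect(addr, size, PROT_NONE) followed by madvise(addr, size, MADV_DONTNEED) should have roughly the same effect while keeping the original VMA around.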

karolherbst OP (@karolherbst@chaos.social)

@lina @jhwgh1968 yeah, it should.

Which just brings me back to the original question: whether reserving that amount of VM space is a good idea, and whether there are other options.

I also wonder how well that would play with things like libasan, which just reserves 20 TB of virtual memory here.

Anyway, it seems like Intel's CL stack just hopes nothing conflicts, so maybe I'll just do the same, because it should be fine (tm).
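
For what it's worth, a quick sanity check that such a reservation only costs address space, not memory (assumes Linux and /proc; the 1 TiB size is arbitrary): VmSize jumps by the reservation size while VmRSS stays flat, and since the kernel only accounts writable private mappings against the commit limit, the PROT_NONE reservation should stay cheap even under stricter overcommit settings.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Print VmSize (address space) and VmRSS (resident memory) so the
     * cost of the reservation is visible. */
    static void dump_vm(const char *tag)
    {
        FILE *f = fopen("/proc/self/status", "r");
        char line[256];

        printf("-- %s --\n", tag);
        while (f && fgets(line, sizeof(line), f)) {
            if (!strncmp(line, "VmSize:", 7) || !strncmp(line, "VmRSS:", 6))
                fputs(line, stdout);
        }
        if (f)
            fclose(f);
    }

    int main(void)
    {
        dump_vm("before");

        /* Reserve 1 TiB of address space; no physical pages are touched. */
        void *base = mmap(NULL, 1ull << 40, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED) {
            perror("reserve");
            return 1;
        }

        dump_vm("after reserving 1 TiB");
        return 0;
    }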
