This was the last major obstacle to managing
virtual memory maps. Object caches are custom
allocators that allow for more fine-grained
allocation policies, including the ability to use
memory from the DMAP region.
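To give a rough idea of the shape of such a cache
(none of the names below are the actual kernel
API, and plain malloc() stands in for DMAP-backed
pages), a minimal object cache with a constructor
hook could look like this:

    #include <stdlib.h>
    #include <stddef.h>

    struct objcache {
        size_t obj_size;          /* size of each object                 */
        void (*ctor)(void *obj);  /* runs once per freshly allocated obj */
        void  *free_list;         /* singly linked list of recycled objs */
    };

    static struct objcache *objcache_create(size_t obj_size,
                                            void (*ctor)(void *))
    {
        struct objcache *oc = malloc(sizeof(*oc));
        if (oc == NULL)
            return NULL;
        /* an object must at least fit the free-list link */
        oc->obj_size = obj_size < sizeof(void *) ? sizeof(void *) : obj_size;
        oc->ctor = ctor;
        oc->free_list = NULL;
        return oc;
    }

    static void *objcache_alloc(struct objcache *oc)
    {
        void *obj = oc->free_list;
        if (obj != NULL) {
            oc->free_list = *(void **)obj;  /* pop a recycled object  */
            return obj;                     /* ctor already ran on it */
        }
        obj = malloc(oc->obj_size);
        if (obj != NULL && oc->ctor != NULL)
            oc->ctor(obj);
        return obj;
    }

    static void objcache_free(struct objcache *oc, void *obj)
    {
        *(void **)obj = oc->free_list;      /* recycle, don't release */
        oc->free_list = obj;
    }

Keeping recycled objects on a per-cache free list
is what makes per-cache policies (constructors,
backing memory, alignment) possible in the first
place.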
This is primarily for the upcoming slab allocator
update, but tbh not even i am sure what should go
in here, because i took a longer break and there
are like 40 modified files that are all
interlinked.
This is hopefully the last time in a while that
something in the mm subsystem needs a refactor
this large. There are two main changes:
- The page frame allocator returns a vm_page_t
rather than a virtual address.
- Data for the slab allocator is now stored in
struct vm_page, which means there is no overhead
in the slab itself, so the space is used more
efficiently (see the sketch below).
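As a sketch of what the second point looks like
in practice (the field names below are made up,
not the real struct), the slab bookkeeping sits
next to the rest of the per-page metadata, so a
slab page carries no header of its own:

    #include <stdint.h>
    #include <stddef.h>

    struct vm_objcache;                 /* owning cache, opaque here */

    struct vm_page {
        uintptr_t paddr;                /* physical address of the frame */
        uint32_t  flags_order;          /* flags and buddy order, packed */

        /* slab bookkeeping, only meaningful while the page backs a slab */
        struct {
            struct vm_objcache *cache;  /* cache this slab belongs to    */
            void               *free;   /* first free object in the page */
            uint16_t            inuse;  /* allocated objects in the page */
        } slab;
    };

    /* the allocator now hands out metadata, not a mapped address */
    struct vm_page *page_alloc(unsigned order);
    void           *page_to_virt(struct vm_page *pg);  /* e.g. via DMAP */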
This is the final part of the major mm subsystem
refactor (for now). The new and improved slab
allocator can do *proper* poisoning, with pretty
accurate out-of-bounds and use-after-free
detection.
vm_page_t has also been restructured; its flags
and order are now combined into one atomic field.
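As for the poisoning, in its simplest form it
works roughly like this (the 0x5a pattern and the
function names are placeholders, not the actual
implementation):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define SLAB_POISON 0x5a   /* fill byte for free objects */

    /* called when an object is freed */
    static void slab_poison(void *obj, size_t size)
    {
        memset(obj, SLAB_POISON, size);
    }

    /* called when the object is handed out again; any byte that changed
     * while the object was free means a use-after-free or an
     * out-of-bounds write from a neighbouring object */
    static void slab_check_poison(const void *obj, size_t size)
    {
        const uint8_t *p = obj;

        for (size_t i = 0; i < size; i++)
            assert(p[i] == SLAB_POISON);
    }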
This is part 3 of the mm subsystem overhaul.
The allocator doesn't rely on mutexes anymore and
uses individual per-order spinlocks instead.
Also, it is aware of multiple memory zones (normal
and DMA) as well as emergency reserves.
Page bitmaps take up 50% less space now.
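A rough sketch of the per-order locking and zone
split (all names are made up, and pthread
spinlocks stand in for the kernel's own locks):

    #include <pthread.h>
    #include <stddef.h>

    #define MAX_ORDER 11   /* illustrative, not the actual limit */

    enum mem_zone { ZONE_NORMAL, ZONE_DMA, ZONE_COUNT };

    struct vm_page { struct vm_page *next; };  /* just enough for this */

    struct free_area {
        pthread_spinlock_t lock;   /* one lock per order, not one mutex */
        struct vm_page    *head;   /* free blocks of this order         */
        size_t             count;
    };

    struct zone {
        struct free_area area[MAX_ORDER];
        size_t            reserve;   /* emergency reserve, in pages */
    };

    static struct zone zones[ZONE_COUNT];

    static void zones_init(void)
    {
        for (int z = 0; z < ZONE_COUNT; z++)
            for (int o = 0; o < MAX_ORDER; o++)
                pthread_spin_init(&zones[z].area[o].lock,
                                  PTHREAD_PROCESS_PRIVATE);
    }

    static struct vm_page *zone_pop(enum mem_zone z, unsigned order)
    {
        struct free_area *fa = &zones[z].area[order];
        struct vm_page *pg;

        pthread_spin_lock(&fa->lock);   /* contention is per order now */
        pg = fa->head;
        if (pg != NULL) {
            fa->head = pg->next;
            fa->count--;
        }
        pthread_spin_unlock(&fa->lock);
        return pg;
    }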
As of now, everything except the code imported
from FreeBSD is proprietary. Of course, it won't
be like this for long, only until we have decided
which license we want to use. The rationale is
that releasing everything under a copyleft license
later is always easier than doing so immediately
and then changing it afterwards.
Naturally, any changes made before this commit are
still subject to the terms of the CNPL.
This seems like a huge commit but it's really just
renaming a bunch of symbols. The entire mm
subsystem is probably gonna have to go through
some major changes in the near future, so it's
best to start off with something that is not too
chaotic i guess.
kqueues are going to form the basis for anything
related to I/O and IPC. Each kqueue is a
lock-free, atomic FIFO queue that supports
multiple emitters and consumers.
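The actual kqueue code isn't reproduced here, but
as an illustration of what a lock-free
multi-producer/multi-consumer FIFO can look like,
here is a bounded queue in the style of Dmitry
Vyukov's MPMC queue, written with C11 atomics
(capacity, names and layout are all illustrative):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define QCAP 256    /* capacity, must be a power of two */

    struct cell {
        _Atomic size_t seq;     /* sequence number for this slot */
        void          *data;
    };

    struct mpmc_queue {
        struct cell    cells[QCAP];
        _Atomic size_t head;    /* next slot to dequeue from */
        _Atomic size_t tail;    /* next slot to enqueue into */
    };

    static void mpmc_init(struct mpmc_queue *q)
    {
        for (size_t i = 0; i < QCAP; i++) {
            atomic_store_explicit(&q->cells[i].seq, i, memory_order_relaxed);
            q->cells[i].data = NULL;
        }
        atomic_store_explicit(&q->head, 0, memory_order_relaxed);
        atomic_store_explicit(&q->tail, 0, memory_order_relaxed);
    }

    static bool mpmc_enqueue(struct mpmc_queue *q, void *data)
    {
        size_t pos = atomic_load_explicit(&q->tail, memory_order_relaxed);

        for (;;) {
            struct cell *c = &q->cells[pos & (QCAP - 1)];
            size_t seq = atomic_load_explicit(&c->seq, memory_order_acquire);
            intptr_t diff = (intptr_t)seq - (intptr_t)pos;

            if (diff == 0) {
                /* slot is free: try to claim it by bumping the tail */
                if (atomic_compare_exchange_weak_explicit(&q->tail, &pos,
                        pos + 1, memory_order_relaxed, memory_order_relaxed)) {
                    c->data = data;
                    atomic_store_explicit(&c->seq, pos + 1,
                                          memory_order_release);
                    return true;
                }
                /* CAS failed: pos now holds the current tail, retry */
            } else if (diff < 0) {
                return false;   /* queue is full */
            } else {
                pos = atomic_load_explicit(&q->tail, memory_order_relaxed);
            }
        }
    }

    static bool mpmc_dequeue(struct mpmc_queue *q, void **data)
    {
        size_t pos = atomic_load_explicit(&q->head, memory_order_relaxed);

        for (;;) {
            struct cell *c = &q->cells[pos & (QCAP - 1)];
            size_t seq = atomic_load_explicit(&c->seq, memory_order_acquire);
            intptr_t diff = (intptr_t)seq - (intptr_t)(pos + 1);

            if (diff == 0) {
                /* slot holds data: try to claim it by bumping the head */
                if (atomic_compare_exchange_weak_explicit(&q->head, &pos,
                        pos + 1, memory_order_relaxed, memory_order_relaxed)) {
                    *data = c->data;
                    atomic_store_explicit(&c->seq, pos + QCAP,
                                          memory_order_release);
                    return true;
                }
            } else if (diff < 0) {
                return false;   /* queue is empty */
            } else {
                pos = atomic_load_explicit(&q->head, memory_order_relaxed);
            }
        }
    }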
Up to now, the page frame allocator's
initialization routine relied on the fact that
map_page() never needs to allocate new pages on
i386 as long as the mapping is a hugepage. This
is not at all true on other architectures,
however.
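For context, this is why the assumption held on
i386: with PSE, a hugepage mapping is a single
page-directory entry write and the directory
always exists, so nothing needs to be allocated
(the bit layout is the architectural one, the
surrounding plumbing is made up):

    #include <stdint.h>

    #define PDE_PRESENT (1u << 0)
    #define PDE_WRITE   (1u << 1)
    #define PDE_PS      (1u << 7)   /* page size bit: maps 4 MiB */

    /* one PDE covers the whole 4 MiB hugepage; the page directory itself
     * always exists, so map_page() never has to allocate anything here */
    static void map_hugepage_i386(uint32_t *page_dir, uint32_t virt,
                                  uint32_t phys)
    {
        page_dir[virt >> 22] = (phys & 0xffc00000u)
                             | PDE_PS | PDE_WRITE | PDE_PRESENT;
    }

On architectures with more paging levels, even a
single mapping may first require fresh pages for
the intermediate tables, which is exactly what the
init routine couldn't provide.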
This also includes a minor refactor of everything,
as well as some optimizations. The bitmap
operations have been moved into a separate file
because they are probably gonna come in handy in
other parts of the system as well.
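Roughly, the helpers are of this shape (names and
interfaces here are guesses, not the actual file):

    #include <limits.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    static inline void bitmap_set(unsigned long *map, size_t bit)
    {
        map[bit / BITS_PER_LONG] |= 1ul << (bit % BITS_PER_LONG);
    }

    static inline void bitmap_clear(unsigned long *map, size_t bit)
    {
        map[bit / BITS_PER_LONG] &= ~(1ul << (bit % BITS_PER_LONG));
    }

    static inline bool bitmap_test(const unsigned long *map, size_t bit)
    {
        return (map[bit / BITS_PER_LONG] >> (bit % BITS_PER_LONG)) & 1ul;
    }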
This is part of a series of commits where i
completely rewrite kmalloc() because it needs to
be able to return physically contiguous memory for
DMA operations.
Yes, i know the build isn't working right now.
So this was painful. kprintf() supports most of
the format specifiers from BSD now, except for the
$ sequence. It has gotten a general overhaul and
is significantly more sophisticated, bloated and
slower now. There is also some minor stuff i
forgot about, like the 't' length modifier, but
that isn't so important right now and will be
fixed later(TM).
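A small usage sketch; dump_mapping is made up, and
the calls stick to specifiers that plain printf
also understands, so they should fall under the
"most of BSD" that is supported:

    #include <stddef.h>
    #include <stdint.h>

    void kprintf(const char *fmt, ...);   /* provided by the kernel */

    static void dump_mapping(uintptr_t virt, uintptr_t phys, size_t len)
    {
        kprintf("map %#lx -> %#lx (%zu bytes)\n",
                (unsigned long)virt, (unsigned long)phys, len);
    }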
Now that memory allocation finally kind of works,
we can start focusing on the core system
architecture. This commit also fixes some bugs in
get_page() and friends and brings some performance
improvements, because the page map is now
addressed as unsigned longs rather than individual
bytes.
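The gist of the unsigned-long change, as a sketch
with made-up helper names: a fully allocated word
of the map is skipped with a single comparison
instead of checking every byte:

    #include <limits.h>
    #include <stddef.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    /* returns the index of the first clear (free) bit, or nbits if none */
    static size_t find_first_free(const unsigned long *map, size_t nbits)
    {
        size_t nwords = (nbits + BITS_PER_LONG - 1) / BITS_PER_LONG;

        for (size_t w = 0; w < nwords; w++) {
            if (map[w] == ~0ul)
                continue;               /* whole word allocated: skip */
            for (size_t b = 0; b < BITS_PER_LONG; b++) {
                size_t bit = w * BITS_PER_LONG + b;
                if (bit >= nbits)
                    break;
                if (!((map[w] >> b) & 1ul))
                    return bit;
            }
        }
        return nbits;
    }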
Turns out that, per the C standard, you can't just
pass a va_list down to subroutines and keep using
it afterwards, even though that worked perfectly
fine on ARM. Well then, the entire
kprintf thing needs to be refactored anyway at
some point in the future, so that more formatting
options are supported.
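The portable pattern is to either hand the callee
its own copy via va_copy(), or to pass a pointer
to the va_list; a minimal sketch (emit_args and
kprintf_demo are stand-ins, not the real code):

    #include <stdarg.h>
    #include <stdio.h>

    static void emit_args(const char *fmt, va_list ap)
    {
        vprintf(fmt, ap);   /* consumes ap; the caller's copy is toast */
    }

    void kprintf_demo(const char *fmt, ...)
    {
        va_list ap, ap2;

        va_start(ap, fmt);
        va_copy(ap2, ap);   /* give the helper its own copy */
        emit_args(fmt, ap2);
        va_end(ap2);

        /* ap is still usable here because only the copy was consumed */
        va_end(ap);
    }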