7 changes: 7 additions & 0 deletions doc/kipc.adoc
@@ -260,6 +260,13 @@ the dump region index. If the dump region index is equal to or greater
than the number of dump regions for the specified task, `None` will
be returned.

Passing a dump region of 0 returns the task's descriptor in kernel memory, and
Contributor:

I'm confused. It sounds like this is a new feature, and now you need to pass different region indices to GetTaskDumpRegionRequest than before. But I don't see any change to the kernel's implementation of get_task_dump_region...

Collaborator (Author):

You're correct that nothing has changed; this is just documenting existing behavior to make it explicit!

will always return `Some(..)` (for a valid task). Subsequent regions are
guaranteed to be sorted by (ascending) base address.

CAUTION: The descriptor region may not be sorted with respect to other regions;
kernel memory may be either above or below task memory.

=== `read_task_dump_region` (7)

For a given task and task dump region, this will read the specified region and
24 changes: 24 additions & 0 deletions sys/userlib/src/kipc.rs
@@ -63,9 +63,33 @@ pub fn find_faulted_task(task: usize) -> Option<NonZeroUsize> {
NonZeroUsize::new(response as usize)
}

/// Returns a [`TaskDumpRegion`](abi::TaskDumpRegion) for this task's descriptor
///
/// The task descriptor is located in kernel memory.
pub fn get_task_desc_region(task: usize) -> abi::TaskDumpRegion {
// It is always valid to ask the kernel for the 0th dump region
get_task_dump_region_inner(task, 0).unwrap_lite()
}

/// Returns the `i`'th dumpable region for the given task (or `None`)
///
/// Dumpable regions are located in task RAM, and are returned in sorted
/// (ascending) order by base address.
pub fn get_task_dump_region(
task: usize,
region: usize,
) -> Option<abi::TaskDumpRegion> {
// Region 0 is the task descriptor, so we add 1 here
get_task_dump_region_inner(task, region.checked_add(1)?)
}
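The indexing contract here (raw region 0 is the descriptor; the public wrapper shifts by one) can be modeled in isolation. The sketch below is an assumption-laden stand-in, not the real KIPC plumbing: `TaskDumpRegion`, `inner`, and the slice-backed region table are all hypothetical, mirroring only the base/size shape and index arithmetic used in this diff.

```rust
// Hypothetical stand-in for abi::TaskDumpRegion (fields mirror the
// base/size shape used elsewhere in this PR).
#[derive(Clone, Copy, Debug, PartialEq)]
struct TaskDumpRegion {
    base: u32,
    size: u32,
}

// Stand-in for get_task_dump_region_inner: raw index 0 is the task
// descriptor, and indices 1.. are dumpable regions sorted by base address.
fn inner(table: &[TaskDumpRegion], region: usize) -> Option<TaskDumpRegion> {
    table.get(region).copied()
}

// Mirrors kipc::get_task_desc_region: raw index 0 is always present for a
// valid task, so unwrapping is sound in this model.
fn get_task_desc_region(table: &[TaskDumpRegion]) -> TaskDumpRegion {
    inner(table, 0).unwrap()
}

// Mirrors kipc::get_task_dump_region: public index i maps to raw index
// i + 1, skipping the descriptor; checked_add guards the usize::MAX edge.
fn get_task_dump_region(
    table: &[TaskDumpRegion],
    region: usize,
) -> Option<TaskDumpRegion> {
    inner(table, region.checked_add(1)?)
}

fn main() {
    let table = [
        TaskDumpRegion { base: 0x0800_0000, size: 0x100 }, // descriptor
        TaskDumpRegion { base: 0x2000_0000, size: 0x400 },
        TaskDumpRegion { base: 0x2000_1000, size: 0x200 },
    ];
    assert_eq!(get_task_desc_region(&table).base, 0x0800_0000);
    assert_eq!(get_task_dump_region(&table, 0).unwrap().base, 0x2000_0000);
    // Index past the end, and the checked_add overflow case, both yield None.
    assert!(get_task_dump_region(&table, 2).is_none());
    assert!(get_task_dump_region(&table, usize::MAX).is_none());
    println!("index mapping ok");
}
```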

/// Access to the raw [`GetTaskDumpRegion`](Kipcnum::GetTaskDumpRegion) KIPC
///
/// Wrapped by [`get_task_desc_region`] or [`get_task_dump_region`] for
/// higher-level semantics.
fn get_task_dump_region_inner(
task: usize,
region: usize,
) -> Option<abi::TaskDumpRegion> {
let msg = (task as u32, region as u32);
let mut buf = [0; core::mem::size_of::<(u32, u32)>()];
108 changes: 72 additions & 36 deletions task/jefe/src/dump.rs
@@ -247,25 +247,11 @@ pub fn dump_task(base: u32, task: usize) -> Result<u8, DumpAgentError> {

let area = dump_task_setup(base, DumpTaskContents::SingleTask)?;

for ndx in 0..=usize::MAX {
//
// We need to ask the kernel which regions we should dump for this
// task, which we do by asking for each dump region by index. Note
// that get_task_dump_region is O(#regions) -- which makes this loop
// quadratic: O(#regions * #dumpable). Fortunately, these numbers are
// very small: the number of regions is generally 3 or less (and -- as
// of this writing -- tops out at 7), and the number of dumpable
// regions is generally just one (two when including the task TCB, but
// that's constant time to extract). So this isn't as bad as it
// potentially looks (and boils down to two iterations over all
// regions in a task) -- but could become so if these numbers become
// larger.
//
let Some(region) = kipc::get_task_dump_region(task, ndx) else {
break;
};
// Helper function to add a region to the dump
let add_dump_region = |region: abi::TaskDumpRegion| {
// Skip regions which are in the space used for raw dump data
if in_dump_area(region.base, region.size) {
continue;
return Ok(());
}
ringbuf_entry!(Trace::DumpRegion(region));

@@ -281,8 +267,28 @@ pub fn dump_task(base: u32, task: usize) -> Result<u8, DumpAgentError> {
|addr, buf| unsafe { humpty::to_mem(addr, buf) },
) {
ringbuf_entry!(Trace::DumpRegionsFailed(e));
return Err(DumpAgentError::BadSegmentAdd);
Err(DumpAgentError::BadSegmentAdd)
} else {
Ok(())
}
};

// We need to ask the kernel which regions we should dump for this task,
// which we do by asking for each dump region by index. Note that
// get_task_dump_region is O(#regions) -- which makes this loop quadratic:
// O(#regions * #dumpable). Fortunately, these numbers are very small: the
// number of regions is generally 3 or less (and -- as of this writing --
// tops out at 7), and the number of dumpable regions is generally just one
// (two when including the task TCB, but that's constant time to extract).
// So this isn't as bad as it potentially looks (and boils down to two
// iterations over all regions in a task) -- but could become so if these
// numbers become larger.
add_dump_region(kipc::get_task_desc_region(task))?;
for ndx in 0..=usize::MAX {
let Some(region) = kipc::get_task_dump_region(task, ndx) else {
break;
};
add_dump_region(region)?;
}

dump_task_run(area.region.address, task)?;
@@ -323,24 +329,54 @@ pub fn dump_task_region(
let mem = start..end;
let mut okay = false;

for ndx in 0..=usize::MAX {
// This is Accidentally Quadratic; see the note in `dump_task`
let Some(region) = kipc::get_task_dump_region(task, ndx) else {
break;
};

if in_dump_area(region.base, region.size) {
continue;
}

ringbuf_entry!(Trace::DumpRegion(region));
// Get the task descriptor region (in kernel memory)
let desc = kipc::get_task_desc_region(task);
// Note: we implicitly trust kipc won't give us a region that wraps,
// unlike untrusted user data from the request that we checked above.
let desc_region = desc.base..desc.base + desc.size;
if mem.start >= desc_region.start && mem.end <= desc_region.end {
// We are reading from the kernel descriptor region, great job
okay = true;
} else {
// Otherwise, iterate over task regions. We will start with `mem`
// representing the full memory range to be dumped, then adjust
// `mem.start` as we find overlaps within the task dump regions. If
// `mem` becomes empty, then we know that it is valid (because the
// entire `mem` region has overlapped with task dump regions).
//
// Note: we also implicitly trust that kipc gives us regions which are
// in sorted order by base address.
let mut mem = start..end;
let mut started = false;
for ndx in 0..=usize::MAX {
// This is Accidentally Quadratic; see the note in `dump_task`
let Some(region) = kipc::get_task_dump_region(task, ndx) else {
Contributor:

nit/out of scope refactor suggestion: It feels like we probably could stand to have a helper for "iterate over dump regions", to avoid manual calls like this. It would be neat to be able to do:

for region in kipc::iter_task_dump_regions(task).skip(1) {
  // ...
}

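For what it's worth, the suggested helper is nearly a one-liner with `core::iter::from_fn`. The sketch below is hypothetical: the `fetch` closure stands in for the real `kipc::get_task_dump_region(task, ndx)` call so the shape can be exercised without the kernel, and `iter_dump_regions` is an assumed name, not an existing API.

```rust
// Sketch of an iter_task_dump_regions-style helper, generic over the fetch
// function. fetch(ndx) stands in for kipc::get_task_dump_region(task, ndx);
// iteration stops at the first None, matching the manual break-on-None loops.
fn iter_dump_regions<R>(
    fetch: impl Fn(usize) -> Option<R>,
) -> impl Iterator<Item = R> {
    let mut ndx = 0;
    core::iter::from_fn(move || {
        let region = fetch(ndx)?;
        ndx += 1;
        Some(region)
    })
}

fn main() {
    // Fake region table: (base, size) pairs in ascending base order.
    let table =
        [(0x2000_0000u32, 0x400u32), (0x2000_1000, 0x200), (0x2000_2000, 0x100)];
    let bases: Vec<u32> = iter_dump_regions(|ndx| table.get(ndx).copied())
        .map(|(base, _size)| base)
        .collect();
    assert_eq!(bases, vec![0x2000_0000, 0x2000_1000, 0x2000_2000]);
    println!("iterated {} regions", bases.len());
}
```

With the wrappers in this PR, index 0 of `get_task_dump_region` already excludes the descriptor, so the `.skip(1)` in the suggestion would no longer be needed.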
break;
};

if in_dump_area(region.base, region.size) {
continue;
}

// Note: we implicitly trust kipc won't give us a region that wraps,
// unlike untrusted user data from the request that we checked above.
let region = region.base..region.base + region.size;
if mem.start >= region.start && mem.end <= region.end {
okay = true;
break;
ringbuf_entry!(Trace::DumpRegion(region));

// Slide `mem.start` based on overlap
let region = region.base..region.base + region.size;
if region.contains(&mem.start) {
mem.start = region.end.min(mem.end);
started = true;
if mem.start == mem.end {
okay = true;
break;
}
} else if region.start > mem.start
|| (started && region.start < mem.start)
{
// If we are beyond the start of our `mem` region (or have
// started overlapping but this region does not overlap), then
// there are no more overlaps and we can bail out immediately.
break;
}
Contributor (@jamesmunns, Apr 23, 2026):

Misc perf nit: it might be worth tracking if we've STARTED matching on items, and bail here if started && region.start < mem.start, because then we've found non-contiguous overlaps

Assuming:

  • region(1): 0x1000..0x2000
  • region(2): 0x3000..0x4000

And the user asked to dump 0x1000..0x3800. Currently I think they would never get okay = true: we'd take 0x1000..0x2000, leaving 0x2000..0x3800, but 0x3000..0x4000 doesn't contain 0x2000, so we keep iterating over all remaining regions.

This could probably also be:

} else if (region.start != start) && region.start < mem.start {
    break;
}

Collaborator (Author):

Seems reasonable! N is small here (which is why we don't mind quadratic behavior), but each call also incurs a context switch into the kernel, so I support cheap early-exit optimizations.

}
}
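The sliding check in `dump_task_region` (with the early exit from the review thread folded in) can be exercised on its own. This is a model over plain `Range<u32>` values, not the jefe code itself: it assumes regions arrive sorted by ascending base, and it abstracts away the descriptor-region special case and the `in_dump_area` filtering.

```rust
use core::ops::Range;

// Model of the dump_task_region validation loop: does `mem` lie entirely
// within the union of `regions`, which are sorted (ascending) by base and
// may abut? Includes the started-based early exit suggested in review.
fn covered(mut mem: Range<u32>, regions: &[Range<u32>]) -> bool {
    let mut started = false;
    for region in regions {
        if region.contains(&mem.start) {
            // Slide mem.start past this region's overlap.
            mem.start = region.end.min(mem.end);
            started = true;
            if mem.start == mem.end {
                return true;
            }
        } else if region.start > mem.start
            || (started && region.start < mem.start)
        {
            // A gap before mem.start (or a non-contiguous region after we
            // started matching): no later region can close it, so bail.
            return false;
        }
    }
    false
}

fn main() {
    let regions = [0x1000..0x2000, 0x3000..0x4000];
    // Fully inside one region: accepted.
    assert!(covered(0x1800..0x2000, &regions));
    // Spans the gap at 0x2000..0x3000 (the reviewer's example): rejected.
    assert!(!covered(0x1000..0x3800, &regions));
    // Two abutting regions together cover a spanning request.
    let abutting = [0x1000..0x2000, 0x2000..0x3000];
    assert!(covered(0x1000..0x2800, &abutting));
    println!("coverage checks pass");
}
```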
