fix(core): Flush pending writes before mapAsync, if needed #9307

Base branch: trunk
```diff
@@ -264,6 +264,8 @@ pub enum BufferAccessError {
     Failed,
     #[error(transparent)]
     DestroyedResource(#[from] DestroyedResourceError),
+    #[error("An error occurred while flushing pending writes to the buffer: {0}")]
+    QueueSubmit(String),
     #[error("Buffer is already mapped")]
     AlreadyMapped,
     #[error("Buffer map is pending")]
```
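The new `QueueSubmit(String)` variant carries the text of the underlying queue-submit failure into the error's `Display` output. A minimal std-only sketch (no `thiserror`, and simplified stand-in types, not the wgpu implementation) of the same pattern:

```rust
use std::fmt;

// Simplified stand-in for wgpu-core's `BufferAccessError`; the message
// format mirrors the `#[error(...)]` attribute in the diff above.
#[derive(Debug)]
enum BufferAccessError {
    AlreadyMapped,
    QueueSubmit(String),
}

impl fmt::Display for BufferAccessError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::AlreadyMapped => write!(f, "Buffer is already mapped"),
            Self::QueueSubmit(msg) => write!(
                f,
                "An error occurred while flushing pending writes to the buffer: {msg}"
            ),
        }
    }
}

fn main() {
    let err = BufferAccessError::QueueSubmit("device lost".to_string());
    assert_eq!(
        err.to_string(),
        "An error occurred while flushing pending writes to the buffer: device lost"
    );
}
```

In the actual crate the `#[error("...: {0}")]` attribute generates an equivalent `Display` impl via `thiserror`.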
|
@@ -329,6 +331,8 @@ impl WebGpuError for BufferAccessError { | |
| Self::InvalidResource(e) => e.webgpu_error_type(), | ||
| Self::DestroyedResource(e) => e.webgpu_error_type(), | ||
|
|
||
| Self::QueueSubmit(_) => ErrorType::Internal, | ||
|
Member
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. The spec allows the internal variant to only be returned by pipeline creation. |
||
|
|
||
| Self::Failed | ||
| | Self::AlreadyMapped | ||
| | Self::MapAlreadyPending | ||
|
|
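The hunk above classifies the new variant for WebGPU error reporting: the unexpected queue-submit failure maps to `Internal`, while the pre-existing mapping-state errors are grouped together (in wgpu they map to a validation-style error type). A sketch of that classification, with simplified stand-ins for wgpu's `ErrorType` and the assumption that the grouped variants resolve to `Validation`:

```rust
// Simplified stand-ins; not the wgpu-core types.
#[derive(Debug, PartialEq)]
enum ErrorType {
    Validation,
    Internal,
}

#[derive(Debug)]
enum BufferAccessError {
    AlreadyMapped,
    MapAlreadyPending,
    QueueSubmit(String),
}

fn webgpu_error_type(err: &BufferAccessError) -> ErrorType {
    match err {
        // Unexpected failure while flushing pending writes: internal error.
        BufferAccessError::QueueSubmit(_) => ErrorType::Internal,
        // Misuse of the mapping API: validation error.
        BufferAccessError::AlreadyMapped | BufferAccessError::MapAlreadyPending => {
            ErrorType::Validation
        }
    }
}

fn main() {
    assert_eq!(
        webgpu_error_type(&BufferAccessError::QueueSubmit("oops".into())),
        ErrorType::Internal
    );
    assert_eq!(
        webgpu_error_type(&BufferAccessError::AlreadyMapped),
        ErrorType::Validation
    );
}
```

The review comment above flags the design question: per the spec, `Internal` is reserved for pipeline creation, which is why this mapping drew discussion.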
```diff
@@ -654,50 +658,72 @@ impl Buffer {
             return Err((op, e.into()));
         }

-        {
+        // `submit_index` will be set to:
+        // - `Some(index)`, if there is a submission the mapping operation must wait for.
+        // - `Some(0)`, if we have a queue and there is no submission to wait for.
+        // - `None`, if we don't have a queue.
+        let submit_index = {
             let snatch_guard = device.snatchable_lock.read();
             if let Err(e) = self.check_destroyed(&snatch_guard) {
                 return Err((op, e.into()));
             }
-        }

-        {
-            let map_state = &mut *self.map_state.lock();
-            *map_state = match *map_state {
-                BufferMapState::Init { .. } | BufferMapState::Active { .. } => {
-                    return Err((op, BufferAccessError::AlreadyMapped));
-                }
-                BufferMapState::Waiting(_) => {
-                    return Err((op, BufferAccessError::MapAlreadyPending));
-                }
-                BufferMapState::Idle => BufferMapState::Waiting(BufferPendingMapping {
-                    range: offset..end_offset,
-                    op,
-                    _parent_buffer: self.clone(),
-                }),
-            };
-        }
+            {
+                let map_state = &mut *self.map_state.lock();
+                *map_state = match *map_state {
+                    BufferMapState::Init { .. } | BufferMapState::Active { .. } => {
+                        return Err((op, BufferAccessError::AlreadyMapped));
+                    }
+                    BufferMapState::Waiting(_) => {
+                        return Err((op, BufferAccessError::MapAlreadyPending));
+                    }
+                    BufferMapState::Idle => BufferMapState::Waiting(BufferPendingMapping {
+                        range: offset..end_offset,
+                        op,
+                        _parent_buffer: self.clone(),
+                    }),
+                };
+            }
+            // Review note: the code previously dropped the snatch lock here. I don't see how
+            // that was correct, if we drop it then the buffer could be destroyed.
+
+            if let Some(queue) = device.get_queue().as_ref() {
+                Some(match queue.flush_writes_for_buffer(self, snatch_guard) {
+                    Err(err) => {
+                        let state = mem::replace(&mut *self.map_state.lock(), BufferMapState::Idle);
+                        let BufferMapState::Waiting(BufferPendingMapping { op, .. }) = state else {
+                            unreachable!();
+                        };
+                        return Err((op, err));
+                    }
+                    Ok(Some(submit_index)) => submit_index,
+                    Ok(None) => queue.lock_life().map(self).unwrap_or(0),
+                })
+            } else {
+                None
+            }
+        };

-        // TODO: we are ignoring the transition here, I think we need to add a barrier
-        // at the end of the submission
+        // TODO(https://github.com/gfx-rs/wgpu/issues/9306): we are ignoring the transition
+        // here, I think we need to add a barrier at the end of the submission
         device
             .trackers
             .lock()
             .buffers
             .set_single(self, internal_use);

-        let submit_index = if let Some(queue) = device.get_queue() {
-            queue.lock_life().map(self).unwrap_or(0) // '0' means no wait is necessary
+        if let Some(index) = submit_index {
+            Ok(index)
         } else {
             // We don't have a queue, so go ahead and map the buffer.
             // We can safely unwrap below since we just set the `map_state` to `BufferMapState::Waiting`.
             let (mut operation, status) = self.map(&device.snatchable_lock.read()).unwrap();
             if let Some(callback) = operation.callback.take() {
                 callback(status);
             }
-            0
-        };
-
-        Ok(submit_index)
+            Ok(0)
+        }
     }
 }

     pub fn get_mapped_range(
```

Member (on lines +671 to +672, the review note about the snatch lock):
> I think it was OK because we were not using the raw hal buffer for any operation, but we were changing state on the core buffer, so maybe we could have gotten into an unexpected situation.
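The `map_state` transition in the hunk above is a small state machine: a map request is accepted only from `Idle`, and every other state produces the matching `BufferAccessError`. A minimal sketch with simplified stand-ins for wgpu-core's `BufferMapState` and `BufferAccessError`:

```rust
// Simplified stand-ins; the real types carry ranges, callbacks, and
// a parent-buffer reference that are omitted here.
#[derive(Debug, PartialEq)]
enum MapState {
    Idle,
    Waiting,
    Active,
}

#[derive(Debug, PartialEq)]
enum MapError {
    AlreadyMapped,
    MapAlreadyPending,
}

// Accept a map request only from `Idle`, moving to `Waiting`;
// reject it from any other state, leaving the state unchanged.
fn request_map(state: &mut MapState) -> Result<(), MapError> {
    match state {
        MapState::Active => Err(MapError::AlreadyMapped),
        MapState::Waiting => Err(MapError::MapAlreadyPending),
        MapState::Idle => {
            *state = MapState::Waiting;
            Ok(())
        }
    }
}

fn main() {
    let mut state = MapState::Idle;
    assert!(request_map(&mut state).is_ok());
    assert_eq!(state, MapState::Waiting);
    // A second request while the first is still pending is rejected.
    assert_eq!(request_map(&mut state), Err(MapError::MapAlreadyPending));
}
```

The diff's change is not to these transitions themselves but to performing them while the snatch lock is still held, so the buffer cannot be destroyed between the state change and the flush.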
> I think we should either panic here or add a new submit function to make it impossible for these variants to be returned.

> Refactoring the submit function so it can be used by a new submit function that doesn't return these variants seems a lot more tricky.

> Maybe it's possible to take this block out and call it from `map_async` (wgpu/wgpu-core/src/device/queue.rs, lines 1375 to 1470 in a58f51a). Below it we call `device.maintain` to clean up resources of previous submissions, but I don't think we should do that in `map_async`.

> A heads-up: @atlv24 has pulled this out in #9361.
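The `submit_index` convention the diff introduces can be summarized in isolation: `Some(index)` means the mapping must wait for that submission, `Some(0)` means there is a queue but no wait is necessary, and `None` means there is no queue so the buffer can be mapped immediately. A sketch of that dispatch (illustrative names, not wgpu API):

```rust
// What the caller should do for each `submit_index` value computed
// in the diff above. `Action` is a hypothetical stand-in.
#[derive(Debug, PartialEq)]
enum Action {
    WaitForSubmission(u64),
    NoWaitNecessary,
    MapImmediately,
}

fn resolve(submit_index: Option<u64>) -> Action {
    match submit_index {
        Some(0) => Action::NoWaitNecessary,
        Some(index) => Action::WaitForSubmission(index),
        None => Action::MapImmediately,
    }
}

fn main() {
    assert_eq!(resolve(None), Action::MapImmediately);
    assert_eq!(resolve(Some(0)), Action::NoWaitNecessary);
    assert_eq!(resolve(Some(42)), Action::WaitForSubmission(42));
}
```

This mirrors the final `if let Some(index) = submit_index` branch in the diff, where the no-queue path runs the map callback directly and returns `Ok(0)`.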