Add filtering and sorting for custom fields #773
daften wants to merge 16 commits into Donkie:master from
Conversation
|
Is there something I can do to move this forward or make this ready for consideration? :) |
|
I've cloned daften's repo and I'm using this to filter on extra.nfc_id created by FilaMan. Works great to retrieve the spool ID to activate the spool in Moonraker. |
|
+1 for this to merge into master :) |
|
Something I missed before and noticed when starting to rebase: the integration tests for Postgres fail. There were also significant issues in the tests as written. I've fixed the tests, but haven't had time to fix the Postgres issues, which are caused by json_extract. I'll come back to that at a later time. |
Force-pushed from 1ae07d5 to f545ae5
|
Actually, I was able to fix it; the tests seem to run well and I was able to rebase everything on master. All feedback welcome :) |
Force-pushed from f545ae5 to 33c669e
|
I tested some more, fixed some bugs, and added test coverage for more scenarios :) |
|
@akira69 I think you mean #846 instead of #856? I also don't see in your PRs what is encompassed here: searching on any custom field added by the user. You added a lot of other information in structured data, but this seems more generic (unless I'm overlooking that part). Can you point me to where your PRs encompass what's in this PR and why this one would be superfluous? |
Yes, #846; I fixed that in the comment. |
|
@akira69 while your intentions are noble, I have 2 small issues with your approach:
So I'm wondering what your goal is with those big PRs. You comment here that your PR makes mine no longer necessary, but you didn't even test your functionality. |
|
You're right - I was worried about the scale of the changes as they grew. Maybe there's a way to separate the different features/improvements into individual PRs without too much work; that's probably the best approach, especially for testing. I will consider it and see what I can do to simplify. Unfortunately I didn't look at what was already done in the PR list - an omission on my part, not an intention to bulldoze over what's already done. It would have been smarter to build upon it. Nevertheless, the implementation seems similar to your approach. While there's no intention to disrespect, it's easy to see how it looks that way. The scope-creep express was over-eager; lesson learned. I've updated the PR for now to reference your code as prior art. At this point, rebasing onto #773 and building up looks very difficult. Let's see. |
|
I appreciate the feedback and understanding. Don't let my (hopefully constructive) feedback discourage you from contributing; even with mistakes and misunderstandings in communication, it's valuable to contribute :) The main thing I'd still advise is to split it into more granular items, if that's possible, mainly because I have contributed to and maintained some projects, and reviewing big chunks can be very cumbersome. I'm closing off for today, have a great night @akira69 |
|
@daften Hello my friend! I speak a bit of German and believe you are German. Do you live in Germany? I lived in Germany for three years, near Nuremberg. Now I'm in Chicago. But English is better for me: check #858. Therefore the suggestion is to fold commit
Why this matters:
Scope is intentionally small (2 files) and can be cherry-picked cleanly from
If this is folded into #773, #858 can be closed as superseded. |
|
And, by the way, I've split up my 2 or 3 PRs into a whole bunch of separate smaller ones - thanks for the tips |
|
@akira69 I've cherry-picked the commit you indicated and also fixed the linting issues that were present in the frontend because of all the commits. I wasn't able to validate it, it's midnight here now, but I'm trusting what you wrote :) Thanks for notifying me so I can update this PR, I really appreciate it! I won't close any other PR (#858); I'll leave it to you to decide on that and do it. I'm also seeing the separate PRs, which I think makes things easier, I hope at least. Good luck in Chicago, I hope you can stay safe; with all the news I see, I worry sometimes. |
def get_field_table_for_entity(entity_type: Any) -> Type[models.Base]:
    """Get the field table class for a given entity type."""
    # Import here to avoid circular imports
    from spoolman.extra_fields import EntityType
importing stuff like this is never the correct answer; fix the circular dependency issues properly by rearranging the files / adding new files if necessary
Fixed in 9711495.
I removed the local imports that were acting as the circular-import workaround and split the responsibilities into dedicated modules instead, so the query paths can import them directly without importing inside function bodies. Helper branch: akira69/Spoolman_Labels:feat/pr773-review-feedback.
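The usual way to break such a cycle is to move the shared definitions into a leaf module that imports nothing from the rest of the project. A minimal sketch of that idea, with hypothetical module and member names (the actual Spoolman layout may differ):

```python
# entity_types.py - hypothetical leaf module: it imports only the standard
# library, so any other module can import it without creating a cycle.
from enum import Enum


class EntityType(Enum):
    spool = "spool"
    filament = "filament"
    vendor = "vendor"


# Both the query layer and the extra-fields module can now do
# `from entity_types import EntityType` at the top of the file,
# removing the need for imports inside function bodies.
```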
value_parts = value.split(",")

# Handle filtering for empty values
if any(p == "<empty>" or len(p) == 0 for p in value_parts):
empty strings are used to filter empty values; this "<empty>" is an unnecessary new concept
Fixed across 9711495 and 3208f8d.
I kept <empty> only as the dropdown label for readability, but restored the backend/API contract to the existing empty-string semantics. So the UI label is still <empty>, but the actual filter value sent to the API is now "".
# Condition A subquery
empty_conditions = [
    field_table.value.is_(None),
    field_table.value == "null",
see add_where_clause_str on how empty values should be handled to stay consistent with the rest of the API
Fixed in 9711495.
The extra-field path now follows the same empty-value semantics as the rest of the API: empty string means missing or unset value, including both missing rows and effectively empty/null stored values.
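That shared contract can be captured in one helper so every entity path treats emptiness identically. A sketch under the assumption that extra-field values are stored JSON-encoded; the helper name is hypothetical:

```python
def is_empty_stored_value(raw) -> bool:
    # An extra-field value counts as empty when the row is missing (raw is
    # None after an outer join), the column holds SQL NULL / JSON null, or
    # it holds a JSON-encoded empty string.
    return raw is None or raw in ("null", '""', "")
```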
try:
    conditions.append(field_table.value == json.dumps(int(value_part)))
except ValueError:
    pass
silently failing on a bad value is not good API design
Fixed in 9711495.
Invalid custom-field filter values no longer get ignored or silently coerced. The API now returns 400 for invalid integer, float, boolean, and range inputs. I also added coverage for the invalid integer/boolean cases and verified the behavior locally against a running app.
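One way to surface bad input instead of swallowing it is to raise a dedicated error that the route layer maps to HTTP 400. This is an illustrative sketch, not Spoolman's actual error type or function name:

```python
import json


class InvalidFilterValueError(ValueError):
    """Raised for malformed filter values; a route handler would map this to HTTP 400."""


def parse_int_filter(value_part: str) -> str:
    # JSON-encode the integer so it compares against the stored
    # json.dumps() representation of the field value.
    try:
        return json.dumps(int(value_part))
    except ValueError as exc:
        # Reject the token instead of silently ignoring it.
        raise InvalidFilterValueError(
            f"Invalid integer filter value: {value_part!r}"
        ) from exc
```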
# Get the field definition
from spoolman.extra_fields import EntityType, get_extra_fields

extra_fields = await get_extra_fields(db, EntityType.vendor)
get_extra_fields only needs to be run at most once per call to find; the same comment applies to spool.py and filament.py
Fixed in 9711495.
Each find() path now does a single get_extra_fields() lookup and reuses that metadata for both filtering and sorting instead of repeating the fetch in multiple branches.
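The shape of that change can be sketched as a single metadata fetch feeding both the filter plan and the sort plan. get_extra_fields is stubbed here and the real Spoolman signature may differ:

```python
import asyncio


async def get_extra_fields(db, entity_type):
    # Stub standing in for spoolman.extra_fields.get_extra_fields.
    return [
        {"key": "nfc_id", "field_type": "text"},
        {"key": "opened", "field_type": "boolean"},
    ]


async def find(db, entity_type, filters, sorts):
    # One metadata lookup per find() call, reused by every branch below
    # instead of re-fetching inside each filter/sort branch.
    fields_by_key = {f["key"]: f for f in await get_extra_fields(db, entity_type)}
    filter_plan = [(fields_by_key[k], v) for k, v in filters.items() if k in fields_by_key]
    sort_plan = [fields_by_key[k] for k in sorts if k in fields_by_key]
    return filter_plan, sort_plan
```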
    except ValueError:
        pass
elif field_type == ExtraFieldType.boolean:
    bool_value = value_part.lower() in ("true", "1", "yes")
only use "true" here, otherwise we add unnecessary ambiguity
Fixed in cb62dd6.
I tightened the boolean custom-field parser to accept explicit true / false only, so 1/0/yes/no are no longer accepted. I also added a regression check for the old ambiguous yes token.
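A strict parser for that contract might look like the following sketch (the function name is hypothetical):

```python
def parse_bool_filter(token: str) -> bool:
    # Accept only the explicit JSON-style tokens; "1", "0", "yes", "no"
    # are rejected to keep the API contract unambiguous.
    if token == "true":
        return True
    if token == "false":
        return False
    raise ValueError(f"Invalid boolean filter value: {token!r}")
```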
# Clean up
httpx.delete(f"{URL}/api/v1/field/vendor/vendor_tier").raise_for_status()
httpx.delete(f"{URL}/api/v1/vendor/{vendor_id1}").raise_for_status()
httpx.delete(f"{URL}/api/v1/vendor/{vendor_id2}").raise_for_status()
this is a good start, but I would like to see more tests here that cover all the features you've added and all details of those features: filtering for every type and sorting for every type, for spools, filaments, and vendors.
Expanded in 9711495.
I kept the existing spool/filament/vendor coverage and added review-focused cases for empty-value filtering semantics plus invalid numeric/boolean filters returning 400.
This pass also fixed a related correctness bug where paginated extra-field queries could report the wrong x-total-count because the count was being calculated before extra-field filters were applied.
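The fix generalizes: the total must be computed from the fully filtered query, not the base query. An in-memory sketch of the corrected ordering, with illustrative names only:

```python
def find_with_total(items, filters, *, limit, offset):
    # Apply every filter (including extra-field filters) BEFORE counting,
    # so the reported total matches the paginated result set.
    filtered = [item for item in items if all(f(item) for f in filters)]
    total = len(filtered)  # would become the x-total-count header value
    return filtered[offset:offset + limit], total
```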
|
linting issues need to be fixed also; see https://github.com/Donkie/Spoolman/wiki/Contribute#style on how to check them locally |
|
Quick context note: this is still @daften's PR. I'm not trying to take over the thread, just helping from the side with a Codex-assisted fix pass so the requested changes are easier to validate and cherry-pick. I split the follow-up into four commits on
Summary:
Helper PR: #893 I verified the updated state locally on March 25, 2026 with:
If @daften wants to cherry-pick directly: |
@Donkie replying to the March 16 lint note as well: the helper branch / PR now has local |
|
I'll check all this over the weekend. Thanks for the review Donkie and for the prep @akira69 ! Appreciate the joint effort a lot :) |
…e filter/sort

- Add comprehensive tests covering all 9 field types (text, integer, float, boolean, single-choice, multi-choice, datetime, integer_range, float_range) for filter and sort on spool, filament, and vendor entities
- Add invalid-filter 400 tests for float, integer_range, and float_range
- Fix integer_range/float_range filter to use LIKE pattern matching against Python's deterministic json.dumps output instead of fragile .contains()
- Add integer_range/float_range sort support via a @compiles helper (_JsonArrayFirstElement) that emits CAST(col AS JSON)->>0 on PostgreSQL and JSON_EXTRACT(col, '$[0]') on SQLite/MariaDB
- Add logger and __all__ to extra_fields.py (aligns with PR Donkie#893)
- All tests verified passing on postgres, sqlite, and mariadb
- Add text input filter dropdown for text fields (substring search)
- Add range input filter dropdowns (min/max) for integer, float, integer_range, and float_range fields
- Add datetime picker filter dropdown for datetime fields
- Fix boolean "No" filter to use empty value so it correctly matches unset/false fields
- Fix integer_range/float_range filter to use numeric comparisons instead of LIKE-based exact matching (stored_min >= filter_min, stored_max <= filter_max)
- Add range filter support for integer/float fields (min:max format)
- Add _JsonArraySecondElement cross-database helper for extracting second JSON array element
…ntics

Backend:
- Add _JsonArraySecondElement cross-database helper (mirrors _JsonArrayFirstElement)
- integer_range/float_range filter now uses numeric comparisons: stored_min >= filter_min and stored_max <= filter_max (was LIKE exact match)
- integer/float filter now supports min:max range format in addition to exact match: stored_value >= min and/or stored_value <= max

Frontend:
- integer and float fields now use a range filter dropdown (min/max inputs) instead of a single exact-value input

Tests:
- Fix integer_range/float_range spool and vendor tests broken by new >= semantics (filter values updated to discriminate between test entries)
- Add range filter tests (min only, max only, both) for integer and float fields
Introduce a _create_entity helper and @pytest.mark.parametrize("entity_type",
["spool", "filament", "vendor"]) to run each numeric field type test against
all three entity types from a single test function.
- Removes 12 individual tests (integer/float/integer_range/float_range × 3 entities)
- Adds 4 parametrized tests covering the same ground plus the previously missing
min-only / max-only range filter cases for filament and vendor
- Uses try/finally for cleanup so entities are always deleted even on assertion failure
Force-pushed from 126b199 to f718d8a
- Backend: add datetime range filter using '|' separator (ISO dates contain ':')
- Backend: restore integer/float range filter support (lost during attribution rewrite)
- Frontend: replace single DateTimePicker with From/To range pickers for datetime fields
The DateTimeRangeFilterDropdown was initializing pickers with dayjs.utc(), causing them to display and accept UTC times directly. The entry form shows local time and converts to UTC, so the filter was inconsistent: entering the same displayed time would produce a filter value 2h offset from what was stored (in UTC+2), making exact boundaries fail unexpectedly.

Fix: initialize picker values with dayjs() (local mode) so the filter picker behaves the same as the entry form: it shows local times and converts to UTC.

Tests: replace 3 separate entity-specific datetime tests with one parametrized test covering spool/filament/vendor, and add range filter cases (start|, |end, start|end).
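The '|' separator described in the commit can be sketched as a small parser. This illustrates the stated semantics and is not the PR's exact code:

```python
from datetime import datetime


def parse_datetime_range(value: str):
    # ISO timestamps contain ':', so '|' separates the bounds:
    # "start|end", "start|" (open end), "|end" (open start).
    start_s, _, end_s = value.partition("|")
    start = datetime.fromisoformat(start_s) if start_s else None
    end = datetime.fromisoformat(end_s) if end_s else None
    return start, end
```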
Replace 16 separate spool/filament/vendor tests for text, boolean, single-choice, multi-choice, and empty-filter with 5 parametrized tests that run against all three entity types. Each test is now defined once and executed 3 times, eliminating copy-paste and ensuring consistent coverage across all entity types.

Parametrize invalid-filter-value 400 tests across all entity types: the non-numeric-value error uses "Invalid integer/float range filter value" while the missing-colon error uses "Invalid range filter value". Both contain "range filter value", so use that as the assertion substring.
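The min:max parsing and the two error messages quoted in the commit can be sketched as follows; this is an illustration under the assumption that both integer and float fields are parsed as floats for comparison:

```python
def parse_numeric_range(value: str):
    # "min:max", "min:" (lower bound only), ":max" (upper bound only).
    if ":" not in value:
        raise ValueError(f"Invalid range filter value: {value!r}")
    lo_s, _, hi_s = value.partition(":")
    try:
        lo = float(lo_s) if lo_s else None
        hi = float(hi_s) if hi_s else None
    except ValueError as exc:
        raise ValueError(
            f"Invalid integer/float range filter value: {value!r}"
        ) from exc
    return lo, hi
```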
All three custom filter dropdowns (text, number range, datetime range) now match the built-in choice filter footer: a border-top separator, Reset as a link button on the left (disabled when no filter is active), and OK as a primary button on the right.
|
I have incorporated the proposed changes by @akira69 and added to them.
I think all feedback was covered, please let me know if anything is missing! Thanks @akira69 for the helping hand and @Donkie for the detailed feedback :) |

This PR adds filtering and sorting for custom fields in a generic way. It was tested in a limited capacity and doesn't encompass everything; e.g. filtering on text fields only works for filtering empty items.
Nevertheless, I think this is already a useful addition.
My main use case: I have a boolean extra field that I check when I open a roll. This allows me to filter for all open spools, for example.
The changes were done for all data types and several tests were written.
Disclaimer: I am not a Python developer and this was largely done using AI. Any feedback is more than welcome. At a high level this looks okay to me, but I most likely miss items that are important to get this perfect. I also had issues with circular imports that I worked around by not doing the imports in the header; feedback on this is definitely more than welcome.