OSIDB-4821: Logs for revert issues (#1225)
Conversation
Force-pushed ee95605 to 765138e (Compare)
Force-pushed 6b12290 to 7a73290 (Compare)
```python
LOG_IGNORE_AUTO_TIMESTAMPS = get_env(
    "LOG_IGNORE_AUTO_TIMESTAMPS", default="False", is_bool=True
)
```
We are currently using pydantic_settings for new variables; please update this accordingly.
```python
logger.info(
    f"WARNING saved outdated model instance {self.__class__.__name__}, id: {self.pk}, db_updated_dt: {db_self.updated_dt}, self_updated_dt: {self.updated_dt}"
)
```
Why not use logger.warning() instead of writing "WARNING" into the message?
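The suggestion in code, as a minimal sketch (the helper name and message fields mirror the quoted log line but are illustrative): emitting at WARNING level lets handlers and filters configured at `level=WARNING` catch the record, instead of an info-level record with "WARNING" pasted into the text.

```python
import logging

logger = logging.getLogger(__name__)


def log_outdated_save(cls_name, pk, db_updated_dt, self_updated_dt):
    # Emit at WARNING level so level-based filtering works, rather than
    # logger.info() with the word "WARNING" embedded in the message.
    logger.warning(
        "saved outdated model instance %s, id: %s, db_updated_dt: %s, self_updated_dt: %s",
        cls_name,
        pk,
        db_updated_dt,
        self_updated_dt,
    )
```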
```python
from django.core.exceptions import ObjectDoesNotExist, ValidationError
from django.db import IntegrityError, models, transaction
from django.utils import timezone
```
Please squash/fixup this commit with the one where this change was introduced.
```python
            # cut off the microseconds to allow mid-air
            # collision comparison as API works in seconds
            self.updated_dt = timezone.now().replace(microsecond=0)
        elif LOG_IGNORE_AUTO_TIMESTAMPS:
```
I believe we want to fail updates on outdated models instead of logging them. Is there any scenario where we still want them to happen? Have I misunderstood what this is doing?
Yes, the update still goes through, but the sync_manager process skips this validation (auto_timestamps=False).
This is to log and better understand how the issue is happening.
For context, this is what I saw:
- A user makes a request -> changes state A to B
- An async process runs -> changes B back to A
The theory is that the async process gets a stale object and, because it skips the auto_timestamps check, the save continues.
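The race described here can be sketched with a toy model in plain Python (no Django). `FakeModel` and its in-memory `db` are illustrative stand-ins for the mixin's `save()` and the real `updated_dt` mid-air collision check; only the bypass mechanism mirrors the description above.

```python
from datetime import datetime, timedelta


class FakeModel:
    """Toy stand-in: save() refuses stale writes unless auto_timestamps
    is disabled, mirroring the sync_manager bypass described above."""

    db = {}  # pk -> (state, updated_dt), simulating the database row

    def __init__(self, pk, state, updated_dt):
        self.pk, self.state, self.updated_dt = pk, state, updated_dt

    @classmethod
    def fetch(cls, pk):
        state, ts = cls.db[pk]
        return cls(pk, state, ts)

    def save(self, auto_timestamps=True):
        _, db_ts = type(self).db[self.pk]
        if auto_timestamps and self.updated_dt < db_ts:
            raise ValueError("mid-air collision: instance is outdated")
        type(self).db[self.pk] = (self.state, db_ts + timedelta(seconds=1))


t0 = datetime(2024, 1, 1)
FakeModel.db[1] = ("A", t0)

stale = FakeModel.fetch(1)         # async process loads the row (state A)
user = FakeModel.fetch(1)
user.state = "B"
user.save()                        # user request: A -> B, check passes

stale.save(auto_timestamps=False)  # bypass lets the stale copy revert B -> A
```

With the check enabled, the stale save would have raised instead of silently reverting the user's change.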
I'm logging it to help us debug, but the real change is in the reply with the raise. I wanted to show what changed the model so the user could better understand it, but that needs the history, which had the performance issue (not sure if it's fixed now).
The second-best solution I came up with was to explain the outdated model to the user and then use the logs to find out who or what changed it.
Force-pushed 7a73290 to 8f88790 (Compare)
📝 Walkthrough (Summary by CodeRabbit)

The pull request introduces a new …

Changes

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@osidb/mixins.py`:
- Line 19: Replace the direct import of LOG_IGNORE_AUTO_TIMESTAMPS with use of
the settings object so test overrides work: remove the "from config.settings
import LOG_IGNORE_AUTO_TIMESTAMPS" import and update any references to
LOG_IGNORE_AUTO_TIMESTAMPS in osidb/mixins.py to use
settings.LOG_IGNORE_AUTO_TIMESTAMPS (settings is already imported on line 10);
ensure all functions/classes in this file that check LOG_IGNORE_AUTO_TIMESTAMPS
(e.g., any mixin methods) reference settings.LOG_IGNORE_AUTO_TIMESTAMPS instead.
In `@osidb/signals.py`:
- Around line 35-43: The info-level signal logs in log_signal_update are
suppressed because the package logger is set to WARNING; to make these
diagnostics visible without changing global logging behavior, configure the
signals module logger to INFO: in osidb.signals ensure you obtain the module
logger (logging.getLogger(__name__)) and call setLevel(logging.INFO) for that
logger so log_signal_update, signal_parent_save and other functions in this
module emit INFO records while preserving the existing package/root logger
settings.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro Plus
Run ID: 0d72be77-b306-4f3b-ac7a-6d1fa38c981f
📒 Files selected for processing (3)
config/settings.py, osidb/mixins.py, osidb/signals.py
```python
from django.db import IntegrityError, models, transaction
from django.utils import timezone

from config.settings import LOG_IGNORE_AUTO_TIMESTAMPS
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Verify potential impact of import-time settings capture and override usage.

# 1) Find direct imports from config.settings in runtime modules.
rg -n 'from config\.settings import ' -g '!config/settings.py'

# 2) Find settings overrides/reads related to LOG_IGNORE_AUTO_TIMESTAMPS.
rg -n 'override_settings\(|LOG_IGNORE_AUTO_TIMESTAMPS|settings\.LOG_IGNORE_AUTO_TIMESTAMPS'
```

Repository: RedHatProductSecurity/osidb
Length of output: 1814
🌐 Web query:
Django override_settings direct import vs settings object access
💡 Result:

In Django, always use `from django.conf import settings` to access settings in application code, rather than importing directly from your settings module (e.g., `from myproject.settings import SOME_SETTING`). Direct imports capture the values at import time and bypass Django's settings system: they do not reflect runtime changes, such as those made by `override_settings` in tests, so code using them reads the original values even under test overrides, causing tests to fail unexpectedly. Using `django.conf.settings` accesses the dynamic `LazySettings` object, which supports overrides and other Django features like signals for setting changes. The official Django documentation explicitly recommends this pattern: "In your Django apps, use settings by importing the object django.conf.settings." Direct module imports are discouraged as they break with overrides and different settings modules. Example of the problem:

```python
# Bad - direct import (fails with override_settings)
from myproject.settings import PAGE_SIZE

# Good - uses django.conf.settings (works with overrides)
from django.conf import settings
page_size = settings.PAGE_SIZE
```

This applies universally, including in tests, utils modules, views, and anywhere settings are read. For module-level globals depending on settings, avoid import-time evaluation or use `importlib.reload` plus patching as a workaround.
Citations:
- 1: https://stackoverflow.com/questions/46085879/override-settings-for-django-test-does-not-seem-to-work/46090245
- 2: https://adamj.eu/tech/2022/11/24/django-settings-patterns-to-avoid/
- 3: https://docs.djangoproject.com/en/4.2/topics/settings
- 4: https://docs.djangoproject.com/en/6.0/topics/testing/tools/
- 5: https://docs.djangoproject.com/en/4.2/topics/testing/overview/
- 6: https://docs.djangoproject.com/en/4.0/topics/testing/advanced/
- 7: https://docs.djangoproject.com/en/5.0/topics/testing/tools
- 8: https://docs.djangoproject.com/en/stable/topics/testing/tools/
- 9: https://delftstack.com/howto/django/django-import-settings
Remove direct import and use the settings object to enable test overrides
Line 19 directly imports LOG_IGNORE_AUTO_TIMESTAMPS, which freezes the value at import time and breaks Django's override_settings in tests. Since settings is already imported on line 10, use settings.LOG_IGNORE_AUTO_TIMESTAMPS instead.
♻️ Refactor

```diff
-from config.settings import LOG_IGNORE_AUTO_TIMESTAMPS
@@
-        elif LOG_IGNORE_AUTO_TIMESTAMPS:
+        elif settings.LOG_IGNORE_AUTO_TIMESTAMPS:
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
from config.settings import LOG_IGNORE_AUTO_TIMESTAMPS
```
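The import-time freeze that the review describes can be demonstrated without Django at all. `fake_settings` below is a hypothetical stand-in for `config.settings`; reassigning its attribute plays the role of `override_settings` in a test.

```python
# Minimal illustration of why `from module import NAME` freezes the value:
# the importer copies a binding to the object, not a live lookup, so later
# reassignment in the module is not seen by the importing code.
import sys
import types

settings_mod = types.ModuleType("fake_settings")  # stand-in for config.settings
settings_mod.LOG_IGNORE_AUTO_TIMESTAMPS = False
sys.modules["fake_settings"] = settings_mod

from fake_settings import LOG_IGNORE_AUTO_TIMESTAMPS  # value copied: False

# ...later, a test "overrides" the setting (like override_settings):
settings_mod.LOG_IGNORE_AUTO_TIMESTAMPS = True

frozen = LOG_IGNORE_AUTO_TIMESTAMPS            # still False (import-time copy)
live = settings_mod.LOG_IGNORE_AUTO_TIMESTAMPS  # True (attribute access, like settings.NAME)
```

This is exactly why `settings.LOG_IGNORE_AUTO_TIMESTAMPS` works with test overrides while the direct import does not.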
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@osidb/mixins.py` at line 19, Replace the direct import of
LOG_IGNORE_AUTO_TIMESTAMPS with use of the settings object so test overrides
work: remove the "from config.settings import LOG_IGNORE_AUTO_TIMESTAMPS" import
and update any references to LOG_IGNORE_AUTO_TIMESTAMPS in osidb/mixins.py to
use settings.LOG_IGNORE_AUTO_TIMESTAMPS (settings is already imported on line
10); ensure all functions/classes in this file that check
LOG_IGNORE_AUTO_TIMESTAMPS (e.g., any mixin methods) reference
settings.LOG_IGNORE_AUTO_TIMESTAMPS instead.
```python
def log_signal_update(instance, sender, handler_name, *, flaw=None):
    logger.info(
        "signal_parent_save handler=%s sender=%s instance_model=%s instance_pk=%s flaw_pk=%s",
        handler_name,
        getattr(sender, "__name__", str(sender)),
        type(instance).__name__,
        getattr(instance, "pk", None),
        flaw.pk if flaw is not None else None,
    )
```
Info-level signal logs are currently suppressed by logger configuration
Line 37 uses logger.info, but with config/settings.py setting osidb logger level to WARNING, osidb.signals won’t emit these records by default. That makes the new diagnostics effectively invisible in normal runtime.
🔧 Proposed fix (configure `osidb.signals` for INFO while preserving existing logger behavior)

```diff
--- a/config/settings.py
+++ b/config/settings.py
@@
     "loggers": {
@@
         "osidb": {"level": "WARNING", "handlers": ["console"], "propagate": False},
+        "osidb.signals": {
+            "level": "INFO",
+            "handlers": ["console"],
+            "propagate": False,
+        },
         "api_req": {"level": "INFO", "handlers": ["console"], "propagate": False},
```

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@osidb/signals.py` around lines 35 - 43, The info-level signal logs in
log_signal_update are suppressed because the package logger is set to WARNING;
to make these diagnostics visible without changing global logging behavior,
configure the signals module logger to INFO: in osidb.signals ensure you obtain
the module logger (logging.getLogger(__name__)) and call setLevel(logging.INFO)
for that logger so log_signal_update, signal_parent_save and other functions in
this module emit INFO records while preserving the existing package/root logger
settings.
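As a sanity check of the approach described in the prompt, the sketch below shows a child logger lowered to INFO emitting records while a sibling that inherits the parent's WARNING level stays silent. The logger names mirror the project's, but the handler wiring is illustrative.

```python
import logging

# Collect emitted records so the behavior is observable.
records = []
handler = logging.Handler()
handler.emit = records.append

# Parent stays at WARNING, as in config/settings.py.
parent = logging.getLogger("osidb")
parent.setLevel(logging.WARNING)
parent.addHandler(handler)
parent.propagate = False

# Child lowered to INFO: its own level governs filtering, so its INFO
# records are created and propagate up to the parent's handler.
child = logging.getLogger("osidb.signals")
child.setLevel(logging.INFO)

child.info("signal_parent_save handler=%s", "flaw_save")  # emitted
logging.getLogger("osidb.mixins").info("suppressed")      # dropped: inherits WARNING
```

Note that propagation only checks the originating logger's effective level, which is why lowering the child is enough without touching the parent.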
I'm going to close this because it's also covered in PR #1213, as these logs would conflict with that PR.
It adds logs for signals and reverts. The idea is to get a better understanding of why some data in the flaw is being reverted.
After checking the code and the logs, a very likely possibility is that a stale instance is being saved, coming either from a signal save or from a subprocess that bypasses the updated_dt check.
Closes OSIDB-4821