# OSIDB-4821: Logs for revert issues. #1225
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Changes from all commits
File filter
Filter by extension
Conversations
Jump to
Diff view
Diff view
There are no files selected for viewing
| Original file line number | Diff line number | Diff line change | ||
|---|---|---|---|---|
|
|
```diff
@@ -16,6 +16,7 @@
 from django.db import IntegrityError, models, transaction
 from django.utils import timezone
+from config.settings import LOG_IGNORE_AUTO_TIMESTAMPS
```

**Contributor:** Please squash/fixup this commit with the one where this change was introduced.
**Review bot** (🛠️ Refactor suggestion, 🟠 Major): Remove the direct import and use `django.conf.settings` instead.

Verification script run against the repository:

```bash
#!/bin/bash
# Verify potential impact of import-time settings capture and override usage.
# 1) Find direct imports from config.settings in runtime modules.
rg -n 'from config\.settings import ' -g '!config/settings.py'
# 2) Find settings overrides/reads related to LOG_IGNORE_AUTO_TIMESTAMPS.
rg -n 'override_settings\(|LOG_IGNORE_AUTO_TIMESTAMPS|settings\.LOG_IGNORE_AUTO_TIMESTAMPS'
```

Background: in Django application code, always access settings through `from django.conf import settings` rather than importing directly from the settings module. A direct import copies the value at import time and bypasses Django's settings system, so it does not reflect runtime changes such as those made by `override_settings` in tests; code using direct imports reads the original value even under test overrides. Using `django.conf.settings` goes through the dynamic `LazySettings` object, which supports overrides and setting-change signals. Example of the problem:

```python
# Bad - direct import (fails with override_settings)
from myproject.settings import PAGE_SIZE

# Good - uses django.conf.settings (works with overrides)
from django.conf import settings
page_size = settings.PAGE_SIZE
```

Line 19 directly imports `LOG_IGNORE_AUTO_TIMESTAMPS` from `config.settings`, so test overrides of this flag will not take effect. Suggested change:

```diff
-from config.settings import LOG_IGNORE_AUTO_TIMESTAMPS
@@
-        elif LOG_IGNORE_AUTO_TIMESTAMPS:
+        elif settings.LOG_IGNORE_AUTO_TIMESTAMPS:
```
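The import-time capture problem the review describes can be reproduced without Django at all. This stdlib-only sketch uses a made-up module name (`config_settings`) as a stand-in for the real settings module:

```python
import sys
import types

# Stand-in for config/settings.py (module name is made up for this demo).
settings_mod = types.ModuleType("config_settings")
settings_mod.LOG_IGNORE_AUTO_TIMESTAMPS = False
sys.modules["config_settings"] = settings_mod

# `from config.settings import LOG_IGNORE_AUTO_TIMESTAMPS` copies the value
# at import time -- equivalent to this one-off read:
captured = sys.modules["config_settings"].LOG_IGNORE_AUTO_TIMESTAMPS

# A test override (roughly what override_settings does) changes it later:
settings_mod.LOG_IGNORE_AUTO_TIMESTAMPS = True

print(captured)  # the import-time copy is stale: still False
print(sys.modules["config_settings"].LOG_IGNORE_AUTO_TIMESTAMPS)  # a lazy read sees True
```

This is why `settings.LOG_IGNORE_AUTO_TIMESTAMPS` (a read at call time) behaves correctly under overrides while the bare name does not.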
```diff
 from osidb.exceptions import DataInconsistencyException

 from .core import generate_acls
```
```diff
@@ -75,14 +76,20 @@ def save(self, *args, auto_timestamps=True, **kwargs):
                 raise DataInconsistencyException(
                     "Save operation based on an outdated model instance: "
                     f"Updated datetime in the request {self.updated_dt} "
-                    f"differes from the DB {db_self.updated_dt}. "
+                    f"differs from the DB {db_self.updated_dt}. "
                     "You need to refresh."
                 )

             # auto-set updated_dt as now on any change
             # cut off the microseconds to allow mid-air
             # collision comparison as API works in seconds
             self.updated_dt = timezone.now().replace(microsecond=0)
+        elif LOG_IGNORE_AUTO_TIMESTAMPS:
```
**Contributor:** I believe we want to fail updates on outdated models instead of logging them. Is there any scenario in which we still want them to happen? Have I misunderstood what this is doing?

**Contributor (author):** Yes, the save still goes through, because the sync_manager process skips this validation (`auto_timestamps=False`). The theory is that the async process gets a stale object and, since it skips the auto-timestamps check, it continues.

**Contributor (author):** I'm logging it to help us debug, but the real change is in the response raised to the user. I wanted to show what changed the model so the user could understand it better, but for that I would need the history, which had the performance issue (not sure if it's fixed now). The second-best solution I came up with was to tell the user the model is outdated and then use the logs to find out who or what changed it.
```diff
+            db_self = type(self).objects.filter(pk=self.pk).first()
+            if db_self is not None and db_self.updated_dt != self.updated_dt:
+                logger.warning(
+                    f"saved outdated model instance {self.__class__.__name__}, id: {self.pk}, db_updated_dt: {db_self.updated_dt}, self_updated_dt: {self.updated_dt}"
+                )

         super().save(*args, **kwargs)
```
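The mid-air collision check that this hunk extends can be illustrated outside Django. This is a stdlib-only sketch with hypothetical timestamps, mirroring the whole-second comparison from `save()` above:

```python
from datetime import datetime, timezone

# The API works in whole seconds, so updated_dt is stored with
# microseconds cut off (as in save() above).
def now_for_api() -> datetime:
    return datetime.now(timezone.utc).replace(microsecond=0)

def is_outdated(request_dt: datetime, db_dt: datetime) -> bool:
    # A mismatch means the caller built its request from a stale row:
    # with auto_timestamps=True this raises DataInconsistencyException,
    # with auto_timestamps=False the new code path only logs a warning.
    return request_dt != db_dt

db_dt = datetime(2024, 5, 1, 12, 0, 0, tzinfo=timezone.utc)
stale_dt = datetime(2024, 5, 1, 11, 59, 0, tzinfo=timezone.utc)

print(is_outdated(stale_dt, db_dt))  # True: collision detected
print(is_outdated(db_dt, db_dt))     # False: save proceeds
```

Truncating microseconds on both sides is what makes the equality comparison meaningful when the API round-trips timestamps at second precision.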
---
```diff
@@ -32,6 +32,17 @@
 logger = logging.getLogger(__name__)


+def log_signal_update(instance, sender, handler_name, *, flaw=None):
+    logger.info(
+        "signal_parent_save handler=%s sender=%s instance_model=%s instance_pk=%s flaw_pk=%s",
+        handler_name,
+        getattr(sender, "__name__", str(sender)),
+        type(instance).__name__,
+        getattr(instance, "pk", None),
+        flaw.pk if flaw is not None else None,
+    )
```
**Review bot**, on lines +35 to +43: Info-level signal logs are currently suppressed by the logger configuration. `log_signal_update` logs at INFO, but the `osidb` logger is configured at WARNING level, so these messages are never emitted. Proposed fix (configure `osidb.signals` for INFO while preserving existing logger behavior):

```diff
--- a/config/settings.py
+++ b/config/settings.py
@@
     "loggers": {
@@
         "osidb": {"level": "WARNING", "handlers": ["console"], "propagate": False},
+        "osidb.signals": {
+            "level": "INFO",
+            "handlers": ["console"],
+            "propagate": False,
+        },
         "api_req": {"level": "INFO", "handlers": ["console"], "propagate": False},
```
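The proposed configuration relies on the fact that a child logger (`osidb.signals`) can be configured more verbosely than its parent (`osidb`). A stdlib-only sketch, with the handler setup simplified to a bare `StreamHandler`:

```python
import logging
import logging.config

# Minimal sketch of the proposed fix: the parent "osidb" logger stays at
# WARNING, while the dedicated "osidb.signals" child logger allows INFO.
logging.config.dictConfig({
    "version": 1,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "loggers": {
        "osidb": {"level": "WARNING", "handlers": ["console"], "propagate": False},
        "osidb.signals": {"level": "INFO", "handlers": ["console"], "propagate": False},
    },
})

print(logging.getLogger("osidb").isEnabledFor(logging.INFO))          # False: suppressed
print(logging.getLogger("osidb.signals").isEnabledFor(logging.INFO))  # True: emitted
```

Because the signals module calls `logging.getLogger(__name__)` and lives under the `osidb.signals` name, it picks up the child configuration automatically without touching any other `osidb.*` logger.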
```diff
 def get_bz_user_id(email: str) -> str:
     api_key = get_env("BZIMPORT_BZ_API_KEY")
     bz_url = get_env("BZIMPORT_BZ_URL", "https://bugzilla.redhat.com")
```
```diff
@@ -126,6 +137,9 @@ def update_flaw_fields(sender, instance, **kwargs):
 @receiver(post_save, sender=FlawCollaborator)
 @receiver(post_save, sender=FlawCVSS)
 def flaw_dependant_update_local_updated_dt(sender, instance, **kwargs):
+    log_signal_update(
+        instance, sender, "flaw_dependant_update_local_updated_dt", flaw=instance.flaw
+    )
     instance.flaw.save(auto_timestamps=False, raise_validation_error=False)
```
|
|
@@ -140,6 +154,9 @@ def update_local_updated_dt_tracker(sender, instance, **kwargs): | |
| for affect in instance.affects.all(): | ||
| flaws.add(affect.flaw) | ||
| for flaw in list(flaws): | ||
| log_signal_update( | ||
| instance, sender, "update_local_updated_dt_tracker", flaw=flaw | ||
| ) | ||
| flaw.save( | ||
| auto_timestamps=False, | ||
| no_alerts=True, # recreating alerts from nested entities can cause deadlocks | ||
|
|
```diff
@@ -149,6 +166,12 @@

 @receiver(post_save, sender=AffectCVSS)
 def updated_local_updated_dt_affectcvss(sender, instance, **kwargs):
+    log_signal_update(
+        instance,
+        sender,
+        "updated_local_updated_dt_affectcvss",
+        flaw=instance.affect.flaw,
+    )
     instance.affect.flaw.save(auto_timestamps=False, raise_validation_error=False)
```
**Contributor:** We are currently using `pydantic_settings` for new variables; please update this accordingly.
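For context, the pattern pydantic-settings provides is declaring new variables as typed fields read from the environment instead of bare module-level constants. A stdlib-only sketch of the same idea (class and field names are illustrative, not the repo's actual settings class; pydantic-settings adds type validation on top of this):

```python
import os
from dataclasses import dataclass

# Hypothetical stand-in for a pydantic-settings BaseSettings subclass:
# each new variable becomes a typed field populated from the environment.
@dataclass(frozen=True)
class LoggingSettings:
    log_ignore_auto_timestamps: bool

    @classmethod
    def from_env(cls) -> "LoggingSettings":
        raw = os.environ.get("LOG_IGNORE_AUTO_TIMESTAMPS", "false")
        return cls(log_ignore_auto_timestamps=raw.lower() in ("1", "true", "yes"))

os.environ["LOG_IGNORE_AUTO_TIMESTAMPS"] = "true"
print(LoggingSettings.from_env().log_ignore_auto_timestamps)  # True
```

Centralizing the flag in a settings object like this also sidesteps the import-time capture issue flagged earlier, since callers read a field at runtime rather than copying a module constant.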