3 changes: 3 additions & 0 deletions config/settings.py
@@ -189,6 +189,9 @@
# In our case, every hour ~200 tasks are executed. Reasonable number is then (200 / concurrency).
CELERY_WORKER_MAX_TASKS_PER_CHILD = 60

LOG_IGNORE_AUTO_TIMESTAMPS = get_env(
"LOG_IGNORE_AUTO_TIMESTAMPS", default="False", is_bool=True
)
Comment on lines +192 to +194
Contributor

We are currently using pydantic_settings for new variables; please update this setting accordingly.
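A minimal sketch of the pydantic-settings pattern being asked for, assuming the project exposes a BaseSettings subclass for new variables (the class name and wiring are illustrative, not the repo's actual code):

```python
# Hypothetical sketch only: the Settings class name and location are
# assumptions, not the actual osidb code.
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    # Reads LOG_IGNORE_AUTO_TIMESTAMPS from the environment; pydantic
    # coerces strings like "true"/"False" to bool, replacing the
    # get_env(..., is_bool=True) helper.
    LOG_IGNORE_AUTO_TIMESTAMPS: bool = False
```

Field names match environment variable names case-insensitively by default, so the boolean coercion needs no extra wiring.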

LOGGING = {
"version": 1,
"disable_existing_loggers": False,
9 changes: 8 additions & 1 deletion osidb/mixins.py
@@ -16,6 +16,7 @@
from django.db import IntegrityError, models, transaction
from django.utils import timezone

Contributor

Please squash/fixup this commit into the one where this change was introduced.

from config.settings import LOG_IGNORE_AUTO_TIMESTAMPS

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify potential impact of import-time settings capture and override usage.

# 1) Find direct imports from config.settings in runtime modules.
rg -n 'from config\.settings import ' -g '!config/settings.py'

# 2) Find settings overrides/reads related to LOG_IGNORE_AUTO_TIMESTAMPS.
rg -n 'override_settings\(|LOG_IGNORE_AUTO_TIMESTAMPS|settings\.LOG_IGNORE_AUTO_TIMESTAMPS'

Repository: RedHatProductSecurity/osidb

Length of output: 1814


🌐 Web query:

Django override_settings direct import vs settings object access

💡 Result:

In Django, always use "from django.conf import settings" to access settings in application code, rather than importing directly from the settings module (e.g., "from myproject.settings import SOME_SETTING"). Direct imports capture values at import time and bypass Django's settings system: they do not reflect runtime changes, such as those made by override_settings in tests, so code using them reads the original values even under test overrides, causing tests to fail unexpectedly. Going through django.conf.settings uses the dynamic LazySettings object, which supports overrides and other Django features such as setting-changed signals. The official documentation explicitly recommends this pattern: "In your Django apps, use settings by importing the object django.conf.settings."

Example of the problem:

# Bad - direct import (fails with override_settings)
from myproject.settings import PAGE_SIZE

# Good - uses django.conf.settings (works with overrides)
from django.conf import settings
page_size = settings.PAGE_SIZE

This applies universally: tests, utils modules, views, anywhere settings are read. For module-level globals that depend on settings, avoid import-time evaluation, or use importlib.reload plus patching as a workaround.


Remove direct import and use the settings object to enable test overrides

Line 19 directly imports LOG_IGNORE_AUTO_TIMESTAMPS, which freezes the value at import time and breaks Django's override_settings in tests. Since settings is already imported on line 10, use settings.LOG_IGNORE_AUTO_TIMESTAMPS instead.

♻️ Refactor
-from config.settings import LOG_IGNORE_AUTO_TIMESTAMPS
@@
-        elif LOG_IGNORE_AUTO_TIMESTAMPS:
+        elif settings.LOG_IGNORE_AUTO_TIMESTAMPS:
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
from config.settings import LOG_IGNORE_AUTO_TIMESTAMPS
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@osidb/mixins.py` at line 19, Replace the direct import of
LOG_IGNORE_AUTO_TIMESTAMPS with use of the settings object so test overrides
work: remove the "from config.settings import LOG_IGNORE_AUTO_TIMESTAMPS" import
and update any references to LOG_IGNORE_AUTO_TIMESTAMPS in osidb/mixins.py to
use settings.LOG_IGNORE_AUTO_TIMESTAMPS (settings is already imported on line
10); ensure all functions/classes in this file that check
LOG_IGNORE_AUTO_TIMESTAMPS (e.g., any mixin methods) reference
settings.LOG_IGNORE_AUTO_TIMESTAMPS instead.
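The import-time capture problem can be demonstrated with plain Python, no Django required (SimpleNamespace stands in for django.conf.settings here):

```python
import types

# Plain-Python stand-in for django.conf.settings (assumption: the real
# object is a LazySettings instance, but attribute access behaves the same).
settings = types.SimpleNamespace(LOG_IGNORE_AUTO_TIMESTAMPS=False)

# What "from config.settings import LOG_IGNORE_AUTO_TIMESTAMPS" does:
# copies the value out once, at import time.
frozen = settings.LOG_IGNORE_AUTO_TIMESTAMPS

# What "settings.LOG_IGNORE_AUTO_TIMESTAMPS" does: reads at call time.
def current():
    return settings.LOG_IGNORE_AUTO_TIMESTAMPS

# Simulate override_settings flipping the value inside a test.
settings.LOG_IGNORE_AUTO_TIMESTAMPS = True

print(frozen)     # False - the import-time copy never sees the override
print(current())  # True  - attribute access reflects the override
```

This is exactly why the suggested change routes the check through the settings object instead of a module-level import.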

from osidb.exceptions import DataInconsistencyException

from .core import generate_acls
@@ -75,14 +76,20 @@ def save(self, *args, auto_timestamps=True, **kwargs):
raise DataInconsistencyException(
"Save operation based on an outdated model instance: "
f"Updated datetime in the request {self.updated_dt} "
f"differes from the DB {db_self.updated_dt}. "
f"differs from the DB {db_self.updated_dt}. "
"You need to refresh."
)

# auto-set updated_dt as now on any change
# cut off the microseconds to allow mid-air
# collision comparison as API works in seconds
self.updated_dt = timezone.now().replace(microsecond=0)
elif LOG_IGNORE_AUTO_TIMESTAMPS:
Contributor

@costaconrado costaconrado Apr 8, 2026

I believe we want to fail updates on outdated models instead of logging them; is there any scenario where we still want them to happen? Or have I misunderstood what this is doing?

Contributor Author

Yes, that keeps going, but the sync_manager process skips this validation (auto_timestamps=False).
This is to log the event so we can better understand how the issue happens.

To illustrate, this is what I saw:

  • User makes a request -> changes to state A to B
  • Async process runs -> Changes B to A

The theory is that the async process gets a stale object and, because it skips auto_timestamps, it continues anyway.

Contributor Author

I'm logging it to help us debug, but the real change is in the answer with the raise. I wanted to show the user what changed the model so they could better understand it, but for that I need the history, which had the performance issue (not sure if it's fixed now).

The second-best solution I came up with was to explain the outdated model to the user and then use the logs to find out who or what changed the model.

db_self = type(self).objects.filter(pk=self.pk).first()
if db_self is not None and db_self.updated_dt != self.updated_dt:
logger.warning(
f"saved outdated model instance {self.__class__.__name__}, id: {self.pk}, db_updated_dt: {db_self.updated_dt}, self_updated_dt: {self.updated_dt}"
)

super().save(*args, **kwargs)

23 changes: 23 additions & 0 deletions osidb/signals.py
@@ -32,6 +32,17 @@
logger = logging.getLogger(__name__)


def log_signal_update(instance, sender, handler_name, *, flaw=None):
logger.info(
"signal_parent_save handler=%s sender=%s instance_model=%s instance_pk=%s flaw_pk=%s",
handler_name,
getattr(sender, "__name__", str(sender)),
type(instance).__name__,
getattr(instance, "pk", None),
flaw.pk if flaw is not None else None,
)
Comment on lines +35 to +43

⚠️ Potential issue | 🟠 Major

Info-level signal logs are currently suppressed by logger configuration

Line 37 uses logger.info, but with config/settings.py setting the osidb logger level to WARNING, osidb.signals won't emit these records by default, making the new diagnostics effectively invisible at runtime.

🔧 Proposed fix (configure `osidb.signals` for INFO while preserving existing logger behavior)
--- a/config/settings.py
+++ b/config/settings.py
@@
     "loggers": {
@@
         "osidb": {"level": "WARNING", "handlers": ["console"], "propagate": False},
+        "osidb.signals": {
+            "level": "INFO",
+            "handlers": ["console"],
+            "propagate": False,
+        },
         "api_req": {"level": "INFO", "handlers": ["console"], "propagate": False},

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@osidb/signals.py` around lines 35 - 43, The info-level signal logs in
log_signal_update are suppressed because the package logger is set to WARNING;
to make these diagnostics visible without changing global logging behavior,
configure the signals module logger to INFO: in osidb.signals ensure you obtain
the module logger (logging.getLogger(__name__)) and call setLevel(logging.INFO)
for that logger so log_signal_update, signal_parent_save and other functions in
this module emit INFO records while preserving the existing package/root logger
settings.
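The parent/child logger behavior behind this fix can be sketched with the stdlib alone (logger names mirror the proposed configuration):

```python
import logging

# Package logger configured at WARNING, as in config/settings.py.
parent = logging.getLogger("osidb")
parent.setLevel(logging.WARNING)

# Proposed dedicated logger for the signals module at INFO.
signals = logging.getLogger("osidb.signals")
signals.setLevel(logging.INFO)

# A sibling module logger with no explicit level inherits the
# effective WARNING level from "osidb".
mixins = logging.getLogger("osidb.mixins")

print(signals.isEnabledFor(logging.INFO))  # True
print(mixins.isEnabledFor(logging.INFO))   # False
```

Because a logger with an explicit level stops delegating to its parent, the "osidb.signals" entry emits INFO records while the rest of the package keeps its WARNING threshold.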



def get_bz_user_id(email: str) -> str:
api_key = get_env("BZIMPORT_BZ_API_KEY")
bz_url = get_env("BZIMPORT_BZ_URL", "https://bugzilla.redhat.com")
@@ -126,6 +137,9 @@ def update_flaw_fields(sender, instance, **kwargs):
@receiver(post_save, sender=FlawCollaborator)
@receiver(post_save, sender=FlawCVSS)
def flaw_dependant_update_local_updated_dt(sender, instance, **kwargs):
log_signal_update(
instance, sender, "flaw_dependant_update_local_updated_dt", flaw=instance.flaw
)
instance.flaw.save(auto_timestamps=False, raise_validation_error=False)


@@ -140,6 +154,9 @@ def update_local_updated_dt_tracker(sender, instance, **kwargs):
for affect in instance.affects.all():
flaws.add(affect.flaw)
for flaw in list(flaws):
log_signal_update(
instance, sender, "update_local_updated_dt_tracker", flaw=flaw
)
flaw.save(
auto_timestamps=False,
no_alerts=True, # recreating alerts from nested entities can cause deadlocks
@@ -149,6 +166,12 @@

@receiver(post_save, sender=AffectCVSS)
def updated_local_updated_dt_affectcvss(sender, instance, **kwargs):
log_signal_update(
instance,
sender,
"updated_local_updated_dt_affectcvss",
flaw=instance.affect.flaw,
)
instance.affect.flaw.save(auto_timestamps=False, raise_validation_error=False)

