12 changes: 12 additions & 0 deletions schema/openapi.yaml
@@ -658,6 +658,8 @@ components:
eventSpecifier:
pattern: \S
type: string
heapDump:
type: boolean
id:
format: int64
type: integer
@@ -684,6 +686,8 @@
format: int32
minimum: 0
type: integer
threadDump:
type: boolean
required:
- name
- description
@@ -2639,6 +2643,8 @@ paths:
type: boolean
eventSpecifier:
type: string
heapDump:
type: boolean
initialDelaySeconds:
format: int32
type: integer
@@ -2659,6 +2665,8 @@
preservedArchives:
format: int32
type: integer
threadDump:
type: boolean
type: object
multipart/form-data:
schema:
@@ -2672,6 +2680,8 @@
type: boolean
eventSpecifier:
type: string
heapDump:
type: boolean
initialDelaySeconds:
format: int32
type: integer
@@ -2692,6 +2702,8 @@
preservedArchives:
format: int32
type: integer
threadDump:
type: boolean
type: object
required: true
responses:
4 changes: 4 additions & 0 deletions src/main/java/io/cryostat/rules/Rule.java
@@ -102,6 +102,10 @@ public class Rule extends PanacheEntity {

public boolean enabled;

public boolean heapDump;

public boolean threadDump;
Member:

I'm not sure these flags are sufficient to capture all of the functionality we might want to expose/implement. For example, rules currently have both an archivalPeriodSeconds and a preservedArchives, which rule execution uses to determine when to execute the recording archival job associated with the rule, and how many prior copies of archived recordings from the same active source recording should be retained. It looks like this implementation simply enables rules to also perform a thread/heap dump alongside the recording archival execution on the archivalPeriodSeconds schedule, but without any equivalent handling of preservedArchives.

But then this raises an important feature design question: should preservedArchives apply equally and symmetrically to all three data types that can now be captured by a rule? Or should there be three different fields like preservedJfrArchives, preservedThreadDumps, preservedHeapDumps? Or should a rule only be valid if it configures JFR archives OR thread dumps OR heap dumps, so that if the user wants periodic capture of each, it takes three different rule definitions? If we start down that path, this also raises the question of whether these should then remain as one Rule entity type or become three.
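One way the symmetric-retention option from the comment above could look — a hypothetical sketch, not part of this PR; the class and field names are illustrative only — is a shared retention policy that either a single preservedArchives count or per-type counts (preservedThreadDumps, preservedHeapDumps) could reuse:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch: a retention window over archived artifacts, analogous
// to how preservedArchives caps prior copies of archived recordings. The same
// policy object could back JFR archives, thread dumps, or heap dumps.
class RetentionPolicy {
    private final int preservedCopies;
    private final Deque<String> artifacts = new ArrayDeque<>(); // oldest first

    RetentionPolicy(int preservedCopies) {
        this.preservedCopies = preservedCopies;
    }

    // Record a newly captured artifact and return the names of any older
    // copies that fall outside the retention window and should be deleted.
    List<String> add(String artifactName) {
        artifacts.addLast(artifactName);
        List<String> evicted = new ArrayList<>();
        while (artifacts.size() > preservedCopies) {
            evicted.add(artifacts.removeFirst());
        }
        return evicted;
    }
}
```

With preservedCopies = 2, adding a third artifact evicts the oldest one — the same behavior a rule's periodic archival job applies to recordings today.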


public String getName() {
return this.name;
}
19 changes: 19 additions & 0 deletions src/main/java/io/cryostat/rules/RuleExecutor.java
@@ -23,12 +23,16 @@
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
import java.util.UUID;
import java.util.stream.Collectors;

import io.cryostat.expressions.MatchExpressionEvaluator;
import io.cryostat.libcryostat.templates.Template;
import io.cryostat.libcryostat.templates.TemplateType;
import io.cryostat.recordings.ActiveRecording;
import io.cryostat.recordings.LongRunningRequestGenerator;
import io.cryostat.recordings.LongRunningRequestGenerator.HeapDumpRequest;
import io.cryostat.recordings.LongRunningRequestGenerator.ThreadDumpRequest;
import io.cryostat.recordings.RecordingHelper;
import io.cryostat.recordings.RecordingHelper.RecordingOptions;
import io.cryostat.recordings.RecordingHelper.RecordingReplace;
@@ -42,6 +46,7 @@
import io.quarkus.runtime.ShutdownEvent;
import io.quarkus.vertx.ConsumeEvent;
import io.smallrye.mutiny.Uni;
import io.vertx.mutiny.core.eventbus.EventBus;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import jakarta.inject.Inject;
@@ -79,6 +84,9 @@ public class RuleExecutor {
@Inject MatchExpressionEvaluator evaluator;
@Inject Scheduler quartz;

@Inject LongRunningRequestGenerator generator;
@Inject EventBus eventBus;

void onStop(@Observes ShutdownEvent evt) throws SchedulerException {
quartz.shutdown();
}
@@ -103,6 +111,17 @@ Uni<Void> onMessage(ActivationAttempt attempt) {
if (priorRecording.isPresent()) {
recordingHelper.stopRecording(priorRecording.get()).await().indefinitely();
}
if (rule.heapDump) {
HeapDumpRequest request =
new HeapDumpRequest(UUID.randomUUID().toString(), target.id);
eventBus.publish(LongRunningRequestGenerator.HEAP_DUMP_REQUEST_ADDRESS, request);
}
if (rule.threadDump) {
ThreadDumpRequest request =
new ThreadDumpRequest(
UUID.randomUUID().toString(), target.id, "threadPrint");
eventBus.publish(LongRunningRequestGenerator.THREAD_DUMP_ADDRESS, request);
}
var labels = new HashMap<>(rule.metadata.labels());
labels.put(RULE_LABEL_KEY, rule.name);
ActiveRecording recording = null;
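The RuleExecutor change publishes the dump requests on the event bus and moves on without awaiting a result. A minimal plain-Java sketch of that fire-and-forget pattern — the SimpleEventBus here is a stand-in for illustration, not the Vert.x API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.function.Consumer;

// Stand-in event bus: addresses map to registered handlers.
class SimpleEventBus {
    private final Map<String, Consumer<Object>> handlers = new HashMap<>();

    void consumer(String address, Consumer<Object> handler) {
        handlers.put(address, handler);
    }

    // Fire-and-forget: publish hands the message off and returns; the caller
    // never observes the handler's outcome, mirroring eventBus.publish(...).
    void publish(String address, Object message) {
        Consumer<Object> h = handlers.get(address);
        if (h != null) {
            h.accept(message);
        }
    }
}

// Illustrative request shape, modeled on the ThreadDumpRequest in the diff.
record ThreadDumpRequest(String jobId, long targetId, String format) {}

class DispatchDemo {
    static final String THREAD_DUMP_ADDRESS = "ThreadDumpRequest";

    // Conditionally dispatch, as the rule.threadDump guard does; returns the
    // generated job ID, or null when the flag is off.
    static String dispatch(SimpleEventBus bus, boolean threadDump, long targetId) {
        if (!threadDump) {
            return null;
        }
        String jobId = UUID.randomUUID().toString();
        bus.publish(THREAD_DUMP_ADDRESS,
                new ThreadDumpRequest(jobId, targetId, "threadPrint"));
        return jobId;
    }
}
```

The job ID generated per publish is what a long-running-request consumer would use to correlate progress notifications back to this dispatch.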
12 changes: 11 additions & 1 deletion src/main/java/io/cryostat/rules/Rules.java
@@ -207,6 +207,12 @@ public Rule update(
if (body.containsKey("metadata")) {
rule.metadata = body.getJsonObject("metadata").mapTo(Metadata.class);
}
if (body.containsKey("heapDump")) {
rule.heapDump = body.getBoolean("heapDump");
}
if (body.containsKey("threadDump")) {
rule.threadDump = body.getBoolean("threadDump");
}

rule.persist();

@@ -229,7 +235,9 @@ public RestResponse<Rule> create(
@RestForm int maxAgeSeconds,
@RestForm int maxSizeBytes,
@RestForm("metadata") Optional<String> rawMetadata,
@RestForm boolean enabled)
@RestForm boolean enabled,
@RestForm boolean heapDump,
@RestForm boolean threadDump)
throws JsonMappingException, JsonProcessingException {
MatchExpression expr = new MatchExpression(matchExpression);
expr.persist();
@@ -243,6 +251,8 @@
rule.preservedArchives = preservedArchives;
rule.maxAgeSeconds = maxAgeSeconds;
rule.maxSizeBytes = maxSizeBytes;
rule.heapDump = heapDump;
rule.threadDump = threadDump;
if (rawMetadata.isPresent()) {
rule.metadata = mapper.readValue(rawMetadata.get(), Metadata.class);
}
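The containsKey guards in the update handler give the endpoint partial-update semantics: a request body that omits "heapDump" leaves the stored value untouched instead of resetting the boolean to false. A small stand-alone sketch of that behavior, with a plain Map standing in for the JsonObject body:

```java
import java.util.Map;

// Stand-in for the Rule entity fields touched by the update handler.
class RuleState {
    boolean heapDump;
    boolean threadDump;

    // Mirrors the containsKey guards in Rules.update: only keys present in
    // the request body overwrite stored values, so a partial update cannot
    // silently reset an omitted boolean to false.
    void applyUpdate(Map<String, ?> body) {
        if (body.containsKey("heapDump")) {
            heapDump = (Boolean) body.get("heapDump");
        }
        if (body.containsKey("threadDump")) {
            threadDump = (Boolean) body.get("threadDump");
        }
    }
}
```

Without the guards, a deserialized absent boolean would default to false and clobber the persisted flag on every unrelated update.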
2 changes: 2 additions & 0 deletions src/main/resources/db/migration/V4.0.0__cryostat.sql
@@ -73,6 +73,8 @@
name text unique check (char_length(name) < 255),
preservedArchives integer not null,
matchExpression bigint unique,
threadDump boolean not null,
heapDump boolean not null,
Member @andrewazores commented Apr 8, 2026:


This is a wrong approach - if a Cryostat instance has already been installed and is being upgraded, the previously-run migration scripts will not re-execute. So patching up a migration file for an already-released Cryostat version is going to create some really messy headaches for fresh installs vs. upgraded installs, which will end up with divergent database schemas.

In fact, I think Flyway will even raise an error on upgrade and fail in this case. I'm pretty sure it does some migration script checksumming and will catch that this has been changed.

Any modifications to the database schema for a Cryostat vX.Y release feature should only be done in a net new VX.Y.0__cryostat.sql migration script corresponding to that release version, so that all schema updates for that release are done exactly once at upgrade time.

primary key (id)
);

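Following the review suggestion above, the new columns would instead land in a net-new migration script for the release that ships this feature, so existing installs pick them up exactly once at upgrade time. A sketch — the version number and DEFAULT clause are illustrative assumptions, not part of this PR:

```sql
-- Hypothetical V4.3.0__cryostat.sql: add the new columns at upgrade time
-- instead of editing the already-shipped V4.0.0 script. A DEFAULT is needed
-- so the NOT NULL constraint holds for pre-existing rows.
ALTER TABLE Rule ADD COLUMN threadDump boolean NOT NULL DEFAULT false;
ALTER TABLE Rule ADD COLUMN heapDump boolean NOT NULL DEFAULT false;
```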
2 changes: 2 additions & 0 deletions src/main/resources/db/migration/V4.2.0__cryostat.sql
@@ -248,6 +248,8 @@ CREATE TABLE Rule_AUD (
maxSizeBytes INTEGER,
metadata TEXT,
enabled BOOLEAN,
threadDump BOOLEAN,
heapDump BOOLEAN,
PRIMARY KEY (id, REV),
FOREIGN KEY (REV) REFERENCES REVINFO (REV),
FOREIGN KEY (REVEND) REFERENCES REVINFO (REV)