
Add Podman support for Docker Compose deploy and unify container runtime abstraction#16074

Merged
davidfowl merged 12 commits into main from davidfowl/podman-compose-support
Apr 12, 2026
Conversation

@davidfowl
Contributor

Description

Add first-class Podman support for Docker Compose deployment and unify all container runtime usage behind a single abstraction.

Problem: Docker Compose deploy hardcoded ProcessSpec("docker") in three places. Podman users — common in security-conscious, daemonless, rootless environments — couldn't deploy at all (#13315).

What changed:

  • IContainerRuntime compose methods — Added ComposeUpAsync, ComposeDownAsync, ComposeListServicesAsync. Podman overrides service discovery to use native podman ps --filter label=... which works with both podman-compose (Python) and Docker Compose v2 providers.

  • IContainerRuntimeResolver — New async resolver that auto-detects the best available runtime, matching DCP's detection logic: probe Docker and Podman in parallel, prefer running over installed, Docker as default tiebreaker. Result is cached. Eliminates sync-over-async blocking during DI resolution. Breaking change for the experimental IContainerRuntime API — callers now resolve IContainerRuntimeResolver instead of IContainerRuntime directly.

  • Shared ContainerRuntimeDetector — Single detection implementation in src/Shared/ used by both the hosting layer and aspire doctor. AOT-friendly (JsonDocument parsing, no reflection). Includes version info, Docker Desktop detection, server OS. Accepts optional ILogger for diagnostics.

  • aspire doctor improvements — Now reports all runtimes with status, version, and explains which one is active and why (explicit config vs auto-detected default vs only runtime running).

  • Diagnostics — Actionable error messages on compose failures, runtime binary validation before compose ops, runtime name in pipeline UI steps.

  • Localhive tooling — --output, --rid, --archive flags for producing portable hive layouts for remote/cross-platform testing.

  • E2E test — PodmanDeploymentTests (currently skipped, pending Hex1b AdditionalRunArgs PR).

Validated end-to-end on Ubuntu 24.04 VM with pure Podman (zero Docker): TypeScript AppHost → aspire deploy → Redis + Aspire Dashboard running as Podman containers. Tested auto-detection across all four configurations (neither, Podman only, Docker only, both).
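The label-based Podman service discovery described above can be sketched roughly as follows. This is an illustrative Python sketch, not the PR's C# implementation; it assumes only the standard Compose labels (com.docker.compose.service) and the shape of `podman ps --format json` output (Names as a list, Labels as a map), which both podman-compose and Docker Compose v2 set.

```python
import json
from collections import defaultdict

# Standard Compose label; set by both podman-compose and Docker Compose v2.
SERVICE_LABEL = "com.docker.compose.service"

def list_compose_services(podman_ps_json: str) -> dict[str, list[str]]:
    """Map compose service name -> container names, from `podman ps --format json` output."""
    services: dict[str, list[str]] = defaultdict(list)
    for ctr in json.loads(podman_ps_json):
        labels = ctr.get("Labels") or {}  # Labels can be null for non-compose containers
        service = labels.get(SERVICE_LABEL)
        if service:
            # Podman reports Names as a list of strings
            services[service].extend(ctr.get("Names", []))
    return dict(services)
```

Containers without the compose label (e.g. unrelated Podman containers) are simply skipped, so discovery stays scoped to the compose project.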

Fixes #13315

Checklist

  • Is this feature complete?
  • Are you including unit tests for the changes and scenario tests if relevant?
    • Yes
    • No
  • Did you add public API?
    • Yes
      • If yes, did you have an API Review for it?
        • Yes
        • No
      • Did you add <remarks /> and <code /> elements on your triple slash comments?
        • Yes
        • No
    • No
  • Does the change make any security assumptions or guarantees?
    • Yes
    • No
  • Does the change require an update in our Aspire docs?

davidfowl and others added 8 commits April 11, 2026 01:32
Add compose lifecycle methods (ComposeUpAsync, ComposeDownAsync,
ComposeListServicesAsync) to IContainerRuntime so each runtime handles
compose operations natively.

Docker uses 'docker compose ps --format json' for service discovery.
Podman overrides ComposeListServicesAsync to use native 'podman ps
--filter label=com.docker.compose.project=X --format json', which works
with both Docker Compose v2 and podman-compose providers.

The compose publisher now resolves IContainerRuntime from DI instead of
hardcoding ProcessSpec("docker"). All process execution for compose
operations is encapsulated in the runtime implementations.

Validated on Ubuntu 24.04 with Podman 4.9.3: compose up/down/ps work
correctly with both Docker Compose v2 provider and native podman-compose.

Fixes #13315

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
New flags for producing portable hive layouts:
- --output / -o: write the .aspire layout to a custom directory instead
  of $HOME/.aspire. Forces copy mode (no symlinks).
- --rid / -r: target a different RID for bundle and CLI builds (e.g.
  linux-x64 from macOS). Cross-RID CLI uses dotnet publish with
  --self-contained and PublishSingleFile.
- --archive: create a .tar.gz (or .zip for win-* RIDs) archive of the
  output directory. Requires --output.

Usage example:
  ./localhive.sh -o /tmp/aspire-linux -r linux-x64 --archive
  scp /tmp/aspire-linux.tar.gz user@host:~
  # On target: tar -xzf aspire-linux.tar.gz -C ~/.aspire

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add PodmanDeploymentTests, which validates that aspire deploy works with
Podman as the container runtime. The test sets ASPIRE_CONTAINER_RUNTIME=podman,
creates a .NET AppHost with Docker Compose environment, deploys, and
verifies containers are running via podman ps.

Marked as OuterloopTest since it requires Podman and docker-compose v2
installed on the host machine.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Improve observability when compose operations use the wrong runtime:

- Log the resolved container runtime at DI registration time:
  'Container runtime resolved: Docker (configured via ASPIRE_CONTAINER_RUNTIME=docker)'
- Log the runtime name when compose up starts:
  'Using container runtime podman for compose operations'
- Include the runtime binary name in error messages:
  'podman compose up failed with exit code 125. Ensure podman is installed and available on PATH.'
- Validate runtime binary exists on PATH before attempting compose
  operations — fail fast with actionable message instead of cryptic
  exit codes
- Use runtime name in pipeline step UI messages so the user can see
  which runtime is being used at each step

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
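The fail-fast PATH check described in this commit can be sketched like so. This is a Python illustration under assumed names (ensure_runtime_available is not the PR's C# API); the point is simply: verify the binary exists before launching compose, so the user sees an actionable message instead of a cryptic exit code.

```python
import shutil

def ensure_runtime_available(binary: str) -> None:
    """Fail fast with an actionable message if the runtime binary is not on PATH."""
    if shutil.which(binary) is None:
        raise RuntimeError(
            f"Container runtime '{binary}' was not found on PATH. "
            f"Install {binary} or set ASPIRE_CONTAINER_RUNTIME to an installed runtime."
        )
```

Called at the top of every compose operation, this turns a late exit-code-127 style failure into an immediate, named error.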
Create ContainerRuntimeDetector in src/Shared/ that mirrors DCP's
detection logic (see internal/containers/runtimes/runtime.go):
- Probe Docker and Podman in parallel
- Prefer installed+running over installed-only over not-found
- Prefer Docker as default when both are equally available

Used in two places:
- DistributedApplicationBuilder: auto-detects runtime when
  ASPIRE_CONTAINER_RUNTIME is not set (instead of always defaulting
  to Docker)
- ContainerRuntimeCheck (aspire doctor): uses shared detector for
  initial availability check, then does extended checks (version,
  Windows containers, tunnel config)

This means on a Podman-only machine, Aspire will automatically use
Podman without needing ASPIRE_CONTAINER_RUNTIME=podman.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
aspire doctor now reports the status of every known container runtime
(Docker and Podman) instead of just the first one found. Each entry
shows:
- Whether the runtime is installed and running
- Whether it's the active (selected) runtime
- Why it was selected (explicit config, auto-detected default, or
  only runtime running)

Example output with Podman only:
  ❌ Docker: not found
  ✅ Podman: running (auto-detected, only runtime running) ← active

Example with both + explicit override:
  ✅ Docker: running (available)
  ✅ Podman: running (configured via ASPIRE_CONTAINER_RUNTIME=podman) ← active

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Fix localhive.ps1 $aspireRoot used before assignment (critical:
  script would crash with null path)
- Fix EnsureRuntimeAvailable process leak: use async/await with
  IAsyncDisposable instead of broken IDisposable cast
- Fix compose-up error message: only mention ASPIRE_CONTAINER_RUNTIME
  when env var is actually set, otherwise say 'auto-detected'
- Restore Docker server version check in aspire doctor (was dropped
  during refactor)
- Fix localhive.sh archive path: resolve to absolute before cd to
  avoid relative path issues with zip

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…lver

Introduce IContainerRuntimeResolver with async ResolveAsync() that
caches the result. This eliminates the .GetAwaiter().GetResult() call
in the DI singleton factory that blocked the thread during startup
while probing container runtimes.

Callers now resolve IContainerRuntimeResolver and await ResolveAsync()
instead of resolving IContainerRuntime directly. This is a breaking
change for the experimental IContainerRuntime API surface.

Also consolidates version detection into ContainerRuntimeDetector
(AOT-friendly JsonDocument parsing), adds FindBestRuntime() for
reuse without re-probing, and slims ContainerRuntimeCheck to pure
policy checks with no process spawning.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings April 11, 2026 21:58
@github-actions
Contributor

github-actions bot commented Apr 11, 2026

🚀 Dogfood this PR with:

⚠️ WARNING: Do not do this without first carefully reviewing the code of this PR to satisfy yourself it is safe.

curl -fsSL https://raw.githubusercontent.com/microsoft/aspire/main/eng/scripts/get-aspire-cli-pr.sh | bash -s -- 16074

Or

  • Run remotely in PowerShell:
iex "& { $(irm https://raw.githubusercontent.com/microsoft/aspire/main/eng/scripts/get-aspire-cli-pr.ps1) } 16074"

Contributor

Copilot AI left a comment


Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

davidfowl and others added 2 commits April 11, 2026 15:31
Fix ContainerRuntimeResolver caching:
- Use Lazy<Task<T>> for thread-safe single-initialization
- Use CancellationToken.None so cached task isn't poisoned by
  per-operation cancellation tokens

Add 22 unit tests:
- FindBestRuntime: all priority permutations (running > installed >
  default tiebreaker, empty, single, neither)
- ParseVersionOutput: Docker JSON, Podman JSON, Docker Desktop
  detection, Windows containers, regex fallback, null/empty
- ParseComposeServiceEntries: NDJSON, JSON array, empty, invalid
- ParsePodmanPsOutput: ports + labels, multi-container aggregation,
  no labels, empty, invalid JSON

Make ParsePodmanPsOutput and ParseComposeServiceEntries internal
for testability.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
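The NDJSON-versus-JSON-array tolerance that these tests cover can be sketched as below. Illustrative Python, not the PR's C# ParseComposeServiceEntries; it assumes Docker Compose v2 may emit NDJSON (one object per line) while other providers emit a plain JSON array.

```python
import json

def parse_compose_services(output: str) -> list[dict]:
    """Parse compose `ps` output that may be a JSON array, a single object, or NDJSON."""
    text = output.strip()
    if not text:
        return []
    try:
        parsed = json.loads(text)
        return parsed if isinstance(parsed, list) else [parsed]
    except json.JSONDecodeError:
        pass  # not one JSON document; fall through to line-by-line NDJSON
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            entries.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # skip an invalid line rather than failing the whole parse
    return entries
```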
- Migrate ContainerRuntimeCheckTests from deleted ContainerVersionInfo
  to ContainerRuntimeDetector.ParseVersionOutput
- Fix JSON null handling: use JsonSerializerContext with strong types
  instead of JsonDocument (Server:null is handled by nullable properties)
- Add missing IContainerRuntimeResolver registration in
  ProjectResourceTests
- Remove deleted ContainerVersionJson from CLI source gen context

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@davidfowl davidfowl force-pushed the davidfowl/podman-compose-support branch from 147db4a to dbac599 Compare April 12, 2026 01:13
Skip runtimes that aren't installed instead of showing them as
failures. Only show a failure if NO runtime is found at all.
If a runtime was explicitly configured via ASPIRE_CONTAINER_RUNTIME
but not found, that IS shown as a failure with install guidance.

Before: ❌ Podman: not found (even though Docker is healthy)
After:  Only Docker shown, Podman silently omitted

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
await deployTask.CompleteAsync($"Compose deployment failed ({runtime.Name}): {ex.Message}", CompletionState.CompletedWithError, context.CancellationToken).ConfigureAwait(false);
throw;
}
}
Member


Behavioral change: compose failure is now a hard exception instead of a soft report.

Previously, a non-zero exit code from docker compose up called deployTask.FailAsync() but did not throw — the pipeline could continue. Now ContainerRuntimeBase.ComposeUpAsync throws DistributedApplicationException on non-zero exit, which is caught here and re-thrown (throw;).

This means a compose-up failure that was previously a soft report is now a hard exception that propagates up the pipeline. If any consumer relies on compose-up being non-fatal (continuing to subsequent steps, retrying, etc.), this will break.

Same behavioral change applies to DockerComposeDownAsync below.

Contributor Author


Looking at the pipeline executor (DistributedApplicationPipeline.cs:660-668), it wraps ExecuteStepAsync in try/catch — exceptions are caught, FailAsync is called on the step, and the exception propagates to cancel dependent steps via linkedCts.Cancel().

If we don't throw, stepTcs.TrySetResult() fires and the step is considered succeeded from the pipeline's perspective, even though compose failed. The original pre-PR code that called FailAsync without throwing was actually a bug — the step appeared to succeed.

Throwing is the correct behavior. Kept as-is.

@davidfowl davidfowl force-pushed the davidfowl/podman-compose-support branch from f3f57cc to 3179952 Compare April 12, 2026 03:38
@davidfowl
Contributor Author

On the CancellationToken in ContainerRuntimeResolver:

Currently Lazy<Task> captures the factory delegate at construction time, so the first caller's token wins. But we intentionally use CancellationToken.None — the detection result is cached for the process lifetime, so it shouldn't be tied to any single operation's cancellation scope.

If we flow the token, a cancelled first caller poisons the cache with an OperationCanceledException for all subsequent callers. With CancellationToken.None, the detection always runs to completion (max 10s timeout per runtime) and the result is stable.

The tradeoff: a shutting-down process can't cancel the initial 10s detection probe. But in practice, detection completes in <1s on healthy systems.

Open to discussion if there's a scenario where flowing the token is important.
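The caching pattern under discussion can be sketched in Python with asyncio (the PR does this in C# with Lazy<Task<T>> and CancellationToken.None; RuntimeResolver, resolve, and probe are illustrative names): the probe starts once as a detached task, and every caller awaits the same cached result through a shield so a cancelled caller cannot poison the cache.

```python
import asyncio

class RuntimeResolver:
    def __init__(self, probe):
        self._probe = probe          # async callable that detects the runtime
        self._task: asyncio.Task | None = None

    async def resolve(self) -> str:
        if self._task is None:
            # Detached task: runs to completion regardless of callers.
            self._task = asyncio.ensure_future(self._probe())
        # shield: cancelling this await does not cancel the shared probe,
        # so a cancelled first caller cannot poison the cache.
        return await asyncio.shield(self._task)
```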

@davidfowl davidfowl force-pushed the davidfowl/podman-compose-support branch from 3179952 to 6d0be14 Compare April 12, 2026 03:48
- Restore soft-report behavior for compose up/down failures: use
  FailAsync without re-throwing, matching the original non-fatal
  behavior (JamesNK feedback)
- Add EnsureRuntimeAvailableAsync to Podman ComposeListServicesAsync
  override so missing podman binary gets an actionable error
- Make EnsureRuntimeAvailableAsync protected for subclass access
- Rename PascalCase ContainerRuntime locals to camelCase in
  ResourceContainerImageManager

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@davidfowl davidfowl force-pushed the davidfowl/podman-compose-support branch from 6d0be14 to 66ee164 Compare April 12, 2026 03:49
@github-actions
Contributor

🎬 CLI E2E Test Recordings — 68 recordings uploaded (commit 66ee164)

View recordings
Test Recording
AddPackageInteractiveWhileAppHostRunningDetached ▶️ View Recording
AddPackageWhileAppHostRunningDetached ▶️ View Recording
AgentCommands_AllHelpOutputs_AreCorrect ▶️ View Recording
AgentInitCommand_DefaultSelection_InstallsSkillOnly ▶️ View Recording
AgentInitCommand_MigratesDeprecatedConfig ▶️ View Recording
AllPublishMethodsBuildDockerImages ▶️ View Recording
AspireAddPackageVersionToDirectoryPackagesProps ▶️ View Recording
AspireUpdateRemovesAppHostPackageVersionFromDirectoryPackagesProps ▶️ View Recording
Banner_DisplayedOnFirstRun ▶️ View Recording
Banner_DisplayedWithExplicitFlag ▶️ View Recording
Banner_NotDisplayedWithNoLogoFlag ▶️ View Recording
CertificatesClean_RemovesCertificates ▶️ View Recording
CertificatesTrust_WithNoCert_CreatesAndTrustsCertificate ▶️ View Recording
CertificatesTrust_WithUntrustedCert_TrustsCertificate ▶️ View Recording
ConfigSetGet_CreatesNestedJsonFormat ▶️ View Recording
CreateAndRunAspireStarterProject ▶️ View Recording
CreateAndRunAspireStarterProjectWithBundle ▶️ View Recording
CreateAndRunEmptyAppHostProject ▶️ View Recording
CreateAndRunJavaEmptyAppHostProject ▶️ View Recording
CreateAndRunJsReactProject ▶️ View Recording
CreateAndRunPythonReactProject ▶️ View Recording
CreateAndRunTypeScriptEmptyAppHostProject ▶️ View Recording
CreateAndRunTypeScriptStarterProject ▶️ View Recording
CreateJavaAppHostWithViteApp ▶️ View Recording
CreateStartAndStopAspireProject ▶️ View Recording
CreateTypeScriptAppHostWithViteApp ▶️ View Recording
DashboardRunWithOtelTracesReturnsNoTraces ▶️ View Recording
DeployK8sBasicApiService ▶️ View Recording
DeployK8sWithGarnet ▶️ View Recording
DeployK8sWithMongoDB ▶️ View Recording
DeployK8sWithMySql ▶️ View Recording
DeployK8sWithPostgres ▶️ View Recording
DeployK8sWithRabbitMQ ▶️ View Recording
DeployK8sWithRedis ▶️ View Recording
DeployK8sWithSqlServer ▶️ View Recording
DeployK8sWithValkey ▶️ View Recording
DeployTypeScriptAppToKubernetes ▶️ View Recording
DescribeCommandResolvesReplicaNames ▶️ View Recording
DescribeCommandShowsRunningResources ▶️ View Recording
DetachFormatJsonProducesValidJson ▶️ View Recording
DoctorCommand_DetectsDeprecatedAgentConfig ▶️ View Recording
DoctorCommand_WithSslCertDir_ShowsTrusted ▶️ View Recording
DoctorCommand_WithoutSslCertDir_ShowsPartiallyTrusted ▶️ View Recording
GlobalMigration_HandlesCommentsAndTrailingCommas ▶️ View Recording
GlobalMigration_HandlesMalformedLegacyJson ▶️ View Recording
GlobalMigration_PreservesAllValueTypes ▶️ View Recording
GlobalMigration_SkipsWhenNewConfigExists ▶️ View Recording
GlobalSettings_MigratedFromLegacyFormat ▶️ View Recording
InitTypeScriptAppHost_AugmentsExistingViteRepoAtRoot ▶️ View Recording
InvalidAppHostPathWithComments_IsHealedOnRun ▶️ View Recording
LegacySettingsMigration_AdjustsRelativeAppHostPath ▶️ View Recording
LogsCommandShowsResourceLogs ▶️ View Recording
PsCommandListsRunningAppHost ▶️ View Recording
PsFormatJsonOutputsOnlyJsonToStdout ▶️ View Recording
PublishWithDockerComposeServiceCallbackSucceeds ▶️ View Recording
RestoreGeneratesSdkFiles ▶️ View Recording
RestoreSupportsConfigOnlyHelperPackageAndCrossPackageTypes ▶️ View Recording
RunFromParentDirectory_UsesExistingConfigNearAppHost ▶️ View Recording
SecretCrudOnDotNetAppHost ▶️ View Recording
SecretCrudOnTypeScriptAppHost ▶️ View Recording
StagingChannel_ConfigureAndVerifySettings_ThenSwitchChannels ▶️ View Recording
StartAndWaitForTypeScriptSqlServerAppHostWithNativeAssets ▶️ View Recording
StopAllAppHostsFromAppHostDirectory ▶️ View Recording
StopAllAppHostsFromUnrelatedDirectory ▶️ View Recording
StopNonInteractiveMultipleAppHostsShowsError ▶️ View Recording
StopNonInteractiveSingleAppHost ▶️ View Recording
StopWithNoRunningAppHostExitsSuccessfully ▶️ View Recording
UnAwaitedChainsCompileWithAutoResolvePromises ▶️ View Recording

📹 Recordings uploaded automatically from CI run #24298061757

@davidfowl
Contributor Author

E2E Validation on Fresh Ubuntu 24.04 VM

Tested PR build 13.3.0-pr.16074.g66ee164b on a fresh Ubuntu 24.04 VM (rebuilt between test rounds). TypeScript AppHost deploying httpbin API + Redis + Aspire Dashboard via Docker Compose.

Test Matrix

| Scenario | aspire doctor | aspire deploy | API |
| --- | --- | --- | --- |
| No runtimes | ❌ "No container runtime detected" + install links | N/A | N/A |
| Podman only (no Docker) | ✅ Podman ← active | ✅ SUCCEEDED | |
| Docker only (no Podman) | ✅ Docker ← active | ✅ SUCCEEDED | |
| Both installed, auto-detect | ✅ Docker ← active, Podman listed | ✅ SUCCEEDED (Docker) | |
| Both, ASPIRE_CONTAINER_RUNTIME=podman | | ✅ SUCCEEDED (Podman) | |
| Both, ASPIRE_CONTAINER_RUNTIME=docker | | ✅ SUCCEEDED (Docker) | |
| Podman forced, Docker installed | | ✅ Podman containers, Docker idle | |

Key Observations

  • Auto-detection correctly prefers Docker when both are available (matches DCP behavior)
  • Explicit ASPIRE_CONTAINER_RUNTIME override works in both directions
  • Podman works with native podman-compose (Python) — no docker-compose binary needed
  • aspire doctor only shows installed runtimes (no noise about missing runtimes)
  • Actionable error messages when compose fails: shows which runtime, suggests env var override

@davidfowl davidfowl merged commit 598c9d4 into main Apr 12, 2026
279 checks passed
@davidfowl davidfowl added the breaking-change Issue or PR that represents a breaking API or functional change over a prerelease. label Apr 12, 2026
@joperezr joperezr added this to the 13.3 milestone Apr 14, 2026
radical pushed a commit that referenced this pull request Apr 14, 2026
…ime abstraction (#16074)


Labels

breaking-change Issue or PR that represents a breaking API or functional change over a prerelease.


Development

Successfully merging this pull request may close these issues.

aspire deploy fails when using Podman without Docker Compose

5 participants