- Summary
- Why?
- Prerequisites
- Limitations
- Apple Silicon / Rosetta Support
- Maintenance
- Troubleshooting/FAQ
- Contribution
- Prior art
## Summary

This repository provides a Docker setup for AMD's Vivado FPGA development tools, specifically version 2025.2.
Important: This repository does not contain any Vivado software. Instead, it offers a recipe to build your own Docker container using a Vivado installer that you download from AMD. Due to its size and licensing restrictions, the built Docker image is not available for download from Docker Hub or other public registries.
The script builds a Docker container with a pre-configured installation of AMD's (formerly Xilinx) Vivado tools. Building the container is a time-consuming process (multiple hours), as is loading the image into Docker or saving it as an archive. Please allocate sufficient time.
By default, the script installs a selection of features from the "Vivado ML Standard" edition, which is free to use for development.
## Why?

This project aims to provide a repeatable, hermetic, and self-maintaining development environment using Docker containers. This ensures a consistent Vivado setup across different machines. If this is not a concern for you, a standard Vivado installation may be sufficient.
## Prerequisites

- Git
- Docker (with BuildKit enabled for faster builds)
- A downloaded Vivado Unified Installer archive (for which you hold a valid license).
## Limitations

This solution for dockerizing Vivado has the following known limitations:
- Supported Vivado Edition: This project currently only supports dockerizing the "Vivado ML Standard" edition. "Vivado ML Enterprise" (which requires a paid license and may have different installation mechanisms) is not supported. The repository https://github.com/esnet/xilinx-tools-docker seems to do the same, but for Vivado ML Enterprise. I have not tested this.
- Installer Availability: You must download the Vivado installer archive yourself directly from AMD. This repository cannot and will not provide the installer due to licensing and distribution restrictions.
- Testing Constraints: Thoroughly testing all possible configurations and Vivado versions is challenging due to the dependency on specific, large installer archives from AMD and the lengthy build times.
## Apple Silicon / Rosetta Support

This setup works on Apple Silicon Macs (M1/M2/M3/M4) via Rosetta x86_64 emulation in Docker (OrbStack or Docker Desktop). Key adaptations:

- `--platform linux/amd64` is added to all Docker commands (build & run).
- libudev stub: Vivado's license manager and WebTalk telemetry call `udev_enumerate_scan_devices()`, which crashes under Rosetta with `realloc(): invalid pointer`. A stub shared library at `/opt/udev_stub.so` is built into the image and loaded via `LD_PRELOAD` automatically when running on ARM64 hosts.
- `launch_runs` may crash under Rosetta because it spawns child processes that also trigger the libudev crash. For synthesis, prefer in-process commands (`synth_design`, `place_design`, `route_design`) over `launch_runs` in your TCL scripts.

On native x86_64 hosts, the Rosetta workarounds are harmless but unnecessary.
## Maintenance

- Download Vivado Installer: Obtain the Vivado Unified Installer archive from AMD. You are responsible for complying with all software licensing terms.
- Place Installer in Repo: Copy the downloaded archive into the top-level directory of this repository. For Vivado 2025.1, the archive name is typically `FPGAs_AdaptiveSoCs_Unified_SDI_2025.1_0530_0145.tar`.
- Generate `install_config.txt`:
  - Use the Xilinx setup program to generate the installation configuration file: `xsetup -b SetupGen`.
  - During the generation process, select the "Vivado ML Standard" edition.
  - The setup program will create `install_config.txt` in `$HOME/.Xilinx/`. Copy this file into the root directory of this repository.
  - Edit the `Modules=` section within `install_config.txt` to enable the specific Vivado components you require. Change the `0` to a `1` for each desired module (e.g., `Vivado Simulator:1`).
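Flipping a module flag can also be scripted. The helper below is a hypothetical sketch (not part of this repository) that assumes modules appear as `Name:0` entries, as described above; the demo file name is a placeholder.

```shell
# Hypothetical helper: enable one module by rewriting "Name:0" to
# "Name:1" in the given config file. Keeps a .bak backup.
enable_module() {
  module="$1"
  config="$2"
  sed -i.bak "s/${module}:0/${module}:1/" "$config"
}

# Example on a demo file (use your real install_config.txt instead):
printf 'Modules=DocNav:0,Vivado Simulator:0\n' > install_config.demo.txt
enable_module "Vivado Simulator" install_config.demo.txt
cat install_config.demo.txt
# → Modules=DocNav:0,Vivado Simulator:1
```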
Navigate to the repository's root directory and run:
```shell
make HOST_TOOL_ARCHIVE_NAME=FPGAs_AdaptiveSoCs_Unified_SDI_2025.1_0530_0145.tar build
```

The build process is lengthy. See the FAQ section for more details on build times and optimizations.
Approximate durations for key steps:
- Loading archive into build context: ~0 min (skipped due to bind mount)
- Copying archive into container: ~0 min (skipped due to bind mount)
- Unpacking archive: ~30 min
- Vivado installation: ~30 min
- Exporting Docker image layers: ~90 min
After a successful build, you can save the Docker image to a `.tar` archive:

```shell
make save
```

This archive (e.g., `xilinx-vivado.docker.tgz`) can be transferred to other machines. The image is too large for Docker Hub and is not hosted there.
To load the image from an archive:

```shell
docker load -i xilinx-vivado.docker.tgz
```

Note: Loading very large Docker images can sometimes be unreliable. See the FAQ section for more details.
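If a load fails transiently, a small retry wrapper can help. This is a generic sketch, not a script shipped with this repository:

```shell
# Generic retry helper: run a command up to 3 attempts, returning as
# soon as it succeeds.
retry() {
  n=1
  until "$@"; do
    [ "$n" -ge 3 ] && return 1
    echo "attempt $n failed, retrying..." >&2
    n=$((n + 1))
  done
}

# Usage:
#   retry docker load -i xilinx-vivado.docker.tgz
```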
Once the image is loaded into Docker, start Vivado using:

```shell
make run
```

Or use `run.vivado.sh` directly with environment overrides:

```shell
# Interactive TCL console
VIVADO_CMD="vivado -mode tcl" ./run.vivado.sh

# Batch synthesis
SRC_DIR=/path/to/fpga/project WORK_DIR=/path/to/output \
VIVADO_CMD="vivado -mode batch -source /src/build.tcl" \
./run.vivado.sh
```

`run.vivado.sh` environment variables:
| Variable | Default | Description |
|---|---|---|
| `VIVADO_VERSION` | `2025.2` | Vivado version to use |
| `SRC_DIR` | current directory | Host directory mounted at `/src` |
| `WORK_DIR` | current directory | Host directory mounted at `/work` |
| `VIVADO_CMD` | `vivado` (GUI) | Command to run inside container |
| `ROSETTA` | auto-detect | Set to `1` to force the libudev stub |
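The `ROSETTA` auto-detect can be pictured as something like the sketch below. This is illustrative only; the actual logic in `run.vivado.sh` may differ.

```shell
# Illustrative auto-detect: use the stub when forced via ROSETTA=1, or
# when the host CPU is arm64 (the amd64 image then runs under Rosetta).
need_udev_stub() {
  [ "${ROSETTA:-}" = "1" ] && return 0
  [ "$(uname -m)" = "arm64" ] && return 0
  return 1
}

if need_udev_stub; then
  STUB_ENV="-e LD_PRELOAD=/opt/udev_stub.so"
else
  STUB_ENV=""
fi
echo "extra docker run args: $STUB_ENV"
```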
## Troubleshooting/FAQ

Q: Why does the Docker build take so long (several hours)?
A: The Vivado installer is very large, and the installation process itself is complex. Several steps contribute to the long duration: unpacking the archive, running the Vivado installer, and finally exporting the numerous layers of the resulting Docker image. (Note: Using Docker BuildKit with bind mounts, as done in this repository, significantly speeds up the process by skipping the need to load the multi-gigabyte archive into the build context and copy it into the container. This optimization was contributed by @gretel in PR #5). The overall process will still be lengthy.
Q: The Docker image is over 200GB. Is this normal?
A: Yes, this is unfortunately normal. Vivado is a comprehensive tool suite, and a full installation contains a very large number of files and libraries, leading to a massive Docker image.
Q: My Docker build fails with errors related to X11 or display servers. What can I do?
A: This script builds Vivado in a headless environment (without a graphical
display). Some Vivado installation options or components might require an X11
display server during the installation itself. This script does not support such
options. Ensure your `install_config.txt` only selects components compatible
with a headless installation. The default "Vivado ML Standard" components are
generally compatible.
Q: How do I choose which Vivado components are installed?
A: You can customize the installation by editing the `install_config.txt` file
before starting the build. This file is generated by the Xilinx setup program
(`xsetup -b SetupGen`). In the `Modules=` section of this file, you can enable
or disable specific components by changing their value from `:0` (disabled) to
`:1` (enabled). For example, to enable the Vivado Simulator, ensure the line
reads `Vivado Simulator:1`.
Q: Vivado crashes with `realloc(): invalid pointer` on Apple Silicon.
A: This is a known issue with Rosetta x86_64 emulation. Vivado's license
manager calls `udev_enumerate_scan_devices()`, which triggers a crash in
glibc's allocator under Rosetta. The image includes a libudev stub at
`/opt/udev_stub.so` that provides no-op implementations. `run.vivado.sh`
applies this automatically on ARM64 hosts. If running Vivado manually inside
the container, add `export LD_PRELOAD=/opt/udev_stub.so` before sourcing
`settings64.sh`.
Q: `launch_runs` crashes but `synth_design` works. Why?
A: `launch_runs` spawns child processes which each independently load
libudev. Under Rosetta, these children crash even with `LD_PRELOAD` set
(the preload may not propagate correctly to all children). Use in-process TCL
commands instead: `synth_design`, `opt_design`, `place_design`,
`route_design`, `write_bitstream`.
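A minimal in-process flow might look like the sketch below. The source file, part number, and top-module name are placeholders; adapt them to your project.

```shell
# Write a minimal non-project-mode build.tcl that uses only in-process
# commands (no launch_runs). Part and module names are placeholders.
cat > build.tcl <<'EOF'
read_verilog /src/top.v
synth_design -top top -part xc7a35ticsg324-1L
opt_design
place_design
route_design
write_bitstream -force /work/top.bit
EOF
```

Run it with `VIVADO_CMD="vivado -mode batch -source /src/build.tcl" ./run.vivado.sh`.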
Q: `docker load -i xilinx-vivado.docker.tgz` fails or takes many attempts. Any advice?
A: Loading extremely large Docker image archives can be unreliable with some versions or configurations of Docker. Ensure you have sufficient disk space in your Docker daemon's storage location (check Docker settings). Trying the command again sometimes helps. If persistent issues occur, consider checking Docker daemon logs for more specific errors or consulting Docker community forums for advice on handling large images.
## Contribution

Contributions are welcome! Please feel free to submit pull requests or open issues.
## Prior art

This repo was not built in a vacuum. I consulted a number of resources on the internet.
- Dockerizing Xilinx tools: a discussion on Reddit, which bootstrapped this work.
- Xilinx tools docker: the freshest piece of instruction that I could find.
- Xilinx Vivado with Docker and Jenkins. Does what it says on the tin.
- Xilinx Vivado/Vivado HLS from CERN.
- Xilinx guides about Docker, which I'm not sure helped at all.
- AMD guides about Vivado on Kubernetes et al.
- Install Xilinx Vivado using Docker [link broken?], another blog recount of the process.
- Run GUI applications in Docker or podman containers.
- Dockerized Vivado ML Enterprise by esnet.