Provision Proxmox VMs Using Pulumi and Ansible
Introduction
Creating and configuring virtual machines manually is time-consuming and error-prone. Infrastructure as Code (IaC) addresses this by defining environments in code so setups can be replicated or updated reliably. In our case, we use Pulumi (for provisioning) and Ansible (for configuration) to fully automate VM lifecycles on Proxmox. Pulumi is a modern IaC platform that uses general-purpose programming languages to manage cloud resources, while Ansible is an open-source automation engine for provisioning and configuration management. Importantly, these tools complement each other: Pulumi creates and starts machines, and Ansible applies settings on them, as is typical of IaC workflows.
You will learn:
- How to set up and configure a new Pulumi TypeScript project for Proxmox VM provisioning
- How to integrate Ansible with Pulumi outputs, generating inventories and playbooks
- How to write Ansible tasks to wait for Cloud-Init and install software (e.g., Docker) on new VMs
- How to use Pulumi configuration to define Proxmox provider settings, VM parameters, SSH keys, and pre-installed Docker images
- How to orchestrate the end-to-end process so that running pulumi up brings up a new VM that is fully configured automatically
Prerequisites
- Ansible 2.18.6
- Pulumi 3.177
- A VM template with cloud-init support, which you can create by following Making a Ubuntu 24.04 VM Template for Proxmox and CloudInit
Step 1: Create and Prepare Your Pulumi Project
- Open a terminal in your project directory.
- Initialize a new Pulumi TypeScript project. Run:
pulumi new typescript --name "vm" --stack "dev" --non-interactive -y
This creates a Pulumi project named “vm” with a development stack. It will scaffold a basic TypeScript setup.
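The exact files vary by Pulumi version, but the scaffold typically looks like this (Pulumi.dev.yaml appears once you set stack configuration in Step 5):

.
├── Pulumi.yaml    # project metadata
├── index.ts       # program entry point
├── package.json
└── tsconfig.json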
- Install required NPM dependencies for Proxmox and remote commands:
npm install \
@muhlba91/pulumi-proxmoxve@^7.1.0 \
@pulumi/command@^1.1.0 \
@pulumi/pulumi@^3.177.0 \
@pulumi/tls@^5.2.0 \
sshpk@^1.18.0
- Install development tools and type definitions to support TypeScript:
npm install --save-dev \
@types/node@^18 \
@types/nunjucks@^3.2.6 \
@types/sshpk@^1.17.4 \
typescript@^5.0.0
Step 2: Set Up Ansible Integration
- Create an ansible/ directory in your project root. This will hold playbooks, templates, and inventory scripts:
mkdir -p ./ansible
- Generate the dynamic inventory script. In ansible/inventory.js, write a function that Pulumi will use to produce an Ansible JSON inventory. For example:
export default function (ctx) {
  const hosts = {};
  // Map each VM Pulumi created to an Ansible host entry.
  ctx.hostsCfg.forEach(({ name, ip, username, password }) => {
    hosts[name] = {
      ansible_host: ip,
      ansible_user: username,
      ansible_password: password
    };
  });
  return {
    all: {
      vars: {
        // Image list from the Pulumi `docker` config (see Step 5).
        docker_images: ctx.dockerCfg.images
      },
      hosts: hosts
    }
  };
}
Note: This script takes Pulumi’s output context (ctx) and maps VM names, IPs, and credentials into an Ansible inventory format. Pulumi will later call this to create inventory.json.
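For the example VM configured in Step 5, the generated inventory.json would look roughly like this (values are illustrative):

{
  "all": {
    "vars": {
      "docker_images": ["harbor..."]
    },
    "hosts": {
      "littlePig": {
        "ansible_host": "10.3.0.201",
        "ansible_user": "user",
        "ansible_password": "password"
      }
    }
  }
}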
- Prepare an Ansible configuration file (e.g., ansible/ansible.cfg) if needed, so Ansible knows how to use this dynamic inventory and SSH keys. (You can specify inventory = ./ansible/inventory.json and disable host key checking, for example; a minimal sketch follows below.)
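A minimal ansible/ansible.cfg might look like this (a sketch; paths are relative to the project root, where Step 8 invokes ansible-playbook via ANSIBLE_CONFIG):

[defaults]
inventory = ./workspace/ansible/inventory.json
private_key_file = ./workspace/id_rsa
host_key_checking = False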
By organizing playbooks and inventories under ansible/, we keep a clear structure. The inventory script bridges Pulumi and Ansible, producing an up-to-date host list based on the VMs Pulumi creates.
Step 3: Handle Cloud-Init Properly on Ubuntu 24.04
Ubuntu cloud images use cloud-init for initial setup on first boot (networking, SSH keys, user accounts, etc.). We must wait for cloud-init to finish before running other Ansible tasks, or else those tasks may fail. To do this, create a dedicated Ansible playbook ansible/cloud-init.yml:
---
- name: Waiting for Cloud-Init to complete in Ubuntu 24.04.
  hosts: all
  connection: ssh
  gather_facts: no
  ignore_unreachable: yes
  tasks:
    - name: Wait for SSH connection (before cloud-init wait)
      ansible.builtin.wait_for_connection:
        delay: 5
        timeout: 300
    - block:
        - name: Wait for cloud-init (1st try)
          ansible.builtin.shell: cloud-init status --wait
          register: cloud_init_result
          changed_when: false
          ignore_unreachable: true
      always:
        - name: Wait for SSH connection after reboot (rescue)
          ansible.builtin.wait_for_connection:
            delay: 5
            timeout: 300
        - name: Wait for cloud-init (2nd try after reboot)
          ansible.builtin.shell: cloud-init status --wait
          register: cloud_init_result
          changed_when: false
...
This playbook does the following:
- Uses wait_for_connection to ensure SSH is available (Pulumi’s VM is up).
- Runs cloud-init status --wait to pause until cloud-init completes.
- In case the VM reboots during cloud-init, it waits again for SSH and then checks the status a second time.
This ensures that all cloud-init user data (like setting up SSH keys or initial packages) is done before we run our main configuration. (Cloud-init is commonly used for first-boot initialization in Proxmox+Ubuntu setups.)
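If you want to verify this behavior by hand on a freshly booted VM (assuming the example IP and user from Step 5), you can run the same check over SSH; an exit code of 0 means cloud-init finished without errors:

ssh user@10.3.0.201 'cloud-init status --wait; echo "exit code: $?"'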
Step 4: Create an Ansible Playbook
Now write an actual configuration playbook that will run after the VM is ready. For example, to install Docker and pull images, do:
- Install the necessary Ansible roles. For instance, use geerlingguy.docker to install Docker:
ansible-galaxy role install geerlingguy.docker
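Note that the playbook below also uses the community.docker collection, which ships with the full Ansible package but not with bare ansible-core. To keep both dependencies declared in one place, you can instead list them in ansible/requirements.yml (a sketch; add version pins as you see fit):

roles:
  - name: geerlingguy.docker
collections:
  - name: community.docker

Then install everything with:

ansible-galaxy install -r ansible/requirements.yml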
- Write ansible/playbook.yml with the tasks. For example:
---
- name: Install and configure Docker
  hosts: all
  become: true
  serial: 1
  vars:
    docker_install_python_sdk: true
    docker_install_compose_plugin: true
    docker_ok: true
  roles:
    - role: geerlingguy.docker
  tasks:
    - name: Install docker-compose
      ansible.builtin.apt:
        name: docker-compose
    - name: Pull required Docker images
      community.docker.docker_image:
        name: "{{ item }}"
        source: pull
      loop: "{{ docker_images }}"
      register: image_pull_results
      retries: 10
      delay: 10
      until: image_pull_results.failed is not defined or not image_pull_results.failed
...
This playbook does the following:
- Uses the geerlingguy.docker role (from Ansible Galaxy) to install Docker and its Python requirements.
- Installs docker-compose via the system package manager.
- Pulls the list of Docker images specified by the docker_images variable (defined in Pulumi config), retrying up to 10 times to handle network or registry delays.
Using pre-built roles (like geerlingguy.docker) speeds up writing playbooks, as Ansible is designed for reusable, modular automation. In this step, we ensure that once the VM is provisioned, it ends up with Docker installed and images pre-fetched according to our needs.
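Once the full pipeline is wired up (Steps 7–9), you can sanity-check the result with an ad-hoc Ansible command against the generated inventory (paths assume the workspace layout from Step 7; -b escalates privileges for the Docker CLI):

ansible all -i ./workspace/ansible/inventory.json \
  --private-key ./workspace/id_rsa -b \
  -m ansible.builtin.command -a "docker image ls"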
Step 5: Configure VM Settings via Pulumi Config
We will use Pulumi’s configuration system (pulumi config) to define parameters for the Proxmox provider, SSH keys, the VM itself, and Docker images. These values will be loaded into our Pulumi program.
- Provider: Add Proxmox API details. In Pulumi.dev.yaml:
config:
  ...
  vm:provider:
    endpoint: https://proxmox...
    insecure: true
    apiToken: ...
You can set these via the Pulumi CLI:
pulumi config set --path "provider.endpoint" {endpoint}
pulumi config set --path "provider.insecure" {insecure} --type bool
pulumi config set --secret --path "provider.apiToken" {token}
- SSH Keys: Define public keys for the VM’s default user:
config:
  ...
  vm:keys:
    - ssh-rsa AAAAB3NzaC... user1@example.com
    - ssh-rsa AAAAB3NzaD... user2@example.com
You can set these via the Pulumi CLI:
pulumi config set --path 'keys[0]' 'ssh-rsa AAAAB3NzaC... user1@example.com'
pulumi config set --path 'keys[1]' 'ssh-rsa AAAAB3NzaD... user2@example.com'
These keys will be injected via Cloud-Init so we can SSH in.
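If a public key already lives in a file, you can read it into the config directly and avoid copy-paste errors (assuming the usual OpenSSH location):

pulumi config set --path 'keys[0]' "$(cat ~/.ssh/id_rsa.pub)"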
- VM Initialization: Under vm:VM, specify VM parameters matching VirtualMachineArgs. For example:
config:
  ...
  vm:VM:
    initialization:
      type: nocloud # using NoCloud for cloud-init
      datastoreId: local
      dns:
        servers:
          - 10.3.0.2
      ipConfigs:
        - ipv4:
            address: 10.3.0.201/24
            gateway: 10.3.0.2
      userAccount:
        username: user
        password: password
    nodeName: prox01
    agent:
      enabled: false
      trim: true
      type: virtio
    cpu:
      cores: 4
      sockets: 2
      type: kvm64
    clone:
      nodeName: prox01
      vmId: 900
    disks:
      - interface: scsi0
        datastoreId: local
        size: 32
        fileFormat: qcow2
    memory:
      dedicated: 4096
    name: littlePig
The clone block tells Proxmox which template (VM 900 on node prox01) to use as the base. The initialization block sets up cloud-init: the NoCloud datasource, network config, and the initial user. Pulumi will add the SSH keys into userAccount.keys later.
Important: You don’t need to specify access keys in userAccount. They are configured separately using the keys field.
- Docker images: Define in config which Docker images Ansible should pull:
config:
  vm:docker:
    images:
      - harbor...
      - ...
Or via CLI:
pulumi config set --path "docker.images[0]" {docker_image0}
pulumi config set --path "docker.images[1]" {docker_image1}
These config sections match Pulumi input types for the Proxmox provider and our helper types. We haven’t hard-coded any secrets (the API token and such are set as secure configs), making the setup reproducible and configurable per environment.
Step 6: Configure the VM in Code
Now we write the Pulumi code that ties together the config and prepares for provisioning:
- Define data types (e.g., Host and Docker) for later use:
export interface Host {
  name: string;
  ip: string;
  username: string;
  password: string;
}

export interface Docker {
  images: string[];
}
For the complete list of types, see the GitHub repository.
- Generate an RSA key pair (Pulumi TLS provider) for SSH access:
import * as tls from "@pulumi/tls";
export const genKey = new tls.PrivateKey("private-key", {
  algorithm: "RSA",
});
- Load Pulumi configuration:
import * as pulumi from "@pulumi/pulumi";
import { ProviderArgs, vm } from "@muhlba91/pulumi-proxmoxve";
const cfg = new pulumi.Config();
export const proxmoxProviderArgConf = cfg.requireObject<ProviderArgs>("provider");
export const keysConf = cfg.requireObject<string[]>("keys");
export const argsConf = cfg.requireObject<vm.VirtualMachineArgs>("VM");
export const dockerConf = cfg.requireObject<Docker>("docker");
This pulls in our YAML/CLI-configured values.
- Combine public keys into the VM args (existing keys + generated key):
export const publicKeys = pulumi
  .all([keysConf, genKey.publicKeyOpenssh])
  .apply(([cfgKeys, genPub]) => [...cfgKeys, genPub]);

export const vmArgs = {
  ...argsConf,
  initialization: pulumi.output(argsConf.initialization).apply(init => ({
    ...init,
    userAccount: {
      ...(init?.userAccount ?? {}),
      keys: publicKeys,
    },
  })),
};
Now vmArgs includes initialization.userAccount.keys with all SSH keys (Pulumi will feed these to cloud-init).
- Extract VM connection info to use for Ansible:
export const vmIp = vmArgs.initialization.apply(init => {
  const ip = init?.ipConfigs
    ?.map(cfg => cfg.ipv4?.address || cfg.ipv6?.address)
    .find(Boolean);
  if (!ip) throw new Error("No IPv4 or IPv6 address found in ipConfigs.");
  return ip.split("/")[0];
});

export const vmUserAccount = vmArgs.initialization.apply(init => {
  const user = init?.userAccount;
  if (!user || !user.username)
    throw new Error("Missing userAccount or username in VM initialization.");
  return {
    keys: user.keys ?? [],
    username: user.username,
    password: user.password ?? "",
  };
});
export const connectionArgs = {
  host: vmIp,
  port: 22,
  user: vmUserAccount.username,
  privateKey: pulumi.secret(genKey.privateKeyOpenssh),
};
Here we capture the VM’s IP and user credentials. We split the CIDR to get the IP without the mask. connectionArgs will be used by Pulumi’s command provider to SSH into the new VM.
- Compose the Ansible hosts configuration for our inventory:
export const hostsConf: pulumi.Output<Host[]> = pulumi.all([
  vmArgs.name,
  vmIp,
  vmUserAccount.username,
  vmUserAccount.password,
]).apply(([name, ip, username, password]) => [{
  name: name as string,
  ip: ip as string,
  username: username as string,
  password: password as string,
}]);
This hostsConf is an array of Host objects. We will later pass dockerConf (the Docker images) and this hostsConf into our inventory generation function.
These code snippets set up all necessary Pulumi outputs and computations for later steps. They effectively translate the Pulumi config into runtime values that will drive VM creation and Ansible invocation. (Full source code is available in the GitHub repository.)
Step 7: Prepare the Workspace
Before running Ansible, we need to transfer the Ansible files, generated inventory, and SSH keys into a working directory that Pulumi can use:
- Helper functions in utils.ts can read files and write to the workspace (see the sketch after this code block). Use them to copy all files from ./ansible into a ./workspace/ansible folder, then append the generated inventory:
const ansibleHash = pulumi
  .all([dockerConf, hostsConf])
  .apply(async ([dockerCfg, hostsCfg]) =>
    writeFiles("./workspace/ansible", [
      ...(await readFiles("./ansible")),
      await createAnsibleInventory("./ansible/inventory.js", {
        dockerCfg,
        hostsCfg,
      }),
    ])
  );
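The helpers readFiles, writeFiles, and createAnsibleInventory live in utils.ts in the repository. The sketch below shows one plausible shape for them, assumed for illustration rather than copied from the repository; the FileEntry fields mirror the objects used in the snippets here:

import * as crypto from "crypto";
import * as fs from "fs/promises";
import * as path from "path";

export interface FileEntry {
  parentPath: string;          // root directory the file belongs to
  path: string;                // path relative to that root
  name: string;
  data: Buffer;
  options?: { mode?: number }; // optional POSIX permissions
}

// Read every regular file directly under `root` into memory.
export async function readFiles(root: string): Promise<FileEntry[]> {
  const files: FileEntry[] = [];
  for (const name of await fs.readdir(root)) {
    const full = path.join(root, name);
    if ((await fs.stat(full)).isFile()) {
      files.push({ parentPath: root, path: name, name, data: await fs.readFile(full) });
    }
  }
  return files;
}

// Write entries under `dest` and return a content hash, so the resulting
// Pulumi output changes whenever any file changes (used by `triggers` in Step 9).
export async function writeFiles(dest: string, files: FileEntry[]): Promise<string> {
  const hash = crypto.createHash("sha256");
  await fs.mkdir(dest, { recursive: true });
  for (const f of files) {
    const target = path.join(dest, f.path);
    await fs.mkdir(path.dirname(target), { recursive: true });
    await fs.writeFile(target, f.data, { mode: f.options?.mode });
    hash.update(f.path).update(f.data);
  }
  return hash.digest("hex");
}

// Load the inventory module from Step 2, render it with the given context,
// and wrap the resulting JSON as a FileEntry ready for writeFiles.
export async function createAnsibleInventory(script: string, ctx: object): Promise<FileEntry> {
  const mod = await import(path.resolve(script));
  const inventory = (mod.default ?? mod)(ctx);
  const data = Buffer.from(JSON.stringify(inventory, null, 2));
  return { parentPath: "./", path: "inventory.json", name: "inventory.json", data };
}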
- Write SSH key files for use by the ansible-playbook commands:
const keysHash = pulumi
  .all([
    genKey.publicKeyOpenssh,
    genKey.privateKeyOpenssh,
  ]).apply(([publicKey, privateKey]) =>
    writeFiles("./workspace", [
      {
        parentPath: "./",
        path: "./id_rsa.pub",
        name: "id_rsa.pub",
        data: Buffer.from(publicKey),
        options: { mode: 0o644 },
      },
      {
        parentPath: "./",
        path: "./id_rsa",
        name: "id_rsa",
        data: Buffer.from(privateKey),
        options: { mode: 0o600 },
      },
    ])
  );
This writes out id_rsa and id_rsa.pub (the generated keys) into workspace/.
Important: Add the workspace/ directory to .gitignore so these secrets aren’t committed.
After this, the directory structure is roughly:
./workspace
├── ansible
│ ├── cloud-init.yml
│ ├── inventory.js
│ ├── inventory.json (generated)
│ └── playbook.yml
├── id_rsa (private SSH key)
└── id_rsa.pub (public SSH key)
The ansibleHash and keysHash outputs ensure Pulumi tracks these files. We haven’t run any commands yet, but now everything Ansible needs is staged.
Step 8: Create the VM with Pulumi
Now we use Pulumi to instruct Proxmox to create the VM using the parameters we set up.
- Initialize the Proxmox provider and create the VM resource:
import * as proxmoxve from "@muhlba91/pulumi-proxmoxve";
import { remote } from "@pulumi/command";

const provider = new proxmoxve.Provider("provider", proxmoxProviderArgConf);
const VMR = new proxmoxve.vm.VirtualMachine("VMR", vmArgs, { provider });
This creates a Proxmox VirtualMachine named VMR with all the vmArgs we configured. According to Pulumi’s docs, this resource manages a virtual machine on Proxmox (the provider itself may use SSH to reach the host node for certain operations). Our connectionArgs, by contrast, targets the new guest VM; we use it in the next step to confirm the machine is reachable.
- Wait for SSH to become available. We add a Pulumi remote.Command that simply echoes a message to confirm we can connect:
const connection = new remote.Command("check-ready", {
  connection: connectionArgs,
  create: "echo SSH is up",
}, { dependsOn: [VMR] });
This ensures Pulumi waits until the VM is up and SSH is accepting connections.
- Run the cloud-init wait playbook. Now that SSH is up, we execute the cloud-init.yml playbook using Ansible via a local command:
import { local } from "@pulumi/command";

const waitCloudInit = new local.Command("cloud-init", {
  create: pulumi.interpolate`
    ANSIBLE_CONFIG=./workspace/ansible/ansible.cfg \
    ANSIBLE_HOST_KEY_CHECKING=False \
    ansible-playbook -i ./workspace/ansible/inventory.json \
      --private-key ./workspace/id_rsa \
      ./workspace/ansible/cloud-init.yml`,
}, {
  dependsOn: [connection],
});
This local command invokes ansible-playbook against the generated inventory and waits for cloud-init to finish. We disable host key checking for convenience (ANSIBLE_HOST_KEY_CHECKING=False), since the freshly created host is not yet in known_hosts.
At this point, Pulumi has created the VM and Ansible has verified it’s ready (cloud-init done). Next, we perform the main configuration steps.
Step 9: Run Ansible Playbooks
Finally, after cloud-init is done, run our main Ansible playbook to install Docker and other software:
const playAnsiblePlaybook = new local.Command("playAnsiblePlaybook", {
  create: pulumi.interpolate`
    ANSIBLE_CONFIG=./workspace/ansible/ansible.cfg \
    ANSIBLE_HOST_KEY_CHECKING=False \
    ansible-playbook -i ./workspace/ansible/inventory.json \
      --private-key ./workspace/id_rsa \
      ./workspace/ansible/playbook.yml`,
  triggers: [ansibleHash], // rerun if playbook or inventory changes
}, {
  dependsOn: [waitCloudInit],
});
It applies the Docker installation and image pulls. The dependsOn ensures it only runs after cloud-init is finished. We also add triggers: [ansibleHash] so Pulumi will rerun this if any of the Ansible files change.
Step 10: Deploy the Infrastructure
With everything defined, run the Pulumi deployment by executing:
pulumi up
Pulumi will show a preview of resources to create (the VM, command invocations, etc.). Confirm the update. It will then carry out all the steps in order: provisioning the VM, waiting for SSH, running the cloud-init wait playbook, then running your main Ansible playbook.
After completion, you should see in the Proxmox GUI or via ssh user@10.3.0.201 (using your key) that the VM “littlePig” is running Ubuntu, has Docker installed, and the specified images are present.
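For a quick check from your workstation, using the generated private key and the example IP from Step 5 (cloud-init’s default user typically has passwordless sudo):

ssh -i ./workspace/id_rsa user@10.3.0.201 "docker --version && sudo docker image ls"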
Conclusion
By following this workflow, you achieve a fully automated, reproducible process for provisioning Proxmox VMs. Instead of manually clicking through the GUI and hand-editing configurations, Pulumi codifies the VM creation (node, disk, networking, cloud-init, etc.) and Ansible handles post-boot setup (installing Docker, etc.). This approach embodies IaC principles: manual infrastructure management is minimized, reducing errors and accelerating deployment. In particular, using Pulumi (provisioning) in tandem with Ansible (configuration) is a powerful, complementary IaC strategy. The result is a pipeline where pulumi up will reliably spawn a new VM, apply cloud-init, then run Ansible playbooks – yielding a ready-to-use server with minimal human intervention.
All source code and examples for this tutorial are available on GitHub: see the proxmox-vm-orchestrator repository. By adopting this automated workflow, teams can ensure consistent, scalable VM deployments in Proxmox, freeing developers and operators to focus on higher-level tasks.

