Understanding OpenStack Heat
OpenStack Heat is a powerful orchestration engine comparable to AWS CloudFormation or Azure Resource Manager. It processes templates, written in YAML or JSON, that describe infrastructure resources and the relationships between them.
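A template, at minimum, declares a version and a set of resources. A minimal sketch (the resource name `data_volume` is illustrative):

```yaml
heat_template_version: 2021-04-16

description: Minimal template declaring a single 10 GB volume

resources:
  data_volume:
    type: OS::Cinder::Volume   # resource types are namespaced by owning service
    properties:
      size: 10                 # gigabytes
```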
| Capability | Description |
|---|---|
| Resource Management | Creates and configures compute instances, networks, storage volumes, and security groups |
| Dependency Resolution | Automatically determines the correct order for resource creation based on dependencies |
| Configuration Management | Integrates with cloud-init and software deployment resources for post-boot configuration |
| Stack Updates | Supports in-place updates with change preview capabilities |
| Auto-scaling | Implements dynamic scaling policies based on resource metrics |
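For instance, the change preview noted under Stack Updates is exposed via the `--dry-run` flag (placeholder names shown):

```bash
# Preview the changes a stack update would make without applying them
openstack stack update -t <template.yaml> --dry-run <stack-name>
```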
Why Use Heat?
Unlike manual deployment processes or imperative scripts, Heat templates describe the desired end state of your infrastructure. Heat handles the complex orchestration required to bring that state into reality, managing dependencies, sequencing operations, and providing rollback capabilities when deployments fail.
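Rollback is opt-in when the stack is created (placeholder names shown):

```bash
# Have Heat delete partially created resources if the deployment fails
openstack stack create -t <template.yaml> --enable-rollback <stack-name>
```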
Prerequisites for RamNode
Before implementing Heat stacks on your RamNode VPS infrastructure, ensure you have the following:
OpenStack Environment
A functioning OpenStack deployment with Nova (compute), Neutron (networking), and Glance (images). RamNode's OpenStack VPS plans provide these by default.
Heat Service
The Heat orchestration service must be installed and configured.
OpenStack CLI
Install the OpenStack command-line client and Heat client.
Authentication
Valid OpenStack credentials (typically stored in an openrc file) with sufficient permissions.
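Assuming the file is saved as `openrc` (the name is a convention, not a requirement), load it into your shell:

```bash
# Load credentials into the environment
source openrc

# Variables such as OS_AUTH_URL and OS_USERNAME should now be set
env | grep ^OS_
```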
Install the client tooling, then confirm the orchestration service is available:

```bash
# Install the OpenStack and Heat command-line clients
pip install python-openstackclient python-heatclient

# Verify the Heat orchestration service is reachable
openstack orchestration service list
```

Example 1: Basic Web Server Stack
This foundational example demonstrates a simple Heat template that deploys a single Ubuntu instance configured as a web server. The template showcases fundamental Heat concepts including parameters, resources, and outputs.
```yaml
heat_template_version: 2021-04-16

description: Deploy a basic NGINX web server on Ubuntu

parameters:
  key_name:
    type: string
    description: SSH key pair name for instance access
    constraints:
      - custom_constraint: nova.keypair
  flavor:
    type: string
    description: Instance flavor (size)
    default: m1.small
  image:
    type: string
    description: Ubuntu image name
    default: ubuntu-22.04

resources:
  security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: 22
          port_range_max: 22
        - protocol: tcp
          port_range_min: 80
          port_range_max: 80
        - protocol: tcp
          port_range_min: 443
          port_range_max: 443
  web_instance:
    type: OS::Nova::Server
    properties:
      name: basic-webserver
      flavor: { get_param: flavor }
      image: { get_param: image }
      key_name: { get_param: key_name }
      security_groups:
        - { get_resource: security_group }
      user_data: |
        #!/bin/bash
        apt-get update
        apt-get install -y nginx
        systemctl enable nginx
        systemctl start nginx
        echo "<h1>Welcome to RamNode Heat Stack</h1>" > /var/www/html/index.html

outputs:
  instance_ip:
    description: The IP address of the web server
    value: { get_attr: [web_instance, first_address] }
```

Deployment Commands
```bash
# Create the stack
openstack stack create -t basic-webserver.yaml \
  --parameter key_name=mykey \
  --parameter flavor=m1.small \
  basic-web-stack

# Check stack status
openstack stack show basic-web-stack

# View stack outputs
openstack stack output show basic-web-stack instance_ip
```

Verification
Once the stack reaches CREATE_COMPLETE status, retrieve the instance IP from the outputs and access it via HTTP. You should see the custom welcome page confirming successful deployment.
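For example (stack and output names match the commands above):

```bash
# Pull the raw IP from the stack outputs, then request the page
IP=$(openstack stack output show basic-web-stack instance_ip -f value -c output_value)
curl "http://$IP"
```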
Example 2: High-Availability Load Balanced Application
This advanced example creates a production-ready architecture with multiple web servers behind a load balancer. The template demonstrates resource dependencies, network configuration, and the use of resource groups for managing multiple identical instances.
Architecture Components
- Private network with subnet
- 3 web server instances
- Load balancer pool with health monitors
- Floating IP for external access

Benefits

- High availability
- Automatic failover
- Health monitoring
- Traffic distribution
```yaml
heat_template_version: 2021-04-16

description: HA web application with load balancer

parameters:
  key_name:
    type: string
    description: SSH key pair name
  server_count:
    type: number
    default: 3
    description: Number of web servers
  flavor:
    type: string
    default: 2GB Standard
  image:
    type: string
    default: ubuntu-22.04
  external_network:
    type: string
    description: External network for floating IP

resources:
  private_network:
    type: OS::Neutron::Net
    properties:
      name: ha-webapp-network
  private_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: private_network }
      cidr: 10.0.0.0/24
      gateway_ip: 10.0.0.1
      dns_nameservers:
        - 8.8.8.8
        - 8.8.4.4
  router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: { get_param: external_network }
  router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: router }
      subnet: { get_resource: private_subnet }
  web_security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: 22
          port_range_max: 22
        - protocol: tcp
          port_range_min: 80
          port_range_max: 80
  web_server_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: server_count }
      resource_def:
        type: OS::Nova::Server
        properties:
          name: web-server-%index%
          flavor: { get_param: flavor }
          image: { get_param: image }
          key_name: { get_param: key_name }
          networks:
            - network: { get_resource: private_network }
          security_groups:
            - { get_resource: web_security_group }
          user_data: |
            #!/bin/bash
            apt-get update && apt-get install -y nginx
            echo "<h1>Server: $(hostname)</h1>" > /var/www/html/index.html
            systemctl enable nginx && systemctl start nginx
  load_balancer:
    type: OS::Octavia::LoadBalancer
    properties:
      vip_subnet: { get_resource: private_subnet }
  lb_listener:
    type: OS::Octavia::Listener
    properties:
      loadbalancer: { get_resource: load_balancer }
      protocol: HTTP
      protocol_port: 80
  lb_pool:
    type: OS::Octavia::Pool
    properties:
      listener: { get_resource: lb_listener }
      lb_algorithm: ROUND_ROBIN
      protocol: HTTP
  # Register each web server as a pool member; without these, the pool has
  # no backends to balance. The resource.%index%.first_address attribute
  # path pulls the matching server's IP out of the ResourceGroup.
  pool_members:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: server_count }
      resource_def:
        type: OS::Octavia::PoolMember
        properties:
          pool: { get_resource: lb_pool }
          address: { get_attr: [web_server_group, resource.%index%.first_address] }
          subnet: { get_resource: private_subnet }
          protocol_port: 80
  lb_health_monitor:
    type: OS::Octavia::HealthMonitor
    properties:
      pool: { get_resource: lb_pool }
      type: HTTP
      delay: 5
      max_retries: 3
      timeout: 5
  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: external_network }
      port_id: { get_attr: [load_balancer, vip_port_id] }

outputs:
  application_url:
    description: URL to access the application
    value:
      str_replace:
        template: http://FLOATING_IP
        params:
          FLOATING_IP: { get_attr: [floating_ip, floating_ip_address] }
```
```bash
# Deploy the HA stack
openstack stack create -t ha-webapp.yaml \
  --parameter key_name=mykey \
  --parameter server_count=3 \
  --parameter external_network=public \
  ha-webapp-stack

# Get the application URL
openstack stack output show ha-webapp-stack application_url
```

Load Balancer Verification
Execute curl against the application URL multiple times. You'll see requests distributed across the backend servers, indicated by the changing hostname in each response. The health monitor continuously checks each backend's availability, automatically removing failed instances from the pool.
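A quick way to see the rotation, reusing the stack output from above:

```bash
# Under ROUND_ROBIN, successive requests should hit different backends
URL=$(openstack stack output show ha-webapp-stack application_url -f value -c output_value)
for i in 1 2 3 4 5 6; do curl -s "$URL"; done
```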
Example 3: Auto-Scaling Application
This sophisticated example implements dynamic auto-scaling based on CPU metrics. The Heat template integrates with Aodh (OpenStack's alarming service) to automatically scale the application tier in response to load changes.
```yaml
heat_template_version: 2021-04-16

description: Auto-scaling web application

parameters:
  key_name:
    type: string
    description: SSH key pair name
  min_servers:
    type: number
    default: 2
    description: Minimum number of servers
  max_servers:
    type: number
    default: 6
    description: Maximum number of servers
  flavor:
    type: string
    default: m1.small
  image:
    type: string
    default: ubuntu-22.04

resources:
  server_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: { get_param: min_servers }
      max_size: { get_param: max_servers }
      resource:
        type: OS::Nova::Server
        properties:
          flavor: { get_param: flavor }
          image: { get_param: image }
          key_name: { get_param: key_name }
          metadata:
            metering.server_group: { get_param: "OS::stack_id" }
          user_data: |
            #!/bin/bash
            apt-get update && apt-get install -y nginx stress
            systemctl enable nginx && systemctl start nginx
  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: server_group }
      cooldown: 60
      scaling_adjustment: 1
  scale_down_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: server_group }
      cooldown: 60
      scaling_adjustment: -1
  cpu_alarm_high:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      metric: cpu_util
      aggregation_method: mean
      granularity: 300
      evaluation_periods: 1
      threshold: 70
      comparison_operator: gt
      resource_type: instance
      query:
        str_replace:
          template: '{"=": {"server_group": "STACK_ID"}}'
          params:
            STACK_ID: { get_param: "OS::stack_id" }
      alarm_actions:
        - { get_attr: [scale_up_policy, alarm_url] }
  cpu_alarm_low:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      metric: cpu_util
      aggregation_method: mean
      granularity: 300
      evaluation_periods: 3
      threshold: 30
      comparison_operator: lt
      resource_type: instance
      query:
        str_replace:
          template: '{"=": {"server_group": "STACK_ID"}}'
          params:
            STACK_ID: { get_param: "OS::stack_id" }
      alarm_actions:
        - { get_attr: [scale_down_policy, alarm_url] }

outputs:
  scale_up_url:
    description: URL to trigger scale up
    value: { get_attr: [scale_up_policy, alarm_url] }
  scale_down_url:
    description: URL to trigger scale down
    value: { get_attr: [scale_down_policy, alarm_url] }
```
```bash
# Deploy the auto-scaling stack
openstack stack create -t autoscaling-webapp.yaml \
  --parameter key_name=mykey \
  autoscale-stack

# Simulate CPU load to trigger scale-up (run this on one of the instances,
# e.g. over SSH; stress is installed by the user_data script)
stress --cpu 4 --timeout 300s

# Watch scaling events
openstack stack event list autoscale-stack
```

Scaling Behavior
When mean CPU utilization across the group exceeds 70% for one 5-minute evaluation period (the alarm's granularity), the high-CPU alarm triggers the scale-up policy, adding one instance. When it stays below 30% for three consecutive 5-minute periods, the scale-down policy removes one. The 60-second cooldown on each policy prevents scaling thrash.
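Because each alarm_url output is a pre-signed webhook, you can also trigger a policy manually to test it:

```bash
# Fire the scale-up webhook directly (no CPU load needed)
UP_URL=$(openstack stack output show autoscale-stack scale_up_url -f value -c output_value)
curl -X POST "$UP_URL"
```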
Troubleshooting Heat Deployments
Diagnosis Commands
```bash
# View detailed stack status
openstack stack show <stack-name> -f yaml

# List all resources with status
openstack stack resource list <stack-name>

# View stack events (deployment log)
openstack stack event list <stack-name>

# Validate template before deployment
openstack orchestration template validate -t <template.yaml>

# Check project quotas
openstack quota show
```

Common Issues
Quota Exceeded
If instance creation fails, check project quotas with openstack quota show. Request quota increases through RamNode support if needed.
Dependency Failures
When resources fail due to missing dependencies, review the event list to identify which resource failed first. The root cause is typically the earliest failure.
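python-heatclient also provides a shortcut that lists only the failed resources and their error messages (assuming a reasonably recent client):

```bash
# Surface only the failures, with full error text
openstack stack failures list <stack-name> --long
```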
Network Connectivity
If instances deploy but fail health checks, verify security group rules, subnet configuration, and router attachments.
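For example, you can map a Heat resource to its Neutron object and inspect the rules that were actually created (stack and resource names follow Example 2):

```bash
# Resolve the stack's security group resource to its Neutron ID
SG_ID=$(openstack stack resource show ha-webapp-stack web_security_group \
  -f value -c physical_resource_id)

# List the rules attached to it
openstack security group rule list "$SG_ID"
```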
Template Syntax
YAML indentation errors are common. Validate templates before deployment and use a proper YAML editor with syntax highlighting.
Timeout Errors
For complex stacks, increase the timeout parameter: openstack stack create --timeout 60 (in minutes).
Ready to Deploy with Heat?
Get started with RamNode's OpenStack-powered Cloud VPS infrastructure and leverage Heat for automated deployments.
