Sunday, September 18, 2016

Host Key Verification and Ansible

I've been using Ansible to configure some instances on Amazon Web Services with a client, for two reasons:
  • To make the process repeatable in the future, in case we want to configure the same instance again, or another one.
  • Because I will be repeating it right away: I'm setting up more than one server to live behind a load balancer, and they are meant to be configured identically.
When you connect to a host for the first time over SSH, you are asked to verify the host key. When you use Ansible against a host for the first time, the same thing happens. If you are connecting to multiple hosts at the same time, though, bad things happen -- you get multiple prompts to verify host keys, and responding to them doesn't seem to work:

$ ansible servergroup -m ping
The authenticity of host '10.0.2.161 ()' can't be established.
ECDSA key fingerprint is SHA256:hEdMy3XKWV/zWobmSuwf+b6oI9xt4cYJzM1eAa2T8Ak.
Are you sure you want to continue connecting (yes/no)? 
The authenticity of host '10.0.1.79 ()' can't be established.
ECDSA key fingerprint is SHA256:N5iv0/+zRHk7UTsIQOUlzn2ZiU9L2xL+Fn153nlZdjs.
Are you sure you want to continue connecting (yes/no)? yes
Please type 'yes' or 'no': yes
Please type 'yes' or 'no': yes
Please type 'yes' or 'no': yes
Please type 'yes' or 'no': yes
Please type 'yes' or 'no': yes
Please type 'yes' or 'no': ^CProcess WorkerProcess-3:
Process WorkerProcess-2:
Traceback (most recent call last):
 [ERROR]: User interrupted execution

While I was pleased to read the toroid.org post about the bugs filed for this behavior, it doesn't seem to be fixed yet.

Happily, as long as you connect to a single host at a time, everything is fine, and once its key is accepted you can connect to the group:
$ ansible server-one -m ping
The authenticity of host '10.0.1.79 ()' can't be established.
ECDSA key fingerprint is SHA256:N5iv0/+zRHk7UTsIQOUlzn2ZiU9L2xL+Fn153nlZdjs.
Are you sure you want to continue connecting (yes/no)? yes
server-one | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
That's fine for a couple of servers, but it wouldn't be fun with a large cluster. If there's a better way, I haven't discovered it yet.
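
If you do end up with a larger group, you could at least script the one-at-a-time step. Here is a rough sketch (untested; it assumes the group is called servergroup and that --list-hosts prints the Ansible 2.x format, where the first line is a "hosts (N):" header) that walks the group serially so only one host key prompt appears at a time:

# Ask Ansible for the hosts in the group, skip the header line,
# then ping each host on its own so the prompts don't pile up.
for host in $(ansible servergroup --list-hosts | tail -n +2); do
    ansible "$host" -m ping
done

Once each host's key has been accepted into known_hosts, running against the whole group in parallel works normally.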
