Ansible: Copying One Unique File to Each Server in a Group

Each host can look up its own index in the list of play hosts, zero-pad it, and use it to build the filename it should copy. For example:

  tasks:
    - set_fact:
        padded_host_index: "{{ '{0:03d}'.format(play_hosts.index(inventory_hostname)) }}"

    - copy: src=/mine/split_{{ padded_host_index }}.xz dest=/data/
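
Note that play_hosts is deprecated in recent Ansible releases. A sketch of the same tasks using the current magic variable (ansible_play_hosts lists all hosts in the play; if the play uses serial, ansible_play_batch is the closer match):

  tasks:
    # same idea with the non-deprecated variable; behaves like the original
    # as long as the play does not use "serial"
    - set_fact:
        padded_host_index: "{{ '{0:03d}'.format(ansible_play_hosts.index(inventory_hostname)) }}"

    - copy:
        src: "/mine/split_{{ padded_host_index }}.xz"
        dest: /data/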

How to load only the unique items of lists stored in separate files, identified via the same key path, into a single variable using Ansible?

A simpler approach would be to fetch the files, e.g.

    - fetch:
        dest: "{{ fetch_dir }}"
        src: "{{ item.path }}"
      loop: "{{ folder_files.files }}"

Given that the remote host is "test_11" and "fetch_dir=fetch", this gives

shell> tree fetch/test_11/myfolder/
fetch/test_11/myfolder/
├── file-1.yml
├── file-2.yml
├── file-3.yml
├── file-4.yml
└── file-5.yml

Then, collect the lists, e.g.

    - set_fact:
        tmp_fact: "{{ tmp_fact|default([]) +
                      (lookup('file', fetch_dir ~ '/' ~
                                      inventory_hostname ~ '/' ~
                                      item.path)|
                       from_yaml)['same-root-key']['same-list-key'] }}"
      loop: "{{ folder_files.files }}"

gives

  tmp_fact:
  - a
  - b
  - c
  - y
  - b
  - a
  - x

Then, select the unique items

    - set_fact:
        result: "{{ tmp_fact|unique }}"

gives

  result:
  - a
  - b
  - c
  - y
  - x

Q: "The solution ... downloads files ... in contrast to the solution ... in the question (section: Idea to concept)"

A: You might want to reconsider the concept. fetch is more user-friendly than slurp, and the proposed solution is idempotent, i.e. the files are downloaded only when they have changed. Running the task repeatedly

    - fetch:
        dest: "{{ fetch_dir }}"
        src: "{{ item.path }}"
      loop: "{{ folder_files.files }}"
      loop_control:
        label: "{{ item.path }}"

gives

TASK [fetch] *************************************************************
ok: [test_11] => (item=/myfolder/file-1.yml)
ok: [test_11] => (item=/myfolder/file-4.yml)
ok: [test_11] => (item=/myfolder/file-3.yml)
ok: [test_11] => (item=/myfolder/file-2.yml)
ok: [test_11] => (item=/myfolder/file-5.yml)

As a result, the playbook

- hosts: test_11
  gather_facts: false
  vars:
    fetch_dir: fetch
  tasks:
    - find:
        paths: "/myfolder"
        patterns: "file-*.yml"
      register: folder_files

    - fetch:
        dest: "{{ fetch_dir }}"
        src: "{{ item.path }}"
      loop: "{{ folder_files.files }}"
      loop_control:
        label: "{{ item.path }}"

    - set_fact:
        tmp_fact: "{{ tmp_fact|default([]) +
                      (lookup('file', fetch_dir ~ '/' ~
                                      inventory_hostname ~ '/' ~
                                      item.path)|
                       from_yaml)['same-root-key']['same-list-key'] }}"
      loop: "{{ folder_files.files }}"

shows no changes when run repeatedly

PLAY RECAP **************************************************************
test_11: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

In your concept, you use slurp, which always transfers the data from the remote host. The fetch module is more efficient: it compares checksums and transfers the data only if the files differ. In addition, fetch supports check_mode.
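
For comparison, a slurp-based variant could look like the sketch below. slurp returns the file content base64-encoded, so it must be decoded before parsing:

    - slurp:
        src: "{{ item.path }}"
      loop: "{{ folder_files.files }}"
      register: slurped

    - set_fact:
        # each result carries the base64 content and the original loop item
        tmp_fact: "{{ tmp_fact|default([]) +
                      (item.content|b64decode|from_yaml)['same-root-key']['same-list-key'] }}"
      loop: "{{ slurped.results }}"
      loop_control:
        label: "{{ item.item.path }}"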

Varying the src filename in an ansible copy loop based on criteria

It sounds like you would find lookup("fileglob", ...) helpful in this case, in order to know which local matching files are available before running that copy task:

    - name: sniff out potential overrides on the controller machine
      set_fact:
        # regrettably, jinja2 does not seem to support dict comprehensions
        simple_copy_overrides: >-
          {%- set result = {} -%}
          {%- for it in simple_copy -%}
          {%- set _ = result.update({
                it.src: lookup("fileglob", it.src + ".*", wantlist=True)
              }) -%}
          {%- endfor -%}
          {{ result }}

    - copy:
        src: "{{ simple_copy_overrides[item.src][0] if simple_copy_overrides[item.src] else item.src }}"
        ...

You can of course use more complex logic if you wish to restrict the override to names ending in "." + inventory_hostname or similar, but that's the gist of it.
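
A sketch of that restriction, assuming per-host overrides sit next to the default file under a "<src>.<inventory_hostname>" naming scheme (the simple_copy list of src/dest pairs is carried over from above, and dest is an assumed key):

    - name: copy, preferring a host-specific override when one exists
      copy:
        # appending [item.src] makes the expression fall back to the
        # default file when the glob matches nothing
        src: "{{ (lookup('fileglob', item.src ~ '.' ~ inventory_hostname, wantlist=True)
                  + [item.src]) | first }}"
        dest: "{{ item.dest }}"
      loop: "{{ simple_copy }}"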

Ansible: Copy file into another user's home directory

I created the ansible user on my child node to test out multiple users/isolate ansible. Maybe this is not a good idea?

Having a specific user for your deployments, with full escalation rights on your target hosts, is the most common setup for running Ansible.

Is what I'm trying to do possible?

Absolutely. If you have correctly set escalation rights for your ansible user as mentioned, all you are missing in your task or play is become: true. At play level, it will affect all tasks in that play:

---
- name: script transfer practice
  hosts: devdebugs
  remote_user: ansible
  become: true

  # here goes the rest of your play....

At task level, it will only affect the given task.

    - name: Copy file with owner and permissions
      ansible.builtin.copy:
        src: /home/ubuntu/files/test.txt
        dest: /home/ubuntu/test.txt
        owner: ubuntu
        group: ubuntu
        mode: '0600'
      become: true

As reported by @SipSeb in the comments, you can also enable become for an entire playbook run with the -b/--become flag on the ansible(-playbook) command line.
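
For example (the playbook name is illustrative):

    shell> ansible-playbook --become playbook.yml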

I couldn't find much reading on this

Probably because you are new to Ansible and do not know exactly what to look for. For this particular subject, a good starting point is understanding Ansible privilege escalation.

Ansible - Copy command output to file for multiple hosts

If you're always writing to a file named output.txt, then of course you only see output for a single host: for each host, Ansible rewrites the file with new data. There is no magic that tells Ansible to append to the file.

The easiest solution is to write the output to files named after each host, like this:

    - name: copy output to file
      copy:
        content: "{{ output.stdout[0] }}"
        dest: "output-{{ inventory_hostname }}.txt"

If you want, you could add a task at the end of your playbook to concatenate all those files together.
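
A sketch of such a task, assuming the per-host files were written on the controller (e.g. by adding delegate_to: localhost to the copy task above); the output-all.txt name is illustrative:

    - name: concatenate the per-host output files
      # shell (not command) is needed for the glob and the redirection
      shell: cat output-*.txt > output-all.txt
      delegate_to: localhost
      run_once: true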


