Attaching and mounting existing EBS volume to EC2 instance filesystem issue
The One Liner
Mount the partition (if the disk is partitioned):
sudo mount /dev/xvdf1 /vol -t ext4
Mount the disk (if not partitioned):
sudo mount /dev/xvdf /vol -t ext4
where:
/dev/xvdf is changed to the EBS Volume device being mounted
/vol is changed to the folder you want to mount to
ext4 is the filesystem type of the volume being mounted
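Whether to mount the disk or one of its partitions can be read straight from lsblk. Here is a small helper sketch (the function name and the exact lsblk flags are this example's own choices, not from the original answer): given `lsblk -nr -o NAME,TYPE` output for one disk, it prints the partition name if a partition exists, otherwise the disk name itself.

```shell
# pick_device: choose what to pass to `mount`.
# $1 is the output of `lsblk -nr -o NAME,TYPE` for the volume's disk.
pick_device() {
  local out="$1"
  local part disk
  part=$(printf '%s\n' "$out" | awk '$2 == "part" { print $1; exit }')
  disk=$(printf '%s\n' "$out" | awk '$2 == "disk" { print $1; exit }')
  # Prefer the partition when one exists; fall back to the bare disk.
  echo "${part:-$disk}"
}

pick_device "xvdf disk
xvdf1 part"               # prints xvdf1 (partitioned: mount the partition)
pick_device "xvdg disk"   # prints xvdg (no partition: mount the disk)
```

In practice you would run `lsblk -nr -o NAME,TYPE /dev/xvdf` and feed its output to the helper, then mount `/dev/$(pick_device "$out")`.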
Common Mistakes and How to Avoid Them:
✳️ Attached Devices List
Check your mount command for the correct EBS Volume device name and filesystem type. The following will list them all:
sudo lsblk --output NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT,UUID,LABEL
If your EBS Volume displays with an attached partition, mount the partition, not the disk.
✳️ If your volume isn't listed
If it doesn't show up, you didn't attach your EBS Volume in the AWS web console.
✳️ Auto Remounting on Reboot
These devices become unmounted again if the EC2 Instance ever reboots. To make them mount again on startup, add the volume to the server's /etc/fstab file.
Caution:
If you corrupt the /etc/fstab file, it can make your system unbootable. Read AWS's short article so you know how to check that you did it correctly:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html#ebs-mount-after-reboot
First:
With the lsblk command above, find your volume's UUID and FSTYPE.
Second:
Keep a copy of your original fstab file.
sudo cp /etc/fstab /etc/fstab.original
Third:
Add a line for the volume in sudo nano /etc/fstab.
The fields of fstab are separated by whitespace (tabs or spaces), and each line has the following form:
<UUID> <MOUNTPOINT> <FSTYPE> defaults,discard,nofail 0 0
Here's an example to help you; my own fstab reads as follows:
LABEL=cloudimg-rootfs / ext4 defaults,discard,nofail 0 0
UUID=e4a4b1df-cf4a-469b-af45-89beceea5df7 /var/www-data ext4 defaults,discard,nofail 0 0
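The fstab line can be assembled in the shell first, which makes the actual write to /etc/fstab a single reviewable step. A sketch, reusing the UUID from the example above as a stand-in for your own values:

```shell
# Substitute the UUID, mountpoint, and filesystem type that lsblk
# reported for YOUR volume; these values are just the example's.
UUID="e4a4b1df-cf4a-469b-af45-89beceea5df7"
MOUNTPOINT="/var/www-data"
FSTYPE="ext4"

# Build the full fstab line and inspect it before committing it.
FSTAB_LINE="UUID=$UUID $MOUNTPOINT $FSTYPE defaults,discard,nofail 0 0"
echo "$FSTAB_LINE"

# Once the line looks right, append it (requires root):
# echo "$FSTAB_LINE" | sudo tee -a /etc/fstab
```

The nofail option in the line keeps the system bootable even if the volume is missing at boot, which is a useful safety net while experimenting.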
That's it, you're done. Check for errors in your work by running:
sudo mount --all --verbose
You will see something like this if things are working correctly:
/ : ignored
/var/www-data : already mounted
Attaching a previous EBS Volume to a new EC2 Linux Instance
Yes, you can attach an existing EBS volume to an EC2 instance. There are a number of ways to do this depending on your tools of preference. I prefer the command line tools, so I tend to do something like:
ec2-attach-volume vol-VVVVVVVV --instance i-XXXXXXXX --device /dev/sdh
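The ec2-attach-volume tool belongs to the old EC2 API command-line tools, which have since been superseded by the unified AWS CLI. The equivalent attachment with `aws ec2 attach-volume` would look roughly like the sketch below; the volume ID, instance ID, and device name are placeholders to substitute.

```shell
# Placeholders - substitute your real volume ID, instance ID, and device.
VOLUME_ID="vol-VVVVVVVV"
INSTANCE_ID="i-XXXXXXXX"
DEVICE="/dev/sdh"

# Built as a string here so the full command can be reviewed before
# running; paste or eval it once the IDs are filled in.
CMD="aws ec2 attach-volume --volume-id $VOLUME_ID --instance-id $INSTANCE_ID --device $DEVICE"
echo "$CMD"
```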
You could also do this in the AWS console:
https://console.aws.amazon.com/ec2/home?#s=Volumes
Right click on the volume, then select [Attach Volume]. Select the instance and enter the device (e.g., /dev/sdh).
After you have attached the volume to the instance, you will want to ssh to the instance and mount the volume with a command like:
sudo mkdir -m000 /vol2
sudo mount /dev/xvdh /vol2
You can then access your old data and configuration under /vol2.
Note: The EBS volume and the EC2 instance must be in the same region and in the same availability zone to make the attachment.
Creating a file system on EBS volume, mounting it to EC2 instance and persisting data when instance is replaced with CDK
You almost got it - you just mount it and that's it. Formatting it does indeed wipe the data, the solution is simply to skip that step.
The docs you link to address this:
(Conditional) If you discovered that there is a file system on the device in the previous step, skip this step. If you have an empty volume, use the mkfs -t command to create a file system on the volume.
Warning: Do not use this command if you're mounting a volume that already has data on it (for example, a volume that was created from a snapshot). Otherwise, you'll format the volume and delete the existing data.
So the following should work:
# mount the EBS volume
sudo mkdir /data # make a directory on the EC2 machine
sudo mount /dev/sda1 /data # mount the volume on the directory that was created
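The "is there already a filesystem?" check from the docs (`sudo file -s <device>`) can be scripted so a mkfs never runs against a volume with data. A sketch: the function below classifies the one-line output of `file -s`, where a bare trailing "data" means no filesystem signature was found. The sample strings are illustrative, not captured from a real system.

```shell
# has_filesystem: succeed (0) if the `file -s` output indicates a
# filesystem, fail (1) if the device is blank (output ends in ": data").
has_filesystem() {
  case "$1" in
    *": data") return 1 ;;   # blank device - safe to mkfs
    *)         return 0 ;;   # filesystem present - do NOT mkfs
  esac
}

if has_filesystem "/dev/sda1: data"; then
  echo "filesystem present - skipping mkfs to preserve data"
else
  echo "blank - safe to format"
fi   # prints: blank - safe to format
```

In practice you would call it as `has_filesystem "$(sudo file -s /dev/sda1)"` and only run mkfs on the failure branch.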
Mounting Old EBS Volume to the new Instance - Amazon EC2
What you need to do is mount the old volume on the new instance. Go to the Amazon EC2 control panel, and click "Volumes" (under Elastic Block Store). Look at the attachment information for the old EBS volume. This will be something like <instance id> (<instance name>):/dev/sdg
Make a note of the path given here, so that'd be /dev/sdg in the example above. Then use SSH to connect to your new instance, and type mkdir /mnt/oldvolume and then mount /dev/sdg /mnt/oldvolume (or whatever the path given in the control panel was). Your files should now be available under /mnt/oldvolume. If this does not solve your problem, please post again with the output of your df command after doing all of this.
So, to recap, to use an EBS volume on an instance, you need to attach it to that instance using the control panel (or API tools), and then mount it on the instance itself.