Ruby - Append Content at the End of an Existing S3 File Using Fog

How to copy all files in a folder on s3 using Fog in Ruby

I don't think there is a direct way to do that per se, and that instead you would need to iterate over the appropriate files to do the move. I think it would look something like this:

require 'rubygems'
require 'fog'

# create a connection
connection = Fog::Storage.new({
  provider: 'AWS',
  aws_access_key_id: YOUR_AWS_ACCESS_KEY_ID,
  aws_secret_access_key: YOUR_AWS_SECRET_ACCESS_KEY
})

directory = connection.directories.get(BUCKET, prefix: '/foo/')

directory.files.each do |file|
  file.copy(BUCKET, "/bar/#{file.key.split('/').last}")
end

How to list all files in an S3 folder using Fog in Ruby

Use the prefix option on the directories.get method. Example:

def get_files(path, options)
  connection = Fog::Storage.new(
    provider: 'AWS',
    aws_access_key_id: options[:key],
    aws_secret_access_key: options[:secret]
  )
  connection.directories.get(options[:bucket], prefix: path).files.map do |file|
    file.key
  end
end
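
For reference, a hypothetical call (not from the original answer; the prefix, bucket name, and environment variable names are placeholders) would look like this:

keys = get_files('images/',
                 key: ENV['AWS_ACCESS_KEY_ID'],
                 secret: ENV['AWS_SECRET_ACCESS_KEY'],
                 bucket: 'my-bucket')
puts keys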

Set content_type of Fog storage files on s3

If you are just updating metadata (and not the body/content itself) you probably want to use copy instead of save. This is perhaps non-obvious, but that keeps the operation on the S3 side so that it will be MUCH faster.

The signature for copy looks like:

copy(target_directory_key, target_file_key, options = {})

So I think my proposed solution should look (more or less) like this:

directory.files.each do |f|
  content_type = case f.key.split(".").last
                 when "jpg"
                   "image/jpeg"
                 when "mov"
                   "video/quicktime"
                 end
  options = {
    'Content-Type' => content_type,
    'x-amz-metadata-directive' => 'REPLACE'
  }
  # copy the file over itself, replacing only the metadata
  puts "copied!" if f.copy(f.directory.key, f.key, options)
end

That should basically tell S3 "copy my file over the top of itself, but change this header". That way you don't have to download/reupload the file. This is probably the approach you want.

So, solution aside, still seems like you found a bug. Could you include an example of what you mean by "individually update each file"? Just want to make sure I know exactly what you mean and that I can see the working/non-working cases side by side. Also, how/why do you think it isn't updating the content-type (it might actually be updating it, but just not displaying the updated value correctly, or something like that). Bonus points if you can create an issue here to make sure I don't forget to address it: https://github.com/fog/fog/issues?state=open

ruby-fog: Delete an item from the object storage in less than 3 requests

I think this was already answered on the mailing list, but if you use #new on directories/files it will give you just a local reference (vs #get which does a lookup). That should get you what you want, though it may raise errors if the file or directory does not exist.

Something like this:

storage = get_storage(...) # S3 / OpenStack / ...
dir = storage.directories.new(key: bucket)

dir.files.create(key: key, body: body) # 1st request

# or:
dir.files.get(key) # 1st request

# or:
file = dir.files.new(key: key)

unless file.nil?
  file.destroy # 1st request
end

Working this way allows each of the three operations to complete in a single request, but it may raise errors if the bucket does not exist (trying to add a file to a non-existent bucket is an error). So it is more efficient, but it needs different error handling; see the sketch below. Conversely, you can make the extra #get requests if you need to be sure the bucket and file exist first.
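
As a rough illustration (not from the original answer; the bucket name, key, body, and credential lookup are placeholders), the single-request create could be wrapped like this:

require 'fog'

storage = Fog::Storage.new(
  provider: 'AWS',
  aws_access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)

dir = storage.directories.new(key: 'my-bucket') # local reference only, no request made yet

begin
  dir.files.create(key: 'path/to/file.txt', body: 'hello') # single PUT request
rescue Excon::Errors::NotFound
  # The bucket does not exist; #get would have surfaced this earlier,
  # at the cost of an extra request.
end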

Use Fog with Ruby to generate a Pre-signed URL to PUT a file in Amazon S3

I think put_object_url is indeed what you want. If you follow the url method back to where it is defined, you can see it is built on a similar underlying method called get_object_url (https://github.com/fog/fog/blob/dc7c5e285a1a252031d3d1570cbf2289f7137ed0/lib/fog/aws/models/storage/files.rb#L83). You should be able to do something similar by calling put_object_url on the fog_s3 object you already created above. It should end up looking like this:

headers = {}
options = { path_style: true }
url = fog_s3.put_object_url(bucket, object_path, expires, headers, options)

Note that, unlike get_object_url, there is an extra headers argument snuck in there (which you can use to do things like set Content-Type, I believe).

Hope that sorts it for you, but just let me know if you have further questions. Thanks!

Addendum

Hmm, seems there may be a bug related to this after all (I'm wondering now how much this portion of the code has been exercised). I think you should be able to work around it though (but I'm not certain). I suspect you can just duplicate the value in the options as a query param also. Could you try something like this?

headers = query = { 'Content-Type' => 'audio/wav' }
options = { path_style: true, query: query }
url = fog_s3.put_object_url(bucket, object_path, expires, headers, options)

Hopefully that fills in the blanks for you (and if so we can think some more about fixing that behavior within fog if it makes sense to do so). Thanks!
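
For completeness (this part is not from the original answer), whatever client performs the upload must send the same Content-Type header the URL was signed with, or S3 will reject the signature. A minimal sketch with Ruby's standard library, assuming the url from above and a local file recording.wav:

require 'net/http'
require 'uri'

uri = URI.parse(url) # the pre-signed URL generated by put_object_url

request = Net::HTTP::Put.new(uri)
request['Content-Type'] = 'audio/wav' # must match the header the URL was signed with
request.body = File.binread('recording.wav')

response = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
  http.request(request)
end

puts response.code # expect "200" on success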

Fog Gem - Access Denied on deleting S3 file

From what I see, you are setting the bucket name in your fog configuration to the literal string 'bucket-name', unless you edited it before posting here.
Your config/initializers/carrierwave.rb should look something like this:

CarrierWave.configure do |config|
  config.fog_credentials = {
    # Configuration for Amazon S3 should be made available through environment variables.
    # For local installations, export the env variables through the shell OR,
    # if using Passenger, set an Apache environment variable.
    #
    # On Heroku, follow http://devcenter.heroku.com/articles/config-vars
    #
    # $ heroku config:add S3_KEY=your_s3_access_key S3_SECRET=your_s3_secret S3_REGION=eu-west-1 S3_ASSET_URL=http://assets.example.com/ S3_BUCKET_NAME=s3_bucket/folder

    # Configuration for Amazon S3
    :provider => 'AWS',
    :aws_access_key_id => ENV['S3_KEY'],
    :aws_secret_access_key => ENV['S3_SECRET'],
    :region => ENV['S3_REGION']
  }

  # For testing, upload files to the local `tmp` folder.
  if Rails.env.test? || Rails.env.cucumber?
    config.storage = :file
    config.enable_processing = false
    config.root = "#{Rails.root}/tmp"
  else
    config.storage = :fog
  end

  config.cache_dir = "#{Rails.root}/tmp/uploads" # To let CarrierWave work on Heroku

  config.fog_directory = ENV['S3_BUCKET_NAME']
  config.s3_access_policy = :public_read # Generate http:// urls. Defaults to :authenticated_read (https://)
  config.fog_host = "#{ENV['S3_ASSET_URL']}/#{ENV['S3_BUCKET_NAME']}"
end

You may be setting the ENV['S3_BUCKET_NAME'] or ENV['S3_ASSET_URL'] variables incorrectly, or even setting them manually. Check those values in your .env file; a quick check is sketched below.
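
As a quick sanity check (not part of the original answer; the variable names simply mirror the initializer above), you can print which of them are actually set from a Rails console:

# Confirm the variables the initializer above relies on are actually set.
%w[S3_KEY S3_SECRET S3_REGION S3_BUCKET_NAME S3_ASSET_URL].each do |name|
  puts "#{name} is #{ENV[name].nil? ? 'NOT set' : 'set'}"
end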

Rails 4, Fog, Amazon S3 - retrieving all the images as an array from a specific folder in a bucket

I think you should be able to make a small change in your script to get the behavior you want. Simply append a forward slash to the prefix so that it clearly asks for keys under that directory-like prefix, rather than for everything whose key merely begins with those characters.

So, that would get you something like:

directory = connection.directories.get('upimages', prefix: image_folder + '/')
directory.files.map do |file|
  file.key
end

(I just split it into two statements to make it easier to read.)

Need to change the storage directory of files in an S3 Bucket (Carrierwave / Fog)

You need to interact with the S3 objects directly to move them. You'll probably want to look at copy_object and delete_object in the Fog gem, which is what CarrierWave uses to interact with S3. A rough sketch of the copy-then-delete move follows the links below.

https://github.com/fog/fog/blob/8ca8a059b2f5dd2abc232dd2d2104fe6d8c41919/lib/fog/aws/requests/storage/copy_object.rb

https://github.com/fog/fog/blob/8ca8a059b2f5dd2abc232dd2d2104fe6d8c41919/lib/fog/aws/requests/storage/delete_object.rb
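
S3 has no rename operation, so a move is a copy to the new key followed by a delete of the old one. A minimal sketch (the bucket and key names are placeholders, not from the original question):

require 'fog'

connection = Fog::Storage.new(
  provider: 'AWS',
  aws_access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)

bucket  = 'my-bucket'
old_key = 'uploads/old/path/image.png'
new_key = 'uploads/new/path/image.png'

# Copy the object to its new key, then remove the original.
connection.copy_object(bucket, old_key, bucket, new_key)
connection.delete_object(bucket, old_key)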


