Using an S3 Presigned URL to Upload a File That Will Then Have Public-Read Access

When you generate a pre-signed URL for a PUT object request, you can specify the key and the ACL the uploader must use. If I want a user to upload an object to my bucket with the key "files/hello.txt", and the file should be publicly readable, I can do the following:

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new
obj = s3.bucket('bucket-name').object('files/hello.txt')

# Presign a PUT that requires the uploader to apply the public-read ACL.
put_url = obj.presigned_url(:put, acl: 'public-read', expires_in: 3600 * 24)
#=> "https://bucket-name.s3.amazonaws.com/files/hello.txt?X-Amz-..."

obj.public_url
#=> "https://bucket-name.s3.amazonaws.com/files/hello.txt"

I can give the put_url to someone else, and it will allow them to PUT an object to that URL, subject to the following conditions:

  • The PUT request must be made within the given expiration. In the example above I specified 24 hours. The :expires_in option may not exceed 1 week.
  • The PUT request must specify the HTTP header of 'x-amz-acl' with the value of 'public-read'.

Using the put_url, I can upload an object with Ruby's Net::HTTP:

require 'net/http'

uri = URI.parse(put_url)

# The x-amz-acl header must match the ACL the URL was signed with,
# or S3 will reject the request.
request = Net::HTTP::Put.new(uri.request_uri, 'x-amz-acl' => 'public-read')
request.body = 'Hello World!'

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
resp = http.request(request)
resp.code #=> "200" on success

Now that the object has been uploaded by someone else, I can make a vanilla GET request to the #public_url. This could be done by a browser, curl, wget, etc.
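
For example, a minimal fetch with Ruby's Net::HTTP, assuming the obj handle from above is still in scope:

require 'net/http'

# No signing needed; the object is now publicly readable.
puts Net::HTTP.get(URI.parse(obj.public_url))
#=> Hello World!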

S3 - Upload - how to generate a pre-signed url that gives EVERYONE read access to the object?

The HTTP client that performs the upload needs to include the x-amz-acl: public-read header.

In your example, you're generating a request that includes that header, but then you're generating a presigned URL from that request.

URLs don't contain HTTP headers, so whatever HTTP client you use to perform the actual upload is not setting the header when it sends the request to the generated URL.
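
To make this concrete, a minimal sketch of re-sending the header at upload time, assuming put_url was presigned with acl: 'public-read' as in the first answer:

require 'net/http'

uri = URI.parse(put_url)

# The upload must carry the same header the URL was signed with,
# otherwise S3 rejects the request with a 403.
request = Net::HTTP::Put.new(uri.request_uri, 'x-amz-acl' => 'public-read')
request.body = 'Hello World!'

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.request(request)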

s3 presigned url for access to entire folder

No, S3 doesn't have a true concept of a folder. Folders are "created" from segments of object keys and do not exist independently of objects, so a pre-signed URL can only grant access to a single object.
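
If you need to share everything under a prefix, one workaround is to presign each object individually. A minimal sketch, assuming hypothetical bucket and prefix names:

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new

# Presign a GET URL for every object whose key starts with the prefix.
urls = s3.bucket('bucket-name').objects(prefix: 'files/').map do |summary|
  summary.object.presigned_url(:get, expires_in: 3600)
end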

S3 Object upload to a private bucket using a pre-signed URL results in Access denied

It's likely that your bucket has Block all public access turned on, in which case you cannot set the ACL of the object to public-read. You can either turn off Block all public access or change public-read to private.
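
For the second route, a minimal sketch, assuming hypothetical bucket and key names: omit the ACL entirely, so the signature makes no public grant (private is the default):

require 'aws-sdk-s3'

obj = Aws::S3::Resource.new.bucket('bucket-name').object('files/hello.txt')

# No ACL in the signature, so Block all public access is not violated;
# the uploaded object stays private.
put_url = obj.presigned_url(:put, expires_in: 3600)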

Amazon s3: Block public access settings to allow for public read private write with signed url

S3 Block Public Access is just another layer of protection whose main purpose is to prevent you from accidentally granting public access to your bucket or objects.

The access-denied messages you are experiencing come from this feature: it prevents you from setting public access on your bucket/objects, which is exactly what you are trying to do.

If your use case requires public access to S3 objects and you know what you are doing, you can disable this feature (or at least some of its sub-options, depending on how you are going to grant access). An example would be a public website hosted on S3, which clearly requires public read access.
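
As a sketch of loosening only some sub-options (here the ACL-related ones, assuming you will grant access via object ACLs rather than a bucket policy; flip the other two flags instead if you use a policy):

require 'aws-sdk-s3'

client = Aws::S3::Client.new

# Allow public ACLs, but keep public bucket policies blocked.
client.put_public_access_block(
  bucket: 'bucket-name',
  public_access_block_configuration: {
    block_public_acls: false,
    ignore_public_acls: false,
    block_public_policy: true,
    restrict_public_buckets: true
  }
)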

The best way to set up access to your objects depends on whether you want every object inside the bucket to be publicly readable.

If every object is supposed to be publicly readable, then the easiest way to accomplish this is via a bucket policy, such as the one you included in your post:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::myapp/*"]
    }
  ]
}

If only some of the objects are supposed to be publicly readable, then you have several options.

The first option is probably the easiest to implement: create separate buckets for private and public objects. This is usually the preferred way when possible, because you treat confidential data separately from public data, and it is the least error-prone.

The second option is to create separate folders in a single bucket, where one folder holds confidential data and another holds public data. Then you can use a bucket policy again and grant read access only to the public folder:

"Resource":["arn:aws:s3:::myapp/public/*"]

The third option is to use object ACLs instead of bucket policies. Go for this option when there is no clear distinction between the objects being uploaded and you want the uploader to decide, per object, whether it should be public. If you don't need that per-object choice, avoid this option: it is the hardest to manage and the easiest way to lose track of what is going on with your objects.
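
A minimal sketch of that per-object decision at presign time, where wants_public is a hypothetical flag supplied by your application:

require 'aws-sdk-s3'

obj = Aws::S3::Resource.new.bucket('bucket-name').object('files/hello.txt')

# wants_public is a hypothetical per-upload flag from your application.
acl = wants_public ? 'public-read' : 'private'
put_url = obj.presigned_url(:put, acl: acl, expires_in: 3600)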

Just one last note: if you are using bucket policies to grant public read access to your objects, then you don't need to specify an ACL in s3params.

And to answer your questions:

Am I using S3 in an unintended way?

No, it is perfectly fine to grant public read access to your bucket/objects, as long as you intend to do so. Those additional layers of protection exist because S3 buckets are also used to store highly confidential data, and from time to time someone unintentionally changes a setting, which can cause huge damage depending on the nature of the data stored inside. Public cloud providers are therefore making it harder to open up their data stores, so that granting public access is a well-informed decision rather than a mistake.

Am I supposed to use the CloudFront service to serve these images publicly?

CloudFront provides some nice features, such as additional protection and caching at the edges of the AWS network, but it is definitely not mandatory. Since it is not a free service, I would advise looking closer into it before choosing to use it, so that you don't waste resources (money) needlessly.

AWS S3 pre-signed POST access denied

There are a couple of small problems here:

  1. When you created the pre-signed URL, you indicated a condition of acl=public-read, so your clients must include a form field of acl=public-read when POSTing their request (see the sketch after this list).
  2. Because your clients indicate an ACL, the IAM policy associated with the credentials creating the pre-signed URL must allow both s3:PutObject and s3:PutObjectAcl.
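
A minimal sketch of generating such a POST and the fields the client must echo back, assuming hypothetical bucket and key names:

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new
obj = s3.bucket('bucket-name').object('files/hello.txt')

# The acl option adds a policy condition, so the form must carry it too.
post = obj.presigned_post(acl: 'public-read')

post.url    # the form's action URL
post.fields # form fields the client must submit, including 'acl' => 'public-read'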

How to load a website via s3 with pre-signed URL

This was resolved by adding a policy that allows the other files (js, css) to be accessed publicly, and not attaching any policy (Allow/Deny) to index.html.

In my case, all the other files were located under a folder called static, so I added a policy allowing public access to that folder:

{
  "Version": "2008-10-17",
  "Id": "PolicyForPublicWebsiteContent",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::web-portal/static/*"
    }
  ]
}

After adding the above policy, you cannot access the index.html page directly in a browser. However, you can access index.html if you have a pre-signed URL, and since the other required files are publicly readable, index.html can load the rest.
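
A minimal sketch of handing out such a URL, assuming the web-portal bucket from the policy above:

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new

# index.html stays private; only holders of this URL can load it.
url = s3.bucket('web-portal').object('index.html').presigned_url(:get, expires_in: 3600)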

I presume the same could be done by gzipping the entire website so that all files reside in a single archive, then providing a pre-signed URL to that file. That way, no bucket policy is needed for the other non-index files.


