Curl: (2) Failed Initialization


At a wild guess, you've linked the /usr/local/bin/curl binary to the system curl library.

To verify that this is the case, run:

ldd /usr/local/bin/curl

If the output includes a line like:

libcurl.so.4 => /usr/lib/x86_64-linux-gnu/libcurl.so.4 (0x00007fea7e889000)

It means that the curl binary is picking up the system curl library. While it was linked at compile time against the correct library, at run time it resolves the wrong one, which is a typical cause of this error.
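The check above can be automated. Below is a minimal sketch with a hypothetical `parse_libcurl_line` helper that extracts the resolved libcurl path from raw `ldd` output (the output format matches the example line above); if the returned path points into a system directory such as /usr/lib/x86_64-linux-gnu, the binary is picking up the system libcurl at run time.

```python
import re
import subprocess

def resolve_libcurl(binary_path):
    """Run ldd on a binary and return the libcurl path it resolves, if any.

    Hypothetical helper for illustration, not part of curl itself.
    """
    output = subprocess.run(
        ["ldd", binary_path], capture_output=True, text=True, check=True
    ).stdout
    return parse_libcurl_line(output)

def parse_libcurl_line(ldd_output):
    """Extract the libcurl resolution from raw ldd output text."""
    for line in ldd_output.splitlines():
        # Matches lines like: libcurl.so.4 => /usr/lib/.../libcurl.so.4 (0x...)
        match = re.match(r"\s*(libcurl\S*)\s*=>\s*(\S+)", line)
        if match:
            return match.group(2)  # the library path the loader picked
    return None
```

For example, `resolve_libcurl("/usr/local/bin/curl")` would return the same path that `ldd` prints for the libcurl line.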

If you run configure with --disable-shared, it will produce a static library (.a); a curl binary linked against it will not depend on the system libcurl.so but will instead carry its own private copy of the code.
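A rough sketch of that build, assuming you are building from an unpacked curl source tree (the path is a placeholder; exact configure options can vary by curl version):

```shell
# Build curl with a static libcurl so the binary carries its own copy
# instead of resolving the system libcurl.so at run time.
cd /path/to/curl-source        # placeholder: your unpacked curl source tree
./configure --disable-shared --enable-static
make

# Verify: ldd on the resulting binary should no longer list libcurl.so
ldd src/curl | grep libcurl || echo "no dynamic libcurl dependency"
```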

If you're cross-compiling, then you'll also need to cross-compile all the dependent libraries, and that is another question.

Getting Error: CURLerror (curl_easy_perform() failed) - code=2 msg='Failed initialization'.

"KM" Response #1:

1) Is this issue happening intermittently or all the time?

2) Does this issue also happen with a small dataset?

3) What Snowflake ODBC driver version are you using? Can you use the latest ODBC driver, version 2.19.14, and let us know the behavior?

4) Are you using a proxy in your network?

5) Please run the statement below from the Snowflake Web GUI or from a SnowSQL terminal to get the list of endpoints that need to be whitelisted in the firewall/network (share the endpoint details with your networking team):

 SELECT SYSTEM$WHITELIST();

OR (if you want a slightly more readable output):

SELECT t.value:type::varchar AS type,
       t.value:host::varchar AS host,
       t.value:port AS port
FROM TABLE(FLATTEN(input => PARSE_JSON(SYSTEM$WHITELIST()))) AS t;
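SYSTEM$WHITELIST() returns a JSON array of {type, host, port} entries. If you want to turn that output into a flat list of firewall rules outside of Snowflake, a minimal sketch follows; the sample entries are illustrative stand-ins, not a real account's endpoint list.

```python
import json

def whitelist_to_rules(whitelist_json):
    """Turn SYSTEM$WHITELIST()-style JSON output into 'host:port' rules."""
    entries = json.loads(whitelist_json)
    # De-duplicate and sort so the networking team gets a stable list.
    return sorted({f"{e['host']}:{e['port']}" for e in entries})

# Illustrative sample shaped like SYSTEM$WHITELIST() output:
sample = json.dumps([
    {"type": "SNOWFLAKE_DEPLOYMENT", "host": "myaccount.snowflakecomputing.com", "port": 443},
    {"type": "STAGE", "host": "sfc-va-ds1-2-customer-stage.s3.amazonaws.com", "port": 443},
    {"type": "OCSP_CACHE", "host": "ocsp.snowflakecomputing.com", "port": 80},
])
for rule in whitelist_to_rules(sample):
    print(rule)
```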

Note: In order to function properly, Snowflake must be able to access a set of HTTP/HTTPS addresses. If your server policy denies access to most or all external IP addresses and web sites, you must whitelist these addresses to allow normal service operation.

All communication with Snowflake happens over port 443. However, CRL and OCSP certification checks are transmitted over port 80. The network administrator for your organization must open your firewall to traffic on ports 443 and 80.
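A quick way to probe those two ports is a plain TCP connection test. The sketch below checks TCP reachability only (it does not validate TLS or proxies); the account hostname in the usage line is a hypothetical example.

```python
import socket

def port_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout.

    Snowflake traffic uses port 443; CRL/OCSP certificate checks use port 80.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example: `port_reachable("myaccount.snowflakecomputing.com", 443)` and `port_reachable("ocsp.snowflakecomputing.com", 80)`.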


"M" Follow-Up Response #1:

Please find my response below for your questions:

1) This is an intermittent issue; it does not always fail.

2) The issue does not occur with a small dataset. With a larger dataset, the job runs for 11-12 hours and then fails with the specified error.

3) We are using ODBC driver version 2.19.09.00. We will test with the newer version.

4) No. We are not using any proxy in the network.

5) OK.

I will check and white-list all the Snowflake IP address in our network, install the latest ODBC driver and run the job again. I will keep you posted about the result.


"M" Follow-Up Response #2:

I have upgraded the ODBC driver to the latest version 2.19.14.

Now the job fails after running for 24 hours, with a different error.

Error message:

Result download worker error: Worker error: [Snowflake][Snowflake] (4)

REST request for URL https://sfc-va-ds1-2-customer-stage.s3.amazonaws.com/ogn3-s-vass2706/results/018f030a-0164-1b0c-0000-2ab1004b8b96_0/main/data_5_6_219?x-amz-server-side-encryption-customer-algorithm=AES256&response-content-encoding=gzip&AWSAccessKeyId=AKIAJP5BI3JZEVKDRXDQ&Expires=1569308247&Signature=2DSCUhY7DU56cpq6jo31rU5LKRw%3D failed: CURLerror (curl_easy_perform() failed) - code=28 msg='Timeout was reached'.

Could you please advise on this?


KM Response #2:

1) What is your operating system?

2) Is this issue happening with a small dataset or a big dataset?

3) Can you try to clear some space in the TEMP location of the server? For example, on Windows this is C:\Windows\Temp and C:\Users\\AppData\Local\Temp\, and on Linux it is /tmp.

4) Can you make sure the URL https://sfc-va-ds1-2-customer-stage.s3.amazonaws.com is whitelisted?

5) Try the command below to check connectivity:

curl -v -k https://sfc-va-ds1-2-customer-stage.s3.amazonaws.com


"M" Follow-Up Response #3:

1) Operating System: Windows Server 2012 R2.

2) This issue happens only with a big dataset - particularly when the job runs for around 24 hours.

3) Done. Cleared the space.

4) The URL is white-listed.

5) On Windows PowerShell, this command gives an error:

Invoke-WebRequest : A parameter cannot be found that matches parameter name 'k'.

At line:1 char:9

+ curl -v -k https://sfc-va-ds1-2-customer-stage.s3.amazonaws.com

+ ~~

+ CategoryInfo : InvalidArgument: (:) [Invoke-WebRequest], ParameterBindingException

+ FullyQualifiedErrorId : NamedParameterNotFound,Microsoft.PowerShell.Commands.InvokeWebRequestCommand

KM Response #3:

Use the curl command to test connectivity to Snowflake. (Make sure curl is installed on the machine; if not, you can download it from a third party, e.g. https://curl.haxx.se/download.html.) Note that in Windows PowerShell, curl is a built-in alias for Invoke-WebRequest, which does not accept the -v and -k flags - hence the error above. Run the real binary as curl.exe, or use a regular Command Prompt.

curl -v -k https://sfc-va-ds1-2-customer-stage.s3.amazonaws.com

Sometimes this issue can happen when there is not much space left in the TEMP location. You can try to run the job and monitor the %TEMP% space.
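One way to monitor that is a small script run alongside the job. The sketch below reports free space in the TEMP location; the 5 GB threshold is an illustrative value, not a Snowflake requirement.

```python
import shutil
import tempfile

def temp_free_gb():
    """Return free space (in GB) in the TEMP location, the directory
    drivers and ETL tools commonly use for temporary/spill files."""
    usage = shutil.disk_usage(tempfile.gettempdir())
    return usage.free / 1024**3

# Warn if TEMP drops below a chosen threshold (5 GB is illustrative).
if temp_free_gb() < 5:
    print(f"Low TEMP space: {temp_free_gb():.1f} GB free")
```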

I am not sure how the Attunity tool works, but some ETL tools, such as Informatica, create temporary files on the server and use the %TEMP% location.


"M" Follow-Up Response #4:

Using the curl command, it is now able to connect successfully. I will trigger the job now and monitor the %TEMP% location.


Any other ideas, recommendations, or possible work-arounds?


