Bug G-WAN and Amazon EC2

The short answer is:

Newer releases of hypervisors report ZERO CPUs and/or ZERO CPU cores, which causes a division by zero in G-WAN.

The longer story is:

As G-WAN is optimized for multicore architectures, it uses the CPUID instruction and the OS kernel structures to determine the platform architecture and the associated OS policies (the number of online/allowed CPUs).

In the past, checking the CPUID instruction and the kernel structures worked well. But today's hypervisors have switched to broken CPUID implementations and OS kernel structures.

This hypervisor-created problem affects hosting companies (VPS servers) and Amazon EC2 instances, as well as companies and end users.

Other web servers are not affected because their users must manually configure, attach, and run a separate server instance for each CPU core (duplicating the resources that G-WAN allocates only once).
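
G-WAN itself performs this detection natively in C; purely as an illustration of the defensive check involved, here is a short, hypothetical sketch (TypeScript/Node.js, not G-WAN's actual code) that guards a reported core count so that later per-core arithmetic can never divide by zero:

    import * as os from "os";

    // Ask the platform how many CPU cores are online, and never trust a
    // zero (or missing) answer: broken hypervisors may report 0 cores.
    function onlineCores(): number {
      const reported = os.cpus().length;
      return reported > 0 ? reported : 1; // fall back to a single worker
    }

    const workers = onlineCores();
    console.log(`spawning ${workers} worker(s), one per CPU core`);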


How to install and configure FTP on Amazon EC2?

The Windows EC2 instances are all Windows Server 2008. The easiest thing to do would be to enable the built-in FTP functionality.

See http://www.youtube.com/watch?v=QsGPqkobCs8.

AWS Glue IAM role can't connect to AWS OpenSearch

I believe this is not possible, because the AWS Glue Elasticsearch connector is based on an open-source Elasticsearch Spark library that does not sign requests using AWS Signature Version 4, which is required for enforcing domain access policies.

If you take a look at the key concepts for fine-grained access control in OpenSearch, you'll see:

If you choose IAM for your master user, all requests to the cluster must be signed using AWS Signature Version 4.
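
To make that requirement concrete, here is a hedged sketch (AWS SDK for JavaScript v3; the domain endpoint, region, and index are placeholders) of a SigV4-signed request to an OpenSearch domain, i.e. the signing step the open-source Spark connector does not perform:

    import { HttpRequest } from "@aws-sdk/protocol-http";
    import { SignatureV4 } from "@aws-sdk/signature-v4";
    import { Sha256 } from "@aws-crypto/sha256-js";
    import { defaultProvider } from "@aws-sdk/credential-provider-node";

    const endpoint = "my-domain.us-east-1.es.amazonaws.com"; // placeholder

    async function signedSearch(): Promise<void> {
      // Build the raw HTTP request against a hypothetical index.
      const request = new HttpRequest({
        protocol: "https:",
        hostname: endpoint,
        method: "GET",
        path: "/my-index/_search",
        headers: { host: endpoint },
      });

      // Sign it with AWS Signature Version 4 using the caller's IAM credentials.
      const signer = new SignatureV4({
        credentials: defaultProvider(),
        region: "us-east-1", // placeholder region
        service: "es",       // signing name for OpenSearch/Elasticsearch domains
        sha256: Sha256,
      });
      const signed = await signer.sign(request);

      // Send the signed request; without these headers the domain rejects it.
      const response = await fetch(`https://${endpoint}${signed.path}`, {
        method: signed.method,
        headers: signed.headers as Record<string, string>,
      });
      console.log(await response.json());
    }

    signedSearch().catch(console.error);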

If you visit the Elasticsearch Connector for AWS Glue AWS Marketplace page, you'll notice that the connector itself is based on an open-source implementation:

For more details about this open-source Elasticsearch Spark connector, please refer to this open-source connector online reference.

Under the hood, AWS Glue uses this library to index data from Spark dataframes to the Elasticsearch endpoint. Since this open-source library (maintained by the Elasticsearch community) does not support signing requests using AWS Signature Version 4, it will only work with the "open permission" you've referenced. This is hinted at in the big picture of fine-grained access control:

In general, if you enable fine-grained access control, we recommend using a domain access policy that doesn't require signed requests.

Note that you can always fall back to using a master user based on username/password (a sketch of step 2 follows the list below):

  1. Create a master user (username/password) for the OpenSearch domain's fine-grained access control configuration.
  2. Store the username/password in an AWS Secrets Manager secret as described here.
  3. Attach the secret to the AWS Glue connector as described here.
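
As promised above, here is a minimal, hypothetical sketch of step 2 (AWS SDK for JavaScript v3; the region, secret name, and credentials are all placeholders):

    import {
      SecretsManagerClient,
      CreateSecretCommand,
    } from "@aws-sdk/client-secrets-manager";

    // Store the OpenSearch master user's credentials as a JSON secret.
    async function storeMasterUserSecret(): Promise<void> {
      const client = new SecretsManagerClient({ region: "us-east-1" }); // placeholder
      await client.send(
        new CreateSecretCommand({
          Name: "opensearch/master-user", // hypothetical secret name
          SecretString: JSON.stringify({
            username: "master-user",                     // placeholder
            password: "replace-with-a-strong-password",  // placeholder
          }),
        })
      );
    }

    storeMasterUserSecret().catch(console.error);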

Hope this helps!

Query returning wrong values

Adding an answer so that the solution proposed in the comments can be accepted if the new version mentioned did resolve the issue.

Upgrading the Neptune instance to engine release 1.0.2.1.R4 (https://docs.aws.amazon.com/neptune/latest/userguide/engine-releases-1.0.2.1.R4.html) should resolve the issue.

How do you write to the file system of an AWS Lambda instance?

So the answer lies in the context.fail() and context.succeed() functions. Being completely new to the world of AWS and Lambda, I was ignorant of the fact that calling either of these methods stops execution of the Lambda instance.

According to the docs:

The context.succeed() method signals successful execution and returns a string.

By eliminating these calls and invoking them only after all the code I wanted had run, everything worked well.
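
For illustration, a minimal sketch of that pattern in the legacy Node.js handler style the docs describe (doWork() is a hypothetical stand-in for the function's real logic):

    // Legacy Node.js Lambda handler style (TypeScript).
    // context.succeed()/context.fail() terminate the invocation immediately,
    // so they must be called only after all the work has finished.
    export const handler = (event: unknown, context: any): void => {
      try {
        const result = doWork(event); // do everything the function needs first
        context.succeed(result);      // signal success only at the very end
      } catch (err) {
        context.fail(err);            // signal failure and stop execution
      }
    };

    function doWork(event: unknown): string {
      return "done"; // placeholder
    }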


