(Mac) Leave core file where the executable is instead of /cores


Hmm - maybe you can help yourself by specifying the core file pattern? On Linux you'd edit /etc/sysctl.conf and set:

kernel.core_pattern=/cores/core.%e.%p.%h.%t

On macOS the equivalent knob is the kern.corefile sysctl, whose default is /cores/core.%P (%P expands to the PID); you can point it at the executable's directory instead.

Maybe that would help you find out more about which process was responsible for the dump
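A minimal sketch of redirecting cores on macOS (the target path is hypothetical, and the sysctl change does not persist across reboots):

```shell
# Show the current core location pattern (macOS default: /cores/core.%P,
# where %P expands to the PID of the crashing process).
sysctl kern.corefile

# Point new core files at the directory holding your binary
# ("$PWD" is a stand-in for wherever the executable actually lives).
sudo sysctl kern.corefile="$PWD/core.%P"

# The shell must also allow core files to be written at all:
ulimit -c unlimited
```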

Core dumps aren't written to /cores on macOS Monterey

I'm currently on macOS Monterey. The solution suggested at https://developer.apple.com/forums/thread/694233?answerId=695943022#695943022 worked for me.

A quick summary: it is now necessary to enable the com.apple.security.get-task-allow entitlement on each executable.

Example with cat:

  1. Make a copy first. Required since cat is on a read-only filesystem.

    % cp $(which cat) cat-copy
  2. Create a dummy .entitlements with the com.apple.security.get-task-allow entitlement set:

    % /usr/libexec/PlistBuddy -c "Add :com.apple.security.get-task-allow bool true" tmp.entitlements
    File Doesn't Exist, Will Create: tmp.entitlements
  3. Re-sign cat-copy with those entitlements:

    % codesign -s - -f --entitlements tmp.entitlements cat-copy
    cat-copy: replacing existing signature
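To check that the re-signing took, you can dump the entitlements back out of the copy (a quick sketch using the cat-copy binary from the steps above):

```shell
# Print the entitlements embedded in the signed binary; the output
# should list com.apple.security.get-task-allow as true.
codesign -d --entitlements - cat-copy
```

With the entitlement in place (and `ulimit -c unlimited` set in your shell), crashing cat-copy should now leave a core file in /cores.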

gdb doesn't read the core dump on macOS

You should be able to launch lldb as

$ lldb --core "/cores/core.70087"
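Once the core is loaded, the usual first steps are to inspect the backtrace and threads (a sketch; `./a.out` is a placeholder for whichever binary produced the core):

```shell
# Load the core together with its executable so symbols resolve:
lldb ./a.out --core /cores/core.70087

# Then, inside lldb:
#   (lldb) bt               # backtrace of the crashed thread
#   (lldb) thread list      # every thread captured in the core
#   (lldb) frame variable   # locals in the selected frame
```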

How to generate a core dump in Linux on a segmentation fault?

This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether you should dump core. If you type

ulimit -c unlimited

then that will tell bash that its programs can dump cores of any size. You can specify a numeric limit instead of unlimited if you want (bash measures it in 1024-byte blocks, so ulimit -c 51200 caps cores at 50 MB), but in practice this shouldn't be necessary, since the size of core files will probably never be an issue for you.

In tcsh, you'd type

limit coredumpsize unlimited
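To confirm the bash setting took effect, you can read the limit back and crash a throwaway child process (the `kill -SEGV` child is just a stand-in for a real crashing program):

```shell
#!/bin/bash
# Remove the core size limit, then read it back.
ulimit -c unlimited
ulimit -c                  # prints "unlimited"

# Stand-in for a crashing program: a child shell that sends itself
# SIGSEGV. A process killed by signal N exits with status 128 + N,
# so the parent sees 128 + 11 = 139 here.
bash -c 'kill -SEGV $$'
echo $?                    # prints 139
```

Where the resulting core file lands is a separate setting (kernel.core_pattern on Linux); ulimit only controls whether and how large a core may be written.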

Azure VM pricing - Is it better to have 80 single core machines or 10 8-core machines?

Billing

According to the Windows Azure Virtual Machines Pricing Details, Virtual Machines are charged by the minute of wall-clock time. Prices are listed as hourly rates (60 minutes), and a partial hour is billed by the total number of minutes the VM actually ran.

As of July 2013, 1 Small VM (1 virtual core) cost $0.09/hr; 8 Small VMs (8 virtual cores) cost $0.72/hr; and 1 Extra Large VM (8 virtual cores) also cost $0.72/hr (the same as 8 Small VMs).

VM Sizes and Performance

The VM sizes differ not only in number of cores and RAM, but also in network I/O performance, ranging from 100 Mbps for Small to 800 Mbps for Extra Large.

Extra Small VMs are rather limited in CPU and I/O power and are inadequate for workloads such as you described.

For single-threaded, I/O bound applications such as described in the question, an Extra Large VM could have an edge because of faster response times for each request.

It's also advisable to benchmark workloads running 2, 4 or more processes per core. For instance, 2 or 4 processes in a Small VM and 16, 32 or more processes in an Extra Large VM, to find the adequate balance between CPU and I/O loads (provided you don't use more RAM than is available).
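A rough sketch of such a benchmark, with `sleep 1` standing in for one I/O-bound worker and N as the per-VM process count you would tune:

```shell
#!/bin/bash
N=4                                  # processes per VM - tune this
start=$(date +%s)
for i in $(seq 1 "$N"); do
  sleep 1 &                          # stand-in for one I/O-bound worker
done
wait                                 # all N workers run concurrently
end=$(date +%s)
echo "N=$N workers took $((end - start))s"   # ~1s total, not N seconds
```

Because these workers block on I/O (here, sleep) rather than CPU, total wall time stays near one second regardless of N; with CPU-bound workers it would grow with N once the cores are saturated, which is exactly the balance point the benchmark is looking for.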

Auto-scaling

Auto-scaling Virtual Machines is built directly into Windows Azure. It can be based either on CPU load or on Windows Azure Queue length.

Another alternative is to use specialized tools or services to monitor load across the servers and run PowerShell scripts to add or remove virtual machines as needed.

Auto-run

You can use the Windows Task Scheduler to automatically run tasks when Windows starts.


