Building for arm64e on Apple Silicon

The main difference between targeting arm64 and arm64e is ARMv8.3's Pointer Authentication Codes (PAC). Because the specific ABI hasn't stabilized yet, Apple only uses arm64e for its own platform code: Apple can recompile the entire OS against a new ABI simply by shipping an OS update, whereas it can't realistically force every third-party developer to adopt a new ABI.

Despite this, you can compile for arm64e simply by choosing "Other..." in the Architectures drop-down and typing in arm64e. By default, macOS refuses to run any non-Apple code that contains an arm64e slice (to prevent anyone from shipping arm64e code before the ABI is stable). You can, however, override this behavior and allow macOS to run non-Apple arm64e code by setting the arm64e_preview_abi boot-arg:

sudo nvram boot-args=-arm64e_preview_abi

Note that you will need to disable System Integrity Protection to do so. This boot-arg is meant to let developers prepare their software to run with pointer authentication enabled, and not much else.
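You can confirm the state of both settings from a normal boot (changing SIP itself requires running csrutil disable from Recovery; the nvram output below assumes the boot-arg was set as shown above):

```
% csrutil status
System Integrity Protection status: disabled.

% nvram boot-args
boot-args	-arm64e_preview_abi
```

To undo the override later, clear the variable with sudo nvram -d boot-args.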

Should macOS DriverKit system extensions be arm64 or arm64e for Apple Silicon / M1?

Dexts should indeed be arm64 and x86_64 (but as pmdj explains, system binaries are still arm64e.)
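In Xcode terms, that means leaving arm64e out of the dext target's architecture list. A minimal xcconfig sketch (these are standard Xcode build settings; the exact list depends on your deployment targets):

```
// Build the dext for Apple Silicon and Intel; do not add arm64e.
ARCHS = arm64 x86_64
ONLY_ACTIVE_ARCH = NO
```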

As hinted by the name of (and need for) the -arm64e_preview_abi boot-arg, arm64e is currently only exposed as a developer preview, to allow for testing.

However, you shouldn't be getting the "disallowing arm64" error: did you set any other interesting boot-args on computer B? (In particular, amfi= may be relevant.)

Why does my native application compiled on Apple Silicon sometimes build as arm64 and sometimes build as x86_64?

When your build command doesn't include specific flags for which architecture to build for, the compiler tools provided by Apple, such as cc, pick a default target based on the architecture of the calling process. That means that if your build system has not itself been compiled natively for arm64, the compiler will assume you want to build for x86_64!

You can demonstrate this by using the arch tool to run the cc executable in x86_64 mode:

% arch -x86_64 cc hello.c -o hello

% file hello
hello: Mach-O 64-bit executable x86_64

As a work-around, you can introduce a shim compiler that always resets to the native architecture. Save this as force-arm64-cc and make it executable:

#!/usr/bin/env bash

# Note we are using arm64e because `cc` does not have an arm64 binary!
exec arch -arm64e cc "$@"

You can then use this shim in place of cc:

% CC=$PWD/force-arm64-cc ./my-build-system

% file hello
hello: Mach-O 64-bit executable arm64
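You can also verify the claim in the shim's comment, that Apple's cc launcher ships arm64e rather than plain arm64, by listing the slices of its fat binary (the path and exact slice list vary by Xcode version):

```
% lipo -archs "$(xcrun --find cc)"
```

You should see arm64e in the output, but no bare arm64 slice.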

The correct long-term solution is to specify the target architecture when compiling:

% arch -x86_64 cc -arch arm64 hello.c -o hello

% file hello
hello: Mach-O 64-bit executable arm64

However, this currently produces a bogus executable when you rebuild the binary, which is quite common in an edit-compile-run cycle:

% ./hello
zsh: killed ./hello
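The kill most likely happens because the executable's code signature becomes invalid when the file is overwritten in place (see the linked question below). Assuming that's the cause, deleting the stale output before rebuilding avoids it:

```
% rm -f hello
% arch -x86_64 cc -arch arm64 hello.c -o hello
```

Alternatively, codesign --sign - --force hello replaces the invalid signature on the existing binary with a fresh ad-hoc one.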

See also:

  • Why does my native arm64 application built using an x86_64 build system fail to be code signed unless I remove the previous executable?

Xcode arm64 vs arm64e

The arm64e architecture is used by the A12 chip, introduced in the 2018 iPhone models (XS/XS Max/XR). Code is compiled for ARMv8.3-A, which brings support for several new features, namely:

  • Pointer authentication
  • Nested virtualization
  • Advanced SIMD complex number support
  • Improved JavaScript data type conversion support
  • A change to the memory consistency model
  • ID mechanism support for larger system-visible caches

The A12 features an Apple-designed 64-bit ARMv8.3-A six-core CPU

https://en.wikipedia.org/wiki/Apple_A12

Read more about the architecture here as well:

https://community.arm.com/processors/b/blog/posts/armv8-a-architecture-2016-additions

Build Apple Silicon binary on Intel machine

We ended up solving this, and were able to compile darwin-arm64 and debian-aarch64 binaries on GitHub Actions' x86-64 machines.

We pre-compiled all our dependencies for arm64 and linked them both statically and dynamically.

export RELAY_DEPS_PATH=./build-deps/arm64
export PKG_CONFIG_PATH=./build-deps/arm64/lib/pkgconfig

cd ./relay-deps
TARGET=./build-deps make install

cd ./relay
phpize
./configure CFLAGS='-target arm64-apple-macos' \
--host=aarch64-apple-darwin \
--enable-relay-jemalloc-prefix
[snip...]

make

# Dynamically linked binary
cc --target=arm64-apple-darwin \
${wl}-flat_namespace ${wl}-undefined ${wl}suppress \
-o .libs/relay.so -bundle .libs/*.o \
-L$RELAY_DEPS_PATH/lib -lhiredis -ljemalloc_pic [snip...]

# re-link to standard paths
./relay-deps/utils/macos/relink.sh .libs/relay.so /usr/local/lib
cp .libs/relay.so modules/relay.so

# Build a statically linked shared object
cc --target=arm64-apple-darwin \
${wl}-flat_namespace ${wl}-undefined ${wl}suppress \
-o .libs/relay-static.so -bundle .libs/*.o \
$RELAY_DEPS_PATH/lib/libhiredis.a \
$RELAY_DEPS_PATH/lib/libjemalloc_pic.a \
[snip...]
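Assuming the paths above, you can sanity-check the produced bundles on the build machine before shipping:

```
% file .libs/relay.so
% otool -L .libs/relay.so
```

file should report an arm64 Mach-O even though the host is x86_64, and otool -L should list the install names the bundle expects at load time, so you can verify the re-linking step worked.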

The relink.sh:

#!/bin/bash
set -e

printUsage() {
    echo "$0 <shared-object> <prefix>"
    exit 1
}

if [[ ! -f "$1" || -z "$2" ]]; then
    printUsage
fi

INFILE=$1
PREFIX=$2

links=(libjemalloc libhiredis [snip...])

for link in "${links[@]}"; do
    FROM=$(otool -L "$INFILE" | grep "$link" | awk '{print $1}')
    FILE=$(basename -- "$FROM")
    TO="$PREFIX/$FILE"

    echo "$FROM -> $TO"
    install_name_tool -change "$FROM" "$TO" "$INFILE"
done
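To see what the extraction inside that loop does, here is the same grep | awk | basename pipeline run against a hypothetical sample of otool -L output (the paths are made up for illustration):

```shell
# Hypothetical otool -L output for a bundle linked against
# dependencies in a custom build-deps prefix.
sample='relay.so:
	/Users/ci/build-deps/arm64/lib/libhiredis.1.dylib (compatibility version 1.0.0, current version 1.1.0)
	/Users/ci/build-deps/arm64/lib/libjemalloc.2.dylib (compatibility version 2.0.0, current version 2.0.0)'

# Same extraction as relink.sh: find the matching line and take its
# first field, which is the install-name path.
FROM=$(printf '%s\n' "$sample" | grep libhiredis | awk '{print $1}')
FILE=$(basename -- "$FROM")
TO="/usr/local/lib/$FILE"

echo "$FROM -> $TO"
# prints: /Users/ci/build-deps/arm64/lib/libhiredis.1.dylib -> /usr/local/lib/libhiredis.1.dylib
```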

Unable to import psutil on M1 Mac with miniforge: (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e'))

Have you tried:

pip uninstall psutil

followed by:

pip install --no-binary :all: psutil
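The error occurs when a pre-built x86_64 wheel is loaded by an arm64 Python (or vice versa); --no-binary forces pip to compile the C extension from source for whichever architecture your interpreter actually runs as. You can check that architecture like this:

```shell
# Report the machine architecture the current Python interpreter
# is running as ("arm64" on a native Apple Silicon Python,
# "x86_64" under Rosetta 2 or on Intel).
python3 -c "import platform; print(platform.machine())"
```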


