Using multiple render pipelines in a single MTLRenderCommandEncoder: How to Synchronize MTLBuffer?
I believe you need to use separate encoders. In this (somewhat dated) documentation about function writes, Apple notes that only atomic operations are synchronized for buffers shared between draw calls within a single encoder.
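As a minimal sketch (names like commandBuffer and passDescriptor are assumptions from your setup): Metal guarantees that writes encoded by one encoder are visible to encoders created later on the same command buffer, so splitting the work looks like this:

```objc
// Writes encoded by the first encoder are visible to a second
// encoder created afterwards on the same command buffer.
id<MTLRenderCommandEncoder> first =
    [commandBuffer renderCommandEncoderWithDescriptor:passDescriptor];
// ... encode draws that write to the shared MTLBuffer ...
[first endEncoding];

id<MTLRenderCommandEncoder> second =
    [commandBuffer renderCommandEncoderWithDescriptor:passDescriptor];
// ... encode draws that read the buffer; they observe the writes above ...
[second endEncoding];
```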
How To Set Up Byte Alignment From a MTLBuffer to a 2D MTLTexture?
If it's important to you that the texture share the same backing memory as the buffer, and you want the texture to reflect the actual image dimensions, you need to ensure that the data in the buffer is correctly aligned from the start.
Rather than copying the source data all at once, you need to ensure the buffer has room for all of the aligned data, then copy it one row at a time.
NSUInteger rowAlignment = [self.device minimumLinearTextureAlignmentForPixelFormat:MTLPixelFormatR32Float];
NSUInteger sourceBytesPerRow = imageWidth * sizeof(float);
NSUInteger bytesPerRow = AlignUp(sourceBytesPerRow, rowAlignment);

id<MTLBuffer> metalBuffer = [self.device newBufferWithLength:bytesPerRow * imageHeight
                                                     options:MTLResourceCPUCacheModeDefaultCache];

const uint8_t *sourceData = floatData.bytes;
uint8_t *bufferData = metalBuffer.contents;
for (NSUInteger i = 0; i < imageHeight; ++i) {
    memcpy(bufferData + (i * bytesPerRow), sourceData + (i * sourceBytesPerRow), sourceBytesPerRow);
}
where AlignUp is your alignment function or macro of choice. Something like this:
static inline NSUInteger AlignUp(NSUInteger n, NSUInteger alignment) {
    return ((n + alignment - 1) / alignment) * alignment;
}
It's up to you to determine whether the added complexity is worth saving a copy, but this is one way to achieve what you want.
Clarification on how argument data for a vertex function using MTLVertexDescriptor and the [[attribute(n)]] attribute is located
It appears that this is the case. Section 5.2.4, page 87 of Apple's "Metal Shading Language Specification" Version 2.2 reads
A vertex function can read per-vertex inputs by indexing into
buffer(s) passed as arguments to the vertex function using the vertex
and instance IDs. To assemble per-vertex inputs and pass them as
arguments to a vertex function, declare the inputs with the [[stage_in]]
attribute.
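As a minimal sketch of the stage_in path described in the quote (the attribute indices here are assumptions and must match the MTLVertexDescriptor attached to the render pipeline descriptor):

```metal
#include <metal_stdlib>
using namespace metal;

// Attribute indices must correspond to the attributes configured
// in the pipeline's MTLVertexDescriptor.
struct VertexIn {
    float3 position [[attribute(0)]];
    float2 texCoord [[attribute(1)]];
};

struct VertexOut {
    float4 position [[position]];
    float2 texCoord;
};

// With [[stage_in]], Metal assembles the per-vertex inputs for you;
// no manual indexing by vertex ID is required.
vertex VertexOut vertex_main(VertexIn in [[stage_in]]) {
    VertexOut out;
    out.position = float4(in.position, 1.0);
    out.texCoord = in.texCoord;
    return out;
}
```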
Is it possible to use a bicubic sampler in metal?
It depends on the type of device, the operating system (version and type), and the processor architecture. The code below compiles on the following configuration: iOS 15, iPhone 12 / 13, Xcode 13.
#if defined(__HAVE_BICUBIC_FILTERING__)
constexpr sampler textureSampler (mag_filter::bicubic, min_filter::bicubic);
#else
// Fall back to bilinear filtering on devices without bicubic support.
constexpr sampler textureSampler (mag_filter::linear, min_filter::linear);
#endif
const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);
return float4(colorSample);
Metal Shading Language - buffer binding
There are multiple issues here.
First, you are defining Vects with a specific size. That allows Metal to check whether the buffer at index 2 is big enough to match the size of your vects variable. It is complaining because it isn't big enough. (It wouldn't be able to do this check if vects were declared as constant float3 *vects [[buffer(2)]], for example.)
Second, the size of your buffer, MemoryLayout<float3>.size * vectMaxCount, is incorrect. It fails to take into account the alignment of float3, and therefore the padding that exists between elements in your [float3] array. As noted in the documentation for MemoryLayout, you should always use stride, not size, when calculating allocation sizes.
This is why the failure happens when Vects::position is 8 or more elements long. You would expect it to start at 11 elements, because vectMaxCount is 10, but your buffer is shorter than an array of vectMaxCount float3s. To be specific, your buffer is 10 * 12 == 120 bytes long, while the stride of float3 is 16, and 120 / 16 == 7.5.
If you switch from size to stride when allocating your buffer and change the element count of Vects::position to 10 to match vectMaxCount, then you'll get past this immediate issue. However, there are additional problems lurking.
Your compute function as it currently stands doesn't know how many elements of vects.position are actually filled. You need to pass in the actual count of elements.
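One way to do that is to bind the live count in its own buffer slot and guard each thread against unfilled slots. A sketch (the kernel name and buffer index 3 are assumptions; on the CPU side the count can be supplied with setBytes(_:length:index:)):

```metal
#include <metal_stdlib>
using namespace metal;

// Hypothetical kernel: the data lives at buffer(2) as in the question,
// and the live element count is bound separately at buffer(3).
kernel void process_vects(constant float3 *vects     [[buffer(2)]],
                          constant uint   &vectCount [[buffer(3)]],
                          uint gid [[thread_position_in_grid]])
{
    if (gid >= vectCount) { return; }  // skip unfilled slots
    float3 v = vects[gid];
    // ... do work with v ...
    (void)v;
}
```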
This line:

memcpy(bufferPointer, &metalvects, MemoryLayout<float3>.size * vectMaxCount)

is incorrect (even after replacing size with stride). It reads past the end of metalvects, because the number of elements in metalvects is less than vectMaxCount. You should use metalvects.count instead of vectMaxCount.