How to Play Audio

How to play audio?

If you don't want to mess with HTML elements, you can create an Audio object directly in JavaScript:

var audio = new Audio('audio_file.mp3');
audio.play();

Or, wired up to a button:

function play() {
  var audio = new Audio('https://interactive-examples.mdn.mozilla.net/media/cc0-audio/t-rex-roar.mp3');
  audio.play();
}
<button onclick="play()">Play Audio</button>

How to play an audio file - .NET MAUI

Currently, .NET MAUI does not have a built-in audio playback framework, and there are some related known issues in MAUI which you can follow here:

https://github.com/dotnet/maui/issues/7152

https://github.com/CommunityToolkit/Maui/issues/113

Thanks for your feedback and support for MAUI.

How to play audio when a button is pressed?

This will work well for this use case. Note that most browsers only allow audio to start after a user gesture, which is why tying play() to a button click is reliable:

const btn = document.getElementById("btn");

btn.onclick = () => {
  // Here you will add the path to the local file you have
  const audio = new Audio(
    "https://www.soundhelix.com/examples/mp3/SoundHelix-Song-1.mp3"
  );

  audio.play();
};
<!DOCTYPE html>
<html>
  <head>
    <title>Parcel Sandbox</title>
    <meta charset="UTF-8" />
  </head>

  <body>
    <div id="app">
      <button id="btn">Play Sound</button>
    </div>

    <script src="src/index.js"></script>
  </body>
</html>

How to play audio for the first time when creating a customized audio player

// Immediately load the audio in the background. No need to add to DOM
const loadAudio = new Promise(resolve => {
  const audio = new Audio()
  audio.src = 'https://fetch-stream-audio.anthum.com/audio/opus/demo/96kbit.opus'

  // resolve audio as ready to play when we've fetched enough playback bytes
  audio.addEventListener('canplaythrough', () => { resolve(audio) })
})

async function play() {
  // wait for audio to be ready (loadAudio Promise resolved)
  const audio = await loadAudio

  // restart if already playing
  audio.currentTime = 0
  audio.play()
}

async function stop() {
  const audio = await loadAudio
  audio.pause()
}

button {
  font-size: 18px;
  padding: .5em 1em;
  border: none;
  background: #e9e9e9;
  color: #4c8bf5;
  cursor: pointer;
}

<button class="play" type="button" onclick="play()" title="Play">▶</button><button type="button" onclick="stop()" title="Stop">■</button>

How do I create/play audio in C++?

The problem here is that C++ is just a programming language. The same can be said for Python, though Python lives in a different ecosystem of modules and package management, which gets conflated (rightly or wrongly) with the language itself.

C++ doesn't have the same history or the same ecosystem, and this is part of the battle you will face when learning it. You don't have pip; you have a nebulous collection of frameworks, headers and libraries (some standard, some which need installation), all of which need to be linked, pathed or compiled. It is an ecosystem that is unfriendly if you try to approach it like a novice Python programmer. If you approach it agnostically, it is simultaneously very powerful and exceptionally tedious, a combination that tends to polarise developers.

This means that simple answers like "Use SFML!", SDL, NSOUND, OpenAL, CoreAudio, AVFoundation, JUCE, &c. are all technically "correct", but they massively gloss over large parts of setup, nomenclature and workflow that are just a pip install away with Python.

Pontificating aside, if you want to simply

  • create an array of floating point values
  • that represent the samples of a sine tone
  • then play those samples
  • on macOS

Then you are probably best just

  • creating your array
  • writing a .wav
  • opening the .wav with afplay

Is that the most open, versatile, DSP-orientated, play-from-RAM solution? No, of course not, but it is a solution to the problem you pose here. The alternative, and more correct, answer is an exhaustive list of every major media library, cross-platform and macOS-specific, with their setup, quirks and a minimum working example, which would result in an answer so obtusely long that I hope you can sympathise with why it is not best addressed on Stack Overflow.

A Simple CLI App

You can find all the constituent parts of this on SO, but I have tallied off so many "how do I play a sound in C++?" questions that it has made me realise they are not going away.

The setup for Xcode is to create a Command Line Tool project (Console App for Visual Studio).

Here is a header that wraps everything up into a single playSound function.

audio.h

#pragma once

//------------------------------------------------------------------------------
#include <iostream>
#include <fstream>
#include <string>
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#if defined _WIN32 || defined _WIN64
#pragma comment(lib, "Winmm")
#include <windows.h>
#endif
//------------------------------------------------------------------------------

/// A minimal RIFF/WAVE header for uncompressed 16-bit PCM audio.
struct WaveHeader
{
    /** waveFormatHeader: The first 4 bytes of a wav file should be the characters "RIFF" */
    char chunkID[4] = { 'R', 'I', 'F', 'F' };
    /** waveFormatHeader: This is the size of the entire file in bytes minus 8 bytes */
    uint32_t chunkSize = 0;
    /** waveFormatHeader: These should be the characters "WAVE" */
    char format[4] = { 'W', 'A', 'V', 'E' };
    /** waveFormatHeader: This should be the letters "fmt ", note the trailing space */
    char subChunk1ID[4] = { 'f', 'm', 't', ' ' };
    /** waveFormatHeader: Size of the rest of this subchunk; 16 for PCM */
    uint32_t subChunk1Size = 16;
    /** waveFormatHeader: For PCM this is 1, other values indicate compression */
    uint16_t audioFormat = 1;
    /** waveFormatHeader: Mono = 1, Stereo = 2, etc. */
    uint16_t numChannels = 1;
    /** waveFormatHeader: Sample rate of the file */
    uint32_t sampleRate = 44100;
    /** waveFormatHeader: SampleRate * NumChannels * BitsPerSample/8 */
    uint32_t byteRate = 44100 * 2;
    /** waveFormatHeader: The number of bytes for one frame including all channels */
    uint16_t blockAlign = 2;
    /** waveFormatHeader: 8 bits = 8, 16 bits = 16 */
    uint16_t bitsPerSample = 16;
    /** waveFormatHeader: Contains the letters "data" */
    char subChunk2ID[4] = { 'd', 'a', 't', 'a' };
    /** waveFormatHeader: == NumberOfFrames * NumChannels * BitsPerSample/8
        This is the number of bytes in the data.
    */
    uint32_t subChunk2Size = 0;

    WaveHeader(uint32_t samplingFrequency = 44100, uint16_t bitDepth = 16, uint16_t numberOfChannels = 1)
    {
        numChannels = numberOfChannels;
        sampleRate = samplingFrequency;
        bitsPerSample = bitDepth;

        byteRate = sampleRate * numChannels * bitsPerSample / 8;
        blockAlign = numChannels * bitsPerSample / 8;
    }

    /// sets the fields that refer to how large the wave file is
    /// @warning This MUST be called before writing a file, or the file will be unplayable.
    /// @param numberOfFrames total number of audio frames, i.e. total number of samples / number of channels
    void setFileSize(uint32_t numberOfFrames)
    {
        subChunk2Size = numberOfFrames * numChannels * bitsPerSample / 8;
        chunkSize = 36 + subChunk2Size;
    }
};

/// write an array of float data to a 16-bit, 44100 Hz mono wav file in the current working directory and then play it
/// @param audio audio samples, assumed to be 44100 Hz sampling rate
/// @param numberOfSamples total number of samples in audio
/// @param filename output file name; ".wav" is appended if it is missing
void playSound(float* audio,
               uint32_t numberOfSamples,
               const char* filename)
{
    std::ofstream fs;
    std::string filepath {filename};

    if (filepath.size() < 4 || filepath.substr(filepath.size() - 4, 4) != std::string(".wav"))
        filepath += std::string(".wav");

    fs.open(filepath, std::fstream::out | std::ios::binary);

    WaveHeader header {};
    header.setFileSize(numberOfSamples);

    fs.write((char*)&header, sizeof(WaveHeader));

    int16_t* audioData = new int16_t[numberOfSamples];
    constexpr float max16BitValue = 32768.0f;

    for (uint32_t i = 0; i < numberOfSamples; ++i)
    {
        // scale to the 16-bit range and clamp to avoid overflow
        int pcm = int(audio[i] * max16BitValue);

        if (pcm >= int(max16BitValue))
            pcm = int(max16BitValue) - 1;
        else if (pcm < -int(max16BitValue))
            pcm = -int(max16BitValue);

        audioData[i] = int16_t(pcm);
    }

    fs.write((char*)audioData, header.subChunk2Size);
    fs.close();
    delete[] audioData;

    std::cout << filename << " written to:\n" << filepath << std::endl;

#if defined _WIN32 || defined _WIN64
    // don't forget to add 'Winmm.lib' in Properties > Linker > Input > Additional Dependencies
    PlaySound(std::wstring(filepath.begin(), filepath.end()).c_str(), NULL, SND_FILENAME);
#else
    std::system((std::string("afplay ") + filepath).c_str());
#endif
}

main.cpp

Your main function could then be something like:

#include <iostream>
#include <cmath>
#include "audio.h"

int main(int argc, const char* argv[])
{
    const int numSamples = 44100;
    const float sampleRate = 44100.0f;
    const float frequency = 440.0f;
    float* sineWave = new float[numSamples];

    // phase increment per sample: 2 * pi * f / fs
    const float radsPerSamp = 2.0f * 3.1415926536f * frequency / sampleRate;

    for (int i = 0; i < numSamples; i++)
        sineWave[i] = std::sin(radsPerSamp * (float) i);

    playSound(sineWave, numSamples, "test.wav");

    delete[] sineWave;
    return 0;
}

How to play audio continuously in Xamarin.Forms

For Android

You need a hook to implement a looping MediaPlayer, that is, create a new player whenever the audio finishes.

Refer to : https://stackoverflow.com/a/29883923/8187800 .

For iOS

Due to ARC, the AVAudioPlayer is probably released after the dependency service call returns; to solve this, try making _player a global variable.

Refer to:
https://stackoverflow.com/a/8415802/8187800

How to play audio with gstreamer in C?

GStreamer is centered around pipelines, which are lists of elements. Elements have pads to exchange data on. In your example, decodebin has an output pad and audioconvert has an input pad. At the start of the pipeline, the pads need to be linked.

This is when pads agree on the format of data, as well as some other information, such as who's in charge of timing and maybe some more format specifics.

Your problem arises from the fact that decodebin is not a regular element. At runtime, when filesrc starts up, it tells decodebin what pad it has, and decodebin internally creates the elements needed to handle that file.

For example, filesrc location=test.mp4 ! decodebin would run in this order:

  • delay linking because types are unknown
  • start filesrc
  • filesrc says "trying to link, I have a pad of format MP4 (h264)"
  • decodebin sees this request and, in turn, creates on the fly an h264 parser that can handle the MP4 file
  • decodebin now has enough information to describe its pads, and it links the rest of the pipeline
  • video starts playing

Because you are using C to do this, you link the pipeline before filesrc loads the file. This means that decodebin doesn't know the format of its pads at startup, and therefore fails to link.

To fix this, you have two options:

1.) Swap out decodebin for something that supports only one type. If you know your videos will always be MP4s with h264, for example, you can use h264parse instead of decodebin. Because h264parse only works with one type of format, it knows its pad formats at the start, and will be able to link without issue.

2.) Reimplement the smart delayed linking. You can read the docs for more info, but the idea is to delay linking the pipeline and install callbacks that complete the linking once there's enough information. This is what gst-launch-1.0 does under the hood. It has the benefit of being more flexible: anything supported by decodebin will work. The downside is that it's much more complex, involves a nontrivial amount of work on your end, and is more fragile. If you can get away with it, try fix 1. A minimal sketch of the callback approach follows.
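
As an illustration of fix 2, here is a minimal sketch in C. It is not code from the question: it assumes an audio-only file named test.mp3 in the working directory, an audioconvert ! autoaudiosink back end, and GStreamer 1.x. The key piece is decodebin's "pad-added" signal, which fires once the stream type is known, so a callback can complete the delayed link:

#include <gst/gst.h>

// Called by decodebin when it exposes a new output pad at runtime,
// i.e. once it has worked out what is inside the file.
static void on_pad_added(GstElement *decodebin, GstPad *new_pad, gpointer user_data)
{
    GstElement *convert = GST_ELEMENT(user_data);
    GstPad *sink_pad = gst_element_get_static_pad(convert, "sink");

    // Complete the delayed link. A robust version would check the new
    // pad's caps first, since decodebin can expose video pads too.
    if (!gst_pad_is_linked(sink_pad))
        gst_pad_link(new_pad, sink_pad);

    gst_object_unref(sink_pad);
}

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GstElement *pipeline = gst_pipeline_new("audio-player");
    GstElement *source   = gst_element_factory_make("filesrc", "source");
    GstElement *decode   = gst_element_factory_make("decodebin", "decode");
    GstElement *convert  = gst_element_factory_make("audioconvert", "convert");
    GstElement *sink     = gst_element_factory_make("autoaudiosink", "sink");

    g_object_set(source, "location", "test.mp3", NULL); // assumed file name

    gst_bin_add_many(GST_BIN(pipeline), source, decode, convert, sink, NULL);

    // These links can be made immediately, because the pad formats are known...
    gst_element_link(source, decode);
    gst_element_link(convert, sink);

    // ...but decodebin's output pad only exists at runtime, so link it lazily.
    g_signal_connect(decode, "pad-added", G_CALLBACK(on_pad_added), convert);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Block until an error or end-of-stream, then clean up.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                                 GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg != NULL)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}

Compile with something like gcc player.c $(pkg-config --cflags --libs gstreamer-1.0), assuming pkg-config can find your GStreamer install.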


