How to play audio backwards?

Some people have suggested reading the audio data from end to start, creating a copy written from start to end, and then simply playing back that reversed audio data.

Are there any existing examples for iOS that show how this is done?

I found an example project called MixerHost, which at some point uses AudioUnitSampleType to hold the audio data that has been read from a file, assigning it to a buffer.

This is defined as:

typedef SInt32 AudioUnitSampleType;
#define kAudioUnitSampleFractionBits 24

And according to Apple:

The canonical audio sample type for audio units and other audio processing in iPhone OS is noninterleaved linear PCM with 8.24-bit fixed-point samples.

In other words, it holds noninterleaved linear PCM audio data.
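Concretely, 8.24 fixed point packs 8 integer bits (including sign) and 24 fractional bits into each SInt32, so a normalized float sample converts to and from fixed point by a scale of 2^24. Here is a minimal sketch of that relationship in plain C; FloatToFixed824 and Fixed824ToFloat are illustrative helper names, not part of the Core Audio API:

#include <stdint.h>

typedef int32_t AudioUnitSampleType;            // SInt32 on iPhone OS
#define kAudioUnitSampleFractionBits 24

// Normalized float in [-1.0, 1.0) -> 8.24 fixed point: scale by 2^24.
static inline AudioUnitSampleType FloatToFixed824 (float x) {
    return (AudioUnitSampleType) (x * (float) (1 << kAudioUnitSampleFractionBits));
}

// 8.24 fixed point -> normalized float: divide by 2^24.
static inline float Fixed824ToFloat (AudioUnitSampleType s) {
    return (float) s / (float) (1 << kAudioUnitSampleFractionBits);
}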

But I can't figure out where this data is being read in and where it is stored. Here's the code that loads the audio data and buffers it:

- (void) readAudioFilesIntoMemory {

    for (int audioFile = 0; audioFile < NUM_FILES; ++audioFile) {

        NSLog (@"readAudioFilesIntoMemory - file %i", audioFile);

        // Instantiate an extended audio file object.
        ExtAudioFileRef audioFileObject = 0;

        // Open an audio file and associate it with the extended audio file object.
        OSStatus result = ExtAudioFileOpenURL (sourceURLArray[audioFile], &audioFileObject);

        if (noErr != result || NULL == audioFileObject) {
            [self printErrorMessage: @"ExtAudioFileOpenURL" withStatus: result];
            return;
        }

        // Get the audio file's length in frames.
        UInt64 totalFramesInFile = 0;
        UInt32 frameLengthPropertySize = sizeof (totalFramesInFile);

        result = ExtAudioFileGetProperty (
                     audioFileObject,
                     kExtAudioFileProperty_FileLengthFrames,
                     &frameLengthPropertySize,
                     &totalFramesInFile
                 );

        if (noErr != result) {
            [self printErrorMessage: @"ExtAudioFileGetProperty (audio file length in frames)" withStatus: result];
            return;
        }

        // Assign the frame count to the soundStructArray instance variable.
        soundStructArray[audioFile].frameCount = totalFramesInFile;

        // Get the audio file's number of channels.
        AudioStreamBasicDescription fileAudioFormat = {0};
        UInt32 formatPropertySize = sizeof (fileAudioFormat);

        result = ExtAudioFileGetProperty (
                     audioFileObject,
                     kExtAudioFileProperty_FileDataFormat,
                     &formatPropertySize,
                     &fileAudioFormat
                 );

        if (noErr != result) {
            [self printErrorMessage: @"ExtAudioFileGetProperty (file audio format)" withStatus: result];
            return;
        }

        UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;

        // Allocate memory in the soundStructArray instance variable to hold the
        // left channel, or mono, audio data.
        soundStructArray[audioFile].audioDataLeft =
            (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

        AudioStreamBasicDescription importFormat = {0};

        if (2 == channelCount) {
            soundStructArray[audioFile].isStereo = YES;
            // Sound is stereo, so allocate memory in the soundStructArray instance
            // variable to hold the right channel audio data.
            soundStructArray[audioFile].audioDataRight =
                (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
            importFormat = stereoStreamFormat;
        } else if (1 == channelCount) {
            soundStructArray[audioFile].isStereo = NO;
            importFormat = monoStreamFormat;
        } else {
            NSLog (@"*** WARNING: File format not supported - wrong number of channels");
            ExtAudioFileDispose (audioFileObject);
            return;
        }

        // Assign the appropriate mixer input bus stream data format to the extended
        // audio file object. This is the format used for the audio data placed into
        // the audio buffer in the SoundStruct data structure, which is in turn used
        // in the inputRenderCallback callback function.
        result = ExtAudioFileSetProperty (
                     audioFileObject,
                     kExtAudioFileProperty_ClientDataFormat,
                     sizeof (importFormat),
                     &importFormat
                 );

        if (noErr != result) {
            [self printErrorMessage: @"ExtAudioFileSetProperty (client data format)" withStatus: result];
            return;
        }

        // Set up an AudioBufferList struct, which has two roles:
        //
        // 1. It gives the ExtAudioFileRead function the configuration it
        //    needs to correctly provide the data to the buffer.
        //
        // 2. It points to the soundStructArray[audioFile].audioDataLeft buffer, so
        //    that audio data obtained from disk using the ExtAudioFileRead function
        //    goes to that buffer.

        // Allocate memory for the buffer list struct according to the number of
        // channels it represents.
        AudioBufferList *bufferList;

        bufferList = (AudioBufferList *) malloc (
            sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
        );

        if (NULL == bufferList) {
            NSLog (@"*** malloc failure for allocating bufferList memory");
            return;
        }

        // Initialize the mNumberBuffers member.
        bufferList->mNumberBuffers = channelCount;

        // Initialize the mBuffers member to 0.
        AudioBuffer emptyBuffer = {0};
        size_t arrayIndex;
        for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
            bufferList->mBuffers[arrayIndex] = emptyBuffer;
        }

        // Set up the AudioBuffer structs in the buffer list.
        bufferList->mBuffers[0].mNumberChannels = 1;
        bufferList->mBuffers[0].mDataByteSize = totalFramesInFile * sizeof (AudioUnitSampleType);
        bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;

        if (2 == channelCount) {
            bufferList->mBuffers[1].mNumberChannels = 1;
            bufferList->mBuffers[1].mDataByteSize = totalFramesInFile * sizeof (AudioUnitSampleType);
            bufferList->mBuffers[1].mData = soundStructArray[audioFile].audioDataRight;
        }

        // Perform a synchronous, sequential read of the audio data out of the file
        // and into the soundStructArray[audioFile].audioDataLeft and (if stereo)
        // .audioDataRight members.
        UInt32 numberOfPacketsToRead = (UInt32) totalFramesInFile;

        result = ExtAudioFileRead (
                     audioFileObject,
                     &numberOfPacketsToRead,
                     bufferList
                 );

        free (bufferList);

        if (noErr != result) {

            [self printErrorMessage: @"ExtAudioFileRead failure - " withStatus: result];

            // If reading from the file failed, then free the memory for the sound buffer.
            free (soundStructArray[audioFile].audioDataLeft);
            soundStructArray[audioFile].audioDataLeft = 0;

            if (2 == channelCount) {
                free (soundStructArray[audioFile].audioDataRight);
                soundStructArray[audioFile].audioDataRight = 0;
            }

            ExtAudioFileDispose (audioFileObject);
            return;
        }

        NSLog (@"Finished reading file %i into memory", audioFile);

        // Set the sample index to zero, so that playback starts at the
        // beginning of the sound.
        soundStructArray[audioFile].sampleNumber = 0;

        // Dispose of the extended audio file object, which also
        // closes the associated file.
        ExtAudioFileDispose (audioFileObject);
    }
}

Which part contains the array of audio samples which have to be reversed? Is it the AudioUnitSampleType?

bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;

Note: audioDataLeft is declared as an AudioUnitSampleType *, that is, a pointer to SInt32 samples rather than an array type; it points at the block allocated with calloc in the code above.

I found a clue on the Core Audio mailing list:

Well, nothing to do with iPh*n* as far as I know (unless some audio API has been omitted -- I am not a member of that program). AFAIR, AudioFile.h and ExtendedAudioFile.h should provide you with what you need to read or write a caf and access its streams/channels. Basically, you want to read each channel/stream backwards, so, if you don't need properties of the audio file it is pretty straightforward once you have a handle on that channel's data, assuming it is not in a compressed format. Considering the number of formats a caf can represent, this could take a few more lines of code than you're thinking. Once you have a handle on uncompressed data, it should be about as easy as reversing a string. Then you would of course replace the file's data with the reversed data, or you could just feed the audio output (or wherever you're sending the reversed signal) reading whatever stream you have backwards.

This is what I tried, but when I assign my reversed buffer to the mData of both channels, I hear nothing:

AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;

AudioUnitSampleType *reversedData =
    (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

UInt64 j = 0;
for (UInt64 i = (totalFramesInFile - 1); i > -1; i--) {
    reversedData[j] = leftData[i];
    j++;
}
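Note that the loop above can never run: i is a UInt64, so in the condition i > -1 the -1 is converted to UINT64_MAX and the comparison is always false. The loop body is skipped entirely and reversedData stays all zeros, which plays back as silence. A minimal corrected sketch using the same variables, counting the destination index up instead of decrementing an unsigned index toward -1:

AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;
AudioUnitSampleType *reversedData =
    (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

// Frame j of the reversed buffer is frame (totalFramesInFile - 1 - j) of the
// original, so no unsigned variable ever has to step below zero.
for (UInt64 j = 0; j < totalFramesInFile; j++) {
    reversedData[j] = leftData[totalFramesInFile - 1 - j];
}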

Accepted answer

Typically, when an ASBD is being used, the fields describe the complete layout of the sample data in the buffers that are represented by this description - where typically those buffers are represented by an AudioBuffer that is contained in an AudioBufferList.

However, when an ASBD has the kAudioFormatFlagIsNonInterleaved flag, the AudioBufferList has a different structure and semantics. In this case, the ASBD fields will describe the format of ONE of the AudioBuffers that are contained in the list, AND each AudioBuffer in the list is determined to have a single (mono) channel of audio data. Then, the ASBD's mChannelsPerFrame will indicate the total number of AudioBuffers that are contained within the AudioBufferList - where each buffer contains one channel. This is used primarily with the AudioUnit (and AudioConverter) representation of this list - and won't be found in the AudioHardware usage of this structure.
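To make that concrete, here is a minimal sketch of the noninterleaved layout the answer describes, matching what the MixerHost code above builds; totalFramesInFile and the 8.24 AudioUnitSampleType come from that code, the rest is illustrative:

UInt32 channelCount = 2;    // == the ASBD's mChannelsPerFrame when noninterleaved

// One AudioBuffer per channel, so size the variable-length struct accordingly.
AudioBufferList *abl = (AudioBufferList *) malloc (
    sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1));
abl->mNumberBuffers = channelCount;

for (UInt32 ch = 0; ch < channelCount; ch++) {
    abl->mBuffers[ch].mNumberChannels = 1;       // each buffer holds one mono channel
    abl->mBuffers[ch].mDataByteSize = totalFramesInFile * sizeof (AudioUnitSampleType);
    abl->mBuffers[ch].mData = calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
}

In this layout, the per-channel sample buffers (audioDataLeft and audioDataRight in the MixerHost code) are themselves the arrays of frameCount SInt32 samples that have to be reversed.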
