Android Audio Subsystem (2)

Updated: 2024-10-05 03:24:06


Hello! This is 风筝's blog.
Welcome to exchange ideas with me.

The previous article, Android Audio Subsystem (1): the openOutput flow,
covered how an output is opened. So when, and how, does it actually write data?

This article uses Android N as the example.

//@Threads.cpp
bool AudioFlinger::PlaybackThread::threadLoop()
{
    //......
    ret = threadLoop_write();
    //......
}

threadLoop itself is fairly complex, so I cover it separately in Android Audio Subsystem (5): the AudioFlinger processing flow.

Let's take a quick look at PlaybackThread::threadLoop_write:

//@Threads.cpp
ssize_t AudioFlinger::PlaybackThread::threadLoop_write()
{
    // If an NBAIO sink is present, use it to write the normal mixer's submix
    if (mNormalSink != 0) {
        ssize_t framesWritten = mNormalSink->write((char *)mSinkBuffer + offset, count);
    // otherwise use the HAL / AudioStreamOut directly
    } else {
        // Direct output and offload threads
        // FIXME We should have an implementation of timestamps for direct output threads.
        // They are used e.g. for multichannel PCM playback over HDMI.
        bytesWritten = mOutput->write((char *)mSinkBuffer + offset, mBytesRemaining);
    }
}

From the comments we can tell that if mNormalSink has been assigned, mNormalSink->write is called; otherwise mOutput->write is called.
So there are two cases here:
1. mNormalSink is assigned
2. The direct output and offload case

Let's look at the mNormalSink case first.

1. mNormalSink is assigned

Under normal circumstances, i.e. the mixer scenario, mNormalSink is definitely assigned:

//@Threads.h
class PlaybackThread : public ThreadBase {
private:
    // The HAL output sink is treated as non-blocking, but current implementation is blocking
    sp<NBAIO_Sink>          mOutputSink;
    // If a fast mixer is present, the blocking pipe sink, otherwise clear
    sp<NBAIO_Sink>          mPipeSink;
    // The current sink for the normal mixer to write it's (sub)mix, mOutputSink or mPipeSink
    sp<NBAIO_Sink>          mNormalSink;
};

//@NBAIO.h
class NBAIO_Sink : public NBAIO_Port {
    virtual ssize_t write(const void *buffer, size_t count) = 0;
};

mNormalSink is an sp<NBAIO_Sink>, and NBAIO_Sink::write is a pure virtual function, so to find the actual write implementation we first have to see what mNormalSink is assigned to.

Searching the code for mNormalSink, we find it is assigned in the MixerThread constructor (MixerThread inherits from PlaybackThread):

//@Threads.cpp
static const enum {
    FastMixer_Never,    // never initialize or use: for debugging only
    FastMixer_Always,   // always initialize and use, even if not needed: for debugging only
                        // normal mixer multiplier is 1
    FastMixer_Static,   // initialize if needed, then use all the time if initialized,
                        // multiplier is calculated based on min & max normal mixer buffer size
    FastMixer_Dynamic,  // initialize if needed, then use dynamically depending on track load,
                        // multiplier is calculated based on min & max normal mixer buffer size
} kUseFastMixer = FastMixer_Static;

AudioFlinger::MixerThread::MixerThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output,
        audio_io_handle_t id, audio_devices_t device, bool systemReady, type_t type)
    :   PlaybackThread(audioFlinger, output, id, device, type, systemReady), // PlaybackThread is constructed here
        // mAudioMixer below
        // mFastMixer below
        mFastMixerFutex(0),
        mMasterMono(false)
        // mOutputSink below
        // mPipeSink below
        // mNormalSink below
{
    mAudioMixer = new AudioMixer(mNormalFrameCount, mSampleRate);
    mOutputSink = new AudioStreamOutSink(output->stream);

    // initialize fast mixer depending on configuration
    bool initFastMixer;
    switch (kUseFastMixer) { // kUseFastMixer = FastMixer_Static
    case FastMixer_Never:
        initFastMixer = false;
        break;
    case FastMixer_Always:
        initFastMixer = true;
        break;
    case FastMixer_Static:
    case FastMixer_Dynamic:
        initFastMixer = mFrameCount < mNormalFrameCount;
        break;
    }

    if (initFastMixer) {
        MonoPipe *monoPipe = new MonoPipe(mNormalFrameCount * 4, format, true /*writeCanBlock*/);
        mPipeSink = monoPipe;

        // create fast mixer and configure it initially with just one fast track for our submix
        mFastMixer = new FastMixer();
        // start the fast mixer
        mFastMixer->run("FastMixer", PRIORITY_URGENT_AUDIO);
    }

    switch (kUseFastMixer) { // kUseFastMixer = FastMixer_Static
    case FastMixer_Never:
    case FastMixer_Dynamic:
        mNormalSink = mOutputSink;
        break;
    case FastMixer_Always:
        mNormalSink = mPipeSink;
        break;
    case FastMixer_Static:
        mNormalSink = initFastMixer ? mPipeSink : mOutputSink;
        break;
    }
}

The FastMixer also shows up here, but it is not the focus of this article, so I'll set it aside.
A bit of a headache: the mNormalSink assignment itself splits into cases. By default kUseFastMixer = FastMixer_Static, and initFastMixer = mFrameCount < mNormalFrameCount.

So we'll discuss two cases here:
1.mNormalSink = mOutputSink;
2.mNormalSink = mPipeSink;
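
To make the selection above concrete, here is a minimal standalone sketch of how kUseFastMixer and the two frame counts decide between the pipe sink and the output sink. The names selectSink, SinkType, SINK_PIPE and SINK_OUTPUT are my own shorthand for mPipeSink / mOutputSink, not AOSP identifiers:

```cpp
#include <cassert>
#include <cstddef>

// Same policy values as the kUseFastMixer enum in Threads.cpp.
enum FastMixerPolicy { FastMixer_Never, FastMixer_Always, FastMixer_Static, FastMixer_Dynamic };
// Hypothetical stand-ins for mOutputSink / mPipeSink.
enum SinkType { SINK_OUTPUT, SINK_PIPE };

// Returns which sink mNormalSink would point at, mirroring the two switches
// in the MixerThread constructor. frameCount is the HAL (fast) frame count,
// normalFrameCount the normal mixer frame count.
SinkType selectSink(FastMixerPolicy policy, size_t frameCount, size_t normalFrameCount) {
    bool initFastMixer = false;
    switch (policy) {
    case FastMixer_Never:   initFastMixer = false; break;
    case FastMixer_Always:  initFastMixer = true;  break;
    case FastMixer_Static:
    case FastMixer_Dynamic: initFastMixer = frameCount < normalFrameCount; break;
    }
    switch (policy) {
    case FastMixer_Never:
    case FastMixer_Dynamic: return SINK_OUTPUT;
    case FastMixer_Always:  return SINK_PIPE;
    default:                return initFastMixer ? SINK_PIPE : SINK_OUTPUT; // FastMixer_Static
    }
}
```

With the default FastMixer_Static policy, the pipe sink is chosen exactly when the HAL frame count is smaller than the normal mixer frame count.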

1.1 mNormalSink = mOutputSink

Let's see where mOutputSink comes from: mOutputSink = new AudioStreamOutSink(output->stream);

//@AudioStreamOutSink.h
class AudioStreamOutSink : public NBAIO_Sink {
    sp<StreamOutHalInterface> mStream;
};

//@AudioStreamOutSink.cpp
AudioStreamOutSink::AudioStreamOutSink(sp<StreamOutHalInterface> stream) :
        NBAIO_Sink(),
        mStream(stream),
        mStreamBufferSizeBytes(0)
{
    ALOG_ASSERT(stream != 0);
}

Here mStream is initialized from the stream parameter, i.e. the output->stream that was passed in.
In other words, when mNormalSink = mOutputSink, the mNormalSink->write in PlaybackThread::threadLoop_write is AudioStreamOutSink::write:

//@AudioStreamOutSink.cpp
ssize_t AudioStreamOutSink::write(const void *buffer, size_t count)
{
    ssize_t ret = mStream->write(mStream, buffer, count * mFrameSize);
    if (ret > 0) {
        ret /= mFrameSize;
        mFramesWritten += ret;
    } else {
        // FIXME verify HAL implementations are returning the correct error codes e.g. WOULD_BLOCK
    }
    return ret;
}

So where does this mStream->write end up? From the type alone, StreamOutHalInterface is clearly HAL-related.
We just saw that the AudioStreamOutSink constructor initializes mStream from the output->stream argument, so let's trace where output->stream comes from.
output is passed into the MixerThread constructor, so where is MixerThread created?

sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{
    AudioStreamOut *outputStream = NULL;
    status_t status = outHwDev->openOutputStream(
            &outputStream,
            *output,
            devices,
            flags,
            config,
            address.string());

    if (status == NO_ERROR) {
        PlaybackThread *thread;
        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                || !isValidPcmSinkFormat(config->format)
                || !isValidPcmSinkChannelMask(config->channel_mask)) {
            thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
        } else {
            thread = new MixerThread(this, outputStream, *output, devices, mSystemReady); // here!!!
        }
        mPlaybackThreads.add(*output, thread);
    }
}

The earlier article on the open flow touched on this briefly: Android Audio Subsystem (1): the openOutput flow.
Here the &outputStream filled in by outHwDev->openOutputStream is handed to new MixerThread:

//@AudioHwDevice.cpp
status_t AudioHwDevice::openOutputStream(AudioStreamOut **ppStreamOut,
                                         audio_io_handle_t handle,
                                         audio_devices_t devices,
                                         audio_output_flags_t flags,
                                         struct audio_config *config,
                                         const char *address)
{
    // create the AudioStreamOut audio output stream
    AudioStreamOut *outputStream = new AudioStreamOut(this, flags);
    *ppStreamOut = outputStream; // assigned here, i.e. written back through &outputStream
}

So outputStream is an output stream, which means the mStream->write inside AudioStreamOutSink::write is AudioStreamOut::write:

//@AudioStreamOut.h
class AudioStreamOut {
public:
    audio_stream_out_t *stream;
};

//@AudioStreamOut.cpp
ssize_t AudioStreamOut::write(const void *buffer, size_t numBytes)
{
    ALOG_ASSERT(stream != NULL);
    ssize_t bytesWritten = stream->write(stream, buffer, numBytes);
    if (bytesWritten > 0 && mHalFrameSize > 0) {
        mFramesWritten += bytesWritten / mHalFrameSize;
    }
    return bytesWritten;
}

Here we clearly see stream->write. stream is a member of AudioStreamOut, so where is it assigned?

status_t AudioStreamOut::open(audio_io_handle_t handle,
                              audio_devices_t devices,
                              struct audio_config *config,
                              const char *address)
{
    audio_stream_out_t *outStream;
    int status = hwDev()->open_output_stream(hwDev(),
                                             handle,
                                             devices,
                                             customFlags,
                                             config,
                                             &outStream,
                                             address);
    if (status == NO_ERROR) {
        stream = outStream;
    }
}

There really are a lot of steps in this flow. During open, stream is assigned: stream = outStream, and at this point we have reached the HAL layer: adev->hw_device.open_output_stream = adev_open_output_stream. I won't paste the HAL code in detail here.
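
The key idea at the HAL layer is that adev_open_output_stream fills in a struct whose write member is a function pointer, and that pointer is what the framework's stream->write lands in. Here is a bare-bones sketch of that wiring with toy, simplified structs (toy_stream_out, toy_open_output_stream and out_write are hypothetical names, not a real HAL module):

```cpp
#include <cassert>
#include <cstddef>

// Minimal stand-in for the legacy audio_stream_out type: a struct
// carrying a C-style write function pointer.
struct toy_stream_out;
typedef long (*out_write_fn)(toy_stream_out *stream, const void *buffer, size_t bytes);

struct toy_stream_out {
    out_write_fn write;   // the framework calls stream->write(stream, buf, bytes)
    size_t total_bytes;   // toy bookkeeping instead of real DMA
};

// Toy HAL write: a real HAL would pcm_write() here.
static long out_write(toy_stream_out *stream, const void *buffer, size_t bytes) {
    (void)buffer;
    stream->total_bytes += bytes;
    return (long)bytes;
}

// Plays the role of adev_open_output_stream: allocates the stream object and
// installs the write function pointer before handing it back to the caller.
int toy_open_output_stream(toy_stream_out **stream_out) {
    toy_stream_out *out = new toy_stream_out();
    out->write = out_write;
    out->total_bytes = 0;
    *stream_out = out;
    return 0; // NO_ERROR
}
```

Once open returns, the framework side never needs to know which HAL it is talking to; it just calls through the installed pointer.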

To summarize: when mNormalSink = mOutputSink, mNormalSink->write ends up calling the HAL layer's write operation.
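
As a sanity check on that chain, here is a self-contained toy model of the three hops we just traced: NBAIO sink, then the AudioStreamOut wrapper, then the HAL function pointer. All class names (ToySink, ToyStreamOut, FakeHalStream) are my own, not the real AOSP classes; the fake HAL just counts bytes:

```cpp
#include <cassert>
#include <cstddef>

// Bottom layer: mimics audio_stream_out_t with a C-style write function pointer.
struct FakeHalStream {
    size_t bytesReceived;
    long (*write)(FakeHalStream *stream, const void *buffer, size_t bytes);
};

static long halWrite(FakeHalStream *stream, const void *buffer, size_t bytes) {
    (void)buffer;
    stream->bytesReceived += bytes; // pretend the hardware consumed everything
    return (long)bytes;
}

// Middle layer: stands in for AudioStreamOut; counts in bytes, tracks frames.
struct ToyStreamOut {
    FakeHalStream *stream;
    size_t halFrameSize;
    size_t framesWritten;
    long write(const void *buffer, size_t numBytes) {
        long bytesWritten = stream->write(stream, buffer, numBytes);
        if (bytesWritten > 0 && halFrameSize > 0)
            framesWritten += (size_t)bytesWritten / halFrameSize;
        return bytesWritten;
    }
};

// Top layer: stands in for AudioStreamOutSink; the NBAIO side counts in
// frames, the layer below counts in bytes, hence the multiply/divide.
struct ToySink {
    ToyStreamOut *out;
    size_t frameSize;
    long write(const void *buffer, size_t frameCount) {
        long ret = out->write(buffer, frameCount * frameSize);
        return ret > 0 ? ret / (long)frameSize : ret;
    }
};

// Drives one write through all three hops; returns frames accepted.
long writeThroughChain(ToySink &sink, const void *buf, size_t frames) {
    return sink.write(buf, frames);
}
```

The unit conversion is the main thing to notice: the sink layer speaks frames while everything below it speaks bytes, exactly as in the real AudioStreamOutSink::write.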

1.2 mNormalSink = mPipeSink

What about the mNormalSink = mPipeSink case? Its setup is simpler.
In the MixerThread constructor:

MonoPipe *monoPipe = new MonoPipe(mNormalFrameCount * 4, format, true /*writeCanBlock*/);
mPipeSink = monoPipe;

So mNormalSink->write is MonoPipe::write:

ssize_t MonoPipe::write(const void *buffer, size_t count)
{
    //......
}

Honestly, I haven't fully understood this part. Why is Android so complex... I'll leave it for now.
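
Even without reading the real implementation, conceptually MonoPipe is a single-writer, single-reader FIFO of audio frames sitting between the normal mixer and the FastMixer. Here is a much-simplified toy of that idea (my own code, not the NBAIO implementation; the real MonoPipe adds lock-free indices, timestamps and write throttling on top):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Toy single-reader/single-writer FIFO of fixed-size audio frames.
class ToyPipe {
public:
    ToyPipe(size_t maxFrames, size_t frameSize)
        : mBuf(maxFrames * frameSize), mFrameSize(frameSize),
          mMaxFrames(maxFrames), mFront(0), mRear(0) {}

    // Copies in up to `count` frames; returns frames actually accepted
    // (a real MonoPipe may block or report WOULD_BLOCK instead of truncating).
    size_t write(const void *buffer, size_t count) {
        size_t avail = mMaxFrames - (mRear - mFront);
        size_t todo = count < avail ? count : avail;
        for (size_t i = 0; i < todo; ++i) {
            size_t slot = (mRear + i) % mMaxFrames;
            std::memcpy(&mBuf[slot * mFrameSize],
                        (const char *)buffer + i * mFrameSize, mFrameSize);
        }
        mRear += todo;
        return todo;
    }

    // Drains up to `count` frames; returns frames actually read.
    size_t read(void *buffer, size_t count) {
        size_t filled = mRear - mFront;
        size_t todo = count < filled ? count : filled;
        for (size_t i = 0; i < todo; ++i) {
            size_t slot = (mFront + i) % mMaxFrames;
            std::memcpy((char *)buffer + i * mFrameSize,
                        &mBuf[slot * mFrameSize], mFrameSize);
        }
        mFront += todo;
        return todo;
    }

    size_t framesFilled() const { return mRear - mFront; }

private:
    std::vector<char> mBuf;
    size_t mFrameSize, mMaxFrames, mFront, mRear;
};
```

In the real pipeline, the FastMixer thread is the reader draining this FIFO and pushing the data to the HAL.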

2. Direct output and offload

Now for the direct output and offload case.
Typical HDMI devices go through direct output, so let's analyze it:

// AudioStreamOut *mOutput;
bytesWritten = mOutput->write((char *)mSinkBuffer + offset, mBytesRemaining);

Searching this file for where mOutput is initialized, we find it:

AudioFlinger::PlaybackThread::PlaybackThread(const sp<AudioFlinger>& audioFlinger,
                                             AudioStreamOut* output,
                                             audio_io_handle_t id,
                                             audio_devices_t device,
                                             type_t type,
                                             bool systemReady)
    :   ThreadBase(audioFlinger, id, device, AUDIO_DEVICE_NONE, type, systemReady),
        //......
        mActiveTracksGeneration(0),
        // mStreamTypes[] initialized in constructor body
        mOutput(output), // initialized right here
        mLastWriteTime(-1), mNumWrites(0), mNumDelayedWrites(0), mInWrite(false),
        mMixerStatus(MIXER_IDLE),
        //......
{
}

The PlaybackThread constructor initializes mOutput from output, one of its parameters. So where is that created?
Usually in the constructor of a concrete playback thread such as OffloadThread, DirectOutputThread, or MixerThread. MixerThread is the most common (and appeared at the start of this article), so let's analyze MixerThread:

AudioFlinger::MixerThread::MixerThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output,
        audio_io_handle_t id, audio_devices_t device, bool systemReady, type_t type)
    :   PlaybackThread(audioFlinger, output, id, device, type, systemReady), // right here
        // mAudioMixer below
        // mFastMixer below
        mFastMixerFutex(0),
        mMasterMono(false)
        // mOutputSink below
        // mPipeSink below
        // mNormalSink below
{
}

Tracing further: where does the output passed to PlaybackThread(audioFlinger, output, id, device, type, systemReady), i.e. MixerThread's own output parameter, come from?

//@AudioFlinger.cpp
sp<AudioFlinger::ThreadBase> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                        audio_io_handle_t *output,
                                                        audio_config_t *config,
                                                        audio_devices_t devices,
                                                        const String8& address,
                                                        audio_output_flags_t flags)
{
    // outputStream is initialized here
    AudioStreamOut *outputStream = NULL;
    status_t status = outHwDev->openOutputStream(
            &outputStream,
            *output,
            devices,
            flags,
            config,
            address.string());

    if (status == NO_ERROR) {
        PlaybackThread *thread;
        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                || !isValidPcmSinkFormat(config->format)
                || !isValidPcmSinkChannelMask(config->channel_mask)) {
            thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);
        } else {
            // create the MixerThread
            thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);
        }
        mPlaybackThreads.add(*output, thread);
        return thread;
    }
}

And we're back at the familiar openOutput_l function: Android Audio Subsystem (1): the openOutput flow.

So we can tell that mOutput->write is AudioStreamOut::write, and this write pushes data down to the lower layers.

However...

Actually, such a long analysis wasn't necessary: mOutput is a member of class PlaybackThread with type AudioStreamOut*:

class PlaybackThread : public ThreadBase {
    AudioStreamOut                  *mOutput;
};

So we can also see directly that mOutput->write is AudioStreamOut::write (note that the listing below comes from a device-specific libaudio implementation, not the frameworks version quoted earlier):

ssize_t AudioStreamOut::write(const void* buffer, size_t bytes)
{
    AudioOutputList::iterator I;
    bool checkDMAStart = false;
    bool hasActiveOutputs = false;
    {
        Mutex::Autolock _l(mRoutingLock);
        for (I = mPhysOutputs.begin(); I != mPhysOutputs.end(); ++I) {
            if (AudioOutput::PRIMED == (*I)->getState())
                checkDMAStart = true;
            if ((*I)->getState() == AudioOutput::ACTIVE)
                hasActiveOutputs = true;
        }
    }

    if (checkDMAStart) {
        int64_t junk;
        getNextWriteTimestamp_internal(&junk);
    }

    // We always call processOneChunk on the outputs, as it is the
    // tick for their state machines.
    {
        Mutex::Autolock _l(mRoutingLock);
        for (I = mPhysOutputs.begin(); I != mPhysOutputs.end(); ++I) {
            (*I)->processOneChunk((uint8_t *)buffer, bytes, hasActiveOutputs, mInputFormat);
        }

        // If we don't actually have any physical outputs to write to, just sleep
        // for the proper amount of time in order to simulate the throttle that writing
        // to the hardware would impose.
        uint32_t framesWritten = bytes / mInputFrameSize;
        finishedWriteOp(framesWritten, (0 == mPhysOutputs.size()));
    }
}

Since this is direct output, it first checks whether DMA has started, then whether there are any active outputs.
It then calls (*I)->processOneChunk on each output for processing:

void AudioOutput::processOneChunk(const uint8_t* data, size_t len,
                                  bool hasActiveOutputs, audio_format_t format) {
    doPCMWrite(data, len, format); // write the PCM data
}

In the end this also calls down to pcm_write, and the whole path is connected.
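
The usual shape of that last hop is a loop pushing the mixed buffer down in period-sized chunks via tinyalsa's pcm_write. Here is a rough, self-contained illustration of the pattern; fake_pcm_write, FakePcm and chunkedPcmWrite are stand-ins I made up (real tinyalsa pcm_write takes a struct pcm* and returns 0 on success):

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for tinyalsa's pcm device: just accumulates how many bytes
// the "driver" consumed instead of doing DMA.
struct FakePcm { size_t consumed; };

// Mimics pcm_write's contract: 0 on success, negative errno on failure.
static int fake_pcm_write(FakePcm *pcm, const void *data, unsigned int count) {
    (void)data;
    pcm->consumed += count;
    return 0;
}

// Writes `len` bytes in period-sized chunks, as a HAL out_write/doPCMWrite
// typically does; returns bytes written, or a negative error code.
long chunkedPcmWrite(FakePcm *pcm, const unsigned char *data, size_t len, size_t periodBytes) {
    size_t done = 0;
    while (done < len) {
        size_t chunk = len - done < periodBytes ? len - done : periodBytes;
        int ret = fake_pcm_write(pcm, data + done, (unsigned int)chunk);
        if (ret < 0)
            return ret; // a real HAL would recover or go to standby here
        done += chunk;
    }
    return (long)done;
}
```

The chunk size corresponds to the ALSA period size, which is also what paces the writer: each pcm_write blocks until the driver has room for another period.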

Published: 2024-02-13 21:47:52