Android Codec2 CCodec (35) Data Rendering

The previous article covered how output data is passed back up; in this one we walk through the data rendering flow.

1. renderOutputBuffer

Data rendering splits into video rendering and audio rendering. Video rendering involves two operations: dropping (via discardBuffer) and rendering (via renderOutputBuffer). For audio, both dropping and rendering go through discardBuffer, because when rendering, the upper layer writes the data to AudioTrack, and once the data has been written the output buffer can be discarded. In this section we start with the video rendering method, renderOutputBuffer.

status_t CCodecBufferChannel::renderOutputBuffer(
        const sp<MediaCodecBuffer> &buffer, int64_t timestampNs)

renderOutputBuffer takes two parameters: a MediaCodecBuffer and the system timestamp at which to render it.

std::shared_ptr<C2Buffer> c2Buffer;
bool released = false;
{
    Mutexed<Output>::Locked output(mOutput);
    if (output->buffers) {
        // 1.
        released = output->buffers->releaseBuffer(buffer, &c2Buffer);
    }
}
// 2. 
sendOutputBuffers();
feedInputBufferIfAvailable();
if (!c2Buffer) {
    return INVALID_OPERATION;
}
  • Call OutputBuffers' releaseBuffer to obtain the C2Buffer, clearing clientBuffer or setting ownedByClient to false. After the call, compBuffer references the external c2Buffer.
  • Try to send the output buffers back up to the upper layer.
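The ownership handoff performed by releaseBuffer can be sketched with a toy model (hypothetical names `Entry` and `releaseBufferAt`; the real code looks buffers up by MediaCodecBuffer rather than by index):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Toy model of releasing a client-owned output buffer: hand the
// component buffer back to the caller and clear client ownership.
struct Entry {
    std::shared_ptr<int> compBuffer;  // stands in for std::shared_ptr<C2Buffer>
    bool ownedByClient = false;
};

bool releaseBufferAt(std::vector<Entry>& entries, size_t index,
                     std::shared_ptr<int>* c2Buffer) {
    if (index >= entries.size() || !entries[index].ownedByClient) {
        return false;  // not a buffer the client currently owns
    }
    if (c2Buffer) {
        *c2Buffer = entries[index].compBuffer;  // caller now holds the buffer
    }
    entries[index].ownedByClient = false;
    entries[index].compBuffer.reset();
    return true;
}
```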

A section is skipped here; we jump straight to this part:

// 1. Get the C2ConstGraphicBlock list from the C2Buffer
std::vector<C2ConstGraphicBlock> blocks = c2Buffer->data().graphicBlocks();
if (blocks.size() != 1u) {  // return directly if the size is not exactly 1
    return UNKNOWN_ERROR;
}
// 2. Take the C2ConstGraphicBlock
const C2ConstGraphicBlock &block = blocks.front();
// 3. Create the fence
C2Fence c2fence = block.fence();
sp<Fence> fence = Fence::NO_FENCE;
if (c2fence.isHW()) {
    int fenceFd = c2fence.fd();
    fence = sp<Fence>::make(fenceFd);
    if (!fence) {
        ALOGE("[%s] Failed to allocate a fence", mName);
        close(fenceFd);
        return NO_MEMORY;
    }
}

// 4. Create the QueueBufferInput, loading the input parameters
IGraphicBufferProducer::QueueBufferInput qbi(
        timestampNs,
        false, // droppable
        dataSpace,
        Rect(blocks.front().crop().left,
                blocks.front().crop().top,
                blocks.front().crop().right(),
                blocks.front().crop().bottom()),
        videoScalingMode,
        transform,
        fence, 0);

qbi.setSurfaceDamage(Region::INVALID_REGION); // we don't have dirty regions
qbi.getFrameTimestamps = true; // we need to know when a frame is rendered
// 5. Create the QueueBufferOutput
IGraphicBufferProducer::QueueBufferOutput qbo;
// 6. Call queueToOutputSurface
status_t result = mComponent->queueToOutputSurface(block, qbi, &qbo);
if (result != OK) {
    ALOGI("[%s] queueBuffer failed: %d", mName, result);
    if (result == NO_INIT) {
        mCCodecCallback->onError(UNKNOWN_ERROR, ACTION_CODE_FATAL);
    }
    return result;
}

int64_t mediaTimeUs = 0;
(void)buffer->meta()->findInt64("timeUs", &mediaTimeUs);
// 7. Track the output frame
if (mAreRenderMetricsEnabled && mIsSurfaceToDisplay) {
    trackReleasedFrame(qbo, mediaTimeUs, timestampNs);
    processRenderedFrames(qbo.frameTimestamps);
} else {
    // When the surface is an intermediate surface, onFrameRendered is triggered immediately
    // when the frame is queued to the non-display surface
    // 8. Call onOutputFramesRendered
    mCCodecCallback->onOutputFramesRendered(mediaTimeUs, timestampNs);
}

Rendering a video output frame really means sending the buffer to the output Surface, so the final call is Codec2Client::Component's queueToOutputSurface method. Its input parameters are the C2ConstGraphicBlock and the QueueBufferInput, and its output parameter is the QueueBufferOutput.

After the buffer is handed to the surface, its render time is tracked; on a successful render, the onOutputFramesRendered callback must be invoked to notify MediaCodec. In the common case we take step 7.

The render-tracking flow is as follows:

void CCodecBufferChannel::trackReleasedFrame(const IGraphicBufferProducer::QueueBufferOutput& qbo,
                                             int64_t mediaTimeUs, int64_t desiredRenderTimeNs) {
    // 1. Get the current system time
    int64_t nowNs = systemTime(SYSTEM_TIME_MONOTONIC);
    if (desiredRenderTimeNs < nowNs) {
        desiredRenderTimeNs = nowNs;    // a render timestamp in the past is clamped to now
    }

    if (desiredRenderTimeNs > nowNs + 1*1000*1000*1000LL) {
        desiredRenderTimeNs = nowNs;    // a render timestamp more than 1s in the future is clamped to now
    }
    // 2. Build a TrackedFrame and append it to the list
    TrackedFrame frame;
    frame.number = qbo.nextFrameNumber - 1;     // the frame number of the buffer just queued
    frame.mediaTimeUs = mediaTimeUs;
    frame.desiredRenderTimeNs = desiredRenderTimeNs;
    frame.latchTime = -1;
    frame.presentFence = nullptr;
    mTrackedFrames.push_back(frame);
}

trackReleasedFrame takes three parameters:

  • qbo: the output returned by the producer;
  • mediaTimeUs: the pts carried by the buffer;
  • desiredRenderTimeNs: the system timestamp at which the buffer should be rendered.

trackReleasedFrame builds a TrackedFrame; its number field records the buffer's frame number in the surface.
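The timestamp clamping at the top of trackReleasedFrame can be sketched as a standalone helper (hypothetical function name; the real code does this inline):

```cpp
#include <cassert>
#include <cstdint>

// Desired render times in the past, or more than one second in the
// future, are snapped to "now" before the frame is tracked.
int64_t clampDesiredRenderTimeNs(int64_t desiredNs, int64_t nowNs) {
    if (desiredNs < nowNs) {
        return nowNs;                            // already late: render now
    }
    if (desiredNs > nowNs + 1000000000LL) {
        return nowNs;                            // >1s ahead: treat as now
    }
    return desiredNs;
}
```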

Right after trackReleasedFrame completes, processRenderedFrames is called; it takes a FrameEventHistoryDelta object, which comes from qbo.

void CCodecBufferChannel::processRenderedFrames(const FrameEventHistoryDelta& deltas) {
    // 1. Walk the FrameEventHistoryDelta, looking for entries whose frame number matches one in mTrackedFrames
    for (const auto& delta : deltas) {
        for (auto& frame : mTrackedFrames) {
            if (delta.getFrameNumber() == frame.number) {
                // 2. Fetch the latchTime and presentFence
                delta.getLatchTime(&frame.latchTime);
                delta.getDisplayPresentFence(&frame.presentFence);
            }
        }
    }

    // 3. Get the current time
    int64_t nowNs = systemTime(SYSTEM_TIME_MONOTONIC);
    // 4. Walk the tracked frames
    while (!mTrackedFrames.empty()) {
        // 5. Stop at the first frame whose desired render time is less than 100ms in the past
        TrackedFrame & frame = mTrackedFrames.front();
        if (frame.desiredRenderTimeNs > nowNs - 100*1000*1000LL) {
            break;
        }

        // 6. Get the render time
        int64_t renderTimeNs = getRenderTimeNs(frame);
        if (renderTimeNs != -1) {
            // 7. Call onOutputFramesRendered
            mCCodecCallback->onOutputFramesRendered(frame.mediaTimeUs, renderTimeNs);
        }
        // 8. Pop the front element of the queue
        mTrackedFrames.pop_front();
    }
}
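The draining loop above can be sketched in isolation (hypothetical `SimpleTrackedFrame` and `drainTrackedFrames` names; the return value stands in for the onOutputFramesRendered calls):

```cpp
#include <cassert>
#include <cstdint>
#include <deque>

struct SimpleTrackedFrame {
    int64_t desiredRenderTimeNs;
    int64_t mediaTimeUs;
};

// Frames stay tracked until their desired render time is at least
// 100ms in the past; only then are they reported and popped.
int drainTrackedFrames(std::deque<SimpleTrackedFrame>& frames, int64_t nowNs) {
    int reported = 0;
    while (!frames.empty()) {
        const SimpleTrackedFrame& frame = frames.front();
        // Too recent: the frame may not have been presented yet.
        if (frame.desiredRenderTimeNs > nowNs - 100000000LL) {
            break;
        }
        ++reported;  // stands in for onOutputFramesRendered()
        frames.pop_front();
    }
    return reported;
}
```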

I haven't studied the Graphics stack, so what follows is partly conjecture. FrameEventHistoryDelta carries the rendering events of the recent past; given a frameNumber we can retrieve the details of the corresponding event, which are stored into latchTime and presentFence.

getRenderTimeNs is called to obtain the actual render time:

int64_t CCodecBufferChannel::getRenderTimeNs(const TrackedFrame& frame) {
    // 1. Present fence timestamps are not supported
    if (!mHasPresentFenceTimes) {
        if (frame.latchTime == -1) {
            ALOGD("no latch time for frame %d", (int) frame.number);
            return -1;
        }
        return frame.latchTime;
    }

    if (frame.presentFence == nullptr) {
        ALOGW("no present fence for frame %d", (int) frame.number);
        return -1;
    }
    // 2. Read the signal time from the present fence
    nsecs_t actualRenderTimeNs = frame.presentFence->getSignalTime();

    if (actualRenderTimeNs == Fence::SIGNAL_TIME_INVALID) {
        ALOGW("invalid signal time for frame %d", (int) frame.number);
        return -1;
    }

    if (actualRenderTimeNs == Fence::SIGNAL_TIME_PENDING) {
        ALOGD("present fence has not fired for frame %d", (int) frame.number);
        return -1;
    }

    return actualRenderTimeNs;
}

If the NativeWindow does not support present fences, latchTime is used in place of the render time; a latchTime of -1 means the frame has not been rendered yet. If present fences are supported, actualRenderTimeNs can be read from the presentFence. Once a valid renderTimeNs is obtained, onOutputFramesRendered can be called to notify MediaCodec.
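The decision tree just described might be sketched as a pure function (hypothetical names; the constants stand in for Fence::SIGNAL_TIME_INVALID and Fence::SIGNAL_TIME_PENDING):

```cpp
#include <cassert>
#include <cstdint>

constexpr int64_t kSignalTimeInvalid = -1;
constexpr int64_t kSignalTimePending = INT64_MAX;

// Fall back to latchTime when present-fence timestamps are unsupported;
// otherwise trust the fence's signal time only once the fence has fired.
// Returns -1 when the render time is not (yet) known.
int64_t pickRenderTimeNs(bool hasPresentFenceTimes, int64_t latchTimeNs,
                         bool hasPresentFence, int64_t signalTimeNs) {
    if (!hasPresentFenceTimes) {
        return latchTimeNs;  // -1 means the frame has not been latched yet
    }
    if (!hasPresentFence) return -1;
    if (signalTimeNs == kSignalTimeInvalid) return -1;
    if (signalTimeNs == kSignalTimePending) return -1;  // not presented yet
    return signalTimeNs;
}
```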

Note that this callback is not invoked the moment an output frame is rendered; it lags somewhat, because we only learn whether a frame rendered successfully once the next frame is rendered.

Finally, note that after renderOutputBuffer completes, the c2Buffer is released, at which point compBuffer becomes NULL.

2. queueToOutputSurface

status_t Codec2Client::Component::queueToOutputSurface(
        const C2ConstGraphicBlock& block,
        const QueueBufferInput& input,
        QueueBufferOutput* output) {
    return mOutputBufferQueue->outputBuffer(block, input, output);
}

queueToOutputSurface calls OutputBufferQueue's outputBuffer method, which we'll look at in two parts:

status_t OutputBufferQueue::outputBuffer(
        const C2ConstGraphicBlock& block,
        const BnGraphicBufferProducer::QueueBufferInput& input,
        BnGraphicBufferProducer::QueueBufferOutput* output) {
    uint32_t generation;
    uint64_t bqId;
    int32_t bqSlot;
    // 1. Check whether the block is bufferqueue-based
    bool display = V1_0::utils::displayBufferQueueBlock(block);
    // 2. Get the generation, bufferqueue id, and slot id
    if (!getBufferQueueAssignment(block, &generation, &bqId, &bqSlot) ||
        bqId == 0) {
        // Block not from bufferqueue -- it must be attached before queuing.

        std::shared_ptr<C2SurfaceSyncMemory> syncMem;
        mMutex.lock();
        bool stopped = mStopped;
        sp<IGraphicBufferProducer> outputIgbp = mIgbp;
        uint32_t outputGeneration = mGeneration;
        syncMem = mSyncMem;
        mMutex.unlock();

        if (stopped) {
            LOG(INFO) << "outputBuffer -- already stopped.";
            return DEAD_OBJECT;
        }
        // 3. Attach the block to the bufferqueue
        status_t status = attachToBufferQueue(
                block, outputIgbp, outputGeneration, &bqSlot, syncMem);

        if (status != OK) {
            LOG(WARNING) << "outputBuffer -- attaching failed.";
            return INVALID_OPERATION;
        }

        auto syncVar = syncMem ? syncMem->mem() : nullptr;
        // 4. Queue the buffer and update the sync variable
        if(syncVar) {
            status = outputIgbp->queueBuffer(static_cast<int>(bqSlot),
                                         input, output);
            if (status == OK) {
                syncVar->lock();
                syncVar->notifyQueuedLocked();
                syncVar->unlock();
            }
        } else {
            status = outputIgbp->queueBuffer(static_cast<int>(bqSlot),
                                         input, output);
        }
        if (status != OK) {
            LOG(ERROR) << "outputBuffer -- queueBuffer() failed "
                       "on non-bufferqueue-based block. "
                       "Error = " << status << ".";
            return status;
        }
        return OK;
    }
    // ...

First, part one:

  1. Check whether the block is bufferqueue-based;
  2. Call getBufferQueueAssignment to get the generation, bufferQueueId, and slotId. If this fails, or the bufferQueueId is 0, the block did not come from our bufferqueue, so it must be attached to the bufferqueue before it can be used;
  3. Call attachToBufferQueue to attach the block to the bufferqueue (we won't expand on this method);
  4. Call IGraphicBufferProducer's queueBuffer to queue the slot into the bufferqueue, then update the sync variable.
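The branching across both parts of outputBuffer can be summarized as a small decision function (hypothetical names; the returned strings merely label the three paths, and "reject-stale" corresponds to DEAD_OBJECT in the real code):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Blocks not originating from our bufferqueue must be attached before
// queuing; blocks from a stale bufferqueue/generation are rejected.
std::string chooseQueuePath(bool hasAssignment, uint64_t bqId,
                            uint64_t currentBqId, uint32_t generation,
                            uint32_t currentGeneration) {
    if (!hasAssignment || bqId == 0) {
        return "attach-then-queue";
    }
    if (bqId != currentBqId || generation != currentGeneration) {
        return "reject-stale";
    }
    return "queue-directly";
}
```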

If the bufferQueueId and slotId are obtained normally, execution proceeds to the following part:

std::shared_ptr<C2SurfaceSyncMemory> syncMem;
mMutex.lock();
bool stopped = mStopped;
sp<IGraphicBufferProducer> outputIgbp = mIgbp;
uint32_t outputGeneration = mGeneration;
uint64_t outputBqId = mBqId;
syncMem = mSyncMem;
mMutex.unlock();

if (stopped) {
    LOG(INFO) << "outputBuffer -- already stopped.";
    return DEAD_OBJECT;
}

if (!outputIgbp) {
    LOG(VERBOSE) << "outputBuffer -- output surface is null.";
    return NO_INIT;
}

if (!display) {
    LOG(WARNING) << "outputBuffer -- cannot display "
                    "bufferqueue-based block to the bufferqueue.";
    return UNKNOWN_ERROR;
}
if (bqId != outputBqId || generation != outputGeneration) {
    int32_t diff = (int32_t) outputGeneration - (int32_t) generation;
    LOG(WARNING) << "outputBuffer -- buffers from old generation to "
                    << outputGeneration << " , diff: " << diff
                    << " , slot: " << bqSlot;
    return DEAD_OBJECT;
}

auto syncVar = syncMem ? syncMem->mem() : nullptr;
status_t status = OK;
if (syncVar) {
    status = outputIgbp->queueBuffer(static_cast<int>(bqSlot),
                                                input, output);
    if (status == OK) {
        syncVar->lock();
        syncVar->notifyQueuedLocked();
        syncVar->unlock();
    }
} else {
    status = outputIgbp->queueBuffer(static_cast<int>(bqSlot),
                                                input, output);
}

if (status != OK) {
    LOG(ERROR) << "outputBuffer -- queueBuffer() failed "
                "on bufferqueue-based block. "
                "Error = " << status << ".";
    return status;
}

This part is nearly identical to the code above: after checking the stopped state, the producer, the display flag, and the bufferqueue id/generation, it queues the buffer and notifies the sync variable, so we won't go through it again.

3. discardBuffer

status_t CCodecBufferChannel::discardBuffer(const sp<MediaCodecBuffer> &buffer) {
    ALOGV("[%s] discardBuffer: %p", mName, buffer.get());
    bool released = false;
    {
        // 1. Try to release as an input buffer
        Mutexed<Input>::Locked input(mInput);
        if (input->buffers && input->buffers->releaseBuffer(buffer, nullptr, true)) {
            released = true;
        }
    }
    {
        // 2. Try to release as an output buffer
        Mutexed<Output>::Locked output(mOutput);
        if (output->buffers && output->buffers->releaseBuffer(buffer, nullptr)) {
            released = true;
        }
    }
    // 3. If the release succeeded
    if (released) {
        sendOutputBuffers();
        feedInputBufferIfAvailable();
    } else {
        ALOGD("[%s] MediaCodec discarded an unknown buffer", mName);
    }
    return OK;
}

As the code shows, discardBuffer can discard not only output buffers but also input buffers; in fact, whenever the upper layer needs to hand a buffer it holds back to CCodecBufferChannel, this method is called.

discardBuffer works by first trying to release the buffer from InputBuffers (note that the second argument is NULL), then from OutputBuffers (second argument also NULL). As long as a release succeeds, it calls sendOutputBuffers and feedInputBufferIfAvailable. What if we simply want to drop the buffer without sending more buffers back to the upper layer? Don't worry: feedInputBufferIfAvailable checks the QueueSync state first.

4. OutputBuffers

This article is a bit thin on content, so let's supplement it with a look at OutputBuffers' registerBuffer method.


Original article: Android Codec2 (35) Data Rendering
