The previous patches added an event to libvlc_media_player_t which
notifies the application that the current media is stopping, regardless
of whether the player itself is stopping or whether it will move on to
the next media.
Those patches solve a design problem in the imem-access and libvlc
media callbacks: the application could not be notified that the input
was being stopped while its read() callback was pending, a notification
which happens through vlc_interrupt_t in other demuxers.
In that situation, an application wanting to stop the media player
could only signal the media end-of-stream, triggering a pipeline drain,
before stopping the player.
```mermaid
sequenceDiagram
box Orange VLC side
participant input as Input thread
participant imem as imem access
end
box Cyan Application side
participant app as Application
end
input ->> imem: stream::read()
activate imem
imem ->> app: media callback read()
app --x imem: return "EOS" (no data to send)
imem --x input: Trigger pipeline drain (EOS)
deactivate imem
app ->> input: stop (change media)
```
This decoder and output drain adds latency to media changes. The
duration depends on the conditions, but I have been able to reproduce a
steady 400ms of added latency when changing media (meaning that it
takes 400ms longer to start reading the new media), and up to a
gigantic 1.6s in a randomized manner (1.1s of which was spent draining
the audio decoder!).
With the new system, the application can respond to the new event to
interrupt and terminate whatever it was doing in its
libvlc_media_read_cb function, without signalling the end of the media,
which would have triggered a drain.
```mermaid
sequenceDiagram
box Orange VLC side
participant destructor as Player input destructor
participant input as Input
participant imem as imem access
end
box Cyan Application side
participant player as Player
participant app as Application
end
par Input thread
activate imem
input ->> imem: stream::read()
imem ->> app: media callback read_cb()
app ->> app: read_cb() might be waiting for some data
and Application thread
app ->> player: Stop()
player ->> destructor: Add current input to destructor
and Player destructor thread
destructor ->> input: input_Stop()
destructor ->> app: libvlc_MediaPlayerMediaStopping event
end
app ->> app: Interrupt read_cb() and return EOF
deactivate imem
```
This commit verifies this behaviour in a reliable way, to ensure that
the problem is indeed solved by this approach and to document the
solution on the application side.
To do so, two different tests are made:
- A sanity test which checks that we can stop a libvlc media which is
  not blocking, basically checking that the media callbacks API works,
  and which is run when we want to check against sanitizers.
- A specific test checking that we can block in read() and only
  unblock it when the libvlc_MediaPlayerMediaStopping event is emitted,
  checking that the documented behaviour for the media callbacks works
  correctly.
---
The test needs:
- No intermediate cache so that we clearly control the access from the
demux.
- More deterministic behaviour, typically to check that we don't call
  AccessReadNonBlocking twice without a notification in between, and to
  check that read() terminates correctly in this case.
---
On a side note, other solutions were investigated to solve this issue:
1/ Provide an `errno` variable as a parameter to read():
By providing an errno parameter, we can provide a specific error code
when -1 is returned, and in particular use EAGAIN/EWOULDBLOCK to inform
the imem-access code that we don't have data yet. Then, the imem-access
can wait for a notification from the application to start reading again,
while listening to interruption coming from the core.
This solution was submitted, but it felt like bending the libVLC imem
API and the libvlc media API to solve this problem, rather than
considering it as an application problem as a whole.
2/ Provide the same solution but use `errno` directly:
This solution would be the simplest, but the application has no way to
realize that returning -1 with errno set to EAGAIN carries a special
meaning, for instance when mapping the read callback directly to
read() on a non-blocking socket. This could create a silent regression
where the access would not try to fetch more data, since the
application won't know that it must notify the access.
3/ Provide a poll_cb() callback in the imem interface:
This solution would require another callback instead of another
parameter. The new callback would be called every time before the
read() callback, and would report whether the read() is ready or not.
This adds a new way for the application to bail out, separating the
return code of read() from the return code of poll(), but the access
still cannot be interrupted and the additional function-call cost is
higher.
4/ Provide a `waker` assignment (like Rust futures):
Provide a new callback to register an object that can be used to notify
the access that new data is available. This is much like poll(), but it
has the benefit that the application can immediately report that no
data is available, and then use the object to signal the arrival of
data asynchronously, without exposing a public function on
libvlc_media_t. This is still quite complex and still has a heavy
function-call cost, but it also fulfills every point from the
suggestion here. However, the Rust async model is not really the
idiomatic model in C compared to POSIX errno.
5/ Solve the problem at the decoder and output level:
One of the main symptoms observed when changing media is the increased
time to close a media, and in particular to drain the decoder. This is
also a non-interruptible task, because a decoder drain is synchronous.
Admittedly, the interruption problem is less problematic if the drain
itself can be handled smoothly afterwards, and I actually think both
problems need to be solved eventually. But the drain behaviour depends
on the decoder implementation, whereas the imem-access approach
suggests a solution on the application side that correctly expresses
the required intent, and it is also quite suitable for expressing both
an injection workflow (data being written from the application to the
VLC pipeline) and a pull workflow (data being read from the application
by VLC).