The compositor can announce its default rendering device via
linux_dmabuf_feedback as the main_device [1]. We should use this device
whenever possible. If acquiring this device fails, we are advised to
force a linear layout on buffers allocated with the implicit API.
With version 4 of linux_dmabuf_v1 the modifier event is deprecated.
Instead, the format_table event in combination with the tranches of
linux_dmabuf_feedback_v1 has to be used.
[1] https://gitlab.freedesktop.org/wayland/wayland-protocols/-/blob/main/unstable/linux-dmabuf/feedback.rst
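A minimal sketch of consuming these events, assuming libdrm >= 2.4.109
for drmGetDeviceFromDevId(); the handler names are hypothetical and
only the two relevant events are shown:

    #include <string.h>
    #include <sys/types.h>
    #include <xf86drm.h>
    #include "linux-dmabuf-unstable-v1-client-protocol.h"

    static void feedback_main_device(void *data,
            struct zwp_linux_dmabuf_feedback_v1 *feedback,
            struct wl_array *device_arr) {
        dev_t device;
        memcpy(&device, device_arr->data, sizeof(device));
        drmDevice *dev = NULL;
        if (drmGetDeviceFromDevId(device, 0, &dev) != 0) {
            /* acquiring the device failed: force DRM_FORMAT_MOD_LINEAR
             * on buffers allocated with the implicit API */
            return;
        }
        /* open dev->nodes[DRM_NODE_RENDER] and create the gbm_device */
        drmFreeDevice(&dev);
    }

    static void feedback_format_table(void *data,
            struct zwp_linux_dmabuf_feedback_v1 *feedback,
            int32_t fd, uint32_t size) {
        /* mmap the table of (format, modifier) entries; tranche_formats
         * later delivers 16-bit indices into this table */
    }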
Registering a zwp_linux_dmabuf_v1_listener gives access to the
format/modifier pairs supported by the compositor, which emits an event
for each pair. These pairs are stored and will later be used to
announce capabilities via PipeWire.
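A sketch of such a listener, assuming a hypothetical
store_format_modifier() helper for the bookkeeping:

    static void noop(void *data, struct zwp_linux_dmabuf_v1 *dmabuf,
            uint32_t format) {
        /* the standalone format event is deprecated, ignore it */
    }

    static void dmabuf_modifier(void *data,
            struct zwp_linux_dmabuf_v1 *dmabuf, uint32_t format,
            uint32_t modifier_hi, uint32_t modifier_lo) {
        uint64_t modifier = ((uint64_t)modifier_hi << 32) | modifier_lo;
        /* stored now, announced later via the PipeWire stream params */
        store_format_modifier(data, format, modifier);
    }

    static const struct zwp_linux_dmabuf_v1_listener dmabuf_listener = {
        .format = noop,
        .modifier = dmabuf_modifier,
    };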
It has been shown that the assumption "allocation with an implicit
modifier will always be available" doesn't hold in all cases. Thus, if
allocation of any dmabuf fails, we mark the session to avoid dmabufs
and fall back to shm transport.
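A sketch of the fallback, where xdpw_buffer_create(), the
avoid_dmabufs flag, and pwr_update_stream_param() are hypothetical
names:

    struct xdpw_buffer *buffer =
        xdpw_buffer_create(cast, DMABUF, frame_info);
    if (buffer == NULL) {
        /* implicit-modifier allocation is not guaranteed to succeed:
         * avoid dmabufs for this session and renegotiate towards shm */
        cast->avoid_dmabufs = true;
        pwr_update_stream_param(cast);
    }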
Dequeuing a new buffer immediately creates a problem when the buffer is
destroyed during renegotiation of a valid modifier, because we can end
up using a buffer that was freed while waiting for the buffer_done
event. xdpw will then segfault when requesting a copy with a
nonexistent buffer. To fix this, we can simply dequeue a buffer right
before we need it. This also makes the fallback via on_process
obsolete, since we dequeue the buffer at the latest possible time.
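Sketched below with a hypothetical current_frame bookkeeping struct;
pw_stream_dequeue_buffer() is the real PipeWire call:

    /* dequeue at the latest possible time, right before the copy */
    if (cast->current_frame.pw_buffer == NULL) {
        cast->current_frame.pw_buffer =
            pw_stream_dequeue_buffer(cast->stream);
    }
    if (cast->current_frame.pw_buffer == NULL) {
        return; /* no buffer available, skip this frame */
    }
    zwlr_screencopy_frame_v1_copy_with_damage(frame,
        cast->current_frame.wl_buffer);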
This uses the older GBM API without support for explicit modifiers.
This is required to support AMD GPUs based on GFX8 or older
architectures and older Intel GPUs.
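A sketch of the implicit-modifier allocation path; unlike
gbm_bo_create_with_modifiers(), gbm_bo_create() leaves modifier
selection to the driver:

    #include <gbm.h>
    #include <drm_fourcc.h>

    static struct gbm_bo *alloc_bo(struct gbm_device *gbm,
            uint32_t width, uint32_t height, uint32_t format,
            uint64_t modifier) {
        uint32_t flags = GBM_BO_USE_RENDERING;
        if (modifier == DRM_FORMAT_MOD_LINEAR) {
            flags |= GBM_BO_USE_LINEAR; /* honor a forced linear layout */
        }
        return gbm_bo_create(gbm, width, height, format, flags);
    }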
We want to support both WL_SHM- and DMABUF-based buffers. The
buffer_type member tracks the type of an xdpw_buffer, and
screencopy_frame_info of the screencast_instance becomes an array with
one element per buffer type, indexed by the value of the buffer_type
enum. Only the members of xdpw_screencopy_frame_info relevant to the
buffer type should be used.
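A sketch of that layout; the members of xdpw_screencopy_frame_info
shown here are illustrative:

    #include <stdint.h>

    enum buffer_type {
        WL_SHM = 0,
        DMABUF = 1,
    };

    struct xdpw_screencopy_frame_info {
        uint32_t format; /* drm format from drm_fourcc.h */
        uint32_t width, height;
        uint32_t size, stride; /* only meaningful for WL_SHM */
    };

    struct xdpw_screencast_instance {
        /* one element per buffer type, indexed by enum buffer_type */
        struct xdpw_screencopy_frame_info screencopy_frame_info[2];
    };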
The drm format as defined by drm_fourcc.h is the standard way to
describe the format of a buffer. It is used when dealing with dmabufs,
and to simplify things we should use drm formats in all internal
structs.
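The conversion from wl_shm is a small special case, since wl_shm reuses
the drm_fourcc codes except for the two legacy enum values 0 and 1; a
sketch:

    #include <drm_fourcc.h>
    #include <wayland-client-protocol.h>

    static uint32_t drm_format_from_wl_shm(enum wl_shm_format format) {
        switch (format) {
        case WL_SHM_FORMAT_ARGB8888:
            return DRM_FORMAT_ARGB8888;
        case WL_SHM_FORMAT_XRGB8888:
            return DRM_FORMAT_XRGB8888;
        default:
            return (uint32_t)format;
        }
    }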
In case a client doesn't return a buffer early enough, we can give it a
second chance by triggering on_process before we pass the buffer to
wlroots in the frame_buffer_done event.
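A sketch of that second chance inside the buffer_done handler;
on_process(), xdpw_wlr_frame_finish(), and the current_frame fields are
hypothetical names:

    static void wlr_frame_buffer_done(void *data,
            struct zwlr_screencopy_frame_v1 *frame) {
        struct xdpw_screencast_instance *cast = data;
        if (cast->current_frame.pw_buffer == NULL) {
            on_process(cast); /* let the client return a buffer late */
        }
        if (cast->current_frame.pw_buffer == NULL) {
            xdpw_wlr_frame_finish(cast); /* still nothing: give up */
            return;
        }
        zwlr_screencopy_frame_v1_copy_with_damage(frame,
            cast->current_frame.wl_buffer);
    }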
The previously implemented way of using wlr_screencopy events to cycle
the screencast had issues, such as halting the stream if it was paused
and resumed before PipeWire triggered a recreation of buffers. This
came from not returning a dequeued buffer. It is now mitigated by
enqueuing the current pw_buffer immediately on a pause event and trying
to dequeue a buffer just before requesting a screencopy if none is
present.
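A sketch of the pause half of this, in the stream's state_changed
callback; the field names are hypothetical:

    static void pwr_handle_stream_state_changed(void *data,
            enum pw_stream_state old, enum pw_stream_state state,
            const char *error) {
        struct xdpw_screencast_instance *cast = data;
        if (state == PW_STREAM_STATE_PAUSED &&
                cast->current_frame.pw_buffer) {
            /* return the dequeued buffer immediately so a later resume
             * doesn't stall waiting for a buffer recreation */
            pw_stream_queue_buffer(cast->stream,
                cast->current_frame.pw_buffer);
            cast->current_frame.pw_buffer = NULL;
        }
    }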
It turned out that handling self-contained buffers is much easier than
having the metadata of a buffer separated from the actual buffer and
attached to the screencast instance; see the sketch after the list
below.
The goal of the following changes is to separate meta information such
as requested buffer attributes and wlr_screencast data from the actual
buffers.
This enables us to:
* Simplify the flow between the PipeWire loop and the wlroots one
* Track and apply damage to each used buffer (ext_screencopy)
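A sketch of such a self-contained buffer, building on the buffer_type
enum above; the fields are illustrative and cover both buffer types:

    #include <stdint.h>
    #include <gbm.h>
    #include <wayland-client.h>

    struct xdpw_buffer {
        struct wl_list link;
        enum buffer_type buffer_type;

        uint32_t width, height, format;
        int plane_count;
        int fd[4];
        uint32_t size[4], stride[4], offset[4];

        struct gbm_bo *bo;        /* DMABUF backing */
        struct wl_buffer *buffer; /* handed to wlr_screencopy */
    };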
The enum xdpw_frame_state is used to track the state of the xdpw_frame
through the screencopy callbacks. xdpw_wlr_stream_finish is the new
endpoint of the screencopy callbacks. Here we clean up the
screencopy_frame, enqueue the PipeWire buffer, and restart the
screencopy loop if needed.
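A sketch of this endpoint; the states and helper names are
illustrative:

    enum xdpw_frame_state {
        XDPW_FRAME_STATE_NONE,
        XDPW_FRAME_STATE_STARTED,
        XDPW_FRAME_STATE_RENEG,
        XDPW_FRAME_STATE_FAILED,
        XDPW_FRAME_STATE_SUCCESS,
    };

    static void xdpw_wlr_stream_finish(
            struct xdpw_screencast_instance *cast) {
        if (cast->wlr_frame) {
            zwlr_screencopy_frame_v1_destroy(cast->wlr_frame);
            cast->wlr_frame = NULL;
        }
        if (cast->current_frame.pw_buffer) {
            pw_stream_queue_buffer(cast->stream,
                cast->current_frame.pw_buffer);
            cast->current_frame.pw_buffer = NULL;
        }
        if (cast->frame_state == XDPW_FRAME_STATE_RENEG) {
            pwr_update_stream_param(cast); /* renegotiate */
        } else {
            xdpw_wlr_register_cb(cast); /* restart the screencopy loop */
        }
    }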
Sometimes the first frame of the active stream triggers renegotiation
and destroys the frame without a copy, thereby starting the fps_limit
counter. This triggers an assert in fps_limit_measure_end. To avoid
this, we only engage the fps_limiter after a frame has been copied
successfully.
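A sketch of the guard, with hypothetical field names:

    /* only start the fps_limit measurement once a copy succeeded */
    if (cast->frame_state == XDPW_FRAME_STATE_SUCCESS) {
        fps_limit_measure_start(&cast->fps_limit, cast->framerate);
    }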
Since we are driving the screencast, there are no events on the
PipeWire loop calling the on_event callback. We want to import and
export (if possible) on every frame of the wlroots loop, so this event
is no longer needed.
Supports "dmenu" chooser type, which is called with a dmenu type list
piped to stdin, "simple" type, which recieves nothing on stdin and
default, which tries the hardcoded choosers.
Choosers are required to return the name of the choosen output as given
by the xdg-output protocol.
Thanks to piater for closing overlooked pipes.
Thanks to ericonr for suggestions regarding fork and pipes.
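A minimal sketch of spawning a "dmenu" type chooser with a pipe on each
end; error handling is omitted and the function name is hypothetical:

    #include <unistd.h>

    static pid_t spawn_chooser(const char *cmd, int *to_child,
            int *from_child) {
        int in_pipe[2], out_pipe[2];
        pipe(in_pipe);
        pipe(out_pipe);
        pid_t pid = fork();
        if (pid == 0) {
            dup2(in_pipe[0], STDIN_FILENO);
            dup2(out_pipe[1], STDOUT_FILENO);
            /* close every pipe end the child doesn't use */
            close(in_pipe[0]); close(in_pipe[1]);
            close(out_pipe[0]); close(out_pipe[1]);
            execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
            _exit(127);
        }
        close(in_pipe[0]);
        close(out_pipe[1]);
        *to_child = in_pipe[1];
        *from_child = out_pipe[0];
        return pid;
    }

The caller writes the newline-separated output names to to_child,
closes it, and reads the chosen name back from from_child.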
The goal is to control the capture rate during a screencast, as
capturing can be a performance issue and can cause input lag and the
feeling of a laggy mouse.
This commit addresses the issue reported in #66.
The code measures the time elapsed for a single screen capture and
calculates how long to wait before the next capture to achieve the
targeted frame rate. To delay the capture of the next frame, the code
introduces timers into the event loop, based on the event loop in
https://github.com/emersion/mako
Added a command-line argument as well as an entry in the config file
for the max FPS. The default value is 0, meaning no rate control.
Added code to measure the average FPS every 5 seconds and print it at
DEBUG level.
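A sketch of the arithmetic behind the delay, with hypothetical helper
names; the result is used to arm a timer on the event loop:

    #include <stdint.h>
    #include <time.h>

    static int64_t timespec_diff_ns(struct timespec *a,
            struct timespec *b) {
        return (a->tv_sec - b->tv_sec) * 1000000000LL +
            (a->tv_nsec - b->tv_nsec);
    }

    /* how long to wait before the next capture, in ns */
    static int64_t fps_limit_delay_ns(struct timespec *start,
            struct timespec *end, uint32_t max_fps) {
        if (max_fps == 0) {
            return 0; /* 0 means no rate control */
        }
        int64_t frame_period_ns = 1000000000LL / max_fps;
        int64_t elapsed_ns = timespec_diff_ns(end, start);
        return elapsed_ns < frame_period_ns ?
            frame_period_ns - elapsed_ns : 0;
    }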
* Initial session support WIP
Remove libdrm dependency
Remove display from context, add dbus properties
Use random names for shm and pw_stream, init the stream only for new cast instances
Separate cast initialized flag from refcount, cleanup names and comments
* Refactor and stability improvements
Properly report xdp screencast implementation version