This allows compositors to easily add an xdg_surface to the
scene-graph while retaining the ability to unconstrain popups
and decide their final position.
Compositors can handle new popups with the wlr_xdg_shell.new_surface
event, get the parent scene-graph node via wlr_xdg_popup.parent.data,
create a new scene-graph node via wlr_scene_xdg_surface_tree_create,
and unconstrain the popup if they want to.
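Roughly (a sketch only: the exact wlr_scene_xdg_surface_tree_create
signature and the data-field conventions are assumed from the
description above), a new_surface handler could look like this:

    #include <wayland-server-core.h>
    #include <wlr/types/wlr_scene.h>
    #include <wlr/types/wlr_xdg_shell.h>

    static void handle_new_xdg_surface(struct wl_listener *listener, void *data) {
        struct wlr_xdg_surface *xdg_surface = data;
        if (xdg_surface->role != WLR_XDG_SURFACE_ROLE_POPUP) {
            return;
        }

        /* The parent's handler is assumed to have stored its scene-graph
         * node in the data field, so it can be fetched via
         * wlr_xdg_popup.parent.data. */
        struct wlr_scene_node *parent_node = xdg_surface->popup->parent->data;

        /* Add the popup (and its future children) to the scene-graph. */
        xdg_surface->data =
            wlr_scene_xdg_surface_tree_create(parent_node, xdg_surface);

        /* Optionally unconstrain the popup; the box is a placeholder for
         * an output-relative region computed by the compositor. */
        struct wlr_box box = {0};
        wlr_xdg_popup_unconstrain_from_box(xdg_surface->popup, &box);
    }
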
The headless backend no longer needs a parent renderer: it no longer
needs to return one from wlr_backend_impl.get_renderer, nor does it
need to return its DRM FD from wlr_backend_impl.get_drm_fd. Drop the
create-with-renderer variant altogether since it now behaves exactly
like wlr_headless_backend_create.
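For instance (a minimal sketch, error handling omitted), creating a
standalone headless backend now looks like:

    #include <wlr/backend/headless.h>
    #include <wlr/render/wlr_renderer.h>

    /* display is the compositor's struct wl_display. No parent renderer
     * is passed anymore; wlr_renderer_autocreate picks one afterwards. */
    struct wlr_backend *backend = wlr_headless_backend_create(display);
    struct wlr_renderer *renderer = wlr_renderer_autocreate(backend);
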
Sometimes the headless backend is used standalone with the Pixman
renderer, and sometimes it's used together with another backend which
has already picked a DRM FD. In neither case does it make sense for
the headless backend to pick a DRM FD of its own. Broadly speaking,
the headless backend doesn't care which DRM device is used for the
buffers it receives, so there's no reason to tie it to a particular
DRM device.
Let the backend users (e.g. wlr_renderer_autocreate) open an arbitrary
DRM FD as needed instead.
If the backend doesn't have a DRM FD, fall back to the renderer's.
This accommodates the situation where the headless backend hasn't
picked a particular DRM FD, but the renderer has picked one.
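A minimal sketch of that fallback, using the public
wlr_backend_get_drm_fd and wlr_renderer_get_drm_fd helpers (backend
and renderer are the compositor's wlr_backend and wlr_renderer):

    #include <wlr/backend.h>
    #include <wlr/render/wlr_renderer.h>

    /* Prefer the backend's DRM FD; if it didn't pick one (e.g. the
     * headless backend), use the renderer's instead. */
    int drm_fd = wlr_backend_get_drm_fd(backend);
    if (drm_fd < 0) {
        drm_fd = wlr_renderer_get_drm_fd(renderer);
    }
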
If the backend hasn't picked a DRM FD but supports DMA-BUF, pick
an arbitrary render node. This will allow removing the DRM device
selection logic from the headless backend.
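A sketch of picking an arbitrary render node with libdrm (simplified;
the helper name below is illustrative, not the actual implementation):

    #include <fcntl.h>
    #include <xf86drm.h>

    /* Open the first available render node (e.g. /dev/dri/renderD128). */
    static int open_any_render_node(void) {
        drmDevice *devices[64];
        int n = drmGetDevices2(0, devices, 64);
        int fd = -1;
        for (int i = 0; i < n; i++) {
            if (devices[i]->available_nodes & (1 << DRM_NODE_RENDER)) {
                fd = open(devices[i]->nodes[DRM_NODE_RENDER], O_RDWR | O_CLOEXEC);
                if (fd >= 0) {
                    break;
                }
            }
        }
        drmFreeDevices(devices, n);
        return fd;
    }
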
This field's ownership is unclear: it's in wlr_input_device, but
it's not managed by the common code; it's up to each individual
backend to use it and clean it up.
Since this is a backend implementation detail, move it to the
backend-specific structs.
There's no guarantee that the parent Wayland compositor uses
CLOCK_MONOTONIC for reporting presentation timestamps; it could be
using e.g. CLOCK_MONOTONIC_RAW or another system-specific clock.
Forward the value via wlr_backend_impl.get_presentation_clock.
References: https://gitlab.freedesktop.org/wlroots/wlroots/-/merge_requests/3254#note_1143061
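A sketch of what that forwarding could look like in the Wayland
backend (the struct, field, and helper names here are illustrative,
not the actual implementation):

    #include <time.h>
    #include <wlr/backend/interface.h>

    /* Return the clk_id announced by the parent compositor via the
     * wp_presentation.clock_id event, stored when binding the global. */
    static clockid_t backend_get_presentation_clock(struct wlr_backend *wlr_backend) {
        struct wlr_wl_backend *wl = get_wl_backend_from_backend(wlr_backend); /* illustrative */
        return wl->presentation_clock; /* illustrative field */
    }

    static const struct wlr_backend_impl backend_impl = {
        /* ... */
        .get_presentation_clock = backend_get_presentation_clock,
    };
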
The parameters are used when the client is in the process of
building a buffer. There's no reason why this internal
implementation detail should be exposed in our public header.
All graphics drivers supporting cursor planes support ARGB8888,
the default cursor format, so this fallback is almost certainly
unused.
Essentially all cursor themes use alpha transparency to make it
clearer where the cursor hotspot is relative to the screen content.
It is better to fall back to a slightly slower software cursor than
it is to fall back to the opaque square that is a hardware cursor
without an alpha channel.
This change introduces new double buffered state to the wlr_output,
corresponding to the buffer format to render to.
The format being rendered to does not control the bit depth of colors
being sent to the display; it does generally determine the format with
which screenshot data is provided. The DRM backend _may_ send higher
bit depths if the render format depth is increased, but hardware and
other limitations may apply.
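Assuming this state is exposed through wlr_output_set_render_format
(a sketch; whether a given format works depends on the backend and
renderer), a compositor could opt into a deeper render format like
this:

    #include <drm_fourcc.h>
    #include <wlr/types/wlr_output.h>

    /* Request a 10-bit render format; like other double-buffered output
     * state, it only takes effect on the next successful commit. */
    wlr_output_set_render_format(output, DRM_FORMAT_XRGB2101010);
    if (!wlr_output_commit(output)) {
        /* Not supported here; fall back to the default. */
        wlr_output_set_render_format(output, DRM_FORMAT_XRGB8888);
        wlr_output_commit(output);
    }
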
Most (and possibly all) compositors using wlroots only ever render
fully opaque content. To provide better performance, this change
switches the default format used by wlr_output buffers from
ARGB8888 to the opaque XRGB8888.
Compositors like mutter, kwin, and weston already default to
XRGB8888, so this change is unlikely to expose any new bugs in
underlying drivers and hardware.
This does not affect the hardware cursor's buffer format, which is
still ARGB8888 by default.
As part of this change, the X11 backend (which does not support
changing format at runtime) now picks a true color, 24 bit depth
visual (i.e. XRGB8888) instead of a 32 bit depth (ARGB8888) one.
This makes it possible for the two functions using output_pick_format
(output_pick_cursor_format and output_create_swapchain) to select
different buffer formats.
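A hypothetical sketch of the resulting call sites (output_pick_format
is internal; its signature and the surrounding variables are
assumptions, not taken from the source):

    #include <drm_fourcc.h>

    /* output_pick_cursor_format keeps asking for the cursor default... */
    output_pick_format(output, display_formats, &cursor_format, DRM_FORMAT_ARGB8888);

    /* ...while output_create_swapchain can ask for the opaque default. */
    output_pick_format(output, display_formats, &swapchain_format, DRM_FORMAT_XRGB8888);
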
The backend and renderer don't interact with each other directly, so there's
no point in checking that their buffer caps intersect. What we want to
check is that:
- The backend and allocator buffer caps are compatible, because the
backend consumes buffers to display them.
- The renderer and allocator buffer caps are compatible, because the
renderer imports buffers to sample them or render to them.
For instance, when running with the DRM backend and the Pixman renderer,
the (backend & renderer) check will fail because backend = DMABUF and
renderer = DATA_PTR.
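A sketch of the adjusted checks (the cap-query helpers and fields are
wlroots internals, shown schematically with illustrative names):

    #include <assert.h>
    #include <stdint.h>

    uint32_t backend_caps = backend_get_buffer_caps(backend);           /* internal helper */
    uint32_t renderer_caps = renderer_get_render_buffer_caps(renderer); /* internal helper */
    uint32_t allocator_caps = allocator->buffer_caps;

    /* The backend consumes the allocator's buffers to display them... */
    assert(backend_caps & allocator_caps);
    /* ...and the renderer imports them to sample or render to them. */
    assert(renderer_caps & allocator_caps);
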
DMA-BUF flags are never used in practice, which makes all of our flag
handling effectively dead code. Also, APIs such as KMS don't
provide a good way to deal with the flags. Let's just fail the
DMA-BUF import when clients provide flags.
We were sending a protocol error if INTERLACED or BOTTOM_FIRST was
set. This is incorrect for the zwp_linux_dmabuf_params.create
code-path, because it kills the client without allowing it to
gracefully handle the error.
We should only send a protocol error if the client provides a bit
not listed in the protocol definition.
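A sketch of the distinction (enum and send/post helpers come from the
generated zwp_linux_dmabuf_v1 protocol code; the wrapper function name
and the choice of error code are assumptions, and the surrounding
request handler is simplified):

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <wayland-server-core.h>
    #include "linux-dmabuf-unstable-v1-protocol.h"

    static bool check_dmabuf_flags(struct wl_resource *params_resource, uint32_t flags) {
        const uint32_t all_flags =
            ZWP_LINUX_BUFFER_PARAMS_V1_FLAGS_Y_INVERT |
            ZWP_LINUX_BUFFER_PARAMS_V1_FLAGS_INTERLACED |
            ZWP_LINUX_BUFFER_PARAMS_V1_FLAGS_BOTTOM_FIRST;

        /* Bits not defined by the protocol are a client bug: protocol error. */
        if (flags & ~all_flags) {
            wl_resource_post_error(params_resource,
                ZWP_LINUX_BUFFER_PARAMS_V1_ERROR_INVALID_FORMAT,
                "unknown dmabuf flags %"PRIu32, flags);
            return false;
        }

        /* Known but unsupported flags on the create code-path: fail
         * gracefully so the client can recover instead of being killed. */
        if (flags != 0) {
            zwp_linux_buffer_params_v1_send_failed(params_resource);
            return false;
        }

        return true;
    }
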