This lets the renderer handle the wlr_buffer directly, just like it
does in texture_from_buffer. This also allows the renderer to batch
the rectangle updates, and update more than the damage region if
desirable (e.g. when there are too many rects), so it can be more efficient.
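A rough usage sketch (the update entry point and its signature are
approximated here, following the texture_from_buffer pattern, not quoted
from the patch):

    #include <pixman.h>
    #include <wlr/render/wlr_texture.h>

    // Sketch: pass the whole wlr_buffer and the damage region to the
    // renderer; it decides how to upload (per-rect or a wider update).
    static bool update_texture(struct wlr_texture *texture,
            struct wlr_buffer *buffer, pixman_region32_t *damage) {
        return wlr_texture_update_from_buffer(texture, buffer, damage);
    }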
GL_ALPHA_BITS is the number of bits of the alpha channel of the
currently bound frame buffer's color buffer -- which is precisely
renderer->current_buffer->rbo. Thus, instead of binding the color
buffer and checking its properties, we can query the already bound
frame buffer.
Note that GL_IMPLEMENTATION_COLOR_READ_{FORMAT,TYPE} are also
properties of the frame buffer's color buffer.
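For illustration, the alpha query reduces to a single glGetIntegerv() call
against whatever framebuffer is currently bound (a sketch, not the exact
wlroots code):

    #include <stdbool.h>
    #include <GLES2/gl2.h>

    // Sketch: read the alpha depth of the currently bound framebuffer's
    // color buffer directly, without re-binding renderer->current_buffer->rbo.
    static bool bound_framebuffer_has_alpha(void) {
        GLint alpha_bits = 0;
        glGetIntegerv(GL_ALPHA_BITS, &alpha_bits);
        return alpha_bits > 0;
    }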
Instead of checking whether the wlr_egl dependencies are available
in the GLES2 code, introduce internal_features['egl'] and check
that field.
When updating the EGL dependency list, we no longer need to update
the GLES2 logic.
Whether a texture is opaque or not doesn't depend on the renderer
at all, it just depends on the source buffer. Instead of forcing
all renderers to implement wlr_texture_impl.is_opaque, let's move
this into common code and use the wlr_buffer format to know whether
a texture will be opaque.
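Illustrative only -- wlroots derives this from its pixel format table, but
the core idea is that opacity follows from the buffer's DRM format alone:

    #include <stdbool.h>
    #include <stdint.h>
    #include <drm_fourcc.h>

    // Sketch: a format without an alpha channel always yields an opaque texture.
    static bool format_is_opaque(uint32_t drm_format) {
        switch (drm_format) {
        case DRM_FORMAT_XRGB8888:
        case DRM_FORMAT_XBGR8888:
            return true; // no alpha channel
        default:
            return false;
        }
    }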
`vulkan_format_props_query` calls `query_modifier_support` which
initializes fields of `props` with allocated memory. This memory is
leaked if `query_modifier_support` does not find a supported format
and shmtex is not supported, as in this case `add_fmt_props` ends
up being false and the allocated fields of `props` are never freed.
Replace the remaining wl_signal_emit() calls with wlr_signal_emit_safe,
which correctly handles
cases where a listener removes another listener.
Reported-by: Isaac Freund <ifreund@ifreund.xyz>
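A minimal before/after sketch (the signal and data here are placeholders,
not taken from the patch):

    #include <wayland-server-core.h>
    #include "util/signal.h" // wlroots-internal header declaring wlr_signal_emit_safe()

    static void emit_destroy(struct wl_signal *signal, void *data) {
        // wl_signal_emit(signal, data);    // unsafe if a listener removes another
        wlr_signal_emit_safe(signal, data); // safe against listener removal
    }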
Now that the DRM backend no longer depends on GBM, we can make it
optional. The GLES2 renderer still depends on it because of our EGL
device selection.
This is useful for compositors with their own renderers, and for
compositors using the Vulkan renderer.
These formats require EXT_texture_norm16, which in turn needs OpenGL
ES 3.1. The EXT_texture_norm16 extension does not support passing
gl_internalformat = GL_RGBA to glTexImage2D, as can be done for
formats available in OpenGL ES 2.0, so this commit adds a field to
wlr_gles2_pixel_format to provide a more specific internalformat
parameter to glTexImage2D.
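A hypothetical table entry (struct and field names approximate; needs a
recent drm_fourcc.h and GLES2/gl2ext.h):

    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>
    #include <drm_fourcc.h>

    // Sketch: with a dedicated internalformat, glTexImage2D can be passed
    // GL_RGBA16_EXT (from EXT_texture_norm16) instead of the generic GL_RGBA
    // that suffices for the OpenGL ES 2.0 formats.
    struct pixel_format_entry {
        uint32_t drm_format;
        GLint gl_internalformat;
        GLenum gl_format, gl_type;
    };

    static const struct pixel_format_entry abgr16161616 = {
        .drm_format = DRM_FORMAT_ABGR16161616,
        .gl_internalformat = GL_RGBA16_EXT,
        .gl_format = GL_RGBA,
        .gl_type = GL_UNSIGNED_SHORT,
    };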
This removes an artificial limitation in the form of an assert that disallowed
creating textures while the renderer is rendering.
A consumer might run its own rendering pipeline and still want to create
textures for internal use after the renderer has started rendering.
commit 44e8451cd9 ("render/gles2: hide shm formats without GL
support") added the is_gles2_pixel_format_supported() function to
render/gles2/pixel_format.c, whose stated purpose is to "check whether
the renderer has the needed GL extensions to read a given pixel format."
It then used that function to filter the pixel formats returned by
get_gles2_shm_formats().
The result of this change is that RGB formats are no longer reported for
GL drivers that don't implement EXT_read_format_bgra, even when those
formats are supported for rendering (which they have to be for
wlr_gles2_renderer_create() to succeed). This is a pretty clear
regression, since wlr_renderer_init_wl_shm() fails when either of
WL_SHM_FORMAT_ARGB8888 or WL_SHM_FORMAT_XRGB8888 are missing.
To fix the regression, change is_gles2_pixel_format_supported() to
accept all pixel formats that support rendering, regardless of whether
we can read them or not, and move the check for EXT_read_format_bgra
back into gles2_read_pixels(). (There's already a check for this
extension in gles2_preferred_read_format(), so we're not breaking any
abstraction that wasn't already broken.)
Tested on the NVIDIA 495.46 proprietary driver, which doesn't support
EXT_read_format_bgra.
Fixes: 44e8451cd9 ("render/gles2: hide shm formats without GL support")
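A sketch of the check as it moves back into gles2_read_pixels() (names
approximate, not the exact patch):

    #include <stdbool.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    // Sketch: only the BGRA readback path needs EXT_read_format_bgra;
    // the format itself stays advertised for rendering.
    static bool can_read_pixels(GLenum gl_format, bool has_read_format_bgra) {
        if (gl_format == GL_BGRA_EXT && !has_read_format_bgra) {
            return false;
        }
        return true;
    }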
This intersects two DRM format sets. This is useful for implementing
DMA-BUF feedback in compositors, see e.g. the Sway PR [1].
[1]: https://github.com/swaywm/sway/pull/6313
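A possible compositor-side use, assuming the new helper is
wlr_drm_format_set_intersect() with (dst, a, b) parameters:

    #include <stdbool.h>
    #include <wlr/render/drm_format_set.h>

    // Sketch: compute the formats usable both for sampling by the renderer
    // and for scanout by the backend, e.g. to build DMA-BUF feedback tranches.
    static bool intersect_formats(struct wlr_drm_format_set *out,
            const struct wlr_drm_format_set *render_formats,
            const struct wlr_drm_format_set *scanout_formats) {
        return wlr_drm_format_set_intersect(out, render_formats, scanout_formats);
    }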
Support for EXT_image_dma_buf_import_modifiers doesn't necessarily
indicate support for modifiers. For instance, Mesa will advertise
EXT_image_dma_buf_import_modifiers for all drivers. This is a trick
to allow EGL clients to enumerate supported formats (something
EXT_image_dma_buf_import is missing). For more information, see [1].
Add a new wlr_egl.has_modifiers flag which indicates whether
modifiers are supported. It's set to true if any
eglQueryDmaBufModifiersEXT query returned a non-empty list.
Use that flag to figure out whether the buffer modifier should be
passed to the EGL implementation on import.
[1]: https://github.com/KhronosGroup/EGL-Registry/issues/142
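A sketch of the import-side gating (attrib building simplified, names
approximate):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <drm_fourcc.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    // Sketch: only append modifier attributes to the eglCreateImageKHR attrib
    // list when the implementation is known to support modifiers.
    static size_t append_modifier_attribs(EGLint *attribs, size_t n,
            bool has_modifiers, uint64_t modifier) {
        if (!has_modifiers || modifier == DRM_FORMAT_MOD_INVALID) {
            return n;
        }
        attribs[n++] = EGL_DMA_BUF_PLANE0_MODIFIER_LO_EXT;
        attribs[n++] = (EGLint)(modifier & 0xFFFFFFFF);
        attribs[n++] = EGL_DMA_BUF_PLANE0_MODIFIER_HI_EXT;
        attribs[n++] = (EGLint)(modifier >> 32);
        return n;
    }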
The backends and allocators use INVALID, but the renderer uses
LINEAR. Running a compositor with WLR_RENDERER=pixman results in:
00:00:00.744 [types/output/render.c:59] Failed to pick primary buffer format for output 'WL-1'
This allows creating a wlr_egl from an already-existing EGL display
and context. This is useful to allow compositors to choose the exact
EGL initialization parameters.
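Assuming the new entry point is wlr_egl_create_with_context(display, context),
usage could look roughly like this:

    #include <EGL/egl.h>
    #include <wlr/render/egl.h>

    // Sketch: wrap an EGL display/context the compositor already initialized
    // with its own parameters into a wlr_egl.
    static struct wlr_egl *wrap_existing_egl(EGLDisplay display, EGLContext context) {
        return wlr_egl_create_with_context(display, context);
    }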
If the backend doesn't have a DRM FD, fallback to the renderer's.
This accommodates the situation where the headless backend hasn't
picked a particular DRM FD, but the renderer has picked one.
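A sketch of the fallback, using the wlr_backend_get_drm_fd() and
wlr_renderer_get_drm_fd() getters:

    #include <wlr/backend.h>
    #include <wlr/render/wlr_renderer.h>

    // Sketch: prefer the backend's DRM FD, otherwise fall back to whatever
    // device the renderer ended up using.
    static int pick_drm_fd(struct wlr_backend *backend,
            struct wlr_renderer *renderer) {
        int drm_fd = wlr_backend_get_drm_fd(backend);
        if (drm_fd < 0) {
            drm_fd = wlr_renderer_get_drm_fd(renderer);
        }
        return drm_fd;
    }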
If the backend hasn't picked a DRM FD but supports DMA-BUF, pick
an arbitrary render node. This will allow removing the DRM device
selection logic from the headless backend.
DMA-BUF import flags are never used in practice, which makes all of our flag
handling effectively dead code. Also, APIs such as KMS don't
provide a good way to deal with the flags. Let's just fail the
DMA-BUF import when clients provide flags.
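A sketch of the protocol-side rejection (not the exact wlroots code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <wlr/util/log.h>

    // Sketch: any y-invert / interlaced / bottom-field-first flag provided by
    // the client fails the import instead of being silently ignored.
    static bool check_dmabuf_flags(uint32_t flags) {
        if (flags != 0) {
            wlr_log(WLR_ERROR, "DMA-BUF import flags are not supported");
            return false;
        }
        return true;
    }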
For `required` to disable the dependency search, the value needs to be of the
`feature` type. Checking `gles2` via the `in` keyword returns a `bool`, but
`required: false` makes the dependency optional instead of disabled.
This new renderer is implemented with the existing wlr_renderer API
(which is known to be sub-optimal for some operations). It's not
used by default, but users can opt-in by setting WLR_RENDERER=vulkan.
The renderer depends on VK_EXT_image_drm_format_modifier and
VK_EXT_physical_device_drm.
Co-authored-by: Simon Ser <contact@emersion.fr>
Co-authored-by: Jan Beich <jbeich@FreeBSD.org>
If we aren't trying to create a dumb buffer allocator, and if the
DRM device has a render node (i.e. not a split render/display SoC),
then we can use the render node instead of the primary node. This
should allow wlroots to run under seatd when the current user
doesn't have the permission to open primary nodes (logind has a
quirk to allow physically logged in users to open primary nodes).
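A sketch of the render-node selection (libdrm call as named; error handling
trimmed):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <xf86drm.h>

    // Sketch: if the device has a render node, open it instead of the primary
    // node so that no primary-node permissions are required.
    static int open_render_node(int primary_fd) {
        char *name = drmGetRenderDeviceNameFromFd(primary_fd);
        if (name == NULL) {
            return -1; // split render/display SoC: no render node
        }
        int fd = open(name, O_RDWR | O_CLOEXEC);
        free(name);
        return fd;
    }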
This allows callers to specify the operations they'll perform on
the returned data pointer. The motivations for this are:
- The upcoming Linux MAP_NOSIGBUS flag may only be usable on
read-only mappings.
- gbm_bo_map with GBM_BO_TRANSFER_READ hurts performance.
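A usage sketch, assuming the flags end up as
WLR_BUFFER_DATA_PTR_ACCESS_{READ,WRITE} and the begin/end signatures are
roughly as shown:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <wlr/types/wlr_buffer.h>

    // Sketch: declare read-only access so the implementation can pick the
    // cheapest mapping strategy for this use.
    static bool read_buffer(struct wlr_buffer *buffer) {
        void *data;
        uint32_t format;
        size_t stride;
        if (!wlr_buffer_begin_data_ptr_access(buffer,
                WLR_BUFFER_DATA_PTR_ACCESS_READ, &data, &format, &stride)) {
            return false;
        }
        // ... read pixels from data ...
        wlr_buffer_end_data_ptr_access(buffer);
        return true;
    }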
The half-float formats depend on GL_OES_texture_half_float_linear,
not just the GL_OES_texture_half_float extension, because the latter
does not include support for linear magnification/minification filters.
The new 2101010 and 16161616F formats are only available on
little-endian builds, since their gl_types are larger than a byte and
thus endianness dependent.
Uses the EXT_device_query extension to get the EGL device matching the
requested DRM file descriptor. If the extension is not supported or no device
is found, the EGL device will be retrieved using GBM.
Depends on the EGL_EXT_device_enumeration extension to get the list of EGL devices.
This EGL extension has been added in [1]. The upsides are:
- We directly get a render node, instead of having to convert the
primary node name to a render node name.
- If EGL_DRM_RENDER_NODE_FILE_EXT returns NULL, that means there is
no render node being used by the driver.
[1]: https://github.com/KhronosGroup/EGL-Registry/pull/127
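A sketch of the query (requires a recent eglext.h providing
EGL_EXT_device_drm_render_node; the function pointer comes from
eglGetProcAddress()):

    #include <stddef.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    // Sketch: a NULL result means the driver doesn't use a render node at all.
    static const char *get_render_node(EGLDeviceEXT device) {
        PFNEGLQUERYDEVICESTRINGEXTPROC query_device_string =
            (PFNEGLQUERYDEVICESTRINGEXTPROC)eglGetProcAddress("eglQueryDeviceStringEXT");
        if (query_device_string == NULL) {
            return NULL;
        }
        return query_device_string(device, EGL_DRM_RENDER_NODE_FILE_EXT);
    }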
Without setting this, the EGL implementation is allowed to perform
destructive actions on the buffer when imported: its contents
become undefined.
This is mostly a pedantic change, because Mesa processes the attrib
and does absolutely nothing with it.
Now that we have our own wl_drm implementation, there's no reason
to provide custom renderer hooks to init a wl_display in the
interface. We can just initialize the wl_display generically,
depending on the renderer capabilities.
Khronos refers to extensions with their namespace as a prefix in
uppercase. Change our naming to align with Khronos conventions.
This also makes grepping easier.
Khronos refers to extensions with their namespace as a prefix in
uppercase. Change our naming to align with Khronos conventions.
This also makes grepping easier.
Everything needs to go through the unified wlr_buffer interface
now.
If necessary, there are two ways support for
EGL_WL_bind_wayland_display could be restored by compositors:
- Either by using GBM to convert back EGL Wayland buffers to
DMA-BUFs, then wrap the DMA-BUF into a wlr_buffer.
- Or by wrapping the EGL Wayland buffer into a special wlr_buffer
that doesn't implement any wlr_buffer_impl hook, and special-case
that buffer type in the renderer.
Custom backends and renderers need to implement
wlr_backend_impl.get_buffer_caps and
wlr_renderer_impl.get_render_buffer_caps. They can't if enum
wlr_buffer_cap isn't made public.
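For reference, the capability bits in question are roughly the following
(values as defined by wlroots; listed here for illustration):

    // Which kinds of buffers a backend can consume / a renderer can render to.
    enum wlr_buffer_cap {
        WLR_BUFFER_CAP_DATA_PTR = 1 << 0,
        WLR_BUFFER_CAP_DMABUF = 1 << 1,
        WLR_BUFFER_CAP_SHM = 1 << 2,
    };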
We never create an EGL context with the platform set to something
other than EGL_PLATFORM_GBM_KHR. Let's simplify wlr_egl_create by
taking a DRM FD instead of a (platform, remote_display) tuple.
This hides the internal details of creating an EGL context for a
specific device. This will allow us to transparently use the device
platform [1] when the time comes.
[1]: https://github.com/swaywm/wlroots/pull/2671
The wlr_egl functions are mostly used internally by the GLES2
renderer. Let's reduce our API surface a bit by hiding them. If
there are good use-cases for one of these, we can always make them
public again.
The functions mutating the current EGL context are not made private
because e.g. Wayfire uses them.