Replace them with wlr_signal_emit_safe, which correctly handles
cases where a listener removes another listener.
Reported-by: Isaac Freund <ifreund@ifreund.xyz>
Now that the DRM backend no longer depends on GBM, we can make it
optional. The GLES2 renderer still depends on it because of our EGL
device selection.
This is useful for compositors with their own renderers, and for
compositors using the Vulkan renderer.
These formats require EXT_texture_norm16, which in turn needs OpenGL
ES 3.1. The EXT_texture_norm16 extension does not support passing
gl_internalformat = GL_RGBA to glTexImage2D, as can be done for
formats available in OpenGL ES 2.0, so this commit adds a field to
wlr_gles2_pixel_format to provide a more specific internalformat
parameter to glTexImage2D.
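Roughly, each entry in the GLES2 format table gains an extra field (this
layout is illustrative; the struct is internal and may differ):

struct wlr_gles2_pixel_format {
    uint32_t drm_format;
    GLint gl_internalformat; // if 0, gl_format is passed as internalformat
    GLint gl_format, gl_type;
    bool has_alpha;
};

glTexImage2D then receives gl_internalformat when it is non-zero, and
gl_format otherwise.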
This removes an artificial limitation in the form of an assert that disallowed
the creation of textures while the renderer is rendering.
A consumer might run its own rendering pipeline and still want to create
textures for internal use after the renderer has started rendering.
commit 44e8451cd9 ("render/gles2: hide shm formats without GL
support") added the is_gles2_pixel_format_supported() function to
render/gles2/pixel_format.c, whose stated purpose is to "check whether
the renderer has the needed GL extensions to read a given pixel format."
It then used that function to filter the pixel formats returned by
get_gles2_shm_formats().
The result of this change is that RGB formats are no longer reported for
GL drivers that don't implement EXT_read_format_bgra, even when those
formats are supported for rendering (which they have to be for
wlr_gles2_renderer_create() to succeed). This is a pretty clear
regression, since wlr_renderer_init_wl_shm() fails when either of
WL_SHM_FORMAT_ARGB8888 or WL_SHM_FORMAT_XRGB8888 is missing.
To fix the regression, change is_gles2_pixel_format_supported() to
accept all pixel formats that support rendering, regardless of whether
we can read them or not, and move the check for EXT_read_format_bgra
back into gles2_read_pixels(). (There's already a check for this
extension in gles2_preferred_read_format(), so we're not breaking any
abstraction that wasn't already broken.)
Tested on the NVIDIA 495.46 proprietary driver, which doesn't support
EXT_read_format_bgra.
Fixes: 44e8451cd9 ("render/gles2: hide shm formats without GL support")
This intersects two DRM format sets. This is useful for implementing
DMA-BUF feedback in compositors, see e.g. the Sway PR [1].
[1]: https://github.com/swaywm/sway/pull/6313
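A compositor could use it roughly like this (the scanout_formats set and the
exact signatures are assumptions):

const struct wlr_drm_format_set *render_formats =
    wlr_renderer_get_dmabuf_texture_formats(renderer);

struct wlr_drm_format_set common = {0};
if (!wlr_drm_format_set_intersect(&common, render_formats, &scanout_formats)) {
    // no common formats: fall back to advertising the render formats only
}
// build a DMA-BUF feedback tranche from "common", then:
wlr_drm_format_set_finish(&common);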
Support for EXT_image_dma_buf_import_modifiers doesn't necessarily
indicate support for modifiers. For instance, Mesa will advertise
EXT_image_dma_buf_import_modifiers for all drivers. This is a trick
to allow EGL clients to enumerate supported formats (something
EXT_image_dma_buf_import is missing). For more information, see [1].
Add a new wlr_egl.has_modifiers flag which indicates whether
modifiers are supported. It's set to true if any
eglQueryDmaBufModifiersEXT query returned a non-empty list.
Use that flag to figure out whether the buffer modifier should be
passed to the EGL implementation on import.
[1]: https://github.com/KhronosGroup/EGL-Registry/issues/142
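On import this boils down to something like the following sketch (attribs, n
and dmabuf are illustrative locals):

if (egl->has_modifiers && dmabuf->modifier != DRM_FORMAT_MOD_INVALID) {
    attribs[n++] = EGL_DMA_BUF_PLANE0_MODIFIER_LO_EXT;
    attribs[n++] = dmabuf->modifier & 0xFFFFFFFF;
    attribs[n++] = EGL_DMA_BUF_PLANE0_MODIFIER_HI_EXT;
    attribs[n++] = dmabuf->modifier >> 32;
}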
The backends and allocators use INVALID, but the renderer uses
LINEAR. Running a compositor with WLR_RENDERER=pixman results in:
00:00:00.744 [types/output/render.c:59] Failed to pick primary buffer format for output 'WL-1'
This allows creating a wlr_egl from an already-existing EGL display
and context. This is useful to allow compositors to choose the exact
EGL initialization parameters.
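A minimal sketch (assuming the new constructor is called
wlr_egl_create_with_context):

struct wlr_egl *egl = wlr_egl_create_with_context(egl_display, egl_context);
if (egl == NULL) {
    // the compositor-provided display/context pair couldn't be wrapped
}
struct wlr_renderer *renderer = wlr_gles2_renderer_create(egl);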
If the backend doesn't have a DRM FD, fallback to the renderer's.
This accommodates the situation where the headless backend hasn't
picked a DRM FD in particular, but the renderer has picked one.
If the backend hasn't picked a DRM FD but supports DMA-BUF, pick
an arbitrary render node. This will allow removing the DRM device
selection logic from the headless backend.
They are never used in practice, which makes all of our flag
handling effectively dead code. Also, APIs such as KMS don't
provide a good way to deal with the flags. Let's just fail the
DMA-BUF import when clients provide flags.
For `required` to disable the search, the value needs to be of `feature` type.
Checking `gles2` via the `in` keyword returns a `bool`, but `required: false`
makes the dependency optional instead of disabled.
This new renderer is implemented with the existing wlr_renderer API
(which is known to be sub-optimal for some operations). It's not
used by default, but users can opt-in by setting WLR_RENDERER=vulkan.
The renderer depends on VK_EXT_image_drm_format_modifier and
VK_EXT_physical_device_drm.
Co-authored-by: Simon Ser <contact@emersion.fr>
Co-authored-by: Jan Beich <jbeich@FreeBSD.org>
If we aren't trying to create a dumb buffer allocator, and if the
DRM device has a render node (i.e. not a split render/display SoC),
then we can use the render node instead of the primary node. This
should allow wlroots to run under seatd when the current user
doesn't have the permission to open primary nodes (logind has a
quirk to allow physically logged in users to open primary nodes).
This allows callers to specify the operations they'll perform on
the returned data pointer. The motivations for this are:
- The upcoming Linux MAP_NOSIGBUS flag may only be usable on
read-only mappings.
- gbm_bo_map with GBM_BO_TRANSFER_READ hurts performance.
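A caller that only needs to read the data might then do roughly the
following (assuming a READ access flag along these lines):

void *data;
uint32_t format;
size_t stride;
if (wlr_buffer_begin_data_ptr_access(buffer,
        WLR_BUFFER_DATA_PTR_ACCESS_READ, &data, &format, &stride)) {
    // read-only access to the buffer contents
    wlr_buffer_end_data_ptr_access(buffer);
}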
The half-float formats depend on GL_OES_texture_half_float_linear,
not just the GL_OES_texture_half_float extension, because the latter
does not include support for linear magnification/minification filters.
The new 2101010 and 16161616F formats are only available on little-
endian builds, since their gl_types are larger than a byte and thus
endianness dependent.
Uses the EXT_device_query extension to get the EGL device matching the
requested DRM file descriptor. If the extension is not supported or no device
is found, the EGL device will be retrieved using GBM.
Depends on EGL_EXT_device_enumeration to get the list of EGL devices.
This EGL extension has been added in [1]. The upsides are:
- We directly get a render node, instead of having to convert the
primary node name to a render node name.
- If EGL_DRM_RENDER_NODE_FILE_EXT returns NULL, that means there is
no render node being used by the driver.
[1]: https://github.com/KhronosGroup/EGL-Registry/pull/127
Without setting this the EGL implementation is allowed to perform
destructive actions on the buffer when imported: its contents
become undefined.
This is mostly a pedantic change, because Mesa processes the attrib
and does absolutely nothing with it.
Now that we have our own wl_drm implementation, there's no reason
to provide custom renderer hooks to init a wl_display in the
interface. We can just initialize the wl_display generically,
depending on the renderer capabilities.
Khronos refers to extensions with their namespace as a prefix in
uppercase. Change our naming to align with Khronos conventions.
This also makes grepping easier.
Khronos refers to extensions with their namespace as a prefix in
uppercase. Change our naming to align with Khronos conventions.
This also makes grepping easier.
Everything needs to go through the unified wlr_buffer interface
now.
If necessary, there are two ways support for
EGL_WL_bind_wayland_display could be restored by compositors:
- Either by using GBM to convert back EGL Wayland buffers to
DMA-BUFs, then wrap the DMA-BUF into a wlr_buffer.
- Or by wrapping the EGL Wayland buffer into a special wlr_buffer
that doesn't implement any wlr_buffer_impl hook, and special-case
that buffer type in the renderer.
Custom backends and renderers need to implement
wlr_backend_impl.get_buffer_caps and
wlr_renderer_impl.get_render_buffer_caps. They can't if enum
wlr_buffer_cap isn't made public.
We never create an EGL context with the platform set to something
other than EGL_PLATFORM_GBM_KHR. Let's simplify wlr_egl_create by
taking a DRM FD instead of a (platform, remote_display) tuple.
This hides the internal details of creating an EGL context for a
specific device. This will allow us to transparently use the device
platform [1] when the time comes.
[1]: https://github.com/swaywm/wlroots/pull/2671
The wlr_egl functions are mostly used internally by the GLES2
renderer. Let's reduce our API surface a bit by hiding them. If
there are good use-cases for one of these, we can always make them
public again.
The functions mutating the current EGL context are not made private
because e.g. Wayfire uses them.
Add wlr_pixman_buffer_get_current_image for wlr_pixman_renderer.
Add wlr_gles2_buffer_get_current_fbo for wlr_gles2_renderer.
This allows getting the FBO/pixman_image_t, so the compositor can perform
additional operations on the FBO (e.g. attach a depth buffer), or render to
the pixman_image_t without pixman (e.g. use Qt's QPainter instead of pixman).
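For example, attaching a depth renderbuffer to the current FBO could look
roughly like this (the exact signature of wlr_gles2_buffer_get_current_fbo
and the depth_rb renderbuffer are assumptions):

GLuint fbo = wlr_gles2_buffer_get_current_fbo(renderer);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
    GL_RENDERBUFFER, depth_rb);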
The types of buffers supported by the renderer might depend on the renderer
instance. For example, a renderer might only support DMA-BUFs if the
necessary EGL extensions are available.
Pass the wlr_renderer to get_buffer_caps so that the renderer can
perform such checks.
Fixes: 982498fab3 ("render: introduce renderer_get_render_buffer_caps")
This new API allows buffer implementations to know when a user is
actively accessing the buffer's underlying storage. This is
important for the upcoming client-backed wlr_buffer implementation.
This allows compositors to choose a wlr_buffer to render to. This
is a less awkward interface than having to call bind_buffer() before
and after begin() and end().
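A rough sketch of the intended usage (assuming the new entry point is
wlr_renderer_begin_with_buffer):

if (wlr_renderer_begin_with_buffer(renderer, buffer)) {
    wlr_renderer_clear(renderer, (float[4]){0, 0, 0, 1});
    // ... render ...
    wlr_renderer_end(renderer);
}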
Closes: https://github.com/swaywm/wlroots/issues/2618
This allows users to know the capabilities of the buffers that
will be allocated. The buffer capability is important to
know when negotiating buffer formats.
When importing a DMA-BUF wlr_buffer as a wlr_texture, the GLES2
renderer caches the result, in case the buffer is used for texturing
again in the future. When the wlr_texture is destroyed by the caller,
the wlr_buffer is unref'ed, but the wlr_gles2_texture is kept around.
This is fine because wlr_gles2_texture listens for wlr_buffer's destroy
event to avoid any use-after-free.
However, with this logic wlr_texture_destroy doesn't "really" destroy
the wlr_gles2_texture. It just decrements the wlr_buffer ref'count.
Each wlr_texture_destroy call must have a matching prior
wlr_texture_create_from_buffer call or the ref'counting will go south.
When destroying the renderer, we don't want to decrement any wlr_buffer
ref'count. Instead, we want to go through any cached wlr_gles2_texture
and destroy our GL state. So instead of calling wlr_texture_destroy, we
need to call our internal gles2_texture_destroy function.
Closes: https://github.com/swaywm/wlroots/issues/2941
Make it so wlr_gles2_texture is ref'counted (via wlr_buffer). This
is similar to wlr_gles2_buffer or wlr_drm_fb work.
When creating a wlr_texture from a wlr_buffer, first check if we
already have a texture for the buffer. If so, increase the
wlr_buffer ref'count and make sure any changes made by an external
process are made visible (by invalidating the texture).
When destroying a wlr_texture created from a wlr_buffer, decrease
the ref'count, but keep the wlr_texture around in case the caller
uses it again. When the wlr_buffer is destroyed, cleanup the
wlr_texture.
This adds a function to create a wlr_texture from a wlr_buffer.
The main motivation for this is to allow the renderer to create a
single wlr_texture per wlr_buffer. This can avoid needless imports
by re-using existing textures.
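Usage would look roughly like this (assuming the new function is
wlr_texture_from_buffer):

struct wlr_texture *texture = wlr_texture_from_buffer(renderer, buffer);
if (texture != NULL) {
    // ... render with texture ...
    wlr_texture_destroy(texture); // drops the renderer's reference
}
// importing the same wlr_buffer again re-uses the cached texture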
GL_RENDERER typically displays a human-readable string for the name
of the GPU, and EGL_VENDOR typically displays a human-readable string
for the GPU manufacturer. EGL_DRIVER_NAME_EXT should give the name of
the driver in use.
References: e8baa0bf39
This function is only required because the DRM backend still needs
to perform multi-GPU magic under-the-hood. Remove the wlr_ prefix
to make it clear it's not a candidate for being made public.
Prior to this commit, WLR_RENDERER was only looked up when the
backend returned a DRM FD.
Make it so WLR_RENDERER is always looked up, so that running wlroots
on a system without a GPU and with WLR_RENDERER=gles2 fails as
expected instead of falling back to the Pixman renderer.
Make it clear GLES2 is being used. Before this commit, various
GL-related information was printed, but not an easy-to-find line
about which renderer is being picked up.
This env var forces the creation of a specific renderer. If no renderer
is specified, the function will try to create all of the renderers one
by one until one is created successfully.
It allocates in local main memory via shm_open, and provides a FD
to allow sharing with other processes.
This is suitable for software rendering under the Wayland and X11
backends.
The compositor shouldn't write to client buffers if the client
attaches a DMA-BUF to a wl_surface, then attaches a shm buffer.
Make gles2_texture_write_pixels return an error to prevent this
from happening.
The get_drm_fd function was made available in an internal header with a53ab146f. Move it
now to the public header so consumers opting in to the unstable interfaces can
make use of it.
PRIME support for buffer sharing has become mandatory since the renderer
rewrite. Make sure we check for the appropriate capabilities in backend,
allocator and renderer.
See also #2819.
All backends use the GBM platform. We can't use it to figure out
whether the DRM backend is used anymore.
Let's just try to always request a high-priority EGL context. Failing
to do so is not fatal.
Split render/display setups have two separate devices: one display-only
with a primary node, and one render-only with a render node. However
in these cases the EGL implementation and the Wayland compositor will
advertise the display device instead of the render device [1]. The EGL
implementation will magically open the render device when the display
device is passed in.
So just pass the display device as if it were a render device. Maybe in
the future Mesa will advertise the render device instead and we'll be
able to remove this workaround.
[1]: https://gitlab.freedesktop.org/mesa/mesa/-/issues/4178
Compute only the transform matrix in the output. The projection matrix
will be calculated inside the gles2 renderer when we start rendering.
The goal is to help the pixman rendering process.
Mesa provides YUV shaders, and can import multi-planar YUV DMA-BUFs
as a single EGLImage. Remove the arbitrary limitation.
If the driver doesn't support importing YUV as a single EGLImage,
the import will fail and the result will be the same anyway.
Clamping texture coordinates prevents OpenGL from blending the left and
right edge (or top and bottom edge) when scaling textures with GL_LINEAR
filtering. This prevents visual artifacts like swaywm/sway#5809.
Per discussion on IRC, this behaviour is made default. Compositors that want
the wrapping behaviour (e.g. for tiled patterns) can override this by doing:
struct wlr_gles2_texture_attribs attribs;
wlr_gles2_texture_get_attribs(texture, &attribs);
glBindTexture(attribs.target, attribs.tex);
glTexParameteri(attribs.target, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(attribs.target, GL_TEXTURE_WRAP_T, GL_REPEAT);
glBindTexture(attribs.target, 0);
As surfaces are no longer going to be used with wlr_egl, I may as well add this check, as it is needed for safety whenever surfaceless rendering is used.
The creation of `wlr_egl` will fail if the device extension
EGL_MESA_device_software is defined. The creation process is allowed to
continue only if the environment variable `WLR_RENDERER_ALLOW_SOFTWARE`
is defined to the value 1.
When gbm_bo_create is used and the GBM implementation does support
modifiers, gbm_bo_get_modifier may return something other than
DRM_FORMAT_MOD_INVALID. This can cause issues with the rest of the
stack (e.g. EGL or KMS) in case these don't support modifiers.
Instead, force the modifier to INVALID, to make sure no one uses
modifiers.
Instead of lazily exporting BOs to DMA-BUFs, do it on init. This is
the only way to use the buffer, so there's no point in deferring
that work.
This is preparatory work for the next commit.
The GBM device needs to be destroyed after the EGL display.
==50931==ERROR: AddressSanitizer: SEGV on unknown address 0x7fe40a000049 (pc 0x7fe446121d30 bp 0x60400001bbd0 sp 0x7ffc99c774d0 T0)
==50931==The signal is caused by a READ memory access.
#0 0x7fe446121d30 (/usr/lib/dri/radeonsi_dri.so+0x5f0d30)
#1 0x7fe4474717bd (/usr/lib/../lib/libEGL_mesa.so.0+0x177bd)
#2 0x7fe4474677d9 (/usr/lib/../lib/libEGL_mesa.so.0+0xd7d9)
#3 0x7fe44cca7b6f in wlr_egl_destroy ../subprojects/wlroots/render/egl.c:379
#4 0x7fe44ccc2626 in gles2_destroy ../subprojects/wlroots/render/gles2/renderer.c:705
#5 0x7fe44ccb5041 in wlr_renderer_destroy ../subprojects/wlroots/render/wlr_renderer.c:37
#6 0x7fe44cd17850 in backend_destroy ../subprojects/wlroots/backend/wayland/backend.c:296
#7 0x7fe44ccca4de in wlr_backend_destroy ../subprojects/wlroots/backend/backend.c:48
#8 0x7fe44cd11b21 in multi_backend_destroy ../subprojects/wlroots/backend/multi/backend.c:58
#9 0x7fe44cd125b0 in handle_display_destroy ../subprojects/wlroots/backend/multi/backend.c:125
#10 0x7fe44c315e0e (/usr/lib/libwayland-server.so.0+0x8e0e)
#11 0x7fe44c3165a6 in wl_display_destroy (/usr/lib/libwayland-server.so.0+0x95a6)
#12 0x55a2c8870683 in server_fini ../sway/server.c:203
#13 0x55a2c886cbf2 in main ../sway/main.c:436
#14 0x7fe44b77c151 in __libc_start_main (/usr/lib/libc.so.6+0x28151)
#15 0x55a2c883172d in _start (/home/simon/src/sway/build/sway/sway+0x33472d)
Instead of requiring callers to manually make the EGL context current
before binding a buffer and unsetting it after unbinding a buffer, do
it inside wlr_renderer_bind_buffer.
This hides renderer-specific implementation details inside the
wlr_renderer interface. Non-GLES2 renderers may not use EGL.
This removes all EGL dependencies from the backends.
References: https://github.com/swaywm/wlroots/issues/2618
References: https://github.com/swaywm/wlroots/pull/2615#issuecomment-756687006
It can be surprising and unexpected that texture operations (such as
texture upload and import) change the current EGL context, especially
when it's done under-the-hood by wlroots in response to wl_surface
requests.
To prevent surprises, save and restore the previous EGL context.
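The save/restore boils down to the standard EGL queries, roughly
(egl->display refers to our own display here):

EGLDisplay prev_display = eglGetCurrentDisplay();
EGLContext prev_context = eglGetCurrentContext();
EGLSurface prev_draw = eglGetCurrentSurface(EGL_DRAW);
EGLSurface prev_read = eglGetCurrentSurface(EGL_READ);

// ... make our own context current and do the texture upload/import ...

if (prev_display == EGL_NO_DISPLAY) {
    // no context was current before: just unset ours again
    prev_display = egl->display;
}
eglMakeCurrent(prev_display, prev_draw, prev_read, prev_context);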
It can be surprising that destroying a buffer changes the EGL context,
especially since this can be triggered from anywhere wlr_buffer_unlock
is called.
Prevent this from happening by saving and restoring the EGL context.
Our wlr_format_set structs don't hold GBM usage flags. Instead, users
who want to get a LINEAR buffer can use the DRM_FORMAT_MOD_LINEAR
modifier even if the kernel driver doesn't support modifiers.
Add a special case to wlr_drm_format_intersect to properly handle this
situation.
In wlr_drm_format_dup, allocate the new wlr_drm_format using cap instead
of len. This makes it so the cap field is up-to-date and the chunk of
memory isn't too small if we append new modifiers (we don't allow this
yet but might in the future).
Rename wlr_renderer_get_dmabuf_formats to
wlr_renderer_get_dmabuf_texture_formats. This makes it clear the formats
are only suitable for creating wlr_textures.
On some setups (e.g. remote access via SSH) the current user won't have
the permission to open the primary node at all. It's still possible to
use drmGetDevices to match the primary node name returned by EGL.
Closes: https://github.com/swaywm/wlroots/issues/2488
The swapchain maximum capacity is set to 4, so that we have enough room
for:
- A buffer currently displayed on screen
- A buffer queued for display (e.g. to KMS)
- A pending buffer that'll be queued next commit
- An additional pending buffer in case we want to invalidate the
currently pending one
Don't force compositors to check when an empty shape is being rendered.
References #2282. This was motivated by dwl crashing when setting window
borders to 0 (djpohly/dwl#51).
texcoord is a vector of coordinates, with the first member being the X
axis value and the second member being the Y axis value. It makes more
sense to use the accessors with the same names.
Prior to this commit, wlr_egl_init seemed to assume the extension string
queried via EGL_NO_DISPLAY was a subset of the extension string queried
via an initialized display. This isn't correct.
EGL_EXT_client_extensions [1] defines two types of extensions: client
extensions and display extensions. The set of supported client and
display extensions are disjoint (ie. an extension is either a client or
a display extension, not both). Client extensions are queried via
EGL_NO_DISPLAY, display extensions are queried via an initialized
display.
Rename the variables to make this clear. Remove the misleading comment.
Log both client and display extensions.
[1]: https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_client_extensions.txt
This leaves an EGL context current behind. wlr_gles2_renderer_create was
assuming the EGL context was already current because of this (because it
called a GL function right off the bat).
After swapping buffers, it doesn't make sense to perform more rendering
operations. Unset the context to reflect this.
This commit makes it so the context is always only current between
wlr_egl_make_current and wlr_egl_swap_buffers.
This is an alternative to [1].
[1]: https://github.com/swaywm/wlroots/pull/2212
This function can be called after wlr_egl_make_current to cleanup the
EGL context. This avoids having lingering EGL contexts that make things
work by chance.
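The intended pairing, roughly (the name of the new unset function and the
exact wlr_egl_make_current signature are assumptions):

if (wlr_egl_make_current(egl, surface, NULL)) {
    // ... GL work that doesn't end in a buffer swap ...
    wlr_egl_unset_current(egl);
}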
Closes: https://github.com/swaywm/wlroots/issues/2197
Instead of requiring compositors to call wlr_texture_get_size each time
they want to access the texture's size, expose this information as
wlr_texture fields.
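For example (field types assumed):

// instead of calling wlr_texture_get_size(texture, &width, &height):
uint32_t width = texture->width, height = texture->height;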
This makes it easier for the user of this library to properly handle
failure of this function.
The signature of wlr_renderer_impl.init_wl_display was also modified to
allow for proper error propagation.
Add a wlr_renderer.rendering bool, set it to true between
wlr_renderer_begin() and wlr_renderer_end(). Assert we're rendering when
calling functions that render.
Bumps minimum version to 0.51.0
- Remove all intermediate static libraries.
They serve no purpose and just add a bunch of boilerplate for
managing dependencies and options. It's now managed as a list of
files which are compiled into libwlroots directly.
- Use install_subdir instead of installing headers individually.
I've changed my mind since I did that. Listing them out is annoying as
hell, and it's easy to forget to do it.
- Add not_found_message for all of our optional dependencies that have a
meson option. It gives some hints about what option to pass and what
the optional dependency is for.
- Move all backend subdirectories into their own meson.build. This
keeps some of the backend-specific build logic (especially rdp and
session) more neatly separated off.
- Don't overlink example clients with code they're not using.
This was done by merging the protocol dictionaries and setting some
variables containing the code and client header file.
Example clients now explicitly mention what extension protocols they
want to link to.
- Split compositor example logic from client example logic.
- Minor formatting changes
Some extensions are only advertised by the EGL implementation with a
non-zero EGLDisplay. That's the case when the extension can only be
enabled when the hardware/driver supports it for instance.
Instead of checking for all extensions without a display, check only for
EGL_EXT_platform_base and EGL_KHR_debug which are used before
eglGetDisplay. Check for all other extensions when we have a display.
Closes: https://github.com/swaywm/wlroots/issues/1955
Remove glapi.sh code generation, replace it with hand-written loading
code that checks extension strings before calling eglGetProcAddress.
The GLES2 renderer still uses global state because of:
- {PUSH,POP}_GLES2_DEBUG macros
- wlr_gles2_texture_from_* taking a wlr_egl instead of the renderer
Otherwise this error happens:
../subprojects/wlroots/render/wlr_texture.c: In function ‘wlr_texture_get_size’:
../subprojects/wlroots/render/wlr_texture.c:47:9: error: ISO C forbids ‘return’ with expression, in function returning void [-Werror=pedantic]
47 | return texture->impl->get_size(texture, width, height);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../subprojects/wlroots/render/wlr_texture.c:45:6: note: declared here
45 | void wlr_texture_get_size(struct wlr_texture *texture, int *width,
| ^~~~~~~~~~~~~~~~~~~~
This requires functions without a prototype definition to be static.
This allows us to detect dead code, export fewer symbols and put shared
functions in headers.
Prior to this commit, compositors needed to render the texture to an
intermediate off-screen buffer using wlr_renderer APIs if they wanted to
use a custom rendering path (e.g. render to a 3D scene).
A new wlr_gles2_texture_get_attribs exposes the GL texture target and ID
so that compositors can render wlr_textures with their own shaders. An
example of a compositor doing so is available at [1].
[1]: 3db905b784/src/render.c (L227)
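A rough sketch of such a custom path (prog_rgba and prog_external are
hypothetical shader programs):

struct wlr_gles2_texture_attribs attribs;
wlr_gles2_texture_get_attribs(texture, &attribs);

// external (e.g. wl_drm) textures need a samplerExternalOES shader
glUseProgram(attribs.target == GL_TEXTURE_EXTERNAL_OES ?
    prog_external : prog_rgba);
glBindTexture(attribs.target, attribs.tex);
// ... set up vertex attributes and draw ...
glBindTexture(attribs.target, 0);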
We don't need our own enum for types. Instead we just use
GL_TEXTURE_{2D,EXTERNAL_OES}, which already describes usage.
Also fixes a situation where we were using GL_TEXTURE_2D in a situation
we should not have. wl_drm buffers are always GL_TEXTURE_EXTERNAL_OES,
no matter if they're RGB or any other format.
When a texture is destroyed between wlr_egl_make_current and
wlr_egl_swap_buffers, it resets the current EGL surface to NULL. This
makes wlr_egl_swap_buffers fail.
If the EGL context is already current, there's no need to reset it.
According to the spec:
> If <n_rects> is 0 then <rects> is ignored and the entire
> surface is implicitly damaged and the behaviour is equivalent
> to calling eglSwapBuffers.
When we want to swap with an empty damage region, set the damage to a single
empty rectangle.
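In code this is roughly the following (variable names illustrative; the KHR
variant of the swap-with-damage call is used here as an example):

if (n_rects == 0) {
    // a single empty rect means "no damage", unlike n_rects == 0
    EGLint rect[4] = {0, 0, 0, 0};
    return eglSwapBuffersWithDamageKHR(display, surface, rect, 1);
}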
The deleted includes are redundant, because other headers will include
the necessary files. Additionally, they cause build failures, because
including EGL/egl.h or EGL/eglext.h directly, instead of through
wlr/render/egl.h or wlr/render/interface.h, will mean that
MESA_EGL_NO_X11_HEADERS will not have been defined, and so the EGL
headers will attempt to pull in unnecessary X11 headers that may not
exist on the system.
For the headers produced by glgen.sh, the includes couldn't simply be
deleted, because no other header would include the EGL headers. Neither
wlr/render/egl.h nor wlr/render/interface.h felt appropriate to include,
so I opted instead to copy the MESA_EGL_NO_X11_HEADERS definition before
the EGL includes.
This type adds a container for formats + modifiers.
A list of [format [modifier]] was chosen instead of
[format modifier] because that is how GBM accepts them.
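The layout is roughly as follows (illustrative; the exact definition lives in
the header):

struct wlr_drm_format {
    uint32_t format;
    size_t len, cap;
    uint64_t modifiers[];
};

struct wlr_drm_format_set {
    size_t len, cap;
    struct wlr_drm_format **formats;
};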
Co-Authored-By: emersion <contact@emersion.fr>
The read format is dependent on the output, so we first need to make it
current. This fixes a race condition in wlr-screencopy-v1 where a dmabuf
client would cause EGL_NO_SURFACE to be bound at the time when
screencopy needs to query for the preferred format, causing GL errors.
We were assuming GL_BGRA_EXT was always supported.
We now check that it's supported for rendering. We fail if it isn't because
this format is specified as "always supported" by the Wayland protocol.
We also check if it's supported for reading pixels. A new preferred_read_format
function returns the preferred format that can be used to read pixels. This is
used by the screencopy protocol.
This removes any assumptions about how the libdrm headers are installed,
and uses the pkg-config include directories as we're "supposed to".
This only adds a partial dependency, since we don't actually need to
link against libdrm.
This PR broke a private nixpkgs definition I have for wlroots: https://github.com/swaywm/wlroots/pull/1304
It is fixed by changing `#include <drm_fourcc.h>` to `#include <libdrm/drm_fourcc.h>`, which follows what is already done in the dmabuf example.
If a client uses an older version of the dmabuf protocol, use the
`formats` event instead of `modifiers` (since that didn't exist in older
versions).
With a bit of necessary guessing, support dmabuf importing even when
EGL_EXT_image_dma_buf_import_modifiers isn't present instead of
failing up front.
Detecting whether eglSwapBuffersWithDamageEXT or
eglSwapBuffersWithDamageKHR is used should be based on the extension
string, not only on the availability of the function.
Compositors now have more control over how the backend creates its
renderer. Currently all backends create an EGL/GLES2 renderer, so
the necessary attributes for creating the context are passed to a
user-provided callback function. It is responsible for initializing the
provided wlr_egl and returning a renderer; on failure, it returns NULL.
Fixes #987