This EGL extension has been added in [1]. The upsides are:
- We directly get a render node, instead of having to convert the
primary node name to a render node name.
- If EGL_DRM_RENDER_NODE_FILE_EXT returns NULL, that means there is
no render node being used by the driver.
[1]: https://github.com/KhronosGroup/EGL-Registry/pull/127
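A sketch of the query itself, assuming an EGLDeviceEXT obtained via
EGL_EXT_device_query and eglQueryDeviceStringEXT loaded through
eglGetProcAddress:

// Sketch: ask the driver for the render node path directly.
const char *render_node =
    eglQueryDeviceStringEXT(device, EGL_DRM_RENDER_NODE_FILE_EXT);
if (render_node == NULL) {
    // No render node is in use by the driver.
}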
Without setting this the EGL implementation is allowed to perform
destructive actions on the buffer when imported: its contents
become undefined.
This is mostly a pedantic change, because Mesa processes the attrib
and does absolutely nothing with it.
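For reference, a hedged sketch of where such an attrib goes, assuming
the attrib in question is EGL_IMAGE_PRESERVED_KHR and a single-plane
DMA-BUF import:

// Sketch: request that the buffer's contents survive the import.
EGLint attribs[] = {
    EGL_WIDTH, width,
    EGL_HEIGHT, height,
    EGL_LINUX_DRM_FOURCC_EXT, drm_format,
    EGL_DMA_BUF_PLANE0_FD_EXT, dmabuf_fd,
    EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
    EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
    EGL_IMAGE_PRESERVED_KHR, EGL_TRUE,
    EGL_NONE,
};
EGLImageKHR image = eglCreateImageKHR(display, EGL_NO_CONTEXT,
    EGL_LINUX_DMA_BUF_EXT, NULL, attribs);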
Now that we have our own wl_drm implementation, there's no reason
to provide custom renderer hooks to init a wl_display in the
interface. We can just initialize the wl_display generically,
depending on the renderer capabilities.
Khronos refers to extensions with their namespace as a prefix in
uppercase. Change our naming to align with Khronos conventions.
This also makes grepping easier.
Everything needs to go through the unified wlr_buffer interface
now.
If necessary, there are two ways support for
EGL_WL_bind_wayland_display could be restored by compositors:
- Either by using GBM to convert back EGL Wayland buffers to
DMA-BUFs, then wrap the DMA-BUF into a wlr_buffer.
- Or by wrapping the EGL Wayland buffer into a special wlr_buffer
that doesn't implement any wlr_buffer_impl hook, and special-case
that buffer type in the renderer.
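A rough sketch of that second approach, with hypothetical names:

// Hypothetical: a wlr_buffer wrapping an EGL Wayland buffer resource,
// implementing only the destroy hook; the renderer would special-case it.
struct egl_wl_buffer {
    struct wlr_buffer base;
    struct wl_resource *resource;
};

static void egl_wl_buffer_destroy(struct wlr_buffer *wlr_buffer) {
    struct egl_wl_buffer *buffer =
        wl_container_of(wlr_buffer, buffer, base);
    free(buffer);
}

static const struct wlr_buffer_impl egl_wl_buffer_impl = {
    .destroy = egl_wl_buffer_destroy,
};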
Custom backends and renderers need to implement
wlr_backend_impl.get_buffer_caps and
wlr_renderer_impl.get_render_buffer_caps. They can't if enum
wlr_buffer_cap isn't made public.
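A minimal sketch of such a hook for a custom backend (the capability
chosen here is just an example):

// Sketch: a custom backend advertising DMA-BUF support with the
// now-public enum wlr_buffer_cap.
static uint32_t backend_get_buffer_caps(struct wlr_backend *backend) {
    return WLR_BUFFER_CAP_DMABUF;
}

static const struct wlr_backend_impl backend_impl = {
    // ... other hooks ...
    .get_buffer_caps = backend_get_buffer_caps,
};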
We never create an EGL context with the platform set to something
other than EGL_PLATFORM_GBM_KHR. Let's simplify wlr_egl_create by
taking a DRM FD instead of a (platform, remote_display) tuple.
This hides the internal details of creating an EGL context for a
specific device. This will allow us to transparently use the device
platform [1] when the time comes.
[1]: https://github.com/swaywm/wlroots/pull/2671
The wlr_egl functions are mostly used internally by the GLES2
renderer. Let's reduce our API surface a bit by hiding them. If
there are good use-cases for one of these, we can always make them
public again.
The functions mutating the current EGL context are not made private
because e.g. Wayfire uses them.
Add wlr_pixman_buffer_get_current_image for wlr_pixman_renderer.
Add wlr_gles2_buffer_get_current_fbo for wlr_gles2_renderer.
These let the compositor get at the FBO or the pixman_image_t: for
example, to attach a depth buffer to the FBO, or to skip pixman and
render to the pixman_image_t directly (e.g. with Qt's QPainter).
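A sketch of the depth-buffer case, using this commit's getter name
with an assumed signature:

// Sketch: fetch the current FBO while rendering and give it a depth buffer.
GLuint fbo = wlr_gles2_buffer_get_current_fbo(renderer);
GLuint depth_rb;
glGenRenderbuffers(1, &depth_rb);
glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
    GL_RENDERBUFFER, depth_rb);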
The types of buffers supported by a renderer can vary from one
renderer instance to another. For example, a renderer might only support
DMA-BUFs if the necessary EGL extensions are available.
Pass the wlr_renderer to get_buffer_caps so that the renderer can
perform such checks.
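A sketch of such a check inside the GLES2 renderer (helper and field
names are hypothetical):

static uint32_t gles2_get_render_buffer_caps(struct wlr_renderer *wlr_renderer) {
    struct wlr_gles2_renderer *renderer = gles2_get_renderer(wlr_renderer);
    uint32_t caps = 0;
    // Only advertise DMA-BUF support if the EGL extension is available.
    if (renderer->egl->has_dmabuf_import) {
        caps |= WLR_BUFFER_CAP_DMABUF;
    }
    return caps;
}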
Fixes: 982498fab3 ("render: introduce renderer_get_render_buffer_caps")
This new API allows buffer implementations to know when a user is
actively accessing the buffer's underlying storage. This is
important for the upcoming client-backed wlr_buffer implementation.
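A hedged sketch of what an implementation's hooks could look like
(the hook names and signatures are assumed, this log doesn't spell
them out):

struct shm_buffer {
    struct wlr_buffer base;
    void *data;
    uint32_t format;
    size_t stride;
};

// Hypothetical: hand out the mapping while access is active.
static bool shm_buffer_begin_data_ptr_access(struct wlr_buffer *wlr_buffer,
        void **data, uint32_t *format, size_t *stride) {
    struct shm_buffer *buffer =
        wl_container_of(wlr_buffer, buffer, base);
    *data = buffer->data;
    *format = buffer->format;
    *stride = buffer->stride;
    return true;
}

static void shm_buffer_end_data_ptr_access(struct wlr_buffer *wlr_buffer) {
    // Nothing to do: the mapping stays valid for the buffer's lifetime.
}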
This allows compositors to choose a wlr_buffer to render to. This
is a less awkward interface than having to call bind_buffer() before
and after begin() and end().
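Usage then becomes a sketch like:

// Sketch: render a frame straight into a chosen wlr_buffer.
if (!wlr_renderer_begin_with_buffer(renderer, buffer)) {
    return;
}
wlr_renderer_clear(renderer, (float[4]){0.0f, 0.0f, 0.0f, 1.0f});
wlr_renderer_end(renderer);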
Closes: https://github.com/swaywm/wlroots/issues/2618
This allows users to know the capabilities of the buffers that
will be allocated. The buffer capability is important to
know when negotiating buffer formats.
When importing a DMA-BUF wlr_buffer as a wlr_texture, the GLES2
renderer caches the result, in case the buffer is used for texturing
again in the future. When the wlr_texture is destroyed by the caller,
the wlr_buffer is unref'ed, but the wlr_gles2_texture is kept around.
This is fine because wlr_gles2_texture listens for wlr_buffer's destroy
event to avoid any use-after-free.
However, with this logic wlr_texture_destroy doesn't "really" destroy
the wlr_gles2_texture. It just decrements the wlr_buffer ref'count.
Each wlr_texture_destroy call must have a matching prior
wlr_texture_create_from_buffer call or the ref'counting will go south.
When destroying the renderer, we don't want to decrement any wlr_buffer
ref'count. Instead, we want to go through any cached wlr_gles2_texture
and destroy our GL state. So instead of calling wlr_texture_destroy, we
need to call our internal gles2_texture_destroy function.
Closes: https://github.com/swaywm/wlroots/issues/2941
Make it so wlr_gles2_texture is ref'counted (via wlr_buffer). This
is similar to how wlr_gles2_buffer and wlr_drm_fb work.
When creating a wlr_texture from a wlr_buffer, first check if we
already have a texture for the buffer. If so, increase the
wlr_buffer ref'count and make sure any changes made by an external
process are made visible (by invalidating the texture).
When destroying a wlr_texture created from a wlr_buffer, decrease
the ref'count, but keep the wlr_texture around in case the caller
uses it again. When the wlr_buffer is destroyed, cleanup the
wlr_texture.
This adds a function to create a wlr_texture from a wlr_buffer.
The main motivation for this is to allow the renderer to create a
single wlr_texture per wlr_buffer. This can avoid needless imports
by re-using existing textures.
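Assuming the function is exposed as wlr_texture_from_buffer, usage is
a sketch like:

// Sketch: repeated imports of the same buffer can hit the renderer's cache.
struct wlr_texture *texture = wlr_texture_from_buffer(renderer, buffer);
if (texture == NULL) {
    wlr_log(WLR_ERROR, "Failed to create texture from buffer");
}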
GL_RENDERER typically contains a human-readable string naming the
GPU, and EGL_VENDOR typically contains a human-readable string naming
the GPU manufacturer. EGL_DRIVER_NAME_EXT should give the name of
the driver in use.
References: e8baa0bf39
This function is only required because the DRM backend still needs
to perform multi-GPU magic under-the-hood. Remove the wlr_ prefix
to make it clear it's not a candidate for being made public.
Prior to this commit, WLR_RENDERER was only looked up when the
backend returned a DRM FD.
Make it so WLR_RENDERER is always looked up, so that running wlroots
on a system without a GPU and with WLR_RENDERER=gles2 fails as
expected instead of falling back to the Pixman renderer.
Make it clear GLES2 is being used. Before this commit, various
GL-related information was printed, but not an easy-to-find line
about which renderer is being picked up.
This env var forces the creation of a specific renderer. If no renderer
is specified, the function will try to create all of the renderers one
by one until one is created successfully.
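A sketch of that selection logic (the renderer constructors match
wlroots' public API, the surrounding function is hypothetical):

// Sketch: honor WLR_RENDERER if set, otherwise fall back in order.
static struct wlr_renderer *renderer_autocreate(int drm_fd) {
    const char *name = getenv("WLR_RENDERER");
    if (name != NULL) {
        if (strcmp(name, "gles2") == 0) {
            return wlr_gles2_renderer_create_with_drm_fd(drm_fd);
        } else if (strcmp(name, "pixman") == 0) {
            return wlr_pixman_renderer_create();
        }
        wlr_log(WLR_ERROR, "Invalid WLR_RENDERER value: '%s'", name);
        return NULL;
    }

    struct wlr_renderer *renderer = wlr_gles2_renderer_create_with_drm_fd(drm_fd);
    if (renderer == NULL) {
        renderer = wlr_pixman_renderer_create();
    }
    return renderer;
}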
It allocates in local main memory via shm_open, and provides a FD
to allow sharing with other processes.
This is suitable for software rendering under the Wayland and X11
backends.
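The allocation path boils down to a POSIX shm sketch (the shm name is
arbitrary):

// Sketch: back a buffer with anonymous shared memory and keep the FD
// around so it can be sent to other processes.
int fd = shm_open("/wlroots-buffer", O_RDWR | O_CREAT | O_EXCL, 0600);
if (fd < 0) {
    return -1;
}
shm_unlink("/wlroots-buffer"); // drop the name, keep the memory
if (ftruncate(fd, size) != 0) {
    close(fd);
    return -1;
}
void *data = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);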
The compositor shouldn't write to client buffers if the client
attaches a DMA-BUF to a wl_surface, then attaches a shm buffer.
Make gles2_texture_write_pixels return an error to prevent this
from happening.
The get_drm_fd function was made available in an internal header with a53ab146f. Move it
now to the public header so consumers opting in to the unstable interfaces can
make use of it.
PRIME support for buffer sharing has become mandatory since the renderer
rewrite. Make sure we check for the appropriate capabilities in backend,
allocator and renderer.
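On the DRM side the check boils down to a libdrm capability query; a
sketch:

// Sketch: require both PRIME import and export from the DRM device.
uint64_t cap = 0;
if (drmGetCap(drm_fd, DRM_CAP_PRIME, &cap) != 0 ||
        !(cap & DRM_PRIME_CAP_IMPORT) || !(cap & DRM_PRIME_CAP_EXPORT)) {
    wlr_log(WLR_ERROR, "PRIME import/export not supported by the device");
    return false;
}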
See also #2819.
All backends use the GBM platform. We can't use it to figure out
whether the DRM backend is used anymore.
Let's just try to always request a high-priority EGL context. Failing
to do so is not fatal.
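With EGL_IMG_context_priority (and assuming EGL_KHR_no_config_context),
the request is a sketch like:

// Sketch: request a high-priority context; the driver may hand back a
// lower priority instead, which is not fatal.
EGLint attribs[] = {
    EGL_CONTEXT_CLIENT_VERSION, 2,
    EGL_CONTEXT_PRIORITY_LEVEL_IMG, EGL_CONTEXT_PRIORITY_HIGH_IMG,
    EGL_NONE,
};
EGLContext ctx = eglCreateContext(display, EGL_NO_CONFIG_KHR,
    EGL_NO_CONTEXT, attribs);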
Split render/display setups have two separate devices: one display-only
with a primary node, and one render-only with a render node. However
in these cases the EGL implementation and the Wayland compositor will
advertise the display device instead of the render device [1]. The EGL
implementation will magically open the render device when the display
device is passed in.
So just pass the display device as if it were a render device. Maybe in
the future Mesa will advertise the render device instead and we'll be
able to remove this workaround.
[1]: https://gitlab.freedesktop.org/mesa/mesa/-/issues/4178
Compute only the transform matrix in the output. The projection matrix
will be calculated inside the gles2 renderer when we start rendering.
The goal is to help the pixman rendering process.
Mesa provides YUV shaders, and can import multi-planar YUV DMA-BUFs
as a single EGLImage. Remove the arbitrary limitation.
If the driver doesn't support importing YUV as a single EGLImage,
the import will fail and the result will be the same anyway.
Clamping texture coordinates prevents OpenGL from blending the left and
right edge (or top and bottom edge) when scaling textures with GL_LINEAR
filtering. This prevents visual artifacts like swaywm/sway#5809.
Per discussion on IRC, this behaviour is made default. Compositors that want
the wrapping behaviour (e.g. for tiled patterns) can override this by doing:
struct wlr_gles2_texture_attribs attribs;
wlr_gles2_texture_get_attribs(texture, &attribs);
glBindTexture(attribs.target, attribs.tex);
glTexParameteri(attribs.target, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(attribs.target, GL_TEXTURE_WRAP_T, GL_REPEAT);
glBindTexture(attribs.target, 0);