See the spec at [1]. tl;dr EGL has terrible defaults: eglTerminate()
may have side-effects on completely unrelated EGLDisplay objects.
This extension allows us to opt-in to get the sane behavior:
eglTerminate() only frees our own EGLDisplay without affecting
others.
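For illustration, opting in looks roughly like this (a sketch
assuming EGL 1.5's eglGetPlatformDisplay and a gbm_device already at
hand; EGL_TRACK_REFERENCES_KHR is the attribute defined by the spec):

    // Check for EGL_KHR_display_reference in the client extension
    // string first, then request reference-counted behavior.
    const EGLAttrib attribs[] = {
        EGL_TRACK_REFERENCES_KHR, EGL_TRUE,
        EGL_NONE,
    };
    EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_GBM_KHR,
        gbm_device, attribs);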
[1]: https://registry.khronos.org/EGL/extensions/KHR/EGL_KHR_display_reference.txt
These are trivial wrappers around eglMakeCurrent and
eglGetCurrentContext. Compositors which need to call these
functions will also call other EGL or GL functions anyway. Let's
reduce our API surface a bit by making them private.
These formats require EXT_texture_norm16, which in turn needs OpenGL
ES 3.1. The EXT_texture_norm16 extension does not support passing
gl_internalformat = GL_RGBA to glTexImage2D, as can be done for
formats available in OpenGL ES 2.0, so this commit adds a field to
wlr_gles2_pixel_format to provide a more specific internalformat
parameter to glTexImage2D.
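As a sketch, a table entry for one of these formats could look like
this (gl_internalformat is the new field; the other fields follow the
existing wlr_gles2_pixel_format layout, values illustrative):

    {
        .drm_format = DRM_FORMAT_ABGR16161616,
        // Sized internalformat from EXT_texture_norm16; unlike the
        // ES 2.0 formats, plain GL_RGBA is not accepted here.
        .gl_internalformat = GL_RGBA16_EXT,
        .gl_format = GL_RGBA,
        .gl_type = GL_UNSIGNED_SHORT,
    },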
DMA-BUF import flags (such as Y_INVERT) are never used in practice,
which makes all of our flag handling effectively dead code. Also,
APIs such as KMS don't
provide a good way to deal with the flags. Let's just fail the
DMA-BUF import when clients provide flags.
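The import-time check is then as simple as (sketch; field and log
wording illustrative):

    // Reject DMA-BUF attributes carrying flags (y-invert, interlaced,
    // bottom-first) instead of silently mishandling them later.
    if (attribs->flags != 0) {
        wlr_log(WLR_ERROR, "DMA-BUF flags are not supported");
        return NULL;
    }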
This new renderer is implemented with the existing wlr_renderer API
(which is known to be sub-optimal for some operations). It's not
used by default, but users can opt-in by setting WLR_RENDERER=vulkan.
The renderer depends on VK_EXT_image_drm_format_modifier and
VK_EXT_physical_device_drm.
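For reference, checking a physical device for the two extensions uses
plain Vulkan calls, along these lines (a sketch, not the renderer's
actual device-selection code):

    uint32_t n = 0;
    vkEnumerateDeviceExtensionProperties(phdev, NULL, &n, NULL);
    VkExtensionProperties *props = calloc(n, sizeof(*props));
    vkEnumerateDeviceExtensionProperties(phdev, NULL, &n, props);
    bool has_mods = false, has_drm_props = false;
    for (uint32_t i = 0; i < n; i++) {
        if (strcmp(props[i].extensionName,
                VK_EXT_IMAGE_DRM_FORMAT_MODIFIER_EXTENSION_NAME) == 0) {
            has_mods = true;      // needed to import/export DMA-BUFs
        } else if (strcmp(props[i].extensionName,
                VK_EXT_PHYSICAL_DEVICE_DRM_EXTENSION_NAME) == 0) {
            has_drm_props = true; // maps the device to a DRM node
        }
    }
    free(props);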
Co-authored-by: Simon Ser <contact@emersion.fr>
Co-authored-by: Jan Beich <jbeich@FreeBSD.org>
The half-float formats depend on GL_OES_texture_half_float_linear,
not just the GL_OES_texture_half_float extension, because the latter
does not include support for linear magnification and minification filters.
The new 2101010 and 16161616F formats are only available on little-
endian builds, since their gl_types are larger than a byte and thus
endianness-dependent.
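For example, such an entry can simply be compiled out on big-endian
targets (sketch; the guard macro in the tree may differ):

    #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    {
        .drm_format = DRM_FORMAT_ABGR16161616F,
        .gl_format = GL_RGBA,
        // 16 bits per channel: the in-memory byte order of each
        // component now depends on host endianness.
        .gl_type = GL_HALF_FLOAT_OES,
    },
    #endif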
Khronos refers to extensions with their namespace as a prefix in
uppercase. Change our naming to align with Khronos conventions.
This also makes grepping easier.
We never create an EGL context with the platform set to something
other than EGL_PLATFORM_GBM_KHR. Let's simplify wlr_egl_create by
taking a DRM FD instead of a (platform, remote_display) tuple.
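In rough terms (a sketch; the real signatures carry more context than
shown):

    // Before: caller chooses the EGL platform and remote display.
    struct wlr_egl *wlr_egl_create(EGLenum platform, void *remote_display);
    // After: caller hands over a DRM FD, and wlr_egl sets up the GBM
    // device and EGLDisplay internally.
    struct wlr_egl *wlr_egl_create(int drm_fd);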
This hides the internal details of creating an EGL context for a
specific device. This will allow us to transparently use the device
platform [1] when the time comes.
[1]: https://github.com/swaywm/wlroots/pull/2671
The wlr_egl functions are mostly used internally by the GLES2
renderer. Let's reduce our API surface a bit by hiding them. If
there are good use-cases for one of these, we can always make them
public again.
The functions mutating the current EGL context are not made private
because e.g. Wayfire uses them.
This allows users to know the capabilities of the buffers that
will be allocated. These capabilities are important to know when
negotiating buffer formats.
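For reference, capabilities are a small bitmask along these lines
(sketch of wlroots' buffer capability flags):

    enum wlr_buffer_cap {
        WLR_BUFFER_CAP_DATA_PTR = 1 << 0, // CPU-mappable memory
        WLR_BUFFER_CAP_DMABUF = 1 << 1,   // exportable as DMA-BUF
        WLR_BUFFER_CAP_SHM = 1 << 2,      // backed by a shared-memory FD
    };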
When importing a DMA-BUF wlr_buffer as a wlr_texture, the GLES2
renderer caches the result, in case the buffer is used for texturing
again in the future. When the wlr_texture is destroyed by the caller,
the wlr_buffer is unref'ed, but the wlr_gles2_texture is kept around.
This is fine because wlr_gles2_texture listens for wlr_buffer's destroy
event to avoid any use-after-free.
However, with this logic wlr_texture_destroy doesn't "really" destroy
the wlr_gles2_texture. It just decrements the wlr_buffer ref'count.
Each wlr_texture_destroy call must have a matching prior
wlr_texture_create_from_buffer call or the ref'counting will go south.
When destroying the renderer, we don't want to decrement any wlr_buffer
ref'count. Instead, we want to go through any cached wlr_gles2_texture
and destroy our GL state. So instead of calling wlr_texture_destroy, we
need to call our internal gles2_texture_destroy function.
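On teardown this amounts to iterating the cached textures and tearing
down their GL state directly (sketch; list and field names
illustrative):

    // wlr_texture_destroy would only unref the backing wlr_buffer;
    // here we want to free the GL objects themselves.
    struct wlr_gles2_texture *texture, *tmp;
    wl_list_for_each_safe(texture, tmp, &renderer->textures, link) {
        gles2_texture_destroy(texture);
    }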
Closes: https://github.com/swaywm/wlroots/issues/2941
Make wlr_gles2_texture ref'counted (via wlr_buffer). This is
similar to how wlr_gles2_buffer and wlr_drm_fb work.
When creating a wlr_texture from a wlr_buffer, first check if we
already have a texture for the buffer. If so, increase the
wlr_buffer ref'count and make sure any changes made by an external
process are made visible (by invalidating the texture).
When destroying a wlr_texture created from a wlr_buffer, decrease
the ref'count, but keep the wlr_texture around in case the caller
uses it again. When the wlr_buffer is destroyed, cleanup the
wlr_texture.
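Put together, the lookup-or-import path looks roughly like this
(sketch; the helpers get_cached_texture and import_new_texture and
the invalidated flag are hypothetical names):

    struct wlr_texture *gles2_texture_from_buffer(
            struct wlr_renderer *wlr_renderer, struct wlr_buffer *buffer) {
        struct wlr_gles2_texture *texture = get_cached_texture(buffer);
        if (texture != NULL) {
            // Re-use: take a buffer reference and pick up any changes
            // an external process may have made to the contents.
            wlr_buffer_lock(buffer);
            texture->invalidated = true;
            return &texture->wlr_texture;
        }
        return import_new_texture(wlr_renderer, buffer);
    }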
This adds a function to create a wlr_texture from a wlr_buffer.
The main motivation for this is to allow the renderer to create a
single wlr_texture per wlr_buffer. This can avoid needless imports
by re-using existing textures.
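The entry point itself is small (sketch of the public signature):

    // Returns a (possibly shared) texture backed by the given buffer.
    struct wlr_texture *wlr_texture_from_buffer(
        struct wlr_renderer *renderer, struct wlr_buffer *buffer);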
This function is only required because the DRM backend still needs
to perform multi-GPU magic under-the-hood. Remove the wlr_ prefix
to make it clear it's not a candidate for being made public.
It allocates in local main memory via shm_open, and provides an FD
to allow sharing with other processes.
This is suitable for software rendering under the Wayland and X11
backends.
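The underlying technique is the classic POSIX one (minimal sketch,
error handling omitted; the allocator's actual naming scheme differs):

    // Allocate shareable main-memory storage and keep the FD around
    // so it can be passed to other processes.
    int fd = shm_open("/wlr-shm-example", O_RDWR | O_CREAT | O_EXCL, 0600);
    shm_unlink("/wlr-shm-example"); // no longer reachable by name
    ftruncate(fd, size);
    void *data = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);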
Compute only the transform matrix in the output. The projection matrix
is now calculated inside the gles2 renderer when rendering starts.
The goal is to make things easier for the pixman rendering path.
The swapchain maximum capacity is set to 4, so that we have enough room
for:
- A buffer currently displayed on screen
- A buffer queued for display (e.g. to KMS)
- A pending buffer that'll be queued next commit
- An additional pending buffer in case we want to invalidate the
currently pending one
Instead of requiring compositors to call wlr_texture_get_size each time
they want to access the texture's size, expose this information as
wlr_texture fields.
Remove glapi.sh code generation, replace it with hand-written loading
code that checks extension strings before calling eglGetProcAddress.
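The hand-written pattern looks like this (sketch; check_egl_ext is a
small substring helper over the extension string):

    const char *exts = eglQueryString(display, EGL_EXTENSIONS);
    if (check_egl_ext(exts, "EGL_KHR_image_base")) {
        // Only resolve the pointer when the extension is advertised;
        // eglGetProcAddress may return a non-NULL pointer even for
        // unsupported functions.
        egl->procs.eglCreateImageKHR = (PFNEGLCREATEIMAGEKHRPROC)
            eglGetProcAddress("eglCreateImageKHR");
    }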
The GLES2 renderer still uses global state because of:
- {PUSH,POP}_GLES2_DEBUG macros
- wlr_gles2_texture_from_* taking a wlr_egl instead of the renderer
We don't need our own enum for texture types. Instead we just use
GL_TEXTURE_{2D,EXTERNAL_OES}, which already describe the usage.
Also fixes a case where we were using GL_TEXTURE_2D when we should
not have: wl_drm buffers are always GL_TEXTURE_EXTERNAL_OES, no
matter whether they're RGB or any other format.
This commit matches sway's 2dc4978d8af326c310057ca8fd22a4c7f5d09335.
To help ensure a reproducible build (when debug info is disabled),
the meson build script now uses the -fmacro-prefix-map command line
argument supported by GCC to strip the build-path dependent bytes
of each __FILE__ string used by wlr_log and related functions.
A rather ugly algorithm is used to compute the relative path between
the build and source folders, because meson has no specific function
for this.
When the compiler does not support -fmacro-prefix-map, fall back
to shifting the start of each __FILE__ string by the length of the
relative path to the source directory.
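The fallback amounts to a constant pointer offset at each call site
(sketch; macro names illustrative):

    // WLR_REL_SRC_DIR is the computed relative path prefix that every
    // __FILE__ string starts with; skipping it leaves a path relative
    // to the source tree.
    #define WLR_FILENAME (__FILE__ + sizeof(WLR_REL_SRC_DIR) - 1)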
The deleted includes are redundant, because other headers will include
the necessary files. Additionally, they cause build failures, because
including EGL/egl.h or EGL/eglext.h directly, instead of through
wlr/render/egl.h or wlr/render/interface.h, will mean that
MESA_EGL_NO_X11_HEADERS will not have been defined, and so the EGL
headers will attempt to pull in unnecessary X11 headers that may not
exist on the system.
For the headers produced by glgen.sh, the includes couldn't simply be
deleted, because no other header would include the EGL headers. Neither
wlr/render/egl.h nor wlr/render/interface.h felt appropriate to include,
so I opted instead to copy the MESA_EGL_NO_X11_HEADERS definition before
the EGL includes.
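Concretely, the copied guard looks like this (sketch):

    // Keep EGL/eglplatform.h from pulling in Xlib types when no other
    // wlroots header has defined the guard yet.
    #ifndef MESA_EGL_NO_X11_HEADERS
    #define MESA_EGL_NO_X11_HEADERS
    #endif
    #include <EGL/egl.h>
    #include <EGL/eglext.h>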
We were assuming GL_BGRA_EXT was always supported.
We now check that it's supported for rendering, and fail if it isn't,
because this format is specified as "always supported" by the Wayland
protocol.
We also check if it's supported for reading pixels. A new preferred_read_format
function returns the preferred format that can be used to read pixels. This is
used by the screencopy protocol.
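GLES2 already exposes the implementation's preferred readback pair,
which such a hook can build on (sketch):

    // Ask the driver which (format, type) pair glReadPixels handles
    // best for the currently bound framebuffer.
    GLint format = 0, type = 0;
    glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &format);
    glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);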