No need to set GL_DEPTH_TEST: when rendering to a framebuffer
that has no depth buffer, depth testing always behaves as though
the test is disabled. The initial value for each capability, with
the exception of GL_DITHER, is GL_FALSE.
Based on four calls:
- wlr_render_timer_create - creates a timer which can be reused across
  frames on the same renderer
- wlr_renderer_begin_buffer_pass - now takes a timer so that backends can
  record when the rendering starts and finishes
- wlr_render_timer_get_time - should be called as late as possible so that
  queries can make their way back from the GPU
- wlr_render_timer_destroy - self-explanatory
The timer is exposed as an opaque `struct wlr_render_timer` so that
backends can store whatever they want in there.
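A minimal sketch of how a compositor frame could use these calls; how the timer is attached to the render pass and the unit returned by wlr_render_timer_get_time are assumptions here, not authoritative API details:

```c
// Sketch only: assumes the timer is passed via the buffer-pass options and
// that wlr_render_timer_get_time() reports a duration in nanoseconds.
struct wlr_render_timer *timer = wlr_render_timer_create(renderer);

struct wlr_render_pass *pass = wlr_renderer_begin_buffer_pass(renderer, buffer,
	&(struct wlr_buffer_pass_options){
		.timer = timer, // backend records when rendering starts/finishes
	});
// ... record rendering commands on the pass ...
wlr_render_pass_submit(pass);

// Query as late as possible (e.g. when the next frame starts) so that
// GPU-side queries have had time to come back.
int64_t duration_ns = wlr_render_timer_get_time(timer);

wlr_render_timer_destroy(timer);
```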
Some formats like sub-sampled YCbCr use a block of bytes to
store the color values for more than one pixel. Update our format
table to be able to handle such formats.
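The practical consequence is that the minimum stride must be computed from blocks rather than pixels. A rough sketch of the idea, with field names (block_width, bytes_per_block) standing in for whatever the format table actually stores:

```c
// Sketch: minimum stride of a row for a possibly block-based format. A block
// covers block_width x block_height pixels and occupies bytes_per_block
// bytes; for plain formats a block is simply one pixel.
static int32_t min_stride(const struct wlr_pixel_format_info *fmt,
		int32_t width) {
	int32_t blocks_per_row = (width + fmt->block_width - 1) / fmt->block_width;
	return blocks_per_row * fmt->bytes_per_block;
}
```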
Setting the GLESv2 parameter GL_PACK_ALIGNMENT to 1 ensures that the
stride of the glReadPixels output matches the value computed in
`pack_stride`. Since the default value of GL_PACK_ALIGNMENT is 4, this
does not make a difference under normal use, but without this patch the
stride can be incorrect, for example with RGB565 buffers and
screenshots of regions with odd width.
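For illustration, the relevant calls around the readback; the pack_stride computation shown is a sketch of the intent, not the exact code:

```c
// Rows are tightly packed: e.g. 3 RGB565 pixels occupy 6 bytes, which is not
// a multiple of the default GL_PACK_ALIGNMENT of 4.
uint32_t pack_stride = width * bytes_per_pixel; // sketch; derived from the format in practice

glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(x, y, width, height, fmt->gl_format, fmt->gl_type, data);
```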
We'll use this function from wlr_shm too.
Add some assertions, use int32_t (since the wire protocol uses that,
and we don't want to use 16-bit integers on exotic systems) and
switch the stride check to be overflow-safe.
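A sketch of what an overflow-safe check can look like, with illustrative names; dividing instead of multiplying keeps the comparison within int32_t:

```c
#include <stdbool.h>
#include <stdint.h>

// Illustrative only: validate shm parameters coming off the wire as int32_t.
static bool check_shm_params(int32_t width, int32_t height, int32_t stride,
		int32_t bytes_per_pixel) {
	if (width <= 0 || height <= 0 || stride <= 0 || bytes_per_pixel <= 0) {
		return false;
	}
	// "stride >= width * bytes_per_pixel", written so that the
	// multiplication cannot overflow.
	return stride / bytes_per_pixel >= width;
}
```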
Call glGetGraphicsResetStatusKHR in wlr_renderer_begin to figure
out when a GPU reset occurs. Destroy the renderer when this
happens (the OpenGL context is defunct).
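A sketch of the check, assuming the function pointer was loaded via eglGetProcAddress when GL_KHR_robustness is advertised (the procs/exts field names are illustrative):

```c
// Sketch: detect a GPU reset at the start of rendering.
if (renderer->exts.KHR_robustness) {
	GLenum status = renderer->procs.glGetGraphicsResetStatusKHR();
	if (status != GL_NO_ERROR) {
		wlr_log(WLR_ERROR, "GPU reset detected (status 0x%x)", status);
		// The GL context is defunct: abort this frame and let the
		// compositor destroy and recreate the renderer.
		return false;
	}
}
```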
Instead of having a C file with strings for each shader, move each
shader into its own file. Use a small POSIX shell script to convert
the files into C strings (can't wait for C23 #embed...).
The benefits from this are:
- Improved readability and syntax highlighting.
- Line numbers in shader compiler errors are easier to make sense of.
- Consistency with the Vulkan renderer.
- Shaders will become more complicated as we add color management
features.
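For illustration, the generated C source ends up looking roughly like this (the symbol name is hypothetical), and is consumed exactly like the old inline strings were:

```c
// Hypothetical output of the embedding script for quad.frag:
const char quad_frag_src[] =
	"precision mediump float;\n"
	"uniform vec4 color;\n"
	"void main() {\n"
	"	gl_FragColor = color;\n"
	"}\n";

// ...and later: glShaderSource(shader, 1, &src, NULL); with src = quad_frag_src
```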
This lets the renderer handle the wlr_buffer directly, just like it
does in texture_from_buffer. This also allows the renderer to batch
the rectangle updates, and update more than the damage region if
desirable (e.g. too many rects), so it can be more efficient.
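A sketch of how the GLES2 path can batch damage rectangles, falling back to one full upload when the region is too fragmented (the threshold, field names, and the assumption that GL_EXT_unpack_subimage is available are all illustrative):

```c
int rects_len = 0;
const pixman_box32_t *rects = pixman_region32_rectangles(damage, &rects_len);

glBindTexture(GL_TEXTURE_2D, texture->tex);
glPixelStorei(GL_UNPACK_ROW_LENGTH_EXT, stride / bytes_per_pixel);

if (rects_len > 64 /* arbitrary: one big upload beats many tiny ones */) {
	glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
		fmt->gl_format, fmt->gl_type, data);
} else {
	for (int i = 0; i < rects_len; i++) {
		glPixelStorei(GL_UNPACK_SKIP_PIXELS_EXT, rects[i].x1);
		glPixelStorei(GL_UNPACK_SKIP_ROWS_EXT, rects[i].y1);
		glTexSubImage2D(GL_TEXTURE_2D, 0, rects[i].x1, rects[i].y1,
			rects[i].x2 - rects[i].x1, rects[i].y2 - rects[i].y1,
			fmt->gl_format, fmt->gl_type, data);
	}
}

// Restore pixel-store defaults.
glPixelStorei(GL_UNPACK_ROW_LENGTH_EXT, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS_EXT, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS_EXT, 0);
```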
GL_ALPHA_BITS is the number of bits of the alpha channel of the
currently bound frame buffer's color buffer -- which is precisely
renderer->current_buffer->rbo. Thus, instead of binding the color
buffer and checking its properties, we can query the already bound
frame buffer.
Note that GL_IMPLEMENTATION_COLOR_READ_{FORMAT,TYPE} are also
properties of the frame buffer's color buffer.
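The query itself, with the destination buffer's framebuffer already bound:

```c
// GL_ALPHA_BITS describes the color attachment of the currently bound
// framebuffer, i.e. renderer->current_buffer->rbo.
GLint alpha_bits = 0;
glGetIntegerv(GL_ALPHA_BITS, &alpha_bits);
bool has_alpha = alpha_bits > 0;
```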
Instead of checking whether the wlr_egl dependencies are available
in the GLES2 code, introduce internal_features['egl'] and check
that field.
When updating the EGL dependency list, we no longer need to update
the GLES2 logic.
Whether a texture is opaque or not doesn't depend on the renderer
at all; it just depends on the source buffer. Instead of forcing
all renderers to implement wlr_texture_impl.is_opaque, let's move
this in common code and use the wlr_buffer format to know whether
a texture will be opaque.
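A sketch of the common-code check, assuming the pixel-format info table records whether a format has an alpha channel (helper and field names are assumptions):

```c
// Illustrative: a texture sourced from a buffer whose format carries no
// alpha channel is opaque, no matter which renderer created it.
static bool texture_is_opaque(uint32_t drm_format) {
	const struct wlr_pixel_format_info *info =
		drm_get_pixel_format_info(drm_format); // assumed lookup helper
	// Unknown formats are conservatively treated as non-opaque.
	return info != NULL && !info->has_alpha;
}
```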
Now that the DRM backend no longer depends on GBM, we can make it
optional. The GLES2 renderer still depends on it because of our EGL
device selection.
This is useful for compositors with their own renderers, and for
compositors using the Vulkan renderer.
These formats require EXT_texture_norm16, which in turn needs OpenGL
ES 3.1. The EXT_texture_norm16 extension does not support passing
gl_internalformat = GL_RGBA to glTexImage2D, as can be done for
formats available in OpenGL ES 2.0, so this commit adds a field to
wlr_gles2_pixel_format to provide a more specific internalformat
parameter to glTexImage2D.
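For illustration, a table entry for one of these formats might carry the extra field roughly like this (the exact struct layout is an assumption):

```c
// Sketch of a wlr_gles2_pixel_format entry for a 16-bit-per-channel format.
// GLES 2.0 formats can keep passing the generic GL_RGBA internalformat,
// while EXT_texture_norm16 formats need the sized GL_RGBA16_EXT one.
{
	.drm_format = DRM_FORMAT_ABGR16161616,
	.gl_internalformat = GL_RGBA16_EXT, // from EXT_texture_norm16
	.gl_format = GL_RGBA,
	.gl_type = GL_UNSIGNED_SHORT,
},
```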
commit 44e8451cd9 ("render/gles2: hide shm formats without GL
support") added the is_gles2_pixel_format_supported() function to
render/gles2/pixel_format.c, whose stated purpose is to "check whether
the renderer has the needed GL extensions to read a given pixel format."
It then used that function to filter the pixel formats returned by
get_gles2_shm_formats().
The result of this change is that RGB formats are no longer reported for
GL drivers that don't implement EXT_read_format_bgra, even when those
formats are supported for rendering (which they have to be for
wlr_gles2_renderer_create() to succeed). This is a pretty clear
regression, since wlr_renderer_init_wl_shm() fails when either
WL_SHM_FORMAT_ARGB8888 or WL_SHM_FORMAT_XRGB8888 is missing.
To fix the regression, change is_gles2_pixel_format_supported() to
accept all pixel formats that support rendering, regardless of whether
we can read them or not, and move the check for EXT_read_format_bgra
back into gles2_read_pixels(). (There's already a check for this
extension in gles2_preferred_read_format(), so we're not breaking any
abstraction that wasn't already broken.)
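A rough sketch of the moved check (the extension-table field name is an assumption):

```c
// In gles2_read_pixels(): BGRA readback needs EXT_read_format_bgra; fail the
// read here instead of hiding the format from the shm list entirely.
if (fmt->gl_format == GL_BGRA_EXT && !renderer->exts.EXT_read_format_bgra) {
	wlr_log(WLR_ERROR,
		"Cannot read pixels: missing GL_EXT_read_format_bgra extension");
	return false;
}
```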
Tested on the NVIDIA 495.46 proprietary driver, which doesn't support
EXT_read_format_bgra.
Fixes: 44e8451cd9 ("render/gles2: hide shm formats without GL support")
They are never used in practice, which makes all of our flag
handling effectively dead code. Also, APIs such as KMS don't
provide a good way to deal with the flags. Let's just fail the
DMA-BUF import when clients provide flags.
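A sketch of the rejection at import time (the wlr_dmabuf_attributes field name is an assumption):

```c
// Refuse DMA-BUF imports carrying flags (y-invert, interlaced, ...):
// nothing downstream, e.g. KMS, can honor them anyway.
if (attribs->flags != 0) {
	wlr_log(WLR_ERROR, "Cannot import DMA-BUF with flags 0x%" PRIX32,
		attribs->flags);
	return NULL;
}
```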
This allows callers to specify the operations they'll perform on
the returned data pointer. The motivations for this are:
- The upcoming Linux MAP_NOSIGBUS flag may only be usable on
read-only mappings.
- gbm_bo_map with GBM_BO_TRANSFER_READ hurts performance.
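A sketch of how a caller declares its intent when mapping a buffer; the flag and function names follow the pattern described above but should be treated as assumptions:

```c
// Declaring read-only access lets the buffer implementation pick a cheaper
// mapping, e.g. a MAP_NOSIGBUS-style read-only map or a GBM transfer that
// skips the direction it does not need.
void *data;
uint32_t format;
size_t stride;
if (!wlr_buffer_begin_data_ptr_access(buffer,
		WLR_BUFFER_DATA_PTR_ACCESS_READ, &data, &format, &stride)) {
	return false;
}
// ... read pixels out of data ...
wlr_buffer_end_data_ptr_access(buffer);
```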