08:19 wlb: weston/main: Sergio Gómez * libweston/input: Remove redundant surface destroy listener in constraints https://gitlab.freedesktop.org/wayland/weston/commit/64da736d37a7 include/libweston/libweston.h libweston/input.c
08:19 wlb: weston/main: Sergio Gómez * libweston: Add view unmap listener to pointer constraints https://gitlab.freedesktop.org/wayland/weston/commit/e3079393c400 include/libweston/libweston.h libweston/compositor.c libweston/input.c
08:19 wlb: weston/main: Sergio Gómez * libweston: Add assert for valid confine region in maybe_warp_confined_pointer() https://gitlab.freedesktop.org/wayland/weston/commit/b6423e59d911 libweston/input.c
08:19 wlb: weston Merge request !1182 merged \o/ (Disable pointer constraint if last commit unmapped the view https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/1182)
08:19 wlb: weston Issue #721 closed \o/ (Weston crashes when quitting a fullscreen program after pointer confinement has been granted https://gitlab.freedesktop.org/wayland/weston/-/issues/721)
09:21 ukiran: pq, this is in regards to the Color representation protocol
09:21 ukiran: I'm curious to know if there is any development activity going on for this on the server side?
09:23 ukiran: and also, as per https://gitlab.freedesktop.org/pq/color-and-hdr/-/blob/main/doc/cicp_h273.md#wayland-protocols , I see that there are 2 protocol extensions to be developed: one for color representation and the other for color management, and which CICP code points are handled is mentioned in the MR.
09:27 ukiran: As per my understanding of the CICP H.273 doc, CP/TC/MC will be used to convert the incoming pixel buffer's color space to the output color space
09:28 ukiran: https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.2087-0-201510-I!!PDF-E.pdf
09:29 JEEB: ah that classic :P
09:29 JEEB: I used that to fix blu-ray subtitle decoding in FFmpeg (which does YCbCr to RGB conversion for some reason)
09:32 JEEB: then there's also https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-BT.2446-1-2021-PDF-E.pdf
09:33 ukiran: The input pixel format could be anything, but what matters is the color space. CICP code points help in mapping the color space, right?
09:33 JEEB: it describes what the samples mean, yes
09:34 ukiran: as per the Annex-1 diagram in my link, if the pixel format is RGB, then it does the colorspace mapping based on the input and output (Depends on the compositor selection)
09:35 ukiran: If it is YUV, we need to do the conversion first to RGB, then do the color space mapping
09:35 JEEB: yes if your compositor does not support YCbCr
09:35 JEEB: or well, doesn't support direct output of YCbCr
09:36 ukiran: Usually all primary planes support YCbCr.. in that case, for colorspace mapping, don't we need to convert to RGB?
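The conversion being asked about — limited-range ("tv") BT.709 Y'CbCr to R'G'B' — can be sketched in a few lines. This is a minimal illustration with 8-bit samples and the BT.709 luma coefficients, not code from Weston or Mesa; real compositors do this on the GPU or in display hardware:

```python
def ycbcr_to_rgb_bt709_limited(y, cb, cr):
    """Convert one 8-bit limited-range BT.709 Y'CbCr sample to R'G'B' in [0, 1]."""
    # Undo the limited ("tv") range quantization: Y' in [16, 235], Cb/Cr in [16, 240].
    yn = (y - 16) / 219.0
    pb = (cb - 128) / 224.0
    pr = (cr - 128) / 224.0
    # BT.709 luma coefficients: Kr = 0.2126, Kb = 0.0722 (Kg follows from Kr + Kg + Kb = 1).
    kr, kb = 0.2126, 0.0722
    r = yn + 2.0 * (1.0 - kr) * pr
    b = yn + 2.0 * (1.0 - kb) * pb
    g = (yn - kr * r - kb * b) / (1.0 - kr - kb)
    # Clamp to the representable range.
    return tuple(min(max(v, 0.0), 1.0) for v in (r, g, b))
```

The same structure works for BT.601 or BT.2020-NCL by swapping in their Kr/Kb — that choice is exactly what the MatrixCoefficients code point names. Note the result is still non-linear R'G'B', so gamut mapping would additionally require undoing the transfer function.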
09:37 JEEB: I know YUV is utilized in various APIs, but technically YUV was only in analog :D
09:37 JEEB: including FFmpeg's APIs which still call it yuvxyz
09:38 ukiran: Okay.
09:39 JEEB: anyways, for YCbCr passthrough if you now know of CICP/H.273 you then need to check which matrix, primaries and transfer the output thing supports
09:39 JEEB: otherwise if you don't check the device might support only BT.709, yet then you push BT.2020 YCbCr to it :D
09:41 ukiran: let me give an example: I have a requirement where the input's format is YUV420P10 in BT.709 colorspace. The monitor supports BT.2020, and the client wants to map this to the output's gamut.
09:42 JEEB: in CICP you need to define the holy triplet for both sides
09:42 JEEB: matrix, primaries and transfer
09:42 ukiran: What is the compositor's triplet in this case?
09:43 ukiran: i can read the triplet from the AVCodec's API from client
09:43 JEEB: compositors usually do two things: they either attempt to pass-through, or they do something like conversion to floaty linear RGB and then to the output triplet
09:44 JEEB: also you did not define the triplet for that "monitor supports BT.2020"
09:45 JEEB: that could be primaries for RGB, in which case you have not defined the transfer. it could also mean YCbCr where matrix and primaries are BT.2020-NCL, and in this case as well you have not defined the transfer.
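To make the "holy triplet" concrete: H.273 assigns small integers to each axis, and the two readings of "the monitor supports BT.2020" above differ only in the matrix (and both still leave the transfer open). A sketch using a hypothetical helper type — the code-point values are from the H.273 tables, but the class itself is illustrative, not any existing API:

```python
from dataclasses import dataclass

# A few H.273 code points (not an exhaustive table).
COLOUR_PRIMARIES = {1: "BT.709", 9: "BT.2020", 12: "P3 D65"}
TRANSFER_CHARACTERISTICS = {1: "BT.709", 13: "sRGB",
                            16: "PQ (SMPTE ST 2084)", 18: "HLG (ARIB STD-B67)"}
MATRIX_COEFFICIENTS = {0: "Identity (RGB)", 1: "BT.709", 6: "BT.601 (SMPTE 170M)",
                       9: "BT.2020-NCL", 10: "BT.2020-CL"}

@dataclass(frozen=True)
class CicpTriplet:
    primaries: int
    transfer: int
    matrix: int

    def describe(self):
        return (f"CP={COLOUR_PRIMARIES.get(self.primaries, self.primaries)}, "
                f"TC={TRANSFER_CHARACTERISTICS.get(self.transfer, self.transfer)}, "
                f"MC={MATRIX_COEFFICIENTS.get(self.matrix, self.matrix)}")

# The two readings of "monitor supports BT.2020", here assuming PQ as the transfer:
rgb_2020 = CicpTriplet(primaries=9, transfer=16, matrix=0)        # RGB output
ycbcr_ncl_2020 = CicpTriplet(primaries=9, transfer=16, matrix=9)  # BT.2020-NCL YCbCr
```

Either way, all three fields must be pinned down for both the content side and the output side before any conversion is well-defined.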
09:46 ukiran: could you please elaborate on "also you did not define the triplet for that "monitor supports BT.2020""
09:47 JEEB: I noted two common alternatives :P
09:47 JEEB: and how you left out transfer
09:49 JEEB: not sure if it helps your understanding, but what I commonly see with macOS and Windows regarding compositors you have the following: { application surface triplet } - { compositor that converts from input triplet to floaty linear scRGB when required } - { monitor output triplet }
09:52 ukiran: I'm trying to understand. What you mentioned are the stages involved from app to display, right?
09:53 ukiran: i know the application surface triplet. Ex: yuv420p10le(tv, bt709/bt709/arib-std-b67)
09:54 JEEB: and there you finally said out loud the transfer :P
09:54 pq: zubzub, Mesa needs to still do some throttling even without frame callbacks, to avoid needing an enormous pool of buffers to draw into. The sync makes any wl_buffer.release and equivalent events more likely to come back to Mesa before it spams a new buffer.
09:55 pq: ukiran, no-one that I know of yet.
09:55 JEEB: ukiran: please do not PM me
09:55 ukiran: JEEB, sure
09:56 ukiran: pq, we are planning to work on the server side
09:57 ukiran: so I'm trying to get a better understanding of CICP usage.
09:57 pq: zubzub, I didn't think the serial from wl_display.sync replies is good for anything... it comes from an era when serials had a flawed design.
10:01 pq: ukiran, JEEB, btw. DRM KMS is incapable of direct YCbCr output. There will always be full-range RGB in the middle, even if both framebuffers and monitor signal were YCbCr.
10:02 pq: they are likely different YCbCr, too
10:03 ukiran: pq, so the conversion is needed from YUV to RGB in the middle ?
10:03 JEEB: pq: at least I've looked at people who accidentally had YCbCr advertised in the output lol. good to know that that's not a feature
10:04 emersion: JEEB: it is, but it gets converted to RGB by the hw
10:05 JEEB: no I mean like the HDMI connection being YCbCr
10:05 emersion: yeah, that too
10:05 ukiran: emersion, though the planes/CRTC support YCbCr, there would be a conversion by the HW before display?
10:05 emersion: source YUV -> RGB -> YUV -> monitor
10:06 emersion: it's just that the RGB in the middle can't go away
10:06 emersion: i.e., *direct* YUV is not a thing
10:06 pq: JEEB, what do you mean "advertised"?
10:07 JEEB: metadata on the display connection
10:07 JEEB: HDMI/DP
10:07 pq: JEEB, HDMI signal can well be YCbCr with DRM KMS, and it won't tell us if it is.
10:07 JEEB: this was mostly people accidentally getting that with some AMD mesa drivers etc
10:08 pq: it's all negotiated between the driver and the monitor, not telling userspace what it picked
10:08 emersion: mesa shouldn't matter here...
10:08 pq: yeah, I mean kernel drivers
10:08 wlb: weston/main: Michael Olbrich * ivi-shell: abort if shell_surface_send_configure is called for non-ivi surfaces https://gitlab.freedesktop.org/wayland/weston/commit/1c02bdfb8ec7 ivi-shell/ivi-shell.c
10:08 wlb: weston/main: Michael Olbrich * ivi-layout: add surface type to the surface properties https://gitlab.freedesktop.org/wayland/weston/commit/ad2c014ef37e ivi-shell/ ivi-layout-export.h ivi-layout.c
10:08 wlb: weston/main: Michael Olbrich * ivi-shell: add input panel support https://gitlab.freedesktop.org/wayland/weston/commit/5d68a6c4b5c6 ivi-shell/ ivi-layout-export.h ivi-layout-private.h ivi-layout-shell.h ivi-layout.c ivi-shell.c ivi-shell.h
10:08 wlb: weston/main: Michael Olbrich * hmi-controller: add input panel support https://gitlab.freedesktop.org/wayland/weston/commit/3c6866088d4b ivi-shell/hmi-controller.c
10:08 wlb: weston Merge request !1158 merged \o/ (ivi-shell: input panel support https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/1158)
10:09 pq: JEEB, the metadata is also, well, a broken story.
10:10 pq: JEEB, currently if a KMS driver exposes "Colorspace" connector property, userspace can set that to RGB while the driver is sending YCbCr, and any other nonsense or correct combination of metadata vs. actual signal.
10:10 pq: Harry Wentland et al. are looking to fix that.
10:11 pq: the gist being that userspace cannot set it guaranteed right, because it makes a difference between RGB and YCbCr and userspace cannot know which one the driver picked.
10:12 JEEB: nice
10:12 pq: userspace also cannot control which one the driver picks
10:13 ukiran: pq, meaning even though we set a YCbCr pixel format on the plane, the HW does the conversion to RGB before it displays?
10:14 dottedmag: pq: is this RGB-in-the-middle pretty much unfixable at this stage, or can be fixed without too much pain?
10:14 pq: ukiran, yes, the KMS color pipeline will do *some* conversion to *some* RGB which may or may not be what you wanted.
10:14 pq: ukiran, and the KMS color pipeline will also then convert from RGB to YCbCr if that is what the kernel driver chooses to send to the monitor.
10:15 pq: dottedmag, it is a built-in assumption into how KMS color pipeline works. It's not unfixable, but fixing it probably means throwing all existing color related KMS properties out.
10:17 pq: ukiran, have you read https://gitlab.freedesktop.org/pq/color-and-hdr/-/blob/main/doc/pixels_color.md and https://hg2dc.com ?
10:18 pq: the latter will take days or more to go through, but I think it is very much worth it
10:21 pq: ukiran, you need to build an understanding of how color is encoded, and what those encodings are actually anchored to (the concept of the CIE 1931 2-degree observer). Then it becomes much more clear on what, how and when to convert and what you actually need to know to able to do any of that.
10:22 pq: it's not absolutely necessary if you want to just follow pre-made recipes, but it helps in making sense of everything
10:22 ukiran: pq, Sure. i will go over these links.
10:22 wlb: weston Merge request !1188 opened by Michael Olbrich (mol) ivi-shell: handle subsurfaces for mouse/touch activation https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/1188
10:22 pq: ukiran, OTOH, nothing you have said so far today seemed wrong to me. Vague perhaps, but not outright wrong.
10:23 pq: so that's good
10:24 ukiran: pq, thanks
10:24 pq: CICP only gives us words and enumerations that make it easier to talk about these things.
10:25 pq: The H.273 doc does include a ton of formulae, yes, but they are hard to follow and do not really explain too much.
10:26 ukiran: you are right.. but i can correlate the formulae mentioned in the H.273 doc with Figure-1 in https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.2087-0-201510-I!!PDF-E.pdf
10:26 pq: An important thing to remember, using CICP words, is that MatrixCoefficients could be BT.601, ColourPrimaries could be BT.709, and TransferCharacteristics could be from a third spec.
10:27 JEEB: yea
10:29 pq: ukiran, yes, you can. That's a good diagram, generic for many but not all conversions. It does not work for constant-luminance MatrixCoefficients encodings, and PQ and HLG systems may add more details.
10:31 ukiran: for constant luminance, there is another diagram Figure-2 in the same doc
10:31 pq: ukiran, it is also a conversion from input to output. It does not enable composition alone.
10:32 pq: right
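For the gamut leg of a BT.2087-style conversion (BT.709 → BT.2020), the linear-RGB step is a single 3×3 matrix, which can be derived from the published chromaticity coordinates rather than copied from the spec. A pure-Python sketch, assuming D65 white on both sides:

```python
def mat_vec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_inv(m):
    """Inverse of a 3x3 matrix via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[adj[r][s] / det for s in range(3)] for r in range(3)]

def rgb_to_xyz_matrix(rx, ry, gx, gy, bx, by, wx, wy):
    """Linear RGB -> CIE XYZ matrix from chromaticities, white at Y = 1."""
    def xyz(x, y):  # chromaticity (x, y) -> XYZ with Y = 1
        return [x / y, 1.0, (1.0 - x - y) / y]
    cols = [xyz(rx, ry), xyz(gx, gy), xyz(bx, by)]
    m = [[cols[j][i] for j in range(3)] for i in range(3)]
    # Scale each primary so that R = G = B = 1 lands on the white point.
    s = mat_vec(mat_inv(m), xyz(wx, wy))
    return [[m[i][j] * s[j] for j in range(3)] for i in range(3)]

BT709 = (0.640, 0.330, 0.300, 0.600, 0.150, 0.060)   # R, G, B chromaticities
BT2020 = (0.708, 0.292, 0.170, 0.797, 0.131, 0.046)
D65 = (0.3127, 0.3290)

# Linear BT.709 RGB -> linear BT.2020 RGB (CIE XYZ in the middle).
M_709_TO_2020 = mat_mul(mat_inv(rgb_to_xyz_matrix(*BT2020, *D65)),
                        rgb_to_xyz_matrix(*BT709, *D65))
```

Because both spaces share the D65 white point, each row of the result sums to 1 (RGB white maps to RGB white), and it should agree with the matrix printed in BT.2087 to its stated precision. This is only the conversion step; it does nothing about gamut or tone mapping.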
10:35 ukiran: pq, "it does not enable composition alone" --> meaning w.r.t format conversion ?
10:36 ukiran: normalized RGB colour signals E′R E′G E′B (Rec. 709) to
10:36 ukiran: linearly represented, normalized RGB colour signals ER EG EB (Rec. 2020)
10:37 pq: composition requires not only format conversion but also fitting the color gamut and dynamic range of the different inputs into a single reference
10:40 pq: If you want to do alpha-coverage blending, that needs to be done in optical color values (tristimulus).
10:42 pq: I guess one can get far by just doing the format conversions properly into a single common format, and composite there, but I feel it is eventually going to need something more.
10:42 pq: depends on what such "format conversion" actually means
10:44 ukiran: okay
10:44 pq: I didn't check what BT.2087 does with e.g. the color gamut. Color gamut can be handled in various different ways depending on what the goals are. This is what ICC calls "rendering intents".
10:45 pq: A pure format conversion would not do color gamut or tone mapping, which means that the original colorimetry is preserved. Whether you actually want that or not depends on where the result will be used.
10:46 pq: With entertainment like video and games, you probably want to do some kind of re-mapping there to make the picture look "better".
11:27 wlb: weston Merge request !1172 merged \o/ (libweston: mitigate race when destroying outputs https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/1172)
11:27 wlb: weston/main: Michael Olbrich * libweston: mitigate race when destroying outputs https://gitlab.freedesktop.org/wayland/weston/commit/638dee44ec38 libweston/compositor.c
11:33 wlb: weston/main: Michael Olbrich * ivi-shell: handle subsurfaces for mouse/touch activation https://gitlab.freedesktop.org/wayland/weston/commit/141943eb76e3 ivi-shell/ivi-shell.c
11:33 wlb: weston Merge request !1188 merged \o/ (ivi-shell: handle subsurfaces for mouse/touch activation https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/1188)
11:40 ukiran: pq, I'm trying to understand the steps involved in this color mapping w.r.t. CICP code points
11:41 ukiran: the client reads the CICP points from the video file, and the server exposes the supported points through ImageDescription as per the protocol.
11:42 ukiran: once the points match, the client will set the image description with the set_image_description() call.
11:47 ukiran: client video contains the triplets : yuv420p10le(tv, bt2020nc/bt2020/smpte2084, progressive)
11:47 ukiran: Monitor is normal monitor supports sRGB/Bt.709 color gamut
11:50 pq: ukiran, what is your question?
11:56 pq: ukiran, since your content is WCG HDR and your monitor is "normal" color gamut and SDR, you will necessarily also need some form of color gamut and tone mapping to squeeze the content into the tiny color volume of the monitor.
11:58 pq: a plain format conversion, even correctly done, likely won't look good enough
12:02 ukiran: my question is what are the steps involved in order to apply the CICP triplets here ?
12:03 pq: The first step is to understand what each element in each triplets means.
12:03 pq: They are not operations.
12:04 pq: Instead, they describe how the pixels were or need to be encoded.
12:05 pq: You need to decode the input image, apply any appropriate gamut and tone mapping, and then encode for the monitor.
12:05 pq: the video triplet describes the input encoding, and the monitor triplet describes the output encoding
12:06 ukiran: bt2020nc/bt2020/smpte2084 --> this actually tells how the pixels are encoded in the given file
12:07 pq: It gives the color encoding, yes. You need other things as well to actually interpret the binary data.
12:08 pq: pixel format, quantization range, chroma siting
12:09 ukiran: Okay.
12:10 pq: and if we are talking about actual *files* and not images in RAM, then those are compressed as well with codecs and whatnot, but that's irrelevant to Wayland.
12:11 pq: so far we only have Wayland protocol for carrying uncompressed 2D images
12:13 pq: width, height, stride, pixel format & modifier - these are the starting point for color decoding using quantization range, chroma siting (YUV only), MatrixCoefficients, TransferCharacteristics, ColourPrimaries.
12:14 pq: The result from those is an image in some arbitrary (your chosen) tristimulus triplets.
12:14 pq: Then you have to do color gamut and tone mapping, which are more of an art than science.
12:15 pq: If you are compositing, do that at this point.
12:15 pq: Then encode the result according to a similar set of monitor parameters.
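The decode → map → encode order described above can be sketched for a single grayscale sample, with PQ (SMPTE ST 2084) as the input transfer and sRGB as the output. The tone-mapping step is a deliberately naive scale-and-clip placeholder (the 203 cd/m² SDR reference white is an assumption borrowed from common HDR practice), since real gamut and tone mapping are, as noted, more art than science:

```python
def pq_eotf(e):
    """SMPTE ST 2084 (PQ) EOTF: non-linear signal in [0, 1] -> luminance in cd/m2."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = e ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

def srgb_oetf(l):
    """sRGB encoding: linear [0, 1] -> non-linear signal [0, 1]."""
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def naive_tone_map(nits, sdr_white=203.0):
    """Placeholder tone mapping: SDR reference white -> 1.0, anything brighter clipped."""
    return min(nits / sdr_white, 1.0)

def pq_sample_to_srgb(e):
    # decode (input EOTF) -> map -> encode (output encoding function)
    return srgb_oetf(naive_tone_map(pq_eotf(e)))
```

A full-color version would slot the gamut-mapping matrix between decode and encode, in linear light; the point here is only the ordering of the stages, not the quality of the mapping.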
12:16 ukiran: do we display the pixels by encoding them for the monitor?
12:17 pq: The video signal sent to a monitor is necessarily encoded in a way the monitor uses it.
12:18 pq: Otherwise you don't get what you want on screen.
12:18 pq: IOW, you must encode the image in the exact way the monitor expects it, or it will show up wrong.
12:19 ukiran: encoding means the pixel format encoding like yuv420, yuv422, argb2101010, etc.?
12:20 pq: encoding includes pixel format, quantization range, chroma siting, MatrixCoefficients, TransferCharacteristics, ColourPrimaries
12:20 pq: it's the exact same list, always, for content, monitors, whatever.
12:21 ukiran: okay.. i think i get what you say.
12:21 pq: and it *all* is needed to be known in order to understand what the colors are
12:23 ukiran: Thanks for the detailed information. will do some more home work to understand these things better
12:23 pq: You could also have "CIE 1931 XYZ for the 2-degree standard observer" but as a color encoding that is quite wasteful, hence not used for transmission or storage.
12:28 pq: ukiran, color-and-hdr repository also has a small glossary, FWIW.
12:40 zubzub: ugh SDL doesn't seem to use the frame callback to throttle and instead seems to use its own internal timer <o>
12:41 zubzub: the only way to make it throttle is to delay the display sync callback
12:41 pq: that may be wise, because apps written on SDL probably do not expect at all the frame callback semantics
12:42 zubzub: mfw https://i.kym-cdn.com/photos/images/newsfeed/001/525/401/ffd.jpg
12:42 pq: we've been talking that Mesa EGL should probably do the same if it could
12:43 zubzub: could do what? throttle on display sync or frame callback?
12:43 pq: to stop using frame callbacks for throttling
12:43 pq: because apps simply cannot handle eglSwapBuffers blocking indefinitely
12:44 pq: apps expect that swap interval 1 means a steady pace regardless of whether the window is visible or not.
12:44 zubzub: what if the compositor can't handle the rate at which the app is committing?
12:44 pq: there is a bug open about that in... Wayland, was it?
12:45 pq: zubzub, then the app dies on overflowing its socket. Which the wl_display.sync helps avoid.
12:46 pq: any way, a compositor handling a wl_surface commit should be very very quick
12:49 zubzub: ok looks like I just need to change my entire backpressure architecture, no biggy
12:50 pq: how did you do backpressure if not by controlling wl_buffer.release events?
12:50 zubzub: frame callback
12:50 pq: that's not backpressure, is it?
12:51 pq: frame callbacks are a different mechanism for clients that are prepared to make use of it.
12:51 zubzub: sure, but neither is buffer release or display sync
12:52 zubzub: I can run an SDL game at 60fps on a 30Hz monitor
12:52 pq: buffer release is, though?
12:52 zubzub: it might implicitly do that if the client allocates a fixed number of buffers
12:52 pq: Unfortunately, EGL, SDL and other "game toolkits" kind of things are not that.
12:52 pq: yes, that's the definition of backpressure, right?
12:53 zubzub: in my case the buffer is only needed for like 1ms, but the rest of the commit can take anything from 5ms to 50ms
12:53 pq: both EGL and Vulkan work on fixed size buffer pools AFAIK
12:55 zubzub: anyway so I have to queue buffer releases (not because I'm using them, but to keep the client from using them) if the client is committing too fast
12:56 zubzub: that all still sounds very much like abusing side effects for what the protocol was intended to do
12:57 pq: yeah, and frame callbacks are what the clients should listen to, but they would just break if it doesn't fire at a fixed rate, so they use a timer.
12:57 pq: The problem of not using frame callbacks is indeed the question of the frequency and phase of the timer. Presentation-time feedback could help there, if client libs used it.
12:57 zubzub: would be nice if I could just tell them at what rate (and by able to change the rate at will)
12:58 pq: yes, that's in the presentation-time feedback event
12:58 pq: completely yours to control
12:58 d_ed: allowing clients to use blocking buffers again safely would also work. There's work at https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/99
12:58 pq: good luck finding anyone who listens for it, though
12:59 pq: d_ed, thanks, that was probably the link I was thinking of.
13:04 kennylevinsen: well, there isn't technically anything wrong with SDL using its own timer, it's just silly if it wanted frame callbacks
13:04 d_ed: it does also do something with framecallbacks
13:05 zubzub: rendering @60FPS on a 30Hz monitor is imho a bit silly
13:05 kennylevinsen: committing at arbitrary times and arbitrary rates is fine, frame callbacks are just one way to voluntarily throttle. There isn't normally any backpressure there - a client can commit at 1000 fps, and all commits get accepted (and immediately superseded)
13:06 kennylevinsen: granted, for most applications, frame callbacks fit the bill and are superior to what they currently do, but it is voluntary
13:07 JEEB: mpv went through a few iterations on how it attempted to time video rendering on wayland
13:08 d_ed: Qt's situation is quite complex too to handle all clients, and it's still not perfect
13:08 d_ed: the fallback timers we need to have for the few bad clients end up interfering with our well behaved clients
13:09 zubzub: kennylevinsen: in my case it's commit latency. A commit is handled in like 8ms but there might be 50ms latency before it is shown on screen, I can't make the commit block, or the app would run at <20FPS instead of 60FPS
13:09 d_ed: SDL currently appears to do the following: if there's a frame, block locally and wait for a frame callback, but after an arbitrary timeout commit and proceed anyway
13:10 kennylevinsen: zubzub: hmm, most compositors do not really do anything on commit, only when the output needs to render
13:10 zubzub: I have a weird compositor
13:10 zubzub: network and video and stuff
13:10 daniels: that's an understatement
13:10 kennylevinsen: there must be room for those as well
13:10 kennylevinsen: but throttling rendering could still be a good idea
13:11 zubzub: anyway, I guess the buffer release abuse that pq suggested might make it work
13:12 pq: weeelll...
13:12 pq: it might also multiply the latency from draw to screen
13:13 pq: or would it...
13:13 pq: depends on how you throttle those releases vs. which image you use
13:14 zubzub: it would force the app to only start drawing on frame callback (because coincidentally only then does its other buffer become available)
13:17 pq: the pool size in Mesa EGL is IIRC 4
13:17 kennylevinsen: I have a few clients that would wait for buffer release, but I assume other clients might just allocate more buffers and get mad once the number went above N where N is some single-digit number...
13:17 pq: the max pool size should never be less than that, because then it might start limiting framerate in some conditions
13:18 pq: under the usual operating environment, that is
13:19 pq: so you would need to keep all 4 buffers reserved until you are ready to let the client proceed and release one of them
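The fixed-pool point above can be illustrated with a toy model (pure Python, no real Wayland objects): a client that blocks until a buffer is free is paced exactly by how fast the compositor hands buffers back, so withholding releases stalls it after pool-size commits:

```python
from collections import deque

class ToyBufferPool:
    """Toy model of wl_buffer backpressure; not real protocol code."""
    def __init__(self, size):
        self.free = deque(range(size))  # buffer ids the client may draw into
        self.held = deque()             # buffers committed but not yet released

    def client_commit(self):
        """Client draws into a free buffer, or returns None where it would block."""
        if not self.free:
            return None
        buf = self.free.popleft()
        self.held.append(buf)
        return buf

    def compositor_release(self):
        """Compositor sends wl_buffer.release for the oldest held buffer."""
        if self.held:
            self.free.append(self.held.popleft())

pool = ToyBufferPool(4)
committed = 0
for _ in range(10):            # compositor withholds every release...
    if pool.client_commit() is not None:
        committed += 1         # ...so the client lands exactly pool-size commits
```

Releasing one buffer per output frame would then pace the client at the output rate — and, as kennylevinsen warns, a client may instead just allocate more buffers, at which point this "backpressure" evaporates.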
13:19 zubzub: kennylevinsen: yeah that's also what I fear will happen
13:19 pq: anyway, you are trying to force clients behave in a way they have simply not been written to behave
13:20 pq: I doubt it can end up well.
13:20 pq: clients are allowed to be wasteful, maybe they have a good reason to not throttle, or maybe they don't
13:25 zubzub: sure, I'm just disappointed that a display sync is [ab]used as a compositor paint fence just because [most] compositors just happen to work that way
13:26 pq: what's a paint fence?
13:26 zubzub: like a wait for compositor to signal me it has painted what I committed
13:27 pq: no, it definitely does not do that
13:27 pq: The sync request is only used to ensure that the compositor has processed the previously sent requests, which is the definition of the sync request.
13:27 zubzub: well SDL seems to use it that way I noticed
13:28 pq: didn't you say SDL uses a timer?
13:28 zubzub: pq: yes that's what it's supposed to be used for
13:28 zubzub: It throttles down if the display sync is late
13:28 pq: and is it SDL or is it eglSwapBuffers from Mesa with swap interval 0?
13:28 zubzub: that I don't know
13:29 zubzub: I just happen to have tested with supertux and tuxracer :p
13:29 pq: Mesa does that, and the whole point there is to ensure the compositor has seen the previous requests.
13:30 pq: Because if the previous requests might have released some buffers, then Mesa would be sure to have seen the release events by the sync returns.
13:32 pq: There are expectations that wl_display.sync replies ASAP, unlike with wl_buffer.release or frame callback.
13:33 pq: So delaying sync replies is very much unexpected, and if you do that, you would break the games that could not use the frame callback to begin with.
13:33 zubzub: I'm not advocating or trying to do that! (not sure where I might have said that either)
13:34 pq: I mean, no-one uses sync as a "compositor paint fence".
13:35 zubzub: well implicitly they do no? If the app starts rendering as soon as there is a buffer and the sync callback is fired?
13:37 zubzub: aaanyway, I have to figure out how to deal with this first :)
13:39 pq: it's not a "compositor paint fence", it says nothing about compositor's painting
13:39 zubzub: I know, I worded it badly
13:39 pq: its only purpose is to make sure the compositor is keeping up with the commits
13:39 zubzub: yes I know that :)
13:40 pq: alright
13:42 zubzub: what I was trying to say is that just because a compositor can keep up with the rate of commits, doesn't mean it can keep up with rendering or state updates of an app.
13:44 zubzub: and that a sync & callback is used by apps because most compositors are single threaded and all these updates happen in the commit and as a side effect the sync callback is only fired *after* the compositor has done all the work
13:53 pq: a compositor is expected to *not* render on commit
13:54 pq: a compositor renders whenever it feels like, but definitely not as a part of handling a wl_surface.commit request
13:54 pq: a compositor is expected to discard frames when a client commits frames more often than frame callbacks would imply
13:57 pq: Also window state updates do not need to be put into use immediately on commit, no. They do need to be accumulated, though, so that when a compositor decides to use a new set of state, the set is complete.
13:57 zubzub: a frame being the render result of state that was committed (potentially too fast)
13:58 pq: so don't render that on commit, render it on your own time
13:59 pq: using the latest committed buffer and state you have
14:08 zubzub: ugh
14:13 zubzub: I think I'm just going to become a goat farmer or something
14:14 kennylevinsen: ... fast forward to having to throttle goat grass consumption
14:34 MrCooper: zubzub: forcing clients to wait for frame callbacks like that will prevent some of them from sustaining full frame rate
14:34 MrCooper: (coincidentally, Mesa's EGL Wayland platform code accidentally did just that until this week)
14:47 JEEB: oh right, one thing I noticed regarding CICP/H.273. there is actually a document regarding their usage :D
14:48 JEEB: https://www.itu.int/rec/T-REC-H.Sup19/en
15:03 pq: JEEB, oh right, the supplements. When I re-write well_known.rst, I have in mind to use spec links where one can also find the supplements and not just the spec itself.
16:16 DemiMarie: Is there a reason that Mesa on Wayland only supports 2 of the 4 Vulkan display sync options?
16:20 kennylevinsen: some of the others imply tearing, no?
16:20 kennylevinsen: and so it requires the tearing protocol
16:22 emersion: missing features in the wayland protocol, DemiMarie
16:24 DemiMarie: emersion: are those being worked on?
16:24 emersion: not actively
16:24 emersion: there are multiple opened MRs for presentation protocols
16:43 MrCooper: there is a tearing updates protocol now though, that should cover at least IMMEDIATE
16:44 emersion: right, and the MR for that is… stuck due to lack of reviews iirc?
17:17 daniels: is it? I thought I reviewed
21:37 emersion: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18268#note_1651731
21:37 emersion: i'm in no hurry to land this
22:06 zamundaaa[m]: emersion: on the topic of tearing... is there any progress on the kernel side?
22:06 zamundaaa[m]: All this stuff in userspace ultimately barely matters if atomic modesetting doesn't support it
22:06 emersion: i'm wondering whether the kernel side really makes sense
22:07 emersion: if the kernel API restricts to primary plane FB_ID and nothing else, then… there's no real benefit
22:07 emersion: restricts as in only allows the atomic commit to contain FB_ID
22:07 emersion: and no other prop
22:08 zamundaaa[m]: that would definitely not be good
22:09 emersion: my last email explains why it's tricky to allow other props even if they don't change
22:09 emersion: we'd need to rework the kernel internals quite a bit to fix it
22:22 zamundaaa[m]: The reason that the early fail is desired is that i915 doesn't do the proper checks itself, right? How hard would it be to fix that?
22:23 emersion: drivers would need to check each property one by one
22:24 emersion: and not forget to expand the check once a new property is added
22:25 zamundaaa[m]: sure, but then the API wouldn't be limited to what the most restricted hardware can do