06:45learn-wayland: can someone point me to a good resource for installing Wayland on Debian? I don't want any WM or DE with it, as I want to test my own build of sway!
07:07RAOF: learn-wayland: "sudo apt build-dep sway"? If you're building sway then you just need the build-dependencies of Sway.
07:16learn-wayland: RAOF: will installing Wayland on my X11 Debian system break something?
07:29YaLTeR[m]: didn't you hear? Wayland breaks everything!
07:29YaLTeR[m]: On a more serious note, no, it won't
07:31learn-wayland: YaLTeR[m]: hahaha thanks I'm new to this thing so please don't mind.
09:16kennylevinsen: ... I wonder if I'll ever get around to writing that article about how seats work...
10:30wlb: wayland/main: Simon Ser * util: convert macros to inline functions https://gitlab.freedesktop.org/wayland/wayland/commit/36cef8653fe5 src/wayland-util.c
10:30wlb: wayland Merge request !377 merged \o/ (util: convert macros to inline functions https://gitlab.freedesktop.org/wayland/wayland/-/merge_requests/377)
11:53wlb: weston Issue #897 closed \o/ (Some nested compositors (KDE) failed to work with weston when backend is RDP https://gitlab.freedesktop.org/wayland/weston/-/issues/897)
12:05wlb: weston Merge request !1482 closed (Provide changes to seat management in RDP backend)
12:27zzag: d_ed[m]: jadahl: emersion: drakulix[m]: do you think it's worth resuming governance meetings? imho they were quite productive
12:27emersion: sure, would be great
12:27zzag: ... and we've got a few protocols that are worth discussing
12:28pq: I've been wondering why people haven't requested more of these meetings
12:29zzag: heh some kde thing happened around that time iirc and people just forgot about it :D
12:39jadahl: zzag: which ones should be prioritized?
12:39jadahl: as talking points I mean
12:45zzag: I was thinking toplevel-icon, placement, and surface group transaction (this one should be the easiest because we're all on the same page; we just need to hash out the subsurface-related details to finally get it in). Do you have something you would like to discuss?
12:48d_ed[m]: I suggest you pick one thing at a time
12:52jadahl: yea, those three are vastly different, perhaps good to narrow it down to one general topic
12:52zzag: hmm, indeed.. what about the surface group transaction protocol?
12:53zzag: it should be the least controversial
12:53zzag: and it seems like a good starting point to resume the meetings
12:53jadahl: sure, we can try to figure out how much interaction with subsurfaces can be possible to specify
12:53zzag: (and also if we could simplify some of the subsurface stuff in the long term :D )
12:54Company: if you ever want to talk about fractional scaling, please invite me
12:55zzag: I'll send a doodle link to the wayland-devel mailing list later today or tomorrow
12:56zzag: would you be available next week?
13:26colinmarc: Hi! I'm implementing the color management draft protocol and I have a very specific question that's quite tricky to google. My question is: why is ST2084 PQ so... weird?
13:26colinmarc: When I get buffers from the app and linearize, I'm left with this number of absolute nits from 0 to 10,000. To blend that with SDR content, I have to rescale so that some reasonable value maps to `1.0`. I've seen the suggestion of 203 nits in BT.2408, but I've also seen 100 in a bunch of places, and 80. Which one should I use? Are there other ways around this issue? Thanks in advance!
13:37kennylevinsen: You will need to deal with the difference in target brightness one way or another. When outputting HDR, you would rescale SDR to a standard target brightness of choice, to not have a word processor incinerate your eyeballs with 4000+ nits of white.
13:41colinmarc: The way I'm doing blending right now is supposed to be independent of the output. (I'm blending into scRGB, with values above 1.0 clipped for SDR output). I guess scaling up SDR and scaling down HDR are the same operation in that sense - if I'm dividing by 203 and then multiplying back out by 203 on the other side.
13:42kennylevinsen: I plead the fifth and invoke pq, the color whisperer :)
13:45vaxry: 🎷🎶
13:46vaxry: oh wait wrong whisperer
13:50pq: colinmarc, heya
13:50pq: colinmarc, the first thing is, nits are a lie.
13:51pq: In physics, cd/m^2 a.k.a nit is a measure of absolute luminance. But when we are talking about video signals, nit is actually a unit of relative luminance. It is relative to the implied viewing environment, and also relative to the display capabilities (dynamic range).
13:51JEEB: colinmarc: 100 nits (!in dark env!) is the film SDR spec, but the thing is that if you clip HDR stuff at that, you are actually not capturing enough for what was visible as SDR graphics white in HDR. that is what was then, via experiments, noted to be ~203 nits in BT.2408 etc
13:52drakulix[m]: Sounds good to me, d_ed. I would additionally be interested in discussing fifo and commit-timing. (Also ext-screencopy, but I think only emersion is on board with that one.)
13:52JEEB: so you map SDR graphics white in HDR to SDR graphics white in SDR for the "simplest" clipping-style thing
13:53JEEB: it gets even more fun with MS bringing in 80 nits (!in dark env!) which is mentioned in sRGB spec
13:53JEEB: but I would probably ignore that
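(For the record, a minimal sketch of what "linearize PQ, then normalize graphics white to 1.0" looks like, assuming the BT.2408 203-nit graphics white discussed above; the constants are from SMPTE ST 2084:)

```c
#include <math.h>

/* ST 2084 (PQ) EOTF: non-linear signal in [0, 1] -> luminance in cd/m^2. */
static double pq_eotf(double e)
{
    const double m1 = 2610.0 / 16384.0;
    const double m2 = 2523.0 / 4096.0 * 128.0;
    const double c1 = 3424.0 / 4096.0;
    const double c2 = 2413.0 / 4096.0 * 32.0;
    const double c3 = 2392.0 / 4096.0 * 32.0;

    double p = pow(e, 1.0 / m2);
    double num = fmax(p - c1, 0.0);
    double den = c2 - c3 * p;

    return 10000.0 * pow(num / den, 1.0 / m1);
}

/* Normalize so the BT.2408 graphics white (203 cd/m^2) lands at 1.0,
 * giving an scRGB-style blending value as colinmarc describes. */
static double pq_to_blend_space(double e)
{
    return pq_eotf(e) / 203.0;
}
```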
13:54kennylevinsen: also note: while there may be different opinions of what the true SDR mastering brightness is, people are also used to being able to arbitrarily set the SDR target with whatever "brightness" dial is available to them
13:55pq: All of the 203, 100 and 80 nit values are correct *in their respective contexts*. Taking that value out of its context makes it meaningless.
13:55JEEB: yea
13:55Company: the great thing is that it's absolutely undefined so everyone just does whatever
13:55Company: and then blames everyone else for doing it wrong
13:55JEEB: I think it's not undefined since we have recs for SDR<->HDR conversions
13:55colinmarc: pq: so like, other transfer functions are relative to the screen brightness, and pq is *also* relative to the screen, just with an extra coefficient?
13:56pq: colinmarc, nothing that simple, no.
13:56Company: JEEB: dunno, there seems to be a surprising lack of testsuites
13:56JEEB: PQ is also related to env brightness, since you would not show the image at the same brightness at 500 nits and at 250 nits of env brightness
13:57JEEB: Company: that is true, there is no reference test suite like with codecs or containers
13:57pq: It seems like the differences between display luminance capability and environment brightness are compensated for by a non-linear adjustment related to "gamma". HLG OOTF is one such definition.
13:58pq: I mean, difference between display A in environment 1, and display B in environment 2.
13:59JEEB: Company: for example https://www.itu.int/pub/R-REP-BT.2446-1-2021
13:59pq: colinmarc, if you're looking for a straight answer "how do I", I don't have an answer. I haven't really looked into luminance mapping yet.
13:59Company: JEEB: the thing where this is gonna go wrong in the future is when color transforms are sometimes done by a compositor, sometimes by a toolkit, sometimes by an app, and sometimes by the hardware, depending on the state of everything
14:00Company: JEEB: ie when you have a fullscreen app and opening a context menu changes the brightness, because it switches from direct scanout to compositing in the compositor
14:00kennylevinsen:shakes fist at Apple TV's mapping of SDR to HDR, which takes away the ability to control SDR brightness independently of monitor HDR brightness
14:00JEEB: colinmarc: you can take a look at libplacebo and https://gitlab.com/standards/HDRTools/
14:00colinmarc: JEEB: so I did look at libplacebo, and I think it does the 203 nits thing.
14:00JEEB: yes
14:01pq: colinmarc, other TFs are indeed relative to the display in a straight manner, and both the reference display and reference viewing environment are usually defined in the respective or associated specs.
14:02JEEB: mpv/libplacebo has done it for ages. SDR graphics white in SDR is taken in as 100 nits, then boosted to what people perceive as graphics white in HDR
14:02colinmarc: Company: Right, I thought about that! If I pick 203, and then I switch to fullscreen exclusive (which in my compositor happens most of the time), I'm no longer touching the nonlinear value, so the brightness could change if I'm compositing vs not.
14:02Company: colinmarc: my assumption about that is that it will be solved as people encounter it
14:03JEEB: Company: but yea, given that there are N different ways of doing both gamut and tone mapping, and since none of those are specified other than "clip", you really can't verify anything other than a clip implementation
14:03Company: and then things will be agreed on and all the projects will start doing the same thing
14:04JEEB: but yea, would be nice if we had some sort of ref. test suite even for that :)
14:04colinmarc: pq: I really appreciate the help. Can you provide any context on why ST2084 is "absolute" if the absolute thing is a lie anyways?
14:04Company: the same thing is happening with graphics offload atm
14:04Company: where compositors accept YUV and GTK accepts YUV but they don't necessarily agree on the YUV primaries
14:04pq: colinmarc, PQ TF though, being "absolute", can encode luminances beyond what any display at hand can, or even the mastering display could, produce. You still have a reference viewing environment where the signal would apply as-is, if you were sending to the mastering display. HDR metadata gives you the mastering display capabilities, which gives you some sort of reference of how high luminance can even be
14:04JEEB: colinmarc: tl;dr "absolute within the reference viewing environment"
14:05pq: meaningful for the signal. Then there is also the standard diffuse or graphics white level associated with the PQ system. Using all of these, one is somehow supposed to be able to adapt the signal for the display at hand.
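(For reference, the static HDR metadata pq mentions is what the Linux DRM UAPI carries in `struct hdr_output_metadata` (drm_mode.h, per CTA-861-G); a sketch with purely illustrative values:)

```c
#include <drm/drm_mode.h>  /* path may vary; the struct is DRM UAPI */

/* Illustrative only: fill the CTA-861-G static metadata block that
 * describes the mastering display and content light levels. */
static void fill_example_metadata(struct hdr_output_metadata *m)
{
    struct hdr_metadata_infoframe *i = &m->hdmi_metadata_type1;

    m->metadata_type = 0;                      /* Static Metadata Type 1 */
    i->eotf = 2;                               /* SMPTE ST 2084 (PQ) */
    i->max_display_mastering_luminance = 1000; /* units of 1 cd/m^2 */
    i->min_display_mastering_luminance = 50;   /* units of 0.0001 cd/m^2 */
    i->max_cll = 1000;                         /* max content light level */
    i->max_fall = 400;                         /* max frame-average level */
}
```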
14:05JEEB: this is why apple actually utilizes the reference viewing environment metadata, although way too often I've just seen some hard-coded values there
14:05JEEB:goes check some of his winter snow clips
14:05colinmarc: but the SDR reference white isn't part of the metadata, is it?
14:06JEEB: no, it's specified now
14:07pq: Company, if you haven't noticed, people are working hard to define a KMS UAPI so that display controller (KMS) and compositor would perform tone mapping exactly the same way, so that switching between them would be seamless.
14:08pq: Company, OTOH an application should never attempt to switch between application and compositor tone mapping assuming it should be seamless. It won't.
14:09colinmarc: The application doesn't get to decide, right? The compositor can pop something over top like a notification or whatever
14:09JEEB: either https://www.itu.int/pub/R-REP-BT.2390 or https://www.itu.int/pub/R-REP-BT.2408 contain the HLG and PQ graphics white level values
14:10colinmarc: It's the latter, I believe.
14:10Company: pq: it will have to be for any sort of acceleration to work
14:10JEEB: I think it's probably mentioned in the former, but the latter was updated later so indeed I would start there
14:11Company: pq: like direct scanout, in particular with multiple overlay planes
14:12pq: colinmarc, I have no idea why the PQ system was designed like it was. I believe it was extracted from Dolby Vision, which is proprietary. I have enough problems trying to figure out just the "how". :-)
14:13pq: Company, you mean YUV-RGB conversion matrix? The primaries are yet another thing to get wrong.
14:14Company: pq: I mean more or less anything - but video playback is the obvious example
14:14colinmarc: It's such a rabbit hole, this stuff! I'm also dealing with YCbCr encoding in my case (for video streaming). For some reason, unlike with RGB <-> RGB conversions, RGB <-> YCbCr conversions are defined in terms of `R', G', B'` - with a transfer function baked in
14:16JEEB: the matrix defines the order of applying the TRC, I think, with most doing it one way and then one of the NCL/CL ones being different - I think? (possibly also ICtCp, but not sure)
14:16colinmarc: Yes, iiuc BT2020 defines a linear transformation as well, but that's not used anywhere really
14:16JEEB: yes, NCL is the common one
14:16Company: pq: and Gnome apps have this tendency to round corners, which means they can't be offloaded into a subsurface unless they are maximized/fullscreen or have black bars larger than the corners
14:16JEEB: anyways with RGB you don't have matrix coefficients :P
14:17JEEB: you only have primaries and transfer
14:17Company: pq: which means that the offload status changes as the player window is resized
14:17JEEB: or well, H.273-wise it's called "matrix coefficients: identity"
14:18colinmarc: JEEB: I'm using matrices to do RGB <-> RGB. You can premultiply the "to XYZ" and "from XYZ" matrix. Unless I'm getting something wrong?
14:18JEEB: I should not attempt to multitask
14:19colinmarc: Oh, I think I understand what you mean.
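(A sketch of the premultiplication colinmarc describes; the actual matrix entries, deliberately not filled in here, come from the chromaticity coordinates in the relevant specs, e.g. BT.709 and BT.2020:)

```c
/* Converting linear RGB between primaries: premultiply "source to XYZ"
 * with "XYZ to destination" to get one matrix for the whole conversion. */
typedef struct { double m[3][3]; } mat3;

static mat3 mat3_mul(mat3 a, mat3 b)
{
    mat3 r;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            r.m[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
        }
    return r;
}

/* mat3 src_to_xyz = ...;  derived from the source primaries    */
/* mat3 xyz_to_dst = ...;  derived from the destination primaries */
/* mat3 src_to_dst = mat3_mul(xyz_to_dst, src_to_xyz);            */
```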
14:22JEEB: colinmarc: I just recall the order as 0) handle chroma subsampling, 1) matrix coefficients, 2) transfer, 3) primaries, for going from some matrixed format to linear XYZ (except for the exceptions), although my brain could misremember these steps :P
14:22pq: that's correct
14:22pq: If your pipeline is a matrix-TF-matrix, then you can convert any electrical YCbCr and ICtCp to optical RGB (and XYZ with an additional matrix). Talking only about format conversion, not mapping.
14:23colinmarc: (there's one more parameter, which is whether your range is 16-235 or full)
14:23pq: quantization range handling needs an offset along with the first matrix
14:23JEEB: yea
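(A sketch of the offset-plus-matrix step pq describes, for 8-bit limited-range BT.709; the coefficients follow from Kr = 0.2126, Kb = 0.0722:)

```c
/* 8-bit limited ("MPEG") range BT.709 Y'CbCr -> non-linear R'G'B'.
 * Note the result still has the transfer function baked in; linearize
 * afterwards with the appropriate EOTF. */
static void ycbcr709_limited_to_rgb(int y, int cb, int cr,
                                    double *r, double *g, double *b)
{
    /* Offsets and scales: Y' spans [16, 235], Cb/Cr [16, 240] around 128. */
    double yn  = (y  -  16) / 219.0;
    double cbn = (cb - 128) / 224.0;
    double crn = (cr - 128) / 224.0;

    /* Inverse BT.709 matrix (derived from Kr = 0.2126, Kb = 0.0722). */
    *r = yn + 1.5748 * crn;
    *g = yn - 0.18732 * cbn - 0.46812 * crn;
    *b = yn + 1.8556 * cbn;
}
```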
14:23JEEB: the whole range stuff is something for which I rewrote the FFmpeg docs in 2020 or so
14:24colinmarc: It just occurred to me to ask - what is MPEG range in a 10-bit context?
14:24JEEB: https://lists.ffmpeg.org/doxygen/trunk/pixfmt_8h.html#a3da0bf691418bc22c4bcbe6583ad589a
14:24JEEB: I specified it in a bit-depth-independent manner, which is listed in BT.2100 as well
14:25JEEB: (BT.2100 actually finally defines the full range representation as well)
14:26colinmarc: found it in 2100, thanks!
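(The bit-depth-independent definition JEEB refers to works out to the following; for 10 bits this gives Y' in [64, 940] and Cb/Cr in [64, 960]:)

```c
/* Limited ("MPEG"/narrow) range code values for bit depth n >= 8:
 * the 8-bit bounds simply shift up with the extra bits. */
struct range { int min, max_luma, max_chroma; };

static struct range limited_range(int bit_depth)
{
    int shift = bit_depth - 8;
    struct range r = {
        .min        =  16 << shift,   /* 10-bit: 64  */
        .max_luma   = 235 << shift,   /* 10-bit: 940 */
        .max_chroma = 240 << shift,   /* 10-bit: 960 */
    };
    return r;
}
```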
14:26pq: anyway, you decode content to optical, map source graphics white to destination graphics white, and clip or map to destination display capability. And don't forget the RGB-RGB conversion to account for different primaries. That could be a start.
14:26JEEB: yup
14:26pq: oh, and then encoding for the destination display
14:27colinmarc: I have all of that, in theory. my math is wrong somewhere because I'm getting trippy colors. Currently debugging that.
14:28colinmarc: I was stuck for a long time on the 203 nits thing, though. It's just really counterintuitive
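(Putting pq's recipe together as one hedged sketch; every helper name here is a placeholder standing in for the pieces sketched earlier in this log, not any project's API:)

```c
#include <math.h>

/* Assumed helpers (placeholders): */
double source_eotf(double e);        /* e.g. the PQ EOTF sketched above  */
double dest_inverse_eotf(double o);  /* destination encoding             */
void   mat3_apply(const double m[3][3], const double in[3], double out[3]);

/* decode to optical, remap graphics white, convert primaries, clip,
 * re-encode - the order pq outlines above. */
void convert_pixel(const double src_to_dst[3][3],
                   const double in[3], double out[3],
                   double src_white, double dst_white)
{
    double lin[3], mapped[3];

    for (int i = 0; i < 3; i++)
        lin[i] = source_eotf(in[i]) * dst_white / src_white;

    mat3_apply(src_to_dst, lin, mapped);

    for (int i = 0; i < 3; i++) {
        mapped[i] = fmin(fmax(mapped[i], 0.0), 1.0); /* clip, or tone-map */
        out[i] = dest_inverse_eotf(mapped[i]);
    }
}
```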
14:28pq: in Weston we're only soon'ish entering the stage where we get to deal with all this, since we started with ICC workflow and the parametric image description stuff is brewing.
14:28JEEB: and the ICC stuff is the stuff which I consider much more complex than "just H.273 values" :D
14:28pq: well...
14:29pq: ICC is really "just trust the ICC file" and "use a battle-tested CMM to give you the pipeline"
14:30JEEB: colinmarc: it becomes more intuitive when you understand that previously, while actual screens were much brighter, video was limited to 100 nits. and graphics white for most people is not 100 nits (due to the ambient environment or otherwise).
14:30JEEB: so now that the brightness of coded content is no longer limited to 100 nits, the graphics white gets put somewhere closer to where it should be
14:31JEEB: many screens are by default around 200-250 nits IIRC? so it seems to match.
14:32JEEB: pq: well sure, solvable by "just throw it at an implementation" :)
14:33pq: JEEB, I think it really is the only solution, because these CMMs contain code to work around broken ICC files, which are abundant.
14:33JEEB: pretty much
14:33colinmarc: Where does one get a CMM?
14:34pq: colinmarc, pick a software; I picked LittleCMS for Weston.
14:34pq: mature product, comes as a library, and a compatible license
14:36pq: ArgyllCMS is another I think?
14:36colinmarc: will check it out! btw, in case it's interesting to anyone, I've been testing the color management protocol on a *very* rough mesa patch: https://gitlab.freedesktop.org/colinmarc/mesa/-/tree/vulkan-wsi-color-management?ref_type=heads
14:37colinmarc: my plan was to clean it up and open an MR next week, since I'll be away from my GPU for a week starting tomorrow.
14:37JEEB: yup
14:38pq: colinmarc, is that not based on the work JoshuaAshton and drakulix[m] did? Or did they implement only the simpler protocol?
14:39pq: colinmarc, please also mind the two Khronos issues I opened about EGL and Vulkan interop, thanks. :-)
14:40colinmarc: pq: I saw those. The vulkan side of this is very very thin! There's no way to wire up most of the protocol via vulkan, like the surface/output image descriptions.
14:40colinmarc: I didn't wire up the metadata yet, but there is an extension to do that with vulkan
14:40JEEB: also if someone is interested in the most recent version of H.273, as 2023-09 is for some reason not yet public I linked the final draft @ https://patchwork.ffmpeg.org/project/ffmpeg/patch/20240329003343.1099137-1-jeebjp@gmail.com/#84379
14:40pq: yes, EGL is very similar
14:40zamundaaa[m]: yeah Vulkan needs a new extension to get the preferred surface metadata
14:41zamundaaa[m]: Colin Marc: fwiw KWin's implementation also only clips HDR content to the SDR brightness on SDR screens, and in practice it's a good starting point... people don't often put HDR content on SDR screens *yet*
14:42colinmarc: My compositor is a little weird / easier to test with in that I only have to hand off the content to a video encoder. The actual displaying is happening on my macbook, where SDR/HDR is pretty well handled.
14:43JEEB: zamundaaa[m]: and Windows does that anyway, I think (at least as of Win10)
14:43colinmarc: It's not that bad to propose an extension to vulkan, right? Not that I'm volunteering...
14:43JEEB: colinmarc: btw if you want to yell in annoyance, check the difference in BT.709 transfer handling between macOS/QT and iOS
14:44zamundaaa[m]: Joshua wanted to do it at some point. I don't think anything happened with that yet, did it?
14:44zamundaaa[m]: JoshuaAshton?
14:44JEEB: macOS handles it in pre-BT.1886 manner, while iOS's handling was coded post-BT.1886 :D
14:44pq: colinmarc, FWIW, much of the Wayland color-management protocol is not even meant to be used via EGL or Vulkan, but the protocol should be enough to implement what there is of the EGL/Vulkan extensions.
14:44colinmarc: zamundaaa[m]: Sorry, I misunderstood at first, you're talking about mapping to SDR reference white and then clipping?
14:45colinmarc: pq: That's quite tricky, isn't it? I feel like most protocols are either meant to be used exclusively by the driver or exclusively by the client.
14:45JEEB: in scRGB that is "just" graphics white at 1.0, but yes :)
14:47pq: colinmarc, it is exclusive IIRC, yeah. Which is why an extension to get the preferred image description out is probably necessary.
14:48pq: colinmarc, but I would not expect to be able to use ICC files via Vulkan, or even custom primaries or arbitrary combinations of TF/primaries/etc.
14:48zamundaaa[m]: Colin Marc: there's no mapping yet. Compositing in KWin happens in nits, and when I encode for the display I apply the inverse EOTF and clip the result to 1.0
14:49zamundaaa[m]: Unless an ICC profile is set for the screen, that just means clamp((nits / sdrBrightness) ^ (1/2.2), 0.0, 1.0)
14:50colinmarc: Gotcha, yeah, that's more or less exactly what I have.
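(zamundaaa's formula above, transcribed directly; `sdr_brightness` is the configured SDR white level in nits:)

```c
#include <math.h>

/* Encode a linear luminance in nits for an SDR screen, assuming a pure
 * 2.2 gamma and clipping HDR above the SDR white level - i.e.
 * clamp((nits / sdrBrightness) ^ (1/2.2), 0.0, 1.0) from above. */
static double encode_sdr(double nits, double sdr_brightness)
{
    double v = pow(nits / sdr_brightness, 1.0 / 2.2);
    return fmin(fmax(v, 0.0), 1.0);
}
```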
14:51pq: till tomorrow \o.
14:51colinmarc: thank you for the help!!
16:03wlb: wayland-protocols Merge request !46 closed (Introduce extended-drag protocol)
17:03wlb: wayland Issue #451 opened by Sebastian Wick (swick) Seat-derived objects should become inert when the capability goes away https://gitlab.freedesktop.org/wayland/wayland/-/issues/451
21:30zamundaaa[m]: I just found this in kernel documentation about the drm plane SRC properties: "Devices that don’t support subpixel plane coordinates can ignore the fractional part."
21:32zamundaaa[m]: why can't drivers that don't support fractional coordinates just reject commits that try to use them?
21:32zamundaaa[m]: This makes the fractional part of the coordinates useless for generic userspace :/
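(For context, the SRC_* plane properties being discussed are 16.16 fixed-point in the KMS UAPI; a sketch of setting them in an atomic commit, with the property IDs assumed to have been resolved via drmModeObjectGetProperties() beforehand:)

```c
#include <math.h>
#include <stdint.h>
#include <xf86drmMode.h>

/* SRC_X/Y/W/H take 16.16 fixed-point source coordinates. */
#define TO_FIXED_16_16(v) ((uint64_t)llround((v) * 65536.0))

static int set_plane_src(drmModeAtomicReq *req, uint32_t plane_id,
                         uint32_t src_x_prop, uint32_t src_y_prop,
                         uint32_t src_w_prop, uint32_t src_h_prop,
                         double x, double y, double w, double h)
{
    if (drmModeAtomicAddProperty(req, plane_id, src_x_prop, TO_FIXED_16_16(x)) < 0 ||
        drmModeAtomicAddProperty(req, plane_id, src_y_prop, TO_FIXED_16_16(y)) < 0 ||
        drmModeAtomicAddProperty(req, plane_id, src_w_prop, TO_FIXED_16_16(w)) < 0 ||
        drmModeAtomicAddProperty(req, plane_id, src_h_prop, TO_FIXED_16_16(h)) < 0)
        return -1;
    return 0;
}
```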
21:38vsyrjala: i guess someone would need to figure out an answer to 1) how precise should such a check be? 2) how much userspace would we break by making it more strict?
21:40vsyrjala: also the rules regarding filtering across the edge are another mess. my take is that you want "clamp to edge" behaviour at the fb edges, not at the viewport edges. ie. filter should be allowed to fetch an arbitrary amount past the viewport edge
21:41vsyrjala: but i'm sure someone somewhere is already assuming nothing past the viewport edge is read
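(For comparison, the render-path analogue of clamping at the buffer edge is GL's CLAMP_TO_EDGE; there is no built-in wrap mode that clamps at an arbitrary viewport sub-rectangle, which is part of vsyrjala's point:)

```c
#include <GLES2/gl2.h>

/* With CLAMP_TO_EDGE, a bilinear fetch past the texture border repeats
 * the edge texel instead of reading neighbouring content - clamping at
 * the buffer (fb) edge. Clamping at a viewport sub-rectangle instead
 * would have to be done manually in the shader. */
static void use_clamp_to_edge(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```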
21:43zamundaaa[m]: I would expect nothing past the viewport edge to be read... but my main use case for the properties is really just supporting direct scanout with viewporter
21:43zamundaaa[m]: so I'll look up what viewporter does or doesn't specify for that
21:44vsyrjala: not allowing filtering across the edge will lead to quality loss for pan&scan type of stuff
21:46zamundaaa[m]: true. It would also not match what compositors generally do in rendering
21:46vsyrjala: do compositors even use wider than bilinear filtering?
21:49zamundaaa[m]: Not that I know of
21:49vsyrjala: i guess we could always add more properties to control the behaviour
21:49zamundaaa[m]: viewporter says content outside the source rectangle is ignored
21:51zamundaaa[m]: but the compositing path in KWin at least doesn't seem to have any special code for making sure content outside of it doesn't get used even in filtering...
21:51vsyrjala: "outside" is a bit ambiguous if it allows sub-pixel coordinates. same for sub-sampled formats
21:51zamundaaa[m]: yeah
22:07emersion: iirc there is an issue about this somewhere
22:08zamundaaa[m]: emersion: I thought so too but couldn't find it
22:08emersion: i recall someone tried using a buffer with 1px squares of different colors to achieve single pixel buffer behavior
22:08emersion: and then realized that worked on no compositor because of the filtering