07:12 pq: drakulix[m], yeah, but I believe you'd definitely want to offer end users a way to override that, or even not use that by default. The sRGB assumption when nothing else is said may give end users what they are used to seeing, for better or worse.
07:15 pq: LUTs also have precision problems, especially for curves that are used for inverse EOTF.
07:17 JEEB: yup
07:17 pq: emersion, swick[m], Weston has code to infer the linearizing curve from an arbitrary output ICC profile, thanks to vitaly. For matrix-shaper profiles, you can dissect the profile to get/drop the right curve directly.
07:19 pq: swick[m], I think blending in scRGB is just fine, particularly when using floating-point for the value storage. The only caveat is that blend-to-output conversion is more complicated.
07:20 pq: no matter what space you blend in, you should always take into account the actual useful part of the incoming and other color volumes
07:25 JEEB: yea I guess the positive of linear scRGB is that [0,1] is SDR, so if you want SDR output you may just clamp that range and encode as output transfer, while for HDR output you encode as output HDR transfer with [0,1] going to HDR graphics white (~203 nits), and values beyond that extending further :)
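A minimal sketch of the per-component mapping JEEB describes, assuming linear scRGB input where 1.0 is SDR graphics white and HDR graphics white sits at ~203 cd/m^2 (the BT.2408 figure); it ignores the BT.709-to-BT.2020 primary conversion that a full HDR10 output path also needs, and all names here are made up for illustration:

```rust
/// Map a linear scRGB component (1.0 == SDR graphics white) to a PQ-encoded
/// value, assuming HDR graphics white at ~203 cd/m^2.
fn scrgb_to_pq(linear: f32) -> f32 {
    const HDR_GRAPHICS_WHITE_NITS: f32 = 203.0;
    const PQ_PEAK_NITS: f32 = 10_000.0;

    // Scale so that scRGB 1.0 lands on HDR graphics white, then normalize
    // to the PQ reference range [0, 1] == [0, 10000] cd/m^2.
    let nits = linear.max(0.0) * HDR_GRAPHICS_WHITE_NITS;
    let y = (nits / PQ_PEAK_NITS).min(1.0);

    // SMPTE ST 2084 (PQ) inverse EOTF.
    const M1: f32 = 2610.0 / 16384.0;
    const M2: f32 = 2523.0 / 4096.0 * 128.0;
    const C1: f32 = 3424.0 / 4096.0;
    const C2: f32 = 2413.0 / 4096.0 * 32.0;
    const C3: f32 = 2392.0 / 4096.0 * 32.0;
    let yp = y.powf(M1);
    ((C1 + C2 * yp) / (1.0 + C3 * yp)).powf(M2)
}

/// For SDR output the same input is simply clamped to [0, 1] and then
/// encoded with the output transfer function (e.g. sRGB).
fn scrgb_to_sdr_linear(linear: f32) -> f32 {
    linear.clamp(0.0, 1.0)
}
```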
07:28 pq: emersion, light-linear = tristimulus = optical; and the other one, perceptually linear, might be close to (ignoring quantization): light-non-linear = electrical = encoded.
07:28 pq: drakulix[m], indeed, Wayland has never guaranteed any specific blending result for semi-transparent sub-surfaces. I will try hard to keep it that way.
07:37 pq: emersion, re: [0, 1]; in the end, the values to feed into a display panel are ratios: percentage of how bright to make each sub-pixel light source. Naturally you cannot go negative, because negative light does not exist, and you cannot go above 100% because you'd "fry the hardware" so it won't let you. From this point of view, greater than 1.0 values are too wild and negative values are just incomprehensible.
07:37 pq: All this changes when you start doing math on those "lamp power" percentages.
07:38 pq: negative values are hard, because traditionally color has been represented as unsigned integers. Use of floating point removes that problem. However, LUT processing elements are also popular, and a LUT simply cannot be applied outside of its defined input range which is implicitly [0, 1].
07:39 pq: So the problems with outside [0, 1] range values are practical. Math has no problem with them.
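A small illustration of the practical limitation pq mentions, using a hypothetical 1D LUT: the table is only defined on [0, 1], so anything outside that range has to be clamped (or otherwise handled) before the LUT can be applied, silently discarding extended-range values.

```rust
/// Apply a 1D LUT with linear interpolation. The LUT is only defined on
/// [0, 1], so out-of-range input is clamped first, which throws away any
/// extended-range (negative or >1.0) values.
fn apply_lut_1d(lut: &[f32], x: f32) -> f32 {
    let x = x.clamp(0.0, 1.0);
    let pos = x * (lut.len() - 1) as f32;
    let i = pos.floor() as usize;
    let frac = pos - i as f32;
    if i + 1 >= lut.len() {
        lut[lut.len() - 1]
    } else {
        lut[i] * (1.0 - frac) + lut[i + 1] * frac
    }
}
```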
07:44 pq: JEEB, well, clamping may not be a very good way of gamut mapping, but it's certainly easy - it can even happen accidentally. :-)
07:57 JEEB: ayup :)
07:57 JEEB: just looking at the current tone and gamut mapping rework merge request at libplacebo you can hone that dagger a *lot*
07:59 JEEB: pq: for the record if the results I got earlier still hold true, Windows on the desktop just does HDR white graphics white remap and then clamp when doing HDR to SDR :)
08:00 JEEB: definitely imperfect but at least a constant
08:00 JEEB: minus one "white" there, Friday and E_NO_COFFEE :)
08:00 pq: hehe
08:15 drakulix[m]: pq: So the issue with an EDID-based ICC profile instead of the standard sRGB as an output profile is that users expect “wrong” visuals?
08:15 drakulix[m]: What exactly is the problem here? Is it that using a correct profile might make sRGB content less vibrant on outputs with wider color spaces?
08:21 dottedmag: pq: Changing the blending algorithm every other frame sounds like a great way to annoy every application developer out there while staying technically correct.
08:23 dottedmag: I wonder what the semi-transparent surfaces are good for if the blending algorithm is not specified. Any application that cares about the result will have to avoid them.
08:23 dottedmag: *subsurfaces
08:23 emersion: any application which cares about the result needs to avoid a lot of things tbh
08:23 emersion: YUV buffers for instance
08:28 pq: drakulix[m], EDID is also a) too often more or less lies, and b) usually does not reflect monitor adjustments at all.
08:28 pq: at least historically
08:29 pq: if you have a monitor certification system and the monitor is certified, then maybe they also check EDID, who knows
08:30 pq: drakulix[m], yes, making traditional content less "vibrant" is something I would expect people to complain about.
08:31 drakulix[m]: So is there any value in using that over the built-in sRGB profile?
08:31 drakulix[m]: Or should I expect every user who cares about color accuracy to just supply their own ICC file anyway?
08:32 pq: drakulix[m], I'd offer "assume sRGB" and "trust EDID" as options, and end users can pick which they like better. Providing your own ICC file would be a third option.
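As a sketch of the option set pq suggests, assuming nothing about how a particular compositor structures its configuration (all names invented here):

```rust
/// How the compositor decides an output's color profile.
enum OutputProfileSource {
    /// Assume the display is sRGB; matches what users are used to seeing.
    AssumeSrgb,
    /// Build a profile from the EDID color characteristics.
    TrustEdid,
    /// Use an ICC profile supplied by the user (e.g. from a measurement).
    UserIcc(std::path::PathBuf),
}
```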
08:34 drakulix[m]: Great. Thanks for all the feedback! Still trying to wrap my head around all that stuff and it's great to be able to check what I think I have understood so far. 👍
08:34 pq: another problem with EDID is that it describes only one set of color parameters, roughly, but a monitor can be driven in many different "modes" based on the metadata sent to it. Which mode does the EDID apply to? I don't know.
08:35 pq: there is metadata aside from the HDR metadata as well, which may change how the monitor works
08:36 pq: maybe that is specified in HDMI and DP specs, maybe vendors implement that...
08:36 pq: who knows
08:40 pq: dottedmag, I have never implied changing the algorithm every frame. Compositors can have different implementations.
08:44 pq: drakulix[m], also ICC is pretty much limited to SDR so far, so for HDR displays one may need to offer a more complicated UI for anyone who wants to fine-tune. With HDR the sRGB option is replaced by assumptions provided by the used video signalling mode, like BT.2100 PQ or HLG.
08:45 pq: ICC is working towards HDR, but it might be a while before there are usable implementations for it available
08:46 drakulix[m]: Right. Do the color characteristics from the EDID play any role here or would I just take the luminance values from the HDR metadata block?
08:46 drakulix[m]: I am still having trouble imagining how I would build an output profile for an HDR monitor with LittleCMS.
08:46 pq: I don't really know either. :-)
08:47 drakulix[m]: Oh perfect. Well maybe we can figure something out next week then. ^^
08:48 pq: I know they got three monitors for the hackfest, but will anyone bring a spectrophotometer so we can actually check?
08:49 pq: ...spectroradiometer?
08:51 pq: oh, spectrocolorimeter
08:52 drakulix[m]: probably a better question for the hackfest channel.
08:53 pq: there is a channel?
08:54 drakulix[m]: yeah. I think it was linked in the last mail?
08:55 pq: oh, matrix
09:05 dottedmag: pq: I was kidding
09:18 wlb: weston Issue #712 closed \o/ (client: weston-simple-dmabuf-v4l2 does not translate planar video formats correctly from V4L2 to DRM https://gitlab.freedesktop.org/wayland/weston/-/issues/712)
10:01 pq: drakulix[m], would you happen to have a pointer on how to create a Wayland client in Rust that uses compositor-specific surface role extensions? What crates to combine to not need to reinvent a whole UI drawing toolkit?
10:02 drakulix[m]: Do you have a toolkit in mind, that you want to be using?
10:03 pq: no, but I'd probably prefer something Rust-native
10:04 pq: I've been eyeing iced, but haven't stared at it long enough to understand how to bypass the platform abstractions to get my hands into Wayland objects.
10:05 drakulix[m]: The big problem is that most of them are based on winit, which handles xdg-surfaces exclusively.
10:05 drakulix[m]: However, most (if not all) GL abstractions these days go through the raw-window-handle crate, which would allow you to use any wl_surface.
10:05 pq: right
10:05 drakulix[m]: That leaves input of course as a problem. Few toolkits can be driven completely externally. egui and iced are definitely options, I have successfully integrated both into my compositor.
10:07 pq: Would you have a link to some sources I could study in the long run?
10:07 drakulix[m]: Sure. gimme a sec
10:29 drakulix[m]: First I would suggest to go with sctk (or smithay client-toolkit) to avoid doing everything yourself: https://github.com/Smithay/client-toolkit/tree/master... (full message at <https://matrix.org/_matrix/media/v3/download/matrix.org/PpvKvHJgPMKbhUsprriWZiHC>)
10:33 pq: cool, thanks!
10:53 pq: For anyone who simply wants to build a HDR compositor without the hassle of ICC color management, please have a look at https://gitlab.freedesktop.org/pq/color-and-hdr/-/blob/main/doc/hdr10.md .
10:55 JEEB: i like how cta shop actually needs no registration, you just input a@b.tld and trollgatan 13, sverige as address info and bob's your uncle
10:56 pq: heh, well, how'd they even check that
10:56 JEEB: it just redirects at checkout so no checks
10:57 JEEB: still annoying of course so I'm happy people then archive the docs on archive.org
10:57 JEEB: anyways, more on topic: nice text, I would also just mention the RGB case as well, unless it was already mentioned there
10:58 JEEB: since many apps deal with hdr10 (pq, bt2020) but in rgb
10:58 pq: that's what I didn't quite understand: can it be HDR10 if it is RGB?
10:58 JEEB: at least that's what the marketing departments seem to call it :D
10:58 pq: you can't sub-sample RGB...
10:58 JEEB: it seems to be pq plus bt2020 and the static metadata
10:59 JEEB: for video media it's 4:2:0 or so o' course
10:59 pq: anyway, it's not a big difference: RGB means you need to implement less of color-representation and nothing else - that's even mentioned there.
11:00 JEEB: yup
11:01 JEEB: i should attempt to add the opengl extension into libplacebo, although iirc the only things showing the bt2020 and pq extension were android devices
11:02 pq: not even Windows? Is Windows, where you tried(?) it, only scRGB then?
11:02 JEEB: on windows it's the d3d11 api usually
11:03 JEEB: i haven't tried the gl side stuff :)
11:03 pq: ah, of course
11:03 JEEB: both scrgb and hdr10 capabilities are in the d3d11 api so if driver vendor maps to those they should work
11:07 JEEB: in libplacebo I did so that HDR content -> HDR10, non-HDR wide gamut -> scRGB
11:08 pq: How does scRGB work on Windows? Is it simply hard-clipped to sRGB for traditional monitors and BT.2020 for WCG/HDR monitors, and then let the monitor do whatever? Any metadata?
11:09 JEEB: yea, SDR output from compositor is clipped/clamped, and if the output is wide gamut or HDR, it gets possibly adjusted to HDR graphics white and then clipped/clamped according to output (I *think*)
11:11 pq: so they basically expect the app to know to use only the usable portion of the scRGB space to begin with?
11:12 pq: I mean usable portion on the specific monitor
11:13 JEEB: not really. you can do your own tone or gamut mapping according to your screen's capabilities in either output mode (in order to f.ex. minimize the monitor's own remap)
11:14 JEEB: scRGB is just simpler if you are thinking in terms of SDR content, since you have the common [0,1] range, and you then extend beyond that range if needed
11:14 pq: is it also "you must do" that?
11:14 JEEB: no
11:15 pq: I'm looking for a better color gamut mapping than clipping, and that requires knowing the boundaries of the color volume, but scRGB in itself is infinite.
11:16 JEEB: you end up having to look up the actually utilized area
11:16 pq: How does Windows know what the boundaries on app content are with scRGB? Does Windows simply not do anything better than clipping?
11:17 JEEB: in my experience the most it does is handle the difference between SDR graphics white and HDR graphics white, and then clips
11:17 pq: ok
11:17 swick[m]: mh, that explains why they get away with scRGB
11:17 swick[m]: without extra metadata
11:17 pq: so if you want any "proper" color gamut mapping, the app needs to do it itself.
11:18 JEEB: yea, which is why libplacebo now has a whole rework MR going on :D
11:18 pq: cool
11:19 JEEB: https://code.videolan.org/videolan/libplacebo/-/merge_requests/441/
11:19 pq: This means we can make assumptions about Windows apps that use scRGB: that they target the color volume the OS tells them to... right?
11:20 swick[m]: or they just use an arbitrary color volume in there and accept the clipping
11:21 JEEB: as an scRGB user I target whatever I target. it's a simple extension for game developers, and for libplacebo I utilize it if the content is not HDR but wider gamut. that way at least the compositor gets the wide gamut content, and if in theory they support Display P3 or whatever in the future, that can get shown better than on an sRGB device.
11:21 JEEB: so yes, in my use cases I accept the clip
11:22 JEEB: I hope this makes sense
11:23 pq: I kind of understand that, but I wonder what it means to a Wayland compositor.
11:23 pq: Do we need to leave scRGB unmanaged, then?
11:32 swick[m]: For compatibility with windows?
11:33 pq: yeah, where else would we get a precedent of how scRGB should be handled?
11:33 swick[m]: wine could set the target color volume to the display capabilities and then the behavior should be identical to windows, no?
11:33 pq: it would help porting games to Linux I suppose, and help wine-wayland
11:35 pq: I don't think that would result in identical display, because we'd change clipping to proper gamut mapping, but I also don't know if that makes a significant difference.
11:35 pq: proper... as proper as a compositor bothers to implement
11:36 swick[m]: we'd clip the colors that fall outside the target color volume, and would not have to gamut map colors inside it, because the target is the same as the output color volume
11:36 pq: oh right
11:39 pq: yeah, that should work
12:46 pq: Talking about EDID, this is what my few-years-old monitor says: Desired content max luminance: 115 (603.666 cd/m^2), Desired content max frame-average luminance: 90 (351.250 cd/m^2)
12:47 pq: that's quite a bright MaxFALL, leaving the HDR headroom at less than 1x (or 2x, depending on how you want to think of it).
12:48 pq: If you rendered content aiming at those values, I don't think it would be much HDR.
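For the numbers pq quotes, the headroom works out roughly as follows; a quick sanity-check calculation, nothing more:

```rust
fn main() {
    // Values from the EDID HDR static metadata block quoted above.
    let max_luminance = 603.666_f32; // cd/m^2, desired content max luminance
    let max_fall = 351.250_f32;      // cd/m^2, desired content max frame-average luminance

    // Peak over frame-average: ~1.7x, i.e. less than 2x total,
    // or less than 1x of *extra* room above the frame-average level.
    let headroom = max_luminance / max_fall;
    println!("headroom: {:.2}x", headroom); // ~1.72x
}
```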
12:51 pq: emersion, btw. did you get your 3D LUT fully working?
13:11 emersion: pq, yup!
14:27 JoshuaAshton: pq: For scRGB. What we do is just translate from 709 -> 2020 with a CTM. We then just clip it at that point going into the next parts of the color pipeline. 2020 is so wide and at the range of perception that you don't really need to do anything better for mapping scRGB content.
14:28 JoshuaAshton: You can then do whatever 2020->Native Gamut Mapping you do for typical HDR10/PQ content
14:30 JoshuaAshton: We have a Shaper + 3D LUT to handle all of our gamut remapping stuff (we actually have the shaper TF go from scRGB -> PQ encoding at that point to go into that part which is funny)
14:36 JoshuaAshton: *clip it as in we clip negatives off, sorry just clarifying there
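What JoshuaAshton describes could look roughly like this sketch; the matrix is the standard BT.709-to-BT.2020 primary conversion (rounded values as given in ITU-R BT.2087), and "clip" here means only removing negative components, as clarified above. Function and variable names are invented for illustration:

```rust
/// BT.709 -> BT.2020 primary conversion for linear RGB (ITU-R BT.2087).
const BT709_TO_BT2020: [[f32; 3]; 3] = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
];

/// Convert linear scRGB (BT.709 primaries, extended range) to linear BT.2020.
/// Colors outside BT.2020 come out with negative components; clipping those
/// off effectively clamps to the BT.2020 gamut.
fn scrgb_linear_to_bt2020_linear(scrgb: [f32; 3]) -> [f32; 3] {
    let mut out = [0.0f32; 3];
    for (i, row) in BT709_TO_BT2020.iter().enumerate() {
        let v = row[0] * scrgb[0] + row[1] * scrgb[1] + row[2] * scrgb[2];
        out[i] = v.max(0.0); // clip negatives only
    }
    out
}
```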
14:42 JoshuaAshton: ofc you can also use the HDR metadata to potentially aid in any mapping, but just be aware that you probably won't get it :-)
15:55 swick[m]: negative values in the CTM make me nervous. KMS color stuff is so underspecified...
15:58 JoshuaAshton: swick[m]: It's completely fine on AMD at least, I believe this is actually why they have CTM still in hardware despite exposing Shaper + 3D LUT
16:00 swick[m]: most CTMs don't consume or produce negative values so I would not bet on all vendors supporting that
16:00 swick[m]: but if it works for you that's great
16:00 jadahl: only so great if anyone can use it, and anyone can only use it if it's known to work universally :P
16:00 swick[m]: personally I'm very hesitant to use any of the KMS API for color transformations right now
16:01 JoshuaAshton: melissawen, Harry Wentland and I have been doing a lot of work in the AMDGPU space to get stuff in shape there
16:01 JoshuaAshton: Lots of back and forth so we can ensure we get HDR scanout for SteamOS/Gamescope on AMD
16:01 swick[m]: doesn't help if all other vendors do something else
16:02 swick[m]: at least for us
16:02 JoshuaAshton: Sure, that is why all our properties are like AMD_PLANE_DEGAMMA_TF, AMD_PLANE_LUT3D etc right now
16:03 JoshuaAshton: If I am honest... there are lots of things that I have learnt specifically about AMD hardware, like how certain LUTs are laid out (shaper is fixed point etc) and how the segmentation stuff works for them, that make me think that doing a generic color mgmt system is going to be a lot harder than just throwing LUTs at the wall
16:03 JoshuaAshton: AMD DC code internally mitigates stuff like this by having LUTs and then a TF that goes either before or after that affects the weighting/segmentation
16:04 JoshuaAshton: Obviously other vendors might not have that luxury or have other situations here
16:04 JoshuaAshton: Another thing that I found recently is that we really need to use the fixed function ROM block for degamma of PQ, etc on AMD. At least the current code that does de-PQ using a LUT produces visible banding.
16:07 swick[m]: yeah, all of those are known issues. segmented LUTs exist in different forms on different hardware, fixed TF implementations also exist. shaper curves for 3d LUTs are basically a requirement.
16:08 swick[m]: and precision in the pipeline and of each operation is also relevant
16:08 swick[m]: and KMS just doesn't handle any of that
16:08 JoshuaAshton: Harry Wentland is probably sick of me emailing them about precision of the shaper LUT haha :-)
16:08 JoshuaAshton: Took a long time for us to get stuff worked out there in a way that works for us
16:09 swick[m]: I can imagine. We really need this to work generically though eventually and that's the big issue...
16:09 JoshuaAshton: Eventually being a key word. I think in the short term (I know people are going to groan) but vendored properties are probably a decent cal.
16:09 JoshuaAshton: call
16:10 swick[m]: sure, I'm fine with that as long as the HDR signalling works generically
16:11 JoshuaAshton: I was speaking to zamundaaa ( zamundaaa[m] ), and KDE also had some interest in trying out our vendored AMD property implementation stuff
16:12 JoshuaAshton: We are probably going to look at attempting to upstream that soon-ish
16:13 JoshuaAshton: Like, even if it isn't what people end up going for once we have something generic -- there is always probably going to be some vendor-specific thing some compositor vendor (*cough* gamescope) wants to care about. It's also probably great to do bringup of stuff regardless.
16:14 JoshuaAshton: I see it as no different to Vulkan vendor-specific exts paving the way for a generic EXT really
16:20 swick[m]: I'm certainly not complaining about you testing all of the hardware features ;)
16:20 swick[m]: just too much other stuff to do first before I can focus on that
16:21 JoshuaAshton: While I am here I assume you also agree that tetrahedral for 3D LUT >>>>> linear
16:21 JoshuaAshton: :b
16:22 JoshuaAshton: I was super pleased when I found AMDGPU had that in hw
16:23 swick[m]: oh do they? that's neat.
16:23 JoshuaAshton: ya
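For reference, the difference being discussed: trilinear interpolation blends all 8 corners of a 3D LUT cell, while tetrahedral interpolation picks one of 6 tetrahedra and blends only 4 corners, which tends to keep the neutral axis cleaner. A rough sketch of the per-cell tetrahedral weighting, assuming the corner values and fractional coordinates have already been looked up:

```rust
/// Tetrahedral interpolation within one 3D LUT cell.
/// `f` holds the fractional coordinates inside the cell (r, g, b in [0, 1]),
/// and `c[i][j][k]` is the corner value at offset (i, j, k).
fn tetrahedral(f: [f32; 3], c: &[[[f32; 2]; 2]; 2]) -> f32 {
    let (fr, fg, fb) = (f[0], f[1], f[2]);
    let (c000, c100, c010, c001) = (c[0][0][0], c[1][0][0], c[0][1][0], c[0][0][1]);
    let (c110, c101, c011, c111) = (c[1][1][0], c[1][0][1], c[0][1][1], c[1][1][1]);

    // Pick the tetrahedron based on the ordering of the fractional coordinates,
    // then blend its 4 corners. On the gray diagonal (fr == fg == fb) this
    // reduces to a blend of c000 and c111 only, preserving neutrals.
    if fr > fg {
        if fg > fb {
            (1.0 - fr) * c000 + (fr - fg) * c100 + (fg - fb) * c110 + fb * c111
        } else if fr > fb {
            (1.0 - fr) * c000 + (fr - fb) * c100 + (fb - fg) * c101 + fg * c111
        } else {
            (1.0 - fb) * c000 + (fb - fr) * c001 + (fr - fg) * c101 + fg * c111
        }
    } else if fb > fg {
        (1.0 - fb) * c000 + (fb - fg) * c001 + (fg - fr) * c011 + fr * c111
    } else if fb > fr {
        (1.0 - fg) * c000 + (fg - fb) * c010 + (fb - fr) * c011 + fr * c111
    } else {
        (1.0 - fg) * c000 + (fg - fr) * c010 + (fr - fb) * c110 + fb * c111
    }
}
```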
16:25 swick[m]: AMD hardware really has a much better color pipeline than nvidia...
16:25 swick[m]: you're going to have fun when you're looking at that at some point
16:25 JoshuaAshton: I've already looked and was disappointed
16:26 JoshuaAshton: When we do a generic SteamOS I will probably have to composite in a bunch more cases
16:26 JoshuaAshton: on NV at least
16:26 swick[m]: yeah, I doubt we'll ever be able to use the current NV pipeline at all
16:27 JoshuaAshton: no 3D LUT at all
16:27 JoshuaAshton: super disappointing
16:27 JoshuaAshton: at least when I was grepping their open-gpu-whatever registers
16:28 swick[m]: that's also my understanding
16:28 swick[m]: no flexibility in the order of operations as well
16:29 JoshuaAshton: I mean... AMD also has no flexibility there :S
16:29 JoshuaAshton: but there's enough stuff
16:29 JoshuaAshton: that you can kinda do what you want
16:29 JoshuaAshton: per-plane 3D LUT, per-crtc 3D LUT, hdr_mul, blend_lut, per-plane ctm, etc etc
16:37 jadahl: vendored methods for offloading compositing have always been on the table, haven't they, just that it was intended to be hidden behind "drivers" in libliftoff?
16:40 JoshuaAshton: I have not heard of that but that sounds like a great idea to me :-)
16:41 swick[m]: it's one of the ideas discussed and it's not horrible
16:42 jadahl: the hard part is then to figure out a libliftoff api that makes everyone happy :P
16:43 swick[m]: yeah, and making sure everything is consistent. that's my bigger worry...
16:44 jadahl: we need conformance tests, perhaps it'll be as universally used as the wayland conformance tests
16:44 swick[m]: heh
16:46 swick[m]: I still prefer enumerating possible pipeline configurations and describing each element in it sufficiently. that still requires a user space library that can map an arbitrary color pipeline to the hardware pipeline but it already removes a lot of vendor specificness.
17:03 JoshuaAshton: I am waiting for display HW to evolve to the point where it's just a shader :-)
17:26 drakulix[m]: Please don’t. I don’t have any problems with drivers and some common interface to link against, but I don’t want to rely on libliftoff, given we just built our own plane-offloading code in smithay and libliftoff doesn’t integrate with our APIs that well.. :/
17:27 swick[m]: I also don't like libliftoff very much and prefer a design like drm_hwcomposer
17:27 swick[m]: but that doesn't integrate well with mutter, so... eh
17:31 LaserEyess: I read some of this backlog, and the first question I have is "how does Windows do this?" do they really just leave it to hw manufacturers to just write their own drivers to map to a higher level API?
17:31 LaserEyess: or, do they just... not, and handle everything in userspace
18:55 wlb: weston Merge request !1226 opened by Leandro Ribeiro (leandrohrb) Add ICC VCGT tests https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/1226
19:28 JEEB: LaserEyess: if you're talking about gamut/tone mapping, as far as I can tell the windows compositor tries to be as hands-off as possible
19:28 JEEB: it handles any possible SDR/HDR graphics white difference and then clamps/clips to output space
22:49 emersion: drakulix[m]: why not? it's designed to be very much like libdrm and as unintrusive as possible
22:57 JoshuaAshton: Even so, if you wanted to write your own equivalent thing in Rust or whatever, I am sure that'd be fine.
22:57 JoshuaAshton: The real question is who does the "generifying" part.
22:59 JoshuaAshton: I would prefer some userspace component, as we in Gamescope would always like to have an AMD-specific path and generic everywhere else, because we want to get the absolute most out of that hardware as we ship a device with it and have very high color mgmt demands and also need scanout
23:15 ngortheone: Hello. I am playing with the Wayland protocol and I am hand-encoding messages to send over the wire. There is something I don't understand and the spec does not really explain. When I construct a `get_registry` request I need to provide an Arg of type new_id
23:15 ngortheone: and it seems that the only value that works is 2
23:15 ngortheone: Why is the argument needed if a registry always has a obj_id == 2?
23:15 ngortheone: if I specify any other id the server responds with an error message "invalid arguments for wl_display@1.get_registry" (I tried 0, 3, 42)
23:16 ngortheone: what is "new_id" ? How can I know what new_id value is expected?
23:16 ngortheone: After reading the spec I thought new_id is a way to tell the server what id I want to bind the new object to
23:17 ngortheone: But it seems that I am wrong
23:17 ngortheone: There is nothing in the documentation that explains this.
23:51 kennylevinsen: ngortheone: https://gitlab.freedesktop.org/wayland/wayland/-/blob/main/src/connection.c#L798
23:52 kennylevinsen: I'd suggest looking at Wayland logging on the server side, as it might tell you more
23:57 kennylevinsen: It is just the ID selected by the client, nothing else. With libwayland the id ends up in this handler: https://gitlab.freedesktop.org/wayland/wayland/-/blob/main/src/wayland-server.c#L1024
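A minimal sketch of the request being discussed, assuming a hand-rolled client talking straight to the Wayland socket: wl_display@1.get_registry is opcode 1, and its only argument is the client-chosen new_id; libwayland in practice only accepts the next unused client ID here, which is why 2 works right after the initial wl_display@1 while 0, 3 or 42 are rejected. The wire format uses the host's byte order (little-endian on x86).

```rust
use std::io::Write;
use std::os::unix::net::UnixStream;

/// Hand-encode wl_display@1.get_registry(new_id = 2) and write it to the socket.
/// Wire format: a 32-bit sender object ID, then a 32-bit word with the message
/// size in bytes in the upper 16 bits and the opcode in the lower 16 bits,
/// followed by the arguments (here a single 32-bit new_id).
fn send_get_registry(stream: &mut UnixStream) -> std::io::Result<()> {
    let sender: u32 = 1;  // wl_display is always object 1
    let opcode: u16 = 1;  // wl_display.get_registry
    let size: u16 = 12;   // 8-byte header + one 4-byte argument
    let new_id: u32 = 2;  // next unused client-side ID

    let mut msg = Vec::with_capacity(size as usize);
    msg.extend_from_slice(&sender.to_ne_bytes());
    msg.extend_from_slice(&(((size as u32) << 16) | opcode as u32).to_ne_bytes());
    msg.extend_from_slice(&new_id.to_ne_bytes());
    stream.write_all(&msg)
}
```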