12:21 JEEB: Company: https://www.itu.int/pub/publications.aspx?lang=en&parent=R-REP-BT.2446 is I guess the closest to having something actually specified for HDR to SDR and vice versa
12:21 JEEB: (finally remembered the correct document name)
12:27 vaxry: is there a technical reason why we can't set an event loop timer's timeout to less than 1ms?
12:27 vaxry: what if I want 500µs?
12:28 pq: vaxry, just the libwayland-server API. Use your own timerfd.
12:28 vaxry: huh, that makes sense. Right. Thanks bae
12:28 pq: or, an actual event loop library
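(A minimal sketch of pq's suggestion, assuming libwayland-server's fd sources and a Linux timerfd: the wl_event_source timer API is millisecond-only, so the 500 µs period is armed on the timerfd itself. The callback body is illustrative.)

```c
#include <stdint.h>
#include <sys/timerfd.h>
#include <time.h>
#include <unistd.h>
#include <wayland-server-core.h>

/* Fired whenever the timerfd expires; runs from the compositor's event loop. */
static int
handle_timer(int fd, uint32_t mask, void *data)
{
    uint64_t expirations;
    (void)read(fd, &expirations, sizeof expirations); /* clear the expiration count */
    /* ... do the sub-millisecond work here ... */
    return 0;
}

/* Arm a repeating 500 µs timer and hook it into the wl_event_loop as an fd source. */
static struct wl_event_source *
add_500us_timer(struct wl_event_loop *loop)
{
    int fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK | TFD_CLOEXEC);
    struct itimerspec spec = {
        .it_value    = { .tv_nsec = 500 * 1000 }, /* first expiry in 500 µs */
        .it_interval = { .tv_nsec = 500 * 1000 }, /* then every 500 µs */
    };
    timerfd_settime(fd, 0, &spec, NULL);
    return wl_event_loop_add_fd(loop, fd, WL_EVENT_READABLE, handle_timer, NULL);
}
```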
12:31 pq: JEEB, thanks, that looks useful.
12:33 JEEB: pq: and even this already has three methods for different use cases :D
12:33 JEEB: although that is mostly relating to whether round-trip is required tone mapping wise
12:34 JEEB: (and various other details which I conveniently did not mention :) )
12:35 zamundaaa[m]: Having a standardized conversion algorithm doesn't really help with the problem imo
12:35 JEEB: yea, this is just probably useful in telling people what various bits there are in the process
12:35 pq: no, but it's a starting point, better than from scratch
12:35 Company: it's at least better than nothing
12:35 zamundaaa[m]: Company: no, I'm saying it's not
12:36 zamundaaa[m]: It's useful to have, on its own, but it's not useful for switching between app conversion and compositor conversion
12:36 Company: no, for that you need actual code
12:36 Company: and apps and compositors to agree on things
12:36 zamundaaa[m]: No, I'm saying that flat out does not work
12:37 pq: I don't think such agreement will ever happen.
12:37 Company: then people will see flickering
12:37 zamundaaa[m]: For example, the user can configure how SDR content is mapped to HDR displays in KWin. Even if you'd communicate the parameters and algorithm to the app, moving the window between screens will have sudden changes in appearance
12:37 zamundaaa[m]: That configuration is per screen, to be clear
12:38 Company: sure, but you have to do that or stuff will flicker
12:38 zamundaaa[m]: You don't have to do that stuff
12:38 zamundaaa[m]: You either offload something, always, or you don't offload it, ever
12:38 Company: that's not how things work though
12:39 pq: You just have to not switch between app and compositor conversion.
12:39 zamundaaa[m]: It's the only way things can work without flicker
12:39 Company: so how do you imagine direct scanout is gonna work?
12:39 pq: by choosing compositor conversion always
12:40 pq: well, you can also choose app conversion always, and still hit direct scanout, but then you do a re-rendering in the app
12:40 Company: that's not direct scanout then?
12:40 pq: yes, it is direct scanout
12:41 pq: The definition of direct scanout is scanning out a client buffer.
12:41 zamundaaa[m]: Company: direct scanout just means that the compositor doesn't do an extra copy
12:41 zamundaaa[m]: I think you mean "zero copy" or similar
12:41 pq: It may not be the same as zero-copy.
12:43 Company: I mean, the relevant question for me atm is the GTK graphics offload
12:43 Company: which dynamically switches between subsurface and GTK's compositing
12:43 JEEB: also here's the new bits that got added in latest H.273 for matrix coefficients https://github.com/FFmpeg/FFmpeg/commit/29561c8e2d4ccecaa93afcaed73678e3f6011b0a
12:44 JEEB: (for whatever reason 2023-09 H.273 is still not freely available)
12:44 swick[m]: I'm with Company on this one. There just isn't a way to "always" offload something from a toolkit perspective, because at any point the render nodes can change and it becomes impossible to offload
12:45 swick[m]: there needs to be some kind of standardization in the long run
12:45 zamundaaa[m]: As I already wrote, that is completely impossible
12:45 swick[m]: you keep saying things like that like it's the absolute truth but it's really not
12:45 zamundaaa[m]: Like, not just not great or anything, but it's literally not doable with multi monitor and user configuration
12:46 pq: not the first thing that toolkits will need to re-think for Wayland
12:46 swick[m]: there is no re-thinking here, that's just not how any of them work and will work
12:46 zamundaaa[m]: swick: then please explain to me how the compositor can ensure a smooth transition between an app compositing SDR content from a display with SDR gamut wideness 0% to a display with SDR gamut wideness 50%
12:47 pq: or maybe changing the "render nodes" in a toolkit does warrant flicker?
12:47 Company: zamundaaa[m]: that's not the goal - the goal is that the app can do the same steps that the compositor does
12:47 pq: nothing worked how Wayland works, until things were re-written for the better
12:48 Company: note the order: the app can do what the compositor does, not the compositor can do whatever the app wants
12:48 zamundaaa[m]: Company: the app can't do the same steps as the compositor. It can't create two images for two screens
12:48 swick[m]: that's not the point
12:48 pq: Company, that literally will never happen, because it would require changing Wayland from descriptive to prescriptive.
12:49 Company: no, but it can create one of the two images
12:49 Company: pq: "can", not "is forced to"
12:49 swick[m]: pq: it doesn't. the compositor just has to tell the app what it is going to do
12:49 mclasen: the 2 monitors are a distraction here
12:49 pq: not a difference here, it's a spec that will tie the hands of compositors
12:50 Company: I mean, the compositor gets pixels from the app, then transforms the pixels somehow and then it ends up on screen
12:50 swick[m]: pq: the app doesn't tell the compositor what to do, the compositor tells the client what it will do, and the client can then adjust to that, or not
12:50 pq: would a compositor be sending SPIRV shaders to apps?
12:50 Company: if the app knows about that transform, it can avoid problems
12:50 zamundaaa[m]: mclasen: no, they're not
12:51 zamundaaa[m]: I don't want flicker when you move a window from one screen to the other
12:51 swick[m]: pq: the compositor can also say "custom stuff" at which point the client can't adjust to the compositor, and can stop offloading
12:51 mclasen: I don't want that either, but that is not the case that will have toolkits temporarily switch from offloading to rendering
12:51 swick[m]: pq: there is no need to describe every possible scenario, just the common ones so that offloading might work
12:51 Company: zamundaaa[m]: I don't get why that is a problem - you get pixels from the app and do stuff to them
12:52 zamundaaa[m]: Company: if the app applies the "same" conversion steps as the compositor, but the conversion steps differ between two screens, then there will be flicker if the app does SDR conversions with compositor-given parameters
12:52 Company: zamundaaa[m]: and what you want to do is tell the app as much as possible to minimize the stuff you need to do
12:52 pq: swick[m], doesn't the option of "custom" thwart the whole idea?
12:53 Company: zamundaaa[m]: then the compositor would tell the app and the app could choose the best format to avoid that
12:53 swick[m]: no? then the apps can make an informed decision if they want to offload something
12:54 pq: swick[m], sounds like the KMS color pipeline UAPI is coming to Wayland here.
12:54 swick[m]: apps could also not support one of the strategies the compositor is using and then not offload something
12:54 swick[m]: I think it will be higher level
12:54 Company: zamundaaa[m]: the goal here isn't to force apps to behave a certain way or to force compositors to behave a certain way - the goal is to have both cooperate to get the best result
12:55 zamundaaa[m]: Company: I'm telling you there is no way to cooperate. Either the app does one fixed algorithm for conversion, or the compositor does the conversion. There's no switching between them
12:55 zamundaaa[m]: Or changing the app's algorithm to match what the compositor does
12:56 pq: a noble goal, pardon my pessimism
12:57 Company: the compositor and the app both have a bunch of buffers that need compositing - and they both use the same hardware to do that
12:58 Company: so as long as they cooperate, they can switch around who does which part of that
12:58 pq: A committee defines a few standard algorithms, then compositors decide if they implement any of them, applications decide if they implement any of them, end users decide which one the compositor uses, and the app can check if it won the lottery. Ok, it's not zero probability, if the committee happens to find generally accepted algorithms.
12:59 zamundaaa[m]: Company: the compositor is not doing one single algorithm with one single fixed set of parameters
13:00 Company: pq: yeah, and because we cooperate, the app and compositor know which ones the other implements (most likely the same ones)
13:01 pq: but yes, so far an app can only provide one rendering per window, so it can match only one output at a time. For any other output, the compositor will necessarily do something more.
13:02 Company: yeah, and that's fine
13:02 pq: as long as the app does not switch *outputs*, it won't flicker, though
13:02 Company: the compositor will decide how it prefers to handle that case and tell the app
13:04 Company: that's a corner case though. The main problem is that we want to offload the video stream while you watch, but when you move the mouse and the app wants to composite controls on top, it can do that without any flicker
13:05 pq: right
13:06 Company: same problem when the compositor wants to display a notification or an alt-tab overlay on top of a fullscreen window
13:06 pq: so the client picks one output to render for, and then it sticks to that to avoid flicker when it switches between compositor and app *blending*.
13:07 pq: Company, I don't see how that would be the same problem at all.
13:08 pq: A compositor has a prescriptive rendering API (KMS and GL/Vulkan). A client has only a descriptive API (Wayland).
13:08 swick[m]: it's also possible to use the same processing on all monitors even when they have different capabilities
13:08 swick[m]: all that changes are parameters of the display
13:09 swick[m]: and that might even work for all the weird cases
13:09 Company: pq: the problem is ultimately the same one - switching from 1 buffer using direct scanout to 2 buffers - in one case that 2nd buffer is provided by the same app and in the other it's provided by a different app or the compositor itself
13:09 swick[m]: I'm not 100% sure this will work, but I'm equally not as convinced as zamundaaa that this won't work
13:09 mclasen: the client renders using gl/vulkan as well
13:10 pq: Company, the starting point is half the same, yes. The tools you have at your disposal are not.
13:11 zamundaaa[m]: Company why would you need to switch to compositing controls yourself? Why can't it just be on a subsurface?
13:11 davidre: I don't see the problem. If a notification appears you have to do compositing, since you need to display two different buffers
13:12 mclasen: zamundaaa[m]: rounded corners
13:12 davidre: I am a bit confused tbh so ignore me
13:12 Company: zamundaaa[m]: it depends on the controls - but you're at least gonna switch from direct scanout to something else
13:12 zamundaaa[m]: direct scanout vs not doesn't matter
13:12 davidre: Why does gtk care if the compositor does direct scanout or not
13:12 davidre: If you have two different things, something has to do the compositing
13:12 Company: davidre: because it needs to provide a buffer suitable for direct scanout
13:13 zamundaaa[m]: Company: direct scanout isn't some different way of operation, it's just a different way to show the exact same thing
13:13 davidre: if the client composites video and controls, the compositor can maybe do direct scanout
13:13 zamundaaa[m]: whether or not you allocate the buffer with a different modifier or scanout flag will not affect how it looks
13:13 davidre: if it doesn't, the compositor will composite
13:13 davidre: in the end it's the same amount of work
13:13 davidre: just somewhere else in the stack
13:14 Company: davidre: right - as long as both parts of the stack would do the same work
13:14 Company: davidre: if they'd do things differently, then you can get flicker
13:14 zamundaaa[m]: Company: "both parts of the stack" there is only one part, it's the compositor
13:14 Company: zamundaaa[m]: not if the app does the compositing
13:14 zamundaaa[m]: Yes, but that's the thing you shouldn't do
13:15 davidre: Yes but you would not switch parameters of your buffer on the fly
13:15 davidre: or just choose the best tool for your use case
13:15 zamundaaa[m]: mclasen: rounded corners is a good point, but that really just means that we need wp-rounded-corners-v1 :)
13:15 mclasen: *watches the folks who invented scanout flags and tranches tell us never to look at them*
13:16 zamundaaa[m]: mclasen: that's not at all what this discussion is about
13:16 Company: zamundaaa[m]: people can use all sorts of weird effects inside GTK that subsurfaces can't do - usually they don't (when maximized), but sometimes they do
13:17 Company: rounded corners is the obvious example for non-maximized apps ofc
13:17 mclasen: zamundaaa[m]: you just said to not switch parameters on the fly
13:18 zamundaaa[m]: mclasen: color management parameters don't have anything to do with modifiers or scanout flags, and only very little with buffer formats
13:20 mclasen: I thought we were talking about direct scanout
13:20 mclasen: *lets the color nerds have it and goes to do something else*
13:21 Company: we were talking about color management
13:21 Company: which involves direct scanout, because - same problem - the colorspaces supported by direct scanout are smaller than those a compositor can handle
13:22 zamundaaa[m]: direct scanout is not the same problem as clients offloading content
13:22 pq: If toolkit API has been designed so that all effects and stuff are free to use any time on anything, then yeah, that does make off-loading very hard. It has been intentionally designed to be app-composited. But then, only very few types of scenegraph objects should even be eligible for off-loading.
13:22 zamundaaa[m]: The compositor programs the KMS color pipeline
13:22 zamundaaa[m]: the client doesn't program the compositor color pipeline
13:23 swick[m]: yeah, direct scanout vs compositor compositing doesn't matter
13:23 Company: pq: in the GTK API you need to explicitly mark content for offloading, but GTK will then only offload if it's suitable
13:23 Company: pq: which can change on a frame-by-frame basis
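(Roughly what that explicit marking looks like on the app side, as a sketch against GTK ≥ 4.12's GtkGraphicsOffload; the media-file setup here is only illustrative, and GTK still decides per frame whether the subtree is actually suitable for a subsurface.)

```c
#include <gtk/gtk.h>

/* Wrap a video widget in GtkGraphicsOffload so GTK may push it to a subsurface. */
static GtkWidget *
make_offloaded_video(const char *filename)
{
    GtkMediaStream *stream = gtk_media_file_new_for_filename(filename);
    GtkWidget *picture = gtk_picture_new_for_paintable(GDK_PAINTABLE(stream));

    /* Explicitly mark this subtree as a candidate for offloading; GTK falls
     * back to its own compositing whenever the content isn't suitable. */
    GtkWidget *offload = gtk_graphics_offload_new(picture);
    gtk_graphics_offload_set_enabled(GTK_GRAPHICS_OFFLOAD(offload),
                                     GTK_GRAPHICS_OFFLOAD_ENABLED);

    gtk_media_stream_play(stream);
    return offload;
}
```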
13:23 zamundaaa[m]: Company: how many applications use these effects that don't work with subsurfaces?
13:23 pq: Company, why wouldn't that marking also forbid effects that cannot be off-loaded?
13:24 swick[m]: zamundaaa: all of them basically because rounded corners
13:24 Company: pq: because GTK doesn't forbid things
13:24 pq: yes, why?
13:24 zamundaaa[m]: swick: Yeah, but we could offload that one to the compositor
13:24 pq: old-school X11: off-load every widget! modern X11 / old Wayland: off-load nothing! more recent Wayland: off-load... something?
13:24 swick[m]: zamundaaa: that's just moving an entire rendering api to wayland
13:24 swick[m]: it is kind of stupid tbh
13:24 zamundaaa[m]: Company: If you had a protocol for corner rounding, couldn't you make the app opt in or opt out of those effects?
13:24 Company: pq: that would be really hard to communicate - when setting up your widgets, you don't know what effects they will ultimately use
13:25 zamundaaa[m]: as in, app-wide opt in/out
13:25 Company: zamundaaa[m]: themes can decide to round corners
13:25 swick[m]: zamundaaa: let's say you use some controls that use a popover and that ends up on top of the content
13:25 swick[m]: and you're done
13:25 zamundaaa[m]: why would you be done? You'd put it on a subsurface
13:26 swick[m]: no you wouldn't
13:26 mclasen: fractional scaling makes subsurfaces useless
13:26 zamundaaa[m]: mclasen: that is a problem that can and should be solved
13:26 swick[m]: at that point you'd have to put literally everything on subsurfaces
13:26 swick[m]: every popover, every whatever
13:26 swick[m]: and then maybe you want some specific blending behavior
13:26 swick[m]: and you're done as well
13:27 swick[m]: or specific subpixel alignment, and you're done as well
13:27 swick[m]: that idea is much more ridiculous than trying to match the color conversion of the compositor
13:28 zamundaaa[m]: You wouldn't need to put every individual thing into a subsurface, only all things below, and all things above the thing you're offloading to the compositor
13:28 Company: that still assumes that the thing itself is offloadable into a subsurface
13:29 mclasen: we do that, to some degree
13:29 mclasen: but by putting the offloaded subsurface below the main surface
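(In core-protocol terms, the arrangement mclasen describes looks roughly like this sketch: the video goes on a wl_subsurface placed below the main surface, which leaves a transparent hole where the video should show through. Registry binding, dmabuf import and the hole-punching in the parent's own rendering are omitted; names are illustrative.)

```c
#include <wayland-client.h>

/* Put a video buffer on a subsurface stacked below the UI surface. */
static struct wl_surface *
place_video_below_ui(struct wl_compositor *compositor,
                     struct wl_subcompositor *subcompositor,
                     struct wl_surface *main_surface,
                     struct wl_buffer *video_buffer,
                     int x, int y)
{
    struct wl_surface *video_surface = wl_compositor_create_surface(compositor);
    struct wl_subsurface *sub =
        wl_subcompositor_get_subsurface(subcompositor, video_surface, main_surface);

    wl_subsurface_set_position(sub, x, y);
    wl_subsurface_place_below(sub, main_surface); /* stack under the UI surface */
    wl_subsurface_set_desync(sub);                /* video frames update independently */

    wl_surface_attach(video_surface, video_buffer, 0, 0);
    wl_surface_commit(video_surface);
    /* The parent must also commit (with an alpha hole over the video region)
     * for the new stacking and position to take effect. */
    wl_surface_commit(main_surface);
    return video_surface;
}
```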
13:29 Company: and GTK allows you to write css that applies a css filter to stuff
13:30 zamundaaa[m]: mclasen: then this sounds quite doable
13:30 Company: so you can turn stuff black-and-white if you want to
13:30 Company: or make parts semi-opaque or whatever fun effect you want
13:31 Company: and that's independent of the offloading
13:33 Company: looking at the code that people have written, not offloading is way too common a case
13:34 Company: like, Adwaita used to (still does?) draw an outline shadow over the main window to make the window border appear thicker
13:34 Company: which of course it also drew over the video in the video player
13:36 Company: but that's basically the thing I want to support - make things work fine always, and make optimizations require extra work by the app developer who cares
13:38 Company: because 95% of app devs don't care - but the 5% are the people who write Gimp and Inkscape, and they care about proper color management
13:38 Company: same for video players - there's gonna be 2 or 3 video players where the devs will make sure offloading works
14:30 bnason: So I'm sure this is possible, but I have 0 idea of what would be involved. I want to create a small app that just detects the mouse hitting the edges/corners of the screens. How hard would that be?
14:34 wlb: weston Merge request !1486 opened by Pekka Paalanen (pq) gl-renderer: consolidate GLSL functions https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/1486 [GL renderer]
14:35 pq: bnason, very hard. Wayland does not generally allow spying on pointer motion, nor placing windows at the edges to catch the pointer.
14:49 bnason: What about creating full screen windows that are transparent (visually and to mouse/cursor events)?
14:50 pq: you can try that, but it would just behave like a normal window: other windows could be on top, and then it won't get pointer motion
14:50 bnason: Isn't there a protocol for layering? So I can create a super top level layer? (I'm not super familiar with all the internals)
14:51 pq: if you make it truly fullscreen, as in the fullscreen mode, you cannot make it transparent because you get black background. You could do maximized, but then it won't necessarily cover full screen.
14:51 pq: bnason, not a generic one, no. There is a wlr extension that some compositors support.
14:51 bnason: I'm trying to figure out a working solution for my multi-PC setup. I used to use Barrier, but after switching my main PC to Wayland that no longer works. Right now I'm using RKVM, which works great but requires key presses to switch between PCs. My idea was to create a wayland/x companion app that detects when I hit the edges of the screen and auto switches for me.
14:52 pq: bnason, that kind of feature cannot be an independent app. It needs to integrate with the compositor somehow.
14:52 bnason: ah ok thanks
14:52 davidre: there is a stack in the works for that with the InputCapture portal, RemoteDesktop portal and libei
14:52 davidre: it's implemented in input-leap and gnome
14:53 davidre: I am doing a plasma impl as we speak
14:53 pq: cool!
14:53 bnason: Yea, I'm using labwc so can't use that yet unfortunately. but it seems like if that support is added, input-leap would just work so my solution wouldn't even be needed. (though I do like how rkvm can be used in the console before my WM)
14:56 bnason: davidre, can you point me to that input capture protocol?
14:56 davidre: *portal
14:57 davidre: https://flatpak.github.io/xdg-desktop-portal/docs/doc-org.freedesktop.portal.InputCapture.html
14:57 bnason: What exactly is a portal?
14:58 davidre: https://flatpak.github.io/xdg-desktop-portal/docs/introduction.html
14:59 bnason: ah ok so technically that is outside wayland
15:12 zamundaaa[m]: bnason: you could in theory place overlay windows on the screen edges with layer shell to do what you want
15:17 bnason: wlr layer shell?
15:18 zamundaaa[m]: yes
15:54 bnason: thanks, I'll look into that
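(A sketch of the layer-shell approach zamundaaa[m] suggests, against the client headers generated from wlr-layer-shell-unstable-v1.xml; registry binding, the configure/ack handshake, buffer attachment and the wl_pointer listener are omitted, so this only shows the surface setup.)

```c
#include <wayland-client.h>
#include "wlr-layer-shell-unstable-v1-client-protocol.h"

/* Create a 1-pixel-wide overlay strip spanning the left screen edge;
 * wl_pointer enter events on this surface then mean "mouse hit the edge". */
static void
create_edge_strip(struct wl_compositor *compositor,
                  struct zwlr_layer_shell_v1 *layer_shell,
                  struct wl_output *output)
{
    struct wl_surface *surface = wl_compositor_create_surface(compositor);
    struct zwlr_layer_surface_v1 *layer =
        zwlr_layer_shell_v1_get_layer_surface(layer_shell, surface, output,
                                              ZWLR_LAYER_SHELL_V1_LAYER_OVERLAY,
                                              "edge-detect");

    /* Anchor to top+bottom+left so the strip spans the whole edge;
     * width 1, height 0 means "stretch between the anchored edges". */
    zwlr_layer_surface_v1_set_anchor(layer,
        ZWLR_LAYER_SURFACE_V1_ANCHOR_TOP |
        ZWLR_LAYER_SURFACE_V1_ANCHOR_BOTTOM |
        ZWLR_LAYER_SURFACE_V1_ANCHOR_LEFT);
    zwlr_layer_surface_v1_set_size(layer, 1, 0);

    /* This commit requests a configure; after acking it, attach a
     * (transparent) buffer and commit again to map the surface. */
    wl_surface_commit(surface);
}
```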
17:12 graywolf: Hello, I have laptop A running X11, and laptop B running Wayland (ubuntu, gnome in case it matters). I would like to be able to view windows from B on A. If B had X11, I would just use X11 forwarding over ssh, what can I use for wayland?
17:14 MrCooper: SSH X forwarding works in a Wayland session
17:14 MrCooper: (it's a real mystery why many people assume it doesn't :)
17:14 MrCooper: for native Wayland apps, waypipe works mostly the same
17:15 graywolf: Ah, ok, I will give it a try (I assumed it would not work).
17:15 graywolf: I found waypipe but I assumed it expected wayland on both ends
17:15 graywolf: Thank you :)
17:16 MrCooper: doesn't matter what kind of session the remote machine uses, just what the app you run via SSH uses
17:16 graywolf: Oh... I am dumb. Right, the remote apps talks with my local X11, so it does not matter at all there is wayland there
17:19 Ermine: you can use cage with waypipe, since cage has an x11 backend (like any other wlroots-based compositor)
17:38 Consolatis: bnason: > How hard would that be?
17:38 Consolatis: really simple, maybe 10 lines of python with gtk and layershell bindings
19:48 siak: I am on Manjaro KDE and have recently switched to Wayland. Now I see some artifacts here and there, like block-shaped traces of the mouse cursor and the apps pane not rolling up.
19:49 siak: Oh, it is not Wayland support. But I am a developer so...
21:21 siak: In what layer does Wayland reside, logically?
21:28 kennylevinsen: siak: it's a protocol
21:29 siak: Between the window manager and the graphics driver?
21:29 kennylevinsen: no, between graphical applications and the display server
21:30 kennylevinsen: If you use KDE, your display server is kwin
21:31 siak: So there is still a display server? It resides between graphics driver and apps?
21:32 siak: DRI and apps
21:32 kennylevinsen: In Wayland there is *only* a display server. It takes roles similar to an X display server, window manager and compositor all in one
21:33 kennylevinsen: No that's not the task of a display server
21:33 kennylevinsen: For rendering, apps talk to your graphics driver and render devices directly. But they can't put stuff on screen or receive input that way.
21:34 kennylevinsen: The display server is the KMS/DRM master, controlling outputs (displays) and inputs (keyboard/mice/etc.). It routes input to the focused application, and puts the rendered content supplied by applications on screen.
21:37 kennylevinsen: So from a graphics perspective, the display server takes *already rendered content* from applications and stuffs it on screen, possibly transforming it in the process (scale, color, blending).
21:40 siak: How can an app have access to render devices but not be able to put stuff on screen? And why should it be this way?
21:42 kennylevinsen: that's how the GPU drivers work. Only one process (the master) has access to the outputs through the primary DRM device, while the render device can be used arbitrarily.
21:42 zamundaaa[m]: siak: rendering and displaying are simply separate things
21:43 zamundaaa[m]: On many devices they're even entirely different devices
21:45 siak: Yes I can understand that. I just did not know that there are different access levels to them.
21:46 kennylevinsen: Clients also have no idea what's supposed to be on screen - they don't even know what output they're on, much less how colors or scale should be adjusted, or how to blend with other transparent content that they cannot access
21:46 kennylevinsen: and updating the screen is done by atomically swapping the whole output buffer for a new one - not by drawing - so the process doing the update needs to know how everything should look
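(A small sketch of that split using libdrm; the device paths are the usual Linux ones and error handling is minimal. Run while a compositor is active: the render node opens fine for GPU work, but becoming DRM master on the primary node fails because the compositor already holds it.)

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    /* Any process may open a render node and submit rendering work through it. */
    int render_fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
    printf("render node fd: %d\n", render_fd);

    /* The primary node exposes modesetting, but only the DRM master
     * (normally the compositor) may actually program the outputs. */
    int card_fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (card_fd >= 0) {
        if (drmSetMaster(card_fd) != 0)
            perror("drmSetMaster (expected to fail while a compositor runs)");

        drmModeRes *res = drmModeGetResources(card_fd);
        if (res) {
            printf("connectors: %d, crtcs: %d\n",
                   res->count_connectors, res->count_crtcs);
            drmModeFreeResources(res);
        }
        close(card_fd);
    }
    if (render_fd >= 0)
        close(render_fd);
    return 0;
}
```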
21:47 siak: On Wayland we don't care about render devices? Apps give a rendered image to the display server?
21:48 siak: Is the window manager the same thing as display server?
21:54 kennylevinsen: apps give rendered buffers yes
21:55 kennylevinsen: in Wayland, window management is part of the display server's tasks. We also often call the whole thing a "Wayland compositor", but it's still all those things in one.
21:58 siak: What other tasks does it have?
22:00 siak: And btw, are the problems I see with Wayland on KDE actually kwin bugs?
22:01 kennylevinsen: generally yes, which is why this channel is not for end-user support, only for protocol, server or client development
22:01 siak: Are the apps aware of Wayland or do they just talk to kwin?
22:02 zamundaaa[m]: Wayland is the API / language apps use to talk to KWin
22:05 siak: So all my apps know Wayland? That is a surprise.
22:06 kennylevinsen: Most do; the few that don't go through the Xwayland compat layer
22:10 siak: Aha. Thanks for the info. I understand it way better.