12:39tzimmermann: jfalempe, thanks for reviewing. i hope this solution will finally stick. i have a similar patchset for ast in a bit.
13:17zmike: jenatali: is there any kind of error message here? https://gitlab.freedesktop.org/mesa/mesa/-/jobs/61940382
13:17zmike: or is "FAILED" the whole of it
13:24jenatali: zmike: raw view shows the errors: https://gitlab.freedesktop.org/mesa/mesa/-/jobs/61940382/raw
13:24jenatali: driQueryOptionb, i, and str versions are unresolved
13:24jenatali: I think it's a gitlab bug
13:25jfalempe: tzimmermann: you're welcome, thanks for cleaning up this bmc support.
13:25zmike: I see
13:25zmike: very convenient
13:25zmike: thank you gitlab
13:32MrCooper: apparently virtio on s390x (big endian) advertises only BGRX8888 as supported, seems like a bug?
13:49alyssa: austriancoder: mind reviewing https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30501 ? thx
13:57alyssa: what is the way to CI test common code patches that's not considered "abusive"?
13:58alyssa: would like to get https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30327 unstuck
13:59MrCooper: if you have no other way to test, manually triggering just the needed jobs seems fair
14:00alyssa: MrCooper: every gl & vk job is needed
14:01zmike: I've been complaining about this for months because there is no way
14:01zmike: you just have to mash on the play buttons until it starts
14:02alyssa: and.. people do that...?
14:02zmike: no other option
14:03zmike: pressing play on the top of the build job columns is usually enough
14:03zmike: but you have to do it in stages as it completes
14:03zmike: which is supremely annoying
14:03alyssa: ummmmm... ok..
14:03alyssa: i mean the other option is to stop working on common code, which i did for a while, but i had hoped that 7 months later things would be working again
14:04alyssa: i see.
14:13mattst88: is "start CI jobs" not a thing you can do from the gitlab REST API?
14:13Hazematman: Can you not use the `./bin/ci/ci_run_n_monitor.py` script with a regex to match all the gl & vk tests?
14:13Hazematman: That's what i've been doing to trigger specific jobs, without fiddling with the gitlab UI
14:14alyssa: Hazematman: it would be .* regex since it's touching all the drivers
14:14alyssa: but apparently this is considered "abusive"
14:15zmike: .* doesn't work
14:16zmike: or at least it has never worked any of the times I've tried it
14:16alyssa: that too (:
14:16Hazematman: zmike: It does, if you do the `--force-manual` or whatever the option is. But the use case of testing against all CI for exploratory changes seems like a good one. It would be nice if there was a way to do that without being "abusive" for hard-to-test changes
14:19zmike: why would I need --force-manual to trigger jobs that are not manual?
14:20zmike: https://gitlab.freedesktop.org/mesa/mesa/-/issues/9986 should just be fixed
14:21Hazematman: zmike: I'm not sure, but that's how I've gotten all jobs to run with the `.*` regex 🤷♂️
14:21zmike: I'm trying to run fewer jobs, not more 🤕
14:24alyssa: zmike: see also https://gitlab.freedesktop.org/mesa/mesa/-/issues/7849#note_2515127
14:25zmike: yeah there have been a lot of tickets about it
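(For reference, a rough sketch of the `./bin/ci/ci_run_n_monitor.py` invocation discussed above — the flag names here are from memory and may have changed, so check the script's --help; a gitlab API token is also needed, and the regex is just a placeholder:)
    ./bin/ci/ci_run_n_monitor.py --rev HEAD --target '.*' --force-manual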
14:38zmike: alyssa: I assume panfrost passes dEQP-EGL.functional.partial_update.render_outside_damage_region ?
14:46alyssa: zmike: dunno, I haven't booted a Mali device in a while
14:47zmike: oh I figured you'd just know
15:21alyssa: dEQP-EGL was passing back in the day
15:22alyssa: I don't know if those are new tests or if something's regressed since
15:22zmike: I only asked because there are not many drivers in mesa that support KHR_partial_update and panfrost is one of them
15:23alyssa: ah
15:23alyssa: i don't think that ext does anything on panfrost on hw newer than mali-t860
15:30MrCooper: alyssa: only running jobs which aren't actually needed is abusive
15:31zmike: MrCooper: you probably know this - does EGL_KHR_partial_update include clear operations in the damage region?
15:31zmike: or only draw commands
15:32zmike: the spec refers only to "client api rendering" which can be ambiguous
15:33mattst88: do we have anything that cleans ~/.cache/mesa_shader_cache? I just realized mine was 16G.
15:33alyssa: MrCooper: sorry, I'm struggling to parse that, could you rephrase? thanks
15:33zmike: mattst88: rm -r ?
15:33MrCooper: alyssa: it's not abusive if you actually need all those jobs
15:34alyssa: ah, yeah.
15:34MrCooper: what's abusive is blindly running all jobs when only some subset is needed
15:34alyssa: the issue as always is that I don't actually need the -full jobs, just the set Marge would run, but there's not a way to separate them
15:35MrCooper: zmike: not sure offhand
15:35alyssa: mattst88: this doesn't directly answer your question, but if you aren't already - consider setting MESA_SHADER_CACHE_DIR=/dev/shm for CTS runs
15:35mattst88: zmike: yeah, that's what I did -- but it'd be nice if we had a program that could run occasionally and clean files that are older than $date or something
15:35alyssa: should be faster, avoids polluting the cache, and saves your SSD some write cycles
15:36alyssa: I think CI does something similar
15:36mattst88: ccache for example allows setting a limit on the cache size and then when it's reached, deletes files to keep the cache size-limited
15:36mattst88: alyssa: that's a good idea. thanks
15:36MrCooper: mattst88: the cache is supposed to be pruned to 1G by default IIRC, so that sounds like something might have gone wrong there
15:36mattst88: MrCooper: hm, okay. thanks
15:36alyssa: (Previously I had disabled the shader cache for CTS, but it's faster to enable it but backed by RAM. At least for GL.)
15:37mattst88: alyssa: presumably because you actually get lots of cache hits running CTS?
15:37alyssa: Yeah
15:39mattst88: It'd be cool to have some cache stats like `ccache -s` gives you
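(For reference, the environment variables in question, assuming current Mesa names: MESA_SHADER_CACHE_DIR relocates the cache and MESA_SHADER_CACHE_MAX_SIZE caps its size, with a 1GB default. A RAM-backed CTS run would look roughly like:)
    MESA_SHADER_CACHE_DIR=/dev/shm MESA_SHADER_CACHE_MAX_SIZE=1G <CTS runner command>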
16:13jenatali: alyssa: Thanks for tackling the DXIL change in that MR. I expected you were going to ping me to do it :P
16:39gfxstrand: Lynne: Typing it this week
16:43alyssa: jenatali: I expect you to debug it ;P
16:52ccr: "no, mr. bond .. I expect you to debug."
17:53Lynne: gfxstrand: amazing
17:53Lynne: I remember you were saying it would take an enormous amount of hacks to implement, did you find a way around it?
17:55alyssa: Lynne: she is embracing the hacks ;)
17:57Lynne: for sane descriptor handling, as a certain chair and I would say, it's all worth it
18:38alyssa: Lynne: I am not sure I agree that edb is sane descriptor handling..
18:39Lynne: do you subscribe to the religion of d3d12-style descriptor heaps?
18:39austriancoder: alyssa: done
18:40alyssa: Lynne: I mean. At least I *understand* heaps ;)
18:40alyssa: austriancoder: thanks!
18:41Lynne: I don't think heaps are simpler at all...
18:41Lynne: what's simpler than a buffer which you map and just set descriptors into?
18:41jenatali: It's just an opaque version of the same?
18:43Sachiel: it's not so simple when the hw doesn't work that way
18:44alyssa: admittedly i haven't taken the time to understand EDB
18:45alyssa: but the idea of having multiple descriptor buffers instead of just 1 heap is a sticking point for me
18:45alyssa: heaps are just easier to reason about for me
18:45jenatali: Yeah. D3D's got one heap because some hardware can only have one
18:46alyssa: honeykrisp is 100% heaps internally, even though the hardware is sometimes more flexible than that
18:49dj-death: alyssa: how big's your heap?
18:50gfxstrand: EDB is sane if you're AMD
18:50jenatali: Right
18:51alyssa: dj-death: which one
18:52dj-death: alyssa: okay :)
18:52dj-death: alyssa: so what heaps do you have and what sizes? ;)
18:53alyssa: tiny sampler heap, hardware
18:54alyssa: massive texture heap, arbitrary size (well up to 4GiB i guess), these are closer to buffers in hardware but we just use one as a global heap
18:55dj-death: okay
18:55alyssa: there's no structured buffer hardware so that's whatever software wants
18:55alyssa: Huh
18:55alyssa: I guess I do subscribe to the heap religion
18:55dj-death: so all buffer accesses are done with global load/store?
18:56alyssa: Yep
18:56alyssa: not even bounds checking in hardware
18:56alyssa: not even for vertex fetch (:
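(Illustration only, not honeykrisp's actual data structures — a hypothetical C sketch of the heap model described above: a small hardware sampler table plus one big texture table bound globally, with shaders carrying indices, while buffers are raw GPU addresses hit with global loads/stores:)
    #include <stdint.h>

    /* Hypothetical names and sizes; the real driver's layout will differ. */
    struct hw_texture_descriptor { uint32_t opaque[8]; };   /* packed HW descriptor */

    struct texture_heap {
       struct hw_texture_descriptor *entries;   /* one global table, bound once */
       uint32_t count;                          /* grows, up to the ~4GiB ceiling */
    };

    struct shader_resource {
       uint32_t tex_index;       /* index into the global texture heap */
       uint32_t sampler_index;   /* index into the small hardware sampler heap */
       uint64_t buffer_address;  /* raw GPU address: plain global load/store,
                                    no hardware bounds checking */
    };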
18:57dj-death: yeah interesting
18:59DavidHeidelberg: karolherbst: HW clear_buffer is giving a small speedup, 130 -> 120ms per token on Intel TGL. Not as cool as on freedreno, though still nice (+ accounting for the fact I have an i7-1185G7 @ 3GHz). also CPU usage decreases, but it's 1 core 50% -> 25-30%...
18:59alyssa: dj-death: """interesting"""
18:59alyssa: i think you mean i've set a new record for running dxvk on garbage hardware ;p
19:00karolherbst: DavidHeidelberg: yeah... anything which does stuff on the CPU will tank perf, and I'm already thinking of ways to mitigate all of this, even considering not using those callbacks at all... (e.g. going through a temporary buffer first or something)
19:01dj-death: alyssa: I don't know
19:01karolherbst: I was also considering adding some interface so that drivers can tell what's blocking and what's not
19:02karolherbst: but I think threaded context is also doing things like that? not quite sure
19:02dj-death: alyssa: define "garbage" ;)
19:02dj-death: alyssa: I think I prefer less capable than slightly broken in 536 different ways
19:03karolherbst: nvidia also doesn't have bounds checks on buffers :P though they do exist for other things generally :D
19:06alyssa: dj-death: well.. this is a gles3.1 part that i'm trying to run dx12 on....
19:06alyssa: and it's the pipelineist hw in the industry and I did ESO on it....
20:24cwabbott: uhh, am I the only one for whom https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29986 broke everything
20:25cwabbott: https://www.irccloud.com/pastebin/FC64H99w/
20:26cwabbott: ishitatsuyuki: ^
21:17zmike: karolherbst: do you mean is_resource_busy ?
21:22karolherbst: no, like.. some operations are more or less blocking by default on some drivers, e.g. texture_subdata. you can e.g. have a temporary resource you do subdata on and then do resource_copy to the actual resource instead, so you don't risk waiting on the resource on the CPU side for the subdata
21:22karolherbst: or rather, that's what I'm considering doing
21:23karolherbst: but I'd rather not
21:23karolherbst: maybe I should write shaders for all those ops :D
21:24karolherbst: but yeah.. maybe is_resource_busy would help to only do something weird if it's really needed
21:25zmike: yeah you're talking about the staging buffer with unmap sync dance
21:27zmike: if you mean doing it truly async, i.e., using a separate context and threads, tc doesn't do that for textures
21:30karolherbst: yeah... I'll have to check if threaded_context even makes sense for what I'm doing or not or if it's better if I do it all manually
21:40zmike: tc mostly just does stuff for buffer invalidation
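(A minimal sketch of the staging dance being discussed, written against the gallium pipe_context/pipe_screen interfaces — the helper name is made up, the exact prototypes are from memory, and error handling is omitted, so treat it as a sketch rather than the actual plan:)
    #include "pipe/p_context.h"
    #include "pipe/p_screen.h"
    #include "pipe/p_state.h"
    #include "util/u_inlines.h"

    /* Hypothetical helper: write through an idle staging resource so the
     * subdata never stalls on a busy destination; the GPU does the final copy. */
    static void
    upload_via_staging(struct pipe_context *ctx, struct pipe_screen *screen,
                       struct pipe_resource *dst, unsigned level,
                       const struct pipe_box *box, const void *data,
                       unsigned stride, uintptr_t layer_stride)
    {
       /* Throwaway staging resource shaped like the destination. */
       struct pipe_resource templ = *dst;
       templ.usage = PIPE_USAGE_STAGING;
       templ.bind = 0;
       templ.last_level = 0;
       struct pipe_resource *staging = screen->resource_create(screen, &templ);

       /* CPU writes land in the freshly created (idle) staging resource. */
       ctx->texture_subdata(ctx, staging, 0, PIPE_MAP_WRITE,
                            box, data, stride, layer_stride);

       /* The GPU copies the box from staging into the possibly-busy dst. */
       ctx->resource_copy_region(ctx, dst, level, box->x, box->y, box->z,
                                 staging, 0, box);

       pipe_resource_reference(&staging, NULL);
    }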
22:46Company: llvmpipe git seems to be very flaky atm
22:47Company: are there bigger refactorings going on?
23:15airlied: Company: zmike is burning the whole place down
23:15Company: so I can just wait for a while until it settles
23:15Company: before filing tons of bugs
23:16Company: I'm doing a bunch of perf optimizations atm
23:16Company: well, trying to
23:16Company: and testing them on my rpi and llvmpipe from time to time to see if fps goes up there
23:17airlied: probably worth filing one or two; if CI doesn't cover it, I'm not sure he'll catch things
23:17airlied: is it just failing to load?
23:18airlied: since there isn't much llvmpipe development going on, it's all around the glx/egl/dri bits
23:18Company: no, I've had a few random crashes
23:18Company: that weren't reproducible
23:18Company: and it's complaining about syncs being invalid from time to time
23:18airlied: oh those might be worth trying to file
23:19Company: those are probably the most evil ones, because if it's syncs it's likely texture handoff from gstreamer
23:24Company: after having a look, those might be my fault and llvmpipe is the only one finding a race with make_current() (because it's too slow)
23:25Company: will investigate tomorrow