12:39 tzimmermann: jfalempe, thanks for reviewing. i hope this solution will finally stick. i have a similar patchset for ast in a bit.
13:17 zmike: jenatali: is there any kind of error message here? https://gitlab.freedesktop.org/mesa/mesa/-/jobs/61940382
13:17 zmike: or is "FAILED" the whole of it
13:24 jenatali: zmike: raw view shows the errors: https://gitlab.freedesktop.org/mesa/mesa/-/jobs/61940382/raw
13:24 jenatali: driQueryOptionb, i, and str versions are unresolved
13:24 jenatali: I think it's a gitlab bug
13:25 jfalempe: tzimmermann: you're welcome, thanks for cleaning up this BMC support.
13:25 zmike: I see
13:25 zmike: very convenient
13:25 zmike: thank you gitlab
13:32 MrCooper: apparently virtio on s390x (big endian) advertises only BGRX8888 as supported, seems like a bug?
13:49 alyssa: austriancoder: mind reviewing https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30501 ? thx
13:57 alyssa: what is the way to CI test common code patches that's not considered "abusive"?
13:58 alyssa: would like to get https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30327 unstuck
13:59 MrCooper: if you have no other way to test, manually triggering just the needed jobs seems fair
14:00 alyssa: MrCooper: every gl & vk job is needed
14:01 zmike: I've been complaining about this for months because there is no way
14:01 zmike: you just have to mash on the play buttons until it starts
14:02 alyssa: and.. people do that...?
14:02 zmike: no other option
14:03 zmike: pressing play on the top of the build job columns is usually enough
14:03 zmike: but you have to do it in stages as it completes
14:03 zmike: which is supremely annoying
14:03 alyssa: ummmmm... ok..
14:03 alyssa: i mean the other option is to stop working on common code, which i had for a while, but i had hoped that 7 months later things would be working again
14:04 alyssa: i see.
14:13 mattst88: is "start CI jobs" not a thing you can do from the gitlab REST API?
14:13 Hazematman: Can you not use the `./bin/ci/ci_run_n_monitor.py` script with a regex to match all the gl & vk tests?
14:13 Hazematman: That's what i've been doing to trigger specific jobs, without fiddling with the gitlab UI
14:14 alyssa: Hazematman: it would be .* regex since it's touching all the drivers
14:14 alyssa: but apparently this is considered "abusive"
14:15 zmike: .* doesn't work
14:16 zmike: or at least it has never worked any of the times I've tried it
14:16 alyssa: that too (:
14:16 Hazematman: zmike: It does, if you do the `--force-manual` or whatever the option is. But the use case of testing against all CI for exploratory changes seems like a good one. It would be nice if there was a way to do that without being "abusive" for hard-to-test changes
14:19 zmike: why would I need --force-manual to trigger jobs that are not manual?
14:20 zmike: https://gitlab.freedesktop.org/mesa/mesa/-/issues/9986 should just be fixed
14:21 Hazematman: zmike: I'm not sure, but that's how I've gotten all jobs to run with the `.*` regex 🤷‍♂️
14:21 zmike: I'm trying to run fewer jobs, not more 🤕
14:24 alyssa: zmike: see also https://gitlab.freedesktop.org/mesa/mesa/-/issues/7849#note_2515127
14:25 zmike: yeah there have been a lot of tickets about it
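A sketch of the `ci_run_n_monitor.py` invocation being discussed, assuming the `--target` regex option plus the `--force-manual` flag Hazematman half-remembers; check the script's `--help` in your checkout, since the exact flag names may differ:

```shell
# Run from the top of a Mesa checkout; the script talks to the GitLab API,
# so it needs network access and a token it can read.
./bin/ci/ci_run_n_monitor.py --target 'zink|lavapipe'      # regex over job names
./bin/ci/ci_run_n_monitor.py --target '.*' --force-manual  # the all-jobs case
```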
14:38 zmike: alyssa: I assume panfrost passes dEQP-EGL.functional.partial_update.render_outside_damage_region ?
14:46 alyssa: zmike: dunno, I haven't booted a Mali device in a while
14:47 zmike: oh I figured you'd just know
15:21 alyssa: dEQP-EGL was passing back in the day
15:22 alyssa: I don't know if those are new tests or if something's regressed since
15:22 zmike: I only asked because there are not many drivers in mesa that support KHR_partial_update and panfrost is one of them
15:23 alyssa: ah
15:23 alyssa: i don't think that ext does anything on panfrost on hw newer than mali-t860
15:30 MrCooper: alyssa: only running jobs which aren't actually needed is abusive
15:31 zmike: MrCooper: you probably know this - does EGL_KHR_partial_update include clear operations in the damage region?
15:31 zmike: or only draw commands
15:32 zmike: the spec refers only to "client api rendering" which can be ambiguous
15:33 mattst88: do we have anything that cleans ~/.cache/mesa_shader_cache? I just realized mine was 16G.
15:33 alyssa: MrCooper: sorry, I'm struggling to parse that, could you rephrase? thanks
15:33 zmike: mattst88: rm -r ?
15:33 MrCooper: alyssa: it's not abusive if you actually need all those jobs
15:34 alyssa: ah, yeah.
15:34 MrCooper: what's abusive is blindly running all jobs when only some subset is needed
15:34 alyssa: the issue as always is that I don't actually need the -full jobs, just the set Marge would run, but there's not a way to separate them
15:35 MrCooper: zmike: not sure offhand
15:35 alyssa: mattst88: this doesn't directly answer your question, but if you aren't already - consider setting MESA_SHADER_CACHE_DIR=/dev/shm for CTS runs
15:35 mattst88: zmike: yeah, that's what I did -- but it'd be nice if we had a program that could run occasionally and clean files that are older than $date or something
15:35 alyssa: should be faster, avoids polluting the cache, and saves your SSD some write cycles
15:36 alyssa: I think CI does something similar
15:36 mattst88: ccache for example allows setting a limit on the cache size and then when it's reached, deletes files to keep the cache size-limited
15:36 mattst88: alyssa: that's a good idea. thanks
15:36 MrCooper: mattst88: the cache is supposed to be pruned to 1G by default IIRC, so that sounds like something might have gone wrong there
15:36 mattst88: MrCooper: hm, okay. thanks
15:36 alyssa: (Previously I had disabled the shader cache for CTS, but it's faster to enable it but backed by RAM. At least for GL.)
15:37 mattst88: alyssa: presumably because you actually get lots of cache hits running CTS?
15:37 alyssa: Yeah
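alyssa's RAM-backed cache setup, spelled out; the directory name under /dev/shm is an arbitrary choice, and the CTS invocation itself is whatever you normally run:

```shell
# Back Mesa's shader cache with tmpfs for a CTS run: faster, throwaway
# after reboot, and no SSD write cycles. Directory name is arbitrary.
export MESA_SHADER_CACHE_DIR=/dev/shm/mesa-cts-cache
mkdir -p "$MESA_SHADER_CACHE_DIR"
# ...now launch the CTS run (deqp-runner, piglit, etc.) in this environment.
echo "$MESA_SHADER_CACHE_DIR"
```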
15:39 mattst88: It'd be cool to have some cache stats like `ccache -s` gives you
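The age-based cleanup mattst88 is wishing for can be approximated with stock `find`; a sketch, demoed on a throwaway directory rather than the real `~/.cache/mesa_shader_cache`, and the 30-day cutoff is an arbitrary choice:

```shell
# Stand-in for ~/.cache/mesa_shader_cache so the demo is safe to run.
cache=$(mktemp -d)
touch -t 202001010000 "$cache/stale"   # entry well past the cutoff
touch "$cache/fresh"                   # entry written just now
# Delete cache entries not modified in the last 30 days.
find "$cache" -type f -mtime +30 -delete
ls "$cache"   # only "fresh" survives
```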
16:13 jenatali: alyssa: Thanks for tackling the DXIL change in that MR. I expected you were going to ping me to do it :P
16:39 gfxstrand: Lynne: Typing it this week
16:43 alyssa: jenatali: I expect you to debug it ;P
16:52 ccr: "no, mr. bond .. I expect you to debug."
17:53 Lynne: gfxstrand: amazing
17:53 Lynne: I remember you were saying it would take an enormous amount of hacks to implement, did you find a way around it?
17:55 alyssa: Lynne: she is embracing the hacks ;)
17:57 Lynne: for sane descriptor handling, as a certain chair and I would say, it's all worth it
18:38 alyssa: Lynne: I am not sure I agree that edb is sane descriptor handling..
18:39 Lynne: do you subscribe to the religion of d3d12-style descriptor heaps?
18:39 austriancoder: alyssa: done
18:40 alyssa: Lynne: I mean. At least I *understand* heaps ;)
18:40 alyssa: austriancoder: thanks!
18:41 Lynne: I don't think heaps are simpler at all...
18:41 Lynne: what's simpler than a buffer which you map and just set descriptors into?
18:41 jenatali: It's just an opaque version of the same?
18:43 Sachiel: it's not so simple when the hw doesn't work that way
18:44 alyssa: admittedly i haven't taken the time to understand EDB
18:45 alyssa: but the idea of having multiple descriptor buffers instead of just 1 heap is a sticking point for me
18:45 alyssa: heaps are just easier to reason about for me
18:45 jenatali: Yeah. D3D's got one heap because some hardware can only have one
18:46 alyssa: honeykrisp is 100% heaps internally, even though the hardware is sometimes more flexible than that
18:49 dj-death: alyssa: how big's your heap?
18:50 gfxstrand: EDB is sane if you're AMD
18:50 jenatali: Right
18:51 alyssa: dj-death: which one
18:52 dj-death: alyssa: okay :)
18:52 dj-death: alyssa: so what heaps do you have and what sizes? ;)
18:53 alyssa: tiny sampler heap, hardware
18:54 alyssa: massive texture heap, arbitrary size (well up to 4GiB i guess), these are closer to buffers in hardware but we just use one as a global heap
18:55 dj-death: okay
18:55 alyssa: there's no structured buffer hardware so that's whatever software wants
18:55 alyssa: Huh
18:55 alyssa: I guess I do subscribe to the heap religion
18:55 dj-death: so all buffer accesses are done with global load/store?
18:56 alyssa: Yep
18:56 alyssa: not even bounds checking in hardware
18:56 alyssa: not even for vertex fetch (:
18:57 dj-death: yeah interesting
18:59 DavidHeidelberg: karolherbst: HW clear_buffer is giving a small speedup, 130 -> 120ms per token on Intel TGL. Not as cool as on freedreno, though still nice (plus accounting for the fact I have an i7-1185G7 @ 3GHz). also CPU usage decreases, but it's 1 core 50% -> 25-30%...
18:59 alyssa: dj-death: """interesting"""
18:59 alyssa: i think you mean i've set a new record for running dxvk on garbage hardware ;p
19:00 karolherbst: DavidHeidelberg: yeah... anything which does stuff on the CPU will tank perf, and I'm already thinking of ways to mitigate all of this, even considering not using those callbacks at all... (e.g. going through a temporary buffer first or something)
19:01 dj-death: alyssa: I don't know
19:01 karolherbst: I was also considering adding some interface so that drivers can tell what's blocking and what's not
19:02 karolherbst: but I think threaded context is also doing things like that? not quite sure
19:02 dj-death: alyssa: define "garbage" ;)
19:02 dj-death: alyssa: I think I prefer less capable than slightly broken in 536 different ways
19:03 karolherbst: nvidia also doesn't have bounds checks on buffers :P though they do exist for other things generally :D
19:06 alyssa: dj-death: well.. this is a gles3.1 part that i'm trying to run dx12 on....
19:06 alyssa: and it's the pipelineist hw in the industry and I did ESO on it....
20:24 cwabbott: uhh, am I the only one for whom https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29986 broke everything
20:25 cwabbott: https://www.irccloud.com/pastebin/FC64H99w/
20:26 cwabbott: ishitatsuyuki: ^
21:17 zmike: karolherbst: do you mean is_resource_busy ?
21:22 karolherbst: no, like.. some operations are more or less blocking by default on some drivers e.g. texture_subdata, you can e.g. have a temporary resource you do subdata on and then do resource_copy to the actual resource instead so you don't risk waiting on the resource on the CPU side for the subdata
21:22 karolherbst: or rather, that's what I'm considering doing
21:23 karolherbst: but I'd rather not
21:23 karolherbst: maybe I should write shaders for all those ops :D
21:24 karolherbst: but yeah.. maybe is_resource_busy would help to only do something weird if it's really needed
21:25 zmike: yeah you're talking about the staging buffer with unmap sync dance
21:27 zmike: if you mean doing it truly async, i.e., using a separate context and threads, tc doesn't do that for textures
21:30 karolherbst: yeah... I'll have to check if threaded_context even makes sense for what I'm doing or not or if it's better if I do it all manually
21:40 zmike: tc mostly just does stuff for buffer invalidation
22:46 Company: llvmpipe git seems to be very flaky atm
22:47 Company: are there bigger refactorings going on?
23:15 airlied: Company: zmike is burning the whole place down
23:15 Company: so I can just wait for a while until it settles
23:15 Company: before filing tons of bugs
23:16 Company: I'm doing a bunch of perf optimizations atm
23:16 Company: well, trying to
23:16 Company: and testing them on my rpi and llvmpipe from time to time to see if fps goes up there
23:17 airlied: probably worth filing one or two, I'm not sure he'll catch things if CI doesn't cover it
23:17 airlied: is it just failing to load?
23:18 airlied: since there isn't much llvmpipe development going on, it's all around the glx/egl/dri bits
23:18 Company: no, I've had a few random crashes
23:18 Company: that weren't reproducible
23:18 Company: and it's complaining about syncs being invalid from time to time
23:18 airlied: oh those might be worth trying to file
23:19 Company: those are probably the most evil ones, because if it's syncs it's likely the texture handoff from gstreamer
23:24 Company: after having a look, those might be my fault and llvmpipe is the only one finding a race with make_current() (because it's too slow)
23:25 Company: will investigate tomorrow