08:19DavidHeidelberg: karolherbst: u no use Matrix anymore?
09:09karolherbst: DavidHeidelberg: not really
09:21karolherbst: DavidHeidelberg: are dms even working properly?
13:45zmike: eric_engestrom: I don't recall who over there is in charge of v3d ci, but what's going on with it?
13:46zmike: I've seen an unreasonable number of hw flakes there over the past week
14:30eric_engestrom: zmike: jasuarez & chema would be the main contact for that farm, although they're not always on irc
14:30eric_engestrom: but yeah, we're aware of this issue but haven't been able to figure it out, and we're hoping to replace this farm with a ci-tron farm soon™
14:30eric_engestrom: I'm also hoping https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30340 will help
14:34jasuarez: zmike: link to the failing job?
14:34jasuarez: I can try to disable that rpi for now
14:40jasuarez: Btw last week we moved some duts to another device and we made a mistake when plugging cables. Already fixed though
14:46zmike: jasuarez: seems like I'm getting a random piglit flake now, but last night I had some hw fails
14:47zmike: don't have links unfortunately
14:48zmike: eric_engestrom: does ci-tron fix hardware issues?
14:48zmike: cuz not booting seems like a hardware issue
14:49jasuarez: I'm sorry about that. Sometimes the serial cables don't work for unknown reasons; no logs in the kernel about why. We are trying to figure it out or minimize the problem, e.g. by retrying in this situation
14:51zmike: these things happen I guess
14:54MrCooper: daniels: FYI, just bisected a crash with EGL running via waypipe to 361f3622587e ("dri: Unify createImage and createImageWithModifiers")
14:56eric_engestrom: zmike: ci-tron helps with configuration issues like writing down the wrong pairing of dut/serial/pdu, which there have been many of in the baremetal farm because it has to be written down by hand, while on ci-tron it's all automatically detected
14:58zmike: I see
14:58zmike: neat
15:11daniels: MrCooper: a backtrace would be super handy
15:11daniels: MrCooper: I've been told that it breaks Xwl on virtio-amdgpu but waiting to hear more
15:12daniels: MrCooper: knowing the underlying driver would also be v useful
15:14MrCooper: daniels: https://paste.debian.net/hidden/6a32e5eb/ , looks like the surface struct contains garbage
15:14MrCooper: this is with radeonsi
15:15MrCooper: notably looks like it's picking the older GPU using the radeon kernel driver, so no modifiers
15:16MrCooper: slight correction, actually it's using amdgpu, still no modifiers though
15:16daniels: the contents of *surf look fine to me?
15:17daniels: this is the same as what Dmitry reported on virtio-amdgpu, and it seems to be that image creation fails which leaves the swapchain without a backbuffer
15:54MrCooper: daniels: create_dri_image_from_dmabuf_feedback calls create_dri_image with the linear & invalid modifiers
15:56MrCooper: create_dri_image can handle that without pscreen->resource_create_with_modifiers in principle; that might go against the principles of modifier-full APIs though?
16:02MrCooper: daniels: FWIW, waypipe advertises the linear & invalid modifiers for all formats via zwp_linux_dmabuf_v1
16:36daniels: MrCooper: bingo - you've nailed it, thanks
16:37MrCooper: so handling this in create_dri_image is fine? or should it rather be handled in create_dri_image_from_dmabuf_feedback?
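As a rough sketch of the case being discussed, assuming a Gallium pipe_screen and the drm_fourcc.h modifier macros (the helper name is hypothetical and this is not the change in MR 30383): a modifier list consisting only of LINEAR and INVALID carries no real placement constraint, so a screen without resource_create_with_modifiers could in principle still satisfy it with plain resource_create.

```c
/* Hypothetical helper, not the actual MR 30383 change: detect the
 * LINEAR/INVALID-only modifier list that waypipe advertises, which a
 * screen lacking resource_create_with_modifiers can still handle via
 * plain resource_create. */
#include <stdbool.h>
#include <stdint.h>
#include <drm_fourcc.h>   /* DRM_FORMAT_MOD_LINEAR, DRM_FORMAT_MOD_INVALID (libdrm) */

static bool
mods_only_linear_or_invalid(const uint64_t *mods, unsigned count)
{
   for (unsigned i = 0; i < count; i++) {
      if (mods[i] != DRM_FORMAT_MOD_LINEAR && mods[i] != DRM_FORMAT_MOD_INVALID)
         return false;
   }
   return true;
}
```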
16:38gurenta: Hey, I'm trying to build mesa with meson and I'm failing at the clang-cpp dependency! Does anybody know the package that I need for RHEL / Oracle Linux?
16:42soreau: gurenta: probably whatever package provides libclang-cpp.so, llvm?
16:43soreau: on Debian it looks like libclang-cpp#-dev (with # being the LLVM version)
16:44Ryback_: gurenta: "dnf provides */clang-cpp" should work?
16:47gurenta: libclang-cpp.so.17 is in clang-libs-17.0.6-5.0.1.el9.i686 which i installed
16:47daniels: MrCooper: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30383
16:48soreau: gurenta: is there a symlink libclang-cpp.so pointing to the lib? if it's in a nonstandard location, you might have to point LIBRARY_PATH to it
16:49soreau: though I would like to think that wouldn't be necessary
16:52daniels: gurenta: you need clang-devel
16:52daniels: (libclang-cpp.so.17 is only for runtime, you need the devel package to build things)
16:53gurenta: Yep
16:53gurenta: That's it
16:53gurenta: Thank you guys, thanks daniels
16:54gurenta: Now it finishes
16:54gurenta: tyvm
16:57daniels: MrCooper: sorry I missed half your messages here - gonna write it up in the MR so it's more visible
16:57MrCooper: no worries, thanks
16:59daniels: np, thanks for pointing straight at the solution
17:00MrCooper: o/\o
17:02parthiban: frankbinns: I have an A133 SoC which comes with a GE8300 PowerVR. I tried to load the mainline driver (which is for GE8320), but I'm unable to read the PBVNC. Is the existing mainline driver for GE8320 compatible with the GE8300 as well? Many thanks.
19:21mattst88: I have a deref that isn't lowered in nir_lower_io because deref->modes == nir_var_mem_push_const, but state->modes == nir_var_uniform, so nir_deref_mode_is_one_of() returns false
19:22mattst88: this is failing in a shader with an infinite loop, so my guess is that some information isn't getting propagated due to the wonky CFG
19:25mattst88: I'm not even sure if this is where such a deref should be lowered
19:26alyssa: eric_engestrom: https://gitlab.freedesktop.org/mesa/mesa/-/jobs/61564598 bweh?
19:27mattst88: kind of looks like nir_lower_explicit_io is actually responsible for lowering derefs of nir_var_mem_push_const
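For illustration, a hedged sketch of that split using the NIR entry points (the wrapper name, type_size callback, and 32-bit-offset address format are assumptions for the example, not what any particular driver necessarily does): nir_lower_io covers nir_var_uniform, while nir_var_mem_push_const derefs are left for nir_lower_explicit_io.

```c
#include "nir.h"

/* Sketch only: lowering uniforms via nir_lower_io leaves push-constant
 * derefs alone by design; those are handled by nir_lower_explicit_io,
 * which wants an explicit address format (32-bit offset assumed here). */
static void
lower_io_passes(nir_shader *shader,
                int (*type_size)(const struct glsl_type *, bool))
{
   nir_lower_io(shader, nir_var_uniform, type_size, 0);
   nir_lower_explicit_io(shader, nir_var_mem_push_const,
                         nir_address_format_32bit_offset);
}
```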
19:28zmike: jasuarez: ^
19:31jasuarez: Ok. That's a different issue...
19:31jasuarez: Isn't Marge retrying?
19:32zmike: I'm just fwding the ping
19:34alyssa: jasuarez: hi I'm Marge
19:35jasuarez: :)
19:39alyssa: going to bump right up against the 60min timeout, isn't it... splendid..
19:41jasuarez: alyssa: it got merged in the end
19:41alyssa: yay!
19:41alyssa: jasuarez: incidentally, the rpi5 job seems to be a bit slower than the soft limit in policy though it does seem ok for the hard limit
19:41alyssa: `Duration: 12:35`; the recommendation I think is to keep deqp-runner time under 10min
19:42jasuarez: Really?
19:42jasuarez: I thought it was more around 15min
19:42jasuarez: With a hard limit of 20 min
19:42alyssa: https://docs.mesa3d.org/ci/index.html#ci-job-user-expectations
19:43alyssa: > we require that the test farm be able to handle a whole pipeline’s worth of jobs in less than 15 minutes (to compare, the build stage is about 10 minutes). Given boot times and intermittent network delays, this generally means that the test runtime as reported by deqp-runner should be kept to 10 minutes.
19:44jasuarez: Right. But the overall job time including booting and so on should be around 15min
19:44alyssa: at least as of December 2023, the rest of the rpi jobs were meeting these targets with plenty of breathing room, so it's just the new rpi5 job that should be tweaked
19:45jasuarez: It took 13 min in total
19:46alyssa: yes, that's why I said "slower than the soft limit in policy though it does seem ok for the hard limit"
19:47alyssa: to compare, the rpi4 jobs are e.g. 8min of deqp-runner time with just about 10min of total job time
19:49jasuarez: Ok. I'll try to adjust so test time < 10min
19:51alyssa: cheers, thanks :)
20:16jimc: how to use DRM-CI effectively?
20:16jimc: I've got a fork (plain old copy) of drm-next on gitlab,
20:16jimc: and I've branched that to merge in my dd-fix-9 branch
20:16jimc: (approx)
20:16jimc: I've set up 2 pipelines, on them respectively,
20:16jimc: they are failing the same tests.
20:16jimc: So I haven't added any new failures,
20:16jimc: but that's not exactly a ringing endorsement.
20:16jimc: It would be nice to see all green both without and with my patches
20:16jimc: It prompts a few questions:
20:16jimc: why doesn't drm/next have a drm-ci pipeline?
20:16jimc: wouldn't that provide the current, historic status for each merge?
20:16jimc: tests on drm-next could run many times over time,
20:16jimc: uncovering flakes, and providing sufficient testing to quantify "flakiness"
21:16sghuge: bnieuwenhuizen: pixelcluster konstantin Quick question about the ploc_internal.comp shader from BVH: is there a dependency on a fixed subgroup size, or can we expect it to work with 8/16/32 etc.?
21:17bnieuwenhuizen: sghuge: I think we divide by an assumed subgroup size in there at least
21:18bnieuwenhuizen: like all the divide by 64 for the aggregation are assumed subgroup sizes
21:21sghuge: bnieuwenhuizen: ACK. Thanks for confirming that.
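To illustrate the kind of assumption being discussed (plain C with made-up names, not the actual ploc_internal.comp code): aggregation that divides by a literal 64 is only correct when the subgroup/wave size really is 64.

```c
#include <stdint.h>

/* Illustration only: the hard-coded form silently miscounts with
 * 8/16/32-wide subgroups, while the size-agnostic form does not. */
static uint32_t
groups_assuming_wave64(uint32_t task_count)
{
   return (task_count + 63) / 64;   /* implicitly assumes subgroup size 64 */
}

static uint32_t
groups_for_any_subgroup_size(uint32_t task_count, uint32_t subgroup_size)
{
   return (task_count + subgroup_size - 1) / subgroup_size;
}
```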