ChanServ changed the topic of #panfrost to: Panfrost - FLOSS Mali Midgard + Bifrost + Valhall - Logs https://oftc.irclog.whitequark.org/panfrost
qpla has quit [Ping timeout: 480 seconds]
lcagustini has quit [Remote host closed the connection]
lcagustini has joined #panfrost
Consolatis has quit [Ping timeout: 480 seconds]
Consolatis has joined #panfrost
hexdump01 has joined #panfrost
hexdump0815 has quit [Ping timeout: 480 seconds]
remexre has quit [Ping timeout: 480 seconds]
remexre has joined #panfrost
lucas_ has joined #panfrost
lcagustini has quit [Ping timeout: 480 seconds]
qpla has joined #panfrost
faveoled has joined #panfrost
faveoled has quit [Remote host closed the connection]
h112 has joined #panfrost
h112 has quit []
rasterman has joined #panfrost
MoeIcenowy has quit [Quit: ZNC 1.9.1 - https://znc.in]
MoeIcenowy has joined #panfrost
krei-se has quit [Quit: ZNC 1.9.1 - https://znc.in]
chewitt has joined #panfrost
krei-se has joined #panfrost
<chewitt> Hello folks :)
<chewitt> mesa 25.1 appears to introduce a new dependency on LLVM for panfrost
<chewitt> it looks related to clc ?
<chewitt> which I'm guessing is related to OpenCL support?
<chewitt> If yes, is there a way to disable that at build time?
<chewitt> as we have no need/use for OpenCL support and pulling LLVM into the build process has a large cumulative impact on our distro CI pipelines
<linkmauve> chewitt, many drivers in Mesa have started to use clc to implement extensions that aren’t present in hardware, such as geometry or tessellation shaders. I don’t know if this is the case for panfrost, but it is for instance for asahi.
<chewitt> sounds plausible
<linkmauve> There is src/panfrost/libpan/query_pool.cl for instance.
<linkmauve> Apparently that’s only used before v10, though, and I’m not aware of a way to build panfrost for only a specific architecture.
<chewitt> I don't see 'src/panfrost/libpan/query_pool.cl' in mesa source anywhere
<chewitt> I do find mentions of clc under src/asahi/clc
<chewitt> ahh, I reset my 'main' branch against the wrong remote (was out of date)
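For reference, recent mesa releases split the clc compiler out so that only one host build needs LLVM: a prebuilt mesa_clc binary can be reused by every cross build. A minimal sketch, assuming the `mesa-clc` and `install-mesa-clc` meson options exist in this mesa branch (check meson_options.txt for the exact spelling):

    # host build: compile and install mesa_clc once (the only build that needs LLVM)
    meson setup build-host -Dmesa-clc=enabled -Dinstall-mesa-clc=true
    ninja -C build-host install

    # target builds: consume the prebuilt mesa_clc instead of linking LLVM
    meson setup build-aarch64 --cross-file aarch64.txt -Dmesa-clc=system
    ninja -C build-aarch64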
<chewitt> I also need to report a probably long-standing regression with T820 support
<chewitt> the kernel driver loads, then
<chewitt> panfrost d00c0000.gpu: GPU Fault 0x00ff0388 (GPU_SHAREABILITY_FAULT) at 0x000000759dedce40
<chewitt> followed by this (repeating every 60 seconds)
<chewitt> [ 19.832891] panfrost d00c0000.gpu: shader power transition timeout
<chewitt> [ 19.834924] panfrost d00c0000.gpu: tiler power transition timeout
<chewitt> [ 19.836948] panfrost d00c0000.gpu: l2 power transition timeout
<chewitt> actually not every 60 secs, looks to be some kind of exponential back-off
<chewitt> this is on Linux 6.14.5 with mesa 25.1 but I know the fault has been around for a while (more than a year possibly)
<chewitt> so it's probably something specific to Midgard, and a difference between T860 and T820
<chewitt> (as Rockchip folks have an eye on T860 support, and nobody has an eye on Amlogic S912 support)
<chewitt> I did experiment with extending the shader/tiler/l2 timeout values in the kernel driver, but moving from 2000ms (current) to 5000ms makes no difference
<linkmauve> chewitt, for this kind of issue, it might be useful to bisect the kernel to determine which version or commit broke it exactly.
<chewitt> I know .. I'd love to avoid that task though :)
<chewitt> I've seen comments about 'midgard being broken' from folks before though, so I might hold out to see if there's a memory jog or known issue in the back of people's minds first
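If it does come to bisecting, the git mechanics are short even if the build-and-boot cycle per step is not; a minimal sketch, where v6.6 is a placeholder for whatever kernel last worked on the T820 board:

    git bisect start
    git bisect bad v6.14              # current kernel showing the fault
    git bisect good v6.6              # placeholder: substitute a known-good version
    # build, boot, watch dmesg for GPU_SHAREABILITY_FAULT, then mark the result:
    git bisect good                   # or: git bisect bad
    git bisect reset                  # restore the original HEAD when finished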
<chewitt> the GPU seems to be working okay for our use-case (running Kodi) .. no visual glitches or artefacts in the GUI
<chewitt> and I have a hunch the issue is on the mesa side not the kernel side (the kernel driver hasn't changed much in aeons)
<linkmauve> chewitt, ah, when you say it happens right on boot, do you actually start using the GPU at this point? Could you try without doing that?
<chewitt> exactly .. the fault occurs when Kodi starts some time after the driver probes
<chewitt> the journal is better for seeing that https://paste.libreelec.tv/helping-chipmunk.log
<chewitt> tracing what happens inside mesa is probably more productive than bisecting the kernel
<chewitt> I might need pointers on how to do that .. it's been a while
<linkmauve> Maybe try with another compositor first?
<chewitt> not really possible in the distro without major work
<linkmauve> Maybe try another distribution then?
<chewitt> there's no desktop environment at all in LibreELEC, we run Kodi directly on GBM buffers
<chewitt> but I can set environment variables to dump stuff to the journal
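The panfrost driver reads a PAN_MESA_DEBUG environment variable for this; the available flags vary between mesa versions, so treat `trace` and `sync` below as assumptions and check the panfrost documentation for the current list. A sketch (the kodi.bin invocation is illustrative, not the exact LibreELEC launcher):

    # dump command-stream traces and force synchronous submission,
    # routing the output into the systemd journal
    PAN_MESA_DEBUG=trace,sync kodi.bin 2>&1 | systemd-cat -t kodi-gpu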
pbrobinson has quit [Ping timeout: 480 seconds]
<alyssa> chewitt: fwiw it's a build-time dependency, not a runtime one
<alyssa> i.e. you don't need to ship llvm with libreelec images, you just need the right toolchain when building mesa
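A quick way to verify that on a built image (the driver path is a guess; newer mesa installs a combined libgallium megadriver, so adjust the filename to whatever the image actually ships):

    # no output means no runtime LLVM linkage
    ldd /usr/lib/dri/panfrost_dri.so | grep -i llvm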
<chewitt> historically only our x86_64 image needs to build LLVM for mesa
<chewitt> now most of the aarch64 images need it too
<chewitt> it means our current roster of images only just builds (all of them) in under 24h, where before they'd finish well under the limit
<chewitt> our CI isn't that sophisticated, but it's cost-effective given we're compiling an entire embedded distro image (x13 or so)
<alyssa> aarch64 is playing with the big kids now \shrug/
<chewitt> an opportunity to use my school French :)
cyrinux9490 has quit []
<alyssa> I admit that libreelec is the sort of use case that slips thru the cracks here but i'm looking at a much bigger stack of cards here
cyrinux9490 has joined #panfrost
<alyssa> and clc is/will be a huge boon for panfrost's ability to be a competent/credible alternative to the DDK (which uses glsl to a similar effect, iirc)
<CounterPillow> It appears LE currently builds the entire distribution from scratch for each device, even though things like the toolchain could be shared?
<chewitt> on some level it's all about optimisation
<chewitt> different hardware targets have different silicon supporting different features, and the original idea was to compile with target-specific optimisations
<chewitt> so the buildsystem compiles the toolchain for that target and then everything else
<chewitt> in theory we could build and cache the toolchain (until some toolchain element is updated) .. but we don't currently do that (we never needed to go that far)
dsimic is now known as Guest15486
dsimic has joined #panfrost
<chewitt> in our dev branch we track current/latest versions and bump toolchain items frequently enough that we'd need to rebuild it pretty regularly
<chewitt> there's nothing to say we can't change; but in the history of the project we've never needed to (and have avoided) taking that step
Guest15486 has quit [Ping timeout: 480 seconds]
<chewitt> the penalty is compile time, but it also means we don't need to pay any special attention to bumping/merging and tracking toolchain stuff
<chewitt> and we have only a handful of staff compared to a typical Debian derivative, so being relaxed has value
<chewitt> anyway.. i'm waffling
<CounterPillow> I'm not advocating for a complex packaging system with ABI tracking and such; I'm just saying your Actions run, which is currently something like 13 separate full builds, could have one extra job that feeds into those 13 and provides the toolchain for all of them.
<CounterPillow> at that point, you don't have to track the toolchain separately, it's still built in the same go, just only once.
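As a rough illustration of that split (the `make toolchain` target and paths here are hypothetical, not actual LibreELEC buildsystem commands):

    # job 0: build the shared toolchain once, publish it as a CI artifact
    PROJECT=Generic ARCH=aarch64 make toolchain       # hypothetical target
    tar czf toolchain.tar.gz build.*/toolchain

    # jobs 1..13: each device build restores the artifact, then builds as usual
    tar xzf toolchain.tar.gz
    PROJECT=Amlogic DEVICE=AMLGX ARCH=aarch64 make image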
lucas_ has quit [Read error: Connection reset by peer]
lcagustini has joined #panfrost
lcagustini has quit [Remote host closed the connection]
lcagustini has joined #panfrost
pbrobinson has joined #panfrost
rasterman has quit [Quit: Gettin' stinky!]
pbrobinson has quit [Ping timeout: 480 seconds]
Consolatis_ has joined #panfrost
Consolatis is now known as Guest15507
Consolatis_ is now known as Consolatis
Guest15507 has quit [Ping timeout: 480 seconds]