ChanServ changed the topic of #freedesktop to: https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
<whot>
Consolatis: static content is unrealistic when you are asked to produce every single page that gitlab might possibly link to, ever
<whot>
Consolatis: you can't disk-space your way out of a git blame request for every git commit in the mesa repository. and you can't cache it either, because there are too many requests and they're too disparate to rely on caching
<Consolatis>
I agree about git blame and git commits (same for the repo content at each individual commit). Those are likely better served by requiring a user account. git blame I would remove completely. I was mostly talking about Issues and MRs and their index lists
<daniels>
Consolatis: when you say ‘instead of one giant query, I would aggressively cache the content in Redis through many smaller queries so clients are effectively served static content’, you’re describing exactly how GitLab actually works
<daniels>
but yeah, you might want to go read up on LWN’s experience of the crawlers, as a largely-static website with much much much less content to attempt to cache
<daniels>
and then think about the number of URLs on the internet, and whether, if you were trying to make an LLM crawler and didn't care about externalities, you'd bother indexing every URL you'd ever seen and checking whether you needed to fetch it again, or just suck up the content en masse and not bother with the URL tracking
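The caching scheme daniels refers to is the classic cache-aside pattern: check Redis, serve the hit, otherwise render and store. A minimal sketch in Python, assuming a local Redis and a hypothetical render_page() standing in for GitLab's real page rendering (this illustrates the pattern, not GitLab's actual code):

    import hashlib
    import redis  # pip install redis

    r = redis.Redis(host="localhost", port=6379)
    TTL_SECONDS = 300  # hypothetical cache lifetime

    def render_page(url: str) -> bytes:
        # Stand-in for the expensive part: DB queries, diff/blame
        # computation, HTML templating.
        return f"<html>rendered {url}</html>".encode()

    def serve(url: str) -> bytes:
        # Cache-aside: try Redis first, render and store on a miss.
        key = "page:" + hashlib.sha256(url.encode()).hexdigest()
        cached = r.get(key)
        if cached is not None:
            return cached  # hit: effectively static content
        body = render_page(url)
        r.setex(key, TTL_SECONDS, body)
        return body

The failure mode daniels points to follows directly: a crawler that walks every distinct URL (every commit, every blame view) never produces a cache hit, so every request pays the full render cost and the cache only adds overhead.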
<karolherbst>
can somebody point me to where the spam bot fetches emoji reactions? I'd be interested in maybe making the bot a bit more competent in that regard. We've seen a few insulting emoji reactions being used and were wondering if the bot can help point out when individuals get harassed or targeted by rando accounts or something
<karolherbst>
and insulting/racist emoji reactions are something we (and also other communities) have seen happening in those cases
<karolherbst>
no idea what to do about it yet, but I was thinking of digging through the API to see what could be done there
<karolherbst>
those reactions can be removed through the API, but it's a real pain and takes a lot of time to do it all manually
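On the API angle: GitLab models reactions as "award emoji", and its REST API can list and delete them per issue. A minimal sketch with python-requests, where the token, project path, emoji blocklist, and issue number are hypothetical placeholders; only the /award_emoji endpoints themselves are GitLab's documented API, and note that deleting someone else's reaction requires admin rights:

    import requests

    GITLAB = "https://gitlab.freedesktop.org/api/v4"
    HEADERS = {"PRIVATE-TOKEN": "glpat-..."}    # hypothetical token with api scope
    PROJECT = "mesa%2Fmesa"                     # URL-encoded project path (example)
    BLOCKED = {"clown_face", "nauseated_face"}  # hypothetical blocklist of emoji names

    def scrub_issue(issue_iid: int) -> None:
        # List reactions on an issue: GET .../issues/:iid/award_emoji
        url = f"{GITLAB}/projects/{PROJECT}/issues/{issue_iid}/award_emoji"
        awards = requests.get(url, headers=HEADERS, params={"per_page": 100}).json()
        for award in awards:  # first page only; a real bot would paginate
            if award["name"] in BLOCKED:
                # DELETE only works for admins or the reaction's author,
                # so a cleanup bot would need elevated permissions.
                requests.delete(f"{url}/{award['id']}", headers=HEADERS)
                print(f"removed :{award['name']}: by {award['user']['username']}")

    scrub_issue(1)  # example issue IID

The same award_emoji endpoints exist for merge requests and snippets (swap issues for merge_requests or snippets in the path), so a bot could scrub all three in the same loop.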