#soylent | Logs for 2025-10-17
[01:00:12] -!- bender has quit [Remote host closed the connection]
[01:00:21] -!- bender [bender!bot@Soylent/Bot/Bender] has joined #soylent
[01:01:23] -!- Loggie [Loggie!Loggie@Soylent/BotArmy] has joined #soylent
[01:19:53] -!- AlwaysNever has quit [Read error: Connection reset by peer]
[02:43:32] -!- c0lo [c0lo!~c0lo@124.190.mg.jlq] has joined #soylent
[02:44:17] <c0lo> Social security, keeping the man warm for as long as he lives https://www.youtube.com
[02:44:21] <systemd> ^ This is Getting Wild
[03:02:42] -!- halibut has quit [Remote host closed the connection]
[03:02:59] -!- halibut [halibut!~halibut@CanHazVHOST/halibut] has joined #soylent
[03:04:48] -!- kolie_web [kolie_web!~kolie_web@Soylent/Staff/Management/kolie] has joined #soylent
[03:04:48] -!- mode/#soylent [+o kolie_web] by Imogen
[03:04:56] <kolie_web> o/
[03:29:18] <chromas> Social security, giving the government extra money so they can decide if they want to give some of it back decades later.
[04:18:58] <c0lo> Corporate welfare, delivers at least once every 4 years. One only needs to pay a brok... a lobbyist, that is.
[04:26:58] -!- kolie_web has quit [Quit: Client closed]
[06:22:20] <janrinok> It looks like the site is slowing down again. I have seen a couple of Backend Fetch Failed errors, and some pages are being built slowly enough to see the various templates being processed.
[06:22:33] <janrinok> kolie ^
[06:24:54] <janrinok> The network throughput is slowly climbing. Input is staying steady at <1M, but the output has climbed from <1M to 5M over the last hour. CPU usage is varying but staying about average.
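For reference, the in/out figures being quoted here can be sampled with something like the following. This is a minimal sketch, assuming the psutil package is available on the host; the numbers in the log come from whatever monitoring the host actually provides.

```python
# Sample aggregate network throughput over a short window (sketch only).
import time
import psutil

def throughput_mbps(interval=10):
    """Return (rx, tx) in Mbit/s averaged over `interval` seconds."""
    before = psutil.net_io_counters()
    time.sleep(interval)
    after = psutil.net_io_counters()
    rx = (after.bytes_recv - before.bytes_recv) * 8 / interval / 1e6
    tx = (after.bytes_sent - before.bytes_sent) * 8 / interval / 1e6
    return rx, tx

if __name__ == "__main__":
    rx, tx = throughput_mbps()
    print(f"in: {rx:.2f} Mbit/s  out: {tx:.2f} Mbit/s")
```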
[06:34:33] -!- AlwaysNever [AlwaysNever!~donaldo@315.38.1.669.dynamic.jazztel.es] has joined #soylent
[06:35:21] <janrinok> 0628 UTC: CPUs are around 100% and staying there. Network output is ~7M. Page refreshes are now slow.
[06:36:57] <janrinok> 503 backend fetch fail
[06:38:17] <janrinok> net o/p falling rapidly, now <2M
[06:38:27] <janrinok> 2nd 503 backend fetch fail
[06:48:43] <c0lo> Bots?
[06:49:01] <janrinok> net o/p now shot back to +7M, Cpus coming back down to average 50-60%.
[06:49:23] <janrinok> bots are looking normal - still there but nothing that appears to be hammering
[06:51:02] <janrinok> The system has just issued a new set of bot-blocks (usually 1 hr duration) for specific bots.
[06:51:03] <chromas> Were you able to peep in and run top or anything?
[06:53:50] <janrinok> yes, top was showing 100% but there was no single job that appeared to be taking a huge amount. chec.py was a bit OTT. traefik is busy, but that is its job (I think!). Sphinx is building an index...
[06:55:17] <chromas> Building an index sounds a little expensive
[06:56:27] <janrinok> yes, but it didn't take the CPUs into lock-up
[06:57:37] <janrinok> chec.py is running again - I don't know what that does
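What "peep in and run top" boils down to can be sketched as below: rank processes by CPU over a short window. This assumes psutil; chec.py, traefik and Sphinx's indexer would simply show up as ordinary process names if they are the hogs.

```python
# List the busiest processes by CPU over a 2-second window (sketch only).
import time
import psutil

procs = list(psutil.process_iter(["name"]))
for p in procs:
    try:
        p.cpu_percent(None)              # prime the per-process counters
    except psutil.Error:
        pass

time.sleep(2)                            # measurement window

busiest = []
for p in procs:
    try:
        busiest.append((p.cpu_percent(None), p.pid, p.info["name"]))
    except psutil.Error:
        continue

for cpu, pid, name in sorted(busiest, key=lambda t: t[0], reverse=True)[:10]:
    print(f"{cpu:6.1f}%  {pid:>7}  {name}")
```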
[07:00:54] <janrinok> There is still a lot of outgoing network traffic. I assume that is a backup being sent to the remote.
[07:01:38] <chromas> Maybe encrypting the traffic hogs the cpu
[07:01:51] <janrinok> could be...
[07:02:06] <janrinok> But 16 cores?
[07:02:43] <chromas> quantum encryptionses
[07:03:27] <janrinok> touch wood - it appears to be calming down again.
[07:04:10] <janrinok> ... I knew I shouldn't have typed that! output back up again.
[07:05:15] <janrinok> 2 x gzip running and they are the top 2 rows of htop at the moment.
[07:06:38] <chromas> of course touching wood is going to keep it up
[07:08:01] <janrinok> yeah, funny how it has that effect
[07:10:30] <janrinok> If it is a backup it is taking a hell of a long time
[07:12:01] <chromas> Time to find some expert devs to port rehash to rust
[07:12:06] <chromas> We'll pay them in internets
[07:13:05] <janrinok> htop load averages: 40.44 129.77 129.83
[07:13:54] <janrinok> I reckon cobol or fortran would at least match it for speed
[07:14:50] <chromas> You're right. Rust takes all day to compile anything. Better switch to D ;)
[07:16:41] <janrinok> load averages now: 240.90 163.45 142.15
[07:19:31] <fab23> it is most often a sign of heavy disk I/O, e.g. many tasks waiting for a response from the disk
[07:21:19] <janrinok> They have come back down again: 110.32 155.44 146.80
[07:23:49] <janrinok> network has come way back down again.
[07:23:51] <fab23> the values are the 1 min, 5 min and 15 min averages
[07:25:43] <fab23> but the load is quite heavy; from the outside, the host no longer responds to ping.
[07:26:32] <fab23> could also be an issue with that version of the kernel (if it was updated recently)
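fab23's point can be checked directly: the Linux load average counts tasks that are runnable or in uninterruptible (disk) sleep, so a load of 240 on a 16-core machine can mean I/O stalls rather than pure CPU burn. A minimal sketch, assuming psutil on Linux:

```python
# Show the 1/5/15 minute load averages and any processes stuck waiting on I/O.
import os
import psutil

one, five, fifteen = os.getloadavg()     # 1, 5 and 15 minute averages
print(f"load: {one:.2f} {five:.2f} {fifteen:.2f} on {psutil.cpu_count()} cores")

dstate = [
    p for p in psutil.process_iter(["name", "status"])
    if p.info["status"] == psutil.STATUS_DISK_SLEEP
]
print(f"{len(dstate)} processes in uninterruptible disk sleep")
for p in dstate[:10]:
    print("  ", p.pid, p.info["name"])
```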
[07:26:35] <janrinok> yes, there have been a couple of comments reporting backend fetch fails
[07:28:07] <janrinok> It is working OK for me but page refreshes are a bit slow
[07:29:02] <janrinok> That shouldn't affect the containers but it would affect the system.
[07:33:26] <janrinok> I "think" it has all quietened down for the moment. Thanks guys for your help/suggestions/advice/information.
[08:35:33] <Ingar> haircut installed, I look sexy again
[08:40:00] * chromas wolf whistles
[08:53:07] <Ingar> and it ain't even full moon yet!
[08:55:50] <janrinok> Again? Is it a temporary thing that only lasts a few days, or even hours?
[10:12:18] <c0lo> What's that temporary thing you ask, janrinok? Ingar looking sexy or S/N not returning 503?
[10:13:24] * Ingar hangs on in excitement
[10:29:45] <janrinok> I was referring to Ingar looking sexy (in his opinion)
[10:49:32] <Ingar> I always look sexy, but long hairs are not practical
[10:58:07] -!- c0lo has quit [Ping timeout: 268 seconds]
[11:06:09] -!- c0lo [c0lo!~c0lo@124.190.mg.jlq] has joined #soylent
[11:11:04] -!- c0lo has quit [Ping timeout: 268 seconds]
[13:07:08] <Ingar> anyone know any fun ways to destroy old hard drives?
[13:07:30] <Ingar> preferably without too much mess
[13:09:28] <janrinok> I assume you mean spinning rust drives and not SSDs?
[13:11:21] <Ingar> yeah
[13:11:37] <Ingar> an SSD you can mostly just wipe and trim
[13:12:01] <Ingar> drilling a hole might be appropriate
[13:26:26] <janrinok> I usually destroy mine in 2 stages. Most people are defeated if the circuit board is damaged. I just shove a screwdriver under it and lift until the circuit board breaks. Remove as much of the board as you wish before you get bored.
[13:28:35] <janrinok> The platters require a rather more professional set-up to extract data from them. I usually wait until I have some pent-up energy, and then I attack them with a cold-chisel and a very heavy hammer. After 15 minutes of that, I am quite warm, the pent-up energy has dissipated, and I have several destroyed drives in front of me.
[13:29:52] <janrinok> Even slight distortion of the platters is usually sufficient, but I like to do a proper job and try to ensure that they can never be spun again :)
[13:30:26] -!- Runaway1956 has quit [Ping timeout: 268 seconds]
[13:31:25] <janrinok> If you live near the coast, throwing them into the sea is usually enough to ensure that nobody will find them without expending too much energy.
[13:32:14] <janrinok> Whether you call any of this "fun" is a personal matter, I suppose.
[13:33:06] <janrinok> Of course, my drives are usually wiped using DBAN before any destruction takes place.
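DBAN does multi-pass pattern overwrites; a single zero pass is roughly what the sketch below does. The device path is hypothetical and the operation is irreversible, so this is illustration only.

```python
# Overwrite an entire block device with zeros (single pass, sketch only).
# DANGER: pointing this at the wrong disk destroys its contents. Needs root.
DEVICE = "/dev/sdX"          # hypothetical target drive
CHUNK = 4 * 1024 * 1024      # write in 4 MiB blocks

zeros = bytes(CHUNK)
written = 0
with open(DEVICE, "wb", buffering=0) as disk:
    try:
        while True:
            disk.write(zeros)
            written += CHUNK
    except OSError:          # ENOSPC once the end of the device is reached
        pass
print(f"wrote roughly {written // (1024 ** 3)} GiB of zeros to {DEVICE}")
```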
[14:09:32] <janrinok> kolie, site is like molasses again. There is a task running which is using almost all of the available CPU and sending ~7M of data. It has been running again for about 4 minutes.
[14:10:05] <janrinok> If it is the back-up then it needs to be capped at lower limits so that the site remains operational.
[14:11:01] <janrinok> 503s - backend fetch failed.
[14:19:12] <chromas> cp /dev/zero /dev/sdg and then the Office Space printer scene
[14:59:57] <kolie> I've watched the backups; they don't take all the CPU when I've seen them run.
[15:00:03] <kolie> they take a core.
[15:00:38] <kolie> backups are running now and they're not taking all the CPU, so it looks good.
[15:00:57] <janrinok> Well something is dragging the site to a standstill once an hour
[15:01:00] <kolie> rehash is taking 900% cpu though.
[15:02:11] <kolie> rehash just dropped to 50%; now it's fluctuating between 50% and 1000%
[15:02:56] <janrinok> I haven't got visibility of rehash internals when it is running. There is something that runs at approx hour:04 and the delays coincide with that. They may not be related, but that is when we often see 503s
[15:03:29] <kolie> I'm just looking at docker stats and top
[15:03:32] <kolie> You have the same.
[15:03:50] <janrinok> where are you seeing docker stats?
[15:03:55] <kolie> "docker stats"
[15:04:03] <janrinok> in the console?
[15:04:31] <kolie> on the docker host.
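"docker stats" on the host gives a per-container CPU and memory view; pulling the same numbers from a script looks roughly like this, assuming the docker CLI is on PATH and the user can reach the daemon.

```python
# Snapshot per-container CPU and memory via the docker CLI (sketch only).
import subprocess

FORMAT = "{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
out = subprocess.run(
    ["docker", "stats", "--no-stream", "--format", FORMAT],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    name, cpu, mem = line.split("\t")
    print(f"{name:30} {cpu:>8}  {mem}")
```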
[15:05:32] -!- bender has quit [Remote host closed the connection]
[15:05:42] -!- bender [bender!bot@Soylent/Bot/Bender] has joined #soylent
[15:06:23] <janrinok> what is just starting now?
[15:06:34] <kolie> what starting?
[15:07:38] <janrinok> because the network output has just shot up. There is nothing unusual on the input, so it must be internally generated. Whatever is causing the 503s is happening at the same time but might not be directly related.
[15:08:24] <kolie> Yea we transfer rsyncs to the other system.
[15:08:45] <janrinok> It has just knocked by software off because it is getting no response from the site
[15:09:11] <kolie> I can't parse that.
[15:09:17] <janrinok> oops, wrong key
[15:10:10] <kolie> at 5 past the hour we transfer rsyncs.
[15:11:54] <janrinok> the only thing I saw that gave me pause for thought is that 2 x gzip were running and taking a lot of resources.
[15:12:29] <kolie> the backup is limited to 50% of a core and 512MB ram.
[15:12:37] <kolie> It's done btw.
[15:12:44] <kolie> But you can see rehash is using a shit ton of resource.
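How caps like "50% of a core and 512MB" plus a throttled transfer are typically expressed is sketched below; the image name, paths and bandwidth figure are hypothetical, not the site's actual backup configuration.

```python
# Run a backup container with CPU/memory limits, then rsync with a bandwidth
# cap so the transfer cannot saturate the link (illustrative values only).
import subprocess

backup_cmd = [
    "docker", "run", "--rm",
    "--cpus", "0.5",           # at most half a core
    "--memory", "512m",        # hard memory cap
    "backup-image",            # hypothetical image name
]
subprocess.run(backup_cmd, check=True)

rsync_cmd = [
    "rsync", "-a",
    "--bwlimit=20000",         # ~20 MB/s cap on the transfer
    "/srv/backups/",           # hypothetical source path
    "offsite:/srv/backups/",   # hypothetical destination
]
subprocess.run(rsync_cmd, check=True)
```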
[15:12:47] <janrinok> Earlier on I copied you on that email I mentioned yesterday.
[15:13:04] <kolie> i confirm receipt.
[15:30:37] -!- bender has quit [Remote host closed the connection]
[15:30:46] -!- bender [bender!bot@Soylent/Bot/Bender] has joined #soylent
[15:44:34] -!- bender has quit [Remote host closed the connection]
[15:44:44] -!- bender [bender!bot@Soylent/Bot/Bender] has joined #soylent
[18:27:40] -!- Deucalion has quit [Ping timeout: 268 seconds]
[18:27:40] -!- systemd has quit [Ping timeout: 268 seconds]
[18:28:17] -!- Ingar has quit [Ping timeout: 268 seconds]
[18:28:55] -!- chromas has quit [Ping timeout: 268 seconds]
[18:34:19] -!- bender has quit [Remote host closed the connection]
[18:36:12] -!- Deucalion [Deucalion!~Fluff@Soylent/Staff/IRC/juggs] has joined #soylent
[18:36:12] -!- mode/#soylent [+v Deucalion] by Imogen
[18:36:41] -!- chromas [chromas!~chromas@Soylent/Staph/Infector/chromas] has joined #soylent
[18:36:41] -!- mode/#soylent [+v chromas] by Imogen
[18:40:58] -!- Runaway1956 [Runaway1956!~OldGuy@the.abyss.stares.back] has joined #soylent
[18:56:16] -!- bender [bender!bot@Soylent/Bot/Bender] has joined #soylent
[19:20:42] -!- progo6 has quit [Ping timeout: 268 seconds]
[19:37:58] -!- chromas has quit [Ping timeout: 268 seconds]
[19:40:39] -!- chromas [chromas!~chromas@Soylent/Staph/Infector/chromas] has joined #soylent
[19:40:39] -!- mode/#soylent [+v chromas] by Imogen
[19:42:11] -!- progo [progo!~progo@eegc-73-589-96-43.nwrknj.fios.verizon.net] has joined #soylent
[19:47:13] -!- progo has quit [Ping timeout: 268 seconds]
[19:47:50] -!- chromas has quit [Ping timeout: 268 seconds]
[20:08:31] -!- chromas [chromas!~chromas@Soylent/Staph/Infector/chromas] has joined #soylent
[20:08:31] -!- mode/#soylent [+v chromas] by Imogen
[20:09:09] -!- progo [progo!~progo@eegc-73-589-96-43.nwrknj.fios.verizon.net] has joined #soylent
[20:12:30] -!- AlwaysNever has quit [Ping timeout: 268 seconds]
[20:14:11] -!- AlwaysNever [AlwaysNever!~donaldo@315.38.1.669.dynamic.jazztel.es] has joined #soylent
[20:14:58] -!- progo has quit [Ping timeout: 268 seconds]
[20:14:58] -!- chromas has quit [Ping timeout: 268 seconds]
[20:15:20] -!- bender has quit [Remote host closed the connection]
[20:16:32] -!- bender [bender!bot@Soylent/Bot/Bender] has joined #soylent
[20:17:02] -!- progo [progo!~progo@eegc-73-589-96-43.nwrknj.fios.verizon.net] has joined #soylent
[20:17:15] -!- chromas [chromas!~chromas@Soylent/Staph/Infector/chromas] has joined #soylent
[20:17:15] -!- mode/#soylent [+v chromas] by Imogen
[20:22:22] -!- chromas has quit [Ping timeout: 268 seconds]
[20:51:04] -!- chromas [chromas!~chromas@Soylent/Staph/Infector/chromas] has joined #soylent
[20:51:04] -!- mode/#soylent [+v chromas] by Imogen
[21:39:47] -!- c0lo [c0lo!~c0lo@124.190.mg.jlq] has joined #soylent
[21:41:51] -!- Runaway1956 has quit [Read error: -0x7880: SSL - The peer notified us that the connection is going to be closed]
[23:33:41] -!- kyonko [kyonko!SOY@2001:5b0:50c6:vuxp:ooij:wttg:wppo:pskg] has joined #soylent
[23:44:58] <kolie> janrinok, you up lol