#soylent | Logs for 2024-05-19

« return
[00:48:53] -!- progo has quit [Quit: The Lounge - https://thelounge.chat]
[02:42:39] -!- soylentil45 [soylentil45!~soylentil@2603:7080:wrln:qy::rsh] has joined #soylent
[03:04:21] -!- soylentil55 [soylentil55!~soylentil@tmw-111-280-780-334.res.spectrum.com] has joined #soylent
[03:27:24] -!- soylentil55 [soylentil55!~soylentil@tmw-111-280-780-334.res.spectrum.com] has parted #soylent
[04:56:27] <Fnord666> I'm going to claim "not it" also. Never had access so ...
[05:09:34] -!- mrpg [mrpg!~Thunderbi@Soylent/Staff/Editor/mrpg] has joined #soylent
[05:09:34] -!- mode/#soylent [+v mrpg] by Imogen
[05:16:19] <mrpg> Came to say hi
[05:16:20] <mrpg> hi
[05:16:22] <mrpg> bye
[05:16:24] -!- mrpg has quit [Quit: chao pescao]
[05:36:00] -!- aristarchus [aristarchus!~aristarch@146.190.nmy.qtx] has joined #soylent
[05:37:05] <aristarchus> 502 bad gateway, as in the Gates (bill) of Hell (chromas).   Is this the end?
[05:41:13] <aristarchus> Perhaps this is the End, just past the Ides of May, as the Sibyl prophesied at the birth of the janrinok!   Doom is upon us!   The community thus perishes?  Oh, the Huge Manatees!
[05:44:39] <aristarchus> No response from the alleged "staff".  Disruption of communication can only mean one thing!  Hot grits!
[05:45:52] -!- drussell has quit [Ping timeout: 252 seconds]
[05:46:49] -!- drussell [drussell!~drussell@a4627691kd3g1a0z4.wk.shawcable.net] has joined #soylent
[05:47:37] <aristarchus> Lots of stuff going down.  Ignar, Indi is down.  Coordinated attack on reason and sanity?
[05:52:19] <aristarchus> Thank goodness I have everyone on "ignore", so I can not notice that no one is responding, or here.  Something of a ghosttown, are we now?
[06:11:00] -!- soylentil58 [soylentil58!~soylentil@106.70.hhl.gyu] has joined #soylent
[06:11:04] <aristarchus> Worst nightmare, SN is down, and only aristarchus is present.  Last Soylentil standing.   More of a curse, than a prize.
[06:21:17] -!- aristarchus has quit [Quit: Client closed]
[06:24:09] <chromas> Who knew buttplugs could talk
[06:24:25] <chromas> I guess they probably come with 5g now
[06:26:25] <chromas> Butt with the backend down at least we don't have any shitposting
[06:28:46] -!- dx3bydt3 has quit [Ping timeout: 252 seconds]
[06:29:15] -!- dx3bydt3 [dx3bydt3!~|dx3bydt3@129.224.oqq.ulu] has joined #soylent
[07:48:54] <janrinok> chromas, yes, I was thinking the same thing.
[07:49:24] <chromas> Same words in mind, I bet
[07:52:08] <janrinok> do me a favour please - can you log in to dev.soylentnews.org? I just need to confirm that you can. It is still up and I could post some stories there.
[07:53:15] <chromas> content-length: 0
[07:53:42] <janrinok> The docker containers just keep on going. Several things that I have noticed: the outages occur at weekends. I think perhaps two cron jobs are running at the same time and using up the available memory.
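If janrinok's colliding-cron-jobs theory is right, the usual fix is to serialise the jobs with flock so they never run at the same time. A minimal sketch; the job names, times and lock path are hypothetical:

```sh
# Hypothetical crontab entries: both weekend jobs share one lock file,
# so the second waits for the first instead of running alongside it.
0 2 * * 6   flock /var/lock/sn-weekly.lock /usr/local/bin/db-backup.sh
15 2 * * 6  flock /var/lock/sn-weekly.lock /usr/local/bin/rotate-logs.sh
```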
[07:54:19] <janrinok> was that your response to logging in?
[07:54:34] <chromas> just loading the page
[07:54:39] <chromas> empty document
[07:54:54] <janrinok> so you have nothing on the front page?
[07:55:56] <chromas> correct
[07:56:05] <janrinok> how strange, it was up a few minutes ago. I wonder if someone is working on the same idea?
[07:56:39] <janrinok> if we install the latest backup from prod on to dev, we can make dev the main server.
[07:57:11] <janrinok> The only person who can do that at the moment is kolie.
[07:58:11] <janrinok> But it is not giving an error message so the dev site is still there.
[08:01:25] <janrinok> and it is downloading the favicon, so there is a response of some sort. I can SSH into dev/staging so for some reason it has stopped responding in the last few minutes.
[08:02:53] <chromas> I wonder if the icon comes from slash or if apache just has a rule to push a file out for that url
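Both observations above are checkable from a shell. The dev hostname comes from the log; the Apache paths are assumptions:

```sh
# Reproduce the empty-page symptom: HTTP 200 with content-length: 0
# means Apache answered but the backend returned an empty document.
curl -sI https://dev.soylentnews.org/ | head -n 5

# See whether the favicon is served as a static file rather than via
# slash; a vhost rule like "Alias /favicon.ico /path/to/favicon.ico"
# would explain the icon loading while the page itself is empty.
grep -ri favicon /etc/apache2/sites-enabled/
```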
[08:04:28] -!- Runaway has quit [Ping timeout: 252 seconds]
[08:10:09] <janrinok> If dev has had a problem too then I suspect that this is not the same problem that we have had previously, but that is pure speculation
[08:11:01] <janrinok> chromas - dev is back up again, can you try logging in again pse?
[08:11:53] <chromas> What'd you do?
[08:12:36] <janrinok> nothin'! I think that someone else is working on it, or it is just a docker box resetting itself.
[08:13:03] <chromas> "These messages will be kept in the system for only 14 days, whether they have been read or not. After 14 days, they will be deleted."
[08:13:28] <chromas> well I've got a comment moderation message from June last year...or as you Brits would say, last year June
[08:13:37] <chromas> probably been a little more than 14 days
[08:13:40] <janrinok> do you not get the front page?
[08:13:58] <chromas> I did. Logged in and I have a message from almost a year ago
[08:14:07] * janrinok corrects chromas - we say last June
[08:14:29] <chromas> You have to put the year first, I decided
[08:15:44] <chromas> so anyhow it loads at the moment
[08:16:16] <janrinok> OK, I've got to be afk for about an hour, and then I can post some stories on dev.
[08:16:28] <chromas> =dev.sub
[08:16:33] <chromas> there used to be a command
[08:16:54] <janrinok> lol - everything used to work, but that seems to be a long time ago now :)
[08:17:10] <chromas> yeah like this
[08:17:13] <chromas> =g'day janrinok
[08:17:16] -!- systemd-oomd has quit [Remote host closed the connection]
[08:17:25] <janrinok> g'day - speak to you later
[08:17:38] <chromas> (uncaught exception)
[08:18:43] <janrinok> pos :)
[08:22:53] -!- systemd-oomd [systemd-oomd!~systemd@pid1] has joined #soylent
[08:23:00] <chromas> =g'day teste
[08:23:04] <chromas> yay, comments
[08:23:17] <chromas> when in doubt, comment it out
[08:23:54] <chromas> or, why fix what's broken when you can just delete it?
[08:27:56] <janrinok> lol
[08:39:49] <chromas> woohoo++
[08:39:52] <chromas> scope_guards++
[08:40:52] <chromas> in my jenius, I didn't account for bad utf-8, so when I'd get an error, the db commit never happened, so everything else with the db would fail because it would cry about trying to nest transactions
[08:41:36] <chromas> so now that I've graduated to a capital J, I still don't account for bad encoding, but I just work around that particular error :D
[09:03:54] <Ingar> 503 is the Gateway to Hell
[09:09:13] <Ingar> ari following libindi worries me though
[09:09:19] <Ingar> I'll need to find a new hobby
[09:10:48] <janrinok> mornin' Ingar
[09:11:22] <Ingar> hi janrinok
[09:11:37] <janrinok> he is probably trying to find information on you so that he can piss you off even more!
[09:12:46] <Ingar> then he's doing quite a bad job
[09:13:15] <Ingar> anyway, let's not talk about useless things :)
[09:17:42] <janrinok> Everything he does is a bad job.
[09:18:19] <Ingar> so, from here on, we will refer to ari as BJA
[09:18:53] <janrinok> that works for me!
[09:33:25] -!- aristarchus [aristarchus!~aristarch@212.102.ty.jzm] has joined #soylent
[09:37:56] <aristarchus> Only down for maintenance, unlike the case here, so nothing to worry about, Ingar.  Free Software for the Universe?   I hear some only look at the night sky through Windows.  Sad, that.
[09:46:16] <Ingar> aristarchus: I can appreciate your pun, are you into astro?
[09:47:26] <aristarchus> Are you familiar with my name?
[09:48:51] <Ingar> aristarchus: the one in the middle of Oceanus Procellarum
[09:49:31] <aristarchus> Only a  namesake, not the original.
[09:51:37] <Ingar> aristarchus: at least it established that we both know what we're talking about
[09:55:19] <Ingar> have a galaxy http://ingar.intranifty.net
[09:56:39] <aristarchus> With, or without SN 2023ixf?
[09:57:05] <Ingar> it has faded away by now, I got that last year
[09:57:26] <Ingar> http://ingar.intranifty.net
[09:58:05] <Ingar> this year's image is a lot better in quality though
[09:58:15] <Ingar> (git gut)
[09:58:21] <aristarchus> Ah, the old blink comparator!  Nice!
[09:59:18] <Ingar> Thanks! :)
[10:00:07] <aristarchus> Meanwhile, the site is still down, and shows signs of staying that way.  Time for jan to be nice to certain people, even though it goes against his nature?
[10:01:16] <Ingar> most likely he has already done so by now, but I assume the USofA is still fast asleep
[10:01:31] <Ingar> also weird to get 502
[10:05:46] -!- aristarchus has quit [Quit: Client closed]
[10:16:27] <janrinok> Ingar, it appears to be the same problem that we have had before. The disk is full! I can get onto helium and can see the problem but I cannot take any corrective action. Emails have been sent.
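For reference, confirming a disk-full diagnosis from a shell account looks roughly like this; the paths are the usual suspects, not confirmed details from helium:

```sh
df -h                                  # which filesystem is at 100%?
sudo du -xsh /var/lib/mysql /var/log   # MySQL data+binlogs vs. system logs
ls -lh /var/lib/mysql/ | sort -k5 -h | tail   # biggest files, binlogs included
```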
[10:17:03] <janrinok> seems like you have an astro buddy there in BJA
[10:33:28] <Ingar> I might even image the crater tonight
[11:00:28] -!- drussell has quit [Ping timeout: 252 seconds]
[11:01:11] -!- drussell [drussell!~drussell@a4627691kd3g1a0z4.wk.shawcable.net] has joined #soylent
[11:12:41] <Ingar> janrinok: in any case, thanks for doing whatever you can. I'll patiently wait
[11:18:43] <janrinok> np, but it is very frustrating
[12:29:57] <AlwaysNever> hello all, so the site is down again.
[12:30:42] <ted-ious> AlwaysNever: It looks like nobody fixed the problem with backup files filling up the disks.
[12:31:23] <AlwaysNever> ted-ious: is it the backup files filling up, or is it that the MySQL logging has not been disabled yet?
[12:33:15] <ted-ious> I don't know.
[12:33:36] <ted-ious> I'm just guessing because this problem seems to keep happening over and over.
[12:34:06] <ted-ious> Along with the certbot script not being run once a month.
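The monthly renewal ted-ious refers to is normally a one-line cron entry; the schedule and reload hook below are assumptions, not SN's actual configuration:

```sh
# Hypothetical root crontab entry: certbot renew is safe to run often;
# it only replaces certificates within 30 days of expiry.
0 4 1 * *  certbot renew --quiet --deploy-hook "systemctl reload nginx"
```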
[12:34:12] <AlwaysNever> https://serverfault.com
[12:34:12] <systemd-oomd> ^ Why /var/lib/mysql takes too much space whereas actual db is small enough
[12:34:29] <AlwaysNever> "disable_log_bin"
[12:35:02] <ted-ious> I don't know enough about the system to say whether or not that is a good idea.
[12:35:27] <ted-ious> But I do know that the problem doesn't appear to be solved if it keeps happening.
[12:36:33] <AlwaysNever> Looks like the disk full problem comes from MySQL logging, no other explanation I can think of
[12:37:34] <AlwaysNever> MySQL logging in SN is either an obsolete remnant of the old Master-Slave scheme of MySQL (no longer in use in SN), or caused by the new default in MySQL 8 to enable logging with a 30-day retention period
[12:38:05] <AlwaysNever> just disable MySQL binary logging
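A minimal sketch of what AlwaysNever is proposing, assuming a root shell and MySQL access on helium (which, per janrinok below, nobody currently has):

```sh
mysql -e "SHOW BINARY LOGS;"                # list binlogs and their sizes
mysql -e "PURGE BINARY LOGS BEFORE NOW();"  # reclaim the space immediately
# Make it permanent by adding to the [mysqld] section of my.cnf:
#   disable_log_bin
sudo systemctl restart mysql
```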
[12:39:53] -!- Runaway1956 [Runaway1956!~OldGuy@the.abyss.stares.back] has joined #soylent
[12:41:51] <ted-ious> You're trying to convince the wrong person. :)
[12:42:38] <ted-ious> If I had to guess it would be because the database might crash and need to be recovered.
[12:43:34] <ted-ious> I don't know why you would need those logs any farther back than the last backup or two but maybe there's a reason.
[12:44:21] <AlwaysNever> The reason is Oracle wants MySQL users to buy support for MySQL
[12:44:58] <ted-ious> I doubt that is the exact reason here. :)
[12:45:19] <ted-ious> When did oracle buy mysql and when was soylent news built?
[12:49:29] <AlwaysNever> That's not the point: what OS is SN using? Ubuntu. When were those OSes last updated in SN? Recently. What does Oracle MySQL do in its latest release? Enable binary logging by default with a 30-day retention period.
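If disabling binlogs outright seems drastic, the 30-day default AlwaysNever mentions (binlog_expire_logs_seconds = 2592000 in MySQL 8) can simply be shortened instead; same access caveat as above:

```sh
# Keep binlogs for 2 days instead of 30; SET PERSIST survives restarts.
mysql -e "SET PERSIST binlog_expire_logs_seconds = 172800;"
mysql -e "SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';"
```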
[12:50:09] <janrinok> AlwaysNever, I can get on to helium but I do not have root on that server. I cannot access mysql to disable logging. For some reason, all those who used to have access have been taken out of that group. The only person that I think can reinstate us is NCommander.
[12:50:12] <AlwaysNever> More info: https://askubuntu.com
[12:50:12] <systemd-oomd> ^ How to Solve Increasing Size of MySQL Binlog Files Problem?
[12:50:32] <janrinok> Emails have been sent but I do not expect any response over the weekend.
[12:51:02] <ted-ious> Oh ok well maybe there's some benefit to those logs if there's a database problem?
[12:53:29] <AlwaysNever> Those MySQL logs exist so that you can restore to any arbitrary point in time in the past
[12:53:55] <AlwaysNever> for a web app like SN, in case of disaster going back to the last daily backup is enough
[12:54:09] <janrinok> Soylent news was rebuilt in late 2022/early 2024. The software was upgraded at that time. I suspect - but this is speculation - that the access to root on the servers changed sometime during the rebuild.
[12:54:12] <AlwaysNever> therefore, for SN there is no need to enable MySQL binary logs
[12:54:22] <AlwaysNever> they should be disabled
[12:54:52] <janrinok> Well what is stopping you? Oh, you haven't got access. Well neither has anyone else....
[12:55:28] <AlwaysNever> janrinok: yes, I know that you no longer have access, I'm just giving my two cents of advice
[12:56:07] -!- soylentil41 [soylentil41!~soylentil@ysig-694-71-0-819.washdc.fios.verizon.net] has joined #soylent
[12:56:12] <janrinok> and I appreciate that advice. But knowing what needs doing and being in a position to do it are very different.
[12:58:01] -!- soylentil41 has quit [Client Quit]
[12:59:02] <AlwaysNever> yes, it's sad that NC has not wanted to speed up the transfer of the very thing he was ready to just shut down
[12:59:47] -!- anontor has quit [Quit: anontor]
[13:00:00] <janrinok> There is an alternative solution. The dev server, which is dockerised, is still active. I have written suggesting that we transfer the live system to the current dev server which is NOT subject to all of these problems.
[13:01:07] <janrinok> That will need the agreement of both NCommander and kolie. I have had discussions with kolie and I believe that he is amenable to the suggestion but he will still have to find the time to do the change over, and only if NCommander agrees.
[13:01:32] <AlwaysNever> I would not hurry things up once we are in the muddle; I think it's probably better to wait for it to be fixed in production, and go from there in a calm way
[13:02:08] <janrinok> Nobody will be working on production. We do not intend using the existing set up on the new site.
[13:03:32] <AlwaysNever> janrinok: are you confident the dev environment is fully OK?, has it all the dependencies needed to make the site tick?
[13:03:59] <AlwaysNever> that is a risky change over
[13:05:07] <AlwaysNever> Also, the Nginx front-ends would need to be reconfigured to go against the new Apache backend, not too straightforward
[13:05:07] <janrinok> It has been running for over 6 months. All the things that are currently failing (backups, certificates etc) are fully functional on dev. I cannot say how well it will cope with a huge amount of traffic but neither site has that requirement today. We have about 350 current users.
[13:05:47] <janrinok> It also contains an email system, IRC, etc etc.
[13:06:31] <janrinok> I have no doubt that we will have a few hiccups in the change over but nothing to the extent of the downtime we are currently suffering.
[13:06:42] <AlwaysNever> Unless it was already planned to do that changeover to the development environment, and everything was ALREADY ready for the changeover, I think doing the changeover in a hurry is probably a bad idea
[13:06:59] <janrinok> All the front ends, load balancing etc are already implemented.
[13:08:17] <janrinok> The original plan was to dockerise production, but I think that came to a halt when it was announced that the site was closing. All the build exists now for the production system.
[13:09:34] <AlwaysNever> I'm not expert in docker, I much prefer the virtual machine approach, so I cannot give advise about anything docker
[13:09:51] <janrinok> It would ease the site administration, but complicate the changing of software and bug squashing. Those are problems that can be overcome but at the moment are a bit convoluted to be useful on a daily basis.
[13:19:58] <AlwaysNever> also, does the dev. environment have an up-to-date backup of the database? I've read here that the dev. environment was showing data one year old as the newest...
[13:24:16] <janrinok> The dev database isn't a copy of the prod database. The whole point of dev is to test the software. It contains all sorts of contrived comments to test different problems that have (hopefully) been fixed and we often use it to try to break the software. The database should NOT be updated on dev because we run all those contrived comments repeatedly. The database would be replaced with a copy of the prod database. The existing site
[13:24:16] <janrinok> would then exist on the dev server. We would then require the DNS to be updated to point to the appropriate hardware and the site would exist online again.
[13:24:54] <janrinok> NO changes would be required by users.
[13:27:32] <janrinok> The first couple of thousand accounts also exist on dev but that is simply to give us data to work with. If you are one of those whose account falls in that group then you can log onto dev anytime you like. That is precisely how we used to test new releases before going live.
[13:28:44] <janrinok> If not, you can create a new account (you will probably get a different UID but your nickname can remain the same).
[13:29:32] <janrinok> permissions and privileges are exactly the same as you have now. You will not have access to pages on dev that you cannot currently see on prod.
[13:36:13] <AlwaysNever> janrinok: by your description, I see the dev. environment as very experimental; not the best place to fail-over production into.
[13:37:27] <janrinok> It has been used experimentally for 10 years. That is how we test the software. The move to dockerisation was initiated by NCommander who asked kolie to do the work. The intention was that the whole site would be converted.
[13:38:19] <AlwaysNever> Let's just wait for 1) prod. to be fixed; 2) site ownership to be transferred; 3) proper admin access to be acquired; 4) then, plan for a migration.
[13:38:49] <janrinok> The first stage was - as always - to do things on dev. That is exactly what has been done. It works and has been working for over 6 months. The change to the production system was never started because NCommander decided to close the site down rather than continue with his plan.
[13:40:25] <janrinok> Nobody, now or in the future, will be working on the current prod. It has too many problems. There are no volunteers to do the work. It is more expensive than some alternatives. People have been asking us to sort out the software. It has already been done.
[13:40:32] <AlwaysNever> A migration cannot be done without proper admin access to the old system; you don't want to go begging to NC for this file and that file and then that config setting...
[13:42:28] <janrinok> We only need the database and the DNS changes. That should all be part of the transfer of assets that we have been working towards. The current plan is to rebuild production on new servers in a simple yet robust way. There is no plan to continue with the existing servers.
[13:44:17] <janrinok> People have also been asking why the current site costs so much. We intend to reduce our running costs. Millions of users are actually interacting with docker containers every day. There is nothing magic about them.
[13:46:03] <janrinok> They keep the software functions in a modular and secure fashion. They can be expanded to meet demand very quickly and without a huge amount of extra work. For our current user base the cost of those linode servers cannot be justified. It is you who has to pay the bill....
[13:48:04] <janrinok> The dev server has all the files we need with the exception of the live database. It needs agreement from NCommander who technically 'owns' the dev server to make the switch over. I have posed the question and all I can do is await the reply.
[13:48:55] <janrinok> On the new site the community will own everything. No one person or small group of people will have the ability to close the site down against the community's wishes.
[13:52:20] <janrinok> The difference is that I am requesting that the servers are configured now - before the transfer of assets - so that we do not have these repeated site outages.
[13:56:13] <janrinok> The certificates problem will be fixed.
[13:56:21] <janrinok> The database problem will be fixed.
[13:56:35] <janrinok> The running costs will be significantly reduced.
[13:57:14] <janrinok> It will be easy - and with Ansible, automatic - to create new servers as and when required.
[14:08:28] <AlwaysNever> I've nothing against it, as I am not the one doing that work.
[14:10:49] <janrinok> The work has already been done by kolie at the request of NCommander. It would already have happened if NCommander had continued. Nobody can guarantee that there will not be problems in the future but the current problems, which are resulting in several days of downtime every couple of weeks, will be resolved.
[14:14:35] <janrinok> The dockerisation doesn't care which OS is used so the eternal debate about which one the site should employ in the future disappears too. The sys-admins are required to keep the server running but they do not need to get involved in restarting specific bits of software etc. It eases their task - but perhaps makes it less interesting and challenging. I will accept that in return for a stable site that any good sys-admin can maintain
[14:14:35] <janrinok> with the minimum of effort. They are all volunteers too, remember.
[14:15:48] <janrinok> The burden of maintaining the site with only 2 sys-admins for over 2 years cost us dearly in the end.
[14:35:58] <Fnord666> janrinok - " The sys-admins are required to keep the server running but they do not need to get involved in restarting specific bits of software etc." I'm curious. How does docker do that?
[14:37:17] -!- fab23 [fab23!fabian@2001:8a8:izvs:s::i] has joined #soylent
[14:38:14] <janrinok> Docker containers - and the docker management software - can be configured to restart failed containers (repeatedly, or for a specific number of times). They are also capable of being restarted automatically in the correct order, so container A will be started (or restarted) before container B etc.
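A toy compose file showing the two behaviours janrinok describes, restart policies and ordered start-up; the service and image names are illustrative, not SN's real ones:

```sh
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: mariadb:10.11
    restart: unless-stopped      # restart after crashes and reboots
  rehash:
    image: example/rehash:dev    # hypothetical image name
    restart: "on-failure:5"      # retry a failed container up to 5 times
    depends_on:
      - db                       # always start the database first
EOF
docker compose up -d
```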
[14:40:25] <janrinok> I am still at the early stages but it is fairly straightforward once you get the hang of it. Ansible allows you to configure a server remotely, then install the required containers to meet the function or functions of that server, and then kick them all into life. It is how cloud computing controls thousands of new servers being created and destroyed on demand. We will not be using cloud resources but the theory is similar.
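The workflow janrinok describes boils down to one playbook run against a fresh host. Everything below (inventory name, paths, module choice) is a sketch of that idea, not SN's actual playbook:

```sh
cat > site.yml <<'EOF'
- hosts: sn_servers
  become: true
  tasks:
    - name: Install Docker from the distro repos
      ansible.builtin.apt:
        name: docker.io
        state: present
    - name: Create the project directory
      ansible.builtin.file:
        path: /srv/sn
        state: directory
    - name: Ship the compose file to the host
      ansible.builtin.copy:
        src: docker-compose.yml
        dest: /srv/sn/docker-compose.yml
    - name: Pull images and start the containers
      community.docker.docker_compose_v2:
        project_src: /srv/sn
EOF
ansible-playbook -i inventory.ini site.yml
```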
[14:42:50] <janrinok> If there is a problem then the problem is usually external to the container. It is an access problem or an out-of-memory problem. That remains the sys-admin's job, but even that can be managed by desktop displays similar to the Linode controls, or phpMyAdmin, or Grafana etc.
[14:43:06] <Fnord666> Ok. I understand that. Restarting a container though is really just like rebooting a VM. Do we plan on having a separate container for each bit of SN? I'm curious what the Dev docker structure looks like.
[14:43:29] <janrinok> So a given server can be managed remotely by several sys-admins in turn.
[14:43:52] <Fnord666> Understood.
[14:45:02] <Fnord666> Presumably the docker container manager coordinates who is doing what to each container and prevents two people from restarting the same container at the same time.
[14:45:17] <janrinok> There is an IRC container, an email container, a rehash container, etc. They build automatically from a script (Dockerfile). Persistent data (e.g. databases) can be created in normal userspace but the mariadb container will manage it for you.
[14:46:02] <janrinok> I haven't even got to that level yet but it will be similar to the controls that Linode have in place. I am sure that it can be managed.
[14:46:31] <Fnord666> I follow all that. The question is where we will be hosting this and what they will charge us per container, if that's how it's billed.
[14:47:38] <Fnord666> I've never done anything with docker that wasn't on my local machine so I have no idea how docker hosting works or is billed.
[14:47:43] <janrinok> You can have multiple instances of the same container running on the same computer - for a server you simply have to say what port you want each instance to listen on. There are additional containers for load balancing and reverse proxy servers.
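What multiple instances on different ports looks like in practice; the image name and ports are illustrative:

```sh
docker run -d --name web1 -p 8081:80 example/rehash:dev
docker run -d --name web2 -p 8082:80 example/rehash:dev
# A reverse proxy (nginx, Traefik, ...) listening on :443 then
# balances requests across 127.0.0.1:8081 and 127.0.0.1:8082.
```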
[14:49:59] <janrinok> Currently the only docker that we are running is on kolie's servers which I think he provided at no cost. The proposed plan is to buy our own hardware or to use less expensive hardware (VPS etc) to run our containers. That is something for the new site to decide. I am just trying to keep our site visible on the internet for the moment!
[14:52:06] <Fnord666> Understood and seriously, thank you for all of your efforts on that front!!!
[14:52:08] <janrinok> There are already thousands of containers that exist to carry out every imaginable function and they have been well tested over time.
[14:52:34] <Fnord666> I'm not questioning using docker at all. I think it's a great idea.
[14:53:35] <janrinok> There are recognised repos (the docker repos being the one managed by the company and the most used I think) and you can also place your own containers into the repos - in much the same way that github worked originally.
[14:54:25] <Fnord666> Yep. I've done that as well when I was testing out docker and kubernetes.
[14:54:55] <janrinok> I have 4 new books in my library and I am slogging away trying to get up to speed on the topic. But each day makes me wonder why we didn't do this earlier - apart from the fact that it was not the 'traditional' way of adminning servers!
[14:55:11] <AlwaysNever> Ok, and all those SN containers, where are going to exist? On a "cloud" somewhere, or on a VPS/VM owned by SN?
[14:55:48] <Fnord666> As janrinok just said, they will be hosted on hardware or VPS owned by SN
[14:56:08] <janrinok> That isn't my decision to make. Initially they exist today on kolie's hardware - which I believe has been gifted to SN but we will have to get that clarified.
[14:56:21] <Fnord666> I stand corrected.
[14:56:45] <Fnord666> Good point about the new SN having to make that call.
[14:57:25] <janrinok> We may decide to purchase our own hardware, or we might find that hiring VPSs is more cost effective. Whatever it is it should not cost $6000 per annum!
[14:57:56] <Fnord666> I'm not a big fan of "cloud" because they typically bill by resources used and many cloud providers make it difficult to limit the resources that can be used.
[14:58:02] <AlwaysNever> a container is just a light-weight VM which shares the same kernel as its host. Provided the host runs a Linux kernel, in theory the container can be moved "from-cloud-to-cloud" freely
[14:58:13] <Fnord666> There've been a number of companies that have gotten unexpected, rather large bills.
[14:58:18] <janrinok> No, I do not want to use the cloud either.
[14:59:19] <janrinok> Docker works on Linux, macOS and Windows, although the latter requires some additional software
[15:00:51] <AlwaysNever> so Docker is like the old Java promise of "runs everywhere"... hmm, too much hype.
[15:01:09] <janrinok> I would suggest that any hardware that we use is spread both geographically and by accessibility so that no one person has complete control of all the hardware and data. That would effectively put us right back at the beginning again.
[15:02:26] <Fnord666> The next challenge will be finding sys-admin volunteers willing to work with the system in its dockerized state.
[15:02:28] <AlwaysNever> "spread geographically" is Google-scale something.
[15:02:28] <janrinok> AlwaysNever, I agree that it is not what many people are used to, but it is also used by many famous software companies as their preferred solution. It does work...
[15:03:31] <janrinok> No, one server in the US and another in Europe and another in Australia. Redundancy and security in one.
[15:03:32] <Fnord666> Kubernetes is used by a number of companies to manage their docker workload and distribution. We might want to look into that as well.
[15:03:48] <AlwaysNever> Kubernetes: overkill
[15:04:04] <AlwaysNever> Kubernetes is for Google-scale things
[15:04:09] <Fnord666> True, especially for a site like this one at its current scale.
[15:04:14] <janrinok> There are others too Fnord666 - but Kubernetes is not the favourite at the moment.
[15:04:47] <Fnord666> Kubernetes must just have the best in class PR department. :)
[15:04:54] <janrinok> ... for the reasons just given.
[15:05:46] <AlwaysNever> janrinok: the born-big scheme/architecture is what is costing so much $$$ yearly
[15:05:48] <janrinok> For the moment we don't need anything. Ansible can cope easily with the size of task we are presenting to it
[15:06:33] <janrinok> If you bought a server - how much would you expect to pay and how long would you estimate its useful lifetime?
[15:06:38] <AlwaysNever> what is needed is a SQL/Backend server, a Apache/Perl server, and one or several reverse proxy front-ends
[15:06:48] <AlwaysNever> that's 4 VMs or containers
[15:07:22] <AlwaysNever> that should be about 100 US$/month
[15:07:24] <janrinok> plus IRC, load balancing, email, etc. But you are right - the load isn't significant.
[15:07:49] -!- soylentil43 [soylentil43!~soylentil@iqe-987-907-954-378.res.spectrum.com] has joined #soylent
[15:08:43] <soylentil43> ping
[15:08:45] <AlwaysNever> IRC is super-lean; load balancing should be done by round-robin DNS, and email should be outsourced to Gmail (spam is a bitch)
[15:09:12] <janrinok> I am not giving Google anything! But that is not my decision to make...
[15:09:39] -!- soylentil43 has quit [Client Quit]
[15:09:43] <janrinok> We have promised to protect people's data - not have it mined by Google.
[15:10:53] <janrinok> There are email containers (we already have one) that do all the DKIM and SPF that is required.
[15:11:12] <AlwaysNever> Email can be done in-house, but only successfully if done military style: SPF -all, DMARC p=reject; even then, it's a lot of work
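In zone-file terms, the "military style" records AlwaysNever means, plus the round-robin balancing mentioned a few lines up, look like this; example.org stands in for the real domain and the IPs are documentation addresses:

```sh
# example.org.        TXT  "v=spf1 mx -all"
# _dmarc.example.org. TXT  "v=DMARC1; p=reject; rua=mailto:postmaster@example.org"
# Round-robin load balancing is just several A records for one name:
# www.example.org.    A    192.0.2.10
# www.example.org.    A    192.0.2.11
# Check what is actually published:
dig +short TXT example.org
dig +short TXT _dmarc.example.org
```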
[15:11:46] <janrinok> Tie that in with Traefik and you are golden
[15:11:55] <Fnord666> The company I work for typically expects the lifetime of physical hardware to be 3 years, but I think that's tied more to how long they can amortize the cost in the accounting department.
[15:13:18] <janrinok> so if we bought 3 servers at $1000 each (we do not need huge machines) then we have spent what would last us only 6 months at today's rates, but we can make it last 3 years? That is not a bad saving, even if it is optimistic
[15:14:10] <janrinok> If we can hire VPSs cheaper over the same period then perhaps that is the way we should go instead.
[15:14:41] <AlwaysNever> The company I work for buys refurbished servers 5-years old, puts in new SmartArray batteries, and puts them to work for 5 more years with zero problems
[15:14:56] <janrinok> Damn - that is even better
[15:15:10] <Fnord666> I'm guessing that the VPS cost for the same resources would be more expensive if we scale them the same, but one of the advantages of a VPS is that you can change the memory, etc as needed.
[15:15:42] <Fnord666> Like I said, I believe the 3 year figure is an accounting thing.
[15:15:49] <janrinok> Whatever, I haven't got to that stage because our site keeps falling over!
[15:16:16] <Fnord666> Very true. A stable site is priority number one.
[15:16:34] <janrinok> I would accept a man with flags sending semaphore if it met our requirement reliably!
[15:17:04] <Fnord666> Lol!
[15:17:28] <AlwaysNever> I hope kolie will take care of the hardware/housing side of things with a friendly price/cost
[15:18:03] <AlwaysNever> so I understand that side of things is (or will be) covered
[15:18:52] <Fnord666> Apologies all but I need to head out and run some errands. Have a good day everyone!
[15:18:54] <janrinok> He already has, and we have a second similar offer available too. If everything is provided by kolie he will have complete control over the hardware and software. The new bylaws will not permit that. There has to be some diversity
[15:19:03] <janrinok> Fnord666, laters
[15:19:36] <AlwaysNever> kolie is the best option right now
[15:20:21] <AlwaysNever> the bylaws should not require more than a "safeguard person" have an up-to-date copy of the database
[15:20:30] <janrinok> We already have 2 but I am not announcing the other.
[15:21:59] <janrinok> bylaws only cover how a company will be managed. What you are referring to is covered by policy documents, and we are in complete agreement. That is why I said dispersed hardware, so that one person is NOT in a position of absolute control.
[15:23:35] <janrinok> anyway, back in a while after I have got my evening meal cooking
[15:23:38] <janrinok> afk
[16:00:23] -!- soylentil89 [soylentil89!~soylentil@131.226.lg.wi] has joined #soylent
[16:06:43] <fab23> janrinok: if SN buys own H/W, you still need a spot in a colocation and internet connectivity, which also costs some $$$. There are even VPS hosters around where traffic limits are quite high and if e.g. used up, they just scale you down to 10 Mbit/s (e.g. Hetzner), which I think is plenty for SN anyway.
[16:10:56] <fab23> janrinok: see e.g. https://www.hetzner.com and switch to "Shared vCPU (x86)" for low prices
[16:10:58] <systemd-oomd> ^ Truly thrifty cloud hosting - Hetzner Online GmbH ( https://www.hetzner.com )
[16:13:02] <fab23> janrinok: you can even create additional disks for data, which I would recommend. Start with the lowest system; you can scale up cpu/ram only (but not the disk), so scaling down again is possible as well.
[16:16:46] <janrinok> fab23, Yes, both our current servers provided by kolie, and those offered by another person come with connectivity almost for free. However, it is unlikely that we will need to spend $6000 per annum for the amount of traffic that we currently have, or even for any reasonable increase in community size.
[16:17:57] <janrinok> I am not making decisions that should be made by the new site and community - that is for them to decide for themselves.
[16:18:33] <fab23> janrinok: As far as I have learned from the orange site, VPSes are much cheaper from EU providers than US providers. ;)
[16:18:41] <janrinok> I think at the moment any reasonably-spec'd home machine would meet our needs.
[16:19:19] <fab23> I also think so, small system with probably 2 - 4 CPU and 16 - 32 GB RAM should be fine
[16:19:51] <janrinok> I will bear that in mind. Thanks for the info.
[16:20:43] <Runaway1956> I've rebooted my router over and over, and still get "Bad Gateway"
[16:21:12] <Runaway1956> whoops /endsarcasm
[16:21:45] <janrinok> yep - it isn't coming back up today. I have emailed everyone I can but I do not expect a response during the weekend. Do you have backscroll? ^^^
[16:21:52] <fab23> at Hetzer you can also rent a dedicated Server, starts below 50.- / month with decent configuration: https://www.hetzner.com
[16:21:53] <systemd-oomd> ^ Dedicated Server Hosting
[16:22:08] <janrinok> fab23, I am looking at their site now
[16:22:49] <janrinok> Runaway1956, if you have back scroll go back an hour or two and catch up. You may be able to input something?
[16:22:57] <fab23> janrinok: I also cannot understand why Linode eats up 6k / year.
[16:23:00] <Runaway1956> I got a mechano keyboard, 96%, they didn't give me a backscroll.
[16:23:09] <Runaway1956> again /endsarcasm
[16:23:10] <janrinok> lol
[16:23:44] <janrinok> You can look at the logs https://logs.sylnt.us
[16:24:10] <Runaway1956> It's rough when you take my smartassery seriously jan.
[16:24:28] <janrinok> I didn't - I have been chuckling along
[16:25:04] <janrinok> but if you haven't got znc then the logs are the next best thing
[16:25:44] <Runaway1956> Since we mentioned keyboards - I'm really liking my El Cheapo brand of mechanical keyboard - it has off-brand brown keys that really resemble some of the old electric typewriters.
[16:26:33] <Runaway1956> It's not a real IBM clicky keyboard by a long shot, but it's really nice - much nicer than any membrane keyboard.
[16:28:01] <janrinok> I've settled on logitech kbds - but not the very cheap ones. You get what you pay for.
[16:49:13] -!- mrpg [mrpg!~Thunderbi@Soylent/Staff/Editor/mrpg] has joined #soylent
[16:49:13] -!- mode/#soylent [+v mrpg] by Imogen
[16:50:48] <mrpg> Bad Gateway! Go stand in the corner!
[16:51:24] <janrinok> hi mrpg! Hope you are well
[16:52:29] <mrpg> Hi, I hope you are well too
[16:52:47] <mrpg> Here it is winter, 4 to 15 Celsius
[16:53:20] <janrinok> I am - well, ok for my age :D I am enjoying a warm late spring day, but that won't make you feel any better...
[16:59:54] -!- anontor [anontor!~anontor@ykv-tnak-7.zbau.f1netze.de] has joined #soylent
[17:27:19] <mrpg> Will make me feel better but not hotter.
[17:27:24] <mrpg> https://www.youtube.com
[17:27:25] <systemd-oomd> ^ Asking French People At The Train Station: Where Are You Going?
[17:28:41] <mrpg> I have 3 weeks waiting for ubuntu budgie 24.04, there are some problems when upgrading so they stopped the upgrade. the update.
[17:38:16] <mrpg> It was something about python, but now I think it is something related to snapd, I forgot.
[17:47:02] <mrpg> take care, later.
[17:47:06] -!- mrpg has quit [Quit: chao pescao]
[17:47:14] <janrinok> laters - just finished my evening meal!
[17:48:37] -!- gueso [gueso!~gueso@2605:59c8:25c7:tykg:joty:ogju:zuvt:swnt] has joined #soylent
[17:52:02] <chromas> The only thing that needs to be containerized really is rehash, because it's a shithole of messy code
[17:52:03] -!- aristarchus [aristarchus!~aristarch@121.127.mn.gzt] has joined #soylent
[17:53:23] <janrinok> rehash is containerised.
[17:53:58] <chromas> right, but irc and stuff don't really need it since it's easy enough to set up. if it's already done then that's fine though
[17:54:56] <janrinok> There are containers but I don't imagine that they have the ability to create new channels at will or to restrict access to them. But it is better than nothing if we lose the main servers.
[17:55:15] <janrinok> We always have Libera as a backup
[17:56:09] -!- gueso has quit [Quit: Ping timeout (120 seconds)]
[18:03:35] <fab23> janrinok: some other thoughts about potential new SN infra: if you start to distribute the servers to multiple locations, then you still need either a central database or a primary - primary sync between multiple servers, but then the application needs to support that as well, e.g. with creating globally unique IDs.
[18:06:01] <janrinok> Yep, but we have done that before so it _is_ possible, but I will have to look into it more deeply. The other idea is to have each server assume the master role for a period of time (7 days?) before resyncing the other systems and then handing over. That would mean that updates and other changes could take place in that server's downtime without affecting the system.
[18:06:25] <chromas> sounds expensive
[18:06:41] <fab23> and error prone
[18:07:00] <janrinok> nah. it is only a case of sending a copy of the database to the other servers. They load it and one of them becomes the master.
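The hand-over janrinok sketches is, at its simplest, dump, copy, load; the hostnames and paths are placeholders:

```sh
mysqldump --single-transaction --all-databases | gzip > sn-$(date +%F).sql.gz
scp sn-*.sql.gz admin@standby.example.org:/srv/backups/
ssh admin@standby.example.org 'zcat /srv/backups/sn-*.sql.gz | mysql'
# ...then repoint DNS (or the load balancer) at the standby.
```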
[18:07:08] <chromas> having multiple dbs (mysql cluster) has only brought pain and not even one instance of benefit
[18:08:04] <janrinok> The benefit is that we have a system that is beyond the control of a single individual, and we can use the 'downtime' server as dev.
[18:08:24] <fab23> janrinok: the other thing, if e.g. stuff is rented from a Hoster, have the credentials shared with 2 - 3 people in charge of the SN, and where possible and needed create dedicated access for e.g. sysadmin. In Hetzner Cloud you can e.g. share a project (a collection of VMs) with 3rd party.
[18:08:29] <janrinok> I wouldn't suggest doing it using mysql cluster.
[18:08:48] <fab23> janrinok: there is still the single point of access to the domain :)
[18:09:37] <janrinok> But that can be changed if someone decides that they want to take control without agreement. Anyway, we are miles away from this yet. This is all spitballing.
[18:09:49] <chromas> there's nothing you can do about that
[18:10:06] <fab23> janrinok: hm, maybe have a look into MariaDB Galera cluster, or else PostgreSQL, which can do such setups out of the box.
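For flavour, the Galera option fab23 mentions is mostly a per-node config stanza; the IPs are illustrative, the provider path varies by distro, and this is a sketch rather than a recommendation:

```sh
cat > /etc/mysql/conf.d/galera.cnf <<'EOF'
[galera]
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address = gcomm://192.0.2.10,192.0.2.11,192.0.2.12
binlog_format = ROW
default_storage_engine = InnoDB
EOF
galera_new_cluster    # bootstrap the cluster on the first node only
```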
[18:10:12] <chromas> we all had access and each could have locked everyone else out, but didn't
[18:10:47] <fab23> that is always the risk when multiple people need / have access
[18:11:12] <janrinok> I'm not sure postgresql is going to help us because rehash seems to be stuck with mysql/mariadb for some specific format requirements. But again, I am not looking closely at that...
[18:11:14] <chromas> how about mongodb? I heard it's webscale
[18:11:23] <fab23> but on the other hand, I am currently helping somebody out getting access to servers from a person who died unexpectedly.
[18:12:09] <fab23> janrinok: just an idea with PostgreSQL, sure it depends on needed features of the application
[18:12:31] <janrinok> we will have a board that is spread over 2 continents and staff that are probably equally diverse. I think we will be safe.
[18:12:46] -!- aristarchus has quit [Quit: Client closed]
[18:13:26] <janrinok> chromas, for the time being let's keep it simple - KISS
[18:14:51] <fab23> one server with everything :)
[18:15:14] <janrinok> chromas, you say it sounds expensive - it will be cheaper than $6000 per annum!
[18:15:46] <chromas> well yeah, there was never a reason for it to be that expensive. one single server with remote backups is all we ever needed
[18:16:27] <janrinok> agreed. But that is the reason for having more than one server installation.
[18:34:19] <Fnord666> chromas: You left off the /s. :) mongodb or any noSQL solution will be a significant change from the DBs we are using now.
[18:34:38] <chromas> the /s was loudly implied :)
[18:34:56] <chromas> I don't have to whip out this gem do I?
[18:35:01] <chromas> =yt mongodb is web scale
[18:35:01] <systemd-oomd> https://youtube.com - Episode 1 - Mongo DB Is Web Scale (05:36; 849,011 views; 👍12,334)
[18:35:02] <Fnord666> Yes it was but it can't be loud enough that someone won't miss it. :)
[18:35:12] <janrinok> lol
[18:35:36] <Fnord666> There will be no whipping out of anything in this channel.
[18:39:21] <chromas> https://scontent-sea1-1.xx.fbcdn.net
[18:40:33] <janrinok> or wagon train?
[18:44:13] <Fnord666> Lol not a chance I'm clicking a random link like that. Sorry. :)
[18:45:39] <janrinok> It was safe as images go - but not as safe as not even looking :)
[18:46:21] <janrinok> I probably compromised my inside leg measurement and colour of eyes with all that crap that followed though
[20:16:49] -!- soylentil79 [soylentil79!~soylentil@ilz-248-735-137-165.biz.spectrum.com] has joined #soylent
[20:23:08] -!- randymon [randymon!~randymon@172.58.ixl.wm] has joined #soylent
[20:23:57] <randymon> yo
[20:24:08] <requerdanos> Greetings.
[20:24:38] <randymon> i'm late to the discussion, just wondering what the situation is with the 502. Is it a site rebuild that's required?
[20:26:15] <requerdanos> according to my understanding, a misconfiguration resulting in a disk getting full is exacerbated by a misconfiguration or other problem preventing easy access to the first problem. The short-term solution (correct the disk full) and the long-term solution (that site rebuild you mentioned) both seem to be indicated.
[20:27:07] <randymon> bummer. thanks for the explanation, i'll have to just check back in a couple of weeks
[20:27:10] -!- randymon has quit [Client Quit]
[20:41:34] <chromas> all the nonsense in the url lets you make your way through to the image without needing an account and all that
[21:03:11] -!- Runaway1956 has quit [Read error: Connection reset by peer]
[21:03:14] -!- Runaway [Runaway!~OldGuy@the.abyss.stares.back] has joined #soylent
[21:11:41] -!- soylentil79 has quit [Quit: Client closed]
[21:19:43] -!- soylentil58 has quit [Quit: Client closed]
[23:41:00] <chromas> https://www.youtube.com
[23:41:00] <systemd-oomd> ^ Sat
[23:41:08] <chromas> oh
[23:41:14] <chromas> https://www.youtube.com
[23:41:17] <systemd-oomd> ^ 03"If you can dream it, you can do it" - Royce du Pont #motivational
[23:52:31] -!- soylentil89 [soylentil89!~soylentil@131.226.lg.wi] has parted #soylent