#soylent | Logs for 2024-05-20

« return
[00:07:00] -!- ChrisK [ChrisK!~ChrisK@2001:67c:6ec:zms:wwt:js:til:wqo] has joined #soylent
[01:29:43] -!- soylentil72 [soylentil72!~soylentil@zqy63-321-383-834.static.internode.on.net] has joined #soylent
[01:30:11] -!- soylentil72 has quit [Client Quit]
[01:35:45] -!- soylentil17 [soylentil17!~soylentil@tmw-111-280-780-334.res.spectrum.com] has joined #soylent
[01:38:59] -!- gueso [gueso!~gueso@2605:59c8:25c7:tykg:joty:ogju:zuvt:swnt] has joined #soylent
[01:41:41] -!- NotSanguine [NotSanguine!~Thunderbi@dgp-169-243-882-749.biz.spectrum.com] has joined #soylent
[01:42:31] -!- NotSanguine has quit [Client Quit]
[02:33:00] -!- gueso has quit [Quit: Ping timeout (120 seconds)]
[02:55:49] -!- drussell has quit [Ping timeout: 252 seconds]
[02:56:44] -!- drussell [drussell!~drussell@a4627691kd3g1a0z4.wk.shawcable.net] has joined #soylent
[03:03:35] -!- soylentil17 has quit [Quit: Client closed]
[03:42:07] -!- fliptop has quit [Quit: Ex-Chat]
[07:26:36] <janrinok> I have just exchanged emails with NCommander. He will be unavailable for this week and possibly next week too.
[07:27:11] <janrinok> .op
[07:27:11] -!- mode/#soylent [+o janrinok] by Imogen
[07:27:45] janrinok changed topic of #soylent to: The site is giving a 502 ERROR. We are working on possible solutions but NCommander will not be available this week and possibly next week too.
[07:27:54] <janrinok> .deop
[07:27:54] -!- mode/#soylent [-o janrinok] by Imogen
[07:31:26] -!- ChrisK has quit [Ping timeout: 258 seconds]
[07:52:28] <prg> so how's the new corporation coming along? still waiting for some paperwork?
[07:54:07] <janrinok> That has only 1 step to go and that is the creation of a bank account. It is hindered a little by having the necessary people spread over 2 continents but it will hopefully be sorted soon. Then we are good to go. NCommander and the Board have been kept informed.
[07:54:47] <janrinok> The bank would prefer that all 3 people appear at the same time in the same bank - but that isn't going to happen!
[07:56:11] <janrinok> I have been busy elsewhere with the site so it might have already been resolved but I just haven't kept up-to-date on developments.
[07:56:50] <prg> at least it sounds like it's moving in the right direction then
[07:58:02] <janrinok> oh yes - the site downtime is my biggest headache at the moment, as you might have guessed if you read the logs. https://logs.sylnt.us
[07:59:07] <prg> yeah I keep lurking here and read about what's happening
[08:02:56] <janrinok> we have (I believe) a quick fix solution but I am still awaiting access to the necessary servers to try to implement it. And it is somewhat experimental so we will have to wait and see....
[08:04:20] <Ingar> as my lawyer said, what the bank prefers is irrelevant, what's legal is relevant
[08:04:26] <prg> wish you best of luck then that you get it sorted out
[08:05:20] <Ingar> is there a backup of the db?
[08:05:38] <Ingar> the obvious solution is just to migrate
[08:07:43] <janrinok> Ingar, I agree but we need NCommander's assistance for that and he is not available this week, and possibly the next also
[08:19:04] <Ingar> also, cert for the logs site has expired :D
[08:21:24] <janrinok> Yes, that is all part of the same problem. The certs are on a server I cannot reach. kolie is looking at my access to try to fix it. The mail system is not working either.
[08:28:34] -!- drussell has quit [Ping timeout: 252 seconds]
[08:29:14] -!- drussell [drussell!~drussell@a4627691kd3g1a0z4.wk.shawcable.net] has joined #soylent
[09:36:54] <Ingar> appropriate soundtrack https://www.youtube.com
[09:36:56] <systemd-oomd> ^ Horizon
[09:38:57] <janrinok> we are just resting!
[09:51:34] <fab23> thats perfectly fine on a public holiday
[09:55:19] <janrinok> what holiday is it today?
[09:55:34] <janrinok> They are celebrating here too, but I have no idea what it is
[09:56:11] <janrinok> I think it might be Pentecost...
[09:56:16] <Ingar> Pentecost Monday
[09:57:40] <fab23> yes it is
[09:57:58] <janrinok> I'm not used to all the religious holidays that they recognise here.
[09:58:16] <Ingar> appropriate soundtrack https://www.youtube.com :-)
[09:58:17] <systemd-oomd> ^ IXION | Original Soundtrack | 07 A Speech On Earth
[09:58:47] <Ingar> (never played that game, love that track)
[09:58:53] <fab23> janrinok: probably an advantage of retirement, else you would know them. :)
[09:59:00] <janrinok> I've managed to restart mysql but it hasn't made any difference
[09:59:21] <chromas> Whether your cost is penta or holo, it's always a good excuse to party
[09:59:23] <janrinok> Nah, the UK doesn't recognise most of them
[10:00:10] <Ingar> makes sense, them heathen
[10:00:42] <fab23> janrinok: so you should also be able to adjust the server's my.cnf then? I guess even when you turn off binlog, you may still have to remove existing files manually.
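[A minimal sketch of the my.cnf change fab23 is describing, assuming a MySQL 5.x-era server; the path and values are illustrative, and MySQL 8.0 renames the variable to binlog_expire_logs_seconds:]
    # /etc/mysql/my.cnf (assumed location)
    [mysqld]
    expire_logs_days = 3      # auto-purge binlogs older than 3 days at rotation/restart
    max_binlog_size  = 100M   # rotate more often so purging frees space sooner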
[10:01:29] <chromas> KDE shows me a ton of religious holidays too. Ascension, Corpus Christi, Trinity, Memorial Day, New Moon
[10:02:04] <Ingar> Full Moon is the one we care about
[10:02:32] <Ingar> need to mow the lawn around the stone circle in the forest
[10:02:47] <chromas> it's got full and also first quarter
[10:02:52] <chromas> no waxing gibbous though
[10:04:50] <chromas> You must mow the lawn with a group of sheep the size of which is not a multiple of five, and be sure to go counterclockwise, or if you're in the UK, anticlockwise
[10:05:45] <janrinok> is that anticounterclockwise, or just counteranticlockwise?
[10:05:45] <Ingar> all while wearing the traditional top hat
[10:06:16] <janrinok> I am wearing it now - how did you know? Have you hacked my IRC?
[10:06:52] <Ingar> the Fellowship of the Top Hats doesn't need hacks to recognize fellow members
[10:06:59] <janrinok> fab23, I was hoping so, but life is never that easy...
[10:07:22] <Ingar> (I actually do own a Top Hat :-p )
[10:07:55] <janrinok> I hire mine...
[10:09:33] <janrinok> I'm looking for a simple command to reboot the entire system
[10:09:47] <Ingar> sudo reboot
[10:10:25] <janrinok> nah - that might break the other servers which are relying on this one for their data. They have to be restarted in a specific order I believe
[10:10:49] <Ingar> I assume, first the DB, then the web
[10:11:03] <janrinok> but if I don't make any progress I might try that as a last resort - we can't have less site than the one we have now.
[10:11:23] <Ingar> janrinok: if the problem is disk full, rebooting most likely won't solve anything
[10:11:29] <janrinok> I know.
[10:15:12] <fab23> du -sch /path/to/mysql/* | sort -h # will show you where the largest files are.
[10:18:28] <janrinok> yes, but you MUST NOT just delete them - there is a specific warning against doing that. You have to get into mysql and use a specific PURGE command. My next hurdle is actually getting to a mysql prompt. There are passwords all over the place - some of which seem to be out of date.
[10:20:16] <janrinok> I can see all the bin files, I can set the duration to 3 days rather than a month, but until I can get inside mysql I cannot PURGE them. The warning states quite clearly that mysql will simply fall over and will require more recovery than we need at the moment.
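[The purge janrinok is referring to, done from a mysql prompt rather than by deleting files; a sketch assuming MySQL 5.x syntax:]
    mysql> SHOW BINARY LOGS;   -- list the bin files and their sizes
    mysql> PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 3 DAY);
    -- never rm the files directly: the server tracks them in its binlog index,
    -- which is the "fall over" the warning describes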
[10:22:48] <fab23> I see
[10:23:17] -!- soylentil86 [soylentil86!~soylentil@614.138.721.662.dyn.plus.net] has joined #soylent
[10:23:49] -!- soylentil86 has quit [Client Quit]
[10:25:15] <fab23> janrinok: the setting with 3 days is in the .cnf? and MySQL does not check / purge on restart?
[10:25:25] <janrinok> secondly, mysql is running but is not receiving any requests, it is simply waiting for something else. I suspect that Rehash is not running but that is a different problem completely.
[10:26:12] <janrinok> I don't know and I am not rushing to cause any more problems than we currently have.
[10:26:27] <fab23> so even a simple 'telnet localhost 3306' shows some kind of connection?
[10:26:43] <janrinok> I can see it in htop too
[10:27:41] <janrinok> Yes, it is working, I am now searching for the elusive pw
[10:28:38] <fab23> hm, is it maybe doing some checking or such, nothing in log file? check the *.err file in the mysql directory
[10:29:34] <janrinok> I have huge log files. I am waiting until kolie comes online as he knows more about how the site is currently configured and he must also know the passwords - so that he could build the containers.
[10:31:41] <fab23> good plan
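[The checks fab23 suggests, as concrete commands; the datadir path is an assumption:]
    telnet localhost 3306                # a live server answers with a protocol banner
    tail -n 50 /var/lib/mysql/*.err      # recent startup/recovery errors, assumed datadir
    mysqladmin -u root -p status         # quick liveness check once the password turns up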
[12:34:55] -!- fliptop [fliptop!~fliptop@69.43.kn.gu] has joined #soylent
[12:35:48] -!- fliptop has quit [Changing host]
[12:35:48] -!- fliptop [fliptop!~fliptop@Soylent/Staff/Sysop/fliptop] has joined #soylent
[12:35:48] -!- mode/#soylent [+v fliptop] by Imogen
[13:03:03] <drussell> Here in Canada we call today "Victoria Day." (Except, of course, in Quebec where they call it "National Patriotes Day" because they couldn't possibly bring themselves to honor the monarch of the wrong country...)
[13:03:41] <drussell> "Journée nationale des patriotes or Fête des Patriotes"
[13:22:49] <inz> Here in Finland we call today "monday"
[13:23:13] <Ingar> I doubt Monday is monday in Suomi
[13:36:01] <inz> That'd be maanantai, but I assume the Quebecians ackshully call the day "Journée nationale des Patriotes" too
[14:15:18] <janrinok> lol
[14:15:43] <janrinok> inz - nice one "we call today 'monday'"!
[15:20:56] -!- marginc [marginc!~marginc@2001:67c:6ec:hqg:nti:wy:qwo:jgv] has joined #soylent
[15:24:24] -!- schestowitz[TR2] [schestowitz[TR2]!~schestowi@2a00:23c8:7480:soqy:xoix:hkxh:kshk:jluy] has parted #soylent
[15:57:44] <kolie> o/
[16:04:31] <janrinok> hi kolie
[16:04:58] <kolie> I don't know the password, it's stored on the device, I can check where.
[16:05:40] <janrinok> ah ok, I thought perhaps you used the same password for the containers
[16:07:05] <janrinok> And have you any idea which server has the apache installed?
[16:08:00] <kolie> Not on the top of my head, I solve problems as they come up and usually run down the stack without any assumptions.
[16:08:26] <kolie> I work on a lot of linux systems, tough to keep them all in mind but I'm pretty good with digital archaeology
[16:08:34] <janrinok> that is the next problem. The apache that I have found - which is probably not the correct one - is reporting an error and is unable to start. But the logs are empty and don't appear to have been used since 2022 - which corresponds to when NC began his updates.
[16:08:37] <kolie> that's why the scripted infra is nice.
[16:08:56] <janrinok> I couldn't agree more!
[16:08:59] <kolie> Ok, well I wouldn't go around trying to start various services unless you are sure that's the service and it's in use
[16:09:02] <kolie> there's a ton of old config.
[16:09:33] <kolie> The way I do this is, I check the site's ip, find the box with that ip, see what's running on 443 on that box, check its config, see where it goes to
[16:09:36] <janrinok> Yeah, and the documentation is no use whatsoever - it bears little resemblance to what we have today.
[16:10:05] <janrinok> We tried that, I cannot find a match but I don't know which names are now in use.
[16:10:16] <kolie> ok well lets do that then.
[16:10:38] <kolie> 23.239.29.31 is the front end
[16:11:07] <kolie> magnesium is holding that.
[16:11:25] <kolie> 443 is held by nginx
[16:11:38] <janrinok> The only box I cannot get into is Magnesium!
[16:12:30] <kolie> 443 is passed to rehash via proxy pass
[16:12:51] <kolie> rehash is defined as fluorine
[16:13:34] <kolie> so fluorine is likely having an issue
[16:14:57] <kolie> disk looks fine.
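[The walk kolie just described, as generic commands; the hostnames and IP are from the chat, everything else is a sketch:]
    dig +short soylentnews.org           # 23.239.29.31 -> which box holds it? (magnesium)
    ss -tlnp | grep ':443'               # what owns 443 on that box (nginx)
    grep -r proxy_pass /etc/nginx/       # where nginx forwards (rehash, i.e. fluorine)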
[16:16:27] -!- halibut has quit [Quit: Timeout]
[16:17:09] <janrinok> the /etc/apache2 directory on fluorine has nothing that I recognise as normal Apache configuration
[16:17:49] <kolie> nothing of too much interest there tbh
[16:18:34] <janrinok> there is no systemd service configured for apache2 on fluorine
[16:19:08] <kolie> yea one sec, you can't just assume this is like any other system
[16:19:16] <kolie> if its not there, its setup another way
[16:19:23] <janrinok> lol - that is one thing that I have discovered!
[16:20:01] <janrinok> why use standards... :)
[16:20:40] <kolie> There are a few assumptions there, one being that the only standard way to run apache directly applies here :)
[16:20:53] <kolie> but give me a second to look at this..
[16:21:15] <kolie> the other fun thing, which i know you ran into
[16:21:20] <kolie> There is a delay on commands timing out.
[16:21:29] <kolie> so some boxes have a 20 second shell delay
[16:21:51] <janrinok> that explains some of the behaviour that I have been seeing
[16:22:42] -!- halibut [halibut!~halibut@CanHazVHOST/halibut] has joined #soylent
[16:27:53] <kolie> I don't see port 80 listening on uhh fluorine
[16:27:56] <chromas> custom roots was always the sn way
[16:28:08] <kolie> it's /srv/soylentnews, yea, that's the apache dir
[16:28:12] -!- pTamok [pTamok!~pTamok@zxjqc-6p2n34-211.connect.netcom.no] has joined #soylent
[16:28:26] <kolie> which makes sense in production where you have potentially different configs on the same box... vms makes this a little silly.
[16:29:54] <janrinok> ok, I can understand that
[16:30:35] <kolie> and slash runs under uhh, /etc/init.d/slash if i recall correctly.
[16:31:27] <chromas> that is unexpected! I recall it being under /home in the past :D
[16:32:17] <janrinok> I don't feel quite so stupid now...
[16:32:40] <janrinok> another point for docker
[16:33:25] <chromas> use systemd-nspawn; you can just pass it directories or images to boot :)
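[What chromas means, roughly; the paths are hypothetical:]
    systemd-nspawn -b -D /srv/containers/rehash   # boot a directory tree as a container
    systemd-nspawn -b -i rehash.raw               # or boot a disk image directly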
[16:34:06] <kolie> ok
[16:34:08] <kolie> so varnish was down
[16:34:17] <kolie> website is back up
[16:34:20] <kolie> systemctl start varnish
[16:34:27] <kolie> got 80 listening on fluorine
[16:34:33] <chromas> what's the front-end if not varnish?
[16:34:46] <kolie> nginx is on uhh whatever i said before.
[16:34:53] <janrinok> you are a star!
[16:34:55] <kolie> And it does ssl termination
[16:35:02] <kolie> And it forwards to fluorine
[16:35:07] <chromas> nginx -> varnish -> slashd ?
[16:35:10] <kolie> fluorine's first entry is into varnish yea
[16:35:29] <chromas> yo dawg I put a front-end on your front-end :D
[16:35:34] <chromas> Thanks for getting us up btw
[16:35:47] <kolie> rehash uses varnish in its normal setup, i guess we are the definers of that
[16:35:54] <kolie> but uhh yea rehash runs on fluorine
[16:36:05] <kolie> and all the other stuff needs the gateway which is the nginx.
[16:36:12] <kolie> you know to route to other non rehash things.
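[The chain as kolie lays it out, sketched as config; the internal port numbers are assumptions beyond what the chat states:]
    # on magnesium: nginx terminates SSL and forwards to fluorine
    server {
        listen 443 ssl;
        server_name soylentnews.org;
        location / { proxy_pass http://fluorine; }    # varnish answers on 80 there
    }
    # on fluorine: /etc/varnish/default.vcl, varnish fronting rehash/apache locally
    vcl 4.0;
    backend rehash { .host = "127.0.0.1"; .port = "8080"; }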
[16:36:35] <kolie> I see how you get this setup, eventually, piece by piece, it makes logical sense if you design this over time how it got this way.
[16:37:13] <kolie> It's not like I have documentation on this, I literally just walked through debugging this from nothing, but yea, glad it's up.
[16:38:09] <kolie> My goal before was not to figure this out, but to do a reimplementation from the knowledge of how it got here: what we need now from a fresh update, if we have greenfield infra
[16:38:13] <chromas> weird to see varnish be down though. I don't recall it ever dying on us
[16:39:32] <kolie> Agreed.
[16:39:53] <kolie> you wanna really bake your noodle?
[16:40:23] <janrinok> if I knew what it meant - but go ahead
[16:40:46] <kolie> lol, anything in the old system that should work and doesn't, like this, every time it's like, well, that's never happened before.
[16:41:00] <kolie> Too many hands in the pot over time i suspect.
[16:41:08] <janrinok> quite probably
[16:41:12] <kolie> And all of them well intentioned but not necessarily on the same page.
[16:41:23] <janrinok> not even in the same book!
[16:41:26] <chromas> yeah, slash dying on us used to happen at least weekly, but now it's All New™ funs
[16:41:41] <kolie> Sorry I couldn't
[16:41:44] <chromas> I wouldn't say all well intentioned, but mostly
[16:41:45] <kolie> get around sooner, kids and all.
[16:41:59] <kolie> Just happened to have them a lot lately.
[16:43:29] -!- soylentil88 [soylentil88!~soylentil@80.111.pqz.op] has joined #soylent
[16:44:09] <chromas> it's alright; gave ari a couple days off from creating sockpuppets
[16:44:33] <kolie> he needed the break, he's been without vacation for a record now
[16:45:36] -!- soylentil88 has quit [Client Quit]
[16:45:38] <janrinok> np - I didn't expect you to respond over the weekend. I was pleased but surprised to get the email from NC this morning
[16:46:36] <janrinok> I tried doing what you did but I couldn't find the path through the system. But I wasn't expecting it to be quite as unusual as this.
[16:48:20] <kolie> Yea, part of what I do for work is basically this kind of digital archaeology / high-value break-fix.
[16:48:32] <kolie> Happy to assist of course.
[16:51:35] <kolie> certbot renewal is on magnesium
[16:51:39] <kolie> certbot certonly --server https://acme-v02.api.letsencrypt.org --manual --preferred-challenges dns
[16:51:39] <kolie> -d '*.soylentnews.org,*.sylnt.us,soylentnews.org,sylnt.us'
[16:51:54] <kolie> restart nginx after, and then you'll have to find and copy the certs to the other services.
[16:52:09] <janrinok> I cannot get to magnesium via ssh or kerberos.
[16:53:40] <kolie> so on chat.soylentnews.org, as root, ssh magnesium
[16:53:48] <kolie> boom, you're on mag.
[16:54:26] <janrinok> Nope, for me it asks for a password
[16:54:37] <kolie> if you are on chat.soylentnews.org
[16:54:43] <kolie> and you type "sudo su" you become root
[16:54:47] <kolie> then ssh magnesium works
[16:54:57] <janrinok> which server is chat.soylentnews.org?
[16:55:03] <kolie> beryllium
[16:55:49] <janrinok> I've been trying to get there for weeks as root and it would not let me.
[16:56:09] <kolie> trying to get on berry?
[16:56:28] -!- marginc has quit [Ping timeout: 258 seconds]
[16:56:36] <janrinok> I've been able to get to bery all the time. I can ssh to any other box as janrinok
[16:56:49] <kolie> yea well ssh to mag only works as root.
[16:56:56] <janrinok> I have tried ssh'ing as root but no joy - until now
[16:57:15] <kolie> I didn't change anything, not sure but happy it works for you now.
[16:57:16] <janrinok> There is a logic behind that?
[16:57:26] <janrinok> I'm happy too!
[16:57:47] <kolie> I suspect you may not have been sudo'd and it was an oversight tbh because this is how I always get into mag.
[16:58:37] <janrinok> possibly, but as every other box accepts me as janrinok and then allows me to sudo up to root, I might have tried that far more times than as root
[16:59:22] <janrinok> If the site is up I can get back to where we started - trying to fix the certs!
[16:59:42] <kolie> yea I don't understand the kerb/ticket system and it was wonky in the past, hesiod was repaired at some point but honestly, I just root into everything these days.
[16:59:58] <kolie> I can assist if we can document the steps for the future :)
[17:00:08] <kolie> what certs need to be updated?
[17:01:25] <drussell> The one on the mail server / IRC server is still out of date
[17:01:46] <kolie> Ok so after running certbot, latest certs will be in /etc/letsencrypt/live/soylentnews.org
[17:01:56] <janrinok> yes, as is the one on IRC
[17:02:21] <kolie> dovecot and solanum need manual help after certbot.
[17:02:23] <drussell> 72.14.184.41
[17:03:04] <kolie> dovecot actually has the right path
[17:03:09] <kolie> so usually it just needs a restart
[17:03:18] <janrinok> I'm going to have to go and make my evening meal - I am an hour late already :)
[17:03:28] <janrinok> But a happy 1 hour late....
[17:04:02] <kolie> "systemctl restart dovecot" should be enough, I just executed it.
[17:05:18] <kolie> ircd should be the same, I'm not sure if a rehash is enough
[17:24:34] <fab23> kolie: don't know about the IRCd in use on SN, but for others rehash or so is enough, restart would kick everybody out
[17:26:47] <fab23> for inspircd it's e.g. kill -USR1
[17:37:01] <AlwaysNever> hey, nice to see the web site working again!
[17:37:07] <AlwaysNever> thanks kolie for the help
[17:37:30] <AlwaysNever> there is nothing like root access to everywhere!
[17:40:25] <kolie> well check ircd if it has a new cert or not
[17:41:36] <kolie> yea, doesn't look like a rehash refreshes the ssl files on disk
[17:43:34] <fab23> does reload (instead of restart) work?
[17:48:32] <kolie> ok rehash does reload
[17:48:37] <kolie> i think the uhh cert location is wrong
[17:51:08] <janrinok> kolie - did you see the comment that I left in the PM?
[17:52:23] <kolie> I did.
[17:52:49] <kolie> I don't know how well dev is/isn't set up or configured. I wasn't sure if it was left in a productionish state.
[17:54:17] <kolie> after certbot is run on magnesium, on bery run scp root@magnesium:/etc/letsencrypt/live/soylentnews.org/* /etc/letsencrypt/live/soylentnews.org/
[17:54:17] <kolie> , then systemctl restart dovecot, then /rehash as an ircd oper
[17:54:53] <kolie> that will update email and ircd certs. I believe that solanum and services will stop talking when that is done, until services have the right server fingerprint added ( its based on the current ssl cert )
[17:57:55] <janrinok> kolie> , then systemctl restart dovecot, then /rehash as an ircd oper - I have no idea what this means. :)
[17:58:13] <kolie> restarting dovecot is done on uhh beryllium
[17:58:19] <janrinok> I can restart dovecot easy enough...
[18:01:12] <kolie> The steps are 1) running the certbot incantation on magnesium, 2) scp the updated certs to the auxiliary services which are all on chat.soylentnews.org/beryllium 3) restarting dovecot to reload the cert on bery 4) running /rehash as an oper or -USR1 on solanum
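[kolie's four steps gathered into one hedged run-through; the certbot line and scp path are from the chat, the rest assumes a standard setup:]
    # 1) on magnesium: renew via manual DNS challenge, then reload the front end
    certbot certonly --server https://acme-v02.api.letsencrypt.org --manual \
        --preferred-challenges dns \
        -d '*.soylentnews.org,*.sylnt.us,soylentnews.org,sylnt.us'
    systemctl restart nginx
    # 2) on beryllium: pull the renewed certs across
    scp root@magnesium:/etc/letsencrypt/live/soylentnews.org/* /etc/letsencrypt/live/soylentnews.org/
    # 3) mail picks up the new cert on restart
    systemctl restart dovecot
    # 4) ircd: /rehash as an oper, or the signal mentioned above
    pkill -USR1 solanum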
[18:03:47] <kolie> btw the current docker is on linode.
[18:05:33] <kolie> I have plenty of dedicated servers / VPS's available on my company's hosting platform, and my offer stands to run/provision vms on my commercial services just as any other customer I'd take on for hosting. Plenty of idle capacity, and as hosting is SoylentNews's primary expense it seemed the least I could do to further the value of subs.
[18:06:25] <kolie> With the docker scripts + automating backup importing and exporting via chef/puppet/ansible (whichever), who is hosting it is not really a concern as long as you periodically test that the environment loads up fresh somewhere.
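[The periodic "does a fresh copy come up" test kolie describes, as a hedged cron sketch; the hosts, filenames, and compose file are hypothetical:]
    #!/bin/sh
    set -e
    # monthly DR drill: restore the latest backup into a throwaway compose stack
    scp backups:/srv/backups/soylent-latest.sql.gz /tmp/
    docker compose -f docker-compose.test.yml up -d db
    gunzip -c /tmp/soylent-latest.sql.gz | \
        docker compose -f docker-compose.test.yml exec -T db mysql soylent  # credentials omitted
    docker compose -f docker-compose.test.yml up -d    # bring up rehash and the frontends
    curl -fsS http://localhost:8080/ >/dev/null && echo "restore OK"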
[18:08:03] <kolie> But yea, I don't particularly care; if you need/want it, it's available, and I seem to have a track record at sn so it's better than a random offer. Otherwise, I have the technical capacity to assist with any major issues and I know the stack/software/players.
[18:09:12] <kolie> Ideally the system would be documented down very particularly in a DR plan, and various staff trained on it, so you wouldn't have to be particularly technical but you could pull down / reboot the system elsewhere with minimal specific knowledge.
[18:11:08] <janrinok> We have been asking for that for a long time - about 10 years now. I've asked for it, martyb has asked for it, and so have others. But it requires someone who already knows how it works to write it. And that is where the problem begins...
[18:11:42] <kolie> Well, I got the closest it's been done in years :)
[18:12:23] <janrinok> The docker version is almost self documenting. But nobody is going to write a DR plan for the current system.
[18:12:48] <kolie> My exact shared thoughts on that topic led to the reimplementation
[18:12:54] <requerdanos> unless "start over from scratch" counts as a plan
[18:13:39] <kolie> Assuming the right settings/knobs are set for the environment, the current docker system will be far more production grade than the existing system, as is.
[18:13:42] <janrinok> I agree - you have. And I wish that we could switch to it straight away. There will be hiccups but that will happen whatever path we follow. At least the docker problems are easily fixable as a rule.
[18:14:02] <kolie> It's been tested pretty well at this point.
[18:14:42] <kolie> who's the resident rehash/perl dev?
[18:15:29] <janrinok> I don't think it has been given a reasonable load to cope with so I am not sure how quickly it will respond if it becomes the prod, but we will never know until we try it.
[18:15:45] <janrinok> There is no resident rehash/perl dev.
[18:16:08] <kolie> That's the biggest fire you have, to figure out the future of perl.
[18:17:32] <janrinok> I think you must be confusing us with another site. mechanicjay understood it but it wasn't his job to maintain it, that finished with TMB 2 years ago. For the moment a working rehash container will suffice until we can get ourselves sorted. We have volunteer Perl programmers for the new site (2 I think).
[18:18:42] <janrinok> I have invited them to look at rehash but they are not required to do anything on the current site.
[18:19:41] <kolie> So without playing semantics, you do have someone in place for perl, cool.
[18:19:57] <kolie> Current old new hell heaven whatever the case may be.
[18:20:01] -!- NotSanguine [NotSanguine!~Thunderbi@dgp-169-243-882-749.biz.spectrum.com] has joined #soylent
[18:20:46] <janrinok> That is the plan, but we will have to wait and see. They volunteered about 8 months ago when we thought that a new site was just weeks away.
[18:21:16] <kolie> Yea, people wanted to slow-boat it, I get it.
[18:21:21] <kolie> Change is hard.
[18:21:47] <janrinok> we are getting there - but I wouldn't want to do it again
[18:28:19] <janrinok> We know that Perl has got to go. And we need to find an off-the-shelf app that we can give our own style and influence to. Just another web site would not be enough - they come and go too quickly.
[18:29:31] <kolie> After years of seeing these types of transitions: doing a feature-by-feature reimplementation in a modern language, with a growing/active/huge community with tons of support, is going to give you the best SN experience without sacrificing anything.
[18:29:44] <janrinok> Most people like the old style look and feel. Another smartphone app is not what the majority want.
[18:30:19] <kolie> Yea well, the front end can basically be kept, and a backend redeveloped keeping the same front end shell.
[18:30:40] <kolie> you could keep it pixel accurate if you really wanted.
[18:31:03] <janrinok> The first priority - I think - is to get a stable site that we can build a community from again. Each time we have a downtime we lose some community interest and trust.
[18:34:13] -!- pTamok has quit [Ping timeout: 258 seconds]
[18:37:38] <janrinok> It is getting on in my evening. I will have to refill the queues in the morning. But there are still a couple of stories that people are only just seeing for the first time so I think that will keep them interested.
[18:43:36] <janrinok> kolie, I'm going to have to go and prepare for tomorrow - I have several medical appts that I have to attend. Thanks again for your help in restoring the site, I'm sure others will say the same either on here or in comments on the site itself.
[18:44:06] <janrinok> I'll be back on tomorrow.
[18:47:22] <kolie> no worries man. always happy to help, this stuff is quick and easy for sure, SN has very small easy stuff, not hard production problems or scale.
[18:49:21] <janrinok> .op
[18:49:21] -!- mode/#soylent [+o janrinok] by Imogen
[18:49:35] janrinok changed topic of #soylent to: SN Main Channel | Keep discussions civil | https://soylentnews.org | Impersonating another user's nick is forbidden | Some PISG charts: https://stats.sylnt.us | This channel IS logged and publicly displayed here https://logs.sylnt.us
[18:49:46] <janrinok> .deop
[18:49:46] -!- mode/#soylent [-o janrinok] by Imogen
[18:51:02] -!- Xyem has quit [Quit: ZNC - http://znc.in]
[18:53:22] -!- drussell has quit [Ping timeout: 252 seconds]
[18:54:20] -!- drussell [drussell!~drussell@a4627691kd3g1a0z4.wk.shawcable.net] has joined #soylent
[18:55:21] -!- Xyem [Xyem!~xyem@yu801-49.members.linode.com] has joined #soylent
[18:55:22] -!- Xyem has quit [Changing host]
[18:55:22] -!- Xyem [Xyem!~xyem@Soylent/Staff/Developer/Xyem] has joined #soylent
[18:55:22] -!- mode/#soylent [+v Xyem] by Imogen
[19:38:31] -!- marginc [marginc!~marginc@192.42.wox.gkq] has joined #soylent
[20:16:08] <Ingar> http://ingar.intranifty.net(Aristarchus).jpg
[21:24:17] <Bytram> kolie: Hi there; long time no see!
[21:24:57] <kolie> been around, just lurking.
[21:25:30] <kolie> if you say soylent three times in a mirror I'm sure to appear behind you.
[21:25:38] <Bytram> I've had access problems for the last few days..
[21:25:52] <Bytram> seems ok now
[21:26:23] <Bytram> pm?
[21:26:27] <kolie> sure.
[21:42:10] <chromas> Maybe you can put pipecode up in place of rehash ;)
[21:42:52] <kolie> Too much missing, you'd give up a lot of admin and backend.
[21:45:07] <chromas> We don't know what all features it has but it shouldn't be hard to add whatever's missing
[21:45:37] <chromas> I wonder if Bryan's still around at all
[21:46:17] <chromas> Pipecode was made specifically to be a new slashdot-style site
[22:46:27] -!- marginc has quit [Ping timeout: 258 seconds]
[23:30:02] <drussell> The certificate for postfix is working properly now on 72.14.184.41 but the web server for the IRC logs at http://logs.sylnt.us is still on the expired one
[23:37:33] <AlwaysNever> janrinok: "We know that Perl has got to go" - what? why? SN is feature-complete, why a rewrite?
[23:37:46] <AlwaysNever> that is a crazy idea
[23:39:09] <AlwaysNever> push the Perl to the deep backend, and it will keep going
[23:41:49] <kolie> there are few if any who really understand the code base at all anymore, and it is largely unmaintainable. I believe the staff would like certain tooling, and there's a lot of stuff that would be nice to have on the site that the codebase as it is has made a non-starter.
[23:42:55] <kolie> Performance gains would be awesome, and not being tied to outdated apache / mod_perl as an api.
[23:43:40] <kolie> I know there has been some issue with the latest perl and the ecosystem around some of the plugins, which are now not working, and the services that use them have updated APIs; we basically just mark parts of the site INOP and limp on.
[23:45:01] <kolie> I'm also certain that there exist race conditions in the code that lead to invalid / corrupt linking in the DB.
[23:45:56] <AlwaysNever> To replace Perl is a task for when SN has 1 million active users - a task VERY far away in the future, if at all
[23:46:03] <kolie> I disagree.
[23:47:15] <kolie> It's a very simple site, and documenting the existing codebase and mapping feature by feature into a roadmap and then going down the bulleted list is a pretty straightforward task.
[23:47:47] <kolie> I think you'd likely attract more interest in the site, and active maintainers as a result.
[23:47:55] <AlwaysNever> Oh, man, I see a systemd-soylend coming....
[23:48:30] <kolie> Like you said it's feature complete. A modern implementation of those features which, you know, is maintainable is good enough.
[23:48:32] <AlwaysNever> my nerves will not stand another earthquake like that
[23:49:15] <kolie> How about a working donation progress bar / auto-updater that doesn't need martyb to manually edit templates every day?
[23:50:32] <kolie> You can't make a small change on the site without setting up a pretty brittle dev environment, and then anything you do set up, any change you make, you have zero idea what's going to actually happen when you push it to prod.
[23:50:36] <AlwaysNever> I would blackbox the apache_mod into a container/VM and let it be; just put a modern web proxy in front, and properly curate the MySQL database
[23:50:54] <kolie> That's where it's at on the dev system.
[23:55:49] <AlwaysNever> I don't see the need to manually edit templates to show the donations on the front page. Just make an official journal, where the Treasurer updates a-la-blog, and link that blog in one of the side panels on the front page, and no need to constantly be editing templates
[23:56:58] <kolie> Or just have updates reflected from the database and don't require any interaction at all.
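[What "reflected from the database" could look like; rehash does inherit slashcode's vars table, but the donations table and names here are entirely hypothetical:]
    # hourly cron: recompute the total the front-page template reads
    mysql soylent -e "UPDATE vars SET value = \
        (SELECT COALESCE(SUM(amount), 0) FROM donations WHERE YEAR(ts) = YEAR(NOW())) \
        WHERE name = 'donation_total';"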
[23:57:28] <AlwaysNever> but then, that assumes the site is not feature complete; I posit that it IS feature complete
[23:58:10] <kolie> I'm glad the site meets your needs.
[23:58:25] <AlwaysNever> I like it old school, I like it with the layout based in HTML tables, I like it with minimal CSS
[23:58:43] <kolie> Yea I don't think anyone who's looked at the perl has a problem with any of that.
[23:59:35] <kolie> I was playing around and I had the entire frontpage working, pixel identical, with just a different back end.