#dev | Logs for 2024-12-12

[22:34:14] -!- Deucalion [Deucalion!~Fluff@Soylent/Staff/IRC/juggs] has joined #dev
[22:34:14] -!- Deucalion has quit [Changing host]
[22:34:14] -!- Deucalion [Deucalion!~Fluff@188.226.nuy.w] has joined #dev
[22:30:27] -!- Deucalion has quit [Ping timeout: 272 seconds]
[07:01:31] -!- systemd [systemd!~systemd@pid1] has joined #dev
[07:01:31] -!- systemd has quit [Changing host]
[07:01:31] -!- systemd [systemd!~systemd@50.54.nrl.yto] has joined #dev
[02:04:52] <kolie> Just waiting to see a route withdrawal.
[02:04:21] <kolie> I'm pretty close to having a BGP feed client for realtime route queries.
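
kolie's client itself isn't shown in the log. As a rough sketch of what watching for a withdrawal looks like, the snippet below subscribes to RIPE's public RIS Live feed (wss://ris-live.ripe.net), a WebSocket stream of BGP updates, and prints withdrawn prefixes; the feed choice and the third-party websockets package are assumptions, since the log doesn't name kolie's actual feed source.

    # Sketch only: RIPE RIS Live stands in for kolie's unnamed BGP feed.
    # Requires the third-party `websockets` package (pip install websockets).
    import asyncio
    import json

    import websockets

    RIS_LIVE = "wss://ris-live.ripe.net/v1/ws/?client=dev-log-sketch"

    async def watch_withdrawals() -> None:
        async with websockets.connect(RIS_LIVE) as ws:
            # Subscribe to all BGP UPDATE messages; add a "prefix" field to
            # the "data" dict to watch a specific route instead.
            await ws.send(json.dumps({"type": "ris_subscribe",
                                      "data": {"type": "UPDATE"}}))
            async for raw in ws:
                msg = json.loads(raw)
                if msg.get("type") != "ris_message":
                    continue  # skip pongs/errors
                data = msg["data"]
                for prefix in data.get("withdrawals", []):
                    print(f"withdrawn: {prefix} "
                          f"(peer {data['peer']}, AS{data.get('peer_asn')})")

    asyncio.run(watch_withdrawals())
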
[01:04:08] <robc207> I received the ThinkCentre tiny PCs today. I'm planning to donate up to 7 of those if there is a need
[01:03:34] <kolie> Just need to add in a NIC for it, and then add the VLAN into the other cluster.
[01:03:18] <kolie> There's a staff box VM, it's on the +1 Proxmox machine, might use that as a control/bastion
[01:02:22] <kolie> using the 24.04 LTS
[01:02:13] <kolie> the Ubuntu VM is where k3s was situated.
[01:01:55] <kolie> Bare metal running Proxmox, there are 3+1 nodes available, and each node in the 3-node cluster has 1 Ubuntu VM allocated 16GB with 100GB local storage, not Ceph
[01:00:43] <robc207> What OS are you using on the VM layer?
[01:00:03] <kolie> few-gen-old 1U Dells, enterprise but cost-effective 2.5" SATA SSDs
[00:59:40] <kolie> If it came to it I'm sure the cost of the subs could cover any loss, and I'm using pretty commodity gear.
[00:59:27] <robc207> Yes that is an advantage
[00:59:09] <kolie> Lucky for SN that they don't own the hardware and I donate the usage of the VMs.
[00:57:36] <robc207> My concerns would be with the cost of replacing failed hardware components.
[00:55:19] <kolie> The way I'd rank it: top tier, dedicated enterprise SAN; mid tier++, vSAN; and just under that, mid tier+, Longhorn
[00:54:20] <kolie> I think in this particular instance, using Proxmox as a vSAN may be more advisable than trying to use Longhorn in-cluster.
[00:53:40] <robc207> Since then I've had no further issue
[00:53:19] <kolie> Pretty exceptional situation I'd say.
[00:53:05] <robc207> Lost quorum after a power outage and found my mSATA to USB bridge smoked
[00:51:27] <kolie> Enterprise spinning rust, but spinning rust nonetheless.
[00:51:17] <kolie> I have a FreeBSD based iSCSI node, but it's in a different DC, I'd have to move it in. I have some "retired" iSCSI gear from work I might be able to utilize too, I think it's spinning rust though.
[00:50:32] <kolie> when I say iSCSI exclusive, I meant in enterprise settings, with 10Gb+ dedicated gear.
[00:50:10] <kolie> ceph is pretty well supported.
[00:49:03] <kolie> I've used iSCSI almost exclusively before for pvols.
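
For context on what an iSCSI-backed persistent volume looks like in Kubernetes, here's a minimal sketch using the official kubernetes Python client; the portal address, IQN, and capacity are placeholders, not values from this cluster.

    # Sketch: an iSCSI-backed PersistentVolume via the official `kubernetes`
    # Python client (pip install kubernetes). Portal, IQN, and size are
    # placeholders, not values from kolie's setup.
    from kubernetes import client, config

    pv = client.V1PersistentVolume(
        metadata=client.V1ObjectMeta(name="pv-iscsi-example"),
        spec=client.V1PersistentVolumeSpec(
            capacity={"storage": "100Gi"},
            access_modes=["ReadWriteOnce"],
            iscsi=client.V1ISCSIPersistentVolumeSource(
                target_portal="10.0.0.10:3260",            # placeholder portal
                iqn="iqn.2024-12.example.dc:storage.pv1",  # placeholder IQN
                lun=0,
                fs_type="ext4",
            ),
        ),
    )

    config.load_kube_config()
    client.CoreV1Api().create_persistent_volume(pv)
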
[00:48:35] <kolie> why did longhorn lose quorum
[00:48:04] <kolie> it's not doing anything.
[00:47:50] <kolie> Each Proxmox/1U node has a dedicated SSD
[00:47:38] <kolie> I have some iSCSI/NFS appliances I can wire in.
[00:47:28] <kolie> robc207, the options here are possibly ceph/proxmox
[00:47:07] -!- chromas2 [chromas2!~chromas@Soylent/Staph/Infector/chromas] has joined #dev
[00:47:03] -!- chromas has quit [Ping timeout: 272 seconds]
[00:45:09] -!- systemd has quit [Ping timeout: 272 seconds]
[00:43:43] <robc207> For Persistent Volumes, if we already have a fault tolerant NFS service, this is the easiest. If we do not, and we want to have in-cluster storage...being from a Rancher shop I turn to Longhorn. My home cluster I have SSDs on each node and I'm running Longhorn. Truth be told, I did lose quorum on Longhorn one time and was able to recover data from the raw block device using Podman and an iSCSI loopback.
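
A rough sketch of that recovery path, assuming it follows Longhorn's documented replica-export procedure: the longhorn-engine image serves a surviving replica over a local iSCSI loopback and exposes it as a block device. The replica path, volume name, size, and image tag below are placeholders, not robc207's actual values.

    # Sketch of the Podman + iSCSI-loopback recovery robc207 describes,
    # assuming Longhorn's documented replica-export procedure. All values
    # here are placeholders; check the docs for your Longhorn version first.
    import subprocess

    REPLICA_DIR = "/var/lib/longhorn/replicas/pvc-example"  # placeholder
    VOLUME_NAME = "pvc-example"                             # placeholder
    VOLUME_SIZE = "10g"  # must match the original volume size

    subprocess.run(
        ["podman", "run", "--rm", "--privileged",
         "-v", "/dev:/host/dev",
         "-v", "/proc:/host/proc",
         "-v", f"{REPLICA_DIR}:/volume",
         "longhornio/longhorn-engine:v1.6.0",  # placeholder tag
         "launch-simple-longhorn", VOLUME_NAME, VOLUME_SIZE],
        check=True,
    )
    # If the engine comes up, the data is readable from the block device it
    # creates (e.g. /dev/longhorn/pvc-example) and can be copied off.
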
[00:38:42] <robc207> k3s comes with traefik and we certainly can use that. I've always wanted a reason to learn more about it. I use MetalLB for layer-2 and ARP replies because some networking teams prefer this. Ultimately it's up to you.
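
As an aside on the layer-2 mode robc207 mentions: MetalLB elects one node to answer ARP for each service IP, so a who-has probe on the segment shows which node currently holds it. A sketch with Scapy (needs root); the VIP and interface are made-up placeholders.

    # Sketch: see which node answers ARP for a MetalLB L2 service IP.
    # Uses the third-party `scapy` package (pip install scapy); run as root.
    # The VIP and interface below are placeholders.
    from scapy.all import ARP, Ether, srp

    VIP = "192.168.1.240"  # placeholder MetalLB pool address
    IFACE = "eth0"         # placeholder interface on the same L2 segment

    answered, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=VIP),
                      iface=IFACE, timeout=2, verbose=False)
    for _, reply in answered:
        # The replying MAC belongs to the node currently holding the VIP.
        print(f"{VIP} is at {reply[ARP].hwsrc}")
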
[00:23:14] <kolie> We are playing with it at the moment.