
Mirror all data on NAS A to NAS B

I'm duplicating my server hardware and moving the second set off site. I want to keep the data live since the whole system will be load balanced with my on-site system. I've contemplated tools like Syncthing to make a 1-to-1 copy of the data to NAS B, but I know there has to be a better way. What have you used successfully?

pyrosis ,

My favorite is using the native zfs send/receive capabilities, though that requires zfs and snapshots configured properly.
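A minimal sketch of what that looks like, assuming a dataset named tank/data and a remote host named nas-b (both hypothetical):

# one-time full send
zfs snapshot tank/data@base
zfs send tank/data@base | ssh nas-b zfs receive -F tank/data

# afterwards, only send what changed since the last snapshot
zfs snapshot tank/data@daily1
zfs send -i tank/data@base tank/data@daily1 | ssh nas-b zfs receive tank/data

Tools like sanoid/syncoid can automate the snapshot rotation and sends if you'd rather not script it yourself.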

pyrosis ,

I noticed some updates on live video streaming. I do wonder if that will help with how jellyfin interprets commercial breaks.

Let's say I have an m3u8 playlist with a bunch of video streams. I've noticed in jellyfin that when they cut to something like a commercial, the stream freaks out. It made me wonder if the player just couldn't understand the ad insertion.

Anyway wonderful update regardless and huge improvement.

pyrosis ,

I like using docker networks, but that's me. They are created for every service and it's easy to target the gateway. Just make sure DNS is correct for your hostnames.

Lately I've been optimizing remote services for reverse proxy passthru. Did you know it can break streams momentarily, and make your proxy work a little harder, if your hostnames don't match outside and in?

In other words, if you want full passthru of a tcp or udp stream to your server, without the proxy terminating the stream and opening a new one, you have to make sure the internal network and external network are using the same fqdn for the service you are targeting.

It can actually break passthru via sni if they don't use the same hostname, causing a slight delay. That matters for things like streaming video, especially if you are using a reverse proxy and the service supports quic or http2.

So a reverse proxy entry that simply passes the stream, without breaking it and resending it, might look like...

Obviously you would need to get the https port (8920) working on jellyfin and have ipv6 working with internal DNS in this example.

server {
    listen 443 ssl;
    listen [::]:443 ssl;  # Listen on IPv6 address

    server_name jellyfin.example.net;

    ssl_certificate /path/to/ssl_certificate.crt;
    ssl_certificate_key /path/to/ssl_certificate.key;

    location / {
        proxy_pass https://jellyfin.example.net:8920;  # Use FQDN
        ...
    }
}
pyrosis ,

I agree with this. The only vm I have with multiple interfaces is an opnsense router vm heavily optimized for kvm to reach 10gb speeds.

One of the interfaces, beyond wan and lan, links to a proxmox services bridge. It's a proxbridge I gave to a container, and it's just a gateway in opnsense. It points traffic destined for services directly at the container ip, which keeps the service traffic on the bridge instead of having to hit the physical network.

pyrosis ,

I completely agree with most of your comment, minus the freedom of choosing different disk sizes. You absolutely can do that with btrfs, or by throwing a virtual layer on top of some disks with something like mergerfs.

pyrosis ,

It's the production vs development issue. My advice is the old tech adage "If it's not broken, don't try to fix it," modified here into: keep a separate proxmox development environment. Btw, proxmox is perfect for this with vm and container snapshots.

When you get a vm or container into a more production-ready state, then you can attempt migrations. That way the users don't kill you :)

pyrosis ,

Have you considered the increase in disk io, and that hypervisors prefer to be in control of all hardware? Including disks...

If you are set on proxmox consider that it can directly share your data itself. This could be made easy with cockpit and the zfs plugin. The plugin helps if you have existing pools. Both can be installed directly on proxmox and present a separate web UI with different options for system management.

The safe things to use here are the filesharing and pool management operations. Basically, use the proxmox webui for everything it permits first.
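If you go that route, the install is roughly this (a sketch; the zfs plugin here is the community cockpit-zfs-manager, so verify the steps against its README and your proxmox version):

apt update
apt install -y cockpit --no-install-recommends
git clone https://github.com/optimans/cockpit-zfs-manager.git
cp -r cockpit-zfs-manager/zfs /usr/share/cockpit/

Cockpit then serves its own web UI on port 9090, separate from the proxmox UI on 8006.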

Either way have fun.

pyrosis ,

It depends on your needs. It's entirely possible to just format a bunch of disks as xfs and set up some mount points you hand to a union filesystem like mergerfs or whatever. Then you would just hand that to proxmox directly as a storage location. Management can absolutely vary depending on how you do this.

At its heart it's just Debian so it has all those abilities of Debian. The web UI is more tuned to vm/lxc management operations. I don't really like the default lvm/ext4 but they do that to give access to snapshots.

I personally just imported an existing zfs pool into proxmox and configured it to my liking. I discovered options like directly passing datasets into lxc containers with lxc options like lxc.mount.entry.
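For example, a dataset can be bind-mounted into a container by adding a line like this to /etc/pve/lxc/<id>.conf (paths illustrative; the target path is relative to the container rootfs):

lxc.mount.entry: /tank/media mnt/media none bind,create=dir 0 0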

I recently finished optimizing my proxmox for performance in regards to disk io. It's modified with things like log2ram, tmpfs in fstab for /tmp and /var/tmp, tcp congestion control set to cubic, a virtual opnsense heavily modified for 10gb performance, and a bunch of zfs media datasets migrated into one media dataset optimized for performance. There are just so many tweaks and knobs to turn in proxmox that can increase performance. Folks even mention docker; I've got it contained in an lxc. My active ram usage for all my services is down to 7 gigs, with disk io jumping between 0.9 and 8%. That's crazy, but it just works.

pyrosis ,

Yup you can. In fact you likely should, and you will probably find disk io improving dramatically compared to your original plan. It's better in my opinion to let the hypervisor manage disk operations. That means, in my opinion, it should also share files with smb and nfs, especially if you are already considering nas-type operations.

Since proxmox supports zfs out of the box along with btrfs and even XFS you have a myriad of options. You combine that with cockpit and you have a nice management interface.

I went the zfs route because I'm familiar with it and I appreciate its native sharing options built into the filesystem. It's cool to have the option to create a new dataset off the pool and directly pass it into a new lxc container.
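Those sharing options are just dataset properties (a sketch; the dataset name and subnet are illustrative, and you still need nfs-kernel-server or samba installed underneath):

zfs create tank/shares/media
zfs set sharenfs="rw=@192.168.1.0/24" tank/shares/media
zfs set sharesmb=on tank/shares/media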

pyrosis ,

Bookmark this if you utilize zfs at all. It will serve you well.

https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/

You will be amazed at zfs performance in proxmox due to all the tuning that is possible. If this is going to be an existing zfs pool, keep in mind it's easier to just install proxmox with the zfs option and let it create a zfs rpool during setup. For the rpool, tweak a couple options: make sure ashift is at least 12 during the install, or 13 if you are using some crazy fast ssd as the proxmox disk for the rpool.

It needs to be 12 if it's a modern day spinner, and that's probably a good setting for most ssds. Do not go over 12 if it's a spinning disk.
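You can confirm what the installer used afterwards from the proxmox shell:

zpool get ashift rpool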

Now beyond that, you can directly import an existing zfs pool into proxmox with a single import command, assuming you have one.
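Something like this (pool name illustrative):

zpool import              # lists pools the system can see
zpool import -f mediapool # import by name; -f only if it was last used on another system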

In this scenario zfs would be fully maintaining disk operations for both an rpool and a media pool.

You should consider tweaking a couple things to really improve performance via the guide I linked.

Proxmox vms/zvols live in their own dataset. Before you start getting too crazy creating vms, make sure you are taking advantage of all the performance tweaks you can. By default proxmox sets a record size of 128k for all datasets. qcow2, raw, and even zvols will benefit from a record size of 64k because it tends to improve the underlying filesystem performance of things like ext4, XFS, even UFS. Imo it's silly to create vm filesystems like btrfs if your vm is sitting on top of a cow filesystem.

Another huge improvement is tweaking the compression algorithm. lz4 is blazing fast and should be your default go-to for zfs. The newer zstd is pretty good but can slow things down a bit for active operations like live vm disks. So make sure your default compression is lz4 for datasets with vm disks. Honestly it's just a good default to specify for the entire pool. You can select other compression for datasets with more static data.
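Both are one-line property changes (names illustrative):

zfs set compression=lz4 rpool              # fast default for the whole pool
zfs set compression=zstd tank/archive      # heavier compression for static data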

If you have a media dataset full of files like music, vids, and pics, setting a record size of 1mb will heavily improve disk io operations.
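For example (dataset names illustrative; only newly written files pick up a changed record size):

zfs set recordsize=64K rpool/data   # dataset holding vm disks
zfs set recordsize=1M tank/media    # large sequential media files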

In proxmox, zfs will default to grabbing half of your memory for arc. Make sure you change that after install. It's a modprobe file that defines arc_max in bytes. Set the max to something more reasonable if you have 64 gigs of memory. You can also define arc_min.
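On proxmox that looks roughly like this (16 GiB shown; pick your own byte value):

echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf
update-initramfs -u -k all                                   # persist across reboots
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max    # apply live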

Some other huge improvements? If you are using an ssd for your proxmox install, I highly recommend you install log2ram on your hypervisor. It will stop all those constant log writes on your ssd, syncing them to disk on a timer and at shutdown/reboot. It's also a huge performance and ssd lifespan improvement to migrate /tmp and /var/tmp to tmpfs.
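log2ram is a quick install (a sketch; check the project README for the current method):

git clone https://github.com/azlux/log2ram.git
cd log2ram && sudo ./install.sh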

So many knobs to turn. I hope you have fun playing with this.

pyrosis ,

Another thing to keep in mind with zfs: underlying vm disks will perform better if the zfs pool is a mirror or a stripe of mirrors. Z1/Z2 type pools are better for media and files. Vm disk io will improve dramatically on the mirror-type pools. Just passing on what I've learned over time optimizing systems.

pyrosis ,

At its core cockpit is like a modern-day webmin that allows full system management. So yes, it can help with creating raid devices and even lvms. It can help with mount points and encryption as well.

I do know it can help share whatever with smb and NFS. Just have a look at the plugins.

As for proxmox it's just using Debian underneath. That Debian already happens to be optimized for virtualization and has native zfs support baked in.

https://cockpit-project.org/applications

pyrosis ,

Pretty much this. It gets its own folder and, in jellyfin, its own library. You just give mom access to that and whatever else you want, and unselect that library for everyone else. The setting is under users. It's straightforward and is a checkbox-based select. You probably have it set to all libraries right now. Uncheck that and you can pick and choose per user.

pyrosis ,

How about defense against dhcp option 121 changing the routing table and decloaking all VPN traffic even with your kill switch on? They got a plan for that yet? Just found this today.

https://www.leviathansecurity.com/blog/tunnelvision

pyrosis ,

Of course, but you don't control the rogue dhcp servers some asshat might plug into any network that isn't yours.

pyrosis ,

I doubt it would matter in some environments at all.

As an example: a pc managed by a domain controller that can modify firewall rules and dhcp/dns options via group policy. At that point the firewall rules can simply be modified.

pyrosis ,

Setups for hardware decoding are based on the underlying os. A common example is docker on Debian or Ubuntu. You will need to pass the appropriate /dev/ directories, and at times files, into your jellyfin docker container with the devices option. Commonly that would be /dev/dri.

It gets more complicated with a vm because you are likely going to be passing the hardware directly into the vm which will prevent other devices outside the vm from using it.

You can get around this by placing docker directly on the os, or by placing docker in a linux container with appropriate permissions and the same devices passed into the linux container. In this manner, system devices and other services will still have access to the video card.

All this to say: it depends on your setup, and where you have docker installed, how you will pass the hardware into jellyfin. However, jellyfin on docker will need you to pass the video card into the container with the devices option, and docker will need to see the device to be able to do that.
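As a sketch with docker run (image and host paths illustrative; compose's devices: key does the same thing):

docker run -d --name jellyfin \
  --device /dev/dri:/dev/dri \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media \
  -p 8096:8096 \
  lscr.io/linuxserver/jellyfin:latest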

pyrosis ,

Nothing but love for that project. I've been using docker-ce and docker-compose. I had portainer-ce but just got tired of it. It's easier for me to just write a compose file and get things working exactly how I want.

pyrosis ,

Are you using tvheadend and their jellyfin plugin? Asking out of curiosity.

https://github.com/tvheadend/tvheadend

Anyway Plex and emby come to mind.

pyrosis ,

Oh then definitely tvheadend. You can run the server lots of ways even docker. Also has plugin support.

pyrosis ,

Music playlists are different from Plex. You can create them, import them, or generate an instant mix.

4k is seamless and performs better imo. You can use transcoding or not, if you have files the way you want them. If you do, you can select on a per-user basis who gets to transcode.

You can set bandwidth limits.

I've seen a feature that allows multiple users to stream the same movie so you can all watch at the same time. I use npm, and often a couple peeps might watch a movie at the same time without using this feature, and it works fine.

I use the client app on android and a firestick atm. I think I just downloaded it, but you can sideload too if you want. The media server app is available for various os, so technically you could set it up on whatever you want. Just check your app store.

https://jellyfin.org/downloads/clients/

It can plug into hdhomerun or m3u playlists for live tv if that is your situation. It has a plugin for nextpvr and tvheadend if you utilize those for over-the-air, or if you already have an m3u set up in those pvr services. Those are great btw and available in docker containers.

It always defaulted to direct playing whatever my files are encoded as. It absolutely can transcode to support other clients, and you decide the preferences. I did notice, since most of my files are h.264 with a few h.265, that it sometimes helped to turn off transcoding because the client supported the codec natively. Jellyfin was transcoding h.265 mkv to something like an mp4. Anyway, a quirk.

Login is pretty simple, and users can change their passwords. It has codes it can generate to approve a new device if you are already logged into an app on your phone. Like 6 temp numbers (the Quick Connect feature). You can also set up pins or whatever they call them under users.

pyrosis ,

I'll be honest op: if it's on a tv I use the newer fire sticks with the jellyfin app. They already have support for various codecs and stream from my server just fine. Cheap too, and they come with a remote.

If I were just trying to get a home made client up I would consider Debian bookworm and just utilize the Deb from the GitHub link here...

https://jellyfin.org/downloads/clients/

Personally I'd throw on cockpit to make remote administration a bit easier, and set up an autostart at login for the jellyfin media player with the startup apps. You can even add a launch flag to start it full screen like...

jellyfinmediaplayer --fullscreen

The media player doesn't really need special privileges so you could create a basic user account just for jellyfin.
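One way to wire up that autostart under a desktop session (a sketch, assuming the jellyfin-media-player deb, which installs the jellyfinmediaplayer binary):

mkdir -p ~/.config/autostart
cat > ~/.config/autostart/jellyfin.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Jellyfin Media Player
Exec=jellyfinmediaplayer --fullscreen
EOF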

pyrosis ,

Usually a reverse proxy runs behind the firewall/router. The idea is that you point 80/443 at the proxy with port forwarding once traffic hits your router.

So if someone goes to service.domain.com

You would have dynamic dns telling domain.com that your router's public ip is the destination.

You would tell domain.com that service.domain.com exists as a cname or a record.
You could also say *.domain.com is a cname. That would point any hostname to your router.
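So the dns side might look like (ip illustrative):

domain.com.           A      203.0.113.10
service.domain.com.   CNAME  domain.com.
*.domain.com.         CNAME  domain.com.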

From here, in the proxy, you would say service.domain.com points to your service's ip and port. Usually that would be on the lan, but in your case it would be through a tunnel.

It is possible, and probably more resource efficient, to just put the proxy on the vps and point your public domain traffic directly at the vps ip.

So on the domain you could say service.domain.com points to the vps ip as an a record, and service2.domain.com points to the vps ip as another a record.

You would allow 80/443 on the vps and create entries for the services.

Those would look like service.domain.com pointing to localhost:port.

In your particular case I would just run the proxy on the public VPS the services are already on.

Don't forget you can enable https certificates when you have them running. You can secure the management interface on its own service3.domain.com with the proxy if you need to.

And op, consider some blocklists for your vps firewall, like spamhaus. It wouldn't hurt to set up fail2ban either.

pyrosis ,

It's definitely encrypted; they can just tell by its signature that it is wireguard or whatever and block it.

They could do this with ssh if they felt like it.

pyrosis ,

You can do that or you can use a reverse proxy to expose your services without opening ports for every service. With a reverse proxy you would point port 80 and 443 to the reverse proxy once traffic hits your router/firewall. In the reverse proxy you would configure hostnames that point to the local service IP/ports. Reverse proxy servers like nginx proxy manager then allow you to setup https certificates for every service you expose. They also allow you to disable access to them through a single interface.

I do this and have set up some blocklists on the opnsense firewall. Specifically, you could set up the spamhaus blocklists to drop any traffic that originates from those ips. You can also use the Emerging Threats blocklist; it has spamhaus and a few more integrated from dshield etc. These can be made into simple firewall rules.

If you want to block entire country ips you can setup the GeoIP blocklist in opnsense. This requires a maxmind account but allows you to pick and choose countries.

You can also set up the suricata ips in opnsense to block detected traffic using daily-updated lists. It's a bit more resource intensive than regular firewall rules, but also far more advanced at detecting threats.

I use both: the firewall lists, and ips scanning both the wan and lan in promiscuous mode. This heavily defends your network in ways most modern networks don't even take advantage of.

If you want even more security, you can set up unbound with dns over tls. You could even set up openvpn and route all your internal traffic through that to a vpn provider. Personally I prefer having individual systems connect to a vpn service.

Anyway, all this to say: no, you don't need a vpn static ip. You may prefer instead a domain name you can point at your systems. If you're worried about security here, identify providers that allow crypto and don't care about identity. This is true for vpn providers as well.

pyrosis ,

This is a journey that will likely fill you with knowledge. During that process what you consider "easy" will change.

So the answer right now for you is use what is interesting to you.

Yes, there are plenty of ways to do the same thing. Imo though, right now, jump in and install something. Then play with it.

Just remember modern CPUs can host many services from a single box. How they do that can vary.

pyrosis ,

Probably these directories...

/tmp
/var/tmp
/var/log

Two are easy to migrate to tmpfs if you are trying to reduce disk writes. Logs can be a little tricky because of the permissions, but it is worth getting right if you are concerned about all those little writes on an ssd. Especially if you have plenty of memory.
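The easy two are one fstab line each (typical options shown; /var/log is where log2ram earns its keep, since services expect their subdirectories and ownership to exist at boot):

tmpfs /tmp     tmpfs defaults,noatime,mode=1777 0 0
tmpfs /var/tmp tmpfs defaults,noatime,mode=1777 0 0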

This is filesystem agnostic btw so the procedure can apply to other filesystems on Linux operating systems.

pyrosis ,

Hardware support can be a bit of an issue with bsd in my experience. But if you're asking about hardware, it doesn't take as much as you may think for jellyfin.

It can transcode just fine with intel quick sync.

So basically any modern intel cpu, or slightly older.

What you need to consider more is storage space for your system and if your system will do more than just Jellyfin.

I would recommend a bare-bones server from supermicro. Something you could throw a few ssds in.

If you are not too stuck on bsd, maybe have a look at Debian or proxmox. Either way I would recommend docker-ce, mostly because this particular jellyfin image is very well maintained.

https://fleet.linuxserver.io/image?name=linuxserver/jellyfin

pyrosis ,

That's somewhat true. However, the hardware support in bsd, especially around video, has been blah. If you are interested in playing with zfs on linux, I would recommend proxmox. That particular os is one of the few that allows you to install onto a zfs rpool from the installer. Proxmox is basically Debian with a kernel that's been modified a bit more for virtualization. One of the mods made was including zfs support in the installer.

Depending on what you get, if you go the prox route you could still install bsd in a vm and play with the filesystem. You may even find some other methods to get jellyfin the way you like it with lxc, vm, or docker.

I started out on various operating systems and settled on debian for a long time. The only reason I use prox is the web interface is nice for management and the native zfs support. I change things from time to time and snapshots have saved me from myself.

pyrosis ,

What is the underlying filesystem of the proxmox hypervisor, and how did you pass storage into the omv vm? Also, is anything else accessing this storage?

I ask because...

The "file lock ESTALE" error in the context of NFS indicates that the file lock has become "stale." This occurs when a process is attempting to access a file that is locked by another process, but the lock information has expired or become invalid. This can happen due to various reasons such as network interruptions, server reboots, or changes in file system state.

pyrosis ,

So you mentioned using proxmox as the underlying system, but when I asked about the proxmox filesystem I was more referring to whether you kept the defaults during installation, which would be lvm/ext4, or changed to zfs as the underlying proxmox filesystem. It sounds like you have additional drives that you used the proxmox command line to "passthru" as scsi devices. Just be aware this is not true passthru. It is slightly virtualized, though it does hand the entire storage of the device to the vm. The only true passthru without that slight virtualization would be pci passthru utilizing IOMMU.

I have some experience with this specifically because of a client doing similar with a truenas vm. They discovered they couldn't import their pool into another system, because proxmox had slightly virtualized the disks when they were added to the vm in this manner. In other words, zfs wasn't directly managing the disks; it was managing virtual disks.

Anyway, it would still help to know the underlying filesystem of the slightly virtualized disks you gave to mergerfs. Are they ext4, xfs, btrfs? mergerfs is just a union filesystem that unifies storage across multiple mountpoints into a single virtual filesystem, which means you have another couple layers of complexity in your setup.

If you are worried about disk IO you may consider letting the hypervisor manage these disks and storage a bit more directly. Removing some of the filesystem layers.

I could recommend just making a single zfs pool from these disks within proxmox to do this. Obviously this is a pretty big transition on a production system. Another option would be creating a btrfs raid from these disks within proxmox and adding that mountpoint as storage to the hypervisor.

Personally I use zfs, but btrfs works well enough. Regardless, this would allow you to just hand storage to vms from the gui, and the hypervisor would help much more efficiently with disk io.

As for the error, it's typically repaired by unmount/mount operations. As I mentioned before, the cause can vary, but it's usually a loss of network connectivity or an inability to lock something that is in use.
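The typical repair looks like this (mountpoint illustrative; -l lazily detaches it if it's busy):

umount -f /mnt/nfs-share || umount -l /mnt/nfs-share
mount /mnt/nfs-share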

My advice would be to investigate reducing your storage complexity. It will simplify administration and future transitions.


Repost to op as op claims his comments are being purged

pyrosis ,

I have to admit I was doing the same, but with the greek versions. Though I liked to throw in hydras and the like.

Good file servers for Proxmox?

Hello! I have Proxmox VE running on a Dell R730 with an H730. Proxmox manages the disks in a ZFS RAID which is exactly how I want it. Because I intend for this server to have a NAS/file server, I want to set up a container or VM in proxmox that will provide network storage shares to domain-joined systems. Pretty much everything...

pyrosis ,

Hmm. If you are going to have proxmox managing zfs anyway then why not just create datasets and share them directly from the hypervisor?

You can do that in the terminal, but if you prefer a gui you can install cockpit on the hypervisor with the zfs plugin. It creates a separate web gui on another port, making it easy to create, manage, and share datasets as you desire.

It will save resources and simplify zfs management operations if you are interested in such a method.

pyrosis , (edited )

My npm has web sockets enabled and blocking common exploits.

Just checked syncthing and it's set to 0.0.0.0:8384 internally but that shouldn't matter if you changed the port.

When Syncthing is set to listen on 0.0.0.0, it means it's listening on all available network interfaces on the device. This allows it to accept connections from any IP address on the network, rather than just the local interface. Essentially, it makes Syncthing accessible from any device within the network.

Just make sure you open those firewall ports on the server syncthing is running on.

Btw, the syncthing protocol utilizes port 22000, both tcp and udp, the udp side using a quic-style transport if you let it.

So it's a good idea to allow udp and tcp on 22000 if you have a firewall configured on the syncthing server.
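With ufw, for example, that would be (the last rule is syncthing's local discovery, optional):

sudo ufw allow 22000/tcp
sudo ufw allow 22000/udp
sudo ufw allow 21027/udp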

Edit

Wording for firewall ports and the purpose of 0.0.0.0

pyrosis ,

Out of curiosity, what filesystem did you choose for your opnsense vm? Also, can you tell if it's a zvol, qcow2, or raw disk? Fyi, qcow2 and raw disks would both benefit from a record size of 64k if they live in a vm dataset. If it's a zvol, 64k can still help.

I also utilize a heavily optimized setup running opnsense within proxmox. My vm filesystem is ufs because it sits on top of proxmox zfs. You can also find settings in your opnsense vm to migrate log files to tmpfs, which places them in memory. That will heavily reduce disk writes from opnsense.

pyrosis ,

It looks like you could also do a zpool upgrade. This will just upgrade your legacy pools to the newer zfs version. That command is fairly simple to run from terminal if you are already examining the pool.

Edit

Btw, if you have run pve updates it may be expecting some newer zfs feature flags for your pool. A pool upgrade may resolve the issue by enabling the new features.
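Quick check first, then the upgrade (pool name illustrative):

zpool upgrade         # lists pools with upgrades available
zpool upgrade tank    # apply to a specific pool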

pyrosis ,

Upgrading a ZFS pool itself shouldn't make a system unbootable even if an rpool (root pool) exists on it.

That could only happen if the upgrade took a shit during a power outage or something like that. The upgrade itself usually only takes a few seconds from the command line.

If it makes you feel better, I upgraded mine with an rpool on it and it was painless. I do have everything backed up tho, so I rarely worry. However, I understand being hesitant.

pyrosis ,

I'm specifically referencing this little bit of info for optimizing zfs for various situations.

Vms, for example, should exist in their own dataset with a tuned record size of 64k.

Media should exist in its own dataset with a tuned record size of 1mb.

lz4 is quick and should always be enabled. It will also work efficiently with larger record sizes.

Anyway, all the little things add up with zfs. When you have an underlying zfs, you can get away with simpler, more performant filesystems on zvols or qcow2. XFS, UFS, and EXT4 all work well with 64k record sizes from the underlying zfs dataset/zvol.

Btw, it doesn't change immediately on existing data if you just change the option on a dataset. You have to move the data out and then back in for it to pick up the new record size.
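For example (paths illustrative; the move has to cross datasets so the files are actually rewritten):

zfs set recordsize=64K tank/vms
mv /tank/vms/vm-disk.qcow2 /tank/scratch/
mv /tank/scratch/vm-disk.qcow2 /tank/vms/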

pyrosis ,

Keep in mind it's more an issue with writes as others mentioned when it comes to ssds. I use two ssds in a zfs mirror that I installed proxmox directly on. It's an option in the installer and it's quite nice.

As for combating writes, that's actually easier than you think, and it applies to any filesystem. It just takes knowing what is write intensive. Most of the time, for a linux os like proxmox, that's going to be temp files and logs, both of which can easily be migrated to tmpfs. Doing this will increase the lifespan of any ssd dramatically. You just have to understand that restarting clears those locations, because they now exist in ram.

As I mentioned elsewhere opnsense has an option within the gui to migrate tmp files to memory.

pyrosis ,

You are very welcome :)

pyrosis ,

It looks like you are using legacy bios; mine is using uefi with a zfs rpool.

proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
31FA-87E2 is configured with: uefi (versions: 6.5.11-8-pve, 6.5.13-5-pve)

However, like with everything a method always exists to get it done. Or not if you are concerned.

If you are interested it would look like...

Pool Upgrade

sudo zpool upgrade <pool_name>

Confirm Upgrade

sudo zpool status

Refresh boot config

sudo proxmox-boot-tool refresh

Confirm Boot configuration

cat /boot/grub/grub.cfg

You are looking for directives like this to see if they are indeed pointing at your existing rpool

root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet

here is my file if it helps you compare...

#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/000_proxmox_boot_header ###
#
# This system is booted via proxmox-boot-tool! The grub-config used when
# booting from the disks configured with proxmox-boot-tool resides on the vfat
# partitions with UUIDs listed in /etc/kernel/proxmox-boot-uuids.
# /boot/grub/grub.cfg is NOT read when booting from those disk!
### END /etc/grub.d/000_proxmox_boot_header ###

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  set have_grubenv=true
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="0"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}
function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

if loadfont unicode ; then
  set gfxmode=auto
  load_video
  insmod gfxterm
  set locale_dir=$prefix/locale
  set lang=en_US
  insmod gettext
fi
terminal_output gfxterm
if [ "${recordfail}" = 1 ] ; then
  set timeout=30
else
  if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=5
  # Fallback normal timeout code in case the timeout_style feature is
  # unavailable.
  else
    set timeout=5
  fi
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/05_debian_theme ###
set menu_color_normal=cyan/blue
set menu_color_highlight=white/blue
### END /etc/grub.d/05_debian_theme ###

### BEGIN /etc/grub.d/10_linux ###
function gfxmode {
        set gfxpayload="${1}"
}
set linux_gfx_mode=
export linux_gfx_mode
menuentry 'Proxmox VE GNU/Linux' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-/dev/sdc3' {
        load_video
        insmod gzio
        if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
        insmod part_gpt
        echo    'Loading Linux 6.5.13-5-pve ...'
        linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
        echo    'Loading initial ramdisk ...'
        initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
}
submenu 'Advanced options for Proxmox VE GNU/Linux' $menuentry_id_option 'gnulinux-advanced-/dev/sdc3' {
        menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.13-5-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.13-5-pve-advanced-/dev/sdc3' {
                load_video
                insmod gzio
                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                insmod part_gpt
                echo    'Loading Linux 6.5.13-5-pve ...'
                linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
                echo    'Loading initial ramdisk ...'
                initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
        }
        menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.13-5-pve (recovery mode)' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.13-5-pve-recovery-/dev/sdc3' {
                load_video
                insmod gzio
                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                insmod part_gpt
                echo    'Loading Linux 6.5.13-5-pve ...'
                linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro single       root=ZFS=rpool/ROOT/pve-1 boot=zfs
                echo    'Loading initial ramdisk ...'
                initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
        }
        menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.11-8-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.11-8-pve-advanced-/dev/sdc3' {
                load_video
                insmod gzio
                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                insmod part_gpt
                echo    'Loading Linux 6.5.11-8-pve ...'
                linux   /ROOT/pve-1@/boot/vmlinuz-6.5.11-8-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
                echo    'Loading initial ramdisk ...'
                initrd  /ROOT/pve-1@/boot/initrd.img-6.5.11-8-pve
        }
        menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.11-8-pve (recovery mode)' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.11-8-pve-recovery-/dev/sdc3' {
                load_video
                insmod gzio
                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                insmod part_gpt
                echo    'Loading Linux 6.5.11-8-pve ...'
                linux   /ROOT/pve-1@/boot/vmlinuz-6.5.11-8-pve root=ZFS=/ROOT/pve-1 ro single       root=ZFS=rpool/ROOT/pve-1 boot=zfs
                echo    'Loading initial ramdisk ...'
                initrd  /ROOT/pve-1@/boot/initrd.img-6.5.11-8-pve
        }
}

### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###

### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/20_memtest86+ ###
### END /etc/grub.d/20_memtest86+ ###

### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/30_uefi-firmware ###
menuentry 'UEFI Firmware Settings' $menuentry_id_option 'uefi-firmware' {
        fwsetup
}
### END /etc/grub.d/30_uefi-firmware ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f  ${config_directory}/custom.cfg ]; then
  source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
  source $prefix/custom.cfg
fi
### END /etc/grub.d/41_custom ###

You can see those lines in the linux sections.

pyrosis ,

Most filesystems, as mentioned in the guide, that exist within qcow2, zvols, or even raws living on a zfs dataset would benefit from a zfs recordsize of 64k. By default the recordsize will be 128k.

I would never utilize 1mb for any dataset that had vm disks inside it.

I would create a new dataset for media off the pool and set a recordsize of 1mb. You can only really get away with this if you have media files directly inside this dataset. So pics, music, videos.

The cool thing is you can set these options on an individual dataset basis, so one dataset can have one recordsize and another dataset can have another.
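You can even set it at creation time (names illustrative):

zfs create -o recordsize=64K tank/vms
zfs create -o recordsize=1M tank/media
zfs get recordsize tank/vms tank/media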

pyrosis ,

If you are somewhat comfortable with the cli, you could install proxmox as zfs and then create datasets off the pool to do whatever you want. If you want a nicer gui to manage zfs, you could also install cockpit on the proxmox hypervisor directly, along with the zfs plugin, to manage the datasets and share them a bit more easily. Obviously you could do all of that from the command line too.

Personally I use proxmox now, where before I made use of Debian. The only reason I switched was that it made vm/lxc management easy. As for truenas, it's also basically Debian with a different gui (the SCALE version, anyway). These days I'm more focused on optimization in my home lab journey. I hope you enjoy the experience, however you begin and whatever applications you start with.

pyrosis ,

Firewall and deciding on an entry point for system administration is a big consideration.

Generating a strong unique password helps immensely. A password manager can help with this.

If this is hosting services, reduce open ports with something like Nginx Proxy Manager or equivalent. Tailscale and equivalents (wireguard, wireguard-easy, headscale, netbird, and netmaker) are also options.

Getting https right. It’s not such a big deal if all the services are internal. However, it’s not hard to create an internal certificate authority and create certs for services.

If you have a server on a VPS, the firewall is again your primary defense. However, if you expose something like ssh, fail2ban can help ban ips that make repeated attempts to log in to your system. This isn't some drop-in replacement for proper ssh configuration; you should be using key login and secure your ssh configuration away from password logins.
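The ssh side boils down to a couple of sshd_config lines plus the fail2ban package (a sketch; the Debian fail2ban package ships a working sshd jail by default):

sudo apt install fail2ban
# in /etc/ssh/sshd_config:
#   PasswordAuthentication no
#   PermitRootLogin prohibit-password
sudo systemctl restart sshd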

It also helps, if you are using something like a proxy for services, to set up a filter list. NPM for example allows you to outright deny connection attempts from specific IP ranges, or just deny everything and allow specific public IPs.

Also, if you are using something like proxmox, remember to configure your services for least privilege. The basic idea being: give a service just what it needs to operate and no more. This can encompass service user/group names for file access etc.

All these steps add up to pretty good security if you constantly assess.

Even basic steps in here like turning on the firewall and only opening ports your services need help immensely.

pyrosis ,

I like to utilize nginx proxy manager alongside docker-ce and portainer-ce.

This allows you to forward web traffic to a single internal NPM IP. As for setting up the service ips, I like to utilize the gateway ips that docker generates for each service network.

If you have docker running on the same internal IP as NPM you can directly configure the docker gateway ips for each service within the NPM web configuration.

This dumps the associated traffic into the container network for another layer of isolation.
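For example, finding the gateway ip of a given service's docker network (network name illustrative):

docker network inspect -f '{{(index .IPAM.Config 0).Gateway}}' myservice_default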

This is a bit of an advanced configuration but it works well for my environment.

I would just love some support for quic within NPM.

pyrosis ,

Well specifically I’m referring to the internal hub on your system and how it shares port bandwidth. It doesn’t really matter for things like a mouse or keyboard. However, when you are talking like permanent flash disks it’s worth investigating how the bandwidth is shared between ports. Specifically the switching back and forth between the storage devices. Some filesystems handle this better than others.

I was also referring to a way I found that stabilizes the connection: a usb-to-sata controller on a single port. That way the port tends to take advantage of all the bandwidth without switching around.

Also keep in mind USB flash media is notorious for wear compared to something like nvme/msata disks.

It’s possible to combat writes on flash media by utilizing things like ram disks in Linux. Basically migrating write heavy locations like temp and logs to the ram disk. Though you need to consider that restarting wipes those locations because they are living in ram now. Some operating systems do this automatically like opnsense with a check mark.

pyrosis ,

I think I would get rid of that optical drive and install a converter for another drive like a 2.5 SATA. That way you could get an SSD for the OS and leave the bays for raid.

Other than that depending on what you want to put on this beast and if you want to utilize the hardware raid will determine the recommendations.

For example, if you are thinking of a file server with zfs, you need to disable the hardware raid completely by getting it to expose the disks directly to the operating system. Most would investigate whether the raid controller can be flashed into IT mode for this. If not, some controllers do support a simple JBOD mode, which would be better than utilizing the raid in a zfs configuration. ZFS likes to directly maintain the disks. You can generally tell it's correct if you can see all your disk serial numbers during setup.

Now if you do want to utilize the raid controller and are interested in something like proxmox or just a simple Debian system: I have had great performance with XFS on hardware raid. You lose out on some advanced copy-on-write features, but if disk I/O is your focus, consider it worth playing with.

My personal recommendation is to get rid of the optical drive and replace it with a 2.5 converter for more installation options. I would also recommend getting that ram maxed and possibly upgrading the network card to a 10gb nic if possible. It wouldn't hurt to investigate the power supply; the original may be a bit dated, and you may find a more modern supply that is more energy efficient.

OS generally recommendation would be proxmox installed in zfs mode with an ashift of 12.

(It’s important to get this number right for performance because it can’t be changed after creation. 12 for disks and most ssds. 13 for more modern ssds.)

Only do zfs if you can bypass all the raid functions.

I would install the rpool in a basic zfs mirror on a couple ssds. When the system boots, I would log into the web gui and create another zfs pool out of the spinners, ashift 12. Now, if this is mostly a pool for media storage I would make it a z2. If it is going to have vms on it I would make it a raid 10 style. Disk I/O is significantly improved for vms in a raid 10 style zfs pool.
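From the shell, that raid 10 style pool would look something like this (device names illustrative; use /dev/disk/by-id paths in practice):

zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd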

From here for a bit of easy zfs management I would install cockpit on top of the hypervisor with the zfs plugin. That should make it really easy to create, manage, and share zfs datasets.

If you read this far and have considered a setup like this. One last warning. Use the proxmox web UI for all the tasks you can. Do not utilize the cockpit web UI for much more than zfs management.

Have fun creating lxcs and vms for all the services you could want.

pyrosis ,

Well op look at it this way...

A single 50mb nginx docker image can be used multiple times for multiple docker containers.
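Both of these containers run from that one image; its layers are stored once on disk:

docker run -d --name site-a -p 8081:80 nginx:alpine
docker run -d --name site-b -p 8082:80 nginx:alpine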
