Just some Internet guy

He/him/them 🏳️‍🌈


Max_P ,
@Max_P@lemmy.max-p.me avatar

I have none of that on my phone, just plain old keyboard.

But the reason it's everywhere is that it's the new hot thing, and every company in the world feels like they have to get on board now or they'll potentially be left behind; can't let anyone have a head start. It's incredibly dumb and shortsighted, but since actually innovating in features is hard and AI is cheap to implement, that's what every company goes for.

Max_P ,
@Max_P@lemmy.max-p.me avatar

I think it can also get weird when you call other makefiles, like if you go make -j64 at the top level and that thing goes on to call make on subprojects, that can be a looooot of threads if that -j gets passed down. So even on that 64 core machine, now you have possibly 4096 jobs going, and it surfaces bugs that might not have been a problem when we had 2-4 cores (oh no, make is running 16 jobs at once, the horror).
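To put rough numbers on the multiplication (a toy sketch, not from any real build):

```shell
# Worst case when each sub-make gets its own -j instead of sharing:
top=64     # make -j64 at the top level can spawn up to 64 sub-makes
sub=64     # each sub-make invoked with its own explicit -j64
echo "$((top * sub)) jobs"   # prints "4096 jobs"
# GNU make avoids this when recipes recurse via $(MAKE) rather than a plain
# `make -j64`: the parent's jobserver is shared through MAKEFLAGS, so the
# whole build tree stays capped at 64 concurrent jobs.
```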

Max_P ,
@Max_P@lemmy.max-p.me avatar

Easiest for this might be NextCloud. Import all the files into it, then you can get the NextCloud client to download or cache the files you plan on needing with you.

Max_P ,
@Max_P@lemmy.max-p.me avatar

I'd say mostly because the client is fairly good and works about the way people expect it to work.

It sounds very much like a DropBox/Google Drive kind of use case and from a user perspective it does exactly that, and it's not Linux-specific either. I use mine to share my KeePass database among other things. The app is available on just about any platform as well.

Yeah, NextCloud is a joke in how complex it is, but you can hide it all away using their all-in-one Docker/Podman container. Still much easier than getting into bcachefs over usbip and other things I've seen in this thread.

Ultimately I don't think there are many tools that can handle caching, downloads, going offline, and reconciling differences when back online, all in a friendly package. I looked, and there's a page on Oracle's website about a CacheFS, but that might be enterprise-only; there's catfs in Rust, but it's alpha and can't work without the backing filesystem for metadata.

Max_P ,
@Max_P@lemmy.max-p.me avatar

Paywalled medium article? I'll pass.

Fuck employers that steal from their employees paychecks though.

Max_P ,
@Max_P@lemmy.max-p.me avatar

The page just deletes itself for me when using that. It loads, and 0.5 seconds later it just goes blank. They really don't want people to bypass it.

Max_P ,
@Max_P@lemmy.max-p.me avatar

You guys still use fstab? It's systemd/Linux, you use mount units.

Max_P ,
@Max_P@lemmy.max-p.me avatar

Yeah that's what it does, that was a shitpost if it wasn't obvious :p

Though I do use ZFS, where you configure the mountpoints in the filesystem itself. But it also ultimately generates systemd mount units under the hood. So I really only need one unit, for /boot.
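For the curious, that one remaining unit is tiny. A minimal sketch (the device label and filesystem type here are assumptions; also note systemd requires the unit's file name to match the mount path, hence boot.mount for /boot):

```
# /etc/systemd/system/boot.mount -- sketch; adjust What= and Type= for your setup
[Unit]
Description=Boot partition

[Mount]
What=/dev/disk/by-label/BOOT
Where=/boot
Type=vfat

[Install]
WantedBy=local-fs.target
```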

Max_P ,
@Max_P@lemmy.max-p.me avatar

I forgot about that, I should try it on my new laptop.

Did I just solve the packaging problem? (please feel free to tell me why I'm wrong)

You know what I just realised? These "universal formats" were created to make it easier for developers to package software for Linux, and there just so happens to be this thing called the Open Build Service by OpenSUSE, which allows you to package for Debian and Ubuntu (deb), Fedora and RHEL (rpm) and SUSE and OpenSUSE (also...

Max_P ,
@Max_P@lemmy.max-p.me avatar

The problem is that you can't just convert a deb to rpm or whatever. Well, you can, and it usually does work, but not always. Tools for that have existed for a long time, and there are plenty of packages in the AUR that just repack a deb, usually proprietary software, sometimes with bundled hacks to make it run.

There's no guarantee that the libraries of a given distro are at all compatible with the ones of another. For example, Alpine and Void use musl while most others use glibc. These are not binary compatible at all. That deb will never run on Alpine, you need to recompile the whole thing against musl.

What makes a distro a distro is their choice of package manager, the way of handling dependencies, compile flags, package splitting, enabled feature sets, and so on. If everyone used the same binaries for compatibility we wouldn't have distros, we would have a single distro, like Windows but open-source, and heaven forbid anyone dares switch the compiler flags so it runs 0.5% faster on their brand new CPU.

The Flatpak approach is really more like "fine, we'll just ship a whole Fedora-lite base system with the apps". Snaps are similar but use Ubuntu bases instead (obviously). It's solving a UX problem, using a particular solution, but it's not the solution. It's a nice tool to have so developers can ship a reference environment in which the software is known to run well, and users that just want it to work can use those. But the demand for native packages will never go away, and people will still do it for fun. That's the nature of open-source. It's what makes distros like NixOS, Void, Alpine, Gentoo possible: everyone can try a different way of doing things, for different use cases.

If we can even call it a "problem". It's my distro's job to package the software, not the developer's. That's how distros work, that's what they signed up for by making a distro. To take Alpine again for example, they compile all their packages against musl instead of glibc, and it works great for them. That shouldn't become the developer's problem to care what kind of libc their software is compiled against. Using a Flatpak in this case just bypasses Alpine and musl entirely because it's gonna use glibc from the Fedora base system layer. Are you really running Alpine and musl at that point?

And this is without even touching the different architectures. Some distros were faster to adopt ARM than others for example. Some people run desktop apps on PowerPC like old Macs. Fine you add those to the builds and now someone wants a RISC-V build, and a MIPS build.

There are just way too many possibilities to ever end up with a universal platform that fits everyone's needs. And that's fine; that's precisely why developers ship source code, not binaries.

Max_P ,
@Max_P@lemmy.max-p.me avatar

My experience with AI is it sucks and never gives the right answer, so no, good ol' regular web search for me.

When half your searches only give you like 2-3 pages of results on Google, AI doesn't have nearly enough training material to be any good.

Max_P ,
@Max_P@lemmy.max-p.me avatar

If you want FRP, why not just install FRP? From the looks of it, it even has a LuCI app to control it.

OpenWRT page showing the availability of FRP as an app

NGINX is also available, at a mere 1kb in size for the slim version; the full version is available too, as well as HAProxy. Those will have you more than covered, and they support SSL.

Looks like there's also acme.sh support, with a matching LuCI app that can handle your SSL certificate situation as well.

Max_P ,
@Max_P@lemmy.max-p.me avatar

No, but it does solve people not wanting to bother making an account for your effectively single-user self-hosted instance just to open a PR. I could be up and running with Forgejo or Gitea in like 10 minutes, but who wants to make an account on my server? But GitHub, practically everyone has an account.

Max_P ,
@Max_P@lemmy.max-p.me avatar

There's been a general trend towards self-hosted GitLab instances in some projects:

Small projects tend to not want to spin up infrastructure, but on GitHub you know your code will still be there 10 years after you disappear. The same cannot be said of my Cogs instance and whatever was on it.

And overall, GitHub has been pretty good to users. No ads, free, pretty speedy, and a huge community of users that already have an account where they can just PR your repo. Nobody wants to make an account on some random dude's instance just to open a PR.

Max_P ,
@Max_P@lemmy.max-p.me avatar

The whole point is that even if you take the setup and maintenance time out of the equation, it's still not very appealing, for the reasons outlined.

Max_P ,
@Max_P@lemmy.max-p.me avatar

Most VoIP providers have either an HTTP API you can hit and/or email to/from text.

Additionally, some carriers do offer an email address that can be used to send a text to one of their users but due to spam it's usually pretty restricted.

Max_P ,
@Max_P@lemmy.max-p.me avatar

Example of what?

VoIP provider: voip.ms

They support like 5 different ways to deal with SMS and MMS, there's options. https://wiki.voip.ms/article/SMS-MMS
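As an example of the HTTP API route, sending an SMS through voip.ms is a single GET request. This sketch just builds the URL; the parameter names are from memory of their wiki (verify against the link above), and the credentials and numbers are placeholders:

```shell
API="https://voip.ms/api/v1/rest.php"
USER="you@example.com"    # API username (placeholder)
PASS="apipassword"        # API password (placeholder)
DID="5551234567"          # your voip.ms number (placeholder)
DST="5557654321"          # destination number (placeholder)
MSG="hello%20world"       # URL-encoded message body
URL="${API}?api_username=${USER}&api_password=${PASS}&method=sendSMS&did=${DID}&dst=${DST}&message=${MSG}"
echo "$URL"
# to actually send it: curl -s "$URL"
```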

Carrier that accepts texts by email: Bell Canada accepts emails at [email protected] and delivers them as SMS or MMS to the number. Or at least they used to; I can't find current documentation about it, and that feels like something that would be way too exploitable for spam.

How much does it matter what type of hard disk I buy for my server?

Hello, I'm relatively new to self-hosting and recently started using Unraid, which I find fantastic! I'm now considering upgrading my storage capacity by purchasing either an 8TB or 10TB hard drive. I'm exploring both new and used options to find the best deal. However, I've noticed that prices vary based on the specific...

Max_P ,
@Max_P@lemmy.max-p.me avatar

The concern for the specific disk technology is usually around the use case. For example, surveillance drives you expect to be able to continuously write to 24/7 but not at crazy high speeds, maybe you can expect slow seek times or whatever. Gaming drives I would assume are disposable and just good value for storage size as you can just redownload your steam games. A NAS drive will be a little bit more expensive because it's assumed to be for backups and data storage.

That said, in all cases, if you use them with proper redundancy like RAIDZ or RAID1 (bleh), it's kind of whatever; you just replace them as they die. They'll all do the same, just not with quite the same performance profile.

Things you can check are seek times / latency, throughput both on sequential and random access, and estimated lifespan.

I keep hearing good things about decomissioned HGST enterprise drives on eBay, they're really cheap.

Max_P ,
@Max_P@lemmy.max-p.me avatar

I mean, OPs distro choice didn't help here:

EndeavourOS is an Arch-based distro that provides an Arch experience without the hassle of installing it manually for x86_64 machines. After installation, you’re provided with a lightweight and almost bare-bones environment ready to be explored with your terminal, along with our home-built Welcome App as a powerful guide to help you along.

If you want Arch with actual training wheels you probably want Manjaro or at least a SteamOS fork like Chimera/HoloISO.

It probably would have been much smoother with an actual beginner-friendly distro like Nobara or Bazzite, or possibly Mint/Pop for a more classic desktop experience.

It's not perfect and still has woes, but OP fell for Arch with a fancy graphical installer; it still comes with the expectation that the user can maintain an Arch install.

Max_P ,
@Max_P@lemmy.max-p.me avatar

EndeavourOS isn't a gaming distro; it's just an Arch installer with some defaults. It's still Arch and comes with Arch's woes. It's not a beginner-friendly, just-works kind of distro.

Coming from Kinoite, you'd probably want Bazzite if you want a gaming distro: it's also Fedora Atomic with all the gaming stuff added.

Max_P ,
@Max_P@lemmy.max-p.me avatar

It would be nice if they'd make "web" search the good old keyword search we used to have, the one that made Google good, now that normies will just use the AI search and it doesn't have to care about natural language anymore.

Predatory forcing of circular dependency?

I think ---DOCKER--- is doing this. I installed based, and userspace(7)-pilled liblxc and libvirt and then this asshole inserted a dependency when I tried to install from their Debian package with sudo dpkg -i. One of them was qemu-system, the other was docker-cli because they were forcing me to use Docker-Desktop, which I would...

Max_P ,
@Max_P@lemmy.max-p.me avatar

I don't have an answer as to what happened, I checked the script and it looks sane to me, it installs the docker-ce package which should be the open-source community version as one would expect.

Maybe check what the package depends on and see if it pulls in all of that. Even qemu is a bit weird, it makes sense for docker-machine but I expect that to be a different package anyway. I guess Docker Desktop probably does use it, that way they can make it work the same on all platforms which is kind of dumb to do on Linux.

But,

Why don't we all use LXC and ditch this piece of shit?

Try out Podman. It's mostly a drop-in replacement for Docker, daemonless, rootless and less magical.

Max_P ,
@Max_P@lemmy.max-p.me avatar

I must be lucky, works just fine for me with SDDM configured for Wayland only, autologin to a Wayland session.

max-p@media ~ % cat /etc/sddm.conf
[Autologin]
User=max-p
Session=plasma
#Session=plasma-bigscreen
Relogin=true

[General]
DisplayServer=wayland
Max_P ,
@Max_P@lemmy.max-p.me avatar

Arch. That leads me to believe it's possibly a configuration issue. Mine is pretty barebones; it's literally just that one file.

AFAIK the ones in sddm.conf.d are useful because the GUI can focus on just one file without nuking other users' configurations. But they all get loaded, so it shouldn't matter.

The linked bug report seems to blame PAM modules, kwallet in particular, which I don't think I've got configured for unlock at login, since there's no password on that account in the first place.

[Thread, post or comment was deleted by the author]

  • Max_P ,
    @Max_P@lemmy.max-p.me avatar

    ActivityPub makes this impossible. Everything on the fediverse is completely public, including votes, subscriptions and usernames. Even if Lemmy did offer the option, other servers wouldn't necessarily.

    And honestly this is a system that would be mainly used for spam and hate speech anyway. Just make a throwaway like everywhere else.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    Kbin is an example. But just due to the nature of the protocol, it has to be stored somewhere but Lemmy also just lets admins view all the individual votes directly in the UI.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    Still report as well, it sends emails to the mods and the admins. Just make sure it's identifiable at a glance, like just type "CSAM" or whatever 1-2 words makes sense. You can add details after to explain but it needs to be obvious at a glance, and also mods/admins can send those to a special priority inbox to address it as fast as possible. Having those reports show up directly in Lemmy makes it quicker to action or do bulk actions when there's a lot of spam.

    It's also good to report it directly into the Lemmy admin chat on Matrix as well afterwards, because in case of CSAM, everyone wants to delete it from their instance ASAP in case it takes time for the originating instance to delete it.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    That's fine to do once you've reported it: you've done your part, and there's no value in still seeing the post; it's gonna get removed anyway.

    If the IBM PC used an ARM (or related) CPU instead of the Intel 8088, would smartphones ultimately have sucked less?

    Developers still continue to shaft anyone that isn't using an IBM PC compatible. But if the IBM PC was more closely related to the latest Nexus/Pixel device, then would the gaming experience on smartphones be any good?

    Max_P , (edited )
    @Max_P@lemmy.max-p.me avatar

    Why do you keep comparing phones and PCs? They're not comparable and never will be. My PC can draw probably close to 1000W when running full bore; mobile chips have a TDP of like 10-20W. My PC can throw 50-100x more power at the problem than your phone can. In the absolute worst case, it would have a dozen or two of those power-efficient ARM chips, because it can, and PC games would make use of all of them, and you circle back to PC superiority. My netbook is within the same range, around 5-10W, and crappier than my phone in many aspects. My new Framework 16 has a TDP of 45W, already like 2-4x more than a high-end phone.

    Even looking at Apple, the M2 has a TDP of 20W because it was spun off their iPad chips, and primarily targets mobile devices like MacBooks. So while the performance is impressive in the efficiency department, I could build an ARM server with 10x the core count and have a 10x more powerful computer than the top of the line M3 iMac.

    PCs running ARM would have no effect on the mobile ecosystem whatsoever. Android runs Linux, and Linux runs on a lot of CPU architectures. You can run Android on RISC-V today if you want to spend the time building it. Or MIPS. Or PowerPC. There's literally nothing stopping you from doing that.

    The gaming experience on mobile sucks because gaming on mobile sucks. If you ran your phone at full power to game and have the best graphics, it would probably be dead in 1-2 hours. Nobody would play games that murder their battery. And most people that do play games on mobile want like 10-minute games to play while sitting on the toilet, or on a bus or train or whatever. Thus, battery life is an important factor in making a game: you don't want your game to chew through battery, because then people start rationing their gameplay to make it to the end of the day or the next charger.

    PCs are better not because of IBM, or even the x86 architecture, not even because of Windows. They're better because PCs can be built with any part you want, and you can throw as many CPUs and GPUs and NPUs and FPGAs at the problem as you want. Heck there's even SBC PCs on PCI/PCIe cards so you can have multiple PCs in your PC.

    Whatever you can come up with that fits in a mobile device, I can make a 10-20x more powerful PC if anything by throwing 10-20 phones in it and split the load across all of them.

    PC games are ambitious and make use of as much hardware as they can deal with. If you want to show off your 3D tech you don't limit yourself to mobile, you target dual RTX 4090 Ti graphics cards. There are great games made for lower-end hardware, and consoles like the Switch run ARM, like the Zelda games. The Switch is vastly inferior to modern phones, and Yuzu can run those games better than the Switch can. My PC will happily run BotW and TotK at 4K 240Hz HDR if I ask it to. But they were designed for the Switch, and they're pretty darn good games. So the limitation clearly isn't that PCs exist; it's what developers write their games for. CPU architecture isn't a problem: we have emulators, we have Rosetta, we have Box64, we have FEX.

    If PCs didn't exist, something else would have taken its place a long time ago, and we'd circle back to the exact same problem/question. Heck there's routers and firewalls that run games better than your phone.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    There are better "gaming" distros, but unless someone uses their PC exclusively for gaming, when it comes time to install other kinds of software for school or work or whatever, they're going to get thrown into the deep end of Linux.

    But guess what does have two decades of software and tutorials to set up just about everything in existence? Ubuntu, and by extension Mint.

    Sure, you can squeeze more out of your games with something like Bazzite, but the general platform that anything Linux-native targets is usually Ubuntu. Sure, there's distrobox and the like, but that's like telling the average gamer to go set up WSL. It's not hard per se, but the amount of things to learn increases very quickly.

    Thus, even though Ubuntu is very average these days, it's still a safe bet for new users.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    Probably not the best example in retrospect, since its only gotcha is that it's Fedora Atomic.

    Mainly my point is if you Google "how do I install X" you'll get plenty of Ubuntu results out of the box, which when you're an overwhelmed newbie is very helpful. Like, if you start with nothing, you just kissed goodbye to your Windows 11 install, you dive head first into Bazzite and you've got Firefox, Discord and Steam going, everything feels good. Then you start looking up "how to install X on Linux", first you get a bunch of Ubuntu results, then you swap Linux for "bazzite", nothing because it's fairly new, but it's Fedora so you look into Fedora but you realize Bazzite is actually Fedora Atomic and it's a whole other way of installing things, maybe you just try running a .run or .sh file, or you give up and try to just make install from source but t̶h̴e̸ ̵f̸i̸l̸e̷s̸y̶s̷t̸e̶m̴ ̴i̶s̸ ̷r̷e̴a̴d̴o̷n̶l̷y̷ a̴n̵d̸w̷̪͊h̵̟̏y̴̻͛ ̸͉̒i̶͖͆s̸̪̎ ̸̗̏Ḷ̴͌i̶̞͑n̶̫͂u̵̯͋x̴͓͋ ̵͈̀ŝ̴̗o̴̱̒ ̴̭̎d̸̨͊a̷͙̽m̵̘̈ṇ̸̐ c̷͓͝ò̵̙m̵̲͛p̷̖̓ĺ̴̰ĭ̵̥c̵̰̽ă̸̩t̷͗ͅe̵͈̍d̵̻̃.

    I would argue Ubuntu kinda sucks, but it sucks in a familiar Windows-y kind of way where pretty much everyone knows how to fix it or make it work, usually by blindly executing stuff. Not great, but it works, and it doesn't require much thinking. Ubuntu is pretty much the only distro where you can find your way without caring what a distro is, just by the pile of tutorials for Ubuntu or assuming Ubuntu. Case in point: Linus from LTT when he tried to apt install steam on Manjaro, after nuking his entire DE on Pop!_OS using the same command. It's entirely his fault, but that's still a common and frustrating experience, and they add up.

    Same reason sometimes I just tell people honestly, just stick with Windows. Linux would be a good fit, it would be way better, but they're not willing or accepting of the learning curve. Sometimes you're just better off sticking with what most people use, so everyone knows how to fix your problems.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    The quality of what the community is doing vs what they shipped with NSO especially on launch is laughable.

    Native OoT and MM on the Switch would have been really sick. Instead they went with 90s-level emulator quality.

    Help required, Certain VPN does not connect and times out

    Can someone help, i have been having trouble connected with my home universities vpn, for past 15-20days, it is an openvpn connection, so i have been using networkmanager-openvpn to import my config files, and they have worked previously, but for last 15-20 days i get connection timed out, all certificates used are correct, i...

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    Check the logs, but it's probably related to the deprecation of compression. OpenVPN 2.6 now requires a flag client-side to enable it as it is known to be the cause of too many vulnerabilities.

    Add

    comp-lzo yes
    allow-compression yes
    

    To your config and try again. If it still doesn't work set log level to 4, redact personal info and post the logs.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    You can try running it directly, sudo openvpn --config yourconf.ovpn

    That will also tell us if NetworkManager is at fault.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar
    ERROR: failed to negotiate cipher with server.  Add the server's cipher ('AES-128-CBC') to --data-ciphers (currently 'AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305') if you want to connect to this server.
    

    That's your error. So I think

    data-ciphers AES-128-CBC
    

    In your config should resolve this. Basically there's some issues with CBC and it's now off by default.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    "Trust me bro" from the developer pretty much.

    I think it makes sense, they're a small developer and it's all stuff I'd expect from the ad networks so if you get premium you also kill the ads and therefore the data collection.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    Saw Boost mentioned already, but also I think Tesseract deserves a shoutout for clean and modern web experience.

    Self-hosted website for posting web novel/fiction

    Hey hello, self-hosting noob here. I just want to know if anyone would know a good way to host my writing. Something akin to those webcomic sites, except for writing. Multiple stories with their own "sections" (?) and a chapter selection for each. Maybe a home page or profile page to just briefly detail myself or whatever, I...

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    WordPress or some of its alternatives would probably work well for this. Another alternative would be a static site generator, where you pretty much just write the content in Markdown.

    It's also a pretty simple project; it would be a great way to learn basic web development as well.
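As a toy illustration of what a static site generator does (everything below is made up for the example; real tools like Hugo or Jekyll add templating, navigation, and proper Markdown rendering): one Markdown chapter in, one HTML page out.

```shell
set -e
cd "$(mktemp -d)"
mkdir -p chapters site

# Two hypothetical chapters, written as plain Markdown-ish text
printf '%s\n' 'Chapter 1' 'It was a dark and stormy night.' > chapters/01.md
printf '%s\n' 'Chapter 2' 'The server hummed quietly.'      > chapters/02.md

# One HTML page per chapter; the first line of each file becomes the title
for f in chapters/*.md; do
  title=$(head -n 1 "$f")
  out="site/$(basename "${f%.md}").html"
  {
    echo "<!doctype html><html><head><title>${title}</title></head><body><pre>"
    cat "$f"
    echo "</pre></body></html>"
  } > "$out"
done

ls site   # lists 01.html and 02.html
```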

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    The 10 year old PC has a much much bigger power budget than a phone. It wasn't until really recently that ARM got anywhere close to x86 performance.

    While the phone could technically be better, it would also drain in an hour or two if it was maxed out. And most people have crappy phones that can barely hold 60fps doing nothing, so mobile games usually target the lower-end devices to maximize the number of potential players, while also remaining battery-conscious.

    There's also just not that much demand. Nobody has space on their phones for a 120GB game, and nobody wants to play a AAA game on their phones because gaming on a phone sucks ass and if you're going to dock the phone you might as well get a console.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    To be fair, you don't really have to use filters for this. Cameras are much better at capturing the colors of the aurora, while in person it looks like a faint white glow in the sky. Possibly some white-balance thing where it way overcompensates.

    Cameras also need relatively long exposures to capture those, so it'll also appear much brighter and more vivid than we see with our own eyes, possibly because in low-light conditions we rely on our rods more than our cones.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    Thanks for the very quick fix, much appreciated :) Now I can get rid of lemmy-ui entirely :D

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    1 and 2, there's also the option to still buy it but then download a pirated copy that actually works. In a professional setting at least, if you sell stuff made with the pirated software you have the license and rights to do so. Personally I would rather skip the plugin entirely, but if you must, and you must legally-ish, that's an option.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    That's the eternal cycle of social media. It starts nice and then it gets flooded by MAGA extremists until it becomes a cesspool of hate and disinformation.

    See: Facebook, Reddit, Twitter, TikTok is well on that path as well.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    Fairly new to ham, what's nice to listen to during an aurora? Just funny noise bursts? Any antenna precautions so I don't fry my SDR?

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    Nothing hotter than a giant electric fleshlight whirring away as you get off.

    I saw one in a sex shop, it looks like such a chore to get going and clean up afterwards. It's fucking huge too. Hands are so much easier to clean, and readily available anywhere anytime.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    I route through my server or my home router when using public WiFi and stuff. I don't care too much about the privacy aspect, my real identity is attached to my server and domain anyway. I even have rDNS configured, there's no hiding who the IP belongs to.

    That said, server providers are much less likely to analyze your traffic because that'd be a big no-no for a lot of companies using those servers. And of course any given request may actually be from any of Lemmy, Mastodon, IRC bots or Matrix, so pings to weird sites can result entirely from someone posting that link somewhere.

    And it does have the advantage that if you try to DDoS that IP you'll be very unsuccessful.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    I can definitely see the improvement, even just between my desktop monitor (27in 1440p) and the same resolution at 16 inches on my laptop. Text is very nice and sharp. I'm definitely looking at 4K or even 5K next monitor upgrade cycle.

    But the improvement is far from the upgrade that 480p to 1080p was, or moving away from CRTs to flat screens. 1080p was a huge thing when I was in high school, as CRT TVs were being phased out in favor of those new TVs.

    For media I think 1080p is good enough. I've never gone "shit, I only downloaded the 1080p version". I like 4K when I can have it like on YouTube and Netflix, but 1080p is still a quite respectable resolution otherwise. The main reason to go higher resolutions for me is text. I'm happy with FSR to upscale the games from 1080p to 1440p for slightly better FPS.

    HDR is interesting and might be what convinces people to upgrade from 1080p. On a good TV it feels like more of an upgrade than 4K does.

    Max_P ,
    @Max_P@lemmy.max-p.me avatar

    If you dig deeper into systemd, it's not all that far off the Unix philosophy either. Some people seem to think the entirety of systemd runs as PID1, but it really only spawns and tracks processes. Most systemd components are separate processes that focus on their own thing, like journald and log management. It's kinda nice that they all work very similarly, it makes for a nice clean integrated experience.

    Just because it all lives in one repo doesn't mean it makes one big fat binary that runs as PID1 and does everything.
