
CeeBee ,

It's incredibly easy, and in fact desirable, to not hear anything in that voice.

CeeBee ,

This isn't a "random" YouTube channel.

CeeBee ,

It's theoretically possible, but the issue that anyone trying to do that would run into is consistency.

How do you restore the snapshots of a database to recover deleted comments but also preserve other comments newer than the snapshot date?

The answer is that it's nearly impossible. Not impossible, but not worth the monumental effort when you can just focus on existing comments, which greatly outweigh any deleted ones.

CeeBee ,

It can be done quite easily, trust me.

The words of every junior dev right before I have to spend a weekend undoing their crap.

I've been there too many times.

There are always edge cases you need to account for, and you can't account for them until you run tests and then verify the results.

And you'd be parsing billions upon billions of records. Not a trivial thing to do when running multiple tests to verify. And ultimately for what is a trivial payoff.

You don't screw around with your business's invaluable prod data without first exhausting every single possibility of what a modification could do.

It's a piece of cake.

It hurts how often I've heard this and how often it's followed by a massive screw up.

CeeBee , (edited )

There are so many ways this can be done that I think you are not thinking of.

No, I can think of countless ways to do this. I do this kind of thing every single day.

What I'm saying is that you need to account for every possibility. You need to isolate all the deleted comments that fit the criteria of the "Reddit Exodus".

How do you do that? Do you narrow it down to a timeframe?

The easiest way to do this is identify all deleted accounts, find the backup with the most recent version of their profile with non-deleted comments, and insert that user back into the main database (not the prod db).

Now you need to parse billions upon billions upon billions of records. And yes, it's billions because you need the system to search through all the records to know which record fits the parameters. And you need to do that across multiple backups for each deleted profile/comment.
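The restore step described above is easy to sketch but brutal at scale. Here's a minimal toy version using SQLite; the `comments(id, user, body, edited_at)` schema and all names are made up for illustration, and a real run would have to repeat this scan over billions of rows across many backups:

```python
import sqlite3

# Toy version of the restore: 'main' stands in for a staging copy of the
# live database (never prod!), 'snap' for an older backup.
main = sqlite3.connect(":memory:")
main.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, user TEXT, body TEXT, edited_at INTEGER)")
main.executemany("INSERT INTO comments VALUES (?, ?, ?, ?)", [
    (1, "alice", "[deleted]", 200),    # wiped during the exodus
    (2, "bob", "newer comment", 300),  # newer than the backup: must survive
])

snap = sqlite3.connect(":memory:")
snap.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, user TEXT, body TEXT, edited_at INTEGER)")
snap.execute("INSERT INTO comments VALUES (1, 'alice', 'original comment', 100)")

# Restore only rows that are deleted in main AND present in the backup,
# leaving everything newer than the backup untouched.
for cid, user, body, ts in snap.execute("SELECT id, user, body, edited_at FROM comments"):
    hit = main.execute(
        "SELECT 1 FROM comments WHERE id = ? AND body = '[deleted]'", (cid,)
    ).fetchone()
    if hit:
        main.execute("UPDATE comments SET body = ?, edited_at = ? WHERE id = ?", (body, ts, cid))

print(main.execute("SELECT body FROM comments ORDER BY id").fetchall())
# [('original comment',), ('newer comment',)]
```

The consistency problem is all in that loop condition: you only overwrite rows that are provably deleted, so anything written after the snapshot survives.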

It's a lot of work. And what's the payoff? A few good comments and a ton of "yes this ^" comments.

I sincerely doubt it's worth the effort.

Edit: formatting

CeeBee ,

This makes me think you don't understand my meaning. I think you're talking about one day Reddit deciding to search for and restore obfuscated and deleted comments.

Yes, that is what we're talking about. There was a large number of users who updated their comments to something basic and then deleted those comments. I'm fairly confident that before that happened they had zero need to implement a spam-prevention system like you're suggesting. The fact that all those users' comments (mine included) are still <deleted> is evidence of that.

They may have implemented something like that recently, but not before.

CeeBee ,

The monitors blacking out and the system not booting is suspicious.

I think you've said you tried a few distros. Does the exact same behaviour happen on all distros?

If possible, I would suggest giving it all another go with X11 instead of Wayland. Nvidia is still the most problematic with Wayland. It's gotten a lot better, and it's almost there for complete support, but there are still some issues here and there.

CeeBee ,

Sony is just as bad in their own ways.

CeeBee ,

Go vegan

I swear vegans are eventually going to out class religious people for pushing their own beliefs.

CeeBee ,

I've tried a lot of scenarios and languages with various LLMs. The biggest takeaway I have is that AI can get you started on something or help you solve some issues. I've generally found that anything beyond a block or two of code becomes useless. The more it generates the more weirdness starts popping up, or it outright hallucinates.

For example, today I used an LLM to help me tighten up an incredibly verbose bit of code. Today was just not my day and I knew there was a cleaner way of doing it, but it just wasn't coming to me. A quick "make this cleaner: <code>" and I was back to the rest of the code.

This is what LLMs are currently good for. They're just another tool, like tab completion or code linting.

Bankrupt Steward Health puts its hospitals up for sale, discloses $9 bln in debt ( www.reuters.com )

Bankrupt Steward Health Care has put all of its 31 U.S. hospitals up for sale, hoping to finalize transactions by the end of the summer to address its $9 billion in total liabilities, its attorneys said at a Tuesday court hearing in Houston....

CeeBee ,

Hospitals should never be privately owned. Business and healthcare have diametrically opposed incentives.

CeeBee ,

Don't forget that fertility rates have dropped by 50% worldwide.

CeeBee ,

CUDA and AI stuff is very much Linux focused. They run better and faster on Linux, and the industry puts their efforts into Linux. CNC and 3D printing software is mostly equal between Linux and Windows.

The one thing Linux lacks in this area is CAD support from the big players. FreeCAD and OpenSCAD exist, and they work very well, but they do miss a lot of the polish the proprietary software has. There are proprietary CAD solutions for Linux, but they're more industry-specific and not general purpose like AutoCAD.

CeeBee ,

Linux is sadly very messy for a sysadmin.

wut?

CeeBee ,

I can’t quite put my finger on what exactly makes me feel so strongly, but it’s something to do with how sentences and paragraphs are constructed.

It has the classic three-section style: intro, response, conclusion.

It starts by acknowledging the situation. Then it moves on to the suggestion/response. Then finally it gives a short conclusion.

CeeBee ,

The company behind tik tok said

China. It’s China that “said”.

CeeBee ,

How are you restricting internet access for it?

CeeBee ,

Maybe waiting to see which side comes out on top. Kinda like Volkswagen. (Yes, I know it didn’t exactly happen like that.)

CeeBee ,

That’s not AI tho.

What do you mean?

CeeBee ,

I worked in the object recognition and computer vision industry for almost a decade. That stuff works. Really well, actually.

But this checkout thing from Amazon always struck me as odd. It’s the same issue as these “take a photo of your fridge and the system will tell you what you can cook”. It doesn’t work well because items can be hidden in the back.

The biggest challenge in computer vision is occlusion, followed by resolution (in the context of surveillance cameras, you’re lucky to get 200x200 for smaller objects). They would have had a really hard, if not impossible, time getting clear shots of everything.

My gut instinct tells me that they had intended to build a huge training set over time using this real-world setup and hope that the sheer amount of training data could help overcome at least some of the issues with occlusion.

CeeBee , (edited )

The most infuriating thing for me is the constant barrage of “LLMs aren’t AI” from people.

These people have no understanding of what they’re talking about.

Edit: to everyone downvoting me, look at this image

CeeBee ,

Thanks for that read. I definitely agree with the author for the most part. I don’t really agree that current LLMs are a form of AGI, but it’s definitely close.

But what isn’t up for debate is the fact that LLMs are 100% AI. There’s no debate there. But I think the reason why people argue that is because they conflate “intelligence” with concepts like sapience, sentience, consciousness, etc.

These people don’t understand that intelligence is a concept that can, and does, exist outside of consciousness.

CeeBee ,

This is a headline I absolutely did not have on my “things that can go wrong” bingo card.

‘IRL Fakes:’ Where People Pay for AI-Generated Porn of Normal People ( www.404media.co )

A Telegram user who advertises their services on Twitter will create an AI-generated pornographic image of anyone in the world for as little as $10 if users send them pictures of that person. Like many other Telegram communities and users producing nonconsensual AI-generated sexual images, this user creates fake nude images of...

CeeBee ,

FR is not generative AI, and people need to stop crying about FR being the boogieman. The harm that FR can potentially cause has been covered and surpassed by other forms of monitoring, primarily smartphone and online tracking.

CeeBee ,

As soon as anyone can do this on their own machine with no third parties involved

We’ve been there for a while now

CeeBee ,

And is the photo/video generator completely on home machines without any processing being done remotely already?

Yes

CeeBee ,

Did you really just try to excuse and downplay a company claiming full ownership and rights over all users’ data?

CeeBee ,

But I don’t see how you can make the customer go for a ride if the customer doesn’t want to go for a ride.

Don’t hand over the keys on the basis that company requirements for liability mitigation were not met.

I know that sounds like a stretch, but Tesla buyers don’t own their cars. Tesla has control over the system (OTA updates), you “have to” bring it to Tesla for repairs and service, and they’ve even tried to control who can resell a Cybertruck.

You’re basically renting a Tesla at full price.

Antibiotics May Soon Become Useless | The Walrus ( thewalrus.ca )

As living organisms, bacteria are encoded by DNA, and DNA occasionally mutates. Sometimes genetic mutations render a bacterium immune to an antibiotic’s chemical tactics. The few cells that might escape antibiotic pressure then have a sudden advantage: with their counterparts wiped out, resources abound, and the remaining...

CeeBee ,

People often cite evolution for antibiotic resistance, but that’s not the case.

There’s an inverse relation between bacteriophage resistance and antibiotic resistance. Antibiotic resistance requires more efflux pumps and weaker cell walls, while bacteriophage resistance requires stronger cell walls and fewer efflux pumps.

What’s happening is allele drift within colonies towards better antibiotic resistance, but these colonies are also very susceptible to bacteriophages.

This is no different than the frequency of spots on deer in a population group increasing or decreasing over generations.

I know many people call that evolution, but I think it’s important to be precise with our definitions. These traits for antibiotic or bacteriophage resistance are already present within the genome. They each just get expressed under different conditions, and the phenotypic strength of each is inversely proportional to the expression of the other.

This isn’t a simple or straightforward relationship. Genetics are always incredibly complex, but this relationship is confirmed.

CeeBee ,

Yes, genetic drift is evolution

Not “genetic drift”. Although I did forget a critical word. I meant to say “allele frequency drift”, which is distinctly different from genetic drift.

Allele frequency drift simply describes a shift in how common a genetic trait exists, or is expressed, within a population group. The overall genetics of the group are the same. Even if there were no changes to the collective genetics of a population over millions of years (no evolution) you can still have allele frequency drift.

This is what I mean by “allele frequency drift isn’t evolution”. It’s a mathematical expression of the ratio a gene is expressed within a population group. It doesn’t describe any genomic changes or mutations.

The first generation can have a trait at frequency 0.2, gen 2 can have 0.3, gen 3 can have 0.4, and then back down again over the next few generations. But generation 10 can have a (nearly) identical genome to generation 1.
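A toy simulation makes the point concrete. This is a sketch assuming a simple binomial-sampling model of inheritance (every detail here is illustrative, not a real population-genetics model): the trait's frequency wanders from generation to generation even though no mutation ever occurs, so the genome itself never changes.

```python
import random

random.seed(42)

def drift(freq, generations, pop_size):
    """Simulate allele frequency drift: each generation, the new
    frequency is resampled from the old one by counting how many of
    pop_size offspring happen to carry the trait. No mutations occur,
    so only how common the trait is changes, never the genome."""
    history = [freq]
    for _ in range(generations):
        carriers = sum(random.random() < freq for _ in range(pop_size))
        freq = carriers / pop_size
        history.append(freq)
    return history

# Frequency starts at 0.5 and wanders, but always stays between 0 and 1.
history = drift(0.5, generations=10, pop_size=500)
```

Run it a few times with different seeds and the frequency drifts up and down while the underlying "genes" stay fixed, which is exactly the distinction being drawn above.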

CeeBee ,

I understand what you’re saying about drift, but I’m not sure that feels sufficient to explain the prevalence of antibiotic resistance.

One interesting discovery was the remains of a person in Peru from something like 900 years ago. One really interesting aspect of the discovery was the gut bacteria in the remains. When they sequenced the genome of some of the bacteria, they found that they were the same species we have today. More importantly, the genes that encode antibiotic resistance already existed in those bacteria.

ancient-origins.net/…/ancient-peruvian-mummy-surp…

www.ncbi.nlm.nih.gov/pmc/articles/PMC4589460/

The discussion here isn’t about how antibiotic resistance first came about, the discussion is about how bacteria have been reacting to modern medicine. Why are bacteria becoming harder to treat with antibiotics as time goes on?

The point I was making is that bacteria already have antibiotic resistance in the genome, but the phenotypic expression is inversely related to bacteriophage resistance.

Antibiotic resistance needs

  • weaker/more flexible cell walls
  • more efflux pumps

Bacteriophage resistance needs

  • stronger/stiffer cell walls (to protect against punctures)
  • fewer efflux pumps (to increase material strength of cell wall)

In any population group there’s going to be variation in the expression of genes (the phenotype). In that population there are going to be individuals with greater antibiotic resistance and others with greater bacteriophage resistance. When antibiotics are introduced it kills most of the bacteria, but there can be a few individuals with higher antibiotic resistance that can potentially repopulate a new generation with an allele frequency shifted towards higher antibiotic resistance.

I know what I just described is “natural selection”, but that’s not evolution. Natural selection is one of the processes that is part of evolution, but it is not evolution in and of itself.

Edit: formatting

CeeBee ,

Most of that is in the kernel anyways.

CeeBee ,

does nothing cool with your hormones, or any of that nonsense.

There’s quite a bit of evidence that it helps with things like immune response and insulin resistance.

I can attest personally that my usual severe allergies get better (where I get my sense of smell back) when I skip lunches. Although it has to be consistent over a period of at least a few weeks for that to work.

CeeBee ,

Yes, I’m sure. I’m one of the longest-running patients of a top immunology professor.

I can eat just about anything at breakfast and dinner. And I don’t technically fast. I have snacks here and there, and I might have an apple with a slice of cheese for “lunch” every so often. But the reduction of intake during the day makes a huge difference.

CeeBee , (edited )

Won’t this damage the ccd?

Yes, which is why you need to use a solar filter.

Edit: eclipse.aas.org/imaging-video/images-videos

CeeBee ,

eclipse.aas.org/imaging-video/images-videos

When shooting still images or video of a solar eclipse, one rule is paramount: special-purpose solar filters must always remain on cameras and telescopes during the partial phases (including the annular phase of an annular eclipse).

It’s a good way to fry your camera. If you’re taking a single shot, you’re fine. But if you’re recording continuously, you can damage your phone’s sensor.

CeeBee , (edited )

they literally have no mechanism to do any of those things.

What mechanism does it have for pattern recognition?

that is literally how it works on a coding level.

Neural networks aren’t “coded”.

It’s called an LLM for a reason.

That doesn’t mean what you think it does. Another word for language is communication. So you could just as easily call it a Large Communication Model.

Neural networks have hundreds of thousands (at the minimum) of interconnected neurons. Llama 2 has 70 billion parameters. The newly released Grok has over 300 billion. And though we don’t have official numbers, GPT-4 is said to be close to a trillion.

The interesting thing is that when you have neural networks of such a size and you feed large amounts of data into them, emergent properties start to show up. More than just “predicting the next word”, they start to develop a relational understanding of certain words that you wouldn’t expect. It’s been shown that LLMs understand that Miami and Houston are closer together than New York and Paris.

Those kinds of things aren’t programmed, they are emergent from the dataset.

As for things like creativity, they are absolutely creative. I have asked seemingly impossible questions (like a Harlequin story about the Terminator and Rambo) and the stuff it came up with was actually astounding.

They regularly use tools. LangChain is a thing. There’s a new AI agent called Devin that can program, look up docs online, and use a command-line terminal. That’s using a tool.

That also ties in with problem solving. Problem solving is actually one of the benchmarks that researchers use to evaluate LLMs. So they do problem solving.

To problem solve requires the ability to do analysis. So that check mark is ticked off too.

Just about anything that’s a neural network can be called an AI, because the total is usually greater than the sum of its parts.

Edit: I wrote interconnected layers when I meant neurons

What if there's a bigger, still unknown reference point?

When we think about teleportation, there’s always someone talking about how you should take into account the earth and the sun moving through space. Let’s step back a little (not so much): what if the galaxy we’re currently in is rotating really, really fast around another, bigger, still-unknown spatial object?

CeeBee ,

Earth itself is moving around the sun at about 100,000 km/h, and the sun is traveling through the galaxy at about 1 million km/h.

So if Marty went back/forward just one hour then he’d be about 1,100,000 kilometers away from Earth in space (or 900,000 kilometers, depending on the orbital direction of Earth relative to the sun’s direction of travel).

And then there’s the motion and speed of the Milky Way itself.

This is all assuming that the layout of the underlying fabric of spacetime is absolute (which it seems to be, outside of expansion).
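The back-of-envelope numbers above can be checked in a couple of lines (speeds rounded the same way as in the comment, and simply added or subtracted, ignoring the actual angle between the two motions):

```python
EARTH_ORBITAL_SPEED = 100_000   # km/h around the sun, rounded
SUN_GALACTIC_SPEED = 1_000_000  # km/h through the galaxy, rounded

hours = 1
# Worst case: Earth's orbital motion adds to the sun's galactic motion.
max_offset = (SUN_GALACTIC_SPEED + EARTH_ORBITAL_SPEED) * hours
# Best case: Earth's orbital motion partially cancels the sun's motion.
min_offset = (SUN_GALACTIC_SPEED - EARTH_ORBITAL_SPEED) * hours

print(max_offset, min_offset)  # 1100000 900000
```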
