• 1 Post
  • 35 Comments
Joined 11 months ago
Cake day: October 20th, 2023



  • Doesn’t really change much.

    You NEVER connect to sensitive resources via wifi. Different orgs and levels have different rules about whether a device capable of wifi can even be in the same room, but the key is to not connect it to the secure network. This is commonly referred to as “an airgap”. And if you are wondering how different ships can communicate with each other and The US? Don’t ask questions!

    For less sensitive resources? YOLO that shit. But it is also incredibly trivial to set up a security model where users cannot connect to arbitrary networks.

    So StinkyNet would, presumably, only be usable by personal devices. Which should have absolutely nothing sensitive on them to begin with. And if anything on any of the ship’s sensitive networks was even able to connect to StinkyNet then… the Navy done fucked up.

    Which… might explain the rapid action to punish those who violated policy.


  • And if there are not immense amounts of “do not have a fucking fitbit” levels of warnings and policies, that is a problem for the US Navy itself. Because a lot of those will also cache data and send the last N days once they get back to shore.

    Again, unless they were ACTUALLY doing sensitive stuff (rather than just “sensitive by default” to protect Leadership™ from having to think and make decisions), then we are looking at the same problem the Russians have in Ukraine.

    Otherwise? It is a policy violation, not a security violation, in and of itself. What people then share on social media is on them.


    And a friendly reminder: Policy is made to minimize the risk of a security issue and you should follow it (if only because you are paid to). But it is VERY important to understand what you are actually protecting yourself from so that you understand if policy is even doing anything. Otherwise you get complete insanity as more and more bureaucrats and Leaders™ add bullshit so they can get a bonus for being “security minded”.







  • Because the Mastodon community did the same thing we do every time there is a chance to get people away from corporations (e.g. Linux vs Windows).

    People were looking for an alternative. The general consensus was that it was hard to really grok federation. So, of course, The Community insisted on explaining federation and why it was good while basically only commenting on the instances that had closed applications. It was the equivalent of insisting that someone who wanted to try Linux for gaming NEEDS to use Arch and only needs to know twenty command line operations to get up and running.

    So… everyone instead just went to Bluesky and Threads where sign-up links were provided rather than directory links and manifestos.

    And… I am perfectly happy with that. Lemmy has a LOT of issues where so much of the community is talking about their ex-girlfriend (reddit) all the time and we basically get constant content and engagement farming that makes no fucking sense considering the userbase.

    Whereas Mastodon actually IS a really good community that feels very different from twitter/bluesky/threads. It isn’t for everyone but I very regularly have genuinely good conversations with people in the town hall/microblog format. Whereas… I am not sure if I have ever had even a meaningful conversation on lemmy (whereas I’ve probably had maybe ten on reddit over the years?).


  • Generally speaking, all the major instances are federated with all the other major instances.

    The differences are the super tiny instances (which are generally effectively zero traffic) and the controversial instances (mostly tankies). Said controversial instances don’t want to advertise that nobody can stand them and the rest of the instances don’t want to deal with the bullshit from bringing it up again.

    I think it would be a nice novelty to visualize this. But I don’t think there would be much actionable information coming out of it and, because this is The Internet, it would likely lead to harassment and brigading.


  • Depends on the system

    The “Yo dog, a tidal wave is hitting right now. That is why it looks like it is super duper low tide.” alert? Yeah, you’ll get that. Whether that leaves time to meaningfully act or not depends.

    But most regions have additional services you can and should sign up for that will give the early warnings. So “Seismic activity detected a mile or three off shore or however tidal waves work. We are monitoring the situation” is a good indicator of “maybe today is not the day I go to the beach”. And “We are no longer monitoring the situation. Please proceed as normal” is a sign that maybe you do want to go for swimmies after all.

    Same with other disasters. I live in a region that has a lot of wildfires. We tend to get the early warnings and even the “We might say to evacuate in a few days” through a different service. We get the “Get the fuck out of town immediately” alerts through the normal emergency alert system.


  • The issue is: What is right and what is wrong?

    Mondegreens are so ubiquitous that there are multiple websites dedicated to them. Is it “wrong” to tell someone that the song where Jimi Hendrix talked about kissing a guy is Purple Haze? And even pointing out where in the song that happens has value.

    In general, I would prefer it if all AI Search Engines provided references. Even a top two or three pages. But that gets messy when said reference is telling someone they misunderstood a movie plot or whatever. “The movie where Anthony Hopkins pays Brad Pitt for eternal life using his daughter is Meet Joe Black. Also you completely missed the point of that movie” is a surefire way to make customers incredibly angry because we live in bubbles where everything we do or say (or what influencers do or say and we pretend we agree with…) is reinforced, truth or not.

    And while it deeply annoys me when I am trying to figure out how to do something in GitLab CI or whatever and get complete nonsense based on a single feature proposal from five years ago? That… isn’t much better than asking for help on a message board where people are going to just ignore the prompt and say whatever they Believe.

    In a lot of ways, the backlash against the LLMs reminds me a lot of when people get angry at self checkout lines. People have this memory of a time that never was where cashiers were amazingly quick baggers and NEVER had to ask for help to figure out if something was an Anaheim or Poblano pepper or have trouble scanning something or so forth. Same with this idea of when search (for anything non-trivial) was super duper easy and perfect and how everyone always got exactly the answer they wanted when they posted on a message board rather than complete nonsense (if they weren’t outright berated for not searching for a post from ten years ago that is irrelevant).


  • More drives is always better. But you need to understand how you are making it better.

    https://en.wikipedia.org/wiki/Standard_RAID_levels is a good breakdown of the different RAID levels. Those are slightly different depending on whether you are doing “real”/hardware RAID or software RAID (e.g. ZFS), but the principle holds true and the rest is just googling the translation (for example, Unraid is effectively RAID4 with some extra magic to better support mismatched drive sizes).

    That actually IS an important thing to understand early on. Because, depending on the RAID model you use, it might not be as easy as adding another drive. Have three 8 TB and want to add a 10? That last 2 TB won’t be used until EVERY drive has at least 10 TB. There are ways to set this up in ZFS and Ceph and the like but it can be a headache.
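    That capacity rule can be sketched as back-of-the-envelope shell arithmetic (this assumes a RAID-Z1/RAID-5-style single-parity layout; it is just the math, not an actual ZFS calculation):

```shell
# Usable capacity of a RAID-Z1/RAID-5-style vdev: every member is
# truncated to the size of the smallest drive, and one drive's worth
# of space goes to parity.
raidz1_usable_tb() {
    smallest=$(printf '%s\n' "$@" | sort -n | head -n 1)
    echo $(( smallest * ($# - 1) ))
}

# Three 8 TB drives plus a new 10 TB drive: the extra 2 TB sits idle
raidz1_usable_tb 8 8 8 10   # prints 24, not 26
```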

    And the issue isn’t the cloudflare tunnel. The issue is that you would have a publicly accessible service running on your network. If you use the cloudflare access control thing (login page before you can access the site) you mitigate a lot of that (while making it obnoxious for anything that uses an app…) but are still at the mercy of cloudflare.

    And understand that these are all very popular tools for a reason. So they are also things hackers REALLY care about getting access to. Just look up all the MANY MANY MANY ransomware attacks that QNAP had (and the hilarity of QNAP silently re-enabling online services with firmware updates…). Because using a botnet to just scan a list of domains and subdomains is pretty trivial and more than pays for itself after one person pays the ransom.

    As for paying for that? I would NEVER pay for nextcloud. It is fairly shit software that is overkill for what people use it for (file syncing and document server) and dogshit for what it pretends to be (google docs+drive). If I am going that route, I’ll just use Google Docs or might even check out the Proton Docs I pay for alongside my email and VPN.

    But for something self hosted where the only data that matters is backed up to a completely different storage setup? I still don’t like it being “exposed” but it is REALLY nice to have a working shopping list and the like when I head to the store.


  • A LOT of questions there.

    Unraid vs Truenas vs Proxmox+Ceph vs Proxmox+ZFS for NAS: I am not sure if Unraid is ONLY a subscription these days (I think it was going that way?) but for a single machine NAS with a hodgepodge of drives, it is pretty much unbeatable.

    That said, it sounds like you are buying dedicated drives. There are a lot of arguments for not having large spinning disk drives (I think general wisdom is 12 TB is the biggest you should go for speed reasons?), but at 3x18 you aren’t going to really be upgrading any time soon. So Truenas or just a ZFS pool in Proxmox seems reasonable. Although, with only three drives you are in a weird spot regarding “raid” options. Seeing as I am already going to antagonize enough people by having an opinion, I’ll let someone else wage the holy war of RAID levels.

    I personally run Proxmox+Ceph across three machines (with one specifically set up to use Proxmox+ZFS+Ceph so I can take my essential data with me in an evacuation). It is overkill and Proxmox+ZFS is probably sufficient for your needs. The main difference is that your “NAS” is actually a mount that you expose via SMB and something like Cockpit. Apalrd did a REALLY good video on this that goes step by step and explains everything and it is well worth checking out https://www.youtube.com/watch?v=Hu3t8pcq8O0.

    Ceph is always the wrong decision. It is too slow for enterprise and too finicky for home use. That said, I use ceph and love it. Proxmox abstracts away most of the chaos but you still need to understand enough to set up pools and cephfs (at which point it is exactly like the zfs examples above). And I love that I can set redundancy settings for different pools (folders) of data. So my blu ray rips are pretty much YOLO with minimal redundancy. My personal documents have multiple full backups (and then get backed up to a different storage setup entirely). Just understand that you really need at least three nodes (“servers”) for that to make sense. But also? If you are expanding it is very possible to set up the ceph in parallel to your initial ZFS pool (using separate drives/OSDs), copy stuff over, and then cannibalize the old OSDs. Just understand that makes that initial upgrade more expensive because you need to be able to duplicate all of the data you care about.
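    The per-pool redundancy idea above can be sketched with a few ceph CLI calls (pool names and placement-group counts here are hypothetical placeholders, not an actual setup):

```shell
# Pool for irreplaceable documents: keep 3 copies, stay writable with 2
ceph osd pool create documents 64
ceph osd pool set documents size 3
ceph osd pool set documents min_size 2

# Pool for easily re-ripped media: minimal redundancy
ceph osd pool create media 128
ceph osd pool set media size 2
ceph osd pool set media min_size 1
```

    With the default CRUSH rules, `size 3` wants each copy on a separate host, which is part of why three nodes is the practical minimum.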

    I know some people want really fancy NASes with twenty million access methods. I want an SMB share that I can see when I am on my local network. So… barebones cockpit exposing an SMB share is nice. And I have syncthing set up to access the same share for the purpose of saves for video games and so forth.

    Unraid vs Truenas vs Proxmox for Services: Personally? I prefer to just use Proxmox to set up a crapton of containers/vms. I used Unraid for years but the vast majority of tutorials and wisdom out there are just setting things up via something closer to proxmox. And it is often a struggle to replicate that in the Unraid gui (although I think level1techs have good resources on how to access the real interface which is REALLY good?).

    And my general experience is that truenas is mostly a worst of all worlds in every aspect and is really just there if you want something but are afraid of/smart enough not to use proxmox like a sicko.

    Processor and Graphics: it really depends on what you are doing. For what you listed? Only Frigate will really take advantage, and I just bought a Coral accelerator which is a lot cheaper than a GPU and tends to outperform them for the kind of inference that Frigate does. There is an argument for having a proper GPU for transcoding in Plex but… I’ve never seen a point in that.

    That said: A buddy of mine does the whole vlogger thing and some day soon we are going to set up a contract for me to sit down and set her up an exporting box (with likely use as a streaming box). But I need to do more research on what she actually needs and how best to handle that and she needs to figure out her budget for both materials and my time (the latter likely just being another case where she pays for my vacation and I am her camera guy for like half of it). But we probably will grab a cheap intel gpu for that.

    External access: Don’t do it, that is a great way to get hacked.

    That out of the way. My nextcloud is exposed to the outside world via a cloudflare tunnel. It fills me with anxiety but as long as you regularly update everything it is “fine”.

    My plex? I have a lifetime plex pass so I just use their services to access it remotely. And I think I pay an annual fee for homeassistant because I genuinely want to support that project.

    Everything else? I used to use wireguard (and openvpn before it) but actually switched to tailscale. I like the control that the former provided but much prefer the model where I expose individual services (well, VMs). Because it is nice to have access to my cockpit share when I want to grab a file in a hotel room. There is zero reason that anything needs access to my qBittorrent or calibre or opnsense setup. Let alone even seeing my desktop that I totally forgot to turn off.

    But the general idea I use for all my selfhosted services is: The vast majority of interactions should happen when I am at home on my home network. It is a special case if I ever need to access anything remotely and that is where tailscale comes in.

    Theoretically you can also do the same via wireguard and subnetting and VLANs, but I always found that to be a mess to provide access both locally and remotely and the end result is I get lazy. Also, Tailscale is just an app on basically any machine whereas wireguard tends to involve some commands or weird phone interactions.
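    As a rough illustration of that difference (interface names and paths are the usual defaults; keys and endpoints are placeholders you would fill in):

```shell
# The wireguard route: generate a keypair per peer...
wg genkey | tee privatekey | wg pubkey > publickey
# ...then hand-edit /etc/wireguard/wg0.conf on BOTH ends with keys,
# Endpoint, and AllowedIPs (this is where the subnet/VLAN planning
# comes in) before bringing the tunnel up:
wg-quick up wg0

# The tailscale route: one command per device, auth happens in a browser
tailscale up
```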


  • NuXCOM_90Percent@lemmy.zip to Technology@lemmy.world · *Permanently Deleted* · 21 days ago (edited)

    Sorry, did I miss something? Boeing took over the FAA?

    JESUS GOD DAMNED CHRIST!!! WE ARE ALL GOING TO DIE!!! Now THAT is news.

    It has nothing to do with “trying to solve this problem” and pretending it does is just an obnoxious strawman. The issue is coming into a completely unrelated thread to spew some idiocy because your favorite influencer does the same. It is engagement farming for absolutely zero reason.

    The ONLY mention of Boeing in that article was that they were being considered for a fallback. Which also includes misinformation about NASA deeming it unsafe (as opposed to not as safe/unnecessarily risky when there are safer options). Which… is an FAA and NASA decision. Because you can bet spacex would gladly fly their rockets if they were allowed to as well.


  • NuXCOM_90Percent@lemmy.zip to Technology@lemmy.world · *Permanently Deleted* · 21 days ago

    Sorry, what?

    Boeing is willing to pay for spacex not being perfect? And we should put astronauts on known safety risks because Challenger and Columbia weren’t enough for you?

    Look, I get it. Everyone is influencer-pilled. But this isn’t even reddit: it is fricking lemmy. So how about trying to respond to topics and discussions rather than just non-existent karma and engagement farming with non sequiturs?


  • NuXCOM_90Percent@lemmy.zip to Technology@lemmy.world · *Permanently Deleted* · 21 days ago (edited)

    There are (or at least were) actually competent engineers at spacex. While we can’t rule out overengineering to an obscene degree, the amount of propulsion is going to be very limited. Basically enough to make minor adjustments and then one last burn to “safely” land.

    Which is basically comparable to wind carrying a conventional booster off course. Yes, it is possible but it is mitigated by landing in an ocean and not doing this on windy days.

    No, the issue is that there was a failure. It doesn’t matter when or where it happens. Something that was supposed to work didn’t, and we need to understand that before we have yet another Challenger.

    Let’s put it this way (yay metaphors, these never lead to pedantry and derailment): You just got home from driving to the local fun fair. You close your door and your mirror falls off. It happened AFTER you drove and AFTER you turned off the engine but… are you going to go on any road trips before figuring out what the hell happened?


  • I mean. Traditional systems go through a LOT of very rigorous and documented-ish processes to be reused (not quite Rocket of Theseus but…). They are expected to be unusable after a launch and being able to reuse them is kind of an added bonus.

    Reusable systems are specifically designed to be… reused. So if they aren’t reusable after a launch, something went horribly wrong and we need to understand why. Because maybe we got lucky and the proverbial door fell off after landing this time. Maybe next time it falls off mid-flight.