• 0 Posts
  • 30 Comments
Joined 1 year ago
Cake day: July 2nd, 2023


  • affiliate@lemmy.world to Science Memes@mander.xyz · So much · 11 days ago

    that would be a lot clearer. i’ve just been burned in the past by notation in analysis.

    my two most painful memories are:

    • in the (baby) rudin textbook, he uses f(x+) to denote the limit of f from the right, and f(x-) to denote the limit of f from the left.
    • in friedman’s analysis textbook, he writes the direct sum of vector spaces as M + N instead of using the standard notation M ⊕ N. to make matters worse, he uses M ⊕ N to mean that M is orthogonal to N.

    there’s the usual “null spaces” instead of “kernel” nonsense. i’ve also seen lots of analysis books use the → symbol to define functions when they really should have been using the ↦ symbol.
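    for reference, the distinction between the two arrows, plus rudin’s one-sided limit notation written out as ordinary limits (the x² rule is just a made-up example):

```latex
% \to names the domain and codomain, \mapsto names the rule
f \colon \mathbb{R} \to \mathbb{R}, \qquad x \mapsto x^2

% rudin's one-sided limits, as ordinary limits
f(x+) = \lim_{t \to x^{+}} f(t), \qquad f(x-) = \lim_{t \to x^{-}} f(t)
```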

    at this point, i wouldn’t put anything past them.


  • affiliate@lemmy.world to Science Memes@mander.xyz · So much · 11 days ago

    unless f(x0 ± δ) is some kind of funky shorthand for the set { f(x) : x ∈ ℝ, |x − x0| < δ }. in that case, the definition would be “correct”.

    it’s much more likely that it’s a typo, but analysts have been known to cook up some pretty bizarre notation from time to time, so it’s not totally out of the question.
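    spelled out, the two readings (the second one is just the guess from above, not any standard notation):

```latex
% literal reading: f evaluated at the two endpoints
f(x_0 \pm \delta) = \text{the pair } f(x_0 - \delta),\ f(x_0 + \delta)

% "funky shorthand" reading: the image of the delta-interval around x_0
f(x_0 \pm \delta) = \{\, f(x) : x \in \mathbb{R},\ |x - x_0| < \delta \,\}
```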


  • affiliate@lemmy.world to Science Memes@mander.xyz · So much · 11 days ago

    i think the ε-δ approach leads to way more cumbersome and long proofs, and it leads to a good amount of separation between the “idea being proved” and the proof itself.

    it’s especially rough when you’re chasing around multiple “limit variables” that depend on different things. i still have flashbacks to my second measure theory course where we would spend an entire two hour lecture on one theorem, chasing around ε and η throughout different parts of the proof.
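    for anyone who hasn’t had the pleasure, here’s the bookkeeping in about the simplest possible case (the standard lim 3x proof, nothing from that lecture):

```latex
\text{claim: } \lim_{x \to a} 3x = 3a.
\text{proof: given } \varepsilon > 0, \text{ choose } \delta = \varepsilon / 3.
\text{then } 0 < |x - a| < \delta \implies
|3x - 3a| = 3\,|x - a| < 3\delta = \varepsilon. \qquad \square
```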

    best to nip it in the bud, i’d say


  • affiliate@lemmy.world to Science Memes@mander.xyz · So much · 12 days ago

    i still feel like this whole ε-δ thing could have been avoided if we had just put more effort into the “infinitesimals” approach, which is a bit more intuitive anyways.

    but on the other hand, you need a lot of heavy tools to make infinitesimals work in a rigorous setting, and shortcuts can be nice sometimes
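    for comparison, the infinitesimals computation looks like this (here ε is a nonzero infinitesimal and st(·) is the standard part map, as in robinson-style nonstandard analysis):

```latex
f(x) = x^2 \implies f'(x)
  = \operatorname{st}\!\left( \frac{(x + \varepsilon)^2 - x^2}{\varepsilon} \right)
  = \operatorname{st}(2x + \varepsilon)
  = 2x
```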






  • that’s not the full story though. according to the NIH, the US government spent over 30 billion dollars on the covid vaccines.

    and this is not unique to the covid vaccine. here’s a source with two particularly damning quotes:

    “Since the 1930s, the National Institutes of Health has invested close to $900 billion in the basic and applied research that formed both the pharmaceutical and biotechnology sectors.”

    and

    A 2018 study on the National Institute of Health’s (NIH) financial contributions to new drug approvals found that the agency “contributed to published research associated with every one of the 210 new drugs approved by the Food and Drug Administration from 2010–2016.” More than $100 billion in NIH funding went toward research that contributed directly or indirectly to the 210 drugs approved during that six-year period.


  • affiliate@lemmy.world to Science Memes@mander.xyz · Tensors · 15 days ago

    the “categorical” way of defining tensor products is essentially “that thing that lets you turn multi-linear maps into linear maps”, and linear maps (of finite dimensional vector spaces) are basically matrices anyways. so i don’t see it as much of a stretch to say tensors are matrices.

    (can you tell that i never took a physics class?)
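    a quick python sketch of the “linear maps are basically matrices” point (the matrices here are made up for illustration): composing two linear maps corresponds to multiplying their matrices.

```python
def apply(matrix, vec):
    """apply a linear map (given by its matrix) to a vector."""
    return [sum(row[j] * vec[j] for j in range(len(vec))) for row in matrix]

def matmul(A, B):
    """matrix of the composition 'A after B'."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [0, 1]]   # matrix of a linear map f : R^2 -> R^2
B = [[0, -1], [1, 0]]  # matrix of a linear map g : R^2 -> R^2
v = [3, 4]

# applying g then f, one vector at a time ...
step_by_step = apply(A, apply(B, v))
# ... agrees with applying the single matrix A*B
assert step_by_step == apply(matmul(A, B), v)
print(step_by_step)  # [2, 3]
```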


  • affiliate@lemmy.world to Science Memes@mander.xyz · Tensors · 15 days ago

    a tensor is a multi-linear map V × … × V × V* × … × V* → F, and a multi-linear map V × … × V × V* × … × V* → F is the same as a linear map V ⊗ … ⊗ V ⊗ V* ⊗ … ⊗ V* → F. and a linear map is ““the same thing as”” a matrix. so in this way, you can associate matrices to tensors. (but the matrices are formed in the tensor space V ⊗ … ⊗ V ⊗ V* ⊗ … ⊗ V*, not in the vector space V.)
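    to make the bilinear case concrete, a small python sketch (the matrix M is made up for illustration): a bilinear map B(u, v) = uᵀMv on ℝ² × ℝ² carries the same data as a linear functional on ℝ² ⊗ ℝ² ≅ ℝ⁴, evaluated at u ⊗ v.

```python
M = [[1, 2], [3, 4]]  # made-up matrix of a bilinear form on R^2 x R^2

def bilinear(u, v):
    """B(u, v) = u^T M v, a bilinear map R^2 x R^2 -> R."""
    return sum(M[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

def tensor(u, v):
    """u (x) v as a flat vector in R^4 (the kronecker product)."""
    return [u[i] * v[j] for i in range(2) for j in range(2)]

def linear_on_tensor(w):
    """the corresponding linear functional R^4 -> R, with 'matrix' vec(M)."""
    flat_M = [M[i][j] for i in range(2) for j in range(2)]
    return sum(m * x for m, x in zip(flat_M, w))

u, v = [1, -1], [2, 5]
# the bilinear map on (u, v) agrees with the linear map on u (x) v
assert bilinear(u, v) == linear_on_tensor(tensor(u, v))
print(bilinear(u, v))  # -14
```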