• Kausta@lemm.ee · 2 months ago

    You haven't seen anything until you need to put a 4.2 GB gzipped CSV into a pandas DataFrame, which works without any issues, I should note.
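
    A minimal sketch of that read, assuming a hypothetical events.csv.gz; pandas infers gzip from the file extension:

      import pandas as pd

      # compression is inferred from the .gz suffix (compression="infer" is the default)
      df = pd.read_csv("events.csv.gz")  # hypothetical file name
      print(df.shape)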

      • Kausta@lemm.ee · 2 months ago

        Yeah, it was just a simple example. Although using plain pandas (without something like Dask) to load terabytes of data into a single DataFrame at once may not be the best idea, even with enough memory.
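
        For the record, a rough sketch of the Dask route (the file pattern and column names are made up). Gzip isn't splittable, so Dask needs blocksize=None for it:

          import dask.dataframe as dd

          # Lazy, partitioned read: nothing is loaded until .compute()
          ddf = dd.read_csv("part-*.csv.gz", compression="gzip", blocksize=None)
          result = ddf.groupby("key")["value"].mean().compute()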

  • QuizzaciousOtter@lemm.ee · 2 months ago

    Is 600 MB a lot for pandas? Of course, CSV isn't really optimal, but I would've sworn pandas happily works with gigabytes of data.

    • gigachad@sh.itjust.works · 2 months ago

      I guess it's more a critique of how bad CSV is for storing large data than of pandas being inefficient.

    • marcos@lemmy.world · 2 months ago

      Is 600 MB a lot for pandas?

      No, but it’s easy to make a program in Python that doesn’t like it.

    • tequinhu@lemmy.world · 2 months ago

      It really depends on the machine that is running the code. Pandas will always have the entire thing loaded in memory, and while 600 MB is not a concern for our modern laptops running a single analysis at a time, it can get really messy if the person is not thinking about hardware limitations.
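
      A quick way to sanity-check the footprint before it gets messy, as a sketch (file name made up):

        import pandas as pd

        df = pd.read_csv("data.csv")  # hypothetical file
        # deep=True also counts the Python strings behind object columns
        print(f"{df.memory_usage(deep=True).sum() / 1e9:.2f} GB in RAM")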

      • naught@sh.itjust.works · 2 months ago

        Pandas supports lazy loading and can read files in chunks. Hell, even regular ol' Python doesn't need to read the whole file at once with the csv module.
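
        A sketch of both approaches, with data.csv and process() as made-up placeholders:

          import csv
          import pandas as pd

          # pandas: iterate over fixed-size chunks instead of loading everything
          for chunk in pd.read_csv("data.csv", chunksize=100_000):
              process(chunk)  # hypothetical per-chunk handler

          # stdlib: stream one row at a time, constant memory
          with open("data.csv", newline="") as f:
              for row in csv.reader(f):
                  pass  # handle each row here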

    • MoonHawk@lemmy.world · 2 months ago

      What do you mean, not optimal? It is quite literally the most popular format for any serious data handling and exchange. One byte per separator and newline is all you need. It is not compressed, so it allows you to stream as well. If you don't need a tree structure, it is massively better than the alternatives.
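
      The streaming point in concrete terms; a sketch with a made-up file and column name:

        import csv

        # Running total over an arbitrarily large file, constant memory
        total = 0.0
        with open("big.csv", newline="") as f:
            for row in csv.DictReader(f):
                total += float(row["amount"])  # hypothetical column
        print(total)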

      • QuizzaciousOtter@lemm.ee · 2 months ago

        I think portability and easy parsing are the only advantages of CSV. It's definitely good enough (maybe even the best) for small datasets, but if you have a lot of data you need a compressed binary format, something like Parquet.
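
        For comparison, a minimal sketch of the Parquet round trip (needs pyarrow or fastparquet installed; file names are made up):

          import pandas as pd

          df = pd.read_csv("data.csv")           # hypothetical CSV source
          df.to_parquet("data.parquet")          # compressed, columnar, typed
          df2 = pd.read_parquet("data.parquet")  # reloads much faster than CSV

        Parquet also lets you read just the columns you need (pd.read_parquet(..., columns=[...])), which CSV can't do.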