I am seeking advice about my ebook collection on a Linux system. The collection lives on an external drive and is sorted into categories, but there are still many unsorted ebooks. I have tried using Calibre for organization, but during import it creates duplicate files on my main drive, where I don't want to keep any media. I would like to:

  • Use Calibre’s automatic organization (tags, etc.) without duplicating files
  • Maintain my existing folder structure while using Calibre
  • Automatically sort the remaining ebooks into my existing categories/folder structure

I am considering the use of symlinks to maintain the existing folder structure if there is a simple way to automate the process due to my very large collection.

Regarding automatic sorting by category, I am looking for a solution that doesn’t require manual organization or a significant time investment. I’m wondering if there’s a way to extract metadata based on file hashes or any other method that doesn’t involve manual work. Most of the files should have title and author metadata, but some won’t.

Has anyone encountered a similar problem and found a solution? I would appreciate any suggestions for tools, scripts, or workflows that might help. Thank you in advance for any advice!

  • solrize@lemmy.world · 3 months ago

    If the files are literally duplicated (exact same bytes, so matching md5sums), then you could just delete the duplicates and perhaps replace them with links.
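
    Replacing exact duplicates with links is also easy to automate. A minimal sketch, assuming bash 4+ and GNU coreutils; the library path is a placeholder, so try it on a copy of your data first:

```bash
#!/usr/bin/env bash
# Sketch: find byte-identical files via md5sum, keep the first copy of
# each, and replace later copies with symlinks to it.
# EBOOKS is a placeholder path.
EBOOKS="/mnt/external/ebooks"

declare -A seen   # md5 hash -> first path seen with that hash

while IFS= read -r -d '' f; do
    hash=$(md5sum "$f" | cut -d' ' -f1)
    if [[ -n "${seen[$hash]:-}" ]]; then
        echo "duplicate: $f (original: ${seen[$hash]})"
        ln -sf "${seen[$hash]}" "$f"   # -f replaces the duplicate in place
    else
        seen[$hash]="$f"
    fi
done < <(find "$EBOOKS" -type f -print0)
```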

    Automatically sorting books by category isn't so easy. Is the metadata any good? Are there categories already? ISBNs? Even titles and authors? It starts to become a project, but you could possibly import MARC records (library metadata), which have some of that info in them, if you can match the books up to library records. I expect the openlibrary.org API still works, but I haven't used it in ages.
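
    If the API does still work, matching by ISBN is the easy case. A rough sketch of a lookup against the Open Library Books API, assuming curl and jq are installed; the ISBN is just an example value, and the response shape is from memory, so verify it:

```bash
# Sketch: query the Open Library Books API for one ISBN and extract
# title, authors, and subject headings (useful as category hints).
isbn="9780140328721"
curl -s "https://openlibrary.org/api/books?bibkeys=ISBN:${isbn}&format=json&jscmd=data" |
    jq --arg k "ISBN:${isbn}" \
       '.[$k] | {title, authors: [.authors[]?.name], subjects: [.subjects[]?.name]}'
```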

    • CoderSupreme@programming.dev (OP) · 3 months ago

      > If the files are literally duplicated (exact same bytes, so matching md5sums), then you could just delete the duplicates and perhaps replace them with links.

      If it were only a handful of ebooks I'd consider using symlinks, but with a large collection that seems daunting, unless there is a simple way to automate it?
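
      For what it's worth, GNU cp can clone a whole directory tree as symlinks in one command, so the automation may already exist. A sketch with placeholder paths (note that Calibre may still copy the link targets when it imports them):

```bash
# Sketch: mirror an entire folder tree as symlinks in one command.
# -a preserves the directory structure; -s makes symlinks instead of
# copies. The source should be an absolute path for -s to behave well.
cp -as /mnt/external/ebooks/ ~/ebook-symlinks/
```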

      > It starts to become a project, but you could possibly import MARC records (library metadata), which have some of that info in them, if you can match the books up to library records. I expect the openlibrary.org API still works, but I haven't used it in ages.

      If there’s still no simple way to get the metadata based on the file hashes, I’ll just wait until AI becomes intelligent enough to retrieve the metadata. I’m looking for a solution that doesn’t require manual organization or spending too much time. I’m wondering if there’s a way to extract metadata based on file hashes or any other method that doesn’t involve manual work. Most of the files should have title and author metadata, but some won’t. I’m not in a rush to solve this issue, and I can still find most ebooks by their title without any organization after all.

      • constantokra@lemmy.one · 3 months ago

        I hope someone gives you a good answer, because I'd like one myself. My method has just been to chip away at this stuff little by little. I would also recommend Calibre-Web as the interface instead of Calibre itself. You can run both in Docker and access the Calibre instance on your server from whatever computer you happen to be on. I find that centralizing collections makes managing them at least more mentally manageable.
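
        For the Docker route, something like the linuxserver.io Calibre-Web image works; a sketch with placeholder host paths (the image name and mount points follow that project's conventions, so adjust to your setup):

```bash
# Sketch: Calibre-Web in Docker, pointed at an existing Calibre library.
# Host paths are placeholders; 8083 is Calibre-Web's default web port.
docker run -d --name calibre-web \
  -p 8083:8083 \
  -v /path/to/config:/config \
  -v /path/to/calibre/library:/books \
  lscr.io/linuxserver/calibre-web:latest
```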

        You might want to give an idea of the size of your library; what some people consider large, others might consider nothing much. If it is exceedingly large, you're better off asking somewhere with more data hoarders than on a general Linux board.

        • Ledivin@lemmy.world · 3 months ago

          > I hope someone gives you a good answer

          I honestly don't know that there is one. What OP is looking for is effectively an AI librarian… this is literally a full-time job for some people. I'm sure OP doesn't have quite that many books, but the point remains.

          • solrize@lemmy.world · 3 months ago

            How many ebooks are you talking about (millions)? Is it just a question of finding duplicated files? That's easy with a shell script. For metadata, check whether the books already have it, since a lot do. After that, you can use fairly crude hacks as an initial pass at matching library records. There's code like that around already; try some web searches, or maybe code4lib (library-related programming) if that's still around. I saw your earlier comment before you deleted it, and it was perfectly fine.
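
            The duplicate check really can be a one-liner. A sketch assuming GNU md5sum, sort, and uniq, with a placeholder path:

```bash
# Sketch: hash every file, sort by hash, and print groups of files
# that share a hash, i.e. byte-identical duplicates. -w32 compares
# only the 32-character md5 prefix of each line.
find /mnt/external/ebooks -type f -exec md5sum {} + |
    sort |
    uniq -w32 --all-repeated=separate
```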