7 comments

  • refulgentis 0 minutes ago
    The sloppiest slop I've seen in a couple of weeks:

    - fork of a fork of a quantization library

    - suspicious burst of near-content-free comments from new accounts

    - 6 comments after 7 hours: 4 flagged/dead, the other 2 also spammy and/or confused

    - Demo shows it's worse: 800 ms instead of 2.6 ms for a text embedding search

    - "but it saves space" - yes: 1.2 MB in RAM instead of 7.2 MB, at the cost of turning a sub-frame search into a ~1 s one.

    - Keyword-matches on TurboQuant as if that were something cool

    - It's "not even wrong" to do this with the output embeddings; there are far more obvious ways to save even more space

    - The README is an LLM transcript that thinks the author is asking it for work, not a README that explains anything
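
    The memory/latency tradeoff in the list above can be sketched in a few lines. The thread does not state the corpus size or embedding dimension, so the numbers below are assumptions chosen to roughly reproduce the ~7.2 MB float32 figure; the quantized format is illustrated with plain int8 rather than the project's actual scheme:

        import numpy as np

        # Assumed corpus: ~4,700 embeddings at 384 dims (not stated in the thread).
        n_vectors, dim = 4700, 384

        fp32_mb = n_vectors * dim * 4 / 1e6   # 4 bytes per float32 component
        int8_mb = n_vectors * dim * 1 / 1e6   # 1 byte per int8 component

        print(f"float32: {fp32_mb:.1f} MB")   # ~7.2 MB
        print(f"int8:    {int8_mb:.1f} MB")   # ~1.8 MB, a 4x saving

        # Brute-force float32 dot-product search over a corpus this small is
        # typically well under one 16 ms frame on modern hardware, which is
        # why the ~800 ms quantized path reads as a regression, not a win.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((n_vectors, dim)).astype(np.float32)
        q = rng.standard_normal(dim).astype(np.float32)
        scores = X @ q                        # similarity of q to every vector
        best = int(np.argmax(scores))         # index of the nearest embedding

    At corpora this size the float32 index fits trivially in RAM, so the space saving buys almost nothing while the slower scoring path costs three orders of magnitude in latency.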

  • glohbalrob 5 hours ago
    Very cool. I added Google's new multi embedding 2 model to my site the other week.

    I guess I need to dig into this and see if it's faster and has more use cases! Thanks for publishing your work.

  • hhthrowaway1230 5 hours ago
    Awesome! Also love the gaussian splat demo, cool use case!
  • himmelsee2018 2 hours ago
    [flagged]
  • newbrowseruser 5 hours ago
    [dead]
  • bingbong06 5 hours ago
    [flagged]
  • aritzdf 3 hours ago
    [flagged]