- cross-posted to:
- hackernews@derp.foo
- technews@radiation.party
LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models.
Also discussed on HN: LlamaIndex: Unleash the power of LLMs over your data
[poxrud]: Is this an alternative/competitor to langchain? If so, which one is easier to use?
[mabcat]: It’s an alternative that does a similar job; it depends on/abstracts over langchain for some things. It’s easier to use than langchain and you’ll probably get moving much faster.
They’ve aimed to make a framework that starts concise and simple, has useful defaults, then lets you adjust or replace specific parts of the overall “answer questions based on a vectorized document collection” workflow as needed.
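For context, the "starts concise and simple" workflow described above looks roughly like the sketch below. This is a minimal, illustrative example, not the library's canonical quick start: exact class names (`VectorStoreIndex`, `SimpleDirectoryReader`) and the assumed `./data` folder and API key setup vary across llama_index versions.

```python
# Minimal "answer questions over a folder of documents" sketch.
# Assumes OPENAI_API_KEY is set and ./data contains the source documents.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # load and chunk local files
index = VectorStoreIndex.from_documents(documents)     # embed chunks into a vector index
query_engine = index.as_query_engine()                 # default retriever + LLM answer synthesis
print(query_engine.query("What does the collection say about X?"))
```

Each of these steps (chunking, embedding model, retriever, response synthesizer) can be swapped out individually, which is the customization path the comment refers to.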
[rollinDyno]: I gave this a shot a while back and found plenty of examples but little documentation. For instance, there is a tree structure for storing the embeddings and the library is able to construct it with a single line. However, I couldn’t find a clear explanation of how that tree is constructed and how to take advantage of it.
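For reference, the one-line tree construction mentioned above is roughly the sketch below. This is an assumption-laden illustration: the class is named `TreeIndex` in newer releases and `GPTTreeIndex` in older ones, and the summarize-then-traverse behavior described in the comments is how the tree index is generally understood to work rather than a quote from the docs.

```python
# Sketch of the one-line tree index construction (class name varies by version:
# TreeIndex in newer llama_index releases, GPTTreeIndex in older ones).
from llama_index import SimpleDirectoryReader, TreeIndex

documents = SimpleDirectoryReader("data").load_data()
# Leaf nodes hold document chunks; the LLM summarizes groups of children into
# parent nodes, repeating until a single root summary remains.
index = TreeIndex.from_documents(documents)
# Querying walks down from the root, asking the LLM to pick the most relevant
# child at each level until it reaches the supporting leaf chunks.
print(index.as_query_engine().query("What does the collection say about X?"))
```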
[freezed88]: Hey all! Jerry here (from LlamaIndex). We love the feedback, and one main point especially seems to be around making the docs better:
- Improve the organization to better expose both our basic and our advanced capabilities
- Improve the documentation around customization (from LLMs to retrievers, etc.)
- Improve the clarity of our examples/notebooks.
Will have an update in a day or two :)