AnythingLLM Lets You Chat With Documents Locally; Here's How to Use It

Download and Set Up AnythingLLM

First, download AnythingLLM from its official website. It’s freely available for Windows, macOS, and Linux. Next, run the setup file to install AnythingLLM; this process may take some time. After that, click on “Get started” and scroll down to choose an LLM. I selected “Mistral 7B”, but you can pick a smaller model from the list if your hardware is limited.

Next, choose “AnythingLLM Embedder” as the embedding provider, since it requires no manual setup.
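If you are curious what the embedder actually does, it converts each chunk of your documents into a vector of numbers so that similar passages end up numerically close together. Here is a toy Python sketch of the idea; the sentence-transformers model below is an illustrative stand-in, not necessarily the exact model AnythingLLM runs internally.

```python
# Toy illustration of what an embedder does: turn text into vectors so
# that similar passages end up numerically close together.
# NOTE: the model below is an illustrative stand-in, not necessarily the
# exact model AnythingLLM's built-in embedder uses.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(["chat with your documents", "local inference"])
print(vectors.shape)  # (2, 384): two texts, one 384-number vector each
```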

After that, select “LanceDB”, which is a local vector database that keeps everything on your machine.
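Those vectors are what LanceDB stores and searches. To give you a feel for it, here is a tiny standalone lancedb example with made-up two-dimensional vectors; it is a conceptual sketch, not AnythingLLM’s internal code.

```python
# Standalone LanceDB sketch with made-up 2-D vectors, just to show the
# store-then-search idea; this is not AnythingLLM's internal code.
import lancedb

db = lancedb.connect("./lancedb-demo")  # data lives in a local folder
table = db.create_table(
    "docs",
    data=[
        {"vector": [0.9, 0.1], "text": "AnythingLLM runs locally"},
        {"vector": [0.1, 0.9], "text": "Cats sleep most of the day"},
    ],
)

# Return the stored chunk whose vector is closest to the query vector.
hits = table.search([0.85, 0.15]).limit(1).to_list()
print(hits[0]["text"])  # -> "AnythingLLM runs locally"
```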

Finally, review your selections and fill out the information on the next page. You can also skip the survey.

Now, set a name for your workspace.

You are almost done. You can see that Mistral 7B is being downloaded in the background. Once the LLM is downloaded, move to the next step.

Upload Your Documents and Chat Locally

First of all, click on “Upload a document”.

Now, click to upload your files or drag and drop them. You can upload many file formats, including PDF, TXT, CSV, audio files, and more.

I have uploaded a TXT file. Now, select the uploaded file and click on “Move to Workspace”.

Next, click on “Save and Embed”. After that, close the window.
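By the way, the desktop app also ships a local developer API, so the same upload-and-embed step can be scripted. The sketch below makes a few assumptions: the API is enabled in AnythingLLM’s settings, it listens on the default http://localhost:3001, and you have generated an API key there; verify the endpoint path against the API docs bundled with your version.

```python
# Scripted version of the upload step via the local developer API.
# Assumptions: the API is enabled in AnythingLLM's settings, it listens
# on the default http://localhost:3001, and YOUR_API_KEY is a key you
# generated there. Verify the endpoint path against your version's docs.
import requests

BASE = "http://localhost:3001/api/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder key

with open("notes.txt", "rb") as f:  # the TXT file to chunk and embed
    resp = requests.post(f"{BASE}/document/upload",
                         headers=HEADERS, files={"file": f})
print(resp.status_code, resp.json())
```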

You can now start chatting with your documents locally. As you can see, I asked a question about the contents of the TXT file, and it gave a correct reply, citing the text file as its source.

I asked a few follow-up questions, and it responded accurately as well.
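If you prefer scripting, the same developer API can answer questions programmatically. This is a hedged sketch: “my-workspace” is a placeholder workspace slug, and the endpoint path and “textResponse” field should be checked against your installation’s API docs.

```python
# Asking a question programmatically via the local developer API.
# "my-workspace" is a placeholder slug; the endpoint path and the
# "textResponse" field are assumptions to check against your install.
import requests

BASE = "http://localhost:3001/api/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder key

resp = requests.post(
    f"{BASE}/workspace/my-workspace/chat",
    headers=HEADERS,
    json={"message": "Summarize the uploaded file", "mode": "query"},
)
print(resp.json().get("textResponse"))
```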

The best part is that you can also add a website URL, and AnythingLLM will fetch the content from that page so you can chat about it as well.
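Conceptually, that link-fetching step boils down to downloading the page and stripping it to plain text before it gets chunked and embedded, roughly like this simplified Python sketch (a hypothetical stand-in, not AnythingLLM’s actual scraper):

```python
# Simplified, hypothetical stand-in for the link-fetching step: download
# a page and strip it to plain text before it gets chunked and embedded.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com", timeout=10).text
text = BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)
print(text[:200])  # preview the first 200 characters of extracted text
```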

So this is how you can ingest your documents and files locally and chat with the LLM securely, with no need to upload your private documents to cloud servers that have sketchy privacy policies. Nvidia has launched a similar program called Chat with RTX, but it only works on high-end Nvidia GPUs. AnythingLLM brings local inference to consumer-grade computers, taking advantage of both the CPU and GPU on any silicon.