# Obsidian and Ollama - Free Local AI Powered PKM
![](https://i.ytimg.com/vi/0KttkhL7-b4/maxresdefault.jpg)
## Introduction [(00:00:00)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=0s)
- [[Artificial intelligence]] has been integrated into various applications, and there is growing interest in running AI as a private, local installation, particularly alongside [[Obsidian (software) | Obsidian]], which allows for offline use [(00:00:00)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=0s).
- The goal is to explore using Obsidian with AI in a private setup, enabling offline functionality and increased data privacy [(00:00:14)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=14s).
- Reasons for using a private AI solution include the ability to work offline, increased data privacy, and low costs, as well as dedicated performance without interference from others [(00:00:42)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=42s).
- Scenarios for using this setup with Obsidian include local notes, where Q&A is applied to a single note, and Q&A scenarios that analyze the entire Vault for context [(00:01:20)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=80s).
- Another scenario is using a default Q&A model that has access to a vast amount of information from external sources like the internet [(00:01:44)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=104s).
- The video will discuss Obsidian and AI, focusing on a private setup that allows for offline use, and is presented by Anton [(00:00:26)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=26s).
## Running AI Locally on Your Machine [(00:02:00)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=120s)
- [[Obsidian (software) | Obsidian]] can be used with local AI-powered personal knowledge management (PKM) solutions, allowing users to leverage AI to answer questions and learn new topics by accessing various models and the information contained within them [(00:02:00)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=120s).
- Several solutions can be used to run AI locally on a machine, including Ollama, LM Studio, [[Generative pre-trained transformer | GPT]], and Jan, which can be easily installed and set up on a local system [(00:02:18)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=138s).
- These solutions support various operating systems, such as [[MacOS | Mac]], [[Microsoft Windows | Windows]], and [[Linux]], as long as the hardware can support them, particularly requiring a system with a good GPU [(00:03:05)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=185s).
- For Mac systems, the M series (M1, M2, or M3) is recommended, while Windows and Linux systems should have a GPU to run these local models efficiently [(00:03:46)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=226s).
- Ollama is one of the solutions that is easy to install and get running on a machine, and it is supported with Obsidian, although other options like LM Studio are also available [(00:04:18)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=258s).
- There are numerous models available for use with these solutions, including many supported by Ollama and LM Studio, allowing users to experiment and find the best model for their specific use case [(00:04:43)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=283s).
- Some of the models tested include LLaMA 2, LLaMA 3, and [[Mistral AI | Mistral]], which were used to evaluate their performance and responsiveness to questions [(00:05:34)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=334s).
- The tested models offered broadly similar performance, but the more information in the Vault, the longer it takes to index the content into the databases the models use for Q&A [(00:06:01)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=361s).
- There are many plugins available for Obsidian, but only four were used in the test, with three being comparable to each other: Co-Pilot, Smart Second Brain, and BMO, while Cannoli is different, as it is leveraged in the canvas and can do more things [(00:06:31)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=391s).
- Co-Pilot, Smart Second Brain, and BMO are similar in that they allow users to pose questions or ask for information and work with the model in a way similar to [[ChatGPT]], providing responses based on the information within the model [(00:06:52)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=412s).
- The plugins support different features, with some supporting local and remote models, and others allowing for generalized chat with the model, chat with a single note, or chat with the entire Vault; a minimal sketch of this kind of local-model chat follows this list [(00:07:16)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=436s).
- BMO is a plugin that allows users to chat with a single note and access general models for general information, but it does not support chatting with the entire Vault [(00:07:37)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=457s).
- In contrast, Co-Pilot and Smart Second Brain index the entire Vault, store the information in their database, and then allow users to do Q&A on the entire Vault [(00:07:53)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=473s).
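As a concrete illustration of the "generalized chat with the model" scenario mentioned above, the sketch below sends a question directly to a locally running Ollama instance over its HTTP API, which is roughly what these plugins do behind the scenes. This is a minimal example under assumptions (the default Ollama port and an already-pulled "llama3" model), not something shown in the video.

```bash
# Ask a locally running Ollama model a general question over its HTTP API.
# Assumes Ollama is listening on its default port and "llama3" has been pulled.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "How far is the Moon from the Earth, on average?",
  "stream": false
}'
```

With `"stream": false`, the answer comes back as a single JSON object containing the model's response text.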
## Installing and Configuring Ollama for AI [(00:08:00)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=480s)
- To get started with Ollama for AI, the first step is to install a solution that allows running models locally, which can be done by downloading Ollama from ollama.com or through their [[GitHub]] page [(00:08:32)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=512s).
- The installation process is straightforward, and once installed, the service can be run from the command line or through an icon at the top of the screen on a [[MacOS | Mac]] [(00:09:07)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=547s).
- To verify that the service is running, users can check the icon or go to the URL localhost:11434, where they should see a page confirming that Ollama is running [(00:09:36)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=576s).
- After installing and running Ollama, users can start pulling down the models they want to use, such as Llama 2, which is supported by most plugins [(00:10:05)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=605s).
- To pull down models, users can follow the instructions on the GitHub page and use the command "ollama pull" followed by the model name [(00:10:23)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=623s).
- Users can also check which models they already have installed by using the command "ollama list"; these commands are recapped in the sketch after this list [(00:10:36)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=636s).
- Once the models are downloaded, users can proceed to set up Ollama in [[Obsidian (software) | Obsidian]] by going to the plugins area and downloading the necessary plugins [(00:11:08)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=668s).
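A quick command-line recap of the steps described above, assuming Ollama has been installed from ollama.com (the model name is just an example):

```bash
# Verify the Ollama service is running (the response should read "Ollama is running")
curl http://localhost:11434

# Pull a model to use with the Obsidian plugins
ollama pull llama2

# List the models already installed locally
ollama list
```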
## Set Up and Test Smart Second Brain Plugin [(00:11:15)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=675s)
- The Smart Second Brain plugin is being set up in Obsidian, and the user has cleared the plugin to start fresh and demonstrate the full setup process [(00:11:15)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=675s).
- The plugin allows users to exclude certain files and folders, and by default it does not auto-start Smart Second Brain, because of the indexing involved when it launches [(00:11:41)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=701s).
- The plugin can be set up to run automatically or manually, depending on the user's preference [(00:12:01)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=721s).
- The Smart Second Brain plugin provides a local solution that can be used completely offline or pointed to a third-party service [(00:12:16)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=736s).
- The user has pointed the plugin to a local URL, which is running Ollama, and can see a list of recommended models [(00:12:30)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=750s).
- The recommended models may not be installed, and users can select from the list or install additional models [(00:12:41)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=761s).
- The user has selected the Llama 3 model and will use the default settings for the plugin [(00:13:40)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=820s).
- The plugin is now set up in [[Obsidian (software) | Obsidian]], and a guided setup process has started that provides instructions for configuring the application [(00:13:50)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=830s).
- The instructions include downloading the application, extracting the zip, starting Ollama, and testing to see if it's running [(00:14:19)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=859s).
- The user has already verified that Ollama is running and can test the setup by copying the snippet shown in the instructions and configuring the environment so Obsidian is allowed to reach Ollama; a hedged sketch of this step appears at the end of this section [(00:14:31)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=871s).
- The Ollama service is leveraged by Obsidian to enable the AI-powered functionality, and restarting the service after the environment change allows Obsidian to utilize it [(00:14:58)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=898s).
- The embedding model is used for indexing the Vault in Obsidian, and users can choose from recommended models or install one if they haven't already; an illustrative embedding call appears at the end of this section [(00:15:15)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=915s).
- The indexing process can take several minutes, and the time may vary depending on the chosen model, with the estimated time displayed in the chat [(00:16:02)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=962s).
- The indexing process must be completed before the plugin can be used, and users can monitor the progress in the chat [(00:16:29)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=989s).
- Once the indexing is complete, the plugin can be used to answer questions, and users can adjust settings such as the chat window appearance and language [(00:17:18)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1038s).
- The plugin also allows users to adjust the creativity setting and the similarity threshold used to match information within their notes, although some settings may not be customizable [(00:17:43)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1063s).
- The plugin can answer general questions, and if it can't find relevant information in the user's notes, it will use the model to provide an answer [(00:18:16)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1096s).
- The plugin can provide information on a wide range of topics, including scientific facts, such as the distance from the [[Earth]] to the Moon [(00:18:41)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1121s).
- [[Obsidian (software) | Obsidian]] is a knowledge management tool that, combined with Ollama and the Smart Second Brain plugin, provides local AI-powered functionality, allowing users to ask questions and receive answers based on the information in their Vault [(00:18:50)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1130s).
- The plugin can sometimes provide incorrect or confusing responses, such as suggesting a non-existent file or providing unrelated information [(00:18:55)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1135s).
- The plugin can provide accurate information on certain topics, such as the average distance from the center of the moon to the center of the Earth [(00:19:37)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1177s).
- To verify the accuracy of the response, users can cross-check with other models or sources, such as [[ChatGPT]] [(00:20:03)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1203s).
- When asking questions to the plugin, users need to be specific and detailed to get accurate results, as vague questions may not yield the desired response [(00:20:27)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1227s).
- The plugin can be used to extract information from notes in the Vault, but users need to ask the right questions and may need to adjust the similarity threshold to get the desired results [(00:23:21)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1401s).
- Adjusting the similarity threshold to a lower value, such as 70, may help the plugin find relevant information in the Vault [(00:23:43)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1423s).
- The plugin's interface includes an octopus icon, which some users may find distracting or in the way [(00:22:05)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1325s).
- With the right question and settings, the plugin can successfully find and extract information from notes in the Vault, such as planned trips or vacations [(00:24:13)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1453s).
- Lowering the similarity threshold results in correct responses, which suggests the plugin would benefit from a feature to rate the quality of responses, such as a thumbs up or thumbs down option [(00:24:16)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1456s).
- The thumbs up or thumbs down feature would allow the plugin to learn from user feedback and improve its performance over time [(00:24:29)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1469s).
- Currently, users can prompt the plugin to indicate whether a response was correct or incorrect, but having a built-in quality of life feature for rating responses would be beneficial [(00:24:39)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1479s).
- The plugin's functionality is comparable to other plugins, such as Copilot, which will be explored in further comparison [(00:24:55)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1495s).
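The "configuring the environment" step mentioned earlier is not reproduced verbatim here, so the following is only a sketch of a typical way to let Obsidian talk to a local Ollama instance: Smart Second Brain's published setup notes rely on the OLLAMA_ORIGINS environment variable, after which the Ollama service is restarted. Treat the exact origin value as an assumption rather than the snippet from the video.

```bash
# Allow requests from the Obsidian app to the local Ollama API (macOS example),
# then quit and restart Ollama so the change takes effect. The origin value is
# an assumption based on the plugin's published setup notes.
launchctl setenv OLLAMA_ORIGINS "app://obsidian.md*"

# Alternatively, if the server is run from a terminal, set the variable inline:
OLLAMA_ORIGINS="app://obsidian.md*" ollama serve
```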
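As an illustration of what the embedding model does during indexing, the call below asks Ollama to turn a piece of note text into a vector; the plugin stores such vectors in its local database and compares the embedded question against them at query time. The endpoint usage and the "nomic-embed-text" model name are assumptions for the sketch, not details taken from the video.

```bash
# Turn a snippet of note text into an embedding vector via the local Ollama API.
# Requires an embedding model to be pulled first, e.g.: ollama pull nomic-embed-text
curl http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "Trip planning: flights in June, hotel near the old town."
}'
```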
## Set Up and Test Co-Pilot Plugin [(00:25:00)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1500s)
- [[Obsidian (software) | Obsidian]] and Ollama can be used together as a free local AI-powered Personal Knowledge Management (PKM) system, allowing users to index their entire Vault and ask questions to it [(00:25:01)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1501s).
- To set up the system, users need to install the Co-Pilot plugin, enable it, and change the embedding model to Ollama [(00:25:22)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1522s).
- The plugin offers different models and modes, including GPT-4, a local Llama model, and a Vault QA mode, which can be used to ask questions and get answers [(00:25:58)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1558s).
- To reset the settings, users can use the refresh-index button, reset everything, and then save and reload the changes [(00:26:22)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1582s).
- After making changes to the settings, users need to save and reload the plugin to apply the changes [(00:26:49)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1609s).
- The system has a default model that can be changed to a local model, such as Llama, and users can select the model they want to use [(00:27:01)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1621s).
- To force the system to reindex, users can try changing one of the models, saving and reloading the changes, and then selecting the Vault QA option [(00:27:36)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1656s).
- The system will go through an indexing process, similar to the Smart Second Brain plugin, which may take some time; a quick way to check what Ollama is doing while this runs is sketched at the end of this section [(00:29:12)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1752s).
- Once the system is set up, users can ask questions and get answers, such as the distance from the Moon to the [[Earth]] [(00:29:47)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1787s).
- The system can also be used to answer more complex questions, and users can experiment with different models and settings to get the best results [(00:30:21)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1821s).
- A test was conducted to compare how the two plugins, Smart Second Brain and Co-Pilot, respond to a question about a plan for the year.
- The test revealed that Co-Pilot provided a more accurate response on the first try, including a link to the source, whereas Smart Second Brain did not answer correctly on the first attempt.
- The results suggest that Co-Pilot may be more effective in finding and responding with accurate information, but still requires trial and error to get the desired results.
- It is essential to ensure that questions are detailed enough to get accurate responses from the AI models.
- The AI models may not always provide correct information, and it is crucial to verify the accuracy of the responses.
- The test was conducted with the relevant note open, which may or may not have affected the results, although it is not expected to have made a difference.
- The responses from Co-Pilot seem to be better based on previous testing.
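A small sanity check while experimenting with Co-Pilot's local mode: the Ollama side can be watched to confirm which models the plugin is actually loading during indexing and Q&A. This is a generic tip, not a step shown in the video.

```bash
# Show which models Ollama currently has loaded in memory (useful while the
# plugin is indexing the Vault or answering a question)
ollama ps

# Confirm the chat and embedding models selected in the plugin are installed
ollama list
```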
## Set Up and Test BMO Plugin [(00:32:20)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1940s)
- The BMO plugin was tested; it is more of a chatbot that can interact with a single note without indexing, and it works well out of the box for its intended purpose [(00:32:20)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1940s).
- The plugin allows users to open a note and ask the AI questions, and it will try to find the relevant information in the note; a rough sketch of this pattern appears at the end of this section [(00:32:38)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=1958s).
- The plugin was tested with a question about planned trips, but it did not return the expected results, possibly due to the note being in a mind map format [(00:33:36)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2016s).
- The note was converted to markdown format, and the question was asked again, resulting in the plugin returning some information from the file [(00:34:37)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2077s).
- However, the returned information was not entirely accurate, with some incorrect information being provided [(00:35:29)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2129s).
- The plugin was tested again with a revised question, and it returned some correct information, but the results were still hit-or-miss [(00:36:00)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2160s).
- The plugin's performance is not as good as more advanced models accessed through the [[OpenAI]] API or [[ChatGPT]], but it can still be useful for getting general information [(00:36:50)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2210s).
- The plugin may require additional setup and training to improve its performance, such as providing prompts to explain the format of the file and the meaning of certain terms [(00:37:05)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2225s).
- The plugin is good for answering factual questions, but it may struggle with understanding questions about information in the user's Vault that is not formatted in a way that is easy for the AI to understand [(00:37:41)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2261s).
- The user is experimenting with AI-powered plugins in [[Obsidian (software) | Obsidian]], including Co-Pilot, and notes that while there's room for improvement, the plugins can provide better responses with time and repetition, as well as user training on asking better questions [(00:37:52)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2272s).
- The Co-Pilot plugin is preferred for its faster indexing process, and the user appreciates its offline functionality, allowing for local use on their laptop without an internet connection [(00:38:11)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2291s).
- The user plans to set up their system to run offline for specific use cases, while using the OpenAI setup for internet-based tasks, and values the flexibility to mix and match plugins and configurations [(00:38:50)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2330s).
- The user also mentions Cannoli, another plugin with potential use cases, and plans to explore it further, providing feedback on their experience with the plugins and local models in Obsidian [(00:39:19)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2359s).
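For the single-note scenario described above, the pattern is roughly: take the open note's text and send it to the model together with the question. The plugin's internals are not shown in the video, so the sketch below is only an illustration of that pattern against the local Ollama chat API; the note file name and the "llama3" model are hypothetical examples.

```bash
# Chat with a local model about the contents of a single note: the note text is
# passed as context and the user's question follows. Rough illustration only;
# "Travel Plans.md" is a hypothetical file name.
NOTE_TEXT=$(cat "Travel Plans.md")

curl http://localhost:11434/api/chat -d "$(jq -n --arg note "$NOTE_TEXT" '{
  model: "llama3",
  messages: [
    {role: "system", content: ("Answer using only this note:\n" + $note)},
    {role: "user",   content: "Which trips are planned for this year?"}
  ],
  stream: false
}')"
```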
## Closing Remarks [(00:39:43)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2383s)
- The video provided information on [[Obsidian (software) | Obsidian]] and Ollama, free local AI-powered Personal Knowledge Management (PKM) tools [(00:39:43)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2383s).
- Viewers are encouraged to like the video and subscribe to the channel if they found the information helpful [(00:39:46)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2386s).
- The video concludes with a farewell message, wishing viewers a nice day until the next video [(00:39:49)](https://www.youtube.com/watch?v=0KttkhL7-b4&t=2389s).