ollama

| name | ollama |
| setting | model |
| setting | host |

description

This plugin provides a very simple interface for sending a query to ollama and streaming the response back as it is generated, without leaving Amplenote.

Ollama is an app that runs large language models locally and exposes an HTTP API for text generation. Responses are streamed as they are generated, entirely on the local machine.
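For reference, the generate endpoint accepts a JSON body with a model name and prompt. The following is a minimal sketch, not part of the plugin: it assumes the default localhost:11434 host, that the llama2 model has already been pulled, and uses an illustrative helper name askOllama. It makes a one-shot, non-streaming call:

// Minimal sketch of a one-shot (non-streaming) request to Ollama's generate API.
// Assumes Ollama is listening on the default localhost:11434 and that the
// llama2 model has already been pulled.
async function askOllama(prompt) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({ model: "llama2", prompt, stream: false }),
  });
  const { response } = await res.json(); // with stream: false, "response" holds the full answer
  return response;
}

askOllama("Why is the sky blue?").then(console.log);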



Ollama must be started with OLLAMA_ORIGINS=https://plugins.amplenote.com ollama serve so that its API accepts cross-origin requests from Amplenote.
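To confirm the origin setting took effect, you can run a quick check from the browser console while on plugins.amplenote.com. This is only a diagnostic sketch assuming the default host, not part of the plugin:

// Diagnostic sketch: run in the browser console on plugins.amplenote.com.
// If OLLAMA_ORIGINS allows that origin, the request succeeds (Ollama's root
// endpoint normally returns the text "Ollama is running"); otherwise the
// browser blocks it with a CORS error.
fetch("http://localhost:11434/")
  .then(res => res.text())
  .then(text => console.log(text))
  .catch(err => console.error("Blocked or unreachable:", err));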

instructions

1. Set up ollama by following the instructions at https://ollama.ai/

2. Run ollama, allowing access from Amplenote: OLLAMA_ORIGINS=https://plugins.amplenote.com ollama serve

3. Press cmd+o on macOS, or ctrl+o on Windows/Linux, to bring up the jump-to-note dialog

4. Type "ollama" and select the ollama plugin

5. Ask away



The following settings are available:

host: defaults to localhost:11434. Change this if you are running ollama on a different host or port.

model: the name of the model to use; defaults to llama2. For more models, visit https://ollama.ai/library
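If you are not sure which model names are valid for the model setting, Ollama's tags endpoint lists every model that has been pulled locally. Below is a small sketch assuming the default host; listModels is just an illustrative name:

// Sketch: list the models available to the local Ollama instance, so you
// know what value to use for the plugin's model setting.
// Assumes the default localhost:11434 host.
async function listModels() {
  const res = await fetch("http://localhost:11434/api/tags");
  const { models } = await res.json();
  return models.map(m => m.name);
}

listModels().then(names => console.log(names.join("\n")));

The plugin code below wires these two settings into a single appOption action.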

{
  async appOption(app) {
    // Ask the user for a prompt; bail out if they cancel
    const prompt = await app.prompt("What question would you like answered?");
    if (!prompt) return;

    // Settings fall back to the defaults documented above
    const host = app.settings["host"] || "localhost:11434";
    const model = app.settings["model"] || "llama2";

    // Ollama's generate endpoint streams newline-delimited JSON objects,
    // each carrying a fragment of the answer in its "response" field
    const response = await fetch(`http://${ host }/api/generate`, {
      body: JSON.stringify({ model, prompt }),
      method: "POST",
    });
    const reader = response.body.pipeThrough(new TextDecoderStream()).getReader();

    let responseText = "";
    let buffer = "";
    while (true) {
      const { value, done } = await reader.read();
      if (done) break;

      // A chunk may contain several JSON lines, or end mid-line, so buffer
      // the text and only parse complete lines
      buffer += value;
      const lines = buffer.split("\n");
      buffer = lines.pop();
      for (const line of lines) {
        if (!line.trim()) continue;
        try {
          const { response } = JSON.parse(line);
          if (typeof response === "string") {
            responseText += response;
            // Show the partial answer while generation is still in progress
            app.alert(responseText, {
              actions: [ { icon: "pending", label: "Generating response" } ],
              scrollToEnd: true,
            });
          }
        } catch (error) {
          console.error(error);
        }
      }
    }

    // Show the completed answer once the stream ends
    app.alert(responseText, {
      actions: [ { icon: "done", label: "Response complete" } ],
      scrollToEnd: true,
    });
  },
}