Compare commits


11 Commits

| SHA1 | Message | Date |
|--|--|--|
| b30d47e351 | Add LLM info into main Readme | 2024-06-19 12:33:19 +02:00 |
| 3ce0df7eaf | Added Ollama and Ollama Web UI | 2024-06-19 12:21:05 +02:00 |
| e88e67f913 | Fix broken appsettings file (invalid JSON) | 2024-06-19 12:12:05 +02:00 |
| 5053553182 | Update dotnet.yml | 2024-06-02 19:12:45 +02:00 |
| 327ccc9675 | Update dotnet.yml | 2024-06-02 18:57:56 +02:00 |
| cbc99c2773 | Update dotnet.yml | 2024-06-02 18:55:14 +02:00 |
| d56215f685 | Update README.md | 2024-06-02 18:53:02 +02:00 |
| 967bee923a | Update dotnet.yml | 2024-06-02 18:51:38 +02:00 |
| f8e6854569 | Create readme.md | 2024-06-02 00:04:14 +02:00 |
| 03150a3d04 | Update README.md | 2024-06-01 23:53:15 +02:00 |
| 32b6e09336 | Update README.md | 2024-06-01 23:42:17 +02:00 |
6 changed files with 82 additions and 19 deletions

dotnet.yml

@@ -7,10 +7,13 @@ on:
 jobs:
   build:
-    runs-on: windows-latest
+    runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v2
+      - name: Checkout
+        uses: actions/checkout@v2
+        with:
+          fetch-depth: 0 # required for github-action-get-previous-tag
       - name: Setup .NET
         uses: actions/setup-dotnet@v1
@@ -27,15 +30,19 @@ jobs:
         run: dotnet publish ./Bot/Lunaris2.csproj --configuration Release --output ./out
       - name: Zip the build
-        run: 7z a -tzip ./out/Bot.zip ./out/*
+        run: 7z a -tzip ./out/Lunaris.zip ./out/*
-      - name: Get the tag name
-        id: get_tag
-        run: echo "::set-output name=tag::${GITHUB_REF#refs/tags/}"
+      - name: Get previous tag
+        id: previoustag
+        uses: 'WyriHaximus/github-action-get-previous-tag@v1'
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-      - name: Get the version
-        id: get_version
-        run: echo "::set-output name=version::$(date +%s).${{ github.run_id }}"
+      - name: Get next minor version
+        id: semver
+        uses: 'WyriHaximus/github-action-next-semvers@v1'
+        with:
+          version: ${{ steps.previoustag.outputs.tag }}
       - name: Create Release
         id: create_release
@@ -43,8 +50,8 @@ jobs:
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # This token is provided by Actions, you do not need to create your own token
         with:
-          tag_name: ${{ steps.get_version.outputs.version }}
-          release_name: Release v${{ steps.get_version.outputs.version }}
+          tag_name: ${{ steps.semver.outputs.patch }}
+          release_name: Release ${{ steps.semver.outputs.patch }}
           draft: false
           prerelease: false
@@ -55,6 +62,6 @@ jobs:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         with:
           upload_url: ${{ steps.create_release.outputs.upload_url }}
-          asset_path: ./out/Bot.zip
-          asset_name: Bot.zip
+          asset_path: ./out/Lunaris.zip
+          asset_name: Lunaris.zip
           asset_content_type: application/zip
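For intuition about what the new release steps compute: `github-action-get-previous-tag` reads the most recent tag, and `github-action-next-semvers` outputs the next major/minor/patch versions derived from it (the workflow tags the release with the `patch` output). A rough, illustrative C# equivalent of that patch bump; not part of the repository:

```csharp
using System;

// Illustrative equivalent of the workflow's version bump: given the previous
// tag, produce the next patch version (what steps.semver.outputs.patch holds).
static string NextPatch(string previousTag)
{
    var v = Version.Parse(previousTag.TrimStart('v'));
    return $"{v.Major}.{v.Minor}.{v.Build + 1}";
}

Console.WriteLine(NextPatch("1.4.2")); // prints 1.4.3
```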

readme.md (new file)

@@ -0,0 +1,8 @@
+## Ollama - Large Language Model Chat - Handler
+
+This handler owns the logic for accessing the Ollama API, which runs the transformer model.
+
+> To get started with a local chat bot, see: [Run LLMs Locally using Ollama](https://marccodess.medium.com/run-llms-locally-using-ollama-8f04dd9b14f9)
+
+If the Ollama server runs on a different machine than the bot, configure it to accept connections from other machines on the network; this is not needed when Ollama runs on localhost relative to the bot.
+See: [How do I configure Ollama server?](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server)
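As an illustration of what such a handler's call might look like, here is a minimal C# sketch against Ollama's `/api/generate` endpoint, using the `Url` and `Model` values from the appsettings file; the `OllamaClient` class and its method names are hypothetical, not the repository's actual handler code:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json.Serialization;
using System.Threading.Tasks;

// Hypothetical illustration of calling Ollama's /api/generate endpoint;
// not the repository's actual ChatHandler code.
public class OllamaClient
{
    private readonly HttpClient _http;

    public OllamaClient(string baseUrl) =>
        _http = new HttpClient { BaseAddress = new Uri(baseUrl) };

    public async Task<string> GenerateAsync(string model, string prompt)
    {
        // "stream": false returns the full completion in a single JSON response.
        var response = await _http.PostAsJsonAsync("/api/generate", new
        {
            model,
            prompt,
            stream = false
        });
        response.EnsureSuccessStatusCode();

        var body = await response.Content.ReadFromJsonAsync<GenerateResponse>();
        return body?.Response ?? string.Empty;
    }

    private sealed record GenerateResponse(
        [property: JsonPropertyName("response")] string Response);
}
```

With the settings shown later in this diff, a call would look like `await new OllamaClient("http://192.168.50.54:11434").GenerateAsync("gemma", "Hello!")`.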

Handler architecture documentation

@@ -7,7 +7,7 @@ flowchart TD
 EventListener[DiscordEventListener] --> A2[SlashCommandReceivedHandler]
 A --> |Message| f{If bot is mentioned}
-f --> v[ChatHandler]
+f --> |ChatCommand| v[ChatHandler]
 A2[SlashCommandReceivedHandler] -->|Message| C{Send to correct command by
 looking at commandName}
@@ -23,6 +23,11 @@ Program registers an event listener ```DiscordEventListener``` which publish a m
 await Mediator.Publish(new MessageReceivedNotification(arg), _cancellationToken);
 ```
+
+| Name | Description |
+|--|--|
+| SlashCommandReceivedHandler | Handles commands using ``/`` from any Discord Guild/Server. |
+| MessageReceivedHandler | Listens to **all** messages. |
 ## Handler integrations
 ```mermaid
 flowchart TD
@@ -35,3 +40,10 @@ flowchart TD
 o --> v
 E --> Lava[Lavalink]
 ```
+| Name | Description |
+|--|--|
+| JoinHandler | Handles the logic for **just** joining a voice channel. |
+| PlayHandler | Handles the logic for joining and playing music in a voice channel. |
+| HelloHandler | Responds with Hello. (Dummy handler, will be removed.) |
+| GoodbyeHandler | Responds with Goodbye. (Dummy handler, will be removed.) |
+| ChatHandler | Handles the logic for LLM chat with the user. |
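To make the pattern behind these tables concrete, here is a minimal sketch of a MediatR-style handler; the notification here carries only a string for brevity, whereas the real `MessageReceivedNotification` wraps the Discord message argument:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Simplified stand-in: the repository's MessageReceivedNotification wraps the
// Discord message object rather than a plain string.
public record MessageReceivedNotification(string Content) : INotification;

// Each handler in the tables above implements INotificationHandler<T>; MediatR
// invokes every registered handler when DiscordEventListener publishes the
// notification via Mediator.Publish(...).
public class MessageReceivedHandler : INotificationHandler<MessageReceivedNotification>
{
    public Task Handle(MessageReceivedNotification notification, CancellationToken cancellationToken)
    {
        System.Console.WriteLine($"Received: {notification.Content}");
        return Task.CompletedTask;
    }
}
```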

Bot/appsettings.json

@@ -9,7 +9,7 @@
     "Token": "discordToken",
     "LavaLinkPassword": "youshallnotpass",
     "LavaLinkHostname": "127.0.0.1",
-    "LavaLinkPort": 2333
+    "LavaLinkPort": 2333,
+    "LLM": {
+      "Url": "http://192.168.50.54:11434",
+      "Model": "gemma"

README.md

@@ -7,6 +7,7 @@ Lunaris2 is a Discord bot designed to play music in your server's voice channels
 - Play music from YouTube directly in your Discord server.
 - Skip tracks, pause, and resume playback.
 - Queue system to line up your favorite tracks.
+- Local LLM (AI chatbot) that responds to @mentions in Discord chat. See more about it below.
 
 ## Setup
@@ -17,6 +18,11 @@ Lunaris2 is a Discord bot designed to play music in your server's voice channels
 5. Make sure you have Docker installed, then run the file ``start-services.sh`` (requires Git Bash).
 6. Now you can start the project and run the application.
+
+## LLM
+Lunaris supports AI chat using a locally hosted large language model; Docker sets it up for you when you run the start-services script.
+The LLM runs on Ollama; read more about Ollama [here](https://ollama.com/). Running an LLM locally demands significant system resources: at least 8 GB of RAM. If you don't have enough RAM, pick a less demanding model in the [appsettings file](https://github.com/Myxelium/Lunaris2.0/blob/master/Bot/appsettings.json#L15).
+
 ## Usage
 - `/play <song>`: Plays the specified song in the voice channel you're currently in.
@@ -25,7 +31,3 @@ Lunaris2 is a Discord bot designed to play music in your server's voice channels
 ## Contributing
 Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
 
 ## License
 [MIT](https://choosealicense.com/licenses/mit/)

docker-compose.yml

@@ -24,6 +24,40 @@ services:
     ports:
       # you only need this if you want to make your lavalink accessible from outside of containers
       - "2333:2333"
+  ollama:
+    volumes:
+      - ollama:/root/.ollama
+    # comment below to not expose Ollama API outside the container stack
+    ports:
+      - 11434:11434
+    container_name: ollama
+    pull_policy: always
+    tty: true
+    restart: unless-stopped
+    image: ollama/ollama:latest
+
+  ollama-webui:
+    build:
+      context: .
+      args:
+        OLLAMA_API_BASE_URL: '/ollama/api'
+      dockerfile: Dockerfile
+    image: ollama-webui:latest
+    container_name: ollama-webui
+    depends_on:
+      - ollama
+    ports:
+      - 3000:8080
+    environment:
+      - "OLLAMA_API_BASE_URL=http://ollama:11434/api"
+    extra_hosts:
+      - host.docker.internal:host-gateway
+    restart: unless-stopped
+
+volumes:
+  ollama: {}
 
 networks:
   # create a lavalink network you can add other containers to, to give them access to Lavalink
   lavalink: