README.md: 9 additions & 7 deletions
@@ -9,10 +9,10 @@ From now on with minecraft 1.20.6 minecraft/spigot etc. use Java 21!
 
 
 ## Features
-- Real-time translation of player messages.
-- Configurable translation settings.
+- Real-time translation of all player messages (longer than five characters).
+- Configurable settings.
 - Easy integration with Ollama API.
-- Support for local hosting of translation models.
+- (ONLY) Support for local hosting of translation models (for now).
 - Quick setup and minimal configuration.
 
 
@@ -47,7 +47,7 @@ ollama:
 
 cooldown:
   enabled: true
-  milliseconds: 2000
+  milliseconds: 1000
   message: §cPlease wait...
 
 translation:
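A cooldown like the one configured above is usually just a per-player timestamp check against the configured `milliseconds` value. The following is a minimal sketch, not the plugin's actual implementation; the class and field names are made up for illustration:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class ChatCooldown {
    private final long cooldownMillis;                 // e.g. cooldown.milliseconds from options.yml
    private final Map<UUID, Long> lastMessage = new ConcurrentHashMap<>();

    public ChatCooldown(long cooldownMillis) {
        this.cooldownMillis = cooldownMillis;
    }

    /** Returns true if the player must still wait; otherwise records this attempt. */
    public boolean isOnCooldown(UUID playerId) {
        long now = System.currentTimeMillis();
        Long last = lastMessage.get(playerId);
        if (last != null && now - last < cooldownMillis) {
            return true;                               // still cooling down -> show "§cPlease wait..."
        }
        lastMessage.put(playerId, now);
        return false;
    }
}
```

With `milliseconds: 1000` as in the new config, a player who chats twice within a second would get the configured `§cPlease wait...` message instead of a second translation.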
@@ -59,16 +59,18 @@ translation:
 ```
 
 ## Usage
+0. Start Ollama and download the model of your choice.
 1. Install the plugin in your Spigot Minecraft server's plugins directory.
 2. Configure the `options.yml` file according to your preferences.
 3. Restart/Reload the server to apply the changes.
 4. Players' messages will now be automatically translated as per the configured settings.
 
 
 ## Note
-The bigger the model the better the outcome, mistral showed to be very good but sometimes it is acting weird, llama3:8b (instruct-fp16) was amazing.
-Please note that LLM/SLM require (a significant amount of) memory, with a minimum of 5-8 GB for small and 15-30 GB for middle-sized models or even more.
-You don't need a 30gb (file size) model if llama3:8b for example produces a good outcome then it is alright, I tested mistral and llama3 so test it yourself.
+- ALL messages (longer than five characters) are translated, including native-language ones, so this plugin is really only for servers with a mixed-language player base.
+- The bigger the model, the better the outcome: mistral was very good but sometimes acts oddly, and llama3:8b (instruct-fp16) was very good, though still not at native-speaker level.
+- Please note that LLMs/SLMs require a significant amount of memory: at least 5-8 GB for small models and 15-30 GB (or more) for mid-sized ones.
+- You don't need a 30 GB (file size) model; if llama3:8b, for example, already produces a good result, that is enough. I tested mistral and llama3, so test it yourself.
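To make the flow above concrete, here is a rough, hypothetical sketch of the single request each chat message turns into. The plugin itself talks to Ollama through its `OllamaAPI` client field rather than raw HTTP, so this is only an illustration of the underlying call: it assumes Ollama is running locally on its default port (11434), uses `llama3:8b` as an example model, and skips proper JSON parsing.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TranslationCall {
    public static void main(String[] args) throws Exception {
        // Translation instruction plus the raw chat message, in the spirit of TRANSLATOR_PROMPT.
        String prompt = "Translate the user message to English, reply with the translation only."
                + "\n\nhallo zusammen, wie geht es euch?";

        // Single non-streaming generate request against a locally hosted Ollama instance.
        String body = """
                {"model": "llama3:8b", "prompt": "%s", "stream": false}
                """.formatted(prompt.replace("\"", "\\\"").replace("\n", "\\n"));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The translated text sits in the "response" field of the returned JSON;
        // a real implementation would use a JSON parser instead of printing raw output.
        System.out.println(response.body());
    }
}
```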
In the plugin source, the prefix and prompt constants change as well:

-    private final String TRANSLATOR_PROMPT = "Translate the original user message you get from any language to %TARGETLANGUAGE% without commenting or mentioning the source of translation. You can correct grammatical errors but dont alter the text too much and dont tell if you changed it. Avoid speaking with the user besides the translation, as everything is for someone else and not you, you focus on translating.";
+    private final String PREFIX = "&f[&9OT&f]&r ";
+    private final String TRANSLATOR_PROMPT = "Translate the user message you get from its language to %TARGETLANGUAGE% without commenting or mentioning the source of translation. You can correct grammatical errors, but don't alter the text too much and don't say whether you changed it. Avoid speaking with the user beyond the translation; everything is for someone else, not you, so focus on translating. Just translate the message: no comment, no code, no formatting, just the translation.";
 
     private Config config;
     private OllamaAPI ollamaAPI;

@@ -34,10 +40,21 @@ public void onPlayerChat(AsyncPlayerChatEvent event) {
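The body of `onPlayerChat` is not shown in this excerpt, so the following is only a hypothetical sketch of what a handler with the behaviour described in the README (translate messages longer than five characters, prefix the output) might look like. `translateAsync` is a placeholder for the actual Ollama request, and cancelling and re-broadcasting the chat event is an assumption, not the plugin's confirmed approach.

```java
import java.util.concurrent.CompletableFuture;

import org.bukkit.ChatColor;
import org.bukkit.event.EventHandler;
import org.bukkit.event.Listener;
import org.bukkit.event.player.AsyncPlayerChatEvent;

// Hypothetical sketch of a chat listener shaped like the handler above -- not the plugin's code.
public class ChatListenerSketch implements Listener {

    private final String prefix = "&f[&9OT&f]&r ";

    @EventHandler
    public void onPlayerChat(AsyncPlayerChatEvent event) {
        String original = event.getMessage();

        // Only messages longer than five characters are translated (see the README note).
        if (original.length() <= 5) {
            return;
        }

        // Assume the plugin cancels the vanilla broadcast and re-sends the translated text.
        event.setCancelled(true);
        translateAsync(original).thenAccept(translated ->
                event.getPlayer().getServer().broadcastMessage(
                        ChatColor.translateAlternateColorCodes('&', prefix)
                                + event.getPlayer().getName() + ": " + translated));
    }

    // Placeholder: the real plugin would call Ollama here (see the HTTP sketch earlier).
    private CompletableFuture<String> translateAsync(String message) {
        return CompletableFuture.completedFuture(message);
    }
}
```

Keeping the translation off the main server thread matters here: a blocking call to a locally hosted model can take several seconds per message.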